Compare commits

...

12 Commits

Author SHA1 Message Date
Frederik Ring
24796928e9 Add test case as per #187 2023-02-10 19:16:23 +01:00
Frederik Ring
81e36289c1 Try deleting file in post hook to ensure correct order 2023-02-10 18:39:40 +01:00
Frederik Ring
2d37e08743 Use go 1.20, join errors using stdlib (#182)
* Use go 1.20, join errors using stdlib

* Use go 1.20 proper
2023-02-02 21:07:25 +01:00
Frederik Ring
1e36bd3eb7 Non-streaming upload to WebDAV fails on big files (#181) 2023-01-16 08:28:29 +01:00
Frederik Ring
e93a74dd48 Instructions in issue templates are not supposed to be shown after submission 2023-01-12 18:02:46 +01:00
Frederik Ring
f799e6c2e9 Azure Blob Storage is missing from headline in README 2023-01-11 21:54:50 +01:00
Frederik Ring
5c04e11f10 Add support for Azure Blob Storage (#171)
* Scaffold Azure storage backend that does nothing yet

* Implement copy for Azure Blob Storage

* Set up automated testing for Azure Storage

* Implement pruning for Azure blob storage

* Add documentation for Azure Blob Storage

* Add support for remote path

* Add azure to notifications doc

* Tidy go.mod file

* Allow use of managed identity credential

* Use volume in tests

* Auto append trailing slash to endpoint if needed, clarify docs, tidy mod file
2023-01-11 21:40:48 +01:00
Frederik Ring
aadbaa741d Update intro in README 2023-01-08 09:39:32 +01:00
Frederik Ring
9b7af67a26 Run tests using compose v2 (#178) 2023-01-07 18:50:27 +01:00
Frederik Ring
1cb4883458 Update alpine base image to 3.17 (#177) 2023-01-05 19:46:47 +01:00
Frederik Ring
982f4fe191 Fix mistake in README 2022-12-30 16:16:46 +01:00
Frederik Ring
63961cd826 Pass file location to lifecycle commands (#173)
* Add test case for extending image and calling through to rsync

* Keep backup file location env var

* Add documentation

* Work against untared content in test
2022-12-30 16:07:34 +01:00
38 changed files with 644 additions and 143 deletions

@@ -8,7 +8,9 @@ assignees: ''
 ---
 **Describe the bug**
+<!--
 A clear and concise description of what the bug is.
+-->
 **To Reproduce**
 Steps to reproduce the behavior:
@@ -17,12 +19,16 @@ Steps to reproduce the behavior:
 3. ...
 **Expected behavior**
+<!--
 A clear and concise description of what you expected to happen.
+-->
-**Desktop (please complete the following information):**
-- Image Version: [e.g. v2.21.0]
-- Docker Version: [e.g. 20.10.17]
-- Docker Compose Version (if applicable): [e.g. 1.29.2]
+**Version (please complete the following information):**
+- Image Version: <!-- e.g. v2.21.0 -->
+- Docker Version: <!-- e.g. 20.10.17 -->
+- Docker Compose Version (if applicable): <!-- e.g. 1.29.2 -->
 **Additional context**
+<!--
 Add any other context about the problem here.
+-->

@@ -8,13 +8,21 @@ assignees: ''
 ---
 **Is your feature request related to a problem? Please describe.**
+<!--
 A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
+-->
 **Describe the solution you'd like**
+<!--
 A clear and concise description of what you want to happen.
+-->
 **Describe alternatives you've considered**
+<!--
 A clear and concise description of any alternative solutions or features you've considered.
+-->
 **Additional context**
+<!--
 Add any other context or screenshots about the feature request here.
+-->

@@ -8,13 +8,21 @@ assignees: ''
 ---
 **What are you trying to do?**
+<!--
 A clear and concise description of what you are trying to do, but cannot get working.
+-->
 **What is your current configuration?**
+<!--
 Add the full configuration you are using. Please redact out any real-world credentials.
+-->
 **Log output**
+<!--
 Provide the full log output of your setup.
+-->
 **Additional context**
+<!--
 Add any other context or screenshots about the support request here.
+-->

@@ -1,7 +1,7 @@
 # Copyright 2021 - Offen Authors <hioffen@posteo.de>
 # SPDX-License-Identifier: MPL-2.0
-FROM golang:1.19-alpine as builder
+FROM golang:1.20-alpine as builder
 WORKDIR /app
 COPY . .
@@ -9,7 +9,7 @@ RUN go mod download
 WORKDIR /app/cmd/backup
 RUN go build -o backup .
-FROM alpine:3.16
+FROM alpine:3.17
 WORKDIR /root

@@ -4,10 +4,10 @@
 # docker-volume-backup
-Backup Docker volumes locally or to any S3 compatible storage.
+Backup Docker volumes locally or to any S3, WebDAV, Azure Blob Storage or SSH compatible storage.
 The [offen/docker-volume-backup](https://hub.docker.com/r/offen/docker-volume-backup) Docker image can be used as a lightweight (below 15MB) sidecar container to an existing Docker setup.
-It handles __recurring or one-off backups of Docker volumes__ to a __local directory__, __any S3, WebDAV or SSH compatible storage (or any combination) and rotates away old backups__ if configured. It also supports __encrypting your backups using GPG__ and __sending notifications for failed backup runs__.
+It handles __recurring or one-off backups of Docker volumes__ to a __local directory__, __any S3, WebDAV, Azure Blob Storage or SSH compatible storage (or any combination) and rotates away old backups__ if configured. It also supports __encrypting your backups using GPG__ and __sending notifications for failed backup runs__.
 <!-- MarkdownTOC -->
@@ -33,6 +33,7 @@ It handles __recurring or one-off backups of Docker volumes__ to a __local direc
 - [Run multiple backup schedules in the same container](#run-multiple-backup-schedules-in-the-same-container)
 - [Define different retention schedules](#define-different-retention-schedules)
 - [Use special characters in notification URLs](#use-special-characters-in-notification-urls)
+- [Handle file uploads using third party tools](#handle-file-uploads-using-third-party-tools)
 - [Recipes](#recipes)
 - [Backing up to AWS S3](#backing-up-to-aws-s3)
 - [Backing up to Filebase](#backing-up-to-filebase)
@@ -40,6 +41,7 @@ It handles __recurring or one-off backups of Docker volumes__ to a __local direc
 - [Backing up to MinIO \(using Docker secrets\)](#backing-up-to-minio-using-docker-secrets)
 - [Backing up to WebDAV](#backing-up-to-webdav)
 - [Backing up to SSH](#backing-up-to-ssh)
+- [Backing up to Azure Blob Storage](#backing-up-to-azure-blob-storage)
 - [Backing up locally](#backing-up-locally)
 - [Backing up to AWS S3 as well as locally](#backing-up-to-aws-s3-as-well-as-locally)
 - [Running on a custom cron schedule](#running-on-a-custom-cron-schedule)
@@ -303,6 +305,25 @@ You can populate below template according to your requirements and use it as you
 # SSH_IDENTITY_PASSPHRASE="pass"
+# The credential's account name when using Azure Blob Storage. This has to be
+# set when using Azure Blob Storage.
+# AZURE_STORAGE_ACCOUNT_NAME="account-name"
+# The credential's primary account key when using Azure Blob Storage. If this
+# is not given, the command tries to fall back to using a managed identity.
+# AZURE_STORAGE_PRIMARY_ACCOUNT_KEY="<xxx>"
+# The container name when using Azure Blob Storage.
+# AZURE_STORAGE_CONTAINER_NAME="container-name"
+# The service endpoint when using Azure Blob Storage. This is a template that
+# can be passed the account name as shown in the default value below.
+# AZURE_STORAGE_ENDPOINT="https://{{ .AccountName }}.blob.core.windows.net/"
 # In addition to storing backups remotely, you can also keep local copies.
 # Pass a container-local path to store your backups if needed. You also need to
 # mount a local folder or Docker volume into that location (`/archive`
@@ -894,6 +915,44 @@ where service is any of the [supported services][shoutrrr-docs], e.g. for SMTP:
 docker run --rm -ti containrrr/shoutrrr generate smtp
 ```
+### Handle file uploads using third party tools
+If you want to use a non-supported storage backend, or want to use a third party tool (e.g. rsync, rclone) for file uploads, you can build a Docker image containing the required binaries off this one, and call through to these binaries in lifecycle hooks.
+For example, if you wanted to use `rsync`, define your Docker image like this:
+```Dockerfile
+FROM offen/docker-volume-backup:v2
+RUN apk add rsync
+```
+Using this image, you can now omit configuring any of the supported storage backends, and instead define your own mechanism in a `docker-volume-backup.copy-post` label:
+```yml
+version: '3'
+services:
+  backup:
+    image: your-custom-image
+    restart: always
+    environment:
+      BACKUP_FILENAME: "daily-backup-%Y-%m-%dT%H-%M-%S.tar.gz"
+      BACKUP_CRON_EXPRESSION: "0 2 * * *"
+    labels:
+      - docker-volume-backup.copy-post=/bin/sh -c 'rsync $$COMMAND_RUNTIME_ARCHIVE_FILEPATH /destination'
+    volumes:
+      - app_data:/backup/app_data:ro
+      - /var/run/docker.sock:/var/run/docker.sock
+  # other services defined here ...
+volumes:
+  app_data:
+```
+Commands will be invoked with the filepath of the tar archive passed as `COMMAND_RUNTIME_ARCHIVE_FILEPATH`.
 ## Recipes
 This section lists configuration for some real-world use cases that you can mix and match according to your needs.
@@ -1040,6 +1099,27 @@ volumes:
   data:
 ```
+### Backing up to Azure Blob Storage
+```yml
+version: '3'
+services:
+  # ... define other services using the `data` volume here
+  backup:
+    image: offen/docker-volume-backup:v2
+    environment:
+      AZURE_STORAGE_CONTAINER_NAME: backup-container
+      AZURE_STORAGE_ACCOUNT_NAME: account-name
+      AZURE_STORAGE_PRIMARY_ACCOUNT_KEY: Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==
+    volumes:
+      - data:/backup/my-app-backup:ro
+      - /var/run/docker.sock:/var/run/docker.sock:ro
+volumes:
+  data:
+```
 ### Backing up locally
 ```yml
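The `copy-post` contract introduced above is deliberately small: a hook only needs to read the archive location from the `COMMAND_RUNTIME_ARCHIVE_FILEPATH` variable set by the exec wiring in this changeset. As a minimal sketch, a custom upload hook could also be written in Go; the `/destination` directory here is a hypothetical stand-in for whatever target rsync or rclone would otherwise handle:

```go
package main

import (
	"fmt"
	"io"
	"os"
	"path/filepath"
)

// Sketch of a custom upload hook: it copies the tar archive whose location
// is announced via COMMAND_RUNTIME_ARCHIVE_FILEPATH to /destination.
func main() {
	src := os.Getenv("COMMAND_RUNTIME_ARCHIVE_FILEPATH")
	if src == "" {
		fmt.Fprintln(os.Stderr, "COMMAND_RUNTIME_ARCHIVE_FILEPATH is not set")
		os.Exit(1)
	}
	in, err := os.Open(src)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer in.Close()
	out, err := os.Create(filepath.Join("/destination", filepath.Base(src)))
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer out.Close()
	// stream the archive instead of buffering it in memory
	if _, err := io.Copy(out, in); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```

Compiled into a derived image, such a binary could be referenced from a `docker-volume-backup.copy-post` label in place of the `rsync` call shown above.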

@@ -63,6 +63,11 @@ type Config struct {
 	ExecLabel         string        `split_words:"true"`
 	ExecForwardOutput bool          `split_words:"true"`
 	LockTimeout       time.Duration `split_words:"true" default:"60m"`
+	AzureStorageAccountName       string `split_words:"true"`
+	AzureStoragePrimaryAccountKey string `split_words:"true"`
+	AzureStorageContainerName     string `split_words:"true"`
+	AzureStoragePath              string `split_words:"true"`
+	AzureStorageEndpoint          string `split_words:"true" default:"https://{{ .AccountName }}.blob.core.windows.net/"`
 }
 func (c *Config) resolveSecret(envVar string, secretPath string) (string, error) {

@@ -23,10 +23,14 @@ import (
 func (s *script) exec(containerRef string, command string) ([]byte, []byte, error) {
 	args, _ := argv.Argv(command, nil, nil)
+	commandEnv := []string{
+		fmt.Sprintf("COMMAND_RUNTIME_ARCHIVE_FILEPATH=%s", s.file),
+	}
 	execID, err := s.cli.ContainerExecCreate(context.Background(), containerRef, types.ExecConfig{
 		Cmd:          args[0],
 		AttachStdin:  true,
 		AttachStderr: true,
+		Env:          commandEnv,
 	})
 	if err != nil {
 		return nil, nil, fmt.Errorf("exec: error creating container exec: %w", err)

@@ -4,10 +4,9 @@
 package main
 import (
+	"errors"
 	"fmt"
 	"sort"
-	"github.com/offen/docker-volume-backup/internal/utilities"
 )
 // hook contains a queued action that can be triggered when the script
@@ -52,7 +51,7 @@ func (s *script) runHooks(err error) error {
 		}
 	}
 	if len(actionErrors) != 0 {
-		return utilities.Join(actionErrors...)
+		return errors.Join(actionErrors...)
 	}
 	return nil
 }
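For context, a minimal sketch of the go 1.20 stdlib `errors.Join` that replaces the removed `internal/utilities.Join` helper throughout this changeset: unlike the old helper, which flattened all messages into a single string, the stdlib version keeps the wrapped errors inspectable:

```go
package main

import (
	"errors"
	"fmt"
)

func main() {
	errA := errors.New("hook a failed")
	errB := errors.New("hook b failed")

	// errors.Join (new in go 1.20) wraps both errors in one value
	joined := errors.Join(errA, errB)
	fmt.Println(joined)                  // prints both messages, newline-separated
	fmt.Println(errors.Is(joined, errA)) // true: wrapped errors remain matchable
}
```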

@@ -6,13 +6,13 @@ package main
 import (
 	"bytes"
 	_ "embed"
+	"errors"
 	"fmt"
 	"os"
 	"text/template"
 	"time"
 	sTypes "github.com/containrrr/shoutrrr/pkg/types"
-	"github.com/offen/docker-volume-backup/internal/utilities"
 )
 //go:embed notifications.tmpl
@@ -69,7 +69,7 @@ func (s *script) sendNotification(title, body string) error {
 		}
 	}
 	if len(errs) != 0 {
-		return fmt.Errorf("sendNotification: error sending message: %w", utilities.Join(errs...))
+		return fmt.Errorf("sendNotification: error sending message: %w", errors.Join(errs...))
 	}
 	return nil
 }

@@ -5,6 +5,7 @@ package main
 import (
 	"context"
+	"errors"
 	"fmt"
 	"io"
 	"io/fs"
@@ -15,11 +16,11 @@ import (
 	"time"
 	"github.com/offen/docker-volume-backup/internal/storage"
+	"github.com/offen/docker-volume-backup/internal/storage/azure"
 	"github.com/offen/docker-volume-backup/internal/storage/local"
 	"github.com/offen/docker-volume-backup/internal/storage/s3"
 	"github.com/offen/docker-volume-backup/internal/storage/ssh"
 	"github.com/offen/docker-volume-backup/internal/storage/webdav"
-	"github.com/offen/docker-volume-backup/internal/utilities"
 	"github.com/containrrr/shoutrrr"
 	"github.com/containrrr/shoutrrr/pkg/router"
@@ -76,6 +77,7 @@ func newScript() (*script, error) {
 			"WebDAV": {},
 			"SSH":    {},
 			"Local":  {},
+			"Azure":  {},
 		},
 	},
 }
@@ -189,6 +191,21 @@ func newScript() (*script, error) {
 		s.storages = append(s.storages, localBackend)
 	}
+	if s.c.AzureStorageAccountName != "" {
+		azureConfig := azure.Config{
+			ContainerName:     s.c.AzureStorageContainerName,
+			AccountName:       s.c.AzureStorageAccountName,
+			PrimaryAccountKey: s.c.AzureStoragePrimaryAccountKey,
+			Endpoint:          s.c.AzureStorageEndpoint,
+			RemotePath:        s.c.AzureStoragePath,
+		}
+		azureBackend, err := azure.NewStorageBackend(azureConfig, logFunc)
+		if err != nil {
+			return nil, err
+		}
+		s.storages = append(s.storages, azureBackend)
+	}
 	if s.c.EmailNotificationRecipient != "" {
 		emailURL := fmt.Sprintf(
 			"smtp://%s:%s@%s:%d/?from=%s&to=%s",
@@ -312,7 +329,7 @@ func (s *script) stopContainers() (func() error, error) {
 		stopError = fmt.Errorf(
 			"stopContainers: %d error(s) stopping containers: %w",
 			len(stopErrors),
-			utilities.Join(stopErrors...),
+			errors.Join(stopErrors...),
 		)
 	}
@@ -363,7 +380,7 @@ func (s *script) stopContainers() (func() error, error) {
 		return fmt.Errorf(
 			"stopContainers: %d error(s) restarting containers and services: %w",
 			len(restartErrors),
-			utilities.Join(restartErrors...),
+			errors.Join(restartErrors...),
 		)
 	}
 	s.logger.Infof(

@@ -25,7 +25,7 @@ Here is a list of all data passed to the template:
 * `FullPath`: full path of the backup file (e.g. `/archive/backup-2022-02-11T01-00-00.tar.gz`)
 * `Size`: size in bytes of the backup file
 * `Storages`: object that holds stats about each storage
-  * `Local`, `S3`, `WebDAV` or `SSH`:
+  * `Local`, `S3`, `WebDAV`, `Azure` or `SSH`:
     * `Total`: total number of backup files
     * `Pruned`: number of backup files that were deleted due to pruning rule
     * `PruneErrors`: number of backup files that were unable to be pruned
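To make that shape concrete, a hypothetical sketch of rendering such data with `text/template`; the wrapper types here are illustrative only and do not claim to match the repository's actual template data types:

```go
package main

import (
	"os"
	"text/template"
)

// Illustrative stand-ins for the documented template data layout.
type storageStats struct{ Total, Pruned, PruneErrors uint }

type templateData struct {
	FullPath string
	Size     uint64
	Storages map[string]storageStats
}

func main() {
	tmpl := template.Must(template.New("body").Parse(
		"Backed up {{ .FullPath }} ({{ .Size }} bytes); " +
			"Azure pruned {{ with index .Storages \"Azure\" }}{{ .Pruned }} of {{ .Total }}{{ end }}\n"))
	data := templateData{
		FullPath: "/archive/backup-2022-02-11T01-00-00.tar.gz",
		Size:     1024,
		Storages: map[string]storageStats{"Azure": {Total: 3, Pruned: 1}},
	}
	if err := tmpl.Execute(os.Stdout, data); err != nil {
		panic(err)
	}
}
```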

go.mod

@@ -3,6 +3,8 @@ module github.com/offen/docker-volume-backup
 go 1.19
 require (
+	github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.2.0
+	github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v0.6.1
 	github.com/containrrr/shoutrrr v0.5.2
 	github.com/cosiner/argv v0.1.0
 	github.com/docker/docker v20.10.11+incompatible
@@ -19,6 +21,9 @@ require (
 )
 require (
+	github.com/Azure/azure-sdk-for-go/sdk/azcore v1.1.4 // indirect
+	github.com/Azure/azure-sdk-for-go/sdk/internal v1.0.1 // indirect
+	github.com/AzureAD/microsoft-authentication-library-for-go v0.7.0 // indirect
 	github.com/Microsoft/go-winio v0.5.2 // indirect
 	github.com/containerd/containerd v1.6.6 // indirect
 	github.com/docker/distribution v2.7.1+incompatible // indirect
@@ -28,6 +33,7 @@ require (
 	github.com/fatih/color v1.10.0 // indirect
 	github.com/fsnotify/fsnotify v1.4.9 // indirect
 	github.com/gogo/protobuf v1.3.2 // indirect
+	github.com/golang-jwt/jwt/v4 v4.4.2 // indirect
 	github.com/golang/protobuf v1.5.2 // indirect
 	github.com/google/uuid v1.3.0 // indirect
 	github.com/gorilla/mux v1.7.3 // indirect
@@ -36,6 +42,7 @@ require (
 	github.com/klauspost/cpuid/v2 v2.2.1 // indirect
 	github.com/kr/fs v0.1.0 // indirect
 	github.com/kr/text v0.2.0 // indirect
+	github.com/kylelemons/godebug v1.1.0 // indirect
 	github.com/mattn/go-colorable v0.1.8 // indirect
 	github.com/mattn/go-isatty v0.0.12 // indirect
 	github.com/minio/md5-simd v1.1.2 // indirect
@@ -50,6 +57,7 @@ require (
 	github.com/onsi/gomega v1.10.3 // indirect
 	github.com/opencontainers/go-digest v1.0.0 // indirect
 	github.com/opencontainers/image-spec v1.0.3-0.20211202183452-c5a74bcca799 // indirect
+	github.com/pkg/browser v0.0.0-20210115035449-ce105d075bb4 // indirect
 	github.com/pkg/errors v0.9.1 // indirect
 	github.com/rs/xid v1.4.0 // indirect
 	golang.org/x/net v0.2.0 // indirect

go.sum

@@ -1,7 +1,17 @@
 cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
 cloud.google.com/go v0.34.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
+github.com/Azure/azure-sdk-for-go/sdk/azcore v1.1.4 h1:pqrAR74b6EoR4kcxF7L7Wg2B8Jgil9UUZtMvxhEFqWo=
+github.com/Azure/azure-sdk-for-go/sdk/azcore v1.1.4/go.mod h1:uGG2W01BaETf0Ozp+QxxKJdMBNRWPdstHG0Fmdwn1/U=
+github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.2.0 h1:t/W5MYAuQy81cvM8VUNfRLzhtKpXhVUAN7Cd7KVbTyc=
+github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.2.0/go.mod h1:NBanQUfSWiWn3QEpWDTCU0IjBECKOYvl2R8xdRtMtiM=
+github.com/Azure/azure-sdk-for-go/sdk/internal v1.0.1 h1:XUNQ4mw+zJmaA2KXzP9JlQiecy1SI+Eog7xVkPiqIbg=
+github.com/Azure/azure-sdk-for-go/sdk/internal v1.0.1/go.mod h1:eWRD7oawr1Mu1sLCawqVc0CUiF43ia3qQMxLscsKQ9w=
+github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v0.6.1 h1:YvQv9Mz6T8oR5ypQOL6erY0Z5t71ak1uHV4QFokCOZk=
+github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v0.6.1/go.mod h1:c6WvOhtmjNUWbLfOG1qxM/q0SPvQNSVJvolm+C52dIU=
 github.com/Azure/go-ansiterm v0.0.0-20170929234023-d6e3b3328b78 h1:w+iIsaOQNcT7OZ575w+acHgRric5iCyQh+xv+KJ4HB8=
 github.com/Azure/go-ansiterm v0.0.0-20170929234023-d6e3b3328b78/go.mod h1:LmzpDX56iTiv29bbRTIsUNlaFfuhWRQBWjQdVyAevI8=
+github.com/AzureAD/microsoft-authentication-library-for-go v0.7.0 h1:VgSJlZH5u0k2qxSpqyghcFQKmvYckj46uymKK5XzkBM=
+github.com/AzureAD/microsoft-authentication-library-for-go v0.7.0/go.mod h1:BDJ5qMFKx9DugEg3+uQSDCdbYPr5s9vBTrL9P8TpqOU=
 github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
 github.com/Microsoft/go-winio v0.5.2 h1:a9IhgEQBCUEk6QCdml9CiJGhAws+YwffDHEMp1VMrpA=
 github.com/Microsoft/go-winio v0.5.2/go.mod h1:WpS1mjBmmwHBEWmogvA2mj8546UReBk4v8QkMxJ6pZY=
@@ -48,6 +58,7 @@ github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c
 github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
 github.com/dgrijalva/jwt-go v3.2.0+incompatible/go.mod h1:E3ru+11k8xSBh+hMPgOLZmtrrCbhqsmaPHjLKYnJCaQ=
 github.com/dgryski/go-sip13 v0.0.0-20181026042036-e10d5fee7954/go.mod h1:vAd38F8PWV+bWy6jNmig1y/TA+kYO4g3RSRF0IAv0no=
+github.com/dnaeon/go-vcr v1.1.0 h1:ReYa/UBrRyQdant9B4fNHGoCNKw6qh6P0fsdGmZpR7c=
 github.com/docker/distribution v2.7.1+incompatible h1:a5mlkVzth6W5A4fOsS3D2EO5BUmsJpcB+cRlLU7cSug=
 github.com/docker/distribution v2.7.1+incompatible/go.mod h1:J2gT2udsDAN96Uj4KfcMRqY0/ypR+oyYUYmja8H+y+w=
 github.com/docker/docker v20.10.11+incompatible h1:OqzI/g/W54LczvhnccGqniFoQghHx3pklbLuhfXpqGo=
@@ -94,6 +105,8 @@ github.com/gogo/protobuf v1.1.1/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7a
 github.com/gogo/protobuf v1.2.1/go.mod h1:hp+jE20tsWTFYpLwKvXlhS1hjn+gTNwPg2I6zVXpSg4=
 github.com/gogo/protobuf v1.3.2 h1:Ov1cvc58UF3b5XjBnZv7+opcTcQFZebYjWzi34vdm4Q=
 github.com/gogo/protobuf v1.3.2/go.mod h1:P1XiOD3dCwIKUDQYPy72D8LYyHL2YPYrpS2s69NZV8Q=
+github.com/golang-jwt/jwt/v4 v4.4.2 h1:rcc4lwaZgFMCZ5jxF9ABolDcIHdBytAFgqFPbSJQAYs=
+github.com/golang-jwt/jwt/v4 v4.4.2/go.mod h1:m21LjoU+eqJr34lmDMbreY2eSTRJ1cv77w39/MY0Ch0=
 github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q=
 github.com/golang/groupcache v0.0.0-20190129154638-5b532d6fd5ef/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
 github.com/golang/mock v1.1.1/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
@@ -174,6 +187,8 @@ github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
 github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
 github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
 github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
+github.com/kylelemons/godebug v1.1.0 h1:RPNrshWIDI6G2gRW9EHilWtl7Z6Sb1BR0xunSBf0SNc=
+github.com/kylelemons/godebug v1.1.0/go.mod h1:9/0rRGxNHcop5bhtWyNeEfOS8JIWk580+fNqagV/RAw=
 github.com/leekchan/timeutil v0.0.0-20150802142658-28917288c48d h1:2puqoOQwi3Ai1oznMOsFIbifm6kIfJaLLyYzWD4IzTs=
 github.com/leekchan/timeutil v0.0.0-20150802142658-28917288c48d/go.mod h1:hO90vCP2x3exaSH58BIAowSKvV+0OsY21TtzuFGHON4=
 github.com/leodido/go-urn v1.2.0/go.mod h1:+8+nEpDfqqsY+g338gtMEUOtuK+4dEMhiQEgxpxOKII=
@@ -244,6 +259,8 @@ github.com/otiai10/mint v1.3.3 h1:7JgpsBaN0uMkyju4tbYHu0mnM55hNKVYLsXmwr15NQI=
 github.com/otiai10/mint v1.3.3/go.mod h1:/yxELlJQ0ufhjUwhshSj+wFjZ78CnZ48/1wtmBH1OTc=
 github.com/pelletier/go-toml v1.2.0/go.mod h1:5z9KED0ma1S8pY6P1sdut58dfprrGBbd/94hg7ilaic=
 github.com/pelletier/go-toml v1.7.0/go.mod h1:vwGMzjaWMwyfHwgIBhI2YUM4fB6nL6lVAvS1LBMMhTE=
+github.com/pkg/browser v0.0.0-20210115035449-ce105d075bb4 h1:Qj1ukM4GlMWXNdMBuXcXfz/Kw9s1qm0CLY32QxuSImI=
+github.com/pkg/browser v0.0.0-20210115035449-ce105d075bb4/go.mod h1:N6UoU20jOqggOuDwUaBQpluzLNDqif3kq9z2wpdYEfQ=
 github.com/pkg/errors v0.8.0/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
 github.com/pkg/errors v0.8.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
 github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
@@ -296,8 +313,8 @@ github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXf
 github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
 github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4=
 github.com/stretchr/testify v1.5.1/go.mod h1:5W2xD1RspED5o8YsWQXVCued0rvSQ+mT+I5cxcmMvtA=
-github.com/stretchr/testify v1.7.0 h1:nwc3DEeHmmLAfoZucVR881uASk0Mfjw8xYJ99tb5CcY=
 github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
+github.com/stretchr/testify v1.7.1 h1:5TQK59W5E3v0r2duFAb7P95B6hEeOyEnHRa8MjYSMTY=
 github.com/studio-b12/gowebdav v0.0.0-20220128162035-c7b1ff8a5e62 h1:b2nJXyPCa9HY7giGM+kYcnQ71m14JnGdQabMPmyt++8=
 github.com/studio-b12/gowebdav v0.0.0-20220128162035-c7b1ff8a5e62/go.mod h1:bHA7t77X/QFExdeAnDzK6vKM34kEZAcE1OX4MfiwjkE=
 github.com/subosito/gotenv v1.2.0/go.mod h1:N0PQaV/YGNqwC0u51sEeR/aUtSLEXKX9iv69rRypqCw=

@@ -0,0 +1,160 @@
// Copyright 2022 - Offen Authors <hioffen@posteo.de>
// SPDX-License-Identifier: MPL-2.0

package azure

import (
	"bytes"
	"context"
	"errors"
	"fmt"
	"os"
	"path/filepath"
	"strings"
	"sync"
	"text/template"
	"time"

	"github.com/Azure/azure-sdk-for-go/sdk/azidentity"
	"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob"
	"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/container"
	"github.com/offen/docker-volume-backup/internal/storage"
)

type azureBlobStorage struct {
	*storage.StorageBackend
	client        *azblob.Client
	containerName string
}

// Config contains values that define the configuration of an Azure Blob Storage.
type Config struct {
	AccountName       string
	ContainerName     string
	PrimaryAccountKey string
	Endpoint          string
	RemotePath        string
}

// NewStorageBackend creates and initializes a new Azure Blob Storage backend.
func NewStorageBackend(opts Config, logFunc storage.Log) (storage.Backend, error) {
	endpointTemplate, err := template.New("endpoint").Parse(opts.Endpoint)
	if err != nil {
		return nil, fmt.Errorf("NewStorageBackend: error parsing endpoint template: %w", err)
	}
	var ep bytes.Buffer
	if err := endpointTemplate.Execute(&ep, opts); err != nil {
		return nil, fmt.Errorf("NewStorageBackend: error executing endpoint template: %w", err)
	}
	normalizedEndpoint := fmt.Sprintf("%s/", strings.TrimSuffix(ep.String(), "/"))

	var client *azblob.Client
	if opts.PrimaryAccountKey != "" {
		cred, err := azblob.NewSharedKeyCredential(opts.AccountName, opts.PrimaryAccountKey)
		if err != nil {
			return nil, fmt.Errorf("NewStorageBackend: error creating shared key Azure credential: %w", err)
		}
		client, err = azblob.NewClientWithSharedKeyCredential(normalizedEndpoint, cred, nil)
		if err != nil {
			return nil, fmt.Errorf("NewStorageBackend: error creating Azure client: %w", err)
		}
	} else {
		cred, err := azidentity.NewManagedIdentityCredential(nil)
		if err != nil {
			return nil, fmt.Errorf("NewStorageBackend: error creating managed identity credential: %w", err)
		}
		client, err = azblob.NewClient(normalizedEndpoint, cred, nil)
		if err != nil {
			return nil, fmt.Errorf("NewStorageBackend: error creating Azure client: %w", err)
		}
	}

	storage := azureBlobStorage{
		client:        client,
		containerName: opts.ContainerName,
		StorageBackend: &storage.StorageBackend{
			DestinationPath: opts.RemotePath,
			Log:             logFunc,
		},
	}
	return &storage, nil
}

// Name returns the name of the storage backend
func (b *azureBlobStorage) Name() string {
	return "Azure"
}

// Copy copies the given file to the storage backend.
func (b *azureBlobStorage) Copy(file string) error {
	fileReader, err := os.Open(file)
	if err != nil {
		return fmt.Errorf("(*azureBlobStorage).Copy: error opening file %s: %w", file, err)
	}
	_, err = b.client.UploadStream(
		context.Background(),
		b.containerName,
		filepath.Join(b.DestinationPath, filepath.Base(file)),
		fileReader,
		nil,
	)
	if err != nil {
		return fmt.Errorf("(*azureBlobStorage).Copy: error uploading file %s: %w", file, err)
	}
	return nil
}

// Prune rotates away backups according to the configuration and provided
// deadline for the Azure Blob storage backend.
func (b *azureBlobStorage) Prune(deadline time.Time, pruningPrefix string) (*storage.PruneStats, error) {
	lookupPrefix := filepath.Join(b.DestinationPath, pruningPrefix)
	pager := b.client.NewListBlobsFlatPager(b.containerName, &container.ListBlobsFlatOptions{
		Prefix: &lookupPrefix,
	})
	var matches []string
	var totalCount uint
	for pager.More() {
		resp, err := pager.NextPage(context.Background())
		if err != nil {
			return nil, fmt.Errorf("(*azureBlobStorage).Prune: error paging over blobs: %w", err)
		}
		for _, v := range resp.Segment.BlobItems {
			totalCount++
			if v.Properties.LastModified.Before(deadline) {
				matches = append(matches, *v.Name)
			}
		}
	}

	stats := storage.PruneStats{
		Total:  totalCount,
		Pruned: uint(len(matches)),
	}

	if err := b.DoPrune(b.Name(), len(matches), int(totalCount), "Azure Blob Storage backup(s)", func() error {
		wg := sync.WaitGroup{}
		wg.Add(len(matches))
		var mu sync.Mutex
		var errs []error
		for _, match := range matches {
			name := match
			go func() {
				defer wg.Done()
				if _, err := b.client.DeleteBlob(context.Background(), b.containerName, name, nil); err != nil {
					// mutex guards concurrent appends from the delete goroutines
					mu.Lock()
					errs = append(errs, err)
					mu.Unlock()
				}
			}()
		}
		wg.Wait()
		if len(errs) != 0 {
			return errors.Join(errs...)
		}
		return nil
	}); err != nil {
		return &stats, err
	}
	return &stats, nil
}
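The endpoint handling in `NewStorageBackend` above is the part the test setup further below relies on: the configured endpoint is a `text/template` that may reference `.AccountName`, and exactly one trailing slash is enforced. A self-contained sketch of just that normalization step, using the Azurite endpoint from the test configuration:

```go
package main

import (
	"bytes"
	"fmt"
	"strings"
	"text/template"
)

func main() {
	opts := struct{ AccountName string }{AccountName: "devstoreaccount1"}
	// the endpoint value is parsed as a text/template, mirroring NewStorageBackend
	tmpl := template.Must(template.New("endpoint").Parse("http://storage:10000/{{ .AccountName }}"))
	var ep bytes.Buffer
	if err := tmpl.Execute(&ep, opts); err != nil {
		panic(err)
	}
	// a single trailing slash is enforced on the rendered endpoint
	fmt.Printf("%s/\n", strings.TrimSuffix(ep.String(), "/")) // http://storage:10000/devstoreaccount1/
}
```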

@@ -4,6 +4,7 @@
 package local
 import (
+	"errors"
 	"fmt"
 	"io"
 	"os"
@@ -12,7 +13,6 @@ import (
 	"time"
 	"github.com/offen/docker-volume-backup/internal/storage"
-	"github.com/offen/docker-volume-backup/internal/utilities"
 )
 type localStorage struct {
@@ -127,7 +127,7 @@ func (b *localStorage) Prune(deadline time.Time, pruningPrefix string) (*storage
 			return fmt.Errorf(
 				"(*localStorage).Prune: %d error(s) deleting local files, starting with: %w",
 				len(removeErrors),
-				utilities.Join(removeErrors...),
+				errors.Join(removeErrors...),
 			)
 		}
 		return nil

@@ -15,7 +15,6 @@ import (
 	"github.com/minio/minio-go/v7"
 	"github.com/minio/minio-go/v7/pkg/credentials"
 	"github.com/offen/docker-volume-backup/internal/storage"
-	"github.com/offen/docker-volume-backup/internal/utilities"
 )
 type s3Storage struct {
@@ -159,7 +158,7 @@ func (b *s3Storage) Prune(deadline time.Time, pruningPrefix string) (*storage.Pr
 			}
 		}
 		if len(removeErrors) != 0 {
-			return utilities.Join(removeErrors...)
+			return errors.Join(removeErrors...)
 		}
 		return nil
 	}); err != nil {

@@ -67,15 +67,17 @@ func (b *webDavStorage) Name() string {
 // Copy copies the given file to the WebDav storage backend.
 func (b *webDavStorage) Copy(file string) error {
-	bytes, err := os.ReadFile(file)
 	_, name := path.Split(file)
-	if err != nil {
-		return fmt.Errorf("(*webDavStorage).Copy: Error reading the file to be uploaded: %w", err)
-	}
 	if err := b.client.MkdirAll(b.DestinationPath, 0644); err != nil {
 		return fmt.Errorf("(*webDavStorage).Copy: Error creating directory '%s' on WebDAV server: %w", b.DestinationPath, err)
 	}
-	if err := b.client.Write(filepath.Join(b.DestinationPath, name), bytes, 0644); err != nil {
+	r, err := os.Open(file)
+	if err != nil {
+		return fmt.Errorf("(*webDavStorage).Copy: Error opening the file to be uploaded: %w", err)
+	}
+	if err := b.client.WriteStream(filepath.Join(b.DestinationPath, name), r, 0644); err != nil {
 		return fmt.Errorf("(*webDavStorage).Copy: Error uploading the file to WebDAV server: %w", err)
 	}
 	b.Log(storage.LogLevelInfo, b.Name(), "Uploaded a copy of backup '%s' to WebDAV URL '%s' at path '%s'.", file, b.url, b.DestinationPath)


@@ -1,24 +0,0 @@
// Copyright 2022 - Offen Authors <hioffen@posteo.de>
// SPDX-License-Identifier: MPL-2.0
package utilities
import (
"errors"
"strings"
)
// Join takes a list of errors and joins them into a single error
func Join(errs ...error) error {
if len(errs) == 1 {
return errs[0]
}
var msgs []string
for _, err := range errs {
if err == nil {
continue
}
msgs = append(msgs, err.Error())
}
return errors.New("[" + strings.Join(msgs, ", ") + "]")
}

@@ -0,0 +1,58 @@
version: '3'

services:
  storage:
    image: mcr.microsoft.com/azure-storage/azurite
    volumes:
      - azurite_backup_data:/data
    command: azurite-blob --blobHost 0.0.0.0 --blobPort 10000 --location /data
    healthcheck:
      test: nc 127.0.0.1 10000 -z
      interval: 1s
      retries: 30

  az_cli:
    image: mcr.microsoft.com/azure-cli
    volumes:
      - ./local:/dump
    command:
      - /bin/sh
      - -c
      - |
        az storage container create --name test-container
    depends_on:
      storage:
        condition: service_healthy
    environment:
      AZURE_STORAGE_CONNECTION_STRING: DefaultEndpointsProtocol=http;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;BlobEndpoint=http://storage:10000/devstoreaccount1;

  backup:
    image: offen/docker-volume-backup:${TEST_VERSION:-canary}
    hostname: hostnametoken
    restart: always
    environment:
      AZURE_STORAGE_ACCOUNT_NAME: devstoreaccount1
      AZURE_STORAGE_PRIMARY_ACCOUNT_KEY: Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==
      AZURE_STORAGE_CONTAINER_NAME: test-container
      AZURE_STORAGE_ENDPOINT: http://storage:10000/{{ .AccountName }}/
      AZURE_STORAGE_PATH: 'path/to/backup'
      BACKUP_FILENAME: test.tar.gz
      BACKUP_CRON_EXPRESSION: 0 0 5 31 2 ?
      BACKUP_RETENTION_DAYS: ${BACKUP_RETENTION_DAYS:-7}
      BACKUP_PRUNING_LEEWAY: 5s
      BACKUP_PRUNING_PREFIX: test
    volumes:
      - app_data:/backup/app_data:ro
      - /var/run/docker.sock:/var/run/docker.sock

  offen:
    image: offen/offen:latest
    labels:
      - docker-volume-backup.stop-during-backup=true
    volumes:
      - app_data:/var/opt/offen

volumes:
  azurite_backup_data:
    name: azurite_backup_data
  app_data:

test/azure/run.sh

@@ -0,0 +1,40 @@
#!/bin/sh
set -e
cd "$(dirname "$0")"
. ../util.sh
current_test=$(basename $(pwd))
docker-compose up -d
sleep 5
# A symlink for a known file in the volume is created so the test can check
# whether symlinks are preserved on backup.
docker-compose exec backup backup
sleep 5
expect_running_containers "3"
docker-compose run --rm az_cli \
az storage blob download -f /dump/test.tar.gz -c test-container -n path/to/backup/test.tar.gz
tar -xvf ./local/test.tar.gz -C /tmp && test -f /tmp/backup/app_data/offen.db
pass "Found relevant files in untared remote backups."
# The second part of this test checks if backups get deleted when the retention
# is set to 0 days (which it should not as it would mean all backups get deleted)
# TODO: find out if we can test actual deletion without having to wait for a day
BACKUP_RETENTION_DAYS="0" docker-compose up -d
sleep 5
docker-compose exec backup backup
docker-compose run --rm az_cli \
az storage blob download -f /dump/test.tar.gz -c test-container -n path/to/backup/test.tar.gz
test -f ./local/test.tar.gz
pass "Remote backups have not been deleted."
docker-compose down --volumes

@@ -24,10 +24,10 @@ openssl x509 -req -passin pass:test \
 openssl x509 -in minio.crt -noout -text
-docker-compose up -d
+docker compose up -d
 sleep 5
-docker-compose exec backup backup
+docker compose exec backup backup
 sleep 5
@@ -40,4 +40,4 @@ docker run --rm -it \
 pass "Found relevant files in untared remote backups."
-docker-compose down --volumes
+docker compose down --volumes

@@ -11,7 +11,8 @@ services:
       MARIADB_DATABASE: backup
     labels:
       # this is testing the deprecated label on purpose
-      - docker-volume-backup.exec-pre=/bin/sh -c 'mysqldump -ptest --all-databases > /tmp/volume/dump.sql'
+      - docker-volume-backup.archive-pre=/bin/sh -c 'mysqldump -ptest --all-databases > /tmp/volume/dump.sql'
+      - docker-volume-backup.archive-post=/bin/sh -c 'rm /tmp/volume/dump.sql'
       - docker-volume-backup.copy-post=/bin/sh -c 'echo "post" > /tmp/volume/post.txt'
       - docker-volume-backup.exec-label=test
     volumes:

@@ -6,10 +6,10 @@ cd $(dirname $0)
 . ../util.sh
 current_test=$(basename $(pwd))
-docker-compose up -d
+docker compose up -d
 sleep 30 # mariadb likes to take a bit before responding
-docker-compose exec backup backup
+docker compose exec backup backup
 sudo cp -r $(docker volume inspect --format='{{ .Mountpoint }}' commands_archive) ./local
 tar -xvf ./local/test.tar.gz
@@ -28,7 +28,7 @@ if [ -f ./backup/data/post.txt ]; then
 fi
 pass "Did not find unexpected file."
-docker-compose down --volumes
+docker compose down --volumes
 sudo rm -rf ./local

@@ -8,12 +8,12 @@ current_test=$(basename $(pwd))
 mkdir -p local
-docker-compose up -d
+docker compose up -d
 # sleep until a backup is guaranteed to have happened on the 1 minute schedule
 sleep 100
-docker-compose down --volumes
+docker compose down --volumes
 if [ ! -f ./local/conf.tar.gz ]; then
   fail "Config from file was not used."

test/extend/Dockerfile

@@ -0,0 +1,4 @@
ARG version=canary
FROM offen/docker-volume-backup:$version
RUN apk add rsync

@@ -0,0 +1,26 @@
version: '3'

services:
  backup:
    image: offen/docker-volume-backup:${TEST_VERSION:-canary}
    restart: always
    labels:
      - docker-volume-backup.copy-post=/bin/sh -c 'mkdir -p /tmp/unpack && tar -xvf $$COMMAND_RUNTIME_ARCHIVE_FILEPATH -C /tmp/unpack && rsync -r /tmp/unpack/backup/app_data /local'
    environment:
      BACKUP_FILENAME: test.tar.gz
      BACKUP_CRON_EXPRESSION: 0 0 5 31 2 ?
      EXEC_FORWARD_OUTPUT: "true"
    volumes:
      - ./local:/local
      - app_data:/backup/app_data:ro
      - /var/run/docker.sock:/var/run/docker.sock

  offen:
    image: offen/offen:latest
    labels:
      - docker-volume-backup.stop-during-backup=true
    volumes:
      - app_data:/var/opt/offen

volumes:
  app_data:

test/extend/run.sh

@@ -0,0 +1,28 @@
#!/bin/sh
set -e
cd "$(dirname "$0")"
. ../util.sh
current_test=$(basename $(pwd))
mkdir -p local
export TEST_VERSION="${TEST_VERSION:-canary}-with-rsync"
docker build . -t offen/docker-volume-backup:$TEST_VERSION
docker compose up -d
sleep 5
docker compose exec backup backup
sleep 5
expect_running_containers "2"
if [ ! -f "./local/app_data/offen.db" ]; then
fail "Could not find expected file in untared archive."
fi
docker compose down --volumes

@@ -8,10 +8,10 @@ current_test=$(basename $(pwd))
 mkdir -p local
-docker-compose up -d
+docker compose up -d
 sleep 5
-docker-compose exec backup backup
+docker compose exec backup backup
 expect_running_containers "2"
@@ -30,4 +30,4 @@ if [ ! -L ./local/test-latest.tar.gz.gpg ]; then
   fail "Could not find local symlink to latest encrypted backup."
 fi
-docker-compose down --volumes
+docker compose down --volumes

@@ -8,11 +8,11 @@ current_test=$(basename $(pwd))
 mkdir -p local
-docker-compose up -d
+docker compose up -d
 sleep 5
-docker-compose exec backup backup
-docker-compose down --volumes
+docker compose exec backup backup
+docker compose down --volumes
 out=$(mktemp -d)
 sudo tar --same-owner -xvf ./local/test.tar.gz -C "$out"

@@ -8,13 +8,13 @@ current_test=$(basename $(pwd))
 mkdir -p local
-docker-compose up -d
+docker compose up -d
 sleep 5
 # A symlink for a known file in the volume is created so the test can check
 # whether symlinks are preserved on backup.
-docker-compose exec offen ln -s /var/opt/offen/offen.db /var/opt/offen/db.link
-docker-compose exec backup backup
+docker compose exec offen ln -s /var/opt/offen/offen.db /var/opt/offen/db.link
+docker compose exec backup backup
 sleep 5
@@ -42,14 +42,14 @@ pass "Found symlink to latest version in local backup."
 # The second part of this test checks if backups get deleted when the retention
 # is set to 0 days (which it should not as it would mean all backups get deleted)
 # TODO: find out if we can test actual deletion without having to wait for a day
-BACKUP_RETENTION_DAYS="0" docker-compose up -d
+BACKUP_RETENTION_DAYS="0" docker compose up -d
 sleep 5
-docker-compose exec backup backup
+docker compose exec backup backup
 if [ "$(find ./local -type f | wc -l)" != "1" ]; then
   fail "Backups should not have been deleted, instead seen: "$(find ./local -type f)""
 fi
 pass "Local backups have not been deleted."
-docker-compose down --volumes
+docker compose down --volumes

@@ -8,13 +8,13 @@ current_test=$(basename $(pwd))
 mkdir -p local
-docker-compose up -d
+docker compose up -d
 sleep 5
 GOTIFY_TOKEN=$(curl -sSLX POST -H 'Content-Type: application/json' -d '{"name":"test"}' http://admin:custom@localhost:8080/application | jq -r '.token')
 info "Set up Gotify application using token $GOTIFY_TOKEN"
-docker-compose exec backup backup
+docker compose exec backup backup
 NUM_MESSAGES=$(curl -sSL http://admin:custom@localhost:8080/message | jq -r '.messages | length')
 if [ "$NUM_MESSAGES" != 0 ]; then
@@ -22,11 +22,11 @@ if [ "$NUM_MESSAGES" != 0 ]; then
 fi
 pass "No notifications were sent when not configured."
-docker-compose down
-NOTIFICATION_URLS="gotify://gotify/${GOTIFY_TOKEN}?disableTLS=true" docker-compose up -d
-docker-compose exec backup backup
+docker compose down
+NOTIFICATION_URLS="gotify://gotify/${GOTIFY_TOKEN}?disableTLS=true" docker compose up -d
+docker compose exec backup backup
 NUM_MESSAGES=$(curl -sSL http://admin:custom@localhost:8080/message | jq -r '.messages | length')
 if [ "$NUM_MESSAGES" != 1 ]; then
@@ -47,4 +47,4 @@ if [ "$MESSAGE_BODY" != "Backing up /tmp/test.tar.gz succeeded." ]; then
 fi
 pass "Custom notification body was used."
-docker-compose down --volumes
+docker compose down --volumes

@@ -0,0 +1,28 @@
version: "3"

services:
  rsync:
    image: eeacms/rsync
    tty: true
    restart: unless-stopped
    labels:
      - docker-volume-backup.exec-label=order
      - docker-volume-backup.archive-pre=sh -c "rsync -aAX --ignore-missing-args --delete-missing-args /data/ /bu/"
      - docker-volume-backup.archive-post=sh -c "rm -rf /bu/*"
    volumes:
      - ./fixture:/data:ro
      - bu:/bu

  backup:
    image: offen/docker-volume-backup:${TEST_VERSION:-canary}
    restart: always
    environment:
      BACKUP_FILENAME: backup.tar.gz
      BACKUP_EXEC_LABEL: order
    volumes:
      - bu:/backup/order:ro
      - ./local:/archive
      - /var/run/docker.sock:/var/run/docker.sock

volumes:
  bu:


@@ -0,0 +1 @@
ok

test/order/run.sh

@@ -0,0 +1,26 @@
#!/bin/sh
set -e
cd $(dirname $0)
. ../util.sh
current_test=$(basename $(pwd))
mkdir -p local
docker compose up -d
sleep 10
docker compose exec backup backup
if [ ! -f "./local/backup.tar.gz" ]; then
fail "Could not find expected backup file."
fi
tmp_dir=$(mktemp -d)
tar -xvf ./local/backup.tar.gz -C $tmp_dir
if [ ! -f "$tmp_dir/backup/order/test.txt" ]; then
fail "Could not find expected file in untared archive."
fi
docker compose down --volumes

@@ -9,10 +9,10 @@ current_test=$(basename $(pwd))
 mkdir -p local
-docker-compose up -d
+docker compose up -d
 sleep 5
-docker-compose exec backup backup
+docker compose exec backup backup
 tmp_dir=$(mktemp -d)
 sudo tar --same-owner -xvf ./local/backup.tar.gz -C $tmp_dir
@@ -27,4 +27,4 @@ for file in $(sudo find $tmp_dir/backup/postgres); do
 done
 pass "All files and directories in backup preserved their ownership."
-docker-compose down --volumes
+docker compose down --volumes

@@ -6,12 +6,12 @@ cd "$(dirname "$0")"
 . ../util.sh
 current_test=$(basename $(pwd))
-docker-compose up -d
+docker compose up -d
 sleep 5
 # A symlink for a known file in the volume is created so the test can check
 # whether symlinks are preserved on backup.
-docker-compose exec backup backup
+docker compose exec backup backup
 sleep 5
@@ -27,10 +27,10 @@ pass "Found relevant files in untared remote backups."
 # The second part of this test checks if backups get deleted when the retention
 # is set to 0 days (which it should not as it would mean all backups get deleted)
 # TODO: find out if we can test actual deletion without having to wait for a day
-BACKUP_RETENTION_DAYS="0" docker-compose up -d
+BACKUP_RETENTION_DAYS="0" docker compose up -d
 sleep 5
-docker-compose exec backup backup
+docker compose exec backup backup
 docker run --rm -it \
   -v minio_backup_data:/minio_data \
@@ -39,4 +39,4 @@ docker run --rm -it \
 pass "Remote backups have not been deleted."
-docker-compose down --volumes
+docker compose down --volumes

@@ -8,10 +8,10 @@ current_test=$(basename $(pwd))
 ssh-keygen -t rsa -m pem -b 4096 -N "test1234" -f id_rsa -C "docker-volume-backup@local"
-docker-compose up -d
+docker compose up -d
 sleep 5
-docker-compose exec backup backup
+docker compose exec backup backup
 sleep 5
@@ -27,10 +27,10 @@ pass "Found relevant files in decrypted and untared remote backups."
 # The second part of this test checks if backups get deleted when the retention
 # is set to 0 days (which it should not as it would mean all backups get deleted)
 # TODO: find out if we can test actual deletion without having to wait for a day
-BACKUP_RETENTION_DAYS="0" docker-compose up -d
+BACKUP_RETENTION_DAYS="0" docker compose up -d
 sleep 5
-docker-compose exec backup backup
+docker compose exec backup backup
 docker run --rm -it \
   -v ssh_backup_data:/ssh_data \
@@ -39,5 +39,5 @@ docker run --rm -it \
 pass "Remote backups have not been deleted."
-docker-compose down --volumes
+docker compose down --volumes
 rm -f id_rsa id_rsa.pub

@@ -6,10 +6,10 @@ cd "$(dirname "$0")"
 . ../util.sh
 current_test=$(basename $(pwd))
-docker-compose up -d
+docker compose up -d
 sleep 5
-docker-compose exec backup backup
+docker compose exec backup backup
 sleep 5
@@ -25,10 +25,10 @@ pass "Found relevant files in untared remote backup."
 # The second part of this test checks if backups get deleted when the retention
 # is set to 0 days (which it should not as it would mean all backups get deleted)
 # TODO: find out if we can test actual deletion without having to wait for a day
-BACKUP_RETENTION_DAYS="0" docker-compose up -d
+BACKUP_RETENTION_DAYS="0" docker compose up -d
 sleep 5
-docker-compose exec backup backup
+docker compose exec backup backup
 docker run --rm -it \
   -v webdav_backup_data:/webdav_data \
@@ -37,4 +37,4 @@ docker run --rm -it \
 pass "Remote backups have not been deleted."
-docker-compose down --volumes
+docker compose down --volumes