Compare commits


7 Commits

Author · SHA1 · Message · Date

Frederik Ring · 8c8a2fa088 · Update vulnerable containerd dependency (#104) · 2022-06-07 09:21:40 +02:00

Frederik Ring · a850bf13fe · Fix broken link in README · 2022-05-12 08:18:12 +02:00

Frederik Ring · b52b271bac · Allow for the exclusion of files from backups (#100) · 2022-05-08 11:20:38 +02:00

  * Hoist walking of files so it can be used for features other than archive creation
  * Add option to ignore files from backup using glob patterns
  * Use Regexp instead of glob for exclusion
  * Ignore artifacts
  * Add teardown to test
  * Allow single Re for filtering only
  * Add documentation
  * Use MatchString on re, add bad input to message in case of error

Frederik Ring · cac5777e79 · Add documentation on using multiple configs for complex retention schemes · 2022-04-20 18:01:41 +02:00

Frederik Ring · 94a1edc4ad · Allow disabling of certificate verification for WebDAV (#98) · 2022-04-20 14:16:59 +02:00

Frederik Ring · a654097e59 · Update webdav client library (#97) · 2022-04-20 10:56:26 +02:00

Frederik Ring · 1b1fc4856c · List objects recursively when selecting candidates from S3 (#92) · 2022-04-15 11:05:52 +02:00
12 changed files with 236 additions and 690 deletions


@@ -30,6 +30,7 @@ It handles __recurring or one-off backups of Docker volumes__ to a __local direc
   - [Replace deprecated `BACKUP_FROM_SNAPSHOT` usage](#replace-deprecated-backup_from_snapshot-usage)
   - [Using a custom Docker host](#using-a-custom-docker-host)
   - [Run multiple backup schedules in the same container](#run-multiple-backup-schedules-in-the-same-container)
+  - [Define different retention schedules](#define-different-retention-schedules)
 - [Recipes](#recipes)
   - [Backing up to AWS S3](#backing-up-to-aws-s3)
   - [Backing up to Filebase](#backing-up-to-filebase)
@@ -167,6 +168,12 @@ You can populate below template according to your requirements and use it as you
 # BACKUP_SOURCES="/other/location"
 
+# When given, all files in BACKUP_SOURCES whose full path matches the given
+# regular expression will be excluded from the archive. Regular Expressions
+# can be used as from the Go standard library https://pkg.go.dev/regexp
+
+# BACKUP_EXCLUDE_REGEXP="\.log$"
+
 ########### BACKUP STORAGE
 
 # The name of the remote bucket that should be used for storing backups. If
@@ -207,9 +214,9 @@ You can populate below template according to your requirements and use it as you
 # AWS_ENDPOINT_PROTO="https"
 
 # Setting this variable to `true` will disable verification of
-# SSL certificates. You shouldn't use this unless you use self-signed
-# certificates for your remote storage backend. This can only be used
-# when AWS_ENDPOINT_PROTO is set to `https`.
+# SSL certificates for AWS_ENDPOINT. You shouldn't use this unless you use
+# self-signed certificates for your remote storage backend. This can only be
+# used when AWS_ENDPOINT_PROTO is set to `https`.
 
 # AWS_ENDPOINT_INSECURE="true"
@@ -232,6 +239,12 @@ You can populate below template according to your requirements and use it as you
 # WEBDAV_PASSWORD="password"
 
+# Setting this variable to `true` will disable verification of
+# SSL certificates for WEBDAV_URL. You shouldn't use this unless you use
+# self-signed certificates for your remote storage backend.
+
+# WEBDAV_URL_INSECURE="true"
+
 # In addition to storing backups remotely, you can also keep local copies.
 # Pass a container-local path to store your backups if needed. You also need to
 # mount a local folder or Docker volume into that location (`/archive`
@@ -382,7 +395,7 @@ You can populate below template according to your requirements and use it as you
 # EMAIL_SMTP_PORT="<port>"
 ```
 
-In case you encouter double quoted values in your configuration you might be running an [older version of `docker-compose`].
+In case you encouter double quoted values in your configuration you might be running an [older version of `docker-compose`][compose-issue].
 You can work around this by either updating `docker-compose` or unquoting your configuration values.
 
 [compose-issue]: https://github.com/docker/compose/issues/2854
@@ -733,6 +746,39 @@ The exact order of schedules that use the same cron expression is not specified.
 In case you need your schedules to overlap, you need to create a dedicated container for each schedule instead.
 When changing the configuration, you currently need to manually restart the container for the changes to take effect.
 
+### Define different retention schedules
+
+If you want to manage backup retention on different schedules, the most straight forward approach is to define a dedicated configuration for each retention rule using a different prefix in the `BACKUP_FILENAME` parameter and then run them on different cron schedules.
+
+For example, if you wanted to keep daily backups for 7 days, weekly backups for a month, and retain monthly backups forever, you could create three configuration files and mount them into `/etc/dockervolumebackup.d`:
+
+```ini
+# 01daily.conf
+BACKUP_FILENAME="daily-backup-%Y-%m-%dT%H-%M-%S.tar.gz"
+# run every day at 2am
+BACKUP_CRON_EXPRESSION="0 2 * * *"
+BACKUP_PRUNING_PREFIX="daily-backup-"
+BACKUP_RETENTION_DAYS="7"
+```
+
+```ini
+# 02weekly.conf
+BACKUP_FILENAME="weekly-backup-%Y-%m-%dT%H-%M-%S.tar.gz"
+# run every monday at 3am
+BACKUP_CRON_EXPRESSION="0 3 * * 1"
+BACKUP_PRUNING_PREFIX="weekly-backup-"
+BACKUP_RETENTION_DAYS="31"
+```
+
+```ini
+# 03monthly.conf
+BACKUP_FILENAME="monthly-backup-%Y-%m-%dT%H-%M-%S.tar.gz"
+# run every 1st of a month at 4am
+BACKUP_CRON_EXPRESSION="0 4 1 * *"
+```
+
+Note that while it's possible to define colliding cron schedules for each of these configurations, you might need to adjust the value for `LOCK_TIMEOUT` in case your backups are large and might take longer than an hour.
+
 ## Recipes
 
 This section lists configuration for some real-world use cases that you can mix and match according to your needs.
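The new `BACKUP_EXCLUDE_REGEXP` option documented above is matched against the full path of every file encountered while walking `BACKUP_SOURCES`. A minimal, self-contained Go sketch of those matching semantics, using the standard `regexp` package (the sample paths are invented for illustration):

```go
package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Equivalent of BACKUP_EXCLUDE_REGEXP="\.log$": the pattern is tested
	// against each file's full path, and matching files are skipped.
	exclude := regexp.MustCompile(`\.log$`)

	// Hypothetical paths, for illustration only.
	paths := []string{
		"/backup/data/app.db",
		"/backup/data/debug.log",
		"/backup/data/nested/trace.log",
	}

	for _, p := range paths {
		if exclude.MatchString(p) {
			fmt.Println("excluded:", p)
			continue
		}
		fmt.Println("included:", p)
	}
}
```

Because the full path is matched, patterns can target directory segments (for example `/tmp/`) just as well as file extensions.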


@@ -11,14 +11,13 @@ import (
     "compress/gzip"
     "fmt"
     "io"
-    "io/fs"
     "os"
     "path"
     "path/filepath"
     "strings"
 )
 
-func createArchive(inputFilePath, outputFilePath string) error {
+func createArchive(files []string, inputFilePath, outputFilePath string) error {
     inputFilePath = stripTrailingSlashes(inputFilePath)
     inputFilePath, outputFilePath, err := makeAbsolute(inputFilePath, outputFilePath)
     if err != nil {
@@ -28,7 +27,7 @@ func createArchive(inputFilePath, outputFilePath string) error {
         return fmt.Errorf("createArchive: error creating output file path: %w", err)
     }
 
-    if err := compress(inputFilePath, outputFilePath, filepath.Dir(inputFilePath)); err != nil {
+    if err := compress(files, outputFilePath, filepath.Dir(inputFilePath)); err != nil {
         return fmt.Errorf("createArchive: error creating archive: %w", err)
     }
@@ -52,7 +51,7 @@ func makeAbsolute(inputFilePath, outputFilePath string) (string, string, error)
     return inputFilePath, outputFilePath, err
 }
 
-func compress(inPath, outFilePath, subPath string) error {
+func compress(paths []string, outFilePath, subPath string) error {
     file, err := os.Create(outFilePath)
     if err != nil {
         return fmt.Errorf("compress: error creating out file: %w", err)
@@ -62,14 +61,6 @@ func compress(inPath, outFilePath, subPath string) error {
     gzipWriter := gzip.NewWriter(file)
     tarWriter := tar.NewWriter(gzipWriter)
 
-    var paths []string
-    if err := filepath.WalkDir(inPath, func(path string, di fs.DirEntry, err error) error {
-        paths = append(paths, path)
-        return err
-    }); err != nil {
-        return fmt.Errorf("compress: error walking filesystem tree: %w", err)
-    }
-
     for _, p := range paths {
         if err := writeTarGz(p, tarWriter, prefix); err != nil {
             return fmt.Errorf("compress error writing %s to archive: %w", p, err)


@@ -3,7 +3,11 @@
 package main
 
-import "time"
+import (
+    "fmt"
+    "regexp"
+    "time"
+)
 
 // Config holds all configuration values that are expected to be set
 // by users.
@@ -18,6 +22,7 @@ type Config struct {
     BackupPruningPrefix      string        `split_words:"true"`
     BackupStopContainerLabel string        `split_words:"true" default:"true"`
     BackupFromSnapshot       bool          `split_words:"true"`
+    BackupExcludeRegexp      RegexpDecoder `split_words:"true"`
     AwsS3BucketName          string        `split_words:"true"`
     AwsS3Path                string        `split_words:"true"`
     AwsEndpoint              string        `split_words:"true" default:"s3.amazonaws.com"`
@@ -36,6 +41,7 @@ type Config struct {
     EmailSMTPUsername        string        `envconfig:"EMAIL_SMTP_USERNAME"`
     EmailSMTPPassword        string        `envconfig:"EMAIL_SMTP_PASSWORD"`
     WebdavUrl                string        `split_words:"true"`
+    WebdavUrlInsecure        bool          `split_words:"true"`
     WebdavPath               string        `split_words:"true" default:"/"`
     WebdavUsername           string        `split_words:"true"`
     WebdavPassword           string        `split_words:"true"`
@@ -43,3 +49,19 @@ type Config struct {
     ExecForwardOutput        bool          `split_words:"true"`
     LockTimeout              time.Duration `split_words:"true" default:"60m"`
 }
+
+type RegexpDecoder struct {
+    Re *regexp.Regexp
+}
+
+func (r *RegexpDecoder) Decode(v string) error {
+    if v == "" {
+        return nil
+    }
+    re, err := regexp.Compile(v)
+    if err != nil {
+        return fmt.Errorf("config: error compiling given regexp `%s`: %w", v, err)
+    }
+    *r = RegexpDecoder{Re: re}
+    return nil
+}
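`RegexpDecoder` satisfies the `Decoder` interface from the envconfig library, which calls `Decode` with the raw string value of the environment variable. Its behavior is easy to exercise in isolation by calling `Decode` directly, as in this sketch (the inputs are arbitrary examples):

```go
package main

import (
	"fmt"
	"regexp"
)

// RegexpDecoder as introduced in the diff: envconfig calls Decode with the
// raw value of the corresponding environment variable.
type RegexpDecoder struct {
	Re *regexp.Regexp
}

func (r *RegexpDecoder) Decode(v string) error {
	if v == "" {
		return nil // unset variable leaves Re nil, meaning "exclude nothing"
	}
	re, err := regexp.Compile(v)
	if err != nil {
		return fmt.Errorf("config: error compiling given regexp `%s`: %w", v, err)
	}
	*r = RegexpDecoder{Re: re}
	return nil
}

func main() {
	var d RegexpDecoder
	if err := d.Decode(`\.log$`); err != nil {
		panic(err)
	}
	fmt.Println(d.Re.MatchString("/backup/data/app.log")) // true

	var bad RegexpDecoder
	fmt.Println(bad.Decode(`([`)) // compile error that echoes the bad input
}
```

An empty value leaves `Re` nil, which the walk in `takeBackup` treats as "exclude nothing".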


@@ -9,6 +9,7 @@ import (
     "fmt"
     "io"
     "io/fs"
+    "net/http"
     "os"
     "path"
     "path/filepath"
@@ -146,6 +147,15 @@ func newScript() (*script, error) {
         } else {
             webdavClient := gowebdav.NewClient(s.c.WebdavUrl, s.c.WebdavUsername, s.c.WebdavPassword)
             s.webdavClient = webdavClient
+            if s.c.WebdavUrlInsecure {
+                defaultTransport, ok := http.DefaultTransport.(*http.Transport)
+                if !ok {
+                    return nil, errors.New("newScript: unexpected error when asserting type for http.DefaultTransport")
+                }
+                webdavTransport := defaultTransport.Clone()
+                webdavTransport.TLSClientConfig.InsecureSkipVerify = s.c.WebdavUrlInsecure
+                s.webdavClient.SetTransport(webdavTransport)
+            }
         }
     }
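The `WEBDAV_URL_INSECURE` branch above type-asserts `http.DefaultTransport`, clones it, and disables certificate verification on the clone only, so the process-wide default transport keeps verifying certificates. A standalone sketch of the same pattern; the nil guard on `TLSClientConfig` is my addition and not part of the diff:

```go
package main

import (
	"crypto/tls"
	"errors"
	"log"
	"net/http"
)

// insecureTransport clones the default transport and relaxes certificate
// verification on the clone only.
func insecureTransport() (*http.Transport, error) {
	defaultTransport, ok := http.DefaultTransport.(*http.Transport)
	if !ok {
		return nil, errors.New("unexpected type for http.DefaultTransport")
	}
	transport := defaultTransport.Clone()
	if transport.TLSClientConfig == nil {
		// Assumption: guard against a nil TLS config on the clone.
		transport.TLSClientConfig = &tls.Config{}
	}
	transport.TLSClientConfig.InsecureSkipVerify = true
	return transport, nil
}

func main() {
	transport, err := insecureTransport()
	if err != nil {
		log.Fatal(err)
	}
	// In the diff, the transport is handed to the WebDAV client via
	// SetTransport; here we just show it is usable in a plain client.
	_ = &http.Client{Transport: transport}
}
```

Cloning instead of mutating `http.DefaultTransport` in place avoids silently disabling TLS verification for every other HTTP client in the process.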
@@ -388,7 +398,28 @@ func (s *script) takeBackup() error {
         s.logger.Infof("Removed tar file `%s`.", tarFile)
         return nil
     })
-    if err := createArchive(backupSources, tarFile); err != nil {
+
+    backupPath, err := filepath.Abs(stripTrailingSlashes(backupSources))
+    if err != nil {
+        return fmt.Errorf("takeBackup: error getting absolute path: %w", err)
+    }
+
+    var filesEligibleForBackup []string
+    if err := filepath.WalkDir(backupPath, func(path string, di fs.DirEntry, err error) error {
+        if err != nil {
+            return err
+        }
+        if s.c.BackupExcludeRegexp.Re != nil && s.c.BackupExcludeRegexp.Re.MatchString(path) {
+            return nil
+        }
+        filesEligibleForBackup = append(filesEligibleForBackup, path)
+        return nil
+    }); err != nil {
+        return fmt.Errorf("compress: error walking filesystem tree: %w", err)
+    }
+
+    if err := createArchive(filesEligibleForBackup, backupSources, tarFile); err != nil {
         return fmt.Errorf("takeBackup: error compressing backup folder: %w", err)
     }
 
@@ -537,6 +568,7 @@ func (s *script) pruneBackups() error {
     candidates := s.minioClient.ListObjects(context.Background(), s.c.AwsS3BucketName, minio.ListObjectsOptions{
         WithMetadata: true,
         Prefix:       filepath.Join(s.c.AwsS3Path, s.c.BackupPruningPrefix),
+        Recursive:    true,
     })
 
     var matches []minio.ObjectInfo
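The one-line `Recursive: true` change is what makes pruning work for backups stored under a nested `AWS_S3_PATH`: without it, the minio client emulates a directory listing and stops at the first `/` delimiter, so nested objects are never returned as candidates. A sketch of the recursive listing; the endpoint, credentials, bucket, and prefix are placeholders:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/minio/minio-go/v7"
	"github.com/minio/minio-go/v7/pkg/credentials"
)

func main() {
	// Placeholder endpoint and credentials, for illustration only.
	client, err := minio.New("s3.amazonaws.com", &minio.Options{
		Creds:  credentials.NewStaticV4("ACCESS_KEY", "SECRET_KEY", ""),
		Secure: true,
	})
	if err != nil {
		log.Fatal(err)
	}

	// Recursive: true walks the entire key space under the prefix instead
	// of stopping at the first "/" delimiter, which is what allows backups
	// stored under a nested path to be found during pruning.
	for object := range client.ListObjects(context.Background(), "my-bucket", minio.ListObjectsOptions{
		Prefix:    "my/path/daily-backup-",
		Recursive: true,
	}) {
		if object.Err != nil {
			log.Fatal(object.Err)
		}
		fmt.Println(object.Key, object.LastModified)
	}
}
```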

go.mod (31 changes)

@@ -12,14 +12,14 @@ require (
 	github.com/minio/minio-go/v7 v7.0.16
 	github.com/otiai10/copy v1.7.0
 	github.com/sirupsen/logrus v1.8.1
-	github.com/studio-b12/gowebdav v0.0.0-20211109083228-3f8721cd4b6f
+	github.com/studio-b12/gowebdav v0.0.0-20220128162035-c7b1ff8a5e62
 	golang.org/x/crypto v0.0.0-20210817164053-32db794688a5
-	golang.org/x/sync v0.0.0-20210220032951-036812b2e83c
+	golang.org/x/sync v0.0.0-20220601150217-0de741cfad7f
 )
 
 require (
-	github.com/Microsoft/go-winio v0.4.17 // indirect
-	github.com/containerd/containerd v1.5.5 // indirect
+	github.com/Microsoft/go-winio v0.5.2 // indirect
+	github.com/containerd/containerd v1.6.6 // indirect
 	github.com/docker/distribution v2.7.1+incompatible // indirect
 	github.com/docker/go-connections v0.4.0 // indirect
 	github.com/docker/go-units v0.4.0 // indirect
@@ -27,33 +27,38 @@ require (
 	github.com/fatih/color v1.10.0 // indirect
 	github.com/fsnotify/fsnotify v1.4.9 // indirect
 	github.com/gogo/protobuf v1.3.2 // indirect
-	github.com/golang/protobuf v1.5.0 // indirect
+	github.com/golang/protobuf v1.5.2 // indirect
 	github.com/google/uuid v1.3.0 // indirect
+	github.com/gorilla/mux v1.7.3 // indirect
 	github.com/json-iterator/go v1.1.12 // indirect
-	github.com/klauspost/compress v1.13.6 // indirect
+	github.com/klauspost/compress v1.15.6 // indirect
 	github.com/klauspost/cpuid/v2 v2.0.9 // indirect
+	github.com/kr/text v0.2.0 // indirect
 	github.com/mattn/go-colorable v0.1.8 // indirect
 	github.com/mattn/go-isatty v0.0.12 // indirect
 	github.com/minio/md5-simd v1.1.2 // indirect
 	github.com/minio/sha256-simd v1.0.0 // indirect
 	github.com/mitchellh/go-homedir v1.1.0 // indirect
+	github.com/moby/term v0.0.0-20200312100748-672ec06f55cd // indirect
 	github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
 	github.com/modern-go/reflect2 v1.0.2 // indirect
 	github.com/morikuni/aec v1.0.0 // indirect
+	github.com/niemeyer/pretty v0.0.0-20200227124842-a10e7caefd8e // indirect
 	github.com/nxadm/tail v1.4.6 // indirect
 	github.com/onsi/ginkgo v1.14.2 // indirect
 	github.com/onsi/gomega v1.10.3 // indirect
 	github.com/opencontainers/go-digest v1.0.0 // indirect
-	github.com/opencontainers/image-spec v1.0.1 // indirect
+	github.com/opencontainers/image-spec v1.0.3-0.20211202183452-c5a74bcca799 // indirect
 	github.com/pkg/errors v0.9.1 // indirect
 	github.com/rs/xid v1.3.0 // indirect
-	golang.org/x/net v0.0.0-20210226172049-e18ecbb05110 // indirect
-	golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1 // indirect
-	golang.org/x/text v0.3.6 // indirect
+	golang.org/x/net v0.0.0-20220607020251-c690dde0001d // indirect
+	golang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a // indirect
+	golang.org/x/text v0.3.7 // indirect
 	golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1 // indirect
-	google.golang.org/genproto v0.0.0-20201110150050-8816d57aaa9a // indirect
-	google.golang.org/grpc v1.33.2 // indirect
-	google.golang.org/protobuf v1.26.0 // indirect
+	google.golang.org/genproto v0.0.0-20220602131408-e326c6e8e9c8 // indirect
+	google.golang.org/grpc v1.47.0 // indirect
+	google.golang.org/protobuf v1.28.0 // indirect
+	gopkg.in/check.v1 v1.0.0-20200227125254-8fa46927fb4f // indirect
 	gopkg.in/ini.v1 v1.65.0 // indirect
 	gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7 // indirect
 	gopkg.in/yaml.v2 v2.4.0 // indirect

go.sum (724 changes)

File diff suppressed because it is too large.


@@ -43,6 +43,7 @@ services:
       BACKUP_PRUNING_PREFIX: test
       GPG_PASSPHRASE: 1234secret
       WEBDAV_URL: http://webdav/
+      WEBDAV_URL_INSECURE: 'true'
       WEBDAV_PATH: /my/new/path/
       WEBDAV_USERNAME: test
       WEBDAV_PASSWORD: test

test/ignore/.gitignore (new file, vendored, 1 change)

@@ -0,0 +1 @@
+local


@@ -0,0 +1,15 @@
+version: '3.8'
+
+services:
+  backup:
+    image: offen/docker-volume-backup:${TEST_VERSION:-canary}
+    deploy:
+      restart_policy:
+        condition: on-failure
+    environment:
+      BACKUP_FILENAME: test.tar.gz
+      BACKUP_CRON_EXPRESSION: 0 0 5 31 2 ?
+      BACKUP_EXCLUDE_REGEXP: '\.(me|you)$$'
+    volumes:
+      - ./local:/archive
+      - ./sources:/backup/data:ro

test/ignore/run.sh (new file, 27 changes)

@@ -0,0 +1,27 @@
+#!/bin/sh
+set -e
+
+cd $(dirname $0)
+mkdir -p local
+
+docker-compose up -d
+sleep 5
+
+docker-compose exec backup backup
+
+docker-compose down --volumes
+
+out=$(mktemp -d)
+sudo tar --same-owner -xvf ./local/test.tar.gz -C "$out"
+
+if [ ! -f "$out/backup/data/me.txt" ]; then
+  echo "[TEST:FAIL] Expected file was not found."
+  exit 1
+fi
+echo "[TEST:PASS] Expected file was found."
+
+if [ -f "$out/backup/data/skip.me" ]; then
+  echo "[TEST:FAIL] Ignored file was found."
+  exit 1
+fi
+echo "[TEST:PASS] Ignored file was not found."
