Mirror of https://github.com/offen/docker-volume-backup.git
Synced 2025-12-05 17:18:02 +01:00

Compare commits (3 commits):

- b52b271bac
- cac5777e79
- 94a1edc4ad
README.md (52 changed lines)

@@ -30,6 +30,7 @@ It handles __recurring or one-off backups of Docker volumes__ to a __local direc
 - [Replace deprecated `BACKUP_FROM_SNAPSHOT` usage](#replace-deprecated-backup_from_snapshot-usage)
 - [Using a custom Docker host](#using-a-custom-docker-host)
 - [Run multiple backup schedules in the same container](#run-multiple-backup-schedules-in-the-same-container)
+- [Define different retention schedules](#define-different-retention-schedules)
 - [Recipes](#recipes)
 - [Backing up to AWS S3](#backing-up-to-aws-s3)
 - [Backing up to Filebase](#backing-up-to-filebase)
@@ -167,6 +168,12 @@ You can populate below template according to your requirements and use it as you
 # BACKUP_SOURCES="/other/location"

+# When given, all files in BACKUP_SOURCES whose full path matches the given
+# regular expression will be excluded from the archive. Regular expressions
+# are evaluated as in the Go standard library: https://pkg.go.dev/regexp
+
+# BACKUP_EXCLUDE_REGEXP="\.log$"
+
 ########### BACKUP STORAGE

 # The name of the remote bucket that should be used for storing backups. If
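Since the pattern is matched against each file's full path using Go's regexp syntax, a candidate value for `BACKUP_EXCLUDE_REGEXP` can be sanity-checked with a few lines of Go. A minimal sketch (the paths are made up for illustration):

```go
package main

import (
	"fmt"
	"regexp"
)

func main() {
	// The same Go regexp semantics the README refers to: the pattern is
	// matched against the full path of each file inside BACKUP_SOURCES.
	re := regexp.MustCompile(`\.log$`)
	for _, p := range []string{
		"/backup/data/app.log",   // matches: excluded from the archive
		"/backup/data/app.db",    // no match: included
		"/backup/logs/notes.txt", // no match: "logs" in the dir name is not enough
	} {
		fmt.Printf("%-25s excluded=%v\n", p, re.MatchString(p))
	}
}
```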
@@ -207,9 +214,9 @@ You can populate below template according to your requirements and use it as you
 # AWS_ENDPOINT_PROTO="https"

 # Setting this variable to `true` will disable verification of
-# SSL certificates. You shouldn't use this unless you use self-signed
-# certificates for your remote storage backend. This can only be used
-# when AWS_ENDPOINT_PROTO is set to `https`.
+# SSL certificates for AWS_ENDPOINT. You shouldn't use this unless you use
+# self-signed certificates for your remote storage backend. This can only be
+# used when AWS_ENDPOINT_PROTO is set to `https`.

 # AWS_ENDPOINT_INSECURE="true"

@@ -232,6 +239,12 @@ You can populate below template according to your requirements and use it as you
 # WEBDAV_PASSWORD="password"

+# Setting this variable to `true` will disable verification of
+# SSL certificates for WEBDAV_URL. You shouldn't use this unless you use
+# self-signed certificates for your remote storage backend.
+
+# WEBDAV_URL_INSECURE="true"
+
 # In addition to storing backups remotely, you can also keep local copies.
 # Pass a container-local path to store your backups if needed. You also need to
 # mount a local folder or Docker volume into that location (`/archive`
@@ -733,6 +746,39 @@ The exact order of schedules that use the same cron expression is not specified.
 In case you need your schedules to overlap, you need to create a dedicated container for each schedule instead.
 When changing the configuration, you currently need to manually restart the container for the changes to take effect.

+### Define different retention schedules
+
+If you want to manage backup retention on different schedules, the most straightforward approach is to define a dedicated configuration per retention rule, each using a different prefix in the `BACKUP_FILENAME` parameter, and then run them on different cron schedules.
+
+For example, if you wanted to keep daily backups for 7 days, weekly backups for a month, and retain monthly backups forever, you could create three configuration files and mount them into `/etc/dockervolumebackup.d`:
+
+```ini
+# 01daily.conf
+BACKUP_FILENAME="daily-backup-%Y-%m-%dT%H-%M-%S.tar.gz"
+# run every day at 2am
+BACKUP_CRON_EXPRESSION="0 2 * * *"
+BACKUP_PRUNING_PREFIX="daily-backup-"
+BACKUP_RETENTION_DAYS="7"
+```
+
+```ini
+# 02weekly.conf
+BACKUP_FILENAME="weekly-backup-%Y-%m-%dT%H-%M-%S.tar.gz"
+# run every Monday at 3am
+BACKUP_CRON_EXPRESSION="0 3 * * 1"
+BACKUP_PRUNING_PREFIX="weekly-backup-"
+BACKUP_RETENTION_DAYS="31"
+```
+
+```ini
+# 03monthly.conf
+BACKUP_FILENAME="monthly-backup-%Y-%m-%dT%H-%M-%S.tar.gz"
+# run on the 1st of every month at 4am
+BACKUP_CRON_EXPRESSION="0 4 1 * *"
+```
+
+Note that while it's possible to define colliding cron schedules for each of these configurations, you might need to adjust the value for `LOCK_TIMEOUT` in case your backups are large and might take longer than an hour.
+
 ## Recipes

 This section lists configuration for some real-world use cases that you can mix and match according to your needs.
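One detail worth calling out in the scheme above: pruning only considers files whose names start with the configured `BACKUP_PRUNING_PREFIX`, which is what keeps the three retention rules from deleting each other's archives; the monthly configuration omits the pruning options entirely, so its backups are never pruned.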
@@ -11,14 +11,13 @@ import (
 	"compress/gzip"
 	"fmt"
 	"io"
-	"io/fs"
 	"os"
 	"path"
 	"path/filepath"
 	"strings"
 )

-func createArchive(inputFilePath, outputFilePath string) error {
+func createArchive(files []string, inputFilePath, outputFilePath string) error {
 	inputFilePath = stripTrailingSlashes(inputFilePath)
 	inputFilePath, outputFilePath, err := makeAbsolute(inputFilePath, outputFilePath)
 	if err != nil {
@@ -28,7 +27,7 @@ func createArchive(inputFilePath, outputFilePath string) error {
 		return fmt.Errorf("createArchive: error creating output file path: %w", err)
 	}

-	if err := compress(inputFilePath, outputFilePath, filepath.Dir(inputFilePath)); err != nil {
+	if err := compress(files, outputFilePath, filepath.Dir(inputFilePath)); err != nil {
 		return fmt.Errorf("createArchive: error creating archive: %w", err)
 	}

@@ -52,7 +51,7 @@ func makeAbsolute(inputFilePath, outputFilePath string) (string, string, error)
 	return inputFilePath, outputFilePath, err
 }

-func compress(inPath, outFilePath, subPath string) error {
+func compress(paths []string, outFilePath, subPath string) error {
 	file, err := os.Create(outFilePath)
 	if err != nil {
 		return fmt.Errorf("compress: error creating out file: %w", err)
@@ -62,14 +61,6 @@ func compress(inPath, outFilePath, subPath string) error {
 	gzipWriter := gzip.NewWriter(file)
 	tarWriter := tar.NewWriter(gzipWriter)

-	var paths []string
-	if err := filepath.WalkDir(inPath, func(path string, di fs.DirEntry, err error) error {
-		paths = append(paths, path)
-		return err
-	}); err != nil {
-		return fmt.Errorf("compress: error walking filesystem tree: %w", err)
-	}
-
 	for _, p := range paths {
 		if err := writeTarGz(p, tarWriter, prefix); err != nil {
 			return fmt.Errorf("compress error writing %s to archive: %w", p, err)
@@ -3,7 +3,11 @@

 package main

-import "time"
+import (
+	"fmt"
+	"regexp"
+	"time"
+)

 // Config holds all configuration values that are expected to be set
 // by users.
@@ -18,6 +22,7 @@ type Config struct {
 	BackupPruningPrefix      string        `split_words:"true"`
 	BackupStopContainerLabel string        `split_words:"true" default:"true"`
 	BackupFromSnapshot       bool          `split_words:"true"`
+	BackupExcludeRegexp      RegexpDecoder `split_words:"true"`
 	AwsS3BucketName          string        `split_words:"true"`
 	AwsS3Path                string        `split_words:"true"`
 	AwsEndpoint              string        `split_words:"true" default:"s3.amazonaws.com"`
@@ -36,6 +41,7 @@ type Config struct {
 	EmailSMTPUsername        string        `envconfig:"EMAIL_SMTP_USERNAME"`
 	EmailSMTPPassword        string        `envconfig:"EMAIL_SMTP_PASSWORD"`
 	WebdavUrl                string        `split_words:"true"`
+	WebdavUrlInsecure        bool          `split_words:"true"`
 	WebdavPath               string        `split_words:"true" default:"/"`
 	WebdavUsername           string        `split_words:"true"`
 	WebdavPassword           string        `split_words:"true"`
@@ -43,3 +49,19 @@ type Config struct {
 	ExecForwardOutput        bool          `split_words:"true"`
 	LockTimeout              time.Duration `split_words:"true" default:"60m"`
 }
+
+type RegexpDecoder struct {
+	Re *regexp.Regexp
+}
+
+func (r *RegexpDecoder) Decode(v string) error {
+	if v == "" {
+		return nil
+	}
+	re, err := regexp.Compile(v)
+	if err != nil {
+		return fmt.Errorf("config: error compiling given regexp `%s`: %w", v, err)
+	}
+	*r = RegexpDecoder{Re: re}
+	return nil
+}
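`Decode` is never called explicitly in the diff because the envconfig library used for parsing here (the `split_words` struct tags above are its) invokes it automatically for any field type implementing its `Decoder` interface, so an invalid pattern fails at startup rather than mid-backup. A minimal, self-contained sketch of that mechanism, assuming `github.com/kelseyhightower/envconfig`:

```go
package main

import (
	"fmt"
	"os"
	"regexp"

	"github.com/kelseyhightower/envconfig"
)

// RegexpDecoder mirrors the type added above: envconfig sees that
// *RegexpDecoder implements its Decoder interface and hands it the
// raw string value of the environment variable.
type RegexpDecoder struct {
	Re *regexp.Regexp
}

func (r *RegexpDecoder) Decode(v string) error {
	if v == "" {
		return nil
	}
	re, err := regexp.Compile(v)
	if err != nil {
		return fmt.Errorf("error compiling given regexp `%s`: %w", v, err)
	}
	*r = RegexpDecoder{Re: re}
	return nil
}

type config struct {
	BackupExcludeRegexp RegexpDecoder `split_words:"true"`
}

func main() {
	os.Setenv("BACKUP_EXCLUDE_REGEXP", `\.log$`)
	var c config
	// A malformed pattern would surface here, at configuration time.
	if err := envconfig.Process("", &c); err != nil {
		panic(err)
	}
	fmt.Println(c.BackupExcludeRegexp.Re.MatchString("/backup/app.log")) // true
}
```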
@@ -9,6 +9,7 @@ import (
 	"fmt"
 	"io"
 	"io/fs"
+	"net/http"
 	"os"
 	"path"
 	"path/filepath"
@@ -146,6 +147,15 @@ func newScript() (*script, error) {
 	} else {
 		webdavClient := gowebdav.NewClient(s.c.WebdavUrl, s.c.WebdavUsername, s.c.WebdavPassword)
 		s.webdavClient = webdavClient
+		if s.c.WebdavUrlInsecure {
+			defaultTransport, ok := http.DefaultTransport.(*http.Transport)
+			if !ok {
+				return nil, errors.New("newScript: unexpected error when asserting type for http.DefaultTransport")
+			}
+			webdavTransport := defaultTransport.Clone()
+			webdavTransport.TLSClientConfig.InsecureSkipVerify = s.c.WebdavUrlInsecure
+			s.webdavClient.SetTransport(webdavTransport)
+		}
 	}
 }

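For readers unfamiliar with the pattern in this hunk: cloning `http.DefaultTransport` yields a transport with sensible defaults that can be mutated without affecting every other HTTP client in the process. A standalone sketch of the same idea; the nil-guard is an extra precaution not present in the diff, for transports whose TLS config has not been initialized yet:

```go
package main

import (
	"crypto/tls"
	"net/http"
)

// insecureTransport returns a copy of the default transport that skips
// TLS certificate verification. Only the copy is modified; the shared
// http.DefaultTransport stays untouched.
func insecureTransport() *http.Transport {
	base, ok := http.DefaultTransport.(*http.Transport)
	if !ok {
		panic("http.DefaultTransport is not an *http.Transport")
	}
	t := base.Clone()
	if t.TLSClientConfig == nil {
		t.TLSClientConfig = &tls.Config{}
	}
	t.TLSClientConfig.InsecureSkipVerify = true // accept self-signed certs
	return t
}

func main() {
	client := &http.Client{Transport: insecureTransport()}
	_ = client
}
```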
@@ -388,7 +398,28 @@ func (s *script) takeBackup() error {
 		s.logger.Infof("Removed tar file `%s`.", tarFile)
 		return nil
 	})
-	if err := createArchive(backupSources, tarFile); err != nil {
+
+	backupPath, err := filepath.Abs(stripTrailingSlashes(backupSources))
+	if err != nil {
+		return fmt.Errorf("takeBackup: error getting absolute path: %w", err)
+	}
+
+	var filesEligibleForBackup []string
+	if err := filepath.WalkDir(backupPath, func(path string, di fs.DirEntry, err error) error {
+		if err != nil {
+			return err
+		}
+
+		if s.c.BackupExcludeRegexp.Re != nil && s.c.BackupExcludeRegexp.Re.MatchString(path) {
+			return nil
+		}
+		filesEligibleForBackup = append(filesEligibleForBackup, path)
+		return nil
+	}); err != nil {
+		return fmt.Errorf("compress: error walking filesystem tree: %w", err)
+	}
+
+	if err := createArchive(filesEligibleForBackup, backupSources, tarFile); err != nil {
 		return fmt.Errorf("takeBackup: error compressing backup folder: %w", err)
 	}

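The net effect of this hunk together with the `compress` change above: walking the source tree moved out of the archiving code and into the caller, so the exclusion regexp is applied before anything is written to the archive. A condensed sketch of that flow (the function name here is illustrative, not the project's API):

```go
package main

import (
	"fmt"
	"io/fs"
	"path/filepath"
	"regexp"
)

// collectEligibleFiles walks root and drops every path matching exclude,
// returning a ready-made list for the archiving step to consume.
func collectEligibleFiles(root string, exclude *regexp.Regexp) ([]string, error) {
	var files []string
	err := filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
		if err != nil {
			return err
		}
		if exclude != nil && exclude.MatchString(path) {
			return nil // excluded from the archive
		}
		files = append(files, path)
		return nil
	})
	return files, err
}

func main() {
	files, err := collectEligibleFiles(".", regexp.MustCompile(`\.log$`))
	if err != nil {
		panic(err)
	}
	fmt.Println(len(files), "files eligible for backup")
}
```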
@@ -43,6 +43,7 @@ services:
       BACKUP_PRUNING_PREFIX: test
       GPG_PASSPHRASE: 1234secret
       WEBDAV_URL: http://webdav/
+      WEBDAV_URL_INSECURE: 'true'
       WEBDAV_PATH: /my/new/path/
       WEBDAV_USERNAME: test
       WEBDAV_PASSWORD: test
test/ignore/.gitignore (new file, 1 line)

@@ -0,0 +1 @@
+local
test/ignore/docker-compose.yml (new file, 15 lines)

@@ -0,0 +1,15 @@
+version: '3.8'
+
+services:
+  backup:
+    image: offen/docker-volume-backup:${TEST_VERSION:-canary}
+    deploy:
+      restart_policy:
+        condition: on-failure
+    environment:
+      BACKUP_FILENAME: test.tar.gz
+      BACKUP_CRON_EXPRESSION: 0 0 5 31 2 ?
+      BACKUP_EXCLUDE_REGEXP: '\.(me|you)$$'
+    volumes:
+      - ./local:/archive
+      - ./sources:/backup/data:ro
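A note for anyone adapting the compose file above: Compose treats `$$` as an escaped literal `$`, so the value the container actually receives is `\.(me|you)$`, a regexp anchored at the end of the path.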
test/ignore/run.sh (new file, 27 lines)

@@ -0,0 +1,27 @@
+#!/bin/sh
+
+set -e
+
+cd $(dirname $0)
+mkdir -p local
+
+docker-compose up -d
+sleep 5
+docker-compose exec backup backup
+
+docker-compose down --volumes
+
+out=$(mktemp -d)
+sudo tar --same-owner -xvf ./local/test.tar.gz -C "$out"
+
+if [ ! -f "$out/backup/data/me.txt" ]; then
+  echo "[TEST:FAIL] Expected file was not found."
+  exit 1
+fi
+echo "[TEST:PASS] Expected file was found."
+
+if [ -f "$out/backup/data/skip.me" ]; then
+  echo "[TEST:FAIL] Ignored file was found."
+  exit 1
+fi
+echo "[TEST:PASS] Ignored file was not found."
test/ignore/sources/me.txt (new empty file)

test/ignore/sources/skip.me (new empty file)