Mirror of https://github.com/offen/docker-volume-backup.git, synced 2025-12-05 17:18:02 +01:00

Compare commits
7 Commits

5ea9a7ce15
bcffe0bc25
144e65ce6f
07afa53cd3
9a07f5486b
d4c5f65f31
5b8a484d80
.github/workflows/test.yml (vendored), 10 lines changed

@@ -11,10 +11,20 @@ jobs:
     runs-on: ubuntu-22.04
     steps:
       - uses: actions/checkout@v3
+
+      - name: Set up Docker Buildx
+        uses: docker/setup-buildx-action@v2
+
       - name: Build Docker Image
+        env:
+          DOCKER_BUILDKIT: '1'
         run: docker build . -t offen/docker-volume-backup:test
+
       - name: Run Tests
         working-directory: ./test
         run: |
+          # Stop the buildx container so the tests can make assertions
+          # about the number of running containers
+          docker rm -f $(docker ps -aq)
           export GPG_TTY=$(tty)
           ./test.sh test

@@ -16,8 +16,6 @@ WORKDIR /root
 RUN apk add --no-cache ca-certificates

 COPY --from=builder /app/cmd/backup/backup /usr/bin/backup
-COPY ./entrypoint.sh /root/
-RUN chmod +x entrypoint.sh
+COPY --chmod=755 ./entrypoint.sh /root/

 ENTRYPOINT ["/root/entrypoint.sh"]

README.md, 31 lines changed

@@ -260,6 +260,15 @@ You can populate below template according to your requirements and use it as you

 # AWS_STORAGE_CLASS="GLACIER"

+# Setting this variable will change the default S3 part size for the copy step.
+# This is useful when you want to upload large files.
+# N.B.: When using Scaleway as the S3 provider, be aware that the maximum number
+# of parts is 1,000, while MinIO uses a hard-coded limit of 10,000. As a
+# workaround, set a higher part size.
+# Defaults to "16" (MB, MinIO's default) if unset; the value is an integer number of MB.
+
+# AWS_PART_SIZE=16
+
 # You can also backup files to any WebDAV server:

 # The URL of the remote WebDAV server
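To make the part-count limits mentioned above concrete, here is a small illustrative Go calculation. The archive size is a made-up example; the 1,000/10,000 part limits and the 16 MB default are the figures quoted in the README excerpt, not additional facts about either provider.

```go
package main

import "fmt"

// minPartSizeMB returns the smallest whole-MB part size that keeps an archive
// of archiveBytes within a provider's maximum number of multipart parts.
func minPartSizeMB(archiveBytes, maxParts int64) int64 {
	const mb = int64(1024 * 1024)
	partSize := (archiveBytes + maxParts - 1) / maxParts // ceil division
	return (partSize + mb - 1) / mb                      // round up to whole MB
}

func main() {
	archive := int64(100) << 30 // hypothetical 100 GiB backup archive

	// Scaleway (1,000 parts): needs roughly 103 MB parts, so the 16 MB default is too small.
	fmt.Println(minPartSizeMB(archive, 1000))
	// MinIO (10,000 parts): roughly 11 MB is enough, so the 16 MB default already fits.
	fmt.Println(minPartSizeMB(archive, 10000))
}
```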

@@ -424,7 +433,7 @@ You can populate below template according to your requirements and use it as you

 # Notifications (email, Slack, etc.) can be sent out when a backup run finishes.
 # Configuration is provided as a comma-separated list of URLs as consumed
-# by `shoutrrr`: https://containrrr.dev/shoutrrr/v0.5/services/overview/
+# by `shoutrrr`: https://containrrr.dev/shoutrrr/0.7/services/overview/
 # The content of such notifications can be customized. Dedicated documentation
 # on how to do this can be found in the README. When providing multiple URLs or
 # an URL that contains a comma, the values can be URL encoded to avoid ambiguities.

@@ -568,7 +577,7 @@ services:
 Notification backends other than email are also supported.
 Refer to the documentation of [shoutrrr][shoutrrr-docs] to find out about options and configuration.

-[shoutrrr-docs]: https://containrrr.dev/shoutrrr/v0.5/services/overview/
+[shoutrrr-docs]: https://containrrr.dev/shoutrrr/0.7/services/overview/

 ### Customize notifications

@@ -657,7 +666,23 @@ volumes:
 The backup procedure is guaranteed to wait for all `pre` or `post` commands to finish before proceeding.
 However there are no guarantees about the order in which they are run, which could also happen concurrently.

-By default the backup command is executed by the root user. It is possible to specify a custom user in container labels with the format `docker-volume-backup.[step]-[pre|post]-[user]`. The option will allow you to run a specific step command by specified user. Make sure the user exists and present in `passwd` inside the target container.
+By default the backup command is executed by the user provided by the container's image.
+It is possible to specify a custom user that is used to run commands in dedicated labels with the format `docker-volume-backup.[step]-[pre|post].user`:
+
+```yml
+version: '3'
+
+services:
+  gitea:
+    image: gitea/gitea
+    volumes:
+      - backup_data:/tmp
+    labels:
+      - docker-volume-backup.archive-pre.user=git
+      - docker-volume-backup.archive-pre=/bin/bash -c 'cd /tmp; /usr/local/bin/gitea dump -c /data/gitea/conf/app.ini -R -f dump.zip'
+```
+
+Make sure the user exists and is present in `passwd` inside the target container.

 ### Encrypting your backup using GPG


@@ -28,6 +28,7 @@ type Config struct {
 	AwsSecretAccessKey     string `split_words:"true"`
 	AwsSecretAccessKeyFile string `split_words:"true"`
 	AwsIamRoleEndpoint     string `split_words:"true"`
+	AwsPartSize            int64  `split_words:"true"`
 	BackupSources          string `split_words:"true" default:"/backup"`
 	BackupFilename         string `split_words:"true" default:"backup-%Y-%m-%dT%H-%M-%S.tar.gz"`
 	BackupFilenameExpand   bool   `split_words:"true"`
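The new `AwsPartSize` field follows the same `split_words` tag convention as the surrounding fields. Below is a minimal sketch of how that tag maps the `AWS_PART_SIZE` environment variable onto the field, assuming the kelseyhightower/envconfig conventions these struct tags suggest; the trimmed-down struct and the value `64` are illustrative, not the project's actual code.

```go
package main

import (
	"fmt"
	"os"

	"github.com/kelseyhightower/envconfig"
)

// Illustrative stand-in for the Config struct above; only two fields are kept.
type Config struct {
	AwsPartSize   int64  `split_words:"true"`
	BackupSources string `split_words:"true" default:"/backup"`
}

func main() {
	// What a user would set in their environment or compose file.
	os.Setenv("AWS_PART_SIZE", "64")

	var c Config
	if err := envconfig.Process("", &c); err != nil {
		panic(err)
	}
	// split_words turns AwsPartSize into AWS_PART_SIZE; BackupSources falls
	// back to its default because BACKUP_SOURCES is unset here.
	fmt.Println(c.AwsPartSize, c.BackupSources) // 64 /backup
}
```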

@@ -142,6 +142,7 @@ func newScript() (*script, error) {
 			BucketName:   s.c.AwsS3BucketName,
 			StorageClass: s.c.AwsStorageClass,
 			CACert:       s.c.AwsEndpointCACert.Cert,
+			PartSize:     s.c.AwsPartSize,
 		}
 		if s3Backend, err := s3.NewStorageBackend(s3Config, logFunc); err != nil {
 			return nil, err

go.mod, 19 lines changed

@@ -5,7 +5,7 @@ go 1.19
 require (
 	github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.2.0
 	github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v0.6.1
-	github.com/containrrr/shoutrrr v0.6.0
+	github.com/containrrr/shoutrrr v0.7.1
 	github.com/cosiner/argv v0.1.0
 	github.com/docker/docker v20.10.24+incompatible
 	github.com/gofrs/flock v0.8.1
@@ -17,7 +17,7 @@ require (
 	github.com/sirupsen/logrus v1.9.0
 	github.com/studio-b12/gowebdav v0.0.0-20220128162035-c7b1ff8a5e62
 	golang.org/x/crypto v0.3.0
-	golang.org/x/sync v0.0.0-20220601150217-0de741cfad7f
+	golang.org/x/sync v0.1.0
 )

 require (
@@ -25,24 +25,21 @@ require (
 	github.com/Azure/azure-sdk-for-go/sdk/internal v1.0.1 // indirect
 	github.com/AzureAD/microsoft-authentication-library-for-go v0.7.0 // indirect
 	github.com/Microsoft/go-winio v0.5.2 // indirect
-	github.com/docker/distribution v2.8.0+incompatible // indirect
+	github.com/docker/distribution v2.8.2+incompatible // indirect
 	github.com/docker/go-connections v0.4.0 // indirect
 	github.com/docker/go-units v0.4.0 // indirect
 	github.com/dustin/go-humanize v1.0.0 // indirect
-	github.com/fatih/color v1.10.0 // indirect
+	github.com/fatih/color v1.13.0 // indirect
 	github.com/gogo/protobuf v1.3.2 // indirect
 	github.com/golang-jwt/jwt/v4 v4.4.2 // indirect
-	github.com/golang/protobuf v1.5.2 // indirect
-	github.com/google/go-cmp v0.5.6 // indirect
 	github.com/google/uuid v1.3.0 // indirect
 	github.com/json-iterator/go v1.1.12 // indirect
 	github.com/klauspost/compress v1.15.12 // indirect
 	github.com/klauspost/cpuid/v2 v2.2.1 // indirect
 	github.com/kr/fs v0.1.0 // indirect
-	github.com/kr/text v0.2.0 // indirect
 	github.com/kylelemons/godebug v1.1.0 // indirect
-	github.com/mattn/go-colorable v0.1.8 // indirect
-	github.com/mattn/go-isatty v0.0.12 // indirect
+	github.com/mattn/go-colorable v0.1.13 // indirect
+	github.com/mattn/go-isatty v0.0.16 // indirect
 	github.com/minio/md5-simd v1.1.2 // indirect
 	github.com/minio/sha256-simd v1.0.0 // indirect
 	github.com/moby/term v0.0.0-20200312100748-672ec06f55cd // indirect
@@ -50,7 +47,6 @@ require (
 	github.com/modern-go/reflect2 v1.0.2 // indirect
 	github.com/morikuni/aec v1.0.0 // indirect
 	github.com/niemeyer/pretty v0.0.0-20200227124842-a10e7caefd8e // indirect
-	github.com/onsi/gomega v1.10.3 // indirect
 	github.com/opencontainers/go-digest v1.0.0 // indirect
 	github.com/opencontainers/image-spec v1.0.3-0.20211202183452-c5a74bcca799 // indirect
 	github.com/pkg/browser v0.0.0-20210115035449-ce105d075bb4 // indirect
@@ -59,10 +55,7 @@ require (
 	golang.org/x/net v0.7.0 // indirect
 	golang.org/x/sys v0.7.0 // indirect
 	golang.org/x/text v0.7.0 // indirect
-	golang.org/x/time v0.0.0-20210723032227-1f47c861a9ac // indirect
-	google.golang.org/protobuf v1.28.0 // indirect
 	gopkg.in/check.v1 v1.0.0-20200227125254-8fa46927fb4f // indirect
 	gopkg.in/ini.v1 v1.67.0 // indirect
-	gopkg.in/yaml.v3 v3.0.1 // indirect
 	gotest.tools/v3 v3.0.3 // indirect
 )

@@ -8,6 +8,7 @@ import (
 	"crypto/x509"
 	"errors"
 	"fmt"
+	"os"
 	"path"
 	"path/filepath"
 	"time"

@@ -22,6 +23,7 @@ type s3Storage struct {
 	client       *minio.Client
 	bucket       string
 	storageClass string
+	partSize     int64
 }

 // Config contains values that define the configuration of a S3 backend.

@@ -35,6 +37,7 @@ type Config struct {
 	RemotePath   string
 	BucketName   string
 	StorageClass string
+	PartSize     int64
 	CACert       *x509.Certificate
 }

@@ -89,6 +92,7 @@ func NewStorageBackend(opts Config, logFunc storage.Log) (storage.Backend, error
 		client:       mc,
 		bucket:       opts.BucketName,
 		storageClass: opts.StorageClass,
+		partSize:     opts.PartSize,
 	}, nil
 }

@@ -100,16 +104,32 @@ func (v *s3Storage) Name() string {
 // Copy copies the given file to the S3/Minio storage backend.
 func (b *s3Storage) Copy(file string) error {
 	_, name := path.Split(file)
-	if _, err := b.client.FPutObject(context.Background(), b.bucket, filepath.Join(b.DestinationPath, name), file, minio.PutObjectOptions{
+	putObjectOptions := minio.PutObjectOptions{
 		ContentType:  "application/tar+gzip",
 		StorageClass: b.storageClass,
-	}); err != nil {
+	}
+
+	if b.partSize > 0 {
+		srcFileInfo, err := os.Stat(file)
+		if err != nil {
+			return fmt.Errorf("(*s3Storage).Copy: error reading the local file: %w", err)
+		}
+
+		_, partSize, _, err := minio.OptimalPartInfo(srcFileInfo.Size(), uint64(b.partSize*1024*1024))
+		if err != nil {
+			return fmt.Errorf("(*s3Storage).Copy: error computing the optimal s3 part size: %w", err)
+		}
+
+		putObjectOptions.PartSize = uint64(partSize)
+	}
+
+	if _, err := b.client.FPutObject(context.Background(), b.bucket, filepath.Join(b.DestinationPath, name), file, putObjectOptions); err != nil {
 		if errResp := minio.ToErrorResponse(err); errResp.Message != "" {
 			return fmt.Errorf("(*s3Storage).Copy: error uploading backup to remote storage: [Message]: '%s', [Code]: %s, [StatusCode]: %d", errResp.Message, errResp.Code, errResp.StatusCode)
 		}
 		return fmt.Errorf("(*s3Storage).Copy: error uploading backup to remote storage: %w", err)
 	}

 	b.Log(storage.LogLevelInfo, b.Name(), "Uploaded a copy of backup `%s` to bucket `%s`.", file, b.bucket)

 	return nil
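For context, here is a compact sketch of the part-size computation in isolation, using the same minio-go calls the new Copy code relies on. The archive size and configured part size are made-up illustrative values, and the expected output in the comments is plain arithmetic rather than a statement about the library's internals.

```go
package main

import (
	"fmt"

	"github.com/minio/minio-go/v7"
)

func main() {
	// Hypothetical inputs: a 2 GiB backup archive and AWS_PART_SIZE=64.
	archiveSize := int64(2) << 30
	configuredPartSize := uint64(64) * 1024 * 1024

	// Same call shape as in Copy above: let minio-go pick the part layout it
	// would use for this object size and configured part size.
	totalParts, partSize, lastPartSize, err := minio.OptimalPartInfo(archiveSize, configuredPartSize)
	if err != nil {
		panic(err)
	}
	// 2 GiB divided into 64 MiB chunks works out to 32 full parts.
	fmt.Println(totalParts, partSize, lastPartSize)

	// The resulting part size is then handed to FPutObject via the options
	// struct, exactly as the diff does.
	opts := minio.PutObjectOptions{PartSize: uint64(partSize)}
	_ = opts
}
```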

@@ -17,7 +17,7 @@ sleep 5

 expect_running_containers "3"

-docker-compose run --rm az_cli \
+docker compose run --rm az_cli \
 	az storage blob download -f /dump/test.tar.gz -c test-container -n path/to/backup/test.tar.gz
 tar -xvf ./local/test.tar.gz -C /tmp && test -f /tmp/backup/app_data/offen.db

@@ -26,7 +26,7 @@ pass "Found relevant files in untared remote backups."
 # The second part of this test checks if backups get deleted when the retention
 # is set to 0 days (which it should not as it would mean all backups get deleted)
 # TODO: find out if we can test actual deletion without having to wait for a day
-BACKUP_RETENTION_DAYS="0" docker-compose up -d
+BACKUP_RETENTION_DAYS="0" docker compose up -d
 sleep 5

 docker compose exec backup backup