Compare commits

...

8 Commits

Author SHA1 Message Date
Frederik Ring
62bd2f4a5a Update base docker image to alpine 3.15 (#53) 2022-01-27 14:40:56 +01:00
Frederik Ring
6fe629ce87 Allow path to be set for bucket storage (#52) 2022-01-25 21:16:16 +01:00
Frederik Ring
1db896f7cf Tweak README, improve client naming, tidy go.mod file 2022-01-22 13:35:13 +01:00
Kaerbr
6ded00aa06 Support Nextcloud / WebDAV (#48)
* add studio-b12/gowebdav to be able to upload to webdav server

* make sure all env variables are present for webdav upload

* implement file upload to WebDav server

directory defaults to the base directory

* docs: add the new feature to the documentation

* if no WebDAV env variables are given, throw no error

* docs: use more elegant english :D

Co-authored-by: Frederik Ring <frederik.ring@gmail.com>

* docs: use official spelling of "WebDAV"

* perf: golang likes to return early instead of having an else block

* use WEBDAV_PATH instead of WEBDAV_DIRECTORY

* use split_words for more convenience

like shown here: https://github.com/kelseyhightower/envconfig#struct-tag-support

* simplify

* feat: add pruning of files in WebDAV remote

Based on / Inspired by the minio/S3 implementation of pruning remote files.

* remove logging left over from development

* test: first try implementing tests

Sadly I have to use the remote pipeline -- local won't work for me.

* test: adapt used volume names

* test: specify image only once!

* test: minio AND webdav data should be present

* test: backups on WebDAV remote are lying in the root directory

* test: the webdav server stores data in /var/lib/dav

* trying with data subfolder

* test: 1 container was added so the number rose from 3 to 4

* WebDAV subfolder is "data", not "backup"

* fix: password AND username must be defined

not password OR username

* improve logging

* feat: if the given path on the server isn't present it will be created

* test: add creation of new folder for webdav to tests

Co-authored-by: Frederik Ring <frederik.ring@gmail.com>
2022-01-22 13:29:21 +01:00
Hendrik Niefeld
6b79f1914b Update README.md 2022-01-06 16:09:33 +01:00
Hendrik Niefeld
40ff2e00c9 Update README.md 2022-01-06 16:07:00 +01:00
Frederik Ring
760cc9cebc Add issue template 2021-12-29 13:10:43 +01:00
Frederik Ring
1f9582df51 Fix handling of empty directories (#44)
* Add test checking whether empty directories are included in backups

* Update targz library to include fix
2021-12-29 10:10:12 +01:00
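
Commit 6ded00aa06 above describes uploading backups to a WebDAV server via the studio-b12/gowebdav client. As a rough orientation before the full diff below, here is a minimal, self-contained sketch of such an upload; the URL, credentials, paths and file names are placeholders, not values taken from this repository:

```go
package main

import (
	"log"
	"os"
	"path/filepath"

	"github.com/studio-b12/gowebdav"
)

func main() {
	// Placeholder credentials; the real tool reads these from WEBDAV_* env vars.
	client := gowebdav.NewClient("https://webdav.example.com", "user", "password")

	// Read the local archive that should be uploaded.
	data, err := os.ReadFile("backup.tar.gz")
	if err != nil {
		log.Fatalf("reading archive: %v", err)
	}

	// Ensure the target directory exists, then write the archive into it.
	remoteDir := "/my/directory/"
	if err := client.MkdirAll(remoteDir, 0755); err != nil {
		log.Fatalf("creating remote directory: %v", err)
	}
	if err := client.Write(filepath.Join(remoteDir, "backup.tar.gz"), data, 0644); err != nil {
		log.Fatalf("uploading archive: %v", err)
	}
	log.Println("upload complete")
}
```

The actual implementation in the diff additionally prunes old remote files via `ReadDir` and `Remove`, and refuses to delete every remote copy at once.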
9 changed files with 187 additions and 27 deletions

.github/ISSUE_TEMPLATE.md
View File

@@ -0,0 +1,20 @@
* **I'm submitting a ...**
- [ ] bug report
- [ ] feature request
- [ ] support request
* **What is the current behavior?**
* **If the current behavior is a bug, please provide the configuration and steps to reproduce and if possible a minimal demo of the problem.**
* **What is the expected behavior?**
* **What is the motivation / use case for changing the behavior?**
* **Please tell us about your environment:**
- Image version:
- Docker version:
- docker-compose version:
* **Other information** (e.g. detailed explanation, stacktraces, related issues, suggestions how to fix, links for us to have context, eg. stackoverflow, etc)

View File

@@ -9,7 +9,7 @@ RUN go mod download
 COPY cmd/backup/main.go ./cmd/backup/main.go
 RUN go build -o backup cmd/backup/main.go
-FROM alpine:3.14
+FROM alpine:3.15
 WORKDIR /root

View File

@@ -1,9 +1,13 @@
+<a href="https://www.offen.dev/">
+<img src="https://offen.github.io/press-kit/offen-material/gfx-GitHub-Offen-logo.svg" alt="Offen logo" title="Offen" width="150px"/>
+</a>
 # docker-volume-backup
 Backup Docker volumes locally or to any S3 compatible storage.
 The [offen/docker-volume-backup](https://hub.docker.com/r/offen/docker-volume-backup) Docker image can be used as a lightweight (below 15MB) sidecar container to an existing Docker setup.
-It handles __recurring or one-off backups of Docker volumes__ to a __local directory__ or __any S3 compatible storage__ (or both), and __rotates away old backups__ if configured. It also supports __encrypting your backups using GPG__ and __sending notifications for failed backup runs__.
+It handles __recurring or one-off backups of Docker volumes__ to a __local directory__, __any S3 or WebDAV compatible storage (or any combination) and rotates away old backups__ if configured. It also supports __encrypting your backups using GPG__ and __sending notifications for failed backup runs__.
 <!-- MarkdownTOC -->
@@ -24,6 +28,7 @@ It handles __recurring or one-off backups of Docker volumes__ to a __local direc
 - [Recipes](#recipes)
 - [Backing up to AWS S3](#backing-up-to-aws-s3)
 - [Backing up to MinIO](#backing-up-to-minio)
+- [Backing up to WebDAV](#backing-up-to-webdav)
 - [Backing up locally](#backing-up-locally)
 - [Backing up to AWS S3 as well as locally](#backing-up-to-aws-s3-as-well-as-locally)
 - [Running on a custom cron schedule](#running-on-a-custom-cron-schedule)
@@ -130,7 +135,7 @@ You can populate below template according to your requirements and use it as you
 # Please note that you will need to escape the `$` when providing the value
 # in a docker-compose.yml file, i.e. using $$VAR instead of $VAR.
-# BACKUP_FILENAME_TEMPLATE="true"
+# BACKUP_FILENAME_EXPAND="true"
 # When storing local backups, a symlink to the latest backup can be created
 # in case a value is given for this key. This has no effect on remote backups.
@@ -151,6 +156,11 @@ You can populate below template according to your requirements and use it as you
 # AWS_S3_BUCKET_NAME="backup-bucket"
+# If you want to store the backup in a non-root location on your bucket
+# you can provide a path. The path must not contain a leading slash.
+# AWS_S3_PATH="my/backup/location"
 # Define credentials for authenticating against the backup storage and a bucket
 # name. Although all of these keys are `AWS`-prefixed, the setup can be used
 # with any S3 compatible storage.
@@ -185,6 +195,25 @@ You can populate below template according to your requirements and use it as you
 # AWS_ENDPOINT_INSECURE="true"
+# You can also backup files to any WebDAV server:
+# The URL of the remote WebDAV server
+# WEBDAV_URL="https://webdav.example.com"
+# The Directory to place the backups to on the WebDAV server.
+# If the path is not present on the server it will be created.
+# WEBDAV_PATH="/my/directory/"
+# The username for the WebDAV server
+# WEBDAV_USERNAME="user"
+# The password for the WebDAV server
+# WEBDAV_PASSWORD="password"
 # In addition to storing backups remotely, you can also keep local copies.
 # Pass a container-local path to store your backups if needed. You also need to
 # mount a local folder or Docker volume into that location (`/archive`
@@ -526,6 +555,28 @@ volumes:
   data:
 ```
+### Backing up to WebDAV
+
+```yml
+version: '3'
+
+services:
+  # ... define other services using the `data` volume here
+
+  backup:
+    image: offen/docker-volume-backup:latest
+    environment:
+      WEBDAV_URL: https://webdav.mydomain.me
+      WEBDAV_PATH: /my/directory/
+      WEBDAV_USERNAME: user
+      WEBDAV_PASSWORD: password
+    volumes:
+      - data:/backup/my-app-backup:ro
+      - /var/run/docker.sock:/var/run/docker.sock:ro
+
+volumes:
+  data:
+```
 ### Backing up locally
 ```yml
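
The WEBDAV_* keys documented in the template above are plain environment variables read through kelseyhightower/envconfig, like the other settings; the `split_words` struct tags visible in the Go changes further down map `WebdavUrl` to `WEBDAV_URL` and so on. A minimal sketch of that mapping, assuming only envconfig's documented behavior (the struct below is illustrative, not the tool's full configuration):

```go
package main

import (
	"fmt"
	"log"

	"github.com/kelseyhightower/envconfig"
)

// webdavConfig mirrors the WEBDAV_* variables from the template above.
// With split_words, the field WebdavUrl is populated from WEBDAV_URL, etc.
type webdavConfig struct {
	WebdavUrl      string `split_words:"true"`
	WebdavPath     string `split_words:"true" default:"/"`
	WebdavUsername string `split_words:"true"`
	WebdavPassword string `split_words:"true"`
}

func main() {
	var c webdavConfig
	// An empty prefix means the variable names are used as-is.
	if err := envconfig.Process("", &c); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("uploading to %s at %s\n", c.WebdavUrl, c.WebdavPath)
}
```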

View File

@@ -9,6 +9,7 @@ import (
 	"errors"
 	"fmt"
 	"io"
+	"io/fs"
 	"os"
 	"path"
 	"path/filepath"
@@ -31,6 +32,7 @@ import (
 	"github.com/minio/minio-go/v7/pkg/credentials"
 	"github.com/otiai10/copy"
 	"github.com/sirupsen/logrus"
+	"github.com/studio-b12/gowebdav"
 	"golang.org/x/crypto/openpgp"
 )
@@ -86,12 +88,13 @@ func main() {
 // script holds all the stateful information required to orchestrate a
 // single backup run.
 type script struct {
-	cli       *client.Client
-	mc        *minio.Client
-	logger    *logrus.Logger
-	sender    *router.ServiceRouter
-	hooks     []hook
-	hookLevel hookLevel
+	cli          *client.Client
+	minioClient  *minio.Client
+	webdavClient *gowebdav.Client
+	logger       *logrus.Logger
+	sender       *router.ServiceRouter
+	hooks        []hook
+	hookLevel    hookLevel

 	start time.Time
 	file  string
@@ -112,6 +115,7 @@ type config struct {
 	BackupStopContainerLabel string `split_words:"true" default:"true"`
 	BackupFromSnapshot       bool   `split_words:"true"`
 	AwsS3BucketName          string `split_words:"true"`
+	AwsS3Path                string `split_words:"true"`
 	AwsEndpoint              string `split_words:"true" default:"s3.amazonaws.com"`
 	AwsEndpointProto         string `split_words:"true" default:"https"`
 	AwsEndpointInsecure      bool   `split_words:"true"`
@@ -127,6 +131,10 @@ type config struct {
 	EmailSMTPPort     int    `envconfig:"EMAIL_SMTP_PORT" default:"587"`
 	EmailSMTPUsername string `envconfig:"EMAIL_SMTP_USERNAME"`
 	EmailSMTPPassword string `envconfig:"EMAIL_SMTP_PASSWORD"`
+	WebdavUrl         string `split_words:"true"`
+	WebdavPath        string `split_words:"true" default:"/"`
+	WebdavUsername    string `split_words:"true"`
+	WebdavPassword    string `split_words:"true"`
 }

 var msgBackupFailed = "backup run failed"
@@ -206,7 +214,16 @@ func newScript() (*script, error) {
 		if err != nil {
 			return nil, fmt.Errorf("newScript: error setting up minio client: %w", err)
 		}
-		s.mc = mc
+		s.minioClient = mc
 	}

+	if s.c.WebdavUrl != "" {
+		if s.c.WebdavUsername == "" || s.c.WebdavPassword == "" {
+			return nil, errors.New("newScript: WEBDAV_URL is defined, but no credentials were provided")
+		} else {
+			webdavClient := gowebdav.NewClient(s.c.WebdavUrl, s.c.WebdavUsername, s.c.WebdavPassword)
+			s.webdavClient = webdavClient
+		}
+	}
+
 	if s.c.EmailNotificationRecipient != "" {
@@ -508,8 +525,8 @@ func (s *script) encryptBackup() error {
 // as per the given configuration.
 func (s *script) copyBackup() error {
 	_, name := path.Split(s.file)
-	if s.mc != nil {
-		if _, err := s.mc.FPutObject(context.Background(), s.c.AwsS3BucketName, name, s.file, minio.PutObjectOptions{
+	if s.minioClient != nil {
+		if _, err := s.minioClient.FPutObject(context.Background(), s.c.AwsS3BucketName, filepath.Join(s.c.AwsS3Path, name), s.file, minio.PutObjectOptions{
 			ContentType: "application/tar+gzip",
 		}); err != nil {
 			return fmt.Errorf("copyBackup: error uploading backup to remote storage: %w", err)
@@ -517,6 +534,20 @@ func (s *script) copyBackup() error {
 		s.logger.Infof("Uploaded a copy of backup `%s` to bucket `%s`.", s.file, s.c.AwsS3BucketName)
 	}

+	if s.webdavClient != nil {
+		bytes, err := os.ReadFile(s.file)
+		if err != nil {
+			return fmt.Errorf("copyBackup: error reading the file to be uploaded: %w", err)
+		}
+		if err := s.webdavClient.MkdirAll(s.c.WebdavPath, 0644); err != nil {
+			return fmt.Errorf("copyBackup: error creating directory '%s' on WebDAV server: %w", s.c.WebdavPath, err)
+		}
+		if err := s.webdavClient.Write(filepath.Join(s.c.WebdavPath, name), bytes, 0644); err != nil {
+			return fmt.Errorf("copyBackup: error uploading the file to WebDAV server: %w", err)
+		}
+		s.logger.Infof("Uploaded a copy of backup `%s` to WebDAV-URL '%s' at path '%s'.", s.file, s.c.WebdavUrl, s.c.WebdavPath)
+	}
+
 	if _, err := os.Stat(s.c.BackupArchive); !os.IsNotExist(err) {
 		if err := copyFile(s.file, path.Join(s.c.BackupArchive, name)); err != nil {
 			return fmt.Errorf("copyBackup: error copying file to local archive: %w", err)
@@ -551,8 +582,9 @@ func (s *script) pruneOldBackups() error {
 	deadline := time.Now().AddDate(0, 0, -int(s.c.BackupRetentionDays))

-	if s.mc != nil {
-		candidates := s.mc.ListObjects(context.Background(), s.c.AwsS3BucketName, minio.ListObjectsOptions{
+	// Prune minio/S3 backups
+	if s.minioClient != nil {
+		candidates := s.minioClient.ListObjects(context.Background(), s.c.AwsS3BucketName, minio.ListObjectsOptions{
 			WithMetadata: true,
 			Prefix:       s.c.BackupPruningPrefix,
 		})
@@ -580,7 +612,7 @@ func (s *script) pruneOldBackups() error {
 			}
 			close(objectsCh)
 		}()
-		errChan := s.mc.RemoveObjects(context.Background(), s.c.AwsS3BucketName, objectsCh, minio.RemoveObjectsOptions{})
+		errChan := s.minioClient.RemoveObjects(context.Background(), s.c.AwsS3BucketName, objectsCh, minio.RemoveObjectsOptions{})
 		var removeErrors []error
 		for result := range errChan {
 			if result.Err != nil {
@@ -612,6 +644,38 @@ func (s *script) pruneOldBackups() error {
 		}
 	}

+	// Prune WebDAV backups
+	if s.webdavClient != nil {
+		candidates, err := s.webdavClient.ReadDir(s.c.WebdavPath)
+		if err != nil {
+			return fmt.Errorf("pruneOldBackups: error looking up candidates from remote storage: %w", err)
+		}
+		var matches []fs.FileInfo
+		var lenCandidates int
+		for _, candidate := range candidates {
+			lenCandidates++
+			if candidate.ModTime().Before(deadline) {
+				matches = append(matches, candidate)
+			}
+		}
+
+		if len(matches) != 0 && len(matches) != lenCandidates {
+			for _, match := range matches {
+				if err := s.webdavClient.Remove(filepath.Join(s.c.WebdavPath, match.Name())); err != nil {
+					return fmt.Errorf("pruneOldBackups: error removing a file from remote storage: %w", err)
+				}
+				s.logger.Infof("Pruned %s from WebDAV: %s", match.Name(), filepath.Join(s.c.WebdavUrl, s.c.WebdavPath))
+			}
+			s.logger.Infof("Pruned %d out of %d remote backup(s) as their age exceeded the configured retention period of %d days.", len(matches), lenCandidates, s.c.BackupRetentionDays)
+		} else if len(matches) != 0 && len(matches) == lenCandidates {
+			s.logger.Warnf("The current configuration would delete all %d remote backup copies.", len(matches))
+			s.logger.Warn("Refusing to do so, please check your configuration.")
+		} else {
+			s.logger.Infof("None of %d remote backup(s) were pruned.", lenCandidates)
+		}
+	}
+
+	// Prune local backups
 	if _, err := os.Stat(s.c.BackupArchive); !os.IsNotExist(err) {
 		globPattern := path.Join(
 			s.c.BackupArchive,

go.mod
View File

@@ -8,10 +8,11 @@ require (
 	github.com/gofrs/flock v0.8.1
 	github.com/kelseyhightower/envconfig v1.4.0
 	github.com/leekchan/timeutil v0.0.0-20150802142658-28917288c48d
-	github.com/m90/targz v0.0.0-20210904082215-2e9a4529a615
+	github.com/m90/targz v0.0.0-20211229090208-2f22c2d9278e
 	github.com/minio/minio-go/v7 v7.0.16
 	github.com/otiai10/copy v1.7.0
 	github.com/sirupsen/logrus v1.8.1
+	github.com/studio-b12/gowebdav v0.0.0-20211109083228-3f8721cd4b6f
 	golang.org/x/crypto v0.0.0-20210817164053-32db794688a5
 )

go.sum
View File

@@ -450,8 +450,8 @@ github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
 github.com/leekchan/timeutil v0.0.0-20150802142658-28917288c48d h1:2puqoOQwi3Ai1oznMOsFIbifm6kIfJaLLyYzWD4IzTs=
 github.com/leekchan/timeutil v0.0.0-20150802142658-28917288c48d/go.mod h1:hO90vCP2x3exaSH58BIAowSKvV+0OsY21TtzuFGHON4=
 github.com/leodido/go-urn v1.2.0/go.mod h1:+8+nEpDfqqsY+g338gtMEUOtuK+4dEMhiQEgxpxOKII=
-github.com/m90/targz v0.0.0-20210904082215-2e9a4529a615 h1:rn0LO2tQEgCDOct8qnbcslTUpAIWdVlWcGkjoumhf2U=
-github.com/m90/targz v0.0.0-20210904082215-2e9a4529a615/go.mod h1:YZK3bSO/oVlk9G+v00BxgzxW2Us4p/R4ysHOBjk0fJI=
+github.com/m90/targz v0.0.0-20211229090208-2f22c2d9278e h1:Kzm2zfxS40RUGD5UVtVtOo9RT5TtGoNJnmWORtCEaxM=
+github.com/m90/targz v0.0.0-20211229090208-2f22c2d9278e/go.mod h1:YZK3bSO/oVlk9G+v00BxgzxW2Us4p/R4ysHOBjk0fJI=
 github.com/magiconair/properties v1.8.0/go.mod h1:PppfXfuXeibc/6YijjN8zIbojt8czPbwD3XqdrwzmxQ=
 github.com/magiconair/properties v1.8.1/go.mod h1:PppfXfuXeibc/6YijjN8zIbojt8czPbwD3XqdrwzmxQ=
 github.com/mailru/easyjson v0.0.0-20190403194419-1ea4449da983/go.mod h1:C1wdFJiN94OJF2b5HbByQZoLdCWB1Yqtg26g4irojpc=
@@ -659,6 +659,8 @@ github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UV
 github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4=
 github.com/stretchr/testify v1.6.1 h1:hDPOHmpOpP40lSULcqw7IrRb/u7w6RpDC9399XyoNd0=
 github.com/stretchr/testify v1.6.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
+github.com/studio-b12/gowebdav v0.0.0-20211109083228-3f8721cd4b6f h1:L2NE7BXnSlSLoNYZ0lCwZDjdnYjCNYC71k9ClZUTFTs=
+github.com/studio-b12/gowebdav v0.0.0-20211109083228-3f8721cd4b6f/go.mod h1:bHA7t77X/QFExdeAnDzK6vKM34kEZAcE1OX4MfiwjkE=
 github.com/subosito/gotenv v1.2.0/go.mod h1:N0PQaV/YGNqwC0u51sEeR/aUtSLEXKX9iv69rRypqCw=
 github.com/syndtr/gocapability v0.0.0-20170704070218-db04d3cc01c8/go.mod h1:hkRG7XYTFWNJGYcbNJQlaLq0fg1yr4J4t/NcTQtrfww=
 github.com/syndtr/gocapability v0.0.0-20180916011248-d98352740cb2/go.mod h1:hkRG7XYTFWNJGYcbNJQlaLq0fg1yr4J4t/NcTQtrfww=

View File

@@ -7,6 +7,7 @@ cd $(dirname $0)
 docker network create test_network
 docker volume create backup_data
 docker volume create app_data
+docker volume create empty_data

 docker run -d \
   --name minio \
@@ -31,6 +32,7 @@ sleep 10
 docker run --rm \
   --network test_network \
   -v app_data:/backup/app_data \
+  -v empty_data:/backup/empty_data \
   -v /var/run/docker.sock:/var/run/docker.sock \
   --env AWS_ACCESS_KEY_ID=test \
   --env AWS_SECRET_ACCESS_KEY=GMusLtUmILge2by+z890kQ \
@@ -44,7 +46,7 @@ docker run --rm \
 docker run --rm -it \
   -v backup_data:/data alpine \
-  ash -c 'tar -xvf /data/backup/test.tar.gz && test -f /backup/app_data/offen.db'
+  ash -c 'tar -xvf /data/backup/test.tar.gz && test -f /backup/app_data/offen.db && test -d /backup/empty_data'

 echo "[TEST:PASS] Found relevant files in untared backup."

View File

@@ -10,13 +10,23 @@ services:
       MINIO_SECRET_KEY: GMusLtUmILge2by+z890kQ
     entrypoint: /bin/ash -c 'mkdir -p /data/backup && minio server /data'
     volumes:
-      - backup_data:/data
+      - minio_backup_data:/data
+
+  webdav:
+    image: bytemark/webdav:2.4
+    environment:
+      AUTH_TYPE: Digest
+      USERNAME: test
+      PASSWORD: test
+    volumes:
+      - webdav_backup_data:/var/lib/dav

   backup: &default_backup_service
     image: offen/docker-volume-backup:${TEST_VERSION}
     hostname: hostnametoken
     depends_on:
       - minio
+      - webdav
     restart: always
     environment:
       AWS_ACCESS_KEY_ID: test
@@ -32,6 +42,10 @@ services:
       BACKUP_PRUNING_LEEWAY: 5s
       BACKUP_PRUNING_PREFIX: test
       GPG_PASSPHRASE: 1234secret
+      WEBDAV_URL: http://webdav/
+      WEBDAV_PATH: /my/new/path/
+      WEBDAV_USERNAME: test
+      WEBDAV_PASSWORD: test
     volumes:
       - ./local:/archive
       - app_data:/backup/app_data:ro
@@ -45,5 +59,6 @@ services:
       - app_data:/var/opt/offen

 volumes:
-  backup_data:
+  minio_backup_data:
+  webdav_backup_data:
   app_data:

View File

@@ -13,10 +13,13 @@ docker-compose exec offen ln -s /var/opt/offen/offen.db /var/opt/offen/db.link
 docker-compose exec backup backup

 docker run --rm -it \
-  -v compose_backup_data:/data alpine \
-  ash -c 'apk add gnupg && echo 1234secret | gpg -d --pinentry-mode loopback --passphrase-fd 0 --yes /data/backup/test-hostnametoken.tar.gz.gpg > /tmp/test-hostnametoken.tar.gz && tar -xf /tmp/test-hostnametoken.tar.gz -C /tmp && test -f /tmp/backup/app_data/offen.db'
+  -v compose_minio_backup_data:/minio_data \
+  -v compose_webdav_backup_data:/webdav_data alpine \
+  ash -c 'apk add gnupg && \
+  echo 1234secret | gpg -d --pinentry-mode loopback --passphrase-fd 0 --yes /minio_data/backup/test-hostnametoken.tar.gz.gpg > /tmp/test-hostnametoken.tar.gz && tar -xf /tmp/test-hostnametoken.tar.gz -C /tmp && test -f /tmp/backup/app_data/offen.db && \
+  echo 1234secret | gpg -d --pinentry-mode loopback --passphrase-fd 0 --yes /webdav_data/data/my/new/path/test-hostnametoken.tar.gz.gpg > /tmp/test-hostnametoken.tar.gz && tar -xf /tmp/test-hostnametoken.tar.gz -C /tmp && test -f /tmp/backup/app_data/offen.db'

-echo "[TEST:PASS] Found relevant files in untared remote backup."
+echo "[TEST:PASS] Found relevant files in untared remote backups."

 test -L ./local/test-hostnametoken.latest.tar.gz.gpg
 echo 1234secret | gpg -d --yes --passphrase-fd 0 ./local/test-hostnametoken.tar.gz.gpg > ./local/decrypted.tar.gz
@@ -26,7 +29,7 @@ test -L /tmp/backup/app_data/db.link
 echo "[TEST:PASS] Found relevant files in untared local backup."

-if [ "$(docker-compose ps -q | wc -l)" != "3" ]; then
+if [ "$(docker-compose ps -q | wc -l)" != "4" ]; then
   echo "[TEST:FAIL] Expected all containers to be running post backup, instead seen:"
   docker-compose ps
   exit 1
@@ -43,8 +46,10 @@ sleep 5
 docker-compose exec backup backup

 docker run --rm -it \
-  -v compose_backup_data:/data alpine \
-  ash -c '[ $(find /data/backup/ -type f | wc -l) = "1" ]'
+  -v compose_minio_backup_data:/minio_data \
+  -v compose_webdav_backup_data:/webdav_data alpine \
+  ash -c '[ $(find /minio_data/backup/ -type f | wc -l) = "1" ] && \
+  [ $(find /webdav_data/data/my/new/path/ -type f | wc -l) = "1" ]'

 echo "[TEST:PASS] Remote backups have not been deleted."