Mirror of https://github.com/offen/docker-volume-backup.git (synced 2025-12-05 17:18:02 +01:00)

Compare commits (14 commits)
The comparison spans the following commits:

- 3bb99a7117
- ddc34be55d
- cb9b4bfcff
- 62bd2f4a5a
- 6fe629ce87
- 1db896f7cf
- 6ded00aa06
- 6b79f1914b
- 40ff2e00c9
- 760cc9cebc
- 1f9582df51
- 32575c831e
- c062710ce8
- 3a7dfe8e60
.github/ISSUE_TEMPLATE.md (vendored, new file, 20 lines)
@@ -0,0 +1,20 @@
+* **I'm submitting a ...**
+  - [ ] bug report
+  - [ ] feature request
+  - [ ] support request

+* **What is the current behavior?**

+* **If the current behavior is a bug, please provide the configuration and steps to reproduce and if possible a minimal demo of the problem.**

+* **What is the expected behavior?**

+* **What is the motivation / use case for changing the behavior?**

+* **Please tell us about your environment:**

+  - Image version:
+  - Docker version:
+  - docker-compose version:

+* **Other information** (e.g. detailed explanation, stacktraces, related issues, suggestions how to fix, links for us to have context, eg. stackoverflow, etc)
Dockerfile

@@ -9,7 +9,7 @@ RUN go mod download
 COPY cmd/backup/main.go ./cmd/backup/main.go
 RUN go build -o backup cmd/backup/main.go

-FROM alpine:3.14
+FROM alpine:3.15

 WORKDIR /root
README.md (105 lines changed)
@@ -1,9 +1,13 @@
 <a href="https://www.offen.dev/">
 <img src="https://offen.github.io/press-kit/offen-material/gfx-GitHub-Offen-logo.svg" alt="Offen logo" title="Offen" width="150px"/>
 </a>

 # docker-volume-backup

 Backup Docker volumes locally or to any S3 compatible storage.

 The [offen/docker-volume-backup](https://hub.docker.com/r/offen/docker-volume-backup) Docker image can be used as a lightweight (below 15MB) sidecar container to an existing Docker setup.
-It handles __recurring or one-off backups of Docker volumes__ to a __local directory__ or __any S3 compatible storage__ (or both), and __rotates away old backups__ if configured. It also supports __encrypting your backups using GPG__ and __sending notifications for failed backup runs__.
+It handles __recurring or one-off backups of Docker volumes__ to a __local directory__, __any S3 or WebDAV compatible storage (or any combination) and rotates away old backups__ if configured. It also supports __encrypting your backups using GPG__ and __sending notifications for failed backup runs__.

 <!-- MarkdownTOC -->
@@ -23,7 +27,9 @@ It handles __recurring or one-off backups of Docker volumes__ to a __local direc
   - [Update deprecated email configuration](#update-deprecated-email-configuration)
 - [Recipes](#recipes)
   - [Backing up to AWS S3](#backing-up-to-aws-s3)
+  - [Backing up to Filebase](#backing-up-to-filebase)
   - [Backing up to MinIO](#backing-up-to-minio)
+  - [Backing up to WebDAV](#backing-up-to-webdav)
   - [Backing up locally](#backing-up-locally)
   - [Backing up to AWS S3 as well as locally](#backing-up-to-aws-s3-as-well-as-locally)
   - [Running on a custom cron schedule](#running-on-a-custom-cron-schedule)
@@ -122,6 +128,16 @@ You can populate below template according to your requirements and use it as you

 # BACKUP_FILENAME="backup-%Y-%m-%dT%H-%M-%S.tar.gz"

+# Setting BACKUP_FILENAME_EXPAND to true allows for environment variable
+# placeholders in BACKUP_FILENAME, BACKUP_LATEST_SYMLINK and in
+# BACKUP_PRUNING_PREFIX that will get expanded at runtime,
+# e.g. `backup-$HOSTNAME-%Y-%m-%dT%H-%M-%S.tar.gz`. Expansion happens before
+# interpolating strftime tokens. It is disabled by default.
+# Please note that you will need to escape the `$` when providing the value
+# in a docker-compose.yml file, i.e. using $$VAR instead of $VAR.

+# BACKUP_FILENAME_EXPAND="true"

 # When storing local backups, a symlink to the latest backup can be created
 # in case a value is given for this key. This has no effect on remote backups.
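The expansion order described in the added comments above matches the `newScript` change further down in this diff: `os.ExpandEnv` runs before `timeutil.Strftime`. A minimal, self-contained sketch of that ordering (illustrative only; the `HOSTNAME` value is an assumption, not taken from the diff):

```go
package main

import (
    "fmt"
    "os"
    "time"

    "github.com/leekchan/timeutil"
)

func main() {
    // Assumption for illustration: HOSTNAME is provided by the environment.
    os.Setenv("HOSTNAME", "myhost")
    filename := "backup-$HOSTNAME-%Y-%m-%dT%H-%M-%S.tar.gz"

    // Step 1: expand environment variable placeholders
    // (this is what BACKUP_FILENAME_EXPAND=true enables).
    filename = os.ExpandEnv(filename)

    // Step 2: interpolate strftime tokens against the run's start time.
    start := time.Now()
    filename = timeutil.Strftime(&start, filename)

    fmt.Println(filename) // e.g. backup-myhost-2022-02-08T12-00-00.tar.gz
}
```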
@@ -141,6 +157,11 @@ You can populate below template according to your requirements and use it as you

 # AWS_S3_BUCKET_NAME="backup-bucket"

+# If you want to store the backup in a non-root location on your bucket
+# you can provide a path. The path must not contain a leading slash.

+# AWS_S3_PATH="my/backup/location"

 # Define credentials for authenticating against the backup storage and a bucket
 # name. Although all of these keys are `AWS`-prefixed, the setup can be used
 # with any S3 compatible storage.
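For context, the `copyBackup` change further down in this diff joins this path with the backup's file name to form the object key, which is why no leading slash is needed. A tiny illustrative sketch (the values are assumptions, not taken from the diff):

```go
package main

import (
    "fmt"
    "path/filepath"
)

func main() {
    // Assumed example values; AWS_S3_PATH must not start with a slash.
    s3Path := "my/backup/location"
    name := "backup-2022-02-08T12-00-00.tar.gz"

    // Mirrors filepath.Join(s.c.AwsS3Path, name) in copyBackup below.
    key := filepath.Join(s3Path, name)
    fmt.Println(key) // my/backup/location/backup-2022-02-08T12-00-00.tar.gz
}
```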
@@ -156,7 +177,7 @@ You can populate below template according to your requirements and use it as you
 # AWS_IAM_ROLE_ENDPOINT="http://169.254.169.254"

 # This is the FQDN of your storage server, e.g. `storage.example.com`.
 # Do not set this when working against AWS S3 (the default value is
 # `s3.amazonaws.com`). If you need to set a specific (non-https) protocol, you
 # will need to use the option below.
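The `AWS_ENDPOINT` family of settings above is consumed by the MinIO client that the tool uses for S3 compatible storage. The client construction itself is not part of this diff; the sketch below only illustrates, assuming the standard `minio-go` v7 API, how an endpoint, protocol and credentials like these are typically wired together:

```go
package main

import (
    "log"

    "github.com/minio/minio-go/v7"
    "github.com/minio/minio-go/v7/pkg/credentials"
)

func main() {
    // Assumed example values mirroring AWS_ENDPOINT, AWS_ENDPOINT_PROTO,
    // AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY.
    endpoint := "storage.example.com"
    useHTTPS := true // AWS_ENDPOINT_PROTO="https"

    client, err := minio.New(endpoint, &minio.Options{
        Creds:  credentials.NewStaticV4("ACCESS-KEY", "SECRET-KEY", ""),
        Secure: useHTTPS,
    })
    if err != nil {
        log.Fatal(err)
    }
    _ = client // the real code keeps a client like this on its script struct
}
```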
@@ -175,6 +196,25 @@ You can populate below template according to your requirements and use it as you

 # AWS_ENDPOINT_INSECURE="true"

+# You can also back up files to any WebDAV server:

+# The URL of the remote WebDAV server

+# WEBDAV_URL="https://webdav.example.com"

+# The directory to place the backups in on the WebDAV server.
+# If the path is not present on the server it will be created.

+# WEBDAV_PATH="/my/directory/"

+# The username for the WebDAV server

+# WEBDAV_USERNAME="user"

+# The password for the WebDAV server

+# WEBDAV_PASSWORD="password"

 # In addition to storing backups remotely, you can also keep local copies.
 # Pass a container-local path to store your backups if needed. You also need to
 # mount a local folder or Docker volume into that location (`/archive`
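The four `WEBDAV_*` values documented above are consumed by the `newScript` change later in this diff: when `WEBDAV_URL` is set, a `gowebdav` client is created from the URL and credentials, and WebDAV upload and pruning are skipped entirely when it is unset. A sketch of that wiring (the helper function and settings struct here are illustrative, not part of the diff):

```go
package main

import (
    "errors"

    "github.com/studio-b12/gowebdav"
)

// webdavSettings mirrors the WEBDAV_* variables documented above.
type webdavSettings struct {
    URL      string // WEBDAV_URL
    Path     string // WEBDAV_PATH, defaults to "/"
    Username string // WEBDAV_USERNAME
    Password string // WEBDAV_PASSWORD
}

// newWebdavClient returns nil when no URL is configured, so WebDAV handling
// is simply skipped in that case.
func newWebdavClient(s webdavSettings) (*gowebdav.Client, error) {
    if s.URL == "" {
        return nil, nil
    }
    if s.Username == "" || s.Password == "" {
        return nil, errors.New("WEBDAV_URL is defined, but no credentials were provided")
    }
    return gowebdav.NewClient(s.URL, s.Username, s.Password), nil
}

func main() {
    client, err := newWebdavClient(webdavSettings{
        URL:      "https://webdav.example.com",
        Path:     "/my/directory/",
        Username: "user",
        Password: "password",
    })
    if err != nil {
        panic(err)
    }
    _ = client // the real code stores this on the script struct as s.webdavClient
}
```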
@@ -286,6 +326,11 @@ You can populate below template according to your requirements and use it as you
 # EMAIL_SMTP_PORT="<port>"
 ```

+In case you encounter double quoted values in your configuration you might be running an [older version of `docker-compose`][compose-issue].
+You can work around this by either updating `docker-compose` or unquoting your configuration values.

+[compose-issue]: https://github.com/docker/compose/issues/2854

 ## How to

 ### Stopping containers during backup
@@ -384,14 +429,14 @@ In case you need to restore a volume from a backup, the most straight forward pr
 ```console
 tar -C /tmp -xvf backup.tar.gz
 ```
-- Using a temporary one-off container, mount the volume (the example assumes it's named `data`) and copy over the backup. Make sure you copy the correct path level (this depends on how you mount your volume into the backup container), you might need to strip some leading elements
+- Using a temporary once-off container, mount the volume (the example assumes it's named `data`) and copy over the backup. Make sure you copy the correct path level (this depends on how you mount your volume into the backup container), you might need to strip some leading elements
 ```console
-docker run -d --name backup_restore -v data:/backup_restore alpine
-docker cp /tmp/backup/data-backup backup_restore:/backup_restore
-docker stop backup_restore
-docker rm backup_restore
+docker run -d --name temp_restore_container -v data:/backup_restore alpine
+docker cp /tmp/backup/data-backup temp_restore_container:/backup_restore
+docker stop temp_restore_container
+docker rm temp_restore_container
 ```
 - Restart the container(s) that are using the volume

 Depending on your setup and the application(s) you are running, this might involve other steps to be taken still.
@@ -489,6 +534,28 @@ volumes:
   data:
 ```

+### Backing up to Filebase

+```yml
+version: '3'

+services:
+  # ... define other services using the `data` volume here
+  backup:
+    image: offen/docker-volume-backup:latest
+    environment:
+      AWS_ENDPOINT: s3.filebase.com
+      AWS_S3_BUCKET_NAME: filebase-bucket
+      AWS_ACCESS_KEY_ID: FILEBASE-ACCESS-KEY
+      AWS_SECRET_ACCESS_KEY: FILEBASE-SECRET-KEY
+    volumes:
+      - data:/backup/my-app-backup:ro
+      - /var/run/docker.sock:/var/run/docker.sock:ro

+volumes:
+  data:
+```

 ### Backing up to MinIO

 ```yml
@@ -511,6 +578,28 @@ volumes:
   data:
 ```

+### Backing up to WebDAV

+```yml
+version: '3'

+services:
+  # ... define other services using the `data` volume here
+  backup:
+    image: offen/docker-volume-backup:latest
+    environment:
+      WEBDAV_URL: https://webdav.mydomain.me
+      WEBDAV_PATH: /my/directory/
+      WEBDAV_USERNAME: user
+      WEBDAV_PASSWORD: password
+    volumes:
+      - data:/backup/my-app-backup:ro
+      - /var/run/docker.sock:/var/run/docker.sock:ro

+volumes:
+  data:
+```

 ### Backing up locally

 ```yml
cmd/backup/main.go

@@ -9,6 +9,7 @@ import (
     "errors"
     "fmt"
     "io"
+    "io/fs"
     "os"
     "path"
     "path/filepath"
@@ -31,6 +32,7 @@ import (
     "github.com/minio/minio-go/v7/pkg/credentials"
     "github.com/otiai10/copy"
     "github.com/sirupsen/logrus"
+    "github.com/studio-b12/gowebdav"
     "golang.org/x/crypto/openpgp"
 )
@@ -86,12 +88,13 @@ func main() {
 // script holds all the stateful information required to orchestrate a
 // single backup run.
 type script struct {
-    cli       *client.Client
-    mc        *minio.Client
-    logger    *logrus.Logger
-    sender    *router.ServiceRouter
-    hooks     []hook
-    hookLevel hookLevel
+    cli          *client.Client
+    minioClient  *minio.Client
+    webdavClient *gowebdav.Client
+    logger       *logrus.Logger
+    sender       *router.ServiceRouter
+    hooks        []hook
+    hookLevel    hookLevel

     start time.Time
     file string
@@ -103,6 +106,7 @@ type script struct {
 type config struct {
     BackupSources string `split_words:"true" default:"/backup"`
     BackupFilename string `split_words:"true" default:"backup-%Y-%m-%dT%H-%M-%S.tar.gz"`
+    BackupFilenameExpand bool `split_words:"true"`
     BackupLatestSymlink string `split_words:"true"`
     BackupArchive string `split_words:"true" default:"/archive"`
     BackupRetentionDays int32 `split_words:"true" default:"-1"`
@@ -111,6 +115,7 @@ type config struct {
     BackupStopContainerLabel string `split_words:"true" default:"true"`
     BackupFromSnapshot bool `split_words:"true"`
     AwsS3BucketName string `split_words:"true"`
+    AwsS3Path string `split_words:"true"`
     AwsEndpoint string `split_words:"true" default:"s3.amazonaws.com"`
     AwsEndpointProto string `split_words:"true" default:"https"`
     AwsEndpointInsecure bool `split_words:"true"`
@@ -126,6 +131,10 @@ type config struct {
     EmailSMTPPort int `envconfig:"EMAIL_SMTP_PORT" default:"587"`
     EmailSMTPUsername string `envconfig:"EMAIL_SMTP_USERNAME"`
     EmailSMTPPassword string `envconfig:"EMAIL_SMTP_PASSWORD"`
+    WebdavUrl string `split_words:"true"`
+    WebdavPath string `split_words:"true" default:"/"`
+    WebdavUsername string `split_words:"true"`
+    WebdavPassword string `split_words:"true"`
 }

 var msgBackupFailed = "backup run failed"
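The new `Webdav*` fields use the same `split_words` convention as the existing keys, so they map onto the `WEBDAV_URL`, `WEBDAV_PATH`, `WEBDAV_USERNAME` and `WEBDAV_PASSWORD` variables documented in the README portion of this diff. A small sketch of that mapping, assuming the standard behaviour of `github.com/kelseyhightower/envconfig` (illustrative only):

```go
package main

import (
    "fmt"
    "log"

    "github.com/kelseyhightower/envconfig"
)

// Only the WebDAV-related fields are reproduced here for illustration.
type config struct {
    WebdavUrl      string `split_words:"true"`
    WebdavPath     string `split_words:"true" default:"/"`
    WebdavUsername string `split_words:"true"`
    WebdavPassword string `split_words:"true"`
}

func main() {
    // split_words:"true" makes envconfig read WebdavUrl from WEBDAV_URL, etc.
    var c config
    if err := envconfig.Process("", &c); err != nil {
        log.Fatal(err)
    }
    fmt.Printf("%+v\n", c)
}
```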
@@ -153,6 +162,12 @@ func newScript() (*script, error) {
     }

     s.file = path.Join("/tmp", s.c.BackupFilename)
+    if s.c.BackupFilenameExpand {
+        s.file = os.ExpandEnv(s.file)
+        s.c.BackupLatestSymlink = os.ExpandEnv(s.c.BackupLatestSymlink)
+        s.c.BackupPruningPrefix = os.ExpandEnv(s.c.BackupPruningPrefix)
+    }
+    s.file = timeutil.Strftime(&s.start, s.file)

     _, err := os.Stat("/var/run/docker.sock")
     if !os.IsNotExist(err) {
@@ -199,7 +214,16 @@ func newScript() (*script, error) {
         if err != nil {
             return nil, fmt.Errorf("newScript: error setting up minio client: %w", err)
         }
-        s.mc = mc
+        s.minioClient = mc
     }

+    if s.c.WebdavUrl != "" {
+        if s.c.WebdavUsername == "" || s.c.WebdavPassword == "" {
+            return nil, errors.New("newScript: WEBDAV_URL is defined, but no credentials were provided")
+        } else {
+            webdavClient := gowebdav.NewClient(s.c.WebdavUrl, s.c.WebdavUsername, s.c.WebdavPassword)
+            s.webdavClient = webdavClient
+        }
+    }

     if s.c.EmailNotificationRecipient != "" {
@@ -413,7 +437,6 @@ func (s *script) stopContainers() (func() error, error) {
 // takeBackup creates a tar archive of the configured backup location and
 // saves it to disk.
 func (s *script) takeBackup() error {
-    s.file = timeutil.Strftime(&s.start, s.file)
     backupSources := s.c.BackupSources

     if s.c.BackupFromSnapshot {
@@ -502,8 +525,8 @@ func (s *script) encryptBackup() error {
 // as per the given configuration.
 func (s *script) copyBackup() error {
     _, name := path.Split(s.file)
-    if s.mc != nil {
-        if _, err := s.mc.FPutObject(context.Background(), s.c.AwsS3BucketName, name, s.file, minio.PutObjectOptions{
+    if s.minioClient != nil {
+        if _, err := s.minioClient.FPutObject(context.Background(), s.c.AwsS3BucketName, filepath.Join(s.c.AwsS3Path, name), s.file, minio.PutObjectOptions{
             ContentType: "application/tar+gzip",
         }); err != nil {
             return fmt.Errorf("copyBackup: error uploading backup to remote storage: %w", err)
@@ -511,6 +534,20 @@ func (s *script) copyBackup() error {
         s.logger.Infof("Uploaded a copy of backup `%s` to bucket `%s`.", s.file, s.c.AwsS3BucketName)
     }

+    if s.webdavClient != nil {
+        bytes, err := os.ReadFile(s.file)
+        if err != nil {
+            return fmt.Errorf("copyBackup: error reading the file to be uploaded: %w", err)
+        }
+        if err := s.webdavClient.MkdirAll(s.c.WebdavPath, 0644); err != nil {
+            return fmt.Errorf("copyBackup: error creating directory '%s' on WebDAV server: %w", s.c.WebdavPath, err)
+        }
+        if err := s.webdavClient.Write(filepath.Join(s.c.WebdavPath, name), bytes, 0644); err != nil {
+            return fmt.Errorf("copyBackup: error uploading the file to WebDAV server: %w", err)
+        }
+        s.logger.Infof("Uploaded a copy of backup `%s` to WebDAV-URL '%s' at path '%s'.", s.file, s.c.WebdavUrl, s.c.WebdavPath)
+    }

     if _, err := os.Stat(s.c.BackupArchive); !os.IsNotExist(err) {
         if err := copyFile(s.file, path.Join(s.c.BackupArchive, name)); err != nil {
             return fmt.Errorf("copyBackup: error copying file to local archive: %w", err)
@@ -545,8 +582,9 @@ func (s *script) pruneOldBackups() error {

     deadline := time.Now().AddDate(0, 0, -int(s.c.BackupRetentionDays))

-    if s.mc != nil {
-        candidates := s.mc.ListObjects(context.Background(), s.c.AwsS3BucketName, minio.ListObjectsOptions{
+    // Prune minio/S3 backups
+    if s.minioClient != nil {
+        candidates := s.minioClient.ListObjects(context.Background(), s.c.AwsS3BucketName, minio.ListObjectsOptions{
             WithMetadata: true,
             Prefix: s.c.BackupPruningPrefix,
         })
@@ -574,7 +612,7 @@ func (s *script) pruneOldBackups() error {
             }
             close(objectsCh)
         }()
-        errChan := s.mc.RemoveObjects(context.Background(), s.c.AwsS3BucketName, objectsCh, minio.RemoveObjectsOptions{})
+        errChan := s.minioClient.RemoveObjects(context.Background(), s.c.AwsS3BucketName, objectsCh, minio.RemoveObjectsOptions{})
         var removeErrors []error
         for result := range errChan {
             if result.Err != nil {
@@ -606,6 +644,38 @@ func (s *script) pruneOldBackups() error {
         }
     }

+    // Prune WebDAV backups
+    if s.webdavClient != nil {
+        candidates, err := s.webdavClient.ReadDir(s.c.WebdavPath)
+        if err != nil {
+            return fmt.Errorf("pruneOldBackups: error looking up candidates from remote storage: %w", err)
+        }
+        var matches []fs.FileInfo
+        var lenCandidates int
+        for _, candidate := range candidates {
+            lenCandidates++
+            if candidate.ModTime().Before(deadline) {
+                matches = append(matches, candidate)
+            }
+        }

+        if len(matches) != 0 && len(matches) != lenCandidates {
+            for _, match := range matches {
+                if err := s.webdavClient.Remove(filepath.Join(s.c.WebdavPath, match.Name())); err != nil {
+                    return fmt.Errorf("pruneOldBackups: error removing a file from remote storage: %w", err)
+                }
+                s.logger.Infof("Pruned %s from WebDAV: %s", match.Name(), filepath.Join(s.c.WebdavUrl, s.c.WebdavPath))
+            }
+            s.logger.Infof("Pruned %d out of %d remote backup(s) as their age exceeded the configured retention period of %d days.", len(matches), lenCandidates, s.c.BackupRetentionDays)
+        } else if len(matches) != 0 && len(matches) == lenCandidates {
+            s.logger.Warnf("The current configuration would delete all %d remote backup copies.", len(matches))
+            s.logger.Warn("Refusing to do so, please check your configuration.")
+        } else {
+            s.logger.Infof("None of %d remote backup(s) were pruned.", lenCandidates)
+        }
+    }

+    // Prune local backups
     if _, err := os.Stat(s.c.BackupArchive); !os.IsNotExist(err) {
         globPattern := path.Join(
             s.c.BackupArchive,
go.mod (3 lines changed)
@@ -8,10 +8,11 @@ require (
     github.com/gofrs/flock v0.8.1
     github.com/kelseyhightower/envconfig v1.4.0
     github.com/leekchan/timeutil v0.0.0-20150802142658-28917288c48d
-    github.com/m90/targz v0.0.0-20210904082215-2e9a4529a615
+    github.com/m90/targz v0.0.0-20220208141135-d3baeef59a97
     github.com/minio/minio-go/v7 v7.0.16
     github.com/otiai10/copy v1.7.0
     github.com/sirupsen/logrus v1.8.1
+    github.com/studio-b12/gowebdav v0.0.0-20211109083228-3f8721cd4b6f
     golang.org/x/crypto v0.0.0-20210817164053-32db794688a5
 )
go.sum (8 lines changed)
@@ -450,8 +450,10 @@ github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
 github.com/leekchan/timeutil v0.0.0-20150802142658-28917288c48d h1:2puqoOQwi3Ai1oznMOsFIbifm6kIfJaLLyYzWD4IzTs=
 github.com/leekchan/timeutil v0.0.0-20150802142658-28917288c48d/go.mod h1:hO90vCP2x3exaSH58BIAowSKvV+0OsY21TtzuFGHON4=
 github.com/leodido/go-urn v1.2.0/go.mod h1:+8+nEpDfqqsY+g338gtMEUOtuK+4dEMhiQEgxpxOKII=
-github.com/m90/targz v0.0.0-20210904082215-2e9a4529a615 h1:rn0LO2tQEgCDOct8qnbcslTUpAIWdVlWcGkjoumhf2U=
-github.com/m90/targz v0.0.0-20210904082215-2e9a4529a615/go.mod h1:YZK3bSO/oVlk9G+v00BxgzxW2Us4p/R4ysHOBjk0fJI=
+github.com/m90/targz v0.0.0-20211229090208-2f22c2d9278e h1:Kzm2zfxS40RUGD5UVtVtOo9RT5TtGoNJnmWORtCEaxM=
+github.com/m90/targz v0.0.0-20211229090208-2f22c2d9278e/go.mod h1:YZK3bSO/oVlk9G+v00BxgzxW2Us4p/R4ysHOBjk0fJI=
+github.com/m90/targz v0.0.0-20220208141135-d3baeef59a97 h1:Uc/WzUKI/zvhkqIzk5TyaPE6AY1SD1DWGc7RV7cky4s=
+github.com/m90/targz v0.0.0-20220208141135-d3baeef59a97/go.mod h1:YZK3bSO/oVlk9G+v00BxgzxW2Us4p/R4ysHOBjk0fJI=
 github.com/magiconair/properties v1.8.0/go.mod h1:PppfXfuXeibc/6YijjN8zIbojt8czPbwD3XqdrwzmxQ=
 github.com/magiconair/properties v1.8.1/go.mod h1:PppfXfuXeibc/6YijjN8zIbojt8czPbwD3XqdrwzmxQ=
 github.com/mailru/easyjson v0.0.0-20190403194419-1ea4449da983/go.mod h1:C1wdFJiN94OJF2b5HbByQZoLdCWB1Yqtg26g4irojpc=
@@ -659,6 +661,8 @@ github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UV
 github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4=
 github.com/stretchr/testify v1.6.1 h1:hDPOHmpOpP40lSULcqw7IrRb/u7w6RpDC9399XyoNd0=
 github.com/stretchr/testify v1.6.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
+github.com/studio-b12/gowebdav v0.0.0-20211109083228-3f8721cd4b6f h1:L2NE7BXnSlSLoNYZ0lCwZDjdnYjCNYC71k9ClZUTFTs=
+github.com/studio-b12/gowebdav v0.0.0-20211109083228-3f8721cd4b6f/go.mod h1:bHA7t77X/QFExdeAnDzK6vKM34kEZAcE1OX4MfiwjkE=
 github.com/subosito/gotenv v1.2.0/go.mod h1:N0PQaV/YGNqwC0u51sEeR/aUtSLEXKX9iv69rRypqCw=
 github.com/syndtr/gocapability v0.0.0-20170704070218-db04d3cc01c8/go.mod h1:hkRG7XYTFWNJGYcbNJQlaLq0fg1yr4J4t/NcTQtrfww=
 github.com/syndtr/gocapability v0.0.0-20180916011248-d98352740cb2/go.mod h1:hkRG7XYTFWNJGYcbNJQlaLq0fg1yr4J4t/NcTQtrfww=
@@ -7,6 +7,7 @@ cd $(dirname $0)
 docker network create test_network
 docker volume create backup_data
 docker volume create app_data
+docker volume create empty_data

 docker run -d \
   --name minio \
@@ -31,6 +32,7 @@ sleep 10
 docker run --rm \
   --network test_network \
   -v app_data:/backup/app_data \
+  -v empty_data:/backup/empty_data \
   -v /var/run/docker.sock:/var/run/docker.sock \
   --env AWS_ACCESS_KEY_ID=test \
   --env AWS_SECRET_ACCESS_KEY=GMusLtUmILge2by+z890kQ \
@@ -44,7 +46,7 @@ docker run --rm \

 docker run --rm -it \
   -v backup_data:/data alpine \
-  ash -c 'tar -xvf /data/backup/test.tar.gz && test -f /backup/app_data/offen.db'
+  ash -c 'tar -xvf /data/backup/test.tar.gz && test -f /backup/app_data/offen.db && test -d /backup/empty_data'

 echo "[TEST:PASS] Found relevant files in untared backup."
@@ -10,12 +10,23 @@ services:
       MINIO_SECRET_KEY: GMusLtUmILge2by+z890kQ
     entrypoint: /bin/ash -c 'mkdir -p /data/backup && minio server /data'
     volumes:
-      - backup_data:/data
+      - minio_backup_data:/data

+  webdav:
+    image: bytemark/webdav:2.4
+    environment:
+      AUTH_TYPE: Digest
+      USERNAME: test
+      PASSWORD: test
+    volumes:
+      - webdav_backup_data:/var/lib/dav

   backup: &default_backup_service
     image: offen/docker-volume-backup:${TEST_VERSION}
+    hostname: hostnametoken
     depends_on:
       - minio
+      - webdav
     restart: always
     environment:
       AWS_ACCESS_KEY_ID: test
@@ -23,13 +34,18 @@ services:
       AWS_ENDPOINT: minio:9000
       AWS_ENDPOINT_PROTO: http
       AWS_S3_BUCKET_NAME: backup
-      BACKUP_FILENAME: test.tar.gz
-      BACKUP_LATEST_SYMLINK: test.latest.tar.gz.gpg
+      BACKUP_FILENAME_EXPAND: 'true'
+      BACKUP_FILENAME: test-$$HOSTNAME.tar.gz
+      BACKUP_LATEST_SYMLINK: test-$$HOSTNAME.latest.tar.gz.gpg
       BACKUP_CRON_EXPRESSION: 0 0 5 31 2 ?
       BACKUP_RETENTION_DAYS: ${BACKUP_RETENTION_DAYS:-7}
       BACKUP_PRUNING_LEEWAY: 5s
       BACKUP_PRUNING_PREFIX: test
       GPG_PASSPHRASE: 1234secret
+      WEBDAV_URL: http://webdav/
+      WEBDAV_PATH: /my/new/path/
+      WEBDAV_USERNAME: test
+      WEBDAV_PASSWORD: test
     volumes:
       - ./local:/archive
       - app_data:/backup/app_data:ro
@@ -43,5 +59,6 @@ services:
       - app_data:/var/opt/offen

 volumes:
-  backup_data:
+  minio_backup_data:
+  webdav_backup_data:
   app_data:
@@ -13,20 +13,23 @@ docker-compose exec offen ln -s /var/opt/offen/offen.db /var/opt/offen/db.link
 docker-compose exec backup backup

 docker run --rm -it \
-  -v compose_backup_data:/data alpine \
-  ash -c 'apk add gnupg && echo 1234secret | gpg -d --pinentry-mode loopback --passphrase-fd 0 --yes /data/backup/test.tar.gz.gpg > /tmp/test.tar.gz && tar -xf /tmp/test.tar.gz -C /tmp && test -f /tmp/backup/app_data/offen.db'
+  -v compose_minio_backup_data:/minio_data \
+  -v compose_webdav_backup_data:/webdav_data alpine \
+  ash -c 'apk add gnupg && \
+    echo 1234secret | gpg -d --pinentry-mode loopback --passphrase-fd 0 --yes /minio_data/backup/test-hostnametoken.tar.gz.gpg > /tmp/test-hostnametoken.tar.gz && tar -xf /tmp/test-hostnametoken.tar.gz -C /tmp && test -f /tmp/backup/app_data/offen.db && \
+    echo 1234secret | gpg -d --pinentry-mode loopback --passphrase-fd 0 --yes /webdav_data/data/my/new/path/test-hostnametoken.tar.gz.gpg > /tmp/test-hostnametoken.tar.gz && tar -xf /tmp/test-hostnametoken.tar.gz -C /tmp && test -f /tmp/backup/app_data/offen.db'

-echo "[TEST:PASS] Found relevant files in untared remote backup."
+echo "[TEST:PASS] Found relevant files in untared remote backups."

-test -L ./local/test.latest.tar.gz.gpg
-echo 1234secret | gpg -d --yes --passphrase-fd 0 ./local/test.tar.gz.gpg > ./local/decrypted.tar.gz
+test -L ./local/test-hostnametoken.latest.tar.gz.gpg
+echo 1234secret | gpg -d --yes --passphrase-fd 0 ./local/test-hostnametoken.tar.gz.gpg > ./local/decrypted.tar.gz
 tar -xf ./local/decrypted.tar.gz -C /tmp && test -f /tmp/backup/app_data/offen.db
 rm ./local/decrypted.tar.gz
 test -L /tmp/backup/app_data/db.link

 echo "[TEST:PASS] Found relevant files in untared local backup."

-if [ "$(docker-compose ps -q | wc -l)" != "3" ]; then
+if [ "$(docker-compose ps -q | wc -l)" != "4" ]; then
   echo "[TEST:FAIL] Expected all containers to be running post backup, instead seen:"
   docker-compose ps
   exit 1
@@ -43,8 +46,10 @@ sleep 5
 docker-compose exec backup backup

 docker run --rm -it \
-  -v compose_backup_data:/data alpine \
-  ash -c '[ $(find /data/backup/ -type f | wc -l) = "1" ]'
+  -v compose_minio_backup_data:/minio_data \
+  -v compose_webdav_backup_data:/webdav_data alpine \
+  ash -c '[ $(find /minio_data/backup/ -type f | wc -l) = "1" ] && \
+    [ $(find /webdav_data/data/my/new/path/ -type f | wc -l) = "1" ]'

 echo "[TEST:PASS] Remote backups have not been deleted."