Compare commits

..

10 Commits

Author SHA1 Message Date
Frederik Ring
3bb99a7117 Update package targz 2022-02-08 15:12:46 +01:00
Fridgemagnet
ddc34be55d Updated README.md "Restoring..." section example (#56)
Edited the "Restoring a volume from a backup" section example to better differentiate between the name of the temporary restore container and the name of the mounted restore volume.
New example container name: temp_restore_container
The restore volume name remains the same: backup_restore

Also changed "one-off container" to "once-off container".
2022-02-04 11:52:59 +01:00
Joshua Noble
cb9b4bfcff Add support for Filebase (#54) 2022-02-02 17:22:42 +01:00
Frederik Ring
62bd2f4a5a Update base docker image to alpine 3.15 (#53) 2022-01-27 14:40:56 +01:00
Frederik Ring
6fe629ce87 Allow path to be set for bucket storage (#52) 2022-01-25 21:16:16 +01:00
Frederik Ring
1db896f7cf Tweak README, improve client naming, tidy go.mod file 2022-01-22 13:35:13 +01:00
Kaerbr
6ded00aa06 Support Nextcloud / WebDav (#48)
* add studio-b12/gowebdav to be able to upload to webdav server

* make sure all env variables are present for webdav upload

* implement file upload to WebDav server

directory defaults to the base directory

* docs: add the new feature to the documentation

* if no WebDav env variables are given, throw no error

* docs: use more elegant english :D

Co-authored-by: Frederik Ring <frederik.ring@gmail.com>

* docs: use official spelling of "WebDAV"

* perf: golang likes to return early instead of having an else block

* use WEBDAV_PATH instead of WEBDAV_DIRECTORY

* use split_words for more convenience

as shown here: https://github.com/kelseyhightower/envconfig#struct-tag-support

* simplify

* feat: add pruning of files in WebDAV remote

Based on / Inspired by the minio/S3 implementation of pruning remote files.

* remove leftover development logging

* test: first try implementing tests

Sadly I have to use the remote pipeline -- local won't work for me.

* test: adapt used volume names

* test: specify image only once!

* test: minio AND webdav data should be present

* test: backups on the WebDAV remote are lying in the root directory

* test: the webdav server stores data in /var/lib/dav

* trying with data subfolder

* test: 1 container was added, so the count rose from 3 to 4

* webdav subfolder is "data" not "backup"

* fix: password AND username must be defined

not password OR username

* improve logging

* feat: if the given path on the server isn't present, it will be created

* test: add creation of new folder for webdav to tests

Co-authored-by: Frederik Ring <frederik.ring@gmail.com>
2022-01-22 13:29:21 +01:00
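
The `split_words` bullets above refer to kelseyhightower/envconfig's struct-tag support. As a minimal standalone sketch (not code from the repository; the struct only mirrors the fields this change adds), the tag maps camel-cased field names to upper-snake-case environment variables:

```go
package main

import (
	"fmt"
	"log"

	"github.com/kelseyhightower/envconfig"
)

// With `split_words:"true"`, WebdavUrl is read from WEBDAV_URL,
// WebdavPath from WEBDAV_PATH, and so on.
type config struct {
	WebdavUrl      string `split_words:"true"`
	WebdavPath     string `split_words:"true" default:"/"`
	WebdavUsername string `split_words:"true"`
	WebdavPassword string `split_words:"true"`
}

func main() {
	var c config
	if err := envconfig.Process("", &c); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%+v\n", c)
}
```

This is why the recipes further down can set `WEBDAV_URL`, `WEBDAV_PATH`, `WEBDAV_USERNAME` and `WEBDAV_PASSWORD` directly, without any extra mapping code.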
Hendrik Niefeld
6b79f1914b Update README.md 2022-01-06 16:09:33 +01:00
Hendrik Niefeld
40ff2e00c9 Update README.md 2022-01-06 16:07:00 +01:00
Frederik Ring
760cc9cebc Add issue template 2021-12-29 13:10:43 +01:00
8 changed files with 214 additions and 31 deletions

.github/ISSUE_TEMPLATE.md (new file)

@@ -0,0 +1,20 @@
* **I'm submitting a ...**
- [ ] bug report
- [ ] feature request
- [ ] support request
* **What is the current behavior?**
* **If the current behavior is a bug, please provide the configuration and steps to reproduce and if possible a minimal demo of the problem.**
* **What is the expected behavior?**
* **What is the motivation / use case for changing the behavior?**
* **Please tell us about your environment:**
- Image version:
- Docker version:
- docker-compose version:
* **Other information** (e.g. detailed explanation, stacktraces, related issues, suggestions how to fix, links for us to have context, eg. stackoverflow, etc)

Dockerfile

@@ -9,7 +9,7 @@ RUN go mod download
COPY cmd/backup/main.go ./cmd/backup/main.go
RUN go build -o backup cmd/backup/main.go
FROM alpine:3.14
FROM alpine:3.15
WORKDIR /root

README.md

@@ -1,9 +1,13 @@
<a href="https://www.offen.dev/">
<img src="https://offen.github.io/press-kit/offen-material/gfx-GitHub-Offen-logo.svg" alt="Offen logo" title="Offen" width="150px"/>
</a>
# docker-volume-backup
Backup Docker volumes locally or to any S3 compatible storage.
The [offen/docker-volume-backup](https://hub.docker.com/r/offen/docker-volume-backup) Docker image can be used as a lightweight (below 15MB) sidecar container to an existing Docker setup.
It handles __recurring or one-off backups of Docker volumes__ to a __local directory__ or __any S3 compatible storage__ (or both), and __rotates away old backups__ if configured. It also supports __encrypting your backups using GPG__ and __sending notifications for failed backup runs__.
It handles __recurring or one-off backups of Docker volumes__ to a __local directory__, __any S3 or WebDAV compatible storage (or any combination) and rotates away old backups__ if configured. It also supports __encrypting your backups using GPG__ and __sending notifications for failed backup runs__.
<!-- MarkdownTOC -->
@@ -23,7 +27,9 @@ It handles __recurring or one-off backups of Docker volumes__ to a __local direc
- [Update deprecated email configuration](#update-deprecated-email-configuration)
- [Recipes](#recipes)
- [Backing up to AWS S3](#backing-up-to-aws-s3)
- [Backing up to Filebase](#backing-up-to-filebase)
- [Backing up to MinIO](#backing-up-to-minio)
- [Backing up to WebDAV](#backing-up-to-webdav)
- [Backing up locally](#backing-up-locally)
- [Backing up to AWS S3 as well as locally](#backing-up-to-aws-s3-as-well-as-locally)
- [Running on a custom cron schedule](#running-on-a-custom-cron-schedule)
@@ -130,7 +136,7 @@ You can populate below template according to your requirements and use it as you
# Please note that you will need to escape the `$` when providing the value
# in a docker-compose.yml file, i.e. using $$VAR instead of $VAR.
# BACKUP_FILENAME_TEMPLATE="true"
# BACKUP_FILENAME_EXPAND="true"
# When storing local backups, a symlink to the latest backup can be created
# in case a value is given for this key. This has no effect on remote backups.
@@ -151,6 +157,11 @@ You can populate below template according to your requirements and use it as you
# AWS_S3_BUCKET_NAME="backup-bucket"
# If you want to store the backup in a non-root location on your bucket
# you can provide a path. The path must not contain a leading slash.
# AWS_S3_PATH="my/backup/location"
# Define credentials for authenticating against the backup storage and a bucket
# name. Although all of these keys are `AWS`-prefixed, the setup can be used
# with any S3 compatible storage.
@@ -166,7 +177,7 @@ You can populate below template according to your requirements and use it as you
# AWS_IAM_ROLE_ENDPOINT="http://169.254.169.254"
# This is the FQDN of your storage server, e.g. `storage.example.com`.
# Do not set this when working against AWS S3 (the default value is
# Do not set this when working against AWS S3 (the default value is
# `s3.amazonaws.com`). If you need to set a specific (non-https) protocol, you
# will need to use the option below.
@@ -185,6 +196,25 @@ You can populate below template according to your requirements and use it as you
# AWS_ENDPOINT_INSECURE="true"
# You can also backup files to any WebDAV server:
# The URL of the remote WebDAV server
# WEBDAV_URL="https://webdav.example.com"
# The Directory to place the backups to on the WebDAV server.
# If the path is not present on the server it will be created.
# WEBDAV_PATH="/my/directory/"
# The username for the WebDAV server
# WEBDAV_USERNAME="user"
# The password for the WebDAV server
# WEBDAV_PASSWORD="password"
# In addition to storing backups remotely, you can also keep local copies.
# Pass a container-local path to store your backups if needed. You also need to
# mount a local folder or Docker volume into that location (`/archive`
@@ -399,14 +429,14 @@ In case you need to restore a volume from a backup, the most straight forward pr
```console
tar -C /tmp -xvf backup.tar.gz
```
- Using a temporary one-off container, mount the volume (the example assumes it's named `data`) and copy over the backup. Make sure you copy the correct path level (this depends on how you mount your volume into the backup container), you might need to strip some leading elements
- Using a temporary once-off container, mount the volume (the example assumes it's named `data`) and copy over the backup. Make sure you copy the correct path level (this depends on how you mount your volume into the backup container), you might need to strip some leading elements
```console
docker run -d --name backup_restore -v data:/backup_restore alpine
docker cp /tmp/backup/data-backup backup_restore:/backup_restore
docker stop backup_restore
docker rm backup_restore
docker run -d --name temp_restore_container -v data:/backup_restore alpine
docker cp /tmp/backup/data-backup temp_restore_container:/backup_restore
docker stop temp_restore_container
docker rm temp_restore_container
```
- Restart the container(s) that are using the volume
- Restart the container(s) that are using the volume
Depending on your setup and the application(s) you are running, this might involve other steps to be taken still.
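
Purely as an illustration of the documented restore flow (not part of the README or the repository), the same steps can be scripted; the volume name `data` and the container name `temp_restore_container` below are the ones used in the example above:

```go
package main

import (
	"log"
	"os/exec"
)

// run shells out to the docker CLI and aborts on the first failure.
func run(args ...string) {
	out, err := exec.Command("docker", args...).CombinedOutput()
	if err != nil {
		log.Fatalf("docker %v failed: %v\n%s", args, err, out)
	}
}

func main() {
	// Mount the `data` volume into a temporary once-off container.
	run("run", "-d", "--name", "temp_restore_container", "-v", "data:/backup_restore", "alpine")
	// Copy the extracted backup into the mounted volume.
	run("cp", "/tmp/backup/data-backup", "temp_restore_container:/backup_restore")
	// Clean up the temporary container.
	run("stop", "temp_restore_container")
	run("rm", "temp_restore_container")
}
```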
@@ -504,6 +534,28 @@ volumes:
data:
```
### Backing up to Filebase
```yml
version: '3'
services:
# ... define other services using the `data` volume here
backup:
image: offen/docker-volume-backup:latest
environment:
AWS_ENDPOINT: s3.filebase.com
AWS_BUCKET_NAME: filebase-bucket
AWS_ACCESS_KEY_ID: FILEBASE-ACCESS-KEY
AWS_SECRET_ACCESS_KEY: FILEBASE-SECRET-KEY
volumes:
- data:/backup/my-app-backup:ro
- /var/run/docker.sock:/var/run/docker.sock:ro
volumes:
data:
```
### Backing up to MinIO
```yml
@@ -526,6 +578,28 @@ volumes:
data:
```
### Backing up to WebDAV
```yml
version: '3'
services:
# ... define other services using the `data` volume here
backup:
image: offen/docker-volume-backup:latest
environment:
WEBDAV_URL: https://webdav.mydomain.me
WEBDAV_PATH: /my/directory/
WEBDAV_USERNAME: user
WEBDAV_PASSWORD: password
volumes:
- data:/backup/my-app-backup:ro
- /var/run/docker.sock:/var/run/docker.sock:ro
volumes:
data:
```
### Backing up locally
```yml

cmd/backup/main.go

@@ -9,6 +9,7 @@ import (
"errors"
"fmt"
"io"
"io/fs"
"os"
"path"
"path/filepath"
@@ -31,6 +32,7 @@ import (
"github.com/minio/minio-go/v7/pkg/credentials"
"github.com/otiai10/copy"
"github.com/sirupsen/logrus"
"github.com/studio-b12/gowebdav"
"golang.org/x/crypto/openpgp"
)
@@ -86,12 +88,13 @@ func main() {
// script holds all the stateful information required to orchestrate a
// single backup run.
type script struct {
cli *client.Client
mc *minio.Client
logger *logrus.Logger
sender *router.ServiceRouter
hooks []hook
hookLevel hookLevel
cli *client.Client
minioClient *minio.Client
webdavClient *gowebdav.Client
logger *logrus.Logger
sender *router.ServiceRouter
hooks []hook
hookLevel hookLevel
start time.Time
file string
@@ -112,6 +115,7 @@ type config struct {
BackupStopContainerLabel string `split_words:"true" default:"true"`
BackupFromSnapshot bool `split_words:"true"`
AwsS3BucketName string `split_words:"true"`
AwsS3Path string `split_words:"true"`
AwsEndpoint string `split_words:"true" default:"s3.amazonaws.com"`
AwsEndpointProto string `split_words:"true" default:"https"`
AwsEndpointInsecure bool `split_words:"true"`
@@ -127,6 +131,10 @@ type config struct {
EmailSMTPPort int `envconfig:"EMAIL_SMTP_PORT" default:"587"`
EmailSMTPUsername string `envconfig:"EMAIL_SMTP_USERNAME"`
EmailSMTPPassword string `envconfig:"EMAIL_SMTP_PASSWORD"`
WebdavUrl string `split_words:"true"`
WebdavPath string `split_words:"true" default:"/"`
WebdavUsername string `split_words:"true"`
WebdavPassword string `split_words:"true"`
}
var msgBackupFailed = "backup run failed"
@@ -206,7 +214,16 @@ func newScript() (*script, error) {
if err != nil {
return nil, fmt.Errorf("newScript: error setting up minio client: %w", err)
}
s.mc = mc
s.minioClient = mc
}
if s.c.WebdavUrl != "" {
if s.c.WebdavUsername == "" || s.c.WebdavPassword == "" {
return nil, errors.New("newScript: WEBDAV_URL is defined, but no credentials were provided")
} else {
webdavClient := gowebdav.NewClient(s.c.WebdavUrl, s.c.WebdavUsername, s.c.WebdavPassword)
s.webdavClient = webdavClient
}
}
if s.c.EmailNotificationRecipient != "" {
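
For context, a minimal standalone sketch of the client construction this hunk performs. The extra `Connect()` call is not in the diff; it is an optional up-front credentials check offered by gowebdav, shown here only as an assumption about how one might validate the configuration early:

```go
package main

import (
	"log"

	"github.com/studio-b12/gowebdav"
)

func main() {
	// Placeholder values; in the actual change these come from
	// WEBDAV_URL, WEBDAV_USERNAME and WEBDAV_PASSWORD.
	client := gowebdav.NewClient("https://webdav.example.com", "user", "password")
	if err := client.Connect(); err != nil {
		log.Fatalf("error connecting to WebDAV server: %v", err)
	}
	log.Println("WebDAV credentials accepted")
}
```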
@@ -508,8 +525,8 @@ func (s *script) encryptBackup() error {
// as per the given configuration.
func (s *script) copyBackup() error {
_, name := path.Split(s.file)
if s.mc != nil {
if _, err := s.mc.FPutObject(context.Background(), s.c.AwsS3BucketName, name, s.file, minio.PutObjectOptions{
if s.minioClient != nil {
if _, err := s.minioClient.FPutObject(context.Background(), s.c.AwsS3BucketName, filepath.Join(s.c.AwsS3Path, name), s.file, minio.PutObjectOptions{
ContentType: "application/tar+gzip",
}); err != nil {
return fmt.Errorf("copyBackup: error uploading backup to remote storage: %w", err)
@@ -517,6 +534,20 @@ func (s *script) copyBackup() error {
s.logger.Infof("Uploaded a copy of backup `%s` to bucket `%s`.", s.file, s.c.AwsS3BucketName)
}
if s.webdavClient != nil {
bytes, err := os.ReadFile(s.file)
if err != nil {
return fmt.Errorf("copyBackup: error reading the file to be uploaded: %w", err)
}
if err := s.webdavClient.MkdirAll(s.c.WebdavPath, 0644); err != nil {
return fmt.Errorf("copyBackup: error creating directory '%s' on WebDAV server: %w", s.c.WebdavPath, err)
}
if err := s.webdavClient.Write(filepath.Join(s.c.WebdavPath, name), bytes, 0644); err != nil {
return fmt.Errorf("copyBackup: error uploading the file to WebDAV server: %w", err)
}
s.logger.Infof("Uploaded a copy of backup `%s` to WebDAV-URL '%s' at path '%s'.", s.file, s.c.WebdavUrl, s.c.WebdavPath)
}
if _, err := os.Stat(s.c.BackupArchive); !os.IsNotExist(err) {
if err := copyFile(s.file, path.Join(s.c.BackupArchive, name)); err != nil {
return fmt.Errorf("copyBackup: error copying file to local archive: %w", err)
@@ -551,8 +582,9 @@ func (s *script) pruneOldBackups() error {
deadline := time.Now().AddDate(0, 0, -int(s.c.BackupRetentionDays))
if s.mc != nil {
candidates := s.mc.ListObjects(context.Background(), s.c.AwsS3BucketName, minio.ListObjectsOptions{
// Prune minio/S3 backups
if s.minioClient != nil {
candidates := s.minioClient.ListObjects(context.Background(), s.c.AwsS3BucketName, minio.ListObjectsOptions{
WithMetadata: true,
Prefix: s.c.BackupPruningPrefix,
})
@@ -580,7 +612,7 @@ func (s *script) pruneOldBackups() error {
}
close(objectsCh)
}()
errChan := s.mc.RemoveObjects(context.Background(), s.c.AwsS3BucketName, objectsCh, minio.RemoveObjectsOptions{})
errChan := s.minioClient.RemoveObjects(context.Background(), s.c.AwsS3BucketName, objectsCh, minio.RemoveObjectsOptions{})
var removeErrors []error
for result := range errChan {
if result.Err != nil {
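
The hunk above only shows fragments of the S3 pruning path, so here is a self-contained sketch of the minio-go v7 pattern it relies on: candidates are listed, filtered against the retention deadline, streamed into a channel, and `RemoveObjects` reports per-object failures on a second channel. Endpoint, credentials, bucket and prefix are placeholders, not values from the repository:

```go
package main

import (
	"context"
	"log"
	"time"

	"github.com/minio/minio-go/v7"
	"github.com/minio/minio-go/v7/pkg/credentials"
)

func main() {
	mc, err := minio.New("s3.example.com", &minio.Options{
		Creds:  credentials.NewStaticV4("ACCESS_KEY", "SECRET_KEY", ""),
		Secure: true,
	})
	if err != nil {
		log.Fatal(err)
	}

	// e.g. a 7 day retention period
	deadline := time.Now().AddDate(0, 0, -7)

	objectsCh := make(chan minio.ObjectInfo)
	go func() {
		defer close(objectsCh)
		for object := range mc.ListObjects(context.Background(), "backup-bucket", minio.ListObjectsOptions{
			Prefix:       "test",
			WithMetadata: true,
		}) {
			// Only forward objects that are older than the deadline.
			if object.Err == nil && object.LastModified.Before(deadline) {
				objectsCh <- object
			}
		}
	}()

	for result := range mc.RemoveObjects(context.Background(), "backup-bucket", objectsCh, minio.RemoveObjectsOptions{}) {
		if result.Err != nil {
			log.Printf("failed to remove %s: %v", result.ObjectName, result.Err)
		}
	}
}
```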
@@ -612,6 +644,38 @@ func (s *script) pruneOldBackups() error {
}
}
// Prune WebDAV backups
if s.webdavClient != nil {
candidates, err := s.webdavClient.ReadDir(s.c.WebdavPath)
if err != nil {
return fmt.Errorf("pruneOldBackups: error looking up candidates from remote storage: %w", err)
}
var matches []fs.FileInfo
var lenCandidates int
for _, candidate := range candidates {
lenCandidates++
if candidate.ModTime().Before(deadline) {
matches = append(matches, candidate)
}
}
if len(matches) != 0 && len(matches) != lenCandidates {
for _, match := range matches {
if err := s.webdavClient.Remove(filepath.Join(s.c.WebdavPath, match.Name())); err != nil {
return fmt.Errorf("pruneOldBackups: error removing a file from remote storage: %w", err)
}
s.logger.Infof("Pruned %s from WebDAV: %s", match.Name(), filepath.Join(s.c.WebdavUrl, s.c.WebdavPath))
}
s.logger.Infof("Pruned %d out of %d remote backup(s) as their age exceeded the configured retention period of %d days.", len(matches), lenCandidates, s.c.BackupRetentionDays)
} else if len(matches) != 0 && len(matches) == lenCandidates {
s.logger.Warnf("The current configuration would delete all %d remote backup copies.", len(matches))
s.logger.Warn("Refusing to do so, please check your configuration.")
} else {
s.logger.Infof("None of %d remote backup(s) were pruned.", lenCandidates)
}
}
// Prune local backups
if _, err := os.Stat(s.c.BackupArchive); !os.IsNotExist(err) {
globPattern := path.Join(
s.c.BackupArchive,
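
The WebDAV pruning added above boils down to a simple age filter over the `ReadDir` listing. As an illustrative extraction (a hypothetical helper, not code from the diff):

```go
package main

import (
	"fmt"
	"io/fs"
	"time"
)

// expired returns the entries whose modification time lies before the
// retention deadline, mirroring the check in pruneOldBackups.
func expired(listing []fs.FileInfo, retentionDays int) []fs.FileInfo {
	deadline := time.Now().AddDate(0, 0, -retentionDays)
	var matches []fs.FileInfo
	for _, candidate := range listing {
		if candidate.ModTime().Before(deadline) {
			matches = append(matches, candidate)
		}
	}
	return matches
}

func main() {
	// In the actual change the listing comes from webdavClient.ReadDir(path).
	var listing []fs.FileInfo
	fmt.Println(len(expired(listing, 7)))
}
```

Note the guard in the diff: if the filter would match every candidate, nothing is deleted and a warning is logged instead, so a misconfigured retention period cannot wipe all remote copies.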

go.mod

@@ -8,10 +8,11 @@ require (
github.com/gofrs/flock v0.8.1
github.com/kelseyhightower/envconfig v1.4.0
github.com/leekchan/timeutil v0.0.0-20150802142658-28917288c48d
github.com/m90/targz v0.0.0-20211229090208-2f22c2d9278e
github.com/m90/targz v0.0.0-20220208141135-d3baeef59a97
github.com/minio/minio-go/v7 v7.0.16
github.com/otiai10/copy v1.7.0
github.com/sirupsen/logrus v1.8.1
github.com/studio-b12/gowebdav v0.0.0-20211109083228-3f8721cd4b6f
golang.org/x/crypto v0.0.0-20210817164053-32db794688a5
)

go.sum

@@ -452,6 +452,8 @@ github.com/leekchan/timeutil v0.0.0-20150802142658-28917288c48d/go.mod h1:hO90vC
github.com/leodido/go-urn v1.2.0/go.mod h1:+8+nEpDfqqsY+g338gtMEUOtuK+4dEMhiQEgxpxOKII=
github.com/m90/targz v0.0.0-20211229090208-2f22c2d9278e h1:Kzm2zfxS40RUGD5UVtVtOo9RT5TtGoNJnmWORtCEaxM=
github.com/m90/targz v0.0.0-20211229090208-2f22c2d9278e/go.mod h1:YZK3bSO/oVlk9G+v00BxgzxW2Us4p/R4ysHOBjk0fJI=
github.com/m90/targz v0.0.0-20220208141135-d3baeef59a97 h1:Uc/WzUKI/zvhkqIzk5TyaPE6AY1SD1DWGc7RV7cky4s=
github.com/m90/targz v0.0.0-20220208141135-d3baeef59a97/go.mod h1:YZK3bSO/oVlk9G+v00BxgzxW2Us4p/R4ysHOBjk0fJI=
github.com/magiconair/properties v1.8.0/go.mod h1:PppfXfuXeibc/6YijjN8zIbojt8czPbwD3XqdrwzmxQ=
github.com/magiconair/properties v1.8.1/go.mod h1:PppfXfuXeibc/6YijjN8zIbojt8czPbwD3XqdrwzmxQ=
github.com/mailru/easyjson v0.0.0-20190403194419-1ea4449da983/go.mod h1:C1wdFJiN94OJF2b5HbByQZoLdCWB1Yqtg26g4irojpc=
@@ -659,6 +661,8 @@ github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UV
github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4=
github.com/stretchr/testify v1.6.1 h1:hDPOHmpOpP40lSULcqw7IrRb/u7w6RpDC9399XyoNd0=
github.com/stretchr/testify v1.6.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/studio-b12/gowebdav v0.0.0-20211109083228-3f8721cd4b6f h1:L2NE7BXnSlSLoNYZ0lCwZDjdnYjCNYC71k9ClZUTFTs=
github.com/studio-b12/gowebdav v0.0.0-20211109083228-3f8721cd4b6f/go.mod h1:bHA7t77X/QFExdeAnDzK6vKM34kEZAcE1OX4MfiwjkE=
github.com/subosito/gotenv v1.2.0/go.mod h1:N0PQaV/YGNqwC0u51sEeR/aUtSLEXKX9iv69rRypqCw=
github.com/syndtr/gocapability v0.0.0-20170704070218-db04d3cc01c8/go.mod h1:hkRG7XYTFWNJGYcbNJQlaLq0fg1yr4J4t/NcTQtrfww=
github.com/syndtr/gocapability v0.0.0-20180916011248-d98352740cb2/go.mod h1:hkRG7XYTFWNJGYcbNJQlaLq0fg1yr4J4t/NcTQtrfww=

docker-compose.yml (test environment)

@@ -10,13 +10,23 @@ services:
MINIO_SECRET_KEY: GMusLtUmILge2by+z890kQ
entrypoint: /bin/ash -c 'mkdir -p /data/backup && minio server /data'
volumes:
- backup_data:/data
- minio_backup_data:/data
webdav:
image: bytemark/webdav:2.4
environment:
AUTH_TYPE: Digest
USERNAME: test
PASSWORD: test
volumes:
- webdav_backup_data:/var/lib/dav
backup: &default_backup_service
image: offen/docker-volume-backup:${TEST_VERSION}
hostname: hostnametoken
depends_on:
- minio
- webdav
restart: always
environment:
AWS_ACCESS_KEY_ID: test
@@ -32,6 +42,10 @@ services:
BACKUP_PRUNING_LEEWAY: 5s
BACKUP_PRUNING_PREFIX: test
GPG_PASSPHRASE: 1234secret
WEBDAV_URL: http://webdav/
WEBDAV_PATH: /my/new/path/
WEBDAV_USERNAME: test
WEBDAV_PASSWORD: test
volumes:
- ./local:/archive
- app_data:/backup/app_data:ro
@@ -45,5 +59,6 @@ services:
- app_data:/var/opt/offen
volumes:
backup_data:
minio_backup_data:
webdav_backup_data:
app_data:

Test script

@@ -13,10 +13,13 @@ docker-compose exec offen ln -s /var/opt/offen/offen.db /var/opt/offen/db.link
docker-compose exec backup backup
docker run --rm -it \
-v compose_backup_data:/data alpine \
ash -c 'apk add gnupg && echo 1234secret | gpg -d --pinentry-mode loopback --passphrase-fd 0 --yes /data/backup/test-hostnametoken.tar.gz.gpg > /tmp/test-hostnametoken.tar.gz && tar -xf /tmp/test-hostnametoken.tar.gz -C /tmp && test -f /tmp/backup/app_data/offen.db'
-v compose_minio_backup_data:/minio_data \
-v compose_webdav_backup_data:/webdav_data alpine \
ash -c 'apk add gnupg && \
echo 1234secret | gpg -d --pinentry-mode loopback --passphrase-fd 0 --yes /minio_data/backup/test-hostnametoken.tar.gz.gpg > /tmp/test-hostnametoken.tar.gz && tar -xf /tmp/test-hostnametoken.tar.gz -C /tmp && test -f /tmp/backup/app_data/offen.db && \
echo 1234secret | gpg -d --pinentry-mode loopback --passphrase-fd 0 --yes /webdav_data/data/my/new/path/test-hostnametoken.tar.gz.gpg > /tmp/test-hostnametoken.tar.gz && tar -xf /tmp/test-hostnametoken.tar.gz -C /tmp && test -f /tmp/backup/app_data/offen.db'
echo "[TEST:PASS] Found relevant files in untared remote backup."
echo "[TEST:PASS] Found relevant files in untared remote backups."
test -L ./local/test-hostnametoken.latest.tar.gz.gpg
echo 1234secret | gpg -d --yes --passphrase-fd 0 ./local/test-hostnametoken.tar.gz.gpg > ./local/decrypted.tar.gz
@@ -26,7 +29,7 @@ test -L /tmp/backup/app_data/db.link
echo "[TEST:PASS] Found relevant files in untared local backup."
if [ "$(docker-compose ps -q | wc -l)" != "3" ]; then
if [ "$(docker-compose ps -q | wc -l)" != "4" ]; then
echo "[TEST:FAIL] Expected all containers to be running post backup, instead seen:"
docker-compose ps
exit 1
@@ -43,8 +46,10 @@ sleep 5
docker-compose exec backup backup
docker run --rm -it \
-v compose_backup_data:/data alpine \
ash -c '[ $(find /data/backup/ -type f | wc -l) = "1" ]'
-v compose_minio_backup_data:/minio_data \
-v compose_webdav_backup_data:/webdav_data alpine \
ash -c '[ $(find /minio_data/backup/ -type f | wc -l) = "1" ] && \
[ $(find /webdav_data/data/my/new/path/ -type f | wc -l) = "1" ]'
echo "[TEST:PASS] Remote backups have not been deleted."