Compare commits

...

74 Commits

Author SHA1 Message Date
MaxJa4
336c5bed71 Replace Gzip with PGzip (#266)
* Replace Gzip with optimized PGzip. Add concurrency option.

* Add shortened timeout for 'dc down' too.

* Add NaturalNumberZero to allow zero.

* Add test for concurrency=0

* Rename to GZIP_PARALLELISM

* Fix block size. Fix compression level. Fix CI.

* Refactor compression writer fetching. Renamed WholeNumber
2023-09-03 16:49:52 +02:00
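The knobs this change introduces surface as plain environment variables. A minimal sketch of enabling multi-core gzip in a compose service (the service layout is illustrative; the variable semantics follow the README additions shown later in this diff):

```yml
services:
  backup:
    image: offen/docker-volume-backup:v2
    environment:
      BACKUP_COMPRESSION: gz # gzip, now backed by pgzip per this change
      GZIP_PARALLELISM: 0    # 0 means: use all available threads; default is 1
```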
Frederik Ring
1e39ac41f4 Run tests in Docker in Docker (#261)
* Try running tests in Docker

* Spawn new container for each test

* Store test artifacts outside of mount

* When requested, build up to date image in test script

* sudo is unnecessary in containerized test env

* Skip azure test

* Backdate fixture file in JSON database

* Pin versions for azure tools

* Mount temp volume for /var/lib/docker to prevent dangling ones created by VOLUME instruction

* Fail backdating tests with message

* Add some documentation on test setup

* Cache images

* Run compose stacks with shortened default timeout
2023-09-02 15:17:46 +02:00
MaxJa4
43c4961116 Support _FILE based configuration values (#264)
* Replace envconfig with env

* Adjust config options and processing

* Added _FILE variant for all password vars.

* Try pathenvconfig

* Revert everything so far

* Use our fork of envconfig with custom lookup

* Use our fork of envconfig with custom lookup

* Test compose timeout option

* Remove secret resolving and specific _FILE config

* Fix timing issue in swarm tests

* Revert "Test compose timeout option"

This reverts commit ab50b21748, reversing
changes made to 0282514b2b.

Revert "Test compose timeout option"

This reverts commit 0282514b2b.

* Use offen/envconfig v1.5.0

* Add info about _FILE in README

* Value > File. Panic on file error. Panic on duplicate presence.

* Test panic on duplicate vars and panic on file error.
2023-08-30 20:14:24 +02:00
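With this change, any documented variable can also be supplied as a `_FILE` variant pointing at a file containing the value, which pairs naturally with Docker secrets. A minimal sketch (secret names and paths are illustrative):

```yml
services:
  backup:
    image: offen/docker-volume-backup:v2
    environment:
      AWS_ACCESS_KEY_ID: my-key-id
      # read from the mounted secret file; per this change, setting both the
      # plain variable and its _FILE variant panics at startup
      AWS_SECRET_ACCESS_KEY_FILE: /run/secrets/aws_secret_access_key
    secrets:
      - aws_secret_access_key

secrets:
  aws_secret_access_key:
    file: ./aws_secret_access_key.txt
```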
Frederik Ring
24a6ec9480 Dropbox was missing from notification docs 2023-08-28 20:42:45 +02:00
MaxJa4
ad4e2af83f Exclude specific backends from pruning (#262)
* Skip backends while pruning

* Add pruning test step and silence download log for better readability

* Add test cases for pruning in all backends

Also add -q or --quiet-pull to all tests.

* Add test case for skipping backends while pruning

* Adjusted test logging, generate new test spec file

* Gitignore for temp test file
2023-08-27 19:19:11 +02:00
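In practice this is a single comma-separated variable; a minimal sketch (per the README additions below, backend names are case insensitive):

```yml
services:
  backup:
    image: offen/docker-volume-backup:v2
    environment:
      BACKUP_RETENTION_DAYS: "7"
      # prune local copies only; keep everything uploaded to S3 and WebDAV
      BACKUP_SKIP_BACKENDS_FROM_PRUNE: s3,webdav
```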
MaxJa4
5fcc96edf9 Cleanup: Lint warnings and deprecated packages (#263)
* Fix lint warnings and std lib deprecations

* Replace deprecated std lib with maintained drop-in replacement fork

Backwards compatible with the original package; suggested by the std lib due to security and stability issues.

* OAuth2 is now a direct dependency due to Dropbox

* Undo change

* Revert "Replace deprecated std lib with maintained drop-in replacement fork"

This reverts commit 2887bd409f.

* Update channel handling

* Add linter for PRs

* Rename CI, fetch all issues, add govet
2023-08-27 18:14:55 +02:00
Frederik Ring
3d7677f02a Print context in log field instead of prepending to message (#260)
* Print context in log field instead of prepending to message

* Log messages on pruning do not need a description anymore

* Remove redundant information from logs and errors
2023-08-25 12:44:43 +02:00
Frederik Ring
88a4794083 Zstd test does not need to spin up MinIO server (#258) 2023-08-24 22:14:04 +02:00
Frederik Ring
7011261dc5 Pin version of mock openapi server used in Dropbox test 2023-08-24 22:01:07 +02:00
Frederik Ring
9ba8143be2 Add license headers to vendored Dropbox swagger spec 2023-08-24 19:52:33 +02:00
dependabot[bot]
b90fc9ea4d Bump github.com/minio/minio-go/v7 from 7.0.61 to 7.0.62 (#255)
Bumps [github.com/minio/minio-go/v7](https://github.com/minio/minio-go) from 7.0.61 to 7.0.62.
- [Release notes](https://github.com/minio/minio-go/releases)
- [Commits](https://github.com/minio/minio-go/compare/v7.0.61...v7.0.62)

---
updated-dependencies:
- dependency-name: github.com/minio/minio-go/v7
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-08-24 19:37:50 +02:00
MaxJa4
e08a3303bf Add new storage backend: Dropbox (#103) (#251)
* Add new storage backend: Dropbox (#103)

* Remove duplicate check

* Add concurrency level for parallel upload to dropbox.

* Fixed some instabilities. Changed default concurrency to 6.

* Added some env config vars to readme. WIP

* Wrap errors for storage backend creation.

* Fixed token issue, added OAuth2 including recipe and docs.

* Readme typo fix

* Test for dropbox integration

* Update info and TOC

* Missed a file

* Docker-compose fix

* Fix endpoint connection

* Fix container names

* Fix log fetching

* Fix log fetching (again)

* Print command output to logs

* Addressing comments part 1

* Address comments part 2

* OpenAPI Mock spec path adjusted
* Dropbox FileMetadata reflection refactored
* NaturalNumber type added

* Add OAuth2 mock server for CI testing

* Fix env name of oauth2 endpoint

* Remove hostname

* Add forgotten change to commit...

* Fix oauth2 endpoint

"Worked on my machine"

* Try again

* Try suggested hostname again

* Fix docker internal DNS resolving issues (as suggested by oauth2 mock docs)

* Add docker network, remove hostname

* Network not external

* Last hostname try

* Add more delay, add oauth2 endpoint log

* Temp CI log output of command even when failing

* Try different config and method

* Add custom server-hostname. Rename test folder to accelerate debugging

* Try that fix again

* Adding quotes

* Port fix attempt

* Try localhost

* Try extra hosts

* Change network mode

* Undo some changes

* Use static IP

* Remove specific IP binding

* Change to default net driver

* Fix static IP

* Squash for revert

* Revert "Squash for revert"

This reverts commit e9b617be9a.

* Actual fix for CI testing from #257
2023-08-24 19:33:47 +02:00
MaxJa4
47326c7c59 Fix storage backends not outputting any info logs (#250) 2023-08-20 20:35:25 +02:00
Michal Middleton
67e7288855 Add support for zstd compression (#249)
Co-authored-by: Michal Middleton <jafa81@gmail.com>
2023-08-19 19:20:13 +02:00
dependabot[bot]
1765b06835 Bump github.com/pkg/sftp from 1.13.5 to 1.13.6 (#248) 2023-08-15 04:38:55 +00:00
Frederik Ring
67d978f515 Drop logrus dependency, log using slog package from stdlib (#247) 2023-08-10 19:41:03 +02:00
Frederik Ring
a93ff6fe09 Build in Go 1.21 (#246) 2023-08-10 16:03:59 +02:00
Frederik Ring
1c6f64e254 Current Docker client breaks in newer Go versions (#241)
* Current Docker client breaks in newer Go versions

* Cater for breaking API changes in Docker client

* Update Docker client

* Unpin Go version used for build

* Tidy sum file
2023-07-25 19:46:57 +02:00
dependabot[bot]
085d2c5dfd Bump github.com/minio/minio-go/v7 from 7.0.59 to 7.0.61 (#240) 2023-07-24 19:16:02 +00:00
dependabot[bot]
b1382dee00 Bump github.com/Azure/azure-sdk-for-go/sdk/storage/azblob (#239) 2023-07-24 19:15:56 +00:00
Frederik Ring
c3732107b1 Current Docker client breaks in Go 1.20.6 (#242) 2023-07-24 21:01:28 +02:00
dependabot[bot]
d288c87c54 Bump github.com/minio/minio-go/v7 from 7.0.58 to 7.0.59 (#238) 2023-07-04 07:53:19 +00:00
dependabot[bot]
47491439a1 Bump github.com/studio-b12/gowebdav (#235) 2023-06-27 07:46:34 +00:00
dependabot[bot]
94f71ac765 Bump github.com/minio/minio-go/v7 from 7.0.57 to 7.0.58 (#236) 2023-06-27 05:25:42 +00:00
dependabot[bot]
2addf1dd6c Bump golang.org/x/sync from 0.2.0 to 0.3.0 (#234) 2023-06-20 12:29:11 +00:00
dependabot[bot]
c07990eaf6 Bump github.com/minio/minio-go/v7 from 7.0.56 to 7.0.57 (#233) 2023-06-20 12:28:06 +00:00
jsloane
a27743bd32 Update README.md (#230) 2023-06-17 08:30:48 +02:00
dependabot[bot]
9d5b897ab4 Bump golang.org/x/sync from 0.1.0 to 0.2.0 (#229) 2023-06-13 07:17:58 +00:00
dependabot[bot]
30bf31cd90 Bump github.com/Azure/azure-sdk-for-go/sdk/storage/azblob (#228) 2023-06-13 07:04:03 +00:00
dependabot[bot]
32e9a05b40 Bump github.com/minio/minio-go/v7 from 7.0.44 to 7.0.56 (#227) 2023-06-13 07:03:38 +00:00
dependabot[bot]
b302884447 Bump golang.org/x/crypto from 0.3.0 to 0.9.0 (#223) 2023-06-10 13:15:41 +00:00
dependabot[bot]
b3e1ce27be Bump github.com/Azure/azure-sdk-for-go/sdk/azidentity (#225) 2023-06-10 12:57:44 +00:00
dependabot[bot]
66518ed0ff Bump github.com/sirupsen/logrus from 1.9.0 to 1.9.3 (#226) 2023-06-10 12:57:39 +00:00
dependabot[bot]
14d966d41a Bump github.com/otiai10/copy from 1.10.0 to 1.11.0 (#224) 2023-06-10 12:57:30 +00:00
Frederik Ring
336dece328 Set up automated updates for Docker base images and Go packages 2023-06-10 14:42:28 +02:00
Frederik Ring
dc8172b673 Use alpine-1.18 as the base image (#219) 2023-06-03 13:08:35 +02:00
Erwan LE PRADO
5ea9a7ce15 feat: add better handler for part size (#214)
* feat: add better handler for part size

  fix: use local file
  fix: try with another path
  fix: use bytes
  chore: go back
  go back readme
  goback
  goback
  goback

* chore: better handling

* fix: typo readme

* chore: wrong comparison

* fix: typo
2023-06-02 16:30:02 +02:00
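Part size matters because S3 multipart uploads are bounded by a provider-specific part count: at the 1,000-part limit the README mentions for Scaleway, the default 16 MB parts cap a single archive at roughly 16 GB, while 64 MB parts raise that to roughly 64 GB. A minimal sketch of raising it (the value is illustrative):

```yml
services:
  backup:
    image: offen/docker-volume-backup:v2
    environment:
      # part size in MB; maximum object size ≈ AWS_PART_SIZE × provider part limit
      AWS_PART_SIZE: "64"
```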
Frederik Ring
bcffe0bc25 Clarify docs section about user executing labeled commands 2023-05-26 16:15:09 +02:00
dependabot[bot]
144e65ce6f Bump github.com/docker/distribution (#215) 2023-05-11 21:07:45 +00:00
ba-tno
07afa53cd3 Update shoutrrr to 0.7 (#213)
* Replace docker-compose reference with docker[space]compose

* Update shoutrrr only to 0.7.1

* modules after go mod tidy

* Refer to v0.7 docs of shoutrrr

* Remove 'v' from shoutrrr doc link
2023-04-29 20:14:04 +02:00
Frederik Ring
9a07f5486b Docs reference incorrect shoutrrr version 2023-04-29 14:44:39 +02:00
Frederik Ring
d4c5f65f31 Entrypoint permissions can be set on COPY (#211) 2023-04-28 20:06:57 +02:00
Frederik Ring
5b8a484d80 Documentation around user label is lacking 2023-04-28 16:01:17 +02:00
Frederik Ring
37c01a578c TaskTemplate.ForceUpdate is a counter (#209) 2023-04-26 08:45:12 +02:00
Frederik Ring
46c6441d48 Add note about GHCR to README 2023-04-07 12:00:39 +02:00
Frederik Ring
5715d9ff9b Update of package copy does not fail on deleted files (#206) 2023-04-07 11:28:36 +02:00
dependabot[bot]
6ba173d916 Bump github.com/docker/docker (#205) 2023-04-05 04:58:07 +00:00
Frederik Ring
301fe6628c on: is expected to be an object 2023-04-02 19:45:46 +02:00
Frederik Ring
5ff2d53602 Items in on: are expected to be objects 2023-04-02 19:44:51 +02:00
Frederik Ring
cddd1fdcea Prevent duplicate builds on pull request 2023-04-02 19:41:49 +02:00
Frederik Ring
808cf8f82d Local directory can be used instead of volume for storing test artifact (#204) 2023-04-02 19:41:00 +02:00
Frederik Ring
c177202ac1 Multi platform build requires explicit buildx setup 2023-04-02 11:51:35 +02:00
Frederik Ring
27c2201161 Branches filter is a glob pattern, not a regex 2023-04-02 11:46:06 +02:00
Diulgher Artiom
7f20036b15 Possibility to use -u (user) option in docker exec (#203)
* Add user option for docker exec

* Add test for user option

* Return test version for image

* remove gitea config file

* refactor tests

* remove comments & fix image name

* add docs

* cleanup

* Update README.md with suggested correction

Co-authored-by: Frederik Ring <frederik.ring@gmail.com>

* fix backup command & bind folder instead of volume

---------

Co-authored-by: tao <generaltao.md@gmail.com>
Co-authored-by: Frederik Ring <frederik.ring@gmail.com>
2023-04-02 11:12:10 +02:00
Frederik Ring
2ac1f0cea4 Also trigger test runs on Pull Request 2023-03-29 07:57:09 +02:00
Frederik Ring
66ad124ddd any can be used instead of interface{} 2023-03-16 19:48:12 +01:00
Frederik Ring
aee802cb09 Migrate CI setup to GitHub Actions, also publish to GHCR (#199)
* Run tests in GitHub actions

* Do not try to allocate a pseudo TTY when running compose commands

* Try hard disabling TTY allocation

* Use compose plugin

* Test scripts shall not try to allocate a TTY

* Pass correct base version

* Check whether env var is even needed

* Stop running tests in CircleCI

* Run releases from GitHub actions as well

* Manually construct tags to be pushed on release
2023-03-16 19:32:44 +01:00
dependabot[bot]
a06ad1957a Bump github.com/docker/distribution (#195) 2023-03-07 06:56:56 +00:00
dependabot[bot]
15786c5da3 Bump golang.org/x/net from 0.2.0 to 0.7.0 (#191) 2023-02-18 06:34:40 +00:00
dependabot[bot]
641a3203c7 Bump github.com/containerd/containerd from 1.6.6 to 1.6.18 (#190) 2023-02-16 19:32:46 +00:00
Frederik Ring
5adfe3989e Document usage with rootless Docker installations
As described in #189
2023-02-16 08:18:57 +01:00
dependabot[bot]
550833be33 Merge pull request #188 from offen/dependabot/go_modules/github.com/containrrr/shoutrrr-0.6.0 2023-02-14 19:09:43 +00:00
dependabot[bot]
201a983ea4 Bump github.com/containrrr/shoutrrr from 0.5.2 to 0.6.0
Bumps [github.com/containrrr/shoutrrr](https://github.com/containrrr/shoutrrr) from 0.5.2 to 0.6.0.
- [Release notes](https://github.com/containrrr/shoutrrr/releases)
- [Changelog](https://github.com/containrrr/shoutrrr/blob/main/goreleaser.yml)
- [Commits](https://github.com/containrrr/shoutrrr/compare/v0.5.2...v0.6.0)

---
updated-dependencies:
- dependency-name: github.com/containrrr/shoutrrr
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-02-14 18:57:32 +00:00
Frederik Ring
2d37e08743 Use go 1.20, join errors using stdlib (#182)
* Use go 1.20, join errors using stdlib

* Use go 1.20 proper
2023-02-02 21:07:25 +01:00
Frederik Ring
1e36bd3eb7 Non-streaming upload to WebDAV fails on big files (#181) 2023-01-16 08:28:29 +01:00
Frederik Ring
e93a74dd48 Instructions in issue templates are not supposed to be shown after submission 2023-01-12 18:02:46 +01:00
Frederik Ring
f799e6c2e9 Azure Blob Storage is missing from headline in README 2023-01-11 21:54:50 +01:00
Frederik Ring
5c04e11f10 Add support for Azure Blob Storage (#171)
* Scaffold Azure storage backend that does nothing yet

* Implement copy for Azure Blob Storage

* Set up automated testing for Azure Storage

* Implement pruning for Azure blob storage

* Add documentation for Azure Blob Storage

* Add support for remote path

* Add azure to notifications doc

* Tidy go.mod file

* Allow use of managed identity credential

* Use volume in tests

* Auto append trailing slash to endpoint if needed, clarify docs, tidy mod file
2023-01-11 21:40:48 +01:00
Frederik Ring
aadbaa741d Update intro in README 2023-01-08 09:39:32 +01:00
Frederik Ring
9b7af67a26 Run tests using compose v2 (#178) 2023-01-07 18:50:27 +01:00
Frederik Ring
1cb4883458 Update alpine base image to 3.17 (#177) 2023-01-05 19:46:47 +01:00
Frederik Ring
982f4fe191 Fix mistake in README 2022-12-30 16:16:46 +01:00
Frederik Ring
63961cd826 Pass file location to lifecycle commands (#173)
* Add test case for extending image and calling through to rsync

* Keep backup file location env var

* Add documentation

* Work against untared content in test
2022-12-30 16:07:34 +01:00
Frederik Ring
9534cde7d9 Allow use of a custom ca cert when working against S3 storages (#170) 2022-12-22 14:37:51 +01:00
81 changed files with 16171 additions and 722 deletions


@@ -1,75 +0,0 @@
version: 2.1
jobs:
  canary:
    machine:
      image: ubuntu-2004:202201-02
    working_directory: ~/docker-volume-backup
    resource_class: large
    steps:
      - checkout
      - run:
          name: Build
          command: |
            docker build . -t offen/docker-volume-backup:canary
      - run:
          name: Install gnupg
          command: |
            sudo apt-get install -y gnupg
      - run:
          name: Run tests
          working_directory: ~/docker-volume-backup/test
          command: |
            export GPG_TTY=$(tty)
            ./test.sh canary
  build:
    docker:
      - image: cimg/base:2020.06
        environment:
          DOCKER_BUILDKIT: '1'
          DOCKER_CLI_EXPERIMENTAL: enabled
    working_directory: ~/docker-volume-backup
    resource_class: large
    steps:
      - checkout
      - setup_remote_docker:
          version: 20.10.6
      - docker/install-docker-credential-helper:
          release-tag: v0.6.4
      - docker/configure-docker-credentials-store
      - run:
          name: Push to Docker Hub
          command: |
            echo "$DOCKER_ACCESSTOKEN" | docker login --username offen --password-stdin
            # This is required for building ARM: https://gitlab.alpinelinux.org/alpine/aports/-/issues/12406
            docker run --rm --privileged linuxkit/binfmt:v0.8
            docker context create docker-volume-backup
            docker buildx create docker-volume-backup --name docker-volume-backup --use
            docker buildx inspect --bootstrap
            tag_args="-t offen/docker-volume-backup:$CIRCLE_TAG"
            if [[ "$CIRCLE_TAG" =~ ^v[0-9]+\.[0-9]+\.[0-9]+$ ]]; then
              # prerelease tags like `v2.0.0-alpha.1` should not be released as `latest`
              tag_args="$tag_args -t offen/docker-volume-backup:latest"
              tag_args="$tag_args -t offen/docker-volume-backup:$(echo "$CIRCLE_TAG" | cut -d. -f1)"
            fi
            docker buildx build --platform linux/amd64,linux/arm64,linux/arm/v7 \
              $tag_args . --push
workflows:
  version: 2
  docker_image:
    jobs:
      - canary:
          filters:
            tags:
              ignore: /^v.*/
      - build:
          filters:
            branches:
              ignore: /.*/
            tags:
              only: /^v.*/
orbs:
  docker: circleci/docker@2.1.4


@@ -8,7 +8,9 @@ assignees: ''
 ---

 **Describe the bug**
+<!--
 A clear and concise description of what the bug is.
+-->

 **To Reproduce**
 Steps to reproduce the behavior:
@@ -17,12 +19,16 @@ Steps to reproduce the behavior:
 3. ...

 **Expected behavior**
+<!--
 A clear and concise description of what you expected to happen.
+-->

-**Desktop (please complete the following information):**
+**Version (please complete the following information):**
- - Image Version: [e.g. v2.21.0]
- - Docker Version: [e.g. 20.10.17]
- - Docker Compose Version (if applicable): [e.g. 1.29.2]
+ - Image Version: <!-- e.g. v2.21.0 -->
+ - Docker Version: <!-- e.g. 20.10.17 -->
+ - Docker Compose Version (if applicable): <!-- e.g. 1.29.2 -->

 **Additional context**
+<!--
 Add any other context about the problem here.
+-->


@@ -8,13 +8,21 @@ assignees: ''
 ---

 **Is your feature request related to a problem? Please describe.**
+<!--
 A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
+-->

 **Describe the solution you'd like**
+<!--
 A clear and concise description of what you want to happen.
+-->

 **Describe alternatives you've considered**
+<!--
 A clear and concise description of any alternative solutions or features you've considered.
+-->

 **Additional context**
+<!--
 Add any other context or screenshots about the feature request here.
+-->


@@ -8,13 +8,21 @@ assignees: ''
 ---

 **What are you trying to do?**
+<!--
 A clear and concise description of what you are trying to do, but cannot get working.
+-->

 **What is your current configuration?**
+<!--
 Add the full configuration you are using. Please redact out any real-world credentials.
+-->

 **Log output**
+<!--
 Provide the full log output of your setup.
+-->

 **Additional context**
+<!--
 Add any other context or screenshots about the support request here.
+-->

.github/dependabot.yml (new file, 10 lines)

@@ -0,0 +1,10 @@
version: 2
updates:
  - package-ecosystem: docker
    directory: /
    schedule:
      interval: weekly
  - package-ecosystem: gomod
    directory: /
    schedule:
      interval: weekly

.github/workflows/golangci-lint.yml (new file, 54 lines)

@@ -0,0 +1,54 @@
name: Run Linters
on:
  push:
    branches:
      - main
  pull_request:

permissions:
  contents: read
  # Optional: allow read access to pull request. Use with `only-new-issues` option.
  pull-requests: read

jobs:
  golangci:
    name: lint
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-go@v4
        with:
          go-version: '1.21'
          cache: false
      - name: golangci-lint
        uses: golangci/golangci-lint-action@v3
        with:
          # Require: The version of golangci-lint to use.
          # When `install-mode` is `binary` (default) the value can be v1.2 or v1.2.3 or `latest` to use the latest version.
          # When `install-mode` is `goinstall` the value can be v1.2.3, `latest`, or the hash of a commit.
          version: v1.54

          # Optional: working directory, useful for monorepos
          # working-directory: somedir

          # Optional: golangci-lint command line arguments.
          #
          # Note: By default, the `.golangci.yml` file should be at the root of the repository.
          # The location of the configuration file can be changed by using `--config=`
          # args: --timeout=30m --config=/my/path/.golangci.yml --issues-exit-code=0

          # Optional: show only new issues if it's a pull request. The default value is `false`.
          # only-new-issues: true

          # Optional: if set to true, then all caching functionality will be completely disabled,
          # takes precedence over all other caching options.
          # skip-cache: true

          # Optional: if set to true, then the action won't cache or restore ~/go/pkg.
          # skip-pkg-cache: true

          # Optional: if set to true, then the action won't cache or restore ~/.cache/go-build.
          # skip-build-cache: true

          # Optional: The mode to install golangci-lint. It can be 'binary' or 'goinstall'.
          # install-mode: "goinstall"

.github/workflows/release.yml (new file, 59 lines)

@@ -0,0 +1,59 @@
name: Release Docker Image
on:
  push:
    tags: v**

jobs:
  push_to_registries:
    name: Push Docker image to multiple registries
    runs-on: ubuntu-latest
    permissions:
      packages: write
      contents: read
    steps:
      - name: Check out the repo
        uses: actions/checkout@v3
      - name: Set up QEMU
        uses: docker/setup-qemu-action@v2
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
      - name: Log in to Docker Hub
        uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKER_USERNAME }}
          password: ${{ secrets.DOCKER_PASSWORD }}
      - name: Log in to GHCR
        uses: docker/login-action@v2
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - name: Extract Docker tags
        id: meta
        run: |
          version_tag="${{ github.ref_name }}"
          tags=($version_tag)
          if [[ "$version_tag" =~ ^v[0-9]+\.[0-9]+\.[0-9]+$ ]]; then
            # prerelease tags like `v2.0.0-alpha.1` should not be released as `latest` nor `v2`
            tags+=("latest")
            tags+=($(echo "$version_tag" | cut -d. -f1))
          fi
          releases=""
          for tag in "${tags[@]}"; do
            releases="${releases:+$releases,}offen/docker-volume-backup:$tag,ghcr.io/offen/docker-volume-backup:$tag"
          done
          echo "releases=$releases" >> "$GITHUB_OUTPUT"
      - name: Build and push Docker images
        uses: docker/build-push-action@v4
        with:
          context: .
          push: true
          platforms: linux/amd64,linux/arm64,linux/arm/v7
          tags: ${{ steps.meta.outputs.releases }}

.github/workflows/test.yml (new file, 21 lines)

@@ -0,0 +1,21 @@
name: Run Integration Tests
on:
  push:
    branches:
      - main
  pull_request:

jobs:
  test:
    runs-on: ubuntu-22.04
    steps:
      - uses: actions/checkout@v3
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
      - name: Run Tests
        working-directory: ./test
        run: |
          BUILD_IMAGE=1 ./test.sh

.golangci.yml (new file, 8 lines)

@@ -0,0 +1,8 @@
linters:
  # Enable specific linter
  # https://golangci-lint.run/usage/linters/#enabled-by-default
  enable:
    - staticcheck
    - govet
output:
  format: github-actions


@@ -1,7 +1,7 @@
 # Copyright 2021 - Offen Authors <hioffen@posteo.de>
 # SPDX-License-Identifier: MPL-2.0

-FROM golang:1.19-alpine as builder
+FROM golang:1.21-alpine as builder
 WORKDIR /app
 COPY . .
@@ -9,15 +9,13 @@ RUN go mod download
 WORKDIR /app/cmd/backup
 RUN go build -o backup .

-FROM alpine:3.16
+FROM alpine:3.18

 WORKDIR /root
 RUN apk add --no-cache ca-certificates
 COPY --from=builder /app/cmd/backup/backup /usr/bin/backup
-COPY ./entrypoint.sh /root/
-RUN chmod +x entrypoint.sh
+COPY --chmod=755 ./entrypoint.sh /root/

 ENTRYPOINT ["/root/entrypoint.sh"]

README.md (279 lines changed)

@@ -4,16 +4,17 @@
 # docker-volume-backup

-Backup Docker volumes locally or to any S3 compatible storage.
+Backup Docker volumes locally or to any S3, WebDAV, Azure Blob Storage, Dropbox or SSH compatible storage.

 The [offen/docker-volume-backup](https://hub.docker.com/r/offen/docker-volume-backup) Docker image can be used as a lightweight (below 15MB) sidecar container to an existing Docker setup.

-It handles __recurring or one-off backups of Docker volumes__ to a __local directory__, __any S3, WebDAV or SSH compatible storage (or any combination) and rotates away old backups__ if configured. It also supports __encrypting your backups using GPG__ and __sending notifications for failed backup runs__.
+It handles __recurring or one-off backups of Docker volumes__ to a __local directory__, __any S3, WebDAV, Azure Blob Storage, Dropbox or SSH compatible storage (or any combination) and rotates away old backups__ if configured. It also supports __encrypting your backups using GPG__ and __sending notifications for failed backup runs__.

 <!-- MarkdownTOC -->

 - [Quickstart](#quickstart)
   - [Recurring backups in a compose setup](#recurring-backups-in-a-compose-setup)
   - [One-off backups using Docker CLI](#one-off-backups-using-docker-cli)
+  - [Available image registries](#available-image-registries)
 - [Configuration reference](#configuration-reference)
 - [How to](#how-to)
   - [Stop containers during backup](#stop-containers-during-backup)
@@ -30,15 +31,21 @@ It handles __recurring or one-off backups of Docker volumes__ to a __local direc
   - [Replace deprecated `BACKUP_FROM_SNAPSHOT` usage](#replace-deprecated-backup_from_snapshot-usage)
   - [Replace deprecated `exec-pre` and `exec-post` labels](#replace-deprecated-exec-pre-and-exec-post-labels)
   - [Using a custom Docker host](#using-a-custom-docker-host)
+  - [Use with rootless Docker](#use-with-rootless-docker)
   - [Run multiple backup schedules in the same container](#run-multiple-backup-schedules-in-the-same-container)
   - [Define different retention schedules](#define-different-retention-schedules)
   - [Use special characters in notification URLs](#use-special-characters-in-notification-urls)
+  - [Handle file uploads using third party tools](#handle-file-uploads-using-third-party-tools)
+  - [Setup Dropbox storage backend](#setup-dropbox-storage-backend)
 - [Recipes](#recipes)
   - [Backing up to AWS S3](#backing-up-to-aws-s3)
   - [Backing up to Filebase](#backing-up-to-filebase)
   - [Backing up to MinIO](#backing-up-to-minio)
+  - [Backing up to MinIO \(using Docker secrets\)](#backing-up-to-minio-using-docker-secrets)
   - [Backing up to WebDAV](#backing-up-to-webdav)
   - [Backing up to SSH](#backing-up-to-ssh)
+  - [Backing up to Azure Blob Storage](#backing-up-to-azure-blob-storage)
+  - [Backing up to Dropbox](#backing-up-to-dropbox)
   - [Backing up locally](#backing-up-locally)
   - [Backing up to AWS S3 as well as locally](#backing-up-to-aws-s3-as-well-as-locally)
   - [Running on a custom cron schedule](#running-on-a-custom-cron-schedule)
@@ -117,9 +124,24 @@ docker run --rm \
 Alternatively, pass a `--env-file` in order to use a full config as described below.

+### Available image registries
+
+This Docker image is published to both Docker Hub and the GitHub container registry.
+Depending on your preferences and needs, you can reference both `offen/docker-volume-backup` as well as `ghcr.io/offen/docker-volume-backup`:
+
+```
+docker pull offen/docker-volume-backup:v2
+docker pull ghcr.io/offen/docker-volume-backup:v2
+```
+
+Documentation references Docker Hub, but all examples will work using ghcr.io just as well.
+
 ## Configuration reference

 Backup targets, schedule and retention are configured in environment variables.

+Note: You can use any environment variable from below also with a `_FILE` suffix to be able to load the value from a file. This is usually useful when using [Docker Secrets](https://docs.docker.com/engine/swarm/secrets/) or similar.
 You can populate below template according to your requirements and use it as your `env_file`:

 ```ini
@@ -131,13 +153,29 @@ You can populate below template according to your requirements and use it as you
 # BACKUP_CRON_EXPRESSION="0 2 * * *"

-# The name of the backup file including the `.tar.gz` extension.
+# The compression algorithm used in conjunction with tar.
+# Valid options are: "gz" (Gzip) and "zst" (Zstd).
+# Note that the selection affects the file extension.
+# BACKUP_COMPRESSION="gz"
+
+# Parallelism level for "gz" (Gzip) compression.
+# Defines how many blocks of data are concurrently processed.
+# Higher values result in faster compression. No effect on decompression.
+# Default = 1. Setting this to 0 will use all available threads.
+# GZIP_PARALLELISM=1
+
+# The name of the backup file including the extension.
 # Format verbs will be replaced as in `strftime`. Omitting them
 # will result in the same filename for every backup run, which means previous
-# versions will be overwritten on subsequent runs. The default results
-# in filenames like `backup-2021-08-29T04-00-00.tar.gz`.
-# BACKUP_FILENAME="backup-%Y-%m-%dT%H-%M-%S.tar.gz"
+# versions will be overwritten on subsequent runs.
+# Extension can be defined literally or via "{{ .Extension }}" template,
+# in which case it will become either "tar.gz" or "tar.zst" (depending
+# on your BACKUP_COMPRESSION setting).
+# The default results in filenames like: `backup-2021-08-29T04-00-00.tar.gz`.
+# BACKUP_FILENAME="backup-%Y-%m-%dT%H-%M-%S.{{ .Extension }}"

 # Setting BACKUP_FILENAME_EXPAND to true allows for environment variable
 # placeholders in BACKUP_FILENAME, BACKUP_LATEST_SYMLINK and in
@@ -177,6 +215,15 @@ You can populate below template according to your requirements and use it as you
 # BACKUP_EXCLUDE_REGEXP="\.log$"

+# Exclude one or many storage backends from the pruning process.
+# E.g. with one backend excluded: BACKUP_SKIP_BACKENDS_FROM_PRUNE=s3
+# E.g. with multiple backends excluded: BACKUP_SKIP_BACKENDS_FROM_PRUNE=s3,webdav
+# Available backends are: S3, WebDAV, SSH, Local, Dropbox, Azure
+# Note: The name of the backends is case insensitive.
+# Default: All backends get pruned.
+# BACKUP_SKIP_BACKENDS_FROM_PRUNE=
+
 ########### BACKUP STORAGE

 # The name of the remote bucket that should be used for storing backups. If
@@ -196,14 +243,6 @@ You can populate below template according to your requirements and use it as you
 # AWS_ACCESS_KEY_ID="<xxx>"
 # AWS_SECRET_ACCESS_KEY="<xxx>"

-# It is possible to provide the keys in files, allowing to hide the sensitive data.
-# These values have a higher priority than the ones above, meaning if both are set
-# the values from the files will be used.
-# This option is most useful with Docker [secrets](https://docs.docker.com/engine/swarm/secrets/).
-# AWS_ACCESS_KEY_ID_FILE="/path/to/file"
-# AWS_SECRET_ACCESS_KEY_FILE="/path/to/file"
-
 # Instead of providing static credentials, you can also use IAM instance profiles
 # or similar to provide authentication. Some possible configuration options on AWS:
 # - EC2: http://169.254.169.254
@@ -231,11 +270,27 @@ You can populate below template according to your requirements and use it as you
 # AWS_ENDPOINT_INSECURE="true"

+# If you wish to use self signed certificates with your S3 server, you can pass
+# the location of a PEM encoded CA certificate and it will be used for
+# validating your certificates.
+# Alternatively, pass a PEM encoded string containing the certificate.
+# AWS_ENDPOINT_CA_CERT="/path/to/cert.pem"
+
 # Setting this variable will change the S3 storage class header.
 # Defaults to "STANDARD", you can set this value according to your needs.
 # AWS_STORAGE_CLASS="GLACIER"

+# Setting this variable will change the S3 default part size for the copy step.
+# This value is useful when you want to upload large files.
+# NB: When using Scaleway as S3 provider, be aware that its part counter is limited to 1,000,
+# while MinIO uses a hard coded limit of 10,000 parts. As a workaround, try setting a higher part size.
+# Defaults to "16" (MB, from the MinIO client) if unset; set this value according to your needs.
+# The unit is MB and the value must be an integer.
+# AWS_PART_SIZE=16
 # You can also backup files to any WebDAV server:

 # The URL of the remote WebDAV server
@@ -295,6 +350,45 @@ You can populate below template according to your requirements and use it as you
 # SSH_IDENTITY_PASSPHRASE="pass"

+# The credential's account name when using Azure Blob Storage. This has to be
+# set when using Azure Blob Storage.
+# AZURE_STORAGE_ACCOUNT_NAME="account-name"
+
+# The credential's primary account key when using Azure Blob Storage. If this
+# is not given, the command tries to fall back to using a managed identity.
+# AZURE_STORAGE_PRIMARY_ACCOUNT_KEY="<xxx>"
+
+# The container name when using Azure Blob Storage.
+# AZURE_STORAGE_CONTAINER_NAME="container-name"
+
+# The service endpoint when using Azure Blob Storage. This is a template that
+# can be passed the account name as shown in the default value below.
+# AZURE_STORAGE_ENDPOINT="https://{{ .AccountName }}.blob.core.windows.net/"
+
+# Absolute remote path in your Dropbox where the backups shall be stored.
+# Note: Use your app's subpath in Dropbox if it doesn't have global access.
+# Consult the README for further information.
+# DROPBOX_REMOTE_PATH="/my/directory"
+
+# Number of concurrent chunked uploads for Dropbox.
+# Values above 6 usually result in no further improvement.
+# DROPBOX_CONCURRENCY_LEVEL="6"
+
+# App key and app secret from your app created at https://www.dropbox.com/developers/apps/info
+# DROPBOX_APP_KEY=""
+# DROPBOX_APP_SECRET=""
+
+# Refresh token to request new short-lived tokens (OAuth2). Consult the README to see how to get one.
+# DROPBOX_REFRESH_TOKEN=""
+
 # In addition to storing backups remotely, you can also keep local copies.
 # Pass a container-local path to store your backups if needed. You also need to
 # mount a local folder or Docker volume into that location (`/archive`
@@ -381,7 +475,7 @@ You can populate below template according to your requirements and use it as you
 # Notifications (email, Slack, etc.) can be sent out when a backup run finishes.
 # Configuration is provided as a comma-separated list of URLs as consumed
-# by `shoutrrr`: https://containrrr.dev/shoutrrr/v0.5/services/overview/
+# by `shoutrrr`: https://containrrr.dev/shoutrrr/0.7/services/overview/
 # The content of such notifications can be customized. Dedicated documentation
 # on how to do this can be found in the README. When providing multiple URLs or
 # an URL that contains a comma, the values can be URL encoded to avoid ambiguities.
@@ -525,7 +619,7 @@ services:
 Notification backends other than email are also supported.
 Refer to the documentation of [shoutrrr][shoutrrr-docs] to find out about options and configuration.

-[shoutrrr-docs]: https://containrrr.dev/shoutrrr/v0.5/services/overview/
+[shoutrrr-docs]: https://containrrr.dev/shoutrrr/0.7/services/overview/

 ### Customize notifications

@@ -614,6 +708,24 @@ volumes:
 The backup procedure is guaranteed to wait for all `pre` or `post` commands to finish before proceeding.
 However there are no guarantees about the order in which they are run, which could also happen concurrently.

+By default the backup command is executed by the user provided by the container's image.
+It is possible to specify a custom user that is used to run commands in dedicated labels with the format `docker-volume-backup.[step]-[pre|post].user`:
+
+```yml
+version: '3'
+
+services:
+  gitea:
+    image: gitea/gitea
+    volumes:
+      - backup_data:/tmp
+    labels:
+      - docker-volume-backup.archive-pre.user=git
+      - docker-volume-backup.archive-pre=/bin/bash -c 'cd /tmp; /usr/local/bin/gitea dump -c /data/gitea/conf/app.ini -R -f dump.zip'
+```
+
+Make sure the user exists and is present in `passwd` inside the target container.
+
 ### Encrypting your backup using GPG

 The image supports encrypting backups using GPG out of the box.
@@ -753,7 +865,7 @@ services:
       - docker-volume-backup.archive-post=rm -rf /tmp/backup/my-app

   backup:
-    image: offen/docker-volume-backup:latest
+    image: offen/docker-volume-backup:v2
     environment:
       BACKUP_SOURCES: /tmp/backup
     volumes:
@@ -791,6 +903,23 @@ DOCKER_HOST=tcp://docker_socket_proxy:2375

 In case you are using a socket proxy, it must support `GET` and `POST` requests to the `/containers` endpoint. If you are using Docker Swarm, it must also support the `/services` endpoint. If you are using pre/post backup commands, it must also support the `/exec` endpoint.

+### Use with rootless Docker
+
+It's also possible to use this image with a [rootless Docker installation][rootless-docker].
+Instead of mounting `/var/run/docker.sock`, mount the user-specific socket into the container:
+
+```yml
+services:
+  backup:
+    image: offen/docker-volume-backup:v2
+    # ... configuration omitted
+    volumes:
+      - backup:/backup:ro
+      - /run/user/1000/docker.sock:/var/run/docker.sock:ro
+```
+
+[rootless-docker]: https://docs.docker.com/engine/security/rootless/
+
 ### Run multiple backup schedules in the same container

 Multiple backup schedules with different configuration can be configured by mounting an arbitrary number of configuration files (using the `.env` format) into `/etc/dockervolumebackup/conf.d`:
@@ -841,7 +970,7 @@ BACKUP_SOURCES=/backup/app2_data

 If you want to manage backup retention on different schedules, the most straightforward approach is to define a dedicated configuration for each retention rule using a different prefix in the `BACKUP_FILENAME` parameter and then run them on different cron schedules.

-For example, if you wanted to keep daily backups for 7 days, weekly backups for a month, and retain monthly backups forever, you could create three configuration files and mount them into `/etc/dockervolumebackup.d`:
+For example, if you wanted to keep daily backups for 7 days, weekly backups for a month, and retain monthly backups forever, you could create three configuration files and mount them into `/etc/dockervolumebackup/conf.d`:

 ```ini
 # 01daily.conf
@@ -886,6 +1015,75 @@ where service is any of the [supported services][shoutrrr-docs], e.g. for SMTP:
 docker run --rm -ti containrrr/shoutrrr generate smtp
 ```

+### Handle file uploads using third party tools
+
+If you want to use a non-supported storage backend, or want to use a third party (e.g. rsync, rclone) tool for file uploads, you can build a Docker image containing the required binaries off this one, and call through to these in lifecycle hooks.
+
+For example, if you wanted to use `rsync`, define your Docker image like this:
+
+```Dockerfile
+FROM offen/docker-volume-backup:v2
+RUN apk add rsync
+```
+
+Using this image, you can now omit configuring any of the supported storage backends, and instead define your own mechanism in a `docker-volume-backup.copy-post` label:
+
+```yml
+version: '3'
+
+services:
+  backup:
+    image: your-custom-image
+    restart: always
+    environment:
+      BACKUP_FILENAME: "daily-backup-%Y-%m-%dT%H-%M-%S.tar.gz"
+      BACKUP_CRON_EXPRESSION: "0 2 * * *"
+    labels:
+      - docker-volume-backup.copy-post=/bin/sh -c 'rsync $$COMMAND_RUNTIME_ARCHIVE_FILEPATH /destination'
+    volumes:
+      - app_data:/backup/app_data:ro
+      - /var/run/docker.sock:/var/run/docker.sock
+
+  # other services defined here ...
+
+volumes:
+  app_data:
+```
+
+Commands will be invoked with the filepath of the tar archive passed as `COMMAND_RUNTIME_ARCHIVE_FILEPATH`.
+
+### Setup Dropbox storage backend
+
+#### Auth-Setup:
+
+1. Create a new Dropbox App in the [App Console](https://www.dropbox.com/developers/apps)
+2. Open your new Dropbox App and set `DROPBOX_APP_KEY` and `DROPBOX_APP_SECRET` in your environment (e.g. docker-compose.yml) accordingly
+3. Click on `Permissions` in your app and make sure that the following permissions are granted (or more):
+   - `files.metadata.write`
+   - `files.metadata.read`
+   - `files.content.write`
+   - `files.content.read`
+4. Replace APPKEY in `https://www.dropbox.com/oauth2/authorize?client_id=APPKEY&token_access_type=offline&response_type=code` with the app key from step 2
+5. Visit the URL and confirm the access of your app. This gives you an `auth code` -> save it somewhere!
+6. Replace AUTHCODE, APPKEY, APPSECRET accordingly and perform the request:
+```
+curl https://api.dropbox.com/oauth2/token \
+    -d code=AUTHCODE \
+    -d grant_type=authorization_code \
+    -d client_id=APPKEY \
+    -d client_secret=APPSECRET
+```
+7. Execute the request. You will get a JSON formatted reply. Use the value of the `refresh_token` for the last environment variable `DROPBOX_REFRESH_TOKEN`
+8. You should now have `DROPBOX_APP_KEY`, `DROPBOX_APP_SECRET` and `DROPBOX_REFRESH_TOKEN` set. These don't expire.
+
+Note: Using the "Generated access token" in the app console is not supported, as it is only very short lived and therefore not suitable for an automatic backup solution. The refresh token handles this automatically - the setup procedure above is only needed once.
+
+#### Other parameters
+
+Important: If you chose `App folder` access during the creation of your Dropbox app in step 1 above, you can only write in the app's directory!
+This means that `DROPBOX_REMOTE_PATH` must start with e.g. `/Apps/YOUR_APP_NAME` or `/Apps/YOUR_APP_NAME/some_sub_dir`
+
 ## Recipes

 This section lists configuration for some real-world use cases that you can mix and match according to your needs.

@@ -1032,6 +1230,51 @@ volumes:
   data:
 ```

+### Backing up to Azure Blob Storage
+
+```yml
+version: '3'
+
+services:
+  # ... define other services using the `data` volume here
+  backup:
+    image: offen/docker-volume-backup:v2
+    environment:
+      AZURE_STORAGE_CONTAINER_NAME: backup-container
+      AZURE_STORAGE_ACCOUNT_NAME: account-name
+      AZURE_STORAGE_PRIMARY_ACCOUNT_KEY: Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==
+    volumes:
+      - data:/backup/my-app-backup:ro
+      - /var/run/docker.sock:/var/run/docker.sock:ro
+
+volumes:
+  data:
+```
+
+### Backing up to Dropbox
+
+See [Dropbox Setup](#setup-dropbox-storage-backend) on how to get the appropriate environment values.
+
+```yml
+version: '3'
+
+services:
+  # ... define other services using the `data` volume here
+  backup:
+    image: offen/docker-volume-backup:v2
+    environment:
+      DROPBOX_REFRESH_TOKEN: REFRESH_KEY # replace
+      DROPBOX_APP_KEY: APP_KEY # replace
+      DROPBOX_APP_SECRET: APP_SECRET # replace
+      DROPBOX_REMOTE_PATH: /Apps/my-test-app/some_subdir # replace
+    volumes:
+      - data:/backup/my-app-backup:ro
+      - /var/run/docker.sock:/var/run/docker.sock:ro
+
+volumes:
+  data:
+```
+
 ### Backing up locally

 ```yml


@@ -8,16 +8,20 @@ package main

 import (
 	"archive/tar"
-	"compress/gzip"
 	"fmt"
 	"io"
 	"os"
 	"path"
 	"path/filepath"
+	"runtime"
 	"strings"
+
+	"github.com/klauspost/compress/zstd"
+	"github.com/klauspost/pgzip"
 )

-func createArchive(files []string, inputFilePath, outputFilePath string) error {
+func createArchive(files []string, inputFilePath, outputFilePath string, compression string, compressionConcurrency int) error {
 	inputFilePath = stripTrailingSlashes(inputFilePath)
 	inputFilePath, outputFilePath, err := makeAbsolute(inputFilePath, outputFilePath)
 	if err != nil {
@@ -27,7 +31,7 @@ func createArchive(files []string, inputFilePath, outputFilePath string) error {
 		return fmt.Errorf("createArchive: error creating output file path: %w", err)
 	}

-	if err := compress(files, outputFilePath, filepath.Dir(inputFilePath)); err != nil {
+	if err := compress(files, outputFilePath, filepath.Dir(inputFilePath), compression, compressionConcurrency); err != nil {
 		return fmt.Errorf("createArchive: error creating archive: %w", err)
 	}

@@ -51,18 +55,21 @@ func makeAbsolute(inputFilePath, outputFilePath string) (string, string, error)
 	return inputFilePath, outputFilePath, err
 }

-func compress(paths []string, outFilePath, subPath string) error {
+func compress(paths []string, outFilePath, subPath string, algo string, concurrency int) error {
 	file, err := os.Create(outFilePath)
 	if err != nil {
 		return fmt.Errorf("compress: error creating out file: %w", err)
 	}

 	prefix := path.Dir(outFilePath)
-	gzipWriter := gzip.NewWriter(file)
-	tarWriter := tar.NewWriter(gzipWriter)
+	compressWriter, err := getCompressionWriter(file, algo, concurrency)
+	if err != nil {
+		return fmt.Errorf("compress: error getting compression writer: %w", err)
+	}
+	tarWriter := tar.NewWriter(compressWriter)

 	for _, p := range paths {
-		if err := writeTarGz(p, tarWriter, prefix); err != nil {
+		if err := writeTarball(p, tarWriter, prefix); err != nil {
 			return fmt.Errorf("compress: error writing %s to archive: %w", p, err)
 		}
 	}
@@ -72,9 +79,9 @@ func compress(paths []string, outFilePath, subPath string) error {
 		return fmt.Errorf("compress: error closing tar writer: %w", err)
 	}

-	err = gzipWriter.Close()
+	err = compressWriter.Close()
 	if err != nil {
-		return fmt.Errorf("compress: error closing gzip writer: %w", err)
+		return fmt.Errorf("compress: error closing compression writer: %w", err)
 	}

 	err = file.Close()
@@ -85,10 +92,38 @@ func compress(paths []string, outFilePath, subPath string) error {
 	return nil
 }

-func writeTarGz(path string, tarWriter *tar.Writer, prefix string) error {
+func getCompressionWriter(file *os.File, algo string, concurrency int) (io.WriteCloser, error) {
+	switch algo {
+	case "gz":
+		w, err := pgzip.NewWriterLevel(file, 5)
+		if err != nil {
+			return nil, fmt.Errorf("getCompressionWriter: gzip error: %w", err)
+		}
+		if concurrency == 0 {
+			concurrency = runtime.GOMAXPROCS(0)
+		}
+		if err := w.SetConcurrency(1<<20, concurrency); err != nil {
+			return nil, fmt.Errorf("getCompressionWriter: error setting concurrency: %w", err)
+		}
+		return w, nil
+	case "zst":
+		compressWriter, err := zstd.NewWriter(file)
+		if err != nil {
+			return nil, fmt.Errorf("getCompressionWriter: zstd error: %w", err)
+		}
+		return compressWriter, nil
+	default:
+		return nil, fmt.Errorf("getCompressionWriter: unsupported compression algorithm: %s", algo)
+	}
+}
+
+func writeTarball(path string, tarWriter *tar.Writer, prefix string) error {
 	fileInfo, err := os.Lstat(path)
 	if err != nil {
-		return fmt.Errorf("writeTarGz: error getting file infor for %s: %w", path, err)
+		return fmt.Errorf("writeTarball: error getting file info for %s: %w", path, err)
 	}

 	if fileInfo.Mode()&os.ModeSocket == os.ModeSocket {
@@ -99,19 +134,19 @@ func writeTarball(path string, tarWriter *tar.Writer, prefix string) error {
 	if fileInfo.Mode()&os.ModeSymlink == os.ModeSymlink {
 		var err error
 		if link, err = os.Readlink(path); err != nil {
-			return fmt.Errorf("writeTarGz: error resolving symlink %s: %w", path, err)
+			return fmt.Errorf("writeTarball: error resolving symlink %s: %w", path, err)
 		}
 	}

 	header, err := tar.FileInfoHeader(fileInfo, link)
 	if err != nil {
-		return fmt.Errorf("writeTarGz: error getting file info header: %w", err)
+		return fmt.Errorf("writeTarball: error getting file info header: %w", err)
 	}

 	header.Name = strings.TrimPrefix(path, prefix)
 	err = tarWriter.WriteHeader(header)
 	if err != nil {
-		return fmt.Errorf("writeTarGz: error writing file info header: %w", err)
+		return fmt.Errorf("writeTarball: error writing file info header: %w", err)
 	}

 	if !fileInfo.Mode().IsRegular() {
@@ -120,13 +155,13 @@ func writeTarball(path string, tarWriter *tar.Writer, prefix string) error {
 	file, err := os.Open(path)
 	if err != nil {
-		return fmt.Errorf("writeTarGz: error opening %s: %w", path, err)
+		return fmt.Errorf("writeTarball: error opening %s: %w", path, err)
 	}
 	defer file.Close()

 	_, err = io.Copy(tarWriter, file)
 	if err != nil {
-		return fmt.Errorf("writeTarGz: error copying %s to tar writer: %w", path, err)
+		return fmt.Errorf("writeTarball: error copying %s to tar writer: %w", path, err)
 	}

 	return nil


@@ -4,72 +4,116 @@
package main package main
import ( import (
"crypto/x509"
"encoding/pem"
"fmt" "fmt"
"os" "os"
"regexp" "regexp"
"strconv"
"time" "time"
) )
// Config holds all configuration values that are expected to be set // Config holds all configuration values that are expected to be set
// by users. // by users.
type Config struct {
    AwsS3BucketName string `split_words:"true"`
    AwsS3Path string `split_words:"true"`
    AwsEndpoint string `split_words:"true" default:"s3.amazonaws.com"`
    AwsEndpointProto string `split_words:"true" default:"https"`
    AwsEndpointInsecure bool `split_words:"true"`
+   AwsEndpointCACert CertDecoder `envconfig:"AWS_ENDPOINT_CA_CERT"`
    AwsStorageClass string `split_words:"true"`
    AwsAccessKeyID string `envconfig:"AWS_ACCESS_KEY_ID"`
-   AwsAccessKeyIDFile string `envconfig:"AWS_ACCESS_KEY_ID_FILE"`
    AwsSecretAccessKey string `split_words:"true"`
-   AwsSecretAccessKeyFile string `split_words:"true"`
    AwsIamRoleEndpoint string `split_words:"true"`
+   AwsPartSize int64 `split_words:"true"`
+   BackupCompression CompressionType `split_words:"true" default:"gz"`
+   GzipParallelism WholeNumber `split_words:"true" default:"1"`
    BackupSources string `split_words:"true" default:"/backup"`
-   BackupFilename string `split_words:"true" default:"backup-%Y-%m-%dT%H-%M-%S.tar.gz"`
+   BackupFilename string `split_words:"true" default:"backup-%Y-%m-%dT%H-%M-%S.{{ .Extension }}"`
    BackupFilenameExpand bool `split_words:"true"`
    BackupLatestSymlink string `split_words:"true"`
    BackupArchive string `split_words:"true" default:"/archive"`
    BackupRetentionDays int32 `split_words:"true" default:"-1"`
    BackupPruningLeeway time.Duration `split_words:"true" default:"1m"`
    BackupPruningPrefix string `split_words:"true"`
    BackupStopContainerLabel string `split_words:"true" default:"true"`
    BackupFromSnapshot bool `split_words:"true"`
    BackupExcludeRegexp RegexpDecoder `split_words:"true"`
+   BackupSkipBackendsFromPrune []string `split_words:"true"`
    GpgPassphrase string `split_words:"true"`
    NotificationURLs []string `envconfig:"NOTIFICATION_URLS"`
    NotificationLevel string `split_words:"true" default:"error"`
    EmailNotificationRecipient string `split_words:"true"`
    EmailNotificationSender string `split_words:"true" default:"noreply@nohost"`
    EmailSMTPHost string `envconfig:"EMAIL_SMTP_HOST"`
    EmailSMTPPort int `envconfig:"EMAIL_SMTP_PORT" default:"587"`
    EmailSMTPUsername string `envconfig:"EMAIL_SMTP_USERNAME"`
    EmailSMTPPassword string `envconfig:"EMAIL_SMTP_PASSWORD"`
    WebdavUrl string `split_words:"true"`
    WebdavUrlInsecure bool `split_words:"true"`
    WebdavPath string `split_words:"true" default:"/"`
    WebdavUsername string `split_words:"true"`
    WebdavPassword string `split_words:"true"`
    SSHHostName string `split_words:"true"`
    SSHPort string `split_words:"true" default:"22"`
    SSHUser string `split_words:"true"`
    SSHPassword string `split_words:"true"`
    SSHIdentityFile string `split_words:"true" default:"/root/.ssh/id_rsa"`
    SSHIdentityPassphrase string `split_words:"true"`
    SSHRemotePath string `split_words:"true"`
    ExecLabel string `split_words:"true"`
    ExecForwardOutput bool `split_words:"true"`
    LockTimeout time.Duration `split_words:"true" default:"60m"`
+   AzureStorageAccountName string `split_words:"true"`
+   AzureStoragePrimaryAccountKey string `split_words:"true"`
+   AzureStorageContainerName string `split_words:"true"`
+   AzureStoragePath string `split_words:"true"`
+   AzureStorageEndpoint string `split_words:"true" default:"https://{{ .AccountName }}.blob.core.windows.net/"`
+   DropboxEndpoint string `split_words:"true" default:"https://api.dropbox.com/"`
+   DropboxOAuth2Endpoint string `envconfig:"DROPBOX_OAUTH2_ENDPOINT" default:"https://api.dropbox.com/"`
+   DropboxRefreshToken string `split_words:"true"`
+   DropboxAppKey string `split_words:"true"`
+   DropboxAppSecret string `split_words:"true"`
+   DropboxRemotePath string `split_words:"true"`
+   DropboxConcurrencyLevel NaturalNumber `split_words:"true" default:"6"`
}
-func (c *Config) resolveSecret(envVar string, secretPath string) (string, error) {
-    if secretPath == "" {
-        return envVar, nil
-    }
-    data, err := os.ReadFile(secretPath)
-    if err != nil {
-        return "", fmt.Errorf("resolveSecret: error reading secret path: %w", err)
-    }
-    return string(data), nil
-}
+type CompressionType string
+
+func (c *CompressionType) Decode(v string) error {
+    switch v {
+    case "gz", "zst":
+        *c = CompressionType(v)
+        return nil
+    default:
+        return fmt.Errorf("config: error decoding compression type %s", v)
+    }
+}
+
+func (c *CompressionType) String() string {
+    return string(*c)
+}
+
+type CertDecoder struct {
+    Cert *x509.Certificate
+}
+
+func (c *CertDecoder) Decode(v string) error {
+    if v == "" {
+        return nil
+    }
+    content, err := os.ReadFile(v)
+    if err != nil {
+        content = []byte(v)
+    }
+    block, _ := pem.Decode(content)
+    cert, err := x509.ParseCertificate(block.Bytes)
+    if err != nil {
+        return fmt.Errorf("config: error parsing certificate: %w", err)
+    }
+    *c = CertDecoder{Cert: cert}
+    return nil
+}
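As the fallback in Decode shows, AWS_ENDPOINT_CA_CERT accepts either a path to a PEM file or the PEM content itself; when os.ReadFile fails, the raw value is parsed instead. A sketch, with hypothetical values:

var d CertDecoder
_ = d.Decode("/certs/minio-rootca.pem")          // hypothetical path, read from file
_ = d.Decode("-----BEGIN CERTIFICATE-----\n...") // parsed as inline PEM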
type RegexpDecoder struct {
@@ -87,3 +131,41 @@ func (r *RegexpDecoder) Decode(v string) error {
    *r = RegexpDecoder{Re: re}
    return nil
}
// NaturalNumber is a type that can be used to decode a positive, non-zero natural number
type NaturalNumber int
func (n *NaturalNumber) Decode(v string) error {
asInt, err := strconv.Atoi(v)
if err != nil {
return fmt.Errorf("config: error converting %s to int", v)
}
if asInt <= 0 {
return fmt.Errorf("config: expected a natural number, got %d", asInt)
}
*n = NaturalNumber(asInt)
return nil
}
func (n *NaturalNumber) Int() int {
return int(*n)
}
// WholeNumber is a type that can be used to decode a positive whole number, including zero
type WholeNumber int
func (n *WholeNumber) Decode(v string) error {
asInt, err := strconv.Atoi(v)
if err != nil {
return fmt.Errorf("config: error converting %s to int", v)
}
if asInt < 0 {
return fmt.Errorf("config: expected a whole, positive number, including zero. Got %d", asInt)
}
*n = WholeNumber(asInt)
return nil
}
func (n *WholeNumber) Int() int {
return int(*n)
}
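A quick behavioral sketch of the new decoder types (same package assumed, fmt imported):

var c CompressionType
fmt.Println(c.Decode("zst")) // <nil>; only "gz" and "zst" are accepted

var n NaturalNumber
fmt.Println(n.Decode("0"))   // error; e.g. DROPBOX_CONCURRENCY_LEVEL must be >= 1

var w WholeNumber
fmt.Println(w.Decode("0"))   // <nil>; GZIP_PARALLELISM=0 is explicitly allowed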

View File

@@ -10,7 +10,7 @@ import (
    "bytes"
    "context"
    "fmt"
-   "io/ioutil"
+   "io"
    "os"
    "strings"
@@ -21,12 +21,17 @@ import (
    "golang.org/x/sync/errgroup"
)

-func (s *script) exec(containerRef string, command string) ([]byte, []byte, error) {
+func (s *script) exec(containerRef string, command string, user string) ([]byte, []byte, error) {
    args, _ := argv.Argv(command, nil, nil)
+   commandEnv := []string{
+       fmt.Sprintf("COMMAND_RUNTIME_ARCHIVE_FILEPATH=%s", s.file),
+   }
    execID, err := s.cli.ContainerExecCreate(context.Background(), containerRef, types.ExecConfig{
        Cmd:          args[0],
        AttachStdin:  true,
        AttachStderr: true,
+       Env:          commandEnv,
+       User:         user,
    })
    if err != nil {
        return nil, nil, fmt.Errorf("exec: error creating container exec: %w", err)
@@ -46,19 +51,15 @@ func (s *script) exec(containerRef string, command string) ([]byte, []byte, erro
        outputDone <- err
    }()

-   select {
-   case err := <-outputDone:
-       if err != nil {
-           return nil, nil, fmt.Errorf("exec: error demultiplexing output: %w", err)
-       }
-       break
+   if err := <-outputDone; err != nil {
+       return nil, nil, fmt.Errorf("exec: error demultiplexing output: %w", err)
    }

-   stdout, err := ioutil.ReadAll(&outBuf)
+   stdout, err := io.ReadAll(&outBuf)
    if err != nil {
        return nil, nil, fmt.Errorf("exec: error reading stdout: %w", err)
    }
-   stderr, err := ioutil.ReadAll(&errBuf)
+   stderr, err := io.ReadAll(&errBuf)
    if err != nil {
        return nil, nil, fmt.Errorf("exec: error reading stderr: %w", err)
    }
@@ -86,7 +87,6 @@ func (s *script) runLabeledCommands(label string) error {
        })
    }
    containersWithCommand, err := s.cli.ContainerList(context.Background(), types.ContainerListOptions{
-       Quiet:   true,
        Filters: filters.NewArgs(f...),
    })
    if err != nil {
@@ -100,7 +100,6 @@ func (s *script) runLabeledCommands(label string) error {
        Value: "docker-volume-backup.exec-pre",
    }
    deprecatedContainers, err := s.cli.ContainerList(context.Background(), types.ContainerListOptions{
-       Quiet:   true,
        Filters: filters.NewArgs(f...),
    })
    if err != nil {
@@ -118,7 +117,6 @@ func (s *script) runLabeledCommands(label string) error {
        Value: "docker-volume-backup.exec-post",
    }
    deprecatedContainers, err := s.cli.ContainerList(context.Background(), types.ContainerListOptions{
-       Quiet:   true,
        Filters: filters.NewArgs(f...),
    })
    if err != nil {
@@ -150,13 +148,16 @@ func (s *script) runLabeledCommands(label string) error {
        g.Go(func() error {
            cmd, ok := c.Labels[label]
            if !ok && label == "docker-volume-backup.archive-pre" {
-               cmd, _ = c.Labels["docker-volume-backup.exec-pre"]
+               cmd = c.Labels["docker-volume-backup.exec-pre"]
            } else if !ok && label == "docker-volume-backup.archive-post" {
-               cmd, _ = c.Labels["docker-volume-backup.exec-post"]
+               cmd = c.Labels["docker-volume-backup.exec-post"]
            }
-           s.logger.Infof("Running %s command %s for container %s", label, cmd, strings.TrimPrefix(c.Names[0], "/"))
-           stdout, stderr, err := s.exec(c.ID, cmd)
+           userLabelName := fmt.Sprintf("%s.user", label)
+           user := c.Labels[userLabelName]
+           s.logger.Info(fmt.Sprintf("Running %s command %s for container %s", label, cmd, strings.TrimPrefix(c.Names[0], "/")))
+           stdout, stderr, err := s.exec(c.ID, cmd, user)
            if s.c.ExecForwardOutput {
                os.Stderr.Write(stderr)
                os.Stdout.Write(stdout)
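The new `.user` label suffix lets a labeled command run as a specific user inside the target container. A sketch of the lookup shown above, with hypothetical label values:

labels := map[string]string{
    "docker-volume-backup.archive-pre":      "pg_dumpall -f /tmp/dump.sql", // hypothetical command
    "docker-volume-backup.archive-pre.user": "postgres",
}
label := "docker-volume-backup.archive-pre"
user := labels[fmt.Sprintf("%s.user", label)] // "postgres"; an empty string keeps the container default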

View File

@@ -4,10 +4,9 @@
package main

import (
+   "errors"
    "fmt"
    "sort"
-
-   "github.com/offen/docker-volume-backup/internal/utilities"
)

// hook contains a queued action that can be trigger them when the script
@@ -52,7 +51,7 @@ func (s *script) runHooks(err error) error {
        }
    }
    if len(actionErrors) != 0 {
-       return utilities.Join(actionErrors...)
+       return errors.Join(actionErrors...)
    }
    return nil
}
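utilities.Join is retired in favor of the standard library's errors.Join (available since Go 1.20; go.mod below now requires 1.21), which has the semantics the helper emulated:

fmt.Println(errors.Join(nil, nil) == nil) // true: an all-nil input joins to nil
err := errors.Join(errors.New("hook a failed"), errors.New("hook b failed"))
fmt.Println(err)                          // both messages, separated by a newline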

View File

@@ -18,7 +18,7 @@ import (
func (s *script) lock(lockfile string) (func() error, error) {
    start := time.Now()
    defer func() {
-       s.stats.LockedTime = time.Now().Sub(start)
+       s.stats.LockedTime = time.Since(start)
    }()

    retry := time.NewTicker(5 * time.Second)
@@ -41,9 +41,11 @@ func (s *script) lock(lockfile string) (func() error, error) {
    }
    if !s.encounteredLock {
-       s.logger.Infof(
-           "Exclusive lock was not available on first attempt. Will retry until it becomes available or the timeout of %s is exceeded.",
-           s.c.LockTimeout,
+       s.logger.Info(
+           fmt.Sprintf(
+               "Exclusive lock was not available on first attempt. Will retry until it becomes available or the timeout of %s is exceeded.",
+               s.c.LockTimeout,
+           ),
        )
        s.encounteredLock = true
    }

View File

@@ -4,6 +4,7 @@
package main

import (
+   "fmt"
    "os"
)

@@ -14,14 +15,16 @@ func main() {
    }

    unlock, err := s.lock("/var/lock/dockervolumebackup.lock")
-   defer unlock()
+   defer s.must(unlock())
    s.must(err)

    defer func() {
        if pArg := recover(); pArg != nil {
            if err, ok := pArg.(error); ok {
                if hookErr := s.runHooks(err); hookErr != nil {
-                   s.logger.Errorf("An error occurred calling the registered hooks: %s", hookErr)
+                   s.logger.Error(
+                       fmt.Sprintf("An error occurred calling the registered hooks: %s", hookErr),
+                   )
                }
                os.Exit(1)
            }
@@ -29,9 +32,11 @@ func main() {
    }

    if err := s.runHooks(nil); err != nil {
-       s.logger.Errorf(
-           "Backup procedure ran successfully, but an error ocurred calling the registered hooks: %v",
-           err,
+       s.logger.Error(
+           fmt.Sprintf(
+               "Backup procedure ran successfully, but an error ocurred calling the registered hooks: %v",
+               err,
+           ),
        )
        os.Exit(1)
    }

View File

@@ -6,13 +6,13 @@ package main
import (
    "bytes"
    _ "embed"
+   "errors"
    "fmt"
    "os"
    "text/template"
    "time"

    sTypes "github.com/containrrr/shoutrrr/pkg/types"
-   "github.com/offen/docker-volume-backup/internal/utilities"
)

//go:embed notifications.tmpl
@@ -69,7 +69,7 @@ func (s *script) sendNotification(title, body string) error {
        }
    }
    if len(errs) != 0 {
-       return fmt.Errorf("sendNotification: error sending message: %w", utilities.Join(errs...))
+       return fmt.Errorf("sendNotification: error sending message: %w", errors.Join(errs...))
    }
    return nil
}

View File

@@ -4,34 +4,40 @@
package main

import (
+   "bytes"
    "context"
+   "errors"
    "fmt"
    "io"
    "io/fs"
+   "log/slog"
    "os"
    "path"
    "path/filepath"
+   "slices"
+   "strings"
    "text/template"
    "time"

    "github.com/offen/docker-volume-backup/internal/storage"
+   "github.com/offen/docker-volume-backup/internal/storage/azure"
+   "github.com/offen/docker-volume-backup/internal/storage/dropbox"
    "github.com/offen/docker-volume-backup/internal/storage/local"
    "github.com/offen/docker-volume-backup/internal/storage/s3"
    "github.com/offen/docker-volume-backup/internal/storage/ssh"
    "github.com/offen/docker-volume-backup/internal/storage/webdav"
-   "github.com/offen/docker-volume-backup/internal/utilities"

+   "github.com/ProtonMail/go-crypto/openpgp"
    "github.com/containrrr/shoutrrr"
    "github.com/containrrr/shoutrrr/pkg/router"
    "github.com/docker/docker/api/types"
+   ctr "github.com/docker/docker/api/types/container"
    "github.com/docker/docker/api/types/filters"
    "github.com/docker/docker/api/types/swarm"
    "github.com/docker/docker/client"
-   "github.com/kelseyhightower/envconfig"
    "github.com/leekchan/timeutil"
+   "github.com/offen/envconfig"
    "github.com/otiai10/copy"
-   "github.com/sirupsen/logrus"
-   "golang.org/x/crypto/openpgp"
    "golang.org/x/sync/errgroup"
)
@@ -40,7 +46,7 @@ import (
type script struct {
    cli      *client.Client
    storages []storage.Backend
-   logger   *logrus.Logger
+   logger   *slog.Logger
    sender   *router.ServiceRouter
    template *template.Template
    hooks    []hook
@@ -61,21 +67,18 @@ type script struct {
func newScript() (*script, error) {
    stdOut, logBuffer := buffer(os.Stdout)
    s := &script{
        c: &Config{},
-       logger: &logrus.Logger{
-           Out:       stdOut,
-           Formatter: new(logrus.TextFormatter),
-           Hooks:     make(logrus.LevelHooks),
-           Level:     logrus.InfoLevel,
-       },
+       logger: slog.New(slog.NewTextHandler(stdOut, nil)),
        stats: &Stats{
            StartTime: time.Now(),
            LogOutput: logBuffer,
            Storages: map[string]StorageStats{
                "S3":      {},
                "WebDAV":  {},
                "SSH":     {},
                "Local":   {},
+               "Azure":   {},
+               "Dropbox": {},
            },
        },
    }
@@ -86,11 +89,47 @@ func newScript() (*script, error) {
        return nil
    })

+   envconfig.Lookup = func(key string) (string, bool) {
+       value, okValue := os.LookupEnv(key)
+       location, okFile := os.LookupEnv(key + "_FILE")
+
+       switch {
+       case okValue && !okFile: // only value
+           return value, true
+       case !okValue && okFile: // only file
+           contents, err := os.ReadFile(location)
+           if err != nil {
+               s.must(fmt.Errorf("newScript: failed to read %s! Error: %s", location, err))
+               return "", false
+           }
+           return string(contents), true
+       case okValue && okFile: // both
+           s.must(fmt.Errorf("newScript: both %s and %s are set!", key, key+"_FILE"))
+           return "", false
+       default: // neither, ignore
+           return "", false
+       }
+   }
+
    if err := envconfig.Process("", s.c); err != nil {
        return nil, fmt.Errorf("newScript: failed to process configuration values: %w", err)
    }

    s.file = path.Join("/tmp", s.c.BackupFilename)
+
+   tmplFileName, tErr := template.New("extension").Parse(s.file)
+   if tErr != nil {
+       return nil, fmt.Errorf("newScript: unable to parse backup file extension template: %w", tErr)
+   }
+
+   var bf bytes.Buffer
+   if tErr := tmplFileName.Execute(&bf, map[string]string{
+       "Extension": fmt.Sprintf("tar.%s", s.c.BackupCompression),
+   }); tErr != nil {
+       return nil, fmt.Errorf("newScript: error executing backup file extension template: %w", tErr)
+   }
+   s.file = bf.String()
+
    if s.c.BackupFilenameExpand {
        s.file = os.ExpandEnv(s.file)
        s.c.BackupLatestSymlink = os.ExpandEnv(s.c.BackupLatestSymlink)
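The overridden envconfig.Lookup is what backs the new _FILE variants: any configuration key can alternatively be read from a file. A usage sketch, with a hypothetical secret path:

// Equivalent ways to provide the same secret; setting both panics via
// s.must, as does pointing the _FILE variant at an unreadable path.
os.Setenv("AWS_SECRET_ACCESS_KEY", "mysecret")
// ...or:
os.Setenv("AWS_SECRET_ACCESS_KEY_FILE", "/run/secrets/aws_secret_access_key")

Note also that the default BACKUP_FILENAME now ends in {{ .Extension }}, which the template execution above renders to tar.gz or tar.zst depending on BACKUP_COMPRESSION.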
@@ -108,40 +147,33 @@ func newScript() (*script, error) {
        s.cli = cli
    }

-   logFunc := func(logType storage.LogLevel, context string, msg string, params ...interface{}) {
+   logFunc := func(logType storage.LogLevel, context string, msg string, params ...any) {
        switch logType {
        case storage.LogLevelWarning:
-           s.logger.Warnf("["+context+"] "+msg, params...)
+           s.logger.Warn(fmt.Sprintf(msg, params...), "storage", context)
        case storage.LogLevelError:
-           s.logger.Errorf("["+context+"] "+msg, params...)
-       case storage.LogLevelInfo:
+           s.logger.Error(fmt.Sprintf(msg, params...), "storage", context)
        default:
-           s.logger.Infof("["+context+"] "+msg, params...)
+           s.logger.Info(fmt.Sprintf(msg, params...), "storage", context)
        }
    }

    if s.c.AwsS3BucketName != "" {
-       accessKeyID, err := s.c.resolveSecret(s.c.AwsAccessKeyID, s.c.AwsAccessKeyIDFile)
-       if err != nil {
-           return nil, fmt.Errorf("newScript: error resolving AwsAccessKeyID: %w", err)
-       }
-       secretAccessKey, err := s.c.resolveSecret(s.c.AwsSecretAccessKey, s.c.AwsSecretAccessKeyFile)
-       if err != nil {
-           return nil, fmt.Errorf("newScript: error resolving AwsSecretAccessKey: %w", err)
-       }
        s3Config := s3.Config{
            Endpoint:         s.c.AwsEndpoint,
-           AccessKeyID:      accessKeyID,
-           SecretAccessKey:  secretAccessKey,
+           AccessKeyID:      s.c.AwsAccessKeyID,
+           SecretAccessKey:  s.c.AwsSecretAccessKey,
            IamRoleEndpoint:  s.c.AwsIamRoleEndpoint,
            EndpointProto:    s.c.AwsEndpointProto,
            EndpointInsecure: s.c.AwsEndpointInsecure,
            RemotePath:       s.c.AwsS3Path,
            BucketName:       s.c.AwsS3BucketName,
            StorageClass:     s.c.AwsStorageClass,
+           CACert:           s.c.AwsEndpointCACert.Cert,
+           PartSize:         s.c.AwsPartSize,
        }
        if s3Backend, err := s3.NewStorageBackend(s3Config, logFunc); err != nil {
-           return nil, err
+           return nil, fmt.Errorf("newScript: error creating s3 storage backend: %w", err)
        } else {
            s.storages = append(s.storages, s3Backend)
        }
@@ -156,7 +188,7 @@ func newScript() (*script, error) {
            RemotePath: s.c.WebdavPath,
        }
        if webdavBackend, err := webdav.NewStorageBackend(webDavConfig, logFunc); err != nil {
-           return nil, err
+           return nil, fmt.Errorf("newScript: error creating webdav storage backend: %w", err)
        } else {
            s.storages = append(s.storages, webdavBackend)
        }
@@ -173,7 +205,7 @@ func newScript() (*script, error) {
            RemotePath: s.c.SSHRemotePath,
        }
        if sshBackend, err := ssh.NewStorageBackend(sshConfig, logFunc); err != nil {
-           return nil, err
+           return nil, fmt.Errorf("newScript: error creating ssh storage backend: %w", err)
        } else {
            s.storages = append(s.storages, sshBackend)
        }
@@ -188,6 +220,38 @@ func newScript() (*script, error) {
        s.storages = append(s.storages, localBackend)
    }
if s.c.AzureStorageAccountName != "" {
azureConfig := azure.Config{
ContainerName: s.c.AzureStorageContainerName,
AccountName: s.c.AzureStorageAccountName,
PrimaryAccountKey: s.c.AzureStoragePrimaryAccountKey,
Endpoint: s.c.AzureStorageEndpoint,
RemotePath: s.c.AzureStoragePath,
}
azureBackend, err := azure.NewStorageBackend(azureConfig, logFunc)
if err != nil {
return nil, fmt.Errorf("newScript: error creating azure storage backend: %w", err)
}
s.storages = append(s.storages, azureBackend)
}
if s.c.DropboxRefreshToken != "" && s.c.DropboxAppKey != "" && s.c.DropboxAppSecret != "" {
dropboxConfig := dropbox.Config{
Endpoint: s.c.DropboxEndpoint,
OAuth2Endpoint: s.c.DropboxOAuth2Endpoint,
RefreshToken: s.c.DropboxRefreshToken,
AppKey: s.c.DropboxAppKey,
AppSecret: s.c.DropboxAppSecret,
RemotePath: s.c.DropboxRemotePath,
ConcurrencyLevel: s.c.DropboxConcurrencyLevel.Int(),
}
dropboxBackend, err := dropbox.NewStorageBackend(dropboxConfig, logFunc)
if err != nil {
return nil, fmt.Errorf("newScript: error creating dropbox storage backend: %w", err)
}
s.storages = append(s.storages, dropboxBackend)
}
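Both new backends are opt-in via the guards above; a sketch of the minimum configuration each requires, with hypothetical values:

os.Setenv("AZURE_STORAGE_ACCOUNT_NAME", "myaccount")   // enables Azure; with no primary account key set, the backend falls back to managed identity
os.Setenv("DROPBOX_REFRESH_TOKEN", "my-refresh-token") // Dropbox requires all three values
os.Setenv("DROPBOX_APP_KEY", "my-app-key")
os.Setenv("DROPBOX_APP_SECRET", "my-app-secret")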
    if s.c.EmailNotificationRecipient != "" {
        emailURL := fmt.Sprintf(
            "smtp://%s:%s@%s:%d/?from=%s&to=%s",
@@ -262,9 +326,7 @@ func (s *script) stopContainers() (func() error, error) {
        return noop, nil
    }

-   allContainers, err := s.cli.ContainerList(context.Background(), types.ContainerListOptions{
-       Quiet: true,
-   })
+   allContainers, err := s.cli.ContainerList(context.Background(), types.ContainerListOptions{})
    if err != nil {
        return noop, fmt.Errorf("stopContainers: error querying for containers: %w", err)
    }
@@ -274,7 +336,6 @@ func (s *script) stopContainers() (func() error, error) {
        s.c.BackupStopContainerLabel,
    )
    containersToStop, err := s.cli.ContainerList(context.Background(), types.ContainerListOptions{
-       Quiet: true,
        Filters: filters.NewArgs(filters.KeyValuePair{
            Key:   "label",
            Value: containerLabel,
@@ -289,17 +350,19 @@ func (s *script) stopContainers() (func() error, error) {
        return noop, nil
    }

-   s.logger.Infof(
-       "Stopping %d container(s) labeled `%s` out of %d running container(s).",
-       len(containersToStop),
-       containerLabel,
-       len(allContainers),
+   s.logger.Info(
+       fmt.Sprintf(
+           "Stopping %d container(s) labeled `%s` out of %d running container(s).",
+           len(containersToStop),
+           containerLabel,
+           len(allContainers),
+       ),
    )

    var stoppedContainers []types.Container
    var stopErrors []error
    for _, container := range containersToStop {
-       if err := s.cli.ContainerStop(context.Background(), container.ID, nil); err != nil {
+       if err := s.cli.ContainerStop(context.Background(), container.ID, ctr.StopOptions{}); err != nil {
            stopErrors = append(stopErrors, err)
        } else {
            stoppedContainers = append(stoppedContainers, container)
@@ -348,7 +411,7 @@ func (s *script) stopContainers() (func() error, error) {
if serviceMatch.ID == "" { if serviceMatch.ID == "" {
return fmt.Errorf("stopContainers: couldn't find service with name %s", serviceName) return fmt.Errorf("stopContainers: couldn't find service with name %s", serviceName)
} }
serviceMatch.Spec.TaskTemplate.ForceUpdate = 1 serviceMatch.Spec.TaskTemplate.ForceUpdate += 1
if _, err := s.cli.ServiceUpdate( if _, err := s.cli.ServiceUpdate(
context.Background(), serviceMatch.ID, context.Background(), serviceMatch.ID,
serviceMatch.Version, serviceMatch.Spec, types.ServiceUpdateOptions{}, serviceMatch.Version, serviceMatch.Spec, types.ServiceUpdateOptions{},
@@ -362,12 +425,14 @@ func (s *script) stopContainers() (func() error, error) {
return fmt.Errorf( return fmt.Errorf(
"stopContainers: %d error(s) restarting containers and services: %w", "stopContainers: %d error(s) restarting containers and services: %w",
len(restartErrors), len(restartErrors),
utilities.Join(restartErrors...), errors.Join(restartErrors...),
) )
} }
s.logger.Infof( s.logger.Info(
"Restarted %d container(s) and the matching service(s).", fmt.Sprintf(
len(stoppedContainers), "Restarted %d container(s) and the matching service(s).",
len(stoppedContainers),
),
) )
return nil return nil
}, stopError }, stopError
@@ -391,7 +456,9 @@ func (s *script) createArchive() error {
        if err := remove(backupSources); err != nil {
            return fmt.Errorf("createArchive: error removing snapshot: %w", err)
        }
-       s.logger.Infof("Removed snapshot `%s`.", backupSources)
+       s.logger.Info(
+           fmt.Sprintf("Removed snapshot `%s`.", backupSources),
+       )
        return nil
    })
    if err := copy.Copy(s.c.BackupSources, backupSources, copy.Options{
@@ -400,7 +467,9 @@ func (s *script) createArchive() error {
    }); err != nil {
        return fmt.Errorf("createArchive: error creating snapshot: %w", err)
    }
-   s.logger.Infof("Created snapshot of `%s` at `%s`.", s.c.BackupSources, backupSources)
+   s.logger.Info(
+       fmt.Sprintf("Created snapshot of `%s` at `%s`.", s.c.BackupSources, backupSources),
+   )
    }

    tarFile := s.file
@@ -408,7 +477,9 @@ func (s *script) createArchive() error {
        if err := remove(tarFile); err != nil {
            return fmt.Errorf("createArchive: error removing tar file: %w", err)
        }
-       s.logger.Infof("Removed tar file `%s`.", tarFile)
+       s.logger.Info(
+           fmt.Sprintf("Removed tar file `%s`.", tarFile),
+       )
        return nil
    })
@@ -432,11 +503,13 @@ func (s *script) createArchive() error {
        return fmt.Errorf("createArchive: error walking filesystem tree: %w", err)
    }

-   if err := createArchive(filesEligibleForBackup, backupSources, tarFile); err != nil {
+   if err := createArchive(filesEligibleForBackup, backupSources, tarFile, s.c.BackupCompression.String(), s.c.GzipParallelism.Int()); err != nil {
        return fmt.Errorf("createArchive: error compressing backup folder: %w", err)
    }

-   s.logger.Infof("Created backup of `%s` at `%s`.", backupSources, tarFile)
+   s.logger.Info(
+       fmt.Sprintf("Created backup of `%s` at `%s`.", backupSources, tarFile),
+   )
    return nil
}
@@ -453,7 +526,9 @@ func (s *script) encryptArchive() error {
        if err := remove(gpgFile); err != nil {
            return fmt.Errorf("encryptArchive: error removing gpg file: %w", err)
        }
-       s.logger.Infof("Removed GPG file `%s`.", gpgFile)
+       s.logger.Info(
+           fmt.Sprintf("Removed GPG file `%s`.", gpgFile),
+       )
        return nil
    })
@@ -483,7 +558,9 @@ func (s *script) encryptArchive() error {
    }

    s.file = gpgFile
-   s.logger.Infof("Encrypted backup using given passphrase, saving as `%s`.", s.file)
+   s.logger.Info(
+       fmt.Sprintf("Encrypted backup using given passphrase, saving as `%s`.", s.file),
+   )
    return nil
}
@@ -530,6 +607,12 @@ func (s *script) pruneBackups() error {
    for _, backend := range s.storages {
        b := backend
        eg.Go(func() error {
+           if skipPrune(b.Name(), s.c.BackupSkipBackendsFromPrune) {
+               s.logger.Info(
+                   fmt.Sprintf("Skipping pruning for backend `%s`.", b.Name()),
+               )
+               return nil
+           }
            stats, err := b.Prune(deadline, s.c.BackupPruningPrefix)
            if err != nil {
                return err
@@ -555,7 +638,20 @@ func (s *script) pruneBackups() error {
// is non-nil.
func (s *script) must(err error) {
    if err != nil {
-       s.logger.Errorf("Fatal error running backup: %s", err)
+       s.logger.Error(
+           fmt.Sprintf("Fatal error running backup: %s", err),
+       )
        panic(err)
    }
}
// skipPrune returns true if the given backend name is contained in the
// list of skipped backends.
func skipPrune(name string, skippedBackends []string) bool {
return slices.ContainsFunc(
skippedBackends,
func(b string) bool {
return strings.EqualFold(b, name) // ignore case on both sides
},
)
}
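Matching is case-insensitive, so BACKUP_SKIP_BACKENDS_FROM_PRUNE values need not match the canonical backend names exactly:

fmt.Println(skipPrune("S3", []string{"s3", "dropbox"}))    // true
fmt.Println(skipPrune("Local", []string{"s3", "dropbox"})) // false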

View File

@@ -25,7 +25,7 @@ Here is a list of all data passed to the template:
* `FullPath`: full path of the backup file (e.g. `/archive/backup-2022-02-11T01-00-00.tar.gz`) * `FullPath`: full path of the backup file (e.g. `/archive/backup-2022-02-11T01-00-00.tar.gz`)
* `Size`: size in bytes of the backup file * `Size`: size in bytes of the backup file
* `Storages`: object that holds stats about each storage * `Storages`: object that holds stats about each storage
* `Local`, `S3`, `WebDAV` or `SSH`: * `Local`, `S3`, `WebDAV`, `Azure`, `Dropbox` or `SSH`:
* `Total`: total number of backup files * `Total`: total number of backup files
* `Pruned`: number of backup files that were deleted due to pruning rule * `Pruned`: number of backup files that were deleted due to pruning rule
* `PruneErrors`: number of backup files that were unable to be pruned * `PruneErrors`: number of backup files that were unable to be pruned
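A minimal notification template using the fields listed above could look like this (assuming the fields are addressable from the template root exactly as documented):

Backup stored at {{ .FullPath }} ({{ .Size }} bytes).
S3: {{ .Storages.S3.Total }} total, {{ .Storages.S3.Pruned }} pruned, {{ .Storages.S3.PruneErrors }} prune error(s).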

go.mod (80 changed lines)
View File

@@ -1,66 +1,72 @@
module github.com/offen/docker-volume-backup

-go 1.19
+go 1.21

require (
-   github.com/containrrr/shoutrrr v0.5.2
+   github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.3.0
+   github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v1.1.0
+   github.com/containrrr/shoutrrr v0.7.1
    github.com/cosiner/argv v0.1.0
-   github.com/docker/docker v20.10.11+incompatible
+   github.com/docker/docker v24.0.5+incompatible
    github.com/gofrs/flock v0.8.1
-   github.com/kelseyhightower/envconfig v1.4.0
+   github.com/klauspost/compress v1.16.7
    github.com/leekchan/timeutil v0.0.0-20150802142658-28917288c48d
-   github.com/minio/minio-go/v7 v7.0.44
-   github.com/otiai10/copy v1.7.0
-   github.com/pkg/sftp v1.13.5
-   github.com/sirupsen/logrus v1.9.0
-   github.com/studio-b12/gowebdav v0.0.0-20220128162035-c7b1ff8a5e62
-   golang.org/x/crypto v0.3.0
-   golang.org/x/sync v0.0.0-20220601150217-0de741cfad7f
+   github.com/minio/minio-go/v7 v7.0.62
+   github.com/offen/envconfig v1.5.0
+   github.com/otiai10/copy v1.11.0
+   github.com/pkg/sftp v1.13.6
+   github.com/studio-b12/gowebdav v0.9.0
+   golang.org/x/crypto v0.12.0
+   golang.org/x/oauth2 v0.0.0-20221014153046-6fdb5e3db783
+   golang.org/x/sync v0.3.0
)

require (
+   github.com/cloudflare/circl v1.3.3 // indirect
+   github.com/golang/protobuf v1.5.2 // indirect
+   google.golang.org/appengine v1.6.7 // indirect
+   google.golang.org/protobuf v1.28.1 // indirect
+)
+
+require (
+   github.com/Azure/azure-sdk-for-go/sdk/azcore v1.6.0 // indirect
+   github.com/Azure/azure-sdk-for-go/sdk/internal v1.3.0 // indirect
+   github.com/AzureAD/microsoft-authentication-library-for-go v1.0.0 // indirect
    github.com/Microsoft/go-winio v0.5.2 // indirect
-   github.com/containerd/containerd v1.6.6 // indirect
-   github.com/docker/distribution v2.7.1+incompatible // indirect
+   github.com/ProtonMail/go-crypto v0.0.0-20230717121422-5aa5874ade95
+   github.com/docker/distribution v2.8.2+incompatible // indirect
    github.com/docker/go-connections v0.4.0 // indirect
    github.com/docker/go-units v0.4.0 // indirect
-   github.com/dustin/go-humanize v1.0.0 // indirect
-   github.com/fatih/color v1.10.0 // indirect
-   github.com/fsnotify/fsnotify v1.4.9 // indirect
+   github.com/dropbox/dropbox-sdk-go-unofficial/v6 v6.0.5
+   github.com/dustin/go-humanize v1.0.1 // indirect
+   github.com/fatih/color v1.13.0 // indirect
    github.com/gogo/protobuf v1.3.2 // indirect
-   github.com/golang/protobuf v1.5.2 // indirect
+   github.com/golang-jwt/jwt/v4 v4.5.0 // indirect
    github.com/google/uuid v1.3.0 // indirect
-   github.com/gorilla/mux v1.7.3 // indirect
    github.com/json-iterator/go v1.1.12 // indirect
-   github.com/klauspost/compress v1.15.12 // indirect
-   github.com/klauspost/cpuid/v2 v2.2.1 // indirect
+   github.com/klauspost/cpuid/v2 v2.2.5 // indirect
+   github.com/klauspost/pgzip v1.2.6
    github.com/kr/fs v0.1.0 // indirect
-   github.com/kr/text v0.2.0 // indirect
-   github.com/mattn/go-colorable v0.1.8 // indirect
-   github.com/mattn/go-isatty v0.0.12 // indirect
+   github.com/kylelemons/godebug v1.1.0 // indirect
+   github.com/mattn/go-colorable v0.1.13 // indirect
+   github.com/mattn/go-isatty v0.0.16 // indirect
    github.com/minio/md5-simd v1.1.2 // indirect
-   github.com/minio/sha256-simd v1.0.0 // indirect
+   github.com/minio/sha256-simd v1.0.1 // indirect
    github.com/moby/term v0.0.0-20200312100748-672ec06f55cd // indirect
    github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
    github.com/modern-go/reflect2 v1.0.2 // indirect
    github.com/morikuni/aec v1.0.0 // indirect
    github.com/niemeyer/pretty v0.0.0-20200227124842-a10e7caefd8e // indirect
-   github.com/nxadm/tail v1.4.6 // indirect
-   github.com/onsi/ginkgo v1.14.2 // indirect
-   github.com/onsi/gomega v1.10.3 // indirect
    github.com/opencontainers/go-digest v1.0.0 // indirect
    github.com/opencontainers/image-spec v1.0.3-0.20211202183452-c5a74bcca799 // indirect
+   github.com/pkg/browser v0.0.0-20210911075715-681adbf594b8 // indirect
    github.com/pkg/errors v0.9.1 // indirect
-   github.com/rs/xid v1.4.0 // indirect
-   golang.org/x/net v0.2.0 // indirect
-   golang.org/x/sys v0.2.0 // indirect
-   golang.org/x/text v0.4.0 // indirect
-   golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1 // indirect
-   google.golang.org/genproto v0.0.0-20220602131408-e326c6e8e9c8 // indirect
-   google.golang.org/grpc v1.47.0 // indirect
-   google.golang.org/protobuf v1.28.0 // indirect
+   github.com/rs/xid v1.5.0 // indirect
+   github.com/sirupsen/logrus v1.9.3 // indirect
+   golang.org/x/net v0.14.0 // indirect
+   golang.org/x/sys v0.11.0 // indirect
+   golang.org/x/text v0.12.0 // indirect
    gopkg.in/check.v1 v1.0.0-20200227125254-8fa46927fb4f // indirect
    gopkg.in/ini.v1 v1.67.0 // indirect
-   gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7 // indirect
-   gopkg.in/yaml.v2 v2.4.0 // indirect
+   gotest.tools/v3 v3.0.3 // indirect
)

go.sum (1168 changed lines)

File diff suppressed because it is too large

View File

@@ -0,0 +1,160 @@
// Copyright 2022 - Offen Authors <hioffen@posteo.de>
// SPDX-License-Identifier: MPL-2.0
package azure
import (
"bytes"
"context"
"errors"
"fmt"
"os"
"path/filepath"
"strings"
"sync"
"text/template"
"time"
"github.com/Azure/azure-sdk-for-go/sdk/azidentity"
"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob"
"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/container"
"github.com/offen/docker-volume-backup/internal/storage"
)
type azureBlobStorage struct {
*storage.StorageBackend
client *azblob.Client
containerName string
}
// Config contains values that define the configuration of an Azure Blob Storage.
type Config struct {
AccountName string
ContainerName string
PrimaryAccountKey string
Endpoint string
RemotePath string
}
// NewStorageBackend creates and initializes a new Azure Blob Storage backend.
func NewStorageBackend(opts Config, logFunc storage.Log) (storage.Backend, error) {
endpointTemplate, err := template.New("endpoint").Parse(opts.Endpoint)
if err != nil {
return nil, fmt.Errorf("NewStorageBackend: error parsing endpoint template: %w", err)
}
var ep bytes.Buffer
if err := endpointTemplate.Execute(&ep, opts); err != nil {
return nil, fmt.Errorf("NewStorageBackend: error executing endpoint template: %w", err)
}
normalizedEndpoint := fmt.Sprintf("%s/", strings.TrimSuffix(ep.String(), "/"))
var client *azblob.Client
if opts.PrimaryAccountKey != "" {
cred, err := azblob.NewSharedKeyCredential(opts.AccountName, opts.PrimaryAccountKey)
if err != nil {
return nil, fmt.Errorf("NewStorageBackend: error creating shared key Azure credential: %w", err)
}
client, err = azblob.NewClientWithSharedKeyCredential(normalizedEndpoint, cred, nil)
if err != nil {
return nil, fmt.Errorf("NewStorageBackend: error creating Azure client: %w", err)
}
} else {
cred, err := azidentity.NewManagedIdentityCredential(nil)
if err != nil {
return nil, fmt.Errorf("NewStorageBackend: error creating managed identity credential: %w", err)
}
client, err = azblob.NewClient(normalizedEndpoint, cred, nil)
if err != nil {
return nil, fmt.Errorf("NewStorageBackend: error creating Azure client: %w", err)
}
}
storage := azureBlobStorage{
client: client,
containerName: opts.ContainerName,
StorageBackend: &storage.StorageBackend{
DestinationPath: opts.RemotePath,
Log: logFunc,
},
}
return &storage, nil
}
// Name returns the name of the storage backend
func (b *azureBlobStorage) Name() string {
return "Azure"
}
// Copy copies the given file to the storage backend.
func (b *azureBlobStorage) Copy(file string) error {
fileReader, err := os.Open(file)
if err != nil {
return fmt.Errorf("(*azureBlobStorage).Copy: error opening file %s: %w", file, err)
}
_, err = b.client.UploadStream(
context.Background(),
b.containerName,
filepath.Join(b.DestinationPath, filepath.Base(file)),
fileReader,
nil,
)
if err != nil {
return fmt.Errorf("(*azureBlobStorage).Copy: error uploading file %s: %w", file, err)
}
return nil
}
// Prune rotates away backups according to the configuration and provided
// deadline for the Azure Blob storage backend.
func (b *azureBlobStorage) Prune(deadline time.Time, pruningPrefix string) (*storage.PruneStats, error) {
lookupPrefix := filepath.Join(b.DestinationPath, pruningPrefix)
pager := b.client.NewListBlobsFlatPager(b.containerName, &container.ListBlobsFlatOptions{
Prefix: &lookupPrefix,
})
var matches []string
var totalCount uint
for pager.More() {
resp, err := pager.NextPage(context.Background())
if err != nil {
return nil, fmt.Errorf("(*azureBlobStorage).Prune: error paging over blobs: %w", err)
}
for _, v := range resp.Segment.BlobItems {
totalCount++
if v.Properties.LastModified.Before(deadline) {
matches = append(matches, *v.Name)
}
}
}
stats := storage.PruneStats{
Total: totalCount,
Pruned: uint(len(matches)),
}
if err := b.DoPrune(b.Name(), len(matches), int(totalCount), func() error {
wg := sync.WaitGroup{}
wg.Add(len(matches))
var errs []error
for _, match := range matches {
name := match
go func() {
_, err := b.client.DeleteBlob(context.Background(), b.containerName, name, nil)
if err != nil {
errs = append(errs, err)
}
wg.Done()
}()
}
wg.Wait()
if len(errs) != 0 {
return errors.Join(errs...)
}
return nil
}); err != nil {
return &stats, err
}
return &stats, nil
}

View File

@@ -0,0 +1,260 @@
package dropbox
import (
"bytes"
"context"
"fmt"
"net/url"
"os"
"path"
"path/filepath"
"strings"
"sync"
"time"
"github.com/dropbox/dropbox-sdk-go-unofficial/v6/dropbox"
"github.com/dropbox/dropbox-sdk-go-unofficial/v6/dropbox/files"
"github.com/offen/docker-volume-backup/internal/storage"
"golang.org/x/oauth2"
)
type dropboxStorage struct {
*storage.StorageBackend
client files.Client
concurrencyLevel int
}
// Config allows to configure a Dropbox storage backend.
type Config struct {
Endpoint string
OAuth2Endpoint string
RefreshToken string
AppKey string
AppSecret string
RemotePath string
ConcurrencyLevel int
}
// NewStorageBackend creates and initializes a new Dropbox storage backend.
func NewStorageBackend(opts Config, logFunc storage.Log) (storage.Backend, error) {
tokenUrl, _ := url.JoinPath(opts.OAuth2Endpoint, "oauth2/token")
conf := &oauth2.Config{
ClientID: opts.AppKey,
ClientSecret: opts.AppSecret,
Endpoint: oauth2.Endpoint{
TokenURL: tokenUrl,
},
}
logFunc(storage.LogLevelInfo, "Dropbox", "Fetching fresh access token for Dropbox storage backend.")
tkSource := conf.TokenSource(context.Background(), &oauth2.Token{RefreshToken: opts.RefreshToken})
token, err := tkSource.Token()
if err != nil {
return nil, fmt.Errorf("(*dropboxStorage).NewStorageBackend: Error refreshing token: %w", err)
}
dbxConfig := dropbox.Config{
Token: token.AccessToken,
}
if opts.Endpoint != "https://api.dropbox.com/" {
dbxConfig.URLGenerator = func(hostType string, namespace string, route string) string {
return fmt.Sprintf("%s/%d/%s/%s", opts.Endpoint, 2, namespace, route)
}
}
client := files.New(dbxConfig)
if opts.ConcurrencyLevel < 1 {
logFunc(storage.LogLevelWarning, "Dropbox", "Concurrency level must be at least 1! Using 1 instead of %d.", opts.ConcurrencyLevel)
opts.ConcurrencyLevel = 1
}
return &dropboxStorage{
StorageBackend: &storage.StorageBackend{
DestinationPath: opts.RemotePath,
Log: logFunc,
},
client: client,
concurrencyLevel: opts.ConcurrencyLevel,
}, nil
}
// Name returns the name of the storage backend
func (b *dropboxStorage) Name() string {
return "Dropbox"
}
// Copy copies the given file to the Dropbox storage backend.
func (b *dropboxStorage) Copy(file string) error {
_, name := path.Split(file)
folderArg := files.NewCreateFolderArg(b.DestinationPath)
if _, err := b.client.CreateFolderV2(folderArg); err != nil {
switch err := err.(type) {
case files.CreateFolderV2APIError:
if err.EndpointError.Path.Tag != files.WriteErrorConflict {
return fmt.Errorf("(*dropboxStorage).Copy: Error creating directory '%s': %w", b.DestinationPath, err)
}
b.Log(storage.LogLevelInfo, b.Name(), "Destination path '%s' already exists, no new directory required.", b.DestinationPath)
default:
return fmt.Errorf("(*dropboxStorage).Copy: Error creating directory '%s': %w", b.DestinationPath, err)
}
}
r, err := os.Open(file)
if err != nil {
return fmt.Errorf("(*dropboxStorage).Copy: Error opening the file to be uploaded: %w", err)
}
defer r.Close()
// Start new upload session and get session id
b.Log(storage.LogLevelInfo, b.Name(), "Starting upload session for backup '%s' at path '%s'.", file, b.DestinationPath)
var sessionId string
uploadSessionStartArg := files.NewUploadSessionStartArg()
uploadSessionStartArg.SessionType = &files.UploadSessionType{Tagged: dropbox.Tagged{Tag: files.UploadSessionTypeConcurrent}}
if res, err := b.client.UploadSessionStart(uploadSessionStartArg, nil); err != nil {
return fmt.Errorf("(*dropboxStorage).Copy: Error starting the upload session: %w", err)
} else {
sessionId = res.SessionId
}
// Send the file in 148MB chunks (Dropbox API limit is 150MB, concurrent upload requires a multiple of 4MB though)
// Last append can be any size <= 150MB with Close=True
const chunkSize = 148 * 1024 * 1024 // 148MB
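    // Sanity check on the constant: 148 MiB = 37 * 4 MiB, so every full chunk
    // satisfies the 4 MiB-multiple requirement for concurrent upload sessions
    // while staying below the 150 MB per-request limit mentioned above.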
var offset uint64 = 0
var guard = make(chan struct{}, b.concurrencyLevel)
var errorChn = make(chan error, b.concurrencyLevel)
var EOFChn = make(chan bool, b.concurrencyLevel)
var mu sync.Mutex
var wg sync.WaitGroup
loop:
for {
guard <- struct{}{} // limit concurrency
select {
case err := <-errorChn: // error from goroutine
return err
case <-EOFChn: // EOF from goroutine
wg.Wait() // wait for all goroutines to finish
break loop
default:
}
go func() {
defer func() {
wg.Done()
<-guard
}()
wg.Add(1)
chunk := make([]byte, chunkSize)
mu.Lock() // to preserve offset of chunks
select {
case <-EOFChn:
EOFChn <- true // put it back for outer loop
mu.Unlock()
return // already EOF
default:
}
bytesRead, err := r.Read(chunk)
if err != nil {
errorChn <- fmt.Errorf("(*dropboxStorage).Copy: Error reading the file to be uploaded: %w", err)
mu.Unlock()
return
}
chunk = chunk[:bytesRead]
uploadSessionAppendArg := files.NewUploadSessionAppendArg(
files.NewUploadSessionCursor(sessionId, offset),
)
isEOF := bytesRead < chunkSize
uploadSessionAppendArg.Close = isEOF
if isEOF {
EOFChn <- true
}
offset += uint64(bytesRead)
mu.Unlock()
if err := b.client.UploadSessionAppendV2(uploadSessionAppendArg, bytes.NewReader(chunk)); err != nil {
errorChn <- fmt.Errorf("(*dropboxStorage).Copy: Error appending the file to the upload session: %w", err)
return
}
}()
}
// Finish the upload session, commit the file (no new data added)
_, err = b.client.UploadSessionFinish(
files.NewUploadSessionFinishArg(
files.NewUploadSessionCursor(sessionId, 0),
files.NewCommitInfo(filepath.Join(b.DestinationPath, name)),
), nil)
if err != nil {
return fmt.Errorf("(*dropboxStorage).Copy: Error finishing the upload session: %w", err)
}
b.Log(storage.LogLevelInfo, b.Name(), "Uploaded a copy of backup '%s' at path '%s'.", file, b.DestinationPath)
return nil
}
// Prune rotates away backups according to the configuration and provided deadline for the Dropbox storage backend.
func (b *dropboxStorage) Prune(deadline time.Time, pruningPrefix string) (*storage.PruneStats, error) {
var entries []files.IsMetadata
res, err := b.client.ListFolder(files.NewListFolderArg(b.DestinationPath))
if err != nil {
return nil, fmt.Errorf("(*webDavStorage).Prune: Error looking up candidates from remote storage: %w", err)
}
entries = append(entries, res.Entries...)
for res.HasMore {
res, err = b.client.ListFolderContinue(files.NewListFolderContinueArg(res.Cursor))
if err != nil {
return nil, fmt.Errorf("(*webDavStorage).Prune: Error looking up candidates from remote storage: %w", err)
}
entries = append(entries, res.Entries...)
}
var matches []*files.FileMetadata
var lenCandidates int
for _, candidate := range entries {
switch candidate := candidate.(type) {
case *files.FileMetadata:
if !strings.HasPrefix(candidate.Name, pruningPrefix) {
continue
}
lenCandidates++
if candidate.ServerModified.Before(deadline) {
matches = append(matches, candidate)
}
default:
continue
}
}
stats := &storage.PruneStats{
Total: uint(lenCandidates),
Pruned: uint(len(matches)),
}
if err := b.DoPrune(b.Name(), len(matches), lenCandidates, func() error {
for _, match := range matches {
if _, err := b.client.DeleteV2(files.NewDeleteArg(filepath.Join(b.DestinationPath, match.Name))); err != nil {
return fmt.Errorf("(*dropboxStorage).Prune: Error removing file from Dropbox storage: %w", err)
}
}
return nil
}); err != nil {
return stats, err
}
return stats, nil
}

View File

@@ -4,6 +4,7 @@
package local

import (
+   "errors"
    "fmt"
    "io"
    "os"
@@ -12,7 +13,6 @@ import (
    "time"

    "github.com/offen/docker-volume-backup/internal/storage"
-   "github.com/offen/docker-volume-backup/internal/utilities"
)

type localStorage struct {
@@ -47,9 +47,9 @@ func (b *localStorage) Copy(file string) error {
    _, name := path.Split(file)
    if err := copyFile(file, path.Join(b.DestinationPath, name)); err != nil {
-       return fmt.Errorf("(*localStorage).Copy: Error copying file to local archive: %w", err)
+       return fmt.Errorf("(*localStorage).Copy: Error copying file to archive: %w", err)
    }
-   b.Log(storage.LogLevelInfo, b.Name(), "Stored copy of backup `%s` in local archive `%s`.", file, b.DestinationPath)
+   b.Log(storage.LogLevelInfo, b.Name(), "Stored copy of backup `%s` in `%s`.", file, b.DestinationPath)

    if b.latestSymlink != "" {
        symlink := path.Join(b.DestinationPath, b.latestSymlink)
@@ -116,7 +116,7 @@ func (b *localStorage) Prune(deadline time.Time, pruningPrefix string) (*storage
        Pruned: uint(len(matches)),
    }

-   if err := b.DoPrune(b.Name(), len(matches), len(candidates), "local backup(s)", func() error {
+   if err := b.DoPrune(b.Name(), len(matches), len(candidates), func() error {
        var removeErrors []error
        for _, match := range matches {
            if err := os.Remove(match); err != nil {
@@ -125,9 +125,9 @@ func (b *localStorage) Prune(deadline time.Time, pruningPrefix string) (*storage
        }
        if len(removeErrors) != 0 {
            return fmt.Errorf(
-               "(*localStorage).Prune: %d error(s) deleting local files, starting with: %w",
+               "(*localStorage).Prune: %d error(s) deleting files, starting with: %w",
                len(removeErrors),
-               utilities.Join(removeErrors...),
+               errors.Join(removeErrors...),
            )
        }
        return nil
View File

@@ -5,8 +5,10 @@ package s3
import (
    "context"
+   "crypto/x509"
    "errors"
    "fmt"
+   "os"
    "path"
    "path/filepath"
    "time"
@@ -14,7 +16,6 @@ import (
    "github.com/minio/minio-go/v7"
    "github.com/minio/minio-go/v7/pkg/credentials"
    "github.com/offen/docker-volume-backup/internal/storage"
-   "github.com/offen/docker-volume-backup/internal/utilities"
)

type s3Storage struct {
@@ -22,6 +23,7 @@ type s3Storage struct {
    client       *minio.Client
    bucket       string
    storageClass string
+   partSize     int64
}

// Config contains values that define the configuration of a S3 backend.
@@ -35,11 +37,12 @@ type Config struct {
    RemotePath       string
    BucketName       string
    StorageClass     string
+   PartSize         int64
+   CACert           *x509.Certificate
}

// NewStorageBackend creates and initializes a new S3/Minio storage backend.
func NewStorageBackend(opts Config, logFunc storage.Log) (storage.Backend, error) {
    var creds *credentials.Credentials
    if opts.AccessKeyID != "" && opts.SecretAccessKey != "" {
        creds = credentials.NewStaticV4(
@@ -58,18 +61,23 @@ func NewStorageBackend(opts Config, logFunc storage.Log) (storage.Backend, error
        Secure: opts.EndpointProto == "https",
    }

+   transport, err := minio.DefaultTransport(true)
+   if err != nil {
+       return nil, fmt.Errorf("NewStorageBackend: failed to create default minio transport: %w", err)
+   }
+
    if opts.EndpointInsecure {
        if !options.Secure {
            return nil, errors.New("NewStorageBackend: AWS_ENDPOINT_INSECURE = true is only meaningful for https")
        }
-
-       transport, err := minio.DefaultTransport(true)
-       if err != nil {
-           return nil, fmt.Errorf("NewStorageBackend: failed to create default minio transport: %w", err)
-       }
        transport.TLSClientConfig.InsecureSkipVerify = true
-       options.Transport = transport
+   } else if opts.CACert != nil {
+       if transport.TLSClientConfig.RootCAs == nil {
+           transport.TLSClientConfig.RootCAs = x509.NewCertPool()
+       }
+       transport.TLSClientConfig.RootCAs.AddCert(opts.CACert)
    }
+   options.Transport = transport

    mc, err := minio.New(opts.Endpoint, &options)
    if err != nil {
@@ -84,6 +92,7 @@ func NewStorageBackend(opts Config, logFunc storage.Log) (storage.Backend, error
        client:       mc,
        bucket:       opts.BucketName,
        storageClass: opts.StorageClass,
+       partSize:     opts.PartSize,
    }, nil
}
@@ -95,16 +104,37 @@ func (v *s3Storage) Name() string {
// Copy copies the given file to the S3/Minio storage backend.
func (b *s3Storage) Copy(file string) error {
    _, name := path.Split(file)
-   if _, err := b.client.FPutObject(context.Background(), b.bucket, filepath.Join(b.DestinationPath, name), file, minio.PutObjectOptions{
+   putObjectOptions := minio.PutObjectOptions{
        ContentType:  "application/tar+gzip",
        StorageClass: b.storageClass,
-   }); err != nil {
+   }
+
+   if b.partSize > 0 {
+       srcFileInfo, err := os.Stat(file)
+       if err != nil {
+           return fmt.Errorf("(*s3Storage).Copy: error reading the local file: %w", err)
+       }
+       _, partSize, _, err := minio.OptimalPartInfo(srcFileInfo.Size(), uint64(b.partSize*1024*1024))
+       if err != nil {
+           return fmt.Errorf("(*s3Storage).Copy: error computing the optimal s3 part size: %w", err)
+       }
+       putObjectOptions.PartSize = uint64(partSize)
+   }
+
+   if _, err := b.client.FPutObject(context.Background(), b.bucket, filepath.Join(b.DestinationPath, name), file, putObjectOptions); err != nil {
        if errResp := minio.ToErrorResponse(err); errResp.Message != "" {
-           return fmt.Errorf("(*s3Storage).Copy: error uploading backup to remote storage: [Message]: '%s', [Code]: %s, [StatusCode]: %d", errResp.Message, errResp.Code, errResp.StatusCode)
+           return fmt.Errorf(
+               "(*s3Storage).Copy: error uploading backup to remote storage: [Message]: '%s', [Code]: %s, [StatusCode]: %d",
+               errResp.Message,
+               errResp.Code,
+               errResp.StatusCode,
+           )
        }
        return fmt.Errorf("(*s3Storage).Copy: error uploading backup to remote storage: %w", err)
    }
    b.Log(storage.LogLevelInfo, b.Name(), "Uploaded a copy of backup `%s` to bucket `%s`.", file, b.bucket)
    return nil
@@ -123,7 +153,7 @@ func (b *s3Storage) Prune(deadline time.Time, pruningPrefix string) (*storage.Pr
lenCandidates++ lenCandidates++
if candidate.Err != nil { if candidate.Err != nil {
return nil, fmt.Errorf( return nil, fmt.Errorf(
"(*s3Storage).Prune: Error looking up candidates from remote storage! %w", "(*s3Storage).Prune: error looking up candidates from remote storage! %w",
candidate.Err, candidate.Err,
) )
} }
@@ -137,7 +167,7 @@ func (b *s3Storage) Prune(deadline time.Time, pruningPrefix string) (*storage.Pr
Pruned: uint(len(matches)), Pruned: uint(len(matches)),
} }
if err := b.DoPrune(b.Name(), len(matches), lenCandidates, "remote backup(s)", func() error { if err := b.DoPrune(b.Name(), len(matches), lenCandidates, func() error {
objectsCh := make(chan minio.ObjectInfo) objectsCh := make(chan minio.ObjectInfo)
go func() { go func() {
for _, match := range matches { for _, match := range matches {
@@ -153,7 +183,7 @@ func (b *s3Storage) Prune(deadline time.Time, pruningPrefix string) (*storage.Pr
} }
} }
if len(removeErrors) != 0 { if len(removeErrors) != 0 {
return utilities.Join(removeErrors...) return errors.Join(removeErrors...)
} }
return nil return nil
}); err != nil { }); err != nil {
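
The new `PartSize` plumbing defers the actual sizing decision to minio-go. As a rough sketch of the same calculation in isolation (the `optimalPartSize` helper and its `configuredPartSizeMB` parameter are hypothetical; `minio.OptimalPartInfo` is the minio-go v7 helper the diff itself calls):

```go
package main

import (
	"fmt"
	"os"

	"github.com/minio/minio-go/v7"
)

// optimalPartSize is a sketch: clamp a user-configured multipart size
// (in MiB) to something the S3 API accepts for a file of the given
// size, mirroring the new PartSize logic in the diff above.
func optimalPartSize(file string, configuredPartSizeMB int64) (uint64, error) {
	info, err := os.Stat(file)
	if err != nil {
		return 0, fmt.Errorf("optimalPartSize: error reading the local file: %w", err)
	}
	// OptimalPartInfo raises the part size when needed so the upload
	// stays within the S3 limit of 10,000 parts per object.
	_, partSize, _, err := minio.OptimalPartInfo(info.Size(), uint64(configuredPartSizeMB*1024*1024))
	if err != nil {
		return 0, fmt.Errorf("optimalPartSize: error computing the optimal part size: %w", err)
	}
	return uint64(partSize), nil
}
```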


@@ -7,7 +7,6 @@ import (
 	"errors"
 	"fmt"
 	"io"
-	"io/ioutil"
 	"os"
 	"path"
 	"path/filepath"
@@ -46,7 +45,7 @@ func NewStorageBackend(opts Config, logFunc storage.Log) (storage.Backend, error
 	}
 	if _, err := os.Stat(opts.IdentityFile); err == nil {
-		key, err := ioutil.ReadFile(opts.IdentityFile)
+		key, err := os.ReadFile(opts.IdentityFile)
 		if err != nil {
 			return nil, errors.New("NewStorageBackend: error reading the private key")
 		}
@@ -75,7 +74,7 @@ func NewStorageBackend(opts Config, logFunc storage.Log) (storage.Backend, error
 	sshClient, err := ssh.Dial("tcp", fmt.Sprintf("%s:%s", opts.HostName, opts.Port), sshClientConfig)
 	if err != nil {
-		return nil, fmt.Errorf("NewStorageBackend: Error creating ssh client: %w", err)
+		return nil, fmt.Errorf("NewStorageBackend: error creating ssh client: %w", err)
 	}
 	_, _, err = sshClient.SendRequest("keepalive", false, nil)
 	if err != nil {
@@ -108,13 +107,13 @@ func (b *sshStorage) Copy(file string) error {
 	source, err := os.Open(file)
 	_, name := path.Split(file)
 	if err != nil {
-		return fmt.Errorf("(*sshStorage).Copy: Error reading the file to be uploaded: %w", err)
+		return fmt.Errorf("(*sshStorage).Copy: error reading the file to be uploaded: %w", err)
 	}
 	defer source.Close()
 	destination, err := b.sftpClient.Create(filepath.Join(b.DestinationPath, name))
 	if err != nil {
-		return fmt.Errorf("(*sshStorage).Copy: Error creating file on SSH storage: %w", err)
+		return fmt.Errorf("(*sshStorage).Copy: error creating file: %w", err)
 	}
 	defer destination.Close()
@@ -124,7 +123,7 @@ func (b *sshStorage) Copy(file string) error {
 		if err == io.EOF {
 			tot, err := destination.Write(chunk[:num])
 			if err != nil {
-				return fmt.Errorf("(*sshStorage).Copy: Error uploading the file to SSH storage: %w", err)
+				return fmt.Errorf("(*sshStorage).Copy: error uploading the file: %w", err)
 			}
 			if tot != len(chunk[:num]) {
@@ -135,12 +134,12 @@ func (b *sshStorage) Copy(file string) error {
 		}
 		if err != nil {
-			return fmt.Errorf("(*sshStorage).Copy: Error uploading the file to SSH storage: %w", err)
+			return fmt.Errorf("(*sshStorage).Copy: error uploading the file: %w", err)
 		}
 		tot, err := destination.Write(chunk[:num])
 		if err != nil {
-			return fmt.Errorf("(*sshStorage).Copy: Error uploading the file to SSH storage: %w", err)
+			return fmt.Errorf("(*sshStorage).Copy: error uploading the file: %w", err)
 		}
 		if tot != len(chunk[:num]) {
@@ -148,7 +147,7 @@ func (b *sshStorage) Copy(file string) error {
 		}
 	}
-	b.Log(storage.LogLevelInfo, b.Name(), "Uploaded a copy of backup `%s` to SSH storage '%s' at path '%s'.", file, b.hostName, b.DestinationPath)
+	b.Log(storage.LogLevelInfo, b.Name(), "Uploaded a copy of backup `%s` to '%s' at path '%s'.", file, b.hostName, b.DestinationPath)
 	return nil
 }
@@ -157,7 +156,7 @@ func (b *sshStorage) Copy(file string) error {
 func (b *sshStorage) Prune(deadline time.Time, pruningPrefix string) (*storage.PruneStats, error) {
 	candidates, err := b.sftpClient.ReadDir(b.DestinationPath)
 	if err != nil {
-		return nil, fmt.Errorf("(*sshStorage).Prune: Error reading directory from SSH storage: %w", err)
+		return nil, fmt.Errorf("(*sshStorage).Prune: error reading directory: %w", err)
 	}
 	var matches []string
@@ -175,10 +174,10 @@ func (b *sshStorage) Prune(deadline time.Time, pruningPrefix string) (*storage.P
 		Pruned: uint(len(matches)),
 	}
-	if err := b.DoPrune(b.Name(), len(matches), len(candidates), "SSH backup(s)", func() error {
+	if err := b.DoPrune(b.Name(), len(matches), len(candidates), func() error {
 		for _, match := range matches {
 			if err := b.sftpClient.Remove(filepath.Join(b.DestinationPath, match)); err != nil {
-				return fmt.Errorf("(*sshStorage).Prune: Error removing file from SSH storage: %w", err)
+				return fmt.Errorf("(*sshStorage).Prune: error removing file: %w", err)
 			}
 		}
 		return nil


@@ -29,7 +29,7 @@ const (
 	LogLevelError
 )
-type Log func(logType LogLevel, context string, msg string, params ...interface{})
+type Log func(logType LogLevel, context string, msg string, params ...any)
 // PruneStats is a wrapper struct for returning stats after pruning
 type PruneStats struct {
@@ -39,23 +39,22 @@ type PruneStats struct {
 // DoPrune holds general control flow that applies to any kind of storage.
 // Callers can pass in a thunk that performs the actual deletion of files.
-func (b *StorageBackend) DoPrune(context string, lenMatches, lenCandidates int, description string, doRemoveFiles func() error) error {
+func (b *StorageBackend) DoPrune(context string, lenMatches, lenCandidates int, doRemoveFiles func() error) error {
 	if lenMatches != 0 && lenMatches != lenCandidates {
 		if err := doRemoveFiles(); err != nil {
 			return err
 		}
 		b.Log(LogLevelInfo, context,
-			"Pruned %d out of %d %s as their age exceeded the configured retention period of %d days.",
+			"Pruned %d out of %d backups as their age exceeded the configured retention period of %d days.",
 			lenMatches,
 			lenCandidates,
-			description,
 			b.RetentionDays,
 		)
 	} else if lenMatches != 0 && lenMatches == lenCandidates {
-		b.Log(LogLevelWarning, context, "The current configuration would delete all %d existing %s.", lenMatches, description)
+		b.Log(LogLevelWarning, context, "The current configuration would delete all %d existing backups.", lenMatches)
 		b.Log(LogLevelWarning, context, "Refusing to do so, please check your configuration.")
 	} else {
-		b.Log(LogLevelInfo, context, "None of %d existing %s were pruned.", lenCandidates, description)
+		b.Log(LogLevelInfo, context, "None of %d existing backups were pruned.", lenCandidates)
 	}
 	return nil
 }
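
With the `description` parameter gone, `DoPrune` owns the log wording and backends only supply counts plus a removal thunk. A minimal sketch of a caller against the new signature (the `dummyStorage` type and its `list`/`modTime`/`remove` helpers are hypothetical; `storage` is the project's internal storage package):

```go
// Sketch: a storage backend's Prune implementation under the new
// DoPrune signature. dummyStorage is assumed to embed
// *storage.StorageBackend and to provide list, modTime and remove
// as stand-ins for real backend calls.
func (d *dummyStorage) Prune(deadline time.Time, pruningPrefix string) (*storage.PruneStats, error) {
	var matches []string
	candidates := d.list(pruningPrefix)
	for _, file := range candidates {
		if d.modTime(file).Before(deadline) {
			matches = append(matches, file)
		}
	}
	stats := &storage.PruneStats{
		Pruned: uint(len(matches)),
	}
	// DoPrune only invokes the thunk when pruning would leave at least
	// one backup behind; otherwise it logs a warning and refuses.
	if err := d.DoPrune(d.Name(), len(matches), len(candidates), func() error {
		var removeErrors []error
		for _, match := range matches {
			if err := d.remove(match); err != nil {
				removeErrors = append(removeErrors, err)
			}
		}
		return errors.Join(removeErrors...)
	}); err != nil {
		return stats, err
	}
	return stats, nil
}
```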


@@ -67,18 +67,20 @@ func (b *webDavStorage) Name() string {
 // Copy copies the given file to the WebDav storage backend.
 func (b *webDavStorage) Copy(file string) error {
-	bytes, err := os.ReadFile(file)
 	_, name := path.Split(file)
-	if err != nil {
-		return fmt.Errorf("(*webDavStorage).Copy: Error reading the file to be uploaded: %w", err)
-	}
 	if err := b.client.MkdirAll(b.DestinationPath, 0644); err != nil {
-		return fmt.Errorf("(*webDavStorage).Copy: Error creating directory '%s' on WebDAV server: %w", b.DestinationPath, err)
+		return fmt.Errorf("(*webDavStorage).Copy: error creating directory '%s' on server: %w", b.DestinationPath, err)
 	}
-	if err := b.client.Write(filepath.Join(b.DestinationPath, name), bytes, 0644); err != nil {
-		return fmt.Errorf("(*webDavStorage).Copy: Error uploading the file to WebDAV server: %w", err)
+	r, err := os.Open(file)
+	if err != nil {
+		return fmt.Errorf("(*webDavStorage).Copy: error opening the file to be uploaded: %w", err)
 	}
-	b.Log(storage.LogLevelInfo, b.Name(), "Uploaded a copy of backup '%s' to WebDAV URL '%s' at path '%s'.", file, b.url, b.DestinationPath)
+	if err := b.client.WriteStream(filepath.Join(b.DestinationPath, name), r, 0644); err != nil {
+		return fmt.Errorf("(*webDavStorage).Copy: error uploading the file: %w", err)
+	}
+	b.Log(storage.LogLevelInfo, b.Name(), "Uploaded a copy of backup '%s' to '%s' at path '%s'.", file, b.url, b.DestinationPath)
 	return nil
 }
@@ -87,7 +89,7 @@ func (b *webDavStorage) Copy(file string) error {
 func (b *webDavStorage) Prune(deadline time.Time, pruningPrefix string) (*storage.PruneStats, error) {
 	candidates, err := b.client.ReadDir(b.DestinationPath)
 	if err != nil {
-		return nil, fmt.Errorf("(*webDavStorage).Prune: Error looking up candidates from remote storage: %w", err)
+		return nil, fmt.Errorf("(*webDavStorage).Prune: error looking up candidates from remote storage: %w", err)
 	}
 	var matches []fs.FileInfo
 	var lenCandidates int
@@ -106,10 +108,10 @@ func (b *webDavStorage) Prune(deadline time.Time, pruningPrefix string) (*storag
 		Pruned: uint(len(matches)),
 	}
-	if err := b.DoPrune(b.Name(), len(matches), lenCandidates, "WebDAV backup(s)", func() error {
+	if err := b.DoPrune(b.Name(), len(matches), lenCandidates, func() error {
 		for _, match := range matches {
 			if err := b.client.Remove(filepath.Join(b.DestinationPath, match.Name())); err != nil {
-				return fmt.Errorf("(*webDavStorage).Prune: Error removing file from WebDAV storage: %w", err)
+				return fmt.Errorf("(*webDavStorage).Prune: error removing file: %w", err)
 			}
 		}
 		return nil
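
The switch from `Write` to `WriteStream` means the archive is no longer read into memory before the upload. A sketch of the pattern in isolation (assuming `github.com/studio-b12/gowebdav` as the client library, which exposes both methods as used in the diff; `streamUpload` is a hypothetical helper):

```go
package main

import (
	"fmt"
	"os"

	"github.com/studio-b12/gowebdav"
)

// streamUpload is a sketch: stream a local file to a WebDAV server
// instead of buffering it in memory first.
func streamUpload(c *gowebdav.Client, localPath, remotePath string) error {
	r, err := os.Open(localPath)
	if err != nil {
		return fmt.Errorf("streamUpload: error opening the file: %w", err)
	}
	defer r.Close()
	// WriteStream copies from the reader chunk by chunk, so memory use
	// stays flat regardless of how large the backup archive is.
	return c.WriteStream(remotePath, r, 0644)
}
```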


@@ -1,24 +0,0 @@
// Copyright 2022 - Offen Authors <hioffen@posteo.de>
// SPDX-License-Identifier: MPL-2.0
package utilities
import (
"errors"
"strings"
)
// Join takes a list of errors and joins them into a single error
func Join(errs ...error) error {
if len(errs) == 1 {
return errs[0]
}
var msgs []string
for _, err := range errs {
if err == nil {
continue
}
msgs = append(msgs, err.Error())
}
return errors.New("[" + strings.Join(msgs, ", ") + "]")
}

test/Dockerfile (new file, 13 lines)

@@ -0,0 +1,13 @@
FROM docker:24-dind
RUN apk add \
coreutils \
curl \
gpg \
jq \
moreutils \
tar \
zstd \
--no-cache
WORKDIR /code/test

test/README.md (new file, 70 lines)

@@ -0,0 +1,70 @@
# Integration Tests
## Running tests
The main entry point for running tests is the `./test.sh` script.
It can be used to run the entire test suite, or just a single test case.
### Run all tests
```sh
./test.sh
```
### Run a single test case
```sh
./test.sh <directory-name>
```
### Configuring a test run
In addition to the match pattern, which can be given as the first positional argument, certain behavior can be changed by setting environment variables:
#### `BUILD_IMAGE`
When set, the test script will build an up-to-date `docker-volume-backup` image from the current state of your source tree, and run the tests against it.
```sh
BUILD_IMAGE=1 ./test.sh
```
The default behavior is not to build an image, but instead to look for an existing image with the expected tag on your host system.
#### `IMAGE_TAG`
Setting this value lets you run tests against different existing images, so you can compare behavior:
```sh
IMAGE_TAG=v2.30.0 ./test.sh
```
#### `NO_IMAGE_CACHE`
When set, images from remote registries will not be cached and shared between sandbox containers.
```sh
NO_IMAGE_CACHE=1 ./test.sh
```
By default, two local volumes are created that persist the image data and provide it to the sandbox containers at runtime.
## Understanding the test setup
The test setup runs each test case in an isolated Docker container, which itself is running an otherwise unused Docker daemon.
This means tests can rely on no one else using that daemon, so they can make assumptions about the number of running containers and the like.
As the sandbox container is also expected to be torn down after each test, the scripts do not need to do any cleanup themselves.
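Stripped to its essentials, the sandbox lifecycle in `test.sh` looks roughly like this (names shortened; the real script adds image caching, readiness polling, and cleanup traps):
```sh
# Rough sketch of the per-test sandbox; the image is the docker:24-dind
# based one built from test/Dockerfile.
docker run --name sandbox --detach --privileged \
  -v "$(dirname "$(pwd)")":/code \
  -v sandbox:/var/lib/docker \
  offen/docker-volume-backup:test-sandbox
docker exec sandbox /bin/sh -c "/code/test/<directory-name>/run.sh"
docker rm "$(docker stop sandbox)"
```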
## Anatomy of a test case
The `test.sh` script looks for an executable file called `run.sh` in each directory.
When found, it is executed and signals success by returning a 0 exit code.
Any other exit code is considered a failure and will halt execution of further tests.
There is a `util.sh` file containing a few commonly used helpers, which can be used by adding the following prelude to a new test case:
```sh
cd "$(dirname "$0")"
. ../util.sh
current_test=$(basename $(pwd))
```
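Putting it all together, a minimal test case might look like the following sketch (`fail` and `pass` come from `util.sh`; the compose file and the asserted backup name are up to the individual test and assumed here):
```sh
#!/bin/sh
# Minimal sketch of a test case; assumes a docker-compose.yml that mounts
# ${LOCAL_DIR} at /archive and names its backup test.tar.gz.
set -e

cd "$(dirname "$0")"
. ../util.sh
current_test=$(basename $(pwd))

export LOCAL_DIR=$(mktemp -d)
docker compose up -d --quiet-pull
sleep 5
docker compose exec backup backup

if [ ! -f "$LOCAL_DIR/test.tar.gz" ]; then
  fail "Could not find expected backup file."
fi
pass "Found expected backup file."
```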


@@ -0,0 +1,56 @@
version: '3'
services:
storage:
image: mcr.microsoft.com/azure-storage/azurite:3.26.0
volumes:
- ${DATA_DIR:-./data}:/data
command: azurite-blob --blobHost 0.0.0.0 --blobPort 10000 --location /data
healthcheck:
test: nc 127.0.0.1 10000 -z
interval: 1s
retries: 30
az_cli:
image: mcr.microsoft.com/azure-cli:2.51.0
volumes:
- ${LOCAL_DIR:-./local}:/dump
command:
- /bin/sh
- -c
- |
az storage container create --name test-container
depends_on:
storage:
condition: service_healthy
environment:
AZURE_STORAGE_CONNECTION_STRING: DefaultEndpointsProtocol=http;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;BlobEndpoint=http://storage:10000/devstoreaccount1;
backup:
image: offen/docker-volume-backup:${TEST_VERSION:-canary}
hostname: hostnametoken
restart: always
environment:
AZURE_STORAGE_ACCOUNT_NAME: devstoreaccount1
AZURE_STORAGE_PRIMARY_ACCOUNT_KEY: Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==
AZURE_STORAGE_CONTAINER_NAME: test-container
AZURE_STORAGE_ENDPOINT: http://storage:10000/{{ .AccountName }}/
AZURE_STORAGE_PATH: 'path/to/backup'
BACKUP_FILENAME: test.tar.gz
BACKUP_CRON_EXPRESSION: 0 0 5 31 2 ?
BACKUP_RETENTION_DAYS: ${BACKUP_RETENTION_DAYS:-7}
BACKUP_PRUNING_LEEWAY: 5s
BACKUP_PRUNING_PREFIX: test
volumes:
- app_data:/backup/app_data:ro
- /var/run/docker.sock:/var/run/docker.sock
offen:
image: offen/offen:latest
labels:
- docker-volume-backup.stop-during-backup=true
volumes:
- app_data:/var/opt/offen
volumes:
app_data:

test/azure/run.sh (new executable file, 86 lines)

@@ -0,0 +1,86 @@
#!/bin/sh
set -e
cd "$(dirname "$0")"
. ../util.sh
current_test=$(basename $(pwd))
export LOCAL_DIR=$(mktemp -d)
export TMP_DIR=$(mktemp -d)
export DATA_DIR=$(mktemp -d)
download_az () {
docker compose run --rm az_cli \
az storage blob download -f /dump/$1.tar.gz -c test-container -n path/to/backup/$1.tar.gz
}
docker compose up -d --quiet-pull
sleep 5
docker compose exec backup backup
sleep 5
expect_running_containers "3"
download_az "test"
tar -xvf "$LOCAL_DIR/test.tar.gz" -C $TMP_DIR
if [ ! -f "$TMP_DIR/backup/app_data/offen.db" ]; then
fail "Could not find expeced file in untared backup"
fi
pass "Found relevant files in untared remote backups."
rm "$LOCAL_DIR/test.tar.gz"
# The second part of this test checks if backups get deleted when the retention
# is set to 0 days (which it should not as it would mean all backups get deleted)
BACKUP_RETENTION_DAYS="0" docker compose up -d
sleep 5
docker compose exec backup backup
download_az "test"
if [ ! -f "$LOCAL_DIR/test.tar.gz" ]; then
fail "Remote backup was deleted"
fi
pass "Remote backups have not been deleted."
# The third part of this test checks if old backups get deleted when the retention
# is set to 7 days (which it should)
BACKUP_RETENTION_DAYS="7" docker compose up -d
sleep 5
info "Create first backup with no prune"
docker compose exec backup backup
docker compose run --rm az_cli \
az storage blob upload -f /dump/test.tar.gz -c test-container -n path/to/backup/test-old.tar.gz
docker compose down
rm "$LOCAL_DIR/test.tar.gz"
back_date="$(date "+%Y-%m-%dT%H:%M:%S%z" -d "14 days ago" | rev | cut -c 3- | rev):00"
jq --arg back_date "$back_date" '(.collections[] | select(.name=="$BLOBS_COLLECTION$") | .data[] | select(.name=="path/to/backup/test-old.tar.gz") | .properties.creationTime = $back_date)' "$DATA_DIR/__azurite_db_blob__.json" | sponge "$DATA_DIR/__azurite_db_blob__.json"
docker compose up -d
sleep 5
info "Create second backup and prune"
docker compose exec backup backup
info "Download first backup which should be pruned"
download_az "test-old" || true
if [ -f "$LOCAL_DIR/test-old.tar.gz" ]; then
fail "Backdated file was not deleted"
fi
download_az "test" || true
if [ ! -f "$LOCAL_DIR/test.tar.gz" ]; then
fail "Recent file was not found"
fi
pass "Old remote backup has been pruned, new one is still present."


@@ -0,0 +1,48 @@
version: '3'
services:
minio:
hostname: minio.local
image: minio/minio:RELEASE.2020-08-04T23-10-51Z
environment:
MINIO_ROOT_USER: test
MINIO_ROOT_PASSWORD: test
MINIO_ACCESS_KEY: test
MINIO_SECRET_KEY: GMusLtUmILge2by+z890kQ
entrypoint: /bin/ash -c 'mkdir -p /data/backup && minio server --certs-dir "/certs" --address ":443" /data'
volumes:
- minio_backup_data:/data
- ${CERT_DIR:-.}/minio.crt:/certs/public.crt
- ${CERT_DIR:-.}/minio.key:/certs/private.key
backup:
image: offen/docker-volume-backup:${TEST_VERSION:-canary}
depends_on:
- minio
restart: always
environment:
BACKUP_FILENAME: test.tar.gz
AWS_ACCESS_KEY_ID: test
AWS_SECRET_ACCESS_KEY: GMusLtUmILge2by+z890kQ
AWS_ENDPOINT: minio.local:443
AWS_ENDPOINT_CA_CERT: /root/minio-rootCA.crt
AWS_S3_BUCKET_NAME: backup
BACKUP_CRON_EXPRESSION: 0 0 5 31 2 ?
BACKUP_RETENTION_DAYS: ${BACKUP_RETENTION_DAYS:-7}
BACKUP_PRUNING_LEEWAY: 5s
volumes:
- app_data:/backup/app_data:ro
- /var/run/docker.sock:/var/run/docker.sock
- ${CERT_DIR:-.}/rootCA.crt:/root/minio-rootCA.crt
offen:
image: offen/offen:latest
labels:
- docker-volume-backup.stop-during-backup=true
volumes:
- app_data:/var/opt/offen
volumes:
minio_backup_data:
name: minio_backup_data
app_data:

test/certs/run.sh (new executable file, 43 lines)

@@ -0,0 +1,43 @@
#!/bin/sh
set -e
cd "$(dirname "$0")"
. ../util.sh
current_test=$(basename $(pwd))
export CERT_DIR=$(mktemp -d)
openssl genrsa -des3 -passout pass:test -out "$CERT_DIR/rootCA.key" 4096
openssl req -passin pass:test \
-subj "/C=DE/ST=BE/O=IntegrationTest, Inc." \
-x509 -new -key "$CERT_DIR/rootCA.key" -sha256 -days 1 -out "$CERT_DIR/rootCA.crt"
openssl genrsa -out "$CERT_DIR/minio.key" 4096
openssl req -new -sha256 -key "$CERT_DIR/minio.key" \
-subj "/C=DE/ST=BE/O=IntegrationTest, Inc./CN=minio" \
-out "$CERT_DIR/minio.csr"
openssl x509 -req -passin pass:test \
-in "$CERT_DIR/minio.csr" \
-CA "$CERT_DIR/rootCA.crt" -CAkey "$CERT_DIR/rootCA.key" -CAcreateserial \
-extfile san.cnf \
-out "$CERT_DIR/minio.crt" -days 1 -sha256
openssl x509 -in "$CERT_DIR/minio.crt" -noout -text
docker compose up -d --quiet-pull
sleep 5
docker compose exec backup backup
sleep 5
expect_running_containers "3"
docker run --rm \
-v minio_backup_data:/minio_data \
alpine \
ash -c 'tar -xvf /minio_data/backup/test.tar.gz -C /tmp && test -f /tmp/backup/app_data/offen.db'
pass "Found relevant files in untared remote backups."

test/certs/san.cnf (new file, 1 line)

@@ -0,0 +1 @@
subjectAltName = DNS:minio.local


@@ -13,7 +13,7 @@ docker volume create app_data
 # correctly. It is not supposed to hold any data.
 docker volume create empty_data
-docker run -d \
+docker run -d -q \
   --name minio \
   --network test_network \
   --env MINIO_ROOT_USER=test \
@@ -25,7 +25,7 @@ docker run -d \
 docker exec minio mkdir -p /data/backup
-docker run -d \
+docker run -d -q \
   --name offen \
   --network test_network \
   -v app_data:/var/opt/offen/ \
@@ -33,7 +33,7 @@ docker run -d \
 sleep 10
-docker run --rm \
+docker run --rm -q \
   --network test_network \
   -v app_data:/backup/app_data \
   -v empty_data:/backup/empty_data \
@@ -48,7 +48,7 @@ docker run --rm \
   --entrypoint backup \
   offen/docker-volume-backup:${TEST_VERSION:-canary}
-docker run --rm -it \
+docker run --rm -q \
   -v backup_data:/data alpine \
   ash -c 'tar -xvf /data/backup/test.tar.gz && test -f /backup/app_data/offen.db && test -d /backup/empty_data'


@@ -1 +0,0 @@
local


@@ -42,10 +42,9 @@ services:
       EXEC_LABEL: test
       EXEC_FORWARD_OUTPUT: "true"
     volumes:
-      - archive:/archive
+      - ${LOCAL_DIR:-./local}:/archive
       - app_data:/backup/data:ro
       - /var/run/docker.sock:/var/run/docker.sock
 volumes:
   app_data:
-  archive:

test/commands/run.sh (Normal file → Executable file, 34 lines)

@@ -6,34 +6,37 @@ cd $(dirname $0)
 . ../util.sh
 current_test=$(basename $(pwd))
-docker-compose up -d
+export LOCAL_DIR=$(mktemp -d)
+export TMP_DIR=$(mktemp -d)
+docker compose up -d --quiet-pull
 sleep 30 # mariadb likes to take a bit before responding
-docker-compose exec backup backup
+docker compose exec backup backup
-sudo cp -r $(docker volume inspect --format='{{ .Mountpoint }}' commands_archive) ./local
-tar -xvf ./local/test.tar.gz
-if [ ! -f ./backup/data/dump.sql ]; then
+tar -xvf "$LOCAL_DIR/test.tar.gz" -C $TMP_DIR
+if [ ! -f "$TMP_DIR/backup/data/dump.sql" ]; then
   fail "Could not find file written by pre command."
 fi
 pass "Found expected file."
-if [ -f ./backup/data/not-relevant.txt ]; then
+if [ -f "$TMP_DIR/backup/data/not-relevant.txt" ]; then
   fail "Command ran for container with other label."
 fi
 pass "Command did not run for container with other label."
-if [ -f ./backup/data/post.txt ]; then
+if [ -f "$TMP_DIR/backup/data/post.txt" ]; then
   fail "File created in post command was present in backup."
 fi
 pass "Did not find unexpected file."
-docker-compose down --volumes
-sudo rm -rf ./local
+docker compose down --volumes
 info "Running commands test in swarm mode next."
+export LOCAL_DIR=$(mktemp -d)
+export TMP_DIR=$(mktemp -d)
 docker swarm init
 docker stack deploy --compose-file=docker-compose.yml test_stack
@@ -47,18 +50,13 @@ sleep 20
 docker exec $(docker ps -q -f name=backup) backup
-sudo cp -r $(docker volume inspect --format='{{ .Mountpoint }}' test_stack_archive) ./local
-tar -xvf ./local/test.tar.gz
-if [ ! -f ./backup/data/dump.sql ]; then
+tar -xvf "$LOCAL_DIR/test.tar.gz" -C $TMP_DIR
+if [ ! -f "$TMP_DIR/backup/data/dump.sql" ]; then
   fail "Could not find file written by pre command."
 fi
 pass "Found expected file."
-if [ -f ./backup/data/post.txt ]; then
+if [ -f "$TMP_DIR/backup/data/post.txt" ]; then
   fail "File created in post command was present in backup."
 fi
 pass "Did not find unexpected file."
-docker stack rm test_stack
-docker swarm leave --force


@@ -1 +0,0 @@
local


@@ -5,7 +5,7 @@ services:
     image: offen/docker-volume-backup:${TEST_VERSION:-canary}
     restart: always
     volumes:
-      - ./local:/archive
+      - ${LOCAL_DIR:-./local}:/archive
       - app_data:/backup/app_data:ro
       - ./01backup.env:/etc/dockervolumebackup/conf.d/01backup.env
       - ./02backup.env:/etc/dockervolumebackup/conf.d/02backup.env


@@ -6,26 +6,24 @@ cd $(dirname $0)
 . ../util.sh
 current_test=$(basename $(pwd))
-mkdir -p local
-docker-compose up -d
+export LOCAL_DIR=$(mktemp -d)
+docker compose up -d --quiet-pull
 # sleep until a backup is guaranteed to have happened on the 1 minute schedule
 sleep 100
-docker-compose down --volumes
-if [ ! -f ./local/conf.tar.gz ]; then
+if [ ! -f "$LOCAL_DIR/conf.tar.gz" ]; then
   fail "Config from file was not used."
 fi
 pass "Config from file was used."
-if [ ! -f ./local/other.tar.gz ]; then
+if [ ! -f "$LOCAL_DIR/other.tar.gz" ]; then
   fail "Run on same schedule did not succeed."
 fi
 pass "Run on same schedule succeeded."
-if [ -f ./local/never.tar.gz ]; then
+if [ -f "$LOCAL_DIR/never.tar.gz" ]; then
   fail "Unexpected file was found."
 fi
 pass "Unexpected cron did not run."

test/dropbox/.gitignore (vendored, new file, 1 line)

@@ -0,0 +1 @@
user_v2_ready.yaml


@@ -0,0 +1,57 @@
version: '3'
services:
openapi_mock:
image: muonsoft/openapi-mock:0.3.9
environment:
OPENAPI_MOCK_USE_EXAMPLES: if_present
OPENAPI_MOCK_SPECIFICATION_URL: '/etc/openapi/user_v2.yaml'
ports:
- 8080:8080
volumes:
- ${SPEC_FILE:-./user_v2.yaml}:/etc/openapi/user_v2.yaml
oauth2_mock:
image: ghcr.io/navikt/mock-oauth2-server:1.0.0
ports:
- 8090:8090
environment:
PORT: 8090
JSON_CONFIG_PATH: '/etc/oauth2/config.json'
volumes:
- ./oauth2_config.json:/etc/oauth2/config.json
backup:
image: offen/docker-volume-backup:${TEST_VERSION:-canary}
hostname: hostnametoken
depends_on:
- openapi_mock
- oauth2_mock
restart: always
environment:
BACKUP_FILENAME_EXPAND: 'true'
BACKUP_FILENAME: test-$$HOSTNAME.tar.gz
BACKUP_CRON_EXPRESSION: 0 0 5 31 2 ?
BACKUP_RETENTION_DAYS: ${BACKUP_RETENTION_DAYS:-7}
BACKUP_PRUNING_LEEWAY: 5s
BACKUP_PRUNING_PREFIX: test
DROPBOX_ENDPOINT: http://openapi_mock:8080
DROPBOX_OAUTH2_ENDPOINT: http://oauth2_mock:8090
DROPBOX_REFRESH_TOKEN: test
DROPBOX_APP_KEY: test
DROPBOX_APP_SECRET: test
DROPBOX_REMOTE_PATH: /test
DROPBOX_CONCURRENCY_LEVEL: 6
volumes:
- app_data:/backup/app_data:ro
- /var/run/docker.sock:/var/run/docker.sock
offen:
image: offen/offen:latest
labels:
- docker-volume-backup.stop-during-backup=true
volumes:
- app_data:/var/opt/offen
volumes:
app_data:


@@ -0,0 +1,37 @@
{
"interactiveLogin": true,
"httpServer": "NettyWrapper",
"tokenCallbacks": [
{
"issuerId": "issuer1",
"tokenExpiry": 120,
"requestMappings": [
{
"requestParam": "scope",
"match": "scope1",
"claims": {
"sub": "subByScope",
"aud": [
"audByScope"
]
}
}
]
},
{
"issuerId": "issuer2",
"requestMappings": [
{
"requestParam": "someparam",
"match": "somevalue",
"claims": {
"sub": "subBySomeParam",
"aud": [
"audBySomeParam"
]
}
}
]
}
]
}

test/dropbox/run.sh (new executable file, 61 lines)

@@ -0,0 +1,61 @@
#!/bin/sh
set -e
cd "$(dirname "$0")"
. ../util.sh
current_test=$(basename $(pwd))
export SPEC_FILE=$(mktemp -d)/user_v2.yaml
cp user_v2.yaml $SPEC_FILE
sed -i 's/SERVER_MODIFIED_1/'"$(date "+%Y-%m-%dT%H:%M:%SZ")/g" $SPEC_FILE
sed -i 's/SERVER_MODIFIED_2/'"$(date "+%Y-%m-%dT%H:%M:%SZ" -d "14 days ago")/g" $SPEC_FILE
docker compose up -d --quiet-pull
sleep 5
logs=$(docker compose exec -T backup backup)
sleep 5
expect_running_containers "4"
echo "$logs"
if echo "$logs" | grep -q "ERROR"; then
fail "Backup failed, errors reported: $logs"
else
pass "Backup succeeded, no errors reported."
fi
# The second part of this test checks if backups get deleted when the retention
# is set to 0 days (which it should not as it would mean all backups get deleted)
BACKUP_RETENTION_DAYS="0" docker compose up -d
sleep 5
logs=$(docker compose exec -T backup backup)
echo "$logs"
if echo "$logs" | grep -q "Refusing to do so, please check your configuration"; then
pass "Remote backups have not been deleted."
else
fail "Remote backups would have been deleted: $logs"
fi
# The third part of this test checks if old backups get deleted when the retention
# is set to 7 days (which it should)
BACKUP_RETENTION_DAYS="7" docker compose up -d
sleep 5
info "Create second backup and prune"
logs=$(docker compose exec -T backup backup)
echo "$logs"
if echo "$logs" | grep -q "Pruned 1 out of 2 backups as their age exceeded the configured retention period"; then
pass "Old remote backup has been pruned, new one is still present."
elif echo "$logs" | grep -q "ERROR"; then
fail "Pruning failed, errors reported: $logs"
elif echo "$logs" | grep -q "None of 1 existing backups were pruned"; then
fail "Pruning failed, old backup has not been pruned: $logs"
else
fail "Pruning failed, unknown result: $logs"
fi

test/dropbox/user_v2.yaml (new file, 12758 lines; diff suppressed because it is too large)

test/extend/Dockerfile (new file, 4 lines)

@@ -0,0 +1,4 @@
ARG version=canary
FROM offen/docker-volume-backup:$version
RUN apk add rsync


@@ -0,0 +1,26 @@
version: '3'
services:
backup:
image: offen/docker-volume-backup:${TEST_VERSION:-canary}
restart: always
labels:
- docker-volume-backup.copy-post=/bin/sh -c 'mkdir -p /tmp/unpack && tar -xvf $$COMMAND_RUNTIME_ARCHIVE_FILEPATH -C /tmp/unpack && rsync -r /tmp/unpack/backup/app_data /local'
environment:
BACKUP_FILENAME: test.tar.gz
BACKUP_CRON_EXPRESSION: 0 0 5 31 2 ?
EXEC_FORWARD_OUTPUT: "true"
volumes:
- ${LOCAL_DIR:-local}:/local
- app_data:/backup/app_data:ro
- /var/run/docker.sock:/var/run/docker.sock
offen:
image: offen/offen:latest
labels:
- docker-volume-backup.stop-during-backup=true
volumes:
- app_data:/var/opt/offen
volumes:
app_data:

test/extend/run.sh (new executable file, 27 lines)

@@ -0,0 +1,27 @@
#!/bin/sh
set -e
cd "$(dirname "$0")"
. ../util.sh
current_test=$(basename $(pwd))
export LOCAL_DIR=$(mktemp -d)
export BASE_VERSION="${TEST_VERSION:-canary}"
export TEST_VERSION="${TEST_VERSION:-canary}-with-rsync"
docker build . -t offen/docker-volume-backup:$TEST_VERSION --build-arg version=$BASE_VERSION
docker compose up -d --quiet-pull
sleep 5
docker compose exec backup backup
sleep 5
expect_running_containers "2"
if [ ! -f "$LOCAL_DIR/app_data/offen.db" ]; then
fail "Could not find expected file in untared archive."
fi

test/gpg/.gitignore (vendored, deleted)

@@ -1 +0,0 @@
local


@@ -11,7 +11,7 @@ services:
       BACKUP_RETENTION_DAYS: ${BACKUP_RETENTION_DAYS:-7}
       GPG_PASSPHRASE: 1234#$$ecret
     volumes:
-      - ./local:/archive
+      - ${LOCAL_DIR:-./local}:/archive
       - app_data:/backup/app_data:ro
       - /var/run/docker.sock:/var/run/docker.sock


@@ -6,28 +6,27 @@ cd "$(dirname "$0")"
 . ../util.sh
 current_test=$(basename $(pwd))
-mkdir -p local
-docker-compose up -d
+export LOCAL_DIR=$(mktemp -d)
+docker compose up -d --quiet-pull
 sleep 5
-docker-compose exec backup backup
+docker compose exec backup backup
 expect_running_containers "2"
-tmp_dir=$(mktemp -d)
-echo "1234#\$ecret" | gpg -d --pinentry-mode loopback --yes --passphrase-fd 0 ./local/test.tar.gz.gpg > ./local/decrypted.tar.gz
-tar -xf ./local/decrypted.tar.gz -C $tmp_dir
-if [ ! -f $tmp_dir/backup/app_data/offen.db ]; then
+TMP_DIR=$(mktemp -d)
+echo "1234#\$ecret" | gpg -d --pinentry-mode loopback --yes --passphrase-fd 0 "$LOCAL_DIR/test.tar.gz.gpg" > "$LOCAL_DIR/decrypted.tar.gz"
+tar -xf "$LOCAL_DIR/decrypted.tar.gz" -C $TMP_DIR
+if [ ! -f $TMP_DIR/backup/app_data/offen.db ]; then
   fail "Could not find expected file in untared archive."
 fi
-rm ./local/decrypted.tar.gz
+rm "$LOCAL_DIR/decrypted.tar.gz"
 pass "Found relevant files in decrypted and untared local backup."
-if [ ! -L ./local/test-latest.tar.gz.gpg ]; then
+if [ ! -L "$LOCAL_DIR/test-latest.tar.gz.gpg" ]; then
   fail "Could not find local symlink to latest encrypted backup."
 fi
-docker-compose down --volumes


@@ -1 +0,0 @@
local


@@ -11,5 +11,5 @@ services:
       BACKUP_CRON_EXPRESSION: 0 0 5 31 2 ?
       BACKUP_EXCLUDE_REGEXP: '\.(me|you)$$'
     volumes:
-      - ./local:/archive
+      - ${LOCAL_DIR:-./local}:/archive
       - ./sources:/backup/data:ro

test/ignore/run.sh (Normal file → Executable file, 16 lines)

@@ -6,23 +6,21 @@ cd $(dirname $0)
 . ../util.sh
 current_test=$(basename $(pwd))
-mkdir -p local
-docker-compose up -d
+export LOCAL_DIR=$(mktemp -d)
+docker compose up -d --quiet-pull
 sleep 5
-docker-compose exec backup backup
-docker-compose down --volumes
-out=$(mktemp -d)
-sudo tar --same-owner -xvf ./local/test.tar.gz -C "$out"
-if [ ! -f "$out/backup/data/me.txt" ]; then
+docker compose exec backup backup
+TMP_DIR=$(mktemp -d)
+tar --same-owner -xvf "$LOCAL_DIR/test.tar.gz" -C "$TMP_DIR"
+if [ ! -f "$TMP_DIR/backup/data/me.txt" ]; then
   fail "Expected file was not found."
 fi
 pass "Expected file was found."
-if [ -f "$out/backup/data/skip.me" ]; then
+if [ -f "$TMP_DIR/backup/data/skip.me" ]; then
   fail "Ignored file was found."
 fi
 pass "Ignored file was not found."


@@ -1 +0,0 @@
local


@@ -16,7 +16,7 @@ services:
     volumes:
       - app_data:/backup/app_data:ro
       - /var/run/docker.sock:/var/run/docker.sock
-      - ./local:/archive
+      - ${LOCAL_DIR:-./local}:/archive
   offen:
     image: offen/offen:latest


@@ -6,26 +6,26 @@ cd "$(dirname "$0")"
 . ../util.sh
 current_test=$(basename $(pwd))
-mkdir -p local
-docker-compose up -d
+export LOCAL_DIR=$(mktemp -d)
+docker compose up -d --quiet-pull
 sleep 5
 # A symlink for a known file in the volume is created so the test can check
 # whether symlinks are preserved on backup.
-docker-compose exec offen ln -s /var/opt/offen/offen.db /var/opt/offen/db.link
-docker-compose exec backup backup
+docker compose exec offen ln -s /var/opt/offen/offen.db /var/opt/offen/db.link
+docker compose exec backup backup
 sleep 5
 expect_running_containers "2"
 tmp_dir=$(mktemp -d)
-tar -xvf ./local/test-hostnametoken.tar.gz -C $tmp_dir
+tar -xvf "$LOCAL_DIR/test-hostnametoken.tar.gz" -C $tmp_dir
 if [ ! -f "$tmp_dir/backup/app_data/offen.db" ]; then
   fail "Could not find expected file in untared archive."
 fi
-rm -f ./local/test-hostnametoken.tar.gz
+rm -f "$LOCAL_DIR/test-hostnametoken.tar.gz"
 if [ ! -L "$tmp_dir/backup/app_data/db.link" ]; then
   fail "Could not find expected symlink in untared archive."
@@ -33,7 +33,7 @@ fi
 pass "Found relevant files in decrypted and untared local backup."
-if [ ! -L ./local/test-hostnametoken.latest.tar.gz.gpg ]; then
+if [ ! -L "$LOCAL_DIR/test-hostnametoken.latest.tar.gz.gpg" ]; then
   fail "Could not find symlink to latest version."
 fi
@@ -41,15 +41,36 @@ pass "Found symlink to latest version in local backup."
 # The second part of this test checks if backups get deleted when the retention
 # is set to 0 days (which it should not as it would mean all backups get deleted)
-# TODO: find out if we can test actual deletion without having to wait for a day
-BACKUP_RETENTION_DAYS="0" docker-compose up -d
+BACKUP_RETENTION_DAYS="0" docker compose up -d
 sleep 5
-docker-compose exec backup backup
+docker compose exec backup backup
-if [ "$(find ./local -type f | wc -l)" != "1" ]; then
-  fail "Backups should not have been deleted, instead seen: "$(find ./local -type f)""
+if [ "$(find "$LOCAL_DIR" -type f | wc -l)" != "1" ]; then
+  fail "Backups should not have been deleted, instead seen: "$(find "$LOCAL_DIR" -type f)""
 fi
 pass "Local backups have not been deleted."
-docker-compose down --volumes
+# The third part of this test checks if old backups get deleted when the retention
+# is set to 7 days (which it should)
+BACKUP_RETENTION_DAYS="7" docker compose up -d
+sleep 5
+info "Create first backup with no prune"
+docker compose exec backup backup
+touch -r "$LOCAL_DIR/test-hostnametoken.tar.gz" -d "14 days ago" "$LOCAL_DIR/test-hostnametoken-old.tar.gz"
+info "Create second backup and prune"
+docker compose exec backup backup
+if [ -f "$LOCAL_DIR/test-hostnametoken-old.tar.gz" ]; then
+  fail "Backdated file has not been deleted."
+fi
+if [ ! -f "$LOCAL_DIR/test-hostnametoken.tar.gz" ]; then
+  fail "Recent file has been deleted."
+fi
+pass "Old remote backup has been pruned, new one is still present."


@@ -1 +0,0 @@
local


@@ -12,7 +12,7 @@ services:
       NOTIFICATION_URLS: ${NOTIFICATION_URLS}
       EXTRA_VALUE: extra-value
     volumes:
-      - ./local:/archive
+      - ${LOCAL_DIR:-./local}:/archive
       - app_data:/backup/app_data:ro
       - ./notifications.tmpl:/etc/dockervolumebackup/notifications.d/notifications.tmpl


@@ -6,15 +6,15 @@ cd $(dirname $0)
 . ../util.sh
 current_test=$(basename $(pwd))
-mkdir -p local
-docker-compose up -d
+export LOCAL_DIR=$(mktemp -d)
+docker compose up -d --quiet-pull
 sleep 5
 GOTIFY_TOKEN=$(curl -sSLX POST -H 'Content-Type: application/json' -d '{"name":"test"}' http://admin:custom@localhost:8080/application | jq -r '.token')
 info "Set up Gotify application using token $GOTIFY_TOKEN"
-docker-compose exec backup backup
+docker compose exec backup backup
 NUM_MESSAGES=$(curl -sSL http://admin:custom@localhost:8080/message | jq -r '.messages | length')
 if [ "$NUM_MESSAGES" != 0 ]; then
@@ -22,11 +22,11 @@ if [ "$NUM_MESSAGES" != 0 ]; then
 fi
 pass "No notifications were sent when not configured."
-docker-compose down
+docker compose down
-NOTIFICATION_URLS="gotify://gotify/${GOTIFY_TOKEN}?disableTLS=true" docker-compose up -d
-docker-compose exec backup backup
+NOTIFICATION_URLS="gotify://gotify/${GOTIFY_TOKEN}?disableTLS=true" docker compose up -d
+docker compose exec backup backup
 NUM_MESSAGES=$(curl -sSL http://admin:custom@localhost:8080/message | jq -r '.messages | length')
 if [ "$NUM_MESSAGES" != 1 ]; then
@@ -46,5 +46,3 @@ if [ "$MESSAGE_BODY" != "Backing up /tmp/test.tar.gz succeeded." ]; then
   fail "Unexpected notification body $MESSAGE_BODY"
 fi
 pass "Custom notification body was used."
-docker-compose down --volumes


@@ -1 +0,0 @@
local


@@ -9,9 +9,9 @@ services:
     volumes:
       - postgres_data:/var/lib/postgresql/data
     environment:
-      - POSTGRES_PASSWORD=1FHJMSwt0yhIN1zS7I4DilGUhThBKq0x
-      - POSTGRES_USER=test
-      - POSTGRES_DB=test
+      POSTGRES_PASSWORD: 1FHJMSwt0yhIN1zS7I4DilGUhThBKq0x
+      POSTGRES_USER: test
+      POSTGRES_DB: test
   backup:
     image: offen/docker-volume-backup:${TEST_VERSION}
@@ -21,7 +21,7 @@ services:
     volumes:
       - postgres_data:/backup/postgres:ro
       - /var/run/docker.sock:/var/run/docker.sock:ro
-      - ./local:/archive
+      - ${LOCAL_DIR:-./local}:/archive
 volumes:
   postgres_data:

test/ownership/run.sh (Normal file → Executable file, 20 lines)

@@ -7,24 +7,22 @@ cd $(dirname $0)
 . ../util.sh
 current_test=$(basename $(pwd))
-mkdir -p local
-docker-compose up -d
+export LOCAL_DIR=$(mktemp -d)
+docker compose up -d --quiet-pull
 sleep 5
-docker-compose exec backup backup
+docker compose exec backup backup
-tmp_dir=$(mktemp -d)
-sudo tar --same-owner -xvf ./local/backup.tar.gz -C $tmp_dir
-sudo find $tmp_dir/backup/postgres > /dev/null
+TMP_DIR=$(mktemp -d)
+tar --same-owner -xvf "$LOCAL_DIR/backup.tar.gz" -C $TMP_DIR
+find $TMP_DIR/backup/postgres > /dev/null
 pass "Backup contains files at expected location"
-for file in $(sudo find $tmp_dir/backup/postgres); do
-  if [ "$(sudo stat -c '%u:%g' $file)" != "70:70" ]; then
-    fail "Unexpected file ownership for $file: $(sudo stat -c '%u:%g' $file)"
+for file in $(find $TMP_DIR/backup/postgres); do
+  if [ "$(stat -c '%u:%g' $file)" != "70:70" ]; then
+    fail "Unexpected file ownership for $file: $(stat -c '%u:%g' $file)"
   fi
 done
 pass "All files and directories in backup preserved their ownership."
-docker-compose down --volumes

test/pgzip/run.sh (new executable file, 42 lines)

@@ -0,0 +1,42 @@
#!/bin/sh
set -e
cd $(dirname $0)
. ../util.sh
current_test=$(basename $(pwd))
docker network create test_network
docker volume create app_data
LOCAL_DIR=$(mktemp -d)
docker run -d -q \
--name offen \
--network test_network \
-v app_data:/var/opt/offen/ \
offen/offen:latest
sleep 5
docker run --rm -q \
--network test_network \
-v app_data:/backup/app_data \
-v $LOCAL_DIR:/archive \
-v /var/run/docker.sock:/var/run/docker.sock \
--env BACKUP_COMPRESSION=gz \
--env GZIP_PARALLELISM=0 \
--env BACKUP_FILENAME='test.{{ .Extension }}' \
--entrypoint backup \
offen/docker-volume-backup:${TEST_VERSION:-canary}
tmp_dir=$(mktemp -d)
tar -xvf "$LOCAL_DIR/test.tar.gz" -C $tmp_dir
if [ ! -f "$tmp_dir/backup/app_data/offen.db" ]; then
fail "Could not find expected file in untared archive."
fi
pass "Found relevant files in untared local backup."
# This test does not stop containers during backup. This is happening on
# purpose in order to cover this setup as well.
expect_running_containers "1"


@@ -0,0 +1,50 @@
version: '3'
services:
minio:
image: minio/minio:RELEASE.2020-08-04T23-10-51Z
environment:
MINIO_ROOT_USER: test
MINIO_ROOT_PASSWORD: test
MINIO_ACCESS_KEY: test
MINIO_SECRET_KEY: GMusLtUmILge2by+z890kQ
entrypoint: /bin/ash -c 'mkdir -p /data/backup && minio server /data'
volumes:
- minio_backup_data:/data
backup:
image: offen/docker-volume-backup:${TEST_VERSION:-canary}
hostname: hostnametoken
depends_on:
- minio
restart: always
environment:
AWS_ACCESS_KEY_ID: test
AWS_SECRET_ACCESS_KEY: GMusLtUmILge2by+z890kQ
AWS_ENDPOINT: minio:9000
AWS_ENDPOINT_PROTO: http
AWS_S3_BUCKET_NAME: backup
BACKUP_FILENAME_EXPAND: 'true'
BACKUP_FILENAME: test-$$HOSTNAME.tar.gz
BACKUP_CRON_EXPRESSION: 0 0 5 31 2 ?
BACKUP_RETENTION_DAYS: 7
BACKUP_PRUNING_LEEWAY: 5s
BACKUP_PRUNING_PREFIX: test
BACKUP_LATEST_SYMLINK: test-$$HOSTNAME.latest.tar.gz
BACKUP_SKIP_BACKENDS_FROM_PRUNE: 's3'
volumes:
- app_data:/backup/app_data:ro
- /var/run/docker.sock:/var/run/docker.sock
- ./local:/archive
offen:
image: offen/offen:latest
labels:
- docker-volume-backup.stop-during-backup=true
volumes:
- app_data:/var/opt/offen
volumes:
app_data:
minio_backup_data:
name: minio_backup_data

test/pruning/run.sh (new executable file, 70 lines)

@@ -0,0 +1,70 @@
#!/bin/sh
# Tests prune-skipping with multiple backends (local, s3)
# Pruning itself is tested individually for each storage backend
set -e
cd "$(dirname "$0")"
. ../util.sh
current_test=$(basename $(pwd))
mkdir -p local
docker compose up -d --quiet-pull
sleep 5
docker compose exec backup backup
sleep 5
expect_running_containers "3"
touch -r ./local/test-hostnametoken.tar.gz -d "14 days ago" ./local/test-hostnametoken-old.tar.gz
docker run --rm \
-v minio_backup_data:/minio_data \
alpine \
ash -c 'touch -d@$(( $(date +%s) - 1209600 )) /minio_data/backup/test-hostnametoken-old.tar.gz'
# Skip s3 backend from prune
docker compose up -d
sleep 5
info "Create backup with no prune for s3 backend"
docker compose exec backup backup
info "Check if old backup has been pruned (local)"
test ! -f ./local/test-hostnametoken-old.tar.gz
info "Check if old backup has NOT been pruned (s3)"
docker run --rm \
-v minio_backup_data:/minio_data \
alpine \
ash -c 'test -f /minio_data/backup/test-hostnametoken-old.tar.gz'
pass "Old remote backup has been pruned locally, skipped S3 backend is untouched."
# Skip local and s3 backend from prune (all backends)
touch -r ./local/test-hostnametoken.tar.gz -d "14 days ago" ./local/test-hostnametoken-old.tar.gz
docker compose up -d
sleep 5
info "Create backup with no prune for both backends"
docker compose exec -e BACKUP_SKIP_BACKENDS_FROM_PRUNE="s3,local" backup backup
info "Check if old backup has NOT been pruned (local)"
if [ ! -f ./local/test-hostnametoken-old.tar.gz ]; then
fail "Backdated file has not been deleted"
fi
info "Check if old backup has NOT been pruned (s3)"
docker run --rm \
-v minio_backup_data:/minio_data \
alpine \
ash -c 'test -f /minio_data/backup/test-hostnametoken-old.tar.gz'
pass "Skipped all backends while pruning."


@@ -6,18 +6,16 @@ cd "$(dirname "$0")"
 . ../util.sh
 current_test=$(basename $(pwd))
-docker-compose up -d
+docker compose up -d --quiet-pull
 sleep 5
-# A symlink for a known file in the volume is created so the test can check
-# whether symlinks are preserved on backup.
-docker-compose exec backup backup
+docker compose exec backup backup
 sleep 5
 expect_running_containers "3"
-docker run --rm -it \
+docker run --rm \
   -v minio_backup_data:/minio_data \
   alpine \
   ash -c 'tar -xvf /minio_data/backup/test-hostnametoken.tar.gz -C /tmp && test -f /tmp/backup/app_data/offen.db'
@@ -26,17 +24,38 @@ pass "Found relevant files in untared remote backups."
 # The second part of this test checks if backups get deleted when the retention
 # is set to 0 days (which it should not as it would mean all backups get deleted)
-# TODO: find out if we can test actual deletion without having to wait for a day
-BACKUP_RETENTION_DAYS="0" docker-compose up -d
+BACKUP_RETENTION_DAYS="0" docker compose up -d
 sleep 5
-docker-compose exec backup backup
+docker compose exec backup backup
-docker run --rm -it \
+docker run --rm \
   -v minio_backup_data:/minio_data \
   alpine \
   ash -c '[ $(find /minio_data/backup/ -type f | wc -l) = "1" ]'
 pass "Remote backups have not been deleted."
-docker-compose down --volumes
+# The third part of this test checks if old backups get deleted when the retention
+# is set to 7 days (which it should)
+BACKUP_RETENTION_DAYS="7" docker compose up -d
+sleep 5
+info "Create first backup with no prune"
+docker compose exec backup backup
+docker run --rm \
+  -v minio_backup_data:/minio_data \
+  alpine \
+  ash -c 'touch -d@$(( $(date +%s) - 1209600 )) /minio_data/backup/test-hostnametoken-old.tar.gz'
+info "Create second backup and prune"
+docker compose exec backup backup
+docker run --rm \
+  -v minio_backup_data:/minio_data \
+  alpine \
+  ash -c 'test ! -f /minio_data/backup/test-hostnametoken-old.tar.gz && test -f /minio_data/backup/test-hostnametoken.tar.gz'
+pass "Old remote backup has been pruned, new one is still present."


@@ -22,7 +22,7 @@ sleep 20
 docker exec $(docker ps -q -f name=backup) backup
-docker run --rm -it \
+docker run --rm \
   -v backup_data:/data alpine \
   ash -c 'tar -xf /data/backup/test.tar.gz && test -f /backup/pg_data/PG_VERSION'
@@ -31,14 +31,12 @@ pass "Found relevant files in untared backup."
 sleep 5
 expect_running_containers "5"
-docker stack rm test_stack
-docker secret rm minio_root_password
-docker secret rm minio_root_user
-docker swarm leave --force
-sleep 10
-docker volume rm backup_data
-docker volume rm pg_data
+docker exec -e AWS_ACCESS_KEY_ID=test $(docker ps -q -f name=backup) backup \
+  && fail "Backup should have failed due to duplicate env variables."
+pass "Backup failed due to duplicate env variables."
+docker exec -e AWS_ACCESS_KEY_ID_FILE=/tmp/nonexistant $(docker ps -q -f name=backup) backup \
+  && fail "Backup should have failed due to non existing file env variable."
+pass "Backup failed due to non existing file env variable."


@@ -8,7 +8,7 @@ services:
       - PGID=1000
       - USER_NAME=test
     volumes:
-      - ./id_rsa.pub:/config/.ssh/authorized_keys
+      - ${KEY_DIR:-.}/id_rsa.pub:/config/.ssh/authorized_keys
       - ssh_backup_data:/tmp
   backup:
@@ -30,7 +30,7 @@ services:
       SSH_REMOTE_PATH: /tmp
       SSH_IDENTITY_PASSPHRASE: test1234
     volumes:
-      - ./id_rsa:/root/.ssh/id_rsa
+      - ${KEY_DIR:-.}/id_rsa:/root/.ssh/id_rsa
       - app_data:/backup/app_data:ro
       - /var/run/docker.sock:/var/run/docker.sock


@@ -6,18 +6,20 @@ cd "$(dirname "$0")"
 . ../util.sh
 current_test=$(basename $(pwd))
-ssh-keygen -t rsa -m pem -b 4096 -N "test1234" -f id_rsa -C "docker-volume-backup@local"
-docker-compose up -d
+export KEY_DIR=$(mktemp -d)
+ssh-keygen -t rsa -m pem -b 4096 -N "test1234" -f "$KEY_DIR/id_rsa" -C "docker-volume-backup@local"
+docker compose up -d --quiet-pull
 sleep 5
-docker-compose exec backup backup
+docker compose exec backup backup
 sleep 5
 expect_running_containers 3
-docker run --rm -it \
+docker run --rm \
   -v ssh_backup_data:/ssh_data \
   alpine \
   ash -c 'tar -xvf /ssh_data/test-hostnametoken.tar.gz -C /tmp && test -f /tmp/backup/app_data/offen.db'
@@ -26,18 +28,40 @@ pass "Found relevant files in decrypted and untared remote backups."
 # The second part of this test checks if backups get deleted when the retention
 # is set to 0 days (which it should not as it would mean all backups get deleted)
-# TODO: find out if we can test actual deletion without having to wait for a day
-BACKUP_RETENTION_DAYS="0" docker-compose up -d
+BACKUP_RETENTION_DAYS="0" docker compose up -d
 sleep 5
-docker-compose exec backup backup
+docker compose exec backup backup
-docker run --rm -it \
+docker run --rm \
   -v ssh_backup_data:/ssh_data \
   alpine \
   ash -c '[ $(find /ssh_data/ -type f | wc -l) = "1" ]'
 pass "Remote backups have not been deleted."
-docker-compose down --volumes
-rm -f id_rsa id_rsa.pub
+# The third part of this test checks if old backups get deleted when the retention
+# is set to 7 days (which it should)
+BACKUP_RETENTION_DAYS="7" docker compose up -d
+sleep 5
+info "Create first backup with no prune"
+docker compose exec backup backup
+# Set the modification date of the old backup to 14 days ago
+docker run --rm \
+  -v ssh_backup_data:/ssh_data \
+  --user 1000 \
+  alpine \
+  ash -c 'touch -d@$(( $(date +%s) - 1209600 )) /ssh_data/test-hostnametoken-old.tar.gz'
+info "Create second backup and prune"
+docker compose exec backup backup
+docker run --rm \
+  -v ssh_backup_data:/ssh_data \
+  alpine \
+  ash -c 'test ! -f /ssh_data/test-hostnametoken-old.tar.gz && test -f /ssh_data/test-hostnametoken.tar.gz'
+pass "Old remote backup has been pruned, new one is still present."


@@ -19,7 +19,7 @@ sleep 20
 docker exec $(docker ps -q -f name=backup) backup
-docker run --rm -it \
+docker run --rm \
   -v backup_data:/data alpine \
   ash -c 'tar -xf /data/backup/test.tar.gz && test -f /backup/pg_data/PG_VERSION'
@@ -27,11 +27,3 @@ pass "Found relevant files in untared backup."
 sleep 5
 expect_running_containers "5"
-docker stack rm test_stack
-docker swarm leave --force
-sleep 10
-docker volume rm backup_data
-docker volume rm pg_data

View File

@@ -2,15 +2,69 @@
 set -e
-TEST_VERSION=${1:-canary}
-for dir in $(ls -d -- */); do
-  test="${dir}run.sh"
+MATCH_PATTERN=$1
+IMAGE_TAG=${IMAGE_TAG:-canary}
+sandbox="docker_volume_backup_test_sandbox"
+tarball="$(mktemp -d)/image.tar.gz"
+
+trap finish EXIT INT TERM
+finish () {
+  rm -rf $(dirname $tarball)
+  if [ ! -z $(docker ps -aq --filter=name=$sandbox) ]; then
+    docker rm -f $(docker stop $sandbox)
+  fi
+  if [ ! -z $(docker volume ls -q --filter=name="^${sandbox}\$") ]; then
+    docker volume rm $sandbox
+  fi
+}
+
+docker build -t offen/docker-volume-backup:test-sandbox .
+if [ ! -z "$BUILD_IMAGE" ]; then
+  docker build -t offen/docker-volume-backup:$IMAGE_TAG $(dirname $(pwd))
+fi
+docker save offen/docker-volume-backup:$IMAGE_TAG -o $tarball
+
+find_args="-mindepth 1 -maxdepth 1 -type d"
+if [ ! -z "$MATCH_PATTERN" ]; then
+  find_args="$find_args -name $MATCH_PATTERN"
+fi
+
+for dir in $(find $find_args | sort); do
+  dir=$(echo $dir | cut -c 3-)
   echo "################################################"
-  echo "Now running $test"
+  echo "Now running ${dir}"
   echo "################################################"
   echo ""
-  TEST_VERSION=$TEST_VERSION /bin/sh $test
+  test="${dir}/run.sh"
+  docker_run_args="--name "$sandbox" --detach \
+    --privileged \
+    -v $(dirname $(pwd)):/code \
+    -v $tarball:/cache/image.tar.gz \
+    -v $sandbox:/var/lib/docker"
+  if [ -z "$NO_IMAGE_CACHE" ]; then
+    docker_run_args="$docker_run_args \
+      -v "${sandbox}_image":/var/lib/docker/image \
+      -v "${sandbox}_overlay2":/var/lib/docker/overlay2"
+  fi
+  docker run $docker_run_args offen/docker-volume-backup:test-sandbox
+  until docker exec $sandbox /bin/sh -c 'docker info' > /dev/null 2>&1; do
+    sleep 0.5
+  done
+  sleep 0.5
+  docker exec $sandbox /bin/sh -c "docker load -i /cache/image.tar.gz"
+  docker exec -e TEST_VERSION=$IMAGE_TAG $sandbox /bin/sh -c "/code/test/$test"
+  docker rm $(docker stop $sandbox)
+  docker volume rm $sandbox
   echo ""
   echo "$test passed"
   echo ""

View File

@@ -0,0 +1,29 @@
version: '2.4'

services:
  alpine:
    image: alpine:3.17.3
    tty: true
    volumes:
      - app_data:/tmp
    labels:
      - docker-volume-backup.archive-pre.user=testuser
      - docker-volume-backup.archive-pre=/bin/sh -c 'whoami > /tmp/whoami.txt'

  backup:
    image: offen/docker-volume-backup:${TEST_VERSION:-canary}
    deploy:
      restart_policy:
        condition: on-failure
    environment:
      BACKUP_FILENAME: test.tar.gz
      BACKUP_CRON_EXPRESSION: 0 0 5 31 2 ?
      EXEC_FORWARD_OUTPUT: "true"
    volumes:
      - ${LOCAL_DIR:-./local}:/archive
      - app_data:/backup/data:ro
      - /var/run/docker.sock:/var/run/docker.sock

volumes:
  app_data:
  archive:
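
The two labels on the alpine service are what this new test exercises: the backup container is expected to run the archive-pre command inside that container as testuser, via the mounted Docker socket. Conceptually (a sketch only, not the tool's actual exec call) this amounts to:

    docker exec --user testuser user-alpine-1 /bin/sh -c 'whoami > /tmp/whoami.txt'

The run script below then asserts that whoami.txt made it into the archive and contains testuser.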

test/user/run.sh (Executable file, 30 lines)
View File

@@ -0,0 +1,30 @@
#!/bin/sh
set -e
cd $(dirname $0)
. ../util.sh
current_test=$(basename $(pwd))
export LOCAL_DIR=$(mktemp -d)
export TMP_DIR=$(mktemp -d)
echo "LOCAL_DIR $LOCAL_DIR"
echo "TMP_DIR $TMP_DIR"
docker compose up -d --quiet-pull
user_name=testuser
docker exec user-alpine-1 adduser --disabled-password "$user_name"
docker compose exec backup backup
tar -xvf "$LOCAL_DIR/test.tar.gz" -C "$TMP_DIR"
if [ ! -f "$TMP_DIR/backup/data/whoami.txt" ]; then
fail "Could not find file written by pre command."
fi
pass "Found expected file."
if [ "$(cat $TMP_DIR/backup/data/whoami.txt)" != "$user_name" ]; then
fail "Could not find expected user name."
fi
pass "Found expected user."

View File

@@ -15,9 +15,34 @@ fail () {
   exit 1
 }
+
+skip () {
+  echo "[test:${current_test:-none}:skip] "$1""
+  exit 0
+}
+
 expect_running_containers () {
   if [ "$(docker ps -q | wc -l)" != "$1" ]; then
     fail "Expected $1 containers to be running, instead seen: "$(docker ps -a | wc -l)""
   fi
   pass "$1 containers running."
 }
+
+docker() {
+  case $1 in
+    compose)
+      shift
+      case $1 in
+        up)
+          shift
+          command docker compose up --timeout 3 "$@";;
+        down)
+          shift
+          command docker compose down --timeout 3 "$@";;
+        *)
+          command docker compose "$@";;
+      esac
+      ;;
+    *)
+      command docker "$@";;
+  esac
+}
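
Note on the docker() function above: it shadows the docker CLI for every test script that sources util.sh, transparently injecting --timeout 3 into compose up/down so stacks stop quickly between runs. For example, a test line such as

    docker compose up -d --quiet-pull

actually executes

    command docker compose up --timeout 3 -d --quiet-pull

while every other subcommand falls through unchanged to the real binary via command docker.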

View File

@@ -6,16 +6,16 @@ cd "$(dirname "$0")"
 . ../util.sh
 current_test=$(basename $(pwd))
-docker-compose up -d
+docker compose up -d --quiet-pull
 sleep 5
-docker-compose exec backup backup
+docker compose exec backup backup
 sleep 5
 expect_running_containers "3"
-docker run --rm -it \
+docker run --rm \
   -v webdav_backup_data:/webdav_data \
   alpine \
   ash -c 'tar -xvf /webdav_data/data/my/new/path/test-hostnametoken.tar.gz -C /tmp && test -f /tmp/backup/app_data/offen.db'
@@ -24,17 +24,40 @@ pass "Found relevant files in untared remote backup."
 # The second part of this test checks if backups get deleted when the retention
 # is set to 0 days (which it should not as it would mean all backups get deleted)
-# TODO: find out if we can test actual deletion without having to wait for a day
-BACKUP_RETENTION_DAYS="0" docker-compose up -d
+BACKUP_RETENTION_DAYS="0" docker compose up -d
 sleep 5
-docker-compose exec backup backup
-docker run --rm -it \
+docker compose exec backup backup
+docker run --rm \
   -v webdav_backup_data:/webdav_data \
   alpine \
   ash -c '[ $(find /webdav_data/data/my/new/path/ -type f | wc -l) = "1" ]'
 pass "Remote backups have not been deleted."
-docker-compose down --volumes
+# The third part of this test checks if old backups get deleted when the retention
+# is set to 7 days (which it should)
+BACKUP_RETENTION_DAYS="7" docker compose up -d
+sleep 5
+info "Create first backup with no prune"
+docker compose exec backup backup
+# Set the modification date of the old backup to 14 days ago
+docker run --rm \
+  -v webdav_backup_data:/webdav_data \
+  --user 82 \
+  alpine \
+  ash -c 'touch -d@$(( $(date +%s) - 1209600 )) /webdav_data/data/my/new/path/test-hostnametoken-old.tar.gz'
+info "Create second backup and prune"
+docker compose exec backup backup
+docker run --rm \
+  -v webdav_backup_data:/webdav_data \
+  alpine \
+  ash -c 'test ! -f /webdav_data/data/my/new/path/test-hostnametoken-old.tar.gz && test -f /webdav_data/data/my/new/path/test-hostnametoken.tar.gz'
+pass "Old remote backup has been pruned, new one is still present."

test/zstd/run.sh (Executable file, 41 lines)
View File

@@ -0,0 +1,41 @@
#!/bin/sh
set -e
cd $(dirname $0)
. ../util.sh
current_test=$(basename $(pwd))
docker network create test_network
docker volume create app_data
LOCAL_DIR=$(mktemp -d)
docker run -d -q \
--name offen \
--network test_network \
-v app_data:/var/opt/offen/ \
offen/offen:latest
sleep 10
docker run --rm -q \
--network test_network \
-v app_data:/backup/app_data \
-v $LOCAL_DIR:/archive \
-v /var/run/docker.sock:/var/run/docker.sock \
--env BACKUP_COMPRESSION=zst \
--env BACKUP_FILENAME='test.{{ .Extension }}' \
--entrypoint backup \
offen/docker-volume-backup:${TEST_VERSION:-canary}
tmp_dir=$(mktemp -d)
tar -xvf "$LOCAL_DIR/test.tar.zst" --zstd -C $tmp_dir
if [ ! -f "$tmp_dir/backup/app_data/offen.db" ]; then
fail "Could not find expected file in untared archive."
fi
pass "Found relevant files in untared local backup."
# This test does not stop containers during backup. This is happening on
# purpose in order to cover this setup as well.
expect_running_containers "1"
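
Since BACKUP_FILENAME is set to 'test.{{ .Extension }}' and BACKUP_COMPRESSION to zst, the archive lands as test.tar.zst, which is why the extraction step passes --zstd to tar. A quick standalone check of such an archive, assuming GNU tar and the zstd CLI are available on the host:

    zstd -t test.tar.zst           # verify the zstd frames are intact
    tar --zstd -tf test.tar.zst    # list contents without extracting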