Compare commits

...

64 Commits

Author SHA1 Message Date
İbrahim Akyel
58573e6733 Performance optimization for sshStorage (#313)
* Performance optimization for sshStorage

* go fmt
2023-12-05 21:38:33 +01:00
dependabot[bot]
84990ed6bd Bump github.com/klauspost/compress from 1.17.3 to 1.17.4 (#311) 2023-12-05 06:40:04 +00:00
dependabot[bot]
94f0975a30 Bump github.com/minio/minio-go/v7 from 7.0.64 to 7.0.65 (#312) 2023-12-05 06:23:44 +00:00
dependabot[bot]
e5c3b47ec9 Bump github.com/minio/minio-go/v7 from 7.0.63 to 7.0.64 (#309) 2023-11-28 06:02:52 +00:00
dependabot[bot]
619624f0d0 Bump golang.org/x/oauth2 from 0.14.0 to 0.15.0 (#308) 2023-11-28 06:02:32 +00:00
dependabot[bot]
52cd70c7a9 Bump github.com/klauspost/compress from 1.17.2 to 1.17.3 (#305) 2023-11-20 21:54:22 +00:00
dependabot[bot]
55bcd90c2d Bump golang.org/x/oauth2 from 0.13.0 to 0.14.0 (#303) 2023-11-14 05:53:23 +00:00
dependabot[bot]
382a613cbc Bump golang.org/x/sync from 0.4.0 to 0.5.0 (#302) 2023-11-07 06:45:49 +00:00
Frederik Ring
0325889ac4 Pruning method logs nonsensical configuration values (#301)
* Pruning method logs nonsensical configuration values

* Adjust test assertion about log output
2023-11-04 12:19:44 +01:00
Frederik Ring
d3e1d1531b Bump Docker client (#296) 2023-10-30 19:10:44 +01:00
Frederik Ring
1d549042fc Error message on demultiplexing command output receives bad error arg (#295) 2023-10-29 15:43:34 +01:00
Frederik Ring
2252c26edf Add note about potential leading and trailing whitespace in Docker secrets 2023-10-29 07:53:55 +01:00
dependabot[bot]
2d81ac046b Bump github.com/klauspost/compress from 1.17.1 to 1.17.2 (#289) 2023-10-24 05:02:31 +00:00
Nigel Metheringham
d0d8e5b076 docs: Update the custom commands documentation (#288)
Include requirement for docker socket and the need to use EXEC_LABEL
if you are running multiple copies
2023-10-21 08:05:41 +02:00
dependabot[bot]
e8ac4e1da6 Bump github.com/klauspost/compress from 1.17.0 to 1.17.1 (#284) 2023-10-17 06:07:47 +00:00
dependabot[bot]
3477c12b9d Bump github.com/Azure/azure-sdk-for-go/sdk/storage/azblob (#285) 2023-10-17 05:43:33 +00:00
dependabot[bot]
264c2e3089 Bump github.com/Azure/azure-sdk-for-go/sdk/azidentity (#286) 2023-10-17 05:15:26 +00:00
dependabot[bot]
e079eeafa0 Bump golang.org/x/net from 0.16.0 to 0.17.0 (#282) 2023-10-12 04:34:56 +00:00
dependabot[bot]
e1e2843f87 Bump golang.org/x/sync from 0.3.0 to 0.4.0 (#281) 2023-10-10 05:28:48 +00:00
dependabot[bot]
2e1f65b0df Bump golang.org/x/oauth2 from 0.12.0 to 0.13.0 (#280) 2023-10-10 05:11:52 +00:00
dependabot[bot]
e35164628c Bump github.com/otiai10/copy from 1.11.0 to 1.14.0 (#279) 2023-10-03 07:12:56 +00:00
Daniel Al Mouiee
19fb822a4c Updated Docs subheading mispelling (#278) 2023-09-29 07:24:07 +02:00
dependabot[bot]
40bbf2c919 Bump github.com/minio/minio-go/v7 from 7.0.62 to 7.0.63 (#274) 2023-09-19 13:01:54 +00:00
Frederik Ring
e7631d8d53 Retry creation of sandbox container (#276)
* Extend sleep when Docker daemon seems ready

* Pruning test shall not write to local fs

* Make sure container names do not conflict
2023-09-19 14:45:09 +02:00
dependabot[bot]
c87dc09ad4 Bump golang.org/x/oauth2 (#273) 2023-09-19 06:39:24 +00:00
dependabot[bot]
9be3a1861b Bump github.com/klauspost/compress from 1.16.7 to 1.17.0 (#275) 2023-09-19 06:38:39 +00:00
Frederik Ring
e4fdcba898 Ruby action expects working dir as input 2023-09-16 11:58:18 +02:00
Frederik Ring
0bb94a2f56 Docs site (#269)
* Set up documentation site using jekyll

* Add workflow for deploying docs

* Ini formatting is hard to read

* Add instructions on how to run docs locally

* Work through docs

* Remove content from README

* Miscellaneous fixes

* Fix artifact upload
2023-09-16 11:54:39 +02:00
MaxJa4
336c5bed71 Replace Gzip with PGzip (#266)
* Replace Gzip with optimized PGzip. Add concurrency option.

* Add shortened timeout for 'dc down' too.

* Add NaturalNumberZero to allow zero.

* Add test for concurrency=0

* Rename to GZIP_PARALLELISM

* Fix block size. Fix compression level. Fix CI.

* Refactor compression writer fetching. Renamed WholeNumber
2023-09-03 16:49:52 +02:00
Frederik Ring
1e39ac41f4 Run tests Docker in Docker (#261)
* Try running tests in Docker

* Spawn new container for each test

* Store test artifacts outside of mount

* When requested, build up to date image in test script

* sudo is unneccessary in containerized test env

* Skip azure test

* Backdate fixture file in JSON database

* Pin versions for azure tools

* Mount temp volume for /var/lib/docker to prevent dangling ones created by VOLUME instruction

* Fail backdating tests with message

* Add some documentation on test setup

* Cache images

* Run compose stacks with shortened default timeout
2023-09-02 15:17:46 +02:00
MaxJa4
43c4961116 Support _FILE based configuration values (#264)
* Replace envconfig with env

* Adjust config options and processing

* Added _FILE variant for all password vars.

* Try pathenvconfig

* Revert everything so far

* Use our fork of envconfig with custom lookup

* Use our fork of envconfig with custom lookup

* Test compose timeout option

* Remove secret resolving and specific _FILE config

* Fix timing issue in swarm tests

* Revert "Test compose timeout option"

This reverts commit ab50b21748, reversing
changes made to 0282514b2b.

Revert "Test compose timeout option"

This reverts commit 0282514b2b.

* Use offen/envconfig v1.5.0

* Add info about _FILE in README

* Value > File. Panic on file error. Panic on duplicate presence.

* Test panic on duplicate vars and panic on file error.
2023-08-30 20:14:24 +02:00
Frederik Ring
24a6ec9480 Dropbox was missing from notification docs 2023-08-28 20:42:45 +02:00
MaxJa4
ad4e2af83f Exclude specific backends from pruning (#262)
* Skip backends while pruning

* Add pruning test step and silence download log for better readability

* Add test cases for pruning in all backends

Also add -q or --quiet-pull to all tests.

* Add test case for skipping backends while pruning

* Adjusted test logging, generate new test spec file

* Gitignore for temp test file
2023-08-27 19:19:11 +02:00
MaxJa4
5fcc96edf9 Cleanup: Lint warnings and deprecated packages (#263)
* Fix lint warnings and std lib deprecations

* Replace deprecated std lib with maintained drop-in replacement fork

Backwartds compatible with original package and suggested by std lib due to security and stability issues.

* OAuth2 is now a direct dependency due to Dropbox

* Undo change

* Revert "Replace deprecated std lib with maintained drop-in replacement fork"

This reverts commit 2887bd409f.

* Update channel handling

* Add linter for PRs

* Rename CI, fetch all issues, add govet
2023-08-27 18:14:55 +02:00
Frederik Ring
3d7677f02a Print context in log field instead of prepending to message (#260)
* Print context in log field instead of prepending to message

* Log messages on pruning do not need a description anymore

* Remove redundant information from logs and errors
2023-08-25 12:44:43 +02:00
Frederik Ring
88a4794083 Zstd test does not need to spin up MinIO server (#258) 2023-08-24 22:14:04 +02:00
Frederik Ring
7011261dc5 Pin version of mock openapi server used in Dropbox test 2023-08-24 22:01:07 +02:00
Frederik Ring
9ba8143be2 Add license headers to vendored Dropbox swagger spec 2023-08-24 19:52:33 +02:00
dependabot[bot]
b90fc9ea4d Bump github.com/minio/minio-go/v7 from 7.0.61 to 7.0.62 (#255)
Bumps [github.com/minio/minio-go/v7](https://github.com/minio/minio-go) from 7.0.61 to 7.0.62.
- [Release notes](https://github.com/minio/minio-go/releases)
- [Commits](https://github.com/minio/minio-go/compare/v7.0.61...v7.0.62)

---
updated-dependencies:
- dependency-name: github.com/minio/minio-go/v7
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-08-24 19:37:50 +02:00
MaxJa4
e08a3303bf Add new storage backend: Dropbox (#103) (#251)
* Add new storage backend: Dropbox (#103)

* Remove duplicate check

* Add concurrency level for parallel upload to dropbox.

* Fixed some instabilites. Changed default concurrency to 6.

* Added some env config vars to readme. WIP

* Wrap errors for storage backend creation.

* Fixed token issue, added OAuth2 including recipe and docs.

* Readme typo fix

* Test for dropbox integration

* Update info and TOC

* Missed a file

* Docker-compose fix

* Fix endpoint connection

* Fix container names

* Fix log fetching

* Fix log fetching (again)

* Print command output to logs

* Addressing comments part 1

* Address comments part 2

* OpenAPI Mock spec path adjusted
* Dropbox FileMetadata reflection refactored
* NaturalNumber type added

* Add OAuth2 mock server for CI testing

* Fix env name of oauth2 endpoint

* Remove hostname

* Add forgotten change to commit...

* Fix oauth2 endpoint

"Worked on my machine"

* Try again

* Try suggested hostname again

* Fix docker internal DNS resolving issues (as suggested by oauth2 mock docs)

* Add docker network, remove hostname

* Network not external

* Last hostname try

* Add more delay, add oauth2 endpoint log

* Temp CI log output of command even when failing

* Try different config and method

* Add custom server-hostname. Rename test folder to accellerate debugging

* Try that fix again

* Adding quotes

* Port fix attempt

* Try localhost

* Try extra hosts

* Change network mode

* Undo some changes

* Use static IP

* Remove specific IP binding

* Change to default net driver

* Fix static IP

* Squash for revert

* Revert "Squash for revert"

This reverts commit e9b617be9a.

* Actual fix for CI testing from #257
2023-08-24 19:33:47 +02:00
MaxJa4
47326c7c59 Fix storage backends not outputting any info logs (#250) 2023-08-20 20:35:25 +02:00
Michal Middleton
67e7288855 Add support for zstd compression (#249)
Co-authored-by: Michal Middleton <jafa81@gmail.com>
2023-08-19 19:20:13 +02:00
dependabot[bot]
1765b06835 Bump github.com/pkg/sftp from 1.13.5 to 1.13.6 (#248) 2023-08-15 04:38:55 +00:00
Frederik Ring
67d978f515 Drop logrus dependency, log using slog package from stdlib (#247) 2023-08-10 19:41:03 +02:00
Frederik Ring
a93ff6fe09 Build in Go 1.21 (#246) 2023-08-10 16:03:59 +02:00
Frederik Ring
1c6f64e254 Current Docker client breaks in newer Go versions (#241)
* Current Docker client breaks in newer Go versions

* Cater for breaking API changes in Docker client

* Update Docker client

* Unpin Go version used for build

* Tidy sum file
2023-07-25 19:46:57 +02:00
dependabot[bot]
085d2c5dfd Bump github.com/minio/minio-go/v7 from 7.0.59 to 7.0.61 (#240) 2023-07-24 19:16:02 +00:00
dependabot[bot]
b1382dee00 Bump github.com/Azure/azure-sdk-for-go/sdk/storage/azblob (#239) 2023-07-24 19:15:56 +00:00
Frederik Ring
c3732107b1 Current Docker client breaks in Go 1.20.6 (#242) 2023-07-24 21:01:28 +02:00
dependabot[bot]
d288c87c54 Bump github.com/minio/minio-go/v7 from 7.0.58 to 7.0.59 (#238) 2023-07-04 07:53:19 +00:00
dependabot[bot]
47491439a1 Bump github.com/studio-b12/gowebdav (#235) 2023-06-27 07:46:34 +00:00
dependabot[bot]
94f71ac765 Bump github.com/minio/minio-go/v7 from 7.0.57 to 7.0.58 (#236) 2023-06-27 05:25:42 +00:00
dependabot[bot]
2addf1dd6c Bump golang.org/x/sync from 0.2.0 to 0.3.0 (#234) 2023-06-20 12:29:11 +00:00
dependabot[bot]
c07990eaf6 Bump github.com/minio/minio-go/v7 from 7.0.56 to 7.0.57 (#233) 2023-06-20 12:28:06 +00:00
jsloane
a27743bd32 Update README.md (#230) 2023-06-17 08:30:48 +02:00
dependabot[bot]
9d5b897ab4 Bump golang.org/x/sync from 0.1.0 to 0.2.0 (#229) 2023-06-13 07:17:58 +00:00
dependabot[bot]
30bf31cd90 Bump github.com/Azure/azure-sdk-for-go/sdk/storage/azblob (#228) 2023-06-13 07:04:03 +00:00
dependabot[bot]
32e9a05b40 Bump github.com/minio/minio-go/v7 from 7.0.44 to 7.0.56 (#227) 2023-06-13 07:03:38 +00:00
dependabot[bot]
b302884447 Bump golang.org/x/crypto from 0.3.0 to 0.9.0 (#223) 2023-06-10 13:15:41 +00:00
dependabot[bot]
b3e1ce27be Bump github.com/Azure/azure-sdk-for-go/sdk/azidentity (#225) 2023-06-10 12:57:44 +00:00
dependabot[bot]
66518ed0ff Bump github.com/sirupsen/logrus from 1.9.0 to 1.9.3 (#226) 2023-06-10 12:57:39 +00:00
dependabot[bot]
14d966d41a Bump github.com/otiai10/copy from 1.10.0 to 1.11.0 (#224) 2023-06-10 12:57:30 +00:00
Frederik Ring
336dece328 Set up automated updates for Docker base images and Go packages 2023-06-10 14:42:28 +02:00
Frederik Ring
dc8172b673 Use alpine-1.18 as the base image (#219) 2023-06-03 13:08:35 +02:00
101 changed files with 16256 additions and 1829 deletions

.github/dependabot.yml (new file)

@@ -0,0 +1,10 @@
version: 2
updates:
- package-ecosystem: docker
directory: /
schedule:
interval: weekly
- package-ecosystem: gomod
directory: /
schedule:
interval: weekly

.github/workflows/deploy-docs.yml (new file)

@@ -0,0 +1,52 @@
name: Deploy Documentation site to GitHub Pages
on:
push:
branches: ['main']
workflow_dispatch:
permissions:
contents: read
pages: write
id-token: write
concurrency:
group: 'pages'
cancel-in-progress: true
jobs:
build:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v3
- name: Setup Ruby
uses: ruby/setup-ruby@v1
with:
ruby-version: '3.2'
bundler-cache: true
cache-version: 0
working-directory: docs
- name: Setup Pages
id: pages
uses: actions/configure-pages@v2
- name: Build with Jekyll
working-directory: docs
run: bundle exec jekyll build --baseurl "${{ steps.pages.outputs.base_path }}"
env:
JEKYLL_ENV: production
- name: Upload artifact
uses: actions/upload-pages-artifact@v1
with:
path: 'docs/_site/'
deploy:
environment:
name: github-pages
url: ${{ steps.deployment.outputs.page_url }}
runs-on: ubuntu-latest
needs: build
steps:
- name: Deploy to GitHub Pages
id: deployment
uses: actions/deploy-pages@v1

.github/workflows/golangci-lint.yml (new file)

@@ -0,0 +1,54 @@
name: Run Linters
on:
push:
branches:
- main
pull_request:
permissions:
contents: read
# Optional: allow read access to pull request. Use with `only-new-issues` option.
pull-requests: read
jobs:
golangci:
name: lint
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: actions/setup-go@v4
with:
go-version: '1.21'
cache: false
- name: golangci-lint
uses: golangci/golangci-lint-action@v3
with:
# Require: The version of golangci-lint to use.
# When `install-mode` is `binary` (default) the value can be v1.2 or v1.2.3 or `latest` to use the latest version.
# When `install-mode` is `goinstall` the value can be v1.2.3, `latest`, or the hash of a commit.
version: v1.54
# Optional: working directory, useful for monorepos
# working-directory: somedir
# Optional: golangci-lint command line arguments.
#
# Note: By default, the `.golangci.yml` file should be at the root of the repository.
# The location of the configuration file can be changed by using `--config=`
# args: --timeout=30m --config=/my/path/.golangci.yml --issues-exit-code=0
# Optional: show only new issues if it's a pull request. The default value is `false`.
# only-new-issues: true
# Optional: if set to true, then all caching functionality will be completely disabled,
# takes precedence over all other caching options.
# skip-cache: true
# Optional: if set to true, then the action won't cache or restore ~/go/pkg.
# skip-pkg-cache: true
# Optional: if set to true, then the action won't cache or restore ~/.cache/go-build.
# skip-build-cache: true
# Optional: The mode to install golangci-lint. It can be 'binary' or 'goinstall'.
# install-mode: "goinstall"


@@ -15,16 +15,7 @@ jobs:
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
- name: Build Docker Image
env:
DOCKER_BUILDKIT: '1'
run: docker build . -t offen/docker-volume-backup:test
- name: Run Tests
working-directory: ./test
run: |
# Stop the buildx container so the tests can make assertions
# about the number of running containers
docker rm -f $(docker ps -aq)
export GPG_TTY=$(tty)
./test.sh test
BUILD_IMAGE=1 ./test.sh

.golangci.yml (new file)

@@ -0,0 +1,8 @@
linters:
# Enable specific linter
# https://golangci-lint.run/usage/linters/#enabled-by-default
enable:
- staticcheck
- govet
output:
format: github-actions


@@ -1,7 +1,7 @@
# Copyright 2021 - Offen Authors <hioffen@posteo.de>
# SPDX-License-Identifier: MPL-2.0
FROM golang:1.20-alpine as builder
FROM golang:1.21-alpine as builder
WORKDIR /app
COPY . .
@@ -9,7 +9,7 @@ RUN go mod download
WORKDIR /app/cmd/backup
RUN go build -o backup .
FROM alpine:3.17
FROM alpine:3.18
WORKDIR /root

README.md — file diff suppressed because it is too large (1305 changed lines)


@@ -8,16 +8,20 @@ package main
import (
"archive/tar"
"compress/gzip"
"fmt"
"io"
"os"
"path"
"path/filepath"
"runtime"
"strings"
"github.com/klauspost/pgzip"
"github.com/klauspost/compress/zstd"
)
func createArchive(files []string, inputFilePath, outputFilePath string) error {
func createArchive(files []string, inputFilePath, outputFilePath string, compression string, compressionConcurrency int) error {
inputFilePath = stripTrailingSlashes(inputFilePath)
inputFilePath, outputFilePath, err := makeAbsolute(inputFilePath, outputFilePath)
if err != nil {
@@ -27,7 +31,7 @@ func createArchive(files []string, inputFilePath, outputFilePath string) error {
return fmt.Errorf("createArchive: error creating output file path: %w", err)
}
if err := compress(files, outputFilePath, filepath.Dir(inputFilePath)); err != nil {
if err := compress(files, outputFilePath, filepath.Dir(inputFilePath), compression, compressionConcurrency); err != nil {
return fmt.Errorf("createArchive: error creating archive: %w", err)
}
@@ -51,18 +55,21 @@ func makeAbsolute(inputFilePath, outputFilePath string) (string, string, error)
return inputFilePath, outputFilePath, err
}
func compress(paths []string, outFilePath, subPath string) error {
func compress(paths []string, outFilePath, subPath string, algo string, concurrency int) error {
file, err := os.Create(outFilePath)
if err != nil {
return fmt.Errorf("compress: error creating out file: %w", err)
}
prefix := path.Dir(outFilePath)
gzipWriter := gzip.NewWriter(file)
tarWriter := tar.NewWriter(gzipWriter)
compressWriter, err := getCompressionWriter(file, algo, concurrency)
if err != nil {
return fmt.Errorf("compress: error getting compression writer: %w", err)
}
tarWriter := tar.NewWriter(compressWriter)
for _, p := range paths {
if err := writeTarGz(p, tarWriter, prefix); err != nil {
if err := writeTarball(p, tarWriter, prefix); err != nil {
return fmt.Errorf("compress: error writing %s to archive: %w", p, err)
}
}
@@ -72,9 +79,9 @@ func compress(paths []string, outFilePath, subPath string) error {
return fmt.Errorf("compress: error closing tar writer: %w", err)
}
err = gzipWriter.Close()
err = compressWriter.Close()
if err != nil {
return fmt.Errorf("compress: error closing gzip writer: %w", err)
return fmt.Errorf("compress: error closing compression writer: %w", err)
}
err = file.Close()
@@ -85,10 +92,38 @@ func compress(paths []string, outFilePath, subPath string) error {
return nil
}
func writeTarGz(path string, tarWriter *tar.Writer, prefix string) error {
func getCompressionWriter(file *os.File, algo string, concurrency int) (io.WriteCloser, error) {
switch algo {
case "gz":
w, err := pgzip.NewWriterLevel(file, 5)
if err != nil {
return nil, fmt.Errorf("getCompressionWriter: gzip error: %w", err)
}
if concurrency == 0 {
concurrency = runtime.GOMAXPROCS(0)
}
if err := w.SetConcurrency(1<<20, concurrency); err != nil {
return nil, fmt.Errorf("getCompressionWriter: error setting concurrency: %w", err)
}
return w, nil
case "zst":
compressWriter, err := zstd.NewWriter(file)
if err != nil {
return nil, fmt.Errorf("getCompressionWriter: zstd error: %w", err)
}
return compressWriter, nil
default:
return nil, fmt.Errorf("getCompressionWriter: unsupported compression algorithm: %s", algo)
}
}
func writeTarball(path string, tarWriter *tar.Writer, prefix string) error {
fileInfo, err := os.Lstat(path)
if err != nil {
return fmt.Errorf("writeTarGz: error getting file infor for %s: %w", path, err)
return fmt.Errorf("writeTarball: error getting file infor for %s: %w", path, err)
}
if fileInfo.Mode()&os.ModeSocket == os.ModeSocket {
@@ -99,19 +134,19 @@ func writeTarGz(path string, tarWriter *tar.Writer, prefix string) error {
if fileInfo.Mode()&os.ModeSymlink == os.ModeSymlink {
var err error
if link, err = os.Readlink(path); err != nil {
return fmt.Errorf("writeTarGz: error resolving symlink %s: %w", path, err)
return fmt.Errorf("writeTarball: error resolving symlink %s: %w", path, err)
}
}
header, err := tar.FileInfoHeader(fileInfo, link)
if err != nil {
return fmt.Errorf("writeTarGz: error getting file info header: %w", err)
return fmt.Errorf("writeTarball: error getting file info header: %w", err)
}
header.Name = strings.TrimPrefix(path, prefix)
err = tarWriter.WriteHeader(header)
if err != nil {
return fmt.Errorf("writeTarGz: error writing file info header: %w", err)
return fmt.Errorf("writeTarball: error writing file info header: %w", err)
}
if !fileInfo.Mode().IsRegular() {
@@ -120,13 +155,13 @@ func writeTarGz(path string, tarWriter *tar.Writer, prefix string) error {
file, err := os.Open(path)
if err != nil {
return fmt.Errorf("writeTarGz: error opening %s: %w", path, err)
return fmt.Errorf("writeTarball: error opening %s: %w", path, err)
}
defer file.Close()
_, err = io.Copy(tarWriter, file)
if err != nil {
return fmt.Errorf("writeTarGz: error copying %s to tar writer: %w", path, err)
return fmt.Errorf("writeTarball: error copying %s to tar writer: %w", path, err)
}
return nil


@@ -7,79 +7,92 @@ import (
"crypto/x509"
"encoding/pem"
"fmt"
"io/ioutil"
"os"
"regexp"
"strconv"
"time"
)
// Config holds all configuration values that are expected to be set
// by users.
type Config struct {
AwsS3BucketName string `split_words:"true"`
AwsS3Path string `split_words:"true"`
AwsEndpoint string `split_words:"true" default:"s3.amazonaws.com"`
AwsEndpointProto string `split_words:"true" default:"https"`
AwsEndpointInsecure bool `split_words:"true"`
AwsEndpointCACert CertDecoder `envconfig:"AWS_ENDPOINT_CA_CERT"`
AwsStorageClass string `split_words:"true"`
AwsAccessKeyID string `envconfig:"AWS_ACCESS_KEY_ID"`
AwsAccessKeyIDFile string `envconfig:"AWS_ACCESS_KEY_ID_FILE"`
AwsSecretAccessKey string `split_words:"true"`
AwsSecretAccessKeyFile string `split_words:"true"`
AwsIamRoleEndpoint string `split_words:"true"`
AwsPartSize int64 `split_words:"true"`
BackupSources string `split_words:"true" default:"/backup"`
BackupFilename string `split_words:"true" default:"backup-%Y-%m-%dT%H-%M-%S.tar.gz"`
BackupFilenameExpand bool `split_words:"true"`
BackupLatestSymlink string `split_words:"true"`
BackupArchive string `split_words:"true" default:"/archive"`
BackupRetentionDays int32 `split_words:"true" default:"-1"`
BackupPruningLeeway time.Duration `split_words:"true" default:"1m"`
BackupPruningPrefix string `split_words:"true"`
BackupStopContainerLabel string `split_words:"true" default:"true"`
BackupFromSnapshot bool `split_words:"true"`
BackupExcludeRegexp RegexpDecoder `split_words:"true"`
GpgPassphrase string `split_words:"true"`
NotificationURLs []string `envconfig:"NOTIFICATION_URLS"`
NotificationLevel string `split_words:"true" default:"error"`
EmailNotificationRecipient string `split_words:"true"`
EmailNotificationSender string `split_words:"true" default:"noreply@nohost"`
EmailSMTPHost string `envconfig:"EMAIL_SMTP_HOST"`
EmailSMTPPort int `envconfig:"EMAIL_SMTP_PORT" default:"587"`
EmailSMTPUsername string `envconfig:"EMAIL_SMTP_USERNAME"`
EmailSMTPPassword string `envconfig:"EMAIL_SMTP_PASSWORD"`
WebdavUrl string `split_words:"true"`
WebdavUrlInsecure bool `split_words:"true"`
WebdavPath string `split_words:"true" default:"/"`
WebdavUsername string `split_words:"true"`
WebdavPassword string `split_words:"true"`
SSHHostName string `split_words:"true"`
SSHPort string `split_words:"true" default:"22"`
SSHUser string `split_words:"true"`
SSHPassword string `split_words:"true"`
SSHIdentityFile string `split_words:"true" default:"/root/.ssh/id_rsa"`
SSHIdentityPassphrase string `split_words:"true"`
SSHRemotePath string `split_words:"true"`
ExecLabel string `split_words:"true"`
ExecForwardOutput bool `split_words:"true"`
LockTimeout time.Duration `split_words:"true" default:"60m"`
AzureStorageAccountName string `split_words:"true"`
AzureStoragePrimaryAccountKey string `split_words:"true"`
AzureStorageContainerName string `split_words:"true"`
AzureStoragePath string `split_words:"true"`
AzureStorageEndpoint string `split_words:"true" default:"https://{{ .AccountName }}.blob.core.windows.net/"`
AwsS3BucketName string `split_words:"true"`
AwsS3Path string `split_words:"true"`
AwsEndpoint string `split_words:"true" default:"s3.amazonaws.com"`
AwsEndpointProto string `split_words:"true" default:"https"`
AwsEndpointInsecure bool `split_words:"true"`
AwsEndpointCACert CertDecoder `envconfig:"AWS_ENDPOINT_CA_CERT"`
AwsStorageClass string `split_words:"true"`
AwsAccessKeyID string `envconfig:"AWS_ACCESS_KEY_ID"`
AwsSecretAccessKey string `split_words:"true"`
AwsIamRoleEndpoint string `split_words:"true"`
AwsPartSize int64 `split_words:"true"`
BackupCompression CompressionType `split_words:"true" default:"gz"`
GzipParallelism WholeNumber `split_words:"true" default:"1"`
BackupSources string `split_words:"true" default:"/backup"`
BackupFilename string `split_words:"true" default:"backup-%Y-%m-%dT%H-%M-%S.{{ .Extension }}"`
BackupFilenameExpand bool `split_words:"true"`
BackupLatestSymlink string `split_words:"true"`
BackupArchive string `split_words:"true" default:"/archive"`
BackupRetentionDays int32 `split_words:"true" default:"-1"`
BackupPruningLeeway time.Duration `split_words:"true" default:"1m"`
BackupPruningPrefix string `split_words:"true"`
BackupStopContainerLabel string `split_words:"true" default:"true"`
BackupFromSnapshot bool `split_words:"true"`
BackupExcludeRegexp RegexpDecoder `split_words:"true"`
BackupSkipBackendsFromPrune []string `split_words:"true"`
GpgPassphrase string `split_words:"true"`
NotificationURLs []string `envconfig:"NOTIFICATION_URLS"`
NotificationLevel string `split_words:"true" default:"error"`
EmailNotificationRecipient string `split_words:"true"`
EmailNotificationSender string `split_words:"true" default:"noreply@nohost"`
EmailSMTPHost string `envconfig:"EMAIL_SMTP_HOST"`
EmailSMTPPort int `envconfig:"EMAIL_SMTP_PORT" default:"587"`
EmailSMTPUsername string `envconfig:"EMAIL_SMTP_USERNAME"`
EmailSMTPPassword string `envconfig:"EMAIL_SMTP_PASSWORD"`
WebdavUrl string `split_words:"true"`
WebdavUrlInsecure bool `split_words:"true"`
WebdavPath string `split_words:"true" default:"/"`
WebdavUsername string `split_words:"true"`
WebdavPassword string `split_words:"true"`
SSHHostName string `split_words:"true"`
SSHPort string `split_words:"true" default:"22"`
SSHUser string `split_words:"true"`
SSHPassword string `split_words:"true"`
SSHIdentityFile string `split_words:"true" default:"/root/.ssh/id_rsa"`
SSHIdentityPassphrase string `split_words:"true"`
SSHRemotePath string `split_words:"true"`
ExecLabel string `split_words:"true"`
ExecForwardOutput bool `split_words:"true"`
LockTimeout time.Duration `split_words:"true" default:"60m"`
AzureStorageAccountName string `split_words:"true"`
AzureStoragePrimaryAccountKey string `split_words:"true"`
AzureStorageContainerName string `split_words:"true"`
AzureStoragePath string `split_words:"true"`
AzureStorageEndpoint string `split_words:"true" default:"https://{{ .AccountName }}.blob.core.windows.net/"`
DropboxEndpoint string `split_words:"true" default:"https://api.dropbox.com/"`
DropboxOAuth2Endpoint string `envconfig:"DROPBOX_OAUTH2_ENDPOINT" default:"https://api.dropbox.com/"`
DropboxRefreshToken string `split_words:"true"`
DropboxAppKey string `split_words:"true"`
DropboxAppSecret string `split_words:"true"`
DropboxRemotePath string `split_words:"true"`
DropboxConcurrencyLevel NaturalNumber `split_words:"true" default:"6"`
}
func (c *Config) resolveSecret(envVar string, secretPath string) (string, error) {
if secretPath == "" {
return envVar, nil
type CompressionType string
func (c *CompressionType) Decode(v string) error {
switch v {
case "gz", "zst":
*c = CompressionType(v)
return nil
default:
return fmt.Errorf("config: error decoding compression type %s", v)
}
data, err := os.ReadFile(secretPath)
if err != nil {
return "", fmt.Errorf("resolveSecret: error reading secret path: %w", err)
}
return string(data), nil
}
func (c *CompressionType) String() string {
return string(*c)
}
type CertDecoder struct {
@@ -90,7 +103,7 @@ func (c *CertDecoder) Decode(v string) error {
if v == "" {
return nil
}
content, err := ioutil.ReadFile(v)
content, err := os.ReadFile(v)
if err != nil {
content = []byte(v)
}
@@ -118,3 +131,41 @@ func (r *RegexpDecoder) Decode(v string) error {
*r = RegexpDecoder{Re: re}
return nil
}
// NaturalNumber is a type that can be used to decode a positive, non-zero natural number
type NaturalNumber int
func (n *NaturalNumber) Decode(v string) error {
asInt, err := strconv.Atoi(v)
if err != nil {
return fmt.Errorf("config: error converting %s to int", v)
}
if asInt <= 0 {
return fmt.Errorf("config: expected a natural number, got %d", asInt)
}
*n = NaturalNumber(asInt)
return nil
}
func (n *NaturalNumber) Int() int {
return int(*n)
}
// WholeNumber is a type that can be used to decode a positive whole number, including zero
type WholeNumber int
func (n *WholeNumber) Decode(v string) error {
asInt, err := strconv.Atoi(v)
if err != nil {
return fmt.Errorf("config: error converting %s to int", v)
}
if asInt < 0 {
return fmt.Errorf("config: expected a whole, positive number, including zero. Got %d", asInt)
}
*n = WholeNumber(asInt)
return nil
}
func (n *WholeNumber) Int() int {
return int(*n)
}
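The contrast between the two decoders is easiest to see by calling them directly. An illustrative fragment, assuming the types above from within the same package:

```go
// WholeNumber accepts zero (used for GZIP_PARALLELISM), while
// NaturalNumber rejects it (used for DROPBOX_CONCURRENCY_LEVEL).
var parallelism WholeNumber
fmt.Println(parallelism.Decode("0")) // <nil>: zero is allowed

var level NaturalNumber
fmt.Println(level.Decode("0")) // config: expected a natural number, got 0
fmt.Println(level.Decode("6")) // <nil>
```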


@@ -10,7 +10,7 @@ import (
"bytes"
"context"
"fmt"
"io/ioutil"
"io"
"os"
"strings"
@@ -51,19 +51,15 @@ func (s *script) exec(containerRef string, command string, user string) ([]byte,
outputDone <- err
}()
select {
case err := <-outputDone:
if err != nil {
return nil, nil, fmt.Errorf("exec: error demultiplexing output: %w", err)
}
break
if err := <-outputDone; err != nil {
return nil, nil, fmt.Errorf("exec: error demultiplexing output: %w", err)
}
stdout, err := ioutil.ReadAll(&outBuf)
stdout, err := io.ReadAll(&outBuf)
if err != nil {
return nil, nil, fmt.Errorf("exec: error reading stdout: %w", err)
}
stderr, err := ioutil.ReadAll(&errBuf)
stderr, err := io.ReadAll(&errBuf)
if err != nil {
return nil, nil, fmt.Errorf("exec: error reading stderr: %w", err)
}
@@ -91,7 +87,6 @@ func (s *script) runLabeledCommands(label string) error {
})
}
containersWithCommand, err := s.cli.ContainerList(context.Background(), types.ContainerListOptions{
Quiet: true,
Filters: filters.NewArgs(f...),
})
if err != nil {
@@ -105,7 +100,6 @@ func (s *script) runLabeledCommands(label string) error {
Value: "docker-volume-backup.exec-pre",
}
deprecatedContainers, err := s.cli.ContainerList(context.Background(), types.ContainerListOptions{
Quiet: true,
Filters: filters.NewArgs(f...),
})
if err != nil {
@@ -123,7 +117,6 @@ func (s *script) runLabeledCommands(label string) error {
Value: "docker-volume-backup.exec-post",
}
deprecatedContainers, err := s.cli.ContainerList(context.Background(), types.ContainerListOptions{
Quiet: true,
Filters: filters.NewArgs(f...),
})
if err != nil {
@@ -155,15 +148,15 @@ func (s *script) runLabeledCommands(label string) error {
g.Go(func() error {
cmd, ok := c.Labels[label]
if !ok && label == "docker-volume-backup.archive-pre" {
cmd, _ = c.Labels["docker-volume-backup.exec-pre"]
cmd = c.Labels["docker-volume-backup.exec-pre"]
} else if !ok && label == "docker-volume-backup.archive-post" {
cmd, _ = c.Labels["docker-volume-backup.exec-post"]
cmd = c.Labels["docker-volume-backup.exec-post"]
}
userLabelName := fmt.Sprintf("%s.user", label)
user := c.Labels[userLabelName]
s.logger.Infof("Running %s command %s for container %s", label, cmd, strings.TrimPrefix(c.Names[0], "/"))
s.logger.Info(fmt.Sprintf("Running %s command %s for container %s", label, cmd, strings.TrimPrefix(c.Names[0], "/")))
stdout, stderr, err := s.exec(c.ID, cmd, user)
if s.c.ExecForwardOutput {
os.Stderr.Write(stderr)


@@ -18,7 +18,7 @@ import (
func (s *script) lock(lockfile string) (func() error, error) {
start := time.Now()
defer func() {
s.stats.LockedTime = time.Now().Sub(start)
s.stats.LockedTime = time.Since(start)
}()
retry := time.NewTicker(5 * time.Second)
@@ -41,9 +41,11 @@ func (s *script) lock(lockfile string) (func() error, error) {
}
if !s.encounteredLock {
s.logger.Infof(
"Exclusive lock was not available on first attempt. Will retry until it becomes available or the timeout of %s is exceeded.",
s.c.LockTimeout,
s.logger.Info(
fmt.Sprintf(
"Exclusive lock was not available on first attempt. Will retry until it becomes available or the timeout of %s is exceeded.",
s.c.LockTimeout,
),
)
s.encounteredLock = true
}


@@ -4,6 +4,7 @@
package main
import (
"fmt"
"os"
)
@@ -14,14 +15,16 @@ func main() {
}
unlock, err := s.lock("/var/lock/dockervolumebackup.lock")
defer unlock()
defer s.must(unlock())
s.must(err)
defer func() {
if pArg := recover(); pArg != nil {
if err, ok := pArg.(error); ok {
if hookErr := s.runHooks(err); hookErr != nil {
s.logger.Errorf("An error occurred calling the registered hooks: %s", hookErr)
s.logger.Error(
fmt.Sprintf("An error occurred calling the registered hooks: %s", hookErr),
)
}
os.Exit(1)
}
@@ -29,9 +32,11 @@ func main() {
}
if err := s.runHooks(nil); err != nil {
s.logger.Errorf(
"Backup procedure ran successfully, but an error ocurred calling the registered hooks: %v",
err,
s.logger.Error(
fmt.Sprintf(
"Backup procedure ran successfully, but an error ocurred calling the registered hooks: %v",
err,
),
)
os.Exit(1)
}


@@ -4,35 +4,40 @@
package main
import (
"bytes"
"context"
"errors"
"fmt"
"io"
"io/fs"
"log/slog"
"os"
"path"
"path/filepath"
"slices"
"strings"
"text/template"
"time"
"github.com/offen/docker-volume-backup/internal/storage"
"github.com/offen/docker-volume-backup/internal/storage/azure"
"github.com/offen/docker-volume-backup/internal/storage/dropbox"
"github.com/offen/docker-volume-backup/internal/storage/local"
"github.com/offen/docker-volume-backup/internal/storage/s3"
"github.com/offen/docker-volume-backup/internal/storage/ssh"
"github.com/offen/docker-volume-backup/internal/storage/webdav"
"github.com/ProtonMail/go-crypto/openpgp"
"github.com/containrrr/shoutrrr"
"github.com/containrrr/shoutrrr/pkg/router"
"github.com/docker/docker/api/types"
ctr "github.com/docker/docker/api/types/container"
"github.com/docker/docker/api/types/filters"
"github.com/docker/docker/api/types/swarm"
"github.com/docker/docker/client"
"github.com/kelseyhightower/envconfig"
"github.com/leekchan/timeutil"
"github.com/offen/envconfig"
"github.com/otiai10/copy"
"github.com/sirupsen/logrus"
"golang.org/x/crypto/openpgp"
"golang.org/x/sync/errgroup"
)
@@ -41,7 +46,7 @@ import (
type script struct {
cli *client.Client
storages []storage.Backend
logger *logrus.Logger
logger *slog.Logger
sender *router.ServiceRouter
template *template.Template
hooks []hook
@@ -62,22 +67,18 @@ type script struct {
func newScript() (*script, error) {
stdOut, logBuffer := buffer(os.Stdout)
s := &script{
c: &Config{},
logger: &logrus.Logger{
Out: stdOut,
Formatter: new(logrus.TextFormatter),
Hooks: make(logrus.LevelHooks),
Level: logrus.InfoLevel,
},
c: &Config{},
logger: slog.New(slog.NewTextHandler(stdOut, nil)),
stats: &Stats{
StartTime: time.Now(),
LogOutput: logBuffer,
Storages: map[string]StorageStats{
"S3": {},
"WebDAV": {},
"SSH": {},
"Local": {},
"Azure": {},
"S3": {},
"WebDAV": {},
"SSH": {},
"Local": {},
"Azure": {},
"Dropbox": {},
},
},
}
@@ -88,11 +89,47 @@ func newScript() (*script, error) {
return nil
})
envconfig.Lookup = func(key string) (string, bool) {
value, okValue := os.LookupEnv(key)
location, okFile := os.LookupEnv(key + "_FILE")
switch {
case okValue && !okFile: // only value
return value, true
case !okValue && okFile: // only file
contents, err := os.ReadFile(location)
if err != nil {
s.must(fmt.Errorf("newScript: failed to read %s! Error: %s", location, err))
return "", false
}
return string(contents), true
case okValue && okFile: // both
s.must(fmt.Errorf("newScript: both %s and %s are set!", key, key+"_FILE"))
return "", false
default: // neither, ignore
return "", false
}
}
if err := envconfig.Process("", s.c); err != nil {
return nil, fmt.Errorf("newScript: failed to process configuration values: %w", err)
}
s.file = path.Join("/tmp", s.c.BackupFilename)
tmplFileName, tErr := template.New("extension").Parse(s.file)
if tErr != nil {
return nil, fmt.Errorf("newScript: unable to parse backup file extension template: %w", tErr)
}
var bf bytes.Buffer
if tErr := tmplFileName.Execute(&bf, map[string]string{
"Extension": fmt.Sprintf("tar.%s", s.c.BackupCompression),
}); tErr != nil {
return nil, fmt.Errorf("newScript: error executing backup file extension template: %w", tErr)
}
s.file = bf.String()
if s.c.BackupFilenameExpand {
s.file = os.ExpandEnv(s.file)
s.c.BackupLatestSymlink = os.ExpandEnv(s.c.BackupLatestSymlink)
@@ -113,28 +150,19 @@ func newScript() (*script, error) {
logFunc := func(logType storage.LogLevel, context string, msg string, params ...any) {
switch logType {
case storage.LogLevelWarning:
s.logger.Warnf("["+context+"] "+msg, params...)
s.logger.Warn(fmt.Sprintf(msg, params...), "storage", context)
case storage.LogLevelError:
s.logger.Errorf("["+context+"] "+msg, params...)
case storage.LogLevelInfo:
s.logger.Error(fmt.Sprintf(msg, params...), "storage", context)
default:
s.logger.Infof("["+context+"] "+msg, params...)
s.logger.Info(fmt.Sprintf(msg, params...), "storage", context)
}
}
if s.c.AwsS3BucketName != "" {
accessKeyID, err := s.c.resolveSecret(s.c.AwsAccessKeyID, s.c.AwsAccessKeyIDFile)
if err != nil {
return nil, fmt.Errorf("newScript: error resolving AwsAccessKeyID: %w", err)
}
secretAccessKey, err := s.c.resolveSecret(s.c.AwsSecretAccessKey, s.c.AwsSecretAccessKeyFile)
if err != nil {
return nil, fmt.Errorf("newScript: error resolving AwsSecretAccessKey: %w", err)
}
s3Config := s3.Config{
Endpoint: s.c.AwsEndpoint,
AccessKeyID: accessKeyID,
SecretAccessKey: secretAccessKey,
AccessKeyID: s.c.AwsAccessKeyID,
SecretAccessKey: s.c.AwsSecretAccessKey,
IamRoleEndpoint: s.c.AwsIamRoleEndpoint,
EndpointProto: s.c.AwsEndpointProto,
EndpointInsecure: s.c.AwsEndpointInsecure,
@@ -144,11 +172,11 @@ func newScript() (*script, error) {
CACert: s.c.AwsEndpointCACert.Cert,
PartSize: s.c.AwsPartSize,
}
if s3Backend, err := s3.NewStorageBackend(s3Config, logFunc); err != nil {
return nil, err
} else {
s.storages = append(s.storages, s3Backend)
s3Backend, err := s3.NewStorageBackend(s3Config, logFunc)
if err != nil {
return nil, fmt.Errorf("newScript: error creating s3 storage backend: %w", err)
}
s.storages = append(s.storages, s3Backend)
}
if s.c.WebdavUrl != "" {
@@ -159,11 +187,11 @@ func newScript() (*script, error) {
Password: s.c.WebdavPassword,
RemotePath: s.c.WebdavPath,
}
if webdavBackend, err := webdav.NewStorageBackend(webDavConfig, logFunc); err != nil {
return nil, err
} else {
s.storages = append(s.storages, webdavBackend)
webdavBackend, err := webdav.NewStorageBackend(webDavConfig, logFunc)
if err != nil {
return nil, fmt.Errorf("newScript: error creating webdav storage backend: %w", err)
}
s.storages = append(s.storages, webdavBackend)
}
if s.c.SSHHostName != "" {
@@ -176,11 +204,11 @@ func newScript() (*script, error) {
IdentityPassphrase: s.c.SSHIdentityPassphrase,
RemotePath: s.c.SSHRemotePath,
}
if sshBackend, err := ssh.NewStorageBackend(sshConfig, logFunc); err != nil {
return nil, err
} else {
s.storages = append(s.storages, sshBackend)
sshBackend, err := ssh.NewStorageBackend(sshConfig, logFunc)
if err != nil {
return nil, fmt.Errorf("newScript: error creating ssh storage backend: %w", err)
}
s.storages = append(s.storages, sshBackend)
}
if _, err := os.Stat(s.c.BackupArchive); !os.IsNotExist(err) {
@@ -202,11 +230,28 @@ func newScript() (*script, error) {
}
azureBackend, err := azure.NewStorageBackend(azureConfig, logFunc)
if err != nil {
return nil, err
return nil, fmt.Errorf("newScript: error creating azure storage backend: %w", err)
}
s.storages = append(s.storages, azureBackend)
}
if s.c.DropboxRefreshToken != "" && s.c.DropboxAppKey != "" && s.c.DropboxAppSecret != "" {
dropboxConfig := dropbox.Config{
Endpoint: s.c.DropboxEndpoint,
OAuth2Endpoint: s.c.DropboxOAuth2Endpoint,
RefreshToken: s.c.DropboxRefreshToken,
AppKey: s.c.DropboxAppKey,
AppSecret: s.c.DropboxAppSecret,
RemotePath: s.c.DropboxRemotePath,
ConcurrencyLevel: s.c.DropboxConcurrencyLevel.Int(),
}
dropboxBackend, err := dropbox.NewStorageBackend(dropboxConfig, logFunc)
if err != nil {
return nil, fmt.Errorf("newScript: error creating dropbox storage backend: %w", err)
}
s.storages = append(s.storages, dropboxBackend)
}
if s.c.EmailNotificationRecipient != "" {
emailURL := fmt.Sprintf(
"smtp://%s:%s@%s:%d/?from=%s&to=%s",
@@ -281,9 +326,7 @@ func (s *script) stopContainers() (func() error, error) {
return noop, nil
}
allContainers, err := s.cli.ContainerList(context.Background(), types.ContainerListOptions{
Quiet: true,
})
allContainers, err := s.cli.ContainerList(context.Background(), types.ContainerListOptions{})
if err != nil {
return noop, fmt.Errorf("stopContainers: error querying for containers: %w", err)
}
@@ -293,7 +336,6 @@ func (s *script) stopContainers() (func() error, error) {
s.c.BackupStopContainerLabel,
)
containersToStop, err := s.cli.ContainerList(context.Background(), types.ContainerListOptions{
Quiet: true,
Filters: filters.NewArgs(filters.KeyValuePair{
Key: "label",
Value: containerLabel,
@@ -308,17 +350,19 @@ func (s *script) stopContainers() (func() error, error) {
return noop, nil
}
s.logger.Infof(
"Stopping %d container(s) labeled `%s` out of %d running container(s).",
len(containersToStop),
containerLabel,
len(allContainers),
s.logger.Info(
fmt.Sprintf(
"Stopping %d container(s) labeled `%s` out of %d running container(s).",
len(containersToStop),
containerLabel,
len(allContainers),
),
)
var stoppedContainers []types.Container
var stopErrors []error
for _, container := range containersToStop {
if err := s.cli.ContainerStop(context.Background(), container.ID, nil); err != nil {
if err := s.cli.ContainerStop(context.Background(), container.ID, ctr.StopOptions{}); err != nil {
stopErrors = append(stopErrors, err)
} else {
stoppedContainers = append(stoppedContainers, container)
@@ -384,9 +428,11 @@ func (s *script) stopContainers() (func() error, error) {
errors.Join(restartErrors...),
)
}
s.logger.Infof(
"Restarted %d container(s) and the matching service(s).",
len(stoppedContainers),
s.logger.Info(
fmt.Sprintf(
"Restarted %d container(s) and the matching service(s).",
len(stoppedContainers),
),
)
return nil
}, stopError
@@ -410,7 +456,9 @@ func (s *script) createArchive() error {
if err := remove(backupSources); err != nil {
return fmt.Errorf("createArchive: error removing snapshot: %w", err)
}
s.logger.Infof("Removed snapshot `%s`.", backupSources)
s.logger.Info(
fmt.Sprintf("Removed snapshot `%s`.", backupSources),
)
return nil
})
if err := copy.Copy(s.c.BackupSources, backupSources, copy.Options{
@@ -419,7 +467,9 @@ func (s *script) createArchive() error {
}); err != nil {
return fmt.Errorf("createArchive: error creating snapshot: %w", err)
}
s.logger.Infof("Created snapshot of `%s` at `%s`.", s.c.BackupSources, backupSources)
s.logger.Info(
fmt.Sprintf("Created snapshot of `%s` at `%s`.", s.c.BackupSources, backupSources),
)
}
tarFile := s.file
@@ -427,7 +477,9 @@ func (s *script) createArchive() error {
if err := remove(tarFile); err != nil {
return fmt.Errorf("createArchive: error removing tar file: %w", err)
}
s.logger.Infof("Removed tar file `%s`.", tarFile)
s.logger.Info(
fmt.Sprintf("Removed tar file `%s`.", tarFile),
)
return nil
})
@@ -451,11 +503,13 @@ func (s *script) createArchive() error {
return fmt.Errorf("createArchive: error walking filesystem tree: %w", err)
}
if err := createArchive(filesEligibleForBackup, backupSources, tarFile); err != nil {
if err := createArchive(filesEligibleForBackup, backupSources, tarFile, s.c.BackupCompression.String(), s.c.GzipParallelism.Int()); err != nil {
return fmt.Errorf("createArchive: error compressing backup folder: %w", err)
}
s.logger.Infof("Created backup of `%s` at `%s`.", backupSources, tarFile)
s.logger.Info(
fmt.Sprintf("Created backup of `%s` at `%s`.", backupSources, tarFile),
)
return nil
}
@@ -472,7 +526,9 @@ func (s *script) encryptArchive() error {
if err := remove(gpgFile); err != nil {
return fmt.Errorf("encryptArchive: error removing gpg file: %w", err)
}
s.logger.Infof("Removed GPG file `%s`.", gpgFile)
s.logger.Info(
fmt.Sprintf("Removed GPG file `%s`.", gpgFile),
)
return nil
})
@@ -502,7 +558,9 @@ func (s *script) encryptArchive() error {
}
s.file = gpgFile
s.logger.Infof("Encrypted backup using given passphrase, saving as `%s`.", s.file)
s.logger.Info(
fmt.Sprintf("Encrypted backup using given passphrase, saving as `%s`.", s.file),
)
return nil
}
@@ -549,6 +607,12 @@ func (s *script) pruneBackups() error {
for _, backend := range s.storages {
b := backend
eg.Go(func() error {
if skipPrune(b.Name(), s.c.BackupSkipBackendsFromPrune) {
s.logger.Info(
fmt.Sprintf("Skipping pruning for backend `%s`.", b.Name()),
)
return nil
}
stats, err := b.Prune(deadline, s.c.BackupPruningPrefix)
if err != nil {
return err
@@ -574,7 +638,20 @@ func (s *script) pruneBackups() error {
// is non-nil.
func (s *script) must(err error) {
if err != nil {
s.logger.Errorf("Fatal error running backup: %s", err)
s.logger.Error(
fmt.Sprintf("Fatal error running backup: %s", err),
)
panic(err)
}
}
// skipPrune returns true if the given backend name is contained in the
// list of skipped backends.
func skipPrune(name string, skippedBackends []string) bool {
return slices.ContainsFunc(
skippedBackends,
func(b string) bool {
return strings.EqualFold(b, name) // ignore case on both sides
},
)
}
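To illustrate the helper's case-insensitive matching — illustrative values; the slice is what `BACKUP_SKIP_BACKENDS_FROM_PRUNE` decodes into:

```go
// Matching is case-insensitive on both sides, so configured values do
// not need to match the backend names' exact casing.
fmt.Println(skipPrune("S3", []string{"s3", "local"}))      // true
fmt.Println(skipPrune("Dropbox", []string{"s3", "local"})) // false
```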

docs/.gitignore (new file)

@@ -0,0 +1,2 @@
_site
.jekyll-cache

docs/Gemfile (new file)

@@ -0,0 +1,4 @@
source 'https://rubygems.org'
gem "jekyll", "~> 4.3.2"
gem "just-the-docs", "0.6.1"

docs/Gemfile.lock (new file)

@@ -0,0 +1,80 @@
GEM
remote: https://rubygems.org/
specs:
addressable (2.8.5)
public_suffix (>= 2.0.2, < 6.0)
colorator (1.1.0)
concurrent-ruby (1.2.2)
em-websocket (0.5.3)
eventmachine (>= 0.12.9)
http_parser.rb (~> 0)
eventmachine (1.2.7)
ffi (1.15.5)
forwardable-extended (2.6.0)
http_parser.rb (0.8.0)
i18n (1.14.1)
concurrent-ruby (~> 1.0)
jekyll (4.3.2)
addressable (~> 2.4)
colorator (~> 1.0)
em-websocket (~> 0.5)
i18n (~> 1.0)
jekyll-sass-converter (>= 2.0, < 4.0)
jekyll-watch (~> 2.0)
kramdown (~> 2.3, >= 2.3.1)
kramdown-parser-gfm (~> 1.0)
liquid (~> 4.0)
mercenary (>= 0.3.6, < 0.5)
pathutil (~> 0.9)
rouge (>= 3.0, < 5.0)
safe_yaml (~> 1.0)
terminal-table (>= 1.8, < 4.0)
webrick (~> 1.7)
jekyll-include-cache (0.2.1)
jekyll (>= 3.7, < 5.0)
jekyll-sass-converter (2.2.0)
sassc (> 2.0.1, < 3.0)
jekyll-seo-tag (2.8.0)
jekyll (>= 3.8, < 5.0)
jekyll-watch (2.2.1)
listen (~> 3.0)
just-the-docs (0.6.1)
jekyll (>= 3.8.5)
jekyll-include-cache
jekyll-seo-tag (>= 2.0)
rake (>= 12.3.1)
kramdown (2.4.0)
rexml
kramdown-parser-gfm (1.1.0)
kramdown (~> 2.0)
liquid (4.0.4)
listen (3.8.0)
rb-fsevent (~> 0.10, >= 0.10.3)
rb-inotify (~> 0.9, >= 0.9.10)
mercenary (0.4.0)
pathutil (0.16.2)
forwardable-extended (~> 2.6)
public_suffix (4.0.7)
rake (13.0.6)
rb-fsevent (0.11.2)
rb-inotify (0.10.1)
ffi (~> 1.0)
rexml (3.2.6)
rouge (3.30.0)
safe_yaml (1.0.5)
sassc (2.4.0)
ffi (~> 1.9)
terminal-table (3.0.2)
unicode-display_width (>= 1.1.1, < 3)
unicode-display_width (2.4.2)
webrick (1.8.1)
PLATFORMS
ruby
DEPENDENCIES
jekyll (~> 4.3.2)
just-the-docs (= 0.6.1)
BUNDLED WITH
2.1.4


@@ -1,40 +0,0 @@
# Notification templates reference
In order to customize title and body of notifications you'll have to write a [go template](https://pkg.go.dev/text/template) and mount it inside the `/etc/dockervolumebackup/notifications.d/` directory.
Configuration, data about the backup run and helper functions will be passed to this template, this page documents them fully.
## Data
Here is a list of all data passed to the template:
* `Config`: this object holds the configuration that has been passed to the script. The field names are the name of the recognized environment variables converted in PascalCase. (e.g. `BACKUP_STOP_CONTAINER_LABEL` becomes `BackupStopContainerLabel`)
* `Error`: the error that made the backup fail. Only available in the `title_failure` and `body_failure` templates
* `Stats`: objects that holds stats regarding script execution. In case of an unsuccessful run, some information may not be available.
* `StartTime`: time when the script started execution
* `EndTime`: time when the backup has completed successfully (after pruning)
* `TookTime`: amount of time it took for the backup to run. (equal to `EndTime - StartTime`)
* `LockedTime`: amount of time it took for the backup to acquire the exclusive lock
* `LogOutput`: full log of the application
* `Containers`: object containing stats about the docker containers
* `All`: total number of containers
* `ToStop`: number of containers matched by the stop rule
* `Stopped`: number of containers successfully stopped
* `StopErrors`: number of containers that were unable to be stopped (equal to `ToStop - Stopped`)
* `BackupFile`: object containing information about the backup file
* `Name`: name of the backup file (e.g. `backup-2022-02-11T01-00-00.tar.gz`)
* `FullPath`: full path of the backup file (e.g. `/archive/backup-2022-02-11T01-00-00.tar.gz`)
* `Size`: size in bytes of the backup file
* `Storages`: object that holds stats about each storage
* `Local`, `S3`, `WebDAV`, `Azure` or `SSH`:
* `Total`: total number of backup files
* `Pruned`: number of backup files that were deleted due to pruning rule
* `PruneErrors`: number of backup files that were unable to be pruned
## Functions
Some formatting and helper functions are also available:
* `formatTime`: formats a time object using [RFC3339](https://datatracker.ietf.org/doc/html/rfc3339) format (e.g. `2022-02-11T01:00:00Z`)
* `formatBytesBin`: formats an amount of bytes using powers of 1024 (e.g. `7055258` bytes will be `6.7 MiB`)
* `formatBytesDec`: formats an amount of bytes using powers of 1000 (e.g. `7055258` bytes will be `7.1 MB`)
* `env`: returns the value of the environment variable of the given key if set

docs/README.md (new file)

@@ -0,0 +1,14 @@
# Documentation site
This directory contains the sources for the documentation site published at <https://offen.github.io/docker-volume-backup>.
Assuming you have Ruby and [`bundler`][bundler] installed, you can run the site locally using the following commands:
```
bundle install
bundle exec jekyll serve
```
Note that changes in `_config.yml` require a manual restart to take effect.
[bundler]: https://bundler.io/

docs/_config.yml (new file)

@@ -0,0 +1,35 @@
title: docker-volume-backup
description: Documentation for the offen/docker-volume-backup Docker image.
theme: just-the-docs
url: https://offen.github.io/docker-volume-backup/
callouts_level: quiet
callouts:
highlight:
color: yellow
important:
title: Important
color: blue
new:
title: New
color: green
note:
title: Note
color: purple
warning:
title: Warning
color: red
aux_links:
'GitHub Repository':
- https://github.com/offen/docker-volume-backup
nav_external_links:
- title: GitHub Repository
url: https://github.com/offen/docker-volume-backup
footer_content: >-
Copyright &copy; 2021 Offen Authors and contributors.
Distributed under the <a href="https://github.com/offen/docker-volume-backup/tree/main/LICENSE">MPL-2.0 License.</a><br>
Something missing, unclear or not working? Open <a href="https://github.com/offen/docker-volume-backup/issues">an issue</a>.


@@ -0,0 +1,7 @@
.site-title {
font-size: unset !important;
}
.main-content pre {
font-size: 1.1em;
}


@@ -0,0 +1,34 @@
---
title: Automatically prune old backups
layout: default
parent: How Tos
nav_order: 3
---
# Automatically prune old backups
When `BACKUP_RETENTION_DAYS` is configured, the command will check if there are any archives in the remote storage backend(s) or local archive that are older than the given retention value and rotate these backups away.
{: .note }
Be aware that this mechanism looks at __all files in the target bucket or archive__, which means that other files that are older than the given deadline are deleted as well.
In case you need to use a target that cannot be used exclusively for your backups, you can configure `BACKUP_PRUNING_PREFIX` to limit which files are considered eligible for deletion:
```yml
version: '3'
services:
# ... define other services using the `data` volume here
backup:
image: offen/docker-volume-backup:v2
environment:
BACKUP_FILENAME: backup-%Y-%m-%dT%H-%M-%S.tar.gz
BACKUP_PRUNING_PREFIX: backup-
BACKUP_RETENTION_DAYS: '7'
volumes:
- ${HOME}/backups:/archive
- data:/backup/my-app-backup:ro
- /var/run/docker.sock:/var/run/docker.sock:ro
volumes:
data:
```


@@ -0,0 +1,40 @@
---
title: Define different retention schedules
layout: default
parent: How Tos
nav_order: 9
---
# Define different retention schedules
If you want to manage backup retention on different schedules, the most straightforward approach is to define a dedicated configuration for each retention rule, using a different prefix in the `BACKUP_FILENAME` parameter, and then run them on different cron schedules.
For example, if you wanted to keep daily backups for 7 days, weekly backups for a month, and retain monthly backups forever, you could create three configuration files and mount them into `/etc/dockervolumebackup/conf.d`:
```ini
# 01daily.conf
BACKUP_FILENAME="daily-backup-%Y-%m-%dT%H-%M-%S.tar.gz"
# run every day at 2am
BACKUP_CRON_EXPRESSION="0 2 * * *"
BACKUP_PRUNING_PREFIX="daily-backup-"
BACKUP_RETENTION_DAYS="7"
```
```ini
# 02weekly.conf
BACKUP_FILENAME="weekly-backup-%Y-%m-%dT%H-%M-%S.tar.gz"
# run every monday at 3am
BACKUP_CRON_EXPRESSION="0 3 * * 1"
BACKUP_PRUNING_PREFIX="weekly-backup-"
BACKUP_RETENTION_DAYS="31"
```
```ini
# 03monthly.conf
BACKUP_FILENAME="monthly-backup-%Y-%m-%dT%H-%M-%S.tar.gz"
# run every 1st of a month at 4am
BACKUP_CRON_EXPRESSION="0 4 1 * *"
```
{: .note }
While it's possible to define colliding cron schedules for each of these configurations, you might need to adjust the value for `LOCK_TIMEOUT` in case your backups are large and might take longer than an hour.


@@ -0,0 +1,17 @@
---
title: Encrypt backups using GPG
layout: default
parent: How Tos
nav_order: 7
---
# Encrypt backups using GPG
The image supports encrypting backups using GPG out of the box.
In case a `GPG_PASSPHRASE` environment variable is set, the backup archive will be encrypted using the given key and saved as a `.gpg` file instead.
Assuming you have `gpg` installed, you can decrypt such a backup using (your OS will prompt for the passphrase before decryption can happen):
```console
gpg -o backup.tar.gz -d backup.tar.gz.gpg
```


@@ -0,0 +1,44 @@
---
title: Handle file uploads using third party tools
layout: default
parent: How Tos
nav_order: 10
---
# Handle file uploads using third party tools
If you want to use an unsupported storage backend, or want to use a third party (e.g. rsync, rclone) tool for file uploads, you can build a Docker image containing the required binaries off this one, and call through to these in lifecycle hooks.
For example, if you wanted to use `rsync`, define your Docker image like this:
```Dockerfile
FROM offen/docker-volume-backup:v2
RUN apk add rsync
```
Using this image, you can now omit configuring any of the supported storage backends, and instead define your own mechanism in a `docker-volume-backup.copy-post` label:
```yml
version: '3'
services:
backup:
image: your-custom-image
restart: always
environment:
BACKUP_FILENAME: "daily-backup-%Y-%m-%dT%H-%M-%S.tar.gz"
BACKUP_CRON_EXPRESSION: "0 2 * * *"
labels:
- docker-volume-backup.copy-post=/bin/sh -c 'rsync $$COMMAND_RUNTIME_ARCHIVE_FILEPATH /destination'
volumes:
- app_data:/backup/app_data:ro
- /var/run/docker.sock:/var/run/docker.sock
# other services defined here ...
volumes:
app_data:
```
{: .note }
Commands will be invoked with the filepath of the tar archive passed as `COMMAND_RUNTIME_ARCHIVE_FILEPATH`.

docs/how-tos/index.md (new file)

@@ -0,0 +1,8 @@
---
title: How Tos
layout: default
nav_order: 3
has_children: true
---
## How Tos


@@ -0,0 +1,20 @@
---
title: Trigger a backup manually
layout: default
parent: How Tos
nav_order: 8
---
# Trigger a backup manually
You can manually trigger a backup run outside of the defined cron schedule by executing the `backup` command inside the container:
```console
docker exec <container_ref> backup
```
If the container is configured to run multiple schedules, you can source the respective conf file before invoking the command:
```console
docker exec <container_ref> /bin/sh -c 'set -a; source /etc/dockervolumebackup/conf.d/myconf.env; set +a && backup'
```

View File

@@ -0,0 +1,37 @@
---
title: Replace deprecated BACKUP_FROM_SNAPSHOT usage
layout: default
parent: How Tos
nav_order: 16
---
# Replace deprecated `BACKUP_FROM_SNAPSHOT` usage
Starting with version 2.15.0, the `BACKUP_FROM_SNAPSHOT` feature has been deprecated.
If you need to prepare your sources before the backup is taken, use `archive-pre`, `archive-post` and an intermediate volume:
```yml
version: '3'
services:
my_app:
build: .
volumes:
- data:/var/my_app
- backup:/tmp/backup
labels:
- docker-volume-backup.archive-pre=cp -r /var/my_app /tmp/backup/my-app
- docker-volume-backup.archive-post=rm -rf /tmp/backup/my-app
backup:
image: offen/docker-volume-backup:v2
environment:
BACKUP_SOURCES: /tmp/backup
volumes:
- backup:/backup:ro
- /var/run/docker.sock:/var/run/docker.sock:ro
volumes:
data:
backup:
```

View File

@@ -0,0 +1,23 @@
---
title: Replace deprecated exec-pre and exec-post labels
layout: default
parent: How Tos
nav_order: 17
---
# Replace deprecated `exec-pre` and `exec-post` labels
Version 2.19.0 introduced the option to run labeled commands at multiple points in time during the backup lifecycle.
In order to be able to use more obvious terminology in the new labels, the existing `exec-pre` and `exec-post` labels have been deprecated.
If you want to emulate the existing behavior, all you need to do is change `exec-pre` to `archive-pre` and `exec-post` to `archive-post`:
```diff
labels:
- - docker-volume-backup.exec-pre=cp -r /var/my_app /tmp/backup/my-app
+ - docker-volume-backup.archive-pre=cp -r /var/my_app /tmp/backup/my-app
- - docker-volume-backup.exec-post=rm -rf /tmp/backup/my-app
+ - docker-volume-backup.archive-post=rm -rf /tmp/backup/my-app
```
The `EXEC_LABEL` setting and the `docker-volume-backup.exec-label` label stay as is.
Check the additional documentation on running commands during the backup lifecycle to find out about further possibilities.

View File

@@ -0,0 +1,46 @@
---
title: Restore volumes from a backup
layout: default
parent: How Tos
nav_order: 6
---
# Restore volumes from a backup
In case you need to restore a volume from a backup, the most straightforward procedure would be:
- Stop the container(s) that are using the volume
- Untar the backup you want to restore
```console
tar -C /tmp -xvf backup.tar.gz
```
- Using a temporary once-off container, mount the volume (the example assumes it's named `data`) and copy over the backup. Make sure you copy the correct path level (this depends on how you mount your volume into the backup container); you might need to strip some leading path elements:
```console
docker run -d --name temp_restore_container -v data:/backup_restore alpine
docker cp /tmp/backup/data-backup temp_restore_container:/backup_restore
docker stop temp_restore_container
docker rm temp_restore_container
```
- Restart the container(s) that are using the volume
Depending on your setup and the application(s) you are running, this might still require further steps.
---
If you want to roll back an entire volume to an earlier backup snapshot (recommended for database volumes):
- Trigger a manual backup if necessary (see `Trigger a backup manually`).
- Stop the container(s) that are using the volume.
- If the volume was initially created using docker-compose, find out the exact volume name using:
```console
docker volume ls
```
- Remove the existing volume (the example assumes it's named `data`):
```console
docker volume rm data
```
- Create a new volume with the same name and restore the snapshot into it:
```console
docker run --rm -it -v data:/backup/my-app-backup -v /path/to/local_backups:/archive:ro alpine tar -xvzf /archive/full_backup_filename.tar.gz
```
- Restart the container(s) that are using the volume.

View File

@@ -0,0 +1,93 @@
---
title: Run custom commands during the backup lifecycle
layout: default
nav_order: 5
parent: How Tos
---
# Run custom commands during the backup lifecycle
In certain scenarios it can be required to run specific commands before and after a backup is taken (e.g. dumping a database).
When mounting the Docker socket into the `docker-volume-backup` container, you can define pre- and post-commands that will be run in the context of the target container (it is also possible to run commands inside the `docker-volume-backup` container itself using this feature).
Such commands are defined by specifying the command in a `docker-volume-backup.[step]-[pre|post]` label where `step` can be any of the following phases of a backup lifecycle:
- `archive` (the tar archive is created)
- `process` (the tar archive is processed, e.g. encrypted - optional)
- `copy` (the tar archive is copied to all configured storages)
- `prune` (existing backups are pruned based on the defined ruleset - optional)
{: .note }
For the `docker-volume-backup` container to be able to access the labels on other containers, the Docker socket needs to be mounted into it, as shown in the Quickstart example.
Taking a database dump using `mysqldump` would look like this:
```yml
version: '3'
services:
  # ... define the `backup` service and other services using the `backup_data` volume here
database:
image: mariadb
volumes:
- backup_data:/tmp/backups
labels:
      - docker-volume-backup.archive-pre=/bin/sh -c 'mysqldump --all-databases > /tmp/backups/dump.sql'
volumes:
backup_data:
```
{: .note }
Due to Docker limitations, you currently cannot use any kind of redirection in these commands unless you pass the command to `/bin/sh -c` or similar.
That is, instead of using `echo "ok" > ok.txt`, you will need to use `/bin/sh -c 'echo "ok" > ok.txt'`.
If you are running more than one `docker-volume-backup` container (possibly across several docker-compose environments) or are using
multiple backup schedules, you will need to set `EXEC_LABEL` in the configuration and add a `docker-volume-backup.exec-label` label to each
container using custom commands, ensuring the commands are only run by the correct `docker-volume-backup` instance.
```yml
version: '3'
services:
database:
image: mariadb
volumes:
- backup_data:/tmp/backups
labels:
      - docker-volume-backup.archive-pre=/bin/sh -c 'mysqldump --all-databases > /tmp/backups/dump.sql'
- docker-volume-backup.exec-label=database
backup:
image: offen/docker-volume-backup:v2
environment:
EXEC_LABEL: database
volumes:
      - backup_data:/backup/dump:ro
- /var/run/docker.sock:/var/run/docker.sock:ro
volumes:
backup_data:
```
The backup procedure is guaranteed to wait for all `pre` or `post` commands to finish before proceeding.
However, there are no guarantees about the order in which they run, and they might also run concurrently.
By default, such a command is executed by the user provided by the container's image.
It is possible to specify a custom user that is used to run commands in dedicated labels with the format `docker-volume-backup.[step]-[pre|post].user`:
```yml
version: '3'
services:
gitea:
image: gitea/gitea
volumes:
- backup_data:/tmp
labels:
- docker-volume-backup.archive-pre.user=git
- docker-volume-backup.archive-pre=/bin/bash -c 'cd /tmp; /usr/local/bin/gitea dump -c /data/gitea/conf/app.ini -R -f dump.zip'
```
Make sure the user exists and is present in `passwd` inside the target container.

View File

@@ -0,0 +1,52 @@
---
title: Run multiple backup schedules in the same container
layout: default
parent: How Tos
nav_order: 11
---
# Run multiple backup schedules in the same container
Multiple backup schedules, each with its own configuration, can be set up by mounting an arbitrary number of configuration files (using the `.env` format) into `/etc/dockervolumebackup/conf.d`:
```yml
version: '3'
services:
# ... define other services using the `data` volume here
backup:
image: offen/docker-volume-backup:v2
volumes:
- data:/backup/my-app-backup:ro
- /var/run/docker.sock:/var/run/docker.sock:ro
- ./configuration:/etc/dockervolumebackup/conf.d
volumes:
data:
```
A separate cronjob will be created for each config file.
If a configuration value is set both in the global environment and in a config file, the config file takes precedence.
The `backup` command requires an exclusive lock, so in case you provide identical or overlapping cron expressions, the runs will still be executed serially, one after the other.
The exact order of schedules that use the same cron expression is not specified.
In case you need your schedules to actually overlap, you need to create a dedicated container for each schedule instead.
When changing the configuration, you currently need to restart the container manually for the changes to take effect.
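For illustration, two hypothetical config files mounted into `/etc/dockervolumebackup/conf.d` might look like this (names, schedules and filenames are placeholders):
```ini
# 01hourly.conf
BACKUP_CRON_EXPRESSION="0 * * * *"
BACKUP_FILENAME="hourly-backup-%Y-%m-%dT%H-%M-%S.tar.gz"
```
```ini
# 02daily.conf
BACKUP_CRON_EXPRESSION="0 2 * * *"
BACKUP_FILENAME="daily-backup-%Y-%m-%dT%H-%M-%S.tar.gz"
```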
Set `BACKUP_SOURCES` for each config file to control which subset of volume mounts gets backed up:
```yml
# With a volume configuration like this:
volumes:
- /var/run/docker.sock:/var/run/docker.sock:ro
- ./configuration:/etc/dockervolumebackup/conf.d
- app1_data:/backup/app1_data:ro
- app2_data:/backup/app2_data:ro
```
```ini
# In the 1st config file:
BACKUP_SOURCES=/backup/app1_data
# In the 2nd config file:
BACKUP_SOURCES=/backup/app2_data
```

View File

@@ -0,0 +1,27 @@
---
title: Set the timezone the container runs in
layout: default
parent: How Tos
nav_order: 8
---
# Set the timezone the container runs in
By default a container based on this image will run in the UTC timezone.
As the image is designed to be as small as possible, additional timezone data is not included.
In case you want to run your cron rules in your local timezone (respecting DST and similar), you can mount your Docker host's `/etc/timezone` and `/etc/localtime` in read-only mode:
```yml
version: '3'
services:
backup:
image: offen/docker-volume-backup:v2
volumes:
- data:/backup/my-app-backup:ro
- /etc/timezone:/etc/timezone:ro
- /etc/localtime:/etc/localtime:ro
volumes:
data:
```

View File

@@ -0,0 +1,37 @@
---
title: Set up Dropbox storage backend
layout: default
parent: How Tos
nav_order: 12
---
# Set up Dropbox storage backend
## Acquiring authentication tokens
1. Create a new Dropbox App in the [App Console](https://www.dropbox.com/developers/apps)
2. Open your new Dropbox App and set `DROPBOX_APP_KEY` and `DROPBOX_APP_SECRET` in your environment (e.g. docker-compose.yml) accordingly
3. Click on `Permissions` in your app and make sure that at least the following permissions are granted:
- `files.metadata.write`
- `files.metadata.read`
- `files.content.write`
- `files.content.read`
4. Replace APPKEY in `https://www.dropbox.com/oauth2/authorize?client_id=APPKEY&token_access_type=offline&response_type=code` with the app key from step 2
5. Visit the URL and confirm your app's access. This gives you an auth code; save it somewhere.
6. Replace AUTHCODE, APPKEY and APPSECRET accordingly and perform the following request:
```console
curl https://api.dropbox.com/oauth2/token \
-d code=AUTHCODE \
-d grant_type=authorization_code \
-d client_id=APPKEY \
-d client_secret=APPSECRET
```
7. You will get a JSON formatted reply (see the sketch below). Use the value of the `refresh_token` field for the `DROPBOX_REFRESH_TOKEN` environment variable
8. You should now have `DROPBOX_APP_KEY`, `DROPBOX_APP_SECRET` and `DROPBOX_REFRESH_TOKEN` set. These don't expire.
Note: Using the "Generated access token" in the app console is not supported, as it is very short-lived and therefore not suitable for an automated backup solution. The refresh token handles this automatically; the setup procedure above is only needed once.
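The token response is a JSON document roughly like the following sketch (all values here are shortened and made up); the `refresh_token` field is the value you need:
```
{
  "access_token": "sl.ABCD...",
  "token_type": "bearer",
  "expires_in": 14400,
  "refresh_token": "abcd1234...",
  "account_id": "dbid:AAAA..."
}
```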
## Other parameters
Important: If you chose `App folder` access during the creation of your Dropbox app in step 1 above, you can only write in the app's directory!
This means that `DROPBOX_REMOTE_PATH` must start with e.g. `/Apps/YOUR_APP_NAME` or `/Apps/YOUR_APP_NAME/some_sub_dir`.

View File

@@ -0,0 +1,125 @@
---
title: Receive notifications
layout: default
nav_order: 4
parent: How Tos
---
# Receive notifications
## Send email notifications on failed backup runs
To send out email notifications on failed backup runs, provide SMTP credentials, a sender and a recipient:
```yml
version: '3'
services:
backup:
image: offen/docker-volume-backup:v2
environment:
# ... other configuration values go here
      NOTIFICATION_URLS: smtp://me:secret@smtp.example.com:587/?fromAddress=no-reply@example.com&toAddresses=you@example.com
```
Notification backends other than email are also supported.
Refer to the documentation of [shoutrrr][shoutrrr-docs] to find out about options and configuration.
[shoutrrr-docs]: https://containrrr.dev/shoutrrr/0.7/services/overview/
{: .note }
If you also want notifications on successful executions, set `NOTIFICATION_LEVEL` to `info`.
## Customize notifications
The title and body of the notifications can be tailored to your needs using [Go templates](https://pkg.go.dev/text/template).
Template sources must be mounted inside the container in `/etc/dockervolumebackup/notifications.d/`: any file inside this directory will be parsed.
```yml
services:
backup:
image: offen/docker-volume-backup:v2
volumes:
- ./customized.template:/etc/dockervolumebackup/notifications.d/01.template
```
The files have to define [nested templates](https://pkg.go.dev/text/template#hdr-Nested_template_definitions) in order to override the original values. An example:
{% raw %}
```
{{ define "title_success" -}}
✅ Successfully ran backup {{ .Config.BackupStopContainerLabel }}
{{- end }}
{{ define "body_success" -}}
▶️ Start time: {{ .Stats.StartTime | formatTime }}
⏹️ End time: {{ .Stats.EndTime | formatTime }}
⌛ Took time: {{ .Stats.TookTime }}
🛑 Stopped containers: {{ .Stats.Containers.Stopped }}/{{ .Stats.Containers.All }} ({{ .Stats.Containers.StopErrors }} errors)
⚖️ Backup size: {{ .Stats.BackupFile.Size | formatBytesBin }} / {{ .Stats.BackupFile.Size | formatBytesDec }}
🗑️ Pruned backups: {{ .Stats.Storages.Local.Pruned }}/{{ .Stats.Storages.Local.Total }} ({{ .Stats.Storages.Local.PruneErrors }} errors)
{{- end }}
```
{% endraw %}
Template names that can be overridden are:
- `title_success` (the title used for a successful execution)
- `body_success` (the body used for a successful execution)
- `title_failure` (the title used for a failed execution)
- `body_failure` (the body used for a failed execution)
## Notification templates reference
Configuration, data about the backup run and helper functions will be passed to these templates; this page documents them fully.
### Data
Here is a list of all data passed to the template:
* `Config`: this object holds the configuration that has been passed to the script. The field names are the names of the recognized environment variables converted to PascalCase (e.g. `BACKUP_STOP_CONTAINER_LABEL` becomes `BackupStopContainerLabel`)
* `Error`: the error that made the backup fail. Only available in the `title_failure` and `body_failure` templates
* `Stats`: object that holds stats regarding the script's execution. In case of an unsuccessful run, some information may not be available.
* `StartTime`: time when the script started execution
* `EndTime`: time when the backup has completed successfully (after pruning)
  * `TookTime`: amount of time it took for the backup to run (equal to `EndTime - StartTime`)
* `LockedTime`: amount of time it took for the backup to acquire the exclusive lock
* `LogOutput`: full log of the application
* `Containers`: object containing stats about the docker containers
* `All`: total number of containers
* `ToStop`: number of containers matched by the stop rule
* `Stopped`: number of containers successfully stopped
* `StopErrors`: number of containers that were unable to be stopped (equal to `ToStop - Stopped`)
* `BackupFile`: object containing information about the backup file
* `Name`: name of the backup file (e.g. `backup-2022-02-11T01-00-00.tar.gz`)
* `FullPath`: full path of the backup file (e.g. `/archive/backup-2022-02-11T01-00-00.tar.gz`)
* `Size`: size in bytes of the backup file
* `Storages`: object that holds stats about each storage
* `Local`, `S3`, `WebDAV`, `Azure`, `Dropbox` or `SSH`:
* `Total`: total number of backup files
    * `Pruned`: number of backup files that were deleted due to the pruning rule
* `PruneErrors`: number of backup files that were unable to be pruned
### Functions
Some formatting and helper functions are also available:
* `formatTime`: formats a time object using [RFC3339](https://datatracker.ietf.org/doc/html/rfc3339) format (e.g. `2022-02-11T01:00:00Z`)
* `formatBytesBin`: formats an amount of bytes using powers of 1024 (e.g. `7055258` bytes will be `6.7 MiB`)
* `formatBytesDec`: formats an amount of bytes using powers of 1000 (e.g. `7055258` bytes will be `7.1 MB`)
* `env`: returns the value of the environment variable of the given key if set
## Special characters in notification URLs
The value given to `NOTIFICATION_URLS` is a comma separated list of URLs.
If such a URL contains special characters (e.g. commas) these need to be URL encoded.
To obtain an encoded version of your URL, you can use the CLI tool provided by `shoutrrr` (which is the library used for sending notifications):
```console
docker run --rm -ti containrrr/shoutrrr generate [service]
```
where `service` is any of the [supported services][shoutrrr-docs], e.g. for SMTP:
```console
docker run --rm -ti containrrr/shoutrrr generate smtp
```

View File

@@ -0,0 +1,35 @@
---
title: Stop containers during backup
layout: default
parent: How Tos
nav_order: 1
---
# Stop containers during backup
In many cases, it will be desirable to stop the services that are consuming the volume you want to back up in order to ensure data integrity.
This image can automatically stop and restart containers and services.
By default, any container that is labeled `docker-volume-backup.stop-during-backup=true` will be stopped before the backup is taken and restarted once it has finished.
In case you need more fine-grained control over which containers should be stopped (e.g. when backing up multiple volumes on different schedules), you can set the `BACKUP_STOP_CONTAINER_LABEL` environment variable and then use the same value for labeling:
```yml
version: '3'
services:
app:
# definition for app ...
labels:
- docker-volume-backup.stop-during-backup=service1
backup:
image: offen/docker-volume-backup:v2
environment:
BACKUP_STOP_CONTAINER_LABEL: service1
volumes:
- data:/backup/my-app-backup:ro
- /var/run/docker.sock:/var/run/docker.sock:ro
volumes:
data:
```

View File

@@ -0,0 +1,26 @@
---
title: Update deprecated email configuration
layout: default
parent: How Tos
nav_order: 18
---
# Update deprecated email configuration
Starting with version 2.6.0, configuring email notifications using `EMAIL_*` keys has been deprecated.
Instead of providing multiple values using multiple keys, you can now provide a single URL for `NOTIFICATION_URLS`.
Before:
```ini
EMAIL_NOTIFICATION_RECIPIENT="you@example.com"
EMAIL_NOTIFICATION_SENDER="no-reply@example.com"
EMAIL_SMTP_HOST="posteo.de"
EMAIL_SMTP_PASSWORD="secret"
EMAIL_SMTP_USERNAME="me"
EMAIL_SMTP_PORT="587"
```
After:
```ini
NOTIFICATION_URLS=smtp://me:secret@posteo.de:587/?fromAddress=no-reply@example.com&toAddresses=you@example.com
```

View File

@@ -0,0 +1,17 @@
---
title: Use a custom Docker host
layout: default
parent: How Tos
nav_order: 14
---
# Use a custom Docker host
If you are interfacing with Docker via TCP, set `DOCKER_HOST` to the correct URL.
```ini
DOCKER_HOST=tcp://docker_socket_proxy:2375
```
In case you are using a socket proxy, it must support `GET` and `POST` requests to the `/containers` endpoint. If you are using Docker Swarm, it must also support the `/services` endpoint. If you are using pre/post backup commands, it must also support the `/exec` endpoint.
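As a sketch, such a setup could look like the following, assuming the commonly used `tecnativa/docker-socket-proxy` image (any proxy exposing the endpoints listed above works just as well):
```yml
version: '3'
services:
  docker_socket_proxy:
    image: tecnativa/docker-socket-proxy
    environment:
      CONTAINERS: 1 # expose the /containers endpoints
      POST: 1       # allow POST requests
      EXEC: 1       # only needed when using pre/post commands
      SERVICES: 1   # only needed when running Docker Swarm
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
  backup:
    image: offen/docker-volume-backup:v2
    environment:
      DOCKER_HOST: tcp://docker_socket_proxy:2375
```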

View File

@@ -0,0 +1,23 @@
---
title: Use with rootless Docker
layout: default
parent: How Tos
nav_order: 15
---
# Use with rootless Docker
It's also possible to use this image with a [rootless Docker installation][rootless-docker].
Instead of mounting `/var/run/docker.sock`, mount the user-specific socket into the container:
```yml
services:
backup:
image: offen/docker-volume-backup:v2
# ... configuration omitted
volumes:
- backup:/backup:ro
- /run/user/1000/docker.sock:/var/run/docker.sock:ro
```
[rootless-docker]: https://docs.docker.com/engine/security/rootless/
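If you are unsure where the rootless socket lives on your host, it is usually found in the user's runtime directory:
```console
# typically prints something like /run/user/1000/docker.sock
echo "$XDG_RUNTIME_DIR/docker.sock"
```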

View File

@@ -0,0 +1,27 @@
---
title: Use with Docker Swarm
layout: default
parent: How Tos
nav_order: 13
---
# Use with Docker Swarm
By default, Docker Swarm automatically restarts containers that have exited, even when they were stopped manually.
If you plan to have your containers or services stopped during backup, this means you need to apply the `on-failure` restart policy to your service definitions, as shown in the sketch below.
A restart policy of `always` is not compatible with this tool.
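A minimal sketch of such a service definition (service name and image are placeholders):
```yml
services:
  app:
    image: my-app:latest
    deploy:
      restart_policy:
        condition: on-failure
    labels:
      - docker-volume-backup.stop-during-backup=true
```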
---
When running in Swarm mode, it's also advised to set a hard memory limit on your service (~25MB should be enough in most cases, but if you back up large files above half a gigabyte or similar, you might have to raise this in case the backup exits with `Killed`):
```yml
services:
backup:
image: offen/docker-volume-backup:v2
    deploy:
resources:
limits:
memory: 25M
```

117
docs/index.md Normal file
View File

@@ -0,0 +1,117 @@
---
title: Home
layout: home
nav_order: 1
---
# offen/docker-volume-backup
{:.no_toc}
Backup Docker volumes locally or to any S3, WebDAV, Azure Blob Storage, Dropbox or SSH compatible storage.
{: .fs-6 .fw-300 }
---
The [offen/docker-volume-backup](https://hub.docker.com/r/offen/docker-volume-backup) Docker image can be used as a lightweight (below 15MB) companion container to an existing Docker setup.
It handles __recurring or one-off backups of Docker volumes__ to a __local directory__, __any S3, WebDAV, Azure Blob Storage, Dropbox or SSH compatible storage (or any combination thereof) and rotates away old backups__ if configured. It also supports __encrypting your backups using GPG__ and __sending notifications for (failed) backup runs__.
{: .note }
Code and documentation for `v1` versions are found on [this branch][v1-branch].
[v1-branch]: https://github.com/offen/docker-volume-backup/tree/v1
---
1. TOC
{:toc}
## Quickstart
### Recurring backups in a compose setup
Add a `backup` service to your compose setup and mount the volumes you would like to see backed up:
```yml
version: '3'
services:
volume-consumer:
build:
context: ./my-app
volumes:
- data:/var/my-app
labels:
# This means the container will be stopped during backup to ensure
# backup integrity. You can omit this label if stopping during backup
      # is not required.
- docker-volume-backup.stop-during-backup=true
backup:
# In production, it is advised to lock your image tag to a proper
# release version instead of using `latest`.
# Check https://github.com/offen/docker-volume-backup/releases
# for a list of available releases.
image: offen/docker-volume-backup:latest
restart: always
env_file: ./backup.env # see below for configuration reference
volumes:
- data:/backup/my-app-backup:ro
# Mounting the Docker socket allows the script to stop and restart
# the container during backup and to access the container labels to
# specify custom commands. You can omit this if you don't want to
# stop the container or run custom commands. In case you need to
# proxy the socket, you can also provide a location by setting
# `DOCKER_HOST` in the container
- /var/run/docker.sock:/var/run/docker.sock:ro
# If you mount a local directory or volume to `/archive` a local
# copy of the backup will be stored there. You can override the
# location inside of the container by setting `BACKUP_ARCHIVE`.
# You can omit this if you do not want to keep local backups.
- /path/to/local_backups:/archive
volumes:
data:
```
### One-off backups using Docker CLI
To run a one time backup, mount the volume you would like to see backed up into a container and run the `backup` command:
```console
docker run --rm \
-v data:/backup/data \
--env AWS_ACCESS_KEY_ID="<xxx>" \
--env AWS_SECRET_ACCESS_KEY="<xxx>" \
--env AWS_S3_BUCKET_NAME="<xxx>" \
--entrypoint backup \
offen/docker-volume-backup:v2
```
Alternatively, pass a `--env-file` in order to use a full config as described below.
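For example, assuming your configuration lives in a local `backup.env` file:
```console
docker run --rm \
  -v data:/backup/data \
  --env-file ./backup.env \
  --entrypoint backup \
  offen/docker-volume-backup:v2
```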
### Available image registries
This Docker image is published to both Docker Hub and the GitHub container registry.
Depending on your preferences and needs, you can reference both `offen/docker-volume-backup` as well as `ghcr.io/offen/docker-volume-backup`:
```console
docker pull offen/docker-volume-backup:v2
docker pull ghcr.io/offen/docker-volume-backup:v2
```
Documentation references Docker Hub, but all examples will work using ghcr.io just as well.
## Differences to `jareware/docker-volume-backup`
This image is heavily inspired by `jareware/docker-volume-backup`. We decided to publish this image as a simpler and more lightweight alternative for the following reasons:
- The original image is based on `ubuntu` and requires additional tools, making it heavy.
This version is roughly 1/25 of the compressed size (it's ~15MB).
- The original image uses a shell script, whereas this version is written in Go.
- The original image proposed to handle backup rotation through AWS S3 lifecycle policies.
This image adds the option to rotate away old backups through the same command, so this functionality can also be offered for non-AWS storage backends like MinIO.
Local copies of backups can also be pruned once they reach a certain age.
- InfluxDB specific functionality from the original image was removed.
- `arm64` and `arm/v7` architectures are supported.
- Docker in Swarm mode is supported.
- Notifications on finished backups are supported.
- IAM authentication through instance profiles is supported.

373
docs/recipes/index.md Normal file
View File

@@ -0,0 +1,373 @@
---
title: Recipes
layout: default
nav_order: 4
---
# Recipes
{: .no_toc }
This doc lists configurations for some real-world use cases that you can copy, paste and tweak to match your needs.
1. TOC
{: toc }
## Backing up to AWS S3
```yml
version: '3'
services:
# ... define other services using the `data` volume here
backup:
image: offen/docker-volume-backup:v2
environment:
AWS_S3_BUCKET_NAME: backup-bucket
AWS_ACCESS_KEY_ID: AKIAIOSFODNN7EXAMPLE
AWS_SECRET_ACCESS_KEY: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
volumes:
- data:/backup/my-app-backup:ro
- /var/run/docker.sock:/var/run/docker.sock:ro
volumes:
data:
```
## Backing up to Filebase
```yml
version: '3'
services:
# ... define other services using the `data` volume here
backup:
image: offen/docker-volume-backup:v2
environment:
AWS_ENDPOINT: s3.filebase.com
AWS_S3_BUCKET_NAME: filebase-bucket
AWS_ACCESS_KEY_ID: FILEBASE-ACCESS-KEY
AWS_SECRET_ACCESS_KEY: FILEBASE-SECRET-KEY
volumes:
- data:/backup/my-app-backup:ro
- /var/run/docker.sock:/var/run/docker.sock:ro
volumes:
data:
```
## Backing up to MinIO
```yml
version: '3'
services:
# ... define other services using the `data` volume here
backup:
image: offen/docker-volume-backup:v2
environment:
AWS_ENDPOINT: minio.example.com
AWS_S3_BUCKET_NAME: backup-bucket
AWS_ACCESS_KEY_ID: MINIOACCESSKEY
AWS_SECRET_ACCESS_KEY: MINIOSECRETKEY
volumes:
- data:/backup/my-app-backup:ro
- /var/run/docker.sock:/var/run/docker.sock:ro
volumes:
data:
```
## Backing up to MinIO (using Docker secrets)
```yml
version: '3'
services:
# ... define other services using the `data` volume here
backup:
image: offen/docker-volume-backup:v2
environment:
AWS_ENDPOINT: minio.example.com
AWS_S3_BUCKET_NAME: backup-bucket
AWS_ACCESS_KEY_ID_FILE: /run/secrets/minio_access_key
AWS_SECRET_ACCESS_KEY_FILE: /run/secrets/minio_secret_key
volumes:
- data:/backup/my-app-backup:ro
- /var/run/docker.sock:/var/run/docker.sock:ro
secrets:
- minio_access_key
- minio_secret_key
volumes:
data:
secrets:
minio_access_key:
# ... define how secret is accessed
minio_secret_key:
# ... define how secret is accessed
```
## Backing up to WebDAV
```yml
version: '3'
services:
# ... define other services using the `data` volume here
backup:
image: offen/docker-volume-backup:v2
environment:
WEBDAV_URL: https://webdav.mydomain.me
WEBDAV_PATH: /my/directory/
WEBDAV_USERNAME: user
WEBDAV_PASSWORD: password
volumes:
- data:/backup/my-app-backup:ro
- /var/run/docker.sock:/var/run/docker.sock:ro
volumes:
data:
```
## Backing up to SSH
```yml
version: '3'
services:
# ... define other services using the `data` volume here
backup:
image: offen/docker-volume-backup:v2
environment:
SSH_HOST_NAME: server.local
SSH_PORT: 2222
SSH_USER: user
SSH_REMOTE_PATH: /data
volumes:
- data:/backup/my-app-backup:ro
- /var/run/docker.sock:/var/run/docker.sock:ro
- /path/to/private_key:/root/.ssh/id_rsa
volumes:
data:
```
## Backing up to Azure Blob Storage
```yml
version: '3'
services:
# ... define other services using the `data` volume here
backup:
image: offen/docker-volume-backup:v2
environment:
AZURE_STORAGE_CONTAINER_NAME: backup-container
AZURE_STORAGE_ACCOUNT_NAME: account-name
AZURE_STORAGE_PRIMARY_ACCOUNT_KEY: Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==
volumes:
- data:/backup/my-app-backup:ro
- /var/run/docker.sock:/var/run/docker.sock:ro
volumes:
data:
```
## Backing up to Dropbox
See [Dropbox Setup](#setup-dropbox-storage-backend) for how to get the appropriate environment values.
```yml
version: '3'
services:
# ... define other services using the `data` volume here
backup:
image: offen/docker-volume-backup:v2
environment:
DROPBOX_REFRESH_TOKEN: REFRESH_KEY # replace
DROPBOX_APP_KEY: APP_KEY # replace
DROPBOX_APP_SECRET: APP_SECRET # replace
DROPBOX_REMOTE_PATH: /Apps/my-test-app/some_subdir # replace
volumes:
- data:/backup/my-app-backup:ro
- /var/run/docker.sock:/var/run/docker.sock:ro
volumes:
data:
```
## Backing up locally
```yml
version: '3'
services:
# ... define other services using the `data` volume here
backup:
image: offen/docker-volume-backup:v2
environment:
BACKUP_FILENAME: backup-%Y-%m-%dT%H-%M-%S.tar.gz
BACKUP_LATEST_SYMLINK: backup-latest.tar.gz
volumes:
- data:/backup/my-app-backup:ro
- /var/run/docker.sock:/var/run/docker.sock:ro
- ${HOME}/backups:/archive
volumes:
data:
```
## Backing up to AWS S3 as well as locally
```yml
version: '3'
services:
# ... define other services using the `data` volume here
backup:
image: offen/docker-volume-backup:v2
environment:
AWS_S3_BUCKET_NAME: backup-bucket
AWS_ACCESS_KEY_ID: AKIAIOSFODNN7EXAMPLE
AWS_SECRET_ACCESS_KEY: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
volumes:
- data:/backup/my-app-backup:ro
- /var/run/docker.sock:/var/run/docker.sock:ro
- ${HOME}/backups:/archive
volumes:
data:
```
## Running on a custom cron schedule
```yml
version: '3'
services:
# ... define other services using the `data` volume here
backup:
image: offen/docker-volume-backup:v2
environment:
# take a backup on every hour
BACKUP_CRON_EXPRESSION: "0 * * * *"
AWS_S3_BUCKET_NAME: backup-bucket
AWS_ACCESS_KEY_ID: AKIAIOSFODNN7EXAMPLE
AWS_SECRET_ACCESS_KEY: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
volumes:
- data:/backup/my-app-backup:ro
- /var/run/docker.sock:/var/run/docker.sock:ro
volumes:
data:
```
## Rotating away backups that are older than 7 days
```yml
version: '3'
services:
# ... define other services using the `data` volume here
backup:
image: offen/docker-volume-backup:v2
environment:
AWS_S3_BUCKET_NAME: backup-bucket
AWS_ACCESS_KEY_ID: AKIAIOSFODNN7EXAMPLE
AWS_SECRET_ACCESS_KEY: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
BACKUP_FILENAME: backup-%Y-%m-%dT%H-%M-%S.tar.gz
BACKUP_PRUNING_PREFIX: backup-
BACKUP_RETENTION_DAYS: 7
volumes:
- data:/backup/my-app-backup:ro
- /var/run/docker.sock:/var/run/docker.sock:ro
volumes:
data:
```
## Encrypting your backups using GPG
```yml
version: '3'
services:
# ... define other services using the `data` volume here
backup:
image: offen/docker-volume-backup:v2
environment:
AWS_S3_BUCKET_NAME: backup-bucket
AWS_ACCESS_KEY_ID: AKIAIOSFODNN7EXAMPLE
AWS_SECRET_ACCESS_KEY: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
GPG_PASSPHRASE: somesecretstring
volumes:
- data:/backup/my-app-backup:ro
- /var/run/docker.sock:/var/run/docker.sock:ro
volumes:
data:
```
## Using mysqldump to prepare the backup
```yml
version: '3'
services:
database:
image: mariadb:latest
labels:
- docker-volume-backup.archive-pre=/bin/sh -c 'mysqldump -psecret --all-databases > /tmp/dumps/dump.sql'
volumes:
- data:/tmp/dumps
backup:
image: offen/docker-volume-backup:v2
environment:
BACKUP_FILENAME: db.tar.gz
BACKUP_CRON_EXPRESSION: "0 2 * * *"
volumes:
- ./local:/archive
- data:/backup/data:ro
- /var/run/docker.sock:/var/run/docker.sock
volumes:
data:
```
## Running multiple instances in the same setup
```yml
version: '3'
services:
# ... define other services using the `data_1` and `data_2` volumes here
backup_1: &backup_service
image: offen/docker-volume-backup:v2
environment: &backup_environment
BACKUP_CRON_EXPRESSION: "0 2 * * *"
AWS_S3_BUCKET_NAME: backup-bucket
AWS_ACCESS_KEY_ID: AKIAIOSFODNN7EXAMPLE
AWS_SECRET_ACCESS_KEY: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
# Label the container using the `data_1` volume as `docker-volume-backup.stop-during-backup=service1`
BACKUP_STOP_CONTAINER_LABEL: service1
volumes:
- data_1:/backup/data-1-backup:ro
- /var/run/docker.sock:/var/run/docker.sock:ro
backup_2:
<<: *backup_service
environment:
<<: *backup_environment
# Label the container using the `data_2` volume as `docker-volume-backup.stop-during-backup=service2`
BACKUP_CRON_EXPRESSION: "0 3 * * *"
BACKUP_STOP_CONTAINER_LABEL: service2
volumes:
- data_2:/backup/data-2-backup:ro
- /var/run/docker.sock:/var/run/docker.sock:ro
volumes:
data_1:
data_2:
```

419
docs/reference/index.md Normal file
View File

@@ -0,0 +1,419 @@
---
title: Configuration Reference
layout: default
nav_order: 2
---
# Configuration reference
Backup targets, schedule and retention are configured using environment variables.
{: .note }
You can use any environment variable from below also with a `_FILE` suffix to be able to load the value from a file.
This is typically useful when using [Docker Secrets](https://docs.docker.com/engine/swarm/secrets/) or similar.
Note that secrets will not be trimmed of leading or trailing whitespace.
{: .warning }
In case you encounter double quoted values in your runtime configuration you might still be using an [older version of `docker-compose`][compose-issue].
You can work around this by either updating `docker-compose` or unquoting your configuration values.
You can populate below template according to your requirements and use it as your `env_file`:
{% raw %}
```
########### BACKUP SCHEDULE
# Backups run on the given cron schedule in `busybox` flavor. If no
# value is set, `@daily` will be used. If you do not want the cron
# to ever run, use `0 0 5 31 2 ?`.
# BACKUP_CRON_EXPRESSION="0 2 * * *"
# The compression algorithm used in conjunction with tar.
# Valid options are: "gz" (Gzip) and "zst" (Zstd).
# Note that the selection affects the file extension.
# BACKUP_COMPRESSION="gz"
# Parallelism level for "gz" (Gzip) compression.
# Defines how many blocks of data are concurrently processed.
# Higher values result in faster compression. No effect on decompression.
# Default = 1. Setting this to 0 will use all available threads.
# GZIP_PARALLELISM=1
# The name of the backup file including the extension.
# Format verbs will be replaced as in `strftime`. Omitting them
# will result in the same filename for every backup run, which means previous
# versions will be overwritten on subsequent runs.
# Extension can be defined literally or via "{{ .Extension }}" template,
# in which case it will become either "tar.gz" or "tar.zst" (depending
# on your BACKUP_COMPRESSION setting).
# The default results in filenames like: `backup-2021-08-29T04-00-00.tar.gz`.
# BACKUP_FILENAME="backup-%Y-%m-%dT%H-%M-%S.{{ .Extension }}"
# Setting BACKUP_FILENAME_EXPAND to true allows for environment variable
# placeholders in BACKUP_FILENAME, BACKUP_LATEST_SYMLINK and in
# BACKUP_PRUNING_PREFIX that will get expanded at runtime,
# e.g. `backup-$HOSTNAME-%Y-%m-%dT%H-%M-%S.tar.gz`. Expansion happens before
# interpolating strftime tokens. It is disabled by default.
# Please note that you will need to escape the `$` when providing the value
# in a docker-compose.yml file, i.e. using $$VAR instead of $VAR.
# BACKUP_FILENAME_EXPAND="true"
# When storing local backups, a symlink to the latest backup can be created
# in case a value is given for this key. This has no effect on remote backups.
# BACKUP_LATEST_SYMLINK="backup.latest.tar.gz"
# ************************************************************************
# The BACKUP_FROM_SNAPSHOT option has been deprecated and will be removed
# in the next major version. Please use exec-pre and exec-post
# as documented below instead.
# ************************************************************************
# Whether to copy the content of backup folder before creating the tar archive.
# In the rare scenario where the content of the source backup volume is continuously
# updating, but we do not wish to stop the container while performing the backup,
# this setting can be used to ensure the integrity of the tar.gz file.
# BACKUP_FROM_SNAPSHOT="false"
# By default, the `/backup` directory inside the container will be backed up.
# In case you need to use a custom location, set `BACKUP_SOURCES`.
# BACKUP_SOURCES="/other/location"
# When given, all files in BACKUP_SOURCES whose full path matches the given
# regular expression will be excluded from the archive. Regular Expressions
# can be used as from the Go standard library https://pkg.go.dev/regexp
# BACKUP_EXCLUDE_REGEXP="\.log$"
# Exclude one or many storage backends from the pruning process.
# E.g. with one backend excluded: BACKUP_SKIP_BACKENDS_FROM_PRUNE=s3
# E.g. with multiple backends excluded: BACKUP_SKIP_BACKENDS_FROM_PRUNE=s3,webdav
# Available backends are: S3, WebDAV, SSH, Local, Dropbox, Azure
# Note: The names of the backends are case-insensitive.
# Default: All backends get pruned.
# BACKUP_SKIP_BACKENDS_FROM_PRUNE=
########### BACKUP STORAGE
# The name of the remote bucket that should be used for storing backups. If
# this is not set, no remote backups will be stored.
# AWS_S3_BUCKET_NAME="backup-bucket"
# If you want to store the backup in a non-root location on your bucket
# you can provide a path. The path must not contain a leading slash.
# AWS_S3_PATH="my/backup/location"
# Define credentials for authenticating against the backup storage and a bucket
# name. Although all of these keys are `AWS`-prefixed, the setup can be used
# with any S3 compatible storage.
# AWS_ACCESS_KEY_ID="<xxx>"
# AWS_SECRET_ACCESS_KEY="<xxx>"
# Instead of providing static credentials, you can also use IAM instance profiles
# or similar to provide authentication. Some possible configuration options on AWS:
# - EC2: http://169.254.169.254
# - ECS: http://169.254.170.2
# AWS_IAM_ROLE_ENDPOINT="http://169.254.169.254"
# This is the FQDN of your storage server, e.g. `storage.example.com`.
# Do not set this when working against AWS S3 (the default value is
# `s3.amazonaws.com`). If you need to set a specific (non-https) protocol, you
# will need to use the option below.
# AWS_ENDPOINT="storage.example.com"
# The protocol to be used when communicating with your storage server.
# Defaults to "https". You can set this to "http" when communicating with
# a different Docker container on the same host for example.
# AWS_ENDPOINT_PROTO="https"
# Setting this variable to `true` will disable verification of
# SSL certificates for AWS_ENDPOINT. You shouldn't use this unless you use
# self-signed certificates for your remote storage backend. This can only be
# used when AWS_ENDPOINT_PROTO is set to `https`.
# AWS_ENDPOINT_INSECURE="true"
# If you wish to use self-signed certificates for your S3 server, you can pass
# the location of a PEM encoded CA certificate and it will be used for
# validating your certificates.
# Alternatively, pass a PEM encoded string containing the certificate.
# AWS_ENDPOINT_CA_CERT="/path/to/cert.pem"
# Setting this variable will change the S3 storage class header.
# Defaults to "STANDARD", you can set this value according to your needs.
# AWS_STORAGE_CLASS="GLACIER"
# Setting this variable will change the S3 default part size for the copy step.
# This value is useful when you want to upload large files.
# NB: When using Scaleway as your S3 provider, be aware that the parts counter
# is limited to 1.000, whereas MinIO uses a hard-coded value of 10.000. As a
# workaround, try setting a higher value.
# Defaults to "16" (MB) if unset (from minio), you can set this value according to your needs.
# The unit is in MB and an integer.
# AWS_PART_SIZE=16
# You can also backup files to any WebDAV server:
# The URL of the remote WebDAV server
# WEBDAV_URL="https://webdav.example.com"
# The directory to place the backups in on the WebDAV server.
# If the path is not present on the server it will be created.
# WEBDAV_PATH="/my/directory/"
# The username for the WebDAV server
# WEBDAV_USERNAME="user"
# The password for the WebDAV server
# WEBDAV_PASSWORD="password"
# Setting this variable to `true` will disable verification of
# SSL certificates for WEBDAV_URL. You shouldn't use this unless you use
# self-signed certificates for your remote storage backend.
# WEBDAV_URL_INSECURE="true"
# You can also backup files to any SSH server:
# The URL of the remote SSH server
# SSH_HOST_NAME="server.local"
# The port of the remote SSH server
# Optional variable default value is `22`
# SSH_PORT=2222
# The directory to place the backups in on the SSH server.
# SSH_REMOTE_PATH="/my/directory/"
# The username for the SSH server
# SSH_USER="user"
# The password for the SSH server
# SSH_PASSWORD="password"
# The private key path in container for SSH server
# Default value: /root/.ssh/id_rsa
# If a file is mounted to the /root/.ssh/id_rsa path, it will be used. Non-RSA keys will
# also work.
# SSH_IDENTITY_FILE="/root/.ssh/id_rsa"
# The passphrase for the identity file
# SSH_IDENTITY_PASSPHRASE="pass"
# The credential's account name when using Azure Blob Storage. This has to be
# set when using Azure Blob Storage.
# AZURE_STORAGE_ACCOUNT_NAME="account-name"
# The credential's primary account key when using Azure Blob Storage. If this
# is not given, the command tries to fall back to using a managed identity.
# AZURE_STORAGE_PRIMARY_ACCOUNT_KEY="<xxx>"
# The container name when using Azure Blob Storage.
# AZURE_STORAGE_CONTAINER_NAME="container-name"
# The service endpoint when using Azure Blob Storage. This is a template that
# can be passed the account name as shown in the default value below.
# AZURE_STORAGE_ENDPOINT="https://{{ .AccountName }}.blob.core.windows.net/"
# Absolute remote path in your Dropbox where the backups shall be stored.
# Note: Use your app's subpath in Dropbox, if it doesn't have global access.
# Consult the README for further information.
# DROPBOX_REMOTE_PATH="/my/directory"
# Number of concurrent chunked uploads for Dropbox.
# Values above 6 usually result in no enhancements.
# DROPBOX_CONCURRENCY_LEVEL="6"
# App key and app secret from your app created at https://www.dropbox.com/developers/apps/info
# DROPBOX_APP_KEY=""
# DROPBOX_APP_SECRET=""
# Refresh token to request new short-lived tokens (OAuth2). Consult README to see how to get one.
# DROPBOX_REFRESH_TOKEN=""
# In addition to storing backups remotely, you can also keep local copies.
# Pass a container-local path to store your backups if needed. You also need to
# mount a local folder or Docker volume into that location (`/archive`
# by default) when running the container. In case the specified directory does
# not exist (nothing is mounted) in the container when the backup is running,
# local backups will be skipped. Local paths are also subject to pruning of
# old backups as defined below.
# BACKUP_ARCHIVE="/archive"
########### BACKUP PRUNING
# **IMPORTANT, PLEASE READ THIS BEFORE USING THIS FEATURE**:
# The mechanism used for pruning old backups is not very sophisticated
# and applies its rules to **all files in the target directory** by default,
# which means that if you are storing your backups next to other files,
# these might become subject to deletion too. When using this option
# make sure the backup files are stored in a directory used exclusively
# for such files, or to configure BACKUP_PRUNING_PREFIX to limit
# removal to certain files.
# Define this value to enable automatic rotation of old backups. The value
# declares the number of days for which a backup is kept.
# BACKUP_RETENTION_DAYS="7"
# In case the duration a backup takes fluctuates noticeably in your setup
# you can adjust this setting to make sure there are no race conditions
# between the backup finishing and the rotation not deleting backups that
# sit on the edge of the time window. Set this value to a duration
# that is expected to be bigger than the maximum difference of backups.
# Valid values have a suffix of (s)econds, (m)inutes or (h)ours. By default,
# one minute is used.
# BACKUP_PRUNING_LEEWAY="1m"
# In case your target bucket or directory contains other files than the ones
# managed by this container, you can limit the scope of rotation by setting
# a prefix value. This would usually be the non-parametrized part of your
# BACKUP_FILENAME. E.g. if BACKUP_FILENAME is `db-backup-%Y-%m-%dT%H-%M-%S.tar.gz`,
# you can set BACKUP_PRUNING_PREFIX to `db-backup-` and make sure
# unrelated files are not affected by the rotation mechanism.
# BACKUP_PRUNING_PREFIX="backup-"
########### BACKUP ENCRYPTION
# Backups can be encrypted using gpg in case a passphrase is given.
# GPG_PASSPHRASE="<xxx>"
########### STOPPING CONTAINERS DURING BACKUP
# Containers can be stopped by applying a
# `docker-volume-backup.stop-during-backup` label. By default, all containers
# that are labeled with `true` will be stopped. If you need more fine grained
# control (e.g. when running multiple containers based on this image), you can
# override this default by specifying a different value here.
# BACKUP_STOP_CONTAINER_LABEL="service1"
########### EXECUTING COMMANDS IN CONTAINERS PRE/POST BACKUP
# It is possible to define commands to be run in any container before and after
# a backup is conducted. The commands themselves are defined in labels like
# `docker-volume-backup.archive-pre=/bin/sh -c 'mysqldump [options] > dump.sql'`.
# Several options exist for controlling this feature:
# By default, any output of such a command is suppressed. If this value
# is configured to be "true", command execution output will be forwarded to
# the backup container's stdout and stderr.
# EXEC_FORWARD_OUTPUT="true"
# Without any further configuration, all commands defined in labels will be
# run before and after a backup. If you need more fine grained control, you
# can use this option to set a label that will be used for narrowing down
# the set of eligible containers. When set, an eligible container will also need
# to be labeled as `docker-volume-backup.exec-label=database`.
# EXEC_LABEL="database"
########### NOTIFICATIONS
# Notifications (email, Slack, etc.) can be sent out when a backup run finishes.
# Configuration is provided as a comma-separated list of URLs as consumed
# by `shoutrrr`: https://containrrr.dev/shoutrrr/0.7/services/overview/
# The content of such notifications can be customized. Dedicated documentation
# on how to do this can be found in the README. When providing multiple URLs or
# a URL that contains a comma, the values can be URL encoded to avoid ambiguities.
# The below URL demonstrates how to send an email using the provided SMTP
# configuration and credentials.
# NOTIFICATION_URLS=smtp://username:password@host:587/?fromAddress=sender@example.com&toAddresses=recipient@example.com
# By default, notifications would only be sent out when a backup run fails
# To receive notifications for every run, set `NOTIFICATION_LEVEL` to `info`
# instead of the default `error`.
# NOTIFICATION_LEVEL="error"
########### DOCKER HOST
# If you are interfacing with Docker via TCP you can set the Docker host here
# instead of mounting the Docker socket as a volume. This is unset by default.
# DOCKER_HOST="tcp://docker_socket_proxy:2375"
########### LOCK_TIMEOUT
# In the case of overlapping cron schedules run by the same container,
# subsequent invocations will wait for previous runs to finish before starting.
# By default, this will time out and fail in case the lock could not be acquired
# after 60 minutes. In case you need to adjust this timeout, supply a duration
# value as per https://pkg.go.dev/time#ParseDuration to `LOCK_TIMEOUT`
# LOCK_TIMEOUT="60m"
########### EMAIL NOTIFICATIONS
# ************************************************************************
# Providing notification configuration like this has been deprecated
# and will be removed in the next major version. Please use NOTIFICATION_URLS
# as documented above instead.
# ************************************************************************
# In case SMTP credentials are provided, notification emails can be sent out when
# a backup run finishes. These emails will contain the start time, the error
# message on failure and all prior log output.
# The recipient(s) of the notification. Supply a comma separated list
# of addresses if you want to notify multiple recipients. If this is
# not set, no emails will be sent.
# EMAIL_NOTIFICATION_RECIPIENT="you@example.com"
# The "From" header of the sent email. Defaults to `noreply@nohost`.
# EMAIL_NOTIFICATION_SENDER="no-reply@example.com"
# Configuration and credentials for the SMTP server to be used.
# EMAIL_SMTP_PORT defaults to 587.
# EMAIL_SMTP_HOST="posteo.de"
# EMAIL_SMTP_PASSWORD="<xxx>"
# EMAIL_SMTP_USERNAME="no-reply@example.com"
# EMAIL_SMTP_PORT="<port>"
```
{% endraw %}
[compose-issue]: https://github.com/docker/compose/issues/2854

63
go.mod
View File

@@ -1,47 +1,57 @@
module github.com/offen/docker-volume-backup
go 1.19
go 1.21
require (
github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.2.0
github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v0.6.1
github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.4.0
github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v1.2.0
github.com/containrrr/shoutrrr v0.7.1
github.com/cosiner/argv v0.1.0
github.com/docker/docker v20.10.24+incompatible
github.com/docker/docker v24.0.7+incompatible
github.com/gofrs/flock v0.8.1
github.com/kelseyhightower/envconfig v1.4.0
github.com/klauspost/compress v1.17.4
github.com/leekchan/timeutil v0.0.0-20150802142658-28917288c48d
github.com/minio/minio-go/v7 v7.0.44
github.com/otiai10/copy v1.10.0
github.com/pkg/sftp v1.13.5
github.com/sirupsen/logrus v1.9.0
github.com/studio-b12/gowebdav v0.0.0-20220128162035-c7b1ff8a5e62
golang.org/x/crypto v0.3.0
golang.org/x/sync v0.1.0
github.com/minio/minio-go/v7 v7.0.65
github.com/offen/envconfig v1.5.0
github.com/otiai10/copy v1.14.0
github.com/pkg/sftp v1.13.6
github.com/studio-b12/gowebdav v0.9.0
golang.org/x/crypto v0.16.0
golang.org/x/oauth2 v0.15.0
golang.org/x/sync v0.5.0
)
require (
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.1.4 // indirect
github.com/Azure/azure-sdk-for-go/sdk/internal v1.0.1 // indirect
github.com/AzureAD/microsoft-authentication-library-for-go v0.7.0 // indirect
github.com/cloudflare/circl v1.3.3 // indirect
github.com/golang-jwt/jwt/v5 v5.0.0 // indirect
github.com/golang/protobuf v1.5.3 // indirect
google.golang.org/appengine v1.6.7 // indirect
google.golang.org/protobuf v1.31.0 // indirect
)
require (
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.8.0 // indirect
github.com/Azure/azure-sdk-for-go/sdk/internal v1.3.0 // indirect
github.com/AzureAD/microsoft-authentication-library-for-go v1.1.1 // indirect
github.com/Microsoft/go-winio v0.5.2 // indirect
github.com/ProtonMail/go-crypto v0.0.0-20230717121422-5aa5874ade95
github.com/docker/distribution v2.8.2+incompatible // indirect
github.com/docker/go-connections v0.4.0 // indirect
github.com/docker/go-units v0.4.0 // indirect
github.com/dustin/go-humanize v1.0.0 // indirect
github.com/dropbox/dropbox-sdk-go-unofficial/v6 v6.0.5
github.com/dustin/go-humanize v1.0.1 // indirect
github.com/fatih/color v1.13.0 // indirect
github.com/gogo/protobuf v1.3.2 // indirect
github.com/golang-jwt/jwt/v4 v4.4.2 // indirect
github.com/google/uuid v1.3.0 // indirect
github.com/google/uuid v1.3.1 // indirect
github.com/json-iterator/go v1.1.12 // indirect
github.com/klauspost/compress v1.15.12 // indirect
github.com/klauspost/cpuid/v2 v2.2.1 // indirect
github.com/klauspost/cpuid/v2 v2.2.5 // indirect
github.com/klauspost/pgzip v1.2.6
github.com/kr/fs v0.1.0 // indirect
github.com/kylelemons/godebug v1.1.0 // indirect
github.com/mattn/go-colorable v0.1.13 // indirect
github.com/mattn/go-isatty v0.0.16 // indirect
github.com/minio/md5-simd v1.1.2 // indirect
github.com/minio/sha256-simd v1.0.0 // indirect
github.com/minio/sha256-simd v1.0.1 // indirect
github.com/moby/term v0.0.0-20200312100748-672ec06f55cd // indirect
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
github.com/modern-go/reflect2 v1.0.2 // indirect
@@ -49,12 +59,13 @@ require (
github.com/niemeyer/pretty v0.0.0-20200227124842-a10e7caefd8e // indirect
github.com/opencontainers/go-digest v1.0.0 // indirect
github.com/opencontainers/image-spec v1.0.3-0.20211202183452-c5a74bcca799 // indirect
github.com/pkg/browser v0.0.0-20210115035449-ce105d075bb4 // indirect
github.com/pkg/browser v0.0.0-20210911075715-681adbf594b8 // indirect
github.com/pkg/errors v0.9.1 // indirect
github.com/rs/xid v1.4.0 // indirect
golang.org/x/net v0.7.0 // indirect
golang.org/x/sys v0.7.0 // indirect
golang.org/x/text v0.7.0 // indirect
github.com/rs/xid v1.5.0 // indirect
github.com/sirupsen/logrus v1.9.3 // indirect
golang.org/x/net v0.19.0 // indirect
golang.org/x/sys v0.15.0 // indirect
golang.org/x/text v0.14.0 // indirect
gopkg.in/check.v1 v1.0.0-20200227125254-8fa46927fb4f // indirect
gopkg.in/ini.v1 v1.67.0 // indirect
gotest.tools/v3 v3.0.3 // indirect

142
go.sum
View File

@@ -181,24 +181,28 @@ cloud.google.com/go/webrisk v1.5.0/go.mod h1:iPG6fr52Tv7sGk0H6qUFzmL3HHZev1htXuW
cloud.google.com/go/workflows v1.6.0/go.mod h1:6t9F5h/unJz41YqfBmqSASJSXccBLtD1Vwf+KmJENM0=
cloud.google.com/go/workflows v1.7.0/go.mod h1:JhSrZuVZWuiDfKEFxU0/F1PQjmpnpcoISEXH2bcHC3M=
dmitri.shuralyov.com/gpu/mtl v0.0.0-20190408044501-666a987793e9/go.mod h1:H6x//7gZCb22OMCxBHrMx7a5I7Hp++hsVxbQ4BYO7hU=
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.1.4 h1:pqrAR74b6EoR4kcxF7L7Wg2B8Jgil9UUZtMvxhEFqWo=
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.1.4/go.mod h1:uGG2W01BaETf0Ozp+QxxKJdMBNRWPdstHG0Fmdwn1/U=
github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.2.0 h1:t/W5MYAuQy81cvM8VUNfRLzhtKpXhVUAN7Cd7KVbTyc=
github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.2.0/go.mod h1:NBanQUfSWiWn3QEpWDTCU0IjBECKOYvl2R8xdRtMtiM=
github.com/Azure/azure-sdk-for-go/sdk/internal v1.0.1 h1:XUNQ4mw+zJmaA2KXzP9JlQiecy1SI+Eog7xVkPiqIbg=
github.com/Azure/azure-sdk-for-go/sdk/internal v1.0.1/go.mod h1:eWRD7oawr1Mu1sLCawqVc0CUiF43ia3qQMxLscsKQ9w=
github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v0.6.1 h1:YvQv9Mz6T8oR5ypQOL6erY0Z5t71ak1uHV4QFokCOZk=
github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v0.6.1/go.mod h1:c6WvOhtmjNUWbLfOG1qxM/q0SPvQNSVJvolm+C52dIU=
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.8.0 h1:9kDVnTz3vbfweTqAUmk/a/pH5pWFCHtvRpHYC0G/dcA=
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.8.0/go.mod h1:3Ug6Qzto9anB6mGlEdgYMDF5zHQ+wwhEaYR4s17PHMw=
github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.4.0 h1:BMAjVKJM0U/CYF27gA0ZMmXGkOcvfFtD0oHVZ1TIPRI=
github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.4.0/go.mod h1:1fXstnBMas5kzG+S3q8UoJcmyU6nUeunJcMDHcRYHhs=
github.com/Azure/azure-sdk-for-go/sdk/internal v1.3.0 h1:sXr+ck84g/ZlZUOZiNELInmMgOsuGwdjjVkEIde0OtY=
github.com/Azure/azure-sdk-for-go/sdk/internal v1.3.0/go.mod h1:okt5dMMTOFjX/aovMlrjvvXoPMBVSPzk9185BT0+eZM=
github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/storage/armstorage v1.2.0 h1:Ma67P/GGprNwsslzEH6+Kb8nybI8jpDTm4Wmzu2ReK8=
github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/storage/armstorage v1.2.0/go.mod h1:c+Lifp3EDEamAkPVzMooRNOK6CZjNSdEnf1A7jsI9u4=
github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v1.2.0 h1:gggzg0SUMs6SQbEw+3LoSsYf9YMjkupeAnHMX8O9mmY=
github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v1.2.0/go.mod h1:+6KLcKIVgxoBDMqMO/Nvy7bZ9a0nbU3I1DtFQK3YvB4=
github.com/Azure/go-ansiterm v0.0.0-20170929234023-d6e3b3328b78 h1:w+iIsaOQNcT7OZ575w+acHgRric5iCyQh+xv+KJ4HB8=
github.com/Azure/go-ansiterm v0.0.0-20170929234023-d6e3b3328b78/go.mod h1:LmzpDX56iTiv29bbRTIsUNlaFfuhWRQBWjQdVyAevI8=
github.com/AzureAD/microsoft-authentication-library-for-go v0.7.0 h1:VgSJlZH5u0k2qxSpqyghcFQKmvYckj46uymKK5XzkBM=
github.com/AzureAD/microsoft-authentication-library-for-go v0.7.0/go.mod h1:BDJ5qMFKx9DugEg3+uQSDCdbYPr5s9vBTrL9P8TpqOU=
github.com/AzureAD/microsoft-authentication-library-for-go v1.1.1 h1:WpB/QDNLpMw72xHJc34BNNykqSOeEJDAWkhf0u12/Jk=
github.com/AzureAD/microsoft-authentication-library-for-go v1.1.1/go.mod h1:wP83P5OoQ5p6ip3ScPr0BAq0BvuPAvacpEuSzyouqAI=
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
github.com/BurntSushi/xgb v0.0.0-20160522181843-27f122750802/go.mod h1:IVnqGOEym/WlBOVXweHU+Q+/VP0lqqI8lqeDx9IjBqo=
github.com/DataDog/datadog-go v3.2.0+incompatible/go.mod h1:LButxg5PwREeZtORoXG3tL4fMGNddJ+vMq1mwgfaqoQ=
github.com/Microsoft/go-winio v0.5.2 h1:a9IhgEQBCUEk6QCdml9CiJGhAws+YwffDHEMp1VMrpA=
github.com/Microsoft/go-winio v0.5.2/go.mod h1:WpS1mjBmmwHBEWmogvA2mj8546UReBk4v8QkMxJ6pZY=
github.com/OneOfOne/xxhash v1.2.2/go.mod h1:HSdplMjZKSmBqAxg5vPj2TmRDmfkzw+cTzAElWljhcU=
github.com/ProtonMail/go-crypto v0.0.0-20230717121422-5aa5874ade95 h1:KLq8BE0KwCL+mmXnjLWEAOYO+2l2AE4YMmqG1ZpZHBs=
github.com/ProtonMail/go-crypto v0.0.0-20230717121422-5aa5874ade95/go.mod h1:EjAoLdwvbIOoOQr3ihjnSoLZRtE8azugULFRteWMNc0=
github.com/alecthomas/template v0.0.0-20160405071501-a0175ee3bccc/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc=
github.com/alecthomas/template v0.0.0-20190718012654-fb15b899a751/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc=
github.com/alecthomas/units v0.0.0-20151022065526-2efee857e7cf/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0=
@@ -216,6 +220,7 @@ github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973/go.mod h1:Dwedo/Wpr24
github.com/beorn7/perks v1.0.0/go.mod h1:KWe93zE9D1o94FZ5RNwFwVgaQK1VOXiVxmqh+CedLV8=
github.com/beorn7/perks v1.0.1/go.mod h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6rlkpw=
github.com/bgentry/speakeasy v0.1.0/go.mod h1:+zsyZBPWlz7T6j88CTgSN5bM796AkVf0kBD4zp0CCIs=
github.com/bwesterb/go-ristretto v1.2.3/go.mod h1:fUIoIZaG73pV5biE2Blr2xEzDoMj7NFEuV9ekS419A0=
github.com/census-instrumentation/opencensus-proto v0.2.1/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU=
github.com/cespare/xxhash v1.1.0/go.mod h1:XrSqR1VqqWfGrhpAt58auRo0WTKS1nRRg3ghfAqPWnc=
github.com/cespare/xxhash/v2 v2.1.1/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
@@ -225,6 +230,8 @@ github.com/chzyer/test v0.0.0-20180213035817-a1ea475d72b1/go.mod h1:Q3SI9o4m/ZMn
github.com/circonus-labs/circonus-gometrics v2.3.1+incompatible/go.mod h1:nmEj6Dob7S7YxXgwXpfOuvO54S+tGdZdw9fuRZt25Ag=
github.com/circonus-labs/circonusllhist v0.1.3/go.mod h1:kMXHVDlOchFAehlya5ePtbp5jckzBHf4XRpQvBOLI+I=
github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw=
github.com/cloudflare/circl v1.3.3 h1:fE/Qz0QdIGqeWfnwq0RE0R7MI51s0M2E4Ga9kq5AEMs=
github.com/cloudflare/circl v1.3.3/go.mod h1:5XYMA4rFBvNIrhs50XuiBJ15vF2pZn4nnUKZrLbUZFA=
github.com/cncf/udpa/go v0.0.0-20191209042840-269d4d468f6f/go.mod h1:M8M6+tZqaGXZJjfX53e64911xZQV5JYwmTeXPW+k8Sc=
github.com/cncf/udpa/go v0.0.0-20200629203442-efcf912fb354/go.mod h1:WmhPx2Nbnhtbo57+VJT5O0JRkEi1Wbu0z5j0R8u5Hbk=
github.com/cncf/udpa/go v0.0.0-20201120205902-5459f2c99403/go.mod h1:WmhPx2Nbnhtbo57+VJT5O0JRkEi1Wbu0z5j0R8u5Hbk=
@@ -245,17 +252,21 @@ github.com/creack/pty v1.1.9/go.mod h1:oKZEueFk5CKHvIhNR5MUki03XCEU+Q6VDXinZuGJ3
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/dnaeon/go-vcr v1.1.0 h1:ReYa/UBrRyQdant9B4fNHGoCNKw6qh6P0fsdGmZpR7c=
github.com/dnaeon/go-vcr v1.2.0 h1:zHCHvJYTMh1N7xnV7zf1m1GPBF9Ad0Jk/whtQ1663qI=
github.com/dnaeon/go-vcr v1.2.0/go.mod h1:R4UdLID7HZT3taECzJs4YgbbH6PIGXB6W/sc5OLb6RQ=
github.com/docker/distribution v2.8.2+incompatible h1:T3de5rq0dB1j30rp0sA2rER+m322EBzniBPB6ZIzuh8=
github.com/docker/distribution v2.8.2+incompatible/go.mod h1:J2gT2udsDAN96Uj4KfcMRqY0/ypR+oyYUYmja8H+y+w=
github.com/docker/docker v20.10.24+incompatible h1:Ugvxm7a8+Gz6vqQYQQ2W7GYq5EUPaAiuPgIfVyI3dYE=
github.com/docker/docker v20.10.24+incompatible/go.mod h1:eEKB0N0r5NX/I1kEveEz05bcu8tLC/8azJZsviup8Sk=
github.com/docker/docker v24.0.7+incompatible h1:Wo6l37AuwP3JaMnZa226lzVXGA3F9Ig1seQen0cKYlM=
github.com/docker/docker v24.0.7+incompatible/go.mod h1:eEKB0N0r5NX/I1kEveEz05bcu8tLC/8azJZsviup8Sk=
github.com/docker/go-connections v0.4.0 h1:El9xVISelRB7BuFusrZozjnkIM5YnzCViNKohAFqRJQ=
github.com/docker/go-connections v0.4.0/go.mod h1:Gbd7IOopHjR8Iph03tsViu4nIes5XhDvyHbTtUxmeec=
github.com/docker/go-units v0.4.0 h1:3uh0PgVws3nIA0Q+MwDC8yjEPf9zjRfZZWXZYDct3Tw=
github.com/docker/go-units v0.4.0/go.mod h1:fgPhTUdO+D/Jk86RDLlptpiXQzgHJF7gydDDbaIK4Dk=
github.com/dustin/go-humanize v1.0.0 h1:VSnTsYCnlFHaM2/igO1h6X3HA71jcobQuxemgkq4zYo=
github.com/dropbox/dropbox-sdk-go-unofficial/v6 v6.0.5 h1:FT+t0UEDykcor4y3dMVKXIiWJETBpRgERYTGlmMd7HU=
github.com/dropbox/dropbox-sdk-go-unofficial/v6 v6.0.5/go.mod h1:rSS3kM9XMzSQ6pw91Qgd6yB5jdt70N4OdtrAf74As5M=
github.com/dustin/go-humanize v1.0.0/go.mod h1:HtrtbFcZ19U5GC7JDqmcUSB87Iq5E25KnS6fMYU6eOk=
github.com/dustin/go-humanize v1.0.1 h1:GzkhY7T5VNhEkwH0PVJgjz+fX1rhBrR7pRT3mDkpeCY=
github.com/dustin/go-humanize v1.0.1/go.mod h1:Mu1zIs6XwVuF/gI1OepvI0qD18qycQx+mFykh5fBlto=
github.com/envoyproxy/go-control-plane v0.9.0/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4=
github.com/envoyproxy/go-control-plane v0.9.1-0.20191026205805-5f8ba28d4473/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4=
github.com/envoyproxy/go-control-plane v0.9.4/go.mod h1:6rpuAdCZL397s3pYoYcLgu1mIlRU8Am5FuJP05cCM98=
@@ -294,8 +305,8 @@ github.com/gofrs/flock v0.8.1/go.mod h1:F1TvTiK9OcQqauNUHlbJvyl9Qa1QvF/gOUDKA14j
github.com/gogo/protobuf v1.1.1/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7atdtwQ=
github.com/gogo/protobuf v1.3.2 h1:Ov1cvc58UF3b5XjBnZv7+opcTcQFZebYjWzi34vdm4Q=
github.com/gogo/protobuf v1.3.2/go.mod h1:P1XiOD3dCwIKUDQYPy72D8LYyHL2YPYrpS2s69NZV8Q=
github.com/golang-jwt/jwt/v4 v4.4.2 h1:rcc4lwaZgFMCZ5jxF9ABolDcIHdBytAFgqFPbSJQAYs=
github.com/golang-jwt/jwt/v4 v4.4.2/go.mod h1:m21LjoU+eqJr34lmDMbreY2eSTRJ1cv77w39/MY0Ch0=
github.com/golang-jwt/jwt/v5 v5.0.0 h1:1n1XNM9hk7O9mnQoNBGolZvzebBQ7p93ULHRc28XJUE=
github.com/golang-jwt/jwt/v5 v5.0.0/go.mod h1:pqrtFR0X4osieyHYxtmOUWsAWrfe1Q5UVIyoH402zdk=
github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q=
github.com/golang/groupcache v0.0.0-20190702054246-869f871628b6/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/groupcache v0.0.0-20191227052852-215e87163ea7/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
@@ -326,8 +337,9 @@ github.com/golang/protobuf v1.4.2/go.mod h1:oDoupMAO8OvCJWAcko0GGGIgR6R6ocIYbsSw
github.com/golang/protobuf v1.4.3/go.mod h1:oDoupMAO8OvCJWAcko0GGGIgR6R6ocIYbsSw735rRwI=
github.com/golang/protobuf v1.5.0/go.mod h1:FsONVRAS9T7sI+LIUmWTfcYkHO4aIWwzhcaSAoJOfIk=
github.com/golang/protobuf v1.5.1/go.mod h1:DopwsBzvsk0Fs44TXzsVbJyPhcCPeIwnvohx4u74HPM=
github.com/golang/protobuf v1.5.2 h1:ROPKBNFfQgOUMifHyP+KYbvpjbdoFNs+aK7DXlji0Tw=
github.com/golang/protobuf v1.5.2/go.mod h1:XVQd3VNwM+JqD3oG2Ue2ip4fOMUkwXdXDdiuN0vRsmY=
github.com/golang/protobuf v1.5.3 h1:KhyjKVUg7Usr/dYsdSqoFveMYd5ko72D+zANwlG1mmg=
github.com/golang/protobuf v1.5.3/go.mod h1:XVQd3VNwM+JqD3oG2Ue2ip4fOMUkwXdXDdiuN0vRsmY=
github.com/golang/snappy v0.0.3/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q=
github.com/google/btree v0.0.0-20180813153112-4030bb1f1f0c/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
github.com/google/btree v1.0.0/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
@@ -370,8 +382,9 @@ github.com/google/pprof v0.0.0-20210609004039-a478d1d731e9/go.mod h1:kpwsk12EmLe
github.com/google/pprof v0.0.0-20210720184732-4bb14d4b1be1/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE=
github.com/google/renameio v0.1.0/go.mod h1:KWCgfxg9yswjAJkECMjeO8J8rahYeXnNhOm40UhjYkI=
github.com/google/uuid v1.1.2/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/google/uuid v1.3.0 h1:t6JiXgmwXMjEs8VusXIJk2BXHsn+wx8BZdTaoZ5fu7I=
github.com/google/uuid v1.3.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/google/uuid v1.3.1 h1:KjJaJ9iWZ3jOFZIf1Lqf4laDRCasjl0BCmnEGxkdLb4=
github.com/google/uuid v1.3.1/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/googleapis/enterprise-certificate-proxy v0.0.0-20220520183353-fd19c99a87aa/go.mod h1:17drOmN3MwGY7t0e+Ei9b45FFGA3fBs3x36SsCg1hq8=
github.com/googleapis/enterprise-certificate-proxy v0.1.0/go.mod h1:17drOmN3MwGY7t0e+Ei9b45FFGA3fBs3x36SsCg1hq8=
github.com/googleapis/enterprise-certificate-proxy v0.2.0/go.mod h1:8C0jb7/mgJe/9KK8Lm7X9ctZC2t60YyIpYEI16jx0Qg=
@@ -440,16 +453,15 @@ github.com/jstemmer/go-junit-report v0.0.0-20190106144839-af01ea7f8024/go.mod h1
github.com/jstemmer/go-junit-report v0.9.1/go.mod h1:Brl9GWCQeLvo8nXZwPNNblvFj/XSXhF0NWZEnDohbsk=
github.com/julienschmidt/httprouter v1.2.0/go.mod h1:SYymIcj16QtmaHHD7aYtjjsJG7VTCxuUUipMqKk8s4w=
github.com/julienschmidt/httprouter v1.3.0/go.mod h1:JR6WtHb+2LUe8TCKY3cZOxFyyO8IZAc4RVcycCCAKdM=
github.com/kelseyhightower/envconfig v1.4.0 h1:Im6hONhd3pLkfDFsbRgu68RDNkGF1r3dvMUtDTo2cv8=
github.com/kelseyhightower/envconfig v1.4.0/go.mod h1:cccZRl6mQpaq41TPp5QxidR+Sa3axMbJDNb//FQX6Gg=
github.com/kisielk/errcheck v1.5.0/go.mod h1:pFxgyoBC7bSaBwPgfKdkLd5X25qrDl4LWUI2bnpBCr8=
github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
github.com/klauspost/compress v1.15.12 h1:YClS/PImqYbn+UILDnqxQCZ3RehC9N318SU3kElDUEM=
github.com/klauspost/compress v1.15.12/go.mod h1:QPwzmACJjUTFsnSHH934V6woptycfrDDJnH7hvFVbGM=
github.com/klauspost/compress v1.17.4 h1:Ej5ixsIri7BrIjBkRZLTo6ghwrEtHFk7ijlczPW4fZ4=
github.com/klauspost/compress v1.17.4/go.mod h1:/dCuZOvVtNoHsyb+cuJD3itjs3NbnF6KH9zAO4BDxPM=
github.com/klauspost/cpuid/v2 v2.0.1/go.mod h1:FInQzS24/EEf25PyTYn52gqo7WaD8xa0213Md/qVLRg=
github.com/klauspost/cpuid/v2 v2.0.4/go.mod h1:FInQzS24/EEf25PyTYn52gqo7WaD8xa0213Md/qVLRg=
github.com/klauspost/cpuid/v2 v2.2.1 h1:U33DW0aiEj633gHYw3LoDNfkDiYnE5Q8M/TKJn2f2jI=
github.com/klauspost/cpuid/v2 v2.2.1/go.mod h1:RVVoqg1df56z8g3pUjL/3lE5UfnlrJX8tyFgg4nqhuY=
github.com/klauspost/cpuid/v2 v2.2.5 h1:0E5MSMDEoAulmXNFquVs//DdoomxaoTY1kUhbc/qbZg=
github.com/klauspost/cpuid/v2 v2.2.5/go.mod h1:Lcz8mBdAVJIBVzewtcLocK12l3Y+JytZYpaMropDUws=
github.com/klauspost/pgzip v1.2.6 h1:8RXeL5crjEUFnR2/Sn6GJNWtSQ3Dk8pq4CL3jvdDyjU=
github.com/klauspost/pgzip v1.2.6/go.mod h1:Ch1tH69qFZu15pkjo5kYi6mth2Zzwzt50oCQKQE9RUs=
github.com/konsorten/go-windows-terminal-sequences v1.0.1/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
github.com/konsorten/go-windows-terminal-sequences v1.0.3/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
github.com/kr/fs v0.1.0 h1:Jskdu9ieNAYnjxsi0LbQp1ulIKZV1LAFgK1tWhpZgl8=
@@ -489,10 +501,10 @@ github.com/miekg/dns v1.1.26/go.mod h1:bPDLeHnStXmXAq1m/Ch/hvfNHr14JKNPMBo3VZKju
github.com/miekg/dns v1.1.41/go.mod h1:p6aan82bvRIyn+zDIv9xYNUpwa73JcSh9BKwknJysuI=
github.com/minio/md5-simd v1.1.2 h1:Gdi1DZK69+ZVMoNHRXJyNcxrMA4dSxoYHZSQbirFg34=
github.com/minio/md5-simd v1.1.2/go.mod h1:MzdKDxYpY2BT9XQFocsiZf/NKVtR7nkE4RoEpN+20RM=
github.com/minio/minio-go/v7 v7.0.44 h1:9zUJ7iU7ax2P1jOvTp6nVrgzlZq3AZlFm0XfRFDKstM=
github.com/minio/minio-go/v7 v7.0.44/go.mod h1:nCrRzjoSUQh8hgKKtu3Y708OLvRLtuASMg2/nvmbarw=
github.com/minio/sha256-simd v1.0.0 h1:v1ta+49hkWZyvaKwrQB8elexRqm6Y0aMLjCNsrYxo6g=
github.com/minio/sha256-simd v1.0.0/go.mod h1:OuYzVNI5vcoYIAmbIvHPl3N3jUzVedXbKy5RFepssQM=
github.com/minio/minio-go/v7 v7.0.65 h1:sOlB8T3nQK+TApTpuN3k4WD5KasvZIE3vVFzyyCa0go=
github.com/minio/minio-go/v7 v7.0.65/go.mod h1:R4WVUR6ZTedlCcGwZRauLMIKjgyaWxhs4Mqi/OMPmEc=
github.com/minio/sha256-simd v1.0.1 h1:6kaan5IFmwTNynnKKpDHe6FWHohJOHhCPchzK49dzMM=
github.com/minio/sha256-simd v1.0.1/go.mod h1:Pz6AKMiUdngCLpeTL/RJY1M9rUuPMYujV5xJjtbRSN8=
github.com/mitchellh/cli v1.0.0/go.mod h1:hNIlj7HEI86fIcpObd7a0FcrxTWetlwJDGcceTlRvqc=
github.com/mitchellh/cli v1.1.0/go.mod h1:xcISNoH86gajksDmfB23e/pu+B+GeFRMYmoHXxx3xhI=
github.com/mitchellh/go-homedir v1.1.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrkLzIz1N1q0pr0=
@@ -517,6 +529,8 @@ github.com/niemeyer/pretty v0.0.0-20200227124842-a10e7caefd8e h1:fD57ERR4JtEqsWb
github.com/niemeyer/pretty v0.0.0-20200227124842-a10e7caefd8e/go.mod h1:zD1mROLANZcx1PVRCS0qkT7pwLkGfwJo4zjcN/Tysno=
github.com/nxadm/tail v1.4.4/go.mod h1:kenIhsEOeOJmVchQTgglprH7qJGnHDVpk1VPCcaMI8A=
github.com/nxadm/tail v1.4.8/go.mod h1:+ncqLTQzXmGhMZNUePPaPqPvBxHAIsmXswZKocGu+AU=
github.com/offen/envconfig v1.5.0 h1:LHL4wYIDVeoGxSDI40MShmWfss3gYUlCdstfSiSq4Fk=
github.com/offen/envconfig v1.5.0/go.mod h1:L7ny7R+4JWH3VVnZ+ARHvZysWUiZ2eQcm3L0imU9ACY=
github.com/onsi/ginkgo v1.6.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/ginkgo v1.12.1/go.mod h1:zj2OWP4+oCPe1qIXoGWkgMRwljMUYCdkwsT2108oapk=
github.com/onsi/ginkgo v1.16.4 h1:29JGrr5oVBm5ulCWet69zQkzWipVXIol6ygQUe/EzNc=
@@ -539,22 +553,23 @@ github.com/opencontainers/go-digest v1.0.0 h1:apOUWs51W5PlhuyGyz9FCeeBIOUDA/6nW8
github.com/opencontainers/go-digest v1.0.0/go.mod h1:0JzlMkj0TRzQZfJkVvzbP0HBR3IKzErnv2BNG4W4MAM=
github.com/opencontainers/image-spec v1.0.3-0.20211202183452-c5a74bcca799 h1:rc3tiVYb5z54aKaDfakKn0dDjIyPpTtszkjuMzyt7ec=
github.com/opencontainers/image-spec v1.0.3-0.20211202183452-c5a74bcca799/go.mod h1:BtxoFyWECRxE4U/7sNtV5W15zMzWCbyJoFRP3s7yZA0=
github.com/otiai10/copy v1.10.0 h1:znyI7l134wNg/wDktoVQPxPkgvhDfGCYUasey+h0rDQ=
github.com/otiai10/copy v1.10.0/go.mod h1:rSaLseMUsZFFbsFGc7wCJnnkTAvdc5L6VWxPE4308Ww=
github.com/otiai10/copy v1.14.0 h1:dCI/t1iTdYGtkvCuBG2BgR6KZa83PTclw4U5n2wAllU=
github.com/otiai10/copy v1.14.0/go.mod h1:ECfuL02W+/FkTWZWgQqXPWZgW9oeKCSQ5qVfSc4qc4w=
github.com/otiai10/mint v1.5.1 h1:XaPLeE+9vGbuyEHem1JNk3bYc7KKqyI/na0/mLd/Kks=
github.com/otiai10/mint v1.5.1/go.mod h1:MJm72SBthJjz8qhefc4z1PYEieWmy8Bku7CjcAqyUSM=
github.com/pascaldekloe/goe v0.0.0-20180627143212-57f6aae5913c/go.mod h1:lzWF7FIEvWOWxwDKqyGYQf6ZUaNfKdP144TG7ZOy1lc=
github.com/pascaldekloe/goe v0.1.0/go.mod h1:lzWF7FIEvWOWxwDKqyGYQf6ZUaNfKdP144TG7ZOy1lc=
github.com/pelletier/go-toml v1.9.5/go.mod h1:u1nR/EPcESfeI/szUZKdtJ0xRNbUoANCkoOuaOx1Y+c=
github.com/pelletier/go-toml/v2 v2.0.5/go.mod h1:OMHamSCAODeSsVrwwvcJOaoN0LIUIaFVNZzmWyNfXas=
github.com/pkg/browser v0.0.0-20210115035449-ce105d075bb4 h1:Qj1ukM4GlMWXNdMBuXcXfz/Kw9s1qm0CLY32QxuSImI=
github.com/pkg/browser v0.0.0-20210115035449-ce105d075bb4/go.mod h1:N6UoU20jOqggOuDwUaBQpluzLNDqif3kq9z2wpdYEfQ=
github.com/pkg/browser v0.0.0-20210911075715-681adbf594b8 h1:KoWmjvw+nsYOo29YJK9vDA65RGE3NrOnUtO7a+RF9HU=
github.com/pkg/browser v0.0.0-20210911075715-681adbf594b8/go.mod h1:HKlIX3XHQyzLZPlr7++PzdhaXEj94dEiJgZDTsxEqUI=
github.com/pkg/errors v0.8.0/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/errors v0.8.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/sftp v1.13.1/go.mod h1:3HaPG6Dq1ILlpPZRO0HVMrsydcdLt6HRDccSgb87qRg=
github.com/pkg/sftp v1.13.5 h1:a3RLUqkyjYRtBTZJZ1VRrKbN3zhuPLlUc3sphVz81go=
github.com/pkg/sftp v1.13.5/go.mod h1:wHDZ0IZX6JcBYRK1TH9bcVq8G7TLpVHYIGJRFnmPfxg=
github.com/pkg/sftp v1.13.6 h1:JFZT4XbOU7l77xGSpOdW+pwIMqP044IyjXX6FGyEKFo=
github.com/pkg/sftp v1.13.6/go.mod h1:tz1ryNURKu77RL+GuCzmoJYxQczL3wLNNpPWagdg4Qk=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/posener/complete v1.1.1/go.mod h1:em0nMJCgc9GFtwrmVmEMR/ZL6WyhyjMBndrE9hABlRI=
@@ -580,8 +595,8 @@ github.com/prometheus/procfs v0.6.0/go.mod h1:cz+aTbrPOrUb4q7XlbU9ygM+/jj0fzG6c1
github.com/rogpeppe/fastuuid v1.2.0/go.mod h1:jVj6XXZzXRy/MSR5jhDC/2q6DgLz+nrA6LYCDYWNEvQ=
github.com/rogpeppe/go-internal v1.3.0/go.mod h1:M8bDsm7K2OlrFYOpmOWEs/qY81heoFRclV5y23lUDJ4=
github.com/rogpeppe/go-internal v1.6.1/go.mod h1:xXDCJY+GAPziupqXw64V24skbSoqbTEfhy4qGm1nDQc=
github.com/rs/xid v1.4.0 h1:qd7wPTDkN6KQx2VmMBLrpHkiyQwgFXRnkOLacUiaSNY=
github.com/rs/xid v1.4.0/go.mod h1:trrq9SKmegXys3aeAKXMUTdJsYXVwGY3RLcfgqegfbg=
github.com/rs/xid v1.5.0 h1:mKX4bl4iPYJtEIxp6CYiUuLQ/8DYMoz0PUdtGgMFRVc=
github.com/rs/xid v1.5.0/go.mod h1:trrq9SKmegXys3aeAKXMUTdJsYXVwGY3RLcfgqegfbg=
github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
github.com/ryanuber/columnize v0.0.0-20160712163229-9b3edd62028f/go.mod h1:sm1tb6uqfes/u+d4ooFouqFdy9/2g9QGwK3SQygK0Ts=
github.com/ryanuber/columnize v2.1.0+incompatible/go.mod h1:sm1tb6uqfes/u+d4ooFouqFdy9/2g9QGwK3SQygK0Ts=
@@ -591,8 +606,8 @@ github.com/sirupsen/logrus v1.2.0/go.mod h1:LxeOpSwHxABJmUn/MG1IvRgCAasNZTLOkJPx
github.com/sirupsen/logrus v1.4.2/go.mod h1:tLMulIdttU9McNUspp0xgXVQah82FyeX6MwdIuYE2rE=
github.com/sirupsen/logrus v1.6.0/go.mod h1:7uNnSEd1DgxDLC74fIahvMZmmYsHGZGEOFrfsX/uA88=
github.com/sirupsen/logrus v1.7.0/go.mod h1:yWOB1SBYBC5VeMP7gHvWumXLIWorT60ONWic61uBYv0=
github.com/sirupsen/logrus v1.9.0 h1:trlNQbNUG3OdDrDil03MCb1H2o9nJ1x4/5LYw7byDE0=
github.com/sirupsen/logrus v1.9.0/go.mod h1:naHLuLoDiP4jHNo9R0sCBMtWGeIprob74mVsIT4qYEQ=
github.com/sirupsen/logrus v1.9.3 h1:dueUQJ1C2q9oE3F7wvmSGAaVtTmUizReu6fjN8uqzbQ=
github.com/sirupsen/logrus v1.9.3/go.mod h1:naHLuLoDiP4jHNo9R0sCBMtWGeIprob74mVsIT4qYEQ=
github.com/spaolacci/murmur3 v0.0.0-20180118202830-f09979ecbc72/go.mod h1:JwIasOWyU6f++ZhiEuf87xNszmSA2myDM2Kzu9HwQUA=
github.com/spf13/afero v1.9.2/go.mod h1:iUV7ddyEEZPO5gA3zD4fJt6iStLlL+Lg4m2cihcDf8Y=
github.com/spf13/cast v1.5.0/go.mod h1:SpXXQ5YoyJw6s3/6cMTQuxvgRl3PCJiyaX9p6b155UU=
@@ -616,8 +631,8 @@ github.com/stretchr/testify v1.7.5/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO
github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU=
github.com/stretchr/testify v1.8.1 h1:w7B6lhMri9wdJUVmEZPGGhZzrYTPvgJArz7wNPgYKsk=
github.com/stretchr/testify v1.8.1/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4=
github.com/studio-b12/gowebdav v0.0.0-20220128162035-c7b1ff8a5e62 h1:b2nJXyPCa9HY7giGM+kYcnQ71m14JnGdQabMPmyt++8=
github.com/studio-b12/gowebdav v0.0.0-20220128162035-c7b1ff8a5e62/go.mod h1:bHA7t77X/QFExdeAnDzK6vKM34kEZAcE1OX4MfiwjkE=
github.com/studio-b12/gowebdav v0.9.0 h1:1j1sc9gQnNxbXXM4M/CebPOX4aXYtr7MojAVcN4dHjU=
github.com/studio-b12/gowebdav v0.9.0/go.mod h1:bHA7t77X/QFExdeAnDzK6vKM34kEZAcE1OX4MfiwjkE=
github.com/subosito/gotenv v1.4.1/go.mod h1:ayKnFf/c6rvx/2iiLrJUk1e6plDbT3edrFNGqEflhK0=
github.com/tv42/httpunix v0.0.0-20150427012821-b75d8614f926/go.mod h1:9ESjWnEqriFuLhtthL60Sar/7RFoluCcXsuvEwTV5KM=
github.com/yuin/goldmark v1.1.25/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
@@ -656,11 +671,12 @@ golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPh
golang.org/x/crypto v0.0.0-20210421170649-83a5a9bb288b/go.mod h1:T9bdIzuCu7OtxOm1hfPfRQxPLYneinmdGuTeoZ9dtd4=
golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
golang.org/x/crypto v0.0.0-20211108221036-ceb1ce70b4fa/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
golang.org/x/crypto v0.0.0-20211215153901-e495a2d5b3d3/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4=
golang.org/x/crypto v0.0.0-20220525230936-793ad666bf5e/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4=
golang.org/x/crypto v0.1.0/go.mod h1:RecgLatLF4+eUMCP1PoPZQb+cVrJcOPbHkTkbkB9sbw=
golang.org/x/crypto v0.3.0 h1:a06MkbcxBrEFc0w0QIZWXrH/9cCX6KJyWbBOIwAn+7A=
golang.org/x/crypto v0.3.0/go.mod h1:hebNnKkNXi2UzZN1eVRvBB7co0a+JxK6XbPiWVs/3J4=
golang.org/x/crypto v0.3.1-0.20221117191849-2c476679df9a/go.mod h1:hebNnKkNXi2UzZN1eVRvBB7co0a+JxK6XbPiWVs/3J4=
golang.org/x/crypto v0.7.0/go.mod h1:pYwdfH91IfpZVANVyUOhSIPZaFoJGxTFbZhFTx+dXZU=
golang.org/x/crypto v0.16.0 h1:mMMrFzRSCF0GvB7Ne27XVtVAaXLrPmgPC7/v0tkwHaY=
golang.org/x/crypto v0.16.0/go.mod h1:gCAAfMLgwOJRpTjQ2zCCt2OcSfYMTeZVSRtQlPC7Nq4=
golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20190306152737-a1d7652674e8/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20190510132918-efd6b22b2522/go.mod h1:ZjyILWgesfNpC6sMxTJOJm9Kp84zZh5NQWvqDGG3Qr8=
@@ -699,6 +715,7 @@ golang.org/x/mod v0.4.2/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.6.0-dev.0.20220106191415-9b9b3d81d5e3/go.mod h1:3p9vT2HGsQu2K1YbXdKPJLVgG5VJdoTa1poYQBtP1AY=
golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod h1:jJ57K6gSWd91VN4djpZkiMVwK6gcyfeH4XE8wZrZaV4=
golang.org/x/mod v0.6.0/go.mod h1:4mET923SAdbXp2ki8ey+zGs1SLqsuM2Y0uvdZR/fUNI=
golang.org/x/mod v0.8.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs=
golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180906233101-161cd47e91fd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
@@ -757,8 +774,11 @@ golang.org/x/net v0.0.0-20220722155237-a158d28d115b/go.mod h1:XRhObCWvk6IyKnWLug
golang.org/x/net v0.0.0-20220909164309-bea034e7d591/go.mod h1:YDH+HFinaLZZlnHAfSS6ZXJJ9M9t4Dl22yv3iI2vPwk=
golang.org/x/net v0.0.0-20221014081412-f15817d10f9b/go.mod h1:YDH+HFinaLZZlnHAfSS6ZXJJ9M9t4Dl22yv3iI2vPwk=
golang.org/x/net v0.1.0/go.mod h1:Cx3nUiGt4eDBEyega/BKRp+/AlGL8hYe7U9odMt2Cco=
golang.org/x/net v0.7.0 h1:rJrUqqhjsgNp7KqAIc25s9pZnjU7TUcSY7HcVZjdn1g=
golang.org/x/net v0.7.0/go.mod h1:2Tu9+aMcznHK/AK1HMvgo6xiTLG5rD5rZLDS+rp2Bjs=
golang.org/x/net v0.2.0/go.mod h1:KqCZLdyyvdV855qA2rE3GC2aiw5xGR5TEjj8smXukLY=
golang.org/x/net v0.6.0/go.mod h1:2Tu9+aMcznHK/AK1HMvgo6xiTLG5rD5rZLDS+rp2Bjs=
golang.org/x/net v0.8.0/go.mod h1:QVkue5JL9kW//ek3r6jTKnTFis1tRmNAW2P1shuFdJc=
golang.org/x/net v0.19.0 h1:zTwKpTd2XuCqf8huc7Fo2iSy+4RHPd10s4KzeTnVr1c=
golang.org/x/net v0.19.0/go.mod h1:CfAk/cbD4CthTvqiEl8NpboMuiuOYsAr/7NOjZJtv1U=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
@@ -783,6 +803,8 @@ golang.org/x/oauth2 v0.0.0-20220622183110-fd043fe589d2/go.mod h1:jaDAt6Dkxork7Lm
golang.org/x/oauth2 v0.0.0-20220822191816-0ebed06d0094/go.mod h1:h4gKUeWbJ4rQPri7E0u6Gs4e9Ri2zaLxzw5DI5XGrYg=
golang.org/x/oauth2 v0.0.0-20220909003341-f21342109be1/go.mod h1:h4gKUeWbJ4rQPri7E0u6Gs4e9Ri2zaLxzw5DI5XGrYg=
golang.org/x/oauth2 v0.0.0-20221014153046-6fdb5e3db783/go.mod h1:h4gKUeWbJ4rQPri7E0u6Gs4e9Ri2zaLxzw5DI5XGrYg=
golang.org/x/oauth2 v0.15.0 h1:s8pnnxNVzjWyrvYdFUQq5llS1PX2zhPXmccZv99h7uQ=
golang.org/x/oauth2 v0.15.0/go.mod h1:q48ptWNTY5XWf+JNten23lcvHpLJ0ZSxF5ttTHKVCAM=
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
@@ -797,8 +819,9 @@ golang.org/x/sync v0.0.0-20210220032951-036812b2e83c/go.mod h1:RxMgew5VJxzue5/jJ
golang.org/x/sync v0.0.0-20220601150217-0de741cfad7f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20220722155255-886fb9371eb4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20220929204114-8fcdb60fdcc0/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.1.0 h1:wsuoTGHzEhffawBOhz5CYhcrV4IdKZbEyZjBMuTp12o=
golang.org/x/sync v0.1.0/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.5.0 h1:60k92dhOjHxJkrqnwsfl8KuaHbn/5dl0lUPUklKo3qE=
golang.org/x/sync v0.5.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
golang.org/x/sys v0.0.0-20180823144017-11551d06cbcc/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
@@ -864,6 +887,7 @@ golang.org/x/sys v0.0.0-20210514084401-e8d321eab015/go.mod h1:oPkhp1MJrh7nUepCBc
golang.org/x/sys v0.0.0-20210603081109-ebe580a85c40/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210603125802-9665404d3644/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210616045830-e2b7044e8c71/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210616094352-59db8d763f22/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210630005230-0f9fa26af87c/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210806184541-e5e7981a1069/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
@@ -887,19 +911,26 @@ golang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a/go.mod h1:oPkhp1MJrh7nUepCBc
golang.org/x/sys v0.0.0-20220610221304-9f5ed59c137d/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220615213510-4f61da869c0c/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220624220833-87e55d714810/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220704084225-05e143d24a9e/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220715151400-c0bba94af5f8/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220722155257-8c9f86f7a55f/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220728004956-3c1f35247d10/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220811171246-fbc7d0a398ab/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220908164124-27713097b956/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.1.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.7.0 h1:3jlCCIQZPdOYu1h8BkNvLz8Kgwtae2cagcG/VamtZRU=
golang.org/x/sys v0.7.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.2.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.3.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.5.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.15.0 h1:h48lPFYpsTvQJZF4EKyI4aLHaev3CxivZmv7yZig9pc=
golang.org/x/sys v0.15.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/term v0.1.0/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/term v0.5.0 h1:n2a8QNdAb0sZNpU9R1ALUXBbY+w51fCQDN+7EdxNBsY=
golang.org/x/term v0.2.0/go.mod h1:TVmDHMZPmdnySmBfhjOoOdhjzdE1h4u1VwSiw2l1Nuc=
golang.org/x/term v0.5.0/go.mod h1:jMB1sMXY+tzblOD4FWmEbocvup2/aLOaQEp7JmGp78k=
golang.org/x/term v0.6.0/go.mod h1:m6U89DPEgQRMq3DNkDClhWw02AUbt2daBVO4cn4Hv9U=
golang.org/x/term v0.15.0 h1:y/Oo/a/q3IXu26lQgl04j/gjuBDOBlx7X6Om1j2CPW4=
golang.org/x/term v0.15.0/go.mod h1:BDl952bC7+uMoWR75FIrCDx79TPU9oHkTZ9yRbYOrX0=
golang.org/x/text v0.0.0-20170915032832-14c0d48ead0c/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.1-0.20180807135948-17ff2d5776d2/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
@@ -910,8 +941,10 @@ golang.org/x/text v0.3.5/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ=
golang.org/x/text v0.4.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8=
golang.org/x/text v0.7.0 h1:4BRB4x83lYWy72KwLD/qYDuTu7q9PjSagHvijDw7cLo=
golang.org/x/text v0.7.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8=
golang.org/x/text v0.8.0/go.mod h1:e1OnstbJyHTd6l/uOt8jFFHp6TRDWZR/bV3emEE/zU8=
golang.org/x/text v0.14.0 h1:ScX5w1eTa3QqT8oi6+ziP7dTV1S2+ALU0bI+0zXKWiQ=
golang.org/x/text v0.14.0/go.mod h1:18ZOQIKpY8NJVqYksKHtTdi31H5itFRjB5/qKTNYzSU=
golang.org/x/time v0.0.0-20181108054448-85acf8d2951c/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20190308202827-9d24e82272b4/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20191024005414-555d28b269f0/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
@@ -977,6 +1010,7 @@ golang.org/x/tools v0.1.5/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk=
golang.org/x/tools v0.1.10/go.mod h1:Uh6Zz+xoGYZom868N8YTex3t7RhtHDBrE8Gzo9bV56E=
golang.org/x/tools v0.1.12/go.mod h1:hNGJHUnrk76NpqgfD5Aqm5Crs+Hm0VOH/i9J2+nxYbc=
golang.org/x/tools v0.2.0/go.mod h1:y4OqIKeOV/fWJetJ8bXPU1sEVniLMIyDAZWeHdV+NTA=
golang.org/x/tools v0.6.0/go.mod h1:Xwgl3UAJ/d3gWutnCtw505GrjyAbvKui8lOU390QaIU=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
@@ -1040,6 +1074,7 @@ google.golang.org/appengine v1.5.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7
google.golang.org/appengine v1.6.1/go.mod h1:i06prIuMbXzDqacNJfV5OdTW448YApPu5ww/cMBSeb0=
google.golang.org/appengine v1.6.5/go.mod h1:8WjMMxjGQR8xUklV/ARdw2HLXBOI7O7uCIDZVag1xfc=
google.golang.org/appengine v1.6.6/go.mod h1:8WjMMxjGQR8xUklV/ARdw2HLXBOI7O7uCIDZVag1xfc=
google.golang.org/appengine v1.6.7 h1:FZR1q0exgwxzPzp/aF+VccGrSfxfPpkBqjIIEq3ru6c=
google.golang.org/appengine v1.6.7/go.mod h1:8WjMMxjGQR8xUklV/ARdw2HLXBOI7O7uCIDZVag1xfc=
google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc=
google.golang.org/genproto v0.0.0-20190307195333-5fe7a883aa19/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
@@ -1195,8 +1230,9 @@ google.golang.org/protobuf v1.26.0-rc.1/go.mod h1:jlhhOSvTdKEhbULTjvd4ARK9grFBp0
google.golang.org/protobuf v1.26.0/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc=
google.golang.org/protobuf v1.27.1/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc=
google.golang.org/protobuf v1.28.0/go.mod h1:HV8QOd/L58Z+nl8r43ehVNZIU/HEI6OcFqwMG9pJV4I=
google.golang.org/protobuf v1.28.1 h1:d0NfwRgPtno5B1Wa6L2DAG+KivqkdutMf1UhdNx175w=
google.golang.org/protobuf v1.28.1/go.mod h1:HV8QOd/L58Z+nl8r43ehVNZIU/HEI6OcFqwMG9pJV4I=
google.golang.org/protobuf v1.31.0 h1:g0LDEJHgrBl9N9r17Ru3sqWhkIx2NB67okBHPwC7hs8=
google.golang.org/protobuf v1.31.0/go.mod h1:HV8QOd/L58Z+nl8r43ehVNZIU/HEI6OcFqwMG9pJV4I=
gopkg.in/alecthomas/kingpin.v2 v2.2.6/go.mod h1:FMv+mEhP44yOT+4EoQTLFTRgOQ1FBLkstjWtayDeSgw=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=


@@ -127,12 +127,12 @@ func (b *azureBlobStorage) Prune(deadline time.Time, pruningPrefix string) (*sto
}
}
stats := storage.PruneStats{
stats := &storage.PruneStats{
Total: totalCount,
Pruned: uint(len(matches)),
}
if err := b.DoPrune(b.Name(), len(matches), int(totalCount), "Azure Blob Storage backup(s)", func() error {
pruneErr := b.DoPrune(b.Name(), len(matches), int(totalCount), deadline, func() error {
wg := sync.WaitGroup{}
wg.Add(len(matches))
var errs []error
@@ -152,9 +152,7 @@ func (b *azureBlobStorage) Prune(deadline time.Time, pruningPrefix string) (*sto
return errors.Join(errs...)
}
return nil
}); err != nil {
return &stats, err
}
})
return &stats, nil
return stats, pruneErr
}


@@ -0,0 +1,258 @@
package dropbox
import (
"bytes"
"context"
"fmt"
"net/url"
"os"
"path"
"path/filepath"
"strings"
"sync"
"time"
"github.com/dropbox/dropbox-sdk-go-unofficial/v6/dropbox"
"github.com/dropbox/dropbox-sdk-go-unofficial/v6/dropbox/files"
"github.com/offen/docker-volume-backup/internal/storage"
"golang.org/x/oauth2"
)
type dropboxStorage struct {
*storage.StorageBackend
client files.Client
concurrencyLevel int
}
// Config allows to configure a Dropbox storage backend.
type Config struct {
Endpoint string
OAuth2Endpoint string
RefreshToken string
AppKey string
AppSecret string
RemotePath string
ConcurrencyLevel int
}
// NewStorageBackend creates and initializes a new Dropbox storage backend.
func NewStorageBackend(opts Config, logFunc storage.Log) (storage.Backend, error) {
tokenUrl, _ := url.JoinPath(opts.OAuth2Endpoint, "oauth2/token")
conf := &oauth2.Config{
ClientID: opts.AppKey,
ClientSecret: opts.AppSecret,
Endpoint: oauth2.Endpoint{
TokenURL: tokenUrl,
},
}
logFunc(storage.LogLevelInfo, "Dropbox", "Fetching fresh access token for Dropbox storage backend.")
tkSource := conf.TokenSource(context.Background(), &oauth2.Token{RefreshToken: opts.RefreshToken})
token, err := tkSource.Token()
if err != nil {
return nil, fmt.Errorf("(*dropboxStorage).NewStorageBackend: Error refreshing token: %w", err)
}
dbxConfig := dropbox.Config{
Token: token.AccessToken,
}
if opts.Endpoint != "https://api.dropbox.com/" {
dbxConfig.URLGenerator = func(hostType string, namespace string, route string) string {
return fmt.Sprintf("%s/%d/%s/%s", opts.Endpoint, 2, namespace, route)
}
}
client := files.New(dbxConfig)
if opts.ConcurrencyLevel < 1 {
logFunc(storage.LogLevelWarning, "Dropbox", "Concurrency level must be at least 1! Using 1 instead of %d.", opts.ConcurrencyLevel)
opts.ConcurrencyLevel = 1
}
return &dropboxStorage{
StorageBackend: &storage.StorageBackend{
DestinationPath: opts.RemotePath,
Log: logFunc,
},
client: client,
concurrencyLevel: opts.ConcurrencyLevel,
}, nil
}
// Name returns the name of the storage backend
func (b *dropboxStorage) Name() string {
return "Dropbox"
}
// Copy copies the given file to the WebDav storage backend.
func (b *dropboxStorage) Copy(file string) error {
_, name := path.Split(file)
folderArg := files.NewCreateFolderArg(b.DestinationPath)
if _, err := b.client.CreateFolderV2(folderArg); err != nil {
switch err := err.(type) {
case files.CreateFolderV2APIError:
if err.EndpointError.Path.Tag != files.WriteErrorConflict {
return fmt.Errorf("(*dropboxStorage).Copy: Error creating directory '%s': %w", b.DestinationPath, err)
}
b.Log(storage.LogLevelInfo, b.Name(), "Destination path '%s' already exists, no new directory required.", b.DestinationPath)
default:
return fmt.Errorf("(*dropboxStorage).Copy: Error creating directory '%s': %w", b.DestinationPath, err)
}
}
r, err := os.Open(file)
if err != nil {
return fmt.Errorf("(*dropboxStorage).Copy: Error opening the file to be uploaded: %w", err)
}
defer r.Close()
// Start new upload session and get session id
b.Log(storage.LogLevelInfo, b.Name(), "Starting upload session for backup '%s' at path '%s'.", file, b.DestinationPath)
var sessionId string
uploadSessionStartArg := files.NewUploadSessionStartArg()
uploadSessionStartArg.SessionType = &files.UploadSessionType{Tagged: dropbox.Tagged{Tag: files.UploadSessionTypeConcurrent}}
if res, err := b.client.UploadSessionStart(uploadSessionStartArg, nil); err != nil {
return fmt.Errorf("(*dropboxStorage).Copy: Error starting the upload session: %w", err)
} else {
sessionId = res.SessionId
}
// Send the file in 148MB chunks (Dropbox API limit is 150MB, concurrent upload requires a multiple of 4MB though)
// Last append can be any size <= 150MB with Close=True
const chunkSize = 148 * 1024 * 1024 // 148MB
var offset uint64 = 0
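// guard acts as a counting semaphore bounding in-flight chunk uploads to
// concurrencyLevel; errorChn and EOFChn let the workers report failures and
// end-of-file back to the scheduling loop below.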
var guard = make(chan struct{}, b.concurrencyLevel)
var errorChn = make(chan error, b.concurrencyLevel)
var EOFChn = make(chan bool, b.concurrencyLevel)
var mu sync.Mutex
var wg sync.WaitGroup
loop:
for {
guard <- struct{}{} // limit concurrency
select {
case err := <-errorChn: // error from goroutine
return err
case <-EOFChn: // EOF from goroutine
wg.Wait() // wait for all goroutines to finish
break loop
default:
}
wg.Add(1) // register with the WaitGroup before spawning so wg.Wait cannot race this Add
go func() {
defer func() {
wg.Done()
<-guard
}()
chunk := make([]byte, chunkSize)
mu.Lock() // to preserve offset of chunks
select {
case <-EOFChn:
EOFChn <- true // put it back for outer loop
mu.Unlock()
return // already EOF
default:
}
bytesRead, err := r.Read(chunk)
if err != nil {
errorChn <- fmt.Errorf("(*dropboxStorage).Copy: Error reading the file to be uploaded: %w", err)
mu.Unlock()
return
}
chunk = chunk[:bytesRead]
uploadSessionAppendArg := files.NewUploadSessionAppendArg(
files.NewUploadSessionCursor(sessionId, offset),
)
isEOF := bytesRead < chunkSize
uploadSessionAppendArg.Close = isEOF
if isEOF {
EOFChn <- true
}
offset += uint64(bytesRead)
mu.Unlock()
if err := b.client.UploadSessionAppendV2(uploadSessionAppendArg, bytes.NewReader(chunk)); err != nil {
errorChn <- fmt.Errorf("(*dropboxStorage).Copy: Error appending the file to the upload session: %w", err)
return
}
}()
}
// Finish the upload session, commit the file (no new data added)
_, err = b.client.UploadSessionFinish(
files.NewUploadSessionFinishArg(
files.NewUploadSessionCursor(sessionId, 0),
files.NewCommitInfo(filepath.Join(b.DestinationPath, name)),
), nil)
if err != nil {
return fmt.Errorf("(*dropboxStorage).Copy: Error finishing the upload session: %w", err)
}
b.Log(storage.LogLevelInfo, b.Name(), "Uploaded a copy of backup '%s' at path '%s'.", file, b.DestinationPath)
return nil
}
// Prune rotates away backups according to the configuration and provided deadline for the Dropbox storage backend.
func (b *dropboxStorage) Prune(deadline time.Time, pruningPrefix string) (*storage.PruneStats, error) {
var entries []files.IsMetadata
res, err := b.client.ListFolder(files.NewListFolderArg(b.DestinationPath))
if err != nil {
return nil, fmt.Errorf("(*webDavStorage).Prune: Error looking up candidates from remote storage: %w", err)
}
entries = append(entries, res.Entries...)
for res.HasMore {
res, err = b.client.ListFolderContinue(files.NewListFolderContinueArg(res.Cursor))
if err != nil {
return nil, fmt.Errorf("(*webDavStorage).Prune: Error looking up candidates from remote storage: %w", err)
}
entries = append(entries, res.Entries...)
}
var matches []*files.FileMetadata
var lenCandidates int
for _, candidate := range entries {
switch candidate := candidate.(type) {
case *files.FileMetadata:
if !strings.HasPrefix(candidate.Name, pruningPrefix) {
continue
}
lenCandidates++
if candidate.ServerModified.Before(deadline) {
matches = append(matches, candidate)
}
default:
continue
}
}
stats := &storage.PruneStats{
Total: uint(lenCandidates),
Pruned: uint(len(matches)),
}
pruneErr := b.DoPrune(b.Name(), len(matches), lenCandidates, deadline, func() error {
for _, match := range matches {
if _, err := b.client.DeleteV2(files.NewDeleteArg(filepath.Join(b.DestinationPath, match.Name))); err != nil {
return fmt.Errorf("(*dropboxStorage).Prune: Error removing file from Dropbox storage: %w", err)
}
}
return nil
})
return stats, pruneErr
}
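As a hedged illustration of how the new backend might be wired up, the sketch below uses placeholder values throughout; the `storage.Log` signature and the import paths are inferred from the code above, not confirmed:

```go
package main

import (
	"fmt"
	"log"
	"os"

	"github.com/offen/docker-volume-backup/internal/storage"
	"github.com/offen/docker-volume-backup/internal/storage/dropbox"
)

func main() {
	// The storage.Log shape is inferred from the calls in the backend above.
	logFunc := func(level storage.LogLevel, context string, msg string, params ...any) {
		log.Printf("[%s] %s", context, fmt.Sprintf(msg, params...))
	}
	backend, err := dropbox.NewStorageBackend(dropbox.Config{
		Endpoint:         "https://api.dropbox.com/",
		OAuth2Endpoint:   "https://api.dropbox.com/", // "oauth2/token" gets joined onto this
		RefreshToken:     os.Getenv("DROPBOX_REFRESH_TOKEN"),
		AppKey:           os.Getenv("DROPBOX_APP_KEY"),
		AppSecret:        os.Getenv("DROPBOX_APP_SECRET"),
		RemotePath:       "/backups",
		ConcurrencyLevel: 6,
	}, logFunc)
	if err != nil {
		log.Fatalln(err)
	}
	if err := backend.Copy("/tmp/test.tar.gz"); err != nil {
		log.Fatalln(err)
	}
}
```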


@@ -47,9 +47,9 @@ func (b *localStorage) Copy(file string) error {
_, name := path.Split(file)
if err := copyFile(file, path.Join(b.DestinationPath, name)); err != nil {
return fmt.Errorf("(*localStorage).Copy: Error copying file to local archive: %w", err)
return fmt.Errorf("(*localStorage).Copy: Error copying file to archive: %w", err)
}
b.Log(storage.LogLevelInfo, b.Name(), "Stored copy of backup `%s` in local archive `%s`.", file, b.DestinationPath)
b.Log(storage.LogLevelInfo, b.Name(), "Stored copy of backup `%s` in `%s`.", file, b.DestinationPath)
if b.latestSymlink != "" {
symlink := path.Join(b.DestinationPath, b.latestSymlink)
@@ -116,7 +116,7 @@ func (b *localStorage) Prune(deadline time.Time, pruningPrefix string) (*storage
Pruned: uint(len(matches)),
}
if err := b.DoPrune(b.Name(), len(matches), len(candidates), "local backup(s)", func() error {
pruneErr := b.DoPrune(b.Name(), len(matches), len(candidates), deadline, func() error {
var removeErrors []error
for _, match := range matches {
if err := os.Remove(match); err != nil {
@@ -125,17 +125,15 @@ func (b *localStorage) Prune(deadline time.Time, pruningPrefix string) (*storage
}
if len(removeErrors) != 0 {
return fmt.Errorf(
"(*localStorage).Prune: %d error(s) deleting local files, starting with: %w",
"(*localStorage).Prune: %d error(s) deleting files, starting with: %w",
len(removeErrors),
errors.Join(removeErrors...),
)
}
return nil
}); err != nil {
return stats, err
}
})
return stats, nil
return stats, pruneErr
}
// copyFile creates a copy of the file located at `src` at `dst`.


@@ -125,7 +125,12 @@ func (b *s3Storage) Copy(file string) error {
if _, err := b.client.FPutObject(context.Background(), b.bucket, filepath.Join(b.DestinationPath, name), file, putObjectOptions); err != nil {
if errResp := minio.ToErrorResponse(err); errResp.Message != "" {
return fmt.Errorf("(*s3Storage).Copy: error uploading backup to remote storage: [Message]: '%s', [Code]: %s, [StatusCode]: %d", errResp.Message, errResp.Code, errResp.StatusCode)
return fmt.Errorf(
"(*s3Storage).Copy: error uploading backup to remote storage: [Message]: '%s', [Code]: %s, [StatusCode]: %d",
errResp.Message,
errResp.Code,
errResp.StatusCode,
)
}
return fmt.Errorf("(*s3Storage).Copy: error uploading backup to remote storage: %w", err)
}
@@ -148,7 +153,7 @@ func (b *s3Storage) Prune(deadline time.Time, pruningPrefix string) (*storage.Pr
lenCandidates++
if candidate.Err != nil {
return nil, fmt.Errorf(
"(*s3Storage).Prune: Error looking up candidates from remote storage! %w",
"(*s3Storage).Prune: error looking up candidates from remote storage! %w",
candidate.Err,
)
}
@@ -162,7 +167,7 @@ func (b *s3Storage) Prune(deadline time.Time, pruningPrefix string) (*storage.Pr
Pruned: uint(len(matches)),
}
if err := b.DoPrune(b.Name(), len(matches), lenCandidates, "remote backup(s)", func() error {
pruneErr := b.DoPrune(b.Name(), len(matches), lenCandidates, deadline, func() error {
objectsCh := make(chan minio.ObjectInfo)
go func() {
for _, match := range matches {
@@ -181,9 +186,7 @@ func (b *s3Storage) Prune(deadline time.Time, pruningPrefix string) (*storage.Pr
return errors.Join(removeErrors...)
}
return nil
}); err != nil {
return stats, err
}
})
return stats, nil
return stats, pruneErr
}


@@ -7,7 +7,6 @@ import (
"errors"
"fmt"
"io"
"io/ioutil"
"os"
"path"
"path/filepath"
@@ -46,7 +45,7 @@ func NewStorageBackend(opts Config, logFunc storage.Log) (storage.Backend, error
}
if _, err := os.Stat(opts.IdentityFile); err == nil {
key, err := ioutil.ReadFile(opts.IdentityFile)
key, err := os.ReadFile(opts.IdentityFile)
if err != nil {
return nil, errors.New("NewStorageBackend: error reading the private key")
}
@@ -75,14 +74,18 @@ func NewStorageBackend(opts Config, logFunc storage.Log) (storage.Backend, error
sshClient, err := ssh.Dial("tcp", fmt.Sprintf("%s:%s", opts.HostName, opts.Port), sshClientConfig)
if err != nil {
return nil, fmt.Errorf("NewStorageBackend: Error creating ssh client: %w", err)
return nil, fmt.Errorf("NewStorageBackend: error creating ssh client: %w", err)
}
_, _, err = sshClient.SendRequest("keepalive", false, nil)
if err != nil {
return nil, err
}
sftpClient, err := sftp.NewClient(sshClient)
sftpClient, err := sftp.NewClient(sshClient,
sftp.UseConcurrentReads(true),
sftp.UseConcurrentWrites(true),
sftp.MaxConcurrentRequestsPerFile(64),
)
if err != nil {
return nil, fmt.Errorf("NewStorageBackend: error creating sftp client: %w", err)
}
@@ -108,23 +111,23 @@ func (b *sshStorage) Copy(file string) error {
source, err := os.Open(file)
_, name := path.Split(file)
if err != nil {
return fmt.Errorf("(*sshStorage).Copy: Error reading the file to be uploaded: %w", err)
return fmt.Errorf("(*sshStorage).Copy: error reading the file to be uploaded: %w", err)
}
defer source.Close()
destination, err := b.sftpClient.Create(filepath.Join(b.DestinationPath, name))
if err != nil {
return fmt.Errorf("(*sshStorage).Copy: Error creating file on SSH storage: %w", err)
return fmt.Errorf("(*sshStorage).Copy: error creating file: %w", err)
}
defer destination.Close()
chunk := make([]byte, 1000000)
chunk := make([]byte, 1e9)
for {
num, err := source.Read(chunk)
if err == io.EOF {
tot, err := destination.Write(chunk[:num])
if err != nil {
return fmt.Errorf("(*sshStorage).Copy: Error uploading the file to SSH storage: %w", err)
return fmt.Errorf("(*sshStorage).Copy: error uploading the file: %w", err)
}
if tot != len(chunk[:num]) {
@@ -135,12 +138,12 @@ func (b *sshStorage) Copy(file string) error {
}
if err != nil {
return fmt.Errorf("(*sshStorage).Copy: Error uploading the file to SSH storage: %w", err)
return fmt.Errorf("(*sshStorage).Copy: error uploading the file: %w", err)
}
tot, err := destination.Write(chunk[:num])
if err != nil {
return fmt.Errorf("(*sshStorage).Copy: Error uploading the file to SSH storage: %w", err)
return fmt.Errorf("(*sshStorage).Copy: error uploading the file: %w", err)
}
if tot != len(chunk[:num]) {
@@ -148,7 +151,7 @@ func (b *sshStorage) Copy(file string) error {
}
}
b.Log(storage.LogLevelInfo, b.Name(), "Uploaded a copy of backup `%s` to SSH storage '%s' at path '%s'.", file, b.hostName, b.DestinationPath)
b.Log(storage.LogLevelInfo, b.Name(), "Uploaded a copy of backup `%s` to '%s' at path '%s'.", file, b.hostName, b.DestinationPath)
return nil
}
@@ -157,7 +160,7 @@ func (b *sshStorage) Copy(file string) error {
func (b *sshStorage) Prune(deadline time.Time, pruningPrefix string) (*storage.PruneStats, error) {
candidates, err := b.sftpClient.ReadDir(b.DestinationPath)
if err != nil {
return nil, fmt.Errorf("(*sshStorage).Prune: Error reading directory from SSH storage: %w", err)
return nil, fmt.Errorf("(*sshStorage).Prune: error reading directory: %w", err)
}
var matches []string
@@ -175,16 +178,14 @@ func (b *sshStorage) Prune(deadline time.Time, pruningPrefix string) (*storage.P
Pruned: uint(len(matches)),
}
if err := b.DoPrune(b.Name(), len(matches), len(candidates), "SSH backup(s)", func() error {
pruneErr := b.DoPrune(b.Name(), len(matches), len(candidates), deadline, func() error {
for _, match := range matches {
if err := b.sftpClient.Remove(filepath.Join(b.DestinationPath, match)); err != nil {
return fmt.Errorf("(*sshStorage).Prune: Error removing file from SSH storage: %w", err)
return fmt.Errorf("(*sshStorage).Prune: error removing file: %w", err)
}
}
return nil
}); err != nil {
return stats, err
}
})
return stats, nil
return stats, pruneErr
}
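The options passed to `sftp.NewClient` above are part of the `github.com/pkg/sftp` client API. As a hedged aside, the same options can also be combined with `io.Copy`, which lets `*sftp.File` use its concurrent `ReadFrom` fast path instead of a hand-rolled chunk loop; the helper below is purely illustrative and assumes an already-dialed SSH connection:

```go
package main

import (
	"io"
	"os"

	"github.com/pkg/sftp"
	"golang.org/x/crypto/ssh"
)

// upload is a hypothetical helper, not part of the repository.
func upload(conn *ssh.Client, local, remote string) error {
	// Same options the diff enables: many concurrent requests per file.
	client, err := sftp.NewClient(conn,
		sftp.UseConcurrentReads(true),
		sftp.UseConcurrentWrites(true),
		sftp.MaxConcurrentRequestsPerFile(64),
	)
	if err != nil {
		return err
	}
	defer client.Close()

	src, err := os.Open(local)
	if err != nil {
		return err
	}
	defer src.Close()

	dst, err := client.Create(remote)
	if err != nil {
		return err
	}
	defer dst.Close()

	// io.Copy lets *sftp.File keep many write requests in flight.
	_, err = io.Copy(dst, src)
	return err
}
```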


@@ -4,6 +4,7 @@
package storage
import (
"fmt"
"time"
)
@@ -17,7 +18,6 @@ type Backend interface {
// StorageBackend is a generic type of storage. Everything here are common properties of all storage types.
type StorageBackend struct {
DestinationPath string
RetentionDays int
Log Log
}
@@ -39,23 +39,27 @@ type PruneStats struct {
// DoPrune holds general control flow that applies to any kind of storage.
// Callers can pass in a thunk that performs the actual deletion of files.
func (b *StorageBackend) DoPrune(context string, lenMatches, lenCandidates int, description string, doRemoveFiles func() error) error {
func (b *StorageBackend) DoPrune(context string, lenMatches, lenCandidates int, deadline time.Time, doRemoveFiles func() error) error {
if lenMatches != 0 && lenMatches != lenCandidates {
if err := doRemoveFiles(); err != nil {
return err
}
formattedDeadline, err := deadline.Local().MarshalText()
if err != nil {
return fmt.Errorf("(*StorageBackend).DoPrune: error marshaling deadline: %w", err)
}
b.Log(LogLevelInfo, context,
"Pruned %d out of %d %s as their age exceeded the configured retention period of %d days.",
"Pruned %d out of %d backups as they were older than the given deadline of %s.",
lenMatches,
lenCandidates,
description,
b.RetentionDays,
string(formattedDeadline),
)
} else if lenMatches != 0 && lenMatches == lenCandidates {
b.Log(LogLevelWarning, context, "The current configuration would delete all %d existing %s.", lenMatches, description)
b.Log(LogLevelWarning, context, "The current configuration would delete all %d existing backups.", lenMatches)
b.Log(LogLevelWarning, context, "Refusing to do so, please check your configuration.")
} else {
b.Log(LogLevelInfo, context, "None of %d existing %s were pruned.", lenCandidates, description)
b.Log(LogLevelInfo, context, "None of %d existing backups were pruned.", lenCandidates)
}
return nil
}
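Each backend diff above converges on this same calling convention. The following is a minimal sketch of a caller, assuming hypothetical `fakeStorage`, `listFiles` and `removeFile` stand-ins that do not exist in the repository:

```go
package fake

import (
	"time"

	"github.com/offen/docker-volume-backup/internal/storage"
)

// fakeStorage, listFiles and removeFile are illustrative stand-ins only.
type fakeStorage struct {
	*storage.StorageBackend
}

func (b *fakeStorage) Name() string { return "Fake" }

func (b *fakeStorage) Prune(deadline time.Time, pruningPrefix string) (*storage.PruneStats, error) {
	candidates, matches := listFiles(pruningPrefix, deadline)
	stats := &storage.PruneStats{
		Total:  uint(len(candidates)),
		Pruned: uint(len(matches)),
	}
	// DoPrune owns the logging (now phrased in terms of the deadline instead
	// of retention days) and refuses to act when the configuration would
	// delete every existing backup.
	pruneErr := b.DoPrune(b.Name(), len(matches), len(candidates), deadline, func() error {
		for _, match := range matches {
			if err := removeFile(match); err != nil {
				return err
			}
		}
		return nil
	})
	return stats, pruneErr
}
```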


@@ -69,18 +69,18 @@ func (b *webDavStorage) Name() string {
func (b *webDavStorage) Copy(file string) error {
_, name := path.Split(file)
if err := b.client.MkdirAll(b.DestinationPath, 0644); err != nil {
return fmt.Errorf("(*webDavStorage).Copy: Error creating directory '%s' on WebDAV server: %w", b.DestinationPath, err)
return fmt.Errorf("(*webDavStorage).Copy: error creating directory '%s' on server: %w", b.DestinationPath, err)
}
r, err := os.Open(file)
if err != nil {
return fmt.Errorf("(*webDavStorage).Copy: Error opening the file to be uploaded: %w", err)
return fmt.Errorf("(*webDavStorage).Copy: error opening the file to be uploaded: %w", err)
}
if err := b.client.WriteStream(filepath.Join(b.DestinationPath, name), r, 0644); err != nil {
return fmt.Errorf("(*webDavStorage).Copy: Error uploading the file to WebDAV server: %w", err)
return fmt.Errorf("(*webDavStorage).Copy: error uploading the file: %w", err)
}
b.Log(storage.LogLevelInfo, b.Name(), "Uploaded a copy of backup '%s' to WebDAV URL '%s' at path '%s'.", file, b.url, b.DestinationPath)
b.Log(storage.LogLevelInfo, b.Name(), "Uploaded a copy of backup '%s' to '%s' at path '%s'.", file, b.url, b.DestinationPath)
return nil
}
@@ -89,7 +89,7 @@ func (b *webDavStorage) Copy(file string) error {
func (b *webDavStorage) Prune(deadline time.Time, pruningPrefix string) (*storage.PruneStats, error) {
candidates, err := b.client.ReadDir(b.DestinationPath)
if err != nil {
return nil, fmt.Errorf("(*webDavStorage).Prune: Error looking up candidates from remote storage: %w", err)
return nil, fmt.Errorf("(*webDavStorage).Prune: error looking up candidates from remote storage: %w", err)
}
var matches []fs.FileInfo
var lenCandidates int
@@ -108,16 +108,13 @@ func (b *webDavStorage) Prune(deadline time.Time, pruningPrefix string) (*storag
Pruned: uint(len(matches)),
}
if err := b.DoPrune(b.Name(), len(matches), lenCandidates, "WebDAV backup(s)", func() error {
pruneErr := b.DoPrune(b.Name(), len(matches), lenCandidates, deadline, func() error {
for _, match := range matches {
if err := b.client.Remove(filepath.Join(b.DestinationPath, match.Name())); err != nil {
return fmt.Errorf("(*webDavStorage).Prune: Error removing file from WebDAV storage: %w", err)
return fmt.Errorf("(*webDavStorage).Prune: error removing file: %w", err)
}
}
return nil
}); err != nil {
return stats, err
}
return stats, nil
})
return stats, pruneErr
}

test/Dockerfile Normal file

@@ -0,0 +1,13 @@
FROM docker:24-dind
RUN apk add \
coreutils \
curl \
gpg \
jq \
moreutils \
tar \
zstd \
--no-cache
WORKDIR /code/test
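Assuming a hypothetical local tag name, the sandbox image could be built and entered roughly like this (dind images need `--privileged` to run their own daemon):

```sh
docker build -t dvb-test-sandbox test/
docker run --privileged --rm -it -v "$PWD":/code dvb-test-sandbox
```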

test/README.md Normal file

@@ -0,0 +1,70 @@
# Integration Tests
## Running tests
The main entry point for running tests is the `./test.sh` script.
It can be used to run the entire test suite, or just a single test case.
### Run all tests
```sh
./test.sh
```
### Run a single test case
```sh
./test.sh <directory-name>
```
### Configuring a test run
In addition to the match pattern, which can be given as the first positional argument, certain behavior can be changed by setting environment variables:
#### `BUILD_IMAGE`
When set, the test script will build an up-to-date `docker-volume-backup` image from the current state of your source tree, and run the tests against it.
```sh
BUILD_IMAGE=1 ./test.sh
```
The default behavior is not to build an image, but to look for an existing version on your host system.
#### `IMAGE_TAG`
Setting this value lets you run tests against different existing images, so you can compare behavior:
```sh
IMAGE_TAG=v2.30.0 ./test.sh
```
#### `NO_IMAGE_CACHE`
When set, images from remote registries will not be cached and shared between sandbox containers.
```sh
NO_IMAGE_CACHE=1 ./test.sh
```
By default, two local volumes are created that persist the image data and provide it to sandbox containers at runtime.
## Understanding the test setup
The test setup runs each test case in an isolated Docker container, which itself runs an otherwise unused Docker daemon.
This means tests can rely on nobody else using that daemon, and can make assertions about the number of running containers and the like.
As the sandbox container is also expected to be torn down after each test, the scripts do not need to perform any cleanup themselves.
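Conceptually, each sandbox resembles the following invocation (illustrative only; the exact flags live in `test.sh`, and the image tag is a stand-in):

```sh
docker run --privileged --rm \
  -v "$PWD":/code \
  dvb-test-sandbox \
  sh -c 'cd /code/test/<directory-name> && ./run.sh'
```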
## Anatomy of a test case
The `test.sh` script looks for an executable file called `run.sh` in each directory.
When found, it is executed and signals success by returning a zero exit code.
Any other exit code is considered a failure and will halt execution of further tests.
There is a `util.sh` file containing a few commonly used helpers, which can be used by adding the following prelude to a new test case:
```sh
cd "$(dirname "$0")"
. ../util.sh
current_test=$(basename $(pwd))
```
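Building on that prelude, a minimal new test case might look like the following sketch, assuming a `docker-compose.yml` with a `backup` service next to it and the `pass` helper from `util.sh`:

```sh
#!/bin/sh
set -e

cd "$(dirname "$0")"
. ../util.sh
current_test=$(basename $(pwd))

docker compose up -d --quiet-pull
sleep 5

docker compose exec backup backup
# assert on the outcome here, then report via the util.sh helpers
pass "Backup completed."

docker compose down --volumes
```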


@@ -2,19 +2,19 @@ version: '3'
services:
storage:
image: mcr.microsoft.com/azure-storage/azurite
image: mcr.microsoft.com/azure-storage/azurite:3.26.0
volumes:
- azurite_backup_data:/data
- ${DATA_DIR:-./data}:/data
command: azurite-blob --blobHost 0.0.0.0 --blobPort 10000 --location /data
healthcheck:
test: nc 127.0.0.1 10000 -z
interval: 1s
retries: 30
test: nc 127.0.0.1 10000 -z
interval: 1s
retries: 30
az_cli:
image: mcr.microsoft.com/azure-cli
image: mcr.microsoft.com/azure-cli:2.51.0
volumes:
- ./local:/dump
- ${LOCAL_DIR:-./local}:/dump
command:
- /bin/sh
- -c
@@ -53,6 +53,4 @@ services:
- app_data:/var/opt/offen
volumes:
azurite_backup_data:
name: azurite_backup_data
app_data:

test/azure/run.sh Normal file → Executable file

@@ -6,35 +6,81 @@ cd "$(dirname "$0")"
. ../util.sh
current_test=$(basename $(pwd))
docker compose up -d
export LOCAL_DIR=$(mktemp -d)
export TMP_DIR=$(mktemp -d)
export DATA_DIR=$(mktemp -d)
download_az () {
docker compose run --rm az_cli \
az storage blob download -f /dump/$1.tar.gz -c test-container -n path/to/backup/$1.tar.gz
}
docker compose up -d --quiet-pull
sleep 5
# A symlink for a known file in the volume is created so the test can check
# whether symlinks are preserved on backup.
docker compose exec backup backup
sleep 5
expect_running_containers "3"
docker compose run --rm az_cli \
az storage blob download -f /dump/test.tar.gz -c test-container -n path/to/backup/test.tar.gz
tar -xvf ./local/test.tar.gz -C /tmp && test -f /tmp/backup/app_data/offen.db
download_az "test"
tar -xvf "$LOCAL_DIR/test.tar.gz" -C $TMP_DIR
if [ ! -f "$TMP_DIR/backup/app_data/offen.db" ]; then
fail "Could not find expeced file in untared backup"
fi
pass "Found relevant files in untared remote backups."
rm "$LOCAL_DIR/test.tar.gz"
# The second part of this test checks that backups do NOT get deleted when the
# retention is set to 0 days, as doing so would delete every existing backup.
# TODO: find out if we can test actual deletion without having to wait for a day
BACKUP_RETENTION_DAYS="0" docker compose up -d
sleep 5
docker compose exec backup backup
docker compose run --rm az_cli \
az storage blob download -f /dump/test.tar.gz -c test-container -n path/to/backup/test.tar.gz
test -f ./local/test.tar.gz
download_az "test"
if [ ! -f "$LOCAL_DIR/test.tar.gz" ]; then
fail "Remote backup was deleted"
fi
pass "Remote backups have not been deleted."
docker compose down --volumes
# The third part of this test checks that old backups DO get deleted when the
# retention is set to 7 days.
BACKUP_RETENTION_DAYS="7" docker compose up -d
sleep 5
info "Create first backup with no prune"
docker compose exec backup backup
docker compose run --rm az_cli \
az storage blob upload -f /dump/test.tar.gz -c test-container -n path/to/backup/test-old.tar.gz
docker compose down
rm "$LOCAL_DIR/test.tar.gz"
back_date="$(date "+%Y-%m-%dT%H:%M:%S%z" -d "14 days ago" | rev | cut -c 3- | rev):00"
jq --arg back_date "$back_date" '(.collections[] | select(.name=="$BLOBS_COLLECTION$") | .data[] | select(.name=="path/to/backup/test-old.tar.gz") | .properties.creationTime = $back_date)' "$DATA_DIR/__azurite_db_blob__.json" | sponge "$DATA_DIR/__azurite_db_blob__.json"
docker compose up -d
sleep 5
info "Create second backup and prune"
docker compose exec backup backup
info "Download first backup which should be pruned"
download_az "test-old" || true
if [ -f "$LOCAL_DIR/test-old.tar.gz" ]; then
fail "Backdated file was not deleted"
fi
download_az "test" || true
if [ ! -f "$LOCAL_DIR/test.tar.gz" ]; then
fail "Recent file was not found"
fi
pass "Old remote backup has been pruned, new one is still present."


@@ -12,8 +12,8 @@ services:
entrypoint: /bin/ash -c 'mkdir -p /data/backup && minio server --certs-dir "/certs" --address ":443" /data'
volumes:
- minio_backup_data:/data
- ./minio.crt:/certs/public.crt
- ./minio.key:/certs/private.key
- ${CERT_DIR:-.}/minio.crt:/certs/public.crt
- ${CERT_DIR:-.}/minio.key:/certs/private.key
backup:
image: offen/docker-volume-backup:${TEST_VERSION:-canary}
@@ -33,7 +33,7 @@ services:
volumes:
- app_data:/backup/app_data:ro
- /var/run/docker.sock:/var/run/docker.sock
- ./rootCA.crt:/root/minio-rootCA.crt
- ${CERT_DIR:-.}/rootCA.crt:/root/minio-rootCA.crt
offen:
image: offen/offen:latest

test/certs/run.sh Normal file → Executable file

@@ -6,25 +6,27 @@ cd "$(dirname "$0")"
. ../util.sh
current_test=$(basename $(pwd))
openssl genrsa -des3 -passout pass:test -out rootCA.key 4096
export CERT_DIR=$(mktemp -d)
openssl genrsa -des3 -passout pass:test -out "$CERT_DIR/rootCA.key" 4096
openssl req -passin pass:test \
-subj "/C=DE/ST=BE/O=IntegrationTest, Inc." \
-x509 -new -key rootCA.key -sha256 -days 1 -out rootCA.crt
-x509 -new -key "$CERT_DIR/rootCA.key" -sha256 -days 1 -out "$CERT_DIR/rootCA.crt"
openssl genrsa -out minio.key 4096
openssl req -new -sha256 -key minio.key \
openssl genrsa -out "$CERT_DIR/minio.key" 4096
openssl req -new -sha256 -key "$CERT_DIR/minio.key" \
-subj "/C=DE/ST=BE/O=IntegrationTest, Inc./CN=minio" \
-out minio.csr
-out "$CERT_DIR/minio.csr"
openssl x509 -req -passin pass:test \
-in minio.csr \
-CA rootCA.crt -CAkey rootCA.key -CAcreateserial \
-in "$CERT_DIR/minio.csr" \
-CA "$CERT_DIR/rootCA.crt" -CAkey "$CERT_DIR/rootCA.key" -CAcreateserial \
-extfile san.cnf \
-out minio.crt -days 1 -sha256
-out "$CERT_DIR/minio.crt" -days 1 -sha256
openssl x509 -in minio.crt -noout -text
openssl x509 -in "$CERT_DIR/minio.crt" -noout -text
docker compose up -d
docker compose up -d --quiet-pull
sleep 5
docker compose exec backup backup
@@ -39,5 +41,3 @@ docker run --rm \
ash -c 'tar -xvf /minio_data/backup/test.tar.gz -C /tmp && test -f /tmp/backup/app_data/offen.db'
pass "Found relevant files in untared remote backups."
docker compose down --volumes


@@ -13,7 +13,7 @@ docker volume create app_data
# correctly. It is not supposed to hold any data.
docker volume create empty_data
docker run -d \
docker run -d -q \
--name minio \
--network test_network \
--env MINIO_ROOT_USER=test \
@@ -25,7 +25,7 @@ docker run -d \
docker exec minio mkdir -p /data/backup
docker run -d \
docker run -d -q \
--name offen \
--network test_network \
-v app_data:/var/opt/offen/ \
@@ -33,7 +33,7 @@ docker run -d \
sleep 10
docker run --rm \
docker run --rm -q \
--network test_network \
-v app_data:/backup/app_data \
-v empty_data:/backup/empty_data \
@@ -48,7 +48,7 @@ docker run --rm \
--entrypoint backup \
offen/docker-volume-backup:${TEST_VERSION:-canary}
docker run --rm \
docker run --rm -q \
-v backup_data:/data alpine \
ash -c 'tar -xvf /data/backup/test.tar.gz && test -f /backup/app_data/offen.db && test -d /backup/empty_data'


@@ -1 +0,0 @@
local


@@ -42,7 +42,7 @@ services:
EXEC_LABEL: test
EXEC_FORWARD_OUTPUT: "true"
volumes:
- ./local:/archive
- ${LOCAL_DIR:-./local}:/archive
- app_data:/backup/data:ro
- /var/run/docker.sock:/var/run/docker.sock

test/commands/run.sh Normal file → Executable file

@@ -6,36 +6,37 @@ cd $(dirname $0)
. ../util.sh
current_test=$(basename $(pwd))
mkdir -p ./local
export LOCAL_DIR=$(mktemp -d)
export TMP_DIR=$(mktemp -d)
docker compose up -d
docker compose up -d --quiet-pull
sleep 30 # mariadb likes to take a bit before responding
docker compose exec backup backup
tar -xvf ./local/test.tar.gz
if [ ! -f ./backup/data/dump.sql ]; then
tar -xvf "$LOCAL_DIR/test.tar.gz" -C $TMP_DIR
if [ ! -f "$TMP_DIR/backup/data/dump.sql" ]; then
fail "Could not find file written by pre command."
fi
pass "Found expected file."
if [ -f ./backup/data/not-relevant.txt ]; then
if [ -f "$TMP_DIR/backup/data/not-relevant.txt" ]; then
fail "Command ran for container with other label."
fi
pass "Command did not run for container with other label."
if [ -f ./backup/data/post.txt ]; then
if [ -f "$TMP_DIR/backup/data/post.txt" ]; then
fail "File created in post command was present in backup."
fi
pass "Did not find unexpected file."
docker compose down --volumes
sudo rm -rf ./local
info "Running commands test in swarm mode next."
mkdir -p ./local
export LOCAL_DIR=$(mktemp -d)
export TMP_DIR=$(mktemp -d)
docker swarm init
docker stack deploy --compose-file=docker-compose.yml test_stack
@@ -49,16 +50,13 @@ sleep 20
docker exec $(docker ps -q -f name=backup) backup
tar -xvf ./local/test.tar.gz
if [ ! -f ./backup/data/dump.sql ]; then
tar -xvf "$LOCAL_DIR/test.tar.gz" -C $TMP_DIR
if [ ! -f "$TMP_DIR/backup/data/dump.sql" ]; then
fail "Could not find file written by pre command."
fi
pass "Found expected file."
if [ -f ./backup/data/post.txt ]; then
if [ -f "$TMP_DIR/backup/data/post.txt" ]; then
fail "File created in post command was present in backup."
fi
pass "Did not find unexpected file."
docker stack rm test_stack
docker swarm leave --force


@@ -1 +0,0 @@
local


@@ -5,7 +5,7 @@ services:
image: offen/docker-volume-backup:${TEST_VERSION:-canary}
restart: always
volumes:
- ./local:/archive
- ${LOCAL_DIR:-./local}:/archive
- app_data:/backup/app_data:ro
- ./01backup.env:/etc/dockervolumebackup/conf.d/01backup.env
- ./02backup.env:/etc/dockervolumebackup/conf.d/02backup.env


@@ -6,26 +6,24 @@ cd $(dirname $0)
. ../util.sh
current_test=$(basename $(pwd))
mkdir -p local
export LOCAL_DIR=$(mktemp -d)
docker compose up -d
docker compose up -d --quiet-pull
# sleep until a backup is guaranteed to have happened on the 1 minute schedule
sleep 100
docker compose down --volumes
if [ ! -f ./local/conf.tar.gz ]; then
if [ ! -f "$LOCAL_DIR/conf.tar.gz" ]; then
fail "Config from file was not used."
fi
pass "Config from file was used."
if [ ! -f ./local/other.tar.gz ]; then
if [ ! -f "$LOCAL_DIR/other.tar.gz" ]; then
fail "Run on same schedule did not succeed."
fi
pass "Run on same schedule succeeded."
if [ -f ./local/never.tar.gz ]; then
if [ -f "$LOCAL_DIR/never.tar.gz" ]; then
fail "Unexpected file was found."
fi
pass "Unexpected cron did not run."

test/dropbox/.gitignore vendored Normal file

@@ -0,0 +1 @@
user_v2_ready.yaml


@@ -0,0 +1,57 @@
version: '3'
services:
openapi_mock:
image: muonsoft/openapi-mock:0.3.9
environment:
OPENAPI_MOCK_USE_EXAMPLES: if_present
OPENAPI_MOCK_SPECIFICATION_URL: '/etc/openapi/user_v2.yaml'
ports:
- 8080:8080
volumes:
- ${SPEC_FILE:-./user_v2.yaml}:/etc/openapi/user_v2.yaml
oauth2_mock:
image: ghcr.io/navikt/mock-oauth2-server:1.0.0
ports:
- 8090:8090
environment:
PORT: 8090
JSON_CONFIG_PATH: '/etc/oauth2/config.json'
volumes:
- ./oauth2_config.json:/etc/oauth2/config.json
backup:
image: offen/docker-volume-backup:${TEST_VERSION:-canary}
hostname: hostnametoken
depends_on:
- openapi_mock
- oauth2_mock
restart: always
environment:
BACKUP_FILENAME_EXPAND: 'true'
BACKUP_FILENAME: test-$$HOSTNAME.tar.gz
BACKUP_CRON_EXPRESSION: 0 0 5 31 2 ?
BACKUP_RETENTION_DAYS: ${BACKUP_RETENTION_DAYS:-7}
BACKUP_PRUNING_LEEWAY: 5s
BACKUP_PRUNING_PREFIX: test
DROPBOX_ENDPOINT: http://openapi_mock:8080
DROPBOX_OAUTH2_ENDPOINT: http://oauth2_mock:8090
DROPBOX_REFRESH_TOKEN: test
DROPBOX_APP_KEY: test
DROPBOX_APP_SECRET: test
DROPBOX_REMOTE_PATH: /test
DROPBOX_CONCURRENCY_LEVEL: 6
volumes:
- app_data:/backup/app_data:ro
- /var/run/docker.sock:/var/run/docker.sock
offen:
image: offen/offen:latest
labels:
- docker-volume-backup.stop-during-backup=true
volumes:
- app_data:/var/opt/offen
volumes:
app_data:


@@ -0,0 +1,37 @@
{
"interactiveLogin": true,
"httpServer": "NettyWrapper",
"tokenCallbacks": [
{
"issuerId": "issuer1",
"tokenExpiry": 120,
"requestMappings": [
{
"requestParam": "scope",
"match": "scope1",
"claims": {
"sub": "subByScope",
"aud": [
"audByScope"
]
}
}
]
},
{
"issuerId": "issuer2",
"requestMappings": [
{
"requestParam": "someparam",
"match": "somevalue",
"claims": {
"sub": "subBySomeParam",
"aud": [
"audBySomeParam"
]
}
}
]
}
]
}

test/dropbox/run.sh Executable file

@@ -0,0 +1,61 @@
#!/bin/sh
set -e
cd "$(dirname "$0")"
. ../util.sh
current_test=$(basename $(pwd))
export SPEC_FILE=$(mktemp -d)/user_v2.yaml
cp user_v2.yaml $SPEC_FILE
sed -i 's/SERVER_MODIFIED_1/'"$(date "+%Y-%m-%dT%H:%M:%SZ")/g" $SPEC_FILE
sed -i 's/SERVER_MODIFIED_2/'"$(date "+%Y-%m-%dT%H:%M:%SZ" -d "14 days ago")/g" $SPEC_FILE
docker compose up -d --quiet-pull
sleep 5
logs=$(docker compose exec -T backup backup)
sleep 5
expect_running_containers "4"
echo "$logs"
if echo "$logs" | grep -q "ERROR"; then
fail "Backup failed, errors reported: $logs"
else
pass "Backup succeeded, no errors reported."
fi
# The second part of this test checks if backups get deleted when the retention
# is set to 0 days (which it should not as it would mean all backups get deleted)
BACKUP_RETENTION_DAYS="0" docker compose up -d
sleep 5
logs=$(docker compose exec -T backup backup)
echo "$logs"
if echo "$logs" | grep -q "Refusing to do so, please check your configuration"; then
pass "Remote backups have not been deleted."
else
fail "Remote backups would have been deleted: $logs"
fi
# The third part of this test checks if old backups get deleted when the retention
# is set to 7 days (which it should)
BACKUP_RETENTION_DAYS="7" docker compose up -d
sleep 5
info "Create second backup and prune"
logs=$(docker compose exec -T backup backup)
echo "$logs"
if echo "$logs" | grep -q "Pruned 1 out of 2 backups as they were older"; then
pass "Old remote backup has been pruned, new one is still present."
elif echo "$logs" | grep -q "ERROR"; then
fail "Pruning failed, errors reported: $logs"
elif echo "$logs" | grep -q "None of 1 existing backups were pruned"; then
fail "Pruning failed, old backup has not been pruned: $logs"
else
fail "Pruning failed, unknown result: $logs"
fi

test/dropbox/user_v2.yaml Normal file

File diff suppressed because it is too large.


@@ -11,7 +11,7 @@ services:
BACKUP_CRON_EXPRESSION: 0 0 5 31 2 ?
EXEC_FORWARD_OUTPUT: "true"
volumes:
- ./local:/local
- ${LOCAL_DIR:-local}:/local
- app_data:/backup/app_data:ro
- /var/run/docker.sock:/var/run/docker.sock

test/extend/run.sh Normal file → Executable file

@@ -6,14 +6,14 @@ cd "$(dirname "$0")"
. ../util.sh
current_test=$(basename $(pwd))
mkdir -p local
export LOCAL_DIR=$(mktemp -d)
export BASE_VERSION="${TEST_VERSION:-canary}"
export TEST_VERSION="${TEST_VERSION:-canary}-with-rsync"
docker build . -t offen/docker-volume-backup:$TEST_VERSION --build-arg version=$BASE_VERSION
docker compose up -d
docker compose up -d --quiet-pull
sleep 5
docker compose exec backup backup
@@ -22,8 +22,6 @@ sleep 5
expect_running_containers "2"
if [ ! -f "./local/app_data/offen.db" ]; then
if [ ! -f "$LOCAL_DIR/app_data/offen.db" ]; then
fail "Could not find expected file in untared archive."
fi
docker compose down --volumes

test/gpg/.gitignore vendored

@@ -1 +0,0 @@
local


@@ -11,7 +11,7 @@ services:
BACKUP_RETENTION_DAYS: ${BACKUP_RETENTION_DAYS:-7}
GPG_PASSPHRASE: 1234#$$ecret
volumes:
- ./local:/archive
- ${LOCAL_DIR:-./local}:/archive
- app_data:/backup/app_data:ro
- /var/run/docker.sock:/var/run/docker.sock


@@ -6,28 +6,27 @@ cd "$(dirname "$0")"
. ../util.sh
current_test=$(basename $(pwd))
mkdir -p local
export LOCAL_DIR=$(mktemp -d)
docker compose up -d
docker compose up -d --quiet-pull
sleep 5
docker compose exec backup backup
expect_running_containers "2"
tmp_dir=$(mktemp -d)
TMP_DIR=$(mktemp -d)
echo "1234#\$ecret" | gpg -d --pinentry-mode loopback --yes --passphrase-fd 0 ./local/test.tar.gz.gpg > ./local/decrypted.tar.gz
tar -xf ./local/decrypted.tar.gz -C $tmp_dir
if [ ! -f $tmp_dir/backup/app_data/offen.db ]; then
echo "1234#\$ecret" | gpg -d --pinentry-mode loopback --yes --passphrase-fd 0 "$LOCAL_DIR/test.tar.gz.gpg" > "$LOCAL_DIR/decrypted.tar.gz"
tar -xf "$LOCAL_DIR/decrypted.tar.gz" -C $TMP_DIR
if [ ! -f $TMP_DIR/backup/app_data/offen.db ]; then
fail "Could not find expected file in untared archive."
fi
rm ./local/decrypted.tar.gz
rm "$LOCAL_DIR/decrypted.tar.gz"
pass "Found relevant files in decrypted and untared local backup."
if [ ! -L ./local/test-latest.tar.gz.gpg ]; then
if [ ! -L "$LOCAL_DIR/test-latest.tar.gz.gpg" ]; then
fail "Could not find local symlink to latest encrypted backup."
fi
docker compose down --volumes


@@ -1 +0,0 @@
local


@@ -11,5 +11,5 @@ services:
BACKUP_CRON_EXPRESSION: 0 0 5 31 2 ?
BACKUP_EXCLUDE_REGEXP: '\.(me|you)$$'
volumes:
- ./local:/archive
- ${LOCAL_DIR:-./local}:/archive
- ./sources:/backup/data:ro

test/ignore/run.sh Normal file → Executable file

@@ -6,23 +6,21 @@ cd $(dirname $0)
. ../util.sh
current_test=$(basename $(pwd))
mkdir -p local
export LOCAL_DIR=$(mktemp -d)
docker compose up -d
docker compose up -d --quiet-pull
sleep 5
docker compose exec backup backup
docker compose down --volumes
TMP_DIR=$(mktemp -d)
tar --same-owner -xvf "$LOCAL_DIR/test.tar.gz" -C "$TMP_DIR"
out=$(mktemp -d)
sudo tar --same-owner -xvf ./local/test.tar.gz -C "$out"
if [ ! -f "$out/backup/data/me.txt" ]; then
if [ ! -f "$TMP_DIR/backup/data/me.txt" ]; then
fail "Expected file was not found."
fi
pass "Expected file was found."
if [ -f "$out/backup/data/skip.me" ]; then
if [ -f "$TMP_DIR/backup/data/skip.me" ]; then
fail "Ignored file was found."
fi
pass "Ignored file was not found."


@@ -1 +0,0 @@
local


@@ -16,7 +16,7 @@ services:
volumes:
- app_data:/backup/app_data:ro
- /var/run/docker.sock:/var/run/docker.sock
- ./local:/archive
- ${LOCAL_DIR:-./local}:/archive
offen:
image: offen/offen:latest


@@ -6,9 +6,9 @@ cd "$(dirname "$0")"
. ../util.sh
current_test=$(basename $(pwd))
mkdir -p local
export LOCAL_DIR=$(mktemp -d)
docker compose up -d
docker compose up -d --quiet-pull
sleep 5
# A symlink for a known file in the volume is created so the test can check
@@ -21,11 +21,11 @@ sleep 5
expect_running_containers "2"
tmp_dir=$(mktemp -d)
tar -xvf ./local/test-hostnametoken.tar.gz -C $tmp_dir
tar -xvf "$LOCAL_DIR/test-hostnametoken.tar.gz" -C $tmp_dir
if [ ! -f "$tmp_dir/backup/app_data/offen.db" ]; then
fail "Could not find expected file in untared archive."
fi
rm -f ./local/test-hostnametoken.tar.gz
rm -f "$LOCAL_DIR/test-hostnametoken.tar.gz"
if [ ! -L "$tmp_dir/backup/app_data/db.link" ]; then
fail "Could not find expected symlink in untared archive."
@@ -33,7 +33,7 @@ fi
pass "Found relevant files in decrypted and untared local backup."
if [ ! -L ./local/test-hostnametoken.latest.tar.gz.gpg ]; then
if [ ! -L "$LOCAL_DIR/test-hostnametoken.latest.tar.gz.gpg" ]; then
fail "Could not find symlink to latest version."
fi
@@ -41,15 +41,36 @@ pass "Found symlink to latest version in local backup."
# The second part of this test checks if backups get deleted when the retention
# is set to 0 days (which it should not as it would mean all backups get deleted)
# TODO: find out if we can test actual deletion without having to wait for a day
BACKUP_RETENTION_DAYS="0" docker compose up -d
sleep 5
docker compose exec backup backup
if [ "$(find ./local -type f | wc -l)" != "1" ]; then
fail "Backups should not have been deleted, instead seen: "$(find ./local -type f)""
if [ "$(find "$LOCAL_DIR" -type f | wc -l)" != "1" ]; then
fail "Backups should not have been deleted, instead seen: "$(find "$local_dir" -type f)""
fi
pass "Local backups have not been deleted."
docker compose down --volumes
# The third part of this test checks if old backups get deleted when the retention
# is set to 7 days (which it should)
BACKUP_RETENTION_DAYS="7" docker compose up -d
sleep 5
info "Create first backup with no prune"
docker compose exec backup backup
touch -r "$LOCAL_DIR/test-hostnametoken.tar.gz" -d "14 days ago" "$LOCAL_DIR/test-hostnametoken-old.tar.gz"
info "Create second backup and prune"
docker compose exec backup backup
if [ -f "$LOCAL_DIR/test-hostnametoken-old.tar.gz" ]; then
fail "Backdated file has not been deleted."
fi
if [ ! -f "$LOCAL_DIR/test-hostnametoken.tar.gz" ]; then
fail "Recent file has been deleted."
fi
pass "Old remote backup has been pruned, new one is still present."


@@ -1 +0,0 @@
local


@@ -12,7 +12,7 @@ services:
NOTIFICATION_URLS: ${NOTIFICATION_URLS}
EXTRA_VALUE: extra-value
volumes:
- ./local:/archive
- ${LOCAL_DIR:-./local}:/archive
- app_data:/backup/app_data:ro
- ./notifications.tmpl:/etc/dockervolumebackup/notifications.d/notifications.tmpl


@@ -6,9 +6,9 @@ cd $(dirname $0)
. ../util.sh
current_test=$(basename $(pwd))
mkdir -p local
export LOCAL_DIR=$(mktemp -d)
docker compose up -d
docker compose up -d --quiet-pull
sleep 5
GOTIFY_TOKEN=$(curl -sSLX POST -H 'Content-Type: application/json' -d '{"name":"test"}' http://admin:custom@localhost:8080/application | jq -r '.token')
@@ -46,5 +46,3 @@ if [ "$MESSAGE_BODY" != "Backing up /tmp/test.tar.gz succeeded." ]; then
fail "Unexpected notification body $MESSAGE_BODY"
fi
pass "Custom notification body was used."
docker compose down --volumes


@@ -1 +0,0 @@
local


@@ -9,9 +9,9 @@ services:
volumes:
- postgres_data:/var/lib/postgresql/data
environment:
- POSTGRES_PASSWORD=1FHJMSwt0yhIN1zS7I4DilGUhThBKq0x
- POSTGRES_USER=test
- POSTGRES_DB=test
POSTGRES_PASSWORD: 1FHJMSwt0yhIN1zS7I4DilGUhThBKq0x
POSTGRES_USER: test
POSTGRES_DB: test
backup:
image: offen/docker-volume-backup:${TEST_VERSION}
@@ -21,7 +21,7 @@ services:
volumes:
- postgres_data:/backup/postgres:ro
- /var/run/docker.sock:/var/run/docker.sock:ro
- ./local:/archive
- ${LOCAL_DIR:-./local}:/archive
volumes:
postgres_data:

test/ownership/run.sh Normal file → Executable file

@@ -7,24 +7,22 @@ cd $(dirname $0)
. ../util.sh
current_test=$(basename $(pwd))
mkdir -p local
export LOCAL_DIR=$(mktemp -d)
docker compose up -d
docker compose up -d --quiet-pull
sleep 5
docker compose exec backup backup
tmp_dir=$(mktemp -d)
sudo tar --same-owner -xvf ./local/backup.tar.gz -C $tmp_dir
TMP_DIR=$(mktemp -d)
tar --same-owner -xvf "$LOCAL_DIR/backup.tar.gz" -C $TMP_DIR
sudo find $tmp_dir/backup/postgres > /dev/null
find $TMP_DIR/backup/postgres > /dev/null
pass "Backup contains files at expected location"
for file in $(sudo find $tmp_dir/backup/postgres); do
if [ "$(sudo stat -c '%u:%g' $file)" != "70:70" ]; then
fail "Unexpected file ownership for $file: $(sudo stat -c '%u:%g' $file)"
for file in $(find $TMP_DIR/backup/postgres); do
if [ "$(stat -c '%u:%g' $file)" != "70:70" ]; then
fail "Unexpected file ownership for $file: $(stat -c '%u:%g' $file)"
fi
done
pass "All files and directories in backup preserved their ownership."
docker compose down --volumes

test/pgzip/run.sh Executable file

@@ -0,0 +1,42 @@
#!/bin/sh
set -e
cd $(dirname $0)
. ../util.sh
current_test=$(basename $(pwd))
docker network create test_network
docker volume create app_data
LOCAL_DIR=$(mktemp -d)
docker run -d -q \
--name offen \
--network test_network \
-v app_data:/var/opt/offen/ \
offen/offen:latest
sleep 5
docker run --rm -q \
--network test_network \
-v app_data:/backup/app_data \
-v $LOCAL_DIR:/archive \
-v /var/run/docker.sock:/var/run/docker.sock \
--env BACKUP_COMPRESSION=gz \
--env GZIP_PARALLELISM=0 \
--env BACKUP_FILENAME='test.{{ .Extension }}' \
--entrypoint backup \
offen/docker-volume-backup:${TEST_VERSION:-canary}
tmp_dir=$(mktemp -d)
tar -xvf "$LOCAL_DIR/test.tar.gz" -C $tmp_dir
if [ ! -f "$tmp_dir/backup/app_data/offen.db" ]; then
fail "Could not find expected file in untared archive."
fi
pass "Found relevant files in untared local backup."
# This test does not stop containers during backup. This is happening on
# purpose in order to cover this setup as well.
expect_running_containers "1"


@@ -0,0 +1,50 @@
version: '3'
services:
minio:
image: minio/minio:RELEASE.2020-08-04T23-10-51Z
environment:
MINIO_ROOT_USER: test
MINIO_ROOT_PASSWORD: test
MINIO_ACCESS_KEY: test
MINIO_SECRET_KEY: GMusLtUmILge2by+z890kQ
entrypoint: /bin/ash -c 'mkdir -p /data/backup && minio server /data'
volumes:
- minio_backup_data:/data
backup:
image: offen/docker-volume-backup:${TEST_VERSION:-canary}
hostname: hostnametoken
depends_on:
- minio
restart: always
environment:
AWS_ACCESS_KEY_ID: test
AWS_SECRET_ACCESS_KEY: GMusLtUmILge2by+z890kQ
AWS_ENDPOINT: minio:9000
AWS_ENDPOINT_PROTO: http
AWS_S3_BUCKET_NAME: backup
BACKUP_FILENAME_EXPAND: 'true'
BACKUP_FILENAME: test-$$HOSTNAME.tar.gz
BACKUP_CRON_EXPRESSION: 0 0 5 31 2 ?
BACKUP_RETENTION_DAYS: 7
BACKUP_PRUNING_LEEWAY: 5s
BACKUP_PRUNING_PREFIX: test
BACKUP_LATEST_SYMLINK: test-$$HOSTNAME.latest.tar.gz
BACKUP_SKIP_BACKENDS_FROM_PRUNE: 's3'
volumes:
- app_data:/backup/app_data:ro
- /var/run/docker.sock:/var/run/docker.sock
- ${LOCAL_DIR:-./local}:/archive
offen:
image: offen/offen:latest
labels:
- docker-volume-backup.stop-during-backup=true
volumes:
- app_data:/var/opt/offen
volumes:
app_data:
minio_backup_data:
name: minio_backup_data

test/pruning/run.sh Executable file

@@ -0,0 +1,72 @@
#!/bin/sh
# Tests prune-skipping with multiple backends (local, s3)
# Pruning itself is tested individually for each storage backend
set -e
cd "$(dirname "$0")"
. ../util.sh
current_test=$(basename $(pwd))
export LOCAL_DIR=$(mktemp -d)
docker compose up -d --quiet-pull
sleep 5
docker compose exec backup backup
sleep 5
expect_running_containers "3"
touch -r "$LOCAL_DIR/test-hostnametoken.tar.gz" -d "14 days ago" "$LOCAL_DIR/test-hostnametoken-old.tar.gz"
docker run --rm \
-v minio_backup_data:/minio_data \
alpine \
ash -c 'touch -d@$(( $(date +%s) - 1209600 )) /minio_data/backup/test-hostnametoken-old.tar.gz'
# Skip s3 backend from prune
docker compose up -d
sleep 5
info "Create backup with no prune for s3 backend"
docker compose exec backup backup
info "Check if old backup has been pruned (local)"
if [ -f "$LOCAL_DIR/test-hostnametoken-old.tar.gz" ]; then
fail "Expired backup was not pruned from local storage."
fi
info "Check if old backup has NOT been pruned (s3)"
docker run --rm \
-v minio_backup_data:/minio_data \
alpine \
ash -c 'test -f /minio_data/backup/test-hostnametoken-old.tar.gz'
pass "Old remote backup has been pruned locally, skipped S3 backend is untouched."
# Skip local and s3 backend from prune (all backends)
touch -r "$LOCAL_DIR/test-hostnametoken.tar.gz" -d "14 days ago" "$LOCAL_DIR/test-hostnametoken-old.tar.gz"
docker compose up -d
sleep 5
info "Create backup with no prune for both backends"
docker compose exec -e BACKUP_SKIP_BACKENDS_FROM_PRUNE="s3,local" backup backup
info "Check if old backup has NOT been pruned (local)"
if [ ! -f "$LOCAL_DIR/test-hostnametoken-old.tar.gz" ]; then
fail "Backdated file has not been deleted"
fi
info "Check if old backup has NOT been pruned (s3)"
docker run --rm \
-v minio_backup_data:/minio_data \
alpine \
ash -c 'test -f /minio_data/backup/test-hostnametoken-old.tar.gz'
pass "Skipped all backends while pruning."


@@ -6,11 +6,9 @@ cd "$(dirname "$0")"
. ../util.sh
current_test=$(basename $(pwd))
docker compose up -d
docker compose up -d --quiet-pull
sleep 5
# A symlink for a known file in the volume is created so the test can check
# whether symlinks are preserved on backup.
docker compose exec backup backup
sleep 5
@@ -26,7 +24,6 @@ pass "Found relevant files in untared remote backups."
# The second part of this test checks if backups get deleted when the retention
# is set to 0 days (which it should not as it would mean all backups get deleted)
# TODO: find out if we can test actual deletion without having to wait for a day
BACKUP_RETENTION_DAYS="0" docker compose up -d
sleep 5
@@ -39,4 +36,26 @@ docker run --rm \
pass "Remote backups have not been deleted."
docker compose down --volumes
# The third part of this test checks if old backups get deleted when the retention
# is set to 7 days (which it should)
BACKUP_RETENTION_DAYS="7" docker compose up -d
sleep 5
info "Create first backup with no prune"
docker compose exec backup backup
docker run --rm \
-v minio_backup_data:/minio_data \
alpine \
ash -c 'touch -d@$(( $(date +%s) - 1209600 )) /minio_data/backup/test-hostnametoken-old.tar.gz'
info "Create second backup and prune"
docker compose exec backup backup
docker run --rm \
-v minio_backup_data:/minio_data \
alpine \
ash -c 'test ! -f /minio_data/backup/test-hostnametoken-old.tar.gz && test -f /minio_data/backup/test-hostnametoken.tar.gz'
pass "Old remote backup has been pruned, new one is still present."


@@ -31,14 +31,12 @@ pass "Found relevant files in untared backup."
sleep 5
expect_running_containers "5"
docker stack rm test_stack
docker exec -e AWS_ACCESS_KEY_ID=test $(docker ps -q -f name=backup) backup \
&& fail "Backup should have failed due to duplicate env variables."
docker secret rm minio_root_password
docker secret rm minio_root_user
pass "Backup failed due to duplicate env variables."
docker swarm leave --force
docker exec -e AWS_ACCESS_KEY_ID_FILE=/tmp/nonexistant $(docker ps -q -f name=backup) backup \
&& fail "Backup should have failed due to non existing file env variable."
sleep 10
docker volume rm backup_data
docker volume rm pg_data
pass "Backup failed due to non existing file env variable."


@@ -8,7 +8,7 @@ services:
- PGID=1000
- USER_NAME=test
volumes:
- ./id_rsa.pub:/config/.ssh/authorized_keys
- ${KEY_DIR:-.}/id_rsa.pub:/config/.ssh/authorized_keys
- ssh_backup_data:/tmp
backup:
@@ -30,7 +30,7 @@ services:
SSH_REMOTE_PATH: /tmp
SSH_IDENTITY_PASSPHRASE: test1234
volumes:
- ./id_rsa:/root/.ssh/id_rsa
- ${KEY_DIR:-.}/id_rsa:/root/.ssh/id_rsa
- app_data:/backup/app_data:ro
- /var/run/docker.sock:/var/run/docker.sock


@@ -6,9 +6,11 @@ cd "$(dirname "$0")"
. ../util.sh
current_test=$(basename $(pwd))
ssh-keygen -t rsa -m pem -b 4096 -N "test1234" -f id_rsa -C "docker-volume-backup@local"
export KEY_DIR=$(mktemp -d)
docker compose up -d
ssh-keygen -t rsa -m pem -b 4096 -N "test1234" -f "$KEY_DIR/id_rsa" -C "docker-volume-backup@local"
docker compose up -d --quiet-pull
sleep 5
docker compose exec backup backup
@@ -26,7 +28,6 @@ pass "Found relevant files in decrypted and untared remote backups."
# The second part of this test checks if backups get deleted when the retention
# is set to 0 days (which it should not as it would mean all backups get deleted)
# TODO: find out if we can test actual deletion without having to wait for a day
BACKUP_RETENTION_DAYS="0" docker compose up -d
sleep 5
@@ -39,5 +40,28 @@ docker run --rm \
pass "Remote backups have not been deleted."
docker compose down --volumes
rm -f id_rsa id_rsa.pub
# The third part of this test checks if old backups get deleted when the retention
# is set to 7 days (which it should)
BACKUP_RETENTION_DAYS="7" docker compose up -d
sleep 5
info "Create first backup with no prune"
docker compose exec backup backup
# Set the modification date of the old backup to 14 days ago
docker run --rm \
-v ssh_backup_data:/ssh_data \
--user 1000 \
alpine \
ash -c 'touch -d@$(( $(date +%s) - 1209600 )) /ssh_data/test-hostnametoken-old.tar.gz'
info "Create second backup and prune"
docker compose exec backup backup
docker run --rm \
-v ssh_backup_data:/ssh_data \
alpine \
ash -c 'test ! -f /ssh_data/test-hostnametoken-old.tar.gz && test -f /ssh_data/test-hostnametoken.tar.gz'
pass "Old remote backup has been pruned, new one is still present."


@@ -27,11 +27,3 @@ pass "Found relevant files in untared backup."
sleep 5
expect_running_containers "5"
docker stack rm test_stack
docker swarm leave --force
sleep 10
docker volume rm backup_data
docker volume rm pg_data


@@ -2,15 +2,80 @@
set -e
TEST_VERSION=${1:-canary}
MATCH_PATTERN=$1
IMAGE_TAG=${IMAGE_TAG:-canary}
for dir in $(ls -d -- */); do
test="${dir}run.sh"
sandbox="docker_volume_backup_test_sandbox"
tarball="$(mktemp -d)/image.tar.gz"
trap finish EXIT INT TERM
finish () {
rm -rf $(dirname $tarball)
if [ ! -z $(docker ps -aq --filter=name=$sandbox) ]; then
docker rm -f $(docker stop $sandbox)
fi
if [ ! -z $(docker volume ls -q --filter=name="^${sandbox}\$") ]; then
docker volume rm $sandbox
fi
}
docker build -t offen/docker-volume-backup:test-sandbox .
if [ ! -z "$BUILD_IMAGE" ]; then
docker build -t offen/docker-volume-backup:$IMAGE_TAG $(dirname $(pwd))
fi
docker save offen/docker-volume-backup:$IMAGE_TAG -o $tarball
find_args="-mindepth 1 -maxdepth 1 -type d"
if [ ! -z "$MATCH_PATTERN" ]; then
find_args="$find_args -name $MATCH_PATTERN"
fi
for dir in $(find $find_args | sort); do
dir=$(echo $dir | cut -c 3-)
echo "################################################"
echo "Now running $test"
echo "Now running ${dir}"
echo "################################################"
echo ""
TEST_VERSION=$TEST_VERSION /bin/sh $test
test="${dir}/run.sh"
docker_run_args="--name "$sandbox" --detach \
--privileged \
-v $(dirname $(pwd)):/code \
-v $tarball:/cache/image.tar.gz \
-v $sandbox:/var/lib/docker"
if [ -z "$NO_IMAGE_CACHE" ]; then
docker_run_args="$docker_run_args \
-v "${sandbox}_image":/var/lib/docker/image \
-v "${sandbox}_overlay2":/var/lib/docker/overlay2"
fi
docker run $docker_run_args offen/docker-volume-backup:test-sandbox
retry_counter=0
until docker exec $sandbox /bin/sh -c 'docker info' > /dev/null 2>&1; do
if [ $retry_counter -gt 20 ]; then
echo "Gave up waiting for Docker daemon to become ready after 20 attempts"
exit 1
fi
if [ "$(docker inspect $sandbox --format '{{ .State.Running }}')" = "false" ]; then
docker rm $sandbox
docker run $docker_run_args offen/docker-volume-backup:test-sandbox
fi
sleep 0.5
retry_counter=$((retry_counter+1))
done
docker exec $sandbox /bin/sh -c "docker load -i /cache/image.tar.gz"
docker exec -e TEST_VERSION=$IMAGE_TAG $sandbox /bin/sh -c "/code/test/$test"
docker rm $(docker stop $sandbox)
docker volume rm $sandbox
echo ""
echo "$test passed"
echo ""


@@ -1,2 +0,0 @@
local
backup


@@ -10,7 +10,6 @@ services:
- docker-volume-backup.archive-pre.user=testuser
- docker-volume-backup.archive-pre=/bin/sh -c 'whoami > /tmp/whoami.txt'
backup:
image: offen/docker-volume-backup:${TEST_VERSION:-canary}
deploy:
@@ -21,7 +20,7 @@ services:
BACKUP_CRON_EXPRESSION: 0 0 5 31 2 ?
EXEC_FORWARD_OUTPUT: "true"
volumes:
- ./local:/archive
- ${LOCAL_DIR:-./local}:/archive
- app_data:/backup/data:ro
- /var/run/docker.sock:/var/run/docker.sock

test/user/run.sh Normal file → Executable file

@@ -6,25 +6,25 @@ cd $(dirname $0)
. ../util.sh
current_test=$(basename $(pwd))
docker compose up -d
export LOCAL_DIR=$(mktemp -d)
export TMP_DIR=$(mktemp -d)
echo "LOCAL_DIR $LOCAL_DIR"
echo "TMP_DIR $TMP_DIR"
docker compose up -d --quiet-pull
user_name=testuser
docker exec user-alpine-1 adduser --disabled-password "$user_name"
docker compose exec backup backup
tar -xvf ./local/test.tar.gz
if [ ! -f ./backup/data/whoami.txt ]; then
tar -xvf "$LOCAL_DIR/test.tar.gz" -C "$TMP_DIR"
if [ ! -f "$TMP_DIR/backup/data/whoami.txt" ]; then
fail "Could not find file written by pre command."
fi
pass "Found expected file."
tar -xvf ./local/test.tar.gz
if [ "$(cat ./backup/data/whoami.txt)" != "$user_name" ]; then
if [ "$(cat $TMP_DIR/backup/data/whoami.txt)" != "$user_name" ]; then
fail "Could not find expected user name."
fi
pass "Found expected user."
docker compose down --volumes
sudo rm -rf ./local


@@ -15,9 +15,34 @@ fail () {
exit 1
}
skip () {
echo "[test:${current_test:-none}:skip] "$1""
exit 0
}
expect_running_containers () {
if [ "$(docker ps -q | wc -l)" != "$1" ]; then
fail "Expected $1 containers to be running, instead seen: "$(docker ps -a | wc -l)""
fi
pass "$1 containers running."
}
docker() {
case $1 in
compose)
shift
case $1 in
up)
shift
command docker compose up --timeout 3 "$@";;
down)
shift
command docker compose down --timeout 3 "$@";;
*)
command docker compose "$@";;
esac
;;
*)
command docker "$@";;
esac
}


@@ -6,7 +6,7 @@ cd "$(dirname "$0")"
. ../util.sh
current_test=$(basename $(pwd))
docker compose up -d
docker compose up -d --quiet-pull
sleep 5
docker compose exec backup backup
@@ -24,7 +24,6 @@ pass "Found relevant files in untared remote backup."
# The second part of this test checks if backups get deleted when the retention
# is set to 0 days (which it should not as it would mean all backups get deleted)
# TODO: find out if we can test actual deletion without having to wait for a day
BACKUP_RETENTION_DAYS="0" docker compose up -d
sleep 5
@@ -37,4 +36,28 @@ docker run --rm \
pass "Remote backups have not been deleted."
docker compose down --volumes
# The third part of this test checks if old backups get deleted when the retention
# is set to 7 days (which it should)
BACKUP_RETENTION_DAYS="7" docker compose up -d
sleep 5
info "Create first backup with no prune"
docker compose exec backup backup
# Set the modification date of the old backup to 14 days ago
docker run --rm \
-v webdav_backup_data:/webdav_data \
--user 82 \
alpine \
ash -c 'touch -d@$(( $(date +%s) - 1209600 )) /webdav_data/data/my/new/path/test-hostnametoken-old.tar.gz'
info "Create second backup and prune"
docker compose exec backup backup
docker run --rm \
-v webdav_backup_data:/webdav_data \
alpine \
ash -c 'test ! -f /webdav_data/data/my/new/path/test-hostnametoken-old.tar.gz && test -f /webdav_data/data/my/new/path/test-hostnametoken.tar.gz'
pass "Old remote backup has been pruned, new one is still present."

Some files were not shown because too many files have changed in this diff.