Compare commits

...

50 Commits

Author SHA1 Message Date
Michal Middleton
67e7288855 Add support for zstd compression (#249)
Co-authored-by: Michal Middleton <jafa81@gmail.com>
2023-08-19 19:20:13 +02:00
dependabot[bot]
1765b06835 Bump github.com/pkg/sftp from 1.13.5 to 1.13.6 (#248) 2023-08-15 04:38:55 +00:00
Frederik Ring
67d978f515 Drop logrus dependency, log using slog package from stdlib (#247) 2023-08-10 19:41:03 +02:00
Frederik Ring
a93ff6fe09 Build in Go 1.21 (#246) 2023-08-10 16:03:59 +02:00
Frederik Ring
1c6f64e254 Current Docker client breaks in newer Go versions (#241)
* Current Docker client breaks in newer Go versions

* Cater for breaking API changes in Docker client

* Update Docker client

* Unpin Go version used for build

* Tidy sum file
2023-07-25 19:46:57 +02:00
dependabot[bot]
085d2c5dfd Bump github.com/minio/minio-go/v7 from 7.0.59 to 7.0.61 (#240) 2023-07-24 19:16:02 +00:00
dependabot[bot]
b1382dee00 Bump github.com/Azure/azure-sdk-for-go/sdk/storage/azblob (#239) 2023-07-24 19:15:56 +00:00
Frederik Ring
c3732107b1 Current Docker client breaks in Go 1.20.6 (#242) 2023-07-24 21:01:28 +02:00
dependabot[bot]
d288c87c54 Bump github.com/minio/minio-go/v7 from 7.0.58 to 7.0.59 (#238) 2023-07-04 07:53:19 +00:00
dependabot[bot]
47491439a1 Bump github.com/studio-b12/gowebdav (#235) 2023-06-27 07:46:34 +00:00
dependabot[bot]
94f71ac765 Bump github.com/minio/minio-go/v7 from 7.0.57 to 7.0.58 (#236) 2023-06-27 05:25:42 +00:00
dependabot[bot]
2addf1dd6c Bump golang.org/x/sync from 0.2.0 to 0.3.0 (#234) 2023-06-20 12:29:11 +00:00
dependabot[bot]
c07990eaf6 Bump github.com/minio/minio-go/v7 from 7.0.56 to 7.0.57 (#233) 2023-06-20 12:28:06 +00:00
jsloane
a27743bd32 Update README.md (#230) 2023-06-17 08:30:48 +02:00
dependabot[bot]
9d5b897ab4 Bump golang.org/x/sync from 0.1.0 to 0.2.0 (#229) 2023-06-13 07:17:58 +00:00
dependabot[bot]
30bf31cd90 Bump github.com/Azure/azure-sdk-for-go/sdk/storage/azblob (#228) 2023-06-13 07:04:03 +00:00
dependabot[bot]
32e9a05b40 Bump github.com/minio/minio-go/v7 from 7.0.44 to 7.0.56 (#227) 2023-06-13 07:03:38 +00:00
dependabot[bot]
b302884447 Bump golang.org/x/crypto from 0.3.0 to 0.9.0 (#223) 2023-06-10 13:15:41 +00:00
dependabot[bot]
b3e1ce27be Bump github.com/Azure/azure-sdk-for-go/sdk/azidentity (#225) 2023-06-10 12:57:44 +00:00
dependabot[bot]
66518ed0ff Bump github.com/sirupsen/logrus from 1.9.0 to 1.9.3 (#226) 2023-06-10 12:57:39 +00:00
dependabot[bot]
14d966d41a Bump github.com/otiai10/copy from 1.10.0 to 1.11.0 (#224) 2023-06-10 12:57:30 +00:00
Frederik Ring
336dece328 Set up automated updates for Docker base images and Go packages 2023-06-10 14:42:28 +02:00
Frederik Ring
dc8172b673 Use alpine-1.18 as the base image (#219) 2023-06-03 13:08:35 +02:00
Erwan LE PRADO
5ea9a7ce15 feat: add better handler for part size (#214)
* feat: add better handler for part size
  fix: use local file
  fix: try with another path
  fix: use bytes
  chore: go back
  go back readme
  goback
  goback
  goback
* chore: better handling
* fix: typo readme
* chore: wrong comparaison
* fix: typo
2023-06-02 16:30:02 +02:00
Frederik Ring
bcffe0bc25 Clarify docs section about user executing labeled commands 2023-05-26 16:15:09 +02:00
dependabot[bot]
144e65ce6f Bump github.com/docker/distribution (#215) 2023-05-11 21:07:45 +00:00
ba-tno
07afa53cd3 Update shoutrrr to 0.7 (#213)
* Replace docker-compose reference with docker[space]compose

* Update shoutrrr only to 0.7.1

* modules after go mod tidy

* Refer to v0.7 docs of shoutrrr

* Remove 'v' from shoutrrr doc link
2023-04-29 20:14:04 +02:00
Frederik Ring
9a07f5486b Docs reference incorrect shoutrrr version 2023-04-29 14:44:39 +02:00
Frederik Ring
d4c5f65f31 Entrypoint permissions can be set on COPY (#211) 2023-04-28 20:06:57 +02:00
Frederik Ring
5b8a484d80 Documentation around user label is lacking 2023-04-28 16:01:17 +02:00
Frederik Ring
37c01a578c TaskTemplate.ForceUpdate is a counter (#209) 2023-04-26 08:45:12 +02:00
Frederik Ring
46c6441d48 Add note about GHCR to README 2023-04-07 12:00:39 +02:00
Frederik Ring
5715d9ff9b Update of package copy does not fail on deleted files (#206) 2023-04-07 11:28:36 +02:00
dependabot[bot]
6ba173d916 Bump github.com/docker/docker (#205) 2023-04-05 04:58:07 +00:00
Frederik Ring
301fe6628c on: is expected to be an object 2023-04-02 19:45:46 +02:00
Frederik Ring
5ff2d53602 Items in on: are expected to be objects 2023-04-02 19:44:51 +02:00
Frederik Ring
cddd1fdcea Prevent duplicate builds on pull request 2023-04-02 19:41:49 +02:00
Frederik Ring
808cf8f82d Local directory can be used instead of volume for storing test artifact (#204) 2023-04-02 19:41:00 +02:00
Frederik Ring
c177202ac1 Multi platform build requires explicit buildx setup 2023-04-02 11:51:35 +02:00
Frederik Ring
27c2201161 Branches filter is a glob pattern, not a regex 2023-04-02 11:46:06 +02:00
Diulgher Artiom
7f20036b15 Possibility to use -u (user) option in docker exec (#203)
* Add user option for docker exec

* Add test for user option

* Return test version for image

* remove gitea config file

* refactor tests

* remove comments & fix image name

* add docs

* cleanup

* Update README.md with suggested correction

Co-authored-by: Frederik Ring <frederik.ring@gmail.com>

* fix backup command & bind folder instead of volume

---------

Co-authored-by: tao <generaltao.md@gmail.com>
Co-authored-by: Frederik Ring <frederik.ring@gmail.com>
2023-04-02 11:12:10 +02:00
Frederik Ring
2ac1f0cea4 Also trigger test runs on Pull Request 2023-03-29 07:57:09 +02:00
Frederik Ring
66ad124ddd any can be used instead of interface{} 2023-03-16 19:48:12 +01:00
Frederik Ring
aee802cb09 Migrate CI setup to GitHub Actions, also publish to GHCR (#199)
* Run tests in GitHub actions

* Do not try to allocate a pseudo TTY when running compose commands

* Try hard disabling TTY allocation

* Use compose plugin

* Test scripts shall not try to allocate a TTY

* Pass correct base version

* Check whether env var is even needed

* Stop running tests in CircleCI

* Run releases from GitHub actions as well

* Manually construct tags to be pushed on release
2023-03-16 19:32:44 +01:00
dependabot[bot]
a06ad1957a Bump github.com/docker/distribution (#195) 2023-03-07 06:56:56 +00:00
dependabot[bot]
15786c5da3 Bump golang.org/x/net from 0.2.0 to 0.7.0 (#191) 2023-02-18 06:34:40 +00:00
dependabot[bot]
641a3203c7 Bump github.com/containerd/containerd from 1.6.6 to 1.6.18 (#190) 2023-02-16 19:32:46 +00:00
Frederik Ring
5adfe3989e Document usage with rootless Docker installations
As described in #189
2023-02-16 08:18:57 +01:00
dependabot[bot]
550833be33 Merge pull request #188 from offen/dependabot/go_modules/github.com/containrrr/shoutrrr-0.6.0 2023-02-14 19:09:43 +00:00
dependabot[bot]
201a983ea4 Bump github.com/containrrr/shoutrrr from 0.5.2 to 0.6.0
Bumps [github.com/containrrr/shoutrrr](https://github.com/containrrr/shoutrrr) from 0.5.2 to 0.6.0.
- [Release notes](https://github.com/containrrr/shoutrrr/releases)
- [Changelog](https://github.com/containrrr/shoutrrr/blob/main/goreleaser.yml)
- [Commits](https://github.com/containrrr/shoutrrr/compare/v0.5.2...v0.6.0)

---
updated-dependencies:
- dependency-name: github.com/containrrr/shoutrrr
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-02-14 18:57:32 +00:00
31 changed files with 1506 additions and 487 deletions

View File

@@ -1,75 +0,0 @@
version: 2.1
jobs:
  canary:
    machine:
      image: ubuntu-2004:202201-02
    working_directory: ~/docker-volume-backup
    resource_class: large
    steps:
      - checkout
      - run:
          name: Build
          command: |
            docker build . -t offen/docker-volume-backup:canary
      - run:
          name: Install gnupg
          command: |
            sudo apt-get install -y gnupg
      - run:
          name: Run tests
          working_directory: ~/docker-volume-backup/test
          command: |
            export GPG_TTY=$(tty)
            ./test.sh canary
  build:
    docker:
      - image: cimg/base:2020.06
        environment:
          DOCKER_BUILDKIT: '1'
          DOCKER_CLI_EXPERIMENTAL: enabled
    working_directory: ~/docker-volume-backup
    resource_class: large
    steps:
      - checkout
      - setup_remote_docker:
          version: 20.10.6
      - docker/install-docker-credential-helper:
          release-tag: v0.6.4
      - docker/configure-docker-credentials-store
      - run:
          name: Push to Docker Hub
          command: |
            echo "$DOCKER_ACCESSTOKEN" | docker login --username offen --password-stdin
            # This is required for building ARM: https://gitlab.alpinelinux.org/alpine/aports/-/issues/12406
            docker run --rm --privileged linuxkit/binfmt:v0.8
            docker context create docker-volume-backup
            docker buildx create docker-volume-backup --name docker-volume-backup --use
            docker buildx inspect --bootstrap
            tag_args="-t offen/docker-volume-backup:$CIRCLE_TAG"
            if [[ "$CIRCLE_TAG" =~ ^v[0-9]+\.[0-9]+\.[0-9]+$ ]]; then
              # prerelease tags like `v2.0.0-alpha.1` should not be released as `latest`
              tag_args="$tag_args -t offen/docker-volume-backup:latest"
              tag_args="$tag_args -t offen/docker-volume-backup:$(echo "$CIRCLE_TAG" | cut -d. -f1)"
            fi
            docker buildx build --platform linux/amd64,linux/arm64,linux/arm/v7 \
              $tag_args . --push
workflows:
  version: 2
  docker_image:
    jobs:
      - canary:
          filters:
            tags:
              ignore: /^v.*/
      - build:
          filters:
            branches:
              ignore: /.*/
            tags:
              only: /^v.*/
orbs:
  docker: circleci/docker@2.1.4

.github/dependabot.yml (new file, 10 lines)

@@ -0,0 +1,10 @@
version: 2
updates:
  - package-ecosystem: docker
    directory: /
    schedule:
      interval: weekly
  - package-ecosystem: gomod
    directory: /
    schedule:
      interval: weekly

.github/workflows/release.yml (new file, 59 lines)

@@ -0,0 +1,59 @@
name: Release Docker Image
on:
  push:
    tags: v**
jobs:
  push_to_registries:
    name: Push Docker image to multiple registries
    runs-on: ubuntu-latest
    permissions:
      packages: write
      contents: read
    steps:
      - name: Check out the repo
        uses: actions/checkout@v3
      - name: Set up QEMU
        uses: docker/setup-qemu-action@v2
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
      - name: Log in to Docker Hub
        uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKER_USERNAME }}
          password: ${{ secrets.DOCKER_PASSWORD }}
      - name: Log in to GHCR
        uses: docker/login-action@v2
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - name: Extract Docker tags
        id: meta
        run: |
          version_tag="${{github.ref_name}}"
          tags=($version_tag)
          if [[ "$version_tag" =~ ^v[0-9]+\.[0-9]+\.[0-9]+$ ]]; then
            # prerelease tags like `v2.0.0-alpha.1` should not be released as `latest` nor `v2`
            tags+=("latest")
            tags+=($(echo "$version_tag" | cut -d. -f1))
          fi
          releases=""
          for tag in "${tags[@]}"; do
            releases="${releases:+$releases,}offen/docker-volume-backup:$tag,ghcr.io/offen/docker-volume-backup:$tag"
          done
          echo "releases=$releases" >> "$GITHUB_OUTPUT"
      - name: Build and push Docker images
        uses: docker/build-push-action@v4
        with:
          context: .
          push: true
          platforms: linux/amd64,linux/arm64,linux/arm/v7
          tags: ${{ steps.meta.outputs.releases }}

.github/workflows/test.yml (new file, 30 lines)

@@ -0,0 +1,30 @@
name: Run Integration Tests
on:
  push:
    branches:
      - main
  pull_request:
jobs:
  test:
    runs-on: ubuntu-22.04
    steps:
      - uses: actions/checkout@v3
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
      - name: Build Docker Image
        env:
          DOCKER_BUILDKIT: '1'
        run: docker build . -t offen/docker-volume-backup:test
      - name: Run Tests
        working-directory: ./test
        run: |
          # Stop the buildx container so the tests can make assertions
          # about the number of running containers
          docker rm -f $(docker ps -aq)
          export GPG_TTY=$(tty)
          ./test.sh test

View File

@@ -1,7 +1,7 @@
 # Copyright 2021 - Offen Authors <hioffen@posteo.de>
 # SPDX-License-Identifier: MPL-2.0
-FROM golang:1.20-alpine as builder
+FROM golang:1.21-alpine as builder
 WORKDIR /app
 COPY . .
@@ -9,15 +9,13 @@ RUN go mod download
 WORKDIR /app/cmd/backup
 RUN go build -o backup .
-FROM alpine:3.17
+FROM alpine:3.18
 WORKDIR /root
 RUN apk add --no-cache ca-certificates
 COPY --from=builder /app/cmd/backup/backup /usr/bin/backup
-COPY ./entrypoint.sh /root/
-RUN chmod +x entrypoint.sh
+COPY --chmod=755 ./entrypoint.sh /root/
 ENTRYPOINT ["/root/entrypoint.sh"]

View File

@@ -14,6 +14,7 @@ It handles __recurring or one-off backups of Docker volumes__ to a __local direc
- [Quickstart](#quickstart) - [Quickstart](#quickstart)
- [Recurring backups in a compose setup](#recurring-backups-in-a-compose-setup) - [Recurring backups in a compose setup](#recurring-backups-in-a-compose-setup)
- [One-off backups using Docker CLI](#one-off-backups-using-docker-cli) - [One-off backups using Docker CLI](#one-off-backups-using-docker-cli)
- [Available image registries](#available-image-registries)
- [Configuration reference](#configuration-reference) - [Configuration reference](#configuration-reference)
- [How to](#how-to) - [How to](#how-to)
- [Stop containers during backup](#stop-containers-during-backup) - [Stop containers during backup](#stop-containers-during-backup)
@@ -30,6 +31,7 @@ It handles __recurring or one-off backups of Docker volumes__ to a __local direc
- [Replace deprecated `BACKUP_FROM_SNAPSHOT` usage](#replace-deprecated-backup_from_snapshot-usage) - [Replace deprecated `BACKUP_FROM_SNAPSHOT` usage](#replace-deprecated-backup_from_snapshot-usage)
- [Replace deprecated `exec-pre` and `exec-post` labels](#replace-deprecated-exec-pre-and-exec-post-labels) - [Replace deprecated `exec-pre` and `exec-post` labels](#replace-deprecated-exec-pre-and-exec-post-labels)
- [Using a custom Docker host](#using-a-custom-docker-host) - [Using a custom Docker host](#using-a-custom-docker-host)
- [Use with rootless Docker](#use-with-rootless-docker)
- [Run multiple backup schedules in the same container](#run-multiple-backup-schedules-in-the-same-container) - [Run multiple backup schedules in the same container](#run-multiple-backup-schedules-in-the-same-container)
- [Define different retention schedules](#define-different-retention-schedules) - [Define different retention schedules](#define-different-retention-schedules)
- [Use special characters in notification URLs](#use-special-characters-in-notification-urls) - [Use special characters in notification URLs](#use-special-characters-in-notification-urls)
@@ -120,6 +122,18 @@ docker run --rm \
Alternatively, pass a `--env-file` in order to use a full config as described below. Alternatively, pass a `--env-file` in order to use a full config as described below.
### Available image registries
This Docker image is published to both Docker Hub and the GitHub container registry.
Depending on your preferences and needs, you can reference both `offen/docker-volume-backup` as well as `ghcr.io/offen/docker-volume-backup`:
```
docker pull offen/docker-volume-backup:v2
docker pull ghcr.io/offen/docker-volume-backup:v2
```
Documentation references Docker Hub, but all examples will work using ghcr.io just as well.
## Configuration reference ## Configuration reference
Backup targets, schedule and retention are configured in environment variables. Backup targets, schedule and retention are configured in environment variables.
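
For context, settings like the ones in the template below are decoded from environment variables with the `envconfig` package. The following is a minimal, hypothetical sketch with a trimmed-down struct, not the project's full configuration; the tags follow the conventions visible in the `cmd/backup/config.go` diff further down:

```go
package main

import (
	"fmt"
	"log"

	"github.com/kelseyhightower/envconfig"
)

// Config is a hypothetical subset of the real configuration struct; thanks to
// the tags, BACKUP_SOURCES, BACKUP_COMPRESSION and BACKUP_FILENAME map onto
// these fields automatically.
type Config struct {
	BackupSources     string `split_words:"true" default:"/backup"`
	BackupCompression string `split_words:"true" default:"gz"`
	BackupFilename    string `split_words:"true" default:"backup-%Y-%m-%dT%H-%M-%S.{{ .Extension }}"`
}

func main() {
	var c Config
	// Process reads the environment and applies the defaults from the tags
	// for anything that is unset.
	if err := envconfig.Process("", &c); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%+v\n", c)
}
```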
@@ -134,13 +148,22 @@ You can populate below template according to your requirements and use it as you
# BACKUP_CRON_EXPRESSION="0 2 * * *" # BACKUP_CRON_EXPRESSION="0 2 * * *"
# The name of the backup file including the `.tar.gz` extension. # The compression algorithm used in conjunction with tar.
# Valid options are: "gz" (Gzip) and "zst" (Zstd).
# Note that the selection affects the file extension.
# BACKUP_COMPRESSION="gz"
# The name of the backup file including the extension.
# Format verbs will be replaced as in `strftime`. Omitting them # Format verbs will be replaced as in `strftime`. Omitting them
# will result in the same filename for every backup run, which means previous # will result in the same filename for every backup run, which means previous
# versions will be overwritten on subsequent runs. The default results # versions will be overwritten on subsequent runs.
# in filenames like `backup-2021-08-29T04-00-00.tar.gz`. # Extension can be defined literally or via "{{ .Extension }}" template,
# in which case it will become either "tar.gz" or "tar.zst" (depending
# on your BACKUP_COMPRESSION setting).
# The default results in filenames like: `backup-2021-08-29T04-00-00.tar.gz`.
# BACKUP_FILENAME="backup-%Y-%m-%dT%H-%M-%S.tar.gz" # BACKUP_FILENAME="backup-%Y-%m-%dT%H-%M-%S.{{ .Extension }}"
# Setting BACKUP_FILENAME_EXPAND to true allows for environment variable # Setting BACKUP_FILENAME_EXPAND to true allows for environment variable
# placeholders in BACKUP_FILENAME, BACKUP_LATEST_SYMLINK and in # placeholders in BACKUP_FILENAME, BACKUP_LATEST_SYMLINK and in
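
To illustrate how the `{{ .Extension }}` placeholder and `BACKUP_COMPRESSION` fit together, here is a simplified sketch mirroring the template rendering added in `cmd/backup/script.go` further down (not the exact implementation):

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// renderFilename expands the {{ .Extension }} placeholder in BACKUP_FILENAME,
// turning the configured compression ("gz" or "zst") into the matching
// "tar.gz" / "tar.zst" extension. Strftime verbs are left untouched here;
// they are expanded separately at backup time.
func renderFilename(pattern, compression string) (string, error) {
	tmpl, err := template.New("extension").Parse(pattern)
	if err != nil {
		return "", fmt.Errorf("parsing filename template: %w", err)
	}
	var buf bytes.Buffer
	if err := tmpl.Execute(&buf, map[string]string{
		"Extension": fmt.Sprintf("tar.%s", compression),
	}); err != nil {
		return "", fmt.Errorf("executing filename template: %w", err)
	}
	return buf.String(), nil
}

func main() {
	name, err := renderFilename("backup-%Y-%m-%dT%H-%M-%S.{{ .Extension }}", "zst")
	if err != nil {
		panic(err)
	}
	fmt.Println(name) // backup-%Y-%m-%dT%H-%M-%S.tar.zst
}
```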
@@ -246,6 +269,15 @@ You can populate below template according to your requirements and use it as you
# AWS_STORAGE_CLASS="GLACIER" # AWS_STORAGE_CLASS="GLACIER"
# Setting this variable will change the S3 default part size for the copy step.
# This value is useful when you want to upload large files.
# NB : While using Scaleway as S3 provider, be aware that the parts counter is set to 1.000.
# While Minio uses a hard coded value to 10.000. As a workaround, try to set a higher value.
# Defaults to "16" (MB) if unset (from minio), you can set this value according to your needs.
# The unit is in MB and an integer.
# AWS_PART_SIZE=16
# You can also backup files to any WebDAV server: # You can also backup files to any WebDAV server:
# The URL of the remote WebDAV server # The URL of the remote WebDAV server
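
Returning to `AWS_PART_SIZE` above: a rough sketch of how a part size given in MB can be handed to the MinIO client, loosely following the logic added to the S3 backend in this comparison (function name and values are illustrative, error handling trimmed):

```go
package main

import (
	"fmt"
	"log"

	"github.com/minio/minio-go/v7"
)

// partSizeBytes turns a configured AWS_PART_SIZE (in MB) into the part size
// passed to the MinIO client, going through minio.OptimalPartInfo so the
// resulting value still respects the provider's limit on the number of parts.
func partSizeBytes(fileSize int64, configuredMB int64) (uint64, error) {
	_, partSize, _, err := minio.OptimalPartInfo(fileSize, uint64(configuredMB*1024*1024))
	if err != nil {
		return 0, fmt.Errorf("computing optimal part size: %w", err)
	}
	return uint64(partSize), nil
}

func main() {
	// Hypothetical 5 GiB backup uploaded with AWS_PART_SIZE=64.
	size, err := partSizeBytes(5<<30, 64)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(size)
}
```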
@@ -410,7 +442,7 @@ You can populate below template according to your requirements and use it as you
# Notifications (email, Slack, etc.) can be sent out when a backup run finishes. # Notifications (email, Slack, etc.) can be sent out when a backup run finishes.
# Configuration is provided as a comma-separated list of URLs as consumed # Configuration is provided as a comma-separated list of URLs as consumed
# by `shoutrrr`: https://containrrr.dev/shoutrrr/v0.5/services/overview/ # by `shoutrrr`: https://containrrr.dev/shoutrrr/0.7/services/overview/
# The content of such notifications can be customized. Dedicated documentation # The content of such notifications can be customized. Dedicated documentation
# on how to do this can be found in the README. When providing multiple URLs or # on how to do this can be found in the README. When providing multiple URLs or
# an URL that contains a comma, the values can be URL encoded to avoid ambiguities. # an URL that contains a comma, the values can be URL encoded to avoid ambiguities.
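
For reference, a minimal sketch of sending such a notification through `shoutrrr` from Go; the service URL is a placeholder and this is not the image's actual notification code:

```go
package main

import (
	"log"

	"github.com/containrrr/shoutrrr"
)

func main() {
	// CreateSender takes one or more service URLs, i.e. the same values that
	// would be configured via NOTIFICATION_URLS (placeholder credentials).
	sender, err := shoutrrr.CreateSender(
		"smtp://user:password@mail.example.com:587/?from=backup@example.com&to=admin@example.com",
	)
	if err != nil {
		log.Fatalf("creating sender: %v", err)
	}
	// Send fans the message out to every configured service and returns one
	// error slot per URL.
	for _, err := range sender.Send("Backup run finished", nil) {
		if err != nil {
			log.Printf("notification error: %v", err)
		}
	}
}
```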
@@ -554,7 +586,7 @@ services:
Notification backends other than email are also supported. Notification backends other than email are also supported.
Refer to the documentation of [shoutrrr][shoutrrr-docs] to find out about options and configuration. Refer to the documentation of [shoutrrr][shoutrrr-docs] to find out about options and configuration.
[shoutrrr-docs]: https://containrrr.dev/shoutrrr/v0.5/services/overview/ [shoutrrr-docs]: https://containrrr.dev/shoutrrr/0.7/services/overview/
### Customize notifications ### Customize notifications
@@ -643,6 +675,24 @@ volumes:
The backup procedure is guaranteed to wait for all `pre` or `post` commands to finish before proceeding. The backup procedure is guaranteed to wait for all `pre` or `post` commands to finish before proceeding.
However there are no guarantees about the order in which they are run, which could also happen concurrently. However there are no guarantees about the order in which they are run, which could also happen concurrently.
By default the backup command is executed by the user provided by the container's image.
It is possible to specify a custom user that is used to run commands in dedicated labels with the format `docker-volume-backup.[step]-[pre|post].user`:
```yml
version: '3'
services:
gitea:
image: gitea/gitea
volumes:
- backup_data:/tmp
labels:
- docker-volume-backup.archive-pre.user=git
- docker-volume-backup.archive-pre=/bin/bash -c 'cd /tmp; /usr/local/bin/gitea dump -c /data/gitea/conf/app.ini -R -f dump.zip'
```
Make sure the user exists and is present in `passwd` inside the target container.
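
A simplified sketch of how the `.user` label value can end up on the Docker exec call, mirroring the change to `cmd/backup/exec.go` shown further down; container name, command and user are illustrative only:

```go
package main

import (
	"context"
	"fmt"

	"github.com/cosiner/argv"
	"github.com/docker/docker/api/types"
	"github.com/docker/docker/client"
)

// execAsUser runs a labeled command inside a container, optionally as the
// user taken from the `.user` label; an empty user keeps the image default.
func execAsUser(ctx context.Context, cli *client.Client, containerID, command, user string) error {
	args, _ := argv.Argv(command, nil, nil)
	resp, err := cli.ContainerExecCreate(ctx, containerID, types.ExecConfig{
		Cmd:  args[0],
		User: user,
	})
	if err != nil {
		return fmt.Errorf("creating exec: %w", err)
	}
	return cli.ContainerExecStart(ctx, resp.ID, types.ExecStartCheck{})
}

func main() {
	cli, err := client.NewClientWithOpts(client.FromEnv)
	if err != nil {
		panic(err)
	}
	// Container name, command and user are purely illustrative.
	if err := execAsUser(context.Background(), cli, "gitea", "/usr/local/bin/gitea dump -f dump.zip", "git"); err != nil {
		panic(err)
	}
}
```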
### Encrypting your backup using GPG ### Encrypting your backup using GPG
The image supports encrypting backups using GPG out of the box. The image supports encrypting backups using GPG out of the box.
@@ -782,7 +832,7 @@ services:
- docker-volume-backup.archive-post=rm -rf /tmp/backup/my-app - docker-volume-backup.archive-post=rm -rf /tmp/backup/my-app
backup: backup:
image: offen/docker-volume-backup:latest image: offen/docker-volume-backup:v2
environment: environment:
BACKUP_SOURCES: /tmp/backup BACKUP_SOURCES: /tmp/backup
volumes: volumes:
@@ -820,6 +870,23 @@ DOCKER_HOST=tcp://docker_socket_proxy:2375
In case you are using a socket proxy, it must support `GET` and `POST` requests to the `/containers` endpoint. If you are using Docker Swarm, it must also support the `/services` endpoint. If you are using pre/post backup commands, it must also support the `/exec` endpoint. In case you are using a socket proxy, it must support `GET` and `POST` requests to the `/containers` endpoint. If you are using Docker Swarm, it must also support the `/services` endpoint. If you are using pre/post backup commands, it must also support the `/exec` endpoint.
### Use with rootless Docker
It's also possible to use this image with a [rootless Docker installation][rootless-docker].
Instead of mounting `/var/run/docker.sock`, mount the user-specific socket into the container:
```yml
services:
backup:
image: offen/docker-volume-backup:v2
# ... configuration omitted
volumes:
- backup:/backup:ro
- /run/user/1000/docker.sock:/var/run/docker.sock:ro
```
[rootless-docker]: https://docs.docker.com/engine/security/rootless/
### Run multiple backup schedules in the same container ### Run multiple backup schedules in the same container
Multiple backup schedules with different configuration can be configured by mounting an arbitrary number of configuration files (using the `.env` format) into `/etc/dockervolumebackup/conf.d`: Multiple backup schedules with different configuration can be configured by mounting an arbitrary number of configuration files (using the `.env` format) into `/etc/dockervolumebackup/conf.d`:
@@ -870,7 +937,7 @@ BACKUP_SOURCES=/backup/app2_data
If you want to manage backup retention on different schedules, the most straight forward approach is to define a dedicated configuration for retention rule using a different prefix in the `BACKUP_FILENAME` parameter and then run them on different cron schedules. If you want to manage backup retention on different schedules, the most straight forward approach is to define a dedicated configuration for retention rule using a different prefix in the `BACKUP_FILENAME` parameter and then run them on different cron schedules.
For example, if you wanted to keep daily backups for 7 days, weekly backups for a month, and retain monthly backups forever, you could create three configuration files and mount them into `/etc/dockervolumebackup.d`: For example, if you wanted to keep daily backups for 7 days, weekly backups for a month, and retain monthly backups forever, you could create three configuration files and mount them into `/etc/dockervolumebackup/conf.d`:
```ini ```ini
# 01daily.conf # 01daily.conf

View File

@@ -15,9 +15,11 @@ import (
"path" "path"
"path/filepath" "path/filepath"
"strings" "strings"
"github.com/klauspost/compress/zstd"
) )
func createArchive(files []string, inputFilePath, outputFilePath string) error { func createArchive(files []string, inputFilePath, outputFilePath string, compression string) error {
inputFilePath = stripTrailingSlashes(inputFilePath) inputFilePath = stripTrailingSlashes(inputFilePath)
inputFilePath, outputFilePath, err := makeAbsolute(inputFilePath, outputFilePath) inputFilePath, outputFilePath, err := makeAbsolute(inputFilePath, outputFilePath)
if err != nil { if err != nil {
@@ -27,7 +29,7 @@ func createArchive(files []string, inputFilePath, outputFilePath string) error {
return fmt.Errorf("createArchive: error creating output file path: %w", err) return fmt.Errorf("createArchive: error creating output file path: %w", err)
} }
if err := compress(files, outputFilePath, filepath.Dir(inputFilePath)); err != nil { if err := compress(files, outputFilePath, filepath.Dir(inputFilePath), compression); err != nil {
return fmt.Errorf("createArchive: error creating archive: %w", err) return fmt.Errorf("createArchive: error creating archive: %w", err)
} }
@@ -51,18 +53,30 @@ func makeAbsolute(inputFilePath, outputFilePath string) (string, string, error)
return inputFilePath, outputFilePath, err return inputFilePath, outputFilePath, err
} }
func compress(paths []string, outFilePath, subPath string) error { func compress(paths []string, outFilePath, subPath string, algo string) error {
file, err := os.Create(outFilePath) file, err := os.Create(outFilePath)
var compressWriter io.WriteCloser
if err != nil { if err != nil {
return fmt.Errorf("compress: error creating out file: %w", err) return fmt.Errorf("compress: error creating out file: %w", err)
} }
prefix := path.Dir(outFilePath) prefix := path.Dir(outFilePath)
gzipWriter := gzip.NewWriter(file) switch algo {
tarWriter := tar.NewWriter(gzipWriter) case "gz":
compressWriter = gzip.NewWriter(file)
case "zst":
compressWriter, err = zstd.NewWriter(file)
if err != nil {
return fmt.Errorf("compress: zstd error: %w", err)
}
default:
return fmt.Errorf("compress: unsupported compression algorithm: %s", algo)
}
tarWriter := tar.NewWriter(compressWriter)
for _, p := range paths { for _, p := range paths {
if err := writeTarGz(p, tarWriter, prefix); err != nil { if err := writeTarball(p, tarWriter, prefix); err != nil {
return fmt.Errorf("compress: error writing %s to archive: %w", p, err) return fmt.Errorf("compress: error writing %s to archive: %w", p, err)
} }
} }
@@ -72,9 +86,9 @@ func compress(paths []string, outFilePath, subPath string) error {
return fmt.Errorf("compress: error closing tar writer: %w", err) return fmt.Errorf("compress: error closing tar writer: %w", err)
} }
err = gzipWriter.Close() err = compressWriter.Close()
if err != nil { if err != nil {
return fmt.Errorf("compress: error closing gzip writer: %w", err) return fmt.Errorf("compress: error closing compression writer: %w", err)
} }
err = file.Close() err = file.Close()
@@ -85,10 +99,10 @@ func compress(paths []string, outFilePath, subPath string) error {
return nil return nil
} }
func writeTarGz(path string, tarWriter *tar.Writer, prefix string) error { func writeTarball(path string, tarWriter *tar.Writer, prefix string) error {
fileInfo, err := os.Lstat(path) fileInfo, err := os.Lstat(path)
if err != nil { if err != nil {
return fmt.Errorf("writeTarGz: error getting file infor for %s: %w", path, err) return fmt.Errorf("writeTarball: error getting file infor for %s: %w", path, err)
} }
if fileInfo.Mode()&os.ModeSocket == os.ModeSocket { if fileInfo.Mode()&os.ModeSocket == os.ModeSocket {
@@ -99,19 +113,19 @@ func writeTarGz(path string, tarWriter *tar.Writer, prefix string) error {
if fileInfo.Mode()&os.ModeSymlink == os.ModeSymlink { if fileInfo.Mode()&os.ModeSymlink == os.ModeSymlink {
var err error var err error
if link, err = os.Readlink(path); err != nil { if link, err = os.Readlink(path); err != nil {
return fmt.Errorf("writeTarGz: error resolving symlink %s: %w", path, err) return fmt.Errorf("writeTarball: error resolving symlink %s: %w", path, err)
} }
} }
header, err := tar.FileInfoHeader(fileInfo, link) header, err := tar.FileInfoHeader(fileInfo, link)
if err != nil { if err != nil {
return fmt.Errorf("writeTarGz: error getting file info header: %w", err) return fmt.Errorf("writeTarball: error getting file info header: %w", err)
} }
header.Name = strings.TrimPrefix(path, prefix) header.Name = strings.TrimPrefix(path, prefix)
err = tarWriter.WriteHeader(header) err = tarWriter.WriteHeader(header)
if err != nil { if err != nil {
return fmt.Errorf("writeTarGz: error writing file info header: %w", err) return fmt.Errorf("writeTarball: error writing file info header: %w", err)
} }
if !fileInfo.Mode().IsRegular() { if !fileInfo.Mode().IsRegular() {
@@ -120,13 +134,13 @@ func writeTarGz(path string, tarWriter *tar.Writer, prefix string) error {
file, err := os.Open(path) file, err := os.Open(path)
if err != nil { if err != nil {
return fmt.Errorf("writeTarGz: error opening %s: %w", path, err) return fmt.Errorf("writeTarball: error opening %s: %w", path, err)
} }
defer file.Close() defer file.Close()
_, err = io.Copy(tarWriter, file) _, err = io.Copy(tarWriter, file)
if err != nil { if err != nil {
return fmt.Errorf("writeTarGz: error copying %s to tar writer: %w", path, err) return fmt.Errorf("writeTarball: error copying %s to tar writer: %w", path, err)
} }
return nil return nil

View File

@@ -16,58 +16,60 @@ import (
// Config holds all configuration values that are expected to be set // Config holds all configuration values that are expected to be set
// by users. // by users.
type Config struct { type Config struct {
AwsS3BucketName string `split_words:"true"` AwsS3BucketName string `split_words:"true"`
AwsS3Path string `split_words:"true"` AwsS3Path string `split_words:"true"`
AwsEndpoint string `split_words:"true" default:"s3.amazonaws.com"` AwsEndpoint string `split_words:"true" default:"s3.amazonaws.com"`
AwsEndpointProto string `split_words:"true" default:"https"` AwsEndpointProto string `split_words:"true" default:"https"`
AwsEndpointInsecure bool `split_words:"true"` AwsEndpointInsecure bool `split_words:"true"`
AwsEndpointCACert CertDecoder `envconfig:"AWS_ENDPOINT_CA_CERT"` AwsEndpointCACert CertDecoder `envconfig:"AWS_ENDPOINT_CA_CERT"`
AwsStorageClass string `split_words:"true"` AwsStorageClass string `split_words:"true"`
AwsAccessKeyID string `envconfig:"AWS_ACCESS_KEY_ID"` AwsAccessKeyID string `envconfig:"AWS_ACCESS_KEY_ID"`
AwsAccessKeyIDFile string `envconfig:"AWS_ACCESS_KEY_ID_FILE"` AwsAccessKeyIDFile string `envconfig:"AWS_ACCESS_KEY_ID_FILE"`
AwsSecretAccessKey string `split_words:"true"` AwsSecretAccessKey string `split_words:"true"`
AwsSecretAccessKeyFile string `split_words:"true"` AwsSecretAccessKeyFile string `split_words:"true"`
AwsIamRoleEndpoint string `split_words:"true"` AwsIamRoleEndpoint string `split_words:"true"`
BackupSources string `split_words:"true" default:"/backup"` AwsPartSize int64 `split_words:"true"`
BackupFilename string `split_words:"true" default:"backup-%Y-%m-%dT%H-%M-%S.tar.gz"` BackupCompression CompressionType `split_words:"true" default:"gz"`
BackupFilenameExpand bool `split_words:"true"` BackupSources string `split_words:"true" default:"/backup"`
BackupLatestSymlink string `split_words:"true"` BackupFilename string `split_words:"true" default:"backup-%Y-%m-%dT%H-%M-%S.{{ .Extension }}"`
BackupArchive string `split_words:"true" default:"/archive"` BackupFilenameExpand bool `split_words:"true"`
BackupRetentionDays int32 `split_words:"true" default:"-1"` BackupLatestSymlink string `split_words:"true"`
BackupPruningLeeway time.Duration `split_words:"true" default:"1m"` BackupArchive string `split_words:"true" default:"/archive"`
BackupPruningPrefix string `split_words:"true"` BackupRetentionDays int32 `split_words:"true" default:"-1"`
BackupStopContainerLabel string `split_words:"true" default:"true"` BackupPruningLeeway time.Duration `split_words:"true" default:"1m"`
BackupFromSnapshot bool `split_words:"true"` BackupPruningPrefix string `split_words:"true"`
BackupExcludeRegexp RegexpDecoder `split_words:"true"` BackupStopContainerLabel string `split_words:"true" default:"true"`
GpgPassphrase string `split_words:"true"` BackupFromSnapshot bool `split_words:"true"`
NotificationURLs []string `envconfig:"NOTIFICATION_URLS"` BackupExcludeRegexp RegexpDecoder `split_words:"true"`
NotificationLevel string `split_words:"true" default:"error"` GpgPassphrase string `split_words:"true"`
EmailNotificationRecipient string `split_words:"true"` NotificationURLs []string `envconfig:"NOTIFICATION_URLS"`
EmailNotificationSender string `split_words:"true" default:"noreply@nohost"` NotificationLevel string `split_words:"true" default:"error"`
EmailSMTPHost string `envconfig:"EMAIL_SMTP_HOST"` EmailNotificationRecipient string `split_words:"true"`
EmailSMTPPort int `envconfig:"EMAIL_SMTP_PORT" default:"587"` EmailNotificationSender string `split_words:"true" default:"noreply@nohost"`
EmailSMTPUsername string `envconfig:"EMAIL_SMTP_USERNAME"` EmailSMTPHost string `envconfig:"EMAIL_SMTP_HOST"`
EmailSMTPPassword string `envconfig:"EMAIL_SMTP_PASSWORD"` EmailSMTPPort int `envconfig:"EMAIL_SMTP_PORT" default:"587"`
WebdavUrl string `split_words:"true"` EmailSMTPUsername string `envconfig:"EMAIL_SMTP_USERNAME"`
WebdavUrlInsecure bool `split_words:"true"` EmailSMTPPassword string `envconfig:"EMAIL_SMTP_PASSWORD"`
WebdavPath string `split_words:"true" default:"/"` WebdavUrl string `split_words:"true"`
WebdavUsername string `split_words:"true"` WebdavUrlInsecure bool `split_words:"true"`
WebdavPassword string `split_words:"true"` WebdavPath string `split_words:"true" default:"/"`
SSHHostName string `split_words:"true"` WebdavUsername string `split_words:"true"`
SSHPort string `split_words:"true" default:"22"` WebdavPassword string `split_words:"true"`
SSHUser string `split_words:"true"` SSHHostName string `split_words:"true"`
SSHPassword string `split_words:"true"` SSHPort string `split_words:"true" default:"22"`
SSHIdentityFile string `split_words:"true" default:"/root/.ssh/id_rsa"` SSHUser string `split_words:"true"`
SSHIdentityPassphrase string `split_words:"true"` SSHPassword string `split_words:"true"`
SSHRemotePath string `split_words:"true"` SSHIdentityFile string `split_words:"true" default:"/root/.ssh/id_rsa"`
ExecLabel string `split_words:"true"` SSHIdentityPassphrase string `split_words:"true"`
ExecForwardOutput bool `split_words:"true"` SSHRemotePath string `split_words:"true"`
LockTimeout time.Duration `split_words:"true" default:"60m"` ExecLabel string `split_words:"true"`
AzureStorageAccountName string `split_words:"true"` ExecForwardOutput bool `split_words:"true"`
AzureStoragePrimaryAccountKey string `split_words:"true"` LockTimeout time.Duration `split_words:"true" default:"60m"`
AzureStorageContainerName string `split_words:"true"` AzureStorageAccountName string `split_words:"true"`
AzureStoragePath string `split_words:"true"` AzureStoragePrimaryAccountKey string `split_words:"true"`
AzureStorageEndpoint string `split_words:"true" default:"https://{{ .AccountName }}.blob.core.windows.net/"` AzureStorageContainerName string `split_words:"true"`
AzureStoragePath string `split_words:"true"`
AzureStorageEndpoint string `split_words:"true" default:"https://{{ .AccountName }}.blob.core.windows.net/"`
} }
func (c *Config) resolveSecret(envVar string, secretPath string) (string, error) { func (c *Config) resolveSecret(envVar string, secretPath string) (string, error) {
@@ -81,6 +83,22 @@ func (c *Config) resolveSecret(envVar string, secretPath string) (string, error)
return string(data), nil return string(data), nil
} }
type CompressionType string
func (c *CompressionType) Decode(v string) error {
switch v {
case "gz", "zst":
*c = CompressionType(v)
return nil
default:
return fmt.Errorf("config: error decoding compression type %s", v)
}
}
func (c *CompressionType) String() string {
return string(*c)
}
type CertDecoder struct { type CertDecoder struct {
Cert *x509.Certificate Cert *x509.Certificate
} }

View File

@@ -21,7 +21,7 @@ import (
"golang.org/x/sync/errgroup" "golang.org/x/sync/errgroup"
) )
func (s *script) exec(containerRef string, command string) ([]byte, []byte, error) { func (s *script) exec(containerRef string, command string, user string) ([]byte, []byte, error) {
args, _ := argv.Argv(command, nil, nil) args, _ := argv.Argv(command, nil, nil)
commandEnv := []string{ commandEnv := []string{
fmt.Sprintf("COMMAND_RUNTIME_ARCHIVE_FILEPATH=%s", s.file), fmt.Sprintf("COMMAND_RUNTIME_ARCHIVE_FILEPATH=%s", s.file),
@@ -31,6 +31,7 @@ func (s *script) exec(containerRef string, command string) ([]byte, []byte, erro
AttachStdin: true, AttachStdin: true,
AttachStderr: true, AttachStderr: true,
Env: commandEnv, Env: commandEnv,
User: user,
}) })
if err != nil { if err != nil {
return nil, nil, fmt.Errorf("exec: error creating container exec: %w", err) return nil, nil, fmt.Errorf("exec: error creating container exec: %w", err)
@@ -90,7 +91,6 @@ func (s *script) runLabeledCommands(label string) error {
}) })
} }
containersWithCommand, err := s.cli.ContainerList(context.Background(), types.ContainerListOptions{ containersWithCommand, err := s.cli.ContainerList(context.Background(), types.ContainerListOptions{
Quiet: true,
Filters: filters.NewArgs(f...), Filters: filters.NewArgs(f...),
}) })
if err != nil { if err != nil {
@@ -104,7 +104,6 @@ func (s *script) runLabeledCommands(label string) error {
Value: "docker-volume-backup.exec-pre", Value: "docker-volume-backup.exec-pre",
} }
deprecatedContainers, err := s.cli.ContainerList(context.Background(), types.ContainerListOptions{ deprecatedContainers, err := s.cli.ContainerList(context.Background(), types.ContainerListOptions{
Quiet: true,
Filters: filters.NewArgs(f...), Filters: filters.NewArgs(f...),
}) })
if err != nil { if err != nil {
@@ -122,7 +121,6 @@ func (s *script) runLabeledCommands(label string) error {
Value: "docker-volume-backup.exec-post", Value: "docker-volume-backup.exec-post",
} }
deprecatedContainers, err := s.cli.ContainerList(context.Background(), types.ContainerListOptions{ deprecatedContainers, err := s.cli.ContainerList(context.Background(), types.ContainerListOptions{
Quiet: true,
Filters: filters.NewArgs(f...), Filters: filters.NewArgs(f...),
}) })
if err != nil { if err != nil {
@@ -159,8 +157,11 @@ func (s *script) runLabeledCommands(label string) error {
cmd, _ = c.Labels["docker-volume-backup.exec-post"] cmd, _ = c.Labels["docker-volume-backup.exec-post"]
} }
s.logger.Infof("Running %s command %s for container %s", label, cmd, strings.TrimPrefix(c.Names[0], "/")) userLabelName := fmt.Sprintf("%s.user", label)
stdout, stderr, err := s.exec(c.ID, cmd) user := c.Labels[userLabelName]
s.logger.Info(fmt.Sprintf("Running %s command %s for container %s", label, cmd, strings.TrimPrefix(c.Names[0], "/")))
stdout, stderr, err := s.exec(c.ID, cmd, user)
if s.c.ExecForwardOutput { if s.c.ExecForwardOutput {
os.Stderr.Write(stderr) os.Stderr.Write(stderr)
os.Stdout.Write(stdout) os.Stdout.Write(stdout)

View File

@@ -41,9 +41,11 @@ func (s *script) lock(lockfile string) (func() error, error) {
} }
if !s.encounteredLock { if !s.encounteredLock {
s.logger.Infof( s.logger.Info(
"Exclusive lock was not available on first attempt. Will retry until it becomes available or the timeout of %s is exceeded.", fmt.Sprintf(
s.c.LockTimeout, "Exclusive lock was not available on first attempt. Will retry until it becomes available or the timeout of %s is exceeded.",
s.c.LockTimeout,
),
) )
s.encounteredLock = true s.encounteredLock = true
} }

View File

@@ -4,6 +4,7 @@
package main package main
import ( import (
"fmt"
"os" "os"
) )
@@ -21,7 +22,9 @@ func main() {
if pArg := recover(); pArg != nil { if pArg := recover(); pArg != nil {
if err, ok := pArg.(error); ok { if err, ok := pArg.(error); ok {
if hookErr := s.runHooks(err); hookErr != nil { if hookErr := s.runHooks(err); hookErr != nil {
s.logger.Errorf("An error occurred calling the registered hooks: %s", hookErr) s.logger.Error(
fmt.Sprintf("An error occurred calling the registered hooks: %s", hookErr),
)
} }
os.Exit(1) os.Exit(1)
} }
@@ -29,9 +32,12 @@ func main() {
} }
if err := s.runHooks(nil); err != nil { if err := s.runHooks(nil); err != nil {
s.logger.Errorf( s.logger.Error(
"Backup procedure ran successfully, but an error ocurred calling the registered hooks: %v", fmt.Sprintf(
err,
"Backup procedure ran successfully, but an error ocurred calling the registered hooks: %v",
err,
),
) )
os.Exit(1) os.Exit(1)
} }

View File

@@ -4,11 +4,13 @@
package main package main
import ( import (
"bytes"
"context" "context"
"errors" "errors"
"fmt" "fmt"
"io" "io"
"io/fs" "io/fs"
"log/slog"
"os" "os"
"path" "path"
"path/filepath" "path/filepath"
@@ -25,13 +27,13 @@ import (
"github.com/containrrr/shoutrrr" "github.com/containrrr/shoutrrr"
"github.com/containrrr/shoutrrr/pkg/router" "github.com/containrrr/shoutrrr/pkg/router"
"github.com/docker/docker/api/types" "github.com/docker/docker/api/types"
ctr "github.com/docker/docker/api/types/container"
"github.com/docker/docker/api/types/filters" "github.com/docker/docker/api/types/filters"
"github.com/docker/docker/api/types/swarm" "github.com/docker/docker/api/types/swarm"
"github.com/docker/docker/client" "github.com/docker/docker/client"
"github.com/kelseyhightower/envconfig" "github.com/kelseyhightower/envconfig"
"github.com/leekchan/timeutil" "github.com/leekchan/timeutil"
"github.com/otiai10/copy" "github.com/otiai10/copy"
"github.com/sirupsen/logrus"
"golang.org/x/crypto/openpgp" "golang.org/x/crypto/openpgp"
"golang.org/x/sync/errgroup" "golang.org/x/sync/errgroup"
) )
@@ -41,7 +43,7 @@ import (
type script struct { type script struct {
cli *client.Client cli *client.Client
storages []storage.Backend storages []storage.Backend
logger *logrus.Logger logger *slog.Logger
sender *router.ServiceRouter sender *router.ServiceRouter
template *template.Template template *template.Template
hooks []hook hooks []hook
@@ -62,13 +64,8 @@ type script struct {
func newScript() (*script, error) { func newScript() (*script, error) {
stdOut, logBuffer := buffer(os.Stdout) stdOut, logBuffer := buffer(os.Stdout)
s := &script{ s := &script{
c: &Config{}, c: &Config{},
logger: &logrus.Logger{ logger: slog.New(slog.NewTextHandler(stdOut, nil)),
Out: stdOut,
Formatter: new(logrus.TextFormatter),
Hooks: make(logrus.LevelHooks),
Level: logrus.InfoLevel,
},
stats: &Stats{ stats: &Stats{
StartTime: time.Now(), StartTime: time.Now(),
LogOutput: logBuffer, LogOutput: logBuffer,
@@ -93,6 +90,20 @@ func newScript() (*script, error) {
} }
s.file = path.Join("/tmp", s.c.BackupFilename) s.file = path.Join("/tmp", s.c.BackupFilename)
tmplFileName, tErr := template.New("extension").Parse(s.file)
if tErr != nil {
return nil, fmt.Errorf("newScript: unable to parse backup file extension template: %w", tErr)
}
var bf bytes.Buffer
if tErr := tmplFileName.Execute(&bf, map[string]string{
"Extension": fmt.Sprintf("tar.%s", s.c.BackupCompression),
}); tErr != nil {
return nil, fmt.Errorf("newScript: error executing backup file extension template: %w", tErr)
}
s.file = bf.String()
if s.c.BackupFilenameExpand { if s.c.BackupFilenameExpand {
s.file = os.ExpandEnv(s.file) s.file = os.ExpandEnv(s.file)
s.c.BackupLatestSymlink = os.ExpandEnv(s.c.BackupLatestSymlink) s.c.BackupLatestSymlink = os.ExpandEnv(s.c.BackupLatestSymlink)
@@ -110,15 +121,15 @@ func newScript() (*script, error) {
s.cli = cli s.cli = cli
} }
logFunc := func(logType storage.LogLevel, context string, msg string, params ...interface{}) { logFunc := func(logType storage.LogLevel, context string, msg string, params ...any) {
switch logType { switch logType {
case storage.LogLevelWarning: case storage.LogLevelWarning:
s.logger.Warnf("["+context+"] "+msg, params...) s.logger.Warn(fmt.Sprintf("["+context+"] "+msg, params...))
case storage.LogLevelError: case storage.LogLevelError:
s.logger.Errorf("["+context+"] "+msg, params...) s.logger.Error(fmt.Sprintf("["+context+"] "+msg, params...))
case storage.LogLevelInfo: case storage.LogLevelInfo:
default: default:
s.logger.Infof("["+context+"] "+msg, params...) s.logger.Info(fmt.Sprintf("["+context+"] "+msg, params...))
} }
} }
@@ -142,6 +153,7 @@ func newScript() (*script, error) {
BucketName: s.c.AwsS3BucketName, BucketName: s.c.AwsS3BucketName,
StorageClass: s.c.AwsStorageClass, StorageClass: s.c.AwsStorageClass,
CACert: s.c.AwsEndpointCACert.Cert, CACert: s.c.AwsEndpointCACert.Cert,
PartSize: s.c.AwsPartSize,
} }
if s3Backend, err := s3.NewStorageBackend(s3Config, logFunc); err != nil { if s3Backend, err := s3.NewStorageBackend(s3Config, logFunc); err != nil {
return nil, err return nil, err
@@ -280,9 +292,7 @@ func (s *script) stopContainers() (func() error, error) {
return noop, nil return noop, nil
} }
allContainers, err := s.cli.ContainerList(context.Background(), types.ContainerListOptions{ allContainers, err := s.cli.ContainerList(context.Background(), types.ContainerListOptions{})
Quiet: true,
})
if err != nil { if err != nil {
return noop, fmt.Errorf("stopContainers: error querying for containers: %w", err) return noop, fmt.Errorf("stopContainers: error querying for containers: %w", err)
} }
@@ -292,7 +302,6 @@ func (s *script) stopContainers() (func() error, error) {
s.c.BackupStopContainerLabel, s.c.BackupStopContainerLabel,
) )
containersToStop, err := s.cli.ContainerList(context.Background(), types.ContainerListOptions{ containersToStop, err := s.cli.ContainerList(context.Background(), types.ContainerListOptions{
Quiet: true,
Filters: filters.NewArgs(filters.KeyValuePair{ Filters: filters.NewArgs(filters.KeyValuePair{
Key: "label", Key: "label",
Value: containerLabel, Value: containerLabel,
@@ -307,17 +316,19 @@ func (s *script) stopContainers() (func() error, error) {
return noop, nil return noop, nil
} }
s.logger.Infof( s.logger.Info(
"Stopping %d container(s) labeled `%s` out of %d running container(s).", fmt.Sprintf(
len(containersToStop), "Stopping %d container(s) labeled `%s` out of %d running container(s).",
containerLabel, len(containersToStop),
len(allContainers), containerLabel,
len(allContainers),
),
) )
var stoppedContainers []types.Container var stoppedContainers []types.Container
var stopErrors []error var stopErrors []error
for _, container := range containersToStop { for _, container := range containersToStop {
if err := s.cli.ContainerStop(context.Background(), container.ID, nil); err != nil { if err := s.cli.ContainerStop(context.Background(), container.ID, ctr.StopOptions{}); err != nil {
stopErrors = append(stopErrors, err) stopErrors = append(stopErrors, err)
} else { } else {
stoppedContainers = append(stoppedContainers, container) stoppedContainers = append(stoppedContainers, container)
@@ -366,7 +377,7 @@ func (s *script) stopContainers() (func() error, error) {
if serviceMatch.ID == "" { if serviceMatch.ID == "" {
return fmt.Errorf("stopContainers: couldn't find service with name %s", serviceName) return fmt.Errorf("stopContainers: couldn't find service with name %s", serviceName)
} }
serviceMatch.Spec.TaskTemplate.ForceUpdate = 1 serviceMatch.Spec.TaskTemplate.ForceUpdate += 1
if _, err := s.cli.ServiceUpdate( if _, err := s.cli.ServiceUpdate(
context.Background(), serviceMatch.ID, context.Background(), serviceMatch.ID,
serviceMatch.Version, serviceMatch.Spec, types.ServiceUpdateOptions{}, serviceMatch.Version, serviceMatch.Spec, types.ServiceUpdateOptions{},
@@ -383,9 +394,11 @@ func (s *script) stopContainers() (func() error, error) {
errors.Join(restartErrors...), errors.Join(restartErrors...),
) )
} }
s.logger.Infof( s.logger.Info(
"Restarted %d container(s) and the matching service(s).", fmt.Sprintf(
len(stoppedContainers), "Restarted %d container(s) and the matching service(s).",
len(stoppedContainers),
),
) )
return nil return nil
}, stopError }, stopError
@@ -409,7 +422,9 @@ func (s *script) createArchive() error {
if err := remove(backupSources); err != nil { if err := remove(backupSources); err != nil {
return fmt.Errorf("createArchive: error removing snapshot: %w", err) return fmt.Errorf("createArchive: error removing snapshot: %w", err)
} }
s.logger.Infof("Removed snapshot `%s`.", backupSources) s.logger.Info(
fmt.Sprintf("Removed snapshot `%s`.", backupSources),
)
return nil return nil
}) })
if err := copy.Copy(s.c.BackupSources, backupSources, copy.Options{ if err := copy.Copy(s.c.BackupSources, backupSources, copy.Options{
@@ -418,7 +433,9 @@ func (s *script) createArchive() error {
}); err != nil { }); err != nil {
return fmt.Errorf("createArchive: error creating snapshot: %w", err) return fmt.Errorf("createArchive: error creating snapshot: %w", err)
} }
s.logger.Infof("Created snapshot of `%s` at `%s`.", s.c.BackupSources, backupSources) s.logger.Info(
fmt.Sprintf("Created snapshot of `%s` at `%s`.", s.c.BackupSources, backupSources),
)
} }
tarFile := s.file tarFile := s.file
@@ -426,7 +443,9 @@ func (s *script) createArchive() error {
if err := remove(tarFile); err != nil { if err := remove(tarFile); err != nil {
return fmt.Errorf("createArchive: error removing tar file: %w", err) return fmt.Errorf("createArchive: error removing tar file: %w", err)
} }
s.logger.Infof("Removed tar file `%s`.", tarFile) s.logger.Info(
fmt.Sprintf("Removed tar file `%s`.", tarFile),
)
return nil return nil
}) })
@@ -450,11 +469,13 @@ func (s *script) createArchive() error {
return fmt.Errorf("createArchive: error walking filesystem tree: %w", err) return fmt.Errorf("createArchive: error walking filesystem tree: %w", err)
} }
if err := createArchive(filesEligibleForBackup, backupSources, tarFile); err != nil { if err := createArchive(filesEligibleForBackup, backupSources, tarFile, s.c.BackupCompression.String()); err != nil {
return fmt.Errorf("createArchive: error compressing backup folder: %w", err) return fmt.Errorf("createArchive: error compressing backup folder: %w", err)
} }
s.logger.Infof("Created backup of `%s` at `%s`.", backupSources, tarFile) s.logger.Info(
fmt.Sprintf("Created backup of `%s` at `%s`.", backupSources, tarFile),
)
return nil return nil
} }
@@ -471,7 +492,9 @@ func (s *script) encryptArchive() error {
if err := remove(gpgFile); err != nil { if err := remove(gpgFile); err != nil {
return fmt.Errorf("encryptArchive: error removing gpg file: %w", err) return fmt.Errorf("encryptArchive: error removing gpg file: %w", err)
} }
s.logger.Infof("Removed GPG file `%s`.", gpgFile) s.logger.Info(
fmt.Sprintf("Removed GPG file `%s`.", gpgFile),
)
return nil return nil
}) })
@@ -501,7 +524,9 @@ func (s *script) encryptArchive() error {
} }
s.file = gpgFile s.file = gpgFile
s.logger.Infof("Encrypted backup using given passphrase, saving as `%s`.", s.file) s.logger.Info(
fmt.Sprintf("Encrypted backup using given passphrase, saving as `%s`.", s.file),
)
return nil return nil
} }
@@ -573,7 +598,9 @@ func (s *script) pruneBackups() error {
// is non-nil. // is non-nil.
func (s *script) must(err error) { func (s *script) must(err error) {
if err != nil { if err != nil {
s.logger.Errorf("Fatal error running backup: %s", err) s.logger.Error(
fmt.Sprintf("Fatal error running backup: %s", err),
)
panic(err) panic(err)
} }
} }

go.mod (73 changed lines)

@@ -1,74 +1,61 @@
module github.com/offen/docker-volume-backup module github.com/offen/docker-volume-backup
go 1.19 go 1.21
require ( require (
github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.2.0 github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.3.0
github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v0.6.1 github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v1.1.0
github.com/containrrr/shoutrrr v0.5.2 github.com/containrrr/shoutrrr v0.7.1
github.com/cosiner/argv v0.1.0 github.com/cosiner/argv v0.1.0
github.com/docker/docker v20.10.11+incompatible github.com/docker/docker v24.0.5+incompatible
github.com/gofrs/flock v0.8.1 github.com/gofrs/flock v0.8.1
github.com/kelseyhightower/envconfig v1.4.0 github.com/kelseyhightower/envconfig v1.4.0
github.com/klauspost/compress v1.16.7
github.com/leekchan/timeutil v0.0.0-20150802142658-28917288c48d github.com/leekchan/timeutil v0.0.0-20150802142658-28917288c48d
github.com/minio/minio-go/v7 v7.0.44 github.com/minio/minio-go/v7 v7.0.61
github.com/otiai10/copy v1.7.0 github.com/otiai10/copy v1.11.0
github.com/pkg/sftp v1.13.5 github.com/pkg/sftp v1.13.6
github.com/sirupsen/logrus v1.9.0 github.com/studio-b12/gowebdav v0.9.0
github.com/studio-b12/gowebdav v0.0.0-20220128162035-c7b1ff8a5e62 golang.org/x/crypto v0.11.0
golang.org/x/crypto v0.3.0 golang.org/x/sync v0.3.0
golang.org/x/sync v0.0.0-20220601150217-0de741cfad7f
) )
require ( require (
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.1.4 // indirect github.com/Azure/azure-sdk-for-go/sdk/azcore v1.6.0 // indirect
github.com/Azure/azure-sdk-for-go/sdk/internal v1.0.1 // indirect github.com/Azure/azure-sdk-for-go/sdk/internal v1.3.0 // indirect
github.com/AzureAD/microsoft-authentication-library-for-go v0.7.0 // indirect github.com/AzureAD/microsoft-authentication-library-for-go v1.0.0 // indirect
github.com/Microsoft/go-winio v0.5.2 // indirect github.com/Microsoft/go-winio v0.5.2 // indirect
github.com/containerd/containerd v1.6.6 // indirect github.com/docker/distribution v2.8.2+incompatible // indirect
github.com/docker/distribution v2.7.1+incompatible // indirect
github.com/docker/go-connections v0.4.0 // indirect github.com/docker/go-connections v0.4.0 // indirect
github.com/docker/go-units v0.4.0 // indirect
github.com/dustin/go-humanize v1.0.0 → v1.0.1 // indirect
github.com/fatih/color v1.10.0 → v1.13.0 // indirect
github.com/fsnotify/fsnotify v1.4.9 // indirect
github.com/gogo/protobuf v1.3.2 // indirect
github.com/golang-jwt/jwt/v4 v4.4.2 → v4.5.0 // indirect
github.com/golang/protobuf v1.5.2 // indirect
github.com/google/uuid v1.3.0 // indirect
github.com/gorilla/mux v1.7.3 // indirect
github.com/json-iterator/go v1.1.12 // indirect
github.com/klauspost/compress v1.15.12 // indirect
github.com/klauspost/cpuid/v2 v2.2.1 → v2.2.5 // indirect
github.com/kr/fs v0.1.0 // indirect
github.com/kr/text v0.2.0 // indirect
github.com/kylelemons/godebug v1.1.0 // indirect
github.com/mattn/go-colorable v0.1.8 → v0.1.13 // indirect
github.com/mattn/go-isatty v0.0.12 → v0.0.16 // indirect
github.com/minio/md5-simd v1.1.2 // indirect
github.com/minio/sha256-simd v1.0.0 → v1.0.1 // indirect
github.com/moby/term v0.0.0-20200312100748-672ec06f55cd // indirect
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
github.com/modern-go/reflect2 v1.0.2 // indirect
github.com/morikuni/aec v1.0.0 // indirect
github.com/niemeyer/pretty v0.0.0-20200227124842-a10e7caefd8e // indirect
github.com/nxadm/tail v1.4.6 // indirect
github.com/onsi/ginkgo v1.14.2 // indirect
github.com/onsi/gomega v1.10.3 // indirect
github.com/opencontainers/go-digest v1.0.0 // indirect
github.com/opencontainers/image-spec v1.0.3-0.20211202183452-c5a74bcca799 // indirect
github.com/pkg/browser v0.0.0-20210115035449-ce105d075bb4 → v0.0.0-20210911075715-681adbf594b8 // indirect
github.com/pkg/errors v0.9.1 // indirect
github.com/rs/xid v1.4.0 → v1.5.0 // indirect
github.com/sirupsen/logrus v1.9.3 // indirect
golang.org/x/net v0.2.0 → v0.12.0 // indirect
golang.org/x/sys v0.2.0 → v0.10.0 // indirect
golang.org/x/text v0.4.0 → v0.11.0 // indirect
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1 // indirect
google.golang.org/genproto v0.0.0-20220602131408-e326c6e8e9c8 // indirect
google.golang.org/grpc v1.47.0 // indirect
google.golang.org/protobuf v1.28.0 // indirect
gopkg.in/check.v1 v1.0.0-20200227125254-8fa46927fb4f // indirect
gopkg.in/ini.v1 v1.67.0 // indirect
gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7 // indirect
gopkg.in/yaml.v2 v2.4.0 // indirect
gotest.tools/v3 v3.0.3 // indirect
)

go.sum (1155 changes): file diff suppressed because it is too large.

@@ -8,6 +8,7 @@ import (
 	"crypto/x509"
 	"errors"
 	"fmt"
+	"os"
 	"path"
 	"path/filepath"
 	"time"

@@ -22,6 +23,7 @@ type s3Storage struct {
 	client       *minio.Client
 	bucket       string
 	storageClass string
+	partSize     int64
 }

 // Config contains values that define the configuration of a S3 backend.

@@ -35,6 +37,7 @@ type Config struct {
 	RemotePath   string
 	BucketName   string
 	StorageClass string
+	PartSize     int64
 	CACert       *x509.Certificate
 }

@@ -89,6 +92,7 @@ func NewStorageBackend(opts Config, logFunc storage.Log) (storage.Backend, error
 		client:       mc,
 		bucket:       opts.BucketName,
 		storageClass: opts.StorageClass,
+		partSize:     opts.PartSize,
 	}, nil
 }
@@ -100,16 +104,32 @@ func (v *s3Storage) Name() string {
 // Copy copies the given file to the S3/Minio storage backend.
 func (b *s3Storage) Copy(file string) error {
 	_, name := path.Split(file)
-	if _, err := b.client.FPutObject(context.Background(), b.bucket, filepath.Join(b.DestinationPath, name), file, minio.PutObjectOptions{
+	putObjectOptions := minio.PutObjectOptions{
 		ContentType:  "application/tar+gzip",
 		StorageClass: b.storageClass,
-	}); err != nil {
+	}
+
+	if b.partSize > 0 {
+		srcFileInfo, err := os.Stat(file)
+		if err != nil {
+			return fmt.Errorf("(*s3Storage).Copy: error reading the local file: %w", err)
+		}
+
+		_, partSize, _, err := minio.OptimalPartInfo(srcFileInfo.Size(), uint64(b.partSize*1024*1024))
+		if err != nil {
+			return fmt.Errorf("(*s3Storage).Copy: error computing the optimal s3 part size: %w", err)
+		}
+
+		putObjectOptions.PartSize = uint64(partSize)
+	}
+
+	if _, err := b.client.FPutObject(context.Background(), b.bucket, filepath.Join(b.DestinationPath, name), file, putObjectOptions); err != nil {
 		if errResp := minio.ToErrorResponse(err); errResp.Message != "" {
 			return fmt.Errorf("(*s3Storage).Copy: error uploading backup to remote storage: [Message]: '%s', [Code]: %s, [StatusCode]: %d", errResp.Message, errResp.Code, errResp.StatusCode)
 		}
 		return fmt.Errorf("(*s3Storage).Copy: error uploading backup to remote storage: %w", err)
 	}
 	b.Log(storage.LogLevelInfo, b.Name(), "Uploaded a copy of backup `%s` to bucket `%s`.", file, b.bucket)
 	return nil
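For reference, the configured part size is treated as a value in MiB (note the b.partSize*1024*1024 conversion) and is then passed through minio-go's OptimalPartInfo, which adjusts it against the actual object size before the upload starts. Below is a minimal, self-contained sketch of that calculation outside the storage backend; the 16 MiB part size and 5 GiB object size are made-up example values.

```go
package main

import (
	"fmt"

	"github.com/minio/minio-go/v7"
)

func main() {
	// Assume a part size of 16 MiB was configured and the backup archive
	// is 5 GiB; both numbers are arbitrary examples.
	configuredPartSizeMiB := int64(16)
	objectSize := int64(5) * 1024 * 1024 * 1024

	// Mirrors the call in Copy above: the MiB value is converted to bytes
	// and minio-go returns the part layout it would actually use.
	totalParts, partSize, lastPartSize, err := minio.OptimalPartInfo(objectSize, uint64(configuredPartSizeMiB*1024*1024))
	if err != nil {
		panic(err)
	}
	fmt.Printf("parts=%d partSize=%d lastPartSize=%d\n", totalParts, partSize, lastPartSize)
}
```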


@@ -29,7 +29,7 @@ const (
 	LogLevelError
 )

-type Log func(logType LogLevel, context string, msg string, params ...interface{})
+type Log func(logType LogLevel, context string, msg string, params ...any)

 // PruneStats is a wrapper struct for returning stats after pruning
 type PruneStats struct {
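The only change here is the variadic parameter type: any is the Go 1.18+ alias for interface{}, so existing callers of Log keep working unchanged. A self-contained sketch of a function satisfying the type follows; the LogLevel and Log declarations below merely mirror the ones in this hunk, and the slog-based body is purely illustrative.

```go
package main

import (
	"fmt"
	"log/slog"
)

// LogLevel and Log mirror the storage package's declarations shown above.
type LogLevel int

type Log func(logType LogLevel, context string, msg string, params ...any)

func main() {
	var logFunc Log = func(_ LogLevel, context string, msg string, params ...any) {
		// `any` and `interface{}` are interchangeable, so fmt.Sprintf
		// accepts the variadic params unchanged.
		slog.Info(fmt.Sprintf(msg, params...), "context", context)
	}
	logFunc(0, "s3", "Uploaded a copy of backup `%s` to bucket `%s`.", "test.tar.gz", "backup")
}
```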


@@ -6,18 +6,18 @@ cd "$(dirname "$0")"
 . ../util.sh
 current_test=$(basename $(pwd))
-docker-compose up -d
+docker compose up -d
 sleep 5
 # A symlink for a known file in the volume is created so the test can check
 # whether symlinks are preserved on backup.
-docker-compose exec backup backup
+docker compose exec backup backup
 sleep 5
 expect_running_containers "3"
-docker-compose run --rm az_cli \
+docker compose run --rm az_cli \
 	az storage blob download -f /dump/test.tar.gz -c test-container -n path/to/backup/test.tar.gz
 tar -xvf ./local/test.tar.gz -C /tmp && test -f /tmp/backup/app_data/offen.db

@@ -26,15 +26,15 @@ pass "Found relevant files in untared remote backups."
 # The second part of this test checks if backups get deleted when the retention
 # is set to 0 days (which it should not as it would mean all backups get deleted)
 # TODO: find out if we can test actual deletion without having to wait for a day
-BACKUP_RETENTION_DAYS="0" docker-compose up -d
+BACKUP_RETENTION_DAYS="0" docker compose up -d
 sleep 5
-docker-compose exec backup backup
-docker-compose run --rm az_cli \
+docker compose exec backup backup
+docker compose run --rm az_cli \
 	az storage blob download -f /dump/test.tar.gz -c test-container -n path/to/backup/test.tar.gz
 test -f ./local/test.tar.gz
 pass "Remote backups have not been deleted."
-docker-compose down --volumes
+docker compose down --volumes


@@ -33,7 +33,7 @@ sleep 5
 expect_running_containers "3"
-docker run --rm -it \
+docker run --rm \
 	-v minio_backup_data:/minio_data \
 	alpine \
 	ash -c 'tar -xvf /minio_data/backup/test.tar.gz -C /tmp && test -f /tmp/backup/app_data/offen.db'

test/cli-zstd/run.sh (new executable file, 66 lines)

@@ -0,0 +1,66 @@
#!/bin/sh
set -e
cd $(dirname $0)
. ../util.sh
current_test=$(basename $(pwd))
docker network create test_network
docker volume create backup_data
docker volume create app_data
# This volume is created to test whether empty directories are handled
# correctly. It is not supposed to hold any data.
docker volume create empty_data
docker run -d \
--name minio \
--network test_network \
--env MINIO_ROOT_USER=test \
--env MINIO_ROOT_PASSWORD=test \
--env MINIO_ACCESS_KEY=test \
--env MINIO_SECRET_KEY=GMusLtUmILge2by+z890kQ \
-v backup_data:/data \
minio/minio:RELEASE.2020-08-04T23-10-51Z server /data
docker exec minio mkdir -p /data/backup
docker run -d \
--name offen \
--network test_network \
-v app_data:/var/opt/offen/ \
offen/offen:latest
sleep 10
docker run --rm \
--network test_network \
-v app_data:/backup/app_data \
-v empty_data:/backup/empty_data \
-v /var/run/docker.sock:/var/run/docker.sock \
--env AWS_ACCESS_KEY_ID=test \
--env AWS_SECRET_ACCESS_KEY=GMusLtUmILge2by+z890kQ \
--env AWS_ENDPOINT=minio:9000 \
--env AWS_ENDPOINT_PROTO=http \
--env AWS_S3_BUCKET_NAME=backup \
--env BACKUP_COMPRESSION=zst \
--env BACKUP_FILENAME='test.{{ .Extension }}' \
--env "BACKUP_FROM_SNAPSHOT=true" \
--entrypoint backup \
offen/docker-volume-backup:${TEST_VERSION:-canary}
# Have to install tar and zstd on Alpine because the plain image comes with very
# basic tar from busybox and it does not seem to support zstd
docker run --rm \
-v backup_data:/data alpine \
ash -c 'apk add --no-cache zstd tar && tar -xvf /data/backup/test.tar.zst --zstd && test -f /backup/app_data/offen.db && test -d /backup/empty_data'
pass "Found relevant files in untared remote backup."
# This test does not stop containers during backup. This is happening on
# purpose in order to cover this setup as well.
expect_running_containers "2"
docker rm $(docker stop minio offen)
docker volume rm backup_data app_data
docker network rm test_network
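This new test extracts the archive with `tar --zstd`, so with BACKUP_COMPRESSION=zst the backup container is expected to produce a zstandard-compressed tarball. As a rough, self-contained sketch of producing such an archive with the klauspost/compress module that appears in go.mod above (this is only an illustration, not the project's actual archiver code; the output file name and payload are made up):

```go
package main

import (
	"archive/tar"
	"os"

	"github.com/klauspost/compress/zstd"
)

func main() {
	out, err := os.Create("test.tar.zst")
	if err != nil {
		panic(err)
	}
	defer out.Close()

	// Wrap the output file in a zstd encoder, then write a tar stream into it.
	zw, err := zstd.NewWriter(out)
	if err != nil {
		panic(err)
	}
	defer zw.Close()

	tw := tar.NewWriter(zw)
	defer tw.Close()

	payload := []byte("placeholder contents\n")
	hdr := &tar.Header{Name: "backup/app_data/offen.db", Mode: 0o644, Size: int64(len(payload))}
	if err := tw.WriteHeader(hdr); err != nil {
		panic(err)
	}
	if _, err := tw.Write(payload); err != nil {
		panic(err)
	}
}
```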


@@ -48,7 +48,7 @@ docker run --rm \
 	--entrypoint backup \
 	offen/docker-volume-backup:${TEST_VERSION:-canary}
-docker run --rm -it \
+docker run --rm \
 	-v backup_data:/data alpine \
 	ash -c 'tar -xvf /data/backup/test.tar.gz && test -f /backup/app_data/offen.db && test -d /backup/empty_data'


@@ -42,10 +42,9 @@ services:
       EXEC_LABEL: test
       EXEC_FORWARD_OUTPUT: "true"
     volumes:
-      - archive:/archive
+      - ./local:/archive
       - app_data:/backup/data:ro
       - /var/run/docker.sock:/var/run/docker.sock

 volumes:
   app_data:
-  archive:


@@ -6,11 +6,12 @@ cd $(dirname $0)
 . ../util.sh
 current_test=$(basename $(pwd))
+mkdir -p ./local
 docker compose up -d
 sleep 30 # mariadb likes to take a bit before responding
 docker compose exec backup backup
-sudo cp -r $(docker volume inspect --format='{{ .Mountpoint }}' commands_archive) ./local
 tar -xvf ./local/test.tar.gz
 if [ ! -f ./backup/data/dump.sql ]; then

@@ -34,6 +35,7 @@ sudo rm -rf ./local
 info "Running commands test in swarm mode next."
+mkdir -p ./local
 docker swarm init
 docker stack deploy --compose-file=docker-compose.yml test_stack

@@ -47,8 +49,6 @@ sleep 20
 docker exec $(docker ps -q -f name=backup) backup
-sudo cp -r $(docker volume inspect --format='{{ .Mountpoint }}' test_stack_archive) ./local
 tar -xvf ./local/test.tar.gz
 if [ ! -f ./backup/data/dump.sql ]; then
 	fail "Could not find file written by pre command."


@@ -8,9 +8,10 @@ current_test=$(basename $(pwd))
 mkdir -p local
+export BASE_VERSION="${TEST_VERSION:-canary}"
 export TEST_VERSION="${TEST_VERSION:-canary}-with-rsync"
-docker build . -t offen/docker-volume-backup:$TEST_VERSION
+docker build . -t offen/docker-volume-backup:$TEST_VERSION --build-arg version=$BASE_VERSION
 docker compose up -d
 sleep 5


@@ -17,7 +17,7 @@ sleep 5
 expect_running_containers "3"
-docker run --rm -it \
+docker run --rm \
 	-v minio_backup_data:/minio_data \
 	alpine \
 	ash -c 'tar -xvf /minio_data/backup/test-hostnametoken.tar.gz -C /tmp && test -f /tmp/backup/app_data/offen.db'

@@ -32,7 +32,7 @@ sleep 5
 docker compose exec backup backup
-docker run --rm -it \
+docker run --rm \
 	-v minio_backup_data:/minio_data \
 	alpine \
 	ash -c '[ $(find /minio_data/backup/ -type f | wc -l) = "1" ]'


@@ -22,7 +22,7 @@ sleep 20
 docker exec $(docker ps -q -f name=backup) backup
-docker run --rm -it \
+docker run --rm \
 	-v backup_data:/data alpine \
 	ash -c 'tar -xf /data/backup/test.tar.gz && test -f /backup/pg_data/PG_VERSION'


@@ -17,7 +17,7 @@ sleep 5
 expect_running_containers 3
-docker run --rm -it \
+docker run --rm \
 	-v ssh_backup_data:/ssh_data \
 	alpine \
 	ash -c 'tar -xvf /ssh_data/test-hostnametoken.tar.gz -C /tmp && test -f /tmp/backup/app_data/offen.db'

@@ -32,7 +32,7 @@ sleep 5
 docker compose exec backup backup
-docker run --rm -it \
+docker run --rm \
 	-v ssh_backup_data:/ssh_data \
 	alpine \
 	ash -c '[ $(find /ssh_data/ -type f | wc -l) = "1" ]'


@@ -19,7 +19,7 @@ sleep 20
 docker exec $(docker ps -q -f name=backup) backup
-docker run --rm -it \
+docker run --rm \
 	-v backup_data:/data alpine \
 	ash -c 'tar -xf /data/backup/test.tar.gz && test -f /backup/pg_data/PG_VERSION'

test/user/.gitignore (new file, 2 lines)

@@ -0,0 +1,2 @@
local
backup


@@ -0,0 +1,30 @@
version: '2.4'

services:
  alpine:
    image: alpine:3.17.3
    tty: true
    volumes:
      - app_data:/tmp
    labels:
      - docker-volume-backup.archive-pre.user=testuser
      - docker-volume-backup.archive-pre=/bin/sh -c 'whoami > /tmp/whoami.txt'

  backup:
    image: offen/docker-volume-backup:${TEST_VERSION:-canary}
    deploy:
      restart_policy:
        condition: on-failure
    environment:
      BACKUP_FILENAME: test.tar.gz
      BACKUP_CRON_EXPRESSION: 0 0 5 31 2 ?
      EXEC_FORWARD_OUTPUT: "true"
    volumes:
      - ./local:/archive
      - app_data:/backup/data:ro
      - /var/run/docker.sock:/var/run/docker.sock

volumes:
  app_data:
  archive:

test/user/run.sh (new file, 30 lines)

@@ -0,0 +1,30 @@
#!/bin/sh
set -e
cd $(dirname $0)
. ../util.sh
current_test=$(basename $(pwd))
docker compose up -d
user_name=testuser
docker exec user-alpine-1 adduser --disabled-password "$user_name"
docker compose exec backup backup
tar -xvf ./local/test.tar.gz
if [ ! -f ./backup/data/whoami.txt ]; then
fail "Could not find file written by pre command."
fi
pass "Found expected file."
tar -xvf ./local/test.tar.gz
if [ "$(cat ./backup/data/whoami.txt)" != "$user_name" ]; then
fail "Could not find expected user name."
fi
pass "Found expected user."
docker compose down --volumes
sudo rm -rf ./local
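This new test exercises the `docker-volume-backup.archive-pre.user` label: the labeled command is expected to run as `testuser` inside the `alpine` service, which is why the archived whoami.txt must contain that name. As a rough illustration only (not the project's actual code path), running a command as a specific user via the Docker Go client could look like the sketch below; the container name, user and command are the placeholders used by the test above.

```go
package main

import (
	"context"
	"fmt"

	"github.com/docker/docker/api/types"
	"github.com/docker/docker/client"
)

func main() {
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	// Create an exec instance that runs the labeled command as the user
	// named in the *.user label.
	exec, err := cli.ContainerExecCreate(context.Background(), "user-alpine-1", types.ExecConfig{
		User:         "testuser",
		Cmd:          []string{"/bin/sh", "-c", "whoami > /tmp/whoami.txt"},
		AttachStdout: true,
		AttachStderr: true,
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("created exec instance", exec.ID)
}
```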


@@ -15,7 +15,7 @@ sleep 5
 expect_running_containers "3"
-docker run --rm -it \
+docker run --rm \
 	-v webdav_backup_data:/webdav_data \
 	alpine \
 	ash -c 'tar -xvf /webdav_data/data/my/new/path/test-hostnametoken.tar.gz -C /tmp && test -f /tmp/backup/app_data/offen.db'

@@ -30,7 +30,7 @@ sleep 5
 docker compose exec backup backup
-docker run --rm -it \
+docker run --rm \
 	-v webdav_backup_data:/webdav_data \
 	alpine \
 	ash -c '[ $(find /webdav_data/data/my/new/path/ -type f | wc -l) = "1" ]'