Compare commits


33 Commits

Author SHA1 Message Date
ba-tno
07afa53cd3 Update shoutrrr to 0.7 (#213)
* Replace docker-compose reference with docker[space]compose

* Update shoutrrr only to 0.7.1

* modules after go mod tidy

* Refer to v0.7 docs of shoutrrr


* Remove 'v' from shoutrrr doc link
2023-04-29 20:14:04 +02:00
Frederik Ring
9a07f5486b Docs reference incorrect shoutrrr version 2023-04-29 14:44:39 +02:00
Frederik Ring
d4c5f65f31 Entrypoint permissions can be set on COPY (#211) 2023-04-28 20:06:57 +02:00
Frederik Ring
5b8a484d80 Documentation around user label is lacking 2023-04-28 16:01:17 +02:00
Frederik Ring
37c01a578c TaskTemplate.ForceUpdate is a counter (#209) 2023-04-26 08:45:12 +02:00
Frederik Ring
46c6441d48 Add note about GHCR to README 2023-04-07 12:00:39 +02:00
Frederik Ring
5715d9ff9b Update of package copy does not fail on deleted files (#206) 2023-04-07 11:28:36 +02:00
dependabot[bot]
6ba173d916 Bump github.com/docker/docker (#205) 2023-04-05 04:58:07 +00:00
Frederik Ring
301fe6628c on: is expected to be an object 2023-04-02 19:45:46 +02:00
Frederik Ring
5ff2d53602 Items in on: are expected to be objects 2023-04-02 19:44:51 +02:00
Frederik Ring
cddd1fdcea Prevent duplicate builds on pull request 2023-04-02 19:41:49 +02:00
Frederik Ring
808cf8f82d Local directory can be used instead of volume for storing test artifact (#204) 2023-04-02 19:41:00 +02:00
Frederik Ring
c177202ac1 Multi platform build requires explicit buildx setup 2023-04-02 11:51:35 +02:00
Frederik Ring
27c2201161 Branches filter is a glob pattern, not a regex 2023-04-02 11:46:06 +02:00
Diulgher Artiom
7f20036b15 Possibility to use -u (user) option in docker exec (#203)
* Add user option for docker exec

* Add test for user option

* Return test version for image

* remove gitea config file

* refactor tests

* remove comments & fix image name

* add docs

* cleanup

* Update README.md with suggested correction

Co-authored-by: Frederik Ring <frederik.ring@gmail.com>

* fix backup command & bind folder instead of volume

---------

Co-authored-by: tao <generaltao.md@gmail.com>
Co-authored-by: Frederik Ring <frederik.ring@gmail.com>
2023-04-02 11:12:10 +02:00
Frederik Ring
2ac1f0cea4 Also trigger test runs on Pull Request 2023-03-29 07:57:09 +02:00
Frederik Ring
66ad124ddd any can be used instead of interface{} 2023-03-16 19:48:12 +01:00
Frederik Ring
aee802cb09 Migrate CI setup to GitHub Actions, also publish to GHCR (#199)
* Run tests in GitHub actions

* Do not try to allocate a pseudo TTY when running compose commands

* Try hard disabling TTY allocation

* Use compose plugin

* Test scripts shall not try to allocate a TTY

* Pass correct base version

* Check whether env var is even needed

* Stop running tests in CircleCI

* Run releases from GitHub actions as well

* Manually construct tags to be pushed on release
2023-03-16 19:32:44 +01:00
dependabot[bot]
a06ad1957a Bump github.com/docker/distribution (#195) 2023-03-07 06:56:56 +00:00
dependabot[bot]
15786c5da3 Bump golang.org/x/net from 0.2.0 to 0.7.0 (#191) 2023-02-18 06:34:40 +00:00
dependabot[bot]
641a3203c7 Bump github.com/containerd/containerd from 1.6.6 to 1.6.18 (#190) 2023-02-16 19:32:46 +00:00
Frederik Ring
5adfe3989e Document usage with rootless Docker installations
As described in #189
2023-02-16 08:18:57 +01:00
dependabot[bot]
550833be33 Merge pull request #188 from offen/dependabot/go_modules/github.com/containrrr/shoutrrr-0.6.0 2023-02-14 19:09:43 +00:00
dependabot[bot]
201a983ea4 Bump github.com/containrrr/shoutrrr from 0.5.2 to 0.6.0
Bumps [github.com/containrrr/shoutrrr](https://github.com/containrrr/shoutrrr) from 0.5.2 to 0.6.0.
- [Release notes](https://github.com/containrrr/shoutrrr/releases)
- [Changelog](https://github.com/containrrr/shoutrrr/blob/main/goreleaser.yml)
- [Commits](https://github.com/containrrr/shoutrrr/compare/v0.5.2...v0.6.0)

---
updated-dependencies:
- dependency-name: github.com/containrrr/shoutrrr
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-02-14 18:57:32 +00:00
Frederik Ring
2d37e08743 Use go 1.20, join errors using stdlib (#182)
* Use go 1.20, join errors using stdlib

* Use go 1.20 proper
2023-02-02 21:07:25 +01:00
Frederik Ring
1e36bd3eb7 Non-streaming upload to WebDAV fails on big files (#181) 2023-01-16 08:28:29 +01:00
Frederik Ring
e93a74dd48 Instructions in issue templates are not supposed to be shown after submission 2023-01-12 18:02:46 +01:00
Frederik Ring
f799e6c2e9 Azure Blob Storage is missing from headline in README 2023-01-11 21:54:50 +01:00
Frederik Ring
5c04e11f10 Add support for Azure Blob Storage (#171)
* Scaffold Azure storage backend that does nothing yet

* Implement copy for Azure Blob Storage

* Set up automated testing for Azure Storage

* Implement pruning for Azure blob storage

* Add documentation for Azure Blob Storage

* Add support for remote path

* Add azure to notifications doc

* Tidy go.mod file

* Allow use of managed identity credential

* Use volume in tests

* Auto append trailing slash to endpoint if needed, clarify docs, tidy mod file
2023-01-11 21:40:48 +01:00
Frederik Ring
aadbaa741d Update intro in README 2023-01-08 09:39:32 +01:00
Frederik Ring
9b7af67a26 Run tests using compose v2 (#178) 2023-01-07 18:50:27 +01:00
Frederik Ring
1cb4883458 Update alpine base image to 3.17 (#177) 2023-01-05 19:46:47 +01:00
Frederik Ring
982f4fe191 Fix mistake in README 2022-12-30 16:16:46 +01:00
43 changed files with 1627 additions and 450 deletions


@@ -1,75 +0,0 @@
version: 2.1
jobs:
canary:
machine:
image: ubuntu-2004:202201-02
working_directory: ~/docker-volume-backup
resource_class: large
steps:
- checkout
- run:
name: Build
command: |
docker build . -t offen/docker-volume-backup:canary
- run:
name: Install gnupg
command: |
sudo apt-get install -y gnupg
- run:
name: Run tests
working_directory: ~/docker-volume-backup/test
command: |
export GPG_TTY=$(tty)
./test.sh canary
build:
docker:
- image: cimg/base:2020.06
environment:
DOCKER_BUILDKIT: '1'
DOCKER_CLI_EXPERIMENTAL: enabled
working_directory: ~/docker-volume-backup
resource_class: large
steps:
- checkout
- setup_remote_docker:
version: 20.10.6
- docker/install-docker-credential-helper:
release-tag: v0.6.4
- docker/configure-docker-credentials-store
- run:
name: Push to Docker Hub
command: |
echo "$DOCKER_ACCESSTOKEN" | docker login --username offen --password-stdin
# This is required for building ARM: https://gitlab.alpinelinux.org/alpine/aports/-/issues/12406
docker run --rm --privileged linuxkit/binfmt:v0.8
docker context create docker-volume-backup
docker buildx create docker-volume-backup --name docker-volume-backup --use
docker buildx inspect --bootstrap
tag_args="-t offen/docker-volume-backup:$CIRCLE_TAG"
if [[ "$CIRCLE_TAG" =~ ^v[0-9]+\.[0-9]+\.[0-9]+$ ]]; then
# prerelease tags like `v2.0.0-alpha.1` should not be released as `latest`
tag_args="$tag_args -t offen/docker-volume-backup:latest"
tag_args="$tag_args -t offen/docker-volume-backup:$(echo "$CIRCLE_TAG" | cut -d. -f1)"
fi
docker buildx build --platform linux/amd64,linux/arm64,linux/arm/v7 \
$tag_args . --push
workflows:
version: 2
docker_image:
jobs:
- canary:
filters:
tags:
ignore: /^v.*/
- build:
filters:
branches:
ignore: /.*/
tags:
only: /^v.*/
orbs:
docker: circleci/docker@2.1.4


@@ -8,7 +8,9 @@ assignees: ''
---
**Describe the bug**
<!--
A clear and concise description of what the bug is.
-->
**To Reproduce**
Steps to reproduce the behavior:
@@ -17,12 +19,16 @@ Steps to reproduce the behavior:
3. ...
**Expected behavior**
<!--
A clear and concise description of what you expected to happen.
-->
**Desktop (please complete the following information):**
- Image Version: [e.g. v2.21.0]
- Docker Version: [e.g. 20.10.17]
- Docker Compose Version (if applicable): [e.g. 1.29.2]
**Version (please complete the following information):**
- Image Version: <!-- e.g. v2.21.0 -->
- Docker Version: <!-- e.g. 20.10.17 -->
- Docker Compose Version (if applicable): <!-- e.g. 1.29.2 -->
**Additional context**
<!--
Add any other context about the problem here.
-->


@@ -8,13 +8,21 @@ assignees: ''
---
**Is your feature request related to a problem? Please describe.**
<!--
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
-->
**Describe the solution you'd like**
<!--
A clear and concise description of what you want to happen.
-->
**Describe alternatives you've considered**
<!--
A clear and concise description of any alternative solutions or features you've considered.
-->
**Additional context**
<!--
Add any other context or screenshots about the feature request here.
-->


@@ -8,13 +8,21 @@ assignees: ''
---
**What are you trying to do?**
<!--
A clear and concise description of what you are trying to do, but cannot get working.
-->
**What is your current configuration?**
<!--
Add the full configuration you are using. Please redact out any real-world credentials.
-->
**Log output**
<!--
Provide the full log output of your setup.
-->
**Additional context**
<!--
Add any other context or screenshots about the support request here.
-->

.github/workflows/release.yml (new file, 59 lines)

@@ -0,0 +1,59 @@
name: Release Docker Image
on:
push:
tags: v**
jobs:
push_to_registries:
name: Push Docker image to multiple registries
runs-on: ubuntu-latest
permissions:
packages: write
contents: read
steps:
- name: Check out the repo
uses: actions/checkout@v3
- name: Set up QEMU
uses: docker/setup-qemu-action@v2
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
- name: Log in to Docker Hub
uses: docker/login-action@v2
with:
username: ${{ secrets.DOCKER_USERNAME }}
password: ${{ secrets.DOCKER_PASSWORD }}
- name: Log in to GHCR
uses: docker/login-action@v2
with:
registry: ghcr.io
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Extract Docker tags
id: meta
run: |
version_tag="${{github.ref_name}}"
tags=($version_tag)
if [[ "$version_tag" =~ ^v[0-9]+\.[0-9]+\.[0-9]+$ ]]; then
# prerelease tags like `v2.0.0-alpha.1` should not be released as `latest` nor `v2`
tags+=("latest")
tags+=($(echo "$version_tag" | cut -d. -f1))
fi
releases=""
for tag in "${tags[@]}"; do
releases="${releases:+$releases,}offen/docker-volume-backup:$tag,ghcr.io/offen/docker-volume-backup:$tag"
done
echo "releases=$releases" >> "$GITHUB_OUTPUT"
- name: Build and push Docker images
uses: docker/build-push-action@v4
with:
context: .
push: true
platforms: linux/amd64,linux/arm64,linux/arm/v7
tags: ${{ steps.meta.outputs.releases }}
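For illustration, here is a minimal bash sketch (outside the workflow) of what the tag-extraction step above produces for a hypothetical stable tag; a prerelease tag like `v2.0.0-alpha.1` would keep only the version tag itself:

```sh
#!/bin/bash
# Hypothetical input: a stable release tag matching ^v[0-9]+\.[0-9]+\.[0-9]+$
version_tag="v2.30.0"
tags=("$version_tag" "latest" "$(echo "$version_tag" | cut -d. -f1)")
releases=""
for tag in "${tags[@]}"; do
  releases="${releases:+$releases,}offen/docker-volume-backup:$tag,ghcr.io/offen/docker-volume-backup:$tag"
done
# Prints the comma-separated list consumed by docker/build-push-action,
# i.e. each tag once for Docker Hub and once for GHCR.
echo "$releases"
```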

.github/workflows/test.yml (new file, 30 lines)

@@ -0,0 +1,30 @@
name: Run Integration Tests
on:
push:
branches:
- main
pull_request:
jobs:
test:
runs-on: ubuntu-22.04
steps:
- uses: actions/checkout@v3
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
- name: Build Docker Image
env:
DOCKER_BUILDKIT: '1'
run: docker build . -t offen/docker-volume-backup:test
- name: Run Tests
working-directory: ./test
run: |
# Stop the buildx container so the tests can make assertions
# about the number of running containers
docker rm -f $(docker ps -aq)
export GPG_TTY=$(tty)
./test.sh test


@@ -1,7 +1,7 @@
# Copyright 2021 - Offen Authors <hioffen@posteo.de>
# SPDX-License-Identifier: MPL-2.0
FROM golang:1.19-alpine as builder
FROM golang:1.20-alpine as builder
WORKDIR /app
COPY . .
@@ -9,15 +9,13 @@ RUN go mod download
WORKDIR /app/cmd/backup
RUN go build -o backup .
FROM alpine:3.16
FROM alpine:3.17
WORKDIR /root
RUN apk add --no-cache ca-certificates
COPY --from=builder /app/cmd/backup/backup /usr/bin/backup
COPY ./entrypoint.sh /root/
RUN chmod +x entrypoint.sh
COPY --chmod=755 ./entrypoint.sh /root/
ENTRYPOINT ["/root/entrypoint.sh"]
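Note that `COPY --chmod` as used above requires BuildKit; when building locally, it can be enabled explicitly the same way the test workflow above does:

```sh
DOCKER_BUILDKIT=1 docker build . -t offen/docker-volume-backup:test
```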

README.md (102 lines changed)

@@ -4,16 +4,17 @@
# docker-volume-backup
Backup Docker volumes locally or to any S3 compatible storage.
Backup Docker volumes locally or to any S3, WebDAV, Azure Blob Storage or SSH compatible storage.
The [offen/docker-volume-backup](https://hub.docker.com/r/offen/docker-volume-backup) Docker image can be used as a lightweight (below 15MB) sidecar container to an existing Docker setup.
It handles __recurring or one-off backups of Docker volumes__ to a __local directory__, __any S3, WebDAV or SSH compatible storage (or any combination) and rotates away old backups__ if configured. It also supports __encrypting your backups using GPG__ and __sending notifications for failed backup runs__.
It handles __recurring or one-off backups of Docker volumes__ to a __local directory__, __any S3, WebDAV, Azure Blob Storage or SSH compatible storage (or any combination) and rotates away old backups__ if configured. It also supports __encrypting your backups using GPG__ and __sending notifications for failed backup runs__.
<!-- MarkdownTOC -->
- [Quickstart](#quickstart)
- [Recurring backups in a compose setup](#recurring-backups-in-a-compose-setup)
- [One-off backups using Docker CLI](#one-off-backups-using-docker-cli)
- [Available image registries](#available-image-registries)
- [Configuration reference](#configuration-reference)
- [How to](#how-to)
- [Stop containers during backup](#stop-containers-during-backup)
@@ -30,6 +31,7 @@ It handles __recurring or one-off backups of Docker volumes__ to a __local direc
- [Replace deprecated `BACKUP_FROM_SNAPSHOT` usage](#replace-deprecated-backup_from_snapshot-usage)
- [Replace deprecated `exec-pre` and `exec-post` labels](#replace-deprecated-exec-pre-and-exec-post-labels)
- [Using a custom Docker host](#using-a-custom-docker-host)
- [Use with rootless Docker](#use-with-rootless-docker)
- [Run multiple backup schedules in the same container](#run-multiple-backup-schedules-in-the-same-container)
- [Define different retention schedules](#define-different-retention-schedules)
- [Use special characters in notification URLs](#use-special-characters-in-notification-urls)
@@ -41,6 +43,7 @@ It handles __recurring or one-off backups of Docker volumes__ to a __local direc
- [Backing up to MinIO \(using Docker secrets\)](#backing-up-to-minio-using-docker-secrets)
- [Backing up to WebDAV](#backing-up-to-webdav)
- [Backing up to SSH](#backing-up-to-ssh)
- [Backing up to Azure Blob Storage](#backing-up-to-azure-blob-storage)
- [Backing up locally](#backing-up-locally)
- [Backing up to AWS S3 as well as locally](#backing-up-to-aws-s3-as-well-as-locally)
- [Running on a custom cron schedule](#running-on-a-custom-cron-schedule)
@@ -119,6 +122,18 @@ docker run --rm \
Alternatively, pass a `--env-file` in order to use a full config as described below.
### Available image registries
This Docker image is published to both Docker Hub and the GitHub container registry.
Depending on your preferences and needs, you can reference both `offen/docker-volume-backup` and `ghcr.io/offen/docker-volume-backup`:
```
docker pull offen/docker-volume-backup:v2
docker pull ghcr.io/offen/docker-volume-backup:v2
```
Documentation references Docker Hub, but all examples will work using ghcr.io just as well.
## Configuration reference
Backup targets, schedule and retention are configured in environment variables.
@@ -304,6 +319,25 @@ You can populate below template according to your requirements and use it as you
# SSH_IDENTITY_PASSPHRASE="pass"
# The credential's account name when using Azure Blob Storage. This has to be
# set when using Azure Blob Storage.
# AZURE_STORAGE_ACCOUNT_NAME="account-name"
# The credential's primary account key when using Azure Blob Storage. If this
# is not given, the command tries to fall back to using a managed identity.
# AZURE_STORAGE_PRIMARY_ACCOUNT_KEY="<xxx>"
# The container name when using Azure Blob Storage.
# AZURE_STORAGE_CONTAINER_NAME="container-name"
# The service endpoint when using Azure Blob Storage. This is a template that
# can be passed the account name as shown in the default value below.
# AZURE_STORAGE_ENDPOINT="https://{{ .AccountName }}.blob.core.windows.net/"
# In addition to storing backups remotely, you can also keep local copies.
# Pass a container-local path to store your backups if needed. You also need to
# mount a local folder or Docker volume into that location (`/archive`
@@ -390,7 +424,7 @@ You can populate below template according to your requirements and use it as you
# Notifications (email, Slack, etc.) can be sent out when a backup run finishes.
# Configuration is provided as a comma-separated list of URLs as consumed
# by `shoutrrr`: https://containrrr.dev/shoutrrr/v0.5/services/overview/
# by `shoutrrr`: https://containrrr.dev/shoutrrr/0.7/services/overview/
# The content of such notifications can be customized. Dedicated documentation
# on how to do this can be found in the README. When providing multiple URLs or
a URL that contains a comma, the values can be URL encoded to avoid ambiguities.
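As a concrete sketch (credentials and hosts hypothetical), the SMTP notification the tool assembles internally corresponds to a `shoutrrr` URL of this shape:

```
NOTIFICATION_URLS="smtp://username:password@mail.example.com:587/?from=noreply@nohost&to=admin@example.com"
```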
@@ -534,7 +568,7 @@ services:
Notification backends other than email are also supported.
Refer to the documentation of [shoutrrr][shoutrrr-docs] to find out about options and configuration.
[shoutrrr-docs]: https://containrrr.dev/shoutrrr/v0.5/services/overview/
[shoutrrr-docs]: https://containrrr.dev/shoutrrr/0.7/services/overview/
### Customize notifications
@@ -623,6 +657,23 @@ volumes:
The backup procedure is guaranteed to wait for all `pre` or `post` commands to finish before proceeding.
However, there are no guarantees about the order in which they are run, and they may also run concurrently.
By default the backup command is executed by the root user. A custom user for running a command can be specified in a dedicated label with the format `docker-volume-backup.[step]-[pre|post].user`:
```yml
version: '3'
services:
gitea:
image: gitea/gitea
volumes:
- backup_data:/tmp
labels:
- docker-volume-backup.archive-pre.user=git
- docker-volume-backup.archive-pre=/bin/bash -c 'cd /tmp; /usr/local/bin/gitea dump -c /data/gitea/conf/app.ini -R -f dump.zip'
```
Make sure the user exists and is present in `passwd` inside the target container.
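The label is the equivalent of passing `-u`/`--user` to `docker exec` by hand; for the example above that would be (container name hypothetical):

```sh
docker exec -u git gitea-container /bin/bash -c 'cd /tmp; /usr/local/bin/gitea dump -c /data/gitea/conf/app.ini -R -f dump.zip'
```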
### Encrypting your backup using GPG
The image supports encrypting backups using GPG out of the box.
@@ -762,7 +813,7 @@ services:
- docker-volume-backup.archive-post=rm -rf /tmp/backup/my-app
backup:
image: offen/docker-volume-backup:latest
image: offen/docker-volume-backup:v2
environment:
BACKUP_SOURCES: /tmp/backup
volumes:
@@ -800,6 +851,23 @@ DOCKER_HOST=tcp://docker_socket_proxy:2375
In case you are using a socket proxy, it must support `GET` and `POST` requests to the `/containers` endpoint. If you are using Docker Swarm, it must also support the `/services` endpoint. If you are using pre/post backup commands, it must also support the `/exec` endpoint.
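As a sketch of such a setup (assuming the commonly used `tecnativa/docker-socket-proxy` image; any proxy exposing the listed endpoints works), the required endpoints map to its environment toggles like this:

```yml
services:
  docker_socket_proxy:
    image: tecnativa/docker-socket-proxy
    environment:
      CONTAINERS: 1 # GET/POST on /containers
      POST: 1
      EXEC: 1       # only needed when using pre/post commands
      SERVICES: 1   # only needed when using Docker Swarm
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
```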
### Use with rootless Docker
It's also possible to use this image with a [rootless Docker installation][rootless-docker].
Instead of mounting `/var/run/docker.sock`, mount the user-specific socket into the container:
```yml
services:
backup:
image: offen/docker-volume-backup:v2
# ... configuration omitted
volumes:
- backup:/backup:ro
- /run/user/1000/docker.sock:/var/run/docker.sock:ro
```
[rootless-docker]: https://docs.docker.com/engine/security/rootless/
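The `1000` in the socket path is the user's numeric ID. Assuming a standard rootless setup, the correct path for the current user can be derived like this:

```sh
# Rootless Docker listens on $XDG_RUNTIME_DIR/docker.sock,
# which typically resolves to /run/user/<uid>/docker.sock.
echo "/run/user/$(id -u)/docker.sock"
```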
### Run multiple backup schedules in the same container
Multiple backup schedules with different configuration can be configured by mounting an arbitrary number of configuration files (using the `.env` format) into `/etc/dockervolumebackup/conf.d`:
@@ -902,8 +970,7 @@ If you want to use a non-supported storage backend, or want to use a third party
For example, if you wanted to use `rsync`, define your Docker image like this:
```Dockerfile
ARG version=canary
FROM offen/docker-volume-backup:$version
FROM offen/docker-volume-backup:v2
RUN apk add rsync
```
@@ -1080,6 +1147,27 @@ volumes:
data:
```
### Backing up to Azure Blob Storage
```yml
version: '3'
services:
# ... define other services using the `data` volume here
backup:
image: offen/docker-volume-backup:v2
environment:
AZURE_STORAGE_CONTAINER_NAME: backup-container
AZURE_STORAGE_ACCOUNT_NAME: account-name
AZURE_STORAGE_PRIMARY_ACCOUNT_KEY: Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==
volumes:
- data:/backup/my-app-backup:ro
- /var/run/docker.sock:/var/run/docker.sock:ro
volumes:
data:
```
### Backing up locally
```yml


@@ -16,53 +16,58 @@ import (
// Config holds all configuration values that are expected to be set
// by users.
type Config struct {
AwsS3BucketName string `split_words:"true"`
AwsS3Path string `split_words:"true"`
AwsEndpoint string `split_words:"true" default:"s3.amazonaws.com"`
AwsEndpointProto string `split_words:"true" default:"https"`
AwsEndpointInsecure bool `split_words:"true"`
AwsEndpointCACert CertDecoder `envconfig:"AWS_ENDPOINT_CA_CERT"`
AwsStorageClass string `split_words:"true"`
AwsAccessKeyID string `envconfig:"AWS_ACCESS_KEY_ID"`
AwsAccessKeyIDFile string `envconfig:"AWS_ACCESS_KEY_ID_FILE"`
AwsSecretAccessKey string `split_words:"true"`
AwsSecretAccessKeyFile string `split_words:"true"`
AwsIamRoleEndpoint string `split_words:"true"`
BackupSources string `split_words:"true" default:"/backup"`
BackupFilename string `split_words:"true" default:"backup-%Y-%m-%dT%H-%M-%S.tar.gz"`
BackupFilenameExpand bool `split_words:"true"`
BackupLatestSymlink string `split_words:"true"`
BackupArchive string `split_words:"true" default:"/archive"`
BackupRetentionDays int32 `split_words:"true" default:"-1"`
BackupPruningLeeway time.Duration `split_words:"true" default:"1m"`
BackupPruningPrefix string `split_words:"true"`
BackupStopContainerLabel string `split_words:"true" default:"true"`
BackupFromSnapshot bool `split_words:"true"`
BackupExcludeRegexp RegexpDecoder `split_words:"true"`
GpgPassphrase string `split_words:"true"`
NotificationURLs []string `envconfig:"NOTIFICATION_URLS"`
NotificationLevel string `split_words:"true" default:"error"`
EmailNotificationRecipient string `split_words:"true"`
EmailNotificationSender string `split_words:"true" default:"noreply@nohost"`
EmailSMTPHost string `envconfig:"EMAIL_SMTP_HOST"`
EmailSMTPPort int `envconfig:"EMAIL_SMTP_PORT" default:"587"`
EmailSMTPUsername string `envconfig:"EMAIL_SMTP_USERNAME"`
EmailSMTPPassword string `envconfig:"EMAIL_SMTP_PASSWORD"`
WebdavUrl string `split_words:"true"`
WebdavUrlInsecure bool `split_words:"true"`
WebdavPath string `split_words:"true" default:"/"`
WebdavUsername string `split_words:"true"`
WebdavPassword string `split_words:"true"`
SSHHostName string `split_words:"true"`
SSHPort string `split_words:"true" default:"22"`
SSHUser string `split_words:"true"`
SSHPassword string `split_words:"true"`
SSHIdentityFile string `split_words:"true" default:"/root/.ssh/id_rsa"`
SSHIdentityPassphrase string `split_words:"true"`
SSHRemotePath string `split_words:"true"`
ExecLabel string `split_words:"true"`
ExecForwardOutput bool `split_words:"true"`
LockTimeout time.Duration `split_words:"true" default:"60m"`
AwsS3BucketName string `split_words:"true"`
AwsS3Path string `split_words:"true"`
AwsEndpoint string `split_words:"true" default:"s3.amazonaws.com"`
AwsEndpointProto string `split_words:"true" default:"https"`
AwsEndpointInsecure bool `split_words:"true"`
AwsEndpointCACert CertDecoder `envconfig:"AWS_ENDPOINT_CA_CERT"`
AwsStorageClass string `split_words:"true"`
AwsAccessKeyID string `envconfig:"AWS_ACCESS_KEY_ID"`
AwsAccessKeyIDFile string `envconfig:"AWS_ACCESS_KEY_ID_FILE"`
AwsSecretAccessKey string `split_words:"true"`
AwsSecretAccessKeyFile string `split_words:"true"`
AwsIamRoleEndpoint string `split_words:"true"`
BackupSources string `split_words:"true" default:"/backup"`
BackupFilename string `split_words:"true" default:"backup-%Y-%m-%dT%H-%M-%S.tar.gz"`
BackupFilenameExpand bool `split_words:"true"`
BackupLatestSymlink string `split_words:"true"`
BackupArchive string `split_words:"true" default:"/archive"`
BackupRetentionDays int32 `split_words:"true" default:"-1"`
BackupPruningLeeway time.Duration `split_words:"true" default:"1m"`
BackupPruningPrefix string `split_words:"true"`
BackupStopContainerLabel string `split_words:"true" default:"true"`
BackupFromSnapshot bool `split_words:"true"`
BackupExcludeRegexp RegexpDecoder `split_words:"true"`
GpgPassphrase string `split_words:"true"`
NotificationURLs []string `envconfig:"NOTIFICATION_URLS"`
NotificationLevel string `split_words:"true" default:"error"`
EmailNotificationRecipient string `split_words:"true"`
EmailNotificationSender string `split_words:"true" default:"noreply@nohost"`
EmailSMTPHost string `envconfig:"EMAIL_SMTP_HOST"`
EmailSMTPPort int `envconfig:"EMAIL_SMTP_PORT" default:"587"`
EmailSMTPUsername string `envconfig:"EMAIL_SMTP_USERNAME"`
EmailSMTPPassword string `envconfig:"EMAIL_SMTP_PASSWORD"`
WebdavUrl string `split_words:"true"`
WebdavUrlInsecure bool `split_words:"true"`
WebdavPath string `split_words:"true" default:"/"`
WebdavUsername string `split_words:"true"`
WebdavPassword string `split_words:"true"`
SSHHostName string `split_words:"true"`
SSHPort string `split_words:"true" default:"22"`
SSHUser string `split_words:"true"`
SSHPassword string `split_words:"true"`
SSHIdentityFile string `split_words:"true" default:"/root/.ssh/id_rsa"`
SSHIdentityPassphrase string `split_words:"true"`
SSHRemotePath string `split_words:"true"`
ExecLabel string `split_words:"true"`
ExecForwardOutput bool `split_words:"true"`
LockTimeout time.Duration `split_words:"true" default:"60m"`
AzureStorageAccountName string `split_words:"true"`
AzureStoragePrimaryAccountKey string `split_words:"true"`
AzureStorageContainerName string `split_words:"true"`
AzureStoragePath string `split_words:"true"`
AzureStorageEndpoint string `split_words:"true" default:"https://{{ .AccountName }}.blob.core.windows.net/"`
}
func (c *Config) resolveSecret(envVar string, secretPath string) (string, error) {
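As a brief sketch of how these `envconfig` struct tags resolve (variable values hypothetical): `split_words:"true"` derives an underscore-separated environment variable name from the camel-cased field, while an explicit `envconfig:"..."` tag overrides the derived name:

```go
package main

import (
	"fmt"
	"log"

	"github.com/kelseyhightower/envconfig"
)

type config struct {
	// Populated from AZURE_STORAGE_ACCOUNT_NAME because of split_words.
	AzureStorageAccountName string `split_words:"true"`
	// Populated from the explicitly named AWS_ACCESS_KEY_ID.
	AwsAccessKeyID string `envconfig:"AWS_ACCESS_KEY_ID"`
}

func main() {
	var c config
	if err := envconfig.Process("", &c); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%+v\n", c)
}
```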


@@ -21,7 +21,7 @@ import (
"golang.org/x/sync/errgroup"
)
func (s *script) exec(containerRef string, command string) ([]byte, []byte, error) {
func (s *script) exec(containerRef string, command string, user string) ([]byte, []byte, error) {
args, _ := argv.Argv(command, nil, nil)
commandEnv := []string{
fmt.Sprintf("COMMAND_RUNTIME_ARCHIVE_FILEPATH=%s", s.file),
@@ -31,6 +31,7 @@ func (s *script) exec(containerRef string, command string) ([]byte, []byte, erro
AttachStdin: true,
AttachStderr: true,
Env: commandEnv,
User: user,
})
if err != nil {
return nil, nil, fmt.Errorf("exec: error creating container exec: %w", err)
@@ -159,8 +160,11 @@ func (s *script) runLabeledCommands(label string) error {
cmd, _ = c.Labels["docker-volume-backup.exec-post"]
}
userLabelName := fmt.Sprintf("%s.user", label)
user := c.Labels[userLabelName]
s.logger.Infof("Running %s command %s for container %s", label, cmd, strings.TrimPrefix(c.Names[0], "/"))
stdout, stderr, err := s.exec(c.ID, cmd)
stdout, stderr, err := s.exec(c.ID, cmd, user)
if s.c.ExecForwardOutput {
os.Stderr.Write(stderr)
os.Stdout.Write(stdout)


@@ -4,10 +4,9 @@
package main
import (
"errors"
"fmt"
"sort"
"github.com/offen/docker-volume-backup/internal/utilities"
)
// hook contains a queued action that can be triggered when the script
@@ -52,7 +51,7 @@ func (s *script) runHooks(err error) error {
}
}
if len(actionErrors) != 0 {
return utilities.Join(actionErrors...)
return errors.Join(actionErrors...)
}
return nil
}
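With the move to Go 1.20 in this changeset, the custom join helper is replaced by the standard library's `errors.Join`. A minimal sketch of the stdlib behavior (error values hypothetical):

```go
package main

import (
	"errors"
	"fmt"
)

var errHook = errors.New("hook failed")

func main() {
	// errors.Join discards nil errors and formats the rest on separate lines.
	joined := errors.Join(errHook, nil, errors.New("cleanup failed"))
	fmt.Println(joined)
	// Unlike a string-concatenating helper, the joined error stays
	// inspectable via errors.Is and errors.As.
	fmt.Println(errors.Is(joined, errHook)) // true
}
```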


@@ -6,13 +6,13 @@ package main
import (
"bytes"
_ "embed"
"errors"
"fmt"
"os"
"text/template"
"time"
sTypes "github.com/containrrr/shoutrrr/pkg/types"
"github.com/offen/docker-volume-backup/internal/utilities"
)
//go:embed notifications.tmpl
@@ -69,7 +69,7 @@ func (s *script) sendNotification(title, body string) error {
}
}
if len(errs) != 0 {
return fmt.Errorf("sendNotification: error sending message: %w", utilities.Join(errs...))
return fmt.Errorf("sendNotification: error sending message: %w", errors.Join(errs...))
}
return nil
}


@@ -5,6 +5,7 @@ package main
import (
"context"
"errors"
"fmt"
"io"
"io/fs"
@@ -15,11 +16,11 @@ import (
"time"
"github.com/offen/docker-volume-backup/internal/storage"
"github.com/offen/docker-volume-backup/internal/storage/azure"
"github.com/offen/docker-volume-backup/internal/storage/local"
"github.com/offen/docker-volume-backup/internal/storage/s3"
"github.com/offen/docker-volume-backup/internal/storage/ssh"
"github.com/offen/docker-volume-backup/internal/storage/webdav"
"github.com/offen/docker-volume-backup/internal/utilities"
"github.com/containrrr/shoutrrr"
"github.com/containrrr/shoutrrr/pkg/router"
@@ -76,6 +77,7 @@ func newScript() (*script, error) {
"WebDAV": {},
"SSH": {},
"Local": {},
"Azure": {},
},
},
}
@@ -108,7 +110,7 @@ func newScript() (*script, error) {
s.cli = cli
}
logFunc := func(logType storage.LogLevel, context string, msg string, params ...interface{}) {
logFunc := func(logType storage.LogLevel, context string, msg string, params ...any) {
switch logType {
case storage.LogLevelWarning:
s.logger.Warnf("["+context+"] "+msg, params...)
@@ -189,6 +191,21 @@ func newScript() (*script, error) {
s.storages = append(s.storages, localBackend)
}
if s.c.AzureStorageAccountName != "" {
azureConfig := azure.Config{
ContainerName: s.c.AzureStorageContainerName,
AccountName: s.c.AzureStorageAccountName,
PrimaryAccountKey: s.c.AzureStoragePrimaryAccountKey,
Endpoint: s.c.AzureStorageEndpoint,
RemotePath: s.c.AzureStoragePath,
}
azureBackend, err := azure.NewStorageBackend(azureConfig, logFunc)
if err != nil {
return nil, err
}
s.storages = append(s.storages, azureBackend)
}
if s.c.EmailNotificationRecipient != "" {
emailURL := fmt.Sprintf(
"smtp://%s:%s@%s:%d/?from=%s&to=%s",
@@ -312,7 +329,7 @@ func (s *script) stopContainers() (func() error, error) {
stopError = fmt.Errorf(
"stopContainers: %d error(s) stopping containers: %w",
len(stopErrors),
utilities.Join(stopErrors...),
errors.Join(stopErrors...),
)
}
@@ -349,7 +366,7 @@ func (s *script) stopContainers() (func() error, error) {
if serviceMatch.ID == "" {
return fmt.Errorf("stopContainers: couldn't find service with name %s", serviceName)
}
serviceMatch.Spec.TaskTemplate.ForceUpdate = 1
serviceMatch.Spec.TaskTemplate.ForceUpdate += 1
if _, err := s.cli.ServiceUpdate(
context.Background(), serviceMatch.ID,
serviceMatch.Version, serviceMatch.Spec, types.ServiceUpdateOptions{},
@@ -363,7 +380,7 @@ func (s *script) stopContainers() (func() error, error) {
return fmt.Errorf(
"stopContainers: %d error(s) restarting containers and services: %w",
len(restartErrors),
utilities.Join(restartErrors...),
errors.Join(restartErrors...),
)
}
s.logger.Infof(


@@ -25,7 +25,7 @@ Here is a list of all data passed to the template:
* `FullPath`: full path of the backup file (e.g. `/archive/backup-2022-02-11T01-00-00.tar.gz`)
* `Size`: size in bytes of the backup file
* `Storages`: object that holds stats about each storage
* `Local`, `S3`, `WebDAV` or `SSH`:
* `Local`, `S3`, `WebDAV`, `Azure` or `SSH`:
* `Total`: total number of backup files
* `Pruned`: number of backup files that were deleted due to pruning rule
* `PruneErrors`: number of backup files that were unable to be pruned

go.mod (45 lines changed)

@@ -3,41 +3,43 @@ module github.com/offen/docker-volume-backup
go 1.19
require (
github.com/containrrr/shoutrrr v0.5.2
github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.2.0
github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v0.6.1
github.com/containrrr/shoutrrr v0.7.1
github.com/cosiner/argv v0.1.0
github.com/docker/docker v20.10.11+incompatible
github.com/docker/docker v20.10.24+incompatible
github.com/gofrs/flock v0.8.1
github.com/kelseyhightower/envconfig v1.4.0
github.com/leekchan/timeutil v0.0.0-20150802142658-28917288c48d
github.com/minio/minio-go/v7 v7.0.44
github.com/otiai10/copy v1.7.0
github.com/otiai10/copy v1.10.0
github.com/pkg/sftp v1.13.5
github.com/sirupsen/logrus v1.9.0
github.com/studio-b12/gowebdav v0.0.0-20220128162035-c7b1ff8a5e62
golang.org/x/crypto v0.3.0
golang.org/x/sync v0.0.0-20220601150217-0de741cfad7f
golang.org/x/sync v0.1.0
)
require (
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.1.4 // indirect
github.com/Azure/azure-sdk-for-go/sdk/internal v1.0.1 // indirect
github.com/AzureAD/microsoft-authentication-library-for-go v0.7.0 // indirect
github.com/Microsoft/go-winio v0.5.2 // indirect
github.com/containerd/containerd v1.6.6 // indirect
github.com/docker/distribution v2.7.1+incompatible // indirect
github.com/docker/distribution v2.8.0+incompatible // indirect
github.com/docker/go-connections v0.4.0 // indirect
github.com/docker/go-units v0.4.0 // indirect
github.com/dustin/go-humanize v1.0.0 // indirect
github.com/fatih/color v1.10.0 // indirect
github.com/fsnotify/fsnotify v1.4.9 // indirect
github.com/fatih/color v1.13.0 // indirect
github.com/gogo/protobuf v1.3.2 // indirect
github.com/golang/protobuf v1.5.2 // indirect
github.com/golang-jwt/jwt/v4 v4.4.2 // indirect
github.com/google/uuid v1.3.0 // indirect
github.com/gorilla/mux v1.7.3 // indirect
github.com/json-iterator/go v1.1.12 // indirect
github.com/klauspost/compress v1.15.12 // indirect
github.com/klauspost/cpuid/v2 v2.2.1 // indirect
github.com/kr/fs v0.1.0 // indirect
github.com/kr/text v0.2.0 // indirect
github.com/mattn/go-colorable v0.1.8 // indirect
github.com/mattn/go-isatty v0.0.12 // indirect
github.com/kylelemons/godebug v1.1.0 // indirect
github.com/mattn/go-colorable v0.1.13 // indirect
github.com/mattn/go-isatty v0.0.16 // indirect
github.com/minio/md5-simd v1.1.2 // indirect
github.com/minio/sha256-simd v1.0.0 // indirect
github.com/moby/term v0.0.0-20200312100748-672ec06f55cd // indirect
@@ -45,22 +47,15 @@ require (
github.com/modern-go/reflect2 v1.0.2 // indirect
github.com/morikuni/aec v1.0.0 // indirect
github.com/niemeyer/pretty v0.0.0-20200227124842-a10e7caefd8e // indirect
github.com/nxadm/tail v1.4.6 // indirect
github.com/onsi/ginkgo v1.14.2 // indirect
github.com/onsi/gomega v1.10.3 // indirect
github.com/opencontainers/go-digest v1.0.0 // indirect
github.com/opencontainers/image-spec v1.0.3-0.20211202183452-c5a74bcca799 // indirect
github.com/pkg/browser v0.0.0-20210115035449-ce105d075bb4 // indirect
github.com/pkg/errors v0.9.1 // indirect
github.com/rs/xid v1.4.0 // indirect
golang.org/x/net v0.2.0 // indirect
golang.org/x/sys v0.2.0 // indirect
golang.org/x/text v0.4.0 // indirect
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1 // indirect
google.golang.org/genproto v0.0.0-20220602131408-e326c6e8e9c8 // indirect
google.golang.org/grpc v1.47.0 // indirect
google.golang.org/protobuf v1.28.0 // indirect
golang.org/x/net v0.7.0 // indirect
golang.org/x/sys v0.7.0 // indirect
golang.org/x/text v0.7.0 // indirect
gopkg.in/check.v1 v1.0.0-20200227125254-8fa46927fb4f // indirect
gopkg.in/ini.v1 v1.67.0 // indirect
gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7 // indirect
gopkg.in/yaml.v2 v2.4.0 // indirect
gotest.tools/v3 v3.0.3 // indirect
)

go.sum (1092 lines changed)

File diff suppressed because it is too large.


@@ -0,0 +1,160 @@
// Copyright 2022 - Offen Authors <hioffen@posteo.de>
// SPDX-License-Identifier: MPL-2.0
package azure
import (
"bytes"
"context"
"errors"
"fmt"
"os"
"path/filepath"
"strings"
"sync"
"text/template"
"time"
"github.com/Azure/azure-sdk-for-go/sdk/azidentity"
"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob"
"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/container"
"github.com/offen/docker-volume-backup/internal/storage"
)
type azureBlobStorage struct {
*storage.StorageBackend
client *azblob.Client
containerName string
}
// Config contains values that define the configuration of an Azure Blob Storage.
type Config struct {
AccountName string
ContainerName string
PrimaryAccountKey string
Endpoint string
RemotePath string
}
// NewStorageBackend creates and initializes a new Azure Blob Storage backend.
func NewStorageBackend(opts Config, logFunc storage.Log) (storage.Backend, error) {
endpointTemplate, err := template.New("endpoint").Parse(opts.Endpoint)
if err != nil {
return nil, fmt.Errorf("NewStorageBackend: error parsing endpoint template: %w", err)
}
var ep bytes.Buffer
if err := endpointTemplate.Execute(&ep, opts); err != nil {
return nil, fmt.Errorf("NewStorageBackend: error executing endpoint template: %w", err)
}
normalizedEndpoint := fmt.Sprintf("%s/", strings.TrimSuffix(ep.String(), "/"))
var client *azblob.Client
if opts.PrimaryAccountKey != "" {
cred, err := azblob.NewSharedKeyCredential(opts.AccountName, opts.PrimaryAccountKey)
if err != nil {
return nil, fmt.Errorf("NewStorageBackend: error creating shared key Azure credential: %w", err)
}
client, err = azblob.NewClientWithSharedKeyCredential(normalizedEndpoint, cred, nil)
if err != nil {
return nil, fmt.Errorf("NewStorageBackend: error creating Azure client: %w", err)
}
} else {
cred, err := azidentity.NewManagedIdentityCredential(nil)
if err != nil {
return nil, fmt.Errorf("NewStorageBackend: error creating managed identity credential: %w", err)
}
client, err = azblob.NewClient(normalizedEndpoint, cred, nil)
if err != nil {
return nil, fmt.Errorf("NewStorageBackend: error creating Azure client: %w", err)
}
}
storage := azureBlobStorage{
client: client,
containerName: opts.ContainerName,
StorageBackend: &storage.StorageBackend{
DestinationPath: opts.RemotePath,
Log: logFunc,
},
}
return &storage, nil
}
// Name returns the name of the storage backend
func (b *azureBlobStorage) Name() string {
return "Azure"
}
// Copy copies the given file to the storage backend.
func (b *azureBlobStorage) Copy(file string) error {
fileReader, err := os.Open(file)
if err != nil {
return fmt.Errorf("(*azureBlobStorage).Copy: error opening file %s: %w", file, err)
}
_, err = b.client.UploadStream(
context.Background(),
b.containerName,
filepath.Join(b.DestinationPath, filepath.Base(file)),
fileReader,
nil,
)
if err != nil {
return fmt.Errorf("(*azureBlobStorage).Copy: error uploading file %s: %w", file, err)
}
return nil
}
// Prune rotates away backups according to the configuration and provided
// deadline for the Azure Blob storage backend.
func (b *azureBlobStorage) Prune(deadline time.Time, pruningPrefix string) (*storage.PruneStats, error) {
lookupPrefix := filepath.Join(b.DestinationPath, pruningPrefix)
pager := b.client.NewListBlobsFlatPager(b.containerName, &container.ListBlobsFlatOptions{
Prefix: &lookupPrefix,
})
var matches []string
var totalCount uint
for pager.More() {
resp, err := pager.NextPage(context.Background())
if err != nil {
return nil, fmt.Errorf("(*azureBlobStorage).Prune: error paging over blobs: %w", err)
}
for _, v := range resp.Segment.BlobItems {
totalCount++
if v.Properties.LastModified.Before(deadline) {
matches = append(matches, *v.Name)
}
}
}
stats := storage.PruneStats{
Total: totalCount,
Pruned: uint(len(matches)),
}
if err := b.DoPrune(b.Name(), len(matches), int(totalCount), "Azure Blob Storage backup(s)", func() error {
wg := sync.WaitGroup{}
wg.Add(len(matches))
var errs []error
for _, match := range matches {
name := match
go func() {
_, err := b.client.DeleteBlob(context.Background(), b.containerName, name, nil)
if err != nil {
errs = append(errs, err)
}
wg.Done()
}()
}
wg.Wait()
if len(errs) != 0 {
return errors.Join(errs...)
}
return nil
}); err != nil {
return &stats, err
}
return &stats, nil
}


@@ -4,6 +4,7 @@
package local
import (
"errors"
"fmt"
"io"
"os"
@@ -12,7 +13,6 @@ import (
"time"
"github.com/offen/docker-volume-backup/internal/storage"
"github.com/offen/docker-volume-backup/internal/utilities"
)
type localStorage struct {
@@ -127,7 +127,7 @@ func (b *localStorage) Prune(deadline time.Time, pruningPrefix string) (*storage
return fmt.Errorf(
"(*localStorage).Prune: %d error(s) deleting local files, starting with: %w",
len(removeErrors),
utilities.Join(removeErrors...),
errors.Join(removeErrors...),
)
}
return nil


@@ -15,7 +15,6 @@ import (
"github.com/minio/minio-go/v7"
"github.com/minio/minio-go/v7/pkg/credentials"
"github.com/offen/docker-volume-backup/internal/storage"
"github.com/offen/docker-volume-backup/internal/utilities"
)
type s3Storage struct {
@@ -159,7 +158,7 @@ func (b *s3Storage) Prune(deadline time.Time, pruningPrefix string) (*storage.Pr
}
}
if len(removeErrors) != 0 {
return utilities.Join(removeErrors...)
return errors.Join(removeErrors...)
}
return nil
}); err != nil {


@@ -29,7 +29,7 @@ const (
LogLevelError
)
type Log func(logType LogLevel, context string, msg string, params ...interface{})
type Log func(logType LogLevel, context string, msg string, params ...any)
// PruneStats is a wrapper struct for returning stats after pruning
type PruneStats struct {


@@ -67,15 +67,17 @@ func (b *webDavStorage) Name() string {
// Copy copies the given file to the WebDav storage backend.
func (b *webDavStorage) Copy(file string) error {
bytes, err := os.ReadFile(file)
_, name := path.Split(file)
if err != nil {
return fmt.Errorf("(*webDavStorage).Copy: Error reading the file to be uploaded: %w", err)
}
if err := b.client.MkdirAll(b.DestinationPath, 0644); err != nil {
return fmt.Errorf("(*webDavStorage).Copy: Error creating directory '%s' on WebDAV server: %w", b.DestinationPath, err)
}
if err := b.client.Write(filepath.Join(b.DestinationPath, name), bytes, 0644); err != nil {
r, err := os.Open(file)
if err != nil {
return fmt.Errorf("(*webDavStorage).Copy: Error opening the file to be uploaded: %w", err)
}
if err := b.client.WriteStream(filepath.Join(b.DestinationPath, name), r, 0644); err != nil {
return fmt.Errorf("(*webDavStorage).Copy: Error uploading the file to WebDAV server: %w", err)
}
b.Log(storage.LogLevelInfo, b.Name(), "Uploaded a copy of backup '%s' to WebDAV URL '%s' at path '%s'.", file, b.url, b.DestinationPath)


@@ -1,24 +0,0 @@
// Copyright 2022 - Offen Authors <hioffen@posteo.de>
// SPDX-License-Identifier: MPL-2.0
package utilities
import (
"errors"
"strings"
)
// Join takes a list of errors and joins them into a single error
func Join(errs ...error) error {
if len(errs) == 1 {
return errs[0]
}
var msgs []string
for _, err := range errs {
if err == nil {
continue
}
msgs = append(msgs, err.Error())
}
return errors.New("[" + strings.Join(msgs, ", ") + "]")
}


@@ -0,0 +1,58 @@
version: '3'
services:
storage:
image: mcr.microsoft.com/azure-storage/azurite
volumes:
- azurite_backup_data:/data
command: azurite-blob --blobHost 0.0.0.0 --blobPort 10000 --location /data
healthcheck:
test: nc 127.0.0.1 10000 -z
interval: 1s
retries: 30
az_cli:
image: mcr.microsoft.com/azure-cli
volumes:
- ./local:/dump
command:
- /bin/sh
- -c
- |
az storage container create --name test-container
depends_on:
storage:
condition: service_healthy
environment:
AZURE_STORAGE_CONNECTION_STRING: DefaultEndpointsProtocol=http;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;BlobEndpoint=http://storage:10000/devstoreaccount1;
backup:
image: offen/docker-volume-backup:${TEST_VERSION:-canary}
hostname: hostnametoken
restart: always
environment:
AZURE_STORAGE_ACCOUNT_NAME: devstoreaccount1
AZURE_STORAGE_PRIMARY_ACCOUNT_KEY: Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==
AZURE_STORAGE_CONTAINER_NAME: test-container
AZURE_STORAGE_ENDPOINT: http://storage:10000/{{ .AccountName }}/
AZURE_STORAGE_PATH: 'path/to/backup'
BACKUP_FILENAME: test.tar.gz
BACKUP_CRON_EXPRESSION: 0 0 5 31 2 ?
BACKUP_RETENTION_DAYS: ${BACKUP_RETENTION_DAYS:-7}
BACKUP_PRUNING_LEEWAY: 5s
BACKUP_PRUNING_PREFIX: test
volumes:
- app_data:/backup/app_data:ro
- /var/run/docker.sock:/var/run/docker.sock
offen:
image: offen/offen:latest
labels:
- docker-volume-backup.stop-during-backup=true
volumes:
- app_data:/var/opt/offen
volumes:
azurite_backup_data:
name: azurite_backup_data
app_data:

test/azure/run.sh (new file, 40 lines)

@@ -0,0 +1,40 @@
#!/bin/sh
set -e
cd "$(dirname "$0")"
. ../util.sh
current_test=$(basename $(pwd))
docker compose up -d
sleep 5
# A symlink for a known file in the volume is created so the test can check
# whether symlinks are preserved on backup.
docker compose exec backup backup
sleep 5
expect_running_containers "3"
docker compose run --rm az_cli \
az storage blob download -f /dump/test.tar.gz -c test-container -n path/to/backup/test.tar.gz
tar -xvf ./local/test.tar.gz -C /tmp && test -f /tmp/backup/app_data/offen.db
pass "Found relevant files in untared remote backups."
# The second part of this test checks if backups get deleted when the retention
# is set to 0 days (which it should not as it would mean all backups get deleted)
# TODO: find out if we can test actual deletion without having to wait for a day
BACKUP_RETENTION_DAYS="0" docker compose up -d
sleep 5
docker compose exec backup backup
docker compose run --rm az_cli \
az storage blob download -f /dump/test.tar.gz -c test-container -n path/to/backup/test.tar.gz
test -f ./local/test.tar.gz
pass "Remote backups have not been deleted."
docker compose down --volumes


@@ -24,20 +24,20 @@ openssl x509 -req -passin pass:test \
openssl x509 -in minio.crt -noout -text
docker-compose up -d
docker compose up -d
sleep 5
docker-compose exec backup backup
docker compose exec backup backup
sleep 5
expect_running_containers "3"
docker run --rm -it \
docker run --rm \
-v minio_backup_data:/minio_data \
alpine \
ash -c 'tar -xvf /minio_data/backup/test.tar.gz -C /tmp && test -f /tmp/backup/app_data/offen.db'
pass "Found relevant files in untared remote backups."
docker-compose down --volumes
docker compose down --volumes
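These test-script changes migrate from the standalone `docker-compose` binary to the `compose` plugin of the Docker CLI. A quick sanity check that the plugin is available on a host:

```sh
# Prints something like "Docker Compose version v2.x.x";
# fails if only the legacy docker-compose binary is installed.
docker compose version
```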


@@ -48,7 +48,7 @@ docker run --rm \
--entrypoint backup \
offen/docker-volume-backup:${TEST_VERSION:-canary}
docker run --rm -it \
docker run --rm \
-v backup_data:/data alpine \
ash -c 'tar -xvf /data/backup/test.tar.gz && test -f /backup/app_data/offen.db && test -d /backup/empty_data'


@@ -42,10 +42,9 @@ services:
EXEC_LABEL: test
EXEC_FORWARD_OUTPUT: "true"
volumes:
- archive:/archive
- ./local:/archive
- app_data:/backup/data:ro
- /var/run/docker.sock:/var/run/docker.sock
volumes:
app_data:
archive:


@@ -6,11 +6,12 @@ cd $(dirname $0)
. ../util.sh
current_test=$(basename $(pwd))
docker-compose up -d
mkdir -p ./local
docker compose up -d
sleep 30 # mariadb likes to take a bit before responding
docker-compose exec backup backup
sudo cp -r $(docker volume inspect --format='{{ .Mountpoint }}' commands_archive) ./local
docker compose exec backup backup
tar -xvf ./local/test.tar.gz
if [ ! -f ./backup/data/dump.sql ]; then
@@ -28,12 +29,13 @@ if [ -f ./backup/data/post.txt ]; then
fi
pass "Did not find unexpected file."
docker-compose down --volumes
docker compose down --volumes
sudo rm -rf ./local
info "Running commands test in swarm mode next."
mkdir -p ./local
docker swarm init
docker stack deploy --compose-file=docker-compose.yml test_stack
@@ -47,8 +49,6 @@ sleep 20
docker exec $(docker ps -q -f name=backup) backup
sudo cp -r $(docker volume inspect --format='{{ .Mountpoint }}' test_stack_archive) ./local
tar -xvf ./local/test.tar.gz
if [ ! -f ./backup/data/dump.sql ]; then
fail "Could not find file written by pre command."


@@ -8,12 +8,12 @@ current_test=$(basename $(pwd))
mkdir -p local
docker-compose up -d
docker compose up -d
# sleep until a backup is guaranteed to have happened on the 1 minute schedule
sleep 100
docker-compose down --volumes
docker compose down --volumes
if [ ! -f ./local/conf.tar.gz ]; then
fail "Config from file was not used."


@@ -8,14 +8,15 @@ current_test=$(basename $(pwd))
mkdir -p local
export BASE_VERSION="${TEST_VERSION:-canary}"
export TEST_VERSION="${TEST_VERSION:-canary}-with-rsync"
docker build . -t offen/docker-volume-backup:$TEST_VERSION
docker build . -t offen/docker-volume-backup:$TEST_VERSION --build-arg version=$BASE_VERSION
docker-compose up -d
docker compose up -d
sleep 5
docker-compose exec backup backup
docker compose exec backup backup
sleep 5
@@ -25,4 +26,4 @@ if [ ! -f "./local/app_data/offen.db" ]; then
fail "Could not find expected file in untared archive."
fi
docker-compose down --volumes
docker compose down --volumes


@@ -8,10 +8,10 @@ current_test=$(basename $(pwd))
mkdir -p local
docker-compose up -d
docker compose up -d
sleep 5
docker-compose exec backup backup
docker compose exec backup backup
expect_running_containers "2"
@@ -30,4 +30,4 @@ if [ ! -L ./local/test-latest.tar.gz.gpg ]; then
fail "Could not find local symlink to latest encrypted backup."
fi
docker-compose down --volumes
docker compose down --volumes


@@ -8,11 +8,11 @@ current_test=$(basename $(pwd))
mkdir -p local
docker-compose up -d
docker compose up -d
sleep 5
docker-compose exec backup backup
docker compose exec backup backup
docker-compose down --volumes
docker compose down --volumes
out=$(mktemp -d)
sudo tar --same-owner -xvf ./local/test.tar.gz -C "$out"


@@ -8,13 +8,13 @@ current_test=$(basename $(pwd))
mkdir -p local
docker-compose up -d
docker compose up -d
sleep 5
# A symlink for a known file in the volume is created so the test can check
# whether symlinks are preserved on backup.
docker-compose exec offen ln -s /var/opt/offen/offen.db /var/opt/offen/db.link
docker-compose exec backup backup
docker compose exec offen ln -s /var/opt/offen/offen.db /var/opt/offen/db.link
docker compose exec backup backup
sleep 5
@@ -42,14 +42,14 @@ pass "Found symlink to latest version in local backup."
# The second part of this test checks if backups get deleted when the retention
# is set to 0 days (which it should not as it would mean all backups get deleted)
# TODO: find out if we can test actual deletion without having to wait for a day
BACKUP_RETENTION_DAYS="0" docker-compose up -d
BACKUP_RETENTION_DAYS="0" docker compose up -d
sleep 5
docker-compose exec backup backup
docker compose exec backup backup
if [ "$(find ./local -type f | wc -l)" != "1" ]; then
fail "Backups should not have been deleted, instead seen: "$(find ./local -type f)""
fi
pass "Local backups have not been deleted."
docker-compose down --volumes
docker compose down --volumes


@@ -8,13 +8,13 @@ current_test=$(basename $(pwd))
mkdir -p local
docker-compose up -d
docker compose up -d
sleep 5
GOTIFY_TOKEN=$(curl -sSLX POST -H 'Content-Type: application/json' -d '{"name":"test"}' http://admin:custom@localhost:8080/application | jq -r '.token')
info "Set up Gotify application using token $GOTIFY_TOKEN"
docker-compose exec backup backup
docker compose exec backup backup
NUM_MESSAGES=$(curl -sSL http://admin:custom@localhost:8080/message | jq -r '.messages | length')
if [ "$NUM_MESSAGES" != 0 ]; then
@@ -22,11 +22,11 @@ if [ "$NUM_MESSAGES" != 0 ]; then
fi
pass "No notifications were sent when not configured."
docker-compose down
docker compose down
NOTIFICATION_URLS="gotify://gotify/${GOTIFY_TOKEN}?disableTLS=true" docker-compose up -d
NOTIFICATION_URLS="gotify://gotify/${GOTIFY_TOKEN}?disableTLS=true" docker compose up -d
docker-compose exec backup backup
docker compose exec backup backup
NUM_MESSAGES=$(curl -sSL http://admin:custom@localhost:8080/message | jq -r '.messages | length')
if [ "$NUM_MESSAGES" != 1 ]; then
@@ -47,4 +47,4 @@ if [ "$MESSAGE_BODY" != "Backing up /tmp/test.tar.gz succeeded." ]; then
fi
pass "Custom notification body was used."
docker-compose down --volumes
docker compose down --volumes


@@ -9,10 +9,10 @@ current_test=$(basename $(pwd))
mkdir -p local
docker-compose up -d
docker compose up -d
sleep 5
docker-compose exec backup backup
docker compose exec backup backup
tmp_dir=$(mktemp -d)
sudo tar --same-owner -xvf ./local/backup.tar.gz -C $tmp_dir
@@ -27,4 +27,4 @@ for file in $(sudo find $tmp_dir/backup/postgres); do
done
pass "All files and directories in backup preserved their ownership."
docker-compose down --volumes
docker compose down --volumes


@@ -6,18 +6,18 @@ cd "$(dirname "$0")"
. ../util.sh
current_test=$(basename $(pwd))
docker-compose up -d
docker compose up -d
sleep 5
# A symlink for a known file in the volume is created so the test can check
# whether symlinks are preserved on backup.
docker-compose exec backup backup
docker compose exec backup backup
sleep 5
expect_running_containers "3"
docker run --rm -it \
docker run --rm \
-v minio_backup_data:/minio_data \
alpine \
ash -c 'tar -xvf /minio_data/backup/test-hostnametoken.tar.gz -C /tmp && test -f /tmp/backup/app_data/offen.db'
@@ -27,16 +27,16 @@ pass "Found relevant files in untared remote backups."
# The second part of this test checks if backups get deleted when the retention
# is set to 0 days (which it should not as it would mean all backups get deleted)
# TODO: find out if we can test actual deletion without having to wait for a day
BACKUP_RETENTION_DAYS="0" docker-compose up -d
BACKUP_RETENTION_DAYS="0" docker compose up -d
sleep 5
docker-compose exec backup backup
docker compose exec backup backup
docker run --rm -it \
docker run --rm \
-v minio_backup_data:/minio_data \
alpine \
ash -c '[ $(find /minio_data/backup/ -type f | wc -l) = "1" ]'
pass "Remote backups have not been deleted."
docker-compose down --volumes
docker compose down --volumes


@@ -22,7 +22,7 @@ sleep 20
docker exec $(docker ps -q -f name=backup) backup
docker run --rm -it \
docker run --rm \
-v backup_data:/data alpine \
ash -c 'tar -xf /data/backup/test.tar.gz && test -f /backup/pg_data/PG_VERSION'


@@ -8,16 +8,16 @@ current_test=$(basename $(pwd))
ssh-keygen -t rsa -m pem -b 4096 -N "test1234" -f id_rsa -C "docker-volume-backup@local"
docker-compose up -d
docker compose up -d
sleep 5
docker-compose exec backup backup
docker compose exec backup backup
sleep 5
expect_running_containers 3
docker run --rm -it \
docker run --rm \
-v ssh_backup_data:/ssh_data \
alpine \
ash -c 'tar -xvf /ssh_data/test-hostnametoken.tar.gz -C /tmp && test -f /tmp/backup/app_data/offen.db'
@@ -27,17 +27,17 @@ pass "Found relevant files in decrypted and untared remote backups."
# The second part of this test checks if backups get deleted when the retention
# is set to 0 days (which it should not as it would mean all backups get deleted)
# TODO: find out if we can test actual deletion without having to wait for a day
BACKUP_RETENTION_DAYS="0" docker-compose up -d
BACKUP_RETENTION_DAYS="0" docker compose up -d
sleep 5
docker-compose exec backup backup
docker compose exec backup backup
docker run --rm -it \
docker run --rm \
-v ssh_backup_data:/ssh_data \
alpine \
ash -c '[ $(find /ssh_data/ -type f | wc -l) = "1" ]'
pass "Remote backups have not been deleted."
docker-compose down --volumes
docker compose down --volumes
rm -f id_rsa id_rsa.pub


@@ -19,7 +19,7 @@ sleep 20
docker exec $(docker ps -q -f name=backup) backup
docker run --rm -it \
docker run --rm \
-v backup_data:/data alpine \
ash -c 'tar -xf /data/backup/test.tar.gz && test -f /backup/pg_data/PG_VERSION'

test/user/.gitignore (new file, 2 lines)

@@ -0,0 +1,2 @@
local
backup


@@ -0,0 +1,30 @@
version: '2.4'
services:
alpine:
image: alpine:3.17.3
tty: true
volumes:
- app_data:/tmp
labels:
- docker-volume-backup.archive-pre.user=testuser
- docker-volume-backup.archive-pre=/bin/sh -c 'whoami > /tmp/whoami.txt'
backup:
image: offen/docker-volume-backup:${TEST_VERSION:-canary}
deploy:
restart_policy:
condition: on-failure
environment:
BACKUP_FILENAME: test.tar.gz
BACKUP_CRON_EXPRESSION: 0 0 5 31 2 ?
EXEC_FORWARD_OUTPUT: "true"
volumes:
- ./local:/archive
- app_data:/backup/data:ro
- /var/run/docker.sock:/var/run/docker.sock
volumes:
app_data:
archive:

test/user/run.sh (new file, 30 lines)

@@ -0,0 +1,30 @@
#!/bin/sh
set -e
cd $(dirname $0)
. ../util.sh
current_test=$(basename $(pwd))
docker compose up -d
user_name=testuser
docker exec user-alpine-1 adduser --disabled-password "$user_name"
docker compose exec backup backup
tar -xvf ./local/test.tar.gz
if [ ! -f ./backup/data/whoami.txt ]; then
fail "Could not find file written by pre command."
fi
pass "Found expected file."
tar -xvf ./local/test.tar.gz
if [ "$(cat ./backup/data/whoami.txt)" != "$user_name" ]; then
fail "Could not find expected user name."
fi
pass "Found expected user."
docker compose down --volumes
sudo rm -rf ./local


@@ -6,16 +6,16 @@ cd "$(dirname "$0")"
. ../util.sh
current_test=$(basename $(pwd))
docker-compose up -d
docker compose up -d
sleep 5
docker-compose exec backup backup
docker compose exec backup backup
sleep 5
expect_running_containers "3"
docker run --rm -it \
docker run --rm \
-v webdav_backup_data:/webdav_data \
alpine \
ash -c 'tar -xvf /webdav_data/data/my/new/path/test-hostnametoken.tar.gz -C /tmp && test -f /tmp/backup/app_data/offen.db'
@@ -25,16 +25,16 @@ pass "Found relevant files in untared remote backup."
# The second part of this test checks if backups get deleted when the retention
# is set to 0 days (which it should not as it would mean all backups get deleted)
# TODO: find out if we can test actual deletion without having to wait for a day
BACKUP_RETENTION_DAYS="0" docker-compose up -d
BACKUP_RETENTION_DAYS="0" docker compose up -d
sleep 5
docker-compose exec backup backup
docker compose exec backup backup
docker run --rm -it \
docker run --rm \
-v webdav_backup_data:/webdav_data \
alpine \
ash -c '[ $(find /webdav_data/data/my/new/path/ -type f | wc -l) = "1" ]'
pass "Remote backups have not been deleted."
docker-compose down --volumes
docker compose down --volumes