Compare commits


28 Commits

Author SHA1 Message Date
Frederik Ring
789fc656e8 Merge pull request #27 from offen/latest-symlink
Automatically create symlink to latest local backup if configured
2021-10-01 18:47:16 +02:00
Frederik Ring
c59b40f2df automatically create symlink to latest local backup if configured 2021-10-01 18:19:24 +02:00
Frederik Ring
cff418e735 fix README grammar 2021-10-01 08:48:20 +02:00
Frederik Ring
d7ccdd79fc Merge pull request #26 from offen/instance-profile
Allow s3 authentication via IAM role
2021-09-30 19:32:54 +02:00
Frederik Ring
bd73a2b5e4 allow s3 authentication via IAM role 2021-09-30 19:24:43 +02:00
Frederik Ring
6cf5cf47e7 Merge pull request #25 from offen/delete-on-failure
Ensure script always tries to remove local artifacts even when backup failed
2021-09-13 09:33:12 +02:00
Frederik Ring
53c257065e ensure script always tries to remove local artifacts even when backup failed 2021-09-12 10:48:19 +02:00
Frederik Ring
184b7a1e18 add docs on one off backups using docker cli 2021-09-11 11:21:48 +02:00
Frederik Ring
69a94f226b tweak configuration reference for email settings 2021-09-10 11:58:33 +02:00
Frederik Ring
160a47e90b allow registering hooks at different levels 2021-09-09 16:55:49 +02:00
Frederik Ring
59660ec5c7 include exit log message in notification 2021-09-09 11:08:05 +02:00
Frederik Ring
af3e69b7a8 fix typo in README 2021-09-09 09:19:37 +02:00
Frederik Ring
5d400cb943 Merge pull request #24 from offen/failure-email
Enable sending out email notifications on failed backups
2021-09-09 09:10:20 +02:00
Frederik Ring
88368197c1 implement email notifications on failed backup runs 2021-09-09 09:00:23 +02:00
Frederik Ring
e46968ed79 call error hooks on script failure 2021-09-09 08:12:07 +02:00
Frederik Ring
2c06f81503 collect all log output in buffer so it could be used in notifications 2021-09-09 07:24:18 +02:00
Frederik Ring
55d030a06a Merge pull request #22 from offen/targz-fork
Fix handling of symlinks in backup targets
2021-09-06 18:15:34 +02:00
Frederik Ring
fefc34c6aa tidy go mod file 2021-09-04 15:54:09 +02:00
Frederik Ring
5922820ada add test for checking behavior on symlinks 2021-09-04 10:30:34 +02:00
Frederik Ring
8aba98c012 use forked version of package targz 2021-09-04 10:08:06 +02:00
Frederik Ring
70daa0308a Merge pull request #19 from offen/golang-version
v2 Rewrite
2021-08-30 19:57:36 +02:00
Frederik Ring
ede94bcd88 display all error messages instead of first one 2021-08-29 19:39:51 +02:00
Frederik Ring
aae97a5617 try restarting even when stopping some containers failed 2021-08-29 18:51:05 +02:00
Frederik Ring
825cbb50ef always use background context directly 2021-08-29 18:26:40 +02:00
Frederik Ring
bea203af3d improve documentation 2021-08-29 18:16:04 +02:00
Frederik Ring
6034e6a902 print proper local archive in log message 2021-08-29 08:36:45 +02:00
Frederik Ring
d0eca0a179 fix container stop execution order 2021-08-26 16:22:24 +02:00
Frederik Ring
a0fe2cf42d handle errors on container restart 2021-08-26 12:50:22 +02:00
7 changed files with 757 additions and 186 deletions

README.md

@@ -2,114 +2,47 @@
Backup Docker volumes locally or to any S3 compatible storage.

The [offen/docker-volume-backup](https://hub.docker.com/r/offen/docker-volume-backup) Docker image can be used as a lightweight (below 15MB) sidecar container to an existing Docker setup.
It handles __recurring or one-off backups of Docker volumes__ to a __local directory__ or __any S3 compatible storage__ (or both), and __rotates away old backups__ if configured. It also supports __encrypting your backups using GPG__ and __sending notifications for failed backup runs__.

<!-- MarkdownTOC -->

- [Quickstart](#quickstart)
  - [Recurring backups in a compose setup](#recurring-backups-in-a-compose-setup)
  - [One-off backups using Docker CLI](#one-off-backups-using-docker-cli)
- [Configuration reference](#configuration-reference)
- [How to](#how-to)
  - [Stopping containers during backup](#stopping-containers-during-backup)
  - [Automatically pruning old backups](#automatically-pruning-old-backups)
  - [Send email notifications on failed backup runs](#send-email-notifications-on-failed-backup-runs)
  - [Encrypting your backup using GPG](#encrypting-your-backup-using-gpg)
  - [Restoring a volume from a backup](#restoring-a-volume-from-a-backup)
  - [Using with Docker Swarm](#using-with-docker-swarm)
  - [Manually triggering a backup](#manually-triggering-a-backup)
- [Recipes](#recipes)
  - [Backing up to AWS S3](#backing-up-to-aws-s3)
  - [Backing up to MinIO](#backing-up-to-minio)
  - [Backing up locally](#backing-up-locally)
  - [Backing up to AWS S3 as well as locally](#backing-up-to-aws-s3-as-well-as-locally)
  - [Running on a custom cron schedule](#running-on-a-custom-cron-schedule)
  - [Rotating away backups that are older than 7 days](#rotating-away-backups-that-are-older-than-7-days)
  - [Encrypting your backups using GPG](#encrypting-your-backups-using-gpg)
  - [Running multiple instances in the same setup](#running-multiple-instances-in-the-same-setup)
- [Differences to `futurice/docker-volume-backup`](#differences-to-futuricedocker-volume-backup)
<!-- /MarkdownTOC -->

---

Code and documentation for `v1` versions are found on [this branch][v1-branch].

[v1-branch]: https://github.com/offen/docker-volume-backup/tree/v1

## Quickstart

### Recurring backups in a compose setup

Add a `backup` service to your compose setup and mount the volumes you would like to see backed up:

```yml
version: '3'
@@ -127,26 +60,309 @@ services:
    - docker-volume-backup.stop-during-backup=true

  backup:
    # In production, it is advised to lock your image tag to a proper
    # release version instead of using `latest`.
    # Check https://github.com/offen/docker-volume-backup/releases
    # for a list of available releases.
    image: offen/docker-volume-backup:latest
    restart: always
    env_file: ./backup.env # see below for configuration reference
    volumes:
      # Mounting the Docker socket allows the script to stop and restart
      # the container during backup. You can omit this if you don't want
      # to stop the container
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - data:/backup/my-app-backup:ro
      # If you mount a local directory or volume to `/archive` a local
      # copy of the backup will be stored there. You can override the
      # location inside of the container by setting `BACKUP_ARCHIVE`.
      # You can omit this if you do not want to keep local backups.
      - /path/to/local_backups:/archive

volumes:
  data:
```
### One-off backups using Docker CLI

To run a one-time backup, mount the volume you would like to see backed up into a container and run the `backup` command:
```console
docker run --rm \
  -v data:/backup/data \
  --env AWS_ACCESS_KEY_ID="<xxx>" \
  --env AWS_SECRET_ACCESS_KEY="<xxx>" \
  --env AWS_S3_BUCKET_NAME="<xxx>" \
  --entrypoint backup \
  offen/docker-volume-backup:latest
```
Alternatively, pass a `--env-file` in order to use a full config as described below.
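For example, assuming your configuration lives in a local `backup.env` file (the filename is just a placeholder), such a one-off run could look like this:

```console
docker run --rm \
  -v data:/backup/data \
  --env-file ./backup.env \
  --entrypoint backup \
  offen/docker-volume-backup:latest
```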
## Configuration reference
Backup targets, schedule and retention are configured in environment variables.
You can populate the template below according to your requirements and use it as your `env_file`:
```ini
########### BACKUP SCHEDULE
# Backups run on the given cron schedule in `busybox` flavor. If no
# value is set, `@daily` will be used. If you do not want the cron
# to ever run, use `0 0 5 31 2 ?`.
# BACKUP_CRON_EXPRESSION="0 2 * * *"
# The name of the backup file including the `.tar.gz` extension.
# Format verbs will be replaced as in `strftime`. Omitting them
# will result in the same filename for every backup run, which means previous
# versions will be overwritten on subsequent runs. The default results
# in filenames like `backup-2021-08-29T04-00-00.tar.gz`.
# BACKUP_FILENAME="backup-%Y-%m-%dT%H-%M-%S.tar.gz"
# When storing local backups, a symlink to the latest backup can be created
# in case a value is given for this key. This has no effect on remote backups.
# BACKUP_LATEST_SYMLINK="backup.latest.tar.gz"
########### BACKUP STORAGE
# The name of the remote bucket that should be used for storing backups. If
# this is not set, no remote backups will be stored.
# AWS_S3_BUCKET_NAME="backup-bucket"
# Define credentials for authenticating against the backup storage and a bucket
# name. Although all of these keys are `AWS`-prefixed, the setup can be used
# with any S3 compatible storage.
# AWS_ACCESS_KEY_ID="<xxx>"
# AWS_SECRET_ACCESS_KEY="<xxx>"
# Instead of providing static credentials, you can also use IAM instance profiles
# or similar to provide authentication. Some possible configuration options on AWS:
# - EC2: http://169.254.169.254
# - ECS: http://169.254.170.2
# AWS_IAM_ROLE_ENDPOINT="http://169.254.169.254"
# This is the FQDN of your storage server, e.g. `storage.example.com`.
# Do not set this when working against AWS S3 (the default value is
# `s3.amazonaws.com`). If you need to set a specific (non-https) protocol, you
# will need to use the option below.
# AWS_ENDPOINT="storage.example.com"
# The protocol to be used when communicating with your storage server.
# Defaults to "https". You can set this to "http" when communicating with
# a different Docker container on the same host for example.
# AWS_ENDPOINT_PROTO="https"
# Setting this variable to `true` will disable verification of
# SSL certificates. You shouldn't use this unless you use self-signed
# certificates for your remote storage backend.
# AWS_ENDPOINT_INSECURE="true"
# In addition to storing backups remotely, you can also keep local copies.
# Pass a container-local path to store your backups if needed. You also need to
# mount a local folder or Docker volume into that location (`/archive`
# by default) when running the container. In case the specified directory does
# not exist (nothing is mounted) in the container when the backup is running,
# local backups will be skipped. Local paths are also subject to pruning of
# old backups as defined below.
# BACKUP_ARCHIVE="/archive"
########### BACKUP PRUNING
# **IMPORTANT, PLEASE READ THIS BEFORE USING THIS FEATURE**:
# The mechanism used for pruning old backups is not very sophisticated
# and applies its rules to **all files in the target directory** by default,
# which means that if you are storing your backups next to other files,
# these might become subject to deletion too. When using this option
# make sure the backup files are stored in a directory used exclusively
# for such files, or configure BACKUP_PRUNING_PREFIX to limit
# removal to certain files.
# Define this value to enable automatic rotation of old backups. The value
# declares the number of days for which a backup is kept.
# BACKUP_RETENTION_DAYS="7"
# In case the duration a backup takes fluctuates noticeably in your setup
# you can adjust this setting to make sure there are no race conditions
# between the backup finishing and the rotation not deleting backups that
# sit on the edge of the time window. Set this value to a duration
# that is expected to be bigger than the maximum difference of backups.
# Valid values have a suffix of (s)econds, (m)inutes or (h)ours. By default,
# one minute is used.
# BACKUP_PRUNING_LEEWAY="1m"
# In case your target bucket or directory contains files other than the ones
# managed by this container, you can limit the scope of rotation by setting
# a prefix value. This would usually be the non-parametrized part of your
# BACKUP_FILENAME. E.g. if BACKUP_FILENAME is `db-backup-%Y-%m-%dT%H-%M-%S.tar.gz`,
# you can set BACKUP_PRUNING_PREFIX to `db-backup-` and make sure
# unrelated files are not affected by the rotation mechanism.
# BACKUP_PRUNING_PREFIX="backup-"
########### BACKUP ENCRYPTION
# Backups can be encrypted using gpg in case a passphrase is given.
# GPG_PASSPHRASE="<xxx>"
########### STOPPING CONTAINERS DURING BACKUP
# Containers can be stopped by applying a
# `docker-volume-backup.stop-during-backup` label. By default, all containers
# that are labeled with `true` will be stopped. If you need more fine grained
# control (e.g. when running multiple containers based on this image), you can
# override this default by specifying a different value here.
# BACKUP_STOP_CONTAINER_LABEL="service1"
########### EMAIL NOTIFICATIONS ON FAILED BACKUP RUNS
# In case SMTP credentials are provided, notification emails can be sent out on
# failed backup runs. These emails will contain the start time, the error
# message and all log output prior to the failure.
# The recipient(s) of the notification. Supply a comma separated list
# of addresses if you want to notify multiple recipients. If this is
# not set, no emails will be sent.
# EMAIL_NOTIFICATION_RECIPIENT="you@example.com"
# The "From" header of the sent email. Defaults to `noreply@nohost`.
# EMAIL_NOTIFICATION_SENDER="no-reply@example.com"
# Configuration and credentials for the SMTP server to be used.
# EMAIL_SMTP_PORT defaults to 587.
# EMAIL_SMTP_HOST="posteo.de"
# EMAIL_SMTP_PASSWORD="<xxx>"
# EMAIL_SMTP_USERNAME="no-reply@example.com"
# EMAIL_SMTP_PORT="<port>"
```
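As a concrete sketch, a minimal `backup.env` for nightly, pruned and encrypted backups to an S3 bucket could look like this (all values below are placeholders):

```ini
BACKUP_CRON_EXPRESSION="0 2 * * *"
BACKUP_FILENAME="backup-%Y-%m-%dT%H-%M-%S.tar.gz"
BACKUP_PRUNING_PREFIX="backup-"
BACKUP_RETENTION_DAYS="7"
AWS_S3_BUCKET_NAME="my-backup-bucket"
AWS_ACCESS_KEY_ID="<xxx>"
AWS_SECRET_ACCESS_KEY="<xxx>"
GPG_PASSPHRASE="<xxx>"
```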
## How to
### Stopping containers during backup
In many cases, it will be desirable to stop the services that are consuming the volume you want to back up in order to ensure data integrity.
This image can automatically stop and restart containers and services (in case you are running Docker in Swarm mode).
By default, any container that is labeled `docker-volume-backup.stop-during-backup=true` will be stopped before the backup is taken and restarted once it has finished.
In case you need finer-grained control over which containers should be stopped (e.g. when backing up multiple volumes on different schedules), you can set the `BACKUP_STOP_CONTAINER_LABEL` environment variable and then use the same value for labeling:
```yml
version: '3'

services:
  app:
    # definition for app ...
    labels:
      - docker-volume-backup.stop-during-backup=service1

  backup:
    image: offen/docker-volume-backup:latest
    environment:
      BACKUP_STOP_CONTAINER_LABEL: service1
    volumes:
      - data:/backup/my-app-backup:ro
      - /var/run/docker.sock:/var/run/docker.sock:ro

volumes:
  data:
```
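The same label also works for containers that are not part of a compose setup, e.g. when started directly via the Docker CLI (the image name here is a placeholder):

```console
docker run -d \
  --label docker-volume-backup.stop-during-backup=true \
  my-app:latest
```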
### Automatically pruning old backups
When `BACKUP_RETENTION_DAYS` is configured, the image will check if there are any backups in the remote bucket or local archive that are older than the given retention value and rotate these backups away.
Be aware that this mechanism looks at __all files in the target bucket or archive__, which means that any other files older than the given deadline are deleted as well. In case your target cannot be used exclusively for these backups, configure `BACKUP_PRUNING_PREFIX` to limit which files are considered eligible for deletion:
```yml
version: '3'

services:
  # ... define other services using the `data` volume here

  backup:
    image: offen/docker-volume-backup:latest
    environment:
      BACKUP_FILENAME: backup-%Y-%m-%dT%H-%M-%S.tar.gz
      BACKUP_PRUNING_PREFIX: backup-
      BACKUP_RETENTION_DAYS: 7
    volumes:
      - ${HOME}/backups:/archive
      - data:/backup/my-app-backup:ro
      - /var/run/docker.sock:/var/run/docker.sock:ro

volumes:
  data:
```
### Send email notifications on failed backup runs
To send out email notifications on failed backup runs, provide SMTP credentials, a sender and a recipient:
```yml
version: '3'

services:
  backup:
    image: offen/docker-volume-backup:latest
    environment:
      # ... other configuration values go here
      EMAIL_SMTP_HOST: "smtp.example.com"
      EMAIL_SMTP_PASSWORD: "password"
      EMAIL_SMTP_USERNAME: "username"
      EMAIL_NOTIFICATION_SENDER: "noreply@example.com"
      EMAIL_NOTIFICATION_RECIPIENT: "notifications@example.com"
```
### Encrypting your backup using GPG
The image supports encrypting backups using GPG out of the box.
In case a `GPG_PASSPHRASE` environment variable is set, the backup will be encrypted using the given key and saved as a `.gpg` file instead.
Assuming you have `gpg` installed, you can decrypt such a backup like this (you will be prompted for the passphrase before decryption starts):
```console
gpg -o backup.tar.gz -d backup.tar.gz.gpg
```
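If you need to decrypt non-interactively (e.g. in a script), `gpg` can also read the passphrase from a file descriptor; this is the same pattern used in this repository's test setup:

```console
echo "<your passphrase>" | gpg -d --yes --passphrase-fd 0 backup.tar.gz.gpg > backup.tar.gz
```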
### Restoring a volume from a backup
In case you need to restore a volume from a backup, the most straightforward procedure would be:
- Stop the container(s) that are using the volume
- Untar the backup you want to restore
```console
tar -C /tmp -xvf backup.tar.gz
```
- Using a temporary one-off container, mount the volume (the example assumes it's named `data`) and copy over the backup. Make sure you copy the correct path level (this depends on how you mount your volume into the backup container); you might need to strip some leading path elements:
```console
docker run -d --name backup_restore -v data:/backup_restore alpine
docker cp /tmp/backup/data-backup backup_restore:/backup_restore
docker stop backup_restore
docker rm backup_restore
```
- Restart the container(s) that are using the volume
Depending on your setup and the application(s) you are running, restoring might still involve additional steps.
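As a sketch of a more compact alternative, you could also untar straight into the volume with a single one-off container. This assumes the archive sits at `/tmp/backup.tar.gz`, the same `data-backup` mount path as above, and a `tar` build that supports `--strip-components`:

```console
docker run --rm \
  -v data:/restore \
  -v /tmp:/tmp \
  alpine \
  tar -C /restore --strip-components 2 -xvf /tmp/backup.tar.gz backup/data-backup
```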
### Using with Docker Swarm
By default, Docker Swarm will restart stopped containers automatically, even when manually stopped.
If you plan to have your containers / services stopped during backup, this means you need to apply the `on-failure` restart policy to your services' definitions.
A restart policy of `always` is not compatible with this tool.
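A minimal sketch of what that could look like in a stack file (the `app` service is hypothetical):

```yml
version: '3'

services:
  app:
    image: my-app:latest
    deploy:
      restart_policy:
        # `always` would fight the backup's stop command;
        # `on-failure` lets the container stay stopped during backup
        condition: on-failure
```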
---
@@ -162,7 +378,7 @@ services:
          memory: 25M
```
### Manually triggering a backup

You can manually trigger a backup run outside of the defined cron schedule by executing the `backup` command inside the container:
@@ -170,15 +386,210 @@ You can manually trigger a backup run outside of the defined cron schedule by ex
```console
docker exec <container_ref> backup
```
## Recipes
This section lists configuration for some real-world use cases that you can mix and match according to your needs.
### Backing up to AWS S3
```yml
version: '3'

services:
  # ... define other services using the `data` volume here

  backup:
    image: offen/docker-volume-backup:latest
    environment:
      AWS_S3_BUCKET_NAME: backup-bucket
      AWS_ACCESS_KEY_ID: AKIAIOSFODNN7EXAMPLE
      AWS_SECRET_ACCESS_KEY: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
    volumes:
      - data:/backup/my-app-backup:ro
      - /var/run/docker.sock:/var/run/docker.sock:ro

volumes:
  data:
```
### Backing up to MinIO
```yml
version: '3'

services:
  # ... define other services using the `data` volume here

  backup:
    image: offen/docker-volume-backup:latest
    environment:
      AWS_ENDPOINT: minio.example.com
      AWS_S3_BUCKET_NAME: backup-bucket
      AWS_ACCESS_KEY_ID: MINIOACCESSKEY
      AWS_SECRET_ACCESS_KEY: MINIOSECRETKEY
    volumes:
      - data:/backup/my-app-backup:ro
      - /var/run/docker.sock:/var/run/docker.sock:ro

volumes:
  data:
```
### Backing up locally
```yml
version: '3'

services:
  # ... define other services using the `data` volume here

  backup:
    image: offen/docker-volume-backup:latest
    environment:
      BACKUP_FILENAME: backup-%Y-%m-%dT%H-%M-%S.tar.gz
      BACKUP_LATEST_SYMLINK: backup-latest.tar.gz
    volumes:
      - data:/backup/my-app-backup:ro
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ${HOME}/backups:/archive

volumes:
  data:
```
### Backing up to AWS S3 as well as locally
```yml
version: '3'

services:
  # ... define other services using the `data` volume here

  backup:
    image: offen/docker-volume-backup:latest
    environment:
      AWS_S3_BUCKET_NAME: backup-bucket
      AWS_ACCESS_KEY_ID: AKIAIOSFODNN7EXAMPLE
      AWS_SECRET_ACCESS_KEY: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
    volumes:
      - data:/backup/my-app-backup:ro
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ${HOME}/backups:/archive

volumes:
  data:
```
### Running on a custom cron schedule
```yml
version: '3'

services:
  # ... define other services using the `data` volume here

  backup:
    image: offen/docker-volume-backup:latest
    environment:
      # run a backup every hour on the hour
      BACKUP_CRON_EXPRESSION: "0 * * * *"
      AWS_S3_BUCKET_NAME: backup-bucket
      AWS_ACCESS_KEY_ID: AKIAIOSFODNN7EXAMPLE
      AWS_SECRET_ACCESS_KEY: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
    volumes:
      - data:/backup/my-app-backup:ro
      - /var/run/docker.sock:/var/run/docker.sock:ro

volumes:
  data:
```
### Rotating away backups that are older than 7 days
```yml
version: '3'

services:
  # ... define other services using the `data` volume here

  backup:
    image: offen/docker-volume-backup:latest
    environment:
      AWS_S3_BUCKET_NAME: backup-bucket
      AWS_ACCESS_KEY_ID: AKIAIOSFODNN7EXAMPLE
      AWS_SECRET_ACCESS_KEY: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
      BACKUP_FILENAME: backup-%Y-%m-%dT%H-%M-%S.tar.gz
      BACKUP_PRUNING_PREFIX: backup-
      BACKUP_RETENTION_DAYS: 7
    volumes:
      - data:/backup/my-app-backup:ro
      - /var/run/docker.sock:/var/run/docker.sock:ro

volumes:
  data:
```
### Encrypting your backups using GPG
```yml
version: '3'

services:
  # ... define other services using the `data` volume here

  backup:
    image: offen/docker-volume-backup:latest
    environment:
      AWS_S3_BUCKET_NAME: backup-bucket
      AWS_ACCESS_KEY_ID: AKIAIOSFODNN7EXAMPLE
      AWS_SECRET_ACCESS_KEY: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
      GPG_PASSPHRASE: somesecretstring
    volumes:
      - data:/backup/my-app-backup:ro
      - /var/run/docker.sock:/var/run/docker.sock:ro

volumes:
  data:
```
### Running multiple instances in the same setup
```yml
version: '3'

services:
  # ... define other services using the `data_1` and `data_2` volumes here

  backup_1: &backup_service
    image: offen/docker-volume-backup:latest
    environment: &backup_environment
      BACKUP_CRON_EXPRESSION: "0 2 * * *"
      AWS_S3_BUCKET_NAME: backup-bucket
      AWS_ACCESS_KEY_ID: AKIAIOSFODNN7EXAMPLE
      AWS_SECRET_ACCESS_KEY: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
      # Label the container using the `data_1` volume as
      # `docker-volume-backup.stop-during-backup=service1`
      BACKUP_STOP_CONTAINER_LABEL: service1
    volumes:
      - data_1:/backup/data-1-backup:ro
      - /var/run/docker.sock:/var/run/docker.sock:ro

  backup_2:
    <<: *backup_service
    environment:
      <<: *backup_environment
      # Label the container using the `data_2` volume as
      # `docker-volume-backup.stop-during-backup=service2`
      BACKUP_CRON_EXPRESSION: "0 3 * * *"
      BACKUP_STOP_CONTAINER_LABEL: service2
    volumes:
      - data_2:/backup/data-2-backup:ro
      - /var/run/docker.sock:/var/run/docker.sock:ro

volumes:
  data_1:
  data_2:
```
## Differences to `futurice/docker-volume-backup`

This image is heavily inspired by `futurice/docker-volume-backup`. We decided to publish this image as a simpler and more lightweight alternative because of the following requirements:

- The original image is based on `ubuntu` and requires additional tools, making it heavy.
  This version is roughly 1/25 in compressed size (it's ~12MB).
- The original image uses a shell script, whereas this version is written in Go, which makes it easier to extend and maintain (more verbose too).
- The original image proposed to handle backup rotation through AWS S3 lifecycle policies.
  This image adds the option to rotate away old backups through the same command so this functionality can also be offered for non-AWS storage backends like MinIO.
  Local copies of backups can also be pruned once they reach a certain age.
- InfluxDB specific functionality from the original image was removed.
- `arm64` and `arm/v7` architectures are supported.
- Docker in Swarm mode is supported.


@@ -4,25 +4,29 @@
package main

import (
	"bytes"
	"context"
	"errors"
	"fmt"
	"io"
	"os"
	"path"
	"path/filepath"
	"strings"
	"time"

	"github.com/docker/docker/api/types"
	"github.com/docker/docker/api/types/filters"
	"github.com/docker/docker/api/types/swarm"
	"github.com/docker/docker/client"
	"github.com/go-gomail/gomail"
	"github.com/gofrs/flock"
	"github.com/kelseyhightower/envconfig"
	"github.com/leekchan/timeutil"
	"github.com/m90/targz"
	"github.com/minio/minio-go/v7"
	"github.com/minio/minio-go/v7/pkg/credentials"
	"github.com/sirupsen/logrus"
	"golang.org/x/crypto/openpgp"
)
@@ -35,18 +39,34 @@ func main() {
		panic(err)
	}

	defer func() {
		if err := recover(); err != nil {
			if e, ok := err.(error); ok && strings.Contains(e.Error(), msgBackupFailed) {
				os.Exit(1)
			}
			panic(err)
		}
	}()

	s.must(func() error {
		restartContainers, err := s.stopContainers()
		defer func() {
			s.must(restartContainers())
		}()
		if err != nil {
			return err
		}
		return s.takeBackup()
	}())

	s.must(func() error {
		defer func() {
			s.must(s.removeArtifacts())
		}()
		s.must(s.encryptBackup())
		return s.copyBackup()
	}())

	s.must(s.pruneOldBackups())
	s.logger.Info("Finished running backup tasks.")
}
@@ -54,49 +74,61 @@ func main() {
// script holds all the stateful information required to orchestrate a
// single backup run.
type script struct {
	cli    *client.Client
	mc     *minio.Client
	logger *logrus.Logger
	hooks  []hook

	start  time.Time
	file   string
	output *bytes.Buffer

	c *config
}

type config struct {
	BackupSources              string        `split_words:"true" default:"/backup"`
	BackupFilename             string        `split_words:"true" default:"backup-%Y-%m-%dT%H-%M-%S.tar.gz"`
	BackupLatestSymlink        string        `split_words:"true"`
	BackupArchive              string        `split_words:"true" default:"/archive"`
	BackupRetentionDays        int32         `split_words:"true" default:"-1"`
	BackupPruningLeeway        time.Duration `split_words:"true" default:"1m"`
	BackupPruningPrefix        string        `split_words:"true"`
	BackupStopContainerLabel   string        `split_words:"true" default:"true"`
	AwsS3BucketName            string        `split_words:"true"`
	AwsEndpoint                string        `split_words:"true" default:"s3.amazonaws.com"`
	AwsEndpointProto           string        `split_words:"true" default:"https"`
	AwsEndpointInsecure        bool          `split_words:"true"`
	AwsAccessKeyID             string        `envconfig:"AWS_ACCESS_KEY_ID"`
	AwsSecretAccessKey         string        `split_words:"true"`
	AwsIamRoleEndpoint         string        `split_words:"true"`
	GpgPassphrase              string        `split_words:"true"`
	EmailNotificationRecipient string        `split_words:"true"`
	EmailNotificationSender    string        `split_words:"true" default:"noreply@nohost"`
	EmailSMTPHost              string        `envconfig:"EMAIL_SMTP_HOST"`
	EmailSMTPPort              int           `envconfig:"EMAIL_SMTP_PORT" default:"587"`
	EmailSMTPUsername          string        `envconfig:"EMAIL_SMTP_USERNAME"`
	EmailSMTPPassword          string        `envconfig:"EMAIL_SMTP_PASSWORD"`
}

var msgBackupFailed = "backup run failed"

// newScript creates all resources needed for the script to perform actions against
// remote resources like the Docker engine or remote storage locations. All
// reading from env vars or other configuration sources is expected to happen
// in this method.
func newScript() (*script, error) {
	stdOut, logBuffer := buffer(os.Stdout)
	s := &script{
		c: &config{},
		logger: &logrus.Logger{
			Out:       stdOut,
			Formatter: new(logrus.TextFormatter),
			Hooks:     make(logrus.LevelHooks),
			Level:     logrus.InfoLevel,
		},
		start:  time.Now(),
		output: logBuffer,
	}

	if err := envconfig.Process("", s.c); err != nil {
@@ -115,12 +147,21 @@ func newScript() (*script, error) {
	}

	if s.c.AwsS3BucketName != "" {
		var creds *credentials.Credentials
		if s.c.AwsAccessKeyID != "" && s.c.AwsSecretAccessKey != "" {
			creds = credentials.NewStaticV4(
				s.c.AwsAccessKeyID,
				s.c.AwsSecretAccessKey,
				"",
			)
		} else if s.c.AwsIamRoleEndpoint != "" {
			creds = credentials.NewIAM(s.c.AwsIamRoleEndpoint)
		} else {
			return nil, errors.New("newScript: AWS_S3_BUCKET_NAME is defined, but no credentials were provided")
		}

		mc, err := minio.New(s.c.AwsEndpoint, &minio.Options{
			Creds:  creds,
			Secure: !s.c.AwsEndpointInsecure && s.c.AwsEndpointProto == "https",
		})
		if err != nil {
@@ -129,6 +170,28 @@ func newScript() (*script, error) {
		s.mc = mc
	}

	if s.c.EmailNotificationRecipient != "" {
		s.hooks = append(s.hooks, hook{hookLevelFailure, func(err error, start time.Time, logOutput string) error {
			mailer := gomail.NewDialer(
				s.c.EmailSMTPHost, s.c.EmailSMTPPort, s.c.EmailSMTPUsername, s.c.EmailSMTPPassword,
			)
			subject := fmt.Sprintf(
				"Failure running docker-volume-backup at %s", start.Format(time.RFC3339),
			)
			body := fmt.Sprintf(
				"Running docker-volume-backup failed with error: %s\n\nLog output of the failed run was:\n\n%s\n", err, logOutput,
			)
			message := gomail.NewMessage()
			message.SetHeader("From", s.c.EmailNotificationSender)
			message.SetHeader("To", s.c.EmailNotificationRecipient)
			message.SetHeader("Subject", subject)
			message.SetBody("text/plain", body)
			return mailer.DialAndSend(message)
		}})
	}

	return s, nil
}
@@ -142,7 +205,7 @@ func (s *script) stopContainers() (func() error, error) {
		return noop, nil
	}

	allContainers, err := s.cli.ContainerList(context.Background(), types.ContainerListOptions{
		Quiet: true,
	})
	if err != nil {
@@ -153,7 +216,7 @@
		"docker-volume-backup.stop-during-backup=%s",
		s.c.BackupStopContainerLabel,
	)
	containersToStop, err := s.cli.ContainerList(context.Background(), types.ContainerListOptions{
		Quiet: true,
		Filters: filters.NewArgs(filters.KeyValuePair{
			Key: "label",
@@ -179,18 +242,19 @@
	var stoppedContainers []types.Container
	var stopErrors []error
	for _, container := range containersToStop {
		if err := s.cli.ContainerStop(context.Background(), container.ID, nil); err != nil {
			stopErrors = append(stopErrors, err)
		} else {
			stoppedContainers = append(stoppedContainers, container)
		}
	}

	var stopError error
	if len(stopErrors) != 0 {
		stopError = fmt.Errorf(
			"stopContainersAndRun: %d error(s) stopping containers: %w",
			len(stopErrors),
			join(stopErrors...),
		)
	}
@@ -203,13 +267,13 @@
				servicesRequiringUpdate[swarmServiceName] = struct{}{}
				continue
			}
			if err := s.cli.ContainerStart(context.Background(), container.ID, types.ContainerStartOptions{}); err != nil {
				restartErrors = append(restartErrors, err)
			}
		}

		if len(servicesRequiringUpdate) != 0 {
			services, _ := s.cli.ServiceList(context.Background(), types.ServiceListOptions{})
			for serviceName := range servicesRequiringUpdate {
				var serviceMatch swarm.Service
				for _, service := range services {
@@ -219,11 +283,11 @@
					}
				}
				if serviceMatch.ID == "" {
					return fmt.Errorf("stopContainersAndRun: couldn't find service with name %s", serviceName)
				}
				serviceMatch.Spec.TaskTemplate.ForceUpdate = 1
				_, err := s.cli.ServiceUpdate(
					context.Background(), serviceMatch.ID,
					serviceMatch.Version, serviceMatch.Spec, types.ServiceUpdateOptions{},
				)
				if err != nil {
@@ -236,7 +300,7 @@
			return fmt.Errorf(
				"stopContainersAndRun: %d error(s) restarting containers and services: %w",
				len(restartErrors),
				join(restartErrors...),
			)
		}
		s.logger.Infof(
@@ -244,7 +308,7 @@
			len(stoppedContainers),
		)
		return nil
	}, stopError
}
// takeBackup creates a tar archive of the configured backup location and
@@ -260,7 +324,7 @@ func (s *script) takeBackup() error {
// encryptBackup encrypts the backup file using PGP and the configured passphrase.
// In case no passphrase is given it returns early, leaving the backup file
// untouched.
func (s *script) encryptBackup() error {
	if s.c.GpgPassphrase == "" {
		return nil
@@ -302,31 +366,48 @@ func (s *script) encryptBackup() error {
// as per the given configuration.
func (s *script) copyBackup() error {
	_, name := path.Split(s.file)
	if s.mc != nil {
		_, err := s.mc.FPutObject(context.Background(), s.c.AwsS3BucketName, name, s.file, minio.PutObjectOptions{
			ContentType: "application/tar+gzip",
		})
		if err != nil {
			return fmt.Errorf("copyBackup: error uploading backup to remote storage: %w", err)
		}
		s.logger.Infof("Uploaded a copy of backup `%s` to bucket `%s`.", s.file, s.c.AwsS3BucketName)
	}

	if _, err := os.Stat(s.c.BackupArchive); !os.IsNotExist(err) {
		if err := copy(s.file, path.Join(s.c.BackupArchive, name)); err != nil {
			return fmt.Errorf("copyBackup: error copying file to local archive: %w", err)
		}
		s.logger.Infof("Stored copy of backup `%s` in local archive `%s`.", s.file, s.c.BackupArchive)
		if s.c.BackupLatestSymlink != "" {
			symlink := path.Join(s.c.BackupArchive, s.c.BackupLatestSymlink)
			if _, err := os.Lstat(symlink); err == nil {
				os.Remove(symlink)
			}
			if err := os.Symlink(name, symlink); err != nil {
				return fmt.Errorf("copyBackup: error creating latest symlink: %w", err)
			}
			s.logger.Infof("Created/Updated symlink `%s` for latest backup.", s.c.BackupLatestSymlink)
		}
	}
	return nil
}
// removeArtifacts removes the backup file from disk.
func (s *script) removeArtifacts() error {
	_, err := os.Stat(s.file)
	if err != nil {
		if os.IsNotExist(err) {
			return nil
		}
		return fmt.Errorf("removeArtifacts: error calling stat on file %s: %w", s.file, err)
	}
	if err := os.Remove(s.file); err != nil {
		return fmt.Errorf("removeArtifacts: error removing file %s: %w", s.file, err)
	}
	s.logger.Infof("Removed local artifacts %s.", s.file)
	return nil
}
@@ -343,11 +424,10 @@ func (s *script) pruneOldBackups() error {
		time.Sleep(s.c.BackupPruningLeeway)
	}

	deadline := time.Now().AddDate(0, 0, -int(s.c.BackupRetentionDays))

	if s.mc != nil {
		candidates := s.mc.ListObjects(context.Background(), s.c.AwsS3BucketName, minio.ListObjectsOptions{
			WithMetadata: true,
			Prefix:       s.c.BackupPruningPrefix,
		})
@@ -375,25 +455,26 @@
			}
			close(objectsCh)
		}()
		errChan := s.mc.RemoveObjects(context.Background(), s.c.AwsS3BucketName, objectsCh, minio.RemoveObjectsOptions{})
		var removeErrors []error
		for result := range errChan {
			if result.Err != nil {
				removeErrors = append(removeErrors, result.Err)
			}
		}

		if len(removeErrors) != 0 {
			return fmt.Errorf(
				"pruneOldBackups: %d error(s) removing files from remote storage: %w",
				len(removeErrors),
				join(removeErrors...),
			)
		}
		s.logger.Infof(
			"Pruned %d out of %d remote backup(s) as their age exceeded the configured retention period of %d days.",
			len(matches),
			lenCandidates,
			s.c.BackupRetentionDays,
		)
	} else if len(matches) != 0 && len(matches) == lenCandidates {
		s.logger.Warnf(
@@ -427,29 +508,30 @@
			)
		}
		if fi.Mode() != os.ModeSymlink && fi.ModTime().Before(deadline) {
			matches = append(matches, candidate)
		}
	}

	if len(matches) != 0 && len(matches) != len(candidates) {
		var removeErrors []error
		for _, candidate := range matches {
			if err := os.Remove(candidate); err != nil {
				removeErrors = append(removeErrors, err)
			}
		}

		if len(removeErrors) != 0 {
			return fmt.Errorf(
				"pruneOldBackups: %d error(s) deleting local files, starting with: %w",
				len(removeErrors),
				join(removeErrors...),
			)
		}
		s.logger.Infof(
			"Pruned %d out of %d local backup(s) as their age exceeded the configured retention period of %d days.",
			len(matches),
			len(candidates),
			s.c.BackupRetentionDays,
		)
	} else if len(matches) != 0 && len(matches) == len(candidates) {
		s.logger.Warnf(
@@ -464,9 +546,35 @@ func (s *script) pruneOldBackups() error {
	return nil
}

// runHooks runs all hooks that have been registered using the
// given level. In case executing a hook returns an error, the following
// hooks will still be run before the function returns an error.
func (s *script) runHooks(err error, targetLevel string) error {
	var actionErrors []error
	for _, hook := range s.hooks {
		if hook.level != targetLevel {
			continue
		}
		if err := hook.action(err, s.start, s.output.String()); err != nil {
			actionErrors = append(actionErrors, err)
		}
	}
	if len(actionErrors) != 0 {
		return join(actionErrors...)
	}
	return nil
}

// must exits the script run prematurely in case the given error
// is non-nil. If failure hooks have been registered on the script object, they
// will be called, passing the failure and previous log output.
func (s *script) must(err error) {
	if err != nil {
		s.logger.Errorf("Fatal error running backup: %s", err)
		if hookErr := s.runHooks(err, hookLevelFailure); hookErr != nil {
			s.logger.Errorf("An error occurred calling the registered failure hooks: %s", hookErr)
		}
		panic(errors.New(msgBackupFailed))
	}
}
@@ -505,3 +613,48 @@ func copy(src, dst string) error {
	}
	return out.Close()
}

// join takes a list of errors and joins them into a single error
func join(errs ...error) error {
	if len(errs) == 1 {
		return errs[0]
	}
	var msgs []string
	for _, err := range errs {
		if err == nil {
			continue
		}
		msgs = append(msgs, err.Error())
	}
	return errors.New("[" + strings.Join(msgs, ", ") + "]")
}

// buffer takes an io.Writer and returns a wrapped version of the
// writer that writes to both the original target as well as the returned buffer
func buffer(w io.Writer) (io.Writer, *bytes.Buffer) {
	buffering := &bufferingWriter{buf: bytes.Buffer{}, writer: w}
	return buffering, &buffering.buf
}

type bufferingWriter struct {
	buf    bytes.Buffer
	writer io.Writer
}

func (b *bufferingWriter) Write(p []byte) (n int, err error) {
	if n, err := b.buf.Write(p); err != nil {
		return n, fmt.Errorf("bufferingWriter: error writing to buffer: %w", err)
	}
	return b.writer.Write(p)
}

// hook contains a queued action that can be triggered when the script
// reaches a certain point (e.g. unsuccessful backup)
type hook struct {
	level  string
	action func(err error, start time.Time, logOutput string) error
}

const (
	hookLevelFailure = "failure"
)

go.mod

@@ -7,9 +7,9 @@ require (
	github.com/gofrs/flock v0.8.1
	github.com/kelseyhightower/envconfig v1.4.0
	github.com/leekchan/timeutil v0.0.0-20150802142658-28917288c48d
	github.com/m90/targz v0.0.0-20210904082215-2e9a4529a615
	github.com/minio/minio-go/v7 v7.0.12
	github.com/sirupsen/logrus v1.8.1
	golang.org/x/crypto v0.0.0-20210817164053-32db794688a5
)
@@ -20,6 +20,7 @@ require (
	github.com/docker/go-connections v0.4.0 // indirect
	github.com/docker/go-units v0.4.0 // indirect
	github.com/dustin/go-humanize v1.0.0 // indirect
	github.com/go-gomail/gomail v0.0.0-20160411212932-81ebce5c23df // indirect
	github.com/gogo/protobuf v1.3.2 // indirect
	github.com/golang/protobuf v1.5.0 // indirect
	github.com/google/uuid v1.2.0 // indirect
@@ -41,5 +42,6 @@ require (
	google.golang.org/genproto v0.0.0-20201110150050-8816d57aaa9a // indirect
	google.golang.org/grpc v1.33.2 // indirect
	google.golang.org/protobuf v1.26.0 // indirect
	gopkg.in/alexcesaro/quotedprintable.v3 v3.0.0-20150716171945-2caba252f4dc // indirect
	gopkg.in/ini.v1 v1.57.0 // indirect
)

go.sum

@@ -254,6 +254,8 @@ github.com/ghodss/yaml v1.0.0/go.mod h1:4dBDuWmgqj2HViK6kFavaiC9ZROes6MMH2rRYeME
github.com/go-gl/glfw v0.0.0-20190409004039-e6da0acd62b1/go.mod h1:vR7hzQXu2zJy9AVAgeJqvqgH9Q5CA+iKCZ2gyEVpxRU=
github.com/go-gl/glfw/v3.3/glfw v0.0.0-20191125211704-12ad95a8df72/go.mod h1:tQ2UAYgL5IevRw8kRxooKSPJfGvJ9fJQFa0TUsXzTg8=
github.com/go-gl/glfw/v3.3/glfw v0.0.0-20200222043503-6f7a984d4dc4/go.mod h1:tQ2UAYgL5IevRw8kRxooKSPJfGvJ9fJQFa0TUsXzTg8=
github.com/go-gomail/gomail v0.0.0-20160411212932-81ebce5c23df h1:Bao6dhmbTA1KFVxmJ6nBoMuOJit2yjEgLJpIMYpop0E=
github.com/go-gomail/gomail v0.0.0-20160411212932-81ebce5c23df/go.mod h1:GJr+FCSXshIwgHBtLglIg9M2l2kQSi6QjVAngtzI08Y=
github.com/go-ini/ini v1.25.4/go.mod h1:ByCAeIL28uOIIG0E3PJtZPDL8WnHpFKFOtgjp+3Ies8=
github.com/go-kit/kit v0.8.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
github.com/go-kit/kit v0.9.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
@@ -407,6 +409,8 @@ github.com/kr/text v0.1.0 h1:45sCR5RtlFHMR4UwH9sdQ5TC8v0qDQCHnXt+kaKSTVE=
github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
github.com/leekchan/timeutil v0.0.0-20150802142658-28917288c48d h1:2puqoOQwi3Ai1oznMOsFIbifm6kIfJaLLyYzWD4IzTs=
github.com/leekchan/timeutil v0.0.0-20150802142658-28917288c48d/go.mod h1:hO90vCP2x3exaSH58BIAowSKvV+0OsY21TtzuFGHON4=
github.com/m90/targz v0.0.0-20210904082215-2e9a4529a615 h1:rn0LO2tQEgCDOct8qnbcslTUpAIWdVlWcGkjoumhf2U=
github.com/m90/targz v0.0.0-20210904082215-2e9a4529a615/go.mod h1:YZK3bSO/oVlk9G+v00BxgzxW2Us4p/R4ysHOBjk0fJI=
github.com/magiconair/properties v1.8.0/go.mod h1:PppfXfuXeibc/6YijjN8zIbojt8czPbwD3XqdrwzmxQ=
github.com/mailru/easyjson v0.0.0-20190614124828-94de47d64c63/go.mod h1:C1wdFJiN94OJF2b5HbByQZoLdCWB1Yqtg26g4irojpc=
github.com/mailru/easyjson v0.0.0-20190626092158-b2ccc519800e/go.mod h1:C1wdFJiN94OJF2b5HbByQZoLdCWB1Yqtg26g4irojpc=
@@ -596,8 +600,6 @@ github.com/vishvananda/netlink v1.1.1-0.20201029203352-d40f9887b852/go.mod h1:tw
github.com/vishvananda/netns v0.0.0-20180720170159-13995c7128cc/go.mod h1:ZjcWmFBXmLKZu9Nxj3WKYEafiSqer2rnvPr0en9UNpI=
github.com/vishvananda/netns v0.0.0-20191106174202-0a2b9b5464df/go.mod h1:JP3t17pCcGlemwknint6hfoeCVQrEMVwxRLRjXpq+BU=
github.com/vishvananda/netns v0.0.0-20200728191858-db3c7e526aae/go.mod h1:DD4vA1DwXk04H54A1oHXtwZmA0grkVMdPxx/VGLCah0=
github.com/willf/bitset v1.1.11-0.20200630133818-d5bec3311243/go.mod h1:RjeCKbqT1RxIR/KWY6phxZiaY1IyutSBfGjNPySAYV4=
github.com/willf/bitset v1.1.11/go.mod h1:83CECat5yLh5zVOf4P1ErAgKA5UDvKtgyUABdr3+MjI=
github.com/xeipuuv/gojsonpointer v0.0.0-20180127040702-4e3ac2762d5f/go.mod h1:N2zxlSyiKSe5eX1tZViRH5QA0qijqEDrYZiPEAiq3wU=
@@ -908,6 +910,8 @@ google.golang.org/protobuf v1.26.0 h1:bxAC2xTBsZGibn2RTntX0oH50xLsqy1OxA9tTL3p/l
google.golang.org/protobuf v1.26.0/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc=
gopkg.in/airbrake/gobrake.v2 v2.0.9/go.mod h1:/h5ZAUhDkGaJfjzjKLSjv6zCL6O0LLBxU4K+aSYdM/U=
gopkg.in/alecthomas/kingpin.v2 v2.2.6/go.mod h1:FMv+mEhP44yOT+4EoQTLFTRgOQ1FBLkstjWtayDeSgw=
gopkg.in/alexcesaro/quotedprintable.v3 v3.0.0-20150716171945-2caba252f4dc h1:2gGKlE2+asNV9m7xrywl36YYNnBG5ZQ0r/BOOxqPpmk=
gopkg.in/alexcesaro/quotedprintable.v3 v3.0.0-20150716171945-2caba252f4dc/go.mod h1:m7x9LTH6d71AHyAX77c9yqWCCa3UKHcVEj9y7hAtKDk=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20141024133853-64131543e789/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=


@@ -29,8 +29,7 @@ docker run -d \
sleep 10

docker run --rm \
  --network test_network \
  -v app_data:/backup/app_data \
  -v /var/run/docker.sock:/var/run/docker.sock \
@@ -40,18 +39,16 @@ docker run -d \
  --env AWS_ENDPOINT_PROTO=http \
  --env AWS_S3_BUCKET_NAME=backup \
  --env BACKUP_FILENAME=test.tar.gz \
  --entrypoint backup \
  offen/docker-volume-backup:$TEST_VERSION

docker run --rm -it \
  -v backup_data:/data alpine \
  ash -c 'tar -xvf /data/backup/test.tar.gz && test -f /backup/app_data/offen.db'

echo "[TEST:PASS] Found relevant files in untared backup."

if [ "$(docker ps -q | wc -l)" != "2" ]; then
  echo "[TEST:FAIL] Expected all containers to be running post backup, instead seen:"
  docker ps
  exit 1


@@ -24,6 +24,7 @@ services:
      AWS_ENDPOINT_PROTO: http
      AWS_S3_BUCKET_NAME: backup
      BACKUP_FILENAME: test.tar.gz
      BACKUP_LATEST_SYMLINK: test.latest.tar.gz.gpg
      BACKUP_CRON_EXPRESSION: 0 0 5 31 2 ?
      BACKUP_RETENTION_DAYS: ${BACKUP_RETENTION_DAYS:-7}
      BACKUP_PRUNING_LEEWAY: 5s


@@ -9,6 +9,7 @@ mkdir -p local
docker-compose up -d
sleep 5

docker-compose exec offen ln -s /var/opt/offen/offen.db /var/opt/offen/db.link
docker-compose exec backup backup

docker run --rm -it \
@@ -17,9 +18,11 @@ docker run --rm -it \
echo "[TEST:PASS] Found relevant files in untared remote backup." echo "[TEST:PASS] Found relevant files in untared remote backup."
test -L ./local/test.latest.tar.gz.gpg
echo 1234secret | gpg -d --yes --passphrase-fd 0 ./local/test.tar.gz.gpg > ./local/decrypted.tar.gz
tar -xf ./local/decrypted.tar.gz -C /tmp && test -f /tmp/backup/app_data/offen.db
rm ./local/decrypted.tar.gz
test -L /tmp/backup/app_data/db.link
echo "[TEST:PASS] Found relevant files in untared local backup." echo "[TEST:PASS] Found relevant files in untared local backup."