Compare commits

..

25 Commits

Author SHA1 Message Date
Frederik Ring
0c666d0c88 use lstat when checking whether file is a symlink 2021-11-03 18:07:55 +01:00
Frederik Ring
a0402b407d fix fileinfo mode comparison when checking for symlinks 2021-11-03 18:03:44 +01:00
Frederik Ring
3193e88fc0 os.FileInfo cannot be used for deleting files as it does not contain a full path 2021-11-02 06:40:37 +01:00
Frederik Ring
c391230be6 Merge pull request #31 from offen/exclude-symlink-candidates
Exclude symlinks from candidates when pruning local files
2021-10-31 20:07:51 +01:00
Frederik Ring
f946f36fb0 exclude symlinks from candidates when pruning local files
Previously, symlinks would be included in the set of candidates, but would
be skipped when pruning. This could lead to a wrong number of candidates
being printed in the log messages.
2021-10-29 09:00:37 +02:00
Frederik Ring
5245b5882f update README, save some indentation 2021-10-28 19:55:39 +02:00
schwannden
7f0f173115 adding option to skip tls verification error (#30)
* adding option to skip tls verification error

* merge options

* removed merged option from README

Co-authored-by: Schwannden Kuo <schwannden@mobagel.com>
2021-10-28 19:51:35 +02:00
Frederik Ring
ad7ec58322 add syntax highlighting 2021-10-23 17:45:57 +02:00
Frederik Ring
b7ab2fbacc add section about container timezones to the README 2021-10-23 17:44:30 +02:00
Frederik Ring
789fc656e8 Merge pull request #27 from offen/latest-symlink
Automatically create symlink to latest local backup if configured
2021-10-01 18:47:16 +02:00
Frederik Ring
c59b40f2df automatically create symlink to latest local backup if configured 2021-10-01 18:19:24 +02:00
Frederik Ring
cff418e735 fix README grammar 2021-10-01 08:48:20 +02:00
Frederik Ring
d7ccdd79fc Merge pull request #26 from offen/instance-profile
Allow s3 authentication via IAM role
2021-09-30 19:32:54 +02:00
Frederik Ring
bd73a2b5e4 allow s3 authentication via IAM role 2021-09-30 19:24:43 +02:00
Frederik Ring
6cf5cf47e7 Merge pull request #25 from offen/delete-on-failure
Ensure script always tries to remove local artifacts even when backup failed
2021-09-13 09:33:12 +02:00
Frederik Ring
53c257065e ensure script always tries to remove local artifacts even when backup failed 2021-09-12 10:48:19 +02:00
Frederik Ring
184b7a1e18 add docs on one off backups using docker cli 2021-09-11 11:21:48 +02:00
Frederik Ring
69a94f226b tweak configuration reference for email settings 2021-09-10 11:58:33 +02:00
Frederik Ring
160a47e90b allow registering hooks at different levels 2021-09-09 16:55:49 +02:00
Frederik Ring
59660ec5c7 include exit log message in notification 2021-09-09 11:08:05 +02:00
Frederik Ring
af3e69b7a8 fix typo in README 2021-09-09 09:19:37 +02:00
Frederik Ring
5d400cb943 Merge pull request #24 from offen/failure-email
Enable sending out email notifications on failed backups
2021-09-09 09:10:20 +02:00
Frederik Ring
88368197c1 implement email notifications on failed backup runs 2021-09-09 09:00:23 +02:00
Frederik Ring
e46968ed79 call error hooks on script failure 2021-09-09 08:12:07 +02:00
Frederik Ring
2c06f81503 collect all log output in buffer so it could be used in notifications 2021-09-09 07:24:18 +02:00
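Three of the commits above (f946f36fb0, a0402b407d and 0c666d0c88) converge on one pattern: when globbing for prune candidates, each match is inspected with `os.Lstat`, which reports on a link itself where `os.Stat` would follow it, and symlinks are filtered out by masking the file mode against `os.ModeSymlink`. Below is a minimal, self-contained sketch of that filter; the `pruneCandidates` helper and the `/archive/backup-*` pattern are illustrative and not taken from the repository:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// pruneCandidates returns all glob matches that are not symlinks, so
// links such as a "latest backup" pointer are neither counted in log
// messages nor deleted during pruning.
func pruneCandidates(pattern string) ([]string, error) {
	matches, err := filepath.Glob(pattern)
	if err != nil {
		return nil, err
	}
	var candidates []string
	for _, match := range matches {
		// Lstat does not follow symlinks: the returned mode bits
		// describe the link itself rather than its target.
		fi, err := os.Lstat(match)
		if err != nil {
			return nil, err
		}
		if fi.Mode()&os.ModeSymlink == 0 {
			candidates = append(candidates, match)
		}
	}
	return candidates, nil
}

func main() {
	candidates, err := pruneCandidates("/archive/backup-*")
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d prune candidate(s): %v\n", len(candidates), candidates)
}
```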
7 changed files with 326 additions and 47 deletions

README.md

@@ -3,17 +3,21 @@
 Backup Docker volumes locally or to any S3 compatible storage.
 
 The [offen/docker-volume-backup](https://hub.docker.com/r/offen/docker-volume-backup) Docker image can be used as a lightweight (below 15MB) sidecar container to an existing Docker setup.
-It handles __recurring or one-off backups of Docker volumes__ to a __local directory__ or __any S3 compatible storage__ (or both), and __rotates away old backups__ if configured. It also supports __encrypting your backups using GPG__.
+It handles __recurring or one-off backups of Docker volumes__ to a __local directory__ or __any S3 compatible storage__ (or both), and __rotates away old backups__ if configured. It also supports __encrypting your backups using GPG__ and __sending notifications for failed backup runs__.
 
 <!-- MarkdownTOC -->
 
 - [Quickstart](#quickstart)
+  - [Recurring backups in a compose setup](#recurring-backups-in-a-compose-setup)
+  - [One-off backups using Docker CLI](#one-off-backups-using-docker-cli)
 - [Configuration reference](#configuration-reference)
 - [How to](#how-to)
   - [Stopping containers during backup](#stopping-containers-during-backup)
   - [Automatically pruning old backups](#automatically-pruning-old-backups)
+  - [Send email notifications on failed backup runs](#send-email-notifications-on-failed-backup-runs)
   - [Encrypting your backup using GPG](#encrypting-your-backup-using-gpg)
   - [Restoring a volume from a backup](#restoring-a-volume-from-a-backup)
+  - [Set the timezone the container runs in](#set-the-timezone-the-container-runs-in)
   - [Using with Docker Swarm](#using-with-docker-swarm)
   - [Manually triggering a backup](#manually-triggering-a-backup)
 - [Recipes](#recipes)
@@ -37,6 +41,8 @@ Code and documentation for `v1` versions are found on [this branch][v1-branch].
 
 ## Quickstart
 
+### Recurring backups in a compose setup
+
 Add a `backup` service to your compose setup and mount the volumes you would like to see backed up:
 
 ```yml
@@ -55,6 +61,10 @@ services:
         - docker-volume-backup.stop-during-backup=true
 
   backup:
+    # In production, it is advised to lock your image tag to a proper
+    # release version instead of using `latest`.
+    # Check https://github.com/offen/docker-volume-backup/releases
+    # for a list of available releases.
     image: offen/docker-volume-backup:latest
     restart: always
    env_file: ./backup.env # see below for configuration reference
@@ -73,6 +83,22 @@ volumes:
   data:
 ```
 
+### One-off backups using Docker CLI
+
+To run a one time backup, mount the volume you would like to see backed up into a container and run the `backup` command:
+
+```console
+docker run --rm \
+  -v data:/backup/data \
+  --env AWS_ACCESS_KEY_ID="<xxx>" \
+  --env AWS_SECRET_ACCESS_KEY="<xxx>" \
+  --env AWS_S3_BUCKET_NAME="<xxx>" \
+  --entrypoint backup \
+  offen/docker-volume-backup:latest
+```
+
+Alternatively, pass a `--env-file` in order to use a full config as described below.
+
 ## Configuration reference
 
 Backup targets, schedule and retention are configured in environment variables.
@@ -95,6 +121,11 @@ You can populate below template according to your requirements and use it as you
 
 # BACKUP_FILENAME="backup-%Y-%m-%dT%H-%M-%S.tar.gz"
 
+# When storing local backups, a symlink to the latest backup can be created
+# in case a value is given for this key. This has no effect on remote backups.
+
+# BACKUP_LATEST_SYMLINK="backup.latest.tar.gz"
+
 ########### BACKUP STORAGE
 
 # The name of the remote bucket that should be used for storing backups. If
@@ -109,6 +140,13 @@ You can populate below template according to your requirements and use it as you
 # AWS_ACCESS_KEY_ID="<xxx>"
 # AWS_SECRET_ACCESS_KEY="<xxx>"
 
+# Instead of providing static credentials, you can also use IAM instance profiles
+# or similar to provide authentication. Some possible configuration options on AWS:
+# - EC2: http://169.254.169.254
+# - ECS: http://169.254.170.2
+
+# AWS_IAM_ROLE_ENDPOINT="http://169.254.169.254"
+
 # This is the FQDN of your storage server, e.g. `storage.example.com`.
 # Do not set this when working against AWS S3 (the default value is
 # `s3.amazonaws.com`). If you need to set a specific (non-https) protocol, you
@@ -124,7 +162,8 @@ You can populate below template according to your requirements and use it as you
 
 # Setting this variable to `true` will disable verification of
 # SSL certificates. You shouldn't use this unless you use self-signed
-# certificates for your remote storage backend.
+# certificates for your remote storage backend. This can only be used
+# when AWS_ENDPOINT_PROTO is set to `https`.
 
 # AWS_ENDPOINT_INSECURE="true"
@@ -188,6 +227,30 @@ You can populate below template according to your requirements and use it as you
 # override this default by specifying a different value here.
 
 # BACKUP_STOP_CONTAINER_LABEL="service1"
 
+########### EMAIL NOTIFICATIONS ON FAILED BACKUP RUNS
+
+# In case SMTP credentials are provided, notification emails can be sent out on
+# failed backup runs. These emails will contain the start time, the error
+# message and all log output prior to the failure.
+
+# The recipient(s) of the notification. Supply a comma separated list
+# of addresses if you want to notify multiple recipients. If this is
+# not set, no emails will be sent.
+
+# EMAIL_NOTIFICATION_RECIPIENT="you@example.com"
+
+# The "From" header of the sent email. Defaults to `noreply@nohost`.
+
+# EMAIL_NOTIFICATION_SENDER="no-reply@example.com"
+
+# Configuration and credentials for the SMTP server to be used.
+# EMAIL_SMTP_PORT defaults to 587.
+
+# EMAIL_SMTP_HOST="posteo.de"
+# EMAIL_SMTP_PASSWORD="<xxx>"
+# EMAIL_SMTP_USERNAME="no-reply@example.com"
+# EMAIL_SMTP_PORT="<port>"
 ```
 
 ## How to
@@ -247,6 +310,25 @@ volumes:
   data:
 ```
 
+### Send email notifications on failed backup runs
+
+To send out email notifications on failed backup runs, provide SMTP credentials, a sender and a recipient:
+
+```yml
+version: '3'
+
+services:
+  backup:
+    image: offen/docker-volume-backup:latest
+    environment:
+      # ... other configuration values go here
+      EMAIL_SMTP_HOST: "smtp.example.com"
+      EMAIL_SMTP_PASSWORD: "password"
+      EMAIL_SMTP_USERNAME: "username"
+      EMAIL_NOTIFICATION_SENDER: "noreply@example.com"
+      EMAIL_NOTIFICATION_RECIPIENT: "notifications@example.com"
+```
+
 ### Encrypting your backup using GPG
 
 The image supports encrypting backups using GPG out of the box.
@@ -278,6 +360,27 @@ In case you need to restore a volume from a backup, the most straight forward pr
 
 Depending on your setup and the application(s) you are running, this might involve other steps to be taken still.
 
+### Set the timezone the container runs in
+
+By default a container based on this image will run in the UTC timezone.
+As the image is designed to be as small as possible, additional timezone data is not included.
+In case you want to run your cron rules in your local timezone (respecting DST and similar), you can mount your Docker host's `/etc/timezone` and `/etc/localtime` in read-only mode:
+
+```yml
+version: '3'
+
+services:
+  backup:
+    image: offen/docker-volume-backup:latest
+    volumes:
+      - data:/backup/my-app-backup:ro
+      - /etc/timezone:/etc/timezone:ro
+      - /etc/localtime:/etc/localtime:ro
+
+volumes:
+  data:
+```
+
 ### Using with Docker Swarm
 
 By default, Docker Swarm will restart stopped containers automatically, even when manually stopped.
@@ -362,6 +465,9 @@ services:
   # ... define other services using the `data` volume here
   backup:
     image: offen/docker-volume-backup:latest
+    environment:
+      BACKUP_FILENAME: backup-%Y-%m-%dT%H-%M-%S.tar.gz
+      BACKUP_LATEST_SYMLINK: backup-latest.tar.gz
     volumes:
       - data:/backup/my-app-backup:ro
       - /var/run/docker.sock:/var/run/docker.sock:ro
@@ -499,7 +605,7 @@ volumes:
 
 ## Differences to `futurice/docker-volume-backup`
 
-This image is heavily inspired by the `futurice/docker-volume-backup`. We decided to publish this image as a simpler and more lightweight alternative because of the following requirements:
+This image is heavily inspired by `futurice/docker-volume-backup`. We decided to publish this image as a simpler and more lightweight alternative because of the following requirements:
 
 - The original image is based on `ubuntu` and requires additional tools, making it heavy.
   This version is roughly 1/25 in compressed size (it's ~12MB).
@@ -510,3 +616,5 @@ Local copies of backups can also be pruned once they reach a certain age.
 - InfluxDB specific functionality from the original image was removed.
 - `arm64` and `arm/v7` architectures are supported.
 - Docker in Swarm mode is supported.
+- Notifications on failed backups are supported
+- IAM authentication through instance profiles is supported
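Taken together, the README additions in this diff (latest-backup symlink, IAM instance profile authentication, failure notifications and host timezone mounts) can be combined in a single service definition. The sketch below is a hypothetical compose file assembled from the documented variables; the bucket name and SMTP values are placeholders, not values from the README:

```yml
version: '3'

services:
  backup:
    image: offen/docker-volume-backup:latest
    environment:
      BACKUP_FILENAME: backup-%Y-%m-%dT%H-%M-%S.tar.gz
      # Maintain a stable symlink to the most recent local backup.
      BACKUP_LATEST_SYMLINK: backup.latest.tar.gz
      # Authenticate via an IAM instance profile instead of static keys.
      AWS_S3_BUCKET_NAME: my-bucket
      AWS_IAM_ROLE_ENDPOINT: "http://169.254.169.254"
      # Send an email notification when a backup run fails.
      EMAIL_SMTP_HOST: "smtp.example.com"
      EMAIL_SMTP_USERNAME: "username"
      EMAIL_SMTP_PASSWORD: "password"
      EMAIL_NOTIFICATION_SENDER: "noreply@example.com"
      EMAIL_NOTIFICATION_RECIPIENT: "notifications@example.com"
    volumes:
      - data:/backup/data:ro
      # Mount host timezone data so cron rules run in local time.
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro

volumes:
  data:
```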

(Go source file)

@@ -4,6 +4,7 @@
 package main
 
 import (
+	"bytes"
 	"context"
 	"errors"
 	"fmt"
@@ -18,6 +19,7 @@ import (
 	"github.com/docker/docker/api/types/filters"
 	"github.com/docker/docker/api/types/swarm"
 	"github.com/docker/docker/client"
+	"github.com/go-gomail/gomail"
 	"github.com/gofrs/flock"
 	"github.com/kelseyhightower/envconfig"
 	"github.com/leekchan/timeutil"
@@ -37,6 +39,15 @@ func main() {
 		panic(err)
 	}
 
+	defer func() {
+		if err := recover(); err != nil {
+			if e, ok := err.(error); ok && strings.Contains(e.Error(), msgBackupFailed) {
+				os.Exit(1)
+			}
+			panic(err)
+		}
+	}()
+
 	s.must(func() error {
 		restartContainers, err := s.stopContainers()
 		defer func() {
@@ -48,9 +59,14 @@
 		return s.takeBackup()
 	}())
 
-	s.must(s.encryptBackup())
-	s.must(s.copyBackup())
-	s.must(s.removeArtifacts())
+	s.must(func() error {
+		defer func() {
+			s.must(s.removeArtifacts())
+		}()
+		s.must(s.encryptBackup())
+		return s.copyBackup()
+	}())
 
 	s.must(s.pruneOldBackups())
 	s.logger.Info("Finished running backup tasks.")
 }
@@ -61,9 +77,11 @@ type script struct {
 	cli    *client.Client
 	mc     *minio.Client
 	logger *logrus.Logger
+	hooks  []hook
 
 	start  time.Time
 	file   string
+	output *bytes.Buffer
 
 	c *config
 }
@@ -71,6 +89,7 @@ type script struct {
 type config struct {
 	BackupSources       string        `split_words:"true" default:"/backup"`
 	BackupFilename      string        `split_words:"true" default:"backup-%Y-%m-%dT%H-%M-%S.tar.gz"`
+	BackupLatestSymlink string        `split_words:"true"`
 	BackupArchive       string        `split_words:"true" default:"/archive"`
 	BackupRetentionDays int32         `split_words:"true" default:"-1"`
 	BackupPruningLeeway time.Duration `split_words:"true" default:"1m"`
@@ -82,23 +101,34 @@ type config struct {
 	AwsEndpointInsecure bool          `split_words:"true"`
 	AwsAccessKeyID      string        `envconfig:"AWS_ACCESS_KEY_ID"`
 	AwsSecretAccessKey  string        `split_words:"true"`
+	AwsIamRoleEndpoint  string        `split_words:"true"`
 	GpgPassphrase       string        `split_words:"true"`
+	EmailNotificationRecipient string `split_words:"true"`
+	EmailNotificationSender    string `split_words:"true" default:"noreply@nohost"`
+	EmailSMTPHost              string `envconfig:"EMAIL_SMTP_HOST"`
+	EmailSMTPPort              int    `envconfig:"EMAIL_SMTP_PORT" default:"587"`
+	EmailSMTPUsername          string `envconfig:"EMAIL_SMTP_USERNAME"`
+	EmailSMTPPassword          string `envconfig:"EMAIL_SMTP_PASSWORD"`
 }
 
+var msgBackupFailed = "backup run failed"
+
 // newScript creates all resources needed for the script to perform actions against
 // remote resources like the Docker engine or remote storage locations. All
 // reading from env vars or other configuration sources is expected to happen
 // in this method.
 func newScript() (*script, error) {
+	stdOut, logBuffer := buffer(os.Stdout)
 	s := &script{
 		c: &config{},
 		logger: &logrus.Logger{
-			Out:       os.Stdout,
+			Out:       stdOut,
 			Formatter: new(logrus.TextFormatter),
 			Hooks:     make(logrus.LevelHooks),
 			Level:     logrus.InfoLevel,
 		},
 		start:  time.Now(),
+		output: logBuffer,
 	}
 
 	if err := envconfig.Process("", s.c); err != nil {
@@ -117,20 +147,66 @@
 	}
 
 	if s.c.AwsS3BucketName != "" {
-		mc, err := minio.New(s.c.AwsEndpoint, &minio.Options{
-			Creds: credentials.NewStaticV4(
-				s.c.AwsAccessKeyID,
-				s.c.AwsSecretAccessKey,
-				"",
-			),
-			Secure: !s.c.AwsEndpointInsecure && s.c.AwsEndpointProto == "https",
-		})
+		var creds *credentials.Credentials
+		if s.c.AwsAccessKeyID != "" && s.c.AwsSecretAccessKey != "" {
+			creds = credentials.NewStaticV4(
+				s.c.AwsAccessKeyID,
+				s.c.AwsSecretAccessKey,
+				"",
+			)
+		} else if s.c.AwsIamRoleEndpoint != "" {
+			creds = credentials.NewIAM(s.c.AwsIamRoleEndpoint)
+		} else {
+			return nil, errors.New("newScript: AWS_S3_BUCKET_NAME is defined, but no credentials were provided")
+		}
+
+		options := minio.Options{
+			Creds:  creds,
+			Secure: s.c.AwsEndpointProto == "https",
+		}
+
+		if s.c.AwsEndpointInsecure {
+			if !options.Secure {
+				return nil, errors.New("newScript: AWS_ENDPOINT_INSECURE = true is only meaningful for https")
+			}
+
+			transport, err := minio.DefaultTransport(true)
+			if err != nil {
+				return nil, fmt.Errorf("newScript: failed to create default minio transport")
+			}
+			transport.TLSClientConfig.InsecureSkipVerify = true
+			options.Transport = transport
+		}
+
+		mc, err := minio.New(s.c.AwsEndpoint, &options)
 		if err != nil {
 			return nil, fmt.Errorf("newScript: error setting up minio client: %w", err)
 		}
 		s.mc = mc
 	}
 
+	if s.c.EmailNotificationRecipient != "" {
+		s.hooks = append(s.hooks, hook{hookLevelFailure, func(err error, start time.Time, logOutput string) error {
+			mailer := gomail.NewDialer(
+				s.c.EmailSMTPHost, s.c.EmailSMTPPort, s.c.EmailSMTPUsername, s.c.EmailSMTPPassword,
+			)
+			subject := fmt.Sprintf(
+				"Failure running docker-volume-backup at %s", start.Format(time.RFC3339),
+			)
+			body := fmt.Sprintf(
+				"Running docker-volume-backup failed with error: %s\n\nLog output of the failed run was:\n\n%s\n", err, logOutput,
+			)
+
+			message := gomail.NewMessage()
+			message.SetHeader("From", s.c.EmailNotificationSender)
+			message.SetHeader("To", s.c.EmailNotificationRecipient)
+			message.SetHeader("Subject", subject)
+			message.SetBody("text/plain", body)
+			return mailer.DialAndSend(message)
+		}})
+	}
+
 	return s, nil
 }
@@ -320,16 +396,33 @@ func (s *script) copyBackup() error {
 			return fmt.Errorf("copyBackup: error copying file to local archive: %w", err)
 		}
 		s.logger.Infof("Stored copy of backup `%s` in local archive `%s`.", s.file, s.c.BackupArchive)
+		if s.c.BackupLatestSymlink != "" {
+			symlink := path.Join(s.c.BackupArchive, s.c.BackupLatestSymlink)
+			if _, err := os.Lstat(symlink); err == nil {
+				os.Remove(symlink)
+			}
+			if err := os.Symlink(name, symlink); err != nil {
+				return fmt.Errorf("copyBackup: error creating latest symlink: %w", err)
+			}
+			s.logger.Infof("Created/Updated symlink `%s` for latest backup.", s.c.BackupLatestSymlink)
+		}
 	}
 	return nil
 }
 
 // removeArtifacts removes the backup file from disk.
 func (s *script) removeArtifacts() error {
-	if err := os.Remove(s.file); err != nil {
-		return fmt.Errorf("removeArtifacts: error removing file: %w", err)
+	_, err := os.Stat(s.file)
+	if err != nil {
+		if os.IsNotExist(err) {
+			return nil
+		}
+		return fmt.Errorf("removeArtifacts: error calling stat on file %s: %w", s.file, err)
 	}
-	s.logger.Info("Removed local artifacts.")
+
+	if err := os.Remove(s.file); err != nil {
+		return fmt.Errorf("removeArtifacts: error removing file %s: %w", s.file, err)
+	}
+	s.logger.Infof("Removed local artifacts %s.", s.file)
 	return nil
 }
@@ -410,15 +503,35 @@ func (s *script) pruneOldBackups() error {
 	}
 
 	if _, err := os.Stat(s.c.BackupArchive); !os.IsNotExist(err) {
-		candidates, err := filepath.Glob(
-			path.Join(s.c.BackupArchive, fmt.Sprintf("%s*", s.c.BackupPruningPrefix)),
+		globPattern := path.Join(
+			s.c.BackupArchive,
+			fmt.Sprintf("%s*", s.c.BackupPruningPrefix),
 		)
+		globMatches, err := filepath.Glob(globPattern)
 		if err != nil {
 			return fmt.Errorf(
-				"pruneOldBackups: error looking up matching files, starting with: %w", err,
+				"pruneOldBackups: error looking up matching files using pattern %s: %w",
+				globPattern,
+				err,
 			)
 		}
+
+		var candidates []string
+		for _, candidate := range globMatches {
+			fi, err := os.Lstat(candidate)
+			if err != nil {
+				return fmt.Errorf(
+					"pruneOldBackups: error calling Lstat on file %s: %w",
+					candidate,
+					err,
+				)
+			}
+
+			if fi.Mode()&os.ModeSymlink != os.ModeSymlink {
+				candidates = append(candidates, candidate)
+			}
+		}
+
 		var matches []string
 		for _, candidate := range candidates {
 			fi, err := os.Stat(candidate)
@@ -429,7 +542,6 @@
 					err,
 				)
 			}
-
 			if fi.ModTime().Before(deadline) {
 				matches = append(matches, candidate)
 			}
@@ -437,8 +549,8 @@
 		if len(matches) != 0 && len(matches) != len(candidates) {
 			var removeErrors []error
-			for _, candidate := range matches {
-				if err := os.Remove(candidate); err != nil {
+			for _, match := range matches {
+				if err := os.Remove(match); err != nil {
 					removeErrors = append(removeErrors, err)
 				}
 			}
@@ -468,11 +580,35 @@
 	return nil
 }
 
-// must exits the script run non-zero and prematurely in case the given error
-// is non-nil.
+// runHooks runs all hooks that have been registered using the
+// given level. In case executing a hook returns an error, the following
+// hooks will still be run before the function returns an error.
+func (s *script) runHooks(err error, targetLevel string) error {
+	var actionErrors []error
+	for _, hook := range s.hooks {
+		if hook.level != targetLevel {
+			continue
+		}
+		if err := hook.action(err, s.start, s.output.String()); err != nil {
+			actionErrors = append(actionErrors, err)
+		}
+	}
+	if len(actionErrors) != 0 {
+		return join(actionErrors...)
+	}
+	return nil
+}
+
+// must exits the script run prematurely in case the given error
+// is non-nil. If failure hooks have been registered on the script object, they
+// will be called, passing the failure and previous log output.
 func (s *script) must(err error) {
 	if err != nil {
-		s.logger.Fatalf("Fatal error running backup: %s", err)
+		s.logger.Errorf("Fatal error running backup: %s", err)
+		if hookErr := s.runHooks(err, hookLevelFailure); hookErr != nil {
+			s.logger.Errorf("An error occurred calling the registered failure hooks: %s", hookErr)
+		}
+		panic(errors.New(msgBackupFailed))
 	}
 }
@@ -526,3 +662,33 @@ func join(errs ...error) error {
 	}
 	return errors.New("[" + strings.Join(msgs, ", ") + "]")
 }
+
+// buffer takes an io.Writer and returns a wrapped version of the
+// writer that writes to both the original target as well as the returned buffer
+func buffer(w io.Writer) (io.Writer, *bytes.Buffer) {
+	buffering := &bufferingWriter{buf: bytes.Buffer{}, writer: w}
+	return buffering, &buffering.buf
+}
+
+type bufferingWriter struct {
+	buf    bytes.Buffer
+	writer io.Writer
+}
+
+func (b *bufferingWriter) Write(p []byte) (n int, err error) {
+	if n, err := b.buf.Write(p); err != nil {
+		return n, fmt.Errorf("bufferingWriter: error writing to buffer: %w", err)
+	}
+	return b.writer.Write(p)
+}
+
+// hook contains a queued action that can be triggered when the script
+// reaches a certain point (e.g. unsuccessful backup)
+type hook struct {
+	level  string
+	action func(err error, start time.Time, logOutput string) error
+}
+
+const (
+	hookLevelFailure = "failure"
+)
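The notification plumbing in this file fits together as follows: `buffer` tees every log write into an in-memory buffer alongside stdout, `must` runs the registered failure hooks (which receive that buffered output as the email body) before panicking with `msgBackupFailed`, and the deferred `recover` in `main` turns that panic into a plain `os.Exit(1)`. A stripped-down, self-contained sketch of the tee-writer part, independent of the repository's types:

```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"os"
)

// teeWriter duplicates every write to an underlying writer and an
// in-memory buffer, keeping log output visible on stdout while also
// capturing it for a later failure notification.
type teeWriter struct {
	buf    bytes.Buffer
	writer io.Writer
}

func (t *teeWriter) Write(p []byte) (int, error) {
	if n, err := t.buf.Write(p); err != nil {
		return n, err
	}
	return t.writer.Write(p)
}

func main() {
	tee := &teeWriter{writer: os.Stdout}
	fmt.Fprintln(tee, "Stored copy of backup in local archive.")
	fmt.Fprintln(tee, "Fatal error running backup: exit status 1")

	// On failure, the captured output would become the notification body.
	fmt.Printf("--- captured log output ---\n%s", tee.buf.String())
}
```

The standard library's `io.MultiWriter` would cover the same fan-out; a dedicated type mainly keeps the buffer and its target writer paired in one value, which is what the `script` struct stores in its `output` field.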

go.mod

@@ -20,6 +20,7 @@ require (
 	github.com/docker/go-connections v0.4.0 // indirect
 	github.com/docker/go-units v0.4.0 // indirect
 	github.com/dustin/go-humanize v1.0.0 // indirect
+	github.com/go-gomail/gomail v0.0.0-20160411212932-81ebce5c23df // indirect
 	github.com/gogo/protobuf v1.3.2 // indirect
 	github.com/golang/protobuf v1.5.0 // indirect
 	github.com/google/uuid v1.2.0 // indirect
@@ -41,5 +42,6 @@ require (
 	google.golang.org/genproto v0.0.0-20201110150050-8816d57aaa9a // indirect
 	google.golang.org/grpc v1.33.2 // indirect
 	google.golang.org/protobuf v1.26.0 // indirect
+	gopkg.in/alexcesaro/quotedprintable.v3 v3.0.0-20150716171945-2caba252f4dc // indirect
 	gopkg.in/ini.v1 v1.57.0 // indirect
 )

go.sum

@@ -254,6 +254,8 @@ github.com/ghodss/yaml v1.0.0/go.mod h1:4dBDuWmgqj2HViK6kFavaiC9ZROes6MMH2rRYeME
 github.com/go-gl/glfw v0.0.0-20190409004039-e6da0acd62b1/go.mod h1:vR7hzQXu2zJy9AVAgeJqvqgH9Q5CA+iKCZ2gyEVpxRU=
 github.com/go-gl/glfw/v3.3/glfw v0.0.0-20191125211704-12ad95a8df72/go.mod h1:tQ2UAYgL5IevRw8kRxooKSPJfGvJ9fJQFa0TUsXzTg8=
 github.com/go-gl/glfw/v3.3/glfw v0.0.0-20200222043503-6f7a984d4dc4/go.mod h1:tQ2UAYgL5IevRw8kRxooKSPJfGvJ9fJQFa0TUsXzTg8=
+github.com/go-gomail/gomail v0.0.0-20160411212932-81ebce5c23df h1:Bao6dhmbTA1KFVxmJ6nBoMuOJit2yjEgLJpIMYpop0E=
+github.com/go-gomail/gomail v0.0.0-20160411212932-81ebce5c23df/go.mod h1:GJr+FCSXshIwgHBtLglIg9M2l2kQSi6QjVAngtzI08Y=
 github.com/go-ini/ini v1.25.4/go.mod h1:ByCAeIL28uOIIG0E3PJtZPDL8WnHpFKFOtgjp+3Ies8=
 github.com/go-kit/kit v0.8.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
 github.com/go-kit/kit v0.9.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
@@ -908,6 +910,8 @@ google.golang.org/protobuf v1.26.0 h1:bxAC2xTBsZGibn2RTntX0oH50xLsqy1OxA9tTL3p/l
 google.golang.org/protobuf v1.26.0/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc=
 gopkg.in/airbrake/gobrake.v2 v2.0.9/go.mod h1:/h5ZAUhDkGaJfjzjKLSjv6zCL6O0LLBxU4K+aSYdM/U=
 gopkg.in/alecthomas/kingpin.v2 v2.2.6/go.mod h1:FMv+mEhP44yOT+4EoQTLFTRgOQ1FBLkstjWtayDeSgw=
+gopkg.in/alexcesaro/quotedprintable.v3 v3.0.0-20150716171945-2caba252f4dc h1:2gGKlE2+asNV9m7xrywl36YYNnBG5ZQ0r/BOOxqPpmk=
+gopkg.in/alexcesaro/quotedprintable.v3 v3.0.0-20150716171945-2caba252f4dc/go.mod h1:m7x9LTH6d71AHyAX77c9yqWCCa3UKHcVEj9y7hAtKDk=
 gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
 gopkg.in/check.v1 v1.0.0-20141024133853-64131543e789/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
 gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=

(integration test script)

@@ -29,8 +29,7 @@ docker run -d \
 
 sleep 10
 
-docker run -d \
-  --name backup \
+docker run --rm \
   --network test_network \
   -v app_data:/backup/app_data \
   -v /var/run/docker.sock:/var/run/docker.sock \
@@ -40,18 +39,16 @@ docker run -d \
   --env AWS_ENDPOINT_PROTO=http \
   --env AWS_S3_BUCKET_NAME=backup \
   --env BACKUP_FILENAME=test.tar.gz \
-  --env BACKUP_CRON_EXPRESSION="0 0 5 31 2 ?" \
+  --entrypoint backup \
   offen/docker-volume-backup:$TEST_VERSION
 
-docker exec backup backup
-
 docker run --rm -it \
   -v backup_data:/data alpine \
   ash -c 'tar -xvf /data/backup/test.tar.gz && test -f /backup/app_data/offen.db'
 
 echo "[TEST:PASS] Found relevant files in untared backup."
 
-if [ "$(docker ps -q | wc -l)" != "3" ]; then
+if [ "$(docker ps -q | wc -l)" != "2" ]; then
   echo "[TEST:FAIL] Expected all containers to be running post backup, instead seen:"
   docker ps
   exit 1

(integration test compose file)

@@ -24,6 +24,7 @@ services:
       AWS_ENDPOINT_PROTO: http
       AWS_S3_BUCKET_NAME: backup
       BACKUP_FILENAME: test.tar.gz
+      BACKUP_LATEST_SYMLINK: test.latest.tar.gz.gpg
       BACKUP_CRON_EXPRESSION: 0 0 5 31 2 ?
       BACKUP_RETENTION_DAYS: ${BACKUP_RETENTION_DAYS:-7}
       BACKUP_PRUNING_LEEWAY: 5s

(integration test script)

@@ -18,6 +18,7 @@ docker run --rm -it \
 
 echo "[TEST:PASS] Found relevant files in untared remote backup."
 
+test -L ./local/test.latest.tar.gz.gpg
 echo 1234secret | gpg -d --yes --passphrase-fd 0 ./local/test.tar.gz.gpg > ./local/decrypted.tar.gz
 tar -xf ./local/decrypted.tar.gz -C /tmp && test -f /tmp/backup/app_data/offen.db
 rm ./local/decrypted.tar.gz