Mirror of https://github.com/offen/docker-volume-backup.git, synced 2025-12-05 17:18:02 +01:00

Compare commits (12 commits)
Commits in this comparison:

- 789fc656e8
- c59b40f2df
- cff418e735
- d7ccdd79fc
- bd73a2b5e4
- 6cf5cf47e7
- 53c257065e
- 184b7a1e18
- 69a94f226b
- 160a47e90b
- 59660ec5c7
- af3e69b7a8
README.md
@@ -3,11 +3,13 @@
 Backup Docker volumes locally or to any S3 compatible storage.
 
 The [offen/docker-volume-backup](https://hub.docker.com/r/offen/docker-volume-backup) Docker image can be used as a lightweight (below 15MB) sidecar container to an existing Docker setup.
-It handles __recurring or one-off backups of Docker volumes__ to a __local directory__ or __any S3 compatible storage__ (or both), and __rotates away old backups__ if configured. It also supports __encrypting your backups using GPG__.
+It handles __recurring or one-off backups of Docker volumes__ to a __local directory__ or __any S3 compatible storage__ (or both), and __rotates away old backups__ if configured. It also supports __encrypting your backups using GPG__ and __sending notifications for failed backup runs__.
 
 <!-- MarkdownTOC -->
 
 - [Quickstart](#quickstart)
+  - [Recurring backups in a compose setup](#recurring-backups-in-a-compose-setup)
+  - [One-off backups using Docker CLI](#one-off-backups-using-docker-cli)
 - [Configuration reference](#configuration-reference)
 - [How to](#how-to)
   - [Stopping containers during backup](#stopping-containers-during-backup)
@@ -38,6 +40,8 @@ Code and documentation for `v1` versions are found on [this branch][v1-branch].
 
 ## Quickstart
 
+### Recurring backups in a compose setup
+
 Add a `backup` service to your compose setup and mount the volumes you would like to see backed up:
 
 ```yml
@@ -78,6 +82,22 @@ volumes:
   data:
 ```
 
+### One-off backups using Docker CLI
+
+To run a one time backup, mount the volume you would like to see backed up into a container and run the `backup` command:
+
+```console
+docker run --rm \
+  -v data:/backup/data \
+  --env AWS_ACCESS_KEY_ID="<xxx>" \
+  --env AWS_SECRET_ACCESS_KEY="<xxx>" \
+  --env AWS_S3_BUCKET_NAME="<xxx>" \
+  --entrypoint backup \
+  offen/docker-volume-backup:latest
+```
+
+Alternatively, pass a `--env-file` in order to use a full config as described below.
+
 ## Configuration reference
 
 Backup targets, schedule and retention are configured in environment variables.
@@ -100,6 +120,11 @@ You can populate below template according to your requirements and use it as you
 
 # BACKUP_FILENAME="backup-%Y-%m-%dT%H-%M-%S.tar.gz"
 
+# When storing local backups, a symlink to the latest backup can be created
+# in case a value is given for this key. This has no effect on remote backups.
+
+# BACKUP_LATEST_SYMLINK="backup.latest.tar.gz"
+
 ########### BACKUP STORAGE
 
 # The name of the remote bucket that should be used for storing backups. If
@@ -114,6 +139,13 @@ You can populate below template according to your requirements and use it as you
 # AWS_ACCESS_KEY_ID="<xxx>"
 # AWS_SECRET_ACCESS_KEY="<xxx>"
 
+# Instead of providing static credentials, you can also use IAM instance profiles
+# or similar to provide authentication. Some possible configuration options on AWS:
+# - EC2: http://169.254.169.254
+# - ECS: http://169.254.170.2
+
+# AWS_IAM_ROLE_ENDPOINT="http://169.254.169.254"
+
 # This is the FQDN of your storage server, e.g. `storage.example.com`.
 # Do not set this when working against AWS S3 (the default value is
 # `s3.amazonaws.com`). If you need to set a specific (non-https) protocol, you
@@ -210,20 +242,12 @@ You can populate below template according to your requirements and use it as you
 
 # EMAIL_NOTIFICATION_SENDER="no-reply@example.com"
 
-# The hostname of your SMTP server.
+# Configuration and credentials for the SMTP server to be used.
+# EMAIL_SMTP_PORT defaults to 587.
 
 # EMAIL_SMTP_HOST="posteo.de"
 
-# The SMTP password.
-
 # EMAIL_SMTP_PASSWORD="<xxx>"
 
-# The SMTP username.
-
 # EMAIL_SMTP_USERNAME="no-reply@example.com"
 
-The port used when communicating with the server. Defaults to 587.
-
 # EMAIL_SMTP_PORT="<port>"
 ```
@@ -418,6 +442,9 @@ services:
   # ... define other services using the `data` volume here
   backup:
     image: offen/docker-volume-backup:latest
+    environment:
+      BACKUP_FILENAME: backup-%Y-%m-%dT%H-%M-%S.tar.gz
+      BACKUP_LATEST_SYMLINK: backup-latest.tar.gz
     volumes:
       - data:/backup/my-app-backup:ro
       - /var/run/docker.sock:/var/run/docker.sock:ro
@@ -555,7 +582,7 @@ volumes:
 
 ## Differences to `futurice/docker-volume-backup`
 
-This image is heavily inspired by the `futurice/docker-volume-backup`. We decided to publish this image as a simpler and more lightweight alternative because of the following requirements:
+This image is heavily inspired by `futurice/docker-volume-backup`. We decided to publish this image as a simpler and more lightweight alternative because of the following requirements:
 
 - The original image is based on `ubuntu` and requires additional tools, making it heavy.
   This version is roughly 1/25 in compressed size (it's ~12MB).
@@ -39,6 +39,15 @@ func main() {
 		panic(err)
 	}
 
+	defer func() {
+		if err := recover(); err != nil {
+			if e, ok := err.(error); ok && strings.Contains(e.Error(), msgBackupFailed) {
+				os.Exit(1)
+			}
+			panic(err)
+		}
+	}()
+
 	s.must(func() error {
 		restartContainers, err := s.stopContainers()
 		defer func() {
@@ -50,9 +59,14 @@ func main() {
 		return s.takeBackup()
 	}())
 
-	s.must(s.encryptBackup())
-	s.must(s.copyBackup())
-	s.must(s.removeArtifacts())
+	s.must(func() error {
+		defer func() {
+			s.must(s.removeArtifacts())
+		}()
+		s.must(s.encryptBackup())
+		return s.copyBackup()
+	}())
 
 	s.must(s.pruneOldBackups())
 	s.logger.Info("Finished running backup tasks.")
 }
@@ -60,10 +74,10 @@ func main() {
 // script holds all the stateful information required to orchestrate a
 // single backup run.
 type script struct {
 	cli    *client.Client
 	mc     *minio.Client
 	logger *logrus.Logger
-	errorHooks []errorHook
+	hooks  []hook
 
 	start time.Time
 	file  string
@@ -72,11 +86,10 @@ type script struct {
 	c *config
 }
 
-type errorHook func(err error, start time.Time, logOutput string) error
-
 type config struct {
 	BackupSources       string `split_words:"true" default:"/backup"`
 	BackupFilename      string `split_words:"true" default:"backup-%Y-%m-%dT%H-%M-%S.tar.gz"`
+	BackupLatestSymlink string `split_words:"true"`
 	BackupArchive       string `split_words:"true" default:"/archive"`
 	BackupRetentionDays int32 `split_words:"true" default:"-1"`
 	BackupPruningLeeway time.Duration `split_words:"true" default:"1m"`
@@ -88,6 +101,7 @@ type config struct {
 	AwsEndpointInsecure bool   `split_words:"true"`
 	AwsAccessKeyID      string `envconfig:"AWS_ACCESS_KEY_ID"`
 	AwsSecretAccessKey  string `split_words:"true"`
+	AwsIamRoleEndpoint  string `split_words:"true"`
 	GpgPassphrase       string `split_words:"true"`
 	EmailNotificationRecipient string `split_words:"true"`
 	EmailNotificationSender    string `split_words:"true" default:"noreply@nohost"`
@@ -97,6 +111,8 @@ type config struct {
 	EmailSMTPPassword string `envconfig:"EMAIL_SMTP_PASSWORD"`
 }
 
+var msgBackupFailed = "backup run failed"
+
 // newScript creates all resources needed for the script to perform actions against
 // remote resources like the Docker engine or remote storage locations. All
 // reading from env vars or other configuration sources is expected to happen
@@ -131,12 +147,21 @@ func newScript() (*script, error) {
 	}
 
 	if s.c.AwsS3BucketName != "" {
-		mc, err := minio.New(s.c.AwsEndpoint, &minio.Options{
-			Creds: credentials.NewStaticV4(
-				s.c.AwsAccessKeyID,
-				s.c.AwsSecretAccessKey,
-				"",
-			),
+		var creds *credentials.Credentials
+		if s.c.AwsAccessKeyID != "" && s.c.AwsSecretAccessKey != "" {
+			creds = credentials.NewStaticV4(
+				s.c.AwsAccessKeyID,
+				s.c.AwsSecretAccessKey,
+				"",
+			)
+		} else if s.c.AwsIamRoleEndpoint != "" {
+			creds = credentials.NewIAM(s.c.AwsIamRoleEndpoint)
+		} else {
+			return nil, errors.New("newScript: AWS_S3_BUCKET_NAME is defined, but no credentials were provided")
+		}
+
+		mc, err := minio.New(s.c.AwsEndpoint, &minio.Options{
+			Creds:  creds,
 			Secure: !s.c.AwsEndpointInsecure && s.c.AwsEndpointProto == "https",
 		})
 		if err != nil {
@@ -146,7 +171,7 @@ func newScript() (*script, error) {
 	}
 
 	if s.c.EmailNotificationRecipient != "" {
-		s.errorHooks = append(s.errorHooks, func(err error, start time.Time, logOutput string) error {
+		s.hooks = append(s.hooks, hook{hookLevelFailure, func(err error, start time.Time, logOutput string) error {
 			mailer := gomail.NewDialer(
 				s.c.EmailSMTPHost, s.c.EmailSMTPPort, s.c.EmailSMTPUsername, s.c.EmailSMTPPassword,
 			)
@@ -155,7 +180,7 @@ func newScript() (*script, error) {
 				"Failure running docker-volume-backup at %s", start.Format(time.RFC3339),
 			)
 			body := fmt.Sprintf(
-				"Running docker-volume-backup failed with error: %s\n\nLog output before the error occurred:\n\n%s\n", err, logOutput,
+				"Running docker-volume-backup failed with error: %s\n\nLog output of the failed run was:\n\n%s\n", err, logOutput,
 			)
 
 			message := gomail.NewMessage()
@@ -164,7 +189,7 @@ func newScript() (*script, error) {
 			message.SetHeader("Subject", subject)
 			message.SetBody("text/plain", body)
 			return mailer.DialAndSend(message)
-		})
+		}})
 	}
 
 	return s, nil
@@ -356,16 +381,33 @@ func (s *script) copyBackup() error {
 			return fmt.Errorf("copyBackup: error copying file to local archive: %w", err)
 		}
 		s.logger.Infof("Stored copy of backup `%s` in local archive `%s`.", s.file, s.c.BackupArchive)
+		if s.c.BackupLatestSymlink != "" {
+			symlink := path.Join(s.c.BackupArchive, s.c.BackupLatestSymlink)
+			if _, err := os.Lstat(symlink); err == nil {
+				os.Remove(symlink)
+			}
+			if err := os.Symlink(name, symlink); err != nil {
+				return fmt.Errorf("copyBackup: error creating latest symlink: %w", err)
+			}
+			s.logger.Infof("Created/Updated symlink `%s` for latest backup.", s.c.BackupLatestSymlink)
+		}
 	}
 	return nil
 }
 
 // removeArtifacts removes the backup file from disk.
 func (s *script) removeArtifacts() error {
-	if err := os.Remove(s.file); err != nil {
-		return fmt.Errorf("removeArtifacts: error removing file: %w", err)
+	_, err := os.Stat(s.file)
+	if err != nil {
+		if os.IsNotExist(err) {
+			return nil
+		}
+		return fmt.Errorf("removeArtifacts: error calling stat on file %s: %w", s.file, err)
 	}
-	s.logger.Info("Removed local artifacts.")
+	if err := os.Remove(s.file); err != nil {
+		return fmt.Errorf("removeArtifacts: error removing file %s: %w", s.file, err)
+	}
+	s.logger.Infof("Removed local artifacts %s.", s.file)
 	return nil
 }
@@ -466,7 +508,7 @@ func (s *script) pruneOldBackups() error {
 			)
 		}
 
-		if fi.ModTime().Before(deadline) {
+		if fi.Mode() != os.ModeSymlink && fi.ModTime().Before(deadline) {
 			matches = append(matches, candidate)
 		}
 	}
@@ -504,17 +546,35 @@ func (s *script) pruneOldBackups() error {
 	return nil
 }
 
-// must exits the script run non-zero and prematurely in case the given error
-// is non-nil. If error hooks are present on the script object, they
+// runHooks runs all hooks that have been registered using the
+// given level. In case executing a hook returns an error, the following
+// hooks will still be run before the function returns an error.
+func (s *script) runHooks(err error, targetLevel string) error {
+	var actionErrors []error
+	for _, hook := range s.hooks {
+		if hook.level != targetLevel {
+			continue
+		}
+		if err := hook.action(err, s.start, s.output.String()); err != nil {
+			actionErrors = append(actionErrors, err)
+		}
+	}
+	if len(actionErrors) != 0 {
+		return join(actionErrors...)
+	}
+	return nil
+}
+
+// must exits the script run prematurely in case the given error
+// is non-nil. If failure hooks have been registered on the script object, they
 // will be called, passing the failure and previous log output.
 func (s *script) must(err error) {
 	if err != nil {
-		for _, hook := range s.errorHooks {
-			if hookErr := hook(err, s.start, s.output.String()); hookErr != nil {
-				s.logger.Errorf("An error occurred calling an error hook: %s", hookErr)
-			}
-		}
-		s.logger.Fatalf("Fatal error running backup: %s", err)
+		s.logger.Errorf("Fatal error running backup: %s", err)
+		if hookErr := s.runHooks(err, hookLevelFailure); hookErr != nil {
+			s.logger.Errorf("An error occurred calling the registered failure hooks: %s", hookErr)
+		}
+		panic(errors.New(msgBackupFailed))
 	}
 }
@@ -587,3 +647,14 @@ func (b *bufferingWriter) Write(p []byte) (n int, err error) {
 	}
 	return b.writer.Write(p)
 }
+
+// hook contains a queued action that can be triggered when the script
+// reaches a certain point (e.g. unsuccessful backup)
+type hook struct {
+	level  string
+	action func(err error, start time.Time, logOutput string) error
+}
+
+const (
+	hookLevelFailure = "failure"
+)
@@ -29,8 +29,7 @@ docker run -d \
 
 sleep 10
 
-docker run -d \
-  --name backup \
+docker run --rm \
   --network test_network \
   -v app_data:/backup/app_data \
   -v /var/run/docker.sock:/var/run/docker.sock \
@@ -40,18 +39,16 @@ docker run -d \
   --env AWS_ENDPOINT_PROTO=http \
   --env AWS_S3_BUCKET_NAME=backup \
   --env BACKUP_FILENAME=test.tar.gz \
-  --env BACKUP_CRON_EXPRESSION="0 0 5 31 2 ?" \
+  --entrypoint backup \
   offen/docker-volume-backup:$TEST_VERSION
 
-docker exec backup backup
-
 docker run --rm -it \
   -v backup_data:/data alpine \
   ash -c 'tar -xvf /data/backup/test.tar.gz && test -f /backup/app_data/offen.db'
 
 echo "[TEST:PASS] Found relevant files in untared backup."
 
-if [ "$(docker ps -q | wc -l)" != "3" ]; then
+if [ "$(docker ps -q | wc -l)" != "2" ]; then
   echo "[TEST:FAIL] Expected all containers to be running post backup, instead seen:"
   docker ps
   exit 1
@@ -24,6 +24,7 @@ services:
       AWS_ENDPOINT_PROTO: http
       AWS_S3_BUCKET_NAME: backup
       BACKUP_FILENAME: test.tar.gz
+      BACKUP_LATEST_SYMLINK: test.latest.tar.gz.gpg
       BACKUP_CRON_EXPRESSION: 0 0 5 31 2 ?
       BACKUP_RETENTION_DAYS: ${BACKUP_RETENTION_DAYS:-7}
       BACKUP_PRUNING_LEEWAY: 5s
@@ -18,6 +18,7 @@ docker run --rm -it \
 
 echo "[TEST:PASS] Found relevant files in untared remote backup."
 
+test -L ./local/test.latest.tar.gz.gpg
 echo 1234secret | gpg -d --yes --passphrase-fd 0 ./local/test.tar.gz.gpg > ./local/decrypted.tar.gz
 tar -xf ./local/decrypted.tar.gz -C /tmp && test -f /tmp/backup/app_data/offen.db
 rm ./local/decrypted.tar.gz