Mirror of https://github.com/offen/docker-volume-backup.git (synced 2025-12-05 17:18:02 +01:00)

Compare commits: v2.0.0-alp...v2.0.0-rc. (31 commits)
Commits in this comparison:

6034e6a902
d0eca0a179
a0fe2cf42d
5334ff1a5a
e73256ad70
e0c4adc563
2469597848
b1c4bee85d
ec87bd27e7
f4f4fa9e74
7086c6e645
411a62a6c7
5a2bf48ec6
07b06cf0ba
4c80494433
7244725c5b
935de92f2e
d195e8967f
188c14c00f
da9458724f
435583168b
67499d776c
8c99ec0bdf
f2739b583e
78e4e3813b
4d9482a8b4
0c6ac05789
8b110fd789
efb52aa806
4c84674650
6fe81cdf2d
@@ -11,6 +11,10 @@ jobs:
           name: Build
           command: |
             docker build . -t offen/docker-volume-backup:canary
+      - run:
+          name: Install gnupg
+          command: |
+            sudo apt-get install -y gnupg
       - run:
           name: Run tests
           working_directory: ~/docker-volume-backup/test
NOTICE (22 lines changed)
@@ -1,22 +0,0 @@
-Copyright 2021 Offen Authors <hioffen@posteo.de>
-
-This project contains software that is Copyright (c) 2018 Futurice
-Licensed under the Terms of the MIT License:
-
-Permission is hereby granted, free of charge, to any person obtaining a copy
-of this software and associated documentation files (the "Software"), to deal
-in the Software without restriction, including without limitation the rights
-to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
-copies of the Software, and to permit persons to whom the Software is
-furnished to do so, subject to the following conditions:
-
-The above copyright notice and this permission notice shall be included in all
-copies or substantial portions of the Software.
-
-THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
-IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
-FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
-AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
-LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
-OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
-SOFTWARE.
@@ -2,7 +2,7 @@
 
 Backup Docker volumes locally or to any S3 compatible storage.
 
-The [offen/docker-volume-backup](https://hub.docker.com/r/offen/docker-volume-backup) Docker image can be used as a sidecar container to an existing Docker setup. It handles recurring backups of Docker volumes to a local directory or any S3 compatible storage (or both) and rotates away old backups if configured.
+The [offen/docker-volume-backup](https://hub.docker.com/r/offen/docker-volume-backup) Docker image can be used as a lightweight (below 15MB) sidecar container to an existing Docker setup. It handles __recurring or one-off backups of Docker volumes__ to a __local directory__ or __any S3 compatible storage__ (or both), and __rotates away old backups__ if configured. It also supports __encrypting your backups using GPG__.
 
 ## Configuration
 
@@ -79,7 +79,7 @@ AWS_S3_BUCKET_NAME="<xxx>"
 # that is expected to be bigger than the maximum difference of backups.
 # Valid values have a suffix of (s)econds, (m)inutes or (h)ours.
 
-# BACKUP_PRUNING_LEEWAY="10m"
+# BACKUP_PRUNING_LEEWAY="1m"
 
 # In case your target bucket or directory contains other files than the ones
 # managed by this container, you can limit the scope of rotation by setting
@@ -177,8 +177,8 @@ docker exec <container_ref> backup
 This image is heavily inspired by the `futurice/docker-volume-backup`. We decided to publish this image as a simpler and more lightweight alternative because of the following requirements:
 
 - The original image is based on `ubuntu` and additional tools, making it very heavy. This version is roughly 1/25 in compressed size (it's ~12MB).
-- The original image uses a shell script, when this is written in Go.
+- The original image uses a shell script, when this is written in Go, which makes it easier to extend and maintain (more verbose also).
 - The original image proposed to handle backup rotation through AWS S3 lifecycle policies. This image adds the option to rotate away old backups through the same command so this functionality can also be offered for non-AWS storage backends like MinIO. Local copies of backups can also be pruned once they reach a certain age.
-- InfluxDB specific functionality was removed.
+- InfluxDB specific functionality from the original image was removed.
 - `arm64` and `arm/v7` architectures are supported.
 - Docker in Swarm mode is supported.
@@ -4,26 +4,22 @@
 package main
 
 import (
-    "bytes"
     "context"
-    "errors"
     "fmt"
     "io"
-    "io/ioutil"
     "os"
-    "os/exec"
     "path"
     "path/filepath"
-    "strconv"
-    "strings"
     "time"
 
     "github.com/docker/docker/api/types"
     "github.com/docker/docker/api/types/filters"
     "github.com/docker/docker/api/types/swarm"
     "github.com/docker/docker/client"
-    "github.com/joho/godotenv"
-    minio "github.com/minio/minio-go/v7"
+    "github.com/gofrs/flock"
+    "github.com/kelseyhightower/envconfig"
+    "github.com/leekchan/timeutil"
+    "github.com/minio/minio-go/v7"
     "github.com/minio/minio-go/v7/pkg/credentials"
     "github.com/sirupsen/logrus"
     "github.com/walle/targz"
@@ -31,114 +27,133 @@ import (
 )
 
 func main() {
-    unlock := lock("/var/dockervolumebackup.lock")
+    unlock := lock("/var/lock/dockervolumebackup.lock")
     defer unlock()
 
-    s := &script{}
-    s.must(s.init())
-    s.must(s.stopContainersAndRun(s.takeBackup))
+    s, err := newScript()
+    if err != nil {
+        panic(err)
+    }
+
+    s.must(func() error {
+        restartContainers, err := s.stopContainers()
+        defer func() {
+            s.must(restartContainers())
+        }()
+        if err != nil {
+            return err
+        }
+        return s.takeBackup()
+    }())
+
     s.must(s.encryptBackup())
     s.must(s.copyBackup())
-    s.must(s.cleanBackup())
+    s.must(s.removeArtifacts())
     s.must(s.pruneOldBackups())
     s.logger.Info("Finished running backup tasks.")
 }
 
+// script holds all the stateful information required to orchestrate a
+// single backup run.
 type script struct {
     ctx    context.Context
     cli    *client.Client
     mc     *minio.Client
     logger *logrus.Logger
 
+    start time.Time
     file string
-    bucket string
-    archive string
-    sources string
-    passphrase string
+
+    c *config
 }
 
-// lock opens a lockfile at the given location, keeping it locked until the
-// caller invokes the returned release func. When invoked while the file is
-// still locked the function panics.
-func lock(lockfile string) func() error {
-    lf, err := os.OpenFile(lockfile, os.O_CREATE, os.ModeAppend)
-    if err != nil {
-        panic(err)
-    }
-    return func() error {
-        if err := lf.Close(); err != nil {
-            return fmt.Errorf("lock: error releasing file lock: %w", err)
-        }
-        if err := os.Remove(lockfile); err != nil {
-            return fmt.Errorf("lock: error removing lock file: %w", err)
-        }
-        return nil
-    }
-}
+type config struct {
+    BackupSources            string        `split_words:"true" default:"/backup"`
+    BackupFilename           string        `split_words:"true" default:"backup-%Y-%m-%dT%H-%M-%S.tar.gz"`
+    BackupArchive            string        `split_words:"true" default:"/archive"`
+    BackupRetentionDays      int32         `split_words:"true" default:"-1"`
+    BackupPruningLeeway      time.Duration `split_words:"true" default:"1m"`
+    BackupPruningPrefix      string        `split_words:"true"`
+    BackupStopContainerLabel string        `split_words:"true" default:"true"`
+    AwsS3BucketName          string        `split_words:"true"`
+    AwsEndpoint              string        `split_words:"true" default:"s3.amazonaws.com"`
+    AwsEndpointProto         string        `split_words:"true" default:"https"`
+    AwsEndpointInsecure      bool          `split_words:"true"`
+    AwsAccessKeyID           string        `envconfig:"AWS_ACCESS_KEY_ID"`
+    AwsSecretAccessKey       string        `split_words:"true"`
+    GpgPassphrase            string        `split_words:"true"`
+}
 
-// init creates all resources needed for the script to perform actions against
-// remote resources like the Docker engine or remote storage locations.
-func (s *script) init() error {
-    s.ctx = context.Background()
-    s.logger = logrus.New()
-    s.logger.SetOutput(os.Stdout)
-
-    if err := godotenv.Load("/etc/backup.env"); err != nil {
-        return fmt.Errorf("init: failed to load env file: %w", err)
-    }
+// newScript creates all resources needed for the script to perform actions against
+// remote resources like the Docker engine or remote storage locations. All
+// reading from env vars or other configuration sources is expected to happen
+// in this method.
+func newScript() (*script, error) {
+    s := &script{
+        c:   &config{},
+        ctx: context.Background(),
+        logger: &logrus.Logger{
+            Out:       os.Stdout,
+            Formatter: new(logrus.TextFormatter),
+            Hooks:     make(logrus.LevelHooks),
+            Level:     logrus.InfoLevel,
+        },
+        start: time.Now(),
+    }
+
+    if err := envconfig.Process("", s.c); err != nil {
+        return nil, fmt.Errorf("newScript: failed to process configuration values: %w", err)
+    }
+
+    s.file = path.Join("/tmp", s.c.BackupFilename)
 
     _, err := os.Stat("/var/run/docker.sock")
     if !os.IsNotExist(err) {
         cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
         if err != nil {
-            return fmt.Errorf("init: failed to create docker client")
+            return nil, fmt.Errorf("newScript: failed to create docker client")
         }
         s.cli = cli
     }
 
-    if bucket := os.Getenv("AWS_S3_BUCKET_NAME"); bucket != "" {
-        s.bucket = bucket
-        mc, err := minio.New(os.Getenv("AWS_ENDPOINT"), &minio.Options{
+    if s.c.AwsS3BucketName != "" {
+        mc, err := minio.New(s.c.AwsEndpoint, &minio.Options{
             Creds: credentials.NewStaticV4(
-                os.Getenv("AWS_ACCESS_KEY_ID"),
-                os.Getenv("AWS_SECRET_ACCESS_KEY"),
+                s.c.AwsAccessKeyID,
+                s.c.AwsSecretAccessKey,
                 "",
             ),
-            Secure: os.Getenv("AWS_ENDPOINT_INSECURE") == "" && os.Getenv("AWS_ENDPOINT_PROTO") == "https",
+            Secure: !s.c.AwsEndpointInsecure && s.c.AwsEndpointProto == "https",
         })
         if err != nil {
-            return fmt.Errorf("init: error setting up minio client: %w", err)
+            return nil, fmt.Errorf("newScript: error setting up minio client: %w", err)
         }
         s.mc = mc
     }
 
-    file := os.Getenv("BACKUP_FILENAME")
-    if file == "" {
-        return errors.New("init: BACKUP_FILENAME not given")
-    }
-    s.file = path.Join("/tmp", file)
-    s.archive = os.Getenv("BACKUP_ARCHIVE")
-    s.sources = os.Getenv("BACKUP_SOURCES")
-    s.passphrase = os.Getenv("GPG_PASSPHRASE")
-    return nil
+    return s, nil
 }
 
-// stopContainersAndRun stops all Docker containers that are marked as to being
-// stopped during the backup and runs the given thunk. After returning, it makes
-// sure containers are being restarted if required.
-func (s *script) stopContainersAndRun(thunk func() error) error {
+var noop = func() error { return nil }
+
+// stopContainers stops all Docker containers that are marked as to being
+// stopped during the backup and returns a function that can be called to
+// restart everything that has been stopped.
+func (s *script) stopContainers() (func() error, error) {
     if s.cli == nil {
-        return nil
+        return noop, nil
     }
 
     allContainers, err := s.cli.ContainerList(s.ctx, types.ContainerListOptions{
         Quiet: true,
     })
     if err != nil {
-        return fmt.Errorf("stopContainersAndRun: error querying for containers: %w", err)
+        return noop, fmt.Errorf("stopContainersAndRun: error querying for containers: %w", err)
     }
 
     containerLabel := fmt.Sprintf(
         "docker-volume-backup.stop-during-backup=%s",
-        os.Getenv("BACKUP_STOP_CONTAINER_LABEL"),
+        s.c.BackupStopContainerLabel,
     )
     containersToStop, err := s.cli.ContainerList(s.ctx, types.ContainerListOptions{
         Quiet: true,
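The new `config` struct above replaces the `godotenv` + `os.Getenv` approach: `envconfig.Process` derives an environment variable name from each field (the `split_words` tag turns `BackupPruningLeeway` into `BACKUP_PRUNING_LEEWAY`) and falls back to the `default` tag, parsed into the field's type, when the variable is unset. A minimal standalone sketch of that mechanism, using a hypothetical `demoConfig` for illustration:

```go
package main

import (
	"fmt"
	"log"
	"time"

	"github.com/kelseyhightower/envconfig"
)

// demoConfig mirrors the pattern used in the diff above: with split_words
// enabled, BackupPruningLeeway is read from BACKUP_PRUNING_LEEWAY, and the
// default tag is parsed (here into a time.Duration) when the env var is unset.
type demoConfig struct {
	BackupSources       string        `split_words:"true" default:"/backup"`
	BackupPruningLeeway time.Duration `split_words:"true" default:"1m"`
}

func main() {
	var c demoConfig
	// An empty prefix means variable names are derived from field names alone.
	if err := envconfig.Process("", &c); err != nil {
		log.Fatal(err)
	}
	fmt.Println(c.BackupSources, c.BackupPruningLeeway) // "/backup 1m0s" with no env set
}
```

This is also why the scheduled run no longer needs an env file sourced by cron: typed values (durations, ints, bools) are validated once, up front, in `newScript`.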
@@ -149,28 +164,39 @@ func (s *script) stopContainersAndRun(thunk func() error) error {
     })
 
     if err != nil {
-        return fmt.Errorf("stopContainersAndRun: error querying for containers to stop: %w", err)
+        return noop, fmt.Errorf("stopContainersAndRun: error querying for containers to stop: %w", err)
     }
 
+    if len(containersToStop) == 0 {
+        return noop, nil
+    }
+
     s.logger.Infof(
-        "Stopping %d containers labeled `%s` out of %d running containers.",
+        "Stopping %d container(s) labeled `%s` out of %d running container(s).",
         len(containersToStop),
         containerLabel,
         len(allContainers),
     )
 
     var stoppedContainers []types.Container
-    var errors []error
-    if len(containersToStop) != 0 {
+    var stopErrors []error
     for _, container := range containersToStop {
         if err := s.cli.ContainerStop(s.ctx, container.ID, nil); err != nil {
-            errors = append(errors, err)
+            stopErrors = append(stopErrors, err)
         } else {
             stoppedContainers = append(stoppedContainers, container)
         }
     }
 
+    if len(stopErrors) != 0 {
+        return noop, fmt.Errorf(
+            "stopContainersAndRun: %d error(s) stopping containers: %w",
+            len(stopErrors),
+            err,
+        )
     }
 
-    defer func() error {
+    return func() error {
         servicesRequiringUpdate := map[string]struct{}{}
 
         var restartErrors []error
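The refactoring above drops the old `stopContainersAndRun(thunk)` shape, where a `defer func() error` silently discarded its return value, in favor of `stopContainers()` returning a restart closure the caller defers explicitly. A minimal sketch of the pattern with generic names (not from the repository); returning a non-nil `noop` even on failure means the call site never needs a nil check before deferring:

```go
package main

import "fmt"

// stopSomething performs a side effect and returns an undo func plus an error.
func stopSomething() (func() error, error) {
	noop := func() error { return nil }
	fmt.Println("stopping containers...")
	// On failure we would return (noop, err) so the caller's defer stays safe.
	_ = noop
	return func() error {
		fmt.Println("restarting containers...")
		return nil
	}, nil
}

func main() {
	restart, err := stopSomething()
	defer func() {
		// The undo runs whether or not the work in between succeeded.
		if err := restart(); err != nil {
			fmt.Println("restart failed:", err)
		}
	}()
	if err != nil {
		fmt.Println("stop failed:", err)
		return
	}
	fmt.Println("taking backup while containers are stopped...")
}
```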
@@ -195,7 +221,7 @@ func (s *script) stopContainersAndRun(thunk func() error) error {
             }
         }
         if serviceMatch.ID == "" {
-            return fmt.Errorf("stopContainersAndRun: Couldn't find service with name %s", serviceName)
+            return fmt.Errorf("stopContainersAndRun: couldn't find service with name %s", serviceName)
         }
         serviceMatch.Spec.TaskTemplate.ForceUpdate = 1
         _, err := s.cli.ServiceUpdate(
@@ -215,37 +241,22 @@ func (s *script) stopContainersAndRun(thunk func() error) error {
                 err,
             )
         }
-        s.logger.Infof("Successfully restarted %d containers.", len(stoppedContainers))
-        return nil
-    }()
-
-    var stopErr error
-    if len(errors) != 0 {
-        stopErr = fmt.Errorf(
-            "stopContainersAndRun: %d errors stopping containers: %w",
-            len(errors),
-            err,
-        )
-    }
-    if stopErr != nil {
-        return stopErr
-    }
-
-    return thunk()
+        s.logger.Infof(
+            "Restarted %d container(s) and the matching service(s).",
+            len(stoppedContainers),
+        )
+        return nil
+    }, nil
 }
 
 // takeBackup creates a tar archive of the configured backup location and
 // saves it to disk.
 func (s *script) takeBackup() error {
-    outBytes, err := exec.Command("date", fmt.Sprintf("+%s", s.file)).Output()
-    if err != nil {
-        return fmt.Errorf("takeBackup: error formatting filename template: %w", err)
-    }
-    s.file = strings.TrimSpace(string(outBytes))
-    if err := targz.Compress(s.sources, s.file); err != nil {
+    s.file = timeutil.Strftime(&s.start, s.file)
+    if err := targz.Compress(s.c.BackupSources, s.file); err != nil {
         return fmt.Errorf("takeBackup: error compressing backup folder: %w", err)
     }
-    s.logger.Infof("Successfully created backup of `%s` at `%s`.", s.sources, s.file)
+    s.logger.Infof("Created backup of `%s` at `%s`.", s.c.BackupSources, s.file)
     return nil
 }
 
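`takeBackup` now expands the strftime-style placeholders in `BACKUP_FILENAME` in-process via `leekchan/timeutil` instead of shelling out to `date`, which also pins the timestamp to the run's start time. A small sketch of that call, using a fixed time so the output is predictable:

```go
package main

import (
	"fmt"
	"time"

	"github.com/leekchan/timeutil"
)

func main() {
	// Strftime takes a *time.Time and a strftime-style template, matching the
	// default BACKUP_FILENAME of "backup-%Y-%m-%dT%H-%M-%S.tar.gz".
	now := time.Date(2021, 8, 20, 15, 4, 5, 0, time.UTC)
	fmt.Println(timeutil.Strftime(&now, "backup-%Y-%m-%dT%H-%M-%S.tar.gz"))
	// Output: backup-2021-08-20T15-04-05.tar.gz
}
```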
@@ -253,42 +264,39 @@ func (s *script) takeBackup() error {
 // In case no passphrase is given it returns early, leaving the backup file
 // untouched.
 func (s *script) encryptBackup() error {
-    if s.passphrase == "" {
+    if s.c.GpgPassphrase == "" {
         return nil
     }
+    defer os.Remove(s.file)
+
+    gpgFile := fmt.Sprintf("%s.gpg", s.file)
+    outFile, err := os.Create(gpgFile)
+    defer outFile.Close()
+    if err != nil {
+        return fmt.Errorf("encryptBackup: error opening out file: %w", err)
+    }
 
-    buf := bytes.NewBuffer(nil)
     _, name := path.Split(s.file)
-    pt, err := openpgp.SymmetricallyEncrypt(buf, []byte(s.passphrase), &openpgp.FileHints{
+    dst, err := openpgp.SymmetricallyEncrypt(outFile, []byte(s.c.GpgPassphrase), &openpgp.FileHints{
         IsBinary: true,
         FileName: name,
     }, nil)
+    defer dst.Close()
     if err != nil {
         return fmt.Errorf("encryptBackup: error encrypting backup file: %w", err)
     }
 
-    unencrypted, err := ioutil.ReadFile(s.file)
+    src, err := os.Open(s.file)
     if err != nil {
-        pt.Close()
-        return fmt.Errorf("encryptBackup: error reading unencrypted backup file: %w", err)
-    }
-    _, err = pt.Write(unencrypted)
-    if err != nil {
-        pt.Close()
-        return fmt.Errorf("encryptBackup: error writing backup contents: %w", err)
-    }
-    pt.Close()
-
-    gpgFile := fmt.Sprintf("%s.gpg", s.file)
-    if err := ioutil.WriteFile(gpgFile, buf.Bytes(), os.ModeAppend); err != nil {
-        return fmt.Errorf("encryptBackup: error writing encrypted version of backup: %w", err)
+        return fmt.Errorf("encryptBackup: error opening backup file %s: %w", s.file, err)
     }
 
-    if err := os.Remove(s.file); err != nil {
-        return fmt.Errorf("encryptBackup: error removing unencrpyted backup: %w", err)
+    if _, err := io.Copy(dst, src); err != nil {
+        return fmt.Errorf("encryptBackup: error writing ciphertext to file: %w", err)
     }
 
     s.file = gpgFile
-    s.logger.Infof("Successfully encrypted backup using given passphrase, saving as `%s`.", s.file)
+    s.logger.Infof("Encrypted backup using given passphrase, saving as `%s`.", s.file)
     return nil
 }
 
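The rewritten `encryptBackup` streams the archive through `openpgp.SymmetricallyEncrypt` with `io.Copy` instead of buffering the entire file in memory, which matters for large volumes. A self-contained sketch of the same streaming approach (file names and passphrase are placeholders, not values from the repository):

```go
package main

import (
	"io"
	"log"
	"os"

	"golang.org/x/crypto/openpgp"
)

func main() {
	src, err := os.Open("backup.tar.gz") // placeholder input file
	if err != nil {
		log.Fatal(err)
	}
	defer src.Close()

	out, err := os.Create("backup.tar.gz.gpg")
	if err != nil {
		log.Fatal(err)
	}
	defer out.Close()

	// SymmetricallyEncrypt returns an io.WriteCloser; plaintext written to it
	// is encrypted with the passphrase and streamed into out.
	dst, err := openpgp.SymmetricallyEncrypt(out, []byte("secret"), &openpgp.FileHints{
		IsBinary: true,
		FileName: "backup.tar.gz",
	}, nil)
	if err != nil {
		log.Fatal(err)
	}
	// Closing dst finalizes the OpenPGP packet stream; defers run in LIFO
	// order, so dst closes before out.
	defer dst.Close()

	if _, err := io.Copy(dst, src); err != nil {
		log.Fatal(err)
	}
}
```

The resulting file can be decrypted with a plain `gpg -d` and the same passphrase, which is exactly what the updated test script later in this diff does.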
@@ -296,31 +304,31 @@ func (s *script) encryptBackup() error {
 // as per the given configuration.
 func (s *script) copyBackup() error {
     _, name := path.Split(s.file)
-    if s.bucket != "" {
-        _, err := s.mc.FPutObject(s.ctx, s.bucket, name, s.file, minio.PutObjectOptions{
+    if s.c.AwsS3BucketName != "" {
+        _, err := s.mc.FPutObject(s.ctx, s.c.AwsS3BucketName, name, s.file, minio.PutObjectOptions{
             ContentType: "application/tar+gzip",
         })
         if err != nil {
             return fmt.Errorf("copyBackup: error uploading backup to remote storage: %w", err)
         }
-        s.logger.Infof("Successfully uploaded a copy of backup `%s` to bucket `%s`", s.file, s.bucket)
+        s.logger.Infof("Uploaded a copy of backup `%s` to bucket `%s`", s.file, s.c.AwsS3BucketName)
     }
 
-    if _, err := os.Stat(s.archive); !os.IsNotExist(err) {
-        if err := copy(s.file, path.Join(s.archive, name)); err != nil {
+    if _, err := os.Stat(s.c.BackupArchive); !os.IsNotExist(err) {
+        if err := copy(s.file, path.Join(s.c.BackupArchive, name)); err != nil {
             return fmt.Errorf("copyBackup: error copying file to local archive: %w", err)
         }
-        s.logger.Infof("Successfully stored copy of backup `%s` in local archive `%s`", s.file, s.archive)
+        s.logger.Infof("Stored copy of backup `%s` in local archive `%s`", s.file, s.c.BackupArchive)
     }
     return nil
 }
 
-// cleanBackup removes the backup file from disk.
-func (s *script) cleanBackup() error {
+// removeArtifacts removes the backup file from disk.
+func (s *script) removeArtifacts() error {
     if err := os.Remove(s.file); err != nil {
-        return fmt.Errorf("cleanBackup: error removing file: %w", err)
+        return fmt.Errorf("removeArtifacts: error removing file: %w", err)
     }
-    s.logger.Info("Successfully cleaned up local artifacts.")
+    s.logger.Info("Removed local artifacts.")
     return nil
 }
 
@@ -328,29 +336,22 @@ func (s *script) cleanBackup() error {
 // the given configuration. In case the given configuration would delete all
 // backups, it does nothing instead.
 func (s *script) pruneOldBackups() error {
-    retention := os.Getenv("BACKUP_RETENTION_DAYS")
-    if retention == "" {
+    if s.c.BackupRetentionDays < 0 {
         return nil
     }
-    retentionDays, err := strconv.Atoi(retention)
-    if err != nil {
-        return fmt.Errorf("pruneOldBackups: error parsing BACKUP_RETENTION_DAYS as int: %w", err)
-    }
-    leeway := os.Getenv("BACKUP_PRUNING_LEEWAY")
-    sleepFor, err := time.ParseDuration(leeway)
-    if err != nil {
-        return fmt.Errorf("pruneBackups: error parsing given leeway value: %w", err)
-    }
-    s.logger.Infof("Sleeping for %s before pruning backups.", leeway)
-    time.Sleep(sleepFor)
 
-    s.logger.Infof("Trying to prune backups older than %d days now.", retentionDays)
-    deadline := time.Now().AddDate(0, 0, -retentionDays)
+    if s.c.BackupPruningLeeway != 0 {
+        s.logger.Infof("Sleeping for %s before pruning backups.", s.c.BackupPruningLeeway)
+        time.Sleep(s.c.BackupPruningLeeway)
+    }
 
-    if s.bucket != "" {
-        candidates := s.mc.ListObjects(s.ctx, s.bucket, minio.ListObjectsOptions{
+    s.logger.Infof("Trying to prune backups older than %d day(s) now.", s.c.BackupRetentionDays)
+    deadline := time.Now().AddDate(0, 0, -int(s.c.BackupRetentionDays))
+
+    if s.c.AwsS3BucketName != "" {
+        candidates := s.mc.ListObjects(s.ctx, s.c.AwsS3BucketName, minio.ListObjectsOptions{
             WithMetadata: true,
-            Prefix: os.Getenv("BACKUP_PRUNING_PREFIX"),
+            Prefix: s.c.BackupPruningPrefix,
         })
 
         var matches []minio.ObjectInfo
@@ -358,7 +359,10 @@ func (s *script) pruneOldBackups() error {
         for candidate := range candidates {
             lenCandidates++
             if candidate.Err != nil {
-                return fmt.Errorf("pruneOldBackups: error looking up candidates from remote storage: %w", candidate.Err)
+                return fmt.Errorf(
+                    "pruneOldBackups: error looking up candidates from remote storage: %w",
+                    candidate.Err,
+                )
             }
             if candidate.LastModified.Before(deadline) {
                 matches = append(matches, candidate)
@@ -373,7 +377,7 @@ func (s *script) pruneOldBackups() error {
             }
             close(objectsCh)
         }()
-        errChan := s.mc.RemoveObjects(s.ctx, s.bucket, objectsCh, minio.RemoveObjectsOptions{})
+        errChan := s.mc.RemoveObjects(s.ctx, s.c.AwsS3BucketName, objectsCh, minio.RemoveObjectsOptions{})
         var errors []error
         for result := range errChan {
             if result.Err != nil {
@@ -383,29 +387,31 @@ func (s *script) pruneOldBackups() error {
 
         if len(errors) != 0 {
             return fmt.Errorf(
-                "pruneOldBackups: %d errors removing files from remote storage: %w",
+                "pruneOldBackups: %d error(s) removing files from remote storage: %w",
                 len(errors),
                 errors[0],
             )
         }
         s.logger.Infof(
-            "Successfully pruned %d out of %d remote backups as their age exceeded the configured retention period.",
+            "Pruned %d out of %d remote backup(s) as their age exceeded the configured retention period of %d days.",
             len(matches),
             lenCandidates,
+            s.c.BackupRetentionDays,
         )
         } else if len(matches) != 0 && len(matches) == lenCandidates {
             s.logger.Warnf(
-                "The current configuration would delete all %d remote backup copies. Refusing to do so, please check your configuration.",
+                "The current configuration would delete all %d remote backup copies.",
                 len(matches),
             )
+            s.logger.Warn("Refusing to do so, please check your configuration.")
         } else {
-            s.logger.Infof("None of %d remote backups were pruned.", lenCandidates)
+            s.logger.Infof("None of %d remote backup(s) were pruned.", lenCandidates)
         }
     }
 
-    if _, err := os.Stat(s.archive); !os.IsNotExist(err) {
+    if _, err := os.Stat(s.c.BackupArchive); !os.IsNotExist(err) {
         candidates, err := filepath.Glob(
-            path.Join(s.archive, fmt.Sprintf("%s*", os.Getenv("BACKUP_PRUNING_PREFIX"))),
+            path.Join(s.c.BackupArchive, fmt.Sprintf("%s*", s.c.BackupPruningPrefix)),
         )
         if err != nil {
             return fmt.Errorf(
@@ -413,7 +419,7 @@ func (s *script) pruneOldBackups() error {
             )
         }
 
-        var matches []os.FileInfo
+        var matches []string
         for _, candidate := range candidates {
             fi, err := os.Stat(candidate)
             if err != nil {
@@ -425,36 +431,38 @@ func (s *script) pruneOldBackups() error {
             }
 
             if fi.ModTime().Before(deadline) {
-                matches = append(matches, fi)
+                matches = append(matches, candidate)
             }
         }
 
         if len(matches) != 0 && len(matches) != len(candidates) {
             var errors []error
             for _, candidate := range matches {
-                if err := os.Remove(candidate.Name()); err != nil {
+                if err := os.Remove(candidate); err != nil {
                     errors = append(errors, err)
                 }
             }
             if len(errors) != 0 {
                 return fmt.Errorf(
-                    "pruneOldBackups: %d errors deleting local files, starting with: %w",
+                    "pruneOldBackups: %d error(s) deleting local files, starting with: %w",
                     len(errors),
                     errors[0],
                 )
             }
             s.logger.Infof(
-                "Successfully pruned %d out of %d local backups as their age exceeded the configured retention period.",
+                "Pruned %d out of %d local backup(s) as their age exceeded the configured retention period of %d days.",
                 len(matches),
                 len(candidates),
+                s.c.BackupRetentionDays,
             )
         } else if len(matches) != 0 && len(matches) == len(candidates) {
             s.logger.Warnf(
-                "The current configuration would delete all %d local backup copies. Refusing to do so, please check your configuration.",
+                "The current configuration would delete all %d local backup copies.",
                 len(matches),
             )
+            s.logger.Warn("Refusing to do so, please check your configuration.")
         } else {
-            s.logger.Infof("None of %d local backups were pruned.", len(candidates))
+            s.logger.Infof("None of %d local backup(s) were pruned.", len(candidates))
         }
     }
     return nil
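`pruneOldBackups` now drives both the remote and the local pruning paths from the typed config. The remote path lists candidates with `ListObjects`, filters on `LastModified`, and deletes by feeding a channel into `RemoveObjects`. A condensed sketch of that minio-go v7 flow (endpoint, bucket, prefix and credentials are placeholders):

```go
package main

import (
	"context"
	"log"
	"time"

	"github.com/minio/minio-go/v7"
	"github.com/minio/minio-go/v7/pkg/credentials"
)

func main() {
	mc, err := minio.New("s3.amazonaws.com", &minio.Options{
		Creds:  credentials.NewStaticV4("KEY", "SECRET", ""),
		Secure: true,
	})
	if err != nil {
		log.Fatal(err)
	}

	ctx := context.Background()
	deadline := time.Now().AddDate(0, 0, -7) // e.g. a 7 day retention period

	objectsCh := make(chan minio.ObjectInfo)
	go func() {
		defer close(objectsCh)
		// ListObjects streams results; only objects older than the deadline
		// are forwarded to the removal channel.
		for obj := range mc.ListObjects(ctx, "my-bucket", minio.ListObjectsOptions{Prefix: "backup-"}) {
			if obj.Err == nil && obj.LastModified.Before(deadline) {
				objectsCh <- obj
			}
		}
	}()

	// RemoveObjects consumes the channel and reports per-object failures.
	for rErr := range mc.RemoveObjects(ctx, "my-bucket", objectsCh, minio.RemoveObjectsOptions{}) {
		log.Printf("failed to remove %s: %v", rErr.ObjectName, rErr.Err)
	}
}
```

A real run would also count candidates and matches the way the diff does, so an all-matching result can be refused instead of wiping every backup.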
@@ -462,14 +470,26 @@ func (s *script) pruneOldBackups() error {
 
 func (s *script) must(err error) {
     if err != nil {
-        if s.logger == nil {
-            panic(err)
-        }
-        s.logger.Errorf("Fatal error running backup: %s", err)
-        os.Exit(1)
+        s.logger.Fatalf("Fatal error running backup: %s", err)
     }
 }
 
+// lock opens a lockfile at the given location, keeping it locked until the
+// caller invokes the returned release func. When invoked while the file is
+// still locked the function panics.
+func lock(lockfile string) func() error {
+    fileLock := flock.New(lockfile)
+    acquired, err := fileLock.TryLock()
+    if err != nil {
+        panic(err)
+    }
+    if !acquired {
+        panic("unable to acquire file lock")
+    }
+    return fileLock.Unlock
+}
+
+// copy creates a copy of the file located at `dst` at `src`.
 func copy(src, dst string) error {
     in, err := os.Open(src)
     if err != nil {
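The new `lock` helper relies on `gofrs/flock`, which wraps an OS-level advisory file lock. Unlike the previous `os.OpenFile` version, which only created a marker file, a second process can no longer proceed while the first still holds the lock, so overlapping cron runs are actually prevented. A minimal usage sketch:

```go
package main

import (
	"log"

	"github.com/gofrs/flock"
)

func main() {
	fileLock := flock.New("/var/lock/dockervolumebackup.lock")

	// TryLock is non-blocking: it reports false instead of waiting when
	// another process already holds the lock.
	acquired, err := fileLock.TryLock()
	if err != nil {
		log.Fatal(err)
	}
	if !acquired {
		log.Fatal("another backup run is still in progress")
	}
	defer fileLock.Unlock()

	// ... do the work that must not run concurrently ...
}
```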
@@ -3,38 +3,12 @@
 # Copyright 2021 - Offen Authors <hioffen@posteo.de>
 # SPDX-License-Identifier: MPL-2.0
 
-# Portions of this file are taken from github.com/futurice/docker-volume-backup
-# See NOTICE for information about authors and licensing.
-
 set -e
 
-# Write cronjob env to file, fill in sensible defaults, and read them back in
-mkdir -p /etc/backup
-cat <<EOF > /etc/backup.env
-BACKUP_SOURCES="${BACKUP_SOURCES:-/backup}"
 BACKUP_CRON_EXPRESSION="${BACKUP_CRON_EXPRESSION:-@daily}"
-BACKUP_FILENAME="${BACKUP_FILENAME:-backup-%Y-%m-%dT%H-%M-%S.tar.gz}"
-BACKUP_ARCHIVE="${BACKUP_ARCHIVE:-/archive}"
-
-BACKUP_RETENTION_DAYS="${BACKUP_RETENTION_DAYS:-}"
-BACKUP_PRUNING_LEEWAY="${BACKUP_PRUNING_LEEWAY:-10m}"
-BACKUP_PRUNING_PREFIX="${BACKUP_PRUNING_PREFIX:-}"
-BACKUP_STOP_CONTAINER_LABEL="${BACKUP_STOP_CONTAINER_LABEL:-true}"
-
-AWS_S3_BUCKET_NAME="${AWS_S3_BUCKET_NAME:-}"
-AWS_ENDPOINT="${AWS_ENDPOINT:-s3.amazonaws.com}"
-AWS_ENDPOINT_PROTO="${AWS_ENDPOINT_PROTO:-https}"
-AWS_ENDPOINT_INSECURE="${AWS_ENDPOINT_INSECURE:-}"
-
-GPG_PASSPHRASE="${GPG_PASSPHRASE:-}"
-EOF
-chmod a+x /etc/backup.env
-source /etc/backup.env
 
-# Add our cron entry, and direct stdout & stderr to Docker commands stdout
 echo "Installing cron.d entry with expression $BACKUP_CRON_EXPRESSION."
 echo "$BACKUP_CRON_EXPRESSION backup 2>&1" | crontab -
 
-# Let cron take the wheel
 echo "Starting cron in foreground."
 crond -f -l 8
go.mod (6 lines changed)
@@ -4,8 +4,11 @@ go 1.17
 
 require (
     github.com/docker/docker v20.10.8+incompatible
-    github.com/joho/godotenv v1.3.0
+    github.com/gofrs/flock v0.8.1
+    github.com/kelseyhightower/envconfig v1.4.0
+    github.com/leekchan/timeutil v0.0.0-20150802142658-28917288c48d
     github.com/minio/minio-go/v7 v7.0.12
+    github.com/sirupsen/logrus v1.8.1
     github.com/walle/targz v0.0.0-20140417120357-57fe4206da5a
     golang.org/x/crypto v0.0.0-20210817164053-32db794688a5
 )
@@ -32,7 +35,6 @@ require (
     github.com/opencontainers/image-spec v1.0.1 // indirect
     github.com/pkg/errors v0.9.1 // indirect
     github.com/rs/xid v1.2.1 // indirect
-    github.com/sirupsen/logrus v1.8.1 // indirect
     golang.org/x/net v0.0.0-20210226172049-e18ecbb05110 // indirect
     golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1 // indirect
     golang.org/x/text v0.3.4 // indirect
go.sum (11 lines changed)
@@ -274,6 +274,8 @@ github.com/godbus/dbus v0.0.0-20180201030542-885f9cc04c9c/go.mod h1:/YcGZj5zSblf
 github.com/godbus/dbus v0.0.0-20190422162347-ade71ed3457e/go.mod h1:bBOAhwG1umN6/6ZUMtDFBMQR8jRg9O75tm9K00oMsK4=
 github.com/godbus/dbus/v5 v5.0.3/go.mod h1:xhWf0FNVPg57R7Z0UbKHbJfkEywrmjJnf7w5xrFpKfA=
 github.com/godbus/dbus/v5 v5.0.4/go.mod h1:xhWf0FNVPg57R7Z0UbKHbJfkEywrmjJnf7w5xrFpKfA=
+github.com/gofrs/flock v0.8.1 h1:+gYjHKf32LDeiEEFhQaotPbLuUXjY5ZqxKgXy7n59aw=
+github.com/gofrs/flock v0.8.1/go.mod h1:F1TvTiK9OcQqauNUHlbJvyl9Qa1QvF/gOUDKA14jxHU=
 github.com/gogo/googleapis v1.2.0/go.mod h1:Njal3psf3qN6dwBtQfUmBZh2ybovJ0tlu3o/AC7HYjU=
 github.com/gogo/googleapis v1.4.0/go.mod h1:5YRNX2z1oM5gXdAkurHa942MDgEJyk02w4OecKY87+c=
 github.com/gogo/protobuf v1.1.1/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7atdtwQ=
@@ -370,8 +372,6 @@ github.com/inconshreveable/mousetrap v1.0.0/go.mod h1:PxqpIevigyE2G7u3NXJIT2ANyt
 github.com/j-keck/arping v0.0.0-20160618110441-2cf9dc699c56/go.mod h1:ymszkNOg6tORTn+6F6j+Jc8TOr5osrynvN6ivFWZ2GA=
 github.com/jmespath/go-jmespath v0.0.0-20160202185014-0b12d6b521d8/go.mod h1:Nht3zPeWKUH0NzdCt2Blrr5ys8VGpn0CEB0cQHVjt7k=
 github.com/jmespath/go-jmespath v0.0.0-20160803190731-bd40a432e4c7/go.mod h1:Nht3zPeWKUH0NzdCt2Blrr5ys8VGpn0CEB0cQHVjt7k=
-github.com/joho/godotenv v1.3.0 h1:Zjp+RcGpHhGlrMbJzXTrZZPrWj+1vfm90La1wgB6Bhc=
-github.com/joho/godotenv v1.3.0/go.mod h1:7hK45KPybAkOC6peb+G5yklZfMxEjkZhHbwpqxOKXbg=
 github.com/jonboulle/clockwork v0.1.0/go.mod h1:Ii8DK3G1RaLaWxj9trq07+26W01tbo22gdxWY5EU2bo=
 github.com/json-iterator/go v1.1.6/go.mod h1:+SdeFBvtyEkXs7REEP0seUULqWtbJapLOCVDaaPEHmU=
 github.com/json-iterator/go v1.1.7/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4=
@@ -382,6 +382,8 @@ github.com/jstemmer/go-junit-report v0.9.1/go.mod h1:Brl9GWCQeLvo8nXZwPNNblvFj/X
 github.com/jtolds/gls v4.20.0+incompatible h1:xdiiI2gbIgH/gLH7ADydsJ1uDOEzR8yvV7C0MuV77Wo=
 github.com/jtolds/gls v4.20.0+incompatible/go.mod h1:QJZ7F/aHp+rZTRtaJ1ow/lLfFfVYBRgL+9YlvaHOwJU=
 github.com/julienschmidt/httprouter v1.2.0/go.mod h1:SYymIcj16QtmaHHD7aYtjjsJG7VTCxuUUipMqKk8s4w=
+github.com/kelseyhightower/envconfig v1.4.0 h1:Im6hONhd3pLkfDFsbRgu68RDNkGF1r3dvMUtDTo2cv8=
+github.com/kelseyhightower/envconfig v1.4.0/go.mod h1:cccZRl6mQpaq41TPp5QxidR+Sa3axMbJDNb//FQX6Gg=
 github.com/kisielk/errcheck v1.1.0/go.mod h1:EZBBE59ingxPouuu3KfxchcWSUPOHkagtvWXihfKN4Q=
 github.com/kisielk/errcheck v1.2.0/go.mod h1:/BMXB+zMLi60iA8Vv6Ksmxu/1UDYcXs4uQLJ+jE2L00=
 github.com/kisielk/errcheck v1.5.0/go.mod h1:pFxgyoBC7bSaBwPgfKdkLd5X25qrDl4LWUI2bnpBCr8=
@@ -397,10 +399,14 @@ github.com/konsorten/go-windows-terminal-sequences v1.0.3/go.mod h1:T0+1ngSBFLxv
 github.com/kr/logfmt v0.0.0-20140226030751-b84e30acd515/go.mod h1:+0opPa2QZZtGFBFZlji/RkVcI2GknAs/DXo4wKdlNEc=
 github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo=
 github.com/kr/pretty v0.2.0/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfnI=
+github.com/kr/pretty v0.2.1 h1:Fmg33tUaq4/8ym9TJN1x7sLJnHVwhP33CNkpYV/7rwI=
 github.com/kr/pretty v0.2.1/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfnI=
 github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
 github.com/kr/pty v1.1.5/go.mod h1:9r2w37qlBe7rQ6e1fg1S/9xpWHSnaqNdHD3WcMdbPDA=
+github.com/kr/text v0.1.0 h1:45sCR5RtlFHMR4UwH9sdQ5TC8v0qDQCHnXt+kaKSTVE=
 github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
+github.com/leekchan/timeutil v0.0.0-20150802142658-28917288c48d h1:2puqoOQwi3Ai1oznMOsFIbifm6kIfJaLLyYzWD4IzTs=
+github.com/leekchan/timeutil v0.0.0-20150802142658-28917288c48d/go.mod h1:hO90vCP2x3exaSH58BIAowSKvV+0OsY21TtzuFGHON4=
 github.com/magiconair/properties v1.8.0/go.mod h1:PppfXfuXeibc/6YijjN8zIbojt8czPbwD3XqdrwzmxQ=
 github.com/mailru/easyjson v0.0.0-20190614124828-94de47d64c63/go.mod h1:C1wdFJiN94OJF2b5HbByQZoLdCWB1Yqtg26g4irojpc=
 github.com/mailru/easyjson v0.0.0-20190626092158-b2ccc519800e/go.mod h1:C1wdFJiN94OJF2b5HbByQZoLdCWB1Yqtg26g4irojpc=
@@ -905,6 +911,7 @@ gopkg.in/alecthomas/kingpin.v2 v2.2.6/go.mod h1:FMv+mEhP44yOT+4EoQTLFTRgOQ1FBLks
 gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
 gopkg.in/check.v1 v1.0.0-20141024133853-64131543e789/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
 gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
+gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15 h1:YR8cESwS4TdDjEe65xsg0ogRM/Nc3DYOhEAlW+xobZo=
 gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
 gopkg.in/cheggaaa/pb.v1 v1.0.25/go.mod h1:V/YB90LKu/1FcN3WVnfiiE5oMCibMjukxqG/qStrOgw=
 gopkg.in/errgo.v2 v2.1.0/go.mod h1:hNsd1EY+bozCKY1Ytp96fpM3vjJbqLJn88ws8XvfDNI=
@@ -28,6 +28,7 @@ services:
       BACKUP_RETENTION_DAYS: ${BACKUP_RETENTION_DAYS:-7}
       BACKUP_PRUNING_LEEWAY: 5s
       BACKUP_PRUNING_PREFIX: test
+      GPG_PASSPHRASE: 1234secret
     volumes:
       - ./local:/archive
       - app_data:/backup/app_data:ro
@@ -40,7 +41,6 @@ services:
     volumes:
       - app_data:/var/opt/offen
 
-
 volumes:
   backup_data:
   app_data:
@@ -13,11 +13,13 @@ docker-compose exec backup backup
 
 docker run --rm -it \
   -v compose_backup_data:/data alpine \
-  ash -c 'tar -xf /data/backup/test.tar.gz && test -f /backup/app_data/offen.db'
+  ash -c 'apk add gnupg && echo 1234secret | gpg -d --pinentry-mode loopback --passphrase-fd 0 --yes /data/backup/test.tar.gz.gpg > /tmp/test.tar.gz && tar -xf /tmp/test.tar.gz -C /tmp && test -f /tmp/backup/app_data/offen.db'
 
 echo "[TEST:PASS] Found relevant files in untared remote backup."
 
-tar -xf ./local/test.tar.gz -C /tmp && test -f /tmp/backup/app_data/offen.db
+echo 1234secret | gpg -d --yes --passphrase-fd 0 ./local/test.tar.gz.gpg > ./local/decrypted.tar.gz
+tar -xf ./local/decrypted.tar.gz -C /tmp && test -f /tmp/backup/app_data/offen.db
+rm ./local/decrypted.tar.gz
 
 echo "[TEST:PASS] Found relevant files in untared local backup."
 
@@ -29,8 +31,6 @@ fi
 
 echo "[TEST:PASS] All containers running post backup."
 
-docker-compose down
-
 # The second part of this test checks if backups get deleted when the retention
 # is set to 0 days (which it should not as it would mean all backups get deleted)
 # TODO: find out if we can test actual deletion without having to wait for a day