Compare commits


6 Commits

Author SHA1 Message Date
Frederik Ring
1892d56ff6 Change default value for SSH identity file (#108)
* Change default value for SSH identity file

* Force remove write protected file in tests
2022-06-17 11:28:29 +02:00
İbrahim Akyel
0b205fe6dc SSH Backup Storage Support (#107)
* SSH Client implemented

* Private key auth implemented
Code refactoring

* Refactoring

* Passphrase renamed to IdentityPassphrase
Default private key location changed to .ssh/id
2022-06-17 11:06:15 +02:00
Frederik Ring
8c8a2fa088 Update vulnerable containerd dependency (#104) 2022-06-07 09:21:40 +02:00
Frederik Ring
a850bf13fe Fix broken link in README 2022-05-12 08:18:12 +02:00
Frederik Ring
b52b271bac Allow for the exclusion of files from backups (#100)
* Hoist walking of files so it can be used for features other than archive creation

* Add option to ignore files from backup using glob patterns

* Use Regexp instead of glob for exclusion

* Ignore artifacts

* Add teardown to test

* Allow single Re for filtering only

* Add documentation

* Use MatchString on re, add bad input to message in case of error
2022-05-08 11:20:38 +02:00
Frederik Ring
cac5777e79 Add documentation on using multiple configs for complex retention schemes 2022-04-20 18:01:41 +02:00
15 changed files with 458 additions and 696 deletions

README.md

@@ -7,7 +7,7 @@
 Backup Docker volumes locally or to any S3 compatible storage.
 
 The [offen/docker-volume-backup](https://hub.docker.com/r/offen/docker-volume-backup) Docker image can be used as a lightweight (below 15MB) sidecar container to an existing Docker setup.
 
-It handles __recurring or one-off backups of Docker volumes__ to a __local directory__, __any S3 or WebDAV compatible storage (or any combination) and rotates away old backups__ if configured. It also supports __encrypting your backups using GPG__ and __sending notifications for failed backup runs__.
+It handles __recurring or one-off backups of Docker volumes__ to a __local directory__, __any S3, WebDAV or SSH compatible storage (or any combination) and rotates away old backups__ if configured. It also supports __encrypting your backups using GPG__ and __sending notifications for failed backup runs__.
 
 <!-- MarkdownTOC -->
@@ -30,11 +30,13 @@ It handles __recurring or one-off backups of Docker volumes__ to a __local direc
 - [Replace deprecated `BACKUP_FROM_SNAPSHOT` usage](#replace-deprecated-backup_from_snapshot-usage)
 - [Using a custom Docker host](#using-a-custom-docker-host)
 - [Run multiple backup schedules in the same container](#run-multiple-backup-schedules-in-the-same-container)
+- [Define different retention schedules](#define-different-retention-schedules)
 - [Recipes](#recipes)
 - [Backing up to AWS S3](#backing-up-to-aws-s3)
 - [Backing up to Filebase](#backing-up-to-filebase)
 - [Backing up to MinIO](#backing-up-to-minio)
 - [Backing up to WebDAV](#backing-up-to-webdav)
+- [Backing up to SSH](#backing-up-to-ssh)
 - [Backing up locally](#backing-up-locally)
 - [Backing up to AWS S3 as well as locally](#backing-up-to-aws-s3-as-well-as-locally)
 - [Running on a custom cron schedule](#running-on-a-custom-cron-schedule)
@@ -167,6 +169,12 @@ You can populate below template according to your requirements and use it as you
 # BACKUP_SOURCES="/other/location"
 
+# When given, all files in BACKUP_SOURCES whose full path matches the given
+# regular expression will be excluded from the archive. Regular Expressions
+# can be used as from the Go standard library https://pkg.go.dev/regexp
+# BACKUP_EXCLUDE_REGEXP="\.log$"
 
 ########### BACKUP STORAGE
 # The name of the remote bucket that should be used for storing backups. If
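A quick illustration of the matching semantics documented above — the pattern is tested against each file's full path, so anchors like `$` apply to the end of the path — using Go's standard `regexp` package (the paths are invented for the example):

```go
package main

import (
	"fmt"
	"regexp"
)

func main() {
	// BACKUP_EXCLUDE_REGEXP is compiled with Go's regexp package and
	// tested against the full path of every file in BACKUP_SOURCES.
	re := regexp.MustCompile(`\.log$`)
	fmt.Println(re.MatchString("/backup/app_data/server.log")) // true: excluded
	fmt.Println(re.MatchString("/backup/app_data/data.db"))    // false: archived
}
```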
@@ -238,6 +246,40 @@ You can populate below template according to your requirements and use it as you
 # WEBDAV_URL_INSECURE="true"
 
+# You can also backup files to any SSH server:
+
+# The URL of the remote SSH server
+# SSH_HOST_NAME="server.local"
+
+# The port of the remote SSH server
+# Optional variable default value is `22`
+# SSH_PORT=2222
+
+# The Directory to place the backups to on the SSH server.
+# SSH_REMOTE_PATH="/my/directory/"
+
+# The username for the SSH server
+# SSH_USER="user"
+
+# The password for the SSH server
+# SSH_PASSWORD="password"
+
+# The private key path in container for SSH server
+# Default value: /root/.ssh/id_rsa
+# If file is mounted to /root/.ssh/id_rsa path it will be used. Non-RSA keys will
+# also work.
+# SSH_IDENTITY_FILE="/root/.ssh/id_rsa"
+
+# The passphrase for the identity file
+# SSH_IDENTITY_PASSPHRASE="pass"
+
 # In addition to storing backups remotely, you can also keep local copies.
 # Pass a container-local path to store your backups if needed. You also need to
 # mount a local folder or Docker volume into that location (`/archive`
@@ -388,7 +430,7 @@ You can populate below template according to your requirements and use it as you
 # EMAIL_SMTP_PORT="<port>"
 ```
 
-In case you encouter double quoted values in your configuration you might be running an [older version of `docker-compose`].
+In case you encouter double quoted values in your configuration you might be running an [older version of `docker-compose`][compose-issue].
 You can work around this by either updating `docker-compose` or unquoting your configuration values.
 
 [compose-issue]: https://github.com/docker/compose/issues/2854
@@ -739,6 +781,39 @@ The exact order of schedules that use the same cron expression is not specified.
 In case you need your schedules to overlap, you need to create a dedicated container for each schedule instead.
 When changing the configuration, you currently need to manually restart the container for the changes to take effect.
 
+### Define different retention schedules
+
+If you want to manage backup retention on different schedules, the most straight forward approach is to define a dedicated configuration for retention rule using a different prefix in the `BACKUP_FILENAME` parameter and then run them on different cron schedules.
+
+For example, if you wanted to keep daily backups for 7 days, weekly backups for a month, and retain monthly backups forever, you could create three configuration files and mount them into `/etc/dockervolumebackup.d`:
+
+```ini
+# 01daily.conf
+BACKUP_FILENAME="daily-backup-%Y-%m-%dT%H-%M-%S.tar.gz"
+# run every day at 2am
+BACKUP_CRON_EXPRESSION="0 2 * * *"
+BACKUP_PRUNING_PREFIX="daily-backup-"
+BACKUP_RETENTION_DAYS="7"
+```
+
+```ini
+# 02weekly.conf
+BACKUP_FILENAME="weekly-backup-%Y-%m-%dT%H-%M-%S.tar.gz"
+# run every monday at 3am
+BACKUP_CRON_EXPRESSION="0 3 * * 1"
+BACKUP_PRUNING_PREFIX="weekly-backup-"
+BACKUP_RETENTION_DAYS="31"
+```
+
+```ini
+# 03monthly.conf
+BACKUP_FILENAME="monthly-backup-%Y-%m-%dT%H-%M-%S.tar.gz"
+# run every 1st of a month at 4am
+BACKUP_CRON_EXPRESSION="0 4 1 * *"
+```
+
+Note that while it's possible to define colliding cron schedules for each of these configurations, you might need to adjust the value for `LOCK_TIMEOUT` in case your backups are large and might take longer than an hour.
 
 ## Recipes
 This section lists configuration for some real-world use cases that you can mix and match according to your needs.
@@ -830,6 +905,29 @@ volumes:
   data:
 ```
 
+### Backing up to SSH
+
+```yml
+version: '3'
+
+services:
+  # ... define other services using the `data` volume here
+  backup:
+    image: offen/docker-volume-backup:v2
+    environment:
+      SSH_HOST_NAME: server.local
+      SSH_PORT: 2222
+      SSH_USER: user
+      SSH_REMOTE_PATH: /data
+    volumes:
+      - data:/backup/my-app-backup:ro
+      - /var/run/docker.sock:/var/run/docker.sock:ro
+      - /path/to/private_key:/root/.ssh/id
+
+volumes:
+  data:
+```
+
 ### Backing up locally
 ```yml


@@ -11,14 +11,13 @@ import (
"compress/gzip" "compress/gzip"
"fmt" "fmt"
"io" "io"
"io/fs"
"os" "os"
"path" "path"
"path/filepath" "path/filepath"
"strings" "strings"
) )
func createArchive(inputFilePath, outputFilePath string) error { func createArchive(files []string, inputFilePath, outputFilePath string) error {
inputFilePath = stripTrailingSlashes(inputFilePath) inputFilePath = stripTrailingSlashes(inputFilePath)
inputFilePath, outputFilePath, err := makeAbsolute(inputFilePath, outputFilePath) inputFilePath, outputFilePath, err := makeAbsolute(inputFilePath, outputFilePath)
if err != nil { if err != nil {
@@ -28,7 +27,7 @@ func createArchive(inputFilePath, outputFilePath string) error {
return fmt.Errorf("createArchive: error creating output file path: %w", err) return fmt.Errorf("createArchive: error creating output file path: %w", err)
} }
if err := compress(inputFilePath, outputFilePath, filepath.Dir(inputFilePath)); err != nil { if err := compress(files, outputFilePath, filepath.Dir(inputFilePath)); err != nil {
return fmt.Errorf("createArchive: error creating archive: %w", err) return fmt.Errorf("createArchive: error creating archive: %w", err)
} }
@@ -52,7 +51,7 @@ func makeAbsolute(inputFilePath, outputFilePath string) (string, string, error)
 	return inputFilePath, outputFilePath, err
 }
 
-func compress(inPath, outFilePath, subPath string) error {
+func compress(paths []string, outFilePath, subPath string) error {
 	file, err := os.Create(outFilePath)
 	if err != nil {
 		return fmt.Errorf("compress: error creating out file: %w", err)
@@ -62,14 +61,6 @@ func compress(inPath, outFilePath, subPath string) error {
 	gzipWriter := gzip.NewWriter(file)
 	tarWriter := tar.NewWriter(gzipWriter)
 
-	var paths []string
-	if err := filepath.WalkDir(inPath, func(path string, di fs.DirEntry, err error) error {
-		paths = append(paths, path)
-		return err
-	}); err != nil {
-		return fmt.Errorf("compress: error walking filesystem tree: %w", err)
-	}
-
 	for _, p := range paths {
 		if err := writeTarGz(p, tarWriter, prefix); err != nil {
 			return fmt.Errorf("compress error writing %s to archive: %w", p, err)


@@ -3,7 +3,11 @@
 package main
 
-import "time"
+import (
+	"fmt"
+	"regexp"
+	"time"
+)
 // Config holds all configuration values that are expected to be set
 // by users.
@@ -18,6 +22,7 @@ type Config struct {
 	BackupPruningPrefix string `split_words:"true"`
 	BackupStopContainerLabel string `split_words:"true" default:"true"`
 	BackupFromSnapshot bool `split_words:"true"`
+	BackupExcludeRegexp RegexpDecoder `split_words:"true"`
 	AwsS3BucketName string `split_words:"true"`
 	AwsS3Path string `split_words:"true"`
 	AwsEndpoint string `split_words:"true" default:"s3.amazonaws.com"`
@@ -40,7 +45,30 @@ type Config struct {
 	WebdavPath string `split_words:"true" default:"/"`
 	WebdavUsername string `split_words:"true"`
 	WebdavPassword string `split_words:"true"`
+	SSHHostName string `split_words:"true"`
+	SSHPort string `split_words:"true" default:"22"`
+	SSHUser string `split_words:"true"`
+	SSHPassword string `split_words:"true"`
+	SSHIdentityFile string `split_words:"true" default:"/root/.ssh/id_rsa"`
+	SSHIdentityPassphrase string `split_words:"true"`
+	SSHRemotePath string `split_words:"true"`
 	ExecLabel string `split_words:"true"`
 	ExecForwardOutput bool `split_words:"true"`
 	LockTimeout time.Duration `split_words:"true" default:"60m"`
 }
+
+type RegexpDecoder struct {
+	Re *regexp.Regexp
+}
+
+func (r *RegexpDecoder) Decode(v string) error {
+	if v == "" {
+		return nil
+	}
+	re, err := regexp.Compile(v)
+	if err != nil {
+		return fmt.Errorf("config: error compiling given regexp `%s`: %w", v, err)
+	}
+	*r = RegexpDecoder{Re: re}
+	return nil
+}
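The `split_words` struct tags indicate the configuration is populated by an envconfig-style library; assuming `github.com/kelseyhightower/envconfig`, any field type implementing its `Decoder` interface (`Decode(value string) error`) is handed the raw environment value instead of being set via reflection, which is what makes the `RegexpDecoder` above work. A self-contained sketch of that mechanism under that assumption:

```go
package main

import (
	"fmt"
	"log"
	"os"
	"regexp"

	"github.com/kelseyhightower/envconfig"
)

// RegexpDecoder mirrors the type added in the diff: because it implements
// envconfig's Decoder interface, envconfig calls Decode with the raw value
// of BACKUP_EXCLUDE_REGEXP.
type RegexpDecoder struct {
	Re *regexp.Regexp
}

func (r *RegexpDecoder) Decode(v string) error {
	if v == "" {
		return nil
	}
	re, err := regexp.Compile(v)
	if err != nil {
		return fmt.Errorf("error compiling given regexp `%s`: %w", v, err)
	}
	*r = RegexpDecoder{Re: re}
	return nil
}

type config struct {
	BackupExcludeRegexp RegexpDecoder `split_words:"true"`
}

func main() {
	os.Setenv("BACKUP_EXCLUDE_REGEXP", `\.log$`)
	var c config
	if err := envconfig.Process("", &c); err != nil {
		log.Fatal(err)
	}
	fmt.Println(c.BackupExcludeRegexp.Re.MatchString("/backup/debug.log")) // true
}
```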


@@ -7,8 +7,11 @@ import (
"context" "context"
"errors" "errors"
"fmt" "fmt"
"github.com/pkg/sftp"
"golang.org/x/crypto/ssh"
"io" "io"
"io/fs" "io/fs"
"io/ioutil"
"net/http" "net/http"
"os" "os"
"path" "path"
@@ -39,6 +42,8 @@ type script struct {
 	cli *client.Client
 	minioClient *minio.Client
 	webdavClient *gowebdav.Client
+	sshClient *ssh.Client
+	sftpClient *sftp.Client
 	logger *logrus.Logger
 	sender *router.ServiceRouter
 	template *template.Template
@@ -159,6 +164,57 @@ func newScript() (*script, error) {
 		}
 	}
 
+	if s.c.SSHHostName != "" {
+		var authMethods []ssh.AuthMethod
+
+		if s.c.SSHPassword != "" {
+			authMethods = append(authMethods, ssh.Password(s.c.SSHPassword))
+		}
+
+		if _, err := os.Stat(s.c.SSHIdentityFile); err == nil {
+			key, err := ioutil.ReadFile(s.c.SSHIdentityFile)
+			if err != nil {
+				return nil, errors.New("newScript: error reading the private key")
+			}
+			var signer ssh.Signer
+			if s.c.SSHIdentityPassphrase != "" {
+				signer, err = ssh.ParsePrivateKeyWithPassphrase(key, []byte(s.c.SSHIdentityPassphrase))
+				if err != nil {
+					return nil, errors.New("newScript: error parsing the encrypted private key")
+				}
+				authMethods = append(authMethods, ssh.PublicKeys(signer))
+			} else {
+				signer, err = ssh.ParsePrivateKey(key)
+				if err != nil {
+					return nil, errors.New("newScript: error parsing the private key")
+				}
+				authMethods = append(authMethods, ssh.PublicKeys(signer))
+			}
+		}
+
+		sshClientConfig := &ssh.ClientConfig{
+			User: s.c.SSHUser,
+			Auth: authMethods,
+			HostKeyCallback: ssh.InsecureIgnoreHostKey(),
+		}
+		sshClient, err := ssh.Dial("tcp", fmt.Sprintf("%s:%s", s.c.SSHHostName, s.c.SSHPort), sshClientConfig)
+		s.sshClient = sshClient
+		if err != nil {
+			return nil, fmt.Errorf("newScript: error creating ssh client: %w", err)
+		}
+		_, _, err = s.sshClient.SendRequest("keepalive", false, nil)
+		if err != nil {
+			return nil, err
+		}
+		sftpClient, err := sftp.NewClient(sshClient)
+		s.sftpClient = sftpClient
+		if err != nil {
+			return nil, fmt.Errorf("newScript: error creating sftp client: %w", err)
+		}
+	}
+
 	if s.c.EmailNotificationRecipient != "" {
 		emailURL := fmt.Sprintf(
 			"smtp://%s:%s@%s:%d/?from=%s&to=%s",
@@ -398,7 +454,28 @@ func (s *script) takeBackup() error {
s.logger.Infof("Removed tar file `%s`.", tarFile) s.logger.Infof("Removed tar file `%s`.", tarFile)
return nil return nil
}) })
if err := createArchive(backupSources, tarFile); err != nil {
backupPath, err := filepath.Abs(stripTrailingSlashes(backupSources))
if err != nil {
return fmt.Errorf("takeBackup: error getting absolute path: %w", err)
}
var filesEligibleForBackup []string
if err := filepath.WalkDir(backupPath, func(path string, di fs.DirEntry, err error) error {
if err != nil {
return err
}
if s.c.BackupExcludeRegexp.Re != nil && s.c.BackupExcludeRegexp.Re.MatchString(path) {
return nil
}
filesEligibleForBackup = append(filesEligibleForBackup, path)
return nil
}); err != nil {
return fmt.Errorf("compress: error walking filesystem tree: %w", err)
}
if err := createArchive(filesEligibleForBackup, backupSources, tarFile); err != nil {
return fmt.Errorf("takeBackup: error compressing backup folder: %w", err) return fmt.Errorf("takeBackup: error compressing backup folder: %w", err)
} }
@@ -491,6 +568,52 @@ func (s *script) copyBackup() error {
s.logger.Infof("Uploaded a copy of backup `%s` to WebDAV-URL '%s' at path '%s'.", s.file, s.c.WebdavUrl, s.c.WebdavPath) s.logger.Infof("Uploaded a copy of backup `%s` to WebDAV-URL '%s' at path '%s'.", s.file, s.c.WebdavUrl, s.c.WebdavPath)
} }
if s.sshClient != nil {
source, err := os.Open(s.file)
if err != nil {
return fmt.Errorf("copyBackup: error reading the file to be uploaded: %w", err)
}
defer source.Close()
destination, err := s.sftpClient.Create(filepath.Join(s.c.SSHRemotePath, name))
if err != nil {
return fmt.Errorf("copyBackup: error creating file on SSH storage: %w", err)
}
defer destination.Close()
chunk := make([]byte, 1000000)
for {
num, err := source.Read(chunk)
if err == io.EOF {
tot, err := destination.Write(chunk[:num])
if err != nil {
return fmt.Errorf("copyBackup: error uploading the file to SSH storage: %w", err)
}
if tot != len(chunk[:num]) {
return fmt.Errorf("sshClient: failed to write stream")
}
break
}
if err != nil {
return fmt.Errorf("copyBackup: error uploading the file to SSH storage: %w", err)
}
tot, err := destination.Write(chunk[:num])
if err != nil {
return fmt.Errorf("copyBackup: error uploading the file to SSH storage: %w", err)
}
if tot != len(chunk[:num]) {
return fmt.Errorf("sshClient: failed to write stream")
}
}
s.logger.Infof("Uploaded a copy of backup `%s` to SSH storage '%s' at path '%s'.", s.file, s.c.SSHHostName, s.c.SSHRemotePath)
}
if _, err := os.Stat(s.c.BackupArchive); !os.IsNotExist(err) { if _, err := os.Stat(s.c.BackupArchive); !os.IsNotExist(err) {
if err := copyFile(s.file, path.Join(s.c.BackupArchive, name)); err != nil { if err := copyFile(s.file, path.Join(s.c.BackupArchive, name)); err != nil {
return fmt.Errorf("copyBackup: error copying file to local archive: %w", err) return fmt.Errorf("copyBackup: error copying file to local archive: %w", err)
@@ -624,6 +747,37 @@ func (s *script) pruneBackups() error {
 		})
 	}
 
+	if s.sshClient != nil {
+		candidates, err := s.sftpClient.ReadDir(s.c.SSHRemotePath)
+		if err != nil {
+			return fmt.Errorf("pruneBackups: error reading directory from SSH storage: %w", err)
+		}
+
+		var matches []string
+		for _, candidate := range candidates {
+			if !strings.HasPrefix(candidate.Name(), s.c.BackupPruningPrefix) {
+				continue
+			}
+			if candidate.ModTime().Before(deadline) {
+				matches = append(matches, candidate.Name())
+			}
+		}
+
+		s.stats.Storages.SSH = StorageStats{
+			Total: uint(len(candidates)),
+			Pruned: uint(len(matches)),
+		}
+
+		doPrune(len(matches), len(candidates), "SSH backup(s)", func() error {
+			for _, match := range matches {
+				if err := s.sftpClient.Remove(filepath.Join(s.c.SSHRemotePath, match)); err != nil {
+					return fmt.Errorf("pruneBackups: error removing file from SSH storage: %w", err)
+				}
+			}
+			return nil
+		})
+	}
+
 	if _, err := os.Stat(s.c.BackupArchive); !os.IsNotExist(err) {
 		globPattern := path.Join(
 			s.c.BackupArchive,


@@ -30,10 +30,11 @@ type StorageStats struct {
 	PruneErrors uint
 }
 
-// StoragesStats stats about each possible archival location (Local, WebDAV, S3)
+// StoragesStats stats about each possible archival location (Local, WebDAV, SSH, S3)
 type StoragesStats struct {
 	Local StorageStats
 	WebDAV StorageStats
+	SSH StorageStats
 	S3 StorageStats
 }


@@ -25,7 +25,7 @@ Here is a list of all data passed to the template:
 * `FullPath`: full path of the backup file (e.g. `/archive/backup-2022-02-11T01-00-00.tar.gz`)
 * `Size`: size in bytes of the backup file
 * `Storages`: object that holds stats about each storage
-  * `Local`, `S3` or `WebDAV`:
+  * `Local`, `S3`, `WebDAV` or `SSH`:
     * `Total`: total number of backup files
     * `Pruned`: number of backup files that were deleted due to pruning rule
     * `PruneErrors`: number of backup files that were unable to be pruned

go.mod

@@ -11,15 +11,16 @@ require (
 	github.com/leekchan/timeutil v0.0.0-20150802142658-28917288c48d
 	github.com/minio/minio-go/v7 v7.0.16
 	github.com/otiai10/copy v1.7.0
+	github.com/pkg/sftp v1.13.5
 	github.com/sirupsen/logrus v1.8.1
 	github.com/studio-b12/gowebdav v0.0.0-20220128162035-c7b1ff8a5e62
-	golang.org/x/crypto v0.0.0-20210817164053-32db794688a5
-	golang.org/x/sync v0.0.0-20210220032951-036812b2e83c
+	golang.org/x/crypto v0.0.0-20211215153901-e495a2d5b3d3
+	golang.org/x/sync v0.0.0-20220601150217-0de741cfad7f
 )
 
 require (
-	github.com/Microsoft/go-winio v0.4.17 // indirect
-	github.com/containerd/containerd v1.5.5 // indirect
+	github.com/Microsoft/go-winio v0.5.2 // indirect
+	github.com/containerd/containerd v1.6.6 // indirect
 	github.com/docker/distribution v2.7.1+incompatible // indirect
 	github.com/docker/go-connections v0.4.0 // indirect
 	github.com/docker/go-units v0.4.0 // indirect
@@ -27,33 +28,39 @@ require (
 	github.com/fatih/color v1.10.0 // indirect
 	github.com/fsnotify/fsnotify v1.4.9 // indirect
 	github.com/gogo/protobuf v1.3.2 // indirect
-	github.com/golang/protobuf v1.5.0 // indirect
+	github.com/golang/protobuf v1.5.2 // indirect
 	github.com/google/uuid v1.3.0 // indirect
+	github.com/gorilla/mux v1.7.3 // indirect
 	github.com/json-iterator/go v1.1.12 // indirect
-	github.com/klauspost/compress v1.13.6 // indirect
+	github.com/klauspost/compress v1.15.6 // indirect
 	github.com/klauspost/cpuid/v2 v2.0.9 // indirect
+	github.com/kr/fs v0.1.0 // indirect
+	github.com/kr/text v0.2.0 // indirect
 	github.com/mattn/go-colorable v0.1.8 // indirect
 	github.com/mattn/go-isatty v0.0.12 // indirect
 	github.com/minio/md5-simd v1.1.2 // indirect
 	github.com/minio/sha256-simd v1.0.0 // indirect
 	github.com/mitchellh/go-homedir v1.1.0 // indirect
+	github.com/moby/term v0.0.0-20200312100748-672ec06f55cd // indirect
 	github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
 	github.com/modern-go/reflect2 v1.0.2 // indirect
 	github.com/morikuni/aec v1.0.0 // indirect
+	github.com/niemeyer/pretty v0.0.0-20200227124842-a10e7caefd8e // indirect
 	github.com/nxadm/tail v1.4.6 // indirect
 	github.com/onsi/ginkgo v1.14.2 // indirect
 	github.com/onsi/gomega v1.10.3 // indirect
 	github.com/opencontainers/go-digest v1.0.0 // indirect
-	github.com/opencontainers/image-spec v1.0.1 // indirect
+	github.com/opencontainers/image-spec v1.0.3-0.20211202183452-c5a74bcca799 // indirect
 	github.com/pkg/errors v0.9.1 // indirect
 	github.com/rs/xid v1.3.0 // indirect
-	golang.org/x/net v0.0.0-20210226172049-e18ecbb05110 // indirect
-	golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1 // indirect
-	golang.org/x/text v0.3.6 // indirect
+	golang.org/x/net v0.0.0-20220607020251-c690dde0001d // indirect
+	golang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a // indirect
+	golang.org/x/text v0.3.7 // indirect
 	golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1 // indirect
-	google.golang.org/genproto v0.0.0-20201110150050-8816d57aaa9a // indirect
-	google.golang.org/grpc v1.33.2 // indirect
-	google.golang.org/protobuf v1.26.0 // indirect
+	google.golang.org/genproto v0.0.0-20220602131408-e326c6e8e9c8 // indirect
+	google.golang.org/grpc v1.47.0 // indirect
+	google.golang.org/protobuf v1.28.0 // indirect
+	gopkg.in/check.v1 v1.0.0-20200227125254-8fa46927fb4f // indirect
 	gopkg.in/ini.v1 v1.65.0 // indirect
 	gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7 // indirect
 	gopkg.in/yaml.v2 v2.4.0 // indirect

go.sum

File diff suppressed because it is too large

test/compose/docker-compose.yml

@@ -21,12 +21,24 @@ services:
     volumes:
       - webdav_backup_data:/var/lib/dav
 
+  ssh:
+    image: linuxserver/openssh-server:version-8.6_p1-r3
+    environment:
+      - PUID=1000
+      - PGID=1000
+      - USER_NAME=test
+    volumes:
+      - ./id_rsa.pub:/config/.ssh/authorized_keys
+      - ssh_backup_data:/tmp
+      - ssh_config:/config
+
   backup:
     image: offen/docker-volume-backup:${TEST_VERSION:-canary}
     hostname: hostnametoken
     depends_on:
       - minio
       - webdav
+      - ssh
     restart: always
     environment:
       AWS_ACCESS_KEY_ID: test
@@ -47,8 +59,14 @@ services:
       WEBDAV_PATH: /my/new/path/
       WEBDAV_USERNAME: test
       WEBDAV_PASSWORD: test
+      SSH_HOST_NAME: ssh
+      SSH_PORT: 2222
+      SSH_USER: test
+      SSH_REMOTE_PATH: /tmp
+      SSH_IDENTITY_PASSPHRASE: test1234
     volumes:
       - ./local:/archive
+      - ./id_rsa:/root/.ssh/id_rsa
       - app_data:/backup/app_data:ro
       - /var/run/docker.sock:/var/run/docker.sock
@@ -62,4 +80,6 @@ services:
 volumes:
   minio_backup_data:
   webdav_backup_data:
+  ssh_backup_data:
+  ssh_config:
   app_data:

test/compose/run.sh

@@ -2,9 +2,10 @@
 set -e
 
-cd $(dirname $0)
+cd "$(dirname "$0")"
 
 mkdir -p local
+ssh-keygen -t rsa -m pem -b 4096 -N "test1234" -f id_rsa -C "docker-volume-backup@local"
 
 docker-compose up -d
 sleep 5
@@ -15,7 +16,7 @@ docker-compose exec offen ln -s /var/opt/offen/offen.db /var/opt/offen/db.link
 docker-compose exec backup backup
 sleep 5
 
-if [ "$(docker-compose ps -q | wc -l)" != "4" ]; then
+if [ "$(docker-compose ps -q | wc -l)" != "5" ]; then
   echo "[TEST:FAIL] Expected all containers to be running post backup, instead seen:"
   docker-compose ps
   exit 1
@@ -25,10 +26,12 @@ echo "[TEST:PASS] All containers running post backup."
 docker run --rm -it \
   -v compose_minio_backup_data:/minio_data \
-  -v compose_webdav_backup_data:/webdav_data alpine \
+  -v compose_webdav_backup_data:/webdav_data \
+  -v compose_ssh_backup_data:/ssh_data alpine \
   ash -c 'apk add gnupg && \
     echo 1234secret | gpg -d --pinentry-mode loopback --passphrase-fd 0 --yes /minio_data/backup/test-hostnametoken.tar.gz.gpg > /tmp/test-hostnametoken.tar.gz && tar -xvf /tmp/test-hostnametoken.tar.gz -C /tmp && test -f /tmp/backup/app_data/offen.db && \
-    echo 1234secret | gpg -d --pinentry-mode loopback --passphrase-fd 0 --yes /webdav_data/data/my/new/path/test-hostnametoken.tar.gz.gpg > /tmp/test-hostnametoken.tar.gz && tar -xvf /tmp/test-hostnametoken.tar.gz -C /tmp && test -f /tmp/backup/app_data/offen.db'
+    echo 1234secret | gpg -d --pinentry-mode loopback --passphrase-fd 0 --yes /webdav_data/data/my/new/path/test-hostnametoken.tar.gz.gpg > /tmp/test-hostnametoken.tar.gz && tar -xvf /tmp/test-hostnametoken.tar.gz -C /tmp && test -f /tmp/backup/app_data/offen.db && \
+    echo 1234secret | gpg -d --pinentry-mode loopback --passphrase-fd 0 --yes /ssh_data/test-hostnametoken.tar.gz.gpg > /tmp/test-hostnametoken.tar.gz && tar -xvf /tmp/test-hostnametoken.tar.gz -C /tmp && test -f /tmp/backup/app_data/offen.db'
 
 echo "[TEST:PASS] Found relevant files in decrypted and untared remote backups."
@@ -52,9 +55,11 @@ docker-compose exec backup backup
 docker run --rm -it \
   -v compose_minio_backup_data:/minio_data \
-  -v compose_webdav_backup_data:/webdav_data alpine \
+  -v compose_webdav_backup_data:/webdav_data \
+  -v compose_ssh_backup_data:/ssh_data alpine \
   ash -c '[ $(find /minio_data/backup/ -type f | wc -l) = "1" ] && \
-    [ $(find /webdav_data/data/my/new/path/ -type f | wc -l) = "1" ]'
+    [ $(find /webdav_data/data/my/new/path/ -type f | wc -l) = "1" ] && \
+    [ $(find /ssh_data/ -type f | wc -l) = "1" ]'
 
 echo "[TEST:PASS] Remote backups have not been deleted."
@@ -66,3 +71,4 @@ fi
echo "[TEST:PASS] Local backups have not been deleted." echo "[TEST:PASS] Local backups have not been deleted."
docker-compose down --volumes docker-compose down --volumes
rm -f id_rsa id_rsa.pub

test/ignore/.gitignore

@@ -0,0 +1 @@
+local

test/ignore/docker-compose.yml

@@ -0,0 +1,15 @@
+version: '3.8'
+
+services:
+  backup:
+    image: offen/docker-volume-backup:${TEST_VERSION:-canary}
+    deploy:
+      restart_policy:
+        condition: on-failure
+    environment:
+      BACKUP_FILENAME: test.tar.gz
+      BACKUP_CRON_EXPRESSION: 0 0 5 31 2 ?
+      BACKUP_EXCLUDE_REGEXP: '\.(me|you)$$'
+    volumes:
+      - ./local:/archive
+      - ./sources:/backup/data:ro

test/ignore/run.sh

@@ -0,0 +1,27 @@
+#!/bin/sh
+
+set -e
+
+cd $(dirname $0)
+mkdir -p local
+
+docker-compose up -d
+sleep 5
+
+docker-compose exec backup backup
+docker-compose down --volumes
+
+out=$(mktemp -d)
+sudo tar --same-owner -xvf ./local/test.tar.gz -C "$out"
+
+if [ ! -f "$out/backup/data/me.txt" ]; then
+  echo "[TEST:FAIL] Expected file was not found."
+  exit 1
+fi
+echo "[TEST:PASS] Expected file was found."
+
+if [ -f "$out/backup/data/skip.me" ]; then
+  echo "[TEST:FAIL] Ignored file was found."
+  exit 1
+fi
+echo "[TEST:PASS] Ignored file was not found."
