mirror of https://github.com/offen/docker-volume-backup.git
synced 2026-01-12 02:22:36 +01:00

Compare commits: test-post-... and v2.24.0-pr (9 commits)

| Author | SHA1 | Date |
|---|---|---|
| | 7ec6f154ec | |
| | 49a10094cc | |
| | 676cfbe25f | |
| | 1fa0548756 | |
| | c0eff2e14f | |
| | fdce7ee454 | |
| | a253fdfbec | |
| | 7aa2166aee | |
| | e702b2b682 | |

14  .github/ISSUE_TEMPLATE/bug_report.md (vendored)

@@ -8,9 +8,7 @@ assignees: ''
---

**Describe the bug**
<!--
A clear and concise description of what the bug is.
-->

**To Reproduce**
Steps to reproduce the behavior:

@@ -19,16 +17,12 @@ Steps to reproduce the behavior:
3. ...

**Expected behavior**
<!--
A clear and concise description of what you expected to happen.
-->

**Version (please complete the following information):**
- Image Version: <!-- e.g. v2.21.0 -->
- Docker Version: <!-- e.g. 20.10.17 -->
- Docker Compose Version (if applicable): <!-- e.g. 1.29.2 -->
**Desktop (please complete the following information):**
- Image Version: [e.g. v2.21.0]
- Docker Version: [e.g. 20.10.17]
- Docker Compose Version (if applicable): [e.g. 1.29.2]

**Additional context**
<!--
Add any other context about the problem here.
-->

8  .github/ISSUE_TEMPLATE/feature_request.md (vendored)

@@ -8,21 +8,13 @@ assignees: ''
---

**Is your feature request related to a problem? Please describe.**
<!--
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
-->

**Describe the solution you'd like**
<!--
A clear and concise description of what you want to happen.
-->

**Describe alternatives you've considered**
<!--
A clear and concise description of any alternative solutions or features you've considered.
-->

**Additional context**
<!--
Add any other context or screenshots about the feature request here.
-->

8  .github/ISSUE_TEMPLATE/support_request.md (vendored)

@@ -8,21 +8,13 @@ assignees: ''
---

**What are you trying to do?**
<!--
A clear and concise description of what you are trying to do, but cannot get working.
-->

**What is your current configuration?**
<!--
Add the full configuration you are using. Please redact out any real-world credentials.
-->

**Log output**
<!--
Provide the full log output of your setup.
-->

**Additional context**
<!--
Add any other context or screenshots about the support request here.
-->

Dockerfile

@@ -1,7 +1,7 @@
# Copyright 2021 - Offen Authors <hioffen@posteo.de>
# SPDX-License-Identifier: MPL-2.0

FROM golang:1.20-alpine as builder
FROM golang:1.19-alpine as builder

WORKDIR /app
COPY . .

@@ -9,7 +9,7 @@ RUN go mod download
WORKDIR /app/cmd/backup
RUN go build -o backup .

FROM alpine:3.17
FROM alpine:3.16

WORKDIR /root

46  README.md

@@ -4,7 +4,7 @@

# docker-volume-backup

Backup Docker volumes locally or to any S3, WebDAV, Azure Blob Storage or SSH compatible storage.
Backup Docker volumes locally or to any S3 compatible storage.

The [offen/docker-volume-backup](https://hub.docker.com/r/offen/docker-volume-backup) Docker image can be used as a lightweight (below 15MB) sidecar container to an existing Docker setup.
It handles __recurring or one-off backups of Docker volumes__ to a __local directory__, __any S3, WebDAV, Azure Blob Storage or SSH compatible storage (or any combination) and rotates away old backups__ if configured. It also supports __encrypting your backups using GPG__ and __sending notifications for failed backup runs__.

@@ -33,7 +33,6 @@ It handles __recurring or one-off backups of Docker volumes__ to a __local direc
- [Run multiple backup schedules in the same container](#run-multiple-backup-schedules-in-the-same-container)
- [Define different retention schedules](#define-different-retention-schedules)
- [Use special characters in notification URLs](#use-special-characters-in-notification-urls)
- [Handle file uploads using third party tools](#handle-file-uploads-using-third-party-tools)
- [Recipes](#recipes)
  - [Backing up to AWS S3](#backing-up-to-aws-s3)
  - [Backing up to Filebase](#backing-up-to-filebase)

@@ -305,8 +304,7 @@ You can populate below template according to your requirements and use it as you

# SSH_IDENTITY_PASSPHRASE="pass"

# The credential's account name when using Azure Blob Storage. This has to be
# set when using Azure Blob Storage.
# The credential's account name when using Azure Blob Storage.

# AZURE_STORAGE_ACCOUNT_NAME="account-name"

@@ -320,7 +318,7 @@ You can populate below template according to your requirements and use it as you
# AZURE_STORAGE_CONTAINER_NAME="container-name"

# The service endpoint when using Azure Blob Storage. This is a template that
# can be passed the account name as shown in the default value below.
# will be passed the account name as shown in the default value below.

# AZURE_STORAGE_ENDPOINT="https://{{ .AccountName }}.blob.core.windows.net/"

@@ -915,44 +913,6 @@ where service is any of the [supported services][shoutrrr-docs], e.g. for SMTP:
docker run --rm -ti containrrr/shoutrrr generate smtp
```

### Handle file uploads using third party tools

If you want to use a non-supported storage backend, or want to use a third party (e.g. rsync, rclone) tool for file uploads, you can build a Docker image containing the required binaries off this one, and call through to these in lifecycle hooks.

For example, if you wanted to use `rsync`, define your Docker image like this:

```Dockerfile
FROM offen/docker-volume-backup:v2

RUN apk add rsync
```

Using this image, you can now omit configuring any of the supported storage backends, and instead define your own mechanism in a `docker-volume-backup.copy-post` label:

```yml
version: '3'

services:
  backup:
    image: your-custom-image
    restart: always
    environment:
      BACKUP_FILENAME: "daily-backup-%Y-%m-%dT%H-%M-%S.tar.gz"
      BACKUP_CRON_EXPRESSION: "0 2 * * *"
    labels:
      - docker-volume-backup.copy-post=/bin/sh -c 'rsync $$COMMAND_RUNTIME_ARCHIVE_FILEPATH /destination'
    volumes:
      - app_data:/backup/app_data:ro
      - /var/run/docker.sock:/var/run/docker.sock

  # other services defined here ...

volumes:
  app_data:
```

Commands will be invoked with the filepath of the tar archive passed as `COMMAND_RUNTIME_BACKUP_FILEPATH`.

## Recipes

This section lists configuration for some real-world use cases that you can mix and match according to your needs.

@@ -23,14 +23,10 @@ import (

func (s *script) exec(containerRef string, command string) ([]byte, []byte, error) {
	args, _ := argv.Argv(command, nil, nil)
	commandEnv := []string{
		fmt.Sprintf("COMMAND_RUNTIME_ARCHIVE_FILEPATH=%s", s.file),
	}
	execID, err := s.cli.ContainerExecCreate(context.Background(), containerRef, types.ExecConfig{
		Cmd:          args[0],
		AttachStdin:  true,
		AttachStderr: true,
		Env:          commandEnv,
	})
	if err != nil {
		return nil, nil, fmt.Errorf("exec: error creating container exec: %w", err)

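The fragment above comes from the helper that runs label-defined commands inside other containers. For orientation, here is a minimal, self-contained sketch of the same Docker SDK pattern: create an exec instance with an injected environment variable, then attach and read its output. The container name, command, and archive path are made-up placeholders, and the SDK calls are assumed to match the client version this repository pins.

```go
package main

import (
	"context"
	"io"
	"os"

	"github.com/docker/docker/api/types"
	"github.com/docker/docker/client"
)

func main() {
	ctx := context.Background()

	// Connect to the local Docker daemon (requires access to /var/run/docker.sock).
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		panic(err)
	}

	// Create an exec instance in a target container and pass the archive
	// location through the environment, mirroring the hunk above.
	// "some-container" and the paths are placeholders, not project values.
	execID, err := cli.ContainerExecCreate(ctx, "some-container", types.ExecConfig{
		Cmd:          []string{"/bin/sh", "-c", "echo $COMMAND_RUNTIME_ARCHIVE_FILEPATH"},
		Env:          []string{"COMMAND_RUNTIME_ARCHIVE_FILEPATH=/tmp/backup.tar.gz"},
		AttachStdout: true,
		AttachStderr: true,
	})
	if err != nil {
		panic(err)
	}

	// Attaching also starts the exec; copy the raw (multiplexed) output.
	resp, err := cli.ContainerExecAttach(ctx, execID.ID, types.ExecStartCheck{})
	if err != nil {
		panic(err)
	}
	defer resp.Close()

	if _, err := io.Copy(os.Stdout, resp.Reader); err != nil {
		panic(err)
	}
}
```
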
@@ -4,9 +4,10 @@
package main

import (
	"errors"
	"fmt"
	"sort"

	"github.com/offen/docker-volume-backup/internal/utilities"
)

// hook contains a queued action that can be trigger them when the script

@@ -51,7 +52,7 @@ func (s *script) runHooks(err error) error {
		}
	}
	if len(actionErrors) != 0 {
		return errors.Join(actionErrors...)
		return utilities.Join(actionErrors...)
	}
	return nil
}

@@ -6,13 +6,13 @@ package main
import (
	"bytes"
	_ "embed"
	"errors"
	"fmt"
	"os"
	"text/template"
	"time"

	sTypes "github.com/containrrr/shoutrrr/pkg/types"
	"github.com/offen/docker-volume-backup/internal/utilities"
)

//go:embed notifications.tmpl

@@ -69,7 +69,7 @@ func (s *script) sendNotification(title, body string) error {
		}
	}
	if len(errs) != 0 {
		return fmt.Errorf("sendNotification: error sending message: %w", errors.Join(errs...))
		return fmt.Errorf("sendNotification: error sending message: %w", utilities.Join(errs...))
	}
	return nil
}

@@ -5,7 +5,6 @@ package main

import (
	"context"
	"errors"
	"fmt"
	"io"
	"io/fs"

@@ -21,6 +20,7 @@ import (
	"github.com/offen/docker-volume-backup/internal/storage/s3"
	"github.com/offen/docker-volume-backup/internal/storage/ssh"
	"github.com/offen/docker-volume-backup/internal/storage/webdav"
	"github.com/offen/docker-volume-backup/internal/utilities"

	"github.com/containrrr/shoutrrr"
	"github.com/containrrr/shoutrrr/pkg/router"

@@ -329,7 +329,7 @@ func (s *script) stopContainers() (func() error, error) {
		stopError = fmt.Errorf(
			"stopContainers: %d error(s) stopping containers: %w",
			len(stopErrors),
			errors.Join(stopErrors...),
			utilities.Join(stopErrors...),
		)
	}

@@ -380,7 +380,7 @@ func (s *script) stopContainers() (func() error, error) {
		return fmt.Errorf(
			"stopContainers: %d error(s) restarting containers and services: %w",
			len(restartErrors),
			errors.Join(restartErrors...),
			utilities.Join(restartErrors...),
		)
	}
	s.logger.Infof(

2  go.mod

@@ -3,7 +3,6 @@ module github.com/offen/docker-volume-backup
go 1.19

require (
	github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.2.0
	github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v0.6.1
	github.com/containrrr/shoutrrr v0.5.2
	github.com/cosiner/argv v0.1.0

@@ -22,6 +21,7 @@ require (

require (
	github.com/Azure/azure-sdk-for-go/sdk/azcore v1.1.4 // indirect
	github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.2.0 // indirect
	github.com/Azure/azure-sdk-for-go/sdk/internal v1.0.1 // indirect
	github.com/AzureAD/microsoft-authentication-library-for-go v0.7.0 // indirect
	github.com/Microsoft/go-winio v0.5.2 // indirect

3  go.sum

@@ -2,6 +2,7 @@ cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMT
cloud.google.com/go v0.34.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.1.4 h1:pqrAR74b6EoR4kcxF7L7Wg2B8Jgil9UUZtMvxhEFqWo=
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.1.4/go.mod h1:uGG2W01BaETf0Ozp+QxxKJdMBNRWPdstHG0Fmdwn1/U=
github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.1.0 h1:QkAcEIAKbNL4KoFr4SathZPhDhF4mVwpBMFlYjyAqy8=
github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.2.0 h1:t/W5MYAuQy81cvM8VUNfRLzhtKpXhVUAN7Cd7KVbTyc=
github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.2.0/go.mod h1:NBanQUfSWiWn3QEpWDTCU0IjBECKOYvl2R8xdRtMtiM=
github.com/Azure/azure-sdk-for-go/sdk/internal v1.0.1 h1:XUNQ4mw+zJmaA2KXzP9JlQiecy1SI+Eog7xVkPiqIbg=

@@ -10,6 +11,7 @@ github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v0.6.1 h1:YvQv9Mz6T8oR5ypQO
github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v0.6.1/go.mod h1:c6WvOhtmjNUWbLfOG1qxM/q0SPvQNSVJvolm+C52dIU=
github.com/Azure/go-ansiterm v0.0.0-20170929234023-d6e3b3328b78 h1:w+iIsaOQNcT7OZ575w+acHgRric5iCyQh+xv+KJ4HB8=
github.com/Azure/go-ansiterm v0.0.0-20170929234023-d6e3b3328b78/go.mod h1:LmzpDX56iTiv29bbRTIsUNlaFfuhWRQBWjQdVyAevI8=
github.com/AzureAD/microsoft-authentication-library-for-go v0.5.1 h1:BWe8a+f/t+7KY7zH2mqygeUD0t8hNFXe08p1Pb3/jKE=
github.com/AzureAD/microsoft-authentication-library-for-go v0.7.0 h1:VgSJlZH5u0k2qxSpqyghcFQKmvYckj46uymKK5XzkBM=
github.com/AzureAD/microsoft-authentication-library-for-go v0.7.0/go.mod h1:BDJ5qMFKx9DugEg3+uQSDCdbYPr5s9vBTrL9P8TpqOU=
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=

@@ -105,6 +107,7 @@ github.com/gogo/protobuf v1.1.1/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7a
github.com/gogo/protobuf v1.2.1/go.mod h1:hp+jE20tsWTFYpLwKvXlhS1hjn+gTNwPg2I6zVXpSg4=
github.com/gogo/protobuf v1.3.2 h1:Ov1cvc58UF3b5XjBnZv7+opcTcQFZebYjWzi34vdm4Q=
github.com/gogo/protobuf v1.3.2/go.mod h1:P1XiOD3dCwIKUDQYPy72D8LYyHL2YPYrpS2s69NZV8Q=
github.com/golang-jwt/jwt v3.2.1+incompatible h1:73Z+4BJcrTC+KczS6WvTPvRGOp1WmfEP4Q1lOd9Z/+c=
github.com/golang-jwt/jwt/v4 v4.4.2 h1:rcc4lwaZgFMCZ5jxF9ABolDcIHdBytAFgqFPbSJQAYs=
github.com/golang-jwt/jwt/v4 v4.4.2/go.mod h1:m21LjoU+eqJr34lmDMbreY2eSTRJ1cv77w39/MY0Ch0=
github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q=

@@ -6,11 +6,9 @@ package azure
import (
	"bytes"
	"context"
	"errors"
	"fmt"
	"os"
	"path/filepath"
	"strings"
	"sync"
	"text/template"
	"time"

@@ -19,6 +17,7 @@ import (
	"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob"
	"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/container"
	"github.com/offen/docker-volume-backup/internal/storage"
	"github.com/offen/docker-volume-backup/internal/utilities"
)

type azureBlobStorage struct {

@@ -46,7 +45,6 @@ func NewStorageBackend(opts Config, logFunc storage.Log) (storage.Backend, error
	if err := endpointTemplate.Execute(&ep, opts); err != nil {
		return nil, fmt.Errorf("NewStorageBackend: error executing endpoint template: %w", err)
	}
	normalizedEndpoint := fmt.Sprintf("%s/", strings.TrimSuffix(ep.String(), "/"))

	var client *azblob.Client
	if opts.PrimaryAccountKey != "" {

@@ -55,7 +53,7 @@ func NewStorageBackend(opts Config, logFunc storage.Log) (storage.Backend, error
			return nil, fmt.Errorf("NewStorageBackend: error creating shared key Azure credential: %w", err)
		}

		client, err = azblob.NewClientWithSharedKeyCredential(normalizedEndpoint, cred, nil)
		client, err = azblob.NewClientWithSharedKeyCredential(ep.String(), cred, nil)
		if err != nil {
			return nil, fmt.Errorf("NewStorageBackend: error creating Azure client: %w", err)
		}

@@ -64,7 +62,7 @@ func NewStorageBackend(opts Config, logFunc storage.Log) (storage.Backend, error
		if err != nil {
			return nil, fmt.Errorf("NewStorageBackend: error creating managed identity credential: %w", err)
		}
		client, err = azblob.NewClient(normalizedEndpoint, cred, nil)
		client, err = azblob.NewClient(ep.String(), cred, nil)
		if err != nil {
			return nil, fmt.Errorf("NewStorageBackend: error creating Azure client: %w", err)
		}

@@ -135,21 +133,21 @@ func (b *azureBlobStorage) Prune(deadline time.Time, pruningPrefix string) (*sto
	if err := b.DoPrune(b.Name(), len(matches), int(totalCount), "Azure Blob Storage backup(s)", func() error {
		wg := sync.WaitGroup{}
		wg.Add(len(matches))
		var errs []error
		var errors []error

		for _, match := range matches {
			name := match
			go func() {
				_, err := b.client.DeleteBlob(context.Background(), b.containerName, name, nil)
				if err != nil {
					errs = append(errs, err)
					errors = append(errors, err)
				}
				wg.Done()
			}()
		}
		wg.Wait()
		if len(errs) != 0 {
			return errors.Join(errs...)
		if len(errors) != 0 {
			return utilities.Join(errors...)
		}
		return nil
	}); err != nil {

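In the Prune fragment above, several goroutines append to a shared error slice while a WaitGroup tracks completion; in Go, concurrent appends to one slice need to be serialized to be safe. Below is a generic sketch of that fan-out-and-collect pattern with a mutex guarding the slice. This is not the project's code: deleteBlob is a hypothetical stand-in for the real per-blob deletion call.

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

// deleteBlob is a hypothetical stand-in for a per-item deletion call.
func deleteBlob(name string) error {
	if name == "bad" {
		return errors.New("could not delete " + name)
	}
	return nil
}

func main() {
	matches := []string{"backup-1.tar.gz", "bad", "backup-2.tar.gz"}

	var (
		wg   sync.WaitGroup
		mu   sync.Mutex
		errs []error
	)

	wg.Add(len(matches))
	for _, match := range matches {
		name := match // capture the loop variable for the goroutine
		go func() {
			defer wg.Done()
			if err := deleteBlob(name); err != nil {
				mu.Lock() // serialize appends to the shared slice
				errs = append(errs, err)
				mu.Unlock()
			}
		}()
	}
	wg.Wait()

	if len(errs) != 0 {
		fmt.Println("deletion errors:", errs)
		return
	}
	fmt.Println("all deletions succeeded")
}
```
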
@@ -4,7 +4,6 @@
package local

import (
	"errors"
	"fmt"
	"io"
	"os"

@@ -13,6 +12,7 @@ import (
	"time"

	"github.com/offen/docker-volume-backup/internal/storage"
	"github.com/offen/docker-volume-backup/internal/utilities"
)

type localStorage struct {

@@ -127,7 +127,7 @@ func (b *localStorage) Prune(deadline time.Time, pruningPrefix string) (*storage
		return fmt.Errorf(
			"(*localStorage).Prune: %d error(s) deleting local files, starting with: %w",
			len(removeErrors),
			errors.Join(removeErrors...),
			utilities.Join(removeErrors...),
		)
	}
	return nil

@@ -15,6 +15,7 @@ import (
	"github.com/minio/minio-go/v7"
	"github.com/minio/minio-go/v7/pkg/credentials"
	"github.com/offen/docker-volume-backup/internal/storage"
	"github.com/offen/docker-volume-backup/internal/utilities"
)

type s3Storage struct {

@@ -158,7 +159,7 @@ func (b *s3Storage) Prune(deadline time.Time, pruningPrefix string) (*storage.Pr
			}
		}
		if len(removeErrors) != 0 {
			return errors.Join(removeErrors...)
			return utilities.Join(removeErrors...)
		}
		return nil
	}); err != nil {

@@ -67,17 +67,15 @@ func (b *webDavStorage) Name() string {

// Copy copies the given file to the WebDav storage backend.
func (b *webDavStorage) Copy(file string) error {
	bytes, err := os.ReadFile(file)
	_, name := path.Split(file)
	if err != nil {
		return fmt.Errorf("(*webDavStorage).Copy: Error reading the file to be uploaded: %w", err)
	}
	if err := b.client.MkdirAll(b.DestinationPath, 0644); err != nil {
		return fmt.Errorf("(*webDavStorage).Copy: Error creating directory '%s' on WebDAV server: %w", b.DestinationPath, err)
	}

	r, err := os.Open(file)
	if err != nil {
		return fmt.Errorf("(*webDavStorage).Copy: Error opening the file to be uploaded: %w", err)
	}

	if err := b.client.WriteStream(filepath.Join(b.DestinationPath, name), r, 0644); err != nil {
	if err := b.client.Write(filepath.Join(b.DestinationPath, name), bytes, 0644); err != nil {
		return fmt.Errorf("(*webDavStorage).Copy: Error uploading the file to WebDAV server: %w", err)
	}
	b.Log(storage.LogLevelInfo, b.Name(), "Uploaded a copy of backup '%s' to WebDAV URL '%s' at path '%s'.", file, b.url, b.DestinationPath)

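The two variants shown in this hunk differ in how the archive reaches the WebDAV client: one reads the whole file into memory with os.ReadFile and passes the byte slice to Write, the other opens the file and streams it through an io.Reader to WriteStream. A standard-library-only sketch of the same trade-off follows; the upload target is a stand-in io.Writer rather than the WebDAV client.

```go
package main

import (
	"fmt"
	"io"
	"os"
)

// uploadBuffered loads the entire file into memory before writing it out.
func uploadBuffered(dst io.Writer, path string) error {
	data, err := os.ReadFile(path) // whole archive held in memory at once
	if err != nil {
		return fmt.Errorf("reading %s: %w", path, err)
	}
	_, err = dst.Write(data)
	return err
}

// uploadStreaming copies the file in chunks, keeping memory usage flat
// regardless of archive size.
func uploadStreaming(dst io.Writer, path string) error {
	f, err := os.Open(path)
	if err != nil {
		return fmt.Errorf("opening %s: %w", path, err)
	}
	defer f.Close()
	_, err = io.Copy(dst, f)
	return err
}

func main() {
	// Demo target: discard the bytes. A real backend would be the storage client.
	if err := uploadStreaming(io.Discard, os.Args[0]); err != nil {
		fmt.Println("stream upload failed:", err)
		return
	}
	if err := uploadBuffered(io.Discard, os.Args[0]); err != nil {
		fmt.Println("buffered upload failed:", err)
		return
	}
	fmt.Println("both variants completed")
}
```
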
24  internal/utilities/util.go (new file)

@@ -0,0 +1,24 @@
// Copyright 2022 - Offen Authors <hioffen@posteo.de>
// SPDX-License-Identifier: MPL-2.0

package utilities

import (
	"errors"
	"strings"
)

// Join takes a list of errors and joins them into a single error
func Join(errs ...error) error {
	if len(errs) == 1 {
		return errs[0]
	}
	var msgs []string
	for _, err := range errs {
		if err == nil {
			continue
		}
		msgs = append(msgs, err.Error())
	}
	return errors.New("[" + strings.Join(msgs, ", ") + "]")
}

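The new Join helper stands in for errors.Join, which only became part of the standard library in Go 1.20, while the go.mod hunk above targets go 1.19. A minimal usage sketch with hypothetical errors (not taken from the project):

```go
package main

import (
	"errors"
	"fmt"

	"github.com/offen/docker-volume-backup/internal/utilities"
)

func main() {
	// Two hypothetical failures collected during a backup run.
	errs := []error{
		errors.New("copying to S3 failed"),
		errors.New("copying to WebDAV failed"),
	}

	// Join flattens them into a single error whose message lists all parts:
	// "[copying to S3 failed, copying to WebDAV failed]"
	fmt.Println(utilities.Join(errs...))
}
```

Unlike errors.Join, the flattened result is a plain string error, so the individual errors can no longer be inspected with errors.Is or errors.As.
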
@@ -4,7 +4,7 @@ services:
  storage:
    image: mcr.microsoft.com/azure-storage/azurite
    volumes:
      - azurite_backup_data:/data
      - ./foo:/data
    command: azurite-blob --blobHost 0.0.0.0 --blobPort 10000 --location /data
    healthcheck:
      test: nc 127.0.0.1 10000 -z

@@ -24,10 +24,10 @@ openssl x509 -req -passin pass:test \

openssl x509 -in minio.crt -noout -text

docker compose up -d
docker-compose up -d
sleep 5

docker compose exec backup backup
docker-compose exec backup backup

sleep 5

@@ -40,4 +40,4 @@ docker run --rm -it \

pass "Found relevant files in untared remote backups."

docker compose down --volumes
docker-compose down --volumes

@@ -11,8 +11,7 @@ services:
      MARIADB_DATABASE: backup
    labels:
      # this is testing the deprecated label on purpose
      - docker-volume-backup.archive-pre=/bin/sh -c 'mysqldump -ptest --all-databases > /tmp/volume/dump.sql'
      - docker-volume-backup.archive-post=/bin/sh -c 'rm /tmp/volume/dump.sql'
      - docker-volume-backup.exec-pre=/bin/sh -c 'mysqldump -ptest --all-databases > /tmp/volume/dump.sql'
      - docker-volume-backup.copy-post=/bin/sh -c 'echo "post" > /tmp/volume/post.txt'
      - docker-volume-backup.exec-label=test
    volumes:

@@ -6,10 +6,10 @@ cd $(dirname $0)
. ../util.sh
current_test=$(basename $(pwd))

docker compose up -d
docker-compose up -d
sleep 30 # mariadb likes to take a bit before responding

docker compose exec backup backup
docker-compose exec backup backup
sudo cp -r $(docker volume inspect --format='{{ .Mountpoint }}' commands_archive) ./local

tar -xvf ./local/test.tar.gz

@@ -28,7 +28,7 @@ if [ -f ./backup/data/post.txt ]; then
fi
pass "Did not find unexpected file."

docker compose down --volumes
docker-compose down --volumes
sudo rm -rf ./local

@@ -8,12 +8,12 @@ current_test=$(basename $(pwd))

mkdir -p local

docker compose up -d
docker-compose up -d

# sleep until a backup is guaranteed to have happened on the 1 minute schedule
sleep 100

docker compose down --volumes
docker-compose down --volumes

if [ ! -f ./local/conf.tar.gz ]; then
  fail "Config from file was not used."

@@ -1,4 +0,0 @@
ARG version=canary
FROM offen/docker-volume-backup:$version

RUN apk add rsync

@@ -1,26 +0,0 @@
version: '3'

services:
  backup:
    image: offen/docker-volume-backup:${TEST_VERSION:-canary}
    restart: always
    labels:
      - docker-volume-backup.copy-post=/bin/sh -c 'mkdir -p /tmp/unpack && tar -xvf $$COMMAND_RUNTIME_ARCHIVE_FILEPATH -C /tmp/unpack && rsync -r /tmp/unpack/backup/app_data /local'
    environment:
      BACKUP_FILENAME: test.tar.gz
      BACKUP_CRON_EXPRESSION: 0 0 5 31 2 ?
      EXEC_FORWARD_OUTPUT: "true"
    volumes:
      - ./local:/local
      - app_data:/backup/app_data:ro
      - /var/run/docker.sock:/var/run/docker.sock

  offen:
    image: offen/offen:latest
    labels:
      - docker-volume-backup.stop-during-backup=true
    volumes:
      - app_data:/var/opt/offen

volumes:
  app_data:

@@ -1,28 +0,0 @@
#!/bin/sh

set -e

cd "$(dirname "$0")"
. ../util.sh
current_test=$(basename $(pwd))

mkdir -p local

export TEST_VERSION="${TEST_VERSION:-canary}-with-rsync"

docker build . -t offen/docker-volume-backup:$TEST_VERSION

docker compose up -d
sleep 5

docker compose exec backup backup

sleep 5

expect_running_containers "2"

if [ ! -f "./local/app_data/offen.db" ]; then
  fail "Could not find expected file in untared archive."
fi

docker compose down --volumes

@@ -8,10 +8,10 @@ current_test=$(basename $(pwd))

mkdir -p local

docker compose up -d
docker-compose up -d
sleep 5

docker compose exec backup backup
docker-compose exec backup backup

expect_running_containers "2"

@@ -30,4 +30,4 @@ if [ ! -L ./local/test-latest.tar.gz.gpg ]; then
  fail "Could not find local symlink to latest encrypted backup."
fi

docker compose down --volumes
docker-compose down --volumes

@@ -8,11 +8,11 @@ current_test=$(basename $(pwd))

mkdir -p local

docker compose up -d
docker-compose up -d
sleep 5
docker compose exec backup backup
docker-compose exec backup backup

docker compose down --volumes
docker-compose down --volumes

out=$(mktemp -d)
sudo tar --same-owner -xvf ./local/test.tar.gz -C "$out"

@@ -8,13 +8,13 @@ current_test=$(basename $(pwd))

mkdir -p local

docker compose up -d
docker-compose up -d
sleep 5

# A symlink for a known file in the volume is created so the test can check
# whether symlinks are preserved on backup.
docker compose exec offen ln -s /var/opt/offen/offen.db /var/opt/offen/db.link
docker compose exec backup backup
docker-compose exec offen ln -s /var/opt/offen/offen.db /var/opt/offen/db.link
docker-compose exec backup backup

sleep 5

@@ -42,14 +42,14 @@ pass "Found symlink to latest version in local backup."
# The second part of this test checks if backups get deleted when the retention
# is set to 0 days (which it should not as it would mean all backups get deleted)
# TODO: find out if we can test actual deletion without having to wait for a day
BACKUP_RETENTION_DAYS="0" docker compose up -d
BACKUP_RETENTION_DAYS="0" docker-compose up -d
sleep 5

docker compose exec backup backup
docker-compose exec backup backup

if [ "$(find ./local -type f | wc -l)" != "1" ]; then
  fail "Backups should not have been deleted, instead seen: "$(find ./local -type f)""
fi
pass "Local backups have not been deleted."

docker compose down --volumes
docker-compose down --volumes

@@ -8,13 +8,13 @@ current_test=$(basename $(pwd))

mkdir -p local

docker compose up -d
docker-compose up -d
sleep 5

GOTIFY_TOKEN=$(curl -sSLX POST -H 'Content-Type: application/json' -d '{"name":"test"}' http://admin:custom@localhost:8080/application | jq -r '.token')
info "Set up Gotify application using token $GOTIFY_TOKEN"

docker compose exec backup backup
docker-compose exec backup backup

NUM_MESSAGES=$(curl -sSL http://admin:custom@localhost:8080/message | jq -r '.messages | length')
if [ "$NUM_MESSAGES" != 0 ]; then

@@ -22,11 +22,11 @@ if [ "$NUM_MESSAGES" != 0 ]; then
fi
pass "No notifications were sent when not configured."

docker compose down
docker-compose down

NOTIFICATION_URLS="gotify://gotify/${GOTIFY_TOKEN}?disableTLS=true" docker compose up -d
NOTIFICATION_URLS="gotify://gotify/${GOTIFY_TOKEN}?disableTLS=true" docker-compose up -d

docker compose exec backup backup
docker-compose exec backup backup

NUM_MESSAGES=$(curl -sSL http://admin:custom@localhost:8080/message | jq -r '.messages | length')
if [ "$NUM_MESSAGES" != 1 ]; then

@@ -47,4 +47,4 @@ if [ "$MESSAGE_BODY" != "Backing up /tmp/test.tar.gz succeeded." ]; then
fi
pass "Custom notification body was used."

docker compose down --volumes
docker-compose down --volumes

@@ -1,28 +0,0 @@
version: "3"

services:
  rsync:
    image: eeacms/rsync
    tty: true
    restart: unless-stopped
    labels:
      - docker-volume-backup.exec-label=order
      - docker-volume-backup.archive-pre=sh -c "rsync -aAX --ignore-missing-args --delete-missing-args /data/ /bu/"
      - docker-volume-backup.archive-post=sh -c "rm -rf /bu/*"
    volumes:
      - ./fixture:/data:ro
      - bu:/bu

  backup:
    image: offen/docker-volume-backup:${TEST_VERSION:-canary}
    restart: always
    environment:
      BACKUP_FILENAME: backup.tar.gz
      BACKUP_EXEC_LABEL: order
    volumes:
      - bu:/backup/order:ro
      - ./local:/archive
      - /var/run/docker.sock:/var/run/docker.sock

volumes:
  bu:

@@ -1 +0,0 @@
ok

@@ -1,26 +0,0 @@
#!/bin/sh

set -e

cd $(dirname $0)
. ../util.sh
current_test=$(basename $(pwd))

mkdir -p local

docker compose up -d
sleep 10

docker compose exec backup backup

if [ ! -f "./local/backup.tar.gz" ]; then
  fail "Could not find expected backup file."
fi

tmp_dir=$(mktemp -d)
tar -xvf ./local/backup.tar.gz -C $tmp_dir
if [ ! -f "$tmp_dir/backup/order/test.txt" ]; then
  fail "Could not find expected file in untared archive."
fi

docker compose down --volumes

@@ -9,10 +9,10 @@ current_test=$(basename $(pwd))

mkdir -p local

docker compose up -d
docker-compose up -d
sleep 5

docker compose exec backup backup
docker-compose exec backup backup

tmp_dir=$(mktemp -d)
sudo tar --same-owner -xvf ./local/backup.tar.gz -C $tmp_dir

@@ -27,4 +27,4 @@ for file in $(sudo find $tmp_dir/backup/postgres); do
done
pass "All files and directories in backup preserved their ownership."

docker compose down --volumes
docker-compose down --volumes

@@ -6,12 +6,12 @@ cd "$(dirname "$0")"
. ../util.sh
current_test=$(basename $(pwd))

docker compose up -d
docker-compose up -d
sleep 5

# A symlink for a known file in the volume is created so the test can check
# whether symlinks are preserved on backup.
docker compose exec backup backup
docker-compose exec backup backup

sleep 5

@@ -27,10 +27,10 @@ pass "Found relevant files in untared remote backups."
# The second part of this test checks if backups get deleted when the retention
# is set to 0 days (which it should not as it would mean all backups get deleted)
# TODO: find out if we can test actual deletion without having to wait for a day
BACKUP_RETENTION_DAYS="0" docker compose up -d
BACKUP_RETENTION_DAYS="0" docker-compose up -d
sleep 5

docker compose exec backup backup
docker-compose exec backup backup

docker run --rm -it \
  -v minio_backup_data:/minio_data \

@@ -39,4 +39,4 @@ docker run --rm -it \

pass "Remote backups have not been deleted."

docker compose down --volumes
docker-compose down --volumes

@@ -8,10 +8,10 @@ current_test=$(basename $(pwd))

ssh-keygen -t rsa -m pem -b 4096 -N "test1234" -f id_rsa -C "docker-volume-backup@local"

docker compose up -d
docker-compose up -d
sleep 5

docker compose exec backup backup
docker-compose exec backup backup

sleep 5

@@ -27,10 +27,10 @@ pass "Found relevant files in decrypted and untared remote backups."
# The second part of this test checks if backups get deleted when the retention
# is set to 0 days (which it should not as it would mean all backups get deleted)
# TODO: find out if we can test actual deletion without having to wait for a day
BACKUP_RETENTION_DAYS="0" docker compose up -d
BACKUP_RETENTION_DAYS="0" docker-compose up -d
sleep 5

docker compose exec backup backup
docker-compose exec backup backup

docker run --rm -it \
  -v ssh_backup_data:/ssh_data \

@@ -39,5 +39,5 @@ docker run --rm -it \

pass "Remote backups have not been deleted."

docker compose down --volumes
docker-compose down --volumes
rm -f id_rsa id_rsa.pub

@@ -6,10 +6,10 @@ cd "$(dirname "$0")"
. ../util.sh
current_test=$(basename $(pwd))

docker compose up -d
docker-compose up -d
sleep 5

docker compose exec backup backup
docker-compose exec backup backup

sleep 5

@@ -25,10 +25,10 @@ pass "Found relevant files in untared remote backup."
# The second part of this test checks if backups get deleted when the retention
# is set to 0 days (which it should not as it would mean all backups get deleted)
# TODO: find out if we can test actual deletion without having to wait for a day
BACKUP_RETENTION_DAYS="0" docker compose up -d
BACKUP_RETENTION_DAYS="0" docker-compose up -d
sleep 5

docker compose exec backup backup
docker-compose exec backup backup

docker run --rm -it \
  -v webdav_backup_data:/webdav_data \

@@ -37,4 +37,4 @@ docker run --rm -it \

pass "Remote backups have not been deleted."

docker compose down --volumes
docker-compose down --volumes