Database container recovery and simple backup implementation

Background

A colleague wanted to add MathJax support to the Outline document service we run for internal use. However, not being familiar with Docker, he boldly ran docker-compose down in the production environment, which deleted the Postgres and Redis containers outright.

Data recovery

Some investigation showed that docker-compose down does not delete the volumes holding the data unless the -v option is explicitly added. The fix is therefore to mount the corresponding volume when creating a new Postgres container.

# Query the corresponding volume
ls /var/lib/docker/volumes

# Create a new postgres container
docker run -d --name <Container Name> -e POSTGRES_PASSWORD=<Your Password> -p 5432:5432 --mount source=<Volume Name>,target=/var/lib/postgresql/data postgres

※ In addition to the container itself, a Docker container is created together with its volumes and networks; these can be examined with docker inspect.
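
To make the note above concrete: docker inspect reports a container's mounted volumes, and docker volume inspect shows where a volume lives on the host. A quick sketch (the container name outline_postgres is a placeholder):

```shell
# List the volumes mounted by an existing container (name is a placeholder)
docker inspect --format '{{range .Mounts}}{{.Name}} -> {{.Destination}}{{"\n"}}{{end}}' outline_postgres

# Show where a named volume is stored on the host
docker volume inspect --format '{{.Mountpoint}}' <Volume Name>
```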

Reflection

Operating directly on a production environment without understanding its deployment and technology stack is a problem.

I had also neglected data backups because this was an internal, self-hosted service; relying on the managed backups provided by AWS RDS had dulled my habits.

A baseline backup strategy is a must in a production environment; otherwise, when problems do occur, they are irreversible.

If the number of users grows, a primary + standby + pgBackRest architecture would be the better solution.

Simple backup implementation

I spent 20 minutes writing a backup script. Since the company runs entirely on AWS, I chose AWS S3 as the backup repository.

I used the AWS CLI directly because I am familiar with it and it was the fastest to implement. For better portability across cloud storage services, rclone could be used instead.

Requirements

  • File name format: Outline.YYYYMMDD
  • Leave simple script execution logs: /var/log/backup/outline.log
  • Design a simple recovery strategy (recovery time: less than 4 hours)
  • Email notification when backup fails
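
The file-name requirement above can be met with date(1); a minimal sketch, with outline standing in for the real database name:

```shell
# Compose the backup file name in the required <name>.YYYYMMDD format
DB_NAME=outline
DT=$(date +'%Y%m%d')
BACKUP_FILE="${DB_NAME}.${DT}"
echo "${BACKUP_FILE}"
```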

Implementation

  • Environment: AWS EC2 + Postgres in Docker
  • Backup script
    • Backup generation: run pg_dump inside the Postgres container via docker exec
    • Backup upload: AWS S3 (Prepare IAM user and configure AWS CLI in advance)
    • Scheduled execution: crontab
    • Failure notification: crontab's MAILTO= variable, with Mutt as the mail client
  • Recovery strategy
    • The AWS CLI pulls the latest backup from S3
    • Drop the database, recreate it, and import the dump (mostly Markdown text, so the data volume is small)
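
The scheduling and failure-notification items above come together in the crontab itself; a sketch, assuming the script is installed at /opt/backup/outline-backup.sh (a hypothetical path) and the machine has a working mail setup (here Mutt, as mentioned above):

```
# crontab -e
# cron mails any output (including errors surfaced by set -e) to this address
MAILTO=ops@example.com
# run the backup daily at 03:00
0 3 * * * /opt/backup/outline-backup.sh
```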

Backup script

#!/usr/bin/env bash
#####################################################################
# Author: xxxx
# Version: v1.0.0
# Date: 2021-03-10
# Description: Backup Database
# Usage: bash <file_name>
#####################################################################

set -Eeuo pipefail

cd "$(dirname "${BASH_SOURCE[0]}")" >/dev/null 2>&1

trap cleanup SIGINT SIGTERM ERR EXIT

DB_NAME=your_database_name
DB_USER=your_database_user
DB_CONTAINER=your_docker_container_name
S3_BACKUP_PATH=s3://

DT=$(date +'%Y%m%d')

cleanup(){
    rm -f "${DB_NAME}.${DT}"
}

main() {
    # dump (no -it: cron provides no TTY, and -t would corrupt the dump output)
    docker exec "${DB_CONTAINER}" pg_dump -U "${DB_USER}" "${DB_NAME}" > "${DB_NAME}.${DT}"
    aws s3 cp "${DB_NAME}.${DT}" "${S3_BACKUP_PATH}"

    # log
    echo "[${DT}] ${DB_NAME}.${DT} uploaded to ${S3_BACKUP_PATH}" >> /var/log/backup/${DB_NAME}.log

    cleanup
}

main "$@"
  • Snippets: pg_dump, pg_restore
# dump to single SQL file
$ pg_dump -d mydb -n public -f mydb.sql
# dump to a custom format file
$ pg_dump -d mydb -n public --format=custom -f mydb.pgdmp

# restoring from a SQL dump file, the simple version
$ psql -d mydb_new < mydb.sql
# restoring from a SQL dump file, the recommended version
$ PGOPTIONS='--client-min-messages=warning' psql -X -q -1 -v ON_ERROR_STOP=1 --pset pager=off -d mydb_new -f mydb.sql -L restore.log

# restoring from a dump written to a custom format file
$ pg_restore -d mydb_new -v -1 mydb.pgdmp
# restore a single table from the dump
$ pg_restore -d mydb_new --table=mytable -v -1 mydb.pgdmp
# restore a single function from the dump (the full signature, including argument types, is required)
$ pg_restore -d mydb_new --function='myfunc(integer)' -v -1 mydb.pgdmp
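
The recovery strategy described in the implementation section can be sketched as its own script; a rough outline, reusing the backup script's variables (the bucket path is a placeholder) and assuming the newest object in the bucket is the latest backup:

```shell
#!/usr/bin/env bash
set -Eeuo pipefail

DB_NAME=your_database_name
DB_USER=your_database_user
DB_CONTAINER=your_docker_container_name
S3_BACKUP_PATH=s3://your-bucket/   # placeholder

# 1. Locate and download the most recent backup from S3
LATEST=$(aws s3 ls "${S3_BACKUP_PATH}" | sort | tail -n 1 | awk '{print $4}')
aws s3 cp "${S3_BACKUP_PATH}${LATEST}" .

# 2. Drop and recreate the database inside the container
docker exec "${DB_CONTAINER}" dropdb -U "${DB_USER}" --if-exists "${DB_NAME}"
docker exec "${DB_CONTAINER}" createdb -U "${DB_USER}" "${DB_NAME}"

# 3. Import the plain-SQL dump
docker exec -i "${DB_CONTAINER}" psql -U "${DB_USER}" -d "${DB_NAME}" < "${LATEST}"
```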