# Disaster Recovery

## Backup and Restore
The Crunchy PostgreSQL for Pivotal Cloud Foundry (PCF) tile uses pgBackRest on a dedicated backup and archiving host. The tile comes pre-configured with nightly physical backups of the database server for non-standalone plans:
| Backup Type | Day | Time |
|---|---|---|
| Full | Sunday | 1 am UTC |
| Incremental | Monday | 1 am UTC |
| Incremental | Tuesday | 1 am UTC |
| Incremental | Wednesday | 1 am UTC |
| Incremental | Thursday | 1 am UTC |
| Incremental | Friday | 1 am UTC |
| Incremental | Saturday | 1 am UTC |
Archiving and backups are disabled by default in the standalone plan. PCF users can set the `standalone_backups` parameter to `true` to enable this feature, for example when creating or updating the service instance (see the sketch below).

Although backups are taken only once a day, PostgreSQL continuously ships Write-Ahead Log (WAL) segments to the pgBackRest server. This means that point-in-time recovery is possible regardless of the backup schedule.
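A minimal sketch of enabling this with the cf CLI, assuming the `standalone_backups` parameter is passed as an arbitrary service parameter (the service offering, plan, and instance names shown here are placeholders):

```
# Enable archiving and nightly backups on an existing standalone instance.
cf update-service my-postgres-db -c '{"standalone_backups": true}'

# Or enable it at creation time.
cf create-service <service-offering> <standalone-plan> my-postgres-db -c '{"standalone_backups": true}'
```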
These backups not only offer peace of mind, but are also used routinely by the tile itself. Crunchy PostgreSQL for PCF uses backups to create replicas in the cluster, so backup and restore operations are exercised and verified as part of normal operation.
All archives from the database server are stored on the dedicated backup host. This means that databases can be restored to specific points in time.
Currently, individual databases cannot be restored on their own; a restore applies to all databases in the shared cluster.
Standalone plans do not include automatic backups by default. When using a standalone plan, either perform regular manual backups with external tools (see the sketch below), or enable backups with the `standalone_backups` service creation/update parameter.
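A minimal sketch of a manual logical backup with `pg_dump`, assuming the host, user, and database name come from the service key or binding (all values shown are placeholders):

```
# Custom-format dump of a single database; restore it later with pg_restore.
pg_dump --host=<db-host> --port=5432 --username=<db-user> \
        --format=custom --file=mydb-$(date +%Y%m%d).dump <db-name>
```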
## Restore Using Deltas
These instructions demonstrate how the database can be restored using only the deltas between the backup and the existing data directory.
The following requires an administrator to SSH into the `postgresql` servers of the service instance. For guidance on using SSH to connect to servers in the service, see Advanced Troubleshooting with the BOSH CLI.
Pivotal recommends that you use BOSH CLI v2 or later when running PCF v1.11 or later.
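The steps below reference `$ENV` and `$DEPLOYMENT` when using the BOSH v2+ CLI. As a hedged sketch of one way to find these values: `$ENV` is your configured BOSH environment alias, and the BOSH deployment for an on-demand service instance can usually be located by its service instance GUID (the exact deployment naming depends on your broker; names shown are placeholders):

```
# Look up the service instance GUID, then find the matching BOSH deployment.
cf service my-postgres-db --guid
bosh -e my-env deployments | grep <service-instance-guid>
```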
### Cluster Plans
- SSH into any `postgresql` server using the `bosh` tool:

```
bosh ssh postgresql 0
# If using the BOSH v2+ Client:
bosh -e $ENV -d $DEPLOYMENT ssh postgresql/0
```

- Identify the current primary by querying Consul:

```
sudo su - vcap
curl -sSL http://localhost:8500/v1/catalog/service/postgresql-zone1 | jq -r '.[] | .Node, .ServiceTags'
```
- SSH into the `postgresql` server(s) identified above as replicas using the `bosh` tool:

```
bosh ssh postgresql <index identified above>
# If using the BOSH v2+ Client:
bosh -e $ENV -d $DEPLOYMENT ssh postgresql/<index identified above>
```

- Switch to the `root` user:

```
sudo -i
```

- Stop monitoring the `consul` and `postgresql` services:

```
monit unmonitor consul
monit stop consul
monit unmonitor postgresql
monit stop postgresql
```
- SSH into the `postgresql` server identified above as primary using the `bosh` tool:

```
bosh ssh postgresql <index identified above>
# If using the BOSH v2+ Client:
bosh -e $ENV -d $DEPLOYMENT ssh postgresql/<index identified above>
```

- Switch to the `root` user:

```
sudo -i
```

- Stop monitoring the `consul` and `postgresql` services:

```
monit unmonitor consul
monit stop consul
sleep 60 # Give adequate time for consul to settle
monit unmonitor postgresql
monit stop postgresql
```
- Switch to the `vcap` user:

```
su - vcap
```

- Verify that backups are available (see the sample output after this list):

```
pgbackrest --stanza=main info
# Note that you may need to provide the `pgbackrest.conf` path:
pgbackrest --config=/var/vcap/store/pgbackrest/config/pgbackrest-local.conf --stanza=main info
# If you wish to restore from S3, replace pgbackrest-local.conf with pgbackrest-s3.conf.
```

- Restore the database:

```
pgbackrest --config=/var/vcap/store/pgbackrest/config/pgbackrest-local.conf \
    --stanza=main \
    --delta \
    --log-level-console=detail \
    restore
```
- Switch to `root`:

```
sudo -i
```

- Start the database:

```
monit start postgresql
monit monitor postgresql
```

- Return the other systems to working order:

```
monit start consul
monit monitor consul
```
- SSH into the `postgresql` server(s) identified above as replicas using the `bosh` tool:

```
bosh ssh postgresql <index identified above>
# If using the BOSH v2+ Client:
bosh -e $ENV -d $DEPLOYMENT ssh postgresql/<index identified above>
```

- Switch to the `root` user:

```
sudo -i
```

- Resume monitoring the `consul` and `postgresql` services:

```
monit start consul
monit monitor consul
monit start postgresql
monit monitor postgresql
```
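For reference, a healthy stanza reported by `pgbackrest info` looks roughly like the following. This is an illustrative sketch only: the exact fields depend on the pgBackRest version, and the backup labels, timestamps, and sizes shown here are made up.

```
stanza: main
    status: ok

    db (current)
        full backup: 20161211-010002F
            timestamp start/stop: 2016-12-11 01:00:02 / 2016-12-11 01:04:53
            database size: 1.2GB, backup size: 1.2GB

        incr backup: 20161211-010002F_20161213-010002I
            timestamp start/stop: 2016-12-13 01:00:02 / 2016-12-13 01:01:21
            database size: 1.2GB, backup size: 8.2MB
```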
### Standalone Plan

- SSH into the `postgresql` server using the `bosh` tool:

```
bosh ssh postgresql 0
# If using the BOSH v2+ Client:
bosh -e $ENV -d $DEPLOYMENT ssh postgresql/0
```
- Switch to the `root` user:

```
sudo -i
```

- Stop monitoring the `postgresql` service:

```
monit unmonitor postgresql
monit stop postgresql
```
- Switch to the `vcap` user:

```
su - vcap
```

- Verify that backups are available:

```
pgbackrest --stanza=main info
# Note that you may need to provide the `pgbackrest.conf` path:
pgbackrest --config=/var/vcap/store/pgbackrest/config/pgbackrest-local.conf --stanza=main info
# If you wish to restore from S3, replace `pgbackrest-local.conf` with `pgbackrest-s3.conf`.
```

- Restore the database:

```
pgbackrest --config=/var/vcap/store/pgbackrest/config/pgbackrest-local.conf \
    --stanza=main \
    --delta \
    --log-level-console=detail \
    restore
```
- Switch to `root`:

```
sudo -i
```

- Start the database (see the verification sketch after this list):

```
monit start postgresql
monit monitor postgresql
```
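Once PostgreSQL is back under monit control, you can optionally confirm that the restored instance has finished WAL replay and is accepting connections. This is a minimal verification sketch, not part of the tile's documented procedure:

```
# As the vcap user on the restored server:
psql -d postgres -c "SELECT pg_is_in_recovery();"
# Expect 'f' once recovery has completed and the server is writable.
```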
## Point in Time Recovery

These instructions demonstrate how the database can be restored to a specific point in time.
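Point-in-time recovery needs a target timestamp. As a hedged aside, one simple way to have a precise target on hand is to record the server time immediately before making a risky change; otherwise, choose a timestamp just before the unwanted change based on your application or audit logs:

```
# Run before a risky migration or bulk change, and keep the value for a possible --target:
psql -d postgres -c "SELECT now();"
```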
The following requires an administrator to SSH into the `postgresql` servers of the service instance. For guidance on using SSH to connect to servers in the service, see Advanced Troubleshooting with the BOSH CLI.
Pivotal recommends that you use BOSH CLI v2 or later when running PCF v1.11 or later.
### Cluster Plans
- SSH into any `postgresql` server using the `bosh` tool:

```
bosh ssh postgresql 0
# If using the BOSH v2+ Client:
bosh -e $ENV -d $DEPLOYMENT ssh postgresql/0
```

- Identify the current primary by querying Consul:

```
sudo su - vcap
curl -sSL http://localhost:8500/v1/catalog/service/postgresql-zone1 | jq -r '.[] | .Node, .ServiceTags'
```
- SSH into the `postgresql` server(s) identified above as replicas using the `bosh` tool:

```
bosh ssh postgresql <index identified above>
# If using the BOSH v2+ Client:
bosh -e $ENV -d $DEPLOYMENT ssh postgresql/<index identified above>
```

- Switch to the `root` user:

```
sudo -i
```

- Stop monitoring the `consul` and `postgresql` services:

```
monit unmonitor consul
monit stop consul
monit unmonitor postgresql
monit stop postgresql
```
- SSH into the `postgresql` server identified above as primary using the `bosh` tool:

```
bosh ssh postgresql <index identified above>
# If using the BOSH v2+ Client:
bosh -e $ENV -d $DEPLOYMENT ssh postgresql/<index identified above>
```

- Switch to the `root` user:

```
sudo -i
```

- Stop monitoring the `consul` and `postgresql` services:

```
monit unmonitor consul
monit stop consul
sleep 60 # Give adequate time for consul to settle
monit unmonitor postgresql
monit stop postgresql
```
- Switch to the `vcap` user:

```
su - vcap
```

- Verify that backups are available:

```
pgbackrest --stanza=main info
# Note that you may need to provide the `pgbackrest.conf` path:
pgbackrest --config=/var/vcap/store/pgbackrest/config/pgbackrest-local.conf --stanza=main info
# If you wish to restore from S3, replace pgbackrest-local.conf with pgbackrest-s3.conf.
```

- Restore the database to the desired point in time (see the verification sketch after this list):

```
pgbackrest --config=/var/vcap/store/pgbackrest/config/pgbackrest-local.conf \
    --stanza=main \
    --delta \
    --log-level-console=detail \
    --type=time "--target=2016-12-13 00:11:34.531619+00" \
    restore
```
- Switch to `root`:

```
sudo -i
```

- Start the database:

```
monit start postgresql
monit monitor postgresql
```

- Return the other systems to working order:

```
monit start consul
monit monitor consul
```
- SSH into the `postgresql` server(s) identified above as replicas using the `bosh` tool:

```
bosh ssh postgresql <index identified above>
# If using the BOSH v2+ Client:
bosh -e $ENV -d $DEPLOYMENT ssh postgresql/<index identified above>
```

- Switch to the `root` user:

```
sudo -i
```

- Resume monitoring the `consul` and `postgresql` services:

```
monit start consul
monit monitor consul
monit start postgresql
monit monitor postgresql
```
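After a point-in-time restore, you may want to confirm that recovery stopped near the requested target. As a hedged sketch (not part of the tile's documented procedure), the timestamp of the last replayed transaction should be at or just before the `--target` value you supplied:

```
# As the vcap user on the restored primary:
psql -d postgres -c "SELECT pg_last_xact_replay_timestamp();"
```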
### Standalone Plan

- SSH into the `postgresql` server using the `bosh` tool:

```
bosh ssh postgresql 0
# If using the BOSH v2+ Client:
bosh -e $ENV -d $DEPLOYMENT ssh postgresql/0
```
- Switch to the `root` user:

```
sudo -i
```

- Stop monitoring the `postgresql` service:

```
monit unmonitor postgresql
monit stop postgresql
```
- Switch to the `vcap` user:

```
su - vcap
```

- Verify that backups are available:

```
pgbackrest --stanza=main info
# Note that you may need to provide the `pgbackrest.conf` path:
pgbackrest --config=/var/vcap/store/pgbackrest/config/pgbackrest-local.conf --stanza=main info
# If you wish to restore from S3, replace pgbackrest-local.conf with pgbackrest-s3.conf.
```

- Restore the database to the desired point in time:

```
pgbackrest --config=/var/vcap/store/pgbackrest/config/pgbackrest-local.conf \
    --stanza=main \
    --delta \
    --log-level-console=detail \
    --type=time "--target=2016-12-13 00:11:34.531619+00" \
    restore
```

By default, the restore uses the most recent backup, but an earlier backup can be selected with the `--set` option. The value for `--set` is the backup label shown in the `pgbackrest info` output:

```
pgbackrest --config=/var/vcap/store/pgbackrest/config/pgbackrest-local.conf \
    --stanza=main \
    --delta \
    --log-level-console=detail \
    --type=time "--target=2016-12-13 00:11:34.531619+00" \
    --set=ID_OF_PREVIOUS_BACKUP \
    restore
```
- Switch to `root`:

```
sudo -i
```

- Start the database:

```
monit start postgresql
monit monitor postgresql
```

- Finally, confirm that backups and archiving are still working. Connect to the primary PostgreSQL server instance and run:

```
sudo su - vcap
psql -d postgres   # confirm that you can connect; exit with \q
pgbackrest --config=/var/vcap/store/pgbackrest/config/pgbackrest-local.conf --stanza=main info
pgbackrest --config=/var/vcap/store/pgbackrest/config/pgbackrest-local.conf --stanza=main check
```
## pgBackRest Documentation

For more information about pgBackRest restore options, refer to the official pgBackRest documentation.