Disaster Recovery

Backup and Restore

The Crunchy PostgreSQL for Pivotal Cloud Foundry (PCF) tile uses pgBackRest running on a dedicated backup and archiving host. The tile comes pre-configured with nightly physical backups of the database server for non-standalone plans:

Backup Type   Day        Time
-----------   ---------  --------
Full          Sunday     1 am UTC
Incremental   Monday     1 am UTC
Incremental   Tuesday    1 am UTC
Incremental   Wednesday  1 am UTC
Incremental   Thursday   1 am UTC
Incremental   Friday     1 am UTC
Incremental   Saturday   1 am UTC

Although backups are taken only once a day, PostgreSQL continuously ships its write-ahead log (WAL) to the pgBackRest server. This means that point-in-time recovery is possible, regardless of the backup schedule.
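The continuous WAL shipping described above is driven by PostgreSQL's archive settings. The following is a minimal, illustrative sketch of the relevant postgresql.conf lines, assuming the stanza name `main` used throughout this guide; the tile generates and manages the real values, so these are shown only to explain the mechanism:

```
# Illustrative only -- the tile manages these settings.
# Ship each completed WAL segment to the pgBackRest repository:
archive_mode = on
archive_command = 'pgbackrest --stanza=main archive-push %p'
```

With `archive_command` in place, every finished WAL segment lands on the backup host, which is what makes recovery to an arbitrary point between backups possible.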

These backups are more than a safety net; the tile itself uses them frequently.

Crunchy PostgreSQL for PCF creates replicas in a cluster by restoring from backup. Because backups are exercised during normal operations, problems with backup and restore surface quickly.

All archives from the database server are stored on the dedicated backup host, which means databases can be restored to a specific point in time.

Currently, individual databases cannot be restored; a restore applies to every database in the shared cluster.

Restore Using Deltas

These instructions demonstrate how the database can be restored using only the deltas (changed files) between the backup and the current data directory.

The following requires an administrator to SSH to the postgresql servers of the service instance. For guidance on using SSH to connect to servers in the service, see Advanced Troubleshooting with the BOSH CLI.

Cluster Plans

  • SSH into any postgresql server using the bosh tool:
   bosh ssh postgresql 0
   # If using the BOSH v2+ Client:
   bosh -e $ENV -d $DEPLOYMENT ssh postgresql/0
  • Identify the current primary by querying Consul:
   sudo su - vcap
   curl -sSL http://localhost:8500/v1/catalog/service/postgresql-zone1 | jq -r '.[] | .Node, .ServiceTags'
  • SSH into the postgresql server(s) identified above as replicas using the bosh tool:
   bosh ssh postgresql <index identified above>
   # If using the BOSH v2+ Client:
   bosh -e $ENV -d $DEPLOYMENT ssh postgresql/<index identified above>
  • Switch to the root user: sudo -i

  • Stop monitoring consul and postgresql services:

   monit unmonitor consul
   monit stop consul
   monit unmonitor postgresql
   monit stop postgresql
  • SSH into the postgresql server identified above as primary using the bosh tool:
   bosh ssh postgresql <index identified above>
   # If using the BOSH v2+ Client:
   bosh -e $ENV -d $DEPLOYMENT ssh postgresql/<index identified above>
  • Switch to the root user: sudo -i

  • Stop monitoring consul and postgresql services:

   monit unmonitor consul
   monit stop consul
   sleep 60 # Give adequate time for consul to settle
   monit unmonitor postgresql
   monit stop postgresql
  • Switch to the vcap user: su - vcap

  • Verify that backups are available:

   pgbackrest --stanza=main info
   # Note that you may need to provide the `pgbackrest.conf` path:
   pgbackrest --config=/var/vcap/store/pgbackrest/config/pgbackrest-local.conf --stanza=main info
   # If you wish to restore from S3, replace pgbackrest-local.conf with pgbackrest-s3.conf.
  • Restore the database:
   pgbackrest --config=/var/vcap/store/pgbackrest/config/pgbackrest-local.conf \
       --stanza=main \
       --delta \
       --log-level-console=detail \
       restore
  • Switch to root: sudo -i

  • Start the database:

   monit start postgresql
   monit monitor postgresql
  • Return the other systems to working order:
   monit start consul
   monit monitor consul
  • SSH into the postgresql server(s) identified above as replicas using the bosh tool:
   bosh ssh postgresql <index identified above>
   # If using the BOSH v2+ Client:
   bosh -e $ENV -d $DEPLOYMENT ssh postgresql/<index identified above>
  • Switch to the root user: sudo -i

  • Resume monitoring consul and postgresql services:

   monit start consul
   monit monitor consul
   monit start postgresql
   monit monitor postgresql
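The "identify the current primary" step above prints pairs of Node and ServiceTags values from Consul's catalog API. The sketch below shows, against a saved sample response, how the primary can be picked out; the node names and tags are illustrative (your deployment's will differ), and jq is deliberately avoided so the sketch needs only common POSIX tools:

```shell
# Illustrative sample of the Consul catalog response; the real output comes
# from the curl command in the steps above. Node names and tags are made up.
response='[{"Node":"postgresql-0","ServiceTags":["primary"]},
{"Node":"postgresql-1","ServiceTags":["replica"]},
{"Node":"postgresql-2","ServiceTags":["replica"]}]'

# Extract the node whose ServiceTags include "primary":
primary=$(printf '%s\n' "$response" | tr ',' '\n' \
  | grep -B1 '"primary"' | grep '"Node"' \
  | sed 's/.*"Node":"\([^"]*\)".*/\1/')
echo "$primary"
```

The node printed last is the one to treat as the primary in the steps above; every other node in the response is a replica.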

Standalone Plan

  • SSH into the postgresql server using the bosh tool:
   bosh ssh postgresql 0
   # If using the BOSH v2+ Client:
   bosh -e $ENV -d $DEPLOYMENT ssh postgresql/0
  • Switch to the root user: sudo -i

  • Stop monitoring the postgresql service:

   monit unmonitor postgresql
   monit stop postgresql
  • Switch to the vcap user: su - vcap

  • Verify that backups are available:

   pgbackrest --stanza=main info
   # Note that you may need to provide the `pgbackrest.conf` path:
   pgbackrest --config=/var/vcap/store/pgbackrest/config/pgbackrest-local.conf --stanza=main info
   # If you wish to restore from S3, replace `pgbackrest-local.conf` with `pgbackrest-s3.conf`.
  • Restore the database:
   pgbackrest --config=/var/vcap/store/pgbackrest/config/pgbackrest-local.conf \
       --stanza=main \
       --delta \
       --log-level-console=detail \
       restore
  • Switch to root: sudo -i

  • Start the database:

   monit start postgresql
   monit monitor postgresql
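The `pgbackrest ... info` output in the verification step reports a stanza status. The sketch below checks that status against a saved copy of the output before proceeding; the sample text is illustrative, not real output from a deployment:

```shell
# Illustrative sample of `pgbackrest info` output; the real text comes from
# the verification step above.
info='stanza: main
    status: ok

    db (current)
        full backup: 20161211-010002F
            timestamp start/stop: 2016-12-11 01:00:02 / 2016-12-11 01:04:33'

# Only proceed with the restore when the stanza status is ok:
if printf '%s\n' "$info" | grep -q 'status: ok'; then
  status=available
else
  status=missing
fi
echo "backups: $status"
```

A status other than `ok` (for example, an error about missing backups) means the restore should not be attempted until the backup repository is healthy.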

Point in Time Recovery

These instructions demonstrate how the database can be restored to a specific point in time.

The following requires an administrator to SSH to the postgresql servers of the service instance. For guidance on using SSH to connect to servers in the service, see Advanced Troubleshooting with the BOSH CLI.

Cluster Plans

  • SSH into any postgresql server using the bosh tool:
   bosh ssh postgresql 0
   # If using the BOSH v2+ Client:
   bosh -e $ENV -d $DEPLOYMENT ssh postgresql/0
  • Identify the current primary by querying Consul:
   sudo su - vcap
   curl -sSL http://localhost:8500/v1/catalog/service/postgresql-zone1 | jq -r '.[] | .Node, .ServiceTags'
  • SSH into the postgresql server(s) identified above as replicas using the bosh tool:
   bosh ssh postgresql <index identified above>
   # If using the BOSH v2+ Client:
   bosh -e $ENV -d $DEPLOYMENT ssh postgresql/<index identified above>
  • Switch to the root user: sudo -i

  • Stop monitoring consul and postgresql services:

   monit unmonitor consul
   monit stop consul
   monit unmonitor postgresql
   monit stop postgresql
  • SSH into the postgresql server identified above as primary using the bosh tool:
   bosh ssh postgresql <index identified above>
   # If using the BOSH v2+ Client:
   bosh -e $ENV -d $DEPLOYMENT ssh postgresql/<index identified above>
  • Switch to the root user: sudo -i

  • Stop monitoring consul and postgresql services:

   monit unmonitor consul
   monit stop consul
   sleep 60 # Give adequate time for consul to settle
   monit unmonitor postgresql
   monit stop postgresql
  • Switch to the vcap user: su - vcap

  • Verify that backups are available:

   pgbackrest --stanza=main info
   # Note that you may need to provide the pgbackrest.conf path:
   pgbackrest --config=/var/vcap/store/pgbackrest/config/pgbackrest-local.conf --stanza=main info
   # If you wish to restore from S3, replace pgbackrest-local.conf with pgbackrest-s3.conf.
  • Restore the database:
   pgbackrest --config=/var/vcap/store/pgbackrest/config/pgbackrest-local.conf \
       --stanza=main \
       --delta \
       --log-level-console=detail \
       --type=time "--target=2016-12-13 00:11:34.531619+00" \
       restore
  • Switch to root: sudo -i

  • Start the database:

   monit start postgresql
   monit monitor postgresql
  • Return the other systems to working order:
   monit start consul
   monit monitor consul
  • SSH into the postgresql server(s) identified above as replicas using the bosh tool:
   bosh ssh postgresql <index identified above>
   # If using the BOSH v2+ Client:
   bosh -e $ENV -d $DEPLOYMENT ssh postgresql/<index identified above>
  • Switch to the root user: sudo -i

  • Resume monitoring consul and postgresql services:

   monit start consul
   monit monitor consul
   monit start postgresql
   monit monitor postgresql
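The recovery target in the restore command above is easy to mistype. As a hedged convenience, the invocation can be assembled from a single TARGET variable; the timestamp below is the illustrative value used throughout this section, and echo is used instead of executing, so this only previews the command:

```shell
# Illustrative: build the point-in-time restore invocation from one variable.
TARGET='2016-12-13 00:11:34.531619+00'
set -- pgbackrest \
  --config=/var/vcap/store/pgbackrest/config/pgbackrest-local.conf \
  --stanza=main --delta --log-level-console=detail \
  --type=time "--target=${TARGET}" restore
# Preview only; run "$@" directly (instead of echoing) to perform the restore.
echo "would run: $*"
```

Keeping the target in one place avoids the command silently replaying to the wrong moment because two copies of the timestamp drifted apart.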

Standalone Plan

  • SSH into the postgresql server using the bosh tool:
   bosh ssh postgresql 0
   # If using the BOSH v2+ Client:
   bosh -e $ENV -d $DEPLOYMENT ssh postgresql/0
  • Switch to the root user: sudo -i

  • Stop monitoring the postgresql service:

   monit unmonitor postgresql
   monit stop postgresql
  • Switch to the vcap user: su - vcap

  • Verify that backups are available:

   pgbackrest --stanza=main info
   # Note that you may need to provide the `pgbackrest.conf` path:
   pgbackrest --config=/var/vcap/store/pgbackrest/config/pgbackrest-local.conf --stanza=main info
   # If you wish to restore from S3, replace pgbackrest-local.conf with pgbackrest-s3.conf.
  • Restore the database:
   pgbackrest --config=/var/vcap/store/pgbackrest/config/pgbackrest-local.conf \
       --stanza=main \
       --delta \
       --log-level-console=detail \
       --type=time "--target=2016-12-13 00:11:34.531619+00" \
       restore
   # If the target time predates the most recent backup, specify an earlier
   # backup set explicitly (find its ID in the `pgbackrest info` output):
   pgbackrest --config=/var/vcap/store/pgbackrest/config/pgbackrest-local.conf \
       --stanza=main \
       --delta \
       --log-level-console=detail \
       --type=time "--target=2016-12-13 00:11:34.531619+00" \
       --set=ID_OF_PREVIOUS_BACKUP \
       restore
  • Switch to root: sudo -i

  • Start the database:

   monit start postgresql
   monit monitor postgresql
  • Finally, confirm that backups and archiving are still working. Connect to the primary PostgreSQL server instance and run:
   sudo su - vcap
   psql -d postgres -c 'SELECT 1' # confirm the server accepts connections
   pgbackrest --config=/var/vcap/store/pgbackrest/config/pgbackrest-local.conf --stanza=main info
   pgbackrest --config=/var/vcap/store/pgbackrest/config/pgbackrest-local.conf --stanza=main check

pgBackRest Documentation

For more information about pgBackRest restores, please refer to the official pgBackRest documentation.