Upgrade Method #2: Backups
Info
Before attempting to upgrade from v4.x to v5, please familiarize yourself with the prerequisites applicable to all v4.x to v5 upgrade methods.
This upgrade method migrates from CPK v4 to CPK v5 by creating a new CPK v5 Postgres cluster from a backup of a CPK v4 cluster, preserving the data in your CPK v4 cluster while you transition to CPK v5. To fully move the data over, you will need to incur downtime and shut down your CPK v4 cluster.
Step 1: Prepare the CPK v4 Cluster for Migration
- Ensure you have a recent backup of your cluster. You can do so with the `pgo backup` command, e.g.:

```shell
pgo backup hippo
```

Please ensure that the backup completes. You will see the latest backup appear using the `pgo show backup` command.
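For example, using the `hippo` cluster from the command above, the check would look something like this:

```shell
pgo show backup hippo
```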
- Next, delete the cluster while keeping backups (using the `--keep-backups` flag):

```shell
pgo delete cluster hippo --keep-backups
```
Warning
Additional steps are required to set proper file permissions when using certain storage options, such as NFS and HostPath storage, due to a known issue with how fsGroups are applied. When migrating from CPK v4, this will require the user to manually set the group value of the pgBackRest repo directory, and all subdirectories, to `26` to match the `postgres` group used in CPK v5. Please see here for more information.
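As a minimal sketch, assuming the CPK v4 pgBackRest repo directory is accessible at `/backrestrepo` (the mount point will vary with your environment), the group ownership could be updated with something like:

```shell
# Recursively set the group of the pgBackRest repo directory and all of its
# subdirectories to GID 26, matching the postgres group used by CPK v5.
# /backrestrepo is an example path; run this wherever the NFS or HostPath
# volume is mounted.
chgrp -R 26 /backrestrepo
```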
Step 2: Migrate to CPK v5
With the CPK v4 Postgres cluster's backup repository prepared, you can now create a `PostgresCluster` custom resource. This migration method does not carry over any specific configurations or customizations from CPK v4: you will need to create the specific `PostgresCluster` configuration that you need.
To complete the upgrade process, your `PostgresCluster` custom resource MUST include the following:
- You will need to configure your pgBackRest repository based upon whether you are using a PVC to store your backups, or an object storage system such as S3/GCS. Please follow the directions based on the repository type you are using.
PVC-based Backup Repository
When migrating from a PVC-based backup repository, you will need to configure a pgBackRest repo at `spec.backups.pgbackrest.repos.volume` with the name `repo1`. The `volumeClaimSpec` should match the attributes of the pgBackRest repo PVC being used as part of the migration, i.e. it must have the same `storageClassName`, `accessModes`, `resources`, etc. For example, if your v4 Postgres cluster volume was 1Gi of `standard` storage with a `ReadWriteOnce` access mode, your v5 cluster would look something like this (note the `repo1` name):
```yaml
spec:
  backups:
    pgbackrest:
      repos:
        - name: repo1
          volume:
            volumeClaimSpec:
              storageClassName: standard
              accessModes:
                - "ReadWriteOnce"
              resources:
                requests:
                  storage: 1Gi
```
Please note that you will need to perform the cluster upgrade in the same namespace as the original cluster in order for your v5 cluster to access the existing PVCs.
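If you need to confirm those attributes, one way is to inspect the existing repo PVC (using the default CPK v4 PVC name `hippo-pgbr-repo`, which also appears later in this guide):

```shell
# Inspect the existing CPK v4 pgBackRest repo PVC so its storage class,
# access modes, and requested size can be copied into the volumeClaimSpec.
kubectl get pvc hippo-pgbr-repo -o yaml
```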
S3 / GCS Backup Repository
When migrating from an S3 or GCS based backup repository, you will need to configure the repository defined in `spec.backups.pgbackrest.repos` to point to the backup storage system. For instance, if AWS S3 storage is being utilized, the repo would be defined similar to the following:
```yaml
spec:
  backups:
    pgbackrest:
      repos:
        - name: repo1
          s3:
            bucket: hippo
            endpoint: s3.amazonaws.com
            region: us-east-1
```
Any required secrets or desired custom pgBackRest configuration should be created and configured as described in the backup tutorial.
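For reference, a minimal sketch of wiring in S3 credentials might look like the following; the Secret name `pgo-s3-creds` and its contents are illustrative, and the backup tutorial covers the full details:

```yaml
spec:
  backups:
    pgbackrest:
      configuration:
        # "pgo-s3-creds" is an example Secret; it would contain an s3.conf
        # file providing the repo1-s3-key and repo1-s3-key-secret values.
        - secret:
            name: pgo-s3-creds
```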
You will also need to ensure that the "pgbackrest-repo-path" configured for the repository matches the path used by the CPK v4 cluster. The default repository path follows the pattern `/backrestrepo/<clusterName>-backrest-shared-repo`. Note that the path name here is different than when migrating from a PVC-based repository.
Using the `hippo` Postgres cluster as an example, you would set the following in the `spec.backups.pgbackrest.global` section:
```yaml
spec:
  backups:
    pgbackrest:
      global:
        repo1-path: /backrestrepo/hippo-backrest-shared-repo
```
- Once you have completed the pgBackRest repository configuration in step 1, set the `spec.dataSource` section to restore from the backups used for this migration. For example:
```yaml
spec:
  dataSource:
    postgresCluster:
      repoName: repo1
```
You can also provide other pgBackRest restore options, e.g. if you wish to restore to a specific point-in-time (PITR).
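As a sketch, a point-in-time restore could add pgBackRest restore options to the data source like so (the timestamp shown is purely illustrative):

```yaml
spec:
  dataSource:
    postgresCluster:
      repoName: repo1
      options:
        # Example point-in-time restore options; replace the target
        # timestamp with the time you wish to recover to.
        - --type=time
        - --target="2021-06-09 14:15:11-04"
```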
- If you are using a PVC-based pgBackRest repository, then you will also need to specify a `pgBackRestVolume` data source that references the CPK v4 pgBackRest repository PVC:
```yaml
spec:
  dataSource:
    volumes:
      pgBackRestVolume:
        pvcName: hippo-pgbr-repo
        directory: "hippo-backrest-shared-repo"
    postgresCluster:
      repoName: repo1
```
- If you customized other Postgres parameters, you will need to ensure they match in the CPK v5 cluster. For more information, please review the tutorial on customizing a Postgres cluster.
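For instance, a minimal sketch of carrying custom parameters over via the Patroni dynamic configuration section (the values shown are illustrative only) could look like:

```yaml
spec:
  patroni:
    dynamicConfiguration:
      postgresql:
        parameters:
          # Illustrative values only; mirror the settings from your v4 cluster.
          max_connections: 200
          shared_buffers: "1GB"
```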
- Once the `PostgresCluster` spec is populated according to these guidelines, you can create the `PostgresCluster` custom resource. For example, if the `PostgresCluster` you're creating is a modified version of the `postgres` example in the CPK examples repo, you can run the following command:

```shell
kubectl apply -k kustomize/postgres
```
WARNING: Once the `PostgresCluster` custom resource is created, it becomes the owner of the PVC. This means that if the `PostgresCluster` is then deleted (e.g. if attempting to revert to a CPK v4 cluster), the PVC will be deleted as well.
If you wish to protect against this, first remove the reference to the pgBackRest PVC in the `PostgresCluster` spec:
```shell
kubectl patch postgrescluster hippo --type='json' -p='[{"op": "remove", "path": "/spec/dataSource/volumes"}]'
```
Then relabel the PVC prior to deleting the PostgresCluster custom resource:
```shell
kubectl label pvc hippo-pgbr-repo \
  postgres-operator.crunchydata.com/cluster- \
  postgres-operator.crunchydata.com/pgbackrest-repo- \
  postgres-operator.crunchydata.com/pgbackrest-volume- \
  postgres-operator.crunchydata.com/pgbackrest-
```
You will also need to remove all ownership references from the PVC:
```shell
kubectl patch pvc hippo-pgbr-repo --type='json' -p='[{"op": "remove", "path": "/metadata/ownerReferences"}]'
```
It is recommended to set the reclaim policy for any PVs bound to existing PVCs to `Retain` to ensure data is retained in the event a PVC is accidentally deleted during the upgrade.
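As a sketch, the reclaim policy can be changed on the bound PV; `<pv-name>` below is a placeholder for the PersistentVolume shown in the VOLUME column of `kubectl get pvc`:

```shell
# Keep the underlying volume (and its data) even if the PVC is deleted.
kubectl patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
```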
Your upgrade is now complete! For more information on how to use CPK v5, we recommend reading through the CPK v5 tutorials.