Resize a Postgres Cluster
You did it – the application is a success! Traffic is booming, so much so that you need to add more resources to your Postgres cluster. However, you’re worried that any resize operation may cause downtime and create a poor experience for your end users.
This is where PGO comes in: PGO will help orchestrate rolling out any potentially disruptive changes to your cluster to minimize or eliminate any downtime for your application. To do so, we will assume that you have deployed a high availability Postgres cluster as described in the previous section.
Let’s dive in.
Resize Memory and CPU
Memory and CPU resources are an important component of vertically scaling your Postgres cluster. Coupled with tweaks to your Postgres configuration, allotting more memory and CPU to your cluster can help it perform better under load.
It’s important for instances in the same high availability set to have the same resources. PGO lets you adjust CPU and memory within the spec.instances.resources section of the postgresclusters.postgres-operator.crunchydata.com custom resource. The layout of spec.instances.resources should be familiar: it follows the same pattern as the standard Kubernetes structure for setting container resources.
For example, let’s say we want to update our hippo Postgres cluster so that each instance has a limit of 2.0 CPUs and 4Gi of memory. We can make the following changes to the manifest:
apiVersion: postgres-operator.crunchydata.com/v1beta1
kind: PostgresCluster
metadata:
  name: hippo
spec:
  image: registry.developers.crunchydata.com/crunchydata/crunchy-postgres-ha:centos8-13.3-0
  postgresVersion: 13
  instances:
    - name: instance1
      replicas: 2
      resources:
        limits:
          cpu: 2.0
          memory: 4Gi
      dataVolumeClaimSpec:
        accessModes:
        - "ReadWriteOnce"
        resources:
          requests:
            storage: 1Gi
  backups:
    pgbackrest:
      image: registry.developers.crunchydata.com/crunchydata/crunchy-pgbackrest:centos8-2.33-0
      repoHost:
        dedicated: {}
      repos:
      - name: repo1
        volume:
          volumeClaimSpec:
            accessModes:
            - "ReadWriteOnce"
            resources:
              requests:
                storage: 1Gi
(If you are on OpenShift, ensure that spec.openshift is set to true.)
In particular, we added the following to spec.instances:
resources:
  limits:
    cpu: 2.0
    memory: 4Gi
Apply these updates to your Kubernetes cluster with the following command:
kubectl apply -k kustomize/postgres
Now, let’s watch how the rollout happens:
watch "kubectl -n postgres-operator get pods \
--selector=postgres-operator.crunchydata.com/cluster=hippo,postgres-operator.crunchydata.com/instance \
-o=jsonpath='{range .items[*]}{.metadata.name}{\"\t\"}{.metadata.labels.postgres-operator\.crunchydata\.com/role}{\"\t\"}{.status.phase}{\"\t\"}{.spec.containers[].resources.limits}{\"\n\"}{end}'"
Observe how each Pod is terminated one-at-a-time. This is part of a “rolling update”. Because updating the resources of a Pod is a destructive action, PGO first applies the CPU and memory changes to the replicas. PGO ensures that the changes are successfully applied to a replica instance before moving on to the next replica.
Once all of the changes are applied, PGO will perform a “controlled switchover”: it will promote a replica to become a primary, and apply the changes to the final Postgres instance.
By rolling out the changes in this way, PGO ensures there is minimal to zero disruption to your application: you are able to successfully roll out updates and your users may not even notice!
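If you would like to confirm which Pod holds the primary role once the switchover is complete, one option is to list the cluster’s Pods with the role label shown as an extra column. This is a minimal sketch that reuses the cluster and role labels from the watch command above; the -L flag of kubectl get simply adds the label’s value as an output column:
# Show the hippo Pods along with each Pod's role label (primary vs. replica)
kubectl -n postgres-operator get pods \
  --selector=postgres-operator.crunchydata.com/cluster=hippo \
  -L postgres-operator.crunchydata.com/role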
Resize PVC
Your application is a success! Your data continues to grow, and it’s becoming apparent that you need more disk. That’s great: you can resize your PVCs directly on your postgresclusters.postgres-operator.crunchydata.com custom resource with minimal to zero downtime.
PVC resizing, also known as volume expansion, is a function of your storage class: it must support volume resizing. Additionally, PVCs can only be sized up: you cannot shrink the size of a PVC.
You can adjust the PVC size for any of the storage that PGO manages for a Postgres cluster using Kubernetes storage. This includes:
- spec.instances.dataVolumeClaimSpec.resources.requests.storage: the Postgres data directory (aka your database).
- spec.backups.pgbackrest.repos.volume.volumeClaimSpec.resources.requests.storage: the pgBackRest repository when using “volume” storage.
The above should be familiar: it follows the same pattern as the standard Kubernetes PVC structure.
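Before resizing anything, it can be helpful to see the PVCs PGO currently manages for the cluster and their sizes. A small sketch, assuming PGO applies the same postgres-operator.crunchydata.com/cluster label to the PVCs it creates (if the selector matches nothing in your environment, simply list all PVCs in the namespace):
# List the hippo cluster's PVCs and their current capacity
kubectl -n postgres-operator get pvc \
  --selector=postgres-operator.crunchydata.com/cluster=hippo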
For example, let’s say we want to update our hippo Postgres cluster so that each instance now uses a 10Gi PVC and our backup repository uses a 20Gi PVC. We can do so with the following manifest:
apiVersion: postgres-operator.crunchydata.com/v1beta1
kind: PostgresCluster
metadata:
  name: hippo
spec:
  image: registry.developers.crunchydata.com/crunchydata/crunchy-postgres-ha:centos8-13.3-0
  postgresVersion: 13
  instances:
    - name: instance1
      replicas: 2
      resources:
        limits:
          cpu: 2.0
          memory: 4Gi
      dataVolumeClaimSpec:
        accessModes:
        - "ReadWriteOnce"
        resources:
          requests:
            storage: 10Gi
  backups:
    pgbackrest:
      image: registry.developers.crunchydata.com/crunchydata/crunchy-pgbackrest:centos8-2.33-0
      repoHost:
        dedicated: {}
      repos:
      - name: repo1
        volume:
          volumeClaimSpec:
            accessModes:
            - "ReadWriteOnce"
            resources:
              requests:
                storage: 20Gi
(If you are on OpenShift, ensure that spec.openshift is set to true.)
In particular, we added the following to spec.instances:
dataVolumeClaimSpec:
  resources:
    requests:
      storage: 10Gi
and added the following to spec.backups.pgbackrest.repos.volume:
volumeClaimSpec:
  accessModes:
  - "ReadWriteOnce"
  resources:
    requests:
      storage: 20Gi
Apply these updates to your Kubernetes cluster with the following command:
kubectl apply -k kustomize/postgres
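As with the CPU and memory change, you can keep an eye on the rollout. A minimal sketch, following the same watch pattern used earlier and again assuming the cluster label is present on the PVCs, is to watch the CAPACITY column until it reflects the new sizes:
watch "kubectl -n postgres-operator get pvc \
  --selector=postgres-operator.crunchydata.com/cluster=hippo"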
Troubleshooting
Postgres Pod Can’t Be Scheduled
There are many reasons why a PostgreSQL Pod may not be scheduled:
- Resources are unavailable. Ensure that you have a Kubernetes Node with enough resources to satisfy your memory or CPU Request.
- PVC cannot be provisioned. Ensure that you request a PVC size that is available, or that your PVC storage class is set up correctly.
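To dig into why the scheduler rejected a particular Pod, inspect that Pod’s events. In the sketch below, <pod-name> is a placeholder for one of your hippo Pods:
# The Events section at the end of the output explains why the Pod is Pending
kubectl -n postgres-operator describe pod <pod-name>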
PVCs Do Not Resize
Ensure that your storage class supports PVC resizing. You can check that by inspecting the allowVolumeExpansion attribute:
kubectl get sc
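Depending on your kubectl version, the output may already include an ALLOWVOLUMEEXPANSION column. If it does not, you can request the attribute explicitly; the column names in this sketch are arbitrary:
kubectl get sc -o custom-columns=NAME:.metadata.name,ALLOWVOLUMEEXPANSION:.allowVolumeExpansion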
Next Steps
You’ve now resized your Postgres cluster, but how can you configure Postgres to take advantage of the new resources? Let’s look at how we can customize the Postgres cluster configuration.