Apply Software Updates

Did you know that Postgres releases bug fixes about once every three months? Additionally, we periodically refresh the container images so that the base images carry the latest software, which may include fixes for CVEs.

It's generally good practice to keep your software up-to-date for stability and security purposes, so let's learn how Crunchy Postgres for Kubernetes helps you apply low-risk, "patch" type updates.

Please note that you do not need to update your Postgres and component container images immediately after a Crunchy Postgres for Kubernetes upgrade, though we do recommend updating them as soon as possible to pick up the latest security updates and bug fixes. This lets you choose when to apply updates to each of your Postgres clusters, on your own schedule. And if you have a high availability Postgres cluster, Crunchy Postgres for Kubernetes uses a rolling update to minimize or eliminate any downtime for your application.

To find the Postgres and component images that correspond with your Crunchy Postgres for Kubernetes installation, you can browse the containers page in the Crunchy Data Developer Portal.

Warning

The component image tagging strategy changed, starting with the v5.8.0 release. Please see the Components and Compatibility page for more details.

Applying Minor Postgres Updates

The Postgres image is set using the spec.image field and looks similar to the following:

spec:
  image: registry.developers.crunchydata.com/crunchydata/crunchy-postgres:ubi9-17.4-2513

Diving into the tag a bit further, you will notice the 17.4-2513 portion. This represents the Postgres minor version (17.4) and the patch number of the release (2513). If the patch number is incremented (e.g. 2516), the container has been rebuilt, but there are no changes to the Postgres version. If the minor version is incremented (e.g. 17.5-2516), the container includes a newer bug fix release of Postgres.

To update the image, you just need to modify the spec.image field with the new image reference, e.g.

spec:
  image: registry.developers.crunchydata.com/crunchydata/crunchy-postgres:ubi9-17.5-2516

You can apply the changes using kubectl apply. Similar to the rolling update example when we resized the cluster, the update is first applied to the Postgres replicas, then a controlled switchover occurs, and the final instance is updated.
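
If you prefer to patch the image in place rather than re-apply a full manifest, a one-liner like the following is one way to do it. This is a sketch: the cluster name and namespace follow the hippo example used throughout this tutorial, and the image tag is the one shown above; substitute the values for your environment.

# Update spec.image on the hippo PostgresCluster; the operator then performs a rolling update.
kubectl -n postgres-operator patch postgrescluster hippo --type merge \
  --patch '{"spec":{"image":"registry.developers.crunchydata.com/crunchydata/crunchy-postgres:ubi9-17.5-2516"}}'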

For the hippo cluster, you can see the status of the rollout by running the command below.

Bash:

kubectl -n postgres-operator get pods --selector=postgres-operator.crunchydata.com/cluster=hippo,postgres-operator.crunchydata.com/instance -o=jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.labels.postgres-operator\.crunchydata\.com/role}{"\t"}{.status.phase}{"\t"}{.spec.containers[].image}{"\n"}{end}'

PowerShell:

kubectl -n postgres-operator get pods --selector=postgres-operator.crunchydata.com/cluster=hippo,postgres-operator.crunchydata.com/instance -o=jsonpath="{range .items[*]}{.metadata.name}{'\t'}{.metadata.labels.postgres-operator\.crunchydata\.com/role}{'\t'}{.status.phase}{'\t'}{.spec.containers[].image}{'\n'}{end}"

Or, by running a watch:

Bash:

watch "kubectl -n postgres-operator get pods --selector=postgres-operator.crunchydata.com/cluster=hippo,postgres-operator.crunchydata.com/instance -o=jsonpath='{range .items[*]}{.metadata.name}{\"\t\"}{.metadata.labels.postgres-operator\.crunchydata\.com/role}{\"\t\"}{.status.phase}{\"\t\"}{.spec.containers[].image}{\"\n\"}{end}'"

PowerShell:

kubectl -n postgres-operator get pods --watch --selector=postgres-operator.crunchydata.com/cluster=hippo,postgres-operator.crunchydata.com/instance -o=jsonpath="{range .items[*]}{.metadata.name}{'\t'}{.metadata.labels.postgres-operator\.crunchydata\.com/role}{'\t'}{.status.phase}{'\t'}{.spec.containers[].image}{'\n'}{end}"

Rolling Back Minor Postgres Updates

This methodology also allows you to roll back minor Postgres updates: change the spec.image field to the desired container image, and Crunchy Postgres for Kubernetes will ensure each Postgres instance in the cluster rolls back to that image.

Applying Other Component Updates

There are other components that go into a Crunchy Postgres for Kubernetes Postgres cluster. These include pgBackRest, PgBouncer and others. Each one of these components has its own image: for example, you can find a reference to the pgBackRest image in the spec.backups.pgbackrest.image attribute.

Applying software updates for the other components in a Postgres cluster works similarly to the above. Because these components run as managed Kubernetes workloads, Kubernetes helps manage the rolling update to minimize disruption.
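
For example, a spec excerpt with updated pgBackRest and PgBouncer images might look like the following. The PgBouncer image is set under spec.proxy.pgBouncer.image; the tags shown here are illustrative, so use the image references listed for your release on the Components and Compatibility page.

spec:
  backups:
    pgbackrest:
      image: registry.developers.crunchydata.com/crunchydata/crunchy-pgbackrest:ubi9-2.54.2-2513
  proxy:
    pgBouncer:
      image: registry.developers.crunchydata.com/crunchydata/crunchy-pgbouncer:ubi9-1.24-2513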

Changing Base Images

Warning

Changing the Postgres base image from UBI 8 to UBI 9 can lead to corrupt indexes and other potential problems with your data. A full backup is recommended prior to upgrading your base image, and you should thoroughly check and verify your data once the update is complete.
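
One way to take that backup is to trigger a one-off pgBackRest backup before changing the image. A sketch, assuming spec.backups.pgbackrest.manual is configured for the hippo cluster in the postgres-operator namespace:

# Trigger a one-off pgBackRest backup of the hippo cluster before changing the base image.
kubectl -n postgres-operator annotate postgrescluster hippo --overwrite \
  postgres-operator.crunchydata.com/pgbackrest-backup="$(date)"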

Postgres uses locale information provided by the operating system for sorting text. Changes to that information can lead to erroneous query results and other incorrect behavior. Postgres is able to detect those changes and emit warnings like the following:

WARNING:  collation "my-custom" has version mismatch
DETAIL:  The collation in the database was created using version 2.34, but the operating system provides version 2.28.
HINT:  Rebuild all objects affected by this collation and run ALTER COLLATION "my-custom" REFRESH VERSION

WARNING:  database "postgres" has a collation version mismatch
WARNING:  template database "template1" has a collation version mismatch
DETAIL:  The database was created using collation version 2.34, but the operating system provides version 2.28.
HINT:  Rebuild all objects in this database that use the default collation and run ALTER DATABASE "template1" REFRESH COLLATION VERSION

These warnings indicate you should REINDEX your databases to avoid any possibility of data corruption. At a minimum, in every database in an affected cluster, run REINDEX DATABASE followed by ALTER DATABASE ... REFRESH COLLATION VERSION.
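
A minimal sketch of those commands for the postgres database, run on the current primary (the cluster and namespace names follow the hippo example, and the role=master label selects the current primary; adjust names to match your environment and repeat for every database in the cluster):

# Find the current primary Pod for the hippo cluster.
PRIMARY=$(kubectl -n postgres-operator get pods \
  --selector=postgres-operator.crunchydata.com/cluster=hippo,postgres-operator.crunchydata.com/role=master \
  -o name)

# Reindex and refresh the recorded collation version in the "postgres" database.
kubectl -n postgres-operator exec -i -c database "${PRIMARY}" -- \
  psql -d postgres -c 'REINDEX DATABASE postgres;' \
                   -c 'ALTER DATABASE postgres REFRESH COLLATION VERSION;'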

The Postgres documentation has more detailed instructions for custom collations.

You can always verify your indexes using the included amcheck extension.
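
As a sketch, a query like the following checks every B-tree index in a database (again using the hippo example names; run it in each database you want to verify):

kubectl -n postgres-operator exec -i -c database \
  "$(kubectl -n postgres-operator get pods \
      --selector=postgres-operator.crunchydata.com/cluster=hippo,postgres-operator.crunchydata.com/role=master \
      -o name)" -- psql -d postgres <<'SQL'
-- amcheck is included in the Postgres image; bt_index_check validates B-tree index structure.
CREATE EXTENSION IF NOT EXISTS amcheck;
SELECT c.relname AS index_name, bt_index_check(c.oid)
FROM pg_index i
JOIN pg_class c ON c.oid = i.indexrelid
JOIN pg_am am ON am.oid = c.relam
WHERE am.amname = 'btree';
SQL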