5.0.3
Crunchy Data announces the release of Crunchy Postgres for Kubernetes 5.0.3.
Crunchy Postgres for Kubernetes is powered by PGO, the open source Postgres Operator from Crunchy Data. PGO is released in conjunction with the Crunchy Container Suite.
Crunchy Postgres for Kubernetes 5.0.3 includes the following software version upgrades:
- PostgreSQL 14 is now available.
- pgBackRest is updated to version 2.35.
- Patroni is updated to version 2.1.1.
- The pgAudit extension is now at version 1.6.0.
- The pgAudit Analyze extension is now at version 1.0.8.
- The pgnodemx extension is now at version 1.0.5.
- The set_user extension is now at version 3.0.0.
- The wal2json extension is now at version 2.4.
Read more about how you can get started with Crunchy Postgres for Kubernetes. We recommend forking the Postgres Operator examples repo.
Features
- The Postgres containers are renamed. `crunchy-postgres-ha` is now `crunchy-postgres`, and `crunchy-postgres-gis-ha` is now `crunchy-postgres-gis`.
- Some network filesystems are sensitive to Linux user and group permissions. Process GIDs can now be configured through `PostgresCluster.spec.supplementalGroups` for when your PVs don't advertise their GID requirements.
- A replica service is now automatically reconciled for access to Postgres replicas within a cluster.
- The Postgres primary service and PgBouncer service can now each be configured to have either a `ClusterIP`, `NodePort` or `LoadBalancer` service type. Suggested by Bryan A. S. (@bryanasdev000).
- Pod Topology Spread Constraints can now be specified for Postgres instances, the pgBackRest dedicated repository host as well as PgBouncer. Suggested by Annette Clewett.
- Default topology spread constraints are included to ensure PGO always attempts to deploy a high availability cluster architecture.
- PGO can now execute a custom SQL script when initializing a Postgres cluster.
- Custom resource requests and limits are now configurable for all `init` containers, therefore ensuring the desired Quality of Service (QoS) class can be assigned to the various Pods comprising a cluster.
- Custom resource requests and limits are now configurable for all Jobs created for a `PostgresCluster`.
- A Pod Priority Class is configurable for the Pods created for a `PostgresCluster`.
- An `imagePullPolicy` can now be configured for Pods created for a `PostgresCluster`.
- Existing `PGDATA`, Write-Ahead Log (WAL) and pgBackRest repository volumes can now be migrated from PGO v4 to PGO v5 by specifying a `volumes` data source when creating a `PostgresCluster`.
- There is now a migration guide available for moving Postgres clusters from PGO v4 to PGO v5.
- The pgAudit extension is now enabled by default in all clusters.
- There is now additional validation for PVC definitions within the `PostgresCluster` spec to ensure successful PVC reconciliation.
- Postgres server certificates are now automatically reloaded when they change.
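Several of the new fields above can be combined in a single manifest. The following is an illustrative sketch only, not a prescribed configuration: the cluster name `hippo`, the ConfigMap name `hippo-init-sql`, and the storage and replica values are placeholders.

```yaml
apiVersion: postgres-operator.crunchydata.com/v1beta1
kind: PostgresCluster
metadata:
  name: hippo
spec:
  postgresVersion: 14
  # Expose the primary through a LoadBalancer service type
  service:
    type: LoadBalancer
  # Supplemental GID for network filesystems that require it
  supplementalGroups:
  - 65534
  # Run a custom SQL script at cluster initialization,
  # read from a key in a ConfigMap
  databaseInitSQL:
    name: hippo-init-sql
    key: init.sql
  instances:
  - name: instance1
    replicas: 2
    dataVolumeClaimSpec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi
  backups:
    pgbackrest:
      repos:
      - name: repo1
        volume:
          volumeClaimSpec:
            accessModes:
            - ReadWriteOnce
            resources:
              requests:
                storage: 1Gi
```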
Changes
- The supplemental group `65534` is no longer applied by default. Upgrading the operator will perform a rolling update on all `PostgresCluster` custom resources to remove it.

  If you need this GID for your network filesystem, you should perform the following steps when upgrading:

  1. Before deploying the new operator, deploy the new CRD. You can get the new CRD from the Postgres Operator Examples repository and execute the following command:

     ```console
     $ kubectl apply -k kustomize/install
     ```

  2. Add the group to your existing `PostgresCluster` custom resource:

     ```console
     $ kubectl edit postgrescluster/hippo
     ```

     ```yaml
     kind: PostgresCluster
     …
     spec:
       supplementalGroups:
       - 65534
     …
     ```

     _or_

     ```console
     $ kubectl patch postgrescluster/hippo --type=merge --patch='{"spec":{"supplementalGroups":[65534]}}'
     ```

     or by modifying `spec.supplementalGroups` in your manifest.

  3. Deploy the new operator. If you are using an up-to-date version of the manifest, you can run:

     ```console
     $ kubectl apply -k kustomize/install
     ```
- A dedicated pgBackRest repository host is now only deployed if a `volume` repository is configured. This means that deployments that use only cloud-based (`s3`, `gcs`, `azure`) repos will no longer see a dedicated repository host, nor will `SSHD` run within that Postgres cluster. As a result of this change, the `spec.backups.pgbackrest.repoHost.dedicated` section is removed from the `PostgresCluster` spec, and all settings within it are consolidated under the `spec.backups.pgbackrest.repoHost` section. When upgrading, please update the `PostgresCluster` spec to ensure any settings from the `spec.backups.pgbackrest.repoHost.dedicated` section are moved into the `spec.backups.pgbackrest.repoHost` section.
- PgBouncer now uses SCRAM when authenticating into Postgres.
- Generated Postgres certificates include the FQDN and other local names of the primary Postgres service. To regenerate the certificate of an existing cluster, delete the `tls.key` field from its certificate secret. Suggested by @ackerr01.
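For the repository host change above, settings formerly nested under `dedicated` now sit directly under `repoHost`. A minimal before/after sketch (the resource values shown are illustrative):

```yaml
# Before (PGO v5.0.2 and earlier):
# spec:
#   backups:
#     pgbackrest:
#       repoHost:
#         dedicated:
#           resources:
#             requests:
#               cpu: 100m

# After (PGO v5.0.3):
spec:
  backups:
    pgbackrest:
      repoHost:
        resources:
          requests:
            cpu: 100m
```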
Fixes
- Validation for the `PostgresCluster` spec is updated to ensure at least one repo is always defined for section `spec.backups.pgbackrest.repos`.
- A restore will now complete successfully if `max_connections` and/or `max_worker_processes` is configured to a value higher than the default when backing up the Postgres database. Reported by Tiberiu Patrascu (@tpatrascu).
- The installation documentation now properly defines how to set the `PGO_TARGET_NAMESPACE` environment variable for a single namespace installation.
- Ensure the full allocation of shared memory is available to Postgres containers. Reported by Yuyang Zhang (@helloqiu).
- OpenShift auto-detection logic now looks for the presence of the `SecurityContextConstraints` API to avoid false positives when APIs with an `openshift.io` Group suffix are installed in non-OpenShift clusters. Reported by Jean-Daniel.
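The repo validation fix above means every `PostgresCluster` must declare at least one entry under `spec.backups.pgbackrest.repos`. For example, a cloud-only configuration might look like the following sketch (the bucket, endpoint, and region values are placeholders, and the credentials configuration is omitted):

```yaml
spec:
  backups:
    pgbackrest:
      repos:
      - name: repo1           # at least one repo entry is required
        s3:
          bucket: my-bucket
          endpoint: s3.amazonaws.com
          region: us-east-1
```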