Customize a Postgres Cluster
Postgres is known for its ease of customization; PGO helps you roll out changes efficiently and without disruption. After resizing the resources for our Postgres cluster in the previous step of this tutorial, let's see how we can tweak our Postgres configuration to optimize its use of them.
Custom Postgres Configuration
Part of the trick of managing multiple instances in a Postgres cluster is ensuring all of the configuration changes are propagated to each of them. This is where PGO helps: when you make a Postgres configuration change for a cluster, PGO will apply the changes to all of the managed instances.
For example, in our previous step we added CPU and memory limits of 2.0 and 4Gi respectively. Let's tweak some of the Postgres settings to better use our new resources. We can do this in the spec.patroni.dynamicConfiguration section. Here is an example updated manifest that tweaks several settings:
apiVersion: postgres-operator.crunchydata.com/v1beta1
kind: PostgresCluster
metadata:
  name: hippo
spec:
  image: registry.developers.crunchydata.com/crunchydata/crunchy-postgres:centos8-13.5-0
  postgresVersion: 13
  instances:
    - name: instance1
      replicas: 2
      resources:
        limits:
          cpu: 2.0
          memory: 4Gi
      dataVolumeClaimSpec:
        accessModes:
        - "ReadWriteOnce"
        resources:
          requests:
            storage: 1Gi
  backups:
    pgbackrest:
      image: registry.developers.crunchydata.com/crunchydata/crunchy-pgbackrest:centos8-2.36-0
      repos:
      - name: repo1
        volume:
          volumeClaimSpec:
            accessModes:
            - "ReadWriteOnce"
            resources:
              requests:
                storage: 1Gi
  patroni:
    dynamicConfiguration:
      postgresql:
        parameters:
          max_parallel_workers: 2
          max_worker_processes: 2
          shared_buffers: 1GB
          work_mem: 2MB
In particular, we added the following to spec:

patroni:
  dynamicConfiguration:
    postgresql:
      parameters:
        max_parallel_workers: 2
        max_worker_processes: 2
        shared_buffers: 1GB
        work_mem: 2MB
Apply these updates to your Kubernetes cluster with the following command:
kubectl apply -k kustomize/postgres
PGO will apply these settings to all of the Postgres instances in the cluster. You can verify that the changes are present using the Postgres SHOW command, e.g.
SHOW work_mem;
should yield something similar to:
work_mem
----------
2MB
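If you are not already connected to your cluster, one way to run this check is to execute psql directly in the primary Postgres Pod. This is just a sketch: the Pod name below is hypothetical, and it assumes the postgres-operator Namespace and the database container used by PGO.

kubectl -n postgres-operator exec -it hippo-instance1-abcd-0 -c database -- psql -c 'SHOW work_mem;'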
Customize TLS
All connections in PGO use TLS to encrypt communication between components. PGO sets up a PKI and certificate authority (CA) that allow you to create verifiable endpoints. However, you may want to bring a different TLS infrastructure based upon your organizational requirements. The good news: PGO lets you do this!
If you want to use the TLS infrastructure that PGO provides, you can skip the rest of this section and move on to learning how to apply software updates.
How to Customize TLS
There are a few different TLS endpoints that can be customized in PGO, including the Postgres cluster itself and how Postgres instances authenticate with each other. Let's look at how we can customize TLS.
Your TLS certificate should have a Common Name (CN) setting that matches the primary Service name. This is the name of the cluster suffixed with -primary. For example, for our hippo cluster this would be hippo-primary.
To customize the TLS for a Postgres cluster, you will need to create a Secret in the Namespace of your Postgres cluster that contains the TLS key (tls.key), TLS certificate (tls.crt), and the CA certificate (ca.crt) to use. The Secret should contain the following values:
data:
  ca.crt: <value>
  tls.crt: <value>
  tls.key: <value>
For example, if you have files named ca.crt, hippo.key, and hippo.crt stored on your local machine, you could run the following command:
kubectl create secret generic -n postgres-operator hippo.tls \
--from-file=ca.crt=ca.crt \
--from-file=tls.key=hippo.key \
--from-file=tls.crt=hippo.crt
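If you do not already have a certificate and key on hand, here is a minimal sketch of generating a self-signed CA and a certificate whose Common Name matches the primary Service name using openssl. The file names, subjects, and validity period are illustrative only:

# generate a self-signed CA certificate and key (illustrative subject and validity)
openssl req -x509 -nodes -newkey rsa:2048 -days 365 \
  -keyout ca.key -out ca.crt -subj "/CN=hippo-ca"
# generate a key and signing request with a CN matching the primary Service
openssl req -new -nodes -newkey rsa:2048 \
  -keyout hippo.key -out hippo.csr -subj "/CN=hippo-primary"
# sign the request with the CA to produce hippo.crt
openssl x509 -req -in hippo.csr -CA ca.crt -CAkey ca.key \
  -CAcreateserial -out hippo.crt -days 365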
You can specify the custom TLS Secret in the spec.customTLSSecret.name field in your postgrescluster.postgres-operator.crunchydata.com custom resource, e.g.:
spec:
  customTLSSecret:
    name: hippo.tls
If you’re unable to control the key-value pairs in the Secret, you can create a mapping that looks similar to this:
spec:
  customTLSSecret:
    name: hippo.tls
    items:
      - key: <tls.crt key>
        path: tls.crt
      - key: <tls.key key>
        path: tls.key
      - key: <ca.crt key>
        path: ca.crt
If spec.customTLSSecret is provided, you must also provide spec.customReplicationTLSSecret, and both must contain the same ca.crt.
As with the other changes, you can roll out the TLS customizations with kubectl apply.
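To double-check that the certificate stored in your Secret carries the expected Common Name, you can decode it and inspect its subject. This sketch assumes the certificate is stored under the standard tls.crt key:

kubectl -n postgres-operator get secret hippo.tls \
  -o jsonpath='{.data.tls\.crt}' | base64 -d | openssl x509 -noout -subject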
Labels
There are several ways to add your own custom Kubernetes Labels to your Postgres cluster.
- Cluster: You can apply labels to any PGO managed object in a cluster by editing the spec.metadata.labels section of the custom resource.
- Postgres: You can apply labels to a Postgres instance set and its objects by editing spec.instances.metadata.labels.
- pgBackRest: You can apply labels to pgBackRest and its objects by editing postgresclusters.spec.backups.pgbackrest.metadata.labels.
- PgBouncer: You can apply labels to PgBouncer connection pooling instances by editing spec.proxy.pgBouncer.metadata.labels.
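For example, here is a sketch of where these labels fields sit in the spec; the label keys and values are illustrative:

spec:
  metadata:
    labels:
      example.com/team: hippo-dba
  instances:
    - name: instance1
      metadata:
        labels:
          example.com/tier: database
  backups:
    pgbackrest:
      metadata:
        labels:
          example.com/tier: backups
  proxy:
    pgBouncer:
      metadata:
        labels:
          example.com/tier: pooling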
Annotations
There are several ways to add your own custom Kubernetes Annotations to your Postgres cluster.
- Cluster: You can apply annotations to any PGO managed object in a cluster by editing the spec.metadata.annotations section of the custom resource.
- Postgres: You can apply annotations to a Postgres instance set and its objects by editing spec.instances.metadata.annotations.
- pgBackRest: You can apply annotations to pgBackRest and its objects by editing spec.backups.pgbackrest.metadata.annotations.
- PgBouncer: You can apply annotations to PgBouncer connection pooling instances by editing spec.proxy.pgBouncer.metadata.annotations.
Pod Priority Classes
PGO allows you to use pod priority classes to indicate the relative importance of a pod by setting a priorityClassName field on your Postgres cluster. This can be done as follows:
- Instances: Priority is defined per instance set and is applied to all Pods in that instance set by editing the spec.instances.priorityClassName section of the custom resource.
- Dedicated Repo Host: Priority defined under the repoHost section of the spec is applied to the dedicated repo host by editing the spec.backups.pgbackrest.repoHost.priorityClassName section of the custom resource.
- PgBouncer: Priority is defined under the pgBouncer section of the spec and will apply to all PgBouncer Pods by editing the spec.proxy.pgBouncer.priorityClassName section of the custom resource.
- Backup (manual and scheduled): Priority is defined under the spec.backups.pgbackrest.jobs.priorityClassName section and applies that priority to all pgBackRest backup Jobs (manual and scheduled).
- Restore (data source or in-place): Priority is defined for either a "data source" restore or an in-place restore by editing the spec.dataSource.postgresCluster.priorityClassName section of the custom resource.
- Data Migration: The priority defined for the first instance set in the spec (array position 0) is used for the PGDATA and WAL migration Jobs. The pgBackRest repo migration Job will use the priority class applied to the repoHost.
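As a sketch, here is where a few of these priorityClassName fields sit in the spec; the class name high-priority is illustrative and must already exist as a PriorityClass in your Kubernetes cluster:

spec:
  instances:
    - name: instance1
      priorityClassName: high-priority
  backups:
    pgbackrest:
      repoHost:
        priorityClassName: high-priority
      jobs:
        priorityClassName: high-priority
  proxy:
    pgBouncer:
      priorityClassName: high-priority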
Separate WAL PVCs
PostgreSQL commits transactions by storing changes in its Write-Ahead Log (WAL). Because the way WAL files are accessed and utilized often differs from that of data files, particularly in high-performance situations, it can be desirable to put WAL files on a separate storage volume. With PGO, this can be done by adding the walVolumeClaimSpec block to your desired instance in your PostgresCluster spec, either when your cluster is created or anytime thereafter:
spec:
  instances:
    - name: instance
      walVolumeClaimSpec:
        accessModes:
        - "ReadWriteMany"
        resources:
          requests:
            storage: 1Gi
This volume can be removed later by removing the walVolumeClaimSpec section from the instance. Note that when changing the WAL directory, care is taken so as not to lose any WAL files. PGO only deletes the PVC once there are no longer any WAL files on the previously configured volume.
Database Initialization SQL
PGO can run SQL for you as part of the cluster creation and initialization process. PGO runs the SQL using the psql client so you can use meta-commands to connect to different databases, change error handling, or set and use variables. Its capabilities are described in the psql documentation.
Initialization SQL ConfigMap
The Postgres cluster spec accepts a reference to a ConfigMap containing your init SQL file. Update your cluster spec to include the ConfigMap name, spec.databaseInitSQL.name, and the data key, spec.databaseInitSQL.key, for your SQL file. For example, if you create your ConfigMap with the following command:
kubectl -n postgres-operator create configmap hippo-init-sql --from-file=init.sql=/path/to/init.sql
You would add the following section to your PostgresCluster spec:
spec:
  databaseInitSQL:
    key: init.sql
    name: hippo-init-sql
After you add the ConfigMap reference to your spec, apply the change with kubectl apply -k kustomize/postgres. PGO will create your hippo cluster and run your initialization SQL once the cluster has started. You can verify that your SQL has been run by checking the databaseInitSQL status on your Postgres cluster. While the status is set, your init SQL will not be run again. You can check cluster status with the kubectl describe command:
kubectl -n postgres-operator describe postgresclusters.postgres-operator.crunchydata.com hippo
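If you prefer a targeted check over reading the full describe output, a jsonpath query can pull out just that field. This assumes the status is surfaced at .status.databaseInitSQL as described above:

kubectl -n postgres-operator get postgresclusters.postgres-operator.crunchydata.com hippo \
  -o jsonpath='{.status.databaseInitSQL}'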
Now that databaseInitSQL is defined in your cluster status, verify database objects have been created as expected. After verifying, we recommend removing the spec.databaseInitSQL field from your spec. Removing the field from the spec will also remove databaseInitSQL from the cluster status.
PSQL Usage
PGO uses the psql interactive terminal to execute SQL statements in your database. Statements are passed in using standard input and the filename flag (e.g. psql -f -).
SQL statements are executed as superuser in the default maintenance database. This means you have full control to create database objects, extensions, or run any SQL statements that you might need.
Integration with User and Database Management
If you are creating users or databases, please see the User/Database Management documentation. Databases created through the user management section of the spec can be referenced in your initialization SQL. For example, if a database zoo is defined:
spec:
  users:
    - name: hippo
      databases:
        - "zoo"
You can connect to zoo by adding the following psql meta-command to your SQL:
\c zoo
create table t_zoo as select s, md5(random()::text) from generate_Series(1,5) s;
Transaction support
By default, psql commits each SQL command as it completes. To combine multiple commands into a single transaction, use the BEGIN and COMMIT commands.
BEGIN;
create table t_random as select s, md5(random()::text) from generate_Series(1,5) s;
COMMIT;
PSQL Exit Code and Database Init SQL Status
The exit code from psql determines when the databaseInitSQL status is set. When psql returns 0, the status will be set and the SQL will not be run again. When psql returns with an error exit code, the status will not be set. PGO will continue attempting to execute the SQL as part of its reconcile loop until psql returns normally. If psql exits with a failure, you will need to edit the file in your ConfigMap to ensure your SQL statements will lead to a successful psql return. The easiest way to make live changes to your ConfigMap is to use the following kubectl edit command:
kubectl -n <cluster-namespace> edit configmap hippo-init-sql
Be sure to transfer any changes back over to your local file. Another option is to make changes in your local file and use kubectl --dry-run to create a template and pipe the output into kubectl apply:
kubectl create configmap hippo-init-sql --from-file=init.sql=/path/to/init.sql --dry-run=client -o yaml | kubectl apply -f -
To ensure that psql returns a failure exit code when your SQL commands fail, set the ON_ERROR_STOP variable as part of your SQL file:
\set ON_ERROR_STOP
\echo Any error will lead to exit code 3
create table t_random as select s, md5(random()::text) from generate_Series(1,5) s;
Troubleshooting
Changes Not Applied
If your Postgres configuration settings are not present, you may need to check a few things. First, ensure that you are using the syntax that Postgres expects. You can see this in the Postgres configuration documentation.
Some settings, such as shared_buffers, require a Postgres restart to take effect. Patroni only performs a reload when parameter changes are identified. Therefore, for parameters that require a restart, the restart can be performed manually by executing into a Postgres instance and running patronictl restart --force <clusterName>-ha.
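Put together, a manual restart of the hippo cluster might look like the following sketch; the Pod name is hypothetical, and it assumes the database container in a Postgres instance Pod:

kubectl -n postgres-operator exec -it hippo-instance1-abcd-0 -c database -- \
  patronictl restart --force hippo-ha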
Next Steps
You've now seen how you can further customize your Postgres cluster, but what about managing users and databases? That's a great question that is answered in the next section.