# Monitoring
While having high availability and disaster recovery systems in place helps in the event of something going wrong with your PostgreSQL cluster, monitoring helps you anticipate problems before they happen. Monitoring can also help you diagnose and resolve issues that may cause degraded performance.
## Adding the Exporter Sidecar

Let's look at how we can add the Crunchy Postgres Exporter sidecar to your cluster using the `kustomize/postgres` example in the Postgres Operator examples repository.
Monitoring tools are added using the `spec.monitoring` section of the custom resource. Currently, the only monitoring tool supported is the Crunchy PostgreSQL Exporter configured with pgMonitor.
In the `kustomize/postgres/postgres.yaml` file, add the following YAML to the spec:

```yaml
monitoring:
  pgmonitor:
    exporter: {}
```
Save your changes and run:

```shell
kubectl apply -k kustomize/postgres
```
PGO will detect the change and add the Exporter sidecar to all Postgres Pods that exist in your cluster. PGO will also configure the Exporter to connect to the database and gather metrics. These metrics can be accessed using the PGO Monitoring stack.
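To confirm the sidecar was added, you can list the containers on your Postgres Pods. This is a sketch, not part of the official workflow; it assumes a PostgresCluster named `hippo` in the `postgres-operator` namespace:

```shell
# List each Postgres Pod and its containers; once PGO reconciles the change,
# the exporter sidecar appears alongside the database container.
# (assumes a cluster named "hippo" in the "postgres-operator" namespace)
kubectl -n postgres-operator get pods \
  --selector=postgres-operator.crunchydata.com/cluster=hippo \
  -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.spec.containers[*].name}{"\n"}{end}'
```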
## Setting a custom `ccp_monitoring` password

The `postgres_exporter` process uses the `ccp_monitoring` username and password to gather metrics from Postgres. Because these credentials are only used within a cluster, they can normally be generated by PGO without user intervention. There are some cases, such as standby monitoring, where a user might need to manually configure the `ccp_monitoring` password.
To update the `ccp_monitoring` password for a PostgresCluster, you will need to edit the `$CLUSTER_NAME-monitoring` Secret. The following command will open up an editor with the contents of the monitoring Secret:

```shell
kubectl edit secret $CLUSTER_NAME-monitoring
```
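If you prefer a non-interactive update, the same change can be made with `kubectl patch`. This is a sketch under the assumption that a JSON merge patch is acceptable for your workflow; `new-password` is a placeholder:

```shell
# Set a new ccp_monitoring password without opening an editor.
# Values under stringData are base64-encoded by Kubernetes automatically;
# setting "verifier" to null removes the stale verifier so it is regenerated.
kubectl patch secret "$CLUSTER_NAME-monitoring" --type merge \
  --patch '{"stringData": {"password": "new-password"}, "data": {"verifier": null}}'
```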
The editor will look something like this:
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: $CLUSTER_NAME-monitoring
  labels:
    postgres-operator.crunchydata.com/cluster: $CLUSTER_NAME
    postgres-operator.crunchydata.com/role: monitoring
data:
  password: cGFzc3dvcmQ=
  verifier: $sha
```
To set a password, you can remove the entire `data` section (including both the `password` and `verifier` fields) and replace it with the `stringData` field:

```yaml
stringData:
  password: $NEW_PASSWORD
```
Note: The `stringData` field is a Kubernetes feature that allows you to provide a plain-text value to a Secret, which is then encoded like the `data` field. This field is described in the Kubernetes documentation.
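To illustrate the relationship between `stringData` and `data`: the encoded value in the example Secret above is simply the base64 encoding of a plain-text password, which you can verify locally:

```shell
# stringData values are stored base64-encoded under data.
# "cGFzc3dvcmQ=" from the example Secret decodes to "password".
echo -n 'password' | base64
echo 'cGFzc3dvcmQ=' | base64 --decode
```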
After you save this change, the Secret will be updated and the change will make its way into the Pod. The new Secret files will be updated on the file system and the `postgres_exporter` process will be restarted, which may take a minute or two. Once the process has restarted, `postgres_exporter` will query the database using the updated password.
## Configuring TLS Encryption for the Exporter

PGO allows you to configure the exporter sidecar to use TLS encryption. To do so, provide a custom TLS Secret via the exporter spec:

```yaml
monitoring:
  pgmonitor:
    exporter:
      customTLSSecret:
        name: hippo.tls
```
Like other custom TLS Secrets that can be configured with PGO, the Secret will need to be created in the same Namespace as your PostgresCluster. It should also contain the TLS key (`tls.key`) and TLS certificate (`tls.crt`) needed to enable encryption:

```yaml
data:
  tls.crt: $VALUE
  tls.key: $VALUE
```
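One way to create such a Secret is with `kubectl create secret tls`, assuming you already have a certificate and key on disk. The file names and namespace here are placeholders:

```shell
# Create the custom TLS Secret in the same namespace as the PostgresCluster.
# tls.crt and tls.key are placeholder paths to your certificate and key files.
kubectl -n postgres-operator create secret tls hippo.tls \
  --cert=tls.crt --key=tls.key
```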
After you configure TLS for the exporter, you will need to update your Prometheus deployment to use TLS, and your connection to the exporter will be encrypted. Check out the Prometheus documentation for more information on configuring TLS for Prometheus.
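As a rough sketch of what the Prometheus side might look like, a scrape job can be switched to HTTPS with a `tls_config` block. The job name and certificate path below are assumptions for illustration, not PGO defaults; consult the Prometheus documentation for the authoritative options:

```yaml
scrape_configs:
  - job_name: "crunchy-postgres-exporter"    # hypothetical job name
    scheme: https
    tls_config:
      ca_file: /etc/prometheus/certs/ca.crt  # CA that signed the exporter's tls.crt
      # insecure_skip_verify: true           # only for testing self-signed certs
```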
## Custom Queries for the Exporter
Out of the box, the exporter is set up with default queries that provide valuable information about your PostgresClusters. Sometimes, however, you will want to provide your own custom queries to retrieve metrics that are not included in the defaults. Luckily, PGO has you covered.
The first thing you will need to figure out when implementing your own custom queries is whether you want to completely swap out the default queries or add your queries to the defaults that Crunchy Data provides.
### Using Your Own Custom Set
If you wish to completely swap out the Crunchy-provided default queries with your own set, you will need to start by putting all of the queries that you wish to run in a YAML file named `queries.yml`. You can use the queries files found in the pgMonitor repository as guidance for the proper format. This file should then be placed in a ConfigMap. For example, we could run the following command:

```shell
kubectl create configmap my-custom-queries --from-file=path/to/file/queries.yml -n postgres-operator
```
This creates a ConfigMap named `my-custom-queries` in the `postgres-operator` namespace, holding the `queries.yml` file found at the relative path `path/to/file`.
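For reference, a minimal `queries.yml` in the postgres_exporter format used by pgMonitor might look like the following. The metric name and query here are illustrative only, not part of the Crunchy defaults:

```yaml
# Illustrative custom query: exposes the number of backend connections.
ccp_connection_count:
  query: "SELECT count(*) AS total FROM pg_stat_activity"
  metrics:
    - total:
        usage: "GAUGE"
        description: "Total number of backend connections"
```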
Once the ConfigMap is created, you simply need to tell PGO the name of the ConfigMap by editing your PostgresCluster Spec:
```yaml
monitoring:
  pgmonitor:
    exporter:
      configuration:
        - configMap:
            name: my-custom-queries
```
Once the spec is applied, the exporter will be restarted and your new metrics will be available. If you later make a change to the custom queries in the ConfigMap, the exporter process will again be restarted and the new queries used once a difference is detected in the ConfigMap.
### Append Your Custom Queries to the Defaults

Starting with Postgres Operator 5.5, you can easily append custom queries to the Crunchy Data defaults! To do this, follow the same three easy steps that we just went through:

1. Put your desired queries in a YAML file named `queries.yml`.
2. Create a ConfigMap that holds the `queries.yml` file.
3. Tell PGO the name of your ConfigMap using the `monitoring.pgmonitor.exporter.configuration` spec.
The additional step that tells PGO to append the queries rather than swapping them out is to enable the `AppendCustomQueries` feature gate.
PGO feature gates are enabled by setting the `PGO_FEATURE_GATES` environment variable on the PGO Deployment. To enable the appending of custom queries, you would set:

```shell
PGO_FEATURE_GATES="AppendCustomQueries=true"
```
Please note that it is possible to enable more than one feature at a time, as this variable accepts a comma-delimited list. For example, to enable multiple features, you would set `PGO_FEATURE_GATES` like so:

```shell
PGO_FEATURE_GATES="FeatureName=true,FeatureName2=true,FeatureName3=true..."
```
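One way to set this environment variable on a running operator is with `kubectl set env`. This sketch assumes PGO was installed as a Deployment named `pgo` in the `postgres-operator` namespace; adjust both to match your installation:

```shell
# Enable the AppendCustomQueries feature gate on the PGO Deployment.
# The Deployment name and namespace are assumptions about your install.
kubectl -n postgres-operator set env deployment/pgo \
  PGO_FEATURE_GATES="AppendCustomQueries=true"
```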
## Accessing the Metrics
Once the Crunchy PostgreSQL Exporter has been enabled in your cluster, follow the steps outlined in PGO Monitoring to install the monitoring stack. This will allow you to deploy a pgMonitor configuration of Prometheus, Grafana, and Alertmanager monitoring tools in Kubernetes. These tools will be set up by default to connect to the Exporter containers on your Postgres Pods.
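To spot-check the metrics endpoint directly, you can port-forward to a Postgres Pod and query the exporter. This sketch assumes the exporter listens on postgres_exporter's default port 9187; the Pod name is a placeholder:

```shell
# Forward the exporter port locally, then fetch a sample of the metrics.
# Find your actual Pod name with "kubectl get pods".
kubectl -n postgres-operator port-forward pod/hippo-instance1-abcd-0 9187:9187 &
curl -s http://localhost:9187/metrics | head
```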
## Next Steps
Now that we can monitor our cluster, let's look at how we can customize the Postgres cluster configuration.