Kustomize
Installing Crunchy Postgres for Kubernetes Monitoring Using Kustomize
This section provides instructions for installing and configuring Crunchy Postgres for Kubernetes Monitoring using Kustomize.
Prerequisites
First, go to GitHub and fork the Postgres Operator examples repository, which contains the Monitoring Kustomize installer.
https://github.com/CrunchyData/postgres-operator-examples/fork
Once you have forked this repo, you can download it to your working environment with a command similar to this:
YOUR_GITHUB_UN="$YOUR_GITHUB_USERNAME"
git clone --depth 1 "git@github.com:${YOUR_GITHUB_UN}/postgres-operator-examples.git"
cd postgres-operator-examples
For Powershell environments:
$env:YOUR_GITHUB_UN="YOUR_GITHUB_USERNAME"
git clone --depth 1 "git@github.com:$env:YOUR_GITHUB_UN/postgres-operator-examples.git"
cd postgres-operator-examples
To add the Crunchy Postgres Exporter sidecar to your cluster, open the kustomize/postgres/postgres.yaml file and add the following YAML to the spec:
monitoring:
  pgmonitor:
    exporter: {}
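For context, here is a minimal sketch of where that section sits inside a PostgresCluster manifest; the cluster name, Postgres version, and instance name below are illustrative placeholders, not values from the installer:

```yaml
apiVersion: postgres-operator.crunchydata.com/v1beta1
kind: PostgresCluster
metadata:
  name: hippo                # illustrative name
spec:
  postgresVersion: 16        # assumption: use any version your operator supports
  instances:
    - name: instance1        # illustrative instance name
  monitoring:
    pgmonitor:
      exporter: {}           # enables the Crunchy Postgres Exporter sidecar
```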
The Monitoring project is located in the kustomize/monitoring directory.
Configuration
While the default Kustomize install should work in most Kubernetes environments, it may be necessary to further customize the project according to your specific needs.
For instance, by default fsGroup is set to 26 in the securityContext defined for the various Deployments comprising the Monitoring stack:
securityContext:
  fsGroup: 26
In most Kubernetes environments this setting is needed to ensure processes within the container have the permissions needed to write to any volumes mounted to each of the Pods comprising the Monitoring stack. However, when installing in an OpenShift environment (and more specifically when using the restricted Security Context Constraint), the fsGroup setting should be removed, since OpenShift will automatically handle setting the proper fsGroup within the Pod's securityContext.
Additionally, within this same section it may also be necessary to modify the supplementalGroups setting according to your specific storage configuration:
securityContext:
  supplementalGroups: 65534
Therefore, the following files (located under kustomize/monitoring) should be modified and/or patched (e.g. using additional overlays) as needed to ensure the securityContext is properly defined for your Kubernetes environment:
alertmanager/deployment.yaml
grafana/deployment.yaml
prometheus/deployment.yaml
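As one illustration of the overlay approach, a separate kustomization could patch the Grafana Deployment to drop fsGroup for OpenShift. The overlay path and the Deployment name crunchy-grafana are assumptions (the name matches the Service used later in this page; verify it against your installation):

```yaml
# kustomization.yaml for a hypothetical OpenShift overlay directory
resources:
- ../../kustomize/monitoring
patches:
- target:
    kind: Deployment
    name: crunchy-grafana    # assumption: verify the Deployment name in your install
  patch: |-
    - op: remove
      path: /spec/template/spec/securityContext/fsGroup
```

The same patch shape can be repeated with different target names for the Prometheus and Alertmanager Deployments.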
The same files should also be modified to set appropriate constraints on compute resources for the Grafana, Prometheus and/or AlertManager Deployments. To modify the configuration for the various storage resources (i.e. PersistentVolumeClaims) created by the Monitoring installer, edit the following files:
alertmanager/pvc.yaml
grafana/pvc.yaml
prometheus/pvc.yaml
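For example, to give Prometheus more storage you could edit prometheus/pvc.yaml along these lines; the claim name, storage class, and size shown are illustrative and should be checked against the actual file:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: prometheusdata       # assumption: the real name is in prometheus/pvc.yaml
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: standard # illustrative; pick a class available in your cluster
  resources:
    requests:
      storage: 10Gi          # illustrative size
```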
Additionally, it is also possible to further customize the configuration for the various components comprising the Monitoring stack (Grafana, Prometheus and/or AlertManager) by modifying the following configuration resources:
alertmanager/config/alertmanager.yml
grafana/config/crunchy_grafana_datasource.yml
prometheus/config/crunchy-alert-rules-pg.yml
prometheus/config/prometheus.yml
Finally, please note that the default username and password for Grafana can be updated by modifying the Secret grafana-admin defined in kustomize/monitoring/grafana/kustomization.yaml:
secretGenerator:
- name: grafana-admin
  literals:
  - "password=admin"
  - "username=admin"
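To change the defaults, replace the two literals with your own values before installing; the values below are placeholders:

```yaml
secretGenerator:
- name: grafana-admin
  literals:
  - "password=choose-a-strong-password"   # placeholder: substitute your own
  - "username=grafana-admin-user"         # placeholder: substitute your own
```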
Install
Once the Kustomize project has been modified according to your specific needs, Monitoring can then be installed using kubectl and Kustomize:
kubectl apply -k kustomize/monitoring
Once installed, a simple way to immediately access the various Monitoring stack components is by using the kubectl port-forward command. For example, to access the Grafana dashboards, you would use a command similar to
kubectl -n postgres-operator port-forward service/crunchy-grafana 3000:3000
and then log in via a web browser pointed to localhost:3000.
If you are upgrading or altering a preexisting installation, see below for specific instructions for this use-case.
Installing using Older Kubectl
This installer is optimized for Kustomize v4.0.5 or later, which is included in kubectl v1.21. If you are using an earlier version of kubectl to manage your Kubernetes objects, the kubectl apply -k kustomize/monitoring command will produce an error:
Error: json: unknown field "labels"
To fix this error, download the most recent version of Kustomize. Once you have installed Kustomize v4.0.5 or later, you can use it to produce valid Kubernetes yaml:
kustomize build kustomize/monitoring
The output from the kustomize build command can be captured to a file or piped directly to kubectl:
kustomize build kustomize/monitoring | kubectl apply -f -
Uninstall
Similarly, once Monitoring has been installed, it can be uninstalled using kubectl and Kustomize:
kubectl delete -k kustomize/monitoring
Upgrading the Monitoring stack to v5.5.x
Several changes have been made to the kustomize installer for the Monitoring stack in order to make the project easier to read and modify:
- Project reorganization
The project has been reorganized so that each tranche of the Monitoring stack has its own folder. This should make it easier to find and modify the Kubernetes objects or configurations for each tranche. For example, if you want to modify the Prometheus configuration, you can find the source file in prometheus/config/prometheus.yml; if you want to modify the PVC used by Prometheus, you can find the source file in prometheus/pvc.yaml.
- Image and configuration updating in line with pgMonitor
Crunchy Postgres for Kubernetes Monitoring uses the Grafana dashboards and configuration set by the pgMonitor project. We have updated the installer to pgMonitor v4.9 settings, including updating the images for the Alertmanager, Grafana, and Prometheus Deployments.
- Regularize naming conventions
We have changed the following Kubernetes objects to regularize our installation:
- the ServiceAccount prometheus-sa is renamed prometheus
- the ClusterRole prometheus-cr is renamed prometheus
- the ClusterRoleBinding prometheus-crb is renamed prometheus (and has been updated to take into account the ClusterRole and ServiceAccount renaming)
- the ConfigMap alertmanager-rules-config is renamed alert-rules-config
- the Secret grafana-secret is renamed grafana-admin
How to upgrade the Monitoring installation
First, verify that you are using a Monitoring installation from before these changes. To verify, you can check that the existing Monitoring Deployments are lacking a vendor label:
kubectl get deployments -L vendor
NAME READY UP-TO-DATE AVAILABLE AGE VENDOR
crunchy-grafana 1/1 1 1 11s
crunchy-prometheus 1/1 1 1 11s
crunchy-alertmanager 1/1 1 1 11s
If the vendor label shows crunchydata, then you are using an updated installer and do not need to follow the instructions here:
kubectl get deployments -L vendor
NAME READY UP-TO-DATE AVAILABLE AGE VENDOR
crunchy-grafana 1/1 1 1 16s crunchydata
crunchy-prometheus 1/1 1 1 16s crunchydata
crunchy-alertmanager 1/1 1 1 16s crunchydata
Second, if you have an older version of the Monitoring stack installed, remove the existing Deployments before upgrading to the new version:
kubectl delete deployments crunchy-grafana crunchy-prometheus crunchy-alertmanager
Now you can install as usual:
kubectl apply -k kustomize/monitoring
This will leave some orphaned Kubernetes objects that can be cleaned up manually without impacting performance. The objects to be cleaned up include all of the objects listed above under Regularize naming conventions:
kubectl delete clusterrolebinding prometheus-crb
kubectl delete serviceaccount prometheus-sa
kubectl delete clusterrole prometheus-cr
kubectl delete configmap alertmanager-rules-config
kubectl delete secret grafana-secret
Alternately, you can install the Monitoring stack with the --prune --all flags to remove the objects that are no longer managed by this manifest:
kubectl apply -k kustomize/monitoring --prune --all
This will remove those objects that are namespaced: the ConfigMap, Secret, and ServiceAccount. To prune cluster-wide objects, see the --prune-allowlist flag.
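As a sketch of how the cluster-wide leftovers could also be pruned in one apply, assuming a current kubectl (the group/version/kind spellings follow the kubectl allowlist convention and should be verified against your kubectl version before use):

```shell
# Prune namespaced leftovers plus the allowlisted cluster-wide ClusterRole
# and ClusterRoleBinding that are no longer in the manifest.
kubectl apply -k kustomize/monitoring --prune --all \
  --prune-allowlist=rbac.authorization.k8s.io/v1/ClusterRole \
  --prune-allowlist=rbac.authorization.k8s.io/v1/ClusterRoleBinding
```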
Pruning is an automated feature and should be used with caution.
Further Information
For further information about monitoring features, see our tutorial.