Several pieces must come together to create a production-ready Postgres cluster, and Crunchy Postgres for Kubernetes provides everything you need. From high availability to disaster recovery and monitoring, we’ll cover how a Crunchy Postgres for Kubernetes deployment fits the pieces together.


PGO, the Postgres Operator from Crunchy Data, runs as a Kubernetes Deployment and is composed of a single container. This PGO container holds a collection of Kubernetes controllers that manage native Kubernetes resources (Jobs, Pods) as well as Custom Resources (PostgresCluster). As a user, you provide Kubernetes with the specification of what you want your Postgres cluster to look like, and PGO uses a Custom Resource Definition (CRD) to teach Kubernetes how to handle those specifications. PGO's controllers do the work of making your specifications a reality. The main custom resource definition is PostgresCluster. This CRD allows you to control all the information about a Postgres cluster, including:

  • Resource allocation
  • High availability
  • Backup management
  • Where and how your cluster is deployed (affinity, tolerations, topology spread constraints)
  • Disaster Recovery / standby clusters
  • Monitoring
  • and more.
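To make this concrete, here is a minimal sketch of a PostgresCluster manifest; the cluster name, Postgres version, and storage sizes are illustrative placeholders:

```yaml
apiVersion: postgres-operator.crunchydata.com/v1beta1
kind: PostgresCluster
metadata:
  name: hippo            # illustrative cluster name
spec:
  postgresVersion: 16    # example version; pick one your installer supports
  instances:
    - name: instance1
      replicas: 2        # one primary plus one replica
      dataVolumeClaimSpec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
  backups:
    pgbackrest:
      repos:
        - name: repo1
          volume:
            volumeClaimSpec:
              accessModes: ["ReadWriteOnce"]
              resources:
                requests:
                  storage: 1Gi
```

Applying a manifest like this with `kubectl apply -f` is all it takes for PGO's controllers to begin reconciling the cluster into existence.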

Crunchy Postgres

Crunchy Postgres for Kubernetes enables you to deploy Kubernetes-native, production-ready clusters of Crunchy Postgres, Crunchy Data's open source Postgres distribution. When you use one of Crunchy Data’s installers, you’re given the option to install and deploy a range of Crunchy Postgres versions and to specify the number of replicas (in addition to your primary Postgres instance) in your cluster. The spec you create for the deployment tells Kubernetes to create a Pod for each Postgres instance in your cluster, each running a container with Crunchy Postgres inside. Crunchy Postgres for Kubernetes uses Kubernetes StatefulSets to create Postgres instance groups and to support advanced operations such as rolling updates that minimize Postgres downtime, as well as affinity and toleration rules that place one or more replicas on nodes in different regions.
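As a sketch of those scheduling controls, an instance set in the spec can carry Kubernetes topology spread constraints; the cluster label shown assumes a cluster named `hippo`:

```yaml
spec:
  instances:
    - name: instance1
      replicas: 3
      topologySpreadConstraints:
        - maxSkew: 1
          topologyKey: topology.kubernetes.io/zone   # spread replicas across zones
          whenUnsatisfiable: DoNotSchedule
          labelSelector:
            matchLabels:
              postgres-operator.crunchydata.com/cluster: hippo
```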


pgBackRest

A production-ready Postgres cluster demands a disaster recovery solution. Crunchy Postgres for Kubernetes uses pgBackRest to back up and restore your data. With pgBackRest, you can perform scheduled backups, one-off backups, and point-in-time recovery. Crunchy Postgres for Kubernetes enables pgBackRest by default: when a new Postgres cluster is created, a pgBackRest repository is created too. Crunchy Postgres for Kubernetes runs pgBackRest in the same Pod that runs your Crunchy Postgres container. A separate pgBackRest Pod can be used to manage backups through cloud storage services such as S3, GCS, and Azure.
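As an illustration, a backup schedule and a cloud repository can both be declared in the spec; the bucket, endpoint, and Secret names below are placeholders:

```yaml
spec:
  backups:
    pgbackrest:
      configuration:
        - secret:
            name: pgbackrest-s3-creds    # hypothetical Secret holding S3 credentials
      repos:
        - name: repo1
          schedules:
            full: "0 1 * * 0"            # weekly full backup (cron syntax)
            differential: "0 1 * * 1-6"  # daily differentials
          s3:
            bucket: my-backup-bucket
            endpoint: s3.us-east-1.amazonaws.com
            region: us-east-1
```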


Patroni

You want your data to always be available. Maintaining high availability requires a cluster of Postgres instances with one leader and some number of replicas. If the leader instance goes down, Crunchy Postgres for Kubernetes uses Patroni to promote a new leader from among your replicas. Each container running a Crunchy Postgres instance comes loaded with Patroni to handle failover and keep your data available.
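For example, a controlled switchover (rehearsing the failover path without waiting for an outage) can be requested declaratively; `targetInstance` is optional, and the instance name shown is hypothetical:

```yaml
spec:
  patroni:
    switchover:
      enabled: true
      # targetInstance: hippo-instance1-abcd   # optionally pick the new leader
```

With switchovers enabled, annotating the PostgresCluster with `postgres-operator.crunchydata.com/trigger-switchover` triggers the change of leadership.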

Monitoring Stack

Resource starvation happens. You can run out of storage space and you can run out of computing power. Crunchy Postgres for Kubernetes provides a monitoring stack to help you track the health of your Postgres cluster, replete with dashboards, alerts, and insights into your workloads. While having high availability, backups, and disaster recovery systems in place helps in the event of something going wrong with your Postgres cluster, monitoring helps you anticipate problems before they happen. The monitoring stack includes components provided by pgMonitor and pgnodemx and deploys as a collection of pods containing Grafana, Alertmanager, and Prometheus.
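As a sketch, the exporter sidecar that feeds Prometheus is enabled through the monitoring section of the spec; the image reference and tag are illustrative and should match your installer's registry:

```yaml
spec:
  monitoring:
    pgmonitor:
      exporter:
        image: registry.developers.crunchydata.com/crunchydata/crunchy-postgres-exporter:latest  # illustrative tag
```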