Create a Postgres Cluster
If you came here through the quickstart, you may have already created a cluster. If you created a cluster by using the example in the kustomize/postgres directory, feel free to skip to connecting to a cluster, or follow our instructions on deleting your cluster for a fresh start.
Use Kustomize to Create a Postgres Cluster
Creating a Postgres cluster is pretty simple from a fork of our examples repository. Using the example in the kustomize/postgres directory, all we have to do is run:
kubectl apply -k kustomize/postgres
and PGO will create a simple Postgres cluster named hippo in the postgres-operator namespace. You can track the status of your Postgres cluster using kubectl describe on the postgresclusters.postgres-operator.crunchydata.com custom resource:
kubectl -n postgres-operator describe postgresclusters.postgres-operator.crunchydata.com hippo
and you can track the state of the Postgres Pod using the following command:
kubectl -n postgres-operator get pods --selector=postgres-operator.crunchydata.com/cluster=hippo,postgres-operator.crunchydata.com/instance
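If you would rather block until the instance Pod is ready instead of polling, kubectl wait accepts the same label selector; a minimal sketch (the timeout value is an arbitrary choice):

kubectl -n postgres-operator wait pod \
  --selector=postgres-operator.crunchydata.com/cluster=hippo,postgres-operator.crunchydata.com/instance \
  --for=condition=Ready --timeout=300s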
Use Helm to Create a Postgres Cluster
Creating a Postgres cluster is pretty simple from a fork of our examples repository. Let's assume that you've installed Crunchy Postgres for Kubernetes from the examples repository like this:
helm install cpk helm/install --namespace postgres-operator --create-namespace
You can create a Postgres cluster in the postgres-operator namespace with a command like this:
helm install hippo helm/postgres --namespace postgres-operator
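The chart exposes values you can override at install time; which settings exist depends on the chart version, so inspect them rather than guessing. One way to review and customize them:

helm show values helm/postgres > my-values.yaml
# edit my-values.yaml, then apply your overrides:
helm upgrade --install hippo helm/postgres --namespace postgres-operator -f my-values.yaml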
and you can track the state of the Postgres Pod using the following command:
kubectl -n postgres-operator get pods --selector=postgres-operator.crunchydata.com/cluster=hippo,postgres-operator.crunchydata.com/instance
What Happens When a Postgres Cluster is Created
Crunchy Postgres for Kubernetes created a Postgres cluster based on the information provided to it from either the Kustomize manifests in the kustomize/postgres directory or the Helm chart in the helm/postgres directory.
Let's better understand what happened by inspecting the kustomize/postgres/postgres.yaml file:
apiVersion: postgres-operator.crunchydata.com/v1beta1
kind: PostgresCluster
metadata:
  name: hippo
spec:
  postgresVersion: 16
  instances:
    - name: instance1
      dataVolumeClaimSpec:
        accessModes:
        - "ReadWriteOnce"
        resources:
          requests:
            storage: 1Gi
  backups:
    pgbackrest:
      repos:
      - name: repo1
        volume:
          volumeClaimSpec:
            accessModes:
            - "ReadWriteOnce"
            resources:
              requests:
                storage: 1Gi
When we ran the kubectl apply command earlier, what we did was create a PostgresCluster custom resource in Kubernetes. PGO detected that we added a new PostgresCluster resource and started to create all the objects needed to run Postgres in Kubernetes!
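You can see some of those objects for yourself. PGO labels what it creates with the cluster name, so the same selector we used for Pods works for other resource types as well; the exact set of objects varies by configuration:

kubectl -n postgres-operator get statefulsets,services,secrets,configmaps \
  --selector=postgres-operator.crunchydata.com/cluster=hippo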
What else happened? PGO read the value from metadata.name to provide the Postgres cluster with the name hippo. Additionally, PGO knows which container images to use for Postgres and pgBackRest by looking at the values in spec.image and spec.backups.pgbackrest.image respectively; when those fields are omitted, as in this example, PGO falls back to the default images configured in the operator installation. The value in spec.postgresVersion is important, as it helps PGO track which major version of Postgres you are using.
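For illustration, setting those image fields explicitly looks like the following; the image references here are placeholders, so substitute the registry and tags appropriate for your environment:

spec:
  postgresVersion: 16
  image: registry.example.com/crunchy-postgres:example-tag          # placeholder image reference
  backups:
    pgbackrest:
      image: registry.example.com/crunchy-pgbackrest:example-tag    # placeholder image reference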
PGO knows how many Postgres instances to create through the spec.instances section of the manifest. While name is optional, we opted to give it the name instance1. We could have also created multiple replicas and instances during cluster initialization (see the sketch below), but we will cover that in more depth when we discuss how to scale and create an HA Postgres cluster.
A very important piece of your PostgresCluster custom resource is the dataVolumeClaimSpec section. This describes the storage that your Postgres instance will use. It is modeled after the Persistent Volume Claim. If you do not provide a spec.instances.dataVolumeClaimSpec.storageClassName, then the default storage class in your Kubernetes environment is used.
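If your environment has more than one storage class, you can pin the instance volumes to a specific one; the class name below is a placeholder for whatever kubectl get storageclass reports in your cluster:

spec:
  instances:
    - name: instance1
      dataVolumeClaimSpec:
        storageClassName: fast-ssd   # placeholder; use a class from `kubectl get storageclass`
        accessModes:
        - "ReadWriteOnce"
        resources:
          requests:
            storage: 1Gi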
As part of creating a Postgres cluster, we also specify information about our backup archive. PGO uses pgBackRest, an open source backup and restore tool designed to handle terabyte-scale backups. As part of initializing our cluster, we can specify where we want our backups and archives (write-ahead logs, or WAL) stored. We will talk about this portion of the PostgresCluster spec in greater depth in the disaster recovery section of this tutorial, and also see how we can store backups in Amazon S3, Google GCS, and Azure Blob Storage.
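As a taste of what that section covers, an S3-backed pgBackRest repository is declared alongside (or instead of) the volume-backed one; a minimal sketch with placeholder bucket, endpoint, and region (credentials are supplied separately through pgBackRest configuration):

spec:
  backups:
    pgbackrest:
      repos:
      - name: repo2
        s3:
          bucket: my-backup-bucket                # placeholder
          endpoint: s3.us-east-1.amazonaws.com    # placeholder
          region: us-east-1                       # placeholder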
Troubleshooting
PostgreSQL / pgBackRest Pods Stuck in Pending Phase
The most common cause of this is PVCs not being bound. Ensure that you have set up your storage options correctly in any volumeClaimSpec. You can always update your settings and reapply your changes with kubectl apply.
Also ensure that you have enough persistent volumes available: your Kubernetes administrator may need to provision more.
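To see why a claim is stuck, list the cluster's PVCs and describe the pending one; the events at the bottom of the describe output usually name the problem (the PVC name below is illustrative):

kubectl -n postgres-operator get pvc \
  --selector=postgres-operator.crunchydata.com/cluster=hippo
kubectl -n postgres-operator describe pvc hippo-instance1-pgdata   # illustrative name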
If you are on OpenShift, you may need to set spec.openshift to true.
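That is a single field at the top level of the spec:

spec:
  openshift: true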
Next Steps
We're up and running -- now let's connect to our Postgres cluster!