Auto-Growable Disk
You may be nearing your disk space limit and not know it. Once you hit that limit, you're looking at downtime. Monitoring and a scalable storage class are great tools to avoid disk-full errors. But sometimes, the best solution is not having to think about it. Enabling auto-grow will let Crunchy Postgres for Kubernetes do the work for you. Auto-grow will watch your data directory and grow your disk. You set the limit on growth and Crunchy Postgres for Kubernetes does the rest.
Prerequisites
To use this feature, you'll need a storage provider that supports dynamic scaling. To see if your volume can expand, run the following command and check whether the allowVolumeExpansion field on your storage class is set to true:
# Check whether your storage classes are expandable
kubectl describe storageclass | grep -e Name -e Expansion
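If your provisioner supports expansion but the field is not set on your storage class, you may be able to enable it yourself. This is only a sketch: the storage class name standard is a placeholder, and whether expansion actually works still depends on your underlying storage provider.
# Hypothetical example: replace "standard" with the name of your storage class.
kubectl patch storageclass standard -p '{"allowVolumeExpansion": true}'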
Enabling Auto-Grow
To enable Crunchy Postgres for Kubernetes' auto-grow feature, you need to activate the AutoGrowVolumes feature gate. PGO feature gates are enabled by setting the PGO_FEATURE_GATES environment variable on the PGO Deployment.
PGO_FEATURE_GATES="AutoGrowVolumes=true"
Please note that it is possible to enable more than one feature at a time, as this variable accepts a comma-delimited list. For example, to enable multiple features, you would set PGO_FEATURE_GATES like so:
PGO_FEATURE_GATES="FeatureName=true,FeatureName2=true,FeatureName3=true..."
Additionally, you will need to set a limit for volume expansion to prevent the volume from growing beyond a specified size. Don't worry if you later need to raise the limit: just change the limits field in your spec and re-apply. For example, you could define the following in your spec:
spec:
  image: registry.developers.crunchydata.com/crunchydata/crunchy-postgres:ubi8-16.4-0
  postgresVersion: 16
  instances:
    - name: instance1
      dataVolumeClaimSpec:
        accessModes:
        - "ReadWriteOnce"
        resources:
          requests:
            storage: 1Gi
          limits:
            storage: 5Gi # Set the limit on disk growth.
Warning
- Once auto-grow has expanded your volume request, requests.storage in your manifest will no longer be accurate. Examine the pgdata PVC for instance1 (see the example after this list) and update your manifest if you want to re-apply it. Nothing bad will happen if you don't update requests.storage, though you will likely receive a warning.
- Some storage services may place a limit on the number of volume expansions you can perform within a given period of time. With that in mind, it remains a good idea to start with a resource request close to what you think you'll actually need.
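As a sketch of that check, assuming your PostgresCluster is named hippo (the name used later on this page) and your PVCs carry the usual PGO labels (treat the selector as an assumption and verify it against your objects), you could compare the PVC's actual capacity to your manifest like so:
# Label selector is an assumption; confirm the labels with "kubectl get pvc --show-labels".
kubectl get pvc --selector=postgres-operator.crunchydata.com/cluster=hippo,postgres-operator.crunchydata.com/instance-set=instance1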
How It Works
After enabling the feature gate and setting the growth limit, Crunchy Postgres for Kubernetes will monitor your disk usage. When the disk is 75% full, a request will be sent to expand your disk by 50%. In processing this request, Kubernetes will likely round the figure up to the nearest Gi.
Info
When scaling up your PVC, Crunchy Postgres for Kubernetes will make a precise request in Mi, but your storage solution may round that request up to the nearest Gi.
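For example, if a 1Gi volume crosses the 75% threshold (roughly 768Mi used), the operator would request about 50% more, i.e. roughly 1536Mi, and a provisioner that works in Gi increments would round that up to 2Gi.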
An event will be logged when the disk starts growing. Look out for notifications indicating "expansion requested" and check your PVC status for completion. The volume can grow up to the limit you set; beyond that, you'll see an event alerting you that the volume can't be expanded further.
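A sketch of how you might watch for this, assuming a cluster named hippo; the exact event text and the object the event is attached to may differ in your environment:
# Watch for expansion-related events, then inspect the cluster object for anything it reports.
kubectl get events --field-selector involvedObject.kind=PersistentVolumeClaim
kubectl describe postgrescluster hippo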
Downsizing
In the event that your volume has grown larger than what you need, you can scale down to a smaller disk allocation by adding a second instance set with a smaller storage request. The steps we'll follow are similar to what we describe in our tutorial Resize PVC, which you may want to review for further background.
Let's assume that you've defined a PostgresCluster similar to what was described earlier:
apiVersion: postgres-operator.crunchydata.com/v1beta1
kind: PostgresCluster
metadata:
  name: hippo
spec:
  image: registry.developers.crunchydata.com/crunchydata/crunchy-postgres:ubi8-16.4-0
  postgresVersion: 16
  instances:
    - name: instance1
      dataVolumeClaimSpec:
        accessModes:
        - "ReadWriteOnce"
        resources:
          requests:
            storage: 2Gi # Assume this number has been set correctly, following disk expansion.
          limits:
            storage: 5Gi
Imagine that your volumes have grown to 2Gi and you want to downsize to 1Gi. You'll want to be sure that 1Gi is enough space and that you won't have to scale up immediately after downsizing. If you exec into your instance Pod, you can use a tool like df to check usage in the /pgdata directory, as in the sketch below.
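A minimal sketch; the Pod name is a placeholder (look yours up with kubectl get pods), and the database container name reflects the usual layout of a Crunchy Postgres for Kubernetes instance Pod.
# Pod name is hypothetical; find yours with "kubectl get pods" in the cluster's namespace.
kubectl exec -it hippo-instance1-abcd-0 -c database -- df -h /pgdata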
Once you're confident in your estimate, add the following to the list of instances in your spec, but do not remove the existing instance set.
- name: instance2
  dataVolumeClaimSpec:
    accessModes:
    - "ReadWriteOnce"
    resources:
      requests:
        storage: 1Gi # Set an appropriate, smaller request here.
Notice that resources.limits has not been set. By leaving resources.limits unset, you have disengaged auto-grow for this instance set.
Apply your manifest and confirm that your data has replicated to the new instance set. Once your data is synced, you can
remove instance1 from the list of instances and apply again to remove the old instance set from your cluster.
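One way to sanity-check that replication is complete, as a sketch (the label selector, Pod name, and container name are assumptions; verify them against your objects), is to watch the instance2 Pods come up and then review replication state with Patroni:
# Wait for the new instance set's Pod(s) to be Running.
kubectl get pods --selector=postgres-operator.crunchydata.com/cluster=hippo,postgres-operator.crunchydata.com/instance-set=instance2
# Then check cluster membership and lag from any instance Pod (Pod name is hypothetical).
kubectl exec -it hippo-instance1-abcd-0 -c database -- patronictl list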