Auto-Growable Disk

Managing disk space shouldn't be a constant worry. With auto-grow enabled, Crunchy Postgres for Kubernetes automatically expands your storage as your data grows. Set your maximum limits once, and let the system handle the rest. It's one less thing for you to worry about.

Info

FEATURE STATE: Crunchy Postgres for Kubernetes v6.0.0 (enabled by default: true)

Prerequisites

To use this feature, you'll need a storage provider that supports volume expansion. To see whether your volumes can be expanded, run the following command and check that the AllowVolumeExpansion field on your storage class is set to True:

Bash:

# Check whether your storage classes are expandable
kubectl describe storageclass | grep -e Name -e Expansion

PowerShell:

kubectl describe storageclass | Select-String -Pattern @("Name", "Expansion") -CaseSensitive

How It Works

When auto-grow is configured on your volumes, Crunchy Postgres for Kubernetes monitors the disk usage of every auto-grow enabled volume. By default, when any volume reaches 75% capacity, a request is sent to expand that volume by 50%; for example, a 1Gi volume that crosses 75% usage is asked to grow to roughly 1.5Gi. These defaults work well for most cases. If you need to adjust them for your workload, you can customize the behavior with the tuning options described in Auto-Grow Tuning below.

Info

When scaling up your PVC, Crunchy Postgres for Kubernetes makes a precise request in Mi. However, your storage provider may round that request up to the nearest Gi.

An event will be logged when the disk starts growing. Look out for events indicating "expansion requested" and check your PVC status for completion. Your volumes can grow up to the limit that you set. Beyond that, you'll see an event alerting you that the volume can't be expanded further.
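To keep an eye on this, you can list recent events and check the capacity reported on your PVCs. A minimal sketch, assuming your cluster runs in the postgres-operator namespace:

# List recent events and filter for expansion-related messages
kubectl get events -n postgres-operator --sort-by=.lastTimestamp | grep -i expan

# Check the current capacity reported for each PVC
kubectl get pvc -n postgres-operator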

Configuration

Auto-grow can be configured for different volume types in your PostgresCluster specification:

  • PGDATA volumes: Configure auto-grow on the primary data storage for your PostgreSQL instances
  • WAL volumes: Configure auto-grow on write-ahead log storage (when using dedicated WAL volumes)
  • pgBackRest repository volumes: Configure auto-grow on backup repository storage

Warning

WAL Volume Growth: While auto-grow on WAL volumes provides protection against disk space issues, unexpected WAL volume growth can indicate underlying problems with your backups or replication. Monitor WAL volume expansion events closely, as they may signal backup failures, replication issues, or archiving problems that should be investigated promptly.

Enabling Auto-Grow

To enable auto-grow for any of these volumes, you need to set a limit on volume expansion so the volume can't grow beyond a specified size. If you later need to raise that ceiling, just change the limits field in your spec and re-apply.

spec:
  instances:
    - name: instance1
      dataVolumeClaimSpec:
        resources:
          requests:
            storage: 1Gi
          limits:
            storage: 5Gi  # Required: Maximum size for PGDATA volume
      walVolumeClaimSpec:
        resources:
          requests:
            storage: 1Gi
          limits:
            storage: 3Gi  # Required: Maximum size for WAL volume
  backups:
    pgbackrest:
      repos:
        - name: repo1
          volume:
            volumeClaimSpec:
              resources:
                requests:
                  storage: 2Gi
                limits:
                  storage: 10Gi  # Required: Maximum size for backup repository

Info

The limits.storage field is only required if you want to enable auto-grow for that volume.

Warning

  • Once auto-grow has expanded your volume, the requests.storage value in your manifest will no longer be accurate. If you want to re-apply your manifest, examine the relevant PVC and update requests.storage to match. Nothing bad will happen if you don't update it, though you will likely receive a warning.
  • Some storage services impose limits (quota or rate limits) on how many volume expansions you can perform in a given time window. If you begin seeing failed or throttled expansion events, consider (1) starting with a larger initial resources.requests.storage value and/or (2) reducing how often expansions are triggered by adjusting trigger and maxGrow settings documented in Auto-Grow Tuning.
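To compare a volume's requested size with its actual capacity, you can query the PVCs directly. A minimal sketch, assuming the postgres-operator namespace:

# Compare the requested size against the actual capacity of each PVC
kubectl get pvc -n postgres-operator -o custom-columns=NAME:.metadata.name,REQUEST:.spec.resources.requests.storage,CAPACITY:.status.capacity.storage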

Auto-Grow Tuning

For most use cases, the default auto-grow behavior works well. However, you can customize the expansion behavior using the autoGrow configuration. Auto-grow behavior is controlled by two configurable parameters:

  • Trigger: The percentage of used space that triggers volume expansion (default: 75%)
  • MaxGrow: The maximum amount by which the volume can expand in a single growth event

spec:
  instances:
    - name: instance1
      dataVolumeClaimSpec:
        resources:
          requests:
            storage: 1Gi
          limits:
            storage: 5Gi
        autoGrow:
          trigger: 80      # Expand when 80% full (default: 75)
          maxGrow: 2Gi     # Maximum expansion per growth event

When a volume reaches the trigger threshold, the system will expand it by the smaller of either:

  • 50% of the current volume size, OR
  • The value specified in maxGrow

This approach ensures predictable growth patterns while allowing fine-tuned control for large datasets. For example, with a 2TB volume, you might set maxGrow: 200Gi to avoid expanding by a full 1TB at once.
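As a sketch of that large-volume scenario (the request and limit sizes here are illustrative, not recommendations):

spec:
  instances:
    - name: instance1
      dataVolumeClaimSpec:
        resources:
          requests:
            storage: 2Ti
          limits:
            storage: 4Ti   # Illustrative ceiling for auto-grow
        autoGrow:
          trigger: 75      # Default trigger, shown for clarity
          maxGrow: 200Gi   # Cap each expansion at 200Gi instead of 50% (1Ti)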

Downsizing

In the event that any of your volumes (PGDATA, WAL, or pgBackRest repository volumes) have grown larger than what you need, you can scale down to a smaller disk allocation. The downsizing strategy is similar for all volume types that support auto-grow:

  • For PGDATA and WAL volumes: Add a second instance set with smaller storage requests
  • For pgBackRest repository volumes: Add a new repository configuration with smaller storage requests

The steps we'll follow are similar to what we describe in our tutorial Resize PVC, which you may want to review for further background.

Example: Downsizing PGDATA Volumes

Let's assume that you've defined a PostgresCluster similar to what was described earlier:

apiVersion: postgres-operator.crunchydata.com/v1beta1
kind: PostgresCluster
metadata:
  name: hippo
spec:
  postgresVersion: 18
  instances:
    - name: instance1
      dataVolumeClaimSpec:
        accessModes:
        - "ReadWriteOnce"
        resources:
          requests:
            storage: 2Gi # Assume this number has been set correctly, following disk expansion.
          limits:
            storage: 5Gi

Imagine that your volumes have grown to 2Gi and you want to downsize to 1Gi. Before doing so, be sure that 1Gi is enough space and that you won't have to scale up again immediately after downsizing. If you exec into your instance Pod, you can use a tool like df to check usage in the /pgdata directory, as in the sketch below. Once you're confident in your estimate, add the smaller instance set shown after that sketch to the list of instances in your spec, but do not remove the existing instance set.
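As a sketch of that check (the namespace, Pod name, and label selector below are assumptions; substitute your own):

# Find the primary instance Pod for the hippo cluster
kubectl get pods -n postgres-operator -l postgres-operator.crunchydata.com/cluster=hippo,postgres-operator.crunchydata.com/role=master

# Check how much space is actually used under /pgdata (the Pod and container names are hypothetical)
kubectl exec -n postgres-operator hippo-instance1-abcd-0 -c database -- df -h /pgdata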

    - name: instance2
      dataVolumeClaimSpec:
        accessModes:
        - "ReadWriteOnce"
        resources:
          requests:
            storage: 1Gi # Set an appropriate, smaller request here.

Notice that resources.limits has not been set; leaving it unset disables auto-grow for this instance set. Apply your manifest and confirm that your data has replicated to the new instance set. Once your data is synced, you can remove instance1 from the list of instances and apply again to remove the old instance set from your cluster.
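One way to confirm that the new instance set is up and replicating (the cluster name, namespace, and Pod name below are assumptions):

# Watch the Pods for the hippo cluster come up
kubectl get pods -n postgres-operator -l postgres-operator.crunchydata.com/cluster=hippo -w

# Check replication status from inside one of the Pods (the Pod name is hypothetical)
kubectl exec -n postgres-operator hippo-instance2-efgh-0 -c database -- patronictl list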

Disabling Auto-Grow

To disable the auto-grow feature for a particular volume, you will need to remove the specified limit.

If the volume in question has not grown, simply remove the limit. To verify the current volume sizes for a cluster in the postgres-operator namespace, you would execute

kubectl get pvc -n postgres-operator

and compare the CAPACITY value for the relevant volume(s). If those values match the resources.requests.storage value defined in the PostgresCluster manifest, no update besides removing the limit is required.

If the volumes in question have grown, you will need to update the resources.requests.storage value to match the actual volume size.

For example, with a manifest similar to

apiVersion: postgres-operator.crunchydata.com/v1beta1
kind: PostgresCluster
metadata:
  name: hippo
spec:
  postgresVersion: 18
  instances:
    - name: instance1
      dataVolumeClaimSpec:
        accessModes:
        - "ReadWriteOnce"
        resources:
          requests:
            storage: 1Gi
          limits:
            storage: 3Gi

and the command

kubectl get pvc -n postgres-operator

returning output such as

NAME                          STATUS   VOLUME    CAPACITY   ACCESS MODES   STORAGECLASS
hippo-instance1-abcd-pgdata   Bound    pvc-123   2Gi        RWO            standard-rwo
hippo-repo1                   Bound    pvc-456   1Gi        RWO            standard-rwo

we can see that the pgData volume has grown from 1Gi to 2Gi. Therefore, to disable the auto-grow feature for the pgData volume, the manifest needs to be updated as follows

apiVersion: postgres-operator.crunchydata.com/v1beta1
kind: PostgresCluster
metadata:
  name: hippo
spec:
  postgresVersion: 18
  instances:
    - name: instance1
      dataVolumeClaimSpec:
        accessModes:
        - "ReadWriteOnce"
        resources:
          requests:
            storage: 2Gi

and then applied. The pgData resources.requests.storage now matches the actual volume size, but the volume will no longer grow even if the storage space used surpasses the trigger threshold.
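To finish, re-apply the manifest. A sketch, assuming it is saved as hippo.yaml:

# Re-apply the manifest; with limits.storage removed, auto-grow is disabled for this volume
kubectl apply -f hippo.yaml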