4.3.0
Crunchy Data announces the release of the PostgreSQL Operator 4.3.0 on May 1, 2020.
The PostgreSQL Operator is released in conjunction with the Crunchy Container Suite.
The PostgreSQL Operator 4.3.0 release includes the following software version upgrades:
- The PostgreSQL containers now use versions 12.2, 11.7, 10.12, 9.6.17, and 9.5.21
- This now includes support for using the JIT compilation feature introduced in PostgreSQL 11
- PostgreSQL containers now support PL/Python3 (a quick check of both features follows this list)
- pgBackRest is now at version 2.25
- Patroni is now at version 1.6.5
- postgres_exporter is now at version 0.7.0
- pgAdmin 4 is at 4.18
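As a quick check of the JIT and PL/Python3 support mentioned above, once connected to a PostgreSQL 12 database as a superuser, you can run the following (a minimal sketch; `plpython3u` is an untrusted language, so installing it requires superuser):

SHOW jit;
CREATE EXTENSION plpython3u;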
PostgreSQL Operator is tested with Kubernetes 1.13 - 1.18, OpenShift 3.11+, OpenShift 4.3+, Google Kubernetes Engine (GKE), and VMware Enterprise PKS 1.3+.
Major Features
- Standby Clusters + Multi-Kubernetes Deployments
- Improved custom configuration for PostgreSQL clusters
- Installation via the `pgo-deployer` container
- Automatic Upgrades of the PostgreSQL Operator via `pgo upgrade`
- Set custom PVC sizes for PostgreSQL clusters on creation and clone
- Support for PostgreSQL Tablespaces
- The ability to specify an external volume for write-ahead logs (WAL)
- Elimination of `ClusterRole` requirement for using the PostgreSQL Operator
- Easy TLS-enabled PostgreSQL cluster creation
- All Operator commands now support TLS-only PostgreSQL workflows
- Feature Preview: pgAdmin 4 Integration + User Synchronization
Standby Clusters + Multi-Kubernetes Deployments
A key component of building database architectures that can ensure continuity of operations is having the database available across multiple data centers. In Kubernetes, this means being able to run the PostgreSQL Operator in multiple Kubernetes clusters, have PostgreSQL clusters exist in each of those Kubernetes clusters, and ensure the “standby” deployment is only promoted in the event of an outage or planned switchover.
As of this release, the PostgreSQL Operator now supports standby PostgreSQL clusters that can be deployed across namespaces or across other Kubernetes or Kubernetes-enabled clusters (e.g. OpenShift). This is accomplished by leveraging the PostgreSQL Operator’s support for pgBackRest and an intermediary, i.e. S3, that provides the ability for the standby cluster to read in the PostgreSQL archives and replicate the data. This allows a user to quickly promote a standby PostgreSQL cluster in the event that the primary cluster suffers downtime (e.g. a data center outage), or for planned switchovers such as Kubernetes cluster maintenance or moving a PostgreSQL workload from one data center to another.
To support standby clusters, there are several new flags available on `pgo create cluster` that are required to set up a new standby cluster. These include:
- `--standby`: If set, creates the PostgreSQL cluster as a standby cluster.
- `--pgbackrest-repo-path`: Allows the user to override the pgBackRest repository path for a cluster. While this setting can now be utilized when creating any cluster, it is typically required for the creation of standby clusters, as the repository path will need to match that of the primary cluster.
- `--password-superuser`: When creating a standby cluster, allows the user to specify a password for the superuser that matches the superuser account in the cluster the standby is replicating from.
- `--password-replication`: When creating a standby cluster, allows the user to specify a password for the replication user that matches the replication account in the cluster the standby is replicating from.
Note that the `--password` flag must be used to ensure the password of the main PostgreSQL user account matches that of the primary PostgreSQL cluster, if you are using Kubernetes to manage the user’s password.
For example, if you have a cluster named `hippo` and wanted to create a standby cluster called `hippo-standby`, and assuming the S3 credentials are using the defaults provided to the PostgreSQL Operator, you could execute a command similar to:
pgo create cluster hippo-standby --standby \
--pgbackrest-repo-path=/backrestrepo/hippo-backrest-shared-repo \
--password-superuser=superhippo \
--password-replication=replicahippo
To shut down the primary cluster (if you can), you can execute a command similar to:
pgo update cluster hippo --shutdown
To promote the standby cluster to be able to accept write traffic, you can execute the following command:
pgo update cluster hippo-standby --promote-standby
To convert the old primary cluster into a standby cluster, you can execute the following command:
pgo update cluster hippo --enable-standby
Once the old primary is converted to a standby cluster, you can bring it online with the following command:
pgo update cluster hippo --startup
For information on the architecture and how to set up a standby PostgreSQL cluster, please refer to the documentation.
At present, streaming replication between the primary and standby clusters is not supported, but the PostgreSQL instances within each cluster do support streaming replication.
Installation via the pgo-deployer container
Installation and upgrading have long been two of the biggest challenges of using the PostgreSQL Operator. This release makes improvements to both (upgrading is described in the next section).
For installation, we have introduced a new container called `pgo-deployer`. For environments that use hostpath storage (e.g. minikube), installing the PostgreSQL Operator can be as simple as:
kubectl create namespace pgo
kubectl apply -f https://raw.githubusercontent.com/CrunchyData/postgres-operator/v4.3.0/installers/kubectl/postgres-operator.yml
The `pgo-deployer` container can be configured by a manifest called `postgres-operator.yml` and provides a set of environmental variables that should be familiar from using the other installers.
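For example, a minimal sketch of customizing the installation before applying it, using the manifest URL from above (which variables you change depends on your environment):

curl -LO https://raw.githubusercontent.com/CrunchyData/postgres-operator/v4.3.0/installers/kubectl/postgres-operator.yml
# edit the environmental variables in the manifest as needed,
# e.g. the namespace operating mode described later in these notes
kubectl apply -f postgres-operator.yml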
The `pgo-deployer` container launches a Job in the namespace that the PostgreSQL Operator will be installed into and sets up the requisite Kubernetes objects: CRDs, Secrets, ConfigMaps, etc.
The `pgo-deployer` container can also be used to uninstall the PostgreSQL Operator. For more information, please see the installation documentation.
Automatic PostgreSQL Operator Upgrade Process
One of the biggest challenges to using a newer version of the PostgreSQL Operator was upgrading from an older version.
This release introduces the ability to automatically upgrade from an older version of the Operator (as early as 4.1.0) to the newest version (4.3.0) using the `pgo upgrade` command.
The `pgo upgrade` command follows a process similar to the manual PostgreSQL Operator upgrade process, but automates it.
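For example, after deploying the 4.3.0 Operator, upgrading an existing cluster named `hippo` might look like the following (a sketch; see the upgrade documentation for the full procedure):

pgo upgrade hippo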
To find out more about how to upgrade the PostgreSQL Operator, please review the upgrade documentation.
Improved Custom Configuration for PostgreSQL Clusters
The configuration of a PostgreSQL cluster managed by the PostgreSQL Operator can now be easily modified by making changes directly to the ConfigMap that is created with each PostgreSQL cluster. The ConfigMap, which follows the pattern `<clusterName>-pgha-config` (e.g. `hippo-pgha-config` for `pgo create cluster hippo`), manages the user-facing configuration settings available for a PostgreSQL cluster. When modified, it will automatically synchronize the settings across all primaries and replicas in a PostgreSQL cluster.
Presently, the ConfigMap can be edited using the `kubectl edit cm` command, and future iterations will add functionality to the PostgreSQL Operator to make this process easier.
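For example, to edit the configuration of the `hippo` cluster from above (add -n <namespace> as needed):

kubectl edit cm hippo-pgha-config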
Customize PVC Size on PostgreSQL Cluster Creation & Clone
The PostgreSQL Operator provides the ability to customize how large a PVC can be via the “storage config” options available in the PostgreSQL Operator configuration file (aka `pgo.yaml`). While these provide a baseline level of customizability, it is often important to be able to set the size of the PVC that a PostgreSQL cluster should use at cluster creation time. In other words, users should be able to choose exactly how large their PostgreSQL PVCs ought to be.
PostgreSQL Operator 4.3 introduces the ability to set the PVC sizes for the PostgreSQL cluster, the pgBackRest repository for the PostgreSQL cluster, and the PVC size for each tablespace at cluster creation time. Additionally, this behavior has been extended to the clone functionality as well, which is helpful when trying to resize a PostgreSQL cluster. Here is some information on the flags that have been added:
`pgo create cluster`:
- `--pvc-size`: sets the PVC size for the PostgreSQL data directory
- `--pgbackrest-pvc-size`: sets the PVC size for the pgBackRest repository of the PostgreSQL cluster

For tablespaces, one can use the `pvcsize` option to set the PVC size for that tablespace.

`pgo clone`:
- `--pvc-size`: sets the PVC size for the PostgreSQL data directory of the newly created cluster
- `--pgbackrest-pvc-size`: sets the PVC size for the pgBackRest repository of the newly created cluster
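For example, a sketch combining these flags at creation time (the sizes are illustrative):

pgo create cluster hippo --pvc-size=20Gi --pgbackrest-pvc-size=40Gi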
Tablespaces
Tablespaces can be used to spread out PostgreSQL workloads across multiple volumes, which can be used for a variety of use cases:
- Partitioning larger data sets
- Putting data onto archival systems
- Utilizing hardware (or a storage class) for a particular database object, e.g. an index
and more.
Tablespaces can be created via the `pgo create cluster` command using the `--tablespace` flag. The arguments to `--tablespace` can be passed in using one of several key/value pairs, including:

- `name` (required): the name of the tablespace
- `storageconfig` (required): the storage configuration to use for the tablespace
- `pvcsize`: if specified, the size of the PVC. Defaults to the PVC size in the storage configuration

Each key/value pair is separated by a `:`, for example:
pgo create cluster hacluster --tablespace=name=ts:storageconfig=nfsstorage
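The optional `pvcsize` key can be appended in the same way, e.g.:

pgo create cluster hacluster --tablespace=name=ts:storageconfig=nfsstorage:pvcsize=10Gi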
All tablespaces are mounted in the `/tablespaces` directory. The PostgreSQL Operator manages the mount points and persistent volume claims (PVCs) for the tablespaces, and ensures they are available throughout all of the PostgreSQL lifecycle operations, including:
- Provisioning
- Backup & Restore
- High-Availability, Failover, Healing
- Clone
etc.
One additional value is added to the pgcluster CRD:
- `TablespaceMounts`: a map of the name of the tablespace and its associated storage.
Tablespaces are automatically created in the PostgreSQL cluster. You can access them as soon as the cluster is initialized. For example, using the tablespace created above, you could create a table on the tablespace `ts` with the following SQL (the table name is illustrative):
CREATE TABLE sample_table (id int) TABLESPACE ts;
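To verify from within PostgreSQL, you can list the available tablespaces with standard SQL:

SELECT spcname FROM pg_tablespace;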
Tablespaces can also be added to existing PostgreSQL clusters by using the `pgo update cluster` command. The syntax is similar to that of creating a PostgreSQL cluster with a tablespace, i.e.:
pgo update cluster hacluster --tablespace=name=ts2:storageconfig=nfsstorage
As additional volumes need to be mounted to the Deployments, this action can cause downtime, though the expectation is that the downtime is brief.
Based on usage, future work will look at making this more flexible. Dropping tablespaces can be tricky, as a tablespace must contain no objects before PostgreSQL can drop it (i.e. there is no DROP TABLESPACE .. CASCADE command).
Easy TLS-Enabled PostgreSQL Clusters
Securely connecting to PostgreSQL clusters is a typical requirement when deploying to an untrusted network, such as a public cloud. The PostgreSQL Operator makes it easy to enable TLS for PostgreSQL. To do this, one must first create two Secrets: one containing the trusted certificate authority (CA) and one containing the PostgreSQL server’s TLS keypair, e.g.:
kubectl create secret generic postgresql-ca --from-file=ca.crt=/path/to/ca.crt
kubectl create secret tls hippo-tls-keypair \
--cert=/path/to/server.crt \
--key=/path/to/server.key
From there, one can create a PostgreSQL cluster that supports TLS with the following command:
pgo create cluster hippo-tls \
--server-ca-secret=postgresql-ca \
--server-tls-secret=hippo-tls-keypair
To create a PostgreSQL cluster that only accepts TLS connections and rejects any connection attempts made over an insecure channel, you can use the `--tls-only` flag on cluster creation, e.g.:
pgo create cluster hippo-tls \
--tls-only \
--server-ca-secret=postgresql-ca \
--server-tls-secret=hippo-tls-keypair
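Once the cluster is ready, you can verify that TLS is in effect with `psql` (a sketch, assuming network access to the cluster’s Service and a database user named `hippo`):

psql "host=hippo-tls user=hippo sslmode=require"
# against a --tls-only cluster, connection attempts made without TLS are rejected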
External WAL Volume
A common optimization for improving PostgreSQL performance related to file system usage is to have the PostgreSQL write-ahead log (WAL) written to a different mounted volume than other parts of the PostgreSQL system, such as the data directory.
To support this, the PostgreSQL Operator now supports the ability to specify an external volume for writing the PostgreSQL write-ahead log (WAL) during cluster creation, which carries through to replicas and clones. When not specified, the WAL resides within the PGDATA directory and volume, which is the present behavior.
To create a PostgreSQL cluster that uses an external volume, one can use the `--wal-storage-config` flag at cluster creation time to select the storage configuration to use, e.g.:
pgo create cluster --wal-storage-config=nfsstorage hippo
Additionally, it is also possible to specify the size of the WAL storage on all newly created clusters. When in use, the size of the volume can be overridden per-cluster. This is specified with the `--wal-storage-size` flag, i.e.:
pgo create cluster --wal-storage-config=nfsstorage --wal-storage-size=10Gi hippo
This implementation does not define the WAL volume in any deployment templates because the volume name and mount path are constant.
Elimination of ClusterRole Requirement for the PostgreSQL Operator
PostgreSQL Operator 4.0 introduced the ability to manage PostgreSQL clusters across multiple Kubernetes Namespaces. PostgreSQL Operator 4.1 built on this functionality by allowing users to dynamically control which Namespaces it managed as well as the PostgreSQL clusters deployed to them. In order to leverage this feature, one must grant ClusterRole-level permissions via a ServiceAccount to the PostgreSQL Operator.
There are many deployment environments that only need the PostgreSQL Operator to exist within a single namespace, and for these, granting cluster-wide privileges is superfluous and, in many cases, undesirable. It should therefore be possible to deploy the PostgreSQL Operator to a single namespace without requiring a `ClusterRole`.
To do this, while maintaining the aforementioned Namespace functionality for those who require it, PostgreSQL Operator 4.3 introduces the ability to opt into deploying the Operator with the minimum required `ClusterRole` privileges and, in turn, the ability to deploy the PostgreSQL Operator without a `ClusterRole`. To do so, the PostgreSQL Operator introduces the concept of a “namespace operating mode”, which lets one select the type of deployment to create. The namespace mode is set at install time for the PostgreSQL Operator, and falls into one of three options:
- `dynamic`: This is the default. This enables full dynamic Namespace management capabilities, in which the PostgreSQL Operator can create, delete and update any Namespaces within the Kubernetes cluster, while also having the ability to create the Roles, Role Bindings and Service Accounts within those Namespaces for normal operations. The PostgreSQL Operator can also listen for Namespace events and create or remove controllers for various Namespaces as changes are made to Namespaces from Kubernetes and the PostgreSQL Operator’s management.
- `readonly`: In this mode, the PostgreSQL Operator is able to listen for namespace events within the Kubernetes cluster, and then manage controllers as Namespaces are added, updated or deleted. While this still requires a `ClusterRole`, the permissions mirror those of a “read-only” environment, and as such the PostgreSQL Operator is unable to create, delete or update Namespaces itself, nor create the RBAC that it requires in any of those Namespaces. Therefore, while in `readonly` mode, Namespaces must be preconfigured with the proper RBAC, as the PostgreSQL Operator cannot create the RBAC itself.
- `disabled`: Use this mode if you do not want to deploy the PostgreSQL Operator with any `ClusterRole` privileges, especially if you are only deploying the PostgreSQL Operator to a single namespace. This disables any Namespace management capabilities within the PostgreSQL Operator; it will simply attempt to work with the target Namespaces specified during installation. If no target Namespaces are specified, the Operator will be configured to work within the namespace in which it is deployed. As with the `readonly` mode, Namespaces must be preconfigured with the proper RBAC, since the PostgreSQL Operator cannot create the RBAC itself.
Based on the installer you use, the variable that sets this mode is named:
- PostgreSQL Operator Installer: `NAMESPACE_MODE`
- Developer Installer: `PGO_NAMESPACE_MODE`
- Ansible Installer: `namespace_mode`
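For example, with the Ansible installer, deploying without a `ClusterRole` would be a matter of setting the inventory variable (a sketch):

namespace_mode='disabled'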
Feature Preview: pgAdmin 4 Integration + User Synchronization
pgAdmin 4 is a popular graphical user interface that lets you work with PostgreSQL databases from both a desktop or web-based client. With its ability to manage and orchestrate changes for PostgreSQL users, the PostgreSQL Operator is a natural partner to keep a pgAdmin 4 environment synchronized with a PostgreSQL environment.
This release introduces an integration with pgAdmin 4 that allows you to deploy a pgAdmin 4 environment alongside a PostgreSQL cluster and keeps the user’s database credentials synchronized. You can simply log into pgAdmin 4 with your PostgreSQL username and password and immediately have access to your databases.
For example, let’s say there is a PostgreSQL cluster called `hippo` that has a user named `hippo` with password `datalake`:
pgo create cluster hippo --username=hippo --password=datalake
After the PostgreSQL cluster becomes ready, you can create a pgAdmin 4 deployment with the `pgo create pgadmin` command:
pgo create pgadmin hippo
This creates a pgAdmin 4 deployment unique to this PostgreSQL cluster and synchronizes the PostgreSQL user information into it. To access pgAdmin 4, you can set up a port-forward to the Service, which follows the pattern `<clusterName>-pgadmin`, to port `5050`:
kubectl port-forward svc/hippo-pgadmin 5050:5050
Point your browser at `http://localhost:5050` and use your database username (e.g. `hippo`) and password (e.g. `datalake`) to log in.
(Note: if your password does not appear to work, you can retry setting up the user with the `pgo update user` command: `pgo update user hippo --password=datalake`)
The `pgo create user`, `pgo update user`, and `pgo delete user` commands are synchronized with the pgAdmin 4 deployment. Note that if you use `pgo create user` without the `--managed` flag prior to deploying pgAdmin 4, then the user’s credentials will not be synchronized to the pgAdmin 4 deployment. However, a subsequent run of `pgo update user --password` will synchronize the credentials with pgAdmin 4.
You can remove the pgAdmin 4 deployment with the `pgo delete pgadmin` command.
We have released the first version of this change under “feature preview” so you can try it out. As with all of our features, we are open to feedback on how we can continue to improve the PostgreSQL Operator.
Enhanced pgo df
`pgo df` provides information on the disk utilization of a PostgreSQL cluster; previously, it did not report accurate numbers. The new `pgo df` looks at each PVC that is mounted to each PostgreSQL instance in a cluster, including the PVCs for tablespaces, and computes the overall utilization. Even better, the data is returned in a structured format for easy scraping. This implementation also leverages Golang concurrency to help compute the results quickly.
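For example:

pgo df hippo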
Enhanced pgBouncer Integration
The pgBouncer integration was completely rewritten to support TLS-only operations via the PostgreSQL Operator. While most of the work was internal, you should now see a much more stable pgBouncer experience.
The pgBouncer attributes in the `pgclusters.crunchydata.com` CRD are also declarative, and any updates will be reflected by the PostgreSQL Operator.
Additionally, a few new commands were added:

- `pgo create pgbouncer --cpu` and `pgo create pgbouncer --memory`: resource request flags for setting container resources for the pgBouncer instances. For CPU, this will also set the limit.
- `pgo create pgbouncer --enable-memory-limit`: sets the Kubernetes resource limit for memory
- `pgo create pgbouncer --replicas`: sets the number of pgBouncer Pods to deploy with a PostgreSQL cluster. The default is `1`.
- `pgo show pgbouncer`: shows information about a pgBouncer deployment
- `pgo update pgbouncer --cpu` and `pgo update pgbouncer --memory`: resource request flags for setting container resources for the pgBouncer instances after they are deployed. For CPU, this will also set the limit.
- `pgo update pgbouncer --disable-memory-limit` and `pgo update pgbouncer --enable-memory-limit`: respectively unset and set the Kubernetes resource limit for memory
- `pgo update pgbouncer --replicas`: sets the number of pgBouncer Pods to deploy with a PostgreSQL cluster.
- `pgo update pgbouncer --rotate-password`: allows one to rotate the service account password for pgBouncer
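For example, combining several of these flags to deploy two pgBouncer Pods for the `hippo` cluster with explicit resource requests (the values are illustrative):

pgo create pgbouncer hippo --replicas=2 --cpu=0.5 --memory=32Mi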
Rewritten pgo User Management commands
The user management commands were rewritten to support the TLS-only workflow. These commands now return additional information about a user when actions are taken. Several new flags have been added too, including the option to view all output in JSON. Other flags include:
- `pgo update user --rotate-password`: automatically rotates the password
- `pgo update user --disable-login`: disables the ability for a PostgreSQL user to login
- `pgo update user --enable-login`: enables the ability for a PostgreSQL user to login
- `pgo update user --valid-always`: sets a password to always be valid, i.e. it has no expiration
- `pgo show user`: does not show system accounts by default now, but can be made to show the system accounts by using `pgo show user --show-system-accounts`
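For example, to rotate the password of the `hippo` user in the `hippo` cluster (a sketch, using the `--username` flag to target the user):

pgo update user hippo --username=hippo --rotate-password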
A major change as well: the password expiration now defaults to unlimited (i.e. passwords never expire), which aligns with typical PostgreSQL workflows.
Breaking Changes
- `pgo create cluster` will now set the default database name to be the name of the cluster. For example, `pgo create cluster hippo` would create the initial database named `hippo`.
- The `Database` configuration parameter in `pgo.yaml` (`db_name` in the Ansible inventory) is now set to `""` by default.
- The `--password`/`-w` flag for `pgo create cluster` now only sets the password for the regular user account that is created, not all of the system accounts (e.g. the `postgres` superuser).
- A default `postgres-ha.yaml` file is no longer created by the Operator for every PostgreSQL cluster.
- “Limit” resource parameters are no longer set on the containers, in particular the PostgreSQL container, due to undesired behavior stemming from the host machine OOM killer. Further details can be found in the original pull request.
- Added `DefaultInstanceMemory`, `DefaultBackrestMemory`, and `DefaultPgBouncerMemory` options to the `pgo.yaml` configuration to allow for the setting of default memory requests for PostgreSQL instances, the pgBackRest repository, and pgBouncer instances respectively.
- If unset by either the PostgreSQL Operator configuration or a one-off setting, the default memory resource requests for the following applications are:
  - PostgreSQL: The installers default to 128Mi (suitable for test environments), though the “default of last resort” is 512Mi to be consistent with the PostgreSQL default shared memory requirement
  - pgBackRest: 48Mi
  - pgBouncer: 24Mi
- Removed the `Default...ContainerResources` set of parameters from the `pgo.yaml` configuration file.
- The `pgbackups.crunchydata.com` CRD, deprecated since 4.2.0, has now been completely removed, along with any code that interfaced with it.
- The `PreferredFailoverFeature` is removed. This had not been doing anything since 4.2.0, but some of the legacy bits and configuration were still there.
- `pgo status` no longer returns information about the nodes available in a Kubernetes cluster.
- Removed the `--series` flag from the `pgo create cluster` command. This affects API calls more than actual usage of the `pgo` client.
- `pgo benchmark`, `pgo show benchmark`, and `pgo delete benchmark` are removed. PostgreSQL benchmarks with `pgbench` can still be executed using the `crunchy-pgbench` container.
- `pgo ls` is removed.
- The API that is used by `pgo create cluster` now returns its contents in JSON. The output now includes information about the user that is created.
- The API that is used by `pgo show backup` now returns its contents in JSON. The output view of `pgo show backup` remains the same.
- Removed the `PreferredFailoverNode` feature, as it had already been effectively removed.
- Removed explicit `rm` calls when cleaning up PostgreSQL clusters. This behavior is left to the storage provisioner that one deploys with their PostgreSQL instances.
- Scheduled backup job names have been shortened, and follow a pattern that looks like `<clusterName>-<backupType>-sch-backup`.
Features
- Several additions to `pgo create cluster` around PostgreSQL users and databases, including the following flags (a composite example follows this list):
  - `--ccp-image-prefix`: sets the `CCPImagePrefix` that specifies the image prefix for the PostgreSQL-related containers that are deployed by the PostgreSQL Operator
  - `--cpu`: sets the amount of CPU to use for the PostgreSQL instances in the cluster. This also sets the limit.
  - `--database`/`-d`: sets the name of the initial database created.
  - `--enable-memory-limit`, `--enable-pgbackrest-memory-limit`, `--enable-pgbouncer-memory-limit`: enable the Kubernetes memory resource limit for PostgreSQL, pgBackRest, and pgBouncer respectively
  - `--memory`: sets the amount of memory to use for the PostgreSQL instances in the cluster
  - `--user`/`-u`: sets the PostgreSQL username for the standard database user
  - `--password-length`: sets the length of the password that should be generated, if `--password` is not set.
  - `--pgbackrest-cpu`: sets the amount of CPU to use for the pgBackRest repository
  - `--pgbackrest-memory`: sets the amount of memory to use for the pgBackRest repository
  - `--pgbackrest-s3-ca-secret`: specifies the name of a Kubernetes Secret that contains a key (`aws-s3-ca.crt`) to override the default CA used for making connections to an S3 interface
  - `--pgbackrest-storage-config`: lets one specify a different storage configuration to use for a local pgBackRest repository
  - `--pgbouncer-cpu`: sets the amount of CPU to use for the pgBouncer instances
  - `--pgbouncer-memory`: sets the amount of memory to use for the pgBouncer instances
  - `--pgbouncer-replicas`: sets the number of pgBouncer Pods to deploy with the PostgreSQL cluster. The default is `1`.
  - `--pgo-image-prefix`: sets the `PGOImagePrefix` that specifies the image prefix for the PostgreSQL Operator containers that help to manage the PostgreSQL clusters
  - `--show-system-accounts`: returns the credentials of the system accounts (e.g. the `postgres` superuser) along with the credentials for the standard database user
- `pgo update cluster` now supports the `--cpu`, `--disable-memory-limit`, `--disable-pgbackrest-memory-limit`, `--enable-memory-limit`, `--enable-pgbackrest-memory-limit`, `--memory`, `--pgbackrest-cpu`, and `--pgbackrest-memory` flags to allow PostgreSQL instances and the pgBackRest repository to have their resources adjusted post-deployment
- Added the `PodAntiAffinityPgBackRest` and `PodAntiAffinityPgBouncer` options to the `pgo.yaml` configuration file to set specific Pod anti-affinity rules for the pgBackRest and pgBouncer Pods that are deployed along with PostgreSQL clusters that are managed by the Operator. The default for pgBackRest and pgBouncer is to use the value that is set in `PodAntiAffinity`.
- `pgo create cluster` now supports the `--pod-anti-affinity-pgbackrest` and `--pod-anti-affinity-pgbouncer` flags to specifically overwrite the pgBackRest repository and pgBouncer Pod anti-affinity rules on a specific PostgreSQL cluster deployment, which overrides any values present in `PodAntiAffinityPgBackRest` and `PodAntiAffinityPgBouncer` respectively. The default for pgBackRest and pgBouncer is to use the value for pod anti-affinity that is used for the PostgreSQL instances in the cluster.
- One can specify the “image prefix” (e.g. `crunchydata`) for the containers that are deployed by the PostgreSQL Operator. This adds two fields to the pgcluster CRD: `CCPImagePrefix` and `PGOImagePrefix`.
- Specify a different S3 Certificate Authority (CA) with `pgo create cluster` by using the `--pgbackrest-s3-ca-secret` flag, which refers to an existing Secret that contains a key called `aws-s3-ca.crt` that contains the CA. Reported by Aurelien Marie (@aurelienmarie).
- `pgo clone` now supports the `--enable-metrics` flag, which will deploy the monitoring sidecar along with the newly cloned PostgreSQL cluster.
- The pgBackRest repository now uses ED25519 SSH key pairs.
- Added the `--enable-autofail` flag to `pgo update` to make it clear how the autofailover mechanism can be re-enabled for a PostgreSQL cluster.
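A composite sketch using several of the flags above (the values are illustrative):

pgo create cluster hippo \
--database=hippo --user=hippo \
--cpu=1.0 --memory=512Mi \
--pgbackrest-memory=48Mi
pgo update cluster hippo --memory=1Gi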
Changes
- Remove `backoffLimit` from Jobs that can be retried, which is most of them.
- POSIX shared memory is now used for the PostgreSQL Deployments.
- Increase the number of namespaces that can be watched by the PostgreSQL Operator.
- The number of unsupported pgBackRest flags on the deny list has been reduced.
- The liveness and readiness probes for a PostgreSQL cluster now reference `/opt/cpm/bin/health`.
- `wal_level` now defaults to `logical` to enable logical replication.
- `archive_timeout` is now a default setting in the `crunchy-postgres-ha` and `crunchy-postgres-ha-gis` containers and is set to `60` (seconds).
- `ArchiveTimeout`, `LogStatement`, and `LogMinDurationStatement` are removed from `pgo.yaml`, as these can be customized either via a custom `postgresql.conf` file or a `postgres-ha.yaml` file.
- Quoted identifiers are used for the database name and user name in bootstrap scripts for the PostgreSQL containers.
- Password generation now leverages cryptographically secure random number generation and uses the full set of typeable ASCII characters.
- The `node` ClusterRole is no longer used.
- The names of the scheduled backups are shortened to use the pattern `<clusterName>-<backupType>-sch-backup`.
- The PostgreSQL Operator now logs its timestamps using RFC3339 formatting as implemented by Go.
- SSH key pairs are no longer created as part of the Operator installation process. This was a legacy behavior that had not been removed.
- The `pv/create-pv-nfs.sh` script has been modified to create persistent volumes with their own directories on the NFS filesystems. This better mimics production environments. The older version of the script still exists as `pv/create-pv-nfs-legacy.sh`.
- Load pgBackRest S3 credentials into environmental variables from Kubernetes Secrets, to avoid revealing their contents in Kubernetes commands or in logs.
- Update how the pgBackRest and pgMonitor parameters are loaded into Deployment templates to no longer use JSON fragments.
- The `pgo-rmdata` Job no longer calls the `rm` command on any data within the PVC, but rather leaves this task to the storage provisioner.
- Remove the use of `expenv` in the `add-targeted-namespace.sh` script.
Fixes
- Ensure PostgreSQL clusters can be successfully restored via `pgo restore` after `pgo scaledown` is executed.
- Allow the original primary to be removed with `pgo scaledown` after it is failed over.
- The replica Service is now properly managed based on the existence of replicas in a PostgreSQL cluster, i.e. if there are replicas, the Service exists; if not, it is removed.
- Report errors in a SQL policy at the time `pgo apply` is executed, which was the previous behavior. Reported by José Joye (@jose-joye).
- Ensure all replicas are listed out via the `--query` flag in `pgo scaledown` and `pgo failover`. This now follows the pattern outlined by the Kubernetes safe random string generator.
- Default the recovery action to “promote” when performing a “point-in-time-recovery” (PITR), which will ensure that a PITR process completes.
- The `stanza-create` Job now waits for both the PostgreSQL cluster and the pgBackRest repository to be ready before executing.
- Remove `backoffLimit` from Jobs that can be retried, which is most of them. Reported by Leo Khomenko (@lkhomenk).
- The `pgo-rmdata` Job will not fail if a PostgreSQL cluster has not been properly initialized.
- Fixed a separate `pgo-rmdata` crash related to an improper SecurityContext.
- The `failover` ConfigMap for a PostgreSQL cluster is now removed when the cluster is deleted.
- Allow the standard PostgreSQL user created with the Operator to be able to create and manage objects within its own user schema. Reported by Nicolas HAHN (@hahnn).
- Honor the value of `PasswordLength` when it is set in the `pgo.yaml` file for password generation. The default is now set at `24`.
- Do not log pgBackRest environmental variables to the Kubernetes logs.
- By default, exclude using the trusted OS certificate authority store for the Windows pgo client.
- Update the `pgo-client` imagePullPolicy to be `IfNotPresent`, which is the default for all of the managed containers across the project.
- Set `UsePAM yes` in the `sshd_config` file to fix an issue with using SSHD in newer versions of Docker.
- Only add Operator labels to a managed namespace if the namespace already exists when executing the `add-targeted-namespace.sh` script.