Namespace-Scoped pgAdmin 4
Info
FEATURE AVAILABILITY: Available in v5.5.0 and above
The PostgreSQL Operator lets you deploy pgAdmin 4 independently from a PostgresCluster. Rather than having one pgAdmin 4 deployment tied to a single PostgresCluster, you can have one pgAdmin 4 deployment manage several PostgresClusters in a namespace, or access one PostgresCluster from several pgAdmin 4 deployments.
Hint
Namespace-scoped pgAdmin 4 currently supports all actively maintained versions of Postgres.
When you deploy pgAdmin 4 through the Operator in this way, you can define selectors to dynamically discover PostgresClusters. This means that you can set up pgAdmin 4 to already know about your labeled PostgresClusters without having to manually import each server individually!
The namespace-scoped pgAdmin 4 architecture also gives you access to an administrator user which you can use for direct user management.
Installation
After you have v5.5.0 or later running, the pgadmin resource should be available in your cluster. To make sure you are set up, check the existing CustomResourceDefinitions. You can do this with a selector flag to limit which CRDs are returned:
# I'm using the label our installers add to our CRDs as a selector:
kubectl get crd --selector postgres-operator.crunchydata.com/control-plane=postgres-operator
NAME CREATED AT
pgadmins.postgres-operator.crunchydata.com ...
Here you see the pgadmins Custom Resource, which allows the Operator to create and manage namespace-scoped pgAdmin 4 installations. If you see that, you should be good to go!
If you do not see the pgadmins CRD, you will need to upgrade to v5.5.0 or later, which should automatically install the pgadmins CRD. To make sure you have the most current installer before upgrading or installing, check the crd (or, if you're using the Helm installer, crds) directory to verify that the postgres-operator.crunchydata.com_pgadmins.yaml file is present.
Create a pgAdmin 4
Once you've verified that you have the pgadmins CRD installed and a v5.5.0 or later Operator is running, you're ready to request a pgAdmin 4 deployment.
Much like a PostgresCluster, a pgAdmin 4 deployment is defined as YAML:
apiVersion: postgres-operator.crunchydata.com/v1beta1
kind: PGAdmin
metadata:
  name: rhino
spec:
  dataVolumeClaimSpec: # 1
    accessModes:
    - "ReadWriteOnce"
    resources:
      requests:
        storage: 1Gi
  serverGroups: # 2
  - name: supply
    postgresClusterSelector: {}
We'll cover those fields labeled 1 and 2 in Configuration Details below.
Let's say this file is saved locally as pgadmin.yaml. This file can then be sent to Kubernetes via kubectl:
kubectl apply -f pgadmin.yaml
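To confirm that Kubernetes accepted the request, you can fetch the new resource. As a quick check (this assumes you applied the manifest to your current namespace):
kubectl get pgadmin rhino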
Configuration Details
- The dataVolumeClaimSpec section is required. This section describes the storage that your pgAdmin 4 instance will use. It is modeled after the Persistent Volume Claim. If you do not provide a spec.dataVolumeClaimSpec.storageClassName, then the default storage class in your Kubernetes environment is used.
- The serverGroups field is optional but necessary for dynamic server discovery. (See the Warning about zero servers below.) serverGroups is an array of serverGroup objects. A serverGroup consists of
  - a name, which is used as the name of the ServerGroup when registering these PostgresClusters with pgAdmin 4; and
  - a postgresClusterSelector, which is used to find the PostgresClusters to be registered. This field is a LabelSelector.
Dynamic server discovery
A namespace-scoped pgAdmin 4 deployment is capable of discovering PostgresClusters so that any user who can sign in to that pgAdmin 4 deployment can see those PostgresClusters. How does that work?
Discovery by selector
Every pgadmin custom resource can be set up with zero or more serverGroups, as detailed above. Each serverGroup has a required postgresClusterSelector field, which the Operator uses to select certain PostgresClusters by label.
Let's walk through the following pgadmin example to see how this would work.
spec:
  serverGroups:
  - name: demand
    postgresClusterSelector:
      matchLabels:
        owner: logistics
  - name: supply
    postgresClusterSelector: {}
  - name: maintenance
    postgresClusterSelector:
      matchExpressions:
      - { key: owner, operator: In, values: [logistics, transportation] }
Here we have defined three serverGroups, showing three separate ways to select on labels.
- The demand group has a postgresClusterSelector in the matchLabels form: any PostgresCluster that matches all of the labels here will be registered automatically.
- The supply group matches an empty postgresClusterSelector. This is a Kubernetes-specific idiom that will match all PostgresClusters in the namespace.
- The maintenance group uses the matchExpressions format to define what labels to match on.
To be clear, this example is meant to demonstrate several different ways you can define the postgresClusterSelector. If you had a PostgresCluster with the label owner: logistics, you should be able to log in to your pgAdmin 4 instance and see that PostgresCluster in all three ServerGroups.
Warning
Warning about zero servers
If the serverGroups field is omitted, or if the specified selectors do not match any PostgresClusters, then no servers will be found. Currently, if no servers are detected by pgAdmin 4, the initial administrator user will not be able to add new ServerGroups or Servers manually using the pgAdmin 4 dashboard.
In this case, the administrator user can still be used to manage users, and those users will be able to manually manage ServerGroups and Servers.
Registering the servers
When a PostgresCluster matches the postgresClusterSelector section of a serverGroup, that cluster will be registered in pgAdmin 4 as a shared server owned by the initial administrator user set up by pgAdmin 4.
Because the server is shared, any user who logs into this pgAdmin 4 will be able to see that PostgresCluster, but will be unable to delete or rename it.
(Note: for namespace-scoped pgAdmin 4 deployments, there is no automatic synchronization between pgAdmin 4 and Postgres users as there is with PostgresCluster-limited pgAdmin 4. Even if you can sign into pgAdmin 4 and see PostgresClusters, you will still need a valid Postgres user and credentials to access the Postgres database.)
So if you want to deploy one pgAdmin 4 to manage all the PostgresClusters in a namespace and share those servers with all the pgAdmin 4 users, you can set your pgadmin deployment to register all those PostgresClusters automatically and skip manually importing them one-by-one!
Configuring custom pgAdmin 4
You can configure pgAdmin 4 using the spec.config field. For most basic pgAdmin configurations, you will only need the spec.config.settings field, but if you want to enable more advanced features, like external authentication sources, you will need to use some other fields. In this section, we will go through an example configuration that outlines each of the spec.config sub-fields.
Basic pgAdmin settings
The spec.config.settings field will be used to set any value that you would find in the pgAdmin 4 config.py file. Some of the easiest values to describe are the SHOW_GRAVATAR_IMAGE and DEBUG settings. The following configuration will enable DEBUG mode and disable gravatars when your users log in:
spec:
  config:
    settings:
      SHOW_GRAVATAR_IMAGE: False
      DEBUG: True
The values provided in spec.config.settings are stored in a ConfigMap that is mounted to the pgAdmin Pod. The mounted ConfigMap and its values are passed to pgAdmin through the config_system.py configuration file.
It is worth noting that CPK owns some of the fields, and you won't be able to configure them. A good example of this is the SERVER_MODE setting. Since we want pgAdmin to run as a web server and not a desktop app, CPK will always set this value.
Hint
You can check the pgAdmin settings ConfigMap with the following command:
kubectl get cm -l postgres-operator.crunchydata.com/pgadmin=rhino -o yaml
Using custom files in pgAdmin
In some cases you may need to mount configuration files to the pgAdmin 4 Pod. For example, if you want to secure the connection to your LDAP server using TLS, you will need to provide cert files (more on that in LDAP TLS configuration). You can mount files by defining ConfigMaps or Secrets in the spec.config.files field. The contents of the defined objects are mounted to the /etc/pgadmin/conf.d directory in the pgAdmin Pod using projected volumes. The following mounts tls.crt of Secret mysecret to /etc/pgadmin/conf.d/tls.crt:
spec:
  config:
    files:
    - secret:
        name: mysecret
        items:
        - key: tls.crt
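For the mount above to work, the mysecret Secret must exist in the same namespace as the PGAdmin resource. A minimal sketch, assuming a certificate file saved locally as ./tls.crt (a hypothetical path):
kubectl create secret generic mysecret --from-file=tls.crt=./tls.crt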
Authentication Sources
Another configuration option that pgAdmin provides is the AUTHENTICATION_SOURCES setting, which allows you to adjust the ways in which pgAdmin users can authenticate. By default, pgAdmin is set up to only allow internal users to authenticate. If you want your pgAdmin users to be able to authenticate via an external source, such as LDAP, you will need to include that source in the AUTHENTICATION_SOURCES setting array.
You can include multiple AUTHENTICATION_SOURCES as an array:
spec:
  config:
    settings:
      AUTHENTICATION_SOURCES: ['ldap', 'internal']
The first source in the list will have a higher priority, meaning you can use ldap as your first source and internal as a fallback in case ldap fails.
LDAP Configuration
The pgAdmin 4 config.py file has configuration options to enable LDAP authentication into pgAdmin 4. These settings will depend on your LDAP server. We will go through some simple examples here to show how you can connect to an LDAP server.
Basic connection
You will configure the majority of LDAP settings using the spec.config.settings field. The first step to enabling LDAP is to update your AUTHENTICATION_SOURCES setting to include the new source. CPK requires that you enable the LDAP_AUTO_CREATE_USER setting so that pgAdmin will create a pgAdmin user for any LDAP user that successfully logs in.
spec:
  config:
    settings:
      AUTHENTICATION_SOURCES: ['ldap', 'internal']
      LDAP_AUTO_CREATE_USER: True # Required if using LDAP authentication source
This is also where you will configure your LDAP_SERVER_URI and other LDAP settings, like LDAP_SEARCH_BASE_DN or LDAP_ANONYMOUS_BIND. Reference the pgAdmin 4 LDAP documentation for more information about LDAP settings.
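As a rough sketch, a basic anonymous-bind configuration might look like the following; the server URI and base DN are placeholder values for a hypothetical directory server:
spec:
  config:
    settings:
      AUTHENTICATION_SOURCES: ['ldap', 'internal']
      LDAP_AUTO_CREATE_USER: True
      LDAP_SERVER_URI: ldap://my.ds.example.com # hypothetical server
      LDAP_SEARCH_BASE_DN: dc=example,dc=com    # hypothetical base DN
      LDAP_ANONYMOUS_BIND: True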
LDAP Bind User and Password
Depending on your LDAP configuration, you might need to define a user and password that will bind pgAdmin 4 to the LDAP server. These options are defined in config.py as LDAP_BIND_USER and LDAP_BIND_PASSWORD. You will define the LDAP_BIND_USER like you would any other setting. However, the LDAP_BIND_PASSWORD is not something that we recommend storing in your PGAdmin spec. Instead, CPK provides the ldapBindPassword field that lets you point at a Secret:
spec:
  config:
    settings:
      LDAP_BIND_USER: $user
    ldapBindPassword:
      name: ldappass
      key: $password
This field is a Secret key reference that will be mounted to the pgAdmin Pod. CPK will configure pgAdmin to look in the mounted file instead of using the plaintext LDAP_BIND_PASSWORD setting. This helps to keep your password secure.
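To round out the example, assuming you choose password as the key name (substituting it for the $password placeholder above), you could create the ldappass Secret like this:
kubectl create secret generic ldappass --from-literal=password='<your-bind-password>'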
Connection to a TLS LDAP server
If you are connecting to an LDAP server using TLS, you will need to provide cert files to secure the connection. As covered above, you will use the spec.config.files field to mount your cert files. Once the files are available to pgAdmin, you will need to tell pgAdmin where to look for them. This is done using the LDAP_CA_CERT_FILE, LDAP_CERT_FILE, and LDAP_KEY_FILE settings. Your final spec should include something like this:
spec:
  config:
    settings:
      LDAP_SERVER_URI: ldaps://my.ds.example.com
      LDAP_CA_CERT_FILE: /etc/pgadmin/conf.d/certs/ca.crt
      LDAP_CERT_FILE: /etc/pgadmin/conf.d/certs/tls.crt
      LDAP_KEY_FILE: /etc/pgadmin/conf.d/certs/tls.key
    files:
    - secret:
        name: openldap
        items:
        - key: ca.crt
          path: certs/ca.crt
        - key: tls.crt
          path: certs/tls.crt
        - key: tls.key
          path: certs/tls.key
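The openldap Secret referenced here must hold all three cert files. A minimal sketch, assuming the files are saved locally as ./ca.crt, ./tls.crt, and ./tls.key (hypothetical paths):
kubectl create secret generic openldap \
  --from-file=ca.crt=./ca.crt \
  --from-file=tls.crt=./tls.crt \
  --from-file=tls.key=./tls.key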
OAuth2 Configuration
The pgAdmin config.py file also has configuration options to enable OAuth2 authentication for pgAdmin. These settings will depend on your OAuth2 server. As with LDAP, we will go through some simple examples here to show how you can connect to an OAuth2 server.
Example Configurations
You will configure the OAuth2 settings using the config.settings field. The first step to enabling OAuth2 is to update your AUTHENTICATION_SOURCES setting to include the new source. CPK requires that you enable the OAUTH2_AUTO_CREATE_USER setting so that pgAdmin will create a pgAdmin user for any OAuth2 user that successfully logs in. As shown below, more than one OAuth2 authentication source can be defined. Please note that in pgAdmin 8.12, OAUTH2_ICON, OAUTH2_BUTTON_COLOR, and other settings are required. This will be updated in a future release.
config:
  settings:
    AUTHENTICATION_SOURCES: ['internal', 'oauth2']
    OAUTH2_AUTO_CREATE_USER: True
    OAUTH2_CONFIG:
    - OAUTH2_NAME: "google"
      OAUTH2_DISPLAY_NAME: "Google"
      OAUTH2_CLIENT_ID: "xxxxxxxxxxxxxxxxx"
      OAUTH2_CLIENT_SECRET: "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
      OAUTH2_TOKEN_URL: "https://oauth2.googleapis.com/token"
      OAUTH2_AUTHORIZATION_URL: "https://accounts.google.com/o/oauth2/auth"
      OAUTH2_API_BASE_URL: "https://openidconnect.googleapis.com/v1/"
      OAUTH2_SERVER_METADATA_URL: "https://accounts.google.com/.well-known/openid-configuration"
      OAUTH2_SCOPE: "openid email profile"
      OAUTH2_USERINFO_ENDPOINT: "userinfo"
      OAUTH2_BUTTON_COLOR: "red"
      OAUTH2_ICON: "None"
    - OAUTH2_NAME: "github"
      OAUTH2_DISPLAY_NAME: "Github"
      OAUTH2_CLIENT_ID: "xxxxxxxxxxxxxxxxx"
      OAUTH2_CLIENT_SECRET: "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
      OAUTH2_TOKEN_URL: "https://github.com/login/oauth/access_token"
      OAUTH2_AUTHORIZATION_URL: "https://github.com/login/oauth/authorize"
      OAUTH2_API_BASE_URL: "https://api.github.com/"
      OAUTH2_USERINFO_ENDPOINT: "user"
      OAUTH2_BUTTON_COLOR: "blue"
      OAUTH2_ICON: "None"
      OAUTH2_SCOPE: "user"
With the above configuration added to the PGAdmin deployment, you will have two new login options available on the pgAdmin login screen.
Connecting to pgAdmin 4
To connect to pgAdmin 4, you will need to access the port and get the user credentials.
Accessing the pgAdmin 4 port
pgAdmin 4 is available on port 5050 of the Pod. To access it, you have two options:
- Direct Port-Forwarding to the Pod: You can create a port-forward directly to the Pod.
- Port-Forwarding via a Service: Alternatively, you can set up a Service and then establish a port-forward to that Service.
Before proceeding with either method, you need to identify the specific pgAdmin Pod you want to connect to. This can be done by selecting the Pod based on its label. To make it easier to reuse, you can save the Pod name to the variable PGADMIN_POD. For example, if you want to connect to the rhino pgAdmin instance mentioned earlier, execute the following command:
Bash:
export PGADMIN_POD=$(kubectl get pod -n postgres-operator --selector="postgres-operator.crunchydata.com/pgadmin=rhino" -o name)
PowerShell:
$env:PGADMIN_POD=$(kubectl get pod -n postgres-operator --selector="postgres-operator.crunchydata.com/pgadmin=rhino" -o name)
Once you've identified your pgAdmin Pod, you could port-forward to it directly:
Bash:
kubectl port-forward -n postgres-operator ${PGADMIN_POD} 5050:5050
PowerShell:
kubectl port-forward -n postgres-operator ${env:PGADMIN_POD} 5050:5050
Or you could create a Service, such as by using the kubectl expose command, and port-forward to that Service:
Bash:
kubectl expose -n postgres-operator ${PGADMIN_POD} --name rhino-pgadmin
PowerShell:
kubectl expose -n postgres-operator ${env:PGADMIN_POD} --name rhino-pgadmin
Then, port-forward to that Service. The following command works in both Bash and PowerShell environments:
kubectl port-forward -n postgres-operator svc/rhino-pgadmin 5050:5050
Point your browser at http://localhost:5050 and you will be prompted to log in.
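If you'd like to verify the port-forward before opening a browser, a quick check with curl (assuming it is installed) should return an HTTP response from pgAdmin:
curl -I http://localhost:5050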
Using your initial user credentials
pgAdmin 4 requires an initial administrator user. As part of deploying, CPK sets up an administrator user and creates a password for that user that you can use.
- Username: The admin user name is set as admin@<pgAdmin_name>.<pgAdmin_namespace>.svc. So the user for a PGAdmin named rhino in the namespace postgres-operator would be admin@rhino.postgres-operator.svc. This naming format was chosen because pgAdmin 4 requires an email format in order to log in, and to reinforce the connection between the username and the pgAdmin 4 instance being logged into. For ease of use, this username is saved in the Secret with the label postgres-operator.crunchydata.com/pgadmin=<pgAdmin_name>.
- Password: The password is saved in a Secret with the label postgres-operator.crunchydata.com/pgadmin=<pgAdmin_name>.
With the rhino pgadmin example, you could enter the following commands to get the username and password:
# To retrieve the username:
kubectl get secret -n postgres-operator -l postgres-operator.crunchydata.com/pgadmin=rhino -o go-template="{{range .items}}{{.data.username | base64decode }}{{end}}"
# To retrieve the password:
kubectl get secret -n postgres-operator -l postgres-operator.crunchydata.com/pgadmin=rhino -o go-template="{{range .items}}{{.data.password | base64decode }}{{end}}"
Those commands will output the username and password. With those credentials in hand, you can log in as the administrator and manage other users.
Deleting pgAdmin 4
You can remove the pgAdmin 4 deployment by sending a delete request:
kubectl delete pgadmin rhino -n postgres-operator
# or using the YAML file defined above
kubectl delete -f pgadmin.yaml
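To confirm the deployment is gone, you can list any remaining pgadmins in the namespace:
kubectl get pgadmin -n postgres-operator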