This document serves four purposes:
- Ensure you have the prerequisites for building the images in Crunchy Container Suite
- Make sure your local machine has all the pieces needed to run the examples in the GitHub repository
- Run the images as standalone containers in Docker
- Instruct you how to install the Crunchy Container Suite into Kubernetes or OpenShift
Where applicable, we will try to denote which installations and steps are required for the items above.
When we set up the directories below, you will notice they seem quite deeply nested. We are setting up a Go programming language workspace. Go has a specific folder structure for its workspaces, with multiple projects in a single workspace. If you are not going to build the container images, you can ignore the deep directories below, but it will not hurt to follow the directions exactly.
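As a quick illustration of that layout, a Go workspace has bin, pkg, and src at the top level, with each project nested under src by its import path. This sketch uses a throwaway directory rather than the real $HOME/cdev used below:

```shell
# Illustration only: sketch the Go workspace layout in a scratch directory
WORK=$(mktemp -d)
mkdir -p "$WORK/src/github.com/crunchydata" "$WORK/pkg" "$WORK/bin"

# The top level holds bin, pkg, and src; clones live under src/<import path>
ls "$WORK"
```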
These instructions were developed and tested on the following operating systems:
We also assume you are using the Docker provided with the distributions above. If you have installed Docker CE or EE on your machine, please create a VM for this work or uninstall Docker CE or EE.
The images in Crunchy Container Suite can run on different environments including:
OpenShift Container Platform 3.11
CentOS 7 only
$ sudo yum -y install epel-release
$ sudo yum -y install golang git
RHEL 7 only
$ sudo subscription-manager repos --enable="rhel-7-server-extras-rpms" --enable="rhel-7-server-optional-rpms"
$ sudo yum -y install epel-release
$ sudo yum -y install golang git
Clone GitHub repository
Make directories to hold the GitHub clone that also work with the Go workspace structure
$ mkdir -p $HOME/cdev/src/github.com/crunchydata $HOME/cdev/pkg $HOME/cdev/bin
$ cd $HOME/cdev/src/github.com/crunchydata
$ git clone https://github.com/crunchydata/crunchy-containers
$ cd crunchy-containers
$ git checkout v4.3.0
We also need to fetch a Go module for expanding environment variables:
$ go get github.com/blang/expenv
Your Shell Environment
We have found that, because of the way Go handles different projects, you may want to create a separate account if you plan to build the containers and also work on other Go projects. You could also look into some of the GOPATH wrappers.
If your goal is to simply run the containers, any properly configured user account should work.
Now we need to set the project paths and software version numbers. Edit your $HOME/.bashrc file with your favorite editor and add the following information. You can leave out the comments at the end of each line starting with #:
export GOPATH=$HOME/cdev        # set path to your new Go workspace
export GOBIN=$GOPATH/bin        # set bin path
export PATH=$PATH:$GOBIN        # add Go bin path to your overall path
export CCP_BASEOS=centos7       # centos7 for CentOS, ubi7 for Red Hat Universal Base Image
export CCP_PGVERSION=10         # the PostgreSQL major version
export CCP_PG_FULLVERSION=10.11
export CCP_VERSION=4.3.0
export CCP_IMAGE_PREFIX=crunchydata   # prefix to put before all the container image names
export CCP_IMAGE_TAG=$CCP_BASEOS-$CCP_PG_FULLVERSION-$CCP_VERSION   # used to tag the images
export CCPROOT=$GOPATH/src/github.com/crunchydata/crunchy-containers   # the base of the cloned GitHub repo
export CCP_SECURITY_CONTEXT=""
export CCP_CLI=kubectl          # kubectl for K8s, oc for OpenShift
export CCP_NAMESPACE=demo       # change this to whatever namespace/OpenShift project name you want to use
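As a sanity check, note how CCP_IMAGE_TAG is derived from three of the other variables. With the example values above it expands like this (a minimal sketch, not part of the required setup):

```shell
# Using the example values from the exports above
CCP_BASEOS=centos7
CCP_PG_FULLVERSION=10.11
CCP_VERSION=4.3.0

# CCP_IMAGE_TAG combines the base OS, full PostgreSQL version, and suite version
CCP_IMAGE_TAG=$CCP_BASEOS-$CCP_PG_FULLVERSION-$CCP_VERSION
echo "$CCP_IMAGE_TAG"   # → centos7-10.11-4.3.0
```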
It will be necessary to refresh your .bashrc file in order for the changes to take effect:

$ source $HOME/.bashrc

At this point we have almost all the prerequisites required to build the Crunchy Container Suite.
Building UBI Containers With Supported Crunchy Enterprise Software
Before you can build supported containers on UBI with Crunchy Supported Software, you need to add the Crunchy repositories to your approved Yum repositories. Crunchy Enterprise Customers running on UBI can log in and download the Crunchy repository key and yum repository file from https://access.crunchydata.com/ on the downloads page. Once the files are downloaded, please place them into the $CCPROOT/conf directory (defined above in the environment variable section).
The OpenShift and Kubernetes (KubeAdm) instructions both have a section for installing Docker. Installing Docker now won't cause any issues, but you may wish to configure Docker storage before bringing everything up. Configuring Docker storage is different from the Storage Configuration referenced later in these instructions and is not covered here.

For a basic Docker installation, you can follow the instructions below. Please refer to the installation guide for the version of Kubernetes you are installing for more specific details.
sudo yum -y install docker
It is necessary to add the docker group and give your user access to that group:

sudo groupadd docker
sudo usermod -a -G docker <username>
Logout and login again as the same user to allow group settings to take effect.
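After logging back in, one way to confirm the group change took effect is to list your current groups; docker should appear in the output (assuming the group was added as above):

```shell
# Print the groups of the current session; "docker" should be listed
# once the new login has picked up the change
id -nG
```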
Enable Docker service and start Docker (once all configuration is complete):
sudo systemctl enable docker.service
sudo systemctl start docker.service
These installation instructions assume the installation of PostgreSQL 10 through the official PostgreSQL Development Group (PGDG) repository. Refer to the documentation located here for more detailed notes or to install a different version of PostgreSQL.
Locate and edit your distribution's .repo file, located:
On CentOS: /etc/yum.repos.d/CentOS-Base.repo, [base] and [updates] sections
On RHEL: /etc/yum/pluginconf.d/rhnplugin.conf [main] section
To the section(s) identified above, depending on the OS being used, append the following line to prevent dependencies from being resolved to the PostgreSQL supplied by the base repository.

On CentOS and RHEL:

exclude=postgresql*
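For example, the [base] section of a CentOS-Base.repo would end up looking something like this (a sketch; the mirror and key settings are illustrative and will differ on your system):

```
[base]
name=CentOS-$releasever - Base
mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=os
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
exclude=postgresql*
```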
Next, install the RPM relating to the base operating system and PostgreSQL version you wish to install. The RPMs can be found here. Below we chose PostgreSQL 10 for the example (change this if you need a different version):
On CentOS system:
sudo yum -y install https://download.postgresql.org/pub/repos/yum/10/redhat/rhel-7-x86_64/pgdg-centos10-10-2.noarch.rpm
On RHEL system:
sudo yum -y install https://download.postgresql.org/pub/repos/yum/testing/10/redhat/rhel-7-x86_64/pgdg-redhat10-10-2.noarch.rpm
Update the system:
sudo yum -y update
Install the PostgreSQL server package.
sudo yum -y install postgresql10-server.x86_64
Update the system:
sudo yum -y update
Configuring Storage for Kubernetes Based Systems
In addition to the environment variables we set earlier, you will need to add environment variables for Kubernetes storage configuration. Please see the Storage Configuration document for configuring storage using environment variables set in .bashrc.

Don't forget to refresh your .bashrc file after making any changes:

$ source $HOME/.bashrc
Use the OpenShift installation guide to install OpenShift Enterprise on your host. Make sure to choose the proper version of OpenShift you want to install. The main instructions for 3.11 are here and you’ll be able to select a different version there, if needed:
Make sure your hostname resolves to a single IP address in your /etc/hosts file. The NFS examples will not work otherwise and other problems with installation can occur unless you have a resolving hostname.
You should see a single IP address returned from this command:
$ hostname --ip-address
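If more than one address comes back, a common fix is to make sure /etc/hosts maps your hostname to exactly one address, along the lines of the entry below (hypothetical name and address):

```
192.168.1.10   myhost.example.com   myhost
```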
We suggest using Kubeadm as a simple way to install Kubernetes.
See Kubeadm for installing the latest version of Kubeadm.
See Create a Cluster for creating the Kubernetes cluster using Kubeadm. Note: We find that Weave networking works particularly well with the container suite.
Please see here to view the official documentation regarding configuring DNS for your Kubernetes cluster.
Post Kubernetes Configuration
In order to run the various examples, Role Based Account Control will need to be set up. Specifically, the cluster-admin role will need to be assigned to the Kubernetes user that will be utilized to run the examples. This is done by creating the proper ClusterRoleBinding:
$ kubectl create clusterrolebinding cluster-admin-binding \
    --clusterrole cluster-admin --user someuser
If you are running on GKE, the following command can be utilized to auto-populate the user option with the account that is currently logged into Google Cloud:
$ kubectl create clusterrolebinding cluster-admin-binding \
    --clusterrole cluster-admin --user $(gcloud config get-value account)
If more than one user will be running the examples on the same Kubernetes cluster, a unique name will need to be provided for each new ClusterRoleBinding created in order to assign the cluster-admin role to every user. The example below will create a ClusterRoleBinding with a unique value:
$ kubectl create clusterrolebinding <unique>-cluster-admin-binding \
    --clusterrole cluster-admin \
    --user someuser
If you are running on GKE, the following can be utilized to create a unique ClusterRoleBinding for each user, with the user’s Google Cloud account prepended to the name of each new ClusterRoleBinding:
$ kubectl create clusterrolebinding "$(gcloud config get-value account)-cluster-admin-binding" \
    --clusterrole cluster-admin \
    --user $(gcloud config get-value account)
Some Kubernetes Helm examples are provided in the following directory as one option for deploying the Container Suite.
Once you have your Kubernetes environment configured, it is simple to get Helm up and running. Please refer to this document to get Helm installed and configured properly.
Configuring Namespace and Permissions
In Kubernetes, a concept called a namespace provides the means to separate created resources or components into individual logically grouped partitions. In OpenShift, namespace is referred to as a project.
It is considered a best practice to have dedicated namespaces for projects in both testing and production environments.
The namespace used by the examples is determined by the environment variable $CCP_NAMESPACE. The default we use for the namespace is 'demo', but it can be set to any valid namespace name. The instructions below illustrate how to set up and work within new namespaces or projects in both Kubernetes and OpenShift.
This section will illustrate how to set up a new Kubernetes namespace called demo, and will then show how to provide permissions to that namespace to allow the Kubernetes examples to run within that namespace.
First, view currently existing namespaces:
$ kubectl get namespace
NAME          STATUS    AGE
default       Active    21d
kube-public   Active    21d
kube-system   Active    21d
Then, create a new namespace called demo:
$ kubectl create -f $CCPROOT/conf/demo-namespace.json
namespace "demo" created

$ kubectl get namespace demo
NAME      STATUS    AGE
demo      Active    7s
Then set the namespace as the default for the current context:
$ kubectl config set-context $(kubectl config current-context) --namespace=demo
We can verify that the namespace was set correctly through the following command:
$ kubectl config view | grep namespace:
    namespace: demo
This section assumes an administrator has already logged in first as the system:admin user as directed by the OpenShift Installation Guide.
For our development purposes only, we typically specify the OCP authorization policy of AllowAll, as documented here. We do not recommend this authorization policy for a production deployment of OCP.
Log into the system as a user:
$ oc login -u <user>
The next step is to create a demo namespace to run the examples within. The name of this OCP project will be what you supply in the CCP_NAMESPACE environment variable:
$ oc new-project demo --description="Crunchy Containers project" --display-name="Crunchy-Containers"
Now using project "demo" on server "https://127.0.0.1:8443".

$ export CCP_NAMESPACE=demo
If we view the list of projects, we can see the new project has been added and is “active”.
$ oc get projects
NAME        DISPLAY NAME         STATUS
demo        Crunchy-Containers   Active
myproject   My Project           Active
If you were on a different project and wanted to switch to the demo project, you would do so by running the following:
$ oc project demo
Now using project "demo" on server "https://127.0.0.1:8443".
When self-provisioning a new project using the
oc new-project command, the current user (i.e.,
the user you used when logging into OCP with the
oc login command) will automatically be assigned
to the admin role for that project. This will allow the user to create the majority of the
objects needed to successfully run the examples. However, in order to create the Persistent
Volume objects needed to properly configure storage for the examples, an additional role is
needed. Specifically, a new role is needed that can both create and delete Persistent Volumes.
Using the following two commands, create a new Cluster Role that has the ability to create and delete persistent volumes, and then assign that role to your current user:
$ oc create clusterrole crunchytester --verb="list,create,delete" --resource=persistentvolumes
clusterrole "crunchytester" created

$ oc adm policy add-cluster-role-to-user crunchytester someuser
cluster role "crunchytester" added: "someuser"
Your user should now have the roles and privileges required to run the examples.