Installing Che on Kubespray

This article explains how to deploy Che on Kubernetes provisioned by Kubespray.

Configuring Kubernetes using Kubespray

Commands are given for bash on Ubuntu.

System update

Start with a system update before installing new packages.

$ sudo apt-get update
$ sudo apt-get upgrade
This step is only required on the machine used to run Kubespray. Kubespray will handle system updates on the cluster nodes.
SSH access
  • Install SSH server

    If a node does not have an SSH server installed by default, you have to install one to control the machine remotely.

    $ sudo apt-get install openssh-server
  • Create SSH key pair

    You have to generate one or more SSH key pairs to allow Kubespray (through Ansible) to log in automatically using SSH. You can use a different key pair for each node or the same one for all nodes.

    $ ssh-keygen -b 2048 -t rsa -f /home/<local-user>/.ssh/id_rsa -q -N ""
  • Copy your public key(s) on nodes

    Copy your public key(s) into the ~/.ssh/authorized_keys file of the user account you will use on each node for deployment. You will be prompted twice for the password of each account: once for uploading the public key using SSH, and once for appending the public key to the authorized_keys file.

    for ip in <node1-ip> <node2-ip> ...; do
       scp /home/<local-user>/.ssh/id_rsa.pub <node-user>@$ip:/home/<node-user>/.ssh
       ssh <node-user>@$ip "cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys"
    done

    You will never be prompted for a password again when using SSH: the key will be used to authenticate you!
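Before running Kubespray, you can confirm that key-based authentication actually works. BatchMode makes ssh fail immediately instead of falling back to a password prompt (same placeholders as above):

```shell
for ip in <node1-ip> <node2-ip> ...; do
   # Fails fast if the node does not accept the key
   ssh -o BatchMode=yes <node-user>@$ip "echo key auth OK on $ip"
done
```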

IPv4 forwarding

Kubespray requires IPv4 forwarding to be enabled on the nodes. Kubespray normally takes care of this automatically.

To do it manually, run the following command:

for ip in <node1-ip> <node2-ip> ...; do
   ssh <node-user>@$ip "echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward"
done
Turn off swap

Kubernetes requires swap to be turned off. See this issue for more information.

for ip in <node1-ip> <node2-ip> ...; do
   ssh <node-user>@$ip "sudo swapoff -a && sudo sed -i '/ swap / s/^/#/' /etc/fstab"
done
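The sed expression above comments out every fstab entry containing ` swap `, so swap stays disabled after a reboot. A local demonstration on a sample fstab line (no cluster needed):

```shell
# Sample fstab swap entry
line='/dev/sda2 none swap sw 0 0'
# Lines matching " swap " get a leading "#"
echo "$line" | sed '/ swap / s/^/#/'
# prints: #/dev/sda2 none swap sw 0 0
```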
Get Kubespray

Start by installing curl.

sudo apt-get install curl

Get the latest Kubespray source code from its code repository.

The latest release at the time of writing, v2.12.5, throws errors not encountered in the development version.
mkdir -p ~/projects/ && \
cd ~/projects/ && \
curl -LJO <kubespray-archive-url> && \
unzip <kubespray-archive>.zip -d kubespray && \
rm <kubespray-archive>.zip && \
cd kubespray
Install Kubespray requirements

Kubespray runs on Python 3 and requires several dependencies to be installed.

  • Install Python 3

    Install Python 3, together with pip (the package installer for Python) and venv to create virtual environments (see below).

    sudo apt-get install python3.7 python3-pip python3-venv
  • Create a virtual environment

    Using virtual environments (or conda environments for conda users) is a Python best practice for isolating dependencies.

    python3 -m venv ~/projects/kubespray-venv
    source ~/projects/kubespray-venv/bin/activate
  • Install Kubespray dependencies

    pip install -r requirements.txt
Create a new cluster configuration

Start by creating a copy of the default settings from the sample cluster.

cp -rfp inventory/sample inventory/mycluster

Be sure you are still in the ~/projects/kubespray/ directory before executing this command!

Then customize your new cluster:

  1. Update Ansible inventory file with inventory builder

    declare -a IPS=(<node1-ip> <node2-ip> ...)
    CONFIG_FILE=inventory/mycluster/hosts.yaml python contrib/inventory_builder/inventory.py ${IPS[@]}
  2. (optional) Rename your nodes or deactivate hostname renaming

    If you skip this step, your cluster hostnames will be renamed to node1, node2, and so on.

    You can either edit the file ~/projects/kubespray/inventory/mycluster/hosts.yaml, for example:

    sed -e 's/node1/tower/g' -e 's/node2/laptop/g' ... -i inventory/mycluster/hosts.yaml

    or keep the current hostnames:

    echo "override_system_hostname: false" >> inventory/mycluster/group_vars/all/all.yml
  3. Check localhost versus node usernames

    If your localhost username differs from a node username (the one that owns your SSH public key), you must specify it to Ansible by manually adding ansible_ssh_user to the hosts.yaml file. For example, if the node1 username is bar:

    > cat inventory/mycluster/hosts.yaml
    all:
      hosts:
        node1:
          ansible_ssh_user: bar
Deploy your cluster!

It’s time to deploy Kubernetes by running the Ansible playbook command.

ansible-playbook -i inventory/mycluster/hosts.yaml --become --become-user=root cluster.yml
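Kubespray installs kubectl on the master nodes, so you can already check the cluster state from there before setting up remote access (a sketch using the same placeholders as above):

```shell
# All nodes should report STATUS "Ready" once the playbook finishes
ssh <node-user>@<master-node-ip> "sudo kubectl get nodes"
```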
Access your cluster API

The cluster is created, but you currently have no access to its API for configuration purposes. `kubectl` has been installed by Kubespray on the master nodes of your cluster, and the configuration files are saved in the root home directories of the master nodes.

To access the cluster API from another computer on your network, install kubectl first.

curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl
  1. On the master node, copy the configuration files from the root home directory to your user account:

      $ ssh <node-user>@<master-node-ip> "sudo cp -R /root/.kube ~ && sudo chown -R <node-user>:<node-user> ~/.kube"
  2. Download the configuration files to a remote computer:

    $ scp -r <node-user>@<master-node-ip>:~/.kube ~
    $ sudo chown -R <local-user>:<local-user> ~/.kube
  3. Keep secrets protected on the master node:

    $ ssh <node-user>@<master-node-ip> "rm -r ~/.kube"

    Use autocompletion for the sake of sanity:

    $ echo 'source <(kubectl completion bash)' >>~/.bashrc
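With the configuration files in place in ~/.kube, a quick sanity check confirms API access from your computer:

```shell
kubectl cluster-info          # prints the control plane endpoint
kubectl get nodes -o wide     # lists the cluster nodes and their IPs
```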

Configuring the MetalLB load balancer on Kubernetes

This section describes how to use MetalLB to install a load balancer for Kubernetes on bare metal.

  1. Configure Kubernetes proxy:

    $ kubectl edit configmap -n kube-system kube-proxy

    and set:

    kind: KubeProxyConfiguration
    mode: "ipvs"
    ipvs:
      strictARP: true
  2. Apply the MetalLB manifests:

    $ kubectl apply -f <metallb-namespace-manifest-url>
    $ kubectl apply -f <metallb-manifest-url>
    # On first install only:
    $ kubectl create secret generic -n metallb-system memberlist \
      --from-literal=secretkey="$(openssl rand -base64 128)"
  3. Configure MetalLB. To allow the load balancer to distribute external IPs, specify in its configuration the IP range allocated for it:

    $ cat << EOF | kubectl apply -f -
    apiVersion: v1
    kind: ConfigMap
    metadata:
      namespace: metallb-system
      name: config
    data:
      config: |
        address-pools:
        - name: default
          protocol: layer2
          addresses:
          - <your-ip-range>
    EOF
    Set <your-ip-range> to the IP range you intend to use.
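To check that MetalLB actually hands out external IPs, you can create a throwaway Deployment and LoadBalancer Service, then delete them (the name lb-test and the nginx image are illustrative):

```shell
kubectl create deployment lb-test --image=nginx
kubectl expose deployment lb-test --type=LoadBalancer --port=80
kubectl get svc lb-test        # EXTERNAL-IP should be taken from <your-ip-range>
kubectl delete svc,deployment lb-test
```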

Installing storage for Kubernetes

This section describes how to enable persistent storage for Kubernetes using NFS.

  1. Install the NFS server on a machine on the same network as your cluster nodes:

    # apt-get install -y nfs-kernel-server
  2. Create the export directory:

    # mkdir -p /mnt/my-nfs

    Change its permissions:

    # chown nobody:nogroup /mnt/my-nfs
    # chmod 777 /mnt/my-nfs
  3. Start the NFS export:

    # echo "<mount-path> <subnet>(rw,sync,no_subtree_check)" | tee /etc/exports
    # exportfs -a
    # systemctl restart nfs-kernel-server
    Replace <subnet> and <mount-path> with your NFS settings.
  4. Define the StorageClass settings and the provisioner (using the external-storage template):

    $ kubectl apply -f <external-storage-nfs-manifest-url>
  5. Set StorageClass:

    $ cat << EOF | kubectl apply -f -
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: managed-nfs-storage
      annotations:
        storageclass.kubernetes.io/is-default-class: "true"
    provisioner: nfs-provisioner
    parameters:
      archiveOnDelete: "false"
    EOF

    The is-default-class annotation declares this StorageClass as the default one, to be automatically selected by PVCs.

  6. Set the provisioner:

    $ cat << EOF | kubectl apply -f -
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nfs-client-provisioner
      labels:
        app: nfs-client-provisioner
      # replace with {orch-namespace} where provisioner is deployed
      namespace: default
    spec:
      replicas: 1
      strategy:
        type: Recreate
      selector:
        matchLabels:
          app: nfs-client-provisioner
      template:
        metadata:
          labels:
            app: nfs-client-provisioner
        spec:
          serviceAccountName: nfs-client-provisioner
          containers:
            - name: nfs-client-provisioner
              image: quay.io/external_storage/nfs-client-provisioner:latest
              volumeMounts:
                - name: nfs-client-root
                  mountPath: /persistentvolumes
              env:
                - name: PROVISIONER_NAME
                  value: nfs-provisioner
                - name: NFS_SERVER
                  value: <nfs-server-ip>
                - name: NFS_PATH
                  value: <mount-path>
          volumes:
            - name: nfs-client-root
              nfs:
                server: <nfs-server-ip>
                path: <mount-path>
    EOF
    Replace <nfs-server-ip> and <mount-path> with your NFS settings.
  7. Verify the configuration:

    $ kubectl get deployments.apps,pods,sc -n default

    You should see the Deployment of the provisioner, the corresponding Pod, and the StorageClass marked as the default one.
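A quick way to exercise the provisioner is to create a small test PVC and check that it binds through the default StorageClass (the name test-claim is illustrative):

```shell
cat << EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
EOF
kubectl get pvc test-claim     # STATUS should become Bound
kubectl delete pvc test-claim
```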

Installing Che on Kubespray using chectl

This section describes how to install Che on Kubernetes provisioned by Kubespray.

  1. Deploy an Ingress controller (NGINX is used here; choose the cloud deployment manifest, because a load balancer is available):

    $ kubectl apply -f <ingress-nginx-cloud-manifest-url>
  2. Get the IP assigned by the load balancer:

    $ kubectl get svc -n ingress-nginx

    It corresponds to the EXTERNAL-IP of the ingress-nginx-controller service. Use it as <ip-from-load-balancer> in the following step.

  3. Deploy Che:

    $ chectl server:deploy --platform k8s --domain <ip-from-load-balancer>
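Once chectl finishes, you can follow the Che pods as they start. Replace <che-namespace> with the namespace chectl reports at the end of the deployment (typically che for Kubernetes deployments):

```shell
kubectl get pods -n <che-namespace>
```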