Installing Che on Kubespray

This article explains how to deploy Che on Kubernetes provisioned by Kubespray.

Configuring Kubernetes using Kubespray

Commands are given for bash on Ubuntu.

System update

Start with a system update before installing new packages.

$ sudo apt-get update
$ sudo apt-get upgrade
This step is only required on the machine used to run Kubespray. Kubespray will handle system updates on the cluster nodes.
SSH access
  • Install SSH server

    If a node does not have an SSH server installed by default, install one so that the machine can be controlled remotely.

    $ sudo apt-get install openssh-server
  • Create SSH key pair

    Generate one or more SSH key pairs to allow Kubespray (through Ansible) to log in to the nodes automatically over SSH. You can use a different key pair for each node or the same one for all nodes.

    $ ssh-keygen -b 2048 -t rsa -f /home/<local-user>/.ssh/id_rsa -q -N ""
  • Copy your public key(s) on nodes

    Copy your public key(s) into the ~/.ssh/authorized_keys file of the user account you will use on each node for deployment. You will be prompted twice for the password of that account: the first time for the public key upload over SSH and the second time for appending the public key to the authorized_keys file.

    for ip in <node1-ip> <node2-ip> ...; do
       scp /home/<local-user>/.ssh/id_rsa.pub <node-user>@$ip:/tmp/
       ssh <node-user>@$ip "mkdir -p ~/.ssh && cat /tmp/id_rsa.pub >> ~/.ssh/authorized_keys && rm /tmp/id_rsa.pub"
    done

    You will no longer be prompted for a password when connecting over SSH; the key will be used to authenticate you.
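    Alternatively, the standard ssh-copy-id utility performs the same upload and append in a single step, and also fixes the file permissions on the remote side:

    for ip in <node1-ip> <node2-ip> ...; do
       ssh-copy-id -i /home/<local-user>/.ssh/id_rsa.pub <node-user>@$ip
    done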

IPv4 forwarding

Kubespray requires IPv4 forwarding to be turned on, and normally enables it automatically. To turn it on manually, run the following command:

for ip in <node1-ip> <node2-ip> ...; do
   ssh <node-user>@$ip "echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward"
done
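Writing to /proc/sys/net/ipv4/ip_forward does not persist across reboots. If you want the setting to survive a reboot, one option (a sketch, assuming the nodes use the standard sysctl configuration layout, as Ubuntu does) is to add a sysctl drop-in file:

for ip in <node1-ip> <node2-ip> ...; do
   ssh <node-user>@$ip "echo 'net.ipv4.ip_forward = 1' | sudo tee /etc/sysctl.d/99-ip-forward.conf && sudo sysctl --system"
done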
Turn off swap

Kubernetes requires swap to be turned off on all nodes. The sed command below also comments out the swap entries in /etc/fstab, so that swap stays off after a reboot.

for ip in <node1-ip> <node2-ip> ...; do
   ssh <node-user>@$ip "sudo swapoff -a && sudo sed -i '/ swap / s/^/#/' /etc/fstab"
done
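You can verify that swap is now off on each node; the Swap line reported by free should show 0B totals:

for ip in <node1-ip> <node2-ip> ...; do
   ssh <node-user>@$ip "free -h | grep -i swap"
done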
Get Kubespray

Start by installing curl and unzip, which are used below to download and extract the Kubespray sources.

sudo apt-get install curl unzip

Get the latest Kubespray source code from its code repository.

At the time of writing, the latest release, v2.12.5, throws errors that are not encountered on the master branch, so this tutorial uses master.
mkdir -p ~/projects/ && \
cd ~/projects/ && \
curl -LJO https://github.com/kubernetes-sigs/kubespray/archive/master.zip && \
unzip kubespray-master.zip && \
mv kubespray-master kubespray && \
rm kubespray-master.zip && \
cd kubespray
Install Kubespray requirements

Kubespray uses Python 3 and requires several Python dependencies to be installed.

  • Install Python 3

    Install Python 3 along with pip (the package installer for Python) and venv, which creates virtual environments (see below).

    sudo apt-get install python3.7 python3-pip python3-venv
  • Create a virtual environment

    Using virtual environments (or conda environments for conda users) is a Python best practice that keeps a project's dependencies isolated.

    python3 -m venv ~/projects/kubespray-venv
    source ~/projects/kubespray-venv/bin/activate
  • Install Kubespray dependencies

    From the ~/projects/kubespray/ directory, run:

    pip install -r requirements.txt
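    The requirements include Ansible; you can confirm that it is now available inside the virtual environment:

    ansible --version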
Create a new cluster configuration

Start by creating a copy of the default settings from the sample cluster.

cp -rfp inventory/sample inventory/mycluster

Be sure you are still in the ~/projects/kubespray/ directory before executing this command!

Then customize your new cluster:

  1. Update Ansible inventory file with inventory builder

    declare -a IPS=(<node1-ip> <node2-ip> ...)
    CONFIG_FILE=inventory/mycluster/hosts.yaml python contrib/inventory_builder/inventory.py ${IPS[@]}
  2. (optional) Rename your nodes or deactivate host name renaming

    If you skip this step, your cluster host names will be renamed to node1, node2, and so on.

    You can either rename the nodes by editing the file ~/projects/kubespray/inventory/mycluster/hosts.yaml, for example with sed:

    sed -e 's/node1/tower/g' -e 's/node2/laptop/g' ... -i inventory/mycluster/hosts.yaml

    OR

    keep the current host names:

    echo "override_system_hostname: false" >> inventory/mycluster/group_vars/all/all.yml
  3. Check localhost vs node usernames

    If your localhost username differs from a node's username (the one that owns your SSH public key), you must specify the node username to Ansible by manually editing the hosts.yaml file.

    Example:

    localhost username    node1 username
    foo                   bar

    > cat inventory/mycluster/hosts.yaml
    all:
      hosts:
        node1:
          ansible_ssh_user: bar
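For reference, the generated hosts.yaml has roughly the following shape (a sketch for a single-node cluster; the exact group names can vary between Kubespray versions):

all:
  hosts:
    node1:
      ansible_host: <node1-ip>
      ip: <node1-ip>
      access_ip: <node1-ip>
  children:
    kube-master:
      hosts:
        node1:
    kube-node:
      hosts:
        node1:
    etcd:
      hosts:
        node1:
    k8s-cluster:
      children:
        kube-master:
        kube-node:
    calico-rr:
      hosts: {}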
Deploy your cluster!

It’s time to deploy Kubernetes by running the Ansible playbook command.

ansible-playbook -i inventory/mycluster/hosts.yaml --become --become-user=root cluster.yml
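The playbook takes a while to complete. Once it finishes, a first sanity check can be done directly on a master node, since kubectl and its configuration are installed there for the root user:

ssh <node-user>@<master-node-ip> "sudo kubectl --kubeconfig /root/.kube/config get nodes"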
Access your cluster API

The cluster is created, but you currently have no access to its API for configuration purposes. Kubespray has installed kubectl on the master nodes of your cluster, and the configuration files are saved in the root home directory of each master node.

To access the cluster API from another computer on your network, first install kubectl on it.

curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl
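You can check the client installation:

kubectl version --client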
  1. Copy the configuration files from the root home directory of a master node:

    1. On the master node, copy the configuration files from root's home directory to your user account:

      $ ssh <node-user>@<master-node-ip> "sudo cp -R /root/.kube ~ && sudo chown -R <node-user>:<node-user> ~/.kube"
  2. Download the configuration files to a remote computer:

    $ scp -r <node-user>@<master-node-ip>:~/.kube ~
    $ sudo chown -R <local-user>:<local-user> ~/.kube
  3. Remove the copied configuration files from the master node, to keep its secrets protected:

    $ ssh <node-user>@<master-node-ip> "rm -r ~/.kube"

    Use autocompletion for the sake of sanity:

    $ echo 'source <(kubectl completion bash)' >>~/.bashrc
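From the remote computer, verify that you can now reach the cluster API:

$ kubectl get nodes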

Configuring the MetalLB load balancer on Kubernetes

This section describes how to use MetalLB to install a load balancer for Kubernetes on bare metal.

Prerequisites
  • Administrative access to the cluster with kubectl, as configured in the previous section.
Procedure
  1. Configure Kubernetes proxy (after saving this change, restart kube-proxy as shown in the note after this procedure):

    $ kubectl edit configmap -n kube-system kube-proxy

    and set:

    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    kind: KubeProxyConfiguration
    mode: "ipvs"
    ipvs:
      strictARP: true
  2. Apply the MetalLB manifests:

    $ kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.3/manifests/namespace.yaml
    $ kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.3/manifests/metallb.yaml
    
    # On first install only:
    $ kubectl create secret generic -n metallb-system memberlist \
      --from-literal=secretkey="$(openssl rand -base64 128)"
  3. Configure MetalLB. To allow the load balancer to distribute external IPs, specify the IP range allocated to it in its configuration:

    $ cat << EOF | kubectl apply -f -
    apiVersion: v1
    kind: ConfigMap
    metadata:
      namespace: metallb-system
      name: config
    data:
      config: |
        address-pools:
        - name: default
          protocol: layer2
          addresses:
          - <your-ip-range>
    EOF
    Set <your-ip-range> to the IP range reserved for the load balancer, for example 192.168.1.240-192.168.1.250 on a 192.168.1.0/24 network.
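After changing the kube-proxy ConfigMap in step 1, the running kube-proxy Pods must be restarted to pick up the new mode. A minimal sketch, assuming kube-proxy runs as a DaemonSet in the kube-system namespace (the default in kubeadm-based clusters such as those Kubespray creates); the second command then checks that the MetalLB Pods are running:

$ kubectl -n kube-system rollout restart daemonset kube-proxy
$ kubectl get pods -n metallb-system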

Installing storage for Kubernetes

This section describes how to enable persistent storage for Kubernetes using NFS.

Procedure
  1. Install the NFS server on a machine on the same network as your cluster nodes:

    # apt-get install -y nfs-kernel-server
  2. Create the export directory:

    # mkdir -p /mnt/my-nfs

    Change its permissions:

    # chown nobody:nogroup /mnt/my-nfs
    # chmod 777 /mnt/my-nfs
  3. Start the NFS export:

    # echo "<mount-path> <subnet>(rw,sync,no_subtree_check)" | tee /etc/exports
    # exportfs -a
    # systemctl restart nfs-kernel-server
    Replace <mount-path> and <subnet> with your NFS settings, for example /mnt/my-nfs (the directory created in the previous step) and 192.168.1.0/24. Note that tee overwrites /etc/exports; use tee -a instead if the file already contains exports you want to keep.
  4. Set up the RBAC permissions and ServiceAccount for the provisioner (using the external-storage template):

    $ kubectl apply -f https://raw.githubusercontent.com/kubernetes-incubator/external-storage/master/nfs-client/deploy/rbac.yaml
  5. Set StorageClass:

    $ cat << EOF | kubectl apply -f -
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: managed-nfs-storage
      annotations:
        storageclass.kubernetes.io/is-default-class: "true"
    provisioner: nfs-provisioner
    parameters:
      archiveOnDelete: "false"
    EOF

    The storageclass.kubernetes.io/is-default-class annotation declares this StorageClass as the default one, so it is automatically selected by PVCs that do not request a specific class.

  6. Set the provisioner:

    $ cat << EOF | kubectl apply -f -
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nfs-client-provisioner
      labels:
        app: nfs-client-provisioner
      # replace with the namespace where the provisioner is deployed
      namespace: default
    spec:
      replicas: 1
      strategy:
        type: Recreate
      selector:
        matchLabels:
          app: nfs-client-provisioner
      template:
        metadata:
          labels:
            app: nfs-client-provisioner
        spec:
          serviceAccountName: nfs-client-provisioner
          containers:
            - name: nfs-client-provisioner
              image: quay.io/external_storage/nfs-client-provisioner:latest
              volumeMounts:
                - name: nfs-client-root
                  mountPath: /persistentvolumes
              env:
                - name: PROVISIONER_NAME
                  value: nfs-provisioner
                - name: NFS_SERVER
                  value: <nfs-server-ip>
                - name: NFS_PATH
                  value: <mount-path>
          volumes:
            - name: nfs-client-root
              nfs:
                server: <nfs-server-ip>
                path: <mount-path>
    EOF
    Replace <nfs-server-ip> and <mount-path> with your NFS settings.
  7. Verify the configuration:

    $ kubectl get deployments.apps,pods,sc -n default

    You should see the provisioner Deployment, its corresponding Pod, and the StorageClass marked as the default one.
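To confirm that dynamic provisioning works end to end, you can optionally create a small test PVC (the name test-claim is arbitrary and used here only for illustration), check that it reaches the Bound status, and delete it:

$ cat << EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Mi
EOF
$ kubectl get pvc test-claim
$ kubectl delete pvc test-claim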

Installing Che on Kubespray using chectl

This section describes how to install Che on Kubernetes provisioned by Kubespray.

Prerequisites
  • The chectl management tool is installed.
  • The MetalLB load balancer and a default StorageClass are configured, as described in the previous sections.
Procedure
  1. Deploy an Ingress controller. NGINX is used here with its cloud provider deployment manifest, because a load balancer is available:

    $ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/cloud/deploy.yaml
  2. Get the IP address assigned by the load balancer:

    $ kubectl get svc -n ingress-nginx

    It corresponds to the EXTERNAL-IP of the ingress-nginx-controller service. Use it as <ip-from-load-balancer> in the following step.

  3. Deploy Che:

      $ chectl server:deploy --platform k8s --domain <ip-from-load-balancer>.nip.io
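      When the deployment completes, chectl prints the URL of the Che instance. As a quick check that the Che Pods are running, without assuming which namespace chectl used, you can filter across all namespaces:

      $ kubectl get pods --all-namespaces | grep che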