Installing Che on Kubespray
This article explains how to deploy Che on Kubernetes provisioned by Kubespray.
Configuring Kubernetes using Kubespray
Commands are given for bash on Ubuntu.
Start with a system update before installing new packages.
$ sudo apt-get update
$ sudo apt-get upgrade
This step is only required on the machine used to run Kubespray; Kubespray handles system updates on the cluster nodes.
- Install SSH server
If a node does not have an SSH server installed by default, install it so that the machine can be controlled remotely.
$ sudo apt-get install openssh-server
- Create SSH key pair
You have to generate one or more SSH key pairs to allow Kubespray (Ansible) to log in to the nodes automatically over SSH. You can use a different key pair for each node or the same one for all nodes.
$ ssh-keygen -b 2048 -t rsa -f /home/<local-user>/.ssh/id_rsa -q -N ""
- Copy your public key(s) to the nodes
Copy your public key(s) into the ~/.ssh/authorized_keys file of the user account you will use on each node for the deployment. You will be prompted twice for the password of that account: the first time for the public key upload over SSH, and the second time for adding the public key to the authorized_keys file.
for ip in <node1-ip> <node2-ip> ...; do
  scp /home/<local-user>/.ssh/id_rsa.pub <node-user>@$ip:/home/<node-user>/.ssh
  ssh <node-user>@$ip "cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys"
done
You will no longer be prompted for a password when connecting over SSH; the key will be used to authenticate you.
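To confirm that key-based authentication works, you can run a quick remote command (a minimal check reusing the placeholders above; it should complete without a password prompt):
$ ssh <node-user>@<node1-ip> "hostname"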
Kubespray requires IPv4 forwarding to be turned on. Kubespray normally does this automatically.
To do it manually, run the following command:
for ip in <node1-ip> <node2-ip> ...; do
  ssh <node-user>@$ip "echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward"
done
Turning swap off is required by Kubernetes. See this issue for more information.
for ip in <node1-ip> <node2-ip> ...; do
  ssh <node-user>@$ip "sudo swapoff -a && sudo sed -i '/ swap / s/^/#/' /etc/fstab"
done
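To verify that swap is now disabled on every node, you can check that swapon reports nothing (a quick sanity check using the same placeholders):
for ip in <node1-ip> <node2-ip> ...; do
  ssh <node-user>@$ip "sudo swapon --show"   # empty output means swap is off
done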
Start by installing curl and unzip, which are needed to download and extract Kubespray.
sudo apt-get install curl unzip
Get the latest Kubespray source code from its code repository.
The latest release at the time of writing this tutorial, v2.12.5, throws errors not encountered in the development version.
mkdir -p ~/projects/ && \
cd ~/projects/ && \
curl -LJO https://github.com/kubernetes-sigs/kubespray/archive/master.zip && \
unzip kubespray-master.zip && \
mv kubespray-master kubespray && \
rm kubespray-master.zip && \
cd kubespray
Kubespray uses Python 3 and requires several dependencies to be installed.
- Install Python 3
Install Python 3 together with pip (the package installer for Python) and venv (to create virtual environments; see below).
sudo apt-get install python3.7 python3-pip python3-venv
- Create a virtual environment
Using virtual environments (or conda environments for conda users) is a Python isolation best practice.
python3 -m venv ~/projects/kubespray-venv
source ~/projects/kubespray-venv/bin/activate
- Install Kubespray dependencies
pip install -r requirements.txt
Start by creating a copy of the default settings from the sample cluster.
cp -rfp inventory/sample inventory/mycluster
Be sure you are still in the ~/projects/kubespray/ directory before executing this command!
Then customize your new cluster:
- Update the Ansible inventory file with the inventory builder
declare -a IPS=(<node1-ip> <node2-ip> ...)
CONFIG_FILE=inventory/mycluster/hosts.yaml python contrib/inventory_builder/inventory.py ${IPS[@]}
- (optional) Rename your nodes or deactivate hostname renaming
If you skip this step, your cluster hostnames will be renamed node1, node2, and so on.
You can either edit the file ~/projects/kubespray/inventory/mycluster/hosts.yaml (manually or with sed):
sed -e 's/node1/tower/g' -e 's/node2/laptop/g' ... -i inventory/mycluster/hosts.yaml
OR keep the current hostnames:
echo "override_system_hostname: false" >> inventory/mycluster/group_vars/all/all.yml
- Check localhost versus node usernames
If your localhost username differs from a node's username (the one that owns your SSH public key), you must specify it to Ansible by editing the hosts.yaml file manually.
Example: the localhost username is foo and the node1 username is bar:
> cat inventory/mycluster/hosts.yaml
all:
  hosts:
    node1:
      ansible_ssh_user: bar
It’s time to deploy Kubernetes by running the Ansible playbook command.
ansible-playbook -i inventory/mycluster/hosts.yaml --become --become-user=root cluster.yml
The cluster is created, but you currently have no access to its API for configuration purposes. `kubectl` has been installed by Kubespray on the master nodes of your cluster, and the configuration files are saved in the root home directories of the master nodes.
To access the cluster API from another computer on your network, install kubectl first.
curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl
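To make sure the binary is found on your PATH, you can print the client version (a quick sanity check):
$ kubectl version --client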
- Copy the configuration files from the root home directory of a master node:
  - On the master node, copy the configuration files from root to your user account:
    $ ssh <node-user>@<master-node-ip> "sudo cp -R /root/.kube ~ && sudo chown -R <node-user>:<node-user> ~/.kube"
  - Download the configuration files to a remote computer:
    $ scp -r <node-user>@<master-node-ip>:~/.kube ~
    $ sudo chown -R <local-user>:<local-user> ~/.kube
  - Keep secrets protected on the master node:
    $ ssh <node-user>@<master-node-ip> "rm -r ~/.kube"
Use autocompletion for the sake of sanity:
$ echo 'source <(kubectl completion bash)' >>~/.bashrc
Configuring the MetalLB load balancer on Kubernetes
This section describes how to use MetalLB to install a load balancer for Kubernetes on bare metal.
- MetalLB installed. See the installation guide.
- Configure Kubernetes proxy:
$ kubectl edit configmap -n kube-system kube-proxy
and set:
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
ipvs:
  strictARP: true
- Apply the MetalLB manifests:
$ kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.3/manifests/namespace.yaml
$ kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.3/manifests/metallb.yaml
# On first install only:
$ kubectl create secret generic -n metallb-system memberlist \
    --from-literal=secretkey="$(openssl rand -base64 128)"
- Configure MetalLB. To allow the load balancer to distribute external IPs, specify in its configuration the IP range allocated for it:
$ cat << EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - <your-ip-range>
EOF
Set <your-ip-range> to the IP range you intend to use.
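Before relying on MetalLB, you can check that its pods are running (a simple sanity check; exact pod names vary):
$ kubectl get pods -n metallb-system
You should see a Running controller pod and one Running speaker pod per node.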
Installing storage for Kubernetes
This section describes how to enable persistent storage for Kubernetes using NFS.
- Install the NFS server on a machine on the same network as your cluster nodes:
# apt-get install -y nfs-kernel-server
- Create the export directory:
# mkdir -p /mnt/my-nfs
Change its permissions:
# chown nobody:nogroup /mnt/my-nfs
# chmod 777 /mnt/my-nfs
- Start the NFS export:
# echo "<mount-path> <subnet>(rw,sync,no_subtree_check)" | tee /etc/exports
# exportfs -a
# systemctl restart nfs-kernel-server
Replace <subnet> and <mount-path> with your NFS settings.
- Define the StorageClass settings and the provisioner (using the external-storage template):
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes-incubator/external-storage/master/nfs-client/deploy/rbac.yaml
- Set StorageClass:
$ cat << EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: nfs-provisioner
parameters:
  archiveOnDelete: "false"
EOF
The StorageClass is declared as the default one so that it is automatically selected by PVCs.
- Set the provisioner:
$ cat << EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with the namespace where the provisioner is deployed
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: nfs-provisioner
            - name: NFS_SERVER
              value: <nfs-server-ip>
            - name: NFS_PATH
              value: <mount-path>
      volumes:
        - name: nfs-client-root
          nfs:
            server: <nfs-server-ip>
            path: <mount-path>
EOF
Replace <nfs-server-ip> and <mount-path> with your NFS settings.
- Verify the configuration:
$ kubectl get deployments.apps,pods,sc -n default
You should see the Deployment of the provisioner, the corresponding Pod, and the StorageClass marked as the default one.
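As an optional extra check, you can create a temporary PersistentVolumeClaim to confirm that dynamic provisioning works; this is a minimal sketch and the claim name test-claim is arbitrary:
$ cat << EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
EOF
$ kubectl get pvc test-claim
$ kubectl delete pvc test-claim
The claim should reach the Bound status if the NFS provisioner is working.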
Installing Che on Kubespray using chectl
This section describes how to install Che on Kubernetes provisioned by Kubespray.
- A running Kubernetes cluster deployed using Kubespray on bare metal. See Configuring Kubernetes using Kubespray.
- A load balancer running on your cluster. See Configuring the MetalLB load balancer on Kubernetes.
- Storage defined with a provisioner. See Installing storage for Kubernetes.
- The chectl management tool is available. See Using the chectl management tool.
- Deploy an Ingress controller (using Nginx, with the cloud deployment because a load balancer is used):
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/cloud/deploy.yaml
- Get the IP attributed by the load balancer:
$ kubectl get svc -n ingress-nginx
It corresponds to the EXTERNAL-IP of the ingress-nginx-controller service. Use it as <ip-from-load-balancer> in the following step.
- Deploy Che:
$ chectl server:deploy --platform k8s --domain <ip-from-load-balancer>.nip.io
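When the deployment completes, chectl prints the URL to access Che. As an optional check, you can list the Che pods; the namespace used below is an assumption, as the default depends on the chectl version (eclipse-che in recent versions, che in older ones):
$ kubectl get pods -n eclipse-che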