This section contains instructions for deploying and running Eclipse Che locally, on a personal workstation.

Prerequisites

To run and manage Che:

  • A Kubernetes cluster (version 1.9 or higher) or an OpenShift cluster (version 3.11 or higher) to deploy Che on

Eclipse Che is available in two modes:

  • Single-user: Non-authenticated Che, lighter and suited for personal desktop environments

  • Multi-user: Authenticated Che, suited for the cloud, for organizations and developer teams

This section describes how to deploy and run Che in single-user mode.

Setting up a local Kubernetes or OpenShift cluster

Using Minikube to set up Kubernetes

This section describes how to use Minikube to set up Kubernetes.

Prerequisites
Procedure
  1. Start Minikube (it is important to allocate at least 4 GB of RAM):

    $ minikube start --memory=4096
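Before installing Che, it can help to confirm that the cluster is up and to enable the Minikube Ingress addon, which chectl deployments rely on (see the Background note in the next subsection). A minimal check, assuming kubectl is pointed at the Minikube context:

    $ minikube status                  # host, kubelet, and apiserver should report Running
    $ kubectl get nodes                # the minikube node should be Ready
    $ minikube addons enable ingress   # Ingress addon used by chectl deployments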

Running Minikube inside an LXC container

This section describes how to properly configure an LXC container to set up Minikube when the hypervisor uses ZFS, Btrfs, or LVM to provision the containers' storage.

Background

The chectl command-line tool requires the Minikube Ingress plug-in to be enabled in Minikube. At the same time, the Minikube Ingress plug-in requires the Docker daemon to be running with the overlay filesystem driver.

Problem

According to the Docker storage drivers documentation, the Docker overlay2 driver is only supported on the ext4 and XFS file systems (with ftype=1).
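To see whether an environment is affected, a quick hedged check of the storage driver and the backing filesystem (run on the machine whose Docker daemon Minikube will use; the paths shown are the defaults and may differ):

    $ docker info --format '{{.Driver}}'      # current storage driver, expect overlay2
    $ df -T /var/lib/docker                   # filesystem type backing Docker's data directory
    $ xfs_info /var/lib/docker | grep ftype   # only if /var/lib/docker is an XFS mount point; ftype=1 is required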

Solution

The solution is to create a virtual block device inside a volume. In the case of Btrfs this is not possible, so a file must be used as the virtual block device instead.

Procedure

In the following instructions, change the zfsPool or LVM volumegroup_name and dockerstorage names according to your use case and preferences.

  1. Create a fixed size ZFS dataset or LVM volume on the hypervisor side:

    $ zfs create -V 50G zfsPool/dockerstorage           #USING ZFS
    $ lvcreate -L 50G -n dockerstorage volumegroup_name #USING LVM
  2. Use a partition tool to create a partition inside the virtual block device:

    $ parted /dev/zvol/zfsPool/dockerstorage --script mklabel gpt                      #USING ZFS
    $ parted /dev/zvol/zfsPool/dockerstorage --script mkpart primary 1 100%            #USING ZFS
    $ parted /dev/mapper/volumegroup_name-dockerstorage --script mklabel gpt           #USING LVM
    $ parted /dev/mapper/volumegroup_name-dockerstorage --script mkpart primary 1 100% #USING LVM

    There is now a reference called:

    • For ZFS: dockerstorage-part1 inside the /dev/zvol/zfsPool directory

    • For LVM: volumegroup_name-dockerstorage1 inside the /dev/mapper directory

      This is the partition of the virtual block device to be used to store /var/lib/docker from the LXC container.

  3. Format the virtual partition to XFS with the ftype flag set to 1:

    $ mkfs.xfs -n ftype=1 /dev/zvol/zfsPool/dockerstorage-part1       #FOR ZFS
    $ mkfs.xfs -n ftype=1 /dev/mapper/volumegroup_name-dockerstorage1 #FOR LVM
  4. Attach the virtual partition to the container (minikube is the name of the LXC container, dockerstorage is the name for the storage instance in LXC configuration):

    $ lxc config device add minikube dockerstorage disk path=/var/lib/docker \
      source=/dev/zvol/zfsPool/dockerstorage-part1       #FOR ZFS
    $ lxc config device add minikube dockerstorage disk path=/var/lib/docker \
      source=/dev/mapper/volumegroup_name-dockerstorage1 #FOR LVM

    Check the filesystem inside the container using the df command:

    $ df -T /var/lib/docker
  5. Use the following LXC configuration profile for the LXC container to allow it to run Minikube:

    config:
      linux.kernel_modules: ip_vs,ip_vs_rr,ip_vs_wrr,ip_vs_sh,ip_tables,ip6_tables,netlink_diag,nf_nat,overlay,br_netfilter
      raw.lxc: |
        lxc.apparmor.profile=unconfined
        lxc.mount.auto=proc:rw sys:rw
        lxc.cgroup.devices.allow=a
        lxc.cap.drop=
      security.nesting: "true"
      security.privileged: "true"
    description: Profile supporting minikube in containers
    devices:
      aadisable:
        path: /sys/module/apparmor/parameters/enabled
        source: /dev/null
        type: disk
      aadisable2:
        path: /sys/module/nf_conntrack/parameters/hashsize
        source: /sys/module/nf_conntrack/parameters/hashsize
        type: disk
      aadisable3:
        path: /dev/kmsg
        source: /dev/kmsg
        type: disk
    name: minikube
  6. After starting and setting up networking and the Docker service inside the container, start Minikube:

    $ minikube start --vm-driver=none --extra-config kubeadm.ignore-preflight-errors=SystemVerification
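At this point it is worth verifying, from inside the container, that the node is ready and that Docker is using the XFS-backed virtual partition. A quick hedged check:

    $ kubectl get nodes          # the node should report Ready
    $ df -T /var/lib/docker      # should show the xfs filesystem attached earlier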

Using Minishift to set up OpenShift 3

This section describes how to use Minishift to set up OpenShift 3.

Prerequisites
Procedure
  • Start Minishift with at least 4 GB of RAM:

    $ minishift start --memory=4096
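Optionally, the memory setting can be persisted in the Minishift configuration, and the console URL and the bundled oc client can be obtained from Minishift itself. A small sketch, assuming a default Minishift installation:

    $ minishift config set memory 4GB   # persist the memory setting for future starts
    $ minishift console --url           # print the OpenShift web console URL
    $ eval $(minishift oc-env)          # add the bundled oc client to the PATH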

Using CodeReady Containers to set up OpenShift 4

This section describes how to use CodeReady Containers to set up OpenShift 4.

Prerequisites
Procedure
  1. Set up your host machine for CodeReady Containers:

    $ crc setup
  2. Remove any previous cluster:

    $ crc delete
  3. Start the CodeReady Containers virtual machine with at least 12 GB of RAM:

    $ crc start --memory 12288
  4. When prompted, supply your user pull secret.

  5. Take note of the password for the kubeadmin user that is displayed at the end of the installation (it can be retrieved again later, as shown in the sketch after this list).

  6. Access the OpenShift web console:

    $ crc console
  7. Log in for the first time with the developer account (password: developer) to initialize a first user using OAuth.

  8. Log out.

  9. Log in again with the previously mentioned kubeadmin user and password.

  10. Follow the procedure for Installing Che on OpenShift 4 from OperatorHub.
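If the kubeadmin password is needed again, or if you prefer logging in from the command line instead of the web console, the following hedged sketch can help. It assumes a default CodeReady Containers installation; api.crc.testing:6443 is the standard CRC API endpoint:

    $ crc console --credentials                             # print the developer and kubeadmin credentials
    $ eval $(crc oc-env)                                     # add the bundled oc client to the PATH
    $ oc login -u kubeadmin https://api.crc.testing:6443    # prompts for the kubeadmin password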

Using Docker Desktop to set up Kubernetes

This section describes how to use Docker Desktop to set up Kubernetes.

Prerequisites
  • Running macOS or Windows.

  • An installation of Docker Desktop with Kubernetes (version 1.9 or higher) enabled. See Installing Docker Desktop.
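To confirm that the bundled Kubernetes cluster is active before installing Che, a minimal check. It assumes the docker-desktop context name used by current Docker Desktop releases (older releases used docker-for-desktop):

    $ kubectl config use-context docker-desktop
    $ kubectl get nodes    # the docker-desktop node should be Ready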

Using Kubespray to set up Kubernetes

Commands are given for bash on Ubuntu.

System update

Start with a system update before installing new packages.

$ sudo apt-get update
$ sudo apt-get upgrade
This step is only required on the machine used to run Kubespray. Kubespray will take care of system updates on the cluster nodes.
SSH access
  • Install SSH server

    If a node does not have an SSH server installed by default, you have to install one to control the machine remotely.

    $ sudo apt-get install openssh-server
  • Create SSH key pair

    Generate one or more SSH key pairs to allow Kubespray/Ansible to log in automatically over SSH. You can use a different key pair for each node or the same one for all nodes.

    $ ssh-keygen -b 2048 -t rsa -f /home/<local-user>/.ssh/id_rsa -q -N ""
  • Copy your public key(s) to the nodes

    Copy your public key(s) into the ~/.ssh/authorized_keys file of the user account you will use on each node for deployment. You will be prompted twice for the password corresponding to each account: the first time for the public key upload using SSH, and the second time for adding the public key to the authorized_keys file.

    for ip in <node1-ip> <node2-ip> ...; do
       scp /home/<local-user>/.ssh/id_rsa.pub <node-user>@$ip:/home/<node-user>/.ssh
       ssh <node-user>@$ip "cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys"
    done

    You will no longer be prompted for a password when using SSH; the key will be used to authenticate you.

IPv4 forwarding

Kubespray requires IPv4 forwarding to be turned on. This should be done automatically by Kubespray.

To do it manually, run the following command:

for ip in <node1-ip> <node2-ip> ...; do
   ssh <node-user>@$ip "echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward"
done
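Note that writing to /proc only changes the setting until the next reboot. To make it persistent, a hedged sketch using sysctl configuration (the file name 99-ipforward.conf is an arbitrary choice; it assumes an Ubuntu node with a standard sysctl setup):

for ip in <node1-ip> <node2-ip> ...; do
   ssh <node-user>@$ip "echo 'net.ipv4.ip_forward = 1' | sudo tee /etc/sysctl.d/99-ipforward.conf && sudo sysctl --system"
done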
Turn off swap

Turning swap off is required by Kubernetes. See this issue for more information.

for ip in <node1-ip> <node2-ip> ...; do
   ssh <node-user>@$ip "sudo swapoff -a && sudo sed -i '/ swap / s/^/#/' /etc/fstab"
done
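To verify that swap is indeed disabled on every node, the following check should print nothing (swapon --show produces no output when no swap is active):

for ip in <node1-ip> <node2-ip> ...; do
   ssh <node-user>@$ip "swapon --show"
done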
Get Kubespray

Start by installing curl.

sudo apt-get install curl

Get the latest Kubespray source code from its repository.

The latest release at the time of writing this tutorial, v2.12.5, throws errors not encountered in the master version.
mkdir -p ~/projects/ && \
cd ~/projects/ && \
curl -LJO https://github.com/kubernetes-sigs/kubespray/archive/master.zip && \
unzip kubespray-master.zip && \
mv kubespray-master kubespray && \
rm kubespray-master.zip && \
cd kubespray
Install Kubespray requirements

Kubespray uses Python 3 and requires several dependencies to be installed.

  • Install Python 3

    Install Python 3 along with pip (the package installer for Python) and venv to create virtual environments (see below).

    sudo apt-get install python3.7 python3-pip python3-venv
  • Create a virtual env

    Using a virtual environment (or a conda environment for conda users) is a best practice for isolating Python dependencies.

    python3 -m venv ~/projects/kubespray-venv
    source ~/projects/kubespray-venv/bin/activate
  • Install Kubespray dependencies

    pip install -r requirements.txt
Create a new cluster configuration

Start by creating a copy of the default settings from the sample cluster.

cp -rfp inventory/sample inventory/mycluster

Be sure you are still in the ~/projects/kubespray/ directory before executing this command!

Then customize your new cluster:

  1. Update Ansible inventory file with inventory builder

    declare -a IPS=(<node1-ip> <node2-ip> ...)
    CONFIG_FILE=inventory/mycluster/hosts.yaml python contrib/inventory_builder/inventory.py ${IPS[@]}
  2. (optional) Rename your nodes or deactivate hostname renaming

    If you skip this step, your cluster hostnames will be renamed node1, node2, etc.

    You can either rename the nodes by editing the file ~/projects/kubespray/inventory/mycluster/hosts.yaml, for example with sed:

    sed -e 's/node1/tower/g' -e 's/node2/laptop/g' ... -i inventory/mycluster/hosts.yaml

    OR

    keep the current hostnames

    echo "override_system_hostname: false" >>  inventory/mycluster/group_vars/all/all.yml
  3. Check localhost vs. node usernames

    If your localhost username differs from a node's username (the one that owns your SSH public key), you must specify it to Ansible by manually editing the hosts.yaml file.

    Example:

    localhost username   node1 username
    foo                  bar

    > cat inventory/mycluster/hosts.yaml
    all:
      hosts:
        node1:
          ansible_ssh_user: bar
Deploy your cluster!

It’s time to deploy Kubernetes by running the Ansible playbook command.

ansible-playbook -i inventory/mycluster/hosts.yaml  --become --become-user=root cluster.yml
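Once the playbook finishes, a quick hedged sanity check can be run from a master node, since Kubespray installs kubectl there and the admin kubeconfig lives in the standard kubeadm location:

ssh <node-user>@<master-node-ip> "sudo kubectl --kubeconfig /etc/kubernetes/admin.conf get nodes"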
Access your cluster API

The cluster is created, but you currently have no access to its API for configuration purposes. kubectl has been installed by Kubespray on the master nodes of your cluster, and the configuration files are saved in the root home directories of the master nodes.

If you want to access the cluster API from another computer on your network, install kubectl first.

curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl

Then copy the configuration files from the root home directory of a master node:

  • On the master node, copy the configuration files from root to your user account

    ssh <node-user>@<master-node-ip> "sudo cp -R /root/.kube ~ && sudo chown -R <node-user>:<node-user> ~/.kube"
  • Then download the configuration files to your local computer

    scp -r <node-user>@<master-node-ip>:~/.kube ~
    sudo chown -R <local-user>:<local-user> ~/.kube
  • Keep secrets protected on the master node

    ssh <node-user>@<master-node-ip> "rm -r ~/.kube"
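With the configuration files now in ~/.kube on your local computer, a short check confirms that kubectl can reach the cluster API:

kubectl cluster-info
kubectl get nodes -o wide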

For sanity, use autocompletion!

echo 'source <(kubectl completion bash)' >>~/.bashrc

Installing Che on Minikube using chectl

This section describes how to install Che on Minikube using chectl.

Prerequisites
Procedure
  • Run the following command:

    $ chectl server:start --platform minikube
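chectl picks defaults on Minikube, but the installer and domain can also be set explicitly (these flags are listed in the chectl options section at the end of this document). A hedged example using the Helm installer and a nip.io domain built from the Minikube IP:

    $ chectl server:start --platform minikube --installer helm --domain $(minikube ip).nip.io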

Installing Che on Minishift using chectl

This section describes how to install Che on Minishift using chectl.

Prerequisites
Procedure
  • Run the following command:

    $ chectl server:start --platform minishift

Installing Che on CodeReady Containers using chectl

This section describes how to install Che on CodeReady Containers using chectl.

Prerequisites
Procedure
  • Run the following command:

    $ chectl server:start --platform crc

Installing Che on kind using chectl

This section describes how to install Che on kind using chectl. kind is a tool for running local Kubernetes clusters using Docker-formatted containers as nodes. It is useful for quickly creating ephemeral clusters, and is used as part of the test infrastructure of the Kubernetes project. Running Che in kind is a way to try the application, or for a contributor to test their change quickly with a real cluster.

Prerequisites
Procedure

For Eclipse Che installation, the kind cluster needs an Ingress backend and a persistent volume storage backend. If these requirements are already met, you may go directly to step 8.

The following instructions are one way of configuring a kind cluster with all the components needed for Eclipse Che:

  1. Install csi-driver-host-path in the kind cluster:

    Install snapshotter CRDs as described in the docs:

    $ SNAPSHOTTER_VERSION=v2.1.1
    
    # Apply VolumeSnapshot CRDs
    $ kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/${SNAPSHOTTER_VERSION}/config/crd/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
    $ kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/${SNAPSHOTTER_VERSION}/config/crd/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
    $ kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/${SNAPSHOTTER_VERSION}/config/crd/snapshot.storage.k8s.io_volumesnapshots.yaml
    
    # Create snapshot controller
    $ kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/${SNAPSHOTTER_VERSION}/deploy/kubernetes/snapshot-controller/rbac-snapshot-controller.yaml
    $ kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/${SNAPSHOTTER_VERSION}/deploy/kubernetes/snapshot-controller/setup-snapshot-controller.yaml

    The value of the latest SNAPSHOTTER_VERSION can be found on the corresponding release page.

    Then deploy:

    $ git clone https://github.com/kubernetes-csi/csi-driver-host-path && cd csi-driver-host-path
    $ ./deploy/kubernetes-<version>/deploy.sh
    $ kubectl apply -f examples/csi-storageclass.yaml

    The Kubernetes version can be obtained with the kubectl version command (see Server Version).

  2. Set csi-hostpath-sc as the default StorageClass:

    $ kubectl patch storageclass csi-hostpath-sc -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'
    $ kubectl patch storageclass standard -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "false"}}}'
  3. Install the NGINX Ingress Controller:

    $ VERSION=0.30.0
    $ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-${VERSION}/deploy/static/mandatory.yaml
    $ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-${VERSION}/deploy/static/provider/cloud-generic.yaml
  4. Install the MetalLB load balancer:

    $ kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.3/manifests/namespace.yaml
    $ kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.3/manifests/metallb.yaml
    $ kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"

    The above commands may apply an out-of-date version of the MetalLB Kubernetes manifests. See the installation instructions for the most up-to-date commands.

  5. Determine an IP range to allocate to MetalLB from the docker bridge network:

    $ docker inspect bridge | grep -C 5 Subnet
    "IPAM": {
                "Driver": "default",
                "Options": null,
                "Config": [
                    {
                        "Subnet": "172.17.0.0/16",
                        "Gateway": "172.17.0.1"
                    }
                ]
            },
            "Internal": false,

    In this case, the docker bridge network provides addresses in the 172.17.0.0/16 subnet. Choose a section of it that is unlikely to be used, for example the 172.17.250.0 range.

  6. Create a ConfigMap for MetalLB specifying the IP range to expose:

    $ cat << EOF > metallb-config.yaml
    apiVersion: v1
    kind: ConfigMap
    metadata:
      namespace: metallb-system
      name: config
    data:
      config: |
        address-pools:
        - name: default
          protocol: layer2
          addresses:
          - 172.17.250.1-172.17.250.250
    EOF
    $ kubectl apply -f metallb-config.yaml
  7. The ingress-nginx service now has an external IP:

    $ kubectl get svc -n ingress-nginx
    NAME            TYPE           CLUSTER-IP      EXTERNAL-IP    PORT(S)                      AGE
    ingress-nginx   LoadBalancer   10.107.194.26   172.17.250.1   80:32033/TCP,443:30428/TCP   19h
  8. Run chectl, using the external IP of the ingress-nginx Service as an nip.io URL:

    $ chectl server:start --installer operator --platform k8s --domain 172.17.250.1.nip.io

    In some cases, even after all the steps above, it is still not possible to reach Eclipse Che from the host machine. If you encounter such a problem, refer to the kind cluster documentation or forums on how to make an endpoint available outside the kind cluster for your system and network configuration. The sketch below shows a few quick checks of the components configured above.
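A few hedged checks, assuming the defaults used above (namespace che, MetalLB range 172.17.250.x):

    $ kubectl get storageclass             # csi-hostpath-sc should be marked (default)
    $ kubectl get svc -n ingress-nginx     # the LoadBalancer service should have an external IP
    $ kubectl get pods -n che              # Che pods should reach the Running state
    $ kubectl get ingress -n che           # shows the hostnames Che is served on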

Installing Che on Kubespray using chectl

Prerequisites
Procedure
  • Deploy an ingress controller

    We will use NGINX for that; get the cloud deployment, since you have configured a load balancer.

    $ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/cloud/deploy.yaml
  • Get the IP assigned by the load balancer

    $ kubectl get svc -n ingress-nginx

    It corresponds to the EXTERNAL-IP of the ingress-nginx-controller service. We will refer to it as <ip-from-load-balancer>.

  • Run the following command:

    $ chectl server:start --platform k8s --domain <ip-from-load-balancer>.nip.io --self-signed-cert

    Unless you provide a certificate, do not forget the --self-signed-cert flag.

Deploying Che in multi-user mode

Choose one of the following procedures to start the Che Server in multi-user mode using the chectl tool.

Installing multiuser Che on Minikube using chectl

This section describes how to install Eclipse Che in multiuser mode on Minikube using chectl.

Prerequisites
Procedure
  • Run the following command:

    $ chectl server:start --platform minikube --multiuser

Installing multiuser Che on Minishift using chectl

This section describes how to install Eclipse Che in multiuser mode on Minishift using chectl.

Prerequisites
Procedure
  • Run the following command:

    $ chectl server:start --platform minishift --multiuser

Installing multiuser Che on CodeReady Containers using chectl

This section describes how to install Eclipse Che in multiuser mode on CodeReady Containers using chectl.

Prerequisites
Procedure
  • Run the following command:

    $ chectl server:start --platform crc --multiuser

Che deployment options using chectl

chectl server:start --help
start Eclipse Che Server

USAGE
  $ chectl server:start

OPTIONS
  -a, --installer=helm|operator|minishift-addon                                Installer type
  -b, --domain=domain                                                          Domain of the Kubernetes cluster (for example example.k8s-cluster.com or <local-ip>.nip.io)
  -h, --help                                                                   show CLI help
  -i, --cheimage=cheimage                                                      [default: eclipse/che-server:7.3.0] Che server container image
  -m, --multiuser                                                              Starts che in multi-user mode
  -n, --chenamespace=chenamespace                                              [default: che] Kubernetes namespace where Che server is supposed to be deployed
  -o, --cheboottimeout=cheboottimeout                                          (required) [default: 40000] Che server bootstrap timeout (in milliseconds)

  -p, --platform=minikube|minishift|k8s|openshift|microk8s|docker-desktop|crc  Type of Kubernetes platform. Valid values are "minikube", "minishift", "k8s (for kubernetes)", "openshift", "crc (for CodeReady Containers)", "microk8s".

  -s, --tls                                                                    Enable TLS encryption. Note that for kubernetes 'che-tls' with TLS certificate must be created in the configured namespace. For OpenShift, router will use default cluster certificates.

  -t, --templates=templates                                                    [default: templates] Path to the templates folder

  --che-operator-cr-yaml=che-operator-cr-yaml                                  Path to a yaml file that defines a CheCluster used by the Operator. This parameter is used only when the installer is the Operator.

  --che-operator-image=che-operator-image                                      [default: quay.io/eclipse/che-operator:7.3.0] Container image of the Operator. This parameter is used only when the installer is the Operator

  --deployment-name=deployment-name                                            [default: che] Che deployment name

  --devfile-registry-url=devfile-registry-url                                  The URL of the external Devfile registry.

  --k8spodreadytimeout=k8spodreadytimeout                                      [default: 130000] Waiting time for Pod Ready Kubernetes (in milliseconds)

  --k8spodwaittimeout=k8spodwaittimeout                                        [default: 300000] Waiting time for Pod Wait Timeout Kubernetes (in milliseconds)

  --listr-renderer=default|silent|verbose                                      [default: default] Listr renderer

  --os-oauth                                                                   Enable use of OpenShift credentials to log into Che

  --plugin-registry-url=plugin-registry-url                                    The URL of the external plug-in registry.

  --self-signed-cert                                                           Authorize usage of self signed certificates for encryption. Note that `self-signed-certificate` secret with CA certificate must be created in the configured namespace.