© 2021 The original authors.

openshift-gradle-plugin

1. Introduction

The openshift-gradle-plugin brings your Gradle Java applications onto OpenShift. This plugin focuses on two tasks:

  • Building Container images.

  • Creating OpenShift resource descriptors.

2. Getting Started

When working with openshift-gradle-plugin, you’ll probably be facing similar situations and following the same patterns other users do. These are some of the most common scenarios and configuration modes:

2.1. Red Hat Developer Sandbox

This is an example of how you can use the JKube zero configuration to build and deploy your Java application with Red Hat OpenShift Developer Sandbox.

Prerequisites

You will need an account for the Developer Sandbox for Red Hat OpenShift and the oc command-line tool installed for this scenario.

Provision your DevSandbox

The Developer Sandbox for Red Hat OpenShift is a free OpenShift cluster that gives you experience of working with a Kubernetes cluster. Once you’ve created an account and logged into the console, you can copy the login command and paste it into your terminal:

$ oc login --token=sha256~%TOKEN% --server=https://%SERVER%:6443
Logged into "https://%SERVER%:6443" as "%USERNAME%" using the token provided.

You have access to the following projects and can switch between them with 'oc project <projectname>':

  * %USERNAME%-dev
    %USERNAME%-stage

Using project "%USERNAME-dev".
Welcome! See 'oc help' to get started.

Adding openshift-gradle-plugin to your project

We’ll be using quickstarts/gradle/spring-boot for this demonstration. If you have your own Gradle project set up, you can follow the instructions below.

Open the build.gradle file and add openshift-gradle-plugin in the plugins section:

plugins {
   id 'java'
   id 'org.eclipse.jkube.openshift' version '1.16.2'
}

Deploying application to Red Hat OpenShift

Make sure you’ve compiled the application:

$ ./gradlew clean build

Run JKube gradle tasks to deploy the application:

$ ./gradlew ocBuild ocResource ocApply

How to disable routes

openshift-gradle-plugin automatically generates a Route to expose your application. You can disable it with the jkube.openshift.generateRoute property:

$ ./gradlew ocResource -Djkube.openshift.generateRoute=false

2.2. Spring Boot

openshift-gradle-plugin works with any Spring Boot project without any configuration. It automatically detects your project dependencies and generates an opinionated container image and OpenShift manifests.

Adding openshift-gradle-plugin to project

You would need to add openshift-gradle-plugin to your project in order to use it:

Open the build.gradle file and add the plugin in the plugins section.

plugins {
   id 'java'
   id 'org.eclipse.jkube.openshift' version '1.16.2'
}

Building container Image of your application

On OpenShift, Source-to-Image (S2I) builds are performed by default. The ImageStream is updated after the image has been created.

Run this command to build your application’s container image and push it to OpenShift’s internal container registry:

$ ./gradlew ocBuild

Generating & applying Kubernetes manifests

Just like the container image, openshift-gradle-plugin can generate opinionated OpenShift manifests. Run this command to automatically generate the manifests and apply them onto the OpenShift cluster you are currently logged in to.

$ ./gradlew ocResource ocApply

After running these tasks, you can also check OpenShift manifests generated by openshift-gradle-plugin in build/classes/java/main/META-INF/jkube/ directory.

Clean up the applied resources after testing:

$ ./gradlew ocUndeploy

How to add a liveness and readiness probe?

openshift-gradle-plugin automatically adds liveness and readiness probes to the generated OpenShift manifests when the Spring Boot Actuator dependency is present.

To add actuator to your project, add the following dependency:

dependencies {
    implementation 'org.springframework.boot:spring-boot-starter-actuator'
}

Once you run the ocResource task again, you should see liveness and readiness probes added to the generated manifests.
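To illustrate, the enriched controller resource typically ends up with probe entries along these lines. The path, port, and structure shown below are illustrative assumptions based on Spring Boot Actuator conventions; inspect your own generated openshift.yml for the exact output.

Illustrative excerpt of a generated manifest (values are assumptions)
spec:
  template:
    spec:
      containers:
        - name: spring-boot
          livenessProbe:
            httpGet:
              path: /actuator/health   # Spring Boot Actuator health endpoint (assumed default)
              port: 8080
          readinessProbe:
            httpGet:
              path: /actuator/health
              port: 8080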

2.3. Vert.x

You can easily get started with openshift-gradle-plugin on an Eclipse Vert.x project without providing any explicit configuration. openshift-gradle-plugin generates an opinionated container image and manifests by inspecting your project configuration.

Adding openshift-gradle-plugin to project

You would need to add openshift-gradle-plugin to your project in order to use it:

Open the build.gradle file and add the plugin in the plugins section.

plugins {
   id 'java'
   id 'org.eclipse.jkube.openshift' version '1.16.2'
}

Building container Image of your application

On OpenShift, Source-to-Image (S2I) builds are performed by default. The ImageStream is updated after the image has been created.

Run this command to build your application’s container image and push it to OpenShift’s internal container registry:

$ ./gradlew ocBuild

Generating & applying Kubernetes manifests

Just like the container image, openshift-gradle-plugin can generate opinionated OpenShift manifests. Run this command to automatically generate the manifests and apply them onto the OpenShift cluster you are currently logged in to.

$ ./gradlew ocResource ocApply

After running these tasks, you can also check OpenShift manifests generated by openshift-gradle-plugin in build/classes/java/main/META-INF/jkube/ directory.

Clean up the applied resources after testing:

$ ./gradlew ocUndeploy

How to set Service Port?

By default, Vert.x applications listen on port 8888, whereas the opinionated defaults of openshift-gradle-plugin use port 8080. If you want to change this, you’ll need to configure openshift-gradle-plugin to generate the image with the desired port:

Example for setting the port number for Vert.x apps
openshift {
  generator {
      config {
          'vertx' {
              webPort = '8888'
          }
      }
  }
}

Once configured, you can go ahead and deploy the application to OpenShift.

How to add Kubernetes readiness and liveness probes?

openshift-gradle-plugin doesn’t add any OpenShift liveness and readiness probes by default. However, it does provide a rich set of configuration options to add health checks. Read Vert.x Healthchecks section for more details.

2.4. Quarkus

You can easily get started with openshift-gradle-plugin on a Quarkus project without providing any explicit configuration. openshift-gradle-plugin generates an opinionated container image and manifests by inspecting your project configuration.

Zero Configuration

Adding openshift-gradle-plugin to project

You would need to add openshift-gradle-plugin to your project in order to use it:

Open the build.gradle file and add the plugin in the plugins section.

plugins {
   id 'java'
   id 'io.quarkus'
   id 'org.eclipse.jkube.openshift' version '1.16.2'
}

Building container Image of your application

On OpenShift, Source-to-Image (S2I) builds are performed by default. The ImageStream is updated after the image has been created.

Run this command to build your application’s container image and push it to OpenShift’s internal container registry:

$ ./gradlew ocBuild

Generating & applying Kubernetes manifests

Just like the container image, openshift-gradle-plugin can generate opinionated OpenShift manifests. Run this command to automatically generate the manifests and apply them onto the OpenShift cluster you are currently logged in to.

$ ./gradlew ocResource ocApply

After running these tasks, you can also check OpenShift manifests generated by openshift-gradle-plugin in build/classes/java/main/META-INF/jkube/ directory.

Clean up the applied resources after testing:

$ ./gradlew ocUndeploy

Quarkus Native Mode

When containerizing a Quarkus application in native mode, openshift-gradle-plugin automatically detects that the artifact is a native executable and selects a lighter base image for the container. No additional configuration is needed for native builds.

How to add OpenShift liveness and readiness probes?

openshift-gradle-plugin automatically adds liveness and readiness probes to the generated OpenShift manifests when the SmallRye Health dependency is present.

To add SmallRye Health to your project, add the following dependency:

dependencies {
    implementation 'io.quarkus:quarkus-smallrye-health'
}

Once you run the ocResource task again, you should see liveness and readiness probes added to the generated manifests.

2.5. Dockerfile

You can build a container image and deploy to Kubernetes with openshift-gradle-plugin by just providing a Dockerfile. openshift-gradle-plugin builds a container image based on your Dockerfile and generates opinionated Kubernetes manifests by inspecting it.

Placing Dockerfile in project root directory

You can place the Dockerfile in the project root directory along with build.gradle.

openshift-gradle-plugin detects the Dockerfile and automatically builds an image from it. No configuration is needed beyond the Dockerfile itself; the project root directory is used as the Docker context directory. The image is created with an opinionated name derived from the group, artifact and version. The name can be overridden by using the jkube.image.name property. Read the Simple Dockerfile section for more details.

Placing Dockerfile in some other directory

You can choose to place your Dockerfile at some other location. By default, the plugin assumes src/main/docker; for any other location you’ll need to configure the Docker context directory in the plugin configuration. When the context directory is not specified, it is assumed to be the Dockerfile’s parent directory. You can take a look at the Docker File Provided Quickstarts for more details.
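As a sketch, and assuming the contextDir and dockerFile fields of the image build configuration (verify the exact field names in the Image Build Configuration section), such a setup could look like this:

Example for configuring a custom Dockerfile location (field names assumed)
openshift {
    images {
        image {
            name = "${project.group}/${project.name}:${project.version}"
            build {
                contextDir = 'src/main/docker'  // Docker build context directory
                dockerFile = 'Dockerfile'       // Dockerfile path, relative to contextDir
            }
        }
    }
}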

Controlling what gets copied to image

When using Dockerfile mode, every file and directory present in the Docker build context directory gets copied to the created Docker image. In case you want to ignore some files, or you want to include only a specific set of files, the openshift-gradle-plugin provides the following options to achieve this:
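For instance, JKube supports ignore files placed in the Docker build context. Assuming the .jkube-dockerexclude mechanism is available in your version (treat the file name as an assumption and verify against the reference documentation), a file like the following at the root of the context directory would keep the matched files out of the image:

Example exclusion file ".jkube-dockerexclude" (file name assumed)
build/**
*.log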

Using Property placeholders in Dockerfiles

You can reference properties in your Dockerfile using standard property placeholders of the form ${*}. For example, if you have a property in your gradle.properties like this:

gradle.properties
fromImage = fabric8/s2i-java

Dockerfile
FROM ${fromImage}:latest-java11

You can override placeholders using the filter field in image build configuration, see Build Filtering for more details.

3. Examples

Let’s have a look at some code. The following examples demonstrate all available configuration variants:

3.1. Using Groovy Configuration

If the zero configuration mode doesn’t fit your use case and you want more flexibility, you can also use the openshift-gradle-plugin Groovy configuration to configure the plugin as per your needs. openshift-gradle-plugin provides a rich set of configuration options in the form of a Groovy DSL which can be used to tune the plugin’s output to your specific requirements.

The plugin configuration can be roughly divided into the following sections:

  • Global configuration options are responsible for tuning the behavior of plugin tasks.

  • images defines which container images are used and configured.

  • resources defines the resource descriptors for deploying on OpenShift cluster.

  • enricher configures various aspects of creating and enhancing resource descriptors.
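Put together, these sections live under a single openshift block in build.gradle. A minimal skeleton (section contents elided) looks like this:

Example skeleton of the Groovy DSL configuration
openshift {
    buildStrategy = 's2i' // a global configuration option
    images {              // container image configuration
        // image { ... }
    }
    resources {           // resource descriptor configuration
        // ...
    }
    enricher {            // enricher configuration
        // config { ... }
    }
}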

A working example can be found in quickstarts/gradle/groovy-dsl-config directory.

3.1.1. Configuring Images

This section provides an overview of images element with which you can configure different aspects of container images generated by openshift-gradle-plugin. Here is an example of providing Groovy DSL configuration for a simple image:

Example for providing image using Groovy DSL configuration
openshift {
    images {
        image {
            name = "jkube/${project.name}:${project.version}" (1)
            alias = "camel-service" (2)
            build {
                from = "quay.io/jkube/jkube-java:0.0.13" (3)
                assembly { (4)
                    targetDir = "/deployments" (5)
                    layers = [{ (6)
                        fileSets = [{ (7)
                            directory = file("${project.rootDir}/build/dependencies")
                       }]
                    }]
                }
                env { (8)
                    JAVA_LIB_DIR = "/deployments/dependencies/*"
                    JAVA_MAIN_CLASS = "org.apache.camel.cdi.Main"
                }
                labels { (9)
                    labelWithValue = "foo"
                    version = "${project.version}"
                    artifactId = "${project.name}"
                }
                ports = ["8787"] (10)
            }
        }
    }
}
1 Name with which we want our image to be built; See name field in Image Configuration for details.
2 Shortcut name for image; See alias field in Image Configuration for details.
3 Base image on which this image would be built upon; See from field in Image Build Configuration for more details.
4 Assembly Configuration for copying files/directories into image. See Assembly Configuration for more details.
5 Target directory inside the image into which the files are copied
6 Assembly layer; See Assembly Inline/Layer Configuration for details.
7 FileSet Assembly Configuration for copying directories. See fileSets field in Assembly Layer Configuration for details.
8 Environment variables added to image. See Environment and Labels for details.
9 Labels added to image. See Environment and Labels for details.
10 Ports to be exposed. See port field in Image Build Configuration for details.

You can read more about supported fields in image configuration element in Image Configuration section.

3.1.2. Copying Files/Directories into Image

If you want to copy files or directories into your image, you can make use of the openshift-gradle-plugin Assembly Configuration. You need to provide an assembly element in image > build.

Here is an example of copying a single jar file into the image. This configuration copies a jar file located in build/libs/ to the /deployments folder inside the image:

Example for copying a file to image using Groovy DSL configuration:
openshift {
    images {
        image {
            name = "${project.group}/${project.name}:${project.version}"
            build {
                from = "quay.io/jkube/jkube-java:0.0.13"
                assembly {
                    targetDir = "/deployments"
                    layers = [{
                        id = "custom-assembly-for-copying-file"
                        files = [{
                            source = file("build/libs/${project.name}-${project.version}-all.jar")
                            outputDirectory = "."
                        }]
                    }]
                }
            }
        }
    }
}

In order to copy directories, use the fileSets configuration element instead of files. Here is an example of copying directories; it copies the build/dependencies directory to the /deployments directory inside the image.

Example for copying a directory to image using Groovy DSL configuration:
openshift {
    images {
        image {
            name = "${project.group}/${project.name}:${project.version}"
            build {
                from = "quay.io/jkube/jkube-java:0.0.13"
                assembly {
                    targetDir = "/deployments"
                    layers = [{
                        id = "custom-assembly-for-copying-directory"
                        fileSets = [{
                            directory = file("${project.rootDir}/build/dependencies")
                       }]
                    }]
                }
            }
        }
    }
}

3.1.3. Kubernetes Labels and Annotations

Refer to Labels/Annotations Configuration in Kubernetes Resource Configuration

3.1.4. Kubernetes Controller Generation

Refer to Kubernetes Controller Resource Generation in Kubernetes Resource Configuration

3.1.5. Kubernetes Ingress Generation

Refer to Ingress Generation in Kubernetes Resource Configuration

3.1.6. Kubernetes ServiceAccount Generation

Refer to ServiceAccount Generation in Kubernetes Resource Configuration

3.1.7. Route Generation

When the ocResource task is run, an OpenShift Route descriptor (route.yml) will also be generated along with the service if an OpenShift cluster is targeted. If you do not want to generate a Route descriptor, you can set the jkube.openshift.generateRoute property to false.

Note: Routes will be automatically generated for Services with recognized web ports (80, 443, 8080, 8443, 9080, 9090, 9443).

If your service exposes any other port and you still want to generate a Route, you can do any of the following:

  • Force the route creation by setting the jkube.createExternalUrls property to true.

  • Force the route creation by using the expose: true label in the Service:

    • Add the expose: true label in a Service fragment.

    • Add the expose: true label by leveraging the JKube Service Enricher (jkube.enricher.jkube-service.expose).
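For the fragment approach, a minimal Service fragment forcing Route generation could look like the following (the port number is an arbitrary illustration):

Example Service fragment forcing route generation (service.yml)
metadata:
  labels:
    expose: "true"
spec:
  ports:
    - port: 8778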

Table 1. Route Generation Configuration
Element Description Property

generateRoute

If the value is set to false, no Route descriptor will be generated. By default it is set to true, which creates a route.yml descriptor and also adds the Route resource to openshift.yml.

jkube.openshift.generateRoute

jkube.enricher.jkube-openshift-route.generateRoute

tlsTermination

The TLS termination type of the generated Route (for example edge, passthrough or reencrypt).

jkube.enricher.jkube-openshift-route.tlsTermination

tlsInsecureEdgeTerminationPolicy

tlsInsecureEdgeTerminationPolicy indicates the desired behavior for insecure connections to a route. While each router may make its own decisions on which ports to expose, this is normally port 80.

  • Allow - traffic is sent to the server on the insecure port (default)

  • Disable - no traffic is allowed on the insecure port.

  • Redirect - clients are redirected to the secure port.

jkube.enricher.jkube-openshift-route.tlsInsecureEdgeTerminationPolicy

Below is an example of generating a Route with "edge" termination and "Allow" insecureEdgeTerminationPolicy:

Example for generating route resource by configuring it in build.gradle
openshift {
    enricher {
        config {
            'jkube-openshift-route' {
                generateRoute = true
                tlsInsecureEdgeTerminationPolicy = 'Allow'
                tlsTermination = 'Edge'
            }
        }
    }
}

Adding certificates for routes is not directly supported in build.gradle, but they can be added via a YAML fragment.

If you do not want to generate a Route descriptor, you can also specify so in the plugin configuration in your build.gradle, as seen below.

Example for not generating route resource by configuring it in build.gradle
openshift {
    enricher {
        config {
            'jkube-openshift-route' {
                generateRoute = false
            }
        }
    }
}

If you are using resource fragments, you can also configure this in your Service resource fragment (e.g. service.yml). Add an expose label to the metadata section of your service and set it to false.

Example for not generating route resource by configuring it in resource fragments
metadata:
  annotations:
    api.service.kubernetes.io/path: /hello
  labels:
    expose: "false"
spec:
  type: LoadBalancer

3.2. Kubernetes Resource Fragments

You can also use external configuration in the form of YAML resource descriptors located in the src/main/jkube directory. Each resource gets its own file, which contains a skeleton of a resource descriptor. The plugin picks up each resource, enriches it and then combines all of them into a single kubernetes.yml and openshift.yml file. Within these descriptor files you can freely use any Kubernetes feature.

Let’s have a look at an example from quickstarts/gradle/external-resources. This is a plain Spring Boot application whose images are auto-generated as in the zero-configuration case. The resource fragments are in src/main/jkube.

Example fragment "deployment.yml"
spec:
  replicas: 1
  template:
    spec:
      volumes:
        - name: config
          gitRepo:
            repository: 'https://github.com/jstrachan/sample-springboot-config.git'
            revision: 667ee4db6bc842b127825351e5c9bae5a4fb2147
            directory: .
      containers:
        - volumeMounts:
            - name: config
              mountPath: /app/config
          env:
            - name: MY_POD_NAMESPACE
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.namespace
      serviceAccount: ribbon

As you can see, there is no metadata section as would be expected for Kubernetes resources, because it will be added automatically by the openshift-gradle-plugin. The object’s Kind, if not given, is automatically derived from the filename. In this case, the openshift-gradle-plugin will create a Deployment because the file is called deployment.yml. Similar mappings between file names and resource types exist for each supported resource kind; the complete list (along with associated abbreviations) can be found in the Kind Filename Mapping section.

Additionally, if you name your fragment using a name prefix followed by a dash and the mapped file name, the plugin will automatically use that name for your resource. So, for example, if you name your deployment fragment myapp-deployment.yml, the plugin will name your resource myapp. In the absence of such provided name for your resource, a name will be automatically derived from your project’s metadata (in particular, its project name as specified in your build.gradle).

Nor is any image referenced in this example, because the plugin fills in the image details based on the configured image you are building with (either from a generator or from a dedicated image plugin configuration, as seen before).

Enrichment of resource fragments can be fine-tuned by using profile sub-directories.

3.3. Zero Configuration

It’s very common, especially when dealing with the inner development loop, that you don’t need to provide any configuration for your Gradle project. You can get started simply by adding the plugin to your build.gradle file:

Example for setting up the openshift-gradle-plugin in build.gradle
plugins {
  id 'org.eclipse.jkube.openshift' version '1.16.2'
}

In this case, openshift-gradle-plugin analyzes your project and configures the container image and the cluster configuration manifests using a set of opinionated defaults.

3.4. OpenShift Scenarios

openshift-gradle-plugin performs an S2I binary build by default, where the image is built within a pod inside the OpenShift cluster. See OpenShift Build for more details. You can specify which kind of binary build you want using the buildStrategy plugin configuration option:

Example for S2I build using Docker build strategy:
openshift {
   buildStrategy = 'docker'
}

3.4.1. Setting Requests/Limits for OpenShift Build

You can configure the CPU/memory used by the OpenShift build process during ocBuild by providing the openshiftBuildConfig field in resources. See the Setting Quotas for OpenShift Build section for more details.

3.4.2. Configuring Build Output

By default, openshift-gradle-plugin generates an ImageStreamTag as build output, which is pushed to the internal OpenShift registry. You can set the buildOutputKind field in the plugin configuration to override this behavior. If you want to push to some other image registry instead, set the build output to DockerImage as shown below, which will push your image to the specified registry.

Example for S2I build using DockerImage build output:
openshift {
    buildOutputKind = 'DockerImage'
}

4. Tasks Overview

This plugin supports a rich set of tasks for providing a smooth Java developer experience. These tasks can be categorized into multiple groups:

  • Build and Deployment tasks are all about creating and managing Kubernetes build artifacts like Docker images or S2I builds.

  • Development tasks help not only in deploying resource descriptors to the development cluster but also in managing the lifecycle of the development cluster.

Table 2. Build & Deployment Tasks
Task Description

ocBuild

Build images

ocPush

Pushes the built images to the container image registry

ocResource

Generate resource manifests for your application

ocApply

Applies the generated resources to the connected cluster

ocHelm

Generate Helm charts for your application

ocHelmPush

Upload Helm charts to Helm repositories.

Table 3. Development Tasks
Task Description

ocDebug

Debug your Java app running on the cluster

ocLog

Show the logs of your Java app running on the cluster

ocUndeploy

Deletes the kubernetes resources that you deployed via the ocApply task

ocRemoteDev (preview)

Start a remote development session

ocWatch

Watch for file changes and perform rebuilds and redeployments

4.1. Build And Deployment Tasks

4.1.1. ocBuild

This task is for building container images for your application.

For the openshift mode, OpenShift-specific builds are performed. These are so-called Binary Source builds ("binary builds" for short), where the data specified with the build configuration is sent directly to OpenShift as a binary archive.

There are two kinds of binary builds supported by this plugin, which can be selected with the buildStrategy configuration option (jkube.build.strategy property):

Table 4. Build Strategies
buildStrategy Description

s2i

The Source-to-Image (S2I) build strategy uses so called builder images for creating new application images from binary build data. The builder image to use is taken from the base image configuration specified with from in the image build configuration. See below for a list of builder images which can be used with this plugin.

docker

A Docker Build is similar to a normal Docker build except that it is done by the OpenShift cluster and not by a Docker daemon. In addition, this build pushes the generated image to the OpenShift internal registry so that it is accessible in the whole cluster.

Both build strategies update an Image Stream after the image creation.

The BuildConfig and ImageStream resources can be managed by this plugin. If they do not exist, they will be created automatically by ocBuild.

If they already exist, they are reused, except when the buildRecreate configuration option (property jkube.build.recreate) is set to a value as described in Global Configuration. Also, if the provided build strategy is different from the one defined in the existing build configuration, the BuildConfig is edited to reflect the new type (which in turn removes all builds associated with the previous build configuration).

The image stream created can then be referenced directly from Deployment Configuration objects created by ocResource.

By default, image streams are created with a local lookup policy, so that they can also be used by other resources such as Deployments or StatefulSets. This behavior can be turned off by setting the jkube.s2i.imageStreamLookupPolicyLocal property to false when building the project.

In order to create these OpenShift resource objects, access to an OpenShift installation is required.

Regardless of which build mode is used, the images are configured in the same way.

The configuration consists of two parts:

  • a global section which defines the overall behaviour of this plugin

  • and an images section which defines how the images should be built

Many of the options below are relevant for the Kubernetes Workflow or the OpenShift Workflow with Docker builds as they influence how the Docker image is build.

For an S2I binary build, on the other hand, the most relevant section is the Assembly one, because the build depends on which builder/base image is used and how it interprets the content of the uploaded docker.tar.

Setting Quotas for OpenShift Build

You can also limit resource use by specifying resource limits as part of the build configuration. You can do this by providing openshiftBuildConfig field in resource configuration. Below is an example on how to do this:

Example of OpenShift S2I Build resource/limit Configuration
openshift {
    resources {
        openshiftBuildConfig {
            requests { (1)
                cpu = '500m' (2)
                memory = '512Mi' (3)
            }
            limits { (4)
                cpu = '1000m' (5)
                memory = '1Gi' (6)
            }
        }
    }
}
1 Request field which maps to created BuildConfig’s .spec.resources.requests
2 Minimum CPU required by Build Pod
3 Minimum memory required by Build Pod
4 Limits field which maps to created BuildConfig’s (.spec.resources.limits)
5 Maximum CPU required by Build Pod
6 Maximum memory required by Build Pod

It’s also possible to provide a buildconfig.yml BuildConfig resource fragment in src/main/jkube directory like this:

BuildConfig fragment Example(buildconfig.yml)
spec:
  resources:
    limits:
      cpu: "600m"
      memory: "512Mi"
    requests:
      cpu: "500m"
      memory: "300Mi"

4.1.2. ocPush

This task uploads the images which have a build configuration section to the registry. The images to push can be restricted with the global option filter (see Build Goal Configuration for details). The registry to push to is docker.io by default, but it can be specified as part of the image’s name in the usual Docker way. E.g. docker.test.org:5000/data:1.5 will push the image data with tag 1.5 to the registry docker.test.org at port 5000. Registry credentials (i.e. username and password) can be specified in multiple ways as described in the Authentication section.
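Expressed in the Groovy DSL, the registry-qualified name from the example above would be part of the image configuration. This is a sketch, not a complete build setup:

Example of a registry-qualified image name
openshift {
    images {
        image {
            // pushes to the registry docker.test.org:5000 instead of docker.io
            name = "docker.test.org:5000/data:1.5"
        }
    }
}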

4.1.3. ocResource

This task generates OpenShift resources based on your project, using either opinionated defaults or the configuration provided in the Groovy DSL configuration or in resource fragments in src/main/jkube. The generated resources are placed in the build/classes/java/main/META-INF/jkube/openshift directory. You can find all Groovy DSL configuration options for ocResource in the Kubernetes Resource configuration section.

The ocResource task also validates the generated resource descriptors against the Kubernetes API specification. You can see the configuration options regarding Kubernetes resource validation in the Global Configuration section.

4.1.4. ocApply

This task applies the resources created with ocResource to a connected OpenShift cluster.

$ ./gradlew ocApply

4.1.5. ocHelm

This feature allows you to create Helm charts from the Kubernetes resources Eclipse JKube generates for your project. You can then use the generated charts to leverage Helm's capabilities to install, update, or delete your app in Kubernetes.

To generate the Helm chart you need to invoke the ocHelm Gradle task on the command line:

$ ./gradlew ocResource ocHelm

The ocResource task is required to create the resource descriptors that are included in the Helm chart. If you have already generated the resources in a previous step, you can omit this task.

There are multiple ways to configure the generated Helm Chart:

  • By providing a Chart.helm.yaml fragment in src/main/jkube directory.

  • Through the helm section in the openshift-gradle-plugin Groovy DSL configuration.

When using the fragment approach, you simply need to create a Chart.helm.yaml file in the src/main/jkube directory with the fields you want to override. JKube will take care of merging this fragment with the opinionated and configured defaults.
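For example, a minimal Chart.helm.yaml fragment overriding a few of the chart fields listed below might look like this (all values are placeholders):

Example fragment "Chart.helm.yaml" (src/main/jkube)
description: A one-sentence description of the chart
home: https://example.com/my-project
keywords:
  - ci
  - server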

The Groovy DSL configuration is defined in a helm section within the plugin’s configuration:

Example Helm configuration
openshift {
  helm {
    chart = 'Jenkins'
    keywords = ['ci', 'cd', 'server']
    dependencies = [{
      name = 'ingress-nginx'
      version = '1.26.0'
      repository = 'https://kubernetes.github.io/ingress-nginx'
    }]
  }
}

This configuration section knows the following sub-elements in order to configure your Helm chart.

Table 5. Helm configuration
Element Description Property

apiVersion

The apiVersion of Chart.yaml schema, defaults to v1.

jkube.helm.apiVersion

chart

The Chart name.

jkube.helm.chart

version

The Chart SemVer version.

jkube.helm.version

description

The Chart single-sentence description.

jkube.helm.description

home

The Chart URL for this project’s home page.

jkube.helm.home

sources

The Chart list of URLs to source code for this project.

maintainers

The Chart list of maintainers (name+email).

icon

The Chart URL to an SVG or PNG image to be used as an icon. If not provided, the default is extracted from the jkube.eclipse.org/iconUrl annotation in the Kubernetes manifest (kubernetes.yml).

jkube.helm.icon

appVersion

The version of the application that Chart contains.

jkube.helm.appVersion

keywords

Comma separated list of keywords to add to the chart.

engine

The template engine to use.

additionalFiles

The list of additional files to be included in the Chart archive. Any file named README or LICENSE or values.schema.json will always be included by default.

type / types

Platform for which to generate the chart. By default this is kubernetes, but it can also be openshift for using OpenShift-specific resources in the chart. You can also provide both values as a comma-separated list.

Please note that there is no OpenShift support yet for charts, so this is experimental.

jkube.helm.type

sourceDir

Where to find the resource descriptors generated with ocResource.

By default, this is ${basedir}/build/classes/java/main/META-INF/jkube, which is also the output directory used by ocResource.

jkube.helm.sourceDir

outputDir

Where to create the Helm chart, which is ${basedir}/build/jkube/helm/${chartName}/kubernetes by default for Kubernetes and ${basedir}/build/jkube/helm/${chartName}/openshift for OpenShift.

jkube.helm.outputDir

tarballOutputDir

Where to create the Helm chart archive, which is same as outputDir if not provided.

jkube.helm.tarballOutputDir

tarFileClassifier

A string appended to the end of the Helm archive filename as a classifier.

Defaults to empty string.

jkube.helm.tarFileClassifier

chartExtension

The Helm chart file extension (tgz, tar.bz, tar.bzip2, tar.bz2), default value is tar.gz if not provided.

jkube.helm.chartExtension

dependencies

The list of dependencies for this chart.

parameters

The list of parameters to interpolate the Chart templates from the provided Fragments.

These parameters can represent variables; in this case, their values are used to generate the values.yaml file. The fragment placeholders will be replaced with a .Values variable.

The parameters can also represent a Golang expression.

Table 6. Helm Configuration - Maintainer
Element Description

name

The maintainer’s name or organization.

email

The maintainer’s contact email address.

url

The maintainer’s URL address.

Table 7. Helm Configuration - Dependency
Element Description

name

The name of the chart dependency.

version

Semantic version or version range for the dependency.

repository

URL pointing to a chart repository.

condition

Optional reference to a boolean value that toggles the inclusion of the dependency, e.g. subchart.enabled. For more information see the Helm documentation.

alias

Optional reference to the map that will be passed as the value scope for the subchart. For more information see helm documentation.

Table 8. Helm Configuration - Parameters
Element Description

name

The name of the interpolatable parameter. It will be used to replace placeholders (${name}) in the provided YAML fragments and to generate the values.yaml file.

required

Set to true if this is a required value (when used to generate values).

value

If generating a .Values variable, this is the default value.

If the placeholder has to be replaced by an expression, this is the Golang expression, e.g. {{ .Chart.Name | upper }}.
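To sketch how these fields fit together: a hypothetical configuration with one value-generating parameter and one expression parameter could look like the following (parameter names and values are illustrative, following the list-of-closures style used for dependencies above):

```groovy
openshift {
  helm {
    parameters = [{
      name = 'replicaCount'   // replaces ${replicaCount} in the YAML fragments
      required = false
      value = '1'             // becomes the default entry in values.yaml
    }, {
      name = 'appNameUpper'
      value = '{{ .Chart.Name | upper }}'  // injected as a Golang expression
    }]
  }
}
```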

Helm-specific fragments

In addition to the standard Kubernetes resource fragments, you can also provide fragments for Helm Chart.yaml and values.yaml files.

For the Chart.yaml file you can provide a Chart.helm.yaml fragment in the src/main/jkube directory.

For the values.yaml file you can provide a values.helm.yaml fragment in the src/main/jkube directory.

These fragments will be merged with the opinionated and configured defaults. The values provided in the fragments take precedence over the generated defaults.
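For instance, a hypothetical values.helm.yaml fragment adding and overriding default values could look like this (the keys are illustrative):

```yaml
# src/main/jkube/values.helm.yaml
# Merged into the generated values.yaml; these entries take precedence
# over the generated defaults.
replicaCount: 2
image:
  pullPolicy: IfNotPresent
```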

Installing the generated Helm chart

As a next step, you can install the chart via the helm command-line tool as follows:

helm install nameForChartInRepository build/jkube/helm/${chartName}/openshift

or

helm install build/jkube/helm/${chartName}/openshift --generate-name

In addition, this task also creates a tar archive below outputDir which contains the chart with its templates.

4.1.6. ocHelmPush

This feature allows you to upload your Eclipse JKube-generated Helm charts to one of the supported repositories: Artifactory, Chartmuseum, Nexus, and OCI.

To publish a Helm chart you need to invoke the ocHelmPush Gradle task on the command line:

gradle ocResource ocHelm ocHelmPush
The ocResource and ocHelm tasks are required to create the resource descriptors included in the Helm chart, and the Helm chart itself. If you have already generated the resources and created the chart, you can omit these tasks.

The configuration is defined in a helm section within the plugin’s configuration:

Example Helm configuration
openshift {
  helm {
    chart = 'Jenkins'
    keywords = ['ci', 'cd', 'server']
    stableRepository {
      name = 'stable-repo-id'
      url = 'https://stable-repo-url'
      type = 'ARTIFACTORY'
    }
    snapshotRepository {
      name = 'snapshot-repo-id'
      url = 'https://snapshot-repo-url'
      type = 'ARTIFACTORY'
    }
  }
}

This configuration section supports the following sub-elements for configuring your Helm chart.

Table 9. Helm configuration
Element Description Property

stableRepository

The configuration of the stable helm repository (see Helm stable repository configuration).

snapshotRepository

The configuration of the snapshot helm repository (see Helm snapshot repository configuration).

Table 10. Helm stable repository configuration
Element Description Property

name

The name (id) of the server configuration. It can select the maven server by this ID.

jkube.helm.stableRepository.name

url

The url of the server.

jkube.helm.stableRepository.url

username

The username of the repository. Optional. If a maven server ID is specified, the username is taken from there.

jkube.helm.stableRepository.username

password

The password of the repository. Optional. If a maven server ID is specified, the password is taken from there.

jkube.helm.stableRepository.password

type

The type of the repository. One of ARTIFACTORY, NEXUS, CHARTMUSEUM, OCI

jkube.helm.stableRepository.type

Table 11. Helm snapshot repository configuration
Element Description Property

name

The name (id) of the server configuration. It can select the maven server by this ID.

jkube.helm.snapshotRepository.name

url

The url of the server.

jkube.helm.snapshotRepository.url

username

The username of the repository. Optional. If a maven server ID is specified, the username is taken from there.

jkube.helm.snapshotRepository.username

password

The password of the repository. Optional. If a maven server ID is specified, the password is taken from there.

jkube.helm.snapshotRepository.password

type

The type of the repository. One of ARTIFACTORY, NEXUS, CHARTMUSEUM, OCI

jkube.helm.snapshotRepository.type
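Since each repository field maps to a property, the credentials can also be kept out of the build script, e.g. in gradle.properties or passed with -D on the command line. A hypothetical gradle.properties sketch (the repository name, URL, and credentials are placeholders):

```properties
# gradle.properties (or pass each entry as -Djkube.helm.... on the CLI)
jkube.helm.snapshotRepository.name=snapshot-repo-id
jkube.helm.snapshotRepository.url=https://snapshot-repo-url
jkube.helm.snapshotRepository.type=ARTIFACTORY
jkube.helm.snapshotRepository.username=deployer
jkube.helm.snapshotRepository.password=secret
```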

4.1.7. ocHelmLint

This feature allows you to lint your Eclipse JKube-generated Helm charts and examine them for possible issues.

It provides the same output as the helm lint command.

To lint a Helm chart you need to invoke the ocHelmLint Gradle task on the command line:

gradle ocResource ocHelm ocHelmLint
The ocResource and ocHelm tasks are required to create the resource descriptors included in the Helm chart, and the Helm chart itself. If you have already generated the resources and created the chart, you can omit these tasks.
Table 12. Helm lint configuration
Element Description Property

lintStrict

Enable strict mode, fails on lint warnings.

jkube.helm.lint.strict

lintQuiet

Enable quiet mode, only shows warnings and errors.

jkube.helm.lint.quiet

Example Helm lint configuration
openshift {
  helm {
    lintStrict = true
    lintQuiet = true
  }
}

4.2. Development Tasks

4.2.1. ocUndeploy

This task deletes the OpenShift resources that you deployed via the ocApply task.

It iterates through all the resources generated by the ocResource task and deletes them from your current OpenShift cluster.

gradle ocUndeploy

4.2.2. ocLog

This task tails the log of the application that you deployed via the ocApply task:

gradle ocLog

You can then terminate the output by hitting Ctrl+C.

If you wish to get the log of the app and then terminate immediately then try:

gradle ocLog -Djkube.log.follow=false

This lets you pipe the output into grep or some other tool:

gradle ocLog -Djkube.log.follow=false | grep Exception

If your app is running in multiple pods you can configure the pod name to log via the jkube.log.pod property, otherwise it defaults to the latest pod:

gradle ocLog -Djkube.log.pod=foo

If your pod has multiple containers you can configure the container name to log via the jkube.log.container property, otherwise it defaults to the first container:

gradle ocLog -Djkube.log.container=foo

4.2.3. ocDebug

This task enables debugging in your Java app and then port forwards from localhost to the latest running pod of your app so that you can easily debug your app from your Java IDE.

gradle ocDebug

Then follow the on screen instructions.

The default debug port is 5005. If you wish to change the local port to use for debugging, pass in the jkube.debug.port parameter:

gradle ocDebug -Djkube.debug.port=8000

Then in your IDE, start a remote debug execution against localhost using this port, and you should be able to set breakpoints and step through your code.

This lets you debug your apps while they are running inside a Kubernetes cluster - for example if you wish to debug a REST endpoint while another pod is invoking it.

Debug is enabled via the JAVA_ENABLE_DEBUG environment variable being set to true. This environment variable is used by all the standard Java Docker images used by Spring Boot, Quarkus, flat classpath, and executable JAR projects. If you use your own custom Docker base image, you may wish to respect this environment variable as well to enable debugging.

Speeding up debugging

By default, the ocDebug task has to edit your Deployment to enable debugging and then wait for a pod to start. During development you may frequently want to debug things and want to speed this up a bit.

If so you can enable debug mode for each build via the jkube.debug.enabled property.

e.g. you can pass this property on the command line:

gradle ocResource ocApply -Djkube.debug.enabled=true

Then whenever you run the ocDebug task there is no need for it to edit the Deployment and wait for a pod to restart; debugging can start immediately when you type:

gradle ocDebug
Debugging with suspension

The ocDebug task allows you to attach a remote debugger to a running container, but the application is free to execute when the debugger is not attached. In some cases, you may want complete control over the execution, e.g. to investigate the application behavior at startup. This can be done using the jkube.debug.suspend flag:

gradle ocDebug -Djkube.debug.suspend

The suspend flag will set the JAVA_DEBUG_SUSPEND environment variable to true and JAVA_DEBUG_SESSION to a random number in your deployment. When the JAVA_DEBUG_SUSPEND environment variable is set, standard docker images will use suspend=y in the JVM startup options for debugging.

The JAVA_DEBUG_SESSION environment variable is always set to a random number (each time you run the debug task with the suspend flag) in order to tell Kubernetes to restart the pod. The remote application will start only after a remote debugger is attached. You can use the remote debugging feature of your IDE to connect (on localhost, port 5005 by default).

The jkube.debug.suspend flag will disable readiness probes in the Kubernetes deployment in order to start port-forwarding during the early phases of application startup.

4.2.4. ocRemoteDev (preview)

Eclipse JKube Remote Development allows you to run and debug code in your local machine:

  • While connected to and consuming services that are only available in your cluster

  • While exposing your locally running application to other Pods and services running on your cluster

Remote Development features
  • Expose your local application to the cluster

  • Consume cluster services locally without having to expose them to the Internet

  • Connect your local toolset to the cluster services

  • Simple configuration

  • No tools required

  • No special or super-user permissions required in the local machine

  • No special features required in the cluster (should work on any kind of Kubernetes flavor)

  • Boosts your inner-loop developer experience when combined with live-reload frameworks such as Quarkus

Project configuration

The remote development configuration must be provided within the remoteDevelopment configuration element for the project.

openshift {
  remoteDevelopment {
    localServices = [{
      serviceName = "my-local-service" (1)
      port = 8080 (2)
    },{
    }]
    remoteServices = [{
      hostname = "postgresql" (3)
      port = 5432 (4)
    },{
      hostname = "rabbit-mq"
      port = 5672
      localPort = 15672 (5)
    }]
  }
}
1 Name of the service to be exposed in the cluster; the local application will be accessible in the cluster through this hostname/service
2 Port where the local application is listening for connections
3 Name of a cluster service that will be forwarded and exposed locally
4 Port where the cluster service listens for connections (by default, the same port will be used to expose the service locally)
5 Optional port where the cluster service will be exposed locally
Starting the remote development session
$ gradle ocRemoteDev
Table 13. Options available for the remoteDevelopment configuration element
Element Description

localServices

The list of local services to expose in the cluster.

remoteServices

The list of cluster services to expose locally.

Table 14. Options available for the localServices configuration element
Element Description

serviceName

The name of the service that will be created/hijacked in the cluster.

type

The type of service to create (defaults to ClusterIP).

port

The service port, must match the port where the local application is listening for connections.

Table 15. Options available for the remoteServices configuration element
Element Description

hostname

The name of the cluster service whose port will be forwarded to the local machine.

port

The port where the cluster service is listening for connections.

localPort

(Optional) The port where the cluster service will be exposed locally. If not specified, the same port will be used.

4.2.5. ocWatch

This task is used to monitor the project workspace for changes and automatically trigger a redeploy of the application running on Kubernetes. There are two kinds of watchers present at the moment:

  • Docker Image Watcher (watches Docker images)

  • Spring Boot Watcher (based on Spring Boot DevTools)

Before entering the watch mode, this task must generate the docker image and the Kubernetes resources (optionally including some development libraries/configuration), and deploy the app on Kubernetes.

For any application having the ocResource and ocBuild tasks bound to the lifecycle, the following command can be used to run the watch task:

gradle ocWatch

This plugin supports different watcher providers, enabled automatically if the project satisfies certain conditions.

Watcher providers can also be configured manually. Here is an example:

openshift {
    watcher {
        includes = ['docker-image']
        config {
            'spring-boot' {
                 serviceUrlWaitTimeSeconds = 10
            }
        }
    }
}
Spring Boot

This watcher is enabled by default for all Spring Boot projects. It performs the following actions:

  • deploys your application with Spring Boot DevTools enabled

  • tails the log of the latest running pod for your application

  • watches the local development build of your Spring Boot based application and then triggers a reload of the application when there are changes

You need to make sure that devtools is included in the repackaged archive, as shown in the following listing (taken from the Spring Boot docs):

bootJar {
    classpath configurations.developmentOnly
}

Then you need to set a spring.devtools.remote.secret in application.properties, as shown in the following example:

spring.devtools.remote.secret=mysecret
Spring devtools automatically ignores projects named spring-boot, spring-boot-devtools, spring-boot-autoconfigure, spring-boot-actuator, and spring-boot-starter

You can try it on any spring boot application via:

gradle ocWatch

Once the task starts up the spring boot RemoteSpringApplication it will watch for local development changes.

e.g. if you edit the java code of your app and then build it via something like this:

gradle build

You should see your app reload on the fly in the shell running the ocWatch task!

There is also support for LiveReload as well.

Docker Image

This is a generic watcher that can be used in Kubernetes mode only. Once activated, it listens for changes in the project workspace in order to trigger a redeploy of the application. This enables rebuilding of images and restarting of containers in case of updates.

There are five watch modes, which can be specified in multiple ways:

  • build: Automatically rebuild one or more Docker images when one of the files selected by an assembly changes. This works for all files included in assembly.

  • run: Automatically restart your application when their associated images change.

  • copy: Copy changed files into the running container. This is the fastest way to update a container; however, the target container must also support hot deployment for this to make sense. Most application servers, like Tomcat, support this.

  • both: Enables both build and run. This is the default.

  • none: Image is completely ignored for watching.

The watcher can be activated e.g. by running this command in another shell:

gradle build

The watcher will detect that the binary artifact has changed and will first rebuild the docker image, then start a redeploy of the Kubernetes pod.

5. Gradle Groovy DSL Configuration

5.1. Global Configuration

Table 16. Global configuration
Element Description Property

buildStrategy

Defines which build strategy to use when building the container image. Possible values are docker, buildpacks, and jib, of which docker is the default.

If the build is performed in an OpenShift cluster, an additional s2i option is available and selected by default.

The available strategies for OpenShift are therefore s2i (default), docker, jib, and buildpacks.

jkube.build.strategy

buildSourceDirectory

Default directory that contains the assembly descriptor(s) used by the plugin. The default value is src/main/docker. This option is only relevant for the ocBuild task.

jkube.build.source.dir

authConfig

Authentication information when pulling from or pushing to Docker registry. There is a dedicated section Authentication for how to do security.

autoPull

Decide how to pull missing base images or images to start:

  • on: Automatically download any missing images (default)

  • off: Automatic pulling is switched off

  • always: Always pull images, even when they already exist locally

  • once: For multi-module builds, images are only checked once and pulled for the whole build.

jkube.docker.autoPull

imagePullPolicy

Specify whether images should be pulled when looking for base images while building, or for images to start. This property can take the following values (case insensitive):

  • IfNotPresent: Automatically download any missing images (default)

  • Never: Automatic pulling is always switched off

  • Always: Always pull images, even when they already exist locally.

By default, a progress meter is printed out on the console; it is omitted when Gradle runs without an interactive console. A very simplified progress meter is provided when color output is disabled (i.e. with -Djkube.useColor=false).

jkube.docker.imagePullPolicy

certPath

Path to the SSL certificates used when SSL is enabled for communicating with the Docker daemon. These certificates are normally stored in ~/.docker/. With this configuration the path can be set explicitly. If not set, the fallback is first taken from the environment variable DOCKER_CERT_PATH, and then as a last resort ~/.docker/. The keys in this path are expected with their standard names ca.pem, cert.pem and key.pem. Please refer to the Docker documentation for more information about SSL security with Docker.

jkube.docker.certPath

dockerHost

The URL of the Docker Daemon. If this configuration option is not given, then the optional <machine> configuration section is consulted. The scheme of the URL can be either given directly as http or https depending on whether plain HTTP communication is enabled or SSL should be used. Alternatively the scheme could be tcp in which case the protocol is determined via the IANA assigned port: 2375 for http and 2376 for https. Finally, Unix sockets are supported by using the scheme unix together with the filesystem path to the unix socket.

The discovery sequence used by the plugin to determine the URL is:

  1. Value of dockerHost (jkube.docker.host)

  2. The Docker host associated with the docker-machine named in <machine>, i.e. the DOCKER_HOST from docker-machine env. See below for more information about Docker machine support.

  3. The value of the environment variable DOCKER_HOST.

  4. unix:///var/run/docker.sock if it is a readable socket.

jkube.docker.host

filter

This configuration option can be used to temporarily restrict the operation of plugin tasks. Typically, it is set via the system property jkube.image.filter when Gradle is called. The value can be a single image name (either its alias or its full name) or a comma-separated list of image names. Any name which doesn’t refer to an image in the configuration will be ignored.

jkube.image.filter

machine

Docker machine configuration. See Docker Machine for possible values.

maxConnections

The number of parallel connections allowed to be opened to the Docker host. A connection needs to be kept open for parsing log output (as well as for the wait features), so don’t set that number too low. The default is 100, which should be suitable for most cases.

jkube.docker.maxConnections

outputDirectory

The default output directory used by this plugin. The default value is build/docker, and it is only used by the ocBuild task.

jkube.build.target.dir

profile

Profile which contains the enricher and generator configuration. See Profiles for details.

jkube.profile

forcePull

Applicable only for OpenShift, S2I build strategy.

While creating a BuildConfig, by default, if the builder image specified in the build configuration is available locally on the node, that image will be used.

Using forcePull will override the local image and refresh it from the registry the image stream points to.

jkube.build.forcePull

openshiftPullSecret

The name to use for naming pullSecret to be created to pull the base image in case pulling from a private registry which requires authentication for OpenShift.

The default value for pull registry will be picked from jkube.docker.pull.registry/jkube.docker.registry.

jkube.build.pullSecret

openshiftPushSecret

The name of pushSecret to be used to push the final image in case pushing from a protected registry which requires authentication.

jkube.build.pushSecret

buildOutputKind

Allows specifying the registry to which the container image is pushed at the end of the build. If the output kind is ImageStreamTag, the image will be pushed to the internal OpenShift registry. If the output is of type DockerImage, the name of the output reference will be used as a Docker push specification. The default value is ImageStreamTag.

jkube.build.buildOutput.kind

buildRecreate

If the build is performed in an OpenShift cluster then this option decides how the OpenShift resource objects associated with the build should be treated when they already exist:

  • buildConfig or bc : Only the BuildConfig is recreated

  • imageStream or is : Only the ImageStream is recreated

  • all : Both, BuildConfig and ImageStream are recreated

  • none : Neither BuildConfig nor ImageStream is recreated

The default is none. If you provide the property without value then all is assumed, so everything gets recreated.

jkube.build.recreate

registry

Specify globally a registry to use for pulling and pushing images. See Registry handling for details.

jkube.docker.registry

skip

With this parameter the execution of this plugin can be skipped completely.

jkube.skip

skipBuild

If set, no images will be built (which also implies skip.tag) with ocBuild.

jkube.skip.build

skipBuildPom

If set, the build step will be skipped for modules of type pom. If not set, projects of type pom will be skipped by default when they contain no image configurations.

jkube.skip.build.pom

skipTag

If set to true this plugin won’t add any tags to images that have been built with ocBuild.

jkube.skip.tag

skipMachine

Skip using docker machine in any case

jkube.docker.skip.machine

sourceDirectory

Default directory that contains the assembly descriptor(s) used by the plugin. The default value is src/main/docker. This option is only relevant for the ocBuild task.

jkube.build.source.dir

verbose

Boolean attribute for switching on verbose output like the build steps when doing a Docker build. Default is false.

jkube.docker.verbose

logDate

The date format to use when logging messages from Docker. Default is DEFAULT (HH:mm:ss.SSS)

jkube.docker.logDate

logStdout

Log to stdout regardless if log files are configured or not. Default is false.

jkube.docker.logStdout

access

Group of configuration parameters to connect to Kubernetes/OpenShift cluster.

createNewResources

Create new OpenShift resources.

Defaults to true

jkube.deploy.create

debugSuspend

Disables readiness probes in Kubernetes Deployment in order to start port forwarding during early phases of application startup.

Defaults to false.

jkube.debug.suspend

deletePodsOnReplicationControllerUpdate

Delete all the pods if we update a Replication Controller.

Defaults to true.

jkube.deploy.deletePods

failOnNoKubernetesJson

Fail if no Kubernetes JSON is present.

Defaults to false.

jkube.deploy.failOnNoKubernetesJson

failOnValidationError

If value is set to true then any validation error will block the plugin execution. A warning will be printed otherwise.

Default is false.

jkube.failOnValidationError

ignoreRunningOAuthClients

Ignore OAuthClients which are already running. OAuthClients are shared across namespaces so we should not try to update or create/delete global oauth clients.

Defaults to true.

jkube.deploy.ignoreRunningOAuthClients

ignoreServices

Ignore Service resources while applying resources. This is particularly useful when in recreate mode to let you easily recreate all the ReplicationControllers and Pods but leave any service definitions alone to avoid changing the portalIP addresses and breaking existing pods using the service.

Defaults to false.

jkube.deploy.ignoreServices

interpolateTemplateParameters

Interpolate parameter values from *template.yml fragments in the generated resource list (kubernetes.yml).

This is useful when using JKube in combination with Helm.

Placeholders for variables defined in template files can be used in the different resource fragments. Helm generated charts will contain these placeholders/parameters.

For resource task, these placeholders are replaced in the aggregated resource list YAML file (not in the individual generated resources) if this option is enabled.

Defaults to true.

jkube.interpolateTemplateParameters

jsonLogDir

The folder in which to store any temporary JSON files or results.

Defaults to ${project.rootDir}/build/jkube/applyJson.

jkube.deploy.jsonLogDir

localDebugPort

Default port available for debugging your application inside Kubernetes.

Defaults to 5005.

jkube.debug.port

logFollow

Whether to follow (tail) the logs of your application inside Kubernetes.

Defaults to true.

jkube.log.follow

logContainerName

Get logs of some specific container inside your application Deployment.

Defaults to null.

jkube.log.container

logPodName

Get logs of some specific pod inside your application Deployment.

Defaults to null.

jkube.log.pod

mergeWithDekorate

When resource generation is delegated to Dekorate, should JKube resources be merged with Dekorate generated ones.

Defaults to false.

jkube.mergeWithDekorate

offline

Whether to try detecting Kubernetes Cluster or stay offline.

Defaults to false.

jkube.offline

processTemplatesLocally

Process templates locally in Java so that OpenShift templates can be applied on any Kubernetes environment.

Defaults to false.

jkube.deploy.processTemplatesLocally

pushRegistry

The registry to use when pushing the image. See Registry Handling for more details.

jkube.docker.push.registry

recreate

Update resources by deleting them first and then creating them again.

Defaults to false.

jkube.recreate

pushRetries

How often a push should be retried before giving up. This is useful for flaky registries which tend to return 500 error codes from time to time.

Defaults to 0.

jkube.docker.push.retries

resourceEnvironment

Environment name where resources are placed. For example, if you set this property to dev and resourceDir is the default one, plugin will look at src/main/jkube/dev. Multiple environments can also be provided in form of comma separated strings. Resource fragments in these directories will be combined while generating resources.

Defaults to null.

jkube.environment

resourceSourceDirectory

Folder where to find project specific files.

Defaults to ${project.rootDir}/src/main/jkube.

jkube.resourceDir

resourceTargetDirectory

The generated Kubernetes manifests target directory.

Defaults to ${basedir}/build/classes/java/main/META-INF/jkube.

jkube.targetDir

rollingUpgrades

Use Rolling Upgrades to apply changes.

jkube.rolling

s2iImageStreamLookupPolicyLocal

Allow the ImageStream used in the S2I binary build to be used in standard Kubernetes resources such as Deployment or StatefulSet.

Defaults to true

jkube.s2i.imageStreamLookupPolicyLocal

s2iBuildNameSuffix

The S2I binary builder BuildConfig name suffix appended to the image name to avoid clashing with the underlying BuildConfig for the Jenkins pipeline

Defaults to -s2i

jkube.s2i.buildNameSuffix

servicesOnly

Only process services so that those can be recursively created/updated first before creating/updating any pods and Replication Controllers.

Defaults to false.

jkube.deploy.servicesOnly

skip

With this parameter the execution of this plugin can be skipped completely.

jkube.skip

skipApply

If set, no resource manifests will be applied to the connected OpenShift cluster.

Defaults to false.

jkube.skip.apply

skipUndeploy

If set, previously applied resources will not be deleted from the connected OpenShift cluster.

Defaults to false.

jkube.skip.undeploy

skipBuild

If set, no images will be built (which also implies skip.tag) with ocBuild.

jkube.skip.build

skipResource

If set, resource manifests will not be generated with ocResource.

jkube.skip.resource

skipPush

If set to true the plugin won’t push any images that have been built.

Defaults to false.

jkube.skip.push

skipResourceValidation

If value is set to true then resource validation is skipped. This may be useful if resource validation is failing for some reason but you still want to continue the deployment.

Default is false.

jkube.skipResourceValidation

skipTag

If set to true, this plugin won’t push any tags.

Defaults to false.

jkube.skip.tag

useProjectClassPath

Whether to use the project’s compile-time classpath to scan for additional enrichers/generators.

Defaults to false.

jkube.useProjectClassPath

watchMode

How to watch for image changes.

  • copy: Copy watched artifacts into container

  • build: Build only images

  • run: Run images

  • both: Build and run images

  • none: Neither build nor run

Defaults to both.

jkube.watch.mode

watchInterval

Interval in milliseconds (how often to check for changes).

Defaults to 5000.

jkube.watch.interval

watchPostExec

A command which is executed within the container after files are copied into this container when watchMode is copy. Note that this container must be running.

jkube.watch.postExec

workDirectory

The JKube working directory. Defaults to ${project.build.directory}/jkube-temp.

jkube.workDir
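Many of the options above can also be set as properties in gradle.properties rather than in the Groovy DSL. A minimal sketch, with illustrative values (not defaults):

```properties
# Pick up resource fragments from src/main/jkube/dev and src/main/jkube/common
jkube.environment=dev,common
# Build images but do not push them to a registry
jkube.skip.push=true
# Check for changes every 10 seconds in watch mode (default is 5000 ms)
jkube.watch.interval=10000
```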

5.1.1. OpenShift Access Configuration

You can configure parameters to define how the plugin connects to the OpenShift cluster instead of relying on default parameters.

openshift {
  access {
    username = ""
    password = ""
    masterUrl = ""
    apiVersion = ""
  }
}
Element Description Property

username

Username on which to operate.

jkube.username

password

Password on which to operate.

jkube.password

namespace

Namespace on which to operate.

jkube.namespace

masterUrl

Master URL on which to operate.

jkube.masterUrl

apiVersion

Api version on which to operate.

jkube.apiVersion

caCertFile

CaCert File on which to operate.

jkube.caCertFile

caCertData

CaCert Data on which to operate.

jkube.caCertData

clientCertFile

Client Cert File on which to operate.

jkube.clientCertFile

clientCertData

Client Cert Data on which to operate.

jkube.clientCertData

clientKeyFile

Client Key File on which to operate.

jkube.clientKeyFile

clientKeyData

Client Key Data on which to operate.

jkube.clientKeyData

clientKeyAlgo

Client Key Algorithm on which to operate.

jkube.clientKeyAlgo

clientKeyPassphrase

Client Key Passphrase on which to operate.

jkube.clientKeyPassphrase

trustStoreFile

Trust Store File on which to operate.

jkube.trustStoreFile

trustStorePassphrase

Trust Store Passphrase on which to operate.

jkube.trustStorePassphrase

keyStoreFile

Key Store File on which to operate.

jkube.keyStoreFile

keyStorePassphrase

Key Store Passphrase on which to operate.

jkube.keyStorePassphrase
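As an illustration, cluster access could also be configured with client certificates instead of a username and password; every value below is a placeholder for your own cluster:

```groovy
openshift {
  access {
    masterUrl = 'https://api.example.com:6443'             // placeholder cluster URL
    namespace = 'my-namespace'                             // placeholder namespace
    clientCertFile = "${project.rootDir}/certs/client.crt" // example certificate paths
    clientKeyFile = "${project.rootDir}/certs/client.key"
  }
}
```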

5.2. Image Configuration

How images are created is defined in a dedicated images section. Images are specified within the images element of the configuration, with one image element per image.

The image element can contain the following sub-elements:

Table 17. Image Configuration
Element Description

name

Each image configuration has a mandatory, unique docker repository name. This can include registry and tag parts, but also placeholder parameters. See below for a detailed explanation.

alias

Shortcut name for an image which can be used for identifying the image within this configuration. This is used when linking images together or for specifying it with the global image configuration element.

registry

Registry to use for this image. If the name already contains a registry this takes precedence. See Registry handling for more details.

build

Element which contains all the configuration aspects when doing an ocBuild.

This element can be omitted if the image is only pulled from a registry e.g. as support for integration tests like database images.

When building an image, the build section is mandatory; it is explained below.

When specifying the image name in the configuration with the name field, you can use several placeholders. These placeholders are replaced during execution by this plugin.

In addition, you can use regular Gradle properties. These properties are resolved by Gradle itself.

Table 18. Image Names
Placeholder Description

%g

The last part of the gradle group name. The name gets sanitized so that it can be used as a username on GitHub. Only the part after the last dot is used. For example, given the group id org.eclipse.jkube, this placeholder would insert jkube.

%a

A sanitized version of the artifact id, so that it can be used as part of a Docker image name. Primarily, this means it is converted to all lowercase (as required by Docker).

%v

A sanitized version of the project version. Replaces + with - in ${project.version} to comply with the Docker tag convention. (A different replacement symbol can be defined by setting the jkube.image.tag.semver_plus_substitution property.) For example, the version '1.2.3b' becomes the exact same Docker tag, '1.2.3b'. But '1.2.3+internal' becomes the 1.2.3-internal Docker tag.

%l

If the pre-release part of the project version ends with -SNAPSHOT, then this placeholder resolves to latest. Otherwise, it’s the same as %v.

If the ${project.version} contains a build metadata part (i.e. everything after the +), then the + is substituted and the rest is appended. For example, the project version 1.2.3-SNAPSHOT+internal becomes the latest-internal Docker tag.

%t

If the project version ends with -SNAPSHOT, this placeholder resolves to snapshot-<timestamp>, where the timestamp has the date format yyMMdd-HHmmss-SSSS. This feature is especially useful during development to avoid conflicts when images that are still in use need to be updated. Note that you need to take care of cleaning up old images yourself afterwards.

If the ${project.version} contains a build metadata part (i.e. everything after the +), then the + is substituted and the rest is appended. For example, the project version 1.2.3-SNAPSHOT+internal becomes the snapshot-221018-113000-0000-internal Docker tag.
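As a sketch of how these placeholders combine, the following image name uses a made-up registry together with %g, %a and %v:

```groovy
openshift {
    images {
        image {
            // For group 'org.example.shop', project 'Cart-Service' and version '1.2.3+internal',
            // this would resolve to quay.io/shop/cart-service:1.2.3-internal (registry is an example)
            name = 'quay.io/%g/%a:%v'
        }
    }
}
```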

5.2.1. Build Configuration

Here are the different modes in which images can be built:

Inline Configuration

When using this mode, the Dockerfile is created on the fly, with all instructions extracted from the given configuration.

External Dockerfile or Docker archive

Alternatively, an external Dockerfile template or Docker archive can be used. This mode is switched on by using one of these three configuration options within the build configuration:

  • contextDir specifies the docker build context if an external Dockerfile is located outside of the Docker build context. If not specified, the Dockerfile’s parent directory is used as the build context.

  • dockerFile specifies the path of a specific Dockerfile. The Docker build context directory is set to contextDir if given; otherwise it defaults to the directory in which the Dockerfile is stored.

  • dockerArchive specifies a previously saved image archive to load directly. If a dockerArchive is provided, no dockerFile may be given.

All paths can be either absolute or relative. A relative path is looked up in $projectDir/src/main/docker by default. You can easily make it absolute by using $projectDir in your configuration.
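A sketch of enabling the external Dockerfile mode with these options (the paths are illustrative):

```groovy
openshift {
    images {
        image {
            name = 'user/demo'
            build {
                // Docker build context; the Dockerfile is resolved relative to it
                contextDir = "${project.rootDir}/src/main/docker"
                dockerFile = 'Dockerfile'
            }
        }
    }
}
```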

However, you need to add the files yourself in the Dockerfile with an ADD or COPY command. The files of the assembly are stored in a build-context-relative directory maven/, which can be changed via the name option in the assembly configuration.

For example, the files can be added with:

COPY maven/ /my/target/directory

so that the assembly files will end up in /my/target/directory within the container.

If this directory contains a .jkube-dockerignore (or alternatively, a .jkube-dockerexclude file), then it is used for excluding files for the build. If the file doesn’t exist, or it’s empty, then there are no exclusions.

Each line in this file is treated as an entry in the excludes assembly fileSet configuration. Files can be referenced by their relative path name. Wildcards are also supported; patterns are matched using FileSystem#getPathMatcher glob syntax.

It is similar to .dockerignore when using Docker but has a slightly different syntax (hence the different name).

The following .jkube-dockerexclude (or .jkube-dockerignore) example excludes all compiled Java classes.

Example 1. Example .jkube-dockerexclude or .jkube-dockerignore
build/classes/**  (1)
1 Exclude all compiled classes

If this directory contains a .jkube-dockerinclude file, then it is used for including only those files for the build. If the file doesn’t exist or it’s empty, then everything is included.

Each line in this file is treated as an entry in the includes assembly fileSet configuration. Files can be referenced by their relative path name. Wildcards are also supported; patterns are matched using FileSystem#getPathMatcher glob syntax.

The following .jkube-dockerinclude example shows how to include only the jar files that have been built into the Docker build context.

Example 2. Example .jkube-dockerinclude
build/libs/*.jar (1)
1 Only add jar files to your Docker build context.

Except for the assembly configuration, all other configuration options are ignored for now.

Simple Dockerfile build

When only a single image should be built with a Dockerfile, no Groovy DSL configuration is needed at all. All that needs to be done is to place a Dockerfile into the top-level module directory, alongside build.gradle. You can still configure global aspects in the plugin configuration, but as soon as you add an image in the Groovy DSL configuration, you also need to configure the build explicitly.

The image name is by default derived from the gradle coordinates (%g/%a:%l, see Image Names for an explanation of the parameters, which are essentially the Gradle group, project name and project version). This name can be set with the property jkube.image.name in gradle.properties.

Filtering

openshift-gradle-plugin filters the given Dockerfile with gradle properties, much like the maven-resources-plugin does. Filtering is enabled by default and can be switched off with the build config filter='false'. Properties to be replaced are specified with the ${..} syntax. Replacement includes properties set in the build, command-line properties, and system properties. Unresolved properties remain untouched.

This partial replacement means that you can easily mix it with Docker build arguments and environment variable references, but you need to be careful. If you want to be more explicit about the property delimiter, to clearly separate Docker properties from gradle properties, you can redefine the delimiter. In general, the filter option can be specified the same way as delimiters in the resource plugin. In particular, if this configuration contains a *, then the parts left and right of the asterisk are used as delimiters.

For example, the default filter='${*}' parses gradle properties in the format that we know. If you specify a single character for filter, then this delimiter is used for both the start and the end. E.g. a filter='@' triggers on parameters in the format @…​@. Use something like this if you want to clearly separate from Docker build args. This form of property replacement works for the Dockerfile only. For replacing other data in other files targeted for the Docker image, please use the assembly configuration with filtering to make them available in the docker build context.

Example

The following example replaces all properties in the format @property@ within the Dockerfile.

openshift {
    images {
        image {
            name = 'user/demo'
            build {
                filter = '@'
            }
        }
    }
}

All build relevant configuration is contained in the build section of an image configuration. The following configuration options are supported:

Table 19. Build configuration (image)
Element Description

assembly

Specifies the assembly configuration as described in Build Assembly

args

Map specifying the value of Docker build args which should be used when building the image with an external Dockerfile which uses build arguments. The key-value syntax is the same as when defining gradle properties (or labels or env). This argument is ignored when no external Dockerfile is used. Build args can also be specified as properties as described in Build Args

buildOptions

Map specifying the build options to provide to the docker daemon when building the image. These options map to the ones listed as query parameters in the Docker Remote API and are restricted to simple options (e.g.: memory, shmsize). If you use the respective configuration options for build options natively supported by the build configuration (i.e. noCache, cleanup=remove for the build option forcerm=1 and args for build args) then these will override any corresponding options given here. The key-value syntax is the same as when defining environment variables or labels as described in Setting Environment Variables and Labels.

createImageOptions

Map specifying the create image options to provide to the docker daemon when pulling or importing an image. These options map to the ones listed as query parameters in the Docker Remote API and are restricted to simple options (e.g.: fromImage, fromSrc, platform).

cleanup

Cleanup dangling (untagged) images after each build (including any containers created from them). Default is try which tries to remove the old image, but doesn’t fail the build if this is not possible because e.g. the image is still used by a running container. Use remove if you want to fail the build and none if no cleanup is requested.

contextDir

Path to a directory used for the build’s context. You can specify the Dockerfile to use with dockerFile, which by default is the Dockerfile found in the contextDir. The Dockerfile can be also located outside of the contextDir, if provided with an absolute file path. See External Dockerfile for details.

cmd

A command to execute by default (i.e. if no command is provided when a container for this image is started). See Startup Arguments for details.

compression

The compression mode how the build archive is transmitted to the docker daemon (ocBuild) and how docker build archives are attached to this build as sources. The value can be none (default), gzip or bzip2.

dockerFile

Path to a Dockerfile which also triggers Dockerfile mode. See External Dockerfile for details.

dockerArchive

Path to a saved image archive which is then imported. See Docker archive for details.

entryPoint

An entrypoint allows you to configure a container that will run as an executable. See Startup Arguments for details.

env

The environments as described in Setting Environment Variables and Labels.

filter

Enable and set the delimiters for property replacements. By default, properties in the format ${..} are replaced with gradle properties. You can switch off property replacement by setting this property to false. When using a single char like @ then this is used as a delimiter (e.g @…​@). See Filtering for more details.

from

The base image which should be used for this image. If not given, this defaults to busybox:latest and is suitable for a pure data image. In case of an S2I binary build, this parameter specifies the S2I builder image to use, which by default is fabric8/s2i-java:latest. See also fromExt for how to add additional properties for the base image.

fromExt

Extended definition for a base image. This field holds a map of values defined in key = "value" format. The known keys are:

  • name : Name of the base image

  • kind : Kind of the reference to the builder image when in S2I build mode. By default it is ImageStreamTag but can also be ImageStream. An alternative would be DockerImage

  • namespace : Namespace where this builder image lives.

A provided from takes precedence over the name given here. This option is useful for extensions of this plugin.

imagePullPolicy

Specific pull policy for the base image. This overwrites any global pull policy. See the global configuration option imagePullPolicy for the possible values and the default.

labels

Labels as described in Setting Environment Variables and Labels.

maintainer

The author (MAINTAINER) field for the generated image

noCache

Don’t use Docker’s build cache. This can be overwritten by setting a system property docker.noCache when running gradle.

cacheFrom

A list of image elements specifying image names to use as cache sources.

optimise

If set to true, all runCmds are compressed into a single RUN directive so that only one image layer is created.

ports

The exposed ports, given as a list of port elements, one for each port to expose. Whitespace is trimmed from each element and empty elements are ignored. The format can be either purely numerical ("8080") or with the protocol attached ("8080/tcp").

shell

Shell to be used for the runCmds. It contains arg elements which define the executable and its params.

runCmds

Commands to be run during the build process. It contains run elements which are passed to the shell. Whitespace is trimmed from each element and empty elements are ignored. The run commands are inserted right after the assembly and after workdir into the Dockerfile.

skip

If set to true, building of the image is disabled. This config option is best used together with a gradle property.

skipTag

If set to true this plugin won’t add any tags to images.

tags

List of additional tag elements with which an image is to be tagged after the build. Whitespace is trimmed from each element and empty elements are ignored.

user

User to which the Dockerfile should switch at the end (corresponds to the USER Dockerfile directive).

volumes

List of volume elements to create a container volume. Whitespace is trimmed from each element and empty elements are ignored.

workdir

Directory to change to when starting the container.
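To illustrate how these options fit together, here is a sketch of a build section; the image name, base image, entrypoint and tag values are examples, not defaults:

```groovy
openshift {
    images {
        image {
            name = 'user/demo:latest'
            build {
                from = 'eclipse-temurin:17-jre'   // example base image
                maintainer = 'dev@example.com'
                ports = ['8080']
                workdir = '/deployments'
                env {
                    JAVA_OPTS = '-Xmx256m'
                }
                entryPoint {
                    exec = ['java', '-jar', '/deployments/app.jar']
                }
                tags = ['latest', "${project.version}"]
            }
        }
    }
}
```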

5.2.2. Assembly

The assembly element within the build element defines how build artifacts and other files are added to the Docker image. The files which are supposed to be added via the assembly should be present in the project directory. It’s also possible to add files from external sources using your own custom logic (see JKube Plugin for more details).

Table 20. Assembly Configuration (image : build )
Element Description

name

Assembly name, which is maven by default. This name is used for the archives and directories created during the build. This directory holds the files specified by the assembly. If an external Dockerfile is used then this name is also the relative directory which contains the assembly files.

targetDir

Directory under which the files and artifacts contained in the assembly will be copied within the container. The default value for this is ${assembly.name}, so /maven if name is not set to a different value.

inline

Deprecated: use layers instead. Inlined assembly descriptor as described in Assembly - Inline below.

layers

Each of the layers that the assembly will contain as described in Assembly - Layer below.

exportTargetDir

Specification of whether the targetDir should be exported as a volume. This value is true by default except when the targetDir is set to the container root (/). It is also false by default when a base image is used with from, since exporting makes no sense in this case and would waste disk space unnecessarily.

excludeFinalOutputArtifact

By default, the project’s final artifact will be included in the assembly, set this flag to true in case the artifact should be excluded from the assembly.

mode

Mode how the assembled files should be collected:

  • dir : Files are simply copied (default),

  • tar : Transfer via tar archive

  • tgz : Transfer via compressed tar archive

  • zip : Transfer via ZIP archive

The archive formats have the advantage that file permissions can be preserved better (since the copying is independent of the underlying file systems).

permissions

Permission of the files to add:

  • ignore to use the permissions as found on files, regardless of any assembly configuration

  • keep to respect the assembly provided permissions

  • exec for setting the executable bit on all files (required for Windows when using an assembly mode dir)

  • auto to let the plugin select exec on Windows and keep on others.

keep is the default value.

tarLongFileMode

Sets the TarArchiver behaviour on file paths with more than 100 characters length. Valid values are: "warn"(default), "fail", "truncate", "gnu", "posix", "posix_warn" or "omit"

user

User and/or group under which the files should be added. The user must already exist in the base image.

It has the general format user[:group[:run-user]]. The user and group can be given either as numeric user- and group-id or as names. The group id is optional.

If a third part is given, then the build changes to user root before changing the ownership, changes the ownership, and then changes to user run-user, which is then used for the final command to execute. This feature might be needed if the base image has already changed the user (e.g. to 'jboss'), so that a chown from root to this user would fail.

For example, the image jboss/wildfly uses a "jboss" user under which all commands are executed. Adding files in Docker always happens under the UID root. These files can only be changed to "jboss" if the chown command is executed as root. For the following commands to be run again as "jboss" (like the final standalone.sh), the plugin switches back to user jboss (this is the "run-user") after changing the file ownership. For this example, a specification of jboss:jboss:jboss would be required.

In the event you do not need to include any artifacts with the image, you may safely omit this element from the configuration.
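For example, an assembly for a jboss/wildfly-based image could use the three-part user described above; this is a sketch, not a definitive configuration:

```groovy
openshift {
    images {
        image {
            name = 'user/wildfly-app'            // example image name
            build {
                from = 'jboss/wildfly'           // base image that runs as user 'jboss'
                assembly {
                    targetDir = '/deployments'
                    // chown the added files to jboss as root, then switch back to jboss
                    user = 'jboss:jboss:jboss'
                }
            }
        }
    }
}
```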

5.2.3. Assembly - Inline/Layer

Inlined assembly description with a format very similar to Maven Assembly Plugin.

Partial configuration example of an inline/layer element
assembly {
    targetDir = "/deployments"
    layers = [{
        fileSets = [{
            directory = file("${project.rootDir}/build/dependencies")
            outputDirectory = "static"
       }]
    }]
}

The layers element within the assembly element can have one or more layer elements with a Groovy DSL structure that supports the following configuration options:

Table 21. Assembly - Inline/Layer (image : build : assembly )
Element Description

id

Unique ID for the layer.

files

List of files for the layer.

Each file has the following fields:

  • source: Absolute or relative path from the project’s directory of the file to be included in the assembly.

  • outputDirectory: Output directory relative to the root directory of the assembly.

  • destName: Destination filename in the outputDirectory.

  • fileMode: Similar to a UNIX permission, sets the file mode of the file included.

fileSets

List of filesets for the layer.

Each fileset has the following fields:

  • directory: Absolute or relative location from the project’s directory.

  • outputDirectory: Output directory relative to the root directory of the assembly fileSet.

  • includes: A set of files and directories to include.

    • If none is present, then everything is included.

    • Files can be referenced by using their complete path name.

    • Wildcards are also supported, patterns will be matched using FileSystem#getPathMatcher glob syntax.

  • excludes: A set of files and directories to exclude.

    • If none is present, then there are no exclusions.

    • Wildcards are also supported, patterns will be matched using FileSystem#getPathMatcher glob syntax.

  • fileMode: Similar to a UNIX permission, sets the file mode of the files included.

  • directoryMode: Similar to a UNIX permission, sets the directory mode of the directories included.

baseDirectory

Base directory from which to resolve the Assembly’s layer files and filesets.
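Combining files and fileSets, a layer could be sketched as follows; all paths, modes and the layer id are illustrative assumptions:

```groovy
assembly {
    targetDir = '/deployments'
    layers = [{
        id = 'app-layer'
        files = [{
            source = 'build/libs/app.jar'        // hypothetical artifact path
            outputDirectory = '.'
            destName = 'application.jar'
            fileMode = '0644'
        }]
        fileSets = [{
            directory = file("${project.rootDir}/scripts")
            outputDirectory = 'bin'
            includes = ['*.sh']
            fileMode = '0755'
        }]
    }]
}
```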

5.2.4. Build Args

As described in the section Configuration for external Dockerfiles, Docker build args can be used. In addition to the configuration within the plugin configuration, you can also use properties to specify them:

  • Set a system property when running gradle, e.g.: docker.buildArg.http_proxy=http://proxy:8001. This is especially useful when using predefined Docker arguments for setting proxies transparently.

  • Set a project property within the build.gradle, e.g.:

Example
docker.buildArg.myBuildArg = myValue

Please note that the system property setting will always override the project property. Also note that for all properties which are not Docker predefined properties, the external Dockerfile must contain an ARG instruction.
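Putting this together: for the non-predefined property docker.buildArg.myBuildArg from the example above, the external Dockerfile needs a matching ARG instruction. A sketch (base image and usage are illustrative):

```dockerfile
FROM busybox:latest
# Required so that the build arg set via docker.buildArg.myBuildArg is picked up
ARG myBuildArg
RUN echo "building with ${myBuildArg}"
```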

5.3. Environment and Labels

When creating a container, one or more environment variables can be set via configuration with the env parameter:

Example
openshift {
  images {
    image {
      build {
        env {
          JAVA_HOME = '/opt/jdk8'
          CATALINA_OPTS = '-Djava.security.egd=file:/dev/./urandom'
        }
      }
    }
  }
}

It is also possible to set the environment variables from the outside of the plugin’s configuration with the parameter envPropertyFile. If given, this property file is used to set the environment variables where the keys and values specify the environment variable. Environment variables specified in this file override any environment variables specified in the configuration.
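A file referenced via envPropertyFile could look like this sketch (the keys and values are illustrative); each key becomes an environment variable and overrides any value set in the configuration:

```properties
# env.properties (hypothetical file passed via envPropertyFile)
JAVA_HOME=/opt/jdk11
LOG_LEVEL=DEBUG
```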

Labels can be set inline the same way as environment variables:

Example
openshift {
  images {
    image {
      build {
        labels {
          version = "${project.version}"
          artifactId = "${project.name}"
        }
      }
    }
  }
}

5.4. Startup Arguments

Using entryPoint and cmd it is possible to specify the entry point or cmd for a container.

The difference is that an entrypoint is the command that is always executed, with the cmd as its argument. If no entryPoint is provided, it defaults to /bin/sh -c, so any cmd given is executed within a shell. The arguments given to docker run are always passed as arguments to the entrypoint, overriding any given cmd option. On the other hand, if no extra arguments are given to docker run, the default cmd is used as the argument to the entrypoint.

See this stackoverflow question for a detailed explanation.

An entry point or command can be specified in two alternative formats:

Table 22. Entrypoint and Command Configuration
Mode Description

shell

Shell form in which the whole line is given to shell -c for interpretation.

exec

List of arguments (with inner arg elements) which are given to the exec call directly without any shell interpretation.

Either shell or exec should be specified.

Example
openshift {
    images {
        image {
            build {
                entryPoint {
                    shell = "java -jar \$HOME/server.jar"
                }
            }
        }
    }
}

or

Example
openshift {
    images {
        image {
            build {
                entryPoint {
                    exec = ["java", "-jar", "/opt/demo/server.jar"]
                }
            }
        }
    }
}
INFO

Startup arguments are not used in S2I builds

5.5. Kubernetes Resource Configuration

This section includes Groovy DSL configuration options you can use to tweak generated Kubernetes manifests.

5.5.1. Labels/Annotations

Labels and annotations can be easily added to any resource object. This is best explained by an example.

Example for label and annotations
openshift {
    resources {
        labels {   (1)
            all {  (2)
                organisation = 'unesco' (3)
            }
            service { (4)
                database = 'mysql'
                persistent = 'true'
            }
            replicaSet { (5)
            }
            pod { (6)
            }
            deployment { (7)
            }
        }
        annotations { (8)
        }
    }
}
1 labels section with resources contains labels which should be applied to objects of various kinds
2 Within all, labels which should be applied to every object can be specified
3 Each property specifies a key-value pair
4 service labels are used to label services
5 replicaSet labels are for replica set and replication controller
6 pod holds labels for pod specifications in replication controller, replica sets and deployments
7 deployment is for labels on deployments (kubernetes) and deployment configs (openshift)
8 The subelements are also available for specifying annotations.

Labels and annotations can be specified in free form as a map. In this map, the element name is the name of the label or annotation respectively, whereas the content is the value to set.

The following subelements are possible for labels and annotations :

Table 23. Label and annotation configuration
Element Description

all

All entries specified in the all section are applied to all resource objects created. This also includes build objects like image streams and build configs which are created implicitly for an OpenShift build.

deployment

Labels and annotations applied to Deployment (for Kubernetes) and DeploymentConfig (for OpenShift) objects.

pod

Labels and annotations applied to the pod specification as used in ReplicationController, ReplicaSet, Deployment and DeploymentConfig objects.

replicaSet

Labels and annotations applied to ReplicaSet and ReplicationController objects.

service

Labels and annotations applied to Service objects.

ingress

Labels and annotations applied to Ingress objects.

serviceAccount

Labels and annotations applied to ServiceAccount objects.

route

Labels and annotations applied to Route objects.

5.5.2. Controller Generation

In JKube terminology, a Controller resource is a Kubernetes resource which manages the Pods created for your application, such as a Deployment, DeploymentConfig, ReplicaSet, or StatefulSet.

By default, a Deployment is generated in Kubernetes mode. You can easily configure different aspects of the generated Controller resource using the Groovy DSL configuration. Here is an example:

Example of Controller Resource Configuration
openshift {
    resources {
        controller {
            env { (1)
                organization = 'Eclipse Foundation'
                projectname = 'jkube'
            }
            controllerName = 'my-deploymentname' (2)
            containerPrivileged = 'true' (3)
            imagePullPolicy = 'Always' (4)
            replicas = '3' (5)
            liveness { (6)
                getUrl = 'http://:8080/q/health'
                tcpPort = '8080'
                initialDelaySeconds = '3'
                timeoutSeconds = '3'
            }
            startup { (7)
              periodSeconds = 30
              failureThreshold = 1
              getUrl = "http://:8080/actuator/health"
            }
            volumes = [{ (8)
                name = 'scratch'
                type = 'emptyDir'
                medium = 'Memory'
                mounts = ['/var/scratch']
            }]
            containerResources {
                requests { (9)
                    cpu = '250m'
                    memory = '32Mi'
                }
                limits { (10)
                    cpu = '500m'
                    memory = '64Mi'
                }
            }
        }
    }
}
1 Environment variables added to all of your application Pods
2 Name of the Controller (metadata.name set in the generated Deployment, Job, ReplicaSet, etc.)
3 Setting Security Context of all application Pods.
4 Configure how images would be updated. Can be one of IfNotPresent, Always or Never. Read Kubernetes Images docs for more details.
5 Number of pod replicas we want for our application
6 Define an HTTP liveness request, see Kubernetes Liveness/Readiness probes for more details.
7 Define an HTTP startup request, see Kubernetes Startup probes for more details.
8 Mounting an EmptyDir Volume to your application pods
9 Requests describe the minimum amount of compute resources required. See Kubernetes Resource Management Documentation for more info.
10 Limits describe the maximum amount of compute resources allowed. See Kubernetes Resource Management Documentation for more info.

Here are the fields available in resources Groovy DSL configuration that would work with ocResource:

Table 24. resources fields for configuring generated controllers
Element Description

controller

Configuration element for changing various aspects of generated Controller.

serviceAccount

ServiceAccount name which will be used by pods created by controller resources (e.g. Deployment, ReplicaSet, etc.)

useLegacyJKubePrefix

Use the old jkube.io/ annotation prefix instead of the jkube.eclipse.org/ annotation prefix

Configuring generated Controller via Groovy DSL

This configuration field is focused only on changing various elements of the generated Controller (mainly fields specified in PodTemplateSpec). Here are the available configuration fields within this object:

Table 25. controller fields for configuring generated controllers
Element Description

env

Environment variables which will be added to containers in Pod template spec.

volumes

Configuration element for adding volume mounts to containers in Pod template spec

controllerName

Name of the controller resource (i.e. Deployment, ReplicaSet, StatefulSet, etc.) generated

liveness

Configuration element for adding a liveness probe

readiness

Configuration element for adding readiness probe

startup

Configuration element for adding startup probe

containerPrivileged

Run container in privileged mode. Sets privileged: true in generated Controller’s PodTemplateSpec

imagePullPolicy

How images should be pulled (maps to ImagePullPolicy).

initContainers

Configuration element for adding InitContainers to generated Controller resource.

replicas

Number of replicas to create

restartPolicy

Pod’s restart policy.

For Job, this defaults to OnFailure. For others, it’s not provided (OpenShift assumes it to be Always)

containerResources

Configure Controller’s compute resource requirements

schedule

Schedule for CronJob written in Cron syntax.

InitContainer Groovy DSL configuration
Table 26. initContainer fields for specifying initContainers
Element Description

name

Name of InitContainer

imageName

Image used for InitContainer

imagePullPolicy

How images should be pulled (maps to ImagePullPolicy).

cmd

Command to be executed in InitContainer (maps to .command)

volumes

Configuration element for adding volume mounts to InitContainers in Pod template spec

env

Environment variables that will be added to this InitContainer in Pod template spec.
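
As an illustration, the fields above can be combined to add a simple init container that waits before the main container starts. The name, image, and command values here are hypothetical, and the exact value type accepted by cmd may vary:

```groovy
openshift {
    resources {
        controller {
            initContainers = [{
                name = 'wait-for-dependencies'   // hypothetical name
                imageName = 'busybox:latest'     // hypothetical image
                cmd = 'sleep 5'                  // maps to the container's .command
                imagePullPolicy = 'IfNotPresent'
            }]
        }
    }
}
```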

Container Resource Groovy DSL configuration
Table 27. containerResources fields for specifying compute resource requirements
Element Description

requests

The minimum amount of compute resources required. See Kubernetes Resource Management Documentation for more info.

limits

The maximum amount of compute resources allowed. See Kubernetes Resource Management Documentation for more info.

5.5.3. Probe Configuration

Probe configuration is used for configuring liveness, readiness, and startup probes for containers. All probes support the following options:

Table 28. Groovy DSL Probe configuration
Element Description

initialDelaySeconds

Initial delay in seconds before the probe is started.

timeoutSeconds

Timeout in seconds how long the probe might take.

exec

Command to execute for probing.

getUrl

Probe URL for HTTP Probe. Configures HTTP probe fields like host, scheme, path etc by parsing URL. For example, a getUrl = "http://:8080/health" would result in probe generated with fields set like this:

host: ""

path: /health

port: 8080

scheme: HTTP

Host name with empty value defaults to Pod IP. You probably want to set "Host" in httpHeaders instead.

tcpPort

TCP port to probe.

failureThreshold

When a probe fails, Kubernetes will try failureThreshold times before giving up

successThreshold

Minimum consecutive successes for the probe to be considered successful after having failed.

httpHeaders

Custom headers to set in the request.

periodSeconds

How often in seconds to perform the probe. Defaults to 10 seconds. Minimum value is 1.
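
The same options apply to the readiness probe. As a minimal sketch (the endpoint path and timing values are hypothetical):

```groovy
openshift {
    resources {
        controller {
            readiness {
                getUrl = 'http://:8080/q/health/ready'  // empty host defaults to Pod IP
                initialDelaySeconds = '5'
                periodSeconds = '15'
                failureThreshold = '3'
            }
        }
    }
}
```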

5.5.4. Volume Configuration

The volumes field contains a list of volume configurations. Different configuration options are supported in order to cover the different Volume types in Kubernetes.

Here are the options supported by a single volume:

Table 29. Groovy DSL volume configuration
Element Description

type

type of Volume

name

name of volume to be mounted

mounts

List of mount paths of this volume.

path

Path for volume

medium

medium, applicable for Volume type emptyDir

repository

repository, applicable for Volume type gitRepo

revision

revision, applicable for Volume type gitRepo

secretName

Secret name, applicable for Volume type secret

server

Server name, applicable for Volume type nfsPath

readOnly

Whether it’s read only or not

pdName

pdName, applicable for Volume type gcePdName

fsType

File system type for Volume

partition

partition, applicable for Volume type gcePdName

endpoints

endpoints, applicable for Volume type glusterFsPath

claimRef

Claim Reference, applicable for Volume type persistentVolumeClaim

volumeId

volume id

diskName

disk name, applicable for Volume type azureDisk

diskUri

disk uri, applicable for Volume type azureDisk

kind

kind, applicable for Volume type azureDisk

cachingMode

caching mode, applicable for Volume type azureDisk

hostPathType

Host Path type

shareName

Share name, applicable for Volume type azureFile

user

User name

secretFile

Secret File, applicable for Volume type cephfs

secretRef

Secret reference, applicable for Volume type cephfs

lun

LUN(Logical Unit Number)

targetWwns

target WWNs, applicable for Volume type fc

datasetName

data set name, applicable for Volume type flocker

portals

list of portals, applicable for Volume type iscsi

targetPortal

target portal, applicable for Volume type iscsi

registry

registry, applicable for Volume type quobyte

volume

volume, applicable for Volume type quobyte

group

group, applicable for Volume type quobyte

iqn

IQN, applicable for Volume type iscsi

monitors

list of monitors, applicable for Volume type rbd

pool

pool, applicable for Volume type rbd

keyring

keyring, applicable for Volume type rbd

image

image, applicable for Volume type rbd

gateway

gateway, applicable for Volume type scaleIO

system

system, applicable for Volume type scaleIO

protectionDomain

protection domain, applicable for Volume type scaleIO

storagePool

storage pool, applicable for Volume type scaleIO

volumeName

volume name, applicable for Volume type scaleIO and storageOS

configMapName

ConfigMap name, applicable for Volume type configMap

configMapItems

List of ConfigMap items, applicable for Volume type configMap

items

List of items, applicable for Volume type downwardAPI
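
For example, a configMap volume mounted into the application containers could be configured like this (the volume name, ConfigMap name, and mount path are hypothetical):

```groovy
openshift {
    resources {
        controller {
            volumes = [{
                name = 'app-config'
                type = 'configMap'
                configMapName = 'my-app-config'   // hypothetical ConfigMap
                mounts = ['/etc/app-config']      // mount path inside the container
            }]
        }
    }
}
```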

5.5.5. Secrets

Groovy DSL configuration

You can create a secret using Groovy DSL configuration in the build.gradle file. It should contain the following fields:

key required description

name

true

this will be used as the name of the Kubernetes Secret resource

namespace

false

the Secret resource will be applied to the specified namespace, if provided

This is best explained by an example.
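
A minimal sketch of the Groovy DSL configuration, using only the fields documented above (the secret name and namespace are hypothetical):

```groovy
openshift {
    resources {
        secrets = [{
            name = 'mydockerkey'    // name of the generated Secret
            namespace = 'default'   // optional target namespace
        }]
    }
}
```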

Yaml fragment with annotation

You can create a secret using a YAML fragment. You can reference the Docker server id with the annotation jkube.eclipse.org/dockerServerId. The YAML fragment file should be placed under the src/main/jkube/ folder.

Example
apiVersion: v1
kind: Secret
metadata:
  name: mydockerkey
  namespace: default
  annotations:
    jkube.eclipse.org/dockerServerId: ${docker.registry}
type: kubernetes.io/dockercfg

5.5.6. Ingress Generation

When the ocResource task is run, an Ingress will be generated for each Service if the jkube.createExternalUrls property is enabled.

The generated Ingress can be further customized by using a Groovy DSL configuration or by providing a YAML resource fragment.

Groovy DSL Configuration

Table 30. Fields supported in resources
Element Description

ingress

Configuration element for creating new Ingress

routeDomain

Set host for Ingress or OpenShift Route

Here is an example of configuring Ingress using Groovy DSL configuration:

Enable Ingress Generation by enabling the jkube.createExternalUrls property
jkube.createExternalUrls=true
Example for Ingress Configuration
openshift {
  resources {
    ingress {
      ingressTlsConfigs = [{ (1)
        hosts = ["foo.bar.com"]
        secretName = "testsecret-tls"
      }]
      ingressRules = [{
        host = "foo.bar.com" (2)
        paths = [{
          pathType = "Prefix" (3)
          path = "/foo" (4)
          serviceName = "service1" (5)
          servicePort = "8080" (6)
        }]
      }]
    }
  }
}
1 Ingress TLS Configuration to specify Secret that contains TLS private key and certificate
2 Host names, can be precise matches or a wildcard. See Kubernetes Ingress Hostname documentation for more details
3 Ingress path type. Can be one of ImplementationSpecific, Exact or Prefix
4 Ingress path corresponding to the provided serviceName
5 Service Name corresponding to path
6 Service Port corresponding to path
Ingress Groovy DSL Configuration

Here are the supported options while providing ingress in Groovy DSL configuration

Table 31. ingress configuration
Element Description

ingressRules

IngressRule configuration

ingressTlsConfigs

Ingress TLS configuration

IngressRule Groovy DSL Configuration

Here are the supported options while providing ingressRules in Groovy DSL configuration

Table 32. ingressRule configuration
Element Description

host

Host name

paths

IngressRule path configuration

IngressRule Path Groovy DSL Configuration

Here are the supported options while providing paths in Groovy DSL configuration

Table 33. IngressRule path Groovy DSL configuration
Element Description

pathType

type of Path

path

path

serviceName

Service name

servicePort

Service port

resource

Resource reference in Ingress backend

IngressRule Path Resource Groovy DSL Configuration

Here are the supported options while providing resource in IngressRule’s path Groovy DSL configuration

Table 34. IngressRule Path resource Groovy DSL configuration
Element Description

name

Resource name

kind

Resource kind

apiGroup

Resource’s apiGroup

IngressTls Groovy DSL Configuration

Here are the supported options while providing ingressTlsConfigs in the ingress Groovy DSL configuration

Table 35. IngressTls ingressTlsConfig Groovy DSL configuration
Element Description

secretName

Secret name

hosts

A list of host names covered by the TLS certificate

Ingress Yaml fragment:

You can create Ingress using YAML fragments too by placing the partial YAML file in the src/main/jkube directory. The following snippet contains an Ingress fragment example.

Ingress fragment Example
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: tls-example-ingress
spec:
  tls:
  - hosts:
    - https-example.foo.com
    secretName: testsecret-tls
  rules:
  - host: https-example.foo.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: service1
            port:
              number: 80

5.5.7. ServiceAccount Generation

You can use the resource configuration to generate a ServiceAccount, or to configure an already existing ServiceAccount into your generated Deployment.

Here is an example of Groovy DSL configuration to generate a ServiceAccount:

Example for Creating ServiceAccount via Groovy DSL
openshift {
    resources {
        serviceAccounts = [{
            name = 'my-serviceaccount' (1)
            deploymentRef = 'my-deployment-name' (2)
        }]
    }
}
1 Name of ServiceAccount to be created
2 Deployment which will be using this ServiceAccount

If you don’t want to generate a ServiceAccount but just want to use an existing one in your Deployment, you can configure it via the serviceAccount field in the resource configuration. Here is an example:

Example for Configuring already existing ServiceAccount into generated Deployment
openshift {
    resources {
        serviceAccount = 'my-existing-serviceaccount'
    }
}

Service Account Resource fragment:

If you don’t want to use the Groovy DSL configuration, you can provide a resource fragment for the ServiceAccount resource. Here is what it would look like:

ServiceAccount resource fragment
apiVersion: v1
kind: ServiceAccount
metadata:
  name: build-robot

5.5.8. Resource Validation

The ocResource task also validates the generated resource descriptors using the API specification of Kubernetes.

Table 36. Validation Configuration
Element Description Property

skipResourceValidation

If the value is set to true, then resource validation is skipped. This may be useful if resource validation is failing for some reason but you still want to continue the deployment.

Default is false.

jkube.skipResourceValidation

failOnValidationError

If the value is set to true, then any validation error will block the plugin execution. Otherwise, only a warning is printed.

Default is false.

jkube.failOnValidationError
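
For example, validation behavior can be controlled via gradle.properties (or with -P on the command line), using the default values shown in the table above as a starting point:

```properties
# skip schema validation of the generated descriptors
jkube.skipResourceValidation=true
# only warn on validation errors instead of failing (the default)
jkube.failOnValidationError=false
```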

6. Generators

The usual way to define Docker images is with the plugin configuration as explained in ocBuild. This can either be done completely within the build.gradle or by referring to an external Dockerfile.

However, this plugin provides an additional route for defining image configurations, via so-called Generators. A Generator is a Java component providing an auto-detection mechanism for certain build types, like a Spring Boot build or a plain Java build. As soon as a Generator detects that it is applicable, it is called with the list of images configured in the build.gradle. Typically, a Generator dynamically creates a new image configuration only if this list is empty, but a Generator is also free to add new images to an existing list or even change the current image list.

The included Generators are enabled by default, but you can easily disable them or select only a certain set of generators. Each generator has a unique name.

The generator configuration is embedded in a generator configuration section:

Example for a generator configuration
openshift {
  generator { (1)
    includes = ['spring-boot'] (2)
    config { (3)
      'spring-boot' { (4)
          alias = 'ping'
      }
    }
  }
}
1 Start of generators' configuration.
2 Generators can be included and excluded. Includes have precedence, and the generators are called in the given order.
3 Configuration for individual generators.
4 The config is a map of supported config values. Each section is embedded in a tag named after the generator. The following sub-elements are supported:
Table 37. Generator configuration
Element Description

includes

Contains one or more include elements with generator names which should be included. If given, only this list of generators is included, in the given order. The order is important because, by default, only the first matching generator kicks in. The generators from every active profile are included, too. However, the generators listed here are moved to the front of the list so that they are called first. Use the profile raw if you want to explicitly set the complete list of generators.

excludes

Holds one or more exclude elements with generator names to exclude. If set, all detected generators are used except the ones mentioned in this section.

config

Configuration for all generators. Each generator supports a specific set of configuration values as described in its documentation. The sub-elements of this section are the names of the generators to configure. E.g. for the generator spring-boot, the sub-element is called spring-boot. This element then holds the specific generator configuration, like name for specifying the final image name. See above for an example. Configuration coming from profiles is merged into this config, but does not override the configuration specified here.

Besides specifying the generator configuration in the plugin’s configuration, it can also be set directly with properties:

Example generator property config
gradle ocBuild -Pjkube.generator.java-exec.webPort=8082

The general scheme is a prefix jkube.generator. followed by the unique generator name and then the generator specific key.

In addition to the provided default Generators described in the next section Default Generators, custom generators can be easily added. There are two ways to include generators:

Plugin dependency

You can declare the jars holding the generators as a dependency of this plugin, as shown in this example:

buildscript {
    repositories {
        mavenLocal()
    }
    dependencies {
        classpath('io.acme:mygenerator:1.0')
    }
}
Compile time dependency

Alternatively, if your application code comes with a custom generator, you can set the global configuration option useProjectClasspath (property: jkube.useProjectClasspath) to true. In this case, the project artifact and its dependencies are also scanned for Generators. See Generator API for details on how to write your own generators.
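
For example, the option can be enabled in gradle.properties:

```properties
jkube.useProjectClasspath=true
```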

6.1. Default Generators

All default generators examine the build information for certain aspects and generate a Docker build configuration on the fly. They can be configured to a certain degree, where the configuration is generator specific.

Table 38. Default Generators
Generator Name Description

Java Applications

java-exec

Generic generator for flat classpath and fat-jar Java applications

Spring Boot

spring-boot

Spring Boot specific generator

Thorntail v2

thorntail-v2

Generator for Thorntail v2 apps

Vert.x

vertx

Generator for Vert.x applications

Web applications

webapp

Generator for WAR based applications supporting Tomcat, Jetty and Wildfly base images

Quarkus

quarkus

Generator for Quarkus based applications

Open Liberty

openliberty

Generator for Open Liberty applications

Micronaut

micronaut

Generator for Micronaut based applications

There are some configuration options which are shared by all generators:

Table 39. Common generator options
Element Description Property

add

When set to true, the generator adds to an existing image configuration. By default, this is disabled, so that a generator only kicks in when there are no other image configurations in the build, either configured directly for an ocBuild or already added by a generator which has run previously.

jkube.generator.add

alias

An alias name for referencing this image in various other parts of the configuration. This is also used in the log output. The default alias name is the name of the generator.

jkube.generator.alias

from

This is the base image from where to start when creating the images. By default, the generators make an opinionated decision for the base image, which is described in the respective generator section.

jkube.generator.from

fromMode

When using OpenShift S2I builds, the base image can be either a plain Docker image (mode: docker) or a reference to an ImageStreamTag (mode: istag). In the case of an ImageStreamTag, from has to be specified in the form namespace/image-stream:tag. The mode only takes effect when running in OpenShift mode.

jkube.generator.fromMode

name

The Docker image name used when doing Docker builds. For OpenShift S2I builds, it’s the name of the image stream. This can be a pattern as described in Name Placeholders. The default is %g/%a:%l. Note that this flag only works when you’re using the opinionated image configuration provided by generators; if generators are not applicable for your project configuration, this flag won’t work.

jkube.generator.name

registry

An optional Docker registry used when doing Docker builds. It has no effect for OpenShift S2I builds.

jkube.generator.registry

tags

A comma separated list of additional tags you want to tag your image with

jkube.generator.tags

When used as properties they can be directly referenced with the property names above.

6.1.1. Java Applications

One of the most generic Generators is the java-exec generator. It is responsible for starting up arbitrary Java applications. It knows how to deal with fat-jar applications, where the application and all dependencies are included within a single jar and the MANIFEST.MF within the jar references a main class. It also handles flat classpath applications, where the dependencies are separate jar files and a main class is given.

If no main class is explicitly configured, the plugin first attempts to locate a fat jar. If the Gradle build creates a JAR file with a META-INF/MANIFEST.MF containing a Main-Class entry, then this is considered to be the fat jar to use. If there is more than one such file, the largest one is used.

If a main class is configured (see below), then the image configuration will contain the application jar plus all dependency jars. If no main class is configured and no fat jar is detected, then this Generator tries to detect a single main class by searching for public static void main(String[] args) among the application classes. If exactly one class is found, this is considered to be the main class. If none or more than one is found, the Generator does nothing.
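
If auto-detection is not desired, the main class can be set explicitly via the generator configuration (the class name here is hypothetical):

```groovy
openshift {
    generator {
        config {
            'java-exec' {
                mainClass = 'com.example.Main'   // hypothetical main class
            }
        }
    }
}
```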

It will use the following base image by default, but as explained above, this can be changed with the from configuration.

Table 40. Java Base Images
Docker Build S2I Build ImageStream

Community

quay.io/jkube/jkube-java

quay.io/jkube/jkube-java

jkube-java

These images always refer to the latest tag.

When a fromMode of istag is used to specify an ImageStreamTag and no from is given, the ImageStreamTag jkube-java in the namespace openshift is chosen by default. By default, fromMode = "docker", which uses a plain Docker image reference for the S2I builder image.

Besides the common configuration parameters described in the table common generator options, the following additional configuration options are recognized:

Table 41. Java Application configuration options
Element Description Property

targetDir

Directory within the generated image where the detected artifacts are put. Change this only if the base image is changed, too.

Defaults to /deployments.

jkube.generator.java-exec.targetDir

jolokiaPort

Port of the Jolokia agent exposed by the base image. Set this to 0 if you don’t want to expose the Jolokia port.

Defaults to 8778.

jkube.generator.java-exec.jolokiaPort

mainClass

Main class to call. If not given, first a check is performed to detect a fat jar (see above).

Next, a class is looked up by scanning build/classes for a single class with a main method.

If no such class is found, or if more than one is found, then this generator does nothing.

jkube.generator.java-exec.mainClass

prometheusPort

Port of the Prometheus jmx_exporter exposed by the base image. Set this to 0 if you don’t want to expose the Prometheus port.

Defaults to 9779.

jkube.generator.java-exec.prometheusPort

webPort

Port to expose as service, which is supposed to be the port of a web application. Set this to 0 if you don’t want to expose a port.

Defaults to 8080.

jkube.generator.java-exec.webPort

The exposed ports are typically used later by Enrichers to create default Kubernetes or OpenShift services.

You can add additional files to the target image within baseDir by placing files into src/main/jkube-includes. These will be added with mode 0644, while everything in src/main/jkube-includes/bin will be added with mode 0755.

6.1.2. Spring Boot

This generator is called spring-boot and gets activated when it finds a plugin with id org.springframework.boot in the build.gradle.
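
For example, a build.gradle with both plugins applied activates this generator (the Spring Boot version shown is hypothetical):

```groovy
plugins {
    id 'org.springframework.boot' version '2.7.18'       // hypothetical version
    id 'org.eclipse.jkube.openshift' version '1.16.2'
}
```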

This generator is based on the Java Application Generator and inherits all of its configuration values. The generated container port is read from the server.port property in application.properties, defaulting to 8080 if it is not found. It also uses the same default images as the java-exec Generator.

Beside the common generator options and the java-exec options the following additional configuration is recognized:

Table 42. Spring-Boot configuration options
Element Description Property

color

If set, force the use of color in the Spring Boot console output.

jkube.generator.spring-boot.color

The generator adds Kubernetes liveness and readiness probes pointing to either the management or server port, as read from application.properties. If the management.port (for Spring Boot 1) or management.server.port (for Spring Boot 2) and the management.ssl.key-store (for Spring Boot 1) or management.server.ssl.key-store (for Spring Boot 2) properties are set in application.properties, or otherwise if the server.ssl.key-store property is set in application.properties, then the probes are automatically set to use https.
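
For instance, for a Spring Boot 2 application, the following application.properties entries (with hypothetical values) would make the generated probes target port 8081 over https:

```properties
management.server.port=8081
management.server.ssl.key-store=classpath:keystore.p12
```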

The generator works differently when called together with ocWatch. In that case it enables support for Spring Boot Developer Tools which allows for hot reloading of the Spring Boot app. In particular, the following steps are performed:

  • If a secret token is not provided within the Spring Boot application configuration (application.properties or application.yml) with the key spring.devtools.remote.secret, then a custom secret token is created and added to application.properties

  • spring-boot-devtools.jar is added as BOOT-INF/lib/spring-devtools.jar to the spring-boot fat jar.

Since ocWatch modifies the application itself within the build/ directory to allow easy reloading, you must ensure that you do a gradle clean before building an artifact which should be put into production.

Since released versions are typically generated with a CI system which does a clean build anyway, this should be only a theoretical problem.

6.1.3. Thorntail v2

The Thorntail v2 generator detects a Thorntail v2 build and disables the Prometheus Java agent because of a known issue.

Otherwise, this generator is identical to the java-exec generator. It supports the common generator options and the java-exec options.

6.1.4. Vert.x

The Vert.x generator detects an application using Eclipse Vert.x. It generates the metadata to start the application as a fat jar.

Currently, this generator is enabled if Eclipse Vert.x is detected in the project (e.g. via a dependency on io.vertx:vertx-core).

Otherwise, this generator is identical to the java-exec generator. It supports the common generator options and the java-exec options.

The generator automatically:

  • Enables metrics and JMX publishing of the metrics when io.vertx:vertx-dropwizard-metrics is in the project’s classpath / dependencies.

  • Enables clustering when a Vert.x cluster manager is available in the project’s classpath / dependencies. This is done by appending -cluster to the command line.

  • Forces the IPv4 stack when vertx-infinispan is used.

  • Disables the async DNS resolver to fall back to the regular JVM DNS resolver.

You can pass application parameters by setting the JAVA_ARGS environment variable. You can pass system properties either using the same variable or using JAVA_OPTIONS. For instance, create src/main/jkube/deployment.yml with the following content to configure JAVA_ARGS:

spec:
 template:
   spec:
     containers:
       - env:
         - name: JAVA_ARGS
           value: "-Dfoo=bar -cluster -instances=2"

6.1.5. Web Applications

The webapp generator tries to detect WAR builds and selects a base servlet container image based on the configuration found in the build.gradle:

  • A Tomcat base image is selected by default.

  • A Jetty base image is selected when one of the files WEB-INF/jetty-web.xml or WEB-INF/jetty-logging.properties is found.

  • A Wildfly base image is chosen when a Wildfly specific deployment descriptor like jboss-web.xml is found.

The base images chosen are:

Table 43. Webapp Base Images
Docker Build S2I Build

Tomcat

quay.io/jkube/jkube-tomcat

quay.io/jkube/jkube-tomcat

Jetty

quay.io/jkube/jkube-jetty9

quay.io/jkube/jkube-jetty9

Wildfly

jboss/wildfly

quay.io/wildfly/wildfly-centos7

In addition to the common generator options this generator can be configured with the following options:

Table 44. Webapp configuration options
Element Description Property

server

Fix server to use in the base image. Can be either tomcat, jetty or wildfly.

jkube.generator.webapp.server

targetDir

Where to put the war file into the target image. By default, it’s selected by the base image chosen but can be overwritten with this option.

Defaults to /deployments.

jkube.generator.webapp.targetDir

user

User and/or group under which the files should be added. The syntax of this option is described in Assembly Configuration.

jkube.generator.webapp.user

path

Context path with which the application can be reached by default.

Defaults to / (root context).

jkube.generator.webapp.path

cmd

Command to use to start the container. By default, the base images startup command is used.

jkube.generator.webapp.cmd

ports

Comma separated list of ports to expose in the image and which eventually are translated later to Kubernetes services. The ports depend on the base image and are selected automatically. But they can be overridden here.

jkube.generator.webapp.ports

env

Environment variables to be set in the image builder environment. Should be set in the format ENV_NAME=value. You can inject multiple env variables by adding a new line for each variable.

This may be required for the WildFly webapp S2I build to compose a WildFly server with Galleon layers. See https://docs.wildfly.org/21/Galleon_Guide.html#wildfly_foundational_galleon_layers and https://github.com/wildfly/wildfly-s2i#environment-variables-to-be-used-at-s2i-build-time.

jkube.generator.webapp.env

JakartaEE and retrocompatibility with JavaEE in Tomcat

From Tomcat 10 onwards, only JakartaEE compliant projects are supported. However, legacy JavaEE projects can automatically be migrated by deploying the war into ${CATALINA_HOME}/webapps-javaee. By default, the webapp generator is based on a Tomcat 10+ image and will copy the war file to ${CATALINA_HOME}/webapps-javaee.

If the project is already JakartaEE compliant, it is recommended to set the webapp directory to ${CATALINA_HOME}/webapps. This can be done by setting the property jkube.generator.webapp.env to TOMCAT_WEBAPPS_DIR=webapps.
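
For example, in gradle.properties:

```properties
jkube.generator.webapp.env=TOMCAT_WEBAPPS_DIR=webapps
```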

To keep using Tomcat 9, set the properties:

  • jkube.generator.webapp.from to quay.io/jkube/jkube-tomcat9:0.0.16

  • jkube.generator.webapp.cmd to /usr/local/s2i/run

  • jkube.generator.webapp.supportsS2iBuild to true
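
Put together in gradle.properties, the settings above would look like:

```properties
jkube.generator.webapp.from=quay.io/jkube/jkube-tomcat9:0.0.16
jkube.generator.webapp.cmd=/usr/local/s2i/run
jkube.generator.webapp.supportsS2iBuild=true
```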

6.1.6. Quarkus

The Quarkus generator detects Quarkus based projects by looking at the project’s build.gradle.

The base images chosen are:

Table 45. Quarkus Base Images
Docker Build S2I Build

Native

registry.access.redhat.com/ubi9/ubi-minimal:9.3

quay.io/quarkus/ubi-quarkus-native-binary-s2i

Normal Build

quay.io/jkube/jkube-java

quay.io/jkube/jkube-java

6.1.7. Open Liberty

The Open Liberty generator runs when the Open Liberty plugin is enabled in the Gradle build.

This can be done in two ways, as specified in the OpenLiberty Gradle Plugin docs:

  • Within apply plugin: section as liberty

  • Within plugins section as io.openliberty.tools.gradle.Liberty
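
For example, using the plugins block (the Liberty plugin version shown is hypothetical):

```groovy
plugins {
    id 'io.openliberty.tools.gradle.Liberty' version '3.8.2'   // hypothetical version
    id 'org.eclipse.jkube.openshift' version '1.16.2'
}
```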

The generator is similar to the java-exec generator. It supports the common generator options and the java-exec options.

For Open Liberty, the default value of webPort is 9080.

6.1.8. Micronaut Generator

The Micronaut generator (named micronaut) detects a Micronaut project by analyzing the plugin dependencies, searching for io.micronaut.application:io.micronaut.application.gradle.plugin.

This generator is based on the Java Application Generator and inherits all of its configuration values.

6.1.9. Helidon

The Helidon generator detects Helidon based projects by looking at the project’s build.gradle.

The base images chosen are the following, however, these can be overridden using jkube.generator.from property:

Table 46. Helidon Base Images
Docker Build S2I Build

Native

registry.access.redhat.com/ubi9/ubi-minimal:9.3

registry.access.redhat.com/ubi9/ubi-minimal:9.3

Normal Build

quay.io/jkube/jkube-java

quay.io/jkube/jkube-java

6.2. Generator API

It’s possible to extend Eclipse JKube’s Generator API to define your own custom Generators for your use case. Please refer to the Generator Interface; you can create new generators by implementing this interface. Please check out the Custom Foo generator quickstart for a detailed example.

7. Enrichers

Enriching is the complementary concept to Generators. Whereas Generators are used to create and customize Docker images, Enrichers are used to create and customize OpenShift resource objects.

There are a lot of similarities to Generators:

  • Each Enricher has a unique name.

  • Enrichers are looked up automatically from the plugin dependencies and there is a set of default enrichers delivered with this plugin.

  • Enrichers are configured the same way as generators.

The Generator example is a good blueprint; simply replace generator with enricher. The configuration is structurally identical:

Table 47. Enricher configuration
Element Description

includes

Contains one or more include elements with enricher names which should be included. If given, only this list of enrichers is included, in this order. The enrichers from every active profile are included too; however, the enrichers listed here are moved to the front of the list so that they are called first. Use the profile raw if you want to explicitly set the complete list of enrichers.

excludes

Holds one or more exclude elements with enricher names to exclude. This means all the detected enrichers are used except the ones mentioned in this section.

config

Configuration for all enrichers. Each enricher supports a specific set of configuration values as described in its documentation. The subelements of this section are enricher names. E.g. for the enricher jkube-service, the sub-element is called jkube-service. This element then holds the specific enricher configuration, like name for the service name. Configuration coming from profiles is merged into this config, but does not override the configuration specified here.

This plugin comes with a set of default enrichers.

7.1. Default Enrichers

openshift-gradle-plugin comes with a set of enrichers which are enabled by default. There are two categories of default enrichers:

  • Generic Enrichers are used to add default resource objects when they are missing or to add common metadata extracted from the given build information.

  • Specific Enrichers are enrichers which are focused on a certain tech stack that they detect.

Table 48. Default Enrichers Overview
Enricher Description

jkube-configmap-file

Add ConfigMap elements defined as Groovy DSL or as annotation.

jkube-controller

Create a default controller (ReplicationController, ReplicaSet or Deployment; see the Kubernetes documentation) if missing.

jkube-container-env-java-options

Merges JAVA_OPTIONS environment variable defined in Build configuration (image) environment (env) with Container JAVA_OPTIONS environment variable added by other enrichers, Groovy DSL configuration or fragment.

jkube-debug

Enables debug mode via a property or Groovy DSL configuration

jkube-dependency

Examine build dependencies for kubernetes.yml/openshift.yml and add the objects found therein.

jkube-git

Check local .git directory and add build information as annotations.

jkube-image

Add the image name into a PodSpec of replication controller, replication sets and deployments, if missing.

jkube-ingress

Create a default Ingress if missing or configured from Groovy DSL configuration

jkube-imagepullpolicy

Overrides ImagePullPolicy in controller resources provided jkube.imagePullPolicy property is set.

jkube-metadata

Add labels/annotations to generated Kubernetes resources

jkube-namespace

Set the Namespace of the generated and processed Kubernetes resources metadata and optionally create a new Namespace

jkube-name

Add a default name to every object which misses a name.

jkube-openshift-autotls

Enriches declarations with auto-TLS annotations, required secrets reference, mounted volumes and PEM to keystore converter init container.

jkube-openshift-deploymentconfig

Enricher that converts an existing Deployment object to a DeploymentConfig.

jkube-openshift-imageChangeTrigger

Enricher that adds an ImageChange trigger to DeploymentConfigs.

jkube-openshift-project

Converts a Kubernetes Namespace resource to OpenShift Project.

jkube-openshift-route

Adds OpenShift Route for existing Service

jkube-persistentvolumeclaim-storageclass

Add name of StorageClass required by PersistentVolumeClaim either in metadata or in spec.

jkube-pod-annotation

Copy over annotations from a Deployment to a Pod

jkube-portname

Add a default portname for commonly known service.

jkube-project-label

Add gradle coordinates as labels to all objects.

jkube-replicas

Override number of replicas for any controller processed by JKube.

jkube-revision-history

Add a revision history limit (see the Kubernetes documentation) as a deployment spec property to the Kubernetes/OpenShift resources.

jkube-secret-file

Add Secret elements defined as annotation.

jkube-security-hardening

Enforces best practice and recommended security rules for Kubernetes and OpenShift resources.

jkube-service

Create a default service if missing and extract ports from the Docker image configuration.

jkube-serviceaccount

Add a ServiceAccount defined as Groovy DSL or mentioned in resource fragment.

jkube-triggers-annotation

Add ImageStreamTag change triggers on OpenShift resources such as StatefulSets, ReplicaSets and DaemonSets using the image.openshift.io/triggers annotation.

jkube-volume-permission

Fixes the permission of persistent volume mount with the help of an init container.

jkube-well-known-labels

Add Kubernetes Recommended Well Known labels.

7.1.1. Generic Enrichers

Default generic enrichers are used for adding missing resources or adding metadata to given resource objects. The following default enrichers are available out of the box.

jkube-configmap-file

This enricher adds ConfigMap defined as resources in plugin configuration and/or resolves file content from an annotation.

As Groovy you can define:

build.gradle
openshift {
  resources {
    configMap {
      name = 'myconfigmap'
      entries = [{
        name = 'A'
        value = 'B'
      }]
    }
  }
}

This creates a ConfigMap with key A and value B in its data.

You can also use the file tag to refer to the content of a file.

openshift {
  resources {
    configMap {
      name = 'configmap-test'
      entries = [{
        file = 'src/test/resources/test-application.properties'
      }]
    }
  }
}

This creates a ConfigMap with key test-application.properties and value the content of the src/test/resources/test-application.properties file. If you also set the name tag, it is used as the key instead of the filename.

ConfigMap Groovy DSL Configuration

The following options are supported when providing configMap in Groovy DSL configuration:

Table 49. Groovy DSL configmap configuration configMap
Element Description

entries

data for ConfigMap

name

Name of the ConfigMap

ConfigMap Entry Groovy DSL Configuration

entries is a list of entry configuration objects. The following options are supported when providing an entry in Groovy DSL configuration:

Table 50. Groovy DSL configmap entry configuration entry
Element Description

value

Entry value

file

path to a file or directory. If it’s a single file then file contents would be read as value. If it’s a directory then each file’s content is stored as value with file name as key.

name

Entry name

If you are defining a custom ConfigMap file, you can use an annotation to define a file name as key and its content as the value:

metadata:
  name: ${name}
  annotations:
    jkube.eclipse.org/cm/application.properties: src/test/resources/test-application.properties

This creates a ConfigMap entry with key application.properties (the part after cm) and the content of the src/test/resources/test-application.properties file as the value.

You can specify a directory instead of a file:

metadata:
  name: ${name}
  annotations:
    jkube.eclipse.org/cm/application.properties: src/test/resources/test-dir

This creates a ConfigMap named application.properties (the part after cm) with one entry per file under the test-dir directory, using the file name as key and its content as the value; subdirectories are ignored.

jkube-controller

This enricher is used to ensure that a controller is present. This can be either directly configured with fragments or with the Groovy DSL configuration. An explicit configuration always takes precedence over auto detection. See the Kubernetes documentation for more information on types of controllers.

The following configuration parameters can be used to influence the behaviour of this enricher:

Table 51. Default controller enricher
Element Description Property

name

Name of the Controller. Kubernetes Controller names must start with a letter. If the project name starts with a digit, s will be prefixed.

Defaults to project name.

jkube.enricher.jkube-controller.name

pullPolicy

Deprecated: use jkube.imagePullPolicy instead.

Image pull policy to use for the container. One of: IfNotPresent, Always.

Defaults to IfNotPresent.

jkube.enricher.jkube-controller.pullPolicy

type

Type of Controller to create. One of: ReplicationController, ReplicaSet, Deployment, DeploymentConfig, StatefulSet, DaemonSet, Job, CronJob.

Defaults to Deployment.

jkube.enricher.jkube-controller.type

replicaCount

Number of replicas for the container.

Defaults to 1.

jkube.enricher.jkube-controller.replicaCount

schedule

Schedule for CronJob written in Cron syntax.

jkube.enricher.jkube-controller.schedule

Image pull policy to use for the container. One of: IfNotPresent, Always.

jkube.imagePullPolicy

jkube-container-env-java-options

Merges JAVA_OPTIONS environment variable defined in Build configuration (image) environment (env) with Container JAVA_OPTIONS environment variable added by other enrichers, Groovy DSL configuration or fragment.

Option Description Property

disable

Disables the enricher; any JAVA_OPTIONS environment variable defined by an enricher, Groovy DSL configuration or YAML fragment will override the one defined by the generator or Image Build configuration.

Defaults to false.

jkube.enricher.jkube-container-env-java-options.disable
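
For example, to disable the merging behaviour you can set the property in gradle.properties:

gradle.properties
jkube.enricher.jkube-container-env-java-options.disable = true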

jkube-debug

This enricher enables debug mode via a property jkube.debug.enabled or via enabling debug mode in enricher configuration.

You can either set this property in gradle.properties file:

gradle.properties
jkube.debug.enabled = true

Or provide Groovy DSL configuration for enricher

build.gradle
openshift {
    enricher {
        config {
            'jkube-debug' {
                enabled = true
            }
        }
    }
}

This would do the following things:

  • Add environment variable JAVA_ENABLE_DEBUG with value set to true in your application container

  • Add a container port named debug to your existing list of container ports with value set via JAVA_DEBUG_PORT environment variable. If not present, it defaults to 5005.

jkube-dependency

This enricher is used for embedding OpenShift configuration manifests (YAML) into a single package. It looks for the following files in compile-scope dependencies and adds the OpenShift resources inside them to the final generated OpenShift manifests:

  • META-INF/jkube/kubernetes.yml

  • META-INF/jkube/k8s-template.yml

  • META-INF/jkube/openshift.yml (in case of OpenShift)

Table 52. Configuration options
Option Description Property

includeTransitive

Whether to look for kubernetes manifest files in transitive dependencies.

Defaults to true.

jkube.enricher.jkube-dependency.includeTransitive

includePlugin

Whether to look on the current plugin classpath too.

Defaults to true.

jkube.enricher.jkube-dependency.includePlugin
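
For example, to restrict the lookup to direct dependencies only, you can set the following in gradle.properties:

gradle.properties
jkube.enricher.jkube-dependency.includeTransitive = false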

jkube-git

Enricher that adds info from .git directory as annotations. These are explained in the table below:

Table 53. Annotations added via Git enricher

Annotation

Description

jkube.eclipse.org/git-branch

Current Git Branch

jkube.eclipse.org/git-commit

Latest commit of current branch

jkube.eclipse.org/git-url

URL of your configured git remote

jkube.io/git-branch

Deprecated: Use jkube.eclipse.org/ annotation prefix.

Current Git Branch

jkube.io/git-commit

Deprecated: Use jkube.eclipse.org/ annotation prefix.

Latest commit of current branch

jkube.io/git-url

Deprecated: Use jkube.eclipse.org/ annotation prefix.

URL of your configured git remote

app.openshift.io/vcs-ref

Current Git Branch

app.openshift.io/vcs-uri

URL of your configured git remote

Table 54. Supported Configuration options
Option Description Property

gitRemote

Configures the git remote name, whose URL you want to annotate as 'git-url'.

Defaults to origin.

jkube.remoteName
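
For example, to annotate the URL of a git remote named upstream instead of origin (the remote name here is illustrative):

gradle.properties
jkube.remoteName = upstream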

jkube-image

This enricher merges in container image related fields into specified controller (e.g Deployment, ReplicaSet, ReplicationController etc.) Pod specification.

  • The full image name is set as image.

  • An image alias is set as name. If no alias is provided, an opinionated name derived from the image user and the project name is used.

  • The pull policy imagePullPolicy is set according to the given configuration. If no configuration is set, the default is IfNotPresent for release versions, and Always for snapshot versions.

  • Environment variables as configured via Groovy DSL configuration.

Any already configured container in the pod spec is updated if the property is not set.

Table 55. Configuration options
Option Description Property

pullPolicy

What pull policy to use when fetching images

jkube.enricher.jkube-image.pullPolicy

jkube-ingress

Enricher responsible for creating an Ingress, either using opinionated defaults or as per the provided Groovy DSL configuration. This enricher is activated when jkube.createExternalUrls is set to true. JKube generates an Ingress only for Services that have either the expose=true or exposeUrl=true label set.

For more information, check out Ingress Generation section.

Table 56. Ingress enricher
Element Description Property

host

Host is the fully qualified domain name of a network host.

jkube.enricher.jkube-ingress.host

targetApiVersion

Whether to generate extensions/v1beta1 Ingress or networking.k8s.io/v1 Ingress.

Defaults to networking.k8s.io/v1.

jkube.enricher.jkube-ingress.targetApiVersion
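
Putting this together, a minimal gradle.properties sketch that activates Ingress generation and sets a host (the domain is illustrative):

gradle.properties
jkube.createExternalUrls = true
jkube.enricher.jkube-ingress.host = app.example.com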

jkube-imagepullpolicy

This enricher fixes ImagePullPolicy for Kubernetes/Openshift resources whenever a -Djkube.imagePullPolicy parameter is provided.
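
For example, passing the property on the command line:

gradle ocResource -Djkube.imagePullPolicy=Always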

jkube-metadata

This enricher is responsible for adding labels and annotations to your resources. It reads labels and annotations fields provided in resources and adds respective labels/annotations to Kubernetes resources.

You can also configure whether you want to add these labels/annotations to some specific resource or all resources.

You can see an example of its usage in the ocResource Labels And Annotations section.

jkube-namespace

This enricher adds a Namespace/Project resource to the Kubernetes Resources list in case the namespace configuration (jkube.enricher.jkube-namespace.namespace) is provided.

In addition, this enricher sets the namespace (.metadata.namespace) of the JKube generated and processed Kubernetes resources in case they don’t already have one configured (see the force configuration).

The following configuration parameters can be used to influence the behaviour of this enricher:

Table 57. Default namespace enricher
Element Description Property

namespace

Namespace (as a string) which we want to create. A new Namespace object will be created and added to the list of Kubernetes resources generated during the enrichment phase.

jkube.enricher.jkube-namespace.namespace

type

Whether we want to generate a Namespace or an OpenShift specific Project resource. One of: Namespace, Project.

Defaults to Namespace.

jkube.enricher.jkube-namespace.type

force

Whether the .metadata.namespace field should be forced even if the resource already has one configured.

Defaults to false.

jkube.enricher.jkube-namespace.force

This enricher also sets the generated Namespace in the .metadata.namespace field of Kubernetes resources when configured via Groovy DSL. Here is an example:

openshift {
  resources {
    namespace = 'mynamespace'
  }
}
jkube-name

Enricher for adding a name to the metadata of the various objects we create.

Table 58. Supported Configuration options
Option Description Property

name

Configures the .metadata.name of all generated Kubernetes manifests.

jkube.enricher.jkube-name.name
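
For example, to set a common name on all generated manifests (the name is illustrative):

gradle.properties
jkube.enricher.jkube-name.name = my-app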

jkube-openshift-autotls

Enricher which adds appropriate annotations and volumes to enable OpenShift’s automatic Service Serving Certificate Secrets. This enricher adds an init container to convert the service serving certificates from PEM (the format that OpenShift generates them in) to a JKS-format Java keystore ready for consumption in Java services.

This enricher is disabled by default. In order to use it, you must configure the openshift-gradle-plugin to use this enricher:

openshift {
    enricher {
        includes = ['jkube-openshift-autotls']
        config {
            'jkube-openshift-autotls' {
                // ...
            }
        }
    }
}

The auto-TLS enricher supports the following configuration options:

Element Description Property

tlsSecretName

The name of the secret to be used to store the generated service serving certs. Defaults to <name>-tls.

jkube.enricher.jkube-openshift-autotls.tlsSecretName

tlsSecretVolumeMountPoint

Where the service serving secret should be mounted to in the pod.

Defaults to /var/run/secrets/jkube.io/tls-pem.

jkube.enricher.jkube-openshift-autotls.tlsSecretVolumeMountPoint

tlsSecretVolumeName

The name of the secret volume.

Defaults to tls-pem.

jkube.enricher.jkube-openshift-autotls.tlsSecretVolumeName

jksVolumeMountPoint

Where the generated keystore volume should be mounted to in the pod.

Defaults to /var/run/secrets/jkube.io/tls-jks.

jkube.enricher.jkube-openshift-autotls.jksVolumeMountPoint

jksVolumeName

The name of the keystore volume.

Defaults to tls-jks.

jkube.enricher.jkube-openshift-autotls.jksVolumeName

pemToJKSInitContainerImage

The name of the image used as an init container to convert PEM certificate/key to Java keystore.

Defaults to jimmidyson/pemtokeystore:v0.1.0.

jkube.enricher.jkube-openshift-autotls.pemToJKSInitContainerImage

pemToJKSInitContainerName

The name of the init container to convert PEM certificate/key to Java keystore.

Defaults to tls-jks-converter.

jkube.enricher.jkube-openshift-autotls.pemToJKSInitContainerName

keystoreFileName

The name of the generated keystore file.

Defaults to keystore.jks.

jkube.enricher.jkube-openshift-autotls.keystoreFileName

keystorePassword

The password to use for the generated keystore.

Defaults to changeit.

jkube.enricher.jkube-openshift-autotls.keystorePassword

keystoreCertAlias

The alias in the keystore used for the imported service serving certificate.

Defaults to server.

jkube.enricher.jkube-openshift-autotls.keystoreCertAlias

jkube-openshift-deploymentconfig

This enricher converts a Kubernetes Deployment object (extensions/v1beta1 or apps/v1) to the OpenShift equivalent DeploymentConfig.

It’s applicable only for OpenShift.

Note that this enricher won’t be enabled if you’ve set jkube.build.switchToDeployment to true or you’ve configured the DefaultControllerEnricher to generate a controller of type DeploymentConfig.

Table 59. Supported configuration options for this enricher
Property Description

jkube.openshift.deployTimeoutSeconds

The OpenShift deploy timeout in seconds.

Defaults to 3600.

jkube.build.switchToDeployment

Disable conversion of Deployment to DeploymentConfig.

Defaults to false

jkube-openshift-imageChangeTrigger

This enricher is responsible for adding an ImageChange trigger to DeploymentConfigs based on their containers.

It is only applicable in case of OpenShift.

Table 60. Supported configuration options for this enricher
Property Description

jkube.openshift.enableAutomaticTrigger

Enable automatic deployment in generated ImageChange trigger.

Defaults to true.

jkube.openshift.imageChangeTriggers

Enable generation of ImageChange triggers to DeploymentConfigs.

Defaults to true.

jkube.openshift.trimImageInContainerSpec

Set the container image reference to ""; this is done to work around behavior in OpenShift 3.7 where subsequent rollouts lead to ImagePullErr.

Defaults to false.

jkube.openshift.enrichAllWithImageChangeTrigger

Add ImageChange Triggers with respect to all containers specified inside DeploymentConfig.

Defaults to false.

jkube-openshift-project

Enricher that converts a Kubernetes Namespace resource into the equivalent OpenShift Project resource.

This is only applicable in case of OpenShift.

jkube-openshift-route

This enricher adds an OpenShift Route for an existing Service.

This is only applicable to OpenShift.

Table 61. Supported configuration options for this enricher
Element Description Property

generateRoute

Generate Route for corresponding Service

Defaults to true

jkube.enricher.jkube-openshift-route.generateRoute

tlsTermination

Add TLS termination of the route

jkube.enricher.jkube-openshift-route.tlsTermination

tlsInsecureEdgeTerminationPolicy

Add Edge TLS termination of the route

jkube.enricher.jkube-openshift-route.tlsInsecureEdgeTerminationPolicy

Generate Route for the corresponding Service. Note that this flag has lower precedence than the generateRoute enricher configuration option; when both are provided, only generateRoute is considered.

jkube.openshift.generateRoute
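
A minimal Groovy DSL sketch that enables edge TLS termination with a redirect policy on the generated Route (the values shown are illustrative):

build.gradle
openshift {
  enricher {
    config {
      'jkube-openshift-route' {
        tlsTermination = 'edge'
        tlsInsecureEdgeTerminationPolicy = 'Redirect'
      }
    }
  }
}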

jkube-pod-annotation

This enricher copies the annotations from a Controller (Deployment/ReplicaSet/StatefulSet) metadata to the annotations of container Pod template spec’s metadata.

jkube-portname

This enricher uses a given set of well known ports:

Table 62. Default Port Mappings

  Port Number  Name
  8080         http
  8443         https
  8778         jolokia
  9779         prometheus

If not found, it creates container ports with names of IANA registered services.

jkube-project-label

Enricher that adds standard labels and selectors to generated resources (e.g. app, group, provider, version).

The jkube-project-label enricher supports the following configuration options:

Option Description Property

useProjectLabel

Enable this flag to turn on the generation of the old project label in Kubernetes resources. The project label has been replaced by the app label in newer versions of the plugin.

Defaults to false.

jkube.enricher.jkube-project-label.useProjectLabel

app

Makes it possible to define a custom app label used in the generated resource files used for deployment.

Defaults to the Gradle Project name property.

jkube.enricher.jkube-project-label.app

provider

Makes it possible to define a custom provider label used in the generated resource files used for deployment.

Defaults to jkube.

jkube.enricher.jkube-project-label.provider

group

Makes it possible to define a custom group label used in the generated resource files used for deployment.

Defaults to the Gradle Project group property.

jkube.enricher.jkube-project-label.group

version

Makes it possible to define a custom version label used in the generated resource files used for deployment.

Defaults to the Gradle Project version property.

jkube.enricher.jkube-project-label.version

The project labels which are already specified in the input fragments are not overridden by the enricher.

jkube-persistentvolumeclaim-storageclass

Enricher which adds the name of the StorageClass required by a PersistentVolumeClaim, either in metadata or in spec.

Table 63. Supported properties
Option Description Property

defaultStorageClass

PersistentVolume storage class.

jkube.enricher.jkube-volume-permission.defaultStorageClass

useStorageClassAnnotation

If enabled, storage class would be added to PersistentVolumeClaim metadata as volume.beta.kubernetes.io/storage-class=<storageClassName> annotation rather than .spec.storageClassName

Defaults to false

jkube.enricher.jkube-volume-permission.useStorageClassAnnotation
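
Following the enricher configuration pattern described earlier, a minimal Groovy DSL sketch (the storage class name is illustrative):

build.gradle
openshift {
  enricher {
    config {
      'jkube-persistentvolumeclaim-storageclass' {
        defaultStorageClass = 'standard'
      }
    }
  }
}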

jkube-replicas

This enricher overrides the number of replicas for every controller (DaemonSet, Deployment, DeploymentConfig, Job, CronJob, ReplicationController, ReplicaSet, StatefulSet) generated or processed by JKube (including those from dependencies).

In order to use this enricher you need to configure the jkube.replicas property:

gradle -Pjkube.replicas=42 ocResource

or set it in gradle.properties:

gradle.properties
jkube.replicas = 42

You can use this Enricher at runtime to temporarily force the number of replicas to a given value.

jkube-revision-history

This enricher adds the spec.revisionHistoryLimit property to the deployment spec of Kubernetes/OpenShift resources. A Deployment’s revision history is stored in its ReplicaSets; this limit specifies the number of old ReplicaSets to retain in order to allow rollback. For more information, read the Kubernetes documentation.

The following configuration parameters can be used to influence the behaviour of this enricher:

Table 64. Default revision history enricher
Element Description Property

limit

Number of revision histories to retain.

Defaults to 2.

jkube.enricher.jkube-revision-history.limit

Just as with any other enricher, you can specify the required properties within the enricher’s configuration:

openshift {
  enricher {
    config {
      'jkube-revision-history' {
         limit = 8
      }
    }
  }
}

This information will be added as a spec property in the generated manifest:

# ...
kind: Deployment
spec:
  revisionHistoryLimit: 8
# ...
jkube-secret-file

This enricher adds Secret defined as file content from an annotation.

If you are defining a custom Secret file, you can use an annotation to define a file name as key and its content as the value:

metadata:
  name: ${name}
  annotations:
    jkube.eclipse.org/secret/application.properties: src/test/resources/test-application.properties

This creates a Secret with the key application.properties (the part after secret) and the base64-encoded content of the src/test/resources/test-application.properties file as the value.

jkube-security-hardening

This enricher enforces security best practices and recommendations for Kubernetes objects such as Deployments, ReplicaSets, Jobs, CronJobs, and so on.

The enricher is not included in the default profile. However, you can easily enable it by leveraging the security-hardening profile.

These are some of the rules enforced by this enricher:

  • Disables the auto-mounting of the service account token.

  • Prevents containers from running in privileged mode.

  • Ensures containers do not allow privilege escalation.

  • Prevents containers from running as the root user.

  • Configures the container to run as a user with a high UID to avoid host conflict.

  • Ensures the container’s seccomp profile is set to RuntimeDefault.
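
Profiles are typically selected via the jkube.profile property; for example (assuming property-based profile selection):

gradle -Pjkube.profile=security-hardening ocResource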

jkube-service

This enricher is used to ensure that a service is present. It can be either directly configured with fragments or with the Groovy DSL configuration, but it can also be automatically inferred by looking at the ports exposed by an image configuration. An explicit configuration always takes precedence over auto detection. For enriching an existing service, this enricher only works on a configured service that matches the configured (or inferred) service name.

The following configuration parameters can be used to influence the behaviour of this enricher:

Table 65. Default service enricher
Element Description Property

name

Service name to enrich by default. If not given here or configured elsewhere, the artifactId/project name is used.

jkube.enricher.jkube-service.name

headless

Whether a headless service without a port should be configured. A headless service has the ClusterIP set to None and will be only used if no ports are exposed by the image configuration or by the configuration port.

Defaults to false.

jkube.enricher.jkube-service.headless

expose

If set to true, a label expose with value true is added, which can be picked up by an expose controller to expose the service to the outside by various means. See the documentation of expose-controller for more details.

Defaults to false.

jkube.enricher.jkube-service.expose

type

Kubernetes / OpenShift service type to set like LoadBalancer, NodePort or ClusterIP.

jkube.enricher.jkube-service.type

port

The service port to use. By default the same port as the ports exposed in the image configuration is used, but can be changed with this parameter. See below for a detailed description of the format which can be put into this variable.

jkube.enricher.jkube-service.port

multiPort

Set this to true if you want all ports to be exposed from an image configuration. Otherwise only the first port is used as a service port.

Defaults to false.

jkube.enricher.jkube-service.multiPort

protocol

Default protocol to use for the services. Must be tcp or udp.

Defaults to tcp.

jkube.enricher.jkube-service.protocol

normalizePort

Normalize the port numbering of the service to common and conventional port numbers.

Defaults to false.

jkube.enricher.jkube-service.normalizePort

The following port mapping comes into effect when the normalizePort option is set to true:

  Original Port  Normalized Port
  8080           80
  8081           80
  8181           80
  8180           80
  8443           443
  443            443

You specify the properties just like for any other enricher, within the enricher configuration:

Example
openshift {
  enricher {
    config {
      'jkube-service' {
         name = 'my-service'
         type = 'NodePort'
         multiPort = true
      }
    }
  }
}
Port specification

With the port option you can influence how ports are mapped from the pod to the service. By default, if this option is not given, the service ports are dictated by the ports exposed by the Docker images contained in the pod (remember, each configured image can be part of the pod). However, you can also expose completely different ports than the ones the image metadata declares.

The property port can contain a comma separated list of mappings of the following format:

<servicePort1>:<targetPort1>/<protocol>,<servicePort2>:<targetPort2>/<protocol>,....

where the targetPort and protocol specifications are optional. These ports are overlaid over the ports exposed by the images, in the given order.

This is best explained by some examples.

For example if you have a pod which exposes a Microservice on port 8080 and you want to expose it as a service on port 80 (so that it can be accessed with http://myservice) you can simply use the following enricher configuration:

Example
openshift {
  enricher {
    config {
      'jkube-service' {
         name = 'myservice'
         port = '80:8080' (1)
      }
    }
  }
}
1 80 is the service port, 8080 the port opened in the pod’s images

If your pod exposes its ports (which e.g. all generators do), then you can even omit the 8080 here (i.e. port = 80). In this case the first exposed port is mapped to port 80, and all other exposed ports are omitted.

By default, an automatically generated service only exposes the first port, even when more ports are exposed. When you want to map multiple ports you need to set the config option multiPort to true. In this case you can also provide multiple mappings as a comma separated list in the port specification where each element of the list are the mapping for the first, second, …​ port.

A more complex (and somewhat artificially constructed) specification could be port = '80,9779:9779/udp,443'. Assuming that the image exposes ports 8080 and 8778 (either directly or via generators) and we have switched on multiPort mode, the following service port mappings will be performed for the automatically generated service:

  • Pod port 8080 is mapped to service port 80.

  • Pod port 9779 is mapped to service port 9779 with protocol UDP. Note how this second entry overrides the pod exposed port 8778.

  • Pod port 443 is mapped to service port 443.
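The mappings above correspond to the following enricher configuration (following the same pattern as the earlier example in this section):

```groovy
openshift {
  enricher {
    config {
      'jkube-service' {
         name = 'myservice'
         // required so that more than the first port is mapped
         multiPort = 'true'
         port = '80,9779:9779/udp,443'
      }
    }
  }
}
```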

This example also illustrates the mapping rules:

  • The port specification always overrides the port metadata of the contained Docker images (i.e. the ports exposed).

  • You can always provide a complete mapping with port on your own

  • The ports exposed by the images serve as default values which are used if not specified by this configuration option.

  • You can map ports which are not exposed by the images by specifying them as target ports.

Multiple ports are only mapped when multiPort mode is enabled (which is switched off by default). If multiPort mode is disabled, only the first port from the list of mapped ports calculated as above is taken.

When you set legacyPortMapping to true, ports 8080 to 9090 are automatically mapped to port 80 if not explicitly mapped via port. I.e. when an image exposes port 8080 and legacy mapping is enabled, it is mapped to service port 80, not 8080. There is no good reason to switch this on; in fact, this option may be removed at any time.

This enricher is also used by the resources Groovy DSL configuration to generate Services configured via the Groovy DSL. These are the fields supported in resources which work with this enricher:

Table 66. Fields supported in resources

Element

Description

services

Configuration element for generating Service resource

Service Groovy DSL Configuration

services is a list of service configuration objects. These are the supported options when providing a service in the Groovy DSL configuration:

Table 67. Groovy DSL service configuration
Element Description

name

Service name

port

Port to expose

headless

Whether this is a headless service.

type

Service type

normalizePort

Whether to normalize service port numbering.

ports

Ports to expose

Service port Configuration

ports is a list of port configuration objects. These are the supported options when providing a port in the Groovy DSL configuration:

Table 68. Groovy DSL service port configuration
Element Description

protocol

Protocol to use. Can be either "tcp" or "udp".

port

Container port to expose.

targetPort

Target port to expose.

nodePort

Port to expose on each cluster node (for services of type NodePort).

name

Name of the port
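Putting the fields above together, a service with one port might be declared in the resources block roughly as follows. The exact closure nesting is a sketch derived from the fields listed above, not an authoritative syntax reference; all names and values are illustrative:

```groovy
openshift {
  resources {
    services = [{
      name = 'my-service'
      type = 'NodePort'
      headless = false
      ports = [{
        name = 'http'          // name of the port
        protocol = 'tcp'       // "tcp" or "udp"
        port = 8080            // container port to expose
        targetPort = 8080      // target port
        nodePort = 30080       // node port (NodePort services only)
      }]
    }]
  }
}
```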

jkube-serviceaccount

This enricher is responsible for creating ServiceAccount resources. See ServiceAccount Generation for more details.

The following configuration parameters can be used to influence the behaviour of this enricher:

Table 69. ServiceAccountEnricher configuration options
Element Description Property

skipCreate

Skip creating ServiceAccount objects

Defaults to false.

jkube.enricher.jkube-serviceaccount.skipCreate
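The same option can be set as a project property, e.g. in gradle.properties:

```
jkube.enricher.jkube-serviceaccount.skipCreate = true
```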

jkube-triggers-annotation

OpenShift resources like BuildConfig and DeploymentConfig can be automatically triggered by changes to ImageStreamTags. However, plain Kubernetes resources don’t have a way to support this kind of triggering. You can use the image.openshift.io/triggers annotation in OpenShift to request triggering. Read the OpenShift docs for more details: Triggering updates on ImageStream changes

This enricher adds ImageStreamTag change triggers on Kubernetes resources that support the image.openshift.io/triggers annotation, such as StatefulSets, ReplicaSets and DaemonSets.

The trigger is added to all containers that apply, but can be restricted to a limited set of containers using the following configuration:

openshift {
    enricher {
        config {
            'jkube-triggers-annotation' {
                containers = 'container-name-1,c2'
            }
        }
    }
}
jkube-volume-permission

Enricher which fixes the permissions of persistent volume mounts with the help of an init container.

Table 70. Supported properties
Option Description Property

imageName

Image name for PersistentVolume init container

Defaults to quay.io/quay/busybox.

jkube.enricher.jkube-volume-permission.imageName

permission

PersistentVolume init container access mode

Defaults to 777.

jkube.enricher.jkube-volume-permission.permission

defaultStorageClass

Deprecated: Use PersistentVolumeClaimStorageClassEnricher's defaultStorageClass field

PersistentVolume storage class.

jkube.enricher.jkube-volume-permission.defaultStorageClass

useStorageClassAnnotation

Deprecated: Use PersistentVolumeClaimStorageClassEnricher's defaultStorageClass field

If enabled, storage class would be added to PersistentVolumeClaim metadata as volume.beta.kubernetes.io/storage-class=<storageClassName> annotation rather than .spec.storageClassName

Defaults to false

jkube.enricher.jkube-volume-permission.useStorageClassAnnotation

cpuLimit

Set PersistentVolume initContainer's .resources CPU limit

jkube.enricher.jkube-volume-permission.cpuLimit

memoryLimit

Set PersistentVolume initContainer's .resources memory limit

jkube.enricher.jkube-volume-permission.memoryLimit

cpuRequest

Set PersistentVolume initContainer's .resources CPU request

jkube.enricher.jkube-volume-permission.cpuRequest

memoryRequest

Set PersistentVolume initContainer's .resources memory request

jkube.enricher.jkube-volume-permission.memoryRequest
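For example, to change the init container image and the applied permissions (all values here are illustrative):

```groovy
openshift {
  enricher {
    config {
      'jkube-volume-permission' {
         imageName = 'quay.io/quay/busybox'
         permission = '755'
         cpuLimit = '100m'
         memoryLimit = '64Mi'
      }
    }
  }
}
```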

jkube-well-known-labels

Enricher that adds Well Known Labels recommended by Kubernetes.

The jkube-well-known-labels enricher supports the following configuration options:

Option Description Property

Add Kubernetes Well Known labels to generated resources.

Defaults to true

jkube.kubernetes.well-known-labels

enabled

Enable this flag to turn on addition of Kubernetes Well Known labels.

Defaults to true.

jkube.enricher.jkube-well-known-labels.enabled

name

The name of the application (app.kubernetes.io/name).

Defaults to the Gradle Project name property.

jkube.enricher.jkube-well-known-labels.name

version

The current version of the application (app.kubernetes.io/version).

Defaults to the Gradle Project version property.

jkube.enricher.jkube-well-known-labels.version

component

The component within the architecture (app.kubernetes.io/component).

jkube.enricher.jkube-well-known-labels.component

partOf

The name of a higher level application this one is part of (app.kubernetes.io/part-of).

Defaults to the Gradle Project group property.

jkube.enricher.jkube-well-known-labels.partOf

managedBy

The tool being used to manage the operation of an application (app.kubernetes.io/managed-by).

Defaults to jkube

jkube.enricher.jkube-well-known-labels.managedBy

The Well Known Labels which are already specified in the input fragments are not overridden by the enricher.
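For example, to override the detected label values (the values shown are illustrative):

```groovy
openshift {
  enricher {
    config {
      'jkube-well-known-labels' {
         name = 'my-app'          // app.kubernetes.io/name
         component = 'backend'    // app.kubernetes.io/component
         partOf = 'my-system'     // app.kubernetes.io/part-of
      }
    }
  }
}
```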

7.1.2. Specific Enrichers

Specific enrichers provide resource manifest enhancement for a certain tech stack that they detect.

jkube-healthcheck-openliberty

This enricher adds Kubernetes readiness, liveness and startup probes for OpenLiberty based projects. Note that Kubernetes startup probes are only added in projects using MicroProfile 3.1 and later.

The application should be configured as follows to enable the enricher (i.e. either the microProfile or mpHealth feature should be enabled in the Liberty server configuration file, as pointed out in the OpenLiberty Health Docs):

server.xml
<featureManager>
    <feature>mpHealth-4.1</feature>
</featureManager>
Probe configuration

You can configure the different aspects of the probes.

Table 71. OpenLiberty HealthCheck Enricher probe configuration
Element Description Property

scheme

Scheme to use for connecting to the host.

Defaults to HTTP.

jkube.enricher.jkube-healthcheck-openliberty.scheme

port

Port number to access the container.

Defaults to 9080.

jkube.enricher.jkube-healthcheck-openliberty.port

livenessFailureThreshold

Configures failureThreshold field in .livenessProbe. Minimum consecutive failures for the probes to be considered failed after having succeeded.

Defaults to 3.

jkube.enricher.jkube-healthcheck-openliberty.livenessFailureThreshold

livenessSuccessThreshold

Configures successThreshold field in .livenessProbe. Minimum consecutive successes for the probes to be considered successful after having failed.

Defaults to 1.

jkube.enricher.jkube-healthcheck-openliberty.livenessSuccessThreshold

livenessInitialDelay

Configures initialDelaySeconds field in .livenessProbe. Number of seconds after the container has started before liveness or readiness probes are initiated.

Defaults to 0 seconds.

jkube.enricher.jkube-healthcheck-openliberty.livenessInitialDelay

livenessPeriodSeconds

Configures periodSeconds field in .livenessProbe. How often (in seconds) to perform the liveness probe.

Defaults to 10.

jkube.enricher.jkube-healthcheck-openliberty.livenessPeriodSeconds

livenessPath

Path to access on the application server.

Defaults to /health/live.

jkube.enricher.jkube-healthcheck-openliberty.livenessPath

readinessFailureThreshold

Configures failureThreshold field in .readinessProbe. Minimum consecutive failures for the probes to be considered failed after having succeeded.

Defaults to 3.

jkube.enricher.jkube-healthcheck-openliberty.readinessFailureThreshold

readinessSuccessThreshold

Configures successThreshold field in .readinessProbe. Minimum consecutive successes for the probes to be considered successful after having failed.

Defaults to 1.

jkube.enricher.jkube-healthcheck-openliberty.readinessSuccessThreshold

readinessInitialDelay

Configures initialDelaySeconds field in .readinessProbe. Number of seconds after the container has started before liveness or readiness probes are initiated.

Defaults to 0 seconds.

jkube.enricher.jkube-healthcheck-openliberty.readinessInitialDelay

readinessPeriodSeconds

Configures periodSeconds field in .readinessProbe. How often (in seconds) to perform the readiness probe.

Defaults to 10.

jkube.enricher.jkube-healthcheck-openliberty.readinessPeriodSeconds

readinessPath

Path to access on the application server.

Defaults to /health/ready.

jkube.enricher.jkube-healthcheck-openliberty.readinessPath

startupFailureThreshold

Configures failureThreshold field in .startupProbe. Minimum consecutive failures for the probes to be considered failed after having succeeded.

Defaults to 3.

jkube.enricher.jkube-healthcheck-openliberty.startupFailureThreshold

startupSuccessThreshold

Configures successThreshold field in .startupProbe. Minimum consecutive successes for the probes to be considered successful after having failed.

Defaults to 1.

jkube.enricher.jkube-healthcheck-openliberty.startupSuccessThreshold

startupInitialDelay

Configures initialDelaySeconds field in .startupProbe. Number of seconds after the container has started before liveness or startup probes are initiated.

Defaults to 0 seconds.

jkube.enricher.jkube-healthcheck-openliberty.startupInitialDelay

startupPeriodSeconds

Configures periodSeconds field in .startupProbe. How often (in seconds) to perform the startup probe.

Defaults to 10.

jkube.enricher.jkube-healthcheck-openliberty.startupPeriodSeconds

startupPath

Path to access on the application server.

Defaults to /health/started.

jkube.enricher.jkube-healthcheck-openliberty.startupPath
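Following the same pattern as the other enrichers, these options can be set in the plugin configuration (the values shown are illustrative):

```groovy
openshift {
  enricher {
    config {
      'jkube-healthcheck-openliberty' {
         port = '9080'
         livenessInitialDelay = '5'
         readinessPath = '/health/ready'
      }
    }
  }
}
```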

jkube-healthcheck-spring-boot

This enricher adds Kubernetes readiness and liveness probes for Spring Boot. It requires the following dependency to be added to your Spring Boot project:

implementation 'org.springframework.boot:spring-boot-starter-actuator'

The enricher will try to discover the settings from the application.properties / application.yaml Spring Boot configuration file.

The port number is read from the management.port option, defaulting to 8080. The scheme will be HTTPS if the server.ssl.key-store option is in use, falling back to HTTP otherwise.
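For example, a Spring Boot configuration like the following (values are illustrative) would make the enricher generate probes using HTTPS on port 8383:

```
management.port = 8383
server.ssl.key-store = classpath:keystore.p12
```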

The enricher will use the following settings by default:

  • readinessProbeInitialDelaySeconds : 10

  • readinessProbePeriodSeconds : <kubernetes-default>

  • livenessProbeInitialDelaySeconds : 180

  • livenessProbePeriodSeconds : <kubernetes-default>

  • timeoutSeconds : <kubernetes-default>

  • failureThreshold: 3

  • successThreshold: 1

These values can be configured for the enricher in the openshift-gradle-plugin configuration as shown below:

openshift {
  enricher {
    config {
      'jkube-healthcheck-spring-boot' {
         timeoutSeconds = '5'
         readinessProbeInitialDelaySeconds = '30'
         failureThreshold = '3'
         successThreshold = '1'
      }
    }
  }
}
jkube-healthcheck-thorntail-v2

This enricher adds Kubernetes readiness and liveness probes for Thorntail v2. It requires the following fraction to be enabled in Thorntail:

implementation 'io.thorntail:microprofile-health:2.7.0.Final'

The enricher will use the following settings by default:

  • port = 8080

  • scheme = HTTP

  • path = /health

  • failureThreshold = 3

  • successThreshold = 1

These values can be configured for the enricher in the openshift-gradle-plugin configuration as shown below:

openshift {
  enricher {
    config {
      'jkube-healthcheck-thorntail' {
         port = '4444'
         scheme = 'HTTPS'
         path = 'health/myapp'
         failureThreshold = '3'
         successThreshold = '1'
      }
    }
  }
}
jkube-healthcheck-quarkus

This enricher adds Kubernetes readiness, liveness and startup probes for Quarkus. It requires the following dependency to be added to your Quarkus project:

implementation 'io.quarkus:quarkus-smallrye-health'

The enricher will try to discover the settings from the application.properties / application.yaml configuration file. JKube uses the following properties to resolve the health check URLs:

  • quarkus.http.root-path: Quarkus Application root path.

  • quarkus.http.non-application-root-path: This property was introduced in recent versions of Quarkus (2.x) for non-application endpoints.

  • quarkus.smallrye-health.root-path: The location of the all-encompassing health endpoint.

  • quarkus.smallrye-health.readiness-path: The location of the readiness endpoint.

  • quarkus.smallrye-health.liveness-path: The location of the liveness endpoint.

  • quarkus.smallrye-health.startup-path: The location of the startup endpoint.

Note: Please note that the behavior of these properties seems to have changed since Quarkus 1.11.x (e.g. leading slashes are now taken into account for health and liveness paths). openshift-gradle-plugin also checks the Quarkus version along with the values of these properties in order to resolve the effective health endpoints.

You can read more about these flags in Quarkus Documentation.
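As an illustration, application.properties entries like the following (values are hypothetical) feed into the endpoint resolution described above; JKube combines them with the detected Quarkus version to compute the final probe paths:

```
quarkus.http.root-path = /api
quarkus.smallrye-health.root-path = /health
quarkus.smallrye-health.liveness-path = /live
```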

The enricher will use the following settings by default:

  • scheme : HTTP

  • port : 8080

  • failureThreshold : 3

  • successThreshold : 1

  • livenessInitialDelay : 10

  • readinessInitialDelay : 5

  • startupInitialDelay : 5

  • livenessPath : q/health/live

  • readinessPath : q/health/ready

  • startupPath : q/health/started

These values can be configured for the enricher in the openshift-gradle-plugin configuration as shown below:

openshift {
  enricher {
    config {
      'jkube-healthcheck-quarkus' {
         livenessInitialDelay = '5'
         failureThreshold = '3'
         successThreshold = '1'
      }
    }
  }
}
jkube-healthcheck-micronaut

This enricher adds Kubernetes readiness and liveness probes for Micronaut based projects.

The application should be configured as follows to enable the enricher:

endpoints:
  health:
    enabled: true

The enricher will try to discover the settings from the application.properties / application.yaml Micronaut configuration file.

Probe configuration

You can configure the different aspects of the probes.

Table 72. Micronaut HealthCheck Enricher probe configuration
Element Description Property

readinessProbeInitialDelaySeconds

Number of seconds after the container has started before the readiness probe is initialized.

jkube.enricher.jkube-healthcheck-micronaut.readinessProbeInitialDelaySeconds

readinessProbePeriodSeconds

How often (in seconds) to perform the readiness probe.

jkube.enricher.jkube-healthcheck-micronaut.readinessProbePeriodSeconds

livenessProbeInitialDelaySeconds

Number of seconds after the container has started before the liveness probe is initialized.

jkube.enricher.jkube-healthcheck-micronaut.livenessProbeInitialDelaySeconds

livenessProbePeriodSeconds

How often (in seconds) to perform the liveness probe.

jkube.enricher.jkube-healthcheck-micronaut.livenessProbePeriodSeconds

failureThreshold

Minimum consecutive failures for the probes to be considered failed after having succeeded.

Defaults to 3.

jkube.enricher.jkube-healthcheck-micronaut.failureThreshold

successThreshold

Minimum consecutive successes for the probes to be considered successful after having failed.

Defaults to 1.

jkube.enricher.jkube-healthcheck-micronaut.successThreshold

timeoutSeconds

Number of seconds after which the probes timeout.

jkube.enricher.jkube-healthcheck-micronaut.timeoutSeconds

scheme

Scheme to use for connecting to the host.

Defaults to HTTP.

jkube.enricher.jkube-healthcheck-micronaut.scheme

port

Port number to access the container.

Defaults to the one provided in the Image configuration.

jkube.enricher.jkube-healthcheck-micronaut.port

path

Path to access on the HTTP server.

Defaults to /health.

jkube.enricher.jkube-healthcheck-micronaut.path
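As with the other health check enrichers, these options can be set in the plugin configuration (values shown are illustrative):

```groovy
openshift {
  enricher {
    config {
      'jkube-healthcheck-micronaut' {
         path = '/health'
         readinessProbeInitialDelaySeconds = '20'
         livenessProbeInitialDelaySeconds = '30'
      }
    }
  }
}
```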

jkube-healthcheck-vertx

This enricher adds Kubernetes readiness and liveness probes for Eclipse Vert.x applications. The readiness probe lets Kubernetes detect when the application is ready, while the liveness probe allows Kubernetes to verify that the application is still alive.

This enricher allows configuring the readiness and liveness probes. The following probe types are supported: http (emit HTTP requests), tcp (open a socket), exec (execute a command).

By default, this enricher uses the same configuration for liveness and readiness probes. But specific configurations can be provided too. The configurations can be overridden using project’s properties.

Using the jkube-healthcheck-vertx enricher

The enricher is automatically executed if your project uses the io.vertx.vertx-plugin or depends on io.vertx:vertx-core. However, by default, no health check is added to your deployment unless configured explicitly.

Minimal configuration

The minimal configuration to add health checks is the following:

openshift {
  enricher {
    config {
      'jkube-healthcheck-vertx' {
        path = "/health"
      }
    }
  }
}

It configures the readiness and liveness health checks using HTTP requests on port 8080 (the default port) and on the path /health. The defaults are:

  • port = 8080 (for HTTP)

  • scheme = HTTP

  • path = none (disabled)

The previous configuration can also be given using project properties:

vertx.health.path = /health
Configuring differently the readiness and liveness health checks

You can provide two different configurations for the readiness and liveness checks:

openshift {
  enricher {
    config {
      'jkube-healthcheck-vertx' {
        readiness {
          path = '/ready'
        }
        liveness {
          path = '/health'
        }
      }
    }
  }
}

You can also use the readiness and liveness chunks in user properties:

vertx.health.readiness.path = /ready
vertx.health.liveness.path = /health

Shared (generic) configuration can be set outside of the specific configuration. For instance, to use the port 8081:

openshift {
  enricher {
    config {
      'jkube-healthcheck-vertx' {
          port = '8081'
          readiness {
              path = '/ready'
          }
          liveness {
              path = '/health'
          }
      }
    }
  }
}

Or:

vertx.health.port = 8081
vertx.health.readiness.path = /ready
vertx.health.liveness.path = /health
Configuration Structure

The configuration is structured as follows

openshift {
  enricher {
    config {
      'jkube-healthcheck-vertx' {
          // Generic configuration, applied to both liveness and readiness
          path = '/both'
          readiness = [
              // Specific configuration for the readiness probe
              'port-name': 'ready'
          ]
          liveness = [
              // Specific configuration for the liveness probe
              'port-name': 'ping'
          ]
      }
    }
  }
}

The same structure is used in project’s properties:

# Generic configuration given as vertx.health.$attribute
vertx.health.path = /both
# Specific liveness configuration given as vertx.health.liveness.$attribute
vertx.health.liveness.port-name = ping
# Specific readiness configuration given as vertx.health.readiness.$attribute
vertx.health.readiness.port-name = ready

Important: The project’s plugin configuration overrides the project’s properties. The overriding rules are: specific configuration > specific properties > generic configuration > generic properties.

Probe configuration

You can configure the different aspects of the probes. These attributes can be configured for both the readiness and liveness probes or be specific to one.

Table 73. Vert.x HealthCheck Enricher probe configuration
Element Description Property

type

The probe type among http (default), tcp and exec.

Defaults to http.

vertx.health.type

jkube.enricher.jkube-healthcheck-vertx.type

initial-delay

Number of seconds after the container has started before probes are initiated.

vertx.health.initial-delay

jkube.enricher.jkube-healthcheck-vertx.initial-delay

period

How often (in seconds) to perform the probe.

vertx.health.period

jkube.enricher.jkube-healthcheck-vertx.period

timeout

Number of seconds after which the probe times out.

vertx.health.timeout

jkube.enricher.jkube-healthcheck-vertx.timeout

success-threshold

Minimum consecutive successes for the probe to be considered successful after having failed.

vertx.health.success-threshold

jkube.enricher.jkube-healthcheck-vertx.success-threshold

failure-threshold

Minimum consecutive failures for the probe to be considered failed after having succeeded.

vertx.health.failure-threshold

jkube.enricher.jkube-healthcheck-vertx.failure-threshold

HTTP specific probe configuration

When using HTTP GET requests to determine readiness or liveness, several aspects can be configured. HTTP probes are used by default. To be explicit, set the type attribute to http.

Table 74. Vert.x HealthCheck Enricher HTTP probe configuration
Element Description Property

scheme

Scheme to use for connecting to the host.

Defaults to HTTP.

vertx.health.scheme

jkube.enricher.jkube-healthcheck-vertx.scheme

path

Path to access on the HTTP server. An empty path disables the check.

vertx.health.path

jkube.enricher.jkube-healthcheck-vertx.path

headers

Custom headers to set in the request. HTTP allows repeated headers. This attribute cannot be configured using project properties. An example is available below.

vertx.health.headers

jkube.enricher.jkube-healthcheck-vertx.headers

port

Port number to access the container. A 0 or negative number disables the check.

Defaults to 8080.

vertx.health.port

jkube.enricher.jkube-healthcheck-vertx.port

port-name

Name of the port to access on the container. If neither the port nor the port-name is set, the check is disabled. If both are set the configuration is considered invalid.

vertx.health.port-name

jkube.enricher.jkube-healthcheck-vertx.port-name

Here is an example of HTTP probe configuration:

openshift {
  enricher {
    config {
      'jkube-healthcheck-vertx' {
         liveness {
           port = '8081'
           path = '/ping'
           scheme = 'HTTPS'
           headers = [
             'X-Custom-Header': 'Awesome'
           ]
         }
         readiness {
             port = '-1'
         }
      }
    }
  }
}
TCP specific probe configuration

You can also configure the probes to just open a socket on a specific port. The type attribute must be set to tcp.

Table 75. Vert.x HealthCheck Enricher TCP probe configuration
Element Description Property

port

Port number to access the container. A 0 or negative number disables the check.

vertx.health.port

jkube.enricher.jkube-healthcheck-vertx.port

port-name

Name of the port to access on the container. If neither the port nor the port-name is set, the check is disabled. If both are set the configuration is considered invalid.

vertx.health.port-name

jkube.enricher.jkube-healthcheck-vertx.port-name

For example:

openshift {
  enricher {
    config {
      'jkube-healthcheck-vertx' {
         liveness {
           type = 'tcp'
           port = '8081'
         }
         readiness {
           // Use HTTP Get probe
           path = '/ping'
           port = '8080'
         }
      }
    }
  }
}
Exec probe configuration

You can also configure the probes to execute a command. If the command succeeds, it returns 0, and Kubernetes considers the pod to be alive and healthy. If the command returns a non-zero value, Kubernetes kills the pod and restarts it. To use a command, you must set the type attribute to exec:

openshift {
  enricher {
    config {
      'jkube-healthcheck-vertx' {
         liveness {
             type = 'exec'
             command = [
               'cmd': ['cat', '/tmp/healthy']
             ]
         }
         readiness {
             // Use HTTP Get probe
             path = '/ping'
             port = '8080'
         }
      }
    }
  }
}

As you can see in the snippet above, the command is passed using the command attribute. This attribute cannot be configured using project properties. An empty command disables the check.

Disabling health checks

You can disable the checks by setting:

  • the port to 0 or to a negative number for http and tcp probes

  • the command to an empty list for exec

In the first case, you can use project’s properties to disable them:

Disables tcp and http probes
vertx.health.port = -1

For http probes, an empty or unset path also disables the probe.

jkube-healthcheck-smallrye

This enricher adds Kubernetes readiness, liveness and startup probes for projects which have the io.smallrye:smallrye-health dependency added for health management. Note that Kubernetes startup probes are only added in projects using MicroProfile 3.1 and later.

Probe configuration

You can configure the different aspects of the probes.

Table 76. SmallRye HealthCheck Enricher probe configuration
Element Description Property

scheme

Scheme to use for connecting to the host.

Defaults to HTTP.

jkube.enricher.jkube-healthcheck-smallrye.scheme

port

Port number to access the container.

Defaults to 9080.

jkube.enricher.jkube-healthcheck-smallrye.port

livenessFailureThreshold

Configures failureThreshold field in .livenessProbe. Minimum consecutive failures for the probes to be considered failed after having succeeded.

Defaults to 3.

jkube.enricher.jkube-healthcheck-smallrye.livenessFailureThreshold

livenessSuccessThreshold

Configures successThreshold field in .livenessProbe. Minimum consecutive successes for the probes to be considered successful after having failed.

Defaults to 1.

jkube.enricher.jkube-healthcheck-smallrye.livenessSuccessThreshold

livenessInitialDelay

Configures initialDelaySeconds field in .livenessProbe. Number of seconds after the container has started before liveness or readiness probes are initiated.

Defaults to 0 seconds.

jkube.enricher.jkube-healthcheck-smallrye.livenessInitialDelay

livenessPeriodSeconds

Configures periodSeconds field in .livenessProbe. How often (in seconds) to perform the liveness probe.

Defaults to 10.

jkube.enricher.jkube-healthcheck-smallrye.livenessPeriodSeconds

livenessPath

Path to access on the application server.

Defaults to /health/live.

jkube.enricher.jkube-healthcheck-smallrye.livenessPath

readinessFailureThreshold

Configures failureThreshold field in .readinessProbe. Minimum consecutive failures for the probes to be considered failed after having succeeded.

Defaults to 3.

jkube.enricher.jkube-healthcheck-smallrye.readinessFailureThreshold

readinessSuccessThreshold

Configures successThreshold field in .readinessProbe. Minimum consecutive successes for the probes to be considered successful after having failed.

Defaults to 1.

jkube.enricher.jkube-healthcheck-smallrye.readinessSuccessThreshold

readinessInitialDelay

Configures initialDelaySeconds field in .readinessProbe. Number of seconds after the container has started before liveness or readiness probes are initiated.

Defaults to 0 seconds.

jkube.enricher.jkube-healthcheck-smallrye.readinessInitialDelay

readinessPeriodSeconds

Configures periodSeconds field in .readinessProbe. How often (in seconds) to perform the readiness probe.

Defaults to 10.

jkube.enricher.jkube-healthcheck-smallrye.readinessPeriodSeconds

readinessPath

Path to access on the application server.

Defaults to /health/ready.

jkube.enricher.jkube-healthcheck-smallrye.readinessPath

startupFailureThreshold

Configures failureThreshold field in .startupProbe. Minimum consecutive failures for the probes to be considered failed after having succeeded.

Defaults to 3.

jkube.enricher.jkube-healthcheck-smallrye.startupFailureThreshold

startupSuccessThreshold

Configures successThreshold field in .startupProbe. Minimum consecutive successes for the probes to be considered successful after having failed.

Defaults to 1.

jkube.enricher.jkube-healthcheck-smallrye.startupSuccessThreshold

startupInitialDelay

Configures initialDelaySeconds field in .startupProbe. Number of seconds after the container has started before liveness or startup probes are initiated.

Defaults to 0 seconds.

jkube.enricher.jkube-healthcheck-smallrye.startupInitialDelay

startupPeriodSeconds

Configures periodSeconds field in .startupProbe. How often (in seconds) to perform the startup probe.

Defaults to 10.

jkube.enricher.jkube-healthcheck-smallrye.startupPeriodSeconds

startupPath

Path to access on the application server.

Defaults to /health/started.

jkube.enricher.jkube-healthcheck-smallrye.startupPath
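Following the same pattern as the other health check enrichers, these options can be set in the plugin configuration (values shown are illustrative):

```groovy
openshift {
  enricher {
    config {
      'jkube-healthcheck-smallrye' {
         port = '8080'
         livenessPath = '/health/live'
         readinessInitialDelay = '5'
      }
    }
  }
}
```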

jkube-healthcheck-helidon

This enricher adds Kubernetes readiness, liveness and startup probes for Helidon based projects. Note that Kubernetes startup probes are only added in projects using MicroProfile 3.1 and later.

The enricher is activated when the io.helidon.health:helidon-health dependency is found in the project dependencies.

Probe configuration

You can configure the different aspects of the probes.

Table 77. Helidon HealthCheck Enricher probe configuration
Element Description Property

scheme

Scheme to use for connecting to the host.

Defaults to HTTP.

jkube.enricher.jkube-healthcheck-helidon.scheme

port

Port number to access the container.

Defaults to 8080.

jkube.enricher.jkube-healthcheck-helidon.port

livenessFailureThreshold

Configures failureThreshold field in .livenessProbe. Minimum consecutive failures for the probes to be considered failed after having succeeded.

Defaults to 3.

jkube.enricher.jkube-healthcheck-helidon.livenessFailureThreshold

livenessSuccessThreshold

Configures successThreshold field in .livenessProbe. Minimum consecutive successes for the probes to be considered successful after having failed.

Defaults to 1.

jkube.enricher.jkube-healthcheck-helidon.livenessSuccessThreshold

livenessInitialDelay

Configures initialDelaySeconds field in .livenessProbe. Number of seconds after the container has started before liveness or readiness probes are initiated.

Defaults to 0 seconds.

jkube.enricher.jkube-healthcheck-helidon.livenessInitialDelay

livenessPeriodSeconds

Configures periodSeconds field in .livenessProbe. How often (in seconds) to perform the liveness probe.

Defaults to 10.

jkube.enricher.jkube-healthcheck-helidon.livenessPeriodSeconds

livenessPath

Path to access on the application server.

Defaults to /health/live.

jkube.enricher.jkube-healthcheck-helidon.livenessPath

readinessFailureThreshold

Configures failureThreshold field in .readinessProbe. Minimum consecutive failures for the probes to be considered failed after having succeeded.

Defaults to 3.

jkube.enricher.jkube-healthcheck-helidon.readinessFailureThreshold

readinessSuccessThreshold

Configures the successThreshold field in .readinessProbe. Minimum consecutive successes for the probe to be considered successful after having failed.

Defaults to 1.

jkube.enricher.jkube-healthcheck-helidon.readinessSuccessThreshold

readinessInitialDelay

Configures the initialDelaySeconds field in .readinessProbe. Number of seconds after the container has started before the readiness probe is initiated.

Defaults to 0 seconds.

jkube.enricher.jkube-healthcheck-helidon.readinessInitialDelay

readinessPeriodSeconds

Configures the periodSeconds field in .readinessProbe. How often (in seconds) to perform the readiness probe.

Defaults to 10.

jkube.enricher.jkube-healthcheck-helidon.readinessPeriodSeconds

readinessPath

Path to access on the application server.

Defaults to /health/ready.

jkube.enricher.jkube-healthcheck-helidon.readinessPath

startupFailureThreshold

Configures the failureThreshold field in .startupProbe. Minimum consecutive failures for the probe to be considered failed after having succeeded.

Defaults to 3.

jkube.enricher.jkube-healthcheck-helidon.startupFailureThreshold

startupSuccessThreshold

Configures the successThreshold field in .startupProbe. Minimum consecutive successes for the probe to be considered successful after having failed.

Defaults to 1.

jkube.enricher.jkube-healthcheck-helidon.startupSuccessThreshold

startupInitialDelay

Configures the initialDelaySeconds field in .startupProbe. Number of seconds after the container has started before the startup probe is initiated.

Defaults to 0 seconds.

jkube.enricher.jkube-healthcheck-helidon.startupInitialDelay

startupPeriodSeconds

Configures the periodSeconds field in .startupProbe. How often (in seconds) to perform the startup probe.

Defaults to 10.

jkube.enricher.jkube-healthcheck-helidon.startupPeriodSeconds

startupPath

Path to access on the application server.

Defaults to /health/started.

jkube.enricher.jkube-healthcheck-helidon.startupPath
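As a sketch of how the table above is used, the probe behaviour can be tuned in gradle.properties; the property names come from the table, the values below are purely illustrative:

```properties
jkube.enricher.jkube-healthcheck-helidon.port=8080
jkube.enricher.jkube-healthcheck-helidon.livenessInitialDelay=5
jkube.enricher.jkube-healthcheck-helidon.readinessPath=/health/ready
jkube.enricher.jkube-healthcheck-helidon.startupFailureThreshold=5
```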

7.2. Enricher API

How to write your own enrichers and install them.

It’s possible to extend Eclipse JKube’s Enricher API to define your own custom enrichers for your use case. New enrichers are created by implementing the Enricher interface.

Please check out the Custom Istio Enricher Gradle quickstart for a detailed example.

8. Profiles

Profiles can be used to combine a set of enrichers and generators and to give this combination a referable name.

Profiles are defined in YAML. The following example shows a simple profile which uses only the Spring Boot generator and a few enrichers to add a Kubernetes Deployment and a Service:

Profile Definition
- name: my-spring-boot-apps (1)
  generator: (2)
    includes:
      - spring-boot
  enricher: (3)
    includes: (4)
      # Default Deployment object
      - jkube-controller
      # Add a default service
      - jkube-service
    excludes: (5)
      - jkube-icon
    config: (6)
      jkube-service:
        # Expose service as NodePort
        type: NodePort
  order: 10 (7)
- name: another-profile
# ....
1 Profile’s name
2 Generators to use
3 Enrichers to use
4 List of enrichers to include in that given order
5 List of enrichers to exclude (especially useful when extending profiles)
6 Configuration for generators and enrichers
7 An order which influences how profiles with the same name are merged

Each profiles.yml has a list of profiles which are defined with these elements:

Table 78. Profile elements
Element Description

name

Profile name.

extends

This plugin comes with a set of predefined profiles. These profiles can be extended by defining a custom profile that references the name of the profile to extend in the extends field.

generator

List of generator definitions. See below for the format of these definitions.

enricher

List of enricher definitions. See below for the format of these definitions.

order

The order of the profile which is used when profiles of the same name are merged.

8.1. Generator and Enricher definitions

The definition of generators and enrichers in the profile follows the same format:

Table 79. Generator and Enricher definition
Element Description

includes

List of generators or enrichers to include. The order in the list determines the order in which the processors are applied.

excludes

List of generators or enrichers to exclude. These take precedence over includes and will exclude a processor even when it is referenced in an includes section.

config

Configuration for generators or enrichers. This is a map where the keys are the name of the processor to configure and the value is again a map with configuration keys and values specific to the processor. See the documentation of the respective generator or enricher for the available configuration keys.

8.2. Lookup order

Profiles can be defined externally either directly as a build resource in src/main/jkube/profiles.yml or provided as part of a plugin’s dependency, where it is supposed to be included as META-INF/jkube/profiles.yml. Multiple profiles can be included in these profiles.yml descriptors as a list.

If a profile is used, it is looked up from various places in the following order:

  • From the compile and plugin classpath from META-INF/jkube/profiles-default.yml. These files are reserved for profiles defined by this plugin

  • From the compile and plugin classpath from META-INF/jkube/profiles.yml. Use this location for defining your custom profiles which you want to include via dependencies.

  • From the project in src/main/jkube/profiles.yml. The directory can be tuned with the plugin option resourceDir (property: jkube.resourceDir)

When multiple profiles of the same name are found, these profiles are merged. If the profiles have an order number, the profile with the higher order takes precedence when merging.

For includes of the same processors, the processor is moved to the earliest position. For example, consider the following two profiles, both named my-profile:

Profile A
name: my-profile
enricher:
  includes: [ e1, e2 ]
Profile B
name: my-profile
enricher:
  includes: [ e3, e1 ]
order: 10

Merging these results in the following profile (when no order is given, it defaults to 0):

Profile merged
name: my-profile
enricher:
  includes: [ e1, e2, e3 ]
order: 10

Profiles with the same order number are merged according to the lookup order described above, where the later profile is considered to have the higher order.

The configuration for enrichers and generators is merged, too: higher-order profiles override configuration values that have the same key in lower-order profile configurations.

8.3. Using Profiles

Profiles can be selected by defining them in the plugin configuration, by giving a system property or by using special directories in the directory holding the resource fragments.

Profile used in plugin configuration

Here is an example of how a profile can be selected in the plugin configuration:

openshift {
  profile = 'my-spring-boot-apps' // (1)
}
1 Name which selects the profile from the profiles.yml or profiles-default.yml file.
Profile as property

Alternatively, a profile can also be specified on the command line as a project property:

gradle -Pjkube.profile=my-spring-boot-apps ocBuild ocApply

If a configuration for enrichers and generators is provided as part of the project plugin’s configuration then this takes precedence and overrides any of the defaults provided by the selected profile.

Profiles for resource fragments

Profiles are also very useful when used together with resource fragments in src/main/jkube. By default, the resource objects defined here are enriched with the configured profile (if any). A different profile can be selected easily by using a subdirectory within src/main/jkube. The name of each subdirectory is interpreted as a profile name, and all resource definition files found in this subdirectory are enriched with the enrichers defined in that profile.

For example, consider the following directory layout:

.
├── src/main/jkube
  ├── app-rc.yml
  ├── app-svc.yml
  └── raw
    ├── couchbase-rc.yml
    └── couchbase-svc.yml

Here, the resource descriptors app-rc.yml and app-svc.yml are enhanced with the enrichers defined in the main configuration. The two files couchbase-rc.yml and couchbase-svc.yml in the subdirectory raw/ are enriched with the profile raw instead. This is a predefined profile which includes no enrichers at all, so the couchbase resource objects are not enriched and are taken over literally. This is an easy way to fine-tune enrichment for different sets of objects.

8.4. Predefined Profiles

This plugin comes with the following predefined profiles:

Table 80. Predefined Profiles
Profile Description

default

The default profile, which is active if no profile is specified. It consists of a curated set of generators and enrichers. See below for the current definition.

minimal

This profile contains no generators and only enrichers for adding default objects (controller and services). No other enrichment is included.

explicit

Like default but without adding default objects like controllers and services.

aggregate

Includes no generators and only the jkube-dependency enricher for picking up and combining resources from the compile time dependencies.

internal-microservice

default profile extension that prevents services from being externally exposed.

security-hardening

default profile extension that enables the security-hardening enricher.

8.5. Extending Profiles

A profile can also extend another profile to avoid repetition. This is useful to add optional enrichers/generators to a given profile or to partially exclude enrichers/generators from another.

- name: security-hardening
  extends: default
  enricher:
    includes:
      - jkube-security-hardening

For example, this profile adds the optional jkube-security-hardening enricher to the default profile.

9. JKube Plugins

This plugin supports so-called jkube-plugins, which have entry points that can be bound to the different JKube operation phases. jkube-plugins are enabled by just declaring a dependency in the plugin declaration:

The following example is from quickstarts/gradle/plugin

The JKube plugin is defined under Gradle’s buildSrc directory, which Gradle automatically adds to the build script classpath.

.
├── app
├── buildSrc
  ├── build.gradle
  └── src
      └── main
          ├── java
          │ └── org
          │     └── eclipse
          │         └── jkube
          │             └── quickstart
          │                 └── plugin
          │                     └── SimpleJKubePlugin.java
          └── resources
              └── META-INF
                  └── jkube
                      └── plugin

JKubePlugins are automatically loaded by JKube by declaring a dependency on a module that contains a descriptor file at META-INF/jkube/plugin, listing class names line by line, for example:

src/main/resources/META-INF/jkube/plugin
org.eclipse.jkube.quickstart.plugin.SimpleJKubePlugin

At the moment descriptor files are looked up in these locations:

  • META-INF/maven/io.fabric8/dmp-plugin (Deprecated, kept for backward compatibility)

  • META-INF/jkube/plugin

  • META-INF/jkube-plugin

During a build with ocBuild, those classes are loaded and certain fixed methods are called.

A JKube plugin needs to implement the org.eclipse.jkube.api.JKubePlugin interface. At the moment, the following methods are supported:

Method Description

addExtraFiles

A method called by openshift-gradle-plugin with a single File argument. This will point to a directory jkube-extra which can be referenced easily by a Dockerfile or an assembly. A jkube-plugin will typically create its own subdirectory to avoid a clash with other jkube-plugins.
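Based on the interface and method described above, a minimal jkube-plugin could look like the following sketch (the class name matches the quickstart layout shown earlier; the simple-plugin subdirectory name is illustrative):

```java
package org.eclipse.jkube.quickstart.plugin;

import java.io.File;

import org.eclipse.jkube.api.JKubePlugin;

public class SimpleJKubePlugin implements JKubePlugin {

    @Override
    public void addExtraFiles(File jkubeExtraDir) {
        // Use an own subdirectory to avoid clashes with other jkube-plugins
        File ownDir = new File(jkubeExtraDir, "simple-plugin");
        ownDir.mkdirs();
        // Files created here can be referenced from a Dockerfile or assembly
        // as jkube-extra/simple-plugin/...
    }
}
```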

Check out quickstarts/gradle/plugin for a fully working example.

10. Registry handling

Docker uses registries to store images. The registry is typically specified as part of the image name: if the first part (everything before the first /) contains a dot (.) or colon (:), this part is interpreted as the address (with an optional port) of a remote registry. This registry (or the default docker.io if no registry is given) is used during push and pull operations. This plugin follows the same semantics, so if an image name is specified with a registry part, this registry is contacted. Authentication is explained in the next section.

There are some situations however where you want to have more flexibility for specifying a remote registry. This might be because you do not want to hard code a registry into build.gradle but provide it from the outside with an environment variable or a system property.

This plugin supports various ways of specifying a registry:

  • If the image name contains a registry part, this registry is used unconditionally and cannot be overridden from the outside.

  • If an image name doesn’t contain a registry, then by default the default Docker registry docker.io is used for push and pull operations. But this can be overridden through various means:

    • If the image configuration contains a registry subelement, this registry is used.

    • Otherwise, a global configuration element registry is evaluated, which can also be provided as a system property via -Djkube.docker.registry.

    • Finally, an environment variable DOCKER_REGISTRY is looked up for detecting a registry.

This registry is used for pulling (i.e. for autopull of the base image when doing an ocBuild) and pushing with ocPush. However, when these two tasks are combined on the command line like in gradle -Djkube.docker.registry=myregistry:5000 ocBuild ocPush, the same registry is used for both operations. For more fine-grained control, separate registries for pull and push can be specified:

  • In the plugin’s configuration with the parameters pullRegistry and pushRegistry, respectively.

  • With the system properties jkube.docker.pull.registry and jkube.docker.push.registry, respectively.

Example
openshift {
    registry = "docker.jolokia.org:443"
    images {
        image1 {
            // Without an explicit registry
            name = "jolokia/jolokia-java"
            // Hence use this registry
            registry = "docker.ro14nd.de"
        }
        image2 {
            name = "postgresql"
            // No registry in the name, hence use this globally
            // configured docker.jolokia.org:443 as registry
        }
        image3 {
            // Explicitly specified always wins
            name = "docker.example.com:5000/another/server"
        }
    }
}

There is some special behaviour when using an externally provided registry as described above:

  • When pulling, the image pulled will also be tagged with a repository name without the registry. The reasoning behind this is that the image can then also be referenced by the configuration when the registry is no longer specified explicitly.

  • When pushing a local image, temporarily a tag including the registry is added and removed after the push. This is required because Docker can only push registry-named images.

11. Registry Authentication

When pulling (via the autoPull mode of ocBuild) or pushing an image, it might be necessary to authenticate against a Docker registry.

Several locations are searched for credentials. In order, these are:

  • Providing system properties jkube.docker.username and jkube.docker.password from the outside.

  • Using an authConfig section in the plugin configuration with username and password elements.

  • Using OpenShift configuration in ~/.config/kube

  • Login into a registry with docker login (credentials in a credential helper or in ~/.docker/config.json)

Using the username and password directly in build.gradle is not recommended since they are widely visible, although this is the easiest and most transparent way. Using an authConfig section is straightforward:

openshift {
    images {
        image {
            name = "consol/tomcat-7.0"
        }
    }
    authConfig {
        username = "jolokia"
        password = "s!cr!t"
    }
}

The credentials provided via system properties are a good compromise when using CI servers like Jenkins. You simply provide the credentials from the outside:

Example
gradle -Djkube.docker.username=jolokia -Djkube.docker.password=s!cr!t ocPush

The most secure way is to rely on Docker’s credential store or credential helper and read confidential information from an external credentials store, such as the native keychain of the operating system. Follow the instructions in the docker login documentation.

As a final fallback, this plugin consults $DOCKER_CONFIG/config.json if DOCKER_CONFIG is set, or ~/.docker/config.json if not, and reads credentials stored directly within this file. Such unsafe entries are created when connecting to a registry with the command docker login from the command line with older versions of Docker (pre 1.13.0) or when Docker is not configured to use a credential store.
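For illustration, such a plain (unsafe) entry in ~/.docker/config.json looks like this; the registry name is illustrative and the auth value is the Base64 encoding of username:password (here jolokia:s!cr!t, matching the earlier example):

```json
{
  "auths": {
    "docker.example.com:5000": {
      "auth": "am9sb2tpYTpzIWNyIXQ="
    }
  }
}
```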

11.1. Pull vs. Push Authentication

The credentials lookup described above is valid for both push and pull operations. In order to narrow things down, credentials can be provided for pull or push operations alone:

In an authConfig section, a sub-section pull and/or push can be added. In the example below the credentials provided are only used for image push operations:

Example
openshift {
    images {
        image {
            name = "consol/tomcat-7.0"
        }
    }
    authConfig {
        push {
            username = "jolokia"
            password = "secret"
        }
    }
}

When the credentials are given on the command line as system properties, then the properties jkube.docker.pull.username / jkube.docker.pull.password and jkube.docker.push.username / jkube.docker.push.password are used for pull and push operations, respectively (when given). Either way, the standard lookup algorithm as described in the previous section is used as fallback.
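Mirroring the earlier command-line example, push-only credentials can be provided like this:

```shell
gradle -Djkube.docker.push.username=jolokia -Djkube.docker.push.password='s!cr!t' ocPush
```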

12. Integrations

12.1. JIB (Java Image Builder)

openshift-gradle-plugin also provides an option to build container images without having access to any Docker daemon. You just need to set the jkube.build.strategy property to jib; the build process is then delegated to JIB. It creates a tarball inside your build output directory which can be loaded into any Docker daemon afterwards. You may also push the image to your specified registry using the push task with the corresponding feature flag enabled.
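For example, the JIB strategy can be selected for a single invocation on the command line:

```shell
gradle ocBuild -Djkube.build.strategy=jib
```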

You can find more details at Spring Boot JIB With Assembly Quickstart.

12.2. Buildpacks

openshift-gradle-plugin provides the required features for users to leverage Cloud Native Buildpacks for building container images. You can enable this build strategy by setting the jkube.build.strategy property to buildpacks.

Access to a Docker daemon is required in order to use Buildpacks as mentioned in Buildpack Prerequisites.

  gradle ocBuild -Djkube.build.strategy=buildpacks

openshift-gradle-plugin downloads Pack CLI to the user’s $HOME/.jkube folder and starts the pack build process. If the download for the Pack CLI binary fails, openshift-gradle-plugin looks for any locally installed Pack CLI version.

12.2.1. Buildpack Builder Image

By default openshift-gradle-plugin uses the builder image specified in the Pack Config file for building the container image using Pack CLI.

For example, if the user has this image set in the $HOME/.pack/config.toml file:

  default-builder-image = "testuser/buildpacks-quarkus-builder:latest"

openshift-gradle-plugin uses testuser/buildpacks-quarkus-builder:latest as the Buildpacks builder image. If no image is configured, then openshift-gradle-plugin uses paketobuildpacks/builder:base as the default builder image.

13. Kind/Filename Mapping

13.1. Default Kind/Filename Mapping

Kind Filename Type

BuildConfig

bc, buildconfig

ClusterRole

cr, crole, clusterrole

ConfigMap

cm, configmap

ClusterRoleBinding

crb, clusterrb, clusterrolebinding

CronJob

cj, cronjob

CustomResourceDefinition

crd, customerresourcedefinition

DaemonSet

ds, daemonset

Deployment

deployment

DeploymentConfig

dc, deploymentconfig

ImageStream

is, imagestream

ImageStreamTag

istag, imagestreamtag

Ingress

ingress

Job

job

LimitRange

lr, limitrange

Namespace

ns, namespace

NetworkPolicy

np, networkpolicy

OAuthClient

oauthclient

PolicyBinding

pb, policybinding

PersistentVolume

pv, persistentvolume

PersistentVolumeClaim

pvc, persistentvolumeclaim

Project

project

ProjectRequest

pr, projectrequest

ReplicaSet

rs, replicaset

ReplicationController

rc, replicationcontroller

ResourceQuota

rq, resourcequota

Role

role

RoleBinding

rb, rolebinding

RoleBindingRestriction

rbr, rolebindingrestriction

Route

route

Secret

secret

Service

svc, service

ServiceAccount

sa, serviceaccount

StatefulSet

statefulset

Template

template

Pod

pd, pod

13.2. Custom Kind/Filename Mapping

You can add your own custom Kind/Filename mappings. There are two approaches:

  • Setting an environment variable or system property called jkube.mapping pointing to a .properties file with pairs <kind>⇒<filename1>, <filename2>. By default, if neither the environment variable nor the system property is set, JKube looks for a file located on the classpath at /META-INF/jkube.kind-filename-type-mapping-default.properties.

  • By defining the mapping in the plugin’s configuration:

kubernetes {
    mappings {
        mapping {
            kind = "Var" (1)
            filenameTypes = "foo, bar" (2)
            apiVersion = "api.example.com/v1" (3)
        }
    }
}
1 The kind name (mandatory)
2 The filename types (mandatory), a comma-separated list of filenames to map to the specified kind
3 The apiVersion (optional)
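As a sketch of the first approach, a mapping file (the file name and values are illustrative) could contain:

```properties
# custom-mappings.properties
Var=foo, bar
```

It would then be passed via the system property, e.g. gradle ocResource -Djkube.mapping=custom-mappings.properties.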

14. FAQ

14.1. General questions

14.1.1. How do I define an environment variable?

The easiest way is to add a src/main/jkube/deployment.yml file to your project containing something like:

spec:
  template:
    spec:
      containers:
        - env:
          - name: FOO
            value: bar

The above will generate an environment variable $FOO with the value bar.

For a full list of the environment variables used in the Java base images, see this list.

14.1.2. How do I define a system property?

The simplest way is to add system properties to the JAVA_OPTIONS environment variable.

For a full list of the environment variables used in the Java base images, see this list.

e.g. add a src/main/jkube/deployment.yml file to your project containing something like:

spec:
 template:
   spec:
     containers:
       - env:
         - name: JAVA_OPTIONS
           value: "-Dfoo=bar -Dxyz=abc"

The above will define the system properties foo=bar and xyz=abc

14.1.3. How do I mount a config file from a ConfigMap?

First you need to create your ConfigMap resource via a file src/main/jkube/configmap.yml

data:
  application.properties: |
    # spring application properties file
    welcome = Hello from Kubernetes ConfigMap!!!
    dummy = some value

Then mount the entry in the ConfigMap into your Deployment via a file src/main/jkube/deployment.yml

metadata:
  annotations:
    configmap.jkube.io/update-on-change: ${project.artifactId}
spec:
  replicas: 1
  template:
    spec:
      volumes:
        - name: config
          configMap:
            name: ${project.artifactId}
            items:
            - key: application.properties
              path: application.properties
      containers:
        - volumeMounts:
            - name: config
              mountPath: /deployments/config

Note that the annotation configmap.jkube.io/update-on-change is optional; it’s used if your application is not capable of watching for changes in the /deployments/config/application.properties file. In this case, if you are also running the configmapcontroller, changing the ConfigMap will cause a rolling upgrade of your application to use the new ConfigMap contents.

14.1.4. How do I use a Persistent Volume?

First you need to create your PersistentVolumeClaim resource via a file src/main/jkube/foo-pvc.yml, where foo is the name of the PersistentVolumeClaim. Your app might require multiple persistent volumes, in which case you will need multiple PersistentVolumeClaim resources.

spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi

Then to mount the PersistentVolumeClaim into your Deployment create a file src/main/jkube/deployment.yml

spec:
  template:
    spec:
      volumes:
      - name: foo
        persistentVolumeClaim:
          claimName: foo
      containers:
      - volumeMounts:
        - mountPath: /whatnot
          name: foo

The above defines the PersistentVolumeClaim called foo, which is then mounted into the container at /whatnot.

14.1.5. How do I generate Ingress for my generated Service?

Ingress generation is supported by Eclipse JKube for Service objects of type LoadBalancer. In order to generate an Ingress you need to set the jkube.createExternalUrls property to true and the jkube.domain property to the desired host suffix; the suffix is appended to your service name to form the host value.

You can also provide the host in the gradle.properties file like this:

jkube.createExternalUrls=true
jkube.domain=example.com

14.1.6. How do I build the image with Podman instead of Docker?

When invoking ocBuild with only Podman installed, the following error appears:

No <dockerHost> given, no DOCKER_HOST environment variable, no read/writable '/var/run/docker.sock' or '//./pipe/docker_engine' and no external provider like Docker machine configured -> [Help 1]

By default, JKube relies on the Docker REST API at /var/run/docker.sock to build Docker images. Using Podman, even with the Docker CLI emulation, won’t work as it is just a CLI wrapper and does not provide any Docker REST API. However, it is possible to start an emulated Docker REST API with the podman command:

export DOCKER_HOST="unix:/run/user/$(id -u)/podman/podman.sock"
podman system service --time=0 unix:/run/user/$(id -u)/podman/podman.sock &

14.1.7. How to configure image name generated by Eclipse JKube?

The image name generated by Eclipse JKube is %g/%a:%l by default (see [image-name]). How to configure it depends on the mode you’re using in Eclipse JKube:

  • If you’re using zero configuration mode, which means you depend on Eclipse JKube Generators to generate an opinionated image, you can configure the name using the jkube.generator.name property.

  • If you’re providing a Groovy DSL image configuration, the image name is picked from the name field, as in this example:

openshift {
    images {
        image {
            name = "myusername/myimagename:latest"
            build {
                from = "openjdk:latest"
                cmd {
                    exec = ["java", "-jar", "${project.name}-${project.version}.jar"]
                }
            }
        }
    }
}
  • If you’re using Simple Dockerfile Mode, you can configure the image name via the jkube.image.name or jkube.generator.name properties.
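For instance, in Simple Dockerfile Mode the image name could be set per invocation (the name is illustrative):

```shell
gradle ocBuild -Djkube.image.name=myusername/myimagename:latest
```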