© 2020 The original authors.

kubernetes-maven-plugin

1. Introduction

The kubernetes-maven-plugin brings your Java applications on to Kubernetes. It provides a tight integration into Maven and benefits from the build configuration already provided. This plugin focus on two tasks: Building Docker images and creating Kubernetes resource descriptors. It can be configured very flexibly and supports multiple configuration models for creating: A Zero-Config setup allows for a quick ramp-up with some opinionated defaults. For more advanced requirements, an XML configuration provides additional configuration options which can be added to the pom.xml. For the full power, in order to tune all facets of the creation, external resource fragments and Dockerfiles can be used.

1.1. Building Images

The k8s:build goal is for creating Docker images containing the actual application. These then can be deployed later on Kubernetes or OpenShift. It is easy to include build artifacts and their dependencies into these images. This plugin uses the assembly descriptor format similar to the one used in maven-assembly-plugin to specify the content which will be added to the image. That images can then be pushed to public or private Docker registries with k8s:push.

Depending on the operational mode, for building the actual image either a Docker daemon is used directly or an OpenShift Docker Build is performed.

A special k8s:watch goal allows for reacting to code changes to automatically recreate images or copy new artifacts into running containers.

1.2. Kubernetes Resources

Kubernetes resource descriptors can be created or generated from k8s:resource. These files are packaged within the Maven artifacts and can be deployed to a running orchestration platform with k8s:apply.

Typically you only specify a small part of the real resource descriptors which will be enriched by this plugin with various extra information taken from the pom.xml. This drastically reduces boilerplate code for common scenarios.

1.3. Configuration

As mentioned already there are three levels of configuration:

  • Zero-Config mode makes some very opinionated decisions based on what is present in the pom.xml like what base image to use or which ports to expose. This is great for starting up things and for keeping quickstart applications small and tidy.

  • XML plugin configuration mode is similar to what docker-maven-plugin provides. This allows for type-safe configuration with IDE support, but only a subset of possible resource descriptor features is provided.

  • Kubernetes & OpenShift resource fragments are user provided YAML files that can be enriched by the plugin. This allows expert users to use a plain configuration file with all their capabilities, but also to add project specific build information and avoid boilerplate code.

The following table gives an overview of the different models

Table 1. Configuration Models
Model Docker Images Resource Descriptors

Zero-Config

Generators are used to create Docker image configurations. Generators can detect certain aspects of the build (e.g. whether Spring Boot is used) and then choose some opinionated defaults like the base image, which ports to expose and the startup command. They can be configured, but offer only a few options.

Default Enrichers will create a default Service and Deployment (DeploymentConfig for OpenShift) when no other resource objects are provided. Depending on the image they can detect which port to expose in the service. As with Generators, Enrichers support a limited set of configuration options.

XML configuration

kubernetes-maven-plugin inherits the XML based configuration for building images from the docker-maven-plugin and provides the same functionality. It supports an assembly descriptor for specifying the content of the Docker image.

A subset of possible resource objects can be configured with a dedicated XML syntax. With a decent IDE you get autocompletion on most objects and inline documentation for the available configuration elements. The provided configuration can be still enhanced by Enhancers which is useful for adding e.g. labels and annotations containing build or other information.

Resource Fragments and Dockerfiles

Similarly to the docker-maven-plugin, kubernetes-maven-plugin supports external Dockerfiles too, which are referenced from the plugin configuration.

Resource descriptors can be provided as external YAML files which will build a base skeleton for the applicable resource.

The "skeleton" is then post-processed by Enrichers which will complete the skeleton by adding the fields each enricher is responsible of (labels, annotations, port information, etc.). Maven properties within these files are resolved to their values.

With this model you can use every Kubernetes / OpenShift resource objects with all their flexibility, but still get the benefit of adding build information.

1.4. Examples

Let’s have a look at some code. The following examples will demonstrate all three configurations variants:

1.4.1. Zero-Config

This minimal but full working example pom.xml shows how a simple spring boot application can be dockerized and prepared for Kubernetes. The full example can be found in directory quickstarts/maven/zero-config.

Example
<project>
  <modelVersion>4.0.0</modelVersion>

  <groupId>org.eclipse.jkube</groupId>
  <artifactId>jkube-maven-sample-zero-config</artifactId>
  <version>1.0.0</version>
  <packaging>jar</packaging>

  <parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId> (1)
    <version>1.5.5.RELEASE</version>
  </parent>

  <dependencies>
    <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter-web</artifactId> (2)
    </dependency>
  </dependencies>

  <build>
    <plugins>
      <plugin>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-maven-plugin</artifactId> (3)
      </plugin>
      <plugin>
        <groupId>org.eclipse.jkube</groupId>
        <artifactId>kubernetes-maven-plugin</artifactId> (4)
        <version>1.0.0</version>
      </plugin>
    </plugins>
  </build>
</project>
1 This minimalistic spring boot application uses the spring-boot parent POM for setting up dependencies and plugins
2 The Spring Boot web starter dependency enables a simple embedded Tomcat for serving Spring MVC apps
3 The spring-boot-maven-plugin is responsible for repackaging the application into a fat jar, including all dependencies and the embedded Tomcat
4 The kubernetes-maven-plugin enables the automatic generation of a Docker image and Kubernetes / OpenShift descriptors including this Spring application.

This setup make some opinionated decisions for you:

These choices can be influenced by configuration options as described in Spring Boot Generator.

To start the Docker image build, you simply run

mvn package k8s:build

This will create the Docker image against a running Docker daemon (which must be accessible either via Unix Socket or with the URL set in DOCKER_HOST). Alternatively, when connected to an OpenShift cluster then a S2I build will be performed on OpenShift which at the end creates an ImageStream.

To deploy the resources to the cluster call

mvn k8s:resource k8s:deploy

By default a Service and a Deployment object pointing to the created Docker image is created. When running in OpenShift mode, a Service and DeploymentConfig which refers the ImageStream created with k8s:build will be installed.

Of course you can bind all those jkube-goals to execution phases as well, so that they are called along with standard lifecycle goals like install. For example, to bind the building of the Kubernetes resource files and the Docker images, add the following goals to the execution of the kubernetes-maven-plugin:

Example for lifecycle bindings
<plugin>
  <groupId>org.eclipse.jkube</groupId>
  <artifactId>kubernetes-maven-plugin</artifactId>

  <!-- ... -->

  <executions>
    <execution>
      <goals>
        <goal>resource</goal>
        <goal>build</goal>
      </goals>
    </execution>
  </executions>
</plugin>

If you’d also like to automatically deploy to Kubernetes each time you do a mvn install you can add the apply goal:

Example for lifecycle bindings with automatic deploys for mvn install
<plugin>
  <groupId>org.eclipse.jkube</groupId>
  <artifactId>kubernetes-maven-plugin</artifactId>

  <!-- ... -->

  <executions>
    <execution>
      <goals>
        <goal>resource</goal>
        <goal>build</goal>
        <goal>apply</goal>
      </goals>
    </execution>
  </executions>
</plugin>

1.4.2. XML Configuration

XML based configuration is only partially implemented and is not recommended for use right now.

Although the Zero-config mode and its generators can be tweaked with options up to a certain degree, many cases require more flexibility. For such instances, an XML-based plugin configuration can be used, in a way similar to the XML configuration used by docker-maven-plugin.

The plugin configuration can be roughly divided into the following sections:

  • Global configuration options are responsible for tuning the behaviour of plugin goals

  • <images> defines which Docker images are used and configured. This section is similar to the image configuration of the docker-maven-plugin, except that <run> and <external> sub-elements are ignored)

  • <resource> defines the resource descriptors for deploying on an OpenShift or Kuberneres cluster.

  • <generator> configures generators which are responsible for creating images. Generators are used as an alternative to a dedicated <images> section.

  • <enricher> configures various aspects of enrichers for creating or enhancing resource descriptors.

A working example can be found in the quickstarts/maven/xml-config directory. An extract of the plugin configuration is shown below:

Example for an XML configuration
<configuration>
  <namespace>test-ns</namespace>
  <images>  (1)
    
  </images>

  <resources> (2)
    <labels> (3)
      <all>
        <group>quickstarts</group>
      </all>
    </labels>

    <replicas>2</replicas> (4)
    <controllerName>${project.artifactId}</controllerName> (5)

    <services> (6)
      <service>
        <name>camel-service</name>
        <headless>true</headless>
      </service>
    </services>

    <serviceAccounts>
      <serviceAccount>
        <name>build-robot</name>
      </serviceAccount>
    </serviceAccounts>
  </resources>
</configuration>
1 Standard XML configuration for building one single Docker image
2 Kubernetes / OpenShift resources to create
3 Labels which should be applied globally to all resource objects
4 Number of replicas desired
5 Name of controller created by plugin
6 One or more Service definitions.

The XML resource configuration is based on plain Kubernetes resource objects. When targeting OpenShift, Kubernetes resource descriptors will be automatically converted to their OpenShift counterparts, e.g. a Kubernetes Deployment will be converted to an OpenShift DeploymentConfig.

1.4.3. Resource Fragments

The third configuration option is to use an external configuration in form of YAML resource descriptors which are located in the src/main/jkube directory. Each resource gets its own file, which contains a skeleton of a resource descriptor. The plugin will pick up the resource, enrich it and then combine all to a single kubernetes.yml and openshift.yml file. Within these descriptor files you are can freely use any Kubernetes feature.

Note: In order to support simultaneously both OpenShift and Kubernetes, there is currently no way to specify OpenShift-only features this way, though this might change in future releases.

Let’s have a look at an example from quickstarts/maven/external-resources. This is a plain Spring Boot application, whose images are auto generated like in the Zero-Config case. The resource fragments are in src/main/jkube.

Example fragment "deployment.yml"
spec:
  replicas: 1
  template:
    spec:
      volumes:
        - name: config
          gitRepo:
            repository: 'https://github.com/jstrachan/sample-springboot-config.git'
            revision: 667ee4db6bc842b127825351e5c9bae5a4fb2147
            directory: .
      containers:
        - volumeMounts:
            - name: config
              mountPath: /app/config
          env:
            - name: KUBERNETES_NAMESPACE
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.namespace
      serviceAccount: ribbon

As you can see, there is no metadata section as would be expected for Kubernetes resources because it will be automatically added by the kubernetes-maven-plugin. The object’s Kind, if not given, is automatically derived from the filename. In this case, the kubernetes-maven-plugin will create a Deployment because the file is called deployment.yml. Similar mappings between file names and resource type exist for each supported resource kind, the complete list of which (along with associated abbreviations) can be found in the Appendix.

Now that sidecar containers are supported in this plugin(if jkube.sidecar is enabled), be careful whenever you’re supplying container name in the resource fragment. If container specified in resource fragment doesn’t have a name or it’s name is equal to default fmp generated application’s container name; it would not be treated as sidecar and it would be merged into main container. However, You can override plugin’s default name for main container via jkube.generator.alias property.

Additionally, if you name your fragment using a name prefix followed by a dash and the mapped file name, the plugin will automatically use that name for your resource. So, for example, if you name your deployment fragment myapp-deployment.yml, the plugin will name your resource myapp. In the absence of such provided name for your resource, a name will be automatically derived from your project’s metadata (in particular, its artifactId as specified in your POM).

No image is also referenced in this example because the plugin also fills in the image details based on the configured image you are building with (either from a generator or from a dedicated image plugin configuration, as seen before).

For building images there is also an alternative mode using external Dockerfiles, in addition to the XML based configuration. Refer to k8s:build for details.

Enrichment of resource fragments can be fine-tuned by using profile sub-directories. For more details see Profiles.

Now that we have seen some examples of the various ways how this plugin can be used, the following sections will describe the plugin goals and extension points in detail.

2. Compatibility with Kubernetes

2.1. Kubernetes Compatibility

Table 2. Kubernetes Compatibility
KMP Kubernetes 1.18 Kubernetes 1.17 Kubernetes 1.14 Kubernetes 1.12

KMP 1.0.0

KMP 0.2.0

x

KMP 0.1.1

x

x

KMP 0.1.0

x

x

3. Installation

This plugin is available from Maven central and can be connected to pre- and post-integration phase as seen below. The configuration and available goals are described below.

By default, Maven will only search for plugins in the org.apache.maven.plugins and org.codehaus.mojo packages. In order to resolve the provider for the JKube plugin goals, you need to edit ~/.m2/settings.xml and add the org.eclipse.jkube namespace so the <pluginGroups> configuration.

<settings>
      ...

      <pluginGroups>
        <pluginGroup>org.eclipse.jkube</pluginGroup>
      </pluginGroups>

      ...
</settings>
<plugin>
  <groupId>org.eclipse.jkube</groupId>
  <artifactId>kubernetes-maven-plugin</artifactId>
  <version>1.0.0</version>

  <configuration>
     ....
     <images>
        <!-- A single's image configuration -->
        
        ....
     </images>
  </configuration>

  <!-- Connect k8s:resource, k8s:build and k8s:helm to lifecycle phases -->
  <executions>
    <execution>
       <id>jkube</id>
       <goals>
         <goal>resource</goal>
         <goal>build</goal>
         <goal>helm</goal>
       </goals>
    </execution>
  </executions>
</plugin>

4. Goals Overview

This plugin supports a rich set for providing a smooth Java developer experience. These goals can be categorized in multiple groups:

  • Build goals are all about creating and managing Kubernetes build artifacts like Docker images or S2I builds.

  • Development goals target help not only in deploying resource descriptors to the development cluster but also to manage the lifecycle of the development cluster as well.

Table 3. Build Goals
Goal Description

k8s:build

Build images

k8s:push

Push images to a registry

k8s:resource

Create Kubernetes or OpenShift resource descriptors

k8s:apply

Apply resources to a running cluster

Table 4. Development Goals
Goal Description

k8s:deploy

Deploy resources descriptors to a cluster after creating them and building the app. Same as k8s:apply except that it runs in the background.

k8s:undeploy

Undeploy and remove resources descriptors from a cluster.

k8s:watch

Watch for file changes and perform rebuilds and redeployments

k8s:log

Show the logs of the running application

k8s:debug

Enable remote debugging

Depending on whether the OpenShift or Kubernetes operational mode is used, the workflow and the performed actions differs :

Table 5. Workflows
Use Case Kubernetes OpenShift

Build

k8s:build k8s:push

  • Creates an image against an exposed Docker daemon (with a docker.tar)

  • Pushes the image to a registry which is then referenced from the configuration

k8s:build

  • Creates or uses a BuildConfig

  • Creates or uses an ImageStream which can be referenced by the deployment descriptors in a DeploymenConfig

  • Starts an OpenShift build with a docker.tar as input

Deploy

k8s:deploy

  • Applies a Kubernetes resource descriptor to cluster

k8s:deploy

  • Applies an OpenShift resource descriptor to a cluster

5. Build Goals

5.1. k8s:resource

This goal generates Kubernetes resources based on your project. It can either be opinionated defaults or based on the configuration provided in XML config or resource fragments in src/main/jkube. Generated resources are in target/classes/META-INF/jkube/kubernetes directory.

5.1.1. Labels and Annotations

Labels and annotations can be easily added to any resource object. This is best explained by an example.

Example for label and annotations
<plugin>
  <!-- ... -->
  <configuration>
    <!-- ... -->
    <resources>
      <labels> (1)
        <all> (1)
          <property> (2)
            <name>organisation</name>
            <value>unesco</value>
          </property>
        </all>
        <service> (3)
          <property>
            <name>database</name>
            <value>mysql</value>
          </property>
          <property>
            <name>persistent</name>
            <value>true</value>
          </property>
        </service>
        <replicaSet> (4)
          <!-- ... -->
        </replicaSet>
        <pod> (5)
          <!-- ... -->
        </pod>
        <deployment> (6)
          <!-- ... -->
        </deployment>
      </labels>

      <annotations> (7)
         <!-- ... -->
      </annotations>
      <remotes> (8)
        <remote>https://gist.githubusercontent.com/lordofthejars/ac2823cec7831697d09444bbaa76cd50/raw/e4b43f1b6494766dfc635b5959af7730c1a58a93/deployment.yaml</remote>
      </remotes>
    </resource>
  </configuration>
</plugin>
1 <labels> section with <resources> contains labels which should be applied to objects of various kinds
2 Within <all> labels which should be applied to every object can be specified
3 <service> labels are used to label services
4 <replicaSet> labels are for replica set and replication controller
5 <pod> holds labels for pod specifications in replication controller, replica sets and deployments
6 <deployment> is for labels on deployments (kubernetes) and deployment configs (openshift)
7 The subelements are also available for specifying annotations.
8 <remotes> you can set location of fragments as URL.

Labels and annotations can be specified in free form as a map. In this map, the element name is the name of the label or annotation respectively, whereas the content is the value to set.

The following subelements are possible for <labels> and <annotations> :

Table 6. Label and annotation configuration
Element Description

all

All entries specified in the <all> sections are applied to all resource objects created.

deployment

Labels and annotations applied to Deployment (for Kubernetes).

pod

Labels and annotations applied pod specification as used in ReplicationController, ReplicaSets, Deployments and DeploymentConfigs objects.

replicaSet

Labels and annotations applied to ReplicaSet and ReplicationController objects.

service

Labels and annotations applied to Service objects.

5.1.2. Secrets

Once you’ve configured some docker registry credentials into ~/.m2/setting.xml, as explained in the Authentication section, you can create Kubernetes secrets from a server declaration.

XML configuration

You can create a secret using xml configuration in the pom.xml file. It should contain the following fields:

key required description

dockerServerId

true

the server id which is configured in ~/.m2/setting.xml

name

true

this will be used as name of the kubernetes secret resource

namespace

false

the secret resource will be applied to the specific namespace, if provided

This is best explained by an example.

Example for XML configuration
<properties>
    <jkube.docker.registry>docker.io</docker.registry>
</properties>
<configuration>
    <resources>
        <secrets>
            <secret>
                <dockerServerId>${docker.registry}</dockerServerId>
                <name>mydockerkey</name>
            </secret>
        </secrets>
    </resources>
</configuration>

Yaml fragment with annotation

You can create a secret using a yaml fragment. You can reference the docker server id with an annotation maven.jkube.io/dockerServerId. The yaml fragment file should be put under the src/main/jkube/ folder.

Example
apiVersion: v1
kind: Secret
metadata:
  name: mydockerkey
  namespace: default
  annotations:
    maven.jkube.io/dockerServerId: ${docker.registry}
type: kubernetes.io/dockercfg

5.1.3. Resource Validation

Resource goal also validates the generated resource descriptors using API specification of Kubernetes.

Table 7. Validation Configuration
Element Description Property

skipResourceValidation

If value is set to true then resource validation is skipped. This may be useful if resource validation is failing for some reason but you still want to continue the deployment.

Default is false.

jkube.skipResourceValidation

failOnValidationError

If value is set to true then any validation error will block the plugin execution. A warning will be printed otherwise.

Default is false.

jkube.failOnValidationError

5.1.4. Supported Properties for Resource goal

Table 8. Options available with resource goal
Element Description Property

profile

Profile to use. A profile contains the enrichers and generators to use as well as their configuration. Profiles are looked up in the classpath and can be provided as yaml files.

Defaults to default.

jkube.profile

sidecar

Whether to enable sidecar behavior or not. By default pod specs are merged into main application container.

Defaults to false.

jkube.sidecar

skipHealthCheck

Whether to skip health checks addition in generated resources or not.

Defaults to false.

jkube.skipHealthCheck

workDir

The JKube working directory. Defaults to ${project.build.directory}/jkube.

jkube.workDir

environment

Environment name where resources are placed. For example, if you set this property to dev and resourceDir is the default one, plugin will look at src/main/jkube/dev.

Defaults to null.

jkube.environment

useProjectClassPath

Should we use the project’s compile time classpath to scan for additional enrichers/generators.

Defaults to false.

jkube.useProjectClassPath

resourceDir

Folder where to find project specific files.

Defaults to ${basedir}/src/main/jkube.

jkube.resourceDir

targetDir

The generated Kubernetes manifests target direcotry.

Defaults to ${project.build.outputDirectory}/META-INF/jkube.

jkube.targetDir

resourceType

The artifact type for attaching the generated resource file to the project. Can be either 'json' or 'yaml'.

Defaults to yaml.

jkube.resourceType

mergeWithDekorate

When resource generation is delegated to Dekorate, should JKube resources be merged with Dekorate generated ones.

Defaults to false.

jkube.mergeWithDekorate

skipResource

Skip resource generation.

Defaults to false.

jkube.skip.resource

createExternalUrls

Should we create external Ingress for any LoadBalancer Services which don’t already have them.

Defaults to false.

jkube.createExternalUrls

domain

Domain added to the Service ID when creating Kubernetes Ingresses or OpenShift routes.

jkube.domain

5.2. k8s:build

This goal is for building Docker images.

5.2.1. Kubernetes Build

A normal Docker build is performed by default. The connection configuration to access the Docker daemon is described in Access Configuration.

In order to make the generated images available to the Kubernetes cluster the generated images need to be pushed to a registry with the goal k8s:push. This is not necessary for single node clusters, though as there is no need to distribute images.

5.2.2. Configuration (XML)

The following sections describe the usual configuration, which is similar to the build configuration used in the docker-maven-plugin.

In addition a more automatic way for creating predefined build configuration can be performed with so called Generators. Generators are very flexible and can be easily created. These are described in an extra section.

Global configuration parameters specify overall behavior common for all images to build. Some of the configuration options are shared with other goals.

Table 9. Global configuration
Element Description Property

buildStrategy

Defines what build strategy to choose while building container image. Possible values are docker and jib out of which docker is default.

jkube.build.strategy

apiVersion

Use this variable if you are using an older version of docker not compatible with the current default use to communicate with the server.

jkube.docker.apiVersion

authConfig

Authentication information when pulling from or pushing to Docker registry. There is a dedicated section Authentication for how to do security.

autoPull

Decide how to pull missing base images or images to start:

  • on : Automatic download any missing images (default)

  • off : Automatic pulling is switched off

  • always : Pull images always even when they already exist locally

  • once : For multi-module builds images are only checked once and pulled for the whole build.

jkube.docker.autoPull

buildRecreate

If the effective mode is openshift then this option decides how the OpenShift resource objects associated with the build should be treated when they already exist:

  • buildConfig or bc : Only the BuildConfig is recreated

  • imageStream or is : Only the ImageStream is recreated

  • all : Both, BuildConfig and ImageStream are recreated

  • none : Neither BuildConfig nor ImageStream is recreated

The default is none. If you provide the property without value then all is assumed, so everything gets recreated.

jkube.build.recreate

imagePullPolicy

Specify whether images should be pull when looking for base images while building or images for starting. This property can take the following values (case insensitive):

  • IfNotPresent: Automatic download any missing images (default)

  • Never : Automatic pulling is switched off always

  • Always : Pull images always even when they already exist locally.

By default a progress meter is printed out on the console, which is omitted when using Maven in batch mode (option -B). A very simplified progress meter is provided when using no color output (i.e. with -Djkube.useColor=false).

jkube.docker.imagePullPolicy

certPath

Path to SSL certificate when SSL is used for communicating with the Docker daemon. These certificates are normally stored in ~/.docker/. With this configuration the path can be set explicitly. If not set, the fallback is first taken from the environment variable DOCKER_CERT_PATH and then as last resort ~/.docker/. The keys in this are expected with it standard names ca.pem, cert.pem and key.pem. Please refer to the Docker documentation for more information about SSL security with Docker.

jkube.docker.certPath

dockerHost

The URL of the Docker Daemon. If this configuration option is not given, then the optional <machine> configuration section is consulted. The scheme of the URL can be either given directly as http or https depending on whether plain HTTP communication is enabled or SSL should be used. Alternatively the scheme could be tcp in which case the protocol is determined via the IANA assigned port: 2375 for http and 2376 for https. Finally, Unix sockets are supported by using the scheme unix together with the filesystem path to the unix socket.

The discovery sequence used by the docker-maven-plugin to determine the URL is:

  1. Value of dockerHost (jkube.docker.host)

  2. The Docker host associated with the docker-machine named in <machine>, i.e. the DOCKER_HOST from docker-machine env. See below for more information about Docker machine support.

  3. The value of the environment variable DOCKER_HOST.

  4. unix:///var/run/docker.sock if it is a readable socket.

jkube.docker.host

filter

In order to temporarily restrict the operation of plugin goals this configuration option can be used. Typically this will be set via the system property jkube.image.filter when Maven is called. The value can be a single image name (either its alias or full name) or it can be a comma separated list with multiple image names. Any name which doesn’t refer an image in the configuration will be ignored.

jkube.image.filter

machine

Docker machine configuration. See Docker Machine for possible values.

maxConnections

Number of parallel connections are allowed to be opened to the Docker Host. For parsing log output, a connection needs to be kept open (as well for the wait features), so don’t put that number to low. Default is 100 which should be suitable for most of the cases.

jkube.docker.maxConnections

access

Group of configuration parameters to connect to Kubernetes/OpenShift cluster.

outputDirectory

Default output directory to be used by this plugin. The default value is target/docker and is only used for the goal k8s:build.

jkube.build.target.dir

profile

Profile to which contains enricher and generators configuration. See Profiles for details.

jkube.profile

registry

Specify globally a registry to use for pulling and pushing images. See Registry handling for details.

jkube.docker.registry

skip

With this parameter the execution of this plugin can be skipped completely.

jkube.skip

skipBuild

If set not images will be build (which implies also skip.tag) with k8s:build.

jkube.skip.build

skipBuildPom

If set the build step will be skipped for modules of type pom. If not set, then by default projects of type pom will be skipped if there are no image configurations contained.

jkube.skip.build.pom

skipTag

If set to true this plugin won’t add any tags to images that have been built with k8s:build.

jkube.skip.tag

skipMachine

Skip using docker machine in any case

jkube.docker.skip.machine

sourceDirectory

Default directory that contains the assembly descriptor(s) used by the plugin. The default value is src/main/docker. This option is only relevant for the k8s:build goal.

jkube.build.source.dir

verbose

Boolean attribute for switching on verbose output like the build steps when doing a Docker build. Default is false.

jkube.docker.verbose

logDate

The date format to use when logging messages from Docker. Default is DEFAULT (HH:mm:ss.SSS)

jkube.docker.logDate

logStdout

Log to stdout regardless if log files are configured or not. Default is false.

jkube.docker.logStdout

5.2.3. Kubernetes Access Configuration

You can configure parameters to define how plugin is going to connect to Kubernetes cluster instead of relying on default parameters.

<configuration>
  <access>
    <username></username>
    <password></password>
    <masterUrl></masterUrl>
    <apiVersion></apiVersion>
  </access>
</configuration>
Element Description Property

username

Username on which to operate.

jkube.username

password

Password on which to operate.

jkube.password

namespace

Namespace on which to operate.

jkube.namespace

masterUrl

Master URL on which to operate.

jkube.masterUrl

apiVersion

Api version on which to operate.

jkube.apiVersion

caCertFile

CaCert File on which to operate.

jkube.caCertFile

caCertData

CaCert Data on which to operate.

jkube.caCertData

clientCertFile

Client Cert File on which to operate.

jkube.clientCertFile

clientCertData

Client Cert Data on which to operate.

jkube.clientCertData

clientKeyFile

Client Key File on which to operate.

jkube.clientKeyFile

clientKeyData

Client Key Data on which to operate.

jkube.clientKeyData

clientKeyAlgo

Client Key Algorithm on which to operate.

jkube.clientKeyAlgo

clientKeyPassphrase

Client Key Passphrase on which to operate.

jkube.clientKeyPassphrase

trustStoreFile

Trust Store File on which to operate.

jkube.trustStoreFile

trustStorePassphrase

Trust Store Passphrase on which to operate.

jkube.trustStorePassphrase

keyStoreFile

Key Store File on which to operate.

jkube.keyStoreFile

keyStorePassphrase

Key Store Passphrase on which to operate.

jkube.keyStorePassphrase

5.2.4. Image Configuration

The configuration how images should be created a defined in a dedicated <images> sections. These are specified for each image within the <images> element of the configuration with one 
    
  </images>
</configuration>
1 One or more 
   </images>
 </configuration>
 ...
</plugin>
Build Plugins

This plugin supports so call dmp-plugins which are used during the build phase. dmp-plugins are enabled by just declaring a dependency in the plugin declaration:

<plugin>
  <groupId>org.eclipse.jkube</groupId>
  <artifactId>kubernetes-maven-plugin</artifactId>

  <dependencies>
    <dependency>
      <groupId>org.eclipse.jkube</groupId>
      <artifactId>run-java-sh</artifactId>
      <version>1.2.2</version>
    </dependency>
  </dependencies>
</plugin>
org.eclipse.jkube.runsh.RunShLoader

During a build with kubernetes-maven-plugin:build, those classes are loaded and certain fixed method are called.

The following methods are supported:

Method Description

addExtraFiles

A static method called by dmp with a single File argument. This will point to a directory docker-extra which can be referenced easily by a Dockerfile or an assembly. A dmp plugin typically will create an own subdirectory to avoid a clash with other dmp-plugins.

If a configured plugin does not provide method of this name and signature, then it will be simply ignored. Also, no interface needs to be implemented to keep the coupling low.

The following official dmp-plugins are known and supported:

Name G,A Description

run-java.sh

jkube.io, run-java

General purpose startup script fo running Java applications. The dmp plugin creates a target/docker-extra/run-java/run-java.sh which can be included in a Dockerfile (see the example above). See the run-java.sh Documentation for more details.

Check out samples/run-java for a fully working example.

All build relevant configuration is contained in the <build> section of an image configuration. The following configuration options are supported:

Table 12. Build configuration (
    
    
  </images>
</configuration>

There is some special behaviour when using an externally provided registry like described above:

  • When pulling, the image pulled will be also tagged with a repository name without registry. The reasoning behind this is that this image then can be referenced also by the configuration when the registry is not specified anymore explicitly.

  • When pushing a local image, temporarily a tag including the registry is added and removed after the push. This is required because Docker can only push registry-named images.

12. Authentication

When pulling (via the autoPull mode of k8s:build) or pushing image, it might be necessary to authenticate against a Docker registry.

There are five different locations searched for credentials. In order, these are:

  • Providing system properties jkube.docker.username and jkube.docker.password from the outside.

  • Using a <authConfig> section in the plugin configuration with <username> and <password> elements.

  • Using OpenShift configuration in ~/.config/kube

  • Using a <server> configuration in ~/.m2/settings.xml

  • Login into a registry with docker login (credentials in a credential helper or in ~/.docker/config.json)

Using the username and password directly in the pom.xml is not recommended since this is widely visible. This is the easiest and transparent way, though. Using an <authConfig> is straight forward:

<plugin>
  <configuration>
    
    <!-- ... -->
    <authConfig>
      <username>jolokia</username>
      <password>s!cr!t</password>
    </authConfig>
  </configuration>
</plugin>

The system property provided credentials are a good compromise when using CI servers like Jenkins. You simply provide the credentials from the outside:

Example
mvn -Djkube.docker.username=jolokia -Djkube.docker.password=s!cr!t k8s:push

The most mavenish way is to add a server to the Maven settings file ~/.m2/settings.xml:

Example
<servers>
  <server>
    <id>docker.io</id>
    <username>jolokia</username>
    <password>s!cr!t</password>
  </server>
  <!-- ... -->
</servers>

The server id must specify the registry to push to/pull from, which by default is central index docker.io (or index.docker.io / registry.hub.docker.com as fallbacks). Here you should add your docker.io account for your repositories. If you have multiple accounts for the same registry, the second user can be specified as part of the ID. In the example above, if you have a second account 'jkubeio' then use an <id>docker.io/jkubeio</id> for this second entry. I.e. add the username with a slash to the id name. The default without username is only taken if no server entry with a username appended id is chosen.

The most secure way is to rely on docker’s credential store or credential helper and read confidential information from an external credentials store, such as the native keychain of the operating system. Follow the instruction on the docker login documentation.

As a final fallback, this plugin consults $DOCKER_CONFIG/config.json if DOCKER_CONFIG is set, or ~/.docker/config.json if not, and reads credentials stored directly within this file. This unsafe behavior happened when connecting to a registry with the command docker login from the command line with older versions of docker (pre 1.13.0) or when docker is not configured to use a credential store.

12.1. Pull vs. Push Authentication

The credentials lookup described above is valid for both push and pull operations. In order to narrow things down, credentials can be provided for pull or push operations alone:

In an <authConfig> section a sub-section <pull> and/or <push> can be added. In the example below the credentials provider are only used for image push operations:

Example
<plugin>
  <configuration>
    
    <!-- ... -->
    <authConfig>
      <push>
         <username>jolokia</username>
         <password>s!cr!t</password>
      </push>
    </authConfig>
  </configuration>
</plugin>

When the credentials are given on the command line as system properties, then the properties jkube.docker.pull.username / jkube.docker.pull.password and jkube.docker.push.username / jkube.docker.push.password are used for pull and push operations, respectively (when given). Either way, the standard lookup algorithm as described in the previous section is used as fallback.

12.2. OpenShift Authentication

When working with the default registry in OpenShift, the credentials to authenticate are the OpenShift username and access token. So, a typical interaction with the OpenShift registry from the outside is:

oc login
...
mvn -Djkube.docker.registry=docker-registry.domain.com:80/default/myimage \
    -Djkube.docker.username=$(oc whoami) \
    -Djkube.docker.password=$(oc whoami -t)

(note, that the image’s username part ("default" here") must correspond to an OpenShift project with the same name to which you currently connected account has access).

This can be simplified by using the system property docker.useOpenShiftAuth in which case the plugin does the lookup. The equivalent to the example above is

oc login
...
mvn -Ddocker.registry=docker-registry.domain.com:80/default/myimage \
    -Ddocker.useOpenShiftAuth

Alternatively the configuration option <useOpenShiftAuth> can be added to the <authConfig> section.

For dedicated pull and push configuration the system properties jkube.docker.pull.useOpenShiftAuth and jkube.docker.push.useOpenShiftAuth are available as well as the configuration option <useOpenShiftAuth> in an <pull> or <push> section within the <authConfig> configuration.

If useOpenShiftAuth is enabled then the OpenShift configuration will be looked up in $KUBECONFIG or, if this environment variable is not set, in ~/.kube/config.

12.3. Password encryption

Regardless of which mode you choose you can encrypt password as described in the Maven documentation. Assuming that you have setup a master password in ~/.m2/security-settings.xml you can create easily encrypt passwords:

Example
$ mvn --encrypt-password
Password:
{QJ6wvuEfacMHklqsmrtrn1/ClOLqLm8hB7yUL23KOKo=}

This password then can be used in authConfig, docker.password and/or the <server> setting configuration. However, putting an encrypted password into authConfig in the pom.xml doesn’t make much sense, since this password is encrypted with an individual master password.

12.4. Extended Authentication

Some docker registries require additional steps to authenticate. Amazon ECR requires using an IAM access key to obtain temporary docker login credentials. The k8s:push and k8s:build goals automatically execute this exchange for any registry of the form <awsAccountId> .dkr.ecr. <awsRegion> .amazonaws.com, unless the skipExtendedAuth configuration (jkube.docker.skip.extendedAuth property) is set true.

Note that for an ECR repository with URI 123456789012.dkr.ecr.eu-west-1.amazonaws.com/example/image the d-m-p’s jkube.docker.registry should be set to 123456789012.dkr.ecr.eu-west-1.amazonaws.com and example/image is the <name> of the image.

You can use any IAM access key with the necessary permissions in any of the locations mentioned above except ~/.docker/config.json. Use the IAM Access key ID as the username and the Secret access key as the password. In case you’re using temporary security credentials provided by the AWS Security Token Service (AWS STS), you have to provide the security token as well. To do so, either specify the docker.authToken system property or provide an <auth> element alongside username & password in the authConfig.

In case you are running on an EC2 instance that has an appropriate IAM role assigned (e.g. a role that grants the AWS built-in policy AmazonEC2ContainerRegistryPowerUser) authentication information doesn’t need to be provided at all. Instead the instance meta-data service is queried for temporary access credentials supplied by the assigned role.

13. Volume Configuration

kubernetes-maven-plugin supports volume configuration in XML format in pom.xml. These are the volume types which are supported:

Table 47. Supported Volume Types
Volume Type Description

hostPath

Mounts a file or directory from the host node’s filesystem into your pod

emptyDir

Containers in the Pod can all read and write the same files in the emptyDir volume, though that volume can be mounted at the same or different paths in each Container. When a Pod is removed from a node for any reason, the data in the emptyDir is deleted forever.

gitRepo

It mounts an empty directory and clones a git repository into it for your Pod to use.

secret

It is used to pass sensitive information, such as passwords, to Pods.

nfsPath

Allows an existing NFS(Network File System) share to be mounted into your Pod.

gcePdName

Mounts a Google Compute Engine(GCE) into your Pod. You must create PD using gcloud or GCE API or UI before you can use it.

glusterFsPath

Allows a Glusterfs (an open source networked filesystem) volume to be mounted into your Pod.

persistentVolumeClaim

Used to mount a PersistentVolume into a Pod.

awsElasticBlockStore

Mounts an Amazon Web Services(AWS) EBS Volume into your Pod.

azureDisk

Mounts a Microsoft Azure Data Disk into a Pod

azureFile

Mounts a Microsoft Azure File Volume(SMB 2.1 and 3.0 into a Pod.

cephfs

Allows an existing CephFS volume to be mounted into your Pod. You must have your own Ceph server running with the share exported before you can use it.

fc

Allows existing fibre channel volume to be mounted in a Pod. You must configure FC SAN Zoning to allocate and mask those LUNs (volumes) to the target WWNs beforehand so that Kubernetes hosts can access them.

flocker

Flocker is an open source clustered Container data volume manager. A flocker volume allows a Flocker dataset to be mounted into a Pod. You must have your own Flocker installation running before you can use it.

iscsi

Allows an existing ISCSI(SCSI over IP) volume to be mounted into your Pod.

portworxVolume

A portworxVolume is an elastic block storage layer that runs hyperconverged with Kubernetes.

quobyte

Allows existing Quobyte volume to be mounted into your Pod. You must have your own Quobyte setup running the volumes created.

rbd

Allows a Rados Block Device volume to be mounted into your Pod.

scaleIO

ScaleIO is a software-based storage platform that can use existing hardware to create clusters of scalable shared block networked storage. The scaleIO volume plugin allows deployed Pods to access existing ScaleIO volumes.

storageOS

A storageos volume allows an existing StorageOS volume to be mounted into your Pod. You must run the StorageOS container on each node that wants to access StorageOS volumes

vsphereVolume

Used to mount a vSphere VMDK volume into your Pod.

downwardAPI

A downwardAPI volume is used to make downward API data available to applications. It mounts a directory and writes the requested data in plain text files.

14. Integrations

14.1. Dekorate

kubernetes-maven-plugin provides a Zero Configuration approach to delegate deployment manifests generation to Dekorate.

Just by adding a dependency to Dekorate library in the pom.xml file, all manifest generation will be delegated to Dekorate.

  <dependencies>
    <!-- ... -->
    <dependency>
        <groupId>io.dekorate</groupId>
        <artifactId>option-annotations</artifactId>
        <version>${dekorate.version}</version>
      </dependency>
      <dependency>
        <groupId>io.dekorate</groupId>
        <artifactId>openshift-annotations</artifactId>
        <version>${dekorate.version}</version>
      </dependency>
      <dependency>
        <groupId>io.dekorate</groupId>
        <artifactId>kubernetes-annotations</artifactId>
        <version>${dekorate.version}</version>
      </dependency>
      <dependency>
        <groupId>io.dekorate</groupId>
        <artifactId>dekorate-spring-boot</artifactId>
        <version>${dekorate.version}</version>
      </dependency>
  </dependencies>

A full example of the integration can be found in the directory quickstarts/maven/spring-boot-dekorate.

An experimental feature is also provided to merge resources generated both by kubernetes-maven-plugin and Dekorate. You can activate this feature by using the following flag -Djkube.mergeWithDekorate in the command-line, or setting it up as a property (<jkube.mergeWithDekorate>true</jkube.mergeWithDekorate>).

14.2. JIB (Java Image Builder)

kubernetes-maven-plugin also provides user an option to build container images without having access to any docker daemon. You just need to set jkube.build.strategy property to jib. It will delegate the build process to JIB. It creates a tarball inside your target directory which can be loaded into any docker daemon afterwards. You may also push the image to your specified registry using push goal with feature flag enabled.

You can find more details at Spring Boot JIB Quickstart.

15. FAQ

15.1. General questions

15.1.1. How do I define an environment variable?

The easiest way is to add a src/main/jkube/deployment.yml file to your project containing something like:

spec:
  template:
    spec:
      containers:
        -env:
          - name: FOO
            value: bar

The above will generate an environment variable $FOO of value bar

For a full list of the environments used in java base images, see this list

15.1.2. How do I define a system property?

The simplest way is to add system properties to the JAVA_OPTIONS environment variable.

For a full list of the environments used in java base images, see this list

e.g. add a src/main/jkube/deployment.yml file to your project containing something like:

spec:
 template:
   spec:
     containers:
       - env:
         - name: JAVA_OPTIONS
           value: "-Dfoo=bar -Dxyz=abc"

The above will define the system properties foo=bar and xyz=abc

15.1.3. How do I mount a config file from a ConfigMap?

First you need to create your ConfigMap resource via a file src/main/jkube/configmap.yml

data:
  application.properties: |
    # spring application properties file
    welcome = Hello from Kubernetes ConfigMap!!!
    dummy = some value

Then mount the entry in the ConfigMap into your Deployment via a file src/main/jkube/deployment.yml

metadata:
  annotations:
    configmap.jkube.io/update-on-change: ${project.artifactId}
spec:
  replicas: 1
  template:
    spec:
      volumes:
        - name: config
          configMap:
            name: ${project.artifactId}
            items:
            - key: application.properties
              path: application.properties
      containers:
        - volumeMounts:
            - name: config
              mountPath: /deployments/config

Note that the annotation configmap.jkube.io/update-on-change is optional; its used if your application is not capable of watching for changes in the /deployments/config/application.properties file. In this case if you are also running the configmapcontroller then this will cause a rolling upgrade of your application to use the new ConfigMap contents as you change it.

15.1.4. How do I use a Persistent Volume?

First you need to create your PersistentVolumeClaim resource via a file src/main/jkube/foo-pvc.yml where foo is the name of the PersistentVolumeClaim. It might be your app requires multiple vpersistent volumes so you will need multiple PersistentVolumeClaim resources.

spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi

Then to mount the PersistentVolumeClaim into your Deployment create a file src/main/jkube/deployment.yml

spec:
  template:
    spec:
      volumes:
      - name: foo
        persistentVolumeClaim:
          claimName: foo
      containers:
      - volumeMounts:
        - mountPath: /whatnot
          name: foo

Where the above defines the PersistentVolumeClaim called foo which is then mounted into the container at /whatnot

15.1.5. How do I generate Ingress for my generated Service?

Ingress generation is supported by Eclipse JKube for Service objects of type LoadBalancer. In order to generate Ingress you need to enable jkube.createExternalUrls property to true and jkube.domain property to desired host suffix, it would be appended to your service name for host value. You can also also provide a host for it in XML config like this:

<plugin>
  <groupId>org.eclipse.jkube</groupId>
  <artifactId>kubernetes-maven-plugin</artifactId>
  <version>1.0.0</version>

  <configuration>
    <resources>
      <routeDomain>org.eclipse.jkube</routeDomain>
    </resources>

    <enricher>
      <config>
        <jkube-service>
          <type>LoadBalancer</type>
        </jkube-service>
      </config>
    </enricher>
  </configuration>
</plugin>

You can find an example in our spring-boot quickstart in kubernetes-with-ingress profile.

16. Appendix

16.1. Kind/Filename Type Mapping

Kind Filename Type

BuildConfig

bc, buildconfig

ClusterRole

cr, crole, clusterrole

ConfigMap

cm, configmap

ClusterRoleBinding

crb, clusterrb, clusterrolebinding

CronJob

cj, cronjob

CustomResourceDefinition

crd, customerresourcedefinition

DaemonSet

ds, daemonset

Deployment

deployment

DeploymentConfig

dc, deploymentconfig

ImageStream

is, imagestream

ImageStreamTag

istag, imagestreamtag

Ingress

ingress

Job

job

LimitRange

lr, limitrange

Namespace

ns, namespace

OAuthClient

oauthclient

PolicyBinding

pb, policybinding

PersistentVolume

pv, persistentvolume

PersistentVolumeClaim

pvc, persistemtvolumeclaim

Project

project

ProjectRequest

pr, projectrequest

ReplicaSet

rs, replicaset

ReplicationController

rc, replicationcontroller

ResourceQuota

rq, resourcequota

Role

role

RoleBinding

rb, rolebinding

RoleBindingRestriction

rbr, rolebindingrestriction

Route

route

Secret

secret

Service

svc, service

ServiceAccount

sa, serviceaccount

StatefulSet

statefulset

Template

template

Pod

pd, pod

16.2. Custom Kind/Filename Mapping

You can add your custom Kind/Filename mappings. To do it you have two approaches:

  • Setting an environment variable or system property called jkube.mapping pointing out to a .properties files with pairs <kind>⇒filename1>, <filename2> By default if no environment variable nor system property is set, scan for a file located at classpath /META-INF/jkube.kind-filename-type-mapping-default.properties.

  • By embedding in MOJO configuration the mapping:

<plugin>
  <groupId>org.eclipse.jkube</groupId>
  <artifactId>kubernetes-maven-plugin</artifactId>
  <configuration>
    <mappings>
      <mapping>
        <kind>Var</kind>
        <filenameTypes>foo, bar</filenameTypes>
      </mapping>
    </mappings>
  </configuration>
</plugin>