

  • This article explains a simplified build process for JOC Cockpit images that is extracted from the SOS build environment.
  • Users can build their own Docker images for JOC Cockpit.

Build Environment

For the build environment the following directory hierarchy is assumed:

The root directory joc can have any name. The build files listed above are available for download. Note that the build script described below will, by default, use the directory name and the release number to determine the resulting image name.
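The image-name derivation can be sketched in isolation like this; the directory and release values are assumptions for illustration, and `tr` is used here as a portable stand-in for the build script's `${JS_RELEASE//\./-}` substitution:

```shell
# Hypothetical sketch (directory and release values are assumptions):
# the image name is built from the build directory name and the release
# number, with dots replaced by dashes.
JS_RELEASE="2.7.1"
BUILD_HOME="/home/user/joc"
JS_IMAGE="$(basename "$BUILD_HOME")-$(echo "$JS_RELEASE" | tr '.' '-')"
echo "$JS_IMAGE"
```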


Download: Dockerfile

Docker images for JS7 JOC Cockpit provided by SOS make use of the following Dockerfile:

Dockerfile for JOC Cockpit Image

FROM alpine:latest AS js7-pre-image

# provide build arguments for release information

# add installer archive file
ADD ${JS_RELEASE_MAJOR}/js7_joc_linux.${JS_RELEASE}.tar.gz /usr/local/src/
# COPY js7_joc_linux.${JS_RELEASE}.tar.gz /usr/local/src/

RUN test -e /usr/local/src/js7_joc_linux.${JS_RELEASE}.tar.gz && \
    tar zxvf /usr/local/src/js7_joc_linux.${JS_RELEASE}.tar.gz -C /usr/local/src/ && \
    rm -f /usr/local/src/js7_joc_linux.${JS_RELEASE}.tar.gz

# copy installer response file, entrypoint script and start script
COPY joc_install.xml /usr/local/src/
COPY /usr/local/src/
COPY /usr/local/src/

# copy configuration
COPY config/ /usr/local/src/resources


FROM alpine:latest AS js7-image

LABEL maintainer="Software- und Organisations-Service GmbH"

# provide build arguments for release information

# image user id has to match later run-time user id

# JS7 user id, ports and Java options

COPY --from=js7-pre-image ["/usr/local/src", "/usr/local/src"]


# install process tools, net tools, bash, openjdk
# add jobscheduler user account and group
# for JDK < 12, /dev/random does not provide sufficient entropy, see
# substitute build arguments in installer response file
# run setup
RUN apk update && apk add --no-cache \
    procps \
    net-tools \
    bash \
    shadow \
    openjdk8 && \
    adduser -u ${JS_USER_ID:-1001} --disabled-password --home /home/jobscheduler --shell /bin/bash jobscheduler jobscheduler && \
    ln -s /usr/local/src/joc.${JS_RELEASE} /usr/local/src/joc && \
    mv /usr/local/src/joc_install.xml /usr/local/src/joc/ && \
    mv /usr/local/src/resources/hibernate.cfg.xml /usr/local/src/joc/ && \
    sed -i 's/securerandom.source=file:\/dev\/random/securerandom.source=file:\/dev\/urandom/g' /usr/lib/jvm/java-1.8-openjdk/jre/lib/security/ && \
    sed -i "s/\s*<entry\s*key\s*=\"jettyPort\".*\/>/<entry key=\"jettyPort\" value=\"$JS_HTTP_PORT\"\/>/g" /usr/local/src/joc/joc_install.xml && \
    cd /usr/local/src/joc && ./ -u joc_install.xml && \
    mv /usr/local/src/ /usr/local/bin/ && \
    mv /usr/local/src/ /usr/local/bin/ && \
    chmod +x /usr/local/bin/ /usr/local/bin/ && \
    mv /usr/local/src/resources/* /var/ && \
    cat /var/ >> /var/ && \
    cat /var/ >> /var/ && \
    sed -i "s/\s*jetty.ssl.port\s*=.*/jetty.ssl.port=$JS_HTTPS_PORT/g" /var/ && \
    java -jar "/opt/" -Djetty.home="/opt/" -Djetty.base="/var/" --add-to-start=https && \
    mv /var/ /var/ && \
    ln -s /var/ /var/ && \
    chown -R jobscheduler:jobscheduler /var/


ENTRYPOINT ["sh", "/usr/local/bin/"]

CMD ["/usr/local/bin/", "--http-port=$RUN_JS_HTTP_PORT", "--https-port=$RUN_JS_HTTPS_PORT", "--java-options=\"$RUN_JS_JAVA_OPTIONS\""]
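The extract-and-remove pattern of the pre-image stage can be tried outside of Docker; all paths below are stand-ins for illustration:

```shell
# Sketch (with stand-in paths) of the pre-image's pattern: unpack the
# archive only if it exists, then delete it so that it does not remain
# in the image layer.
JS_RELEASE="2.7.1"               # assumed release number
SRC=$(mktemp -d)                 # stand-in for /usr/local/src
mkdir -p "$SRC/js7" && echo "demo" > "$SRC/js7/readme.txt"
tar czf "$SRC/js7_joc_linux.$JS_RELEASE.tar.gz" -C "$SRC" js7
test -e "$SRC/js7_joc_linux.$JS_RELEASE.tar.gz" && \
    tar zxf "$SRC/js7_joc_linux.$JS_RELEASE.tar.gz" -C "$SRC" && \
    rm -f "$SRC/js7_joc_linux.$JS_RELEASE.tar.gz"
```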


  • The build script implements two stages to exclude installer files from the resulting image.
  • Line 3: The base image is the current Alpine image at build-time.
  • Line 6 - 7: The release identification is injected by build arguments. This information is used to determine the tarball to be downloaded or copied.
  • Line 10 - 11: You can either download the JOC Cockpit tarball directly from the SOS web site or store the tarball in the build directory and copy it from this location.
  • Line 13 - 15: The tarball is extracted.

  • Line 18: The joc_install.xml response file is copied to the image. This file includes settings for the headless installation of JOC Cockpit, see JS7 - JOC Cockpit Installation On Premises. In fact, a JOC Cockpit installation is performed when building the image.

    JOC Cockpit Installer Response File
     <?xml version="1.0" encoding="UTF-8" standalone="no"?>
     <!--
          XML configuration file for JOC
          If you call the installer with this XML file then
          you accept at the same time the terms of the
          licence agreement under GNU GPL 2.0 License -->
    <AutomatedInstallation langpack="eng">
        <com.izforge.izpack.panels.UserInputPanel id="home">
        <com.izforge.izpack.panels.HTMLLicencePanel id="gpl_licence"/>
        <com.izforge.izpack.panels.TargetPanel id="target">
                <!-- The installation path. It must be absolute!
                     For example:
                     /opt/ on Linux
                     C:\Program Files\\joc on Windows -->
        <com.izforge.izpack.panels.UserInputPanel id="jetty">
                <!-- JOC requires a servlet container such as Jetty.
                     If a servlet container is already installed then you can use it.
                     Otherwise a Jetty will be installed in addition if withJettyInstall=yes.
                     You need root permissions to install JOC with Jetty. -->
                <entry key="withJettyInstall" value="yes"/>
                <entry key="jettyPort" value="4446"/>
                <!-- Specify the name of the Windows service or Linux Daemon (default: joc).
                     Only necessary for multiple instances of JOC on one server. It must be
                     unique per server. This entry is deactivated by a comment:
                     <entry key="jettyServiceName" value="joc"/> -->
                <!-- Only necessary for Windows -->
                <entry key="jettyStopPort" value="44446"/>
                <!-- Only necessary for Unix (root permissions required) -->
                <entry key="withJocInstallAsDaemon" value="yes"/>
                <!-- To enter a JOC User (default=current User).
                                      For Unix only (root permissions required)!!! -->
                <entry key="runningUser" value="jobscheduler"/>
                <!-- Path to Jetty base directory
                                      For example:
                     /home/[user]/ on Linux
                     C:\ProgramData\\joc on Windows -->
                <entry key="jettyBaseDir" value="/var/"/>
                <!-- Choose (yes or no) whether the JOC's Jetty should be (re)started at the end of the installation -->
                <entry key="launchJetty" value="no"/>
                <!-- Java options for Jetty. -->
                <!-- Initial memory pool (-Xms) in MB -->
                <entry key="jettyOptionXms" value="128"/>
                <!-- Maximum memory pool (-Xmx) in MB -->
                <entry key="jettyOptionXmx" value="512"/>
                <!-- Thread stack size (-Xss) in KB -->
                <entry key="jettyOptionXss" value="4000"/>
                <!-- Further Java options -->
                <entry key="jettyOptions" value=""/>
        <com.izforge.izpack.panels.UserInputPanel id="joc">
                <!-- JOC can be installed in a cluster. Please type a unique title to identify the cluster node,
                     e.g. hostname. Max. length is 30 characters -->
                <entry key="jocTitle" value="PRIMARY JOC COCKPIT"/>
                <!-- Choose yes if JOC is a standby node in a cluster -->
                <entry key="isStandby" value="no"/>
                <!-- Security Level for the signing mechanism: possible values are 'LOW', 'MEDIUM' and 'HIGH'
                     HIGH:
                        public PGP keys are stored for verification only
                        all signing will be done externally outside of JOC Cockpit
                     MEDIUM:
                        a private PGP key will be stored for signing
                        signing will be done automatically with the provided key
                     LOW:
                        no keys will be stored
                        signing will be done internally with default keys -->
                <entry key="securityLevel" value="LOW"/>
        <com.izforge.izpack.panels.UserInputPanel id="database">
                <!-- Reporting Database Configuration -->
                <!-- Database connection settings can be specified with the following entries such as
                     databaseHost, databasePort, ... or by a hibernate configuration file.
                     Possible values are 'withoutHibernateFile' (default) and 'withHibernateFile'. -->
                <entry key="databaseConfigurationMethod" value="withoutHibernateFile"/>
                <!-- Choose the database management system. Supported values are 'mysql' for MySQL,
                                      'oracle' for Oracle, 'mssql' for MS SQL Server, 'pgsql' for PostgreSQL.
                     Only if databaseConfigurationMethod=withoutHibernateFile -->
                <entry key="databaseDbms" value="mysql"/>
                <!-- Path to a hibernate configuration file if databaseConfigurationMethod=withHibernateFile -->
                <entry key="hibernateConfFile" value=""/>
                <!-- You can choose between 'on' or 'off' to create the database tables.
                     If you have modified the initial data of an already existing installation,
                     then the modifications will be undone. Data added remains unchanged.
                     This entry should only be 'off' when you are sure that all tables are already created. -->
                <entry key="databaseCreateTables" value="off"/>
        <com.izforge.izpack.panels.UserInputPanel id="dbconnection">
                <!-- Database Configuration if databaseConfigurationMethod=withoutHibernateFile -->
                <!-- Enter the name or ip address of the database host.
                     This entry can also be used to configure the URL(s) for Oracle RAC databases.
                     For example:
                     <entry key="databaseHost" value="(DESCRIPTION=(ADDRESS_LIST=(LOAD_BALANCE=OFF)(FAILOVER=ON)
                     The "databaseSchema" and "databasePort" entries should then be left empty. -->
                <entry key="databaseHost" value=""/>
                <!-- Enter the port number for the database instance. Default ports are for MySQL 3306,
                     Oracle 1521, MS SQL Server 1433, PostgreSQL 5432. -->
                <entry key="databasePort" value=""/>
                <!-- Enter the schema -->
                <entry key="databaseSchema" value=""/>
                <!-- Enter the user name for database access -->
                <entry key="databaseUser" value=""/>
                <!-- Enter the password for database access -->
                <entry key="databasePassword" value=""/>
        <com.izforge.izpack.panels.UserInputPanel id="jdbc">
                <!-- Database Configuration -->
                <!-- If you want to specify an external JDBC connector then set internalConnector=no.
                     For license reasons the MySQL, MS SQL Server and Oracle ojdbc7 JDBC
                     drivers are not provided.
                     Alternatively you can use the MariaDB JDBC driver for MySQL and
                     the jTDS JDBC driver for MS SQL Server which are provided.
                     An Oracle ojdbc6 JDBC driver is also provided. -->
                <!-- You can choose between 'yes' or 'no' for using the internal JDBC connector
                     or not -->
                <entry key="internalConnector" value="yes"/>
                <!-- Select the path to JDBC Driver -->
                <entry key="connector" value=""/>
        <com.izforge.izpack.panels.UserInputPanel id="end">
        <com.izforge.izpack.panels.InstallPanel id="install"/>
        <com.izforge.izpack.panels.ProcessPanel id="process"/>
        <com.izforge.izpack.panels.FinishPanel id="finish"/>
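The Dockerfile substitutes the build argument for the HTTP port into this response file before running the installer. The substitution can be tried in isolation on a reduced stand-in for joc_install.xml:

```shell
# Reduced stand-in for joc_install.xml: only the jettyPort entry matters here.
JS_HTTP_PORT=14445
F=$(mktemp)
cat > "$F" <<'EOF'
<entry key="jettyPort" value="4446"/>
EOF
# same substitution as in the Dockerfile's RUN instruction
sed -i "s/\s*<entry\s*key\s*=\"jettyPort\".*\/>/<entry key=\"jettyPort\" value=\"$JS_HTTP_PORT\"\/>/g" "$F"
cat "$F"
```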
  • Line 19: The entrypoint script is copied from the build directory to the image. Users can apply their own version of the entrypoint script. The entrypoint script used by SOS looks like this:

     JOC Cockpit Entrypoint Script
     JS_USER_ID=`echo $RUN_JS_USER_ID | cut -d ':' -f 1`
     JS_GROUP_ID=`echo $RUN_JS_USER_ID | cut -d ':' -f 2`

     BUILD_GROUP_ID=`cat /etc/group | grep jobscheduler | cut -d ':' -f 3`
     BUILD_USER_ID=`cat /etc/passwd | grep jobscheduler | cut -d ':' -f 3`

     if [ "$(id -u)" = "0" ]
     then
       if [ ! "$BUILD_USER_ID" = "$JS_USER_ID" ]
       then
         echo "JS7 entrypoint script preparing switch of image user id '$BUILD_USER_ID' -> '$JS_USER_ID', group id '$BUILD_GROUP_ID' -> '$JS_GROUP_ID'"
         usermod -u $JS_USER_ID jobscheduler
         groupmod -g $JS_GROUP_ID jobscheduler
         find /var/ -group $BUILD_GROUP_ID -exec chgrp -h jobscheduler {} \;
         find /var/ -user $BUILD_USER_ID -exec chown -h jobscheduler {} \;
         find /var/log/ -group $BUILD_GROUP_ID -exec chgrp -h jobscheduler {} \;
         find /var/log/ -user $BUILD_USER_ID -exec chown -h jobscheduler {} \;
       fi

       echo "JS7 entrypoint script switching to user account 'jobscheduler' to run start script"
       exec su jobscheduler -c "$*"
     else
       if [ "$BUILD_USER_ID" = "$JS_USER_ID" ]
       then
         if [ "$(id -u)" = "$JS_USER_ID" ]
         then
           echo "JS7 entrypoint script running for user id '$(id -u)'"
         else
           echo "JS7 entrypoint script running for user id '$(id -u)' cannot switch to user id '$JS_USER_ID', group id '$JS_GROUP_ID'"
           echo "JS7 entrypoint script missing permission to switch user id and group id, consider to omit the 'docker run --user' option"
         fi
       else
         echo "JS7 entrypoint script running for user id '$(id -u)' cannot switch image user id '$BUILD_USER_ID' -> '$JS_USER_ID', group id '$BUILD_GROUP_ID' -> '$JS_GROUP_ID'"
         echo "JS7 entrypoint script missing permission to switch user id and group id, consider to omit the 'docker run --user' option"
       fi

       exec sh -c "$*"
     fi
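The id handling at the top of the entrypoint script can be sketched in isolation; RUN_JS_USER_ID carries a "uid:gid" pair as forwarded from the docker run --user option (the value below is an assumption):

```shell
# Sketch of the entrypoint's id parsing: cut splits the "uid:gid" pair
# into its user id and group id parts.
RUN_JS_USER_ID="1002:1002"       # assumed value from 'docker run --user'
JS_USER_ID=`echo $RUN_JS_USER_ID | cut -d ':' -f 1`
JS_GROUP_ID=`echo $RUN_JS_USER_ID | cut -d ':' -f 2`
echo "user id: $JS_USER_ID, group id: $JS_GROUP_ID"
```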
  • Line 20: The start script is copied from the build directory to the image. Users can apply their own version of the start script. The start script used by SOS looks like this:

    JOC Cockpit Start Script
     for option in "$@"
     do
       case "$option" in
              --http-port=*)    js_http_port=`echo "$option" | sed 's/--http-port=//'`
                                ;;
              --https-port=*)   js_https_port=`echo "$option" | sed 's/--https-port=//'`
                                ;;
              --java-options=*) js_java_options=`echo "$option" | sed 's/--java-options=//'`
                                ;;
              *)                echo "unknown argument: $option"
                                exit 1
                                ;;
       esac
     done

     if [ ! "$js_http_port" = "" ]
     then
       # enable http access
       sed -i "s/.*--module=http$/--module=http/g" /var/
       # set port for http access
       sed -i "s/.*jetty.http.port\s*=.*/jetty.http.port=$js_http_port/g" /var/
     else
       # disable http access
       sed -i "s/\s*--module=http$/# --module=http/g" /var/
     fi

     if [ ! "$js_https_port" = "" ]
     then
       # enable https access
       sed -i "s/.*--module=https$/--module=https/g" /var/
       # set port for https access
       sed -i "s/\s*jetty.ssl.port\s*=.*/jetty.ssl.port=$js_https_port/g" /var/
     else
       # disable https access
       sed -i "s/\s*--module=https$/# --module=https/g" /var/
     fi

     if [ ! -z "$js_java_options" ]
     then
       export JAVA_OPTIONS="${JAVA_OPTIONS} $js_java_options"
     fi

     echo "starting JOC Cockpit: /opt/ start"
     /opt/ start && tail -f /dev/null
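The start script's HTTP toggle can be tried on a reduced stand-in for Jetty's start.ini: commenting out the --module=http line disables the HTTP connector.

```shell
# Reduced stand-in for Jetty's start.ini; the sed call mirrors the
# start script's "disable http access" branch.
F=$(mktemp)
cat > "$F" <<'EOF'
--module=http
jetty.http.port=4446
EOF
sed -i "s/\s*--module=http$/# --module=http/g" "$F"
cat "$F"
```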
  • Line 23: The config folder available in the build directory is copied to the respective config folder in the image. This can be useful to create an image with individual settings in configuration files, see JS7 - JOC Cockpit Configuration Items.
    • The hibernate.cfg.xml file specifies the database connection. This file is not used at build-time; however, it is provided as a sample for run-time configuration. Find details in the JS7 - Database article.
    • The default https-keystore.p12 and https-truststore.p12 files are copied. They would hold the private key and certificate required for server authentication with HTTPS. By default, empty keystore and truststore files are used to which users add their private keys and certificates at run-time.
  • Line 36 - 39: Defaults for the user id running the JOC Cockpit inside the container as well as HTTP and HTTPS ports are provided. These values can be overwritten by providing the respective build arguments.
  • Line 42 - 45: Environment variables are provided at run-time, not at build-time. They can be used to specify ports and Java options when running the container.
  • Line 56 - 59: The image OS is updated and additional packages are installed (ps, netstat, bash).
  • Line 61: The most recent Java 1.8 package available with Alpine is applied. JOC Cockpit can be operated with newer Java releases; however, stick to Oracle, OpenJDK or AdoptOpenJDK as the source for your Java LTS release. Alternatively you can use your own base image and install Java 1.8 or later on top of it.
  • Line 62: The user account jobscheduler is created and is assigned the user id and group id handed over by the respective build arguments. This means that the account running JOC Cockpit inside the container and the account that starts the container are assigned the same user id and group id. This allows the account running the container to access any files created by JOC Cockpit in mounted volumes with identical permissions.
  • Line 66: Java releases before Java 12 make use of /dev/random for random number generation. This is a bottleneck as random number generation from this source is blocking. Instead, /dev/urandom should be used, which implements non-blocking behavior. The change of the random source is applied to the Java security file.
  • Line 68: The JOC Cockpit setup is performed.
  • Line 73 - 74: The keystore and truststore locations are added to the Jetty start.ini file and file respectively. 
    • start.ini.add is used for access e.g. by client browsers:

      Jetty HTTPS Configuration File start.ini.add
      ## Keystore file path (relative to $jetty.base)
      ## Truststore file path (relative to $jetty.base)
      ## Keystore password
      ## KeyManager password (same as keystore password for pkcs12 keystore type)
      ## Truststore password
      ## Connector port to listen on
    • is used for connections to the Controller should such connections require HTTPS mutual authentication:

      JOC Cockpit configuration File
      ### Location, type and password of the Java truststore which contains the
      ### certificates of each JobScheduler Controller for HTTPS connections. Path can be
      ### absolute or relative to this file.
      keystore_path = ../../resources/joc/https-keystore.p12
      keystore_type = PKCS12
      keystore_password = jobscheduler
      key_password = jobscheduler
      truststore_path = ../../resources/joc/https-truststore.p12
      truststore_type = PKCS12
      truststore_password = jobscheduler
  • Line 75: The HTTPS module is added to the Jetty servlet container for use with JOC Cockpit.
  • Line 82-84: The entrypoint script and start script are executed and dynamically parameterized from environment variables that are forwarded when starting the container.
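The /dev/urandom substitution described above can be tried on a reduced stand-in for the JVM's Java security file (the real path inside the image differs):

```shell
# Reduced stand-in for the java.security file; the sed call mirrors the
# Dockerfile's RUN instruction.
F=$(mktemp)
echo 'securerandom.source=file:/dev/random' > "$F"
sed -i 's/securerandom.source=file:\/dev\/random/securerandom.source=file:\/dev\/urandom/g' "$F"
cat "$F"
```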

Build Script

The build script offers a number of options to parameterize the Dockerfile:

Build Script for JOC Cockpit Image

set -e

SCRIPT_HOME=$(dirname "$0")
SCRIPT_HOME="`cd "${SCRIPT_HOME}" >/dev/null && pwd`"
SCRIPT_FOLDER="`basename $(dirname "$SCRIPT_HOME")`"

# ----- modify default settings -----

JS_IMAGE="$(basename "${SCRIPT_HOME}")-${JS_RELEASE//\./-}"




# ----- modify default settings -----

for option in "$@"
do
  case "$option" in
         --release=*)      JS_RELEASE=`echo "$option" | sed 's/--release=//'`
                           ;;
         --repository=*)   JS_REPOSITORY=`echo "$option" | sed 's/--repository=//'`
                           ;;
         --image=*)        JS_IMAGE=`echo "$option" | sed 's/--image=//'`
                           ;;
         --user-id=*)      JS_USER_ID=`echo "$option" | sed 's/--user-id=//'`
                           ;;
         --http-port=*)    JS_HTTP_PORT=`echo "$option" | sed 's/--http-port=//'`
                           ;;
         --https-port=*)   JS_HTTPS_PORT=`echo "$option" | sed 's/--https-port=//'`
                           ;;
         --java-options=*) JS_JAVA_OPTIONS=`echo "$option" | sed 's/--java-options=//'`
                           ;;
         --build-args=*)   JS_BUILD_ARGS=`echo "$option" | sed 's/--build-args=//'`
                           ;;
         *)                echo "unknown argument: $option"
                           exit 1
                           ;;
  esac
done

set -x

docker build --no-cache --rm \
      --tag=$JS_REPOSITORY:$JS_IMAGE \
      --file=$SCRIPT_HOME/build/Dockerfile \
      --build-arg="JS_RELEASE=$JS_RELEASE" \
      --build-arg="JS_RELEASE_MAJOR=$(echo $JS_RELEASE | cut -d . -f 1,2)" \
      --build-arg="JS_USER_ID=$JS_USER_ID" \
      --build-arg="JS_HTTP_PORT=$JS_HTTP_PORT" \
      --build-arg="JS_HTTPS_PORT=$JS_HTTPS_PORT" \
      --build-arg="JS_JAVA_OPTIONS=$JS_JAVA_OPTIONS" \
      $JS_BUILD_ARGS \
      $SCRIPT_HOME/build

set +x
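The derivation of the major release number used for the download path can be sketched in isolation (the release value is an assumption):

```shell
# Sketch: the major release is cut from the full release number, as in
# the build script's JS_RELEASE_MAJOR --build-arg line.
JS_RELEASE="2.7.1"
JS_RELEASE_MAJOR=$(echo $JS_RELEASE | cut -d . -f 1,2)
echo "$JS_RELEASE_MAJOR"
```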


  • Line 12 - 22: Default values are specified that are used if no command line arguments are provided. This includes values for:
    • the release number: adjust this value to a current release of JS7.
    • the repository which by default is sosberlin:js7.
    • the image name is determined from the current folder name and the release number.
    • the user id is by default the user id of the user running the build script.
    • the HTTP port and HTTPS port: if the respective port is not specified then the JOC Cockpit will not listen to a port for the respective protocol. You can, for example, disable the HTTP protocol by specifying an empty value. The default ports should be fine as they are mapped by the run script to outside ports on the Docker host. However, you can modify ports as you like.
    • Java options: typically you would specify default values e.g. for Java memory consumption. The Java options can be overwritten by the run script when starting the container, however, you might want to create your own image with adjusted default values.
  • Line 27 - 50: The above options can be overwritten by command line arguments like this:

    Running the Build Script with Arguments
    ./ --http-port=14445 --https-port=14443 --java-options="-Xmx1G"
  • Line 54 - 63: The effective docker build command is executed with arguments. The Dockerfile is assumed to be located in the build sub-directory of the current directory.
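The command line parsing shared by the build and start scripts can be sketched in isolation: each --key=value argument is reduced to its value with sed.

```shell
# Sketch of the option parsing used by build and start scripts alike;
# the sample argument value is an assumption.
option="--java-options=-Xmx1G"
JS_JAVA_OPTIONS=`echo "$option" | sed 's/--java-options=//'`
echo "$JS_JAVA_OPTIONS"
```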
