...

  • Download: Dockerfile

    Code Block
    languagebash
    titleDockerfile
    linenumberstrue
    collapsetrue
    FROM openjdk:8
    LABEL maintainer="Software- und Organisations-Service GmbH"
    
    # default user id has to match later run-time user
    ARG USER_ID=$UID
    
    # provide build arguments for release information
    ARG JS_MAJOR=1.13
    ARG JS_RELEASE=1.13.3-SNAPSHOT
    
    # add installer archive file
    COPY jobscheduler_linux-x64.${JS_RELEASE}.tar.gz /usr/local/src/
    RUN test -e /usr/local/src/jobscheduler_linux-x64.${JS_RELEASE}.tar.gz && \
        tar zxvf /usr/local/src/jobscheduler_linux-x64.${JS_RELEASE}.tar.gz -C /usr/local/src/ && \
        rm -f /usr/local/src/jobscheduler_linux-x64.${JS_RELEASE}.tar.gz && \
        ln -s /usr/local/src/jobscheduler.${JS_RELEASE} /usr/local/src/jobscheduler
    
    # for JDK < 12, /dev/random does not provide sufficient entropy, see https://kb.sos-berlin.com/x/lIM3
    RUN rm /dev/random && ln -s /dev/urandom /dev/random
    
    # copy installer response file and run installer
    COPY jobscheduler_install.xml /usr/local/src/jobscheduler/jobscheduler_install.xml
    RUN /usr/local/src/jobscheduler/setup.sh -u /usr/local/src/jobscheduler/jobscheduler_install.xml
    
    # make default user the owner of directories
    RUN groupadd --gid ${USER_ID:-1000} jobscheduler && \
        useradd --uid ${USER_ID:-1000} --gid jobscheduler --home-dir /home/jobscheduler --no-create-home --shell /bin/bash jobscheduler && \
        chown -R jobscheduler:jobscheduler /var/sos-berlin.com
    
    # copy and prepare start script
    COPY start_jobscheduler.sh /usr/local/bin/
    RUN chmod +x /usr/local/bin/start_jobscheduler.sh
    
    # create volumes for data persistence
    VOLUME /var/sos-berlin.com/jobscheduler/testsuite/config/live
    
    # allow incoming traffic to port
    EXPOSE 40444
    
    # run-time user, can be overwritten when running the container
    USER jobscheduler
    
    CMD ["/usr/local/bin/start_jobscheduler.sh"]
  • Explanations
    • Line 1: We start from an official OpenJDK image that includes JDK 8. Newer Java versions can be used, see Which Java versions is JobScheduler available for?
    • Line 5: Consider that $UID provides the numeric ID of the account for which the JobScheduler Master installation inside the Docker container is performed. This numeric ID typically starts at 1000 and should correspond to the account that is used on the Docker host, i.e. the account on the Docker host and the account inside the container should use the same numeric ID. This mechanism simplifies exposure of the Docker container's file system; a verification sketch follows after this list.
    • Lines 8-9: Adjust the JobScheduler release number as required.
    • Lines 12-16: The installer tarball is copied to the container and extracted.
    • Line 19: /dev/random is replaced by a symlink to /dev/urandom, as /dev/random does not provide sufficient entropy for JDK versions before 12, see https://kb.sos-berlin.com/x/lIM3.
    • Lines 22-23: The installer response file is copied to the container; for details about this file see the next chapter. The installer is then executed for the current user.
    • Lines 26-28: An account and group "jobscheduler" are created that are handed ownership of the installed files.
    • Lines 31-32: The start script is copied to the container, see the chapter Start Script below.
    • Line 35: A volume is indicated for later mapping to a mount point at run-time.
    • Line 38: Port 40444 is exposed for later mapping. This port is used for the connection between JOC Cockpit and JobScheduler Master.
    • Line 41: The account "jobscheduler" that owns the installation is set as the default run-time user. This account should be mapped at run-time to the account on the Docker host that will mount the exposed volume.
    • Line 43: The start script is executed to launch the JobScheduler Master daemon.
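  • A minimal sketch for verifying that the numeric IDs match, assuming the image has been built with the name master-1-13 as used in the run command below:

    Code Block
    languagebash
    titleChecking User IDs (sketch)
    # numeric ID of the current account on the Docker host
    id -u
    
    # numeric ID of the "jobscheduler" account inside the image; both IDs should match
    docker run --rm master-1-13 id -u jobscheduler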

...

  • Download: start_jobscheduler.sh

    Code Block
    languagebash
    titleStart Script
    linenumberstrue
    #!/bin/sh
    
    /opt/sos-berlin.com/jobscheduler/testsuite/bin/jobscheduler.sh start without-change-user && tail -f /dev/null
  • Explanations
    • Line 3: The standard start script jobscheduler.sh is used. The tail command prevents the start script from terminating in order to keep the container alive. The sub-directory testsuite corresponds to the JobScheduler ID that is specified with the installer response file above.
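  • As the tail command keeps the container alive after the Master has been started, the Master can be addressed with its regular start script at run-time. A minimal sketch for a graceful shutdown, assuming the container name master-1-13 from the run command below and that jobscheduler.sh accepts a stop command:

    Code Block
    languagebash
    titleStopping the Master (sketch)
    docker exec master-1-13 /opt/sos-berlin.com/jobscheduler/testsuite/bin/jobscheduler.sh stop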

Build Command

There are a number of ways to write a build command; consider the following example:

...

  • Explanations
    • Using a common network for JobScheduler components allows direct access to resources such as ports within the network. The network is required at build time to allow the installer to create and to populate the JobScheduler database.
    • Consider use of the --build-arg option that injects the USER_ID build argument into the image with the numeric ID of the account running the build command. This simplifies later access to the volume exposed by the Dockerfile, as the same numeric user ID and group ID are used inside and outside of the container; see the sketch below.
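  • A minimal sketch of such a build command follows; the image name master-1-13 and the network name js are assumptions aligned with the run command below:

    Code Block
    languagebash
    titleBuild Command (sketch)
    #!/bin/sh
    
    IMAGE_NAME="master-1-13"
    
    # the js network gives the installer access to the JobScheduler database;
    # the USER_ID build argument carries the numeric ID of the current account
    docker build --network=js --build-arg="USER_ID=$(id -u)" --tag=$IMAGE_NAME .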

...

  • A typical run command could look like this:

    Code Block
    languagebash
    titleRun Command
    linenumberstrue
    #!/bin/sh
    
    IMAGE_NAME="master-1-13"
    RUN_USER_ID="$(id -u $USER):$(id -g $USER)"
    
    mkdir -p /some/path/logs
    
    docker run -dit --rm --user=$RUN_USER_ID --hostname=$IMAGE_NAME --network=js --publish=50444:40444 --mount="type=volume,src=$IMAGE_NAME,dst=/var/sos-berlin.com/jobscheduler/testsuite/config/live" --volume=/some/path/logs:/var/log/sos-berlin.com/jobscheduler/testsuite:Z --name=$IMAGE_NAME $IMAGE_NAME
  • Explanations
    • Using a common network for JobScheduler components allows direct access to resources such as ports within the network.
    • The RUN_USER_ID variable is populated with the numeric IDs of the account and group that execute the run command. This value is assigned to the --user option to inject the account information into the container (replacing the account specified with the USER jobscheduler instruction in the Dockerfile).
    • Port 40444 for access to JobScheduler Master by JOC Cockpit can optionally be mapped to some outside port. This is not required if a network is used.
    • Specify a logs directory to be created; it is referenced with the --volume option to expose the JobScheduler Master's log directory for reading. Consider that the testsuite sub-directory is created from the value of the JobScheduler ID that is specified with the installer response file. Avoid modifying log files in this directory or adding new files.
    • The --mount option is used to map a previously created Docker volume (assumed to be the value of the src= option) e.g. to the JobScheduler Master's live folder. This allows job-related files to be read from and written to this directory; such files are automatically picked up by JobScheduler Master. The sketch below shows how the network and the volume can be created beforehand.
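  • The network and the named volume referenced by the run command have to exist beforehand; a minimal sketch, assuming the names used above:

    Code Block
    languagebash
    titleCreating Network and Volume (sketch)
    #!/bin/sh
    
    # common network for JobScheduler components, referenced by --network=js
    docker network create js
    
    # named volume that is mapped to the Master's live folder by the --mount option
    docker volume create master-1-13
  • Once the container is running, a job configuration file can be placed in the live folder e.g. with docker cp, from where it is picked up by JobScheduler Master; the file name some_job.job.xml is a hypothetical example:

    Code Block
    languagebash
    titleAdding a Job File (sketch)
    docker cp some_job.job.xml master-1-13:/var/sos-berlin.com/jobscheduler/testsuite/config/live/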

...