
Scope

  • The JobScheduler Agent is operated in a Docker container.
  • Prerequisites
    • Consider preparing the files indicated in the Build chapter.

Build

The following files are required for the build context:

  • Dockerfile
  • Start Script start_jobscheduler_agent.sh
  • JobScheduler Agent tarball as available from SOS for download.
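  • The required files can be collected in a local build context directory, for example ./build as referenced by the build command below. A minimal sketch, assuming release 1.13.3-SNAPSHOT (adjust the directory name and release number to your environment):

    Build Context (example)
    #!/bin/sh
    
    # assemble the build context; the directory has to match the path
    # that is handed over to the docker build command
    mkdir -p ./build
    cp Dockerfile start_jobscheduler_agent.sh ./build/
    cp jobscheduler_unix_universal_agent.1.13.3-SNAPSHOT.tar.gz ./build/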

Dockerfile

  • Download: Dockerfile

    Dockerfile
    FROM openjdk:8
    LABEL maintainer="Software- und Organisations-Service GmbH"
    
    # default user id has to match later run-time user
    ARG USER_ID=$UID
    
    # provide build arguments for release information
    ARG JS_MAJOR=1.13
    ARG JS_RELEASE=1.13.3-SNAPSHOT
    
    # setup working directory
    RUN mkdir -p /var/sos-berlin.com
    WORKDIR /var/sos-berlin.com
    
    # add and extract tarball
    COPY jobscheduler_unix_universal_agent.${JS_RELEASE}.tar.gz /usr/local/src/
    RUN test -e /usr/local/src/jobscheduler_unix_universal_agent.${JS_RELEASE}.tar.gz && \
        tar xfvz /usr/local/src/jobscheduler_unix_universal_agent.${JS_RELEASE}.tar.gz && \
        rm /usr/local/src/jobscheduler_unix_universal_agent.${JS_RELEASE}.tar.gz
    
    # make default user the owner of directories
    RUN groupadd --gid ${USER_ID:-1000} jobscheduler && \
        useradd --uid ${USER_ID:-1000} --gid jobscheduler --home-dir /home/jobscheduler --no-create-home --shell /bin/bash jobscheduler && \
        chown -R jobscheduler:jobscheduler /var/sos-berlin.com
    
    # copy and prepare start script
    COPY start_jobscheduler_agent.sh /usr/local/bin/
    RUN chmod +x /usr/local/bin/start_jobscheduler_agent.sh
    
    # prepare logs directory
    RUN mkdir -p /var/sos-berlin.com/jobscheduler_agent/var_4445/logs && chown -R jobscheduler:jobscheduler /var/sos-berlin.com/jobscheduler_agent/var_4445/logs
    
    # expose volume for storage persistence
    # VOLUME /var/sos-berlin.com/jobscheduler_agent/var_4445
    
    # allow incoming traffic to port
    EXPOSE 4445
    
    # run-time user, can be overwritten when running the container
    USER jobscheduler
    
    CMD ["/usr/local/bin/start_jobscheduler_agent.sh"]
  • Explanations
    • Line 1: We start from the official OpenJDK image that includes JDK 8. Newer Java versions can be used, see Which Java versions is JobScheduler available for?
    • Line 5: Consider that $UID provides the numeric ID of the account for which the JobScheduler Agent installation inside the Docker container is performed. This numeric ID typically starts at or above 1000 and should correspond to the account used on the Docker Host, i.e. the account on the Docker Host and the account inside the container should use the same numeric ID. This mechanism simplifies exposure of the Docker container's file system, see the verification sketch after this list.
    • Lines 8-9: Adjust the JobScheduler release number as required.
    • Lines 16-19: The Agent tarball is copied to the container and extracted.
    • Lines 22-24: An account and group "jobscheduler" are created and handed ownership of the installed files.
    • Lines 27-28: The start script is copied to the container, see the Start Script chapter below.
    • Line 37: Port 4445 is exposed for later mapping. This port is used for the connection between JobScheduler Master and Agent.
    • Line 40: The account "jobscheduler" that owns the installation is used as the run-time user. This account should be mapped at run-time to the account on the Docker Host that will mount the exposed volume.
    • Line 42: The start script is executed to launch the JobScheduler Agent daemon.
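  • As explained for Line 5, the numeric ID of the account on the Docker Host and of the run-time account inside the container should match. A minimal verification sketch, assuming the image name agent-1-13-4445 from the build command below:

    Verify User ID (example)
    #!/bin/sh
    
    # numeric ID of the account on the Docker Host
    id -u
    
    # numeric ID of the run-time account inside the container;
    # both commands should report the same value
    docker run --rm agent-1-13-4445 id -u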

Start Script

  • Download: start_jobscheduler_agent.sh

    Start Script
    #!/bin/sh
    
    /var/sos-berlin.com/jobscheduler_agent/bin/jobscheduler_agent.sh start -http-port=4445 && tail -f /dev/null
  • Explanations
    • Line 3: The standard start script jobscheduler_agent.sh is used. The tail command prevents the start script from terminating in order to keep the container alive.
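  • A possible variant of the start script reacts to the SIGTERM signal that is sent by the docker stop command and shuts the Agent down. This is a sketch only; it assumes that jobscheduler_agent.sh offers a stop command:

    Start Script with Graceful Shutdown (sketch)
    #!/bin/sh
    
    # on SIGTERM (sent by "docker stop") shut down the Agent and exit;
    # assumption: jobscheduler_agent.sh offers a "stop" command
    trap '/var/sos-berlin.com/jobscheduler_agent/bin/jobscheduler_agent.sh stop; exit 0' TERM INT
    
    # start the Agent as in the standard start script
    /var/sos-berlin.com/jobscheduler_agent/bin/jobscheduler_agent.sh start -http-port=4445
    
    # keep the container alive; "wait" returns when a signal arrives
    while true; do
        sleep 1 & wait $!
    done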

Build Command

There are a number of ways to write a build command; consider the following example:

  • A typical build command could look like this:

    Build Command
    #!/bin/sh
    
    IMAGE_NAME="agent-1-13-4445"
    
    docker build --no-cache --rm --tag=$IMAGE_NAME --file=./build/Dockerfile --network=js --build-arg="USER_ID=$UID" ./build
  • Explanations
    • Using a common network for JobScheduler components allows direct access to resources such as ports within the network, see the sketch after this list for creating the network.
    • Consider use of the --build-arg option that injects the USER_ID build argument into the image with the numeric ID of the account running the build command. This simplifies access to the volume that can optionally be exposed by the Dockerfile, as the same numeric user ID and group ID are used inside and outside of the container.
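  • The network referenced with the --network option has to exist before it is used. A minimal sketch for creating the network and checking the resulting image, assuming the network name js and the image name from the above build command:

    Create Network and Check Image (example)
    #!/bin/sh
    
    # create the user-defined network once; the name has to match
    # the --network option used with the build and run commands
    docker network create js
    
    # after the build: verify that the image is available
    docker image ls agent-1-13-4445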

Run

There are a number of ways to write a run command; consider the following example:

  • A typical run command could look like this:

    Run Command
    #!/bin/sh
    
    IMAGE_NAME="agent-1-13-4445"
    RUN_USER_ID="$(id -u $USER):$(id -g $USER)"
    
    mkdir -p /some/path/logs
    
    docker run -dit --rm --user=$RUN_USER_ID --hostname=$IMAGE_NAME --network=js --publish=5445:4445 --volume=/some/path/logs:/var/sos-berlin.com/jobscheduler_agent/var_4445/logs:Z --name=$IMAGE_NAME $IMAGE_NAME
  • Explanations
    • Using a common network for JobScheduler components allows direct access to resources such as ports within the network.
    • The RUN_USER_ID variable is populated with the numeric IDs of the account and group that execute the run command. This value is assigned to the --user option to inject the account information into the container (replacing the account specified with the USER jobscheduler instruction in the Dockerfile).
    • A logs directory is created on the Docker Host and referenced with the --volume option to expose the log directory of the JobScheduler Agent for reading, see the sketch after this list.
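  • After start-up the container status and the Agent log output can be checked, and the container can be stopped when no longer needed. A minimal sketch, assuming the container name agent-1-13-4445 and the log directory /some/path/logs from the above run command:

    Check and Stop Container (example)
    #!/bin/sh
    
    IMAGE_NAME="agent-1-13-4445"
    
    # check that the container is up and that port 4445 is published to 5445
    docker ps --filter "name=$IMAGE_NAME"
    
    # list and follow the Agent log files from the volume exposed to the Docker Host
    ls -l /some/path/logs
    tail -f /some/path/logs/*.log
    
    # stop the container; the --rm option of the run command removes it on stop
    docker stop $IMAGE_NAME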

