
Scope

  • JobScheduler Master is operated in a Docker container.
  • Prerequisites
    • The JobScheduler Master requires a database that is accessible from the container; the database can run on any physical or virtual host or in a Docker container of its own, see the example below.
    • Consider preparing the files indicated in the Build chapter.
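    • A database in a Docker container can, for example, be provided as follows. This is a minimal sketch: the container name mysql-5-7, the network js and the credentials are assumptions chosen to match the installer response file shown below.

      Example: MySQL Database Container
      #!/bin/sh
      
      # create a common network for JobScheduler components
      docker network create js
      
      # run a MySQL 5.7 container; its name makes it resolvable as host "mysql-5-7" within the network
      docker run -d --name=mysql-5-7 --network=js \
          -e MYSQL_ROOT_PASSWORD=root \
          -e MYSQL_DATABASE=jobscheduler113 \
          -e MYSQL_USER=jobscheduler \
          -e MYSQL_PASSWORD=jobscheduler \
          mysql:5.7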

Build

The following files are required for the build context; an example of how to assemble them follows this list:

  • Dockerfile
  • Installer Response file jobscheduler_install.xml with individual installation settings. A template of this file is available when extracting the installer tarball.
  • Start Script start_jobscheduler.sh
  • JobScheduler Master installer tarball as available from SOS for download.
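  • The build context can, for example, be assembled as follows. This is a minimal sketch: the directory name ./build matches the build command shown below and the tarball file name assumes release 1.13.3.

    Assemble Build Context
    #!/bin/sh
    
    # collect the required files in a separate build context directory
    mkdir -p ./build
    cp Dockerfile jobscheduler_install.xml start_jobscheduler.sh ./build/
    cp jobscheduler_linux-x64.1.13.3.tar.gz ./build/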

Dockerfile

  • Download: Dockerfile

    Dockerfile
    FROM openjdk:8
    LABEL maintainer="Software- und Organisations-Service GmbH"
    
    # default user id has to match later run-time user
    ARG USER_ID=$UID
    
    # provide build arguments for release information
    ARG JS_MAJOR=1.13
    ARG JS_RELEASE=1.13.3
    
    # add installer archive file
    COPY jobscheduler_linux-x64.${JS_RELEASE}.tar.gz /usr/local/src/
    RUN test -e /usr/local/src/jobscheduler_linux-x64.${JS_RELEASE}.tar.gz && \
        tar zxvf /usr/local/src/jobscheduler_linux-x64.${JS_RELEASE}.tar.gz -C /usr/local/src/ && \
        rm -f /usr/local/src/jobscheduler_linux-x64.${JS_RELEASE}.tar.gz && \
        ln -s /usr/local/src/jobscheduler.${JS_RELEASE} /usr/local/src/jobscheduler
    
    # for JDK < 12, /dev/random does not provide sufficient entropy, see https://kb.sos-berlin.com/x/lIM3
    RUN rm /dev/random && ln -s /dev/urandom /dev/random
    
    # copy installer response file and run installer
    COPY jobscheduler_install.xml /usr/local/src/jobscheduler/jobscheduler_install.xml
    RUN /usr/local/src/jobscheduler/setup.sh -u /usr/local/src/jobscheduler/jobscheduler_install.xml
    
    # make default user the owner of directories
    RUN groupadd --gid ${USER_ID:-1000} jobscheduler && \
        useradd --uid ${USER_ID:-1000} --gid jobscheduler --home-dir /home/jobscheduler --no-create-home --shell /bin/bash jobscheduler && \
        chown -R jobscheduler:jobscheduler /var/sos-berlin.com
    
    # copy and prepare start script
    COPY start_jobscheduler.sh /usr/local/bin/
    RUN chmod +x /usr/local/bin/start_jobscheduler.sh
    
    # create volumes for data persistence
    VOLUME /var/sos-berlin.com/jobscheduler/testsuite/config/live
    
    # allow incoming traffic to port
    EXPOSE 40444
    
    # run-time user, can be overwritten when running the container
    USER jobscheduler
    
    CMD ["/usr/local/bin/start_jobscheduler.sh"]
  • Explanations
    • Line 1: We start from an official OpenJDK image that includes JDK 8. Newer Java versions can be used, see Which Java versions is JobScheduler available for?
    • Line 5: Consider that $UID provides the numeric ID of the account that performs the JobScheduler Master installation inside the Docker container. This numeric ID typically starts at 1000 and should correspond to the account that is used on the Docker host, i.e. the account on the Docker host and the account inside the container should use the same numeric ID. This mechanism simplifies exposure of the Docker container's file system; a verification example follows this list.
    • Line 8-9: Adjust the JobScheduler release number as required.
    • Line 12-16: The installer tarball is copied to the container and extracted.
    • Line 22-23: The installer response file is copied to the container; for details of this file see the next chapter. Then the installer is executed for the current user.
    • Line 26-28: An account and a group "jobscheduler" are created and handed ownership of the installed files.
    • Line 31-32: The start script is copied to the container, see the Start Script chapter below.
    • Line 35: A volume is indicated for later mapping to a mount point at run-time.
    • Line 38: Port 40444 is exposed for later mapping. This port is used for the connection between JOC Cockpit and JobScheduler Master.
    • Line 41: The account "jobscheduler" that owns the installation becomes the container's run-time user. This account should be mapped at run-time to the account on the Docker host that will mount the exposed volume.
    • Line 43: The start script is executed to launch the JobScheduler Master daemon.
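  • The account mapping can be verified by comparing the numeric ID of the build user with the "jobscheduler" account inside the image. This is a sketch assuming the image has been built and tagged master-1-13 as in the build command below.

    Verify User ID Mapping
    #!/bin/sh
    
    # numeric ID of the current account on the Docker host
    id -u
    
    # numeric ID of the "jobscheduler" account inside the image; both IDs should match
    docker run --rm master-1-13 id -u jobscheduler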

Installer Response File

  • Download: jobscheduler_install.xml

    Installer Response File
    <?xml version="1.0" encoding="UTF-8" standalone="no"?>
    <!-- 
    XML configuration file for JobScheduler setup
    
    The JobScheduler is available with a dual licensing model.
    - GNU GPL 2.0 License (see http://www.gnu.org/licenses/gpl-2.0.html)
    - JobScheduler Commercial License (see licence.txt)
    
    The setup asks you for the desired license model 
    (see <entry key="licenceOptions" .../> below).
    
    If you call the setup with this XML file then you accept 
    at the same time the terms of the chosen license agreement. 
    -->
    <AutomatedInstallation langpack="eng">
        <com.izforge.izpack.panels.UserInputPanel id="home">
            <userInput/>
        </com.izforge.izpack.panels.UserInputPanel>
        <com.izforge.izpack.panels.UserInputPanel id="licences">
            <userInput>
            
                <!-- Select the license model (GPL or Commercial) -->
                <entry key="licenceOptions" value="GPL"/>
                
                <!-- If you selected GPL as license model then the licence must be empty.
                     Otherwise please enter a license key if available.
                     It is also possible to modify the license key later. -->
                <entry key="licence" value=""/>
                
            </userInput>
        </com.izforge.izpack.panels.UserInputPanel>
        <com.izforge.izpack.panels.HTMLLicencePanel id="gpl_licence"/>
        <com.izforge.izpack.panels.HTMLLicencePanel id="commercial_licence"/>
        <com.izforge.izpack.panels.TargetPanel id="target">
            
            <!-- SELECT THE INSTALLATION PATH FOR THE BINARIES AND LIBRARIES
                 The installation expands this path with the Scheduler ID as subdirectory.
                 The path must be absolute!
                 Default paths are
                 /opt/sos-berlin.com/jobscheduler for Unix
                 C:\Program Files\sos-berlin.com\jobscheduler for Windows -->
            <installpath>/opt/sos-berlin.com/jobscheduler</installpath>
            
        </com.izforge.izpack.panels.TargetPanel>
        <com.izforge.izpack.panels.UserPathPanel id="userpath">
            
            <!-- SELECT THE DATA PATH FOR CONFIGURATION AND LOG FILES
                 The installation expands this path with the Scheduler ID as subdirectory.
                 The path must be absolute!
                 Default paths are
                 /home/[user]/sos-berlin.com/jobscheduler for Unix
                 C:\ProgramData\sos-berlin.com\jobscheduler for Windows -->
            <UserPathPanelElement>/var/sos-berlin.com/jobscheduler</UserPathPanelElement>
            
        </com.izforge.izpack.panels.UserPathPanel>
        <com.izforge.izpack.panels.PacksPanel id="package">
        
            <!-- SELECT THE PACKS WHICH YOU WANT TO INSTALL -->
               
            <!-- Package: JobScheduler
                 JobScheduler Basic Installation
                 THIS PACK IS REQUIRED. IT MUST BE TRUE -->
            <pack index="0" name="Job Scheduler" selected="true"/>
            
            <!-- Package: Database Support
                 Job history and log files can be stored in a database. Database support is 
                 available for MySQL, PostgreSQL, Oracle, SQL Server, DB2.
                 THIS PACK IS REQUIRED. IT MUST BE TRUE -->
            <pack index="2" name="Database Support" selected="true"/>
            
            <!-- Package: Housekeeping Jobs
                 Housekeeping Jobs are automatically launched by the Job Scheduler, e.g. to send 
                 buffered logs by mail, to remove temporary files or to restart the JobScheduler. -->
            <pack index="5" name="Housekeeping Jobs" selected="true"/>
            
        </com.izforge.izpack.panels.PacksPanel>
        <com.izforge.izpack.panels.UserInputPanel id="network">
            <userInput>
                <!-- Network Configuration -->
                
                <!-- Enter the port for TCP communication (e.g. 4444) 
                     No longer required! -->
                <entry key="schedulerPort" value=""/>
                
                <!-- Enter the port for HTTP communication -->
                <entry key="schedulerHTTPPort" value="40444"/>
                
                <!-- To enter a JobScheduler ID is required. 
                     The IDs of multiple instances of the JobScheduler must be unique per server. 
                     The JobScheduler ID expands the above installation paths as subdirectory.
                     Please omit special characters like: / \ : ; * ? ! $ % & " < > ( ) | ^ -->
                <entry key="schedulerId" value="testsuite"/>
                
                <!-- Only for Linux (root permissions required) -->
                <entry key="schedulerInstallAsDaemon" value="yes"/>
                	
                <!-- To enter a JobScheduler User (default=current User). 
                     Only for Linux (root permissions required) -->
                <entry key="runningUser" value=""/>
                
                <!-- Specify optional Java options here -->
                <entry key="jsJavaOptions" value=""/>
                
                <!-- It is recommended to enable TCP access for the host where the JobScheduler will install, 
                     optionally enter additional host names or ip addresses. To enable all hosts in your 
                     network to access the JobScheduler enter '0.0.0.0'. -->
                <entry key="schedulerAllowedHost" value="0.0.0.0"/>
                
                <!-- Choose (yes or no) whether the JobScheduler should be started at the end of the installation -->
                <entry key="launchScheduler" value="no"/>
                
            </userInput>
        </com.izforge.izpack.panels.UserInputPanel>
        <com.izforge.izpack.panels.UserInputPanel id="cluster">
            <userInput>
                <!-- Cluster Configuration / Job Streams Plugin -->
                
                <!-- The JobScheduler can be installed independently of other JobSchedulers,
                     as a primary JobScheduler in a backup system or as a backup JobScheduler. 
                     Use '' for a standalone, '-exclusive' for a primary 
                     or '-exclusive -backup' for a backup JobScheduler.
                     A database is required for a backup system. All JobSchedulers in a backup system 
                     must have the same JobScheduler ID and the same database. 
                     Further you can set '-distributed-orders' for a load balancing cluster.
                     For more information see
                     http://www.sos-berlin.com/doc/de/scheduler.doc/backupscheduler.xml
                     http://www.sos-berlin.com/doc/de/scheduler.doc/distributed_orders.xml -->
                <entry key="clusterOptions" value=""/>
                
                <!-- Enable JobStreams plugin.
                     For more information see https://kb.sos-berlin.com/x/uoC2AQ -->
                <entry key="jobStreamsPlugin" value="on"/>
                
            </userInput>
        </com.izforge.izpack.panels.UserInputPanel>
        <com.izforge.izpack.panels.UserInputPanel id="smtp">
            <userInput>
                <!-- Mail Recipients Configuration / SMTP Authentication -->
                
                <!-- Enter the ip address or host name and port (default: 25) of your SMTP server -->
                <entry key="mailServer" value=""/>
                <entry key="mailPort" value="25"/>
                
                <!-- Configure the SMTP authentication if necessary. -->
                <entry key="smtpAccount" value=""/>
                <entry key="smtpPass" value=""/>
                
                <!-- Enter the addresses of recipients to which mails with log files are automatically
                     forwarded. Separate multiple recipients by commas -->
                
                <!-- Account from which mails are sent -->
                <entry key="mailFrom" value=""/>
                
                <!-- Recipients of mails -->
                <entry key="mailTo" value=""/>
                
                <!-- Recipients of carbon copies: -->
                <entry key="mailCc" value=""/>
                
                <!-- Recipients of blind carbon copies -->
                <entry key="mailBcc" value=""/>
                
            </userInput>
        </com.izforge.izpack.panels.UserInputPanel>
        <com.izforge.izpack.panels.UserInputPanel id="email">
            <userInput>
                <!-- Mail Configuration / Event Handler -->
                
                <!-- Choose in which cases mails with log files are automatically forwarded. -->
                <entry key="mailOnError" value="yes"/>
                <entry key="mailOnWarning" value="yes"/>
                <entry key="mailOnSuccess" value="no"/>
                
                <!-- The Housekeeping package is required to configure JobScheduler as an event handler.
                     Choose this option if you intend to use JobScheduler Events and
                     - this JobScheduler instance is the only instance which processes Events
                     - this JobScheduler instance is a supervisor for other JobSchedulers which submit Events -->
                <entry key="jobEvents" value="off"/> 
                
            </userInput>
        </com.izforge.izpack.panels.UserInputPanel>
        <com.izforge.izpack.panels.UserInputPanel id="database">
            <userInput>
                <!-- JobScheduler Database Configuration -->
                
                <!-- Database connection settings can be specified with following entries such as
                     databaseHost, databasePort, ... or by a hibernate configuration file 
                     Possible values are 'withoutHibernateFile' (default) and 'withHibernateFile'. -->
                <entry key="databaseConfigurationMethod" value="withoutHibernateFile"/>
                     
                <!-- Choose the database management system. Supported values are 'mysql' for MySQL,
                     'oracle' for Oracle, 'mssql' for MS SQL Server, 'pgsql' for PostgreSQL,
                     'db2' for DB2 and 'sybase' for Sybase. 
                     Only if databaseConfigurationMethod=withoutHibernateFile -->
                <entry key="databaseDbms" value="mysql"/>
                
                <!-- Path to a hibernate configuration file if databaseConfigurationMethod=withHibernateFile -->
                <entry key="hibernateConfFile" value=""/>
                
                <!-- You can choose between 'on' or 'off' to create the database tables.
                     If you have modified the initial data of an already existing installation, 
                     then the modifications will be undone. Data added remains unchanged. 
                     This entry should only be 'off' when you are sure that all tables are already created. -->
                <entry key="databaseCreate" value="on"/>
                
            </userInput>
        </com.izforge.izpack.panels.UserInputPanel>
        <com.izforge.izpack.panels.UserInputPanel id="dbconnection">
            <userInput>
                <!-- JobScheduler Database Configuration if databaseConfigurationMethod=withoutHibernateFile -->
                     
                <!-- Enter the name or ip address of the database host 
                     This entry can also be used to configure the URL(s) for Oracle RAC databases.
                     For example:
                     <entry key="databaseHost" value="(DESCRIPTION=(ADDRESS_LIST=(LOAD_BALANCE=OFF)(FAILOVER=ON)
                        (ADDRESS=(PROTOCOL=TCP)(HOST=tst-db1.myco.com)(PORT=1604))
                        (ADDRESS=(PROTOCOL=TCP)(HOST=tst-db2.myco.com)(PORT=1604)))
                        (CONNECT_DATA=(SERVICE_NAME=mydb1.myco.com)(SERVER=DEDICATED)))"/>
                     The "databaseSchema" and "databasePort" entries should then be left empty. -->
                <entry key="databaseHost" value="mysql-5-7"/>
                
                <!-- Enter the port number for the database instance. Default ports are for MySQL 3306, 
                     Oracle 1521, MS SQL Server 1433, postgreSQL 5432, DB2 50000, Sybase 5000. -->
                <entry key="databasePort" value="3306"/>
                
                <!-- Enter the schema -->
                <entry key="databaseSchema" value="jobscheduler113"/>
                
                <!-- Enter the user name for database access -->
                <entry key="databaseUser" value="jobscheduler"/>
                
                <!-- Enter the password for database access -->
                <entry key="databasePassword" value="jobscheduler"/>
                
            </userInput>
        </com.izforge.izpack.panels.UserInputPanel>
        <com.izforge.izpack.panels.UserInputPanel id="jdbc">
            <userInput>
                <!-- JobScheduler Database Configuration -->
                     
                <!-- You can specify an external JDBC connector; in this case set internalConnector = no.
                     For license reasons MySQL, Sybase, MS SQL Server and Oracle ojdbc7 JDBC 
                     drivers are not provided. 
                     Alternatively you can use the mariadb JDBC Driver for MySQL and 
                     the jTDS JDBC Driver for MS SQL Server and Sybase which is provided. 
                     An Oracle ojdbc6 JDBC driver is also provided.
                     An internal JDBC connector for DB2 is not available -->
                     
                <!-- You can choose between 'yes' or 'no' for using the internal JDBC connector
                     or not -->
                <entry key="internalConnector" value="yes"/>
                
                <!-- Select the path to JDBC Driver -->
                <entry key="connector" value=""/>
                
                <!-- Only for DB2: Select the path to DB2 license file for JDBC Driver -->
                <entry key="connectorLicense" value=""/>
                
            </userInput>
        </com.izforge.izpack.panels.UserInputPanel>
        <com.izforge.izpack.panels.UserInputPanel id="reportingDatabase">
            <userInput>
                <!-- Reporting Database Configuration 
                     NOT SUPPORTED FOR SYBASE AND DB2 -->
                
                <!-- Set 'yes' if the JobScheduler and the Reporting database are the same.
                     If 'yes' then further Reporting database variables are ignored. -->
                <entry key="sameDbConnection" value="yes"/>
                
                <!-- Database connection settings can be specified with following entries such as
                     databaseHost, databasePort, ... or by a hibernate configuration file 
                     Possible values are 'withoutHibernateFile' (default) and 'withHibernateFile'. -->
                <entry key="reporting.databaseConfigurationMethod" value="withoutHibernateFile"/>            
                     
                <!-- Choose the database management system. Supported values are 'mysql' for MySQL,
                     'oracle' for Oracle, 'mssql' for MS SQL Server, 'pgsql' for PostgreSQL. 
                     only if reporting.databaseConfigurationMethod=withoutHibernateFile-->
                <entry key="reporting.databaseDbms" value="mysql"/>
                
                <!-- Path to a hibernate configuration file if reporting.databaseConfigurationMethod=withHibernateFile -->
                <entry key="reporting.hibernateConfFile" value=""/>
                
            </userInput>
        </com.izforge.izpack.panels.UserInputPanel>
        <com.izforge.izpack.panels.UserInputPanel id="reportingDbconnection">
            <userInput>
                <!-- Reporting Database Configuration if reporting.databaseConfigurationMethod=withoutHibernateFile
                     NOT SUPPORTED FOR SYBASE AND DB2 -->
                     
                <!-- Enter the name or ip address of the database host 
                     This entry can also be used to configure the URL(s) for Oracle RAC databases.
                     For example:
                     <entry key="reporting.databaseHost" value="(DESCRIPTION=(ADDRESS_LIST=(LOAD_BALANCE=OFF)(FAILOVER=ON)
                        (ADDRESS=(PROTOCOL=TCP)(HOST=tst-db1.myco.com)(PORT=1604))
                        (ADDRESS=(PROTOCOL=TCP)(HOST=tst-db2.myco.com)(PORT=1604)))
                        (CONNECT_DATA=(SERVICE_NAME=mydb1.myco.com)(SERVER=DEDICATED)))"/>
                     The "reporting.databaseSchema" and "reporting.databasePort" entries should then be left empty. -->
                <entry key="reporting.databaseHost" value="mysql-5-7"/>
                
                <!-- Enter the port number for the database instance. Default ports are for MySQL 3306, 
                     Oracle 1521, MS SQL Server 1433, postgreSQL 5432. -->
                <entry key="reporting.databasePort" value="3306"/>
                
                <!-- Enter the schema -->
                <entry key="reporting.databaseSchema" value="jobscheduler113"/>
                
                <!-- Enter the user name for database access -->
                <entry key="reporting.databaseUser" value="jobscheduler"/>
                
                <!-- Enter the password for database access -->
                <entry key="reporting.databasePassword" value="jobscheduler"/>
                
            </userInput>
        </com.izforge.izpack.panels.UserInputPanel>
        <com.izforge.izpack.panels.UserInputPanel id="reportingJdbc">
            <userInput>
                <!-- Reporting Database Configuration 
                     NOT SUPPORTED FOR SYBASE AND DB2 -->
                     
                <!-- You can specify an external JDBC connector; in this case set reporting.internalConnector = no.
                     For license reasons MySQL, MS SQL Server and Oracle ojdbc7 JDBC 
                     drivers are not provided. 
                     Alternatively you can use the mariadb JDBC Driver for MySQL and 
                     the jTDS JDBC Driver for MS SQL Server and Sybase which is provided. 
                     An Oracle ojdbc6 JDBC driver is also provided. -->
                     
                <!-- You can choose between 'yes' or 'no' for using the internal JDBC connector
                     or not -->
                <entry key="reporting.internalConnector" value="yes"/>
                     
                <!-- Select the path to JDBC Driver -->
                <entry key="reporting.connector" value=""/>
                
            </userInput>
        </com.izforge.izpack.panels.UserInputPanel>
        <com.izforge.izpack.panels.UserInputPanel id="end">
            <userInput/>
        </com.izforge.izpack.panels.UserInputPanel>
        <com.izforge.izpack.panels.InstallPanel id="install"/>
        <com.izforge.izpack.panels.ProcessPanel id="process"/>
        <com.izforge.izpack.panels.FinishPanel id="finish"/>
    </AutomatedInstallation>
  • Explanations
    • The above installer response file works for release 1.13. Other releases ship with different versions of this file. You should pick up a template of this file that matches your JobScheduler release by extracting the installer tarball, see the example after this list.
    • Generally all defaults of the response file can be maintained.
      • This includes use of port 40444 for the connection of JOC Cockpit to the Master. At run-time this port can be mapped, see Dockerfile.
    • Line 182-233: The database connection makes use of a hostname "mysql-5-7" that is assumed to be the hostname of a Docker container running the MySQL database.
      • Modify the database connection settings as required for use with your DBMS and access credentials.
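  • A matching template can, for example, be extracted from the installer tarball as follows. This is a sketch: the location of the template inside the extracted directory is an assumption for release 1.13.3.

    Extract Response File Template
    #!/bin/sh
    
    # extract the tarball and copy the response file template from the extracted directory
    tar zxvf jobscheduler_linux-x64.1.13.3.tar.gz
    cp jobscheduler.1.13.3/jobscheduler_install.xml .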

Start Script

  • Download: start_jobscheduler.sh

    Start Script
    #!/bin/sh
    
    /opt/sos-berlin.com/jobscheduler/testsuite/bin/jobscheduler.sh start without-change-user && tail -f /dev/null
  • Explanations
    • Line 3: The standard start script jobscheduler.sh is used. The tail command prevents the start script from terminating and thus keeps the container alive; a variant with signal handling is sketched below. The sub-directory testsuite represents the JobScheduler ID that is specified with the above installer response file.
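  • With the above script, a docker stop terminates the container without shutting down the JobScheduler Master cleanly. A variant that traps SIGTERM could look like this; it is a minimal sketch assuming that jobscheduler.sh supports a stop command.

    Start Script with Signal Handling
    #!/bin/sh
    
    JS_BIN=/opt/sos-berlin.com/jobscheduler/testsuite/bin/jobscheduler.sh
    
    # on SIGTERM/SIGINT stop the JobScheduler Master before the container exits
    trap "$JS_BIN stop; exit 0" TERM INT
    
    $JS_BIN start without-change-user
    
    # keep the container alive; "wait" can be interrupted by the trapped signals
    tail -f /dev/null &
    wait $!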

Build Command

There are a number of ways to write a build command; consider the following example:

  • A typical build command could look like this:

    Build Command
    #!/bin/sh
    
    IMAGE_NAME="master-1-13"
    
    docker build --no-cache --rm --tag=$IMAGE_NAME --file=./build/Dockerfile --network=js --build-arg="USER_ID=$UID" ./build
  • Explanations
    • Using a common network for JobScheduler components allows direct access to resources such as ports within the network. The network is required at build time to allow the installer to create and to populate the JobScheduler database.
    • Consider use of the --build-arg option that injects the USER_ID build argument into the image with the numeric ID of the account running the build command. This simplifies later access to the volume exposed by the Dockerfile, as the same numeric user ID and group ID are used inside and outside of the container. The release build arguments declared in the Dockerfile can be overridden in the same way, see the example below.
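  • The release build arguments can, for example, be overridden like this. The release number 1.13.4 is a hypothetical example; the matching tarball has to be present in the build context.

    Build Command for a Different Release
    #!/bin/sh
    
    IMAGE_NAME="master-1-13"
    
    docker build --no-cache --rm --tag=$IMAGE_NAME --file=./build/Dockerfile --network=js \
        --build-arg="USER_ID=$UID" --build-arg="JS_MAJOR=1.13" --build-arg="JS_RELEASE=1.13.4" ./build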

Run

There are a number of ways to write a run command; consider the following example:

  • Before starting the container consider creating a Docker volume to persistently store configuration files in the live folder of the JobScheduler Master installation:

    • Create Docker Volume
      #!/bin/sh
      
      IMAGE_NAME="master-1-13"
      
      docker volume create --name=$IMAGE_NAME
      
      ln -s /var/lib/docker/volumes/$IMAGE_NAME/_data/ ./live
      
      sudo chmod o+r /var/lib/docker
      sudo chmod o+rx /var/lib/docker/volumes
      sudo chown -R $USER:$USER /var/lib/docker/volumes/$IMAGE_NAME
    • Explanations
      • Create the Docker volume with an arbitrary name, e.g. the image name. 
      • Create a symlink live that points to the Docker volume. Alternatively the mount point can be determined with docker volume inspect, see the example below.
      • Adjust permissions to allow the current user to read/execute from Docker volumes. Make the current user the owner of the newly created volume.
      • This configuration exposes the Master's live directory through the Docker volume and allows the current user to add, update and delete configuration files in the live folder.
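    • The symlink can alternatively be created from the output of docker volume inspect instead of hard-coding the path below /var/lib/docker. This is a sketch; the permission adjustments shown above may still be required.

      Determine Volume Mount Point
      #!/bin/sh
      
      IMAGE_NAME="master-1-13"
      
      # resolve the host path that backs the Docker volume and point the symlink to it
      ln -s "$(docker volume inspect --format '{{ .Mountpoint }}' $IMAGE_NAME)" ./live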
  • A typical run command could look like this:

    Run Command
    #!/bin/sh
    
    IMAGE_NAME="master-1-13"
    RUN_USER_ID="$(id -u $USER):$(id -g $USER)"
    
    mkdir -p /some/path/logs
    
    docker run -dit --rm --user=$RUN_USER_ID --hostname=$IMAGE_NAME --network=js --publish=50444:40444 --mount="type=volume,src=$IMAGE_NAME,dst=/var/sos-berlin.com/jobscheduler/testsuite/config/live" --volume=/some/path/logs:/var/log/sos-berlin.com/jobscheduler/testsuite:Z --name=$IMAGE_NAME $IMAGE_NAME
  • Explanations
    • Using a common network for JobScheduler components allows direct access to resources such as ports within the network.
    • The RUN_USER_ID variable is populated with the numeric IDs of the account and group that execute the run command. This value is assigned to the --user option to inject the account information into the container (replacing the account specified with the USER jobscheduler instruction in the Dockerfile).
    • Port 40444 for access to the JobScheduler Master by JOC Cockpit can optionally be mapped to an outside port. This is not required if a common network is used.
    • Specify a logs directory to be created that is referenced with the --volume option to expose the log directory of the JobScheduler Master for reading. Consider that the testsuite sub-directory is created from the value of the JobScheduler ID that is specified with the installer response file. Avoid modifying log files or adding new files in this directory.
    • The --mount option is used to map a previously created Docker volume (assumed to be the value of the src= option) e.g. to the JobScheduler Master's live folder. Job-related files written to this directory are automatically picked up by the JobScheduler Master, see the example below.
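  • After the container is up, job configurations can be deployed by copying them to the exposed live folder. This is a sketch: the file name example_job.job.xml stands for an arbitrary job configuration.

    Deploy a Job and Follow the Log
    #!/bin/sh
    
    # copy a job configuration into the exposed live folder; the Master picks it up automatically
    cp example_job.job.xml ./live/
    
    # follow the container output
    docker logs -f master-1-13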


