

  • Sizing is an important step when planning the installation and operation of JobScheduler.
  • The following guidelines are recommendations (best practices) that have to be mapped to the requirements of your environment.
  • We recommend deciding on the desired target architecture in a first step and then determining the sizing accordingly.



Server Resources

  • The following indicators should be considered:
    • The overall number of jobs that will be available.
    • The number of jobs that will be executed in parallel.
  • For general information on the resource consumption of JobScheduler Master and Agent see CPU and Memory Usage.

Overall number of jobs

  • Impact
    • JobScheduler is designed to operate with up to 20 000 job objects overall. For a higher number of jobs, or to reflect the separation of organisational responsibilities, it is recommended to split the load across multiple JobScheduler instances.
    • A high number of jobs affects the responsiveness of the JOC GUI that displays the objects for jobs, job chains, orders etc. Performance of the JOC GUI can be improved by using the Jetty web server that comes bundled with JobScheduler instead of the built-in web server.
  • Recommendations
    • JobScheduler is designed for re-usability of jobs and job chains. The same jobs and job chains can be started with different parameter sets and can therefore be re-used.
    • Check how you would want to implement or migrate your jobs and job chains. SOS provides Consulting Services to assist you in designing your jobs and job chains.


Number of jobs executed in parallel

  • Impact
    • Starting from your architecture decision the distribution of jobs across Master and Agent instances should be considered.
      • Jobs that are executed on Agents do not result in direct CPU and memory requirements on the Master.
      • Jobs that are executed agentless by SSH require a process that is executed on the Master and a process on the SSH server for execution of the job script.
    • For each job executed in parallel by a JobScheduler instance the following resources are required:
      • A Java Virtual Machine is started that requires at least 32 MB memory. When using API jobs or JITL jobs this value can increase to 64 MB depending on the nature of the job.
      • The scripts and programs that you execute have their own individual CPU and memory requirements.
      • When running jobs for database processing, the load occurs in the database and not in the JobScheduler instance. CPU and memory requirements for such jobs are usually stable independently of the size or duration of the transactions they perform.
  • Recommendations
    • Calculate at least 32 MB memory for each API job or JITL job, e.g. 7 GB memory for approx. 200 parallel job executions. Shell jobs without monitor scripts would not exceed about 20 MB memory each.
    • Consider sharing the load by use of Agents that run on different servers. There is no hard limit on the number of Agents that can be operated with a Master; however, we recommend not to exceed 1024 Agent instances per Master.
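The per-job figures above can be combined into a rough capacity estimate. The following sketch assumes the 32 MB (API/JITL) and 20 MB (shell) figures from this section; the job counts are hypothetical example values, not a recommendation:

```shell
# Rough memory estimate for parallel job execution.
# Job counts below are hypothetical example values.
API_JOBS=150        # parallel API/JITL jobs, approx. 32 MB each
SHELL_JOBS=50       # parallel shell jobs without monitors, approx. 20 MB each
MASTER_MB=200       # JobScheduler Master base footprint (see Memory section)

JOB_MB=$(( API_JOBS * 32 + SHELL_JOBS * 20 ))
TOTAL_MB=$(( JOB_MB + MASTER_MB ))

echo "Job memory: ${JOB_MB} MB"
echo "Total including Master: ${TOTAL_MB} MB"
```

Such an estimate gives a lower bound only; the individual requirements of your job scripts come on top.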


CPU

  • Impact
    • JobScheduler has a low footprint when it comes to CPU usage. Practically no CPU usage applies to idle jobs and monitored directories.
  • Recommendations
    • Do not use a single core CPU system. Situations can occur in which individual job scripts misbehave and consume more CPU than expected. A dual core CPU system keeps process resources available when one core is blocked.


Memory

  • Impact
    • The JobScheduler Master requires about 200 MB memory.
    • The JobScheduler Universal Agent has a footprint of about 100 MB memory.
  • Recommendations
    • Calculate at least 32 MB memory for each API or JITL job, e.g. 7 GB memory for approx. 200 parallel job executions.

Initial Installation

  • File System
    • The size of the installed files should not exceed approx. 200 MB.
  • Database
    • The database initially can be used with as little as 200 MB tablespace.

Ongoing Operation

File System

  • Job related files
    • The overall size of job related files is negligible and would hardly exceed 50 MB.
  • Log files
    • The following indicators affect the estimated size for log file storage:
      • Job frequency: for each job execution a log file is created with approx. 1 KB file size. The log file for a task is overwritten with each execution of the job. Therefore the number of log files for jobs and tasks remains stable in relation to the overall number of jobs.
      • Job log output: The log output of JobScheduler is restricted to a few lines, however, your job scripts (programs, applications) will create individual log output.
      • JobScheduler log files: JobScheduler writes its own log files including a main log for the lifetime of JobScheduler execution and a debug log for analysis purposes.
      • JobScheduler log levels: When debugging jobs then you could increase the log level and cause JobScheduler to create huge logs. 
      • JobScheduler log rotation: Determine the retention period for log files. JobScheduler provides a housekeeping job to rotate and zip log files. A typical log file policy could e.g. keep all log files of the previous week on disk, keep zipped versions available for the last three months and move older log files to some cheaper long-term storage.
    • Recommendations
      • We suggest starting with no less than 2 GB disk space for logs. For a mid-sized environment with e.g. 1000 job executions per day you should not start below 10 GB.
      • If JobScheduler cannot write to its log files, it is blocked from further operation. To prevent such a situation add a disk space monitoring task to your system monitor.
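A disk space monitoring task of the kind suggested above could be sketched as follows. The log directory path and the 2 GB threshold are assumptions to be adapted to your installation:

```shell
# Minimal disk space check for the JobScheduler log directory,
# intended to be run by cron or a system monitor.
# LOG_DIR defaults to /tmp here only so the sketch is runnable;
# point it at your actual log directory.
LOG_DIR=${LOG_DIR:-/tmp}
MIN_FREE_MB=2048    # threshold from the 2 GB recommendation above

# df -P gives portable single-line output; column 4 is free space in 1K blocks.
FREE_MB=$(( $(df -P "$LOG_DIR" | awk 'NR==2 {print $4}') / 1024 ))

if [ "$FREE_MB" -lt "$MIN_FREE_MB" ]; then
    echo "WARNING: only ${FREE_MB} MB free for JobScheduler logs in $LOG_DIR"
    exit 1
fi
echo "OK: ${FREE_MB} MB free in $LOG_DIR"
```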


Database

  • Memory
    • Some DBMS allow tables to be stored in memory. This can improve the overall performance of JobScheduler for frequently used tables.
    • Recommendations
      • We recommend leaving this subject to your DBA. This is a matter of database optimisation and will have a noticeable effect only in high performance environments.
  • Tablespace
    • JobScheduler
      • stores the job execution history in the database.
      • stores the task logs in the database, by default in zipped format. Task logs can be purged by use of a housekeeping job; however, for compliance reasons an individual retention period for task logs in the database might apply.
    • Recommendations
      • Calculate the increase in tablespace size based on the overall number of job executions. Assume 2 KB for job history and task log for each job execution. Add a factor for individual log output of your scripts or programs.
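Based on the 2 KB-per-execution figure above, yearly tablespace growth can be estimated as in this sketch. The daily execution count and the log output factor are hypothetical example values:

```shell
# Estimate yearly tablespace growth for job history and task logs.
# EXECUTIONS_PER_DAY and LOG_FACTOR are hypothetical example values.
EXECUTIONS_PER_DAY=1000
KB_PER_EXECUTION=2      # job history + zipped task log, per the figure above
LOG_FACTOR=3            # multiplier for individual log output of your scripts

YEARLY_MB=$(( EXECUTIONS_PER_DAY * 365 * KB_PER_EXECUTION * LOG_FACTOR / 1024 ))
echo "Estimated tablespace growth: approx. ${YEARLY_MB} MB/year"
```

The log output factor in particular varies strongly between environments; measure the actual size of a few representative task logs before fixing it.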
