
JobScheduler's memory and CPU usage depends on a number of factors:

  • The number of jobs defined and the number of orders which have a schedule or run-time.
  • The type of job. Internal API jobs consume more resources than pure shell jobs, but if a shell job uses pre- or post-processing, it consumes a similar amount of resources to an internal API job. The file transfer jobs that come with JITL (JobScheduler Integrated Template Library) are internal API jobs.
  • Whether a job is executed remotely or locally. All remotely executed jobs, e.g. those executed by an agent or by another JS instance on a remote node, behave like API jobs.
  • The number of tasks running in parallel.
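As a sketch of the difference between these job types, a pure shell job and a shell job with pre-processing might look as follows in JobScheduler's XML job configuration (the job names, script bodies, and monitor script are illustrative assumptions, not from the original text):

```xml
<!-- A pure shell job: lightweight, no internal API engine involved -->
<job name="simple_shell">
  <script language="shell"><![CDATA[
    echo "processing..."
  ]]></script>
</job>

<!-- The same shell job with a pre-processing monitor: the monitor script
     runs in the internal API engine, so its resource usage is closer
     to that of an API job -->
<job name="shell_with_monitor">
  <monitor name="check_preconditions" ordering="0">
    <script language="java:javascript"><![CDATA[
      function spooler_process_before() {
        // return false here to skip the task's shell step
        return true;
      }
    ]]></script>
  </monitor>
  <script language="shell"><![CDATA[
    echo "processing..."
  ]]></script>
</job>
```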

Note that JS uses a special "order processing" concept. Order processing can reduce the workload on a machine, both for JS itself and in terms of the number of tasks. An order is nothing more than a container for the parameters needed to execute a job or a job chain. A JS task can process more than one order without terminating and restarting, which means that using orders can reduce the number of jobs, because generic jobs can be created. An order has a schedule like a job and can also be executed on a JobScheduler cluster.
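To illustrate, an order is typically defined as a small XML file holding parameters and a run time (the parameter name, value, and start time below are hypothetical):

```xml
<!-- Hypothetical order for a generic job chain:
     just parameters plus a schedule -->
<order>
  <params>
    <param name="input_file" value="/data/incoming/report.csv"/>
  </params>
  <run_time>
    <!-- start this order once a day at 10:00 -->
    <period single_start="10:00"/>
  </run_time>
</order>
```

Because the task processing this order can pick up the next order without restarting, many such orders can share one generic job.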

It is also possible to improve performance by limiting the number of jobs and tasks running at the same time on a node. One approach is to define a process class for JS. For example, if you define a process class named "abc" and set the number of concurrent tasks to 10, JS will start no more than 10 tasks for the jobs assigned to that class, even if more jobs are waiting to be scheduled.
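For the example above, such a process class and a job assigned to it could be sketched roughly as follows (the file name and job content are assumptions based on JobScheduler's XML configuration conventions):

```xml
<!-- abc.process_class.xml: at most 10 tasks of this class run concurrently -->
<process_class max_processes="10"/>

<!-- A job assigned to the class; JS queues further tasks
     once 10 are already running -->
<job process_class="abc">
  <script language="shell"><![CDATA[
    echo "running with limited concurrency"
  ]]></script>
</job>
```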

Three examples illustrate how these factors come together to influence performance:

  • We ran tests with tasks executing in parallel on one of our systems, starting with 500 shell jobs in parallel. Our test system had no problem with up to 1500 parallel jobs, but when we started more than 1500, JOC (the JobScheduler Operations Center) began to slow down.
  • A customer in the USA has approx. 1500 jobs defined, each of them with a schedule. These jobs run more or less around the clock on Red Hat Linux with an Oracle 11 database. The CPU allocation for a single processor is nearly 50%. Note that the current version of JobScheduler cannot use more than one CPU, even if more than one is available.
  • A third example: a customer drives a series of InDesign servers with JS. More than 1000 orders are started and run for a short time – typically a few seconds. The orders use the same task sequentially without incurring a setup time for each order. Here, JS runs on a Windows server and the workload it generates is relatively small.
One further point that has an impact on the performance of JS is the size of the history database and the performance of the database server. We recommend compacting the database from time to time, e.g. once a week or perhaps once a day, depending on the number of job runs.