
General

Two or more JobSchedulers can be operated as a cluster.
All the JobSchedulers in a cluster must have the same JobScheduler ID and use the same database.
Further, each JobScheduler in a cluster must be started with one of the following cluster options:

  • -exclusive
  • -exclusive -backup
  • -distributed-orders

See http://www.sos-berlin.com/doc/en/scheduler.doc/command_line.xml for more information about JobScheduler start options.
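
For orientation, and anticipating the SCHEDULER_CLUSTER_OPTIONS environment variable described in the section "How can I set the cluster start option?" below, the three option sets could look roughly as follows on a cluster member (a sketch only; values consisting of several options need to be quoted on Unix):

 # ./user_bin/jobscheduler_environment_variables.sh (Unix)

 # JobScheduler Cluster: every member
 SCHEDULER_CLUSTER_OPTIONS=-exclusive

 # JobScheduler Backup Cluster: backup members only
 SCHEDULER_CLUSTER_OPTIONS="-exclusive -backup"

 # JobScheduler Load Balancing Cluster: every member
 SCHEDULER_CLUSTER_OPTIONS=-distributed-orders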

Three different types of JobScheduler cluster can be configured, depending on the cluster start options used:

JobScheduler Clusters

All JobSchedulers of this cluster type have the start option -exclusive.

Only the primary JobScheduler will be active once all the JobSchedulers in this type of cluster have been started.

The other JobSchedulers will wait for activation.

We assume that all JobSchedulers in this type of cluster have the same configuration (Jobs, Job Chains, Orders, etc.).

If the active JobScheduler terminates for whatever reason, one of the others will automatically become active and take over all job and order states. That is, the 'new' JobScheduler will know whether Jobs or Job Chains are stopped or active, whether Job Chain Nodes are stopped, skipped or active, and whether Orders are suspended or active and which step they were at.
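
As a minimal sketch, assuming two installations on different hosts that were set up with the same JobScheduler ID and the same database connection, both members would simply set the exclusive option:

 # Host A and Host B: ./user_bin/jobscheduler_environment_variables.sh
 # (both installations use the same JobScheduler ID and database)
 SCHEDULER_CLUSTER_OPTIONS=-exclusive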

JobScheduler Backup Clusters

Here we have a primary JobScheduler with the start option -exclusive.

All other Backup JobSchedulers have the start options -exclusive -backup -backup-precedence=n, where n is a number.

The option -backup-precedence is optional and the number n defines the order in which the backup JobSchedulers become active.

Only the primary JobScheduler will be active once all the JobSchedulers in this type of cluster have been started.

We assume that all JobSchedulers in the cluster have the same configuration (Jobs, Job Chains, Orders, etc.).

A Backup JobScheduler in this type of cluster will not become active when the primary JobScheduler terminates normally.
However, if the primary JobScheduler is aborted or its process is killed (e.g. the server crashes), then the (next) Backup JobScheduler will become active and take over all job and order states. That is, the 'new' JobScheduler will know whether Jobs or Job Chains are stopped or active, whether Job Chain Nodes are stopped, skipped or active, and whether Orders are suspended or active and which step they were at.

If a Backup JobScheduler is active and the primary JobScheduler is restarted then the Backup JobScheduler has to be terminated in order to reactivate the primary.

See also http://www.sos-berlin.com/doc/en/scheduler.doc/backupscheduler.xml
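
As a sketch of a Backup Cluster with one primary and two backup members (the -backup-precedence values only illustrate how an activation order could be defined):

 # Primary JobScheduler: ./user_bin/jobscheduler_environment_variables.sh
 SCHEDULER_CLUSTER_OPTIONS=-exclusive

 # First Backup JobScheduler
 SCHEDULER_CLUSTER_OPTIONS="-exclusive -backup -backup-precedence=1"

 # Second Backup JobScheduler
 SCHEDULER_CLUSTER_OPTIONS="-exclusive -backup -backup-precedence=2"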

JobScheduler Load Balancing Clusters (Distributed Orders)

All JobSchedulers in this type of cluster have the start option -distributed-orders.

All the JobSchedulers in this type of cluster will be active once they all have been started.

Each JobScheduler in this cluster will handle its own Jobs independently, with the exception of Orders for Job Chains that are configured as distributed.

See also http://www.sos-berlin.com/doc/en/scheduler.doc/distributed_orders.xml and http://www.sos-berlin.com/doc/en/scheduler.doc/xml/job_chain.xml#attribute_distributed
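
As a sketch, every member of a Load Balancing Cluster would set the option shown below; Orders are only distributed for Job Chains that are configured as distributed (see the job_chain documentation linked above), while all other Jobs run independently on each member:

 # Every member of the Load Balancing Cluster:
 # ./user_bin/jobscheduler_environment_variables.sh
 SCHEDULER_CLUSTER_OPTIONS=-distributed-orders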

How can I set the cluster start option?

You can set cluster options during installation of the JobScheduler.
After the installation has been completed, cluster options can be set by editing the file ./user_bin/jobscheduler_environment_variables.(sh|cmd).

Example: ./user_bin/jobscheduler_environment_variables.sh on Unix

 SCHEDULER_CLUSTER_OPTIONS=-exclusive

Example: ./user_bin/jobscheduler_environment_variables.cmd on Windows

 SET SCHEDULER_CLUSTER_OPTIONS=-exclusive
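
If several cluster options are combined, e.g. for a Backup JobScheduler, the value has to be quoted in the Unix shell script, while the Windows command script does not need quotes (a sketch using the same files as above):

 SCHEDULER_CLUSTER_OPTIONS="-exclusive -backup -backup-precedence=1"

 SET SCHEDULER_CLUSTER_OPTIONS=-exclusive -backup -backup-precedence=1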

See also http://www.sos-berlin.com/doc/en/scheduler_installation.pdf

The Cluster tab in JOC

If you open the JobScheduler Operating Center (JOC) of a JobScheduler that is a member of a cluster, then a Cluster tab will be displayed, showing all the cluster members.

Example: Backup Cluster
