
You have installed a Backup Cluster with two JobSchedulers.

For more information about a Backup Cluster and its installation see

Both JobSchedulers in a Backup Cluster should have the same configuration.
Normally a JobScheduler reads its configuration from its Hot Folder (./config/live), but each JobScheduler has its own Hot Folder.
Any change made in one Hot Folder must therefore also be applied to the other, which is tedious and error-prone.
This article describes the options for configuring the Cluster from a single point.

The job configuration can be stored on a network storage, or you can use the Remote Configuration via a Supervisor JobScheduler.
In the following we assume that the JobSchedulers of the Backup Cluster have the ID mycluster.

Using a network storage

You can change the path of the Hot Folder (default ./config/live).

To do this, edit ./config/scheduler.xml in both JobSchedulers of the Backup Cluster and set the configuration_directory attribute of the config element (see
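A minimal sketch of the relevant part of ./config/scheduler.xml, assuming the usual spooler/config layout; the path /mnt/serverC/live is an example, and all other attributes and child elements of your config element must be kept unchanged:

```xml
<?xml version="1.0" encoding="ISO-8859-1"?>
<spooler>
  <!-- configuration_directory points the Hot Folder to the shared location -->
  <config configuration_directory="/mnt/serverC/live">
    <!-- keep the rest of your existing configuration here unchanged -->
  </config>
</spooler>
```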

For Unix you must mount a corresponding folder (e.g. /mnt/serverC/live) if the new Hot Folder is not local.
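For example, with NFS the mount could look like this (the export path /export/live on serverC is a hypothetical assumption; use the export your file server actually provides):

```shell
# Create the mount point and mount the shared Hot Folder from serverC
sudo mkdir -p /mnt/serverC/live
sudo mount -t nfs serverC:/export/live /mnt/serverC/live
```

For a permanent setup, add a corresponding entry to /etc/fstab so the mount survives a reboot.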

For Windows you can use a UNC path from your network environment (e.g. \\serverC\live) if the new Hot Folder is not local.

Please note: if the new Hot Folder is not local and you start the JobScheduler as a Windows service, you must ensure that the service runs as a user which has network access.

You can use sc.exe to change the JobScheduler service account.
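A sketch using sc.exe; the service name sos_jobscheduler and the password are placeholders — look up the actual service name first. Note that sc.exe requires a space after each `=`:

```shell
REM List installed services to find the JobScheduler service name:
sc.exe query state= all

REM Set the service account (service name and password are placeholders):
sc.exe config "sos_jobscheduler" obj= "domain\domain-user" password= "secret"
```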

The user domain\domain-user must have the "log on as a service" right; otherwise the service fails to start with error 1069.

To learn how to grant the "log on as a service" right to an account, see

Using the Remote Configuration via Supervisor JobScheduler

Please read (Chapter 1.3) for more information.

We assume that a third JobScheduler is installed on serverC with port 4445.

Then edit ./config/scheduler.xml in both JobSchedulers of the Backup Cluster and set the supervisor attribute of the config element to hostname:port (see
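For the example above this would look as follows (assuming the usual spooler/config layout of scheduler.xml; all other attributes and child elements of your config element must be kept unchanged):

```xml
<spooler>
  <!-- supervisor points both cluster members to the Supervisor JobScheduler -->
  <config supervisor="serverC:4445">
    <!-- keep the rest of your existing configuration here unchanged -->
  </config>
</spooler>
```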

You can check the correct hostname:port configuration in the JOC Cockpit Dashboard of the JobScheduler instance that you want to use as the Supervisor:

Now you can store the Cluster configuration on the Supervisor JobScheduler in the directory ./config/remote/mycluster.