
How to execute Jobs and Orders in Remote JobScheduler Instances?

Question:

How to execute Jobs and Orders in Remote JobScheduler Instances?

Answer:

You need two JobScheduler objects in your local JobScheduler: a process class and a job. You can use the Job Editor to create these objects in your local hot folder (./live). In the following you will find two simple examples:

  • Object 1: Process Class

    In the process class (e.g. with the name 'remote') you must define the host and port of the remote JobScheduler, so that a file ./live/remote.process_class.xml is generated with the following content:

    <?xml version="1.0" encoding="ISO-8859-1"?>
    <process_class max_processes="10"
                remote_scheduler="[your remote host]:[your remote tcp port]"/>
                      
  • Object 2: Job

    In the job (e.g. with the name 'remote' as an independent job and the title 'Remote Execution') you must assign the process class created above, so that a file ./live/remote.job.xml is generated with the following content:

    <?xml version="1.0" encoding="ISO-8859-1"?>
    <job process_class="/remote" title="Remote Execution">
          <script language="shell"><![CDATA[
    echo "remote execution"
          ]]></script>
          <run_time/>
    </job>
                         

    Note that the job could also be configured as an order job for a job chain; a minimal sketch of this variant follows below.
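    A minimal sketch of this variant, with an assumed chain layout and assumed file names (remote_chain.job_chain.xml and remote_chain,start.order.xml are not part of the example above): the job itself would additionally get the attribute order="yes", a job chain would reference it, and an order would then run through the chain on the remote JobScheduler:

    <?xml version="1.0" encoding="ISO-8859-1"?>
    <!-- ./live/remote_chain.job_chain.xml (hypothetical file name) -->
    <job_chain>
        <job_chain_node state="100" job="/remote" next_state="success" error_state="error"/>
        <job_chain_node state="success"/>
        <job_chain_node state="error"/>
    </job_chain>

    <?xml version="1.0" encoding="ISO-8859-1"?>
    <!-- ./live/remote_chain,start.order.xml (hypothetical file name) -->
    <order>
        <run_time/>
    </order>

    The order can then be started manually, e.g. from the JOC interface, or be given start times in its <run_time> element.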

Make sure that ...
  • a JobScheduler is running at the remote host [your remote host] with the port [your remote tcp port] (a simple connectivity check is sketched after this list)

  • the security element (see ./config/scheduler.xml) of both your local and your remote JobScheduler allows this communication:

    local:

    <security ignore_unknown_hosts="yes">
        ...
        <allowed_host host="[your remote host]" level="all"/>
        ...
    </security>

    remote:

    <security ignore_unknown_hosts="yes">
        ...
        <allowed_host host="[your local host]" level="all"/>
        ...
    </security>
                          
  • if a firewall is used, it must be configured so that this communication is allowed.
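For the first point, a quick check is to send an XML command to the remote JobScheduler's TCP port, since the JobScheduler accepts XML commands on that port. A minimal sketch, assuming a Unix shell with netcat (nc) available; the placeholders are the same as above:

    echo "<show_state/>" | nc [your remote host] [your remote tcp port]

If the remote JobScheduler is reachable, it answers with an XML response describing its current state (depending on the netcat variant you may have to close the connection manually).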

You can find more information here:
  • A further HowTo shows how to configure remote execution with SSH; this solution also works for standalone jobs.
  • Another solution is central configuration: a supervisor JobScheduler deploys job configurations to workload JobSchedulers. These workload JobSchedulers execute their jobs independently, so if the supervisor's machine goes down, the workload JobSchedulers can continue working; however, jobs have to be monitored in each JobScheduler separately. A minimal configuration sketch follows below.
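    A hedged sketch of how a workload JobScheduler could be pointed to its supervisor; the host and port values are assumptions, not taken from this page. The supervisor attribute of the config element in ./config/scheduler.xml of the workload JobScheduler names the supervisor, which then deploys its hot folder configuration to the registered workload JobSchedulers:

    <?xml version="1.0" encoding="ISO-8859-1"?>
    <!-- excerpt of ./config/scheduler.xml on a workload JobScheduler (hypothetical values) -->
    <spooler>
        <config supervisor="[supervisor host]:[supervisor tcp port]">
            ...
        </config>
    </spooler>

    The security element of the workload JobScheduler must then also allow the supervisor host (see the security example above).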