Introduction
Consider a situation in which a large number of similar data records are to be processed one after the other. A typical example is credit card transactions from a cash terminal or a retail checkout. A standard way to speed up the processing of such records is to split each one into its constituent parts and process each part separately. With financial transactions, each record usually consists of a header, a body and a footer; the header and footer are of fixed length, while the length of the body varies with the number of items in the transaction.
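To make the splitting step concrete, here is a minimal sketch in plain JavaScript. The field widths and the record layout are hypothetical illustrations, not a real payment format:

```javascript
// Split a flat transaction record into header, body and footer.
// HEADER_LEN and FOOTER_LEN are assumed fixed-width values for this example.
function splitRecord(record) {
    var HEADER_LEN = 8;   // assumed fixed header length
    var FOOTER_LEN = 4;   // assumed fixed footer length
    return {
        header: record.substring(0, HEADER_LEN),
        body:   record.substring(HEADER_LEN, record.length - FOOTER_LEN),
        footer: record.substring(record.length - FOOTER_LEN)
    };
}

var parts = splitRecord("HDR00001ITEM-A;ITEM-B;ITEM-CFT01");
// parts.header → "HDR00001"
// parts.body   → "ITEM-A;ITEM-B;ITEM-C"  (variable length)
// parts.footer → "FT01"
```

Each part can then be handed to its own processing job, while the records themselves are worked through serially.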
Here we have a situation with a combination of parallel and serial processing as shown in the following diagram:
[Schematic diagram: Serial_Job_Execution_with_Locks_-_Overview.png (image not available)]
...
```javascript
function spooler_process() {
    try {
        // If no lock named FILE_PROC exists yet, create and register it
        if (!spooler.locks().lock_or_null('FILE_PROC')) {
            var lock = spooler.locks().create_lock();
            lock.set_name('FILE_PROC');
            spooler.locks().add_lock(lock);
            if (spooler_task.try_hold_lock('FILE_PROC')) {
                return true;
            } else {
                // Lock is currently held elsewhere:
                // re-run this task as soon as the lock becomes available
                spooler_task.call_me_again_when_locks_available();
            }
        } else {
            // The lock already exists: set the order back for a later retry
            spooler_task.order().setback();
        }
        return true;
    } catch (e) {
        spooler_log.warn("error occurred: " + String(e));
        return false;
    }
}
```
The 'release_lock' job
```javascript
function spooler_process() {
    try {
        // Remove the FILE_PROC lock if it exists, so that waiting tasks can proceed
        if (spooler.locks().lock_or_null('FILE_PROC')) {
            spooler.locks().lock('FILE_PROC').remove();
        }
        return true;
    } catch (e) {
        spooler_log.warn("error occurred: " + String(e));
        return false;
    }
}
```
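As an alternative to creating the lock through the API as above, classic JobScheduler also allows a lock to be declared statically in the configuration and referenced by a job with a `lock.use` element. The following fragment is an illustrative sketch; the job name is hypothetical, and only the lock name `FILE_PROC` is taken from the article:

```xml
<!-- Illustrative configuration sketch: declare the lock once,
     then have the job hold it exclusively for the duration of each task. -->
<config>
  <locks>
    <lock name="FILE_PROC"/>
  </locks>
  <jobs>
    <job name="process_file">
      <lock.use lock="FILE_PROC" exclusive="yes"/>
      <script language="javascript"><![CDATA[
        function spooler_process() {
          // This runs only while the FILE_PROC lock is held exclusively
          return true;
        }
      ]]></script>
    </job>
  </jobs>
</config>
```

With this approach the acquire/release logic is handled declaratively by the scheduler, so no separate release job is needed.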
See also:
- Best Practice - JOE for guidelines about creating job chains.
- Internal API Job Implementation Tutorial.
- The Rhino section of the JobScheduler API Reference documentation.
- Getting started with the JobScheduler Java API Reference Impl
- The Locks section of the JobScheduler reference documentation.