The event table is a new table for JeDi which keeps track of job progress at the event level. We are planning to use the table for the event server and for event-level job splitting. Here is the first result of the performance test for the event table. The table was created in INTR with the following schema:
PANDAID                  NOT NULL NUMBER(11)
FILEID                   NOT NULL NUMBER(11)
JOB_PROCESSID            NOT NULL NUMBER(10)
DEF_MIN_EVENTID
DEF_MAX_EVENTID
PROCESSED_UPTO_EVENTID
where PANDAID and FILEID are the IDs in the job and file tables, JOB_PROCESSID is the ID of a subprocess, DEF_MIN_EVENTID and DEF_MAX_EVENTID define the range of events assigned to the subprocess, and PROCESSED_UPTO_EVENTID records how many events have been processed so far. The primary key is the combination of PANDAID, FILEID, and JOB_PROCESSID. The table is range-partitioned on PANDAID. It is index-organized as well as partitioned, which is handy for avoiding row-by-row deletion and index-tree fragmentation, since an old partition can be dropped as a whole. Each partition holds 1 million PandaIDs.
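As a minimal illustration, the record layout, the primary key, and the partitioning rule can be sketched in Python. The class and method names are hypothetical; only the column names, the primary-key combination, and the 1-million-PandaID partition size come from the description above:

```python
from dataclasses import dataclass

PANDAIDS_PER_PARTITION = 1_000_000  # each partition holds 1 million PandaIDs


@dataclass
class EventRangeRecord:
    """One row of the event table; fields mirror the schema above."""
    panda_id: int                   # PANDAID
    file_id: int                    # FILEID
    job_process_id: int             # JOB_PROCESSID
    def_min_eventid: int            # first event of the assigned range
    def_max_eventid: int            # last event of the assigned range
    processed_upto_eventid: int = 0  # progress reported by the pilot

    def primary_key(self):
        # PK is the combination of PANDAID, FILEID, and JOB_PROCESSID
        return (self.panda_id, self.file_id, self.job_process_id)

    def partition_index(self):
        # Range partitioning on PANDAID, 1M PandaIDs per partition
        return self.panda_id // PANDAIDS_PER_PARTITION
```

This makes explicit why partition dropping is cheap: all rows of a given PandaID range land in the same partition, so cleanup never has to touch individual rows.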
The idea of the event server is shown in Fig. 1.
Figure 1. A schematic view of the event server
In the event server scheme, multiple pilots process the same job and file in parallel, but each of them takes care of a different range of events. When the panda server receives a request from a pilot, it sends a range of events (e.g., DEF_MIN_EVENTID=0 and DEF_MAX_EVENTID=99) to the pilot together with the job specification, and one record is inserted into the event table. The pilot sends a heartbeat every N processed events, so that PROCESSED_UPTO_EVENTID of the record is updated in the event table. When another pilot comes, the panda server scans the event table and, if there are events remaining for the job and file, sends a new range of events (e.g., DEF_MIN_EVENTID=100 and DEF_MAX_EVENTID=299) to that pilot.
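The bookkeeping described above can be sketched with an in-memory stand-in for the event table. The function names, the dictionary layout, and the fixed chunk size are illustrative, not the real panda server code (in practice the range sizes can vary, as the 100-event and 200-event examples show):

```python
# In-memory emulation of the event-table logic: one "table" per
# (PANDAID, FILEID), holding one record per dispatched event range.
tables = {}  # (pandaid, fileid) -> list of range records


def get_event_range(pandaid, fileid, n_events_in_file, chunk=100):
    """Hand the next unassigned event range to a requesting pilot,
    or return None if no events remain for this job and file."""
    records = tables.setdefault((pandaid, fileid), [])
    next_min = records[-1]["def_max_eventid"] + 1 if records else 0
    if next_min >= n_events_in_file:
        return None  # nothing left: the pilot gets no range
    rec = {
        "job_processid": len(records),
        "def_min_eventid": next_min,
        "def_max_eventid": min(next_min + chunk - 1, n_events_in_file - 1),
        "processed_upto_eventid": next_min - 1,  # nothing processed yet
    }
    records.append(rec)  # emulates the INSERT into the event table
    return rec


def heartbeat(pandaid, fileid, job_processid, upto):
    """Pilot heartbeat: emulates the UPDATE of PROCESSED_UPTO_EVENTID."""
    tables[(pandaid, fileid)][job_processid]["processed_upto_eventid"] = upto
```

A second pilot requesting work on the same (pandaid, fileid) simply gets the range starting right after the last dispatched DEF_MAX_EVENTID, which is the scan-and-assign behavior described above.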
A script was implemented to emulate the interactions between the panda server and the database. The script spawned 5000 child processes so that 1000 jobs were processed in parallel, i.e., 5 child processes were used for each job. Each child process sent a heartbeat every 2 seconds. The script processed roughly 0.4 million jobs per day, which corresponds to half the number of jobs processed per day in the current system. Note that INTR is hosted on a low-performance machine since it is a testbed, and not all jobs will use the event server scheme. Although the result might already be acceptable, we will continue stress tests to see if further optimization is possible.
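A much-reduced, single-process version of such an emulation script might look as follows. Threads stand in for the 5000 child processes, the counts are scaled down, and a per-event heartbeat cadence replaces the real 2-second timer; all names and numbers here are illustrative except the 5-pilots-per-job ratio:

```python
import threading

N_JOBS = 4              # the real test ran 1000 jobs in parallel
PILOTS_PER_JOB = 5      # 5000 child processes / 1000 jobs
EVENTS_PER_RANGE = 20   # toy range size; the real ranges were larger
HEARTBEAT_EVERY = 5     # events between heartbeats (real test: every 2 s)

progress = {}           # (job, pilot) -> last event id reported
lock = threading.Lock()


def pilot(job, pilot_id):
    """Process one event range, reporting progress periodically."""
    first = pilot_id * EVENTS_PER_RANGE
    for ev in range(first, first + EVENTS_PER_RANGE):
        if (ev - first + 1) % HEARTBEAT_EVERY == 0:
            with lock:  # emulates the UPDATE on the event table
                progress[(job, pilot_id)] = ev


threads = [threading.Thread(target=pilot, args=(j, p))
           for j in range(N_JOBS) for p in range(PILOTS_PER_JOB)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

The real script replaced the in-memory dictionary with actual INSERT/UPDATE statements against the event table in INTR, which is what the measured 0.4 million jobs per day reflects.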