<HTML>
<BODY BGCOLOR=white>
<PRE>
<!-- Manpage converted by man2html 3.0.1 -->
NAME
     sched_conf - Grid  Engine  default  scheduler  configuration
     file

DESCRIPTION
     <I>sched</I>_<I>conf</I> defines the configuration file  format  for  Grid
     Engine's  default  scheduler  provided by <B><A HREF="../htmlman8/sge_schedd.html">sge_schedd(8)</A></B>.  In
     order to modify the configuration, use the  graphical  user
     interface <B><A HREF="../htmlman1/qmon.html">qmon(1)</A></B> or the -<I>msconf</I> option of the <B><A HREF="../htmlman1/qconf.html">qconf(1)</A></B> com-
     mand. A default configuration is provided together with  the
     Grid Engine distribution package.

FORMAT
     The following parameters are recognized by the  Grid  Engine
     scheduler if present in <I>sched</I>_<I>conf</I>:

  algorithm
     Allows for the selection  of  alternative  scheduling  algo-
     rithms.

     Currently default is the only allowed setting.

  load_formula
     A simple  algebraic  expression  used  to  derive  a  single
     weighted  load value from all or part of the load parameters
     reported by <B><A HREF="../htmlman8/sge_execd.html">sge_execd(8)</A></B> for each host and from all or  part
     of  the  consumable  resources  (see <B><A HREF="../htmlman5/complex.html">complex(5)</A></B>) being main-
     tained for each host.  The load formula expression syntax is
     that of a summation of weighted load values, that is:

          {w1|load_val1[*w1]}[{+|-}{w2|load_val2[*w2]}[{+|-}...]]

     Note that no blanks are allowed in the load formula.
     The load values and consumable  resources  (load_val1,  ...)
     are  specified  by the name defined in the complex (see <I>com-</I>
     <B><A HREF="../htmlman5/plex.html">plex(5)</A></B>).
     Note: Administrator defined load values (see the load_sensor
     parameter   in   <B><A HREF="../htmlman5/sge_conf.html">sge_conf(5)</A></B>  for  details)  and  consumable
     resources available for all hosts (see  <B><A HREF="../htmlman5/complex.html">complex(5)</A></B>)  may  be
     used as well as Grid Engine default load parameters.
     The weighting factors (w1, ...) are positive integers. After
     the  expression  is  evaluated for each host the results are
     assigned to the  hosts  and  are  used  to  sort  the  hosts
     according  to  the weighted load. The sorted host list is in
     turn used to sort the queues.
     The default load formula is "np_load_avg".
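     As an illustration only (the weights and the choice  of  load
     values are arbitrary, and the name/value layout shown is  the
     usual form of a sched_conf entry), a formula that ranks hosts
     mainly by the long-term CPU load, with the  short-term  load
     np_load_short as a secondary contribution, could read:

          load_formula              np_load_avg*3+np_load_short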

  job_load_adjustments
     The load imposed by the Grid Engine jobs running on a system
     varies over time and often, e.g. for the CPU load, takes some
     time to be reported in the appropriate quantity by the oper-
     ating  system.  Consequently, if a job was started very re-
     cently, the reported load may not sufficiently represent the
     load which that job already imposes on the host. The reported
     load will adapt to the real load over time, but  during  the
     period in which the reported load is still too low, the host
     may already become oversubscribed. Grid Engine  allows  the
     administrator to specify job_load_adjustments which are used
     in the Grid Engine scheduler to compensate for this problem.
     The job_load_adjustments are specified as a comma  separated
     list  of  arbitrary  load parameters or consumable resources
     and (separated by an equal sign) an associated load  correc-
     tion  value.  Whenever  a  job  is  dispatched  to a host by
     <B><A HREF="../htmlman8/sge_schedd.html">sge_schedd(8)</A></B>, the load parameter and consumable  value  set
     of  that  host  is  increased  by the values provided in the
     job_load_adjustments  list.  These  correction  values   are
     decayed linearly over time until, after load_adjustment_decay_time
     has passed since the job start, the corrections  reach  the
     value 0.  If the job_load_adjustments list is assigned  the
     special value NONE, no load corrections are performed.
     The adjusted load and consumable values are used to  compute
     the  combined  and  weighted  load  of  the  hosts  with the
     load_formula (see above) and to compare the load and consum-
     able  values against the load threshold lists defined in the
     queue   configurations   (see   <B><A HREF="../htmlman5/queue_conf.html">queue_conf(5)</A></B>).    If    the
     load_formula consists simply of the default CPU load average
     parameter <I>np</I>_<I>load</I>_<I>avg</I>, and if  the  jobs  are  very  compute
     intensive,  one  might  want to set the job_load_adjustments
     list to <I>np</I>_<I>load</I>_<I>avg</I>=<I>1</I>.<I>00</I>, which means  that  every  new  job
     dispatched  to  a host will require 100 % CPU time, and thus
     the machine's load is instantly increased by 1.00.

  load_adjustment_decay_time
     The load  corrections  in  the  "job_load_adjustments"  list
     above  are  decayed linearly over time from the point of the
     job start, where the corresponding load or consumable param-
     eter  is  raised by the full correction value, until after a
     time  period  of  "load_adjustment_decay_time",  where   the
     correction      becomes     0.     Proper     values     for
     "load_adjustment_decay_time" greatly depend upon the load or
     consumable   parameters  used  and  the  specific  operating
     system(s). Therefore, they can only  be  determined  on-site
     and experimentally.  For the default <I>np</I>_<I>load</I>_<I>avg</I> load param-
     eter a "load_adjustment_decay_time" of 7 minutes has  proven
     to yield reasonable results.
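     As a combined illustration of the two parameters (the  values
     are examples only; the time syntax is that of <B><A HREF="../htmlman5/queue_conf.html">queue_conf(5)</A></B>),
     the following entries apply the full correction of  one  CPU
     at job start and decay it to zero over seven minutes:

          job_load_adjustments         np_load_avg=1.00
          load_adjustment_decay_time   0:7:0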

  maxujobs
     The maximum number of jobs any user may have  running  in  a
     Grid  Engine cluster at the same time. If set to 0 (default)
     the users may run an arbitrary number of jobs.

  schedule_interval
     At  the  time  <B><A HREF="../htmlman8/sge_schedd.html">sge_schedd(8)</A></B>  initially  registers  with
     <B><A HREF="../htmlman8/sge_qmaster.html">sge_qmaster(8)</A></B>,  schedule_interval  is  used to set the time
     interval in  which  <B><A HREF="../htmlman8/sge_qmaster.html">sge_qmaster(8)</A></B>  sends  scheduling  event
     updates  to  <B><A HREF="../htmlman8/sge_schedd.html">sge_schedd(8)</A></B>.   A scheduling event is a status
     change that has occurred  within  <B><A HREF="../htmlman8/sge_qmaster.html">sge_qmaster(8)</A></B>  which  may
     trigger  or  affect scheduler decisions (e.g. a job has fin-
     ished and thus the allocated resources are available again).
     In the Grid  Engine  default  scheduler  the  arrival  of  a
     scheduling  event  report  triggers  a  scheduler  run.  The
     scheduler waits for event reports otherwise.
     Schedule_interval is a time value (see <B><A HREF="../htmlman5/queue_conf.html">queue_conf(5)</A></B>  for  a
     definition of the syntax of time values).
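     For example, an event update interval of 15 seconds (an arbi-
     trary value chosen here for illustration) would be written as:

          schedule_interval         0:0:15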

  queue_sort_method
     This parameter determines in which  order  several  criteria
     are  taken  into  account  to  produce  a sorted queue list.
     Currently, two settings are valid:  seqno and load.  However,
     in  both  cases, Grid Engine attempts to maximize the number
     of soft requests (see <B><A HREF="../htmlman1/qsub.html">qsub(1)</A></B> -soft option) being fulfilled
     by the queues for a particular job as the primary criterion.
     Then, if the queue_sort_method parameter is  set  to  seqno,
     Grid  Engine  will use the seq_no parameter as configured in
     the current queue configurations (see <B><A HREF="../htmlman5/queue_conf.html">queue_conf(5)</A></B>) as  the
     next criterion to sort the queue list. The load_formula (see
     above) is only meaningful if two queues have equal  sequence
     numbers.  If queue_sort_method is set to  load,  the  load
     according to the load_formula is the criterion  after  maxi-
     mizing a job's soft requests, and the sequence number is only
     used if two hosts have the same load.  The sequence number sort-
     ing  is  most  useful if you want to define a fixed order in
     which queues are to be filled (e.g.   the cheapest  resource
     first).

     The default for this parameter is load.
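     For instance, to fill queues in the fixed order given by their
     seq_no values (see <B><A HREF="../htmlman5/queue_conf.html">queue_conf(5)</A></B>), one would set:

          queue_sort_method         seqno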

  halftime
     When executing under a share  based  policy,  the  scheduler
     "ages"  (i.e. decreases) usage to implement a sliding window
     for achieving the share entitlements as defined by the share
     tree.  The halftime defines the time interval in which accu-
     mulated usage will have been decayed to  half  its  original
     value.  Valid  values are specified in hours or according to
     the time format as specified in <B><A HREF="../htmlman5/queue_conf.html">queue_conf(5)</A></B>.
     If the value is set to 0, the usage is not decayed.

  usage_weight_list
     Grid Engine accounts for the consumption  of  the  resources
     CPU-time,  memory  and  IO  to  determine the usage which is
     imposed on a system by a job. A single usage value  is  com-
     puted  from  these three input parameters by multiplying the
     individual values by weights and adding them up. The weights
     are defined in the usage_weight_list. The format of the list
     is

          cpu=wcpu,mem=wmem,io=wio

     where wcpu, wmem and wio are the configurable  weights.  The
     weights are real numbers. The sum of all three weights should
     be 1.
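     An illustrative setting that weights the  three  usage  types
     almost equally (note that the weights sum to 1) is:

          usage_weight_list         cpu=0.34,mem=0.33,io=0.33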

  compensation_factor
     Determines how fast Grid Engine should compensate  for  past
     usage  below  or  above the share entitlement defined in the
     share tree. Recommended values are between 2 and  10,  where
     10 means faster compensation.

  weight_user
     The relative importance of the user shares in the functional
     policy.  Values are of type real.

  weight_project
     The relative importance of the project shares in  the  func-
     tional policy.  Values are of type real.

  weight_department
     The relative importance of  the  department  shares  in  the
     functional policy. Values are of type real.

  weight_job
     The relative importance of the job shares in the  functional
     policy. Values are of type real.

  weight_tickets_functional
     The maximum number of functional tickets available for  dis-
     tribution by Grid Engine. Determines the relative importance
     of the functional policy. See under <B><A HREF="../htmlman5/sge_priority.html">sge_priority(5)</A></B>  for  an
     overview on job priorities.

  weight_tickets_share
     The maximum number of share based tickets available for dis-
     tribution by Grid Engine. Determines the relative importance
     of the share tree policy. See under <B><A HREF="../htmlman5/sge_priority.html">sge_priority(5)</A></B>  for  an
     overview on job priorities.

  weight_deadline
     The weight applied to the remaining time until a job's latest
     start  time. Determines the relative importance of the dead-
     line. See under  <B><A HREF="../htmlman5/sge_priority.html">sge_priority(5)</A></B>  for  an  overview  on  job
     priorities.

  weight_waiting_time
     The weight applied to a job's waiting  time  since  submis-
     sion.  Determines  the  relative  importance  of the waiting
     time.  See under <B><A HREF="../htmlman5/sge_priority.html">sge_priority(5)</A></B>  for  an  overview  on  job
     priorities.

  weight_urgency
     The weight applied to a job's normalized urgency when deter-
     mining the priority finally used.  Determines the  relative
     importance of urgency.  See under <B><A HREF="../htmlman5/sge_priority.html">sge_priority(5)</A></B> for an overview
     on job priorities.

  weight_ticket
     The weight applied to the normalized ticket amount when  de-
     termining the priority finally used.  Determines the relative
     importance of the ticket policies. See under <B><A HREF="../htmlman5/sge_priority.html">sge_priority(5)</A></B>
     for an overview on job priorities.
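     The following block shows how the weighting parameters  above
     might be combined in sched_conf. All numbers are illustrative
     only and need to be tuned per site; here the ticket  policies
     and urgency dominate while the waiting time is ignored:

          weight_tickets_functional    10000
          weight_tickets_share         0
          weight_deadline              3600000.0
          weight_waiting_time          0.0
          weight_urgency               0.1
          weight_ticket                0.01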

  flush_finish_sec
     This parameter is provided for tuning the system's  schedul-
     ing  behavior.  By default, a scheduler run is triggered at
     each schedule_interval. When this parameter is set to  1  or
     larger, the scheduler will be triggered that  many  seconds
     after a job has finished. Setting this parameter to  0  dis-
     ables the flush after a job has finished.

  flush_submit_sec
     This parameter is provided for tuning the system's  schedul-
     ing  behavior.  By default, a scheduler run is triggered at
     each schedule_interval. When this parameter is set to  1  or
     larger, the scheduler will be triggered that  many  seconds
     after a job was submitted to the system. Setting this param-
     eter to 0 disables the flush after a job was submitted.
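     For example, to trigger a scheduling run one second after each
     job submission and one second after each job end (illustrative
     values), one could set:

          flush_submit_sec             1
          flush_finish_sec             1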

  schedd_job_info
     The default scheduler can keep track of why jobs  could  not
     be scheduled during the last scheduler run.  This  parameter
     enables or disables the observation.  The value true enables
     the monitoring, the value false turns it off.

     It is also possible to activate  the  observation  only  for
     certain  jobs.  This will be done if the parameter is set to
     job_list followed by a comma separated list of job ids.

     The user can obtain the collected information with the  com-
     mand qstat -j.
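     For example, to collect the information for all jobs and then
     inspect why a particular job is still pending:

          schedd_job_info              true

     followed by

          qstat -j &lt;job_id&gt;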

  params
     This setting is used to pass additional  parameters  to  the
     Grid Engine scheduler. The following values are recognized:

     <I>DURATION</I>_<I>OFFSET</I>
           If set, overrides the  default  value  of  60  seconds.
          This  parameter  is  used  by the Grid Engine scheduler
          when planning resource utilization as the delta between
          net  job runtimes and total time until resources become
          available again. Net job runtime as specified  with  -l
          h_rt=...   or  -l  s_rt=...  or default_duration always
          differs from total job runtime due to delays before and
          after  actual  job  start  and finish. Among the delays
          before job start  is  the  time  until  the  end  of  a
          schedule_interval,  the  time it takes to deliver a job
          to <B><A HREF="../htmlman8/sge_execd.html">sge_execd(8)</A></B> and the  delays  caused  by  prolog  in
          <B><A HREF="../htmlman5/queue_conf.html">queue_conf(5)</A></B>   ,   start_proc_args  in  <B><A HREF="../htmlman5/sge_pe.html">sge_pe(5)</A></B>  and
          starter_method      in      <B><A HREF="../htmlman5/queue_conf.html">queue_conf(5)</A></B>      (notify,
          terminate_method   or  checkpointing),  procedures  run
           after actual job  finish,  such  as  stop_proc_args  in
          <B><A HREF="../htmlman5/sge_pe.html">sge_pe(5)</A></B>  or  epilog  in <B><A HREF="../htmlman5/queue_conf.html">queue_conf(5)</A></B> , and the delay
          until a new schedule_interval.
          If the offset is too low,  resource  reservations  (see
          max_reservation)  can  be  delayed repeatedly due to an
           overly optimistic job circulation time.

     <I>JC</I>_<I>FILTER</I>
          If set to true, the scheduler limits the number of jobs
          it  looks  at during a scheduling run. At the beginning
          of the scheduling run it assigns each  job  a  specific
          category,  which is based on the job's requests, prior-
          ity settings, and the job owner. All  scheduling  poli-
          cies will assign the same importance to each job in one
           category. Therefore the jobs within one  category  are
           kept in FIFO order, and the number of jobs  considered
           can be limited to the number of free slots in the sys-
           tem.

           An exception is made for jobs that request a  reserva-
          tion.  They  are  included  regardless of the number of
          jobs in a category.

           This setting is turned off by default, because in very
           rare cases the scheduler can make a wrong decision.  It
           is also advised to turn report_pjob_tickets off; other-
           wise qstat -ext can report outdated ticket amounts. The
           information shown by qstat -j for a job that  was  ex-
           cluded from a scheduling run is very limited.

     <I>PROFILE</I>
          If set equal to 1, the scheduler logs profiling  infor-
          mation summarizing each scheduling run.

     <I>MONITOR</I>
          If set equal to 1, the  scheduler  records  information
           for each scheduling run, allowing job resource utiliza-
           tion    to    be    reproduced    from    the     file
          &lt;<I>sge</I>_<I>root</I>&gt;/&lt;<I>cell</I>&gt;/<I>common</I>/<I>schedule</I>.

     <I>SELECT</I>_<I>PE</I>_<I>RANGE</I>_<I>ALG</I>
          This parameter sets the  algorithm  for  the  pe  range
          computation. The default is automatic, which means that
          the scheduler will select the best one, and  it  should
          not be necessary to change it to a different setting in
          normal operation. If a custom setting  is  needed,  the
          following values are available:
           auto       : the scheduler selects the best algorithm
           least      : starts the resource matching with the
                        lowest slot amount first
           bin        : starts the resource matching in the middle
                        of the pe slot range
           highest    : starts the resource matching with the
                        highest slot amount first

     Changing params will take immediate effect.  The default for
     params is none.
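     For instance, to log profiling summaries for each  scheduling
     run (a single keyword is shown here; the other parameters are
     given in the same keyword=value form):

          params                       PROFILE=1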

  reprioritize_interval
     Interval (HH:MM:SS) to reprioritize jobs  on  the  execution
     hosts  based  on  the  current ticket amount for the running
     jobs. If the interval is set to 00:00:00  the  reprioritiza-
     tion is turned off. The default value is 00:00:00.

  report_pjob_tickets
     This parameter allows tuning of the system's  scheduling run
     time.  It is used to enable / disable the reporting of pend-
     ing job tickets to the qmaster.  It does not  influence  the
     ticket  calculation.  When the reporting is turned off, the
     sort order of jobs in qstat and qmon is based  only  on  the
     submit time.
     The reporting should be turned off in a system with  a  very
     large number of jobs by setting this parameter to "false".

  halflife_decay_list
     The halflife_decay_list allows one to configure different de-
     cay rates for the "finished_jobs" usage, which  is  used  in
     the pending job ticket calculation to account for jobs which
     have just ended. This allows the pending jobs  algorithm  to
     count finished jobs against a user or project for a  config-
     urable decayed time period. This feature is turned  off  by
     default, and the halftime is used instead.
     The halflife_decay_list also allows one  to  configure  dif-
     ferent  decay  rates for each usage type being tracked (cpu,
     io, and mem). The list is specified in the following format:

          &lt;USAGE_TYPE&gt;=&lt;TIME&gt;[:&lt;USAGE_TYPE&gt;=&lt;TIME&gt;[:&lt;USAGE_TYPE&gt;=&lt;TIME&gt;]]

     &lt;USAGE_TYPE&gt; can be one of the following: cpu, io, or mem.
     &lt;TIME&gt; can be -1, 0 or a timespan specified in  minutes.  If
     &lt;TIME&gt;  is  -1,  only the usage of currently running jobs is
     used. 0 means that the usage is not decayed.
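     As an illustration (the timespan is arbitrary), the  following
     entry decays finished-job CPU usage with a half-life  of  168
     minutes, counts only the usage of currently running jobs  for
     memory, and does not decay IO usage at all:

          halflife_decay_list          cpu=168:mem=-1:io=0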


  policy_hierarchy
     This parameter sets up a dependency chain  of  ticket  based
     policies.  Each  ticket based policy in the dependency chain
     is influenced by the previous policies  and  influences  the
     following  policies.  A  typical  scenario is to assign pre-
     cedence for the override policy over the share-based policy.
     The  override  policy  determines  in such a case how share-
     based tickets are assigned among jobs of the  same  user  or
     project.   Note  that  all policies contribute to the ticket
     amount assigned to a particular job regardless of the policy
     hierarchy  definition. Yet the tickets calculated in each of
     the    policies    can    be    different    depending    on
     "<I>POLICY</I>_<I>HIERARCHY</I>".

     The "<I>POLICY</I>_<I>HIERARCHY</I>" parameter can be a  up  to  3  letter
     combination of the first letters of the 3 ticket based poli-
     cies S(hare-based), F(unctional) and O(verride). So a  value
     "OFS"  means  that the override policy takes precedence over
     the functional policy, which finally influences  the  share-
     based  policy.   Less  than  3 letters mean that some of the
     policies do not influence other policies and  also  are  not
     influenced  by other policies. So a value of "FS" means that
     the functional policy influences the share-based policy  and
     that there is no interference with the other policies.

     The special value "NONE" switches off policy hierarchies.
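     For example, to let the override policy take precedence  over
     the functional policy, which in turn influences  the  share-
     based policy (the "OFS" scenario described above):

          policy_hierarchy             OFS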

  share_override_tickets
     If set to "true" or "1", override tickets  of  any  override
     object  instance  are  shared equally among all running jobs
     associated with the object. Pending jobs will get  as  many
     override  tickets  as they would have if they were running.
     If set to "false" or "0", each  job  gets  the  full
     value  of  the  override tickets associated with the object.
     The default value is "true".

  share_functional_shares
     If set to "true" or "1", functional shares of any functional
     object  instance  are  shared  among all the jobs associated
     with the object. If set to "false" or "0", each job  associ-
     ated  with  a  functional  object  gets  the full functional
     shares of that object. The default value is "true".

  max_functional_jobs_to_schedule
     The maximum number of pending jobs to schedule in the  func-
     tional policy. The default value is 200.

  max_pending_tasks_per_job
     The maximum number of subtasks  per  pending  array  job  to
     schedule.  This parameter exists in order to reduce schedul-
     ing overhead. The default value is 50.

  max_reservation
     The  maximum  number  of  reservations  scheduled  within  a
     schedule  interval.  When  a runnable job can not be started
     due  to  a  shortage  of  resources  a  reservation  can  be
     scheduled   instead.  A  reservation  can  cover  consumable
     resources with the global host, any execution host  and  any
     queue.  For parallel jobs, reservations are also made for the
     slots resource as specified in <B><A HREF="../htmlman5/sge_pe.html">sge_pe(5)</A></B>.   As  job  runtime
     the  maximum  of  the  time specified with -l h_rt=... or -l
     s_rt=... is assumed. For jobs that have neither of them  the
     default_duration  is  assumed.  Reservations prevent jobs of
     lower priority as specified in <B><A HREF="../htmlman5/sge_priority.html">sge_priority(5)</A></B> from  utiliz-
     ing the reserved resource quota for the duration of the  re-
     servation. Jobs of lower priority are allowed to utilize those
     reserved  resources  only  if  their  prospective job end is
     before the start of the reservation (backfilling).  Reserva-
     tion  is  done  only  for  non-immediate jobs (-now no) that
     request reservation (-R y). If max_reservation is set to "0"
     no job reservation is done.

     Note that reservation scheduling can be performance-consum-
     ing and hence is switched off by default.  Since  reserva-
     tion scheduling performance consumption is known  to  grow
     with the number of pending jobs, use of the -R y option is
     recommended only for those jobs that  actually  queue  for
     bottleneck resources. Together  with  the  max_reservation
     parameter this technique can be used to narrow down possible
     performance impacts.

  default_duration
     When job reservation is enabled through the  max_reservation
     <B><A HREF="../htmlman5/sched_conf.html">sched_conf(5)</A></B>  parameter, the default duration is assumed as
     the runtime for jobs that have neither -l  h_rt=...  nor  -l
     s_rt=...  specified.  In contrast to an h_rt/s_rt time limit
     the default_duration is not enforced.
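     A hypothetical reservation setup (the values are illustrative)
     that schedules up to 20 reservations per  interval  and  as-
     sumes an 8 hour runtime for jobs without an  h_rt/s_rt  re-
     quest could read:

          max_reservation              20
          default_duration             8:00:00

     A job then only obtains a reservation if it is  submitted  as
     a non-immediate job that requests  reservation,  e.g.  with
     qsub -R y and a runtime request such as -l h_rt=2:00:00.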

FILES
     &lt;<I>sge</I>_<I>root</I>&gt;/&lt;<I>cell</I>&gt;/<I>common</I>/<I>sched</I>_<I>configuration</I>
                sge_schedd configuration

SEE ALSO
     <B><A HREF="../htmlman1/sge_intro.html">sge_intro(1)</A></B>, <B><A HREF="../htmlman1/qalter.html">qalter(1)</A></B>, <B><A HREF="../htmlman1/qconf.html">qconf(1)</A></B>, <B><A HREF="../htmlman1/qstat.html">qstat(1)</A></B>,  <B><A HREF="../htmlman1/qsub.html">qsub(1)</A></B>,  <I>com-</I>
     <B><A HREF="../htmlman5/plex.html">plex(5)</A></B>,    <B><A HREF="../htmlman5/queue_conf.html">queue_conf(5)</A></B>,   <B><A HREF="../htmlman8/sge_execd.html">sge_execd(8)</A></B>,   <B><A HREF="../htmlman8/sge_qmaster.html">sge_qmaster(8)</A></B>,
     <B><A HREF="../htmlman8/sge_schedd.html">sge_schedd(8)</A></B>.  <I>Grid</I> <I>Engine</I> <I>Installation</I> <I>and</I>  <I>Administration</I>
     <I>Guide</I>

COPYRIGHT
     See <B><A HREF="../htmlman1/sge_intro.html">sge_intro(1)</A></B> for a full statement of rights and  permis-
     sions.



</PRE>
<HR>
<ADDRESS>
Man(1) output converted with
<a href="http://www.oac.uci.edu/indiv/ehood/man2html.html">man2html</a>
</ADDRESS>
</BODY>
</HTML>