The Grid Engine qmaster stores its configuration and current state in a
spooling database.

By default, a Berkeley DB database is used as the spooling database.
Alternatively, the former flat-file spooling mechanism can be used by building
Grid Engine with the aimk option -spool-classic.
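
For example, a build with classic spooling might be started like this (the
exact invocation depends on your aimk setup; shown only as an illustration):

./aimk -spool-classic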

Berkeley DB comprises a database library providing database access functions
that is linked into sge_qmaster, as well as a number of binaries for database
maintenance and an RPC server process.

Berkeley DB is not part of the Grid Engine source code distribution; it has to
be downloaded and built on the Grid Engine build system.



Building Berkeley DB
====================

1. Download the sources from www.sleepycat.com.
Grid Engine requires version >= 4.2; the default version used for the maintrunk
is 4.4.20.

2. Build and install the Berkeley DB:
Untar the source code distribution.

Create a build directory. It need not be inside the Berkeley DB source tree;
it can be anywhere. You can even build for multiple platforms in parallel
using different build directories.

Please note that the Grid Engine build process looks for include, lib and bin
in an architecture-dependent directory, e.g. /usr/local/sol-sparc/...
The architecture name (e.g. sol-sparc) comes from a call to dist/util/arch.
Either specify a path of that form in the --prefix of your configure call, or
create a symbolic link in the install directory,
e.g. a link /usr/local/sol-sparc pointing to /usr/local.
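
A short sketch of the symbolic link approach (paths are illustrative; arch is
called from the Grid Engine source directory):

ARCH=`dist/util/arch`                # e.g. sol-sparc
ln -s /usr/local /usr/local/$ARCH    # /usr/local/<arch> now resolves to /usr/local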

cd <builddir>
<source-dir>/dist/configure <options>
where <options> are
--enable-rpc (required, build the RPC server)
--enable-posixmutexes

For a list of other options (e.g. --prefix) call
<source-dir>/dist/configure --help.

Depending on the build architecture, some additional options and parameters to
configure are required (just append them to the configure command line):

hp11-64:       CFLAGS=+DD64 LDFLAGS=+DD64
irix65:        CFLAGS="-n32 -mips3"
sol-sparc64:   CFLAGS="-xarch=v9" CPPFLAGS="-xarch=v9" LDFLAGS="-xarch=v9"
sol-amd64:     CFLAGS="-xarch=amd64" CPPFLAGS="-xarch=amd64" LDFLAGS="-xarch=amd64"

make

make install
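
As a complete example, the following sequence builds and installs Berkeley DB
4.4.20 for installation under /usr/local/sol-sparc (the version and all paths
are illustrative):

mkdir /tmp/bdb-build
cd /tmp/bdb-build
/tmp/db-4.4.20/dist/configure --prefix=/usr/local/sol-sparc \
    --enable-rpc --enable-posixmutexes
make
make install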



Configure the Grid Engine build process
=======================================

Grid Engine has to find your Berkeley DB installation.
For the build process, the Berkeley DB installation directory is configured
in aimk.site or aimk.private.

The variable BERKELEYDB_HOME has to be set to the location where Berkeley DB
has been installed.
BERKELEYDB_HOME has to contain an include and a lib directory holding the
Berkeley DB header files and libraries.

aimk.site contains a section setting the default for BERKELEYDB_HOME (matching 
the environment of the Grid Engine core development team).

The best place to configure a different BERKELEYDB_HOME is the file 
aimk.private.
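
For example, in aimk.private (csh syntax, as used by the aimk scripts; the
path is illustrative):

set BERKELEYDB_HOME = /usr/local/sol-sparc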


During the installation process, the Berkeley DB C shared library and some 
binaries are copied.

The location of the Berkeley DB installation is configured in 
scripts/distinst.site or scripts/distinst.private in the variable 
BERKELEYDBBASE.

distinst.site contains the default setting (see above). Please place your
installation directory in the file distinst.private.
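
For example, in scripts/distinst.private (sh syntax; the path is illustrative):

BERKELEYDBBASE=/usr/local/sol-sparc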


After these settings are made, you can build and install Grid Engine as
described on the build page.


Using the Berkeley DB as spooling database
==========================================

Spooling of configuration and state information is done by sge_qmaster only.

The spooling database itself is not a supported public interface.
Use the Grid Engine command line or graphical interfaces to query and change
Grid Engine configuration and state information.


Local spooling vs. RPC client/server mechanism
----------------------------------------------

Berkeley DB uses mechanisms like file locking and memory mapping to implement
its locking and transaction handling.

Therefore, a Berkeley DB database cannot be accessed over NFS. All database
access has to be done on a local filesystem.

If no local filesystem is available, or if mechanisms like the Grid Engine
shadowd are to be used, direct database access isn't possible.
In this case, the Berkeley DB RPC client/server mechanism can be used.

The Berkeley DB RPC server is a process providing an RPC service for Berkeley DB
database functions.
If use of the RPC server is configured during the Grid Engine installation
process, sge_qmaster will access the Berkeley DB RPC server instead of
accessing a local database.

The RPC server should be started on a dedicated machine, e.g. on the NFS server
that is used for Grid Engine.
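
The RPC server binary shipped with Berkeley DB is berkeley_db_svc. A minimal
manual start could look like this (the database directory is an illustrative
placeholder; normally the rc script created by the installer is used instead):

berkeley_db_svc -h /var/spool/sge_bdb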

The database directory for local spooling and/or the RPC server host are 
set during the qmaster installation.
If the RPC client/server mechanism is used, the Grid Engine install script 
creates an rc script for the startup of the RPC server.
The new installation script (sge_config) can also install the rc script on the
RPC server host and start the RPC server.

If you use the Berkeley DB RPC server, transaction checkpointing and the
removal of outdated transaction logs have to be done using external commands
(db_checkpoint and db_archive).
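
For reference, these commands look roughly like this (the database directory
is an illustrative placeholder):

db_checkpoint -1 -h /var/spool/sge_bdb    # write a single checkpoint
db_archive -d -h /var/spool/sge_bdb       # remove no longer needed log files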
These binaries are called by the utility script 
$SGE_ROOT/util/bdb_checkpoint.sh. 
The best way to call it is probably as a cron job (in the crontab of the admin
user on the RPC server host).
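
An illustrative crontab entry for the admin user on the RPC server host (the
interval and the SGE_ROOT path /opt/sge are assumptions):

0,15,30,45 * * * * /opt/sge/util/bdb_checkpoint.sh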