<html><head><meta http-equiv="Content-Type" content="text/html; charset=UTF-8"><title>Example: the “Lazy” bridge</title><link rel="stylesheet" type="text/css" href="rivet.css"><meta name="generator" content="DocBook XSL Stylesheets Vsnapshot"><link rel="home" href="index.html" title="Apache Rivet 3.2"><link rel="up" href="index.html" title="Apache Rivet 3.2"><link rel="prev" href="internals.html" title="Rivet Internals"></head><body bgcolor="white" text="black" link="#0000FF" vlink="#840084" alink="#0000FF"><div class="navheader"><table width="100%" summary="Navigation header"><tr><th colspan="3" align="center">Example: the <span class="quote">“<span class="quote">Lazy</span>”</span> bridge</th></tr><tr><td width="20%" align="left"><a accesskey="p" href="internals.html"><img src="images/prev.png" alt="Prev"></a> </td><th width="60%" align="center"> </th><td width="20%" align="right"> </td></tr></table></div><div class="section"><div class="titlepage"><div><div><hr><h2 class="title" style="clear: both"><a name="lazybridge"></a>Example: the <span class="quote">“<span class="quote">Lazy</span>”</span> bridge</h2></div></div></div><div class="section"><div class="titlepage"><div><div><h3 class="title"><a name="idm4549"></a>The rationale of threaded bridges</h3></div></div></div><p style="width:90%">    	
	    	The 'bridge' concept was introduced to cope with the ability of 
	    	the Apache HTTP web server to adopt different multiprocessing 
	    	models by loading one of the available MPMs (Multi Processing Modules). 
			A bridge's task is first of all to let mod_rivet fit the selected
			multiprocessing model. Moreover, separating the mod_rivet core
			functions from the MPM machinery also provided a flexible and
			extensible design that enables a programmer to develop alternative
			approaches to workload and resource management. 
   	</p><p style="width:90%">
   		The Apache HTTP web server requires its modules to
   		run with any MPM irrespective of its internal architecture, and it is
   		a general design constraint that a module make no assumptions about the MPM. 
   		This clashes with some requirements of threaded builds of Tcl. 
   		First of all, Tcl is itself threaded (unless threads are disabled 
   		at compile time), and many of the basic Tcl data structures (namely Tcl_Obj) 
   		cannot be safely shared among threads. 
   		This demands that Tcl interpreters be run 
   		on separate threads communicating with the HTTP web server 
   		through suitable methods.
   	</p></div><div class="section"><div class="titlepage"><div><div><h3 class="title"><a name="idm4553"></a>Lazy bridge data structures</h3></div></div></div><p style="width:90%">
	   The lazy bridge was initially developed to outline the basic tasks
    	carried out by each function making up a Rivet MPM bridge. 
    	The lazy bridge attempts to be minimalist,
    	but it's nearly fully functional; only a few configuration
    	directives (SeparateVirtualInterps and SeparateChannels)
    	are ignored because they are fundamentally incompatible with its design. 
    	The bridge is experimental but perfectly fit for many applications;
    	for example, it works well on development machines where server restarts
    	are frequent. 
    </p><p style="width:90%">
    	This is the lazy bridge jump table, which defines the functions
    	implemented by the bridge:
    </p><pre class="programlisting">RIVET_MPM_BRIDGE {
    LazyBridge_ServerInit,
    LazyBridge_ChildInit,
    LazyBridge_Request,
    LazyBridge_Finalize,
    LazyBridge_ExitHandler,
    LazyBridge_Interp
};</pre><p style="width:90%">
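		The <span style="font-family:monospace"><span class="command"><strong>RIVET_MPM_BRIDGE</strong></span></span> macro presumably expands into the
		initializer of a <span style="font-family:monospace"><span class="command"><strong>rivet_bridge_table</strong></span></span> structure holding one
		function pointer per bridge task. The following sketch of that structure is for illustration only: the
		<span style="font-family:monospace"><span class="command"><strong>mpm_child_init</strong></span></span>, <span style="font-family:monospace"><span class="command"><strong>mpm_finalize</strong></span></span>,
		<span style="font-family:monospace"><span class="command"><strong>mpm_exit_handler</strong></span></span> and <span style="font-family:monospace"><span class="command"><strong>mpm_thread_interp</strong></span></span>
		field names appear elsewhere in this chapter, while the remaining names and the exact signatures are assumptions.
	</p><pre class="programlisting">/* sketch of the bridge jump table; field names partly assumed, signatures
 * inferred from the function definitions quoted in this chapter */

typedef struct _mpm_bridge_table {
    void          (*mpm_server_init)  (apr_pool_t*,apr_pool_t*,apr_pool_t*,server_rec*);
    void          (*mpm_child_init)   (apr_pool_t*,server_rec*);
    int           (*mpm_request)      (request_rec*,rivet_req_ctype);
    apr_status_t  (*mpm_finalize)     (void*);
    int           (*mpm_exit_handler) (int);
    rivet_thread_interp* (*mpm_thread_interp) (rivet_thread_private*,rivet_server_conf*);
} rivet_bridge_table;</pre><p style="width:90%">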
		After the server initialization stage, child processes read the configuration 
		and modules build their own configuration representation. MPM bridges hook into
		this stage to store and/or build the data structures relevant to their design.
		A fundamental piece of information built during this stage is the database of virtual hosts.
		The lazy bridge keeps an array of virtual host descriptors,
		each of them an instance of the following structure.
	</p><pre class="programlisting">/* virtual host thread queue descriptor */

typedef struct vhost_iface {
    int                 threads_count;      /* total number of running and idle threads */
    apr_thread_mutex_t* mutex;              /* mutex protecting 'array'                 */
    apr_array_header_t* array;              /* LIFO array of lazy_tcl_worker pointers   */
} vhost;</pre><p style="width:90%">
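		The <span style="font-family:monospace"><span class="command"><strong>array</strong></span></span> field works as a stack of idle workers:
		descriptors are pushed onto and popped off its tail while holding the per virtual host mutex.
		A minimal sketch of this discipline, using the same APR calls that appear in the code quoted
		later in this chapter (the <span style="font-family:monospace"><span class="command"><strong>q</strong></span></span> and
		<span style="font-family:monospace"><span class="command"><strong>w</strong></span></span> variables are hypothetical):
	</p><pre class="programlisting">/* sketch: LIFO handling of idle worker descriptors (illustrative only) */

lazy_tcl_worker* w = NULL;

apr_thread_mutex_lock(q-&gt;mutex);
if (!apr_is_empty_array(q-&gt;array)) {
    /* pop the most recently parked worker descriptor */
    w = *(lazy_tcl_worker**) apr_array_pop(q-&gt;array);
}
apr_thread_mutex_unlock(q-&gt;mutex);

/* ... later, when the worker goes idle again, it parks itself back ... */

apr_thread_mutex_lock(q-&gt;mutex);
*(lazy_tcl_worker**) apr_array_push(q-&gt;array) = w;
apr_thread_mutex_unlock(q-&gt;mutex);</pre><p style="width:90%">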
 		A pointer to this array of descriptors is stored in the bridge status, a basic
 		structure that virtually every bridge has to create:
	</p><pre class="programlisting">/* Lazy bridge internal status data */

typedef struct mpm_bridge_status {
    apr_thread_mutex_t* mutex;
    int                 exit_command;
    int                 exit_command_status;
    int                 server_shutdown;    /* the child process is shutting down  */
    vhost*              vhosts;             /* array of vhost descriptors          */
} mpm_bridge_status;</pre><p style="width:90%">
		The lazy bridge also extends the thread private data structure with
		data concerning the Tcl interpreter and a flag controlling the thread's request loop:
	</p><pre class="programlisting">/* lazy bridge thread private data extension */

typedef struct mpm_bridge_specific {
    rivet_thread_interp*  interp;           /* thread Tcl interpreter object        */
    int                   keep_going;       /* thread loop controlling variable     */
                                            /* the request_rec and TclWebRequest    *
                                             * are copied here to be passed to a    *
                                             * channel                              */
} mpm_bridge_specific;</pre><p style="width:90%">
		By design the bridge must create exactly one instance of <span style="font-family:monospace"><span class="command"><strong>mpm_bridge_status</strong></span></span>
		and store its pointer in <span style="font-family:monospace"><span class="command"><strong>module_globals-&gt;mpm</strong></span></span>.
		This is usually done at the very beginning of the child init function pointed to by 
		<span style="font-family:monospace"><span class="command"><strong>mpm_child_init</strong></span></span> in the <span style="font-family:monospace"><span class="command"><strong>rivet_bridge_table</strong></span></span> structure.
		For the lazy bridge this field in the jump table points to the <span style="font-family:monospace"><span class="command"><strong>LazyBridge_ChildInit</strong></span></span>
		function:
	</p><pre class="programlisting">/*
 * -- LazyBridge_ChildInit
 * 
 * child process initialization. This function prepares the process
 * data structures for virtual hosts and threads management
 *
 */

void LazyBridge_ChildInit (apr_pool_t* pool, server_rec* server)
{
    apr_status_t    rv;
    server_rec*     s;
    server_rec*     root_server = module_globals-&gt;server;

    module_globals-&gt;mpm = apr_pcalloc(pool,sizeof(mpm_bridge_status));

    /* This mutex is only used to consistently carry out these 
     * two tasks
     *
     *  - set the exit status of a child process (hopefully this will be 
     *    unnecessary when Tcl becomes able again to call 
     *    Tcl_DeleteInterp safely) 
     *  - control the server_shutdown flag. Actually this is
     *    not entirely needed because once set this flag 
     *    is never reset to 0
     *
     */

    rv = apr_thread_mutex_create(&amp;module_globals-&gt;mpm-&gt;mutex,
                                  APR_THREAD_MUTEX_UNNESTED,pool);
    ap_assert(rv == APR_SUCCESS);

    /* the mpm-&gt;vhosts array is created with as many entries as the number of
     * configured virtual hosts */

    module_globals-&gt;mpm-&gt;vhosts = 
        (vhost *) apr_pcalloc(pool,module_globals-&gt;vhosts_count*sizeof(vhost));
    ap_assert(module_globals-&gt;mpm-&gt;vhosts != NULL);

    /*
     * Each virtual host descriptor has its own mutex controlling
     * the queue of available threads
     */
     
    for (s = root_server; s != NULL; s = s-&gt;next)
    {
        int                 idx;
        apr_array_header_t* array;
        rivet_server_conf*  rsc = RIVET_SERVER_CONF(s-&gt;module_config);

        idx = rsc-&gt;idx;
        rv  = apr_thread_mutex_create(&amp;module_globals-&gt;mpm-&gt;vhosts[idx].mutex,
                                      APR_THREAD_MUTEX_UNNESTED,pool);
        ap_assert(rv == APR_SUCCESS);
        array = apr_array_make(pool,0,sizeof(void*));
        ap_assert(array != NULL);
        module_globals-&gt;mpm-&gt;vhosts[idx].array = array;
        module_globals-&gt;mpm-&gt;vhosts[idx].threads_count = 0;
    }
    module_globals-&gt;mpm-&gt;server_shutdown = 0;
}</pre></div><div class="section"><div class="titlepage"><div><div><h3 class="title"><a name="idm4571"></a>Handling Tcl's exit core command</h3></div></div></div><p style="width:90%">
		Most of the fields in <span style="font-family:monospace"><span class="command"><strong>mpm_bridge_status</strong></span></span> are meant to deal 
		with the child exit process. Rivet supersedes the Tcl core's exit function
		with a <span style="font-family:monospace"><span class="command"><strong>::rivet::exit</strong></span></span> function, and it does so in order to curb the effects
		of the core function, which would force a child process to exit immediately. 
		This could have unwanted side effects, like skipping the execution of important
		code dedicated to releasing locks or removing files. For threaded MPMs the abrupt
		child process termination can be even more disruptive, as all the threads
		will be deleted without warning.	
	</p><p style="width:90%">
		The <span style="font-family:monospace"><span class="command"><strong>::rivet::exit</strong></span></span> implementation calls the function pointed to by
		<span style="font-family:monospace"><span class="command"><strong>mpm_exit_handler</strong></span></span>, which is bridge specific. Its main duty
		is to take the proper action to release resources and force the
		bridge controlled threads to exit, as sketched below.  
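		A heavily simplified sketch of how a handler can force the idle workers of a
		virtual host to exit, using only data structures and status values shown in this
		chapter (this is illustrative code, not the actual lazy bridge source):
	</p><pre class="programlisting">/* sketch: forcing the idle workers of virtual host 'idx' to exit (illustrative only) */

vhost* vh = &amp;module_globals-&gt;mpm-&gt;vhosts[idx];

apr_thread_mutex_lock(vh-&gt;mutex);
while (!apr_is_empty_array(vh-&gt;array))
{
    lazy_tcl_worker* w = *(lazy_tcl_worker**) apr_array_pop(vh-&gt;array);

    apr_thread_mutex_lock(w-&gt;mutex);
    w-&gt;status = thread_exit;        /* the worker loop checks this status value */
    apr_thread_cond_signal(w-&gt;condition);
    apr_thread_mutex_unlock(w-&gt;mutex);
}
apr_thread_mutex_unlock(vh-&gt;mutex);</pre><p style="width:90%">
		The <span style="font-family:monospace"><span class="command"><strong>server_shutdown</strong></span></span> flag, once set, prevents
		<span style="font-family:monospace"><span class="command"><strong>Lazy_MPM_Request</strong></span></span> from handing new requests to the workers in the meantime.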
	</p><div class="note" style="margin-left: 0.5in; margin-right: 0.5in;"><table border="0" summary="Note"><tr><td rowspan="2" align="center" valign="top" width="25"><img alt="[Note]" src="images/note.png"></td><th align="left">Note</th></tr><tr><td align="left" valign="top">
		Nonetheless the <span style="font-family:monospace"><span class="command"><strong>exit</strong></span></span> command should be avoided in ordinary mod_rivet
		programming. We cannot stress this point enough. If your application must bail out
		for some reason, focus your attention on the design to find the most appropriate
		route to exit and, whenever possible, avoid 
		calling <span style="font-family:monospace"><span class="command"><strong>exit</strong></span></span> at all (which basically wraps a
		C call to Tcl_Exit). In any case the Rivet implementation partially transforms
		<span style="font-family:monospace"><span class="command"><strong>exit</strong></span></span> into a sort of special <span style="font-family:monospace"><span class="command"><strong>::rivet::abort_page</strong></span></span>
		implementation whose eventual action is to call the <span style="font-family:monospace"><span class="command"><strong>Tcl_Exit</strong></span></span>
		library call. See the <span style="font-family:monospace"><span class="command"><strong><a class="xref" href="exit.html" title="exit">exit</a></strong></span></span>
		command for further explanations.
	</td></tr></table></div><p style="width:90%">
		Both the worker bridge and lazy bridge 
		implementations of <span style="font-family:monospace"><span class="command"><strong>mpm_exit_handler</strong></span></span> call the function pointed 
		to by <span style="font-family:monospace"><span class="command"><strong>mpm_finalize</strong></span></span>, which is also the function called by the framework 
		when the web server shuts down.
		See these functions' code for further details; they are very easy to 
		read and understand.
	</p></div><div class="section"><div class="titlepage"><div><div><h3 class="title"><a name="idm4590"></a>HTTP request processing with the lazy bridge</h3></div></div></div><p style="width:90%">
		Request processing with the lazy bridge starts by determining for which
		virtual host a request was created. The <span style="font-family:monospace"><span class="command"><strong>rivet_server_conf</strong></span></span>
		structure keeps a numerical index for each virtual host. This index is used
		to reference the virtual host descriptor, and from it the request
		handler tries to gain a lock on the mutex protecting the array of <span style="font-family:monospace"><span class="command"><strong>lazy_tcl_worker</strong></span></span>
		structure pointers. Each instance of this structure is the descriptor of a thread created for
		a specific virtual host; threads available for processing have their descriptor
		on that array, and the handler callback pops the first
		<span style="font-family:monospace"><span class="command"><strong>lazy_tcl_worker</strong></span></span> pointer off it to signal the thread
		that there is work to do. This is the <span style="font-family:monospace"><span class="command"><strong>lazy_tcl_worker</strong></span></span> structure:
	</p><pre class="programlisting">/* lazy bridge Tcl thread status and communication variables */

typedef struct lazy_tcl_worker {
    apr_thread_mutex_t* mutex;
    apr_thread_cond_t*  condition;
    int                 status;
    apr_thread_t*       thread_id;
    server_rec*         server;
    request_rec*        r;
    int                 ctype;
    int                 ap_sts;
    rivet_server_conf*  conf;               /* rivet_server_conf* record                */
} lazy_tcl_worker;</pre><p style="width:90%">
		The server field is assigned the virtual host server record, whereas the <span style="font-family:monospace"><span class="command"><strong>conf</strong></span></span>
		field keeps a pointer to a <span style="font-family:monospace"><span class="command"><strong>rivet_server_conf</strong></span></span> structure computed at run time. This structure
		may change from request to request because the request configuration changes when the URL refers 
		to directory specific configuration in <span style="font-family:monospace"><span class="command"><strong>&lt;Directory ...&gt;...&lt;/Directory&gt;</strong></span></span> 
		blocks.
	</p><p style="width:90%">
		The lazy bridge will not start any Tcl worker thread at server startup; instead it
		waits for requests to come in. If worker threads are sitting on a virtual host queue,
		a thread's <span style="font-family:monospace"><span class="command"><strong>lazy_tcl_worker</strong></span></span> structure pointer is popped
		and the request is handed to that thread. If no thread is available on the queue, a new worker thread is 
		created. The code of <span style="font-family:monospace"><span class="command"><strong>Lazy_MPM_Request</strong></span></span> is easy to understand and shows
		how this works:
	</p><pre class="programlisting">/* -- Lazy_MPM_Request
 *
 * The lazy bridge HTTP request function. This function 
 * stores the request_rec pointer into the lazy_tcl_worker
 * structure which is used to communicate with a worker thread.
 * Then the array of idle threads is checked and if empty
 * a new thread is created by calling create_worker
 */

int Lazy_MPM_Request (request_rec* r,rivet_req_ctype ctype)
{
    lazy_tcl_worker*    w;
    int                 ap_sts;
    rivet_server_conf*  conf = RIVET_SERVER_CONF(r-&gt;server-&gt;module_config);
    apr_array_header_t* array;
    apr_thread_mutex_t* mutex;

    mutex = module_globals-&gt;mpm-&gt;vhosts[conf-&gt;idx].mutex;
    array = module_globals-&gt;mpm-&gt;vhosts[conf-&gt;idx].array;
    apr_thread_mutex_lock(mutex);

   /* This request may have come in while the child process was 
    * shutting down. We cannot run the risk that incoming requests 
    * may hang the child process by keeping its threads busy, 
    * so we simply return an HTTP_INTERNAL_SERVER_ERROR. 
    * This is hideous and explains why the 'exit' command must 
    * be avoided at all costs when programming with mod_rivet
    */

    if (module_globals-&gt;mpm-&gt;server_shutdown == 1) {
        ap_log_rerror(APLOG_MARK, APLOG_ERR, APR_EGENERAL, r,
                      MODNAME ": http request aborted during child process shutdown");
        apr_thread_mutex_unlock(mutex);
        return HTTP_INTERNAL_SERVER_ERROR;
    }

    /* If the array is empty we create a new worker thread */

    if (apr_is_empty_array(array))
    {
        w = create_worker(module_globals-&gt;pool,r-&gt;server);
        (module_globals-&gt;mpm-&gt;vhosts[conf-&gt;idx].threads_count)++; 
    }
    else
    {
        w = *(lazy_tcl_worker**) apr_array_pop(array);
    }

    apr_thread_mutex_unlock(mutex);
    
    apr_thread_mutex_lock(w-&gt;mutex);
    w-&gt;r        = r;
    w-&gt;ctype    = ctype;
    w-&gt;status   = init;
    w-&gt;conf     = conf;
    apr_thread_cond_signal(w-&gt;condition);

    /* we wait for the Tcl worker thread to finish its job */

    while (w-&gt;status != done) {
        apr_thread_cond_wait(w-&gt;condition,w-&gt;mutex);
    } 
    ap_sts = w-&gt;ap_sts;

    w-&gt;status = idle;
    w-&gt;r      = NULL;
    apr_thread_cond_signal(w-&gt;condition);
    apr_thread_mutex_unlock(w-&gt;mutex);

    return ap_sts;
}</pre><p style="width:90%">
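		Before looking at the worker thread code it is worth summarizing the handshake
		carried out through the <span style="font-family:monospace"><span class="command"><strong>status</strong></span></span> field and the
		condition variable, as it can be reconstructed from the two functions (the status
		symbols are those appearing in the code):
	</p><pre class="programlisting">/* status handshake between Lazy_MPM_Request (Apache thread) and request_processor (worker)
 *
 *  Apache thread: w-&gt;status = init;       signal  --&gt;  worker wakes up
 *  worker thread: w-&gt;status = processing; runs Rivet_SendContent
 *  worker thread: w-&gt;status = done;       signal  --&gt;  Apache thread reads w-&gt;ap_sts
 *  Apache thread: w-&gt;status = idle;       signal  --&gt;  worker parks itself back on the queue
 */</pre><p style="width:90%">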
		After a request is processed, the worker thread returns its own
		lazy_tcl_worker descriptor to the array and then waits
		on the condition variable used to control and synchronize the bridge 
		threads with the Apache worker threads. The worker thread code
		is the request_processor function:
	</p><pre class="programlisting">/*
 * -- request_processor
 *
 * The lazy bridge worker thread. This thread prepares its control data and 
 * will serve requests addressed to a given virtual host. Virtual host server
 * data are stored in the lazy_tcl_worker structure passed through the generic 
 * pointer argument 'data'
 * 
 */

static void* APR_THREAD_FUNC request_processor (apr_thread_t *thd, void *data)
{
    lazy_tcl_worker*        w = (lazy_tcl_worker*) data; 
    rivet_thread_private*   private;
    int                     idx;
    rivet_server_conf*      rsc;

    /* The server configuration */

    rsc = RIVET_SERVER_CONF(w-&gt;server-&gt;module_config);

    /* Rivet_ExecutionThreadInit creates and returns the thread private data. */

    private = Rivet_ExecutionThreadInit();

    /* A bridge creates and stores in private-&gt;ext its own thread private
     * data. The lazy bridge is no exception. We just need a flag controlling 
     * the execution and an interpreter control structure */

    private-&gt;ext = apr_pcalloc(private-&gt;pool,sizeof(mpm_bridge_specific));
    private-&gt;ext-&gt;keep_going = 1;
    //private-&gt;ext-&gt;interp = Rivet_NewVHostInterp(private-&gt;pool,w-&gt;server);
    RIVET_POKE_INTERP(private,rsc,Rivet_NewVHostInterp(private-&gt;pool,rsc-&gt;default_cache_size));
    private-&gt;ext-&gt;interp-&gt;channel = private-&gt;channel;

    /* The worker thread can respond to a single request at a time, therefore 
       it must handle and register its own Rivet channel */

    Tcl_RegisterChannel(private-&gt;ext-&gt;interp-&gt;interp,*private-&gt;channel);

    /* From the rivet_server_conf structure we determine what scripts we
     * are using to serve requests */

    private-&gt;ext-&gt;interp-&gt;scripts = 
            Rivet_RunningScripts (private-&gt;pool,private-&gt;ext-&gt;interp-&gt;scripts,rsc);

    /* This is the standard Tcl interpreter initialization */

    Rivet_PerInterpInit(private-&gt;ext-&gt;interp,private,w-&gt;server,private-&gt;pool);
    
    /* The child initialization is fired. Beware of the terminological 
     * trap: from fork-capable systems we inherited the term 'child'
     * meaning 'child process'. In this case the child init actually
     * is a worker thread initialization, because in a threaded module
     * this is the agent playing the same role a child process plays
     * with the prefork bridge */

    Lazy_RunConfScript(private,w,child_init);

    idx = w-&gt;conf-&gt;idx;

    /* After the thread has run the configuration script we 
       increment the threads counter */

    apr_thread_mutex_lock(module_globals-&gt;mpm-&gt;vhosts[idx].mutex);
    (module_globals-&gt;mpm-&gt;vhosts[idx].threads_count)++;
    apr_thread_mutex_unlock(module_globals-&gt;mpm-&gt;vhosts[idx].mutex);

    /* The thread is now set up to serve requests within the 
     * do...while loop controlled by private-&gt;ext-&gt;keep_going  */

    apr_thread_mutex_lock(w-&gt;mutex);
    do 
    {
        while ((w-&gt;status != init) &amp;&amp; (w-&gt;status != thread_exit)) {
            apr_thread_cond_wait(w-&gt;condition,w-&gt;mutex);
        } 
        if (w-&gt;status == thread_exit) {
            private-&gt;ext-&gt;keep_going = 0;
            continue;
        }

        w-&gt;status = processing;

        /* Content generation */

        private-&gt;req_cnt++;
        private-&gt;ctype = w-&gt;ctype;
        private-&gt;r = w-&gt;r;

        w-&gt;ap_sts = Rivet_SendContent(private);

        // if (module_globals-&gt;mpm-&gt;server_shutdown) continue;

        w-&gt;status = done;
        apr_thread_cond_signal(w-&gt;condition);
        while (w-&gt;status == done) {
            apr_thread_cond_wait(w-&gt;condition,w-&gt;mutex);
        } 
 
        /* rescheduling itself in the array of idle threads */
       
        apr_thread_mutex_lock(module_globals-&gt;mpm-&gt;vhosts[idx].mutex);
        *(lazy_tcl_worker **) apr_array_push(module_globals-&gt;mpm-&gt;vhosts[idx].array) = w;
        apr_thread_mutex_unlock(module_globals-&gt;mpm-&gt;vhosts[idx].mutex);

    } while (private-&gt;ext-&gt;keep_going);
    apr_thread_mutex_unlock(w-&gt;mutex);
    
    Lazy_RunConfScript(private,w,child_exit);
    ap_log_error(APLOG_MARK,APLOG_DEBUG,APR_SUCCESS,w-&gt;server,"processor thread orderly exit");

    apr_thread_mutex_lock(module_globals-&gt;mpm-&gt;vhosts[idx].mutex);
    (module_globals-&gt;mpm-&gt;vhosts[idx].threads_count)--;
    apr_thread_mutex_unlock(module_globals-&gt;mpm-&gt;vhosts[idx].mutex);

    apr_thread_exit(thd,APR_SUCCESS);
    return NULL;
}</pre><p style="width:90%">
		The lazy bridge implementation of <span style="font-family:monospace"><span class="command"><strong>module_globals-&gt;bridge_jump_table-&gt;mpm_thread_interp</strong></span></span>, which
		is supposed to return the rivet_thread_interp structure pointer relevant to a given
		request, has a straightforward task to do, since by design each thread has
		one interpreter:
	</p><pre class="programlisting">rivet_thread_interp* Lazy_MPM_Interp(rivet_thread_private *private,
                                     rivet_server_conf* conf)
{
    return private-&gt;ext-&gt;interp;
}</pre><p style="width:90%">
		As already pointed out,
		running this bridge you get separate virtual interpreters and separate channels by default,
		and since by design each thread gets its own Tcl interpreter and Rivet channel you will
		not be able to revert this behavior in the configuration with 
	</p><pre class="programlisting">SeparateVirtualInterps Off
SeparateChannels       Off</pre><p style="width:90%">
		These directives are simply ignored.
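	</p><p style="width:90%">
		As a final usage note: if your mod_rivet build supports explicit bridge selection
		through the <span style="font-family:monospace"><span class="command"><strong>MpmBridge</strong></span></span> directive (check the directives
		reference of this manual), the lazy bridge can be requested with a configuration
		fragment along these lines (illustrative, not a tested configuration):
	</p><pre class="programlisting">&lt;IfModule rivet_module&gt;
    RivetServerConf MpmBridge lazy
&lt;/IfModule&gt;</pre><p style="width:90%">
		No worker threads will be created until the first requests come in, as described above.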
	</p></div></div><div class="navfooter"><hr><table width="100%" summary="Navigation footer"><tr><td width="40%" align="left"><a accesskey="p" href="internals.html"><img src="images/prev.png" alt="Prev"></a> </td><td width="20%" align="center"> </td><td width="40%" align="right"> </td></tr><tr><td width="40%" align="left" valign="top">Rivet Internals </td><td width="20%" align="center"><a accesskey="h" href="index.html"><img src="images/home.png" alt="Home"></a></td><td width="40%" align="right" valign="top"> </td></tr></table></div></body></html>