File: SimulationNext.htm

<html>
<head>
<title>Next Reaction Method</title>
</head>
<body>
<h1>Next Reaction Method</h1>

<p>
Gibson and Bruck's next reaction method is an adaptation of the first
reaction method [<a href="Bibliography.htm#gibson2000">Gibson 2000</a>].
Instead of computing the time until each reaction fires, one works with
the absolute time at which each reaction will occur.  These times are
not computed anew at each time step, but re-used.  The reaction times
are stored in an indexed priority queue (<em>indexed</em> because the
reaction indices are stored with the reaction times).  Also,
propensities are recomputed only when the firing of a reaction may have
changed them.  Below is the algorithm for a single step.
<ol>
<li> Get the reaction index &mu; and the reaction time &tau; by 
  removing the minimum element from the priority queue.
<li> <em>t = &tau;</em>
<li> <em>X = X + V<sub>&mu;</sub></em>
<li> For each propensity <em>m</em> (except &mu;) that is affected by 
   reaction &mu;:
  <ol>
    <li> &alpha; = updated propensity.
    <li> <em>&tau;<sub>m</sub> = (a<sub>m</sub> / &alpha;)
    (&tau;<sub>m</sub> - t) + t</em>
    <li> <em>a<sub>m</sub> = &alpha;</em>
    <li> Update the priority queue with the new value of
    <em>&tau;<sub>m</sub></em>.
  </ol>
<li> Recompute the propensity <em>a<sub>&mu;</sub></em> and generate an
  exponential random variable <em>r</em> with mean
  <em>1/a<sub>&mu;</sub></em>.
<li> <em>&tau;<sub>&mu;</sub> = t + r</em>
<li> Push <em>&tau;<sub>&mu;</sub></em> into the priority queue.
</ol>
</p>
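
<p>
To make the steps above concrete, here is a minimal Python sketch of a
single step.  It is only an illustration, not Cain's implementation:
the indexed priority queue is replaced by a plain dictionary searched
for its minimum, the two-reaction model, the dependency graph, and the
names (<tt>step</tt>, <tt>propensity</tt>, and so on) are hypothetical,
and a reaction whose propensity rises from zero simply receives a fresh
exponential deviate rather than re-using a stored value.
</p>
<pre>
import math
import random

def propensity(m, x, k):
    # Mass-action propensity of reaction m for state x and rate constants k.
    # Hypothetical model: reaction 0 converts species 0 to species 1,
    # reaction 1 converts species 1 back to species 0.
    return k[m] * x[m]

def step(t, x, a, tau, k, v, depends, rng=random):
    # 1. Take the reaction with the minimum time (the dict stands in
    #    for the indexed priority queue).
    mu = min(tau, key=tau.get)
    # 2.-3. Advance the time and apply the state-change vector.
    t = tau[mu]
    x = [xi + vi for xi, vi in zip(x, v[mu])]
    # 4. Rescale the times of the other affected reactions.
    for m in depends[mu]:
        if m == mu:
            continue
        alpha = propensity(m, x, k)
        if a[m] > 0 and alpha > 0:
            # Re-use the old deviate by rescaling.
            tau[m] = (a[m] / alpha) * (tau[m] - t) + t
        elif alpha > 0:
            # The propensity was zero; draw a fresh time (a simplification).
            tau[m] = t + rng.expovariate(alpha)
        else:
            tau[m] = math.inf
        a[m] = alpha
    # 5.-7. Recompute the propensity of the fired reaction and draw a
    #       new exponential deviate with mean 1/a[mu].
    a[mu] = propensity(mu, x, k)
    tau[mu] = t + rng.expovariate(a[mu]) if a[mu] > 0 else math.inf
    return t, x

# Hypothetical two-reaction example.
k = [1.0, 0.5]
x = [10, 0]
v = [[-1, 1], [1, -1]]            # state-change vectors
depends = {0: [0, 1], 1: [0, 1]}  # dependency graph
a = [propensity(m, x, k) for m in range(2)]
tau = {m: random.expovariate(a[m]) if a[m] > 0 else math.inf
       for m in range(2)}
t = 0.0
t, x = step(t, x, a, tau, k, v, depends)
</pre>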

<p>
Consider the computational complexity of the next reaction method.  We
assume that the reactions are loosely coupled and hence computing a
propensity <em>a<sub>m</sub></em> is O(1).  Let <em>D</em> be an upper
bound on the number of propensities that are affected by firing a
single reaction.  Then the cost of updating the propensities and the
reaction times is O(<em>D</em>).  Since the cost of inserting or
changing a value in the priority queue is O(log <em>M</em>), where
<em>M</em> is the number of reactions, the cost of updating the
priority queue is O(<em>D</em> log <em>M</em>).
Therefore the computational complexity of a step with the next
reaction method is O(<em>D</em> log <em>M</em>).
</p>

<p>
One can reformulate the next reaction method to obtain a more efficient 
algorithm.  The most expensive parts of the algorithm are maintaining
the binary heap, updating the state, and generating exponential deviates.
Improving the generation of exponential deviates is a minimally invasive 
procedure.  Instead of using the inversion method, one can use the 
ziggurat method [<a href="Bibliography.htm#marsaglia2000">Marsaglia 2000</a>]
or the acceptance complement method
[<a href="Bibliography.htm#rubin2006">Rubin 2006</a>].
Reducing the cost of the binary heap operations is 
a more complicated affair.  We present several approaches below.
</p>
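
<p>
For reference, the inversion method mentioned above amounts to the
following one-liner; the ziggurat and acceptance-complement methods
sample the same distribution while avoiding the logarithm.  This is a
generic illustration with a hypothetical function name, not Cain's code.
</p>
<pre>
import math
import random

def exponential_inversion(rate, rng=random):
    # Inversion method: if U is uniform on (0, 1], then -ln(U) / rate is
    # exponentially distributed with rate `rate` (mean 1 / rate).
    # 1.0 - rng.random() lies in (0, 1], so the logarithm is finite.
    return -math.log(1.0 - rng.random()) / rate
</pre>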


<p>
<b>Indexed Priority Queues</b><br>
The term <em>priority queue</em> has almost become synonymous with
<em>binary heap</em>.  For most applications, a binary heap is an
efficient way of implementing a priority queue.  For a heap with <em>M</em>
elements, one can access the minimum element in constant time. The
cost to insert or extract an element or to change the value of an
element is O(log <em>M</em>).  Also, the storage requirements are
linear in the number of elements.  While a binary heap is rarely the
most efficient data structure for a particular application, it is
usually efficient enough.  If performance is important and the heap
operations constitute a significant portion of the computational cost
in an application, then it may be profitable to consider other data
structures.
</p>


<p>
<b>Linear Search</b><br>
The simplest method of implementing a priority queue is to store the
elements in an array and use a linear search to find the minimum
element.  The computational complexity of finding the minimum element
is O(<em>M</em>).  Inserting, deleting, and modifying elements can be
done in constant time.  For the next reaction method, linear search is
the most efficient algorithm when the number of reactions is small.
</p>
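
<p>
As an illustration of the idea, a linear-search queue can be as simple
as the sketch below.  The class name and the <tt>top</tt>/<tt>set</tt>
interface are hypothetical, chosen only to parallel the other sketches
on this page.
</p>
<pre>
class LinearSearchQueue:
    # Reaction times stored in a plain list; the minimum is found by a
    # linear search, so top() is O(M) while set() is O(1).
    def __init__(self, times):
        self.times = list(times)

    def top(self):
        mu = 0
        for m in range(1, len(self.times)):
            if self.times[m] &lt; self.times[mu]:
                mu = m
        return mu, self.times[mu]

    def set(self, m, t):
        self.times[m] = t

queue = LinearSearchQueue([0.7, 0.2, 1.5])
mu, t = queue.top()   # (1, 0.2)
queue.set(mu, 0.9)    # O(1) update after the reaction fires
</pre>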

<p>
<b>Partitioning</b><br>
For larger problem sizes, one can utilize the under-appreciated method
of partitioning.  One stores the elements in an array, but classifies the
elements into two categories: <em>lower</em> and <em>upper</em>.  One uses a splitting
value to discriminate; the elements in the lower partition are less than
the splitting value.  Then one can determine the minimum value in the queue
with a linear search on the elements in the lower partition.  Inserting,
erasing, and modifying values can all be done in constant time.  However,
there is the overhead of determining in which partition an element belongs.
When the lower partition becomes empty, one must choose a new splitting 
value and re-partition the elements (at cost O(<em>M</em>)).
By choosing the splitting value so that there are O(<em>M<sup>1/2</sup></em>)
elements in the lower partition, one can attain an average cost of
O(<em>M<sup>1/2</sup></em>) for determining the minimum element.  
This choice balances the costs of searching and re-partitioning.
The cost of a search, O(<em>M<sup>1/2</sup></em>), multiplied by the
number of searches performed before one needs to re-partition, also
O(<em>M<sup>1/2</sup></em>), matches the O(<em>M</em>) cost of
re-partitioning.  There are 
several strategies for choosing the splitting value and partitioning
the elements.  Partitioning with a linear search is an efficient method
for problems of moderate size.
</p>
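
<p>
The sketch below illustrates the structure of the method.  The names
are hypothetical, the lower partition holds times that do not exceed
the splitting value, and the splitting value is chosen naively as the
k-th smallest time with k near <em>M<sup>1/2</sup></em>; sorting stands
in for a linear-time selection purely to keep the sketch short.
</p>
<pre>
import math

class PartitionQueue:
    # Times not exceeding the splitting value form the lower partition,
    # which is searched linearly.  set() is O(1); when the lower
    # partition empties, _repartition() rebuilds it.
    def __init__(self, times):
        self.times = list(times)
        self.lower = set()
        self.split = math.inf
        self._repartition()

    def _repartition(self):
        # Aim for about sqrt(M) elements in the lower partition.
        k = max(1, int(math.sqrt(len(self.times))))
        self.split = sorted(self.times)[k - 1]
        self.lower = {i for i, t in enumerate(self.times) if t &lt;= self.split}

    def top(self):
        if not self.lower:
            self._repartition()
        mu = min(self.lower, key=lambda i: self.times[i])
        return mu, self.times[mu]

    def set(self, i, t):
        # O(1): only the partition membership must be maintained.
        self.times[i] = t
        if t &lt;= self.split:
            self.lower.add(i)
        else:
            self.lower.discard(i)
</pre>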

<p>
<b>Binary Heaps</b><br>
When using indexed binary heaps, there are a few implementation details
that have a significant impact on performance. See the documentation
of the source code for details.
Binary heaps have decent performance for a wide range of problem sizes.
Because the algorithms are fairly simple, they perform well for small 
problems.  Because of the logarithmic complexity, they are suitable for
fairly large problems.
</p>
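
<p>
The word <em>indexed</em> refers to keeping, alongside the heap, an
array that records the position of each reaction in the heap, so that
an arbitrary reaction's time can be changed in O(log <em>M</em>).  The
sketch below shows this bookkeeping; the names are hypothetical, and
insertion and deletion are omitted because the next reaction method
does not need them once the heap is built.
</p>
<pre>
class IndexedBinaryHeap:
    # heap[k] holds a reaction index, times[i] its time, and pos[i] the
    # position of reaction i in the heap, so that set() runs in O(log M).
    def __init__(self, times):
        self.times = list(times)
        self.heap = list(range(len(times)))
        self.pos = list(range(len(times)))
        for k in reversed(range(len(times) // 2)):
            self._sift_down(k)

    def top(self):
        mu = self.heap[0]
        return mu, self.times[mu]

    def set(self, i, t):
        old, self.times[i] = self.times[i], t
        if t &lt; old:
            self._sift_up(self.pos[i])
        else:
            self._sift_down(self.pos[i])

    def _less(self, j, k):
        return self.times[self.heap[j]] &lt; self.times[self.heap[k]]

    def _swap(self, j, k):
        self.heap[j], self.heap[k] = self.heap[k], self.heap[j]
        self.pos[self.heap[j]] = j
        self.pos[self.heap[k]] = k

    def _sift_up(self, k):
        while k > 0 and self._less(k, (k - 1) // 2):
            self._swap(k, (k - 1) // 2)
            k = (k - 1) // 2

    def _sift_down(self, k):
        n = len(self.heap)
        while True:
            child = 2 * k + 1
            if child >= n:
                return
            if child + 1 &lt; n and self._less(child + 1, child):
                child = child + 1
            if self._less(child, k):
                self._swap(k, child)
                k = child
            else:
                return
</pre>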

<p>
<b>Hashing</b><br>
There is a data structure that can perform each of the operations
(finding the minimum element, inserting, removing, and modifying)
in constant time.  This is accomplished with hashing. (One could also 
refer to the method as bucketing.)  The reaction times are stored in
a hash table [<a href="Bibliography.htm#cormen2001">Cormen 2001</a>].
The hash function is a linear function of the reaction
time (with a truncation to convert from a floating-point value to an 
integer index).
The constant in the linear function is chosen to give the desired load.
For hashing with chaining, if the load is O(1), then all
operations can be done in constant time.  As with binary heaps, the 
implementation is important.
</p>
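
<p>
The sketch below illustrates the bucketing idea with hypothetical
names.  Each time is hashed to the bucket floor(<em>t</em> &times;
factor); because reaction times never fall below the current time in
the next reaction method, the minimum can be found by scanning forward
from the current bucket.  Choosing the factor so that the expected
number of times per bucket is O(1) gives constant expected cost for
every operation.  This is only an outline; for the details that matter
in practice, see the documentation of the source code.
</p>
<pre>
import math
from collections import defaultdict

class HashingQueue:
    # Reaction times bucketed by the hash floor(t * factor).  top() scans
    # forward from the current bucket; set() moves a time between buckets
    # in O(1).  Assumes at least one finite time and that times never
    # fall below the start of the current bucket.
    def __init__(self, times, factor):
        self.factor = factor
        self.times = list(times)
        self.buckets = defaultdict(set)
        for i, t in enumerate(times):
            self.buckets[self._key(t)].add(i)
        self.current = min(self.buckets)  # lowest occupied bucket

    def _key(self, t):
        return math.floor(t * self.factor) if math.isfinite(t) else math.inf

    def top(self):
        # Advance to the first nonempty bucket, then search it linearly.
        while not self.buckets.get(self.current):
            self.current += 1
        mu = min(self.buckets[self.current], key=lambda i: self.times[i])
        return mu, self.times[mu]

    def set(self, i, t):
        # O(1): move reaction i between buckets.
        self.buckets[self._key(self.times[i])].discard(i)
        self.times[i] = t
        self.buckets[self._key(t)].add(i)
</pre>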

<p>
The following options are available with the next reaction method. 
<ul>
  <li>
  <tt>Hashing.</tt> O(1). Uses a hash table to store the reaction times.
  This is typically the fastest method for medium and large problems.
  <li>
  <tt>Binary Search.</tt> O(log <em>M</em>). Uses an indexed binary heap
  to store the reaction times. This version has good performance for most
  problems.
  <li>
  <tt>Partition.</tt> O(<em>M<sup>1/2</sup></em>). Partitions the reactions
  according to how soon they will fire. This option has good performance for
  small to medium-sized problems.
  <li>
  <tt>Linear Search.</tt> O(<em>M</em>). Stores the reaction times in an
  array. This method offers good performance only for fairly small problems.
</ul>
</p>

</body>
</html>