File: monitoring.rst

==========
Monitoring
==========

Monitoring is a relatively complex topic with many different use-cases and
lots of variations.

This guide presents some of the more common and more interesting ones.  Its
purpose is to give you some hints and ideas on how you can implement simulation
monitoring tailored to your use-cases.

So before you start, you should define your requirements:


*What* do you want to monitor?

- :ref:`Your processes <monitoring-your-processes>`?

- :ref:`Resource usage <resource-usage>`?

- :ref:`Trace all events of the simulation <event-tracing>`?


*When* do you want to monitor?

- Regularly in defined intervals?

- When something happens?


*How* do you want to store the collected data?

- Store it in a simple list?

- Log it to a file?

- Write it to a database?

The following sections discuss these questions and provide some example code
to help you.


.. _monitoring-your-processes:

Monitoring your processes
-------------------------

Monitoring your own processes is relatively easy, because *you* control the
code.  From our experience, the most common thing you might want to do is
monitor the value of one or more state variables every time they change or at
discrete intervals and store it somewhere (in memory, in a database, or in
a file, for example).

In the simplest case, you just use a list and append the required value(s)
every time they change:

.. code-block:: python

   >>> import simpy
   >>>
   >>> data = []  # This list will hold all collected data
   >>>
   >>> def test_process(env, data):
   ...     val = 0
   ...     for i in range(5):
   ...         val += env.now
   ...         data.append(val)  # Collect data
   ...         yield env.timeout(1)
   >>>
   >>> env = simpy.Environment()
   >>> p = env.process(test_process(env, data))
   >>> env.run(p)
   >>> print('Collected', data)  # Let's see what we got
   Collected [0, 1, 3, 6, 10]

If you want to monitor multiple variables, you can append (named) tuples to
your data list.

If you want to store the data in a NumPy array or a database, you can often
increase performance if you buffer the data in a plain Python list and only
write larger chunks (or the complete dataset) to the database.
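
As a rough sketch of this buffering idea, assuming an SQLite database (the
table layout, names and flush threshold are made up for illustration):

```python
import sqlite3

# Illustrative schema; adapt it to the data you actually collect.
conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE samples (time REAL, value REAL)')

buffer = []
BUFFER_SIZE = 100  # Flush threshold; tune it for your workload

def flush():
    """Write all buffered samples to the database in one call."""
    conn.executemany('INSERT INTO samples VALUES (?, ?)', buffer)
    buffer.clear()

def collect(time, value):
    """Buffer a sample and flush once the buffer grows large enough."""
    buffer.append((time, value))
    if len(buffer) >= BUFFER_SIZE:
        flush()

# Pretend we collected 250 samples during a simulation run
for t in range(250):
    collect(t, t * 0.5)
flush()  # Write the remaining samples at the end of the run

count = conn.execute('SELECT COUNT(*) FROM samples').fetchone()[0]
# count == 250
```

The point of the buffer is that ``executemany()`` with 100 rows is much
cheaper than 100 individual ``execute()`` calls.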


.. _resource-usage:

Resource usage
--------------

The use-cases for resource monitoring are numerous.  For example, you might
want to monitor:

- Utilization of a resource over time and on average, that is,

  - the number of processes that are using the resource at a time

  - the level of a container

  - the number of items in a store

  This can be monitored either in discrete time steps or every time there is
  a change.

- Number of processes in the (put|get)queue over time (and the average).
  Again, this could be monitored at discrete time steps or every time there is
  a change.

- For *PreemptiveResource*, you may want to measure how often preemption occurs
  over time.

In contrast to your processes, you don't have direct access to the code of the
built-in resource classes.  But this doesn't prevent you from monitoring them.

Monkey-patching some of a resource's methods allows you to gather all the data
you need.

Here is an example that demonstrates how you can add callbacks to
a resource that get called just before or after a *get / request* or a *put
/ release* event:

.. code-block:: python

   >>> from functools import partial, wraps
   >>> import simpy
   >>>
   >>> def patch_resource(resource, pre=None, post=None):
   ...     """Patch *resource* so that it calls the callable *pre* before each
   ...     put/get/request/release operation and the callable *post* after each
   ...     operation.  The only argument to these functions is the resource
   ...     instance.
   ...
   ...     """
   ...     def get_wrapper(func):
   ...         # Generate a wrapper for put/get/request/release
   ...         @wraps(func)
   ...         def wrapper(*args, **kwargs):
   ...             # This is the actual wrapper
   ...             # Call "pre" callback
   ...             if pre:
   ...                 pre(resource)
   ...
   ...             # Perform actual operation
   ...             ret = func(*args, **kwargs)
   ...
   ...             # Call "post" callback
   ...             if post:
   ...                 post(resource)
   ...
   ...             return ret
   ...         return wrapper
   ...
   ...     # Replace the original operations with our wrapper
   ...     for name in ['put', 'get', 'request', 'release']:
   ...         if hasattr(resource, name):
   ...             setattr(resource, name, get_wrapper(getattr(resource, name)))
   >>>
   >>> def monitor(data, resource):
   ...     """This is our monitoring callback."""
   ...     item = (
   ...         resource._env.now,  # The current simulation time
   ...         resource.count,  # The number of users
   ...         len(resource.queue),  # The number of queued processes
   ...     )
   ...     data.append(item)
   >>>
   >>> def test_process(env, res):
   ...     with res.request() as req:
   ...         yield req
   ...         yield env.timeout(1)
   >>>
   >>> env = simpy.Environment()
   >>>
   >>> res = simpy.Resource(env, capacity=1)
   >>> data = []
   >>> # Bind *data* as first argument to monitor()
   >>> # see https://docs.python.org/3/library/functools.html#functools.partial
   >>> monitor = partial(monitor, data)
   >>> patch_resource(res, post=monitor)  # Patches (only) this resource instance
   >>>
   >>> p = env.process(test_process(env, res))
   >>> env.run(p)
   >>>
   >>> print(data)
   [(0, 1, 0), (1, 0, 0)]

The example above is a very generic but also very flexible way to monitor all
aspects of all kinds of resources.

The other extreme would be to fit the monitoring to exactly one use case.
Imagine, for example, you only want to know how many processes are waiting for
a ``Resource`` at a time:

.. code-block:: python

   >>> import simpy
   >>>
   >>> class MonitoredResource(simpy.Resource):
   ...     def __init__(self, *args, **kwargs):
   ...         super().__init__(*args, **kwargs)
   ...         self.data = []
   ...
   ...     def request(self, *args, **kwargs):
   ...         self.data.append((self._env.now, len(self.queue)))
   ...         return super().request(*args, **kwargs)
   ...
   ...     def release(self, *args, **kwargs):
   ...         self.data.append((self._env.now, len(self.queue)))
   ...         return super().release(*args, **kwargs)
   >>>
   >>> def test_process(env, res):
   ...     with res.request() as req:
   ...         yield req
   ...         yield env.timeout(1)
   >>>
   >>> env = simpy.Environment()
   >>>
   >>> res = MonitoredResource(env, capacity=1)
   >>> p1 = env.process(test_process(env, res))
   >>> p2 = env.process(test_process(env, res))
   >>> env.run()
   >>>
   >>> print(res.data)
   [(0, 0), (0, 0), (1, 1), (2, 0)]

In contrast to the first example, we have now patched the whole class instead
of a single resource instance.  This approach also removes all of the first
example's flexibility: we only monitor resources of type ``Resource``, we only
collect data *before* the actual requests are made, and we only collect the
time and queue length.  At the same time, it needs less than half of the code.


.. _event-tracing:

Event tracing
-------------

.. currentmodule:: simpy.core

In order to debug or visualize a simulation, you might want to trace when
events are created, triggered and processed.  Maybe you also want to trace
which process created an event and which processes waited for an event.

The two most interesting functions for these use-cases are
:meth:`Environment.step()`, where all events get processed, and
:meth:`Environment.schedule()`, where all events get scheduled and inserted
into SimPy's event queue.

Here is an example that shows how :meth:`Environment.step()` can be patched in
order to trace all processed events:

.. code-block:: python

   >>> from functools import partial, wraps
   >>> import simpy
   >>>
   >>> def trace(env, callback):
   ...     """Replace the ``step()`` method of *env* with a tracing function
   ...     that calls *callbacks* with an events time, priority, ID and its
   ...     instance just before it is processed.
   ...
   ...     """
   ...     def get_wrapper(env_step, callback):
   ...         """Generate the wrapper for env.step()."""
   ...         @wraps(env_step)
   ...         def tracing_step():
   ...             """Call *callback* for the next event if one exist before
   ...             calling ``env.step()``."""
   ...             if len(env._queue):
   ...                 t, prio, eid, event = env._queue[0]
   ...                 callback(t, prio, eid, event)
   ...             return env_step()
   ...         return tracing_step
   ...
   ...     env.step = get_wrapper(env.step, callback)
   >>>
   >>> def monitor(data, t, prio, eid, event):
   ...     data.append((t, eid, type(event)))
   >>>
   >>> def test_process(env):
   ...     yield env.timeout(1)
   >>>
   >>> data = []
   >>> # Bind *data* as first argument to monitor()
   >>> # see https://docs.python.org/3/library/functools.html#functools.partial
   >>> monitor = partial(monitor, data)
   >>>
   >>> env = simpy.Environment()
   >>> trace(env, monitor)
   >>>
   >>> p = env.process(test_process(env))
   >>> env.run(until=p)
   >>>
   >>> for d in data:
   ...     print(d)
   (0, 0, <class 'simpy.events.Initialize'>)
   (1, 1, <class 'simpy.events.Timeout'>)
   (1, 2, <class 'simpy.events.Process'>)

The example above is inspired by a pull request from Steve Pothier.

Using the same concepts, you can also patch :meth:`Environment.schedule()`.
This would give you central access to the information about which event is
scheduled at what time.

In addition to that, you could also patch some or all of SimPy's event
classes, e.g., their ``__init__()`` method, in order to trace when and how an
event is being created.