line_profiler and kernprof
--------------------------

`line_profiler` is a module for doing line-by-line profiling of functions.
`kernprof` is a convenient script for running either `line_profiler` or the Python
standard library's cProfile or profile modules, depending on what is available.

They are available under a `BSD license`_.

.. _BSD license: https://raw.githubusercontent.com/rkern/line_profiler/master/LICENSE.txt

.. contents::


Installation
============

Releases of `line_profiler` can be installed using pip::

    $ pip install line_profiler

Source releases and any binaries can be downloaded from the PyPI link.

    http://pypi.python.org/pypi/line_profiler

To check out the development sources, you can use Git_::

    $ git clone https://github.com/rkern/line_profiler.git

You may also download source tarballs of any snapshot from that URL.

Source releases will require a C compiler in order to build `line_profiler`.
Git checkouts will also require Cython_ >= 0.10. Source releases
on PyPI should contain the pregenerated C sources, so Cython should not be
required in that case.
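
As a rough sketch (assuming `pip` and a working C compiler are already
available), building and installing from a git checkout might look like::

    $ git clone https://github.com/rkern/line_profiler.git
    $ cd line_profiler
    $ pip install Cython    # only needed for git checkouts and snapshots
    $ pip install .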

`kernprof` is a single-file pure Python script and does not require
a compiler.  If you wish to use it to run cProfile and not line-by-line
profiling, you may copy it to a directory on your `PATH` manually and avoid
trying to build any C extensions.

.. _git: http://git-scm.com/
.. _Cython: http://www.cython.org
.. _build and install: http://docs.python.org/install/index.html


line_profiler
=============

The current profiling tools supported in Python 2.7 and later only time
function calls. This is a good first step for locating hotspots in one's program
and is frequently all one needs to do to optimize it. However,
sometimes the cause of the hotspot is actually a single line in the function,
and that line may not be obvious from just reading the source code. These cases
are particularly frequent in scientific computing. Functions tend to be larger
(sometimes because of legitimate algorithmic complexity, sometimes because the
programmer is still trying to write FORTRAN code), and a single statement
without function calls can trigger lots of computation when using libraries like
numpy. cProfile only times explicit function calls, not special methods called
because of syntax. Consequently, a relatively slow numpy operation on large
arrays like this, ::

    a[large_index_array] = some_other_large_array

is a hotspot that never gets broken out by cProfile because there is no explicit
function call in that statement.

LineProfiler can be given functions to profile, and it will time the execution
of each individual line inside those functions. In a typical workflow, one only
cares about line timings of a few functions because wading through the results
of timing every single line of code would be overwhelming. However, LineProfiler
does need to be explicitly told what functions to profile. The easiest way to
get started is to use the `kernprof` script. ::

    $ kernprof -l script_to_profile.py

`kernprof` will create an instance of LineProfiler and insert it into the
`__builtins__` namespace with the name `profile`. It has been written to be
used as a decorator, so in your script, you decorate the functions you want
to profile with @profile. ::

    @profile
    def slow_function(a, b, c):
        ...

The default behavior of `kernprof` is to put the results into a binary file,
`script_to_profile.py.lprof`. You can tell `kernprof` to immediately view the
formatted results at the terminal with the [-v/--view] option. Otherwise, you
can view the results later like so::

    $ python -m line_profiler script_to_profile.py.lprof

For example, here are the results of profiling a single function from
a decorated version of the pystone.py benchmark (the first two lines are output
from `pystone.py`, not `kernprof`)::

    Pystone(1.1) time for 50000 passes = 2.48
    This machine benchmarks at 20161.3 pystones/second
    Wrote profile results to pystone.py.lprof
    Timer unit: 1e-06 s

    File: pystone.py
    Function: Proc2 at line 149
    Total time: 0.606656 s

    Line #      Hits         Time  Per Hit   % Time  Line Contents
    ==============================================================
       149                                           @profile
       150                                           def Proc2(IntParIO):
       151     50000        82003      1.6     13.5      IntLoc = IntParIO + 10
       152     50000        63162      1.3     10.4      while 1:
       153     50000        69065      1.4     11.4          if Char1Glob == 'A':
       154     50000        66354      1.3     10.9              IntLoc = IntLoc - 1
       155     50000        67263      1.3     11.1              IntParIO = IntLoc - IntGlob
       156     50000        65494      1.3     10.8              EnumLoc = Ident1
       157     50000        68001      1.4     11.2          if EnumLoc == Ident1:
       158     50000        63739      1.3     10.5              break
       159     50000        61575      1.2     10.1      return IntParIO


The source code of the function is printed with the timing information for each
line. There are six columns of information.

    * Line #: The line number in the file.

    * Hits: The number of times that line was executed.

    * Time: The total amount of time spent executing the line in the timer's
      units. In the header information before the tables, you will see a line
      "Timer unit:" giving the conversion factor to seconds. It may be different
      on different systems.

    * Per Hit: The average amount of time spent executing the line once, in the
      timer's units (a worked example follows this list).

    * % Time: The percentage of time spent on that line relative to the total
      amount of recorded time spent in the function.

    * Line Contents: The actual source code. Note that this is always read from
      disk when the formatted results are viewed, *not* when the code was
      executed. If you have edited the file in the meantime, the lines will not
      match up, and the formatter may not even be able to locate the function
      for display.
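
For example, for line 151 in the output above, the Per Hit value is
82003 / 50000, or about 1.6 timer units, and the % Time value is
82003 / 606656, or about 13.5% of the function's total recorded time.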

If you are using IPython, there is an implementation of a `%lprun` magic command
which lets you specify functions to profile and a statement to execute. It
will also add its LineProfiler instance into `__builtins__`, but typically
you would not use it that way.

For IPython 0.11+, you can install it by editing the IPython configuration file
`~/.ipython/profile_default/ipython_config.py` to add the `'line_profiler'`
item to the extensions list::

    c.TerminalIPythonApp.extensions = [
        'line_profiler',
    ]


To get usage help for %lprun, use the standard IPython help mechanism::

    In [1]: %lprun?
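
For illustration (reusing `slow_function` from the decorator example above; the
arguments are placeholders), a session might look something like::

    In [1]: %load_ext line_profiler   # if not already loaded via the config file

    In [2]: %lprun -f slow_function slow_function(1, 2, 3)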

These two methods are expected to be the most frequent user-level ways of using
LineProfiler and will usually be the easiest. However, if you are building other
tools with LineProfiler, you will need to use the API. There are two ways to
inform LineProfiler of functions to profile: you can pass them as arguments to
the constructor or use the `add_function(f)` method after instantiation. ::

    profile = LineProfiler(f, g)
    profile.add_function(h)

LineProfiler has the same `run()`, `runctx()`, and `runcall()` methods as
cProfile.Profile as well as `enable()` and `disable()`. It should be noted,
though, that `enable()` and `disable()` are not entirely safe when nested.
Nesting is common when using LineProfiler as a decorator. In order to support
nesting, use `enable_by_count()` and `disable_by_count()`. These functions will
increment and decrement a counter and only actually enable or disable the
profiler when the count transitions from or to 0.

After profiling, the `dump_stats(filename)` method will pickle the results out
to the given file. `print_stats([stream])` will print the formatted results to
sys.stdout or whatever stream you specify. `get_stats()` will return a `LineStats`
object, which just holds two attributes: a dictionary containing the results and
the timer unit.
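
Putting these pieces together, a minimal sketch of the API (the profiled
functions here are made-up stand-ins) might look like::

    from line_profiler import LineProfiler

    def build(n):
        return [i ** 2 for i in range(n)]

    def total(n):
        return sum(build(n))

    profile = LineProfiler(build)        # register at construction time
    profile.add_function(total)          # or register afterwards
    profile.runcall(total, 10000)        # profile a single call, as with cProfile.Profile
    profile.print_stats()                # formatted per-line results to sys.stdout
    profile.dump_stats('example.lprof')  # pickle the results for later viewing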


kernprof
========

`kernprof` also works with cProfile, its third-party incarnation lsprof, or the
pure-Python profile module depending on what is available. It has a few main
features:

    * Encapsulation of profiling concerns. You do not have to modify your script
      in order to initiate profiling and save the results, unless, of course, you
      want to use the advanced `__builtins__` features.

    * Robust script execution. Many scripts require things like __name__,
      __file__, and sys.path to be set relative to the script itself. A naive
      approach to encapsulation would just use execfile(), but scripts that rely
      on that information would fail. kernprof sets those variables correctly
      before executing the script.

    * Easy executable location. If you are profiling an application installed on
      your PATH, you can just give the name of the executable. If kernprof does
      not find the given script in the current directory, it will search your
      PATH for it.

    * Inserting the profiler into __builtins__. Sometimes, you just want to
      profile a small part of your code. With the [-b/--builtin] argument, the
      Profiler will be instantiated and inserted into your __builtins__ with the
      name "profile". Like LineProfiler, it may be used as a decorator, or
      enabled/disabled with `enable_by_count()` and `disable_by_count()`, or
      even as a context manager with the "with profile:" statement.

    * Pre-profiling setup. With the [-s/--setup] option, you can provide
      a script which will be executed without profiling before the main script
      runs. This is typically useful when imports of large libraries like
      wxPython or VTK would otherwise skew your results. If you can modify your
      source code, the __builtins__ approach may be easier. (Example invocations
      follow this list.)
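
For example (the script and setup-file names below are placeholders), these
options can be combined like so::

    $ kernprof -l -v script_to_profile.py              # line-by-line, view results immediately
    $ kernprof -b script_to_profile.py                 # insert "profile" into __builtins__
    $ kernprof -s import_heavy_libs.py script_to_profile.py   # run setup script unprofiled first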

The results of profiling `script_to_profile.py` will be written to
`script_to_profile.py.prof` by default. This is a typical marshalled file that
can be read with `pstats.Stats()` and may be viewed interactively with the
command::

    $ python -m pstats script_to_profile.py.prof
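
Alternatively, a short standard-library snippet (assuming the default output
filename above) can load and summarize the same data::

    import pstats

    stats = pstats.Stats('script_to_profile.py.prof')
    stats.sort_stats('cumulative').print_stats(10)   # ten entries, sorted by cumulative time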

Such files may also be viewed with graphical tools such as kcachegrind_ (via the
converter program pyprof2calltree_) or RunSnakeRun_.

.. _kcachegrind: http://kcachegrind.sourceforge.net/html/Home.html
.. _pyprof2calltree: http://pypi.python.org/pypi/pyprof2calltree/
.. _RunSnakeRun: http://www.vrplumber.com/programming/runsnakerun/


Frequently Asked Questions
==========================

* Why the name "kernprof"?

    I didn't manage to come up with a meaningful name, so I named it after
    myself.

* Why not use hotshot instead of line_profiler?

    hotshot can do line-by-line timings, too. However, it is deprecated and may
    disappear from the standard library. Also, it can take a long time to
    process the results, whereas I want quick turnaround in my workflows. hotshot
    pays this processing time in order to make itself minimally intrusive to the
    code it is profiling. Code that does network operations, for example, may
    even go down different code paths if profiling slows down execution too
    much. For my use cases, and I think those of many other people, line-by-line
    profiling is not much affected by this concern.

* Why not allow using hotshot from kernprof.py?

    I don't use hotshot, myself. I will accept contributions in this vein,
    though.

* The line-by-line timings don't add up when one profiled function calls
  another. What's up with that?

    Let's say you have function F() calling function G(), and you are using
    LineProfiler on both. The total time reported for G() is less than the time
    reported on the line in F() that calls G(). The reason is that I'm being
    reasonably clever (and possibly too clever) in recording the times.
    Basically, I try to prevent recording the time spent inside LineProfiler
    doing all of the bookkeeping for each line. Each time Python's tracing
    facility issues a line event (which happens just before a line actually gets
    executed), LineProfiler will find two timestamps, one at the beginning
    before it does anything (t_begin) and one as close to the end as possible
    (t_end). Almost all of the overhead of LineProfiler's data structures
    happens in between these two times.

    When a line event comes in, LineProfiler finds the function it belongs to.
    If it's the first line in the function, we record the line number and
    *t_end* associated with the function. The next time we see a line event
    belonging to that function, we take t_begin of the new event and subtract
    the old t_end from it to find the amount of time spent in the old line. Then
    we record the new line number and t_end as the active line for this function.
    This way, we remove most of LineProfiler's overhead from the results. Well,
    almost. When one profiled function F calls another profiled function G, the line in
    F that calls G basically records the total time spent executing the line,
    which includes the time spent inside the profiler while inside G.

    The first time this question was asked, the questioner had the G() function
    call as part of a larger expression, and he wanted to try to estimate how
    much time was being spent in the function as opposed to the rest of the
    expression. My response was that, even if I could remove the effect, it
    might still be misleading. G() might be called elsewhere, not just from the
    relevant line in F(). The workaround would be to modify the code to split it
    up into two lines, one which just assigns the result of G() to a temporary
    variable and the other with the rest of the expression.

    I am open to suggestions on how to make this more robust. Or simple
    admonitions against trying to be clever.

* Why do my list comprehensions have so many hits when I use the LineProfiler?

    LineProfiler records the line containing the list comprehension once for
    each iteration of that comprehension.

* Why is kernprof distributed with line_profiler? It works with just cProfile,
  right?

    Partly because kernprof.py is essential to using line_profiler effectively,
    but mostly because I'm lazy and don't want to maintain the overhead of two
    projects for modules as small as these. However, kernprof.py is
    a standalone, pure Python script that can be used to do function profiling
    with just the Python standard library. You may grab it and install it by
    itself without `line_profiler`.

* Do I need a C compiler to build `line_profiler`? kernprof.py?

    You do need a C compiler for line_profiler. kernprof.py is a pure Python
    script and can be installed separately, though.

* Do I need Cython to build `line_profiler`?

    You should not have to if you are building from a released source tarball.
    It should contain the generated C sources already. If you are running into
    problems, that may be a bug; let me know. If you are building from
    a git checkout or snapshot, you will need Cython to generate the
    C sources. You will probably need version 0.10 or higher; some earlier
    versions have a bug in how they handle NULL PyObject* pointers.

* What version of Python do I need?

    Both `line_profiler` and `kernprof` have been tested with Python 2.7, and
    3.2-3.4.


To Do
=====

cProfile uses a neat "rotating trees" data structure to minimize the overhead of
looking up and recording entries. LineProfiler uses Python dictionaries and
extension objects thanks to Cython. This mostly started out as a prototype that
I wanted to play with as quickly as possible, so I passed on stealing the
rotating trees for now. As usual, I got it working, and it seems to have
acceptable performance, so I am much less motivated to use a different strategy
now. Maybe later. Contributions accepted!


Bugs and Such
=============

Bugs and pull requests can be submitted on GitHub_.

.. _GitHub: https://github.com/rkern/line_profiler


Changes
=======

2.1
~~~
* ENH: Add support for Python 3.5 coroutines
* ENH: Documentation updates
* ENH: CI for most recent Python versions (3.5, 3.6, 3.6-dev, 3.7-dev, nightly)
* ENH: Add timer unit argument for output time granularity spec

2.0
~~~
* BUG: Added support for IPython 5.0+, removed support for IPython <=0.12

1.1
~~~
* BUG: Read source files as bytes.

1.0
~~~
* ENH: `kernprof.py` is now installed as `kernprof`.
* ENH: Python 3 support. Thanks to the long-suffering Mikhail Korobov for being
  patient.
* Dropped Python 2.6 support, as it was too annoying.
* ENH: The `stripzeros` and `add_module` options. Thanks to Erik Tollerud for
  contributing them.
* ENH: Support for IPython cell blocks. Thanks to Michael Forbes for adding
  this feature.
* ENH: Better warnings when building without Cython. Thanks to David Cournapeau
  for spotting this.

1.0b3
~~~~~

* ENH: Profile generators.
* BUG: Update for compatibility with newer versions of Cython. Thanks to Ondrej
  Certik for spotting the bug.
* BUG: Update IPython compatibility for 0.11+. Thanks to Yaroslav Halchenko and
  others for providing the updated imports.

1.0b2
~~~~~

* BUG: fixed line timing overflow on Windows.
* DOC: improved the README.

1.0b1
~~~~~

* Initial release.