.. _gotchas:

{{ header }}

********************************
Frequently Asked Questions (FAQ)
********************************

.. _df-memory-usage:

DataFrame memory usage
----------------------
The memory usage of a :class:`DataFrame` (including the index) is shown when calling
:meth:`~DataFrame.info`. The configuration option ``display.memory_usage``
(see :ref:`the list of options <options.available>`) controls whether the
:class:`DataFrame` memory usage is displayed when invoking the ``df.info()``
method.

For example, the memory usage of the :class:`DataFrame` below is shown
when calling :meth:`~DataFrame.info`:

.. ipython:: python

    dtypes = [
        "int64",
        "float64",
        "datetime64[ns]",
        "timedelta64[ns]",
        "complex128",
        "object",
        "bool",
    ]
    n = 5000
    data = {t: np.random.randint(100, size=n).astype(t) for t in dtypes}
    df = pd.DataFrame(data)
    df["categorical"] = df["object"].astype("category")

    df.info()

The ``+`` symbol indicates that the true memory usage could be higher, because
pandas does not count the memory used by values in columns with
``dtype=object``.

Passing ``memory_usage='deep'`` enables a more accurate memory usage report that
accounts for the full usage of the contained objects. This is optional because
the deeper introspection can be expensive.

.. ipython:: python

   df.info(memory_usage="deep")

By default the display option is set to ``True``, but it can be explicitly
overridden by passing the ``memory_usage`` argument to ``df.info()``.
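
For example:

.. code-block:: python

   df.info(memory_usage=False)  # omit the memory usage report entirely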

The memory usage of each column can be found by calling the
:meth:`~DataFrame.memory_usage` method. It returns a :class:`Series` indexed by
column name, with the memory usage of each column shown in bytes. For the
:class:`DataFrame` above, the memory usage of each column and the total memory
usage can be found with the ``memory_usage`` method:

.. ipython:: python

    df.memory_usage()

    # total memory usage of dataframe
    df.memory_usage().sum()

By default the memory usage of the :class:`DataFrame` index is shown in the
returned :class:`Series`; it can be suppressed by passing the ``index=False``
argument:

.. ipython:: python

    df.memory_usage(index=False)

The memory usage displayed by the :meth:`~DataFrame.info` method uses the
:meth:`~DataFrame.memory_usage` method to determine the memory usage of a
:class:`DataFrame`, while also formatting the output in human-readable units
(base-2 representation; i.e. 1 KB = 1024 bytes).
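
As a quick sketch of that same base-2 conversion applied to the raw byte count
(plain arithmetic, not a pandas API):

.. code-block:: python

   total_bytes = df.memory_usage().sum()
   print(f"{total_bytes / 1024:.1f} KB")  # 1 KB = 1024 bytes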

See also :ref:`Categorical Memory Usage <categorical.memory>`.

.. _gotchas.truth:

Using if/truth statements with pandas
-------------------------------------

pandas follows the NumPy convention of raising an error when you try to convert
something to a ``bool``. This happens in an ``if``-statement or when using the
boolean operations: ``and``, ``or``, and ``not``. It is not clear what the result
of the following code should be:

.. code-block:: python

    >>> if pd.Series([False, True, False]):
    ...     pass

Should it be ``True`` because it's not zero-length, or ``False`` because there
are ``False`` values? It is unclear, so instead, pandas raises a ``ValueError``:

.. ipython:: python
    :okexcept:

    if pd.Series([False, True, False]):
        print("I was true")

You need to explicitly choose what you want to do with the :class:`DataFrame`, e.g.
use :meth:`~DataFrame.any`, :meth:`~DataFrame.all` or :meth:`~DataFrame.empty`.
Alternatively, you might want to check whether the pandas object is ``None``:

.. ipython:: python

    if pd.Series([False, True, False]) is not None:
        print("I was not None")


Below is how to check if any of the values are ``True``:

.. ipython:: python

    if pd.Series([False, True, False]).any():
        print("I am any")

To evaluate single-element pandas objects in a boolean context, use the method
:meth:`~DataFrame.bool`:

.. ipython:: python

   pd.Series([True]).bool()
   pd.Series([False]).bool()
   pd.DataFrame([[True]]).bool()
   pd.DataFrame([[False]]).bool()

Bitwise boolean
~~~~~~~~~~~~~~~

Comparison operators like ``==`` and ``!=``, much like the bitwise operators
``&`` and ``|``, work element-wise: comparing a :class:`Series` to a scalar
returns a boolean :class:`Series`.

.. ipython:: python

   s = pd.Series(range(5))
   s == 4

See :ref:`boolean comparisons<basics.compare>` for more examples.

Using the ``in`` operator
~~~~~~~~~~~~~~~~~~~~~~~~~

Using the Python ``in`` operator on a :class:`Series` tests for membership in the
**index**, not membership among the values.

.. ipython:: python

    s = pd.Series(range(5), index=list("abcde"))
    2 in s
    'b' in s

If this behavior is surprising, keep in mind that using ``in`` on a Python
dictionary tests keys, not values, and :class:`Series` are dict-like.
To test for membership in the values, use the method :meth:`~pandas.Series.isin`:

.. ipython:: python

    s.isin([2])
    s.isin([2]).any()

For :class:`DataFrame`, likewise, ``in`` applies to the column axis,
testing for membership in the list of column names.
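
For example:

.. code-block:: python

   >>> df = pd.DataFrame({"a": [1, 2], "b": [3, 4]})
   >>> "a" in df
   True
   >>> 1 in df  # 1 is a value, not a column name
   False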

.. _gotchas.udf-mutation:

Mutating with User Defined Function (UDF) methods
-------------------------------------------------

This section applies to pandas methods that take a UDF, in particular the
methods ``.apply``, ``.aggregate``, ``.transform``, and ``.filter``.

It is a general rule in programming that one should not mutate a container
while it is being iterated over. Mutation will invalidate the iterator,
causing unexpected behavior. Consider the example:

.. ipython:: python

   values = [0, 1, 2, 3, 4, 5]
   n_removed = 0
   for k, value in enumerate(values):
       idx = k - n_removed
       if value % 2 == 1:
           del values[idx]
           n_removed += 1
       else:
           values[idx] = value + 1
   values

One probably would have expected that the result would be ``[1, 3, 5]``.
When using a pandas method that takes a UDF, internally pandas is often
iterating over the
:class:`DataFrame` or other pandas object. Therefore, if the UDF mutates (changes)
the :class:`DataFrame`, unexpected behavior can arise.

Here is a similar example with :meth:`DataFrame.apply`:

.. ipython:: python

   def f(s):
       s.pop("a")
       return s

   df = pd.DataFrame({"a": [1, 2, 3], "b": [4, 5, 6]})
   try:
       df.apply(f, axis="columns")
   except Exception as err:
       print(repr(err))

To resolve this issue, one can make a copy so that the mutation does
not apply to the container being iterated over.

.. ipython:: python

   values = [0, 1, 2, 3, 4, 5]
   n_removed = 0
   for k, value in enumerate(values.copy()):
       idx = k - n_removed
       if value % 2 == 1:
           del values[idx]
           n_removed += 1
       else:
           values[idx] = value + 1
   values

.. ipython:: python

   def f(s):
       s = s.copy()
       s.pop("a")
       return s

   df = pd.DataFrame({"a": [1, 2, 3], 'b': [4, 5, 6]})
   df.apply(f, axis="columns")

``NaN``, Integer ``NA`` values and ``NA`` type promotions
---------------------------------------------------------

Choice of ``NA`` representation
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Because NumPy and Python in general lack ``NA`` (missing) support from the
ground up, we faced a difficult choice between two options:

* A *masked array* solution: an array of data and an array of boolean values
  indicating whether a value is there or is missing.
* Using a special sentinel value, bit pattern, or set of sentinel values to
  denote ``NA`` across the dtypes.

For many reasons we chose the latter. After years of production use it has
proven, at least in my opinion, to be the best decision given the state of
affairs in NumPy and Python in general. The special value ``NaN``
(Not-A-Number) is used everywhere as the ``NA`` value, and there are API
functions :meth:`DataFrame.isna` and :meth:`DataFrame.notna` which can be used across the dtypes to
detect NA values.
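
For example:

.. code-block:: python

   >>> s = pd.Series([1.0, np.nan, 3.0])
   >>> s.isna()
   0    False
   1     True
   2    False
   dtype: bool
   >>> s.notna().sum()  # count of non-missing values
   2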

However, it comes with a couple of trade-offs, which I most certainly have
not ignored.

.. _gotchas.intna:

Support for integer ``NA``
~~~~~~~~~~~~~~~~~~~~~~~~~~

In the absence of high performance ``NA`` support being built into NumPy from
the ground up, the primary casualty is the ability to represent NAs in integer
arrays. For example:

.. ipython:: python

   s = pd.Series([1, 2, 3, 4, 5], index=list("abcde"))
   s
   s.dtype

   s2 = s.reindex(["a", "b", "c", "f", "u"])
   s2
   s2.dtype

This trade-off is made largely for memory and performance reasons, and also so
that the resulting :class:`Series` continues to be "numeric".

If you need to represent integers with possibly missing values, use one of
the nullable-integer extension dtypes provided by pandas:

* :class:`Int8Dtype`
* :class:`Int16Dtype`
* :class:`Int32Dtype`
* :class:`Int64Dtype`

.. ipython:: python

   s_int = pd.Series([1, 2, 3, 4, 5], index=list("abcde"), dtype=pd.Int64Dtype())
   s_int
   s_int.dtype

   s2_int = s_int.reindex(["a", "b", "c", "f", "u"])
   s2_int
   s2_int.dtype

See :ref:`integer_na` for more.

``NA`` type promotions
~~~~~~~~~~~~~~~~~~~~~~

When introducing NAs into an existing :class:`Series` or :class:`DataFrame` via
:meth:`~Series.reindex` or some other means, boolean and integer types will be
promoted to a different dtype in order to store the NAs. The promotions are
summarized in this table:

.. csv-table::
   :header: "Typeclass","Promotion dtype for storing NAs"
   :widths: 40,60

   ``floating``, no change
   ``object``, no change
   ``integer``, cast to ``float64``
   ``boolean``, cast to ``object``
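
For example, introducing missing labels via :meth:`~Series.reindex` triggers
these promotions:

.. code-block:: python

   >>> pd.Series([1, 2, 3]).reindex([0, 1, 3]).dtype       # integer -> float64
   dtype('float64')
   >>> pd.Series([True, False]).reindex([0, 1, 2]).dtype   # boolean -> object
   dtype('O')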

While this may seem like a heavy trade-off, I have found very few cases where
this is an issue in practice, i.e. storing values greater than ``2**53``, beyond
which ``float64`` can no longer represent every integer exactly. Some
explanation for the motivation is in the next section.

Why not make NumPy like R?
~~~~~~~~~~~~~~~~~~~~~~~~~~

Many people have suggested that NumPy should simply emulate the ``NA`` support
present in the more domain-specific statistical programming language `R
<https://www.r-project.org/>`__. Part of the reason is the NumPy type hierarchy:

.. csv-table::
   :header: "Typeclass","Dtypes"
   :widths: 30,70
   :delim: |

   ``numpy.floating`` | ``float16, float32, float64, float128``
   ``numpy.integer`` | ``int8, int16, int32, int64``
   ``numpy.unsignedinteger`` | ``uint8, uint16, uint32, uint64``
   ``numpy.object_`` | ``object_``
   ``numpy.bool_`` | ``bool_``
   ``numpy.character`` | ``string_, unicode_``

The R language, by contrast, has only a handful of built-in data types:
``integer``, ``numeric`` (floating-point), ``character``, and ``logical``
(boolean). ``NA`` types are implemented by reserving special bit patterns for
each type to be used as the missing value. While doing this with the full NumPy
type hierarchy would be possible, it would entail a more substantial trade-off
(especially for the 8- and 16-bit data types) and a larger implementation
undertaking.

An alternate approach is that of using masked arrays. A masked array is an
array of data with an associated boolean *mask* denoting whether each value
should be considered ``NA`` or not. I am personally not in love with this
approach as I feel that overall it places a fairly heavy burden on the user and
the library implementer. Additionally, it exacts a fairly high performance cost
when working with numerical data compared with the simple approach of using
``NaN``. Thus, I have chosen the Pythonic "practicality beats purity" approach
and traded integer ``NA`` capability for a much simpler approach of using a
special value in float and object arrays to denote ``NA``, and promoting
integer arrays to floating when NAs must be introduced.


Differences with NumPy
----------------------
For :class:`Series` and :class:`DataFrame` objects, :meth:`~DataFrame.var` normalizes by
``N-1`` to produce an `unbiased estimate of the population variance <https://en.wikipedia.org/wiki/Bias_of_an_estimator>`__, while NumPy's
:meth:`numpy.var` normalizes by ``N`` by default, measuring the variance of the sample itself. Note that
:meth:`~DataFrame.cov` normalizes by ``N-1`` in both pandas and NumPy.
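
A quick illustration of the differing defaults; the ``ddof`` argument controls
the divisor in both libraries:

.. code-block:: python

   >>> arr = np.array([1.0, 2.0, 3.0, 4.0])
   >>> np.var(arr)            # divides by N
   1.25
   >>> pd.Series(arr).var()   # divides by N-1
   1.6666666666666667
   >>> np.var(arr, ddof=1)    # matches the pandas default
   1.6666666666666667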

.. _gotchas.thread-safety:

Thread-safety
-------------

pandas is not 100% thread safe. The known issues relate to
the :meth:`~DataFrame.copy` method. If you are doing a lot of copying of
:class:`DataFrame` objects shared among threads, we recommend holding locks inside
the threads where the data copying occurs.
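
A minimal sketch of that pattern, assuming the standard-library ``threading``
module (the lock and worker names are illustrative):

.. code-block:: python

   import threading

   copy_lock = threading.Lock()

   def worker(df):
       # Serialize the copy itself; work on the private copy can then
       # proceed without holding the lock.
       with copy_lock:
           local = df.copy()
       return local.sum()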

See `this link <https://stackoverflow.com/questions/13592618/python-pandas-dataframe-thread-safe>`__
for more information.


Byte-ordering issues
--------------------
Occasionally you may have to deal with data that were created on a machine with
a different byte order than the one on which you are running Python. A common
symptom of this issue is an error like::

    Traceback
        ...
    ValueError: Big-endian buffer not supported on little-endian compiler

To deal
with this issue you should convert the underlying NumPy array to the native
system byte order *before* passing it to :class:`Series` or :class:`DataFrame`
constructors using something similar to the following:

.. ipython:: python

   x = np.array(list(range(10)), ">i4")  # big endian
   newx = x.byteswap().newbyteorder()  # force native byteorder
   s = pd.Series(newx)

See `the NumPy documentation on byte order
<https://numpy.org/doc/stable/user/basics.byteswapping.html>`__ for more
details.