.. _duplicates:

****************
Duplicate Labels
****************

:class:`Index` objects are not required to be unique; you can have duplicate row
or column labels. This may be a bit confusing at first. If you're familiar with
SQL, you know that row labels are similar to a primary key on a table, and you
would never want duplicates in a SQL table. But one of pandas' roles is to clean
messy, real-world data before it goes to some downstream system. And real-world
data has duplicates, even in fields that are supposed to be unique.

This section describes how duplicate labels change the behavior of certain
operations, how to prevent duplicates from arising during operations, and how
to detect them if they do.

.. ipython:: python

   import pandas as pd
   import numpy as np

Consequences of Duplicate Labels
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Some pandas methods (:meth:`Series.reindex` for example) simply don't work with
duplicates present. The output can't be determined, so pandas raises an exception.

.. ipython:: python
   :okexcept:
   :okwarning:

   s1 = pd.Series([0, 1, 2], index=["a", "b", "b"])
   s1.reindex(["a", "b", "c"])

Other methods, like indexing, can give very surprising results. Typically,
indexing with a scalar will *reduce dimensionality*: selecting a single column
label from a ``DataFrame`` returns a ``Series``, and selecting a single label
from a ``Series`` returns a scalar. But with duplicates, this isn't the case.

.. ipython:: python

   df1 = pd.DataFrame([[0, 1, 2], [3, 4, 5]], columns=["A", "A", "B"])
   df1

We have duplicates in the columns. If we select ``'B'``, we get back a ``Series``:

.. ipython:: python

   df1["B"]  # a series

But selecting ``'A'`` returns a ``DataFrame``:


.. ipython:: python

   df1["A"]  # a DataFrame

This applies to row labels as well:

.. ipython:: python

   df2 = pd.DataFrame({"A": [0, 1, 2]}, index=["a", "a", "b"])
   df2
   df2.loc["b", "A"]  # a scalar
   df2.loc["a", "A"]  # a Series

Duplicate Label Detection
~~~~~~~~~~~~~~~~~~~~~~~~~

You can check whether an :class:`Index` (storing the row or column labels) is
unique with :attr:`Index.is_unique`:

.. ipython:: python

   df2
   df2.index.is_unique
   df2.columns.is_unique

.. note::

   Checking whether an index is unique is somewhat expensive for large datasets.
   pandas does cache this result, so re-checking on the same index is very fast.

:meth:`Index.duplicated` will return a boolean ndarray indicating whether a
label is repeated.

.. ipython:: python

   df2.index.duplicated()

This can be used as a boolean filter to drop duplicate rows:

.. ipython:: python

   df2.loc[~df2.index.duplicated(), :]
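
By default, :meth:`Index.duplicated` marks every occurrence after the first
one as a duplicate. The ``keep`` argument controls which occurrence is treated
as the "original"; for example, with ``df2`` from above:

.. code-block:: python

   >>> df2.index.duplicated(keep="last")  # mark all but the last occurrence
   array([ True, False, False])
   >>> df2.index.duplicated(keep=False)   # mark every occurrence of a repeated label
   array([ True,  True, False])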

If you need additional logic to handle duplicate labels, rather than just
dropping the repeats, using :meth:`~DataFrame.groupby` on the index is a common
trick. For example, we'll resolve duplicates by taking the average of all rows
with the same label.

.. ipython:: python

   df2.groupby(level=0).mean()
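
Any reduction that ``groupby`` supports will work here. A couple of
alternative sketches, still using ``df2``: keep the last row seen for each
label, or choose a reduction column by column with ``agg``:

.. code-block:: python

   >>> df2.groupby(level=0).last()             # keep the last row for each label
   >>> df2.groupby(level=0).agg({"A": "sum"})  # a per-column reduction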

.. _duplicates.disallow:

Disallowing Duplicate Labels
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. versionadded:: 1.2.0

As noted above, handling duplicates is an important feature when reading in raw
data. That said, you may want to avoid introducing duplicates as part of a data
processing pipeline (from methods like :meth:`pandas.concat`,
:meth:`~DataFrame.rename`, etc.). Both :class:`Series` and :class:`DataFrame`
can *disallow* duplicate labels by calling ``.set_flags(allows_duplicate_labels=False)``
(the default is to allow them). If there are duplicate labels, an exception
will be raised.

.. ipython:: python
   :okexcept:

   pd.Series([0, 1, 2], index=["a", "b", "b"]).set_flags(allows_duplicate_labels=False)

This applies to both row and column labels for a :class:`DataFrame`:

.. ipython:: python
   :okexcept:

   pd.DataFrame([[0, 1, 2], [3, 4, 5]], columns=["A", "B", "B"]).set_flags(
       allows_duplicate_labels=False
   )

This attribute can be checked or set with :attr:`~DataFrame.flags.allows_duplicate_labels`,
which indicates whether that object can have duplicate labels.

.. ipython:: python

   df = pd.DataFrame({"A": [0, 1, 2, 3]}, index=["x", "y", "X", "Y"]).set_flags(
       allows_duplicate_labels=False
   )
   df
   df.flags.allows_duplicate_labels

:meth:`DataFrame.set_flags` can be used to return a new ``DataFrame`` with attributes
like ``allows_duplicate_labels`` set to some value:

.. ipython:: python

   df2 = df.set_flags(allows_duplicate_labels=True)
   df2.flags.allows_duplicate_labels

The new ``DataFrame`` returned is a view on the same data as the old ``DataFrame``.
Alternatively, the property can be set directly on the existing object:


.. ipython:: python

   df2.flags.allows_duplicate_labels = False
   df2.flags.allows_duplicate_labels

When processing raw, messy data you might initially read in the messy data
(which potentially has duplicate labels), deduplicate, and then disallow duplicates
going forward, to ensure that your data pipeline doesn't introduce duplicates.


.. code-block:: python

   >>> raw = pd.read_csv("...")
   >>> deduplicated = raw.groupby(level=0).first()  # remove duplicates
   >>> deduplicated.flags.allows_duplicate_labels = False  # disallow going forward

Setting ``allows_duplicate_labels=False`` on a ``Series`` or ``DataFrame`` with duplicate
labels, or performing an operation that introduces duplicate labels on a ``Series`` or
``DataFrame`` that disallows duplicates, will raise an
:class:`errors.DuplicateLabelError`.

.. ipython:: python
   :okexcept:

   df.rename(str.upper)

This error message contains the labels that are duplicated, and the numeric positions
of all the duplicates (including the "original") in the ``Series`` or ``DataFrame``.
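
If you would rather handle the failure than let it propagate, the exception
can be caught like any other; a minimal sketch:

.. code-block:: python

   >>> try:
   ...     df.rename(str.upper)
   ... except pd.errors.DuplicateLabelError as err:
   ...     print(err)  # the message lists the duplicated labels and their positions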

Duplicate Label Propagation
^^^^^^^^^^^^^^^^^^^^^^^^^^^

In general, disallowing duplicates is "sticky". It's preserved through
operations.

.. ipython:: python
   :okexcept:

   s1 = pd.Series(0, index=["a", "b"]).set_flags(allows_duplicate_labels=False)
   s1
   s1.head().rename({"a": "b"})

.. warning::

   This is an experimental feature. Currently, many methods fail to
   propagate the ``allows_duplicate_labels`` value. In future versions
   it is expected that every method taking or returning one or more
   DataFrame or Series objects will propagate ``allows_duplicate_labels``.
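
Until propagation is complete across the API, a defensive pattern is to
re-assert the flag after any step whose behavior you have not verified; a
sketch, where ``a`` and ``b`` are hypothetical frames that already disallow
duplicate labels:

.. code-block:: python

   >>> combined = pd.concat([a, b])
   >>> # re-assert the flag; this re-checks the labels and raises
   >>> # DuplicateLabelError if the concatenation introduced duplicates
   >>> combined = combined.set_flags(allows_duplicate_labels=False)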