.. _10min:
.. currentmodule:: pandas
.. ipython:: python
:suppress:
import numpy as np
import pandas as pd
import os
np.random.seed(123456)
np.set_printoptions(precision=4, suppress=True)
import matplotlib
matplotlib.style.use('ggplot')
pd.options.display.max_rows = 15
#### portions of this were borrowed from the
#### Pandas cheatsheet
#### created during the PyData Workshop-Sprint 2012
#### Hannah Chen, Henry Chow, Eric Cox, Robert Mauriello
********************
10 Minutes to pandas
********************
This is a short introduction to pandas, geared mainly for new users.
You can see more complex recipes in the :ref:`Cookbook <cookbook>`.
Customarily, we import as follows:
.. ipython:: python
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
Object Creation
---------------
See the :ref:`Data Structure Intro section <dsintro>`
Creating a :class:`Series` by passing a list of values, letting pandas create
a default integer index:
.. ipython:: python
s = pd.Series([1,3,5,np.nan,6,8])
s
Creating a :class:`DataFrame` by passing a numpy array, with a datetime index
and labeled columns:
.. ipython:: python
dates = pd.date_range('20130101', periods=6)
dates
df = pd.DataFrame(np.random.randn(6,4), index=dates, columns=list('ABCD'))
df
Creating a ``DataFrame`` by passing a dict of objects that can be converted into something series-like:
.. ipython:: python
df2 = pd.DataFrame({ 'A' : 1.,
'B' : pd.Timestamp('20130102'),
'C' : pd.Series(1,index=list(range(4)),dtype='float32'),
'D' : np.array([3] * 4,dtype='int32'),
'E' : pd.Categorical(["test","train","test","train"]),
'F' : 'foo' })
df2
Having specific :ref:`dtypes <basics.dtypes>`
.. ipython:: python
df2.dtypes
If you're using IPython, tab completion for column names (as well as public
attributes) is automatically enabled. Here's a subset of the attributes that
will be completed:
.. ipython::
@verbatim
In [1]: df2.<TAB>
df2.A df2.boxplot
df2.abs df2.C
df2.add df2.clip
df2.add_prefix df2.clip_lower
df2.add_suffix df2.clip_upper
df2.align df2.columns
df2.all df2.combine
df2.any df2.combineAdd
df2.append df2.combine_first
df2.apply df2.combineMult
df2.applymap df2.compound
df2.as_blocks df2.consolidate
df2.asfreq df2.convert_objects
df2.as_matrix df2.copy
df2.astype df2.corr
df2.at df2.corrwith
df2.at_time df2.count
df2.axes df2.cov
df2.B df2.cummax
df2.between_time df2.cummin
df2.bfill df2.cumprod
df2.blocks df2.cumsum
df2.bool df2.D
As you can see, the columns ``A``, ``B``, ``C``, and ``D`` are automatically
tab completed. ``E`` is there as well; the rest of the attributes have been
truncated for brevity.
Viewing Data
------------
See the :ref:`Basics section <basics>`
See the top & bottom rows of the frame
.. ipython:: python
df.head()
df.tail(3)
Display the index, columns, and the underlying numpy data
.. ipython:: python
df.index
df.columns
df.values
``describe`` shows a quick statistic summary of your data
.. ipython:: python
df.describe()
Transposing your data
.. ipython:: python
df.T
Sorting by an axis
.. ipython:: python
df.sort_index(axis=1, ascending=False)
Sorting by values
.. ipython:: python
df.sort_values(by='B')
Selection
---------
.. note::
While standard Python / Numpy expressions for selecting and setting are
intuitive and come in handy for interactive work, for production code, we
recommend the optimized pandas data access methods, ``.at``, ``.iat``,
``.loc`` and ``.iloc``.
See the indexing documentation :ref:`Indexing and Selecting Data <indexing>` and :ref:`MultiIndex / Advanced Indexing <advanced>`
Getting
~~~~~~~
Selecting a single column, which yields a ``Series``,
equivalent to ``df.A``
.. ipython:: python
df['A']
Selecting via ``[]``, which slices the rows.
.. ipython:: python
df[0:3]
df['20130102':'20130104']
Selection by Label
~~~~~~~~~~~~~~~~~~
See more in :ref:`Selection by Label <indexing.label>`
For getting a cross section using a label
.. ipython:: python
df.loc[dates[0]]
Selecting on a multi-axis by label
.. ipython:: python
df.loc[:,['A','B']]
Showing label slicing, both endpoints are *included*
.. ipython:: python
df.loc['20130102':'20130104',['A','B']]
Reduction in the dimensions of the returned object
.. ipython:: python
df.loc['20130102',['A','B']]
For getting a scalar value
.. ipython:: python
df.loc[dates[0],'A']
For getting fast access to a scalar (equivalent to the prior method)
.. ipython:: python
df.at[dates[0],'A']
Selection by Position
~~~~~~~~~~~~~~~~~~~~~
See more in :ref:`Selection by Position <indexing.integer>`
Select via the position of the passed integers
.. ipython:: python
df.iloc[3]
By integer slices, acting similarly to numpy/python
.. ipython:: python
df.iloc[3:5,0:2]
By lists of integer position locations, similar to the numpy/python style
.. ipython:: python
df.iloc[[1,2,4],[0,2]]
For slicing rows explicitly
.. ipython:: python
df.iloc[1:3,:]
For slicing columns explicitly
.. ipython:: python
df.iloc[:,1:3]
For getting a value explicitly
.. ipython:: python
df.iloc[1,1]
For getting fast access to a scalar (equivalent to the prior method)
.. ipython:: python
df.iat[1,1]
Boolean Indexing
~~~~~~~~~~~~~~~~
Using a single column's values to select data.
.. ipython:: python
df[df.A > 0]
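Several conditions can be combined with the element-wise operators ``&`` and ``|``. A minimal sketch (the frame here is rebuilt with illustrative random data, not the one from the text above):

```python
import numpy as np
import pandas as pd

# Illustrative frame, mirroring the one used earlier in this section
dates = pd.date_range('20130101', periods=6)
df = pd.DataFrame(np.random.randn(6, 4), index=dates, columns=list('ABCD'))

# Each condition needs its own parentheses, because & and | bind
# more tightly than the comparison operators.
both = df[(df.A > 0) & (df.B > 0)]
either = df[(df.A > 0) | (df.B > 0)]
```

Note that Python's ``and`` / ``or`` keywords do not work here; they trigger the "truth value of an array is ambiguous" error discussed under Gotchas below.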
A ``where`` operation for getting.
.. ipython:: python
df[df > 0]
Using the :func:`~Series.isin` method for filtering:
.. ipython:: python
df2 = df.copy()
df2['E'] = ['one', 'one','two','three','four','three']
df2
df2[df2['E'].isin(['two','four'])]
Setting
~~~~~~~
Setting a new column automatically aligns the data
by the indexes
.. ipython:: python
s1 = pd.Series([1,2,3,4,5,6], index=pd.date_range('20130102', periods=6))
s1
df['F'] = s1
Setting values by label
.. ipython:: python
df.at[dates[0],'A'] = 0
Setting values by position
.. ipython:: python
df.iat[0,1] = 0
Setting by assigning with a numpy array
.. ipython:: python
df.loc[:,'D'] = np.array([5] * len(df))
The result of the prior setting operations
.. ipython:: python
df
A ``where`` operation with setting.
.. ipython:: python
df2 = df.copy()
df2[df2 > 0] = -df2
df2
Missing Data
------------
pandas primarily uses the value ``np.nan`` to represent missing data. It is by
default not included in computations. See the :ref:`Missing Data section
<missing_data>`
Reindexing allows you to change/add/delete the index on a specified axis. This
returns a copy of the data.
.. ipython:: python
df1 = df.reindex(index=dates[0:4], columns=list(df.columns) + ['E'])
df1.loc[dates[0]:dates[1],'E'] = 1
df1
To drop any rows that have missing data.
.. ipython:: python
df1.dropna(how='any')
Filling missing data
.. ipython:: python
df1.fillna(value=5)
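Besides filling with a constant, the last valid observation can be propagated forward with ``ffill``. A sketch with illustrative data:

```python
import numpy as np
import pandas as pd

s = pd.Series([1.0, np.nan, np.nan, 4.0])

# Propagate the last valid observation forward into the gaps
filled = s.ffill()
```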
To get the boolean mask where values are ``nan``
.. ipython:: python
pd.isnull(df1)
Operations
----------
See the :ref:`Basic section on Binary Ops <basics.binop>`
Stats
~~~~~
Operations in general *exclude* missing data.
Performing a descriptive statistic
.. ipython:: python
df.mean()
Same operation on the other axis
.. ipython:: python
df.mean(1)
Operating with objects that have different dimensionality and need alignment.
In addition, pandas automatically broadcasts along the specified dimension.
.. ipython:: python
s = pd.Series([1,3,5,np.nan,6,8], index=dates).shift(2)
s
df.sub(s, axis='index')
Apply
~~~~~
Applying functions to the data
.. ipython:: python
df.apply(np.cumsum)
df.apply(lambda x: x.max() - x.min())
Histogramming
~~~~~~~~~~~~~
See more at :ref:`Histogramming and Discretization <basics.discretization>`
.. ipython:: python
s = pd.Series(np.random.randint(0, 7, size=10))
s
s.value_counts()
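The linked section also covers discretization. As a sketch, ``pd.cut`` bins continuous values into intervals, which can then be counted the same way:

```python
import numpy as np
import pandas as pd

np.random.seed(0)  # illustrative data
s = pd.Series(np.random.randn(100))

# Bin the continuous values into 4 equal-width intervals, then count
# how many observations land in each bin
binned = pd.cut(s, 4)
counts = binned.value_counts()
```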
String Methods
~~~~~~~~~~~~~~
Series is equipped with a set of string processing methods in the ``str``
attribute that make it easy to operate on each element of the array, as in the
code snippet below. Note that pattern-matching in ``str`` generally uses `regular
expressions <https://docs.python.org/2/library/re.html>`__ by default (and in
some cases always uses them). See more at :ref:`Vectorized String Methods
<text.string_methods>`.
.. ipython:: python
s = pd.Series(['A', 'B', 'C', 'Aaba', 'Baca', np.nan, 'CABA', 'dog', 'cat'])
s.str.lower()
Merge
-----
Concat
~~~~~~
pandas provides various facilities for easily combining together Series,
DataFrame, and Panel objects with various kinds of set logic for the indexes
and relational algebra functionality in the case of join / merge-type
operations.
See the :ref:`Merging section <merging>`
Concatenating pandas objects together with :func:`concat`:
.. ipython:: python
df = pd.DataFrame(np.random.randn(10, 4))
df
# break it into pieces
pieces = [df[:3], df[3:7], df[7:]]
pd.concat(pieces)
Join
~~~~
SQL style merges. See the :ref:`Database style joining <merging.join>`
.. ipython:: python
left = pd.DataFrame({'key': ['foo', 'foo'], 'lval': [1, 2]})
right = pd.DataFrame({'key': ['foo', 'foo'], 'rval': [4, 5]})
left
right
pd.merge(left, right, on='key')
Another example:
.. ipython:: python
left = pd.DataFrame({'key': ['foo', 'bar'], 'lval': [1, 2]})
right = pd.DataFrame({'key': ['foo', 'bar'], 'rval': [4, 5]})
left
right
pd.merge(left, right, on='key')
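``merge`` also supports other join types through its ``how`` parameter. A sketch with illustrative data (not from the examples above):

```python
import pandas as pd

left = pd.DataFrame({'key': ['foo', 'bar'], 'lval': [1, 2]})
right = pd.DataFrame({'key': ['foo', 'baz'], 'rval': [4, 5]})

# A left join keeps every row of `left`; keys missing from `right`
# get NaN in the right-hand columns.
result = pd.merge(left, right, on='key', how='left')
```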
Append
~~~~~~
Append rows to a dataframe. See the :ref:`Appending <merging.concatenation>`
.. ipython:: python
df = pd.DataFrame(np.random.randn(8, 4), columns=['A','B','C','D'])
df
s = df.iloc[3]
df.append(s, ignore_index=True)
Grouping
--------
By "group by" we are referring to a process involving one or more of the
following steps
- **Splitting** the data into groups based on some criteria
- **Applying** a function to each group independently
- **Combining** the results into a data structure
See the :ref:`Grouping section <groupby>`
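The three steps above can be sketched by hand (illustrative only; ``groupby`` performs all of this in a single call):

```python
import pandas as pd

# Illustrative data
df = pd.DataFrame({'A': ['foo', 'bar', 'foo', 'bar'],
                   'C': [1.0, 2.0, 3.0, 4.0]})

# Splitting: partition the rows by the value in column A
groups = {key: grp for key, grp in df.groupby('A')}

# Applying: reduce each group independently
sums = {key: grp['C'].sum() for key, grp in groups.items()}

# Combining: assemble the results into a new structure
result = pd.Series(sums)
```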
.. ipython:: python
df = pd.DataFrame({'A' : ['foo', 'bar', 'foo', 'bar',
'foo', 'bar', 'foo', 'foo'],
'B' : ['one', 'one', 'two', 'three',
'two', 'two', 'one', 'three'],
'C' : np.random.randn(8),
'D' : np.random.randn(8)})
df
Grouping and then applying the ``sum`` function to the resulting groups.
.. ipython:: python
df.groupby('A').sum()
Grouping by multiple columns forms a hierarchical index, to which we then apply
the function.
.. ipython:: python
df.groupby(['A','B']).sum()
Reshaping
---------
See the sections on :ref:`Hierarchical Indexing <advanced.hierarchical>` and
:ref:`Reshaping <reshaping.stacking>`.
Stack
~~~~~
.. ipython:: python
tuples = list(zip(*[['bar', 'bar', 'baz', 'baz',
'foo', 'foo', 'qux', 'qux'],
['one', 'two', 'one', 'two',
'one', 'two', 'one', 'two']]))
index = pd.MultiIndex.from_tuples(tuples, names=['first', 'second'])
df = pd.DataFrame(np.random.randn(8, 2), index=index, columns=['A', 'B'])
df2 = df[:4]
df2
The :meth:`~DataFrame.stack` method "compresses" a level in the DataFrame's
columns.
.. ipython:: python
stacked = df2.stack()
stacked
With a "stacked" DataFrame or Series (having a ``MultiIndex`` as the
``index``), the inverse operation of :meth:`~DataFrame.stack` is
:meth:`~DataFrame.unstack`, which by default unstacks the **last level**:
.. ipython:: python
stacked.unstack()
stacked.unstack(1)
stacked.unstack(0)
Pivot Tables
~~~~~~~~~~~~
See the section on :ref:`Pivot Tables <reshaping.pivot>`.
.. ipython:: python
df = pd.DataFrame({'A' : ['one', 'one', 'two', 'three'] * 3,
'B' : ['A', 'B', 'C'] * 4,
'C' : ['foo', 'foo', 'foo', 'bar', 'bar', 'bar'] * 2,
'D' : np.random.randn(12),
'E' : np.random.randn(12)})
df
We can produce pivot tables from this data very easily:
.. ipython:: python
pd.pivot_table(df, values='D', index=['A', 'B'], columns=['C'])
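``pivot_table`` aggregates with the mean by default; other reductions can be requested via the ``aggfunc`` parameter. A sketch with small illustrative data:

```python
import pandas as pd

df = pd.DataFrame({'A': ['one', 'one', 'two', 'two'],
                   'C': ['foo', 'bar', 'foo', 'bar'],
                   'D': [1.0, 2.0, 3.0, 4.0]})

# Aggregate with sum instead of the default mean
table = pd.pivot_table(df, values='D', index=['A'], columns=['C'],
                       aggfunc='sum')
```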
Time Series
-----------
pandas has simple, powerful, and efficient functionality for performing
resampling operations during frequency conversion (e.g., converting secondly
data into 5-minutely data). This is extremely common in, but not limited to,
financial applications. See the :ref:`Time Series section <timeseries>`
.. ipython:: python
rng = pd.date_range('1/1/2012', periods=100, freq='S')
ts = pd.Series(np.random.randint(0, 500, len(rng)), index=rng)
ts.resample('5Min').sum()
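The bin arithmetic is easier to verify with deterministic data. A sketch: with one observation per second, a one-minute resample sums exactly sixty values per full bin.

```python
import numpy as np
import pandas as pd

# 90 seconds of data, one observation per second
rng = pd.date_range('1/1/2012', periods=90, freq='S')
ts = pd.Series(np.arange(90), index=rng)

# Downsample into 1-minute bins; each bin sums the seconds it covers
per_minute = ts.resample('1Min').sum()
```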
Time zone representation
.. ipython:: python
rng = pd.date_range('3/6/2012 00:00', periods=5, freq='D')
ts = pd.Series(np.random.randn(len(rng)), rng)
ts
ts_utc = ts.tz_localize('UTC')
ts_utc
Convert to another time zone
.. ipython:: python
ts_utc.tz_convert('US/Eastern')
Converting between time span representations
.. ipython:: python
rng = pd.date_range('1/1/2012', periods=5, freq='M')
ts = pd.Series(np.random.randn(len(rng)), index=rng)
ts
ps = ts.to_period()
ps
ps.to_timestamp()
Converting between period and timestamp enables some convenient arithmetic
functions to be used. In the following example, we convert a quarterly
frequency with year ending in November to 9am of the end of the month following
the quarter end:
.. ipython:: python
prng = pd.period_range('1990Q1', '2000Q4', freq='Q-NOV')
ts = pd.Series(np.random.randn(len(prng)), prng)
ts.index = (prng.asfreq('M', 'e') + 1).asfreq('H', 's') + 9
ts.head()
Categoricals
------------
Since version 0.15, pandas can include categorical data in a ``DataFrame``. For full docs, see the
:ref:`categorical introduction <categorical>` and the :ref:`API documentation <api.categorical>`.
.. ipython:: python
df = pd.DataFrame({"id":[1,2,3,4,5,6], "raw_grade":['a', 'b', 'b', 'a', 'a', 'e']})
Convert the raw grades to a categorical data type.
.. ipython:: python
df["grade"] = df["raw_grade"].astype("category")
df["grade"]
Rename the categories to more meaningful names (assigning to ``Series.cat.categories`` is in place!)
.. ipython:: python
df["grade"].cat.categories = ["very good", "good", "very bad"]
Reorder the categories and simultaneously add the missing categories (methods
under ``Series.cat`` return a new ``Series`` by default).
.. ipython:: python
df["grade"] = df["grade"].cat.set_categories(["very bad", "bad", "medium", "good", "very good"])
df["grade"]
Sorting is by order in the categories, not lexical order.
.. ipython:: python
df.sort_values(by="grade")
Grouping by a categorical column also shows empty categories.
.. ipython:: python
df.groupby("grade").size()
Plotting
--------
:ref:`Plotting <visualization>` docs.
.. ipython:: python
:suppress:
import matplotlib.pyplot as plt
plt.close('all')
.. ipython:: python
ts = pd.Series(np.random.randn(1000), index=pd.date_range('1/1/2000', periods=1000))
ts = ts.cumsum()
@savefig series_plot_basic.png
ts.plot()
On DataFrame, :meth:`~DataFrame.plot` is a convenience to plot all of the
columns with labels:
.. ipython:: python
df = pd.DataFrame(np.random.randn(1000, 4), index=ts.index,
columns=['A', 'B', 'C', 'D'])
df = df.cumsum()
@savefig frame_plot_basic.png
plt.figure(); df.plot(); plt.legend(loc='best')
Getting Data In/Out
-------------------
CSV
~~~
:ref:`Writing to a csv file <io.store_in_csv>`
.. ipython:: python
df.to_csv('foo.csv')
:ref:`Reading from a csv file <io.read_csv_table>`
.. ipython:: python
pd.read_csv('foo.csv')
.. ipython:: python
:suppress:
os.remove('foo.csv')
HDF5
~~~~
Reading and writing to :ref:`HDFStores <io.hdf5>`
Writing to an HDF5 Store
.. ipython:: python
df.to_hdf('foo.h5','df')
Reading from an HDF5 Store
.. ipython:: python
pd.read_hdf('foo.h5','df')
.. ipython:: python
:suppress:
os.remove('foo.h5')
Excel
~~~~~
Reading and writing to :ref:`MS Excel <io.excel>`
Writing to an Excel file
.. ipython:: python
df.to_excel('foo.xlsx', sheet_name='Sheet1')
Reading from an Excel file
.. ipython:: python
pd.read_excel('foo.xlsx', 'Sheet1', index_col=None, na_values=['NA'])
.. ipython:: python
:suppress:
os.remove('foo.xlsx')
Gotchas
-------
If you are trying an operation and you see an exception like:
.. code-block:: python
>>> if pd.Series([False, True, False]):
...     print("I was true")
Traceback
    ...
ValueError: The truth value of an array is ambiguous. Use a.empty, a.any() or a.all().
See :ref:`Comparisons<basics.compare>` for an explanation and what to do.
See :ref:`Gotchas<gotchas>` as well.
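The fix, sketched briefly, is to reduce the Series to a single boolean explicitly before testing it:

```python
import pandas as pd

s = pd.Series([False, True, False])

# Reduce the Series to a single boolean explicitly
any_true = bool(s.any())   # is at least one element True?
all_true = bool(s.all())   # are all elements True?
is_empty = s.empty         # is the Series zero-length?
```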