File: README.rst

fastparquet
===========

.. image:: https://github.com/dask/fastparquet/actions/workflows/main.yaml/badge.svg
    :target: https://github.com/dask/fastparquet/actions/workflows/main.yaml

.. image:: https://readthedocs.org/projects/fastparquet/badge/?version=latest
    :target: https://fastparquet.readthedocs.io/en/latest/

fastparquet is a python implementation of the `parquet
format <https://github.com/apache/parquet-format>`_, aiming to integrate
into python-based big data work-flows. It is used implicitly by
the projects Dask, Pandas and intake-parquet.

We offer a high degree of support for the features of the parquet format, and
very competitive performance, in a small install size and codebase.

Details of this project, how to use it and comparisons to other work can be found in the documentation_.

.. _documentation: https://fastparquet.readthedocs.io

Requirements
------------

(all development is against recent versions in the default anaconda channels
and/or conda-forge)

Required:

- numpy
- pandas
- cython >= 0.29.23 (if building from pyx files)
- cramjam
- fsspec

Supported compression algorithms:

- Available by default:

  - gzip
  - snappy
  - brotli
  - lz4
  - zstandard

- Optionally supported:

  - `lzo <https://github.com/jd-boyd/python-lzo>`_
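
Any of the codecs above can be requested at write time through the
``compression`` argument of ``write`` (see Usage below). A minimal sketch with
made-up file and column names, showing a single codec for the whole file and a
per-column choice:

.. code-block:: python

    import pandas as pd
    from fastparquet import write

    df = pd.DataFrame({'col1': range(4), 'col2': ['a', 'b', 'c', 'd']})

    # one codec name applies to every column ...
    write('one_codec.parq', df, compression='SNAPPY')

    # ... or a dict selects a codec per column
    write('per_column.parq', df,
          compression={'col1': 'SNAPPY', 'col2': 'GZIP'})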


Installation
------------

Install using conda, to get the latest compiled version::

   conda install -c conda-forge fastparquet

or install from PyPI::

   pip install fastparquet

You may wish to install numpy first, to help pip's resolver.
Installing from PyPI may fetch a pre-built wheel or compile from source; for
the latter, you will need a suitable C compiler toolchain on your system.
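
For example, the explicit two-step installation is::

   pip install numpy
   pip install fastparquet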

You can also install the latest version from GitHub::

   pip install git+https://github.com/dask/fastparquet

in which case you will also need ``cython`` installed to be able to rebuild the C files.

Usage
-----

Please refer to the documentation_.

*Reading*

.. code-block:: python

    from fastparquet import ParquetFile
    pf = ParquetFile('myfile.parq')
    df = pf.to_pandas()
    df2 = pf.to_pandas(['col1', 'col2'], categories=['col1'])

You may specify which columns to load and which of those to keep as
categoricals (if the data uses dictionary encoding). The file path can be a
single file, a metadata file pointing to other data files, or a directory
(tree) containing data files. The latter is what is typically output by
Hive/Spark.
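
A minimal sketch of reading from such a directory of data files, assuming a
hive-style dataset partitioned on a hypothetical ``year`` column (the path and
column names are made up for illustration):

.. code-block:: python

    from fastparquet import ParquetFile

    # a directory (tree) of data files is opened the same way as a single file
    pf = ParquetFile('mydata_dir/')

    # inspect the schema before loading anything
    print(pf.columns)

    # load a subset of columns; filters on partition columns skip
    # row-groups/files that cannot match
    df = pf.to_pandas(columns=['year', 'value'],
                      filters=[('year', '==', 2023)])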

*Writing*

.. code-block:: python

    from fastparquet import write
    write('outfile.parq', df)
    write('outfile2.parq', df, row_group_offsets=[0, 10000, 20000],
          compression='GZIP', file_scheme='hive')

The default is to produce a single output file with a single row-group
(i.e., logical segment) and no compression. At the moment, only simple
data-types and plain encoding are supported, so expect performance to be
similar to *numpy.savez*.
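
A further sketch, again with hypothetical column names: with
``file_scheme='hive'``, ``partition_on`` splits the output into one
sub-directory per value of the named column(s), and ``append=True`` adds new
row-groups to an existing dataset:

.. code-block:: python

    import pandas as pd
    from fastparquet import write

    df = pd.DataFrame({'year': [2022, 2022, 2023], 'value': [1.0, 2.0, 3.0]})

    # one sub-directory per distinct value of 'year'
    write('outdir.parq', df, file_scheme='hive', partition_on=['year'])

    # later, add further rows to the same dataset
    more = pd.DataFrame({'year': [2023, 2024], 'value': [4.0, 5.0]})
    write('outdir.parq', more, file_scheme='hive',
          partition_on=['year'], append=True)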

History
-------

This project was forked in October 2016 from `parquet-python`_, which was not
designed for vectorised loading of big data or parallel access.

.. _parquet-python: https://github.com/jcrobak/parquet-python