File: help.txt

Help on package smart_open:

NAME
    smart_open

DESCRIPTION
    Utilities for streaming to/from several file-like data stores: S3 / HDFS / local
    filesystem / compressed files, and many more, using a simple, Pythonic API.

    The streaming makes heavy use of generators and pipes to avoid loading
    full file contents into memory, allowing you to work with arbitrarily large files.

    The main functions are:

    * `open()`, which opens the given file for reading/writing
    * `parse_uri()`, which parses a URI string into its components
    * `s3_iter_bucket()`, which iterates over all keys in an S3 bucket in parallel (deprecated)
    * `register_compressor()`, which registers callbacks for transparent compressor handling

PACKAGE CONTENTS
    azure
    bytebuffer
    compression
    concurrency
    constants
    doctools
    ftp
    gcs
    hdfs
    http
    local_file
    s3
    smart_open_lib
    ssh
    transport
    utils
    webhdfs

FUNCTIONS
    open(uri, mode='r', buffering=-1, encoding=None, errors=None, newline=None, closefd=True, opener=None, compression='infer_from_extension', transport_params=None)
        Open the URI object, returning a file-like object.

        The URI is usually a string in a variety of formats.
        For a full list of examples, see the :func:`parse_uri` function.

        The URI may also be one of:

        - an instance of the pathlib.Path class
        - a stream (anything that implements io.IOBase-like functionality)

        Parameters
        ----------
        uri: str or object
            The object to open.
        mode: str, optional
            Mimics the built-in open parameter of the same name.
        buffering: int, optional
            Mimics the built-in open parameter of the same name.
        encoding: str, optional
            Mimics the built-in open parameter of the same name.
        errors: str, optional
            Mimics the built-in open parameter of the same name.
        newline: str, optional
            Mimics the built-in open parameter of the same name.
        closefd: boolean, optional
            Mimics the built-in open parameter of the same name.  Ignored.
        opener: object, optional
            Mimics the built-in open parameter of the same name.  Ignored.
        compression: str, optional (see smart_open.compression.get_supported_compression_types)
            Explicitly specify the compression/decompression behavior.
        transport_params: dict, optional
            Additional parameters for the transport layer (see notes below).

        Returns
        -------
        A file-like object.

        Notes
        -----
        smart_open has several implementations for its transport layer (e.g. S3, HTTP).
        Each transport layer has a different set of keyword arguments for overriding
        default behavior.  If you specify a keyword argument that is *not* supported
        by the transport layer being used, smart_open will ignore that argument and
        log a warning message.
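
        For example, here is a minimal sketch (the bucket, key, and profile name
        are hypothetical) of handing a custom boto3 client to the S3 transport
        via transport_params:

        >>> import boto3
        >>> from smart_open import open
        >>> session = boto3.Session(profile_name='my-profile')  # hypothetical profile
        >>> tp = {'client': session.client('s3')}
        >>> with open('s3://example-bucket/example.txt', 'rb', transport_params=tp) as fin:
        ...     data = fin.read()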

        smart_open supports the following transport mechanisms:

        azure (smart_open/azure.py)
        ~~~~~~~~~~~~~~~~~~~~~~~~~~~
        Implements file-like objects for reading and writing to/from Azure Blob Storage.

        container_id: str
            The name of the container this object resides in.
        blob_id: str
            The name of the blob within the container.
        mode: str
            The mode for opening the object.  Must be either "rb" or "wb".
        client: azure.storage.blob.BlobServiceClient, ContainerClient, or BlobClient
            The Azure Blob Storage client to use when working with azure-storage-blob.
        blob_kwargs: dict, optional
            Additional parameters to pass to `BlobClient.commit_block_list`.
            For writing only.
        buffer_size: int, optional
            The buffer size to use when performing I/O. For reading only.
        min_part_size: int, optional
            The minimum part size for multipart uploads. For writing only.
        max_concurrency: int, optional
            The number of parallel connections with which to download. For reading only.
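
        A minimal sketch, assuming you have an Azure Storage connection string;
        the container and blob names are hypothetical:

        >>> from azure.storage.blob import BlobServiceClient
        >>> from smart_open import open
        >>> conn_str = 'DefaultEndpointsProtocol=https;AccountName=...;AccountKey=...'
        >>> tp = {'client': BlobServiceClient.from_connection_string(conn_str)}
        >>> with open('azure://mycontainer/myblob.txt', 'rb', transport_params=tp) as fin:
        ...     data = fin.read()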

        file (smart_open/local_file.py)
        ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        Implements the transport for the file:// schema.
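
        For example (the local path is hypothetical; plain paths without the
        file:// scheme go through the same transport):

        >>> from smart_open import open
        >>> with open('file:///tmp/example.txt', 'w') as fout:
        ...     fout.write('hello')
        5
        >>> with open('/tmp/example.txt') as fin:
        ...     fin.read()
        'hello'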

        ftp/ftps (smart_open/ftp.py)
        ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        Implements I/O streams over FTP.

        path: str
            The path on the remote server.
        mode: str
            Must be "rb" or "wb".
        host: str
            The host to connect to.
        user: str
            The username to use for the connection.
        password: str
            The password for the specified username.
        port: int
            The port to connect to.
        secure_connection: bool
            True for FTPS, False for FTP.
        transport_params: dict
            Additional parameters for the FTP connection.
            Currently supported parameters: timeout, source_address, encoding.
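
        A minimal sketch; the host, credentials, and path are hypothetical.
        Credentials and port travel in the URI itself (see the URI examples
        under parse_uri):

        >>> from smart_open import open
        >>> with open('ftp://user:secret@ftp.example.com/pub/data.txt', 'rb') as fin:
        ...     header = fin.read(64)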

        gs (smart_open/gcs.py)
        ~~~~~~~~~~~~~~~~~~~~~~
        Implements file-like objects for reading and writing to/from GCS.

        bucket_id: str
            The name of the bucket this object resides in.
        blob_id: str
            The name of the blob within the bucket.
        mode: str
            The mode for opening the object. Must be either "rb" or "wb".
        buffer_size:
            Deprecated.
        min_part_size: int, optional
            The minimum part size for multipart uploads. For writing only.
        client: google.cloud.storage.Client, optional
            The GCS client to use when working with google-cloud-storage.
        get_blob_kwargs: dict, optional
            Additional keyword arguments to propagate to the bucket.get_blob
            method of the google-cloud-storage library. For reading only.
        blob_properties: dict, optional
            Set properties on blob before writing. For writing only.
        blob_open_kwargs: dict, optional
            Additional keyword arguments to propagate to the blob.open method
            of the google-cloud-storage library.
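
        A minimal sketch; the bucket and blob names are hypothetical:

        >>> from google.cloud import storage
        >>> from smart_open import open
        >>> tp = {'client': storage.Client()}  # uses your default GCP credentials
        >>> with open('gs://my-bucket/my-blob.txt', 'rb', transport_params=tp) as fin:
        ...     first_line = fin.readline()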

        hdfs/viewfs (smart_open/hdfs.py)
        ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        Implements reading and writing to/from HDFS.
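
        For example (the path is hypothetical, and a working Hadoop client
        environment is assumed):

        >>> from smart_open import open
        >>> with open('hdfs:///user/alice/example.txt', 'rb') as fin:
        ...     first_line = fin.readline()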

        http/https (smart_open/http.py)
        ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        Implements file-like objects for reading over HTTP.

        url: str
            The URL to open.
        mode: str
            The mode to open using.
        kerberos: boolean, optional
            If True, will attempt to use the local Kerberos credentials.
        user: str, optional
            The username for authenticating over HTTP.
        password: str, optional
            The password for authenticating over HTTP.
        cert: str or tuple, optional
            If a string, the path to an SSL client certificate file (.pem).
            If a tuple, a ('cert', 'key') pair.
        headers: dict, optional
            Any headers to send in the request. If ``None``, the default headers are sent:
            ``{'Accept-Encoding': 'identity'}``. To use no headers at all,
            set this variable to an empty dict, ``{}``.
        session: object, optional
            The requests Session object to use with http get requests.
            Can be used for OAuth2 clients.
        buffer_size: int, optional
            The buffer size to use when performing I/O.
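
        A minimal sketch of basic authentication via transport_params; the URL
        and credentials are hypothetical:

        >>> from smart_open import open
        >>> tp = {'user': 'alice', 'password': 'secret'}
        >>> with open('https://example.com/private/data.txt', 'rb', transport_params=tp) as fin:
        ...     body = fin.read()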

        s3/s3n/s3u/s3a (smart_open/s3.py)
        ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        Implements file-like objects for reading and writing from/to AWS S3.

        bucket_id: str
            The name of the bucket this object resides in.
        key_id: str
            The name of the key within the bucket.
        mode: str
            The mode for opening the object.  Must be either "rb" or "wb".
        buffer_size: int, optional
            Default: 128KB
            The buffer size in bytes for reading. Controls memory usage. Data is streamed
            from an S3 network stream in buffer_size chunks. Forward seeks within
            the current buffer are satisfied without additional GET requests. Backward
            seeks always open a new GET request. For forward seek-intensive workloads,
            increase buffer_size to reduce GET requests at the cost of higher memory usage.
        min_part_size: int, optional
            The minimum part size for multipart uploads, in bytes.
            When the writebuffer contains this many bytes, smart_open will upload
            the bytes to S3 as a single part of a multi-part upload, freeing the
            buffer either partially or entirely.  When you close the writer, it
            will assemble the parts together.
            The value determines the upper limit for the writebuffer.  If buffer
            space is short (e.g. you are buffering to memory), then use a smaller
            value for min_part_size, or consider buffering to disk instead (see
            the writebuffer option).
            The value must be between 5MB and 5GB.  If you specify a value outside
            of this range, smart_open will adjust it for you, because otherwise the
            upload _will_ fail.
            For writing only.  Does not apply if you set multipart_upload=False.
        multipart_upload: bool, optional
            Default: `True`
            If set to `True`, will use multipart upload for writing to S3. If set
            to `False`, the upload will use the S3 Single-Part Upload API, which
            is better suited to small files.
            For writing only.
        version_id: str, optional
            The version of the object, used when reading.
            If None, will fetch the most recent version.
        defer_seek: boolean, optional
            Default: `False`
            If set to `True` on a file opened for reading, GetObject will not be
            called until the first seek() or read().
            Avoids redundant API queries when seeking before reading.
        range_chunk_size: int, optional
            Default: `None`
            Maximum byte range per S3 GET request when reading.
            When None (default), a single GET request is made for the entire file,
            and data is streamed from that single botocore.response.StreamingBody
            in buffer_size chunks.
            When set to a positive integer, multiple GET requests are made, each
            limited to at most this many bytes via HTTP Range headers. Each GET
            returns a new StreamingBody that is streamed in buffer_size chunks.
            Useful for reading small portions of large files without forcing
            S3-compatible systems like SeaweedFS/Ceph to load the entire file.
            Larger values mean fewer billable GET requests but higher load on S3
            servers. Smaller values mean more GET requests but less server load per request.
            Values larger than the file size result in a single GET for the whole file.
            Affects reading only. Does not affect memory usage (controlled by buffer_size).
        client: object, optional
            The S3 client to use when working with boto3.
            If you don't specify this, then smart_open will create a new client for you.
        client_kwargs: dict, optional
            Additional parameters to pass to the relevant functions of the client.
            The keys are fully qualified method names, e.g. `S3.Client.create_multipart_upload`.
            The values are kwargs to pass to that method each time it is called.
        writebuffer: IO[bytes], optional
            By default, this module will buffer data in memory using io.BytesIO
            when writing. Pass another binary IO instance here to use it instead.
            For example, you may pass a file object to buffer to local disk instead
            of in RAM. Use this to keep RAM usage low at the expense of additional
            disk IO. If you pass in an open file, then you are responsible for
            cleaning it up after writing completes.
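
        A minimal write-side sketch; the bucket and key are hypothetical, and the
        client_kwargs keys are fully qualified method names as described above:

        >>> from smart_open import open
        >>> tp = {
        ...     'min_part_size': 50 * 1024**2,  # 50MB parts, within the 5MB-5GB range
        ...     'client_kwargs': {
        ...         'S3.Client.create_multipart_upload': {
        ...             'ServerSideEncryption': 'aws:kms',  # example method kwarg
        ...         },
        ...     },
        ... }
        >>> with open('s3://example-bucket/big-file.bin', 'wb', transport_params=tp) as fout:
        ...     fout.write(b'payload bytes')
        13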

        ssh/scp/sftp (smart_open/ssh.py)
        ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        Implements I/O streams over SSH.

        path: str
            The path to the file to open on the remote machine.
        mode: str, optional
            The mode to use for opening the file.
        host: str, optional
            The hostname of the remote machine. Must not be None.
        user: str, optional
            The username to use to login to the remote machine.
            If None, defaults to the name of the current user.
        password: str, optional
            The password to use to login to the remote machine.
        port: int, optional
            The port to connect to.
        connect_kwargs: dict, optional
            Any additional settings to be passed to paramiko.SSHClient.connect.
        prefetch_kwargs: dict, optional
            Any additional settings to be passed to paramiko.SFTPFile.prefetch.
            The presence of this dict (even if empty) triggers prefetching.
        buffer_size: int, optional
            Passed to the bufsize argument of paramiko.SFTPClient.open.
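
        A minimal sketch; the host, user, and paths are hypothetical, and
        connect_kwargs pass straight through to paramiko.SSHClient.connect:

        >>> from smart_open import open
        >>> tp = {'connect_kwargs': {'key_filename': '/home/alice/.ssh/id_rsa'}}
        >>> with open('ssh://alice@host.example.com/remote/file.txt', 'rb',
        ...           transport_params=tp) as fin:
        ...     data = fin.read()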

        webhdfs (smart_open/webhdfs.py)
        ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        Implements reading and writing to/from WebHDFS.

        http_uri: str
            The WebHDFS URL converted to an HTTP REST URL.
        min_part_size: int, optional
            For writing only.
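
        For example (the namenode host, port, and path are hypothetical):

        >>> from smart_open import open
        >>> with open('webhdfs://namenode.example.com:50070/user/alice/data.txt', 'rb') as fin:
        ...     chunk = fin.read(1024)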

        Examples
        --------

        >>> from smart_open import open
        >>>
        >>> # stream lines from an S3 object
        >>> for line in open('s3://commoncrawl/robots.txt'):
        ...    print(repr(line))
        ...    break
        'User-Agent: *\n'

        >>> # stream from/to compressed files, with transparent (de)compression:
        >>> for line in open('tests/test_data/1984.txt.gz', encoding='utf-8'):
        ...    print(repr(line))
        'It was a bright cold day in April, and the clocks were striking thirteen.\n'
        'Winston Smith, his chin nuzzled into his breast in an effort to escape the vile\n'
        'wind, slipped quickly through the glass doors of Victory Mansions, though not\n'
        'quickly enough to prevent a swirl of gritty dust from entering along with him.\n'

        >>> # can use context managers too:
        >>> with open('tests/test_data/1984.txt.gz') as fin:
        ...    with open('tests/test_data/1984.txt.bz2', 'w') as fout:
        ...        for line in fin:
        ...           fout.write(line)
        74
        80
        78
        79

        >>> # can use any IOBase operations, like seek
        >>> with open('s3://commoncrawl/robots.txt', 'rb') as fin:
        ...     for line in fin:
        ...         print(repr(line.decode('utf-8')))
        ...         break
        ...     offset = fin.seek(0)  # seek to the beginning
        ...     print(fin.read(4))
        'User-Agent: *\n'
        b'User'

        >>> # stream from HTTP
        >>> for line in open('http://example.com/index.html'):
        ...     print(repr(line[:15]))
        ...     break

        This function also supports transparent compression and decompression
        using the following codecs:

        * .bz2
        * .gz
        * .xz
        * .zst

        The function depends on the file extension to determine the appropriate codec.
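
        To override the inference, pass the codec explicitly.  A minimal sketch;
        the file names are hypothetical, and the accepted string values are
        assumed to match smart_open.compression.get_supported_compression_types:

        >>> from smart_open import open
        >>> # Force gzip decompression regardless of the file's extension.
        >>> with open('data.bin', 'rb', compression='.gz') as fin:
        ...     data = fin.read()
        >>> # Read the raw bytes of a .gz file without decompressing.
        >>> with open('archive.gz', 'rb', compression='disable') as fin:
        ...     raw = fin.read()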


        See Also
        --------
        - `Standard library reference <https://docs.python.org/3.14/library/functions.html#open>`__
        - `smart_open README.rst
          <https://github.com/piskvorky/smart_open/blob/master/README.rst>`__

    parse_uri(uri_as_string)
        Parse the given URI from a string.

        Parameters
        ----------
        uri_as_string: str
            The URI to parse.

        Returns
        -------
        collections.namedtuple
            The parsed URI.

        Notes
        -----
        Supported URI schemes are:

        * azure
        * file
        * ftp
        * ftps
        * gs
        * hdfs
        * viewfs
        * http
        * https
        * s3
        * s3n
        * s3u
        * s3a
        * ssh
        * scp
        * sftp
        * webhdfs

        Valid URI examples:

        * ./local/path/file
        * ~/local/path/file
        * local/path/file
        * ./local/path/file.gz
        * file:///home/user/file
        * file:///home/user/file.bz2
        * ftp://username@host/path/file
        * ftp://username:password@host/path/file
        * ftp://username:password@host:port/path/file
        * ftps://username@host/path/file
        * ftps://username:password@host/path/file
        * ftps://username:password@host:port/path/file
        * hdfs:///path/file
        * hdfs://path/file
        * viewfs:///path/file
        * viewfs://path/file
        * s3://my_bucket/my_key
        * s3://my_key:my_secret@my_bucket/my_key
        * s3://my_key:my_secret@my_server:my_port@my_bucket/my_key
        * ssh://username@host/path/file
        * ssh://username@host//path/file
        * scp://username@host/path/file
        * sftp://username@host/path/file
        * webhdfs://host:port/path/file
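
        For example, a quick sketch of the returned namedtuple for an S3 URI
        (the field names shown follow the s3 parameter names above):

        >>> from smart_open import parse_uri
        >>> uri = parse_uri('s3://my_bucket/my_key')
        >>> uri.scheme
        's3'
        >>> uri.bucket_id, uri.key_id
        ('my_bucket', 'my_key')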

    register_compressor(ext, callback)
        Register a callback for transparently decompressing files with a specific extension.

        Parameters
        ----------
        ext: str
            The extension.  Must include the leading period, e.g. `.gz`.
        callback: callable
            The callback.  It must accept two positional arguments, file_obj and mode.
            This function will be called when `smart_open` is opening a file with
            the specified extension.

        Examples
        --------

        Instruct smart_open to use the `lzma` module whenever opening a file
        with a .xz extension (see README.rst for the complete example showing I/O):

        >>> def _handle_xz(file_obj, mode):
        ...     import lzma
        ...     return lzma.LZMAFile(filename=file_obj, mode=mode)
        >>>
        >>> register_compressor('.xz', _handle_xz)

        This is just an example: `lzma` is in the standard library and is registered by default.

    s3_iter_bucket(bucket_name, prefix='', accept_key=None, key_limit=None, workers=16, retries=3, **session_kwargs)
        Deprecated.  Use smart_open.s3.iter_bucket instead.

    smart_open(uri, mode='rb', buffering=-1, encoding=None, errors=None, newline=None, closefd=True, opener=None, ignore_extension=False, **kwargs)

DATA
    __all__ = ['open', 'parse_uri', 'register_compressor', 's3_iter_bucket...