File: database.rst

Database interactions
*********************

Database object
===============

.. py:class:: rocksdb.DB

    .. py:method:: __init__(db_name, Options opts, read_only=False)

        :param unicode db_name:  Name of the database to open
        :param opts: Options for this specific database
        :type opts: :py:class:`rocksdb.Options`
        :param bool read_only: If ``True`` the database is opened read-only.
                               All DB calls which modify data will raise an
                               Exception.
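
        A minimal usage sketch (the database name ``"test.db"`` is only an
        illustration; it assumes ``create_if_missing`` is enabled on the
        options so the database is created on first open)::

            import rocksdb

            opts = rocksdb.Options()
            opts.create_if_missing = True  # assumed option, see rocksdb.Options
            db = rocksdb.DB("test.db", opts)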


    .. py:method:: put(key, value, sync=False, disable_wal=False)

        Set the database entry for "key" to "value".

        :param bytes key: Name for this entry
        :param bytes value: Data for this entry
        :param bool sync:
            If ``True``, the write will be flushed from the operating system
            buffer cache (by calling WritableFile::Sync()) before the write
            is considered complete. If this flag is ``True``, writes will be
            slower.

            If this flag is ``False``, and the machine crashes, some recent
            writes may be lost.  Note that if it is just the process that
            crashes (i.e., the machine does not reboot), no writes will be
            lost even if ``sync == False``.

            In other words, a DB write with ``sync == False`` has similar
            crash semantics as the "write()" system call.  A DB write
            with ``sync == True`` has similar crash semantics to a "write()"
            system call followed by "fdatasync()".

        :param bool disable_wal:
            If ``True``, writes will not first go to the write ahead log,
            and the write may be lost after a crash.
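
        A usage sketch (``db`` is an already opened :py:class:`rocksdb.DB`)::

            db.put(b"key", b"value")             # fast; a machine crash may lose it
            db.put(b"key", b"value", sync=True)  # synced to storage before returning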

    .. py:method:: delete(key, sync=False, disable_wal=False)

        Remove the database entry for "key".

        :param bytes key: Name to delete
        :param sync: See :py:meth:`rocksdb.DB.put`
        :param disable_wal: See :py:meth:`rocksdb.DB.put`
        :raises rocksdb.errors.NotFound: If the key does not exist

    .. py:method:: merge(key, value, sync=False, disable_wal=False)

        Merge the database entry for "key" with "value".
        The semantics of this operation are determined by the user-provided
        merge_operator set when opening the DB.

        See :py:meth:`rocksdb.DB.put` for the parameters

        :raises:
            :py:exc:`rocksdb.errors.NotSupported` if this is called and
            no :py:attr:`rocksdb.Options.merge_operator` was set at creation


    .. py:method:: write(batch, sync=False, disable_wal=False)

        Apply the specified updates to the database.

        :param rocksdb.WriteBatch batch: Batch to apply
        :param sync: See :py:meth:`rocksdb.DB.put`
        :param disable_wal: See :py:meth:`rocksdb.DB.put`
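
        A usage sketch of an atomic multi-key update (key names are
        illustrative)::

            batch = rocksdb.WriteBatch()
            batch.put(b"a", b"1")
            batch.delete(b"b")
            db.write(batch, sync=True)  # both changes become visible atomically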

    .. py:method:: get(key, verify_checksums=False, fill_cache=True, snapshot=None, read_tier="all")

        :param bytes key: Name to get

        :param bool verify_checksums: 
            If ``True``, all data read from underlying storage will be
            verified against corresponding checksums.

        :param bool fill_cache:
            Should the "data block", "index block" or "filter block"
            read for this request be cached in memory?
            Callers may wish to set this field to ``False`` for bulk scans.
        
        :param snapshot:
            If not ``None``, read as of the supplied snapshot
            (which must belong to the DB that is being read and which must
            not have been released). If it is ``None``, an implicit snapshot
            of the state at the beginning of this read operation is used.
        :type snapshot: :py:class:`rocksdb.Snapshot`

        :param string read_tier:
            Specify if this read request should process data that ALREADY
            resides on a particular cache. If the required data is not
            found at the specified cache,
            then :py:exc:`rocksdb.errors.Incomplete` is raised.

            | Use ``all`` if a fetch from disk is allowed.
            | Use ``cache`` if only data from cache is allowed.
 
        :returns: ``None`` if not found, else the value for this key
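
        A usage sketch (the key is illustrative)::

            value = db.get(b"key")
            if value is None:
                print("key not found")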

    .. py:method:: multi_get(keys, verify_checksums=False, fill_cache=True, snapshot=None, read_tier="all")

        :param keys: Keys to fetch
        :type keys: list of bytes

        For the other params see :py:meth:`rocksdb.DB.get`

        :returns:
            A ``dict`` where the value is either ``bytes`` or ``None`` if not found

        :raises: An error if the fetch for a single key fails
        
        .. note::
            keys will not be "de-duplicated".
            Duplicate keys will return duplicate values in order.
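
        A usage sketch (keys and values are illustrative)::

            values = db.multi_get([b"a", b"b", b"missing"])
            # e.g. {b"a": b"1", b"b": b"2", b"missing": None}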

    .. py:method:: key_may_exist(key, fetch=False, verify_checksums=False, fill_cache=True, snapshot=None, read_tier="all")

        If the key definitely does not exist in the database, this method
        returns ``False``, otherwise ``True``. If the caller wants to obtain
        the value when the key is found in memory, ``fetch`` should be set to
        ``True``. This check is potentially lighter-weight than invoking
        :py:meth:`rocksdb.DB.get`. One way to make it lighter weight is to
        avoid doing any IO.

        :param bytes key: Key to check
        :param bool fetch: Also obtain the value if it is found

        For the other params see :py:meth:`rocksdb.DB.get`

        :returns: 
            * ``(True, None)`` if key is found but value not in memory
            * ``(True, None)`` if key is found and ``fetch=False``
            * ``(True, <data>)`` if key is found and value in memory and ``fetch=True``
            * ``(False, None)`` if key is not found
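
        A usage sketch::

            exists, value = db.key_may_exist(b"key", fetch=True)
            if exists and value is not None:
                print(value)  # value was found in memory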

    .. py:method:: iterkeys(verify_checksums=False, fill_cache=True, snapshot=None, read_tier="all")

        Iterate over the keys

        For other params see :py:meth:`rocksdb.DB.get`

        :returns:
            An iterator object which is not yet valid.
            Call one of the seek methods of the iterator first to position it.

        :rtype: :py:class:`rocksdb.BaseIterator`

    .. py:method:: itervalues(verify_checksums=False, fill_cache=True, snapshot=None, read_tier="all")

        Iterate over the values

        For other params see :py:meth:`rocksdb.DB.get`

        :returns:
            An iterator object which is not yet valid.
            Call one of the seek methods of the iterator first to position it.

        :rtype: :py:class:`rocksdb.BaseIterator`

    .. py:method:: iteritems(verify_checksums=False, fill_cache=True, snapshot=None, read_tier="all")

        Iterate over the items

        For other params see :py:meth:`rocksdb.DB.get`

        :returns:
            An iterator object which is not yet valid.
            Call one of the seek methods of the iterator first to position it.

        :rtype: :py:class:`rocksdb.BaseIterator`
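
        A usage sketch that walks all items from the first key (assuming the
        iterator yields ``(key, value)`` tuples)::

            it = db.iteritems()
            it.seek_to_first()
            for key, value in it:
                print(key, value)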

    .. py:method:: snapshot()
    
        Return a handle to the current DB state.
        Iterators created with this handle will all observe a stable snapshot
        of the current DB state.
        
        :rtype: :py:class:`rocksdb.Snapshot`
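
        A usage sketch (key and values are illustrative)::

            snapshot = db.snapshot()
            db.put(b"x", b"new")
            # reads the value as of snapshot creation, not b"new"
            old = db.get(b"x", snapshot=snapshot)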


    .. py:method:: get_property(prop)

        DB implementations can export properties about their state
        via this method. If "prop" is a valid property understood by this
        DB implementation, a byte string with its value is returned.
        Otherwise ``None`` is returned.
        
        Valid property names include:
        
        * ``b"rocksdb.num-files-at-level<N>"``: return the number of files at level <N>,
            where <N> is an ASCII representation of a level number (e.g. "0").

        * ``b"rocksdb.stats"``: returns a multi-line byte string that describes statistics
            about the internal operation of the DB.

        * ``b"rocksdb.sstables"``: returns a multi-line byte string that describes all
            of the sstables that make up the db contents.

        * ``b"rocksdb.num-immutable-mem-table"``: Number of immutable mem tables.

        * ``b"rocksdb.mem-table-flush-pending"``: Returns ``1`` if mem table flush is pending, otherwise ``0``.

        * ``b"rocksdb.compaction-pending"``:  Returns ``1`` if a compaction is pending, otherweise ``0``.

        * ``b"rocksdb.background-errors"``: Returns accumulated background errors encountered.

        * ``b"rocksdb.cur-size-active-mem-table"``: Returns current size of the active memtable.

    .. py:method:: get_live_files_metadata()

        Returns a list of all table files.

        It returns a list of dicts where each dict has the following keys.

        ``name``
            Name of the file

        ``level``
            Level at which this file resides

        ``size``
            File size in bytes

        ``smallestkey``
            Smallest user defined key in the file

        ``largestkey``
            Largest user defined key in the file

        ``smallest_seqno``
            smallest seqno in file

        ``largest_seqno``
            largest seqno in file
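
        A usage sketch that prints some metadata of every live table file::

            for meta in db.get_live_files_metadata():
                print(meta["name"], meta["level"], meta["size"])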

    .. py:method:: compact_range(begin=None, end=None, **options)

        Compact the underlying storage for the key range [begin, end].
        The actual compaction interval might be a superset of [begin, end].
        In particular, deleted and overwritten versions are discarded,
        and the data is rearranged to reduce the cost of operations
        needed to access the data.

        This operation should typically only be invoked by users who understand
        the underlying implementation.

        ``begin == None`` is treated as a key before all keys in the database.
        ``end == None`` is treated as a key after all keys in the database.
        Therefore the following call will compact the entire database: ``db.compact_range()``.

        Note that after the entire database is compacted, all data is pushed
        down to the last level containing any data. If the total data size
        after compaction is reduced, that level might not be appropriate for
        hosting all the files. In this case, the client can set ``change_level``
        to ``True`` to move the files back to the minimum level capable of
        holding the data set, or to a given level (specified by a non-negative
        ``target_level``).

        :param bytes begin: Key where to start compaction.
                            If ``None`` start at the beginning of the database.
        :param bytes end: Key where to end compaction.
                          If ``None`` end at the last key of the database.
        :param bool change_level:  If ``True``, compacted files will be moved to
                                   the minimum level capable of holding the data
                                   or a given level (specified by a non-negative target_level).
                                   If ``False`` you may end up with a larger level
                                   than configured. Default is ``False``.
        :param int target_level: If change_level is ``True`` and target_level has a
                                 non-negative value, compacted files will be moved
                                 to target_level. Default is ``-1``.
        :param string bottommost_level_compaction:
            For level based compaction, we can configure if we want to
            skip/force bottommost level compaction. By default level based
            compaction will only compact the bottommost level if there is a
            compaction filter. It can be set to the following values.

            ``skip``
                Skip bottommost level compaction

            ``if_compaction_filter``
                Only compact bottommost level if there is a compaction filter.
                This is the default.

            ``force``
                Always compact bottommost level
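
        For example, compacting the whole database and moving the files back
        to the lowest level able to hold them (a usage sketch)::

            db.compact_range(change_level=True)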
        
    .. py:attribute:: options

        Returns the associated :py:class:`rocksdb.Options` instance.

        .. note::

            Changes to this object have no effect.
            Consider it read-only.

Iterator
========

.. py:class:: rocksdb.BaseIterator

    Base class for all iterators in this module. After creation an iterator
    is invalid. Call one of the seek methods before starting iteration.

    .. py:method:: seek_to_first()

            Position at the first key in the source

    .. py:method:: seek_to_last()
    
            Position at the last key in the source

    .. py:method:: seek(key)
    
        :param bytes key: Position at the first key in the source that is at
                          or past this key.

    Methods to support the Python iterator protocol:

    .. py:method:: __iter__()
    .. py:method:: __next__()
    .. py:method:: __reversed__()

Snapshot
========

.. py:class:: rocksdb.Snapshot

    Opaque handle to a single Snapshot.
    The Snapshot is released when nobody holds a reference to it anymore.
    Retrieved via :py:meth:`rocksdb.DB.snapshot`.

WriteBatch
==========

.. py:class:: rocksdb.WriteBatch

    WriteBatch holds a collection of updates to apply atomically to a DB.

    The updates are applied in the order in which they are added
    to the WriteBatch.  For example, the value of "key" will be "v3"
    after the following batch is written::

        batch = rocksdb.WriteBatch()
        batch.put(b"key", b"v1")
        batch.delete(b"key")
        batch.put(b"key", b"v2")
        batch.put(b"key", b"v3")

    .. py:method:: __init__(data=None)

        Creates a WriteBatch.

        :param bytes data:
            A serialized version of a previous WriteBatch, as retrieved
            from a previous .data() call. If ``None``, an empty WriteBatch is
            generated.

    .. py:method:: put(key, value)
    
        Store the mapping "key->value" in the database.

        :param bytes key: Name of the entry to store
        :param bytes value: Data of this entry

    .. py:method:: merge(key, value)
    
        Merge "value" with the existing value of "key" in the database.

        :param bytes key: Name of the entry to merge
        :param bytes value: Data to merge

    .. py:method:: delete(key)
 
        If the database contains a mapping for "key", erase it.  Else do nothing.

        :param bytes key: Key to erase

    .. py:method:: clear()

        Clear all updates buffered in this batch.

        .. note::
            Don't call this method if there is an outstanding iterator.
            Calling :py:meth:`rocksdb.WriteBatch.clear()` with an outstanding
            iterator leads to a SEGFAULT.

    .. py:method:: data()

        Retrieve the serialized version of this batch.

        :rtype: ``bytes``

    .. py:method:: count()
    
        Returns the number of updates in the batch

        :rtype: int

    .. py:method:: __iter__()

        Returns an iterator over the current contents of the write batch.

        If you add new items to the batch, they are not visible to this
        iterator. Create a new one if you need to see them.

        .. note::
            Calling :py:meth:`rocksdb.WriteBatch.clear()` on the write batch
            invalidates the iterator. Using an iterator whose corresponding
            write batch has been cleared leads to a SEGFAULT.

        :rtype: :py:class:`rocksdb.WriteBatchIterator`
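
        A usage sketch (keys and values are illustrative)::

            batch = rocksdb.WriteBatch()
            batch.put(b"key", b"v1")
            batch.delete(b"key")
            for op, key, value in batch:
                print(op, key, value)  # e.g. ('Put', b'key', b'v1')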

WriteBatchIterator
==================

.. py:class:: rocksdb.WriteBatchIterator

    .. py:method:: __iter__()

        Returns self.

    .. py:method:: __next__()

        Returns the next item inside the corresponding write batch.
        The return value is always a tuple of size three.

        First item (Name of the operation):

            * ``"Put"``
            * ``"Merge"``
            * ``"Delete"``

        Second item (key):
            Key for this operation.

        Third item (value):
            The value for this operation. Empty for ``"Delete"``.

Repair DB
=========

.. py:function:: repair_db(db_name, opts)

    :param unicode db_name: Name of the database to open
    :param opts: Options for this specific database
    :type opts: :py:class:`rocksdb.Options`

    If a DB cannot be opened, you may attempt to call this function to
    resurrect as much of the contents of the database as possible.
    Some data may be lost, so be careful when calling this function
    on a database that contains important information.
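
    A usage sketch (``"test.db"`` is only an illustration)::

        opts = rocksdb.Options()
        rocksdb.repair_db("test.db", opts)
        db = rocksdb.DB("test.db", opts)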


Errors
======

.. py:exception:: rocksdb.errors.NotFound
.. py:exception:: rocksdb.errors.Corruption
.. py:exception:: rocksdb.errors.NotSupported
.. py:exception:: rocksdb.errors.InvalidArgument
.. py:exception:: rocksdb.errors.RocksIOError
.. py:exception:: rocksdb.errors.MergeInProgress
.. py:exception:: rocksdb.errors.Incomplete