.. _basics.strings:

****************************************
Working with Arrays of Strings And Bytes
****************************************

While NumPy is primarily a numerical library, it is often convenient
to work with NumPy arrays of strings or bytes. The two most common
use cases are:

* Working with data loaded or memory-mapped from a data file,
  where one or more of the fields in the data is a string or
  bytestring and the maximum length of the field is known ahead
  of time. This is often the case for a name or label field.

* Using NumPy indexing and broadcasting with arrays of Python
  strings of unknown length, which may or may not have data
  defined for every value.

For the first use case, NumPy provides the fixed-width `numpy.void`,
`numpy.str_` and `numpy.bytes_` data types. For the second use case,
numpy provides `numpy.dtypes.StringDType`. Below we describe how to
work with both fixed-width and variable-width string arrays, how to
convert between the two representations, and provide some advice for
most efficiently working with string data in NumPy.

Fixed-width data types
======================

Before NumPy 2.0, the fixed-width `numpy.str_`, `numpy.bytes_`, and
`numpy.void` data types were the only types available for working
with strings and bytestrings in NumPy. For this reason, `numpy.str_`
and `numpy.bytes_` are used as the default dtypes for strings and
bytestrings, respectively:

>>> np.array(["hello", "world"])
array(['hello', 'world'], dtype='<U5')
Here the detected data type is ``'<U5'``, or little-endian unicode
string data, with a maximum length of 5 unicode code points.
Similarly for bytestrings:
>>> np.array([b"hello", b"world"])
array([b'hello', b'world'], dtype='|S5')
Since this is a one-byte encoding, the byteorder is ``'|'`` (not
applicable), and the detected data type is a bytestring with a
maximum length of 5 characters.
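One practical consequence of fixed-width dtypes is worth keeping in mind: assigning a value longer than the field width silently truncates it. A minimal sketch:

```python
import numpy as np

# The inferred dtype '<U5' reserves exactly 5 code points per element.
arr = np.array(["hello", "world"])

# Assigning a longer value silently truncates it to the field width.
arr[0] = "greetings"
print(arr)  # -> ['greet' 'world']
```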
You can also use `numpy.void` to represent bytestrings:
>>> np.array([b"hello", b"world"]).astype(np.void)
array([b'\x68\x65\x6C\x6C\x6F', b'\x77\x6F\x72\x6C\x64'], dtype='|V5')
This is most useful when working with byte streams that are not well
represented as bytestrings, and instead are better thought of as
collections of 8-bit integers.
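To actually work with those 8-bit integers, a zero-copy view can reinterpret the void buffer as integers; a minimal sketch:

```python
import numpy as np

# Fixed-width void elements are opaque byte blobs with no text semantics.
arr = np.array([b"hello", b"world"]).astype(np.void)

# A zero-copy view exposes the same buffer as 8-bit integers.
as_ints = arr.view(np.uint8)
print(as_ints)  # the ASCII byte values of 'hello' followed by 'world'
```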

.. _stringdtype:

Variable-width strings
======================

.. versionadded:: 2.0

.. note::

   `numpy.dtypes.StringDType` is a new addition to NumPy. It is
   implemented using NumPy's new support for flexible user-defined
   data types and is not as extensively tested in production
   workflows as the older NumPy data types.

Often, real-world string data does not have a predictable length. In
these cases it is awkward to use fixed-width strings, since storing
all the data without truncation requires knowing the length of the
longest string one would like to store in the array before the array
is created.
To support situations like this, NumPy provides
`numpy.dtypes.StringDType`, which stores variable-width string data
in a UTF-8 encoding in a NumPy array:
>>> from numpy.dtypes import StringDType
>>> data = ["this is a longer string", "short string"]
>>> arr = np.array(data, dtype=StringDType())
>>> arr
array(['this is a longer string', 'short string'], dtype=StringDType())
Note that unlike fixed-width strings, ``StringDType`` is not parameterized
by the maximum length of an array element: arbitrarily long or short
strings can live in the same array without reserving storage for
padding bytes in the short strings.
Also note that unlike fixed-width strings and most other NumPy data
types, ``StringDType`` does not store the string data in the "main"
``ndarray`` data buffer. Instead, the array buffer is used to store
metadata about where the string data are stored in memory. This
difference means that code expecting the array buffer to contain
string data will not function correctly, and will need to be updated
to support ``StringDType``.

Missing data support
--------------------

Often string datasets are not complete, and a special label is needed
to indicate that a value is missing. By default ``StringDType`` does
not have any special support for missing values, besides the fact
that empty strings are used to populate empty arrays:
>>> np.empty(3, dtype=StringDType())
array(['', '', ''], dtype=StringDType())
Optionally, you can create an instance of ``StringDType`` with
support for missing values by passing ``na_object`` as a keyword
argument to the initializer:

>>> dt = StringDType(na_object=None)
>>> arr = np.array(["this array has", None, "as an entry"], dtype=dt)
>>> arr
array(['this array has', None, 'as an entry'],
dtype=StringDType(na_object=None))
>>> arr[1] is None
True
The ``na_object`` can be any Python object.
Common choices are `numpy.nan`, ``float('nan')``, ``None``, an object
specifically intended to represent missing data like ``pandas.NA``,
or a (hopefully) unique string like ``"__placeholder__"``.
NumPy has special handling for NaN-like sentinels and string
sentinels.

NaN-like Missing Data Sentinels
+++++++++++++++++++++++++++++++

A NaN-like sentinel returns itself as the result of arithmetic
operations. This includes the Python ``nan`` float and the pandas
missing data sentinel ``pd.NA``. NaN-like sentinels inherit these
behaviors in string operations. This means that, for example, the
result of addition with any other string is the sentinel:
>>> dt = StringDType(na_object=np.nan)
>>> arr = np.array(["hello", np.nan, "world"], dtype=dt)
>>> arr + arr
array(['hellohello', nan, 'worldworld'], dtype=StringDType(na_object=nan))
Following the behavior of ``nan`` in float arrays, NaN-like sentinels
sort to the end of the array:
>>> np.sort(arr)
array(['hello', 'world', nan], dtype=StringDType(na_object=nan))

String Missing Data Sentinels
+++++++++++++++++++++++++++++

A string missing data value is an instance of ``str`` or a subtype of ``str``. If
such an array is passed to a string operation or a cast, "missing" entries are
treated as if they have a value given by the string sentinel. Comparison
operations similarly use the sentinel value directly for missing entries.

Other Sentinels
+++++++++++++++

Other objects, such as ``None``, are also supported as missing data
sentinels. If any missing entries are present in an array using such
a sentinel, string operations will raise an error:

>>> dt = StringDType(na_object=None)
>>> arr = np.array(["this array has", None, "as an entry"], dtype=dt)
>>> np.sort(arr)
Traceback (most recent call last):
...
TypeError: '<' not supported between instances of 'NoneType' and 'str'

Coercing Non-strings
--------------------

By default, non-string data are coerced to strings:
>>> np.array([1, object(), 3.4], dtype=StringDType())
array(['1', '<object object at 0x7faa2497dde0>', '3.4'], dtype=StringDType())
If this behavior is not desired, an instance of the DType can be created that
disables string coercion by setting ``coerce=False`` in the initializer:
>>> np.array([1, object(), 3.4], dtype=StringDType(coerce=False))
Traceback (most recent call last):
...
ValueError: StringDType only allows string data when string coercion is disabled.
This allows strict data validation in the same pass over the data NumPy uses to
create the array. Setting ``coerce=True`` recovers the default behavior allowing
coercion to strings.

Casting To and From Fixed-Width Strings
---------------------------------------

``StringDType`` supports round-trip casts between `numpy.str_`,
`numpy.bytes_`, and `numpy.void`. Casting to a fixed-width string is
most useful when strings need to be memory-mapped in an ndarray or
when a fixed-width string is needed for reading and writing to a
columnar data format with a known maximum string length.
In all cases, casting to a fixed-width string requires specifying the
maximum allowed string length::
>>> arr = np.array(["hello", "world"], dtype=StringDType())
>>> arr.astype(np.str_) # doctest: +IGNORE_EXCEPTION_DETAIL
Traceback (most recent call last):
...
TypeError: Casting from StringDType to a fixed-width dtype with an
unspecified size is not currently supported, specify an explicit
size for the output dtype instead.
The above exception was the direct cause of the following
exception:
TypeError: cannot cast dtype StringDType() to <class 'numpy.dtypes.StrDType'>.
>>> arr.astype("U5")
array(['hello', 'world'], dtype='<U5')
The `numpy.bytes_` cast is most useful for string data that is known
to contain only ASCII characters, as characters outside this range
cannot be represented in a single byte in the UTF-8 encoding and are
rejected.
Any valid unicode string can be cast to `numpy.str_`, although
since `numpy.str_` uses a 32-bit UCS4 encoding for all characters,
this will often waste memory for real-world textual data that can be
well-represented by a more memory-efficient encoding.
Additionally, any valid unicode string can be cast to `numpy.void`,
storing the UTF-8 bytes directly in the output array:
>>> arr = np.array(["hello", "world"], dtype=StringDType())
>>> arr.astype("V5")
array([b'\x68\x65\x6C\x6C\x6F', b'\x77\x6F\x72\x6C\x64'], dtype='|V5')
Care must be taken to ensure that the output array has enough space
for the UTF-8 bytes in the string, since the size of a UTF-8
bytestream in bytes is not necessarily the same as the number of
characters in the string.