File: lexical_analysis.rst


.. _lexical:

****************
Lexical analysis
****************

.. index:: lexical analysis, parser, token

A Python program is read by a *parser*.  Input to the parser is a stream of
:term:`tokens <token>`, generated by the *lexical analyzer* (also known as
the *tokenizer*).
This chapter describes how the lexical analyzer produces these tokens.

The lexical analyzer determines the program text's :ref:`encoding <encodings>`
(UTF-8 by default), and decodes the text into
:ref:`source characters <lexical-source-character>`.
If the text cannot be decoded, a :exc:`SyntaxError` is raised.

Next, the lexical analyzer uses the source characters to generate a stream of tokens.
The type of a generated token generally depends on the next source character to
be processed. Similarly, other special behavior of the analyzer depends on
the first source character that hasn't yet been processed.
The following table gives a quick summary of these source characters,
with links to sections that contain more information.

.. list-table::
   :header-rows: 1

   * - Character
     - Next token (or other relevant documentation)

   * - * space
       * tab
       * formfeed
     - * :ref:`Whitespace <whitespace>`

   * - * CR, LF
     - * :ref:`New line <line-structure>`
       * :ref:`Indentation <indentation>`

   * - * backslash (``\``)
     - * :ref:`Explicit line joining <explicit-joining>`
       * (Also significant in :ref:`string escape sequences <escape-sequences>`)

   * - * hash (``#``)
     - * :ref:`Comment <comments>`

   * - * quote (``'``, ``"``)
     - * :ref:`String literal <strings>`

   * - * ASCII letter (``a``-``z``, ``A``-``Z``)
       * non-ASCII character
     - * :ref:`Name <identifiers>`
       * Prefixed :ref:`string or bytes literal <strings>`

   * - * underscore (``_``)
     - * :ref:`Name <identifiers>`
       * (Can also be part of :ref:`numeric literals <numbers>`)

   * - * number (``0``-``9``)
     - * :ref:`Numeric literal <numbers>`

   * - * dot (``.``)
     - * :ref:`Numeric literal <numbers>`
       * :ref:`Operator <operators>`

   * - * question mark (``?``)
       * dollar (``$``)
       *
         .. (the following uses zero-width space characters to render
         .. a literal backquote)

         backquote (``​`​``)
       * control character
     - * Error (outside string literals and comments)

   * - * other printing character
     - * :ref:`Operator or delimiter <operators>`

   * - * end of file
     - * :ref:`End marker <endmarker-token>`


.. _line-structure:

Line structure
==============

.. index:: line structure

A Python program is divided into a number of *logical lines*.


.. _logical-lines:

Logical lines
-------------

.. index:: logical line, physical line, line joining, NEWLINE token

The end of a logical line is represented by the token :data:`~token.NEWLINE`.
Statements cannot cross logical line boundaries except where :data:`!NEWLINE`
is allowed by the syntax (e.g., between statements in compound statements).
A logical line is constructed from one or more *physical lines* by following
the :ref:`explicit <explicit-joining>` or :ref:`implicit <implicit-joining>`
*line joining* rules.


.. _physical-lines:

Physical lines
--------------

A physical line is a sequence of characters terminated by one of the following
end-of-line sequences:

* the Unix form using ASCII LF (linefeed),
* the Windows form using the ASCII sequence CR LF (return followed by linefeed),
* the '`Classic Mac OS`__' form using the ASCII CR (return) character.

  __ https://en.wikipedia.org/wiki/Classic_Mac_OS

Regardless of platform, each of these sequences is replaced by a single
ASCII LF (linefeed) character.
(This is done even inside :ref:`string literals <strings>`.)
Each line can use any of the sequences; they do not need to be consistent
within a file.

The end of input also serves as an implicit terminator for the final
physical line.

Formally:

.. grammar-snippet::
   :group: python-grammar

   newline: <ASCII LF> | <ASCII CR> <ASCII LF> | <ASCII CR>


.. _comments:

Comments
--------

.. index:: comment, hash character
   single: # (hash); comment

A comment starts with a hash character (``#``) that is not part of a string
literal, and ends at the end of the physical line.  A comment signifies the end
of the logical line unless the implicit line joining rules are invoked. Comments
are ignored by the syntax.


.. _encodings:

Encoding declarations
---------------------

.. index:: source character set, encoding declarations (source file)
   single: # (hash); source encoding declaration

If a comment in the first or second line of the Python script matches the
regular expression ``coding[=:]\s*([-\w.]+)``, this comment is processed as an
encoding declaration; the first group of this expression names the encoding of
the source code file. The encoding declaration must appear on a line of its
own. If it is the second line, the first line must also be a comment-only line.
The recommended forms of an encoding expression are ::

   # -*- coding: <encoding-name> -*-

which is recognized also by GNU Emacs, and ::

   # vim:fileencoding=<encoding-name>

which is recognized by Bram Moolenaar's VIM.
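
For example, a script whose first line is a comment-only shebang line can
carry the declaration on its second line::

   #!/usr/bin/env python3
   # -*- coding: latin-1 -*-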

If no encoding declaration is found, the default encoding is UTF-8.  If the
implicit or explicit encoding of a file is UTF-8, an initial UTF-8 byte-order
mark (``b'\xef\xbb\xbf'``) is ignored rather than being a syntax error.

If an encoding is declared, the encoding name must be recognized by Python
(see :ref:`standard-encodings`). The
encoding is used for all lexical analysis, including string literals, comments
and identifiers.

.. _lexical-source-character:

All lexical analysis, including string literals, comments
and identifiers, works on Unicode text decoded using the source encoding.
Any Unicode code point, except the NUL control character, can appear in
Python source.

.. grammar-snippet::
   :group: python-grammar

   source_character:  <any Unicode code point, except NUL>


.. _explicit-joining:

Explicit line joining
---------------------

.. index:: physical line, line joining, line continuation, backslash character

Two or more physical lines may be joined into logical lines using backslash
characters (``\``), as follows: when a physical line ends in a backslash that is
not part of a string literal or comment, it is joined with the following line to
form a single logical line, and the backslash and the following end-of-line
character are deleted.  For example::

   if 1900 < year < 2100 and 1 <= month <= 12 \
      and 1 <= day <= 31 and 0 <= hour < 24 \
      and 0 <= minute < 60 and 0 <= second < 60:   # Looks like a valid date
           return 1

A line ending in a backslash cannot carry a comment.  A backslash does not
continue a comment.  A backslash does not continue a token except for string
literals (i.e., tokens other than string literals cannot be split across
physical lines using a backslash).  A backslash is illegal elsewhere on a line
outside a string literal.


.. _implicit-joining:

Implicit line joining
---------------------

Expressions in parentheses, square brackets or curly braces can be split over
more than one physical line without using backslashes. For example::

   month_names = ['Januari', 'Februari', 'Maart',      # These are the
                  'April',   'Mei',      'Juni',       # Dutch names
                  'Juli',    'Augustus', 'September',  # for the months
                  'Oktober', 'November', 'December']   # of the year

Implicitly continued lines can carry comments.  The indentation of the
continuation lines is not important.  Blank continuation lines are allowed.
There is no NEWLINE token between implicit continuation lines.  Implicitly
continued lines can also occur within triple-quoted strings (see below); in that
case they cannot carry comments.


.. _blank-lines:

Blank lines
-----------

.. index:: single: blank line

A logical line that contains only spaces, tabs, formfeeds and possibly a
comment, is ignored (i.e., no :data:`~token.NEWLINE` token is generated).
During interactive input of statements, handling of a blank line may differ
depending on the implementation of the read-eval-print loop.
In the standard interactive interpreter, an entirely blank logical line (that
is, one containing not even whitespace or a comment) terminates a multi-line
statement.


.. _indentation:

Indentation
-----------

.. index:: indentation, leading whitespace, space, tab, grouping, statement grouping

Leading whitespace (spaces and tabs) at the beginning of a logical line is used
to compute the indentation level of the line, which in turn is used to determine
the grouping of statements.

Tabs are replaced (from left to right) by one to eight spaces such that the
total number of characters up to and including the replacement is a multiple of
eight (this is intended to be the same rule as used by Unix).  The total number
of spaces preceding the first non-blank character then determines the line's
indentation.  Indentation cannot be split over multiple physical lines using
backslashes; the whitespace up to the first backslash determines the
indentation.

Indentation is rejected as inconsistent if a source file mixes tabs and spaces
in a way that makes the meaning dependent on how many spaces a tab is worth; a
:exc:`TabError` is raised in that case.

**Cross-platform compatibility note:** because of the nature of text editors on
non-UNIX platforms, it is unwise to use a mixture of spaces and tabs for the
indentation in a single source file.  It should also be noted that different
platforms may explicitly limit the maximum indentation level.

A formfeed character may be present at the start of the line; it will be ignored
for the indentation calculations above.  Formfeed characters occurring elsewhere
in the leading whitespace have an undefined effect (for instance, they may reset
the space count to zero).

.. index:: INDENT token, DEDENT token

The indentation levels of consecutive lines are used to generate
:data:`~token.INDENT` and :data:`~token.DEDENT` tokens, using a stack,
as follows.

Before the first line of the file is read, a single zero is pushed on the stack;
this will never be popped off again.  The numbers pushed on the stack will
always be strictly increasing from bottom to top.  At the beginning of each
logical line, the line's indentation level is compared to the top of the stack.
If it is equal, nothing happens. If it is larger, it is pushed on the stack, and
one :data:`!INDENT` token is generated.  If it is smaller, it *must* be one of the
numbers occurring on the stack; all numbers on the stack that are larger are
popped off, and for each number popped off a :data:`!DEDENT` token is generated.
At the end of the file, a :data:`!DEDENT` token is generated for each number
remaining on the stack that is larger than zero.
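
The generated tokens, including :data:`!INDENT` and :data:`!DEDENT`, can be
observed with the :mod:`tokenize` module from the standard library.
A minimal sketch (the exact token sequence shown reflects CPython's
tokenizer)::

   >>> import io, tokenize
   >>> source = "if flag:\n    pass\n"
   >>> for tok in tokenize.generate_tokens(io.StringIO(source).readline):
   ...     print(tokenize.tok_name[tok.type], repr(tok.string))
   ...
   NAME 'if'
   NAME 'flag'
   OP ':'
   NEWLINE '\n'
   INDENT '    '
   NAME 'pass'
   NEWLINE '\n'
   DEDENT ''
   ENDMARKER ''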

Here is an example of a correctly (though confusingly) indented piece of Python
code::

   def perm(l):
           # Compute the list of all permutations of l
       if len(l) <= 1:
                     return [l]
       r = []
       for i in range(len(l)):
                s = l[:i] + l[i+1:]
                p = perm(s)
                for x in p:
                 r.append(l[i:i+1] + x)
       return r

The following example shows various indentation errors::

    def perm(l):                       # error: first line indented
   for i in range(len(l)):             # error: not indented
       s = l[:i] + l[i+1:]
           p = perm(l[:i] + l[i+1:])   # error: unexpected indent
           for x in p:
                   r.append(l[i:i+1] + x)
               return r                # error: inconsistent dedent

(Actually, the first three errors are detected by the parser; only the last
error is found by the lexical analyzer --- the indentation of ``return r`` does
not match a level popped off the stack.)


.. _whitespace:

Whitespace between tokens
-------------------------

Except at the beginning of a logical line or in string literals, the whitespace
characters space, tab and formfeed can be used interchangeably to separate
tokens:

.. grammar-snippet::
   :group: python-grammar

   whitespace:  ' ' | tab | formfeed


Whitespace is needed between two tokens only if their concatenation
could otherwise be interpreted as a different token. For example, ``ab`` is one
token, but ``a b`` is two tokens. However, ``+a`` and ``+ a`` both produce
two tokens, ``+`` and ``a``, as ``+a`` is not a valid token.


.. _endmarker-token:

End marker
----------

At the end of non-interactive input, the lexical analyzer generates an
:data:`~token.ENDMARKER` token.


.. _other-tokens:

Other tokens
============

Besides :data:`~token.NEWLINE`, :data:`~token.INDENT` and :data:`~token.DEDENT`,
the following categories of tokens exist:
*identifiers* and *keywords* (:data:`~token.NAME`), *literals* (such as
:data:`~token.NUMBER` and :data:`~token.STRING`), and other symbols
(*operators* and *delimiters*, :data:`~token.OP`).
Whitespace characters (other than logical line terminators, discussed earlier)
are not tokens, but serve to delimit tokens.
Where ambiguity exists, a token comprises the longest possible string that
forms a legal token, when read from left to right.


.. _identifiers:

Names (identifiers and keywords)
================================

.. index:: identifier, name

:data:`~token.NAME` tokens represent *identifiers*, *keywords*, and
*soft keywords*.

Names are composed of the following characters:

* uppercase and lowercase letters (``A-Z`` and ``a-z``),
* the underscore (``_``),
* digits (``0`` through ``9``), which cannot appear as the first character, and
* non-ASCII characters. Valid names may only contain "letter-like" and
  "digit-like" characters; see :ref:`lexical-names-nonascii` for details.

Names must contain at least one character, but have no upper length limit.
Case is significant.

Formally, names are described by the following lexical definitions:

.. grammar-snippet::
   :group: python-grammar

   NAME:          `name_start` `name_continue`*
   name_start:    "a"..."z" | "A"..."Z" | "_" | <non-ASCII character>
   name_continue: name_start | "0"..."9"
   identifier:    <`NAME`, except keywords>

Note that not all names matched by this grammar are valid; see
:ref:`lexical-names-nonascii` for details.
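
At run time, whether a string forms a valid name can be checked with
:meth:`str.isidentifier` (note that it does not exclude keywords; see
:func:`keyword.iskeyword` for that)::

   >>> 'spam'.isidentifier()
   True
   >>> '蛇'.isidentifier()
   True
   >>> '🐍'.isidentifier()
   False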


.. _keywords:

Keywords
--------

.. index::
   single: keyword
   single: reserved word

The following names are used as reserved words, or *keywords* of the
language, and cannot be used as ordinary identifiers.  They must be spelled
exactly as written here:

.. sourcecode:: text

   False      await      else       import     pass
   None       break      except     in         raise
   True       class      finally    is         return
   and        continue   for        lambda     try
   as         def        from       nonlocal   while
   assert     del        global     not        with
   async      elif       if         or         yield
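
The same list is available at run time in the :mod:`keyword` module::

   >>> import keyword
   >>> keyword.iskeyword('pass')
   True
   >>> len(keyword.kwlist)
   35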


.. _soft-keywords:

Soft Keywords
-------------

.. index:: soft keyword, keyword

.. versionadded:: 3.10

Some names are only reserved under specific contexts. These are known as
*soft keywords*:

- ``match``, ``case``, and ``_``, when used in the :keyword:`match` statement.
- ``type``, when used in the :keyword:`type` statement.

In their specific contexts, these names act syntactically as keywords,
but the distinction is made at the parser level, not during tokenization.

Because they are reserved only in specific contexts, soft keywords can be
used in the grammar while preserving compatibility with existing code that
uses these names as identifiers.

.. versionchanged:: 3.12
   ``type`` is now a soft keyword.
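
The soft keywords are likewise listed at run time, in :data:`keyword.softkwlist`;
unlike regular keywords, they remain usable as ordinary names::

   >>> import keyword
   >>> keyword.softkwlist
   ['_', 'case', 'match', 'type']
   >>> match = 'still a valid identifier'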

.. index::
   single: _, identifiers
   single: __, identifiers
.. _id-classes:

Reserved classes of identifiers
-------------------------------

Certain classes of identifiers (besides keywords) have special meanings.  These
classes are identified by the patterns of leading and trailing underscore
characters:

``_*``
   Not imported by ``from module import *``.

``_``
   In a ``case`` pattern within a :keyword:`match` statement, ``_`` is a
   :ref:`soft keyword <soft-keywords>` that denotes a
   :ref:`wildcard <wildcard-patterns>`.

   Separately, the interactive interpreter makes the result of the last evaluation
   available in the variable ``_``.
   (It is stored in the :mod:`builtins` module, alongside built-in
   functions like ``print``.)

   Elsewhere, ``_`` is a regular identifier. It is often used to name
   "special" items, but it is not special to Python itself.

   .. note::

      The name ``_`` is often used in conjunction with internationalization;
      refer to the documentation for the :mod:`gettext` module for more
      information on this convention.

      It is also commonly used for unused variables.

``__*__``
   System-defined names, informally known as "dunder" names. These names are
   defined by the interpreter and its implementation (including the standard library).
   Current system names are discussed in the :ref:`specialnames` section and elsewhere.
   More will likely be defined in future versions of Python.  *Any* use of ``__*__`` names,
   in any context, that does not follow explicitly documented use, is subject to
   breakage without warning.

``__*``
   Class-private names.  Names in this category, when used within the context of a
   class definition, are re-written to use a mangled form to help avoid name
   clashes between "private" attributes of base and derived classes. See section
   :ref:`atom-identifiers`.
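
A minimal sketch of the mangling behavior::

   >>> class Spam:
   ...     def __init__(self):
   ...         self.__secret = 42    # stored under the mangled name _Spam__secret
   ...
   >>> vars(Spam())
   {'_Spam__secret': 42}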


.. _lexical-names-nonascii:

Non-ASCII characters in names
-----------------------------

Names that contain non-ASCII characters need additional normalization
and validation beyond the rules and grammar explained
:ref:`above <identifiers>`.
For example, ``ř_1``, ``蛇``, or ``साँप``  are valid names, but ``r〰2``,
``€``, or ``🐍`` are not.

This section explains the exact rules.

All names are converted into the `normalization form`_ NFKC while parsing.
This means that, for example, some typographic variants of characters are
converted to their "basic" form. For example, ``fiⁿₐˡᵢᶻₐᵗᵢᵒₙ`` normalizes to
``finalization``, so Python treats them as the same name::

   >>> fiⁿₐˡᵢᶻₐᵗᵢᵒₙ = 3
   >>> finalization
   3

.. note::

   Normalization is done at the lexical level only.
   Run-time functions that take names as *strings* generally do not normalize
   their arguments.
   For example, the variable defined above is accessible at run time in the
   :func:`globals` dictionary as ``globals()["finalization"]`` but not
   ``globals()["fiⁿₐˡᵢᶻₐᵗᵢᵒₙ"]``.

Similarly to how ASCII-only names must contain only letters, digits and
the underscore, and cannot start with a digit, a valid name must
start with a character in the "letter-like" set ``xid_start``,
and the remaining characters must be in the "letter- and digit-like" set
``xid_continue``.

These sets are based on the *XID_Start* and *XID_Continue* sets as defined by
the Unicode standard annex `UAX-31`_.
Python's ``xid_start`` additionally includes the underscore (``_``).
Note that Python does not necessarily conform to `UAX-31`_.

A non-normative listing of characters in the *XID_Start* and *XID_Continue*
sets as defined by Unicode is available in the `DerivedCoreProperties.txt`_
file in the Unicode Character Database.
For reference, the construction rules for the ``xid_*`` sets are given below.

The set ``id_start`` is defined as the union of:

* Unicode category ``<Lu>`` - uppercase letters (includes ``A`` to ``Z``)
* Unicode category ``<Ll>`` - lowercase letters (includes ``a`` to ``z``)
* Unicode category ``<Lt>`` - titlecase letters
* Unicode category ``<Lm>`` - modifier letters
* Unicode category ``<Lo>`` - other letters
* Unicode category ``<Nl>`` - letter numbers
* {``"_"``} - the underscore
* ``<Other_ID_Start>`` - an explicit set of characters in `PropList.txt`_
  to support backwards compatibility

The set ``xid_start`` then closes this set under NFKC normalization, by
removing all characters whose normalization is not of the form
``id_start id_continue*``.

The set ``id_continue`` is defined as the union of:

* ``id_start`` (see above)
* Unicode category ``<Nd>`` - decimal numbers (includes ``0`` to ``9``)
* Unicode category ``<Pc>`` - connector punctuations
* Unicode category ``<Mn>`` - nonspacing marks
* Unicode category ``<Mc>`` - spacing combining marks
* ``<Other_ID_Continue>`` - another explicit set of characters in
  `PropList.txt`_ to support backwards compatibility

Again, ``xid_continue`` closes this set under NFKC normalization.

Unicode categories use the version of the Unicode Character Database as
included in the :mod:`unicodedata` module.
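
The Unicode version used by a given interpreter is available as
:data:`unicodedata.unidata_version`; for this version of Python, it
corresponds to the Unicode 16.0.0 data files linked below::

   >>> import unicodedata
   >>> unicodedata.unidata_version
   '16.0.0'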

.. _UAX-31: https://www.unicode.org/reports/tr31/
.. _PropList.txt: https://www.unicode.org/Public/16.0.0/ucd/PropList.txt
.. _DerivedCoreProperties.txt: https://www.unicode.org/Public/16.0.0/ucd/DerivedCoreProperties.txt
.. _normalization form: https://www.unicode.org/reports/tr15/#Norm_Forms

.. seealso::

   * :pep:`3131` -- Supporting Non-ASCII Identifiers
   * :pep:`672` -- Unicode-related Security Considerations for Python


.. _literals:

Literals
========

.. index:: literal, constant

Literals are notations for constant values of some built-in types.

In terms of lexical analysis, Python has :ref:`string, bytes <strings>`
and :ref:`numeric <numbers>` literals.

Other "literals" are lexically denoted using :ref:`keywords <keywords>`
(``None``, ``True``, ``False``) and the special
:ref:`ellipsis token <lexical-ellipsis>` (``...``).


.. index:: string literal, bytes literal, ASCII
   single: ' (single quote); string literal
   single: " (double quote); string literal
.. _strings:

String and Bytes literals
=========================

String literals are text enclosed in single quotes (``'``) or double
quotes (``"``). For example:

.. code-block:: python

   "spam"
   'eggs'

The quote used to start the literal also terminates it, so a string literal
can only contain the other quote (except with escape sequences, see below).
For example:

.. code-block:: python

   'Say "Hello", please.'
   "Don't do that!"

Except for this limitation, the choice of quote character (``'`` or ``"``)
does not affect how the literal is parsed.

Inside a string literal, the backslash (``\``) character introduces an
:dfn:`escape sequence`, which has special meaning depending on the character
after the backslash.
For example, ``\"`` denotes the double quote character, and does *not* end
the string:

.. code-block:: pycon

   >>> print("Say \"Hello\" to everyone!")
   Say "Hello" to everyone!

See :ref:`escape sequences <escape-sequences>` below for a full list of such
sequences, and more details.


.. index:: triple-quoted string
   single: """; string literal
   single: '''; string literal

Triple-quoted strings
---------------------

Strings can also be enclosed in matching groups of three single or double
quotes.
These are generally referred to as :dfn:`triple-quoted strings`::

   """This is a triple-quoted string."""

In triple-quoted literals, unescaped quotes are allowed (and are
retained), except that three unescaped quotes in a row terminate the literal,
if they are of the same kind (``'`` or ``"``) used at the start::

   """This string has "quotes" inside."""

Unescaped newlines are also allowed and retained::

   '''This triple-quoted string
   continues on the next line.'''


.. index::
   single: u'; string literal
   single: u"; string literal

String prefixes
---------------

String literals can have an optional :dfn:`prefix` that influences how the
content of the literal is parsed, for example:

.. code-block:: python

   b"data"
   f'{result=}'

The allowed prefixes are:

* ``b``: :ref:`Bytes literal <bytes-literal>`
* ``r``: :ref:`Raw string <raw-strings>`
* ``f``: :ref:`Formatted string literal <f-strings>` ("f-string")
* ``t``: :ref:`Template string literal <t-strings>` ("t-string")
* ``u``: No effect (allowed for backwards compatibility)

See the linked sections for details on each type.

Prefixes are case-insensitive (for example, '``B``' works the same as '``b``').
The '``r``' prefix can be combined with '``f``', '``t``' or '``b``', so '``fr``',
'``rf``', '``tr``', '``rt``', '``br``', and '``rb``' are also valid prefixes.

.. versionadded:: 3.3
   The ``'rb'`` prefix of raw bytes literals has been added as a synonym
   of ``'br'``.

   Support for the unicode legacy literal (``u'value'``) was reintroduced
   to simplify the maintenance of dual Python 2.x and 3.x codebases.
   See :pep:`414` for more information.


Formal grammar
--------------

String literals, except :ref:`"f-strings" <f-strings>` and
:ref:`"t-strings" <t-strings>`, are described by the
following lexical definitions.

These definitions use :ref:`negative lookaheads <lexical-lookaheads>` (``!``)
to indicate that an ending quote ends the literal.

.. grammar-snippet::
   :group: python-grammar

   STRING:          [`stringprefix`] (`stringcontent`)
   stringprefix:    <("r" | "u" | "b" | "br" | "rb"), case-insensitive>
   stringcontent:
      | "'''" ( !"'''" `longstringitem`)* "'''"
      | '"""' ( !'"""' `longstringitem`)* '"""'
      | "'" ( !"'" `stringitem`)* "'"
      | '"' ( !'"' `stringitem`)* '"'
   stringitem:      `stringchar` | `stringescapeseq`
   stringchar:      <any `source_character`, except backslash and newline>
   longstringitem:  `stringitem` | newline
   stringescapeseq: "\" <any `source_character`>

Note that as in all lexical definitions, whitespace is significant.
In particular, the prefix (if any) must be immediately followed by the starting
quote.

.. index:: physical line, escape sequence, Standard C, C
   single: \ (backslash); escape sequence
   single: \\; escape sequence
   single: \a; escape sequence
   single: \b; escape sequence
   single: \f; escape sequence
   single: \n; escape sequence
   single: \r; escape sequence
   single: \t; escape sequence
   single: \v; escape sequence
   single: \x; escape sequence
   single: \N; escape sequence
   single: \u; escape sequence
   single: \U; escape sequence

.. _escape-sequences:

Escape sequences
----------------

Unless an '``r``' or '``R``' prefix is present, escape sequences in string and
bytes literals are interpreted according to rules similar to those used by
Standard C.  The recognized escape sequences are:

.. list-table::
   :widths: auto
   :header-rows: 1

   * * Escape Sequence
     * Meaning
   * * ``\``\ <newline>
     * :ref:`string-escape-ignore`
   * * ``\\``
     * :ref:`Backslash <string-escape-escaped-char>`
   * * ``\'``
     * :ref:`Single quote <string-escape-escaped-char>`
   * * ``\"``
     * :ref:`Double quote <string-escape-escaped-char>`
   * * ``\a``
     * ASCII Bell (BEL)
   * * ``\b``
     * ASCII Backspace (BS)
   * * ``\f``
     * ASCII Formfeed (FF)
   * * ``\n``
     * ASCII Linefeed (LF)
   * * ``\r``
     * ASCII Carriage Return (CR)
   * * ``\t``
     * ASCII Horizontal Tab (TAB)
   * * ``\v``
     * ASCII Vertical Tab (VT)
   * * :samp:`\\\\{ooo}`
     * :ref:`string-escape-oct`
   * * :samp:`\\x{hh}`
     * :ref:`string-escape-hex`
   * * :samp:`\\N\\{{name}\\}`
     * :ref:`string-escape-named`
   * * :samp:`\\u{xxxx}`
     * :ref:`Hexadecimal Unicode character <string-escape-long-hex>`
   * * :samp:`\\U{xxxxxxxx}`
     * :ref:`Hexadecimal Unicode character <string-escape-long-hex>`

.. _string-escape-ignore:

Ignored end of line
^^^^^^^^^^^^^^^^^^^

A backslash can be added at the end of a line to ignore the newline::

   >>> 'This string will not include \
   ... backslashes or newline characters.'
   'This string will not include backslashes or newline characters.'

The same result can be achieved using :ref:`triple-quoted strings <strings>`,
or parentheses and :ref:`string literal concatenation <string-concatenation>`.
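
For example, adjacent string literals inside parentheses are concatenated
into one::

   >>> ('This string will not include '
   ...  'backslashes or newline characters.')
   'This string will not include backslashes or newline characters.'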

.. _string-escape-escaped-char:

Escaped characters
^^^^^^^^^^^^^^^^^^

To include a backslash in a non-:ref:`raw <raw-strings>` Python string
literal, it must be doubled. The ``\\`` escape sequence denotes a single
backslash character::

   >>> print('C:\\Program Files')
   C:\Program Files

Similarly, the ``\'`` and ``\"`` sequences denote the single and double
quote character, respectively::

   >>> print('\' and \"')
   ' and "

.. _string-escape-oct:

Octal character
^^^^^^^^^^^^^^^

The sequence :samp:`\\\\{ooo}` denotes a *character* with the octal (base 8)
value *ooo*::

   >>> '\120'
   'P'

Up to three octal digits (0 through 7) are accepted.

In a bytes literal, *character* means a *byte* with the given value.
In a string literal, it means a Unicode character with the given value.

.. versionchanged:: 3.11
   Octal escapes with value larger than ``0o377`` (255) produce a
   :exc:`DeprecationWarning`.

.. versionchanged:: 3.12
   Octal escapes with value larger than ``0o377`` (255) produce a
   :exc:`SyntaxWarning`.
   In a future Python version they will raise a :exc:`SyntaxError`.

.. _string-escape-hex:

Hexadecimal character
^^^^^^^^^^^^^^^^^^^^^

The sequence :samp:`\\x{hh}` denotes a *character* with the hex (base 16)
value *hh*::

   >>> '\x50'
   'P'

Unlike in Standard C, exactly two hex digits are required.

In a bytes literal, *character* means a *byte* with the given value.
In a string literal, it means a Unicode character with the given value.

.. _string-escape-named:

Named Unicode character
^^^^^^^^^^^^^^^^^^^^^^^

The sequence :samp:`\\N\\{{name}\\}` denotes a Unicode character
with the given *name*::

   >>> '\N{LATIN CAPITAL LETTER P}'
   'P'
   >>> '\N{SNAKE}'
   '🐍'

This sequence cannot appear in :ref:`bytes literals <bytes-literal>`.

.. versionchanged:: 3.3
   Support for `name aliases <https://www.unicode.org/Public/16.0.0/ucd/NameAliases.txt>`__
   has been added.

.. _string-escape-long-hex:

Hexadecimal Unicode characters
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

These sequences :samp:`\\u{xxxx}` and :samp:`\\U{xxxxxxxx}` denote the
Unicode character with the given hex (base 16) value.
Exactly four digits are required for ``\u``; exactly eight digits are
required for ``\U``.
The latter can encode any Unicode character.

.. code-block:: pycon

   >>> '\u1234'
   'ሴ'
   >>> '\U0001f40d'
   '🐍'

These sequences cannot appear in :ref:`bytes literals <bytes-literal>`.


.. index:: unrecognized escape sequence

Unrecognized escape sequences
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Unlike in Standard C, all unrecognized escape sequences are left in the string
unchanged, that is, *the backslash is left in the result*::

   >>> print('\q')
   \q
   >>> list('\q')
   ['\\', 'q']

Note that for bytes literals, the escape sequences that are only recognized in
string literals (``\N...``, ``\u...``, ``\U...``) fall into the category of
unrecognized escapes.

.. versionchanged:: 3.6
   Unrecognized escape sequences produce a :exc:`DeprecationWarning`.

.. versionchanged:: 3.12
   Unrecognized escape sequences produce a :exc:`SyntaxWarning`.
   In a future Python version they will raise a :exc:`SyntaxError`.


.. index::
   single: b'; bytes literal
   single: b"; bytes literal


.. _bytes-literal:

Bytes literals
--------------

:dfn:`Bytes literals` are always prefixed with '``b``' or '``B``'; they produce an
instance of the :class:`bytes` type instead of the :class:`str` type.
They may only contain ASCII characters; bytes with a numeric value of 128
or greater must be expressed with escape sequences (typically
:ref:`string-escape-hex` or :ref:`string-escape-oct`):

.. code-block:: pycon

   >>> b'\x89PNG\r\n\x1a\n'
   b'\x89PNG\r\n\x1a\n'
   >>> list(b'\x89PNG\r\n\x1a\n')
   [137, 80, 78, 71, 13, 10, 26, 10]

Similarly, a zero byte must be expressed using an escape sequence (typically
``\0`` or ``\x00``).


.. index::
   single: r'; raw string literal
   single: r"; raw string literal

.. _raw-strings:

Raw string literals
-------------------

Both string and bytes literals may optionally be prefixed with a letter '``r``'
or '``R``'; such constructs are called :dfn:`raw string literals`
and :dfn:`raw bytes literals` respectively and treat backslashes as
literal characters.
As a result, in raw string literals, :ref:`escape sequences <escape-sequences>`
are not treated specially:

.. code-block:: pycon

   >>> r'\d{4}-\d{2}-\d{2}'
   '\\d{4}-\\d{2}-\\d{2}'

Even in a raw literal, quotes can be escaped with a backslash, but the
backslash remains in the result; for example, ``r"\""`` is a valid string
literal consisting of two characters: a backslash and a double quote; ``r"\"``
is not a valid string literal (even a raw string cannot end in an odd number of
backslashes).  Specifically, *a raw literal cannot end in a single backslash*
(since the backslash would escape the following quote character).  Note also
that a single backslash followed by a newline is interpreted as those two
characters as part of the literal, *not* as a line continuation.
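
For example::

   >>> r"\""
   '\\"'
   >>> len(r"\"")
   2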


.. index::
   single: formatted string literal
   single: interpolated string literal
   single: string; formatted literal
   single: string; interpolated literal
   single: f-string
   single: fstring
   single: f'; formatted string literal
   single: f"; formatted string literal
   single: {} (curly brackets); in formatted string literal
   single: ! (exclamation); in formatted string literal
   single: : (colon); in formatted string literal
   single: = (equals); for help in debugging using string literals

.. _f-strings:
.. _formatted-string-literals:

f-strings
---------

.. versionadded:: 3.6
.. versionchanged:: 3.7
   The :keyword:`await` and :keyword:`async for` can be used in expressions
   within f-strings.
.. versionchanged:: 3.8
   Added the debug specifier (``=``)
.. versionchanged:: 3.12
   Many restrictions on expressions within f-strings have been removed.
   Notably, nested strings, comments, and backslashes are now permitted.

A :dfn:`formatted string literal` or :dfn:`f-string` is a string literal
that is prefixed with '``f``' or '``F``'.
Unlike other string literals, f-strings do not have a constant value.
They may contain *replacement fields* delimited by curly braces ``{}``.
Replacement fields contain expressions which are evaluated at run time.
For example::

   >>> who = 'nobody'
   >>> nationality = 'Spanish'
   >>> f'{who.title()} expects the {nationality} Inquisition!'
   'Nobody expects the Spanish Inquisition!'

Any doubled curly braces (``{{`` or ``}}``) outside replacement fields
are replaced with the corresponding single curly brace::

   >>> print(f'{{...}}')
   {...}

Other characters outside replacement fields are treated like in ordinary
string literals.
This means that escape sequences are decoded (except when a literal is
also marked as a raw string), and newlines are possible in triple-quoted
f-strings::

   >>> name = 'Galahad'
   >>> favorite_color = 'blue'
   >>> print(f'{name}:\t{favorite_color}')
   Galahad:       blue
   >>> print(rf"C:\Users\{name}")
   C:\Users\Galahad
   >>> print(f'''Three shall be the number of the counting
   ... and the number of the counting shall be three.''')
   Three shall be the number of the counting
   and the number of the counting shall be three.

Expressions in formatted string literals are treated like regular
Python expressions.
Each expression is evaluated in the context where the formatted string literal
appears, in order from left to right.
An empty expression is not allowed, and both :keyword:`lambda` and
assignment expressions ``:=`` must be surrounded by explicit parentheses::

   >>> f'{(half := 1/2)}, {half * 42}'
   '0.5, 21.0'

Reusing the outer f-string quoting type inside a replacement field is
permitted::

   >>> a = dict(x=2)
   >>> f"abc {a["x"]} def"
   'abc 2 def'

Backslashes are also allowed in replacement fields and are evaluated the same
way as in any other context::

   >>> a = ["a", "b", "c"]
   >>> print(f"List a contains:\n{"\n".join(a)}")
   List a contains:
   a
   b
   c

It is possible to nest f-strings::

   >>> name = 'world'
   >>> f'Repeated:{f' hello {name}' * 3}'
   'Repeated: hello world hello world hello world'

Portable Python programs should not use more than 5 levels of nesting.

.. impl-detail::

   CPython does not limit nesting of f-strings.

Replacement expressions can contain newlines in both single-quoted and
triple-quoted f-strings and they can contain comments.
Everything that comes after a ``#`` inside a replacement field
is a comment (even closing braces and quotes).
This means that replacement fields with comments must be closed on a
different line:

.. code-block:: text

   >>> a = 2
   >>> f"abc{a  # This comment  }"  continues until the end of the line
   ...       + 3}"
   'abc5'

After the expression, replacement fields may optionally contain:

* a *debug specifier* -- an equal sign (``=``), optionally surrounded by
  whitespace on one or both sides;
* a *conversion specifier* -- ``!s``, ``!r`` or ``!a``; and/or
* a *format specifier* prefixed with a colon (``:``).

See the :ref:`Standard Library section on f-strings <stdtypes-fstrings>`
for details on how these fields are evaluated.
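
A short illustration of the debug, conversion and format specifiers
(see the linked section for the exact semantics)::

   >>> value = 12.5
   >>> f'{value=}'
   'value=12.5'
   >>> f'{value = }'   # whitespace around '=' is preserved
   'value = 12.5'
   >>> f'{value!r:>10}'
   '      12.5'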

As that section explains, *format specifiers* are passed as the second argument
to the :func:`format` function to format a replacement field value.
For example, they can be used to specify a field width and padding characters
using the :ref:`Format Specification Mini-Language <formatspec>`::

   >>> number = 14.3
   >>> f'{number:20.7f}'
   '          14.3000000'

Top-level format specifiers may include nested replacement fields::

   >>> field_size = 20
   >>> precision = 7
   >>> f'{number:{field_size}.{precision}f}'
   '          14.3000000'

These nested fields may include their own conversion fields and
:ref:`format specifiers <formatspec>`::

   >>> number = 3
   >>> f'{number:{field_size}}'
   '                   3'
   >>> f'{number:{field_size:05}}'
   '00000000000000000003'

However, these nested fields may not include more deeply nested replacement
fields.

Formatted string literals cannot be used as :term:`docstrings <docstring>`,
even if they do not include expressions::

   >>> def foo():
   ...     f"Not a docstring"
   ...
   >>> print(foo.__doc__)
   None

.. seealso::

   * :pep:`498` -- Literal String Interpolation
   * :pep:`701` -- Syntactic formalization of f-strings
   * :meth:`str.format`, which uses a related format string mechanism.


.. _t-strings:
.. _template-string-literals:

t-strings
---------

.. versionadded:: 3.14

A :dfn:`template string literal` or :dfn:`t-string` is a string literal
that is prefixed with '``t``' or '``T``'.
These strings follow the same syntax rules as
:ref:`formatted string literals <f-strings>`.
For differences in evaluation rules, see the
:ref:`Standard Library section on t-strings <stdtypes-tstrings>`.
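
For example, evaluating a t-string does not produce a :class:`str`, but a
:class:`~string.templatelib.Template` object that keeps the static string
parts and the interpolations separate::

   >>> name = 'World'
   >>> type(t'Hello {name}!')
   <class 'string.templatelib.Template'>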


Formal grammar for f-strings
----------------------------

F-strings are handled partly by the :term:`lexical analyzer`, which produces the
tokens :py:data:`~token.FSTRING_START`, :py:data:`~token.FSTRING_MIDDLE`
and :py:data:`~token.FSTRING_END`, and partly by the parser, which handles
expressions in the replacement field.
The exact way the work is split is a CPython implementation detail.

Correspondingly, the f-string grammar is a mix of
:ref:`lexical and syntactic definitions <notation-lexical-vs-syntactic>`.

Whitespace is significant in these situations:

* There may be no whitespace in :py:data:`~token.FSTRING_START` (between
  the prefix and quote).
* Whitespace in :py:data:`~token.FSTRING_MIDDLE` is part of the literal
  string contents.
* In ``fstring_replacement_field``, if ``f_debug_specifier`` is present,
  all whitespace after the opening brace until the ``f_debug_specifier``,
  as well as whitespace immediately following ``f_debug_specifier``,
  is retained as part of the expression.

  .. impl-detail::

     The expression is not handled in the tokenization phase; it is
     retrieved from the source code using locations of the ``{`` token
     and the token after ``=``.


The ``FSTRING_MIDDLE`` definition uses
:ref:`negative lookaheads <lexical-lookaheads>` (``!``)
to indicate special characters (backslash, newline, ``{``, ``}``) and
sequences (``f_quote``).

.. grammar-snippet::
   :group: python-grammar

   fstring:    `FSTRING_START` `fstring_middle`* `FSTRING_END`

   FSTRING_START:      `fstringprefix` ("'" | '"' | "'''" | '"""')
   FSTRING_END:        `f_quote`
   fstringprefix:      <("f" | "fr" | "rf"), case-insensitive>
   f_debug_specifier:  '='
   f_quote:            <the quote character(s) used in FSTRING_START>

   fstring_middle:
      | `fstring_replacement_field`
      | `FSTRING_MIDDLE`
   FSTRING_MIDDLE:
      | (!"\" !`newline` !'{' !'}' !`f_quote`) `source_character`
      | `stringescapeseq`
      | "{{"
      | "}}"
      | <newline, in triple-quoted f-strings only>
   fstring_replacement_field:
      | '{' `f_expression` [`f_debug_specifier`] [`fstring_conversion`]
            [`fstring_full_format_spec`] '}'
   fstring_conversion:
      | "!" ("s" | "r" | "a")
   fstring_full_format_spec:
      | ':' `fstring_format_spec`*
   fstring_format_spec:
      | `FSTRING_MIDDLE`
      | `fstring_replacement_field`
   f_expression:
      | ','.(`conditional_expression` | "*" `or_expr`)+ [","]
      | `yield_expression`

.. note::

   In the above grammar snippet, the ``f_quote`` and ``FSTRING_MIDDLE`` rules
   are context-sensitive -- they depend on the contents of ``FSTRING_START``
   of the nearest enclosing ``fstring``.

   Constructing a more traditional formal grammar from this template is left
   as an exercise for the reader.

The grammar for t-strings is identical to the one for f-strings, with *t*
instead of *f* at the beginning of rule and token names and in the prefix.

.. grammar-snippet::
   :group: python-grammar

   tstring:    TSTRING_START tstring_middle* TSTRING_END

   <rest of the t-string grammar is omitted; see above>


.. _numbers:

Numeric literals
================

.. index:: number, numeric literal, integer literal
   floating-point literal, hexadecimal literal
   octal literal, binary literal, decimal literal, imaginary literal, complex literal

:data:`~token.NUMBER` tokens represent numeric literals, of which there are
three types: integers, floating-point numbers, and imaginary numbers.

.. grammar-snippet::
   :group: python-grammar

   NUMBER: `integer` | `floatnumber` | `imagnumber`

The numeric value of a numeric literal is the same as if it were passed as a
string to the :class:`int`, :class:`float` or :class:`complex` class
constructor, respectively.
Note that not all valid inputs for those constructors are also valid literals.
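
For example, :class:`float` accepts some strings that are not valid
literals::

   >>> float('inf')   # a valid constructor argument...
   inf
   >>> inf            # ...but not a valid numeric literal: 'inf' is a name
   Traceback (most recent call last):
     ...
   NameError: name 'inf' is not defined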

Numeric literals do not include a sign; a phrase like ``-1`` is
actually an expression composed of the unary operator '``-``' and the literal
``1``.


.. index::
   single: 0b; integer literal
   single: 0o; integer literal
   single: 0x; integer literal
   single: _ (underscore); in numeric literal

.. _integers:

Integer literals
----------------

Integer literals denote whole numbers. For example::

   7
   3
   2147483647

There is no limit for the length of integer literals apart from what can be
stored in available memory::

   7922816251426433759354395033679228162514264337593543950336

Underscores can be used to group digits for enhanced readability,
and are ignored for determining the numeric value of the literal.
For example, the following literals are equivalent::

   100_000_000_000
   100000000000
   1_00_00_00_00_000

Underscores can only occur between digits.
For example, ``_123``, ``321_``, and ``123__321`` are *not* valid literals.

Integers can be specified in binary (base 2), octal (base 8), or hexadecimal
(base 16) using the prefixes ``0b``, ``0o`` and ``0x``, respectively.
Hexadecimal digits 10 through 15 are represented by letters ``A``-``F``,
case-insensitive.  For example::

   0b100110111
   0b_1110_0101
   0o177
   0o377
   0xdeadbeef
   0xDead_Beef

An underscore can follow the base specifier.
For example, ``0x_1f`` is a valid literal, but ``0_x1f`` and ``0x__1f`` are
not.

Leading zeros in a non-zero decimal number are not allowed.
For example, ``0123`` is not a valid literal.
This is for disambiguation with C-style octal literals, which Python used
before version 3.0.

Formally, integer literals are described by the following lexical definitions:

.. grammar-snippet::
   :group: python-grammar

   integer:      `decinteger` | `bininteger` | `octinteger` | `hexinteger` | `zerointeger`
   decinteger:   `nonzerodigit` (["_"] `digit`)*
   bininteger:   "0" ("b" | "B") (["_"] `bindigit`)+
   octinteger:   "0" ("o" | "O") (["_"] `octdigit`)+
   hexinteger:   "0" ("x" | "X") (["_"] `hexdigit`)+
   zerointeger:  "0"+ (["_"] "0")*
   nonzerodigit: "1"..."9"
   digit:        "0"..."9"
   bindigit:     "0" | "1"
   octdigit:     "0"..."7"
   hexdigit:     `digit` | "a"..."f" | "A"..."F"

.. versionchanged:: 3.6
   Underscores are now allowed for grouping purposes in literals.


.. index::
   single: . (dot); in numeric literal
   single: e; in numeric literal
   single: _ (underscore); in numeric literal
.. _floating:

Floating-point literals
-----------------------

Floating-point (float) literals, such as ``3.14`` or ``1.5``, denote
:ref:`approximations of real numbers <datamodel-float>`.

They consist of *integer* and *fraction* parts, each composed of decimal digits.
The parts are separated by a decimal point, ``.``::

   2.71828
   4.0

Unlike in integer literals, leading zeros are allowed.
For example, ``077.010`` is legal, and denotes the same number as ``77.01``.

As in integer literals, single underscores may occur between digits to help
readability::

   96_485.332_123
   3.14_15_93

Either of these parts, but not both, can be empty. For example::

   10.  # (equivalent to 10.0)
   .001  # (equivalent to 0.001)

Optionally, the integer and fraction may be followed by an *exponent*:
the letter ``e`` or ``E``, followed by an optional sign, ``+`` or ``-``,
and a number in the same format as the integer and fraction parts.
The ``e`` or ``E`` represents "times ten raised to the power of"::

   1.0e3  # (represents 1.0×10³, or 1000.0)
   1.166e-5  # (represents 1.166×10⁻⁵, or 0.00001166)
   6.02214076e+23  # (represents 6.02214076×10²³, or 602214076000000000000000.)

In floats with only integer and exponent parts, the decimal point may be
omitted::

   1e3  # (equivalent to 1.e3 and 1.0e3)
   0e0  # (equivalent to 0.)

Formally, floating-point literals are described by the following
lexical definitions:

.. grammar-snippet::
   :group: python-grammar

   floatnumber:
      | `digitpart` "." [`digitpart`] [`exponent`]
      | "." `digitpart` [`exponent`]
      | `digitpart` `exponent`
   digitpart: `digit` (["_"] `digit`)*
   exponent:  ("e" | "E") ["+" | "-"] `digitpart`

.. versionchanged:: 3.6
   Underscores are now allowed for grouping purposes in literals.


.. index::
   single: j; in numeric literal
.. _imaginary:

Imaginary literals
------------------

Python has :ref:`complex number <typesnumeric>` objects, but no complex
literals.
Instead, *imaginary literals* denote complex numbers with a zero
real part.
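
An imaginary literal evaluates to a :class:`complex` object whose real part
is zero::

   >>> type(4.2j)
   <class 'complex'>
   >>> (4.2j).real, (4.2j).imag
   (0.0, 4.2)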

For example, in math, the complex number 3+4.2\ *i* is written
as the real number 3 added to the imaginary number 4.2\ *i*.
Python uses a similar syntax, except the imaginary unit is written as ``j``
rather than *i*::

   3+4.2j

This is an expression composed
of the :ref:`integer literal <integers>` ``3``,
the :ref:`operator <operators>` '``+``',
and the :ref:`imaginary literal <imaginary>` ``4.2j``.
Since these are three separate tokens, whitespace is allowed between them::

   3 + 4.2j

No whitespace is allowed *within* each token.
In particular, the ``j`` suffix may not be separated from the number
before it.

The number before the ``j`` has the same syntax as a floating-point literal.
Thus, the following are valid imaginary literals::

   4.2j
   3.14j
   10.j
   .001j
   1e100j
   3.14e-10j
   3.14_15_93j

Unlike in a floating-point literal, the decimal point can be omitted if the
imaginary number only has an integer part.
The number is still evaluated as a floating-point number, not an integer::

   10j
   0j
   1000000000000000000000000j   # equivalent to 1e+24j

The ``j`` suffix is case-insensitive.
That means you can use ``J`` instead::

   3.14J   # equivalent to 3.14j

Formally, imaginary literals are described by the following lexical definition:

.. grammar-snippet::
   :group: python-grammar

   imagnumber: (`floatnumber` | `digitpart`) ("j" | "J")


.. _delimiters:
.. _operators:
.. _lexical-ellipsis:

Operators and delimiters
========================

.. index::
   single: operators
   single: delimiters

The following grammar defines :dfn:`operator` and :dfn:`delimiter` tokens,
that is, the generic :data:`~token.OP` token type.
A :ref:`list of these tokens and their names <token_operators_delimiters>`
is also available in the :mod:`!token` module documentation.

.. grammar-snippet::
   :group: python-grammar

   OP:
      | assignment_operator
      | bitwise_operator
      | comparison_operator
      | enclosing_delimiter
      | other_delimiter
      | arithmetic_operator
      | "..."
      | other_op

   assignment_operator:   "+=" | "-=" | "*=" | "**=" | "/="  | "//=" | "%=" |
                          "&=" | "|=" | "^=" | "<<=" | ">>=" | "@="  | ":="
   bitwise_operator:      "&"  | "|"  | "^"  | "~"   | "<<"  | ">>"
   comparison_operator:   "<=" | ">=" | "<"  | ">"   | "=="  | "!="
   enclosing_delimiter:   "("  | ")"  | "["  | "]"   | "{"   | "}"
   other_delimiter:       ","  | ":"  | "!"  | ";"   | "="   | "->"
   arithmetic_operator:   "+"  | "-"  | "**" | "*"   | "//"  | "/"   | "%"
   other_op:              "."  | "@"

.. note::

   Generally, *operators* are used to combine :ref:`expressions <expressions>`,
   while *delimiters* serve other purposes.
   However, there is no clear, formal distinction between the two categories.

   Some tokens can serve as either operators or delimiters, depending on usage.
   For example, ``*`` is both the multiplication operator and a delimiter used
   for sequence unpacking, and ``@`` is both the matrix multiplication and
   a delimiter that introduces decorators.

   For some tokens, the distinction is unclear.
   For example, some people consider ``.``, ``(``, and ``)`` to be delimiters,
   while others see them as the :py:func:`getattr` operator and the function
   call operator(s).

   Some of Python's operators, like ``and``, ``or``, and ``not in``, use
   :ref:`keyword <keywords>` tokens rather than "symbols" (operator tokens).

A sequence of three consecutive periods (``...``) has a special
meaning as an :py:data:`Ellipsis` literal.
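
In a REPL session, this can be checked directly::

   >>> ...
   Ellipsis
   >>> ... is Ellipsis
   True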