File: mlx5.rst

..  SPDX-License-Identifier: BSD-3-Clause
    Copyright 2015 6WIND S.A.
    Copyright 2015 Mellanox Technologies, Ltd

.. include:: <isonum.txt>

NVIDIA MLX5 Ethernet Driver
===========================

.. note::

   NVIDIA acquired Mellanox Technologies in 2020.
   The DPDK documentation and code might still include instances
   of or references to Mellanox trademarks (like BlueField and ConnectX)
   that are now NVIDIA trademarks.

The mlx5 Ethernet poll mode driver library (**librte_net_mlx5**) provides support
for **NVIDIA ConnectX-4**, **NVIDIA ConnectX-4 Lx**, **NVIDIA ConnectX-5**,
**NVIDIA ConnectX-6**, **NVIDIA ConnectX-6 Dx**, **NVIDIA ConnectX-6 Lx**,
**NVIDIA ConnectX-7**, **NVIDIA BlueField**, **NVIDIA BlueField-2** and
**NVIDIA BlueField-3** families of 10/25/40/50/100/200/400 Gb/s adapters
as well as their virtual functions (VF) in SR-IOV context.

Supported NICs
--------------

The following NVIDIA device families are supported by the same mlx5 driver:

  - ConnectX-4
  - ConnectX-4 Lx
  - ConnectX-5
  - ConnectX-5 Ex
  - ConnectX-6
  - ConnectX-6 Dx
  - ConnectX-6 Lx
  - ConnectX-7
  - BlueField
  - BlueField-2
  - BlueField-3

Below are detailed device names:

* NVIDIA\ |reg| ConnectX\ |reg|-4 10G MCX4111A-XCAT (1x10G)
* NVIDIA\ |reg| ConnectX\ |reg|-4 10G MCX412A-XCAT (2x10G)
* NVIDIA\ |reg| ConnectX\ |reg|-4 25G MCX4111A-ACAT (1x25G)
* NVIDIA\ |reg| ConnectX\ |reg|-4 25G MCX412A-ACAT (2x25G)
* NVIDIA\ |reg| ConnectX\ |reg|-4 40G MCX413A-BCAT (1x40G)
* NVIDIA\ |reg| ConnectX\ |reg|-4 40G MCX4131A-BCAT (1x40G)
* NVIDIA\ |reg| ConnectX\ |reg|-4 40G MCX415A-BCAT (1x40G)
* NVIDIA\ |reg| ConnectX\ |reg|-4 50G MCX413A-GCAT (1x50G)
* NVIDIA\ |reg| ConnectX\ |reg|-4 50G MCX4131A-GCAT (1x50G)
* NVIDIA\ |reg| ConnectX\ |reg|-4 50G MCX414A-BCAT (2x50G)
* NVIDIA\ |reg| ConnectX\ |reg|-4 50G MCX415A-GCAT (1x50G)
* NVIDIA\ |reg| ConnectX\ |reg|-4 50G MCX416A-BCAT (2x50G)
* NVIDIA\ |reg| ConnectX\ |reg|-4 50G MCX416A-GCAT (2x50G)
* NVIDIA\ |reg| ConnectX\ |reg|-4 50G MCX415A-CCAT (1x100G)
* NVIDIA\ |reg| ConnectX\ |reg|-4 100G MCX416A-CCAT (2x100G)
* NVIDIA\ |reg| ConnectX\ |reg|-4 Lx 10G MCX4111A-XCAT (1x10G)
* NVIDIA\ |reg| ConnectX\ |reg|-4 Lx 10G MCX4121A-XCAT (2x10G)
* NVIDIA\ |reg| ConnectX\ |reg|-4 Lx 25G MCX4111A-ACAT (1x25G)
* NVIDIA\ |reg| ConnectX\ |reg|-4 Lx 25G MCX4121A-ACAT (2x25G)
* NVIDIA\ |reg| ConnectX\ |reg|-4 Lx 40G MCX4131A-BCAT (1x40G)
* NVIDIA\ |reg| ConnectX\ |reg|-5 100G MCX556A-ECAT (2x100G)
* NVIDIA\ |reg| ConnectX\ |reg|-5 Ex EN 100G MCX516A-CDAT (2x100G)
* NVIDIA\ |reg| ConnectX\ |reg|-6 200G MCX654106A-HCAT (2x200G)
* NVIDIA\ |reg| ConnectX\ |reg|-6 Dx EN 100G MCX623106AN-CDAT (2x100G)
* NVIDIA\ |reg| ConnectX\ |reg|-6 Dx EN 200G MCX623105AN-VDAT (1x200G)
* NVIDIA\ |reg| ConnectX\ |reg|-6 Lx EN 25G MCX631102AN-ADAT (2x25G)
* NVIDIA\ |reg| ConnectX\ |reg|-7 200G CX713106AE-HEA_QP1_Ax (2x200G)
* NVIDIA\ |reg| BlueField\ |reg|-2 25G MBF2H332A-AEEOT_A1 (2x25G)
* NVIDIA\ |reg| BlueField\ |reg|-3 200GbE 900-9D3B6-00CV-AA0 (2x200G)
* NVIDIA\ |reg| BlueField\ |reg|-3 200GbE 900-9D3B6-00SV-AA0 (2x200G)
* NVIDIA\ |reg| BlueField\ |reg|-3 400GbE 900-9D3B6-00CN-AB0 (2x400G)
* NVIDIA\ |reg| BlueField\ |reg|-3 100GbE 900-9D3B4-00CC-EA0 (2x100G)
* NVIDIA\ |reg| BlueField\ |reg|-3 100GbE 900-9D3B4-00SC-EA0 (2x100G)
* NVIDIA\ |reg| BlueField\ |reg|-3 400GbE 900-9D3B4-00EN-EA0 (1x100G)


Design
------

Besides its dependency on libibverbs (that implies libmlx5 and associated
kernel support), librte_net_mlx5 relies heavily on system calls for control
operations such as querying/updating the MTU and flow control parameters.

This capability allows the PMD to coexist with kernel network interfaces
which remain functional, although they stop receiving unicast packets as
long as they share the same MAC address.
This means legacy Linux control tools (for example: ethtool, ifconfig and
more) can operate on the same network interfaces that are owned by the DPDK
application.

See :doc:`../../platform/mlx5` guide for more design details,
including prerequisites installation.

Features
--------

- Multi arch support: x86_64, POWER8, ARMv8, i686.
- Multiple TX and RX queues.
- Shared Rx queue.
- Rx queue delay drop.
- Rx queue available descriptor threshold event.
- Host shaper support.
- Support steering for external Rx queue created outside the PMD.
- Support for scattered TX frames.
- Advanced support for scattered Rx frames with tunable buffer attributes.
- IPv4, IPv6, TCPv4, TCPv6, UDPv4 and UDPv6 RSS on any number of queues.
- RSS using different combinations of fields: L3 only, L4 only or both,
  and source only, destination only or both.
- Several RSS hash keys, one for each flow type.
- Default RSS operation with no hash key specification.
- Symmetric RSS function.
- Configurable RETA table.
- Link flow control (pause frame).
- Support for multiple MAC addresses.
- VLAN filtering.
- RX VLAN stripping.
- TX VLAN insertion.
- RX CRC stripping configuration.
- TX mbuf fast free offload.
- Promiscuous mode on PF and VF.
- Multicast promiscuous mode on PF and VF.
- Hardware checksum offloads.
- Flow director (RTE_FDIR_MODE_PERFECT, RTE_FDIR_MODE_PERFECT_MAC_VLAN and
  RTE_ETH_FDIR_REJECT).
- Flow API, including :ref:`flow_isolated_mode`.
- Multiple process.
- KVM and VMware ESX SR-IOV modes are supported.
- RSS hash result is supported.
- Hardware TSO for generic IP or UDP tunnel, including VXLAN and GRE.
- Hardware checksum Tx offload for generic IP or UDP tunnel, including VXLAN and GRE.
- RX interrupts.
- Statistics query including Basic, Extended and per queue.
- Rx HW timestamp.
- Tunnel types: VXLAN, L3 VXLAN, VXLAN-GPE, GRE, MPLSoGRE, MPLSoUDP, IP-in-IP, Geneve, GTP.
- Tunnel HW offloads: packet type, inner/outer RSS, IP and UDP checksum verification.
- NIC HW offloads: encapsulation (vxlan, gre, mplsoudp, mplsogre), NAT, routing, TTL
  increment/decrement, count, drop, mark. For details please see :ref:`mlx5_offloads_support`.
- Flow insertion rate of more than a million flows per second, when using Direct Rules.
- Support for multiple rte_flow groups.
- Per packet no-inline hint flag to disable packet data copying into Tx descriptors.
- Hardware LRO.
- Hairpin.
- Multiple-thread flow insertion.
- Matching on IPv4 Internet Header Length (IHL).
- Matching on IPv6 routing extension header.
- Matching on GTP extension header with raw encap/decap action.
- Matching on Geneve TLV option header with raw encap/decap action.
- Matching on ESP header SPI field.
- Matching on InfiniBand BTH.
- Matching on random value.
- Modify IPv4/IPv6 ECN field.
- Push or remove IPv6 routing extension.
- NAT64.
- RSS support in sample action.
- E-Switch mirroring and jump.
- E-Switch mirroring and modify.
- Send to kernel.
- 21844 flow priorities for ingress or egress flow groups greater than 0 and for any transfer
  flow group.
- Flow quota.
- Flow metering, including meter policy API.
- Flow meter hierarchy.
- Flow meter mark.
- Flow integrity offload API.
- Connection tracking.
- Sub-Function representors.
- Sub-Function.
- Matching on represented port.
- Matching on aggregated affinity.
- Matching on external Tx queue.
- Matching on E-Switch manager.


Limitations
-----------

- Windows support:

  On Windows, the features are limited:

  - Promiscuous mode is not supported
  - The following rules are supported:

    - IPv4/UDP with CVLAN filtering
    - Unicast MAC filtering

  - Additional rules are supported from WinOF2 version 2.70:

    - IPv4/TCP with CVLAN filtering
    - L4 steering rules for port RSS of UDP, TCP and IP

- PCI Virtual Function MTU:

  MTU settings on PCI Virtual Functions have no effect.
  The maximum receivable packet size for a VF is determined by the MTU
  configured on its associated Physical Function.
  DPDK applications using VFs must be prepared to handle packets
  up to the maximum size of this PF port.

- For secondary process:

  - Forked secondary process not supported.
  - MPRQ is not supported. Callback to free externally attached MPRQ buffer is set
    in a primary process, but has a different virtual address in a secondary process.
    Calling a function at the wrong address leads to a segmentation fault.
  - External memory unregistered in EAL memseg list cannot be used for DMA
    unless such memory has been registered by ``mlx5_mr_update_ext_mp()`` in
    primary process and remapped to the same virtual address in secondary
    process. If the external memory is registered by primary process but has
    different virtual address in secondary process, unexpected error may happen.

- Shared Rx queue:

  - Counters of received packets and bytes are identical for all devices in the same share group.
  - Counters of received packets and bytes are identical for all queues in the same group with the same queue ID.

- Available descriptor threshold event:

  - Does not support shared Rx queue and hairpin Rx queue.

- The symmetric RSS function is supported by swapping source and destination
  addresses and ports.

- Host shaper:

  - Supported on BlueField series NICs starting from BlueField-2.
  - When configuring the host shaper with the ``RTE_PMD_MLX5_HOST_SHAPER_FLAG_AVAIL_THRESH_TRIGGERED`` flag,
    only rates 0 and 100Mbps are supported (see the configuration sketch below).
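
  The host shaper is configured through a PMD-specific API declared in
  ``rte_pmd_mlx5.h``. A minimal sketch, assuming the rate is expressed in units
  of 100Mbps and that the flag constant is a bit index wrapped with
  ``RTE_BIT32()`` (to be verified against the installed headers)::

      #include <rte_bitops.h>
      #include <rte_pmd_mlx5.h>

      /* Lower the host shaper automatically on available descriptor
       * threshold events, with no fixed rate configured.
       */
      int ret = rte_pmd_mlx5_host_shaper_config(port_id, 0,
              RTE_BIT32(RTE_PMD_MLX5_HOST_SHAPER_FLAG_AVAIL_THRESH_TRIGGERED));

      /* Alternatively, set a fixed host shaper rate of 100Mbps (rate unit is 100Mbps). */
      ret = rte_pmd_mlx5_host_shaper_config(port_id, 1, 0);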

- HW steering:

  - WQE based high scaling and safer flow insertion/destruction.
  - Set ``dv_flow_en`` to 2 in order to enable HW steering.
  - Async queue-based ``rte_flow_async`` APIs supported only.
  - NIC ConnectX-5 and before are not supported.
  - Reconfiguring flow API engine is not supported.
    Any subsequent call to ``rte_flow_configure()`` with different configuration
    than initially provided will be rejected with ``-ENOTSUP`` error code.
  - Partial match with item template is not supported.
  - IPv6 5-tuple matching is not supported.
  - With E-Switch enabled, ports which share the E-Switch domain
    should be started and stopped in a specific order:

    - When starting ports, the transfer proxy port should be started first
      and port representors should follow.
    - When stopping ports, all of the port representors
      should be stopped before stopping the transfer proxy port.

    If ports are started/stopped in an incorrect order,
    ``rte_eth_dev_start()``/``rte_eth_dev_stop()`` will return an appropriate error code:

    - ``-EAGAIN`` for ``rte_eth_dev_start()``.
    - ``-EBUSY`` for ``rte_eth_dev_stop()``.

  - Matching on ICMP6 following IPv6 routing extension header,
    should match ``ipv6_routing_ext_next_hdr`` instead of ICMP6.
    IPv6 routing extension matching is not supported in flow template relaxed
    matching mode (see ``struct rte_flow_pattern_template_attr::relaxed_matching``).

  - The supported actions order is as below::

          MARK (a)
          *_DECAP (b)
          OF_POP_VLAN
          COUNT | AGE
          METER_MARK | CONNTRACK
          OF_PUSH_VLAN
          MODIFY_FIELD
          *_ENCAP (c)
          JUMP | DROP | RSS (a) | QUEUE (a) | REPRESENTED_PORT (d)

    a. Only supported on ingress.
    b. Any decapsulation action, including the combination of RAW_ENCAP and RAW_DECAP actions
       which results in L3 decapsulation.
       Not supported on egress.
    c. Any encapsulation action, including the combination of RAW_ENCAP and RAW_DECAP actions
       which results in L3 encap.
    d. Only in transfer (switchdev) mode.

  - When using synchronous flow API,
    the following limitations and considerations apply:

    - Geneve options are supported when ``FLEX_PARSER_PROFILE_ENABLE`` = 0 (default).

- When using Verbs flow engine (``dv_flow_en`` = 0), flow pattern without any
  specific VLAN will match for VLAN packets as well:

  When a VLAN spec is not specified in the pattern, the matching rule will be created with VLAN as a wildcard.
  Meaning, the flow rule::

        flow create 0 ingress pattern eth / vlan vid is 3 / ipv4 / end ...

  will only match VLAN packets with VID 3, and the flow rule::

        flow create 0 ingress pattern eth / ipv4 / end ...

  will match any IPv4 packet (VLAN included).

- When using Verbs flow engine (``dv_flow_en`` = 0), multi-tagged (QinQ) match is not supported.

- When using DV flow engine (``dv_flow_en`` = 1), flow pattern with any VLAN specification will match only single-tagged packets unless the ETH item ``type`` field is 0x88A8 or the VLAN item ``has_more_vlan`` field is 1.
  The flow rule::

        flow create 0 ingress pattern eth / ipv4 / end ...

  will match any IPv4 packet.
  The flow rules::

        flow create 0 ingress pattern eth / vlan / end ...
        flow create 0 ingress pattern eth has_vlan is 1 / end ...
        flow create 0 ingress pattern eth type is 0x8100 / end ...

  will match only single-tagged packets, with any VLAN ID value.
  The flow rules::

        flow create 0 ingress pattern eth type is 0x88A8 / end ...
        flow create 0 ingress pattern eth / vlan has_more_vlan is 1 / end ...

  will match only multi-tagged packets, with any VLAN ID value.

- A flow pattern with 2 sequential VLAN items is not supported.

- VLAN pop offload command:

  - Flow rules that have a VLAN pop offload command as one of their actions
    and lack a match on VLAN as one of their items are not supported.
  - The command is not supported on egress traffic in NIC mode.

- VLAN push offload is not supported on ingress traffic in NIC mode.

- VLAN set PCP offload is not supported on existing headers.

- A multi-segment packet must not have more segments than reported by dev_infos_get()
  in the tx_desc_lim.nb_seg_max field. This value depends on the maximal supported Tx descriptor
  size and ``txq_inline_min`` settings and may be from 2 (worst case forced by maximal
  inline settings) to 58.

- Match on VXLAN supports any bits in the tunnel header (an example rule follows this list):

  - Flag 8-bits and first 24-bits reserved fields matching
    is only supported when using DV flow engine (``dv_flow_en`` = 2).
  - For ConnectX-5, the UDP destination port must be the standard one (4789).
  - Default UDP destination is 4789 if not explicitly specified.
  - Group zero's behavior may differ, depending on FW.
  - User should set different flags when matching on VXLAN-GPE/GBP:

    - for VXLAN-GPE - P flag
    - for VXLAN-GBP - G flag
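
  For example, a testpmd rule matching a specific VNI and steering such packets
  to a queue (illustrative values)::

        flow create 0 ingress pattern eth / ipv4 / udp / vxlan vni is 42 / end actions queue index 1 / end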

- Matching on VXLAN-GPE header fields:

     - ``rsvd0``/``rsvd1`` matching support depends on FW version
       when using DV flow engine (``dv_flow_en`` = 1).
     - ``protocol`` should be explicitly specified in HWS (``dv_flow_en`` = 2).

- L3 VXLAN and VXLAN-GPE tunnels cannot be supported together with MPLSoGRE and MPLSoUDP.

- MPLSoGRE is not supported in HW steering (``dv_flow_en`` = 2).

- MPLSoUDP with multiple MPLS headers is only supported in HW steering (``dv_flow_en`` = 2).

- Match on Geneve header supports the following fields only:

     - VNI
     - OAM
     - protocol type
     - options length

- Match on Geneve TLV option is supported on the following fields:

     - Class
     - Type
     - Length
     - Data

  Class/Type/Length fields must be specified as well as masks.
  Class/Type/Length specified masks must be full.
  Matching Geneve TLV option without specifying data is not supported.
  Matching Geneve TLV option with ``data & mask == 0`` is not supported.

  In SW steering (``dv_flow_en`` = 1):

     - Only one Class/Type/Length Geneve TLV option is supported per shared device.
     - Supported only with ``FLEX_PARSER_PROFILE_ENABLE`` = 0.

  In HW steering (``dv_flow_en`` = 2):

     - Multiple Class/Type/Length Geneve TLV options are supported per physical device.
     - Multiple instances of the same Geneve TLV option are not supported in the same pattern template.
     - Supported only with ``FLEX_PARSER_PROFILE_ENABLE`` = 8.
     - Supported also with ``FLEX_PARSER_PROFILE_ENABLE`` = 0 for single DW only.
     - Supported for FW version **xx.37.0142** and above.

  .. _geneve_parser_api:

  - An API (``rte_pmd_mlx5_create_geneve_tlv_parser``)
    is available for the flexible parser used in HW steering:

    Each physical device has 7 DWs for GENEVE TLV options.
    Partial option configuration is supported:
    a mask for the data is provided at parser creation,
    indicating which DWs are requested for configuration.
    Only masked data DWs can later be matched as item fields using the flow API.

    - Matching of ``type`` field is supported for each configured option.
    - However, for matching ``class`` field,
      the option should be configured with ``match_on_class_mode=2``.
      One extra DW is consumed for it.
    - Matching on ``length`` field is not supported.

    - More limitations with ``FLEX_PARSER_PROFILE_ENABLE`` = 0:

      - single DW
      - ``sample_len`` must be equal to ``option_len`` and not bigger than 1.
      - ``match_on_class_mode`` different than 1 is not supported.
      - ``offset`` must be 0.

    Although the parser is created per physical device, this API is port oriented.
    Each port should call this API before using the GENEVE OPT item,
    but its configuration must use the same options list,
    in the same internal order, as configured by the first port.

    Calling this API for different ports under the same physical device doesn't consume
    more DWs; the first call creates the parser and the rest use the same configuration.
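
    A minimal creation sketch, assuming an option descriptor with the fields
    named in this section (class, type, length, ``match_on_class_mode``,
    ``offset``, ``sample_len`` and a data mask); the exact structure layout
    and function signature are assumptions to be verified against
    ``rte_pmd_mlx5.h``::

        #include <rte_byteorder.h>
        #include <rte_pmd_mlx5.h>

        /* One TLV option: class 0x1, type 0x2, one DW of data,
         * matching on the full data DW. Values are illustrative.
         */
        rte_be32_t data_mask[1] = { RTE_BE32(0xffffffff) };
        struct rte_pmd_mlx5_geneve_tlv opt = {
                .option_class = RTE_BE16(0x1),
                .option_type = 0x2,
                .option_len = 1,              /* data length in DWs */
                .match_on_class_mode = 1,     /* class is fixed, not matched */
                .offset = 0,
                .sample_len = 1,              /* sampled DWs */
                .match_data_mask = data_mask,
        };

        void *parser = rte_pmd_mlx5_create_geneve_tlv_parser(port_id, &opt, 1);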

- VF: flow rules created on VF devices can only match traffic targeted at the
  configured MAC addresses (see ``rte_eth_dev_mac_addr_add()``).

- Match on GTP tunnel header item supports the following fields only:

     - v_pt_rsv_flags: E flag, S flag, PN flag
     - msg_type
     - teid

- Match on GTP extension header only for GTP PDU session container (next
  extension header type = 0x85).
- Match on GTP extension header is not supported in group 0.

- When using DV/Verbs flow engine (``dv_flow_en`` = 1/0 respectively),
  match on SPI field in ESP header for group 0 is supported from ConnectX-7.

- Matching on SPI field in ESP header is supported over the PF only.

- Flex item:

  - Hardware support: **NVIDIA BlueField-2** and **NVIDIA BlueField-3**.
  - Flex item is supported on PF only.
  - Hardware limits ``header_length_mask_width`` up to 6 bits.
  - Firmware supports 8 global sample fields.
    Each flex item allocates non-shared sample fields from that pool.
  - Supported flex item can have 1 input link - ``eth`` or ``udp``
    and up to 3 output links - ``ipv4`` or ``ipv6``.
  - Flex item fields (``next_header``, ``next_protocol``, ``samples``)
    do not participate in RSS hash functions.
  - In flex item configuration, ``next_header.field_base`` value
    must be byte aligned (multiple of 8).
  - For modify field with a flex item, the offset must be byte aligned (a multiple of 8).

- Match on random value:

  - Supported only with HW Steering enabled (``dv_flow_en`` = 2).
  - Supported only in table with ``nb_flows=1``.
  - NIC ingress/egress flow in group 0 is not supported.
  - Supports matching only 16 bits (LSB).

- Match with compare result item (``RTE_FLOW_ITEM_TYPE_COMPARE``):

  - Only supported in HW steering (``dv_flow_en`` = 2) mode.
  - Only a single flow rule is supported per flow table.
  - Only a single item is supported per pattern template.
  - In switch mode, when the ``repr_matching_en`` flag is enabled in the devargs
    (which is the default setting),
    the match with compare result item is not supported for ``ingress`` rules.
    This is because an implicit ``REPRESENTED_PORT`` needs to be added to the matcher,
    which conflicts with the single item limitation.
  - Only 32-bit comparison is supported or 16-bit for random field.
  - Only supported for ``RTE_FLOW_FIELD_META``, ``RTE_FLOW_FIELD_TAG``,
    ``RTE_FLOW_FIELD_ESP_SEQ_NUM``,
    ``RTE_FLOW_FIELD_RANDOM`` and ``RTE_FLOW_FIELD_VALUE``.
  - The field type ``RTE_FLOW_FIELD_VALUE`` must be the base (``b``) field.
  - The field type ``RTE_FLOW_FIELD_RANDOM`` can only be compared with
    ``RTE_FLOW_FIELD_VALUE``.

- No Tx metadata goes to the E-Switch steering domain for flow group 0.
  Flows within group 0 that use the set metadata action are rejected by hardware.

.. note::

   MAC addresses not already present in the bridge table of the associated
   kernel network device will be added and cleaned up by the PMD when closing
   the device. In case of ungraceful program termination, some entries may
   remain present and should be removed manually by other means.

- Buffer split offload is supported with the regular Rx burst routine only;
  no MPRQ feature or vectorized code can be engaged.

- When Multi-Packet Rx queue is configured (``mprq_en``), an Rx packet can be
  externally attached to a user-provided mbuf with RTE_MBUF_F_EXTERNAL set in
  ol_flags. As the mempool for the external buffer is managed by the PMD, all the
  Rx mbufs must be freed before the device is closed. Otherwise, the mempool of
  the external buffers will be freed by the PMD and the application which still
  holds the external buffers may be corrupted.
  User-managed mempools with external pinned data buffers
  cannot be used in conjunction with MPRQ
  since packets may be already attached to PMD-managed external buffers.

- If Multi-Packet Rx queue is configured (``mprq_en``) and Rx CQE compression is
  enabled (``rxq_cqe_comp_en``) at the same time, RSS hash result is not fully
  supported. Some Rx packets may not have RTE_MBUF_F_RX_RSS_HASH.

- IPv6 multicast messages are not supported on VMs when promiscuous mode
  and allmulticast mode are both set to off.
  To receive IPv6 multicast messages on a VM, explicitly set the relevant
  MAC address using the rte_eth_dev_mac_addr_add() API.

- To support a mixed traffic pattern (some buffers from local host memory, some
  buffers from other devices) with high bandwidth, a mbuf flag is used.

  An application hints the PMD whether or not it should try to inline the
  given mbuf data buffer. The PMD makes a best effort to act upon this request.

  The hint flag ``RTE_PMD_MLX5_FINE_GRANULARITY_INLINE`` is dynamic,
  registered by the application with rte_mbuf_dynflag_register(). This flag is
  purely driver-specific and declared in the PMD-specific header ``rte_pmd_mlx5.h``,
  which is intended to be used by the application.

  To query the supported specific flags at runtime,
  the function ``rte_pmd_mlx5_get_dyn_flag_names`` returns the array of
  currently (over present hardware and configuration) supported specific flags.
  The operating flow of the "no-inline hint" feature is as follows
  (a code sketch is shown after this list):

    - application starts
    - probe the devices, ports are created
    - query the port capabilities
    - if port supporting the feature is found
    - register dynamic flag ``RTE_PMD_MLX5_FINE_GRANULARITY_INLINE``
    - application starts the ports
    - on ``dev_start()`` the PMD checks whether the feature flag is registered and
      enables the feature support in the datapath
    - the application may set the registered flag bit in the ``ol_flags`` field
      of mbufs being sent and the PMD will handle them appropriately.
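
  A minimal sketch of the registration and per-packet usage, assuming a single
  mbuf ``m`` already prepared for transmission (hypothetical variable)::

      #include <rte_mbuf_dyn.h>
      #include <rte_pmd_mlx5.h>

      static uint64_t no_inline_flag;

      /* After probing, register the driver-specific dynamic flag by name. */
      const struct rte_mbuf_dynflag flag_desc = {
              .name = RTE_PMD_MLX5_FINE_GRANULARITY_INLINE,
      };
      int bit = rte_mbuf_dynflag_register(&flag_desc);
      if (bit >= 0)
              no_inline_flag = 1ULL << bit;

      /* In the Tx path: hint the PMD not to inline this mbuf's data. */
      if (no_inline_flag != 0)
              m->ol_flags |= no_inline_flag;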

- The number of descriptors in a Tx queue may be limited by data inline settings.
  Inline data requires more descriptor building blocks and the overall block
  amount may exceed the hardware supported limits. The application should
  reduce the requested Tx queue size or adjust the data inline settings with
  ``txq_inline_max`` and ``txq_inline_mpw`` devargs keys.

- To provide packet send scheduling on mbuf timestamps, the ``tx_pp``
  parameter should be specified.
  When the PMD sees the RTE_MBUF_DYNFLAG_TX_TIMESTAMP_NAME flag set on a packet
  being sent, it tries to synchronize the time of the packet appearing on
  the wire with the specified packet timestamp. If the specified timestamp
  is in the past, it should be ignored; if it is in the distant future,
  it should be capped to some reasonable value (in the range of seconds).
  These specific cases ("too late" and "distant future") can be optionally
  reported via device xstats to assist applications in detecting
  time-related problems.

  The timestamp upper "too-distant-future" limit
  at the moment of invoking the Tx burst routine
  can be estimated as ``tx_pp`` option (in nanoseconds) multiplied by 2^23.
  Please note, for the testpmd txonly mode,
  the limit is deduced from the expression::

        (n_tx_descriptors / burst_size + 1) * inter_burst_gap

  No packet reordering according to timestamps is supposed to be done,
  neither within a packet burst nor between packets; it is entirely the
  application's responsibility to generate packets and their timestamps
  in the desired order. The timestamp can be put only in the first packet
  of a burst, providing scheduling for the entire burst.
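
  A minimal sketch of registering the Tx timestamp dynamic field and flag and
  stamping a packet, assuming an mbuf ``m`` and a nanosecond timestamp ``ts``
  (hypothetical variables)::

      #include <rte_mbuf_dyn.h>

      static int ts_offset;      /* dynamic field offset within the mbuf */
      static uint64_t ts_flag;   /* dynamic flag mask */

      /* Register the standard Tx timestamp dynamic field and flag once. */
      if (rte_mbuf_dyn_tx_timestamp_register(&ts_offset, &ts_flag) < 0)
              return -1;

      /* In the Tx path: store the scheduled time and mark the mbuf. */
      *RTE_MBUF_DYNFIELD(m, ts_offset, uint64_t *) = ts;
      m->ol_flags |= ts_flag;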

- E-Switch decapsulation Flow:

  - can be applied to PF port only.
  - must specify VF port action (packet redirection from PF to VF).
  - optionally may specify tunnel inner source and destination MAC addresses.

- E-Switch encapsulation Flow:

  - can be applied to VF ports only.
  - must specify PF port action (packet redirection from VF to PF).

- E-Switch Manager matching:

  - For BlueField with old FW
    which doesn't expose the E-Switch Manager vport ID in the capability,
    matching E-Switch Manager should be used only in BlueField embedded CPU mode.

- Raw encapsulation:

  - The input buffer, used as outer header, is not validated.

- Raw decapsulation:

  - The decapsulation is always done up to the outermost tunnel detected by the HW.
  - The input buffer, providing the removal size, is not validated.
  - The buffer size must match the length of the headers to be removed.

- Outer UDP checksum calculation for encapsulation flow actions:

  - Currently available NVIDIA NICs and DPUs do not have a capability to calculate
    the UDP checksum in the header added using encapsulation flow actions.

    Applications are required to use 0 in UDP checksum field in such flow actions.
    Resulting packet will have outer UDP checksum equal to 0.

- ICMP (code/type/identifier/sequence number) / ICMP6 (code/type/identifier/sequence number) matching,
  IP-in-IP and MPLS flow matching are all mutually exclusive features which cannot be supported together
  (see :ref:`mlx5_firmware_config`).

- LRO:

  - Requires DevX and DV flow to be enabled.
  - KEEP_CRC offload cannot be supported with LRO.
  - The first mbuf length, without head-room, must be big enough to include the
    TCP header (122B).
  - Rx queue with LRO offload enabled, receiving a non-LRO packet, can forward
    it with size limited to max LRO size, not to max RX packet length.
  - The driver rounds down the port configuration value ``max_lro_pkt_size``
    (from ``rte_eth_rxmode``) to a multiple of 256 due to hardware limitation.
  - LRO can be used with outer header of TCP packets of the standard format:
        eth (with or without vlan) / ipv4 or ipv6 / tcp / payload

    Other TCP packets (e.g. with MPLS label) received on Rx queue with LRO enabled, will be received with bad checksum.
  - LRO packet aggregation is performed by HW only for packet size larger than
    ``lro_min_mss_size``. This value is reported on device start, when debug
    mode is enabled.

- CRC:

  - ``RTE_ETH_RX_OFFLOAD_KEEP_CRC`` cannot be supported with decapsulation
    for some NICs (such as ConnectX-6 Dx, ConnectX-6 Lx, ConnectX-7, BlueField-2,
    and BlueField-3).
    The capability bit ``scatter_fcs_w_decap_disable`` shows NIC support.

- TX mbuf fast free:

  - The fast free offload assumes that all mbufs being sent originate from the
    same memory pool and there are no extra references to the mbufs (the
    reference counter for each mbuf is equal to 1 on the tx_burst call). The latter
    means there should be no externally attached buffers in the mbufs. It is
    the application's responsibility to provide the correct mbufs if the fast
    free offload is engaged. The mlx5 PMD implicitly produces mbufs with
    externally attached buffers if the MPRQ option is enabled, hence the fast
    free offload is neither supported nor advertised if MPRQ is enabled.

- Sample flow (an illustrative testpmd example follows this list):

  - Supports ``RTE_FLOW_ACTION_TYPE_SAMPLE`` action only within NIC Rx and
    E-Switch steering domain.
  - In E-Switch steering domain, for sampling with sample ratio > 1 in a transfer rule,
    additional actions are not supported in the sample actions list.
  - For ConnectX-5, the ``RTE_FLOW_ACTION_TYPE_SAMPLE`` is typically used as
    first action in the E-Switch egress flow if with header modify or
    encapsulation actions.
  - For NIC Rx flow, supports only ``MARK``, ``COUNT``, ``QUEUE``, ``RSS`` in the
    sample actions list.
  - In E-Switch steering domain, for mirroring with sample ratio = 1 in a transfer rule,
    supports only ``RAW_ENCAP``, ``PORT_ID``, ``REPRESENTED_PORT``, ``VXLAN_ENCAP``, ``NVGRE_ENCAP``
    in the sample actions list.
  - In E-Switch steering domain, for mirroring with sample ratio = 1 in a transfer rule,
    the encapsulation actions (``RAW_ENCAP`` or ``VXLAN_ENCAP`` or ``NVGRE_ENCAP``)
    support uplink port only.
  - In E-Switch steering domain, for mirroring with sample ratio = 1 in a transfer rule,
    the port actions (``PORT_ID`` or ``REPRESENTED_PORT``) with uplink port and ``JUMP`` action
    are not supported without the encapsulation actions
    (``RAW_ENCAP`` or ``VXLAN_ENCAP`` or ``NVGRE_ENCAP``) in the sample actions list.
  - For ConnectX-5 trusted device, the application metadata with SET_TAG index 0
    is not supported before ``RTE_FLOW_ACTION_TYPE_SAMPLE`` action.
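
  For example, with testpmd, an E-Switch mirroring rule with sample ratio 1
  can be built by first defining the sample actions list and then referencing
  it from the flow rule (illustrative port IDs and index)::

        set sample_actions 0 port_id id 1 / end
        flow create 0 transfer pattern eth / end actions sample ratio 1 index 0 / port_id id 2 / end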

- Modify Field flow:

  - Supports the 'set' and 'add' operations for ``RTE_FLOW_ACTION_TYPE_MODIFY_FIELD`` action.
  - Modification of an arbitrary place in a packet via the special ``RTE_FLOW_FIELD_START`` Field ID is not supported.
  - Modify field action using ``RTE_FLOW_FIELD_RANDOM`` is not supported.
  - Modification of the 802.1Q tag is not supported.
  - Modification of VXLAN network or GENEVE network ID is supported only for HW steering.
  - Modification of the VXLAN header is supported with below limitations:

    - Only for HW steering (``dv_flow_en=2``).
    - Support VNI and the last reserved byte modifications for traffic
      with default UDP destination port: 4789 for VXLAN and VXLAN-GBP, 4790 for VXLAN-GPE.

  - Modification of GENEVE network ID is not supported when configured
    ``FLEX_PARSER_PROFILE_ENABLE`` supports Geneve TLV options.
    See :ref:`mlx5_firmware_config` for more flex parser information.
  - Modification of GENEVE TLV option fields is supported only for HW steering.
    Only DWs configured in :ref:`parser creation <geneve_parser_api>` can be modified,
    'type' and 'class' fields can be modified when ``match_on_class_mode=2``.
  - Modification of GENEVE TLV option data supports one DW per action.
  - Offsets cannot skip past the boundary of a field.
  - If the field type is ``RTE_FLOW_FIELD_MAC_TYPE``
    and packet contains one or more VLAN headers,
    the meaningful type field following the last VLAN header
    is used as modify field operation argument.
    The modify field action is not intended to modify VLAN headers type field,
    dedicated VLAN push and pop actions should be used instead.
  - For packet fields (e.g. MAC addresses, IPv4 addresses or L4 ports)
    offset specifies the number of bits to skip from field's start,
    starting from MSB in the first byte, in the network order.
  - For flow metadata fields (e.g. META or TAG)
    offset specifies the number of bits to skip from field's start,
    starting from LSB in the least significant byte, in the host order.
  - Modification of the MPLS header is supported with some limitations:

    - Only in HW steering.
    - Only in ``src`` field.
    - Only for outermost tunnel header (``level=2``).
      For ``RTE_FLOW_FIELD_MPLS``,
      the default encapsulation level ``0`` describes the outermost tunnel header.

      .. note::

         The default encapsulation level ``0`` describes
         the "outermost that match is supported",
         currently it is the first tunnel,
         but it can be changed to outer when it is supported.

  - Default encapsulation level ``0`` describes outermost.
  - Encapsulation level ``2`` is supported with some limitations:

    - Only in HW steering.
    - Only in ``src`` field.
    - ``RTE_FLOW_FIELD_VLAN_ID`` is not supported.
    - ``RTE_FLOW_FIELD_IPV4_PROTO`` is not supported.
    - ``RTE_FLOW_FIELD_IPV6_PROTO/DSCP/ECN`` are not supported.
    - ``RTE_FLOW_FIELD_ESP_PROTO/SPI/SEQ_NUM`` are not supported.
    - ``RTE_FLOW_FIELD_TCP_SEQ/ACK_NUM`` are not supported.
    - Second tunnel fields are not supported.

  - Encapsulation levels greater than ``2`` are not supported.

- Age action:

  - with HW steering (``dv_flow_en=2``)

    - Using the same indirect count action combined with multiple age actions
      in different flows may cause a wrong age state for the age actions.
    - Creating/destroying flow rules with indirect age action when it is active
      (timeout != 0) may cause a wrong age state for the indirect age action.

    - The driver reuses counters for aging action, so for optimization
      the values in ``rte_flow_port_attr`` structure should describe:

      - ``nb_counters`` is the number of flow rules using counter (with/without age)
        in addition to flow rules using only age (without count action).
      - ``nb_aging_objects`` is the number of flow rules containing age action.

- The IPv6 header item 'proto' field, indicating the next header protocol, should
  not be set to an extension header.
  If the next header is an extension header, it should not be specified in
  the IPv6 header item 'proto' field.
  The last extension header item 'next header' field can specify the following
  header protocol type.

- Match on IPv6 routing extension header supports the following fields only:

  - ``type``
  - ``next_hdr``
  - ``segments_left``

  Only supports HW steering (``dv_flow_en=2``).

- IPv6 routing extension push/remove:

  - Supported only with HW Steering enabled (``dv_flow_en=2``).
  - Supported in non-zero group
    (no limits on transfer domain if ``fdb_def_rule_en=1`` which is default).
  - Only supports TCP or UDP as next layer.
  - IPv6 routing header must be the only present extension.
  - Not supported on guest port.

- NAT64 action:

  - Supported only with HW Steering enabled (``dv_flow_en`` = 2).
  - FW version: at least ``XX.39.1002``.
  - Supported only on non-root table.
  - Actions order limitation should follow the modify fields action.
  - The last 2 TAG registers will be used implicitly in address backup mode.
  - Even if the action can be shared, new steering entries will be created per flow rule.
    It is recommended that a single rule with NAT64 be shared
    to reduce the duplication of entries.
    The default address and other fields conversion will be handled with the NAT64 action.
    To support other addresses, new rule(s) with modify fields on the IP addresses should be created.
  - TOS / Traffic Class is currently not supported.

- Hairpin:

  - Hairpin between two ports supports only manual binding and explicit Tx flow mode.
    For single port hairpin, all the combinations of auto/manual binding
    and explicit/implicit Tx flow mode are supported.
  - Hairpin in switchdev SR-IOV mode is not supported.
  - ``out_of_buffer`` statistics are not available on:

    - NICs older than ConnectX-7.
    - DPUs older than BlueField-3.

- Quota:

  - Quota implemented for HWS / template API.
  - Maximal value for quota SET and ADD operations is INT32_MAX (2GB).
  - Application cannot use 2 consecutive ADD updates.
    Next tokens update after ADD must always be SET.
  - Quota flow action cannot be used with Meter or CT flow actions in the same rule.
  - Quota flow action and item supported in non-root HWS tables.
  - Maximal number of HW quota and HW meter objects <= 16e6.

- Meter:

  - All the meter colors with drop action will be counted only by the global drop statistics.
  - Yellow detection is only supported with ASO metering.
  - Red color must be with drop action.
  - Meter statistics are supported only for drop case.
  - A meter action created with a pre-defined policy must be the last action in the flow,
    except for the single case where the policy actions are:

     - green: NULL or END.
     - yellow: NULL or END.
     - RED: DROP / END.

  - The only supported meter policy actions:

     - green: QUEUE, RSS, PORT_ID, REPRESENTED_PORT, JUMP, DROP, MODIFY_FIELD, MARK, METER and SET_TAG.
     - yellow: QUEUE, RSS, PORT_ID, REPRESENTED_PORT, JUMP, DROP, MODIFY_FIELD, MARK, METER and SET_TAG.
     - RED: must be DROP.

  - Policy actions of RSS for green and yellow should have the same configuration except queues.
  - Policy with RSS/queue action is not supported when ``dv_xmeta_en`` enabled.
  - If green action is METER, yellow action must be the same METER action or NULL.
  - Meter profile packet mode is supported.
  - Meter profiles of RFC2697, RFC2698 and RFC4115 are supported.
  - The RFC4115 implementation follows MEF, meaning yellow traffic may reclaim unused green bandwidth when the green token bucket is full.
  - When using DV flow engine (``dv_flow_en`` = 1),
    if meter has drop count
    or meter hierarchy contains any meter that uses drop count,
    it cannot be used by flow rule matching all ports.
  - When using DV flow engine (``dv_flow_en`` = 1),
    if meter hierarchy contains any meter that has MODIFY_FIELD/SET_TAG,
    it cannot be used by flow matching all ports.
  - When using HWS flow engine (``dv_flow_en`` = 2),
    only meter mark action is supported.

- Ptype:

  - Only supports HW steering (``dv_flow_en=2``).
  - The supported values are:

    - L2: ``RTE_PTYPE_L2_ETHER``, ``RTE_PTYPE_L2_ETHER_VLAN``, ``RTE_PTYPE_L2_ETHER_QINQ``
    - L3: ``RTE_PTYPE_L3_IPV4``, ``RTE_PTYPE_L3_IPV6``
    - L4: ``RTE_PTYPE_L4_TCP``, ``RTE_PTYPE_L4_UDP``, ``RTE_PTYPE_L4_ICMP``

    and their ``RTE_PTYPE_INNER_XXX`` counterparts as well as ``RTE_PTYPE_TUNNEL_ESP``.
    Any other values are not supported. Using them as a value will cause unexpected behavior.
  - Matching on both outer and inner IP fragmented is supported
    using ``RTE_PTYPE_L4_FRAG`` and ``RTE_PTYPE_INNER_L4_FRAG`` values.
    They are not part of L4 types, so they should be provided explicitly
    as a mask value during pattern template creation.
    Providing ``RTE_PTYPE_L4_MASK`` during pattern template creation
    and ``RTE_PTYPE_L4_FRAG`` during flow rule creation
    will cause unexpected behavior.

- Integrity:

  - Verification bits provided by the hardware are ``l3_ok``, ``ipv4_csum_ok``, ``l4_ok``, ``l4_csum_ok``.
  - ``level`` value 0 references outer headers.
  - Negative integrity item verification is not supported.

  - With SW steering (``dv_flow_en=1``)

    - Integrity offload is enabled starting from **ConnectX-6 Dx**.
    - Multiple integrity items not supported in a single flow rule.
    - Flow rule items supplied by application must explicitly specify
      network headers referred by integrity item.

      For example, if integrity item mask sets ``l4_ok`` or ``l4_csum_ok`` bits,
      reference to L4 network header, TCP or UDP, must be in the rule pattern as well::

         flow create 0 ingress pattern integrity level is 0 value mask l3_ok value spec l3_ok / eth / ipv6 / end ...
         flow create 0 ingress pattern integrity level is 0 value mask l4_ok value spec l4_ok / eth / ipv4 proto is udp / end ...

  - With HW steering (``dv_flow_en=2``)

    - The ``l3_ok`` field represents all L3 checks, but nothing about IPv4 checksum.
    - The ``l4_ok`` field represents all L4 checks including L4 checksum.

- Connection tracking:

  - Cannot co-exist with ASO meter, ASO age action in a single flow rule.
  - Flow rules insertion rate and memory consumption need more optimization.
  - 16 ports maximum (with ``dv_flow_en=1``).
  - 32M connections maximum.

- Multi-thread flow insertion:

  - In order to achieve the best insertion rate, the application should manage the flows per lcore.
  - It is better to disable memory reclaim by setting ``reclaim_mem_mode`` to 0
    to accelerate flow object allocation and release with cache.

- HW hashed bonding

  - TXQ affinity is subject to HW hash once enabled.

- Bonding under socket direct mode

  - Needs MLNX_OFED 5.4+.

- Match on aggregated affinity:

  - Supports NIC ingress flow in group 0.
  - Supports E-Switch flow in group 0 and depends on
    device-managed flow steering (DMFS) mode.

- Timestamps:

  - CQE timestamp field width is limited by hardware to 63 bits, MSB is zero.
  - In the free-running mode the timestamp counter is reset on power on
    and the 63-bit value provides over 1800 years of uptime until overflow.
  - In the real-time mode
    (configurable with ``REAL_TIME_CLOCK_ENABLE`` firmware settings),
    the timestamp presents the nanoseconds elapsed since 01-Jan-1970,
    hardware timestamp overflow will happen on 19-Jan-2038
    (0x80000000 seconds since 01-Jan-1970).
  - The send scheduling is based on timestamps
    from the reference "Clock Queue" completions,
    the scheduled send timestamps should not be specified with non-zero MSB.

- Match on GRE header supports the following fields:

  - c_rsvd0_v: C bit, K bit, S bit
  - protocol type
  - checksum
  - key
  - sequence

  Matching on checksum and sequence needs MLNX_OFED 5.6+.

- Matching on NVGRE header:

  - c_rc_k_s_rsvd0_ver
  - protocol
  - tni
  - flow_id

  In SW steering (``dv_flow_en`` = 1), only tni is supported.
  In HW steering (``dv_flow_en`` = 2), all fields are supported.

- The NIC egress flow rules on representor port are not supported.

- In switch mode, flow rule matching ``RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT`` item
  with port ID ``UINT16_MAX`` means matching packets sent by E-Switch manager from software.
  Need MLNX_OFED 24.04+.

- A driver limitation for ``RTE_FLOW_ACTION_TYPE_PORT_REPRESENTOR`` action
  restricts the ``port_id`` configuration to only accept the value ``0xffff``,
  indicating the E-Switch manager.
  If the ``repr_matching_en`` flag is enabled, the traffic will be directed
  to the representor of the source virtual port (SF/VF), while if it is disabled,
  the traffic will be routed based on the steering rules in the ingress domain.

- Send to kernel action (``RTE_FLOW_ACTION_TYPE_SEND_TO_KERNEL``):

  - Supported on non-root table.
  - Supported in isolated mode.
  - In HW steering (``dv_flow_en`` = 2):

    - not supported on guest port.

- During live migration to a new process, the new process should set its flow engine
  to standby mode and the user should only program flow rules in group 0 (``fdb_def_rule_en=0``).
  Live migration is only supported under SWS (``dv_flow_en=1``).
  Flow group 0 is shared between DPDK processes,
  while the other flow groups are limited to the current process.
  The flow engine of a process cannot move from active to standby mode
  if preceding active application rules are still present, and vice versa.


Statistics
----------

MLX5 supports various methods to report statistics:

Port statistics can be queried using ``rte_eth_stats_get()``. The received and sent statistics are counted through SW only and count the number of packets received or sent successfully by the PMD. The imissed counter is the number of packets that could not be delivered to SW because a queue was full. Packets not received due to congestion in the bus or on the NIC can be queried via the rx_discards_phy xstats counter.

Extended statistics can be queried using ``rte_eth_xstats_get()``. The extended statistics expose a wider set of counters counted by the device. The extended port statistics count the number of packets received or sent successfully by the port. As NVIDIA NICs are using the :ref:`Bifurcated Linux Driver <linux_gsg_linux_drivers>`, those counters also count packets received or sent by the Linux kernel. The counters with the ``_phy`` suffix count the total events on the physical port and are therefore not valid for VF.

Finally, per-flow statistics can be queried using ``rte_flow_query()``
when a count action is attached to a specific flow.
The flow counter counts the number of packets received successfully by the port
that match the specific flow.
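
For example, these counters can be inspected interactively from the ``testpmd`` CLI
(port 0 and flow rule 0 are used for illustration only,
and the queried rule must carry a count action)::

   testpmd> show port stats 0
   testpmd> show port xstats 0
   testpmd> flow query 0 0 count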


Extended Statistics Counters
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Send Scheduling Counters
^^^^^^^^^^^^^^^^^^^^^^^^

The mlx5 PMD provides a comprehensive set of counters designed for
debugging and diagnostics related to packet scheduling during transmission.
These counters are applicable only if the port was configured with the ``tx_pp`` devarg
and reflect the status of the PMD scheduling infrastructure
based on Clock and Rearm Queues, used as a workaround on ConnectX-6 Dx NICs.

``tx_pp_missed_interrupt_errors``
  Indicates that the Rearm Queue interrupt was not serviced on time.
  The EAL manages interrupts in a dedicated thread,
  and it is possible that other time-consuming actions were being processed concurrently.

``tx_pp_rearm_queue_errors``
  Signifies hardware errors that occurred on the Rearm Queue,
  typically caused by delays in servicing interrupts.

``tx_pp_clock_queue_errors``
  Reflects hardware errors on the Clock Queue,
  which usually indicate configuration issues
  or problems with the internal NIC hardware or firmware.

``tx_pp_timestamp_past_errors``
  Tracks attempts by the application to send packets with timestamps set in the past.
  It is useful for debugging application code
  and does not indicate a malfunction of the PMD.

``tx_pp_timestamp_future_errors``
  Records attempts by the application to send packets
  with timestamps set too far into the future,
  exceeding the hardware’s scheduling capabilities.
  Like the previous counter, it aids in application debugging
  without suggesting a PMD malfunction.

``tx_pp_jitter``
  Measures the internal NIC real-time clock jitter estimation
  between two consecutive Clock Queue completions, expressed in nanoseconds.
  Significant jitter may signal potential clock synchronization issues,
  possibly due to inappropriate adjustments
  made by a system PTP (Precision Time Protocol) agent.

``tx_pp_wander``
  Indicates the long-term stability of the internal NIC real-time clock
  over 2^24 completions, measured in nanoseconds.
  Significant wander may also suggest clock synchronization problems.

``tx_pp_sync_lost``
  A general operational indicator;
  a non-zero value indicates that the driver has lost synchronization with the Clock Queue,
  resulting in improper scheduling operations.
  To restore correct scheduling functionality, it is necessary to restart the port.

The following counters are particularly valuable for verifying and debugging application code.
They do not indicate driver or hardware malfunctions
and are applicable to newer hardware with direct on-time scheduling capabilities
(such as ConnectX-7 and above):

``tx_pp_timestamp_order_errors``
  Indicates attempts by the application to send packets
  with timestamps that are not in strictly ascending order.
  Since the PMD does not reorder packets within hardware queues,
  violations of timestamp order can lead to packets being sent at incorrect times.


Compilation
-----------

See :ref:`mlx5 common compilation <mlx5_common_compilation>`.


Configuration
-------------

Environment Configuration
~~~~~~~~~~~~~~~~~~~~~~~~~

See :ref:`mlx5 common configuration <mlx5_common_env>`.

Firmware configuration
~~~~~~~~~~~~~~~~~~~~~~

See :ref:`mlx5_firmware_config` guide.

Runtime Configuration
~~~~~~~~~~~~~~~~~~~~~

Please refer to :ref:`mlx5 common options <mlx5_common_driver_options>`
for an additional list of options shared with other mlx5 drivers.

- ``rxq_cqe_comp_en`` parameter [int]

  A nonzero value enables CQE compression on the Rx side. This feature
  saves PCI bandwidth and improves performance. Enabled by default.
  Different compression formats are supported in order to achieve the best
  performance for different traffic patterns. The default format depends on
  the Multi-Packet Rx queue configuration: the Hash RSS format is used when
  MPRQ is disabled and the Checksum format when MPRQ is enabled.

  The lower 3 bits define the CQE compression format:
  Specifying 2 in these bits of the ``rxq_cqe_comp_en`` parameter selects
  the flow tag format for better compression rate in case of flow mark traffic.
  Specifying 3 in these bits selects checksum format.
  Specifying 4 in these bits selects L3/L4 header format for
  better compression rate in case of mixed TCP/UDP and IPv4/IPv6 traffic.
  CQE compression format selection requires DevX to be enabled. If DevX is
  not enabled/supported, the value is reset to 1 by default.

  The 8th bit defines the CQE compression layout.
  Setting this bit to 1 turns enhanced CQE compression layout on.
  Enhanced CQE compression is designed for better latency and SW utilization.
  This bit is ignored if only the basic CQE compression layout is supported.

  Supported on:

  - x86_64 with ConnectX-4, ConnectX-4 Lx, ConnectX-5, ConnectX-6, ConnectX-6 Dx,
    ConnectX-6 Lx, ConnectX-7, BlueField, BlueField-2, and BlueField-3.
  - POWER9 and ARMv8 with ConnectX-4 Lx, ConnectX-5, ConnectX-6, ConnectX-6 Dx,
    ConnectX-6 Lx, ConnectX-7, BlueField, BlueField-2, and BlueField-3.
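
  For example, to request the checksum compression format explicitly
  (the PCI address is a placeholder)::

    <PCI_BDF>,rxq_cqe_comp_en=3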

- ``rxq_pkt_pad_en`` parameter [int]

  A nonzero value enables padding Rx packets to the size of a cache line on the
  PCI transaction. This feature wastes some PCI bandwidth but can improve
  performance by avoiding partial cache-line writes, which may cause costly
  read-modify-write memory transactions on some architectures. Disabled by
  default.

  Supported on:

  - x86_64 with ConnectX-4, ConnectX-4 Lx, ConnectX-5, ConnectX-6, ConnectX-6 Dx,
    ConnectX-6 Lx, ConnectX-7, BlueField, BlueField-2, and BlueField-3.
  - POWER8 and ARMv8 with ConnectX-4 Lx, ConnectX-5, ConnectX-6, ConnectX-6 Dx,
    ConnectX-6 Lx, ConnectX-7, BlueField, BlueField-2, and BlueField-3.

- ``delay_drop`` parameter [int]

  Bitmask value for the Rx queue delay drop attribute. Bit 0 is used for the
  standard Rx queue and bit 1 is used for the hairpin Rx queue. By default, the
  delay drop is disabled for all Rx queues. It will be ignored if the port does
  not support the attribute even if it is enabled explicitly.

  The packets being received will not be dropped immediately when the WQEs are
  exhausted in a Rx queue with delay drop enabled.

  A timeout value is set in the driver to control the waiting time before
  dropping a packet. Once the timer expires, delay drop will be
  deactivated for all the Rx queues with this feature enabled. To re-activate
  it, a rearming is needed, and it is part of the kernel driver starting from
  MLNX_OFED 5.5.

  To enable / disable the delay drop rearming, the private flag ``dropless_rq``
  can be set and queried via ethtool:

  - ethtool --set-priv-flags <netdev> dropless_rq on (/ off)
  - ethtool --show-priv-flags <netdev>

  The configuration flag is global per PF and can only be set on the PF. Once
  it is on, all the VFs', SFs' and representors' Rx queues will share the timer
  and rearming.
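
  For example, to enable delay drop on both standard and hairpin Rx queues
  (the PCI address is a placeholder)::

    <PCI_BDF>,delay_drop=3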

- ``mprq_en`` parameter [int]

  A nonzero value enables configuring Multi-Packet Rx queues. Rx queue is
  configured as Multi-Packet RQ if the total number of Rx queues is
  ``rxqs_min_mprq`` or more. Disabled by default.

  Multi-Packet Rx Queue (MPRQ a.k.a. Striding RQ) can further save PCIe bandwidth
  by posting a single large buffer for multiple packets. Instead of posting one
  buffer per packet, one large buffer is posted in order to receive multiple
  packets on it. An MPRQ buffer consists of multiple fixed-size strides
  and each stride receives one packet. MPRQ can improve throughput for
  small-packet traffic.

  When MPRQ is enabled, the MTU can be larger than the size of the
  user-provided mbuf even if ``RTE_ETH_RX_OFFLOAD_SCATTER`` isn't enabled. The
  PMD will configure a stride size large enough to accommodate the MTU as long
  as the device allows it. Note that this can waste system memory compared to
  enabling Rx scatter and multi-segment packets.
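
  For example, a possible combination enabling MPRQ on ports configured with
  at least 8 Rx queues (the PCI address is a placeholder)::

    <PCI_BDF>,mprq_en=1,rxqs_min_mprq=8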

- ``mprq_log_stride_num`` parameter [int]

  Log 2 of the number of strides for Multi-Packet Rx queue. Configuring more
  strides can reduce PCIe traffic further. If the configured value is not within
  the device capability range, the default value will be set with a warning
  message. The default value is 4, which is 16 strides per buffer, valid only
  if ``mprq_en`` is set.

  The size of Rx queue should be bigger than the number of strides.

- ``mprq_log_stride_size`` parameter [int]

  Log 2 of the size of a stride for Multi-Packet Rx queue. Configuring a smaller
  stride size can save some memory and reduce the probability of a depletion of
  all available strides due to unreleased packets by an application. If the
  configured value is not within the device capability range, the default value
  will be set with a warning message. The default value is 11, which is 2048
  bytes per stride, valid only if ``mprq_en`` is set. With
  ``mprq_log_stride_size`` set, it is possible for a packet to span across
  multiple strides. This mode allows support of jumbo frames (9K) with MPRQ.
  A memcpy of some packets (or part of a packet if Rx scatter is configured)
  may be required if there is no space left for headroom at the end of a
  stride, which incurs some performance penalty.

- ``mprq_max_memcpy_len`` parameter [int]

  The maximum length of packet to memcpy in case of Multi-Packet Rx queue. Rx
  packet is mem-copied to a user-provided mbuf if the size of Rx packet is less
  than or equal to this parameter. Otherwise, PMD will attach the Rx packet to
  the mbuf by external buffer attachment - ``rte_pktmbuf_attach_extbuf()``.
  A mempool for external buffers will be allocated and managed by PMD. If Rx
  packet is externally attached, ol_flags field of the mbuf will have
  RTE_MBUF_F_EXTERNAL and this flag must be preserved. ``RTE_MBUF_HAS_EXTBUF()``
  checks the flag. The default value is 128, valid only if ``mprq_en`` is set.

- ``rxqs_min_mprq`` parameter [int]

  Configure Rx queues as Multi-Packet RQ if the total number of Rx queues is
  greater or equal to this value. The default value is 12, valid only if
  ``mprq_en`` is set.

- ``txq_inline`` parameter [int]

  Amount of data to be inlined during TX operations. This parameter is
  deprecated and converted to the new parameter ``txq_inline_max`` providing
  partial compatibility.

- ``txqs_min_inline`` parameter [int]

  Enable inline data send only when the number of TX queues is greater or equal
  to this value.

  This option should be used in combination with ``txq_inline_max`` and
  ``txq_inline_mpw`` below and does not affect ``txq_inline_min`` settings above.

  If this option is not specified, the default value 16 is used for BlueField
  and 8 for other platforms.

  Data inlining consumes CPU cycles, so this option is intended to
  enable inline data automatically when there are enough Tx queues,
  which implies enough CPU cores, PCI bandwidth becoming the more critical
  resource, and the CPU no longer being the bottleneck.

  Copying data into the WQE improves latency and can improve PPS performance
  when PCI back pressure is detected, and it may be useful for scenarios
  involving heavy traffic on many queues.

  Because additional software logic is necessary to handle this mode, this
  option should be used with care, as it may lower performance when back
  pressure is not expected.

  If inline data is enabled, it may affect the maximal Tx queue size in
  descriptors, because the inline data increases the descriptor size and
  the queue size limits supported by hardware may be exceeded.

- ``txq_inline_min`` parameter [int]

  Minimal amount of data to be inlined into WQE during Tx operations. NICs
  may require this minimal data amount to operate correctly. The exact value
  may depend on NIC operation mode, requested offloads, etc. It is strongly
  recommended to omit this parameter and use the default values. Anyway,
  applications using this parameter should take into consideration that
  specifying an inconsistent value may prevent the NIC from sending packets.

  If ``txq_inline_min`` key is present the specified value (may be aligned
  by the driver in order not to exceed the limits and provide better descriptor
  space utilization) will be used by the driver and it is guaranteed that
  requested amount of data bytes are inlined into the WQE beside other inline
  settings. This key also may update ``txq_inline_max`` value (default
  or specified explicitly in devargs) to reserve the space for inline data.

  If ``txq_inline_min`` key is not present, the value may be queried by the
  driver from the NIC via DevX if this feature is available. If there is no DevX
  enabled/supported the value 18 (supposing L2 header including VLAN) is set
  for ConnectX-4 and ConnectX-4 Lx, and 0 is set by default for ConnectX-5
  and newer NICs. If a packet is shorter than the ``txq_inline_min`` value,
  the entire packet is inlined.

  For ConnectX-4 NICs, the driver does not allow specifying a value below 18
  (minimal L2 header, including VLAN); an error will be raised.

  For ConnectX-4 Lx NICs, it is allowed to specify values below 18, but
  this is not recommended and may prevent the NIC from sending packets in
  some configurations.

  For ConnectX-4 and ConnectX-4 Lx NICs, the automatically configured value
  is insufficient for some traffic, because they require at least all L2 headers
  to be inlined. For example, Q-in-Q adds 4 bytes to the default 18 bytes
  of Ethernet and VLAN, thus ``txq_inline_min`` must be set to 22.
  MPLS would add 4 bytes per label. The final value must account for all possible
  L2 encapsulation headers used in the particular environment.

  Please note that minimal data inlining disengages the eMPW feature (Enhanced
  Multi-Packet Write), because the latter does not support partial packet inlining.
  This is not very critical, since minimal data inlining is mostly required
  by ConnectX-4 and ConnectX-4 Lx, and these NICs do not support the eMPW feature.
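
  For example, for the Q-in-Q case described above on a ConnectX-4 Lx NIC
  (the PCI address is a placeholder)::

    <PCI_BDF>,txq_inline_min=22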

- ``txq_inline_max`` parameter [int]

  Specifies the maximal packet length to be completely inlined into the WQE
  Ethernet Segment for the ordinary SEND method. If a packet is larger than the
  specified value, the packet data won't be copied by the driver at all and the
  data buffer is addressed with a pointer. If the packet length is less or equal,
  all packet data will be copied into the WQE. This may improve PCI bandwidth
  utilization for short packets significantly but requires extra CPU cycles.

  The data inline feature is controlled by the number of Tx queues: if the number
  of Tx queues is larger than the ``txqs_min_inline`` key parameter, the inline
  feature is engaged; if there are not enough Tx queues (which means not enough
  CPU cores and scarce CPU resources), data inlining is not performed by the
  driver. Assigning ``txqs_min_inline`` a value of zero always enables data inlining.

  The default ``txq_inline_max`` value is 290. The specified value may be adjusted
  by the driver in order not to exceed the limit (930 bytes) and to provide better
  WQE space filling without gaps, the adjustment is reflected in the debug log.
  Also, the default value (290) may be decreased at run time if a large transmit
  queue size is requested and the hardware does not support a sufficient
  descriptor amount; in this case a warning is emitted. If the ``txq_inline_max``
  key is specified and the requested inline settings cannot be satisfied, an
  error will be raised.
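
  For example, to engage data inlining regardless of the Tx queue count
  and inline packets up to 128 bytes (values and PCI address are illustrative)::

    <PCI_BDF>,txqs_min_inline=0,txq_inline_max=128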

- ``txq_inline_mpw`` parameter [int]

  Specifies the maximal packet length to be completely inlined into the WQE for
  the Enhanced MPW method. If a packet is larger than the specified value, the
  packet data won't be copied, and the data buffer is addressed with a pointer.
  If the packet length is less or equal, all packet data will be copied into the
  WQE. This may improve PCI bandwidth utilization for short packets significantly
  but requires extra CPU cycles.

  The data inline feature is controlled by the number of Tx queues: if the number
  of Tx queues is larger than the ``txqs_min_inline`` key parameter, the inline
  feature is engaged; if there are not enough Tx queues (which means not enough
  CPU cores and scarce CPU resources), data inlining is not performed by the
  driver. Assigning ``txqs_min_inline`` a value of zero always enables data inlining.

  The default ``txq_inline_mpw`` value is 268. The specified value may be adjusted
  by the driver in order not to exceed the limit (930 bytes) and to provide better
  WQE space filling without gaps, the adjustment is reflected in the debug log.
  Since multiple packets may be included in the same WQE with the Enhanced
  Multi-Packet Write method and the overall WQE size is limited, it is not
  recommended to specify large values for ``txq_inline_mpw``. Also, the default
  value (268) may be decreased at run time if a large transmit queue size is
  requested and the hardware does not support a sufficient descriptor amount;
  in this case a warning is emitted. If the ``txq_inline_mpw`` key is specified
  and the requested inline settings cannot be satisfied, an error will be raised.

- ``txqs_max_vec`` parameter [int]

  Enable vectorized Tx only when the number of Tx queues is less than or
  equal to this value. This parameter is deprecated and ignored, kept
  for compatibility so as not to prevent the driver from probing.

- ``txq_mpw_hdr_dseg_en`` parameter [int]

  A nonzero value enables including two pointers in the first block of the Tx
  descriptor. The parameter is deprecated and ignored, kept for compatibility.

- ``txq_max_inline_len`` parameter [int]

  Maximum size of packet to be inlined. This limits the size of packet to
  be inlined. If the size of a packet is larger than the configured value, the
  packet isn't inlined even though there is enough space remaining in the
  descriptor. Instead, the packet is included by pointer. This parameter
  is deprecated and converted directly to ``txq_inline_mpw`` providing full
  compatibility. Valid only if the eMPW feature is engaged.

- ``txq_mpw_en`` parameter [int]

  A nonzero value enables Enhanced Multi-Packet Write (eMPW) for ConnectX-5,
  ConnectX-6, ConnectX-6 Dx, ConnectX-6 Lx, ConnectX-7, BlueField, BlueField-2,
  and BlueField-3. eMPW allows the Tx burst function to pack up multiple packets
  in a single descriptor session in order to save PCI bandwidth
  and improve performance at the cost of a slightly higher CPU usage.
  When ``txq_inline_mpw`` is set along with ``txq_mpw_en``,
  the Tx burst function copies entire packet data into the Tx descriptor
  instead of including a pointer to the packet.

  The Enhanced Multi-Packet Write feature is enabled by default if the NIC
  supports it and can be disabled by explicitly specifying 0 for the
  ``txq_mpw_en`` option. Also, if minimal data inlining is requested by a
  non-zero ``txq_inline_min`` option or reported by the NIC, the eMPW feature
  is disengaged.
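
  For example, to disable eMPW explicitly (the PCI address is a placeholder)::

    <PCI_BDF>,txq_mpw_en=0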

- ``tx_db_nc`` parameter [int]

  This parameter name is deprecated and ignored.
  The new name for this parameter is ``sq_db_nc``.
  See :ref:`common driver options <mlx5_common_driver_options>`.

- ``tx_pp`` parameter [int]

  If a nonzero value is specified, the driver creates all necessary internal
  objects to provide accurate packet send scheduling on mbuf timestamps.
  A positive value specifies the scheduling granularity in nanoseconds;
  packet sending will be accurate up to the specified granularity. The allowed
  range is from 500 to 1 million nanoseconds. A negative value specifies the
  granularity by its absolute value and engages a special test mode to check
  the scheduling rate. By default (if ``tx_pp`` is not specified), the send
  scheduling on timestamps feature is disabled.

  Starting with ConnectX-7, the capability to schedule traffic directly
  on the timestamp specified in the descriptor is provided;
  no extra objects are needed anymore, and the scheduling capability
  is advertised and handled regardless of the ``tx_pp`` parameter presence.

- ``tx_skew`` parameter [int]

  The parameter adjusts the send packet scheduling on timestamps and represents
  the average delay between the beginning of transmit descriptor processing
  by the hardware and the appearance of the actual packet data on the wire. The
  value should be provided in nanoseconds and is valid only if the ``tx_pp``
  parameter is specified. The default value is zero.
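
  For example, to enable send scheduling with 500 ns granularity and compensate
  for an assumed 8 ns wire delay (both values and the PCI address are
  illustrative placeholders)::

    <PCI_BDF>,tx_pp=500,tx_skew=8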

- ``tx_vec_en`` parameter [int]

  A nonzero value enables Tx vector on ConnectX-5, ConnectX-6, ConnectX-6 Dx,
  ConnectX-6 Lx, ConnectX-7, BlueField, BlueField-2, and BlueField-3 NICs
  if the number of global Tx queues on the port is less than ``txqs_max_vec``.
  The parameter is deprecated and ignored.

- ``rx_vec_en`` parameter [int]

  A nonzero value enables the vectorized Rx burst function if the port is not
  configured in multi-segment mode; otherwise this parameter is ignored.

  Enabled by default.

- ``vf_nl_en`` parameter [int]

  A nonzero value enables Netlink requests from the VF to add/remove MAC
  addresses or/and enable/disable promiscuous/all multicast on the Netdevice.
  Otherwise the relevant configuration must be run with Linux iproute2 tools.
  This is a prerequisite to receive this kind of traffic.

  Enabled by default; valid only on VF devices and ignored otherwise.

- ``l3_vxlan_en`` parameter [int]

  A nonzero value allows L3 VXLAN and VXLAN-GPE flow creation. To enable
  L3 VXLAN or VXLAN-GPE, the user has to configure the firmware and enable this
  parameter. This is a prerequisite to receive this kind of traffic.

  Disabled by default.

- ``dv_xmeta_en`` parameter [int]

  A nonzero value enables extensive flow metadata support if device is
  capable and driver supports it. This can enable extensive support of
  ``MARK`` and ``META`` item of ``rte_flow``. The newly introduced
  ``SET_TAG`` and ``SET_META`` actions do not depend on ``dv_xmeta_en``.

  There are some possible configurations, depending on parameter value:

  - 0, this is default value, defines the legacy mode, the ``MARK`` and
    ``META`` related actions and items operate only within NIC Tx and
    NIC Rx steering domains, no ``MARK`` and ``META`` information crosses
    the domain boundaries. The ``MARK`` item is 24 bits wide, the ``META``
    item is 32 bits wide and match supported on egress only
    when ``dv_flow_en`` = 1.

  - 1, this engages extensive metadata mode, the ``MARK`` and ``META``
    related actions and items operate within all supported steering domains,
    including FDB, ``MARK`` and ``META`` information may cross the domain
    boundaries. The ``MARK`` item is 24 bits wide, the ``META`` item width
    depends on kernel and firmware configurations and might be 0, 16 or
    32 bits. Within the NIC Tx domain, ``META`` data width is 32 bits for
    compatibility; the actual width of data transferred to the FDB domain
    depends on the kernel configuration and may vary. The actual supported
    width can be retrieved at runtime by a series of ``rte_flow_validate()``
    trials.

  - 2, this engages extensive metadata mode, the ``MARK`` and ``META``
    related actions and items operate within all supported steering domains,
    including FDB, ``MARK`` and ``META`` information may cross the domain
    boundaries. The ``META`` item is 32 bits wide, the ``MARK`` item width
    depends on kernel and firmware configurations and might be 0, 16 or
    24 bits. The actual supported width can be retrieved at runtime by a
    series of ``rte_flow_validate()`` trials.

  - 3, this engages tunnel offload mode. In E-Switch configuration, that
    mode implicitly activates ``dv_xmeta_en=1``.

  - 4, this mode is only supported in HWS (``dv_flow_en=2``).
    Copying 32-bit wide Rx/Tx metadata between the FDB and NIC domains is supported.
    The mark is only supported in NIC and no copy is supported.

  +------+-----------+-----------+-------------+-------------+
  | Mode | ``MARK``  | ``META``  | ``META`` Tx | FDB/Through |
  +======+===========+===========+=============+=============+
  | 0    | 24 bits   | 32 bits   | 32 bits     | no          |
  +------+-----------+-----------+-------------+-------------+
  | 1    | 24 bits   | vary 0-32 | 32 bits     | yes         |
  +------+-----------+-----------+-------------+-------------+
  | 2    | vary 0-24 | 32 bits   | 32 bits     | yes         |
  +------+-----------+-----------+-------------+-------------+

  If there is no E-Switch configuration the ``dv_xmeta_en`` parameter is
  ignored and the device is configured to operate in legacy mode (0).

  Disabled by default (set to 0).

  The Direct Verbs/Rules (engaged with ``dv_flow_en`` = 1) supports all
  of the extensive metadata features. The legacy Verbs supports FLAG and
  MARK metadata actions over NIC Rx steering domain only.

  Setting the META value to zero in a flow action means there is no item provided
  and the receiving datapath will not report in mbufs that metadata is present.
  Setting the MARK value to zero in a flow action means the zero FDIR ID value
  will be reported on packet reception.

  For the MARK action the last 16 values in the full range are reserved for
  internal PMD purposes (to emulate FLAG action). The valid range for the
  MARK action values is 0-0xFFEF for the 16-bit mode and 0-0xFFFFEF
  for the 24-bit mode, the flows with the MARK action value outside
  the specified range will be rejected.
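
  For example, to enable extensive metadata mode 1 together with DV flow steering
  (the PCI address is a placeholder)::

    <PCI_BDF>,dv_flow_en=1,dv_xmeta_en=1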

- ``dv_flow_en`` parameter [int]

  Value 0 means legacy Verbs flow offloading.

  Value 1 enables the DV flow steering assuming it is supported by the
  driver (requires rdma-core 24 or higher).

  Value 2 enables the WQE based hardware steering.
  In this mode, only queue-based flow management is supported.

  It is configured by default to 1 (DV flow steering) if supported.
  Otherwise, the value is 0 which indicates legacy Verbs flow offloading.
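
  For example, to select the WQE based hardware steering flow engine
  (the PCI address is a placeholder)::

    <PCI_BDF>,dv_flow_en=2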

- ``dv_esw_en`` parameter [int]

  A nonzero value enables E-Switch using Direct Rules.

  Enabled by default if supported.

- ``fdb_def_rule_en`` parameter [int]

  A non-zero value enables creating a dedicated rule on the E-Switch root table.
  This dedicated rule forwards all incoming packets into table 1.
  Other rules will be created at the original E-Switch table level plus one,
  which improves the flow insertion rate by skipping the root table managed by firmware.
  If set to 0, all rules will be created at the original E-Switch table level.

  By default, the PMD will set this value to 1.

- ``lacp_by_user`` parameter [int]

  A nonzero value enables the control of LACP traffic by the user application.
  When a bond exists in the driver, by default it should be managed by the
  kernel and therefore LACP traffic should be steered to the kernel.
  If this devarg is set to 1 it will allow the user to manage the bond by
  itself and not steer LACP traffic to the kernel.

  Disabled by default (set to 0).

- ``representor`` parameter [list]

  This parameter can be used to instantiate DPDK Ethernet devices from
  existing port (PF, VF or SF) representors configured on the device.

  It is a standard parameter whose format is described in
  :ref:`ethernet_device_standard_device_arguments`.

  For instance, to probe VF port representors 0 through 2::

    <PCI_BDF>,representor=vf[0-2]

  To probe SF port representors 0 through 2::

    <PCI_BDF>,representor=sf[0-2]

  To probe VF port representors 0 through 2 on both PFs of bonding device::

    <Primary_PCI_BDF>,representor=pf[0,1]vf[0-2]

- ``repr_matching_en`` parameter [int]

  - 0. If representor matching is disabled, then there will be no implicit
    item added. As a result, ingress flow rules will match traffic
    coming to any port, not only the port on which flow rule is created.
    Because of that, default flow rules for ingress traffic cannot be created
    and port starts in isolated mode by default. Port cannot be switched back
    to non-isolated mode.

  - 1. If representor matching is enabled (default setting),
    then each ingress pattern template has an implicit REPRESENTED_PORT
    item added. Flow rules based on this pattern template will match
    the vport associated with port on which rule is created.
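
  For example, to disable representor matching together with HW steering
  (the PCI address is a placeholder)::

    <PCI_BDF>,dv_flow_en=2,repr_matching_en=0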

- ``max_dump_files_num`` parameter [int]

  The maximum number of files per PMD entity that may be created for debug information.
  The files will be created in the /var/log directory or in the current directory.

  Set to 128 by default.

- ``lro_timeout_usec`` parameter [int]

  The maximum allowed duration of an LRO session, in micro-seconds.
  PMD will set the nearest value supported by HW, which is not bigger than
  the input ``lro_timeout_usec`` value.
  If this parameter is not specified, by default PMD will set
  the smallest value supported by HW.

- ``hp_buf_log_sz`` parameter [int]

  The total data buffer size of a hairpin queue (logarithmic form), in bytes.
  PMD will set the data buffer size to 2 ** ``hp_buf_log_sz``, both for RX & TX.
  The valid range of the value is specified by the firmware, and initialization
  will fail if the value is out of range.
  The range of the value is from 11 to 19 right now, and the supported frame
  size of a single packet for hairpin is from 512B to 128KB. It might change if
  a different firmware release is used. Using a small value reduces
  memory consumption but will not work with large frames. If the value is
  too large, memory consumption will be high and some potential performance
  degradation will be introduced.
  By default, the PMD will set this value to 16, which means that 9KB jumbo
  frames will be supported.

- ``reclaim_mem_mode`` parameter [int]

  Caching some resources on flow destroy helps make flow re-creation more efficient,
  while some systems may require that all the resources can be reclaimed after
  a flow is destroyed.
  The ``reclaim_mem_mode`` parameter provides the option for the user to configure
  whether the resource cache is needed or not.

  There are three options to choose:

  - 0. The flow resources will be cached as usual. Caching helps with the
    flow insertion rate.

  - 1. It will only enable the DPDK PMD level resources reclaim.

  - 2. Both the DPDK PMD level and the rdma-core low level will be configured
    in reclaim mode.

  By default, the PMD will set this value to 0.

- ``decap_en`` parameter [int]

  Some devices do not support FCS (frame checksum) scattering for
  tunnel-decapsulated packets.
  If set to 0, this option forces the FCS feature and rejects tunnel
  decapsulation in the flow engine for such devices.

  By default, the PMD will set this value to 1.

- ``allow_duplicate_pattern`` parameter [int]

  There are two options to choose:

  - 0. Prevent insertion of rules with the same pattern items on a non-root table.
    In this case, only the first rule is inserted and the following rules are
    rejected with error code ``EEXIST``.

  - 1. Allow insertion of rules with the same pattern items.
    In this case, all rules are inserted but only the first rule takes effect;
    the next rule takes effect only if the previous rules are deleted.

  By default, the PMD will set this value to 1.


Multiport E-Switch
------------------

In standard deployments of NVIDIA ConnectX and BlueField HCAs, where embedded switch is enabled,
each physical port is associated with a single switching domain.
Only PFs, VFs and SFs related to that physical port are connected to this domain
and offloaded flow rules are allowed to steer traffic only between the entities in the given domain.

The following diagram gives a high-level overview of this architecture::

       .---. .------. .------. .---. .------. .------.
       |PF0| |PF0VFi| |PF0SFi| |PF1| |PF1VFi| |PF1SFi|
       .-+-. .--+---. .--+---. .-+-. .--+---. .--+---.
         |      |        |       |      |        |
     .---|------|--------|-------|------|--------|---------.
     |   |      |        |       |      |        |      HCA|
     | .-+------+--------+---. .-+------+--------+---.     |
     | |                     | |                     |     |
     | |      E-Switch       | |     E-Switch        |     |
     | |         PF0         | |        PF1          |     |
     | |                     | |                     |     |
     | .---------+-----------. .--------+------------.     |
     |           |                      |                  |
     .--------+--+---+---------------+--+---+--------------.
              |      |               |      |
              | PHY0 |               | PHY1 |
              |      |               |      |
              .------.               .------.

Multiport E-Switch is a deployment scenario where:

- All physical ports, PFs, VFs and SFs share the same switching domain.
- Each physical port gets a separate representor port.
- Traffic can be matched or forwarded explicitly between any of the entities
  connected to the domain.

The following diagram gives a high-level overview of this architecture::

       .---. .------. .------. .---. .------. .------.
       |PF0| |PF0VFi| |PF0SFi| |PF1| |PF1VFi| |PF1SFi|
       .-+-. .--+---. .--+---. .-+-. .--+---. .--+---.
         |      |        |       |      |        |
     .---|------|--------|-------|------|--------|---------.
     |   |      |        |       |      |        |      HCA|
     | .-+------+--------+-------+------+--------+---.     |
     | |                                             |     |
     | |                   Shared                    |     |
     | |                  E-Switch                   |     |
     | |                                             |     |
     | .---------+----------------------+------------.     |
     |           |                      |                  |
     .--------+--+---+---------------+--+---+--------------.
              |      |               |      |
              | PHY0 |               | PHY1 |
              |      |               |      |
              .------.               .------.

In this deployment a single application can control the switching and forwarding behavior for all
entities on the HCA.

With this configuration, mlx5 PMD supports:

- matching traffic coming from physical port, PF, VF or SF using REPRESENTED_PORT items;
- matching traffic coming from E-Switch manager
  using REPRESENTED_PORT item with port ID ``UINT16_MAX``;
- forwarding traffic to physical port, PF, VF or SF using REPRESENTED_PORT actions;

Requirements
~~~~~~~~~~~~

Supported HCAs:

- ConnectX family: ConnectX-6 Dx and above.
- BlueField family: BlueField-2 and above.
- FW version: at least ``XX.37.1014``.

Supported mlx5 kernel modules versions:

- Upstream Linux - from version 6.3 with CONFIG_NET_TC_SKB_EXT and CONFIG_MLX5_CLS_ACT enabled.
- Modules packaged in MLNX_OFED - from version v23.04-0.5.3.3.

Configuration
~~~~~~~~~~~~~

#. Apply required FW configuration::

      sudo mlxconfig -d /dev/mst/mt4125_pciconf0 set LAG_RESOURCE_ALLOCATION=1

#. Reset FW or cold reboot the host.

#. Switch E-Switch mode on all of the PFs to ``switchdev`` mode::

      sudo devlink dev eswitch set pci/0000:08:00.0 mode switchdev
      sudo devlink dev eswitch set pci/0000:08:00.1 mode switchdev

#. Enable Multiport E-Switch on all of the PFs::

      sudo devlink dev param set pci/0000:08:00.0 name esw_multiport value true cmode runtime
      sudo devlink dev param set pci/0000:08:00.1 name esw_multiport value true cmode runtime

#. Configure required number of VFs/SFs::

      echo 4 | sudo tee /sys/class/net/eth2/device/sriov_numvfs
      echo 4 | sudo tee /sys/class/net/eth3/device/sriov_numvfs

#. Start testpmd and verify that all ports are visible::

      $ sudo dpdk-testpmd -a 08:00.0,dv_flow_en=2,representor=pf0-1vf0-3 -- -i
      testpmd> show port summary all
      Number of available ports: 10
      Port MAC Address       Name         Driver         Status   Link
      0    E8:EB:D5:18:22:BC 08:00.0_p0   mlx5_pci       up       200 Gbps
      1    E8:EB:D5:18:22:BD 08:00.0_p1   mlx5_pci       up       200 Gbps
      2    D2:F6:43:0B:9E:19 08:00.0_representor_c0pf0vf0 mlx5_pci       up       200 Gbps
      3    E6:42:27:B7:68:BD 08:00.0_representor_c0pf0vf1 mlx5_pci       up       200 Gbps
      4    A6:5B:7F:8B:B8:47 08:00.0_representor_c0pf0vf2 mlx5_pci       up       200 Gbps
      5    12:93:50:45:89:02 08:00.0_representor_c0pf0vf3 mlx5_pci       up       200 Gbps
      6    06:D3:B2:79:FE:AC 08:00.0_representor_c0pf1vf0 mlx5_pci       up       200 Gbps
      7    12:FC:08:E4:C2:CA 08:00.0_representor_c0pf1vf1 mlx5_pci       up       200 Gbps
      8    8E:A9:9A:D0:35:4C 08:00.0_representor_c0pf1vf2 mlx5_pci       up       200 Gbps
      9    E6:35:83:1F:B0:A9 08:00.0_representor_c0pf1vf3 mlx5_pci       up       200 Gbps

Limitations
~~~~~~~~~~~

- Multiport E-Switch is not supported on Windows.
- Multiport E-Switch is supported only with HW Steering flow engine (``dv_flow_en=2``).
- Matching traffic coming from a physical port and forwarding it to a physical port
  (either the same or another one) is not supported.

  In order to achieve such functionality, an application has to set up hairpin queues
  between physical port representors and forward the traffic using hairpin queues.


Sub-Function
------------

See :ref:`mlx5_sub_function`.

Sub-Function representor support
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

A SF netdev supports E-Switch representation offload
similar to PF and VF representors.
Use <sfnum> to probe SF representor::

   testpmd> port attach <PCI_BDF>,representor=sf<sfnum>,dv_flow_en=1


Performance tuning
------------------

#. Configure aggressive CQE Zipping for maximum performance::

        mlxconfig -d <mst device> s CQE_COMPRESSION=1

   To set it back to the default CQE Zipping mode use::

        mlxconfig -d <mst device> s CQE_COMPRESSION=0

#. In case of virtualization:

   - Make sure that hypervisor kernel is 3.16 or newer.
   - Configure boot with ``iommu=pt``.
   - Use 1G huge pages.
   - Make sure to allocate a VM on huge pages.
   - Make sure to set CPU pinning.

#. Use CPUs from the NUMA node local to the PCIe adapter
   for better performance. For VMs, verify that the right CPU
   and NUMA node are pinned according to the above. Run::

        lstopo-no-graphics --merge

   to identify the NUMA node to which the PCIe adapter is connected.

#. If more than one adapter is used, and root complex capabilities allow
   putting both adapters on the same NUMA node without PCI bandwidth degradation,
   it is recommended to locate both adapters on the same NUMA node.
   This is in order to forward packets from one to the other without
   a NUMA performance penalty.

#. Disable pause frames::

        ethtool -A <netdev> rx off tx off

#. Verify IO non-posted prefetch is disabled by default. This can be checked
   via the BIOS configuration. Please contact your server provider for more
   information about the settings.

   .. note::

        On some machines, depending on the machine integrator, it is beneficial
        to set the PCI max read request parameter to 1K. This can be
        done in the following way:

        To query the read request size use::

                setpci -s <NIC PCI address> 68.w

        If the output is different than 3XXX, set it by::

                setpci -s <NIC PCI address> 68.w=3XXX

        The XXX can be different on different systems. Make sure to configure
        according to the setpci output.

#. To minimize overhead of searching Memory Regions:

   - ``--socket-mem`` is recommended to pin memory with a predictable amount.
   - Configure per-lcore cache when creating Mempools for packet buffer.
   - Refrain from dynamically allocating/freeing memory in run-time.

Rx burst functions
------------------

There are multiple Rx burst functions with different advantages and limitations.

.. table:: Rx burst functions

   +-------------------+------------------------+---------+-----------------+------+-------+
   || Function Name    || Enabler               || Scatter|| Error Recovery || CQE || Large|
   |                   |                        |         |                 || comp|| MTU  |
   +===================+========================+=========+=================+======+=======+
   | rx_burst          | rx_vec_en=0            |   Yes   | Yes             |  Yes |  Yes  |
   +-------------------+------------------------+---------+-----------------+------+-------+
   | rx_burst_vec      | rx_vec_en=1 (default)  |   No    | if CQE comp off |  Yes |  No   |
   +-------------------+------------------------+---------+-----------------+------+-------+
   | rx_burst_mprq     || mprq_en=1             |   No    | Yes             |  Yes |  Yes  |
   |                   || RxQs >= rxqs_min_mprq |         |                 |      |       |
   +-------------------+------------------------+---------+-----------------+------+-------+
   | rx_burst_mprq_vec || rx_vec_en=1 (default) |   No    | if CQE comp off |  Yes |  Yes  |
   |                   || mprq_en=1             |         |                 |      |       |
   |                   || RxQs >= rxqs_min_mprq |         |                 |      |       |
   +-------------------+------------------------+---------+-----------------+------+-------+

.. _mlx5_offloads_support:

Supported hardware offloads
---------------------------

Below tables show offload support depending on hardware, firmware,
and Linux software support.

The :ref:`Linux prerequisites <mlx5_linux_prerequisites>`
are Linux kernel and rdma-core libraries.
These dependencies are also packaged in MLNX_OFED or MLNX_EN,
shortened below as "OFED".

.. table:: Minimal SW/HW versions for queue offloads

   ============== ===== ===== ========= ===== ========== =============
   Offload        DPDK  Linux rdma-core OFED   firmware   hardware
   ============== ===== ===== ========= ===== ========== =============
   common base    17.11  4.14    16     4.2-1 12.21.1000 ConnectX-4
   checksums      17.11  4.14    16     4.2-1 12.21.1000 ConnectX-4
   Rx timestamp   17.11  4.14    16     4.2-1 12.21.1000 ConnectX-4
   TSO            17.11  4.14    16     4.2-1 12.21.1000 ConnectX-4
   LRO            19.08  N/A     N/A    4.6-4 16.25.6406 ConnectX-5
   Tx scheduling  20.08  N/A     N/A    5.1-2 22.28.2006 ConnectX-6 Dx
   Buffer Split   20.11  N/A     N/A    5.1-2 16.28.2006 ConnectX-5
   ============== ===== ===== ========= ===== ========== =============

.. table:: Minimal SW/HW versions for rte_flow offloads

   +-----------------------+-----------------+-----------------+
   | Offload               | with E-Switch   | with NIC        |
   +=======================+=================+=================+
   | Count                 | | DPDK 19.05    | | DPDK 19.02    |
   |                       | | OFED 4.6      | | OFED 4.6      |
   |                       | | rdma-core 24  | | rdma-core 23  |
   |                       | | ConnectX-5    | | ConnectX-5    |
   +-----------------------+-----------------+-----------------+
   | Drop                  | | DPDK 19.05    | | DPDK 18.11    |
   |                       | | OFED 4.6      | | OFED 4.5      |
   |                       | | rdma-core 24  | | rdma-core 23  |
   |                       | | ConnectX-5    | | ConnectX-4    |
   +-----------------------+-----------------+-----------------+
   | Queue / RSS           | |               | | DPDK 18.11    |
   |                       | |     N/A       | | OFED 4.5      |
   |                       | |               | | rdma-core 23  |
   |                       | |               | | ConnectX-4    |
   +-----------------------+-----------------+-----------------+
   | Shared action         | |               | |               |
   |                       | | :numref:`sact`| | :numref:`sact`|
   |                       | |               | |               |
   |                       | |               | |               |
   +-----------------------+-----------------+-----------------+
   | | VLAN                | | DPDK 19.11    | | DPDK 19.11    |
   | | (of_pop_vlan /      | | OFED 4.7-1    | | OFED 4.7-1    |
   | | of_push_vlan /      | | ConnectX-5    | | ConnectX-5    |
   | | of_set_vlan_pcp /   | |               | |               |
   | | of_set_vlan_vid)    | |               | |               |
   +-----------------------+-----------------+-----------------+
   | | VLAN                | | DPDK 21.05    | |               |
   | | ingress and /       | | OFED 5.3      | |    N/A        |
   | | of_push_vlan /      | | ConnectX-6 Dx | |               |
   +-----------------------+-----------------+-----------------+
   | | VLAN                | | DPDK 21.05    | |               |
   | | egress and /        | | OFED 5.3      | |    N/A        |
   | | of_pop_vlan /       | | ConnectX-6 Dx | |               |
   +-----------------------+-----------------+-----------------+
   | Encapsulation         | | DPDK 19.05    | | DPDK 19.02    |
   | (VXLAN / NVGRE / RAW) | | OFED 4.7-1    | | OFED 4.6      |
   |                       | | rdma-core 24  | | rdma-core 23  |
   |                       | | ConnectX-5    | | ConnectX-5    |
   +-----------------------+-----------------+-----------------+
   | Encapsulation         | | DPDK 19.11    | | DPDK 19.11    |
   | GENEVE                | | OFED 4.7-3    | | OFED 4.7-3    |
   |                       | | rdma-core 27  | | rdma-core 27  |
   |                       | | ConnectX-5    | | ConnectX-5    |
   +-----------------------+-----------------+-----------------+
   | Tunnel Offload        | |  DPDK 20.11   | | DPDK 20.11    |
   |                       | |  OFED 5.1-2   | | OFED 5.1-2    |
   |                       | |  rdma-core 32 | | N/A           |
   |                       | |  ConnectX-5   | | ConnectX-5    |
   +-----------------------+-----------------+-----------------+
   | | Header rewrite      | | DPDK 19.05    | | DPDK 19.02    |
   | | (set_ipv4_src /     | | OFED 4.7-1    | | OFED 4.7-1    |
   | | set_ipv4_dst /      | | rdma-core 24  | | rdma-core 24  |
   | | set_ipv6_src /      | | ConnectX-5    | | ConnectX-5    |
   | | set_ipv6_dst /      | |               | |               |
   | | set_tp_src /        | |               | |               |
   | | set_tp_dst /        | |               | |               |
   | | dec_ttl /           | |               | |               |
   | | set_ttl /           | |               | |               |
   | | set_mac_src /       | |               | |               |
   | | set_mac_dst)        | |               | |               |
   +-----------------------+-----------------+-----------------+
   | | Header rewrite      | | DPDK 20.02    | | DPDK 20.02    |
   | | (set_dscp)          | | OFED 5.0      | | OFED 5.0      |
   | |                     | | rdma-core 24  | | rdma-core 24  |
   | |                     | | ConnectX-5    | | ConnectX-5    |
   +-----------------------+-----------------+-----------------+
   | | Header rewrite      | | DPDK 22.07    | | DPDK 22.07    |
   | | (ipv4_ecn /         | | OFED 5.6-2    | | OFED 5.6-2    |
   | | ipv6_ecn)           | | rdma-core 41  | | rdma-core 41  |
   | |                     | | ConnectX-5    | | ConnectX-5    |
   +-----------------------+-----------------+-----------------+
   | Jump                  | | DPDK 19.05    | | DPDK 19.02    |
   |                       | | OFED 4.7-1    | | OFED 4.7-1    |
   |                       | | rdma-core 24  | | N/A           |
   |                       | | ConnectX-5    | | ConnectX-5    |
   +-----------------------+-----------------+-----------------+
   | Mark / Flag           | | DPDK 19.05    | | DPDK 18.11    |
   |                       | | OFED 4.6      | | OFED 4.5      |
   |                       | | rdma-core 24  | | rdma-core 23  |
   |                       | | ConnectX-5    | | ConnectX-4    |
   +-----------------------+-----------------+-----------------+
   | Meta data             | |  DPDK 19.11   | | DPDK 19.11    |
   |                       | |  OFED 4.7-3   | | OFED 4.7-3    |
   |                       | |  rdma-core 26 | | rdma-core 26  |
   |                       | |  ConnectX-5   | | ConnectX-5    |
   +-----------------------+-----------------+-----------------+
   | Port ID               | | DPDK 19.05    |     | N/A       |
   |                       | | OFED 4.7-1    |     | N/A       |
   |                       | | rdma-core 24  |     | N/A       |
   |                       | | ConnectX-5    |     | N/A       |
   +-----------------------+-----------------+-----------------+
   | Hairpin               | |               | | DPDK 19.11    |
   |                       | |     N/A       | | OFED 4.7-3    |
   |                       | |               | | rdma-core 26  |
   |                       | |               | | ConnectX-5    |
   +-----------------------+-----------------+-----------------+
   | 2-port Hairpin        | |               | | DPDK 20.11    |
   |                       | |     N/A       | | OFED 5.1-2    |
   |                       | |               | | N/A           |
   |                       | |               | | ConnectX-5    |
   +-----------------------+-----------------+-----------------+
   | Metering              | |  DPDK 19.11   | | DPDK 19.11    |
   |                       | |  OFED 4.7-3   | | OFED 4.7-3    |
   |                       | |  rdma-core 26 | | rdma-core 26  |
   |                       | |  ConnectX-5   | | ConnectX-5    |
   +-----------------------+-----------------+-----------------+
   | ASO Metering          | |  DPDK 21.05   | | DPDK 21.05    |
   |                       | |  OFED 5.3     | | OFED 5.3      |
   |                       | |  rdma-core 33 | | rdma-core 33  |
   |                       | |  ConnectX-6 Dx| | ConnectX-6 Dx |
   +-----------------------+-----------------+-----------------+
   | Metering Hierarchy    | |  DPDK 21.08   | | DPDK 21.08    |
   |                       | |  OFED 5.3     | | OFED 5.3      |
   |                       | |  N/A          | | N/A           |
   |                       | |  ConnectX-6 Dx| | ConnectX-6 Dx |
   +-----------------------+-----------------+-----------------+
   | Sampling              | |  DPDK 20.11   | | DPDK 20.11    |
   |                       | |  OFED 5.1-2   | | OFED 5.1-2    |
   |                       | |  rdma-core 32 | | N/A           |
   |                       | |  ConnectX-5   | | ConnectX-5    |
   +-----------------------+-----------------+-----------------+
   | Encapsulation         | |  DPDK 21.02   | | DPDK 21.02    |
   | GTP PSC               | |  OFED 5.2     | | OFED 5.2      |
   |                       | |  rdma-core 35 | | rdma-core 35  |
   |                       | |  ConnectX-6 Dx| | ConnectX-6 Dx |
   +-----------------------+-----------------+-----------------+
   | Encapsulation         | | DPDK 21.02    | | DPDK 21.02    |
   | GENEVE TLV option     | | OFED 5.2      | | OFED 5.2      |
   |                       | | rdma-core 34  | | rdma-core 34  |
   |                       | | ConnectX-6 Dx | | ConnectX-6 Dx |
   +-----------------------+-----------------+-----------------+
   | Modify Field          | | DPDK 21.02    | | DPDK 21.02    |
   |                       | | OFED 5.2      | | OFED 5.2      |
   |                       | | rdma-core 35  | | rdma-core 35  |
   |                       | | ConnectX-5    | | ConnectX-5    |
   +-----------------------+-----------------+-----------------+
   | Connection tracking   | |               | | DPDK 21.05    |
   |                       | |     N/A       | | OFED 5.3      |
   |                       | |               | | rdma-core 35  |
   |                       | |               | | ConnectX-6 Dx |
   +-----------------------+-----------------+-----------------+

.. table:: Minimal SW/HW versions for shared action offload
   :name: sact

   +-----------------------+-----------------+-----------------+
   | Shared Action         | with E-Switch   | with NIC        |
   +=======================+=================+=================+
   | RSS                   | |               | | DPDK 20.11    |
   |                       | |     N/A       | | OFED 5.2      |
   |                       | |               | | rdma-core 33  |
   |                       | |               | | ConnectX-5    |
   +-----------------------+-----------------+-----------------+
   | Age                   | | DPDK 20.11    | | DPDK 20.11    |
   |                       | | OFED 5.2      | | OFED 5.2      |
   |                       | | rdma-core 32  | | rdma-core 32  |
   |                       | | ConnectX-6 Dx | | ConnectX-6 Dx |
   +-----------------------+-----------------+-----------------+
   | Count                 | | DPDK 21.05    | | DPDK 21.05    |
   |                       | | OFED 4.6      | | OFED 4.6      |
   |                       | | rdma-core 24  | | rdma-core 23  |
   |                       | | ConnectX-5    | | ConnectX-5    |
   +-----------------------+-----------------+-----------------+

.. table:: Minimal SW/HW versions for flow template API

   +-----------------+--------------------+--------------------+
   | DPDK            | NIC                | Firmware           |
   +=================+====================+====================+
   | 22.11           | ConnectX-6 Dx      | xx.35.1012         |
   +-----------------+--------------------+--------------------+

Notes for metadata
------------------

MARK and META items are interrelated with the datapath - they might move from/to
the applications in mbuf fields. Hence, the zero value for these items has a
special meaning - it means "no metadata are provided"; non-zero values are
treated by applications and the PMD as valid ones.

Moreover, in the flow engine domain the value zero is acceptable to match and
set, and zero values should be allowed as rte_flow parameters for the
META and MARK items and actions. At the same time, a zero mask has no meaning
and should be rejected at the validation stage.

Notes for rte_flow
------------------

Flows are not cached in the driver.
When stopping a device port, all the flows created on this port from the
application will be flushed automatically in the background.
After stopping the device port, all flows on this port become invalid and
not represented in the system.
All references to these flows held by the application should be discarded
directly but neither destroyed nor flushed.

The application should re-create the flows as required after the port restart.


Notes for flow counters
-----------------------

mlx5 PMD supports the ``COUNT`` flow action,
which provides an ability to count packets (and bytes)
matched against a given flow rule.
This section describes the high level overview of
how this support is implemented and limitations.

HW steering flow engine
~~~~~~~~~~~~~~~~~~~~~~~

Flow counters are allocated from HW in bulks.
A set of bulks forms a flow counter pool managed by PMD.
When flow counters are queried from HW,
each counter is identified by an offset in a given bulk.
Querying HW flow counter requires sending a request to HW,
which will request a read of counter values for given offsets.
HW will asynchronously provide these values through a DMA write.

In order to optimize HW to SW communication,
these requests are handled in a separate counter service thread
spawned by mlx5 PMD.
This service thread will refresh the counter values stored in memory,
in cycles, each spanning ``svc_cycle_time`` milliseconds.
By default, ``svc_cycle_time`` is set to 500.
When applications query the ``COUNT`` flow action,
PMD returns the values stored in host memory.

mlx5 PMD manages 3 global rings of allocated counter offsets:

- ``free`` ring - Counters which were not used at all.
- ``wait_reset`` ring - Counters which were used in some flow rules,
  but were recently freed (flow rule was destroyed
  or an indirect action was destroyed).
  Since the count value might have changed
  between the last counter service thread cycle and the moment it was freed,
  the value in host memory might be stale.
  During the next service thread cycle,
  such counters will be moved to ``reuse`` ring.
- ``reuse`` ring - Counters which were used at least once
  and can be reused in new flow rules.

When counters are assigned to a flow rule (or allocated to indirect action),
the PMD first tries to fetch a counter from ``reuse`` ring.
If it's empty, the PMD fetches a counter from ``free`` ring.

The counter service thread works as follows:

#. Record counters stored in ``wait_reset`` ring.
#. Read values of all counters which were used at least once
   or are currently in use.
#. Move recorded counters from ``wait_reset`` to ``reuse`` ring.
#. Sleep for the remainder of the cycle,
   i.e. ``svc_cycle_time`` minus the query time, in milliseconds.
#. Repeat.

Because freeing a counter (by destroying a flow rule or destroying indirect action)
does not immediately make it available for the application,
the PMD might return:

- ``ENOENT`` if no counter is available in ``free``, ``reuse``
  or ``wait_reset`` rings.
  No counter will be available until the application releases some of them.
- ``EAGAIN`` if no counter is available in ``free`` and ``reuse`` rings,
  but there are counters in ``wait_reset`` ring.
  This means that after the next service thread cycle new counters will be available.

The application has to be aware that flow rule creation or indirect action creation
might need to be retried.
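
A minimal retry loop for this case might look as follows (a sketch only;
``attr``, ``pattern`` and ``actions`` are assumed to be prepared by the
application, and the 500 ms delay assumes the default ``svc_cycle_time``):

.. code-block:: c

   #include <errno.h>
   #include <rte_cycles.h>
   #include <rte_errno.h>
   #include <rte_flow.h>

   static struct rte_flow *
   create_counted_rule(uint16_t port_id,
                       const struct rte_flow_attr *attr,
                       const struct rte_flow_item pattern[],
                       const struct rte_flow_action actions[],
                       struct rte_flow_error *error)
   {
           struct rte_flow *flow;

           do {
                   flow = rte_flow_create(port_id, attr, pattern, actions, error);
                   if (flow == NULL && rte_errno == EAGAIN)
                           rte_delay_ms(500); /* roughly one counter service cycle */
           } while (flow == NULL && rte_errno == EAGAIN);

           /* NULL with rte_errno == ENOENT means all counters are in use. */
           return flow;
   }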


Notes for hairpin
-----------------

NVIDIA ConnectX and BlueField devices support
specifying memory placement for hairpin Rx and Tx queues.
This feature requires NVIDIA MLNX_OFED 5.8.

By default, data buffers and packet descriptors for hairpin queues
are placed in device memory
which is shared with other resources (e.g. flow rules).

Starting with DPDK 22.11 and NVIDIA MLNX_OFED 5.8,
applications are allowed to:

#. Place data buffers and Rx packet descriptors in dedicated device memory.
   The application can request this configuration
   through the ``use_locked_device_memory`` configuration option.

   Placing data buffers and Rx packet descriptors in dedicated device memory
   can decrease latency on hairpinned traffic,
   since traffic processing for the hairpin queue will not be memory starved.

   However, reserving device memory for hairpin Rx queues
   may decrease throughput under heavy load,
   since fewer resources will be available on the device.

   This option is supported only for Rx hairpin queues.

#. Place Tx packet descriptors in host memory.
   The application can request this configuration
   through the ``use_rte_memory`` configuration option.

   Placing Tx packet descriptors in host memory can increase traffic throughput.
   This leaves more device resources available for other purposes,
   which reduces memory contention on the device.
   A side effect of this option is a visible increase in latency,
   since each packet incurs additional PCI transactions.

   This option is supported only for Tx hairpin queues.
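
A minimal configuration sketch covering both placement options, assuming a
2-port hairpin (an Rx hairpin queue on port 0 bound to a Tx hairpin queue on
port 1) with hypothetical queue indexes and no error handling:

.. code-block:: c

   #include <rte_ethdev.h>

   static void
   setup_hairpin_queues(uint16_t rxq_id, uint16_t txq_id)
   {
           struct rte_eth_hairpin_conf rxq_conf = {
                   .peer_count = 1,
                   .use_locked_device_memory = 1, /* dedicated device memory */
                   .peers = { { .port = 1, .queue = txq_id } },
           };
           struct rte_eth_hairpin_conf txq_conf = {
                   .peer_count = 1,
                   .use_rte_memory = 1, /* Tx descriptors in host memory */
                   .peers = { { .port = 0, .queue = rxq_id } },
           };

           /* Hairpin queues are set up after the regular Rx/Tx queues,
            * before starting the ports. */
           rte_eth_rx_hairpin_queue_setup(0, rxq_id, 512, &rxq_conf);
           rte_eth_tx_hairpin_queue_setup(1, txq_id, 512, &txq_conf);
   }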


Notes for testpmd
-----------------

Compared to librte_net_mlx4 that implements a single RSS configuration per
port, librte_net_mlx5 supports per-protocol RSS configuration.

Since ``testpmd`` defaults to IP RSS mode and there is currently no
command-line parameter to enable additional protocols (UDP and TCP as well
as IP), the following commands must be entered from its CLI to get the same
behavior as librte_net_mlx4::

   > port stop all
   > port config all rss all
   > port start all

Usage example
-------------

This section demonstrates how to launch **testpmd** with NVIDIA
ConnectX-4/ConnectX-5/ConnectX-6/BlueField devices managed by librte_net_mlx5.

#. Load the kernel modules::

      modprobe -a ib_uverbs mlx5_core mlx5_ib

   Alternatively if MLNX_OFED/MLNX_EN is fully installed, the following script
   can be run::

      /etc/init.d/openibd restart

   .. note::

      User space I/O kernel modules (uio and igb_uio) are not used and do
      not have to be loaded.

#. Make sure Ethernet interfaces are in working order and linked to kernel
   verbs. Related sysfs entries should be present::

      ls -d /sys/class/net/*/device/infiniband_verbs/uverbs* | cut -d / -f 5

   Example output::

      eth30
      eth31
      eth32
      eth33

#. Optionally, retrieve their PCI bus addresses to be used with the allow list::

      {
          for intf in eth2 eth3 eth4 eth5;
          do
              (cd "/sys/class/net/${intf}/device/" && pwd -P);
          done;
      } |
      sed -n 's,.*/\(.*\),-a \1,p'

   Example output::

      -a 0000:05:00.1
      -a 0000:06:00.0
      -a 0000:06:00.1
      -a 0000:05:00.0

#. Request huge pages::

      dpdk-hugepages.py --setup 2G

#. Start testpmd with basic parameters::

      dpdk-testpmd -l 8-15 -n 4 -a 05:00.0 -a 05:00.1 -a 06:00.0 -a 06:00.1 -- --rxq=2 --txq=2 -i

   Example output::

      [...]
      EAL: PCI device 0000:05:00.0 on NUMA socket 0
      EAL:   probe driver: 15b3:1013 librte_net_mlx5
      PMD: librte_net_mlx5: PCI information matches, using device "mlx5_0" (VF: false)
      PMD: librte_net_mlx5: 1 port(s) detected
      PMD: librte_net_mlx5: port 1 MAC address is e4:1d:2d:e7:0c:fe
      EAL: PCI device 0000:05:00.1 on NUMA socket 0
      EAL:   probe driver: 15b3:1013 librte_net_mlx5
      PMD: librte_net_mlx5: PCI information matches, using device "mlx5_1" (VF: false)
      PMD: librte_net_mlx5: 1 port(s) detected
      PMD: librte_net_mlx5: port 1 MAC address is e4:1d:2d:e7:0c:ff
      EAL: PCI device 0000:06:00.0 on NUMA socket 0
      EAL:   probe driver: 15b3:1013 librte_net_mlx5
      PMD: librte_net_mlx5: PCI information matches, using device "mlx5_2" (VF: false)
      PMD: librte_net_mlx5: 1 port(s) detected
      PMD: librte_net_mlx5: port 1 MAC address is e4:1d:2d:e7:0c:fa
      EAL: PCI device 0000:06:00.1 on NUMA socket 0
      EAL:   probe driver: 15b3:1013 librte_net_mlx5
      PMD: librte_net_mlx5: PCI information matches, using device "mlx5_3" (VF: false)
      PMD: librte_net_mlx5: 1 port(s) detected
      PMD: librte_net_mlx5: port 1 MAC address is e4:1d:2d:e7:0c:fb
      Interactive-mode selected
      Configuring Port 0 (socket 0)
      PMD: librte_net_mlx5: 0x8cba80: TX queues number update: 0 -> 2
      PMD: librte_net_mlx5: 0x8cba80: RX queues number update: 0 -> 2
      Port 0: E4:1D:2D:E7:0C:FE
      Configuring Port 1 (socket 0)
      PMD: librte_net_mlx5: 0x8ccac8: TX queues number update: 0 -> 2
      PMD: librte_net_mlx5: 0x8ccac8: RX queues number update: 0 -> 2
      Port 1: E4:1D:2D:E7:0C:FF
      Configuring Port 2 (socket 0)
      PMD: librte_net_mlx5: 0x8cdb10: TX queues number update: 0 -> 2
      PMD: librte_net_mlx5: 0x8cdb10: RX queues number update: 0 -> 2
      Port 2: E4:1D:2D:E7:0C:FA
      Configuring Port 3 (socket 0)
      PMD: librte_net_mlx5: 0x8ceb58: TX queues number update: 0 -> 2
      PMD: librte_net_mlx5: 0x8ceb58: RX queues number update: 0 -> 2
      Port 3: E4:1D:2D:E7:0C:FB
      Checking link statuses...
      Port 0 Link Up - speed 40000 Mbps - full-duplex
      Port 1 Link Up - speed 40000 Mbps - full-duplex
      Port 2 Link Up - speed 10000 Mbps - full-duplex
      Port 3 Link Up - speed 10000 Mbps - full-duplex
      Done
      testpmd>

How to dump flows
-----------------

This section demonstrates how to dump flows. Currently, it's possible to dump
all flows with the assistance of external tools.

#. There are two ways to get the raw flow file:

   - Using the testpmd CLI:

   .. code-block:: console

       To dump all flows:
       testpmd> flow dump <port> all <output_file>
       To dump one flow:
       testpmd> flow dump <port> rule <rule_id> <output_file>

   - Calling the ``rte_flow_dev_dump()`` API (see the sketch after this list):

   .. code-block:: console

       rte_flow_dev_dump(port, flow, file, NULL);

#. Dump human-readable flows from raw file:

   Get flow parsing tool from: https://github.com/Mellanox/mlx_steering_dump

   .. code-block:: console

       mlx_steering_dump.py -f <output_file> -flowptr <flow_ptr>
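
A minimal sketch of the API-based dump mentioned in the first step, assuming
the application opens and closes the output file itself (``dump_port_flows``
is a hypothetical helper; passing a flow handle instead of ``NULL`` dumps a
single rule):

.. code-block:: c

   #include <errno.h>
   #include <stdio.h>
   #include <rte_flow.h>

   static int
   dump_port_flows(uint16_t port_id, const char *path)
   {
           struct rte_flow_error error;
           FILE *file = fopen(path, "w");
           int ret;

           if (file == NULL)
                   return -errno;
           /* NULL dumps all flows of the port. */
           ret = rte_flow_dev_dump(port_id, NULL, file, &error);
           fclose(file);
           return ret;
   }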

How to share a meter between ports in the same switch domain
------------------------------------------------------------

This section demonstrates how to use a shared meter. A meter M can be created
on port X and shared with a port Y in the same switch domain as follows:

.. code-block:: console

   flow create X ingress transfer pattern eth / port_id id is Y / end actions meter mtr_id M / end

How to use meter hierarchy
--------------------------

This section demonstrates how to create and use a meter hierarchy.
A termination meter M can be the policy green action of another termination meter N.
The two meters are chained together. Using meter N in a flow will apply
both meters in the hierarchy on that flow.

.. code-block:: console

   add port meter policy 0 1 g_actions queue index 0 / end y_actions end r_actions drop / end
   create port meter 0 M 1 1 yes 0xffff 1 0
   add port meter policy 0 2 g_actions meter mtr_id M / end y_actions end r_actions drop / end
   create port meter 0 N 2 2 yes 0xffff 1 0
   flow create 0 ingress group 1 pattern eth / end actions meter mtr_id N / end

How to configure a VF as trusted
--------------------------------

This section demonstrates how to configure a virtual function (VF) interface as trusted.
A trusted VF is needed to offload rte_flow rules to a group greater than 0.
The configuration is done in two parts: driver and FW.

The procedure below is an example of using a ConnectX-5 adapter card (pf0) with 2 VFs:

#. Create 2 VFs on the PF pf0 when in Legacy SR-IOV mode::

   $ echo 2 > /sys/class/net/pf0/device/mlx5_num_vfs

#. Verify the VFs are created:

   .. code-block:: console

      $ lspci | grep Mellanox
      82:00.0 Ethernet controller: Mellanox Technologies MT27800 Family [ConnectX-5]
      82:00.1 Ethernet controller: Mellanox Technologies MT27800 Family [ConnectX-5]
      82:00.2 Ethernet controller: Mellanox Technologies MT27800 Family [ConnectX-5 Virtual Function]
      82:00.3 Ethernet controller: Mellanox Technologies MT27800 Family [ConnectX-5 Virtual Function]

#. Unbind all VFs. For each VF PCIe address, use the following command to unbind the driver::

   $ echo "0000:82:00.2" >> /sys/bus/pci/drivers/mlx5_core/unbind

#. Set the VFs to be trusted for the kernel by using one of the methods below:

      - Using sysfs file::

        $ echo ON | tee /sys/class/net/pf0/device/sriov/0/trust
        $ echo ON | tee /sys/class/net/pf0/device/sriov/1/trust

      - Using “ip link” command::

        $ ip link set p0 vf 0 trust on
        $ ip link set p0 vf 1 trust on

#. Configure all VFs using ``mlxreg``:

   - For MFT >= 4.21::

     $ mlxreg -d /dev/mst/mt4121_pciconf0 --reg_name VHCA_TRUST_LEVEL --yes --indexes 'all_vhca=0x1,vhca_id=0x0' --set 'trust_level=0x1'

   - For MFT < 4.21::

     $ mlxreg -d /dev/mst/mt4121_pciconf0 --reg_name VHCA_TRUST_LEVEL --yes --set "all_vhca=0x1,trust_level=0x1"

   .. note::

      Firmware version used must be >= xx.29.1016 and MFT >= 4.18

#. For each VF PCIe address, use the following command to bind the driver::

   $ echo "0000:82:00.2" >> /sys/bus/pci/drivers/mlx5_core/bind

How to trace Tx datapath
------------------------

The mlx5 PMD provides Tx datapath tracing capability with extra debug information:
when and how packets were scheduled,
and when the actual sending was completed by the NIC hardware.

Steps to enable Tx datapath tracing:

#. Build DPDK application with datapath tracing enabled

   The Meson option ``-Denable_trace_fp=true`` and
   the C flag ``ALLOW_EXPERIMENTAL_API`` should be specified.

   .. code-block:: console

      meson configure --buildtype=debug -Denable_trace_fp=true
         -Dc_args='-DRTE_LIBRTE_MLX5_DEBUG -DRTE_ENABLE_ASSERT -DALLOW_EXPERIMENTAL_API' build

#. Configure the NIC

   If the sending completion timings are important,
   the NIC should be configured to provide realtime timestamps.
   The non-volatile settings parameter ``REAL_TIME_CLOCK_ENABLE`` should be configured as ``1``.

   .. code-block:: console

      mlxconfig -d /dev/mst/mt4125_pciconf0 s REAL_TIME_CLOCK_ENABLE=1

   The ``mlxconfig`` utility is part of the MFT package.

#. Run the application with the EAL parameter enabling tracing in the mlx5 Tx datapath

   By default, all tracepoints are disabled.
   To analyze the Tx datapath and its timings, use ``--trace=pmd.net.mlx5.tx``.

#. Commit the tracing data to the storage (with ``rte_trace_save()`` API call).
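
   A minimal sketch, assuming the trace is saved once before application exit
   (the output directory can be controlled with the EAL ``--trace-dir`` option):

   .. code-block:: c

      #include <stdio.h>
      #include <rte_trace.h>

      /* Commit all trace events collected so far to the trace directory. */
      static void
      save_tx_trace(void)
      {
              if (rte_trace_save() != 0)
                      printf("Failed to save trace data\n");
      }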

#. Install or build the ``babeltrace2`` package

   The Python script analyzing gathered trace data uses the ``babeltrace2`` library.
   The package should be either installed or built from source as shown below.

   .. code-block:: console

      git clone https://github.com/efficios/babeltrace.git
      cd babeltrace
      ./bootstrap
      ./configure -help
      ./configure --disable-api-doc --disable-man-pages
                  --disable-python-bindings-doc --enable-python-plugins
                  --enable-python-binding

#. Run the analyzing script

   ``mlx5_trace.py`` is used to combine related events (packet firing and completion)
   and to show the results in a human-readable view.

   The analyzing script is located in the DPDK source tree: ``drivers/net/mlx5/tools``.

   It requires Python 3.6 and the ``babeltrace2`` package.

   The parameter of the script is the trace data folder.

   The optional parameter ``-a`` forces dumping of incomplete bursts.

   The optional parameter ``-v [level]`` forces dumping of raw record data
   for the specified level and below.
   Level 0 dumps bursts, level 1 dumps WQEs, level 2 dumps mbufs.

   .. code-block:: console

      mlx5_trace.py /var/log/rte-2023-01-23-AM-11-52-39

#. Interpret the script output data

   All the timings are given in nanoseconds.
   The list of Tx bursts per port/queue is presented in the output.
   Each list element contains the list of built WQEs with specific opcodes.
   Each WQE contains the list of the encompassed packets to send.

Host shaper
-----------

The host shaper register is a per-host-port register
which sets a shaper on the host port.
All VF/host PF representors belonging to one host port share one host shaper.
For example, if representor 0 and representor 1 belong to the same host port,
and a host shaper rate of 1Gbps is configured,
the shaper throttles the traffic of both representors from the host.

The host shaper has two modes for setting the shaper:
immediate, or deferred to the available descriptor threshold event trigger.

In immediate mode, the rate limit is configured on the host shaper immediately.

When deferring to the available descriptor threshold trigger,
the shaper is not set until an available descriptor threshold event
is received by any Rx queue in a VF representor belonging to the host port.
The only rate supported for deferred mode is 100Mbps
(there is no limit on the supported rates for immediate mode).
In deferred mode, the shaper is set on the host port by the firmware
upon receiving the available descriptor threshold event,
which allows throttling host traffic on available descriptor threshold events
at minimum latency, preventing excess drops in the Rx queue.

Dependency on mstflint package
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

In order to configure the host shaper register,
``librte_net_mlx5`` depends on ``libmtcr_ul``,
which can be installed from the MLNX_OFED mstflint package.
Meson detects the presence of ``libmtcr_ul`` at the configure stage.
If the library is detected, the application must link with ``-lmtcr_ul``,
as done by the pkg-config file libdpdk.pc.

Available descriptor threshold and host shaper
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

There is a command to configure the available descriptor threshold in testpmd.
Testpmd also contains sample logic to handle available descriptor threshold events.
The typical workflow is:
testpmd configures the available descriptor threshold for Rx queues,
enables ``avail_thresh_triggered`` in the host shaper and registers a callback.
When traffic from the host is too high
and Rx queue emptiness is below the available descriptor threshold,
the PMD receives an event
and the firmware configures a 100Mbps shaper on the host port automatically.
Then the PMD calls the previously registered callback,
which waits a while to let the Rx queue empty,
then disables the host shaper.
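
A minimal sketch of the application-side pieces of this workflow
(two Rx queues, a 70% threshold, queue-draining logic omitted;
helper names are hypothetical).
Arming the triggered mode itself is done with ``rte_pmd_mlx5_host_shaper_config()``
and the triggered-mode flag from ``rte_pmd_mlx5.h``,
as the testpmd commands below do; here the callback only removes the shaper:

.. code-block:: c

   #include <rte_ethdev.h>
   #include <rte_pmd_mlx5.h>

   /* Called back when any Rx queue of the port crosses its threshold. */
   static int
   avail_thresh_event_cb(uint16_t port_id, enum rte_eth_event_type type,
                         void *cb_arg, void *ret_param)
   {
           (void)type;
           (void)cb_arg;
           (void)ret_param;
           /* ... let the Rx queues drain, then remove the host shaper
            * (rate 0, no flags). */
           return rte_pmd_mlx5_host_shaper_config(port_id, 0, 0);
   }

   static void
   register_avail_thresh(uint16_t port_id)
   {
           /* 70% emptiness threshold on both Rx queues. */
           rte_eth_rx_avail_thresh_set(port_id, 0, 70);
           rte_eth_rx_avail_thresh_set(port_id, 1, 70);
           /* Be notified when the threshold event fires. */
           rte_eth_dev_callback_register(port_id, RTE_ETH_EVENT_RX_AVAIL_THRESH,
                                         avail_thresh_event_cb, NULL);
   }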

Let's assume we have a simple BlueField-2 setup:
port 0 is the uplink, port 1 is a VF representor.
Each port has 2 Rx queues.
To control traffic from the host to the Arm device,
we can enable the available descriptor threshold in testpmd by:

.. code-block:: console

   testpmd> mlx5 set port 1 host_shaper avail_thresh_triggered 1 rate 0
   testpmd> set port 1 rxq 0 avail_thresh 70
   testpmd> set port 1 rxq 1 avail_thresh 70

The first command disables the current host shaper
and enables the available descriptor threshold triggered mode.
The other commands configure the available descriptor threshold
to 70% of Rx queue size for both Rx queues.

When traffic from the host is too high,
the testpmd console prints a log about the available descriptor threshold event,
then the host shaper is disabled.
The traffic rate from the host is controlled and fewer drops happen in the Rx queues.

The threshold event and shaper can be disabled like this:

.. code-block:: console

   testpmd> mlx5 set port 1 host_shaper avail_thresh_triggered 0 rate 0
   testpmd> set port 1 rxq 0 avail_thresh 0
   testpmd> set port 1 rxq 1 avail_thresh 0

It is recommended that an application disable the available descriptor threshold
and ``avail_thresh_triggered`` before exit,
if it enabled them earlier.

The shaper can also be configured with a value; the rate unit is 100Mbps.
Below, the command sets the current shaper to 5Gbps
and disables ``avail_thresh_triggered``.

.. code-block:: console

   testpmd> mlx5 set port 1 host_shaper avail_thresh_triggered 0 rate 50


Testpmd driver specific commands
--------------------------------

port attach with socket path
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

It is possible to allocate a port with ``libibverbs`` from an external application.
To import the external port with extra device arguments,
there is a specific testpmd command
similar to the :ref:`port attach command <port_attach>`::

   testpmd> mlx5 port attach (identifier) socket=(path)

where:

* ``identifier``: device identifier with optional parameters,
  the same as in the :ref:`port attach command <port_attach>`.
* ``path``: path to IPC server socket created by the external application.

This command performs:

#. Open IPC client socket using the given path, and connect it.

#. Import ibverbs context and ibverbs protection domain.

#. Add two device arguments for context (``cmd_fd``)
   and protection domain (``pd_handle``) to the device identifier.
   See :ref:`mlx5 driver options <mlx5_common_driver_options>` for more
   information about these device arguments.

#. Call the regular ``port attach`` function with updated identifier.

For example, to attach a port whose PCI address is ``0000:0a:00.0``
and its socket path is ``/var/run/import_ipc_socket``:

.. code-block:: console

   testpmd> mlx5 port attach 0000:0a:00.0 socket=/var/run/import_ipc_socket
   testpmd: MLX5 socket path is /var/run/import_ipc_socket
   testpmd: Attach port with extra devargs 0000:0a:00.0,cmd_fd=40,pd_handle=1
   Attaching a new port...
   EAL: Probe PCI driver: mlx5_pci (15b3:101d) device: 0000:0a:00.0 (socket 0)
   Port 0 is attached. Now total ports is 1
   Done


port map external Rx queue
~~~~~~~~~~~~~~~~~~~~~~~~~~

Manage the mapping of external Rx queue indexes.

Map HW queue index (32-bit) to ethdev queue index (16-bit) for external Rx queue::

   testpmd> mlx5 port (port_id) ext_rxq map (sw_queue_id) (hw_queue_id)

Unmap external Rx queue::

   testpmd> mlx5 port (port_id) ext_rxq unmap (sw_queue_id)

where:

* ``sw_queue_id``: queue index in range [64536, 65535].
  This range is the highest 1000 numbers.
* ``hw_queue_id``: queue index given by HW in queue creation.
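
The same mapping can be done programmatically through the mlx5 PMD-specific API.
A minimal sketch, assuming a HW queue object created outside of DPDK
(``EXTERNAL_RXQ_ID`` and ``hw_queue_id`` values are illustrative only):

.. code-block:: c

   #include <rte_pmd_mlx5.h>

   #define EXTERNAL_RXQ_ID 64536 /* first index of the external range */

   static void
   map_external_rxq(uint16_t port_id, uint32_t hw_queue_id)
   {
           /* Map the HW queue to an ethdev-level index usable in flow rules. */
           rte_pmd_mlx5_external_rx_queue_id_map(port_id, EXTERNAL_RXQ_ID,
                                                 hw_queue_id);
           /* ... use EXTERNAL_RXQ_ID in rte_flow QUEUE/RSS actions ... */
           rte_pmd_mlx5_external_rx_queue_id_unmap(port_id, EXTERNAL_RXQ_ID);
   }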


Dump RQ/SQ/CQ HW context for debug purposes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Dump RQ/CQ HW context for a given port/queue to a file::

   testpmd> mlx5 port (port_id) queue (queue_id) dump rq_context (file_name)

Dump SQ/CQ HW context for a given port/queue to a file::

   testpmd> mlx5 port (port_id) queue (queue_id) dump sq_context (file_name)


Set Flow Engine Mode
~~~~~~~~~~~~~~~~~~~~

Set the flow engine to active or standby mode with specific flags (bitmap style).
See ``RTE_PMD_MLX5_FLOW_ENGINE_FLAG_*`` for the flag definitions.

.. code-block:: console

   testpmd> mlx5 set flow_engine <active|standby> [<flags>]

This command is used for testing live migration,
and works for software steering only.
Default FDB jump should be disabled if switchdev is enabled.
The mode will propagate to all the probed ports.


GENEVE TLV options parser
~~~~~~~~~~~~~~~~~~~~~~~~~

See the :ref:`GENEVE parser API <geneve_parser_api>` for more information.

Set
^^^

Add a single option to the global option list::

   testpmd> mlx5 set tlv_option class (class) type (type) len (length) \
            offset (sample_offset) sample_len (sample_len) \
            class_mode (ignore|fixed|matchable) data (0xffffffff|0x0 [0xffffffff|0x0]*)

where:

* ``class``: option class.
* ``type``: option type.
* ``length``: option data length in 4-byte granularity.
* ``sample_offset``: offset of the data list relative to the option data start.
  The offset is in 4-byte granularity.
* ``sample_len``: length of the data list in 4-byte granularity.
* ``ignore``: ignore the ``class`` field.
* ``fixed``: the option class is fixed and defines the option along with the type.
* ``matchable``: the ``class`` field is matchable.
* ``data``: list of masks indicating which DWs should be configured.
  The size of the list should be equal to ``sample_len``.
* ``0xffffffff``: this DW should be configured.
* ``0x0``: this DW shouldn't be configured.

Flush
^^^^^

Remove several options from the global option list::

   testpmd> mlx5 flush tlv_options max (nb_option)

where:

* ``nb_option``: maximum number of options to remove from the list. The order is LIFO.

List
^^^^

Print all options which are set in the global option list so far::

   testpmd> mlx5 list tlv_options

The output contains the values of each option, one per line.
There is no output at all when no options are configured on the global list::

   ID      Type    Class   Class_mode   Len     Offset  Sample_len   Data
   [...]   [...]   [...]   [...]        [...]   [...]   [...]        [...]

Setting several options and listing them::

   testpmd> mlx5 set tlv_option class 1 type 1 len 4 offset 1 sample_len 3
            class_mode fixed data 0xffffffff 0x0 0xffffffff
   testpmd: set new option in global list, now it has 1 options
   testpmd> mlx5 set tlv_option class 1 type 2 len 2 offset 0 sample_len 2
            class_mode fixed data 0xffffffff 0xffffffff
   testpmd: set new option in global list, now it has 2 options
   testpmd> mlx5 set tlv_option class 1 type 3 len 5 offset 4 sample_len 1
            class_mode fixed data 0xffffffff
   testpmd: set new option in global list, now it has 3 options
   testpmd> mlx5 list tlv_options
   ID      Type    Class   Class_mode   Len    Offset  Sample_len  Data
   0       1       1       fixed        4      1       3           0xffffffff 0x0 0xffffffff
   1       2       1       fixed        2      0       2           0xffffffff 0xffffffff
   2       3       1       fixed        5      4       1           0xffffffff
   testpmd>

Apply
^^^^^

Create a GENEVE TLV parser for a specific port using the option list set so far::

   testpmd> mlx5 port (port_id) apply tlv_options

The same global option list can be used by several ports.

Destroy
^^^^^^^

Destroy the GENEVE TLV parser for a specific port::

   testpmd> mlx5 port (port_id) destroy tlv_options

This command doesn't destroy the global list.
To release options, the ``flush`` command should be used.