/*
* Copyright © 2014 Intel Corporation
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice (including the next
* paragraph) shall be included in all copies or substantial portions of the
* Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*/
#include <drm/drm_atomic_helper.h>
#include <drm/drm_damage_helper.h>
#include "display/intel_dp.h"
#include "i915_drv.h"
#include "intel_atomic.h"
#include "intel_crtc.h"
#include "intel_de.h"
#include "intel_display_types.h"
#include "intel_dp_aux.h"
#include "intel_hdmi.h"
#include "intel_psr.h"
#include "intel_snps_phy.h"
#include "skl_universal_plane.h"
/**
* DOC: Panel Self Refresh (PSR/SRD)
*
* Since Haswell the display controller supports Panel Self-Refresh on
* display panels which have a remote frame buffer (RFB) implemented
* according to the PSR spec in eDP 1.3. PSR allows the display to go to
* lower standby states when the system is idle but the display is on, as
* it eliminates display refresh requests to DDR memory completely as long
* as the frame buffer for that display is unchanged.
*
* Panel Self Refresh must be supported by both Hardware (source) and
* Panel (sink).
*
* PSR saves power by caching the framebuffer in the panel RFB, which allows us
* to power down the link and memory controller. For DSI panels the same idea
* is called "manual mode".
*
* The implementation uses the hardware-based PSR support which automatically
* enters/exits self-refresh mode. The hardware takes care of sending the
* required DP aux message and could even retrain the link (that part isn't
* enabled yet though). The hardware also keeps track of any frontbuffer
* changes to know when to exit self-refresh mode again. Unfortunately that
* part doesn't work too well, which is why the i915 PSR support uses
* software frontbuffer tracking to make sure it doesn't miss a screen
* update. For this integration intel_psr_invalidate() and intel_psr_flush()
* get called by the frontbuffer tracking code. Note that because of locking
* issues the self-refresh re-enable code is done from a work queue, which
* must be correctly synchronized/cancelled when shutting down the pipe.
*
* DC3CO (DC3 clock off)
*
* On top of PSR2, GEN12 adds an intermediate power saving state that
* turns the clock off automatically during PSR2 idle state.
* The smaller overhead of DC3CO entry/exit vs. the overhead of PSR2 deep
* sleep entry/exit allows the HW to enter a low-power state even when
* page flipping periodically (for instance a 30fps video playback
* scenario).
*
* Every time a flip occurs PSR2 will get out of deep sleep state (if it
* was in it), DC3CO is enabled and tgl_dc3co_disable_work is scheduled to
* run after 6 frames. If no other flip occurs and that work runs, DC3CO
* is disabled and PSR2 is configured to enter deep sleep again; the cycle
* restarts on the next flip.
* Front buffer modifications do not trigger DC3CO activation on purpose
* as it would bring a lot of complexity and most modern systems will only
* use page flips.
*/
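/*
 * Resolve whether PSR is allowed at all for this connector: a debugfs
 * override in intel_dp->psr.debug takes precedence, then the enable_psr
 * module parameter, and finally the panel's VBT setting when the
 * parameter is left at its default of -1.
 */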
static bool psr_global_enabled(struct intel_dp *intel_dp)
{
struct intel_connector *connector = intel_dp->attached_connector;
struct drm_i915_private *i915 = dp_to_i915(intel_dp);
switch (intel_dp->psr.debug & I915_PSR_DEBUG_MODE_MASK) {
case I915_PSR_DEBUG_DEFAULT:
if (i915->params.enable_psr == -1)
return connector->panel.vbt.psr.enable;
return i915->params.enable_psr;
case I915_PSR_DEBUG_DISABLE:
return false;
default:
return true;
}
}
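/*
 * PSR2 is additionally gated: it is rejected when the debugfs override
 * disables PSR or forces PSR1, and when the enable_psr module parameter
 * is set to 1, which limits the driver to PSR1.
 */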
static bool psr2_global_enabled(struct intel_dp *intel_dp)
{
struct drm_i915_private *i915 = dp_to_i915(intel_dp);
switch (intel_dp->psr.debug & I915_PSR_DEBUG_MODE_MASK) {
case I915_PSR_DEBUG_DISABLE:
case I915_PSR_DEBUG_FORCE_PSR1:
return false;
default:
if (i915->params.enable_psr == 1)
return false;
return true;
}
}
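/*
 * From display version 12 each transcoder has its own TRANS_PSR_IIR/IMR
 * registers, so the common TGL_PSR_* bit definitions are used; older
 * platforms share a single EDP_PSR_IIR/IMR and need the per-transcoder
 * bit macros.
 */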
static u32 psr_irq_psr_error_bit_get(struct intel_dp *intel_dp)
{
struct drm_i915_private *dev_priv = dp_to_i915(intel_dp);
return DISPLAY_VER(dev_priv) >= 12 ? TGL_PSR_ERROR :
EDP_PSR_ERROR(intel_dp->psr.transcoder);
}
static u32 psr_irq_post_exit_bit_get(struct intel_dp *intel_dp)
{
struct drm_i915_private *dev_priv = dp_to_i915(intel_dp);
return DISPLAY_VER(dev_priv) >= 12 ? TGL_PSR_POST_EXIT :
EDP_PSR_POST_EXIT(intel_dp->psr.transcoder);
}
static u32 psr_irq_pre_entry_bit_get(struct intel_dp *intel_dp)
{
struct drm_i915_private *dev_priv = dp_to_i915(intel_dp);
return DISPLAY_VER(dev_priv) >= 12 ? TGL_PSR_PRE_ENTRY :
EDP_PSR_PRE_ENTRY(intel_dp->psr.transcoder);
}
static u32 psr_irq_mask_get(struct intel_dp *intel_dp)
{
struct drm_i915_private *dev_priv = dp_to_i915(intel_dp);
return DISPLAY_VER(dev_priv) >= 12 ? TGL_PSR_MASK :
EDP_PSR_MASK(intel_dp->psr.transcoder);
}
static void psr_irq_control(struct intel_dp *intel_dp)
{
struct drm_i915_private *dev_priv = dp_to_i915(intel_dp);
i915_reg_t imr_reg;
u32 mask, val;
if (DISPLAY_VER(dev_priv) >= 12)
imr_reg = TRANS_PSR_IMR(intel_dp->psr.transcoder);
else
imr_reg = EDP_PSR_IMR;
mask = psr_irq_psr_error_bit_get(intel_dp);
if (intel_dp->psr.debug & I915_PSR_DEBUG_IRQ)
mask |= psr_irq_post_exit_bit_get(intel_dp) |
psr_irq_pre_entry_bit_get(intel_dp);
val = intel_de_read(dev_priv, imr_reg);
val &= ~psr_irq_mask_get(intel_dp);
val |= ~mask;
intel_de_write(dev_priv, imr_reg, val);
}
static void psr_event_print(struct drm_i915_private *i915,
u32 val, bool psr2_enabled)
{
drm_dbg_kms(&i915->drm, "PSR exit events: 0x%x\n", val);
if (val & PSR_EVENT_PSR2_WD_TIMER_EXPIRE)
drm_dbg_kms(&i915->drm, "\tPSR2 watchdog timer expired\n");
if ((val & PSR_EVENT_PSR2_DISABLED) && psr2_enabled)
drm_dbg_kms(&i915->drm, "\tPSR2 disabled\n");
if (val & PSR_EVENT_SU_DIRTY_FIFO_UNDERRUN)
drm_dbg_kms(&i915->drm, "\tSU dirty FIFO underrun\n");
if (val & PSR_EVENT_SU_CRC_FIFO_UNDERRUN)
drm_dbg_kms(&i915->drm, "\tSU CRC FIFO underrun\n");
if (val & PSR_EVENT_GRAPHICS_RESET)
drm_dbg_kms(&i915->drm, "\tGraphics reset\n");
if (val & PSR_EVENT_PCH_INTERRUPT)
drm_dbg_kms(&i915->drm, "\tPCH interrupt\n");
if (val & PSR_EVENT_MEMORY_UP)
drm_dbg_kms(&i915->drm, "\tMemory up\n");
if (val & PSR_EVENT_FRONT_BUFFER_MODIFY)
drm_dbg_kms(&i915->drm, "\tFront buffer modification\n");
if (val & PSR_EVENT_WD_TIMER_EXPIRE)
drm_dbg_kms(&i915->drm, "\tPSR watchdog timer expired\n");
if (val & PSR_EVENT_PIPE_REGISTERS_UPDATE)
drm_dbg_kms(&i915->drm, "\tPIPE registers updated\n");
if (val & PSR_EVENT_REGISTER_UPDATE)
drm_dbg_kms(&i915->drm, "\tRegister updated\n");
if (val & PSR_EVENT_HDCP_ENABLE)
drm_dbg_kms(&i915->drm, "\tHDCP enabled\n");
if (val & PSR_EVENT_KVMR_SESSION_ENABLE)
drm_dbg_kms(&i915->drm, "\tKVMR session enabled\n");
if (val & PSR_EVENT_VBI_ENABLE)
drm_dbg_kms(&i915->drm, "\tVBI enabled\n");
if (val & PSR_EVENT_LPSP_MODE_EXIT)
drm_dbg_kms(&i915->drm, "\tLPSP mode exited\n");
if ((val & PSR_EVENT_PSR_DISABLE) && !psr2_enabled)
drm_dbg_kms(&i915->drm, "\tPSR disabled\n");
}
void intel_psr_irq_handler(struct intel_dp *intel_dp, u32 psr_iir)
{
enum transcoder cpu_transcoder = intel_dp->psr.transcoder;
struct drm_i915_private *dev_priv = dp_to_i915(intel_dp);
ktime_t time_ns = ktime_get();
i915_reg_t imr_reg;
if (DISPLAY_VER(dev_priv) >= 12)
imr_reg = TRANS_PSR_IMR(intel_dp->psr.transcoder);
else
imr_reg = EDP_PSR_IMR;
if (psr_iir & psr_irq_pre_entry_bit_get(intel_dp)) {
intel_dp->psr.last_entry_attempt = time_ns;
drm_dbg_kms(&dev_priv->drm,
"[transcoder %s] PSR entry attempt in 2 vblanks\n",
transcoder_name(cpu_transcoder));
}
if (psr_iir & psr_irq_post_exit_bit_get(intel_dp)) {
intel_dp->psr.last_exit = time_ns;
drm_dbg_kms(&dev_priv->drm,
"[transcoder %s] PSR exit completed\n",
transcoder_name(cpu_transcoder));
if (DISPLAY_VER(dev_priv) >= 9) {
u32 val = intel_de_read(dev_priv,
PSR_EVENT(cpu_transcoder));
bool psr2_enabled = intel_dp->psr.psr2_enabled;
intel_de_write(dev_priv, PSR_EVENT(cpu_transcoder),
val);
psr_event_print(dev_priv, val, psr2_enabled);
}
}
if (psr_iir & psr_irq_psr_error_bit_get(intel_dp)) {
u32 val;
drm_warn(&dev_priv->drm, "[transcoder %s] PSR aux error\n",
transcoder_name(cpu_transcoder));
intel_dp->psr.irq_aux_error = true;
/*
* If this interrupt is not masked it will keep firing so fast
* that it prevents the scheduled work from running.
* Also, after a PSR error we don't want to arm PSR again, so we
* don't care about unmasking the interrupt or clearing
* irq_aux_error.
*/
val = intel_de_read(dev_priv, imr_reg);
val |= psr_irq_psr_error_bit_get(intel_dp);
intel_de_write(dev_priv, imr_reg, val);
schedule_work(&intel_dp->psr.work);
}
}
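/* Read the sink's ALPM capability; the sink must support ALPM for PSR2. */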
static bool intel_dp_get_alpm_status(struct intel_dp *intel_dp)
{
u8 alpm_caps = 0;
if (drm_dp_dpcd_readb(&intel_dp->aux, DP_RECEIVER_ALPM_CAP,
&alpm_caps) != 1)
return false;
return alpm_caps & DP_ALPM_CAP;
}
static u8 intel_dp_get_sink_sync_latency(struct intel_dp *intel_dp)
{
struct drm_i915_private *i915 = dp_to_i915(intel_dp);
u8 val = 8; /* assume the worst if we can't read the value */
if (drm_dp_dpcd_readb(&intel_dp->aux,
DP_SYNCHRONIZATION_LATENCY_IN_SINK, &val) == 1)
val &= DP_MAX_RESYNC_FRAME_COUNT_MASK;
else
drm_dbg_kms(&i915->drm,
"Unable to get sink synchronization latency, assuming 8 frames\n");
return val;
}
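/*
 * Read the sink's PSR2 selective update granularity requirements from the
 * DPCD; if the sink does not advertise any, the legacy 4x4 values are used.
 */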
static void intel_dp_get_su_granularity(struct intel_dp *intel_dp)
{
struct drm_i915_private *i915 = dp_to_i915(intel_dp);
ssize_t r;
u16 w;
u8 y;
/* If the sink doesn't have specific granularity requirements, set the legacy ones */
if (!(intel_dp->psr_dpcd[1] & DP_PSR2_SU_GRANULARITY_REQUIRED)) {
/* As PSR2 HW sends full lines, we do not care about x granularity */
w = 4;
y = 4;
goto exit;
}
r = drm_dp_dpcd_read(&intel_dp->aux, DP_PSR2_SU_X_GRANULARITY, &w, 2);
if (r != 2)
drm_dbg_kms(&i915->drm,
"Unable to read DP_PSR2_SU_X_GRANULARITY\n");
/*
* Spec says that if the value read is 0 the default granularity should
* be used instead.
*/
if (r != 2 || w == 0)
w = 4;
r = drm_dp_dpcd_read(&intel_dp->aux, DP_PSR2_SU_Y_GRANULARITY, &y, 1);
if (r != 1) {
drm_dbg_kms(&i915->drm,
"Unable to read DP_PSR2_SU_Y_GRANULARITY\n");
y = 4;
}
if (y == 0)
y = 1;
exit:
intel_dp->psr.su_w_granularity = w;
intel_dp->psr.su_y_granularity = y;
}
void intel_psr_init_dpcd(struct intel_dp *intel_dp)
{
struct drm_i915_private *dev_priv =
to_i915(dp_to_dig_port(intel_dp)->base.base.dev);
drm_dp_dpcd_read(&intel_dp->aux, DP_PSR_SUPPORT, intel_dp->psr_dpcd,
sizeof(intel_dp->psr_dpcd));
if (!intel_dp->psr_dpcd[0])
return;
drm_dbg_kms(&dev_priv->drm, "eDP panel supports PSR version %x\n",
intel_dp->psr_dpcd[0]);
if (drm_dp_has_quirk(&intel_dp->desc, DP_DPCD_QUIRK_NO_PSR)) {
drm_dbg_kms(&dev_priv->drm,
"PSR support not currently available for this panel\n");
return;
}
if (!(intel_dp->edp_dpcd[1] & DP_EDP_SET_POWER_CAP)) {
drm_dbg_kms(&dev_priv->drm,
"Panel lacks power state control, PSR cannot be enabled\n");
return;
}
intel_dp->psr.sink_support = true;
intel_dp->psr.sink_sync_latency =
intel_dp_get_sink_sync_latency(intel_dp);
if (DISPLAY_VER(dev_priv) >= 9 &&
(intel_dp->psr_dpcd[0] == DP_PSR2_WITH_Y_COORD_IS_SUPPORTED)) {
bool y_req = intel_dp->psr_dpcd[1] &
DP_PSR2_SU_Y_COORDINATE_REQUIRED;
bool alpm = intel_dp_get_alpm_status(intel_dp);
/*
* All panels that support PSR version 03h (PSR2 +
* Y-coordinate) can handle Y-coordinates in VSC but we are
* only sure that it is going to be used when required by the
* panel. This way the panel is capable of doing selective
* updates without an aux frame sync.
*
* To support PSR version 02h and PSR version 03h panels
* without the Y-coordinate requirement we would need to
* enable GTC first.
*/
intel_dp->psr.sink_psr2_support = y_req && alpm;
drm_dbg_kms(&dev_priv->drm, "PSR2 %ssupported\n",
intel_dp->psr.sink_psr2_support ? "" : "not ");
if (intel_dp->psr.sink_psr2_support) {
intel_dp->psr.colorimetry_support =
intel_dp_get_colorimetry_status(intel_dp);
intel_dp_get_su_granularity(intel_dp);
}
}
}
static void intel_psr_enable_sink(struct intel_dp *intel_dp)
{
struct drm_i915_private *dev_priv = dp_to_i915(intel_dp);
u8 dpcd_val = DP_PSR_ENABLE;
/* Enable ALPM at sink for psr2 */
if (intel_dp->psr.psr2_enabled) {
drm_dp_dpcd_writeb(&intel_dp->aux, DP_RECEIVER_ALPM_CONFIG,
DP_ALPM_ENABLE |
DP_ALPM_LOCK_ERROR_IRQ_HPD_ENABLE);
dpcd_val |= DP_PSR_ENABLE_PSR2 | DP_PSR_IRQ_HPD_WITH_CRC_ERRORS;
} else {
if (intel_dp->psr.link_standby)
dpcd_val |= DP_PSR_MAIN_LINK_ACTIVE;
if (DISPLAY_VER(dev_priv) >= 8)
dpcd_val |= DP_PSR_CRC_VERIFICATION;
}
if (intel_dp->psr.req_psr2_sdp_prior_scanline)
dpcd_val |= DP_PSR_SU_REGION_SCANLINE_CAPTURE;
drm_dp_dpcd_writeb(&intel_dp->aux, DP_PSR_EN_CFG, dpcd_val);
drm_dp_dpcd_writeb(&intel_dp->aux, DP_SET_POWER, DP_SET_POWER_D0);
}
static u32 intel_psr1_get_tp_time(struct intel_dp *intel_dp)
{
struct intel_connector *connector = intel_dp->attached_connector;
struct drm_i915_private *dev_priv = dp_to_i915(intel_dp);
u32 val = 0;
if (DISPLAY_VER(dev_priv) >= 11)
val |= EDP_PSR_TP4_TIME_0US;
if (dev_priv->params.psr_safest_params) {
val |= EDP_PSR_TP1_TIME_2500us;
val |= EDP_PSR_TP2_TP3_TIME_2500us;
goto check_tp3_sel;
}
if (connector->panel.vbt.psr.tp1_wakeup_time_us == 0)
val |= EDP_PSR_TP1_TIME_0us;
else if (connector->panel.vbt.psr.tp1_wakeup_time_us <= 100)
val |= EDP_PSR_TP1_TIME_100us;
else if (connector->panel.vbt.psr.tp1_wakeup_time_us <= 500)
val |= EDP_PSR_TP1_TIME_500us;
else
val |= EDP_PSR_TP1_TIME_2500us;
if (connector->panel.vbt.psr.tp2_tp3_wakeup_time_us == 0)
val |= EDP_PSR_TP2_TP3_TIME_0us;
else if (connector->panel.vbt.psr.tp2_tp3_wakeup_time_us <= 100)
val |= EDP_PSR_TP2_TP3_TIME_100us;
else if (connector->panel.vbt.psr.tp2_tp3_wakeup_time_us <= 500)
val |= EDP_PSR_TP2_TP3_TIME_500us;
else
val |= EDP_PSR_TP2_TP3_TIME_2500us;
check_tp3_sel:
if (intel_dp_source_supports_tps3(dev_priv) &&
drm_dp_tps3_supported(intel_dp->dpcd))
val |= EDP_PSR_TP1_TP3_SEL;
else
val |= EDP_PSR_TP1_TP2_SEL;
return val;
}
static u8 psr_compute_idle_frames(struct intel_dp *intel_dp)
{
struct intel_connector *connector = intel_dp->attached_connector;
struct drm_i915_private *dev_priv = dp_to_i915(intel_dp);
int idle_frames;
/* Let's use 6 as the minimum to cover all known cases including the
* off-by-one issue that HW has in some cases.
*/
idle_frames = max(6, connector->panel.vbt.psr.idle_frames);
idle_frames = max(idle_frames, intel_dp->psr.sink_sync_latency + 1);
if (drm_WARN_ON(&dev_priv->drm, idle_frames > 0xf))
idle_frames = 0xf;
return idle_frames;
}
static void hsw_activate_psr1(struct intel_dp *intel_dp)
{
struct drm_i915_private *dev_priv = dp_to_i915(intel_dp);
u32 max_sleep_time = 0x1f;
u32 val = EDP_PSR_ENABLE;
val |= psr_compute_idle_frames(intel_dp) << EDP_PSR_IDLE_FRAME_SHIFT;
val |= max_sleep_time << EDP_PSR_MAX_SLEEP_TIME_SHIFT;
if (IS_HASWELL(dev_priv))
val |= EDP_PSR_MIN_LINK_ENTRY_TIME_8_LINES;
if (intel_dp->psr.link_standby)
val |= EDP_PSR_LINK_STANDBY;
val |= intel_psr1_get_tp_time(intel_dp);
if (DISPLAY_VER(dev_priv) >= 8)
val |= EDP_PSR_CRC_ENABLE;
val |= (intel_de_read(dev_priv, EDP_PSR_CTL(intel_dp->psr.transcoder)) &
EDP_PSR_RESTORE_PSR_ACTIVE_CTX_MASK);
intel_de_write(dev_priv, EDP_PSR_CTL(intel_dp->psr.transcoder), val);
}
static u32 intel_psr2_get_tp_time(struct intel_dp *intel_dp)
{
struct intel_connector *connector = intel_dp->attached_connector;
struct drm_i915_private *dev_priv = dp_to_i915(intel_dp);
u32 val = 0;
if (dev_priv->params.psr_safest_params)
return EDP_PSR2_TP2_TIME_2500us;
if (connector->panel.vbt.psr.psr2_tp2_tp3_wakeup_time_us >= 0 &&
connector->panel.vbt.psr.psr2_tp2_tp3_wakeup_time_us <= 50)
val |= EDP_PSR2_TP2_TIME_50us;
else if (connector->panel.vbt.psr.psr2_tp2_tp3_wakeup_time_us <= 100)
val |= EDP_PSR2_TP2_TIME_100us;
else if (connector->panel.vbt.psr.psr2_tp2_tp3_wakeup_time_us <= 500)
val |= EDP_PSR2_TP2_TIME_500us;
else
val |= EDP_PSR2_TP2_TIME_2500us;
return val;
}
static void hsw_activate_psr2(struct intel_dp *intel_dp)
{
struct drm_i915_private *dev_priv = dp_to_i915(intel_dp);
u32 val = EDP_PSR2_ENABLE;
val |= psr_compute_idle_frames(intel_dp) << EDP_PSR2_IDLE_FRAME_SHIFT;
if (!IS_ALDERLAKE_P(dev_priv))
val |= EDP_SU_TRACK_ENABLE;
if (DISPLAY_VER(dev_priv) >= 10 && DISPLAY_VER(dev_priv) <= 12)
val |= EDP_Y_COORDINATE_ENABLE;
val |= EDP_PSR2_FRAME_BEFORE_SU(max_t(u8, intel_dp->psr.sink_sync_latency + 1, 2));
val |= intel_psr2_get_tp_time(intel_dp);
/* Wa_22012278275:adl-p */
if (IS_ADLP_DISPLAY_STEP(dev_priv, STEP_A0, STEP_E0)) {
static const u8 map[] = {
2, /* 5 lines */
1, /* 6 lines */
0, /* 7 lines */
3, /* 8 lines */
6, /* 9 lines */
5, /* 10 lines */
4, /* 11 lines */
7, /* 12 lines */
};
/*
* Still using the default IO_BUFFER_WAKE and FAST_WAKE, see
* comments below for more information
*/
u32 tmp, lines = 7;
val |= TGL_EDP_PSR2_BLOCK_COUNT_NUM_2;
tmp = map[lines - TGL_EDP_PSR2_IO_BUFFER_WAKE_MIN_LINES];
tmp = tmp << TGL_EDP_PSR2_IO_BUFFER_WAKE_SHIFT;
val |= tmp;
tmp = map[lines - TGL_EDP_PSR2_FAST_WAKE_MIN_LINES];
tmp = tmp << TGL_EDP_PSR2_FAST_WAKE_MIN_SHIFT;
val |= tmp;
} else if (DISPLAY_VER(dev_priv) >= 12) {
/*
* TODO: 7 lines of IO_BUFFER_WAKE and FAST_WAKE are the default
* values from BSpec. For optimal power consumption, modes below 4k
* resolution should decrease IO_BUFFER_WAKE and FAST_WAKE, and modes
* above 4k resolution should increase them.
*/
val |= TGL_EDP_PSR2_BLOCK_COUNT_NUM_2;
val |= TGL_EDP_PSR2_IO_BUFFER_WAKE(7);
val |= TGL_EDP_PSR2_FAST_WAKE(7);
} else if (DISPLAY_VER(dev_priv) >= 9) {
val |= EDP_PSR2_IO_BUFFER_WAKE(7);
val |= EDP_PSR2_FAST_WAKE(7);
}
if (intel_dp->psr.req_psr2_sdp_prior_scanline)
val |= EDP_PSR2_SU_SDP_SCANLINE;
if (intel_dp->psr.psr2_sel_fetch_enabled) {
u32 tmp;
/* Wa_1408330847 */
if (IS_TGL_DISPLAY_STEP(dev_priv, STEP_A0, STEP_B0))
intel_de_rmw(dev_priv, CHICKEN_PAR1_1,
DIS_RAM_BYPASS_PSR2_MAN_TRACK,
DIS_RAM_BYPASS_PSR2_MAN_TRACK);
tmp = intel_de_read(dev_priv, PSR2_MAN_TRK_CTL(intel_dp->psr.transcoder));
drm_WARN_ON(&dev_priv->drm, !(tmp & PSR2_MAN_TRK_CTL_ENABLE));
} else if (HAS_PSR2_SEL_FETCH(dev_priv)) {
intel_de_write(dev_priv,
PSR2_MAN_TRK_CTL(intel_dp->psr.transcoder), 0);
}
/*
* PSR2 HW is incorrectly using EDP_PSR_TP1_TP3_SEL and BSpec
* recommends keeping this bit unset while PSR2 is enabled.
*/
intel_de_write(dev_priv, EDP_PSR_CTL(intel_dp->psr.transcoder), 0);
intel_de_write(dev_priv, EDP_PSR2_CTL(intel_dp->psr.transcoder), val);
}
static bool
transcoder_has_psr2(struct drm_i915_private *dev_priv, enum transcoder trans)
{
if (IS_ALDERLAKE_P(dev_priv))
return trans == TRANSCODER_A || trans == TRANSCODER_B;
else if (DISPLAY_VER(dev_priv) >= 12)
return trans == TRANSCODER_A;
else
return trans == TRANSCODER_EDP;
}
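/* Frame time in microseconds for the given CRTC state, or 0 if inactive. */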
static u32 intel_get_frame_time_us(const struct intel_crtc_state *cstate)
{
if (!cstate || !cstate->hw.active)
return 0;
return DIV_ROUND_UP(1000 * 1000,
drm_mode_vrefresh(&cstate->hw.adjusted_mode));
}
static void psr2_program_idle_frames(struct intel_dp *intel_dp,
u32 idle_frames)
{
struct drm_i915_private *dev_priv = dp_to_i915(intel_dp);
u32 val;
idle_frames <<= EDP_PSR2_IDLE_FRAME_SHIFT;
val = intel_de_read(dev_priv, EDP_PSR2_CTL(intel_dp->psr.transcoder));
val &= ~EDP_PSR2_IDLE_FRAME_MASK;
val |= idle_frames;
intel_de_write(dev_priv, EDP_PSR2_CTL(intel_dp->psr.transcoder), val);
}
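/*
 * Switch the target DC state to DC3CO and reprogram the PSR2 idle frames
 * accordingly; see the DC3CO section of the DOC comment at the top of
 * this file for the rationale.
 */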
static void tgl_psr2_enable_dc3co(struct intel_dp *intel_dp)
{
struct drm_i915_private *dev_priv = dp_to_i915(intel_dp);
psr2_program_idle_frames(intel_dp, 0);
intel_display_power_set_target_dc_state(dev_priv, DC_STATE_EN_DC3CO);
}
static void tgl_psr2_disable_dc3co(struct intel_dp *intel_dp)
{
struct drm_i915_private *dev_priv = dp_to_i915(intel_dp);
intel_display_power_set_target_dc_state(dev_priv, DC_STATE_EN_UPTO_DC6);
psr2_program_idle_frames(intel_dp, psr_compute_idle_frames(intel_dp));
}
static void tgl_dc3co_disable_work(struct work_struct *work)
{
struct intel_dp *intel_dp =
container_of(work, typeof(*intel_dp), psr.dc3co_work.work);
mutex_lock(&intel_dp->psr.lock);
/* If delayed work is pending, it is not idle */
if (delayed_work_pending(&intel_dp->psr.dc3co_work))
goto unlock;
tgl_psr2_disable_dc3co(intel_dp);
unlock:
mutex_unlock(&intel_dp->psr.lock);
}
static void tgl_disallow_dc3co_on_psr2_exit(struct intel_dp *intel_dp)
{
if (!intel_dp->psr.dc3co_exitline)
return;
cancel_delayed_work(&intel_dp->psr.dc3co_work);
/* Before PSR2 exit disallow DC3CO */
tgl_psr2_disable_dc3co(intel_dp);
}
static bool
dc3co_is_pipe_port_compatible(struct intel_dp *intel_dp,
struct intel_crtc_state *crtc_state)
{
struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp);
enum pipe pipe = to_intel_crtc(crtc_state->uapi.crtc)->pipe;
struct drm_i915_private *dev_priv = dp_to_i915(intel_dp);
enum port port = dig_port->base.port;
if (IS_ALDERLAKE_P(dev_priv))
return pipe <= PIPE_B && port <= PORT_B;
else
return pipe == PIPE_A && port == PORT_A;
}
static void
tgl_dc3co_exitline_compute_config(struct intel_dp *intel_dp,
struct intel_crtc_state *crtc_state)
{
const u32 crtc_vdisplay = crtc_state->uapi.adjusted_mode.crtc_vdisplay;
struct drm_i915_private *dev_priv = dp_to_i915(intel_dp);
u32 exit_scanlines;
/*
* FIXME: Due to the changed sequence of activating/deactivating DC3CO,
* keep DC3CO disabled until the new activation/deactivation sequence
* is implemented. B.Specs:49196
*/
return;
/*
* DMC's DC3CO exit mechanism has an issue with Selective Fetch.
* TODO: when the issue is addressed, this restriction should be removed.
*/
if (crtc_state->enable_psr2_sel_fetch)
return;
if (!(dev_priv->display.dmc.allowed_dc_mask & DC_STATE_EN_DC3CO))
return;
if (!dc3co_is_pipe_port_compatible(intel_dp, crtc_state))
return;
/* Wa_16011303918:adl-p */
if (IS_ADLP_DISPLAY_STEP(dev_priv, STEP_A0, STEP_B0))
return;
/*
* DC3CO Exit time 200us B.Spec 49196
* PSR2 transcoder Early Exit scanlines = ROUNDUP(200 / line time) + 1
*/
exit_scanlines =
intel_usecs_to_scanlines(&crtc_state->uapi.adjusted_mode, 200) + 1;
if (drm_WARN_ON(&dev_priv->drm, exit_scanlines > crtc_vdisplay))
return;
crtc_state->dc3co_exitline = crtc_vdisplay - exit_scanlines;
}
static bool intel_psr2_sel_fetch_config_valid(struct intel_dp *intel_dp,
struct intel_crtc_state *crtc_state)
{
struct drm_i915_private *dev_priv = dp_to_i915(intel_dp);
if (!dev_priv->params.enable_psr2_sel_fetch &&
intel_dp->psr.debug != I915_PSR_DEBUG_ENABLE_SEL_FETCH) {
drm_dbg_kms(&dev_priv->drm,
"PSR2 sel fetch not enabled, disabled by parameter\n");
return false;
}
if (crtc_state->uapi.async_flip) {
drm_dbg_kms(&dev_priv->drm,
"PSR2 sel fetch not enabled, async flip enabled\n");
return false;
}
/* Wa_14010254185 Wa_14010103792 */
if (IS_TGL_DISPLAY_STEP(dev_priv, STEP_A0, STEP_C0)) {
drm_dbg_kms(&dev_priv->drm,
"PSR2 sel fetch not enabled, missing the implementation of WAs\n");
return false;
}
return crtc_state->enable_psr2_sel_fetch = true;
}
static bool psr2_granularity_check(struct intel_dp *intel_dp,
struct intel_crtc_state *crtc_state)
{
struct drm_i915_private *dev_priv = dp_to_i915(intel_dp);
const int crtc_hdisplay = crtc_state->hw.adjusted_mode.crtc_hdisplay;
const int crtc_vdisplay = crtc_state->hw.adjusted_mode.crtc_vdisplay;
u16 y_granularity = 0;
/* PSR2 HW only sends full lines so we only need to validate the width */
if (crtc_hdisplay % intel_dp->psr.su_w_granularity)
return false;
if (crtc_vdisplay % intel_dp->psr.su_y_granularity)
return false;
/* HW tracking is only aligned to 4 lines */
if (!crtc_state->enable_psr2_sel_fetch)
return intel_dp->psr.su_y_granularity == 4;
/*
* adl_p has 1 line granularity. For other platforms with SW tracking we
* can adjust the y coordinates to match the sink requirement if it is a
* multiple of 4.
*/
if (IS_ALDERLAKE_P(dev_priv))
y_granularity = intel_dp->psr.su_y_granularity;
else if (intel_dp->psr.su_y_granularity <= 2)
y_granularity = 4;
else if ((intel_dp->psr.su_y_granularity % 4) == 0)
y_granularity = intel_dp->psr.su_y_granularity;
if (y_granularity == 0 || crtc_vdisplay % y_granularity)
return false;
crtc_state->su_y_granularity = y_granularity;
return true;
}
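/*
 * Check whether the PSR2 SU region SDP can be sent within the hblank
 * period. If it cannot, transmitting the SDP prior to the scanline is
 * required, which is only supported on newer display versions with
 * eDP 1.4b+ sinks.
 */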
static bool _compute_psr2_sdp_prior_scanline_indication(struct intel_dp *intel_dp,
struct intel_crtc_state *crtc_state)
{
const struct drm_display_mode *adjusted_mode = &crtc_state->uapi.adjusted_mode;
struct drm_i915_private *dev_priv = dp_to_i915(intel_dp);
u32 hblank_total, hblank_ns, req_ns;
hblank_total = adjusted_mode->crtc_hblank_end - adjusted_mode->crtc_hblank_start;
hblank_ns = div_u64(1000000ULL * hblank_total, adjusted_mode->crtc_clock);
/* From spec: ((60 / number of lanes) + 11) * 1000 / symbol clock frequency MHz */
req_ns = ((60 / crtc_state->lane_count) + 11) * 1000 / (crtc_state->port_clock / 1000);
if ((hblank_ns - req_ns) > 100)
return true;
/* Not supported <13 / Wa_22012279113:adl-p */
if (DISPLAY_VER(dev_priv) <= 13 || intel_dp->edp_dpcd[0] < DP_EDP_14b)
return false;
crtc_state->req_psr2_sdp_prior_scanline = true;
return true;
}
static bool intel_psr2_config_valid(struct intel_dp *intel_dp,
struct intel_crtc_state *crtc_state)
{
struct drm_i915_private *dev_priv = dp_to_i915(intel_dp);
int crtc_hdisplay = crtc_state->hw.adjusted_mode.crtc_hdisplay;
int crtc_vdisplay = crtc_state->hw.adjusted_mode.crtc_vdisplay;
int psr_max_h = 0, psr_max_v = 0, max_bpp = 0;
if (!intel_dp->psr.sink_psr2_support)
return false;
/* JSL and EHL only support eDP 1.3 */
if (IS_JSL_EHL(dev_priv)) {
drm_dbg_kms(&dev_priv->drm, "PSR2 not supported by phy\n");
return false;
}
/* Wa_16011181250 */
if (IS_ROCKETLAKE(dev_priv) || IS_ALDERLAKE_S(dev_priv) ||
IS_DG2(dev_priv)) {
drm_dbg_kms(&dev_priv->drm, "PSR2 is defeatured for this platform\n");
return false;
}
if (IS_ADLP_DISPLAY_STEP(dev_priv, STEP_A0, STEP_B0)) {
drm_dbg_kms(&dev_priv->drm, "PSR2 not completely functional in this stepping\n");
return false;
}
if (!transcoder_has_psr2(dev_priv, crtc_state->cpu_transcoder)) {
drm_dbg_kms(&dev_priv->drm,
"PSR2 not supported in transcoder %s\n",
transcoder_name(crtc_state->cpu_transcoder));
return false;
}
if (!psr2_global_enabled(intel_dp)) {
drm_dbg_kms(&dev_priv->drm, "PSR2 disabled by flag\n");
return false;
}
/*
* DSC and PSR2 cannot be enabled simultaneously. If a requested
* resolution requires DSC to be enabled, priority is given to DSC
* over PSR2.
*/
if (crtc_state->dsc.compression_enable) {
drm_dbg_kms(&dev_priv->drm,
"PSR2 cannot be enabled since DSC is enabled\n");
return false;
}
if (crtc_state->crc_enabled) {
drm_dbg_kms(&dev_priv->drm,
"PSR2 not enabled because it would inhibit pipe CRC calculation\n");
return false;
}
if (DISPLAY_VER(dev_priv) >= 12) {
psr_max_h = 5120;
psr_max_v = 3200;
max_bpp = 30;
} else if (DISPLAY_VER(dev_priv) >= 10) {
psr_max_h = 4096;
psr_max_v = 2304;
max_bpp = 24;
} else if (DISPLAY_VER(dev_priv) == 9) {
psr_max_h = 3640;
psr_max_v = 2304;
max_bpp = 24;
}
if (crtc_state->pipe_bpp > max_bpp) {
drm_dbg_kms(&dev_priv->drm,
"PSR2 not enabled, pipe bpp %d > max supported %d\n",
crtc_state->pipe_bpp, max_bpp);
return false;
}
/* Wa_16011303918:adl-p */
if (crtc_state->vrr.enable &&
IS_ADLP_DISPLAY_STEP(dev_priv, STEP_A0, STEP_B0)) {
drm_dbg_kms(&dev_priv->drm,
"PSR2 not enabled, not compatible with HW stepping + VRR\n");
return false;
}
if (!_compute_psr2_sdp_prior_scanline_indication(intel_dp, crtc_state)) {
drm_dbg_kms(&dev_priv->drm,
"PSR2 not enabled, PSR2 SDP indication do not fit in hblank\n");
return false;
}
if (HAS_PSR2_SEL_FETCH(dev_priv)) {
if (!intel_psr2_sel_fetch_config_valid(intel_dp, crtc_state) &&
!HAS_PSR_HW_TRACKING(dev_priv)) {
drm_dbg_kms(&dev_priv->drm,
"PSR2 not enabled, selective fetch not valid and no HW tracking available\n");
return false;
}
}
/* Wa_2209313811 */
if (!crtc_state->enable_psr2_sel_fetch &&
IS_TGL_DISPLAY_STEP(dev_priv, STEP_A0, STEP_C0)) {
drm_dbg_kms(&dev_priv->drm, "PSR2 HW tracking is not supported this Display stepping\n");
goto unsupported;
}
if (!psr2_granularity_check(intel_dp, crtc_state)) {
drm_dbg_kms(&dev_priv->drm, "PSR2 not enabled, SU granularity not compatible\n");
goto unsupported;
}
if (!crtc_state->enable_psr2_sel_fetch &&
(crtc_hdisplay > psr_max_h || crtc_vdisplay > psr_max_v)) {
drm_dbg_kms(&dev_priv->drm,
"PSR2 not enabled, resolution %dx%d > max supported %dx%d\n",
crtc_hdisplay, crtc_vdisplay,
psr_max_h, psr_max_v);
goto unsupported;
}
tgl_dc3co_exitline_compute_config(intel_dp, crtc_state);
return true;
unsupported:
crtc_state->enable_psr2_sel_fetch = false;
return false;
}
void intel_psr_compute_config(struct intel_dp *intel_dp,
struct intel_crtc_state *crtc_state,
struct drm_connector_state *conn_state)
{
struct drm_i915_private *dev_priv = dp_to_i915(intel_dp);
const struct drm_display_mode *adjusted_mode =
&crtc_state->hw.adjusted_mode;
int psr_setup_time;
/*
* Current PSR panels don't work reliably with VRR enabled,
* so if VRR is enabled, do not enable PSR.
*/
if (crtc_state->vrr.enable)
return;
if (!CAN_PSR(intel_dp))
return;
if (!psr_global_enabled(intel_dp)) {
drm_dbg_kms(&dev_priv->drm, "PSR disabled by flag\n");
return;
}
if (intel_dp->psr.sink_not_reliable) {
drm_dbg_kms(&dev_priv->drm,
"PSR sink implementation is not reliable\n");
return;
}
if (adjusted_mode->flags & DRM_MODE_FLAG_INTERLACE) {
drm_dbg_kms(&dev_priv->drm,
"PSR condition failed: Interlaced mode enabled\n");
return;
}
psr_setup_time = drm_dp_psr_setup_time(intel_dp->psr_dpcd);
if (psr_setup_time < 0) {
drm_dbg_kms(&dev_priv->drm,
"PSR condition failed: Invalid PSR setup time (0x%02x)\n",
intel_dp->psr_dpcd[1]);
return;
}
if (intel_usecs_to_scanlines(adjusted_mode, psr_setup_time) >
adjusted_mode->crtc_vtotal - adjusted_mode->crtc_vdisplay - 1) {
drm_dbg_kms(&dev_priv->drm,
"PSR condition failed: PSR setup time (%d us) too long\n",
psr_setup_time);
return;
}
crtc_state->has_psr = true;
crtc_state->has_psr2 = intel_psr2_config_valid(intel_dp, crtc_state);
crtc_state->infoframes.enable |= intel_hdmi_infoframe_enable(DP_SDP_VSC);
intel_dp_compute_psr_vsc_sdp(intel_dp, crtc_state, conn_state,
&crtc_state->psr_vsc);
}
void intel_psr_get_config(struct intel_encoder *encoder,
struct intel_crtc_state *pipe_config)
{
struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
struct intel_digital_port *dig_port = enc_to_dig_port(encoder);
struct intel_dp *intel_dp;
u32 val;
if (!dig_port)
return;
intel_dp = &dig_port->dp;
if (!CAN_PSR(intel_dp))
return;
mutex_lock(&intel_dp->psr.lock);
if (!intel_dp->psr.enabled)
goto unlock;
/*
* Not possible to read the EDP_PSR/PSR2_CTL registers as they are
* constantly enabled/disabled by frontbuffer tracking and others.
*/
pipe_config->has_psr = true;
pipe_config->has_psr2 = intel_dp->psr.psr2_enabled;
pipe_config->infoframes.enable |= intel_hdmi_infoframe_enable(DP_SDP_VSC);
if (!intel_dp->psr.psr2_enabled)
goto unlock;
if (HAS_PSR2_SEL_FETCH(dev_priv)) {
val = intel_de_read(dev_priv, PSR2_MAN_TRK_CTL(intel_dp->psr.transcoder));
if (val & PSR2_MAN_TRK_CTL_ENABLE)
pipe_config->enable_psr2_sel_fetch = true;
}
if (DISPLAY_VER(dev_priv) >= 12) {
val = intel_de_read(dev_priv, EXITLINE(intel_dp->psr.transcoder));
val &= EXITLINE_MASK;
pipe_config->dc3co_exitline = val;
}
unlock:
mutex_unlock(&intel_dp->psr.lock);
}
static void intel_psr_activate(struct intel_dp *intel_dp)
{
struct drm_i915_private *dev_priv = dp_to_i915(intel_dp);
enum transcoder transcoder = intel_dp->psr.transcoder;
if (transcoder_has_psr2(dev_priv, transcoder))
drm_WARN_ON(&dev_priv->drm,
intel_de_read(dev_priv, EDP_PSR2_CTL(transcoder)) & EDP_PSR2_ENABLE);
drm_WARN_ON(&dev_priv->drm,
intel_de_read(dev_priv, EDP_PSR_CTL(transcoder)) & EDP_PSR_ENABLE);
drm_WARN_ON(&dev_priv->drm, intel_dp->psr.active);
lockdep_assert_held(&intel_dp->psr.lock);
/* psr1 and psr2 are mutually exclusive. */
if (intel_dp->psr.psr2_enabled)
hsw_activate_psr2(intel_dp);
else
hsw_activate_psr1(intel_dp);
intel_dp->psr.active = true;
}
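/* Wa_16013835468: LATENCY_REPORTING_REMOVED bit for the pipe driving PSR */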
static u32 wa_16013835468_bit_get(struct intel_dp *intel_dp)
{
switch (intel_dp->psr.pipe) {
case PIPE_A:
return LATENCY_REPORTING_REMOVED_PIPE_A;
case PIPE_B:
return LATENCY_REPORTING_REMOVED_PIPE_B;
case PIPE_C:
return LATENCY_REPORTING_REMOVED_PIPE_C;
default:
MISSING_CASE(intel_dp->psr.pipe);
return 0;
}
}
static void intel_psr_enable_source(struct intel_dp *intel_dp,
const struct intel_crtc_state *crtc_state)
{
struct drm_i915_private *dev_priv = dp_to_i915(intel_dp);
enum transcoder cpu_transcoder = intel_dp->psr.transcoder;
u32 mask;
/*
* Per spec: avoid continuous PSR exit by masking MEMUP and HPD.
* Also mask LPSP to avoid a dependency on other drivers that might
* block runtime_pm, besides preventing other HW tracking issues, now
* that we can rely on frontbuffer tracking.
*/
mask = EDP_PSR_DEBUG_MASK_MEMUP |
EDP_PSR_DEBUG_MASK_HPD |
EDP_PSR_DEBUG_MASK_LPSP |
EDP_PSR_DEBUG_MASK_MAX_SLEEP;
if (DISPLAY_VER(dev_priv) < 11)
mask |= EDP_PSR_DEBUG_MASK_DISP_REG_WRITE;
intel_de_write(dev_priv, EDP_PSR_DEBUG(intel_dp->psr.transcoder),
mask);
psr_irq_control(intel_dp);
if (intel_dp->psr.dc3co_exitline) {
u32 val;
/*
* TODO: if future platforms support DC3CO in more than one
* transcoder, EXITLINE will need to be unset when disabling PSR
*/
val = intel_de_read(dev_priv, EXITLINE(cpu_transcoder));
val &= ~EXITLINE_MASK;
val |= intel_dp->psr.dc3co_exitline << EXITLINE_SHIFT;
val |= EXITLINE_ENABLE;
intel_de_write(dev_priv, EXITLINE(cpu_transcoder), val);
}
if (HAS_PSR_HW_TRACKING(dev_priv) && HAS_PSR2_SEL_FETCH(dev_priv))
intel_de_rmw(dev_priv, CHICKEN_PAR1_1, IGNORE_PSR2_HW_TRACKING,
intel_dp->psr.psr2_sel_fetch_enabled ?
IGNORE_PSR2_HW_TRACKING : 0);
if (intel_dp->psr.psr2_enabled) {
if (DISPLAY_VER(dev_priv) == 9)
intel_de_rmw(dev_priv, CHICKEN_TRANS(cpu_transcoder), 0,
PSR2_VSC_ENABLE_PROG_HEADER |
PSR2_ADD_VERTICAL_LINE_COUNT);
/*
* Wa_16014451276:adlp
* All supported adlp panels have 1-based X granularity; this may
* cause issues if non-supported panels are used.
*/
if (IS_ALDERLAKE_P(dev_priv))
intel_de_rmw(dev_priv, CHICKEN_TRANS(cpu_transcoder), 0,
ADLP_1_BASED_X_GRANULARITY);
/* Wa_16011168373:adl-p */
if (IS_ADLP_DISPLAY_STEP(dev_priv, STEP_A0, STEP_B0))
intel_de_rmw(dev_priv,
TRANS_SET_CONTEXT_LATENCY(intel_dp->psr.transcoder),
TRANS_SET_CONTEXT_LATENCY_MASK,
TRANS_SET_CONTEXT_LATENCY_VALUE(1));
/* Wa_16012604467:adlp */
if (IS_ALDERLAKE_P(dev_priv))
intel_de_rmw(dev_priv, CLKGATE_DIS_MISC, 0,
CLKGATE_DIS_MISC_DMASC_GATING_DIS);
/* Wa_16013835468:tgl[b0+], dg1 */
if (IS_TGL_DISPLAY_STEP(dev_priv, STEP_B0, STEP_FOREVER) ||
IS_DG1(dev_priv)) {
u16 vtotal, vblank;
vtotal = crtc_state->uapi.adjusted_mode.crtc_vtotal -
crtc_state->uapi.adjusted_mode.crtc_vdisplay;
vblank = crtc_state->uapi.adjusted_mode.crtc_vblank_end -
crtc_state->uapi.adjusted_mode.crtc_vblank_start;
if (vblank > vtotal)
intel_de_rmw(dev_priv, GEN8_CHICKEN_DCPR_1, 0,
wa_16013835468_bit_get(intel_dp));
}
}
}
static bool psr_interrupt_error_check(struct intel_dp *intel_dp)
{
struct drm_i915_private *dev_priv = dp_to_i915(intel_dp);
u32 val;
/*
* If a PSR error happened and the driver is reloaded, the EDP_PSR_IIR
* will still keep the error set even after the reset done in the
* irq_preinstall and irq_uninstall hooks.
* Enabling PSR in this situation causes the screen to freeze the
* first time the PSR HW tries to activate, so let's keep PSR disabled
* to avoid any rendering problems.
*/
if (DISPLAY_VER(dev_priv) >= 12)
val = intel_de_read(dev_priv,
TRANS_PSR_IIR(intel_dp->psr.transcoder));
else
val = intel_de_read(dev_priv, EDP_PSR_IIR);
val &= psr_irq_psr_error_bit_get(intel_dp);
if (val) {
intel_dp->psr.sink_not_reliable = true;
drm_dbg_kms(&dev_priv->drm,
"PSR interruption error set, not enabling PSR\n");
return false;
}
return true;
}
static void intel_psr_enable_locked(struct intel_dp *intel_dp,
const struct intel_crtc_state *crtc_state)
{
struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp);
struct drm_i915_private *dev_priv = dp_to_i915(intel_dp);
enum phy phy = intel_port_to_phy(dev_priv, dig_port->base.port);
struct intel_encoder *encoder = &dig_port->base;
u32 val;
drm_WARN_ON(&dev_priv->drm, intel_dp->psr.enabled);
intel_dp->psr.psr2_enabled = crtc_state->has_psr2;
intel_dp->psr.busy_frontbuffer_bits = 0;
intel_dp->psr.pipe = to_intel_crtc(crtc_state->uapi.crtc)->pipe;
intel_dp->psr.transcoder = crtc_state->cpu_transcoder;
/* DC5/DC6 requires at least 6 idle frames */
val = usecs_to_jiffies(intel_get_frame_time_us(crtc_state) * 6);
intel_dp->psr.dc3co_exit_delay = val;
intel_dp->psr.dc3co_exitline = crtc_state->dc3co_exitline;
intel_dp->psr.psr2_sel_fetch_enabled = crtc_state->enable_psr2_sel_fetch;
intel_dp->psr.psr2_sel_fetch_cff_enabled = false;
intel_dp->psr.req_psr2_sdp_prior_scanline =
crtc_state->req_psr2_sdp_prior_scanline;
if (!psr_interrupt_error_check(intel_dp))
return;
drm_dbg_kms(&dev_priv->drm, "Enabling PSR%s\n",
intel_dp->psr.psr2_enabled ? "2" : "1");
intel_write_dp_vsc_sdp(encoder, crtc_state, &crtc_state->psr_vsc);
intel_snps_phy_update_psr_power_state(dev_priv, phy, true);
intel_psr_enable_sink(intel_dp);
intel_psr_enable_source(intel_dp, crtc_state);
intel_dp->psr.enabled = true;
intel_dp->psr.paused = false;
intel_psr_activate(intel_dp);
}
static void intel_psr_exit(struct intel_dp *intel_dp)
{
struct drm_i915_private *dev_priv = dp_to_i915(intel_dp);
u32 val;
if (!intel_dp->psr.active) {
if (transcoder_has_psr2(dev_priv, intel_dp->psr.transcoder)) {
val = intel_de_read(dev_priv,
EDP_PSR2_CTL(intel_dp->psr.transcoder));
drm_WARN_ON(&dev_priv->drm, val & EDP_PSR2_ENABLE);
}
val = intel_de_read(dev_priv,
EDP_PSR_CTL(intel_dp->psr.transcoder));
drm_WARN_ON(&dev_priv->drm, val & EDP_PSR_ENABLE);
return;
}
if (intel_dp->psr.psr2_enabled) {
tgl_disallow_dc3co_on_psr2_exit(intel_dp);
val = intel_de_read(dev_priv,
EDP_PSR2_CTL(intel_dp->psr.transcoder));
drm_WARN_ON(&dev_priv->drm, !(val & EDP_PSR2_ENABLE));
val &= ~EDP_PSR2_ENABLE;
intel_de_write(dev_priv,
EDP_PSR2_CTL(intel_dp->psr.transcoder), val);
} else {
val = intel_de_read(dev_priv,
EDP_PSR_CTL(intel_dp->psr.transcoder));
drm_WARN_ON(&dev_priv->drm, !(val & EDP_PSR_ENABLE));
val &= ~EDP_PSR_ENABLE;
intel_de_write(dev_priv,
EDP_PSR_CTL(intel_dp->psr.transcoder), val);
}
intel_dp->psr.active = false;
}
static void intel_psr_wait_exit_locked(struct intel_dp *intel_dp)
{
struct drm_i915_private *dev_priv = dp_to_i915(intel_dp);
i915_reg_t psr_status;
u32 psr_status_mask;
if (intel_dp->psr.psr2_enabled) {
psr_status = EDP_PSR2_STATUS(intel_dp->psr.transcoder);
psr_status_mask = EDP_PSR2_STATUS_STATE_MASK;
} else {
psr_status = EDP_PSR_STATUS(intel_dp->psr.transcoder);
psr_status_mask = EDP_PSR_STATUS_STATE_MASK;
}
/* Wait till PSR is idle */
if (intel_de_wait_for_clear(dev_priv, psr_status,
psr_status_mask, 2000))
drm_err(&dev_priv->drm, "Timed out waiting PSR idle state\n");
}
static void intel_psr_disable_locked(struct intel_dp *intel_dp)
{
struct drm_i915_private *dev_priv = dp_to_i915(intel_dp);
enum phy phy = intel_port_to_phy(dev_priv,
dp_to_dig_port(intel_dp)->base.port);
lockdep_assert_held(&intel_dp->psr.lock);
if (!intel_dp->psr.enabled)
return;
drm_dbg_kms(&dev_priv->drm, "Disabling PSR%s\n",
intel_dp->psr.psr2_enabled ? "2" : "1");
intel_psr_exit(intel_dp);
intel_psr_wait_exit_locked(intel_dp);
/* Wa_1408330847 */
if (intel_dp->psr.psr2_sel_fetch_enabled &&
IS_TGL_DISPLAY_STEP(dev_priv, STEP_A0, STEP_B0))
intel_de_rmw(dev_priv, CHICKEN_PAR1_1,
DIS_RAM_BYPASS_PSR2_MAN_TRACK, 0);
if (intel_dp->psr.psr2_enabled) {
/* Wa_16011168373:adl-p */
if (IS_ADLP_DISPLAY_STEP(dev_priv, STEP_A0, STEP_B0))
intel_de_rmw(dev_priv,
TRANS_SET_CONTEXT_LATENCY(intel_dp->psr.transcoder),
TRANS_SET_CONTEXT_LATENCY_MASK, 0);
/* Wa_16012604467:adlp */
if (IS_ALDERLAKE_P(dev_priv))
intel_de_rmw(dev_priv, CLKGATE_DIS_MISC,
CLKGATE_DIS_MISC_DMASC_GATING_DIS, 0);
/* Wa_16013835468:tgl[b0+], dg1 */
if (IS_TGL_DISPLAY_STEP(dev_priv, STEP_B0, STEP_FOREVER) ||
IS_DG1(dev_priv))
intel_de_rmw(dev_priv, GEN8_CHICKEN_DCPR_1,
wa_16013835468_bit_get(intel_dp), 0);
}
intel_snps_phy_update_psr_power_state(dev_priv, phy, false);
/* Disable PSR on Sink */
drm_dp_dpcd_writeb(&intel_dp->aux, DP_PSR_EN_CFG, 0);
if (intel_dp->psr.psr2_enabled)
drm_dp_dpcd_writeb(&intel_dp->aux, DP_RECEIVER_ALPM_CONFIG, 0);
intel_dp->psr.enabled = false;
intel_dp->psr.psr2_enabled = false;
intel_dp->psr.psr2_sel_fetch_enabled = false;
intel_dp->psr.psr2_sel_fetch_cff_enabled = false;
}
/**
* intel_psr_disable - Disable PSR
* @intel_dp: Intel DP
* @old_crtc_state: old CRTC state
*
* This function needs to be called before disabling pipe.
*/
void intel_psr_disable(struct intel_dp *intel_dp,
const struct intel_crtc_state *old_crtc_state)
{
struct drm_i915_private *dev_priv = dp_to_i915(intel_dp);
if (!old_crtc_state->has_psr)
return;
if (drm_WARN_ON(&dev_priv->drm, !CAN_PSR(intel_dp)))
return;
mutex_lock(&intel_dp->psr.lock);
intel_psr_disable_locked(intel_dp);
mutex_unlock(&intel_dp->psr.lock);
cancel_work_sync(&intel_dp->psr.work);
cancel_delayed_work_sync(&intel_dp->psr.dc3co_work);
}
/**
* intel_psr_pause - Pause PSR
* @intel_dp: Intel DP
*
* This function needs to be called after enabling PSR.
*/
void intel_psr_pause(struct intel_dp *intel_dp)
{
struct drm_i915_private *dev_priv = dp_to_i915(intel_dp);
struct intel_psr *psr = &intel_dp->psr;
if (!CAN_PSR(intel_dp))
return;
mutex_lock(&psr->lock);
if (!psr->enabled) {
mutex_unlock(&psr->lock);
return;
}
/* If we ever hit this, we will need to add refcount to pause/resume */
drm_WARN_ON(&dev_priv->drm, psr->paused);
intel_psr_exit(intel_dp);
intel_psr_wait_exit_locked(intel_dp);
psr->paused = true;
mutex_unlock(&psr->lock);
cancel_work_sync(&psr->work);
cancel_delayed_work_sync(&psr->dc3co_work);
}
/**
* intel_psr_resume - Resume PSR
* @intel_dp: Intel DP
*
* This function needs to be called after pausing PSR.
*/
void intel_psr_resume(struct intel_dp *intel_dp)
{
struct intel_psr *psr = &intel_dp->psr;
if (!CAN_PSR(intel_dp))
return;
mutex_lock(&psr->lock);
if (!psr->paused)
goto unlock;
psr->paused = false;
intel_psr_activate(intel_dp);
unlock:
mutex_unlock(&psr->lock);
}
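/*
 * The PSR2 manual tracking (selective fetch) control bits differ on
 * Alder Lake-P, so these helpers return the right definition (or 0 when
 * the bit does not exist) for the running platform.
 */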
static u32 man_trk_ctl_enable_bit_get(struct drm_i915_private *dev_priv)
{
return IS_ALDERLAKE_P(dev_priv) ? 0 : PSR2_MAN_TRK_CTL_ENABLE;
}
static u32 man_trk_ctl_single_full_frame_bit_get(struct drm_i915_private *dev_priv)
{
return IS_ALDERLAKE_P(dev_priv) ?
ADLP_PSR2_MAN_TRK_CTL_SF_SINGLE_FULL_FRAME :
PSR2_MAN_TRK_CTL_SF_SINGLE_FULL_FRAME;
}
static u32 man_trk_ctl_partial_frame_bit_get(struct drm_i915_private *dev_priv)
{
return IS_ALDERLAKE_P(dev_priv) ?
ADLP_PSR2_MAN_TRK_CTL_SF_PARTIAL_FRAME_UPDATE :
PSR2_MAN_TRK_CTL_SF_PARTIAL_FRAME_UPDATE;
}
static u32 man_trk_ctl_continuos_full_frame(struct drm_i915_private *dev_priv)
{
return IS_ALDERLAKE_P(dev_priv) ?
ADLP_PSR2_MAN_TRK_CTL_SF_CONTINUOS_FULL_FRAME :
PSR2_MAN_TRK_CTL_SF_CONTINUOS_FULL_FRAME;
}
static void psr_force_hw_tracking_exit(struct intel_dp *intel_dp)
{
struct drm_i915_private *dev_priv = dp_to_i915(intel_dp);
if (intel_dp->psr.psr2_sel_fetch_enabled)
intel_de_write(dev_priv,
PSR2_MAN_TRK_CTL(intel_dp->psr.transcoder),
man_trk_ctl_enable_bit_get(dev_priv) |
man_trk_ctl_partial_frame_bit_get(dev_priv) |
man_trk_ctl_single_full_frame_bit_get(dev_priv));
/*
* Display WA #0884: skl+
* This documented WA for bxt can be safely applied
* broadly so we can force HW tracking to exit PSR
* instead of disabling and re-enabling.
* The workaround tells us to write 0 to CUR_SURFLIVE_A,
* but it makes more sense to write to the currently active
* pipe.
*
* This workaround does not exist for platforms with display 10 or
* newer, but testing proved that it works up to display 13; for
* anything newer, more testing will be needed.
*/
intel_de_write(dev_priv, CURSURFLIVE(intel_dp->psr.pipe), 0);
}
void intel_psr2_disable_plane_sel_fetch(struct intel_plane *plane,
const struct intel_crtc_state *crtc_state)
{
struct drm_i915_private *dev_priv = to_i915(plane->base.dev);
enum pipe pipe = plane->pipe;
if (!crtc_state->enable_psr2_sel_fetch)
return;
intel_de_write_fw(dev_priv, PLANE_SEL_FETCH_CTL(pipe, plane->id), 0);
}
void intel_psr2_program_plane_sel_fetch(struct intel_plane *plane,
const struct intel_crtc_state *crtc_state,
const struct intel_plane_state *plane_state,
int color_plane)
{
struct drm_i915_private *dev_priv = to_i915(plane->base.dev);
enum pipe pipe = plane->pipe;
const struct drm_rect *clip;
u32 val;
int x, y;
if (!crtc_state->enable_psr2_sel_fetch)
return;
if (plane->id == PLANE_CURSOR) {
intel_de_write_fw(dev_priv, PLANE_SEL_FETCH_CTL(pipe, plane->id),
plane_state->ctl);
return;
}
clip = &plane_state->psr2_sel_fetch_area;
val = (clip->y1 + plane_state->uapi.dst.y1) << 16;
val |= plane_state->uapi.dst.x1;
intel_de_write_fw(dev_priv, PLANE_SEL_FETCH_POS(pipe, plane->id), val);
x = plane_state->view.color_plane[color_plane].x;
/*
* From Bspec: UV surface Start Y Position = half of Y plane Y
* start position.
*/
if (!color_plane)
y = plane_state->view.color_plane[color_plane].y + clip->y1;
else
y = plane_state->view.color_plane[color_plane].y + clip->y1 / 2;
val = y << 16 | x;
intel_de_write_fw(dev_priv, PLANE_SEL_FETCH_OFFSET(pipe, plane->id),
val);
/* Sizes are 0 based */
val = (drm_rect_height(clip) - 1) << 16;
val |= (drm_rect_width(&plane_state->uapi.src) >> 16) - 1;
intel_de_write_fw(dev_priv, PLANE_SEL_FETCH_SIZE(pipe, plane->id), val);
intel_de_write_fw(dev_priv, PLANE_SEL_FETCH_CTL(pipe, plane->id),
PLANE_SEL_FETCH_CTL_ENABLE);
}
void intel_psr2_program_trans_man_trk_ctl(const struct intel_crtc_state *crtc_state)
{
struct drm_i915_private *dev_priv = to_i915(crtc_state->uapi.crtc->dev);
struct intel_encoder *encoder;
if (!crtc_state->enable_psr2_sel_fetch)
return;
for_each_intel_encoder_mask_with_psr(&dev_priv->drm, encoder,
crtc_state->uapi.encoder_mask) {
struct intel_dp *intel_dp = enc_to_intel_dp(encoder);
lockdep_assert_held(&intel_dp->psr.lock);
if (intel_dp->psr.psr2_sel_fetch_cff_enabled)
return;
break;
}
intel_de_write(dev_priv, PSR2_MAN_TRK_CTL(crtc_state->cpu_transcoder),
crtc_state->psr2_man_track_ctl);
}
static void psr2_man_trk_ctl_calc(struct intel_crtc_state *crtc_state,
struct drm_rect *clip, bool full_update)
{
struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
u32 val = man_trk_ctl_enable_bit_get(dev_priv);
/* SF partial frame enable has to be set even on full update */
val |= man_trk_ctl_partial_frame_bit_get(dev_priv);
if (full_update) {
/*
* Not applying Wa_14014971508:adlp as we do not support the
* feature that requires this workaround.
*/
val |= man_trk_ctl_single_full_frame_bit_get(dev_priv);
goto exit;
}
if (clip->y1 == -1)
goto exit;
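/*
* ADL-P takes the SU region boundaries as raw scanlines (end inclusive,
* hence the y2 - 1), while earlier platforms program the region in
* 4-line blocks: e.g. a clip with y1 == 0 and y2 == 8 is written below
* as start address 1 and end address 3.
*/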
if (IS_ALDERLAKE_P(dev_priv)) {
val |= ADLP_PSR2_MAN_TRK_CTL_SU_REGION_START_ADDR(clip->y1);
val |= ADLP_PSR2_MAN_TRK_CTL_SU_REGION_END_ADDR(clip->y2 - 1);
} else {
drm_WARN_ON(crtc_state->uapi.crtc->dev, clip->y1 % 4 || clip->y2 % 4);
val |= PSR2_MAN_TRK_CTL_SU_REGION_START_ADDR(clip->y1 / 4 + 1);
val |= PSR2_MAN_TRK_CTL_SU_REGION_END_ADDR(clip->y2 / 4 + 1);
}
exit:
crtc_state->psr2_man_track_ctl = val;
}
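/*
* Merge @damage_area, clipped against @pipe_src, into
* @overlap_damage_area; a y1 of -1 marks an empty overlap area.
*/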
static void clip_area_update(struct drm_rect *overlap_damage_area,
struct drm_rect *damage_area,
struct drm_rect *pipe_src)
{
if (!drm_rect_intersect(damage_area, pipe_src))
return;
if (overlap_damage_area->y1 == -1) {
overlap_damage_area->y1 = damage_area->y1;
overlap_damage_area->y2 = damage_area->y2;
return;
}
if (damage_area->y1 < overlap_damage_area->y1)
overlap_damage_area->y1 = damage_area->y1;
if (damage_area->y2 > overlap_damage_area->y2)
overlap_damage_area->y2 = damage_area->y2;
}
static void intel_psr2_sel_fetch_pipe_alignment(const struct intel_crtc_state *crtc_state,
struct drm_rect *pipe_clip)
{
struct drm_i915_private *dev_priv = to_i915(crtc_state->uapi.crtc->dev);
const u16 y_alignment = crtc_state->su_y_granularity;
pipe_clip->y1 -= pipe_clip->y1 % y_alignment;
if (pipe_clip->y2 % y_alignment)
pipe_clip->y2 = ((pipe_clip->y2 / y_alignment) + 1) * y_alignment;
if (IS_ALDERLAKE_P(dev_priv) && crtc_state->dsc.compression_enable)
drm_warn(&dev_priv->drm, "Missing PSR2 sel fetch alignment with DSC\n");
}
/*
* TODO: Not clear how to handle planes with negative position;
* also planes are not updated if they have a negative X
* position, so for now do a full update in these cases.
*
* Plane scaling and rotation are not supported by selective fetch and both
* properties can change without a modeset, so they need to be checked at every
* atomic commit.
*/
static bool psr2_sel_fetch_plane_state_supported(const struct intel_plane_state *plane_state)
{
if (plane_state->uapi.dst.y1 < 0 ||
plane_state->uapi.dst.x1 < 0 ||
plane_state->scaler_id >= 0 ||
plane_state->uapi.rotation != DRM_MODE_ROTATE_0)
return false;
return true;
}
/*
* Check for pipe properties that are not supported by selective fetch.
*
* TODO: pipe scaling causes a modeset but skl_update_scaler_crtc() is executed
* after intel_psr_compute_config(), so for now keeping PSR2 selective fetch
* enabled and going to the full update path.
*/
static bool psr2_sel_fetch_pipe_state_supported(const struct intel_crtc_state *crtc_state)
{
if (crtc_state->scaler_state.scaler_id >= 0)
return false;
return true;
}
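/*
* Compute the selective update region for the pipe from the per-plane
* damage and program each intersecting plane's selective fetch area,
* falling back to a single full frame update whenever the region cannot
* be calculated or an unsupported plane/pipe feature is in use.
*/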
int intel_psr2_sel_fetch_update(struct intel_atomic_state *state,
struct intel_crtc *crtc)
{
struct drm_i915_private *dev_priv = to_i915(state->base.dev);
struct intel_crtc_state *crtc_state = intel_atomic_get_new_crtc_state(state, crtc);
struct drm_rect pipe_clip = { .x1 = 0, .y1 = -1, .x2 = INT_MAX, .y2 = -1 };
struct intel_plane_state *new_plane_state, *old_plane_state;
struct intel_plane *plane;
bool full_update = false;
int i, ret;
if (!crtc_state->enable_psr2_sel_fetch)
return 0;
if (!psr2_sel_fetch_pipe_state_supported(crtc_state)) {
full_update = true;
goto skip_sel_fetch_set_loop;
}
/*
* Calculate the minimal selective fetch area of each plane and the
* pipe damaged area.
* In the next loop the plane selective fetch area will actually be set
* using the whole pipe damaged area.
*/
for_each_oldnew_intel_plane_in_state(state, plane, old_plane_state,
new_plane_state, i) {
struct drm_rect src, damaged_area = { .x1 = 0, .y1 = -1,
.x2 = INT_MAX };
if (new_plane_state->uapi.crtc != crtc_state->uapi.crtc)
continue;
if (!new_plane_state->uapi.visible &&
!old_plane_state->uapi.visible)
continue;
if (!psr2_sel_fetch_plane_state_supported(new_plane_state)) {
full_update = true;
break;
}
/*
* If visibility changed or the plane moved, mark the whole plane
* area as damaged as it needs to be completely redrawn in both the
* old and new position.
*/
if (new_plane_state->uapi.visible != old_plane_state->uapi.visible ||
!drm_rect_equals(&new_plane_state->uapi.dst,
&old_plane_state->uapi.dst)) {
if (old_plane_state->uapi.visible) {
damaged_area.y1 = old_plane_state->uapi.dst.y1;
damaged_area.y2 = old_plane_state->uapi.dst.y2;
clip_area_update(&pipe_clip, &damaged_area,
&crtc_state->pipe_src);
}
if (new_plane_state->uapi.visible) {
damaged_area.y1 = new_plane_state->uapi.dst.y1;
damaged_area.y2 = new_plane_state->uapi.dst.y2;
clip_area_update(&pipe_clip, &damaged_area,
&crtc_state->pipe_src);
}
continue;
} else if (new_plane_state->uapi.alpha != old_plane_state->uapi.alpha) {
/* If alpha changed mark the whole plane area as damaged */
damaged_area.y1 = new_plane_state->uapi.dst.y1;
damaged_area.y2 = new_plane_state->uapi.dst.y2;
clip_area_update(&pipe_clip, &damaged_area,
&crtc_state->pipe_src);
continue;
}
src = drm_plane_state_src(&new_plane_state->uapi);
drm_rect_fp_to_int(&src, &src);
if (!drm_atomic_helper_damage_merged(&old_plane_state->uapi,
&new_plane_state->uapi, &damaged_area))
continue;
damaged_area.y1 += new_plane_state->uapi.dst.y1 - src.y1;
damaged_area.y2 += new_plane_state->uapi.dst.y1 - src.y1;
damaged_area.x1 += new_plane_state->uapi.dst.x1 - src.x1;
damaged_area.x2 += new_plane_state->uapi.dst.x1 - src.x1;
clip_area_update(&pipe_clip, &damaged_area, &crtc_state->pipe_src);
}
/*
* TODO: For now we are just using full update in case
* selective fetch area calculation fails. To optimize this we
* should identify cases where this happens and fix the area
* calculation for those.
*/
if (pipe_clip.y1 == -1) {
drm_info_once(&dev_priv->drm,
"Selective fetch area calculation failed in pipe %c\n",
pipe_name(crtc->pipe));
full_update = true;
}
if (full_update)
goto skip_sel_fetch_set_loop;
ret = drm_atomic_add_affected_planes(&state->base, &crtc->base);
if (ret)
return ret;
intel_psr2_sel_fetch_pipe_alignment(crtc_state, &pipe_clip);
/*
* Now that we have the pipe damaged area, check if it intersects with
* each plane; if it does, set the plane selective fetch area.
*/
for_each_oldnew_intel_plane_in_state(state, plane, old_plane_state,
new_plane_state, i) {
struct drm_rect *sel_fetch_area, inter;
struct intel_plane *linked = new_plane_state->planar_linked_plane;
if (new_plane_state->uapi.crtc != crtc_state->uapi.crtc ||
!new_plane_state->uapi.visible)
continue;
inter = pipe_clip;
if (!drm_rect_intersect(&inter, &new_plane_state->uapi.dst))
continue;
if (!psr2_sel_fetch_plane_state_supported(new_plane_state)) {
full_update = true;
break;
}
sel_fetch_area = &new_plane_state->psr2_sel_fetch_area;
sel_fetch_area->y1 = inter.y1 - new_plane_state->uapi.dst.y1;
sel_fetch_area->y2 = inter.y2 - new_plane_state->uapi.dst.y1;
crtc_state->update_planes |= BIT(plane->id);
/*
* The sel_fetch_area is calculated for the UV plane. Use the
* same area for the Y plane as well.
*/
if (linked) {
struct intel_plane_state *linked_new_plane_state;
struct drm_rect *linked_sel_fetch_area;
linked_new_plane_state = intel_atomic_get_plane_state(state, linked);
if (IS_ERR(linked_new_plane_state))
return PTR_ERR(linked_new_plane_state);
linked_sel_fetch_area = &linked_new_plane_state->psr2_sel_fetch_area;
linked_sel_fetch_area->y1 = sel_fetch_area->y1;
linked_sel_fetch_area->y2 = sel_fetch_area->y2;
crtc_state->update_planes |= BIT(linked->id);
}
}
skip_sel_fetch_set_loop:
psr2_man_trk_ctl_calc(crtc_state, &pipe_clip, full_update);
return 0;
}
void intel_psr_pre_plane_update(struct intel_atomic_state *state,
struct intel_crtc *crtc)
{
struct drm_i915_private *i915 = to_i915(state->base.dev);
const struct intel_crtc_state *old_crtc_state =
intel_atomic_get_old_crtc_state(state, crtc);
const struct intel_crtc_state *new_crtc_state =
intel_atomic_get_new_crtc_state(state, crtc);
struct intel_encoder *encoder;
if (!HAS_PSR(i915))
return;
for_each_intel_encoder_mask_with_psr(state->base.dev, encoder,
old_crtc_state->uapi.encoder_mask) {
struct intel_dp *intel_dp = enc_to_intel_dp(encoder);
struct intel_psr *psr = &intel_dp->psr;
bool needs_to_disable = false;
mutex_lock(&psr->lock);
/*
* Reasons to disable:
* - PSR disabled in new state
* - All planes will go inactive
* - Changing between PSR versions
*/
needs_to_disable |= intel_crtc_needs_modeset(new_crtc_state);
needs_to_disable |= !new_crtc_state->has_psr;
needs_to_disable |= !new_crtc_state->active_planes;
needs_to_disable |= new_crtc_state->has_psr2 != psr->psr2_enabled;
if (psr->enabled && needs_to_disable)
intel_psr_disable_locked(intel_dp);
mutex_unlock(&psr->lock);
}
}
static void _intel_psr_post_plane_update(const struct intel_atomic_state *state,
const struct intel_crtc_state *crtc_state)
{
struct drm_i915_private *dev_priv = to_i915(state->base.dev);
struct intel_encoder *encoder;
if (!crtc_state->has_psr)
return;
for_each_intel_encoder_mask_with_psr(state->base.dev, encoder,
crtc_state->uapi.encoder_mask) {
struct intel_dp *intel_dp = enc_to_intel_dp(encoder);
struct intel_psr *psr = &intel_dp->psr;
mutex_lock(&psr->lock);
if (psr->sink_not_reliable)
goto exit;
drm_WARN_ON(&dev_priv->drm, psr->enabled && !crtc_state->active_planes);
/* Only enable if there are active planes */
if (!psr->enabled && crtc_state->active_planes)
intel_psr_enable_locked(intel_dp, crtc_state);
/* Force a PSR exit when enabling CRC to avoid CRC timeouts */
if (crtc_state->crc_enabled && psr->enabled)
psr_force_hw_tracking_exit(intel_dp);
exit:
mutex_unlock(&psr->lock);
}
}
void intel_psr_post_plane_update(const struct intel_atomic_state *state)
{
struct drm_i915_private *dev_priv = to_i915(state->base.dev);
struct intel_crtc_state *crtc_state;
struct intel_crtc *crtc;
int i;
if (!HAS_PSR(dev_priv))
return;
for_each_new_intel_crtc_in_state(state, crtc, crtc_state, i)
_intel_psr_post_plane_update(state, crtc_state);
}
static int _psr2_ready_for_pipe_update_locked(struct intel_dp *intel_dp)
{
struct drm_i915_private *dev_priv = dp_to_i915(intel_dp);
/*
* Any state lower than EDP_PSR2_STATUS_STATE_DEEP_SLEEP is enough.
* As all higher states have bit 4 of the PSR2 state set, we can just wait for
* EDP_PSR2_STATUS_STATE_DEEP_SLEEP to be cleared.
*/
return intel_de_wait_for_clear(dev_priv,
EDP_PSR2_STATUS(intel_dp->psr.transcoder),
EDP_PSR2_STATUS_STATE_DEEP_SLEEP, 50);
}
static int _psr1_ready_for_pipe_update_locked(struct intel_dp *intel_dp)
{
struct drm_i915_private *dev_priv = dp_to_i915(intel_dp);
/*
* From bspec: Panel Self Refresh (BDW+)
* Max. time for PSR to idle = Inverse of the refresh rate + 6 ms of
* exit training time + 1.5 ms of aux channel handshake. 50 ms is
* defensive enough to cover everything.
*/
return intel_de_wait_for_clear(dev_priv,
EDP_PSR_STATUS(intel_dp->psr.transcoder),
EDP_PSR_STATUS_STATE_MASK, 50);
}
/**
* intel_psr_wait_for_idle_locked - wait for PSR to be ready for a pipe update
* @new_crtc_state: new CRTC state
*
* This function is expected to be called from pipe_update_start() where it is
* not expected to race with PSR enable or disable.
*/
void intel_psr_wait_for_idle_locked(const struct intel_crtc_state *new_crtc_state)
{
struct drm_i915_private *dev_priv = to_i915(new_crtc_state->uapi.crtc->dev);
struct intel_encoder *encoder;
if (!new_crtc_state->has_psr)
return;
for_each_intel_encoder_mask_with_psr(&dev_priv->drm, encoder,
new_crtc_state->uapi.encoder_mask) {
struct intel_dp *intel_dp = enc_to_intel_dp(encoder);
int ret;
lockdep_assert_held(&intel_dp->psr.lock);
if (!intel_dp->psr.enabled)
continue;
if (intel_dp->psr.psr2_enabled)
ret = _psr2_ready_for_pipe_update_locked(intel_dp);
else
ret = _psr1_ready_for_pipe_update_locked(intel_dp);
if (ret)
drm_err(&dev_priv->drm, "PSR wait timed out, atomic update may fail\n");
}
}
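/*
* Wait for the PSR status to reach idle so PSR can be re-activated. The
* PSR lock is dropped around the register wait, so the enabled state is
* re-checked after re-acquiring it.
*/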
static bool __psr_wait_for_idle_locked(struct intel_dp *intel_dp)
{
struct drm_i915_private *dev_priv = dp_to_i915(intel_dp);
i915_reg_t reg;
u32 mask;
int err;
if (!intel_dp->psr.enabled)
return false;
if (intel_dp->psr.psr2_enabled) {
reg = EDP_PSR2_STATUS(intel_dp->psr.transcoder);
mask = EDP_PSR2_STATUS_STATE_MASK;
} else {
reg = EDP_PSR_STATUS(intel_dp->psr.transcoder);
mask = EDP_PSR_STATUS_STATE_MASK;
}
mutex_unlock(&intel_dp->psr.lock);
err = intel_de_wait_for_clear(dev_priv, reg, mask, 50);
if (err)
drm_err(&dev_priv->drm,
"Timed out waiting for PSR Idle for re-enable\n");
/* After the unlocked wait, verify that PSR is still wanted! */
mutex_lock(&intel_dp->psr.lock);
return err == 0 && intel_dp->psr.enabled;
}
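/*
* Force a modeset on every eDP connector by marking its CRTC mode as
* changed and committing an atomic state, so that a changed PSR debug
* mode takes effect.
*/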
static int intel_psr_fastset_force(struct drm_i915_private *dev_priv)
{
struct drm_connector_list_iter conn_iter;
struct drm_device *dev = &dev_priv->drm;
struct drm_modeset_acquire_ctx ctx;
struct drm_atomic_state *state;
struct drm_connector *conn;
int err = 0;
state = drm_atomic_state_alloc(dev);
if (!state)
return -ENOMEM;
drm_modeset_acquire_init(&ctx, DRM_MODESET_ACQUIRE_INTERRUPTIBLE);
state->acquire_ctx = &ctx;
retry:
drm_connector_list_iter_begin(dev, &conn_iter);
drm_for_each_connector_iter(conn, &conn_iter) {
struct drm_connector_state *conn_state;
struct drm_crtc_state *crtc_state;
if (conn->connector_type != DRM_MODE_CONNECTOR_eDP)
continue;
conn_state = drm_atomic_get_connector_state(state, conn);
if (IS_ERR(conn_state)) {
err = PTR_ERR(conn_state);
break;
}
if (!conn_state->crtc)
continue;
crtc_state = drm_atomic_get_crtc_state(state, conn_state->crtc);
if (IS_ERR(crtc_state)) {
err = PTR_ERR(crtc_state);
break;
}
/* Mark mode as changed to trigger a pipe->update() */
crtc_state->mode_changed = true;
}
drm_connector_list_iter_end(&conn_iter);
if (err == 0)
err = drm_atomic_commit(state);
if (err == -EDEADLK) {
drm_atomic_state_clear(state);
err = drm_modeset_backoff(&ctx);
if (!err)
goto retry;
}
drm_modeset_drop_locks(&ctx);
drm_modeset_acquire_fini(&ctx);
drm_atomic_state_put(state);
return err;
}
int intel_psr_debug_set(struct intel_dp *intel_dp, u64 val)
{
struct drm_i915_private *dev_priv = dp_to_i915(intel_dp);
const u32 mode = val & I915_PSR_DEBUG_MODE_MASK;
u32 old_mode;
int ret;
if (val & ~(I915_PSR_DEBUG_IRQ | I915_PSR_DEBUG_MODE_MASK) ||
mode > I915_PSR_DEBUG_ENABLE_SEL_FETCH) {
drm_dbg_kms(&dev_priv->drm, "Invalid debug mask %llx\n", val);
return -EINVAL;
}
ret = mutex_lock_interruptible(&intel_dp->psr.lock);
if (ret)
return ret;
old_mode = intel_dp->psr.debug & I915_PSR_DEBUG_MODE_MASK;
intel_dp->psr.debug = val;
/*
* Do it right away if it's already enabled, otherwise it will be done
* when enabling the source.
*/
if (intel_dp->psr.enabled)
psr_irq_control(intel_dp);
mutex_unlock(&intel_dp->psr.lock);
if (old_mode != mode)
ret = intel_psr_fastset_force(dev_priv);
return ret;
}
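/*
* Handle a previously flagged AUX error: disable PSR, mark the sink as
* not reliable and put the sink back into the D0 power state.
*/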
static void intel_psr_handle_irq(struct intel_dp *intel_dp)
{
struct intel_psr *psr = &intel_dp->psr;
intel_psr_disable_locked(intel_dp);
psr->sink_not_reliable = true;
/* let's make sure that the sink is awake */
drm_dp_dpcd_writeb(&intel_dp->aux, DP_SET_POWER, DP_SET_POWER_D0);
}
static void intel_psr_work(struct work_struct *work)
{
struct intel_dp *intel_dp =
container_of(work, typeof(*intel_dp), psr.work);
mutex_lock(&intel_dp->psr.lock);
if (!intel_dp->psr.enabled)
goto unlock;
if (READ_ONCE(intel_dp->psr.irq_aux_error))
intel_psr_handle_irq(intel_dp);
/*
* We have to make sure PSR is ready for re-enable,
* otherwise it stays disabled until the next full enable/disable cycle.
* PSR might take some time to get fully disabled
* and be ready for re-enable.
*/
if (!__psr_wait_for_idle_locked(intel_dp))
goto unlock;
/*
* The delayed work can race with an invalidate hence we need to
* recheck. Since psr_flush first clears this and then reschedules we
* won't ever miss a flush when bailing out here.
*/
if (intel_dp->psr.busy_frontbuffer_bits || intel_dp->psr.active)
goto unlock;
intel_psr_activate(intel_dp);
unlock:
mutex_unlock(&intel_dp->psr.lock);
}
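/*
* With PSR2 selective fetch enabled, an invalidate switches manual
* tracking to continuous full frame (CFF) mode so every frame is fully
* fetched until the matching flush; without selective fetch, simply
* exit PSR.
*/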
static void _psr_invalidate_handle(struct intel_dp *intel_dp)
{
struct drm_i915_private *dev_priv = dp_to_i915(intel_dp);
if (intel_dp->psr.psr2_sel_fetch_enabled) {
u32 val;
if (intel_dp->psr.psr2_sel_fetch_cff_enabled) {
/* Send one update, otherwise lag is observed on screen */
intel_de_write(dev_priv, CURSURFLIVE(intel_dp->psr.pipe), 0);
return;
}
val = man_trk_ctl_enable_bit_get(dev_priv) |
man_trk_ctl_partial_frame_bit_get(dev_priv) |
man_trk_ctl_continuos_full_frame(dev_priv);
intel_de_write(dev_priv, PSR2_MAN_TRK_CTL(intel_dp->psr.transcoder), val);
intel_de_write(dev_priv, CURSURFLIVE(intel_dp->psr.pipe), 0);
intel_dp->psr.psr2_sel_fetch_cff_enabled = true;
} else {
intel_psr_exit(intel_dp);
}
}
/**
* intel_psr_invalidate - Invalidate PSR
* @dev_priv: i915 device
* @frontbuffer_bits: frontbuffer plane tracking bits
* @origin: which operation caused the invalidate
*
* Since the hardware frontbuffer tracking has gaps we need to integrate
* with the software frontbuffer tracking. This function gets called every
* time frontbuffer rendering starts and a buffer gets dirtied. PSR must be
* disabled if the frontbuffer mask contains a buffer relevant to PSR.
*
* Dirty frontbuffers relevant to PSR are tracked in busy_frontbuffer_bits.
*/
void intel_psr_invalidate(struct drm_i915_private *dev_priv,
unsigned frontbuffer_bits, enum fb_op_origin origin)
{
struct intel_encoder *encoder;
if (origin == ORIGIN_FLIP)
return;
for_each_intel_encoder_with_psr(&dev_priv->drm, encoder) {
unsigned int pipe_frontbuffer_bits = frontbuffer_bits;
struct intel_dp *intel_dp = enc_to_intel_dp(encoder);
mutex_lock(&intel_dp->psr.lock);
if (!intel_dp->psr.enabled) {
mutex_unlock(&intel_dp->psr.lock);
continue;
}
pipe_frontbuffer_bits &=
INTEL_FRONTBUFFER_ALL_MASK(intel_dp->psr.pipe);
intel_dp->psr.busy_frontbuffer_bits |= pipe_frontbuffer_bits;
if (pipe_frontbuffer_bits)
_psr_invalidate_handle(intel_dp);
mutex_unlock(&intel_dp->psr.lock);
}
}
/*
* When we completely rely on PSR2 S/W tracking in the future,
* intel_psr_flush() will also invalidate and flush the PSR for ORIGIN_FLIP
* events, therefore tgl_dc3co_flush_locked() will need to be changed
* accordingly.
*/
static void
tgl_dc3co_flush_locked(struct intel_dp *intel_dp, unsigned int frontbuffer_bits,
enum fb_op_origin origin)
{
if (!intel_dp->psr.dc3co_exitline || !intel_dp->psr.psr2_enabled ||
!intel_dp->psr.active)
return;
/*
* Modify the delay of the delayed work at every frontbuffer flush flip
* event; when the delayed work finally runs, it means the display has been idle.
*/
if (!(frontbuffer_bits &
INTEL_FRONTBUFFER_ALL_MASK(intel_dp->psr.pipe)))
return;
tgl_psr2_enable_dc3co(intel_dp);
mod_delayed_work(system_wq, &intel_dp->psr.dc3co_work,
intel_dp->psr.dc3co_exit_delay);
}
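/*
* On flush, either leave continuous full frame mode with a single full
* frame update once no frontbuffer bits are busy anymore, or force a HW
* tracking exit; in the non selective fetch case also schedule the PSR
* work so PSR can be re-activated.
*/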
static void _psr_flush_handle(struct intel_dp *intel_dp)
{
struct drm_i915_private *dev_priv = dp_to_i915(intel_dp);
if (intel_dp->psr.psr2_sel_fetch_enabled) {
if (intel_dp->psr.psr2_sel_fetch_cff_enabled) {
/* can we turn CFF off? */
if (intel_dp->psr.busy_frontbuffer_bits == 0) {
u32 val = man_trk_ctl_enable_bit_get(dev_priv) |
man_trk_ctl_partial_frame_bit_get(dev_priv) |
man_trk_ctl_single_full_frame_bit_get(dev_priv);
/*
* turn continuous full frame off and do a single
* full frame
*/
intel_de_write(dev_priv, PSR2_MAN_TRK_CTL(intel_dp->psr.transcoder),
val);
intel_de_write(dev_priv, CURSURFLIVE(intel_dp->psr.pipe), 0);
intel_dp->psr.psr2_sel_fetch_cff_enabled = false;
}
} else {
/*
* continuous full frame is disabled, only a single full
* frame is required
*/
psr_force_hw_tracking_exit(intel_dp);
}
} else {
psr_force_hw_tracking_exit(intel_dp);
if (!intel_dp->psr.active && !intel_dp->psr.busy_frontbuffer_bits)
schedule_work(&intel_dp->psr.work);
}
}
/**
* intel_psr_flush - Flush PSR
* @dev_priv: i915 device
* @frontbuffer_bits: frontbuffer plane tracking bits
* @origin: which operation caused the flush
*
* Since the hardware frontbuffer tracking has gaps we need to integrate
* with the software frontbuffer tracking. This function gets called every
* time frontbuffer rendering has completed and flushed out to memory. PSR
* can be enabled again if no other frontbuffer relevant to PSR is dirty.
*
* Dirty frontbuffers relevant to PSR are tracked in busy_frontbuffer_bits.
*/
void intel_psr_flush(struct drm_i915_private *dev_priv,
unsigned frontbuffer_bits, enum fb_op_origin origin)
{
struct intel_encoder *encoder;
for_each_intel_encoder_with_psr(&dev_priv->drm, encoder) {
unsigned int pipe_frontbuffer_bits = frontbuffer_bits;
struct intel_dp *intel_dp = enc_to_intel_dp(encoder);
mutex_lock(&intel_dp->psr.lock);
if (!intel_dp->psr.enabled) {
mutex_unlock(&intel_dp->psr.lock);
continue;
}
pipe_frontbuffer_bits &=
INTEL_FRONTBUFFER_ALL_MASK(intel_dp->psr.pipe);
intel_dp->psr.busy_frontbuffer_bits &= ~pipe_frontbuffer_bits;
/*
* If PSR is paused by an explicit intel_psr_pause() call,
* we have to ensure that the PSR is not activated until
* intel_psr_resume() is called.
*/
if (intel_dp->psr.paused)
goto unlock;
if (origin == ORIGIN_FLIP ||
(origin == ORIGIN_CURSOR_UPDATE &&
!intel_dp->psr.psr2_sel_fetch_enabled)) {
tgl_dc3co_flush_locked(intel_dp, frontbuffer_bits, origin);
goto unlock;
}
if (pipe_frontbuffer_bits == 0)
goto unlock;
/* By definition flush = invalidate + flush */
_psr_flush_handle(intel_dp);
unlock:
mutex_unlock(&intel_dp->psr.lock);
}
}
/**
* intel_psr_init - Init basic PSR work and mutex.
* @intel_dp: Intel DP
*
* This function is called after initializing the connector
* (connector initialization handles the connector capabilities)
* and it initializes basic PSR state for each DP encoder.
*/
void intel_psr_init(struct intel_dp *intel_dp)
{
struct intel_connector *connector = intel_dp->attached_connector;
struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp);
struct drm_i915_private *dev_priv = dp_to_i915(intel_dp);
if (!HAS_PSR(dev_priv))
return;
/*
* HSW spec explicitly says PSR is tied to port A.
* BDW+ platforms have an instance of PSR registers per transcoder, but
* BDW, GEN9 and GEN11 have not been validated by the HW team on any
* transcoder other than the eDP one.
* For now only one instance of PSR is supported for BDW, GEN9 and GEN11,
* so let's keep it hardcoded to PORT_A for those platforms.
* GEN12, however, supports an instance of PSR registers per transcoder.
*/
if (DISPLAY_VER(dev_priv) < 12 && dig_port->base.port != PORT_A) {
drm_dbg_kms(&dev_priv->drm,
"PSR condition failed: Port not supported\n");
return;
}
intel_dp->psr.source_support = true;
/* Set link_standby vs. link_off defaults */
if (DISPLAY_VER(dev_priv) < 12)
/* For platforms up to TGL, let's respect the VBT again */
intel_dp->psr.link_standby = connector->panel.vbt.psr.full_link;
INIT_WORK(&intel_dp->psr.work, intel_psr_work);
INIT_DELAYED_WORK(&intel_dp->psr.dc3co_work, tgl_dc3co_disable_work);
mutex_init(&intel_dp->psr.lock);
}
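/*
* Read the sink PSR status and error status over DPCD. Returns 0 on
* success with @status masked down to DP_PSR_SINK_STATE_MASK, otherwise
* the failing drm_dp_dpcd_readb() result.
*/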
static int psr_get_status_and_error_status(struct intel_dp *intel_dp,
u8 *status, u8 *error_status)
{
struct drm_dp_aux *aux = &intel_dp->aux;
int ret;
ret = drm_dp_dpcd_readb(aux, DP_PSR_STATUS, status);
if (ret != 1)
return ret;
ret = drm_dp_dpcd_readb(aux, DP_PSR_ERROR_STATUS, error_status);
if (ret != 1)
return ret;
*status = *status & DP_PSR_SINK_STATE_MASK;
return 0;
}
static void psr_alpm_check(struct intel_dp *intel_dp)
{
struct drm_i915_private *dev_priv = dp_to_i915(intel_dp);
struct drm_dp_aux *aux = &intel_dp->aux;
struct intel_psr *psr = &intel_dp->psr;
u8 val;
int r;
if (!psr->psr2_enabled)
return;
r = drm_dp_dpcd_readb(aux, DP_RECEIVER_ALPM_STATUS, &val);
if (r != 1) {
drm_err(&dev_priv->drm, "Error reading ALPM status\n");
return;
}
if (val & DP_ALPM_LOCK_TIMEOUT_ERROR) {
intel_psr_disable_locked(intel_dp);
psr->sink_not_reliable = true;
drm_dbg_kms(&dev_priv->drm,
"ALPM lock timeout error, disabling PSR\n");
/* Clearing error */
drm_dp_dpcd_writeb(aux, DP_RECEIVER_ALPM_STATUS, val);
}
}
static void psr_capability_changed_check(struct intel_dp *intel_dp)
{
struct drm_i915_private *dev_priv = dp_to_i915(intel_dp);
struct intel_psr *psr = &intel_dp->psr;
u8 val;
int r;
r = drm_dp_dpcd_readb(&intel_dp->aux, DP_PSR_ESI, &val);
if (r != 1) {
drm_err(&dev_priv->drm, "Error reading DP_PSR_ESI\n");
return;
}
if (val & DP_PSR_CAPS_CHANGE) {
intel_psr_disable_locked(intel_dp);
psr->sink_not_reliable = true;
drm_dbg_kms(&dev_priv->drm,
"Sink PSR capability changed, disabling PSR\n");
/* Clearing it */
drm_dp_dpcd_writeb(&intel_dp->aux, DP_PSR_ESI, val);
}
}
void intel_psr_short_pulse(struct intel_dp *intel_dp)
{
struct drm_i915_private *dev_priv = dp_to_i915(intel_dp);
struct intel_psr *psr = &intel_dp->psr;
u8 status, error_status;
const u8 errors = DP_PSR_RFB_STORAGE_ERROR |
DP_PSR_VSC_SDP_UNCORRECTABLE_ERROR |
DP_PSR_LINK_CRC_ERROR;
if (!CAN_PSR(intel_dp))
return;
mutex_lock(&psr->lock);
if (!psr->enabled)
goto exit;
if (psr_get_status_and_error_status(intel_dp, &status, &error_status)) {
drm_err(&dev_priv->drm,
"Error reading PSR status or error status\n");
goto exit;
}
if (status == DP_PSR_SINK_INTERNAL_ERROR || (error_status & errors)) {
intel_psr_disable_locked(intel_dp);
psr->sink_not_reliable = true;
}
if (status == DP_PSR_SINK_INTERNAL_ERROR && !error_status)
drm_dbg_kms(&dev_priv->drm,
"PSR sink internal error, disabling PSR\n");
if (error_status & DP_PSR_RFB_STORAGE_ERROR)
drm_dbg_kms(&dev_priv->drm,
"PSR RFB storage error, disabling PSR\n");
if (error_status & DP_PSR_VSC_SDP_UNCORRECTABLE_ERROR)
drm_dbg_kms(&dev_priv->drm,
"PSR VSC SDP uncorrectable error, disabling PSR\n");
if (error_status & DP_PSR_LINK_CRC_ERROR)
drm_dbg_kms(&dev_priv->drm,
"PSR Link CRC error, disabling PSR\n");
if (error_status & ~errors)
drm_err(&dev_priv->drm,
"PSR_ERROR_STATUS unhandled errors %x\n",
error_status & ~errors);
/* clear status register */
drm_dp_dpcd_writeb(&intel_dp->aux, DP_PSR_ERROR_STATUS, error_status);
psr_alpm_check(intel_dp);
psr_capability_changed_check(intel_dp);
exit:
mutex_unlock(&psr->lock);
}
bool intel_psr_enabled(struct intel_dp *intel_dp)
{
bool ret;
if (!CAN_PSR(intel_dp))
return false;
mutex_lock(&intel_dp->psr.lock);
ret = intel_dp->psr.enabled;
mutex_unlock(&intel_dp->psr.lock);
return ret;
}
/**
* intel_psr_lock - grab PSR lock
* @crtc_state: the crtc state
*
* This is initially meant to be used around the CRTC update, when
* vblank-sensitive registers are updated and we need to grab the lock
* before that to avoid vblank evasion.
*/
void intel_psr_lock(const struct intel_crtc_state *crtc_state)
{
struct drm_i915_private *i915 = to_i915(crtc_state->uapi.crtc->dev);
struct intel_encoder *encoder;
if (!crtc_state->has_psr)
return;
for_each_intel_encoder_mask_with_psr(&i915->drm, encoder,
crtc_state->uapi.encoder_mask) {
struct intel_dp *intel_dp = enc_to_intel_dp(encoder);
mutex_lock(&intel_dp->psr.lock);
break;
}
}
/**
* intel_psr_unlock - release PSR lock
* @crtc_state: the crtc state
*
* Release the PSR lock that was held during pipe update.
*/
void intel_psr_unlock(const struct intel_crtc_state *crtc_state)
{
struct drm_i915_private *i915 = to_i915(crtc_state->uapi.crtc->dev);
struct intel_encoder *encoder;
if (!crtc_state->has_psr)
return;
for_each_intel_encoder_mask_with_psr(&i915->drm, encoder,
crtc_state->uapi.encoder_mask) {
struct intel_dp *intel_dp = enc_to_intel_dp(encoder);
mutex_unlock(&intel_dp->psr.lock);
break;
}
}