1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 270 271 272 273 274 275 276 277 278 279 280 281 282 283 284 285 286 287 288 289 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 324 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 340 341 342 343 344 345 346 347 348 349 350 351 352 353 354 355 356 357 358 359 360 361 362 363 364 365 366 367 368 369 370 371 372 373 374 375 376 377 378 379 380 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396 397 398 399 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417 418 419 420 421 422 423 424 425 426 427 428 429 430 431 432 433 434 435 436 437 438 439 440 441 442 443 444 445 446 447 448 449 450 451 452 453 454 455 456 457 458 459 460 461 462 463 464 465 466 467 468 469 470 471 472 473 474 475 476 477 478 479 480 481 482 483 484 485 486 487 488 489 490 491 492 493 494 495 496 497 498 499 500 501 502 503 504 505 506 507 508 509 510 511 512 513 514 515 516 517 518 519 520 521 522 523 524 525 526 527 528 529 530 531 532 533 534 535 536 537 538 539 540 541 542 543 544 545 546 547 548 549 550 551 552 553 554 555 556 557 558 559 560 561 562 563 564 565 566 567 568 569 570 571 572 573 574 575 576 577 578 579 580 581 582 583 584 585 586 587 588 589 590 591 592 593 594 595 596 597 598 599 600 601 602 603 604 605 606 607 608 609 610 611 612 613 614 615 616 617 618 619 620 621 622 623 624 625 626 627 628 629 630 631 632 633 634 635 636 637 638 639 640 641 642 643 644 645 646 647 648 649 650 651 652 653 654 655 656 657 658 659 660 661 662 663 664 665 666 667 668 669 670 671 672 673 674 675 676 677 678 679 680 681 682 683 684 685 686 687 688 689 690 691 692 693 694 695 696 697 698 699 700 701 702 703 704 705 706 707 708 709 710 711 712 713 714 715 716 717 718 719 720 721 722 723 724 725 726 727 728 729 730 731 732 733 734 735 736 737 738 739 740 741 742 743 744 745 746 747 748 749 750 751 752 753 754 755 756 757 758 759 760 761 762 763 764 765 766 767 768 769 770 771 772 773 774 775 776 777 778 779 780 781 782 783 784 785 786 787 788 789 790 791 792 793 794 795 796 797 798 799 800 801 802 803 804 805 806 807 808 809 810 811 812 813 814 815 816 817 818 819 820 821 822 823 824 825 826 827 828 829 830 831 832 833 834 835 836 837 838 839 840 841 842 843 844 845 846 847 848 849 850 851 852 853 854 855 856 857 858 859 860 861 862 863 864 865 866 867 868 869 870 871 872 873 874 875 876 877 878 879 880 881 882 883 884 885 886 887 888 889 890 891 892 893 894 895 896 897 898 899 900 901 902 903 904 905 906 907 908 909 910 911 912 913 914 915 916 917 918 919 920 921 922 923 924 925 926 927 928 929 930 931 932 933 934 935 936 937 938 939 940 941 942 943 944 945 946 947 948 949 950 951 952 953 954 955 956 957 958 959 960 961 962 963 964 965 966 967 968 969 970 971 972 973 974 975 976 977 978 979 980 981 982 983 984 985 986 987 988 989 990 991 992 993 994 995 996 997 998 999 1000 1001 1002 1003 1004 1005 1006 1007 1008 1009 1010 1011 1012 1013 1014 1015 1016 1017 1018 1019 1020 1021 1022 1023 1024 1025 1026 1027 1028 1029 1030 1031 1032 1033 1034 1035 1036 1037 1038 1039 1040 1041 1042 1043 1044 1045 1046 1047 1048 1049 1050 1051 1052 1053 1054 1055 1056 1057 1058 1059 1060 1061 1062 1063 1064 1065 1066 1067 1068 1069 1070 1071 1072 1073 1074 1075 1076 1077 1078 1079 1080 1081 1082 1083 1084 1085 1086 1087 1088 1089 1090 1091 1092 1093 1094 1095 1096 1097 1098 1099 1100 1101 1102 1103 1104 1105 1106 1107 1108 1109 1110 1111 1112 1113 1114 1115 1116 1117 1118 1119 1120 1121 1122 1123 1124 1125 1126 1127 1128 1129 1130 1131 1132 1133 1134 1135 1136 1137 1138 1139 1140 1141 1142 1143 1144 1145 1146 1147 1148 1149 1150 1151 1152 1153 1154 1155 1156 1157 1158 1159 1160 1161 1162 1163 1164 1165 1166 1167 1168 1169 1170 1171 1172 1173 1174 1175 1176 1177 1178 1179 1180 1181 1182 1183 1184 1185 1186 1187 1188 1189 1190 1191 1192 1193 1194 1195 1196 1197 1198 1199 1200 1201 1202 1203 1204 1205 1206 1207 1208 1209 1210 1211 1212 1213 1214 1215 1216 1217 1218 1219 1220 1221 1222 1223 1224 1225 1226 1227 1228 1229 1230 1231 1232 1233 1234 1235 1236 1237 1238 1239 1240 1241 1242 1243 1244 1245 1246 1247 1248 1249 1250 1251 1252 1253 1254 1255 1256 1257 1258 1259 1260 1261 1262 1263 1264 1265 1266 1267 1268 1269 1270 1271 1272 1273 1274 1275 1276 1277 1278 1279 1280 1281 1282 1283 1284 1285 1286 1287 1288 1289 1290 1291 1292 1293 1294 1295 1296 1297 1298 1299 1300 1301 1302 1303 1304 1305 1306 1307 1308 1309 1310 1311 1312 1313 1314 1315 1316 1317 1318 1319 1320 1321 1322 1323 1324 1325 1326 1327 1328 1329 1330 1331 1332 1333 1334 1335 1336 1337 1338 1339 1340 1341 1342 1343 1344 1345 1346 1347 1348 1349 1350 1351 1352 1353 1354 1355 1356 1357 1358 1359 1360 1361 1362 1363 1364 1365 1366 1367 1368 1369 1370 1371 1372 1373 1374 1375 1376 1377 1378 1379 1380 1381 1382 1383 1384 1385 1386 1387 1388 1389 1390 1391 1392 1393 1394 1395 1396 1397 1398 1399 1400 1401 1402 1403 1404 1405 1406 1407 1408 1409 1410 1411 1412 1413 1414 1415 1416 1417 1418 1419 1420 1421 1422 1423 1424 1425 1426 1427 1428 1429 1430 1431 1432 1433 1434 1435 1436 1437 1438 1439 1440 1441 1442 1443 1444 1445 1446 1447 1448 1449 1450 1451 1452 1453 1454 1455 1456 1457 1458 1459 1460 1461 1462 1463 1464 1465 1466 1467 1468 1469 1470 1471 1472 1473 1474 1475 1476 1477 1478 1479 1480 1481 1482 1483 1484 1485 1486 1487 1488 1489 1490 1491 1492 1493 1494 1495 1496 1497 1498 1499 1500 1501 1502 1503 1504 1505 1506 1507 1508 1509 1510 1511 1512 1513 1514 1515 1516 1517 1518 1519 1520 1521 1522 1523 1524 1525 1526 1527 1528 1529 1530 1531 1532 1533 1534 1535 1536 1537 1538 1539 1540 1541 1542 1543 1544 1545 1546 1547 1548 1549 1550 1551 1552 1553 1554 1555 1556 1557 1558 1559 1560 1561 1562 1563 1564 1565 1566 1567 1568 1569 1570 1571 1572 1573 1574 1575 1576 1577 1578 1579 1580 1581 1582 1583 1584 1585 1586 1587 1588 1589 1590 1591 1592 1593 1594 1595 1596 1597 1598 1599 1600 1601 1602 1603 1604 1605 1606 1607 1608 1609 1610 1611 1612 1613 1614 1615 1616 1617 1618 1619 1620 1621 1622 1623 1624 1625 1626 1627 1628 1629 1630 1631 1632 1633 1634 1635 1636 1637 1638 1639 1640 1641 1642 1643 1644 1645 1646 1647 1648 1649 1650 1651 1652 1653 1654 1655 1656 1657 1658 1659 1660 1661 1662 1663 1664 1665 1666 1667 1668 1669 1670 1671 1672 1673 1674 1675 1676 1677 1678 1679 1680 1681 1682 1683 1684 1685 1686 1687 1688 1689 1690 1691 1692 1693 1694 1695 1696 1697 1698 1699 1700 1701 1702 1703 1704 1705 1706 1707 1708 1709 1710 1711 1712 1713 1714 1715 1716 1717 1718 1719 1720 1721 1722 1723 1724 1725 1726 1727 1728 1729 1730 1731 1732 1733 1734 1735 1736 1737 1738 1739 1740 1741 1742 1743 1744 1745 1746 1747 1748 1749 1750 1751 1752 1753 1754 1755 1756 1757 1758 1759 1760 1761 1762 1763 1764 1765 1766 1767 1768 1769 1770 1771 1772 1773 1774 1775 1776 1777 1778 1779 1780 1781 1782 1783 1784 1785 1786 1787 1788 1789 1790 1791 1792 1793 1794 1795 1796 1797 1798 1799 1800 1801 1802 1803 1804 1805 1806 1807 1808 1809 1810 1811 1812 1813 1814 1815 1816 1817 1818 1819 1820 1821 1822 1823 1824 1825 1826 1827 1828 1829 1830 1831 1832 1833 1834 1835 1836 1837 1838 1839 1840 1841 1842 1843 1844 1845 1846 1847 1848 1849 1850 1851 1852 1853 1854 1855 1856 1857 1858 1859 1860 1861 1862 1863 1864 1865 1866 1867 1868 1869 1870 1871 1872 1873 1874 1875 1876 1877 1878 1879 1880 1881 1882 1883 1884 1885 1886 1887 1888 1889 1890 1891 1892 1893 1894 1895 1896 1897 1898 1899 1900 1901 1902 1903 1904 1905 1906 1907 1908 1909 1910 1911 1912 1913 1914 1915 1916 1917 1918 1919 1920 1921 1922 1923 1924 1925 1926 1927 1928 1929 1930 1931 1932 1933 1934 1935 1936 1937 1938 1939 1940 1941 1942 1943 1944 1945 1946 1947 1948 1949 1950 1951 1952 1953 1954 1955 1956 1957 1958 1959 1960 1961 1962 1963 1964 1965 1966 1967 1968 1969 1970 1971 1972 1973 1974 1975 1976 1977 1978 1979 1980 1981 1982 1983 1984 1985 1986 1987 1988 1989 1990 1991 1992 1993 1994 1995 1996 1997 1998 1999 2000 2001 2002 2003 2004 2005 2006 2007 2008 2009 2010 2011 2012 2013 2014 2015 2016 2017 2018 2019 2020 2021 2022 2023 2024 2025 2026 2027 2028 2029 2030 2031 2032 2033 2034 2035 2036 2037 2038 2039 2040 2041 2042 2043 2044 2045 2046 2047 2048 2049 2050 2051 2052 2053 2054 2055 2056 2057 2058 2059 2060 2061 2062 2063 2064 2065 2066 2067 2068 2069 2070 2071 2072 2073 2074 2075 2076 2077 2078 2079 2080 2081 2082 2083 2084 2085 2086 2087 2088 2089 2090 2091 2092 2093 2094 2095 2096 2097 2098 2099 2100 2101 2102 2103 2104 2105 2106 2107 2108 2109 2110 2111 2112 2113 2114 2115 2116 2117 2118 2119 2120 2121 2122 2123 2124 2125 2126 2127 2128 2129 2130 2131 2132 2133 2134 2135 2136 2137 2138 2139 2140 2141 2142 2143 2144 2145 2146 2147 2148 2149 2150 2151 2152 2153 2154 2155 2156 2157 2158 2159 2160 2161 2162 2163 2164 2165 2166 2167 2168 2169 2170 2171 2172 2173 2174 2175 2176 2177 2178 2179 2180 2181 2182 2183 2184 2185 2186 2187 2188 2189 2190 2191 2192 2193 2194 2195 2196 2197 2198 2199 2200 2201 2202 2203 2204 2205 2206 2207 2208 2209 2210 2211 2212 2213 2214 2215 2216 2217 2218 2219 2220 2221 2222 2223 2224 2225 2226 2227 2228 2229 2230 2231 2232 2233 2234 2235 2236 2237 2238 2239 2240 2241 2242 2243 2244 2245 2246 2247 2248 2249 2250 2251 2252 2253 2254 2255 2256 2257 2258 2259 2260 2261 2262 2263 2264 2265 2266 2267 2268 2269 2270 2271 2272 2273 2274 2275 2276 2277 2278 2279 2280 2281 2282 2283 2284 2285 2286 2287 2288 2289 2290 2291 2292 2293 2294 2295 2296 2297 2298 2299 2300 2301 2302 2303 2304 2305 2306 2307 2308 2309 2310 2311 2312 2313 2314 2315 2316 2317 2318 2319 2320 2321 2322 2323 2324 2325 2326 2327 2328 2329 2330 2331 2332 2333 2334 2335 2336 2337 2338 2339 2340 2341 2342 2343 2344 2345 2346 2347 2348 2349 2350 2351 2352 2353 2354 2355 2356 2357 2358 2359 2360 2361 2362 2363 2364 2365 2366 2367 2368 2369 2370 2371 2372 2373 2374 2375 2376 2377 2378 2379 2380 2381 2382 2383 2384 2385 2386 2387 2388 2389 2390 2391 2392 2393 2394 2395 2396 2397 2398 2399 2400 2401 2402 2403 2404 2405 2406 2407 2408 2409 2410 2411 2412 2413 2414 2415 2416 2417 2418 2419 2420 2421 2422 2423 2424 2425 2426 2427 2428 2429 2430 2431 2432 2433 2434 2435 2436 2437 2438 2439 2440 2441 2442 2443 2444 2445 2446 2447 2448 2449 2450 2451 2452 2453 2454 2455 2456 2457 2458 2459 2460 2461 2462 2463 2464 2465 2466 2467 2468 2469 2470 2471 2472 2473 2474 2475 2476 2477 2478 2479 2480 2481 2482 2483 2484 2485 2486 2487 2488 2489 2490 2491 2492 2493 2494 2495 2496 2497 2498 2499 2500 2501 2502 2503 2504 2505 2506 2507 2508 2509 2510 2511 2512 2513 2514 2515 2516 2517 2518 2519 2520 2521 2522 2523 2524 2525 2526 2527 2528 2529 2530 2531 2532 2533 2534 2535 2536 2537 2538 2539 2540 2541 2542 2543 2544 2545 2546 2547 2548 2549 2550 2551 2552 2553 2554 2555 2556 2557 2558 2559 2560 2561 2562 2563 2564 2565 2566 2567 2568 2569 2570 2571 2572 2573 2574 2575 2576 2577 2578 2579 2580 2581 2582 2583 2584 2585 2586 2587 2588 2589 2590 2591 2592 2593 2594 2595 2596 2597 2598 2599 2600 2601 2602 2603 2604 2605 2606 2607 2608 2609 2610 2611 2612 2613 2614 2615 2616 2617 2618 2619 2620 2621 2622 2623 2624 2625 2626 2627 2628 2629 2630 2631 2632 2633 2634 2635 2636 2637 2638 2639 2640 2641 2642 2643 2644 2645 2646 2647 2648 2649 2650 2651 2652 2653 2654 2655 2656 2657 2658 2659 2660 2661 2662 2663 2664 2665 2666 2667 2668 2669 2670 2671 2672 2673 2674 2675 2676 2677 2678 2679 2680 2681 2682 2683 2684 2685 2686 2687 2688 2689 2690 2691 2692 2693 2694 2695 2696 2697 2698 2699 2700 2701 2702 2703 2704 2705 2706 2707 2708 2709 2710 2711 2712 2713 2714 2715 2716 2717 2718 2719 2720 2721 2722 2723 2724 2725 2726 2727 2728 2729 2730 2731 2732 2733 2734 2735 2736 2737 2738 2739 2740 2741 2742 2743 2744 2745 2746 2747 2748 2749 2750 2751 2752 2753 2754 2755 2756 2757 2758 2759 2760 2761 2762 2763 2764 2765 2766 2767 2768 2769 2770 2771 2772 2773 2774 2775 2776 2777 2778 2779 2780 2781 2782 2783 2784 2785 2786 2787 2788 2789 2790 2791 2792 2793 2794 2795 2796 2797 2798 2799 2800 2801 2802 2803 2804 2805 2806 2807 2808 2809 2810 2811 2812 2813 2814 2815 2816 2817 2818 2819 2820 2821 2822 2823 2824 2825 2826 2827 2828 2829 2830 2831 2832 2833 2834 2835 2836 2837 2838 2839 2840 2841 2842 2843 2844 2845 2846 2847 2848 2849 2850 2851 2852 2853 2854 2855 2856 2857 2858 2859 2860 2861 2862 2863 2864 2865 2866 2867 2868 2869 2870 2871 2872 2873 2874 2875 2876 2877 2878 2879 2880 2881 2882 2883 2884 2885 2886 2887 2888 2889 2890 2891 2892 2893 2894 2895 2896 2897 2898 2899 2900 2901 2902 2903 2904 2905 2906 2907 2908 2909 2910 2911 2912 2913 2914 2915 2916 2917 2918 2919 2920 2921 2922 2923 2924 2925 2926 2927 2928 2929 2930 2931 2932 2933 2934 2935 2936 2937 2938 2939 2940 2941 2942 2943 2944 2945 2946 2947 2948 2949 2950 2951 2952 2953 2954 2955 2956 2957 2958 2959 2960 2961 2962 2963 2964 2965 2966 2967 2968 2969 2970 2971 2972 2973 2974 2975 2976 2977 2978 2979 2980 2981 2982 2983 2984 2985 2986 2987 2988 2989 2990 2991 2992 2993 2994 2995 2996 2997 2998 2999 3000 3001 3002 3003 3004 3005 3006 3007 3008 3009 3010 3011 3012 3013 3014 3015 3016 3017 3018 3019 3020 3021 3022 3023 3024 3025 3026 3027 3028 3029 3030 3031 3032 3033 3034 3035 3036 3037 3038 3039 3040 3041 3042 3043 3044 3045 3046 3047 3048 3049 3050 3051 3052 3053 3054 3055 3056 3057 3058 3059 3060 3061 3062 3063 3064 3065 3066 3067 3068 3069 3070 3071 3072 3073 3074 3075 3076 3077 3078 3079 3080 3081 3082 3083 3084 3085 3086 3087 3088 3089 3090 3091 3092 3093 3094 3095 3096 3097 3098 3099 3100 3101 3102 3103 3104 3105 3106 3107 3108 3109 3110 3111 3112 3113 3114 3115 3116 3117 3118 3119 3120 3121 3122 3123 3124 3125 3126 3127 3128 3129 3130 3131 3132 3133 3134 3135 3136 3137 3138 3139 3140 3141 3142 3143 3144 3145 3146 3147 3148 3149 3150 3151 3152 3153 3154 3155 3156 3157 3158 3159 3160 3161 3162 3163 3164 3165 3166 3167 3168 3169 3170 3171 3172 3173 3174 3175 3176 3177 3178 3179 3180 3181 3182 3183 3184 3185 3186 3187 3188 3189 3190 3191 3192 3193 3194 3195 3196 3197 3198 3199 3200 3201 3202 3203 3204 3205 3206 3207 3208 3209 3210 3211 3212 3213 3214 3215 3216 3217 3218 3219 3220 3221 3222 3223 3224 3225 3226 3227 3228 3229 3230 3231 3232 3233 3234 3235 3236 3237 3238 3239 3240 3241 3242 3243 3244 3245 3246 3247 3248 3249 3250 3251 3252 3253 3254 3255 3256 3257 3258 3259 3260 3261 3262 3263 3264 3265 3266 3267 3268 3269 3270 3271 3272 3273 3274 3275 3276 3277 3278 3279 3280 3281 3282 3283 3284 3285 3286 3287 3288 3289 3290 3291 3292 3293 3294 3295 3296 3297 3298 3299 3300 3301 3302 3303 3304 3305 3306 3307 3308 3309 3310 3311 3312 3313 3314 3315 3316 3317 3318 3319 3320 3321 3322 3323 3324 3325 3326 3327 3328 3329 3330 3331 3332 3333 3334 3335 3336 3337 3338 3339 3340 3341 3342 3343 3344 3345 3346 3347 3348 3349 3350 3351 3352 3353 3354 3355 3356 3357 3358 3359 3360 3361 3362 3363 3364 3365 3366 3367 3368 3369 3370 3371 3372 3373 3374 3375 3376 3377 3378 3379 3380 3381 3382 3383 3384 3385 3386 3387 3388 3389 3390 3391 3392 3393 3394 3395 3396 3397 3398 3399 3400 3401 3402 3403 3404 3405 3406 3407 3408 3409 3410 3411 3412 3413 3414 3415 3416 3417 3418 3419 3420 3421 3422 3423 3424 3425 3426 3427 3428 3429 3430 3431 3432 3433 3434 3435 3436 3437 3438 3439 3440 3441 3442 3443 3444 3445 3446 3447 3448 3449 3450 3451 3452 3453 3454 3455 3456 3457 3458 3459 3460 3461 3462 3463 3464 3465 3466 3467 3468 3469 3470 3471 3472 3473 3474 3475 3476 3477 3478 3479 3480 3481 3482 3483 3484 3485 3486 3487 3488 3489 3490 3491 3492 3493 3494 3495 3496 3497 3498 3499 3500 3501 3502 3503 3504 3505 3506 3507 3508 3509 3510 3511 3512 3513 3514 3515 3516 3517 3518 3519 3520 3521 3522 3523 3524 3525 3526 3527 3528 3529 3530 3531 3532 3533 3534 3535 3536 3537 3538 3539 3540 3541 3542 3543 3544 3545 3546 3547 3548 3549 3550 3551 3552 3553 3554 3555 3556 3557 3558 3559 3560 3561 3562 3563 3564 3565 3566 3567 3568 3569 3570 3571 3572 3573 3574 3575 3576 3577 3578 3579 3580 3581 3582 3583 3584 3585 3586 3587 3588 3589 3590 3591 3592 3593 3594 3595 3596 3597 3598 3599 3600 3601 3602 3603 3604 3605 3606 3607 3608 3609 3610 3611 3612 3613 3614 3615 3616 3617 3618 3619 3620 3621 3622 3623 3624 3625 3626 3627 3628 3629 3630 3631 3632 3633 3634 3635 3636 3637 3638 3639 3640 3641 3642 3643 3644 3645 3646 3647 3648 3649 3650 3651 3652 3653 3654 3655 3656 3657 3658 3659 3660 3661 3662 3663 3664 3665 3666 3667 3668 3669 3670 3671 3672 3673 3674 3675 3676 3677 3678 3679 3680 3681 3682 3683 3684 3685 3686 3687 3688 3689 3690 3691 3692 3693 3694 3695 3696 3697 3698 3699 3700 3701 3702 3703 3704 3705 3706 3707 3708 3709 3710 3711 3712 3713 3714 3715 3716 3717 3718 3719 3720 3721 3722 3723 3724 3725 3726 3727 3728 3729 3730 3731 3732 3733 3734 3735 3736 3737 3738 3739 3740 3741 3742 3743 3744 3745 3746 3747 3748 3749 3750 3751 3752 3753 3754 3755 3756 3757 3758 3759 3760 3761 3762 3763 3764 3765 3766 3767 3768 3769 3770 3771 3772 3773 3774 3775 3776 3777 3778 3779 3780 3781 3782 3783 3784 3785 3786 3787 3788 3789 3790 3791 3792 3793 3794 3795 3796 3797 3798 3799 3800 3801 3802 3803 3804 3805 3806 3807 3808 3809 3810 3811 3812 3813 3814 3815 3816 3817 3818 3819 3820 3821 3822 3823 3824 3825 3826 3827 3828 3829 3830 3831 3832 3833 3834 3835 3836 3837 3838 3839 3840 3841 3842 3843 3844 3845 3846 3847 3848 3849 3850 3851 3852 3853 3854 3855 3856 3857 3858 3859 3860 3861 3862 3863 3864 3865 3866 3867 3868 3869 3870 3871 3872 3873 3874 3875 3876 3877 3878 3879 3880 3881 3882 3883 3884 3885 3886 3887 3888 3889 3890 3891 3892 3893 3894 3895 3896 3897 3898 3899 3900 3901 3902 3903 3904 3905 3906 3907 3908 3909 3910 3911 3912 3913
|
"""Generate self-contained HTML reports from MNE objects."""
# Authors: Alex Gramfort <alexandre.gramfort@inria.fr>
# Mainak Jas <mainak@neuro.hut.fi>
# Teon Brooks <teon.brooks@gmail.com>
#
# License: BSD-3-Clause
import io
import dataclasses
from dataclasses import dataclass
from typing import Tuple, Optional
from collections.abc import Sequence
import base64
from io import BytesIO, StringIO
import os
import os.path as op
from pathlib import Path
import fnmatch
import re
from shutil import copyfile
import time
import warnings
import webbrowser
import numpy as np
from .. import __version__ as MNE_VERSION
from .. import (read_evokeds, read_events, read_cov,
read_source_estimate, read_trans, sys_info,
Evoked, SourceEstimate, Covariance, Info, Transform)
from ..channels import _get_ch_type
from ..defaults import _handle_default
from ..io import read_raw, read_info, BaseRaw
from ..io._read_raw import supported as extension_reader_map
from ..io.pick import _DATA_CH_TYPES_SPLIT
from ..proj import read_proj
from .._freesurfer import _reorient_image, _mri_orientation
from ..utils import (logger, verbose, get_subjects_dir, warn, _ensure_int,
fill_doc, _check_option, _validate_type, _safe_input,
_path_like, use_log_level, _check_fname, _pl,
_check_ch_locs, _import_h5io_funcs, _verbose_safe_false,
check_version)
from ..viz import (plot_events, plot_alignment, plot_cov, plot_projs_topomap,
plot_compare_evokeds, set_3d_view, get_3d_backend,
Figure3D, use_browser_backend)
from ..viz.misc import _plot_mri_contours, _get_bem_plotting_surfaces
from ..viz.utils import _ndarray_to_fig, tight_layout
from ..viz._scraper import _mne_qt_browser_screenshot
from ..forward import read_forward_solution, Forward
from ..epochs import read_epochs, BaseEpochs
from ..preprocessing.ica import read_ica
from .. import dig_mri_distances
from ..minimum_norm import read_inverse_operator, InverseOperator
from ..parallel import parallel_func
_BEM_VIEWS = ('axial', 'sagittal', 'coronal')
# For raw files, we want to support different suffixes + extensions for all
# supported file formats
SUPPORTED_READ_RAW_EXTENSIONS = tuple(extension_reader_map.keys())
RAW_EXTENSIONS = []
for ext in SUPPORTED_READ_RAW_EXTENSIONS:
RAW_EXTENSIONS.append(f'raw{ext}')
if ext not in ('.bdf', '.edf', '.set', '.vhdr'): # EEG-only formats
RAW_EXTENSIONS.append(f'meg{ext}')
RAW_EXTENSIONS.append(f'eeg{ext}')
RAW_EXTENSIONS.append(f'ieeg{ext}')
RAW_EXTENSIONS.append(f'nirs{ext}')
# Processed data will always be in (gzipped) FIFF format
VALID_EXTENSIONS = ('sss.fif', 'sss.fif.gz',
'eve.fif', 'eve.fif.gz',
'cov.fif', 'cov.fif.gz',
'proj.fif', 'prof.fif.gz',
'trans.fif', 'trans.fif.gz',
'fwd.fif', 'fwd.fif.gz',
'epo.fif', 'epo.fif.gz',
'inv.fif', 'inv.fif.gz',
'ave.fif', 'ave.fif.gz',
'T1.mgz') + tuple(RAW_EXTENSIONS)
del RAW_EXTENSIONS
CONTENT_ORDER = (
'raw',
'events',
'epochs',
'ssp-projectors',
'evoked',
'covariance',
'coregistration',
'bem',
'forward-solution',
'inverse-operator',
'source-estimate'
)
html_include_dir = Path(__file__).parent / 'js_and_css'
template_dir = Path(__file__).parent / 'templates'
JAVASCRIPT = (html_include_dir / 'report.js').read_text(encoding='utf-8')
CSS = (html_include_dir / 'report.sass').read_text(encoding='utf-8')
MAX_IMG_RES = 100 # in dots per inch
MAX_IMG_WIDTH = 850 # in pixels
def _get_ch_types(inst):
return [ch_type for ch_type in _DATA_CH_TYPES_SPLIT if ch_type in inst]
###############################################################################
# HTML generation
def _html_header_element(*, lang, include, js, css, title, tags, mne_logo_img):
from ..html_templates import report_templates_env
t = report_templates_env.get_template('header.html.jinja')
t_rendered = t.render(
lang=lang, include=include, js=js, css=css, title=title, tags=tags,
mne_logo_img=mne_logo_img
)
return t_rendered
def _html_footer_element(*, mne_version, date):
from ..html_templates import report_templates_env
t = report_templates_env.get_template('footer.html.jinja')
t_rendered = t.render(mne_version=mne_version, date=date)
return t_rendered
def _html_toc_element(*, titles, dom_ids, tags):
from ..html_templates import report_templates_env
t = report_templates_env.get_template('toc.html.jinja')
t_rendered = t.render(titles=titles, dom_ids=dom_ids, tags=tags)
return t_rendered
def _html_forward_sol_element(*, id, repr, sensitivity_maps, title, tags):
from ..html_templates import report_templates_env
t = report_templates_env.get_template('forward.html.jinja')
t_rendered = t.render(
id=id, repr=repr, sensitivity_maps=sensitivity_maps, tags=tags,
title=title
)
return t_rendered
def _html_inverse_operator_element(*, id, repr, source_space, title, tags):
from ..html_templates import report_templates_env
t = report_templates_env.get_template('inverse.html.jinja')
t_rendered = t.render(
id=id, repr=repr, source_space=source_space, tags=tags, title=title
)
return t_rendered
def _html_slider_element(*, id, images, captions, start_idx, image_format,
title, tags, klass=''):
from ..html_templates import report_templates_env
captions_ = []
for caption in captions:
if caption is None:
caption = ''
captions_.append(caption)
del captions
t = report_templates_env.get_template('slider.html.jinja')
t_rendered = t.render(
id=id, images=images, captions=captions_, tags=tags, title=title,
start_idx=start_idx, image_format=image_format, klass=klass
)
return t_rendered
def _html_image_element(*, id, img, image_format, caption, show, div_klass,
img_klass, title, tags):
from ..html_templates import report_templates_env
t = report_templates_env.get_template('image.html.jinja')
t_rendered = t.render(
id=id, img=img, caption=caption, tags=tags, title=title,
image_format=image_format, div_klass=div_klass, img_klass=img_klass,
show=show
)
return t_rendered
def _html_code_element(*, id, code, language, title, tags):
from ..html_templates import report_templates_env
t = report_templates_env.get_template('code.html.jinja')
t_rendered = t.render(
id=id, code=code, language=language, title=title, tags=tags
)
return t_rendered
def _html_section_element(*, id, div_klass, htmls, title, tags):
from ..html_templates import report_templates_env
t = report_templates_env.get_template('section.html.jinja')
t_rendered = t.render(
id=id, div_klass=div_klass, htmls=htmls, title=title, tags=tags
)
return t_rendered
def _html_bem_element(
*, id, div_klass, html_slider_axial, html_slider_sagittal,
html_slider_coronal, title, tags
):
from ..html_templates import report_templates_env
t = report_templates_env.get_template('bem.html.jinja')
t_rendered = t.render(
id=id, div_klass=div_klass, html_slider_axial=html_slider_axial,
html_slider_sagittal=html_slider_sagittal,
html_slider_coronal=html_slider_coronal, title=title,
tags=tags
)
return t_rendered
def _html_element(*, id, div_klass, html, title, tags):
from ..html_templates import report_templates_env
t = report_templates_env.get_template('html.html.jinja')
t_rendered = t.render(
id=id, div_klass=div_klass, html=html, title=title, tags=tags
)
return t_rendered
@dataclass
class _ContentElement:
name: str
section: Optional[str]
dom_id: str
tags: Tuple[str]
html: str
def _check_tags(tags) -> Tuple[str]:
# Must be iterable, but not a string
if isinstance(tags, str):
tags = (tags,)
elif isinstance(tags, (Sequence, np.ndarray)):
tags = tuple(tags)
else:
raise TypeError(
f'tags must be a string (without spaces or special characters) or '
f'an array-like object of such strings, but got {type(tags)} '
f'instead: {tags}'
)
# Check for invalid dtypes
bad_tags = [tag for tag in tags
if not isinstance(tag, str)]
if bad_tags:
raise TypeError(
f'All tags must be strings without spaces or special characters, '
f'but got the following instead: '
f'{", ".join([str(tag) for tag in bad_tags])}'
)
# Check for invalid characters
invalid_chars = (' ', '"', '\n') # we'll probably find more :-)
bad_tags = []
for tag in tags:
for invalid_char in invalid_chars:
if invalid_char in tag:
bad_tags.append(tag)
break
if bad_tags:
raise ValueError(
f'The following tags contained invalid characters: '
f'{", ".join(repr(tag) for tag in bad_tags)}'
)
return tags
###############################################################################
# PLOTTING FUNCTIONS
def _constrain_fig_resolution(fig, *, max_width, max_res):
"""Limit the resolution (DPI) of a figure.
Parameters
----------
fig : matplotlib.figure.Figure
The figure whose DPI to adjust.
max_width : int
The max. allowed width, in pixels.
max_res : int
The max. allowed resolution, in DPI.
Returns
-------
Nothing, alters the figure's properties in-place.
"""
dpi = min(max_res, max_width / fig.get_size_inches()[0])
fig.set_dpi(dpi)
def _fig_to_img(fig, *, image_format='png', own_figure=True):
"""Plot figure and create a binary image."""
# fig can be ndarray, mpl Figure, PyVista Figure
import matplotlib.pyplot as plt
from matplotlib.figure import Figure
if isinstance(fig, np.ndarray):
# In this case, we are creating the fig, so we might as well
# auto-close in all cases
fig = _ndarray_to_fig(fig)
if own_figure:
_constrain_fig_resolution(
fig, max_width=MAX_IMG_WIDTH, max_res=MAX_IMG_RES
)
own_figure = True # close the figure we just created
elif isinstance(fig, Figure):
pass # nothing to do
else:
# Don't attempt a mne_qt_browser import here (it might pull in Qt
# libraries we don't want), so use a probably good enough class name
# check instead
if fig.__class__.__name__ in ('MNEQtBrowser', 'PyQtGraphBrowser'):
img = _mne_qt_browser_screenshot(fig, return_type='ndarray')
print(img.shape, img.max(), img.min(), img.mean())
elif isinstance(fig, Figure3D):
from ..viz.backends.renderer import backend, MNE_3D_BACKEND_TESTING
backend._check_3d_figure(figure=fig)
if not MNE_3D_BACKEND_TESTING:
img = backend._take_3d_screenshot(figure=fig)
else: # Testing mode
img = np.zeros((2, 2, 3))
if own_figure:
backend._close_3d_figure(figure=fig)
else:
raise TypeError(
'figure must be an instance of np.ndarray, matplotlib Figure, '
'mne_qt_browser.figure.MNEQtBrowser, or mne.viz.Figure3D, got '
f'{type(fig)}')
fig = _ndarray_to_fig(img)
if own_figure:
_constrain_fig_resolution(
fig, max_width=MAX_IMG_WIDTH, max_res=MAX_IMG_RES
)
own_figure = True # close the fig we just created
output = BytesIO()
dpi = fig.get_dpi()
logger.debug(
f'Saving figure with dimension {fig.get_size_inches()} inches with '
f'{{dpi}} dpi'
)
# https://pillow.readthedocs.io/en/stable/handbook/image-file-formats.html
mpl_kwargs = dict()
pil_kwargs = dict()
has_pillow = check_version('PIL')
if has_pillow:
if image_format == 'webp':
pil_kwargs.update(lossless=True, method=6)
elif image_format == 'png':
pil_kwargs.update(optimize=True, compress_level=9)
if pil_kwargs:
# matplotlib modifies the passed dict, which is a bug
mpl_kwargs['pil_kwargs'] = pil_kwargs.copy()
with warnings.catch_warnings():
warnings.filterwarnings(
action='ignore',
message='.*Axes that are not compatible with tight_layout.*',
category=UserWarning
)
fig.savefig(output, format=image_format, dpi=dpi, **mpl_kwargs)
if own_figure:
plt.close(fig)
# Remove alpha
if image_format != 'svg' and has_pillow:
from PIL import Image
output.seek(0)
orig = Image.open(output)
if orig.mode == 'RGBA':
background = Image.new('RGBA', orig.size, (255, 255, 255))
new = Image.alpha_composite(background, orig).convert('RGB')
output = BytesIO()
new.save(output, format=image_format, dpi=(dpi, dpi), **pil_kwargs)
output = output.getvalue()
return (output.decode('utf-8') if image_format == 'svg' else
base64.b64encode(output).decode('ascii'))
def _scale_mpl_figure(fig, scale):
"""Magic scaling helper.
Keeps font size and artist sizes constant
0.5 : current font - 4pt
2.0 : current font + 4pt
This is a heuristic but it seems to work for most cases.
"""
scale = float(scale)
fig.set_size_inches(fig.get_size_inches() * scale)
fig.set_dpi(fig.get_dpi() * scale)
import matplotlib as mpl
if scale >= 1:
sfactor = scale ** 2
else:
sfactor = -((1. / scale) ** 2)
for text in fig.findobj(mpl.text.Text):
fs = text.get_fontsize()
new_size = fs + sfactor
if new_size <= 0:
raise ValueError('could not rescale matplotlib fonts, consider '
'increasing "scale"')
text.set_fontsize(new_size)
fig.canvas.draw()
def _get_bem_contour_figs_as_arrays(
*, sl, n_jobs, mri_fname, surfaces, orientation, src, show,
show_orientation, width
):
"""Render BEM surface contours on MRI slices.
Returns
-------
list of array
A list of NumPy arrays that represent the generated Matplotlib figures.
"""
# Matplotlib <3.2 doesn't work nicely with process-based parallelization
kwargs = dict()
if not check_version('matplotilb', '3.2'):
kwargs['prefer'] = 'threads'
parallel, p_fun, n_jobs = parallel_func(
_plot_mri_contours, n_jobs, max_jobs=len(sl), **kwargs)
outs = parallel(
p_fun(
slices=s, mri_fname=mri_fname, surfaces=surfaces,
orientation=orientation, src=src, show=show,
show_orientation=show_orientation, width=width,
slices_as_subplots=False
)
for s in np.array_split(sl, n_jobs)
)
out = list()
for o in outs:
out.extend(o)
return out
def _iterate_trans_views(function, alpha, **kwargs):
"""Auxiliary function to iterate over views in trans fig."""
from ..viz import create_3d_figure
from ..viz.backends.renderer import MNE_3D_BACKEND_TESTING
# TODO: Eventually maybe we should expose the size option?
size = (80, 80) if MNE_3D_BACKEND_TESTING else (800, 800)
fig = create_3d_figure(size, bgcolor=(0.5, 0.5, 0.5))
from ..viz.backends.renderer import backend
try:
try:
return _itv(
function, fig, surfaces={'head-dense': alpha}, **kwargs
)
except IOError:
return _itv(function, fig, surfaces={'head': alpha}, **kwargs)
finally:
backend._close_3d_figure(fig)
def _itv(function, fig, **kwargs):
from ..viz.backends.renderer import MNE_3D_BACKEND_TESTING, backend
from ..viz._brain.view import views_dicts
function(fig=fig, **kwargs)
views = (
'frontal', 'lateral', 'medial',
'axial', 'rostral', 'coronal'
)
images = []
for view in views:
if not MNE_3D_BACKEND_TESTING:
set_3d_view(fig, **views_dicts['both'][view])
backend._check_3d_figure(fig)
im = backend._take_3d_screenshot(figure=fig)
else: # Testing mode
im = np.zeros((2, 2, 3))
images.append(im)
images = np.concatenate(
[np.concatenate(images[:3], axis=1),
np.concatenate(images[3:], axis=1)],
axis=0)
try:
dists = dig_mri_distances(info=kwargs['info'],
trans=kwargs['trans'],
subject=kwargs['subject'],
subjects_dir=kwargs['subjects_dir'],
on_defects='ignore')
caption = (f'Average distance from {len(dists)} digitized points to '
f'head: {1e3 * np.mean(dists):.2f} mm')
except BaseException as e:
caption = 'Distances could not be calculated from digitized points'
warn(f'{caption}: {e}')
img = _fig_to_img(images, image_format='png')
return img, caption
def _plot_ica_properties_as_arrays(*, ica, inst, picks, n_jobs):
"""Parallelize ICA component properties plotting, and return arrays.
Returns
-------
outs : list of array
The properties plots as NumPy arrays.
"""
import matplotlib.pyplot as plt
if picks is None:
picks = list(range(ica.n_components_))
def _plot_one_ica_property(*, ica, inst, pick):
figs = ica.plot_properties(inst=inst, picks=pick, show=False)
assert len(figs) == 1
fig = figs[0]
_constrain_fig_resolution(
fig, max_width=MAX_IMG_WIDTH, max_res=MAX_IMG_RES
)
with io.BytesIO() as buff:
fig.savefig(
buff,
format='png',
pad_inches=0,
)
buff.seek(0)
fig_array = plt.imread(buff, format='png')
plt.close(fig)
return fig_array
parallel, p_fun, n_jobs = parallel_func(
func=_plot_one_ica_property,
n_jobs=n_jobs,
max_jobs=len(picks)
)
outs = parallel(
p_fun(
ica=ica, inst=inst, pick=pick
) for pick in picks
)
return outs
###############################################################################
# TOC FUNCTIONS
def _endswith(fname, suffixes):
"""Aux function to test if file name includes the specified suffixes."""
if isinstance(suffixes, str):
suffixes = [suffixes]
for suffix in suffixes:
for ext in SUPPORTED_READ_RAW_EXTENSIONS:
if fname.endswith((f'-{suffix}{ext}', f'-{suffix}{ext}',
f'_{suffix}{ext}', f'_{suffix}{ext}')):
return True
return False
def open_report(fname, **params):
"""Read a saved report or, if it doesn't exist yet, create a new one.
The returned report can be used as a context manager, in which case any
changes to the report are saved when exiting the context block.
Parameters
----------
fname : str
The file containing the report, stored in the HDF5 format. If the file
does not exist yet, a new report is created that will be saved to the
specified file.
**params : kwargs
When creating a new report, any named parameters other than ``fname``
are passed to the ``__init__`` function of the `Report` object. When
reading an existing report, the parameters are checked with the
loaded report and an exception is raised when they don't match.
Returns
-------
report : instance of Report
The report.
"""
fname = _check_fname(fname=fname, overwrite='read', must_exist=False)
if op.exists(fname):
# Check **params with the loaded report
read_hdf5, _ = _import_h5io_funcs()
state = read_hdf5(fname, title='mnepython')
for param in params.keys():
if param not in state:
raise ValueError('The loaded report has no attribute %s' %
param)
if params[param] != state[param]:
raise ValueError("Attribute '%s' of loaded report does not "
"match the given parameter." % param)
report = Report()
report.__setstate__(state)
else:
report = Report(**params)
# Keep track of the filename in case the Report object is used as a context
# manager.
report.fname = fname
return report
###############################################################################
# HTML scan renderer
mne_logo_path = Path(__file__).parents[1] / 'icons' / 'mne_icon-cropped.png'
mne_logo = base64.b64encode(mne_logo_path.read_bytes()).decode('ascii')
_ALLOWED_IMAGE_FORMATS = ('png', 'svg', 'webp')
def _webp_supported():
good = check_version('matplotlib', '3.6') and check_version('PIL')
if good:
from PIL import features
good = features.check('webp')
return good
def _check_scale(scale):
"""Ensure valid scale value is passed."""
if np.isscalar(scale) and scale <= 0:
raise ValueError('scale must be positive, not %s' % scale)
def _check_image_format(rep, image_format):
"""Ensure fmt is valid."""
if rep is None or image_format is not None:
allowed = list(_ALLOWED_IMAGE_FORMATS) + ['auto']
extra = ''
if not _webp_supported():
allowed.pop(allowed.index('webp'))
extra = '("webp" supported on matplotlib 3.6+ with PIL installed)'
_check_option(
'image_format', image_format, allowed_values=allowed, extra=extra)
else:
image_format = rep.image_format
if image_format == 'auto':
image_format = 'webp' if _webp_supported() else 'png'
return image_format
@fill_doc
class Report:
r"""Object for rendering HTML.
Parameters
----------
info_fname : None | str
Name of the file containing the info dictionary.
%(subjects_dir)s
subject : str | None
Subject name.
title : str
Title of the report.
cov_fname : None | str
Name of the file containing the noise covariance.
%(baseline_report)s
Defaults to ``None``, i.e. no baseline correction.
image_format : 'png' | 'svg' | 'webp' | 'auto'
Default image format to use (default is ``'auto'``, which will use
``'webp'`` if available and ``'png'`` otherwise).
``'svg'`` uses vector graphics, so fidelity is higher but can increase
file size and browser image rendering time as well.
``'webp'`` format requires matplotlib >= 3.6.
.. versionadded:: 0.15
.. versionchanged:: 1.3
Added support for ``'webp'`` format, removed support for GIF, and
set the default to ``'auto'``.
raw_psd : bool | dict
If True, include PSD plots for raw files. Can be False (default) to
omit, True to plot, or a dict to pass as ``kwargs`` to
:meth:`mne.io.Raw.plot_psd`.
.. versionadded:: 0.17
projs : bool
Whether to include topographic plots of SSP projectors, if present in
the data. Defaults to ``False``.
.. versionadded:: 0.21
%(verbose)s
Attributes
----------
info_fname : None | str
Name of the file containing the info dictionary.
%(subjects_dir)s
subject : str | None
Subject name.
title : str
Title of the report.
cov_fname : None | str
Name of the file containing the noise covariance.
%(baseline_report)s
Defaults to ``None``, i.e. no baseline correction.
image_format : str
Default image format to use.
.. versionadded:: 0.15
raw_psd : bool | dict
If True, include PSD plots for raw files. Can be False (default) to
omit, True to plot, or a dict to pass as ``kwargs`` to
:meth:`mne.io.Raw.plot_psd`.
.. versionadded:: 0.17
projs : bool
Whether to include topographic plots of SSP projectors, if present in
the data. Defaults to ``False``.
.. versionadded:: 0.21
%(verbose)s
html : list of str
Contains items of html-page.
include : list of str
Dictionary containing elements included in head.
fnames : list of str
List of file names rendered.
sections : list of str
List of sections.
lang : str
language setting for the HTML file.
Notes
-----
See :ref:`tut-report` for an introduction to using ``mne.Report``.
.. versionadded:: 0.8.0
"""
@verbose
def __init__(self, info_fname=None, subjects_dir=None,
subject=None, title=None, cov_fname=None, baseline=None,
image_format='auto', raw_psd=False, projs=False, *,
verbose=None):
self.info_fname = str(info_fname) if info_fname is not None else None
self.cov_fname = str(cov_fname) if cov_fname is not None else None
self.baseline = baseline
if subjects_dir is not None:
subjects_dir = get_subjects_dir(subjects_dir)
self.subjects_dir = subjects_dir
self.subject = subject
self.title = title
self.image_format = _check_image_format(None, image_format)
self.projs = projs
self._dom_id = 0
self._content = []
self.include = []
self.lang = 'en-us' # language setting for the HTML file
if not isinstance(raw_psd, bool) and not isinstance(raw_psd, dict):
raise TypeError('raw_psd must be bool or dict, got %s'
% (type(raw_psd),))
self.raw_psd = raw_psd
self._init_render() # Initialize the renderer
self.fname = None # The name of the saved report
self.data_path = None
def __repr__(self):
"""Print useful info about report."""
htmls, _, titles, _ = self._content_as_html()
items = self._content
s = '<Report'
s += f' | {len(titles)} title{_pl(titles)}'
s += f' | {len(items)} item{_pl(items)}'
if self.title is not None:
s += f' | {self.title}'
if len(titles) > 0:
titles = [f' {t}' for t in titles] # indent
tr = max(len(s), 50) # trim to larger of opening str and 50
titles = [f'{t[:tr - 2]} …' if len(t) > tr else t for t in titles]
# then trim to the max length of all of these
tr = max(len(title) for title in titles)
tr = max(tr, len(s))
b_to_mb = 1. / (1024. ** 2)
content_element_mb = [len(html) * b_to_mb for html in htmls]
total_mb = f'{sum(content_element_mb):0.1f}'
content_element_mb = [
f'{sz:0.1f}'.rjust(len(total_mb))
for sz in content_element_mb
]
s = f'{s.ljust(tr + 1)} | {total_mb} MB'
s += '\n' + '\n'.join(
f'{title[:tr].ljust(tr + 1)} | {sz} MB'
for title, sz in zip(titles, content_element_mb))
s += '\n'
s += '>'
return s
def __len__(self):
"""Return the number of files processed by the report.
Returns
-------
n_files : int
The number of files processed.
"""
return len(self._content)
@staticmethod
def _get_state_params():
# Which attributes to store in and read from HDF5 files
return (
'baseline', 'cov_fname', 'include', '_content', 'image_format',
'info_fname', '_dom_id', 'raw_psd', 'projs',
'subjects_dir', 'subject', 'title', 'data_path', 'lang',
'fname',
)
def _get_dom_id(self, increment=True):
"""Get unique ID for content to append to the DOM.
This method is just a counter.
increment : bool
Whether to increment the counter. If ``False``, simply returns the
latest DOM ID used.
"""
if increment:
self._dom_id += 1
return f'global-{self._dom_id}'
def _validate_topomap_kwargs(self, topomap_kwargs):
_validate_type(topomap_kwargs, (dict, None), 'topomap_kwargs')
topomap_kwargs = dict() if topomap_kwargs is None else topomap_kwargs
return topomap_kwargs
def _validate_input(self, items, captions, tag, comments=None):
"""Validate input."""
if not isinstance(items, (list, tuple)):
items = [items]
if not isinstance(captions, (list, tuple)):
captions = [captions]
if not isinstance(comments, (list, tuple)) and comments is not None:
comments = [comments]
if comments is not None and len(comments) != len(items):
raise ValueError(
f'Number of "comments" and report items must be equal, '
f'or comments should be None; got '
f'{len(comments)} and {len(items)}'
)
elif captions is not None and len(captions) != len(items):
raise ValueError(
f'Number of "captions" and report items must be equal; '
f'got {len(captions)} and {len(items)}'
)
return items, captions, comments
def _content_as_html(self):
"""Generate HTML representations based on the added content & sections.
Returns
-------
htmls : list of str
The HTML representations of the content.
dom_ids : list of str
The DOM IDs corresponding to the HTML representations.
titles : list of str
The titles corresponding to the HTML representations.
tags : list of tuple of str
The tags corresponding to the HTML representations.
"""
section_dom_ids = []
htmls = []
dom_ids = []
titles = []
tags = []
content_elements = self._content.copy() # shallow copy
# We loop over all content elements and implement special treatment
# for those that are part of a section: Those sections don't actually
# exist in `self._content` – we're creating them on-the-fly here!
for idx, content_element in enumerate(content_elements):
if content_element.section:
if content_element.section in titles:
# The section and all its child elements have already been
# added
continue
# Add all elements belonging to the current section
section_elements = [
el for el in content_elements[idx:]
if el.section == content_element.section
]
section_htmls = [el.html for el in section_elements]
section_tags = tuple(
sorted((set([t
for el in section_elements
for t in el.tags])))
)
# Generate a unique DOM ID, but don't alter the global counter
if section_dom_ids:
section_dom_id = section_dom_ids[-1]
else:
section_dom_id = self._get_dom_id(increment=False)
label, counter = section_dom_id.split('-')
section_dom_id = f'{label}-{int(counter) + 1}'
section_dom_ids.append(section_dom_id)
# Finally, create the section HTML element.
section_html = _html_section_element(
id=section_dom_id,
htmls=section_htmls,
tags=section_tags,
title=content_element.section,
div_klass='section',
)
htmls.append(section_html)
dom_ids.append(section_dom_id)
titles.append(content_element.section)
tags.append(section_tags)
else:
# The element is not part of a section, so we can simply
# append it as-is.
htmls.append(content_element.html)
dom_ids.append(content_element.dom_id)
titles.append(content_element.name)
tags.append(content_element.tags)
return htmls, dom_ids, titles, tags
@property
def html(self):
"""A list of HTML representations for all content elements."""
htmls, _, _, _ = self._content_as_html()
return htmls
@property
def tags(self):
"""All tags currently used in the report."""
tags = []
for c in self._content:
tags.extend(c.tags)
tags = tuple(sorted(set(tags)))
return tags
def add_custom_css(self, css):
"""Add custom CSS to the report.
Parameters
----------
css : str
Style definitions to add to the report. The content of this string
will be embedded between HTML ``<style>`` and ``</style>`` tags.
Notes
-----
.. versionadded:: 0.23
"""
style = f'\n<style type="text/css">\n{css}\n</style>'
self.include += style
def add_custom_js(self, js):
"""Add custom JavaScript to the report.
Parameters
----------
js : str
JavaScript code to add to the report. The content of this string
will be embedded between HTML ``<script>`` and ``</script>`` tags.
Notes
-----
.. versionadded:: 0.23
"""
script = f'\n<script type="text/javascript">\n{js}\n</script>'
self.include += script
@fill_doc
def add_epochs(
self, epochs, title, *, psd=True, projs=None, topomap_kwargs=None,
drop_log_ignore=('IGNORED',), tags=('epochs',), replace=False
):
"""Add `~mne.Epochs` to the report.
Parameters
----------
epochs : path-like | instance of Epochs
The epochs to add to the report.
title : str
The title to add.
psd : bool | float
If a float, the duration of data to use for creation of PSD plots,
in seconds. PSD will be calculated on as many epochs as required to
cover at least this duration. Epochs will be picked across the
entire time range in equally-spaced distance.
.. note::
In rare edge cases, we may not be able to create a grid of
equally-spaced epochs that cover the entire requested time range.
In these situations, a warning will be emitted, informing you
about the duration that's actually being used.
If ``True``, add PSD plots based on all ``epochs``. If ``False``,
do not add PSD plots.
%(projs_report)s
%(topomap_kwargs)s
drop_log_ignore : array-like of str
The drop reasons to ignore when creating the drop log bar plot.
All epochs for which a drop reason listed here appears in
``epochs.drop_log`` will be excluded from the drop log plot.
%(tags_report)s
%(replace_report)s
Notes
-----
.. versionadded:: 0.24.0
"""
tags = _check_tags(tags)
add_projs = self.projs if projs is None else projs
self._add_epochs(
epochs=epochs,
psd=psd,
add_projs=add_projs,
topomap_kwargs=topomap_kwargs,
drop_log_ignore=drop_log_ignore,
section=title,
tags=tags,
image_format=self.image_format,
replace=replace,
)
@fill_doc
def add_evokeds(
self, evokeds, *, titles=None, noise_cov=None, projs=None,
n_time_points=None, tags=('evoked',), replace=False,
topomap_kwargs=None, n_jobs=None
):
"""Add `~mne.Evoked` objects to the report.
Parameters
----------
evokeds : path-like | instance of Evoked | list of Evoked
The evoked data to add to the report. Multiple `~mne.Evoked`
objects – as returned from `mne.read_evokeds` – can be passed as
a list.
titles : str | list of str | None
The titles corresponding to the evoked data. If ``None``, the
content of ``evoked.comment`` from each evoked will be used as
title.
noise_cov : path-like | instance of Covariance | None
A noise covariance matrix. If provided, will be used to whiten
the ``evokeds``. If ``None``, will fall back to the ``cov_fname``
provided upon report creation.
%(projs_report)s
n_time_points : int | None
The number of equidistant time points to render. If ``None``,
will render each `~mne.Evoked` at 21 time points, unless the data
contains fewer time points, in which case all will be rendered.
%(tags_report)s
%(replace_report)s
%(topomap_kwargs)s
%(n_jobs)s
Notes
-----
.. versionadded:: 0.24.0
"""
if isinstance(evokeds, Evoked):
evokeds = [evokeds]
elif isinstance(evokeds, list):
pass
else:
evoked_fname = evokeds
logger.debug(f'Evoked: Reading {evoked_fname}')
evokeds = read_evokeds(evoked_fname, verbose=False)
if self.baseline is not None:
evokeds = [e.copy().apply_baseline(self.baseline)
for e in evokeds]
if titles is None:
titles = [e.comment for e in evokeds]
elif isinstance(titles, str):
titles = [titles]
if len(evokeds) != len(titles):
raise ValueError(
f'Number of evoked objects ({len(evokeds)}) must '
f'match number of captions ({len(titles)})'
)
if noise_cov is None:
noise_cov = self.cov_fname
if noise_cov is not None and not isinstance(noise_cov, Covariance):
noise_cov = read_cov(fname=noise_cov)
tags = _check_tags(tags)
add_projs = self.projs if projs is None else projs
for evoked, title in zip(evokeds, titles):
self._add_evoked(
evoked=evoked,
noise_cov=noise_cov,
image_format=self.image_format,
add_projs=add_projs,
n_time_points=n_time_points,
tags=tags,
section=title,
topomap_kwargs=topomap_kwargs,
n_jobs=n_jobs,
replace=replace,
)
@fill_doc
def add_raw(
self, raw, title, *, psd=None, projs=None, butterfly=True,
scalings=None, tags=('raw',), replace=False, topomap_kwargs=None
):
"""Add `~mne.io.Raw` objects to the report.
Parameters
----------
raw : path-like | instance of Raw
The data to add to the report.
title : str
The title corresponding to the ``raw`` object.
psd : bool | None
Whether to add PSD plots. Overrides the ``raw_psd`` parameter
passed when initializing the `~mne.Report`. If ``None``, use
``raw_psd`` from `~mne.Report` creation.
%(projs_report)s
butterfly : bool | int
Whether to add butterfly plots of the data. Can be useful to
spot problematic channels. If ``True``, 10 equally-spaced 1-second
segments will be plotted. If an integer, specifies the number of
1-second segments to plot. Larger numbers may take a considerable
amount of time if the data contains many sensors. You can disable
butterfly plots altogether by passing ``False``.
%(scalings)s
%(tags_report)s
%(replace_report)s
%(topomap_kwargs)s
Notes
-----
.. versionadded:: 0.24.0
"""
tags = _check_tags(tags)
if psd is None:
add_psd = dict() if self.raw_psd is True else self.raw_psd
elif psd is True:
add_psd = dict()
else:
add_psd = False
add_projs = self.projs if projs is None else projs
self._add_raw(
raw=raw,
add_psd=add_psd,
add_projs=add_projs,
butterfly=butterfly,
butterfly_scalings=scalings,
image_format=self.image_format,
tags=tags,
topomap_kwargs=topomap_kwargs,
section=title,
replace=replace,
)
@fill_doc
def add_stc(
self, stc, title, *, subject=None, subjects_dir=None,
n_time_points=None, tags=('source-estimate',),
replace=False, stc_plot_kwargs=None
):
"""Add a `~mne.SourceEstimate` (STC) to the report.
Parameters
----------
stc : path-like | instance of SourceEstimate
The `~mne.SourceEstimate` to add to the report.
title : str
The title to add.
subject : str | None
The name of the FreeSurfer subject the STC belongs to. The name is
not stored with the STC data and therefore needs to be specified.
If ``None``, will use the value of ``subject`` passed on report
creation.
subjects_dir : path-like | None
The FreeSurfer ``SUBJECTS_DIR``.
n_time_points : int | None
The number of equidistant time points to render. If ``None``,
will render ``stc`` at 51 time points, unless the data
contains fewer time points, in which case all will be rendered.
%(tags_report)s
%(replace_report)s
%(stc_plot_kwargs_report)s
Notes
-----
.. versionadded:: 0.24.0
"""
tags = _check_tags(tags)
self._add_stc(
stc=stc,
title=title,
tags=tags,
image_format=self.image_format,
subject=subject,
subjects_dir=subjects_dir,
n_time_points=n_time_points,
stc_plot_kwargs=stc_plot_kwargs,
section=None,
replace=replace,
)
@fill_doc
def add_forward(
self, forward, title, *, subject=None, subjects_dir=None,
tags=('forward-solution',), replace=False
):
"""Add a forward solution.
Parameters
----------
forward : instance of Forward | path-like
The forward solution to add to the report.
title : str
The title corresponding to forward solution.
subject : str | None
The name of the FreeSurfer subject ``forward`` belongs to. If
provided, the sensitivity maps of the forward solution will
be visualized. If ``None``, will use the value of ``subject``
passed on report creation. If supplied, also pass ``subjects_dir``.
subjects_dir : path-like | None
The FreeSurfer ``SUBJECTS_DIR``.
%(tags_report)s
%(replace_report)s
Notes
-----
.. versionadded:: 0.24.0
"""
tags = _check_tags(tags)
self._add_forward(
forward=forward, subject=subject, subjects_dir=subjects_dir,
title=title, image_format=self.image_format, section=None,
tags=tags, replace=replace,
)
@fill_doc
def add_inverse_operator(
self, inverse_operator, title, *, subject=None,
subjects_dir=None, trans=None, tags=('inverse-operator',),
replace=False
):
"""Add an inverse operator.
Parameters
----------
inverse_operator : instance of InverseOperator | path-like
The inverse operator to add to the report.
title : str
The title corresponding to the inverse operator object.
subject : str | None
The name of the FreeSurfer subject ``inverse_op`` belongs to. If
provided, the source space the inverse solution is based on will
be visualized. If ``None``, will use the value of ``subject``
passed on report creation. If supplied, also pass ``subjects_dir``
and ``trans``.
subjects_dir : path-like | None
The FreeSurfer ``SUBJECTS_DIR``.
trans : path-like | instance of Transform | None
The ``head -> MRI`` transformation for ``subject``.
%(tags_report)s
%(replace_report)s
Notes
-----
.. versionadded:: 0.24.0
"""
tags = _check_tags(tags)
if ((subject is not None and trans is None) or
(trans is not None and subject is None)):
raise ValueError('Please pass subject AND trans, or neither.')
self._add_inverse_operator(
inverse_operator=inverse_operator, subject=subject,
subjects_dir=subjects_dir, trans=trans, title=title,
image_format=self.image_format, section=None, tags=tags,
replace=replace,
)
@fill_doc
def add_trans(
self, trans, *, info, title, subject=None, subjects_dir=None,
alpha=None, tags=('coregistration',), replace=False
):
"""Add a coregistration visualization to the report.
Parameters
----------
trans : path-like | instance of Transform
The ``head -> MRI`` transformation to render.
info : path-like | instance of Info
The `~mne.Info` corresponding to ``trans``.
title : str
The title to add.
subject : str | None
The name of the FreeSurfer subject the ``trans```` belong to. The
name is not stored with the ``trans`` and therefore needs to be
specified. If ``None``, will use the value of ``subject`` passed on
report creation.
subjects_dir : path-like | None
The FreeSurfer ``SUBJECTS_DIR``.
alpha : float | None
The level of opacity to apply to the head surface. If a float, must
be between 0 and 1 (inclusive), where 1 means fully opaque. If
``None``, will use the MNE-Python default value.
%(tags_report)s
%(replace_report)s
Notes
-----
.. versionadded:: 0.24.0
"""
tags = _check_tags(tags)
self._add_trans(
trans=trans,
info=info,
subject=subject,
subjects_dir=subjects_dir,
alpha=alpha,
title=title,
section=None,
tags=tags,
replace=replace,
)
@fill_doc
def add_covariance(
self, cov, *, info, title, tags=('covariance',), replace=False
):
"""Add covariance to the report.
Parameters
----------
cov : path-like | instance of Covariance
The `~mne.Covariance` to add to the report.
info : path-like | instance of Info
The `~mne.Info` corresponding to ``cov``.
title : str
The title corresponding to the `~mne.Covariance` object.
%(tags_report)s
%(replace_report)s
Notes
-----
.. versionadded:: 0.24.0
"""
tags = _check_tags(tags)
self._add_cov(
cov=cov,
info=info,
image_format=self.image_format,
section=title,
tags=tags,
replace=replace,
)
@fill_doc
def add_events(
self, events, title, *, event_id=None, sfreq, first_samp=0,
tags=('events',), replace=False
):
"""Add events to the report.
Parameters
----------
events : path-like | array, shape (n_events, 3)
An MNE-Python events array.
title : str
The title corresponding to the events.
event_id : dict
A dictionary mapping event names (keys) to event codes (values).
sfreq : float
The sampling frequency used while recording.
first_samp : int
The first sample point in the recording. This corresponds to
``raw.first_samp`` on files created with Elekta/Neuromag systems.
%(tags_report)s
%(replace_report)s
Notes
-----
.. versionadded:: 0.24.0
"""
tags = _check_tags(tags)
self._add_events(
events=events,
event_id=event_id,
sfreq=sfreq,
first_samp=first_samp,
title=title,
section=None,
image_format=self.image_format,
tags=tags,
replace=replace,
)
@fill_doc
def add_projs(self, *, info, projs=None, title, topomap_kwargs=None,
tags=('ssp',), replace=False):
"""Render (SSP) projection vectors.
Parameters
----------
info : instance of Info | path-like
An `~mne.Info` structure or the path of a file containing one. This
is required to create the topographic plots.
projs : iterable of mne.Projection | path-like | None
The projection vectors to add to the report. Can be the path to a
file that will be loaded via `mne.read_proj`. If ``None``, the
projectors are taken from ``info['projs']``.
title : str
The title corresponding to the `~mne.Projection` object.
%(topomap_kwargs)s
%(tags_report)s
%(replace_report)s
Notes
-----
.. versionadded:: 0.24.0
"""
tags = _check_tags(tags)
self._add_projs(
info=info, projs=projs, title=title,
image_format=self.image_format, section=None, tags=tags,
topomap_kwargs=topomap_kwargs, replace=replace
)
def _add_ica_overlay(
self, *, ica, inst, image_format, section, tags, replace
):
if isinstance(inst, BaseRaw):
inst_ = inst
else: # Epochs
inst_ = inst.average()
fig = ica.plot_overlay(inst=inst_, show=False, on_baseline='reapply')
del inst_
tight_layout(fig=fig)
_constrain_fig_resolution(
fig, max_width=MAX_IMG_WIDTH, max_res=MAX_IMG_RES
)
self._add_figure(
fig=fig,
title='Original and cleaned signal',
caption=None,
image_format=image_format,
section=section,
tags=tags,
replace=replace,
own_figure=True,
)
def _add_ica_properties(
self, *, ica, picks, inst, n_jobs, image_format, section, tags, replace
):
ch_type = _get_ch_type(inst=ica.info, ch_type=None)
if not _check_ch_locs(info=ica.info, ch_type=ch_type):
ch_type_name = _handle_default("titles")[ch_type]
warn(f'No {ch_type_name} channel locations found, cannot '
f'create ICA properties plots')
return
figs = _plot_ica_properties_as_arrays(
ica=ica, inst=inst, picks=picks, n_jobs=n_jobs
)
rel_explained_var = (ica.pca_explained_variance_ /
ica.pca_explained_variance_.sum())
cum_explained_var = np.cumsum(rel_explained_var)
captions = []
for idx, rel_var, cum_var in zip(
range(len(figs)),
rel_explained_var[:len(figs)],
cum_explained_var[:len(figs)]
):
caption = (
f'ICA component {idx}. '
f'Variance explained: {round(100 * rel_var)}%'
)
if idx == 0:
caption += '.'
else:
caption += f' ({round(100 * cum_var)}% cumulative).'
captions.append(caption)
title = 'ICA component properties'
# Only render a slider if we have more than 1 component.
if len(figs) == 1:
self._add_figure(
fig=figs[0],
title=title,
caption=captions[0],
image_format=image_format,
tags=tags,
section=section,
replace=replace,
own_figure=True,
)
else:
self._add_slider(
figs=figs,
imgs=None,
title=title,
captions=captions,
start_idx=0,
image_format=image_format,
section=section,
tags=tags,
replace=replace,
own_figure=True,
)
def _add_ica_artifact_sources(
self, *, ica, inst, artifact_type, image_format, section, tags, replace
):
with use_browser_backend('matplotlib'):
fig = ica.plot_sources(inst=inst, show=False)
_constrain_fig_resolution(
fig, max_width=MAX_IMG_WIDTH, max_res=MAX_IMG_RES
)
self._add_figure(
fig=fig,
title=f'Original and cleaned {artifact_type} epochs',
caption=None,
image_format=image_format,
tags=tags,
section=section,
replace=replace,
own_figure=True,
)
def _add_ica_artifact_scores(
self, *, ica, scores, artifact_type, image_format, section, tags,
replace
):
fig = ica.plot_scores(scores=scores, title=None, show=False)
_constrain_fig_resolution(
fig, max_width=MAX_IMG_WIDTH, max_res=MAX_IMG_RES
)
self._add_figure(
fig=fig,
title=f'Scores for matching {artifact_type} patterns',
caption=None,
image_format=image_format,
tags=tags,
section=section,
replace=replace,
own_figure=True,
)
def _add_ica_components(
self, *, ica, picks, image_format, section, tags, replace
):
ch_type = _get_ch_type(inst=ica.info, ch_type=None)
if not _check_ch_locs(info=ica.info, ch_type=ch_type):
ch_type_name = _handle_default("titles")[ch_type]
warn(f'No {ch_type_name} channel locations found, cannot '
f'create ICA component plots')
return ''
figs = ica.plot_components(
picks=picks, title='', colorbar=True, show=False
)
if not isinstance(figs, list):
figs = [figs]
for fig in figs:
tight_layout(fig=fig)
title = 'ICA component topographies'
if len(figs) == 1:
fig = figs[0]
_constrain_fig_resolution(
fig, max_width=MAX_IMG_WIDTH, max_res=MAX_IMG_RES
)
self._add_figure(
fig=fig,
title=title,
caption=None,
image_format=image_format,
tags=tags,
section=section,
replace=replace,
own_figure=True,
)
else:
self._add_slider(
figs=figs,
imgs=None,
title=title,
captions=[None] * len(figs),
start_idx=0,
image_format=image_format,
section=section,
tags=tags,
replace=replace
)
def _add_ica(
self, *, ica, inst, picks, ecg_evoked,
eog_evoked, ecg_scores, eog_scores, title, image_format,
section, tags, n_jobs, replace
):
if _path_like(ica):
ica = read_ica(ica)
if ica.current_fit == 'unfitted':
raise RuntimeError(
'ICA must be fitted before it can be added to the report.'
)
if inst is None:
pass # no-op
elif _path_like(inst):
# We cannot know which data type to expect, so let's first try to
# read a Raw, and if that fails, try to load Epochs
fname = str(inst) # could e.g. be a Path!
raw_kwargs = dict(fname=fname, preload=False)
if fname.endswith(('.fif', '.fif.gz')):
raw_kwargs['allow_maxshield'] = 'yes'
try:
inst = read_raw(**raw_kwargs)
except ValueError:
try:
inst = read_epochs(fname)
except ValueError:
raise ValueError(
f'The specified file, {fname}, does not seem to '
f'contain Raw data or Epochs'
)
elif not inst.preload:
raise RuntimeError(
'You passed an object to Report.add_ica() via the "inst" '
'parameter that was not preloaded. Please preload the data '
'via the load_data() method'
)
if _path_like(ecg_evoked):
ecg_evoked = read_evokeds(fname=ecg_evoked, condition=0)
if _path_like(eog_evoked):
eog_evoked = read_evokeds(fname=eog_evoked, condition=0)
# Summary table
self._add_html_repr(
inst=ica,
title='Info',
tags=tags,
section=section,
replace=replace,
div_klass='ica',
)
# Overlay plot
if inst:
self._add_ica_overlay(
ica=ica, inst=inst, image_format=image_format, section=section,
tags=tags, replace=replace,
)
# ECG artifact
if ecg_scores is not None:
self._add_ica_artifact_scores(
ica=ica, scores=ecg_scores, artifact_type='ECG',
image_format=image_format, section=section, tags=tags,
replace=replace,
)
if ecg_evoked:
self._add_ica_artifact_sources(
ica=ica, inst=ecg_evoked, artifact_type='ECG',
image_format=image_format, section=section, tags=tags,
replace=replace,
)
# EOG artifact
if eog_scores is not None:
self._add_ica_artifact_scores(
ica=ica, scores=eog_scores, artifact_type='EOG',
image_format=image_format, section=section, tags=tags,
replace=replace,
)
if eog_evoked:
self._add_ica_artifact_sources(
ica=ica, inst=eog_evoked, artifact_type='EOG',
image_format=image_format, section=section, tags=tags,
replace=replace,
)
# Component topography plots
self._add_ica_components(
ica=ica, picks=picks, image_format=image_format, section=section,
tags=tags, replace=replace,
)
# Properties plots
if inst:
self._add_ica_properties(
ica=ica, picks=picks, inst=inst, n_jobs=n_jobs,
image_format=image_format, section=section, tags=tags,
replace=replace,
)
@fill_doc
def add_ica(
self, ica, title, *, inst, picks=None, ecg_evoked=None,
eog_evoked=None, ecg_scores=None, eog_scores=None, n_jobs=None,
tags=('ica',), replace=False
):
"""Add (a fitted) `~mne.preprocessing.ICA` to the report.
Parameters
----------
ica : path-like | instance of mne.preprocessing.ICA
The fitted ICA to add.
title : str
The title to add.
inst : path-like | mne.io.Raw | mne.Epochs | None
The data to use for visualization of the effects of ICA cleaning.
To only plot the ICA component topographies, explicitly pass
``None``.
%(picks_ica)s This only affects the behavior of the component
topography and properties plots.
ecg_evoked, eog_evoked : path-line | mne.Evoked | None
Evoked signal based on ECG and EOG epochs, respectively. If passed,
will be used to visualize the effects of artifact rejection.
ecg_scores, eog_scores : array of float | list of array of float | None
The scores produced by :meth:`mne.preprocessing.ICA.find_bads_ecg`
and :meth:`mne.preprocessing.ICA.find_bads_eog`, respectively.
If passed, will be used to visualize the scoring for each ICA
component.
%(n_jobs)s
%(tags_report)s
%(replace_report)s
Notes
-----
.. versionadded:: 0.24.0
"""
tags = _check_tags(tags)
self._add_ica(
ica=ica, inst=inst, picks=picks,
ecg_evoked=ecg_evoked, eog_evoked=eog_evoked,
ecg_scores=ecg_scores, eog_scores=eog_scores,
title=title, image_format=self.image_format,
tags=tags, section=title, n_jobs=n_jobs, replace=replace,
)
def remove(self, *, title=None, tags=None, remove_all=False):
"""Remove elements from the report.
The element to remove is searched for by its title. Optionally, tags
may be specified as well to narrow down the search to elements that
have the supplied tags.
Parameters
----------
title : str
The title of the element(s) to remove.
.. versionadded:: 0.24.0
tags : array-like of str | str | None
If supplied, restrict the operation to elements with the supplied
tags.
.. versionadded:: 0.24.0
remove_all : bool
Controls the behavior if multiple elements match the search
criteria. If ``False`` (default) only the element last added to the
report will be removed. If ``True``, all matches will be removed.
.. versionadded:: 0.24.0
Returns
-------
removed_index : int | tuple of int | None
The indices of the elements that were removed, or ``None`` if no
element matched the search criteria. A tuple will always be
returned if ``remove_all`` was set to ``True`` and at least one
element was removed.
.. versionchanged:: 0.24.0
Returns tuple if ``remove_all`` is ``True``.
"""
remove_idx = []
for idx, element in enumerate(self._content):
if element.name == title:
if (tags is not None and
not all(t in element.tags for t in tags)):
continue
remove_idx.append(idx)
if not remove_idx:
remove_idx = None
elif not remove_all: # only remove last occurrence
remove_idx = remove_idx[-1]
del self._content[remove_idx]
else: # remove all occurrences
remove_idx = tuple(remove_idx)
self._content = [e for idx, e in enumerate(self._content)
if idx not in remove_idx]
return remove_idx
@fill_doc
def _add_or_replace(
self, *, title, section, dom_id, tags, html, replace=False
):
"""Append HTML content report, or replace it if it already exists.
Parameters
----------
title : str
The title entry.
%(section_report)s
dom_id : str
A unique element ``id`` in the DOM.
tags : tuple of str
The tags associated with the added element.
html : str
The HTML.
replace : bool
Whether to replace existing content if the title and section match.
"""
assert isinstance(html, str) # otherwise later will break
new_content = _ContentElement(
name=title,
section=section,
dom_id=dom_id,
tags=tags,
html=html
)
append = True
if replace:
matches = [
ii
for ii, element in enumerate(self._content)
if (element.name, element.section) == (title, section)
]
if matches:
self._content[matches[-1]] = new_content
append = False
if append:
self._content.append(new_content)
def _add_code(self, *, code, title, language, section, tags, replace):
if isinstance(code, Path):
code = Path(code).read_text()
dom_id = self._get_dom_id()
html = _html_code_element(
tags=tags,
title=title,
id=dom_id,
code=code,
language=language
)
self._add_or_replace(
dom_id=dom_id,
title=title,
section=section,
tags=tags,
html=html,
replace=replace
)
@fill_doc
def add_code(
self, code, title, *, language='python', tags=('code',),
replace=False
):
"""Add a code snippet (e.g., an analysis script) to the report.
Parameters
----------
code : str | pathlib.Path
The code to add to the report as a string, or the path to a file
as a `pathlib.Path` object.
.. note:: Paths must be passed as `pathlib.Path` object, since
strings will be treated as literal code.
title : str
The title corresponding to the code.
language : str
The programming language of ``code``. This will be used for syntax
highlighting. Can be ``'auto'`` to try to auto-detect the language.
%(tags_report)s
%(replace_report)s
Notes
-----
.. versionadded:: 0.24.0
"""
tags = _check_tags(tags)
language = language.lower()
self._add_code(
code=code, title=title, language=language, section=None, tags=tags,
replace=replace,
)
@fill_doc
def add_sys_info(self, title, *, tags=('mne-sysinfo',), replace=False):
"""Add a MNE-Python system information to the report.
This is a convenience method that captures the output of
`mne.sys_info` and adds it to the report.
Parameters
----------
title : str
The title to assign.
%(tags_report)s
%(replace_report)s
Notes
-----
.. versionadded:: 0.24.0
"""
tags = _check_tags(tags)
with StringIO() as f:
sys_info(f)
info = f.getvalue()
self.add_code(
code=info, title=title, language='shell', tags=tags,
replace=replace
)
def _add_image(
self, *, img, title, caption, image_format, tags, section, replace,
):
dom_id = self._get_dom_id()
html = _html_image_element(
img=img, div_klass='custom-image', img_klass='custom-image',
title=title, caption=caption, show=True,
image_format=image_format, id=dom_id, tags=tags
)
self._add_or_replace(
dom_id=dom_id,
title=title,
section=section,
tags=tags,
html=html,
replace=replace
)
def _add_figure(
self, *, fig, title, caption, image_format, tags, section, replace,
own_figure
):
img = _fig_to_img(
fig=fig, image_format=image_format, own_figure=own_figure
)
self._add_image(
img=img, title=title, caption=caption, image_format=image_format,
tags=tags, section=section, replace=replace
)
@fill_doc
def add_figure(
self, fig, title, *, caption=None, image_format=None,
tags=('custom-figure',), section=None, replace=False
):
"""Add figures to the report.
Parameters
----------
fig : matplotlib.figure.Figure | Figure3D | array | array-like of matplotlib.figure.Figure | array-like of Figure3D | array-like of array
One or more figures to add to the report. All figures must be an
instance of :class:`matplotlib.figure.Figure`,
:class:`mne.viz.Figure3D`, or :class:`numpy.ndarray`. If
multiple figures are passed, they will be added as "slides"
that can be navigated using buttons and a slider element.
title : str
The title corresponding to the figure(s).
caption : str | array-like of str | None
The caption(s) to add to the figure(s).
%(image_format_report)s
%(tags_report)s
%(section_report)s
%(replace_report)s
Notes
-----
.. versionadded:: 0.24.0
""" # noqa E501
tags = _check_tags(tags)
if image_format is None:
image_format = self.image_format
if hasattr(fig, '__len__') and not isinstance(fig, np.ndarray):
figs = tuple(fig)
else:
figs = (fig,)
for fig in figs:
if _path_like(fig):
raise TypeError(
f'It seems you passed a path to `add_figure`. However, '
f'only Matplotlib figures, PyVista scenes, and NumPy '
f'arrays are accepted. You may want to try `add_image` '
f'instead. The provided path was: {fig}'
)
del fig
if isinstance(caption, str):
captions = (caption,)
elif caption is None and len(figs) == 1:
captions = [None]
elif caption is None and len(figs) > 1:
captions = [f'Figure {i+1}' for i in range(len(figs))]
else:
captions = tuple(caption)
del caption
assert figs
if len(figs) == 1:
self._add_figure(
title=title, fig=figs[0], caption=captions[0],
image_format=image_format, section=section, tags=tags,
replace=replace, own_figure=False
)
else:
self._add_slider(
figs=figs, imgs=None, title=title, captions=captions,
start_idx=0, image_format=image_format, section=section,
tags=tags, own_figure=False, replace=replace
)
@fill_doc
def add_image(
self, image, title, *, caption=None, tags=('custom-image',),
section=None, replace=False
):
"""Add an image (e.g., PNG or JPEG pictures) to the report.
Parameters
----------
image : path-like
The image to add.
title : str
Title corresponding to the images.
caption : str | None
If not ``None``, the caption to add to the image.
%(tags_report)s
%(section_report)s
%(replace_report)s
Notes
-----
.. versionadded:: 0.24.0
"""
tags = _check_tags(tags)
image = Path(_check_fname(image, overwrite='read', must_exist=True))
img_format = Path(image).suffix.lower()[1:] # omit leading period
_check_option('Image format', value=img_format,
allowed_values=list(_ALLOWED_IMAGE_FORMATS) + ['gif'])
img_base64 = base64.b64encode(image.read_bytes()).decode('ascii')
self._add_image(
img=img_base64,
title=title,
caption=caption,
image_format=img_format,
tags=tags,
section=section,
replace=replace
)
@fill_doc
def add_html(
self, html, title, *, tags=('custom-html',), section=None,
replace=False
):
"""Add HTML content to the report.
Parameters
----------
html : str
The HTML content to add.
title : str
The title corresponding to ``html``.
%(tags_report)s
%(section_report)s
.. versionadded:: 1.3
%(replace_report)s
Notes
-----
.. versionadded:: 0.24.0
"""
tags = _check_tags(tags)
dom_id = self._get_dom_id()
html_element = _html_element(
id=dom_id, html=html, title=title, tags=tags,
div_klass='custom-html'
)
self._add_or_replace(
dom_id=dom_id,
title=title,
section=None,
tags=tags,
html=html_element,
replace=replace
)
@fill_doc
def add_bem(
self, subject, title, *, subjects_dir=None, decim=2, width=512,
n_jobs=None, tags=('bem',), replace=False
):
"""Render a visualization of the boundary element model (BEM) surfaces.
Parameters
----------
subject : str
The FreeSurfer subject name.
title : str
The title corresponding to the BEM image.
%(subjects_dir)s
decim : int
Use this decimation factor for generating MRI/BEM images
(since it can be time consuming).
width : int
The width of the MRI images (in pixels). Larger values will have
clearer surface lines, but will create larger HTML files.
Typically a factor of 2 more than the number of MRI voxels along
each dimension (typically 512, default) is reasonable.
%(n_jobs)s
%(tags_report)s
%(replace_report)s
Notes
-----
.. versionadded:: 0.24.0
"""
tags = _check_tags(tags)
width = _ensure_int(width, 'width')
self._add_bem(
subject=subject, subjects_dir=subjects_dir,
decim=decim, n_jobs=n_jobs, width=width,
image_format=self.image_format, title=title, tags=tags,
replace=replace
)
def _render_slider(
self, *, figs, imgs, title, captions, start_idx,
image_format, tags, klass, own_figure
):
# This method only exists to make add_bem()'s life easier…
if figs is not None and imgs is not None:
raise ValueError('Must only provide either figs or imgs')
if figs is not None and len(figs) != len(captions):
raise ValueError(
f'Number of captions ({len(captions)}) must be equal to the '
f'number of figures ({len(figs)})'
)
elif imgs is not None and len(imgs) != len(captions):
raise ValueError(
f'Number of captions ({len(captions)}) must be equal to the '
f'number of images ({len(imgs)})'
)
elif figs: # figs can be None if imgs is provided
imgs = [_fig_to_img(fig=fig, image_format=image_format,
own_figure=own_figure)
for fig in figs]
dom_id = self._get_dom_id()
html = _html_slider_element(
id=dom_id,
title=title,
captions=captions,
tags=tags,
images=imgs,
image_format=image_format,
start_idx=start_idx,
klass=klass
)
return html, dom_id
def _add_slider(
self, *, figs, imgs, title, captions, start_idx,
image_format, tags, section, replace, klass='', own_figure=True
):
html, dom_id = self._render_slider(
figs=figs, imgs=imgs, title=title, captions=captions,
start_idx=start_idx, image_format=image_format, tags=tags,
klass=klass, own_figure=own_figure
)
self._add_or_replace(
title=title,
section=section,
dom_id=dom_id,
tags=tags,
html=html,
replace=replace
)
###########################################################################
# global rendering functions
@verbose
def _init_render(self, verbose=None):
"""Initialize the renderer."""
inc_fnames = [
'jquery-3.6.0.min.js',
'bootstrap.bundle.min.js',
'bootstrap.min.css',
'bootstrap-table/bootstrap-table.min.js',
'bootstrap-table/bootstrap-table.min.css',
'bootstrap-table/bootstrap-table-copy-rows.min.js',
'bootstrap-table/bootstrap-table-export.min.js',
'bootstrap-table/tableExport.min.js',
'bootstrap-icons/bootstrap-icons.mne.min.css',
'highlightjs/highlight.min.js',
'highlightjs/atom-one-dark-reasonable.min.css'
]
include = list()
for inc_fname in inc_fnames:
logger.info(f'Embedding : {inc_fname}')
fname = html_include_dir / inc_fname
file_content = fname.read_text(encoding='utf-8')
if inc_fname.endswith('.js'):
include.append(
f'<script type="text/javascript">\n'
f'{file_content}\n'
f'</script>'
)
elif inc_fname.endswith('.css'):
include.append(
f'<style type="text/css">\n'
f'{file_content}\n'
f'</style>'
)
self.include = ''.join(include)
def _iterate_files(self, *, fnames, cov, sfreq, raw_butterfly,
n_time_points_evokeds, n_time_points_stcs, on_error,
stc_plot_kwargs, topomap_kwargs):
"""Parallel process in batch mode."""
assert self.data_path is not None
for fname in fnames:
logger.info(
f"Rendering : {op.join('…' + self.data_path[-20:], fname)}"
)
title = Path(fname).name
try:
if _endswith(fname, ['raw', 'sss', 'meg', 'nirs']):
self.add_raw(
raw=fname, title=title, psd=self.raw_psd,
projs=self.projs, butterfly=raw_butterfly
)
elif _endswith(fname, 'fwd'):
self.add_forward(
forward=fname, title=title, subject=self.subject,
subjects_dir=self.subjects_dir
)
elif _endswith(fname, 'inv'):
# XXX if we pass trans, we can plot the source space, too…
self.add_inverse_operator(
inverse_operator=fname, title=title
)
elif _endswith(fname, 'ave'):
evokeds = read_evokeds(fname)
titles = [
f'{Path(fname).name}: {e.comment}'
for e in evokeds
]
self.add_evokeds(
evokeds=fname, titles=titles, noise_cov=cov,
n_time_points=n_time_points_evokeds,
topomap_kwargs=topomap_kwargs
)
elif _endswith(fname, 'eve'):
if self.info_fname is not None:
sfreq = read_info(self.info_fname)['sfreq']
else:
sfreq = None
self.add_events(events=fname, title=title, sfreq=sfreq)
elif _endswith(fname, 'epo'):
self.add_epochs(epochs=fname, title=title)
elif _endswith(fname, 'cov') and self.info_fname is not None:
self.add_covariance(cov=fname, info=self.info_fname,
title=title)
elif _endswith(fname, 'proj') and self.info_fname is not None:
self.add_projs(info=self.info_fname, projs=fname,
title=title, topomap_kwargs=topomap_kwargs)
# XXX TODO We could render ICA components here someday
# elif _endswith(fname, 'ica') and ica:
# pass
elif (_endswith(fname, 'trans') and
self.info_fname is not None and
self.subjects_dir is not None and
self.subject is not None):
self.add_trans(
trans=fname, info=self.info_fname,
subject=self.subject, subjects_dir=self.subjects_dir,
title=title
)
elif (fname.endswith('-lh.stc') or
fname.endswith('-rh.stc') and
self.info_fname is not None and
self.subjects_dir is not None and
self.subject is not None):
self.add_stc(
stc=fname, title=title, subject=self.subject,
subjects_dir=self.subjects_dir,
n_time_points=n_time_points_stcs,
stc_plot_kwargs=stc_plot_kwargs
)
except Exception as e:
if on_error == 'warn':
warn(f'Failed to process file {fname}:\n"{e}"')
elif on_error == 'raise':
raise
@verbose
def parse_folder(self, data_path, pattern=None, n_jobs=None, mri_decim=2,
sort_content=True, *, on_error='warn',
image_format=None, render_bem=True,
n_time_points_evokeds=None, n_time_points_stcs=None,
raw_butterfly=True, stc_plot_kwargs=None,
topomap_kwargs=None, verbose=None):
r"""Render all the files in the folder.
Parameters
----------
data_path : str
Path to the folder containing data whose HTML report will be
created.
pattern : None | str | list of str
Filename pattern(s) to include in the report.
For example, ``[\*raw.fif, \*ave.fif]`` will include `~mne.io.Raw`
as well as `~mne.Evoked` files. If ``None``, include all supported
file formats.
.. versionchanged:: 0.23
Include supported non-FIFF files by default.
%(n_jobs)s
mri_decim : int
Use this decimation factor for generating MRI/BEM images
(since it can be time consuming).
sort_content : bool
If ``True``, sort the content based on tags in the order:
raw -> events -> epochs -> evoked -> covariance -> coregistration
-> bem -> forward-solution -> inverse-operator -> source-estimate.
.. versionadded:: 0.24.0
on_error : str
What to do if a file cannot be rendered. Can be 'ignore',
'warn' (default), or 'raise'.
%(image_format_report)s
.. versionadded:: 0.15
render_bem : bool
If True (default), try to render the BEM.
.. versionadded:: 0.16
n_time_points_evokeds, n_time_points_stcs : int | None
The number of equidistant time points to render for `~mne.Evoked`
and `~mne.SourceEstimate` data, respectively. If ``None``,
will render each `~mne.Evoked` at 21 and each `~mne.SourceEstimate`
at 51 time points, unless the respective data contains fewer time
points, in which call all will be rendered.
.. versionadded:: 0.24.0
raw_butterfly : bool
Whether to render butterfly plots for (decimated) `~mne.io.Raw`
data.
.. versionadded:: 0.24.0
%(stc_plot_kwargs_report)s
.. versionadded:: 0.24.0
%(topomap_kwargs)s
.. versionadded:: 0.24.0
%(verbose)s
"""
_validate_type(data_path, 'path-like', 'data_path')
data_path = str(data_path)
image_format = _check_image_format(self, image_format)
_check_option('on_error', on_error, ['ignore', 'warn', 'raise'])
self.data_path = data_path
if self.title is None:
self.title = f'MNE Report for {self.data_path[-20:]}'
if pattern is None:
pattern = [f'*{ext}' for ext in SUPPORTED_READ_RAW_EXTENSIONS]
elif not isinstance(pattern, (list, tuple)):
pattern = [pattern]
# iterate through the possible patterns
fnames = list()
for p in pattern:
data_path = _check_fname(
fname=self.data_path, overwrite='read', must_exist=True,
name='Directory or folder', need_dir=True
)
fnames.extend(sorted(_recursive_search(data_path, p)))
if not fnames and not render_bem:
raise RuntimeError(f'No matching files found in {self.data_path}')
fnames_to_remove = []
for fname in fnames:
# For split files, only keep the first one.
if _endswith(fname, ('raw', 'sss', 'meg')):
kwargs = dict(fname=fname, preload=False)
if fname.endswith(('.fif', '.fif.gz')):
kwargs['allow_maxshield'] = 'yes'
inst = read_raw(**kwargs)
if len(inst.filenames) > 1:
fnames_to_remove.extend(inst.filenames[1:])
# For STCs, only keep one hemisphere
elif fname.endswith('-lh.stc') or fname.endswith('-rh.stc'):
first_hemi_fname = fname
if first_hemi_fname.endswidth('-lh.stc'):
second_hemi_fname = (first_hemi_fname
.replace('-lh.stc', '-rh.stc'))
else:
second_hemi_fname = (first_hemi_fname
.replace('-rh.stc', '-lh.stc'))
if (second_hemi_fname in fnames and
first_hemi_fname not in fnames_to_remove):
fnames_to_remove.extend(first_hemi_fname)
else:
continue
fnames_to_remove = list(set(fnames_to_remove)) # Drop duplicates
for fname in fnames_to_remove:
if fname in fnames:
del fnames[fnames.index(fname)]
del fnames_to_remove
if self.info_fname is not None:
info = read_info(self.info_fname, verbose=False)
sfreq = info['sfreq']
else:
# only warn if relevant
if any(_endswith(fname, 'cov') for fname in fnames):
warn('`info_fname` not provided. Cannot render '
'-cov.fif(.gz) files.')
if any(_endswith(fname, 'trans') for fname in fnames):
warn('`info_fname` not provided. Cannot render '
'-trans.fif(.gz) files.')
if any(_endswith(fname, 'proj') for fname in fnames):
warn('`info_fname` not provided. Cannot render '
'-proj.fif(.gz) files.')
info, sfreq = None, None
cov = None
if self.cov_fname is not None:
cov = read_cov(self.cov_fname)
# render plots in parallel; check that n_jobs <= # of files
logger.info(f'Iterating over {len(fnames)} potential files '
f'(this may take some ')
parallel, p_fun, n_jobs = parallel_func(
self._iterate_files, n_jobs, max_jobs=len(fnames))
parallel(
p_fun(
fnames=fname, cov=cov, sfreq=sfreq,
raw_butterfly=raw_butterfly,
n_time_points_evokeds=n_time_points_evokeds,
n_time_points_stcs=n_time_points_stcs, on_error=on_error,
stc_plot_kwargs=stc_plot_kwargs, topomap_kwargs=topomap_kwargs,
) for fname in np.array_split(fnames, n_jobs)
)
# Render BEM
if render_bem:
if self.subjects_dir is not None and self.subject is not None:
logger.info('Rendering BEM')
self.add_bem(
subject=self.subject, subjects_dir=self.subjects_dir,
title='BEM surfaces', decim=mri_decim, n_jobs=n_jobs
)
else:
warn('`subjects_dir` and `subject` not provided. Cannot '
'render MRI and -trans.fif(.gz) files.')
if sort_content:
self._content = self._sort(
content=self._content, order=CONTENT_ORDER
)
def __getstate__(self):
"""Get the state of the report as a dictionary."""
state = dict()
for param_name in self._get_state_params():
param_val = getattr(self, param_name)
# Workaround as h5io doesn't support dataclasses
if param_name == '_content':
assert all(dataclasses.is_dataclass(val) for val in param_val)
param_val = [dataclasses.asdict(val) for val in param_val]
state[param_name] = param_val
return state
def __setstate__(self, state):
"""Set the state of the report."""
for param_name in self._get_state_params():
param_val = state[param_name]
# Workaround as h5io doesn't support dataclasses
if param_name == '_content':
param_val = [_ContentElement(**val) for val in param_val]
setattr(self, param_name, param_val)
return state
@verbose
def save(self, fname=None, open_browser=True, overwrite=False,
sort_content=False, *, verbose=None):
"""Save the report and optionally open it in browser.
Parameters
----------
fname : path-like | None
Output filename. If the name ends with ``.h5`` or ``.hdf5``, the
report is saved in HDF5 format, so it can later be loaded again
with :func:`open_report`. For any other suffix, the report will be
saved in HTML format. If ``None`` and :meth:`Report.parse_folder`
was **not** called, the report is saved as ``report.html`` in the
current working directory. If ``None`` and
:meth:`Report.parse_folder` **was** used, the report is saved as
``report.html`` inside the ``data_path`` supplied to
:meth:`Report.parse_folder`.
open_browser : bool
Whether to open the rendered HTML report in the default web browser
after saving. This is ignored when writing an HDF5 file.
%(overwrite)s
sort_content : bool
If ``True``, sort the content based on tags before saving in the
order:
raw -> events -> epochs -> evoked -> covariance -> coregistration
-> bem -> forward-solution -> inverse-operator -> source-estimate.
.. versionadded:: 0.24.0
%(verbose)s
Returns
-------
fname : str
The file name to which the report was saved.
"""
if fname is None:
if self.data_path is None:
self.data_path = os.getcwd()
warn(f'`data_path` not provided. Using {self.data_path} '
f'instead')
fname = op.join(self.data_path, 'report.html')
fname = _check_fname(fname, overwrite=overwrite, name=fname)
fname = op.realpath(fname) # resolve symlinks
if sort_content:
self._content = self._sort(
content=self._content, order=CONTENT_ORDER
)
if not overwrite and op.isfile(fname):
msg = (f'Report already exists at location {fname}. '
f'Overwrite it (y/[n])? ')
answer = _safe_input(msg, alt='pass overwrite=True')
if answer.lower() == 'y':
overwrite = True
_, ext = op.splitext(fname)
is_hdf5 = ext.lower() in ['.h5', '.hdf5']
if overwrite or not op.isfile(fname):
logger.info(f'Saving report to : {fname}')
if is_hdf5:
_, write_hdf5 = _import_h5io_funcs()
write_hdf5(fname, self.__getstate__(), overwrite=overwrite,
title='mnepython')
else:
# Add header, TOC, and footer.
header_html = _html_header_element(
title=self.title, include=self.include, lang=self.lang,
tags=self.tags, js=JAVASCRIPT, css=CSS,
mne_logo_img=mne_logo
)
# toc_html = _html_toc_element(content_elements=self._content)
_, dom_ids, titles, tags = self._content_as_html()
toc_html = _html_toc_element(
titles=titles,
dom_ids=dom_ids,
tags=tags
)
with warnings.catch_warnings(record=True):
warnings.simplefilter('ignore')
footer_html = _html_footer_element(
mne_version=MNE_VERSION,
date=time.strftime("%B %d, %Y")
)
html = [header_html, toc_html, *self.html, footer_html]
Path(fname).write_text(data=''.join(html), encoding='utf-8')
building_doc = os.getenv('_MNE_BUILDING_DOC', '').lower() == 'true'
if open_browser and not is_hdf5 and not building_doc:
webbrowser.open_new_tab('file://' + fname)
if self.fname is None:
self.fname = fname
return fname
def __enter__(self):
"""Do nothing when entering the context block."""
return self
def __exit__(self, type, value, traceback):
"""Save the report when leaving the context block."""
if self.fname is not None:
self.save(self.fname, open_browser=False, overwrite=True)
@staticmethod
def _sort(content, order):
"""Reorder content to reflect "natural" ordering."""
content_unsorted = content.copy()
content_sorted = []
content_sorted_idx = []
del content
# First arrange content with known tags in the predefined order
for tag in order:
for idx, content in enumerate(content_unsorted):
if tag in content.tags:
content_sorted_idx.append(idx)
content_sorted.append(content)
# Now simply append the rest (custom tags)
content_remaining = [
content for idx, content in enumerate(content_unsorted)
if idx not in content_sorted_idx
]
content_sorted = [*content_sorted, *content_remaining]
return content_sorted
def _render_one_bem_axis(self, *, mri_fname, surfaces,
image_format, orientation, decim=2, n_jobs=None,
width=512, tags):
"""Render one axis of bem contours (only PNG)."""
import nibabel as nib
nim = nib.load(mri_fname)
data = _reorient_image(nim)[0]
axis = _mri_orientation(orientation)[0]
n_slices = data.shape[axis]
sl = np.arange(0, n_slices, decim)
logger.debug(f'Rendering BEM {orientation} with {len(sl)} slices')
figs = _get_bem_contour_figs_as_arrays(
sl=sl, n_jobs=n_jobs, mri_fname=mri_fname, surfaces=surfaces,
orientation=orientation, src=None, show=False,
show_orientation='always', width=width
)
# Render the slider
captions = [f'Slice index: {i * decim}' for i in range(len(figs))]
start_idx = int(round(len(figs) / 2))
html, _ = self._render_slider(
figs=figs,
imgs=None,
captions=captions,
title=orientation,
image_format=image_format,
start_idx=start_idx,
tags=tags,
klass='bem col-md',
own_figure=True,
)
return html
def _add_html_repr(
self, *, inst, title, tags, section, replace, div_klass
):
html = inst._repr_html_()
self._add_html_element(
html=html,
title=title,
tags=tags,
section=section,
replace=replace,
div_klass=div_klass,
)
def _add_raw_butterfly_segments(
self, *, raw: BaseRaw, n_segments, scalings, image_format, tags,
section, replace
):
# Pick n_segments + 2 equally-spaced 1-second time slices, but omit
# the first and last slice, so we end up with n_segments slices
n = n_segments + 2
times = np.linspace(raw.times[0], raw.times[-1], n)[1:-1]
t_starts = np.array([max(t - 0.5, 0) for t in times])
t_stops = np.array([min(t + 0.5, raw.times[-1]) for t in times])
durations = t_stops - t_starts
# Remove annotations before plotting for better performance.
# Ensure we later restore raw.annotations even in case of an exception
orig_annotations = raw.annotations.copy()
try:
raw.set_annotations(None)
# Create the figure once and re-use it for performance reasons
with use_browser_backend('matplotlib'):
fig = raw.plot(
butterfly=True, show_scrollbars=False, start=t_starts[0],
duration=durations[0], scalings=scalings, show=False
)
_constrain_fig_resolution(
fig, max_width=MAX_IMG_WIDTH, max_res=MAX_IMG_RES
)
images = [_fig_to_img(fig=fig, image_format=image_format)]
for start, duration in zip(t_starts[1:], durations[1:]):
fig.mne.t_start = start
fig.mne.duration = duration
fig._update_hscroll()
fig._redraw(annotations=False)
images.append(_fig_to_img(fig=fig, image_format=image_format))
except Exception:
raise
finally:
raw.set_annotations(orig_annotations)
del orig_annotations
captions = [f'Segment {i+1} of {len(images)}'
for i in range(len(images))]
self._add_slider(
figs=None, imgs=images, title='Time series', captions=captions,
start_idx=0, image_format=image_format, tags=tags, section=section,
replace=replace
)
def _add_raw(
self, *, raw, add_psd, add_projs, butterfly,
butterfly_scalings, image_format, tags, topomap_kwargs,
section, replace
):
"""Render raw."""
if isinstance(raw, BaseRaw):
fname = raw.filenames[0]
else:
fname = str(raw) # could e.g. be a Path!
kwargs = dict(fname=fname, preload=False)
if fname.endswith(('.fif', '.fif.gz')):
kwargs['allow_maxshield'] = 'yes'
raw = read_raw(**kwargs)
# Summary table
self._add_html_repr(
inst=raw,
title='Info',
tags=tags,
section=section,
replace=replace,
div_klass='raw'
)
# Butterfly plot
if butterfly:
n_butterfly_segments = 10 if butterfly is True else butterfly
self._add_raw_butterfly_segments(
raw=raw, scalings=butterfly_scalings,
n_segments=n_butterfly_segments,
image_format=image_format, tags=tags, replace=replace,
section=section
)
# PSD
if isinstance(add_psd, dict):
if raw.info['lowpass'] is not None:
fmax = raw.info['lowpass'] + 15
# Must not exceed half the sampling frequency
if fmax > 0.5 * raw.info['sfreq']:
fmax = np.inf
else:
fmax = np.inf
fig = raw.plot_psd(fmax=fmax, show=False, **add_psd)
tight_layout(fig=fig)
_constrain_fig_resolution(
fig, max_width=MAX_IMG_WIDTH, max_res=MAX_IMG_RES
)
self._add_figure(
fig=fig,
title='PSD',
caption=None,
image_format=image_format,
tags=tags,
section=section,
replace=replace,
own_figure=True
)
# SSP projectors
if add_projs:
self._add_projs(
info=raw, projs=None, title='Projectors',
image_format=image_format, tags=tags,
topomap_kwargs=topomap_kwargs, section=section,
replace=replace
)
def _add_projs(self, *, info, projs, title, image_format, tags, section,
topomap_kwargs, replace):
if isinstance(info, Info): # no-op
pass
elif hasattr(info, 'info'): # try to get the file name
if isinstance(info, BaseRaw):
fname = info.filenames[0]
# elif isinstance(info, (Evoked, BaseEpochs)):
# fname = info.filename
else:
fname = ''
info = info.info
else: # read from a file
fname = info
info = read_info(fname, verbose=False)
if projs is None:
projs = info['projs']
elif not isinstance(projs, list):
fname = projs
projs = read_proj(fname)
if not projs:
raise ValueError('No SSP projectors found')
if not _check_ch_locs(info=info):
raise ValueError(
'The provided data does not contain digitization '
'information (channel locations). However, this is '
'required for rendering the projectors.'
)
topomap_kwargs = self._validate_topomap_kwargs(topomap_kwargs)
fig = plot_projs_topomap(
projs=projs, info=info, colorbar=True, vlim='joint',
show=False, **topomap_kwargs
)
# TODO This seems like a bad idea, better to provide a way to set a
# desired size in plot_projs_topomap, but that uses prepare_trellis...
# hard to see how (6, 4) could work in all number-of-projs by
# number-of-channel-types conditions...
fig.set_size_inches((6, 4))
tight_layout(fig=fig)
_constrain_fig_resolution(
fig, max_width=MAX_IMG_WIDTH, max_res=MAX_IMG_RES
)
self._add_figure(
fig=fig,
title=title,
caption=None,
image_format=image_format,
tags=tags,
section=section,
replace=replace,
own_figure=True,
)
def _add_forward(
self, *, forward, subject, subjects_dir, title, image_format,
section, tags, replace
):
"""Render forward solution."""
if not isinstance(forward, Forward):
forward = read_forward_solution(forward)
subject = self.subject if subject is None else subject
subjects_dir = (self.subjects_dir if subjects_dir is None
else subjects_dir)
# XXX Todo
# Render sensitivity maps
if subject is not None:
sensitivity_maps_html = ''
else:
sensitivity_maps_html = ''
dom_id = self._get_dom_id()
html = _html_forward_sol_element(
id=dom_id,
repr=forward._repr_html_(),
sensitivity_maps=sensitivity_maps_html,
title=title,
tags=tags
)
self._add_or_replace(
title=title,
section=section,
dom_id=dom_id,
tags=tags,
html=html,
replace=replace,
)
def _add_inverse_operator(
self, *, inverse_operator, subject,
subjects_dir, trans, title, image_format,
section, tags, replace,
):
"""Render inverse operator."""
if not isinstance(inverse_operator, InverseOperator):
inverse_operator = read_inverse_operator(inverse_operator)
if trans is not None and not isinstance(trans, Transform):
trans = read_trans(trans)
subject = self.subject if subject is None else subject
subjects_dir = (self.subjects_dir if subjects_dir is None
else subjects_dir)
# XXX Todo Render source space?
# if subject is not None and trans is not None:
# src = inverse_operator['src']
# fig = plot_alignment(
# subject=subject,
# subjects_dir=subjects_dir,
# trans=trans,
# surfaces='white',
# src=src
# )
# set_3d_view(fig, focalpoint=(0., 0., 0.06))
# img = _fig_to_img(fig=fig, image_format=image_format)
# dom_id = self._get_dom_id()
# src_img_html = _html_image_element(
# img=img,
# div_klass='inverse-operator source-space',
# img_klass='inverse-operator source-space',
# title='Source space', caption=None, show=True,
# image_format=image_format, id=dom_id,
# tags=tags
# )
# else:
src_img_html = ''
dom_id = self._get_dom_id()
html = _html_inverse_operator_element(
id=dom_id,
repr=inverse_operator._repr_html_(),
source_space=src_img_html,
title=title,
tags=tags,
)
self._add_or_replace(
title=title,
section=section,
dom_id=dom_id,
tags=tags,
html=html,
replace=replace,
)
def _add_evoked_joint(
self, *, evoked, ch_types, image_format, section, tags, topomap_kwargs,
replace
):
for ch_type in ch_types:
if not _check_ch_locs(info=evoked.info, ch_type=ch_type):
ch_type_name = _handle_default("titles")[ch_type]
warn(f'No {ch_type_name} channel locations found, cannot '
f'create joint plot')
continue
with use_log_level(_verbose_safe_false(level='error')):
fig = evoked.copy().pick(ch_type, verbose=False).plot_joint(
ts_args=dict(gfp=True),
title=None,
show=False,
topomap_args=topomap_kwargs,
)
_constrain_fig_resolution(
fig, max_width=MAX_IMG_WIDTH, max_res=MAX_IMG_RES
)
title = f'Time course ({_handle_default("titles")[ch_type]})'
self._add_figure(
fig=fig,
title=title,
caption=None,
image_format=image_format,
tags=tags,
section=section,
replace=replace,
own_figure=True,
)
def _plot_one_evoked_topomap_timepoint(
self, *, evoked, time, ch_types, vmin, vmax, topomap_kwargs
):
import matplotlib.pyplot as plt
fig, ax = plt.subplots(
1, len(ch_types) * 2,
gridspec_kw={
'width_ratios': [8, 0.5] * len(ch_types)
},
figsize=(2.5 * len(ch_types), 2)
)
_constrain_fig_resolution(
fig, max_width=MAX_IMG_WIDTH, max_res=MAX_IMG_RES
)
ch_type_ax_map = dict(
zip(ch_types,
[(ax[i], ax[i + 1]) for i in
range(0, 2 * len(ch_types) - 1, 2)])
)
for ch_type in ch_types:
evoked.plot_topomap(
times=[time], ch_type=ch_type,
vlim=(vmin[ch_type], vmax[ch_type]),
axes=ch_type_ax_map[ch_type], show=False,
**topomap_kwargs
)
ch_type_ax_map[ch_type][0].set_title(ch_type)
tight_layout(fig=fig)
with BytesIO() as buff:
fig.savefig(
buff,
format='png',
pad_inches=0
)
plt.close(fig)
buff.seek(0)
fig_array = plt.imread(buff, format='png')
return fig_array
def _add_evoked_topomap_slider(
self, *, evoked, ch_types, n_time_points, image_format, section, tags,
topomap_kwargs, n_jobs, replace
):
if n_time_points is None:
n_time_points = min(len(evoked.times), 21)
elif n_time_points > len(evoked.times):
raise ValueError(
f'The requested number of time points ({n_time_points}) '
f'exceeds the time points in the provided Evoked object '
f'({len(evoked.times)})'
)
if n_time_points == 1: # only a single time point, pick the first one
times = [evoked.times[0]]
else:
times = np.linspace(
start=evoked.tmin,
stop=evoked.tmax,
num=n_time_points
)
t_zero_idx = np.abs(times).argmin() # index closest to zero
# global min and max values for each channel type
scalings = dict(eeg=1e6, grad=1e13, mag=1e15)
vmax = dict()
vmin = dict()
for ch_type in ch_types:
if not _check_ch_locs(info=evoked.info, ch_type=ch_type):
ch_type_name = _handle_default("titles")[ch_type]
warn(f'No {ch_type_name} channel locations found, cannot '
f'create topography plots')
continue
vmax[ch_type] = (np.abs(evoked.copy()
.pick(ch_type, verbose=False)
.data)
.max()) * scalings[ch_type]
if ch_type == 'grad':
vmin[ch_type] = 0
else:
vmin[ch_type] = -vmax[ch_type]
if not (vmin and vmax): # we only had EEG data and no digpoints
return # No need to warn here, we did that above
else:
topomap_kwargs = self._validate_topomap_kwargs(topomap_kwargs)
parallel, p_fun, n_jobs = parallel_func(
func=self._plot_one_evoked_topomap_timepoint,
n_jobs=n_jobs, max_jobs=len(times),
)
with use_log_level(_verbose_safe_false(level='error')):
fig_arrays = parallel(
p_fun(
evoked=evoked, time=time, ch_types=ch_types,
vmin=vmin, vmax=vmax, topomap_kwargs=topomap_kwargs
) for time in times
)
captions = [f'Time point: {round(t, 3):0.3f} s' for t in times]
self._add_slider(
figs=fig_arrays,
imgs=None,
captions=captions,
title='Topographies',
image_format=image_format,
start_idx=t_zero_idx,
section=section,
tags=tags,
replace=replace,
)
def _add_evoked_gfp(
self, *, evoked, ch_types, image_format, section, tags, replace
):
# Make legend labels shorter by removing the multiplicative factors
pattern = r'\d\.\d* × '
label = evoked.comment
if label is None:
label = ''
for match in re.findall(pattern=pattern, string=label):
label = label.replace(match, '')
import matplotlib.pyplot as plt
fig, ax = plt.subplots(len(ch_types), 1, sharex=True)
if len(ch_types) == 1:
ax = [ax]
for idx, ch_type in enumerate(ch_types):
with use_log_level(_verbose_safe_false(level='error')):
plot_compare_evokeds(
evokeds={
label: evoked.copy().pick(ch_type, verbose=False)
},
ci=None, truncate_xaxis=False,
truncate_yaxis=False, legend=False,
axes=ax[idx], show=False
)
ax[idx].set_title(ch_type)
# Hide x axis label for all but the last subplot
if idx < len(ch_types) - 1:
ax[idx].set_xlabel(None)
tight_layout(fig=fig)
_constrain_fig_resolution(
fig, max_width=MAX_IMG_WIDTH, max_res=MAX_IMG_RES
)
title = 'Global field power'
self._add_figure(
fig=fig,
title=title,
caption=None,
image_format=image_format,
section=section,
tags=tags,
replace=replace,
own_figure=True,
)
def _add_evoked_whitened(
self, *, evoked, noise_cov, image_format, section, tags, replace
):
"""Render whitened evoked."""
fig = evoked.plot_white(
noise_cov=noise_cov,
show=False
)
tight_layout(fig=fig)
_constrain_fig_resolution(
fig, max_width=MAX_IMG_WIDTH, max_res=MAX_IMG_RES
)
title = 'Whitened'
self._add_figure(
fig=fig,
title=title,
caption=None,
image_format=image_format,
tags=tags,
section=section,
replace=replace,
own_figure=True,
)
def _add_evoked(
self, *, evoked, noise_cov, add_projs, n_time_points,
image_format, section, tags, topomap_kwargs, n_jobs, replace
):
ch_types = _get_ch_types(evoked)
self._add_evoked_joint(
evoked=evoked, ch_types=ch_types,
image_format=image_format, section=section, tags=tags,
topomap_kwargs=topomap_kwargs, replace=replace,
)
self._add_evoked_topomap_slider(
evoked=evoked, ch_types=ch_types,
n_time_points=n_time_points,
image_format=image_format,
section=section, tags=tags, topomap_kwargs=topomap_kwargs,
n_jobs=n_jobs, replace=replace,
)
self._add_evoked_gfp(
evoked=evoked, ch_types=ch_types, image_format=image_format,
section=section, tags=tags, replace=replace,
)
if noise_cov is not None:
self._add_evoked_whitened(
evoked=evoked,
noise_cov=noise_cov,
image_format=image_format,
section=section,
tags=tags,
replace=replace,
)
# SSP projectors
if add_projs:
self._add_projs(
info=evoked,
projs=None,
title='Projectors',
image_format=image_format,
section=section,
tags=tags,
topomap_kwargs=topomap_kwargs,
replace=replace,
)
logger.debug('Evoked: done')
def _add_events(
self, *, events, event_id, sfreq, first_samp, title, section,
image_format, tags, replace
):
"""Render events."""
if not isinstance(events, np.ndarray):
events = read_events(filename=events)
fig = plot_events(
events=events,
event_id=event_id,
sfreq=sfreq,
first_samp=first_samp,
show=False
)
_constrain_fig_resolution(
fig, max_width=MAX_IMG_WIDTH, max_res=MAX_IMG_RES
)
self._add_figure(
fig=fig,
title=title,
caption=None,
image_format=image_format,
tags=tags,
section=section,
replace=replace,
own_figure=True,
)
def _add_epochs_psd(
self, *, epochs, psd, image_format, tags, section, replace
):
epoch_duration = epochs.tmax - epochs.tmin
if psd is True: # Entire time range -> all epochs
epochs_for_psd = epochs # Avoid creating a copy
else: # Only a subset of epochs
signal_duration = len(epochs) * epoch_duration
n_epochs_required = int(
np.ceil(psd / epoch_duration)
)
if n_epochs_required > len(epochs):
raise ValueError(
f'You requested to calculate PSD on a duration of '
f'{psd:.3f} sec, but all your epochs '
f'are only {signal_duration:.1f} sec long'
)
epochs_idx = np.round(
np.linspace(
start=0,
stop=len(epochs) - 1,
num=n_epochs_required
)
).astype(int)
# Edge case: there might be duplicate indices due to rounding?
epochs_idx_unique = np.unique(epochs_idx)
if len(epochs_idx_unique) != len(epochs_idx):
duration = round(
len(epochs_idx_unique) * epoch_duration, 1
)
warn(f'Using {len(epochs_idx_unique)} epochs, only '
f'covering {duration:.1f} sec of data')
del duration
epochs_for_psd = epochs[epochs_idx_unique]
if epochs.info['lowpass'] is None:
fmax = np.inf
else:
fmax = epochs.info['lowpass'] + 15
# Must not exceed half the sampling frequency
if fmax > 0.5 * epochs.info['sfreq']:
fmax = np.inf
fig = epochs_for_psd.plot_psd(fmax=fmax, show=False)
_constrain_fig_resolution(
fig, max_width=MAX_IMG_WIDTH, max_res=MAX_IMG_RES
)
duration = round(epoch_duration * len(epochs_for_psd), 1)
caption = (
f'PSD calculated from {len(epochs_for_psd)} epochs '
f'({duration:.1f} sec).'
)
self._add_figure(
fig=fig,
image_format=image_format,
title='PSD',
caption=caption,
tags=tags,
section=section,
replace=replace,
own_figure=True,
)
def _add_html_element(
self, *, html, title, tags, section, replace, div_klass
):
dom_id = self._get_dom_id()
html = _html_element(
div_klass=div_klass,
id=dom_id,
tags=tags,
title=title,
html=html
)
self._add_or_replace(
title=title,
section=section,
dom_id=dom_id,
tags=tags,
html=html,
replace=replace
)
def _add_epochs_metadata(self, *, epochs, section, tags, replace):
metadata = epochs.metadata.copy()
# Ensure we have a named index
if not metadata.index.name:
metadata.index.name = 'Epoch #'
assert metadata.index.is_unique
index_name = metadata.index.name # store for later use
metadata = metadata.reset_index() # We want "proper" columns only
html = metadata.to_html(
border=0,
index=False,
show_dimensions=True,
justify='unset',
float_format=lambda x: f'{round(x, 3):.3f}',
classes='table table-hover table-striped '
'table-sm table-responsive small'
)
del metadata
# Massage the table such that it woks nicely with bootstrap-table
htmls = html.split('\n')
header_pattern = '<th>(.*)</th>'
for idx, html in enumerate(htmls):
if '<table' in html:
htmls[idx] = html.replace(
'<table',
'<table '
'id="mytable" '
'data-toggle="table" '
f'data-unique-id="{index_name}" '
'data-search="true" ' # search / filter
'data-search-highlight="true" '
'data-show-columns="true" ' # show/hide columns
'data-show-toggle="true" ' # allow card view
'data-show-columns-toggle-all="true" '
'data-click-to-select="true" '
'data-show-copy-rows="true" '
'data-show-export="true" ' # export to a file
'data-export-types="[csv]" '
"data-export-options='{\"fileName\": \"metadata\"}' "
'data-icon-size="sm" '
'data-height="400"'
)
continue
elif '<tr' in html:
# Add checkbox for row selection
htmls[idx] = (
f'{html}\n'
f'<th data-field="state" data-checkbox="true"></th>'
)
continue
col_headers = re.findall(pattern=header_pattern, string=html)
if col_headers:
# Make columns sortable
assert len(col_headers) == 1
col_header = col_headers[0]
htmls[idx] = html.replace(
'<th>',
f'<th data-field="{col_header.lower()}" '
f'data-sortable="true">'
)
html = '\n'.join(htmls)
self._add_html_element(
div_klass='epochs',
tags=tags,
title='Metadata',
html=html,
section=section,
replace=replace,
)
def _add_epochs(
self, *, epochs, psd, add_projs, topomap_kwargs, drop_log_ignore,
image_format, section, tags, replace
):
"""Render epochs."""
if isinstance(epochs, BaseEpochs):
fname = epochs.filename
else:
fname = epochs
epochs = read_epochs(fname, preload=False)
# Summary table
self._add_html_repr(
inst=epochs,
title='Info',
tags=tags,
section=section,
replace=replace,
div_klass='epochs',
)
# Metadata table
if epochs.metadata is not None:
self._add_epochs_metadata(
epochs=epochs, tags=tags, section=section, replace=replace
)
# ERP/ERF image(s)
ch_types = _get_ch_types(epochs)
epochs.load_data()
for ch_type in ch_types:
with use_log_level(_verbose_safe_false(level='error')):
figs = epochs.copy().pick(ch_type, verbose=False).plot_image(
show=False
)
assert len(figs) == 1
fig = figs[0]
_constrain_fig_resolution(
fig, max_width=MAX_IMG_WIDTH, max_res=MAX_IMG_RES
)
if ch_type in ('mag', 'grad'):
title_start = 'ERF image'
else:
assert 'eeg' in ch_type
title_start = 'ERP image'
title = (f'{title_start} '
f'({_handle_default("titles")[ch_type]})')
self._add_figure(
fig=fig,
title=title,
caption=None,
image_format=image_format,
tags=tags,
section=section,
replace=replace,
own_figure=True,
)
# Drop log
if epochs._bad_dropped:
title = 'Drop log'
if epochs.drop_log_stats(ignore=drop_log_ignore) == 0: # No drops
self._add_html_element(
html='No epochs exceeded the rejection thresholds. '
'Nothing was dropped.',
div_klass='epochs',
title=title,
tags=tags,
section=section,
replace=replace,
)
else:
fig = epochs.plot_drop_log(
subject=self.subject, ignore=drop_log_ignore, show=False
)
tight_layout(fig=fig)
_constrain_fig_resolution(
fig, max_width=MAX_IMG_WIDTH, max_res=MAX_IMG_RES
)
self._add_figure(
fig=fig,
image_format=image_format,
title=title,
caption=None,
tags=tags,
section=section,
replace=replace,
own_figure=True,
)
if psd:
self._add_epochs_psd(
epochs=epochs, psd=psd, image_format=image_format, tags=tags,
section=section, replace=replace
)
if add_projs:
self._add_projs(
info=epochs,
projs=None,
title='Projections',
image_format=image_format,
tags=tags,
topomap_kwargs=topomap_kwargs,
section=section,
replace=replace,
)
def _add_cov(self, *, cov, info, image_format, section, tags, replace):
"""Render covariance matrix & SVD."""
if not isinstance(cov, Covariance):
cov = read_cov(cov)
if not isinstance(info, Info):
info = read_info(info)
fig_cov, fig_svd = plot_cov(cov=cov, info=info, show=False,
show_svd=True)
figs = [fig_cov, fig_svd]
titles = (
'Covariance matrix',
'Singular values'
)
for fig, title in zip(figs, titles):
_constrain_fig_resolution(
fig, max_width=MAX_IMG_WIDTH, max_res=MAX_IMG_RES
)
self._add_figure(
fig=fig,
title=title,
caption=None,
image_format=image_format,
tags=tags,
section=section,
replace=replace,
own_figure=True,
)
def _add_trans(
self, *, trans, info, subject, subjects_dir, alpha, title, section,
tags, replace
):
"""Render trans (only PNG)."""
if not isinstance(trans, Transform):
trans = read_trans(trans)
if not isinstance(info, Info):
info = read_info(info)
kwargs = dict(info=info, trans=trans, subject=subject,
subjects_dir=subjects_dir, dig=True,
meg=['helmet', 'sensors'], show_axes=True,
coord_frame='mri')
img, caption = _iterate_trans_views(
function=plot_alignment, alpha=alpha, **kwargs
)
self._add_image(
img=img,
title=title,
section=section,
caption=caption,
image_format='png',
tags=tags,
replace=replace,
)
def _add_stc(
self, *, stc, title, subject, subjects_dir, n_time_points,
image_format, section, tags, stc_plot_kwargs, replace,
):
"""Render STC."""
if isinstance(stc, SourceEstimate):
if subject is None:
subject = self.subject # supplied during Report init
if not subject:
subject = stc.subject # supplied when loading STC
if not subject:
raise ValueError(
'Please specify the subject name, as it cannot '
'be found in stc.subject. You may wish to pass '
'the "subject" parameter to read_source_estimate()'
)
else:
subject = subject
else:
fname = stc
stc = read_source_estimate(fname=fname, subject=subject)
subjects_dir = (self.subjects_dir if subjects_dir is None
else subjects_dir)
if n_time_points is None:
n_time_points = min(len(stc.times), 51)
elif n_time_points > len(stc.times):
raise ValueError(
f'The requested number of time points ({n_time_points}) '
f'exceeds the time points in the provided STC object '
f'({len(stc.times)})'
)
if n_time_points == 1: # only a single time point, pick the first one
times = [stc.times[0]]
else:
times = np.linspace(
start=stc.times[0],
stop=stc.times[-1],
num=n_time_points
)
t_zero_idx = np.abs(times).argmin() # index of time closest to zero
# Plot using 3d backend if available, and use Matplotlib
# otherwise.
import matplotlib.pyplot as plt
stc_plot_kwargs = _handle_default(
'report_stc_plot_kwargs', stc_plot_kwargs
)
stc_plot_kwargs.update(subject=subject, subjects_dir=subjects_dir)
if get_3d_backend() is not None:
brain = stc.plot(**stc_plot_kwargs)
brain._renderer.plotter.subplot(0, 0)
backend_is_3d = True
else:
backend_is_3d = False
figs = []
for t in times:
with warnings.catch_warnings():
warnings.filterwarnings(
action='ignore',
message='More than 20 figures have been opened',
category=RuntimeWarning)
if backend_is_3d:
brain.set_time(t)
fig, ax = plt.subplots(figsize=(4.5, 4.5))
ax.imshow(brain.screenshot(time_viewer=True, mode='rgb'))
ax.axis('off')
tight_layout(fig=fig)
_constrain_fig_resolution(
fig,
max_width=stc_plot_kwargs['size'][0],
max_res=MAX_IMG_RES
)
figs.append(fig)
plt.close(fig)
else:
fig_lh = plt.figure()
fig_rh = plt.figure()
brain_lh = stc.plot(
views='lat', hemi='lh',
initial_time=t,
backend='matplotlib',
subject=subject,
subjects_dir=subjects_dir,
figure=fig_lh
)
brain_rh = stc.plot(
views='lat', hemi='rh',
initial_time=t,
subject=subject,
subjects_dir=subjects_dir,
backend='matplotlib',
figure=fig_rh
)
tight_layout(fig=fig_lh) # TODO is this necessary?
tight_layout(fig=fig_rh) # TODO is this necessary?
_constrain_fig_resolution(
fig_lh,
max_width=stc_plot_kwargs['size'][0],
max_res=MAX_IMG_RES
)
_constrain_fig_resolution(
fig_rh,
max_width=stc_plot_kwargs['size'][0],
max_res=MAX_IMG_RES
)
figs.append(brain_lh)
figs.append(brain_rh)
plt.close(fig_lh)
plt.close(fig_rh)
if backend_is_3d:
brain.close()
else:
brain_lh.close()
brain_rh.close()
captions = [f'Time point: {round(t, 3):0.3f} s' for t in times]
self._add_slider(
figs=figs,
imgs=None,
captions=captions,
title=title,
image_format=image_format,
start_idx=t_zero_idx,
section=section,
tags=tags,
replace=replace,
)
def _add_bem(
self, *, subject, subjects_dir, decim, n_jobs, width=512,
image_format, title, tags, replace
):
"""Render mri+bem (only PNG)."""
if subjects_dir is None:
subjects_dir = self.subjects_dir
subjects_dir = get_subjects_dir(subjects_dir, raise_error=True)
# Get the MRI filename
mri_fname = op.join(subjects_dir, subject, 'mri', 'T1.mgz')
if not op.isfile(mri_fname):
warn(f'MRI file "{mri_fname}" does not exist')
# Get the BEM surface filenames
bem_path = op.join(subjects_dir, subject, 'bem')
surfaces = _get_bem_plotting_surfaces(bem_path)
if not surfaces:
warn('No BEM surfaces found, rendering empty MRI')
htmls = dict()
for orientation in _BEM_VIEWS:
htmls[orientation] = self._render_one_bem_axis(
mri_fname=mri_fname, surfaces=surfaces,
orientation=orientation, decim=decim, n_jobs=n_jobs,
width=width, image_format=image_format,
tags=tags
)
# Special handling to deal with our tests, where we monkey-patch
# _BEM_VIEWS to save time
html_slider_axial = htmls['axial'] if 'axial' in htmls else ''
html_slider_sagittal = htmls['sagittal'] if 'sagittal' in htmls else ''
html_slider_coronal = htmls['coronal'] if 'coronal' in htmls else ''
dom_id = self._get_dom_id()
html = _html_bem_element(
id=dom_id,
div_klass='bem',
html_slider_axial=html_slider_axial,
html_slider_sagittal=html_slider_sagittal,
html_slider_coronal=html_slider_coronal,
tags=tags,
title=title,
)
self._add_or_replace(
title=title,
section=None, # no nesting
dom_id=dom_id,
tags=tags,
html=html,
replace=replace,
)
def _clean_tags(tags):
if isinstance(tags, str):
tags = (tags,)
# Replace any whitespace characters with dashes
tags_cleaned = tuple(re.sub(r'[\s*]', '-', tag) for tag in tags)
return tags_cleaned
def _recursive_search(path, pattern):
"""Auxiliary function for recursive_search of the directory."""
filtered_files = list()
for dirpath, dirnames, files in os.walk(path):
for f in fnmatch.filter(files, pattern):
# only the following file types are supported
# this ensures equitable distribution of jobs
if f.endswith(VALID_EXTENSIONS):
filtered_files.append(op.realpath(op.join(dirpath, f)))
return filtered_files
###############################################################################
# Scraper for sphinx-gallery
_SCRAPER_TEXT = '''
.. only:: builder_html
.. container:: row
.. rubric:: The `HTML document <{0}>`__ written by :meth:`mne.Report.save`:
.. raw:: html
<iframe class="sg_report" sandbox="allow-scripts" src="{0}"></iframe>
''' # noqa: E501
# Adapted from fa-file-code
_FA_FILE_CODE = '<svg class="sg_report" role="img" xmlns="http://www.w3.org/2000/svg" viewBox="0 0 384 512"><path fill="#dec" d="M149.9 349.1l-.2-.2-32.8-28.9 32.8-28.9c3.6-3.2 4-8.8.8-12.4l-.2-.2-17.4-18.6c-3.4-3.6-9-3.7-12.4-.4l-57.7 54.1c-3.7 3.5-3.7 9.4 0 12.8l57.7 54.1c1.6 1.5 3.8 2.4 6 2.4 2.4 0 4.8-1 6.4-2.8l17.4-18.6c3.3-3.5 3.1-9.1-.4-12.4zm220-251.2L286 14C277 5 264.8-.1 252.1-.1H48C21.5 0 0 21.5 0 48v416c0 26.5 21.5 48 48 48h288c26.5 0 48-21.5 48-48V131.9c0-12.7-5.1-25-14.1-34zM256 51.9l76.1 76.1H256zM336 464H48V48h160v104c0 13.3 10.7 24 24 24h104zM209.6 214c-4.7-1.4-9.5 1.3-10.9 6L144 408.1c-1.4 4.7 1.3 9.6 6 10.9l24.4 7.1c4.7 1.4 9.6-1.4 10.9-6L240 231.9c1.4-4.7-1.3-9.6-6-10.9zm24.5 76.9l.2.2 32.8 28.9-32.8 28.9c-3.6 3.2-4 8.8-.8 12.4l.2.2 17.4 18.6c3.3 3.5 8.9 3.7 12.4.4l57.7-54.1c3.7-3.5 3.7-9.4 0-12.8l-57.7-54.1c-3.5-3.3-9.1-3.2-12.4.4l-17.4 18.6c-3.3 3.5-3.1 9.1.4 12.4z" class=""></path></svg>' # noqa: E501
class _ReportScraper(object):
"""Scrape Report outputs.
Only works properly if conf.py is configured properly and the file
is written to the same directory as the example script.
"""
def __init__(self):
self.app = None
self.files = dict()
def __repr__(self):
return '<ReportScraper>'
def __call__(self, block, block_vars, gallery_conf):
for report in block_vars['example_globals'].values():
if (isinstance(report, Report) and
report.fname is not None and
report.fname.endswith('.html') and
gallery_conf['builder_name'] == 'html'):
# Thumbnail
image_path_iterator = block_vars['image_path_iterator']
img_fname = next(image_path_iterator)
img_fname = img_fname.replace('.png', '.svg')
with open(img_fname, 'w') as fid:
fid.write(_FA_FILE_CODE)
# copy HTML file
html_fname = op.basename(report.fname)
out_dir = op.join(
self.app.builder.outdir,
op.relpath(op.dirname(block_vars['target_file']),
self.app.builder.srcdir))
os.makedirs(out_dir, exist_ok=True)
out_fname = op.join(out_dir, html_fname)
assert op.isfile(report.fname)
self.files[report.fname] = out_fname
# embed links/iframe
data = _SCRAPER_TEXT.format(html_fname)
return data
return ''
def copyfiles(self, *args, **kwargs):
for key, value in self.files.items():
copyfile(key, value)
|