<!DOCTYPE html>
<html>
<head>
<link rel=stylesheet href=style.css />
<link rel=icon href=CZI-new-logo.png />
</head>
<body>
<main>
<div class="goto-index"><a href="index.html">Table of contents</a></div>
<h1>Computational methods</h1>
<h2 id=Reads>Long reads</h2>
<p>
The Shasta assembler can be used to assemble DNA sequence from long reads.
It is optimized especially for
<a href="https://nanoporetech.com">Oxford Nanopore</a> reads,
but can also be used to assemble other types of long reads
such as those generated by <a href="https://www.pacb.com/">Pacific Biosciences</a>
sequencing platforms.
<p>
The characteristics of Oxford Nanopore reads are evolving rapidly,
and therefore cannot be described precisely.
Here is a summary of read metrics for
the best available reads as of September 2020:
<ul>
<li>Typical length around 50 Kb, with just a few percent of coverage
in reads below 10 Kb and 10% or more of coverage in reads longer than 100 Kb.
There are also <i>Ultra-Long (UL)</i> protocols that provide longer reads, with
typical lengths around 100 Kb.
<li>Sequence identity around 97%, or equivalently an error rate around 3%.
This relatively high error rate has been improving significantly over time.
<li>The dominant error mode consists of errors in the length of homopolymer runs
of all lengths. Errors in short homopolymer runs (lengths 1-5)
are particularly deleterious due to their high frequency.
</ul>
<h2 id=Challenges>Computational challenges</h2>
<p>
Due to their length, Oxford Nanopore reads have
unique value for <i>de novo</i>
assembly. However, a successful approach needs
to deal with the high error rate.
<p>Traditional approaches to <i>de novo</i> assembly typically
rely on selecting a k-mer length that satisfies the following:
<ul>
<li>K-mers of the selected length are reasonably unique
in the target genome. For human assemblies, this typically means
k ⪆ 30.
<li>Most k-mers of the selected length are error-free in the
target reads. This means that k must be significantly less
than the inverse of the error rate.
</ul>
<p>
For many sequencing technologies, the above conditions
are satisfied for k around 30.
But for our target reads, with an error rate around 3%,
most 30-mers contain errors, and therefore
assembly algorithms based on such k-mers become infeasible.
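<p>
This can be checked with simple arithmetic. The short Python sketch below
(illustrative only, not part of Shasta) computes the expected fraction of
error-free k-mers under a uniform per-base error rate:

```python
# Expected fraction of k-mers containing no errors,
# assuming independent errors at a uniform per-base rate.
def error_free_fraction(k, error_rate):
    return (1 - error_rate) ** k

# At a 3% error rate, only about 40% of 30-mers are error-free,
# so most 30-mers contain at least one error.
print(round(error_free_fraction(30, 0.03), 2))  # 0.4
```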
<p>
A possible approach consists of adding a preliminary error correction
step in which reads are aligned to each other and corrected based on consensus
and then presenting to the assembler the corrected reads,
which hopefully have a much lower error rate.
But such approaches tend to be slow, and in addition,
any errors made in the error correction step
are permanent and cannot be fixed during the assembly process.
<h2 id=ReadRepresentation>Read representation</h2>
<p>
The computational techniques used in the Shasta assembler
rely on representing the sequence of input reads in a way
that reduces the effect of errors on
the <i>de novo</i> assembly process:
<ul>
<li>
The sequence of input reads is represented using
<a href='https://en.wikipedia.org/wiki/Run-length_encoding'>run-length encoding</a>.
<li>
In many assembly steps, the sequence of input reads is described
by using occurrences of a pre-determined subset of short k-mers
(k ≈ 10 in run-length encoding)
called <i>markers</i>.
</ul>
The next two sections expand on these two methods of
representing the input reads.
<h2 id=RunLengthEncoding>Run-length encoding</h2>
<p>
With <a href='https://en.wikipedia.org/wiki/Run-length_encoding'>run-length encoding</a>,
the sequence of each input read is represented as a sequence of bases,
each with a repeat count that indicates how many times the base is consecutively repeated.
For example, the following read
<pre>
CGATTTAAGTTA
</pre>
is represented as follows using run-length encoding:
<pre>
CGATAGTA
11132121
</pre>
<p>
Using run-length encoding makes the assembly process
less sensitive to errors in the length of homopolymer runs,
which are the most common type of errors in Oxford Nanopore reads.
For example, consider these two reads:
<pre>
CGATTTAAGTTA
CGATTAAGGGTTA
</pre>
Using their raw representation above, these reads can be aligned like this:
<pre>
CGATTTAAG--TTA
CGATT-AAGGGTTA
</pre>
Aligning the second read to the first required a deletion and two insertions.
But in run-length encoding, the two reads become:
<pre>
CGATAGTA
11132121
CGATAGTA
11122321
</pre>
The sequence portions are now identical and can be aligned trivially
and exactly, without any insertions or deletions:
<pre>
CGATAGTA
CGATAGTA
</pre>
The differences between the two reads only appear in the repeat counts:
<pre>
11132121
11122321
* *
</pre>
<p>
The Shasta assembler uses one byte to represent repeat counts,
and as a result, it only represents repeat counts
between 1 and 255. If a read contains more than 255
consecutive identical bases, it is discarded on input.
Such reads are extremely rare, and the occurrence of such a large
number of repeated bases is probably a symptom
that something is wrong with the read anyway.
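<p>
The encoding, including the one-byte limit on repeat counts, can be sketched
in a few lines of Python (for illustration only; Shasta itself is implemented in C++):

```python
def run_length_encode(read):
    """Encode a base sequence as (bases, repeat_counts).
    Returns None if any homopolymer run exceeds 255 bases,
    mirroring Shasta's one-byte repeat counts."""
    bases, counts = [], []
    for base in read:
        if bases and bases[-1] == base:
            counts[-1] += 1
            if counts[-1] > 255:
                return None  # such a read is discarded on input
        else:
            bases.append(base)
            counts.append(1)
    return "".join(bases), counts

print(run_length_encode("CGATTTAAGTTA"))
# ('CGATAGTA', [1, 1, 1, 3, 2, 1, 2, 1])
```

Note that the two example reads from the previous section produce the same
base sequence, CGATAGTA, and differ only in their repeat counts.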
<h5>Some properties of base sequences in run-length encoding</h5>
<ul>
<li>In the sequence portion of the run-length encoding,
consecutive bases are always distinct. If they were not, the second one
would be removed from the run-length encoded sequence,
while increasing the repeat count for the first one.
<li>
With ordinary base sequences, the number of distinct k-mers of length k is 4<sup>k</sup>.
But with run-length base sequences, the number of distinct k-mers of length k
is 4×3<sup>k-1</sup>.
This is a consequence of the previous bullet.
<li>The run-length sequence is generally shorter than the raw sequence,
and cannot be longer. For a long random sequence, the number
of bases in the run-length representation is 3/4 of the number of bases
in the raw representation.
</ul>
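<p>
The k-mer count in the second bullet can be verified by brute-force enumeration
(a Python sketch for illustration):

```python
from itertools import product

def distinct_rle_kmers(k):
    """Count k-mers over {A, C, G, T} in which no two
    consecutive bases are equal."""
    return sum(
        1
        for kmer in product("ACGT", repeat=k)
        if all(a != b for a, b in zip(kmer, kmer[1:]))
    )

# 4 choices for the first base, then 3 for each of the
# remaining k-1 bases: 4 * 3**(k-1).
print(distinct_rle_kmers(3))  # 36
```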
<h2 id=Markers>Markers</h2>
<p>
Even with run-length encoding, errors in input reads are still frequent.
To further reduce sensitivity to errors, and also to speed up
some of the computational steps in the assembly process,
the Shasta assembler also uses a read representation
based on <i>markers</i>. Markers are
occurrences in reads of a pre-determined subset of short k-mers.
By default, Shasta uses for this purpose k-mers with k=10 in run-length
encoding, corresponding to an average of approximately 13 bases
in raw read representation.
<p>Just for illustration, consider a description
using markers of length 3 in run-length encoding.
There is a total of 4×3<sup>2</sup> = 36 distinct such markers.
We arbitrarily choose the following fixed subset of the 36, and assign an id
to each of the k-mers in the subset:
<table class="small-table">
<tr><td><code>TGC</code><td>0
<tr><td><code>GCA</code><td>1
<tr><td><code>GAC</code><td>2
<tr><td><code>CGC</code><td>3
</table>
<p>
Consider now the following portion of a read in run-length representation (here, the
repeat counts are irrelevant and so they are omitted):
<pre>
CGACACGTATGCGCACGCTGCGCTCTGCAGC
GAC TGC CGC TGC
CGC TGC GCA
GCA CGC
</pre>
Occurrences of the k-mers defined in the table above are shown
and define the markers in this read. Note that markers
can overlap. Using the marker ids defined in the table above,
we can summarize the sequence of this read portion as follows:
<pre>
2 0 3 1 3 0 3 0 1
</pre>
This is the marker representation of the read portion above.
It just includes the sequence of markers occurring in the read,
not their positions.
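<p>
The marker representation in this example can be reproduced with a short
Python sketch (the function name is ours, for illustration only):

```python
def marker_representation(rle_sequence, marker_ids):
    """Return the ids of the marker k-mers occurring in the sequence.
    Every position is scanned, so overlapping markers are found."""
    k = len(next(iter(marker_ids)))
    return [
        marker_ids[rle_sequence[i:i + k]]
        for i in range(len(rle_sequence) - k + 1)
        if rle_sequence[i:i + k] in marker_ids
    ]

markers = {"TGC": 0, "GCA": 1, "GAC": 2, "CGC": 3}
print(marker_representation("CGACACGTATGCGCACGCTGCGCTCTGCAGC", markers))
# [2, 0, 3, 1, 3, 0, 3, 0, 1]
```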
<p id=MarkerOrdinal>
In Shasta documentation and code,
the zero-based sequence number of a marker in the marker representation
of an oriented read is called the <i>marker ordinal</i>
or simply <i>ordinal</i>.
The first marker in an oriented read has ordinal 0,
the second one has ordinal 1, and so on.
<p>
Note that the marker representation loses information,
as it is not possible to reconstruct the complete initial sequence
from the marker representation. This also means that the
marker representation is insensitive to errors in the sequence
portions that don't belong to any markers.
<p>
The Shasta assembler uses a random choice of the k-mers to be used
as markers. The length of the markers k is controlled by
assembly parameter
<code><a href='CommandLineOptions.html#Kmers.k'>--Kmers.k</a></code>
with a default value of 10.
Each k-mer is randomly chosen to be used as a marker
with probability determined by assembly parameter
<code><a href='CommandLineOptions.html#Kmers.probability'>--Kmers.probability</a></code>
with a default value of 0.1.
There are 4 possibilities for the first base in an RLE k-mer, but
only 3 for each of the remaining k-1 bases, since RLE k-mers do not contain
consecutive identical bases. Thus, the total number of RLE k-mers
is 4×3<sup>k-1</sup>.
With these default values, the total number of distinct
markers is approximately 0.1×4×3<sup>9</sup>≈7900.
<p>
Options are also provided to read from a file the RLE k-mers to
be used as markers. This permits experimentation with marker k-mers
chosen in ways other than random. To date, no marker selection
algorithm has proven more successful than random selection,
but it is quite possible that assembly quality could be improved
by non-random selection of markers.
<p>
The only constraint used in selecting k-mers to be used as markers
is that if a k-mer is a marker, its reverse
complement should also be a marker. This makes it easy to construct
the marker representation of the reverse complement
of a read from the
marker representation of the original read.
It also ensures strand symmetry in some of the computational steps.
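<p>
One possible way to implement such a selection is sketched below in Python
(the names and the seeded random generator are ours, for illustration;
this is not Shasta's actual implementation):

```python
import random
from itertools import product

def reverse_complement(kmer):
    """Reverse complement of a k-mer; note that the RLE property
    (no two consecutive equal bases) is preserved."""
    return kmer.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def choose_markers(k, probability, seed=0):
    """Randomly select RLE k-mers to be used as markers, so that a
    k-mer and its reverse complement are always selected, or
    rejected, together."""
    rng = random.Random(seed)
    markers = set()
    for bases in product("ACGT", repeat=k):
        kmer = "".join(bases)
        if any(a == b for a, b in zip(kmer, kmer[1:])):
            continue  # not a valid RLE k-mer
        rc = reverse_complement(kmer)
        if kmer > rc:
            continue  # decide once per {k-mer, reverse complement} pair
        if rng.random() < probability:
            markers.add(kmer)
            markers.add(rc)
    return markers
```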
<p>
Below is the run-length representation of a portion of a read
and its markers, as displayed by the Shasta http server.
<p>
<img class="fit-in-main" src=Markers.png>
<h2 id=MarkerAlignments>Marker alignments</h2>
<h3 id=MarkerRepresentation>The marker representation is a sequence</h3>
The marker representation of a read is a sequence
in an alphabet consisting of the marker ids.
This sequence is much shorter than the original sequence of the read
but uses a much larger alphabet.
For example,
with default Shasta assembly parameters, the marker representation
is 10 times shorter than the run-length encoded read sequence,
or about 13 times shorter than the raw read sequence.
Its alphabet has around 8000
symbols, many more than the 4 symbols that the original
read sequence uses.
<p>
Because the marker representation of a read is a sequence,
we can compute an alignment of two reads directly in
marker representation. Computing an alignment in this way
has two important advantages:
<ul>
<li>The shorter sequences and larger alphabet make
the alignment much faster to compute.
<li>The alignment is insensitive to read errors in
the portions that are not covered by any marker.
</ul>
These advantages are illustrated below using alignment
matrices.
<h3 id=AlignmentMatrix>Alignment matrix</h3>
<p>
Consider two sequences on any alphabet,
sequence x with n<sub>x</sub> symbols x<sub>i</sub> (i=0,...,n<sub>x</sub>-1)
and
sequence y with n<sub>y</sub> symbols y<sub>j</sub> (j=0,...,n<sub>y</sub>-1).
The alignment matrix of the two sequences,
A<sub>ij</sub>, is an n<sub>x</sub>×n<sub>y</sub>
matrix with elements
<p>
A<sub>ij</sub> = <span style='font-family:serif'>δ</span><sub>x<sub>i</sub>y<sub>j</sub></sub>
<p>
or, in words, A<sub>ij</sub> is 1 if x<sub>i</sub>=y<sub>j</sub>
and 0 otherwise.
(The Shasta assembler never explicitly constructs alignment matrices,
except for display when requested interactively.
Alignment matrices are used here just for illustration.)
<p>
In portions where the sequences x and y are perfectly aligned,
the alignment matrix consists of a matrix diagonal set to 1.
Most of the remaining elements will be 0,
but many can be 1 just because the same symbol appears
at two unrelated locations in the two sequences.
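<p>
In code, the alignment matrix is just an all-pairs equality comparison
(a Python sketch for illustration; as noted, Shasta never materializes
this matrix during assembly):

```python
def alignment_matrix(x, y):
    """A[i][j] = 1 where x[i] == y[j] and 0 otherwise,
    for sequences over any alphabet."""
    return [[1 if xi == yj else 0 for yj in y] for xi in x]

# A perfectly aligned stretch appears as a diagonal of 1s:
for row in alignment_matrix("GACT", "GACT"):
    print(row)
# [1, 0, 0, 0]
# [0, 1, 0, 0]
# [0, 0, 1, 0]
# [0, 0, 0, 1]
```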
<h3 id=AlignmentMagtrixRaw>Alignment matrix in raw base representation</h3>
<p>
The picture below shows a portion of a typical
alignment matrix of two reads in their representation
as a raw base sequence (not the run-length encoded representation)
and a computed optimal alignment.
This is for illustration only, as the Shasta assembler
never constructs such a matrix, except
when requested interactively.
<p>
<img src=RawAlignment.png>
<p>
Here, elements of the alignment matrix are colored as follows:
<ul>
<li>Red dots are alignment matrix elements that are 1 but that are not part of the
computed optimal alignment.
<li>Green dots are alignment matrix elements that are 1 and that are part of the
computed optimal alignment.
<li>Yellow matrix elements are alignment matrix elements that are
0 but that were computed to be part of the optimal alignment
as mismatching alignment positions.
<li>Black or grey matrix elements are matrix elements that are
0 and that were not computed to be part of the optimal alignment.
Grey is used instead of black every 10 bases to facilitate
counting bases in the figure.
</ul>
<p>
On average, about 25% of the matrix elements are 1, simply
because the alphabet has 4 symbols.
Because of the large fraction of 1 elements and because of the high error rate,
it would be hard to visually locate the
optimal alignment in this picture, if it were not highlighted using colors.
This is emphasized by the following figure, which represents
the same matrix, but with all 1 elements now colored
red and all 0 elements black.
<p>
<img src=RawAlignmentUnmarked.png>
<p>
Note that the alignment matrix contains frequent square
or rectangular blocks of 1 elements. They correspond to
homopolymer runs. A square block indicates that the
two reads agree on the length of that homopolymer run,
and a non-square rectangular block indicates that the
two reads disagree.
If we were using run-length encoding for this picture,
these blocks would all collapse to a single matrix element.
<h3 id=AlignmentMatrixMarkers>Alignment matrix in marker representation</h3>
<p>
For comparison, the picture below shows a portion of the alignment matrix
of two reads, in marker representation, as displayed by the Shasta
http server.
In this alignment matrix, the
<a href='#MarkerOrdinal'>marker ordinal</a> on the first read
is on the horizontal axis (increasing towards the right) and the marker ordinal
for the second read (increasing towards the bottom) is on the vertical axis.
Here, matrix elements that are 1 are displayed in green or red.
The ones in green are the ones that are part of the optimal alignment
computed by the Shasta assembler - see below for more information.
The grey lines are drawn 10 markers apart from each other
and their only purpose is to facilitate reading the picture -
the corresponding matrix elements are 0.
Shasta marker alignments only include matching markers and gaps.
Mismatching markers are never included in the alignment.
<p>
<img id=MarkerAlignment src=MarkerAlignment.png>
<p>
Because of the much larger alphabet, matrix elements
that are 1 but are not part of the optimal alignment are infrequent.
In addition, each alignment matrix element here corresponds
on average to a 13×13 block in the alignment matrix in raw base sequence
shown above. The portion of alignment matrix in marker space
shown here covers about 120 markers or about 1500 bases
in the original representation of the read, compared to
only about
100 bases in the alignment matrix in raw representation shown above.
For these reasons, the marker representation
is orders of magnitude more efficient than the raw base
representation when computing read
alignments.
<h3 id=AlignmentMetrics>Alignment metrics</h3>
<p>
Shasta uses several metrics to characterize alignments in marker space.
As shown in the picture above, a typical alignment consists
of stretches of consecutive aligned markers intermixed with gaps in
the alignment.
Call <i>x<sub>i</sub> (i=0,...,N-1)</i> the aligned marker ordinals
on the first oriented read and
<i>y<sub>j</sub> (j=0,...,N-1)</i> the aligned marker ordinals
on the second oriented read, where <i>N</i>
is the number of aligned markers.
That is, <i>x<sub>i</sub></i> and <i>y<sub>i</sub></i>
are the coordinates of the green points in the alignment matrix as depicted above.
Also, call <i>N<sub>x</sub></i> the number of markers in the first oriented read and
<i>N<sub>y</sub></i> the number of markers in the second oriented read.
<i>N<sub>x</sub></i> and <i>N<sub>y</sub></i> are
the horizontal and vertical sizes, respectively,
of the alignment matrix.
<p>
With this notation, we define the following metrics for a marker alignment.
For each metric, there is a command line option that controls the
minimum or maximum value of the metric for an alignment to be considered
valid and used in the assembly.
These options can be used to control the quality of marker alignments
to be used during assembly.
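<p>
The metrics defined in the list below can be computed from the aligned
marker ordinals with a few lines of Python (a sketch using our own names;
Shasta's actual implementation is in C++):

```python
def alignment_metrics(x, y):
    """Metrics for a marker alignment, given equal-length lists x, y of
    aligned marker ordinals on the two oriented reads."""
    n = len(x)  # N, the number of aligned markers
    # alignedFraction: aligned markers over the ordinal range covered,
    # taking the lesser value of the two oriented reads.
    aligned_fraction = min(n / (x[-1] - x[0] + 1), n / (y[-1] - y[0] + 1))
    # skip: the largest number of markers skipped by an internal gap.
    skip = max(max(x[i + 1] - x[i], y[i + 1] - y[i]) for i in range(n - 1))
    # drift: the largest diagonal shift between successive aligned markers.
    drift = max(
        abs((x[i + 1] - y[i + 1]) - (x[i] - y[i])) for i in range(n - 1)
    )
    return n, aligned_fraction, skip, drift
```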
<ul>
<li id=AlignedMarkerCount><i>N</i>
is the number of aligned markers.
An alignment is only used if <i>N</i>
is at least equal to the value of
<code><a href='CommandLineOptions.html#Align.minAlignedMarkerCount'>--Align.minAlignedMarkerCount</a></code>.
This option can be used to discard alignments that are too
short to be considered reliable.
<li id=AlignedFraction>
For each oriented read in the alignment, the <code>alignedFraction</code>
is defined as the ratio of the number of aligned markers
over the length of the range of ordinals
covered by the alignment, that is
<i>N / (x<sub>N-1</sub> - x<sub>0</sub> + 1)</i>
for the first oriented read and
<i>N / (y<sub>N-1</sub> - y<sub>0</sub> + 1)</i>
for the second oriented read.
The lesser of the <code>alignedFraction</code>
of the two oriented reads is called the <code>alignedFraction</code>
of the alignment.
It is a measure of the accuracy of the alignment.
An alignment is only used if its <code>alignedFraction</code>
is at least equal to the value of
<code><a href='CommandLineOptions.html#Align.minAlignedFraction'>--Align.minAlignedFraction</a></code>.
<li id=Skip>
The maximum number of markers skipped by any alignment gap
(on either of the two oriented reads) is called <code>skip</code>
and is defined as the maximum value of
<i>max(x<sub>i+1</sub> - x<sub>i</sub>, y<sub>i+1</sub> - y<sub>i</sub>)</i>
computed over <i>i=0,...,N-2</i>.
Note that, with this definition, gaps at the beginning and end
of the alignment are not considered
(but see <a href=#Trim><code>trim</code></a> below).
An alignment is only used if <code>skip</code>
is at most equal to the value of
<code><a href='CommandLineOptions.html#Align.maxSkip'>--Align.maxSkip</a></code>.
This option can be used to discard alignments containing long gaps.
<li id=Drift>
Even when an alignment gap occurs, successive aligned markers
tend to be approximately on the same diagonal of the alignment matrix.
This reflects the fact that, even when sequencing errors occur,
the number of bases read is likely to still be approximately correct.
The maximum diagonal shift between successive aligned markers is called
<code>drift</code> and is defined as the maximum value of
the absolute value of
<i>(x<sub>i+1</sub> - y<sub>i+1</sub>) - (x<sub>i</sub> - y<sub>i</sub>)</i>
computed over <i>i=0,...,N-2</i>.
Note that, in this case too, alignment gaps at the beginning and end
of the alignment are not considered.
An alignment is only used if <code>drift</code>
is at most equal to the value of
<code><a href='CommandLineOptions.html#Align.maxDrift'>--Align.maxDrift</a></code>.
This option can be used to discard alignments containing
gaps with large diagonal shift.
<li id=Trim>
An alignment should always begin near the beginning of at least one of the two oriented
reads, but we need to allow for an initial incorrect portion at the beginning of each read.
The minimum over the two oriented reads of the number of markers skipped at
the beginning of the alignment,
<i>min(x<sub>0</sub>, y<sub>0</sub>)</i>
is called the <code>leftTrim</code>.
Similarly, the minimum over the two oriented reads of the number of markers skipped at
the end of the alignment,
<i>min(N<sub>x</sub> - 1 - x<sub>N-1</sub>, N<sub>y</sub> - 1 - y<sub>N-1</sub>)</i>
is called the <code>rightTrim</code>.
The lesser of <code>leftTrim</code>
and <code>rightTrim</code> is called <code>trim</code>.
An alignment is only used if <code>trim</code>
is at most equal to the value of
<code><a href='CommandLineOptions.html#Align.maxTrim'>--Align.maxTrim</a></code>.
</ul>
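<p>
The metrics defined above can be summarized in a short sketch.
This is an illustrative computation, not Shasta code; it assumes the alignment
is available as a list of aligned ordinal pairs <i>(x<sub>i</sub>, y<sub>i</sub>)</i>
together with the marker counts of the two oriented reads, and all names are hypothetical.

```python
# Hedged sketch: compute the marker alignment metrics defined above from a
# list of aligned ordinal pairs. Function and variable names are illustrative.

def alignment_metrics(pairs, nx, ny):
    """pairs: aligned ordinal pairs (x_i, y_i); nx, ny: marker counts of
    the two oriented reads. Returns the metrics as a dictionary."""
    n = len(pairs)
    xs = [x for x, _ in pairs]
    ys = [y for _, y in pairs]

    # alignedFraction: the lesser of the two per-read fractions.
    aligned_fraction = min(n / (xs[-1] - xs[0] + 1),
                           n / (ys[-1] - ys[0] + 1))

    # skip: largest ordinal jump on either read, interior gaps only.
    skip = max(max(xs[i + 1] - xs[i], ys[i + 1] - ys[i])
               for i in range(n - 1))

    # drift: largest change of the diagonal (x - y) between successive pairs.
    drift = max(abs((xs[i + 1] - ys[i + 1]) - (xs[i] - ys[i]))
                for i in range(n - 1))

    # trim: markers skipped at either end, minimized over the two reads.
    left_trim = min(xs[0], ys[0])
    right_trim = min(nx - 1 - xs[-1], ny - 1 - ys[-1])

    return {"N": n, "alignedFraction": aligned_fraction, "skip": skip,
            "drift": drift, "trim": min(left_trim, right_trim)}
```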
<h3 id=OptimalAlignments>Computing optimal alignments in marker representation</h3>
<p>
To compute the optimal alignment highlighted in green in the above figure,
one of the following methods is used,
under control of command line option
<code><a href='CommandLineOptions.html#Align.alignMethod'>--Align.alignMethod</a></code>:
<ul>
<li><code><a href='#AlignMethod0'>--Align.alignMethod 0</a></code>
selects the legacy Shasta algorithm.
Do not use for production assemblies.
<li><code><a href='#AlignMethod1'>--Align.alignMethod 1</a></code>
computes marker alignments using
<a href='https://www.seqan.de/'>SeqAn</a>.
Do not use for production assemblies.
<li><code><a href='#AlignMethod3'>--Align.alignMethod 3</a></code>
(the default choice)
uses a faster two-step
algorithm that also uses SeqAn and achieves better performance
using a combination of downsampling and banded alignments.
</ul>
<p>
The computation of marker alignments for all alignment candidates
found by the
<a href='#FindingOverlappingReads'>LowHash algorithm</a>
is one of the most expensive
phases of an assembly, and typically takes around half of the
total elapsed time.
For this reason, it is recommended that only
the default <code><a href='#AlignMethod3'>--Align.alignMethod 3</a></code>
be used for production assemblies.
<h3 id=AlignMethod0>--Align.alignMethod 0</h3>
<p>
This selects
a simple alignment algorithm
on the marker representations of the two reads to be aligned.
Do not use this method for production assemblies,
as the default alignment method selected by
<code><a href='#AlignMethod3'>--Align.alignMethod 3</a></code>
provides better performance and accuracy.
This algorithm effectively constructs an optimal path in the alignment matrix,
but uses some heuristics to speed up the computation:
<ul>
<li>The maximum number of markers that an alignment
can skip on either read is limited to a maximum,
under control of assembly parameter
<code><a href='CommandLineOptions.html#Align.maxSkip'>--Align.maxSkip</a></code>
(default value 30 markers, corresponding to around 400
bases when all other Shasta parameters are at their default).
This reflects the fact that Oxford Nanopore reads can often
have long stretches in error.
In the alignment matrix shown above, there is a skip of about 20 markers
(2 light grey squares) following the first 10 aligned markers (green dots) on the top left.
<li>The maximum number of markers that an alignment
can skip at the beginning or end of a read is limited to a maximum,
under control of assembly parameter
<code><a href='CommandLineOptions.html#Align.maxTrim'>--Align.maxTrim</a></code>
(default value 30 markers, corresponding to around 400
bases when all other Shasta parameters are at their default).
This reflects the fact that Oxford Nanopore reads often
have an initial or final portion that is not usable.
<li>To avoid alignment artifacts,
marker k-mers that are too frequent in either of the two reads
being aligned are not used in the alignment computation.
For this purpose, the Shasta assembler uses a criterion based
on an absolute number of occurrences of marker k-mers in the two reads,
although a relative criterion (occurrences per Kb) may be more appropriate.
The current absolute frequency threshold is under control of assembly parameter
<code><a href='CommandLineOptions.html#Align.maxMarkerFrequency'>--Align.maxMarkerFrequency</a></code>
(default 10 occurrences).
</ul>
<h3 id=AlignMethod1>--Align.alignMethod 1</h3>
<p>
This causes marker alignments to be computed using
<a href='https://seqan.readthedocs.io/en/seqan-v2.0.2/Tutorial/PairwiseSequenceAlignment.html#overlap-alignments'>
SeqAn overlap alignments</a>.
This guarantees an optimal alignment, but is computationally very expensive,
with cost proportional to the size of the alignment matrix,
therefore <i>O(N<sup>2</sup>)</i> in the number of markers
in the two reads being aligned.
Do not use this method for production assemblies,
as the default alignment method selected by
<code><a href='#AlignMethod3'>--Align.alignMethod 3</a></code> provides much better performance
with minimal degradation in accuracy.
<h3 id=AlignMethod3>--Align.alignMethod 3</h3>
<p>
This is the default alignment method used by Shasta
and provides a good combination of performance and accuracy.
It uses the
<a href='https://www.seqan.de/'>SeqAn</a> library in a two-step process:
<ul>
<li>In a first step, SeqAn is used to compute an
<a href='https://seqan.readthedocs.io/en/seqan-v2.0.2/Tutorial/PairwiseSequenceAlignment.html#overlap-alignments'>
overlap alignment</a> between two downsampled versions
of the marker sequences of the reads to be aligned.
This is <i>O(N<sup>2</sup>)</i>
in the number of markers
in the two reads being aligned, but still reasonably fast because
of downsampling.
<li>
In a second step, SeqAn is used to compute a banded alignment
of the marker representation of the two reads.
The position and width of the band is obtained from
the downsampled alignment computed in the first step.
</ul>
<h2 id=FindingOverlappingReads>Finding overlapping reads</h2>
<p>
Even though computing read alignments in marker representation is fast,
it still is not feasible to compute alignments among all possible pairs of reads.
For a human-size genome with ≈10<sup>6</sup>-10<sup>7</sup> reads,
the number of pairs to consider would be ≈10<sup>12</sup>-10<sup>14</sup>,
and even at 10<sup>-3</sup> seconds per alignment the compute time
would be ≈10<sup>9</sup>-10<sup>11</sup> seconds,
or ≈10<sup>7</sup>-10<sup>9</sup> seconds elapsed time
(≈10<sup>2</sup>-10<sup>4</sup> days)
when using 128 virtual processors.
<p>
Therefore some means of narrowing down substantially the number
of pairs to be considered is essential.
The Shasta assembler uses for this purpose a slightly modified MinHash
scheme based on the marker representation of reads.
<p>
For a general description of the MinHash algorithm
see the
<a href='https://en.wikipedia.org/wiki/MinHash'>Wikipedia article</a>
or this
<a href='http://infolab.stanford.edu/~ullman/mmds/ch3.pdf'>excellent book chapter</a>.
In summary, the MinHash algorithm takes as input a set of items
each characterized by a set of features. Its goal is to find
pairs of the input items that have a high
<a href='https://en.wikipedia.org/wiki/Jaccard_index'>Jaccard similarity index</a> -
that is, pairs of items that have many features in common.
The algorithm proceeds by iterations. At each iteration, a
new hash table is created and a
hash function that operates on the feature set is selected.
For each item, the hash function of each of its features is evaluated,
and the minimum hash function value found is used to
select the hash table bucket that each item is stored in.
It can be proven that the probability of two items ending up
in the same bucket equals the Jaccard similarity index of the two
items - that is, items in the same bucket are more likely
to be highly similar than items in different buckets.
The algorithm then adds to the pairs of potentially similar
items all pairs of items that are in the same bucket.
<p>
When all iterations are complete, the probability that a pair
of items was found at least once is an increasing function
of the Jaccard similarity of the two items.
In other words, the pairs found are enriched for pairs
that have high similarity. One can now consider all the pairs found
(hopefully a much smaller set than all possible pairs)
and compute the Jaccard similarity index for each,
then keep only the pairs for which the index is sufficiently high.
The algorithm does not guarantee that all pairs
with high similarity will be found - only that the probability
of finding all pairs is an increasing function of their similarities.
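<p>
The iteration scheme described above can be sketched as follows.
This is a generic MinHash illustration, not Shasta code; the hash construction
and all names are assumptions.

```python
# Hedged MinHash sketch: one hash function per iteration; the minimum hash
# over an item's features selects its bucket, and items sharing a bucket in
# any iteration become candidate pairs. Illustrative only.
import hashlib
from collections import defaultdict
from itertools import combinations

def feature_hash(feature, iteration):
    # Derive a 64-bit hash that changes with the iteration number.
    data = f"{iteration}:{feature}".encode()
    return int.from_bytes(hashlib.blake2b(data, digest_size=8).digest(), "big")

def minhash_candidates(items, iterations=10):
    """items: dict mapping item name -> set of features."""
    candidates = set()
    for it in range(iterations):
        buckets = defaultdict(list)
        for name, features in items.items():
            # The minimum hash value over all features selects the bucket.
            buckets[min(feature_hash(f, it) for f in features)].append(name)
        for bucket in buckets.values():
            candidates.update(combinations(sorted(bucket), 2))
    return candidates
```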
<p>
The algorithm is used by Shasta with items being oriented reads
(a read in either original or reverse complemented orientation)
and features being consecutive occurrences of m markers in the
marker representation of the oriented read.
For example, consider an oriented read with the following marker representation:
<pre>
18,45,71,3,15,6,21
</pre>
If m is selected equal to 4 (the Shasta default, controlled
by assembly parameter
<code><a href='CommandLineOptions.html#MinHash.m'>--MinHash.m</a></code>),
the oriented read is assigned the following features:
<pre>
(18,45,71,3)
(45,71,3,15)
(71,3,15,6)
(3,15,6,21)
</pre>
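<p>
Feature generation is a simple sliding window over the marker representation.
A minimal sketch, with illustrative names:

```python
# Hedged sketch: generate MinHash features as windows of m consecutive
# marker k-mer ids, as in the example above.

def features(marker_ids, m=4):
    return [tuple(marker_ids[i:i + m])
            for i in range(len(marker_ids) - m + 1)]
```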
<p>
From the
<a href='#MarkerAlignment'>picture above</a> of an alignment matrix in
marker representation, we see that streaks of 4 or more common consecutive
markers are relatively common. We have to keep in mind that,
with Shasta default parameters, 4 consecutive markers
span an average 40 bases in run-length encoding or about 52 bases
in the original raw base representation.
At a typical error rate of around 10%, such a portion of a read would
contain on average 5 errors. Yet, the marker representation
in run-length space
is sufficiently robust that such shared features
remain frequent despite the high error rate.
This indicates that we can expect the
MinHash algorithm to be effective in finding pairs of overlapping reads.
<p>
However, the MinHash algorithm has a property that is undesirable
for our purposes: it is only good at finding
read pairs with a high Jaccard similarity index.
For two sets X and Y, the Jaccard similarity index is defined as the ratio
<pre>
J = |X∩Y| / |X∪Y|
</pre>
Because the read length distribution of Oxford Nanopore reads
is wide, it is common to have pairs of reads with
very different lengths. Consider now two reads with lengths n<sub>x</sub> and n<sub>y</sub>, with
n<sub>x</sub><n<sub>y</sub>, that overlap exactly over the entire length n<sub>x</sub>.
The Jaccard similarity is in this case given by n<sub>x</sub>/n<sub>y</sub> < 1.
This means that, if one of the reads in a pair is much shorter than
the other one, their Jaccard similarity will be low even in
the best case of exact overlap. As a result, the
unmodified MinHash algorithm will not do a good job
at finding overlapping pairs of reads with very different lengths.
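<p>
This effect is easy to verify numerically. The sketch below is illustrative:
it models the exact-overlap case by making the shorter read's feature set a
subset of the longer read's.

```python
# Hedged sketch: Jaccard similarity of two reads with very different
# lengths, where the shorter read overlaps the longer one exactly.

def jaccard(x, y):
    return len(x & y) / len(x | y)

short_read = set(range(100))   # n_x = 100 features
long_read = set(range(300))    # n_y = 300 features, containing all of short_read
# Even with a perfect overlap, J = n_x / n_y is well below 1.
```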
<p>
For this reason, the Shasta assembler uses a small modification to
the MinHash algorithm: instead of just using the minimum hash
for each oriented read for each iteration,
it keeps all hashes below a given threshold. Each oriented read
can be stored in multiple buckets, one for each low hash encountered.
This has the effect of eliminating the bias against
pairs in which one read is much shorter than the other.
The modified algorithm is referred to as <i>LowHash</i>
in the Shasta source code.
It is effectively equivalent to an indexing approach
in which we index all features with low hash.
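<p>
A sketch of the modification follows. It is illustrative, not Shasta's
implementation; the parameter names loosely mirror the options listed below.

```python
# Hedged LowHash sketch: every feature whose hash falls below a threshold
# selects a bucket, so an oriented read can land in many buckets per
# iteration. Illustrative only.
import hashlib
from collections import defaultdict
from itertools import combinations

def feature_hash(feature, iteration):
    data = f"{iteration}:{feature}".encode()
    return int.from_bytes(hashlib.blake2b(data, digest_size=8).digest(), "big")

def lowhash_candidates(items, iterations=10, hash_fraction=0.01,
                       min_frequency=2, max_bucket_size=10):
    threshold = int(hash_fraction * 2**64)
    found = defaultdict(int)
    for it in range(iterations):
        buckets = defaultdict(set)
        for name, features in items.items():
            for f in features:
                h = feature_hash(f, it)
                if h < threshold:          # keep every "low" hash
                    buckets[h].add(name)
        for bucket in buckets.values():
            if len(bucket) <= max_bucket_size:
                for pair in combinations(sorted(bucket), 2):
                    found[pair] += 1
    return {pair for pair, count in found.items() if count >= min_frequency}
```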
<p>
The LowHash algorithm is controlled by the following
assembly parameters:
<ul>
<li>
<code><a href='CommandLineOptions.html#MinHash.m'>--MinHash.m</a></code>
(default 4):
the number of consecutive markers that define a feature.
<li>
<code><a href='CommandLineOptions.html#MinHash.hashFraction'>--MinHash.hashFraction</a></code>
(default 0.01):
The fraction of hash values that count as "low".
<li>
<code><a href='CommandLineOptions.html#MinHash.minHashIterationCount'>--MinHash.minHashIterationCount</a></code>
(default 10):
The number of iterations.
<li>
<code><a href='CommandLineOptions.html#MinHash.maxBucketSize'>--MinHash.maxBucketSize</a></code>
(default 10):
The maximum number of items for a bucket to be considered.
Buckets with more than this number of items are ignored.
The goal of this parameter is to mitigate
the effect of common repeats, which
can result in buckets containing large numbers of
unrelated oriented reads.
<li>
<code><a href='CommandLineOptions.html#MinHash.minBucketSize'>--MinHash.minBucketSize</a></code>
(default 0):
The minimum number of items for a bucket to be considered.
Buckets with fewer than this number of items are ignored
because MinHash features in these buckets are likely to be in error.
<li>
<code><a href='CommandLineOptions.html#MinHash.minFrequency'>--MinHash.minFrequency</a></code>
(default 2):
the number of times a pair of oriented reads has to be found to be considered
and stored as a possible pair of overlapping reads.
</ul>
<h2 id=InitialAssemblySteps>Initial assembly steps</h2>
<p>
Initial steps of a Shasta assembly proceed as follows.
If the assembly is set up for
<a href=Performance.html>best performance</a>
(<code>--<a href='CommandLineOptions.html#memoryMode'>memoryMode</a> filesystem --<a href='CommandLineOptions.html#memoryBacking'>memoryBacking</a> 2M</code>
if using the Shasta executable), all data structures
are stored in memory, and no disk activity takes
place except for initial loading of the input reads,
storing of assembly results, and storing a small number
of small files with useful summary information.
<ul>
<li>Input reads are read from Fasta files and converted
to run-length representation. Reads shorter
than the read length cutoff
<code><a href='CommandLineOptions.html#Reads.minReadLength'>--Reads.minReadLength</a></code>
(default 10000 bases) are discarded.
Reads that contain bases with repeat counts
greater than 255 are also discarded.
This is a consequence of the fact that repeat counts
are stored using one byte, and therefore there would
be no way to store such reads. Reads with such
long repeat counts are extremely rare, however,
and when they occur they are of suspicious quality.
<li>In addition, if a non-zero value is specified for
<code><a href='CommandLineOptions.html#Reads.desiredCoverage'>--Reads.desiredCoverage</a></code>, the read length cutoff is further increased until coverage is just
above the specified number of bases.
<li>K-mers to be used as markers are randomly selected.
<li>Occurrences of those marker k-mers in all oriented reads are found.
<li>Reads are aligned to their reverse complement to determine if they are
palindromic. Palindromes are flagged and ignored during assembly.
<li>The LowHash algorithm finds candidate pairs of overlapping oriented reads.
<li>A marker alignment is computed for each candidate pair of oriented reads.
If the alignment <a href='#AlignmentMetrics'>metrics</a> are sufficiently good,
the alignment is stored for use in the assembly.
</ul>
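<p>
The coverage-driven cutoff adjustment can be sketched as follows: keep reads
from longest to shortest until the desired number of bases is reached, and use
the length of the last read kept as the cutoff. This is an illustrative
simplification of the behavior described above, not Shasta's actual code.

```python
# Hedged sketch: raise the read length cutoff until the surviving reads
# total just above the desired coverage in bases. Illustrative only.

def length_cutoff(read_lengths, min_read_length, desired_coverage):
    lengths = sorted((l for l in read_lengths if l >= min_read_length),
                     reverse=True)
    total = 0
    for l in lengths:
        total += l
        if total >= desired_coverage:
            return l        # reads at least this long suffice
    return min_read_length  # desired coverage unreachable: keep all reads
```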
<h2 id=ReadGraph>Read graph</h2>
<p>
Using the methods covered so far, an assembly has created
a list of pairs of oriented reads, each pair having a plausible marker
alignment.
How to use this type of information for assembly is a classical problem
with a standard solution
<a href='https://doi.org/10.1093/bioinformatics/bti1114'>(Myers, 2005)</a>,
the <i>string graph</i>.
However, the prescriptions in the Myers paper cannot be directly used here,
mostly due to the marker representation.
<p>
Shasta proceeds by creating a <i>Read Graph</i>, an undirected
graph in which each vertex corresponds to an oriented read.
A subset of the available alignments is selected using one of two
methods described below, under control of command line option
<code><a href="CommandLineOptions.html#ReadGraph.creationMethod">--ReadGraph.creationMethod</a></code>.
For each selected alignment, an undirected edge is created in the read
graph between the vertices corresponding to the oriented reads
in the selected alignment.
<p>
If we used all of the available alignments to create read graph edges,
the resulting read graph would suffer
from high connectivity in repeat regions, due to spurious
alignments between oriented reads originating in different
but similar regions of the genome.
Therefore it is important to select alignments to be used
in a way that minimizes these spurious alignments
while keeping true ones as much as possible.
<p>
Note that each read contributes two vertices to the read graph,
one in its original orientation, and one in reverse
complemented orientation.
Therefore the read graph contains two strands,
each strand at full coverage.
This makes it easy to investigate and potentially detect
erroneous strand jumps that would be much less obvious
if using alternative approaches with one vertex per read.
<p>
An example of a portion of the read graph, as displayed
by the Shasta http server, is shown here.
<img class="fit-in-main" src=ReadGraph1.png>
<p>
Even though the graph is undirected, edges that
correspond to overlap alignments are drawn with an arrow
that points from the leftmost oriented read to the
rightmost one.
Edges that correspond to containment alignments
are drawn in red and without an arrow.
Vertices are drawn with area proportional to the length of
the corresponding reads.
<p>
The linear structure of the read graph successfully reflects
the linear arrangement of the input reads
and their origin on the genome being assembled.
<p>
However, deviations from the linear structure can
occur in the presence of long repeats,
typically caused by high-similarity segmental duplications:
<img class="fit-in-main" src=ReadGraph2.png>
<p>
Obviously incorrect connections can destroy the linear structure
of assembled sequence. This is dealt with later in the assembly process.
<h3 id=ReadGraphCreationMethod0>Read graph creation method 0</h3>
<p>
When <code>--ReadGraph.creationMethod 0</code>
(the default) is selected, the Shasta assembler uses a very simple method to select
alignments: it only keeps a
<a href='https://en.wikipedia.org/wiki/K-nearest_neighbors_algorithm'>
<i>k</i>-Nearest-Neighbor</a>
subset of the alignments.
That is, for each vertex (oriented read)
it only keeps the best <i>k</i> alignments,
as measured by the number of aligned markers.
The number of edges <i>k</i> kept
for each vertex is controlled by assembly option
<code><a href='CommandLineOptions.html#ReadGraph.maxAlignmentCount'>--ReadGraph.maxAlignmentCount</a></code>,
with a default value of 6.
Note that, despite the <i>k</i>-Nearest-Neighbor subset,
a vertex can still end up with degree greater than <i>k</i>,
because an alignment is kept if either of its two oriented reads
ranks it among its best <i>k</i>.
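<p>
A sketch of the selection follows; it also shows how a vertex can end up with
degree greater than <i>k</i>, since an alignment survives if either of its two
oriented reads ranks it among its best <i>k</i>. Names are illustrative, not
Shasta's.

```python
# Hedged sketch of k-Nearest-Neighbor alignment selection by aligned
# marker count. Illustrative only.
from collections import defaultdict

def knn_alignments(alignments, k):
    """alignments: list of (read0, read1, aligned_marker_count) tuples."""
    by_read = defaultdict(list)
    for a in alignments:
        by_read[a[0]].append(a)
        by_read[a[1]].append(a)
    kept = set()
    for aligns in by_read.values():
        # Keep the k alignments with the most aligned markers for this read.
        kept.update(sorted(aligns, key=lambda a: a[2], reverse=True)[:k])
    return kept
```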
<h3 id=ReadGraphCreationMethod2>Read graph creation method 2</h3>
<p>
When <code>--ReadGraph.creationMethod 2</code>
is selected, a more sophisticated approach,
developed by Ryan Lorig-Roach at U. C. Santa Cruz,
is used.
Shasta first inspects all stored alignments to compute
statistical distributions (histograms) for the five alignment
metrics in the table below.
Alignments for which one of these metrics is worse than a set percentile
are discarded. The threshold percentiles are controlled by
the command line options shown in the table.
<table style='margin-left:auto;margin-right:auto;'>
<tr>
<th>Alignment metric
<th>Percentile option
<th>Default value
<tr>
<td>Number of aligned markers
<td><code>--ReadGraph.markerCountPercentile</code>
<td class=centered>0.015
<tr>
<td>Aligned marker fraction
<td><code>--ReadGraph.alignedFractionPercentile</code>
<td class=centered>0.12
<tr>
<td>Maximum number of markers skipped
<td><code>--ReadGraph.maxSkipPercentile</code>
<td class=centered>0.12
<tr>
<td>Maximum diagonal drift
<td><code>--ReadGraph.maxDriftPercentile</code>
<td class=centered>0.12
<tr>
<td>Number of markers trimmed
<td><code>--ReadGraph.maxTrimPercentile</code>
<td class=centered>0.015
</table>
<p>
The process then continues with the same
k-Nearest-Neighbor procedure used with
<code>--ReadGraph.creationMethod 0</code>,
but only considering alignments that were not ruled
out by the above percentile criteria.
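<p>
The filtering step can be sketched as follows. This is illustrative: it
assumes each alignment carries the five metrics, uses a simple nearest-rank
percentile, and treats "worse" as low for the first two metrics and high for
the last three. All names are hypothetical.

```python
# Hedged sketch of percentile-based alignment filtering. Names and the
# percentile convention are illustrative, not Shasta's implementation.

def percentile(values, p, worst_is_low=True):
    """Nearest-rank value at fraction p from the worst end."""
    ordered = sorted(values, reverse=not worst_is_low)
    return ordered[min(int(p * len(ordered)), len(ordered) - 1)]

def filter_alignments(alignments, marker_count_pct=0.015,
                      aligned_fraction_pct=0.12, skip_pct=0.12,
                      drift_pct=0.12, trim_pct=0.015):
    min_count = percentile([a["markerCount"] for a in alignments],
                           marker_count_pct)
    min_fraction = percentile([a["alignedFraction"] for a in alignments],
                              aligned_fraction_pct)
    max_skip = percentile([a["skip"] for a in alignments],
                          skip_pct, worst_is_low=False)
    max_drift = percentile([a["drift"] for a in alignments],
                           drift_pct, worst_is_low=False)
    max_trim = percentile([a["trim"] for a in alignments],
                          trim_pct, worst_is_low=False)
    return [a for a in alignments
            if a["markerCount"] >= min_count
            and a["alignedFraction"] >= min_fraction
            and a["skip"] <= max_skip
            and a["drift"] <= max_drift
            and a["trim"] <= max_trim]
```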
<p>
Because of its adaptiveness to alignment characteristics,
this method is more robust and less sensitive
to the choice of alignment criteria
and results in more accurate and more contiguous assemblies.
However, when using <code>--ReadGraph.creationMethod 2</code>
it is important to use very liberal choices for the
options that control alignment quality because
bad alignments will be discarded anyway using the
percentile criteria.
An example of assembly options with <code>--ReadGraph.creationMethod 2</code>
is provided in Shasta configuration files
<code>Nanopore-Sep2020.conf</code> and
<code>Nanopore-UL.Sep2020.conf</code>.
These files apply to Oxford Nanopore reads created
with base caller Guppy version 3.6.0 or newer
(standard and ultra-long version, respectively).
They are available in the
<code>conf</code> directory in the source code tree
or in a Shasta build.
<h2 id=MarkerGraph>Marker graph</h2>
<p>
Consider a read whose marker representation is:
<pre>
a b c d e
</pre>
We can represent this read as a directed graph
that describes the sequence in which its markers appear:
<p>
<img src="MarkerGraph-1.dot.png">
<p>
This is not very useful but illustrates
the simplest form of a <i>marker graph</i>
as used in the Shasta assembler.
The marker graph is a directed graph
in which each vertex represents a marker
and each edge represents the transition
between consecutive markers.
We can associate sequence with each vertex and edge of the
marker graph:
<ul>
<li>Each vertex is associated with the sequence of the corresponding
marker.
<li>If the markers of the source and target vertex of an edge
do not overlap, the edge is associated with the sequence
intervening between the two markers.
<li>If the markers of the source and target vertex of an edge
do overlap, the edge is associated with the overlapping portion
of the marker sequences.
</ul>
<p>
Consider now a second read with the following marker
representation, which differs from the previous one
just by replacing marker <code>c</code> with <code>x</code>:
<pre>
a b x d e
</pre>
<p>
The marker graph for the two reads is:
<p>
<img src="MarkerGraph-2.dot.png">
<p>
In the optimal alignment of the two reads, markers
<code>a, b, d, e</code> are aligned. We can redraw the marker graph
grouping together vertices that correspond to aligned markers:
<p>
<img src="MarkerGraph-3.dot.png">
<p>
Finally, we can merge aligned vertices to obtain
a marker graph describing the two aligned reads:
<p>
<img src="MarkerGraph-4.dot.png">
<p>
Here, by construction, each vertex still has a unique
sequence associated with it - the common sequence
of the markers that were merged
(however the corresponding repeat counts
can be different for each contributing read).
An edge, on the other hand, can have different sequences
associated with it, one corresponding to each
of the contributing reads.
In this example, edges <code>a->b</code>
and <code>d->e</code> have two contributing reads,
which can each have distinct sequence between
the two markers.
<p>
We call coverage of a vertex or edge the number of reads "contributing"
to it. In this example, vertices <code>a, b, d, e</code> have coverage 2,
and vertices <code>c, x</code> have coverage 1.
Edges <code>a->b</code>
and <code>d->e</code> have coverage 2, and the remaining edges have coverage 1.
<p>
The construction of the marker graph was illustrated above
for two reads, but the Shasta assembler
constructs a global marker graph which takes into account
all oriented reads:
<ul>
<li>The process starts with a distinct vertex for each marker of each
oriented read. Note that at this stage the marker graph
is large (≈ 2×10<sup>10</sup> vertices for a human assembly
using default assembly parameters).
<li>For each marker alignment
corresponding to an edge of the read graph,
we merge vertices corresponding to aligned markers.
<li>Of the resulting merged vertices, we remove those whose
coverage is too low or too high, indicating that the contributing
reads or some of the alignments involved are probably in error.
This is controlled by assembly parameters
<code>MarkerGraph.minCoverage</code> (default 10)
and <code>MarkerGraph.maxCoverage</code> (default 100),
which specify the minimum and maximum coverage
for a vertex to be kept.
<li>Edges are created. An edge <code>v0->v1</code> is created if
there is at least a read contributing to both <code>v0</code> and <code>v1</code>
and for which all markers intervening
between <code>v0</code> and <code>v1</code>
belong to vertices that were removed.
</ul>
Note that this does not mean that all vertices
with the same marker sequence are merged -
two vertices are only merged if they have
the same marker sequence, and if there are
at least two reads for which the corresponding markers are aligned.
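<p>
The merging step maps naturally onto a union-find (disjoint-set) structure.
The sketch below is a serial, illustrative stand-in for Shasta's lock-free
parallel implementation: one set per (oriented read, marker ordinal), merged
across aligned marker pairs.

```python
# Hedged sketch: merge marker graph vertices with a disjoint-set (union-find)
# structure. Serial and simplified; Shasta uses a lock-free parallel version.
from collections import defaultdict

class DisjointSet:
    def __init__(self, n):
        self.parent = list(range(n))

    def find(self, i):
        while self.parent[i] != i:
            self.parent[i] = self.parent[self.parent[i]]  # path halving
            i = self.parent[i]
        return i

    def union(self, i, j):
        ri, rj = self.find(i), self.find(j)
        if ri != rj:
            self.parent[ri] = rj

def merge_aligned_markers(marker_counts, aligned_pairs):
    """marker_counts: markers per oriented read; aligned_pairs: iterable of
    ((read0, ordinal0), (read1, ordinal1)). Returns vertices as marker groups."""
    offsets = [0]
    for count in marker_counts:
        offsets.append(offsets[-1] + count)
    ds = DisjointSet(offsets[-1])
    for (r0, o0), (r1, o1) in aligned_pairs:
        ds.union(offsets[r0] + o0, offsets[r1] + o1)
    groups = defaultdict(list)
    for r, count in enumerate(marker_counts):
        for o in range(count):
            groups[ds.find(offsets[r] + o)].append((r, o))
    return list(groups.values())
```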
<p>
Given the large number of initial vertices involved,
this computation is not trivial.
To allow
efficient computation in parallel on many threads, a
<a href='https://github.com/wjakob/dset'>lock-free implementation</a>
of the
<a href='https://en.wikipedia.org/wiki/Disjoint-set_data_structure'>
disjoint-set data structure</a>,
as first described by
<a href='https://dl.acm.org/citation.cfm?id=103458'>
Anderson and Woll (1991)</a>,
<a href='http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.56.8354&rep=rep1&type=pdf'>
Anderson and Woll (1994)</a>,
is used
for merging vertices.
Some code changes
<a href='Acknowledgments.html#Dset'>were necessary</a>
to permit large numbers of vertices,
as the initial implementation by Wenzel Jakob only allowed
for 32-bit vertex ids.
<p>
A portion of the marker graph, as displayed by the
Shasta http server, is shown here:
<p>
<img style="max-width:120%" src=MarkerGraph-5.png>
<p>
Each vertex, shown in green, contains a list
of the oriented reads that contribute to it.
Each oriented read is labeled with the read id
(a number) followed by "-0" for original orientation
and "-1" for reverse complemented orientation. Each vertex
is also labeled with the marker sequence (run-length encoded).
Edges are drawn with an arrow whose thickness is proportional to edge coverage
and labeled with the sequence contributed by each read.
In edge labels, a sequence consisting of a number indicates
the number of overlapping bases between adjacent markers.
<p>
On a larger scale and with less detail, a typical portion of the marker graph looks
like this:
<p>
<img class=fit-in-main src=MarkerGraph-6.png>
<p>
Here, each vertex is drawn with a size proportional to its coverage.
Edge arrows are again displayed with thickness proportional to edge coverage.
The marker graph has a linear structure with a dominant path
and side branches due to errors.
<h2 id=AssemblyGraph>Assembly graph</h2>
<p>
The Shasta assembly process also uses a
compact representation of the marker graph,
called the <i>assembly graph</i>,
in which
each linear sequence of edges is replaced by a single
edge.
For example, this marker graph
<p>
<img src="MarkerGraph-7.dot.png">
<p>
can be represented as an assembly graph as follows.
Colors were chosen to indicate the correspondence
to marker graph edges:
<p>
<img src="MarkerGraph-8.dot.png">
<p>
The <i>length</i>
of an edge of the assembly graph is defined as the number
of marker graph edges that it corresponds to.
For each edge of the assembly graph, average coverage is also computed, by averaging the
coverage of the marker graph edges it corresponds to.
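<p>
The condensation itself is a standard graph transformation: walk each maximal
chain of vertices that have exactly one incoming and one outgoing edge, and
replace the chain with a single edge. A sketch for a branching, acyclic marker
graph (illustrative only; cycles made entirely of linear vertices would need
extra handling):

```python
# Hedged sketch: condense maximal linear chains of marker graph edges into
# single assembly graph edges, recording the chain length. Illustrative only.
from collections import defaultdict

def condense(edges):
    """edges: list of (v0, v1). Returns assembly graph edges
    (start, end, length), where length is the number of marker graph
    edges replaced."""
    out_edges = defaultdict(list)
    in_degree = defaultdict(int)
    for a, b in edges:
        out_edges[a].append(b)
        in_degree[b] += 1

    def is_linear(v):
        # Interior vertex of a chain: exactly one edge in, one edge out.
        return len(out_edges[v]) == 1 and in_degree[v] == 1

    result = []
    for a, b in edges:
        if is_linear(a):
            continue  # interior edge: reached by walking from the chain start
        length, v = 1, b
        while is_linear(v):
            v = out_edges[v][0]
            length += 1
        result.append((a, v, length))
    return result
```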
<h2 id=AssembleSequence>Using the marker graph to assemble sequence</h2>
<p>
The marker graph is a partial description of the multiple sequence
alignment between reads and can be used to assemble a consensus sequence.
One simple way to do that is to only keep the "dominant" path
in the graph, and then move on that path from vertex to edge to vertex,
assembling run-length encoded sequence as follows:
<ol>
<li>On a vertex, all reads have the same sequence, by construction:
the marker sequence associated with the vertex.
There is trivial consensus among all the reads contributing
to a vertex, and the marker sequence can be used directly as
the contribution of the vertex to assembled sequence.
<li>For edges, there are two possible situations plus a hybrid case:
<ul style='list-style:none'>
<li>2.1. If the adjacent markers overlap, in most cases, all
contributing reads have the same number of
overlapping bases between the two markers, and we are again in a situation
of trivial consensus, where all reads contribute the same sequence,
which also agrees with the sequence of adjacent vertices.
In cases where not all reads are in agreement on the number
of overlapping bases, only reads with the most
frequent number of overlapping bases are taken into account.
<li>2.2. If the adjacent markers don't overlap,
then each read can have a different sequence between the two markers.
In this situation, we compute a multiple sequence
alignment of the sequences and a consensus using the
<a href='https://github.com/rvaser/spoa'>Spoa</a> library.
The multiple sequence alignment is computed constrained
at both ends, because all reads contributing
to the edge have, by construction, identical
markers at both sides.
<li>
2.3. A hybrid situation occasionally arises, in which
some reads have the two markers overlapping,
and some do not. In this case, we count reads
of the two kinds and discard the reads of the minority kind,
then revert to one of the two cases 2.1 or 2.2 above.
</ul>
</ol>
<p>
This is the process used for sequence assembly by the current Shasta implementation.
It requires a process to select and define dominant paths,
which is described in the next section.
It is algorithmically simple, but its main
shortcoming is that it does not use for assembly the reads
that contribute to the abundant side branches. This means that coverage is lost,
and therefore the accuracy of the assembled sequence is not
as good as it could be if all available coverage were used.
Means to eliminate this shortcoming and use information from the side branches of
the marker graph could be a subject of future work on the Shasta assembler.
<h2 id=RepeatCounts>Assembling repeat counts</h2>
<p>
The process described above works
with run-length encoded sequence and therefore
assembles run-length encoded sequence.
The final step to create raw assembled sequence
is to compute the most likely repeat count for
each sequence position in run-length encoding.
In Shasta 0.1.0 this was done by choosing as the
most likely repeat count the one that appears
most frequently in the
reads that contributed to each assembled position.
<p>
The latest version of the Shasta assembler supports three different options
for that, under control of command line option
<code>--Assembly.consensusCaller</code>:
<ul>
<li><code>--Assembly.consensusCaller Modal</code> (the default)
requests the same algorithm used in Shasta 0.1.0, that is,
the most frequent repeat count seen at each base position
is used in the assembly.
<li><code>--Assembly.consensusCaller Median</code>
requests an algorithm that stores in the assembly
the median repeat count seen at each base position.
<li><code>--Assembly.consensusCaller Bayesian:name</code>
requests a Bayesian algorithm to determine the most likely repeat count.
<code>name</code> can be one of the following:
<ul>
<li>One of the following built-in Bayesian models:
<ul>
<li><code>guppy-2.3.1-a</code> to use a Bayesian model optimized for
the Guppy 2.3.1 base caller.
<li><code>guppy-3.0.5-a</code> to use a Bayesian model optimized for
the Guppy 3.0.5 base caller.
<li><code>guppy-3.4.4-a</code> to use a Bayesian model optimized for
the Guppy 3.4.4 base caller.
<li><code>guppy-3.6.0-a</code> to use a Bayesian model optimized for
the Guppy 3.6.0 base caller.
<li><code>r10-guppy-3.4.8-a</code> to use a Bayesian model optimized for
r10 reads and the Guppy 3.4.8 base caller.
<li><code>bonito-0.3.1-a</code> to use a Bayesian model optimized for
the Bonito 0.3.1 base caller.
<li><code>guppy-5.0.7-b</code> to use a Bayesian model optimized for
the Guppy 5.0.7 base caller. There is also an older and less accurate model
<code>guppy-5.0.7-a</code>.
</ul>
<li>The name of a configuration file
describing the Bayesian model to be used,
which can be specified as a relative or absolute path.
Earlier versions of Shasta required an absolute path, but this
is no longer the case.
Sample configuration files are available in <code>shasta/conf</code> or
<code>shasta-install/conf</code>.
They are named <code>SimpleBayesianConsensusCaller-*.csv</code>.
</ul>
</ul>
The default is <code>--Assembly.consensusCaller Modal</code>.
Testing showed that the Bayesian options significantly decrease
false positive indels in assembled sequence compared to the
Shasta 0.1.0 behavior, which corresponds to
<code>--Assembly.consensusCaller Modal</code>.
This testing was done for reads called using Guppy 2.3.5.
Even so, using
<code>--Assembly.consensusCaller Bayesian:guppy-2.3.1-a</code>
resulted in a significant improvement over
<code>--Assembly.consensusCaller Modal</code>, indicating
that the Bayesian model is somewhat resilient to
discrepancies between the reads used to construct it
and the reads being assembled.
<p>
Testing also showed that
<code>--Assembly.consensusCaller Median</code> is generally inferior to
<code>--Assembly.consensusCaller Modal</code>.
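<p>
The difference between the <code>Modal</code> and <code>Median</code> callers can be made concrete with a short Python sketch (illustrative only; the actual implementation is in C++):

```python
import statistics
from collections import Counter

def modal_repeat_count(counts):
    # Modal caller: the most frequent repeat count among contributing reads.
    return Counter(counts).most_common(1)[0][0]

def median_repeat_count(counts):
    # Median caller: the median repeat count among contributing reads.
    return int(statistics.median(counts))
```

For a position where the contributing reads report repeat counts 2, 3, 3, 3, 9, both callers return 3, but they can disagree when the distribution is skewed (for counts 1, 1, 4, 5, 6 the modal count is 1 while the median is 4).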
<p>
Software to create a new Bayesian model for a new data type is available at
<a href='https://github.com/rlorigro/runlength_analysis_cpp'>
https://github.com/rlorigro/runlength_analysis_cpp</a>.
This creates a configuration file containing the definition of
the newly created Bayesian model, in a csv format that can be used directly in Shasta
via
<code>--Assembly.consensusCaller Bayesian:fileName</code>.
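<p>
Conceptually, a Bayesian caller picks the true repeat count that maximizes the posterior probability given the counts observed in all contributing reads. The Python sketch below illustrates the idea with a made-up likelihood matrix and prior; the real model parameters come from the <code>SimpleBayesianConsensusCaller-*.csv</code> configuration files, and the actual implementation is more elaborate:

```python
import math

def bayesian_repeat_count(observed_counts, likelihood, prior):
    """Return the true repeat count t maximizing
    P(t) * product over reads of P(observed count | t).
    likelihood[t][o] = P(a read reports count o | true count is t)."""
    best_t, best_logp = None, -math.inf
    for t, p_t in enumerate(prior):
        if p_t == 0.0:
            continue  # impossible true count
        logp = math.log(p_t)
        for o in observed_counts:
            logp += math.log(likelihood[t][o])
        if logp > best_logp:
            best_t, best_logp = t, logp
    return best_t
```
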
<h2 id=SelectingAssemblyPaths>Selecting assembly paths</h2>
<p>
The sequence assembly procedure described in the previous section
can be used to assemble sequence for any path in the marker graph.
This section describes the selection of paths for assembly
in the current Shasta implementation.
This is done by a series of steps that "remove" edges
(but not vertices) from the marker graph until the
marker graph consists mainly of linear sections
which can be used as the assembly paths.
For speed, edges are not actually removed but just marked
as removed using a set of flag bits allocated for this
purpose in each edge.
However, the description below will use the loose term <q>remove</q>
to indicate that an edge was flagged as removed.
<p>
This process consists of the following three steps,
described in more detail in the following sections:
<ul>
<li><a href='#TransitiveReduction'>Approximate transitive reduction of the marker graph</a>.
<li><a href='#Pruning'>Pruning of short side branches (leaves)</a>.
<li><a href='#BubbleRemoval'>Removal of bubbles and super-bubbles</a>.
</ul>
<h5 id=TransitiveReduction>Approximate transitive reduction of the marker graph</h5>
<p>
The goal of this step is to eliminate the side branches in the marker
graph, which are the result of errors.
Although
the number of side branches is substantially reduced thanks to the use of
run-length encoding, side branches are still abundant.
This step uses an approximate transitive reduction of the marker graph which only
considers reachability up to a maximum distance,
controlled by assembly parameter
<code><a href='CommandLineOptions.html#MarkerGraph.maxDistance'>MarkerGraph.maxDistance</a></code>
(default 30 marker graph edges).
Using a maximum distance makes sure that the process remains computationally affordable,
and also has the advantage of not removing long-range edges in the marker graph,
which could be significant.
<p>
In detail, the process works as follows.
In this description, the edge being considered for removal
is the edge <code>v0→v1</code> with source vertex <code>v0</code>
and target vertex <code>v1</code>.
The first two steps are not really part of the transitive reduction
but are performed by the same code for convenience.
<p>
<ul>
<li>All edges with coverage less than or equal to
<code><a href='CommandLineOptions.html#MarkerGraph.lowCoverageThreshold'>MarkerGraph.lowCoverageThreshold</a></code>
are unconditionally removed.
The default value for this assembly parameter is 0,
so this step does nothing when using default parameters.
<li>All edges with coverage 1
and for which the only supporting read has a large marker skip
are unconditionally removed.
The marker skip of an edge, for a given read, is defined as the distance
(in markers) between the <code>v0</code> marker for that read
and the <code>v1</code> marker for the same read.
Most marker skips are small, and a large skip is indicative of an
artifact. Keeping
those edges could result in assembly errors.
The marker skip threshold is controlled
by assembly parameter
<code><a href='CommandLineOptions.html#MarkerGraph.edgeMarkerSkipThreshold'>MarkerGraph.edgeMarkerSkipThreshold</a></code>
(default 100 markers).
<li>Edges
with coverage greater than
<code><a href='CommandLineOptions.html#MarkerGraph.lowCoverageThreshold'>MarkerGraph.lowCoverageThreshold</a></code>
(default 0) and less than
<code><a href='CommandLineOptions.html#MarkerGraph.highCoverageThreshold'>MarkerGraph.highCoverageThreshold</a></code>
(default 256), and that were not previously removed,
are processed in order of increasing coverage.
Note that with the default values of these parameters all edges
are processed because edge coverage is stored using one byte
and therefore can never be more than 255 (it is saturated at 255).
For each edge <code>v0→v1</code>,
a <a href='https://en.wikipedia.org/wiki/Breadth-first_search'>Breadth-First Search</a>
(BFS) in the marker graph is performed
starting at source vertex <code>v0</code>
and with a limit of
<code><a href='CommandLineOptions.html#MarkerGraph.maxDistance'>MarkerGraph.maxDistance</a></code>
(default 30)
edges distance from vertex <code>v0</code>.
The BFS is constrained to not use edge <code>v0→v1</code>.
If the BFS reaches <code>v1</code>, indicating that an
alternative path from <code>v0</code> to <code>v1</code>
exists, edge <code>v0→v1</code> is removed.
Note that the BFS does not use edges that have already
been removed, and so the process is guaranteed not to affect
reachability. Processing edges in order of increasing coverage
makes sure that low coverage edges are the most likely to be removed.
</ul>
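<p>
The core of this step, the coverage-ordered BFS, can be sketched in Python as follows. This is an illustrative simplification of the actual C++ code, which also applies the coverage thresholds and marker skip checks described above:

```python
from collections import deque

def approximate_transitive_reduction(edges, max_distance=30):
    """Flag edge v0->v1 as removed if v1 is reachable from v0 within
    max_distance edges without using v0->v1 or previously removed edges.
    edges is a list of (v0, v1, coverage) tuples. Illustrative sketch."""
    removed = set()
    adjacency = {}
    for v0, v1, _ in edges:
        adjacency.setdefault(v0, []).append(v1)

    def alternative_path_exists(source, target, skip_edge):
        # BFS from source, limited to max_distance edges,
        # not using skip_edge or edges already removed.
        frontier = deque([(source, 0)])
        seen = {source}
        while frontier:
            v, distance = frontier.popleft()
            if distance >= max_distance:
                continue
            for w in adjacency.get(v, []):
                if (v, w) == skip_edge or (v, w) in removed:
                    continue
                if w == target:
                    return True
                if w not in seen:
                    seen.add(w)
                    frontier.append((w, distance + 1))
        return False

    # Process edges in order of increasing coverage, so that
    # low coverage edges are the most likely to be removed.
    for v0, v1, coverage in sorted(edges, key=lambda e: e[2]):
        if alternative_path_exists(v0, v1, (v0, v1)):
            removed.add((v0, v1))
    return removed
```
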
The transitive reduction step is intrinsically sequential and
so it is currently performed in sequential code for simplicity.
It could in principle be parallelized, but that would
require sophisticated locking of marker graph edges
to make sure independent threads don't step on each other,
possibly reducing reachability.
However, even with sequential code, this step is
not computationally expensive, taking typically only a small fraction
of total assembly time.
<p>
When the transitive reduction step is complete,
the marker graph consists mostly of linear sections
composed of vertices with in-degree and out-degree one,
with occasional side branches and bubbles or
<a href='https://arxiv.org/abs/1307.7925'>superbubbles</a>,
which are handled in the next two phases described below.
<h5 id=Pruning>Pruning of short side branches (leaves)</h5>
<p>
At this stage, a few iterations of pruning are done
by simply removing, at each iteration,
edge <code>v0→v1</code> if
<code>v0</code> has in-degree 0 (that is, is a backward-pointing leaf)
or <code>v1</code> has out-degree 0 (that is, is a forward-pointing leaf).
The net effect is that all side branches of length
(number of edges)
at most equal to the number of iterations are removed.
This leaves the leaf vertex isolated, which causes no problems.
The number of iterations is controlled
by assembly parameter
<code><a href='CommandLineOptions.html#MarkerGraph.pruneIterationCount'>MarkerGraph.pruneIterationCount</a></code>
(default 6).
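<p>
The pruning iteration can be sketched as follows (illustrative Python operating on a set of directed edges, not the actual flag-based C++ implementation):

```python
def prune_leaves(edges, iteration_count=6):
    """Iteratively remove edges whose source is a backward-pointing leaf
    (in-degree 0) or whose target is a forward-pointing leaf (out-degree 0),
    mirroring --MarkerGraph.pruneIterationCount. Sketch only."""
    edges = set(edges)
    for _ in range(iteration_count):
        sources = {v0 for v0, v1 in edges}
        targets = {v1 for v0, v1 in edges}
        leaves = {(v0, v1) for v0, v1 in edges
                  if v0 not in targets or v1 not in sources}
        if not leaves:
            break
        edges -= leaves
    return edges
```
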
<h5 id=BubbleRemoval>Removal of bubbles and superbubbles</h5>
<p>
The marker graph now consists mostly of linear sections
with occasional bubbles or superbubbles.
Most of the bubbles and superbubbles are caused by errors,
but some of those are due to heterozygous loci in
the genome being assembled.
Bubbles and superbubbles of the latter type could be used for
separating haplotypes (phasing) - a possibility
that will be addressed in future Shasta releases.
However, the goal of the current
Shasta implementation is to create a haploid assembly
at all scales but the very long ones.
Accordingly, bubbles and superbubbles
at short scales are treated as errors, and the goal of
the bubble/superbubble removal step is to keep
the most significant path in each bubble or superbubble.
<p>
The figures below show typical examples of a bubble
and superbubble in the marker graph.
<p>
<img src=Bubble.png>
<p>
<img src=Superbubble.png>
<p>
The bubble/superbubble removal process is iterative.
Early iterations work on short scales, and late iterations
work on longer scales.
Each iteration uses a length threshold
that controls the maximum number of marker graph edges for
features to be considered for removal.
The value of the threshold for each iteration
is specified using assembly parameter
<code><a href='CommandLineOptions.html#MarkerGraph.simplifyMaxLength'>MarkerGraph.simplifyMaxLength</a></code>,
which consists of a comma-separated string of integer
numbers, each specifying the threshold for one iteration
in the process. The default value is
<code>10,100,1000</code>, which means that three iterations
of this process are performed.
The first iteration uses a threshold of 10 marker graph edges,
and the second and third iterations
use length thresholds of 100 and 1000 marker graph edges, respectively.
The last and largest of the threshold values used
determines the size of the smallest bubble or superbubble
that will survive the process.
The default of 1000 marker graph edges is equivalent to roughly 13 Kb.
To suppress more bubble/superbubbles, increase
the threshold for the last iteration.
To see more bubbles/superbubbles, decrease
the length threshold for the last iteration, or remove the last iteration entirely.
<p>
The goal of the increased threshold values is to work
on small features at first, and on larger features in the later iterations.
The best choice of
<code><a href='CommandLineOptions.html#MarkerGraph.simplifyMaxLength'>MarkerGraph.simplifyMaxLength</a></code>
is application dependent. The default value
is a reasonable compromise useful if one desires
a mostly haploid assembly with just some large heterozygous features.
<p>
Each iteration consists of two steps.
The first removes bubbles
and the second removes superbubbles.
Only bubbles/superbubbles consisting of features shorter than
the threshold for the current iteration are considered:
<ol>
<li><b>Bubble removal</b>
<ul>
<li>An assembly graph corresponding to the current marker graph is created.
<li>Bubbles are located in which the length of all branches
(number of marker graph edges) is no more than the length threshold
at the current iteration. In the assembly graph,
a bubble appears as a set of parallel edges (edges with the same
source and target).
<li>In each bubble, only the assembly graph edge
with the highest average coverage
is kept.
Marker graph edges corresponding to all other assembly graph edges
in the bubble are flagged as removed.
</ul>
<li><b>Superbubble removal</b>:
<ul>
<li>An assembly graph corresponding to the current marker graph is created.
<li>Connected components of the assembly graph are computed,
but only considering edges below the current length threshold.
This way, each connected component corresponds to a "cluster"
of "short" assembly graph edges.
<li>For each cluster, entries into the cluster are located.
These are vertices that have in-edges from a vertex outside the cluster.
Similarly, exits are located (vertices that have out-edges to vertices outside the cluster).
<li>For each entry/exit pair, the shortest path is computed.
However, in this case the "length" of an assembly graph edge
is defined as the inverse of its average coverage - that is, the inverse of average coverage for all
the contributing marker graph edges.
<li>Edges on each shortest path are marked as edges to be kept.
<li>All other edges internal to the cluster are removed.
</ul>
</ol>
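<p>
The bubble removal step of each iteration can be sketched as follows. This is an illustrative Python sketch; representing each assembly graph edge as a <code>(source, target, length, average_coverage)</code> tuple is an assumption made for the example:

```python
from collections import defaultdict

def remove_bubbles(assembly_edges, max_length):
    """In each set of parallel assembly graph edges (same source and target)
    whose branches are all at most max_length marker graph edges long,
    keep only the edge with the highest average coverage. Sketch only."""
    groups = defaultdict(list)
    for edge in assembly_edges:
        source, target = edge[0], edge[1]
        groups[(source, target)].append(edge)

    kept = []
    for parallel_edges in groups.values():
        if len(parallel_edges) > 1 and all(e[2] <= max_length
                                           for e in parallel_edges):
            # A bubble below the length threshold: keep the strongest branch.
            kept.append(max(parallel_edges, key=lambda e: e[3]))
        else:
            kept.extend(parallel_edges)
    return kept
```

In the superbubble step, the analogous choice is made by a shortest path computation in which the length of an assembly graph edge is the inverse of its average coverage.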
<p>
When all iterations of bubble/superbubble removal are complete,
the assembler creates a final version of the assembly graph.
Each edge of the assembly graph corresponds to a path
in the marker graph, for which sequence can be assembled
using the
<a href='#AssembleSequence'>method described above</a>.
Note, however, that the marker graph and the assembly
graph have been constructed to contain both strands.
Special
care is taken during all transformation steps to make sure
that the marker graph (and therefore the assembly graph)
remain symmetric with respect to strand swaps.
Therefore, the majority of assembly graph edges come in
reverse complemented pairs, of which we assemble only one.
It is however possible, but rare,
for an assembly graph edge to be its own reverse complement.
<h2 id=Detangle>Detangling</h2>
<p>
In many real-life situations, the assembly graph contains
features like this one, called a tangle:
<p>
<img src='Detangle-Before.png'/>
<p>
A tangle consists of an edge <i>v<sub>0</sub>→v<sub>1</sub></i>
(depicted here in green) for which the following is true:
<p><i>out-degree(v<sub>0</sub>) = 1</i>
(No outgoing edges of <i>v<sub>0</sub></i> other than the green edge)
<br><i>in-degree(v<sub>1</sub>) = 1</i>
(No incoming edges of <i>v<sub>1</sub></i> other than the green edge)
<br><i>in-degree(v<sub>0</sub>) > 1</i>
(<i>v<sub>0</sub></i> has more than one incoming edge -
if that was not the case, the one incoming edge could be trivially merged with the green edge).
<br><i>out-degree(v<sub>1</sub>) > 1</i>
(<i>v<sub>1</sub></i> has more than one outgoing edge -
if that was not the case, the one outgoing edge could be trivially merged with the green edge).
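<p>
The four degree conditions can be checked directly. In this Python sketch, the graph is represented as a map from each vertex to its lists of in-neighbors and out-neighbors (an assumption made for the example; the actual Shasta code works on its C++ assembly graph data structures):

```python
def is_tangle(graph, v0, v1):
    """True if edge v0->v1 satisfies the tangle conditions above.
    graph[v] = (in_neighbors, out_neighbors). Sketch only."""
    in0, out0 = graph[v0]
    in1, out1 = graph[v1]
    return (len(out0) == 1      # the green edge is v0's only out-edge
            and len(in1) == 1   # ... and v1's only in-edge
            and len(in0) > 1    # v0 has more than one incoming edge
            and len(out1) > 1)  # v1 has more than one outgoing edge
```
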
<p>
In the most common case,
<i>in-degree(v<sub>0</sub>) = out-degree(v<sub>1</sub>) = 2</i>,
and this is what the above picture and the following text assume, for clarity.
<p>
Because DNA structure is linear, this pattern expresses that two portions of the genome
contain a similar stretch of sequence (the green edge) which the assembly is not able to separate.
However, in some cases, it is possible to separate or "detangle" these two copies
of similar sequence.
<p>
If the green edge is short enough, there may be reads long enough to span it in
its entirety. For those reads, we will be able to tell which of the edges on
the left and right of the tangle they reach. Now, suppose we only see reads going from
the blue edge on the left to the blue edge on the right,
and reads going from the red edge on the left to the red edge on the right -
but no reads crossing between blue and red edges.
Then we can infer that one of the copies follows the blue edges on both sides, while
the other copy follows the red edges. This allows us to detangle the two copies as follows:
<p>
<img src='Detangle-After.png'/>
<p>
Note that the sequence in the green edge was duplicated. This makes sense
because there are two copies of that sequence.
This detangling scheme was first proposed, in a different context,
by <a href='https://www.ncbi.nlm.nih.gov/pmc/articles/PMC55524/'>Pevzner <i>et al.</i> (2001)</a>.
It can be applied with no conceptual changes to the Shasta assembly graph.
Command line option
<code><a href='CommandLineOptions.html#Assembly.detangleMethod'>--Assembly.detangleMethod</a></code>
is used to control detangling.
<ul>
<li><code><a href='CommandLineOptions.html#Assembly.detangleMethod'>--Assembly.detangleMethod 0</a></code>
(default): detangling is turned off.
<li><code><a href='CommandLineOptions.html#Assembly.detangleMethod'>--Assembly.detangleMethod 1</a></code>: activates a strict form of detangling.
Requires at least one read spanning the "red" edges on both sides,
at least one read spanning the "blue" edges on both sides,
and no reads crossing over between the red and blue edges.
<li><code><a href='CommandLineOptions.html#Assembly.detangleMethod'>--Assembly.detangleMethod 2</a></code>:
Activates a more relaxed form of detangling, controlled by three command line options
<code><a href='CommandLineOptions.html#Assembly.detangle.diagonalReadCountMin'>
--Assembly.detangle.diagonalReadCountMin</a></code>,
<code><a href='CommandLineOptions.html#Assembly.detangle.offDiagonalReadCountMax'>
--Assembly.detangle.offDiagonalReadCountMax</a></code>, and
<code><a href='CommandLineOptions.html#Assembly.detangle.offDiagonalRatio'>
--Assembly.detangle.offDiagonalRatio</a></code>.
See the code (<code>shasta/src/AssemblyPathGraph2.cpp</code>)
for details of how these parameters control detangling.
</ul>
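<p>
For example, the strict criterion of <code>--Assembly.detangleMethod 1</code> can be expressed on a 2×2 matrix of spanning-read counts, where entry <code>m[i][j]</code> counts reads spanning from in-edge <i>i</i> to out-edge <i>j</i> (an illustrative Python sketch, not the actual implementation):

```python
def strict_detangle_ok(m):
    """True if the spanning reads support exactly one pairing of the
    in-edges with the out-edges, with no cross-over reads
    (the strict criterion described above). Sketch only."""
    straight = m[0][0] > 0 and m[1][1] > 0 and m[0][1] == 0 and m[1][0] == 0
    crossed = m[0][1] > 0 and m[1][0] > 0 and m[0][0] == 0 and m[1][1] == 0
    return straight or crossed
```
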
<h2 id=IterativeAssembly>Iterative assembly</h2>
<p>
Iterative assembly is a Shasta experimental feature
that provides the ability to separate haplotypes or similar copies of long repeats.
In preliminary tests on a human genome, it has been shown to provide some
amount of haplotype separation,
and therefore partially phased
diploid assembly, as well as improved ability to
resolve segmental duplications.
In current tests, it requires Ultra-Long (UL)
Nanopore reads created by base caller Guppy 3.6.0 or newer
at high coverage (around 80x).
The assembly options that were used in this process are captured in the configuration file
<code>Nanopore-UL-iterative-Sep2020.conf</code>.
<p>
The Shasta iterative assembly code is at an experimental stage
and therefore still subject to further
improvements and developments. In its implementation as of September 2020,
it operates as follows. In this description, <code>copy</code> refers
to a haplotype or a copy of a segmental duplication
or other long repeats.
<ul>
<li>
The assembly process runs as usual, without the final phases of bubble/superbubble
removal and final sequence assembly.
<li>
Shasta then computes and stores the sequence of assembly graph edges
encountered by each oriented read, called the <i>pseudo-path</i>
of the oriented read.
These pseudo-paths contain detailed information on how each read
traverses each bubble/superbubble in the assembly graph.
Therefore, two oriented reads that originate from the same <code>copy</code>
(in the sense defined above) are likely to have largely concordant pseudo-paths.
<li>
For each stored marker alignment between two oriented reads,
Shasta now computes an alignment of the pseudo-paths of the two reads.
If the alignment of the two pseudo-paths is not sufficiently good,
indicating that the two oriented reads originate from
distinct <code>copies</code>,
that
marker alignment is flagged as not to be used.
<li>
A new version of the read graph is created normally,
but excluding from consideration the marker alignments that
were flagged as not to be used in the previous step.
As a result,
in the new read graph edges are created preferentially
between oriented reads originating from the same <code>copy</code>.
Edges between oriented reads originating from distinct <code>copies</code>
are less likely to be created. Therefore the resulting
read graph has achieved some amount of separation between the <code>copies</code>.
<li>
The process is repeated a few times. At each iteration,
the new read graph achieves cleaner separation between <code>copies</code>.
<li>
When the last iteration completes, the assembly process continues
normally, with optional bubble/superbubble removal and detangling,
followed by sequence assembly.
</ul>
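<p>
The pseudo-path comparison step can be illustrated with a crude sketch: treat each pseudo-path as a sequence of assembly graph edge ids and require that a large fraction of the shorter path can be aligned. This is only a stand-in for the actual pseudo-path alignment Shasta computes; the longest-common-subsequence criterion and the 0.8 threshold are assumptions made for the example:

```python
def pseudo_path_concordant(path_a, path_b, min_aligned_fraction=0.8):
    """Decide whether two oriented reads' pseudo-paths (sequences of
    assembly graph edge ids) are concordant enough to keep their
    marker alignment. Illustrative sketch only."""
    # Longest common subsequence length by dynamic programming.
    m, n = len(path_a), len(path_b)
    lcs = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            if path_a[i] == path_b[j]:
                lcs[i + 1][j + 1] = lcs[i][j] + 1
            else:
                lcs[i + 1][j + 1] = max(lcs[i][j + 1], lcs[i + 1][j])
    return lcs[m][n] >= min_aligned_fraction * min(m, n)
```
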
<h2 id=Mode2Assembly>Phased diploid assembly</h2>
<p>
<b>
Note: a preliminary implementation of phased (mode 2) assembly
was included in Shasta 0.8.0.
For documentation of that preliminary implementation,
get the
<a href='https://github.com/chanzuckerberg/shasta/releases/download/0.8.0/shasta-Ubuntu-20.04-0.8.0.tar'>
tar file
</a>
for Shasta 0.8.0.
The documentation below describes the current,
improved implementation of phased assembly.
</b>
<p>
Shasta provides a second assembly workflow
specially tuned for phased diploid assembly.
It still uses the same basic computational methods
(MinHash/LowHash algorithm, read graph, marker graph),
but adds a phasing process for diploid genomes
that results in a good amount of haplotype separation in many cases.
<p>
Being specialized for the separation of haplotypes in a diploid assembly, this
process is not effective at separating copies of
segmental duplications, and for the same reason
is not effective for genomes with higher ploidy.
However, it can be used for genomes with mixed ploidy 1 and 2
such as the human genome.
It will typically assemble two haplotypes at most
locations in the genome, except segmental duplications, if:
<ul>
<li>Coverage is high enough.
<li>The reads are long enough.
<li>Heterozygosity is not too low.
</ul>
<p>
Because mode 2 assembly is less capable than mode 0 assembly
(standard Shasta haploid assembly)
in segmental duplications, mode 0 assembly can be more effective
if assembly contiguity (N<sub>50</sub>) is the main goal,
and separating haplotypes is not important.
<p>
Assembly mode 2 does not work by assigning haplotypes
to reads. Rather, it assigns haplotypes to branches
of heterozygous bubbles in the marker graph.
<p>
See the following sections for a description of the
computational process used for mode 2 assembly
and the output it creates.
<h3>Marker graph creation in mode 2 assembly</h3>
<p>
Mode 0 and mode 2 assembly generate marker graph vertices
in the same way: at the end of the disjoint sets process,
each disjoint set containing at least
<code><a href='CommandLineOptions.html#MarkerGraph.minCoverage'>--MarkerGraph.minCoverage</a></code>
markers is allowed to generate a vertex if:
<ul>
<li>It also contains at least
<code><a href='CommandLineOptions.html#MarkerGraph.minCoveragePerStrand'>--MarkerGraph.minCoveragePerStrand</a></code>
markers on each strand.
This is to avoid vertices with extreme strand bias,
which are likely to correspond to systematic errors.
<li>It does not contain more than one marker
belonging to the same read and with the same orientation.
This avoids some undesirable cyclic features in
the marker graph.
</ul>
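<p>
The vertex generation criteria above can be sketched as follows. This is an illustrative Python sketch; representing each marker in a disjoint set as a <code>(read_id, strand)</code> pair is an assumption made for the example:

```python
def accept_vertex(markers, min_coverage, min_coverage_per_strand):
    """Decide whether a disjoint set of markers generates a vertex.
    Each marker is a (read_id, strand) pair, strand in {0, 1}. Sketch only."""
    # Coverage criterion (--MarkerGraph.minCoverage).
    if len(markers) < min_coverage:
        return False
    # Per-strand coverage criterion (--MarkerGraph.minCoveragePerStrand),
    # to avoid vertices with extreme strand bias.
    for strand in (0, 1):
        if sum(1 for _, s in markers if s == strand) < min_coverage_per_strand:
            return False
    # Reject disjoint sets containing the same oriented read more than once,
    # which would create undesirable cyclic features in the marker graph.
    return len(set(markers)) == len(markers)
```
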
However, the generation of marker graph edges
is different: in mode 0
all possible edges are initially generated,
without regard to coverage.
That is, an edge <code>A→B</code>
between vertices <code>A</code> and <code>B</code>
is generated if there is at least one (oriented) read
that visits vertex <code>B</code> immediately
after vertex <code>A</code>, without visiting
any other vertices in between.
Then, edges are gradually removed by a process,
including transitive reduction and bubble/superbubble removal,
whose main goal is to keep contiguity unaltered.
This results in highly contiguous haploid assemblies (large N<sub>50</sub>).
<p>
In mode 2 assembly, the process of generating edges is in a way reversed:
rather than being very permissive in the initial creation of edges,
edges are initially created subject to strict coverage criteria
similar to those used for vertices,
under control of
<code><a href='CommandLineOptions.html#MarkerGraph.minEdgeCoverage'>--MarkerGraph.minEdgeCoverage</a></code>
and
<code><a href='CommandLineOptions.html#MarkerGraph.minEdgeCoveragePerStrand'>--MarkerGraph.minEdgeCoveragePerStrand</a></code>.
This ensures that only edges with good read support are generated.
As an additional requirement, all oriented reads
that participate in an edge are required to have
exactly the same sequence on the edge
(or the same RLE sequence, if working with reads
in RLE representation).
If this is not the case, the edge is split.
<p>
This way, a bubble is generated (see below)
if any two oriented reads have different sequences (or RLE sequences)
in any vertex or edge.
<p>
This creates the following problem when working
using reads in RLE representation.
In that case, this process does not generate bubbles
for heterozygous loci in which the two branches have the same
RLE sequence.
For example, consider an A→G SNP in which the two alleles are
<pre>
AAAG
AAGG
</pre>
For both alleles, the RLE sequence is AG, so no bubble is generated,
and that SNP could be missed.
Nevertheless, with current ONT reads the benefits of RLE still
outweigh this issue, and the current assembly configurations
for phased assembly do use RLE.
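<p>
A short Python sketch makes this concrete: run-length encoding keeps one copy of each base run plus its repeat count, so the two alleles above differ only in their repeat counts:

```python
from itertools import groupby

def run_length_encode(sequence):
    """Return (base_sequence, repeat_counts) for a raw sequence."""
    runs = [(base, len(list(group))) for base, group in groupby(sequence)]
    return "".join(base for base, _ in runs), [count for _, count in runs]
```

Both <code>AAAG</code> and <code>AAGG</code> collapse to the RLE base sequence <code>AG</code>, so a process that compares only RLE base sequences cannot distinguish the two alleles.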
<p>
However, the strict criteria for edge creation result in frequent breaks of contiguity
at places where coverage is locally low due to errors.
To avoid fragmented assemblies, this is addressed by
adding, after the fact, a minimal number of <i>secondary edges</i>
at the locations where breaks occur.
For these secondary edges, the above coverage criteria are not used,
but the logic to create secondary edges attempts to maximize
coverage of the secondary edges that are created.
<p>
Because of the strict coverage criteria used, this results
in a mostly linear marker graph.
The transitive reduction step
used for mode 0 assembly is not needed,
and the bubble and superbubble removal processes work differently in mode 2
assembly, as described below.
<h3>Heterozygous bubbles</h3>
<p>
The marker graph created as described above
is mostly linear
but has occasional <i>bubbles</i> caused by heterozygous loci.
A bubble is a set of parallel edges in the marker graph.
The number of parallel edges is the <i>ploidy</i>
of the bubble.
<p>
Mode 2 assembly works under a diploidy assumption (ploidy is 2
everywhere). If any bubbles with ploidy greater than 2
are found, only the two strongest branches
(the ones with the most supporting reads) are kept.
<p>
The two branches of each diploid heterozygous bubble
are ordered arbitrarily and labeled 0 and 1.
We keep track of which reads appear in the
marker graph edges of the two branches.
In the next sections, the
numbers of reads that appear on the two branches are
called <i>n<sub>0</sub></i> and <i>n<sub>1</sub></i>.
<p>
Diploid bubbles in the marker graph don't necessarily
reflect the difference between haplotypes at a
heterozygous locus and can be caused by errors.
In fact, with current Nanopore reads
most heterozygous bubbles are caused by errors.
Therefore, heterozygous bubbles are handled in two steps
in mode 2 assembly:
<ul>
<li>In the first step, bubble removal, bubbles that are likely to be due to
errors are removed by only keeping the strongest branch
(that is, the one supported by the most reads).
<li>In the second step, phasing, a haplotype
is assigned to each branch of each bubble that
was not removed in the first step.
</ul>
<p>
The first step is essential.
Attempting to phase bubbles without first removing
the ones caused by errors would render that phasing process noisy
and unreliable.
<p>
In the phasing step,
it is impossible to assign haplotypes globally.
Rather, heterozygous bubbles are partitioned into
<i>phasing components</i>, and all bubbles in a
phasing component are assigned a haplotype,
which is valid only relative to other bubbles in the
same component.
Each phasing component corresponds to one "large bubble"
in the phased representation of the assembly (see below).
Occasionally, it can also correspond to more than one large bubble.
<h3>Superbubbles</h3>
<p>
Structures with connectivity more complex than simple bubbles (superbubbles)
are generally present in the marker graph.
Handling of these structures is currently limited
to superbubbles with one entrance and one exit.
The superbubble is replaced with a diploid
bubble by choosing the two most likely paths in the superbubble.
<h3 id=BayesianPhasing>Simple Bayesian model for a pair of diploid bubbles</h3>
<p>
The bubble removal and the phasing steps
use a simple Bayesian model describing two diploid bubbles.
<p>
Given two diploid bubbles, bubble <i>A</i> and bubble <i>B</i>,
we can use their read composition
to tell how the haplotypes of the first bubble are related to the
haplotypes of the second bubble.
We can create a 2 by 2 <i>phasing matrix</i>
that counts the number of common reads between each side of the two bubbles.
That is, <i>n<sub>ij</sub></i> is the number of common reads
between branch <i>i</i> of bubble <i>A</i> and branch <i>j</i>
of bubble <i>B</i>.
In an ideal error-free scenario, one of two following situations
would occur:
<ul>
<li><i>n<sub>ij</sub></i> is diagonal, that is,
<i>n<sub>01</sub> = n<sub>10</sub> = 0</i>.
All common reads that visit branch 0 of bubble <i>A</i>
also visit branch 0 of bubble <i>B</i>,
and
all common reads that visit branch 1 of bubble <i>A</i>
also visit branch 1 of bubble <i>B</i>.
In this case, we can say that the two bubbles are <i>in-phase</i>,
that is,
branch 0 of bubble <i>A</i> is on the same haplotype
as branch 0 of bubble <i>B</i>, and
branch 1 of bubble <i>A</i> is on the same haplotype
as branch 1 of bubble <i>B</i>.
<li><i>n<sub>ij</sub></i> has a zero diagonal, that is,
<i>n<sub>00</sub> = n<sub>11</sub> = 0</i>.
All common reads that visit branch 0 of bubble <i>A</i>
also visit branch 1 of bubble <i>B</i>,
and
all common reads that visit branch 1 of bubble <i>A</i>
also visit branch 0 of bubble <i>B</i>.
In this case, we can say that the two bubbles are <i>out-of-phase</i>,
that is,
branch 0 of bubble <i>A</i> is on the same haplotype
as branch 1 of bubble <i>B</i>, and
branch 1 of bubble <i>A</i> is on the same haplotype
as branch 0 of bubble <i>B</i>.
</ul>
<p>
However, due to errors there can be deviations
from this ideal behavior. In addition, if even one of the
two bubbles is caused by errors the distribution
of the reads between the branches of the two bubbles can
be entirely random.
To describe this quantitatively, we use a simple Bayesian model
for the pair of diploid bubbles.
<p>
Here are some additional quantities we need below.
<ul>
<li>The number of reads on branch 0 of bubble <i>A</i>
that also appear on either branch of bubble <i>B</i> is
<i>n<sub>A0</sub> = n<sub>00</sub> + n<sub>01</sub></i>.
<li>The number of reads on branch 1 of bubble <i>A</i>
that also appear on either branch of bubble <i>B</i> is
<i>n<sub>A1</sub> = n<sub>10</sub> + n<sub>11</sub></i>.
<li>The number of reads on branch 0 of bubble <i>B</i>
that also appear on either branch of bubble <i>A</i> is
<i>n<sub>B0</sub> = n<sub>00</sub> + n<sub>10</sub></i>.
<li>The number of reads on branch 1 of bubble <i>B</i>
that also appear on either branch of bubble <i>A</i> is
<i>n<sub>B1</sub> = n<sub>01</sub> + n<sub>11</sub></i>.
<li>The number of reads on the diagonal of <i>n<sub>ij</sub></i>
is defined as
<i>n<sub>diagonal</sub> = n<sub>00</sub> + n<sub>11</sub></i>.
This is the number of reads that suggest that the two bubbles are in-phase.
<li>The number of reads off the diagonal of <i>n<sub>ij</sub></i>
is defined as
<i>n<sub>off-diagonal</sub> = n<sub>01</sub> + n<sub>10</sub></i>.
This is the number of reads that suggest that the two bubbles are out of phase.
<li>The number of reads that suggest the "best" of the
in-phase and out of phase hypotheses (based on simple read counts)
is defined as
<i>n<sub>concordant</sub> = max(n<sub>diagonal</sub>, n<sub>off-diagonal</sub>)</i>.
<li>The number of reads that suggest the "worst" of the
in-phase and out of phase hypotheses (based on simple read counts)
is defined as
<i>n<sub>discordant</sub> = min(n<sub>diagonal</sub>, n<sub>off-diagonal</sub>)</i>.
<li>The total number of reads that appear on both bubbles is
<i>n
= n<sub>A0</sub> + n<sub>A1</sub>
= n<sub>B0</sub> + n<sub>B1</sub> =
n<sub>00</sub> + n<sub>01</sub> +
n<sub>10</sub> + n<sub>11</sub> =
n<sub>diagonal</sub> + n<sub>off-diagonal</sub> =
n<sub>concordant</sub> + n<sub>discordant</sub></i>
</ul>
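<p>
As an illustration only (not Shasta's actual implementation, which is in C++), the derived quantities above can be computed from the four entries of the phasing matrix with a few lines of Python:

```python
def derived_counts(n00, n01, n10, n11):
    """Derived read counts for a 2x2 phasing matrix, as defined above."""
    nA0, nA1 = n00 + n01, n10 + n11   # reads per branch of bubble A
    nB0, nB1 = n00 + n10, n01 + n11   # reads per branch of bubble B
    n_diagonal = n00 + n11            # reads supporting in-phase
    n_off_diagonal = n01 + n10        # reads supporting out-of-phase
    return {
        "nA0": nA0, "nA1": nA1, "nB0": nB0, "nB1": nB1,
        "diagonal": n_diagonal, "offDiagonal": n_off_diagonal,
        "concordant": max(n_diagonal, n_off_diagonal),
        "discordant": min(n_diagonal, n_off_diagonal),
        "n": n_diagonal + n_off_diagonal,
    }
```

For example, a matrix with <i>n<sub>00</sub>=8, n<sub>01</sub>=1, n<sub>10</sub>=0, n<sub>11</sub>=7</i> gives 15 concordant reads and 1 discordant read, strongly suggesting the in-phase hypothesis.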
<p>
With the above values given and considered fixed,
we now consider three possible hypotheses for the two bubbles:
<ul>
<li><i>Random</i> hypothesis: one or both of the two bubbles
are caused by errors, and therefore reads visit the
two bubbles in an entirely uncorrelated fashion.
<li><i>Ideal in-phase</i> hypothesis: the two bubbles
are in-phase, that is, branches 0 of the two bubbles
are on the same haplotype, and similarly for branches 1.
Under this ideal version of the in-phase hypothesis we also rule out all errors,
and as a result, the phasing matrix is exactly diagonal.
<li><i>Ideal out-of-phase</i> hypothesis: the two bubbles
are out-of-phase, that is, branch 0 of bubble <i>A</i>
is on the same haplotype as branch 1 of bubble <i>B</i>,
and similarly for the other two branches.
Under this ideal version of the out-of-phase hypothesis we also rule out all errors,
and as a result, the diagonal of the phasing matrix is exactly zero.
</ul>
<h5>Random hypothesis</h5>
<p>
Under the random hypothesis, reads visit the
two branches of the two bubbles in an entirely uncorrelated fashion.
The probability that one of the <i>n</i> reads visits branch
<i>i</i> of bubble <i>A</i> and branch
<i>j</i> of bubble <i>B</i> is:
<p>
<i>P(ij </i>|<i> random) = (n<sub>Ai</sub> / n) (n<sub>Bj</sub> / n) = n<sub>Ai</sub> n<sub>Bj</sub> / n<sup>2</sup>
</i>
<p>
It can be easily verified that the sum of all four values
for <i>P(ij|random)</i> equals 1, as it should.
<h5>Ideal in-phase hypothesis</h5>
<p>
Under the ideal in-phase hypothesis, reads visit the two bubbles
just like in the random hypothesis, except that the
off-diagonal elements are zero. Therefore
<i>P(ij </i>|<i> in-phase)</i>
can be obtained from
<i>P(ij </i>|<i> random)</i>
by setting the off-diagonal entries to zero
and renormalizing the diagonal entries so they add up to 1.
The result is:
<p>
<i>P(ij </i>|<i> ideal in-phase) =
δ<sub>ij</sub> n<sub>Ai</sub> n<sub>Bj</sub> /
(n<sub>A0</sub>n<sub>B0</sub> + n<sub>A1</sub>n<sub>B1</sub>)</i>
<h5>Ideal out-of-phase hypothesis</h5>
<p>
The corresponding expression for the ideal
out-of-phase hypothesis can be obtained in a similar way.
The result is:
<p>
<i>P(ij </i>|<i> ideal out-of-phase) =
(1-δ<sub>ij</sub>) n<sub>Ai</sub> n<sub>Bj</sub> /
(n<sub>A0</sub>n<sub>B1</sub> + n<sub>A1</sub>n<sub>B0</sub>)</i>
<h5>Non-ideal hypotheses</h5>
<p>
In reality, we have to consider non-ideal in-phase and out-of-phase hypotheses
in which reads have probability <i>ε>0</i>
of visiting branches in the two bubbles that is inconsistent
with the corresponding ideal hypothesis
(a simple but perhaps oversimplistic assumption).
Under these non-ideal hypotheses, the probability
that one of the <i>n</i> reads visits branch
<i>i</i> of bubble <i>A</i> and branch
<i>j</i> of bubble <i>B</i> is
<p>
<i>
P(ij </i>|<i> in-phase) =
(1 - ε) P(ij </i>|<i> ideal in-phase) +
ε P(ij </i>|<i> ideal out-of-phase)
</i>
<p>
<i>
P(ij </i>|<i> out-of-phase) =
(1 - ε) P(ij </i>|<i> ideal out-of-phase) +
ε P(ij </i>|<i> ideal in-phase)
</i>
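<p>
The probabilities for the three hypotheses can be sketched in Python as follows. This is an illustrative transcription of the formulas above, not Shasta's implementation; the function names are invented for the sketch.

```python
def p_random(i, j, nA, nB):
    """P(ij | random). nA = (nA0, nA1), nB = (nB0, nB1)."""
    n = nA[0] + nA[1]  # total number of common reads
    return nA[i] * nB[j] / n**2

def p_ideal_in_phase(i, j, nA, nB):
    """P(ij | ideal in-phase): diagonal entries only, renormalized."""
    if i != j:
        return 0.0
    return nA[i] * nB[j] / (nA[0] * nB[0] + nA[1] * nB[1])

def p_ideal_out_of_phase(i, j, nA, nB):
    """P(ij | ideal out-of-phase): off-diagonal entries only, renormalized."""
    if i == j:
        return 0.0
    return nA[i] * nB[j] / (nA[0] * nB[1] + nA[1] * nB[0])

def p_in_phase(i, j, nA, nB, eps):
    """Non-ideal in-phase hypothesis with error probability eps."""
    return ((1 - eps) * p_ideal_in_phase(i, j, nA, nB)
            + eps * p_ideal_out_of_phase(i, j, nA, nB))

def p_out_of_phase(i, j, nA, nB, eps):
    """Non-ideal out-of-phase hypothesis with error probability eps."""
    return ((1 - eps) * p_ideal_out_of_phase(i, j, nA, nB)
            + eps * p_ideal_in_phase(i, j, nA, nB))
```

For each hypothesis, the four values sum to 1 over the 2 by 2 phasing matrix, as a probability distribution should.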
<h5>Bayesian model</h5>
<p>
With the above expressions available,
we are in a position to construct a simple Bayesian
model in which we evaluate posterior probability
ratios of the above hypotheses conditional
to the observed distribution of reads
visiting the two branches of each bubble.
<p>
We use "neutral" assumptions on prior probabilities:
<p>
<i>P<sub>prior</sub>(in-phase) =
P<sub>prior</sub>(out-of-phase) =
P<sub>prior</sub>(random)</i>
<p>
Posterior probability ratios are then given by:
<p>
<i>
log<span style='font-size:150%'>[</span>P<sub>posterior</sub>(in-phase) / P<sub>posterior</sub>(random)<span style='font-size:150%'>]</span> =
<span style='font-size:150%'>∑</span><sub>ij</sub>
n<sub>ij</sub>
log<span style='font-size:150%'>[</span>P(ij </i>|<i> in-phase) /
P(ij </i>|<i> random)<span style='font-size:150%'>]</span>
</i>
<p>
<i>
log<span style='font-size:150%'>[</span>P<sub>posterior</sub>(out-of-phase) / P<sub>posterior</sub>(random)<span style='font-size:150%'>]</span> =
<span style='font-size:150%'>∑</span><sub>ij</sub>
n<sub>ij</sub>
log<span style='font-size:150%'>[</span>P(ij </i>|<i> out-of-phase) /
P(ij </i>|<i> random)<span style='font-size:150%'>]</span>
</i>
<p>
Here, the sums over <i>i</i> and <i>j</i>
run over values 0 and 1, that is, over the entire 2 by 2 phasing matrix.
In the expressions on the right, <i>n<sub>ij</sub></i>
are entries of the phasing matrix, and the
remaining expressions have been evaluated in the above sections.
Because we assumed <i>ε>0</i>,
all terms in the above expressions are non-singular.
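<p>
Putting the pieces together, the log posterior ratios can be computed as in the following illustrative Python sketch (the function name and the default <i>ε</i> are invented for the sketch; Shasta's implementation is in C++):

```python
import math

def log_posterior_ratios(n, eps=0.05):
    """log[P_posterior(hypothesis) / P_posterior(random)] for the in-phase
    and out-of-phase hypotheses, given the 2x2 phasing matrix n[i][j]
    and equal priors."""
    nA = (n[0][0] + n[0][1], n[1][0] + n[1][1])  # branch totals, bubble A
    nB = (n[0][0] + n[1][0], n[0][1] + n[1][1])  # branch totals, bubble B
    total = nA[0] + nA[1]
    d_in = nA[0] * nB[0] + nA[1] * nB[1]   # in-phase normalization
    d_out = nA[0] * nB[1] + nA[1] * nB[0]  # out-of-phase normalization
    lr_in = lr_out = 0.0
    for i in (0, 1):
        for j in (0, 1):
            if n[i][j] == 0:
                continue  # term contributes nothing; avoids log of zero
            p_rand = nA[i] * nB[j] / total**2
            p_in_ideal = nA[i] * nB[j] / d_in if i == j else 0.0
            p_out_ideal = nA[i] * nB[j] / d_out if i != j else 0.0
            # Non-ideal hypotheses: mix in errors with probability eps.
            p_in = (1 - eps) * p_in_ideal + eps * p_out_ideal
            p_out = (1 - eps) * p_out_ideal + eps * p_in_ideal
            lr_in += n[i][j] * math.log(p_in / p_rand)
            lr_out += n[i][j] * math.log(p_out / p_rand)
    return lr_in, lr_out
```

For a strongly diagonal matrix, the in-phase ratio is large and positive while the out-of-phase ratio is large and negative, as expected.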
<h3 id=ExtendedBayesianPhasing>Extension of the Bayesian model to two sets of diploid bubbles</h3>
<p>
Consider two disjoint sets of diploid bubbles,
<i>S<sub>0</sub></i> and <i>S<sub>1</sub></i>.
Assume the bubbles in each of the two sets have already been
phased relative to each other, that is, we know how the
haplotypes of each bubble in each of the set
correspond to haplotypes of other bubbles in the same set.
The Bayesian model described in the previous section can be
extended to apply to the phasing of
<i>S<sub>0</sub></i> relative to <i>S<sub>1</sub></i>.
This can be useful, as it is possible that there is
insufficient evidence to phase individual bubbles of
<i>S<sub>0</sub></i> relative to bubbles of <i>S<sub>1</sub></i>,
while sufficient combined evidence still is present to support
phasing the two sets relative to each other.
<p>
Such an extended Bayesian model permits a hierarchical approach to
phasing, described in more details below, in which sets of
bubbles are iteratively phased and combined
into larger phased sets.
<p>
The phasing matrix for the extended Bayesian model
can be constructed similarly to the case of a single pair of bubbles.
The phasing matrix counts distinct reads, so if a read
appears in multiple bubbles of the two sets,
it is counted only once.
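<p>
A minimal sketch of constructing such a phasing matrix between two phased sets is shown below. It is only an illustration under stated assumptions: each bubble is represented as a pair of read-id sets already oriented to the set's internal phasing, and the set-level haplotype of a read is taken by majority vote over the bubbles it visits (an illustrative choice, not necessarily what Shasta does). Each distinct read is counted only once.

```python
from collections import Counter

def extended_phasing_matrix(s0, s1):
    """Phasing matrix between two internally phased sets of bubbles.
    Each set is a list of bubbles; each bubble is a pair (reads0, reads1)
    of read-id sets, oriented to the set's internal phasing."""
    votes = ({}, {})
    for bubbles, v in ((s0, votes[0]), (s1, votes[1])):
        for branch0, branch1 in bubbles:
            for read in branch0:
                v.setdefault(read, Counter())[0] += 1
            for read in branch1:
                v.setdefault(read, Counter())[1] += 1
    n = [[0, 0], [0, 0]]
    # Each distinct read appearing in both sets is counted exactly once.
    for read in votes[0].keys() & votes[1].keys():
        i = votes[0][read].most_common(1)[0][0]  # majority haplotype in S0
        j = votes[1][read].most_common(1)[0][0]  # majority haplotype in S1
        n[i][j] += 1
    return n
```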
<h3 id=PhasingGraph>Phasing graph</h3>
<p>
Phasing information for a set of diploid bubbles can
be described using a <i>phasing graph</i>,
an undirected graph in which each vertex represents a bubble
or a set of bubbles already phased relative to each other.
The phasing graph is used in two steps, described in more
detail below.
<ul>
<li>The goal of the first step, bubble removal, is to remove bubbles that
are likely to be the result of errors.
In this step, each vertex of the phasing graph corresponds
to a single bubble rather than a set of bubbles.
<li>The goal of the second step, phasing,
is to assign a haplotype to each branch of each bubble.
This is done hierarchically by iteratively phasing
groups of bubbles relative to each other - see below for more details.
</ul>
<p>
The phasing graph is used in both steps, with some differences.
<p>An edge between two vertices can be created if the two corresponding
bubbles or sets of bubbles have enough common reads to permit phasing. The precise
criteria for edge creation are different for bubble removal and phasing,
but in both cases each edge stores the <i>n<sub>ij</sub></i>
phasing matrix for the bubbles corresponding to the vertices it joins.
In both cases, we also enforce a minimum allowed value for
<i>n<sub>concordant</sub></i> and a maximum allowed value for
<i>n<sub>discordant</sub></i>.
This reflects the fact that, for an edge to carry sufficient information,
we would like <i>n<sub>concordant</sub></i> to be high and
<i>n<sub>discordant</sub></i> to be small.
These thresholds are controlled by the following command line options:
<ul>
<li>For bubble removal,
<code><a href='CommandLineOptions.html#Assembly.mode2.bubbleRemoval.minConcordantReadCount'>--Assembly.mode2.bubbleRemoval.minConcordantReadCount</a></code>
and
<code><a href='CommandLineOptions.html#Assembly.mode2.bubbleRemoval.maxDiscordantReadCount'>--Assembly.mode2.bubbleRemoval.maxDiscordantReadCount</a></code>.
<li>For phasing,
<code><a href='CommandLineOptions.html#Assembly.mode2.phasing.minConcordantReadCount'>--Assembly.mode2.phasing.minConcordantReadCount</a></code>
and
<code><a href='CommandLineOptions.html#Assembly.mode2.phasing.maxDiscordantReadCount'>--Assembly.mode2.phasing.maxDiscordantReadCount</a></code>.
</ul>
<p>
In addition, an edge is only generated if it carries sufficient useful information.
For the simpler case of phasing, when bubbles in error have
been removed, we can ignore the random hypothesis. We use the Bayesian model to compute
<p>
<i>
log(P<sub>phasing</sub>) =
<span style='font-size:150%'>|</span>
log
<span style='font-size:150%'>[</span>
P<sub>posterior</sub>(in-phase) / P<sub>posterior</sub>(out-of-phase)<span style='font-size:150%'>]</span>
<span style='font-size:150%'>|</span>
</i>
<p>
Because of the absolute value, this gives the logarithmic spacing between the
in-phase and out-of-phase hypotheses.
A large value means that the edge strongly supports either the
in-phase or out-of-phase hypothesis.
The edge is only generated if
<i>log(P<sub>phasing</sub>)</i>
is greater than the value specified
by command line option
<code><a href='CommandLineOptions.html#Assembly.mode2.phasing.minLogP'>--Assembly.mode2.phasing.minLogP</a></code>.
This value specifies the minimum required value of
<i>log(P<sub>phasing</sub>)</i>, expressed in
<a href='https://en.wikipedia.org/wiki/Decibel'>decibels</a> (dB).
<p>
The generated edge is then assigned a relative phase equal to 0 (in-phase) if
<p>
<i>
P<sub>posterior</sub>(in-phase) > P<sub>posterior</sub>(out-of-phase)
</i>
<p>and 1 (out-of-phase) otherwise.
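<p>
The edge-creation decision for phasing can be summarized by the following sketch (an illustration, not Shasta's code; it assumes the posterior log probabilities are available as natural logarithms and converts their spacing to dB):

```python
import math

def phasing_edge(log_p_in, log_p_out, min_log_p_db):
    """Decide whether to create a phasing-graph edge, and its relative phase.
    log_p_in, log_p_out: natural-log posterior probabilities of the in-phase
    and out-of-phase hypotheses; min_log_p_db: the minLogP threshold in dB.
    Returns 0 (in-phase), 1 (out-of-phase), or None (no edge)."""
    # Logarithmic spacing between the two hypotheses, converted to decibels:
    # dB = 10 * log10(ratio) = 10 * ln(ratio) / ln(10).
    log_p_phasing = 10.0 * abs(log_p_in - log_p_out) / math.log(10.0)
    if log_p_phasing <= min_log_p_db:
        return None  # insufficient information: no edge is generated
    return 0 if log_p_in > log_p_out else 1  # relative phase
```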
<p>
For bubble removal, things are a bit more complicated because
now we also have to allow for the random hypothesis.
For an edge to be generated we want it to decisively support
either the in-phase or the out-of-phase hypothesis against <b>both</b> the random hypothesis
and the opposite phasing hypothesis. Therefore in this case
<span style='white-space:nowrap'><i>log(P<sub>bubble-removal</sub>)</i></span>
for an edge is defined as follows:
<p>
If <i>
log
<span style='font-size:150%'>[</span>
P<sub>posterior</sub>(in-phase) / P<sub>posterior</sub>(out-of-phase)
<span style='font-size:150%'>]</span> > 0
</i>:
<p style='margin-left:4em'>
<i>
log(P<sub>bubble-removal</sub>) =
min
<span style='font-size:150%'>{</span>
log
<span style='font-size:150%'>[</span>
P<sub>posterior</sub>(in-phase) / P<sub>posterior</sub>(out-of-phase)
<span style='font-size:150%'>]</span>
,
log
<span style='font-size:150%'>[</span>
P<sub>posterior</sub>(in-phase) / P<sub>posterior</sub>(random)
<span style='font-size:150%'>]</span>
<span style='font-size:150%'>}</span>
</i>
<p>
Otherwise:
<p style='margin-left:4em'>
<i>
log(P<sub>bubble-removal</sub>) =
min
<span style='font-size:150%'>{</span>
log
<span style='font-size:150%'>[</span>
P<sub>posterior</sub>(out-of-phase) / P<sub>posterior</sub>(in-phase)
<span style='font-size:150%'>]</span>
,
log
<span style='font-size:150%'>[</span>
P<sub>posterior</sub>(out-of-phase) / P<sub>posterior</sub>(random)
<span style='font-size:150%'>]</span>
<span style='font-size:150%'>}</span>
</i>
<p>
In both cases,
<i>
log(P<sub>bubble-removal</sub>)
</i>
is the logarithmic spacing between the best of the two phasing hypotheses
and the next best hypothesis - either the opposite phasing hypothesis
or the random hypothesis.
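<p>
The two-branch definition above collapses into a single expression, sketched here for illustration (all arguments are log posterior probabilities in the same units):

```python
def log_p_bubble_removal(log_p_in, log_p_out, log_p_random):
    """Logarithmic spacing between the best phasing hypothesis and the
    next best hypothesis (the opposite phasing or the random hypothesis)."""
    best, other = ((log_p_in, log_p_out) if log_p_in > log_p_out
                   else (log_p_out, log_p_in))
    return min(best - other, best - log_p_random)
```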
<p>
The edge is only generated if
<i>log(P<sub>bubble-removal</sub>)</i>
is greater than the value specified
by command line option
<code><a href='CommandLineOptions.html#Assembly.mode2.bubbleRemoval.minLogP'>--Assembly.mode2.bubbleRemoval.minLogP</a></code>.
This value specifies the minimum required value of
<i>log(P<sub>bubble-removal</sub>)</i>, expressed in
<a href='https://en.wikipedia.org/wiki/Decibel'>decibels</a> (dB).
<p>
Therefore the presence of an edge indicates good support
for one of the phasing hypotheses, which in turn means that
neither of the two bubbles is due to errors.
<h3 id=BubbleRemoval>Bubble removal</h3>
<p>
Bubble removal is an iterative process.
At each iteration we remove bubbles that don't phase well with
adjacent bubbles. We create a phasing graph as described
<a href="#PhasingGraph">above</a>,
allowing the random hypothesis in the Bayesian model.
This is done under control of the following command line options:
<ul>
<li>
<code><a href='CommandLineOptions.html#Assembly.mode2.bubbleRemoval.minConcordantReadCount'>--Assembly.mode2.bubbleRemoval.minConcordantReadCount</a></code>
<li>
<code><a href='CommandLineOptions.html#Assembly.mode2.bubbleRemoval.maxDiscordantReadCount'>--Assembly.mode2.bubbleRemoval.maxDiscordantReadCount</a></code>
<li>
<code><a href='CommandLineOptions.html#Assembly.mode2.bubbleRemoval.minLogP'>--Assembly.mode2.bubbleRemoval.minLogP</a></code>
</ul>
Isolated vertices in the phasing graph
correspond to bubbles that don't phase well with any adjacent bubbles.
However, bubbles that correspond to vertices that are "almost"
isolated (that is, belong to a small connected component of the phasing graph)
are also suspicious. Therefore, the process for flagging
bad bubbles at each iteration works as follows:
<ul>
<li>The phasing graph is created, with one vertex per bubble.
<li>Connected components are computed.
<li>All bubbles that correspond to vertices in small connected components
are flagged as bad and removed from the assembly graph
by only keeping the strongest branch.
A connected component is considered small if it contains
fewer bubbles than the number specified via command line option
<code><a href='CommandLineOptions.html#Assembly.mode2.bubbleRemoval.componentSizeThreshold'>--Assembly.mode2.bubbleRemoval.componentSizeThreshold</a></code>.
<li>Any assembly graph simplifications made possible by the
removal of some bubbles are made. This includes merging of
adjacent linear segments and superbubble removal.
<li>The iteration continues by recreating the phasing graph
from scratch, and stops when no more bubbles are flagged as bad.
</ul>
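<p>
The component-size test at the heart of one bubble-removal iteration can be sketched as follows (an illustration using union-find; the function name is invented for the sketch):

```python
from collections import Counter

def flag_bad_bubbles(bubbles, edges, component_size_threshold):
    """One iteration of bubble removal: flag bubbles whose phasing-graph
    connected component is smaller than component_size_threshold.
    edges: pairs of bubble ids joined by a phasing-graph edge."""
    parent = {b: b for b in bubbles}
    def find(x):  # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for a, b in edges:
        parent[find(a)] = find(b)
    # Count component sizes, then flag bubbles in small components.
    sizes = Counter(find(b) for b in bubbles)
    return {b for b in bubbles if sizes[find(b)] < component_size_threshold}
```

Isolated vertices form components of size 1 and are always flagged when the threshold exceeds 1.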
<p>
Note that this process can erroneously flag as bad some bubbles that are
in low heterozygosity regions of the genome.
This can cause low heterozygosity regions to be assembled haploid.
<h3 id=PhasingBubbles>Phasing of bubbles</h3>
<p>
Mode 2 assembly works by phasing the bubbles, not the
reads. That is, the phasing process does not assign a
haplotype to reads. Rather, it assigns a haplotype to bubbles.
<p>
To achieve this,
the phasing graph is created again,
but now the <i>log(P<sub>phasing</sub>)</i>
for each edge is computed without allowing the random hypothesis,
as described
<a href="#PhasingGraph">above</a>.
We can do this because at this stage all bubbles in error
should have been removed.
Creation of the phasing graph is now under control of the
following command line options:
<ul>
<li>
<code><a href='CommandLineOptions.html#Assembly.mode2.phasing.minConcordantReadCount'>--Assembly.mode2.phasing.minConcordantReadCount</a></code>
<li>
<code><a href='CommandLineOptions.html#Assembly.mode2.phasing.maxDiscordantReadCount'>--Assembly.mode2.phasing.maxDiscordantReadCount</a></code>
<li>
<code><a href='CommandLineOptions.html#Assembly.mode2.phasing.minLogP'>--Assembly.mode2.phasing.minLogP</a></code>
</ul>
<p>
With the majority of bubbles in error having been removed,
the phasing graph is generally well behaved
and contains only a small number of inconsistencies.
We assign to each edge a weight
equal to
<i>log(P<sub>phasing</sub>)</i>
and compute an optimal spanning tree
that maximizes the sum of these weights.
With this definition, the spanning tree "prefers" edges
for which <i>log(P<sub>phasing</sub>)</i> is large.
Once the spanning tree is computed, we assign
haplotypes arbitrarily to a randomly chosen bubble,
and then use the spanning tree to assign
haplotypes to the remaining bubbles.
This can be done using a Breadth-First Search (BFS) on the spanning tree,
assigning haplotypes to vertices based on the relative phases
of the tree edges.
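<p>
A sketch of this process for one connected component is shown below. It is an illustration, not Shasta's code: it builds a maximum-weight spanning tree with Kruskal's algorithm, then propagates relative phases by BFS from an arbitrary root.

```python
from collections import defaultdict, deque

def phase_component(vertices, edges):
    """Phase one connected component of the phasing graph.
    edges: (u, v, weight, relative_phase), with weight = log(P_phasing)
    and relative_phase 0 (in-phase) or 1 (out-of-phase).
    Returns a 0/1 haplotype flip for each vertex."""
    # Maximum-weight spanning tree: Kruskal on edges in descending weight.
    parent = {v: v for v in vertices}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    tree = defaultdict(list)
    for u, v, w, phase in sorted(edges, key=lambda e: -e[2]):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            tree[u].append((v, phase))
            tree[v].append((u, phase))
    # BFS over the tree, propagating relative phases from an arbitrary root.
    root = next(iter(vertices))
    flip = {root: 0}
    queue = deque([root])
    while queue:
        u = queue.popleft()
        for v, phase in tree[u]:
            if v not in flip:
                flip[v] = flip[u] ^ phase  # out-of-phase edges flip haplotypes
                queue.append(v)
    return flip
```

Preferring high-weight edges means that, when the graph contains inconsistent cycles, the weakest (least informative) edges are the ones left out of the tree.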
<p>
The spanning tree process has to be done
separately for each connected component
of the phasing graph, because there are no edges
that allow the BFS to jump across connected
components. This expresses the fact that
we cannot phase across regions in the absence of
a sufficient number of common reads.
<p>
The bubbles of each connected component
are then combined into a single vertex of the phasing graph,
keeping track of the relative phasing of the bubbles
as determined by the spanning tree.
This hierarchical phasing process is then repeated until all
vertices are isolated. Each vertex then corresponds to
a "phased component" in the final phased assembly.
<p>
At the end of this process, each bubble that was not
marked as bad has a <code>componentId</code>
that defines the final phasing graph vertex each bubble
was part of, and haplotypes
0 and 1 are assigned to its two branches.
<h3 id=BubbleChains>Bubble chains</h3>
<p>
A <i>bubble chain</i> is a linear sequence
of diploid bubbles and intervening segments assembled haploid.
A portion of a phased bubble chain, as displayed in
<a href="https://rrwick.github.io/Bandage/">Bandage</a>,
is shown below.
<p>
<img src="BubbleChain.png" width="100%">
<p>
On each diploid bubble, the two branches are colored
red or blue depending on the haplotype assigned
to each branch during phasing.
The intervening haploid segments are
shown in grey.
In most cases, there is a haploid segment
in between each pair of adjacent bubbles,
but sometimes two bubbles can immediately follow each other.
<p>
Bubble chains are useful for organizing assembly output.
As explained above, the phasing process assigns each
bubble to a connected component
and each branch to a haplotype.
With this information, we can find in the assembly graph paths
that represent each of the two haplotypes.
For example, in the above picture, we can
create a path that follows the haploid segments plus the
blue branches of diploid segments:
<p>
<img src="BubbleChainPath.png" width="100%">
<p>
Paths created in this way provide haplotype separation on
a much longer scale than individual bubbles.
Haplotype separation can continue on both sides as long as we stay in
the same connected component.
<p>
A portion of a bubble chain in which all bubbles
belong
to the same connected component of the bubble graph
is called a <i>phased region</i>.
The intervening segments in between
adjacent phased regions on the same bubble chain are
called <i>unphased regions</i>.
The two paths for a phased region constructed as
defined above provide a representation of the
two haplotypes over the entire phased region.
This permits an assembly representation in which
each phased region is shown as a large
bubble, each of the two branches corresponding
to one of the two paths for the phased region.
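<p>
Extracting one haplotype's path through a bubble chain, as described above, can be sketched as follows (an illustration under stated assumptions: haploid segments are plain names, and each bubble is a pair of branch names already oriented so the tuple index equals the haplotype):

```python
def haplotype_path(chain, haplotype):
    """Extract one haplotype's path through a bubble chain.
    chain: haploid segment names interleaved with (branch0, branch1)
    bubble pairs; haplotype: 0 or 1."""
    return [item[haplotype] if isinstance(item, tuple) else item
            for item in chain]
```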
<h3 id=Mode2Output>Mode 2 assembly output</h3>
<p>
After a mode 2 assembly run,
the resulting assembly is output in three different
representations containing different levels of detail:
<ul>
<li>A <i>Detailed</i> representation
that includes all of the bubbles and their assignments
to bubble graph components and haplotypes.
This representation also includes
paths representing haplotypes.
<li>A <i>Phased</i> representation in which
each phased region is represented as a large bubble,
each branch corresponding to a path
in the Detailed representation of the assembly.
<li>A <i>Haploid</i> representation
in which each bubble chain is replaced
by a single linear segment. This is done by discarding,
in each phased region, the branch with the lowest average coverage.
This representation can be useful to understand and describe
the large-scale structure of the assembly.
</ul>
<p>
All three representations also include the unstructured and unphased assembled segments
that are not part of a bubble chain.
<p>
For each of the three representations, four files are written out:
<ul>
<li>A complete GFA file.
<li>A GFA file without sequence.
<li>A FASTA file.
<li>A csv file containing various columns of information
for each segment in the GFA and FASTA files.
</ul>
All GFA files are in
<a href="https://gfa-spec.github.io/GFA-spec/GFA1.html">GFA 1 format</a>.
So in total, there are 12 output files, summarized in the table below:
<p>
<table>
<tr>
<th>
<th>Detailed
<th>Phased
<th>Haploid
<tr>
<td class=centered>Complete GFA file
<td class=centered>Assembly-Detailed.gfa
<td class=centered>Assembly-Phased.gfa
<td class=centered>Assembly-Haploid.gfa
<tr>
<td class=centered>GFA file without sequence
<td class=centered>Assembly-Detailed-NoSequence.gfa
<td class=centered>Assembly-Phased-NoSequence.gfa
<td class=centered>Assembly-Haploid-NoSequence.gfa
<tr>
<td class=centered>FASTA file
<td class=centered>Assembly-Detailed.fasta
<td class=centered>Assembly-Phased.fasta
<td class=centered>Assembly-Haploid.fasta
<tr>
<td class=centered>csv file
<td class=centered>Assembly-Detailed.csv
<td class=centered>Assembly-Phased.csv
<td class=centered>Assembly-Haploid.csv
</table>
<p>
GFA files without sequence are much smaller
than the complete GFA files. So they can be useful to quickly
inspect the assembly in
<a href="https://rrwick.github.io/Bandage/">Bandage</a>
or to do other processing that does not require sequence.
<p>
FASTA files contain sequence in FASTA format
for all segments in each of the GFA files.
<p>
The csv files can be loaded in Bandage and used
to display information about the segments in each file.
They also define custom colors which facilitate
the display of the GFA files in Bandage.
<p>
Not all of the 12 files are necessary for all applications,
and so the following command line options can be used
to selectively suppress output:
<ul>
<li><code>--Assembly.mode2.suppressGfaOutput</code>
turns off all GFA output.
<li><code>--Assembly.mode2.suppressFastaOutput</code>
turns off all FASTA output.
<li><code>--Assembly.mode2.suppressDetailedOutput</code>
turns off output of the Detailed representation of the assembly.
<li><code>--Assembly.mode2.suppressPhasedOutput</code>
turns off output of the Phased representation of the assembly.
<li><code>--Assembly.mode2.suppressHaploidOutput</code>
turns off output of the Haploid representation of the assembly.
</ul>
<h3>Naming schemes for assembled segments and paths</h3>
<p>
Assembled segments in the detailed representation of the assembly
are named as follows:
<ul>
<li>Segments that are not part of a bubble are named using a numeric identifier.
For example, <code>12497</code>.
<li>Segments that are part of a bubble are named using a numeric identifier,
followed by <code>.0</code> and <code>.1</code>.
For example, <code>17495.0</code> and <code>17495.1</code>.
The portion preceding the period is the same for the
two branches of a bubble.
</ul>
<p>
Paths in the detailed representation of the assembly
are named like the corresponding segments in the phased representation
of the assembly, using the following naming schemes:
<ul>
<li>The two branches of a phased region are named
<code>PR.bubbleChain.position.component.haplotype</code>
(example: <code>PR.17.41.268.1</code>)
where <code>PR</code> stands for "Phased Region", and
the four fields are numeric identifiers with the following meaning:
<ul>
<li><code>bubbleChain</code> identifies the bubble chain
that the segment belongs to.
<li><code>position</code> is the position of the segment in the bubble chain.
This begins at 0 and increments by one at each phased or unphased region
encountered in the bubble chain.
<li><code>component</code> identifies the connected component of the bubble
graph that this phased region belongs to.
<li><code>haplotype</code> (<code>0</code> or <code>1</code>) identifies each
of the two branches (haplotypes) of the phased region. Usually a component
corresponds to a single phased region, but there are exceptions to that.
If two phased regions have the same component, their haplotypes are phased
relative to each other (that is, branch <code>0</code> of the first phased region
occurs in the same haplotype as branch <code>0</code> of the second phased region).
</ul>
<li>Unphased regions are named
<code>UR.bubbleChain.position</code>
(example: <code>UR.17.42</code>)
where <code>UR</code> stands for "Unphased Region", and
the two fields are numeric identifiers with the same meaning
as for phased regions above.
</ul>
To clarify the above description, the figure below shows names for a portion
of bubble chain <code>0</code>. Note how the <code>position</code>
field gets incremented along the bubble chain.
<img src="PhasedRegionsNaming.png" size="100%">
<p>
Bubble chains in the haploid representation of the assembly
are named <code>BC.bubbleChain</code> where
<code>bubbleChain</code> identifies the bubble chain.
For example, <code>BC.18</code>.
<p>
In the phased and haploid representations of the assembly,
segments that are not part of a bubble chain
have the same name as the corresponding segment
in the detailed representation of the assembly,
that is, a numeric identifier possibly followed by
<code>.0</code> or <code>.1</code>.
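<p>
The naming schemes above can be parsed mechanically, for example when post-processing the csv or GFA output. The following is an illustrative sketch (the function and the dictionary keys are invented for the sketch, not part of Shasta):

```python
def parse_name(name):
    """Classify an assembled segment or path name under the scheme above."""
    parts = name.split(".")
    if parts[0] == "PR":   # branch of a phased region
        keys = ("bubbleChain", "position", "component", "haplotype")
        return {"kind": "PR", **dict(zip(keys, map(int, parts[1:])))}
    if parts[0] == "UR":   # unphased region
        return {"kind": "UR", "bubbleChain": int(parts[1]),
                "position": int(parts[2])}
    if parts[0] == "BC":   # bubble chain (haploid representation)
        return {"kind": "BC", "bubbleChain": int(parts[1])}
    if len(parts) == 2:    # branch of a bubble
        return {"kind": "bubbleBranch", "id": int(parts[0]),
                "branch": int(parts[1])}
    return {"kind": "segment", "id": int(parts[0])}
```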
<h3 id=UsingMode2>Using mode 2 assembly</h3>
<p>
Phased diploid assembly is turned on via
<code>--Assembly.mode 2</code> or, more conveniently,
using command line option
<code>--config</code>
to specify one of the following configurations:
<ul>
<li><code>Nanopore-Phased-Jan2022.conf</code>
for standard nanopore reads.
<li><code>Nanopore-UL-Phased-Jan2022.conf</code>
for Ultra-Long (UL) nanopore reads.
</ul>
<p>
You can use <code>shasta --command listConfiguration --config</code>,
followed by a configuration name,
to get a full description of each of these two configurations,
including comments that describe in more detail the conditions
under which each of the two configurations was tested.
<p>
Please
<a href="https://github.com/paoloshasta/shasta/issues">file an issue</a>
in the Shasta GitHub repository for discussion of mode 2 assembly parameters.
<p>
Assembly mode 2 requires the following settings (both
included in the configurations mentioned above):
<ul>
<li><code>--Kmers.k</code> must be even.
<li><code>--ReadGraph.strandSeparationMethod 2</code>
(strict strand separation in the read graph).
</ul>
<h3>Assembly mode 2: typical results for diploid assembly</h3>
<p>
Limited testing of phased diploid (mode 2) assembly was done
using current (Guppy 5) nanopore reads for a human genome.
Longer reads give better phasing results.
When using Ultra-Long (UL, N50>50Kb) reads
base-called with Guppy 5 and "super" accuracy
at coverage 60-80x, typical results for human genomes
are as follows:
<ul>
<li>80 to 90% of the genome sequence is assembled diploid (that is, in
phased regions).
<li>Typical length of the large diploid bubbles in the phased
representation is 2 to 10 Mb.
<li>Each
of the branches in one of the large bubbles
maps very well to one haplotype of a reference genome
for the genome being sequenced, and much less well to the
opposite haplotype.
</ul>
<h3>Comparison of assembly mode 2 and iterative assembly</h3>
<p>
<a href="#IterativeAssembly">Iterative assembly</a> similarly addresses the separation of
copies of similar sequence due to ploidy, segmental duplication,
or other repeats.
It is more general in that it does not make
any assumption on the number of copies, while
assembly mode 2 has a built-in assumption of two copies.
<p>
However, iterative assembly uses a different approach:
it uses the assembly graph to separate reads and
then iteratively reassembles while enforcing
separation of the read pairs that were found, in the previous
iteration, to belong to different copies.
In summary, one could say that iterative assembly
"phases the reads", while mode 2 assembly
"phases the bubbles".
<p>
This iterative approach, while effective in some cases, proved
to be much less robust and caused abundant assembly artifacts and
breaks. A future generalization of
mode 2 assembly without the assumption of two copies
may be able to provide robust separation
in the presence of arbitrary numbers of copies.
<h2 id=HighPerformance>High performance computing techniques</h2>
<p>
The Shasta assembler is designed to run on a single machine
with an amount of memory sufficient to hold all of its data structures
(1-2 TB for a human assembly, depending on coverage).
All data structures are memory mapped
and can be set up to remain available after assembly completes.
Note that using such a large memory machine does not substantially
increase the cost per CPU cycle. For example,
on Amazon AWS the cost per virtual processor hour for
large memory instances is no more than twice the cost for laptop-sized instances.
<p>
There are various advantages to running assemblies in this way:
<ul>
<li>Running on a single machine simplifies the logistics of running
an assembly, versus for example running on a cluster of smaller machines
with shared storage.
<li>No disk input/output takes place during assembly,
except for loading the reads in memory and writing out assembly
results plus a few small files containing summary information.
This eliminates performance bottlenecks commonly caused by disk I/O.
<li>Having all data structures in memory makes it easier
and more efficient to exploit parallelism, even at very low granularity.
<li>Algorithm development is easier, as all data are immediately accessible
without the need to read files from disk. For example, it is possible
to easily rerun a specific portion of an assembly for experimentation
and debugging without any wait time for data structures to be read from disk.
<li>When the assembler data structures are set up to remain
in memory after the assembler completes,
<a href=InspectingResults.html>it is possible</a> to
use the Python API or the Shasta http server to
inspect and analyze an assembly and its data structures
(for example, display a portion of the read graph,
marker graph, or assembly graph).
<li>For optimal performance, assembler data structures can be mapped to
Linux 2 MB pages (<q>huge pages</q>). This makes it faster for
the operating system to allocate and manage the memory
and improves TLB efficiency. Using huge pages mapped on the
<code>hugetlbfs</code> filesystem
(Shasta executable options
<code>--<a href='CommandLineOptions.html#memoryMode'>memoryMode</a> filesystem --<a href='CommandLineOptions.html#memoryBacking'>memoryBacking</a> 2M</code>)
can result in a significant speed up (20-30%) for large assemblies.
However, it requires root privilege via <code>sudo</code>.
</ul>
<p>
To optimize performance in this setting, the Shasta assembler
uses various techniques:
<ul>
<li>
In most parallel steps, the division of work among threads
is not set up in advance but decided dynamically
(<q>Dynamic load balancing</q>). As a
thread finishes a piece of work assigned to it,
it grabs another chunk of work to do.
The process of assigning work items to threads is lock-free (that is, it uses atomic memory
primitives rather than mutexes or other synchronization methods provided by the operating system).
<li>Most large memory allocations are done via <code>mmap</code>
and can optionally be mapped to Linux 2 MB pages.
The Shasta code includes a C++ class for conveniently handling these large
memory-mapped regions as C++ containers with familiar semantics
(<code>class shasta::MemoryMapped::Vector</code>).
<li>In situations where a large number of small vectors are required,
a two-pass process is used (<code>class shasta::MemoryMapped::VectorOfVectors</code>). In the first
pass, one computes the length of each of the vectors. A single large area
is then allocated to hold all of the vectors contiguously,
together with another area to hold indexes pointing to the beginning of each of the short
vectors. In a second pass, the vectors are then filled. Both passes
can be performed in parallel and are entirely lock-free.
This process eliminates memory allocation overhead
that would be incurred if each of the vectors were to be
allocated individually.
</ul>
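<p>
The two-pass pattern used by <code>class shasta::MemoryMapped::VectorOfVectors</code> can be illustrated in a few lines of Python (a sequential sketch of the idea; the real implementation is a parallel, lock-free C++ class):

```python
def build_vector_of_vectors(items, num_keys):
    """Two-pass construction of many small vectors in one contiguous buffer.
    items: (key, value) pairs; returns (offsets, data), where vector k
    occupies data[offsets[k]:offsets[k + 1]]."""
    # Pass 1: count the length of each vector.
    counts = [0] * num_keys
    for key, _ in items:
        counts[key] += 1
    # Prefix sum gives the start of each vector in the single buffer.
    offsets = [0] * (num_keys + 1)
    for k in range(num_keys):
        offsets[k + 1] = offsets[k] + counts[k]
    # Pass 2: fill the buffer; cursor[k] is the next free slot of vector k.
    data = [None] * offsets[num_keys]
    cursor = offsets[:num_keys]
    for key, value in items:
        data[cursor[key]] = value
        cursor[key] += 1
    return offsets, data
```

Only two large allocations are made (the offsets and the data buffer), instead of one allocation per small vector.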
<p>
Thanks to these techniques, Shasta achieves close to 100% CPU utilization
during its parallel phases, even when using large numbers of threads.
However, a number of sequential phases
remain, which typically result in an average CPU utilization
of around 70% during a large assembly. Some of these sequential
phases can be parallelized, which would result in increased
average CPU utilization and improved assembly performance.
<p>
As of Shasta release 0.1.0, a typical human assembly at coverage 60x
runs in about 6 hours on a <code>x1.32xlarge</code> AWS instance,
which has 64 cores (128 virtual CPUs) and 1952 GB of memory.
<p>
See <a href=Performance.html>here</a> for information on
maximizing assembly performance.
<div class="goto-index"><a href="index.html">Table of contents</a></div>
</main>
</body>
</html>