test compile
set is_pic
target x86_64-unknown-linux-gnu
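; Cranelift filetest header: `test compile` runs full code generation on the
; function below and fails if compilation errors; `set is_pic` enables
; position-independent code, and `target` selects the x86-64 Linux ISA.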
function u0:0(i64, i64, i64) system_v {
ss0 = explicit_slot 16
ss1 = explicit_slot 1
ss2 = explicit_slot 16
ss3 = explicit_slot 1
ss4 = explicit_slot 16
ss5 = explicit_slot 8
ss6 = explicit_slot 16
ss7 = explicit_slot 16
ss8 = explicit_slot 16
ss9 = explicit_slot 16
ss10 = explicit_slot 16
ss11 = explicit_slot 16
ss12 = explicit_slot 16
ss13 = explicit_slot 16
ss14 = explicit_slot 16
ss15 = explicit_slot 16
ss16 = explicit_slot 16
ss17 = explicit_slot 16
ss18 = explicit_slot 24
ss19 = explicit_slot 4
ss20 = explicit_slot 4
ss21 = explicit_slot 4
ss22 = explicit_slot 4
ss23 = explicit_slot 16
ss24 = explicit_slot 16
ss25 = explicit_slot 16
ss26 = explicit_slot 16
ss27 = explicit_slot 48
ss28 = explicit_slot 16
ss29 = explicit_slot 16
ss30 = explicit_slot 32
ss31 = explicit_slot 16
ss32 = explicit_slot 8
ss33 = explicit_slot 8
ss34 = explicit_slot 16
ss35 = explicit_slot 16
ss36 = explicit_slot 16
ss37 = explicit_slot 48
ss38 = explicit_slot 16
ss39 = explicit_slot 16
ss40 = explicit_slot 32
ss41 = explicit_slot 16
ss42 = explicit_slot 8
ss43 = explicit_slot 8
ss44 = explicit_slot 16
ss45 = explicit_slot 16
ss46 = explicit_slot 16
ss47 = explicit_slot 16
ss48 = explicit_slot 16
ss49 = explicit_slot 16
ss50 = explicit_slot 16
ss51 = explicit_slot 8
ss52 = explicit_slot 4
ss53 = explicit_slot 4
ss54 = explicit_slot 16
ss55 = explicit_slot 16
ss56 = explicit_slot 16
ss57 = explicit_slot 2
ss58 = explicit_slot 4
ss59 = explicit_slot 2
ss60 = explicit_slot 16
ss61 = explicit_slot 16
ss62 = explicit_slot 16
ss63 = explicit_slot 16
ss64 = explicit_slot 16
ss65 = explicit_slot 16
ss66 = explicit_slot 16
ss67 = explicit_slot 16
ss68 = explicit_slot 8
ss69 = explicit_slot 16
ss70 = explicit_slot 16
ss71 = explicit_slot 48
ss72 = explicit_slot 16
ss73 = explicit_slot 16
ss74 = explicit_slot 32
ss75 = explicit_slot 16
ss76 = explicit_slot 8
ss77 = explicit_slot 8
ss78 = explicit_slot 16
ss79 = explicit_slot 16
ss80 = explicit_slot 16
ss81 = explicit_slot 48
ss82 = explicit_slot 16
ss83 = explicit_slot 16
ss84 = explicit_slot 32
ss85 = explicit_slot 16
ss86 = explicit_slot 8
ss87 = explicit_slot 8
ss88 = explicit_slot 16
ss89 = explicit_slot 16
ss90 = explicit_slot 4
ss91 = explicit_slot 16
ss92 = explicit_slot 16
ss93 = explicit_slot 16
ss94 = explicit_slot 16
ss95 = explicit_slot 16
ss96 = explicit_slot 16
ss97 = explicit_slot 2
ss98 = explicit_slot 16
ss99 = explicit_slot 16
ss100 = explicit_slot 16
ss101 = explicit_slot 16
ss102 = explicit_slot 16
ss103 = explicit_slot 16
ss104 = explicit_slot 8
ss105 = explicit_slot 16
ss106 = explicit_slot 16
ss107 = explicit_slot 4
ss108 = explicit_slot 16
ss109 = explicit_slot 16
ss110 = explicit_slot 16
ss111 = explicit_slot 16
ss112 = explicit_slot 4
ss113 = explicit_slot 4
ss114 = explicit_slot 4
ss115 = explicit_slot 4
ss116 = explicit_slot 16
ss117 = explicit_slot 16
ss118 = explicit_slot 16
ss119 = explicit_slot 16
ss120 = explicit_slot 16
ss121 = explicit_slot 4
ss122 = explicit_slot 4
ss123 = explicit_slot 16
ss124 = explicit_slot 16
ss125 = explicit_slot 16
ss126 = explicit_slot 2
ss127 = explicit_slot 16
ss128 = explicit_slot 16
ss129 = explicit_slot 16
ss130 = explicit_slot 16
ss131 = explicit_slot 16
ss132 = explicit_slot 4
ss133 = explicit_slot 16
ss134 = explicit_slot 16
ss135 = explicit_slot 16
ss136 = explicit_slot 16
ss137 = explicit_slot 16
ss138 = explicit_slot 16
ss139 = explicit_slot 2
ss140 = explicit_slot 16
ss141 = explicit_slot 16
ss142 = explicit_slot 16
ss143 = explicit_slot 16
ss144 = explicit_slot 4
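; ss0..ss144 above are explicit stack slots, sized in bytes. The gv* entries
; below are global values: addresses of colocated symbols (u1:*) materialized
; in the body via `global_value`; most of them appear to feed the trap paths.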
gv0 = symbol colocated u1:22
gv1 = symbol colocated u1:23
gv2 = symbol colocated u1:24
gv3 = symbol colocated u1:23
gv4 = symbol colocated u1:25
gv5 = symbol colocated u1:23
gv6 = symbol colocated u1:26
gv7 = symbol colocated u1:23
gv8 = symbol colocated u1:27
gv9 = symbol colocated u1:23
gv10 = symbol colocated u1:28
gv11 = symbol colocated u1:23
gv12 = symbol colocated u1:29
gv13 = symbol colocated u1:30
gv14 = symbol colocated u1:31
gv15 = symbol colocated u1:23
gv16 = symbol colocated u1:29
gv17 = symbol colocated u1:32
gv18 = symbol colocated u1:32
gv19 = symbol colocated u1:32
gv20 = symbol colocated u1:32
gv21 = symbol colocated u1:32
gv22 = symbol colocated u1:33
gv23 = symbol colocated u1:34
gv24 = symbol colocated u1:23
gv25 = symbol colocated u1:35
gv26 = symbol colocated u1:36
gv27 = symbol colocated u1:23
gv28 = symbol colocated u1:29
gv29 = symbol colocated u1:32
gv30 = symbol colocated u1:37
gv31 = symbol colocated u1:38
gv32 = symbol colocated u1:30
gv33 = symbol colocated u1:32
gv34 = symbol colocated u1:32
gv35 = symbol colocated u1:29
gv36 = symbol colocated u1:32
gv37 = symbol colocated u1:30
gv38 = symbol colocated u1:32
gv39 = symbol colocated u1:39
gv40 = symbol colocated u1:40
gv41 = symbol colocated u1:41
gv42 = symbol colocated u1:23
gv43 = symbol colocated u1:29
gv44 = symbol colocated u1:42
gv45 = symbol colocated u1:29
gv46 = symbol colocated u1:30
gv47 = symbol colocated u1:29
gv48 = symbol colocated u1:30
gv49 = symbol colocated u1:32
gv50 = symbol colocated u1:43
gv51 = symbol colocated u1:44
gv52 = symbol colocated u1:45
gv53 = symbol colocated u1:23
gv54 = symbol colocated u1:46
gv55 = symbol colocated u1:47
gv56 = symbol colocated u1:48
gv57 = symbol colocated u1:23
gv58 = symbol colocated u1:32
gv59 = symbol colocated u1:39
gv60 = symbol colocated u1:49
gv61 = symbol colocated u1:49
gv62 = symbol colocated u1:49
gv63 = symbol colocated u1:38
gv64 = symbol colocated u1:30
gv65 = symbol colocated u1:32
gv66 = symbol colocated u1:50
gv67 = symbol colocated u1:23
gv68 = symbol colocated u1:29
gv69 = symbol colocated u1:51
gv70 = symbol colocated u1:29
gv71 = symbol colocated u1:30
gv72 = symbol colocated u1:32
gv73 = symbol colocated u1:49
gv74 = symbol colocated u1:32
sig0 = (i64) system_v
sig1 = (i64) system_v
sig2 = (i64) system_v
sig3 = (i64) system_v
sig4 = (i64) system_v
sig5 = (i64) system_v
sig6 = (i64, i64, i64) system_v
sig7 = (i64) -> i8 system_v
sig8 = (i64) system_v
sig9 = (i64) system_v
sig10 = (i64, i64, i64) system_v
sig11 = (i64) -> i8 system_v
sig12 = (i64) system_v
sig13 = (i64) system_v
sig14 = (i64) -> i64 system_v
sig15 = (i64) system_v
sig16 = (i64) system_v
sig17 = (i64) system_v
sig18 = (i64) system_v
sig19 = (i64) system_v
sig20 = (i64) system_v
sig21 = (i64) system_v
sig22 = (i64, i64) system_v
sig23 = (i64) system_v
sig24 = (i64, i64, i16) system_v
sig25 = (i64, i64, i16) system_v
sig26 = (i64) system_v
sig27 = (i64) system_v
sig28 = (i64) system_v
sig29 = (i64) system_v
sig30 = (i64, i16, i16) system_v
sig31 = (i64, i64, i64) system_v
sig32 = (i64, i64, i64) system_v
sig33 = (i64, i64, i64) system_v
sig34 = (i64, i64) -> i8 system_v
sig35 = (i64, i64, i64) system_v
sig36 = (i64, i64) -> i8 system_v
sig37 = (i64, i64, i64) system_v
sig38 = (i64, i64, i64) system_v
sig39 = (i64, i64) system_v
sig40 = (i64) system_v
sig41 = (i64, i64) -> i8 system_v
sig42 = (i64, i64, i64) system_v
sig43 = (i64, i64) -> i8 system_v
sig44 = (i64, i64, i64) system_v
sig45 = (i64, i64, i64) system_v
sig46 = (i64, i64) system_v
sig47 = (i64) system_v
sig48 = (i64) system_v
sig49 = (i64) system_v
sig50 = (i64) system_v
sig51 = (i64) system_v
sig52 = (i64) system_v
sig53 = (i64) system_v
sig54 = (i64, i32) system_v
sig55 = (i64) system_v
sig56 = (i64) system_v
sig57 = (i64) system_v
sig58 = (i64) system_v
sig59 = (i64) system_v
sig60 = (i64) system_v
sig61 = (i64) system_v
sig62 = (i64) system_v
sig63 = (i64) system_v
sig64 = (i64) system_v
sig65 = (i64) system_v
sig66 = (i64) system_v
sig67 = (i64) system_v
sig68 = (i64) system_v
sig69 = (i64) system_v
sig70 = (i64, i64, i64) system_v
sig71 = (i64) system_v
sig72 = (i64, i64, i16, i64, i64, i64, i64, i64) system_v
sig73 = (i64, i64) -> i8 system_v
sig74 = (i64, i64, i64) system_v
sig75 = (i64, i64) -> i8 system_v
sig76 = (i64, i64, i64) system_v
sig77 = (i64, i64, i64) system_v
sig78 = (i64, i64) system_v
sig79 = (i64) system_v
sig80 = (i64, i64) -> i8 system_v
sig81 = (i64, i64, i64) system_v
sig82 = (i64, i64) -> i8 system_v
sig83 = (i64, i64, i64) system_v
sig84 = (i64, i64, i64) system_v
sig85 = (i64, i64) system_v
sig86 = (i64) system_v
sig87 = (i64) system_v
sig88 = (i64) system_v
sig89 = (i64) system_v
sig90 = (i64) system_v
sig91 = (i64) system_v
sig92 = (i64) system_v
sig93 = (i64) system_v
sig94 = (i64) system_v
sig95 = (i64) system_v
sig96 = (i64) system_v
sig97 = (i64) system_v
sig98 = (i64) system_v
sig99 = (i64) system_v
sig100 = (i64) system_v
sig101 = (i64, i64, i64) system_v
sig102 = (i64) system_v
sig103 = (i64) system_v
sig104 = (i64, i64, i16, i64, i64, i64, i64, i64) system_v
sig105 = (i64) system_v
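; sig0..sig105 declare call signatures (System V calling convention); the fn*
; entries below bind external functions (u0:*) to those signatures for use by
; `call` and `func_addr`.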
fn0 = u0:83 sig0
fn1 = u0:13 sig1
fn2 = u0:83 sig2
fn3 = u0:13 sig3
fn4 = u0:83 sig4
fn5 = u0:13 sig5
fn6 = u0:84 sig6
fn7 = u0:85 sig7
fn8 = u0:83 sig8
fn9 = u0:13 sig9
fn10 = u0:86 sig10
fn11 = u0:85 sig11
fn12 = u0:83 sig12
fn13 = u0:13 sig13
fn14 = u0:16 sig14
fn15 = u0:83 sig15
fn16 = u0:13 sig16
fn17 = u0:13 sig17
fn18 = u0:13 sig18
fn19 = u0:83 sig19
fn20 = u0:13 sig20
fn21 = u0:13 sig21
fn22 = u0:87 sig22
fn23 = u0:13 sig23
fn24 = u0:88 sig24
fn25 = u0:88 sig25
fn26 = u0:13 sig26
fn27 = u0:13 sig27
fn28 = u0:13 sig28
fn29 = u0:13 sig29
fn30 = u0:89 sig30
fn31 = u0:90 sig31
fn32 = u0:90 sig32
fn33 = u0:90 sig33
fn34 = u0:91 sig34
fn35 = u0:92 sig35
fn36 = u0:91 sig36
fn37 = u0:92 sig37
fn38 = u0:11 sig38
fn39 = u0:12 sig39
fn40 = u0:13 sig40
fn41 = u0:91 sig41
fn42 = u0:92 sig42
fn43 = u0:91 sig43
fn44 = u0:92 sig44
fn45 = u0:11 sig45
fn46 = u0:12 sig46
fn47 = u0:13 sig47
fn48 = u0:13 sig48
fn49 = u0:13 sig49
fn50 = u0:13 sig50
fn51 = u0:13 sig51
fn52 = u0:13 sig52
fn53 = u0:13 sig53
fn54 = u0:93 sig54
fn55 = u0:13 sig55
fn56 = u0:13 sig56
fn57 = u0:13 sig57
fn58 = u0:13 sig58
fn59 = u0:13 sig59
fn60 = u0:13 sig60
fn61 = u0:13 sig61
fn62 = u0:83 sig62
fn63 = u0:13 sig63
fn64 = u0:13 sig64
fn65 = u0:13 sig65
fn66 = u0:13 sig66
fn67 = u0:13 sig67
fn68 = u0:13 sig68
fn69 = u0:13 sig69
fn70 = u0:94 sig70
fn71 = u0:13 sig71
fn72 = u0:95 sig72
fn73 = u0:96 sig73
fn74 = u0:97 sig74
fn75 = u0:96 sig75
fn76 = u0:97 sig76
fn77 = u0:11 sig77
fn78 = u0:12 sig78
fn79 = u0:13 sig79
fn80 = u0:91 sig80
fn81 = u0:92 sig81
fn82 = u0:91 sig82
fn83 = u0:92 sig83
fn84 = u0:11 sig84
fn85 = u0:12 sig85
fn86 = u0:13 sig86
fn87 = u0:13 sig87
fn88 = u0:13 sig88
fn89 = u0:13 sig89
fn90 = u0:13 sig90
fn91 = u0:13 sig91
fn92 = u0:13 sig92
fn93 = u0:13 sig93
fn94 = u0:13 sig94
fn95 = u0:83 sig95
fn96 = u0:13 sig96
fn97 = u0:13 sig97
fn98 = u0:13 sig98
fn99 = u0:13 sig99
fn100 = u0:13 sig100
fn101 = u0:94 sig101
fn102 = u0:13 sig102
fn103 = u0:13 sig103
fn104 = u0:95 sig104
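; block0 is the entry block; its parameters v0, v1, v2 carry the function's
; three i64 arguments. Lines of the form `vN -> vM` are value aliases: vN names
; the same SSA value as vM. The repeated `icmp`/`brif` sequences that branch to
; blocks ending in `trap user1` or `trap user2` look like front-end-emitted
; runtime checks (bounds/overflow assertions); user trap codes are defined by
; the producer, not by Cranelift.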
block0(v0: i64, v1: i64, v2: i64):
v113 -> v1
v124 -> v1
v136 -> v1
v148 -> v1
v160 -> v1
v185 -> v1
v222 -> v1
v237 -> v1
v241 -> v1
v256 -> v1
v262 -> v1
@0001 v3 = stack_addr.i64 ss0
v4 = load.i64 aligned v2
store aligned v4, v3
v5 = load.i64 aligned v2+8
store aligned v5, v3+8
@0002 v6 = stack_addr.i64 ss1
v7 = stack_addr.i64 ss2
v8 = stack_addr.i64 ss3
v9 = stack_addr.i64 ss4
v10 = stack_addr.i64 ss5
v11 = stack_addr.i64 ss6
v12 = stack_addr.i64 ss7
v13 = stack_addr.i64 ss8
v14 = stack_addr.i64 ss9
v15 = stack_addr.i64 ss10
v16 = stack_addr.i64 ss11
v17 = stack_addr.i64 ss12
v18 = stack_addr.i64 ss13
v19 = stack_addr.i64 ss14
v20 = stack_addr.i64 ss15
v21 = stack_addr.i64 ss16
v22 = stack_addr.i64 ss17
v23 = stack_addr.i64 ss18
v24 = stack_addr.i64 ss19
v25 = stack_addr.i64 ss20
v26 = stack_addr.i64 ss21
v27 = stack_addr.i64 ss22
v28 = stack_addr.i64 ss23
v29 = stack_addr.i64 ss24
v30 = stack_addr.i64 ss25
v31 = stack_addr.i64 ss26
v32 = stack_addr.i64 ss27
v33 = stack_addr.i64 ss28
v34 = stack_addr.i64 ss29
v35 = stack_addr.i64 ss30
v36 = stack_addr.i64 ss31
v37 = stack_addr.i64 ss32
v38 = stack_addr.i64 ss33
v39 = stack_addr.i64 ss34
v40 = stack_addr.i64 ss35
v41 = stack_addr.i64 ss36
v42 = stack_addr.i64 ss37
v43 = stack_addr.i64 ss38
v44 = stack_addr.i64 ss39
v45 = stack_addr.i64 ss40
v46 = stack_addr.i64 ss41
v47 = stack_addr.i64 ss42
v48 = stack_addr.i64 ss43
v49 = stack_addr.i64 ss44
v50 = stack_addr.i64 ss45
v51 = stack_addr.i64 ss46
v52 = stack_addr.i64 ss47
v53 = stack_addr.i64 ss48
v54 = stack_addr.i64 ss49
v55 = stack_addr.i64 ss50
v56 = stack_addr.i64 ss51
v57 = stack_addr.i64 ss52
v58 = stack_addr.i64 ss53
v59 = stack_addr.i64 ss54
v60 = stack_addr.i64 ss55
v61 = stack_addr.i64 ss56
v62 = stack_addr.i64 ss57
v63 = stack_addr.i64 ss58
v64 = stack_addr.i64 ss59
v65 = stack_addr.i64 ss60
v66 = stack_addr.i64 ss61
@0003 v67 = stack_addr.i64 ss62
v68 = stack_addr.i64 ss63
v69 = stack_addr.i64 ss64
v70 = stack_addr.i64 ss65
v71 = stack_addr.i64 ss66
v72 = stack_addr.i64 ss67
v73 = stack_addr.i64 ss68
v74 = stack_addr.i64 ss69
v75 = stack_addr.i64 ss70
v76 = stack_addr.i64 ss71
v77 = stack_addr.i64 ss72
v78 = stack_addr.i64 ss73
v79 = stack_addr.i64 ss74
v80 = stack_addr.i64 ss75
v81 = stack_addr.i64 ss76
v82 = stack_addr.i64 ss77
v83 = stack_addr.i64 ss78
v84 = stack_addr.i64 ss79
v85 = stack_addr.i64 ss80
v86 = stack_addr.i64 ss81
v87 = stack_addr.i64 ss82
v88 = stack_addr.i64 ss83
v89 = stack_addr.i64 ss84
v90 = stack_addr.i64 ss85
v91 = stack_addr.i64 ss86
v92 = stack_addr.i64 ss87
v93 = stack_addr.i64 ss88
v94 = stack_addr.i64 ss89
v95 = stack_addr.i64 ss90
v96 = stack_addr.i64 ss91
v97 = stack_addr.i64 ss92
v98 = stack_addr.i64 ss93
v99 = stack_addr.i64 ss94
v100 = stack_addr.i64 ss95
v101 = stack_addr.i64 ss96
v102 = stack_addr.i64 ss97
v103 = stack_addr.i64 ss98
v104 = stack_addr.i64 ss99
v105 = stack_addr.i64 ss100
v106 = stack_addr.i64 ss101
v107 = stack_addr.i64 ss102
v108 = stack_addr.i64 ss103
v109 = stack_addr.i64 ss104
v110 = stack_addr.i64 ss105
v111 = stack_addr.i64 ss106
v112 = stack_addr.i64 ss107
jump block1
block1:
v114 = load.i64 v113
v115 = iconst.i64 0
v116 = icmp ugt v114, v115
v118 = uextend.i32 v116
v119 = icmp_imm eq v118, 0
v121 = uextend.i32 v119
brif v121, block2, block3
block2:
v122 = global_value.i64 gv0
v123 = global_value.i64 gv1
trap user1
block3:
v125 = iadd_imm.i64 v124, 8
v126 = load.i64 v125
v127 = iconst.i64 0
v128 = icmp ugt v126, v127
v130 = uextend.i32 v128
v131 = icmp_imm eq v130, 0
v133 = uextend.i32 v131
brif v133, block4, block5
block4:
v134 = global_value.i64 gv2
v135 = global_value.i64 gv3
trap user1
block5:
v137 = iadd_imm.i64 v136, 16
v138 = load.i64 v137
v139 = iconst.i64 0
v140 = icmp ugt v138, v139
v142 = uextend.i32 v140
v143 = icmp_imm eq v142, 0
v145 = uextend.i32 v143
brif v145, block6, block7
block6:
v146 = global_value.i64 gv4
v147 = global_value.i64 gv5
trap user1
block7:
v149 = load.i64 v148
v150 = iadd_imm.i64 v148, 16
v151 = load.i64 v150
call fn6(v7, v149, v151)
jump block8
block8:
v152 = call fn7(v7)
jump block9
block9:
v153 = load.i8 v6
v154 = uextend.i32 v153
v155 = icmp_imm eq v154, 0
v157 = uextend.i32 v155
brif v157, block10, block11
block10:
v158 = global_value.i64 gv6
v159 = global_value.i64 gv7
trap user1
block11:
v161 = load.i64 v160
v162 = iadd_imm.i64 v160, 8
v163 = load.i64 v162
call fn10(v9, v161, v163)
jump block12
block12:
v164 = call fn11(v9)
jump block13
block13:
v165 = load.i8 v8
v166 = uextend.i32 v165
v167 = icmp_imm eq v166, 0
v169 = uextend.i32 v167
brif v169, block14, block15
block14:
v170 = global_value.i64 gv8
v171 = global_value.i64 gv9
trap user1
block15:
v172 = load.i64 aligned v3
v173 = load.i64 aligned v3+8
v174 = call fn14(v11)
jump block16
block16:
v175 = iconst.i64 17
v176 = load.i64 v10
v177 = icmp uge v176, v175
v179 = uextend.i32 v177
v180 = icmp_imm eq v179, 0
v182 = uextend.i32 v180
brif v182, block17, block18
block17:
v183 = global_value.i64 gv10
v184 = global_value.i64 gv11
trap user1
block18:
v186 = load.i64 v185
v187 = iadd_imm.i64 v185, 16
v188 = load.i64 v187
v189 = iadd v186, v188
v190 = iconst.i8 0
v191 = stack_addr.i64 ss108
v192 = stack_addr.i64 ss108
v193 = load.i64 aligned v192
v194 = load.i64 aligned v192+8
v195 = iadd_imm.i64 v12, 8
v196 = load.i8 v195
v197 = uextend.i32 v196
brif v197, block164, block19
block164:
v198 = global_value.i64 gv12
trap user2
block19:
v199 = load.i64 v12
v213 -> v199
v200 = iconst.i64 1
v201 = iconst.i32 61
v202 = ishl v200, v201
v203 = iconst.i8 0
v204 = stack_addr.i64 ss109
v205 = stack_addr.i64 ss109
v206 = load.i64 aligned v205
v207 = load.i64 aligned v205+8
v208 = iadd_imm.i64 v13, 8
v209 = load.i8 v208
v210 = uextend.i32 v209
brif v210, block163, block20
block163:
v211 = global_value.i64 gv13
trap user2
block20:
v212 = load.i64 v13
v214 = icmp.i64 ult v213, v212
v216 = uextend.i32 v214
v217 = icmp_imm eq v216, 0
v219 = uextend.i32 v217
brif v219, block21, block22
block21:
v220 = global_value.i64 gv14
v221 = global_value.i64 gv15
trap user1
block22:
v223 = load.i64 v222
v224 = iadd_imm.i64 v222, 16
v225 = load.i64 v224
v226 = iadd v223, v225
v227 = iconst.i8 0
v228 = stack_addr.i64 ss110
v229 = stack_addr.i64 ss110
v230 = load.i64 aligned v229
v231 = load.i64 aligned v229+8
v232 = iadd_imm.i64 v16, 8
v233 = load.i8 v232
v234 = uextend.i32 v233
brif v234, block162, block23
block162:
v235 = global_value.i64 gv16
trap user2
block23:
v236 = load.i64 v16
v238 = iadd_imm.i64 v237, 24
v239 = load.i16 v238
v240 = iadd_imm.i64 v15, 8
call fn22(v14, v15)
jump block24
block24:
v242 = load.i64 v241
v243 = iadd_imm.i64 v241, 8
v244 = load.i64 v243
v245 = isub v242, v244
v246 = iconst.i8 0
v247 = stack_addr.i64 ss111
v248 = stack_addr.i64 ss111
v249 = load.i64 aligned v248
v250 = load.i64 aligned v248+8
v251 = iadd_imm.i64 v19, 8
v252 = load.i8 v251
v253 = uextend.i32 v252
brif v253, block161, block25
block161:
v254 = global_value.i64 gv17
trap user2
block25:
v255 = load.i64 v19
v257 = iadd_imm.i64 v256, 24
v258 = load.i16 v257
v259 = iadd_imm.i64 v18, 8
v260 = iadd_imm.i64 v14, 8
v261 = load.i16 v260
call fn24(v17, v18, v261)
jump block26
block26:
v263 = load.i64 v262
v264 = iadd_imm.i64 v262, 24
v265 = load.i16 v264
v266 = iadd_imm.i64 v21, 8
v267 = iadd_imm.i64 v14, 8
v268 = load.i16 v267
call fn25(v20, v21, v268)
jump block27
block27:
v269 = iadd_imm.i64 v14, 8
v270 = load.i16 v269
v271 = iconst.i16 -60
v272 = isub v271, v270
v273 = iconst.i8 0
v274 = stack_addr.i64 ss112
v275 = stack_addr.i64 ss112
v276 = load.i32 aligned v275
v277 = iadd_imm.i64 v24, 2
v278 = load.i8 v277
v279 = uextend.i32 v278
brif v279, block160, block28
block160:
v280 = global_value.i64 gv18
trap user2
block28:
v281 = load.i16 v24
v282 = iconst.i16 64
v283 = isub v281, v282
v284 = iconst.i8 0
v285 = stack_addr.i64 ss113
v286 = stack_addr.i64 ss113
v287 = load.i32 aligned v286
v288 = iadd_imm.i64 v25, 2
v289 = load.i8 v288
v290 = uextend.i32 v289
brif v290, block159, block29
block159:
v291 = global_value.i64 gv19
trap user2
block29:
v292 = load.i16 v25
v317 -> v292
v293 = iadd_imm.i64 v14, 8
v294 = load.i16 v293
v295 = iconst.i16 -32
v296 = isub v295, v294
v297 = iconst.i8 0
v298 = stack_addr.i64 ss114
v299 = stack_addr.i64 ss114
v300 = load.i32 aligned v299
v301 = iadd_imm.i64 v26, 2
v302 = load.i8 v301
v303 = uextend.i32 v302
brif v303, block158, block30
block158:
v304 = global_value.i64 gv20
trap user2
block30:
v305 = load.i16 v26
v306 = iconst.i16 64
v307 = isub v305, v306
v308 = iconst.i8 0
v309 = stack_addr.i64 ss115
v310 = stack_addr.i64 ss115
v311 = load.i32 aligned v310
v312 = iadd_imm.i64 v27, 2
v313 = load.i8 v312
v314 = uextend.i32 v313
brif v314, block157, block31
block157:
v315 = global_value.i64 gv21
trap user2
block31:
v316 = load.i16 v27
call fn30(v23, v317, v316)
jump block32
block32:
v318 = load.i16 v23
v1007 -> v318
v319 = iadd_imm.i64 v23, 8
v320 = load.i64 aligned v319
v321 = load.i64 aligned v319+8
call fn31(v28, v14, v22)
jump block33
block33:
call fn32(v29, v17, v22)
jump block34
block34:
call fn33(v30, v20, v22)
jump block35
block35:
v322 = iconst.i8 1
v323 = uextend.i32 v322
brif v323, block36, block42
block36:
v324 = iadd_imm.i64 v28, 8
v325 = iadd_imm.i64 v29, 8
v326 = iadd_imm.i64 v31, 8
v327 = load.i64 v31
v340 -> v327
v328 = iadd_imm.i64 v31, 8
v329 = load.i64 v328
v341 -> v329
v330 = load.i16 v327
v331 = load.i16 v329
v332 = icmp eq v330, v331
v334 = uextend.i32 v332
v335 = icmp_imm eq v334, 0
v337 = uextend.i32 v335
brif v337, block37, block38
block37:
v338 = global_value.i64 gv22
v339 = iconst.i64 3
v342 = iadd_imm.i64 v36, 8
v343 = load.i64 v36
v344 = iadd_imm.i64 v36, 8
v345 = load.i64 v344
v347 -> v345
v346 = func_addr.i64 fn34
call fn35(v39, v343, v346)
jump block39
block38:
jump block42
block39:
v348 = func_addr.i64 fn36
call fn37(v40, v347, v348)
jump block40
block40:
v349 = iconst.i64 0
v350 = imul_imm v349, 16
v351 = iadd.i64 v35, v350
v352 = load.i64 aligned v39
v353 = load.i64 aligned v39+8
v354 = iconst.i64 1
v355 = imul_imm v354, 16
v356 = iadd.i64 v35, v355
v357 = load.i64 aligned v40
v358 = load.i64 aligned v40+8
v359 = iconst.i64 2
call fn38(v32, v33, v34)
jump block41
block41:
v360 = global_value.i64 gv23
call fn39(v32, v360)
v361 = global_value.i64 gv24
trap user1
block42:
v362 = iconst.i8 1
v363 = uextend.i32 v362
brif v363, block43, block49(v1007)
block43:
v364 = iadd_imm.i64 v28, 8
v365 = iadd_imm.i64 v30, 8
v366 = iadd_imm.i64 v41, 8
v367 = load.i64 v41
v380 -> v367
v368 = iadd_imm.i64 v41, 8
v369 = load.i64 v368
v381 -> v369
v370 = load.i16 v367
v371 = load.i16 v369
v372 = icmp eq v370, v371
v374 = uextend.i32 v372
v375 = icmp_imm eq v374, 0
v377 = uextend.i32 v375
brif v377, block44, block45
block44:
v378 = global_value.i64 gv25
v379 = iconst.i64 3
v382 = iadd_imm.i64 v46, 8
v383 = load.i64 v46
v384 = iadd_imm.i64 v46, 8
v385 = load.i64 v384
v387 -> v385
v386 = func_addr.i64 fn41
call fn42(v49, v383, v386)
jump block46
block45:
jump block49(v1007)
block46:
v388 = func_addr.i64 fn43
call fn44(v50, v387, v388)
jump block47
block47:
v389 = iconst.i64 0
v390 = imul_imm v389, 16
v391 = iadd.i64 v45, v390
v392 = load.i64 aligned v49
v393 = load.i64 aligned v49+8
v394 = iconst.i64 1
v395 = imul_imm v394, 16
v396 = iadd.i64 v45, v395
v397 = load.i64 aligned v50
v398 = load.i64 aligned v50+8
v399 = iconst.i64 2
call fn45(v42, v43, v44)
jump block48
block48:
v400 = global_value.i64 gv26
call fn46(v42, v400)
v401 = global_value.i64 gv27
trap user1
block49(v1006: i16):
v486 -> v1006
v402 = load.i64 v28
v403 = iconst.i64 1
v404 = iadd v402, v403
v405 = iconst.i8 0
v406 = stack_addr.i64 ss116
v407 = stack_addr.i64 ss116
v408 = load.i64 aligned v407
v409 = load.i64 aligned v407+8
v410 = iadd_imm.i64 v51, 8
v411 = load.i8 v410
v412 = uextend.i32 v411
brif v412, block156, block50
block156:
v413 = global_value.i64 gv28
trap user2
block50:
v414 = load.i64 v51
v439 -> v414
v452 -> v414
v478 -> v414
v508 -> v414
v415 = load.i64 v29
v416 = iconst.i64 1
v417 = isub v415, v416
v418 = iconst.i8 0
v419 = stack_addr.i64 ss117
v420 = stack_addr.i64 ss117
v421 = load.i64 aligned v420
v422 = load.i64 aligned v420+8
v423 = iadd_imm.i64 v52, 8
v424 = load.i8 v423
v425 = uextend.i32 v424
brif v425, block155, block51
block155:
v426 = global_value.i64 gv29
trap user2
block51:
v427 = load.i64 v52
v509 -> v427
v428 = iadd_imm.i64 v28, 8
v429 = load.i16 v428
v435 -> v429
v430 = iconst.i16 0x8000
v431 = icmp eq v429, v430
v433 = uextend.i32 v431
brif v433, block154, block52
block154:
v434 = global_value.i64 gv30
trap user2
block52:
v436 = iconst.i16 0
v437 = isub v436, v435
v438 = sextend.i64 v437
v453 -> v438
v521 -> v438
v440 = ushr.i64 v439, v438
v441 = iconst.i8 0
v442 = stack_addr.i64 ss118
v443 = stack_addr.i64 ss118
v444 = load.i64 aligned v443
v445 = load.i64 aligned v443+8
v446 = iadd_imm.i64 v53, 8
v447 = load.i8 v446
v448 = uextend.i32 v447
brif v448, block153, block53
block153:
v449 = global_value.i64 gv31
trap user2
block53:
v450 = load.i64 v53
v451 = ireduce.i32 v450
v480 -> v451
v551 -> v451
v454 = iconst.i64 1
v455 = ishl v454, v453
v456 = iconst.i8 0
v457 = stack_addr.i64 ss119
v458 = stack_addr.i64 ss119
v459 = load.i64 aligned v458
v460 = load.i64 aligned v458+8
v461 = iadd_imm.i64 v54, 8
v462 = load.i8 v461
v463 = uextend.i32 v462
brif v463, block152, block54
block152:
v464 = global_value.i64 gv32
trap user2
block54:
v465 = load.i64 v54
v466 = iconst.i64 1
v467 = isub v465, v466
v468 = iconst.i8 0
v469 = stack_addr.i64 ss120
v470 = stack_addr.i64 ss120
v471 = load.i64 aligned v470
v472 = load.i64 aligned v470+8
v473 = iadd_imm.i64 v55, 8
v474 = load.i8 v473
v475 = uextend.i32 v474
brif v475, block151, block55
block151:
v476 = global_value.i64 gv33
trap user2
block55:
v477 = load.i64 v55
v479 = band.i64 v478, v477
call fn54(v56, v480)
jump block56
block56:
v481 = load.i8 v56
v548 -> v481
v482 = iadd_imm.i64 v56, 4
v483 = load.i32 v482
v550 -> v483
v484 = iconst.i64 0
v485 = uextend.i16 v481
v487 = isub v485, v486
v488 = iconst.i8 0
v489 = stack_addr.i64 ss121
v490 = stack_addr.i64 ss121
v491 = load.i32 aligned v490
v492 = iadd_imm.i64 v57, 2
v493 = load.i8 v492
v494 = uextend.i32 v493
brif v494, block150, block57
block150:
v495 = global_value.i64 gv34
trap user2
block57:
v496 = load.i16 v57
v497 = iconst.i16 1
v498 = iadd v496, v497
v499 = iconst.i8 0
v500 = stack_addr.i64 ss122
v501 = stack_addr.i64 ss122
v502 = load.i32 aligned v501
v503 = iadd_imm.i64 v58, 2
v504 = load.i8 v503
v505 = uextend.i32 v504
brif v505, block149, block58
block149:
v506 = global_value.i64 gv35
trap user2
block58:
v507 = load.i16 v58
v510 = isub.i64 v508, v509
v511 = iconst.i8 0
v512 = stack_addr.i64 ss123
v513 = stack_addr.i64 ss123
v514 = load.i64 aligned v513
v515 = load.i64 aligned v513+8
v516 = iadd_imm.i64 v59, 8
v517 = load.i8 v516
v518 = uextend.i32 v517
brif v518, block148, block59
block148:
v519 = global_value.i64 gv36
trap user2
block59:
v520 = load.i64 v59
v546 -> v520
v522 = iconst.i64 1
v523 = ishl v522, v521
v524 = iconst.i8 0
v525 = stack_addr.i64 ss124
v526 = stack_addr.i64 ss124
v527 = load.i64 aligned v526
v528 = load.i64 aligned v526+8
v529 = iadd_imm.i64 v60, 8
v530 = load.i8 v529
v531 = uextend.i32 v530
brif v531, block147, block60
block147:
v532 = global_value.i64 gv37
trap user2
block60:
v533 = load.i64 v60
v534 = iconst.i64 1
v535 = isub v533, v534
v536 = iconst.i8 0
v537 = stack_addr.i64 ss125
v538 = stack_addr.i64 ss125
v539 = load.i64 aligned v538
v540 = load.i64 aligned v538+8
v541 = iadd_imm.i64 v61, 8
v542 = load.i8 v541
v543 = uextend.i32 v542
brif v543, block146, block61
block146:
v544 = global_value.i64 gv38
trap user2
block61:
v545 = load.i64 v61
v547 = band.i64 v546, v545
v549 = uextend.i16 v548
jump block62(v551, v484, v521, v479, v520, v507, v508, v548, v547)
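; block62 is re-entered from block98, so it acts as a loop header. CLIF uses
; block parameters rather than phi nodes: each `jump block62(...)` supplies the
; loop-carried values for the next iteration. The body below looks like decimal
; digit formatting (divide/remainder by 10, add 48 ('0'), store into the buffer
; addressed by v3).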
block62(v552: i32, v1009: i64, v1013: i64, v1016: i64, v1019: i64, v1022: i16, v1025: i64, v1028: i8, v1033: i64):
v559 -> v552
v562 -> v552
v569 -> v552
v596 -> v1009
v605 -> v1009
v609 -> v1009
v1008 -> v1009
v624 -> v1013
v654 -> v1013
v1012 -> v1013
v1014 -> v1013
v1041 -> v1013
v636 -> v1016
v1015 -> v1016
v1017 -> v1016
v1030 -> v1016
v648 -> v1019
v676 -> v1019
v693 -> v1019
v1018 -> v1019
v1020 -> v1019
v674 -> v1022
v691 -> v1022
v1021 -> v1022
v1023 -> v1022
v1054 -> v1022
v677 -> v1025
v1024 -> v1025
v1026 -> v1025
v1059 -> v1025
v696 -> v1028
v1027 -> v1028
v1029 -> v1028
v1031 -> v1033
v1032 -> v1033
v1034 -> v1033
v553 = load.i32 v63
v560 -> v553
v554 = iconst.i32 0
v555 = icmp eq v553, v554
v557 = uextend.i32 v555
brif v557, block145, block63
block145:
v558 = global_value.i64 gv39
trap user2
block63:
v561 = udiv.i32 v559, v560
v574 -> v561
v563 = load.i32 v63
v570 -> v563
v564 = iconst.i32 0
v565 = icmp eq v563, v564
v567 = uextend.i32 v565
brif v567, block144, block64
block144:
v568 = global_value.i64 gv40
trap user2
block64:
v571 = urem.i32 v569, v570
v622 -> v571
v803 -> v571
v1011 -> v571
v572 = iconst.i8 1
v573 = uextend.i32 v572
brif v573, block65, block68(v561)
block65:
v575 = iconst.i32 10
v576 = icmp.i32 ult v574, v575
v578 = uextend.i32 v576
v579 = icmp_imm eq v578, 0
v581 = uextend.i32 v579
brif v581, block66, block67
block66:
v582 = global_value.i64 gv41
v583 = global_value.i64 gv42
trap user1
block67:
jump block68(v574)
block68(v584: i32):
v585 = ireduce.i8 v584
v586 = iconst.i8 48
v587 = iadd v586, v585
v588 = iconst.i8 0
v589 = stack_addr.i64 ss126
v590 = stack_addr.i64 ss126
v591 = load.i16 aligned v590
v592 = iadd_imm.i64 v64, 1
v593 = load.i8 v592
v594 = uextend.i32 v593
brif v594, block143, block69
block143:
v595 = global_value.i64 gv43
trap user2
block69:
v597 = load.i64 v3
v598 = load.i64 v3+8
v599 = icmp.i64 ult v596, v598
v601 = uextend.i32 v599
brif v601, block70, block142
block142:
v602 = global_value.i64 gv44
trap user2
block70:
v603 = load.i64 v3
v604 = load.i64 v3+8
v606 = imul_imm.i64 v605, 1
v607 = iadd v603, v606
v608 = load.i8 aligned v64
v610 = iconst.i64 1
v611 = iadd.i64 v609, v610
v612 = iconst.i8 0
v613 = stack_addr.i64 ss127
v614 = stack_addr.i64 ss127
v615 = load.i64 aligned v614
v616 = load.i64 aligned v614+8
v617 = iadd_imm.i64 v65, 8
v618 = load.i8 v617
v619 = uextend.i32 v618
brif v619, block141, block71
block141:
v620 = global_value.i64 gv45
trap user2
block71:
v621 = load.i64 v65
v668 -> v621
v695 -> v621
v1010 -> v621
v1046 -> v621
v623 = uextend.i64 v622
v625 = ishl v623, v624
v626 = iconst.i8 0
v627 = stack_addr.i64 ss128
v628 = stack_addr.i64 ss128
v629 = load.i64 aligned v628
v630 = load.i64 aligned v628+8
v631 = iadd_imm.i64 v66, 8
v632 = load.i8 v631
v633 = uextend.i32 v632
brif v633, block140, block72
block140:
v634 = global_value.i64 gv46
trap user2
block72:
v635 = load.i64 v66
v637 = iadd v635, v636
v638 = iconst.i8 0
v639 = stack_addr.i64 ss129
v640 = stack_addr.i64 ss129
v641 = load.i64 aligned v640
v642 = load.i64 aligned v640+8
v643 = iadd_imm.i64 v67, 8
v644 = load.i8 v643
v645 = uextend.i32 v644
brif v645, block139, block73
block139:
v646 = global_value.i64 gv47
trap user2
block73:
v647 = load.i64 v67
v675 -> v647
v692 -> v647
v649 = icmp ult v647, v648
v651 = uextend.i32 v649
brif v651, block74, block80
block74:
v652 = load.i32 v63
v653 = uextend.i64 v652
v655 = ishl v653, v654
v656 = iconst.i8 0
v657 = stack_addr.i64 ss130
v658 = stack_addr.i64 ss130
v659 = load.i64 aligned v658
v660 = load.i64 aligned v658+8
v661 = iadd_imm.i64 v68, 8
v662 = load.i8 v661
v663 = uextend.i32 v662
brif v663, block138, block75
block138:
v664 = global_value.i64 gv48
trap user2
block75:
v665 = load.i64 v68
v690 -> v665
v666 = load.i64 aligned v3
v667 = load.i64 aligned v3+8
v669 = load.i64 v73
call fn70(v71, v72, v669)
jump block76
block76:
v670 = load.i64 aligned v71
v671 = load.i64 aligned v71+8
v672 = load.i64 aligned v70
v673 = load.i64 aligned v70+8
v678 = load.i64 v30
v679 = isub.i64 v677, v678
v680 = iconst.i8 0
v681 = stack_addr.i64 ss131
v682 = stack_addr.i64 ss131
v683 = load.i64 aligned v682
v684 = load.i64 aligned v682+8
v685 = iadd_imm.i64 v74, 8
v686 = load.i8 v685
v687 = uextend.i32 v686
brif v687, block137, block77
block137:
v688 = global_value.i64 gv49
trap user2
block77:
v689 = load.i64 v74
v694 = iconst.i64 1
call fn72(v0, v69, v691, v692, v693, v689, v690, v694)
jump block78
block78:
jump block79
block79:
return
block80:
v697 = uextend.i64 v696
v698 = icmp.i64 ugt v695, v697
v700 = uextend.i32 v698
brif v700, block81, block96
block81:
v701 = iconst.i8 1
v702 = uextend.i32 v701
brif v702, block82, block88
block82:
v703 = global_value.i64 gv50
v704 = iadd_imm.i64 v75, 8
v705 = load.i64 v75
v718 -> v705
v706 = iadd_imm.i64 v75, 8
v707 = load.i64 v706
v719 -> v707
v708 = load.i32 v705
v709 = load.i32 v707
v710 = icmp eq v708, v709
v712 = uextend.i32 v710
v713 = icmp_imm eq v712, 0
v715 = uextend.i32 v713
brif v715, block83, block84
block83:
v716 = global_value.i64 gv51
v717 = iconst.i64 3
v720 = iadd_imm.i64 v80, 8
v721 = load.i64 v80
v722 = iadd_imm.i64 v80, 8
v723 = load.i64 v722
v725 -> v723
v724 = func_addr.i64 fn73
call fn74(v83, v721, v724)
jump block85
block84:
jump block88
block85:
v726 = func_addr.i64 fn75
call fn76(v84, v725, v726)
jump block86
block86:
v727 = iconst.i64 0
v728 = imul_imm v727, 16
v729 = iadd.i64 v79, v728
v730 = load.i64 aligned v83
v731 = load.i64 aligned v83+8
v732 = iconst.i64 1
v733 = imul_imm v732, 16
v734 = iadd.i64 v79, v733
v735 = load.i64 aligned v84
v736 = load.i64 aligned v84+8
v737 = iconst.i64 2
call fn77(v76, v77, v78)
jump block87
block87:
v738 = global_value.i64 gv52
call fn78(v76, v738)
v739 = global_value.i64 gv53
trap user1
block88:
v740 = iconst.i8 1
v741 = uextend.i32 v740
brif v741, block89, block95(v1030, v1031, v1041, v1046, v1054, v1059)
block89:
v742 = global_value.i64 gv54
v743 = iadd_imm.i64 v85, 8
v744 = load.i64 v85
v757 -> v744
v745 = iadd_imm.i64 v85, 8
v746 = load.i64 v745
v758 -> v746
v747 = load.i16 v744
v748 = load.i16 v746
v749 = icmp eq v747, v748
v751 = uextend.i32 v749
v752 = icmp_imm eq v751, 0
v754 = uextend.i32 v752
brif v754, block90, block91
block90:
v755 = global_value.i64 gv55
v756 = iconst.i64 3
v759 = iadd_imm.i64 v90, 8
v760 = load.i64 v90
v761 = iadd_imm.i64 v90, 8
v762 = load.i64 v761
v764 -> v762
v763 = func_addr.i64 fn80
call fn81(v93, v760, v763)
jump block92
block91:
jump block95(v1030, v1031, v1041, v1046, v1054, v1059)
block92:
v765 = func_addr.i64 fn82
call fn83(v94, v764, v765)
jump block93
block93:
v766 = iconst.i64 0
v767 = imul_imm v766, 16
v768 = iadd.i64 v89, v767
v769 = load.i64 aligned v93
v770 = load.i64 aligned v93+8
v771 = iconst.i64 1
v772 = imul_imm v771, 16
v773 = iadd.i64 v89, v772
v774 = load.i64 aligned v94
v775 = load.i64 aligned v94+8
v776 = iconst.i64 2
call fn84(v86, v87, v88)
jump block94
block94:
v777 = global_value.i64 gv56
call fn85(v86, v777)
v778 = global_value.i64 gv57
trap user1
block95(v779: i64, v780: i64, v1040: i64, v1045: i64, v1053: i16, v1058: i64):
v781 = iconst.i64 1
jump block99(v779, v780, v781, v1040, v1045, v1053, v1058)
block96:
v782 = iconst.i16 1
v783 = load.i16 v62
v784 = isub v783, v782
v785 = iconst.i8 0
v786 = stack_addr.i64 ss132
v787 = stack_addr.i64 ss132
v788 = load.i32 aligned v787
v789 = iadd_imm.i64 v95, 2
v790 = load.i8 v789
v791 = uextend.i32 v790
brif v791, block136, block97
block136:
v792 = global_value.i64 gv58
trap user2
block97:
v793 = load.i16 aligned v95
v794 = iconst.i32 10
v795 = iconst.i32 0
v796 = icmp eq v794, v795
v798 = uextend.i32 v796
brif v798, block135, block98
block135:
v799 = global_value.i64 gv59
trap user2
block98:
v800 = iconst.i32 10
v801 = load.i32 v63
v802 = udiv v801, v800
jump block62(v803, v1010, v1014, v1017, v1020, v1023, v1026, v1029, v1034)
block99(v804: i64, v1035: i64, v1037: i64, v1039: i64, v1044: i64, v1052: i16, v1057: i64):
v817 -> v1035
v830 -> v1037
v844 -> v1039
v857 -> v1039
v939 -> v1039
v1042 -> v1039
v1050 -> v1039
v908 -> v1044
v917 -> v1044
v921 -> v1044
v1043 -> v1044
v960 -> v1052
v990 -> v1052
v1051 -> v1052
v1055 -> v1052
v963 -> v1057
v1056 -> v1057
v1060 -> v1057
v805 = iconst.i64 10
v806 = imul v804, v805
v807 = iconst.i8 0
v808 = stack_addr.i64 ss133
v809 = stack_addr.i64 ss133
v810 = load.i64 aligned v809
v811 = load.i64 aligned v809+8
v812 = iadd_imm.i64 v96, 8
v813 = load.i8 v812
v814 = uextend.i32 v813
brif v814, block134, block100
block134:
v815 = global_value.i64 gv60
trap user2
block100:
v816 = load.i64 v96
v843 -> v816
v856 -> v816
v882 -> v816
v818 = iconst.i64 10
v819 = imul.i64 v817, v818
v820 = iconst.i8 0
v821 = stack_addr.i64 ss134
v822 = stack_addr.i64 ss134
v823 = load.i64 aligned v822
v824 = load.i64 aligned v822+8
v825 = iadd_imm.i64 v97, 8
v826 = load.i8 v825
v827 = uextend.i32 v826
brif v827, block133, block101
block133:
v828 = global_value.i64 gv61
trap user2
block101:
v829 = load.i64 v97
v935 -> v829
v962 -> v829
v992 -> v829
v1036 -> v829
v1049 -> v829
v831 = iconst.i64 10
v832 = imul.i64 v830, v831
v833 = iconst.i8 0
v834 = stack_addr.i64 ss135
v835 = stack_addr.i64 ss135
v836 = load.i64 aligned v835
v837 = load.i64 aligned v835+8
v838 = iadd_imm.i64 v98, 8
v839 = load.i8 v838
v840 = uextend.i32 v839
brif v840, block132, block102
block132:
v841 = global_value.i64 gv62
trap user2
block102:
v842 = load.i64 v98
v976 -> v842
v989 -> v842
v1038 -> v842
v1061 -> v842
v845 = ushr.i64 v843, v844
v846 = iconst.i8 0
v847 = stack_addr.i64 ss136
v848 = stack_addr.i64 ss136
v849 = load.i64 aligned v848
v850 = load.i64 aligned v848+8
v851 = iadd_imm.i64 v99, 8
v852 = load.i8 v851
v853 = uextend.i32 v852
brif v853, block131, block103
block131:
v854 = global_value.i64 gv63
trap user2
block103:
v855 = load.i64 v99
v886 -> v855
v858 = iconst.i64 1
v859 = ishl v858, v857
v860 = iconst.i8 0
v861 = stack_addr.i64 ss137
v862 = stack_addr.i64 ss137
v863 = load.i64 aligned v862
v864 = load.i64 aligned v862+8
v865 = iadd_imm.i64 v100, 8
v866 = load.i8 v865
v867 = uextend.i32 v866
brif v867, block130, block104
block130:
v868 = global_value.i64 gv64
trap user2
block104:
v869 = load.i64 v100
v870 = iconst.i64 1
v871 = isub v869, v870
v872 = iconst.i8 0
v873 = stack_addr.i64 ss138
v874 = stack_addr.i64 ss138
v875 = load.i64 aligned v874
v876 = load.i64 aligned v874+8
v877 = iadd_imm.i64 v101, 8
v878 = load.i8 v877
v879 = uextend.i32 v878
brif v879, block129, block105
block129:
v880 = global_value.i64 gv65
trap user2
block105:
v881 = load.i64 v101
v883 = band.i64 v882, v881
v934 -> v883
v961 -> v883
v991 -> v883
v1005 -> v883
v1048 -> v883
v884 = iconst.i8 1
v885 = uextend.i32 v884
brif v885, block106, block109(v855)
block106:
v887 = iconst.i64 10
v888 = icmp.i64 ult v886, v887
v890 = uextend.i32 v888
v891 = icmp_imm eq v890, 0
v893 = uextend.i32 v891
brif v893, block107, block108
block107:
v894 = global_value.i64 gv66
v895 = global_value.i64 gv67
trap user1
block108:
jump block109(v886)
block109(v896: i64):
v897 = ireduce.i8 v896
v898 = iconst.i8 48
v899 = iadd v898, v897
v900 = iconst.i8 0
v901 = stack_addr.i64 ss139
v902 = stack_addr.i64 ss139
v903 = load.i16 aligned v902
v904 = iadd_imm.i64 v102, 1
v905 = load.i8 v904
v906 = uextend.i32 v905
brif v906, block128, block110
block128:
v907 = global_value.i64 gv68
trap user2
block110:
v909 = load.i64 v3
v910 = load.i64 v3+8
v911 = icmp.i64 ult v908, v910
v913 = uextend.i32 v911
brif v913, block111, block127
block127:
v914 = global_value.i64 gv69
trap user2
block111:
v915 = load.i64 v3
v916 = load.i64 v3+8
v918 = imul_imm.i64 v917, 1
v919 = iadd v915, v918
v920 = load.i8 aligned v102
v922 = iconst.i64 1
v923 = iadd.i64 v921, v922
v924 = iconst.i8 0
v925 = stack_addr.i64 ss140
v926 = stack_addr.i64 ss140
v927 = load.i64 aligned v926
v928 = load.i64 aligned v926+8
v929 = iadd_imm.i64 v103, 8
v930 = load.i8 v929
v931 = uextend.i32 v930
brif v931, block126, block112
block126:
v932 = global_value.i64 gv70
trap user2
block112:
v933 = load.i64 v103
v954 -> v933
v1047 -> v933
v936 = icmp.i64 ult v934, v935
v938 = uextend.i32 v936
brif v938, block113, block119
block113:
v940 = iconst.i64 1
v941 = ishl v940, v939
v942 = iconst.i8 0
v943 = stack_addr.i64 ss141
v944 = stack_addr.i64 ss141
v945 = load.i64 aligned v944
v946 = load.i64 aligned v944+8
v947 = iadd_imm.i64 v104, 8
v948 = load.i8 v947
v949 = uextend.i32 v948
brif v949, block125, block114
block125:
v950 = global_value.i64 gv71
trap user2
block114:
v951 = load.i64 v104
v988 -> v951
v952 = load.i64 aligned v3
v953 = load.i64 aligned v3+8
v955 = load.i64 v109
call fn101(v107, v108, v955)
jump block115
block115:
v956 = load.i64 aligned v107
v957 = load.i64 aligned v107+8
v958 = load.i64 aligned v106
v959 = load.i64 aligned v106+8
v964 = load.i64 v30
v965 = isub.i64 v963, v964
v966 = iconst.i8 0
v967 = stack_addr.i64 ss142
v968 = stack_addr.i64 ss142
v969 = load.i64 aligned v968
v970 = load.i64 aligned v968+8
v971 = iadd_imm.i64 v110, 8
v972 = load.i8 v971
v973 = uextend.i32 v972
brif v973, block123, block116
block123:
v974 = global_value.i64 gv72
trap user2
block116:
v975 = load.i64 v110
v977 = imul v975, v976
v978 = iconst.i8 0
v979 = stack_addr.i64 ss143
v980 = stack_addr.i64 ss143
v981 = load.i64 aligned v980
v982 = load.i64 aligned v980+8
v983 = iadd_imm.i64 v111, 8
v984 = load.i8 v983
v985 = uextend.i32 v984
brif v985, block122, block117
block122:
v986 = global_value.i64 gv73
trap user2
block117:
v987 = load.i64 v111
call fn104(v0, v105, v990, v991, v992, v987, v988, v989)
jump block118
block118:
jump block79
block119:
v993 = iconst.i16 1
v994 = load.i16 v62
v995 = isub v994, v993
v996 = iconst.i8 0
v997 = stack_addr.i64 ss144
v998 = stack_addr.i64 ss144
v999 = load.i32 aligned v998
v1000 = iadd_imm.i64 v112, 2
v1001 = load.i8 v1000
v1002 = uextend.i32 v1001
brif v1002, block121, block120
block121:
v1003 = global_value.i64 gv74
trap user2
block120:
v1004 = load.i16 aligned v112
jump block99(v1005, v1036, v1038, v1042, v1047, v1055, v1060)
}