(module
(memory 1)
(data (i32.const 128) "WASMSIMDGOESFAST")
(data (i32.const 256) "\80\90\a0\b0\c0\d0\e0\f0")
(data (i32.const 1024) "\ff\ff\ff\ff\ff\ff\ff\ff")
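;; Data layout: 16 ASCII bytes ("WASMSIMDGOESFAST") at offset 128 for the load
;; and splat tests, a ramp of bytes 0x80..0xf0 at offset 256, and 8 bytes of
;; 0xff at offset 1024 (followed by zeroes) for the lane load/store tests.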
(func (export "v128.load") (param $0 i32) (result v128) (v128.load (local.get $0)))
(func (export "v128.store") (param $0 i32) (param $1 v128) (result v128)
(v128.store offset=0 align=16 (local.get $0) (local.get $1))
(v128.load (local.get $0))
)
(func (export "v128.const.i8x16") (result v128) (v128.const i8x16 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16))
(func (export "v128.const.i16x8") (result v128) (v128.const i16x8 1 2 3 4 5 6 7 8))
(func (export "v128.const.i32x4") (result v128) (v128.const i32x4 1 2 3 4))
(func (export "v128.const.i64x2") (result v128) (v128.const i64x2 1 2))
(func (export "v128.const.f32x4") (result v128) (v128.const f32x4 1.0 2 3 4))
(func (export "v128.const.f64x2") (result v128) (v128.const f64x2 1.0 2))
(func (export "i8x16.shuffle_interleave_bytes") (param $0 v128) (param $1 v128) (result v128)
(i8x16.shuffle 0 17 2 19 4 21 6 23 8 25 10 27 12 29 14 31 (local.get $0) (local.get $1))
)
(func (export "i8x16.shuffle_reverse_i32s") (param $0 v128) (result v128)
(i8x16.shuffle 12 13 14 15 8 9 10 11 4 5 6 7 0 1 2 3 (local.get $0) (local.get $0))
)
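;; extract_lane_s sign-extends the selected i8/i16 lane to i32;
;; extract_lane_u zero-extends it.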
(func (export "i8x16.splat") (param $0 i32) (result v128) (i8x16.splat (local.get $0)))
(func (export "i8x16.extract_lane_s_first") (param $0 v128) (result i32) (i8x16.extract_lane_s 0 (local.get $0)))
(func (export "i8x16.extract_lane_s_last") (param $0 v128) (result i32) (i8x16.extract_lane_s 15 (local.get $0)))
(func (export "i8x16.extract_lane_u_first") (param $0 v128) (result i32) (i8x16.extract_lane_u 0 (local.get $0)))
(func (export "i8x16.extract_lane_u_last") (param $0 v128) (result i32) (i8x16.extract_lane_u 15 (local.get $0)))
(func (export "i8x16.replace_lane_first") (param $0 v128) (param $1 i32) (result v128) (i8x16.replace_lane 0 (local.get $0) (local.get $1)))
(func (export "i8x16.replace_lane_last") (param $0 v128) (param $1 i32) (result v128) (i8x16.replace_lane 15 (local.get $0) (local.get $1)))
(func (export "i16x8.splat") (param $0 i32) (result v128) (i16x8.splat (local.get $0)))
(func (export "i16x8.extract_lane_s_first") (param $0 v128) (result i32) (i16x8.extract_lane_s 0 (local.get $0)))
(func (export "i16x8.extract_lane_s_last") (param $0 v128) (result i32) (i16x8.extract_lane_s 7 (local.get $0)))
(func (export "i16x8.extract_lane_u_first") (param $0 v128) (result i32) (i16x8.extract_lane_u 0 (local.get $0)))
(func (export "i16x8.extract_lane_u_last") (param $0 v128) (result i32) (i16x8.extract_lane_u 7 (local.get $0)))
(func (export "i16x8.replace_lane_first") (param $0 v128) (param $1 i32) (result v128) (i16x8.replace_lane 0 (local.get $0) (local.get $1)))
(func (export "i16x8.replace_lane_last") (param $0 v128) (param $1 i32) (result v128) (i16x8.replace_lane 7 (local.get $0) (local.get $1)))
(func (export "i32x4.splat") (param $0 i32) (result v128) (i32x4.splat (local.get $0)))
(func (export "i32x4.extract_lane_first") (param $0 v128) (result i32) (i32x4.extract_lane 0 (local.get $0)))
(func (export "i32x4.extract_lane_last") (param $0 v128) (result i32) (i32x4.extract_lane 3 (local.get $0)))
(func (export "i32x4.replace_lane_first") (param $0 v128) (param $1 i32) (result v128) (i32x4.replace_lane 0 (local.get $0) (local.get $1)))
(func (export "i32x4.replace_lane_last") (param $0 v128) (param $1 i32) (result v128) (i32x4.replace_lane 3 (local.get $0) (local.get $1)))
(func (export "i64x2.splat") (param $0 i64) (result v128) (i64x2.splat (local.get $0)))
(func (export "i64x2.extract_lane_first") (param $0 v128) (result i64) (i64x2.extract_lane 0 (local.get $0)))
(func (export "i64x2.extract_lane_last") (param $0 v128) (result i64) (i64x2.extract_lane 1 (local.get $0)))
(func (export "i64x2.replace_lane_first") (param $0 v128) (param $1 i64) (result v128) (i64x2.replace_lane 0 (local.get $0) (local.get $1)))
(func (export "i64x2.replace_lane_last") (param $0 v128) (param $1 i64) (result v128) (i64x2.replace_lane 1 (local.get $0) (local.get $1)))
(func (export "f32x4.splat") (param $0 f32) (result v128) (f32x4.splat (local.get $0)))
(func (export "f32x4.extract_lane_first") (param $0 v128) (result f32) (f32x4.extract_lane 0 (local.get $0)))
(func (export "f32x4.extract_lane_last") (param $0 v128) (result f32) (f32x4.extract_lane 3 (local.get $0)))
(func (export "f32x4.replace_lane_first") (param $0 v128) (param $1 f32) (result v128) (f32x4.replace_lane 0 (local.get $0) (local.get $1)))
(func (export "f32x4.replace_lane_last") (param $0 v128) (param $1 f32) (result v128) (f32x4.replace_lane 3 (local.get $0) (local.get $1)))
(func (export "f64x2.splat") (param $0 f64) (result v128) (f64x2.splat (local.get $0)))
(func (export "f64x2.extract_lane_first") (param $0 v128) (result f64) (f64x2.extract_lane 0 (local.get $0)))
(func (export "f64x2.extract_lane_last") (param $0 v128) (result f64) (f64x2.extract_lane 1 (local.get $0)))
(func (export "f64x2.replace_lane_first") (param $0 v128) (param $1 f64) (result v128) (f64x2.replace_lane 0 (local.get $0) (local.get $1)))
(func (export "f64x2.replace_lane_last") (param $0 v128) (param $1 f64) (result v128) (f64x2.replace_lane 1 (local.get $0) (local.get $1)))
(func (export "i8x16.eq") (param $0 v128) (param $1 v128) (result v128) (i8x16.eq (local.get $0) (local.get $1)))
(func (export "i8x16.ne") (param $0 v128) (param $1 v128) (result v128) (i8x16.ne (local.get $0) (local.get $1)))
(func (export "i8x16.lt_s") (param $0 v128) (param $1 v128) (result v128) (i8x16.lt_s (local.get $0) (local.get $1)))
(func (export "i8x16.lt_u") (param $0 v128) (param $1 v128) (result v128) (i8x16.lt_u (local.get $0) (local.get $1)))
(func (export "i8x16.gt_s") (param $0 v128) (param $1 v128) (result v128) (i8x16.gt_s (local.get $0) (local.get $1)))
(func (export "i8x16.gt_u") (param $0 v128) (param $1 v128) (result v128) (i8x16.gt_u (local.get $0) (local.get $1)))
(func (export "i8x16.le_s") (param $0 v128) (param $1 v128) (result v128) (i8x16.le_s (local.get $0) (local.get $1)))
(func (export "i8x16.le_u") (param $0 v128) (param $1 v128) (result v128) (i8x16.le_u (local.get $0) (local.get $1)))
(func (export "i8x16.ge_s") (param $0 v128) (param $1 v128) (result v128) (i8x16.ge_s (local.get $0) (local.get $1)))
(func (export "i8x16.ge_u") (param $0 v128) (param $1 v128) (result v128) (i8x16.ge_u (local.get $0) (local.get $1)))
(func (export "i16x8.eq") (param $0 v128) (param $1 v128) (result v128) (i16x8.eq (local.get $0) (local.get $1)))
(func (export "i16x8.ne") (param $0 v128) (param $1 v128) (result v128) (i16x8.ne (local.get $0) (local.get $1)))
(func (export "i16x8.lt_s") (param $0 v128) (param $1 v128) (result v128) (i16x8.lt_s (local.get $0) (local.get $1)))
(func (export "i16x8.lt_u") (param $0 v128) (param $1 v128) (result v128) (i16x8.lt_u (local.get $0) (local.get $1)))
(func (export "i16x8.gt_s") (param $0 v128) (param $1 v128) (result v128) (i16x8.gt_s (local.get $0) (local.get $1)))
(func (export "i16x8.gt_u") (param $0 v128) (param $1 v128) (result v128) (i16x8.gt_u (local.get $0) (local.get $1)))
(func (export "i16x8.le_s") (param $0 v128) (param $1 v128) (result v128) (i16x8.le_s (local.get $0) (local.get $1)))
(func (export "i16x8.le_u") (param $0 v128) (param $1 v128) (result v128) (i16x8.le_u (local.get $0) (local.get $1)))
(func (export "i16x8.ge_s") (param $0 v128) (param $1 v128) (result v128) (i16x8.ge_s (local.get $0) (local.get $1)))
(func (export "i16x8.ge_u") (param $0 v128) (param $1 v128) (result v128) (i16x8.ge_u (local.get $0) (local.get $1)))
(func (export "i32x4.eq") (param $0 v128) (param $1 v128) (result v128) (i32x4.eq (local.get $0) (local.get $1)))
(func (export "i32x4.ne") (param $0 v128) (param $1 v128) (result v128) (i32x4.ne (local.get $0) (local.get $1)))
(func (export "i32x4.lt_s") (param $0 v128) (param $1 v128) (result v128) (i32x4.lt_s (local.get $0) (local.get $1)))
(func (export "i32x4.lt_u") (param $0 v128) (param $1 v128) (result v128) (i32x4.lt_u (local.get $0) (local.get $1)))
(func (export "i32x4.gt_s") (param $0 v128) (param $1 v128) (result v128) (i32x4.gt_s (local.get $0) (local.get $1)))
(func (export "i32x4.gt_u") (param $0 v128) (param $1 v128) (result v128) (i32x4.gt_u (local.get $0) (local.get $1)))
(func (export "i32x4.le_s") (param $0 v128) (param $1 v128) (result v128) (i32x4.le_s (local.get $0) (local.get $1)))
(func (export "i32x4.le_u") (param $0 v128) (param $1 v128) (result v128) (i32x4.le_u (local.get $0) (local.get $1)))
(func (export "i32x4.ge_s") (param $0 v128) (param $1 v128) (result v128) (i32x4.ge_s (local.get $0) (local.get $1)))
(func (export "i32x4.ge_u") (param $0 v128) (param $1 v128) (result v128) (i32x4.ge_u (local.get $0) (local.get $1)))
(func (export "i64x2.eq") (param $0 v128) (param $1 v128) (result v128) (i64x2.eq (local.get $0) (local.get $1)))
(func (export "f32x4.eq") (param $0 v128) (param $1 v128) (result v128) (f32x4.eq (local.get $0) (local.get $1)))
(func (export "f32x4.ne") (param $0 v128) (param $1 v128) (result v128) (f32x4.ne (local.get $0) (local.get $1)))
(func (export "f32x4.lt") (param $0 v128) (param $1 v128) (result v128) (f32x4.lt (local.get $0) (local.get $1)))
(func (export "f32x4.gt") (param $0 v128) (param $1 v128) (result v128) (f32x4.gt (local.get $0) (local.get $1)))
(func (export "f32x4.le") (param $0 v128) (param $1 v128) (result v128) (f32x4.le (local.get $0) (local.get $1)))
(func (export "f32x4.ge") (param $0 v128) (param $1 v128) (result v128) (f32x4.ge (local.get $0) (local.get $1)))
(func (export "f64x2.eq") (param $0 v128) (param $1 v128) (result v128) (f64x2.eq (local.get $0) (local.get $1)))
(func (export "f64x2.ne") (param $0 v128) (param $1 v128) (result v128) (f64x2.ne (local.get $0) (local.get $1)))
(func (export "f64x2.lt") (param $0 v128) (param $1 v128) (result v128) (f64x2.lt (local.get $0) (local.get $1)))
(func (export "f64x2.gt") (param $0 v128) (param $1 v128) (result v128) (f64x2.gt (local.get $0) (local.get $1)))
(func (export "f64x2.le") (param $0 v128) (param $1 v128) (result v128) (f64x2.le (local.get $0) (local.get $1)))
(func (export "f64x2.ge") (param $0 v128) (param $1 v128) (result v128) (f64x2.ge (local.get $0) (local.get $1)))
(func (export "v128.not") (param $0 v128) (result v128) (v128.not (local.get $0)))
(func (export "v128.and") (param $0 v128) (param $1 v128) (result v128) (v128.and (local.get $0) (local.get $1)))
(func (export "v128.or") (param $0 v128) (param $1 v128) (result v128) (v128.or (local.get $0) (local.get $1)))
(func (export "v128.xor") (param $0 v128) (param $1 v128) (result v128) (v128.xor (local.get $0) (local.get $1)))
(func (export "v128.andnot") (param $0 v128) (param $1 v128) (result v128) (v128.andnot (local.get $0) (local.get $1)))
(func (export "v128.bitselect") (param $0 v128) (param $1 v128) (param $2 v128) (result v128)
(v128.bitselect (local.get $0) (local.get $1) (local.get $2))
)
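;; loadN_lane replaces the lane given by the immediate (lane 0 here) with N
;; bits read from memory; storeN_lane writes that single lane back to memory.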
(func (export "v128.load8_lane") (param $0 i32) (param $1 v128) (result v128) (v128.load8_lane 0 (local.get $0) (local.get $1)))
(func (export "v128.load16_lane") (param $0 i32) (param $1 v128) (result v128) (v128.load16_lane 0 (local.get $0) (local.get $1)))
(func (export "v128.load32_lane") (param $0 i32) (param $1 v128) (result v128) (v128.load32_lane 0 (local.get $0) (local.get $1)))
(func (export "v128.load64_lane") (param $0 i32) (param $1 v128) (result v128) (v128.load64_lane 0 (local.get $0) (local.get $1)))
(func (export "v128.store8_lane") (param $0 i32) (param $1 v128) (v128.store8_lane 0 (local.get $0) (local.get $1)))
(func (export "v128.store16_lane") (param $0 i32) (param $1 v128) (v128.store16_lane 0 (local.get $0) (local.get $1)))
(func (export "v128.store32_lane") (param $0 i32) (param $1 v128) (v128.store32_lane 0 (local.get $0) (local.get $1)))
(func (export "v128.store64_lane") (param $0 i32) (param $1 v128) (v128.store64_lane 0 (local.get $0) (local.get $1)))
(func (export "i8x16.popcnt") (param $0 v128) (result v128) (i8x16.popcnt (local.get $0)))
(func (export "i8x16.abs") (param $0 v128) (result v128) (i8x16.abs (local.get $0)))
(func (export "i8x16.neg") (param $0 v128) (result v128) (i8x16.neg (local.get $0)))
(func (export "i8x16.all_true") (param $0 v128) (result i32) (i8x16.all_true (local.get $0)))
(func (export "i8x16.bitmask") (param $0 v128) (result i32) (i8x16.bitmask (local.get $0)))
(func (export "i8x16.shl") (param $0 v128) (param $1 i32) (result v128) (i8x16.shl (local.get $0) (local.get $1)))
(func (export "i8x16.shr_s") (param $0 v128) (param $1 i32) (result v128) (i8x16.shr_s (local.get $0) (local.get $1)))
(func (export "i8x16.shr_u") (param $0 v128) (param $1 i32) (result v128) (i8x16.shr_u (local.get $0) (local.get $1)))
(func (export "i8x16.add") (param $0 v128) (param $1 v128) (result v128) (i8x16.add (local.get $0) (local.get $1)))
(func (export "i8x16.add_sat_s") (param $0 v128) (param $1 v128) (result v128) (i8x16.add_sat_s (local.get $0) (local.get $1)))
(func (export "i8x16.add_sat_u") (param $0 v128) (param $1 v128) (result v128) (i8x16.add_sat_u (local.get $0) (local.get $1)))
(func (export "i8x16.sub") (param $0 v128) (param $1 v128) (result v128) (i8x16.sub (local.get $0) (local.get $1)))
(func (export "i8x16.sub_sat_s") (param $0 v128) (param $1 v128) (result v128) (i8x16.sub_sat_s (local.get $0) (local.get $1)))
(func (export "i8x16.sub_sat_u") (param $0 v128) (param $1 v128) (result v128) (i8x16.sub_sat_u (local.get $0) (local.get $1)))
(func (export "i8x16.min_s") (param $0 v128) (param $1 v128) (result v128) (i8x16.min_s (local.get $0) (local.get $1)))
(func (export "i8x16.min_u") (param $0 v128) (param $1 v128) (result v128) (i8x16.min_u (local.get $0) (local.get $1)))
(func (export "i8x16.max_s") (param $0 v128) (param $1 v128) (result v128) (i8x16.max_s (local.get $0) (local.get $1)))
(func (export "i8x16.max_u") (param $0 v128) (param $1 v128) (result v128) (i8x16.max_u (local.get $0) (local.get $1)))
(func (export "i8x16.avgr_u") (param $0 v128) (param $1 v128) (result v128) (i8x16.avgr_u (local.get $0) (local.get $1)))
(func (export "i16x8.abs") (param $0 v128) (result v128) (i16x8.abs (local.get $0)))
(func (export "i16x8.neg") (param $0 v128) (result v128) (i16x8.neg (local.get $0)))
(func (export "i16x8.all_true") (param $0 v128) (result i32) (i16x8.all_true (local.get $0)))
(func (export "i16x8.bitmask") (param $0 v128) (result i32) (i16x8.bitmask (local.get $0)))
(func (export "i16x8.shl") (param $0 v128) (param $1 i32) (result v128) (i16x8.shl (local.get $0) (local.get $1)))
(func (export "i16x8.shr_s") (param $0 v128) (param $1 i32) (result v128) (i16x8.shr_s (local.get $0) (local.get $1)))
(func (export "i16x8.shr_u") (param $0 v128) (param $1 i32) (result v128) (i16x8.shr_u (local.get $0) (local.get $1)))
(func (export "i16x8.add") (param $0 v128) (param $1 v128) (result v128) (i16x8.add (local.get $0) (local.get $1)))
(func (export "i16x8.add_sat_s") (param $0 v128) (param $1 v128) (result v128) (i16x8.add_sat_s (local.get $0) (local.get $1)))
(func (export "i16x8.add_sat_u") (param $0 v128) (param $1 v128) (result v128) (i16x8.add_sat_u (local.get $0) (local.get $1)))
(func (export "i16x8.sub") (param $0 v128) (param $1 v128) (result v128) (i16x8.sub (local.get $0) (local.get $1)))
(func (export "i16x8.sub_sat_s") (param $0 v128) (param $1 v128) (result v128) (i16x8.sub_sat_s (local.get $0) (local.get $1)))
(func (export "i16x8.sub_sat_u") (param $0 v128) (param $1 v128) (result v128) (i16x8.sub_sat_u (local.get $0) (local.get $1)))
(func (export "i16x8.mul") (param $0 v128) (param $1 v128) (result v128) (i16x8.mul (local.get $0) (local.get $1)))
(func (export "i16x8.min_s") (param $0 v128) (param $1 v128) (result v128) (i16x8.min_s (local.get $0) (local.get $1)))
(func (export "i16x8.min_u") (param $0 v128) (param $1 v128) (result v128) (i16x8.min_u (local.get $0) (local.get $1)))
(func (export "i16x8.max_s") (param $0 v128) (param $1 v128) (result v128) (i16x8.max_s (local.get $0) (local.get $1)))
(func (export "i16x8.max_u") (param $0 v128) (param $1 v128) (result v128) (i16x8.max_u (local.get $0) (local.get $1)))
(func (export "i16x8.avgr_u") (param $0 v128) (param $1 v128) (result v128) (i16x8.avgr_u (local.get $0) (local.get $1)))
(func (export "i16x8.q15mulr_sat_s") (param $0 v128) (param $1 v128) (result v128) (i16x8.q15mulr_sat_s (local.get $0) (local.get $1)))
(func (export "i16x8.extmul_low_i8x16_s") (param $0 v128) (param $1 v128) (result v128) (i16x8.extmul_low_i8x16_s (local.get $0) (local.get $1)))
(func (export "i16x8.extmul_high_i8x16_s") (param $0 v128) (param $1 v128) (result v128) (i16x8.extmul_high_i8x16_s (local.get $0) (local.get $1)))
(func (export "i16x8.extmul_low_i8x16_u") (param $0 v128) (param $1 v128) (result v128) (i16x8.extmul_low_i8x16_u (local.get $0) (local.get $1)))
(func (export "i16x8.extmul_high_i8x16_u") (param $0 v128) (param $1 v128) (result v128) (i16x8.extmul_high_i8x16_u (local.get $0) (local.get $1)))
(func (export "i32x4.extmul_low_i16x8_s") (param $0 v128) (param $1 v128) (result v128) (i32x4.extmul_low_i16x8_s (local.get $0) (local.get $1)))
(func (export "i32x4.extmul_high_i16x8_s") (param $0 v128) (param $1 v128) (result v128) (i32x4.extmul_high_i16x8_s (local.get $0) (local.get $1)))
(func (export "i32x4.extmul_low_i16x8_u") (param $0 v128) (param $1 v128) (result v128) (i32x4.extmul_low_i16x8_u (local.get $0) (local.get $1)))
(func (export "i32x4.extmul_high_i16x8_u") (param $0 v128) (param $1 v128) (result v128) (i32x4.extmul_high_i16x8_u (local.get $0) (local.get $1)))
(func (export "i64x2.extmul_low_i32x4_s") (param $0 v128) (param $1 v128) (result v128) (i64x2.extmul_low_i32x4_s (local.get $0) (local.get $1)))
(func (export "i64x2.extmul_high_i32x4_s") (param $0 v128) (param $1 v128) (result v128) (i64x2.extmul_high_i32x4_s (local.get $0) (local.get $1)))
(func (export "i64x2.extmul_low_i32x4_u") (param $0 v128) (param $1 v128) (result v128) (i64x2.extmul_low_i32x4_u (local.get $0) (local.get $1)))
(func (export "i64x2.extmul_high_i32x4_u") (param $0 v128) (param $1 v128) (result v128) (i64x2.extmul_high_i32x4_u (local.get $0) (local.get $1)))
(func (export "i32x4.abs") (param $0 v128) (result v128) (i32x4.abs (local.get $0)))
(func (export "i32x4.neg") (param $0 v128) (result v128) (i32x4.neg (local.get $0)))
(func (export "i32x4.all_true") (param $0 v128) (result i32) (i32x4.all_true (local.get $0)))
(func (export "i32x4.bitmask") (param $0 v128) (result i32) (i32x4.bitmask (local.get $0)))
(func (export "i32x4.shl") (param $0 v128) (param $1 i32) (result v128) (i32x4.shl (local.get $0) (local.get $1)))
(func (export "i32x4.shr_s") (param $0 v128) (param $1 i32) (result v128) (i32x4.shr_s (local.get $0) (local.get $1)))
(func (export "i32x4.shr_u") (param $0 v128) (param $1 i32) (result v128) (i32x4.shr_u (local.get $0) (local.get $1)))
(func (export "i32x4.add") (param $0 v128) (param $1 v128) (result v128) (i32x4.add (local.get $0) (local.get $1)))
(func (export "i32x4.sub") (param $0 v128) (param $1 v128) (result v128) (i32x4.sub (local.get $0) (local.get $1)))
(func (export "i32x4.mul") (param $0 v128) (param $1 v128) (result v128) (i32x4.mul (local.get $0) (local.get $1)))
(func (export "i32x4.min_s") (param $0 v128) (param $1 v128) (result v128) (i32x4.min_s (local.get $0) (local.get $1)))
(func (export "i32x4.min_u") (param $0 v128) (param $1 v128) (result v128) (i32x4.min_u (local.get $0) (local.get $1)))
(func (export "i32x4.max_s") (param $0 v128) (param $1 v128) (result v128) (i32x4.max_s (local.get $0) (local.get $1)))
(func (export "i32x4.max_u") (param $0 v128) (param $1 v128) (result v128) (i32x4.max_u (local.get $0) (local.get $1)))
(func (export "i32x4.dot_i16x8_s") (param $0 v128) (param $1 v128) (result v128) (i32x4.dot_i16x8_s (local.get $0) (local.get $1)))
(func (export "i64x2.neg") (param $0 v128) (result v128) (i64x2.neg (local.get $0)))
(func (export "i64x2.bitmask") (param $0 v128) (result i32) (i64x2.bitmask (local.get $0)))
(func (export "i64x2.shl") (param $0 v128) (param $1 i32) (result v128) (i64x2.shl (local.get $0) (local.get $1)))
(func (export "i64x2.shr_s") (param $0 v128) (param $1 i32) (result v128) (i64x2.shr_s (local.get $0) (local.get $1)))
(func (export "i64x2.shr_u") (param $0 v128) (param $1 i32) (result v128) (i64x2.shr_u (local.get $0) (local.get $1)))
(func (export "i64x2.add") (param $0 v128) (param $1 v128) (result v128) (i64x2.add (local.get $0) (local.get $1)))
(func (export "i64x2.sub") (param $0 v128) (param $1 v128) (result v128) (i64x2.sub (local.get $0) (local.get $1)))
(func (export "i64x2.mul") (param $0 v128) (param $1 v128) (result v128) (i64x2.mul (local.get $0) (local.get $1)))
(func (export "f32x4.abs") (param $0 v128) (result v128) (f32x4.abs (local.get $0)))
(func (export "f32x4.neg") (param $0 v128) (result v128) (f32x4.neg (local.get $0)))
(func (export "f32x4.sqrt") (param $0 v128) (result v128) (f32x4.sqrt (local.get $0)))
(func (export "f32x4.add") (param $0 v128) (param $1 v128) (result v128) (f32x4.add (local.get $0) (local.get $1)))
(func (export "f32x4.sub") (param $0 v128) (param $1 v128) (result v128) (f32x4.sub (local.get $0) (local.get $1)))
(func (export "f32x4.mul") (param $0 v128) (param $1 v128) (result v128) (f32x4.mul (local.get $0) (local.get $1)))
(func (export "f32x4.div") (param $0 v128) (param $1 v128) (result v128) (f32x4.div (local.get $0) (local.get $1)))
(func (export "f32x4.min") (param $0 v128) (param $1 v128) (result v128) (f32x4.min (local.get $0) (local.get $1)))
(func (export "f32x4.max") (param $0 v128) (param $1 v128) (result v128) (f32x4.max (local.get $0) (local.get $1)))
(func (export "f32x4.pmin") (param $0 v128) (param $1 v128) (result v128) (f32x4.pmin (local.get $0) (local.get $1)))
(func (export "f32x4.pmax") (param $0 v128) (param $1 v128) (result v128) (f32x4.pmax (local.get $0) (local.get $1)))
(func (export "f32x4.ceil") (param $0 v128) (result v128) (f32x4.ceil (local.get $0)))
(func (export "f32x4.floor") (param $0 v128) (result v128) (f32x4.floor (local.get $0)))
(func (export "f32x4.trunc") (param $0 v128) (result v128) (f32x4.trunc (local.get $0)))
(func (export "f32x4.nearest") (param $0 v128) (result v128) (f32x4.nearest (local.get $0)))
(func (export "f64x2.abs") (param $0 v128) (result v128) (f64x2.abs (local.get $0)))
(func (export "f64x2.neg") (param $0 v128) (result v128) (f64x2.neg (local.get $0)))
(func (export "f64x2.sqrt") (param $0 v128) (result v128) (f64x2.sqrt (local.get $0)))
(func (export "f64x2.add") (param $0 v128) (param $1 v128) (result v128) (f64x2.add (local.get $0) (local.get $1)))
(func (export "f64x2.sub") (param $0 v128) (param $1 v128) (result v128) (f64x2.sub (local.get $0) (local.get $1)))
(func (export "f64x2.mul") (param $0 v128) (param $1 v128) (result v128) (f64x2.mul (local.get $0) (local.get $1)))
(func (export "f64x2.div") (param $0 v128) (param $1 v128) (result v128) (f64x2.div (local.get $0) (local.get $1)))
(func (export "f64x2.min") (param $0 v128) (param $1 v128) (result v128) (f64x2.min (local.get $0) (local.get $1)))
(func (export "f64x2.max") (param $0 v128) (param $1 v128) (result v128) (f64x2.max (local.get $0) (local.get $1)))
(func (export "f64x2.pmin") (param $0 v128) (param $1 v128) (result v128) (f64x2.pmin (local.get $0) (local.get $1)))
(func (export "f64x2.pmax") (param $0 v128) (param $1 v128) (result v128) (f64x2.pmax (local.get $0) (local.get $1)))
(func (export "f64x2.ceil") (param $0 v128) (result v128) (f64x2.ceil (local.get $0)))
(func (export "f64x2.floor") (param $0 v128) (result v128) (f64x2.floor (local.get $0)))
(func (export "f64x2.trunc") (param $0 v128) (result v128) (f64x2.trunc (local.get $0)))
(func (export "f64x2.nearest") (param $0 v128) (result v128) (f64x2.nearest (local.get $0)))
(func (export "i16x8.extadd_pairwise_i8x16_s") (param v128) (result v128) (i16x8.extadd_pairwise_i8x16_s (local.get 0)))
(func (export "i16x8.extadd_pairwise_i8x16_u") (param v128) (result v128) (i16x8.extadd_pairwise_i8x16_u (local.get 0)))
(func (export "i32x4.extadd_pairwise_i16x8_s") (param v128) (result v128) (i32x4.extadd_pairwise_i16x8_s (local.get 0)))
(func (export "i32x4.extadd_pairwise_i16x8_u") (param v128) (result v128) (i32x4.extadd_pairwise_i16x8_u (local.get 0)))
(func (export "i32x4.trunc_sat_f32x4_s") (param $0 v128) (result v128) (i32x4.trunc_sat_f32x4_s (local.get $0)))
(func (export "i32x4.trunc_sat_f32x4_u") (param $0 v128) (result v128) (i32x4.trunc_sat_f32x4_u (local.get $0)))
(func (export "f32x4.convert_i32x4_s") (param $0 v128) (result v128) (f32x4.convert_i32x4_s (local.get $0)))
(func (export "f32x4.convert_i32x4_u") (param $0 v128) (result v128) (f32x4.convert_i32x4_u (local.get $0)))
(func (export "v128.load8_splat") (param $0 i32) (result v128) (v128.load8_splat (local.get $0)))
(func (export "v128.load16_splat") (param $0 i32) (result v128) (v128.load16_splat (local.get $0)))
(func (export "v128.load32_splat") (param $0 i32) (result v128) (v128.load32_splat (local.get $0)))
(func (export "v128.load64_splat") (param $0 i32) (result v128) (v128.load64_splat (local.get $0)))
(func (export "i8x16.narrow_i16x8_s") (param $0 v128) (param $1 v128) (result v128) (i8x16.narrow_i16x8_s (local.get $0) (local.get $1)))
(func (export "i8x16.narrow_i16x8_u") (param $0 v128) (param $1 v128) (result v128) (i8x16.narrow_i16x8_u (local.get $0) (local.get $1)))
(func (export "i16x8.narrow_i32x4_s") (param $0 v128) (param $1 v128) (result v128) (i16x8.narrow_i32x4_s (local.get $0) (local.get $1)))
(func (export "i16x8.narrow_i32x4_u") (param $0 v128) (param $1 v128) (result v128) (i16x8.narrow_i32x4_u (local.get $0) (local.get $1)))
(func (export "i16x8.extend_low_i8x16_s") (param $0 v128) (result v128) (i16x8.extend_low_i8x16_s (local.get $0)))
(func (export "i16x8.extend_high_i8x16_s") (param $0 v128) (result v128) (i16x8.extend_high_i8x16_s (local.get $0)))
(func (export "i16x8.extend_low_i8x16_u") (param $0 v128) (result v128) (i16x8.extend_low_i8x16_u (local.get $0)))
(func (export "i16x8.extend_high_i8x16_u") (param $0 v128) (result v128) (i16x8.extend_high_i8x16_u (local.get $0)))
(func (export "i32x4.extend_low_i16x8_s") (param $0 v128) (result v128) (i32x4.extend_low_i16x8_s (local.get $0)))
(func (export "i32x4.extend_high_i16x8_s") (param $0 v128) (result v128) (i32x4.extend_high_i16x8_s (local.get $0)))
(func (export "i32x4.extend_low_i16x8_u") (param $0 v128) (result v128) (i32x4.extend_low_i16x8_u (local.get $0)))
(func (export "i32x4.extend_high_i16x8_u") (param $0 v128) (result v128) (i32x4.extend_high_i16x8_u (local.get $0)))
(func (export "i64x2.extend_low_i32x4_s") (param $0 v128) (result v128) (i64x2.extend_low_i32x4_s (local.get $0)))
(func (export "i64x2.extend_high_i32x4_s") (param $0 v128) (result v128) (i64x2.extend_high_i32x4_s (local.get $0)))
(func (export "i64x2.extend_low_i32x4_u") (param $0 v128) (result v128) (i64x2.extend_low_i32x4_u (local.get $0)))
(func (export "i64x2.extend_high_i32x4_u") (param $0 v128) (result v128) (i64x2.extend_high_i32x4_u (local.get $0)))
(func (export "v128.load8x8_u") (param $0 i32) (result v128) (v128.load8x8_u (local.get $0)))
(func (export "v128.load8x8_s") (param $0 i32) (result v128) (v128.load8x8_s (local.get $0)))
(func (export "v128.load16x4_u") (param $0 i32) (result v128) (v128.load16x4_u (local.get $0)))
(func (export "v128.load16x4_s") (param $0 i32) (result v128) (v128.load16x4_s (local.get $0)))
(func (export "v128.load32x2_u") (param $0 i32) (result v128) (v128.load32x2_u (local.get $0)))
(func (export "v128.load32x2_s") (param $0 i32) (result v128) (v128.load32x2_s (local.get $0)))
(func (export "v128.load32_zero") (param $0 i32) (result v128) (v128.load32_zero (local.get $0)))
(func (export "v128.load64_zero") (param $0 i32) (result v128) (v128.load64_zero (local.get $0)))
(func (export "i8x16.swizzle") (param $0 v128) (param $1 v128) (result v128) (i8x16.swizzle (local.get $0) (local.get $1)))
(func (export "f64x2.convert_low_i32x4_s") (param $0 v128) (result v128) (f64x2.convert_low_i32x4_s (local.get $0)))
(func (export "f64x2.convert_low_i32x4_u") (param $0 v128) (result v128) (f64x2.convert_low_i32x4_u (local.get $0)))
(func (export "i32x4.trunc_sat_f64x2_s_zero") (param $0 v128) (result v128) (i32x4.trunc_sat_f64x2_s_zero (local.get $0)))
(func (export "i32x4.trunc_sat_f64x2_u_zero") (param $0 v128) (result v128) (i32x4.trunc_sat_f64x2_u_zero (local.get $0)))
(func (export "f32x4.demote_f64x2_zero") (param $0 v128) (result v128) (f32x4.demote_f64x2_zero (local.get $0)))
(func (export "f64x2.promote_low_f32x4") (param $0 v128) (result v128) (f64x2.promote_low_f32x4 (local.get $0)))
)
;; TODO: Additional f64x2 conversions if specified
;; Basic v128 manipulation
(assert_return (invoke "v128.load" (i32.const 128)) (v128.const i8x16 87 65 83 77 83 73 77 68 71 79 69 83 70 65 83 84))
(assert_return (invoke "v128.store" (i32.const 16) (v128.const i32x4 1 2 3 4)) (v128.const i32x4 1 2 3 4))
(assert_return (invoke "v128.load8_splat" (i32.const 128)) (v128.const i8x16 87 87 87 87 87 87 87 87 87 87 87 87 87 87 87 87))
(assert_return (invoke "v128.load16_splat" (i32.const 128)) (v128.const i8x16 87 65 87 65 87 65 87 65 87 65 87 65 87 65 87 65))
(assert_return (invoke "v128.load32_splat" (i32.const 128)) (v128.const i8x16 87 65 83 77 87 65 83 77 87 65 83 77 87 65 83 77))
(assert_return (invoke "v128.load64_splat" (i32.const 128)) (v128.const i8x16 87 65 83 77 83 73 77 68 87 65 83 77 83 73 77 68))
(assert_return (invoke "v128.const.i8x16") (v128.const i32x4 0x04030201 0x08070605 0x0c0b0a09 0x100f0e0d))
(assert_return (invoke "v128.const.i16x8") (v128.const i8x16 01 00 02 00 03 00 04 00 05 00 06 00 07 00 08 00))
(assert_return (invoke "v128.const.i32x4") (v128.const i8x16 01 00 00 00 02 00 00 00 03 00 00 00 04 00 00 00))
(assert_return (invoke "v128.const.i64x2") (v128.const i8x16 01 00 00 00 00 00 00 00 02 00 00 00 00 00 00 00))
(assert_return (invoke "v128.const.f32x4") (v128.const f32x4 1 2 3 4))
(assert_return (invoke "v128.const.f64x2") (v128.const f64x2 1 2))
(assert_return
(invoke "i8x16.shuffle_interleave_bytes"
(v128.const i8x16 1 0 3 0 5 0 7 0 9 0 11 0 13 0 15 0)
(v128.const i8x16 0 2 0 4 0 6 0 8 0 10 0 12 0 14 0 16)
)
(v128.const i8x16 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16)
)
(assert_return (invoke "i8x16.shuffle_reverse_i32s" (v128.const i32x4 1 2 3 4)) (v128.const i32x4 4 3 2 1))
;; i8x16 lane accesses
(assert_return (invoke "i8x16.splat" (i32.const 5)) (v128.const i8x16 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5))
(assert_return (invoke "i8x16.splat" (i32.const 257)) (v128.const i8x16 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1))
(assert_return (invoke "i8x16.extract_lane_s_first" (v128.const i8x16 255 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0)) (i32.const -1))
(assert_return (invoke "i8x16.extract_lane_s_last" (v128.const i8x16 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 255)) (i32.const -1))
(assert_return (invoke "i8x16.extract_lane_u_first" (v128.const i8x16 255 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0)) (i32.const 255))
(assert_return (invoke "i8x16.extract_lane_u_last" (v128.const i8x16 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 255)) (i32.const 255))
(assert_return (invoke "i8x16.replace_lane_first" (v128.const i64x2 0 0) (i32.const 7)) (v128.const i8x16 7 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0))
(assert_return (invoke "i8x16.replace_lane_last" (v128.const i64x2 0 0) (i32.const 7)) (v128.const i8x16 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 7))
;; i16x8 lane accesses
(assert_return (invoke "i16x8.splat" (i32.const 5)) (v128.const i16x8 5 5 5 5 5 5 5 5))
(assert_return (invoke "i16x8.splat" (i32.const 65537)) (v128.const i16x8 1 1 1 1 1 1 1 1))
(assert_return (invoke "i16x8.extract_lane_s_first" (v128.const i16x8 65535 0 0 0 0 0 0 0)) (i32.const -1))
(assert_return (invoke "i16x8.extract_lane_s_last" (v128.const i16x8 0 0 0 0 0 0 0 65535)) (i32.const -1))
(assert_return (invoke "i16x8.extract_lane_u_first" (v128.const i16x8 65535 0 0 0 0 0 0 0)) (i32.const 65535))
(assert_return (invoke "i16x8.extract_lane_u_last" (v128.const i16x8 0 0 0 0 0 0 0 65535)) (i32.const 65535))
(assert_return (invoke "i16x8.replace_lane_first" (v128.const i64x2 0 0) (i32.const 7)) (v128.const i16x8 7 0 0 0 0 0 0 0))
(assert_return (invoke "i16x8.replace_lane_last" (v128.const i64x2 0 0) (i32.const 7)) (v128.const i16x8 0 0 0 0 0 0 0 7))
;; i32x4 lane accesses
(assert_return (invoke "i32x4.splat" (i32.const -5)) (v128.const i32x4 -5 -5 -5 -5))
(assert_return (invoke "i32x4.extract_lane_first" (v128.const i32x4 -5 0 0 0)) (i32.const -5))
(assert_return (invoke "i32x4.extract_lane_last" (v128.const i32x4 0 0 0 -5)) (i32.const -5))
(assert_return (invoke "i32x4.replace_lane_first" (v128.const i64x2 0 0) (i32.const 53)) (v128.const i32x4 53 0 0 0))
(assert_return (invoke "i32x4.replace_lane_last" (v128.const i64x2 0 0) (i32.const 53)) (v128.const i32x4 0 0 0 53))
;; i64x2 lane accesses
(assert_return (invoke "i64x2.splat" (i64.const -5)) (v128.const i64x2 -5 -5))
(assert_return (invoke "i64x2.extract_lane_first" (v128.const i64x2 -5 0)) (i64.const -5))
(assert_return (invoke "i64x2.extract_lane_last" (v128.const i64x2 0 -5)) (i64.const -5))
(assert_return (invoke "i64x2.replace_lane_first" (v128.const i64x2 0 0) (i64.const 53)) (v128.const i64x2 53 0))
(assert_return (invoke "i64x2.replace_lane_last" (v128.const i64x2 0 0) (i64.const 53)) (v128.const i64x2 0 53))
;; f32x4 lane accesses
(assert_return (invoke "f32x4.splat" (f32.const -5)) (v128.const f32x4 -5 -5 -5 -5))
(assert_return (invoke "f32x4.extract_lane_first" (v128.const f32x4 -5 0 0 0)) (f32.const -5))
(assert_return (invoke "f32x4.extract_lane_last" (v128.const f32x4 0 0 0 -5)) (f32.const -5))
(assert_return (invoke "f32x4.replace_lane_first" (v128.const i64x2 0 0) (f32.const 53)) (v128.const f32x4 53 0 0 0))
(assert_return (invoke "f32x4.replace_lane_last" (v128.const i64x2 0 0) (f32.const 53)) (v128.const f32x4 0 0 0 53))
;; f64x2 lane accesses
(assert_return (invoke "f64x2.splat" (f64.const -5)) (v128.const f64x2 -5 -5))
(assert_return (invoke "f64x2.extract_lane_first" (v128.const f64x2 -5 0)) (f64.const -5))
(assert_return (invoke "f64x2.extract_lane_last" (v128.const f64x2 0 -5)) (f64.const -5))
(assert_return (invoke "f64x2.replace_lane_first" (v128.const f64x2 0 0) (f64.const 53)) (v128.const f64x2 53 0))
(assert_return (invoke "f64x2.replace_lane_last" (v128.const f64x2 0 0) (f64.const 53)) (v128.const f64x2 0 53))
;; i8x16 comparisons
(assert_return
(invoke "i8x16.eq"
(v128.const i8x16 0 127 13 128 1 13 129 42 0 127 255 42 1 13 129 42)
(v128.const i8x16 0 255 13 42 129 127 0 128 0 255 13 42 129 127 0 128)
)
(v128.const i8x16 -1 0 -1 0 0 0 0 0 -1 0 0 -1 0 0 0 0)
)
(assert_return
(invoke "i8x16.ne"
(v128.const i8x16 0 127 13 128 1 13 129 42 0 127 255 42 1 13 129 42)
(v128.const i8x16 0 255 13 42 129 127 0 128 0 255 13 42 129 127 0 128)
)
(v128.const i8x16 0 -1 0 -1 -1 -1 -1 -1 0 -1 -1 0 -1 -1 -1 -1)
)
(assert_return
(invoke "i8x16.lt_s"
(v128.const i8x16 0 127 13 128 1 13 129 42 0 127 255 42 1 13 129 42)
(v128.const i8x16 0 255 13 42 129 127 0 128 0 255 13 42 129 127 0 128)
)
(v128.const i8x16 0 0 0 -1 0 -1 -1 0 0 0 -1 0 0 -1 -1 0)
)
(assert_return
(invoke "i8x16.lt_u"
(v128.const i8x16 0 127 13 128 1 13 129 42 0 127 255 42 1 13 129 42)
(v128.const i8x16 0 255 13 42 129 127 0 128 0 255 13 42 129 127 0 128)
)
(v128.const i8x16 0 -1 0 0 -1 -1 0 -1 0 -1 0 0 -1 -1 0 -1)
)
(assert_return
(invoke "i8x16.gt_s"
(v128.const i8x16 0 127 13 128 1 13 129 42 0 127 255 42 1 13 129 42)
(v128.const i8x16 0 255 13 42 129 127 0 128 0 255 13 42 129 127 0 128)
)
(v128.const i8x16 0 -1 0 0 -1 0 0 -1 0 -1 0 0 -1 0 0 -1)
)
(assert_return
(invoke "i8x16.gt_u"
(v128.const i8x16 0 127 13 128 1 13 129 42 0 127 255 42 1 13 129 42)
(v128.const i8x16 0 255 13 42 129 127 0 128 0 255 13 42 129 127 0 128)
)
(v128.const i8x16 0 0 0 -1 0 0 -1 0 0 0 -1 0 0 0 -1 0)
)
(assert_return
(invoke "i8x16.le_s"
(v128.const i8x16 0 127 13 128 1 13 129 42 0 127 255 42 1 13 129 42)
(v128.const i8x16 0 255 13 42 129 127 0 128 0 255 13 42 129 127 0 128)
)
(v128.const i8x16 -1 0 -1 -1 0 -1 -1 0 -1 0 -1 -1 0 -1 -1 0)
)
(assert_return
(invoke "i8x16.le_u"
(v128.const i8x16 0 127 13 128 1 13 129 42 0 127 255 42 1 13 129 42)
(v128.const i8x16 0 255 13 42 129 127 0 128 0 255 13 42 129 127 0 128)
)
(v128.const i8x16 -1 -1 -1 0 -1 -1 0 -1 -1 -1 0 -1 -1 -1 0 -1)
)
(assert_return
(invoke "i8x16.ge_s"
(v128.const i8x16 0 127 13 128 1 13 129 42 0 127 255 42 1 13 129 42)
(v128.const i8x16 0 255 13 42 129 127 0 128 0 255 13 42 129 127 0 128)
)
(v128.const i8x16 -1 -1 -1 0 -1 0 0 -1 -1 -1 0 -1 -1 0 0 -1)
)
(assert_return
(invoke "i8x16.ge_u"
(v128.const i8x16 0 127 13 128 1 13 129 42 0 127 255 42 1 13 129 42)
(v128.const i8x16 0 255 13 42 129 127 0 128 0 255 13 42 129 127 0 128)
)
(v128.const i8x16 -1 0 -1 -1 0 0 -1 0 -1 0 -1 -1 0 0 -1 0)
)
;; i16x8 comparisons
(assert_return (invoke "i16x8.eq"
(v128.const i16x8 0 32767 13 32768 1 32769 42 40000)
(v128.const i16x8 0 13 1 32767 32769 42 40000 32767)
)
(v128.const i16x8 -1 0 0 0 0 0 0 0)
)
(assert_return
(invoke "i16x8.ne"
(v128.const i16x8 0 32767 13 32768 1 32769 42 40000)
(v128.const i16x8 0 13 1 32767 32769 42 40000 32767)
)
(v128.const i16x8 0 -1 -1 -1 -1 -1 -1 -1)
)
(assert_return
(invoke "i16x8.lt_s"
(v128.const i16x8 0 32767 13 32768 1 32769 42 40000)
(v128.const i16x8 0 13 1 32767 32769 42 40000 32767)
)
(v128.const i16x8 0 0 0 -1 0 -1 0 -1)
)
(assert_return
(invoke "i16x8.lt_u"
(v128.const i16x8 0 32767 13 32768 1 32769 42 40000)
(v128.const i16x8 0 13 1 32767 32769 42 40000 32767)
)
(v128.const i16x8 0 0 0 0 -1 0 -1 0)
)
(assert_return
(invoke "i16x8.gt_s"
(v128.const i16x8 0 32767 13 32768 1 32769 42 40000)
(v128.const i16x8 0 13 1 32767 32769 42 40000 32767)
)
(v128.const i16x8 0 -1 -1 0 -1 0 -1 0)
)
(assert_return
(invoke "i16x8.gt_u"
(v128.const i16x8 0 32767 13 32768 1 32769 42 40000)
(v128.const i16x8 0 13 1 32767 32769 42 40000 32767)
)
(v128.const i16x8 0 -1 -1 -1 0 -1 0 -1)
)
(assert_return
(invoke "i16x8.le_s"
(v128.const i16x8 0 32767 13 32768 1 32769 42 40000)
(v128.const i16x8 0 13 1 32767 32769 42 40000 32767)
)
(v128.const i16x8 -1 0 0 -1 0 -1 0 -1)
)
(assert_return
(invoke "i16x8.le_u"
(v128.const i16x8 0 32767 13 32768 1 32769 42 40000)
(v128.const i16x8 0 13 1 32767 32769 42 40000 32767)
)
(v128.const i16x8 -1 0 0 0 -1 0 -1 0)
)
(assert_return
(invoke "i16x8.ge_s"
(v128.const i16x8 0 32767 13 32768 1 32769 42 40000)
(v128.const i16x8 0 13 1 32767 32769 42 40000 32767)
)
(v128.const i16x8 -1 -1 -1 0 -1 0 -1 0)
)
(assert_return
(invoke "i16x8.ge_u"
(v128.const i16x8 0 32767 13 32768 1 32769 42 40000)
(v128.const i16x8 0 13 1 32767 32769 42 40000 32767)
)
(v128.const i16x8 -1 -1 -1 -1 0 -1 0 -1)
)
;; i32x4 comparisons
(assert_return (invoke "i32x4.eq" (v128.const i32x4 0 -1 53 -7) (v128.const i32x4 0 53 -7 -1)) (v128.const i32x4 -1 0 0 0))
(assert_return (invoke "i32x4.ne" (v128.const i32x4 0 -1 53 -7) (v128.const i32x4 0 53 -7 -1)) (v128.const i32x4 0 -1 -1 -1))
(assert_return (invoke "i32x4.lt_s" (v128.const i32x4 0 -1 53 -7) (v128.const i32x4 0 53 -7 -1)) (v128.const i32x4 0 -1 0 -1))
(assert_return (invoke "i32x4.lt_u" (v128.const i32x4 0 -1 53 -7) (v128.const i32x4 0 53 -7 -1)) (v128.const i32x4 0 0 -1 -1))
(assert_return (invoke "i32x4.gt_s" (v128.const i32x4 0 -1 53 -7) (v128.const i32x4 0 53 -7 -1)) (v128.const i32x4 0 0 -1 0))
(assert_return (invoke "i32x4.gt_u" (v128.const i32x4 0 -1 53 -7) (v128.const i32x4 0 53 -7 -1)) (v128.const i32x4 0 -1 0 0))
(assert_return (invoke "i32x4.le_s" (v128.const i32x4 0 -1 53 -7) (v128.const i32x4 0 53 -7 -1)) (v128.const i32x4 -1 -1 0 -1))
(assert_return (invoke "i32x4.le_u" (v128.const i32x4 0 -1 53 -7) (v128.const i32x4 0 53 -7 -1)) (v128.const i32x4 -1 0 -1 -1))
(assert_return (invoke "i32x4.ge_s" (v128.const i32x4 0 -1 53 -7) (v128.const i32x4 0 53 -7 -1)) (v128.const i32x4 -1 0 -1 0))
(assert_return (invoke "i32x4.ge_u" (v128.const i32x4 0 -1 53 -7) (v128.const i32x4 0 53 -7 -1)) (v128.const i32x4 -1 -1 0 0))
;; i64x2 comparisons
(assert_return (invoke "i64x2.eq" (v128.const i64x2 0 -1) (v128.const i64x2 -1 -1)) (v128.const i64x2 0 -1))
;; f32x4 comparisons
(assert_return (invoke "f32x4.eq" (v128.const f32x4 0 -1 1 0) (v128.const f32x4 0 0 -1 1)) (v128.const i32x4 -1 0 0 0))
(assert_return (invoke "f32x4.ne" (v128.const f32x4 0 -1 1 0) (v128.const f32x4 0 0 -1 1)) (v128.const i32x4 0 -1 -1 -1))
(assert_return (invoke "f32x4.lt" (v128.const f32x4 0 -1 1 0) (v128.const f32x4 0 0 -1 1)) (v128.const i32x4 0 -1 0 -1))
(assert_return (invoke "f32x4.gt" (v128.const f32x4 0 -1 1 0) (v128.const f32x4 0 0 -1 1)) (v128.const i32x4 0 0 -1 0))
(assert_return (invoke "f32x4.le" (v128.const f32x4 0 -1 1 0) (v128.const f32x4 0 0 -1 1)) (v128.const i32x4 -1 -1 0 -1))
(assert_return (invoke "f32x4.ge" (v128.const f32x4 0 -1 1 0) (v128.const f32x4 0 0 -1 1)) (v128.const i32x4 -1 0 -1 0))
(assert_return (invoke "f32x4.eq" (v128.const f32x4 nan 0 nan inf) (v128.const f32x4 0 nan nan inf)) (v128.const i32x4 0 0 0 -1))
(assert_return (invoke "f32x4.ne" (v128.const f32x4 nan 0 nan inf) (v128.const f32x4 0 nan nan inf)) (v128.const i32x4 -1 -1 -1 0))
(assert_return (invoke "f32x4.lt" (v128.const f32x4 nan 0 nan inf) (v128.const f32x4 0 nan nan inf)) (v128.const i32x4 0 0 0 0))
(assert_return (invoke "f32x4.gt" (v128.const f32x4 nan 0 nan inf) (v128.const f32x4 0 nan nan inf)) (v128.const i32x4 0 0 0 0))
(assert_return (invoke "f32x4.le" (v128.const f32x4 nan 0 nan inf) (v128.const f32x4 0 nan nan inf)) (v128.const i32x4 0 0 0 -1))
(assert_return (invoke "f32x4.ge" (v128.const f32x4 nan 0 nan inf) (v128.const f32x4 0 nan nan inf)) (v128.const i32x4 0 0 0 -1))
(assert_return (invoke "f32x4.eq" (v128.const f32x4 -inf 0 nan -inf) (v128.const f32x4 0 inf inf nan)) (v128.const i32x4 0 0 0 0))
(assert_return (invoke "f32x4.ne" (v128.const f32x4 -inf 0 nan -inf) (v128.const f32x4 0 inf inf nan)) (v128.const i32x4 -1 -1 -1 -1))
(assert_return (invoke "f32x4.lt" (v128.const f32x4 -inf 0 nan -inf) (v128.const f32x4 0 inf inf nan)) (v128.const i32x4 -1 -1 0 0))
(assert_return (invoke "f32x4.gt" (v128.const f32x4 -inf 0 nan -inf) (v128.const f32x4 0 inf inf nan)) (v128.const i32x4 0 0 0 0))
(assert_return (invoke "f32x4.le" (v128.const f32x4 -inf 0 nan -inf) (v128.const f32x4 0 inf inf nan)) (v128.const i32x4 -1 -1 0 0))
(assert_return (invoke "f32x4.ge" (v128.const f32x4 -inf 0 nan -inf) (v128.const f32x4 0 inf inf nan)) (v128.const i32x4 0 0 0 0))
;; f64x2 comparisons
(assert_return (invoke "f64x2.eq" (v128.const f64x2 0 1) (v128.const f64x2 0 0)) (v128.const i64x2 -1 0))
(assert_return (invoke "f64x2.ne" (v128.const f64x2 0 1) (v128.const f64x2 0 0)) (v128.const i64x2 0 -1))
(assert_return (invoke "f64x2.lt" (v128.const f64x2 0 1) (v128.const f64x2 0 0)) (v128.const i64x2 0 0))
(assert_return (invoke "f64x2.gt" (v128.const f64x2 0 1) (v128.const f64x2 0 0)) (v128.const i64x2 0 -1))
(assert_return (invoke "f64x2.le" (v128.const f64x2 0 1) (v128.const f64x2 0 0)) (v128.const i64x2 -1 0))
(assert_return (invoke "f64x2.ge" (v128.const f64x2 0 1) (v128.const f64x2 0 0)) (v128.const i64x2 -1 -1))
(assert_return (invoke "f64x2.eq" (v128.const f64x2 nan 0) (v128.const f64x2 inf inf)) (v128.const i64x2 0 0))
(assert_return (invoke "f64x2.ne" (v128.const f64x2 nan 0) (v128.const f64x2 inf inf)) (v128.const i64x2 -1 -1))
(assert_return (invoke "f64x2.lt" (v128.const f64x2 nan 0) (v128.const f64x2 inf inf)) (v128.const i64x2 0 -1))
(assert_return (invoke "f64x2.gt" (v128.const f64x2 nan 0) (v128.const f64x2 inf inf)) (v128.const i64x2 0 0))
(assert_return (invoke "f64x2.le" (v128.const f64x2 nan 0) (v128.const f64x2 inf inf)) (v128.const i64x2 0 -1))
(assert_return (invoke "f64x2.ge" (v128.const f64x2 nan 0) (v128.const f64x2 inf inf)) (v128.const i64x2 0 0))
;; bitwise operations
(assert_return (invoke "v128.not" (v128.const i32x4 0 -1 0 -1)) (v128.const i32x4 -1 0 -1 0))
(assert_return (invoke "v128.and" (v128.const i32x4 0 0 -1 -1) (v128.const i32x4 0 -1 0 -1)) (v128.const i32x4 0 0 0 -1))
(assert_return (invoke "v128.or" (v128.const i32x4 0 0 -1 -1) (v128.const i32x4 0 -1 0 -1)) (v128.const i32x4 0 -1 -1 -1))
(assert_return (invoke "v128.xor" (v128.const i32x4 0 0 -1 -1) (v128.const i32x4 0 -1 0 -1)) (v128.const i32x4 0 -1 -1 0))
(assert_return (invoke "v128.andnot" (v128.const i32x4 0 0 -1 -1) (v128.const i32x4 0 -1 0 -1)) (v128.const i32x4 0 0 -1 0))
(assert_return (invoke "v128.bitselect"
(v128.const i32x4 0xAAAAAAAA 0xAAAAAAAA 0xAAAAAAAA 0xAAAAAAAA)
(v128.const i32x4 0xBBBBBBBB 0xBBBBBBBB 0xBBBBBBBB 0xBBBBBBBB)
(v128.const i32x4 0xF0F0F0F0 0xFFFFFFFF 0x00000000 0xFF00FF00)
)
(v128.const i32x4 0xABABABAB 0xAAAAAAAA 0xBBBBBBBB 0xAABBAABB)
)
;; TODO: signselect tests
;; load/store lane
(assert_return (invoke "v128.load8_lane"
(i32.const 1024)
(v128.const i32x4 0x04030201 0x08070605 0x0c0b0a09 0x100f0e0d)
)
(v128.const i32x4 0x040302ff 0x08070605 0x0c0b0a09 0x100f0e0d)
)
(assert_return (invoke "v128.load16_lane"
(i32.const 1024)
(v128.const i32x4 0x04030201 0x08070605 0x0c0b0a09 0x100f0e0d)
)
(v128.const i32x4 0x0403ffff 0x08070605 0x0c0b0a09 0x100f0e0d)
)
(assert_return (invoke "v128.load32_lane"
(i32.const 1024)
(v128.const i32x4 0x04030201 0x08070605 0x0c0b0a09 0x100f0e0d)
)
(v128.const i32x4 0xffffffff 0x08070605 0x0c0b0a09 0x100f0e0d)
)
(assert_return (invoke "v128.load64_lane"
(i32.const 1024)
(v128.const i32x4 0x04030201 0x08070605 0x0c0b0a09 0x100f0e0d)
)
(v128.const i32x4 0xffffffff 0xffffffff 0x0c0b0a09 0x100f0e0d)
)
(assert_return (invoke "v128.store8_lane"
(i32.const 1024)
(v128.const i32x4 0x04030201 0x08070605 0x0c0b0a09 0x100f0e0d)
)
)
(assert_return (invoke "v128.load" (i32.const 1024)) (v128.const i32x4 0xffffff01 0xffffffff 0x00000000 0x00000000))
(assert_return (invoke "v128.store16_lane"
(i32.const 1024)
(v128.const i32x4 0x04030201 0x08070605 0x0c0b0a09 0x100f0e0d)
)
)
(assert_return (invoke "v128.load" (i32.const 1024)) (v128.const i32x4 0xffff0201 0xffffffff 0x00000000 0x00000000))
(assert_return (invoke "v128.store32_lane"
(i32.const 1024)
(v128.const i32x4 0x04030201 0x08070605 0x0c0b0a09 0x100f0e0d)
)
)
(assert_return (invoke "v128.load" (i32.const 1024)) (v128.const i32x4 0x04030201 0xffffffff 0x00000000 0x00000000))
(assert_return (invoke "v128.store64_lane"
(i32.const 1024)
(v128.const i32x4 0x04030201 0x08070605 0x0c0b0a09 0x100f0e0d)
)
)
(assert_return (invoke "v128.load" (i32.const 1024)) (v128.const i32x4 0x04030201 0x08070605 0x00000000 0x00000000))
;; i8x16 arithmetic
(assert_return (invoke "i8x16.popcnt" (v128.const i8x16 0 1 42 -3 -56 127 -128 -126 0 -1 -42 3 56 -127 -128 126))
(v128.const i8x16 0 1 3 7 3 7 1 2 0 8 5 2 3 2 1 6)
)
(assert_return (invoke "i8x16.abs" (v128.const i8x16 0 1 42 -3 -56 127 -128 -126 0 -1 -42 3 56 -127 -128 126))
(v128.const i8x16 0 1 42 3 56 127 -128 126 0 1 42 3 56 127 -128 126)
)
(assert_return (invoke "i8x16.neg" (v128.const i8x16 0 1 42 -3 -56 127 -128 -126 0 -1 -42 3 56 -127 -128 126))
(v128.const i8x16 0 -1 -42 3 56 -127 -128 126 0 1 42 -3 -56 127 -128 -126)
)
(assert_return (invoke "i8x16.all_true" (v128.const i8x16 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0)) (i32.const 0))
(assert_return (invoke "i8x16.all_true" (v128.const i8x16 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0)) (i32.const 0))
(assert_return (invoke "i8x16.all_true" (v128.const i8x16 1 1 1 1 1 0 1 1 1 1 1 1 1 1 1 1)) (i32.const 0))
(assert_return (invoke "i8x16.all_true" (v128.const i8x16 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1)) (i32.const 1))
(assert_return (invoke "i8x16.bitmask" (v128.const i8x16 -1 0 1 -128 127 -127 0 128 -1 0 1 -128 127 -127 0 128)) (i32.const 43433))
(assert_return (invoke "i8x16.shl" (v128.const i8x16 0 1 2 4 8 16 32 64 -128 3 6 12 24 48 96 -64) (i32.const 1))
(v128.const i8x16 0 2 4 8 16 32 64 -128 0 6 12 24 48 96 -64 -128)
)
(assert_return (invoke "i8x16.shl" (v128.const i8x16 0 1 2 4 8 16 32 64 -128 3 6 12 24 48 96 -64) (i32.const 8))
(v128.const i8x16 0 1 2 4 8 16 32 64 -128 3 6 12 24 48 96 -64)
)
(assert_return (invoke "i8x16.shr_u" (v128.const i8x16 0 1 2 4 8 16 32 64 -128 3 6 12 24 48 96 -64) (i32.const 1))
(v128.const i8x16 0 0 1 2 4 8 16 32 64 1 3 6 12 24 48 96)
)
(assert_return (invoke "i8x16.shr_u" (v128.const i8x16 0 1 2 4 8 16 32 64 -128 3 6 12 24 48 96 -64) (i32.const 8))
(v128.const i8x16 0 1 2 4 8 16 32 64 -128 3 6 12 24 48 96 -64)
)
(assert_return (invoke "i8x16.shr_s" (v128.const i8x16 0 1 2 4 8 16 32 64 -128 3 6 12 24 48 96 -64) (i32.const 1))
(v128.const i8x16 0 0 1 2 4 8 16 32 -64 1 3 6 12 24 48 -32)
)
(assert_return (invoke "i8x16.shr_s" (v128.const i8x16 0 1 2 4 8 16 32 64 -128 3 6 12 24 48 96 -64) (i32.const 8))
(v128.const i8x16 0 1 2 4 8 16 32 64 -128 3 6 12 24 48 96 -64)
)
(assert_return
(invoke "i8x16.add"
(v128.const i8x16 0 42 255 128 127 129 6 29 103 196 231 142 17 250 1 73)
(v128.const i8x16 3 231 1 128 129 6 103 17 42 29 73 42 0 255 127 142)
)
(v128.const i8x16 3 17 0 0 0 135 109 46 145 225 48 184 17 249 128 215)
)
(assert_return
(invoke "i8x16.add_sat_s"
(v128.const i8x16 0 42 255 128 127 129 6 29 103 196 231 142 17 250 1 73)
(v128.const i8x16 3 231 1 128 129 6 103 17 42 29 73 42 0 255 127 142)
)
(v128.const i8x16 3 17 0 128 0 135 109 46 127 225 48 184 17 249 127 215)
)
(assert_return
(invoke "i8x16.add_sat_u"
(v128.const i8x16 0 42 255 128 127 129 6 29 103 196 231 142 17 250 1 73)
(v128.const i8x16 3 231 1 128 129 6 103 17 42 29 73 42 0 255 127 142)
)
(v128.const i8x16 3 255 255 255 255 135 109 46 145 225 255 184 17 255 128 215)
)
(assert_return
(invoke "i8x16.sub"
(v128.const i8x16 0 42 255 128 127 129 6 29 103 196 231 142 17 250 1 73)
(v128.const i8x16 3 231 1 128 129 6 103 17 42 29 73 42 0 255 127 142)
)
(v128.const i8x16 253 67 254 0 254 123 159 12 61 167 158 100 17 251 130 187)
)
(assert_return
(invoke "i8x16.sub_sat_s"
(v128.const i8x16 0 42 255 128 127 129 6 29 103 196 231 142 17 250 1 73)
(v128.const i8x16 3 231 1 128 129 6 103 17 42 29 73 42 0 255 127 142)
)
(v128.const i8x16 253 67 254 0 127 128 159 12 61 167 158 128 17 251 130 127)
)
(assert_return
(invoke "i8x16.sub_sat_u"
(v128.const i8x16 0 42 255 128 127 129 6 29 103 196 231 142 17 250 1 73)
(v128.const i8x16 3 231 1 128 129 6 103 17 42 29 73 42 0 255 127 142)
)
(v128.const i8x16 0 0 254 0 0 123 0 12 61 167 158 100 17 0 0 0)
)
(assert_return
(invoke "i8x16.min_s"
(v128.const i8x16 0 42 255 128 127 129 6 29 103 196 231 142 17 250 1 73)
(v128.const i8x16 3 231 1 128 129 6 103 17 42 29 73 42 0 255 127 142)
)
(v128.const i8x16 0 231 255 128 129 129 6 17 42 196 231 142 0 250 1 142)
)
(assert_return
(invoke "i8x16.min_u"
(v128.const i8x16 0 42 255 128 127 129 6 29 103 196 231 142 17 250 1 73)
(v128.const i8x16 3 231 1 128 129 6 103 17 42 29 73 42 0 255 127 142)
)
(v128.const i8x16 0 42 1 128 127 6 6 17 42 29 73 42 0 250 1 73)
)
(assert_return
(invoke "i8x16.max_s"
(v128.const i8x16 0 42 255 128 127 129 6 29 103 196 231 142 17 250 1 73)
(v128.const i8x16 3 231 1 128 129 6 103 17 42 29 73 42 0 255 127 142)
)
(v128.const i8x16 3 42 1 128 127 6 103 29 103 29 73 42 17 255 127 73)
)
(assert_return
(invoke "i8x16.max_u"
(v128.const i8x16 0 42 255 128 127 129 6 29 103 196 231 142 17 250 1 73)
(v128.const i8x16 3 231 1 128 129 6 103 17 42 29 73 42 0 255 127 142)
)
(v128.const i8x16 3 231 255 128 129 129 103 29 103 196 231 142 17 255 127 142)
)
(assert_return
(invoke "i8x16.avgr_u"
(v128.const i8x16 0 42 255 128 127 129 6 29 103 196 231 142 17 250 1 73)
(v128.const i8x16 3 231 1 128 129 6 103 17 42 29 73 42 0 255 127 142)
)
(v128.const i8x16 2 137 128 128 128 68 55 23 73 113 152 92 9 253 64 108)
)
;; i16x8 arithmetic
(assert_return (invoke "i16x8.abs" (v128.const i16x8 0 1 42 -3 -56 32767 -32768 32766))
(v128.const i16x8 0 1 42 3 56 32767 -32768 32766)
)
(assert_return (invoke "i16x8.neg" (v128.const i16x8 0 1 42 -3 -56 32767 -32768 32766))
(v128.const i16x8 0 -1 -42 3 56 -32767 -32768 -32766)
)
(assert_return (invoke "i16x8.all_true" (v128.const i16x8 0 0 0 0 0 0 0 0)) (i32.const 0))
(assert_return (invoke "i16x8.all_true" (v128.const i16x8 0 0 1 0 0 0 0 0)) (i32.const 0))
(assert_return (invoke "i16x8.all_true" (v128.const i16x8 1 1 1 1 1 0 1 1)) (i32.const 0))
(assert_return (invoke "i16x8.all_true" (v128.const i16x8 1 1 1 1 1 1 1 1)) (i32.const 1))
(assert_return (invoke "i16x8.bitmask" (v128.const i16x8 -1 0 1 -32768 32767 -32767 0 32768)) (i32.const 169))
(assert_return (invoke "i16x8.shl" (v128.const i16x8 0 8 16 128 256 2048 4096 -32768) (i32.const 1)) (v128.const i16x8 0 16 32 256 512 4096 8192 0))
(assert_return (invoke "i16x8.shl" (v128.const i16x8 0 8 16 128 256 2048 4096 -32768) (i32.const 16)) (v128.const i16x8 0 8 16 128 256 2048 4096 -32768))
(assert_return (invoke "i16x8.shr_u" (v128.const i16x8 0 8 16 128 256 2048 4096 -32768) (i32.const 1)) (v128.const i16x8 0 4 8 64 128 1024 2048 16384))
(assert_return (invoke "i16x8.shr_u" (v128.const i16x8 0 8 16 128 256 2048 4096 -32768) (i32.const 16)) (v128.const i16x8 0 8 16 128 256 2048 4096 -32768))
(assert_return (invoke "i16x8.shr_s" (v128.const i16x8 0 8 16 128 256 2048 4096 -32768) (i32.const 1)) (v128.const i16x8 0 4 8 64 128 1024 2048 -16384))
(assert_return (invoke "i16x8.shr_s" (v128.const i16x8 0 8 16 128 256 2048 4096 -32768) (i32.const 16)) (v128.const i16x8 0 8 16 128 256 2048 4096 -32768))
(assert_return
(invoke "i16x8.add"
(v128.const i16x8 0 65280 32768 32512 33024 59136 64000 32766)
(v128.const i16x8 768 1 32768 33024 1536 18688 65280 2)
)
(v128.const i16x8 768 65281 0 0 34560 12288 63744 32768)
)
(assert_return
(invoke "i16x8.add_sat_s"
(v128.const i16x8 0 65280 32768 32512 33024 59136 64000 32766)
(v128.const i16x8 768 1 32768 33024 1536 18688 65280 2)
)
(v128.const i16x8 768 65281 32768 0 34560 12288 63744 32767)
)
(assert_return
(invoke "i16x8.add_sat_u"
(v128.const i16x8 0 65280 32768 32512 33024 59136 64000 32766)
(v128.const i16x8 768 1 32768 33024 1536 18688 65280 2)
)
(v128.const i16x8 768 65281 65535 65535 34560 65535 65535 32768)
)
(assert_return
(invoke "i16x8.sub"
(v128.const i16x8 0 65280 32768 32512 33024 59136 64000 32766)
(v128.const i16x8 768 1 32768 33024 1536 18688 65280 2)
)
(v128.const i16x8 64768 65279 0 65024 31488 40448 64256 32764)
)
(assert_return
(invoke "i16x8.sub_sat_s"
(v128.const i16x8 0 65280 32768 32512 33024 59136 64000 32766)
(v128.const i16x8 768 1 32768 33024 1536 18688 65280 2)
)
(v128.const i16x8 64768 65279 0 32767 32768 40448 64256 32764)
)
(assert_return
(invoke "i16x8.sub_sat_u"
(v128.const i16x8 0 65280 32768 32512 33024 59136 64000 32766)
(v128.const i16x8 768 1 32768 33024 1536 18688 65280 2)
)
(v128.const i16x8 0 65279 0 0 31488 40448 0 32764)
)
(assert_return
(invoke "i16x8.mul"
(v128.const i16x8 0 65280 32768 32512 33024 59136 64000 32766)
(v128.const i16x8 768 1 32768 33024 1536 18688 65280 2)
)
(v128.const i16x8 0 65280 0 0 0 0 0 65532)
)
(assert_return
(invoke "i16x8.min_s"
(v128.const i16x8 0 65280 32768 32512 33024 59136 64000 32766)
(v128.const i16x8 768 1 32768 33024 1536 18688 65280 2)
)
(v128.const i16x8 0 65280 32768 33024 33024 59136 64000 2)
)
(assert_return
(invoke "i16x8.min_u"
(v128.const i16x8 0 65280 32768 32512 33024 59136 64000 32766)
(v128.const i16x8 768 1 32768 33024 1536 18688 65280 2)
)
(v128.const i16x8 0 1 32768 32512 1536 18688 64000 2)
)
(assert_return
(invoke "i16x8.max_s"
(v128.const i16x8 0 65280 32768 32512 33024 59136 64000 32766)
(v128.const i16x8 768 1 32768 33024 1536 18688 65280 2)
)
(v128.const i16x8 768 1 32768 32512 1536 18688 65280 32766)
)
(assert_return
(invoke "i16x8.max_u"
(v128.const i16x8 0 65280 32768 32512 33024 59136 64000 32766)
(v128.const i16x8 768 1 32768 33024 1536 18688 65280 2)
)
(v128.const i16x8 768 65280 32768 33024 33024 59136 65280 32766)
)
(assert_return
(invoke "i16x8.avgr_u"
(v128.const i16x8 0 65280 32768 32512 33024 59136 64000 32766)
(v128.const i16x8 768 1 32768 33024 1536 18688 65280 2)
)
(v128.const i16x8 384 32641 32768 32768 17280 38912 64640 16384)
)
(assert_return
(invoke "i16x8.q15mulr_sat_s"
(v128.const i16x8 -1 -16383 32765 65535 -32768 65535 -16385 -32768)
(v128.const i16x8 -1 -16384 1 -32768 -32768 1 -16384 -1)
)
(v128.const i16x8 0 8192 1 1 32767 0 8193 1)
)
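;; q15mulr_sat_s computes sat16((a * b + 0x4000) >> 15) per lane, e.g. lane 1
;; above is (-16383 * -16384 + 0x4000) >> 15 = 8192; the only overflowing case
;; is -32768 * -32768, which rounds to 32768 and saturates to 32767 (lane 4).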
(assert_return
(invoke "i16x8.extmul_low_i8x16_s"
(v128.const i8x16 63 -63 128 -127 -128 -128 255 255 0 0 0 0 0 0 0 0)
(v128.const i8x16 64 -64 1 -1 -1 -127 1 255 0 0 0 0 0 0 0 0)
)
(v128.const i16x8 4032 4032 -128 127 128 16256 -1 1)
)
(assert_return
(invoke "i16x8.extmul_high_i8x16_s"
(v128.const i8x16 0 0 0 0 0 0 0 0 63 -63 128 -127 -128 -128 255 255)
(v128.const i8x16 0 0 0 0 0 0 0 0 64 -64 1 -1 -1 -127 1 255)
)
(v128.const i16x8 4032 4032 -128 127 128 16256 -1 1)
)
(assert_return
(invoke "i16x8.extmul_low_i8x16_u"
(v128.const i8x16 1 -1 63 -65 -126 -128 255 255 0 0 0 0 0 0 0 0)
(v128.const i8x16 -1 -1 64 -64 -1 -128 -128 255 0 0 0 0 0 0 0 0)
)
(v128.const i16x8 255 -511 4032 -28864 -32386 16384 32640 -511)
)
(assert_return
(invoke "i16x8.extmul_high_i8x16_u"
(v128.const i8x16 0 0 0 0 0 0 0 0 1 -1 63 -65 -126 -128 255 255)
(v128.const i8x16 0 0 0 0 0 0 0 0 -1 -1 64 -64 -1 -128 -128 255)
)
(v128.const i16x8 255 -511 4032 -28864 -32386 16384 32640 -511)
)
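;; widening (extmul) variants with i32x4 and i64x2 results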
(assert_return
(invoke "i32x4.extmul_low_i16x8_s"
(v128.const i16x8 1 16384 -32766 65535 0 0 0 0)
(v128.const i16x8 -1 16384 -1 -1 0 0 0 0)
)
(v128.const i32x4 -1 268435456 32766 1)
)
(assert_return
(invoke "i32x4.extmul_high_i16x8_s"
(v128.const i16x8 0 0 0 0 16383 32768 -32766 65535)
(v128.const i16x8 0 0 0 0 16384 1 -1 -32768)
)
(v128.const i32x4 268419072 -32768 32766 32768)
)
(assert_return
(invoke "i32x4.extmul_low_i16x8_u"
(v128.const i16x8 16383 -16383 -32768 65535 0 0 0 0)
(v128.const i16x8 16384 -16384 -1 65535 0 0 0 0)
)
(v128.const i32x4 268419072 -1878999040 2147450880 -131071)
)
(assert_return
(invoke "i32x4.extmul_high_i16x8_u"
(v128.const i16x8 0 0 0 0 16383 -16383 -32768 65535)
(v128.const i16x8 0 0 0 0 16384 -16384 -1 65535)
)
(v128.const i32x4 268419072 -1878999040 2147450880 -131071)
)
(assert_return
(invoke "i64x2.extmul_low_i32x4_s"
(v128.const i32x4 1073741823 -2147483648 0 0)
(v128.const i32x4 1073741824 -1 0 0)
)
(v128.const i64x2 1152921503533105152 2147483648)
)
(assert_return
(invoke "i64x2.extmul_high_i32x4_s"
(v128.const i32x4 0 0 -1073741825 4294967295)
(v128.const i32x4 0 0 -1073741824 4294967295)
)
(v128.const i64x2 1152921505680588800 1)
)
(assert_return
(invoke "i64x2.extmul_low_i32x4_u"
(v128.const i32x4 -1 -1073741825 0 0)
(v128.const i32x4 -1 -1073741824 0 0)
)
(v128.const i64x2 -8589934591 -8070450535469154304)
)
(assert_return
(invoke "i64x2.extmul_high_i32x4_u"
(v128.const i32x4 0 0 -2147483648 4294967295)
(v128.const i32x4 0 0 -1 4294967295)
)
(v128.const i64x2 9223372034707292160 -8589934591)
)
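;; Each extmul_{low,high}_*_{s,u} is equivalent to extend_{low,high} on both
;; operands followed by a full-width multiply, so no lane can overflow; e.g.
;; 0xffffffff * 0xffffffff = 0xfffffffe00000001 = -8589934591 as i64 above.
;; A minimal extra check of the extend-then-multiply behaviour (same export):
(assert_return
(invoke "i32x4.extmul_low_i16x8_s"
(v128.const i16x8 -2 3 -4 5 0 0 0 0)
(v128.const i16x8 7 -7 7 -7 0 0 0 0)
)
(v128.const i32x4 -14 -21 -28 -35)
)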
;; i32x4 arithmetic
(assert_return (invoke "i32x4.abs" (v128.const i32x4 0 1 0x80000000 0x80000001)) (v128.const i32x4 0 1 0x80000000 0x7fffffff))
(assert_return (invoke "i32x4.neg" (v128.const i32x4 0 1 0x80000000 0x80000001)) (v128.const i32x4 0 -1 0x80000000 0x7fffffff))
(assert_return (invoke "i32x4.all_true" (v128.const i32x4 0 0 0 0)) (i32.const 0))
(assert_return (invoke "i32x4.all_true" (v128.const i32x4 0 0 1 0)) (i32.const 0))
(assert_return (invoke "i32x4.all_true" (v128.const i32x4 1 0 1 1)) (i32.const 0))
(assert_return (invoke "i32x4.all_true" (v128.const i32x4 1 1 1 1)) (i32.const 1))
(assert_return (invoke "i32x4.bitmask" (v128.const i32x4 -1 0 -128 127)) (i32.const 5))
(assert_return (invoke "i32x4.shl" (v128.const i32x4 1 0x40000000 0x80000000 -1) (i32.const 1)) (v128.const i32x4 2 0x80000000 0 -2))
(assert_return (invoke "i32x4.shl" (v128.const i32x4 1 0x40000000 0x80000000 -1) (i32.const 32)) (v128.const i32x4 1 0x40000000 0x80000000 -1))
(assert_return (invoke "i32x4.shr_s" (v128.const i32x4 1 0x40000000 0x80000000 -1) (i32.const 1)) (v128.const i32x4 0 0x20000000 0xc0000000 -1))
(assert_return (invoke "i32x4.shr_s" (v128.const i32x4 1 0x40000000 0x80000000 -1) (i32.const 32)) (v128.const i32x4 1 0x40000000 0x80000000 -1))
(assert_return (invoke "i32x4.shr_u" (v128.const i32x4 1 0x40000000 0x80000000 -1) (i32.const 1)) (v128.const i32x4 0 0x20000000 0x40000000 0x7fffffff))
(assert_return (invoke "i32x4.shr_u" (v128.const i32x4 1 0x40000000 0x80000000 -1) (i32.const 32)) (v128.const i32x4 1 0x40000000 0x80000000 -1))
(assert_return (invoke "i32x4.add" (v128.const i32x4 0 0x80000001 42 5) (v128.const i32x4 0 0x80000001 5 42)) (v128.const i32x4 0 2 47 47))
(assert_return (invoke "i32x4.sub" (v128.const i32x4 0 2 47 47) (v128.const i32x4 0 0x80000001 42 5)) (v128.const i32x4 0 0x80000001 5 42))
(assert_return (invoke "i32x4.mul" (v128.const i32x4 0 0x80000001 42 5) (v128.const i32x4 0 0x80000001 42 5)) (v128.const i32x4 0 1 1764 25))
(assert_return
(invoke "i32x4.min_s" (v128.const i32x4 0 0x80000001 42 0xc0000000) (v128.const i32x4 0xffffffff 42 0 0xb0000000))
(v128.const i32x4 0xffffffff 0x80000001 0 0xb0000000)
)
(assert_return
(invoke "i32x4.min_u" (v128.const i32x4 0 0x80000001 42 0xc0000000) (v128.const i32x4 0xffffffff 42 0 0xb0000000))
(v128.const i32x4 0 42 0 0xb0000000)
)
(assert_return
(invoke "i32x4.max_s" (v128.const i32x4 0 0x80000001 42 0xc0000000) (v128.const i32x4 0xffffffff 42 0 0xb0000000))
(v128.const i32x4 0 42 42 0xc0000000)
)
(assert_return
(invoke "i32x4.max_u" (v128.const i32x4 0 0x80000001 42 0xc0000000) (v128.const i32x4 0xffffffff 42 0 0xb0000000))
(v128.const i32x4 0xffffffff 0x80000001 42 0xc0000000)
)
(assert_return
(invoke "i32x4.dot_i16x8_s" (v128.const i16x8 0 1 2 3 4 5 6 7) (v128.const i16x8 -1 2 -3 4 5 6 -7 -8))
(v128.const i32x4 2 6 50 -98)
)
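;; dot_i16x8_s sums adjacent signed products into each i32 lane, so lane 3
;; above is 6*-7 + 7*-8 = -98. A trivial extra check (same export):
(assert_return
(invoke "i32x4.dot_i16x8_s" (v128.const i16x8 1 1 1 1 1 1 1 1) (v128.const i16x8 1 2 3 4 5 6 7 8))
(v128.const i32x4 3 7 11 15)
)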
;; i64x2 arithmetic
(assert_return (invoke "i64x2.neg" (v128.const i64x2 0x8000000000000000 42)) (v128.const i64x2 0x8000000000000000 -42))
(assert_return (invoke "i64x2.bitmask" (v128.const i64x2 0x8000000000000000 42)) (i32.const 1))
(assert_return (invoke "i64x2.bitmask" (v128.const i64x2 1 -1)) (i32.const 2))
(assert_return (invoke "i64x2.shl" (v128.const i64x2 1 0x8000000000000000) (i32.const 1)) (v128.const i64x2 2 0))
(assert_return (invoke "i64x2.shl" (v128.const i64x2 1 0x8000000000000000) (i32.const 64)) (v128.const i64x2 1 0x8000000000000000))
(assert_return (invoke "i64x2.shr_s" (v128.const i64x2 1 0x8000000000000000) (i32.const 1)) (v128.const i64x2 0 0xc000000000000000))
(assert_return (invoke "i64x2.shr_s" (v128.const i64x2 1 0x8000000000000000) (i32.const 64)) (v128.const i64x2 1 0x8000000000000000))
(assert_return (invoke "i64x2.shr_u" (v128.const i64x2 1 0x8000000000000000) (i32.const 1)) (v128.const i64x2 0 0x4000000000000000))
(assert_return (invoke "i64x2.shr_u" (v128.const i64x2 1 0x8000000000000000) (i32.const 64)) (v128.const i64x2 1 0x8000000000000000))
(assert_return (invoke "i64x2.add" (v128.const i64x2 0x8000000000000001 42) (v128.const i64x2 0x8000000000000001 0)) (v128.const i64x2 2 42))
(assert_return (invoke "i64x2.sub" (v128.const i64x2 2 42) (v128.const i64x2 0x8000000000000001 0)) (v128.const i64x2 0x8000000000000001 42))
(assert_return (invoke "i64x2.mul" (v128.const i64x2 2 42) (v128.const i64x2 0x8000000000000001 0)) (v128.const i64x2 2 0))
;; f32x4 arithmetic
(assert_return (invoke "f32x4.abs" (v128.const f32x4 -0 nan -inf 5)) (v128.const f32x4 0 nan inf 5))
(assert_return (invoke "f32x4.neg" (v128.const f32x4 -0 nan -inf 5)) (v128.const f32x4 0 -nan inf -5))
(assert_return (invoke "f32x4.sqrt" (v128.const f32x4 -0 nan inf 4)) (v128.const f32x4 -0 nan inf 2))
(assert_return (invoke "f32x4.add" (v128.const f32x4 nan -nan inf 42) (v128.const f32x4 42 inf inf 1)) (v128.const f32x4 nan nan inf 43))
(assert_return (invoke "f32x4.sub" (v128.const f32x4 nan -nan inf 42) (v128.const f32x4 42 inf -inf 1)) (v128.const f32x4 nan nan inf 41))
(assert_return (invoke "f32x4.mul" (v128.const f32x4 nan -nan inf 42) (v128.const f32x4 42 inf inf 2)) (v128.const f32x4 nan nan inf 84))
(assert_return (invoke "f32x4.div" (v128.const f32x4 nan -nan inf 42) (v128.const f32x4 42 inf 2 2)) (v128.const f32x4 nan nan inf 21))
(assert_return (invoke "f32x4.min" (v128.const f32x4 -0 0 nan 5) (v128.const f32x4 0 -0 5 nan)) (v128.const f32x4 -0 -0 nan nan))
(assert_return (invoke "f32x4.max" (v128.const f32x4 -0 0 nan 5) (v128.const f32x4 0 -0 5 nan)) (v128.const f32x4 0 0 nan nan))
(assert_return (invoke "f32x4.pmin" (v128.const f32x4 -0 0 nan 5) (v128.const f32x4 0 -0 5 nan)) (v128.const f32x4 -0 0 nan 5))
(assert_return (invoke "f32x4.pmax" (v128.const f32x4 -0 0 nan 5) (v128.const f32x4 0 -0 5 nan)) (v128.const f32x4 -0 0 nan 5))
(assert_return (invoke "f32x4.ceil" (v128.const f32x4 -0 0 inf -inf)) (v128.const f32x4 -0 0 inf -inf))
(assert_return (invoke "f32x4.ceil" (v128.const f32x4 nan 42 0.5 -0.5)) (v128.const f32x4 nan 42 1 -0))
(assert_return (invoke "f32x4.ceil" (v128.const f32x4 1.5 -1.5 4.2 -4.2)) (v128.const f32x4 2 -1 5 -4))
(assert_return (invoke "f32x4.floor" (v128.const f32x4 -0 0 inf -inf)) (v128.const f32x4 -0 0 inf -inf))
(assert_return (invoke "f32x4.floor" (v128.const f32x4 nan 42 0.5 -0.5)) (v128.const f32x4 nan 42 0 -1))
(assert_return (invoke "f32x4.floor" (v128.const f32x4 1.5 -1.5 4.2 -4.2)) (v128.const f32x4 1 -2 4 -5))
(assert_return (invoke "f32x4.trunc" (v128.const f32x4 -0 0 inf -inf)) (v128.const f32x4 -0 0 inf -inf))
(assert_return (invoke "f32x4.trunc" (v128.const f32x4 nan 42 0.5 -0.5)) (v128.const f32x4 nan 42 0 -0))
(assert_return (invoke "f32x4.trunc" (v128.const f32x4 1.5 -1.5 4.2 -4.2)) (v128.const f32x4 1 -1 4 -4))
(assert_return (invoke "f32x4.nearest" (v128.const f32x4 -0 0 inf -inf)) (v128.const f32x4 -0 0 inf -inf))
(assert_return (invoke "f32x4.nearest" (v128.const f32x4 nan 42 0.5 -0.5)) (v128.const f32x4 nan 42 0 -0))
(assert_return (invoke "f32x4.nearest" (v128.const f32x4 1.5 -1.5 4.2 -4.2)) (v128.const f32x4 2 -2 4 -4))
;; f64x2 arithmetic
(assert_return (invoke "f64x2.abs" (v128.const f64x2 -0 nan)) (v128.const f64x2 0 nan))
(assert_return (invoke "f64x2.abs" (v128.const f64x2 -inf 5)) (v128.const f64x2 inf 5))
(assert_return (invoke "f64x2.neg" (v128.const f64x2 -0 nan)) (v128.const f64x2 0 -nan))
(assert_return (invoke "f64x2.neg" (v128.const f64x2 -inf 5)) (v128.const f64x2 inf -5))
(assert_return (invoke "f64x2.sqrt" (v128.const f64x2 -0 nan)) (v128.const f64x2 -0 nan))
(assert_return (invoke "f64x2.sqrt" (v128.const f64x2 inf 4)) (v128.const f64x2 inf 2))
(assert_return (invoke "f64x2.add" (v128.const f64x2 nan -nan) (v128.const f64x2 42 inf)) (v128.const f64x2 nan nan))
(assert_return (invoke "f64x2.add" (v128.const f64x2 inf 42) (v128.const f64x2 inf 1)) (v128.const f64x2 inf 43))
(assert_return (invoke "f64x2.sub" (v128.const f64x2 nan -nan) (v128.const f64x2 42 inf)) (v128.const f64x2 nan nan))
(assert_return (invoke "f64x2.sub" (v128.const f64x2 inf 42) (v128.const f64x2 -inf 1)) (v128.const f64x2 inf 41))
(assert_return (invoke "f64x2.mul" (v128.const f64x2 nan -nan) (v128.const f64x2 42 inf)) (v128.const f64x2 nan nan))
(assert_return (invoke "f64x2.mul" (v128.const f64x2 inf 42) (v128.const f64x2 inf 2)) (v128.const f64x2 inf 84))
(assert_return (invoke "f64x2.div" (v128.const f64x2 nan -nan) (v128.const f64x2 42 inf)) (v128.const f64x2 nan nan))
(assert_return (invoke "f64x2.div" (v128.const f64x2 inf 42) (v128.const f64x2 2 2)) (v128.const f64x2 inf 21))
(assert_return (invoke "f64x2.min" (v128.const f64x2 -0 0) (v128.const f64x2 0 -0)) (v128.const f64x2 -0 -0))
(assert_return (invoke "f64x2.min" (v128.const f64x2 nan 5) (v128.const f64x2 5 nan)) (v128.const f64x2 nan nan))
(assert_return (invoke "f64x2.max" (v128.const f64x2 -0 0) (v128.const f64x2 0 -0)) (v128.const f64x2 0 0))
(assert_return (invoke "f64x2.max" (v128.const f64x2 nan 5) (v128.const f64x2 5 nan)) (v128.const f64x2 nan nan))
(assert_return (invoke "f64x2.pmin" (v128.const f64x2 -0 0) (v128.const f64x2 0 -0)) (v128.const f64x2 -0 0))
(assert_return (invoke "f64x2.pmin" (v128.const f64x2 nan 5) (v128.const f64x2 5 nan)) (v128.const f64x2 nan 5))
(assert_return (invoke "f64x2.pmax" (v128.const f64x2 -0 0) (v128.const f64x2 0 -0)) (v128.const f64x2 -0 0))
(assert_return (invoke "f64x2.pmax" (v128.const f64x2 nan 5) (v128.const f64x2 5 nan)) (v128.const f64x2 nan 5))
(assert_return (invoke "f64x2.ceil" (v128.const f64x2 -0 0)) (v128.const f64x2 -0 0))
(assert_return (invoke "f64x2.ceil" (v128.const f64x2 inf -inf)) (v128.const f64x2 inf -inf))
(assert_return (invoke "f64x2.ceil" (v128.const f64x2 nan 42)) (v128.const f64x2 nan 42))
(assert_return (invoke "f64x2.ceil" (v128.const f64x2 0.5 -0.5)) (v128.const f64x2 1 -0))
(assert_return (invoke "f64x2.ceil" (v128.const f64x2 1.5 -1.5)) (v128.const f64x2 2 -1))
(assert_return (invoke "f64x2.ceil" (v128.const f64x2 4.2 -4.2)) (v128.const f64x2 5 -4))
(assert_return (invoke "f64x2.floor" (v128.const f64x2 -0 0)) (v128.const f64x2 -0 0))
(assert_return (invoke "f64x2.floor" (v128.const f64x2 inf -inf)) (v128.const f64x2 inf -inf))
(assert_return (invoke "f64x2.floor" (v128.const f64x2 nan 42)) (v128.const f64x2 nan 42))
(assert_return (invoke "f64x2.floor" (v128.const f64x2 0.5 -0.5)) (v128.const f64x2 0 -1))
(assert_return (invoke "f64x2.floor" (v128.const f64x2 1.5 -1.5)) (v128.const f64x2 1 -2))
(assert_return (invoke "f64x2.floor" (v128.const f64x2 4.2 -4.2)) (v128.const f64x2 4 -5))
(assert_return (invoke "f64x2.trunc" (v128.const f64x2 -0 0)) (v128.const f64x2 -0 0))
(assert_return (invoke "f64x2.trunc" (v128.const f64x2 inf -inf)) (v128.const f64x2 inf -inf))
(assert_return (invoke "f64x2.trunc" (v128.const f64x2 nan 42)) (v128.const f64x2 nan 42))
(assert_return (invoke "f64x2.trunc" (v128.const f64x2 0.5 -0.5)) (v128.const f64x2 0 -0))
(assert_return (invoke "f64x2.trunc" (v128.const f64x2 1.5 -1.5)) (v128.const f64x2 1 -1))
(assert_return (invoke "f64x2.trunc" (v128.const f64x2 4.2 -4.2)) (v128.const f64x2 4 -4))
(assert_return (invoke "f64x2.nearest" (v128.const f64x2 -0 0)) (v128.const f64x2 -0 0))
(assert_return (invoke "f64x2.nearest" (v128.const f64x2 inf -inf)) (v128.const f64x2 inf -inf))
(assert_return (invoke "f64x2.nearest" (v128.const f64x2 nan 42)) (v128.const f64x2 nan 42))
(assert_return (invoke "f64x2.nearest" (v128.const f64x2 0.5 -0.5)) (v128.const f64x2 0 -0))
(assert_return (invoke "f64x2.nearest" (v128.const f64x2 1.5 -1.5)) (v128.const f64x2 2 -2))
(assert_return (invoke "f64x2.nearest" (v128.const f64x2 4.2 -4.2)) (v128.const f64x2 4 -4))
(assert_return
(invoke "i16x8.extadd_pairwise_i8x16_s"
(v128.const i8x16 -1 -1 -127 -127 -128 -128 127 127 255 255 1 1 0 0 126 126)
)
(v128.const i16x8 -2 -254 -256 254 -2 2 0 252)
)
(assert_return
(invoke "i16x8.extadd_pairwise_i8x16_u"
(v128.const i8x16 0 0 1 1 -1 -1 126 126 -127 -127 -128 -128 127 127 255 255)
)
(v128.const i16x8 0 2 510 252 258 256 254 510)
)
(assert_return
(invoke "i32x4.extadd_pairwise_i16x8_s"
(v128.const i16x8 32766 32766 -32767 -32767 65535 65535 -1 -1)
)
(v128.const i32x4 65532 -65534 -2 -2)
)
(assert_return
(invoke "i32x4.extadd_pairwise_i16x8_u"
(v128.const i16x8 -1 -1 -32767 -32767 -32768 -32768 65535 65535)
)
(v128.const i32x4 131070 65538 65536 131070)
)
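;; extadd_pairwise widens each adjacent pair of lanes before adding, so the
;; sums cannot overflow: lane 2 of the unsigned i16x8->i32x4 case above is
;; 32768 + 32768 = 65536.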
;; conversions
(assert_return (invoke "i32x4.trunc_sat_f32x4_s" (v128.const f32x4 42 nan inf -inf)) (v128.const i32x4 42 0 2147483647 -2147483648))
(assert_return (invoke "i32x4.trunc_sat_f32x4_u" (v128.const f32x4 42 nan inf -inf)) (v128.const i32x4 42 0 4294967295 0))
(assert_return (invoke "f32x4.convert_i32x4_s" (v128.const i32x4 0 -1 2147483647 -2147483648)) (v128.const f32x4 0 -1 2147483648 -2147483648))
(assert_return (invoke "f32x4.convert_i32x4_u" (v128.const i32x4 0 -1 2147483647 -2147483648)) (v128.const f32x4 0 4294967296 2147483648 2147483648))
(assert_return
(invoke "i8x16.narrow_i16x8_s"
(v128.const i16x8 129 127 -32767 32767 -32768 -1 1 0)
(v128.const i16x8 0 1 -1 -32768 32767 -32767 127 129)
)
(v128.const i8x16 127 127 -128 127 -128 -1 1 0 0 1 -1 -128 127 -128 127 127)
)
(assert_return
(invoke "i8x16.narrow_i16x8_u"
(v128.const i16x8 129 127 -32767 32767 -32768 -1 1 0)
(v128.const i16x8 0 1 -1 -32768 32767 -32767 127 129)
)
(v128.const i8x16 129 127 0 255 0 0 1 0 0 1 0 0 255 0 127 129)
)
(assert_return
(invoke "i16x8.narrow_i32x4_s"
(v128.const i32x4 32769 32767 -2147483647 2147483647)
(v128.const i32x4 0 1 -1 -2147483648)
)
(v128.const i16x8 32767 32767 -32768 32767 0 1 -1 -32768)
)
(assert_return
(invoke "i16x8.narrow_i32x4_u"
(v128.const i32x4 32769 32767 -2147483647 2147483647)
(v128.const i32x4 0 1 -1 -2147483648)
)
(v128.const i16x8 32769 32767 0 65535 0 1 0 0)
)
(assert_return
(invoke "i16x8.extend_low_i8x16_s"
(v128.const i8x16 0 1 -1 -128 127 129 64 -64 -64 64 129 127 -128 -1 1 0)
)
(v128.const i16x8 0 1 -1 -128 127 -127 64 -64)
)
(assert_return
(invoke "i16x8.extend_high_i8x16_s"
(v128.const i8x16 0 1 -1 -128 127 129 64 -64 -64 64 129 127 -128 -1 1 0)
)
(v128.const i16x8 -64 64 -127 127 -128 -1 1 0)
)
(assert_return
(invoke "i16x8.extend_low_i8x16_u"
(v128.const i8x16 0 1 -1 -128 127 129 64 -64 -64 64 129 127 -128 -1 1 0)
)
(v128.const i16x8 0 1 255 128 127 129 64 192)
)
(assert_return
(invoke "i16x8.extend_high_i8x16_u"
(v128.const i8x16 0 1 -1 -128 127 129 64 -64 -64 64 129 127 -128 -1 1 0)
)
(v128.const i16x8 192 64 129 127 128 255 1 0)
)
(assert_return (invoke "i32x4.extend_low_i16x8_s" (v128.const i16x8 0 1 -1 32768 32767 32769 16384 -16384)) (v128.const i32x4 0 1 -1 -32768))
(assert_return (invoke "i32x4.extend_high_i16x8_s" (v128.const i16x8 0 1 -1 32768 32767 32769 16384 -16384)) (v128.const i32x4 32767 -32767 16384 -16384))
(assert_return (invoke "i32x4.extend_low_i16x8_u" (v128.const i16x8 0 1 -1 32768 32767 32769 16384 -16384)) (v128.const i32x4 0 1 65535 32768))
(assert_return (invoke "i32x4.extend_high_i16x8_u" (v128.const i16x8 0 1 -1 32768 32767 32769 16384 -16384)) (v128.const i32x4 32767 32769 16384 49152))
(assert_return (invoke "i64x2.extend_low_i32x4_s" (v128.const i32x4 -1 -1 -2147483648 -2147483648)) (v128.const i64x2 -1 -1))
(assert_return (invoke "i64x2.extend_high_i32x4_s" (v128.const i32x4 2147483647 2147483647 -1 -1)) (v128.const i64x2 -1 -1))
(assert_return (invoke "i64x2.extend_low_i32x4_u" (v128.const i32x4 -1 -1 2 2)) (v128.const i64x2 4294967295 4294967295))
(assert_return (invoke "i64x2.extend_high_i32x4_u" (v128.const i32x4 2 2 -1 -1)) (v128.const i64x2 4294967295 4294967295))
(assert_return (invoke "v128.load8x8_s" (i32.const 256)) (v128.const i16x8 0xff80 0xff90 0xffa0 0xffb0 0xffc0 0xffd0 0xffe0 0xfff0))
(assert_return (invoke "v128.load8x8_u" (i32.const 256)) (v128.const i16x8 0x0080 0x0090 0x00a0 0x00b0 0x00c0 0x00d0 0x00e0 0x00f0))
(assert_return (invoke "v128.load16x4_s" (i32.const 256)) (v128.const i32x4 0xffff9080 0xffffb0a0 0xffffd0c0 0xfffff0e0))
(assert_return (invoke "v128.load16x4_u" (i32.const 256)) (v128.const i32x4 0x00009080 0x0000b0a0 0x0000d0c0 0x0000f0e0))
(assert_return (invoke "v128.load32x2_s" (i32.const 256)) (v128.const i64x2 0xffffffffb0a09080 0xfffffffff0e0d0c0))
(assert_return (invoke "v128.load32x2_u" (i32.const 256)) (v128.const i64x2 0x00000000b0a09080 0x00000000f0e0d0c0))
(assert_return (invoke "v128.load32_zero" (i32.const 256)) (v128.const i32x4 0xb0a09080 0 0 0))
(assert_return (invoke "v128.load64_zero" (i32.const 256)) (v128.const i64x2 0xf0e0d0c0b0a09080 0))
(assert_return
(invoke "i8x16.swizzle"
(v128.const i8x16 0xf0 0xf1 0xf2 0xf3 0xf4 0xf5 0xf6 0xf7 0xf8 0xf9 0xfa 0xfb 0xfc 0xfd 0xfe 0xff)
(v128.const i8x16 0 4 8 12 16 255 129 128 127 17 15 13 12 8 4 0)
)
(v128.const i8x16 0xf0 0xf4 0xf8 0xfc 0x00 0x00 0x00 0x00 0x00 0x00 0xff 0xfd 0xfc 0xf8 0xf4 0xf0)
)
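;; i8x16.swizzle selects a[s[i]] per lane and yields 0 for any index >= 16,
;; which is why lanes 4-9 above (indices 16 255 129 128 127 17) are zero.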
(assert_return (invoke "f64x2.convert_low_i32x4_s" (v128.const i32x4 1 -2147483648 0 0)) (v128.const f64x2 1.0 -2147483648))
(assert_return (invoke "f64x2.convert_low_i32x4_u" (v128.const i32x4 -2147483648 0xffffffff 0 0)) (v128.const f64x2 2147483648 4294967295.0))
(assert_return (invoke "i32x4.trunc_sat_f64x2_s_zero" (v128.const f64x2 -inf 4294967296.0)) (v128.const i32x4 -2147483648 2147483647 0 0))
(assert_return (invoke "i32x4.trunc_sat_f64x2_u_zero" (v128.const f64x2 -inf 4294967296.0)) (v128.const i32x4 0 4294967295 0 0))
(assert_return
(invoke "f32x4.demote_f64x2_zero"
(v128.const f64x2 0x1.fffffe0000000p-127 -0x1.6972b30cfb562p+1)
)
(v128.const f32x4 0x1p-126 -0x1.6972b4p+1 0 0)
)
(assert_return
(invoke "f64x2.promote_low_f32x4"
(v128.const f32x4 -0x1p-149 0x1.8f867ep+125 0 0)
)
(v128.const f64x2 -0x1p-149 6.6382536710104395e+37)
)