<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
<head>
<link rel="stylesheet" href="style.css" type="text/css">
<meta content="text/html; charset=iso-8859-1" http-equiv="Content-Type">
<link rel="Start" href="index.html">
<link rel="previous" href="Unixqueue_mt.html">
<link rel="next" href="Uq_ssl.html">
<link rel="Up" href="index.html">
<link title="Index of types" rel=Appendix href="index_types.html">
<link title="Index of exceptions" rel=Appendix href="index_exceptions.html">
<link title="Index of values" rel=Appendix href="index_values.html">
<link title="Index of class attributes" rel=Appendix href="index_attributes.html">
<link title="Index of class methods" rel=Appendix href="index_methods.html">
<link title="Index of classes" rel=Appendix href="index_classes.html">
<link title="Index of class types" rel=Appendix href="index_class_types.html">
<link title="Index of modules" rel=Appendix href="index_modules.html">
<link title="Index of module types" rel=Appendix href="index_module_types.html">
<link title="Uq_gtk" rel="Chapter" href="Uq_gtk.html">
<link title="Equeue" rel="Chapter" href="Equeue.html">
<link title="Unixqueue" rel="Chapter" href="Unixqueue.html">
<link title="Uq_engines" rel="Chapter" href="Uq_engines.html">
<link title="Uq_socks5" rel="Chapter" href="Uq_socks5.html">
<link title="Unixqueue_mt" rel="Chapter" href="Unixqueue_mt.html">
<link title="Equeue_intro" rel="Chapter" href="Equeue_intro.html">
<link title="Uq_ssl" rel="Chapter" href="Uq_ssl.html">
<link title="Uq_tcl" rel="Chapter" href="Uq_tcl.html">
<link title="Netcgi_common" rel="Chapter" href="Netcgi_common.html">
<link title="Netcgi" rel="Chapter" href="Netcgi.html">
<link title="Netcgi_ajp" rel="Chapter" href="Netcgi_ajp.html">
<link title="Netcgi_scgi" rel="Chapter" href="Netcgi_scgi.html">
<link title="Netcgi_cgi" rel="Chapter" href="Netcgi_cgi.html">
<link title="Netcgi_fcgi" rel="Chapter" href="Netcgi_fcgi.html">
<link title="Netcgi_dbi" rel="Chapter" href="Netcgi_dbi.html">
<link title="Netcgi1_compat" rel="Chapter" href="Netcgi1_compat.html">
<link title="Netcgi_test" rel="Chapter" href="Netcgi_test.html">
<link title="Netcgi_porting" rel="Chapter" href="Netcgi_porting.html">
<link title="Netcgi_plex" rel="Chapter" href="Netcgi_plex.html">
<link title="Http_client" rel="Chapter" href="Http_client.html">
<link title="Telnet_client" rel="Chapter" href="Telnet_client.html">
<link title="Ftp_data_endpoint" rel="Chapter" href="Ftp_data_endpoint.html">
<link title="Ftp_client" rel="Chapter" href="Ftp_client.html">
<link title="Nethttpd_types" rel="Chapter" href="Nethttpd_types.html">
<link title="Nethttpd_kernel" rel="Chapter" href="Nethttpd_kernel.html">
<link title="Nethttpd_reactor" rel="Chapter" href="Nethttpd_reactor.html">
<link title="Nethttpd_engine" rel="Chapter" href="Nethttpd_engine.html">
<link title="Nethttpd_services" rel="Chapter" href="Nethttpd_services.html">
<link title="Nethttpd_plex" rel="Chapter" href="Nethttpd_plex.html">
<link title="Nethttpd_intro" rel="Chapter" href="Nethttpd_intro.html">
<link title="Netplex_types" rel="Chapter" href="Netplex_types.html">
<link title="Netplex_mp" rel="Chapter" href="Netplex_mp.html">
<link title="Netplex_mt" rel="Chapter" href="Netplex_mt.html">
<link title="Netplex_log" rel="Chapter" href="Netplex_log.html">
<link title="Netplex_controller" rel="Chapter" href="Netplex_controller.html">
<link title="Netplex_container" rel="Chapter" href="Netplex_container.html">
<link title="Netplex_sockserv" rel="Chapter" href="Netplex_sockserv.html">
<link title="Netplex_workload" rel="Chapter" href="Netplex_workload.html">
<link title="Netplex_main" rel="Chapter" href="Netplex_main.html">
<link title="Netplex_config" rel="Chapter" href="Netplex_config.html">
<link title="Netplex_kit" rel="Chapter" href="Netplex_kit.html">
<link title="Rpc_netplex" rel="Chapter" href="Rpc_netplex.html">
<link title="Netplex_cenv" rel="Chapter" href="Netplex_cenv.html">
<link title="Netplex_intro" rel="Chapter" href="Netplex_intro.html">
<link title="Netshm" rel="Chapter" href="Netshm.html">
<link title="Netshm_data" rel="Chapter" href="Netshm_data.html">
<link title="Netshm_hashtbl" rel="Chapter" href="Netshm_hashtbl.html">
<link title="Netshm_array" rel="Chapter" href="Netshm_array.html">
<link title="Netshm_intro" rel="Chapter" href="Netshm_intro.html">
<link title="Netconversion" rel="Chapter" href="Netconversion.html">
<link title="Netchannels" rel="Chapter" href="Netchannels.html">
<link title="Netstream" rel="Chapter" href="Netstream.html">
<link title="Mimestring" rel="Chapter" href="Mimestring.html">
<link title="Netmime" rel="Chapter" href="Netmime.html">
<link title="Netsendmail" rel="Chapter" href="Netsendmail.html">
<link title="Neturl" rel="Chapter" href="Neturl.html">
<link title="Netaddress" rel="Chapter" href="Netaddress.html">
<link title="Netbuffer" rel="Chapter" href="Netbuffer.html">
<link title="Netdate" rel="Chapter" href="Netdate.html">
<link title="Netencoding" rel="Chapter" href="Netencoding.html">
<link title="Netulex" rel="Chapter" href="Netulex.html">
<link title="Netaccel" rel="Chapter" href="Netaccel.html">
<link title="Netaccel_link" rel="Chapter" href="Netaccel_link.html">
<link title="Nethtml" rel="Chapter" href="Nethtml.html">
<link title="Netstring_str" rel="Chapter" href="Netstring_str.html">
<link title="Netstring_pcre" rel="Chapter" href="Netstring_pcre.html">
<link title="Netstring_mt" rel="Chapter" href="Netstring_mt.html">
<link title="Netmappings" rel="Chapter" href="Netmappings.html">
<link title="Netaux" rel="Chapter" href="Netaux.html">
<link title="Nethttp" rel="Chapter" href="Nethttp.html">
<link title="Netchannels_tut" rel="Chapter" href="Netchannels_tut.html">
<link title="Netmime_tut" rel="Chapter" href="Netmime_tut.html">
<link title="Netsendmail_tut" rel="Chapter" href="Netsendmail_tut.html">
<link title="Netulex_tut" rel="Chapter" href="Netulex_tut.html">
<link title="Neturl_tut" rel="Chapter" href="Neturl_tut.html">
<link title="Netsys" rel="Chapter" href="Netsys.html">
<link title="Netpop" rel="Chapter" href="Netpop.html">
<link title="Rpc_auth_dh" rel="Chapter" href="Rpc_auth_dh.html">
<link title="Rpc_key_service" rel="Chapter" href="Rpc_key_service.html">
<link title="Rpc_time" rel="Chapter" href="Rpc_time.html">
<link title="Rpc_auth_local" rel="Chapter" href="Rpc_auth_local.html">
<link title="Rtypes" rel="Chapter" href="Rtypes.html">
<link title="Xdr" rel="Chapter" href="Xdr.html">
<link title="Rpc" rel="Chapter" href="Rpc.html">
<link title="Rpc_program" rel="Chapter" href="Rpc_program.html">
<link title="Rpc_portmapper_aux" rel="Chapter" href="Rpc_portmapper_aux.html">
<link title="Rpc_packer" rel="Chapter" href="Rpc_packer.html">
<link title="Rpc_transport" rel="Chapter" href="Rpc_transport.html">
<link title="Rpc_client" rel="Chapter" href="Rpc_client.html">
<link title="Rpc_simple_client" rel="Chapter" href="Rpc_simple_client.html">
<link title="Rpc_portmapper_clnt" rel="Chapter" href="Rpc_portmapper_clnt.html">
<link title="Rpc_portmapper" rel="Chapter" href="Rpc_portmapper.html">
<link title="Rpc_server" rel="Chapter" href="Rpc_server.html">
<link title="Rpc_auth_sys" rel="Chapter" href="Rpc_auth_sys.html">
<link title="Rpc_intro" rel="Chapter" href="Rpc_intro.html">
<link title="Rpc_mapping_ref" rel="Chapter" href="Rpc_mapping_ref.html">
<link title="Rpc_ssl" rel="Chapter" href="Rpc_ssl.html">
<link title="Rpc_xti_client" rel="Chapter" href="Rpc_xti_client.html">
<link title="Shell_sys" rel="Chapter" href="Shell_sys.html">
<link title="Shell" rel="Chapter" href="Shell.html">
<link title="Shell_uq" rel="Chapter" href="Shell_uq.html">
<link title="Shell_mt" rel="Chapter" href="Shell_mt.html">
<link title="Shell_intro" rel="Chapter" href="Shell_intro.html">
<link title="Netsmtp" rel="Chapter" href="Netsmtp.html"><link title="Introduction into event-driven programming" rel="Section" href="#intro">
<link title="The Equeue module" rel="Section" href="#equeue">
<link title="The Unixqueue module" rel="Section" href="#unixqueue">
<link title="Engines" rel="Section" href="#engines">
<link title="Event-driven programming vs. multi-threaded programming" rel="Section" href="#ev_vs_mt">
<link title="Pitfalls" rel="Section" href="#pitfalls">
<link title="Using Unixqueue together with Tcl (labltk) and Glib (lablgtk)" rel="Section" href="#ui">
<link title="Description" rel="Subsection" href="#eq_descr">
<link title="A silly example" rel="Subsection" href="#eq_eg">
<link title="Description" rel="Subsection" href="#uq_descr">
<link title="Object-oriented interface" rel="Subsection" href="#uq_oo">
<link title="Example: Copying several files in parallel" rel="Subsection" href="#uq_eg">
<link title="Modelling the abstract properties of engines" rel="Subsection" href="#eng_model">
<link title="Examples for engine primitives and engine construction" rel="Subsection" href="#eng_model_eg">
<link title="The notification mechanism" rel="Subsection" href="#eng_notify">
<link title="Asynchronous channels" rel="Subsection" href="#eng_async_ch">
<link title="Receivers" rel="Subsection" href="#eng_recv">
<link title="Example: A simple HTTP client" rel="Subsection" href="#eng_eg">
<link title="Combining both styles" rel="Subsection" href="#3_Combiningbothstyles">
<title>Ocamlnet 2 Reference Manual : Equeue_intro</title>
</head>
<body>
<div class="navbar"><a href="Unixqueue_mt.html">Previous</a>
<a href="index.html">Up</a>
<a href="Uq_ssl.html">Next</a>
</div>
<center><h1>Equeue_intro</h1></center>
<br>
<br>
An introduction to programming with <code class="code">equeue</code> (formerly known as the "Equeue
User's Guide").
<p>
<b>Contents</b>
<ul>
<li><a href="Equeue_intro.html#intro"><i>Introduction into event-driven programming</i></a></li>
<li><a href="Equeue_intro.html#equeue"><i>The Equeue module</i></a>
<ul>
<li><a href="Equeue_intro.html#eq_descr"><i>Description</i></a></li>
<li><a href="Equeue_intro.html#eq_eg"><i>A silly example</i></a></li>
</ul>
</li>
<li><a href="Equeue_intro.html#unixqueue"><i>The Unixqueue module</i></a>
<ul>
<li><a href="Equeue_intro.html#uq_descr"><i>Description</i></a></li>
<li><a href="Equeue_intro.html#uq_oo"><i>Object-oriented interface</i></a></li>
<li><a href="Equeue_intro.html#uq_eg"><i>Example: Copying several files in parallel</i></a></li>
</ul>
</li>
<li><a href="Equeue_intro.html#engines"><i>Engines</i></a>
<ul>
<li><a href="Equeue_intro.html#eng_model"><i>Modelling the abstract properties of engines</i></a></li>
<li><a href="Equeue_intro.html#eng_model_eg"><i>Examples for engine primitives and engine construction</i></a></li>
<li><a href="Equeue_intro.html#eng_notify"><i>The notification mechanism</i></a></li>
<li><a href="Equeue_intro.html#eng_async_ch"><i>Asynchronous channels</i></a></li>
<li><a href="Equeue_intro.html#eng_recv"><i>Receivers</i></a></li>
<li><a href="Equeue_intro.html#eng_eg"><i>Example: A simple HTTP client</i></a></li>
</ul>
</li>
<li><a href="Equeue_intro.html#ev_vs_mt"><i>Event-driven programming vs. multi-threaded programming</i></a></li>
<li><a href="Equeue_intro.html#pitfalls"><i>Pitfalls</i></a></li>
<li><a href="Equeue_intro.html#ui"><i>Using Unixqueue together with Tcl (labltk) and Glib (lablgtk)</i></a></li>
</ul>
<p>
<a name="intro"></a>
<h2>Introduction into event-driven programming</h2>
<p>
Event-driven programming is an advanced way of organizing programs
around I/O channels. This is best explained by an example: Suppose you
want to read from a pipeline, convert all arriving lowercase letters to their
corresponding uppercase letters, and finally write the result into a second
pipeline.
<p>
A conventional solution works as follows: A number of bytes is
read from the input pipeline into a buffer, converted, and then written into
the output pipeline. Because we do not know in advance how many bytes
will arrive, we do not know how big the buffer must be to store all of them;
so we simply repeat the whole read/convert/write cycle until the
end of input is signaled.
<p>
In O'Caml code:
<pre><code class="code">let buffer_length = 1024 in
let buffer = String.create buffer_length in
try
while true do
(* Read up to buffer_length bytes into the buffer: *)
let n = Unix.read Unix.stdin buffer 0 buffer_length in
(* If n=0, the end of input is reached. Otherwise we have
* read n bytes.
*)
if n=0 then
raise End_of_file;
(* Convert: *)
let buffer' = String.uppercase (String.sub buffer 0 n) in
(* Write the buffer' contents: *)
let m = ref 0 in
while !m < n do
m := !m + Unix.write Unix.stdout buffer' !m (n - !m)
done
done
with
End_of_file -> ()
</code></pre>
<p>
The input and output pipelines may be connected with arbitrary other
pipeline endpoints, and may be arbitrarily slow. Because of this, two
interesting phenomena can occur. First, it is possible that the
<code class="code">Unix.read</code> system call returns fewer than
<code class="code">buffer_length</code> bytes, even if we are nowhere near the
end of the data stream. The reason might be that the pipeline works across
a network connection, and that a network packet with fewer than
<code class="code">buffer_length</code> bytes has just arrived. In this case,
the operating system may decide to forward this packet to the application as
soon as possible (but it is free not to do so). The same may happen when
<code class="code">Unix.write</code> is called; because of this, the inner
<code class="code">while</code> loop invokes <code class="code">Unix.write</code>
repeatedly until all bytes are actually written.
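<p>
This partial-write loop is worth isolating. As a minimal sketch (not part of
the original example; it uses the newer <code class="code">Unix.write_substring</code>,
while the example sticks to the older string-based calls), the repetition can
be captured in a small helper:
<p>

```ocaml
(* Repeat Unix.write until the whole range [pos, pos+len) of s has
 * actually been written. Unix.write_substring may write fewer bytes
 * than requested; the recursion advances past the written part. *)
let rec write_all fd s pos len =
  if len > 0 then begin
    let m = Unix.write_substring fd s pos len in
    write_all fd s (pos + m) (len - m)
  end

let () =
  (* Demonstrate the helper on a pipe: *)
  let rd, wr = Unix.pipe () in
  write_all wr "HELLO" 0 5;
  let buf = Bytes.create 5 in
  let n = Unix.read rd buf 0 5 in
  assert (n = 5 && Bytes.to_string buf = "HELLO");
  Unix.close rd;
  Unix.close wr
```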
<p>
Nevertheless, <code class="code">Unix.read</code> is guaranteed to read
at least one byte (unless the end of the stream is reached), and
<code class="code">Unix.write</code> always writes at least one byte. But what
happens if there is currently no byte available? In this case, the second
phenomenon occurs: The program stops until at least one byte is available;
this is called <b>blocking</b>.
<p>
Consider that the output pipeline is very fast, and that the input
pipeline is rather slow. In this case, blocking throttles the program
to the speed at which the input pipeline delivers data.
<p>
Consider that both pipelines are slow: Now, the program may block
waiting for input while the output pipeline would already accept data.
Or, the program may block waiting for the output side to become ready
while input bytes have already arrived that cannot be read in because
the program blocks. In these cases, the program runs much more slowly than
it would if it reacted to the I/O possibilities in an optimal way.
<p>
The operating system indicates the I/O possibilities by
the <code class="code">Unix.select</code> system call. It works as follows: We
pass lists of file descriptors on which we want to react.
<code class="code">Unix.select</code> also blocks, but the program already
continues to run when <b>one</b> of the file descriptors is ready
to perform I/O. Furthermore, we can pass a timeout value.
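<p>
As a minimal self-contained sketch of this behaviour (not part of the improved
program below; it uses the newer <code class="code">Unix.write_substring</code>):
a pipe is reported as readable only after something has been written into it,
and a timeout of 0.0 makes <code class="code">Unix.select</code> return
immediately instead of blocking.
<p>

```ocaml
(* Unix.select on a pipe: with timeout 0.0 the call polls instead of
 * blocking. Before anything is written, rd is not readable; after
 * one byte has been written, it is. *)
let () =
  let rd, wr = Unix.pipe () in
  let readable, _, _ = Unix.select [rd] [] [] 0.0 in
  assert (readable = []);                       (* nothing to read yet *)
  ignore (Unix.write_substring wr "x" 0 1);
  let readable, _, _ = Unix.select [rd] [] [] 0.0 in
  assert (readable = [rd]);                     (* now rd is readable *)
  Unix.close rd;
  Unix.close wr
```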
<p>
Here is the improved program:
<pre><code class="code">let buffer_length = 1024 in
let in_buffer = String.create buffer_length in
let out_buffer = String.create buffer_length in
let out_buffer_length = ref 0 in
let end_of_stream = ref false in
let waiting_for_input = ref true in
let waiting_for_output = ref false in
while !waiting_for_input || !waiting_for_output do
(* If !waiting_for_input, we are interested whether input arrives.
* If !waiting_for_output, we are interested whether output is
* possible.
*)
let (in_fd, out_fd, oob_fd) =
Unix.select (if !waiting_for_input then [ Unix.stdin] else [])
(if !waiting_for_output then [ Unix.stdout] else [])
[]
(-.1.0) in
(* If in_fd is non-empty, input is immediately possible and will
* not block.
*)
if in_fd <> [] then begin
(* How many bytes we can read in depends on the amount of
* free space in the output buffer.
*)
let n = buffer_length - !out_buffer_length in
assert(n > 0);
let n' = Unix.read Unix.stdin in_buffer 0 n in
end_of_stream := (n' = 0);
(* Convert the bytes, and append them to the output buffer. *)
let converted = String.uppercase (String.sub in_buffer 0 n') in
String.blit converted 0 out_buffer !out_buffer_length n';
out_buffer_length := !out_buffer_length + n';
end;
(* If out_fd is non-empty, output is immediately possible and
* will not block.
*)
if out_fd <> [] then begin
(* Try to write !out_buffer_length bytes. *)
let n' = Unix.write Unix.stdout out_buffer 0 !out_buffer_length in
(* Remove the written bytes from the out_buffer: *)
String.blit out_buffer n' out_buffer 0 (!out_buffer_length - n');
out_buffer_length := !out_buffer_length - n'
end;
(* Now find out which event is interesting next: *)
waiting_for_input := (* Input is interesting if...*)
not !end_of_stream && (* ...we are before the end *)
!out_buffer_length < buffer_length; (* ...there is space in the out buf *)
waiting_for_output := (* Output is interesting if... *)
!out_buffer_length > 0; (* ...there is material to output *)
done
</code></pre>
<p>
Most importantly, we must now track the states of the I/O connections
ourselves. The variable <code class="code">end_of_stream</code> stores whether
the end of the input stream has been reached. The variable
<code class="code">waiting_for_input</code> stores whether we are ready to
accept input data; we can only accept input if there is space in the output
buffer. The variable <code class="code">waiting_for_output</code> indicates whether
we have data to output or not. In the previous program, these states were
implicitly encoded by the "program counter", i.e. by which statement
was to be executed next: After the <code class="code">Unix.read</code> was done we
<b>knew</b> that we had data to output; after the
<code class="code">Unix.write</code> we <b>knew</b> that there was again
space in the buffer. Now, these states must be stored explicitly in
variables, because the structure of the program no longer contains this
information.
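<p>
The two "interest" computations at the end of the loop can also be written as
a pure function (a sketch only; the names are taken from the program above).
Making the state transition explicit in this way keeps the logic easy to test:
<p>

```ocaml
(* Given the current buffer state, decide which events are interesting
 * next. This mirrors the two assignments at the end of the loop. *)
let interests ~buffer_length ~out_buffer_length ~end_of_stream =
  let waiting_for_input =                 (* input is interesting if... *)
    not end_of_stream &&                  (* ...we are before the end *)
    out_buffer_length < buffer_length in  (* ...the out buffer has space *)
  let waiting_for_output =                (* output is interesting if... *)
    out_buffer_length > 0 in              (* ...there is data to write *)
  (waiting_for_input, waiting_for_output)

let () =
  (* Empty output buffer: only input is interesting. *)
  assert (interests ~buffer_length:1024 ~out_buffer_length:0
            ~end_of_stream:false = (true, false));
  (* Full output buffer: only output is interesting. *)
  assert (interests ~buffer_length:1024 ~out_buffer_length:1024
            ~end_of_stream:false = (false, true));
  (* End of stream with pending data: drain the output buffer. *)
  assert (interests ~buffer_length:1024 ~out_buffer_length:10
            ~end_of_stream:true = (false, true))
```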
<p>
This program is already an example of event-driven programming. We
have two possible <b>events</b>: "input arrived", and "output is
possible". The <code class="code">Unix.select</code> statement is the <b>event
source</b>; it produces a sequence of events. There are two
<b>resources</b> which cause the events, namely the two file
descriptors. We have two <b>event handlers</b>: The statements
after <code class="code">if in_fd &lt;&gt; [] then</code> form the input event
handler, and the statements after <code class="code">if out_fd &lt;&gt; [] then</code>
form the output event handler.
<p>
The <a href="Equeue.html"><code class="code">Equeue</code></a> module provides these concepts as
abstractions you can program with. It is a general-purpose event queue:
it allows an arbitrary event source to be specified, it manages event handlers,
and it defines how events are dispatched to the event handlers that can
process them. The <a href="Unixqueue.html"><code class="code">Unixqueue</code></a> module is a layer above
<a href="Equeue.html"><code class="code">Equeue</code></a> that deals with file descriptor events. It already
has an event source generating file descriptor events by means of the
<code class="code">Unix.select</code> system call, and it provides a way to manage
file descriptor resources.
<p>
The <a href="Unixqueue.html"><code class="code">Unixqueue</code></a> abstraction in particular is an
interesting link between the operating system and components offering services
on file descriptors. For example, it is possible to create one event queue,
to attach several independent components to this queue, and to invoke these
components in parallel. For instance, consider an HTTP proxy. Such proxies
accept connections and forward them to the service that can best deal with
the arriving requests; these services are typically a disk cache, an HTTP
client, and an FTP client. Using the <a href="Unixqueue.html"><code class="code">Unixqueue</code></a> model, you
can realize this constellation by creating one event queue and by attaching
the services to it; the services can be programmed and tested independently,
and the components communicate, either with the outer world or with each
other, only by putting events onto the queue and receiving events from this
queue.
<p>
<a name="equeue"></a>
<h2>The Equeue module</h2>
<p>
<a name="eq_descr"></a>
<h3>Description</h3>
<p>
<a name="4_TheabbreviatedinterfaceoftheEqueuemodule"></a>
<h4>The (abbreviated) interface of the <a href="Equeue.html"><code class="code">Equeue</code></a> module</h4>
<p>
<pre><code class="code">type 'a t (* Event systems over events of type 'a *)
exception Reject (* Possible reaction of an event handler *)
exception Terminate (* Possible reaction of an event handler *)
exception Out_of_handlers (* Error condition *)
val create : ('a t -> unit) -> 'a t
val add_event : 'a t -> 'a -> unit
val add_handler : 'a t -> ('a t -> 'a -> unit) -> unit
val run : 'a t -> unit
</code></pre>
<p>
See also the full interface of <a href="Equeue.html"><code class="code">Equeue</code></a>.
<p>
The values of type <a href="Equeue.html#TYPEt"><code class="code">Equeue.t</code></a> are called
<b>event systems</b>, and contain:
<p>
<ul>
<li>An event source, which is simply a function that gets the event
system as argument and that may add further events to the system by invoking
<a href="Equeue.html#VALadd_event"><code class="code">Equeue.add_event</code></a>. The event source must be passed to
<a href="Equeue.html#VALcreate"><code class="code">Equeue.create</code></a> as argument; it is not possible to change the
source later.</li>
<li>A list of event handlers. Handlers are added to the system by
calling <a href="Equeue.html#VALadd_handler"><code class="code">Equeue.add_handler</code></a>.</li>
<li>A queue of events waiting to be delivered to one of the
handlers. You can add an event to the queue by invoking
<a href="Equeue.html#VALadd_event"><code class="code">Equeue.add_event</code></a>. </li>
</ul>
The module is intended to be used as follows: First, an event
system is created, and initialized with an event source. Some event handlers
are added:
<p>
<pre><code class="code">let some_source esys = ... in
let handler1 esys e = ... in
let handler2 esys e = ... in
... (* more handlers *)
let esys = Equeue.create some_source in
Equeue.add_handler esys handler1;
Equeue.add_handler esys handler2;
... (* more handlers *)
</code></pre>
<p>
It is necessary that at least one handler is added. In the second step, the
event system can be started:
<p>
<pre><code class="code">Equeue.run esys
</code></pre>
<p>
This means the following:
<p>
<ul>
<li>At the beginning, the function realizing the event source is
called once. This function has the chance to add the first event(s) to the event
queue by calling <a href="Equeue.html#VALadd_event"><code class="code">Equeue.add_event</code></a>.</li>
<li>If the event queue is not empty, all events currently in the
queue are iterated over. The system tries to deliver every event to a handler
using the simplest possible algorithm: the handlers are tried in turn, and
the first handler that wants to consume the event gets it.</li>
<li>After one round of iteration over all events, the handlers may
already have added further events to the queue, or the queue may now be
empty. In the first case, the iteration is simply repeated with the newly
added events. In the second case, the event source is called again; if it
adds new events, they are iterated over as well.</li>
<li>Otherwise, the event system terminates.</li>
</ul>
A handler can indicate either that it wants to consume the event,
or that it rejects the event, or that it wants to be removed from the list of
handlers. Consumption is indicated by returning normally. Rejection is
indicated by raising the <a href="Equeue.html#EXCEPTIONReject"><code class="code">Equeue.Reject</code></a> exception. If the
handler raises the <a href="Equeue.html#EXCEPTIONTerminate"><code class="code">Equeue.Terminate</code></a> exception, the event is
consumed and the handler is removed from the list of handlers.
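<p>
The queue model and the handler protocol just described can be modelled
in a few lines of plain OCaml. The following is only a toy sketch of the
delivery algorithm (lists instead of the real data structures, and not
the actual Equeue implementation):

```ocaml
(* Toy model of the Equeue loop: the source refills the queue when it
   is empty, and each event goes to the first handler that does not
   reject it. *)
exception Reject

type 'a sim = {
  mutable queue : 'a list;
  mutable handlers : ('a sim -> 'a -> unit) list;
  mutable source : 'a sim -> unit;
}

let add_event s e = s.queue <- s.queue @ [e]

(* Try the handlers in turn; the first one not raising Reject consumes
   the event. If all reject it, the event is dropped. *)
let rec deliver s e handlers =
  match handlers with
  | [] -> ()
  | h :: rest -> (try h s e with Reject -> deliver s e rest)

let run s =
  s.source s;                          (* the source adds the first events *)
  let rec loop () =
    match s.queue with
    | e :: rest ->
        s.queue <- rest;
        deliver s e s.handlers;
        loop ()
    | [] ->
        s.source s;                    (* queue empty: call the source again *)
        if s.queue <> [] then loop ()  (* otherwise: terminate *)
  in
  loop ()
```

Running this toy model with a small event source and two handlers behaves
as described above: the loop terminates as soon as the source stops
producing events and the queue has drained.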
<p>
Other exceptions, either raised within the event source function or
within a handler function, simply fall through the event loop; they are not
caught. However, the event system is restartable, which means:
<p>
<ul>
<li>If the exception happened within the event source, the source
is called again.</li>
<li>If the exception happened within a handler function, the
current event is scheduled again.</li>
</ul>
The event source is called whenever there are no events in the
queue. Note that the event source may add not only events but also event
handlers. It is an error if, after the invocation of the event source, there are
events in the queue but no handlers are defined. In this case, the exception
<code class="code">Out_of_handlers</code> is raised.
<p>
<a name="eq_eg"></a>
<h3>A silly example</h3>
<p>
Two kinds of events:
<p>
<pre><code class="code">type event =
A of int
| B
</code></pre>
<p>
This event source produces ten events from <code class="code">A 1</code> to <code class="code">A
10</code>:
<p>
<pre><code class="code">let n = ref 1
let source esys =
if !n <= 10 then begin
Equeue.add_event esys (A !n);
incr n
end
</code></pre>
<p>
The handler for type A events puts as many type B events on the
queue as the argument indicates.
<p>
<pre><code class="code">let handler_a esys e =
match e with
A n ->
for i = 1 to n do
Equeue.add_event esys B
done
| _ ->
raise Equeue.Reject
</code></pre>
<p>
The handler for type B events simply prints the events:
<p>
<pre><code class="code">let handler_b esys e =
match e with
B ->
print_endline "B"
| _ ->
raise Equeue.Reject
</code></pre>
<p>
Finally, we set up the event system and start it:
<p>
<pre><code class="code">let esys = Equeue.create source in
Equeue.add_handler esys handler_a;
Equeue.add_handler esys handler_b;
Equeue.run esys;
</code></pre>
<p>
As a result, the program prints 55 Bs.
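<p>
The number 55 is just the sum 1 + 2 + ... + 10, since
<code class="code">handler_a</code> enqueues n B events for each event
<code class="code">A n</code>:

```ocaml
(* Count the B events produced by the ten A events: *)
let total = List.fold_left ( + ) 0 [1; 2; 3; 4; 5; 6; 7; 8; 9; 10]
(* total is 55 *)
```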
<p>
<a name="unixqueue"></a>
<h2>The Unixqueue module</h2>
<p>
<a name="uq_descr"></a>
<h3>Description</h3>
<p>
<a name="4_TheabbreviatedinterfaceoftheUnixqueuemodule"></a>
<h4>The (abbreviated) interface of the <a href="Unixqueue.html"><code class="code">Unixqueue</code></a> module</h4>
<p>
<pre><code class="code">open Unix
open Sys
type group (* Groups of events *)
type wait_id (* Wait ticket *)
type operation =
Wait_in of file_descr (* wait for input data *)
| Wait_out of file_descr (* wait until output can be written *)
| Wait_oob of file_descr (* wait for out-of-band data *)
| Wait of wait_id (* wait only for timeout *)
type event =
Input_arrived of (group * file_descr)
| Output_readiness of (group * file_descr)
| Out_of_band of (group * file_descr)
| Timeout of (group * operation)
| Signal
| Extra of exn
type event_system
val create_unix_event_system : unit -> event_system
val new_group : event_system -> group
val new_wait_id : event_system -> wait_id
val add_resource : event_system -> group -> (operation * float) -> unit
val remove_resource : event_system -> group -> operation -> unit
val add_handler :
event_system -> group ->
(event_system -> event Equeue.t -> event -> unit)
-> unit
val add_event : event_system -> event -> unit
val clear : event_system -> group -> unit
val run : event_system -> unit
</code></pre>
<p>
See also the full interface of <a href="Unixqueue.html"><code class="code">Unixqueue</code></a>.
<p>
This module deals with four types of operations: waiting for input
data (<code class="code">Wait_in</code>), waiting for output readiness (<code class="code">Wait_out</code>), waiting for
out-of-band data (<code class="code">Wait_oob</code>), and waiting for a period of time
(<code class="code">Wait</code>). You can associate resources with the operations, which simply
means that the system waits until one of the operations becomes possible
or times out. A resource is the combination of an operation and a
time-out value.
<p>
This module already implements an event source which checks whether
the operations are possible or timed out, and which generates events
describing what has happened. As with <a href="Equeue.html"><code class="code">Equeue</code></a>, you can add events
yourself, and you can add handlers which perform actions on certain
events. As <a href="Unixqueue.html"><code class="code">Unixqueue</code></a> is based on <a href="Equeue.html"><code class="code">Equeue</code></a>, the queue model is
the same.
<p>
Resources, handlers and events are grouped, i.e. you can refer to
a bundle of resources/events by specifying the group they belong
to. Groups are created by <a href="Unixqueue.html#VALnew_group"><code class="code">Unixqueue.new_group</code></a>, and every resource
must belong to a group. The events caused by a resource belong to the
same group as the resource. Handlers have a group, too, and a handler
only gets events of its own group.
<p>
Groups simplify clean-up actions. In particular, it is possible to
remove all handlers and resources belonging to a group with a single
function call (<code class="code">clear</code>).
<p>
<a name="uq_oo"></a>
<h3>Object-oriented interface</h3>
<p>
In addition to the functional interface, there is also an
object-oriented interface. Instead of calling one of the above functions
<i>f</i>, one can also invoke the method with the
same name. For example, the call
<p>
<pre><code class="code">add_resource ues g (op,t)
</code></pre>
<p>
can also be written as
<p>
<pre><code class="code">ues # add_resource g (op,t)
</code></pre>
<p>
Both styles can be used in the same program, and there is absolutely
no difference (actually, the object-oriented interface is even the
fundamental interface, and the functions are just wrappers for the method
calls).
<p>
Instead of creating the event system with
<p>
<pre><code class="code">let ues = create_unix_event_system()
</code></pre>
<p>
one can also use
<p>
<pre><code class="code">let ues = new unix_event_system()
</code></pre>
<p>
Again, both calls do exactly the same.
<p>
The object-oriented interface has been introduced to support
implementations of file descriptor polling other than <code class="code">Unix.select</code>.
The integration into
the Tcl and Glib event systems has been implemented by defining additional
classes that are compatible with <a href="Unixqueue.unix_event_system.html"><code class="code">Unixqueue.unix_event_system</code></a>
but are internally based on different polling mechanisms.
<p>
<a name="uq_eg"></a>
<h3>Example: Copying several files in parallel</h3>
<p>
We present here a function which adds a file copy engine to an
event system. It is simple to add the engine several times to the event
system to copy several files in parallel.
<p>
<pre><code class="code">open Unixqueue
type copy_state =
{ copy_ues : Unixqueue.event_system;
copy_group : Unixqueue.group;
copy_infd : Unix.file_descr;
copy_outfd : Unix.file_descr;
copy_size : int;
copy_inbuf : string;
copy_outbuf : string;
mutable copy_outlen : int;
mutable copy_eof : bool;
mutable copy_have_inres : bool;
mutable copy_have_outres : bool;
mutable copy_cleared : bool;
}
</code></pre>
<p>
This record type contains the state of the engine.
<p>
<ul>
<li><code class="code">copy_ues</code>: The event system to which the
engine is attached</li>
<li><code class="code">copy_group</code>: The group to which all the
entities belong</li>
<li><code class="code">copy_infd</code>: The file descriptor of the
source file</li>
<li><code class="code">copy_outfd</code>: The file descriptor of the
copy file</li>
<li><code class="code">copy_size</code>: The size of copy_inbuf and copy_outbuf</li>
<li><code class="code">copy_inbuf</code>: The string buffer used to read
the bytes of the source file</li>
<li><code class="code">copy_outbuf</code>: The string buffer used to
write the bytes to the copy file</li>
<li><code class="code">copy_outlen</code>: The portion of copy_outbuf
that is actually used</li>
<li><code class="code">copy_eof</code>: Whether the EOF marker has been
read or not</li>
<li><code class="code">copy_have_inres</code>: Whether there is
currently an input resource for the input file</li>
<li><code class="code">copy_have_outres</code>: Whether there is
currently an output resource for the output file</li>
<li><code class="code">copy_cleared</code>: Whether the copy is over or not</li>
</ul>
Now the core function begins:
<p>
<pre><code class="code">let copy_file ues old_name new_name =
(* Adds the necessary handlers and actions to the Unixqueue.event_system
* ues that copy the file 'old_name' to 'new_name'.
*)
</code></pre>
<p>
Several inner functions are defined now. First,
<code class="code">update_resources</code> adds or removes the resources involved in
copying. The record components <code class="code">copy_have_inres</code> and
<code class="code">copy_have_outres</code> store whether there is currently a resource
for input and for output, respectively. The function computes whether an input or
output resource is wanted, and then adds or removes the resource as needed.
If both resources are deleted, the file descriptors are closed, and the event
system is cleaned up.
<p>
We want input if there is space in the output buffer, and the end
of the input file has not yet been reached. If this is true, it is ensured that
an input resource is defined for the input file such that input events are
generated.
<p>
We want output if there is something in the output buffer. In the
same manner it is ensured that an output resource is defined for the output
file.
<p>
Note that normally the input and output resources are added and
removed several times until the complete file is copied.
<p>
<pre><code class="code"> let update_resources state ues =
let want_input_resource =
not state.copy_eof && state.copy_outlen < state.copy_size in
let want_output_resource =
state.copy_outlen > 0 in
if want_input_resource && not state.copy_have_inres then
add_resource ues state.copy_group (Wait_in state.copy_infd, -.1.0);
if not want_input_resource && state.copy_have_inres then
remove_resource ues state.copy_group (Wait_in state.copy_infd);
if want_output_resource && not state.copy_have_outres then
add_resource ues state.copy_group (Wait_out state.copy_outfd, -.1.0);
if not want_output_resource && state.copy_have_outres then
remove_resource ues state.copy_group (Wait_out state.copy_outfd);
state.copy_have_inres <- want_input_resource;
state.copy_have_outres <- want_output_resource;
if not want_input_resource && not want_output_resource &&
not state.copy_cleared
then begin
(* Close file descriptors at end: *)
Unix.close state.copy_infd;
Unix.close state.copy_outfd;
(* Remove everything: *)
clear ues state.copy_group;
state.copy_cleared <- true; (* avoid to call 'clear' twice *)
end
in
</code></pre>
<p>
The input handler is called only for input events belonging to our
own group. It is very similar to the example in the introductory
chapter.
<p>
The input handler calls <code class="code">update_resources</code> after
the work is done. It is now possible that the output buffer contains data
after it was previously empty, and <code class="code">update_resources</code> will then
add the output resource. Or, it is possible that the output buffer is now full,
and <code class="code">update_resources</code> will then remove the input resource such
that no more input data will be accepted. Of course, both conditions can happen
at the same time.
<p>
<pre><code class="code"> let handle_input state ues esys e =
(* There is data on the input file descriptor. *)
(* Calculate the available space in the output buffer: *)
let n = state.copy_size - state.copy_outlen in
assert(n > 0);
(* Read the data: *)
let n' = Unix.read state.copy_infd state.copy_inbuf 0 n in
(* End of stream reached? *)
state.copy_eof <- n' = 0;
(* Append the read data to the output buffer: *)
String.blit state.copy_inbuf 0 state.copy_outbuf state.copy_outlen n';
state.copy_outlen <- state.copy_outlen + n';
(* Add or remove resources: *)
update_resources state ues
in
</code></pre>
<p>
The output handler is called only for output events of our own
group, too.
<p>
The output handler calls <code class="code">update_resources</code> after
the work is done. It is now possible that the output buffer has space again,
and <code class="code">update_resources</code> will add the input resource again. Or,
the output buffer may even be empty, and <code class="code">update_resources</code> will
also remove the output resource.
<p>
<pre><code class="code"> let handle_output state ues esys e =
(* The file descriptor is ready to output data. *)
(* Write as much as possible: *)
let n' = Unix.write state.copy_outfd state.copy_outbuf 0 state.copy_outlen
in
(* Remove the written bytes from the output buffer: *)
String.blit
state.copy_outbuf n' state.copy_outbuf 0 (state.copy_outlen - n');
state.copy_outlen <- state.copy_outlen - n';
(* Add or remove resources: *)
update_resources state ues
in
</code></pre>
<p>
This is the main event handler. It accepts only
<code class="code">Input_arrived</code> and <code class="code">Output_readiness</code> events
belonging to our own group. All other events are rejected.
<p>
<pre><code class="code"> let handle state ues esys e =
(* Only accept events associated with our own group. *)
match e with
Input_arrived (g,fd) ->
handle_input state ues esys e
| Output_readiness (g,fd) ->
handle_output state ues esys e
| _ ->
raise Equeue.Reject
in
</code></pre>
<p>
Now the body of the <code class="code">copy_file</code> function
follows. It contains only initializations.
<p>
<pre><code class="code"> let g = new_group ues in
let infd = Unix.openfile
old_name
[ Unix.O_RDONLY; Unix.O_NONBLOCK ]
0 in
let outfd = Unix.openfile
new_name
[ Unix.O_WRONLY; Unix.O_NONBLOCK; Unix.O_CREAT; Unix.O_TRUNC ]
0o666 in
Unix.clear_nonblock infd;
Unix.clear_nonblock outfd;
let size = 1024 in
let state =
{ copy_ues = ues;
copy_group = g;
copy_infd = infd;
copy_outfd = outfd;
copy_size = size;
copy_inbuf = String.create size;
copy_outbuf = String.create size;
copy_outlen = 0;
copy_eof = false;
copy_have_inres = false;
copy_have_outres = false;
copy_cleared = false;
} in
update_resources state ues;
add_handler ues g (handle state);
;;
</code></pre>
<p>
Note that the files are opened in "non-blocking" mode. This ensures that the
<code class="code">Unix.openfile</code> system call does not itself block. After the
files have been opened, the non-blocking flag is reset; the event system
already guarantees that I/O will not block.
<p>
Now we can add our copy engine to an event system, e.g.
<p>
<pre><code class="code">let ues = create_unix_event_system() in
copy_file ues "a.old" "a.new";
copy_file ues "b.old" "b.new";
run ues
;;
</code></pre>
<p>
This piece of code will copy both files in parallel. Note that the concept of
"groups" is very helpful to prevent several instances of the same engine
from interfering with each other.
<p>
<a name="engines"></a>
<h2>Engines</h2>
<p>
Programming directly with Unixqueues can be quite
laborious. One needs a lot of code to solve even simple
problems. The question arises whether there is a way to construct
event-driven code from larger units that do more complicated tasks
than just looking at the possible I/O operations of file
descriptors. Ideally, there would be a construction principle that
scales with the problems the programmer wants to solve.
<p>
An <b>engine</b> is an object bound to an
event system that performs a task in an autonomous way. After the
engine has started, the user of the engine can leave it alone,
let it do what it has been designed for, and simply wait until the
engine has completed its task. The user can start several engines
at once, and all run in parallel. It is also possible to construct
larger engines from more primitive ones: one can run engines in
sequence (the output of the first engine is the input of the next),
one can synchronize engines (when two engines are done, the
results of both engines are combined into a single result), and one
can map the results of engines to different values.
<p>
<a name="eng_model"></a>
<h3>Modelling the abstract properties of engines</h3>
<p>
The formalization of engines assumes that there are four
major states (see the module <a href="Uq_engines.html"><code class="code">Uq_engines</code></a>):
<p>
<pre><code class="code"> type 't engine_state =
[ `Working of int
| `Done of 't
| `Error of exn
| `Aborted
]
</code></pre>
<p>
A <code class="code">`Working</code> engine is actively performing its
task. The number argument counts the events that are processed while
progressing. The state <code class="code">`Done</code> indicates that the
task is completed. The argument of <code class="code">`Done</code> is the
result value of the engine. The state <code class="code">`Error</code> means
that the engine ran into a problem and cannot continue. Usually an
exception was raised, and in order to be able to pass the exception to
the outside world, it becomes the argument of
<code class="code">`Error</code>. Finally, an engine can be explicitly
<code class="code">`Aborted</code> by calling the <code class="code">abort</code>
method. This forces the engine to stop and release the resources
it has allocated.
<p>
The last three states are called <b>final
states</b> because they indicate that the engine has
stopped. Once it is in a final state, the engine will never go back to
<code class="code">`Working</code>, and will also not transition into another
final state.
<p>
There is no state for the situation that the engine has not
yet begun operation. It is assumed that an engine starts performing
its task right when it has been created, so the initial state is
usually <code class="code">`Working 0</code>.
<p>
Engines are objects that implement this class type:
<p>
<pre><code class="code"> class type [ 't ] engine = object
method state : 't engine_state
method abort : unit -> unit
method request_notification : (unit -> bool) -> unit
method event_system : Unixqueue.event_system
end
</code></pre>
<p>
The method <code class="code">state</code> reports the current state of
the engine. By calling <code class="code">abort</code> the engine is
aborted. The method <code class="code">request_notification</code> will
be explained later. Finally, <code class="code">event_system</code> reports
the Unixqueue event system the engine is attached to.
<p>
<a name="eng_model_eg"></a>
<h3>Examples for engine primitives and engine construction</h3>
<p>
Fortunately, there are already some primitive engines
we can just instantiate, and see what they are doing. The
function <code class="code">connector</code> creates an engine that
connects to a TCP service in the network, and returns the connected
socket as result:
<p>
<pre><code class="code"> val connector : ?proxy:#client_socket_connector ->
connect_address ->
Unixqueue.event_system ->
connect_status engine
</code></pre>
<p>
To create and setup the engine, just call this function, as in:
<p>
<pre><code class="code"> let ues = Unixqueue.create_unix_event_system() in
let addr = `Socket(`Sock_inet_byname(Unix.SOCK_STREAM, "www.npc.de", 80)) in
let eng = connector addr ues in
...
</code></pre>
<p>
The engine will connect to the web server (port 80) on www.npc.de.
It has added handlers and resources to the event system <code class="code">ues</code>
such that the action of connecting will be triggered when
<a href="Unixqueue.html#VALrun"><code class="code">Unixqueue.run</code></a> becomes active. To see the effect, just
activate the event system:
<p>
<pre><code class="code"> Unixqueue.run ues
</code></pre>
<p>
When the connection is established, <code class="code">eng#state</code> changes to
<code class="code">`Done(`Socket(fd,addr))</code> where <code class="code">fd</code>
is the socket, and <code class="code">addr</code> is the logical address of the
client socket (which may be different from the physical address because
<code class="code">connect</code> supports network proxies). It is also
possible that the state changes to <code class="code">`Error e</code> where
<code class="code">e</code> is the problematic exception. Note that there is
no timeout value; to limit the time of engine actions one has to
attach a watchdog to the engine.
<p>
This is not yet very impressive, because we have only
a single engine. As mentioned, engines run in parallel, so we can
connect to several web services in parallel by just creating several
engines:
<p>
<pre><code class="code"> let ues = Unixqueue.create_unix_event_system() in
let addr1 = `Socket(`Sock_inet_byname(Unix.SOCK_STREAM, "www.npc.de", 80)) in
let addr2 = `Socket(`Sock_inet_byname(Unix.SOCK_STREAM, "caml.inria.fr", 80)) in
let addr3 = `Socket(`Sock_inet_byname(Unix.SOCK_STREAM, "ocaml-programming.de", 80)) in
let eng1 = connector addr1 ues in
let eng2 = connector addr2 ues in
let eng3 = connector addr3 ues in
Unixqueue.run ues
</code></pre>
<p>
Note that the resolution of DNS names is not done in the background, and
may block the whole event system for a moment.
<p>
As a variant, we can also connect to one service after the
other:
<p>
<pre><code class="code"> let eng1 = connector addr1 ues in
let eng123 = new seq_engine
eng1
(fun result1 ->
let eng2 = connector addr2 ues in
new seq_engine
eng2
(fun result2 ->
let eng3 = connector addr3 ues in
eng3))
</code></pre>
<p>
The constructor for sequential engine execution, <code class="code">seq_engine</code>,
expects one engine and a function as arguments. When the engine is done,
the function is invoked with the result of the engine, and the function
must return a second engine. The result of <code class="code">seq_engine</code> is
the result of the second engine.
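<p>
The data flow of <code class="code">seq_engine</code> can be illustrated
with a toy synchronous model in which an "engine" is just a suspended
computation. This only models the sequencing of results, not the
event-driven execution of the real class:

```ocaml
(* A toy 'engine' delivers a result when forced. *)
type 'a toy_engine = unit -> 'a

(* seq: when the first engine is done, feed its result to the function,
   which builds the second engine; the overall result is the second
   engine's result. *)
let seq (e1 : 'a toy_engine) (f : 'a -> 'b toy_engine) : 'b toy_engine =
  fun () -> f (e1 ()) ()

(* Three 'connections' in sequence, as in the example above: *)
let eng1 = fun () -> "addr1 connected"
let eng123 =
  seq eng1 (fun _r1 ->
    seq (fun () -> "addr2 connected") (fun _r2 ->
      fun () -> "addr3 connected"))
```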
<p>
In these examples, we have called <a href="Unixqueue.html#VALrun"><code class="code">Unixqueue.run</code></a>
to start the event system. This function returns when all actions are
completed; this implies that finally all engines are synchronized again
(i.e. in a final state). We can also synchronize in the middle of the
execution by using <code class="code">sync_engine</code>. In the following
code snippet, two services are connected in parallel, and when both
connections have been established, a third connection is started:
<p>
<pre><code class="code"> let eng1 = connector addr1 ues in
let eng2 = connector addr2 ues in
let eng12 = new sync_engine eng1 eng2 in
let eng123 = new seq_engine
eng12
(fun result12 ->
let eng3 = connector addr3 ues in
eng3)
</code></pre>
<p>
<a name="eng_notify"></a>
<h3>The notification mechanism</h3>
<p>
Often, one just wants to watch an engine, and to perform a
special action when it reaches a final state. There is a simple way to
configure a callback:
<p>
<pre><code class="code"> val when_state : ?is_done:('a -> unit) ->
?is_error:(exn -> unit) ->
?is_aborted:(unit -> unit) ->
'a #engine ->
unit
</code></pre>
<p>
For example, to output a message when <code class="code">eng1</code> is
connected:
<p>
<pre><code class="code"> when_state ~is_done:(fun _ -> prerr_endline "eng1 connected") eng1
</code></pre>
<p>
The argument of <code class="code">is_done</code> is the result of the
engine (not needed in this example).
<p>
The function <code class="code">when_state</code> is implemented
with the notification mechanism that all engines must support. The method
<code class="code">request_notification</code> can be used to request a
callback whenever the state of the engine changes:
<p>
<pre><code class="code"> method request_notification : (unit -> bool) -> unit
</code></pre>
<p>
The callback function returns whether it is still interested in being
called (<code class="code">true</code>) or not (<code class="code">false</code>).
In the latter case, the engine must not call the function again.
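<p>
A possible implementation of <code class="code">when_state</code> on top of
<code class="code">request_notification</code> could look like this. This is
a sketch following the contract above, not the actual Uq_engines source:

```ocaml
(* when_state registers a callback that inspects the state on every
   change. As long as the engine is `Working, the callback stays
   registered (returns true); on a final state it fires the matching
   function and deregisters itself (returns false). *)
let when_state ?(is_done = ignore) ?(is_error = ignore)
               ?(is_aborted = fun () -> ()) eng =
  eng # request_notification
    (fun () ->
      match eng # state with
      | `Done r    -> is_done r; false
      | `Error e   -> is_error e; false
      | `Aborted   -> is_aborted (); false
      | `Working _ -> true)
```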
<p>
For example, the connection message can also be output
by:
<pre><code class="code"> eng1 # request_notification
(fun () ->
match eng1#state with
`Done _ -> prerr_endline "eng1 connected"; false
| `Error _
| `Aborted -> false
| `Working _ -> true
)
</code></pre>
<p>
Some more details: the callback function must be
called even when only minor state changes occur, e.g. when
<code class="code">`Working n</code> changes to <code class="code">`Working (n+1)</code>.
The engine is free to invoke the callback function even more
frequently.
<p>
Another detail: it is allowed to request further callbacks
while a callback function is running.
<p>
Note that all engine construction classes are based on the
notification mechanism, so it is absolutely required that it
is implemented correctly.
<p>
<a name="eng_async_ch"></a>
<h3>Asynchronous channels</h3>
<p>
Because engines are based on Unixqueues, one can imagine
that complex operations on file descriptors are executed by engines.
Actually, there is a primitive that copies the whole byte stream
arriving at one descriptor to another descriptor: The class
<code class="code">copier</code>. We do not discuss this class in detail;
it is explained in the reference manual. From the outside it works
like every engine: One specifies the task, creates the engine, and
waits until it is finished. Internally, the class has to watch
both file descriptors, check when data can be read and written,
and to actually copy chunk by chunk.
<p>
Now imagine we do not only want to copy from descriptor to
descriptor, but to copy from a descriptor into a data object. Of
course, we have the phenomenon that the descriptor sometimes has data
to be read and sometimes not; this is well-known and can be
effectively handled by Unixqueue means. In addition to this, we assume
that there is only limited processing capacity in the data object, so
it can sometimes accept data and sometimes not. This sounds the same,
but it is not, <b>because there is no descriptor to which this
phenomenon is bound</b>. We have to develop our own interface
to mimic this behaviour on a higher programming level: the
asynchronous output channel.
<p>
The term <b>channel</b> is used by the
O'Caml runtime system to refer to buffered I/O descriptors. The
Ocamlnet library has extended the meaning of the term to
objects that handle I/O in a configurable way. As this is what we
are going to do, we adopt this meaning.
<p>
An asynchronous output channel is a class with the type:
<p>
<pre><code class="code">class type async_out_channel = object
method output : string -> int -> int -> int
method close_out : unit -> unit
method pos_out : int
method flush : unit -> unit
method can_output : bool
method request_notification : (unit -> bool) -> unit
end
</code></pre>
<p>
The first four methods are borrowed from Ocamlnet's class type
<code class="code">raw_out_channel</code>:
<p>
<ul>
<li><code class="code">output s k n</code> writes
to the channel n bytes found at position k of string
s. The method returns the number of bytes that have been accepted</li>
<li><code class="code">close_out()</code> closes the
channel</li>
<li><code class="code">flush()</code> causes bytes
found in internal buffers to be immediately processed. Note that
it is questionable what this means in an asynchronous programming
environment, and because of this, we ignore this method.</li>
<li><code class="code">pos_out</code> returns the
number of bytes that have been written into the channel since
its creation (as an object)</li>
</ul>
Originally, these methods have been specified for synchronous
channels. Such channels are allowed to wait until a needed resource
is available again - this is not possible for an asynchronous
channel. For example, in the original specification
<code class="code">output</code> is guaranteed to accept at least one
byte, and an implementation is free to wait until this is possible.
Here, we must not do so because this would block the whole event
system. Instead, there are two additional methods that help
to cope with these difficulties:
<p>
<ul>
<li><code class="code">can_output</code> returns true
when <code class="code">output</code> accepts at least one byte, and
false otherwise</li>
<li><code class="code">request_notification f</code>
requests that the function f is called back whenever
<code class="code">can_output</code> changes its value</li>
</ul>
The point is that now the <b>user</b> of an asynchronous
channel is able to defer the output operation into the future when it
is currently not possible. Of course, it is required that the user
knows this - using an asynchronous channel is not as easy as using
a synchronous channel.
<p>
We show now two examples: The first always accepts output
and appends it to a buffer. Of course, the two methods
<code class="code">can_output</code> and <code class="code">request_notification</code>
are trivial in this case. The second example illustrates these methods: the
channel pauses for one second after one kilobyte of data has been
accepted. This is of little practical use, but quite simple to
implement, and is at the right level of difficulty for an example.
<p>
Example 1: We just inherit from an Ocamlnet class that
implements the buffer:
<p>
<pre><code class="code"> class async_buffer b =
object (self)
inherit Netchannels.output_buffer b
method can_output = true
method request_notification (f : unit->bool) = ()
end
</code></pre>
<p>
This is a good example because it demonstrates why
the class type <code class="code">async_out_channel</code> is based on
an Ocamlnet class type. (Note that <code class="code">async_buffer</code>
defines more methods than necessary. It might be necessary to
coerce objects of this class to <code class="code">async_out_channel</code>
if required by typing.)
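<p>
If Netchannels is not at hand, the same idea can be sketched with the
standard <code class="code">Buffer</code> module alone. The class name
<code class="code">async_string_buffer</code> and its
<code class="code">contents</code> accessor are made up for this
illustration:

```ocaml
(* An always-writable asynchronous channel backed by a Buffer.
   can_output is constantly true, so notification is trivial. *)
class async_string_buffer = object
  val buf = Buffer.create 64
  method output s k n = Buffer.add_substring buf s k n; n
  method close_out () = ()
  method pos_out = Buffer.length buf
  method flush () = ()
  method can_output = true
  method request_notification (_ : unit -> bool) = ()
  method contents = Buffer.contents buf   (* extra accessor, for inspection *)
end
```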
<p>
Example 2: Again we use an Ocamlnet class to implement the
buffer, but we do not directly inherit from this class. Instead we
instantiate it as an instance variable <code class="code">real_buf</code>.
The variable <code class="code">barrier_enabled</code> is true as long as no
more than 1024 bytes have been written into the buffer,
<b>and</b> the sleep second is not yet over. The
variable <code class="code">barrier_reached</code> is true if at least 1024
bytes have been written into the buffer.
<p>
<pre><code class="code"> class funny_async_buffer b ues =
object (self)
val real_buf = new Netchannels.output_buffer b
val mutable barrier_enabled = true
val mutable barrier_reached = false
val mutable notify_list = []
val mutable notify_list_new = []
method output s k n =
if barrier_enabled then (
let m = 1024 - real_buf#pos_out in
let r = real_buf # output s k (min n m) in
if m > 0 && real_buf#pos_out = 1024 then (
barrier_reached <- true;
self # configure_sleep_second();
self # notify()
);
r
)
else
real_buf # output s k n
method flush() = ()
method pos_out = real_buf#pos_out
method close_out() = real_buf#close_out()
method can_output =
if barrier_enabled then
not barrier_reached
else
true
method request_notification f =
notify_list_new <- f :: notify_list_new
method private notify() =
notify_list <- notify_list @ notify_list_new;
notify_list_new <- [];
notify_list <- List.filter (fun f -> f()) notify_list
method private configure_sleep_second() =
let g = Unixqueue.new_group ues in
Unixqueue.once ues g 1.0 self#wake_up
method private wake_up() =
barrier_enabled <- false;
self # notify()
end
</code></pre>
<p>
Initially, the barrier is enabled, and <code class="code">can_output</code>
returns <code class="code">true</code>. The logic in
<code class="code">output</code> ensures that no more than 1024 bytes are
added to the buffer. When the 1024th byte is written, the barrier is
reached, and the sleep second begins. <code class="code">can_output</code>
changes to <code class="code">false</code>, and because of this change we must
<code class="code">notify</code> the functions that have requested
notification.
The timer is implemented by a call of <a href="Unixqueue.html#VALonce"><code class="code">Unixqueue.once</code></a>;
this function performs a callback after a period of time has
elapsed. Here, <code class="code">wake_up</code> is called back. It
disables the barrier, and because <code class="code">can_output</code>
is now <code class="code">true</code> again, the notifications must be
performed again.
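<p>
The notification protocol can also be studied in isolation. The following
is a minimal, self-contained sketch (plain O'Caml, no Equeue involved) of
the same pattern used by the class above: a registered callback returns
<code class="code">true</code> to stay on the list and
<code class="code">false</code> to unregister itself.

```ocaml
(* Sketch of the notification protocol: callbacks return true to remain
   registered, false to drop out. New registrations are collected in a
   separate list so that notify() is safe against re-registration. *)
let notify_list = ref ([] : (unit -> bool) list)
let notify_list_new = ref []

let request_notification f =
  notify_list_new := f :: !notify_list_new

let notify () =
  notify_list := !notify_list @ !notify_list_new;
  notify_list_new := [];
  notify_list := List.filter (fun f -> f ()) !notify_list

(* Example: a counter that unregisters itself after two notifications *)
let count = ref 0
let () =
  request_notification (fun () -> incr count; !count < 2);
  notify ();   (* count becomes 1, callback stays registered *)
  notify ();   (* count becomes 2, callback removes itself *)
  notify ();   (* no effect, the list is empty *)
  assert (!count = 2)
```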
<p>
The complete example can be found in the "examples/engines"
directory of the equeue distribution.
<p>
An implementation of a useful asynchronous channel is
<code class="code">output_async_descr</code> that outputs the channel data
to a file descriptor. This class is also an engine. See the reference
manual for a description.
<p>
<a name="eng_recv"></a>
<h3>Receivers</h3>
<p>
The question is what one can do with asynchronous channels.
We have mentioned that these objects were designed with copy tasks
in mind that transfer data from file descriptors into data objects.
Of course, the asynchronous channels play the role of these data
objects. In addition to these, we need an engine that actually performs
this kind of data transfer: The receiver engine.
<p>
The receiver class has this signature:
<p>
<pre><code class="code"> class receiver : src:Unix.file_descr ->
                  dst:#async_out_channel ->
                  ?close_src:bool ->
                  ?close_dst:bool ->
                  Unixqueue.event_system ->
                  [unit] engine
</code></pre>
<p>
Obviously, <code class="code">src</code> is the descriptor to get the data
from, and <code class="code">dst</code> is the asynchronous channel to
write the data into. After the receiver has been created, it copies
the data stream from <code class="code">src</code> to <code class="code">dst</code>
until EOF is found.
<p>
The receiver is an engine, and this means that it
reports its state to the outer world. When the copy task has been
completed, it transitions into the state <code class="code">`Done()</code>.
<p>
In the next section we present a real example that
also uses the receiver class.
<p>
<a name="eng_eg"></a>
<h3>Example: A simple HTTP client</h3>
<p>
The HTTP protocol is used to get web pages from web servers.
Its principle is very simple: A request is sent to the server, and
the server replies with the document (well, actually HTTP can be very
complicated, but it can also still be used in this simple way). For
example, the request could be
<p>
<pre><code class="code"> GET / HTTP/1.0
--empty line--
</code></pre>
<p>
Note that the second line is empty. The server responds with
a header, an empty line, and the document. In HTTP/1.0 we can assume
that the server sends EOF after the document.
<p>
The first part of our client connects to the web server.
This is not new:
<p>
<pre><code class="code"> let ues = Unixqueue.create_unix_event_system();;
 let c = connector (`Socket(`Sock_inet_byname(Unix.SOCK_STREAM,
                                              "www.npc.de", 80),
                            default_connect_options
                   )) ues;;
</code></pre>
<p>
Furthermore, we need an asynchronous output channel that stores
the incoming server reply. This is also a known code snippet:
<p>
<pre><code class="code"> class async_buffer b =
 object (self)
   inherit Netchannels.output_buffer b
   method can_output = true
   method request_notification (f : unit->bool) = ()
 end
</code></pre>
<p>
We also create a buffer:
<p>
<pre><code class="code"> let b = Buffer.create 10000;;
</code></pre>
<p>
Now we are interested in the moment when the connection is
established. In this moment, we set up an
<code class="code">output_async_descr</code> object that copies
its contents to the connection, so we can asynchronously
send our HTTP request. Furthermore, we create an
<code class="code">async_buffer</code> object that collects the
HTTP response, which can arrive at any time from now on.
<p>
<pre><code class="code"> when_state
   ~is_done:(fun connstat ->
     match connstat with
       `Socket(fd, _) ->
         prerr_endline "CONNECTED";
         let printer = new output_async_descr ~dst:fd ues in
         let buffer = new async_buffer b in
         let receiver = new receiver ~src:fd ~dst:buffer ues in
         let s = "GET / HTTP/1.0\n\n" in
         ignore(printer # output s 0 (String.length s));
         when_state
           ~is_done:(fun _ ->
             prerr_endline "HTTP RESPONSE RECEIVED!")
           ~is_error:(fun _ ->
             prerr_endline "ERROR!")
           receiver
     | _ -> assert false
   )
   c
 ;;
</code></pre>
<p>
Some details: We can ignore the result of <code class="code">printer#output</code>
because the <code class="code">printer</code> has unlimited capacity (the
default of <code class="code">output_async_descr</code> channels). Because
<code class="code">printer</code> is not closed, this channel does not
close the destination descriptor <code class="code">fd</code> (which would
be fatal). The <code class="code">receiver</code>, however, closes the file
descriptor when it finds the end of the input stream.
<p>
One important line is missing: Up to now we have only
set up the client, but it is not yet running. To invoke it we need:
<p>
<pre><code class="code"> Unixqueue.run ues;;
</code></pre>
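<p>
After <code class="code">Unixqueue.run</code> returns, the buffer
<code class="code">b</code> contains the raw HTTP response: the status
line and headers, an empty line, and the document. As a small helper
(a sketch, not part of Equeue), the response can be split at the first
empty line; servers normally use CR/LF line endings, so both separators
are accepted:

```ocaml
(* Split a raw HTTP response into (header, body) at the first blank line.
   Servers normally separate with "\r\n\r\n"; a bare "\n\n" is accepted too. *)
let split_response (s : string) : string * string =
  let find_sub sep =
    let n = String.length s and m = String.length sep in
    let rec loop i =
      if i + m > n then None
      else if String.sub s i m = sep then Some i
      else loop (i + 1)
    in loop 0
  in
  match find_sub "\r\n\r\n" with
  | Some i -> (String.sub s 0 i, String.sub s (i+4) (String.length s - i - 4))
  | None ->
    (match find_sub "\n\n" with
     | Some i -> (String.sub s 0 i, String.sub s (i+2) (String.length s - i - 2))
     | None -> (s, ""))

(* Usage after the event loop has terminated:
   let header, body = split_response (Buffer.contents b) *)
```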
<p>
This client is not perfect, and not only because it is restricted
to the most basic form of the HTTP protocol. The error handling could
also be better: in the case that <code class="code">printer</code> transitions to the
error state, it closes <code class="code">fd</code>. But the file descriptor
is in use by <code class="code">receiver</code> at the same time, causing
a follow-up error that is finally reported. A better solution would
use a duplicate of the file descriptor for <code class="code">receiver</code>,
so both engines can independently close their descriptors, and would
additionally trap the error state of <code class="code">printer</code>.
<p>
<a name="ev_vs_mt"></a>
<h2>Event-driven programming vs. multi-threaded programming</h2>
<p>
One of the tasks of event-driven programming is to avoid blocking
situations; another is to schedule the processor activities.
Multi-threaded programming is an alternative approach to the same goals.
<p>
The fundamental difference between both approaches is that
in the case of event-driven programming the application schedules itself, while
in the case of multi-threaded programming additional features of the operating
system are applied. The latter seems to have major advantages: for example,
blocking cannot occur at all (if one thread blocks, the other threads may
continue running), and scheduling is one of the native tasks of an operating
system.
<p>
This is not the whole truth. First of all, multi-threaded programming
has the disadvantage that every line of the program must follow certain
programming guidelines; in particular, shared storage must be protected by mutexes
such that everything is "reentrant". This is not very simple. By contrast,
event-driven programs can be "plugged together" from a set of basic components,
and you do not need to know how the components are programmed.
<p>
Scheduling: multi-threaded programs sometimes lead to situations
where there are many runnable threads. Regardless of the capabilities of the
operating system, all modern hardware performs
badly if the code to execute is spread widely over memory, mainly
because cache memory is limited. Many operating systems are not well
enough designed to efficiently work around this bottleneck.
<p>
Furthermore, I think that scheduling controlled by the application
that knows best its own requirements cannot be worse than scheduling controlled
by the operating system. (But this may be wrong in special situations.)
<p>
Avoiding blocking: of course, an event-driven program blocks if it gets
into an endless loop. A multi-threaded application does not block in this case,
but it wastes CPU time. It is normally not possible to kill a single runaway
thread because most programs are not "cancellation-safe" (a very demanding
requirement). In O'Caml, cancellation is only possible for the bytecode thread
emulation.
<p>
Of course, if you must combine non-blocking I/O with
time-consuming computations, the multi-threaded program will block "less" (it
only becomes slower) than the event-driven program, which is unavailable for a
period of time.
<p>
In conclusion, I think that there are many tasks where
event-driven programs perform as well as multi-threaded programs, but
where the first style has fewer requirements on the quality of the code.
<p>
<a name="3_Combiningbothstyles"></a>
<h3>Combining both styles</h3>
<p>
Since Equeue 1.2, it is
possible to use Equeue in a multi-threaded environment. The fundamental Equeue
module is reentrant, and the Unixqueue module even serializes the execution of
functions if necessary, such that the same event system may be used from
different threads.
<p>
One idea is to program a hybrid server in the following way: One
thread does all network I/O (using event systems), and the other threads
execute the operations the server provides. For example, consider a server
doing remote procedure calls (as most servers do). Such a server receives requests,
and answers every request with a response. When the server starts up, the networking
thread begins to wait for requests. When a complete request has been received,
a new thread is started performing the requested operation. The network thread
continues immediately, normally doing other network I/O. When the operation is
over, an artificial event is generated indicating this situation (see below on
artificial events). The artificial event carries the result of the operation,
and is added to the event system directly from the thread that executed the
operation. This thread can now stop working. The network thread receives this
artificial event like every other event, and can start sending the result over
the network back to the client.
<p>
Artificial events are new in Equeue 1.2, too. The idea is to use
O'Caml exceptions as a dynamically extensible sum type. For example:
<p>
<pre><code class="code">exception Result of result_type ;;
...
add_event esys (Extra (Result r))
</code></pre>
<p>
The <code class="code">Extra</code> event constructor can carry every exception value.
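<p>
The trick works because any module can extend the exception type with new
constructors, while a dispatcher still matches only on the constructors it
knows. A self-contained illustration (the constructors
<code class="code">Result</code> and <code class="code">Failure_code</code>
are made up for this sketch):

```ocaml
(* Exceptions form a dynamically extensible sum type: any module can add
   constructors, and a dispatcher matches on the ones it knows. *)
exception Result of string
exception Failure_code of int

let describe (e : exn) =
  match e with
  | Result s -> "result: " ^ s
  | Failure_code n -> "failure " ^ string_of_int n
  | _ -> "unknown event"

let () =
  assert (describe (Result "ok") = "result: ok");
  assert (describe (Failure_code 5) = "failure 5");
  assert (describe Not_found = "unknown event")
```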
<p>
<a name="4_Caveat"></a>
<h4>Caveat</h4>
<p>
The <code class="code">Extra</code>
events are not associated with any group. Every event handler will get them.
<p>
<a name="pitfalls"></a>
<h2>Pitfalls</h2>
<p>
There are some situations where the program may still block, if it is
not programmed very carefully.
<p>
<ul>
<li>Like the <code class="code">open</code> system call, the
<code class="code">connect</code> system call may block. To avoid blocking, you
must first set the socket to non-blocking mode, e.g.
<pre><code class="code"> let s = Unix.socket Unix.PF_INET Unix.SOCK_STREAM 0 in
 Unix.set_nonblock s;
 (* In non-blocking mode, connect normally raises EINPROGRESS;
    the connection is then established in the background. *)
 (try Unix.connect s some_address
  with Unix.Unix_error(Unix.EINPROGRESS, _, _) -> ());
</code></pre></li>
<li>Other blocking candidates are the name-server functions, in
O'Caml e.g. <code class="code">Unix.gethostbyname</code>. This is very hard to solve
because the underlying C library performs the DNS lookup. The POSIX thread
implementation does not help, because special DNS functions need to be called to
avoid blocking (these functions have a reentrant function interface), and the
O'Caml <code class="code">Unix</code> module does not use them. A possible solution is
to fork a new process, and let the new process perform the DNS lookup.</li>
</ul>
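<p>
To illustrate the first pitfall: in non-blocking mode,
<code class="code">Unix.connect</code> normally raises
<code class="code">EINPROGRESS</code>, and the completion of the connection
must be detected separately, e.g. with <code class="code">Unix.select</code>
(writability) and <code class="code">Unix.getsockopt_error</code>. The helper
<code class="code">connect_nonblocking</code> below is only a sketch, not part
of Equeue; the demonstration connects to a local listener so that it is
self-contained:

```ocaml
(* Sketch: completing a non-blocking connect. The socket becomes writable
   when the connection attempt finishes, and getsockopt_error reveals the
   outcome (None = connected, Some err = failed). *)
let connect_nonblocking addr =
  let s = Unix.socket Unix.PF_INET Unix.SOCK_STREAM 0 in
  Unix.set_nonblock s;
  (try Unix.connect s addr
   with Unix.Unix_error (Unix.EINPROGRESS, _, _) -> ());
  (* wait (here: at most 5s) until the socket becomes writable *)
  (match Unix.select [] [s] [] 5.0 with
   | (_, [_], _) ->
     (match Unix.getsockopt_error s with
      | None -> s                      (* connection established *)
      | Some err ->
        Unix.close s;
        raise (Unix.Unix_error (err, "connect", "")))
   | _ ->
     Unix.close s;
     failwith "connect timed out")

(* Self-contained demonstration against a local listening socket: *)
let connected =
  let srv = Unix.socket Unix.PF_INET Unix.SOCK_STREAM 0 in
  Unix.bind srv (Unix.ADDR_INET (Unix.inet_addr_loopback, 0));
  Unix.listen srv 1;
  let s = connect_nonblocking (Unix.getsockname srv) in
  Unix.close s; Unix.close srv;
  true
```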
<a name="ui"></a>
<h2>Using Unixqueue together with Tcl (labltk) and Glib (lablgtk)</h2>
<p>
The Tcl programming language already has an event queue
implementation, and the Tk toolkit uses it to implement event queues for
graphical user interfaces (GUIs). In the O'Caml world, Tcl/Tk is available
through the packages camltk and labltk.
<p>
The same holds for the Glib library which is used by the
gtk GUI toolkit to implement event queues. In the O'Caml world, gtk
bindings are provided by lablgtk (and lablgtk2).
<p>
While the GUI queues mainly process GUI events (e.g. mouse
and keyboard events), they can also watch files in the same way as
Unixqueue does. It is, however, not possible to run both types of
queues in parallel: when one type of queue blocks, the other is
implicitly blocked as well, even when there would be events to process.
The solution is to <b>integrate</b> both queues.
Because the GUI queues can subsume the functionality of
Unixqueue, the GUI queues are the more fundamental ones, and Unixqueue
must integrate its event processing into them.
<p>
This type of integration into the GUI queues is implemented by
defining the alternate classes <a href="Uq_tcl.tcl_event_system.html"><code class="code">Uq_tcl.tcl_event_system</code></a>
and <a href="Uq_gtk.gtk_event_system.html"><code class="code">Uq_gtk.gtk_event_system</code></a>. These classes can be used
in the same way as <a href="Unixqueue.unix_event_system.html"><code class="code">Unixqueue.unix_event_system</code></a>, but
automatically arrange the event queue integration.
<p>
For example, a labltk program uses
<pre><code class="code">let ues = new Uq_tcl.tcl_event_system()
</code></pre>
to create the event system object, which can be used in the same way
as event systems created with <code class="code">create_unix_event_system</code>
or <code class="code">new unix_event_system</code>.
There is one important difference,
however: one must not call <a href="Unixqueue.html#VALrun"><code class="code">Unixqueue.run</code></a> to start
the processing. The reason is that the TCL queue is already running,
and remains active during the runtime of the program. Remember that
when the GUI function
<pre><code class="code">Tk.mainLoop()
</code></pre>
is entered, the TCL queue becomes active, and all subsequent execution
of O'Caml code is triggered by callback functions. The integrated
queue now behaves as follows: When handlers, resources,
or events are added to <code class="code">ues</code>, they are automatically considered
for processing when the current callback function returns. For example,
this might look as follows:
<pre><code class="code">let b1 = Button.create
           ~text:"Start function"
           ~command:(fun () ->
             Unixqueue.add_handler ues ...; ...)
           widget in
let b2 = Button.create
           ~text:"Stop function"
           ~command:(fun () ->
             Unixqueue.remove_handler ues ...; ...)
           widget in
...
</code></pre>
When a button is pressed, the callback function passed as
<code class="code">command</code> starts executing.
It adds the handlers, resources, and whatever else is needed to start the
activated function. The callback function returns immediately, and
the processing of the event queue is performed by the regular GUI
event system. Of course, it is still possible to press other buttons
etc., because GUI and Unixqueue events are processed in an interleaved
way. So the user is able to press the "Stop" button to stop the
further execution of the activated function.
<p>
<a name="4_APIchange"></a>
<h4>API change</h4>
In Equeue-2.1, the
interface for the Tcl queue integration was changed. It now works
as described above; the function <code class="code">Unixqueue.attach_to_tcl_queue</code>
no longer exists. The new scheme has the advantage that the Glib-type
queues (and probably any other event queue implementation) can also
be supported easily.
<p>
<a name="4_Samplecode"></a>
<h4>Sample code</h4>
The example discussed before,
copying files in an event-driven way, has been extended to show how
Unixqueue and Tcl can cooperate. While the file is being copied, a window
informs about the progress and offers a "Stop" button which immediately
aborts the copy procedure. See the directory "filecopy_labltk" in the
distributed tarball. There is also a variant that works with lablgtk or
lablgtk2, see the directory "filecopy_lablgtk".
<p>
<a name="4_Pitfalls"></a>
<h4>Pitfalls</h4>
If you call Unixqueue functions
from Unixqueue event handlers, the functions behave exactly as described in the
previous chapters. However, it is also possible to call Unixqueue functions
from TCL/Glib event handlers. In this case, not all change requests are
honoured immediately. In particular, <code class="code">add_event</code> does not
immediately invoke the appropriate event handler; the event is just recorded,
and the handler will be called when the next <b>system</b> event
happens (either a GUI event, a file descriptor event, or a timeout event).
You can force the new event to be respected as soon as possible by adding
an empty handler using <a href="Unixqueue.html#VALonce"><code class="code">Unixqueue.once</code></a> with a timeout of 0 seconds.
The other Unixqueue functions should not behave differently (although the
operations actually performed are very different). In particular, you can
call <code class="code">add_resource</code> and <code class="code">remove_resource</code>, and
the change will be respected immediately.
<br>
</body></html>