<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
<head>
<link rel="stylesheet" href="style.css" type="text/css">
<meta content="text/html; charset=iso-8859-1" http-equiv="Content-Type">
<meta name="viewport" content="width=device-width, initial-scale=1">
<link rel="Start" href="index.html">
<link rel="previous" href="Netmcore_process.html">
<link rel="next" href="Netmcore_basics.html">
<link rel="Up" href="index.html">
<link title="Index of types" rel=Appendix href="index_types.html">
<link title="Index of extensions" rel=Appendix href="index_extensions.html">
<link title="Index of exceptions" rel=Appendix href="index_exceptions.html">
<link title="Index of values" rel=Appendix href="index_values.html">
<link title="Index of class attributes" rel=Appendix href="index_attributes.html">
<link title="Index of class methods" rel=Appendix href="index_methods.html">
<link title="Index of classes" rel=Appendix href="index_classes.html">
<link title="Index of class types" rel=Appendix href="index_class_types.html">
<link title="Index of modules" rel=Appendix href="index_modules.html">
<link title="Index of module types" rel=Appendix href="index_module_types.html">
<link title="Uq_gtk" rel="Chapter" href="Uq_gtk.html">
<link title="Uq_tcl" rel="Chapter" href="Uq_tcl.html">
<link title="Equeue" rel="Chapter" href="Equeue.html">
<link title="Unixqueue" rel="Chapter" href="Unixqueue.html">
<link title="Unixqueue_pollset" rel="Chapter" href="Unixqueue_pollset.html">
<link title="Unixqueue_select" rel="Chapter" href="Unixqueue_select.html">
<link title="Uq_resolver" rel="Chapter" href="Uq_resolver.html">
<link title="Uq_engines" rel="Chapter" href="Uq_engines.html">
<link title="Uq_multiplex" rel="Chapter" href="Uq_multiplex.html">
<link title="Uq_transfer" rel="Chapter" href="Uq_transfer.html">
<link title="Uq_socks5" rel="Chapter" href="Uq_socks5.html">
<link title="Uq_io" rel="Chapter" href="Uq_io.html">
<link title="Uq_lwt" rel="Chapter" href="Uq_lwt.html">
<link title="Uq_libevent" rel="Chapter" href="Uq_libevent.html">
<link title="Uq_mt" rel="Chapter" href="Uq_mt.html">
<link title="Uq_client" rel="Chapter" href="Uq_client.html">
<link title="Uq_server" rel="Chapter" href="Uq_server.html">
<link title="Uq_datagram" rel="Chapter" href="Uq_datagram.html">
<link title="Uq_engines_compat" rel="Chapter" href="Uq_engines_compat.html">
<link title="Equeue_intro" rel="Chapter" href="Equeue_intro.html">
<link title="Equeue_howto" rel="Chapter" href="Equeue_howto.html">
<link title="Netcamlbox" rel="Chapter" href="Netcamlbox.html">
<link title="Netcgi_apache" rel="Chapter" href="Netcgi_apache.html">
<link title="Netcgi_modtpl" rel="Chapter" href="Netcgi_modtpl.html">
<link title="Netcgi_plex" rel="Chapter" href="Netcgi_plex.html">
<link title="Netcgi_common" rel="Chapter" href="Netcgi_common.html">
<link title="Netcgi" rel="Chapter" href="Netcgi.html">
<link title="Netcgi_ajp" rel="Chapter" href="Netcgi_ajp.html">
<link title="Netcgi_scgi" rel="Chapter" href="Netcgi_scgi.html">
<link title="Netcgi_cgi" rel="Chapter" href="Netcgi_cgi.html">
<link title="Netcgi_fcgi" rel="Chapter" href="Netcgi_fcgi.html">
<link title="Netcgi_dbi" rel="Chapter" href="Netcgi_dbi.html">
<link title="Netcgi1_compat" rel="Chapter" href="Netcgi1_compat.html">
<link title="Netcgi_test" rel="Chapter" href="Netcgi_test.html">
<link title="Netcgi_porting" rel="Chapter" href="Netcgi_porting.html">
<link title="Nethttp_client_conncache" rel="Chapter" href="Nethttp_client_conncache.html">
<link title="Nethttp_client" rel="Chapter" href="Nethttp_client.html">
<link title="Nettelnet_client" rel="Chapter" href="Nettelnet_client.html">
<link title="Netftp_data_endpoint" rel="Chapter" href="Netftp_data_endpoint.html">
<link title="Netftp_client" rel="Chapter" href="Netftp_client.html">
<link title="Nethttp_fs" rel="Chapter" href="Nethttp_fs.html">
<link title="Netftp_fs" rel="Chapter" href="Netftp_fs.html">
<link title="Netsmtp" rel="Chapter" href="Netsmtp.html">
<link title="Netpop" rel="Chapter" href="Netpop.html">
<link title="Netldap" rel="Chapter" href="Netldap.html">
<link title="Netclient_tut" rel="Chapter" href="Netclient_tut.html">
<link title="Netgss_bindings" rel="Chapter" href="Netgss_bindings.html">
<link title="Netgss" rel="Chapter" href="Netgss.html">
<link title="Nethttpd_types" rel="Chapter" href="Nethttpd_types.html">
<link title="Nethttpd_kernel" rel="Chapter" href="Nethttpd_kernel.html">
<link title="Nethttpd_reactor" rel="Chapter" href="Nethttpd_reactor.html">
<link title="Nethttpd_engine" rel="Chapter" href="Nethttpd_engine.html">
<link title="Nethttpd_services" rel="Chapter" href="Nethttpd_services.html">
<link title="Nethttpd_plex" rel="Chapter" href="Nethttpd_plex.html">
<link title="Nethttpd_util" rel="Chapter" href="Nethttpd_util.html">
<link title="Nethttpd_intro" rel="Chapter" href="Nethttpd_intro.html">
<link title="Netmcore" rel="Chapter" href="Netmcore.html">
<link title="Netmcore_camlbox" rel="Chapter" href="Netmcore_camlbox.html">
<link title="Netmcore_mempool" rel="Chapter" href="Netmcore_mempool.html">
<link title="Netmcore_heap" rel="Chapter" href="Netmcore_heap.html">
<link title="Netmcore_ref" rel="Chapter" href="Netmcore_ref.html">
<link title="Netmcore_array" rel="Chapter" href="Netmcore_array.html">
<link title="Netmcore_sem" rel="Chapter" href="Netmcore_sem.html">
<link title="Netmcore_mutex" rel="Chapter" href="Netmcore_mutex.html">
<link title="Netmcore_condition" rel="Chapter" href="Netmcore_condition.html">
<link title="Netmcore_queue" rel="Chapter" href="Netmcore_queue.html">
<link title="Netmcore_buffer" rel="Chapter" href="Netmcore_buffer.html">
<link title="Netmcore_matrix" rel="Chapter" href="Netmcore_matrix.html">
<link title="Netmcore_hashtbl" rel="Chapter" href="Netmcore_hashtbl.html">
<link title="Netmcore_process" rel="Chapter" href="Netmcore_process.html">
<link title="Netmcore_tut" rel="Chapter" href="Netmcore_tut.html">
<link title="Netmcore_basics" rel="Chapter" href="Netmcore_basics.html">
<link title="Netplex_types" rel="Chapter" href="Netplex_types.html">
<link title="Netplex_mp" rel="Chapter" href="Netplex_mp.html">
<link title="Netplex_mt" rel="Chapter" href="Netplex_mt.html">
<link title="Netplex_log" rel="Chapter" href="Netplex_log.html">
<link title="Netplex_controller" rel="Chapter" href="Netplex_controller.html">
<link title="Netplex_container" rel="Chapter" href="Netplex_container.html">
<link title="Netplex_sockserv" rel="Chapter" href="Netplex_sockserv.html">
<link title="Netplex_workload" rel="Chapter" href="Netplex_workload.html">
<link title="Netplex_main" rel="Chapter" href="Netplex_main.html">
<link title="Netplex_config" rel="Chapter" href="Netplex_config.html">
<link title="Netplex_kit" rel="Chapter" href="Netplex_kit.html">
<link title="Rpc_netplex" rel="Chapter" href="Rpc_netplex.html">
<link title="Netplex_cenv" rel="Chapter" href="Netplex_cenv.html">
<link title="Netplex_semaphore" rel="Chapter" href="Netplex_semaphore.html">
<link title="Netplex_sharedvar" rel="Chapter" href="Netplex_sharedvar.html">
<link title="Netplex_mutex" rel="Chapter" href="Netplex_mutex.html">
<link title="Netplex_encap" rel="Chapter" href="Netplex_encap.html">
<link title="Netplex_mbox" rel="Chapter" href="Netplex_mbox.html">
<link title="Netplex_internal" rel="Chapter" href="Netplex_internal.html">
<link title="Netplex_intro" rel="Chapter" href="Netplex_intro.html">
<link title="Netplex_advanced" rel="Chapter" href="Netplex_advanced.html">
<link title="Netplex_admin" rel="Chapter" href="Netplex_admin.html">
<link title="Netshm" rel="Chapter" href="Netshm.html">
<link title="Netshm_data" rel="Chapter" href="Netshm_data.html">
<link title="Netshm_hashtbl" rel="Chapter" href="Netshm_hashtbl.html">
<link title="Netshm_array" rel="Chapter" href="Netshm_array.html">
<link title="Netshm_intro" rel="Chapter" href="Netshm_intro.html">
<link title="Netstring_pcre" rel="Chapter" href="Netstring_pcre.html">
<link title="Netconversion" rel="Chapter" href="Netconversion.html">
<link title="Netchannels" rel="Chapter" href="Netchannels.html">
<link title="Netstream" rel="Chapter" href="Netstream.html">
<link title="Netmime_string" rel="Chapter" href="Netmime_string.html">
<link title="Netmime" rel="Chapter" href="Netmime.html">
<link title="Netsendmail" rel="Chapter" href="Netsendmail.html">
<link title="Neturl" rel="Chapter" href="Neturl.html">
<link title="Netaddress" rel="Chapter" href="Netaddress.html">
<link title="Netbuffer" rel="Chapter" href="Netbuffer.html">
<link title="Netmime_header" rel="Chapter" href="Netmime_header.html">
<link title="Netmime_channels" rel="Chapter" href="Netmime_channels.html">
<link title="Neturl_ldap" rel="Chapter" href="Neturl_ldap.html">
<link title="Netdate" rel="Chapter" href="Netdate.html">
<link title="Netencoding" rel="Chapter" href="Netencoding.html">
<link title="Netulex" rel="Chapter" href="Netulex.html">
<link title="Netaccel" rel="Chapter" href="Netaccel.html">
<link title="Netaccel_link" rel="Chapter" href="Netaccel_link.html">
<link title="Nethtml" rel="Chapter" href="Nethtml.html">
<link title="Netstring_str" rel="Chapter" href="Netstring_str.html">
<link title="Netmappings" rel="Chapter" href="Netmappings.html">
<link title="Netaux" rel="Chapter" href="Netaux.html">
<link title="Nethttp" rel="Chapter" href="Nethttp.html">
<link title="Netpagebuffer" rel="Chapter" href="Netpagebuffer.html">
<link title="Netfs" rel="Chapter" href="Netfs.html">
<link title="Netglob" rel="Chapter" href="Netglob.html">
<link title="Netauth" rel="Chapter" href="Netauth.html">
<link title="Netsockaddr" rel="Chapter" href="Netsockaddr.html">
<link title="Netnumber" rel="Chapter" href="Netnumber.html">
<link title="Netxdr_mstring" rel="Chapter" href="Netxdr_mstring.html">
<link title="Netxdr" rel="Chapter" href="Netxdr.html">
<link title="Netcompression" rel="Chapter" href="Netcompression.html">
<link title="Netunichar" rel="Chapter" href="Netunichar.html">
<link title="Netasn1" rel="Chapter" href="Netasn1.html">
<link title="Netasn1_encode" rel="Chapter" href="Netasn1_encode.html">
<link title="Netoid" rel="Chapter" href="Netoid.html">
<link title="Netstring_tstring" rel="Chapter" href="Netstring_tstring.html">
<link title="Netdn" rel="Chapter" href="Netdn.html">
<link title="Netx509" rel="Chapter" href="Netx509.html">
<link title="Netascii_armor" rel="Chapter" href="Netascii_armor.html">
<link title="Nettls_support" rel="Chapter" href="Nettls_support.html">
<link title="Netmech_scram" rel="Chapter" href="Netmech_scram.html">
<link title="Netmech_scram_gssapi" rel="Chapter" href="Netmech_scram_gssapi.html">
<link title="Netmech_scram_sasl" rel="Chapter" href="Netmech_scram_sasl.html">
<link title="Netmech_scram_http" rel="Chapter" href="Netmech_scram_http.html">
<link title="Netgssapi_support" rel="Chapter" href="Netgssapi_support.html">
<link title="Netgssapi_auth" rel="Chapter" href="Netgssapi_auth.html">
<link title="Netchannels_crypto" rel="Chapter" href="Netchannels_crypto.html">
<link title="Netx509_pubkey" rel="Chapter" href="Netx509_pubkey.html">
<link title="Netx509_pubkey_crypto" rel="Chapter" href="Netx509_pubkey_crypto.html">
<link title="Netsaslprep" rel="Chapter" href="Netsaslprep.html">
<link title="Netmech_plain_sasl" rel="Chapter" href="Netmech_plain_sasl.html">
<link title="Netmech_crammd5_sasl" rel="Chapter" href="Netmech_crammd5_sasl.html">
<link title="Netmech_digest_sasl" rel="Chapter" href="Netmech_digest_sasl.html">
<link title="Netmech_digest_http" rel="Chapter" href="Netmech_digest_http.html">
<link title="Netmech_krb5_sasl" rel="Chapter" href="Netmech_krb5_sasl.html">
<link title="Netmech_gs2_sasl" rel="Chapter" href="Netmech_gs2_sasl.html">
<link title="Netmech_spnego_http" rel="Chapter" href="Netmech_spnego_http.html">
<link title="Netchannels_tut" rel="Chapter" href="Netchannels_tut.html">
<link title="Netmime_tut" rel="Chapter" href="Netmime_tut.html">
<link title="Netsendmail_tut" rel="Chapter" href="Netsendmail_tut.html">
<link title="Netulex_tut" rel="Chapter" href="Netulex_tut.html">
<link title="Neturl_tut" rel="Chapter" href="Neturl_tut.html">
<link title="Netsys" rel="Chapter" href="Netsys.html">
<link title="Netsys_posix" rel="Chapter" href="Netsys_posix.html">
<link title="Netsys_pollset" rel="Chapter" href="Netsys_pollset.html">
<link title="Netlog" rel="Chapter" href="Netlog.html">
<link title="Netexn" rel="Chapter" href="Netexn.html">
<link title="Netsys_win32" rel="Chapter" href="Netsys_win32.html">
<link title="Netsys_pollset_posix" rel="Chapter" href="Netsys_pollset_posix.html">
<link title="Netsys_pollset_win32" rel="Chapter" href="Netsys_pollset_win32.html">
<link title="Netsys_pollset_generic" rel="Chapter" href="Netsys_pollset_generic.html">
<link title="Netsys_signal" rel="Chapter" href="Netsys_signal.html">
<link title="Netsys_oothr" rel="Chapter" href="Netsys_oothr.html">
<link title="Netsys_xdr" rel="Chapter" href="Netsys_xdr.html">
<link title="Netsys_rng" rel="Chapter" href="Netsys_rng.html">
<link title="Netsys_crypto_types" rel="Chapter" href="Netsys_crypto_types.html">
<link title="Netsys_types" rel="Chapter" href="Netsys_types.html">
<link title="Netsys_mem" rel="Chapter" href="Netsys_mem.html">
<link title="Netsys_tmp" rel="Chapter" href="Netsys_tmp.html">
<link title="Netsys_sem" rel="Chapter" href="Netsys_sem.html">
<link title="Netsys_pmanage" rel="Chapter" href="Netsys_pmanage.html">
<link title="Netsys_crypto" rel="Chapter" href="Netsys_crypto.html">
<link title="Netsys_tls" rel="Chapter" href="Netsys_tls.html">
<link title="Netsys_ciphers" rel="Chapter" href="Netsys_ciphers.html">
<link title="Netsys_digests" rel="Chapter" href="Netsys_digests.html">
<link title="Netsys_crypto_modes" rel="Chapter" href="Netsys_crypto_modes.html">
<link title="Netsys_gssapi" rel="Chapter" href="Netsys_gssapi.html">
<link title="Netsys_sasl_types" rel="Chapter" href="Netsys_sasl_types.html">
<link title="Netsys_sasl" rel="Chapter" href="Netsys_sasl.html">
<link title="Netsys_polypipe" rel="Chapter" href="Netsys_polypipe.html">
<link title="Netsys_polysocket" rel="Chapter" href="Netsys_polysocket.html">
<link title="Netsys_global" rel="Chapter" href="Netsys_global.html">
<link title="Nettls_gnutls_bindings" rel="Chapter" href="Nettls_gnutls_bindings.html">
<link title="Nettls_nettle_bindings" rel="Chapter" href="Nettls_nettle_bindings.html">
<link title="Nettls_gnutls" rel="Chapter" href="Nettls_gnutls.html">
<link title="Netunidata" rel="Chapter" href="Netunidata.html">
<link title="Netgzip" rel="Chapter" href="Netgzip.html">
<link title="Rpc_auth_local" rel="Chapter" href="Rpc_auth_local.html">
<link title="Rpc_xti_client" rel="Chapter" href="Rpc_xti_client.html">
<link title="Rpc" rel="Chapter" href="Rpc.html">
<link title="Rpc_program" rel="Chapter" href="Rpc_program.html">
<link title="Rpc_util" rel="Chapter" href="Rpc_util.html">
<link title="Rpc_portmapper_aux" rel="Chapter" href="Rpc_portmapper_aux.html">
<link title="Rpc_packer" rel="Chapter" href="Rpc_packer.html">
<link title="Rpc_transport" rel="Chapter" href="Rpc_transport.html">
<link title="Rpc_client" rel="Chapter" href="Rpc_client.html">
<link title="Rpc_simple_client" rel="Chapter" href="Rpc_simple_client.html">
<link title="Rpc_portmapper_clnt" rel="Chapter" href="Rpc_portmapper_clnt.html">
<link title="Rpc_portmapper" rel="Chapter" href="Rpc_portmapper.html">
<link title="Rpc_server" rel="Chapter" href="Rpc_server.html">
<link title="Rpc_auth_sys" rel="Chapter" href="Rpc_auth_sys.html">
<link title="Rpc_auth_gssapi" rel="Chapter" href="Rpc_auth_gssapi.html">
<link title="Rpc_proxy" rel="Chapter" href="Rpc_proxy.html">
<link title="Rpc_intro" rel="Chapter" href="Rpc_intro.html">
<link title="Rpc_mapping_ref" rel="Chapter" href="Rpc_mapping_ref.html">
<link title="Rpc_intro_gss" rel="Chapter" href="Rpc_intro_gss.html">
<link title="Shell_sys" rel="Chapter" href="Shell_sys.html">
<link title="Shell" rel="Chapter" href="Shell.html">
<link title="Shell_uq" rel="Chapter" href="Shell_uq.html">
<link title="Shell_fs" rel="Chapter" href="Shell_fs.html">
<link title="Shell_intro" rel="Chapter" href="Shell_intro.html">
<link title="Intro" rel="Chapter" href="Intro.html">
<link title="Platform" rel="Chapter" href="Platform.html">
<link title="Foreword" rel="Chapter" href="Foreword.html">
<link title="Ipv6" rel="Chapter" href="Ipv6.html">
<link title="Regexp" rel="Chapter" href="Regexp.html">
<link title="Tls" rel="Chapter" href="Tls.html">
<link title="Crypto" rel="Chapter" href="Crypto.html">
<link title="Authentication" rel="Chapter" href="Authentication.html">
<link title="Credentials" rel="Chapter" href="Credentials.html">
<link title="Gssapi" rel="Chapter" href="Gssapi.html">
<link title="Ocamlnet4" rel="Chapter" href="Ocamlnet4.html">
<link title="Get" rel="Chapter" href="Get.html"><title>Ocamlnet 4 Reference Manual : Netmcore_tut</title>
</head>
<body>
<div class="navbar"><a class="pre" href="Netmcore_process.html" title="Netmcore_process">Previous</a>
&nbsp;<a class="up" href="index.html" title="Index">Up</a>
&nbsp;<a class="post" href="Netmcore_basics.html" title="Netmcore_basics">Next</a>
</div>
<h1>Netmcore_tut</h1>
<div class="info-desc">
<h2 id="1_NetmulticoreTutorial">Netmulticore Tutorial</h2>
<p><b>Contents</b></p>

<ul>
<li><code class="code">Netmcore_tut.design</code></li>
<li><code class="code">Netmcore_tut.start_procs</code></li>
<li><code class="code">Netmcore_tut.camlboxes</code></li>
<li><code class="code">Netmcore_tut.mempools</code></li>
<li><code class="code">Netmcore_tut.sref</code></li>
<li><code class="code">Netmcore_tut.descriptors</code></li>
<li><code class="code">Netmcore_tut.mutation</code></li>
<li><code class="code">Netmcore_tut.sdata</code></li>
<li><code class="code">Netmcore_tut.sync</code></li>
<li><code class="code">Netmcore_tut.examples</code></li>
<li><code class="code">Netmcore_tut.impl</code></li>
<li><code class="code">Netmcore_tut.diffs</code></li>
<li><code class="code">Netmcore_tut.os</code></li>
</ul>
<p>This manual gives an overview of Netmulticore, which makes it possible to
manage subprocesses in order to speed up computations on multicore
CPUs. Netmulticore tries to overcome a limitation of OCaml's runtime,
namely that only one thread at a time can get the CPU. Because of this,
multi-threaded programs cannot make use of the additional power of
multicore CPUs.</p>

<p><div class="remark">
Readers are encouraged to first have a look at <a href="Netmcore_basics.html"><code class="code">Netmcore_basics</code></a>,
which is more fundamental and doesn't use unsafe language features.
</div></p>

<p><div class="remark">
<b>Since OCaml-4.01:</b> This OCaml version changed the semantics of the
built-in primitives <code class="code">caml_modify</code> and <code class="code">caml_initialize</code>. Essentially,
it is no longer possible to modify OCaml values residing outside the
regular OCaml heap. As we do this inside Netmulticore, this change affects
this library. Fortunately, there is a workaround on systems supporting
weak symbols (all ELF systems and OS X): Here, <code class="code">caml_modify</code> and
<code class="code">caml_initialize</code> are overridden by Netmulticore so that they are again
compatible. Note that this is a global modification of the runtime
system!</p>

<p>Future versions of Ocamlnet may solve this problem differently.
</div></p>

<p>The approach of Netmulticore is to spawn subprocesses acting in the
role of worker threads. Processes are separated from each other, and
hence there is normally no direct way for them to interact.
The help of the operating system is required here - a classic example
of IPC (interprocess communication) is the pipe, which creates a data
stream from one process to the other. Of course, we want even
closer interaction than this here. Another, rarely used IPC mechanism is
shared memory. This means that a block of RAM is allocated and mapped
into all processes. When one process mutates a RAM cell in a shared
block, the other processes immediately see this mutation, so that
there is no system call overhead for transporting data. Actually,
there is a bit of overhead in modern computers when this technique is
used, but it occurs only at the hardware level, and is very fast.</p>

<p>Netmulticore not only allocates a shared block of RAM, but also
manages it. Ideally, using the shared block would be as simple as
using normal, process-local memory. Unfortunately, this is not
possible, but the design of Netmulticore comes quite close
to this ideal. With Netmulticore a process can write normal Ocaml
values like strings, tuples or records to the shared block, and the
other processes can directly access this data as if it were in normal
RAM. Unfortunately, such writes cannot be done with direct
assignment; instead, one has to follow special programming
rules which ensure that the shared block remains in a consistent
state. Large parts of this tutorial explain how to accomplish this.</p>

<p>The list of features supported by Netmulticore:</p>

<ul>
<li>Creation of worker processes</li>
<li>Management of shared resources like file descriptors and shared memory
  blocks</li>
<li>Management of shared RAM pools that are mapped to the same address in
  all worker processes</li>
<li>Management of shared heaps, i.e. containers for Ocaml values</li>
<li>Garbage collection for shared heaps</li>
<li>Predefined data structures that live in shared heaps</li>
<li>Synchronization primitives (locks, condition variables)</li>
<li>Message passing between worker processes</li>
<li>Integration with Netplex</li>
</ul>
<p>Before you start looking at Netmulticore, I should also give a warning.
Netmulticore requires unsafe programming elements which can, if used
the wrong way, lead to corrupt results or even crash the program.
This is, to some degree, comparable to interfacing the Ocaml runtime
on the C level. Unfortunately, when it comes to parallel programming,
such unsafety is unavoidable if you want to get real speedups of
your program. Note that multi-threading suffers from the same problem,
and is in no way different.</p>

<h3 id="design">Design</h3>
<h4 id="3_Theprocesshierarchy">The process hierarchy</h4>
<p>When processes start other processes, the Unix system design defines a
relation between them: The started workers are children of the
process that requested the start. An immediate effect of this is that
only the requesting process can wait for the termination of the
children. There are more implications than this, but it should be
clear that we have to take the relations between processes into
account.</p>

<p>When a Netmulticore program is started, the implicitly created first
process has a special role. It is called the <i>master process</i>, and
it is normally not used for doing the real work. It merely acts
as a supervisor managing the workers. The tasks of the master process
include the management of the resource table, and of course watching
the lifetime of the workers. The master normally quickly starts the
first worker process, which in turn starts as many workers as needed
for accomplishing the computation.</p>

<p>Now, what does it mean to <i>start</i> processes? Unix only defines this
via the <code class="code">fork</code> system call. When doing a <code class="code">fork</code> the requesting process
is duplicated (i.e. RAM contents and many resources are copied, with
only a few exceptions), and the duplicate is registered as the new
child process.  Netmulticore requires that the relationships between
processes remain manageable, and because of this <i>it is always the
master process which actually starts the new children</i>. Of course,
the programmer can also call <a href="Netmcore_process.html#VALstart"><code class="code">Netmcore_process.start</code></a> from other
workers, but the Netmulticore machinery just sends this request to the
master, where it is really executed.</p>

<p>All in all this means: When a new worker is created, it is initialized
as a copy of the master process, no matter from where the
user code requests the creation.</p>

<h4 id="3_Resources">Resources</h4>
<p>Netmulticore manages not only processes but also other kinds of
resources. As of now the supported types are:</p>

<ul>
<li>Temporary files</li>
<li>Shared memory blocks</li>
<li>Shared memory blocks with the additional requirement that all workers
  map them at the same address</li>
<li>Named semaphores</li>
<li>Fork points, i.e. registered functions acting as worker bodies</li>
<li>Join points, i.e. the option of waiting for the completion of worker
  bodies</li>
</ul>
<p>All resources have a resource ID (<a href="Netmcore.html#TYPEres_id"><code class="code">Netmcore.res_id</code></a>) which is
effectively an integer. Given only the resource ID, every worker can
request the details of the resource (e.g. the name of the
temporary file). The master process also keeps track of which
worker needs which resource. If a resource is not needed anymore by
any worker, it is automatically deleted.</p>

<p>Since Ocamlnet-3.6, the deletion procedure has been substantially
improved. Now, a list of the live resources is not only kept in
memory, but also written to a file "netplex.pmanage". This makes it
possible to delete the resources after the program has crashed or
terminated abnormally. This is automatically done when the program is
restarted, or by the user running the <code class="code">netplex-admin</code> utility with
option <code class="code">-unlink</code>. The background for this is that the mentioned
resources have kernel persistence, and continue to exist after the
program has finished.</p>

<h4 id="3_Memorypools">Memory pools</h4>
<p>For setting up shared heaps as explained below, we especially need
shared memory blocks that are mapped at the same address by all worker
processes. This kind of block is needed for memory pools as
implemented by <a href="Netmcore_mempool.html"><code class="code">Netmcore_mempool</code></a>. There is, unfortunately, only one
reliable method of doing so: The master process allocates the block
and maps it into its own address space, and the workers get access to
the block by <i>inheriting</i> the mapping. This just means that the
<code class="code">fork</code> operation leaves the mapping in place, and because all workers
are created in the same way, all workers end up seeing the shared
block at the same address. The details of this procedure are hidden in
Netmulticore, but the user should know the limitations of this method.</p>

<p>A process cannot inherit a posteriori - when the process is already
created, and a new shared block is set up, there is no safe way for
the process to get access to the block at the right address. Because
of this, there are two important programming rules:</p>

<ul>
<li>A newly created shared block (with the "same address" requirement)
  is only visible in worker processes that are created after the block</li>
<li>This even means that the requesting process does not get access
  to the shared block it just created if the requesting process is a worker</li>
</ul>
<p>The safest and easiest way to deal with this is to create the shared
block in the master before starting any worker.</p>

<p>Another limitation of the inheritance method: A shared block cannot be
enlarged later. For the programmer this means that the initially
created block must be large enough for the whole lifetime of the
program.</p>

<p>The management of the shared blocks is now done by two modules:
<a href="Netmcore_mempool.html"><code class="code">Netmcore_mempool</code></a> is responsible in the first place, and can hand
out pieces of the shared block to users. <a href="Netmcore_heap.html"><code class="code">Netmcore_heap</code></a> is such a
user, and manages the pieces it gets as a heap of Ocaml values.
Heaps can be enlarged if necessary, i.e. more pieces can be obtained
from the big shared block that is managed by <a href="Netmcore_mempool.html"><code class="code">Netmcore_mempool</code></a>.
Of course, it is also possible to have several heaps getting their
memory pieces from a single <a href="Netmcore_mempool.html"><code class="code">Netmcore_mempool</code></a>.</p>

<h4 id="3_Sharedheaps">Shared heaps</h4>
<p>So what is a heap, or more precisely a <i>shared heap</i>? It is just a memory
area, and it is possible to put Ocaml values into it. The values are
connected by pointers that reach from one value to the next.  This is
not very different from what is done by the Ocaml runtime and the
normal Ocaml heap, only that the values are now exposed to a multitude
of processes. We'll later see how we can create such a heap, and
how it can be filled with Ocaml values. At this point, let me only
mention that it requires discipline from the programmer to fill
and access a shared heap in the right way. If something goes wrong,
the punishment is an illegal memory access, normally leading to a
segmentation fault.</p>

<p>If a shared heap fills up, <a href="Netmcore_heap.html"><code class="code">Netmcore_heap</code></a> starts a special garbage
collection run over it to reclaim memory that is no longer referenced.
This garbage collector works very much like the "major GC" that is
built into the Ocaml runtime.</p>

<p>As it is a bit complicated to manage shared heaps directly, there are
a number of pre-defined data structures over shared heaps that can be
directly used. Among these structures are buffers, queues, arrays,
and hash tables.</p>

<h4 id="3_Camlboxes">Camlboxes</h4>
<p>For fast notification between processes, Netmulticore uses Camlboxes.
This data structure predates Netmulticore and exists independently of
it, but is specially supported.</p>

<p>Camlboxes are like mail boxes where an arbitrary number of senders can
send messages to a single receiver. The messages are normal Ocaml
values (like strings, records, or variants), so there is no
marshalling involved. Camlboxes also use shared memory as transport
medium, but here it is not required that the shared block
be mapped at the same address. Because of this, it is also possible
to connect processes with Camlboxes that are not related to each
other.</p>

<p>Camlboxes are optimized for speed and maximum parallelism. The data
model is the following: The receiver creates the box which consists of
a fixed number of slots, and each slot has a fixed maximum size.  Each
sender can map the box into its own address space, and fill any free
slot. Because there is no strict ordering of the messages, the senders
can avoid lock contention (the senders simply use
different slots and avoid stepping on each other's toes). The receiver
can look at the messages, copy messages out of the box, and
delete messages. The messages are not strings or marshalled data,
but really Ocaml values, relocated to the receiver's address space.</p>

<h4 id="3_Synchronization">Synchronization</h4>
<p>Netmulticore provides these synchronization primitives:</p>

<ul>
<li>Mutexes</li>
<li>Semaphores</li>
<li>Condition variables</li>
</ul>
<p>These primitives can be used in conjunction with shared heaps, and allow
the programmer to define additional synchronization requirements that
are not satisfied by the data structure in the heap.</p>

<p>For example, consider a shared hash table (<a href="Netmcore_hashtbl.html"><code class="code">Netmcore_hashtbl</code></a>). This
data structure already includes everything to protect the internal
representation from concurrent accesses, but no more than that. For example,
a <a href="Netmcore_hashtbl.html#VALadd"><code class="code">Netmcore_hashtbl.add</code></a> adds a value to the table, and it is safe
to call this operation from several processes at the same time.
This is not enough, however, to protect whole read/modify/update
cycles, where the whole cycle needs to run uninterrupted to avoid
data corruption. The user can define these additional synchronization
requirements with the mentioned primitives.</p>

<p>As this is a quite frequent programming case, many shared data
structures already contain a special area called the <i>header</i> where
the user can put these primitives.</p>

<h3 id="start_procs">How to start processes</h3>
<p>For starting processes, there are actually two APIs: the slightly
more generic one in <a href="Netmcore.html"><code class="code">Netmcore</code></a>, and the strictly typed one in
<a href="Netmcore_process.html"><code class="code">Netmcore_process</code></a>. They are quite similar, and I'm only explaining
the latter.</p>

<p>Before a process can be started, it needs to be defined:</p>

<pre class="codepre"><code class="code">let process_fork, process_join =
  Netmcore_process.def_process process_body
</code></pre>
<p>The definition via <a href="Netmcore_process.html#VALdef_process"><code class="code">Netmcore_process.def_process</code></a> takes a function
<code class="code">process_body</code>, which is simply a function of type <code class="code">'a -&gt; 'b</code>, and
returns a fork point <code class="code">process_fork</code> and a join point <code class="code">process_join</code>.
This definition must happen in the master process. Remember that it is
also always the master process that is forked when a new worker
process is created. This is consistent with this programming rule:
<code class="code">process_body</code> will be called in the context of a fresh copy of the
master, so it also needs to be defined in the context of the master.</p>

<p>Practically, the definition is usually done when a top-level module
is initialized, and <code class="code">process_body</code> is a normal top-level function.</p>

<p>The fork point <code class="code">process_fork</code> has type <code class="code">'a fork_point</code> where <code class="code">'a</code> is
the type of the argument of <code class="code">process_body</code>. Imagine a fork point as a
way to fork the master process, where a value of type <code class="code">'a</code> is passed
down, and <code class="code">process_body</code> is finally called with this value in the new
child. At the other end, the join point <code class="code">process_join</code>
(of type <code class="code">'b join_point</code>) is a way to wait for the completion of
<code class="code">process_body</code> and to get the result of type <code class="code">'b</code>.</p>

<p>Of course, after defining a process, one can start it as many times
as needed. Here is how to do it:</p>

<pre class="codepre"><code class="code">let pid = Netmcore_process.start process_fork arg
</code></pre>
<p>This does the following:</p>

<ul>
<li>The master process is notified that a new process is needed
  which will run a process as defined by <code class="code">process_fork</code></li>
<li>Once the process is running, the argument <code class="code">arg</code> is marshalled
  and sent to the new process</li>
<li>Execution now returns to the caller of <code class="code">start</code> with a process ID <code class="code">pid</code></li>
<li>The new process runs <code class="code">process_body arg'</code> where <code class="code">arg'</code> is the
  unmarshalled copy of <code class="code">arg</code></li>
</ul>
<p>The process IDs are Netmulticore's own IDs, and are guaranteed to be
unique (unlike the process IDs the operating system uses, which
can wrap around).</p>

<p>When <code class="code">process_body</code> finishes, the function result is passed back to
the master where it is stored until the join is done. The worker
process is terminated. One can get the result value by doing:</p>

<pre class="codepre"><code class="code">let r_opt = Netmcore_process.join process_join pid
</code></pre>
<p>This waits until the result value is available, and returns it as
<code class="code">Some r</code> (where <code class="code">r</code> is of type <code class="code">'b</code>). If no result is available because
the process terminated with an exception or otherwise abnormally, <code class="code">None</code> is
returned. Note that the result <code class="code">r</code> is also marshalled.</p>
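
<p>A small helper (hypothetical, not part of the API) can turn a failed join into
an exception, which is often more convenient than handling the option everywhere:</p>

<pre class="codepre"><code class="code">let join_exn join_point pid =
  match Netmcore_process.join join_point pid with
    | Some r -&gt; r
    | None -&gt; failwith "worker terminated abnormally"
</code></pre>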

<p>If there is no interest in getting results at all, one can also do</p>

<pre class="codepre"><code class="code">Netmcore_process.release_join_point process_join
</code></pre>
<p>which prevents results from being stored in the master until they are
picked up.</p>

<p>Before giving a complete example, let's look at how to initialize
Netmulticore. When the master is set up (processes are defined etc.)
one can start the first worker. At this point, the whole management
machinery needs to be started, too. This is normally done by a
function call like</p>

<pre class="codepre"><code class="code">Netmcore.startup
   ~socket_directory:"some_dir" 
   ~first_process:(fun () -&gt; Netmcore_process.start process_fork arg)
   ()
</code></pre>
<p>The <code class="code">socket_directory</code> is needed by the machinery to store runtime
files like Unix domain sockets (inherited from Netplex). If in doubt, set
it to something like "/tmp/dir", but note that each running instance
of the program needs its own socket directory. (There is a chance that
<code class="code">socket_directory</code> becomes optional in future releases.)</p>

<p>Now a complete example: The master starts process X which in turn
starts process Y. The processes do not do much, but let's see:</p>

<pre class="codepre"><code class="code">let c = ref 0

let body_Y k =
  k + 1 + !c

let fork_Y, join_Y =
  Netmcore_process.def_process body_Y

let body_X k =
  c := 1;
  let pid_Y = Netmcore_process.start fork_Y k in
  let j =
     match Netmcore_process.join join_Y pid_Y with
      | Some j -&gt; j
      | None -&gt; failwith "Error in process Y" in
  Printf.printf "Result: %d\n%!" j

let fork_X, join_X =
  Netmcore_process.def_process body_X

Netmcore.startup
   ~socket_directory:"some_dir" 
   ~first_process:(fun () -&gt; Netmcore_process.start fork_X 1)
   ()
</code></pre>
<p>The result is of course 2 (and not 3), because the assignment to <code class="code">c</code>
in process <code class="code">X</code> has no effect on process <code class="code">Y</code>. Remember that <code class="code">Y</code> is forked from
the master process, where <code class="code">c</code> still has the value 0.</p>
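
<p>As a hedged variation of the example: if process Y is supposed to see the value,
it has to be passed as part of the argument (the pair type below is just for
illustration):</p>

<pre class="codepre"><code class="code">let body_Y (k, c) =
  k + 1 + c

let fork_Y, join_Y =
  Netmcore_process.def_process body_Y

let body_X k =
  let c = 1 in
  let pid_Y = Netmcore_process.start fork_Y (k, c) in
  match Netmcore_process.join join_Y pid_Y with
    | Some j -&gt; Printf.printf "Result: %d\n%!" j   (* now prints 3 *)
    | None -&gt; failwith "Error in process Y"
</code></pre>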

<p>A final word on marshalling before going on. Arguments and results of
processes are transmitted as strings that have been created with the
functions from the <code class="code">Marshal</code> module of the standard library. There are
a number of constraints one should be aware of:</p>

<ul>
<li>Functions, objects, and lazy values are not supported, and will
  cause exceptions. For a few other types this is also the case
  (e.g. <code class="code">in_channel</code>).</li>
<li>Unfortunately, there are also types of values that do not trigger
  exceptions, but produce nonsense. For example, <code class="code">Unix.file_descr</code> is such
  a case. This unfortunately also holds for many Netmulticore types
  such as heaps. For these types the marshalling seems to work,
  but the restored values are actually unusable because they have
  lost their connection with the underlying resources of the operating
  system (see the sketch after this list).</li>
</ul>
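<p>A minimal sketch of the workaround for descriptors (function and file names are
made up for illustration): instead of marshalling a <code class="code">Unix.file_descr</code>, pass
the file name and reopen the file in the worker:</p>

<pre class="codepre"><code class="code">let body fname =
  (* reopen the file here instead of passing a marshalled descriptor *)
  let fd = Unix.openfile fname [Unix.O_RDONLY] 0 in
  let buf = Bytes.create 100 in
  let n = Unix.read fd buf 0 100 in
  Unix.close fd;
  n

let fork, join =
  Netmcore_process.def_process body
</code></pre>
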
<h3 id="camlboxes">How to use Camlboxes for passing messages</h3>
<p>Camlboxes must always be created by the (single) receiver of the
messages. The address of the box is then made available to the senders
which can then start the transmission of messages.</p>

<p>The number of message slots is fixed, and cannot be changed later.
Also, the maximum size of the messages must be specified in advance,
in bytes. Of course, this means one cannot use Camlboxes for
arbitrarily large messages, but this is not what they are designed for.
Camlboxes are good for small notifications that need to be quickly
sent without risking any lock contention. (If a big payload needs
to be included, a good workaround is to include only a reference
to data in a shared heap.)</p>

<p>Let's just give an example: Process X creates a box, starts process Y,
and Y sends a message to the box of X.</p>

<pre class="codepre"><code class="code">let body_Y box_res =
  let (box_sender : string Netcamlbox.camlbox_sender) = 
     Netmcore_camlbox.lookup_camlbox_sender box_res in
  Netcamlbox.camlbox_send box_sender "Hello world"

let fork_Y, join_Y =
  Netmcore_process.def_process body_Y

let () =
  Netmcore_process.release_join_point join_Y

let body_X () =
  let ((box : string Netcamlbox.camlbox),box_res) = 
     Netmcore_camlbox.create_camlbox "example" 1 100 in
  let pid_Y = Netmcore_process.start fork_Y box_res in
  ( match Netcamlbox.camlbox_wait box with
     | [ slot ] -&gt;
         let m = Netcamlbox.camlbox_get_copy box slot in
         Netcamlbox.camlbox_delete box slot;
         printf "Message: %s\n%!" m
     | _ -&gt;
         (* in _this_ example not possible *)
         assert false
  )

let fork_X, join_X =
  Netmcore_process.def_process body_X

Netmcore.startup
   ~socket_directory:"some_dir" 
   ~first_process:(fun () -&gt; Netmcore_process.start fork_X ())
   ()
</code></pre>
<p>Let's go through it:</p>

<ul>
<li>The box is created at the beginning of <code class="code">body_X</code>, by a call of
  <a href="Netmcore_camlbox.html#VALcreate_camlbox"><code class="code">Netmcore_camlbox.create_camlbox</code></a>. Generally, <a href="Netmcore_camlbox.html"><code class="code">Netmcore_camlbox</code></a>
  contains Netmulticore-specific extensions of the Camlbox abstraction
  which is primarily defined in <a href="Netcamlbox.html"><code class="code">Netcamlbox</code></a>.
  The function <a href="Netmcore_camlbox.html#VALcreate_camlbox"><code class="code">Netmcore_camlbox.create_camlbox</code></a> not only creates the box,
  but also registers it as a resource in the master process. The
  string "example" is used for naming the shared memory block backing
  the box. This box has a capacity of one message which may be up to
  100 bytes long. There are two return values, <code class="code">box</code> and <code class="code">box_res</code>.
  The value <code class="code">box</code> is the way to access the box for <i>receiving</i>
  messages. The value <code class="code">box_res</code> is the resource ID.</li>
<li>We pass <code class="code">box_res</code> when we start <code class="code">body_Y</code>. Resource IDs can be
  marshalled (whereas camlboxes cannot).</li>
<li>The helper function <a href="Netmcore_camlbox.html#VALlookup_camlbox_sender"><code class="code">Netmcore_camlbox.lookup_camlbox_sender</code></a> is
  called at the beginning of <code class="code">body_Y</code> to look up the sender interface
  for the box identified by <code class="code">box_res</code>. The sender view of the box
  is <code class="code">box_sender</code>.</li>
<li>Now the string "Hello world" is sent by invoking 
  <a href="Netcamlbox.html#VALcamlbox_send"><code class="code">Netcamlbox.camlbox_send</code></a>.</li>
<li>In the meantime, process X continued running in parallel, and
  by invoking <a href="Netcamlbox.html#VALcamlbox_wait"><code class="code">Netcamlbox.camlbox_wait</code></a> the execution is suspended
  until at least one message is in the box. The function returns the
  slots containing messages.</li>
<li>We now look into the single slot where we expect a message.
  With <a href="Netcamlbox.html#VALcamlbox_get_copy"><code class="code">Netcamlbox.camlbox_get_copy</code></a> we can get a copy of the message
  in the slot. Note that there is also <a href="Netcamlbox.html#VALcamlbox_get"><code class="code">Netcamlbox.camlbox_get</code></a> which
  does not make a copy but returns a reference to the message as it
  is in the box. This is unsafe because this reference becomes invalid
  when the message is deleted.</li>
<li>Finally we delete the message in the slot with <a href="Netcamlbox.html#VALcamlbox_delete"><code class="code">Netcamlbox.camlbox_delete</code></a>.
  The slot is now again free, and can hold the next message.</li>
</ul>
<p>You may wonder why I added the type annotations for <code class="code">box</code> and
<code class="code">box_sender</code>.  If you look at the signatures of the Camlbox modules,
you'll see that there is nothing enforcing that the types of <code class="code">box</code> and
<code class="code">box_sender</code> fit together. The type parameter <code class="code">string</code> is lost
at the moment the resource ID is used for identifying the box.</p>
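
<p>A common pattern (just a sketch, not mandated by the library) is therefore to fix
the message type once and annotate both ends with it, so that at least both sides
agree on the same type:</p>

<pre class="codepre"><code class="code">type msg = Notify of string | Stop

(* receiver side: *)
let ((box : msg Netcamlbox.camlbox), box_res) =
  Netmcore_camlbox.create_camlbox "example" 1 100

(* sender side, in another worker that got box_res passed down: *)
let (box_sender : msg Netcamlbox.camlbox_sender) =
  Netmcore_camlbox.lookup_camlbox_sender box_res
</code></pre>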

<p>When sending a message, it is always copied into the box. For doing
this a special "value copier" is used. This copier traverses deeply
through the value representation and copies each piece into the box.
This is a bit like marshalling, only that it does not result in
a string but in a copy of the original value (and the copy is placed into
a reserved region in the shared memory block of the Camlbox). This
copier also has restrictions on which types of values can be handled,
very much like marshalling.</p>

<h3 id="mempools">How to create and use memory pools</h3>
<p>Now back to the core of Netmulticore. For the remaining data
structures the shared memory must be managed as a memory pool. The
module <a href="Netmcore_mempool.html"><code class="code">Netmcore_mempool</code></a> accomplishes this.</p>

<p>As explained above, there are certain restrictions on which processes
have access to memory pools. For most applications it is best
to simply create the pool in the master process directly after
program startup and before launching any worker process. This avoids
all restrictions, but the size of the pool needs to be set at this
early point of execution (remember that pools also cannot be
enlarged).</p>

<p>It is a good question how big a pool should be. Generally, the pool
should be a small factor larger than the minimum amount of RAM
needed for the data in the pool. I cannot give really good recommendations, but
factors between 1.5 and 3.0 seem to be good choices. If the pool
size is chosen too tightly, the garbage collector will run often and
slow down the program.</p>
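
<p>For example (numbers made up for illustration): if the live shared data is expected
to peak at around 100 MB, a pool in the suggested range would be:</p>

<pre class="codepre"><code class="code">let pool_size = 256 * 1024 * 1024    (* roughly 2.5 times the expected 100 MB *)
let pool = Netmcore_mempool.create_mempool pool_size
</code></pre>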

<p>Another strategy for the pool size is to make it always large,
e.g. 25% of the available system memory. The idea here is that when
parts of the pool remain unused throughout the lifetime of the
program, they <i>actually</i> will also not consume RAM. Operating
systems typically distinguish between RAM that is reserved by a memory
allocation and RAM that is actually filled with data.  Only the latter
type consumes real RAM, whereas the first type is only taken into
account for bookkeeping. This means the pool memory is reserved for
the case it is needed at a certain point, but RAM is not wasted if
not.</p>

<p>Now, the pool is simply created with</p>

<pre class="codepre"><code class="code">let pool = Netmcore_mempool.create_mempool size_in_bytes
</code></pre>
<p>The value <code class="code">pool</code> is again a resource ID, and it is no problem to
marshal this ID. The <code class="code">pool</code> ID is required when creating shared heaps
and for a number of other operations, so it is a good idea
to pass it down to all worker processes.</p>

<p>Pools are backed by shared memory blocks, and these blocks have
kernel persistence (usually they even appear as files, but this
depends on the operating system). This means they exist until
explicitly deleted (like files). To do so, just call</p>

<pre class="codepre"><code class="code">Netmcore.release pool
</code></pre>
<p>after the <a href="Netmcore.html#VALstartup"><code class="code">Netmcore.startup</code></a> function returns to the caller, and
before the program is finally terminated.</p>

<p>As pool memory is inherited by worker processes (as explained in the
section about "Design"), one has to enable the inheritance for
each started process, e.g.</p>

<pre class="codepre"><code class="code">let pid = 
  Netmcore_process.start 
    ~inherit_resources:`All
    process_fork arg
</code></pre>
<p>Otherwise the pool is not accessible by the started worker process.
(I'm thinking about making this the default, but haven't come to a
conclusion yet.)</p>
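
<p>Putting the pieces together, a minimal sketch of the typical pool lifecycle could
look as follows (the pool size, <code class="code">worker_body</code> and the socket directory are
only examples):</p>

<pre class="codepre"><code class="code">(* create the pool in the master, before any worker is started *)
let pool = Netmcore_mempool.create_mempool (64 * 1024 * 1024)

let worker_body pool =
  (* shared data structures (refs, hash tables, ...) would be created
     over the pool here; see the following sections *)
  ignore (pool : Netmcore.res_id)

let worker_fork, worker_join =
  Netmcore_process.def_process worker_body

let () =
  Netmcore.startup
    ~socket_directory:"some_dir"
    ~first_process:(fun () -&gt;
       Netmcore_process.start ~inherit_resources:`All worker_fork pool)
    ();
  (* the shared memory backing the pool has kernel persistence,
     so delete it explicitly when everything is done *)
  Netmcore.release pool
</code></pre>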

<p>If system memory is very tight, you will sometimes see bus errors when
using pools (signal SIGBUS is sent to one of the processes, typically
the signal number is 7). This happens when allocated shared memory is
actually not available when it is used for the first time. (I'm still
looking for ways to get nicer reactions.)</p>

<h3 id="sref">Shared <code class="code">ref</code>-type variables</h3>
<p>Let's now look at a data structure that lives in a shared heap.
Note that there is also a direct interface to shared heaps, but
for the sake of explaining the concept, it is easier to first
look at a concrete instance of a heap.</p>

<p>The module <a href="Netmcore_ref.html"><code class="code">Netmcore_ref</code></a> provides a mutable reference for a single
value residing in the shared heap. The reference is comparable to the
<code class="code">ref</code> type provided by the standard library, and it is possible to
dereference it, and to assign new values to it. The difference is,
however, that the reference and the referenced value reside completely
in shared memory, and are accessible by all workers.</p>

<p>The reference is created by something like</p>

<pre class="codepre"><code class="code">let s = Netmcore_ref.sref pool initial_value
</code></pre>
<p>where <code class="code">pool</code> is the resource ID of a pool (see above), and
<code class="code">initial_value</code> is the value to assign to the reference initially.
This is comparable to</p>

<pre class="codepre"><code class="code">let r = ref initial_value
</code></pre>
<p>for normal references. There is an important difference, however.
The referenced value must completely reside in shared memory, and
in order to achieve this, the <code class="code">sref</code> function <i>copies</i> the 
<code class="code">initial_value</code> to it. (The same copy mechanism is used that also
puts messages into Camlboxes.)</p>

<p>After running <code class="code">sref</code>, a new shared heap has been created which is
initialized as a reference. You can get a direct handle for the
heap structure by calling <a href="Netmcore_ref.html#VALheap"><code class="code">Netmcore_ref.heap</code></a>.</p>

<p>Also note that you have to call <code class="code">sref</code> from a worker process. It is
not possible to do this from the master process. This also applies
to almost any other heap-related function.</p>

<p>It is possible to assign a new value to <code class="code">s</code>:</p>

<pre class="codepre"><code class="code">Netmcore_ref.assign s new_value
</code></pre>
<p>This also <i>copies</i> the <code class="code">new_value</code> to the heap, and changes the
reference.</p>

<p>You may wonder what happens when you assign new values over and over
again. There is no mechanism for deleting the old values immediately.
Instead, the values accumulate over time in the heap, and when a
certain threshold is reached, a garbage collection run is started.
This run checks for values (or value parts) that became unreachable,
and reclaims the memory used for them.</p>

<p>This garbage collector (GC) is not the normal garbage collector that
cleans the process-local heap managed by the Ocaml runtime. It is a
special GC designed for shared heaps. Unfortunately, this special GC
is not as automatic as the GC built into the Ocaml runtime, as we'll
see.</p>

<p>Back to our shared references. There are three functions for getting
the value pointed to by a reference:</p>

<pre class="codepre"><code class="code">let v1 = Netmcore_ref.deref_ro s

let v2 = Netmcore_ref.deref_c s

Netmcore_ref.deref_p s (fun v3 -&gt; ...)
</code></pre>
<p>The first function, <code class="code">deref_ro</code>, is the fastest but also the most unsafe one. It just
returns the current value of the reference as-is, and this value is of
course stored in the shared heap. The problem is, however, that
further assignments to the variable can invalidate this value when a
GC run occurs. If this happens, <code class="code">v1</code> becomes invalid because the
memory holding the representation for <code class="code">v1</code> is overwritten with
something else, and any accesses to <code class="code">v1</code> or its parts may crash the
program! For this reason, the <code class="code">deref_ro</code> function must only be used if it
is ensured by other means (e.g. by a lock) that the variable is not
assigned while <code class="code">v1</code> is being accessed. The suffix "_ro" is
meant as a reminder that the function is only safe in read-only
contexts. (N.B. The GC could theoretically also check for values that
are only alive because of references from process-local
memory. Unfortunately, this would complicate everything dramatically -
the local GC and the shared GC would need some synchronization, and
slow or misbehaving local GC's could delay shared GC runs almost
indefinitely.)</p>

<p>The second function, <code class="code">deref_c</code>, does not have this problem because it
always returns a copy of the referenced value (suffix "_c" = "copy").
It is, of course, a lot slower because of this, and the copy semantics
may not always be the right design for an application.</p>
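<p>Here is a tiny sketch of <code class="code">assign</code> and <code class="code">deref_c</code> working together
(to be run in a worker process; <code class="code">pool</code> is the pool ID from above, and
<code class="code">counter</code> is just an illustrative name):</p>

<pre class="codepre"><code class="code">let counter = Netmcore_ref.sref pool 0 in     (* shared int reference *)
Netmcore_ref.assign counter 42;               (* 42 is copied into the heap *)
let x = Netmcore_ref.deref_c counter in       (* a process-local copy of the value *)
Printf.printf "x=%d\n" x                      (* prints: x=42 *)
</code></pre>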

<p>The third function, <code class="code">deref_p</code>, is a clever alternative. As you can
see, <code class="code">deref_p</code> does not <i>return</i> the value, but it runs an
argument function with the current value. The trick is now that
<code class="code">v3</code> is specially protected while this function is running. This
protection does not prevent assignments, but it prevents the
GC from deleting the value <code class="code">v3</code> if a GC run occurs. This type of protection
is also called <i>pinning</i>, and the suffix "_p" is meant to remind
of this.</p>

<p>When using <code class="code">deref_p</code>, one should still be careful and think about
whether accesses to <code class="code">v3</code> (or parts of <code class="code">v3</code>) can occur after the
pinning protection has ended. This must be ruled out by all means!</p>
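<p>To illustrate the point, here is a small sketch (using <code class="code">s</code> from above):
the first binding is wrong because the dereferenced value escapes the pinned
region, while the second one takes a copy that may safely outlive the call:</p>

<pre class="codepre"><code class="code">(* WRONG: v3 escapes deref_p and outlives the pinning protection *)
let v_unsafe = Netmcore_ref.deref_p s (fun v3 -&gt; v3)

(* Safe alternative: take a process-local copy instead *)
let v_local = Netmcore_ref.deref_c s
</code></pre>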

<p>Let's look at another thing the programmer must not do. Imagine
some inner part of the value is mutable, e.g.</p>

<pre class="codepre"><code class="code">type t = { mutable data : string }

let s = Netmcore_ref.sref pool { data = "initial string" }
</code></pre>
<p>The question is now: Is it allowed to assign to the <code class="code">data</code>
component, as in</p>

<pre class="codepre"><code class="code">(Netmcore_ref.deref_ro s).data &lt;- "new string"   (* WARNING - PROBLEMATIC CODE *)
</code></pre>
<p>The meaning of this assignment is that the <code class="code">data</code> component is
overwritten with a pointer to a string residing in process-local
memory. Imagine now what happens if a different worker process
accesses the <code class="code">data</code> component. The address of this pointer is now
meaningless, because the string does not exist in context of the other
process at this address. If we access it nevertheless, we risk getting
random data or even segmentation faults.</p>

<p>I'll explain below how to fix this assignment. It is possible, if
done in the right way.</p>

<p>Finally, here is a complete example using shared references:</p>

<pre class="codepre"><code class="code">let body_Y (pool, c_descr, k) =
  let c = Netmcore_ref.sref_of_descr pool c_descr in
  let c_value = Netmcore_ref.deref_ro c in
  k + 1 + c_value

let fork_Y, join_Y =
  Netmcore_process.def_process body_Y

let body_X (pool,k) =
  let c = Netmcore_ref.sref pool 1 in
  let c_descr = Netmcore_ref.descr_of_sref c in
  let pid_Y = 
    Netmcore_process.start ~inherit_resources:`All fork_Y (pool, c_descr, k) in
  let j =
     match Netmcore_process.join join_Y pid_Y with
      | Some j -&gt; j
      | None -&gt; failwith "Error in process Y" in
  Printf.printf "Result: %d\n%!" j

let fork_X, join_X =
  Netmcore_process.def_process body_X

let () = 
  let pool = Netmcore_mempool.create_mempool (1024 * 1024) in

  Netmcore.startup
     ~socket_directory:"some_dir" 
     ~first_process:
        (fun () -&gt; 
           Netmcore_process.start ~inherit_resources:`All fork_X (pool,1))
     ();

  Netmcore.release pool
</code></pre>
<p>Compare this with the example I gave when explaining how to start
processes. The variable <code class="code">c</code> is now a shared reference, and because of
this, process Y can access the contents of <code class="code">c</code>. The result is now 3.</p>

<p>This example already uses a feature that is only explained in
the next section: descriptors. For some reason (we'll see why) it is not
possible to marshal shared heaps, and thus you cannot call <code class="code">body_Y</code>
with <code class="code">c</code> as argument. The workaround is to create a descriptor for
<code class="code">c</code>, use the descriptor for marshalling, and look up the original
heap in the called function.</p>

<h3 id="descriptors">Descriptors</h3>
<p>Shared heaps are very special objects - in particular, the heaps
reside in shared memory at a certain address, and this address also
appears in the internal data structures the heaps use to manage their
memory space.</p>

<p>Imagine what happens when you do not respect this special nature of
shared heaps, and create a copy nevertheless (e.g. by marshalling the
heap, or by putting the heap into another heap). At first glance,
you will see that you can actually create the copy (it's a valid Ocaml
value), but when you try to use it, the program will crash. What has
happened?</p>

<p>The problem is that the internal data structure of the copied heap
still contains addresses that are only valid for the original heap, but
are meaningless for the copy. When these addresses are followed,
invalid memory accesses occur, and the program crashes.</p>

<p>So, a very important programming rule: Never copy shared heaps!
Unfortunately, this really requires discipline, as there are many
mechanisms in Netmulticore where copies are created automatically
(just page back and look how often I told you that a copy is created
here and there); for example, when starting worker processes the
arguments are copied over to the newly created process.</p>

<p>How to work around? First let's think about what we really want to
have when we e.g. start a worker process and pass a shared heap to it.
Of course, we do not want to create a copy, but rather we want to
make the <i>same</i> shared heap accessible to the worker. We want
call by reference!</p>

<p>Descriptors are the solution. A descriptor of a shared heap is just
a marshallable reference to the heap. For each shared data structure
there is a special descriptor type, and it is possible to get the
descriptor for a heap, and to look up the heap by descriptor. For
example, shared references define this as (in <a href="Netmcore_ref.html"><code class="code">Netmcore_ref</code></a>):</p>

<pre class="codepre"><code class="code">type 't sref_descr

val descr_of_sref : 't sref -&gt; 't sref_descr
val sref_of_descr : Netmcore.res_id -&gt; 't sref_descr -&gt; 't sref
</code></pre>
<p>Note that the <code class="code">sref_of_descr</code> function takes the resource ID of the
pool as first argument, and that this function is quite slow (because
it has to ask the master process where to find the shared memory
object).</p>

<p>For the other shared data structures, there are comparable functions
supporting descriptors.</p>

<h3 id="mutation">How to mutate shared values</h3>
<p>Remember this code?</p>

<pre class="codepre"><code class="code">(Netmcore_ref.deref_ro s).data &lt;- "new string"   (* WARNING - PROBLEMATIC CODE *)
</code></pre>
<p>We now look at how to fix it. The basic problem is that the new string
does not reside in shared memory; if we could ensure that it does,
the assignment would be acceptable.</p>

<p>The correct way to do it is this:</p>

<pre class="codepre"><code class="code">Netmcore_heap.modify
  (Netmcore_ref.heap s)
  (fun mut -&gt;
    (Netmcore_ref.deref_ro s).data &lt;- Netmcore_heap.add mut "new string"
  )
</code></pre>
<p>The essence is that we now call <a href="Netmcore_heap.html#VALadd"><code class="code">Netmcore_heap.add</code></a> before assigning
the string. This function again copies the argument value, and puts
the copy onto the heap. For managing the copy, this function needs
a special object <code class="code">mut</code>, which is actually a <i>mutator</i>. The only
way to get a mutator is to call <a href="Netmcore_heap.html#VALmodify"><code class="code">Netmcore_heap.modify</code></a>, which - among
other actions - locks the heap, and prevents any competing write
access. This is required, because otherwise mutations could
overlap in bad ways when they are done in parallel, and the internal
representation of the heap would be corrupted.</p>

<p>Note that you need to call <code class="code">add</code> for assigning all values that are not
yet residing in the same heap. This means: Call it if the new value is
in process-local memory, but also call it when the new value is
already part of a different heap (because pointers from one heap to
the other are not allowed - heaps must be completely self-contained).</p>
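<p>As a hedged sketch (the names are just for illustration): assume two shared
references <code class="code">src</code> and <code class="code">dst</code> of the record type <code class="code">t</code> from above, living
in two different heaps. Copying the string from <code class="code">src</code> into <code class="code">dst</code> could
then look like this - the source value is pinned while it is read, and
<code class="code">add</code> allocates the copy in the heap of <code class="code">dst</code>:</p>

<pre class="codepre"><code class="code">Netmcore_ref.deref_p src
  (fun src_v -&gt;
     Netmcore_heap.modify
       (Netmcore_ref.heap dst)
       (fun mut -&gt;
          (* the string is copied from src's heap into dst's heap *)
          (Netmcore_ref.deref_ro dst).data &lt;- Netmcore_heap.add mut src_v.data
       )
  )
</code></pre>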

<p>You may wonder why we used <code class="code">deref_ro</code> to get the current value of <code class="code">s</code>,
and not one of the other access functions. Quite easy answer: the
other functions <code class="code">deref_c</code> and <code class="code">deref_p</code> would deadlock the program!
The <code class="code">modify</code> function already acquires the heap lock for the duration
of the mutation, and using any access function that also plays with
this lock will cause deadlocks.</p>

<p>Let's look at this example:</p>

<pre class="codepre"><code class="code">type u = { mutable data1 : string;
           mutable data2 : string;
         }

let s = Netmcore_ref.sref
           pool
           { data1 = "initial string 1"; data2 = "initial string 2" }
</code></pre>
<p>We now want to swap the values of the two components <code class="code">data1</code> and <code class="code">data2</code>.
The solution is of course:</p>

<pre class="codepre"><code class="code">Netmcore_heap.modify
  (Netmcore_ref.heap s)
  (fun mut -&gt;
    let u = Netmcore_ref.deref_ro s in
    let p = u.data1 in
    u.data1 &lt;- u.data2;
    u.data2 &lt;- p
  )
</code></pre>
<p>Note that we call <code class="code">modify</code> although we do not need the mutator for
doing our work. The reason is that <code class="code">modify</code> also write-locks the heap,
and this protects the integrity of our data. We do not need to call
<code class="code">add</code> because the two strings are already residing in the same shared
heap, and we are just swapping them.</p>

<p>A small variation of this runs into a possible problem, though:</p>

<pre class="codepre"><code class="code">Netmcore_heap.modify
  (Netmcore_ref.heap s)
  (fun mut -&gt;
    let u = Netmcore_ref.deref_ro s in
    let p = u.data1 in
    u.data1 &lt;- Netmcore_heap.add mut ("Previously in data2: " ^ u.data2);
    u.data2 &lt;- Netmcore_heap.add mut ("Previously in data1: " ^ p);
    printf "p=%s\n" p;                    (* THIS LINE IS PROBLEMATIC *)
  )
</code></pre>
<p>This piece of code will crash now and then (not often, so maybe difficult
to run into). What is going wrong?</p>

<p>There is one phenomenon we haven't paid attention to yet: When we call
<code class="code">add</code> it is possible that the available space in the heap is not
sufficient anymore to do the required memory allocations, and that a
GC run is started. When this happens in the second <code class="code">add</code>, the
field <code class="code">data1</code> is already overwritten, and because of this the string
<code class="code">p</code> is now unreachable from the top-level value of the heap. The
space occupied by <code class="code">p</code> is reclaimed, and may even be overwritten. For
the two assignments this is no problem, because <code class="code">p</code> is no longer needed.
When we access <code class="code">p</code> after the second <code class="code">add</code>, however, the value may
have become invalid. Accessing it may crash the program!</p>

<p>How to fix? It is possible to pin additional values during mutation
so that they cannot be collected by the GC:</p>

<pre class="codepre"><code class="code">Netmcore_heap.modify
  (Netmcore_ref.heap s)
  (fun mut -&gt;
    let u = Netmcore_ref.deref_ro s in
    let p = u.data1 in
    u.data1 &lt;- Netmcore_heap.add mut ("Previously in data2: " ^ u.data2);
    u.data2 &lt;- Netmcore_heap.add mut ("Previously in data1: " ^ p);
    Netmcore_heap.pin mut p;
    printf "p=%s\n" p;
  )
</code></pre>
<p>This piece of code is now correct. The effect of <code class="code">pin</code> lasts until
the end of the <code class="code">modify</code> function.</p>

<h3 id="sdata">Shared data structures</h3>
<p>Besides references, there are a number of further shared data structures:</p>

<ul>
<li><a href="Netmcore_array.html"><code class="code">Netmcore_array</code></a> implements shared arrays</li>
<li><a href="Netmcore_matrix.html"><code class="code">Netmcore_matrix</code></a> implements shared matrices (2-dimensional arrays)</li>
<li><a href="Netmcore_buffer.html"><code class="code">Netmcore_buffer</code></a> implements shared string buffers</li>
<li><a href="Netmcore_queue.html"><code class="code">Netmcore_queue</code></a> implements shared queues</li>
<li><a href="Netmcore_hashtbl.html"><code class="code">Netmcore_hashtbl</code></a> implements shared hash tables</li>
</ul>
<p>You may wonder why these data structures exist. Isn't it possible to
e.g. put a normal <code class="code">Queue</code> into a shared reference in order to get a
shared version? The problem is that existing types like <code class="code">Queue</code> do not
adhere to the programming rules we've outlined above. <code class="code">Queue</code> is a
mutable structure, and when elements are added to or removed from the
queue, new value allocations are always done in the normal
process-local heap and not in the right shared heap. This is not something
one can fix by wrapping code around these data structures.</p>

<p>If you look at the implementation of e.g. <a href="Netmcore_queue.html"><code class="code">Netmcore_queue</code></a> (which is
really recommended), you'll see that it looks very much like the
implementation of <code class="code">Queue</code>, only that <a href="Netmcore_heap.html#VALmodify"><code class="code">Netmcore_heap.modify</code></a> is used
for managing mutation.</p>

<p>The modules listed above also include all the necessary means to
protect the data structures against the possibly disastrous effects
of parallel mutation. Also, for read accesses there are always several
access functions corresponding to what we've seen for references
(<code class="code">deref_ro</code>, <code class="code">deref_c</code>, and <code class="code">deref_p</code>).</p>

<p>Let's have a closer look at <a href="Netmcore_array.html"><code class="code">Netmcore_array</code></a> to see how this is done.
The signature is:</p>

<pre class="codepre"><code class="code">type ('e,'h) sarray
type ('e,'h) sarray_descr
val create : Netmcore.res_id -&gt; 'e array -&gt; 'h -&gt; ('e,'h) sarray
val make : Netmcore.res_id -&gt; int -&gt; 'e -&gt; 'h -&gt; ('e,'h) sarray
val init : Netmcore.res_id -&gt; int -&gt; (int -&gt; 'e) -&gt; 'h -&gt; ('e,'h) sarray
val grow : ('e,_) sarray -&gt; int -&gt; 'e -&gt; unit
val set : ('e,_) sarray -&gt; int -&gt; 'e -&gt; unit
val get_ro : ('e,_) sarray -&gt; int -&gt; 'e
val get_p : ('e,_) sarray -&gt; int -&gt; ('e -&gt; 'a) -&gt; 'a
val get_c : ('e,_) sarray -&gt; int -&gt; 'e
val length : (_,_) sarray -&gt; int
val header : (_,'h) sarray -&gt; 'h
val deref : ('e,_) sarray -&gt; 'e array
val heap : (_,_) sarray -&gt; Obj.t Netmcore_heap.heap
val descr_of_sarray : ('e,'h) sarray -&gt; ('e,'h) sarray_descr
val sarray_of_descr : Netmcore.res_id -&gt; ('e,'h) sarray_descr -&gt; ('e,'h) sarray
</code></pre>
<p>As you can see, the type is not only <code class="code">'e sarray</code> (where <code class="code">'e</code> is the
type of the elements), but there is a second type variable <code class="code">'h</code>, so
that the type becomes <code class="code">('e,'h) sarray</code>. This is the type of the
<i>header</i>, which is simply an extra place for storing data that exists
once per shared data structure. The header can have any type, but is
often a record with a few fields. If you do not need the header, just
set it to <code class="code">()</code> (i.e. <code class="code">'h = unit</code>).</p>

<p>For managing shared data structures one often needs a few extra fields
that can be put into the header. For example, a requirement could be
that the length of a shared queue is limited, and one needs
synchronization variables to ensure that the users of the queue stick
to the limit (i.e. the addition of new elements to the queue is
suspended when it is full, and it is restarted when there is again
space). As such extra requirements are very common, all shared data
structures have such a header (well, for <a href="Netmcore_ref.html"><code class="code">Netmcore_ref</code></a> it was omitted
for obvious reasons).</p>

<p>When you create an instance of the structure, you always have to pass
the resource ID of the pool (here for <code class="code">create</code>, <code class="code">make</code>,
<code class="code">init</code>). Another argument is the initial value of the header (which is
also copied into the shared heap). The header, after being copied to
the heap, can be accessed with the <code class="code">header</code> function.</p>

<p>The function <code class="code">set</code> looks very much like the one in <code class="code">Array</code>. Note that
for all mutation the shared heap is write-locked, so there can actually
only be one running <code class="code">set</code> operation at a time. Because <code class="code">set</code> copies
the argument data to the shared heap, this time is not negligible.</p>

<p>For accessing elements, there are the known three variants: <code class="code">get_ro</code>,
<code class="code">get_c</code>, and <code class="code">get_p</code>. The <code class="code">length</code> function works as in <code class="code">Array</code>.</p>
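<p>A minimal sketch of how these functions fit together (to be run in a
worker process; <code class="code">pool</code> is the resource ID of an existing pool, and the
string header is just an arbitrary example):</p>

<pre class="codepre"><code class="code">(* A shared int array with a string header *)
let sa = Netmcore_array.make pool 10 0 "counters" in   (* 10 elements, all 0 *)
Netmcore_array.set sa 3 42;              (* the new value is copied into the heap *)
let v = Netmcore_array.get_ro sa 3 in    (* safe here: ints are immediate values *)
Printf.printf "%s: length=%d, element 3=%d\n"
  (Netmcore_array.header sa) (Netmcore_array.length sa) v
</code></pre>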

<p>As it would be difficult for the user to implement growable arrays
on top of the basic version, a <code class="code">grow</code> function doing exactly that was
added. This function avoids copying values into the shared heap again
when they are already there.</p>

<p>The type <code class="code">('e,'h) sarray_descr</code> is used for descriptors to shared
arrays, and the functions <code class="code">descr_of_sarray</code> and <code class="code">sarray_of_descr</code>
make it possible to manage descriptors.</p>

<h3 id="sync">Synchronization</h3>
<p>There are three kinds of synchronization devices:</p>

<ul>
<li><a href="Netmcore_sem.html"><code class="code">Netmcore_sem</code></a>: Semaphores</li>
<li><a href="Netmcore_mutex.html"><code class="code">Netmcore_mutex</code></a>: Mutexes</li>
<li><a href="Netmcore_condition.html"><code class="code">Netmcore_condition</code></a>: Condition variables</li>
</ul>
<p>Unlike the data structures explained above, these synchronization
primitives are simply special values, and not shared heaps of their own
(which would be quite costly). One consequence of their special nature
is that it is not possible to copy these values around - they must
exist at a fixed memory address, and cannot be copied or moved. In
practice this means that a copied or moved value becomes
non-functional.</p>

<p>Of course, these special values must be put into shared heaps in order
to be accessible by several processes. Let's just walk through an
example, to see how this is done right.</p>

<p>Imagine you have a field <code class="code">x</code> of some type, and a lock <code class="code">m</code> that is going
to protect concurrent accesses to <code class="code">x</code>:</p>

<pre class="codepre"><code class="code">type t =
  { mutable x : some_type;
    mutable m : Netmcore_mutex.mutex
  }
</code></pre>
<p>This record is put into a shared reference:</p>

<pre class="codepre"><code class="code">let s = Netmcore_ref.sref pool { x = ...; m = ... }
</code></pre>
<p>Remember that <code class="code">sref</code> initializes the reference with a <i>copy</i> of the
argument value. This would mean we copy <code class="code">m</code>, which is invalid as we've
learned. How to solve?</p>

<p>Mutexes (like the other synchronization devices) are designed for this
use pattern, so there is already a built-in solution. There is a special
dummy value one can use during initialization:</p>

<pre class="codepre"><code class="code">let s = Netmcore_ref.sref pool { x = ...; m = Netmcore_mutex.dummy() }
</code></pre>
<p>Dummies are placeholders that need to be re-initialized later, once the
record has been copied to the shared heap. This is done by:</p>

<pre class="codepre"><code class="code">Netmcore_heap.modify
  (Netmcore_ref.heap s)
  (fun mut -&gt;
     let r = Netmcore_ref.deref_ro s in
     r.m &lt;- Netmcore_mutex.create mut `Normal
  )
</code></pre>
<p>That's it! Note that we've set the type of the mutex to <code class="code">`Normal</code>
which creates a fast mutex without deadlock protection.</p>

<p>The mutex can now be used in statements like</p>

<pre class="codepre"><code class="code">Netmcore_ref.deref_p s (fun r -&gt; Netmcore_mutex.lock r.m)
</code></pre>
<p>Be careful not to use <code class="code">deref_c</code> because this would create a copy of the
mutex, and render it useless!</p>
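<p>A hedged sketch of a complete critical section - the helper
<code class="code">string_of_x</code> is hypothetical and stands for whatever the application does
with the protected field; it must not let pointers into the shared heap
escape the pinned region:</p>

<pre class="codepre"><code class="code">Netmcore_ref.deref_p s
  (fun r -&gt;
     Netmcore_mutex.lock r.m;
     (* read the protected field while holding the lock *)
     let msg = string_of_x r.x in
     Netmcore_mutex.unlock r.m;
     print_endline msg
  )
</code></pre>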

<p>Semaphores are much like mutexes, only that the synchronization functions
are not <code class="code">lock</code> and <code class="code">unlock</code> but <code class="code">wait</code> and <code class="code">post</code>.</p>

<p>Condition variables are a bit harder to use, unfortunately. When
implementing them I ran into the issue that the fast algorithm needs
to allocate special storage places, one for each process that can be
suspended. (There is a slow algorithm not requiring additional
storage, but this would have been a very bad deal.) In system-level
implementations of condition variables the additional storage can
usually be hidden from the user. This is not possible in a pure
user-space implementation like this one. For this reason, the user of
condition variables has to allocate these places called <code class="code">wait_entry</code>.
One <code class="code">wait_entry</code> is needed for each process that can ever wait for
a condition variable. There is also <code class="code">wait_set</code> which is just a collection
of <code class="code">wait_entry</code> values. Let's look at an example:</p>

<pre class="codepre"><code class="code">type t =
  { mutable x : some_type;
    mutable m : Netmcore_mutex.mutex;
    mutable c : Netmcore_condition.condition;
    mutable w : Netmcore_condition.wait_set
  }

let s = 
  Netmcore_ref.sref pool
     { x = ...; 
       m = Netmcore_mutex.dummy();
       c = Netmcore_condition.dummy_condition();
       w = Netmcore_condition.dummy_wait_set()
     }

let () =
  Netmcore_heap.modify
    (Netmcore_ref.heap s)
    (fun mut -&gt;
       let r = Netmcore_ref.deref_ro s in
       r.m &lt;- Netmcore_mutex.create mut `Normal;
       r.c &lt;- Netmcore_condition.create_condition mut;
       r.w &lt;- Netmcore_condition.create_wait_set mut
    )
</code></pre>
<p>The field <code class="code">w</code> is now an empty <code class="code">wait_set</code>. Note that we only need one <code class="code">w</code>
for all condition variables that exist in the same shared heap.</p>

<p>The point is now that we need to get a <code class="code">wait_entry</code> for each process
using <code class="code">s</code>. Get it with:</p>

<pre class="codepre"><code class="code">let we =
  Netmcore_heap.modify
    (Netmcore_ref.heap s)
    (fun mut -&gt; 
      Netmcore_condition.alloc_wait_entry mut (Netmcore_ref.deref_ro s).w
    )
</code></pre>
<p>This just needs to happen once for each process. The value <code class="code">we</code> can be
used for all <code class="code">wait</code> calls related to all condition variables in the
same shared heap.</p>

<p>A <code class="code">wait</code> call looks then like:</p>

<pre class="codepre"><code class="code">Netmcore_ref.deref_p s (fun r -&gt; Netmcore_condition.wait we r.c r.m)
</code></pre>
<p>For <code class="code">signal</code> and <code class="code">broadcast</code> the value <code class="code">we</code> is not needed. As you can
see, the only additional complexity has to do with the initialization of the
process - we need to allocate <code class="code">we</code> once for each process.</p>
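<p>To make this concrete, here is a hedged sketch of the usual predicate loop.
It assumes that a <code class="code">mutable ready : bool</code> field has been added to <code class="code">t</code>,
and that <code class="code">we</code> has been allocated as shown above. Since <code class="code">true</code> is an
immediate value, no <code class="code">add</code> is needed when setting the flag, but the heap is
still write-locked via <code class="code">modify</code>, and the application mutex is held around
the update and the signal:</p>

<pre class="codepre"><code class="code">(* Waiting side: re-check the predicate after every wakeup *)
let wait_until_ready () =
  Netmcore_ref.deref_p s
    (fun r -&gt;
       Netmcore_mutex.lock r.m;
       while not r.ready do
         Netmcore_condition.wait we r.c r.m
       done;
       Netmcore_mutex.unlock r.m
    )

(* Signalling side: set the flag and wake up one waiter *)
let make_ready () =
  Netmcore_ref.deref_p s
    (fun r -&gt;
       Netmcore_mutex.lock r.m;
       Netmcore_heap.modify
         (Netmcore_ref.heap s)
         (fun _ -&gt; r.ready &lt;- true);
       Netmcore_condition.signal r.c;
       Netmcore_mutex.unlock r.m
    )
</code></pre>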

<h3 id="examples">Where to find examples</h3>
<p>It is very much recommended to study complete examples before trying
to develop with Netmulticore. There are a few examples in the
<code class="code">examples/multicore</code> directory of the distributed tarball.</p>

<p>The latest version can always be found in the svn repository:</p>

<ul>
<li><a href="https://godirepo.camlcity.org/svn/lib-ocamlnet2/trunk/code/examples/multicore/"> examples/multicore</a></li>
</ul>
<h3 id="impl">Remarks on the implementation</h3>
<p>The current implementation of shared heaps uses only a single lock
per heap, even for the initial actions of read
accesses. This e.g. means that a <code class="code">deref_c</code> locks the reference for a
short time until it has pinned the current value. The (perhaps time
consuming) copy operation is then done without lock.</p>

<p>This might not be optimal for all use cases. An improved locking scheme
would use reader/writer locks. However, this kind of lock is complicated
to implement on top of just semaphores, so it was omitted for now. Also,
reader/writer locks are more expensive in general, so it is not clear
whether they would be better at all.</p>

<p>The memory management of heaps is still in a quite experimental state.
Heaps are extended in units of 64 KB, which may be way too big or way
too small for the application. Also, the implementation tries to keep at least
half of the heap memory free (i.e. the "overhead factor" is 50%).</p>

<p>If parts of a shared heap become completely free, they are indeed
given back to the memory pool.</p>

<h3 id="diffs">Some points where Netmulticore is different from multi-threading</h3>
<p>When porting multi-threaded programs to Netmulticore, you may wonder
where the differences are.</p>

<p>Netmulticore can only deal with values that are completely stored in
shared heaps. This requires that the value is initially copied to the
heap, and the described programming rules must be adhered to when
modifying and also reading the values. Of course, there are also
programming rules in a multi-threaded environment, so this is not
completely different.</p>

<p>The way the shared heaps are managed is less automatic than in the
multi-threaded case. In particular, the garbage collector of shared heaps
does not recognize values in process-local memory as roots. This is
really unfortunate, because the user has to work around this
limitation (pinning), and this is error-prone. There is, however,
probably nothing we can do about it. Theoretically it is possible to
create a protocol between the shared GC and the process-local GC, but
it looks very complicated to implement it.</p>

<p>Another problem is that there is nothing that would prevent erroneous
pointers from shared heaps to process-local heaps. In multi-threaded
environments this distinction does not exist, so there is no such
problem. By changing the Ocaml compiler it could be possible (without
having checked it in detail) to emit different code when a
process-local value is assigned to a shared variable, and to
automatically allocate the value in the shared heap.</p>

<p>The Netmulticore approach also has advantages. In particular, it is way
more scalable than a multi-threaded environment with a single heap
only. There is no global lock that could become the bottleneck of the
implementation. Each shared heap has its own lock, so there is always
the possibility to increase the "lock capacity" by using more shared
heaps together. In some sense, this is not a "multicore" solution, but
rather a "manycore" approach that will also work for hundreds of
cores.</p>

<h3 id="os">Operating system issues</h3>
<h4 id="3_AdministeringPOSIXsharedmemory">Administering POSIX shared memory</h4>
<p>Many of the data structures described here are actually backed by
shared memory blocks. For managing these blocks, the POSIX API
is used (i.e. <code class="code">shm_open</code>). Note that there is also an older API
on many systems called System V API (i.e. <code class="code">shmget</code>). This API
is not used.</p>

<p>Shared memory has kernel persistence, i.e. the blocks remain allocated
even after the process that created them terminates. The blocks need
to be explicitly deleted. This can be done by the right API calls
(e.g.  call <a href="Netmcore.html#VALrelease"><code class="code">Netmcore.release</code></a> for objects with a Netmulticore
resource ID), but from time to time a program does not terminate
normally, and this deletion is not performed. The question is how to
get administratively rid of the blocks.</p>

<p>The nice aspect of the POSIX API is that shared memory looks very much
like files, and indeed, in many implementations I've seen the blocks
appear somewhere in the file system. Typical locations for these files
are <code class="code">/dev/shm</code> and <code class="code">/tmp</code>. The files have names like
<code class="code">mempool_f7e8bdaa</code>, or generally <code class="code">&lt;prefix&gt;_&lt;8hexdigits&gt;</code>. The prefix
depends on the data structure the block is used for, and the hex
digits make the name unique. By deleting these files, the blocks
are removed.</p>

<p>Since Ocamlnet-3.6, one can also delete shared memory administratively
with the <code class="code">netplex-admin</code> utility. See <code class="code">Netplex_admin.unlink</code> for
details. This method works on all operating systems. Also, an unlink of old memory
is automatically done when the program is restarted.</p>

<p>Another issue is that operating systems typically define an upper limit for the
amount of shared memory. This is e.g. 50% of the system RAM on
Linux. There are usually ways to configure this limit.</p>
</div>
</body></html>