File: ocfs2_faq.html

<html>
<hr>
<title>
OCFS2 - FREQUENTLY ASKED QUESTIONS
</title>
<p>
<font size=+2> <center><b>OCFS2 - FREQUENTLY ASKED QUESTIONS</b></center></font>
</p>

<ul>
<p>
<font size=+1><b>CONTENTS</b></font>
</p>
<li><a href=#GENERAL>General</a>
<li><a href=#DOWNLOAD>Download and Install</a>
<li><a href=#CONFIGURE>Configure</a>
<li><a href=#O2CB>O2CB Cluster Service</a>
<li><a href=#FORMAT>Format</a>
<li><a href=#RESIZE>Resize</a>
<li><a href=#MOUNT>Mount</a>
<li><a href=#RAC>Oracle RAC</a>
<li><a href=#MIGRATE>Migrate Data from OCFS (Release 1) to OCFS2</a>
<li><a href=#COREUTILS>Coreutils</a>
<li><a href=#NFS>Exporting via NFS</a>
<li><a href=#TROUBLESHOOTING>Troubleshooting</a>
<li><a href=#LIMITS>Limits</a>
<li><a href=#SYSTEMFILES>System Files</a>
<li><a href=#HEARTBEAT>Heartbeat</a>
<li><a href=#QUORUM>Quorum and Fencing</a>
<li><a href=#SLES>Novell's SLES9 and SLES10</a>
<li><a href=#RELEASE1.2>Release 1.2</a>
<li><a href=#UPGRADE>Upgrade to the Latest Release</a>
<li><a href=#PROCESSES>Processes</a>
<li><a href=#BUILD>Build RPMs for Hotfix Kernels</a>
<li><a href=#BACKUPSB>Backup Super block</a>
<li><a href=#TIMEOUT>Configuring Cluster Timeouts</a>
<li><a href=#EL5>Enterprise Linux 5</a>
</ul>

<ol>
<p>
<A name="GENERAL"><font size=+1><b>GENERAL</b></font></A>
</p>

<font size=+1>
<li>How do I get started?<br>
</font>
<ul>
<li>Download and install the module and tools rpms.
<li>Create cluster.conf and propagate to all nodes.
<li>Configure and start the O2CB cluster service.
<li>Format the volume.
<li>Mount the volume.
</ul>

<font size=+1>
<li>How do I know the version number running?<br>
</font>
<pre>
	# cat /proc/fs/ocfs2/version
	OCFS2 1.2.1 Fri Apr 21 13:51:24 PDT 2006 (build bd2f25ba0af9677db3572e3ccd92f739)
</pre>

<font size=+1>
<li>How do I configure my system to auto-reboot after a panic?<br>
</font>
To auto-reboot the system 60 secs after a panic, do:
<pre>
	# echo 60 > /proc/sys/kernel/panic
</pre>
To enable the above on every reboot, add the following to /etc/sysctl.conf:
<pre>
	kernel.panic = 60
</pre>

<p>
<A name="DOWNLOAD"><font size=+1><b>DOWNLOAD AND INSTALL</b></font></A>
</p>

<font size=+1>
<li>Where do I get the packages from?<br>
</font>
For Oracle Enterprise Linux 4 and 5, use the up2date command as follows:
<pre>
	# up2date --install ocfs2-tools ocfs2console
	# up2date --install ocfs2-`uname -r`
</pre>
For Novell's SLES9, use yast to upgrade to the latest SP3 kernel to get the required
modules installed. Also, install the ocfs2-tools and ocfs2console packages.<br>
For Novell's SLES10, install the ocfs2-tools and ocfs2console packages.<br>
For Red Hat's RHEL4 and RHEL5, download and install the appropriate module package and the two tools
packages, ocfs2-tools and ocfs2console. Appropriate module refers to one matching the
kernel version, flavor and architecture. Flavor refers to smp, hugemem, etc.<br>

<span style="color: #F00;">
<font size=+1>
<li>What are the latest versions of the OCFS2 packages?<br>
</font>
For Enterprise Linux 5, the latest module package version is 1.2.6-1 and the latest
tools/console package version is 1.2.6-1.<br>
For Enterprise Linux 4, the latest module package version is 1.2.5-1 and the latest
tools/console package version is 1.2.4-1.<br>
</span>

<font size=+1>
<li>How do I interpret the package name ocfs2-2.6.9-22.0.1.ELsmp-1.2.1-1.i686.rpm?<br>
</font>
The package name is comprised of multiple parts separated by '-'.<br>
<ul>
<li><b>ocfs2</b> - Package name
<li><b>2.6.9-22.0.1.ELsmp</b> - Kernel version and flavor
<li><b>1.2.1</b> - Package version
<li><b>1</b> - Package subversion
<li><b>i686</b> - Architecture
</ul>

<font size=+1>
<li>How do I know which package to install on my box?<br>
</font>
After one identifies the package name and version to install, one still needs
to determine the kernel version, flavor and architecture.<br>
To know the kernel version and flavor, do:
<pre>
	# uname -r
	2.6.9-22.0.1.ELsmp
</pre>
To know the architecture, do:
<pre>
	# rpm -qf /boot/vmlinuz-`uname -r` --queryformat "%{ARCH}\n"
	i686
</pre>

<font size=+1>
<li>Why can't I use <i>uname -p</i> to determine the kernel architecture?<br>
</font>
<i>uname -p</i> does not always provide the exact kernel architecture. A case in
point is the RHEL3 kernels on x86_64. Even though Red Hat has two different kernel
architectures available for this port, ia32e and x86_64, <i>uname -p</i>
identifies both as the generic <i>x86_64</i>.<br>

<font size=+1>
<li>How do I install the rpms?<br>
</font>
First install the tools and console packages:
<pre>
	# rpm -Uvh ocfs2-tools-1.2.1-1.i386.rpm ocfs2console-1.2.1-1.i386.rpm
</pre>
Then install the appropriate kernel module package:
<pre>
	# rpm -Uvh ocfs2-2.6.9-22.0.1.ELsmp-1.2.1-1.i686.rpm
</pre>

<font size=+1>
<li>Do I need to install the console?<br>
</font>
No, the console is not required but recommended for ease-of-use.<br>

<font size=+1>
<li>What are the dependencies for installing ocfs2console?<br>
</font>
ocfs2console requires e2fsprogs, glib2 2.2.3 or later, vte 0.11.10 or later,
pygtk2 (RHEL4) or python-gtk (SLES9) 1.99.16 or later, python 2.3 or later and
ocfs2-tools.<br>

<font size=+1>
<li>What modules are installed with the OCFS2 1.2 package?<br>
</font>
<ul>
<li>ocfs2.ko
<li>ocfs2_dlm.ko
<li>ocfs2_dlmfs.ko
<li>ocfs2_nodemanager.ko
<li>configfs.ko (only Enterprise Linux 4)
<li>debugfs.ko (only Enterprise Linux 4)<br>
</ul>
<span style="color: #F00;">
The kernel shipped along with Enterprise Linux 5 includes configfs.ko and debugfs.ko.<br>
</span>

<font size=+1>
<li>What tools are installed with the ocfs2-tools 1.2 package?<br>
</font>
<ul>
<li>mkfs.ocfs2
<li>fsck.ocfs2
<li>tunefs.ocfs2
<li>debugfs.ocfs2
<li>mount.ocfs2
<li>mounted.ocfs2
<li>ocfs2cdsl
<li>ocfs2_hb_ctl
<li>o2cb_ctl
<li>o2cb - init service to start/stop the cluster
<li>ocfs2 - init service to mount/umount ocfs2 volumes
<li>ocfs2console - installed with the console package
</ul>

<font size=+1>
<li>What is debugfs and is it related to debugfs.ocfs2?<br>
</font>
<a href=http://kerneltrap.org/node/4394>debugfs</a> is an in-memory filesystem
developed by Greg Kroah-Hartman. It is useful for debugging as it allows kernel
space to easily export data to userspace. It is currently being used by OCFS2
to dump the list of filesystem locks and could be used for more in the future.
It is bundled with OCFS2 as the various distributions are currently not bundling
it. While debugfs and debugfs.ocfs2 are unrelated in general, the latter is used
as the front-end for the debugging info provided by the former. For example,
refer to the troubleshooting section.

<p>
<A name="CONFIGURE"><font size=+1><b>CONFIGURE</b></font></A>
</p>

<font size=+1>
<li>How do I populate /etc/ocfs2/cluster.conf?<br>
</font>
If you have installed the console, use it to create this configuration file.
For details, refer to the user's guide.  If you do not have the console installed,
check the Appendix in the User's guide for a sample cluster.conf and the details
of all the components. Do not forget to copy this file to all the nodes in the
cluster. If you ever edit this file on any node, ensure the other nodes are
updated as well.<br>
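For reference, a minimal cluster.conf for a hypothetical two-node cluster named ocfs2
could look like the sketch below (hostnames and IP addresses are placeholders; the
parameter lines must be indented under the stanza headers, and the sample in the
user's guide is authoritative):
<pre>
cluster:
	node_count = 2
	name = ocfs2

node:
	ip_port = 7777
	ip_address = 192.168.1.101
	number = 0
	name = node1
	cluster = ocfs2

node:
	ip_port = 7777
	ip_address = 192.168.1.102
	number = 1
	name = node2
	cluster = ocfs2
</pre>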

<font size=+1>
<li>Should the IP interconnect be public or private?<br>
</font>
Using a private interconnect is recommended. While OCFS2 does not take much
bandwidth, it does require the nodes to be alive on the network and sends regular
keepalive packets to ensure that they are. To avoid a network delay being
interpreted as a node disappearing on the net which could lead to a
node-self-fencing, a private interconnect is recommended. One could use the
same interconnect for Oracle RAC and OCFS2.<br>

<font size=+1>
<li>What should the node name be and should it be related to the IP address?<br>
</font>
The node name needs to match the hostname. The IP address need not be the one
associated with that hostname. As in, any valid IP address on that node can be
used. OCFS2 will not attempt to match the node name (hostname) with the
specified IP address.<br>

<font size=+1>
<li>How do I modify the IP address, port or any other information specified in cluster.conf?<br>
</font>
While one can use ocfs2console to add nodes dynamically to a running cluster,
any other modifications require the cluster to be offlined. Stop the cluster
on all nodes, edit /etc/ocfs2/cluster.conf on one and copy to the rest, and
restart the cluster on all nodes. Always ensure that cluster.conf is the
same on all the nodes in the cluster.<br>

<font size=+1>
<li>How do I add a new node to an online cluster?<br>
</font>
You can use the console to add a new node. However, you will need to explicitly add
the new node on all the online nodes. That is, adding on one node and propagating
to the other nodes is not sufficient. If the operation fails, it will most likely
be due to <a href="http://oss.oracle.com/bugzilla/show_bug.cgi?id=741">bug#741</a>.
In that case, you can use the o2cb_ctl utility on all online nodes as follows:
<pre>
	# o2cb_ctl -C -i -n NODENAME -t node -a number=NODENUM -a ip_address=IPADDR -a ip_port=IPPORT -a cluster=CLUSTERNAME
</pre>
Ensure the node is added both in /etc/ocfs2/cluster.conf and in /config/cluster/CLUSTERNAME/node
on all online nodes. You can then simply copy the cluster.conf to the new (still offline) node
as well as other offline nodes. At the end, ensure that cluster.conf is consistent on all the nodes.

<font size=+1>
<li>How do I add a new node to an offline cluster?<br>
</font>
You can either use the console or use o2cb_ctl or simply hand edit cluster.conf. Then
either use the console to propagate it to all nodes or hand copy using scp or any other tool.
The o2cb_ctl command to do the same is:
<pre>
        # o2cb_ctl -C -n NODENAME -t node -a number=NODENUM -a ip_address=IPADDR -a ip_port=IPPORT -a cluster=CLUSTERNAME
</pre>
Notice the "-i" argument is not required as the cluster is not online.

<p>
<A name="O2CB"><font size=+1><b>O2CB CLUSTER SERVICE</b></font></A>
</p>

<font size=+1>
<li>How do I configure the cluster service?<br>
</font>
<pre>
	# /etc/init.d/o2cb configure
</pre>
Enter 'y' if you want the service to load on boot, the name of the
cluster (as listed in /etc/ocfs2/cluster.conf) and the <a href=#TIMEOUT>cluster timeouts</a>.<br>

<font size=+1>
<li>How do I start the cluster service?<br>
</font>
<ul>
<li>To load the modules, do:
<pre>
	# /etc/init.d/o2cb load
</pre>
<li>To Online it, do:
<pre>
	# /etc/init.d/o2cb online [cluster_name]
</pre>
</ul>
If you have configured the cluster to load on boot, you could combine the two as follows:
<pre>
	# /etc/init.d/o2cb start [cluster_name]
</pre>
The cluster name is not required if you have specified the name during configuration.<br>

<font size=+1>
<li>How do I stop the cluster service?<br>
</font>
<ul>
<li>To offline it, do:
<pre>
	# /etc/init.d/o2cb offline [cluster_name]
</pre>
<li>To unload the modules, do:
<pre>
	# /etc/init.d/o2cb unload
</pre>
</ul>
If you have configured the cluster to load on boot, you could combine the two as follows:
<pre>
	# /etc/init.d/o2cb stop [cluster_name]
</pre>
The cluster name is not required if you have specified the name during configuration.<br>

<font size=+1>
<li>How can I learn the status of the cluster?<br>
</font>
To learn the status of the cluster, do:
<pre>
	# /etc/init.d/o2cb status
</pre>

<font size=+1>
<li>I am unable to get the cluster online. What could be wrong?<br>
</font>
Check whether the node name in the cluster.conf exactly matches the hostname.
At least one of the nodes listed in cluster.conf needs to be in the cluster for
the cluster to be online.<br>
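A quick sanity check (the output below is only illustrative) is to compare the
hostname with the node names listed in cluster.conf; note that the last name
entry shown is the cluster name, not a node name:
<pre>
	# hostname
	node1
	# grep "name =" /etc/ocfs2/cluster.conf
		name = node1
		name = node2
		name = ocfs2
</pre>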

<p>
<A name="FORMAT"><font size=+1><b>FORMAT</b></font></A>
</p>

<span style="color: #F00;">
<font size=+1>
<li>Should I partition a disk before formatting?<br>
</font>
Yes, partitioning is recommended even if one is planning to use the entire disk
for ocfs2. Apart from the fact that partitioned disks are less likely to be "reused"
by mistake, some features like mount-by-label only work with partitioned volumes.<br>
Use fdisk or parted or any other tool for the task.<br>
</span>

<font size=+1>
<li>How do I format a volume?<br>
</font>
You could either use the console or use mkfs.ocfs2 directly to format the volume.
For console, refer to the user's guide.
<pre>
	# mkfs.ocfs2 -L "oracle_home" /dev/sdX
</pre>
The above formats the volume with default block and cluster sizes, which are computed
based upon the size of the volume.
<pre>
	# mkfs.ocfs2 -b 4k -C 32K -L "oracle_home" -N 4 /dev/sdX
</pre>
The above formats the volume for 4 nodes with a 4K block size and a 32K cluster size.<br>

<font size=+1>
<li>What does the number of node slots during format refer to?<br>
</font>
The number of node slots specifies the number of nodes that can concurrently mount
the volume. This number is specified during format and can be increased using
tunefs.ocfs2. This number cannot be decreased.<br>
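For example, with the volume umounted on all nodes (/dev/sdX is a placeholder),
one could increase the number of node slots to 8 as follows:
<pre>
	# tunefs.ocfs2 -N 8 /dev/sdX
</pre>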

<font size=+1>
<li>What should I consider when determining the number of node slots?<br>
</font>
OCFS2 allocates system files, like the journal, for each node slot. So as not
to waste space, one should specify a number close to the actual number of
nodes. Also, as this number can be increased later, there is no need to specify
a much larger number than the number of nodes one expects to mount the volume.<br>

<font size=+1>
<li>Does the number of node slots have to be the same for all volumes?<br>
</font>
No. This number can be different for each volume.<br>

<font size=+1>
<li>What block size should I use?<br>
</font>
A block size is the smallest unit of space addressable by the file system.
OCFS2 supports block sizes of 512 bytes, 1K, 2K and 4K. The block size cannot
be changed after the format. For most volume sizes, a 4K size is recommended.
On the other hand, a 512-byte block size is never recommended.<br>

<font size=+1>
<li>What cluster size should I use?<br>
</font>
A cluster size is the smallest unit of space allocated to a file to hold the data.
OCFS2 supports cluster sizes of 4K, 8K, 16K, 32K, 64K, 128K, 256K, 512K and 1M.
For database volumes, a cluster size of 128K or larger is recommended. For Oracle
home, 32K to 64K.<br>

<font size=+1>
<li>Any advantage of labelling the volumes?<br>
</font>
As, in a shared disk environment, the disk name (/dev/sdX) for a particular device
can be different on different nodes, labelling becomes a must for easy identification.
You could also use labels to identify volumes during mount.
<pre>
	# mount -L "label" /dir
</pre>
The volume label is changeable using the tunefs.ocfs2 utility.<br>
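For example (the device name is a placeholder), one could change the label with
the volume umounted on all nodes and then list the detected devices along with
their labels:
<pre>
	# tunefs.ocfs2 -L "new_label" /dev/sdX
	# mounted.ocfs2 -d
</pre>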

<p>
<A name="RESIZE"><font size=+1><b>RESIZE</b></font></A>
</p>

<font size=+1>
<li>Can OCFS2 file systems be grown in size?<br>
</font>
Yes, you can grow an OCFS2 file system using tunefs.ocfs2. It should be noted that
the tool will only resize the file system and not the underlying partition. You can
use <i>fdisk(8)</i> (or any appropriate tool for your disk array) to resize the partition.<br>

<font size=+1>
<li>What do I need to know to use fdisk(8) to resize the partition?<br>
</font>
To grow a partition using <i>fdisk(8)</i>, you will have to delete it and recreate it
with a larger size. When recreating it, ensure you specify the same starting disk cylinder
as before and an ending disk cylinder that is greater than the existing one. Otherwise,
not only will the resize operation fail, but you may lose your entire file system. Backup
your data before performing this task.<br>

<font size=+1>
<li>Short of reboot, how do I get the other nodes in the cluster to see the resized partition?<br>
</font>
Use <i>blockdev(8)</i> to rescan the partition table of the device on the other nodes in
the cluster.
<pre>
	# blockdev --rereadpt /dev/sdX
</pre>

<font size=+1>
<li>What is the tunefs.ocfs2 syntax for resizing the file system?<br>
</font>
To grow a file system to the end of the resized partition, do:
<pre>
	# tunefs.ocfs2 -S /dev/sdX
</pre>
For more, refer to the tunefs.ocfs2 manpage.<br>

<font size=+1>
<li>Can the OCFS2 file system be grown while the file system is in use?<br>
</font>
No. tunefs.ocfs2 1.2.2 only allows offline resize. i.e., the file system cannot
be mounted on any node in the cluster. The online resize capability will be added later.<br>

<font size=+1>
<li>Can the OCFS2 file system be shrunk in size?<br>
</font>
No. We have no current plans on providing this functionality. However, if you find this
feature useful, file an enhancement request on bugzilla listing your reasons for the same.<br>

<p>
<A name="MOUNT"><font size=+1><b>MOUNT</b></font></A>
</p>

<font size=+1>
<li>How do I mount the volume?<br>
</font>
You could either use the console or use mount directly. For console, refer to
the user's guide.
<pre>
	# mount -t ocfs2 /dev/sdX /dir
</pre>
The above command will mount device /dev/sdX on directory /dir.<br>

<font size=+1>
<li>How do I mount by label?<br>
</font>
To mount by label do:
<pre>
	# mount -L "label" /dir
</pre>

<font size=+1>
<li>What entry do I add to /etc/fstab to mount an ocfs2 volume?<br>
</font>
Add the following:
<pre>
	/dev/sdX	/dir	ocfs2	noauto,_netdev	0	0
</pre>
The _netdev option indicates that the device needs to be mounted after the network is up.<br>

<font size=+1>
<li>What do I need to do to mount OCFS2 volumes on boot?<br>
</font>
<ul>
<li>Enable o2cb service using:
<pre>
	# chkconfig --add o2cb
</pre>
<li>Enable ocfs2 service using:
<pre>
	# chkconfig --add ocfs2
</pre>
<li>Configure o2cb to load on boot using:
<pre>
	# /etc/init.d/o2cb configure
</pre>
<li>Add entries into /etc/fstab as follows:
<pre>
	/dev/sdX	/dir	ocfs2	_netdev	0	0
</pre>
</ul>

<font size=+1>
<li>How do I know my volume is mounted?<br>
</font>
<ul>
<li>Enter mount without arguments, or,
<pre>
	# mount
</pre>
<li>List /etc/mtab, or,
<pre>
	# cat /etc/mtab
</pre>
<li>List /proc/mounts, or,
<pre>
	# cat /proc/mounts
</pre>
<li>Run ocfs2 service.
<pre>
	# /etc/init.d/ocfs2 status
</pre>
The mount command reads /etc/mtab to show the information.<br>
</ul>
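One could also use mounted.ocfs2 to detect the nodes that have a given device
mounted; the output below is only illustrative:
<pre>
	# mounted.ocfs2 -f /dev/sdX
	Device                FS     Nodes
	/dev/sdX              ocfs2  node1, node2
</pre>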

<font size=+1>
<li>What are the /config and /dlm mountpoints for?<br>
</font>
OCFS2 comes bundled with two in-memory filesystems <i>configfs</i> and <i>ocfs2_dlmfs</i>.
<i>configfs</i> is used by the ocfs2 tools to communicate to the in-kernel node
manager the list of nodes in the cluster and to the in-kernel heartbeat thread
the resource to heartbeat on. <i>ocfs2_dlmfs</i> is used by ocfs2 tools to communicate
with the in-kernel dlm to take and release clusterwide locks on resources.<br>

<font size=+1>
<li>Why does it take so much time to mount the volume?<br>
</font>
It takes around 5 secs for a volume to mount. It does so to let the heartbeat
thread stabilize. In a later release, we plan to add support for a global
heartbeat, which will make most mounts instant.<br>

<font size=+1>
<li>Why does it take so much time to umount the volume?<br>
</font>
During umount, the dlm has to migrate all the mastered lockres' to another
node in the cluster. In 1.2, the lockres migration is a synchronous operation.
We are looking into making it asynchronous so as to reduce the time it takes
to migrate the lockres'. (While we have improved this performance in 1.2.5, the
task of asynchronously migrating lockres' has been pushed to the 1.4 time frame.)

To find the number of lockres in all dlm domains, do:
<pre>
	# cat /proc/fs/ocfs2_dlm/*/stat
	local=60624, remote=1, unknown=0, key=0x8619a8da
</pre>
<i>local</i> refers to locally mastered lockres'.<br>

<p>
<A name="RAC"><font size=+1><b>ORACLE RAC</b></font></A>
</p>

<font size=+1>
<li>Any special flags to run Oracle RAC?<br>
</font>
OCFS2 volumes containing the Voting diskfile (CRS), Cluster registry (OCR),
Data files, Redo logs, Archive logs and Control files must be mounted with the
<b><i>datavolume</i></b> and <b><i>nointr</i></b> mount options. The <i>datavolume</i>
option ensures that the Oracle processes open these files with the o_direct flag.
The <i>nointr</i> option ensures that the I/Os are not interrupted by signals.
<pre>
	# mount -o datavolume,nointr -t ocfs2 /dev/sda1 /u01/db
</pre>

<font size=+1>
<li>What about the volume containing Oracle home?<br>
</font>
Oracle home volume should be mounted normally, that is, without the <i>datavolume</i>
and <i>nointr</i> mount options. These mount options are only relevant for Oracle
files listed above.
<pre>
	# mount -t ocfs2 /dev/sdb1 /software/orahome
</pre>
Also as OCFS2 does not currently support shared writeable mmap, the health check (GIMH)
file <i>$ORACLE_HOME/dbs/hc_ORACLESID.dat</i> and the ASM file <i>$ASM_HOME/dbs/ab_ORACLESID.dat</i>
should be symlinked to a local filesystem. We expect to support shared writeable mmap in the
OCFS2 1.4 release.
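As an illustration of the symlinking mentioned above (the ORACLE_SID, paths and
file names below are placeholders), one could move the health check file to a
local filesystem and symlink it back:
<pre>
	# mv $ORACLE_HOME/dbs/hc_ORCL1.dat /var/opt/oracle/hc_ORCL1.dat
	# ln -s /var/opt/oracle/hc_ORCL1.dat $ORACLE_HOME/dbs/hc_ORCL1.dat
</pre>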

<font size=+1>
<li>Does that mean I cannot have my data file and Oracle home on the same volume?<br>
</font>
Yes. The volume containing the Oracle data files, redo-logs, etc. should never
be on the same volume as the distribution (including the trace logs like,
alert.log).<br>

<font size=+1>
<li>Any other information I should be aware of?<br>
</font>
The 1.2.3 release of OCFS2 does not update the modification time on the inode
across the cluster for non-extending writes. However, the time will be locally
updated in the cached inodes. This leads to one observing different times (ls -l)
for the same file on different nodes on the cluster.<br>
While this does not affect most uses of the filesystem, as one invariably changes the
file size during writes, the one usage where this is most commonly experienced is with
Oracle datafiles and redologs. This is because Oracle rarely resizes these files and
thus almost all writes are non-extending.<br>
In OCFS2 1.4, we intend to fix this by updating modification times for all writes
while providing an opt-out mount option (nocmtime) for users who would prefer
to avoid the performance overhead associated with this feature.<br>

<p>
<A name="MIGRATE"><font size=+1><b>MIGRATE DATA FROM OCFS (RELEASE 1) TO OCFS2</b></font></A>
</p>

<font size=+1>
<li>Can I mount OCFS volumes as OCFS2?<br>
</font>
No. OCFS and OCFS2 are not on-disk compatible. We had to break the compatibility
in order to add many of the new features. At the same time, we have added enough
flexibility in the new disk layout so as to maintain backward compatibility
in the future.<br>

<font size=+1>
<li>Can OCFS volumes and OCFS2 volumes be mounted on the same machine simultaneously?<br>
</font>
No. OCFS only works on 2.4 linux kernels (Red Hat's AS2.1/EL3 and SuSE's SLES8).
OCFS2, on the other hand, only works on the 2.6 kernels (RHEL4, SLES9 and SLES10).<br>

<font size=+1>
<li>Can I access my OCFS volume on 2.6 kernels (SLES9/SLES10/RHEL4)?<br>
</font>
Yes, you can access the OCFS volume on 2.6 kernels using FSCat tools, fsls and
fscp. These tools can access the OCFS volumes at the device layer, to list and
copy the files to another filesystem.  FSCat tools are available on oss.oracle.com.<br>

<font size=+1>
<li>Can I in-place convert my OCFS volume to OCFS2?<br>
</font>
No. The on-disk layout of OCFS and OCFS2 are sufficiently different that it
would require a third disk (as a temporary buffer) in order to in-place upgrade
the volume. With that in mind, it was decided not to develop such a tool but
instead provide tools to copy data from OCFS without one having to mount it.<br>

<font size=+1>
<li>What is the quickest way to move data from OCFS to OCFS2?<br>
</font>
Quickest would mean having to perform the minimal number of copies. If you have
the current backup on a non-OCFS volume accessible from the 2.6 kernel install,
then all you would need to do is to restore the backup on the OCFS2 volume(s).
If you do not have a backup but have a setup in which the system containing the
OCFS2 volumes can access the disks containing the OCFS volume, you can use the
FSCat tools to extract data from the OCFS volume and copy onto OCFS2.<br>

<p>
<A name="COREUTILS"><font size=+1><b>COREUTILS</b></font></A>
</p>

<font size=+1>
<li>Like with OCFS (Release 1), do I need to use o_direct enabled tools to
perform cp, mv, tar, etc.?<br>
</font>
No. OCFS2 does not need the o_direct enabled tools. The file system allows
processes to open files in both o_direct and buffered mode concurrently.<br>

<p>
<A name="NFS"><font size=+1><b>EXPORTING VIA NFS</b></font></A>
</p>

<font size=+1>
<li>Can I export an OCFS2 file system via NFS?<br>
</font>
Yes, you can export files on OCFS2 via the standard Linux NFS server. Please
note that only NFS version 3 and above will work. In practice, this means
clients need to be running a 2.4.x kernel or above.<br>

<font size=+1>
<li>Is there no solution for the NFS v2 clients?<br>
</font>
NFS v2 clients can work if the server exports the volumes with the
<i>no_subtree_check</i> option. However, this has some security implications
that are documented in the <i>exports</i> manpage.<br>
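For example, a hypothetical /etc/exports entry (the path and network below are
placeholders) would look like:
<pre>
	/ocfs2/data	192.168.1.0/24(rw,no_subtree_check)
</pre>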

<p>
<A name="TROUBLESHOOTING"><font size=+1><b>TROUBLESHOOTING</b></font></A>
</p>

<font size=+1>
<li>How do I enable and disable filesystem tracing?<br>
</font>
To list all the debug bits along with their statuses, do:
<pre>
	# debugfs.ocfs2 -l
</pre>
To enable tracing the bit SUPER, do:
<pre>
	# debugfs.ocfs2 -l SUPER allow
</pre>
To disable tracing the bit SUPER, do:
<pre>
	# debugfs.ocfs2 -l SUPER off
</pre>
To totally turn off tracing the SUPER bit, as in, turn off tracing even if
some other bit is enabled for the same, do:
<pre>
	# debugfs.ocfs2 -l SUPER deny
</pre>
To enable heartbeat tracing, do:
<pre>
	# debugfs.ocfs2 -l HEARTBEAT ENTRY EXIT allow
</pre>
To disable heartbeat tracing, do:
<pre>
	# debugfs.ocfs2 -l HEARTBEAT off ENTRY EXIT deny
</pre>

<font size=+1>
<li>How do I get a list of filesystem locks and their statuses?<br>
</font>
OCFS2 1.0.9+ has this feature. To get this list, do:
<ul>
<li>Mount debugfs at /debug (EL4) or /sys/kernel/debug (EL5).
<pre>
	# mount -t debugfs debugfs /debug
	- OR -
	# mount -t debugfs debugfs /sys/kernel/debug
</pre>
<li>Dump the locks.
<pre>
	# echo "fs_locks" | debugfs.ocfs2 /dev/sdX >/tmp/fslocks
</pre>
</ul>

<font size=+1>
<li>How do I read the fs_locks output?<br>
</font>
Let's look at a sample output:
<pre>
	Lockres: M000000000000000006672078b84822  Mode: Protected Read
	Flags: Initialized Attached
	RO Holders: 0  EX Holders: 0
	Pending Action: None  Pending Unlock Action: None
	Requested Mode: Protected Read  Blocking Mode: Invalid
</pre>
First thing to note is the Lockres, which is the lockname. The dlm identifies
resources using locknames. A lockname is a combination of a lock type
(S superblock, M metadata, D filedata, R rename, W readwrite), inode number
and generation.<br>
To get the inode number and generation from lockname, do:
<pre>
	#echo "stat <M000000000000000006672078b84822>" | debugfs.ocfs2 -n /dev/sdX
	Inode: 419616   Mode: 0666   Generation: 2025343010 (0x78b84822)
	....
</pre>
To map the lockname to a directory entry, do:
<pre>
	# echo "locate <M000000000000000006672078b84822>" | debugfs.ocfs2 -n /dev/sdX
	419616  /linux-2.6.15/arch/i386/kernel/semaphore.c
</pre>
One could also provide the inode number instead of the lockname.
<pre>
	# echo "locate <419616>" | debugfs.ocfs2 -n /dev/sdX
	419616  /linux-2.6.15/arch/i386/kernel/semaphore.c
</pre>
To get a lockname from a directory entry, do:
<pre>
	# echo "encode /linux-2.6.15/arch/i386/kernel/semaphore.c" | debugfs.ocfs2 -n /dev/sdX
	M000000000000000006672078b84822 D000000000000000006672078b84822 W000000000000000006672078b84822
</pre>
The first is the Metadata lock, then Data lock and last ReadWrite lock for the same resource.<br>
<br>
The DLM supports 3 lock modes: NL no lock, PR protected read and EX exclusive.<br>
<br>
If you have a dlm hang, the resource to look for would be one with the "Busy" flag set.<br>
<br>
The next step would be to query the dlm for the lock resource.<br>
<br>
Note: The dlm debugging is still a work in progress.<br>
<br>
To do dlm debugging, first one needs to know the dlm domain, which matches
the volume UUID.
<pre>
	# echo "stats" | debugfs.ocfs2 -n /dev/sdX | grep UUID: | while read a b ; do echo $b ; done
	82DA8137A49A47E4B187F74E09FBBB4B
</pre>
Then do:
<pre>
	# echo R dlm_domain lockname > /proc/fs/ocfs2_dlm/debug
</pre>
For example:
<pre>
	# echo R 82DA8137A49A47E4B187F74E09FBBB4B M000000000000000006672078b84822 > /proc/fs/ocfs2_dlm/debug
	# dmesg | tail
	struct dlm_ctxt: 82DA8137A49A47E4B187F74E09FBBB4B, node=79, key=965960985
	lockres: M000000000000000006672078b84822, owner=75, state=0 last used: 0, on purge list: no
	  granted queue:
	    type=3, conv=-1, node=79, cookie=11673330234144325711, ast=(empty=y,pend=n), bast=(empty=y,pend=n)
	  converting queue:
	  blocked queue:
</pre>
It shows that the lock is mastered by node 75 and that node 79 has been granted
a PR lock on the resource.<br>
<br>
This is just to give a flavor of dlm debugging.<br>

<p>
<A name="LIMITS"><font size=+1><b>LIMITS</b></font></A>
</p>

<font size=+1>
<li>Is there a limit to the number of subdirectories in a directory?<br>
</font>
Yes. OCFS2 currently allows up to 32000 subdirectories. While this limit could
be increased, we will not be doing it till we implement some kind of efficient
name lookup (htree, etc.).<br>

<font size=+1>
<li>Is there a limit to the size of an ocfs2 file system?<br>
</font>
Yes, current software addresses block numbers with 32 bits. So the file system
device is limited to (2 ^ 32) * blocksize (see mkfs -b). With a 4KB block size
this amounts to a 16TB file system. This block addressing limit will be relaxed
in future software. At that point the limit becomes addressing clusters of 1MB
each with 32 bits which leads to a 4PB file system.<br>

<p>
<A name="SYSTEMFILES"><font size=+1><b>SYSTEM FILES</b></font></A>
</p>

<font size=+1>
<li>What are system files?<br>
</font>
System files are used to store standard filesystem metadata like bitmaps,
journals, etc. Storing this information in files in a directory allows OCFS2
to be extensible. These system files can be accessed using debugfs.ocfs2.
To list the system files, do:<br>
<pre>
	# echo "ls -l //" | debugfs.ocfs2 -n /dev/sdX
        	18        16       1      2  .
        	18        16       2      2  ..
        	19        24       10     1  bad_blocks
        	20        32       18     1  global_inode_alloc
        	21        20       8      1  slot_map
        	22        24       9      1  heartbeat
        	23        28       13     1  global_bitmap
        	24        28       15     2  orphan_dir:0000
        	25        32       17     1  extent_alloc:0000
        	26        28       16     1  inode_alloc:0000
        	27        24       12     1  journal:0000
        	28        28       16     1  local_alloc:0000
        	29        3796     17     1  truncate_log:0000
</pre>
The first column lists the block number.<br>

<font size=+1>
<li>Why do some files have numbers at the end?<br>
</font>
There are two types of files, global and local. Global files are for all the
nodes, while local, like journal:0000, are node specific. The set of local
files used by a node is determined by the slot mapping of that node. The
number at the end of the system file name is the slot#. To list the slot maps, do:<br>
<pre>
	# echo "slotmap" | debugfs.ocfs2 -n /dev/sdX
       	Slot#   Node#
            0      39
       	    1      40
            2      41
       	    3      42
</pre>

<p>
<A name="HEARTBEAT"><font size=+1><b>HEARTBEAT</b></font></A>
</p>

<font size=+1>
<li>How does the disk heartbeat work?<br>
</font>
Every node writes every two secs to its block in the heartbeat system file.
The block offset is equal to its global node number. So node 0 writes to the
first block, node 1 to the second, etc. All the nodes also read the heartbeat
sysfile every two secs. As long as the timestamp is changing, that node is
deemed alive.<br>
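Assuming a debugfs.ocfs2 build that supports the hb command, the raw heartbeat
blocks can be inspected with:
<pre>
	# echo "hb" | debugfs.ocfs2 -n /dev/sdX
</pre>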

<font size=+1>
<li>When is a node deemed dead?<br>
</font>
An active node is deemed dead if it does not update its timestamp for
O2CB_HEARTBEAT_THRESHOLD (default=7) loops. Once a node is deemed dead, the
surviving node which manages to cluster lock the dead node's journal, recovers
it by replaying the journal.<br>

<font size=+1>
<li>What about self fencing?<br>
</font>
A node self-fences if it fails to update its timestamp for
((O2CB_HEARTBEAT_THRESHOLD - 1) * 2) secs. The [o2hb-xx] kernel thread, after
every timestamp write, sets a timer to panic the system after that duration.
If the next timestamp is written within that duration, as it should, it first
cancels that timer before setting up a new one. This way it ensures the system
will self fence if for some reason the [o2hb-x] kernel thread is unable to
update the timestamp and thus be deemed dead by other nodes in the cluster.<br>

<font size=+1>
<li>How can one change the parameter value of O2CB_HEARTBEAT_THRESHOLD?<br>
</font>
This parameter value could be changed by adding it to /etc/sysconfig/o2cb and
RESTARTING the O2CB cluster. This value should be the SAME on ALL the nodes
in the cluster.<br>
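For example, to set the value to 31 (see the next answer for how this number is
derived), one could add the line below to /etc/sysconfig/o2cb on every node and
then, after umounting all OCFS2 volumes, stop and start the cluster; the
sequence shown is only a sketch:
<pre>
	# grep THRESHOLD /etc/sysconfig/o2cb
	O2CB_HEARTBEAT_THRESHOLD=31
	# umount -at ocfs2
	# /etc/init.d/o2cb stop
	# /etc/init.d/o2cb start
</pre>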

<font size=+1>
<li>What should one set O2CB_HEARTBEAT_THRESHOLD to?<br>
</font>
It should be set to the timeout value of the io layer. Most multipath solutions
have a timeout ranging from 60 secs to 120 secs. For 60 secs, set it to 31.
For 120 secs, set it to 61.<br>
<pre>
	O2CB_HEARTBEAT_THRESHOLD = (((timeout in secs) / 2) + 1)
</pre>

<font size=+1>
<li>How does one check the current active O2CB_HEARTBEAT_THRESHOLD value?<br>
</font>
<pre>
	# cat /proc/fs/ocfs2_nodemanager/hb_dead_threshold
	7
</pre>

<font size=+1>
<li>What if a node umounts a volume?<br>
</font>
During umount, the node will broadcast to all the nodes that have mounted that
volume to drop that node from its node maps. As the journal is shutdown before
this broadcast, any node crash after this point is ignored as there is no need
for recovery.<br>

<font size=+1>
<li>I encounter "Kernel panic - not syncing: ocfs2 is very sorry to be fencing
this system by panicing" whenever I run a heavy io load?<br>
</font>
We have encountered a bug with the default <i>CFQ</i> io scheduler which causes
a process doing heavy io to temporarily starve out other processes. While this
is not fatal for most environments, it is for OCFS2 as we expect the hb thread
to read from and write to the hb area at least once every 12 secs (default).
<span style="color: #F00;">This bug has been addressed by Red Hat in RHEL4 U4 (2.6.9-42.EL) and Novell
in SLES9 SP3 (2.6.5-7.257).</span> If you wish to use the <i>DEADLINE</i> io
scheduler, you could do so by appending "elevator=deadline" to the kernel
command line as follows:<br><br>
<ul>
<li>For SLES9, edit the command line in /boot/grub/menu.lst.
<pre>
title Linux 2.6.5-7.244-bigsmp (with deadline)
	kernel (hd0,4)/boot/vmlinuz-2.6.5-7.244-bigsmp root=/dev/sda5
		vga=0x314 selinux=0 splash=silent resume=/dev/sda3 <b>elevator=deadline</b> showopts console=tty0 console=ttyS0,115200 noexec=off
	initrd (hd0,4)/boot/initrd-2.6.5-7.244-bigsmp
</pre>
<li>For RHEL4, edit the command line in /boot/grub/grub.conf:
<pre>
title Red Hat Enterprise Linux AS (2.6.9-22.EL) (with deadline)
	root (hd0,0)
	kernel /vmlinuz-2.6.9-22.EL ro root=LABEL=/ console=ttyS0,115200 console=tty0 <b>elevator=deadline</b> noexec=off
	initrd /initrd-2.6.9-22.EL.img
</pre>
</ul>
To see the current kernel command line, do:
<pre>
	# cat /proc/cmdline
</pre>

<p>
<A name="QUORUM"><font size=+1><b>QUORUM AND FENCING</b></font></A>
</p>

<font size=+1>
<li>What is a quorum?<br>
</font>
A quorum is a designation given to a group of nodes in a cluster which are
still allowed to operate on shared storage. It comes up when there is a
failure in the cluster which breaks the nodes up into groups which can
communicate in their groups and with the shared storage but not between groups.<br>

<font size=+1>
<li>How does OCFS2's cluster service define a quorum?<br>
</font>
The quorum decision is made by a single node based on the number of other nodes
that are considered alive by heartbeating and the number of other nodes that are
reachable via the network.<br>
A node has quorum when:<br>
<ul>
<li>it sees an odd number of heartbeating nodes and has network connectivity to
more than half of them.<br>
OR,<br>
<li>it sees an even number of heartbeating nodes and has network connectivity
to at least half of them *and* has connectivity to the heartbeating node with
the lowest node number.<br>
</ul>

<font size=+1>
<li>What is fencing?<br>
</font>
Fencing is the act of forcefully removing a node from a cluster. A node with
OCFS2 mounted will fence itself when it realizes that it doesn't have quorum
in a degraded cluster.  It does this so that other nodes won't get stuck trying
to access its resources. Currently OCFS2 will panic the machine when it
realizes it has to fence itself off from the cluster. As described above, it
will do this when it sees more nodes heartbeating than it has connectivity to
and fails the quorum test.<br>
<span style="color: #F00;">
Due to user reports of nodes hanging during fencing, OCFS2 1.2.5 no longer uses
"panic" for fencing. Instead, by default, it uses "machine restart".
This should not only prevent nodes from hanging during fencing but also allow
for nodes to quickly restart and rejoin the cluster. While this change is internal
in nature, we are documenting this so as to make users aware that they are no longer
going to see the familiar panic stack trace during fencing. Instead they will see the
message <i>"*** ocfs2 is very sorry to be fencing this system by restarting ***"</i>
and that too probably only as part of the messages captured on the netdump/netconsole
server.<br>
If perchance the user wishes to use panic to fence (maybe to see the familiar oops
stack trace or on the advise of customer support to diagnose frequent reboots),
one can do so by issuing the following command after the O2CB cluster is online.
<pre>
	# echo 1 > /proc/fs/ocfs2_nodemanager/fence_method
</pre>
Please note that this change is local to a node.
</span>

<font size=+1>
<li>How does a node decide that it has connectivity with another?<br>
</font>
When a node sees another come to life via heartbeating it will try and establish
a TCP connection to that newly live node. It considers that other node
connected as long as the TCP connection persists and the connection is not idle
for 10 seconds. Once that TCP connection is closed or idle it will not be
reestablished until heartbeat thinks the other node has died and come back alive.<br>

<font size=+1>
<li>How long does the quorum process take?<br>
</font>
First a node will realize that it doesn't have connectivity with another node.
This can happen immediately if the connection is closed but can take a maximum
of 10 seconds of idle time. Then the node must wait long enough to give
heartbeating a chance to declare the node dead. It does this by waiting two
iterations longer than the number of iterations needed to consider a node dead
(see the Heartbeat section of this FAQ). The current default of 7 iterations
of 2 seconds results in waiting for 9 iterations or 18 seconds. By default,
then, a maximum of 28 seconds can pass from the time a network fault occurs
until a node fences itself.<br>

<font size=+1>
<li>How can one prevent a node from panicking when one shuts down the other node
in a 2-node cluster?<br>
</font>
This typically means that the network is shutting down before all the OCFS2 volumes
are being umounted. Ensure the ocfs2 init script is enabled. This script ensures
that the OCFS2 volumes are umounted before the network is shutdown. To check whether
the service is enabled, do:
<pre>
       	# chkconfig --list ocfs2
       	ocfs2     0:off   1:off   2:on    3:on    4:on    5:on    6:off
</pre>

<font size=+1>
<li>How does one list out the startup and shutdown ordering of the OCFS2 related
services?<br>
</font>
<ul>
<li>To list the startup order for runlevel 3 on RHEL4, do:
<pre>
	# cd /etc/rc3.d
	# ls S*ocfs2* S*o2cb* S*network*
	S10network  S24o2cb  S25ocfs2
</pre>
<li>To list the shutdown order on RHEL4, do:
<pre>
	# cd /etc/rc6.d
	# ls K*ocfs2* K*o2cb* K*network*
	K19ocfs2  K20o2cb  K90network
</pre>
<li>To list the startup order for runlevel 3 on SLES9/SLES10, do:
<pre>
	# cd /etc/init.d/rc3.d
	# ls S*ocfs2* S*o2cb* S*network*
	S05network  S07o2cb  S08ocfs2
</pre>
<li>To list the shutdown order on SLES9/SLES10, do:
<pre>
	# cd /etc/init.d/rc3.d
	# ls K*ocfs2* K*o2cb* K*network*
	K14ocfs2  K15o2cb  K17network
</pre>
</ul>
Please note that the default ordering in the ocfs2 scripts only includes the
network service and not any shared-device specific service, like iscsi. If one
is using iscsi or any shared device requiring a service to be started and
shut down, please ensure that that service starts before and shuts down after the
ocfs2 init service.<br>

<p>
<A name="SLES"><font size=+1><b>NOVELL'S SLES9 and SLES10</b></font></A>
</p>

<font size=+1>
<li>Why are OCFS2 packages for SLES9 and SLES10 not made available on oss.oracle.com?<br>
</font>
OCFS2 packages for SLES9 and SLES10 are available directly from Novell as part of the
kernel. Same is true for the various Asianux distributions and for ubuntu.
As OCFS2 is now part of the
<a href="http://lwn.net/Articles/166954/">mainline kernel</a>, we expect more
distributions to bundle the product with the kernel.<br>

<font size=+1>
<li>What versions of OCFS2 are available with SLES9 and how do they match with
the Red Hat versions available on oss.oracle.com?<br>
</font>
As both Novell and Oracle ship OCFS2 on different schedules, the package versions
do not match. We expect this to resolve itself over time as the number of patch
fixes reduces. Novell is shipping two SLES9 releases, viz., SP2 and SP3.<br>
<ul>
<li>The latest kernel with the SP2 release is 2.6.5-7.202.7. It ships with OCFS2 1.0.8.
<li><span style="color: #F00;">The latest kernel with the SP3 release is 2.6.5-7.283. It ships with OCFS2 1.2.3.
Please contact Novell to get the latest OCFS2 modules on SLES9 SP3.</span>
</ul>

<font size=+1>
<li>What versions of OCFS2 are available with SLES10?
</font>
SLES10 is currently shipping OCFS2 1.2.3. SLES10 SP1 (beta) is currently shipping 1.2.5.

<p>
<A name="RELEASE1.2"><font size=+1><b>RELEASE 1.2</b></font></A>
</p>

<font size=+1>
<li>What is new in OCFS2 1.2?<br>
</font>
OCFS2 1.2 has two new features:
<ul>
<li>It is endian-safe. With this release, one can mount the same volume concurrently
on x86, x86-64, ia64 and big endian architectures ppc64 and s390x.
<li>Supports readonly mounts. The fs uses this feature to auto remount ro when
encountering on-disk corruptions (instead of panicking).
</ul>

<font size=+1>
<li>Do I need to re-make the volume when upgrading?<br>
</font>
No. OCFS2 1.2 is fully on-disk compatible with 1.0.<br>

<font size=+1>
<li>Do I need to upgrade anything else?<br>
</font>
Yes, the tools need to be upgraded to ocfs2-tools 1.2. ocfs2-tools 1.0 will
not work with OCFS2 1.2 nor will 1.2 tools work with 1.0 modules.<br>

<p>
<A name="UPGRADE"><font size=+1><b>UPGRADE TO THE LATEST RELEASE</b></font></A>
</p>

<font size=+1>
<li>How do I upgrade to the latest release?<br>
</font>
<ul>
<li>Download the latest ocfs2-tools and ocfs2console for the target platform and
the appropriate ocfs2 module package for the kernel version, flavor and architecture.
(For more, refer to the "Download and Install" section above.)<br><br>
<li>Umount all OCFS2 volumes.
<pre>
	# umount -at ocfs2
</pre>
<li>Shutdown the cluster and unload the modules.<br>
<pre>
	# /etc/init.d/o2cb offline
	# /etc/init.d/o2cb unload
</pre>
<li>If required, upgrade the tools and console.
<pre>
	# rpm -Uvh ocfs2-tools-1.2.2-1.i386.rpm ocfs2console-1.2.2-1.i386.rpm
</pre>
<li>Upgrade the module.
<pre>
	# rpm -Uvh ocfs2-2.6.9-42.0.3.ELsmp-1.2.4-2.i686.rpm
</pre>
<li>Ensure init services ocfs2 and o2cb are enabled.
<pre>
	# chkconfig --add o2cb
	# chkconfig --add ocfs2
</pre>
<li>To check whether the services are enabled, do:
<pre>
	# chkconfig --list o2cb
	o2cb      0:off   1:off   2:on    3:on    4:on    5:on    6:off
	# chkconfig --list ocfs2
	ocfs2     0:off   1:off   2:on    3:on    4:on    5:on    6:off
</pre>
<li>To update the cluster timeouts, do:
<pre>
	# /etc/init.d/o2cb configure
</pre>
<li>At this stage one could either reboot the node or simply restart the cluster
and mount the volume.
</ul>

<span style="color: #F00;">
<font size=+1>
<li>Can I do a rolling upgrade from 1.2.3 to 1.2.4?<br>
</font>
No. The network protocol had to be updated in 1.2.4 to allow for
proper reference counting of lockres' across the cluster. This fix
was necessary to fix races encountered during lockres purge and migrate.
Effectively, one cannot run 1.2.4 on one node while another node
is still on an earlier release (1.2.3 or older).
<br>
</span>

<span style="color: #F00;">
<font size=+1>
<li>Can I do a rolling upgrade from 1.2.4 to 1.2.5?<br>
</font>
No. The network protocol had to be updated in 1.2.5 to ensure all nodes were
using the same O2CB timeouts. Effectively, one cannot run 1.2.5 on one node
while another node is still on an earlier release. (For the record, the
protocol remained the same between 1.2.0 to 1.2.3 before changing in 1.2.4
and 1.2.5.)
</span>

<font size=+1>
<li>After upgrading, I get the following error on mount: "mount.ocfs2: Invalid argument while mounting /dev/sda6 on /ocfs".<br>
</font>
Do "dmesg | tail". If you see the error:
<pre>
ocfs2_parse_options:523 ERROR: Unrecognized mount option "heartbeat=local" or missing value
</pre>
it means that you are trying to use the 1.2 tools and 1.0 modules. Ensure that you
have unloaded the 1.0 modules and installed and loaded the 1.2 modules. Use modinfo
to determine the version of the module installed and/or loaded.<br>
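For example, to check the version of the installed ocfs2 module (output will
vary), do:
<pre>
	# modinfo ocfs2 | grep -i version
</pre>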

<font size=+1>
<li>The cluster fails to load. What do I do?<br>
</font>
Check "demsg | tail" for any relevant errors. One common error is as follows:
<pre>
SELinux: initialized (dev configfs, type configfs), not configured for labeling audit(1139964740.184:2): avc:  denied  { mount } for  ...
</pre>
The above error indicates that you have SELinux activated. A bug in SELinux
does not allow configfs to mount. Disable SELinux by setting "SELINUX=disabled"
in /etc/selinux/config. The change takes effect on reboot.<br>
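For example, after the edit, /etc/selinux/config should contain a line like:
<pre>
	SELINUX=disabled
</pre>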

<p>
<A name="PROCESSES"><font size=+1><b>PROCESSES</b></font></A>
</p>

<font size=+1>
<li>List and describe all OCFS2 threads?<br>
</font>
<dl>

<dt>[o2net]
<dd>One per node. Is a workqueue thread started when the cluster is brought
online and stopped when offline. It handles the network communication for all
threads. It gets the list of active nodes from the o2hb thread and sets up
tcp/ip communication channels with each active node. It sends regular keepalive
packets to detect any interruption on the channels.

<dt>[user_dlm]
<dd>One per node. Is a workqueue thread started when dlmfs is loaded and stopped
on unload. (dlmfs is an in-memory file system which allows user space processes
to access the dlm in kernel to lock and unlock resources.) Handles lock downconverts
when requested by other nodes.

<dt>[ocfs2_wq]
<dd>One per node. Is a workqueue thread started when ocfs2 module is loaded
and stopped on unload. Handles blockable file system tasks like truncate
log flush, orphan dir recovery and local alloc recovery, which involve taking
dlm locks. Various code paths queue tasks to this thread. For example,
ocfs2rec queues orphan dir recovery so that while the task is kicked off as
part of recovery, its completion does not affect the recovery time.

<dt>[o2hb-14C29A7392]
<dd>One per heartbeat device. Is a kernel thread started when the heartbeat
region is populated in configfs and stopped when it is removed. It writes
every 2 secs to its block in the heartbeat region to indicate to other nodes
that that node is alive. It also reads the region to maintain a nodemap of
live nodes. It notifies o2net and dlm any changes in the nodemap.

<dt>[ocfs2vote-0]
<dd>One per mount. Is a kernel thread started when a volume is mounted and
stopped on umount. It downgrades locks when requested by other nodes in response
to blocking ASTs (BASTs). It also fixes up the dentry cache in response to
files unlinked or renamed on other nodes.

<dt>[dlm_thread]
<dd>One per dlm domain. Is a kernel thread started when a dlm domain is created
and stopped when destroyed. This is the core dlm which maintains the list of
lock resources and handles the cluster locking infrastructure.

<dt>[dlm_reco_thread]
<dd>One per dlm domain. Is a kernel thread which handles dlm recovery whenever
a node dies. If the node is the dlm recovery master, it remasters all the locks
owned by the dead node.

<dt>[dlm_wq]
<dd>One per dlm domain. Is a workqueue thread. o2net queues dlm tasks on this thread.

<dt>[kjournald]
<dd>One per mount. Is used as OCFS2 uses JBD for journalling.

<dt>[ocfs2cmt-0]
<dd>One per mount. Is a kernel thread started when a volume is mounted and
stopped on umount. Works in conjunction with kjournald.

<dt>[ocfs2rec-0]
<dd>Is started whenever another node needs to be recovered. This could be
either on mount when it discovers a dirty journal or during operation when hb
detects a dead node. ocfs2rec handles the file system recovery and it runs
after the dlm has finished its recovery.
</dl>

<p>
<A name="BUILD"><font size=+1><b>BUILD RPMS FOR HOTFIX KERNELS</b></font></A>
</p>

<font size=+1>
<li>How to build OCFS2 packages for a hotfix kernel?<br>
</font>
<ul>
<li>Download and install all the kernel-devel packages for the hotfix kernel.
<li>Download and untar the OCFS2 source tarball.
<pre>
	# cd /tmp
	# wget http://oss.oracle.com/projects/ocfs2/dist/files/source/v1.2/ocfs2-1.2.3.tar.gz
	# tar -zxvf ocfs2-1.2.3.tar.gz
</pre>
<li> Ensure rpmbuild is installed and ~/.rpmmacros contains the proper paths.
<pre>
	# cat ~/.rpmmacros
	%_topdir        /home/jdoe/rpms
	%_tmppath       /home/jdoe/rpms/tmp
	%_sourcedir     /home/jdoe/rpms/SOURCES
	%_specdir       /home/jdoe/rpms/SPECS
	%_srcrpmdir     /home/jdoe/rpms/SRPMS
	%_rpmdir        /home/jdoe/rpms/RPMS
	%_builddir      /home/jdoe/rpms/BUILD
</pre>
<li> Configure and make.
<pre>
	# ./configure --with-kernel=/usr/src/kernels/2.6.9-42.X.EL_rpm
	# make rhel4_2.6.9-42.X.EL_rpm
</pre>
</ul>
The packages will be in %_rpmdir.
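With the example ~/.rpmmacros shown above, %_rpmdir is /home/jdoe/rpms/RPMS. (The
architecture subdirectory and the exact package names depend on the platform and the
kernel flavors built.)
<pre>
	# ls -R /home/jdoe/rpms/RPMS
</pre>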

<font size=+1>
<li>Are the self-built packages officially supported by Oracle Support?<br>
</font>
No. Oracle Support does not provide support for self-built modules. If you want
officially supported packages, contact Oracle via Support or the ocfs2-users mailing
list with a link to the hotfix kernel (kernel-devel and kernel-src rpms).<br>

<p>
<A name="BACKUPSB"><font size=+1><b>BACKUP SUPER BLOCK</b></font></A>
</p>

<font size=+1>
<li>What is a Backup Super block?
</font>
A backup super block is a copy of the super block. As the super block is
typically located close to the start of the device, it is susceptible to
being overwritten, say, by an errant write (dd if=file of=/dev/sdX). Moreover,
as the super block stores critical information that is hard to recreate,
it becomes important to back up the block and use it when the super block
gets corrupted.
<br>
 
<font size=+1>
<li>Where are the backup super blocks located?
</font>
In OCFS2, the super block is backed up to blocks at the 1G, 4G, 16G, 64G,
256G and 1T byte offsets. The actual number of backups depends on the size
of the device. It should be noted that the super block is not backed up on
devices smaller than 1G.
<br>
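As the backup locations are byte offsets, the corresponding block numbers depend on
the block size. For example, assuming a 4KB block size, the first three offsets map
to the block numbers seen in the examples later in this section:
<pre>
	 1G / 4KB =  262144
	 4G / 4KB = 1048576
	16G / 4KB = 4194304
</pre>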

<font size=+1>
<li>How does one enable this feature?
</font>
mkfs.ocfs2 1.2.3 or later automatically backs up super blocks on devices
larger than 1G. One can disable this by using the --no-backup-super option.
<br>
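For example, to format a volume without the backup super blocks (an illustrative
invocation; other mkfs.ocfs2 options omitted):
<pre>
	# mkfs.ocfs2 --no-backup-super -L "myvolume" /dev/sdX
</pre>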

<font size=+1>
<li>How do I detect whether the super blocks are backed up on a device?
</font>
<pre>
	# debugfs.ocfs2 -R "stats" /dev/sdX | grep "Feature Compat"
        	Feature Compat: 1 BackupSuper
</pre>

<font size=+1>
<li>How do I back up the super block on a device formatted by an older mkfs.ocfs2?
</font>
tunefs.ocfs2 1.2.3 or later can attempt to retroactively back up the super block.
<pre>
	# tunefs.ocfs2 --backup-super /dev/sdX
	tunefs.ocfs2 1.2.3
	Adding backup superblock for the volume
	Proceed (y/N): y
	Backed up Superblock.
	Wrote Superblock
</pre>
However, it is quite possible that one or more backup locations are in use
by the file system. (tunefs.ocfs2 backs up the block only if all the backup
locations are unused.)
<pre>
	# tunefs.ocfs2 --backup-super /dev/sdX
	tunefs.ocfs2 1.2.3
	tunefs.ocfs2: block 262144 is in use.
	tunefs.ocfs2: block 4194304 is in use.
	tunefs.ocfs2: Cannot enable backup superblock as backup blocks are in use
</pre>
If so, use the
<a href=http://oss.oracle.com/projects/ocfs2-tools/dist/files/extras/verify_backup_super>verify_backup_super</a>
script to list out the objects using these blocks.
<pre>
	# ./verify_backup_super /dev/sdX
	Locating inodes using blocks 262144 1048576 4194304 on device /dev/sdX
        	Block#            Inode             Block Offset   
        	262144            27                65058          
        	1048576           Unused                           
        	4194304           4161791           25             
	Matching inodes to object names
        	27      //journal:0003
        	4161791 /src/kernel/linux-2.6.19/drivers/scsi/BusLogic.c
</pre>
If the object happens to be user created, move that object temporarily to
another volume before re-attempting the operation. However, this will not work
if one or more blocks are being used by a system file (shown starting with
double slashes //), say, a journal.

<font size=+1>
<li>How do I ask fsck.ocfs2 to use a backup super block?
</font>
To recover a volume using the second backup super block, do:
<pre>
	# fsck.ocfs2 -f -r 2 /dev/sdX
	[RECOVER_BACKUP_SUPERBLOCK] Recover superblock information from backup block#1048576? <n> n
	Checking OCFS2 filesystem in /dev/sdX
  	label:              myvolume
  	uuid:               4d 1d 1f f3 24 01 4d 3f 82 4c e2 67 0c b2 94 f3 
  	number of blocks:   13107196
  	bytes per block:    4096
  	number of clusters: 13107196
  	bytes per cluster:  4096
  	max slots:          4

	/dev/sdX was run with -f, check forced.
	Pass 0a: Checking cluster allocation chains
	Pass 0b: Checking inode allocation chains
	Pass 0c: Checking extent block allocation chains
	Pass 1: Checking inodes and blocks.
	Pass 2: Checking directory entries.
	Pass 3: Checking directory connectivity.
	Pass 4a: checking for orphaned inodes
	Pass 4b: Checking inodes link counts.
	All passes succeeded.
</pre>
For more, refer to the man pages.

<p>
<A name="TIMEOUT"><font size=+1><b>CONFIGURING CLUSTER TIMEOUTS</b></font></A>
</p>

<font size=+1>
<li>List and describe all the configurable timeouts in the O2CB cluster stack?
</font>
OCFS2 1.2.5 has 4 different configurable O2CB cluster timeouts:
<ul>
<li><b>O2CB_HEARTBEAT_THRESHOLD</b> - The Disk Heartbeat timeout is the number of two
second iterations before a node is considered dead. The exact formula used to
convert the timeout in seconds to the number of iterations is as follows:
<pre>
        O2CB_HEARTBEAT_THRESHOLD = (((timeout in seconds) / 2) + 1)
</pre>
For example, to specify a 60 sec timeout, set it to 31. For 120 secs, set it to 61.
The default is 12 secs (O2CB_HEARTBEAT_THRESHOLD = 7).

<li><b>O2CB_IDLE_TIMEOUT_MS</b> - The Network Idle timeout specifies the time in milliseconds
before a network connection is considered dead. The default is 10000 ms.

<li><b>O2CB_KEEPALIVE_DELAY_MS</b> - The Network Keepalive delay specifies the maximum
delay in milliseconds before a keepalive packet is sent. That is, a keepalive packet
is sent if a network connection between two nodes has been silent for this duration.
If the other node is alive and connected, it is expected to respond. The default
is 5000 ms.

<li><b>O2CB_RECONNECT_DELAY_MS</b> - The Network Reconnect delay specifies the minimum
delay in milliseconds between connection attempts. The default is 2000 ms.
</ul>

<font size=+1>
<li>What are the recommended timeout values?
</font>
As timeout values depend on the hardware being used, there is no one set
of recommended values. For example, users of multipath I/O should set the disk
heartbeat timeout to at least 60 secs (threshold 31), if not 120 secs (threshold 61).
Similarly, users of network bonding should set the network idle timeout to at least
30 secs, if not 60 secs.

<font size=+1>
<li>What were the timeouts set to during OCFS2 1.2.5 release testing?
</font>
The timeouts used during release testing were as follows:
<pre>
	O2CB_HEARTBEAT_THRESHOLD = 31
	O2CB_IDLE_TIMEOUT_MS = 30000
	O2CB_KEEPALIVE_DELAY_MS = 2000
	O2CB_RECONNECT_DELAY_MS = 2000
</pre>
<span style="color: #F00;">
The default cluster timeouts in OCFS2 1.2.6 for EL5 have been set to the above.
<b>The upcoming release, OCFS2 1.2.7 (EL4 and EL5), will make the above values the default
for both EL4 and EL5.</b></span><br>

<font size=+1>
<li>Can one change these timeout values in a rolling fashion, one node at a time?
</font>
No. The o2net handshake protocol ensures that the timeout values on both ends of
a connection are consistent, and fails the connection if any value differs. The
failed connection results in a failed mount, the reason for which is always listed
in dmesg.

<font size=+1>
<li>How does one set these O2CB timeouts?
</font>
Umount all OCFS2 volumes and shut down the O2CB cluster. If not already done,
upgrade to OCFS2 1.2.5 and OCFS2 TOOLS 1.2.4. Then use o2cb configure to
set the new values. Do the same on all nodes. Start mounting volumes only
after the timeouts have been set on all nodes.
<pre>
	# service o2cb configure
	Configuring the O2CB driver.

	This will configure the on-boot properties of the O2CB driver.
	The following questions will determine whether the driver is loaded on
	boot.  The current values will be shown in brackets ('[]').  Hitting
	<ENTER> without typing an answer will keep that current value.  Ctrl-C
	will abort.

	Load O2CB driver on boot (y/n) [n]: y
	Cluster to start on boot (Enter "none" to clear) []: mycluster
<b>	Specify heartbeat dead threshold (>=7) [7]: 31
	Specify network idle timeout in ms (>=5000) [10000]: 30000
	Specify network keepalive delay in ms (>=1000) [5000]: 2000
	Specify network reconnect delay in ms (>=2000) [2000]: 2000
</b>	Writing O2CB configuration: OK
	Starting O2CB cluster mycluster: OK
</pre>

<font size=+1>
<li>How to find the O2CB timeout values in effect?
</font>
<pre>
	# /etc/init.d/o2cb status
	Module "configfs": Loaded
	Filesystem "configfs": Mounted
	Module "ocfs2_nodemanager": Loaded
	Module "ocfs2_dlm": Loaded
	Module "ocfs2_dlmfs": Loaded
	Filesystem "ocfs2_dlmfs": Mounted
	Checking O2CB cluster mycluster: Online
<b>	  Heartbeat dead threshold: 31
	  Network idle timeout: 30000
	  Network keepalive delay: 2000
	  Network reconnect delay: 2000
</b>	Checking O2CB heartbeat: Not active
</pre>
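When the cluster is online, the same values can also be read directly from configfs.
(The paths below assume the newer configfs mountpoint, /sys/kernel/config, described in
the EL5 section below, and a cluster named mycluster; on EL4 configfs is mounted at /config.)
<pre>
	# cat /sys/kernel/config/cluster/mycluster/heartbeat/dead_threshold
	31
	# cat /sys/kernel/config/cluster/mycluster/idle_timeout_ms
	30000
	# cat /sys/kernel/config/cluster/mycluster/keepalive_delay_ms
	2000
	# cat /sys/kernel/config/cluster/mycluster/reconnect_delay_ms
	2000
</pre>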

<font size=+1>
<li>Where are the O2CB timeout values stored?
</font>
<pre>
	# cat /etc/sysconfig/o2cb 
	#
	# This is a configuration file for automatic startup of the O2CB
	# driver.  It is generated by running /etc/init.d/o2cb configure.
	# Please use that method to modify this file
	#

	# O2CB_ENABLED: 'true' means to load the driver on boot.
	O2CB_ENABLED=true

	# O2CB_BOOTCLUSTER: If not empty, the name of a cluster to start.
	O2CB_BOOTCLUSTER=mycluster

<b>	# O2CB_HEARTBEAT_THRESHOLD: Iterations before a node is considered dead.
	O2CB_HEARTBEAT_THRESHOLD=31

	# O2CB_IDLE_TIMEOUT_MS: Time in ms before a network connection is considered dead.
	O2CB_IDLE_TIMEOUT_MS=30000

	# O2CB_KEEPALIVE_DELAY_MS: Max time in ms before a keepalive packet is sent
	O2CB_KEEPALIVE_DELAY_MS=2000

	# O2CB_RECONNECT_DELAY_MS: Min time in ms between connection attempts
	O2CB_RECONNECT_DELAY_MS=2000
</b>
</pre>

<span style="color: #F00;">
<p>
<A name="EL5"><font size=+1><b>ENTERPRISE LINUX 5</b></font></A>
</p>

<font size=+1>
<li>What are the changes in EL5 as compared to EL4 as they pertain to OCFS2?
</font>
<ul>
<li>The in-memory filesystems, configfs and debugfs, have new mountpoints.
configfs is mounted at /sys/kernel/config, instead of /config, and debugfs at /sys/kernel/debug,
instead of /debug. (dlmfs still mounts at the old mountpoint, /dlm.) See the mount example after this list.
<li>While not related to EL5 per se, the default O2CB cluster timeouts for OCFS2 on EL5
differ from those on EL4. This difference is temporary: in the next release of OCFS2 (1.2.7),
the same timeouts will become the defaults for both EL4 and EL5.
</ul>
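The new mountpoints can be confirmed on a running EL5 system. (Illustrative output; the
exact mount options may differ.)
<pre>
	# mount | egrep "configfs|debugfs|dlmfs"
	configfs on /sys/kernel/config type configfs (rw)
	debugfs on /sys/kernel/debug type debugfs (rw)
	ocfs2_dlmfs on /dlm type ocfs2_dlmfs (rw)
</pre>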

</span>
</ol>
</html>