File: README.md

# Table of Contents

- [What is OpenStack Cluster Installer (OCI)](#what-is-openstack-cluster-installer-oci)
  * [General description](#general-description)
  * [What OpenStack services can OCI install?](#what-openstack-services-can-oci-install)
  * [Who initiated the project? Who are the main contributors?](#who-initiated-the-project-who-are-the-main-contributors)
  * [Video presentation](#video-presentation)
- [How to install your puppet-master/PXE server](#how-to-install-your-puppet-masterpxe-server)
  * [Minimum requirements](#minimum-requirements)
  * [Installing the package](#installing-the-package)
  * [Installing side services](#installing-side-services)
  * [Getting ready to install servers](#getting-ready-to-install-servers)
- [Using OCI](#using-oci)
  * [Booting-up servers](#booting-up-servers)
  * [Creating Swift regions, locations, networks, roles and clusters](#creating-swift-regions-locations-networks-roles-and-clusters)
  * [Real certificate for the API](#real-certificate-for-the-api)
  * [Enrolling servers in a cluster](#enrolling-servers-in-a-cluster)
  * [Calculating the Swift ring](#calculating-the-swift-ring)
  * [Installing servers](#installing-servers)
  * [Checking your installation](#checking-your-installation)
  * [Enabling Swift object encryption](#enabling-swift-object-encryption)
  * [Fixing useless node1 in corosync](#fixing-useless-node1-in-corosync)
  * [Fixing ceph -s](#fixing-ceph--s)
  * [Initial cluster setup variable](#initial-cluster-setup-variable)
  * [Adding other types of nodes](#adding-other-types-of-nodes)
- [Advanced usage](#advanced-usage)
  * [Using custom NTP servers](#using-custom-ntp-servers)
  * [Using automated IPMI address configuration](#using-automated-ipmi-address-configuration)
  * [Automatic upgrade of BIOS and IPMI firmware](#automatic-upgrade-of-bios-and-ipmi-firmware)
  * [Customizing the /etc/hosts of all your cluster](#customizing-the-etchosts-of-all-your-cluster)
  * [Customizing the ENC](#customizing-the-enc)
  * [Customizing installed server at setup time](#customizing-installed-server-at-setup-time)
  * [Using a BGP VIP](#using-a-bgp-vip)
  * [Doing a test in OCI's manifests for debug purpose](#doing-a-test-in-ocis-manifests-for-debug-purpose)
  * [Customizing files and packages in your servers](#customizing-files-and-packages-in-your-servers)
  * [Once deployment is ready](#once-deployment-is-ready)
  * [Fixing-up the controllers](#fixing-up-the-controllers)
  * [Adding custom firewall rules](#adding-custom-firewall-rules)
  * [Adding compute nodes](#adding-compute-nodes)
  * [Adding GPU support in a compute node](#adding-gpu-support-in-a-compute-node)
  * [Multiple Cinder LVM backends](#multiple-cinder-lvm-backends)
  * [Customizing the number of workers](#customizing-the-number-of-workers)
- [Advanced automation](#advanced-automation)
  * [Hands off fully-automated installation](#hands-off-fully-automated-installation)
  * [Auto racking](#auto-racking)
  * [Hardware profiles](#hardware-profiles)
  * [DNS plugin](#dns-plugin)
  * [Root password plugin](#root-password-plugin)
  * [Monitoring plugin](#monitoring-plugin)
- [Managing the OpenStack deployment](#managing-the-openstack-deployment)
  * [DNS inside OpenStack VMs](#dns-inside-openstack-vms)
  * [Enabling cloudkitty rating](#enabling-cloudkitty-rating)
  * [Writing custom pollsters](#writing-custom-pollsters)
  * [Installing a first OpenStack image](#installing-a-first-openstack-image)
  * [Setting-up networking](#setting-up-networking)
  * [Adding an ssh key](#adding-an-ssh-key)
  * [Creating flavor](#creating-flavor)
  * [Boot a VM](#boot-a-vm)
  * [Add Octavia service](#add-octavia-service)
  * [Setting-up no limits for services resources](#setting-up-no-limits-for-services-resources)
  * [Add Magnum service](#add-magnum-service)
  * [Replacing a broken server](#replacing-a-broken-server)
  * [Secure boot and dkms](#secure-boot-and-dkms)
- [Using Telemetry and Rating](#using-telemetry-and-rating)
  * [Add billing of instances](#add-billing-of-instances)
  * [Configuring a custom metric and billing](#configuring-a-custom-metric-and-billing)
  * [Other metrics billing](#other-metrics-billing)
- [Deploying Designate](#deploying-designate)
- [Using multi region with an external keystone](#using-multi-region-with-an-external-keystone)
  * [General external keystone and multi region considerations](#general-external-keystone-and-multi-region-considerations)
  * [Configuring a cluster to use an external keystone](#configuring-a-cluster-to-use-an-external-keystone)
  * [Setting-up the external Keystone instance](#setting-up-the-external-keystone-instance)
- [Upgrading the OCI PKI setup](#upgrading-the-oci-pki-setup)
  * [How is the OCI PKI done](#how-is-the-oci-pki-done)
  * [Result with the new setup](#result-with-the-new-setup)
  * [What got fixed](#what-got-fixed)
  * [How to upgrade](#how-to-upgrade)
- [Using OCI PoC Package for Fun and Profit](#using-oci-poc-package-for-fun-and-profit)
  * [Installation of the PoC package](#installation-of-the-poc-package)
  * [Dependency on ikvswitch](#dependency-on-ikvswitch)
  * [Configuring the host to access OCI](#configuring-the-host-to-access-oci)
  * [Fully automated run](#fully-automated-run)
  * [Creating the oci-PoC image](#creating-the-oci-poc-image)
  * [Starting-up VMs](#starting-up-vms)
  * [Installing the PoC cluster](#installing-the-poc-cluster)
  * [Provisionning images, flavors, octavia, networking and all, inside OpenStack](#provisionning-images-flavors-octavia-networking-and-all-inside-openstack)
  * [Running tempest functional tests](#running-tempest-functional-tests)
  * [Testing OCI patches](#testing-oci-patches)
  * [Cluster save and restore](#cluster-save-and-restore)
- [Hardware compatibility list (HCL)](#hardware-compatibility-list)
  * [Dell servers](#dell-servers)
  * [Gigabyte](#gigabyte)
  * [HP servers](#hp-servers)
  * [Lenovo](#lenovo)
  * [Supermicro](#supermicro)
- [Upgrading](#upgrading)
  * [From stretch-rocky to buster-rocky](#from-stretch-rocky-to-buster-rocky)
  * [From bullseye-zed to bookworm-zed](#from-bullseye-zed-to-bookworm-zed)
  * [Upgrading volume nodes](#upgrading-volume-nodes)
  * [Upgrading compute nodes](#upgrading-compute-nodes)
  * [Upgrading from one OpenStack release to the next](#upgrading-from-one-openstack-release-to-the-next)
  * [Upgrading to libvirt and NoVNC over TLS](#upgrading-to-libvirt-and-novnc-over-tls)


# What is OpenStack Cluster Installer (OCI)

### General description

OCI (OpenStack Cluster Installer) is software to provision OpenStack
clusters automatically. This package installs a provisioning machine, which
uses the below components:
- a DHCP server (isc-dhcp-server)
- a PXE boot server (tftp-hpa)
- a web server (apache2)
- a puppet-master

Once computers in the cluster boot for the first time, a Debian live system
is served by OCI over PXE, to act as a discovery image. This live system then
reports the hardware features back to OCI. Computers can then be installed with
Debian from that live system, configured with a puppet-agent that will connect
to the puppet-master of OCI. After Debian is installed, the server reboots, and
OpenStack services are provisioned, depending on the server role in the cluster.

OCI is fully packaged in Debian, including all of the Puppet modules. After
installing the OCI package and its dependencies, no other artifact needs to be
installed on your provisioning server, meaning that if a local Debian mirror
is available, the OpenStack cluster installation can be done completely
offline.

### What OpenStack services can OCI install?

Currently, OCI can install:
- Swift (with optional dedicated proxy nodes)
- Keystone
- Cinder (LVM or Ceph backend)
- Glance (File, Swift or Ceph backend, Swift can be external)
- Heat
- Horizon
- Manila
- Nova (with GPU support)
- Neutron
- Barbican
- Octavia
- Telemetry (Ceilometer, Gnocchi, Panko, Aodh)
- Cloudkitty
- Designate

There's currently ongoing effort to integrate:
- Magnum

Also, OCI now supports running Ceph OSD on compute nodes (which is what is
called "hyper-converged") as an option for each compute node.

All of this is done in a highly available way, using haproxy and corosync
on the controller nodes for all services.

All services are fully using TLS, even within the cluster.

As a general rule, OCI checks what types of nodes are part of the cluster,
and takes decisions depending on that. For example, if there
are some Ceph OSD nodes, OCI will use Ceph as a backend for Glance, Nova
and Cinder backup.
If there are some Cinder volume nodes, OCI will use them with the LVM
backend. If there are some Swiftstore nodes, Swift will be used for backups
and Glance images. If there are some Ceph OSD nodes, but
no dedicated Ceph MON nodes, the controllers will act as Ceph monitors.
If there are some compute nodes, then Cinder, Nova and Neutron will be
installed on the controller nodes. Etc.

The minimum number of controller nodes is 3, though it is possible, with
a bit of hacking, to install the 3 controllers on VMs on a single server
(of course, losing the high availability feature if the hardware fails).

OCI can setup the below list of roles (node type):
- controller (API server)
- compute
- network
- volume (Cinder LVM nodes)
- sql (MariaDB Galera cluster for controller nodes)
- messaging
- sqlmsg (MariaDB Galera cluster for messaging nodes)
- cephmon
- cephosd
- billmon (Ceph MON for billing)
- billosd (Ceph OSD for billing)
- swiftproxy
- swiftstore
- debmirror
- tempest
- dns (bind9 nodes for Designate)
- radosgw (Ceph RADOS gateway)
- elastic (for testing telemetry+cloudkitty purpose only)


### Who initiated the project? Who are the main contributors?

OCI has been written from scratch by Thomas Goirand (zigo). The work is
fully sponsored by Infomaniak Networks, who is using it in production
in reasonably large clusters. There have been some sporadic contributions
within Infomaniak, plus a few patches from external contributors, but
no major features (yet). Hopefully, this project, over time, will gather
more contributors.


### Video presentation

If you wish to have a quick presentation of what OCI can do, to see if
it fits your needs, you can watch the presentation made for the OpenStack
summit in November 2020. It's not long (19 minutes):

[![OCI presentation](https://img.youtube.com/vi/Q25jT2fYDjc/0.jpg)](https://www.youtube.com/watch?v=Q25jT2fYDjc)


# How to install your puppet-master/PXE server

## Minimum requirements

OCI itself will run fine with about 20 GB of HDD and a few GB of RAM.
However, to install OpenStack, you will need at least 3 controllers with
a minimum of 16 GB of RAM; 32 GB is recommended, and 64 GB of RAM is best.
If you want Ceph, a minimum of 3 Ceph OSD nodes is needed, though this only
really starts to matter once your cluster reaches 100 disks. The Ceph
recommendation is that any single server going down doesn't remove more than
10% of the total capacity, so 10 OSD servers is a nice start. As for Swift,
the minimum number of servers would be 3, but then if one fails, you'll get
some timeouts, so it's probably best to start with at least 6 Swift storage
nodes, and maybe 2 proxies. For the other resources, it's really up to you:
a few computes, and probably 2 network nodes and some volume nodes.

If you intend to run the openstack-cluster-installer-poc package to do some
OCI development in a virtualized environment, we recommend a single server
with 1 TB of HDD and 256 GB of RAM. This configuration is enough to
provision 19 VMs where OpenStack will be installed. It's possible to run
with less, but then not a lot of nodes will be available, and you'll have
to tweak down the number of servers in /etc/oci-poc/oci-poc.conf.

## Installing the package

### The package repository

The OCI packages are available either from plain Debian Sid or from the
official Debian Stable repositories. However, we recommend using the
unofficial debian.net backports repositories, which are more up-to-date and
contain intermediary releases of OpenStack.

#### Using Extrepo

The new (better) way of using Debian Stable backports of OpenStack is to use
extrepo. Extrepo is available from the official buster-backports, or in the
normal Stable repositories. Here's how to install OpenStack Epoxy, for
example:

```
apt-get install extrepo
extrepo search openstack
extrepo enable openstack_epoxy
apt-get update
apt-get dist-upgrade
```

See extrepo documentation if you want to know more about it.

#### Manual setup of the Debian repositories

If using Bookworm with OpenStack Bobcat is desired, then the below repository
must be added to the /etc/apt/sources.list.d/bookworm-bobcat.sources file:

```
Types: deb deb-src
URIs: http://bookworm-bobcat.debian.net/debian
Suites: bookworm-bobcat-backports bookworm-bobcat-backports-nochange
Components: main
Signed-By: /etc/apt/keyrings/bookworm-bobcat.asc
```

Also, get the GPG key:

```
wget http://bookworm-bobcat.debian.net/debian/dists/pubkey.gpg -O /etc/apt/keyrings/bookworm-bobcat.asc
```

You may replace bookworm above with the current Debian stable distribution,
and bobcat with the current OpenStack release name.

There's also a mirror containing ALL of the OpenStack releases in a single
place, located at:

http://osbpo.debian.net/debian/

### Install the package

Simply install the package:

```
apt-get install openstack-cluster-installer
```

However, before you do that, it's nicer to do the below steps to prepare
the dependencies.

### Install a db server

MariaDB will do:

```
apt-get install default-mysql-server dbconfig-common
```

It is possible to do the db creation and credentials by hand, or to let OCI
handle it automatically with dbconfig-common. If APT is running in
non-interactive mode, there will be no prompt for the database credentials
during the installation. Here's how to configure OCI once the db and
credentials are provisioned:

```
apt-get install openstack-pkg-tools
. /usr/share/openstack-pkg-tools/pkgos_func
PASSWORD=$(openssl rand -hex 16)
pkgos_inifile set /etc/openstack-cluster-installer/openstack-cluster-installer.conf database connection "mysql+pymysql://oci:${PASSWORD}@localhost:3306/oci"
mysql --execute 'CREATE DATABASE oci;'
mysql --execute "GRANT ALL PRIVILEGES ON oci.* TO 'oci'@'localhost' IDENTIFIED BY '${PASSWORD}';"
```

One must then make sure that the "connection" directive in
/etc/openstack-cluster-installer/openstack-cluster-installer.conf doesn't
contain spaces before or after the equal sign. The db is then populated
as shown in the next section.
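
For reference, here's a sketch (based on the pkgos_inifile call above, with a
placeholder instead of the generated password) of what the directive should
end up looking like:

```
[database]
connection=mysql+pymysql://oci:<the-generated-password>@localhost:3306/oci
```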

### Configuring OCI

Make sure the db is in sync (if it already is, you'll see "table exists" errors):

```
apt-get install -y php-cli
cd /usr/share/openstack-cluster-installer ; php db_sync.php
```

Then edit /etc/openstack-cluster-installer/openstack-cluster-installer.conf
and adjust it to your needs (ie: change network values, etc.).

### Generate the OCI's root CA

To handle TLS, OCI is using its own root CA. The root CA certificate is
distributed on all nodes of the cluster. To create the initial root CA,
there's a script to do it all:

```
oci-root-ca-gen
```

At this point, you should be able to browse through OCI's web interface:
```
firefox http://your-ip-address/oci/
```

However, you need a login/pass to get in. There's a shell utility to manage
your usernames. To add a new user, do this:

```
oci-userdb -a mylogin mypassword
```

Check the result by listing all configured logins:

```
oci-userdb -l
```

Passwords are hashed using the PHP password_hash() function using the
BCRYPT algo.

Also, OCI is capable of using an external Radius server for its authentication.
However, you still need to manually add logins in the db. The command below
inserts a new user that has an entry in the Radius server:

```
oci-userdb -r newuser@example.com
```

Note that you also need to configure your radius server address and
shared secret in openstack-cluster-installer.conf.

Note that even if there is an authentication system, it is strongly advised
to not expose OCI to the public internet. The best setup is if your
provisioning server isn't reachable at all from the outside.

## Installing side services

### ISC-DHCPD

Configure isc-dhcp to match your network configuration. Note that
"next-server" must be the address of your puppet-master node (ie: the dhcp
server that we're currently configuring).

Edit /etc/default/isc-dhcp-server:

```
sed -i 's/INTERFACESv4=.*/INTERFACESv4="eth0"/' /etc/default/isc-dhcp-server
```

Then edit /etc/dhcp/dhcpd.conf:

```
allow booting;
allow bootp;
default-lease-time 600;
max-lease-time 7200;
ddns-update-style none;
authoritative;
ignore-client-uids true;

subnet 192.168.100.0 netmask 255.255.255.0 {
        option subnet-mask 255.255.255.0;
        option broadcast-address 192.168.100.255;
        option routers 192.168.100.1;
        range 192.168.100.20 192.168.100.120;
        option domain-name "example.com";
        option domain-name-servers 9.9.9.9;
        next-server 192.168.100.2;
        if exists user-class and option user-class = "iPXE" {
                filename "http://192.168.100.2/oci/ipxe.php";
        } else {
                filename "lpxelinux.0";
        }
}
```

Restart dhcpd via:

```
systemctl restart isc-dhcp-server.service
```

Carefully note that 192.168.100.2 must be the address of your OCI server,
as it will be used for serving PXE, TFTP and web for the slave nodes.
It is of course fine to use another address if your OCI server uses a
different one, so feel free to adapt the above to your liking.

Note that as of OCI version 28 and above, loading the initrd and kernel
is done over HTTP, so using lpxelinux.0 is mandatory (pxelinux.0 should
not be used anymore, as it only supports TFTP).

Also, for OCI to allow query from the DHCP range, you must add your
DHCP subnets to TRUSTED_NETWORKS in openstack-cluster-installer.conf.
Otherwise, hardware reporting will never work.
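
For example, here's a sketch (the exact value syntax should be checked against
the comments in your openstack-cluster-installer.conf) matching the DHCP
subnet used above:

```
TRUSTED_NETWORKS="192.168.100.0/24"
```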

### tftpd

Configure tftp-hpa to serve files from OCI:

```
sed -i 's#TFTP_DIRECTORY=.*#TFTP_DIRECTORY="/var/lib/openstack-cluster-installer/tftp"#' /etc/default/tftpd-hpa
systemctl restart tftpd-hpa.service
```

## Getting ready to install servers

### Configuring ssh keys

When setting up, OCI will create a public/private ssh keypair here:

```
/etc/openstack-cluster-installer/id_rsa
```

Once done, it will copy the corresponding id_rsa.pub content into:

```
/etc/openstack-cluster-installer/authorized_keys
```

and will also add to it all the public keys it finds under
/root/.ssh/authorized_keys. Later on, this file will be copied
into the OCI Debian live image, and into all new systems OCI installs.
OCI will later on use the private key it generated to log into the
servers, while your keys will also be present so you can log into each
individual server using your private key. Therefore, it is strongly
advised to customize /etc/openstack-cluster-installer/authorized_keys
*before* you build the OCI Debian live image.
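
For example, a sketch (the key file path is just an illustration) of adding
an extra admin key before building the live image:

```
cat /path/to/extra-admin-key.pub >> /etc/openstack-cluster-installer/authorized_keys
```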

### Optional: using a self-hosted package repository ###

If you are using a self-hosted package repository which signs packages
with its own key (e.g. aptly), this requires additional configuration.
If you are using the official Debian packages (either direct or via a
caching proxy) then skip to the next section.

You will need a package in your repo which contains the repository's
signing key, and this package will need to already be installed on your
OCI server.

Configure the following entries in /etc/openstack-cluster-installer/openstack-cluster-installer.conf :
```
debian_keyring_package=my-archive-keyring
debian_keyring_file=/usr/share/keyrings/my-archive-keyring.gpg
install_debian_keyring_package=yes
```

If the keyring package is not available at install time (because it
is not kept in the same repository as the mirrored debian packages),
then set the following options to instead copy the keyring file from
the live image:

```
install_debian_keyring_package=no
install_debian_keyring_file=yes
```

### Build OCI's live image ###

```
mkdir -p /root/live-image
cd /root/live-image
openstack-cluster-installer-build-live-image --pxe-server-ip 192.168.100.2 --debian-mirror-addr http://deb.debian.org/debian --debian-security-mirror-addr http://security.debian.org/
cp -auxf /var/lib/openstack-cluster-installer/tftp/* /usr/share/openstack-cluster-installer
cd ..
rm -rf /root/live-image
```

It is possible to use package proxy servers like approx,
or local mirrors, which makes it possible to have your cluster
and OCI itself completely disconnected from the internet.
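
For example, here's a sketch (assuming an approx proxy running on the OCI
server on its default port 9999, with "debian" and "security" entries matching
your approx.conf) of building the live image against a local proxy:

```
openstack-cluster-installer-build-live-image \
    --pxe-server-ip 192.168.100.2 \
    --debian-mirror-addr http://192.168.100.2:9999/debian \
    --debian-security-mirror-addr http://192.168.100.2:9999/security
```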

If one wishes to build for more than a single architecture,
it is possible to do so, and have multi-arch PXE booting.
To do so, make sure your dhcpd.conf contains the necessary
arch detection (note: the next-server must be the OCI
PXE server IP):

```
subnet 192.168.100.0 netmask 255.255.255.0 {
        range 192.168.100.10 192.168.100.99;
        option subnet-mask 255.255.255.0;
        option broadcast-address 192.168.100.255;
        option routers 192.168.100.1;
        next-server 10.10.0.4;
        if exists user-class and option user-class = "iPXE" {
                filename "http://10.10.0.4/oci/ipxe.php";
        } elsif exists pxe-system-type {
                if option pxe-system-type = 00:00 {
                        filename "lpxelinux.0";
                } elsif option pxe-system-type = 00:07 {
                        filename "shimx64.efi.signed";
                } elsif option pxe-system-type = 00:09 {
                        filename "shimx64.efi.signed";
                } elsif option pxe-system-type = 00:0b {
                        filename "shimaa64.efi.signed";
                }
        } else {
                filename "lpxelinux.0";
        }
}
```

Edit openstack-cluster-installer.conf and fill image_builder_hosts
with the hostname or IP address of your arm64 build host. On
that server, install the openstack-cluster-installer-live-image-builder
package, plus grub-efi-arm64-signed and syslinux-efi (these are only
listed as Suggests: to avoid a hard dependency on arch-specific packages).
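
A sketch of that setup (the build host name is an example, and the exact
image_builder_hosts value format should be checked against your
openstack-cluster-installer.conf):

```
# On the OCI server, in /etc/openstack-cluster-installer/openstack-cluster-installer.conf:
#   image_builder_hosts=arm64-builder.example.com
# On the arm64 build host:
apt-get install openstack-cluster-installer-live-image-builder \
    grub-efi-arm64-signed syslinux-efi
```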

Then simply launch oci-live:

```
# oci-live
```

Running this will scp the OCI configuration to your build host,
launch openstack-cluster-installer-build-live-image there, and
copy the resulting binary locally.

Note that currently, only arm64 and amd64 are supported, but it
shouldn't be hard to add another architecture (only the filenames
of grub, shim, etc. need to be filled in the build scripts).

### Configure puppet's ENC

Once the puppet-master service is installed, its external node
classifier (ENC) directives must be set, so that OCI acts as ENC
(which means OCI will define roles and puppet classes to call when
installing a new server with puppet):

```
. /usr/share/openstack-pkg-tools/pkgos_func
pkgos_add_directive /etc/puppet/puppet.conf master "external_nodes = /usr/bin/oci-puppet-external-node-classifier" "# Path to enc"
pkgos_inifile set /etc/puppet/puppet.conf master external_nodes /usr/bin/oci-puppet-external-node-classifier
pkgos_add_directive /etc/puppet/puppet.conf master "node_terminus = exec" "# Tell what type of ENC"
pkgos_inifile set /etc/puppet/puppet.conf master node_terminus exec
```

then restart the puppet-master service.

### Optional: approx

To speed-up package download, it is highly recommended to install approx
locally on your OCI provisioning server, and use its address when
setting-up servers (the address is set in
/etc/openstack-cluster-installer/openstack-cluster-installer.conf).
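
A minimal sketch of such an approx setup (assuming the standard Debian approx
package, which listens on port 9999 and maps the first word of each
/etc/approx/approx.conf line to an upstream repository):

```
apt-get install approx
cat >>/etc/approx/approx.conf <<EOF
debian          http://deb.debian.org/debian
security        http://security.debian.org/debian-security
EOF
```

The OCI server and the installed nodes can then use
http://<oci-address>:9999/debian as their mirror address.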

# Using OCI

## Booting-up servers

Start up a bunch of computers, booting them with PXE. If everything goes well,
they will catch OCI's DHCP, and boot OCI's Debian live image. Once a server
is up, an agent will run and report to OCI's web interface. Just refresh
OCI's web interface, and you will see the machines. You can also use the CLI
tool:

```
# apt-get install openstack-cluster-installer-cli
# ocicli machine-list
serial   ipaddr          memory  status     lastseen             cluster  hostname
2S2JGM2  192.168.100.37  4096    live       2018-09-20 09:22:31  null
2S2JGM3  192.168.100.39  4096    live       2018-09-20 09:22:50  null
```

Note that ocicli can either use a login/password which can be set in
the OCI's internal db, or the IP address of the server where ocicli runs can
be white-listed in /etc/openstack-cluster-installer/openstack-cluster-installer.conf.

## Creating Swift regions, locations, networks, roles and clusters

### Before we start

In this documentation, everything is done through the command line using
ocicli. However, absolutely everything can also be done using the web
interface. It is just easier to explain using the CLI, as this avoids
the necessity of showing screenshots of the web interface.

Here, the only networks you'll be adding to OCI are the OpenStack
internal networks; you will never add here the public networks or the
ones used inside the OpenStack VMs. For example: one network for the
management of nodes, one for vm-net, one for the ceph-cluster network, and
so on. All of the networks you'll be using inside OpenStack are to be
provisioned with OpenStack itself, using the OpenStack API.

### Creating Swift regions and locations

Before installing the systems on your servers, clusters must be defined.
This starts by setting-up Swift regions. In a Swift cluster, there are
zones and regions. When uploading a file to Swift, it is replicated on
N zones (usually 3). If 2 regions are defined, then Swift tries to
replicate objects on both regions.

Under OCI, you must first define Swift regions. To do so, click on
"Swift region" on the web interface, or using ocicli, type:

```
# ocicli swift-region-create datacenter-1
# ocicli swift-region-create datacenter-2
```

Then create locations attached to these regions:

```
# ocicli location-create dc1-zone1 datacenter-1
# ocicli location-create dc1-zone2 datacenter-1
# ocicli location-create dc2-zone1 datacenter-2
```

Later on, when adding a swift data node to a cluster (data nodes are
the servers that will actually do the Swift storage), a location must
be selected.

Once the locations have been defined, it is time to define networks.
Networks are attached to locations as well. The Swift zones and regions
will be related to these locations and regions.

### Creating networks

```
# ocicli network-create dc1-net1 192.168.101.0 24 dc1-zone1 no
```

The above command will create a subnet 192.168.101.0/24, located at
dc1-zone1. Let's create 2 more networks:

```
# ocicli network-create dc1-net2 192.168.102.0 24 dc1-zone2 no
# ocicli network-create dc2-net1 192.168.103.0 24 dc2-zone1 no
```

Next, for the cluster to be reachable, let's create a public network
on which customers will connect:

```
# ocicli network-create pubnet1 203.0.113.0 28 public yes
```

Note that if using a /32, it will be set up on the lo interface of
your controller. The expected setup is to use BGP to route that
public IP on the controller. To do that, it is possible to customize
the ENC and add BGP peering to your router. See at the end of this
documentation for that.

### Creating a new cluster

Let's create a new cluster:

```
# ocicli cluster-create swift01 example.com
```

Now that we have a new cluster, the networks we created can be added to it:

```
# ocicli network-add dc1-net1 swift01 all eth0
# ocicli network-add dc1-net2 swift01 all eth0
# ocicli network-add dc2-net1 swift01 all eth0
# ocicli network-add pubnet1 swift01 all eth0
```

When adding the public network, one IP address will automatically be
reserved for the VIP (Virtual IP). This IP address will later
be shared by the controller nodes, to perform HA (High Availability),
controlled by pacemaker / corosync. The principle is: if the
controller node hosting the VIP (which is assigned to its
eth0) becomes unavailable (let's say, the server crashes or the
network wire is unplugged), then the VIP is re-assigned to the eth0
of another controller node of the cluster.

If selecting 2 network interfaces (for example, eth0 and eth1), then
bonding will be used. Note that your network equipment (switches, etc.)
must be configured accordingly (LACP, etc.), and that the setup of
these equipment is out of the scope of this documentation. Consult your
network equipment vendor for more information.

## Real certificate for the API

By default, OCI will generate self-signed certificates for everything.
Though this works well, with a few exceptions (notably, it doesn't work
for Heat, Magnum, or if one wants to enable Swift on-disk encryption), it is
preferable, in production, to use a real API certificate, so that clients
can trust your server. In order to do this, one must first choose a hostname
for the API. This is set this way:

```
# ocicli cluster-set z --vip-hostname cloud-api.example.com
```

Once done, in the OCI server, generate a certificate for this hostname:

```
# oci-gen-slave-node-cert cloud-api.example.com
```

Then cd to /var/lib/oci/ssl/slave-nodes/cloud-api.example.com. There you
will find cloud-api.example.com.csr (.csr stands for Certificate Signing
Request), which can be used to obtain a real certificate. Get the
certificate signed, and then replace the .crt and .pem files with the
real signed content. If you are re-using a wildcard certificate, then you
probably also want to replace the .key file. Note that the .pem file
must contain the certificate *and* the private key, concatenated, and
maybe also all the intermediate certificates.
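
A minimal sketch of that concatenation (the intermediate chain file name is
just an illustration):

```
cd /var/lib/oci/ssl/slave-nodes/cloud-api.example.com
# After replacing the .crt with the signed certificate:
cat cloud-api.example.com.crt intermediate-chain.crt cloud-api.example.com.key \
    > cloud-api.example.com.pem
```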

Once this is done, simply inform OCI that we're using a real signed
certificate:

```
# ocicli cluster-set z --self-signed-api-cert no
```

Now, puppet will be started without using OCI's root CA as environment,
and ca_file will not be used in any of the OpenStack configuration files (an
empty string will be set instead).

If you have set your cluster in production before signing the certificate,
it is possible to use, on the puppet server, the oci-update-cluster-certs
utility:

```
# oci-update-cluster-certs z
```

This will replace the certificate cloud-api.example.com everywhere in
the cluster, and restart services to use it. This shell utility is also
useful whenever your SSL certificate expires and needs to be updated.

## Enrolling servers in a cluster

Now that we have networks assigned to the cluster, it is time to
assign servers to the cluster. Let's say we have the below output:

```
# ocicli machine-list
serial  ipaddr          memory  status  lastseen             cluster  hostname
C1      192.168.100.20  8192    live    2018-09-19 20:31:57  null
C2      192.168.100.21  8192    live    2018-09-19 20:31:04  null
C3      192.168.100.22  8192    live    2018-09-19 20:31:14  null
C4      192.168.100.23  5120    live    2018-09-19 20:31:08  null
C5      192.168.100.24  5120    live    2018-09-19 20:31:06  null
C6      192.168.100.25  5120    live    2018-09-19 20:31:14  null
C7      192.168.100.26  4096    live    2018-09-19 20:31:18  null
C8      192.168.100.27  4096    live    2018-09-19 20:31:26  null
C9      192.168.100.28  4096    live    2018-09-19 20:30:50  null
CA      192.168.100.29  4096    live    2018-09-19 20:31:00  null
CB      192.168.100.30  4096    live    2018-09-19 20:31:07  null
CC      192.168.100.31  4096    live    2018-09-19 20:31:20  null
CD      192.168.100.32  4096    live    2018-09-19 20:31:28  null
CE      192.168.100.33  4096    live    2018-09-19 20:31:33  null
CF      192.168.100.34  4096    live    2018-09-19 20:31:40  null
D0      192.168.100.35  4096    live    2018-09-19 20:31:47  null
D1      192.168.100.37  4096    live    2018-09-21 20:31:23  null
D2      192.168.100.39  4096    live    2018-09-21 20:31:31  null
```

Then we can enroll machines in the cluster this way:

```
# ocicli machine-add C1 swift01 controller dc1-zone1
# ocicli machine-add C2 swift01 controller dc1-zone2
# ocicli machine-add C3 swift01 controller dc2-zone1
# ocicli machine-add C4 swift01 swiftproxy dc1-zone1
# ocicli machine-add C5 swift01 swiftproxy dc1-zone2
# ocicli machine-add C6 swift01 swiftproxy dc2-zone1
# ocicli machine-add C7 swift01 swiftstore dc1-zone1
# ocicli machine-add C8 swift01 swiftstore dc1-zone2
# ocicli machine-add C9 swift01 swiftstore dc2-zone1
# ocicli machine-add CA swift01 swiftstore dc1-zone1
# ocicli machine-add CB swift01 swiftstore dc1-zone2
# ocicli machine-add CC swift01 swiftstore dc2-zone1
```

As a result, there's going to be 1 controller, 1 Swift proxy and
2 Swift data nodes in each zone of our cluster. IP addresses will
automatically be assigned to servers as you add them to the clusters.
They aren't shown in ocicli, but you can check for them through the
web interface. The result should be like this:

```
# ocicli machine-list
serial  ipaddr          memory  status  lastseen             cluster  hostname
C1      192.168.100.20  8192    live    2018-09-19 20:31:57  7        swift01-controller-1.example.com
C2      192.168.100.21  8192    live    2018-09-19 20:31:04  7        swift01-controller-2.example.com
C3      192.168.100.22  8192    live    2018-09-19 20:31:14  7        swift01-controller-3.example.com
C4      192.168.100.23  5120    live    2018-09-19 20:31:08  7        swift01-swiftproxy-1.example.com
C5      192.168.100.24  5120    live    2018-09-19 20:31:06  7        swift01-swiftproxy-2.example.com
C6      192.168.100.25  5120    live    2018-09-19 20:31:14  7        swift01-swiftproxy-3.example.com
C7      192.168.100.26  4096    live    2018-09-19 20:31:18  7        swift01-swiftstore-1.example.com
C8      192.168.100.27  4096    live    2018-09-19 20:31:26  7        swift01-swiftstore-2.example.com
C9      192.168.100.28  4096    live    2018-09-19 20:30:50  7        swift01-swiftstore-3.example.com
CA      192.168.100.29  4096    live    2018-09-19 20:31:00  7        swift01-swiftstore-4.example.com
CB      192.168.100.30  4096    live    2018-09-19 20:31:07  7        swift01-swiftstore-5.example.com
CC      192.168.100.31  4096    live    2018-09-19 20:31:20  7        swift01-swiftstore-6.example.com
CD      192.168.100.32  4096    live    2018-09-19 20:31:28  null
CE      192.168.100.33  4096    live    2018-09-19 20:31:33  null
CF      192.168.100.34  4096    live    2018-09-19 20:31:40  null
D0      192.168.100.35  4096    live    2018-09-19 20:31:47  null
D1      192.168.100.37  4096    live    2018-09-21 20:31:23  null
D2      192.168.100.39  4096    live    2018-09-21 20:31:31  null
```

As you can see, hostnames are calculated automatically as well.

## Calculating the Swift ring

Before starting to install servers, the swift ring must be built.
Simply issue this command:

```
# ocicli swift-calculate-ring swift01
```

Note that it may take a very long time, depending on your cluster size.
This is expected. Just be patient.

## Installing servers

There's not (yet) a big "install the cluster" button on the web interface or
on the CLI. Instead, servers must be installed one by one:

```
# ocicli machine-install-os C1
# ocicli machine-install-os C2
# ocicli machine-install-os C3
```

It is advised to first install the controller nodes, manually check that
they are installed correctly (for example, check that "openstack user list"
works), then the Swift store nodes, then the Swift proxy nodes. However,
nodes of the same type can be installed at once. Also, due to the use of
a VIP and corosync/pacemaker, controller nodes *must* be installed roughly
at the same time.

It is possible to see the last lines of a server's installation log using
the CLI as well:

```
# ocicli machine-install-log C1
```

This will show the logs of the system installation from /var/log/oci,
then once the server has rebooted, it will show the puppet logs from
/var/log/puppet-first-run.

## Checking your installation

Log into a controller node. To do that, list its IP:

```
# CONTROLLER_IP=$(ocicli machine-list | grep C1 | awk '{print $2}')
# ssh root@${CONTROLLER_IP}
```

Once logged into the controller, you'll see login credentials under
/root/oci-openrc.sh. Source it and try:

```
# . /root/oci-openrc.sh
# openstack user list
```

You can also try Swift:

```
# . /root/oci-openrc.sh
# openstack container create foo
# echo "test" >bar
# openstack object create foo bar
# rm bar
# openstack object delete foo bar
```

## Enabling Swift object encryption

Locally on the Swift store nodes, Swift stores objects in clear form. This
means that anyone with physical access to the data center can pull a hard
drive and access objects from the /srv/node folder.
To mitigate this risk, Swift can encrypt the objects it stores.
The metadata (accounts, containers, etc.) will still be stored in clear
form, but at least the data itself is stored encrypted.

The way this is implemented in OCI is to use Barbican. This is the reason
why Barbican is provisioned by default on the controller nodes. By default,
encryption isn't activated. To activate it, you must first store the key
for object encryption in the Barbican store. It can be done this way:

```
# ENC_KEY=$(openssl rand -hex 32)
# . swift-openrc
# openstack secret store --name swift-encryption-key \
  --payload-content-type=text/plain --algorithm aes \
  --bit-length 256 --mode ctr --secret-type symmetric \
  --payload ${ENC_KEY}
+---------------+--------------------------------------------------------------------------------------------+
| Field         | Value                                                                                      |
+---------------+--------------------------------------------------------------------------------------------+
| Secret href   | https://swift01-api.example.com/keymanager/v1/secrets/6ba8dd62-d752-4144-b803-b32012d707d0 |
| Name          | swift-encryption-key                                                                       |
| Created       | None                                                                                       |
| Status        | None                                                                                       |
| Content types | {'default': 'text/plain'}                                                                  |
| Algorithm     | aes                                                                                        |
| Bit length    | 256                                                                                        |
| Secret type   | symmetric                                                                                  |
| Mode          | ctr                                                                                        |
| Expiration    | None                                                                                       |
+---------------+--------------------------------------------------------------------------------------------+
```

Once that's done, the key ID (here: 6ba8dd62-d752-4144-b803-b32012d707d0)
has to be entered in the OCI's web interface, in the cluster definition,
under "Swift encryption key id (blank: no encryption):". This also can be
done using the OCI cli:

```
# ocicli cluster-set swift01 --swift-encryption-key-id 6ba8dd62-d752-4144-b803-b32012d707d0 --swift-disable-encryption no
```

Once that's done, another puppet run is needed on the swift proxy nodes:

```
root@C1-swift01-swiftproxy-1>_ ~ # oci-puppet
```

This should enable encryption. Note that the encryption key must be stored
in Barbican under the user swift and project services, so that Swift has
access to it.
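
To double-check that Swift can actually read the key, you can try to fetch
the secret as the swift user, reusing the secret href from the output above:

```
# . swift-openrc
# openstack secret get https://swift01-api.example.com/keymanager/v1/secrets/6ba8dd62-d752-4144-b803-b32012d707d0
```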

## Fixing useless node1 in corosync

Sometimes, "node1" appears when doing "crm status". To clean this
up, simply do:

```
crm_node -R node1 --force
```

## Fixing ceph -s

This fixes all Ceph warnings after a setup:

```
ceph osd pool application enable glance rbd
ceph osd pool application enable nova rbd
ceph osd pool application enable cinder rbd
ceph osd pool application enable gnocchi rbd
ceph osd pool application enable cinderback rbd
ceph mon enable-msgr2
```

Also, if using Ceph, one needs to create the CEPH_1 backend by hand:

```
openstack volume type create --property volume_backend_name=CEPH_1 --public CEPH_1
```

## Initial cluster setup variable

To avoid doing too many things when the cluster is in production (like, for
example, starting MySQL to do the initial Galera cluster setup), OCI has a
variable called "initial-cluster-setup". It is on by default on the first
runs, and after all controllers report a successful puppet run, this
variable is automatically set to no. Here's a (probably non-exhaustive) list
of things that OCI does only if initial-cluster-setup is set to yes:

- openstack-api-vip resource in corosync
- Galera cluster
- Make controllers join the rabbitmq cluster
- Heat and Magnum domain users
- Nova cells v2 configuration

At any moment, it is possible to switch the value to yes or no:

```
# ocicli cluster-set z --initial-cluster-setup no
```

however, it is strongly advised to set the value to no once the cluster is
in production.

Note that if the 3 controllers of your cluster successfully run puppet at
the first startup, they will call "oci-report-puppet-success". Once the
third controller does that, initial-cluster-setup is automatically set to
the value "no" in the OCI database.

## Adding other types of nodes

OCI can handle, by default, the below types of nodes:

- cephmon: Ceph monitor
- cephosd: Ceph data machines
- compute: Nova compute and Neutron DVR nodes
- controller: The OpenStack control plane, running all API and daemons
- swiftproxy: Swift proxy servers
- swiftstore: Swift data machines
- volume: Cinder LVM nodes
- network: DHCP, IPv4 SNAT and IPv6 routing

It is only mandatory to install 3 controllers; everything else is
optional. There's nothing to configure: OCI will understand what the
user wants depending on what types of nodes are provisioned.

If cephosd nodes are deployed, then everything will be using Ceph:
- Nova (ie: /var/lib/nova/instances over Ceph)
- Glance (images stored on Ceph)
- Cinder (cinder-volume deployed on compute nodes will be using the Ceph backend)

Though even with Ceph, setting up volume nodes will add the LVM
backend capability. With or without volume nodes, if some OSD nodes
are deployed, cinder-volume and cinder-backup with the Ceph backend will
be installed on the compute nodes.

Live migration of VMs between compute nodes is only possible if using
Ceph (ie: if some Ceph OSD nodes are deployed), or if using the
--block-migration option.

Ceph MON nodes are optional. If they aren't deployed, the Ceph MON and
MGR will be installed on the controller nodes.

Network nodes are optional. If they aren't deployed, the controllers
will act as SNAT and IPv6 routing nodes, and the DHCP servers will be
installed on the compute nodes.

# Advanced usage
## Using custom NTP servers

The default time server that will be configured on the nodes is
0.debian.pool.ntp.org. If you want to use a different NTP server then
you can configure this using the cluster-set command:

```
# ocicli cluster-set swift01 --time-server-host ntp1.some.domain
```

If you want to use multiple NTP servers then these can be specified
using the semicolon (;) as a delimiter. Note that you will need to
encapsulate the list in quotes to prevent your shell from splitting
this into multiple commands:

```
# ocicli cluster-set swift01 --time-server-host 'ntp1.some.domain;ntp2.some.domain'
```

## Using automated IPMI address configuration

Because managing this manually can take too much time, OCI offers the
possibility to automatically configure the IPMI addresses of all discovered
servers. And because your network setup may have multiple IPMI networks
depending on where a server is physically located, OCI can automatically
choose an IPMI network depending on which DHCP network the server used
when booting the Debian live image.

The first thing to do is to define an IPMI network, set it with the role
"ipmi", and then make it match the IP address of the DHCP network:

```
# ocicli network-create ipmi 192.168.200.0 24 zone-1 no
# ocicli network-set ipmi --role ipmi --ipmi-match-addr 192.168.100.0 --ipmi-match-cidr 24
```

Once this is done, the automatic_ipmi_numbering=yes option must be set in
/etc/openstack-cluster-installer/openstack-cluster-installer.conf.

When this option is set, each time a server reports its hardware
configuration, OCI will check if it has a correct IPMI IP. If not, OCI will
ssh into the server and perform the necessary "ipmitool" commands to set a
valid network configuration. When doing so, the IP address will be reserved
in the "ips" table of OCI, making sure an IP address is never used twice.

With the above example, if a server PXE boots on the 192.168.100.0/24
network, then it will automatically be assigned an IPMI ip address on the
192.168.200.0/24 network. Note that the IPMI password is randomly chosen.
As we're using openssl rand -base64, it is a good idea to make sure that
your OCI server has a good source of entropy.

If some servers already had their IPMI address set to something that
matches the IPMI network, but OCI didn't record it, it is possible to
get this IP address recorded in OCI's database. Typing this command is
enough to do so:

```
# ocicli ipmi-assign-check
```

This command will ask OCI to go through each and every machine recorded in
the database, and check the detected IPMI address. If this address exists in
the database, nothing is done. If not, a new record will be added to the
database for this machine, to avoid later address conflict.

If the deployment contains some HP ProLiant DL385 Gen10 (Plus) machines,
it is possible to automatically install the ILO license. To do so, simply
drop a license file here:

/etc/openstack-cluster-installer/live-image-additions/root/License.xml

This file should be in this format:

```
<RIBCL VERSION="2.0">
<LOGIN USER_LOGIN="adminname" PASSWORD="password">
<RIB_INFO MODE="write">
<LICENSE>
<ACTIVATE KEY="LICENSE-GOES_HERE"/>
</LICENSE>
</RIB_INFO>
</LOGIN>
</RIBCL>
```

For this type of machine, after the IPMI IP address is changed, IPMI over
LAN is automatically activated, and the ILO is reset (because it wouldn't
take the new IP address otherwise).

## Automatic upgrade of BIOS and IPMI firmware

Upgrading the BIOS and IPMI firmware of servers can take a really long time
when managing a large number of servers, so OCI offers the possibility to
perform these upgrades automatically. This is controlled using a
configuration file found at
/etc/openstack-cluster-installer/oci-firmware-upgrade-config.json. Here is
an example valid configuration file:

```
{
	"CL2800 Gen10": {
		"BIOS": {
			"version": "2.1.0",
			"script": "/root/hp-bios-upgrade-2.1.0"
			},
		"IPMI": {
			"version": "2.22",
			"script": "/root/hp-ipmi-upgrade-2.22"
			}
	}
}
```

With the above, if OCI finds an HP Cloud Line CL2800 server with a BIOS
firmware lower than 2.1.0, it will attempt to upgrade it by launching
the script /root/hp-bios-upgrade-2.1.0. To add said script, the live
image must be customized. To do so, simply add files under the folder
/etc/openstack-cluster-installer/live-image-additions. Every file placed
there will be added to the live image. Then the live image must be
regenerated:

```
# openstack-cluster-installer-build-live-image
```

Once this is done, reboot servers that must be upgraded. As they boot
on the live image, the upgrade will be performed. For reference, here is an
example hp-bios-upgrade-2.1.0 script, which will be dumped here:
/etc/openstack-cluster-installer/live-image-additions/root/hp-bios-upgrade-2.1.0.

```
#!/bin/sh

set -e
set -x

cd /root
tar -xvzf CL2600_CL2800_Gen10_BIOS_v2.1.0_11052019_Linux.tgz
cd CL2600_CL2800_Gen10_BIOS_v2.1.0_11052019_Linux/FlashTool/
./flash_bios.sh
reboot
sleep 20000
```

The "sleep 20000" is to make sure the OCI agent doesn't restart before the
machine is rebooted. YMMV depending on the upgrade that needs to be
performed.

## Customizing the /etc/hosts of all your cluster

It is possible to add entries to the /etc/hosts of all cluster nodes by
adding them to this file on the OCI server:

/etc/openstack-cluster-installer/hosts_append
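
For example, hosts_append could contain entries like these (names and
addresses are purely illustrative):

```
10.3.50.10   monitoring-1.example.com monitoring-1
10.3.50.11   backup-1.example.com     backup-1
```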

Everything OCI generates is located between these tags:

```
# OCISTA_MAINTAINED: Do not touch between these lines, this is a generated content.
... some generated content ...
# OCIFIN_MAINTAINED: Do not touch between these lines, this is a generated content.
```

It is also possible to manually add entries to each individual /etc/hosts
after the above tags; these entries will be preserved.

## Customizing the ENC

In /etc/openstack-cluster-installer/hiera, you'll find 2 folders and an
all.yaml. These allow one to customize the output of OCI's ENC.
For example, if you put:

```
   ntp:
      servers:
         - 0.us.pool.ntp.org iburst
```

in /etc/openstack-cluster-installer/hiera/all.yaml, then all nodes will
be configured with ntp using 0.us.pool.ntp.org to synchronize time.

If we have a swift01 cluster, then the full folder structure is as follows:

```
/etc/openstack-cluster-installer/hiera/roles/controller.yaml
/etc/openstack-cluster-installer/hiera/roles/swiftproxy.yaml
/etc/openstack-cluster-installer/hiera/roles/swiftstore.yaml
/etc/openstack-cluster-installer/hiera/nodes/-hostname-of-your-node-.yaml
/etc/openstack-cluster-installer/hiera/all.yaml
/etc/openstack-cluster-installer/hiera/clusters/swift01/roles/controller.yaml
/etc/openstack-cluster-installer/hiera/clusters/swift01/roles/swiftproxy.yaml
/etc/openstack-cluster-installer/hiera/clusters/swift01/roles/swiftstore.yaml
/etc/openstack-cluster-installer/hiera/clusters/swift01/nodes/-hostname-of-your-node-.yaml
/etc/openstack-cluster-installer/hiera/clusters/swift01/all.yaml

```

## Custom OCI facts

OCI maintains a /etc/facter/facts.d/oci_facts.yaml file with puppet. This
file is also created at provisioning time. This helps customizing your
puppet server: there are facts for the role, cluster name, block device
controller and NIC driver.
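
To see what OCI wrote on a given node, simply display the file (its exact
content varies from node to node):

```
# cat /etc/facter/facts.d/oci_facts.yaml
```

These facts can then be reused in your own puppet manifests or hiera
hierarchy.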

## Customizing installed server at setup time

Sometimes, it is desirable to configure a server at setup time. For example,
it could be needed to configure routing (using BGP) for the virtual IP to be
available at setup time. OCI offers everything needed to enrich the
server configuration at install time, before the puppet agent even starts.

Say you want to configure swift01-controller-1 in your swift01 cluster, add
quagga to it, and add some configuration files. Simply create the folder,
fill it with content, and add an oci-packages-list file:

```
# mkdir -p /var/lib/oci/clusters/swift01/swift01-controller-1.example.com/oci-in-target
# cd /var/lib/oci/clusters/swift01/swift01-controller-1.example.com
# echo -n "quagga,tmux" >oci-packages-list
# mkdir -p oci-in-target/etc/quagga
# echo "some conf" >oci-in-target/etc/quagga/bgpd.conf
```

When OCI provisions the bare metal server, it checks whether the
oci-packages-list file exists. If it does, the packages are added when
installing. Then the oci-in-target content is copied into the target system.

## Using a BGP VIP

In the same way, you can, for example, decide to have the VIP of your
controllers use BGP routing. To do that, write in
/etc/openstack-cluster-installer/roles/controller.yaml:

```
   quagga::bgpd:
      my_asn: 64496
      router_id: 192.0.2.1
      networks4:
         - '192.0.2.0/24'
      peers:
         64497:
            addr4:
               - '192.0.2.2'
            desc: TEST Network
```

You may want to do this only for a specific node of a single cluster,
rather than all. In such a case, simply use this file path scheme:
/etc/openstack-cluster-installer/clusters/cloud1/nodes/cloud1-controller-1.example.com.yaml

For all controllers of the cloud1 cluster, use:
/etc/openstack-cluster-installer/clusters/cloud1/roles/controller.yaml

## Doing a test in OCI's manifests for debug purpose

If you would like to test a change in OCI's puppet files, edit them
in /usr/share/puppet/modules/oci, then on the master run, for example:

```
# puppet master --compile swift01-controller-1.example.com
# /etc/init.d/puppet-master stop
# /etc/init.d/puppet-master start
```

then on swift01-controller-1.example.com you can run:

```
# OS_CACERT=/etc/ssl/certs/oci-pki-oci-ca-chain.pem puppet agent --test --debug
```

## Customizing files and packages in your servers

If you wish to customize the file contents of your hosts, simply write
any file in, for example:

```
/var/lib/oci/clusters/swift01/swift01-controller-1.example.com/oci-in-target
```

and it will be copied in the server you'll be installing.

The same way, you can add additional packages to your server by adding their
names in this file:

```
/var/lib/oci/clusters/swift01/swift01-controller-1.example.com/oci-packages-list
```

Packages must be listed on a single line, separated by commas. For example:

```
quagga,bind
```

### Enabling Hiera for environment

If you need to enable Hiera, you can do it this way:
```
# mkdir -p /etc/puppet/code/environments/production/manifests/
# echo "hiera_include('classes')" > /etc/puppet/code/environments/production/manifests/site.pp
# cat /etc/puppet/code/hiera/common.yaml
---
classes:
  - xxx
...
```

# Once deployment is ready

There are currently a few issues that need to be addressed by hand. Hopefully,
all of these will be automated in the near future. In the meantime, please
do contribute fixes if you find out how, or just do as per what's below.

## Fixing-up the controllers

Unfortunately, there are sometimes scheduling issues in the puppet
apply. If this happens, one can try re-running puppet:

```
# OS_CACERT=/etc/ssl/certs/oci-pki-oci-ca-chain.pem puppet agent --test --debug 2>&1 | tee /var/log/puppet-run-1
```

Do this on the controller-1 node first, wait until it finishes, then restart
it on the other controller nodes.

## Adding custom firewall rules

OCI is using puppet-module-puppetlabs-firewall, and flushes iptables on each
run. Therefore, if you need custom firewall rules, you also have to do it
via puppet. If you want to apply the same firewall rules on all nodes,
simply edit /etc/puppet/code/environments/production/manifests/site.pp like this:

```
hiera_include('classes')

firewall { '000 allow monitoring network':
  proto       => tcp,
  action      => accept,
  source      => "10.3.50.0/24",
}
```

Note that the firewall rule is prefixed with a number. This is mandatory.
Also, make sure that this number doesn't conflict with an already
existing rule.

What's done by OCI is: protect the controller's VIP (deny access to it from
the outside), and protect the swiftstore ports for account, container and
object servers from any query not from within the cluster. So the above will
allow a monitoring server from 10.3.50.0/24 to monitor your swiftstore
nodes.

If you wish to have the above applied only to a specific node, it's possible
to do so by only matching some hostnames. Here's a simple example, with a
different IP allowed depending on the machine roles:

```
hiera_include('classes')

node /^z-controller.*/ {
  firewall { '000 allow monitoring network':
    proto       => tcp,
    action      => accept,
    source      => "10.1.2.0/24",
  }
}

node default {
  firewall { '000 allow monitoring network':
    proto       => tcp,
    action      => accept,
    source      => "10.3.4.0/24",
  }
}
```

## Adding compute nodes

With the latest versions of OCI, this is performed automatically: after a compute
node runs puppet successfully, it calls oci-report-puppet-success, which
contacts the provisioning node, which in turn connects over ssh to one of the
controllers to run "nova-manage cell_v2 discover_hosts". So what's below is
only needed if the compute node didn't install correctly the first time.

To add the compute node to the cluster and check it's there, on the controller, do:

```
# . oci-openrc
# su nova -s /bin/sh -c "nova-manage cell_v2 discover_hosts"
# openstack hypervisor list
+----+-------------------------------+-----------------+---------------+-------+
| ID | Hypervisor Hostname           | Hypervisor Type | Host IP       | State |
+----+-------------------------------+-----------------+---------------+-------+
|  4 | swift01-compute-1.example.com | QEMU            | 192.168.103.7 | up    |
+----+-------------------------------+-----------------+---------------+-------+
```

There's nothing more to it... :)

## Adding GPU support in a compute node

Currently, only Nvidia boards are supported, however, we welcome
contributions. First, locate your GPU in your compute host. Here's
an example with an Nvidia T4 board:

```
# lspci -nn | grep -i nvidia
5e:00.0 3D controller [0302]: NVIDIA Corporation TU104GL [Tesla T4] [10de:1eb8] (rev a1)
```

When you have that, simply enter it with ocicli:
```
# ocicli machine-set 1CJ9FV2 --use-gpu yes --gpu-vendor-id 10de --gpu-produc-id 1eb8 --gpu-name nvidia-t4 --gpu-device-type type-PF --vfio-ids 10de:1eb8+10de:0fb9
```

Please note that the IDs in the --vfio-ids must be separated by +, not by
a comma (conversion is done later on by OCI and Puppet).

Also, the --gpu-device-type depends on the type of GPU card and firmware
that you are using. For example, older Nvidia T4 firmware requires type-PCI,
while newer firmware requires type-PF. If you make a mistake here, the
nova-scheduler will not know where to spawn a VM and will return "no valid
host".

This will populate /etc/modprobe.d/blacklist-nvidia.conf to blacklist the
Nvidia driver and a few others, /etc/modules-load.d/vfio.conf to load the
vfio-pci module, and /etc/modprobe.d/vfio.conf with this content (to allow
exposing devices to guests):

```
options vfio-pci ids=10de:1eb8,10de:0fb9
```

The /etc/default/grub should then be modified by hand to add this:

```
intel_iommu=on
```
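
On Debian, this typically means appending the option to
GRUB_CMDLINE_LINUX_DEFAULT (keeping whatever options are already there) and
regenerating the GRUB configuration, roughly like this:

```
# editor /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"
# update-grub
```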

Then reboot the compute machine, and apply puppet on both the compute node
and the controllers.

Now, let's create the Glance image and Nova flavor to use this new
GPU and start the instance:

```
# openstack image set bionic-server-cloudimg-amd64_20190726_GPU --property img_hide_hypervisor_id='true'
# openstack flavor create --ram 6144 --disk 20 --vcpus 2 cpu2-ram6-disk20-gpu-nvidia-t4
# openstack flavor set cpu2-ram6-disk20-gpu-nvidia-t4 --property pci_passthrough:alias=nvidia-t4:1
# openstack server create --image bionic-server-cloudimg-amd64_20190726_GPU --nic net-id=demo-net --key-name demo-keypair --flavor cpu2-ram6-disk20-gpu-nvidia-t4 my-instance-with-gpu
```

In the instance, we can use Cuda and check for it:

```
# wget http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/cuda-repo-ubuntu1804_10.1.168-1_amd64.deb
# apt-get update
# apt-get install cuda cuda-toolkit-10-1  nvidia-cuda-toolkit
# cat /proc/driver/nvidia/version
NVRM version: NVIDIA UNIX x86_64 Kernel Module  430.26  Tue Jun  4 17:40:52 CDT 2019
GCC version:  gcc version 7.4.0 (Ubuntu 7.4.0-1ubuntu1~18.04.1)
```

## Multiple Cinder LVM backends

If using more than one type of LVM backend (for example, SSD and HDD), it
may be useful to select the name of the backend when setting up a new
volume node. This is done this way:

```
# ocicli machine-set 1CJ9FV2 --lvm-backend-name HDD_1
```

You may also have multiple backends on a single server. In such a case,
it is possible to use one backend per drive, instead of grouping them
all in a single VG. To do so, do something like this:

```
# ocicli machine-set 5KC2J63 --cinder-separate-volume-groups yes --cinder-enabled-backends LVM_SDA:LVM_SDB:LVM_SDC
```

This will set up new volume types LVM_SDA, LVM_SDB, and LVM_SDC. To go
back to the normal way (ie: one big VG), it is possible to set back
the no-override value:

```
# ocicli machine-set 5KC2J63 no-override
```

Though please take care: OCI will only do the right thing once, when
provisioning the system.

## Customizing the number of workers

Nearly everywhere in OCI, the fact $::os_workers from puppet-openstack is
used to configure the number of workers. This is used, for example, for
the number of RPC workers, or for the number of API processes configured
for uwsgi. The default value is the number of cores of your server, divided
by 2. To customize this value, simply write the fact in facter. This
simple command will set a value of 4 workers:

```
# echo "os_workers=4" >/etc/facter/facts.d/os_workers.txt
```

This will, for example, configure 4 Neutron RPC workers, and 4 processes
for the neutron-api. The same applies to all services.

It is also possible to be more granular using hiera.

# Advanced automation
## Hands off fully-automated installation

When managing large clusters, the hardware provisioning can take a long
chunk of your human time. There's unfortunately no way to compress the time
it takes for the hardware physical installation, but OCI is there to provide
a full installation without having to even type a single command line.

Hardware nodes are first booted into the Live environment, their hardware
is then discovered, and if it matches a hardware profile defined (by you)
in OCI, the server can be fully provisioned without any human being on the
keyboard.

This chapter explains how to set this up.

If one wishes to fully automate provisioning, here's the list of directives
to set in /etc/openstack-cluster-installer/openstack-cluster-installer.conf:

```
[megacli]
megacli_auto_clear=yes
megacli_auto_clear_num_of_discovery=3
megacli_auto_apply=yes
megacli_auto_apply_num_of_discovery=7

[ipmi]
automatic_ipmi_numbering=yes
automatic_ipmi_username=ocirox

[dns_plugin]
call_dns_shell_script=yes

[root_pass_plugin]
call_root_password_change=yes

[monitoring_plugin]
call_monitoring_plugin=yes

[auto_provision]
auto_add_machines_to_cluster=yes
auto_add_machines_cluster_name=cluster1
auto_add_machines_num_of_discovery=9

[auto_racking]
auto_rack_machines_info=yes
auto_rack_machines_num_of_discovery=7

[auto_install_os]
auto_install_machines_os=yes
auto_install_machines_num_of_discovery=15
```

Note that all of the above is set to no by default.

In the above, we can see some directives with "num_of_discovery". What
happens is that when a machine boots into the OCI live image, the
openstack-cluster-installer-agent runs in a loop, every 30 seconds (in fact,
at a random point within each 30-second period, so that all discovery agents
don't report to OCI at the same time). Each time the OCI agent reports a
hardware configuration for a server, a counter is incremented. That's our
"num_of_discovery". As the values for "num_of_discovery" are different, this
effectively produces a schedule of actions to perform on newly discovered
servers. For example, with the default values, here's the schedule (see
below for the details of each operation):

- setup of IPMI
- clearing of the RAID config
- applying the "machine-set"
- applying the RAID profile
- fetching the LLDP information to populate OCI (server dc, rack, U...)
- adding a server to the default cluster with the correct role
- install the operating system and reboot the server

Note that the default values for "num_of_discovery" are sane, and it
isn't advised to change them unless you are really sure of what you're
doing. For example, a cycle of hardware discovery is left on purpose
between "clearing of the RAID config" and "applying the RAID profile",
and the LLDP discovery is left until after many runs of the agent,
as LLDP can sometimes take time.

To reset the discovery counter:

```
ocicli machine-report-counter-reset SERIAL
```

## Auto racking

OCI relies on the LLDP protocol to discover which switch a server is
connected to, and uses that information to tell where it is and what to do.
The mapping from switch names to racking information is defined in a static
JSON file, /etc/openstack-cluster-installer/auto-racking.json. It's done
this way because this data isn't expected to change over time.

This file contains 3 main sections:
- productnames
- switchhostnames
- switchportnames

Under productnames, there's currently only a description of how many rack
units a server needs.
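
As an illustration only, an entry could look like the hypothetical sketch
below (the exact key names inside productnames are defined by OCI and may
differ):

```
"productnames": {
    "PowerEdge R640": {
        "rack-units": 1
    }
},
```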

OCI assumes that each server in each U will be connected to the matching
switch port number. For example, a server in U-4 will be connected to
switch port 4, as per the LLDP advertising of your switch.

OCI will then read the productnames description, to tell
how many rack units a server takes.

OCI also assumes that each of your switches will be using LLDP to advertise
the switch names and ports, and that each switch is set with a unique
hostname in your data centers.

Let's take an example. Say we have switch number 5, in rack 3 of row b,
in data center 2, with the hostname dc2-b3-5.
We'll then define in /etc/openstack-cluster-installer/auto-racking.json:

```
"switchhostnames": {
    "dc2-b3-5": {
        "dc": "2",
        "row": "b",
        "rack": "3",
        "location-name": "zone-3",
        "compute-aggregate": "AZ3"
    },
```

The above tells that everything connected to this switch will be
provisioned in OCI's location zone-3 (as per the "ocicli machine-add"
location parameter), and if it is a Nova compute server, it may be in
use in an aggregate named AZ3. This will be used below.

To be able to debug, a few commands are available:

```
ocicli machine-guess-racking SERIAL
```

This will tell where the machine is racked, given the information in the
auto-racking.json and the LLDP info advertised by the switch.

```
ocicli machine-auto-rack SERIAL
```

will populate the racking information.

```
ocicli machine-auto-add SERIAL
```

will add the server to the location defined in auto-racking.json and with
the role defined in the hardware profile.

## Hardware profiles

To be able to take decisions, OCI needs to auto-detect hardware and
match it to a hardware profile. OCI takes a given piece of hardware and
compares it to the list of profiles. Each time something doesn't match, a
hardware profile is removed from the list. If the user has designed the
hardware profiles correctly, only a single profile remains at the end. When
that is the case, the role defined in that profile can be used,
and the RAID profile applied using MegaCli.

Here's an example:

```
    "compute-with-var-lib-nova-instance": {
        "role": "compute",
        "product-name": [
            "PowerEdge R640",
        ],
        "ram": {
            "min": 256,
            "max": 512
            },
        "hdd": {
            "controller": "megacli",
            "hdd-num-exact-match": "yes",
            "layout": {
                "0": {
                    "raid-type": 1,
                    "software-raid": "no",
                    "options": "WB RA Direct",
                    "size_min": 220,
                    "size_max": 250,
                    "num_min": 2,
                    "num_max": 2
                },
                "1": {
                    "raid-type": 1,
                    "software-raid": "no",
                    "options": "WB RA Direct",
                    "size_min": 800,
                    "size_max": 1800,
                    "num_min": 2,
                    "num_max": 4
                }
            }
        },
        "machine-set": [ "--use_ceph_if_available no --cpu-mode custom --cpu-model Skylake-Server-IBRS"],
        "after-puppet-controller-command": [
            "openstack compute service set --disable %%HOSTNAME%%",
            "openstack aggregate add host %%COMPUTE_AGGREGATE%% %%HOSTNAME%%",
            "openstack aggregate add host INTEL_COMPUTE %%HOSTNAME%%"
            ]
    },
```

The above profile will only match machines with product name "PowerEdge R640",
with between 256 and 512 GB of RAM, an LSI RAID controller, exactly 2 system
disks of 220 to 250 GB, and 2 to 4 data disks of 800 to 1800 GB. When the
RAID profile is applied, it will provision 2 RAID1 arrays: one for the
system with the smaller drives, and a bigger one that will later be used
for /var/lib/nova/instances.

What is in machine-set are ocicli commands to issue when the hardware
profile is recognized. In the above example, we can see that we're
setting up a CPU model according to the hardware profile. Obviously, one can
create another hardware profile for "PowerEdge R6525" (an AMD machine)
with a different CPU model, for example.

What is in after-puppet-controller-command will be
issued after the first puppet run is successful. Feel free to add any
OpenStack command in there, knowing that %%HOSTNAME%% will be replaced by
the actual FQDN of the provisioned server, and %%COMPUTE_AGGREGATE%% will
be replaced by whatever is set in the auto-racking.json. Here, we use the
hardware profile to set the machine in an INTEL_COMPUTE aggregate, as this
cluster also has AMD compute nodes. We're also using %%COMPUTE_AGGREGATE%%
to set the correct availability zone automatically.

To check what hardware profile is matching a given server, one can type:

```
ocicli machine-guessed-profile SERIAL
```

It is also possible to manually apply a RAID profile with:

```
ocicli machine-megacli-reset-raid SERIAL
ocicli machine-megacli-apply SERIAL
```

Beware not to do the above on a server running in production.

## DNS plugin

OCI can call a custom script of your own to publish the node hostnames in
your DNS. It is up to you to write it. The script is called whenever servers
are added to a cluster (automatically or manually).

To test the DNS plugin, it is possible to manually call it using:

```
ocicli machine-to-dns HOSTNAME
```
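
As an illustration only, such a plugin script could be a thin wrapper around
nsupdate. The script location, the arguments OCI passes to it, and the DNS
server and TSIG key below are all assumptions made for the sake of the
example; check openstack-cluster-installer.conf for the real plugin
configuration:

```
#!/bin/sh
# Hypothetical DNS plugin sketch: assumes the FQDN is passed as $1
# and the IP address as $2.
set -e
FQDN="$1"
IP="$2"

nsupdate -k /etc/openstack-cluster-installer/dns-tsig.key <<EOF
server ns1.example.com
update delete ${FQDN}. A
update add ${FQDN}. 3600 A ${IP}
send
EOF
```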

## Root password plugin

When a machine is declared as installed, it is possible to automatically
set a password for it. That password can be saved somewhere (for example
using hashicorp vault, or a simple text file), using the plugin script.

To test the root password plugin, once a machine is installed, it is
possible to manually call it using:

```
ocicli machine-gen-root-pass HOSTNAME
```

## Monitoring plugin

OCI doesn't provide monitoring, but if you have such a service, for example
Zabbix, you can call a plugin script to register machines in the monitoring.

To manually call the monitoring registration plugin, one can type:

```
ocicli machine-to-monitoring HOSTNAME
```

# Managing the OpenStack deployment

## DNS inside OpenStack VMs

Different options for DNS resolution inside VMs are outlined at
https://docs.openstack.org/neutron/latest/admin/config-dns-res.html

### Case 1 - Each virtual network uses unique DNS resolvers

This is supported out-of-the box and requires no configuration in OCI.

Note that when using Case 1, because VMs are using a DNS resolver which resides
outside of their project, resolving the IPs of other VMs in the same subnet can
only be achieved using mDNS (which is enabled by default in Debian's
genericcloud image since bookworm), or by publishing DNS records somewhere that
the external DNS server can reach them (e.g. by using designate).

To use Case 1, ensure that no Neutron dnsmasq DNS servers have been set:

```
# ocicli cluster-set swift01 --neutron-dnsmasq-dns-servers none
```

### Case 2a - DHCP agents forward queries to configured DNS servers

This is enabled by setting the 'Neutron dnsmasq\_dns\_servers' option on the
cluster. In this scenario VMs will send DNS queries to the subnet's dnsmasq
instance (which is also the DHCP server for the subnet). This will in turn
forward queries for external domains to the configured DNS servers. Note that if
the user specifies DNS servers when creating a subnet then these will be
assigned on the VM instead, which will then function as Case 1.

```
# ocicli cluster-set swift01 --neutron-dnsmasq-dns-servers '9.9.9.9;149.112.112.112'
```

Note that when using Case 2a care should be taken when choosing the DNS servers,
because the queries are sent from the dnsmasq process which is running on the
OpenStack node itself (which is inside the OpenStack management network).

You should also set the DHCP domain:

```
# ocicli cluster-set swift01 --dhcp-domain openstack.internal
```

This way, DNS queries for `othervm` and `othervm.openstack.internal` will both
be resolved to IPs.
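
From inside a VM, both forms can then be checked with standard tools (the
hostname is illustrative):

```
# getent hosts othervm
# getent hosts othervm.openstack.internal
```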

## Enabling cloudkitty rating

First, add the rating role to the cloudkitty user:

```
openstack role add --user cloudkitty --project services rating
```

Then, enable the hashmap module:

```
cloudkitty module enable hashmap
cloudkitty module set priority hashmap 100
```

Note that a 503 error may simply be ignored; it still works, as "module
list" shows. Now, let's add rating for instances:

```
cloudkitty hashmap group create instance_uptime_flavor
cloudkitty hashmap service create compute
cloudkitty hashmap field create 96a34245-83ae-406b-9621-c4dcd627fb8e flavor
```

The ID above is the one returned by the hashmap service create. Then we
reuse the ID returned by the field create for the --field-id parameter, and
the group ID for the -g parameter below:
```
cloudkitty hashmap mapping create --field-id ce85c041-00a9-4a6a-a25d-9ebf028692b6 --value demo-flavor -t flat -g 2a986ce8-60a3-4f09-911e-c9989d875187 0.03
```

## Writing custom pollsters to bill specific things

In this example, we'll pretend we want to bill any port on a specific
network called "ext-net1" which holds public IP addresses. To do this,
we need ceilometer-polling, on the 3 controllers, to query the
Neutron API every 5 minutes and ask for all ports using the network
"ext-net1". Each port associated with an OpenStack project will need
a custom record in the Gnocchi time series.

So, first, we need to design our pollster (ie: the thing which will
query the API). Let's say that when we do this:

```
openstack port list --network ext-net1 --long --debug
```

the debug mode shows that we can translate this into this curl query:

```
curl -g -X GET "https://pub1-api.cloud.infomaniak.ch/network/v2.0/ports?network_id=5a7f5f53-627c-4d0e-be89-39efad5ac54d" \
	-H "Accept: application/json" -H "User-Agent: openstacksdk/0.50.0 keystoneauth1/4.2.1 python-requests/2.23.0 CPython/3.7.3" \
	-H "X-Auth-Token: "$(openstack token issue --format value -c id) | jq .
```

with the OpenStack API replying this way:

```
{
  "ports": [
    {
      "id": "c558857c-d010-41ba-8f93-08c3cb876ebe",
      "name": "",
      "network_id": "5a7f5f53-627c-4d0e-be89-39efad5ac54d",
      "tenant_id": "ac4fafd60021431585bbb23470119557",
      "mac_address": "fa:16:3e:d5:3f:13",
      "admin_state_up": true,
      "status": "ACTIVE",
      "device_id": "0c2b0e8f-0a59-4d81-9545-fd90dc7fee73",
      "device_owner": "compute:b4",
      "fixed_ips": [
        {
          "subnet_id": "615ddc30-2ed5-4b0a-aba7-acb19b843276",
          "ip_address": "203.0.113.14"
        },
        {
          "subnet_id": "2c7d6ee4-d317-4749-b6a5-339803ac01f2",
          "ip_address": "2001:db8:1:1::2e8"
        }
      ],
      "allowed_address_pairs": [],
      "extra_dhcp_opts": [],
      "security_groups": [
        "5d9b69fb-2dae-4ed2-839c-91f645d53eeb",
        "c901c534-fd90-4738-aa6b-007cd7a5081b"
      ],
      "description": "",
      "binding:vnic_type": "normal",
      "binding:profile": {},
      "binding:host_id": "cl1-compute-8.example.com",
      "binding:vif_type": "ovs",
      "binding:vif_details": {
        "connectivity": "l2",
        "port_filter": true,
        "ovs_hybrid_plug": true,
        "datapath_type": "system",
        "bridge_name": "br-int"
      },
      "port_security_enabled": true,
      "qos_policy_id": null,
      "qos_network_policy_id": null,
      "resource_request": null,
      "ip_allocation": "immediate",
      "tags": [],
      "created_at": "2021-02-25T08:57:30Z",
      "updated_at": "2021-02-25T09:42:47Z",
      "revision_number": 8,
      "project_id": "ac4fafd60021431585bbb23470119557"
    }
  ]
}
```

We then create the matching resource-type in Gnocchi:

TODO: this isn't clear yet what to do...

```
gnocchi resource-type create -a status:string:true:max_length=3 -a device_id:uuid:false -a mac_address:string:true:max_length=20  network.ports.ext-net1
gnocchi resource-type create -a status:string:false:max_length=3 -a mac_address:string:false:max_length=20 public_ip
gnocchi resource-type create -a cidr:string:false:max_length=4 -a network_id:uuid:false -a description:string:false:max_length=64 public_subnet
```

In /etc/openstack-cluster-installer/pollsters.d, we simply write a new file
that looks like this:

```
---

- name: "network.ports.ext-net1"
  sample_type: "gauge"
  unit: "ip"
  endpoint_type: "network"
  url_path: "/network/v2.0/ports?network_id=5a7f5f53-627c-4d0e-be89-39efad5ac54d"
  value_attribute: "status"
  response_entries_key: "ports"
  project_id_attribute: "project_id"
  value_mapping:
    ACTIVE: "1"
  metadata_fields:
    - "mac_address"
    - "device_id"
    - "device_owner"
    - "fixed_ips"
    - "binding:vnic_type"
    - "binding:host_id"
    - "binding:vif_type"
    - "created_at"
    - "updated_at"
```

The url_path above matches what we wrote in the curl query. The response_entries_key
is the name of the top-level key in the JSON object that Neutron returns.
Writing this in /etc/openstack-cluster-installer/pollsters.d/ext-net-ports.yaml
is the only thing that's necessary. OCI will automatically write this file
to /etc/ceilometer/pollsters.d on the controller nodes, and list this
pollster in /etc/ceilometer/polling.yaml.
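
After the next polling interval, a quick sanity check is to verify that
samples are reaching Gnocchi (assuming the gnocchi client is installed and
admin credentials are sourced); the metric name matches the pollster name
defined above:

```
# . /root/oci-openrc.sh
# gnocchi metric list | grep network.ports.ext-net1
```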

:warning: Warning: Your custom pollster file must be readable by the _www-data_
user, or it will break Ceilometer: OCI will not be able to read the file content
and will push an empty pollster (and this causes Ceilometer to crash on start).

## Installing a first OpenStack image

```
wget http://cdimage.debian.org/cdimage/openstack/current-9/debian-9-openstack-amd64.qcow2
openstack image create \
	--container-format bare --disk-format qcow2 \
	--file debian-9-openstack-amd64.qcow2 \
	debian-9-openstack-amd64
```

## Setting-up networking

There are many ways to handle networking in OpenStack. This documentation only
quickly covers one way; it is out of the scope of this doc to explain
all of OpenStack networking. However, the reader must know that OCI
sets up compute nodes using DVR (Distributed Virtual Routers), which
means a Neutron router is installed on every compute node. Also,
Open vSwitch is used, with VXLAN between the compute nodes. Anyway, here's
one way to set up networking. Something like this may do it:

```
# Create external network
openstack network create --external --provider-physical-network external --provider-network-type flat ext-net
openstack subnet create --network ext-net --allocation-pool start=192.168.105.100,end=192.168.105.199 --dns-nameserver 84.16.67.69 --gateway 192.168.105.1 --subnet-range 192.168.105.0/24 --no-dhcp ext-subnet

# Create internal network
openstack network create --share demo-net
openstack subnet create --network demo-net --subnet-range 192.168.200.0/24 --dns-nameserver 84.16.67.69 demo-subnet

# Create router, add it to demo-subnet and set it as gateway
openstack router create demo-router
openstack router add subnet demo-router demo-subnet
openstack router set demo-router --external-gateway ext-net

# Create a few floating IPs
openstack floating ip create ext-net
openstack floating ip create ext-net
openstack floating ip create ext-net
openstack floating ip create ext-net
openstack floating ip create ext-net

# Add rules to the admin's security group to allow ping and ssh
SECURITY_GROUP=$(openstack security group list --project admin --format=csv | q -d , -H 'SELECT ID FROM -')
openstack security group rule create --ingress --protocol tcp --dst-port 22 ${SECURITY_GROUP}
openstack security group rule create --protocol icmp --ingress ${SECURITY_GROUP}
```

## Adding an ssh key

```
openstack keypair create --public-key ~/.ssh/id_rsa.pub demo-keypair
```

## Creating flavor

```
openstack flavor create --ram 2048 --disk 5 --vcpus 1 demo-flavor
openstack flavor create --ram 6144 --disk 20 --vcpus 2 cpu2-ram6-disk20
openstack flavor create --ram 12288 --disk 40 --vcpus 4 cpu4-ram12-disk40
```

## Boot a VM

```
#!/bin/sh

set -e
set -x

NETWORK_ID=$(openstack network list --name demo-net -c ID -f value)
IMAGE_ID=$(openstack image list -f csv 2>/dev/null | q -H -d , "SELECT ID FROM - WHERE Name LIKE 'debian-10%.qcow2'")
FLAVOR_ID=$(openstack flavor show demo-flavor -c id -f value)

openstack server create --image ${IMAGE_ID} --flavor ${FLAVOR_ID} \
	--key-name demo-keypair --nic net-id=${NETWORK_ID} --availability-zone nova:z-compute-1.example.com demo-server
```

## Add Octavia service
### Scripted setup
All of what's done below can be done with 2 helper scripts:

```
oci-octavia-amphora-secgroups-sshkey-lbrole-and-network 
oci-octavia-certs
```

First, edit /usr/bin/oci-octavia-amphora-secgroups-sshkey-lbrole-and-network
header. There, you'll find these values:

```
# Set to either flat or vlan
OCTAVIA_NETWORK_TYPE=flat
# Set to the ID of the Octavia VLAN if the above is set to vlan
OCTAVIA_NETWORK_VLAN=876
# Set this to a value that matches something listed in /etc/neutron/plugins/ml2/ml2_conf.ini
# either in [ml2_type_flat]/flat_networks or in [ml2_type_vlan]/network_vlan_ranges
OCTAVIA_PHYSNET_NAME=external1

OCTAVIA_SUBNET_RANGE=192.168.104.0/24
OCTAVIA_SUBNET_START=192.168.104.4
OCTAVIA_SUBNET_END=192.168.104.250
OCTAVIA_SUBNET_GW=192.168.104.1
OCTAVIA_SUBNET_DNS1=84.16.67.69
OCTAVIA_SUBNET_DNS2=84.16.67.70
```

Edit them to your taste. If you're running with vlan, then the value for
OCTAVIA_NETWORK_TYPE must be vlan, and the value for OCTAVIA_PHYSNET_NAME
must be "external". The IPs described above must be routable from the
controller nodes.

Once editing is done, run the first script, then tell OCI which security
groups and boot network to use, like this:

```
ocicli cluster-set CLUSTER_NAME --amp-secgroup-list SECGROUP_ID_1,SECGROUP_ID_2
ocicli cluster-set CLUSTER_NAME --amp-boot-network-list LOAD_BALANCER_NETWORK_ID
```

These IDs may be found in the logs when running
oci-octavia-amphora-secgroups-sshkey-lbrole-and-network, or in
/etc/octavia/octavia.conf under amp_secgroup_list and amp_boot_network_list.

Now, run oci-octavia-certs on one of the controllers, then
copy over /etc/octavia/.ssh and /etc/octavia/certs to the
other controllers.

```
rsync -e 'ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no' -avz --delete /etc/octavia/certs/ root@z-controller-2:/etc/octavia/certs/
rsync -e 'ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no' -avz --delete /etc/octavia/certs/ root@z-controller-3:/etc/octavia/certs/
rsync -e 'ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no' -avz --delete /etc/octavia/.ssh/ root@z-controller-2:/etc/octavia/.ssh/
rsync -e 'ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no' -avz --delete /etc/octavia/.ssh/ root@z-controller-3:/etc/octavia/.ssh/
```

Now, restart octavia-worker, octavia-health-manager
and octavia-housekeeping on all controllers.

That's it, it should work now!

### Manual setup
If you wish to do things manually, here's how it works.

Create the Amphora image. This can be done with DIB (Disk Image Builder)
like this:

```
sudo apt-get install openstack-debianimages
/usr/share/doc/openstack-debian-images/examples/octavia/amphora-build
openstack image create --container-format bare --disk-format qcow2 --file debian-buster-octavia-amphora-2019.09.11-11.52-amd64.qcow2 --tag amphora debian-buster-octavia-amphora-2019.09.11-11.52-amd64.qcow2
```

Create the Octavia network. If, like in the PoC package, you are
running with a specific br-lb bridge bound to an external network called
external1, something like this will do:

```
openstack network create --external --provider-physical-network external1 --provider-network-type flat lb-mgmt-net
openstack subnet create --network lb-mgmt-net --allocation-pool start=192.168.104.4,end=192.168.104.250 --dns-nameserver 84.16.67.69 --dns-nameserver 84.16.67.70 --gateway 192.168.104.1 --subnet-range 192.168.104.0/24 lb-mgmt-subnet
```

The above example is for when you're not running with vlan, but have
a specific network card for the Octavia network.

Then we need specific security groups for Octavia (make sure to use
/root/octavia-openrc, not the admin's one):

```
openstack security group create lb-mgmt-sec-grp
openstack security group rule create --protocol icmp lb-mgmt-sec-grp
openstack security group rule create --protocol tcp --dst-port 22 lb-mgmt-sec-grp
openstack security group rule create --protocol tcp --dst-port 9443 lb-mgmt-sec-grp
openstack security group rule create --protocol icmpv6 --ethertype IPv6 --remote-ip ::/0 lb-mgmt-sec-grp
openstack security group rule create --protocol tcp --dst-port 22 --ethertype IPv6 --remote-ip ::/0 lb-mgmt-sec-grp
openstack security group rule create --protocol tcp --dst-port 9443 --ethertype IPv6 --remote-ip ::/0 lb-mgmt-sec-grp

openstack security group create lb-health-mgr-sec-grp
openstack security group rule create --protocol udp --dst-port 5555 lb-health-mgr-sec-grp
openstack security group rule create --protocol udp --dst-port 5555 --ethertype IPv6 --remote-ip ::/0 lb-health-mgr-sec-grp
```

Then we create an ssh keypair:

```
mkdir /etc/octavia/.ssh
ssh-keygen -t rsa -f /etc/octavia/.ssh/octavia_ssh_key
chown -R octavia:octavia /etc/octavia/.ssh
rsync -e 'ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no' -avz --delete /etc/octavia/.ssh/ root@z-controller-2:/etc/octavia/.ssh/
rsync -e 'ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no' -avz --delete /etc/octavia/.ssh/ root@z-controller-3:/etc/octavia/.ssh/
. /root/octavia-openrc
openstack keypair create --public-key /etc/octavia/.ssh/octavia_ssh_key.pub octavia-ssh-key
```

Make the certs as per the upstream tutorial at https://docs.openstack.org/octavia/latest/admin/guides/certificates.html

Rsync the certs to the other 2 controllers:

```
rsync -e 'ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no' -avz --delete /etc/octavia/certs/ root@z-controller-2:/etc/octavia/certs/
rsync -e 'ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no' -avz --delete /etc/octavia/certs/ root@z-controller-3:/etc/octavia/certs/
```

Edit octavia.conf and set amp_boot_network_list and amp_secgroup_list IDs.

Then restart all Octavia services on all controllers.

Create the load-balancer_admin role and assign it:

```
openstack role create load-balancer_admin
openstack role add --project admin --user admin load-balancer_admin
```

Now, one must set, with ocicli, the boot network and security group list for
the amphora:

```
ocicli cluster-set swift01 \
	--amp-boot-network-list 0c50875f-368a-4f43-802a-8350b330c127 \
	--amp-secgroup-list b94afddb-4fe1-4450-a1b8-25f36a354b7d,012584cd-ffde-483b-a55a-a1afba52bc20
```

Then we can start using Octavia:

```
openstack loadbalancer create --name lb-test-1 --vip-subnet-id ext-subnet
```
How to use the load balancer is described here:

https://docs.openstack.org/octavia/latest/user/guides/basic-cookbook.html

Don't forget to create the flavor:

```
openstack flavor create --ram 2048 --disk 4 --vcpus 2 --id 65 --private --project services octavia_65
```

### Using Octavia as an HTTPS load balancer for 2 web servers

The OpenStack documentation has all what you need at:
https://docs.openstack.org/octavia/latest/user/guides/basic-cookbook.html

However, here's an example creating a loadbalancer with an HTTPS
certificate.

Creating the load balancer for the "foo" service:
```
openstack loadbalancer create \
    --name lb-foo \
    --vip-subnet-id pub01-subnet2
```

Create the certificate and store it in Barbican. First, create a normal
x509 certificate, with the key, crt and ca-chain files. Then convert it
to a pkcs12 cert using this command:

```
openssl pkcs12 -export -inkey server.key -in server.crt -certfile ca-chain.crt -passout pass: -out server.p12
```

Then we store it in Barbican, and keep its resulting address:
```
openstack secret store --name='tls_secret1' -t 'application/octet-stream' -e 'base64' --payload="$(base64 < server.p12)"
```

Creating the listener:
```
openstack loadbalancer listener create \
    --name lb-foo-https \
    --protocol TERMINATED_HTTPS \
    --protocol-port 443 \
    --default-tls-container-ref https://z-api.example.com/keymanager/v1/secrets/e2e590a4-08b7-40e7-ab52-c06fd3a0a2dd \
    lb-foo
```

Creating the pool:
```
openstack loadbalancer pool create \
    --name pool-foo-https \
    --protocol TERMINATED_HTTPS \
    --listener lb-foo-https \
    --lb-algorithm ROUND_ROBIN
```

Creating the pool members:
```
openstack loadbalancer member create \
    --name foo-member-1-https \
    --address 10.4.42.10 \
    --protocol-port 443 \
    --subnet-id e499c943-09bb-46b7-8463-8d83ce51e830 \
    pool-foo-https
openstack loadbalancer member create \
    --name foo-member-2-https \
    --address 10.4.42.4 \
    --protocol-port 443 \
    --subnet-id e499c943-09bb-46b7-8463-8d83ce51e830 \
    pool-foo-https
```

## Setting-up no limits for services resources

As some services may spawn instances, like for example Octavia or Magnum, it
may be desirable to set no limit for some resources of the services project:

```
openstack quota set --secgroup-rules -1 --secgroups -1 --instances -1 --ram -1 --cores -1 --ports -1 services
```

The quota will apply to the virtual resources the services project
creates. For example, use openstack loadbalancer quota show PROJECT_NAME to
check the load balancer quota of a project.

## Add Magnum service

First, upload the coreos image and set the property correctly:

```
openstack image create --file coreos_production_openstack_image.img coreos_production_openstack_image.img
openstack image set --property os_distro=coreos coreos_production_openstack_image.img
```

Then create the COE template:

```
openstack coe cluster template create k8s-cluster-template \
    --image coreos_production_openstack_image.img --keypair demo-keypair \
    --external-network ext-net --dns-nameserver 84.16.67.69 --flavor demo-flavor \
    --docker-volume-size 5 --network-driver flannel --coe kubernetes
```

Then create the Magnum cluster:

```
openstack coe cluster create k8s-cluster \
                      --cluster-template k8s-cluster-template \
                      --master-count 1 \
                      --node-count 2
```

It looks like CoreOS doesn't work for k8s. Instead, use Fedora Atomic:

```
wget https://download.fedoraproject.org/pub/alt/atomic/stable/Fedora-Atomic-27-20180419.0/CloudImages/x86_64/images/Fedora-Atomic-27-20180419.0.x86_64.qcow2
openstack image create \
                      --disk-format=qcow2 \
                      --container-format=bare \
                      --file=Fedora-Atomic-27-20180419.0.x86_64.qcow2 \
                      --property os_distro='fedora-atomic' \
                      fedora-atomic-latest
openstack coe cluster template create kubernetes-cluster-template \
	--image fedora-atomic-latest --keypair demo-keypair \
	--external-network ext-net --dns-nameserver 84.16.67.69 \
	--master-flavor demo-flavor --flavor demo-flavor \
	--docker-volume-size 5 --network-driver flannel \
	--coe kubernetes
```

## Replacing a broken server

Sometimes, hardware fails. In such a situation, you may want to simply
replace a server with a new one. The new server comes with a new
serial number, though, and will probably boot up in the live system and
show up in OCI. Here's how to handle it.

If you've put the old server's SSDs / HDDs in the new one, and told the
BIOS to boot on them, it will boot with the old server's hostname
configured. What we should do now is simply clean up the OCI db entries.

First, let's remove the new server:

```
ocicli machine-destroy SERIAL
```

Now, let's update the old broken server serial number in the OCI db:

```
mysql -Doci -e "UPDATE machines SET serial='6B12345' WHERE hostname='cl1-compute-62.example.com'"
```

Let's now set the IPMI of the new server with the config of the old one:

```
ocicli machine-apply-ipmi cl1-compute-62.example.com
```

Finally, the role counter may have increased when the new server
booted into the live system. If auto-provisioning was on, the server was
added as a new compute node. In this case, simply set the counter back:

```
ocicli cluster-rolecounts-set cl1 compute 84
```

## Secure boot and dkms

If necessary (up to you), it is possible to enable secure boot, which is
fully supported by OCI. There's still the issue that kernel modules
need to be signed in order to load them, though. OCI offers the facility to
configure servers for DKMS auto-signing. To do so, on each server, it
is necessary to create a MOK (Machine Owner Key):

```
# mkdir -p /var/lib/shim-signed/mok/
# cd /var/lib/shim-signed/mok/
# openssl req -nodes -new -x509 -newkey rsa:2048 -keyout MOK.priv -outform DER -out MOK.der -days 36500 -subj "/CN=My Name/"
# openssl x509 -inform der -in MOK.der -out MOK.pem
```

Once the key is created, it must be enrolled in SHIM:

```
# mokutil --import /var/lib/shim-signed/mok/MOK.der
```

This will prompt for a one-time password. Then reboot the server, and
when the SHIM screen shows up, press any key, then select "Enroll key".
This will prompt for the enroll password entered when doing --import
just above. It will then show a "reboot" option (choose that one).

Once rebooted with the new key enrolled, it is possible to check that
the key is correctly enrolled:

```
# mokutil --test-key /var/lib/shim-signed/mok/MOK.der
/var/lib/shim-signed/mok/MOK.der is already enrolled
```

If the output is just like above, then it's done. There's only one
thing left to do, which is to tell OCI we want DKMS configured on that
server:

```
ocicli machine-set cl1-compute-3.example.com --configure-dkms yes
```

On the next puppet run, dkms, linux-kbuild and linux-headers will be
installed, and /etc/dkms/framework.conf will be configured with:

```
mok_signing_key=/var/lib/shim-signed/mok/MOK.priv
mok_certificate=/var/lib/shim-signed/mok/MOK.der
sign_tool=/etc/dkms/sign_helper.sh
```

with sign_helper.sh containing:

```
#!/bin/sh

/lib/modules/"$1"/build/scripts/sign-file sha512 /root/.mok/client.priv /root/.mok/client.der "$2"
```

which should be enough to get the server ready to sign DKMS modules
automatically with this new MOK key. Enjoy secure-boot! :)
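
You can then verify that secure boot is active, and check the state of
DKMS-built modules, with:

```
# mokutil --sb-state
SecureBoot enabled
# dkms status
```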

# Using Telemetry and Rating

## How it works

The Ceilometer project is responsible for collecting raw metrics. For
example, on the compute nodes, ceilometer-polling is deployed using the
compute namespace (ie: DEFAULT/polling_namespaces=compute). On the
controllers, Ceilometer uses the central namespace (ie: it does polling
on the OpenStack API).

All of the collected data (from ceilometer-polling on compute and controller
as explained above, or from all the different OpenStack services like
ceilometermiddleware, glance, nova, neutron-metering, etc.) are sent to the
rabbitmq notification bus. If you've set up 3 messaging nodes with OCI, then
the notification bus will be on a separate rabbitmq cluster.

Then ceilometer-notification-agent (set up on the controller nodes) will
gather the metrics it sees on the rabbitmq bus, and send them to Gnocchi,
which will store them in its timeseries database. OCI sets up Gnocchi with
a Galera cluster + Ceph as backend. If you have messaging nodes, Gnocchi
will use the Galera cluster on these nodes, otherwise the controller nodes
are used. If you've set up billosd + billmon nodes, Gnocchi will use them for
the time series database, otherwise a single Ceph cluster is used (the same
as for the Cinder volume service). At scale, it is strongly recommended to
set up the 3 billing node types (ie: messaging, billmon and billosd nodes),
otherwise your control plane may be affected by constant billing operations.
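
To check that the metric pipeline is keeping up, Gnocchi can report its processing backlog (a sketch; run it with admin credentials loaded):

```
# Shows the number of measures and metrics still waiting to be processed;
# a constantly growing backlog usually means the metricd workers or the
# storage backend (Ceph/Galera) cannot keep up.
gnocchi status
```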

Once the data has reached Gnocchi, the cloudkitty-processor daemon starts
a task every hour to process all of the raw metrics of every project,
and attempts to rate them according to the Cloudkitty configuration.

:warning: Warning: if Gnocchi is down on the messaging nodes, ceilometer-notification-agent could quickly fill up the RabbitMQ service (a dedicated one, but still). There is a switch in OCI to disable notifications from ceilometer:

```
ocicli cluster-set cl1 --disable-notifications
```

## Add billing of instances

The below script will rate "demo-flavor" at 0.01:

```
cloudkitty module enable hashmap
cloudkitty module set priority hashmap 100
cloudkitty hashmap group create instance_uptime_flavor_id
GROUP_ID=$(cloudkitty hashmap group list -f value -c "Group ID")

cloudkitty hashmap service create instance
SERVICE_ID=$(cloudkitty hashmap service list -f value -c "Service ID")

cloudkitty hashmap field create ${SERVICE_ID} flavor_id
FIELD_ID=$(cloudkitty hashmap field list ${SERVICE_ID} -f value -c "Field ID")

FLAVOR_ID=$(openstack flavor show demo-flavor -f value -c id)

cloudkitty hashmap mapping create 0.01 --field-id ${FIELD_ID} --value ${FLAVOR_ID} -g ${GROUP_ID} -t flat
```

The rest may be found here: https://docs.openstack.org/cloudkitty/latest/user/rating/hashmap.html

Also, add the "rating" role to the admin user:

```
openstack role add --user admin --project admin rating
```

Note: currently, after installing the cluster, all ceilometer agents must be
restarted in order to obtain metrics, even though they appear to be well
configured.
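
A possible way to do that restart (a sketch, assuming ssh access as root to the nodes and the Debian service names; adapt the controller host list to your cluster):

```
# Restart the notification agent and central pollers on the controllers:
for h in cl1-controller-1 cl1-controller-2 cl1-controller-3 ; do
    ssh $h systemctl restart ceilometer-agent-notification ceilometer-polling
done
# Restart the compute pollers on every compute node:
for h in $(openstack compute service list --service nova-compute -f value -c Host) ; do
    ssh $h systemctl restart ceilometer-polling
done
```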

## Configuring a custom metric and billing

Let's pretend that we have a custom public network doing direct attach to VMs.
In such a case, customers will simply reserve ports on that network and
attach them to VMs. These public IPs will not be accounted as floating IPs,
and therefore will not appear in the billing, unless we do something
about it. Here is how. Let's call this network "external-network".

First, we need to get ceilometer-polling to poll the Neutron API for the
ports used on external-network. This is done using a "dynamic pollster",
ie: an API pollster that is custom to our setup. To do so, we simply create
a new file in /etc/openstack-cluster-installer/pollsters.d and that's it.
OCI will then copy its content to all the controllers of the cluster, and
configure ceilometer-polling to use the custom dynamic pollster. Here
is an example of such a pollster:

```
cat ports.yaml 
---

- name: "external-network-public-ip"
  sample_type: "gauge"
  unit: "ip"
  endpoint_type: "network"
  url_path: "/network/v2.0/ports?network_id=e060d063-c73c-4022-b92a-1d025c5f7107"
  value_attribute: "status"
  response_entries_key: "ports"
  project_id_attribute: "project_id"
  value_mapping:
    ACTIVE: "1"
  metadata_fields:
    - "mac_address"
    - "device_id"
```

The url_path above can be found using a simple OpenStack command:

```
openstack --debug port list --network e060d063-c73c-4022-b92a-1d025c5f7107
```

It is even better to add --format json to the above command, as this is how
the Ceilometer pollster will see the data (that way, it is easier to see what
kind of metadata_fields there could be).

Once that's done, a new resource type external-network-public-ip will be used
by ceilometer-polling to store the raw metrics. This will not work unless
we create such a resource type (ie: ceilometer-notification-agent will
complain that the resource type doesn't exist and will not store anything).
This can be done this way:

```
gnocchi resource-type create -a status:string:true:max_length=3 -a device_id:uuid:false -a mac_address:string:true:max_length=20 external-network-public-ip
```

Note that Gnocchi understands only the types string, uuid and date. The
"false" at the end of an attribute definition means that the field isn't
mandatory.

Note that it's also possible to edit /etc/openstack-cluster-installer/gnocchi_resources.yaml
instead, and run ceilometer-upgrade which will also create the resource
types (this is how Ceilometer initializes all of the "standard" resource
types in Gnocchi).
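
Either way, the result can be verified immediately (a sketch):

```
# Shows the attributes of the newly created resource type:
gnocchi resource-type show external-network-public-ip
```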

If we wait a little while, the new metrics should appear in Gnocchi. If they
do not, no need to read further: you need to fix your Ceilometer and Gnocchi
settings first. The best starting point is probably the
ceilometer-notification-agent.log files, as this agent is what records the
Ceilometer data in Gnocchi (data sent by either ceilometer-polling on the
controllers, ceilometer-polling on a compute node, or by other daemons, like
for example ceilometermiddleware in a swift proxy, Glance itself, etc.).
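
A quick way to spot such problems (a sketch; the exact log file name and path may differ on your setup):

```
# On a controller, look for errors about unknown resource types or
# rejected measures:
grep -iE 'error|unable|does not exist' /var/log/ceilometer/ceilometer-agent-notification.log | tail -20
```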

Once you have raw metrics, it's time to tell Cloudkitty about them, so
it can rate them. This is done in the metrics.yml file of
cloudkitty-processor, which can be edited in /etc/openstack-cluster-installer/metrics.yml
(it is read there, and transported by puppet to your 3 controllers, or
messaging nodes if you have some).

Here's an example metrics.yml entry for our external-network network:

```
grep -A8 external-network /etc/openstack-cluster-installer/metrics.yml
  external-network-public-ip:
    unit: ip
    groupby:
      - id
      - project_id
    extra_args:
      aggregation_method: mean
      resource_type: public_ip
      force_granularity: 300
```

Once that is done, puppet will install the new metrics.yml on your
controller/messaging nodes, and restart cloudkitty-processor. That
is enough to see the entry in an "openstack rating dataframes get"
command, but not enough to have it rated: we must add a price to
this type of resource. Here's how to do that:

```
#!/bin/sh

set -e

get_or_create_hashmap_group () {
        GROUP_NAME=$1
        # Create group:
        echo "---> Searching for hashmap group ${GROUP_NAME}"
        if ! cloudkitty hashmap group list --format value -c Name | grep -E '^'${GROUP_NAME}'$' ; then
                echo "-> Didn't find: creating..."
                cloudkitty hashmap group create ${GROUP_NAME}
        fi
        echo -n "-> Getting ID: "
        HASHMAP_GROUP=$(cloudkitty hashmap group list --format csv -c Name -c 'Group ID' | q -H -d, "SELECT \`Group ID\` FROM - WHERE Name='${GROUP_NAME}'")
        echo ${HASHMAP_GROUP}
}

get_or_create_hashmap_service () {
        SERVICE_NAME=$1
        echo "---> Searching for hashmap service ${SERVICE_NAME}"
        if ! cloudkitty hashmap service list --format value -c Name | grep -E '^'${SERVICE_NAME}'$' ; then
                cloudkitty hashmap service create ${SERVICE_NAME}
        fi
        echo -n "-> Getting ID: "
        HASHMAP_SERVICE=$(cloudkitty hashmap service list --format csv -c Name -c 'Service ID' | q -H -d, "SELECT \`Service ID\` FROM - WHERE Name='${SERVICE_NAME}'")
        echo ${HASHMAP_SERVICE}
}

get_or_create_hashmap_group public_ip
get_or_create_hashmap_service external-network-public-ip
set_hashmap_mapping_price 0.01
```
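
Note that set_hashmap_mapping_price is not defined in the snippet above. A possible sketch of that missing helper, assuming the python-cloudkittyclient syntax used earlier in this document and the HASHMAP_GROUP/HASHMAP_SERVICE variables set by the two functions above:

```
set_hashmap_mapping_price () {
        PRICE=$1
        echo "---> Creating flat mapping at ${PRICE} for service ${HASHMAP_SERVICE}"
        # Create a flat, service-wide mapping in the group created above.
        cloudkitty hashmap mapping create ${PRICE} \
                --service-id ${HASHMAP_SERVICE} \
                -g ${HASHMAP_GROUP} \
                -t flat
}
```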

With this in place, every port on external-network-public-ip is priced at
0.01 cloudkitty units per hour.

## Other metrics billing

### Generalities

Every other type of metric should be set up the way described above. However,
since figuring out how to do it is a complex task, here are specific
examples giving the deployer a direct solution.

Basically, you will find below examples for billing:
- Load balancers
- Router floating IPs
- Self-service subnets

each time giving you the dynamic pollster file (for Ceilometer API
polling) and the matching extract of metrics.yml (for Cloudkitty rating).

### Gnocchi resource types

Before setting up new metrics, one needs to create the Gnocchi resource
types. Here's how:

```
gnocchi resource-type create -a status:string:false:max_length=3 -a device_id:uuid:false -a mac_address:string:false:max_length=20 public_ip
gnocchi resource-type create -a status:string:false:max_length=3 -a device_id:uuid:false -a mac_address:string:false:max_length=20 router_public_ip
gnocchi resource-type create -a status:string:true:max_length=3 -a device_id:uuid:false -a mac_address:string:true:max_length=20 external-network-public-ip
gnocchi resource-type create -a status:string:true:max_length=3 -a device_id:uuid:false -a mac_address:string:true:max_length=20 router-gateway-public-ip
gnocchi resource-type create -a cidr:string:false:max_length=4 -a network_id:uuid:false -a description:string:false:max_length=64 public_subnet
gnocchi resource-type create -a name:string:false:max_length=255 -a description:string:false:max_length=255 -a vip_address:string:false:max_length=32 loadbalancer
```

What's above MUST match the resource_type field defined in the yaml dynamic
pollsters below, otherwise ceilometer-notification-agent will simply crash.
So take great care with this.
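
A quick way to verify that all of the types exist before going further (a sketch):

```
# List the resource types known to Gnocchi and check ours are present:
gnocchi resource-type list -c name | grep -E 'public_ip|public_subnet|loadbalancer'
```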

### Load balancers

my_loadbalancer.yaml:

```
---

- name: "my_loadbalancer"
  sample_type: "gauge"
  unit: "loadbalancer"
  endpoint_type: "load-balancer"
  url_path: "/loadbalance/v2.0/lbaas/loadbalancers"
  value_attribute: "provisioning_status"
  response_entries_key: "loadbalancers"
  project_id_attribute: "project_id"
  value_mapping:
    ACTIVE: "1"
    ERROR:  "0"
  metadata_fields:
    - "name"
    - "description"
    - "vip_address"
```

metrics.yaml:

```
  my_loadbalancer:
    alt_name: network.services.lb.loadbalancer
    unit: loadbalancer
    groupby:
      - id
      - project_id
    extra_args:
      aggregation_method: mean
      resource_type: loadbalancer
      force_granularity: 300
```

### Rating the public IP of a Router gateway

router-floating.yaml:

```
---

- name: "router-gateway-public-ip"
  sample_type: "gauge"
  unit: "ip"
  endpoint_type: "network"
  url_path: "network/v2.0/routers?fields=id&fields=project_id&fields=external_gateway_info"
  value_attribute: "external_gateway_info | 1 if value and 'network_id' in value and value['network_id'] == 'be472268-cb1b-435c-9735-bc7c7e46c9b0' else 0"
  response_entries_key: "routers"
  project_id_attribute: "project_id"
```

Please note that above, the network be472268-cb1b-435c-9735-bc7c7e46c9b0 is
used as a filter, so that only router gateways using that network are rated.
The value_attribute is constructed to have 1 if the network ID is the public
network, and zero otherwise. This way, a router with an external_gateway_info
pointing to a non-public IP address will not be included in the rating.

metrics.yaml:

```
  router-gateway-public-ip:
    alt_name: network.ports.router-gateway
    unit: ip
    groupby:
      - id
      - project_id
    extra_args:
      aggregation_method: mean
      resource_type: router_public_ip
      force_granularity: 300
```

### Self service public IP subnets

In this example, we have a subnet pool that holds public IPs, and
clients can decide to reserve a subnet of public IPs directly assigned to
their VMs. So what should be billed is the size of the subnet reserved by
the client.

subnet-selfservice1.yaml:

```
---

- name: "network-subnet-public-ip"
  sample_type: "gauge"
  unit: "ip"
  endpoint_type: "network"
  url_path: "/network/v2.0/subnets?subnetpool_id=110203aa-89a9-4a9c-a57b-f849d7fb89a6"
  value_attribute: "cidr | 2**(32 - int(value.split('/')[1]))"
  response_entries_key: "subnets"
  project_id_attribute: "project_id"
  metadata_fields:
    - "network_id"
    - "description"
```

As you can see above, the value 110203aa-89a9-4a9c-a57b-f849d7fb89a6 is used
to filter subnets coming from the subnet pool. The value_attribute field
contains the Python expression calculating the number of IPs from the CIDR
of the rated subnet (for example, a /28 yields 2**(32-28) = 16 IPs).

metrics.yaml:

```
  network-subnet-public-ip:
    unit: ip
    groupby:
      - id
      - project_id
    extra_args:
      aggregation_method: mean
      resource_type: public_subnet
      force_granularity: 300
```

### Swift storage rating

On all swiftproxies, ceilometermiddleware is used for collecting the raw
metrics. Here's the matching Cloudkitty metrics.yml entry (the factor
converts bytes to GiB):

```
  storage.objects.size:
    unit: Gib
    factor: 1/1073741824
    groupby:
      - id
      - project_id
    extra_args:
      aggregation_method: mean
      resource_type: swift_account
      force_granularity: 300
```

### Windows billing

If using telemetry, OCI will automatically install the
ceilometer-instance-poller package on every compute node. This package
uses libvirt and libguestfs to check the running OS type inside each instance.

To add the metric, edit /etc/openstack-cluster-installer/gnocchi_resources.yaml.
Under resource_type: instance, add os.type.is_windows under metrics:

```
  - resource_type: instance
    metrics:
      [...]
      os.type.is_windows:
    attributes:
      [...]
```

This way, when ceilometer-upgrade runs, the os.type.is_windows metric
will be added to the instance resource-type.
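
Once instances report the metric, it should show up in Gnocchi (a sketch to verify):

```
# The metric should appear in the metric list once at least one
# (Windows) instance has been polled:
gnocchi metric list | grep os.type.is_windows
```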

Then, in cloudkitty's metrics.yml, the following should be added:

```
  os.type.is_windows:
    unit: instance
    alt_name: windows_license
    groupby:
      - id
      - project_id
    extra_args:
      aggregation_method: mean
      resource_type: instance
      force_granularity: 300
```

Then create the new cloudkitty service, group and mapping for OS billing:

```
openstack rating hashmap service create windows_license
openstack rating hashmap group create os_license
openstack rating hashmap mapping create -s windows_license -g os_license -t flat 10
```


# Deploying Designate

## Used domain in this chapter

In this chapter, we will pretend that the cluster is set up
using cluster1.example.com. The matching ns1/ns2.cluster1.example.com
records will be set up.

## Add 2 nodes for publishing DNS records from Designate mDNS

The principle is that Designate will push zones from designate-mdns
to your satellite DNS nodes (using AXFR and the special Designate
key that OCI will provision for you).

```
ocicli machine-add SERIAL CLUSTER_NAME dns zone-1
```

Note that these 2 machines must have public IPs that will reply
to the queries on port 53. So it is advised to provision them
on a separate (public) management network.

## Create glue records

Create 2 glue records that will match the public IPs of the
servers added to the cluster just above. Example:

```
ns1.cluster1.example.com
ns2.cluster1.example.com
```

Also, A records pointing to the same IPs must be set.
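
Once the glue and A records are in place at your registrar/parent zone, they can be checked from outside (a sketch, using the hostnames from this chapter):

```
# The glue/A records must resolve to the public IPs of the dns nodes:
dig +short A ns1.cluster1.example.com
dig +short A ns2.cluster1.example.com
# And the dns nodes must answer on port 53 once the main zone exists:
dig @ns1.cluster1.example.com SOA cluster1.example.com.
```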

## What OCI will activate

OCI will activate the scenario 3b described here:
https://docs.openstack.org/neutron/latest/admin/config-dns-int-ext-serv.html#use-case-3b-the-dns-domain-ports-extension

## VNI requirements

As per the Designate documentation at the above URL:
"For network types VLAN, GRE, VXLAN or GENEVE, the segmentation ID must be outside the ranges assigned to project
networks."

Therefore, to use Designate, one must do:

```
ocicli cluster-set cluster1 --neutron-vxlan-vni-min 1005
```

so that it doesn't overlap.

## Set the neutron DNS domain

Simply do:

```
ocicli cluster-set cluster1 --neutron-dns-domain cluster1.example.com
```

## Create the main Designate zone

```
openstack zone create cluster1.example.com. --email admin@example.com
```

## Test that everything is working

```
openstack zone create my-test-zone.example.com. --email admin@example.com
openstack port create --network NETWORK_ID --dns-name dns-entry-for-the-port --dns-domain my-test-zone.example.com. my-port-name
```

This will create a port on NETWORK_ID with a DNS "IN A" record
"dns-entry-for-the-port.my-test-zone.example.com" that will point
to the IP address of the port.
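
The resulting record can be checked through the Designate API, and directly against the satellite DNS nodes (a sketch):

```
# List the recordsets of the test zone:
openstack recordset list my-test-zone.example.com.
# Query one of the dns nodes directly:
dig @ns1.cluster1.example.com dns-entry-for-the-port.my-test-zone.example.com. A
```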

## Populate the Designate TLD list

To avoid zone squatting, OCI populates (and maintains) the list of
TLDs using a special package called "designate-tlds". It is set up
to update the list from Mozilla every week (using a cron job). However,
it's probably nicer to call the script immediately after the DNSaaS
setup.

Note that the package is installed only on one of your controllers
(the OCI "first master"):

```
# designate-tlds
```
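
The populated list can then be checked with the Designate CLI (a sketch):

```
# Should list com, org, net, etc. once designate-tlds has run:
openstack tld list | head
```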


# Using multi region with an external keystone

## General external keystone and multi region considerations

There are multiple ways to do multi region, like using SAML2, Keystone
federation and so on. In OCI, it was decided to use a single Keystone
deployment for all regions. This way, connecting to Horizon shows all
possible regions, and the setup is quite simple to achieve.

The way it is achieved in OCI is that, for a given cluster, it is
possible to tell that Keystone should not be set up, and that instead
the cluster should use a Keystone server that is set up externally.
This external Keystone server can be a standalone deployment, or be part of
another OpenStack cluster set up with OCI.

## Setting-up the external Keystone instance

As OCI will manage everything, including services, users and endpoints
in Keystone, it is mandatory that the existing external Keystone server
is configured to accept incoming connections from puppet running on the
controllers of the cluster that is to be set up using an external Keystone.
Namely, this means the external Keystone server must have:
- the project "services" added
- the Keystone endpoint for the region to be setup

For example, if the new region to be setup is called "blueregion", then
type the below commands in the external Keystone server:

```
openstack endpoint create --region blueregion identity public http://192.168.110.2:5000/
openstack endpoint create --region blueregion identity internal http://192.168.110.2:5000/
openstack endpoint create --region blueregion identity admin http://192.168.110.2:5000/
```

Note that the above example shows a setup without TLS, but it's of course
strongly recommended to use endpoints over HTTPS.
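
If the "services" project mentioned above doesn't exist yet on the external Keystone, it can be created with a standard command (a sketch; the "default" domain is an assumption, adapt if needed):

```
openstack project create --domain default --description "Service project" services
```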

## Configuring a cluster to use an external keystone

Simply do something like this:

```
ocicli cluster-set cl1 --region-name blueregion --external-keystone-activate yes --external-keystone-admin-password EXTERNAL_KEYSTONE_PASSWORD --external-keystone-region-prefixed-users yes --external-keystone-url http://192.168.110.2:5000
```

Note that, with the above, if your cluster has a region named "blueregion",
then all system users for services will be suffixed with the region name.
For example, that gives "nova-blueregion", "neutron-blueregion" and so
on. There's no need to create these users: the puppet manifests of OCI will
connect to the external Keystone server and create them for you.

# Upgrading the OCI PKI setup

## How is the OCI PKI done

There are 2 CAs generated by the oci-root-ca-gen command. The first one is
the root CA, which is used to sign the intermediary CA. That 2nd CA
is then used to sign each individual server certificate.

ROOT CA => OCI CA 2 => Server certs

These CA files are stored in /etc/openstack-cluster-installer/pki/ca
(a copy of the certs is also present in /var/lib/oci/ssl/ca), and they
are used to sign individual server certificates (for TLS authentication)
under /var/lib/oci/ssl/slave-nodes.

All of the PKI material is installed at provisioning time, but it is ALSO
transported through puppet to the servers, so it can be
automatically updated.

Within a cluster, all servers can trust each other, because the OCI
root CAs are installed in the global /etc/ssl/certs/ca-certificates.crt.
In fact, the 2 CAs of OCI are first stored under /usr/share/ca-certificates/oci
(OCI_1_selfsigned-root-ca.crt and OCI_2_oci-ca.crt), added to the
/etc/ca-certificates.conf, and then update-ca-certificates --fresh
is called.
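
This system-wide installation can be checked on any server of the cluster (a sketch, using the paths mentioned above):

```
# The two OCI CA certificates should be present:
ls /usr/share/ca-certificates/oci/
# ...referenced in the ca-certificates configuration:
grep oci /etc/ca-certificates.conf
# ...and merged into the global bundle:
grep -c 'BEGIN CERTIFICATE' /etc/ssl/certs/ca-certificates.crt
```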

## Result with the new setup

Since 1st of December 2021 (somewhere in the development cycle of OCI
version 42), the PKI setup of OCI has been fixed, so that servers can really
trust each other, without specifying a root CA. For example, connecting to
keystone directly from any host in the cluster will work out of the box
without a root CA file:

```
openssl s_client -connect cluster1-controller-1.example.com:5000
```

This can also be checked with curl, which doesn't require a root CA chain
certificate anymore:

```
curl https://cluster1-controller-1.example.com:5000/v3
```

The reason it works is that the root CA of OCI is now installed
properly as described above. This wasn't the case previously,
and the system was half broken.

The only time one needs the OCI root CA chain certificates is
when using the OpenStack APIs from outside of the cluster.

## What got fixed

There were numerous defects in the previous setup:
- The root CA and intermediate CA options were not set properly
- The server certificates were not signed with the correct options
- The root CA and OCI CA were not properly installed in the system

As a consequence, authentication couldn't be done properly, and the
OCI root CA chain had to be specified.

## How to upgrade

First, the OCI root CA and intermediate CA must be regenerated with the
correct options. Simply regenerate them with this command:

```
oci-root-ca-gen
```

Then all of the server certificates must be regenerated.
This can be done with a one-liner:

```
cd /var/lib/oci/ssl/slave-nodes
for i in $(ls -d *) ; do rm -r $i ; oci-gen-slave-node-cert $i ; done
```

Note that if you are using a "real" certificate (ie: not self-signed)
for your API, you must preserve it in the command above. Therefore,
it may become:

```
cd /var/lib/oci/ssl/slave-nodes
for i in $(ls -d * | grep -v api) ; do rm -r $i ; oci-gen-slave-node-cert $i ; done
```

Once this is done, simply apply puppet on all of the controllers of
your cluster. All of the certificates will be updated, including the
root CA and the OCI intermediate CA. Nearly all services will be
restarted; however, a few have to be restarted by hand after
the puppet run on the 3 controllers (see the sketch after this list):

- cinder-api
- heat-api
- heat-api-cfn
- nova-api
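
A possible sketch for that manual restart, assuming ssh access as root and the Debian service names (adapt the controller list to your cluster):

```
for ctl in cl1-controller-1 cl1-controller-2 cl1-controller-3 ; do
    echo "---> Restarting API services on ${ctl}"
    ssh ${ctl} systemctl restart cinder-api heat-api heat-api-cfn nova-api
done
```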

It is strongly advised to look for API services that haproxy does not see
as up, using the haproxy stats HTTP monitoring page on port 8088 of
your controllers (simply point your web browser to the IP of your controller
on port 8088; the generated password can be looked up in
/etc/haproxy/haproxy.cfg).
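
The haproxy stats credentials can be found like this (a sketch; the exact line format depends on the generated configuration):

```
# The "stats auth" line holds the user:password pair for the port 8088 page:
grep 'stats auth' /etc/haproxy/haproxy.cfg
```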

# Using OCI PoC Package for Fun and Profit

## Installation of the PoC package

Because setting up hardware is complicated and time-consuming, it is
possible to test and develop OCI using a fully virtualized environment. This
is done using the openstack-cluster-installer-poc package. A lot of memory
is needed to run it (512 GB advised).

To install it, one may use extrepo:

```
apt-get install extrepo
extrepo enable openstack_epoxy
apt-get update
apt-get install openstack-cluster-installer-poc
```

Once installed, edit /etc/oci-poc/oci-poc.conf to match your network
environment and hardware capability.

## Dependency on ikvswitch

Since early 2024, openstack-cluster-installer-poc is using ikvswitch
to emulate a complex network setup using bgp-2-the-host. The project
is also packaged and available in Debian. A full description of the
project ikvswitch is available here:

https://salsa.debian.org/openstack-team/debian/ikvswitch

Basically, edit /etc/ikvswitch/ikvswitch.conf and set NUM_U= to a
big enough value for the PoC (as of writing: 22), edit the mirror
address (if the default of deb.debian.org is not reachable), and
set MY_IP to your host's IP address (so NAT can be performed). Then,
to start the virtual switch environment, one simply does:

```
ikvswitch-host-networking start
ikvswitch-setup start
```

The first command provisions all the networking bridges and
interfaces, and the 2nd one starts 9 VMs (using 512 MB of RAM each)
acting as virtual switches. The OCI-PoC VMs will then plug into these
virtual switches.

## Configuring the host to access OCI

As ocicli is remote (set up on the host, but connecting to the "oci"
VM), it won't be authenticated by default. Simply add this to your
/root/.bashrc to solve this:

```
export OCI_API_URL="http://192.168.100.2/oci/api.php?"
export OCI_LOGIN=poc
export OCI_PASS=poc
```

Before doing anything else, make sure oci resolves. On the host server,
edit /etc/hosts and add:

```
192.168.100.2   oci
```

To avoid ssh prompting about host keys, add to /root/.ssh/config:

```
Host *
	StrictHostKeyChecking no
	HashKnownHosts no
	GlobalKnownHostsFile /dev/null
	UserKnownHostsFile /dev/null
```

Last, if your PoC host cannot access the internet directly, but only your
local network, cl1-controller-1 still needs to wget the official Debian
image from the internet through a proxy. In such a case, fill in
/etc/oci-poc/oci-poc.conf with the necessary settings:

```
USE_HTTP_PROXY=yes
HTTP_PROXY_ADDR=http-proxy.example.com:3128
```

It is also important that the host is able to ssh into the VMs
of the PoC.

```
# ssh-keygen -t rsa
[...]
# cat .ssh/id_rsa.pub >.ssh/authorized_keys
```

Indeed, whatever is in /root/.ssh/authorized_keys on the host
will be copied into all VMs.

## Fully automated run

Simply run this command and you're good to go:

```
oci-poc-ci
```

This takes approximately 5 hours to install everything and run tempest.

If you prefer to run things manually, read on.

## Creating the oci-PoC image

Before starting up the virtualized environment, a VM image needs to be
created. This is done using the command:

```
oci-setup
```

This will create an image in
/var/lib/openstack-cluster-installer-poc/templates/pxe-server-node.qcow2
that will contain a Debian system with OCI, and the live image of OCI in it.

## Starting-up VMs


Then, to start VMs, simply do this:

```
oci-poc-vms start
```

This will produce the below screen output, showing what's going on:

```
===> Copying all template files to runtime folder
==> Starting OCI/PXE/puppet-master server
-> Starting OCI VM
-> Waiting 5 seconds
-> Waiting for ssh: ...ok.
===> Configuring PXE server
-> Enabling OCI vhost
-> Reloading apache
-> Configuring OCI db
-> Creating OCI db
-> Granting OCI db privileges
-> Installing php-cli
-> Running db_sync.php
-> Fixing config file rights
-> Copying tftp folder to web root
-> Restarting tftp-hpa
-> Generating root CA
-> Configuring oci-userdb
-> Fixing connection= line
-> Restarting DHCPd
===> Starting OpenStack cluster VMs
=> Starting VM 1 with 1xHDD and 32 GB RAM (controllers: C1)
=> Starting VM 2 with 1xHDD and 32 GB RAM (controllers: C2)
=> Starting VM 3 with 1xHDD and 32 GB RAM (controllers: C3)
=> Starting VM 4 with 1xHDD and 5 GB RAM (network: C4)
=> Starting VM 5 with 1xHDD and 5 GB RAM (network: C5)
=> Starting VM 6 with 1xHDD and 3 GB RAM (swiftproxy: C6)
=> Starting VM 7 with 1xHDD and 4 GB RAM (cephmon: C7)
=> Starting VM 8 with 1xHDD and 4 GB RAM (cephmon: C8)
=> Starting VM 9 with 1xHDD and 4 GB RAM (cephmon: C9)
=> Starting VM 10 with 2xHDD and 60 GB RAM (Compute + ceph OSD: CA)
=> Starting VM 11 with 2xHDD and 60 GB RAM (Compute + ceph OSD: CB)
=> Starting VM 12 with 2xHDD and 60 GB RAM (Compute + ceph OSD: CC)
=> Starting VM 13 with 4xHDD and 5 GB RAM (swiftstore: CD)
=> Starting VM 14 with 4xHDD and 5 GB RAM (swiftstore: CE)
=> Starting VM 15 with 4xHDD and 5 GB RAM (swiftstore: CF)
=> Starting VM 16 with 4xHDD and 5 GB RAM (swiftstore: D0)
=> Starting VM 17 with 4xHDD and 5 GB RAM (swiftstore: D1)
=> Starting VM 18 with 4xHDD and 3 GB RAM (volume: D2)
=> Starting VM 19 with 4xHDD and 3 GB RAM (volume: D3)
=> Starting VM 20 with 1xHDD and 16 GB RAM (messaging: D4)
=> Starting VM 21 with 1xHDD and 16 GB RAM (messaging: D5)
=> Starting VM 22 with 1xHDD and 16 GB RAM (messaging: D6)
=> Starting VM 23 with 1xHDD and 4 GB RAM (tempest: D7)
=> Starting VM 24 with 1xHDD and 4 GB RAM (billmon: D8)
=> Starting VM 25 with 1xHDD and 4 GB RAM (billmon: D9)
=> Starting VM 26 with 1xHDD and 4 GB RAM (billmon: DA)
=> Starting VM 27 with 3xHDD and 8 GB RAM (billosd: DB)
=> Starting VM 28 with 3xHDD and 8 GB RAM (billosd: DC)
=> Starting VM 29 with 3xHDD and 8 GB RAM (billosd: DD)
=> Starting VM 30 with 3xHDD and 10 GB RAM (Ceph OSD: DE)
=> Starting VM 31 with 3xHDD and 10 GB RAM (Ceph OSD: DF)
=> Starting VM 32 with 3xHDD and 10 GB RAM (Ceph OSD: E0)
=> Starting VM 33 with 3xHDD and 10 GB RAM (Ceph OSD: E1)
=> Starting VM 34 with 3xHDD and 10 GB RAM (Ceph OSD: E2)
=> Starting VM 35 with 3xHDD and 10 GB RAM (Ceph OSD: E3)
-> Waiting 30 seconds for VMs to start:..............................ok.
===> Waiting for VMs to be up: .28.29.30.32.33.34ok.
```

Note that if there is not enough memory on the host, it is possible to edit
the number of started VMs in /etc/oci-poc/oci-poc.conf. The directive is
NUMBER_OF_GUESTS=35 by default.

Once it is done, it is possible to see VMs with the ocicli command:

```
ocicli machine-list
```

## Installing the PoC cluster

Simply create a cluster with a single command:

```
oci-poc-install-cluster-bgp
```

Once done, ocicli machine-list will show machines added to the cluster with
the correct role.

To effectively install every VM:

```
ocicli cluster-install cl1
```

Then wait... It takes roughly 3 hours to get your cluster ready.

## Provisioning images, flavors, octavia, networking and all, inside OpenStack

On the host, there's a simple script to do all the necessary provisioning.
It will configure:

* A Debian image downloaded from cdimage.debian.org
* Networking with an external network and an internal network (VM direct-attach)
* 3 VM Flavors for Nova
* 3 availability zones for all the compute nodes
* All the octavia setup (image, sec-groups, certs, ssh-key, networking...)
* A tempest node to do functional testing of the cluster

## Running the oci-poc-ci

A single script runs all of the above. Simply do:

```
oci-poc-ci
```

Wait 3 hours, and your cluster is ready.

## Running tempest functional tests

Simply ssh the tempest host (run ocicli machine-list to get its IP address),
and then do:

```
cd /var/lib/tempest
stestr init
tempest_debian_shell_wrapper | tee bobcat-tests.txt
```

It is also possible to run a single test this way:

```
tempest_debian_shell_wrapper 'tempest\.api\.compute\.servers\.test_server_actions\.ServerActionsTestJSON\.test_resize_server_revert_with_volume_attached'
```

The full run takes a bit less than 3 hours. Note that the Debian wrapper
for tempest will take its test exclude list from /etc/tempest/exclude.conf,
which you may enrich with your own banned tests.

Note that tempest is designed as a CI tool for gating commits in the
OpenStack upstream Gerrit. It's not really meant as a CI for something like
OCI. Even if its huge list of tests is very helpful, it's expected that many
tests will fail: either because the test environment is different from the
one used in upstream Zuul/Gerrit, because more configuration of the test
environment is needed to have a test pass, or simply because some tests
aren't deterministic (they sometimes fail, sometimes don't). That's the
reason why there's a large number of tests in this exclude file by default.

However, contributions and debugging to reduce the number of excluded tests
are always welcome.

## Testing OCI patches

Now that your host is ready, it is possible to test any change using:

```
./sync-poc your-oci-poc-hostname
```

This will synchronize all of the PHP, puppet and shell scripts to your PoC.

## Cluster save and restore

Because installing a full OpenStack cluster made of so many machines takes
quite some time, oci-poc can save and restore cluster states. This will
simply shut down mysql, then all VMs, and copy the .qcow2 disks of all VMs
to /var/lib/openstack-cluster-installer-poc/saved:

```
oci-poc-save cl1
```

Once saved, it is OK to just rename the folders, so one can keep multiple
copies. Restoring is then done with the folder name:

```
oci-poc-restore cl1
```

# Hardware compatibility list
## Dell servers

OCI has been tested with these types of PowerEdge servers:

- DSS 1500
- DSS 1510
- DSS 2500
- PowerEdge R410
- PowerEdge R420
- PowerEdge R430
- PowerEdge R440
- PowerEdge R610 (as compute, swiftproxy)
- PowerEdge R620
- PowerEdge R630 (as compute, controller)
- PowerEdge R640 (as compute, controller)
- PowerEdge R720xd (as swiftstore)
- PowerEdge R740xd (as swiftstore)
- PowerEdge R6525 (AMD CPUs)
- PowerEdge R7525 (AMD CPUs)

Support for Dell's racadm is included, and OCI makes extensive use of it.

## Gigabyte

OCI has been tested and supports:

- R182-Z93-00 (as compute)

## HP servers

OCI has been tested with these types of Cloud Line servers (used as
swiftstores or Ceph OSD):

- CL2600 Gen10 (as cephosd)
- CL2800 Gen10 (as swiftstore)

Unfortunately, we have found no way to configure the BIOS of these
servers automatically, so some manual work has to be done, for example
to set the HDD hotplug flag. This can be very annoying when
setting up a large number of servers.

OCI has also been tested with these servers (used as swiftstores):

- ProLiant DL365 Gen10 Plus (as swiftstore, compute)
- ProLiant DL385 Gen10
- ProLiant DL385 Gen10 Plus (as swiftstore)
- ProLiant DL345 Gen11 (as swiftstore)

OCI is also capable of setting up ARM-based servers, and produces a
dual-arch (x86 and ARM) PXE boot process. It supports:

- ProLiant RL300 (tested as CephOSD and compute)

OCI can automatically install hponcfg, ssacli and storcli, directly from
the HP Debian repository. OCI uses hponcfg to automatically activate IPMI
over LAN (which is off by default in these servers).

## Lenovo

These systems have been used in production:

- ThinkSystem SR645 (as compute)
- ThinkSystem SR665 (as swift store)

However, for IPMI to work correctly, onecli from Lenovo needs to be
packaged: Lenovo doesn't ship a Debian package, and it is impossible to
redistribute the package (non-free license). Please get in touch if you
want the Debian source package (without the proprietary binaries).

Note that OCI sets these machines up using UEFI and Secure Boot.

## Supermicro

A user reported using Supermicro servers, though I couldn't test them
myself. A few patches were added to OCI to support them. I don't have
many details on what model(s) though.

# Upgrading
## From stretch-rocky to buster-rocky

### Upgrading compute nodes

First, switch the apt/sources.list to buster, and remove upstream's Ceph
backport repositories. Then remove all traces of Ceph from upstream:

```apt-get purge libcephfs2 librados2 librbd1 python3-rgw python3-rbd python3-rados python3-cephfs librgw2```

This will probably remove some Nova components; do it anyway. Then do the
dist-upgrade. Just hit enter on any prompt, or run in non-interactive mode
for the Debconf prompts (see the sketch below). Then just run puppet.
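
A possible way to run the dist-upgrade fully non-interactively (a sketch using standard apt/dpkg options, keeping existing configuration files):

```
export DEBIAN_FRONTEND=noninteractive
apt-get update
apt-get -y -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" dist-upgrade
```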

### Upgrading volume nodes

Nothing special here, just upgrade them with apt, reboot, and apply puppet.
It may be of course desirable to live-migrate volumes before rebooting.

### Upgrading your controllers

Upgrading controllers from Stretch to Buster isn't an easy task, so OCI
includes a script to automate it:

```
oci-cluster-upgrade-stretch-to-buster CLUSTER_NAME
```

It's going to do everything for you. It's strongly advised to test this
before doing it on a live cluster. The upgrade takes about 1 hour when
running with 3 controllers.

## Upgrading from one OpenStack release to the next

OCI comes with a shell script that helps you to do the OpenStack upgrades in
a fully automated way:

```
oci-cluster-upgrade-openstack-release CLUSTER_NAME FROM TO
```

For example, if you want to upgrade your cluster named "cl1" from Rocky to
Stein, simply do:

```
oci-cluster-upgrade-openstack-release cl1 rocky stein
```

Note that you cannot skip OpenStack releases. If you wish to upgrade from
Rocky to Victoria, then you must do:

```
oci-cluster-upgrade-openstack-release cl1 rocky stein
oci-cluster-upgrade-openstack-release cl1 stein train
oci-cluster-upgrade-openstack-release cl1 train ussuri
oci-cluster-upgrade-openstack-release cl1 ussuri victoria
```

Note that, after upgrading to buster-victoria, you must then upgrade your
cluster to Bullseye the way described above (still keeping victoria),
after which you'll be able to upgrade to Wallaby:

```
oci-cluster-upgrade-openstack-release cl1 victoria wallaby
```

## From bullseye-zed to bookworm-zed

### Make sure the latest version of OCI is running

Especially on controller and messaging nodes, OCI will need to run
/root/reset-rabbitmq-credentials. This comes with fairly recent versions
of OCI, therefore, make sure you're running a version of OCI that has it,
and run oci-puppet on the first controller and messaging nodes so that
the script is created.

### Preparing the upgrade: configure GRUB

If servers were installed with OCI, it's possible that grub doesn't
know where to install itself upon upgrades. To fix this, and
only if using a non-UEFI setup, run:

```
dpkg-reconfigure grub-pc
```

and select the drive where Grub should go.

### Upgrade script

Simply run the upgrade script:

```
oci-cluster-upgrade-bullseye-to-bookworm cl1
```

Note that it will upgrade the whole cluster, including compute and
network nodes. If that's not what you want, edit the script and
remove that part; it can be processed manually.

## Upgrading to libvirt and NoVNC over TLS

### What is this about ?

Previously, OCI was setting up libvirt over TCP, without any encryption.
There was also no VNC authentication, and anyone with access to the
management network of the compute nodes could connect to the VNC port
of a VM.

The feature was added to use libvirt over TLS instead of just TCP, so that
live-migrations can be done with everything encrypted on the wire.

In the same way, the NoVNC console now uses server/client SSL certificates,
so that the Nova NoVNC proxy verifies the VMs' VNC identity, and the VMs'
integrated VNC server only allows the NoVNC proxy to connect.

Libvirt over TLS and NoVNC use client and server certificates. The PKI
for this has to be done right, but unfortunately, OCI had a slightly wrong
setup of its PKI, with missing intermediate CA certificate attributes, so
it couldn't properly sign client certificates.

As a consequence, to upgrade to a newer version of OCI, it is necessary to
completely redo the internal PKI. This is painful and mandates some
operations that *WILL* cause some downtime on your cluster.

However, a script to automate all of this has been written (and is currently
being worked on: please hold...).

### When is such an upgrade needed?

If the deployed cluster doesn't have the PKI infrastructure for qemu+tls
and NoVNC, you need to run this scripted upgrade. In a normal situation,
a compute node should have these files:

```
cluster1-compute-1 # find /etc/pki/
/etc/pki/
/etc/pki/libvirt-vnc
/etc/pki/libvirt-vnc/server-cert.pem
/etc/pki/libvirt-vnc/ca-cert.pem
/etc/pki/libvirt-vnc/server-key.pem
/etc/pki/qemu
/etc/pki/qemu/server-cert.pem
/etc/pki/qemu/client-cert.pem
/etc/pki/qemu/client-key.pem
/etc/pki/qemu/ca-cert.pem
/etc/pki/qemu/server-key.pem
/etc/pki/CA
/etc/pki/CA/cacert.pem
/etc/pki/libvirt
/etc/pki/libvirt/clientcert.pem
/etc/pki/libvirt/servercert.pem
/etc/pki/libvirt/private
/etc/pki/libvirt/private/clientkey.pem
/etc/pki/libvirt/private/serverkey.pem
```

If that is already the case in your deployment, skip the rest of this
section. If not, you need to run the script.

### How to perform the upgrade

The whole upgrade is scripted, and has been successfully tested
on a very busy (moderately large) compute cluster.

Get the 2 scripts from the OCI git without deploying the OCI
upgrade yet, and scp them to your OCI/puppet server:

```
openstack-cluster-installer (debian/zed)$ scp bin/oci-renew-intermediate-ca bin/oci-disable-puppet root@cluster1-puppet-1:/usr/bin
```

Then simply run the script:

```
# oci-renew-intermediate-ca
```

During this process, OCI itself will be upgraded to the latest
release (ie: apt-get dist-upgrade).

### Trick to keep networking agent running

During the upgrade, the network nodes will, in some situations,
not be able to reach rabbitmq (because they may still contain the old
root CA while rabbitmq has been restarted with the new one). To
work around this, one can simply continuously update the Neutron DB
to fake that the network agents are always up:
```
while [ 1 ] ; do
   mysql -D neutrondb -e "UPDATE agents SET heartbeat_timestamp=NOW()"
   sleep 10
done
```

This will also prevent the agents from doing a "full resync" that may
trigger some network disconnections. Once the oci-renew-intermediate-ca
script has finished running, you can stop this loop.

### Libvirtd check on compute nodes

On compute nodes, we make sure libvirtd runs with TLS:

```
systemctl status libvirtd
```

If not, we restart libvirt with the correct socket activated:

```
systemctl stop libvirtd.service
systemctl stop libvirtd.socket
systemctl stop libvirtd-ro.socket
systemctl stop libvirtd-admin.socket
systemctl stop libvirtd-tcp.socket
systemctl start libvirtd-tls.socket
systemctl start libvirtd-ro.socket
systemctl start libvirtd-admin.socket
systemctl start libvirtd.service
```

Note this will only work if we have the new certs from puppet.

One can check that TLS is working using:

```
virsh -c qemu+tls://$(hostname --fqdn)/system list
```

Every compute node must be able to list instances of all other nodes,
and must also be able to ssh as root (from root).
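
A quick sketch to check that from any node with admin credentials loaded (it reuses the commands shown above):

```
# Each remote libvirt must be reachable over TLS from here:
for h in $(openstack compute service list --service nova-compute -f value -c Host) ; do
    echo "---> Checking qemu+tls connectivity to ${h}"
    virsh -c qemu+tls://${h}/system list > /dev/null && echo OK
done
```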

### Live-migrating all VMs to enable the VNC client certificate checks

Once you're done with the upgrade, your VMs will continue to bind
their VNC server on the local compute node without any kind of authentication.
To fix this, a new Qemu process must be started, so that it includes
the client and server TLS checks. There are 2 ways to do so: either
stop and start the VM, or live-migrate it. One easy way is probably
to do a nova host-evacuate-live on every compute node. A simple script
like this can do the trick:

```
for HOST in $(openstack compute service list --service nova-compute --format value -c Host) ; do
    echo "---> starting to evacuate $HOST" ;
    nova host-evacuate-live $HOST
    echo "---> Waiting 20 minutes between evacuate runs"
    sleep 1200
done
```