Information for developers
==========================
This document is intended to explain some of the more useful things
within the tree, and provide a standard for working on the code.
General stuff -- common subdirectory
------------------------------------
String handling
~~~~~~~~~~~~~~~
Use `snprintf()`.  A compatibility module is provided for target systems
that lack it natively.
If you use `snprintf()` to load some value into a buffer, make sure you
provide the format string.  Don't use user-provided format strings,
since that's an easy way to open yourself up to an exploit.
Don't use `strcat()`.  We have a neat wrapper for `snprintf()` called
`snprintfcat()` that allows you to append to a `char *` buffer with a
format string and all the usual length checking of the `snprintf()` routine.
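As a stand-alone illustration of the format-string rule (plain C, no NUT
helpers involved):

```c
#include <stdio.h>

/* Copy untrusted text into a fixed-size buffer.  The "%s" format is
 * supplied by us, so any conversion specifiers inside user_input are
 * copied verbatim instead of being interpreted. */
static void copy_user_text(char *dst, size_t dstsize, const char *user_input)
{
	/* WRONG: snprintf(dst, dstsize, user_input); */
	snprintf(dst, dstsize, "%s", user_input);
}
```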
Error reporting
~~~~~~~~~~~~~~~
Don't call `syslog()` directly.  Use `upslog_with_errno()` and `upslogx()`.
They may write to the syslog, stderr, or both as appropriate.  This
means you don't have to worry about whether you're running in the
background or not.
The `upslog_with_errno()` routine prints your message plus the string
expansion of `errno`.  `upslogx()` just prints the message.
`fatal_with_errno()` and `fatalx()` work the same way, but they
also `exit(EXIT_FAILURE)` afterwards. Don't call `exit()` directly.
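To show the mechanics behind such helpers, here is a toy stand-alone
analogue of the message-plus-`errno` style (an illustration only, not
NUT's actual implementation):

```c
#include <stdarg.h>
#include <stdio.h>
#include <string.h>
#include <errno.h>

/* Toy analogue of a "log message plus errno" helper: formats the message
 * like printf, then appends the strerror() expansion, the way
 * upslog_with_errno() conceptually does.  Result lands in a caller buffer. */
static void toylog_with_errno(char *buf, size_t bufsize, const char *fmt, ...)
{
	va_list	ap;
	size_t	len;

	va_start(ap, fmt);
	vsnprintf(buf, bufsize, fmt, ap);
	va_end(ap);

	len = strlen(buf);
	if (len < bufsize)
		snprintf(buf + len, bufsize - len, ": %s", strerror(errno));
}
```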
Debugging information
~~~~~~~~~~~~~~~~~~~~~
The `upsdebug_with_errno()`, `upsdebugx()`, `upsdebug_hex()` and
`upsdebug_ascii()` routines use the global `nut_debug_level`, so you
don't have to mess around with `printf()`'s and `if`'s yourself.
Use them.
Memory allocation
~~~~~~~~~~~~~~~~~
`xmalloc()`, `xcalloc()`, `xrealloc()` and `xstrdup()` all check the
results of the base calls before continuing, so you don't have to.
Don't use the raw calls directly.
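The pattern these wrappers implement looks roughly like this (a minimal
sketch; NUT's real `xmalloc()` reports failures through its own logging
helpers rather than plain `stderr`):

```c
#include <stdio.h>
#include <stdlib.h>

/* Sketch of the checked-allocation pattern: callers never see NULL,
 * so they don't have to test the result of every allocation. */
static void *sketch_xmalloc(size_t size)
{
	void	*p = malloc(size);

	if (!p) {
		fprintf(stderr, "malloc(%zu) failed\n", size);
		exit(EXIT_FAILURE);
	}
	return p;
}
```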
Config file parsing
~~~~~~~~~~~~~~~~~~~
The configuration parser, called `parseconf`, is now up to its fourth
major version.  It has multiple entry points, and can handle many
different jobs.  It's usually used for parsing files, but it can also
take input a line at a time or even a character at a time.
You must initialize a context buffer with `pconf_init()` before using any
other `parseconf` function.  `pconf_encode()` is the only exception, since
it operates on a buffer you supply and is an auxiliary function.
Escaping special characters and quoting multiple-word elements is all
handled by the state machine.  Using the same code for all config files
avoids code duplication.
NOTE: this does not apply to drivers.  Driver authors should use the
`upsdrv_makevartable()` scheme to pick up values from the 'ups.conf' file.
Drivers should not have their own config files.
Drivers may have their own data files, such as lists of hardware,
mapping tables, or similar.  The difference between a data file and a
config file is that users should never be expected to edit a data file
under normal circumstances.  This technique might be used to add more
hardware support to a driver without recompiling.
<time.h> vs. <sys/time.h>
~~~~~~~~~~~~~~~~~~~~~~~~~
This is already handled by autoconf, so just `#include "timehead.h"` and
you will get the right headers on every system.
Device drivers -- main.c
------------------------
The device drivers use `main.c` as their core.
To write a new driver, you create a file with a series of support
functions that will be called by main.  These all have names that start
with `upsdrv_`, and they will be called at different times by main
depending on what needs to happen.
See the <<new-drivers,driver documentation>> for information on writing
drivers, and also refer to the skeletal driver in `skel.c`.
Portability
-----------
Avoid things that will break on other systems.  All the world is not an
x86 Linux box.
C comments
~~~~~~~~~~
There are still older systems out there that don't do C++ style comments.
--------------------------------------
/* Comments look like this. */
// Not like this.
--------------------------------------
Variable declarations go on top
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Newer versions of gcc allow you to declare a variable inside a function
after code, somewhat like the way C++ operates, like this:
-------------------------------------------------------------------------------
void do_stuff(void)
{
	check_something();
	int a;
	a = do_something_else();
}
-------------------------------------------------------------------------------
While this will compile and run on these newer versions, it will fail
miserably for anyone on an older system.  That means you must not use it.
Note that `gcc` only warns about this with the `-pedantic` flag, and
`clang` with a `-Weverything` (possibly `-Wextra`) flag; these can be
enabled by developers with `configure --enable-warnings=...` option values
(and made fatal with `configure --enable-Werror`), to ensure non-regression
of code quality.  It was reported that `clang-16` with such options does
complain about non-portability to older C language revisions even if
explicitly building for a newer revision.
Please note that for the purposes of legacy-compatible variable declarations
(on top of their scopes), a `NUT_UNUSED_VARIABLE(varname)` counts as code and
should be used just below the declarations.  Initial assignments to variables
(also as return values of methods) may generally happen as part of their
declarations.
You can use scoping (e.g. `do { ... } while (0);`) where it makes sense
to constrain visibility of temporary variables, such as in `switch/case`
blocks.
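A legacy-compatible sketch combining both points (declarations on top of
each scope, plus a scoped block constraining a temporary):

```c
/* C89-friendly layout: declarations at the top of each scope, with
 * initial assignments allowed as part of the declarations. */
static const char *classify(int status)
{
	const char	*result = "unknown";

	switch (status) {
		case 0:
			/* scoping keeps the temporary local to this case */
			do {
				int	code = status + 100;

				if (code == 100)
					result = "online";
			} while (0);
			break;
		default:
			result = "on battery";
			break;
	}
	return result;
}
```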
Variable declaration in loop block syntax
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Another feature that does not work on some compilers (e.g. those conforming
to the "ANSI C"/C89/C90 standard) is initial variable declaration inside a
'for loop' block, like this:
--------------------------------------------------------------------------------
void do_stuff(void)
{
	/* This should declare "int i;" first, then use it in "for" loop: */
	for (int i = 0; i < INT_MAX; ++i) { ... }
	/* Additional loops also cause an error about re-declaring a variable: */
	for (int i = 10; i < 15; ++i) { ... }
}
--------------------------------------------------------------------------------
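A portable rewrite declares the counter once, on top of the function
scope, and reuses it in both loops:

```c
/* Portable C89-style rewrite: the loop counter is declared once at the
 * top of the scope and reused by both loops. */
static int sum_ranges(void)
{
	int	i;
	int	sum = 0;

	for (i = 0; i < 5; i++)
		sum += i;

	for (i = 10; i < 15; i++)
		sum += i;

	return sum;	/* 0+1+2+3+4 + 10+11+12+13+14 = 70 */
}
```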
Other hints
~~~~~~~~~~~
TIP: At this point NUT is expected to work correctly when built with a
C99 (or rather GNU99 on many systems) or newer standard.
The NUT codebase may build in a mode without warnings made fatal on C89
(GNU89), but the emitted warnings indicate that those binaries may crash.
By the end of 2021, the NUT codebase had been revised to pass GNU and
strict-C mode builds with the C89 standard using the GCC toolkit (and on
systems that do have the newer features in libraries, just hide them in
standard headers); however, the CLANG toolkit is more restrictive about
the C99+ syntax used.
If somebody in the community needs to build and run NUT on systems that
old, pull requests to fix the offending coding issues are welcome.
Note also that the NUT codebase currently relies on certain features,
such as the printf format modifiers for `(s)size_t`, use of `long long`,
some nuances about structure/array initializers, variadic macros for
debugging, etc., which a pedantic C90-mode compilation warns are not part
of that standard but GNU extensions (and part of C99 and newer standard
revisions).  Many of the "offences" against the older standard actually
come from system and third-party header files.
That said, the NUT CI farm does run non-regression builds with GNU C89
and strict C89 standard revisions and a minimal passing warnings level,
to ensure that the codebase is and remains at least basically compliant.
Continuous Integration and Automated Builds
-------------------------------------------
To ease and automate the build scenarios which were deemed important for
quality assurance and non-regression checks of NUT, several solutions
were introduced over time.
Build automation tools and scripts
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
ci_build.sh
^^^^^^^^^^^
This script was originally introduced (following the ZeroMQ/ZProject
example) to automate CI builds, running certain scenarios driven by
exported environment variables that select particular `configure` options
and `make` targets (chosen by the `BUILD_TYPE` envvar).  It can also be
used locally to avoid much typing when re-running those scenarios during
development.
Developers can directly use the scripts involved in CI builds to fix
existing code on their workstations or to ensure support for new
compilers and C standard revisions, e.g. save a local file like this
to call the common script with pre-sets:
	$ cat _fightwarn-gcc10-gnu17.sh
	#!/bin/sh
	BUILD_TYPE=default-all-errors \
	CFLAGS="-Wall -Wextra -Werror -pedantic -std=gnu17" \
	CXXFLAGS="-Wall -Wextra -Werror -std=gnu++17" \
	CC=gcc-10 CXX=g++-10 \
		./ci_build.sh
...and then execute it to prepare a workspace, after which you can go
fixing bugs file-by-file running a `make` after each save to confirm
your solutions and uncover the next issue to address :-)
Helpfully, the NUT CI farm build logs report the configuration used for
each executed stage, so if some build combination fails -- you can just
scroll to the end of that section and copy-paste the way to reproduce
an issue locally (on an OS similar to that build case).
Note that while spelling out sets of warnings can help in a quest to
fix certain bugs during development (if only by removing noise from
classes of warnings not relevant to the issue one is working on), there
is a reasonable set of warnings which NUT codebase actively tries to
be clean about (and checks in CI), detailed in the next section.
For the `ci_build.sh` usage like above, one can instead pass the setting
via `BUILD_WARNOPT=...`, and require that all emitted warnings are fatal
for their build, e.g.:
	$ cat _fightwarn-clang9-gnu11.sh
	#!/bin/sh
	BUILD_TYPE=default-all-errors \
	BUILD_WARNOPT=hard BUILD_WARNFATAL=yes \
	CFLAGS="-std=gnu11" \
	CXXFLAGS="-std=gnu++11" \
	CC=clang-9 CXX=clang++-9 CPP=clang-cpp \
		./ci_build.sh
Finally, for refactoring effort geared particularly for fighting the
warnings which exist in current codebase, the script contains some
presets (which would evolve along with codebase quality improvements)
as `BUILD_TYPE=fightwarn-gcc`, `BUILD_TYPE=fightwarn-clang` or plain
`BUILD_TYPE=fightwarn`:
	BUILD_TYPE=fightwarn-clang ./ci_build.sh
As a rule of thumb, new contributions must not emit any warnings when
built in GNU99 mode with a `minimal` "difficulty" level of warnings.
Technically they must survive the part of test matrix across the several
platforms tested by NUT CI and marked in project settings as required
to pass, to be accepted for a pull request merge.
Developers aiming to post successful pull requests to improve NUT can
pass the `--enable-warnings` option to the `configure` script in local
builds to see how that behaves and ensure that at least in some set-up
their contribution is viable. Note that different compiler versions and
vendors (gcc/clang/...), building against different OS and third-party
dependencies, with different CPU architectures and different language
specification revisions, might all complain about different issues --
and catching this in as diverse range of set-ups as possible is why we
have CI tests.
It can be beneficial for serial developers to set up a local BuildBot,
Travis or a Jenkins instance with a matrix test job, to test their local
git repository branches with whatever systems they have available.
* https://github.com/networkupstools/nut/issues/823
While `autoconf` tries its best to provide portable shell code, sometimes
there are builds of system shell that just fail under stress. If you are
seeing random failures of `./configure` script in different spots with
the same inputs, try telling `./ci_build.sh` to loop configuring until
success (instead of quickly failing), and/or tell `./configure` to use
another shell at least for the system call-outs, with options like these:
	SHELL=/bin/bash CONFIG_SHELL=/bin/bash CI_SHELL_IS_FLAKY=true \
	./ci_build.sh
Jenkins CI
^^^^^^^^^^
Since mid-2021, the NUT CI farm is implemented by several virtual servers
courteously provided by http://fosshost.org
These run various operating systems as build agents, and a Jenkins instance
to orchestrate the builds of NUT branches and pull requests on those agents.
This is driven by `Jenkinsfile-dynamatrix` and a Jenkins Shared Library called
link:https://github.com/networkupstools/jenkins-dynamatrix[jenkins-dynamatrix]
which prepares a matrix of builds across as many operating systems,
bitnesses/architectures, compilers, make programs and C/C++ revisions
as it can -- based on the population of currently available build agents
and capabilities which they expose as agent labels.
This hopefully means that people interested in NUT can contribute to the
build farm (and ensure NUT is and remains compatible with their platform)
by running a Jenkins Swarm agent with certain labels, which would dial
into https://ci.networkupstools.org/ controller. Please contact the NUT
maintainer if you want to participate in this manner.
The `Jenkinsfile-dynamatrix` recipe allows NUT CI farm to run different sets
of build scenarios based on various conditions, such as the name of branch
being built (or PR'ed against), changed files (e.g. C/C++ sources vs. just
docs), and some build combinations may not be required to succeed.
For example, the main development branch and pull requests against it must
cleanly pass all specified builds and tests on various platforms with the
default level of warnings specified in the `configure` script. These are
balanced to not run too many build scenarios overall, but just a quick and
sufficiently representative set.
As another example, there is special handling for "fightwarn" pattern in
the branch names to run many more builds with varying warning levels and
more variants of intermediate language revisions, and so expose concerns
deliberately missed by default warnings levels in "master" branch builds
(the bar moves over time, as some classes of warnings become extinct from
our codebase).
Further special handling for branches named like the `fightwarn.*89.*` regex
enables more intensive warning levels for a GNU89 build specifically (which
are otherwise disabled as noisy yet not useful for supported C99+ builds),
and is intended to help develop fixes for support of this older language
revision, if anyone would dare.
Many of those unsuccessful build stages are precisely the focus of the
"fightwarn" effort, and are currently marked as "may fail", so they end
up as "UNSTABLE" (seen as orange bubbles in the Jenkins BlueOcean UI, or
orange cells in the tabular list of stages in the legacy UI), rather than
as "FAILURE" (red bubbles) for build scenarios that were not expected to
fail and usually represent higher-priority problems that would block a PR.
Developers whose PR builds (or attempts to fix warnings) did not succeed in
some cell of such a build matrix can look at the individual logs of that cell.
Besides the compiler's indication of the failure, the end of the log text
includes the command which was executed by the CI worker and can be reproduced
locally by the developer, e.g.:
----
22:26:01  FINISHED with exit-code 2 cmd:  (
22:26:01  [ -x ./ci_build.sh ] || exit
22:26:01
22:26:01  eval BUILD_TYPE="default-alldrv" BUILD_WARNOPT="hard" \
    BUILD_WARNFATAL="yes" MAKE="make"  CC=gcc-10 CXX=g++-10 \
    CPP=cpp-10 CFLAGS='-std=gnu99 -m64' CXXFLAGS='-std=gnu++11 -m64' \
    LDFLAGS='-m64' ./ci_build.sh
22:26:01  )
----
or for autotools-driven scenarios (which prep, configure, build and test
in separate stages -- so for reproducing a failed build you should also
look at its configuration step separately):
----
22:28:18  FINISHED with exit-code 0 cmd:  ( [ -x configure ] || exit; \
    eval  CC=clang-9 CXX=clang++-9 CPP=clang-cpp-9 CFLAGS='-std=c11 -m64' \
    CXXFLAGS='-std=c++11 -m64' LDFLAGS='-m64' time ./configure )
----
To re-run such scenario locally, you can copy the line from `eval` (but
without the `eval` keyword itself) up to and including the executed script
or tool, into your shell. Depending on locally available compilers, you
may have to tweak the `CC`, `CXX` and `CPP` arguments; note that a `CPP`
may be specified as `/path/to/CC -E` for GCC and CLANG based toolkits
at least, if they lack a standalone preprocessor program (e.g. IntelCC).
NOTE: While NUT recipes do not currently recognize a separate `CXXCPP`,
it would follow similar semantics.
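For instance, a locally reproduced CLANG scenario might look like this
(compiler paths and versions here are illustrative and depend on what is
installed on your system):

```shell
# Reproduce a CI cell locally; note CPP spelled as "clang -E" for
# toolkits that lack a standalone preprocessor program.
CC=clang CXX=clang++ CPP='clang -E' \
CFLAGS='-std=c11' CXXFLAGS='-std=c++11' \
    ./configure
```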
Some further details about the NUT CI farm workers are available in
link:config-prereqs.txt[config-prereqs.txt] and
link:ci-farm-lxc-setup.txt[ci-farm-lxc-setup.txt] documents.
AppVeyor CI
^^^^^^^^^^^
Primarily used for building NUT for Windows on Windows instances provided
in the cloud -- ensuring non-regression, and producing downloadable archives
with a binary installation prototype area, intended for enthusiastic testing
(proper packaging to follow).  NUT for Windows build-ability was re-introduced
soon after the NUT 2.8.0 release.
This relies on a few prerequisite packages and a common NUT configuration,
as coded in the `appveyor.yml` file in the NUT codebase.
CircleCI
^^^^^^^^
Primarily used for building NUT for MacOS on instances provided in the cloud,
ensuring non-regression across several Xcode releases.
This relies on a few prerequisite packages and a common NUT configuration,
as coded in the `.circleci/config.yml` file in the NUT codebase.
Travis CI
^^^^^^^^^
See the `.travis.yml` file in project sources for a detailed list of third
party dependencies and a large matrix of `CFLAGS` and compiler versions
last known to work or to not (yet) work on operating systems available
to that CI solution.
[NOTE]
======
The cloud Travis CI offering became effectively defunct for
open-source projects in mid-2021, so the `.travis.yml` file in NUT
codebase is not actively maintained.
Local private deployments of Travis CI are possible, so if anybody does
use it and has updated markup to share, they are welcome to post PRs.
======
The NUT project on GitHub has integration with Travis CI to test a large
set of compiler and option combinations, covering different versions of
gcc and clang, C standards, and requiring to pass builds at least in a
mode without warnings (and checking the other cases where any warnings
are made fatal).
Pre-set warning options
~~~~~~~~~~~~~~~~~~~~~~~
The options chosen into pre-sets that can be selected by `configure`
script options are ones we use for different layers of CI tests.
Values to note include:
* `--enable-Werror(=yes/no)` -- make warnings fatal;
* `--enable-warnings(=.../no)` -- enable certain warning presets:
** `gcc-hard`, `clang-hard`, `gcc-medium`, `clang-medium`, `gcc-minimal`,
   `clang-minimal`, `all` -- actual definitions that are compiler-dependent
   (the latter just adds `-Wall` which may be relatively portable);
** `hard`, `medium` or `minimal` -- if current compiler is detected as
   CLANG or GCC, apply corresponding setting from above (or `all` otherwise);
** `gcc` or `clang` -- apply the set of options (regardless of detected
   compiler) with default "difficulty" hard-coded in `configure` script,
   to tweak as our codebase becomes cleaner;
** `yes`/`auto` (also takes effect if `--enable-warnings` is requested
   without an `=ARG` part) -- if current compiler is detected as CLANG
   or GCC, apply corresponding setting with default "difficulty" from
   above (or `all` otherwise).
Note that for backwards-compatibility reasons and to help filter out
introduction of blatant errors, builds with compilers that claim GCC
compatibility can enable a few easy warning presets by default. This
can be avoided with an explicit argument to `--disable-warnings` (or
`--enable-warnings=no`).
All levels of warnings pre-sets for GCC in particular do not enforce
the `-pedantic` mode for builds with C89/C90/ANSI standard revision
(as guesstimated by `CFLAGS` content), because nowadays it complains
more about the system and third-party library headers than about NUT
codebase quality (and "our offenses" are mostly something not worth
fixing in this era, such as the use of `__func__` in debug commands).
If there still are practical use-cases that require builds of NUT on
pre-C99 compiler toolkits, pull requests are of course welcome -- but
the maintainer team does not intend to spend much time on that.
Hopefully this warnings pre-set mechanism is extensible enough if we
would need to add more compilers and/or "difficulty levels" in the
future.
Finally, note that such pre-set warnings can be mixed with options
passed through `CFLAGS` or `CXXFLAGS` values to your local `configure`
run, but it is up to your compiler how it interprets the resulting mix.
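For example, a local run mixing a pre-set with explicit flags might look
like this (option names as documented above; the standard revision is just
an illustration):

```shell
# Enable the "hard" warnings pre-set, make warnings fatal, and pin a
# language revision; the compiler decides how the resulting mix behaves.
./configure --enable-warnings=hard --enable-Werror \
    CFLAGS="-std=gnu99" CXXFLAGS="-std=gnu++11"
```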
Integrated Development Environments (IDEs) and debugging NUT
------------------------------------------------------------
Much of NUT has been coded using classic editors of developers' preference,
like `vi`, `nano`, Midnight Commander `mcedit`, `gedit`/`pluma`, NotePad++
and tools like `meld` or WinMerge for file comparison and merge.
Modern IDEs however do offer benefits, specifically for live debugging
sessions in a more convenient fashion than with command-line `gdb` directly.
They also simplify writing AsciiDoc files with real-time rendering support.
NOTE: Due to use of `libtool` wrappers in "autotools" driven projects,
it may be tricky to attach the debugger (mixing the correct `LD_LIBRARY_PATH`
or equivalent with a binary under a `.libs` subdirectory; on some platforms
you may be better off copying shared objects to the directory with the binary
being tested).
IDEs that were tested to work with NUT development and real-time debugger
tracing include:
* Sun NetBeans 8.2 on Solaris, Linux (including local and remote build
  and debug ability);
* Apache NetBeans 17 on Windows with MSYS2 support (as MinGW toolkit);
* Visual Studio Code (VSCode) on Windows with MSYS2 support.
Some supporting maintenance and development is doable with IntelliJ IDEA,
making some things easier to do than with a simple Notepad, but it does
not handle C/C++ development as such.
IDE notes on Windows
~~~~~~~~~~~~~~~~~~~~
General settings for builds on Windows
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
When working in a native Windows environment with
link:https://www.msys2.org/[MSYS2] (providing MinGW x64 among other things),
you may need to ensure certain environment variables are set before you start
the IDE (shortcuts and wrappers that start your console apply them via shell).
WARNING: If you set such environment variables system-wide for your user
profile (or wrap the IDE start-up by a script to set them), it may compromise
your ability to use *other* MSYS2 profiles and/or other builds of these
toolkits (packaged by e.g. Git for Windows or PERL for Windows projects)
generally, or in the same IDE session, respectively.
You may want to do this in a dedicated user account!
Examples below assume you installed MSYS2 into `C:\msys64` (by default) and
are using the "MinGW X64" profile for GCC builds (nuances may differ for
32-bit, CLANG, UCRT and other profile variants).
Also keep in mind that not all dependencies and tools involved in a
fully-fledged NUT build are easily available or usable on Windows (e.g.
the spell checker). See the link:config-prereqs.txt[] for better detailed
package lists for different operating systems including Windows, and feel
welcome to post pull requests with suggestions about new tool-chains that
might fare better than those already tried and documented.
* Make sure its tools are in the `PATH`:
+
Control Panel => "Edit the system environment variables" =>
  "Environment variables..." (button) =>
  "Edit..." or create "New..." `Path` setting ("User variable" level suffices) =>
** Make sure `C:\msys64\mingw64\bin` and `C:\msys64\usr\bin` are both there.
**  Depending on further installed toolkits, you may want to add
   `C:\Program Files\Git\cmd` or `C:\Program Files\Microsoft VS Code\bin`
   (preferably use deployment-dependent spellings without white-space like
   `Progra~1` to err on the safe side of variable expansions later).
* Make sure that MSYS2 (and tools which integrate with it) know its home:
+
Open Environment variables window as above, and
  "Edit..." or create "New..." `MSYS_HOME` setting =>
  Set to `C:\msys64\mingw64\bin`
* Restart the IDE (if already running) for it to acknowledge the system
  configuration change.
Otherwise, NetBeans for example claims there is no shell for it to run `make`
or open Terminal pane windows, and fails to start the built programs due to
lack of DLL files they were linked against (such as `libssl` usually needed
for any networked part of the codebase).
You might still have to fiddle with DLL files built in other directories of
the NUT project, when preparing to debug certain programs, e.g. for `dummy-ups`
testing you may need to:
------
:; cp ./clients/.libs/libupsclient-6.dll ./drivers/.libs/
------
To ensure builds with debug symbols, you may add `CFLAGS` and `CXXFLAGS` set
to `-g3 -gdwarf-2` or similar to the `configure` options; if that confuses
the cross-build (it tends to assume those values are part of the GCC path),
you may have to hack them into your local copy of `configure.ac`, after the
`AM_INIT_AUTOMAKE([subdir-objects])` line:
------
CFLAGS="$CFLAGS -g3 -gdwarf-2"
CXXFLAGS="$CXXFLAGS -g3 -gdwarf-2"
------
...and re-run the `./autogen.sh` script.
GDB on Windows
^^^^^^^^^^^^^^
Examples below assume that whichever IDE you are using, the primary goal is
to debug some issues with NUT on that platform.
This may require you to craft a configuration file for the GNU Debugger,
e.g. `C:\Users\abuild\.gdbinit` for the examples below. Such a file is not
required, however, and may be absent.
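For instance, a minimal `.gdbinit` matching the fork-handling debugger
settings suggested for the IDEs below might contain (an assumption to adapt,
not a required configuration):

```
set follow-fork-mode child
set detach-on-fork off
set print pretty on
```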
Another thing to keep in mind is that with `libtool` involved, the actual
binary for testing would be in a `.libs` subdirectory and you may have some
fun with ensuring that DLLs are found to start them -- see the notes above.
NetBeans on Windows
^^^^^^^^^^^^^^^^^^^
When you install newer link:https://netbeans.apache.org/[Apache NetBeans]
releases (14, 17 as of this writing), you may need to enable the use of
"NetBeans 8.2 Plugin Portal" (check under Tools/Plugins/Settings) and
install the "C/C++" plugin only available there at the moment.
In turn, that older build of a plugin package may require that your system
provides the `unpack200(.exe)` tool which was shipped with JDK11 or older
(you may have to install that just to get the tool, or copy its binary from
another system).
Under Tools/Options menu open the C/C++ tab and further its Build Tools sub-tab.
NOTE: NetBeans allows you to easily define different Tool Collections,
including those associated with a different build host (accessible over SSH
and source/build paths optionally shared over NFS or similar technology, or
copied over). This allows you to run the IDE on your desktop while debugging
a build running on a server or embedded system.
Make sure you have a MinGW Tool Collection for the "localhost" build host with
such settings as:
|===
| Option name             | Sample value
| Family                  | GNU MinGW
| Encoding                | UTF-8
| Base Directory          | `C:\msys64\mingw64\bin`
| C Compiler              | `C:\msys64\mingw64\bin\gcc.exe`
| C++ Compiler            | `C:\msys64\mingw64\bin\g++.exe`
| Assembler               | `C:\msys64\mingw64\bin\as.exe`
| Make Command            | `C:\msys64\usr\bin\make.exe`
| Debugger Command        | `C:\msys64\mingw64\bin\gdb.exe`
|===
In the Code Assistance sub-tab check that there are toolkit-specific and
general include paths, e.g. both C and C++ Compiler settings might involve:
|===
| `C:\msys64\mingw64\lib\gcc\x86_64-w64-mingw32\12.2.0\include`
| `C:\msys64\mingw64\include`
| `C:\msys64\mingw64\lib\gcc\x86_64-w64-mingw32\12.2.0\include-fixed`
| `C:\msys64\mingw64\x86_64-w64-mingw32\include`
|===
On top of that, C++ Compiler settings may include:
|===
| `C:\msys64\mingw64\include\12.2.0`
| `C:\msys64\mingw64\include\12.2.0\x86_64-w64-mingw32`
| `C:\msys64\mingw64\include\12.2.0\backward`
|===
In the "Other" sub-tab, set default standards to C99 and C++11 to match common
NUT codebase expectations.
Finally, open/create a "nut" project pointing to your git checkout workspace.
Next part of configuration regards build/debug configurations, which you can
find on the toolbar or as File / Project Properties.
The main configuration for debugging a particular binary (and NUT has tons
of those, good luck if you want to debug several simultaneously) is
in the *Run* and *Debug* categories. You may want to define different
Configuration profiles to track the individual Run/Debug settings for
different tested binaries, while the Build/Make settings would remain the same.
Alternatively, you may set the *Make* category's "Build Result" as the path to
the binary you would test, and use `${OUTPUT_PATH}` variable as its name in
the "Run Command" (still likely need custom arguments) and "Symbol File" below.
When you investigate interactions of two or more programs, but only want to
debug (step through) just one of them, you are advised to run each of the
others from a dedicated terminal session, and just bump their debug verbosity.
* In the *Build* category, set the Build Host (localhost) and Tool Collection
  (MinGW). In expert part of the settings, un-check "platform-independent"
  and revise that the `TOOLS_PATH=C:\msys64\mingw64\bin` while the
  `UTILITIES_PATH=C:\msys64\usr\bin`.
* In the *Pre-Build* category likely keep the Working Directory as `.` and
  the `Pre-Build First` generally unchecked (so only enable it to reconfigure
  the project, which takes time and is not needed for every rebuild iteration),
  but you may still pre-set the Command line to something like the following
  (on one line):
+
------
bash -c "rm -f configure Makefile; ./autogen.sh &&
    ./configure CC='${IDE_CC}' CXX='${IDE_CXX}'
        --with-all=auto --with-docs=skip"
------
+
In some cases, NOT specifying `CC`, `CXX` and the flags actually succeeds,
while passing these options fails the configuration ("Compiler can not create
executables" etc.), probably due to path-resolution issues between the native
and MinGW environments.
+
NOTE: In practice, you may have an easier time using the NUT `./ci_build.sh` helper or
running a more specific `./autogen.sh && ./configure ...` spell similar to
the above example or customized otherwise, in the MinGW x64 console window
to actually configure a NUT source code setup, than to maintain one via the IDE.
Running (re-)builds with the IDE (as you just edit non-recipe sources and
iterate with a debugger) using externally configured Makefiles works fine.
* In the *Make* category you may want to customize for parallelized builds on
  multi-CPU systems with something like:
** Build Command: `${MAKE} -j 6 -f Makefile`
** Clean Command: `${MAKE} -f Makefile clean`
* In the *Run* category you should set the "Run Command" to point to your
  binary (note the `.libs` sub-directory, and see comments above regarding
  possibly needed copies of shared objects) and its arguments (all on one
  line), e.g.:
+
------
C:\Users\abuild\Desktop\nut\drivers\.libs\usbhid-ups.exe -s ups -x port=auto
    -d1 -DDDDDD
------
+
Other useful settings may be to keep "Build First" checked, and if the
"Internal Terminal" does not work for you as the debugged program's console --
set the "Console Type" to "External Terminal" of type "Command Window".
Unfortunately, NetBeans on Windows may have issues running terminal tabs
unless Cygwin is installed.
* In the *Debug* category you should set the "Symbol File" to point to your
  tested binary (e.g. `C:\Users\abuild\Desktop\nut\drivers\.libs\usbhid-ups.exe`
  to match the "Run Command" example above) and specify "Follow Fork Mode" as
  "child" and "Detach On Fork" as "off". "Reverse Debugging" may be useful too
  in some situations. Finally, select your "Gdb Init File" if you have one,
  e.g. `C:\Users\abuild\.gdbinit`.
Microsoft VS Code
^^^^^^^^^^^^^^^^^
With this IDE you can benefit from numerous extensions in its Marketplace;
the ones found useful for NUT development and debugging include:
* AsciiDoc (by asciidoctor)
* EditorConfig for VS Code (by EditorConfig)
* C/C++ (by Microsoft)
* C/C++ Extension pack (by Microsoft)
* Makefile tool (by Microsoft)
* MSYS2/Cygwin/MinGW/Clang support (by okhlybov)
* Native Debug (GDB, LLDB ... Debugger support; by WebFreak)
Configurations are tracked locally in JSON files where you would need to add
some entries. Examples below highlight the needed keys and values; your files
may have others:
* `.vscode/launch.json` (you can create one via the Run / Add Configuration...
  menu) defines ways to launch the debug session for a program:
+
------
{
    "configurations": [
        {
            "name": "CPPDBG GDB usbhid-ups",
            "type": "cppdbg",
            "request": "launch",
            "program": "C:\\Users\\abuild\\Desktop\\nut\\drivers\\.libs\\usbhid-ups.exe",
            "additionalSOLibSearchPath": "C:\\Users\\abuild\\Desktop\\nut\\.inst\\mingw64\\bin",
            "stopAtConnect": true,
            "args": ["-s", "ups", "-DDDDDD", "-d1", "-x", "port=auto"],
            "stopAtEntry": false,
            "cwd": "C:\\Users\\abuild\\Desktop\\nut",
            "environment": [],
            "externalConsole": false,
            "MIMode": "gdb",
            "miDebuggerPath": "C:\\msys64\\mingw64\\bin\\gdb.exe",
            "targetArchitecture": "x64",
            "setupCommands": [
                {
                    "description": "Enable pretty-printing for gdb",
                    "text": "-enable-pretty-printing",
                    "ignoreFailures": true
                },
                {
                    "description": "Set Disassembly Flavor to Intel",
                    "text": "-gdb-set disassembly-flavor intel",
                    "ignoreFailures": true
                }
            ],
            "preLaunchTask": "make usbhid-ups"
        },
        {
            // Alternatively, with LLDB (clang); the rest looks like above:
            "name": "CPPDBG LLDB usbhid-ups",
            "MIMode": "lldb",
            "miDebuggerPath": "C:\\msys64\\usr\\bin\\lldb.exe",
        },
        ...
    ]
}
------
* `.vscode/tasks.json` defines other tasks, such as the `preLaunchTask`
  mentioned above (assuming you have configured the build externally in
  the MinGW x64 terminal session):
+
------
{
    "tasks": [
        {
            "type": "shell",
            "label": "make usbhid-ups",
            "command": "C:\\msys64\\usr\\bin\\make usbhid-ups",
            "options": {
                "cwd": "${workspaceFolder}/drivers"
            },
            "problemMatcher": [
                "$gcc"
            ],
            "group": {
                "kind": "build",
                "isDefault": true
            }
        },
        ...
    ]
}
------
* `.vscode/c_cpp_properties.json` defines general compiler settings, e.g.:
+
------
{
    "configurations": [
        {
            "name": "Win32",
            "includePath": [
                "${workspaceFolder}/**",
                "C:\\msys64\\mingw64\\include\\libusb-1.0",
                "C:\\msys64\\mingw64\\include",
                "C:\\msys64\\usr\\include"
            ],
            "defines": [
                "_DEBUG",
                "UNICODE",
                "_UNICODE"
            ],
            "compilerPath": "C:\\msys64\\mingw64\\bin\\gcc.exe",
            "cStandard": "c99",
            "cppStandard": "c++11",
            "intelliSenseMode": "windows-gcc-x64",
            "configurationProvider": "ms-vscode.makefile-tools"
        }
    ],
    "version": 4
}
------
IntelliJ IDEA
^^^^^^^^^^^^^
It is worth mentioning IntelliJ IDEA as another free (as of its Community
Edition) and popular IDE; however, it is of limited use for NUT development.
Its ecosystem does feature a good AsciiDoc plugin, Python and of course the
Java/Groovy support, so IDEA is helpful for maintenance of NUT documentation,
helper scripts and CI recipes.
It lacks however C/C++ language support (allegedly a different product in the
IntelliJ portfolio is dedicated to that), so for the core NUT project sources
it is just a fancy text editor (with `.editorconfig` support) without syntax
highlighting or codebase cross-reference aids, build/run/debug support, etc.
Still, it is possible to run builds and tests in an embedded or external
terminal session -- so it is no worse than editing with legacy tools, and
navigation or code-base-wide search is arguably easier.
////////
TODO:
Make note of settings (and Run as Administrator) to use symlinks in MinGW x64.
Check if required for sane (iterative re-)builds? ;)
////////
Coding style
------------
This is how we do things:
-------------------------------------------------------------------------------
int open_subspace(char *ship, int privacy)
{
	if (!privacy)
		return insecure_channel(ship);
	if (!init_privacy(ship))
		fatal_with_errno("Can't open secure channel");
	return secure_channel(ship);
}
-------------------------------------------------------------------------------
The basic idea is that we try to group things into functions, and then
find ways to drop out of them when we can't go any further.  There's
another way to program this involving a big else chunk and a bunch of
braces, and it can be hard to follow.  You can read this from top to
bottom and have a pretty good idea of what's going on without having to
track too much `{ }` nesting and indenting.
We don't really care for `pretentiousVariableNamingSchemes`, but you can
probably get away with it in your own driver that we will never have to
touch.  If your function or variable names start pushing important code
off the right margin of the screen, expect them to meet the byte
chainsaw sooner or later.
All types defined with typedef should end in `_t`, because this is
easier to read, and it enables tools (such as indent and emacs) to
display the source code correctly.
Indenting with tabs vs. spaces
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Another thing to notice is that the indenting happens with tabs instead
of spaces.  This lets everyone have their personal tab-width setting
without inflicting much pain on other developers.  If you use a space,
then you've fixed the spacing in stone and have really annoyed half of
the people out there.
Note that tabs apply only to *indenting*.  Alignment of text after any
non-tab character has appeared on the line must be done by spaces in
order for it to remain at the same alignment when someone views tabs at
a different width.
One common example of this is a multi-line `if` condition:
--------------------------------------------------------------------------------
	if (something &&
	    something_else) {
--------------------------------------------------------------------------------
which may be written without mixing tabs and spaces to indent, as:
--------------------------------------------------------------------------------
	if (something
	&&  something_else
	) {
--------------------------------------------------------------------------------
Another example is tables of definitions that are better aligned with
(non-leading) spaces at least between names and values not too many
characters wide; it still helps to align the columns with spaces at
offsets divisible by 4 or 8 (consistently for the whole table):
--------------------------------------------------------------------------------
#define SHORT_MACRO                         1	/* flag comment */
#define SOMETHING_WITH_A_VERY_LONG_NAME     255	/* flag comment */
--------------------------------------------------------------------------------
If you write something that uses leading spaces, you may get away with
it in a driver that's relatively secluded.  However, if we have to work
on that code, expect it to get reformatted according to the above.
Patches to existing code that don't conform to the coding style being
used in that file will probably be dropped.  If it's something we really
need, it will be grudgingly reformatted before being included.
When in doubt, have a look at Linus's take on this topic in the Linux
kernel -- Documentation/CodingStyle.  He's done a far better job of
explaining this.
Line breaks
~~~~~~~~~~~
It is better to have lines that are longer than 80 characters than to
wrap lines in random places. This makes it easier to work with tools
such as `grep`, and it also lets each developer choose their own
window size and tab setting without being stuck to one particular
choice.
Of course, this does not mean that lines should be made unnecessarily
long when there is a better alternative (see the note on
`pretentiousVariableNamingSchemes` above).  Certainly there should not
be more than one statement per line. Please do not use
-------------------------------------------------------------------------------
if (condition) break;
-------------------------------------------------------------------------------
but use the following:
-------------------------------------------------------------------------------
if (condition) {
	break;
}
-------------------------------------------------------------------------------
NOTE: Earlier revisions of the coding style might suggest avoiding braces when
just one line is added as condition/loop/etc. handling code. The current
approach is to welcome them even for single lines: on one hand, this confirms
the intention that only this line is the conditional code; on the other, this
minimizes the context differences for later code comparisons, relocation,
refactoring, etc.
Un-used variables and function arguments
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Whenever a function needs to satisfy a particular API, it can end up
taking arguments that are not used in practice (think a too-trivial
signal handler). While some compilers offer the facility of decorations
like `__attribute__((unused))`, this has proven not to be a portable solution.
Also the abilities of newer C++ standard revisions are of no help to
the vast range of existing systems that run NUT today and expect to be
able to do so tomorrow (hence the required C99+ support noted above).
In NUT codebase we prefer to mark un-used variables explicitly in the
body of the function (or an `#ifdef` branch of its code) using the
`NUT_UNUSED_VARIABLE(varname)` as a routine call inside a function
body, referring to the macro defined in `common.h`.
Please note that for the purposes of legacy-compatible variable declarations
(on top of their scopes), `NUT_UNUSED_VARIABLE(varname)` counts as code and
should happen below the declarations.
To illustrate with a rough example:
-------------------------------------------------------------------------------
	static void signal_X_handler(int signal_X) {
		NUT_UNUSED_VARIABLE(signal_X);
		/* We have explicitly got nothing to do if we catch signal X */
		return;
	}
-------------------------------------------------------------------------------
Miscellaneous coding style tools
--------------------------------
NUT codebase includes an `.editorconfig` file which should be supported
by most of the IDEs and text editors nowadays. Many support this format
specification (at least partially) out of the box, possibly with some
configuration toggle in the GUI. Others may need a plugin, see more at
https://editorconfig.org/#pre-installed page. There are also command-line
tools to verify and/or enforce compliance of source files to configuration.
You can go a long way towards converting your source code to the NUT
coding style by piping it through the following command:
	indent -kr -i8 -T FILE -l1000 -nhnl
This next command does a reasonable job of converting most C++ style
comments (but not URLs and DOCTYPE strings):
	sed 's#\(^\|[ \t]\)//[ \t]*\(.*\)[ \t]*#/* \2 */#'
Emacs users can adjust how tabs are displayed. For example, it is
possible to set a tab stop to be 3 spaces, rather than the usual 8.
(Note that in the saved file, one indentation level will still
correspond to one tab stop; the difference is only how the file is
rendered on screen). It is even possible to set this on a
per-directory basis, by putting something like this into your `.emacs`
file:
-------------------------------------------------------------------------------
;; NUT style
(defun nut-c-mode ()
 "C mode with adjusted defaults for use with the NUT sources."
 (interactive)
 (c-mode)
 (c-set-style "K&R")
 (setq c-basic-offset 3)  ;; 3 spaces C-indentation
 (setq tab-width 3))      ;; 3 spaces per tab
;; apply NUT style to all C source files in all subdirectories of nut/
(setq auto-mode-alist (cons '(".*/nut/.*\\.[ch]$". nut-c-mode)
                       auto-mode-alist))
-------------------------------------------------------------------------------
Finishing touches
~~~~~~~~~~~~~~~~~
We like code that uses `const` and `static` liberally.  If you don't need
to expose a function or global variable to the outside world, `static` is
your friend.  If nobody should edit the contents of some buffer that's
behind a pointer, `const` keeps them honest.
We always compile with `-Wall`, so things like `const` and `static` help you
find implementation flaws.  Functions that attempt to modify a constant
or access something outside their scope will throw a warning or even
fail to compile in some cases.  This is what we want.
Switch case fall-through
~~~~~~~~~~~~~~~~~~~~~~~~
While the C standards allow `switch` statements to "fall through"
from handling one case into another, modern compilers frown upon that
practice and spew warnings which complicate detecting real bugs in the
code (also, looking back at some of the cases written decades ago,
it is not trivial to state whether the fall-through was intentional or
really is a bug).
Compilers which detect such problems usually offer ways to decorate the
code with comments or attributes to keep them quiet in cases where the
jump is intentional; C++17 also introduces special keywords for this in
the standard. NUT, aiming to be portable and independent of compilers as
much as possible, prefers the arguably clearer and standards-based way
of using `goto` into the next intended operation, even though it is a
couple of lines away, e.g.:
	int uppercase = 0;
	switch (char_opt) {
		case 'U':
			uppercase = 1;
			goto fallthrough_case_u_option;
		case 'u':
		fallthrough_case_u_option:
			process_u_option(uppercase);
			break;
	}
In trivial cases, like falling through to `default` which just returns,
it may be clearer and more maintainable (adding other option cases in
the future) to just `return same_result` in the code block that would
fall through otherwise and avoid `goto` statements altogether.
Spaghetti
~~~~~~~~~
If you use a `goto` that jumps over long distances (see "Switch case
fall-through" section above), expect us to drop it when our head stops
spinning. It gives us flashbacks to the very old code we wrote.
We've tried to clean up our act, and you should make the effort
as well.
We're not making a blanket statement about gotos, since everything
probably has at least one good use.  There are a few cases where a goto
is more efficient than any other approach, but you probably won't
encounter them very often in this software.
Legacy code
~~~~~~~~~~~
There are parts of the source tree that do not yet conform to these
specs.  Part of this is due to the fact that the coding style has been
evolving slightly over the course of the project.  Some of the code you
see in these directories is 5 years old, and things have gotten cleaner
since then.  Don't worry -- it'll get cleaned up the next time something
in the vicinity gets a visit.
Memory leak checking
~~~~~~~~~~~~~~~~~~~~
We can't say enough good things about valgrind.  If you do anything with
dynamic memory in your code, you need to use this.  Just compile with
`gcc -g` and start the program inside `valgrind`.  Run it through the
suspected area and then exit cleanly.  valgrind will tell you if you've
done anything dodgy like freeing regions twice, reading uninitialized
memory, or if you've leaked memory anywhere.
For more information, refer to the link:http://valgrind.kde.org[Valgrind]
project.
Conclusion
~~~~~~~~~~
The summary: please be kind to our eyes.  There's a lot of stuff in here,
and many people have put a lot of time and energy into improving it.
Submitting patches
------------------
Current preference for suggesting changes is to open a pull request on
GitHub for the https://github.com/networkupstools/nut/ project.
For some cases, small patches that arrive by mailing list in unified
format (`diff -u`) as plain text attachments with no HTML and a brief
summary at the top are easy to handle, but sadly also easy to overlook.
If a patch is sent to the nut-upsdev mailing list, it stands a better
chance of being seen immediately. However, it is likely to be dropped
if any issues cannot be resolved quickly. If your code might not work
for others, or if it is a large change, your best bet is to submit a
pull request or create an
link:https://github.com/networkupstools/nut/issues[issue on GitHub].
The issue tracker allows us to track the patches over a longer period
of time, and it is less likely that a patch will fall through the cracks.
Posting a reminder to the developers (via the nut-upsdev list) about a
patch on GitHub is fair game.
Patch cohesion
--------------
Patches should have some kind of unifying element.  One patch set is one
message, and it should all touch similar things.  If you have to edit 6
files to add support for neutrino detection in UPS hardware, that's
fine.
However, sending one huge patch that does massive separate changes all over
the tree is not recommended.  That kind of patch has to be split up and
evaluated separately, assuming the core developers care enough to do that
instead of just dropping it.
If you have to make big changes in lots of places, send multiple
patches -- one per item.
The finishing touches: manual pages and device entry in HCL
-----------------------------------------------------------
If you change something that involves an argument to a program or
configuration file parsing, the man page is probably now out of date.
If you don't update it, we have to, and we have enough to do as it is.
If you write a new driver, send in the man page when you send us the
source code for your driver.  Otherwise, we will be forced to write a
skeletal man page that will probably miss many of the finer points of
the driver and hardware.
The same remark goes for device entries: if you add support for new models,
please remember to also complete the hardware compatibility list, present
in link:data/driver.list.in[]. This will be used to generate both textual,
static HTML and dynamic searchable HTML for the website.
Finally, don't forget about fame and glory: if you added or substantially
updated a driver, your copyright belongs in the heading comment (along
with existing ones). For vendor-backed (or sponsored) contributions we
welcome an entry in the link:docs/acknowledgements.txt[] file as well,
to track and know the industry players who help make NUT better and more
useful.
It is nice to update the link:NEWS[] file for significant development
to be seen as part of next release, as well as to update the
link:UPGRADING[] file for potentially breaking changes and similar
heads-up notes for third-party teams (distribution packagers, clients
and bindings, etc.).
Source code management
----------------------
We currently use a Git repository hosted at GitHub to track changes to
the NUT source code. This allows you to clone the repository (or fork,
in GitHub parlance), make changes, and post them online for peer review
prior to integration.
To obtain permission to commit directly to the common upstream NUT repository,
you must be prepared to spend a fair amount of time contributing to the
NUT codebase. Most developers will be well served by committing to their
own forked Git repository (preferably in a uniquely named branch for each
new contribution), and having the NUT team merge their changes using pull
requests.
Git offers a little more flexibility than the +svn update+ command.
You may fetch other developers' changes into your repository, but hold
off on actually combining them with your branch until you have compared
the two branches (for instance, with `gitk --all`). Git also allows you
to accumulate more than one commit worth of changes before pushing to
another repository. This allows development to continue without a constant
network connection.
For a quick change to a file in the Git working copy, you can use
`git diff` to generate a patch to send to the nut-upsdev mailing list.
If you have more extensive changes, you can use `git format-patch` on
a complete commit or branch, and send the resulting series of patches
to the list.
If you use GitHub's web-based editor to make changes, it tends to create
lots of small commits, one per change per file. Unless there is reason to
keep the intermediate history, we will probably collapse (or "squash" in
Git parlance) the entire branch into one commit with a `git rebase -i`
before merging.
The link:https://git.wiki.kernel.org/index.php/GitSvnCrashCourse[GitSvnCrashCourse]
wiki page has some useful information for long-time users of Subversion.
Git access
~~~~~~~~~~
Anonymous Git checkouts are possible:
	git clone git://github.com/networkupstools/nut.git
or
	git clone https://github.com/networkupstools/nut.git
if it is necessary to get around a pesky firewall that blocks the native
Git protocol.
For a quicker checkout (when you don't need the entire repository history),
you can limit the depth of the clone:
	git clone --depth 1 git://github.com/networkupstools/nut.git
Mercurial (hg) access
~~~~~~~~~~~~~~~~~~~~~
There are those who prefer the simplicity and self-consistency of the
Mercurial SCM client over the hodgepodge of unique commands which make
up Git. Rather than debate the merits of each system, we will gently
guide you towards the link:http://hg-git.github.com/[hg-git project]
which would theoretically be a transparent bridge between the central
Git repository, and your local Mercurial working copy.
Other tools for hg/git interoperability are sure to exist. We would
welcome any feedback about this process on the nut-upsdev mailing list.
Subversion (SVN) access
~~~~~~~~~~~~~~~~~~~~~~~
If you prefer to check out the NUT source code using an SVN client, GitHub
has a link:https://github.com/blog/966-improved-subversion-client-support[SVN
interface to Git repositories] hosted on their servers. You can fork a copy
of the NUT repository and commit to your fork with SVN.
Be aware that the examples in the GitHub blog post might result in a
checkout that includes all of the current branches, as well as the trunk.
You are most likely interested in a command line similar to the following:
	svn co https://github.com/networkupstools/nut/trunk nut-trunk-svn
Ignoring generated files
------------------------
The NUT repository generally only holds files which are not generated from
other files. This prevents spurious differences from being recorded in the
repository history.
If you add a driver, it is recommended that you add the driver executable
name to the `.gitignore` file in that directory. Similarly, files generated
from `*.in` and `*.am` source templates should be ignored as well.
We try to include a number of generated files in the tarball releases with
`make dist` hooks in order to minimize the number of dependencies for end
users, but the assumption is that a developer can install the packages
needed to regenerate those files.
Commit message formatting
-------------------------
From the `git commit` man page:
[quote]
Though not required, it's a good idea to begin the commit message with a
single short (less than 50 character) line summarizing the change, followed
by a blank line and then a more thorough description. The text up to the
first blank line in a commit message is treated as the commit title, and
that title is used throughout git.
If your commit is just a change to one component, such as the HCL, upsd or a
specific driver, prefix your commit message in a way that matches similar
commits. This helps when searching the repository or tracking down a
regression.
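For example, a commit touching only one driver might be titled and described
as follows (the driver name and details are illustrative):

------------------------------------------------------------------------------
	usbhid-ups: fix reconnection delay handling

	Previously the driver retried immediately after a disconnect,
	flooding the system log. Wait for the configured interval instead.
------------------------------------------------------------------------------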
Referring to previous commits can be tricky. If you are referring to the
immediate parent of a given commit, it suffices to say "the previous commit".
(Are you correcting a typo in the previous commit? If you haven't pushed yet,
consider using the `git commit --amend` command instead of creating a new
commit.) For other commits, even though tools like gitk and GitHub's
repository viewers recognize Git hashes and create links automatically, it is
best to add some context such as the commit title or a date.
You may notice that some older commits have `[[SVN:####]]` tags and Fossil-ID
footers. These were lifted from the old SVN commit messages using reposurgeon,
and should *not* be used as a guide for future commits.
Commit sign-off
---------------
Please also note that since 2023 we explicitly ask for contributions to be
"Signed Off" according to "Developer Certificate of Origin" as represented
in the `LICENSE-DCO` file in the root of the NUT source tree (a verbatim
copy of Version 1.1 of the DCO published at the
https://developercertificate.org/ web site).
This is exactly the same one created and used by the Linux kernel developers.
This is a developer's certification that he or she has the right to submit
the patch for inclusion into the project. Simply submitting a contribution
implies this agreement; however, please include a "Signed-off-by" tag in
every patch (this tag is a conventional way to confirm that you agree to
the DCO). In other words, this tag certifies that the committer has the right
to submit this work under the same license as the project and agrees to the
terms of the Developer Certificate of Origin.
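In practice, `git commit -s` adds the tag automatically, based on your
configured Git identity (the name and message below are illustrative):

------------------------------------------------------------------------------
	git config user.name "Jane Developer"
	git config user.email "jane@example.com"
	git commit -s -m "docs: clarify sign-off requirement"
	# The commit message now ends with a trailer like:
	#   Signed-off-by: Jane Developer <jane@example.com>
------------------------------------------------------------------------------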
Note that while git commit hook tricks are available to automatically sign
off all commits, these signatures are intended to be a conscious (legally
meaningful) act -- hence they are not automated in git core with an easy
configuration option.
For more details see:
* https://github.com/networkupstools/nut/issues/1994
* https://stackoverflow.com/questions/1962094/what-is-the-sign-off-feature-in-git-for
* https://stackoverflow.com/questions/15015894/git-add-signed-off-by-line-using-format-signoff-not-working
You are also encouraged to set up a PGP key, make its public part known, and
use it to sign your git commits (in addition to the `Signed-off-by` tag) by
also passing a `-S` option or calling `git config commit.gpgsign true` once.
Numerous public articles can walk you through this ordeal, including:
* https://docs.github.com/en/authentication/managing-commit-signature-verification/signing-commits
* https://docs.github.com/en/authentication/managing-commit-signature-verification/telling-git-about-your-signing-key
* https://www.kernel.org/doc/html/v4.19/process/maintainer-pgp-guide.html
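Assuming you have already generated a key and published its public part, the
one-time setup could look like this (`<KEYID>` is a placeholder for your own
key ID):

------------------------------------------------------------------------------
	git config user.signingkey <KEYID>
	git config commit.gpgsign true
	# ...or sign a single commit explicitly:
	git commit -s -S
------------------------------------------------------------------------------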
Repository etiquette and quality assurance
------------------------------------------
For developers who have commit access to the common upstream NUT repository:
Please keep the Git "master" branch in working condition at all times.
The "master" branch may be used to generate daily tarballs, it provides the
baseline for new contributions, and occasionally is tagged for a new release.
It should not contain broken code. If you need to commit incremental changes
that leave the system in a broken state, please do so in a separate branch
and merge the changes back into "master" once they are complete.
To help keep the codebase ever-green, we run a number of CI tests and builds
in various conditions, including older compilers, different C/C++ standard
revisions, and an assortment of operating systems; a section below elaborates
on this in more detail.
You are encouraged to use `git rebase -i` on your private Git branches to
separate your changes into <<_patch_cohesion,logical changes>>.
From there, you can generate patches for the issue tracker, or the nut-upsdev
mailing list.
Note that once you rebase a branch, anyone else who has a copy of this branch
will need to rebase on top of your rebased branch. Obviously, this hinders
collaboration. In this case, we recommend that you rebase only in your private
repository, and push when things are ready for discussion. Merging instead of
rebasing will help with collaboration, but please do not turn the repository
history into a pile of spaghetti by merging unnecessarily. (Test merges can be
done on integration branches, which can be discarded if the merge is trivial.)
Be sure that your commit messages are descriptive when merging.
If you haven't created a commit out of your local changes yet, and you want to
fetch the latest code, you can also use +git stash+ before pulling, then +git
stash pop+ to apply your saved changes.
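For instance, using the `central` remote name from the example workflow
(a sketch, assuming your local edits do not conflict with the fetched
changes):

------------------------------------------------------------------------------
	git stash
	git pull --rebase central master
	git stash pop
------------------------------------------------------------------------------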
Here is an example workflow:
------------------------------------------------------------------------------
	git clone -o central git://github.com/networkupstools/nut.git
	cd nut
	git remote add -f username git://github.com/username/nut.git
	git checkout master
	git branch my-new-feature
	git checkout my-new-feature
	# Hack away
	git add changed-file.c
	git commit -s
	# Fix a typo in a file or commit message:
	git commit -s -a --amend
	# Someone committed something to the central repository. Fetch it.
	git fetch central
	git rebase central/master
	# Publish your branch to your GitHub repository:
	git push username my-new-feature
------------------------------------------------------------------------------
If you are new to Git, but are familiar with SVN, some of the following links
may be of use:
* link:https://web.archive.org/web/20191224210950/https://git-scm.com/course/svn.html[Git - SVN Crash Course (archived)]
* link:https://git-scm.com/book/en/v2/Git-and-Other-Systems-Migrating-to-Git[Git and Other Systems - Migrating to Git]
* link:https://www.git-tower.com/learn/git/ebook/en/command-line/appendix/from-subversion-to-git[Switching from Subversion to Git]
* link:https://www.atlassian.com/git/tutorials/migrating-overview[Migrate from SVN to Git]
[[building]]
Building the Code
-----------------
For a developer, the NUT build process starts with `./autogen.sh`.
This script generates the `./configure` script that end users typically
invoke to build NUT. If you are making a number of changes to the NUT
source tree, configuring with the `--enable-maintainer-mode` flag will
ensure that after you change a `Makefile.am`, nearby `Makefile.in` and
`Makefile` get regenerated. At a minimum, you will need:
* autoconf
* automake
* libtool
* Python
* Perl
[NOTE]
======
See link:config-prereqs.txt[] for more detailed package lists for
different operating systems.
See `ci_build.sh` for automation of many practical build scenarios and
easier iteration.
It is optional, but highly recommended, to have Python (2.x or 3.x) and Perl
available: they are used to generate some files included into the `configure`
script, whose presence is checked by autotools when it is generated. If these
interpreters are not available, such files can simply be "touched" so that
`autogen.sh` passes, effectively skipping those parts of the build later on.
In that case, `autogen.sh` will advise which special environment variables
to `export` before you re-run it.
======
Even if you do not use your distribution's packages of NUT, installing the
distribution's list of build dependencies for NUT can reduce the amount of
trial-and-error when installing dependencies. For instance, in Debian, you
can run `apt-get build-dep nut` to install all of the auto* tools as well
as any development libraries and headers.
After running `./autogen.sh`, you can pass your local configuration
options to `./configure` and run `make` from the top-level directory.
To avoid the need for root privileges when testing new NUT code, you
may wish to use `--prefix=$HOME/local/nut --with-statepath=/tmp`.
You can also keep compilation times down by only building the driver
which you are currently working on: `--with-drivers=driver1,dummy-ups`.
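Putting these options together, a typical developer iteration from a fresh
checkout might look like this (the prefix and driver list are only examples):

------------------------------------------------------------------------------
	./autogen.sh
	./configure --enable-maintainer-mode \
		--prefix=$HOME/local/nut --with-statepath=/tmp \
		--with-drivers=driver1,dummy-ups
	make
------------------------------------------------------------------------------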
Before pushing your commits upstream, please run `make distcheck-light`.
This checks that the Makefiles are not broken, that all the relevant files
are distributed, and that there are no compilation or installation errors.
Note that unless you specifically pass `--with-doc=skip` to `configure`,
this requires all of the dependencies necessary to build the documentation
to be locally installed on your system, including `asciidoc`, `a2x`,
`xsltproc`, `dblatex` and any additional XSL stylesheets.
Running `make distcheck-light` is especially important if you have added or
removed files, or updated `configure.ac` or some `Makefile.am` file.
Remember: simply adding a file to Git does not mean it will be distributed.
To distribute a file, you must update the corresponding `Makefile.am` with
an `EXTRA_DIST` entry and possibly other recipe handling.
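For example, a hypothetical `Makefile.am` fragment shipping an extra helper
script in the tarball might read (the file name is illustrative):

------------------------------------------------------------------------------
	# Ship this file in `make dist` tarballs even though it is not compiled:
	EXTRA_DIST = my-helper-script.sh
------------------------------------------------------------------------------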
There is also `make distcheck`, which runs an even stricter set of
tests than `make distcheck-light`, but will not work unless you have all
of the optional third-party libraries and features installed.
Finally, note that since 2017 the GitHub upstream project has been monitored
by Travis CI (in addition to earlier multi-platform buildbots, which
occasionally did not work), replaced since 2021 by a dedicated NUT CI farm.
This means that if your posted improvements are based on current NUT
"master" branch, the resulting pull request should get tested for a number of
scenarios automatically. If your code adds a substantial feature, consider
extending the `Jenkinsfile-dynamatrix` and/or `ci_build.sh` scripts in the
workspace root to add another `BUILD_TYPE` to the matrix of tests run in
parallel.