# mlpack changelog
## mlpack 4.6.2
_2025-05-22_
* Fix compilation of `Save()` when HDF5 is enabled (#3942).
* Update bundled STB to fix warnings in R bindings (#3940).
* Fix cross-validation support for algorithms with many parameters (including
`RandomForest`) (#3941).
## mlpack 4.6.1
_2025-05-13_
* Shuffle sequence lengths for RNNs (#3926).
* Add ability to compile OpenBLAS for Windows (#3922).
* Drop pytest-runner and "setup.py test" support (#3921).
* Fix compilation errors with clang++ version 20 (#3928).
## mlpack 4.6.0
_2025-04-02_
* Fix command-line duplicate output bug when loading matrices for some bindings
(#3838).
* Use `CMAKE_BUILD_TYPE` to specify build type instead of DEBUG and PROFILE options (#3865).
* Add `MLPACK_NO_STD_MUTEX` to allow disabling `std::mutex` (#3868).
* Bundle STB with mlpack and add `ResizeImages()` functionality (#3823).
* Add `mlpack.cmake` to facilitate finding mlpack and its dependencies (#3872).
* Fix conversion of empty Armadillo objects to numpy in Python bindings
(#3896).
* Added bootstrap strategies for `RandomForest`: `IdentityBootstrap`,
`DefaultBootstrap`, and `SequentialBootstrap` (#3829).
* Add `ResizeCropImages()` for resize-and-crop image preprocessing
functionality (#3903).
* Fix `LSTM` input size calculation for multidimensional inputs (#3913).
## mlpack 4.5.1
_2024-12-02_
* Fix compilation with clang 19 (#3799).
* Deprecate the version of `data::Split()` that returns a `std::tuple`, for
consistency; use the other overloads instead (#3803); see the sketch after
this list.
* Fix LSTM layer copy/move constructors (#3809).
* Fix compilation if only including `mlpack/methods/kde/kde_model.hpp` (#3800).
* Fix serialization and `MinDistance()` bugs with `HollowBallBound` (#3808).
* Add `LinearRecurrent` layer and revamp `LSTM` layer (#3859).
* Fix BPTT issues in `RNN` (#3859).
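
For illustration, here is a minimal sketch of the reference-parameter overload of `data::Split()` that the deprecated `std::tuple` version maps onto (the dataset, labels, and split ratio below are illustrative):

```c++
#include <mlpack.hpp>

int main()
{
  // Illustrative data: 100 random 10-dimensional points with binary labels.
  arma::mat dataset(10, 100, arma::fill::randu);
  arma::Row<size_t> labels(100, arma::fill::zeros);
  labels.tail(50).fill(1);

  arma::mat trainData, testData;
  arma::Row<size_t> trainLabels, testLabels;

  // The outputs are filled in place instead of being returned in a
  // std::tuple; here, 20% of the points go into the test set.
  mlpack::data::Split(dataset, labels, trainData, testData, trainLabels,
      testLabels, 0.2);
}
```
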
## mlpack 4.5.0
_2024-09-17_
* Distribute STB headers as part of R package (#3724, #3726).
* Added OpenMP parallelization to Hamerly, Naive, and Elkan k-means (#3761, #3762, #3764).
* Added OpenMP support for fast approximation (#3685).
* Implemented the Find and Fill algorithm in the Dropout layer and added OpenMP support (#3684).
* Update Python bindings to support NumPy 2.x (#3752).
* Bump minimum Armadillo version to 10.8 (#3760).
* Adapt `NearestInterpolation` ANN layer to the new Layer interface (#3768).
* Add support for arbitrary matrix types to `Radical` and deprecate
`Radical::DoRadical()` in favor of `Radical::Apply()` (#3787).
## mlpack 4.4.0
_2024-05-26_
* Add `print_training_accuracy` option to LogisticRegression bindings (#3552).
* Fix `preprocess_split()` call in documentation for `LinearRegression` and
`AdaBoost` Python classes (#3563).
* Added `Repeat` ANN layer type (#3565).
* Remove `round()` implementation for old MSVC compilers (#3570).
* (R) Added inline plugin to the R bindings to allow for other R packages to
link to headers (#3626, h/t @cgiachalis).
* (R) Removed extra gcc-specific options from `Makevars.win` (#3627, h/t
@kalibera).
* (R) Changed roxygen package-level documentation from using
`@docType package` to `"_PACKAGE"`. (#3636)
* Fix floating-point accuracy issue for decision trees that sometimes caused
crashes (#3595).
* Use template metaprogramming to distinguish between matrix and cube types
(#3602, #3585).
* Use `MatType` instead of `arma::Mat<eT>` (#3567, #3607, #3608, #3609, #3568).
* Generalize matrix operations for Armadillo and Bandicoot (#3619, #3617,
#3610, #3643, #3600, #3605, #3629).
* Change `arma::conv_to` to `ConvTo` using a local shim for Bandicoot support
(#3614).
* Fix a bug in the standard deviation and mean used by `RandNormal()` (#3651).
* Allow PCA to take different matrix types (#3677).
* Fix usage of precompiled headers; remove cotire (#3635).
* Fix non-working `verbose` option for R bindings (#3691), and add global
`mlpack.verbose` option (#3706).
* Fix divide-by-zero edge case for LARS (#3701).
* Templatize `SparseCoding` and `LocalCoordinateCoding` to allow different
matrix types (#3709, #3711).
* Fix handling of unused atoms in `LocalCoordinateCoding` (#3711).
* Move minimum required C++ version from C++14 to C++17 (#3704).
## mlpack 4.3.0
_2023-11-27_
* Fix include ordering issue for `LinearRegression` (#3541).
* Fix L1 regularization in case where weight is zero (#3545).
* Use HTTPS for all auto-downloaded dependencies (#3550).
* More robust detection of C++17 mode in the MSVC "compiler" (#3555, #3557).
* Fix setting number of classes correctly in `SoftmaxRegression::Train()`
(#3553).
* Adapt `MultiheadAttention` and `LayerNorm` ANN layers to new Layer interface
(#3547).
* Fix inconsistent use of the "input" parameter to the Backward method in ANNs
(#3551).
* Allow passing weak learner hyperparameters directly to AdaBoost (#3560).
## mlpack 4.2.1
_2023-09-05_
* Reinforcement Learning: Gaussian noise (#3515).
* Reinforcement Learning: Twin Delayed Deep Deterministic Policy Gradient
(#3512).
* Reinforcement Learning: Ornstein-Uhlenbeck noise (#3499).
* Reinforcement Learning: Deep Deterministic Policy Gradient (#3494).
* Add `ClassProbabilities()` member to `DecisionTree` so that the internal
details of trees can be more easily inspected (#3511).
* Bipolar sigmoid activation function added and invertible functions
fixed (#3506).
* Add auto-configured `mlpack/config.hpp` to contain configuration details of
mlpack that are required at compile time. STB detection is now done in this
file with the `MLPACK_HAS_STB` macro (#3519).
* Fix CRAN package alias for R bindings (#3543).
## mlpack 4.2.0
_2023-06-14_
* Adapt `C_ReLU`, `ReLU6`, `FlexibleReLU` layers for the new neural network
API (#3445).
* Fix PReLU, add integration test to it (#3473).
* Fix bug in LogSoftMax derivative (#3469).
* Add `serialize` method to `GaussianInitialization`,
`LecunNormalInitialization`,
`KathirvalavakumarSubavathiInitialization`, `NguyenWidrowInitialization`,
and `OrthogonalInitialization` (#3483).
* Allow categorical features to be passed to `preprocess_one_hot_encode` (#3487).
* Install mlpack and cereal headers as part of R package (#3488).
* Add intercept and normalization support to LARS (#3493).
* Allow adding two features simultaneously to LARS models (#3493).
* Adapt FTSwish activation function (#3485).
* Adapt Hyper-Sinh activation function (#3491).
## mlpack 4.1.0
_2023-04-26_
* Adapt HardTanH layer (#3454).
* Adapt Softmin layer for new neural network API (#3437).
* Adapt PReLU layer for new neural network API (#3420).
* Add CF decomposition methods: `QUIC_SVDPolicy` and `BlockKrylovSVDPolicy`
(#3413, #3404).
* Update outdated code in tutorials (#3398, #3401).
* Bugfix for non-square convolution kernels (#3376).
* Fix a few missing includes in `<mlpack.hpp>` (#3374).
* Fix DBSCAN handling of non-core points (#3346).
* Avoid deprecation warnings in Armadillo 11.4.4+ (#3405).
* Issue runtime error when serialization of neural networks is attempted but
`MLPACK_ENABLE_ANN_SERIALIZATION` is not defined (#3451).
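
As a rough sketch of the intended usage (assuming the single-header include and `data::Save()` serialization; the network and filename here are illustrative):

```c++
// Define this before including mlpack; without it, attempting to serialize
// the network below now produces a runtime error (#3451).
#define MLPACK_ENABLE_ANN_SERIALIZATION
#include <mlpack.hpp>

int main()
{
  mlpack::FFN<> network;
  network.Add<mlpack::Linear>(10);
  network.Add<mlpack::Sigmoid>();

  // Serialize the (untrained) network to disk.
  mlpack::data::Save("network.bin", "network", network, true);
}
```
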
## mlpack 4.0.1
_2022-12-23_
* Fix mapping of categorical data for Julia bindings (#3305).
* Bugfix: catch all exceptions when running bindings from Julia, instead of
crashing (#3304).
* Various Python configuration fixes for Windows and OS X (#3312, #3313,
#3311, #3309, #3308, #3297, #3302).
* Optimize and strip compiled Python bindings when possible, resulting in
significant size minimization (#3310).
* The `/std:c++17` and `/Zc:__cplusplus` options are now required when using
Visual Studio (#3318). Documentation and compile-time checks added.
* Set `BUILD_TESTS` to `OFF` by default. If you want to build tests, like
`mlpack_test`, manually set `BUILD_TESTS` to `ON` in your CMake
configuration step (#3316).
* Fix handling of transposed matrix parameters in Python, Julia, R, and Go
bindings (#3327).
* Comment out the definition of `ARMA_NO_DEBUG`. This enables various
Armadillo run-time checks, such as detection of non-conforming matrices and
out-of-bounds element access; in turn, this helps track down bugs and
incorrect usage (#3322).
## mlpack 4.0.0
_2022-10-23_
* Bump C++ standard requirement to C++14 (#3233).
* Fix `Perceptron` to work with cross-validation framework (#3190).
* Migrate from boost tests to Catch2 framework (#2523), (#2584).
* Bump minimum armadillo version from 8.400 to 9.800 (#3043), (#3048).
* Add a copy constructor to the Convolution layer (#3067).
* Replace the `boost::spirit` parser with a local, efficient implementation (#2942).
* Correctly disable the autodownloader and fix test stability (#3076).
* Replace `boost::any` with `core::v2::any` or `std::any` if available (#3006).
* Remove old, unused Boost headers (#3005).
* Replace `boost::enable_if` with `std::enable_if` (#2998).
* Replace `boost::is_same` with `std::is_same` (#2993).
* Remove invalid option for ensmallen and STB (#2960).
* Check for armadillo dependencies before downloading armadillo (#2954).
* Disable the usage of autodownloader by default (#2953).
* Install dependencies downloaded with the autodownloader (#2952).
* Download older Boost if the compiler is old (#2940).
* Add support for embedded systems (#2531).
* Build mlpack executable statically if the library is statically linked (#2931).
* Fix cover tree loop bug on embedded arm systems (#2869).
* Fix a LAPACK bug in `FindArmadillo.cmake` (#2929).
* Add an autodownloader to get mlpack dependencies (#2927).
* Remove Coverage files and configurations from CMakeLists (#2866).
* Added `Multi Label Soft Margin Loss` loss function for neural networks
(#2345).
* Added Decision Tree Regressor (#2905). It can be used via the class
`mlpack::tree::DecisionTreeRegressor`, and is accessible only through C++.
* Added dict-style inspection of mlpack models in python bindings (#2868).
* Added Extra Trees Algorithm (#2883). Currently, it can be used via the
class `mlpack::tree::ExtraTrees`, but only through C++.
* Add Flatten T Swish activation function (`flatten-t-swish.hpp`).
* Added warm start feature to Random Forest (#2881); this feature is
accessible from mlpack's bindings to different languages.
* Added Pixel Shuffle layer (#2563).
* Add `check_input_matrices` option to Python bindings that checks for NaN
and inf values in all the input matrices (#2787).
* Add Adjusted R squared functionality to R2Score::Evaluate (#2624).
* Disabled all the bindings by default in CMake (#2782).
* Added an implementation to stratify data (#2671).
* Add `BUILD_DOCS` CMake option to control whether Doxygen documentation is
built (default ON) (#2730).
* Add Triplet Margin Loss function (#2762).
* Add finalizers to Julia binding model types to fix memory handling (#2756).
* HMM: add functions to calculate likelihood for data stream with/without
pre-calculated emission probability (#2142).
* Replace Boost serialization library with Cereal (#2458).
* Add `PYTHON_INSTALL_PREFIX` CMake option to specify installation root for
Python bindings (#2797).
* Removed `boost::visitor` from model classes for `knn`, `kfn`, `cf`,
`range_search`, `krann`, and `kde` bindings (#2803).
* Add k-means++ initialization strategy (#2813).
* `NegativeLogLikelihood<>` now expects classes in the range `0` to
`numClasses - 1` (#2534).
* Add `Lambda1()`, `Lambda2()`, `UseCholesky()`, and `Tolerance()` members to
`LARS` so parameters for training can be modified (#2861); see the sketch
after this list.
* Remove unused `ElemType` template parameter from `DecisionTree` and
`RandomForest` (#2874).
* Fix Python binding build when the CMake variable `USE_OPENMP` is set to
`OFF` (#2884).
* The `mlpack_test` target is no longer built as part of `make all`. Use
`make mlpack_test` to build the tests.
* Fixes to `HoeffdingTree`: ensure that training still works when empty
constructor is used (#2964).
* Fix Julia model serialization bug (#2970).
* Fix `LoadCSV()` to use pre-populated `DatasetInfo` objects (#2980).
* Add `probabilities` option to softmax regression binding, to get class
probabilities for test points (#3001).
* Fix thread safety issues in mlpack bindings to other languages (#2995).
* Fix double-free of model pointers in R bindings (#3034).
* Fix Julia, Python, R, and Go handling of categorical data for
`decision_tree()` and `hoeffding_tree()` (#2971).
* Depend on `pkgbuild` for R bindings (#3081).
* Replaced deprecated NumPy code in Python bindings (#3126).
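
For illustration, a minimal sketch of modifying `LARS` training parameters through the new accessors mentioned above (assuming mlpack's usual reference-returning accessor style; the data and values here are illustrative):

```c++
#include <mlpack.hpp>

int main()
{
  // Illustrative data: 200 random 5-dimensional points with random responses.
  arma::mat data(5, 200, arma::fill::randu);
  arma::rowvec responses(200, arma::fill::randu);

  mlpack::LARS lars;

  // Adjust training parameters before calling Train() (values are arbitrary).
  lars.Lambda1() = 0.1;
  lars.Lambda2() = 0.01;
  lars.UseCholesky() = true;
  lars.Tolerance() = 1e-8;

  lars.Train(data, responses);
}
```
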
## mlpack 3.4.2
_2020-10-26_
* Added Mean Absolute Percentage Error.
* Added Softmin activation function as a layer in ann/layer.
* Fix spurious ARMA_64BIT_WORD compilation warnings on 32-bit systems (#2665).
## mlpack 3.4.1
_2020-09-07_
* Fix incorrect parsing of required matrix/model parameters for command-line
bindings (#2600).
* Add manual type specification support to `data::Load()` and `data::Save()`
(#2084, #2135, #2602).
* Remove use of internal Armadillo functionality (#2596, #2601, #2602).
## mlpack 3.4.0
_2020-09-01_
* Issue warnings when metrics produce NaNs in KFoldCV (#2595).
* Added bindings for _R_ during Google Summer of Code (#2556).
* Added common striptype function for all bindings (#2556).
* Refactored common utility function of bindings to bindings/util (#2556).
* Renamed InformationGain to HoeffdingInformationGain in
methods/hoeffding_trees/information_gain.hpp (#2556).
* Added macro for changing stream of printing and warnings/errors (#2556).
* Added Spatial Dropout layer (#2564).
* Force CMake to show an error when it cannot find Python or the required Python modules (#2568).
* Refactor `ProgramInfo()` to separate out all the different
information (#2558).
* Add bindings for one-hot encoding (#2325).
* Added Soft Actor-Critic to RL methods (#2487).
* Added Categorical DQN to q_networks (#2454).
* Added N-step DQN to q_networks (#2461).
* Add Silhouette Score metric and Pairwise Distances (#2406).
* Add Go bindings for some missed models (#2460).
* Replace boost program_options dependency with CLI11 (#2459).
* Additional functionality for the ARFF loader (#2486); use case-sensitive
categories (#2516).
* Add `bayesian_linear_regression` binding for the command-line, Python,
Julia, and Go. Also called "Bayesian Ridge", this is equivalent to a
version of linear regression where the regularization parameter is
automatically tuned (#2030).
* Fix defeatist search for spill tree traversals (#2566, #1269).
* Fix incremental training of logistic regression models (#2560).
* Change default configuration of `BUILD_PYTHON_BINDINGS` to `OFF` (#2575).
## mlpack 3.3.2
_2020-06-18_
* Added Noisy DQN to q_networks (#2446).
* Add Go bindings (#1884).
* Added Dueling DQN to q_networks, Noisy linear layer to ann/layer
and Empty loss to ann/loss_functions (#2414).
* Store the action and add an accessor method for it in q_learning (#2413).
* Added accessor methods for ANN layers (#2321).
* Addition of `Elliot` activation function (#2268).
* Add adaptive max pooling and adaptive mean pooling layers (#2195).
* Add parameter to avoid shuffling of data in preprocess_split (#2293).
* Add `MatType` parameter to `LSHSearch`, allowing sparse matrices to be used
for search (#2395).
* Documentation fixes to resolve Doxygen warnings and issues (#2400).
* Add Load and Save of Sparse Matrix (#2344).
* Add Intersection over Union (IoU) metric for bounding boxes (#2402).
* Add Non Maximal Suppression (NMS) metric for bounding boxes (#2410).
* Fix `no_intercept` and probability computation for linear SVM bindings
(#2419).
* Fix incorrect neighbors for `k > 1` searches in `approx_kfn` binding, for
the `QDAFN` algorithm (#2448).
* Fix serialization of kernels with state for FastMKS (#2452).
* Add `RBF` layer in ann module to make `RBFN` architecture (#2261).
## mlpack 3.3.1
_2020-04-29_
* Minor Julia and Python documentation fixes (#2373).
* Updated terminal state and fixed bugs for Pendulum environment (#2354,
#2369).
* Added `EliSH` activation function (#2323).
* Add L1 Loss function (#2203).
* Pass CMAKE_CXX_FLAGS (compilation options) correctly to Python build
(#2367).
* Expose ensmallen Callbacks for sparse autoencoder (#2198).
* Bugfix for LARS class causing invalid read (#2374).
* Add serialization support from Julia; use `mlpack.serialize()` and
`mlpack.deserialize()` to save and load from `IOBuffer`s.
## mlpack 3.3.0
_2020-04-07_
* Added `Normal Distribution` to `ann/dists` (#2382).
* Templated the return type of the `Forward()` function of loss functions (#2339).
* Added `R2 Score` regression metric (#2323).
* Added `poisson negative log likelihood` loss function (#2196).
* Added `huber` loss function (#2199).
* Added `mean squared logarithmic error` loss function for neural networks
(#2210).
* Added `mean bias loss function` for neural networks (#2210).
* The DecisionStump class has been marked deprecated; use the `DecisionTree`
class with `NoRecursion=true` or use `ID3DecisionStump` instead (#2099); see
the sketch after this list.
* Added `probabilities_file` parameter to get the probabilities matrix of
AdaBoost classifier (#2050).
* Fix STB header search paths (#2104).
* Add `DISABLE_DOWNLOADS` CMake configuration option (#2104).
* Add padding layer in TransposedConvolutionLayer (#2082).
* Fix pkgconfig generation on non-Linux systems (#2101).
* Use log-space to represent HMM initial state and transition probabilities
(#2081).
* Add functions to access parameters of `Convolution` and `AtrousConvolution`
layers (#1985).
* Add a Compute Error function to LARS regression, and change the Train
function to return the computed error (#2139).
* Add Julia bindings (#1949). Build settings can be controlled with the
`BUILD_JULIA_BINDINGS=(ON/OFF)` and `JULIA_EXECUTABLE=/path/to/julia` CMake
parameters.
* CMake fix for finding STB include directory (#2145).
* Add bindings for loading and saving images (#2019); `mlpack_image_converter`
from the command-line, `mlpack.image_converter()` from Python.
* Add normalization support for CF binding (#2136).
* Add Mish activation function (#2158).
* Update `init_rules` in AMF to allow users to merge two initialization
rules (#2151).
* Add GELU activation function (#2183).
* Better error handling of eigendecompositions and Cholesky decompositions
(#2088, #1840).
* Add LiSHT activation function (#2182).
* Add Valid and Same Padding for Transposed Convolution layer (#2163).
* Add CELU activation function (#2191).
* Add Log-Hyperbolic-Cosine Loss function (#2207).
* Change neural network types to avoid unnecessary use of rvalue references
(#2259).
* Bump minimum Boost version to 1.58 (#2305).
* Refactor STB support so `HAS_STB` macro is not needed when compiling against
mlpack (#2312).
* Add Hard Shrink Activation Function (#2186).
* Add Soft Shrink Activation Function (#2174).
* Add Hinge Embedding Loss Function (#2229).
* Add Cosine Embedding Loss Function (#2209).
* Add Margin Ranking Loss Function (#2264).
* Bugfix for incorrect parameter vector sizes in logistic regression and
softmax regression (#2359).
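
A minimal sketch of the suggested DecisionStump replacement mentioned above, using `ID3DecisionStump` (assuming the usual `DecisionTree`-style training constructor; the data here is illustrative):

```c++
#include <mlpack/core.hpp>
#include <mlpack/methods/decision_tree/decision_tree.hpp>

using namespace mlpack;

int main()
{
  // Illustrative data: 200 random 4-dimensional points with three classes.
  arma::mat dataset(4, 200, arma::fill::randu);
  arma::Row<size_t> labels(200);
  for (size_t i = 0; i < labels.n_elem; ++i)
    labels[i] = i % 3;

  // ID3DecisionStump is a DecisionTree that does not recurse, i.e. a
  // single-split stump; it replaces the deprecated DecisionStump class.
  tree::ID3DecisionStump stump(dataset, labels, 3 /* numClasses */);

  arma::Row<size_t> predictions;
  stump.Classify(dataset, predictions);
}
```
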
## mlpack 3.2.2
_2019-11-26_
* Add `valid` and `same` padding options to the `Convolution` and
`AtrousConvolution` layers (#1988).
* Add Model() to the FFN class to access individual layers (#2043).
* Update documentation for pip and conda installation packages (#2044).
* Add bindings for linear SVM (#1935); `mlpack_linear_svm` from the
command-line, `linear_svm()` from Python.
* Add support to return the layer name as `std::string` (#1987).
* Speed and memory improvements for the Transposed Convolution layer (#1493).
* Fix Windows Python build configuration (#1885).
* Validate md5 of STB library after download (#2087).
* Add `__version__` to `__init__.py` (#2092).
* Correctly handle RNN sequences that are shorter than the value of rho (#2102).
## mlpack 3.2.1
_2019-10-01_
* Enforce CMake version check for ensmallen (#2032).
* Fix CMake check for Armadillo version (#2029).
* Better handling of when STB is not installed (#2033).
* Fix Naive Bayes classifier computations in high dimensions (#2022).
## mlpack 3.2.0
_2019-09-25_
* Fix some potential infinity errors in Naive Bayes Classifier (#2022).
* Fix occasionally-failing RADICAL test (#1924).
* Fix gcc 9 OpenMP compilation issue (#1970).
* Added support for loading and saving of images (#1903).
* Add Multiple Pole Balancing Environment (#1901, #1951).
* Added functionality for scaling of data (#1876); see the command-line
binding `mlpack_preprocess_scale` or Python binding `preprocess_scale()`.
* Add new parameter `maximum_depth` to decision tree and random forest
bindings (#1916).
* Fix prediction output of softmax regression when test set accuracy is
calculated (#1922).
* Pendulum environment now checks for termination. All RL environments now
have an option to terminate after a set number of time steps (no limit
by default) (#1941).
* Add support for probabilistic KDE (kernel density estimation) error bounds
when using the Gaussian kernel (#1934).
* Fix negative distances for cover tree computation (#1979).
* Fix cover tree building when all pairwise distances are 0 (#1986).
* Improve KDE pruning by reclaiming unused error tolerance (#1954, #1984).
* Optimizations for sparse matrix accesses in z-score normalization for CF
(#1989).
* Add `kmeans_max_iterations` option to GMM training binding `gmm_train_main`.
* Bump minimum Armadillo version to 8.400.0 due to ensmallen dependency
requirement (#2015).
## mlpack 3.1.1
_2019-05-26_
* Fix random forest bug for numerical-only data (#1887).
* Significant speedups for random forest (#1887).
* Random forest now has `minimum_gain_split` and `subspace_dim` parameters
(#1887).
* Decision tree parameter `print_training_error` deprecated in favor of
`print_training_accuracy`.
* `output` option changed to `predictions` for the adaboost and perceptron
bindings. Old options are now deprecated and will be preserved until mlpack
4.0.0 (#1882).
* Concatenated ReLU layer (#1843).
* Accelerate NormalizeLabels function using hashing instead of linear search
(see `src/mlpack/core/data/normalize_labels_impl.hpp`) (#1780).
* Add `ConfusionMatrix()` function for checking performance of classifiers
(#1798).
* Install ensmallen headers when it is downloaded during build (#1900).
## mlpack 3.1.0
_2019-04-25_
* Add DiagonalGaussianDistribution and DiagonalGMM classes to speed up the
diagonal covariance computation and deprecate DiagonalConstraint (#1666).
* Add kernel density estimation (KDE) implementation with bindings to other
languages (#1301).
* Where relevant, all models with a `Train()` method now return a `double`
value representing the goodness of fit (i.e., final objective value, error,
etc.) (#1678); see the sketch after this list.
* Add implementation for linear support vector machine (see
`src/mlpack/methods/linear_svm`).
* Change DBSCAN to use PointSelectionPolicy and add OrderedPointSelection (#1625).
* Residual block support (#1594).
* Bidirectional RNN (#1626).
* Dice loss layer (#1674, #1714) and hard sigmoid layer (#1776).
* `output` option changed to `predictions` and `output_probabilities` to
`probabilities` for Naive Bayes binding (`mlpack_nbc`/`nbc()`). Old options
are now deprecated and will be preserved until mlpack 4.0.0 (#1616).
* Add support for Diagonal GMMs to HMM code (#1658, #1666). This can provide
large speedup when a diagonal GMM is acceptable as an emission probability
distribution.
* Python binding improvements: check parameter type (#1717), avoid copying
Pandas dataframes (#1711), handle Pandas Series objects (#1700).
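
As a rough illustration of reading the returned goodness-of-fit value, assuming `LogisticRegression` is one of the affected models (the data here is illustrative):

```c++
#include <mlpack/core.hpp>
#include <mlpack/methods/logistic_regression/logistic_regression.hpp>

#include <iostream>

using namespace mlpack;

int main()
{
  // Illustrative data: 100 random 3-dimensional points with binary labels.
  arma::mat data(3, 100, arma::fill::randu);
  arma::Row<size_t> responses(100);
  for (size_t i = 0; i < responses.n_elem; ++i)
    responses[i] = i % 2;

  regression::LogisticRegression<> lr;

  // Train() now reports the goodness of fit (here, the final objective value).
  const double objective = lr.Train(data, responses);
  std::cout << "Final objective: " << objective << std::endl;
}
```
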
## mlpack 3.0.4
_2018-11-13_
* Bump minimum CMake version to 3.3.2.
* CMake fixes for Ninja generator by Marc Espie.
## mlpack 3.0.3
_2018-07-27_
* Fix Visual Studio compilation issue (#1443).
* Allow running local_coordinate_coding binding with no initial_dictionary
parameter when input_model is not specified (#1457).
* Make use of OpenMP optional via the CMake 'USE_OPENMP' configuration
variable (#1474).
* Accelerate FNN training by 20-30% by avoiding redundant calculations
(#1467).
* Fix math::RandomSeed() usage in tests (#1462, #1440).
* Generate better Python setup.py with documentation (#1460).
## mlpack 3.0.2
_2018-06-08_
* Documentation generation fixes for Python bindings (#1421).
* Fix build error for man pages if command-line bindings are not being built
(#1424).
* Add 'shuffle' parameter and Shuffle() method to KFoldCV (#1412). This will
shuffle the data when the object is constructed, or when Shuffle() is
called.
* Added neural network layers: AtrousConvolution (#1390), Embedding (#1401),
and LayerNorm (layer normalization) (#1389).
* Add Pendulum environment for reinforcement learning (#1388) and update
Mountain Car environment (#1394).
## mlpack 3.0.1
_2018-05-10_
* Fix intermittently failing tests (#1387).
* Add big-batch SGD (BBSGD) optimizer in
src/mlpack/core/optimizers/bigbatch_sgd/ (#1131).
* Fix simple compiler warnings (#1380, #1373).
* Simplify NeighborSearch constructor and Train() overloads (#1378).
* Add warning for OpenMP setting differences (#1358/#1382). When mlpack is
compiled with OpenMP but another application is not (or vice versa), a
compilation warning will now be issued.
* Restructured loss functions in src/mlpack/methods/ann/ (#1365).
* Add environments for reinforcement learning tests (#1368, #1370, #1329).
* Allow single outputs for multiple timestep inputs for recurrent neural
networks (#1348).
* Neural networks: add He and LeCun normal initializations (#1342), add FReLU
and SELU activation functions (#1346, #1341), add alpha-dropout (#1349).
## mlpack 3.0.0
_2018-03-30_
* Speed and memory improvements for DBSCAN. --single_mode can now be used for
situations where previously RAM usage was too high.
* Bump minimum required version of Armadillo to 6.500.0.
* Add automatically generated Python bindings. These have the same interface
as the command-line programs.
* Add deep learning infrastructure in src/mlpack/methods/ann/.
* Add reinforcement learning infrastructure in
src/mlpack/methods/reinforcement_learning/.
* Add optimizers: AdaGrad, CMAES, CNE, FrankeWolfe, GradientDescent,
GridSearch, IQN, Katyusha, LineSearch, ParallelSGD, SARAH, SCD, SGDR,
SMORMS3, SPALeRA, SVRG.
* Add hyperparameter tuning infrastructure and cross-validation infrastructure
in src/mlpack/core/cv/ and src/mlpack/core/hpt/.
* Fix bug in mean shift.
* Add random forests (see src/mlpack/methods/random_forest).
* Numerous other bugfixes and testing improvements.
* Add randomized Krylov SVD and Block Krylov SVD.
## mlpack 2.2.5
_2017-08-25_
* Compilation fix for some systems (#1082).
* Fix PARAM_INT_OUT() (#1100).
## mlpack 2.2.4
_2017-07-18_
* Speed and memory improvements for DBSCAN. --single_mode can now be used for
situations where previously RAM usage was too high.
* Fix bug in CF causing incorrect recommendations.
## mlpack 2.2.3
_2017-05-24_
* Bug fix for --predictions_file in mlpack_decision_tree program.
## mlpack 2.2.2
_2017-05-04_
* Install backwards-compatibility mlpack_allknn and mlpack_allkfn programs;
note they are deprecated and will be removed in mlpack 3.0.0 (#992).
* Fix RStarTree bug that surfaced on OS X only (#964).
* Small fixes for MiniBatchSGD and SGD and tests.
## mlpack 2.2.1
_2017-04-13_
* Compilation fix for mlpack_nca and mlpack_test on older Armadillo versions
(#984).
## mlpack 2.2.0
_2017-03-21_
* Bugfix for mlpack_knn program (#816).
* Add decision tree implementation in methods/decision_tree/. This is very
similar to a C4.5 tree learner.
* Add DBSCAN implementation in methods/dbscan/.
* Add support for multidimensional discrete distributions (#810, #830).
* Better output for Log::Debug/Log::Info/Log::Warn/Log::Fatal for Armadillo
objects (#895, #928).
* Refactor categorical CSV loading with boost::spirit for faster loading
(#681).
## mlpack 2.1.1
_2016-12-22_
* HMMs now use random initialization; this should fix some convergence issues
(#828).
* HMMs now initialize emissions according to the distribution of observations
(#833).
* Minor fix for formatted output (#814).
* Fix DecisionStump to properly work with any input type.
## mlpack 2.1.0
_2016-10-31_
* Fixed CoverTree to properly handle single-point datasets.
* Fixed a bug in CosineTree (and thus QUIC-SVD) that caused split failures for
some datasets (#717).
* Added mlpack_preprocess_describe program, which can be used to print
statistics on a given dataset (#742).
* Fix prioritized recursion for k-furthest-neighbor search (mlpack_kfn and the
KFN class), leading to orders-of-magnitude speedups in some cases.
* Bump minimum required version of Armadillo to 4.200.0.
* Added simple Gradient Descent optimizer, found in
src/mlpack/core/optimizers/gradient_descent/ (#792).
* Added approximate furthest neighbor search algorithms QDAFN and
DrusillaSelect in src/mlpack/methods/approx_kfn/, with command-line program
mlpack_approx_kfn.
## mlpack 2.0.3
_2016-07-21_
* Added multiprobe LSH (#691). The parameter 'T' to LSHSearch::Search() can
now be used to control the number of extra bins that are probed, as can the
-T (--num_probes) option to mlpack_lsh.
* Added the Hilbert R tree to src/mlpack/core/tree/rectangle_tree/ (#664). It
can be used as the typedef HilbertRTree, and it is now an option in the
mlpack_knn, mlpack_kfn, mlpack_range_search, and mlpack_krann command-line
programs.
* Added the mlpack_preprocess_split and mlpack_preprocess_binarize programs,
which can be used for preprocessing code (#650, #666).
* Added OpenMP support to LSHSearch and mlpack_lsh (#700).
## mlpack 2.0.2
_2016-06-20_
* Added the function LSHSearch::Projections(), which returns an arma::cube
with each projection table in a slice (#663). Instead of Projection(i), you
should now use Projections().slice(i); see the sketch after this list.
* A new constructor has been added to LSHSearch that creates objects using
projection tables provided in an arma::cube (#663).
* Handle zero-variance dimensions in DET (#515).
* Add MiniBatchSGD optimizer (src/mlpack/core/optimizers/minibatch_sgd/) and
allow its use in mlpack_logistic_regression and mlpack_nca programs.
* Add better backtrace support from Grzegorz Krajewski for Log::Fatal messages
when compiled with debugging and profiling symbols. This requires libbfd
and libdl to be present during compilation.
* CosineTree test fix from Mikhail Lozhnikov (#358).
* Fixed HMM initial state estimation (#600).
* Changed versioning macros __MLPACK_VERSION_MAJOR, __MLPACK_VERSION_MINOR,
and __MLPACK_VERSION_PATCH to MLPACK_VERSION_MAJOR, MLPACK_VERSION_MINOR,
and MLPACK_VERSION_PATCH. The old names will remain in place until
mlpack 3.0.0.
* Renamed mlpack_allknn, mlpack_allkfn, and mlpack_allkrann to mlpack_knn,
mlpack_kfn, and mlpack_krann. The mlpack_allknn, mlpack_allkfn, and
mlpack_allkrann programs will remain as copies until mlpack 3.0.0.
* Add --random_initialization option to mlpack_hmm_train, for use when no
labels are provided.
* Add --kill_empty_clusters option to mlpack_kmeans and KillEmptyClusters
policy for the KMeans class (#595, #596).
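
A minimal sketch of the Projections() accessor described above (the dataset and hashing parameters below are illustrative):

```c++
#include <mlpack/core.hpp>
#include <mlpack/methods/lsh/lsh_search.hpp>

using namespace mlpack;

int main()
{
  // Illustrative data: 500 random 10-dimensional reference points.
  arma::mat referenceData(10, 500, arma::fill::randu);

  // Build the hash tables: 10 projections per table, 5 tables.
  neighbor::LSHSearch<> lsh(referenceData, 10, 5);

  // Each slice of the returned arma::cube is one projection table; this
  // replaces the old Projection(i) accessor.
  arma::mat firstTable = lsh.Projections().slice(0);
}
```
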
## mlpack 2.0.1
_2016-02-04_
* Fix CMake to properly detect when MKL is being used with Armadillo.
* Minor parameter handling fixes to mlpack_logistic_regression (#504, #505).
* Properly install arma_config.hpp.
* Memory handling fixes for Hoeffding tree code.
* Add functions that allow changing training-time parameters to HoeffdingTree
class.
* Fix infinite loop in sparse coding test.
* Documentation spelling fixes (#501).
* Properly handle covariances for Gaussians with large condition number
(#496), preventing GMMs from filling with NaNs during training (and also
HMMs that use GMMs).
* CMake fixes for finding LAPACK and BLAS as Armadillo dependencies when ATLAS
is used.
* CMake fix for projects using mlpack's CMake configuration from elsewhere
(#512).
## mlpack 2.0.0
_2015-12-24_
* Removed overclustering support from k-means because it is not well-tested,
may be buggy, and is (I think) unused. If this was support you were using,
open a bug or get in touch with us; it would not be hard for us to
reimplement it.
* Refactored KMeans to allow different types of Lloyd iterations.
* Added implementations of k-means: Elkan's algorithm, Hamerly's algorithm,
Pelleg-Moore's algorithm, and the DTNN (dual-tree nearest neighbor)
algorithm.
* Significant acceleration of LRSDP via the use of accu(a % b) instead of
trace(a * b).
* Added MatrixCompletion class (matrix_completion), which performs nuclear
norm minimization to fill unknown values of an input matrix.
* No more dependence on Boost.Random; now we use C++11 STL random support.
* Add softmax regression, contributed by Siddharth Agrawal and QiaoAn Chen.
* Changed NeighborSearch, RangeSearch, FastMKS, LSH, and RASearch API; these
classes now take the query sets in the Search() method, instead of in the
constructor; see the sketch after this list.
* Use OpenMP, if available. For now OpenMP support is only available in the
DET training code.
* Add support for predicting new test point values to LARS and the
command-line `lars` program.
* Add serialization support for `Perceptron` and `LogisticRegression`.
* Refactor SoftmaxRegression to predict into an `arma::Row<size_t>` object,
and add a `softmax_regression` program.
* Refactor LSH to allow loading and saving of models.
* ToString() is removed entirely (#487).
* Add `--input_model_file` and `--output_model_file` options to appropriate
machine learning algorithms.
* Rename all executables to start with an "mlpack" prefix (#229).
* Add HoeffdingTree and `mlpack_hoeffding_tree`, an implementation of the
streaming decision tree methodology from Domingos and Hulten in 2000.
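
For illustration, a minimal sketch of the new Search() usage described above (assuming the AllkNN typedef; the data and value of k here are illustrative):

```c++
#include <mlpack/core.hpp>
#include <mlpack/methods/neighbor_search/neighbor_search.hpp>

using namespace mlpack;

int main()
{
  // Illustrative data: 1000 reference points and 100 query points in 5-d.
  arma::mat referenceData(5, 1000, arma::fill::randu);
  arma::mat queryData(5, 100, arma::fill::randu);

  // The constructor now takes only the reference set...
  neighbor::AllkNN knn(referenceData);

  // ...and the query set is passed to Search() instead.
  arma::Mat<size_t> neighbors;
  arma::mat distances;
  knn.Search(queryData, 3, neighbors, distances);
}
```
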
## mlpack 1.0.12
_2015-01-07_
* Switch to 3-clause BSD license (from LGPL).
## mlpack 1.0.11
_2014-12-11_
* Proper handling of dimension calculation in PCA.
* Load parameter vectors properly for LinearRegression models.
* Linker fixes for AugLagrangian specializations under Visual Studio.
* Add support for observation weights to LinearRegression.
* `MahalanobisDistance<>` now takes the root of the distance by default and
therefore satisfies the triangle inequality (TakeRoot now defaults to true).
* Better handling of optional Armadillo HDF5 dependency.
* Fixes for numerous intermittent test failures.
* math::RandomSeed() now sets the random seed for recent (>=3.930) Armadillo
versions.
* Handle Newton method convergence better for
SparseCoding::OptimizeDictionary() and make maximum iterations a parameter.
* Known bug: CosineTree construction may fail in some cases on i386 systems
(#358).
## mlpack 1.0.10
_2014-08-29_
* Bugfix for NeighborSearch regression which caused very slow allknn/allkfn.
Speeds are now restored to approximately 1.0.8 speeds, with significant
improvement for the cover tree (#347).
* Detect dependencies correctly when ARMA_USE_WRAPPER is not being defined
(i.e., libarmadillo.so does not exist).
* Bugfix for compilation under Visual Studio (#348).
## mlpack 1.0.9
_2014-07-28_
* GMM initialization is now safer and provides a working GMM when constructed
with only the dimensionality and number of Gaussians (#301).
* Check for division by 0 in Forward-Backward Algorithm in HMMs (#301).
* Fix MaxVarianceNewCluster (used when re-initializing clusters for k-means)
(#301).
* Fixed implementation of Viterbi algorithm in HMM::Predict() (#303).
* Significant speedups for dual-tree algorithms using the cover tree (#235,
#314) including a faster implementation of FastMKS.
* Fix for LRSDP optimizer so that it compiles and can be used (#312).
* CF (collaborative filtering) now expects users and items to be zero-indexed,
not one-indexed (#311).
* CF::GetRecommendations() API change: now requires the number of
recommendations as the first parameter. The number of users in the local
neighborhood should be specified with CF::NumUsersForSimilarity(); see the
sketch after this list.
* Removed incorrect PeriodicHRectBound (#58).
* Refactor LRSDP into LRSDP class and standalone function to be optimized
(#305).
* Fix for centering in kernel PCA (#337).
* Added simulated annealing (SA) optimizer, contributed by Zhihao Lou.
* HMMs now support initial state probabilities; these can be set in the
constructor, trained, or set manually with HMM::Initial() (#302).
* Added Nyström method for kernel matrix approximation by Marcus Edel.
* Kernel PCA now supports using Nyström method for approximation.
* Ball trees now work with dual-tree algorithms, via the BallBound<> bound
structure (#307); fixed by Yash Vadalia.
* The NMF class is now AMF<>, and supports far more types of factorizations,
by Sumedh Ghaisas.
* A QUIC-SVD implementation has returned, written by Siddharth Agrawal and
based on older code from Mudit Gupta.
* Added perceptron and decision stump by Udit Saxena (these are weak learners
for an eventual AdaBoost class).
* Sparse autoencoder added by Siddharth Agrawal.
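
A rough sketch of the revised GetRecommendations() call described above (assuming the coordinate-list constructor, with each column holding user, item, and rating; the file name and counts below are illustrative):

```c++
#include <mlpack/core.hpp>
#include <mlpack/methods/cf/cf.hpp>

using namespace mlpack;

int main()
{
  // Illustrative ratings matrix: each column is (user, item, rating), with
  // zero-indexed users and items as of this release.
  arma::mat ratings;
  data::Load("ratings.csv", ratings, true);

  cf::CF cf(ratings);

  // The number of recommendations is now the first argument; the local
  // neighborhood size is configured separately via NumUsersForSimilarity().
  arma::Mat<size_t> recommendations;
  cf.GetRecommendations(10, recommendations);
}
```
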
## mlpack 1.0.8
_2014-01-06_
* Memory leak in NeighborSearch index-mapping code fixed (#298).
* GMMs can be trained using the existing model as a starting point by
specifying an additional boolean parameter to GMM::Estimate() (#296).
* Logistic regression implementation added in methods/logistic_regression (see
also #293).
* L-BFGS optimizer now returns its function via Function().
* Version information is now obtainable via mlpack::util::GetVersion() or the
__MLPACK_VERSION_MAJOR, __MLPACK_VERSION_MINOR, and __MLPACK_VERSION_PATCH
macros (#297).
* Fix typos in allkfn and allkrann output.
## mlpack 1.0.7
_2013-10-04_
* Cover tree support for range search (range_search), rank-approximate nearest
neighbors (allkrann), minimum spanning tree calculation (emst), and FastMKS
(fastmks).
* Dual-tree FastMKS implementation added and tested.
* Added collaborative filtering package (cf) that can provide recommendations
when given users and items.
* Fix for correctness of Kernel PCA (kernel_pca) (#270).
* Speedups for PCA and Kernel PCA (#198).
* Fix for correctness of Neighborhood Components Analysis (NCA) (#279).
* Minor speedups for dual-tree algorithms.
* Fix for Naive Bayes Classifier (nbc) (#269).
* Added a ridge regression option to LinearRegression (linear_regression)
(#286).
* Gaussian Mixture Models (gmm::GMM<>) now support arbitrary covariance matrix
constraints (#283).
* MVU (mvu) removed because it is known to not work (#183).
* Minor updates and fixes for kernels (in mlpack::kernel).
## mlpack 1.0.6
_2013-06-13_
* Minor bugfix so that FastMKS gets built.
## mlpack 1.0.5
_2013-05-01_
* Speedups of cover tree traversers (#235).
* Addition of rank-approximate nearest neighbors (RANN), found in
src/mlpack/methods/rann/.
* Addition of fast exact max-kernel search (FastMKS), found in
src/mlpack/methods/fastmks/.
* Fix for EM covariance estimation; this should improve GMM training time.
* More parameters for GMM estimation.
* Force GMM and GaussianDistribution covariance matrices to be positive
definite, so that training converges much more often.
* Add parameter for the tolerance of the Baum-Welch algorithm for HMM
training.
* Fix for compilation with clang compiler.
* Fix for k-furthest-neighbor-search.
## mlpack 1.0.4
_2013-02-08_
* Force minimum Armadillo version to 2.4.2.
* Better output of class types to streams; a class with a ToString() method
implemented can be sent to a stream with operator<<.
* Change return type of GMM::Estimate() to double (#257).
* Style fixes for k-means and RADICAL.
* Handle size_t support correctly with Armadillo 3.6.2 (#258).
* Add locality-sensitive hashing (LSH), found in src/mlpack/methods/lsh/.
* Better tests for SGD (stochastic gradient descent) and NCA (neighborhood
components analysis).
## mlpack 1.0.3
_2012-09-16_
* Remove internal sparse matrix support because Armadillo 3.4.0 now includes
it. When using Armadillo versions older than 3.4.0, sparse matrix support
is not available.
* NCA (neighborhood components analysis) now supports an arbitrary optimizer
(#245), including stochastic gradient descent (#249).
## mlpack 1.0.2
_2012-08-15_
* Added density estimation trees, found in src/mlpack/methods/det/.
* Added non-negative matrix factorization, found in src/mlpack/methods/nmf/.
* Added experimental cover tree implementation, found in
src/mlpack/core/tree/cover_tree/ (#157).
* Better reporting of boost::program_options errors (#225).
* Fix for timers on Windows (#212, #211).
* Fix for allknn and allkfn output (#204).
* Sparse coding dictionary initialization is now a template parameter (#220).
## mlpack 1.0.1
_2012-03-03_
* Added kernel principal components analysis (kernel PCA), found in
src/mlpack/methods/kernel_pca/ (#74).
* Fix for Lovasz-Theta AugLagrangian tests (#182).
* Fixes for allknn output (#185, #186).
* Added range search executable (#192).
* Adapted citations in documentation to BibTeX; no citations in -h output
(#195).
* Stop use of 'const char*' and prefer 'std::string' (#176).
* Support seeds for random numbers (#177).
## mlpack 1.0.0
_2011-12-17_
* Initial release. See any resolved tickets numbered less than #196 or
execute this query:
http://www.mlpack.org/trac/query?status=closed&milestone=mlpack+1.0.0