Here you will find a small example that shows how to use the library.
1) After compiling the library, type in this directory
make mlp
Assuming that...
- you have a Linux machine
- you're using the optimizing mode
- you're using the float mode
... you'll have an executable "mlp" in a new Linux_OPT_FLOAT directory.
2) Type ./Linux_OPT_FLOAT/mlp to check that the program runs and to see
its options.
3) Type
./Linux_OPT_FLOAT/mlp -sm model -seed 57 -nhu 15 -lr 0.01 -iter 50 -valid valid_data train_data 54 1
- This will train an MLP with 15 hidden units.
- It'll save the weights of the MLP in "model".
- The "-seed 57" is the random seed for random generation number,
just to have the same results as mine.
- The learning rate is 0.01 .
- You will train at most 50 iterations of stochastic gradient.
- We use a validation file: valid_data.
- We train using the train_data file, which has 54 inputs and 1 target.
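For the curious, here is a minimal C++ sketch of what one step of
stochastic gradient looks like for such an MLP (one hidden layer of
tanh units, a linear output, and a mean squared error loss). It is an
illustration written for this README, NOT the library's actual code,
and every name in it is made up:

  // Illustrative sketch, NOT the library's code: one stochastic
  // gradient step for a 1-hidden-layer MLP trained with MSE.
  #include <cmath>
  #include <cstdlib>
  #include <vector>

  struct Mlp {
    int n_in, n_hid;                  // e.g. 54 inputs, 15 hidden units
    std::vector<float> w1, b1;        // hidden layer: n_hid x n_in + n_hid
    std::vector<float> w2; float b2;  // linear output layer: n_hid + 1

    Mlp(int ni, int nh) : n_in(ni), n_hid(nh),
        w1(nh * ni), b1(nh), w2(nh), b2(0.f) {
      // small random initial weights
      for (float &w : w1) w = 0.1f * (std::rand() / (float)RAND_MAX - 0.5f);
      for (float &w : w2) w = 0.1f * (std::rand() / (float)RAND_MAX - 0.5f);
    }

    // One stochastic gradient step on a single (x, target) pair;
    // returns the squared error before the update.
    float sgd_step(const float *x, float target, float lr) {
      std::vector<float> h(n_hid);            // forward pass
      float out = b2;
      for (int j = 0; j < n_hid; ++j) {
        float a = b1[j];
        for (int i = 0; i < n_in; ++i) a += w1[j * n_in + i] * x[i];
        h[j] = std::tanh(a);
        out += w2[j] * h[j];
      }
      float dout = 2.f * (out - target);      // d(MSE)/d(out)
      for (int j = 0; j < n_hid; ++j) {       // backward pass + update
        float dh = dout * w2[j] * (1.f - h[j] * h[j]); // tanh' = 1 - h^2
        w2[j] -= lr * dout * h[j];
        for (int i = 0; i < n_in; ++i) w1[j * n_in + i] -= lr * dh * x[i];
        b1[j] -= lr * dh;
      }
      b2 -= lr * dout;
      return (out - target) * (out - target);
    }
  };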
Okay? The results are in the following files:
- the_class_err: the last line should be 0.118
(11.8% of misclassified examples on the training set)
- the_mse: the last line should be 0.354232
(mean squared error on the training set)
- the_valid_class_err: the last line should be 0.279
(27.9% of misclassified examples on the validation set)
- the_valid_mse: the last line should be 0.883813
(mean squared error on the validation set)
(results may vary slightly depending on your computer's
architecture; the sketch below shows how such numbers are computed)
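If you wonder how these numbers are computed, here is an illustrative
C++ sketch (it assumes a single +/-1 target classified by the sign of
the output; that assumption and all names below are mine, not
necessarily the library's):

  #include <cstddef>
  // Illustrative: fraction of misclassified examples (by sign) and
  // mean squared error, given model outputs and +/-1 targets.
  void eval_metrics(const float *outputs, const float *targets,
                    size_t n, float *class_err, float *mse) {
    float errs = 0.f, se = 0.f;
    for (size_t t = 0; t < n; ++t) {
      float d = outputs[t] - targets[t];
      se += d * d;                          // accumulate squared error
      if (outputs[t] * targets[t] <= 0.f)   // sign mismatch
        errs += 1.f;
    }
    *class_err = errs / n;   // e.g. 0.118 means 11.8% misclassified
    *mse = se / n;           // e.g. 0.354232 on the training set
  }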
4) If you want to test the learned model, just type
./Linux_OPT_FLOAT/mlp -test model -nhu 15 test_data 54 1
- Note that the program is very simple (like everything in this
directory): it saves only the parameters of the model, not its
structure (such as the number of hidden units). So DON'T FORGET to
give the structure of the model on the command line, or you'll get
something like a "segmentation fault" (see the sketch after this
list for why).
- The results should be in the the_class_err (and the_mse) files,
each with only one line: 0.276 (and 0.889326 for the_mse).
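To see why forgetting the structure can crash the program, here is an
illustrative C++ sketch of loading a flat parameter file (made up for
this README; the real file format may differ):

  #include <cstddef>
  #include <cstdio>
  #include <vector>
  // The file stores only raw parameter values, so the loader must
  // already know how many to expect. With the wrong -nhu, n_params
  // below is wrong and reads/writes can go out of bounds.
  std::vector<float> load_params(const char *path, int n_in, int n_hid) {
    size_t n_params = (size_t)n_hid * n_in + n_hid  // hidden weights + biases
                    + n_hid + 1;                    // output weights + bias
    std::vector<float> params(n_params);
    if (FILE *f = std::fopen(path, "rb")) {
      if (std::fread(params.data(), sizeof(float), n_params, f) != n_params)
        std::fprintf(stderr, "parameter count does not match the structure\n");
      std::fclose(f);
    }
    return params;
  }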