- mlp

class mlp

Class to define, train, and test a multilayer perceptron.
Methods defined here:
- __init__(self, ni, nh, no, f='linear', w=None)
- Set up an instance of mlp. Initial weights are drawn from a
zero-mean Gaussian whose variance is scaled by the fan-in.
Input:
ni - <int> # of inputs
nh - <int> # of hidden units
no - <int> # of outputs
f - <str> output activation function ('linear' by default)
w - <array of float> vector of initial weights
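A minimal sketch of the fan-in-scaled Gaussian initialization described above, assuming one hidden layer. The helper name `init_weights` and the exact variance (1/fan-in, i.e. std = 1/sqrt(fan-in)) are illustrative assumptions, not taken from mlp itself.

```python
import numpy as np

def init_weights(ni, nh, no, seed=None):
    # Illustrative assumption: std = 1/sqrt(fan-in) for each layer,
    # zero-mean Gaussian weights, zero biases.
    rng = np.random.default_rng(seed)
    w1 = rng.normal(0.0, 1.0 / np.sqrt(ni), size=(ni, nh))  # hidden layer, fan-in = ni
    b1 = np.zeros(nh)
    w2 = rng.normal(0.0, 1.0 / np.sqrt(nh), size=(nh, no))  # output layer, fan-in = nh
    b2 = np.zeros(no)
    return w1, b1, w2, b2
```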
- errfxn(self, w, x, t)
- Return vector of squared-errors for the leastsq optimizer
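A hedged sketch of the errfxn contract: scipy's leastsq minimizes the sum of squares of whatever residual vector it is handed, so the function returns the flattened per-element errors. The stand-in forward function `fwd` is an assumption for illustration only.

```python
import numpy as np

def errfxn(w, x, t, fwd):
    # fwd(x, w) stands in for the network's forward pass (assumed here).
    y = fwd(x, w)
    # leastsq expects a flat 1-d residual vector, one entry per error term.
    return (y - t).ravel()
```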
- fwd_all(self, x, w=None)
- Propagate values forward through the net.
Input:
x - array (size>1) of input patterns
w - optional 1-d vector of weights
Returns:
y - array of outputs for all input patterns
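A minimal forward-pass sketch for a single hidden layer, assuming tanh hidden units and a linear output; the actual hidden activation used by mlp is not stated in this listing, so treat the choice as an assumption.

```python
import numpy as np

def fwd_all(x, w1, b1, w2, b2):
    # Hidden activations, shape (n_patterns, nh); tanh is an assumption.
    z = np.tanh(x @ w1 + b1)
    # Linear output layer, shape (n_patterns, no).
    return z @ w2 + b2
```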
- pack(self)
- Compile weight matrices w1,b1,w2,b2 from net into a
single vector, suitable for optimization routines.
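The compile step can be sketched as a simple concatenation of the flattened parameter arrays, which is the form scipy's optimizers expect. The ordering (w1, b1, w2, b2) is taken from the docstring; the exact layout inside mlp is an assumption.

```python
import numpy as np

def pack(w1, b1, w2, b2):
    # Flatten every parameter array and join them into one 1-d vector.
    return np.hstack([w1.ravel(), b1.ravel(), w2.ravel(), b2.ravel()])
```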
- test_all(self, x, t)
- Test network on an array (size>1) of patterns
Input:
x - array of input data
t - array of targets
Returns:
sum-squared-error over all data
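The error measure reported by test_all can be sketched as the sum of squared differences between outputs and targets over every pattern and output dimension:

```python
import numpy as np

def sum_squared_error(y, t):
    # Total squared error over all patterns and all output units.
    return float(np.sum((y - t) ** 2))
```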
- train(self, x, t)
- Train network using scipy's leastsq optimizer
Input:
x - array of input data
t - array of targets
N.B. x and t comprise the *entire* collection of training data
Returns:
post-optimization weight vector
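The leastsq training pattern described above can be illustrated on a toy problem: the optimizer is handed a residual function and a flat initial weight vector, and returns the post-optimization weights. The straight-line model here is an assumption standing in for the full network's packed weights.

```python
import numpy as np
from scipy.optimize import leastsq

def residuals(w, x, t):
    # Flat vector of per-pattern errors for a toy linear model.
    return w[0] * x + w[1] - t

x = np.linspace(0.0, 1.0, 20)
t = 2.0 * x + 0.5                      # targets from a known line
w_opt, _ = leastsq(residuals, np.zeros(2), args=(x, t))
```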
- unpack(self)
- Decompose 1-d vector of weights w into appropriate weight
matrices (w1,b1,w2,b2) and reinsert them into net
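The decomposition is the inverse of the pack step: slice the flat vector back into matrices using the layer sizes ni, nh, no. The ordering follows the docstring; the exact layout inside mlp is an assumption.

```python
import numpy as np

def unpack(w, ni, nh, no):
    # Slice the flat vector in the same order pack would have written it.
    i = 0
    w1 = w[i:i + ni * nh].reshape(ni, nh); i += ni * nh
    b1 = w[i:i + nh];                      i += nh
    w2 = w[i:i + nh * no].reshape(nh, no); i += nh * no
    b2 = w[i:i + no]
    return w1, b1, w2, b2
```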