<!doctype html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
<html><head><title>Python: module mlp</title>
</head><body bgcolor="#f0f0f8">
<table width="100%" cellspacing=0 cellpadding=2 border=0 summary="heading">
<tr bgcolor="#7799ee">
<td valign=bottom> <br>
<font color="#ffffff" face="helvetica, arial"> <br><big><big><strong>mlp</strong></big></big></font></td
><td align=right valign=bottom
><font color="#ffffff" face="helvetica, arial"><a href=".">index</a><br><a href="file:/home/tilde/programming/SoC/scipy/Lib/sandbox/ann/mlp.py">/home/tilde/programming/SoC/scipy/Lib/sandbox/ann/mlp.py</a></font></td></tr></table>
<p><tt># <a href="#mlp">mlp</a>.py<br>
# by: Fred Mailhot<br>
# last mod: 2006-08-19</tt></p>
<p>
<table width="100%" cellspacing=0 cellpadding=2 border=0 summary="section">
<tr bgcolor="#aa55cc">
<td colspan=3 valign=bottom> <br>
<font color="#ffffff" face="helvetica, arial"><big><strong>Modules</strong></big></font></td></tr>
<tr><td bgcolor="#aa55cc"><tt> </tt></td><td> </td>
<td width="100%"><table width="100%" summary="list"><tr><td width="25%" valign=top><a href="numpy.html">numpy</a><br>
</td><td width="25%" valign=top></td><td width="25%" valign=top></td><td width="25%" valign=top></td></tr></table></td></tr></table><p>
<table width="100%" cellspacing=0 cellpadding=2 border=0 summary="section">
<tr bgcolor="#ee77aa">
<td colspan=3 valign=bottom> <br>
<font color="#ffffff" face="helvetica, arial"><big><strong>Classes</strong></big></font></td></tr>
<tr><td bgcolor="#ee77aa"><tt> </tt></td><td> </td>
<td width="100%"><dl>
<dt><font face="helvetica, arial"><a href="mlp.html#mlp">mlp</a>
</font></dt></dl>
<p>
<table width="100%" cellspacing=0 cellpadding=2 border=0 summary="section">
<tr bgcolor="#ffc8d8">
<td colspan=3 valign=bottom> <br>
<font color="#000000" face="helvetica, arial"><a name="mlp">class <strong>mlp</strong></a></font></td></tr>
<tr bgcolor="#ffc8d8"><td rowspan=2><tt> </tt></td>
<td colspan=2><tt>Class to define, train and test a multilayer perceptron.<br> </tt></td></tr>
<tr><td> </td>
<td width="100%">Methods defined here:<br>
<dl><dt><a name="mlp-__init__"><strong>__init__</strong></a>(self, ni, nh, no, f<font color="#909090">='linear'</font>, w<font color="#909090">=None</font>)</dt><dd><tt>Set up an instance of <a href="#mlp">mlp</a>. Initial weights are drawn from a<br>
zero-mean Gaussian whose variance is scaled by fan-in.<br>
Input:<br>
ni - &lt;int&gt; number of inputs<br>
nh - &lt;int&gt; number of hidden units<br>
no - &lt;int&gt; number of outputs<br>
f - &lt;str&gt; output activation function<br>
w - &lt;array of float&gt; vector of initial weights</tt></dd></dl>
<dl><dt><a name="mlp-errfxn"><strong>errfxn</strong></a>(self, w, x, t)</dt><dd><tt>Return vector of squared-errors for the leastsq optimizer</tt></dd></dl>
<dl><dt><a name="mlp-fwd_all"><strong>fwd_all</strong></a>(self, x, w<font color="#909090">=None</font>)</dt><dd><tt>Propagate values forward through the net. <br>
Input:<br>
x - array (size>1) of input patterns<br>
w - optional 1-d vector of weights <br>
Returns:<br>
y - array of outputs for all input patterns</tt></dd></dl>
<dl><dt><a name="mlp-pack"><strong>pack</strong></a>(self)</dt><dd><tt>Compile weight matrices w1,b1,w2,b2 from net into a<br>
single vector, suitable for optimization routines.</tt></dd></dl>
<dl><dt><a name="mlp-test_all"><strong>test_all</strong></a>(self, x, t)</dt><dd><tt>Test network on an array (size>1) of patterns<br>
Input:<br>
x - array of input data<br>
t - array of targets<br>
Returns:<br>
sum-squared-error over all data</tt></dd></dl>
<dl><dt><a name="mlp-train"><strong>train</strong></a>(self, x, t)</dt><dd><tt>Train network using scipy's leastsq optimizer<br>
Input:<br>
x - array of input data <br>
t - array of targets<br>
<br>
N.B. x and t comprise the *entire* collection of training data<br>
<br>
Returns:<br>
post-optimization weight vector</tt></dd></dl>
<dl><dt><a name="mlp-unpack"><strong>unpack</strong></a>(self)</dt><dd><tt>Decompose 1-d vector of weights w into appropriate weight <br>
matrices (w1,b1,w2,b2) and reinsert them into net</tt></dd></dl>
</td></tr></table></td></tr></table><p>
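<p>The class interface documented above can be illustrated with a short sketch. The following is a hypothetical re-implementation of the documented signatures (<tt>__init__</tt>, <tt>pack</tt>, <tt>unpack</tt>, <tt>fwd_all</tt>), not the sandbox code itself; internals such as the tanh hidden layer, the default bias values, and the class name <tt>MLPSketch</tt> are assumptions.</p>

```python
# Hypothetical sketch of the documented mlp API. Parameter names
# (ni, nh, no, f, w) and method names follow the docs above; all
# implementation details are illustrative assumptions.
import numpy as np

class MLPSketch:
    def __init__(self, ni, nh, no, f='linear', seed=0):
        rng = np.random.RandomState(seed)
        # zero-mean Gaussian initial weights, variance scaled by fan-in
        self.w1 = rng.randn(ni, nh) / np.sqrt(ni)
        self.b1 = np.zeros(nh)
        self.w2 = rng.randn(nh, no) / np.sqrt(nh)
        self.b2 = np.zeros(no)
        self.shapes = [(ni, nh), (nh,), (nh, no), (no,)]

    def pack(self):
        # flatten w1, b1, w2, b2 into one vector for an optimizer
        return np.concatenate([p.ravel() for p in
                               (self.w1, self.b1, self.w2, self.b2)])

    def unpack(self, w):
        # split a flat weight vector back into the four parameter arrays
        i, parts = 0, []
        for shape in self.shapes:
            n = int(np.prod(shape))
            parts.append(w[i:i + n].reshape(shape))
            i += n
        self.w1, self.b1, self.w2, self.b2 = parts

    def fwd_all(self, x, w=None):
        # propagate an array of input patterns forward through the net;
        # an optional flat weight vector w overrides the stored weights
        if w is not None:
            self.unpack(w)
        h = np.tanh(x.dot(self.w1) + self.b1)   # assumed hidden activation
        return h.dot(self.w2) + self.b2          # 'linear' output layer
```

<p>With this sketch, <tt>pack</tt> and <tt>unpack</tt> are exact inverses, which is what lets a flat-vector optimizer such as <tt>leastsq</tt> drive the network's matrix-shaped parameters.</p>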
<table width="100%" cellspacing=0 cellpadding=2 border=0 summary="section">
<tr bgcolor="#eeaa77">
<td colspan=3 valign=bottom> <br>
<font color="#ffffff" face="helvetica, arial"><big><strong>Functions</strong></big></font></td></tr>
<tr><td bgcolor="#eeaa77"><tt> </tt></td><td> </td>
<td width="100%"><dl><dt><a name="-main"><strong>main</strong></a>()</dt><dd><tt>Build/train/test MLP</tt></dd></dl>
</td></tr></table>
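<p>The error conventions described in <tt>errfxn</tt> and <tt>test_all</tt> above can be sketched as follows. These are hypothetical stand-alone functions written to match the docstrings (a vector of squared errors per output value, and its sum over all patterns); the function name <tt>sum_squared_error</tt> is an assumption, not part of the module.</p>

```python
import numpy as np

def errfxn(y, t):
    # vector of squared errors, one entry per output value, matching
    # the "vector of squared-errors" the errfxn docstring describes
    return ((np.asarray(y, float) - np.asarray(t, float)) ** 2).ravel()

def sum_squared_error(y, t):
    # scalar sum-squared error over all data, as test_all is
    # documented to return
    return errfxn(y, t).sum()
```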
</body></html>