<HTML>
<HEAD><TITLE>NF01AY - SLICOT Library Routine Documentation</TITLE>
</HEAD>
<BODY>
<H2><A Name="NF01AY">NF01AY</A></H2>
<H3>
Computing the output of a set of neural networks
</H3>
<A HREF ="#Specification"><B>[Specification]</B></A>
<A HREF ="#Arguments"><B>[Arguments]</B></A>
<A HREF ="#Method"><B>[Method]</B></A>
<A HREF ="#References"><B>[References]</B></A>
<A HREF ="#Comments"><B>[Comments]</B></A>
<A HREF ="#Example"><B>[Example]</B></A>
<P>
<B><FONT SIZE="+1">Purpose</FONT></B>
<PRE>
  To calculate the output of a set of neural networks with the
  structure

             - tanh(w1'*z+b1) -
           /          :         \
    z ---             :           --- sum(ws(i)*...) + b(n+1) --- y,
           \          :         /
             - tanh(wn'*z+bn) -

  given the input z and the parameter vectors wi, ws, and b,
  where z, w1, ..., wn are vectors of length NZ, ws is a vector
  of length n, b(1), ..., b(n+1) are scalars, and n is called the
  number of neurons in the hidden layer, or just number of neurons.
  Such a network is used for each of the L output variables.
</PRE>
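<P>
In other words, for every input sample z and every output variable the
routine evaluates
y = ws(1)*tanh(w1'*z+b(1)) + ... + ws(n)*tanh(wn'*z+b(n)) + b(n+1).
The program below is only an illustrative sketch of this formula for a
single sample and a single output variable; the program name and all
data are invented for the illustration, and the Library routine itself
processes all NSMP samples and L output variables at once.
<PRE>
      PROGRAM NETOUT
C     Illustrative sketch (not part of the SLICOT Library): evaluates
C     y = sum( ws(i)*tanh( wi'*z + b(i) ) ) + b(n+1) for one output
C     variable and one input sample, with NN = 2 neurons and NZ = 3
C     inputs.
      INTEGER           NN, NZ
      PARAMETER         ( NN = 2, NZ = 3 )
      DOUBLE PRECISION  B(NN+1), W(NZ,NN), WS(NN), Z(NZ)
      DOUBLE PRECISION  S, Y
      INTEGER           I, J
C     .. arbitrary illustrative data ..
      DATA              W  / 0.1D0, -0.2D0,  0.3D0,
     $                       0.4D0,  0.5D0, -0.6D0 /
      DATA              WS / 1.0D0, -1.0D0 /
      DATA              B  / 0.1D0,  0.2D0,  0.3D0 /
      DATA              Z  / 1.0D0,  2.0D0,  3.0D0 /
C
      Y = B(NN+1)
      DO 20 I = 1, NN
C        Form the argument wi'*z + b(i) of neuron I, ...
         S = B(I)
         DO 10 J = 1, NZ
            S = S + W(J,I)*Z(J)
   10    CONTINUE
C        ... pass it through tanh, and add its weighted contribution.
         Y = Y + WS(I)*TANH(S)
   20 CONTINUE
      WRITE ( *, * ) 'y = ', Y
      END
</PRE>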
<A name="Specification"><B><FONT SIZE="+1">Specification</FONT></B></A>
<PRE>
      SUBROUTINE NF01AY( NSMP, NZ, L, IPAR, LIPAR, WB, LWB, Z, LDZ,
     $                   Y, LDY, DWORK, LDWORK, INFO )
C     .. Scalar Arguments ..
      INTEGER           INFO, L, LDWORK, LDY, LDZ, LIPAR, LWB, NSMP, NZ
C     .. Array Arguments ..
      DOUBLE PRECISION  DWORK(*), WB(*), Y(LDY,*), Z(LDZ,*)
      INTEGER           IPAR(*)
</PRE>
<A name="Arguments"><B><FONT SIZE="+1">Arguments</FONT></B></A>
<P>
<B>Input/Output Parameters</B>
<PRE>
  NSMP    (input) INTEGER
          The number of training samples.  NSMP >= 0.

  NZ      (input) INTEGER
          The length of each input sample.  NZ >= 0.

  L       (input) INTEGER
          The length of each output sample.  L >= 0.

  IPAR    (input) INTEGER array, dimension (LIPAR)
          The integer parameters needed.
          IPAR(1) must contain the number of neurons, n, per output
          variable, denoted NN in the sequel.  NN >= 0.

  LIPAR   (input) INTEGER
          The length of the vector IPAR.  LIPAR >= 1.

  WB      (input) DOUBLE PRECISION array, dimension (LWB)
          The leading (NN*(NZ+2)+1)*L part of this array must
          contain the weights and biases of the network. This vector
          is partitioned into L vectors of length NN*(NZ+2)+1,
          WB = [ wb(1), ..., wb(L) ].  Each wb(k), k = 1, ..., L,
          corresponds to one output variable, and has the structure
          wb(k) = [ w1(1), ..., w1(NZ), ..., wn(1), ..., wn(NZ),
                    ws(1), ..., ws(n), b(1), ..., b(n+1) ],
          where wi(j) are the weights of the hidden layer,
          ws(i) are the weights of the linear output layer, and
          b(i) are the biases, as in the scheme above.  (An indexing
          sketch is given after this parameter list.)

  LWB     (input) INTEGER
          The length of the array WB.
          LWB >= ( NN*(NZ + 2) + 1 )*L.

  Z       (input) DOUBLE PRECISION array, dimension (LDZ, NZ)
          The leading NSMP-by-NZ part of this array must contain the
          set of input samples,
          Z = ( Z(1,1),...,Z(1,NZ); ...; Z(NSMP,1),...,Z(NSMP,NZ) ).

  LDZ     INTEGER
          The leading dimension of the array Z.  LDZ >= MAX(1,NSMP).

  Y       (output) DOUBLE PRECISION array, dimension (LDY, L)
          The leading NSMP-by-L part of this array contains the set
          of output samples,
          Y = ( Y(1,1),...,Y(1,L); ...; Y(NSMP,1),...,Y(NSMP,L) ).

  LDY     INTEGER
          The leading dimension of the array Y.  LDY >= MAX(1,NSMP).
</PRE>
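<P>
As an aid to indexing WB, the following sketch makes the packing
described above explicit; the helper function WBOFF is hypothetical,
introduced only for the illustration, and is not part of the Library.
<PRE>
      INTEGER FUNCTION WBOFF( K, NN, NZ )
C     Hypothetical helper (not part of the SLICOT Library): returns
C     the offset of wb(K) inside WB, so that wb(K) occupies
C     WB(WBOFF(K,NN,NZ)+1), ..., WB(WBOFF(K,NN,NZ)+NN*(NZ+2)+1).
C     Relative to this offset, within wb(K),
C        entries 1, ..., NN*NZ                  hold w1, ..., wn,
C        entries NN*NZ+1, ..., NN*(NZ+1)        hold ws(1..NN),
C        entries NN*(NZ+1)+1, ..., NN*(NZ+2)+1  hold b(1..NN+1).
      INTEGER           K, NN, NZ
      WBOFF = ( K - 1 )*( NN*( NZ + 2 ) + 1 )
      RETURN
      END
</PRE>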
<B>Workspace</B>
<PRE>
  DWORK   DOUBLE PRECISION array, dimension (LDWORK)

  LDWORK  INTEGER
          The length of the array DWORK.  LDWORK >= 2*NN.
          For better performance, LDWORK should be larger.
</PRE>
<B>Error Indicator</B>
<PRE>
  INFO    INTEGER
          = 0:  successful exit;
          < 0:  if INFO = -i, the i-th argument had an illegal
                value.
</PRE>
<A name="Method"><B><FONT SIZE="+1">Method</FONT></B></A>
<PRE>
  BLAS routines are used to compute the matrix-vector products.
</PRE>
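<P>
As a hedged sketch of what such a product looks like, the subroutine
below (its name and interface are invented for the illustration, and
the Library may organize the computation differently) forms the tanh
arguments of one hidden neuron for all NSMP samples with a single
DGEMV call.
<PRE>
      SUBROUTINE HIDARG( NSMP, NZ, Z, LDZ, WI, BI, T )
C     Illustrative sketch (not the Library code): forms, for one
C     hidden neuron with weight vector WI of length NZ and bias BI,
C     the tanh arguments T(j) = WI'*Z(j,:) + BI, j = 1, ..., NSMP.
      INTEGER           LDZ, NSMP, NZ
      DOUBLE PRECISION  BI, T(*), WI(*), Z(LDZ,*)
      DOUBLE PRECISION  ONE
      PARAMETER         ( ONE = 1.0D0 )
      INTEGER           J
      EXTERNAL          DGEMV
C     Preload T with the bias, then accumulate Z*WI on top of it.
      DO 10 J = 1, NSMP
         T(J) = BI
   10 CONTINUE
      CALL DGEMV( 'NoTranspose', NSMP, NZ, ONE, Z, LDZ, WI, 1, ONE,
     $            T, 1 )
      RETURN
      END
</PRE>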
<A name="Comments"><B><FONT SIZE="+1">Further Comments</FONT></B></A>
<PRE>
None
</PRE>
<A name="Example"><B><FONT SIZE="+1">Example</FONT></B></A>
<P>
<B>Program Text</B>
<PRE>
None
</PRE>
<B>Program Data</B>
<PRE>
None
</PRE>
<B>Program Results</B>
<PRE>
None
</PRE>
<HR>
<A HREF="support.html"><B>Return to Supporting Routines index</B></A></BODY>
</HTML>