File: repl.txt

{{alias}}( N[, options] )
    Returns an accumulator function which incrementally performs binary
    classification using stochastic gradient descent (SGD).

    If provided a feature vector and response value, the accumulator function
    updates a binary classification model and returns updated model
    coefficients.

    If not provided a feature vector and response value, the accumulator
    function returns the current model coefficients.

    Stochastic gradient descent is sensitive to the scaling of the features. One
    is advised to either scale each feature to `[0,1]` or `[-1,1]` or to
    transform the features into z-scores with zero mean and unit variance. One
    should keep in mind that the same scaling must be applied to test data in
    order to obtain accurate predictions.
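
    As a rough sketch of the z-score transform mentioned above (plain
    JavaScript, independent of the library; the `zscore` helper is hypothetical,
    not part of this package):

    ```javascript
    // Transform each feature column to z-scores (zero mean, unit variance).
    function zscore( columns ) {
        return columns.map( function onColumn( col ) {
            var n = col.length;
            var mean = col.reduce( function sum( a, b ) {
                return a + b;
            }, 0.0 ) / n;
            var variance = col.reduce( function sq( a, b ) {
                return a + ( ( b - mean ) * ( b - mean ) );
            }, 0.0 ) / n;
            var sd = Math.sqrt( variance ) || 1.0; // guard against constant features
            return col.map( function onValue( v ) {
                return ( v - mean ) / sd;
            });
        });
    }

    var scaled = zscore( [ [ 1.0, 2.0, 3.0 ] ] );
    // scaled[ 0 ] now has zero mean and unit variance
    ```

    The same `mean` and `sd` computed on the training features would have to be
    reused when scaling observations passed to `acc.predict`.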

    In general, the more data provided to an accumulator, the more reliable the
    model predictions.

    Parameters
    ----------
    N: integer
        Number of features.

    options: Object (optional)
        Function options.

    options.intercept: boolean (optional)
        Boolean indicating whether to include an intercept. Default: true.

    options.lambda: number (optional)
        Regularization parameter. Default: 1.0e-4.

    options.learningRate: ArrayLike (optional)
        Learning rate function and associated (optional) parameters. The first
        array element specifies the learning rate function and must be one of
        the following:

        - ['constant', ...]: constant learning rate function. To set the
        learning rate, provide a second array element. By default, when the
        learning rate function is 'constant', the learning rate is set to 0.02.

        - ['basic']: basic learning rate function according to the formula
        `10/(10+t)` where `t` is the current iteration.

        - ['invscaling', ...]: inverse scaling learning rate function according
        to the formula `eta0/pow(t, power_t)` where `eta0` is the initial
        learning rate and `power_t` is the exponent controlling how quickly the
        learning rate decreases. To set the initial learning rate, provide a
        second array element. By default, the initial learning rate is 0.02. To
        set the exponent, provide a third array element. By default, the
        exponent is 0.5.

        - ['pegasos']: Pegasos learning rate function according to the formula
        `1/(lambda*t)` where `t` is the current iteration and `lambda` is the
        regularization parameter.

        Default: ['basic'].
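
    The schedules above can be sketched directly from their formulas (a
    hypothetical standalone helper, not the library's internal implementation;
    the `eta0` and `powerT` parameter names are assumptions for illustration):

    ```javascript
    // Compute the learning rate at iteration `t` for a given schedule.
    function learningRate( schedule, t, params ) {
        var eta0;
        var powerT;
        if ( schedule === 'constant' ) {
            return ( params.eta0 !== void 0 ) ? params.eta0 : 0.02;
        }
        if ( schedule === 'basic' ) {
            return 10.0 / ( 10.0 + t );
        }
        if ( schedule === 'invscaling' ) {
            eta0 = ( params.eta0 !== void 0 ) ? params.eta0 : 0.02;
            powerT = ( params.powerT !== void 0 ) ? params.powerT : 0.5;
            return eta0 / Math.pow( t, powerT );
        }
        if ( schedule === 'pegasos' ) {
            return 1.0 / ( params.lambda * t );
        }
        throw new Error( 'unknown schedule' );
    }

    var eta = learningRate( 'basic', 10, {} );
    // => 10/(10+10) = 0.5
    ```

    Note how 'basic', 'invscaling', and 'pegasos' all decay as `t` grows, which
    is what lets the coefficients settle as more observations arrive.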

    options.loss: string (optional)
        Loss function. Must be one of the following:

        - hinge: hinge loss function. Corresponds to a soft-margin linear
        Support Vector Machine (SVM), which can handle non-linearly separable
        data.

        - log: logistic loss function. Corresponds to Logistic Regression.

        - modifiedHuber: Huber loss function variant for classification.

        - perceptron: hinge loss function without a margin. Corresponds to the
        original Perceptron by Rosenblatt.

        - squaredHinge: squared hinge loss function. Corresponds to a linear
        SVM with squared hinge loss (L2-SVM).

        Default: 'log'.
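
    For intuition, the listed losses can be written in terms of the margin
    `m = y*f`, where `y` is the label in `{-1,+1}` and `f` is the raw linear
    score. The sketch below uses the standard textbook formulas, not the
    library's internals:

    ```javascript
    // Evaluate a classification loss at label `y` (+1 or -1) and score `f`.
    function loss( name, y, f ) {
        var m = y * f;
        var h;
        if ( name === 'hinge' ) {
            return Math.max( 0.0, 1.0 - m );
        }
        if ( name === 'log' ) {
            return Math.log( 1.0 + Math.exp( -m ) );
        }
        if ( name === 'modifiedHuber' ) {
            if ( m >= -1.0 ) {
                h = Math.max( 0.0, 1.0 - m );
                return h * h;
            }
            return -4.0 * m; // linear tail => robust to outliers
        }
        if ( name === 'perceptron' ) {
            return Math.max( 0.0, -m ); // hinge without a margin
        }
        if ( name === 'squaredHinge' ) {
            h = Math.max( 0.0, 1.0 - m );
            return h * h;
        }
        throw new Error( 'unknown loss' );
    }

    var l = loss( 'hinge', 1, 0.5 );
    // => 1 - 0.5 = 0.5
    ```

    Only 'log' and 'modifiedHuber' yield calibrated probability estimates,
    which is why `acc.predict` restricts the 'probability' prediction type to
    those two losses.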

    Returns
    -------
    acc: Function
        Accumulator function.

    acc.predict: Function
        Predicts response values for one or more observation vectors. Provide a
        second argument to specify the prediction type. Must be one of the
        following: 'label', 'probability', or 'linear'. Default: 'label'.

        Note that the probability prediction type is only compatible with 'log'
        and 'modifiedHuber' loss functions.

    Examples
    --------
    // Create an accumulator:
    > var opts = {};
    > opts.intercept = true;
    > opts.lambda = 1.0e-5;
    > var acc = {{alias}}( 3, opts );

    // Update the model:
    > var buf = new {{alias:@stdlib/array/float64}}( [ 2.3, 1.0, 5.0 ] );
    > var x = {{alias:@stdlib/ndarray/array}}( buf );
    > var coefs = acc( x, 1 )
    <ndarray>

    // Create a new observation vector:
    > buf = new {{alias:@stdlib/array/float64}}( [ 2.3, 5.3, 8.6 ] );
    > x = {{alias:@stdlib/ndarray/array}}( buf );

    // Predict the response value:
    > var yhat = acc.predict( x )
    <ndarray>

    See Also
    --------