Regression
==========
Ordinary Least Squares and Ridge Regression
-------------------------------------------

.. autoclass:: mlpy.RidgeRegression
   :members:

.. versionadded:: 2.2.0

.. note::

   The predicted response is computed as:

   .. math:: \hat{y} = \beta_0 + X \boldsymbol\beta
Example (requires the matplotlib module):

.. code-block:: python

   >>> import numpy as np
   >>> import mlpy
   >>> import matplotlib.pyplot as plt
   >>> x = np.array([[1], [2], [3], [4], [5], [6]])  # p = 1
   >>> y = np.array([0.13, 0.19, 0.31, 0.38, 0.49, 0.64])
   >>> rr = mlpy.RidgeRegression(alpha=0.0)  # OLS
   >>> rr.learn(x, y)
   >>> y_hat = rr.pred(x)
   >>> plt.figure(1)
   >>> plt.plot(x[:, 0], y, 'o')   # show y
   >>> plt.plot(x[:, 0], y_hat)    # show y_hat
   >>> plt.show()
.. image:: images/ols.png

.. code-block:: python

   >>> rr.beta0()
   0.0046666666666667078
   >>> rr.beta()
   array([ 0.10057143])

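A nonzero ``alpha`` adds the ridge penalty to the same estimator. The sketch
below reuses the data above with ``alpha=0.1``, an arbitrary value chosen only
for illustration, which shrinks the coefficient toward zero:

.. code-block:: python

   >>> rr2 = mlpy.RidgeRegression(alpha=0.1)  # ridge penalty (illustrative value)
   >>> rr2.learn(x, y)
   >>> y_hat_ridge = rr2.pred(x)
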
Kernel Ridge Regression
-----------------------
.. autoclass:: mlpy.KernelRidgeRegression
   :members:

.. versionadded:: 2.2.0
Example (requires the matplotlib module):

.. code-block:: python

   >>> import numpy as np
   >>> import mlpy
   >>> import matplotlib.pyplot as plt
   >>> x = np.array([[1], [2], [3], [4], [5], [6]])  # p = 1
   >>> y = np.array([0.13, 0.19, 0.31, 0.38, 0.49, 0.64])
   >>> kernel = mlpy.KernelGaussian(sigma=0.01)
   >>> krr = mlpy.KernelRidgeRegression(kernel=kernel, alpha=0.01)
   >>> krr.learn(x, y)
   >>> y_hat = krr.pred(x)
   >>> plt.figure(1)
   >>> plt.plot(x[:, 0], y, 'o')   # show y
   >>> plt.plot(x[:, 0], y_hat)    # show y_hat
   >>> plt.show()

.. image:: images/krr.png

Least Angle Regression (LAR)
----------------------------
Least Angle Regression is described in [Efron04]_.
Covariates should be standardized to have mean 0 and unit length,
and the response should have mean 0:

.. math::

   \sum_{i=1}^n{x_{ij}} = 0, \hspace{1cm} \sum_{i=1}^n{x_{ij}^2} = 1, \hspace{1cm} \sum_{i=1}^n{y_i} = 0 \hspace{1cm} \mathrm{for} \hspace{0.2cm} j = 1, 2, \dots, p.

.. autoclass:: mlpy.Lar
   :members:

.. versionadded:: 2.2.0

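The snippet below is a minimal sketch of this preprocessing on the small data
set used above. The standardization itself is plain NumPy; the :class:`mlpy.Lar`
calls assume the default constructor and the same ``learn``/``pred`` interface
as the estimators above.

.. code-block:: python

   >>> import numpy as np
   >>> import mlpy
   >>> x = np.array([[1.0], [2], [3], [4], [5], [6]])
   >>> y = np.array([0.13, 0.19, 0.31, 0.38, 0.49, 0.64])
   >>> xc = x - x.mean(axis=0)                 # columns with mean 0
   >>> xs = xc / np.sqrt((xc**2).sum(axis=0))  # unit length: sum of squares = 1
   >>> yc = y - y.mean()                       # response with mean 0
   >>> lar = mlpy.Lar()                        # default parameters (assumed)
   >>> lar.learn(xs, yc)
   >>> y_hat = lar.pred(xs) + y.mean()         # add the response mean back
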
LASSO (LARS implementation)
---------------------------
This class implements a simple modification of the LARS algorithm that
produces Lasso estimates. See [Efron04]_ and [Tibshirani96]_.
Covariates should be standardized to have mean 0 and unit length,
and the response should have mean 0:

.. math::

   \sum_{i=1}^n{x_{ij}} = 0, \hspace{1cm} \sum_{i=1}^n{x_{ij}^2} = 1, \hspace{1cm} \sum_{i=1}^n{y_i} = 0 \hspace{1cm} \mathrm{for} \hspace{0.2cm} j = 1, 2, \dots, p.

.. autoclass:: mlpy.Lasso
   :members:

.. versionadded:: 2.2.0

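The same preprocessing applies here. The sketch below simply reuses the
standardized ``xs`` and centered ``yc`` from the LAR example above, and assumes
that :class:`mlpy.Lasso` offers the same default constructor and
``learn``/``pred`` interface.

.. code-block:: python

   >>> lasso = mlpy.Lasso()               # default parameters (assumed)
   >>> lasso.learn(xs, yc)                # xs, yc from the LAR example above
   >>> y_hat = lasso.pred(xs) + y.mean()  # add the response mean back
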
Gradient Descent
----------------
.. autoclass:: mlpy.GradientDescent
   :members:

.. versionadded:: 2.2.0

.. [Efron04] Bradley Efron, Trevor Hastie, Iain Johnstone and Robert Tibshirani. Least Angle Regression. Annals of Statistics, 2004, volume 32, pages 407-499.
.. [Tibshirani96] Robert Tibshirani. Regression shrinkage and selection via the lasso. J. Royal. Statist. Soc B., 1996, volume 58, number 1, pages 267-288.