|
============================================================
Unsupervised learning: seeking representations of the data
============================================================
Clustering: grouping observations together
============================================
.. topic:: The problem solved in clustering
Given the iris dataset, if we knew that there were 3 types of iris but
did not have access to a taxonomist to label them, we could try a
**clustering task**: splitting the observations into well-separated groups
called *clusters*.
..
>>> # Set the PRNG
>>> import numpy as np
>>> np.random.seed(1)
K-means clustering
-------------------
Note that there exist many different clustering criteria and associated
algorithms. The simplest clustering algorithm is the
:ref:`k_means`.
.. image:: ../../auto_examples/cluster/images/plot_cluster_iris_2.png
:target: ../../auto_examples/cluster/plot_cluster_iris.html
:scale: 70
:align: right
::
>>> from sklearn import cluster, datasets
>>> iris = datasets.load_iris()
>>> X_iris = iris.data
>>> y_iris = iris.target
>>> k_means = cluster.KMeans(k=3)
>>> k_means.fit(X_iris) # doctest: +ELLIPSIS
KMeans(copy_x=True, init='k-means++', k=3, max_iter=300,...
>>> print k_means.labels_[::10]
[1 1 1 1 1 0 0 0 0 0 2 2 2 2 2]
>>> print y_iris[::10]
[0 0 0 0 0 1 1 1 1 1 2 2 2 2 2]
.. |k_means_iris_bad_init| image:: ../../auto_examples/cluster/images/plot_cluster_iris_3.png
:target: ../../auto_examples/cluster/plot_cluster_iris.html
:scale: 63
.. |k_means_iris_8| image:: ../../auto_examples/cluster/images/plot_cluster_iris_1.png
:target: ../../auto_examples/cluster/plot_cluster_iris.html
:scale: 63
.. |cluster_iris_truth| image:: ../../auto_examples/cluster/images/plot_cluster_iris_4.png
:target: ../../auto_examples/cluster/plot_cluster_iris.html
:scale: 63
.. warning::
There is absolutely no guarantee of recovering the ground truth. First,
choosing the right number of clusters is hard. Second, the algorithm
is sensitive to initialization and can fall into local minima, although
scikit-learn employs several tricks to mitigate this issue.
.. list-table::
:class: centered
*
- |k_means_iris_bad_init|
- |k_means_iris_8|
- |cluster_iris_truth|
*
- **Bad initialization**
- **8 clusters**
- **Ground truth**
**Don't over-interpret clustering results**
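As the warning above notes, a single k-means run can converge to a sub-optimal solution depending on its starting point. The sketch below is an illustration, not part of the original example; it assumes the current scikit-learn API, which uses `n_clusters` rather than the `k` argument shown above, and compares single random initializations against `k-means++` seeding with several restarts:

```python
from sklearn import cluster, datasets

X_iris = datasets.load_iris().data

# Several single runs from random initializations: the final inertia
# (the within-cluster sum of squares) varies with the starting point.
inertias = []
for seed in range(5):
    km = cluster.KMeans(n_clusters=3, init="random", n_init=1,
                        random_state=seed).fit(X_iris)
    inertias.append(km.inertia_)

# "k-means++" seeding plus several restarts keeps the best of the runs,
# which mitigates the sensitivity to initialization.
km_best = cluster.KMeans(n_clusters=3, init="k-means++", n_init=10,
                         random_state=0).fit(X_iris)
print(km_best.inertia_ <= min(inertias) + 1e-6)
```

Keeping the best of `n_init` runs is exactly one of the tricks mentioned in the warning.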
.. |lena| image:: ../../auto_examples/cluster/images/plot_lena_compress_1.png
:target: ../../auto_examples/cluster/plot_lena_compress.html
:scale: 60
.. |lena_regular| image:: ../../auto_examples/cluster/images/plot_lena_compress_2.png
:target: ../../auto_examples/cluster/plot_lena_compress.html
:scale: 60
.. |lena_compressed| image:: ../../auto_examples/cluster/images/plot_lena_compress_3.png
:target: ../../auto_examples/cluster/plot_lena_compress.html
:scale: 60
.. |lena_histogram| image:: ../../auto_examples/cluster/images/plot_lena_compress_4.png
:target: ../../auto_examples/cluster/plot_lena_compress.html
:scale: 60
.. topic:: **Application example: vector quantization**
Clustering in general, and KMeans in particular, can be seen as a way
of choosing a small number of exemplars to compress the information,
a problem sometimes known as
`vector quantization <http://en.wikipedia.org/wiki/Vector_quantization>`_.
For instance, this can be used to posterize an image::
>>> import scipy as sp
>>> try:
... lena = sp.lena()
... except AttributeError:
... from scipy import misc
... lena = misc.lena()
>>> X = lena.reshape((-1, 1)) # We need an (n_samples, n_features) array
>>> k_means = cluster.KMeans(k=5, n_init=1)
>>> k_means.fit(X) # doctest: +ELLIPSIS
KMeans(copy_x=True, init='k-means++', k=5, ...
>>> values = k_means.cluster_centers_.squeeze()
>>> labels = k_means.labels_
>>> lena_compressed = np.choose(labels, values)
>>> lena_compressed.shape = lena.shape
.. list-table::
:class: centered
*
- |lena|
- |lena_compressed|
- |lena_regular|
- |lena_histogram|
*
- Raw image
- K-means quantization
- Equal bins
- Image histogram
Hierarchical agglomerative clustering: Ward
---------------------------------------------
A :ref:`hierarchical_clustering` method is a type of cluster analysis
that aims to build a hierarchy of clusters. In general, the various approaches
of this technique are either:
* **Agglomerative** - `bottom-up` approaches, or
* **Divisive** - `top-down` approaches.
For estimating a large number of clusters, top-down approaches are both
statistically ill-posed and slow, since they start with all observations
in a single cluster, which they split recursively. Agglomerative
hierarchical clustering is a bottom-up approach that successively merges
observations together and is particularly useful when the clusters of interest
are made of only a few observations. *Ward* clustering minimizes a criterion
similar to k-means in a bottom-up fashion. When the number of clusters is large,
it is much more computationally efficient than k-means.
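A minimal sketch of Ward clustering on the iris data follows. Note that it assumes a current scikit-learn version, where the Ward estimator is exposed as `AgglomerativeClustering` with `linkage='ward'`:

```python
from sklearn import cluster, datasets

iris = datasets.load_iris()

# Bottom-up Ward clustering: observations are successively merged so as
# to minimize the increase in within-cluster variance.
ward = cluster.AgglomerativeClustering(n_clusters=3, linkage="ward")
labels = ward.fit_predict(iris.data)
print(labels[:10])
```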
Connectivity-constrained clustering
.....................................
With Ward clustering, it is possible to specify which samples can be
clustered together by giving a connectivity graph. Graphs in scikit-learn
are represented by their adjacency matrix, often stored as a sparse matrix.
This can be useful, for instance, to retrieve connected regions when
clustering an image:
.. image:: ../../auto_examples/cluster/images/plot_lena_ward_segmentation_1.png
:target: ../../auto_examples/cluster/plot_lena_ward_segmentation.html
:scale: 40
:align: right
.. literalinclude:: ../../auto_examples/cluster/plot_lena_ward_segmentation.py
:lines: 24-44
..
>>> from sklearn.feature_extraction.image import grid_to_graph
>>> connectivity = grid_to_graph(*lena.shape)
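To make the effect of the connectivity graph concrete, here is a small self-contained sketch on a synthetic image (an illustration with made-up data, written against the current `AgglomerativeClustering` API): two bright squares have identical intensities, so only the connectivity constraint keeps them in separate clusters.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.feature_extraction.image import grid_to_graph

# A tiny synthetic "image": two bright squares on a dark background.
img = np.zeros((20, 20))
img[2:8, 2:8] = 1.0
img[12:18, 12:18] = 1.0

X = img.reshape(-1, 1)                    # one feature (intensity) per pixel
connectivity = grid_to_graph(*img.shape)  # pixels may only merge with neighbors

ward = AgglomerativeClustering(n_clusters=3, linkage="ward",
                               connectivity=connectivity)
labels = ward.fit_predict(X).reshape(img.shape)

# Each bright square ends up as its own connected cluster; without the
# connectivity constraint the two squares would be merged, since their
# intensities are identical.
print(labels[4, 4] != labels[14, 14])
```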
Feature agglomeration
......................
We have seen that sparsity can be used to mitigate the curse of
dimensionality, *i.e.* the scarcity of observations compared to the
number of features. Another approach is to merge similar features
together: **feature agglomeration**. This approach can be implemented by
clustering in the feature direction, in other words, by clustering the
transposed data.
.. image:: ../../auto_examples/cluster/images/plot_digits_agglomeration_1.png
:target: ../../auto_examples/cluster/plot_digits_agglomeration.html
:align: right
:scale: 57
::
>>> digits = datasets.load_digits()
>>> images = digits.images
>>> X = np.reshape(images, (len(images), -1))
>>> connectivity = grid_to_graph(*images[0].shape)
>>> agglo = cluster.WardAgglomeration(connectivity=connectivity,
... n_clusters=32)
>>> agglo.fit(X) # doctest: +ELLIPSIS
WardAgglomeration(connectivity=...
>>> X_reduced = agglo.transform(X)
>>> X_approx = agglo.inverse_transform(X_reduced)
>>> images_approx = np.reshape(X_approx, images.shape)
.. topic:: `transform` and `inverse_transform` methods
Some estimators expose a `transform` method, for instance to reduce
the dimensionality of the dataset, and an `inverse_transform` method to
map the reduced representation back into the original feature space.
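A round trip through these two methods can be sketched on the digits example above. This assumes a current scikit-learn version, where the `WardAgglomeration` estimator used earlier is exposed as `FeatureAgglomeration`:

```python
import numpy as np
from sklearn import cluster, datasets
from sklearn.feature_extraction.image import grid_to_graph

digits = datasets.load_digits()
images = digits.images
X = np.reshape(images, (len(images), -1))   # (n_samples, 64) pixel features
connectivity = grid_to_graph(*images[0].shape)

# FeatureAgglomeration is the current name of the WardAgglomeration
# estimator used in the example above.
agglo = cluster.FeatureAgglomeration(connectivity=connectivity, n_clusters=32)
X_reduced = agglo.fit_transform(X)             # transform: 64 -> 32 features
X_approx = agglo.inverse_transform(X_reduced)  # back to the original space
print(X.shape, X_reduced.shape, X_approx.shape)
```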
Decompositions: from a signal to components and loadings
===========================================================
.. topic:: **Components and loadings**
If X is our multivariate data, the problem that we are trying to solve
is to rewrite it on a different observational basis: we want to learn
loadings L and a set of components C such that *X = L C*.
Different criteria exist for choosing the components.
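As a quick sanity check of this model, PCA (introduced next) produces exactly such a factorization: the transformed data plays the role of the loadings L and `components_` holds the components C, so up to centering the product recovers the data. A minimal sketch on random data:

```python
import numpy as np
from sklearn import decomposition

rng = np.random.RandomState(0)
X = rng.normal(size=(100, 3))

pca = decomposition.PCA().fit(X)
L = pca.transform(X)        # loadings: one row per observation
C = pca.components_         # components: one row per direction
# Up to centering, X is recovered as the product of loadings and components.
print(np.allclose(X, L.dot(C) + pca.mean_))  # -> True
```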
Principal component analysis: PCA
-----------------------------------
:ref:`PCA` selects the successive components that
explain the maximum variance in the signal.
.. |pca_3d_axis| image:: ../../auto_examples/decomposition/images/plot_pca_3d_1.png
:target: ../../auto_examples/decomposition/plot_pca_3d.html
:scale: 70
.. |pca_3d_aligned| image:: ../../auto_examples/decomposition/images/plot_pca_3d_2.png
:target: ../../auto_examples/decomposition/plot_pca_3d.html
:scale: 70
.. rst-class:: centered
|pca_3d_axis| |pca_3d_aligned|
The point cloud spanned by the observations above is very flat in one
direction: one of the three univariate features can almost be exactly
computed using the other two. PCA finds the directions in which the data
is not *flat*.
When used to *transform* data, PCA can reduce the dimensionality of the
data by projecting on a principal subspace.
.. np.random.seed(0)
::
>>> # Create a signal with only 2 useful dimensions
>>> x1 = np.random.normal(size=100)
>>> x2 = np.random.normal(size=100)
>>> x3 = x1 + x2
>>> X = np.c_[x1, x2, x3]
>>> from sklearn import decomposition
>>> pca = decomposition.PCA()
>>> pca.fit(X)
PCA(copy=True, n_components=None, whiten=False)
>>> print pca.explained_variance_ # doctest: +SKIP
[ 2.18565811e+00 1.19346747e+00 8.43026679e-32]
>>> # As we can see, only the 2 first components are useful
>>> pca.n_components = 2
>>> X_reduced = pca.fit_transform(X)
>>> X_reduced.shape
(100, 2)
.. Eigenfaces here?
Independent Component Analysis: ICA
-------------------------------------
:ref:`ICA` selects components so that the distribution of their loadings carries
a maximum amount of independent information. It is able to recover
**non-Gaussian** independent signals:
.. image:: ../../auto_examples/decomposition/images/plot_ica_blind_source_separation_1.png
:target: ../../auto_examples/decomposition/plot_ica_blind_source_separation.html
:scale: 70
:align: center
.. np.random.seed(0)
::
>>> # Generate sample data
>>> time = np.linspace(0, 10, 2000)
>>> s1 = np.sin(2 * time) # Signal 1 : sinusoidal signal
>>> s2 = np.sign(np.sin(3 * time)) # Signal 2 : square signal
>>> S = np.c_[s1, s2]
>>> S += 0.2 * np.random.normal(size=S.shape) # Add noise
>>> S /= S.std(axis=0) # Standardize data
>>> # Mix data
>>> A = np.array([[1, 1], [0.5, 2]]) # Mixing matrix
>>> X = np.dot(S, A.T) # Generate observations
>>> # Compute ICA
>>> ica = decomposition.FastICA()
>>> S_ = ica.fit(X).transform(X) # Get the estimated sources
>>> A_ = ica.get_mixing_matrix() # Get estimated mixing matrix
>>> np.allclose(X, np.dot(S_, A_.T))
True
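In current scikit-learn versions, `get_mixing_matrix()` no longer exists: the estimated mixing matrix is exposed as the `mixing_` attribute, and `fit_transform` replaces the `fit(X).transform(X)` chain. A sketch of the same blind-source-separation example against the current API (the reconstruction check now also accounts for the centering stored in `mean_`):

```python
import numpy as np
from sklearn import decomposition

rng = np.random.RandomState(0)
time = np.linspace(0, 10, 2000)
s1 = np.sin(2 * time)                  # Signal 1: sinusoidal signal
s2 = np.sign(np.sin(3 * time))         # Signal 2: square signal
S = np.c_[s1, s2]
S += 0.2 * rng.normal(size=S.shape)    # Add noise
S /= S.std(axis=0)                     # Standardize data
A = np.array([[1, 1], [0.5, 2]])       # Mixing matrix
X = np.dot(S, A.T)                     # Observed mixtures

ica = decomposition.FastICA(n_components=2, random_state=0)
S_ = ica.fit_transform(X)              # Estimated sources
# The mixing model is recovered up to centering:
print(np.allclose(X, np.dot(S_, ica.mixing_.T) + ica.mean_))
```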
|