|
============================================================
Unsupervised learning: seeking representations of the data
============================================================
Clustering: grouping observations together
============================================
.. topic:: The problem solved in clustering

    Given the iris dataset, if we knew that there were 3 types of iris, but
    did not have access to a taxonomist to label them, we could try a
    **clustering task**: split the observations into well-separated groups
    called *clusters*.
::
>>> # Set the PRNG
>>> import numpy as np
>>> np.random.seed(1)
K-means clustering
-------------------
Note that many different clustering criteria and associated algorithms exist.
The simplest clustering algorithm is :ref:`k_means`.
::
>>> from sklearn import cluster, datasets
>>> X_iris, y_iris = datasets.load_iris(return_X_y=True)
>>> k_means = cluster.KMeans(n_clusters=3)
>>> k_means.fit(X_iris)
KMeans(n_clusters=3)
>>> print(k_means.labels_[::10])
[1 1 1 1 1 2 0 0 0 0 2 2 2 2 2]
>>> print(y_iris[::10])
[0 0 0 0 0 1 1 1 1 1 2 2 2 2 2]
.. figure:: /auto_examples/cluster/images/sphx_glr_plot_cluster_iris_001.png
:target: ../../auto_examples/cluster/plot_cluster_iris.html
:scale: 63
.. warning::

    There is absolutely no guarantee of recovering a ground truth. First,
    choosing the right number of clusters is hard. Second, the algorithm
    is sensitive to initialization and can fall into local minima,
    although scikit-learn employs several tricks to mitigate this issue.

    For instance, in the image above, we can observe the difference between
    the ground truth (bottom-right figure) and different clusterings. We do
    not recover the expected labels, either because the number of clusters
    was chosen too large (top-left figure) or because of a bad initialization
    (bottom-left figure).

    **It is therefore important to not over-interpret clustering results.**
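Because cluster labels are arbitrary (cluster ``1`` above happens to match
class ``0``), comparing a clustering against ground truth calls for a
permutation-invariant score. A minimal sketch using the adjusted Rand index
(the exact value depends on the k-means initialization)::

    >>> from sklearn.metrics import adjusted_rand_score
    >>> adjusted_rand_score(y_iris, k_means.labels_)  # doctest: +SKIP
    0.73...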
.. topic:: **Application example: vector quantization**

    Clustering in general, and KMeans in particular, can be seen as a way
    of choosing a small number of exemplars to compress the information.
    The problem is sometimes known as
    `vector quantization <https://en.wikipedia.org/wiki/Vector_quantization>`_.
    For instance, this can be used to posterize an image::
        >>> import scipy as sp
        >>> try:  # SciPy >= 1.10
        ...     face = sp.datasets.face(gray=True)
        ... except AttributeError:
        ...     from scipy import misc
        ...     face = misc.face(gray=True)
        >>> X = face.reshape((-1, 1))  # We need an (n_samples, n_features) array
        >>> k_means = cluster.KMeans(n_clusters=5, n_init=1)
        >>> k_means.fit(X)
        KMeans(n_clusters=5, n_init=1)
        >>> values = k_means.cluster_centers_.squeeze()
        >>> labels = k_means.labels_
        >>> face_compressed = np.choose(labels, values)
        >>> face_compressed.shape = face.shape
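    A hypothetical back-of-the-envelope estimate of the compression gain
    (the 8-bit grayscale figure is our assumption, not part of the example)::

        >>> n_pixels = X.shape[0]
        >>> raw_bits = 8 * n_pixels  # original 8-bit grayscale image
        >>> # 5 clusters fit in 3 bits per pixel, plus the 5 float64 centroids
        >>> compressed_bits = 3 * n_pixels + 5 * 64
        >>> raw_bits / compressed_bits  # doctest: +SKIP
        2.66...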
**Raw image**
.. figure:: /auto_examples/cluster/images/sphx_glr_plot_face_compress_001.png
:target: ../../auto_examples/cluster/plot_face_compress.html
**K-means quantization**
.. figure:: /auto_examples/cluster/images/sphx_glr_plot_face_compress_004.png
:target: ../../auto_examples/cluster/plot_face_compress.html
**Equal bins**
.. figure:: /auto_examples/cluster/images/sphx_glr_plot_face_compress_002.png
:target: ../../auto_examples/cluster/plot_face_compress.html
Hierarchical agglomerative clustering: Ward
---------------------------------------------
A :ref:`hierarchical_clustering` method is a type of cluster analysis
that aims to build a hierarchy of clusters. In general, the various approaches
of this technique are either:
* **Agglomerative** - bottom-up approaches: each observation starts in its
  own cluster, and clusters are iteratively merged in such a way as to
  minimize a *linkage* criterion. This approach is particularly interesting
  when the clusters of interest are made of only a few observations. When
  the number of clusters is large, it is much more computationally efficient
  than k-means (see the sketch after this list).

* **Divisive** - top-down approaches: all observations start in one
  cluster, which is iteratively split as one moves down the hierarchy.
  For estimating large numbers of clusters, this approach is both slow (due
  to all observations starting as one cluster, which it splits recursively)
  and statistically ill-posed.
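As a minimal sketch of the agglomerative variant, we can run Ward clustering
on the ``X_iris`` data loaded earlier (the three-cluster choice simply mirrors
the k-means example above)::

    >>> from sklearn.cluster import AgglomerativeClustering
    >>> agg = AgglomerativeClustering(n_clusters=3, linkage='ward')
    >>> labels = agg.fit_predict(X_iris)  # one label in {0, 1, 2} per observation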
Connectivity-constrained clustering
.....................................
With agglomerative clustering, it is possible to specify which samples can be
clustered together by giving a connectivity graph. Graphs in scikit-learn
are represented by their adjacency matrix. Often, a sparse matrix is used.
This can be useful, for instance, to retrieve connected regions (sometimes
also referred to as connected components) when clustering an image.
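To make the adjacency-matrix representation concrete, here is a toy sketch on
a 2x2 pixel grid; with 4-connectivity, each pixel is linked to itself and to
its horizontal and vertical neighbors (the output shown is indicative)::

    >>> from sklearn.feature_extraction import grid_to_graph
    >>> adj = grid_to_graph(2, 2)  # 4 pixels on a 2x2 grid
    >>> adj.toarray()  # doctest: +SKIP
    array([[1, 1, 1, 0],
           [1, 1, 0, 1],
           [1, 0, 1, 1],
           [0, 1, 1, 1]])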
.. image:: /auto_examples/cluster/images/sphx_glr_plot_coin_ward_segmentation_001.png
:target: ../../auto_examples/cluster/plot_coin_ward_segmentation.html
:scale: 40
:align: center
::

    >>> from skimage.data import coins
    >>> from scipy.ndimage import gaussian_filter
    >>> from skimage.transform import rescale
    >>> rescaled_coins = rescale(
    ...     gaussian_filter(coins(), sigma=2),
    ...     0.2, mode='reflect', anti_aliasing=False
    ... )

We need a vectorized version of the image. ``rescaled_coins`` is a down-scaled
version of the coins image to speed up the process::

    >>> X = np.reshape(rescaled_coins, (-1, 1))

Define the graph structure of the data, with pixels connected to their
neighbors::

    >>> from sklearn.feature_extraction import grid_to_graph
    >>> connectivity = grid_to_graph(*rescaled_coins.shape)
>>> n_clusters = 27 # number of regions
>>> from sklearn.cluster import AgglomerativeClustering
>>> ward = AgglomerativeClustering(n_clusters=n_clusters, linkage='ward',
... connectivity=connectivity)
>>> ward.fit(X)
AgglomerativeClustering(connectivity=..., n_clusters=27)
>>> label = np.reshape(ward.labels_, rescaled_coins.shape)
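As a quick sanity check, reusing the fitted ``ward`` object and the ``label``
image from above, the segmentation should contain exactly ``n_clusters``
distinct regions::

    >>> len(np.unique(label))
    27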
Feature agglomeration
......................
We have seen that sparsity could be used to mitigate the curse of
dimensionality, *i.e.*, an insufficient number of observations compared to the
number of features. Another approach is to merge similar
features: **feature agglomeration**. This approach can be implemented by
clustering in the feature direction, in other words by clustering the
transposed data.
.. image:: /auto_examples/cluster/images/sphx_glr_plot_digits_agglomeration_001.png
:target: ../../auto_examples/cluster/plot_digits_agglomeration.html
:align: center
:scale: 57
::
>>> digits = datasets.load_digits()
>>> images = digits.images
>>> X = np.reshape(images, (len(images), -1))
>>> connectivity = grid_to_graph(*images[0].shape)
>>> agglo = cluster.FeatureAgglomeration(connectivity=connectivity,
... n_clusters=32)
>>> agglo.fit(X)
FeatureAgglomeration(connectivity=..., n_clusters=32)
>>> X_reduced = agglo.transform(X)
>>> X_approx = agglo.inverse_transform(X_reduced)
>>> images_approx = np.reshape(X_approx, images.shape)
.. topic:: ``transform`` and ``inverse_transform`` methods
Some estimators expose a ``transform`` method, for instance to reduce
the dimensionality of the dataset.
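    For instance, continuing the feature-agglomeration session above,
    ``transform`` merges the 64 pixels of each digit into 32 agglomerated
    features, and ``inverse_transform`` maps them back (a quick check of the
    shapes)::

        >>> X.shape, X_reduced.shape, X_approx.shape
        ((1797, 64), (1797, 32), (1797, 64))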
Decompositions: from a signal to components and loadings
===========================================================
.. topic:: **Components and loadings**

    If X is our multivariate data, then the problem that we are trying to solve
    is to rewrite it on a different observational basis: we want to learn
    loadings L and a set of components C such that *X = L C*.
    Different criteria exist for choosing the components.
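    As a purely illustrative, shape-level sketch (the variable names below are
    our own, not part of any estimator API)::

        >>> n_samples, n_features, n_components = 100, 3, 2
        >>> C = np.random.normal(size=(n_components, n_features))  # components
        >>> L = np.random.normal(size=(n_samples, n_components))   # loadings
        >>> X_lc = L @ C  # X = L C: each observation is a mix of the components
        >>> X_lc.shape
        (100, 3)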
Principal component analysis: PCA
-----------------------------------
:ref:`PCA` selects the successive components that explain the maximum variance in the
signal. Let's create a synthetic 3-dimensional dataset.
.. np.random.seed(0)
::
>>> # Create a signal with only 2 useful dimensions
>>> x1 = np.random.normal(size=(100, 1))
>>> x2 = np.random.normal(size=(100, 1))
>>> x3 = x1 + x2
>>> X = np.concatenate([x1, x2, x3], axis=1)
The point cloud spanned by the observations above is very flat in one
direction: one of the three univariate features (the z-axis) can almost be
computed exactly from the other two.
.. plot::
:context: close-figs
:align: center
>>> import matplotlib.pyplot as plt
>>> fig = plt.figure()
>>> ax = fig.add_subplot(111, projection='3d')
>>> ax.scatter(X[:, 0], X[:, 1], X[:, 2])
<...>
>>> _ = ax.set(xlabel="x", ylabel="y", zlabel="z")
PCA finds the directions in which the data is not *flat*.
::
>>> from sklearn import decomposition
>>> pca = decomposition.PCA()
>>> pca.fit(X)
PCA()
>>> print(pca.explained_variance_) # doctest: +SKIP
[ 2.18565811e+00 1.19346747e+00 8.43026679e-32]
Looking at the explained variance, we see that only the first two components
are useful. PCA can be used to reduce dimensionality while preserving
most of the information. It will project the data on the principal subspace.
::
>>> pca.set_params(n_components=2)
PCA(n_components=2)
>>> X_reduced = pca.fit_transform(X)
>>> X_reduced.shape
(100, 2)
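Since the discarded third direction carries (numerically) zero variance,
projecting on the first two components loses essentially nothing; a quick
sanity check, reusing the fitted ``pca``::

    >>> X_back = pca.inverse_transform(X_reduced)
    >>> np.allclose(X, X_back)
    True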
.. Eigenfaces here?
Independent Component Analysis: ICA
-------------------------------------
:ref:`ICA` selects components so that the distribution of their loadings carries
a maximum amount of independent information. It is able to recover
**non-Gaussian** independent signals:
.. image:: /auto_examples/decomposition/images/sphx_glr_plot_ica_blind_source_separation_001.png
:target: ../../auto_examples/decomposition/plot_ica_blind_source_separation.html
:scale: 70
:align: center
.. np.random.seed(0)
::
>>> # Generate sample data
>>> import numpy as np
>>> from scipy import signal
>>> time = np.linspace(0, 10, 2000)
>>> s1 = np.sin(2 * time)  # Signal 1: sinusoidal signal
>>> s2 = np.sign(np.sin(3 * time))  # Signal 2: square signal
>>> s3 = signal.sawtooth(2 * np.pi * time)  # Signal 3: sawtooth signal
>>> S = np.c_[s1, s2, s3]
>>> S += 0.2 * np.random.normal(size=S.shape) # Add noise
>>> S /= S.std(axis=0) # Standardize data
>>> # Mix data
>>> A = np.array([[1, 1, 1], [0.5, 2, 1], [1.5, 1, 2]]) # Mixing matrix
>>> X = np.dot(S, A.T) # Generate observations
>>> # Compute ICA
>>> ica = decomposition.FastICA()
>>> S_ = ica.fit_transform(X) # Get the estimated sources
>>> A_ = ica.mixing_.T
>>> np.allclose(X, np.dot(S_, A_) + ica.mean_)
True
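ICA recovers the sources only up to ordering, sign, and scale. One hedged way
to check the recovery (a sketch; the 0.9 threshold is an arbitrary choice and
the result depends on the noise level) is to correlate each true source with
the estimated ones::

    >>> corr = np.abs(np.corrcoef(S.T, S_.T))[:3, 3:]  # true vs. estimated
    >>> bool((corr.max(axis=1) > 0.9).all())  # doctest: +SKIP
    True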
|