.. _clustering:
==========
Clustering
==========
`Clustering <http://en.wikipedia.org/wiki/Cluster_analysis>`__ of
unlabeled data can be performed with the module :mod:`sklearn.cluster`.
Each clustering algorithm comes in two variants: a class that implements
the `fit` method to learn the clusters on train data, and a function
that, given train data, returns an array of integer labels corresponding
to the different clusters. For the class, the labels over the training
data can be found in the `labels_` attribute.
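For instance, a minimal sketch of the class-based usage (the toy data and the
``n_clusters`` parameter name are illustrative assumptions; the parameter is
named ``k`` in some older releases)::
>>> import numpy as np
>>> from sklearn.cluster import KMeans
>>> X = np.array([[1., 1.], [1.5, 1.], [10., 10.], [10.5, 10.]])
>>> km = KMeans(n_clusters=2).fit(X)    # learn the clusters on train data
>>> len(np.unique(km.labels_))          # one integer label per training sample
2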
.. currentmodule:: sklearn.cluster
.. topic:: Input data
One important thing to note is that the algorithms implemented in
this module take different kinds of matrices as input. On one hand,
:class:`MeanShift` and :class:`KMeans` take data matrices of shape
[n_samples, n_features]. These can be obtained from the classes in
the :mod:`sklearn.feature_extraction` module. On the other hand,
:class:`AffinityPropagation` and :class:`SpectralClustering` take
similarity matrices of shape [n_samples, n_samples]. These can be
obtained from the functions in the :mod:`sklearn.metrics.pairwise`
module. In other words, :class:`MeanShift` and :class:`KMeans` work
with points in a vector space, whereas :class:`AffinityPropagation`
and :class:`SpectralClustering` can work with arbitrary objects, as
long as a similarity measure exists for such objects.
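As a hedged illustration (the Gaussian kernel and its unit bandwidth are
arbitrary choices, not a recommendation), a similarity matrix can be derived
from a data matrix as follows::
>>> import numpy as np
>>> from sklearn.metrics.pairwise import euclidean_distances
>>> X = np.array([[0., 0.], [0., 1.], [5., 5.]])   # shape [n_samples, n_features]
>>> D = euclidean_distances(X)                     # shape [n_samples, n_samples]
>>> S = np.exp(-D ** 2 / 2.)                       # distances turned into similarities
>>> S.shape
(3, 3)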
Overview of clustering methods
===============================
.. figure:: ../auto_examples/cluster/images/plot_cluster_comparison_1.png
:target: ../auto_examples/cluster/plot_cluster_comparison.html
:align: center
:scale: 50
A comparison of the clustering algorithms in scikit-learn
.. list-table::
:header-rows: 1
:widths: 14 15 19 25 20
* - Method name
- Parameters
- Scalability
- Usecase
- Geometry (metric used)
* - :ref:`K-Means <k_means>`
- number of clusters
- Very large `n_samples`, medium `n_clusters` with
:ref:`MiniBatch code <mini_batch_kmeans>`
- General-purpose, even cluster size, flat geometry, not too many clusters
- Distances between points
* - :ref:`Affinity propagation <affinity_propagation>`
- damping, sample preference
- Not scalable with n_samples
- Many clusters, uneven cluster size, non-flat geometry
- Graph distance (e.g. nearest-neighbor graph)
* - :ref:`Mean-shift <mean_shift>`
- bandwidth
- Not scalable with n_samples
- Many clusters, uneven cluster size, non-flat geometry
- Distances between points
* - :ref:`Spectral clustering <spectral_clustering>`
- number of clusters
- Medium `n_samples`, small `n_clusters`
- Few clusters, even cluster size, non-flat geometry
- Graph distance (e.g. nearest-neighbor graph)
* - :ref:`Hierarchical clustering <hierarchical_clustering>`
- number of clusters
- Large `n_samples` and `n_clusters`
- Many clusters, possibly connectivity constraints
- Distances between points
* - :ref:`DBSCAN <dbscan>`
- neighborhood size
- Very large `n_samples`, medium `n_clusters`
- Non-flat geometry, uneven cluster sizes
- Distances between nearest points
* - :ref:`Gaussian mixtures <mixture>`
- many
- Not scalable
- Flat geometry, good for density estimation
- Mahalanobis distances to centers
Non-flat geometry clustering is useful when the clusters have a specific
shape, i.e. a non-flat manifold, and the standard euclidean distance is
not the right metric. This case arises in the two top rows of the figure
above.
Gaussian mixture models, useful for clustering, are described in
:ref:`another chapter of the documentation <mixture>` dedicated to
mixture models. KMeans can be seen as a special case of a Gaussian mixture
model with equal covariance per component.
.. _k_means:
K-means
=======
The :class:`KMeans` algorithm clusters data by trying to separate samples
in n groups of equal variance, minimizing a criterion known as the
'inertia' of the groups. This algorithm requires the number of clusters to
be specified. It scales well to a large number of samples; however, its
results may depend on the initialization. As a result, the computation is
often run several times, with different initializations of the centroids.
K-means is often referred to as Lloyd's algorithm. After initialization,
k-means consists of looping between two major steps. First, the Voronoi diagram
of the points is calculated using the current centroids; each segment of the
Voronoi diagram becomes a separate cluster. Second, the centroids are updated
to the mean of each segment. The algorithm then repeats these steps until a stopping
criterion is fulfilled. Usually, as in this implementation, the algorithm
stops when the relative change in the results between iterations is less than
the given tolerance value.
A parameter can be given to allow K-means to be run in parallel, called
`n_jobs`. Giving this parameter a positive value uses that many processors
(default=1). A value of -1 uses all processors, with -2 using one less, and so
on. Parallelization generally speeds up computation at the cost of memory (in
this case, multiple copies of centroids need to be stored, one for each job).
K-means can be used for vector quantization. This is achieved using the
`transform` method of a trained :class:`KMeans` model.
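As a sketch of this usage (toy data; the ``n_clusters`` parameter name is
assumed, it is called ``k`` in some older releases), ``transform`` maps each
sample to its distances to the learned centroids::
>>> import numpy as np
>>> from sklearn.cluster import KMeans
>>> X = np.array([[0., 0.], [0.2, 0.], [5., 5.], [5., 5.2]])
>>> km = KMeans(n_clusters=2).fit(X)
>>> km.transform(X).shape    # distance of each sample to each of the 2 centroids
(4, 2)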
.. topic:: Examples:
* :ref:`example_cluster_plot_kmeans_digits.py`: Clustering handwritten digits
.. _mini_batch_kmeans:
Mini Batch K-Means
------------------
:class:`MiniBatchKMeans` is a variant of the :class:`KMeans` algorithm
that uses mini-batches, i.e. random subsets of the dataset, to compute the
centroids. Although :class:`MiniBatchKMeans` converges faster than
:class:`KMeans`, the quality of the results, measured by the inertia (the sum
of squared distances of each point to its nearest centroid), is not as good as
that of :class:`KMeans`.
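A minimal sketch of the mini-batch variant (parameter names and toy data are
illustrative assumptions)::
>>> import numpy as np
>>> from sklearn.cluster import MiniBatchKMeans
>>> X = np.random.RandomState(0).rand(1000, 2)                  # toy data
>>> mbk = MiniBatchKMeans(n_clusters=8, batch_size=100).fit(X)  # centroids from mini-batches
>>> mbk.cluster_centers_.shape
(8, 2)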
.. figure:: ../auto_examples/cluster/images/plot_mini_batch_kmeans_1.png
:target: ../auto_examples/cluster/plot_mini_batch_kmeans.html
:align: center
:scale: 100
.. topic:: Examples:
* :ref:`example_cluster_plot_mini_batch_kmeans.py`: Comparison of KMeans and
MiniBatchKMeans
* :ref:`example_document_clustering.py`: Document clustering using sparse
MiniBatchKMeans
.. topic:: References:
* `"Web Scale K-Means clustering"
<http://www.eecs.tufts.edu/~dsculley/papers/fastkmeans.pdf>`_
D. Sculley, *Proceedings of the 19th international conference on World
wide web* (2010)
.. _affinity_propagation:
Affinity propagation
====================
:class:`AffinityPropagation` clusters data by diffusion in the similarity
matrix. This algorithm automatically sets its number of clusters. It
has difficulty scaling to thousands of samples.
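A hedged sketch following the input convention described above, fitting on a
precomputed similarity matrix (negated squared Euclidean distances are used as
an arbitrary choice of similarity; in recent releases the estimator can also be
fit on a data matrix directly)::
>>> import numpy as np
>>> from sklearn.cluster import AffinityPropagation
>>> from sklearn.metrics.pairwise import euclidean_distances
>>> X = np.array([[0., 0.], [0., 1.], [5., 5.], [5., 6.]])
>>> S = -euclidean_distances(X, squared=True)   # similarity: negated squared distance
>>> af = AffinityPropagation().fit(S)           # fit on the similarity matrix
>>> af.labels_.shape                            # one cluster label per sample
(4,)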
.. figure:: ../auto_examples/cluster/images/plot_affinity_propagation_1.png
:target: ../auto_examples/cluster/plot_affinity_propagation.html
:align: center
:scale: 50
.. topic:: Examples:
* :ref:`example_cluster_plot_affinity_propagation.py`: Affinity
Propagation on a synthetic 2D dataset with 3 classes.
* :ref:`example_applications_plot_stock_market.py`: Affinity Propagation on
financial time series to find groups of companies
.. _mean_shift:
Mean Shift
==========
:class:`MeanShift` clusters data by estimating *blobs* in a smooth
density of samples. This algorithm automatically sets its number
of clusters. It has difficulty scaling to thousands of samples.
The utility function :func:`estimate_bandwidth` can be used to estimate
a suitable bandwidth for :class:`MeanShift` from the data.
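A minimal sketch (the ``quantile`` value and the toy data are arbitrary
assumptions)::
>>> import numpy as np
>>> from sklearn.cluster import MeanShift, estimate_bandwidth
>>> X = np.random.RandomState(0).randn(200, 2)        # toy data
>>> bandwidth = estimate_bandwidth(X, quantile=0.3)   # guess a bandwidth from the data
>>> ms = MeanShift(bandwidth=bandwidth).fit(X)
>>> ms.labels_.shape                                  # one cluster label per sample
(200,)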
.. figure:: ../auto_examples/cluster/images/plot_mean_shift_1.png
:target: ../auto_examples/cluster/plot_mean_shift.html
:align: center
:scale: 50
.. topic:: Examples:
* :ref:`example_cluster_plot_mean_shift.py`: Mean Shift clustering
on a synthetic 2D dataset with 3 classes.
.. _spectral_clustering:
Spectral clustering
===================
:class:`SpectralClustering` performs a low-dimensional embedding of the
affinity matrix between samples, followed by K-means clustering in the
low-dimensional space. It is especially efficient if the affinity matrix is
sparse and the `pyamg <http://code.google.com/p/pyamg/>`_ module is
installed. SpectralClustering requires the number of clusters to be
specified. It works well for a small number of clusters but is not
advised when using many clusters.
For two clusters, it solves a convex relaxation of the `normalised
cuts <http://www.cs.berkeley.edu/~malik/papers/SM-ncut.pdf>`_ problem on
the similarity graph: cutting the graph in two so that the weight of the
edges cut is small compared to the weight of the edges inside each
cluster. This criterion is especially interesting when working on images:
graph vertices are pixels, and the weights of the edges of the similarity
graph are a function of the image gradient.
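A hedged sketch of the workflow described above; the exact constructor
signature varies across releases (``n_clusters`` is assumed here, ``k`` in
some older releases), as does whether a precomputed affinity matrix must be
requested explicitly rather than passed to ``fit`` directly::
>>> import numpy as np
>>> from sklearn.cluster import SpectralClustering
>>> from sklearn.metrics.pairwise import euclidean_distances
>>> X = np.array([[0., 0.], [0., 1.], [5., 5.], [5., 6.]])
>>> S = np.exp(-euclidean_distances(X, squared=True))   # dense affinity matrix
>>> sc = SpectralClustering(n_clusters=2).fit(S)        # cluster the affinity matrix
>>> sc.labels_.shape
(4,)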
.. |noisy_img| image:: ../auto_examples/cluster/images/plot_segmentation_toy_1.png
:target: ../auto_examples/cluster/plot_segmentation_toy.html
:scale: 50
.. |segmented_img| image:: ../auto_examples/cluster/images/plot_segmentation_toy_2.png
:target: ../auto_examples/cluster/plot_segmentation_toy.html
:scale: 50
.. centered:: |noisy_img| |segmented_img|
.. warning:: Shapeless isotropic data
When the data is really shapeless (i.e. generated from a random
distribution with no clusters), the spectral-clustering problem is
ill-conditioned: the different choices are almost equivalent, and
the spectral clustering solver chooses an arbitrary one, putting
the first sample alone in one bin.
.. topic:: Examples:
* :ref:`example_cluster_plot_segmentation_toy.py`: Segmenting objects
from a noisy background using spectral clustering.
* :ref:`example_cluster_plot_lena_segmentation.py`: Spectral clustering
to split the image of lena into regions.
.. topic:: References:
* `"A Tutorial on Spectral Clustering"
<http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.165.9323>`_
Ulrike von Luxburg, 2007
* `"Normalized cuts and image segmentation"
<http://citeseer.ist.psu.edu/viewdoc/summary?doi=10.1.1.160.2324>`_
Jianbo Shi, Jitendra Malik, 2000
* `"A Random Walks View of Spectral Segmentation"
<http://citeseer.ist.psu.edu/viewdoc/summary?doi=10.1.1.33.1501>`_
Marina Meila, Jianbo Shi, 2001
* `"On Spectral Clustering: Analysis and an algorithm"
<http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.19.8100>`_
Andrew Y. Ng, Michael I. Jordan, Yair Weiss, 2001
.. _hierarchical_clustering:
Hierarchical clustering
=======================
Hierarchical clustering is a general family of clustering algorithms that
build nested clusters by merging clusters successively. This hierarchy of
clusters is represented as a tree (or dendrogram). The root of the tree is
the unique cluster that gathers all the samples, the leaves being the
clusters with only one sample. See the `Wikipedia page
<http://en.wikipedia.org/wiki/Hierarchical_clustering>`_ for more
details.
The :class:`Ward` object performs a hierarchical clustering based on
the Ward algorithm, a variance-minimizing approach. At each
step, it minimizes the sum of squared differences within all clusters
(inertia criterion).
This algorithm can scale to a large number of samples when it is used jointly
with a connectivity matrix, but is computationally expensive when no
connectivity constraints are added between samples: at each step it considers
all the possible merges.
Adding connectivity constraints
-------------------------------
An interesting aspect of the :class:`Ward` object is that connectivity
constraints can be added to this algorithm (only adjacent clusters can be
merged together), through a connectivity matrix that defines, for each
sample, the neighboring samples following a given structure of the data. For
instance, in the swiss-roll example below, the connectivity constraints
forbid the merging of points that are not adjacent on the swiss roll, and
thus avoid forming clusters that extend across overlapping folds of the
roll.
.. |unstructured| image:: ../auto_examples/cluster/images/plot_ward_structured_vs_unstructured_1.png
:target: ../auto_examples/cluster/plot_ward_structured_vs_unstructured.html
:scale: 49
.. |structured| image:: ../auto_examples/cluster/images/plot_ward_structured_vs_unstructured_2.png
:target: ../auto_examples/cluster/plot_ward_structured_vs_unstructured.html
:scale: 49
.. centered:: |unstructured| |structured|
The connectivity constraints are imposed via a connectivity matrix: a
scipy sparse matrix that has elements only at the intersection of a row
and a column with indices of the dataset that should be connected. This
matrix can be constructed from a priori information: for instance, you
may wish to cluster web pages by only merging pages with a link pointing
from one to another. It can also be learned from the data, for instance
using :func:`sklearn.neighbors.kneighbors_graph` to restrict
merging to nearest neighbors as in the :ref:`swiss roll
<example_cluster_plot_ward_structured_vs_unstructured.py>` example, or
using :func:`sklearn.feature_extraction.image.grid_to_graph` to
enable only the merging of neighboring pixels of an image, as in the
:ref:`Lena <example_cluster_plot_lena_ward_segmentation.py>` example.
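A hedged sketch of combining :class:`Ward` (as documented here; later releases
expose the same functionality under a different estimator name) with a
k-nearest-neighbors connectivity matrix; the number of neighbors and clusters
are arbitrary assumptions::
>>> import numpy as np
>>> from sklearn.cluster import Ward
>>> from sklearn.neighbors import kneighbors_graph
>>> X = np.random.RandomState(0).rand(100, 2)
>>> connectivity = kneighbors_graph(X, 5)    # sparse matrix of nearest-neighbor links
>>> ward = Ward(n_clusters=3, connectivity=connectivity).fit(X)
>>> ward.labels_.shape                       # one cluster label per sample
(100,)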
.. topic:: Examples:
* :ref:`example_cluster_plot_lena_ward_segmentation.py`: Ward clustering
to split the image of lena into regions.
* :ref:`example_cluster_plot_ward_structured_vs_unstructured.py`: Example of
Ward algorithm on a swiss-roll, comparison of structured approaches
versus unstructured approaches.
* :ref:`example_cluster_plot_feature_agglomeration_vs_univariate_selection.py`:
Example of dimensionality reduction with feature agglomeration based on
Ward hierarchical clustering.
.. _dbscan:
DBSCAN
======
The :class:`DBSCAN` algorithm clusters data by finding core points which have
many neighbours within a given radius. After a core point is found, the cluster
is expanded by adding its neighbours to the current cluster and recursively
checking if any of them are core points. Formally, a point is considered a core
point if more than min_points points lie within a distance eps of it. This is
shown in the figure below, where the color indicates cluster membership and
large circles indicate core points found by the algorithm. Moreover, the
algorithm can detect outliers, indicated by black points below. The outliers
are defined as points which do not belong to any current cluster and do not
have enough close neighbours to start a new cluster.
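A minimal sketch (the ``eps`` and ``min_samples`` values and the toy data are
illustrative assumptions; ``min_samples``, corresponding to the min_points of
the description above, is the assumed parameter name)::
>>> import numpy as np
>>> from sklearn.cluster import DBSCAN
>>> X = np.array([[1., 1.], [1.1, 1.], [0.9, 1.],
...               [8., 8.], [8., 8.1], [25., 80.]])
>>> db = DBSCAN(eps=0.5, min_samples=2).fit(X)
>>> db.labels_.shape          # one label per sample; outliers are labelled -1
(6,)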
.. |dbscan_results| image:: ../auto_examples/cluster/images/plot_dbscan_1.png
:target: ../auto_examples/cluster/plot_dbscan.html
:scale: 50
.. centered:: |dbscan_results|
.. topic:: Examples:
* :ref:`example_cluster_plot_dbscan.py`: Clustering synthetic data with DBSCAN
.. topic:: References:
* "A Density-Based Algorithm for Discovering Clusters in Large Spatial Databases
with Noise"
Ester, M., H. P. Kriegel, J. Sander, and X. Xu,
In Proceedings of the 2nd International Conference on Knowledge Discovery
and Data Mining, Portland, OR, AAAI Press, pp. 226–231. 1996
.. _clustering_evaluation:
Clustering performance evaluation
=================================
Evaluating the performance of a clustering algorithm is not as trivial as
counting the number of errors or computing the precision and recall of a
supervised classification algorithm. In particular, any evaluation metric
should not take the absolute values of the cluster labels into account,
but rather whether the clustering defines separations of the data similar
to some ground truth set of classes, or satisfies some assumption such that
members of the same class are more similar to each other than to members of
different classes according to some similarity metric.
.. currentmodule:: sklearn.metrics
Inertia
-------
Presentation and usage
~~~~~~~~~~~~~~~~~~~~~~
TODO: factorize inertia computation out of kmeans and then write me!
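Until then, as a hedged sketch, the inertia of a fitted :class:`KMeans` model
can be read off its ``inertia_`` attribute (toy data and the ``n_clusters``
parameter name are illustrative assumptions)::
>>> import numpy as np
>>> from sklearn.cluster import KMeans
>>> X = np.array([[0., 0.], [0.1, 0.], [5., 5.], [5.1, 5.]])
>>> km = KMeans(n_clusters=2).fit(X)
>>> km.inertia_ < 0.1     # sum of squared distances to the closest centroid
True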
Advantages
~~~~~~~~~~
- No need for the ground truth knowledge of the "real" classes.
Drawbacks
~~~~~~~~~
- Inertia makes the assumption that clusters are convex and isotropic,
which is not always the case, especially if the clusters are manifolds
with irregular shapes: for instance, inertia is a useless metric for
evaluating a clustering algorithm that tries to identify nested circles on a 2D plane.
- Inertia is not a normalized metric: we just know that lower values are
better and that it is bounded below by zero. One potential solution would be to adjust
inertia for random clustering (assuming the number of ground truth classes
is known).
Adjusted Rand index
-------------------
Presentation and usage
~~~~~~~~~~~~~~~~~~~~~~
Given the knowledge of the ground truth class assignments ``labels_true``
and our clustering algorithm assignments of the same samples
``labels_pred``, the **adjusted Rand index** is a function that measures
the **similarity** of the two assignments, ignoring permutations and **with
chance normalization**::
>>> from sklearn import metrics
>>> labels_true = [0, 0, 0, 1, 1, 1]
>>> labels_pred = [0, 0, 1, 1, 2, 2]
>>> metrics.adjusted_rand_score(labels_true, labels_pred) # doctest: +ELLIPSIS
0.24...
One can permute 0 and 1 in the predicted labels and rename `2` to `3` and get
the same score::
>>> labels_pred = [1, 1, 0, 0, 3, 3]
>>> metrics.adjusted_rand_score(labels_true, labels_pred) # doctest: +ELLIPSIS
0.24...
Furthermore, :func:`adjusted_rand_score` is **symmetric**: swapping the arguments
does not change the score. It can thus be used as a **consensus
measure**::
>>> metrics.adjusted_rand_score(labels_pred, labels_true) # doctest: +ELLIPSIS
0.24...
Perfect labeling is scored 1.0::
>>> labels_pred = labels_true[:]
>>> metrics.adjusted_rand_score(labels_true, labels_pred)
1.0
Bad (e.g. independent) labelings have negative or close to 0.0 scores::
>>> labels_true = [0, 1, 2, 0, 3, 4, 5, 1]
>>> labels_pred = [1, 1, 0, 0, 2, 2, 2, 2]
>>> metrics.adjusted_rand_score(labels_true, labels_pred) # doctest: +ELLIPSIS
-0.12...
Advantages
~~~~~~~~~~
- **Random (uniform) label assignments have an ARI score close to 0.0**
for any value of ``n_clusters`` and ``n_samples`` (which is not the
case for the raw Rand index or the V-measure, for instance).
- **Bounded range [-1, 1]**: negative values are bad (independent
labelings), similar clusterings have a positive ARI, and 1.0 is the perfect
match score.
- **No assumption is made on the cluster structure**: it can be used
to compare clustering algorithms such as k-means, which assumes isotropic
blob shapes, with results of spectral clustering algorithms, which can
find clusters with "folded" shapes.
Drawbacks
~~~~~~~~~
- Contrary to inertia, **ARI requires knowledge of the ground truth
classes**, which is almost never available in practice or requires manual
assignment by human annotators (as in the supervised learning setting).
However, ARI can also be useful in a purely unsupervised setting as a
building block for a Consensus Index that can be used for clustering
model selection (TODO).
.. topic:: Examples:
* :ref:`example_cluster_plot_adjusted_for_chance_measures.py`: Analysis of
the impact of the dataset size on the value of clustering measures
for random assignments.
Mathematical formulation
~~~~~~~~~~~~~~~~~~~~~~~~
If C is a ground truth class assignment and K the clustering, let us
define :math:`a` and :math:`b` as:
- :math:`a`, the number of pairs of elements that are in the same set
in C and in the same set in K
- :math:`b`, the number of pairs of elements that are in different sets
in C and in different sets in K
The raw (unadjusted) Rand index is then given by:
.. math:: RI = \frac{a + b}{C_2^{n_{samples}}}
Where :math:`C_2^{n_{samples}}` is the total number of possible pairs
in the dataset (without ordering).
However, the RI score does not guarantee that random label assignments
will get a value close to zero (especially if the number of clusters is of
the same order of magnitude as the number of samples).
To counter this effect we can discount the expected RI of random labelings
by defining the adjusted Rand index as follows:
.. math:: ARI = \frac{RI - Expected\_RI}{max(RI) - Expected\_RI}
.. topic:: References
* `Comparing Partitions
<http://www.springerlink.com/content/x64124718341j1j0/>`_
L. Hubert and P. Arabie, Journal of Classification 1985
* `Wikipedia entry for the adjusted Rand index
<http://en.wikipedia.org/wiki/Rand_index#Adjusted_Rand_index>`_
Adjusted Mutual Information
---------------------------
Presentation and usage
~~~~~~~~~~~~~~~~~~~~~~
Given the knowledge of the ground truth class assignments ``labels_true``
and our clustering algorithm assignments of the same samples
``labels_pred``, the **Adjusted Mutual Information** is a function that
measures the **agreement** of the two assignments, ignoring permutations
and **with chance normalization**::
>>> from sklearn import metrics
>>> labels_true = [0, 0, 0, 1, 1, 1]
>>> labels_pred = [0, 0, 1, 1, 2, 2]
>>> metrics.adjusted_mutual_info_score(labels_true, labels_pred) # doctest: +ELLIPSIS
0.22504...
One can permute 0 and 1 in the predicted labels and rename `2` to `3` and get
the same score::
>>> labels_pred = [1, 1, 0, 0, 3, 3]
>>> metrics.adjusted_mutual_info_score(labels_true, labels_pred) # doctest: +ELLIPSIS
0.22504...
Furthermore, :func:`adjusted_mutual_info_score` is **symmetric**: swapping the
arguments does not change the score. It can thus be used as a **consensus
measure**::
>>> metrics.adjusted_mutual_info_score(labels_pred, labels_true) # doctest: +ELLIPSIS
0.22504...
Perfect labeling is scored 1.0::
>>> labels_pred = labels_true[:]
>>> metrics.adjusted_mutual_info_score(labels_true, labels_pred)
1.0
Bad (e.g. independent labelings) have non-positive scores::
>>> labels_true = [0, 1, 2, 0, 3, 4, 5, 1]
>>> labels_pred = [1, 1, 0, 0, 2, 2, 2, 2]
>>> metrics.adjusted_mutual_info_score(labels_true, labels_pred) # doctest: +ELLIPSIS
-0.10526...
Advantages
~~~~~~~~~~
- **Random (uniform) label assignments have an AMI score close to 0.0**
for any value of ``n_clusters`` and ``n_samples`` (which is not the
case for the raw Mutual Information or the V-measure, for instance).
- **Bounded range [0, 1]**: Values close to zero indicate two label
assignments that are largely independent, while values close to one
indicate significant agreement. Further, a value of exactly 0 indicates
**purely** independent label assignments and an AMI of exactly 1 indicates
that the two label assignments are equal (with or without permutation).
- **No assumption is made on the cluster structure**: it can be used
to compare clustering algorithms such as k-means, which assumes isotropic
blob shapes, with results of spectral clustering algorithms, which can
find clusters with "folded" shapes.
Drawbacks
~~~~~~~~~
- Contrary to inertia, **AMI requires knowledge of the ground truth
classes**, which is almost never available in practice or requires manual
assignment by human annotators (as in the supervised learning setting).
However, AMI can also be useful in a purely unsupervised setting as a
building block for a Consensus Index that can be used for clustering
model selection.
.. topic:: Examples:
* :ref:`example_cluster_plot_adjusted_for_chance_measures.py`: Analysis of
the impact of the dataset size on the value of clustering measures
for random assignments. This example also includes the Adjusted Rand
Index.
Mathematical formulation
~~~~~~~~~~~~~~~~~~~~~~~~
Assume two label assignments (of the same data), :math:`U` with :math:`R`
classes and :math:`V` with :math:`C` classes. The entropy of either is the
amount of uncertainty of the assignment, and can be calculated as:
.. math:: H(U) = - \sum_{i=1}^{|R|}P(i)log(P(i))
Where :math:`P(i)` is the proportion of instances in :math:`U` that are in class :math:`R_i`.
Likewise, for :math:`V`:
.. math:: H(V) = - \sum_{j=1}^{|C|}P'(j)log(P'(j))
Where :math:`P'(j)` is the proportion of instances in :math:`V` that are in class :math:`C_j`.
The (non-adjusted) mutual information between :math:`U` and :math:`V` is
calculated by:
.. math:: MI(U, V) = \sum_{i=1}^{|R|}\sum_{j=1}^{|C|}P(i, j)log(\frac{P(i,j)}{P(i)P'(j)})
Where :math:`P(i, j)` is the proportion of instances that are in class :math:`R_i`
of :math:`U` and in class :math:`C_j` of :math:`V`.
This value of the mutual information is not adjusted for chance and will tend
to increase as the number of different labels (clusters) increases, regardless
of the actual amount of "mutual information" between the label assignments.
The expected value of the mutual information can be calculated using the
following equation, from Vinh, Epps, and Bailey, (2009). In this equation,
:math:`a_i` is the number of instances in class :math:`R_i` of :math:`U` and
:math:`b_j` is the number of instances in class :math:`C_j` of :math:`V`.
.. math:: E\{MI(U,V)\} = \sum_{i=1}^{R} \sum_{j=1}^{C}
   \sum_{n_{ij}=(a_i+b_j-N)^+}^{\min(a_i, b_j)}
   \frac{n_{ij}}{N} \log\left(\frac{N \cdot n_{ij}}{a_i b_j}\right)
   \frac{a_i! b_j! (N-a_i)! (N-b_j)!}{N! n_{ij}! (a_i-n_{ij})! (b_j-n_{ij})! (N-a_i-b_j+n_{ij})!}
Using the expected value, the adjusted mutual information can then be
calculated using a similar form to that of the adjusted Rand index:
.. math:: AMI = \frac{MI - Expected\_MI}{max(H(U), H(V)) - Expected\_MI}
.. topic:: References
* Vinh, Epps, and Bailey, (2009). "Information theoretic measures
for clusterings comparison". Proceedings of the 26th Annual International
Conference on Machine Learning - ICML '09.
doi:10.1145/1553374.1553511. ISBN 9781605585161.
* Vinh, Epps, and Bailey, (2010). "Information Theoretic Measures for
Clusterings Comparison: Variants, Properties, Normalization and
Correction for Chance". JMLR
http://jmlr.csail.mit.edu/papers/volume11/vinh10a/vinh10a.pdf
* `Wikipedia entry for the Adjusted Mutual Information
<http://en.wikipedia.org/wiki/Adjusted_Mutual_Information>`_
Homogeneity, completeness and V-measure
---------------------------------------
Presentation and usage
~~~~~~~~~~~~~~~~~~~~~~
Given the knowledge of the ground truth class assignments of the samples,
it is possible to define some intuitive metrics using conditional entropy
analysis.
In particular Rosenberg and Hirschberg (2007) define the following two
desirable objectives for any cluster assignment:
- **homogeneity**: each cluster contains only members of a single class.
- **completeness**: all members of a given class are assigned to the same
cluster.
We can turn those concepts into scores, :func:`homogeneity_score` and
:func:`completeness_score`. Both are bounded below by 0.0 and above by
1.0 (higher is better)::
>>> from sklearn import metrics
>>> labels_true = [0, 0, 0, 1, 1, 1]
>>> labels_pred = [0, 0, 1, 1, 2, 2]
>>> metrics.homogeneity_score(labels_true, labels_pred) # doctest: +ELLIPSIS
0.66...
>>> metrics.completeness_score(labels_true, labels_pred) # doctest: +ELLIPSIS
0.42...
Their harmonic mean called **V-measure** is computed by
:func:`v_measure_score`::
>>> metrics.v_measure_score(labels_true, labels_pred) # doctest: +ELLIPSIS
0.51...
All three metrics can be computed at once using
:func:`homogeneity_completeness_v_measure` as follows::
>>> metrics.homogeneity_completeness_v_measure(labels_true, labels_pred)
... # doctest: +ELLIPSIS
(0.66..., 0.42..., 0.51...)
The following clustering assignment is slightly better, since it is
homogeneous but not complete::
>>> labels_pred = [0, 0, 0, 1, 2, 2]
>>> metrics.homogeneity_completeness_v_measure(labels_true, labels_pred)
... # doctest: +ELLIPSIS
(1.0, 0.68..., 0.81...)
.. note::
:func:`v_measure_score` is **symmetric**: it can be used to evaluate
the **agreement** of two independent assignments on the same dataset.
This is not the case for :func:`completeness_score` and
:func:`homogeneity_score`: both are bound by the relationship::
homogeneity_score(a, b) == completeness_score(b, a)
Advantages
~~~~~~~~~~
- **Bounded scores**: 0.0 is as bad as it can be, 1.0 is a perfect score.
- Intuitive interpretation: a clustering with a bad V-measure can be
**qualitatively analyzed in terms of homogeneity and completeness**
to get a better feel for what 'kind' of mistakes is made by the assignment.
- **No assumption is made on the cluster structure**: can be used
to compare clustering algorithms such as k-means which assumes isotropic
blob shapes with results of spectral clustering algorithms which can
find cluster with "folded" shapes.
Drawbacks
~~~~~~~~~
- The previously introduced metrics are **not normalized w.r.t. random
labeling**: this means that depending on the number of samples,
clusters and ground truth classes, a completely random labeling will
not always yield the same values for homogeneity, completeness and
hence v-measure. In particular **random labeling won't yield zero
scores especially when the number of clusters is large**.
This problem can safely be ignored when the number of samples is more
than a thousand and the number of clusters is less than 10. **For
smaller sample sizes or larger number of clusters it is safer to use
an adjusted index such as the Adjusted Rand Index (ARI)**.
.. figure:: ../auto_examples/cluster/images/plot_adjusted_for_chance_measures_1.png
:target: ../auto_examples/cluster/plot_adjusted_for_chance_measures.html
:align: center
:scale: 100
- These metrics **require knowledge of the ground truth classes**, which is
almost never available in practice or requires manual assignment by
human annotators (as in the supervised learning setting).
.. topic:: Examples:
* :ref:`example_cluster_plot_adjusted_for_chance_measures.py`: Analysis of
the impact of the dataset size on the value of clustering measures
for random assignments.
Mathematical formulation
~~~~~~~~~~~~~~~~~~~~~~~~
Homogeneity and completeness scores are formally given by:
.. math:: h = 1 - \frac{H(C|K)}{H(C)}
.. math:: c = 1 - \frac{H(K|C)}{H(K)}
where :math:`H(C|K)` is the **conditional entropy of the classes given
the cluster assignments** and is given by:
.. math:: H(C|K) = - \sum_{c=1}^{|C|} \sum_{k=1}^{|K|} \frac{n_{c,k}}{n}
\cdot log(\frac{n_{c,k}}{n_k})
and :math:`H(C)` is the **entropy of the classes** and is given by:
.. math:: H(C) = - \sum_{c=1}^{|C|} \frac{n_c}{n} \cdot log(\frac{n_c}{n})
with :math:`n` the total number of samples, :math:`n_c` and :math:`n_k`
the number of samples respectively belonging to class :math:`c` and
cluster :math:`k`, and finally :math:`n_{c,k}` the number of samples
from class :math:`c` assigned to cluster :math:`k`.
The **conditional entropy of clusters given class** :math:`H(K|C)` and the
**entropy of clusters** :math:`H(K)` are defined in a symmetric manner.
Rosenberg and Hirschberg further define **V-measure** as the **harmonic
mean of homogeneity and completeness**:
.. math:: v = 2 \cdot \frac{h \cdot c}{h + c}
.. topic:: References
* `V-Measure: A conditional entropy-based external cluster evaluation
measure <http://acl.ldc.upenn.edu/D/D07/D07-1043.pdf>`_
Andrew Rosenberg and Julia Hirschberg, 2007
.. _silhouette_coefficient:
Silhouette Coefficient
----------------------
Presentation and usage
~~~~~~~~~~~~~~~~~~~~~~
If the ground truth labels are not known, evaluation must be performed using
the model itself. The Silhouette Coefficient
(:func:`sklearn.metrics.silhouette_score`)
is an example of such an evaluation, where a
higher Silhouette Coefficient score relates to a model with better defined
clusters. The Silhouette Coefficient is defined for each sample and is composed
of two scores:
- **a**: The mean distance between a sample and all other points in the same
class.
- **b**: The mean distance between a sample and all other points in the *next
nearest cluster*.
The Silhouette Coefficient *s* for a single sample is then given as:
.. math:: s = \frac{b - a}{max(a, b)}
The Silhouette Coefficient for a set of samples is given as the mean of the
Silhouette Coefficient for each sample.
>>> from sklearn import metrics
>>> from sklearn.metrics import pairwise_distances
>>> from sklearn import datasets
>>> dataset = datasets.load_iris()
>>> X = dataset.data
>>> y = dataset.target
In normal usage, the Silhouette Coefficient is applied to the results of a
cluster analysis.
>>> import numpy as np
>>> from sklearn.cluster import KMeans
>>> kmeans_model = KMeans(n_clusters=3, random_state=1).fit(X)
>>> labels = kmeans_model.labels_
>>> metrics.silhouette_score(X, labels, metric='euclidean')
... # doctest: +ELLIPSIS
0.55...
.. topic:: References
* Peter J. Rousseeuw (1987). "Silhouettes: a Graphical Aid to the
Interpretation and Validation of Cluster Analysis". Computational
and Applied Mathematics 20: 53–65. doi:10.1016/0377-0427(87)90125-7.
Advantages
~~~~~~~~~~
- The score is bounded between -1 for incorrect clustering and +1 for highly
dense clustering. Scores around zero indicate overlapping clusters.
- The score is higher when clusters are dense and well separated, which relates
to a standard concept of a cluster.
Drawbacks
~~~~~~~~~
- The Silhouette Coefficient is generally higher for convex clusters than for
other concepts of clusters, such as density-based clusters like those obtained
through DBSCAN.