scikit-learn
CBIO (CSCI) 4835/6835: Introduction to Computational Biology
It's nice when you're able to automate a lot of your data analysis. Unfortunately, quite a bit of this analysis is "fuzzy"--it doesn't have well-defined right or wrong answers, but instead relies on sophisticated algorithmic and statistical analyses. Fortunately, a lot of these analyses have been implemented in the scikit-learn
Python library. By the end of this lecture, you should be able to use scikit-learn (and scipy) to cluster and classify real data.
What is clustering?
Wikipedia:
Cluster analysis or clustering is the task of grouping a set of objects in such a way that objects in the same group (called a cluster) are more similar (in some sense or another) to each other than to those in other groups (clusters).
Generally speaking, clustering is a hard problem, so it is difficult to identify a provably optimal clustering.
In $k$-means clustering we are given a set of $d$-dimensional vectors and we want to identify $k$ sets $S_i$ such that
$$\sum_{i=1}^{k} \sum_{x_j \in S_i} \|x_j - \mu_i\|^2$$
is minimized, where $\mu_i$ is the mean of cluster $S_i$. That is, all points should be as close as possible to the 'center' of their cluster.
Limitations
The general algorithm for $k$-means is as follows:
1: Choose the initial set of $k$ centroids. These are essentially pseudo-datapoints that will be updated over the course of the algorithm.
2: Assign all the actual data points to one of the centroids--whichever centroid they're closest to.
3: Recompute the centroids based on the cluster assignments found in step 2.
4: Repeat until the centroids don't change very much.
https://www.naftaliharris.com/blog/visualizing-k-means-clustering/
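Before we hand the work over to scikit-learn, here's a minimal NumPy sketch of the four steps above (illustrative only; it assumes no cluster ever goes empty):
import numpy as np

def kmeans_sketch(X, k, n_iters=100, seed=0):
    rng = np.random.RandomState(seed)
    # Step 1: choose k initial centroids by sampling k data points.
    centroids = X[rng.choice(len(X), k, replace=False)]
    for _ in range(n_iters):
        # Step 2: assign each point to its closest centroid.
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Step 3: recompute each centroid as the mean of its assigned points.
        new_centroids = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        # Step 4: stop once the centroids no longer move.
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    return labels, centroids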
First let's make a toy data set...
%matplotlib inline
import matplotlib.pylab as plt
import numpy as np
randpts1 = np.random.randn(100, 2) / (4, 1)             # 100 Gaussian points, squeezed along x
randpts2 = (np.random.randn(100, 2) + (1, 0)) / (1, 4)  # 100 more, shifted right and squeezed along y
plt.plot(randpts1[:, 0], randpts1[:, 1], 'o', randpts2[:, 0], randpts2[:, 1], 'o')
X = np.vstack((randpts1,randpts2))
import sklearn.cluster as cluster
kmeans = cluster.KMeans(n_clusters = 2)
kmeans.fit(X)
# Get the cluster assignments.
clusters = kmeans.labels_
# Get the centroids.
means = kmeans.cluster_centers_
The means are the cluster centers
plt.scatter(X[:, 0], X[:, 1], c = clusters)
plt.plot(means[:, 0], means[:, 1], '*', ms = 20);
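Incidentally, the quantity $k$-means minimizes (the double sum above) is stored on the fitted model as kmeans.inertia_; we can verify it by recomputing it by hand from the labels and centers:
# Recompute the k-means objective from the assignments; this should
# match the fitted model's inertia_ attribute.
sq_dists = ((X - means[clusters]) ** 2).sum()
print(sq_dists, kmeans.inertia_)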
Let's look at what happens if we change from $k = 2$ to $k = 3$.
kmeans = cluster.KMeans(n_clusters = 3)
kmeans.fit(X)
clusters, means = kmeans.labels_, kmeans.cluster_centers_
plt.scatter(X[:, 0], X[:, 1], c = clusters)
plt.plot(means[:, 0], means[:, 1], '*', ms = 20);
And for $k = 4$:
kmeans = cluster.KMeans(n_clusters = 4)
kmeans.fit(X)
clusters, means = kmeans.labels_, kmeans.cluster_centers_
plt.scatter(X[:, 0], X[:, 1], c = clusters)
plt.plot(means[:, 0], means[:, 1], '*', ms = 20);
Will $k$-means always find the same set of clusters?
What sort of data would $k$-means have difficulty clustering?
Hierarchical clustering creates a hierarchy, where each cluster is formed from subclusters.
There are two kinds of hierarchical clustering: agglomerative and divisive.
Which do you think is easier, in practice?
Agglomerative clustering requires a notion of distance between clusters of items, not just between the items themselves.
On the other hand, a distance function is all you need--unlike $k$-means, you never need to compute an average of your data points.
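For example, here's a sketch of two common ways to lift a point-to-point distance up to a cluster-to-cluster distance (scipy calls these "single" and "complete" linkage; the helper names here are our own):
import numpy as np

def pairwise_dists(A, B):
    # Euclidean distance between every point in A and every point in B.
    return np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)

def single_linkage_dist(A, B):
    return pairwise_dists(A, B).min()   # distance between the closest members

def complete_linkage_dist(A, B):
    return pairwise_dists(A, B).max()   # distance between the farthest members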
linkage
scipy.cluster.hierarchy.linkage creates a clustering hierarchy. It takes three parameters: the data itself, the linkage method (how to measure distance between clusters), and the distance metric (how to measure distance between individual points).
import scipy.cluster.hierarchy as hclust
linkage_matrix = hclust.linkage(X)
linkage returns an $(n - 1) \times 4$ matrix Z describing the hierarchy:
1: At iteration $i$, the clusters with indices Z[i, 0] and Z[i, 1] are combined to form cluster $n + i$.
2: The distance between clusters Z[i, 0] and Z[i, 1] is given by Z[i, 2].
3: Z[i, 3] represents the number of original observations in the newly formed cluster.
print(X.shape)
print(linkage_matrix)
print(linkage_matrix.shape)
(200, 2)
[[  2.50000000e+01   3.50000000e+01   2.63916828e-03   2.00000000e+00]
 [  5.40000000e+01   1.73000000e+02   1.01051897e-02   2.00000000e+00]
 [  6.20000000e+01   1.23000000e+02   1.35954775e-02   2.00000000e+00]
 ...,
 [  1.74000000e+02   3.95000000e+02   6.35576164e-01   1.98000000e+02]
 [  1.50000000e+02   3.96000000e+02   9.10077509e-01   1.99000000e+02]
 [  9.90000000e+01   3.97000000e+02   9.58559881e-01   2.00000000e+02]]
(199, 4)
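To make the encoding concrete, here's a quick snippet (using the linkage_matrix computed above) that decodes its first row--with $n = 200$ original points, this first merge creates cluster number 200:
row = linkage_matrix[0]
print("merged clusters %d and %d at distance %.4f into a cluster of size %d"
      % (int(row[0]), int(row[1]), row[2], int(row[3])))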
hclust.dendrogram(linkage_matrix, p=4, truncate_mode='level');  # show only the first 4 levels of the hierarchy
fcluster: extracting clusters from a hierarchy
fcluster takes a linkage matrix and returns a cluster assignment. It takes a threshold value and a string specifying what method to use to form the clusters.
help(hclust.fcluster)
Help on function fcluster in module scipy.cluster.hierarchy:

fcluster(Z, t, criterion='inconsistent', depth=2, R=None, monocrit=None)
    Forms flat clusters from the hierarchical clustering defined by
    the given linkage matrix.

    Parameters
    ----------
    Z : ndarray
        The hierarchical clustering encoded with the matrix returned
        by the `linkage` function.
    t : float
        The threshold to apply when forming flat clusters.
    criterion : str, optional
        The criterion to use in forming flat clusters. This can
        be any of the following values:

        ``inconsistent`` : If a cluster node and all its
            descendants have an inconsistent value less than or equal
            to `t` then all its leaf descendants belong to the same
            flat cluster. When no non-singleton cluster meets this
            criterion, every node is assigned to its own cluster.
            (Default)

        ``distance`` : Forms flat clusters so that the original
            observations in each flat cluster have no greater a
            cophenetic distance than `t`.

        ``maxclust`` : Finds a minimum threshold ``r`` so that
            the cophenetic distance between any two original
            observations in the same flat cluster is no more than
            ``r`` and no more than `t` flat clusters are formed.

        ``monocrit`` : Forms a flat cluster from a cluster node c
            with index i when ``monocrit[j] <= t``. For example,
            to threshold on the maximum mean distance as computed
            in the inconsistency matrix R with a threshold of 0.8 do::

                MR = maxRstat(Z, R, 3)
                cluster(Z, t=0.8, criterion='monocrit', monocrit=MR)

        ``maxclust_monocrit`` : Forms a flat cluster from a
            non-singleton cluster node ``c`` when ``monocrit[i] <= r``
            for all cluster indices ``i`` below and including ``c``.
            ``r`` is minimized such that no more than ``t`` flat
            clusters are formed. monocrit must be monotonic. For
            example, to minimize the threshold t on maximum
            inconsistency values so that no more than 3 flat clusters
            are formed, do::

                MI = maxinconsts(Z, R)
                cluster(Z, t=3, criterion='maxclust_monocrit', monocrit=MI)

    depth : int, optional
        The maximum depth to perform the inconsistency calculation.
        It has no meaning for the other criteria. Default is 2.
    R : ndarray, optional
        The inconsistency matrix to use for the 'inconsistent'
        criterion. This matrix is computed if not provided.
    monocrit : ndarray, optional
        An array of length n-1. `monocrit[i]` is the statistics upon
        which non-singleton i is thresholded. The monocrit vector
        must be monotonic, i.e. given a node c with index i, for all
        node indices j corresponding to nodes below c,
        ``monocrit[i] >= monocrit[j]``.

    Returns
    -------
    fcluster : ndarray
        An array of length ``n``. ``T[i]`` is the flat cluster number to
        which original observation ``i`` belongs.
clusters = hclust.fcluster(linkage_matrix, 0.3, 'distance')
len(set(clusters))
13
plt.scatter(X[:, 0], X[:, 1], c = clusters)
clusters = hclust.fcluster(linkage_matrix, 4, 'maxclust')
len(set(clusters))
4
plt.scatter(X[:, 0], X[:, 1], c = clusters)
fclusterdata
fclusterdata does both linkage and fcluster in one step. Let's try out different linkage methods.
clusters = hclust.fclusterdata(X, 4, 'maxclust', method = 'complete')
plt.scatter(X[:, 0], X[:, 1], c = clusters)
clusters = hclust.fclusterdata(X, 4, 'maxclust', method = 'average')
plt.scatter(X[:, 0], X[:, 1], c = clusters)
You can even use a non-Euclidean metric.
clusters = hclust.fclusterdata(X, 4, 'maxclust', method = 'average', metric = 'cityblock')
plt.scatter(X[:, 0], X[:, 1], c = clusters)
leaves_list
A hierarchical clustering imposes an order on the leaves. You can retrieve this ordering from the linkage matrix with leaves_list.
hclust.leaves_list(linkage_matrix)
array([ 99, 150, 174, 147, 171, 113, 146, 40, 49, 80, 81, 19, 13, 69, 77, 41, 67, 50, 84, 21, 26, 6, 33, 86, 88, 61, 34, 18, 29, 82, 97, 73, 36, 85, 12, 94, 2, 31, 65, 20, 71, 157, 48, 187, 116, 103, 192, 142, 188, 176, 178, 133, 160, 195, 122, 119, 128, 108, 183, 130, 117, 139, 105, 141, 15, 30, 194, 76, 24, 22, 63, 44, 135, 138, 56, 149, 175, 180, 90, 193, 59, 0, 70, 112, 158, 161, 145, 191, 197, 14, 114, 115, 172, 126, 148, 168, 185, 169, 152, 167, 124, 125, 154, 132, 177, 199, 121, 118, 101, 166, 143, 151, 181, 106, 137, 155, 120, 100, 104, 102, 196, 110, 136, 66, 52, 98, 153, 159, 5, 16, 47, 17, 68, 32, 28, 55, 46, 92, 165, 62, 123, 95, 156, 45, 162, 25, 35, 129, 140, 37, 89, 43, 170, 9, 39, 57, 64, 96, 144, 72, 190, 1, 7, 74, 75, 186, 8, 127, 107, 198, 131, 87, 163, 179, 38, 184, 134, 4, 111, 23, 27, 93, 11, 54, 173, 189, 109, 182, 51, 91, 164, 58, 10, 78, 42, 53, 79, 60, 3, 83], dtype=int32)
from matplotlib.pylab import cm
data = np.genfromtxt('Spellman.csv', skip_header=1, delimiter=',')[:, 1:]  # skip the header row and the first (label) column
# Cluster the rows hierarchically, then reorder them by leaf order so that
# similar rows end up next to each other in the heatmap.
Z = hclust.linkage(data, method='complete')
leaves = hclust.leaves_list(Z)
ordered = data[leaves]
plt.matshow(ordered, aspect = 0.01, cmap=cm.seismic);
plt.colorbar();
What is machine learning?
Wikipedia:
Machine learning is a subfield of computer science that evolved from the study of pattern recognition and computational learning theory in artificial intelligence. Machine learning explores the study and construction of algorithms that can learn from and make predictions on data.
Dr. David Koes, University of Pittsburgh:
Creating useful and/or predictive computational models from data.
Machine Learning can be considered a subfield of Artificial Intelligence.
These algorithms can be seen as building blocks to make computers learn to behave more intelligently by somehow generalizing, rather than just storing and retrieving data items like a database system would do.
Here's an overly simple example: given a scatterplot of red points and blue points, draw a line that separates them.
This may seem like a trivial task, but it is a simple version of a very important concept. By drawing this separating line, we have learned a model which can generalize to new data.
If you were to drop another point onto the plane which is unlabeled, this algorithm could now predict whether it's a blue or a red point.
scikit-learn
Scikit-Learn is a Python package designed to give access to well-known machine learning algorithms within Python code, through a clean, well-thought-out API. It has been built by hundreds of contributors from around the world, and is used across industry and academia.
Scikit-Learn is built upon Python's NumPy (Numerical Python) and SciPy (Scientific Python) libraries, which enable efficient in-core numerical and scientific computation within Python.
Machine learning is about creating models from data, and scikit-learn
uses a particular layout for building its models.
Most machine learning algorithms implemented in scikit-learn expect data to be stored in a two-dimensional array or matrix, typically as numpy arrays. The shape of the array is expected to be [n_samples, n_features].
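For instance, a hypothetical dataset of three samples with two features each would be stored like this:
import numpy as np
# 3 samples (rows) x 2 features (columns) -> shape [n_samples, n_features]
toy = np.array([[5.1, 3.5],
                [4.9, 3.0],
                [4.7, 3.2]])
print(toy.shape) # (3, 2)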
As an example of a simple dataset, we're going to take a look at the iris data (stored in scikit-learn
, along with dozens of other sample datasets).
The data consists of measurements of three different species of irises.
There are three species of iris in the dataset, which we can picture here:
from IPython.core.display import Image, display
display(Image(filename = 'Lecture26/iris_setosa.jpg'))
print("Iris Setosa\n")
display(Image(filename = 'Lecture26/iris_versicolor.jpg'))
print("Iris Versicolor\n")
display(Image(filename = 'Lecture26/iris_virginica.jpg'))
print("Iris Virginica")
Iris Setosa
Iris Versicolor
Iris Virginica
The iris data consists of the following:
Features in the Iris dataset:
1: sepal length (cm)
2: sepal width (cm)
3: petal length (cm)
4: petal width (cm)
Target classes to predict:
1: Iris Setosa
2: Iris Versicolor
3: Iris Virginica
Accessing the iris data in scikit-learn
is pretty easy:
from sklearn.datasets import load_iris
iris = load_iris()
We can examine this data more closely, using the properties of the various Python objects storing it:
iris.keys()
dict_keys(['feature_names', 'data', 'DESCR', 'target_names', 'target'])
n_samples, n_features = iris.data.shape
print((n_samples, n_features))
print(iris.data[0])
(150, 4)
[ 5.1  3.5  1.4  0.2]
print(iris.data.shape)
print(iris.target.shape)
(150, 4)
(150,)
print(iris.target)
[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2]
print(iris.target_names)
['setosa' 'versicolor' 'virginica']
This data is four-dimensional, but we can visualize two of the dimensions at a time using a simple scatter plot:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
x_index = 0
y_index = 1
# this formatter will label the colorbar with the correct target names
formatter = plt.FuncFormatter(lambda i, *args: iris.target_names[int(i)])
plt.scatter(iris.data[:, x_index], iris.data[:, y_index],
c=iris.target, cmap=plt.cm.get_cmap('RdYlBu', 3))
plt.colorbar(ticks=[0, 1, 2], format=formatter)
plt.clim(-0.5, 2.5)
plt.xlabel(iris.feature_names[x_index])
plt.ylabel(iris.feature_names[y_index]);
Imagine we didn't have the target data (i.e., iris species) available, and wanted to cluster the data into similar groups.
In this case, we're discovering an underlying structure in the data without any a priori knowledge of categories or "ground truth". Hence, unsupervised.
Other forms of unsupervised learning include dimensionality reduction techniques such as principal component analysis (PCA), which we'll see again at the end of this lecture.
Supervised learning, by contrast, is what we do in order to build models from data that DO include some form of "ground truth" labels.
The data consists of a set of examples X where each example has a label y.
Our assumption is that the label is a function of the data; our goal is to learn the function that maps X to y as accurately as possible: $y = f(X)$
There are two types of supervised learning.
Classification maps an example to a discrete label. This answers questions such as: which species does this iris belong to?
Regression maps an example to a continuous number. This answers questions such as: what is this iris's petal length, in centimeters?
This is a flow chart created by scikit-learn super-contributor Andreas Mueller which gives a nice summary of which algorithms to choose in various situations. Keep it around as a handy reference!
from IPython.display import Image
Image("http://scikit-learn.org/dev/_static/ml_map.png")
Given we have ground truth information (iris species), are we performing supervised or unsupervised learning on the iris data?
What if we didn't have ground truth information? What kind of algorithms could we use?
What kind of supervised learning are we performing?
The iris data are 4-dimensional; we can't exactly visualize that directly. We can, however, plot each pair of features in turn (this is called "getting a feel for the data").
import itertools
features = iris.data
feature_names = iris.feature_names
classes = iris.target
feature_combos = itertools.combinations([0, 1, 2, 3], 2)
for i, (x1, x2) in enumerate(feature_combos):
    fig = plt.subplot(2, 3, i + 1)
    fig.set_xticks([])
    fig.set_yticks([])
    for t, marker, c in zip(range(3), ">ox", "rgb"):
        plt.scatter(features[classes == t, x1],
                    features[classes == t, x2],
                    marker = marker, c = c)
    plt.xlabel(feature_names[x1])
    plt.ylabel(feature_names[x2])
The features, X, are what make each example distinct. Ideally they contain enough information to predict y. In the case of the iris dataset, the features have to do with petal and sepal measurements.
The pairwise feature plots give us some good intuition for the data. For example:
Still, for any given pair of features, we can't get a perfect classification rule.
But that's ok!
Another algorithm with a $k$ in the name--this time, a supervised algorithm--is the simplest classifier you can design.
It asks, for any given data point $x$: What are the labels of the $k$ data points closest to $x$?
It then performs a majority vote, based on those labels. The winning label is assigned to the new data point $x$. The end!
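Conceptually, that's all there is to it. Here's a minimal sketch of the idea, assuming Euclidean distance (scikit-learn's KNeighborsClassifier, used below, is the real implementation):
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x_new, k=5):
    dists = np.linalg.norm(X_train - x_new, axis=1)  # distance to every training point
    nearest = y_train[np.argsort(dists)[:k]]         # labels of the k closest points
    return Counter(nearest).most_common(1)[0][0]     # majority vote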
Let's try it on the iris dataset.
from sklearn import neighbors
X, y = iris.data, iris.target
X = iris.data[:, :2] # use only the first two (sepal) features, for ease of visualization
# create the model
knn = neighbors.KNeighborsClassifier(n_neighbors = 5)
# fit the model
knn.fit(X, y)
KNeighborsClassifier(algorithm='auto', leaf_size=30, metric='minkowski', metric_params=None, n_jobs=1, n_neighbors=5, p=2, weights='uniform')
That's it. We've trained our classifier! Now let's try to predict a new, previously-unobserved iris:
# What kind of iris has 3cm x 5cm sepal?
# call the "predict" method:
result = knn.predict([[3, 5],])
print(iris.target_names[result])
['setosa']
Our completely-made-up new mystery iris is classified as a setosa!
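Since the prediction is a majority vote, we can also ask what fraction of the neighbors voted for each class, via the classifier's predict_proba method:
# Fraction of the 5 nearest neighbors voting for each species.
print(knn.predict_proba([[3, 5],]))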
We can visualize what this algorithm looks like:
from matplotlib.colors import ListedColormap
cmap_light = ListedColormap(['#FFAAAA', '#AAFFAA', '#AAAAFF'])
cmap_bold = ListedColormap(['#FF0000', '#00FF00', '#0000FF'])
### IGNORE THIS ###
x_min, x_max = X[:, 0].min() - .1, X[:, 0].max() + .1
y_min, y_max = X[:, 1].min() - .1, X[:, 1].max() + .1
xx, yy = np.meshgrid(np.linspace(x_min, x_max, 100),
np.linspace(y_min, y_max, 100))
Z = knn.predict(np.c_[xx.ravel(), yy.ravel()])
# Put the result into a color plot
Z = Z.reshape(xx.shape)
plt.figure()
plt.pcolormesh(xx, yy, Z, cmap = cmap_light)
# Plot also the training points
plt.scatter(X[:, 0], X[:, 1], c = y, cmap = cmap_bold)
plt.xlabel('sepal length (cm)')
plt.ylabel('sepal width (cm)')
plt.axis('tight')
There are plenty of more sophisticated classifiers: decision trees, random forests, support vector machines, logistic regression, and neural networks, to name a few (all available in scikit-learn).
Furthermore, you will often combine multiple methods into a machine learning pipeline. For example, rather than work directly with the 4D iris data, perhaps you'll run PCA on it first to reduce it to 2 dimensions, then classify it.
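As a sketch of what such a pipeline might look like (the step names 'pca' and 'knn' are labels we chose; this is one reasonable combination, not the only one):
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import Pipeline

pipe = Pipeline([('pca', PCA(n_components=2)),                   # reduce the 4D iris data to 2D
                 ('knn', KNeighborsClassifier(n_neighbors=5))])  # then classify in that 2D space
pipe.fit(iris.data, iris.target)
print(pipe.predict(iris.data[:5]))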
Machine learning is more of an art than a science; it takes a lot of "playing with the data" to find the right combination of model and pipeline that incorporates the right assumptions about the data and can generalize well.
scikit-learn
makes it relatively easy to get started: you don't have to know how the algorithms are implemented underneath, but that knowledge does help in fine-tuning your pipelines.