# scikit-learn: Theory of K-Means Clustering

In this course, we will explore a class of unsupervised machine learning models: clustering algorithms. Clustering algorithms seek to learn, from the properties of the data alone, an optimal partition of the points into discrete groups.

k-Means clustering is presented first as an algorithm and then as an approach to minimizing a particular objective function. One challenge with clustering algorithms is that it’s not obvious how to measure success.
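For reference, the objective that k-Means minimizes is the within-cluster sum of squared distances (also called the inertia). Writing $\mu_k$ for the center of cluster $S_k$:

```latex
J = \sum_{k=1}^{K} \sum_{x_i \in S_k} \lVert x_i - \mu_k \rVert^2
```

Each E–M iteration (reassigning points to their nearest center, then recomputing each center as the mean of its assigned points) can only decrease $J$, which is why the procedure is guaranteed to converge, though not necessarily to the global minimum.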


## Task List

We will cover the following tasks in 52 minutes:

### Setting

In this chapter, we start out by cultivating our intuition for the settings most appropriate for k-Means clustering. Next, we generate a two-dimensional dataset containing four distinct blobs. To emphasize that this is an unsupervised algorithm, we will leave the labels out of the visualization.
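A dataset like the one described can be produced with scikit-learn's `make_blobs`. The sketch below is an assumed setup (the sample count, cluster spread, and seed are illustrative choices, not values from the course); the true labels returned by `make_blobs` are discarded, mirroring the unsupervised setting.

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Generate a two-dimensional dataset with four distinct blobs.
# 300 points and cluster_std=0.60 are illustrative choices.
X, y_true = make_blobs(n_samples=300, centers=4,
                       cluster_std=0.60, random_state=0)

# Fit k-Means without ever looking at y_true: the algorithm
# recovers the group structure from the data alone.
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0)
labels = kmeans.fit_predict(X)
```

In a notebook, plotting `X` colored by `labels` (rather than by `y_true`) emphasizes that the grouping was learned, not given.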

### Failure Cases: Suboptimal Local Minimum

Here we look at situations where k-Means fails in practice, and try to piece together why this is so.

First, although the E–M (expectation–maximization) procedure is guaranteed to improve the result at each step, there is no assurance that it will converge to the globally optimal solution. For example, if we use a different random seed in our simple procedure, a particular set of starting guesses can lead to poor results.
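One way to see this is to restrict k-Means to a single random initialization (`n_init=1, init="random"`) and compare the final inertia across several seeds. The dataset setup below is an assumed sketch, not the course's exact code; which seeds land in a poor local minimum will vary, but the spread in inertia illustrates the sensitivity to the starting guess.

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Four well-separated blobs (illustrative parameters).
X, _ = make_blobs(n_samples=300, centers=4,
                  cluster_std=0.60, random_state=0)

# Run k-Means with one purely random initialization per seed.
# Each run converges, but possibly to a different local minimum
# of the objective, visible as a different final inertia.
inertias = [
    KMeans(n_clusters=4, init="random", n_init=1,
           random_state=seed).fit(X).inertia_
    for seed in range(10)
]
print(sorted(inertias))
```

This is why scikit-learn's default behavior runs the algorithm several times from different starting points (`n_init` > 1) and keeps the run with the lowest inertia.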

## About the Host (Snehan Kekre)

Snehan hosts Machine Learning courses at Rhyme. He is in his senior year of university at the Minerva Schools at KGI, pursuing a double major in the Natural Sciences and Computational Sciences, with a focus on physics and machine learning. When not applying computational and quantitative methods to identify the structures shaping the world around him, he can sometimes be seen trekking in the mountains of Nepal.