Theoretically rigorous introduction to EM, Mixture of Gaussians and K-means

5 minute read

This post ties together EM (Expectation Maximization), GMM (Gaussian Mixture Models), K means, and variational inference. If you have taken an introductory machine learning course and have learned these algorithms, but the connections between them are not yet clear, this is the post for you. If you have not heard of some of the above terms, I suggest you read the Wikipedia page for each before reading this post.


  • EM is a variational inference algorithm to optimize the lower bound of the log likelihood.
  • GMM is a specific example of EM where the base distribution is MVN (multivariate normal).
  • K means is GMM where variance and cluster assignment probabilities are fixed and cluster assignments are hard.

This post is more theoretical than my other ones, but I assure you that you'll have a very deep understanding of the above algorithms by the end of it.

Theory of Variational Inference

Let me first introduce the EM algorithm in a rather theoretical way. I found this theoretical introduction more “intuitive” than other attempts. I hope you will, too. In short, the goal of EM is to increase the likelihood, and it does so by maximizing a lower bound of the likelihood. Let $X = \{x_1, \dots, x_N\}$ denote the data we have at hand. Then, the likelihood of a model parameterized by $\theta$ is

$$p(X \mid \theta) = \prod_{i=1}^N p(x_i \mid \theta)$$

Now let me introduce latent variables $Z = \{z_1, \dots, z_N\}$ for cluster membership. Here, each $z_i$ is a $K$-dimensional one-hot vector that denotes which cluster the data point $x_i$ belongs to. Then, using the law of total probability,

$$p(X \mid \theta) = \sum_Z p(X, Z \mid \theta)$$

The learning objective, as usual, is to maximize the log likelihood:

$$\max_\theta \; \log p(X \mid \theta) = \max_\theta \; \log \sum_Z p(X, Z \mid \theta)$$

This would have been easier if the log likelihood were of the form $\sum_Z \log p(X, Z \mid \theta)$, with the sum outside the log, since then we could take derivatives easily. Life is not so easy here: the sum is inside the log. How can we take the sum outside the log? This question motivates us to introduce Jensen’s Inequality:

For any concave function $f$, weights $\lambda_i \ge 0$ with $\sum_i \lambda_i = 1$, and points $x_i$, Jensen’s Inequality states that

$$f\Big(\sum_i \lambda_i x_i\Big) \ge \sum_i \lambda_i f(x_i)$$

Using Jensen’s Inequality, we can take the sum outside. For any distribution $q(Z)$ over the latent variables,

$$\begin{aligned}
\log p(X \mid \theta) &= \log \sum_Z p(X, Z \mid \theta) \\
&= \log \sum_Z q(Z)\, \frac{p(X, Z \mid \theta)}{q(Z)} \\
&\ge \sum_Z q(Z) \log \frac{p(X, Z \mid \theta)}{q(Z)} \\
&=: \mathcal{L}(q, \theta)
\end{aligned}$$

Jensen’s Inequality (with $\lambda_i = q(Z)$ and $f = \log$) is used from line 2 to 3 to provide $\mathcal{L}(q, \theta)$ as a lower bound for $\log p(X \mid \theta)$. You should be able to see that in $\mathcal{L}(q, \theta)$, the sum is outside the log.
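As a quick numerical sanity check (my own addition, not part of the derivation), we can verify Jensen’s Inequality for the concave function $\log$ with arbitrary positive points and normalized weights:

```python
import math
import random

random.seed(0)

# Arbitrary positive points and weights q that sum to 1.
x = [random.uniform(0.5, 5.0) for _ in range(10)]
q = [random.random() for _ in range(10)]
total = sum(q)
q = [w / total for w in q]

# Jensen's inequality for the concave function log:
# log(sum_i q_i * x_i) >= sum_i q_i * log(x_i)
lhs = math.log(sum(w * v for w, v in zip(q, x)))
rhs = sum(w * math.log(v) for w, v in zip(q, x))
assert lhs >= rhs
print(lhs, rhs)
```

Any choice of positive points and normalized weights satisfies the inequality; the random draws here are just one instance.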

I abruptly introduced $q(Z)$ without really explaining it. What is $q(Z)$ here? We can think of $q(Z)$ as a probability distribution we model that we wish to be as close to the true posterior $p(Z \mid X, \theta)$ as possible. To see why this is the case, consider the difference between the true model log likelihood $\log p(X \mid \theta)$ and our lower bound $\mathcal{L}(q, \theta)$:

$$\begin{aligned}
\log p(X \mid \theta) - \mathcal{L}(q, \theta)
&= \log p(X \mid \theta) - \sum_Z q(Z) \log \frac{p(X, Z \mid \theta)}{q(Z)} \\
&= \sum_Z q(Z) \log p(X \mid \theta) - \sum_Z q(Z) \log \frac{p(X, Z \mid \theta)}{q(Z)} \\
&= -\sum_Z q(Z) \log \frac{p(X, Z \mid \theta)}{q(Z)\, p(X \mid \theta)} \\
&= -\sum_Z q(Z) \log \frac{p(Z \mid X, \theta)}{q(Z)} \\
&= \mathrm{KL}\big(q(Z) \,\|\, p(Z \mid X, \theta)\big)
\end{aligned}$$

Note that from line 1 to 2, we are using the fact that $\sum_Z q(Z) = 1$, and from line 3 to 4, the fact that $p(X, Z \mid \theta) = p(Z \mid X, \theta)\, p(X \mid \theta)$. To summarize, we have so far:

$$\log p(X \mid \theta) = \mathcal{L}(q, \theta) + \mathrm{KL}\big(q(Z) \,\|\, p(Z \mid X, \theta)\big)$$

This equation explains why we want $q(Z)$ to be as close to $p(Z \mid X, \theta)$ as possible: we want the KL divergence to be smaller so as to make $\mathcal{L}(q, \theta)$ a tighter lower bound for $\log p(X \mid \theta)$. To sum up, we have:

  • Original goal: $\max_\theta \log p(X \mid \theta)$
  • New goal: $\max_{q, \theta} \mathcal{L}(q, \theta)$
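The decomposition of the log likelihood into the lower bound plus a KL term can be checked numerically on a toy discrete example (the joint probabilities below are arbitrary illustrative numbers of my own):

```python
import math

# Toy joint p(x, z) for a single observation x with latent z in {0, 1}.
p_joint = [0.12, 0.28]                      # p(x, z=0), p(x, z=1)
p_x = sum(p_joint)                          # marginal p(x)
p_post = [p / p_x for p in p_joint]         # posterior p(z | x)

# An arbitrary variational distribution q(z).
q = [0.7, 0.3]

# Lower bound L(q) and KL(q || p(z|x)).
L = sum(qz * math.log(pj / qz) for qz, pj in zip(q, p_joint))
kl = sum(qz * math.log(qz / pz) for qz, pz in zip(q, p_post))

# The decomposition: log p(x) = L(q) + KL(q || p(z|x)).
assert abs(math.log(p_x) - (L + kl)) < 1e-12
print(L, kl, math.log(p_x))
```

Note that the identity holds exactly for any valid $q$; only when $q$ equals the posterior does the KL term vanish and the bound become tight.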

From Variational Inference to EM

Now that we have introduced the new optimization problem we care about, let’s actually solve it! This will yield the formulas of the EM algorithm in the most general way possible. Since $\mathcal{L}(q, \theta)$ has two arguments, $q$ and $\theta$, let’s optimize the function with respect to one at a time.

E Step:

Let’s maximize $\mathcal{L}(q, \theta)$ w.r.t. $q$. Since we know that $\log p(X \mid \theta) = \mathcal{L}(q, \theta) + \mathrm{KL}\big(q(Z) \,\|\, p(Z \mid X, \theta)\big)$ and also that $\log p(X \mid \theta)$ is fixed and doesn’t depend on $q$, this is equivalent to minimizing $\mathrm{KL}\big(q(Z) \,\|\, p(Z \mid X, \theta)\big)$. Since we know that the minimum value of the Kullback–Leibler divergence is $0$, attained exactly when the two distributions are equal, we want to set $q$ such that:

$$\mathrm{KL}\big(q(Z) \,\|\, p(Z \mid X, \theta)\big) = 0$$

Thus, we have the update formula for the E step:

$$q(Z) = p(Z \mid X, \theta)$$

Because we are minimizing the KL divergence while the likelihood itself doesn’t change, the E step is equivalent to making the lower bound tighter.

M Step:

It is not so hard to maximize $\mathcal{L}(q, \theta)$ w.r.t. $\theta$, but the intuition behind the M step is trickier. This is because $\log p(X \mid \theta)$ also depends on $\theta$, and thus we first need to understand why the $\theta$ that maximizes $\mathcal{L}(q, \theta)$ also increases $\log p(X \mid \theta)$. To see this, recall

$$\log p(X \mid \theta) = \mathcal{L}(q, \theta) + \mathrm{KL}\big(q(Z) \,\|\, p(Z \mid X, \theta)\big)$$

In the E step, we minimized the second term on the RHS (the KL divergence) to its smallest possible value, $0$. Hence, this term can only increase as we change $\theta$ in the M step. The first term on the RHS will increase because that is exactly what we are maximizing in the M step. Hence, as a whole, we know that $\log p(X \mid \theta)$ must also increase.

Now, since we are updating $\theta$, let’s write the $\theta$ used in the E step as $\theta^{\text{old}}$ so as to distinguish it from the new $\theta$ derived from the M step. With this notation, the E step gives $q(Z) = p(Z \mid X, \theta^{\text{old}})$, and

$$\begin{aligned}
\mathcal{L}(q, \theta) &= \sum_Z p(Z \mid X, \theta^{\text{old}}) \log \frac{p(X, Z \mid \theta)}{p(Z \mid X, \theta^{\text{old}})} \\
&= \sum_Z p(Z \mid X, \theta^{\text{old}}) \log p(X, Z \mid \theta) - \sum_Z p(Z \mid X, \theta^{\text{old}}) \log p(Z \mid X, \theta^{\text{old}}) \\
&= Q(\theta, \theta^{\text{old}}) + \mathrm{const}
\end{aligned}$$

where $Q(\theta, \theta^{\text{old}}) = \sum_Z p(Z \mid X, \theta^{\text{old}}) \log p(X, Z \mid \theta)$ and the constant does not depend on $\theta$. Hence,

$$\theta^{\text{new}} = \operatorname*{arg\,max}_\theta \; Q(\theta, \theta^{\text{old}})$$

The concrete form of $Q(\theta, \theta^{\text{old}})$ depends on how we parametrize the model; one case, GMM, is exemplified below. We now have the update rule of the M step. In contrast to the E step, the M step is equivalent to raising the log likelihood itself higher.
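To see the general E and M steps in action before specializing to GMM, here is a minimal sketch of EM for a two-component binomial mixture (two biased “coins” with equal mixing weights). The data generation, initialization, and variable names are my own illustrative choices, not from the post:

```python
import random

random.seed(1)

# Generate flip sequences from two biased "coins" (the true latent model).
true_p = [0.2, 0.8]
n = 50          # flips per sequence
data = []       # number of heads per sequence
for _ in range(200):
    p = random.choice(true_p)
    data.append(sum(random.random() < p for _ in range(n)))

# EM for a two-component binomial mixture with equal mixing weights.
p_hat = [0.3, 0.6]  # initial coin biases
for _ in range(100):
    # E step: q(z_i) = posterior responsibility of each coin for sequence i
    # (binomial likelihood up to a constant factor that cancels).
    resp = []
    for h in data:
        lik = [p ** h * (1 - p) ** (n - h) for p in p_hat]
        s = sum(lik)
        resp.append([l / s for l in lik])
    # M step: maximize Q -> responsibility-weighted fraction of heads per coin.
    for k in range(2):
        num = sum(r[k] * h for r, h in zip(resp, data))
        den = sum(r[k] * n for r in resp)
        p_hat[k] = num / den

print(sorted(p_hat))  # estimates should land near the true biases [0.2, 0.8]
```

Both steps here are exactly the general formulas: the E step sets $q$ to the posterior over the latent coin identity, and the M step maximizes the expected complete-data log likelihood $Q$ in closed form.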


GMM as EM

Now, we will show that GMM is a specific example of EM. To see this, let

  • $X = \{x_1, \dots, x_N\}$, $x_i \in \mathbb{R}^D$, be the data.
  • $Z = \{z_1, \dots, z_N\}$, where each $z_i$ is a $K$-dimensional one-hot vector, be the cluster membership.
  • $\theta = \{\mu_k, \Sigma_k, \pi_k\}_{k=1}^K$ be the parameters. $\mu_k$ and $\Sigma_k$ are the parameters of the $k$-th Gaussian, and $\pi_k$ is its prior mixture probability.

The whole model is a mixture of MVNs, as follows:

$$p(x_i \mid \theta) = \sum_{k=1}^K \pi_k\, \mathcal{N}(x_i \mid \mu_k, \Sigma_k)$$

E Step:

The posterior responsibility of cluster $k$ for point $x_i$ is

$$\gamma(z_{ik}) := p(z_{ik} = 1 \mid x_i, \theta^{\text{old}}) = \frac{\pi_k^{\text{old}}\, \mathcal{N}(x_i \mid \mu_k^{\text{old}}, \Sigma_k^{\text{old}})}{\sum_{j=1}^K \pi_j^{\text{old}}\, \mathcal{N}(x_i \mid \mu_j^{\text{old}}, \Sigma_j^{\text{old}})}$$

M Step:

Let $N_k = \sum_{i=1}^N \gamma(z_{ik})$ be the effective number of points assigned to cluster $k$. Then,

$$Q(\theta, \theta^{\text{old}}) = \sum_{i=1}^N \sum_{k=1}^K \gamma(z_{ik}) \big[\log \pi_k + \log \mathcal{N}(x_i \mid \mu_k, \Sigma_k)\big]$$

Since we have the constraint that $\sum_{k=1}^K \pi_k = 1$ (the cluster membership probabilities sum to $1$), the ultimate optimization problem becomes:

$$\max_\theta \; Q(\theta, \theta^{\text{old}}) \quad \text{s.t.} \quad \sum_{k=1}^K \pi_k = 1$$

Since $Q(\theta, \theta^{\text{old}})$ decomposes into separate terms for each parameter, we can solve for one at a time (using a Lagrange multiplier for the constraint on $\pi$):

$$\mu_k^{\text{new}} = \frac{1}{N_k} \sum_{i=1}^N \gamma(z_{ik})\, x_i, \qquad \Sigma_k^{\text{new}} = \frac{1}{N_k} \sum_{i=1}^N \gamma(z_{ik})\, (x_i - \mu_k^{\text{new}})(x_i - \mu_k^{\text{new}})^\top, \qquad \pi_k^{\text{new}} = \frac{N_k}{N}$$
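To make the GMM updates concrete, here is a small NumPy sketch of the E and M steps on synthetic data. The data, the naive initialization, and the variable names are my own illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 2-D data from two well-separated Gaussians.
X = np.vstack([
    rng.normal([-3.0, 0.0], 0.5, size=(150, 2)),
    rng.normal([3.0, 1.0], 0.5, size=(150, 2)),
])
N, D, K = X.shape[0], X.shape[1], 2

# Naive deterministic initialization.
mu = np.array([[-1.0, 0.0], [1.0, 0.0]])
Sigma = np.stack([np.eye(D)] * K)
pi = np.full(K, 1.0 / K)

def gauss_pdf(X, mean, cov):
    """Multivariate normal density evaluated at each row of X."""
    d = X - mean
    inv = np.linalg.inv(cov)
    norm = 1.0 / np.sqrt((2 * np.pi) ** D * np.linalg.det(cov))
    return norm * np.exp(-0.5 * np.einsum("nd,de,ne->n", d, inv, d))

for _ in range(50):
    # E step: responsibilities gamma(z_ik) = posterior cluster probabilities.
    dens = np.stack([pi[k] * gauss_pdf(X, mu[k], Sigma[k]) for k in range(K)], axis=1)
    gamma = dens / dens.sum(axis=1, keepdims=True)
    # M step: the closed-form updates for mu_k, Sigma_k, pi_k.
    Nk = gamma.sum(axis=0)
    mu = (gamma.T @ X) / Nk[:, None]
    for k in range(K):
        d = X - mu[k]
        Sigma[k] = (gamma[:, k, None] * d).T @ d / Nk[k]
    pi = Nk / N

print(sorted(mu[:, 0]))  # component mean x-coordinates, near -3 and 3
```

Each loop iteration is one full EM round: the E step computes $\gamma(z_{ik})$, and the M step plugs the responsibilities into the closed-form maximizers of $Q$.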

K means as GMM

It is straightforward to see that K means is a specific instance of GMM. In GMM, assume the following:

  • Constant variance: $\Sigma_k = \sigma^2 I$ for all $k$
  • Constant membership probability: $\pi_k = \frac{1}{K}$ for all $k$
  • Hard membership assignment: $\gamma(z_{ik}) \in \{0, 1\}$

With these three additional assumptions, we have K means! To see this, let’s take a look at the E step and the M step.

E Step:

Since the membership assignment is hard, for each point we want to find the single class that maximizes its likelihood:

$$z_i = \operatorname*{arg\,max}_k\, \mathcal{N}(x_i \mid \mu_k, \sigma^2 I) = \operatorname*{arg\,min}_k\, \|x_i - \mu_k\|^2$$

Hence we see that E step is equivalent to assigning points to the nearest centroid.

M Step:

We only need to update $\mu_k$ according to the update rule derived above (the variances and mixture probabilities are fixed by assumption). With hard assignments, $\gamma(z_{ik}) = \mathbb{1}[z_i = k]$, so

$$\mu_k^{\text{new}} = \frac{1}{N_k} \sum_{i\,:\, z_i = k} x_i$$

This is equivalent to taking the mean of data points in each cluster.
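The two hard-assignment steps can be sketched in a few lines of plain Python; the 1-D toy data and the simple initialization below are my own choices:

```python
import random

random.seed(0)

# 1-D toy data: two clusters centered around 0 and 10.
data = [random.gauss(0, 1) for _ in range(100)] + [random.gauss(10, 1) for _ in range(100)]

# K means as hard-assignment EM with K = 2.
centroids = [min(data), max(data)]  # simple deterministic initialization
for _ in range(20):
    # E step: assign each point to its nearest centroid.
    assign = [min(range(2), key=lambda k: (x - centroids[k]) ** 2) for x in data]
    # M step: recompute each centroid as the mean of its assigned points.
    for k in range(2):
        members = [x for x, a in zip(data, assign) if a == k]
        if members:  # guard against empty clusters
            centroids[k] = sum(members) / len(members)

print(sorted(centroids))  # should land near the true centers [0, 10]
```

Compare this with the GMM loop: the soft responsibilities $\gamma(z_{ik})$ have become hard 0/1 assignments, and only the means are updated.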


I hope you were able to follow the theoretical derivation of EM and to see that GMM and K means are both specific instances of it. To read more about this material, I suggest you refer to the following:

  • Information Theory, Inference, and Learning Algorithms (MacKay) Chapter 20, 22, 33.7
  • Machine Learning: A Probabilistic Perspective (Murphy) Chapter 11.4
  • Pattern Recognition and Machine Learning (Bishop) Chapter 9
