Gaussian Mixture Models
— #machine-learning #mathematics #statistical-models
Introduction
Gaussian Mixture Models (GMMs) are a powerful probabilistic approach for modeling data that is believed to be generated from several Gaussian distributions.
Simple clustering methods like K-means assign each data point to exactly one cluster, whereas a GMM represents each data point as a mixture of multiple Gaussian distributions, allowing for soft assignments based on probabilities. This makes GMMs useful where cluster boundaries aren't clearly defined or where the data is very noisy.
GMMs assume that the distribution of the data is a weighted sum of several Gaussian distributions. We don't know which Gaussian each data point came from, so we want to estimate the parameters of each Gaussian in the data distribution. These models are used heavily in speech recognition, image segmentation, anomaly detection, and density estimation.
# How to initialize and fit a GMM in Python with the GaussianMixture class from scikit-learn
from sklearn.mixture import GaussianMixture

gmm = GaussianMixture(n_components=3, covariance_type='full', random_state=0)
gmm.fit(X)  # X is an (n_samples, n_features) array of data points
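Once the model is fitted, scikit-learn exposes the soft assignments and the fitted parameters directly (still assuming X is the data array passed to fit above):
# Each row of predict_proba is a probability distribution over the 3 components
probs = gmm.predict_proba(X)   # soft assignments (responsibilities)
labels = gmm.predict(X)        # hard labels: the argmax of the soft assignments
print(gmm.weights_)            # fitted mixing coefficients
print(gmm.means_)              # fitted component means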
Notation
$K$: number of Gaussians
$X = \{x_1, x_2, \dots, x_N\}$: our data ($N$ points in $d$ dimensions)
$\pi_k$: mixing coefficient, where $\sum_{k=1}^{K} \pi_k = 1$
$\mu_k$: mean of the $k$th Gaussian
$\Sigma_k$: covariance matrix of the $k$th Gaussian
The model is then $p(x) = \sum_{k=1}^{K} \pi_k \, \mathcal{N}(x \mid \mu_k, \Sigma_k)$, where $\mathcal{N}(x \mid \mu, \Sigma)$ is a normal distribution with the following formulation:
$$\mathcal{N}(x \mid \mu, \Sigma) = \frac{1}{(2\pi)^{d/2} \, |\Sigma|^{1/2}} \exp\!\left(-\frac{1}{2}(x - \mu)^\top \Sigma^{-1} (x - \mu)\right)$$
Let's break down this distribution formula to understand what is going on
- $|\Sigma|$ is the determinant of the covariance matrix.
- $\frac{1}{(2\pi)^{d/2} |\Sigma|^{1/2}}$ is our normalization constant to make sure the probability density integrates to 1.
- The exponential term has the inverse of the covariance matrix, $\Sigma^{-1}$, to account for feature correlation and scaling.
- $(x - \mu)^\top \Sigma^{-1} (x - \mu)$ gives us the squared (Mahalanobis) distance from $x$ to the mean.
Overall, this tells us how far $x$ is from $\mu$, taking into account the shape, orientation, and correlations of the distribution.
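To make the formula concrete, here is a minimal NumPy sketch that evaluates this density by hand (the function name gaussian_pdf is just for illustration):
import numpy as np

def gaussian_pdf(x, mu, Sigma):
    # Density of a d-dimensional Gaussian evaluated at a single point x
    d = len(mu)
    diff = x - mu
    norm_const = 1.0 / np.sqrt((2 * np.pi) ** d * np.linalg.det(Sigma))  # normalization constant
    mahalanobis_sq = diff @ np.linalg.inv(Sigma) @ diff                  # squared distance term
    return norm_const * np.exp(-0.5 * mahalanobis_sq)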
Expectation Maximization (EM) algorithm
Now, to find the parameters of each of these Gaussians, we introduce the responsibility $\gamma_{ik}$, which represents the probability that data point $x_i$ belongs to Gaussian $k$.
E-step
Goal: For each point $x_i$, calculate how "responsible" each Gaussian is for it. To find $\gamma_{ik}$, we do:
$$\gamma_{ik} = \frac{\pi_k \, \mathcal{N}(x_i \mid \mu_k, \Sigma_k)}{\sum_{j=1}^{K} \pi_j \, \mathcal{N}(x_i \mid \mu_j, \Sigma_j)}$$
We are saying, "Given this point, how likely is it that it came from each of the Gaussians?"
I got confused between $\pi_k$ and $\gamma_{ik}$, so the difference is that $\pi_k$ gives us the overall proportion (a prior) of the dataset expected to belong to cluster $k$. Meanwhile, $\gamma_{ik}$ tells us the conditional probability that data point $x_i$ comes from cluster $k$.
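A minimal sketch of this E-step with NumPy/SciPy, assuming pi_, mus, and Sigmas are arrays holding the current estimates of the mixing coefficients, means, and covariances:
import numpy as np
from scipy.stats import multivariate_normal

def e_step(X, pi_, mus, Sigmas):
    # N x K matrix of weighted densities: pi_k * N(x_i | mu_k, Sigma_k)
    weighted = np.column_stack([
        pi_[k] * multivariate_normal.pdf(X, mean=mus[k], cov=Sigmas[k])
        for k in range(len(pi_))
    ])
    # Normalize each row so the responsibilities for a point sum to 1
    return weighted / weighted.sum(axis=1, keepdims=True)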
M-step
Now, we update the parameters of the Gaussians based on these calculated responsibilities.
First, we update the mixing coefficients by making each $\pi_k$ the expected proportion of data points belonging to component $k$:
$$\pi_k = \frac{N_k}{N}, \qquad N_k = \sum_{i=1}^{N} \gamma_{ik}$$
Then, we update $\mu_k$ via a weighted average of the data points, so the points assigned to component $k$ pull the mean towards their own average:
$$\mu_k = \frac{1}{N_k} \sum_{i=1}^{N} \gamma_{ik} \, x_i$$
Finally, we update the covariances $\Sigma_k$, which capture how points vary around the mean of component $k$:
$$\Sigma_k = \frac{1}{N_k} \sum_{i=1}^{N} \gamma_{ik} \, (x_i - \mu_k)(x_i - \mu_k)^\top$$
- $\gamma_{ik}$ gives us the responsibility of how much component $k$ "owns" data point $x_i$.
- $(x_i - \mu_k)(x_i - \mu_k)^\top$ is the outer product matrix that captures how the point deviates from the mean in every pair of dimensions.
- The numerator computes a weighted sum of these deviation matrices across all points.
- The denominator $N_k$ normalizes by the effective number of points assigned to component $k$.
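A matching sketch of the M-step, assuming gamma is the N x K responsibility matrix produced by the E-step sketch above:
import numpy as np

def m_step(X, gamma):
    N, d = X.shape
    Nk = gamma.sum(axis=0)                  # effective number of points per component
    pi_ = Nk / N                            # updated mixing coefficients
    mus = (gamma.T @ X) / Nk[:, None]       # responsibility-weighted means
    Sigmas = []
    for k in range(gamma.shape[1]):
        diff = X - mus[k]                   # deviations from the new mean
        Sigmas.append((gamma[:, k, None] * diff).T @ diff / Nk[k])  # weighted outer products
    return pi_, mus, np.array(Sigmas)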
Convergence
We continue this EM cycle until the log-likelihood converges, where the log-likelihood is:
$$\log L = \sum_{i=1}^{N} \log\!\left(\sum_{k=1}^{K} \pi_k \, \mathcal{N}(x_i \mid \mu_k, \Sigma_k)\right)$$
Specifically, we check if the change in log-likelihood between iterations is small:
$$\left|\log L^{(t)} - \log L^{(t-1)}\right| < \epsilon$$
where $\log L^{(t)}$ is the log-likelihood at iteration $t$ and $\epsilon$ is a small threshold.
The log-likelihood measures how well our mixture model explains the data. Since EM is guaranteed to increase (or maintain) the likelihood at each step, we continue until we reach a local maximum where further iterations yield negligible improvement.
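Putting the pieces together, here is a sketch of the convergence loop, reusing the e_step and m_step sketches above (tol plays the role of the threshold $\epsilon$):
import numpy as np
from scipy.stats import multivariate_normal

def log_likelihood(X, pi_, mus, Sigmas):
    # log L = sum_i log( sum_k pi_k * N(x_i | mu_k, Sigma_k) )
    weighted = np.column_stack([
        pi_[k] * multivariate_normal.pdf(X, mean=mus[k], cov=Sigmas[k])
        for k in range(len(pi_))
    ])
    return np.sum(np.log(weighted.sum(axis=1)))

def fit_em(X, pi_, mus, Sigmas, tol=1e-6, max_iter=200):
    prev = -np.inf
    for _ in range(max_iter):
        gamma = e_step(X, pi_, mus, Sigmas)      # E-step: responsibilities
        pi_, mus, Sigmas = m_step(X, gamma)      # M-step: parameter updates
        ll = log_likelihood(X, pi_, mus, Sigmas)
        if abs(ll - prev) < tol:                 # negligible improvement, stop
            break
        prev = ll
    return pi_, mus, Sigmas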
We can't guarantee that the log-likelihood surface has only one optimum. Thus, EM might not find the globally optimal solution and can converge to a poor local maximum, depending on initialization. That's why it is standard practice to run GMM several times with different initializations and then pick the run with the best log-likelihood.
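In scikit-learn this restart strategy is built in: the n_init parameter runs EM from several random initializations and keeps the best-scoring fit (again assuming X is our data array):
from sklearn.mixture import GaussianMixture

# 10 restarts; the parameters with the highest log-likelihood are kept
gmm = GaussianMixture(n_components=3, n_init=10, random_state=0).fit(X)
print(gmm.lower_bound_)  # log-likelihood bound reached by the best run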