10.1 Introduction
Each Gaussian distribution is characterized by its mean and variance. When we combine M Gaussian distributions into a Gaussian mixture model (GMM), each component acquires a third parameter: its mixing weight. The following equation represents a GMM with M components:
$$p(x \mid \theta) = \sum_{k=1}^{M} w_k \, p(x \mid \theta_k)$$

where each component density is a D-dimensional Gaussian:

$$p(x \mid \theta_k) = p(x \mid \mu_k, \Sigma_k) = \frac{1}{(2\pi)^{D/2} \, |\Sigma_k|^{1/2}} \, e^{-(1/2)(x-\mu_k)' \Sigma_k^{-1} (x-\mu_k)}$$
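The mixture density above is just a weighted sum of component densities, which can be evaluated directly. The following Python/NumPy sketch (the function names are illustrative, not from the text) computes it for the one-dimensional, three-component example used throughout this section:

```python
import numpy as np

def gaussian_pdf(x, mu, var):
    # 1-D Gaussian density N(x | mu, var)
    return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)

def gmm_pdf(x, weights, mus, variances):
    # p(x | theta) = sum_k w_k * N(x | mu_k, var_k)
    return sum(w * gaussian_pdf(x, m, v)
               for w, m, v in zip(weights, mus, variances))

# Three components matching the MATLAB example below
weights   = [0.3, 0.5, 0.2]
mus       = [-1.0, 0.0, 3.0]
variances = [2.25, 1.0, 0.25]

density = gmm_pdf(0.0, weights, mus, variances)
```

Because the weights sum to 1 and each component integrates to 1, the mixture is itself a valid probability density.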
% Means, variances, and mixing weights of the three components
mu1 = [-1];
mu2 = [0];
mu3 = [3];
sigma1 = [2.25];
sigma2 = [1];
sigma3 = [.25];
weight1 = [.3];
weight2 = [.5];
weight3 = [.2];
% Draw samples from each component in proportion to its weight
component_1 = mvnrnd(mu1,sigma1,300);
component_2 = mvnrnd(mu2,sigma2,500);
component_3 = mvnrnd(mu3,sigma3,200);
X = [component_1; component_2; component_3];
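For readers working outside MATLAB, the same sampling scheme can be sketched in Python/NumPy: draw each component's samples separately (note that `mvnrnd` takes a variance, while `rng.normal` takes a standard deviation) and pool them into one data set.

```python
import numpy as np

rng = np.random.default_rng(0)

# Same three components as the MATLAB snippet above
mus    = [-1.0, 0.0, 3.0]
stds   = np.sqrt([2.25, 1.0, 0.25])  # convert variances to std deviations
counts = [300, 500, 200]             # 0.3 / 0.5 / 0.2 of 1000 samples

# Draw each component's samples and concatenate them, as the MATLAB code does
X = np.concatenate([rng.normal(m, s, n)
                    for m, s, n in zip(mus, stds, counts)])
```

Drawing a fixed count per component mirrors the book's MATLAB code; a fully random alternative would first sample each point's component index from the weights.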
[Figure: "Gauss distribution" — mixture density curve plotted over the randomly produced numbers, with x ranging from −6 to 6.]
hold on;
% idx holds the cluster index (1, 2, or 3) assigned to each sample earlier
for i = 1:1000
    if idx(i) == 1
        plot(X(i),0,'r*')
    elseif idx(i) == 2
        plot(X(i),0,'b+')
    else
        plot(X(i),0,'go')
    end
end
title('Plot illustrating the cluster assignment');
ylim([-0.2 0.2]);
hold off;
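The loop above colors each sample by its cluster index `idx`. When the mixture parameters are known, such a hard assignment can be computed by evaluating each component's weighted density at a point and taking the largest; the following NumPy sketch (names illustrative, not from the text) shows this:

```python
import numpy as np

def gaussian_pdf(x, mu, var):
    # 1-D Gaussian density N(x | mu, var)
    return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)

# Known mixture parameters (same as the running example)
weights   = np.array([0.3, 0.5, 0.2])
mus       = np.array([-1.0, 0.0, 3.0])
variances = np.array([2.25, 1.0, 0.25])

def assign_clusters(x):
    # Responsibility of component k for point x is proportional to
    # w_k * N(x | mu_k, var_k); the hard assignment takes the argmax.
    x = np.atleast_1d(x)[:, None]                      # shape (n, 1)
    resp = weights * gaussian_pdf(x, mus, variances)   # shape (n, 3)
    return resp.argmax(axis=1)                         # like idx (0-based)

idx = assign_clusters([-2.0, 0.1, 3.2])  # one point near each component
```

Each point is claimed by the component whose weighted density dominates there, which is exactly what a fitted GMM's cluster assignment reports.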
[Figure: plot illustrating the cluster assignment — the 1000 samples along the x-axis from −6 to 6, each marked according to its assigned cluster.]