Gibbs sampling is a way of simulating a random vector with a complex joint distribution when the conditional distribution of each coordinate given the other coordinates is simple. The conditional distributions are often easier to sample from than the joint distribution, for example because the model defines each variable through its conditional distribution. Suppose (X1, ... , Xp) is the random vector we want to simulate. The Gibbs sampler follows the algorithm below (a minimal code sketch follows the steps):

1] Initialize the random vector with starting values (X1(0), ... , Xp(0)), where Xi(k) denotes the value of Xi at iteration k.

2] Let (X1(k), ... , Xp(k)) be the current value of the random vector (X1, ... , Xp). Simulate the new realization X1(k+1) of variable X1 from the conditional distribution f( X1 | X2 = X2(k), ... , Xp = Xp(k) ).

3] Now (X1(k+1), X2(k), ... , Xp(k)) is the current value of the random vector (X1, ... , Xp). Simulate the new realization X2(k+1) of variable X2 from the conditional distribution f( X2 | X1 = X1(k+1), X3 = X3(k), ... , Xp = Xp(k) ).

4] Repeat step 3 for the remaining variables X3, ... , Xp in turn, always conditioning on the most recent values of the other coordinates, until you obtain the full realization (X1(k+1), ... , Xp(k+1)).

5] Repeat steps 2 - 4 many times. The successive realizations form a Markov chain whose stationary distribution is the joint distribution of (X1, ... , Xp), so after a burn-in period the simulated vectors are approximately distributed according to the true distribution.

The deviance information criterion (DIC) was introduced in 2002 by Spiegelhalter et al. to compare the relative fit of a set of Bayesian hierarchical models. It is similar to Akaike's information criterion (AIC) in combining a measure of goodness-of-fit with a measure of complexity, both based on the deviance. While AIC uses the maximum likelihood estimate, DIC's plug-in estimate is based on the posterior mean. Because the number of independent parameters in a Bayesian hierarchical model is not clearly defined, DIC estimates the effective number of parameters as the difference between the posterior mean of the deviance and the deviance at the posterior mean. This coincides with the number of independent parameters in fixed-effects models with flat priors, so DIC is a generalization of AIC. It can be justified as an estimate of posterior predictive model performance within a decision-theoretic framework, and it is asymptotically equivalent to leave-one-out cross-validation. DIC has been used extensively for practical model comparison in many disciplines and works well for exponential family models, but because of its dependence on the parametrization and focus of a model, its application to mixture models is problematic.
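In symbols (a standard formulation, stated here for clarity rather than quoted from the text above): writing the deviance as D(\theta) = -2 \log f(y \mid \theta), the effective number of parameters and the criterion are

    p_D = \overline{D} - D(\bar{\theta}), \qquad \mathrm{DIC} = D(\bar{\theta}) + 2 p_D = \overline{D} + p_D,

where \overline{D} = E[D(\theta) \mid y] is the posterior mean of the deviance and \bar{\theta} = E[\theta \mid y] is the posterior mean of the parameters. Lower DIC indicates better expected predictive performance.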

