Handbook of Markov Chain Monte Carlo (Chap. 1&5) Chang Liu 2014-11-17
Outline • Chapter 1: Introduction to MCMC • Chapter 5: MCMC using Hamiltonian dynamics
Introduction to Markov Chain Monte Carlo Charles J. Geyer • History • Markov Chains • Intuitions of MCMC • Elementary Theory of MCMC • The Metropolis-Hastings-Green Algorithm
Introduction to MCMC
• Brief history
  – The invention of the computer stimulated simulation methods.
  – Metropolis et al. (1953) simulated a liquid in equilibrium with its gas phase by a Markov chain.
  – Hastings (1970) generalized the Metropolis algorithm; simulations following his scheme are said to use the Metropolis–Hastings algorithm.
  – A special case of the Metropolis–Hastings algorithm was introduced by Geman and Geman (1984). Simulations following their scheme are said to use the Gibbs sampler.
  – Green (1995) generalized the Metropolis–Hastings algorithm: the Metropolis–Hastings–Green algorithm.
Markov Chains
• Definition
  – A sequence 𝑋1, 𝑋2, ⋯ of random elements of some set is a Markov chain if the conditional distribution of 𝑋𝑛+1 given 𝑋1, ⋯ , 𝑋𝑛 depends on 𝑋𝑛 only:
    𝑃(𝑋𝑛+1 | 𝑋1, ⋯ , 𝑋𝑛) = 𝑃(𝑋𝑛+1 | 𝑋𝑛)
  – State space 𝑆: the set in which the 𝑋𝑖 take values.
  – Transition probabilities: the conditional distribution of 𝑋𝑛+1 given 𝑋𝑛. For finite 𝑆 = {𝑥1, ⋯ , 𝑥𝑑}:
    𝑝𝑖𝑗 = 𝑃(𝑋𝑛+1 = 𝑥𝑗 | 𝑋𝑛 = 𝑥𝑖), 𝑖, 𝑗 = 1, ⋯ , 𝑑
    Stationary transition probabilities: the transition probabilities do not depend on 𝑛.
  – Initial distribution: the marginal distribution of 𝑋1.
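The definitions above can be sketched with a small simulation. The sketch below uses a hypothetical 3-state transition matrix 𝑃 (rows sum to 1, entry (𝑖, 𝑗) being 𝑝𝑖𝑗) and an illustrative helper `simulate_chain`; neither the matrix nor the helper comes from the text.

```python
import numpy as np

# Hypothetical 3-state transition matrix: P[i, j] = P(X_{n+1} = x_j | X_n = x_i).
# Each row is a conditional distribution, so each row sums to 1.
P = np.array([
    [0.5, 0.3, 0.2],
    [0.1, 0.6, 0.3],
    [0.2, 0.2, 0.6],
])

def simulate_chain(P, init, n_steps, rng):
    """Simulate a Markov chain with stationary transition probabilities.

    init    : initial distribution (the marginal distribution of X_1)
    n_steps : length of the simulated sequence
    Returns a list of visited state indices (0-based indices into S).
    """
    states = [rng.choice(len(init), p=init)]       # draw X_1 from the initial distribution
    for _ in range(n_steps - 1):
        # X_{n+1} depends on X_n only: sample from row P[states[-1]]
        states.append(rng.choice(P.shape[0], p=P[states[-1]]))
    return states

rng = np.random.default_rng(0)
# Start deterministically in state x_1 (index 0) and run 10,000 steps.
chain = simulate_chain(P, init=[1.0, 0.0, 0.0], n_steps=10_000, rng=rng)
```

Because the same matrix 𝑃 is used at every step 𝑛, the transition probabilities here are stationary in the sense defined above.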