Computational Physics - M. Jensen, Episode 2, Part 1

CHAPTER 10 RANDOM WALKS AND THE METROPOLIS ALGORITHM

10.2 DIFFUSION EQUATION AND RANDOM WALKS

#include <iostream>
#include <fstream>
#include <iomanip>
#include "lib.h"
using namespace std;
// Function to read in data from screen, note call by reference
void initialise(int&, int&, double&);
// The Mc sampling for random walks
void mc_sampling(int, int, double, int *, int *);
// prints to screen the results of the calculations
void output(int, int, int *, int *);

int main()
{
  int max_trials, number_walks;
  double move_probability;
  // Read in data
  initialise(max_trials, number_walks, move_probability);
  int *walk_cumulative = new int[number_walks+1];
  int *walk2_cumulative = new int[number_walks+1];
  for (int walks = 1; walks <= number_walks; walks++){
    walk_cumulative[walks] = walk2_cumulative[walks] = 0;
  }   // end initialization of vectors
  // Do the mc sampling
  mc_sampling(max_trials, number_walks, move_probability,
              walk_cumulative, walk2_cumulative);
  // Print out results
  output(max_trials, number_walks, walk_cumulative, walk2_cumulative);
  delete [] walk_cumulative;   // free memory
  delete [] walk2_cumulative;
  return 0;
}   // end main function

The input and output functions are

void initialise(int& max_trials, int& number_walks, double& move_probability)
{
  cout << "Number of Monte Carlo trials =";
  cin >> max_trials;
  cout << "Number of attempted walks =";
  cin >> number_walks;
  cout << "Move probability =";
  cin >> move_probability;
}   // end of function initialise

void output(int max_trials,
            int number_walks, int *walk_cumulative, int *walk2_cumulative)
{
  ofstream ofile("testwalkers.dat");
  for (int i = 1; i <= number_walks; i++){
    double xaverage = walk_cumulative[i]/((double) max_trials);
    double x2average = walk2_cumulative[i]/((double) max_trials);
    double variance = x2average - xaverage*xaverage;
    ofile << setiosflags(ios::showpoint | ios::uppercase);
    ofile << setw(6) << i;
    ofile << setw(15) << setprecision(8) << xaverage;
    ofile << setw(15) << setprecision(8) << variance << endl;
  }
  ofile.close();
}   // end of function output

The algorithm is in the function mc_sampling and tests the probability of moving to the left or to the right by generating a random number.

void mc_sampling(int max_trials, int number_walks,
                 double move_probability, int *walk_cumulative,
                 int *walk2_cumulative)
{
  long idum;
  idum = -1;   // initialise random number generator
  for (int trial = 1; trial <= max_trials; trial++){
    int position = 0;
    for (int walks = 1; walks <= number_walks; walks++){
      if (ran1(&idum) <= move_probability) {
        position += 1;
      }
      else {
        position -= 1;
      }
      walk_cumulative[walks] += position;
      walk2_cumulative[walks] += position*position;
    }   // end of loop over walks
  }   // end of loop over trials
}   // end mc_sampling function

Fig. 10.3 shows that the variance increases linearly as a function of the number of time steps, as expected from the analytic results. Similarly, the mean displacement in Fig. 10.4 oscillates around zero.
[Figure 10.3: Time development of ⟨x²⟩ for a random walker. 100000 Monte Carlo samples were used with the function ran1 and a seed set to −1.]

[Figure 10.4: Time development of ⟨x(t)⟩ for a random walker. 100000 Monte Carlo samples were used with the function ran1 and a seed set to −1.]

Exercise 10.1
Extend the above program to a two-dimensional random walk with probability 1/4 for a move to the right, left, up or down. Compute the variance for both the x and y directions and the total variance.

10.3 Microscopic derivation of the diffusion equation

When solving partial differential equations such as the diffusion equation numerically, the derivatives are always discretized. Recalling our discussions from Chapter 3, we can rewrite the time derivative as

$$\frac{\partial w(x,t)}{\partial t} \approx \frac{w(i, n+1) - w(i, n)}{\Delta t},\tag{10.24}$$

whereas the gradient is approximated as

$$D\frac{\partial^2 w(x,t)}{\partial x^2} \approx D\,\frac{w(i+1, n) + w(i-1, n) - 2w(i, n)}{(\Delta x)^2},\tag{10.25}$$

resulting in the discretized diffusion equation

$$\frac{w(i, n+1) - w(i, n)}{\Delta t} = D\,\frac{w(i+1, n) + w(i-1, n) - 2w(i, n)}{(\Delta x)^2},\tag{10.26}$$

where n represents a given time step and i a step in the x-direction. We will come back to the solution of such equations in our chapter on partial differential equations, see Chapter 16. The aim here is to show that we can derive the discretized diffusion equation from a Markov process and thereby demonstrate the close connection between the important physical process of diffusion and random walks. Random walks allow for an intuitive way of picturing the process of diffusion. In addition, as demonstrated in the previous section, it is easy to simulate a random walk.

10.3.1 Discretized diffusion equation and Markov chains

A Markov process allows in principle for a microscopic description of Brownian motion. As with the random walk studied in the previous section, we consider a particle which moves along the x-axis in the form of a series of jumps with step length Δx = l. Time and space are discretized and the
subsequent moves are statistically independent, i.e., the new move depends only on the previous step and not on the results from earlier trials. We start at a position x = jl = jΔx and move to a new position x = il during a step Δt = ε, where i and j are integers. The original probability distribution function (PDF) of the particles is given by w_j(t = 0), where j refers to a specific position on the grid in Fig. 10.2, with j = 0 representing x = 0. The function w_j(t) is now the discretized version of w(x, t). We can regard the discretized PDF as a vector. For the Markov process we have a transition probability from a position x = jl to a position x = il given by

$$W_{ij}(\epsilon) = W(il - jl, \epsilon) = \begin{cases} \tfrac{1}{2} & |i - j| = 1 \\ 0 & \text{else} \end{cases}\tag{10.27}$$

We call W_{ij} the transition probability and we can represent it, see below, as a matrix. Our new PDF w_i(t = ε) is now related to the PDF at t = 0 through the relation

$$w_i(t = \epsilon) = \sum_j W(j \rightarrow i)\, w_j(t = 0).\tag{10.28}$$

This equation represents the discretized time-development of an original PDF. It is a microscopic way of representing the process shown in Fig. 10.1. Since both W and w represent probabilities, they have to be normalized, i.e., we require that at each time step we have

$$\sum_i w_i(t) = 1\tag{10.29}$$

and

$$\sum_i W(j \rightarrow i)(\epsilon) = 1.\tag{10.30}$$

The further constraints are 0 ≤ W_{ij} ≤ 1 and 0 ≤ w_j ≤ 1. Note that the probability for remaining at the same place is in general not necessarily equal to zero. In our Markov process we allow only for jumps to the left or to the right.

The time development of our initial PDF can now be represented through the action of the transition probability matrix applied n times. At a time t_n = nε our initial distribution has developed into

$$w_i(t_n) = \sum_j W_{ij}(t_n)\, w_j(0),\tag{10.31}$$

and defining

$$W(il - jl, n\epsilon) = \left(W^n(\epsilon)\right)_{ij},\tag{10.32}$$

we obtain

$$w_i(n\epsilon) = \sum_j \left(W^n(\epsilon)\right)_{ij}\, w_j(0),\tag{10.33}$$

or in matrix form

$$\hat{w}(n\epsilon) = \hat{W}^n(\epsilon)\, \hat{w}(0).\tag{10.34}$$

The matrix Ŵ can be written in terms of two matrices

$$\hat{W} = \frac{1}{2}\left(\hat{L} + \hat{R}\right),\tag{10.35}$$

where L̂ and R̂ represent the transition probabilities for a jump to the left or the right, respectively. For a 4×4 case we could write these matrices as
$$\hat{R} = \begin{pmatrix} 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{pmatrix}\tag{10.36}$$

and

$$\hat{L} = \begin{pmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \end{pmatrix}\tag{10.37}$$

However, in principle these are infinite dimensional matrices, since the number of time steps is very large or infinite. For the infinite case we can write these matrices as R_{ij} = δ_{i,(j+1)} and L_{ij} = δ_{(i+1),j}, implying that

$$\hat{L}\hat{R} = \hat{R}\hat{L} = 1\tag{10.38}$$

and

$$\hat{R} = \hat{L}^{-1}.\tag{10.39}$$

To see that L̂R̂ = R̂L̂ = 1, perform e.g., the matrix multiplication

$$\left(\hat{L}\hat{R}\right)_{ij} = \sum_k L_{ik} R_{kj} = \sum_k \delta_{(i+1),k}\,\delta_{k,(j+1)} = \delta_{i+1,j+1} = \delta_{ij},\tag{10.40}$$

and only the diagonal matrix elements are different from zero.

For the first time step we have thus

$$\hat{W} = \frac{1}{2}\left(\hat{L} + \hat{R}\right),\tag{10.41}$$

and using the properties in Eqs. (10.38) and (10.39) we have after two time steps

$$\hat{W}^2(2\epsilon) = \frac{1}{4}\left(\hat{L}^2 + \hat{R}^2 + 2\hat{R}\hat{L}\right),\tag{10.42}$$

and similarly after three time steps

$$\hat{W}^3(3\epsilon) = \frac{1}{8}\left(\hat{L}^3 + \hat{R}^3 + 3\hat{R}\hat{L}^2 + 3\hat{R}^2\hat{L}\right).\tag{10.43}$$

Using the binomial formula

$$\left(\hat{a} + \hat{b}\right)^n = \sum_{k=0}^{n} \binom{n}{k}\, \hat{a}^{n-k}\, \hat{b}^{k},\tag{10.44}$$

we have that the transition matrix after n time steps can be written as

$$\hat{W}^n(n\epsilon) = \frac{1}{2^n} \sum_{k=0}^{n} \binom{n}{k}\, \hat{R}^{k}\, \hat{L}^{n-k},\tag{10.45}$$

or

$$\hat{W}^n(n\epsilon) = \frac{1}{2^n} \sum_{k=0}^{n} \binom{n}{k}\, \hat{L}^{n-2k} = \frac{1}{2^n} \sum_{k=0}^{n} \binom{n}{k}\, \hat{R}^{2k-n},\tag{10.46}$$

and using R̂^m_{ij} = δ_{i,(j+m)} and L̂^m_{ij} = δ_{(i+m),j} we arrive at

$$W(il - jl, n\epsilon) = \begin{cases} \dfrac{1}{2^n}\dbinom{n}{\frac{1}{2}(n + i - j)} & |i - j| \le n \\[2ex] 0 & \text{else} \end{cases}\tag{10.47}$$

and n + i − j has to be an even number. We note that the transition matrix for a Markov process has three important properties:

- It depends only on the difference in space i − j; it is thus homogeneous in space.
- It is also isotropic in space, since it is unchanged when we go from (i, j) to (−i, −j).
- It is homogeneous in time, since it depends only on the difference between the initial time and the final time.

If we place the walker at x = 0 at t = 0 we can represent the initial PDF with w_i(0) = δ_{i,0}. Using Eq. (10.34) we have

$$w_i(n\epsilon) = \sum_j \left(W^n(\epsilon)\right)_{ij}\, w_j(0) = \sum_j \frac{1}{2^n}\binom{n}{\frac{1}{2}(n + i - j)}\,\delta_{j,0},\tag{10.48}$$

resulting in

$$w_i(n\epsilon) = \frac{1}{2^n}\binom{n}{\frac{1}{2}(n + i)},\qquad |i| \le n.\tag{10.49}$$

Using the recursion relation for the binomials

$$\binom{n+1}{\frac{1}{2}(n + 1 + i)} = \binom{n}{\frac{1}{2}(n + i + 1)} + \binom{n}{\frac{1}{2}(n + i - 1)},\tag{10.50}$$

we obtain, defining x = il, t = nε and setting

$$w(x, t) = w(il, n\epsilon) = w_i(n\epsilon),\tag{10.51}$$

$$w(x, t + \epsilon) = \frac{1}{2}\,w(x + l, t) + \frac{1}{2}\,w(x - l, t).\tag{10.52}$$

Adding and subtracting w(x, t) and multiplying both sides with l²/ε we have

$$\frac{w(x, t + \epsilon) - w(x, t)}{\epsilon} = \frac{l^2}{2\epsilon}\,\frac{w(x + l, t) - 2w(x, t) + w(x - l, t)}{l^2},\tag{10.53}$$
and identifying D = l²/(2ε) and letting l = Δx and ε = Δt we see that this is nothing but the discretized version of the diffusion equation. Taking the limits Δx → 0 and Δt → 0 we recover the diffusion equation

$$\frac{\partial w(x,t)}{\partial t} = D\frac{\partial^2 w(x,t)}{\partial x^2}.$$

10.3.2 Continuous equations

Hitherto we have considered discretized versions of all equations. Our initial probability distribution function was then given by w_i(0) = δ_{i,0}, and its time-development after a given time step Δt = ε is w(t = ε) = Ŵ w(t = 0). The continuous analog to w_i(0) is

$$w(\mathbf{x}) \rightarrow \delta(\mathbf{x}),\tag{10.54}$$

where we now have generalized the one-dimensional position x to a generic-dimensional vector x. The Kronecker δ function is replaced by the δ distribution function δ(x) at t = 0. The transition from a state j to a state i is now replaced by a transition to a state with position y from a state with position x. The discrete sum of transition probabilities can then be replaced by an integral and we obtain the new distribution at a time t + Δt as

$$w(\mathbf{y}, t + \Delta t) = \int W(\mathbf{y}, \mathbf{x}, \Delta t)\, w(\mathbf{x}, t)\, d\mathbf{x},\tag{10.55}$$

and after m time steps we have

$$w(\mathbf{y}, t + m\Delta t) = \int W(\mathbf{y}, \mathbf{x}, m\Delta t)\, w(\mathbf{x}, t)\, d\mathbf{x}.\tag{10.56}$$

When equilibrium is reached we have

$$w(\mathbf{y}) = \int W(\mathbf{y}, \mathbf{x}, t)\, w(\mathbf{x})\, d\mathbf{x}.\tag{10.57}$$

We can solve the equation for w(y, t) by making a Fourier transform to momentum space. The PDF w(x, t) is related to its Fourier transform w̃(k, t) through

$$w(x, t) = \int_{-\infty}^{\infty} dk\, \exp(ikx)\, \tilde{w}(k, t),\tag{10.58}$$

and using the definition of the δ-function

$$\delta(x) = \frac{1}{2\pi} \int_{-\infty}^{\infty} dk\, \exp(ikx),\tag{10.59}$$

we see that

$$\tilde{w}(k, 0) = \frac{1}{2\pi}.\tag{10.60}$$

We can then use the Fourier-transformed diffusion equation

$$\frac{\partial \tilde{w}(k, t)}{\partial t} = -D k^2\, \tilde{w}(k, t),\tag{10.61}$$

with the obvious solution

$$\tilde{w}(k, t) = \tilde{w}(k, 0)\, \exp\left[-(Dk^2 t)\right] = \frac{1}{2\pi}\, \exp\left[-(Dk^2 t)\right].\tag{10.62}$$

Using Eq. (10.58) we obtain

$$w(x, t) = \int_{-\infty}^{\infty} dk\, \exp(ikx)\, \frac{1}{2\pi}\, \exp\left[-(Dk^2 t)\right] = \frac{1}{\sqrt{4\pi D t}}\, \exp\left[-\frac{x^2}{4 D t}\right],\tag{10.63}$$

with the normalization condition

$$\int_{-\infty}^{\infty} w(x, t)\, dx = 1.\tag{10.64}$$

It is rather easy to verify by insertion that Eq. (10.63) is a solution of the diffusion equation. The solution represents the probability of finding our random walker at position x at time t
if the initial distribution was placed at x = 0 at t = 0.

There is another interesting feature worth observing. The discrete transition probability W itself is given by a binomial distribution, see Eq. (10.47). The results from the central limit theorem, see Sect. ??, state that the transition probability in the limit n → ∞ converges to the normal distribution. It is then possible to show that

$$W(il - jl, n\epsilon) \rightarrow W(y - x, \Delta t) = \frac{1}{\sqrt{4\pi D \Delta t}}\, \exp\left[-\frac{(y - x)^2}{4 D \Delta t}\right],\tag{10.65}$$

and that it satisfies the normalization condition and is itself a solution to the diffusion equation.

10.3.3 Numerical simulation

In the two previous subsections we have given evidence that a Markov process actually yields in the limit of infinitely many steps the diffusion equation. It links therefore in a physically intuitive way the fundamental process of diffusion with random walks. It could therefore be of interest to visualize this connection through a numerical experiment. We saw in the previous subsection that one possible solution to the diffusion equation is given by a normal distribution. In addition, the transition rate for a given number of steps develops from a binomial distribution into a normal distribution in the limit of infinitely many steps. To achieve this we construct in addition a histogram which contains the number of times the walker was in a particular position x. This is given by the variable probability, which is normalized in the output function. We have omitted the initialization function, since this is identical to program1.cpp of this chapter. The array probability extends from -number_walks to +number_walks.

/*
  programs/chap10/program2.cpp
  1-dim random walk program.
  A walker makes several trials steps with
  a given number of walks per trial
*/
#include <iostream>
#include <fstream>
#include <iomanip>
#include "lib.h"
using namespace std;
// Function to read in data from screen, note cal
l by reference
void initialise(int&, int&, double&);
// The Mc sampling for random walks
void mc_sampling(int, int, double, int *, int *, int *);
// prints to screen the results of the calculations
void output(int, int, int *, int *, int *);

int main()
{
  int max_trials, number_walks;
  double move_probability;
  // Read in data
  initialise(max_trials, number_walks, move_probability);
  int *walk_cumulative = new int[number_walks+1];
  int *walk2_cumulative = new int[number_walks+1];
  int *probability = new int[2*(number_walks+1)];
  for (int walks = 1; walks <= number_walks; walks++){
    walk_cumulative[walks] = walk2_cumulative[walks] = 0;
  }
  for (int walks = 0; walks <= 2*number_walks; walks++){
    probability[walks] = 0;
  }   // end initialization of vectors
  // Do the mc sampling
  mc_sampling(max_trials, number_walks, move_probability,
              walk_cumulative, walk2_cumulative, probability);
  // Print out results
  output(max_trials, number_walks, walk_cumulative,
         walk2_cumulative, probability);
  delete [] walk_cumulative;   // free memory
  delete [] walk2_cumulative;
  delete [] probability;
  return 0;
}   // end main function

The output function contains now the normalization of the probability as well and writes this to its own file:

void output(int max_trials, int number_walks,
            int *walk_cumulative, int *walk2_cumulative,
            int *probability)
{
  ofstream ofile("testwalkers.dat");
  ofstream pro
bfile("probability.dat");
  for (int i = 1; i <= number_walks; i++){
    double xaverage = walk_cumulative[i]/((double) max_trials);
    double x2average = walk2_cumulative[i]/((double) max_trials);
    double variance = x2average - xaverage*xaverage;
    ofile << setiosflags(ios::showpoint | ios::uppercase);
    ofile << setw(6) << i;
    ofile << setw(15) << setprecision(8) << xaverage;
    ofile << setw(15) << setprecision(8) << variance << endl;
  }
  ofile.close();
  // find norm of probability
  double norm = 0.;
  for (int i = -number_walks; i <= number_walks; i++){
    norm += (double) probability[i+number_walks];
  }
  // write probability
  for (int i = -number_walks; i <= number_walks; i++){
    double histogram = probability[i+number_walks]/norm;
    probfile << setiosflags(ios::showpoint | ios::uppercase);
    probfile << setw(6) << i;
    probfile << setw(15) << setprecision(8) << histogram << endl;
  }
  probfile.close();
}   // end of function output

The sampling part is still done in the same function, but contains now the setup of a histogram containing the number of times the walker visited a given position x.

void mc_sampling(int max_trials, int number_walks,
                 double move_probability, int *walk_cumulative,
                 int *walk2_cumulative, int *probability)
{
  long idum;
  idum = -1;   // initialise random number generator
  for (int trial = 1; trial <= max_trials; trial++){
    int position = 0;
    for (int walks = 1; walks <= number_walks; walks++){
      if (ran1(&idum) <= move_proba
bility) {
        position += 1;
      }
      else {
        position -= 1;
      }
      walk_cumulative[walks] += position;
      walk2_cumulative[walks] += position*position;
      probability[position+number_walks] += 1;
    }   // end of loop over walks
  }   // end of loop over trials
}   // end mc_sampling function

Fig. 10.5 shows the resulting probability distribution after n steps. We see from Fig. 10.5 that the probability distribution function resembles a normal distribution.

Exercise 10.2
Use the above program and try to fit the computed probability distribution with a normal distribution using your calculated values of ⟨x²⟩ and ⟨x⟩.

10.4 The Metropolis algorithm and detailed balance

An important condition we require that our Markov chain should satisfy is that of detailed balance. In statistical physics this condition ensures that it is e.g., the Boltzmann distribution which is generated when equilibrium is reached. The definition for being in equilibrium is that the rates at which a system makes a transition to or from a given state i have to be equal, that is

$$\sum_j W(j \rightarrow i)\, w_j = \sum_j W(i \rightarrow j)\, w_i.\tag{10.66}$$

Another way of stating that a Markov process has reached equilibrium is

$$\hat{w}(t = \infty) = \hat{W}\, \hat{w}(t = \infty).\tag{10.67}$$

However, the condition that the rates should equal each other is in general not sufficient to guarantee that we, after many simulations, generate the correct distribution. We therefore introduce an additional condition, namely that of detailed balance,

$$W(j \rightarrow i)\, w_j = W(i \rightarrow j)\, w_i.\tag{10.68}$$

At equilibrium detailed balance gives thus

$$\frac{W(j \rightarrow i)}{W(i \rightarrow j)} = \frac{w_i}{w_j}.\tag{10.69}$$

We introduce the Boltzmann distribution

$$w_i = \frac{\exp(-\beta E_i)}{Z},\tag{10.70}$$

which states that the probability of finding the system in a state i with energy E_i at an inverse temperature β = 1/(kT) is w_i ∝ exp(−βE_i). The denominator Z is a normalization constant
[Figure 10.5: Probability distribution for one walker after 10, 100 and 1000 steps.]

which ensures that the sum of all probabilities is normalized to one. It is defined as the sum of probabilities over all microstates of the system,

$$Z = \sum_i \exp(-\beta E_i).\tag{10.71}$$

From the partition function we can in principle generate all interesting quantities for a given system in equilibrium with its surroundings at a temperature T. This is demonstrated in the next chapter.

With the probability distribution given by the Boltzmann distribution we are now in a position where we can generate expectation values for a given variable A through the definition

$$\langle A \rangle = \sum_i A_i\, w_i = \frac{\sum_i A_i\, \exp(-\beta E_i)}{Z}.\tag{10.72}$$

In general, most systems have an infinity of microstates, making thereby the computation of Z practically impossible, and a brute force Monte Carlo calculation over a given number of randomly selected microstates may therefore not yield those microstates which are important at equilibrium. To select the most important contributions we need to use the condition for detailed balance. Since this is just given by the ratios of probabilities, we never need to evaluate the partition function Z. For the Boltzmann distribution, detailed balance results in

$$\frac{w_i}{w_j} = \exp(-\beta(E_i - E_j)).\tag{10.73}$$

Let us now specialize to a system whose energy is defined by the orientation of single spins. Consider the state i, with given energy E_i, represented by the following N spins:

↑ ↑ ↑ … ↑ ↓ ↑ … ↑ ↓
1 2 3 … k−1 k k+1 … N−1 N

We are interested in the transition with one single spinflip to a new state j with energy E_j:

↑ ↑ ↑ … ↑ ↑ ↑ … ↑ ↓
1 2 3 … k−1 k k+1 … N−1 N

This change from one microstate i (or spin configuration) to another microstate j is the configuration space analogue to a random walk on a lattice. Instead of jumping from one place to another in space, we 'jump' from one microstate to another. However, the selection of states has to generate a final distribution which is
the Boltzmann distribution. This is again the same as we saw for a random walker: for the discrete case we had always a binomial distribution, whereas for the continuous case we had a normal distribution. The way we sample configurations should result, when equilibrium is established, in the Boltzmann distribution. Else, our algorithm for selecting microstates has to be wrong.

Since we do not know the analytic form of the transition rate, we are free to model it as

$$W(i \rightarrow j) = g(i \rightarrow j)\, A(i \rightarrow j),\tag{10.74}$$

where g(i → j) is a selection probability while A(i → j) is the probability for accepting a move. It is also called the acceptance ratio. The selection probability should be the same for all possible spin orientations, namely

$$g(i \rightarrow j) = \frac{1}{N}.\tag{10.75}$$

With detailed balance this gives

$$\frac{g(j \rightarrow i)\, A(j \rightarrow i)}{g(i \rightarrow j)\, A(i \rightarrow j)} = \exp(-\beta(E_i - E_j)),\tag{10.76}$$

but since the selection ratio is the same for both transitions, we have

$$\frac{A(j \rightarrow i)}{A(i \rightarrow j)} = \exp(-\beta(E_i - E_j)).\tag{10.77}$$

In general, we are looking for those spin orientations which correspond to the average energy at equilibrium. We are in this case interested in a new state E_j whose energy is lower than E_i, viz., ΔE = E_j − E_i ≤ 0. A simple test would then be to accept only those microstates which lower the energy. Suppose we have ten microstates with energy E_0 ≤ E_1 ≤ E_2 ≤ … ≤ E_9. Our desired energy is E_0. At a given temperature T we start our simulation by randomly choosing state E_9. Flipping spins we may then find a path from E_9 → E_8 → E_7 → … → E_1 → E_0. This would however lead to biased statistical averages, since it would violate the ergodic hypothesis, which states that it should be possible for any Markov process to reach every possible state of the system from any starting point if the simulation is carried out for a long enough time. Any state in a Boltzmann distribution has a probability different from zero and if such a state cannot be reached from a given starting point, then the system is not ergodic. This means that another possible path to E_0 could be E_9 → E_7 → … → E_0 and so forth. Even though such a path could have a negligible probability it is still a
possibility, and if we simulate long enough it should be included in our computation of an expectation value. Thus, we require that our algorithm should satisfy the principle of detailed balance and be ergodic. One possible way is the Metropolis algorithm, which reads

$$A(j \rightarrow i) = \begin{cases} \exp(-\beta(E_i - E_j)) & E_i - E_j > 0 \\ 1 & \text{else} \end{cases}\tag{10.78}$$

This algorithm satisfies the condition for detailed balance and ergodicity. It is implemented as follows:

- Establish an initial energy E_b.
- Do a random change of this initial state by e.g., flipping an individual spin. This new state has energy E_t. Compute then ΔE = E_t − E_b.
- If ΔE ≤ 0 accept the new configuration.
- If ΔE > 0, compute w = e^{−βΔE}.
- Compare w with a random number r. If r ≤ w accept, else keep the old configuration.
- Compute the terms in the sums Σ A_s P_s.
- Repeat the above steps in order to have a large enough number of microstates.
- For a given number of MC cycles, compute then expectation values.

The application of this algorithm will be discussed in detail in the next two chapters.

10.5 Physics project: simulation of the Boltzmann distribution

In this project the aim is to show that the Metropolis algorithm generates the Boltzmann distribution

$$P(\beta) = \frac{e^{-\beta E}}{Z},\tag{10.79}$$

with β = 1/(kT) being the inverse temperature, E is the energy of the system and Z is the partition function. The only functions you will need are those to generate random numbers.

We are going to study one single particle in equilibrium with its surroundings, the latter modeled via a large heat bath with temperature T. The model used to describe this particle is that of an ideal gas in one dimension and with velocity −v or v. We are interested in finding P(v)dv, which expresses the probability for finding the system with a given velocity in the interval [v, v + dv]. The energy for this one-dimensional system is

$$E = \frac{1}{2} k T = \frac{1}{2} v^2,\tag{10.80}$$

with mass m = 1. In order to simulate the Boltzmann distribution, your program should contain the following ingredients:

- Reads in the temperature T, the number of Monte Carlo cycles, and the
initial velocity. You should also read in the change in velocity δv used in every Monte Carlo step. Let the temperature have dimension energy.

- Thereafter you choose a maximum velocity given by e.g., v_max ≈ 10√T. Then you construct a velocity interval defined by v_max and divide it in small intervals through v_max/N, with N ≈ 100−1000. For each of these intervals your task is to find out how many times a given velocity during the Monte Carlo sampling appears in each specific interval.

- The number of times a given velocity appears in a specific interval is used to construct a histogram representing P(v)dv. To achieve this you should construct a vector P[N] which contains the number of times a given velocity appears in the subinterval [v, v + dv].

In order to find the number of velocities appearing in each interval we will employ the Metropolis algorithm. A pseudocode for this is

for (montecarlo_cycles = 1; montecarlo_cycles <= max_cycles; montecarlo_cycles++) {
    ...
    // change speed as function of delta v
    v_change = (2*ran1(&idum) - 1)*delta_v;
    v_new = v_old + v_change;
    // energy change
    delta_E = 0.5*(v_new*v_new - v_old*v_old);
    ...
    // Metropolis algorithm begins here
    if (ran1(&idum) <= exp(-beta*delta_E)) {
        accept_step = accept_step + 1;
        v_old = v_new;
        ...
    }
    // thereafter we must fill in P[N] as a function of
    // the new speed
    P[?]
= ...
    // upgrade mean velocity, energy and variance
}

a) Make your own algorithm which sets up the histogram P(v)dv. Find the mean velocity, the energy, the energy variance and the number of accepted steps for a given temperature T. Study the change of the number of accepted moves as a function of δv. Compare the final energy with the analytic result E = kT/2 for one dimension. Choose a temperature T and set the initial velocity to zero, i.e., v₀ = 0. Try different values of δv. Check the final result for the energy as a function of the number of Monte Carlo cycles.

b) Make thereafter a plot of ln(P(v)) as function of E and see if you get a straight line. Comment the result.

Chapter 11 Monte Carlo methods in statistical physics

The aim of this chapter is to present examples from the physical sciences where Monte Carlo methods are widely applied. Here we focus on examples from statistical physics and discuss one of the most studied systems, the Ising model for the interaction among classical spins. This model exhibits both first and second order phase transitions and is perhaps one of the most studied cases in statistical physics and discussions of simulations of phase transitions.

11.1 Phase transitions in magnetic systems

11.1.1 Theoretical background

The model we will employ in our studies of phase transitions at finite temperature for magnetic systems is the so-called Ising model. In its simplest form the energy is expressed as

$$E = -J \sum_{\langle kl \rangle}^{N} s_k s_l - B \sum_{k}^{N} s_k,\tag{11.1}$$

with s_k = ±1, N is the total number of spins, J is a coupling constant expressing the strength of the interaction between neighboring spins and B is an external magnetic field interacting with the magnetic moment set up by the spins. The symbol ⟨kl⟩ indicates that we sum over nearest neighbors only. Notice that for J > 0 it is energetically favorable for neighboring spins to be aligned. This feature leads, at low enough temperatures, to a cooperative phenomenon called spontaneous magnetization. That is, through interactions
between nearest neighbors, a given magnetic moment can influence the alignment of spins that are separated from the given spin by a macroscopic distance. These long range correlations between spins are associated with a long-range order in which the lattice has a net magnetization in the absence of a magnetic field. In our further studies of the Ising model, we will limit the attention to cases with B = 0 only.

In order to calculate expectation values such as the mean energy ⟨E⟩ or the magnetization ⟨M⟩ in statistical physics at a given temperature, we need a probability distribution

$$P_i(\beta) = \frac{e^{-\beta E_i}}{Z},\tag{11.2}$$

with β = 1/(kT) being the inverse temperature, k the Boltzmann constant, E_i is the energy of a state i, while Z is the partition function for the canonical ensemble defined as

$$Z = \sum_{i=1}^{M} e^{-\beta E_i},\tag{11.3}$$

where the sum extends over all states M. P_i expresses the probability of finding the system in a given configuration i. The energy E_i for a specific configuration i is given by

$$E_i = -J \sum_{\langle kl \rangle}^{N} s_k s_l.\tag{11.4}$$

To better understand what is meant with a configuration, consider first the case of the one-dimensional Ising model with B = 0. In general, a given configuration of N spins in one dimension may look like

↑ ↑ ↑ … ↑ ↓ ↑ … ↑ ↓
1 2 3 … i−1 i i+1 … N−1 N

In order to illustrate these features let us further specialize to just two spins. With two spins, since each spin takes two values only, it means that in total we have 2² = 4 possible arrangements of the two spins. These four possibilities are

1 = ↑↑,  2 = ↑↓,  3 = ↓↑,  4 = ↓↓.

What is the energy of each of these configurations?
For small systems, the way we treat the ends matters. Two cases are often used. In the first case we employ what is called free ends. For the one-dimensional case, the energy is then written as a sum over a single index

$$E_i = -J \sum_{j=1}^{N-1} s_j s_{j+1}.\tag{11.5}$$

If we label the first spin as s₁ and the second as s₂ we obtain the following expression for the energy

$$E = -J s_1 s_2.\tag{11.6}$$

The calculation of the energy for the one-dimensional lattice with free ends for one specific spin-configuration can easily be implemented in the following lines

for (j = 1; j < N; j++) {
  energy += spin[j]*spin[j+1];
}
