
Computational Physics - M. Jensen, Episode 1 Part 9


CHAPTER 9. OUTLINE OF THE MONTE-CARLO STRATEGY

The second term on the rhs disappears since it is just the mean, and employing the definition of the variance σ² we have

    ω̃(z/N) = 1 − z²σ²/(2N²) + …,  (9.59)

resulting in

    [ω̃(z/N)]^N ≈ [1 − z²σ²/(2N²) + …]^N,  (9.60)

and in the limit N → ∞ we obtain

    p̃(z̄) = (1/(σ_N √(2π))) exp(−(z̄ − μ)²/(2σ_N²)),  (9.61)

which is the normal distribution with variance σ_N² = σ²/N, where σ² is the variance of the PDF p(x) and μ is also the mean of the PDF p(x).

Thus, the central limit theorem states that the PDF p̃(z̄) of the average of N random values corresponding to a PDF p(x) is a normal distribution whose mean is the mean value of the PDF p(x) and whose variance is the variance of the PDF p(x) divided by N, the number of values used to compute z̄. The theorem is satisfied by a large class of PDFs. Note however that for a finite N, it is not always possible to find a closed-form expression for p̃(x).

9.5 Improved Monte Carlo integration

In section 9.1 we presented a simple brute force approach to integration with the Monte Carlo method: we sampled over a given number of points distributed uniformly in the interval [0,1] with the weights ωᵢ = 1. Here we introduce two important topics which in most cases improve upon that simple brute force approach with the uniform distribution p(x) = 1 for x ∈ [0,1]. By improvements we mean a smaller variance and the need for fewer Monte Carlo samples, although each new Monte Carlo sample will most likely be more time-consuming than the corresponding ones of the brute force method.

The first topic deals with change of variables, and is linked to the cumulative function P(x) of a PDF p(x). Obviously, not all integration limits go from 0 to 1; rather, in physics we are often confronted with integration domains like x ∈ [0,∞) or x ∈ (−∞,∞). Since all random number generators give numbers in the interval x ∈ [0,1], we need a mapping from this integration interval to the explicit one under consideration.

The next topic deals with the shape of the integrand itself. Let us for the sake of simplicity just assume that the integration domain is again from 0 to 1.
If the function to be integrated, F(x), has sharp peaks and is zero or small for most values of x, most samples of x give contributions to the integral which are negligible. As a consequence we need many samples N to obtain sufficient accuracy in the region where F(x) is peaked. What do we do then? We try to find a new PDF p(x) chosen so as to match F(x), in order to render the integrand smooth. The new PDF has in turn an x domain which most likely has to be mapped from the domain of the uniform distribution.

Why care at all, and not be content with just a change of variables in cases where that is needed? Below we show several examples of how to improve a Monte Carlo integration through smarter choices of PDFs which render the integrand smoother. However, one classic example from quantum mechanics illustrates the need for a good sampling function.

In quantum mechanics, the probability distribution function is given by P(R) = ψ*(R)ψ(R), where ψ(R) is the eigenfunction arising from the solution of e.g., the time-independent Schrödinger equation. If ψ(R) is an eigenfunction, the corresponding energy eigenvalue is given by

    H(R)ψ(R) = Eψ(R),  (9.62)

where H(R) is the hamiltonian under consideration. The expectation value of H, assuming that the quantum mechanical PDF is normalized, is given by

    ⟨H⟩ = ∫ dR ψ*(R) H(R) ψ(R).  (9.63)

We could insert ψ(R)/ψ(R) right to the left of H and rewrite the last equation as

    ⟨H⟩ = ∫ dR ψ*(R)ψ(R) (H(R)ψ(R))/ψ(R),  (9.64)

or

    ⟨H⟩ = ∫ dR P(R) Ẽ(R),  (9.65)

which is on the form of an expectation value with

    Ẽ(R) = (H(R)ψ(R))/ψ(R).  (9.66)

The crucial point to note is that if ψ(R) is the exact eigenfunction itself with eigenvalue E, then Ẽ(R) reduces just to the constant E and we have

    ⟨H⟩ = ∫ dR P(R) E = E,  (9.67)

since P(R) is normalized.

However, in most cases of interest we do not have the exact ψ. But if we have made a clever choice for ψ(R), the expression Ẽ(R) exhibits a smooth behavior in the neighbourhood of the exact solution. This means in turn that when we do our Monte Carlo sampling, we will hopefully pick only relevant values for R.

The above example encompasses the main essence of the Monte Carlo philosophy.
It is a trial approach, where intelligent guesses lead to hopefully better results.

9.5.1 Change of variables

The starting point is always the uniform distribution

    p(x)dx = dx,  0 ≤ x ≤ 1,  (9.68)

with p(x) = 0 elsewhere, satisfying

    ∫ p(x) dx = 1.  (9.69)

All random number generators provided in the program library generate numbers in this domain.

When we attempt a transformation to a new variable x → y we have to conserve the probability

    p(y)dy = p(x)dx,  (9.70)

which for the uniform distribution implies

    p(y)dy = dx.  (9.71)

Let us assume that p(y) is a PDF different from the uniform PDF p(x) = 1 with x ∈ [0,1]. If we integrate the last expression we arrive at

    x(y) = ∫₀^y p(y′)dy′,  (9.72)

which is nothing but the cumulative distribution of p(y), i.e.,

    x(y) = P(y) = ∫₀^y p(y′)dy′.  (9.73)

This is an important result which has consequences for eventual improvements over the brute force Monte Carlo. To illustrate this approach, let us look at some examples.

Example 1

Suppose we have the general uniform distribution

    p(y)dy = dy/(b − a),  a ≤ y ≤ b.  (9.74)

If we wish to relate this distribution to the one in the interval x ∈ [0,1] we have

    p(y)dy = dy/(b − a) = dx,  (9.75)

and integrating we obtain the cumulative function

    x(y) = ∫_a^y dy′/(b − a),  (9.76)

yielding

    y = a + (b − a)x,  (9.77)

a well-known result!

Example 2, the exponential distribution

Assume that

    p(y) = e^{−y},  y ∈ [0,∞),  (9.78)

which is the exponential distribution, important for the analysis of e.g., radioactive decay. Again, p(x) is given by the uniform distribution with x ∈ [0,1], and with the assumption that the probability is conserved we have

    p(y)dy = e^{−y}dy = dx,  (9.79)

which yields after integration

    x(y) = P(y) = ∫₀^y e^{−y′}dy′ = 1 − e^{−y},  (9.80)

or

    y(x) = −ln(1 − x).  (9.81)

This gives us the new random variable y in the domain y ∈ [0,∞), determined through the random variable x ∈ [0,1] generated by functions like ran0. This means that if we can factor out e^{−y} from an integrand we may have

    I = ∫₀^∞ F(y)dy = ∫₀^∞ e^{−y} G(y)dy,  (9.82)

which we rewrite as

    ∫₀^∞ e^{−y} G(y)dy = ∫₀¹ G(y(x))dx ≈ (1/N) Σᵢ G(y(xᵢ)),  (9.83)

where xᵢ is a random number in the interval [0,1].

The algorithm for the last example is rather simple. In the function which sets up the integral, we simply need to call one of the random number generators like ran0, ran1, ran2 or ran3 in order to obtain numbers in the interval [0,1]. We obtain y by taking the logarithm of (1 − x).
Our calling function which sets up the new random variable y may then include statements like x = ran0(&idum); y = -log(1. - x);

Exercise 9.4

Make a function which computes random numbers for the exponential distribution p(y) = e^{−y} based on random numbers generated from the function ran0.

Example 3

Another function which provides an example for a PDF is

    p(y)dy = n y^{n−1} dy,  (9.84)

with y ∈ [0,1] and n > 0. It is normalizable, positive definite, analytically integrable, and the integral is invertible, allowing thereby the expression of a new variable in terms of the old one. The integral

    ∫₀¹ p(y)dy = ∫₀¹ n y^{n−1} dy  (9.85)

gives

    ∫₀¹ n y^{n−1} dy = 1,  (9.86)

which in turn gives the cumulative function

    x(y) = P(y) = ∫₀^y n y′^{n−1} dy′,  (9.87)

resulting in

    x(y) = y^n,  (9.88)

or

    y = x^{1/n}.  (9.89)

With the random variable x ∈ [0,1] generated by functions like ran0, we have again the appropriate random variable y for a new PDF.

Example 4, the normal distribution

For the normal distribution, expressed here as

    g(y)dy = (1/√(2π)) exp(−y²/2) dy,  (9.90)

it is rather difficult to find an inverse since the cumulative distribution is given by the error function erf. If we however switch to polar coordinates, we have for two variables x and y

    g(x)dx g(y)dy = (1/2π) exp(−(x² + y²)/2) dx dy,  (9.91)

resulting in, with x = r cosθ and y = r sinθ,

    (1/2π) exp(−r²/2) r dr dθ,  (9.92)

where the angle θ can be given by a uniform distribution in the region [0, 2π]. Following example 1 above, this implies simply multiplying random numbers x ∈ [0,1] by 2π. The variable r, defined for r ∈ [0,∞), needs to be related to random numbers x′ ∈ [0,1]. To achieve that, we introduce a new variable

    u = r²/2,  (9.93)

and define a PDF

    p(u)du = exp(−u)du,  (9.94)

with u ∈ [0,∞). Using the results from example 2, we have that

    u = −ln(1 − x′),  (9.95)

where x′ is a random number generated for x′ ∈ [0,1]. With

    x = r cosθ = √(2u) cosθ  (9.96)

and

    y = r sinθ = √(2u) sinθ,  (9.97)

we can obtain new random numbers x, y through

    x = √(−2 ln(1 − x′)) cosθ  (9.98)

and

    y = √(−2 ln(1 − x′)) sinθ,  (9.99)

with x′ ∈ [0,1] and θ ∈ 2π × [0,1]. A function which yields such random numbers for the normal distribution would include statements implementing Eqs. (9.98) and (9.99).

Exercise 9.4

Make a function which computes random numbers for the normal distribution based on random numbers generated from the function ran0.
9.5.2 Importance sampling

With the aid of the above variable transformations we address now one of the most widely used approaches to Monte Carlo integration, namely importance sampling.

Let us assume that p(y) is a PDF whose behavior resembles that of a function F(y) defined in a certain interval [a,b]. The normalization condition is

    ∫_a^b p(y)dy = 1.  (9.100)

We can rewrite our integral as

    I = ∫_a^b F(y)dy = ∫_a^b p(y) (F(y)/p(y)) dy.  (9.101)

This integral resembles our discussion on the evaluation of the energy for a quantum mechanical system in Eq. (9.64).

Since random numbers are generated for the uniform distribution p(x) with x ∈ [0,1], we need to perform a change of variables x → y through

    x(y) = ∫_a^y p(y′)dy′,  (9.102)

where we used

    p(x)dx = dx = p(y)dy.  (9.103)

If we can invert x(y), we find y(x) as well.

With this change of variables we can express the integral of Eq. (9.101) as

    I = ∫_a^b p(y) (F(y)/p(y)) dy = ∫₀¹ (F(y(x))/p(y(x))) dx,  (9.104)

meaning that a Monte Carlo evaluation of the above integral gives

    I ≈ (1/N) Σᵢ F(y(xᵢ))/p(y(xᵢ)).  (9.105)

The advantage of such a change of variables, in case p(y) follows F closely, is that the integrand becomes smooth and we can sample over relevant values for the integrand. It is however not trivial to find such a function p. The conditions on p which allow us to perform these transformations are

1. p is normalizable and positive definite,
2. it is analytically integrable and
3. the integral is invertible, allowing us thereby to express a new variable in terms of the old one.

The standard deviation is now, with the definition

    F̃ = F(y(x))/p(y(x)),  (9.106)

given by

    σ² = (1/N) Σᵢ F̃(xᵢ)² − ((1/N) Σᵢ F̃(xᵢ))².  (9.107)

The algorithm for this procedure is:

Use the uniform distribution to find the random variable x in the interval [0,1]; p(y) is a user-provided PDF. Evaluate thereafter

    I = ∫_a^b F(y)dy = ∫_a^b p(y) (F(y)/p(y)) dy  (9.108)

by rewriting

    ∫_a^b p(y) (F(y)/p(y)) dy = ∫₀¹ (F(y(x))/p(y(x))) dx,  (9.109)

since

    dx/dy = p(y).  (9.110)

Perform then a Monte Carlo sampling for

    ∫₀¹ (F(y(x))/p(y(x))) dx ≈ (1/N) Σᵢ F(y(xᵢ))/p(y(xᵢ)),  (9.111)

with xᵢ ∈ [0,1], and evaluate the variance as well according to Eq. (9.107).

Exercise 9.5

(a) Calculate the integral using brute force Monte Carlo with p(x) = 1 and importance sampling with p(x) = ae^{−x}, where a is a constant.

(b) Calculate the integral with p(x) = ae^{−x}, where a is a constant.
Determine the value of the constant which minimizes the variance.

9.5.3 Acceptance-Rejection method

This is a rather simple and appealing method after von Neumann. Assume that we are looking at an interval x ∈ [a,b], this being the domain of the PDF p(x). Suppose also that the largest value our distribution function takes in this interval is M, that is

    p(x) ≤ M,  x ∈ [a,b].  (9.112)

Then we generate a random number x from the uniform distribution for x ∈ [a,b] and a corresponding number s from the uniform distribution between [0,M]. If

    p(x) ≥ s,  (9.113)

we accept the new value of x, else we generate again two new random numbers x and s and perform the test in the latter equation again.

9.6 Monte Carlo integration of multidimensional integrals

When we deal with multidimensional integrals of the form

    I = ∫ dx₁ ∫ dx₂ … ∫ dx_d g(x₁,…,x_d),  (9.114)

with each xᵢ defined in the interval [aᵢ,bᵢ], we would typically need a transformation of variables of the form xᵢ = aᵢ + (bᵢ − aᵢ)tᵢ if we were to use the uniform distribution on the interval t ∈ [0,1]. In this case, we need a Jacobi determinant ∏ᵢ (bᵢ − aᵢ), and to convert the function g(x₁,…,x_d) to g(a₁ + (b₁ − a₁)t₁, …, a_d + (b_d − a_d)t_d).

As an example, consider the following six-dimensional integral

    I = ∫ dx dy g(x,y),  (9.115)

where

    g(x,y) = exp(−x² − y²)(x − y)²,  (9.116)

with x and y three-dimensional vectors. We can solve this integral by employing our brute force scheme, or using importance sampling and random variables distributed according to a gaussian PDF. For the latter, if we set the mean value μ = 0 and the standard deviation σ = 1/√2, we have

    exp(−x² − y²) = π³ f(x,y),  (9.117)

where f(x,y) = π⁻³ exp(−x² − y²) is the properly normalized six-dimensional gaussian PDF, and through

    ∫ f(x,y) dx dy = 1  (9.118)

we can rewrite our integral as

    I = π³ ∫ f(x,y)(x − y)² dx dy,  (9.119)

where f(x,y) is the gaussian distribution.

Below we list two codes, one for the brute force integration and the other employing importance sampling with a gaussian distribution. [...] The importance-sampling code relies on a function which returns gaussian random numbers by the polar method, patterned after Numerical Recipes' gasdev, with ran0 the uniform generator from the program library:

```c
double gaussian_deviate(long *idum)
{
    static int    iset = 0;
    static double gset;
    double fac, rsq, v1, v2;

    if (*idum < 0) iset = 0;
    if (iset == 0) {
        do {
            v1  = 2.*ran0(idum) - 1.0;
            v2  = 2.*ran0(idum) - 1.0;
            rsq = v1*v1 + v2*v2;
        } while (rsq >= 1.0 || rsq == 0.);
        fac  = sqrt(-2.*log(rsq)/rsq);
        gset = v1*fac;
        iset = 1;
        return v2*fac;
    } else {
        iset = 0;
        return gset;
    }
}
```

CHAPTER 10. RANDOM WALKS AND THE METROPOLIS ALGORITHM

10.2 Diffusion equation and random walks

Let us consider the one-dimensional diffusion equation. We study a large ensemble of particles performing Brownian motion along the x-axis. There is no interaction between the particles. We define w(x,t)dx as the probability of finding a given number of particles in an interval of length dx around x at a time t. The quantum physics equivalent of w(x,t) is the wave function itself. This diffusion interpretation of Schrödinger's equation forms the starting point for diffusion Monte Carlo techniques in quantum physics, since Schrödinger's equation is, for a free particle, nothing but the diffusion equation in complex time.

10.2.1 Diffusion equation

From experiment there are strong indications that the flux of particles j(x,t), viz., the number of particles passing x at a time t, is proportional to the gradient of w(x,t). This proportionality is expressed mathematically through

    j(x,t) = −D ∂w(x,t)/∂x,  (10.1)

where D is the so-called diffusion constant, with dimensionality length² per time. If the number of particles is conserved, we have the continuity equation

    ∂j(x,t)/∂x = −∂w(x,t)/∂t,  (10.2)

which leads to

    ∂w(x,t)/∂t = D ∂²w(x,t)/∂x²,  (10.3)

which is the diffusion equation in one dimension.

The interpretation of w(x,t) as a probability distribution imposes significant constraints on it, namely the normalization

    ∫ w(x,t) dx = 1,  (10.7)

together with the boundary conditions

    ∂ⁿw/∂xⁿ |_{x=±∞} = 0,  (10.8)

implying that, when we study the time derivative of ⟨x⟩ = ∫ x w(x,t) dx, we obtain after an integration by parts and using Eq. (10.3)

    ∂⟨x⟩/∂t = D ∫ x (∂²w/∂x²) dx,  (10.9)

leading to

    ∂⟨x⟩/∂t = D [x ∂w/∂x]_{x=±∞} − D ∫ (∂w/∂x) dx,  (10.10)

implying that

    ∂⟨x⟩/∂t = 0.  (10.11)

This means in turn that ⟨x⟩ is independent of time. For the second moment

    ⟨x²⟩ = ∫ x² w(x,t) dx,  (10.12)

we have, where we have performed an integration by parts as we did for ⟨x⟩,

    ∂⟨x²⟩/∂t = D [x² ∂w/∂x]_{x=±∞} − 2D ∫ x (∂w/∂x) dx.  (10.13)

A further integration by parts results in

    ∂⟨x²⟩/∂t = 2D ∫ w(x,t) dx = 2D,  (10.14)

leading to ⟨x²⟩ = 2Dt and the variance as

    ⟨x²⟩ − ⟨x⟩² = 2Dt.  (10.15)

The root mean square displacement after a time t is then

    √(⟨x²⟩ − ⟨x⟩²) = √(2Dt).  (10.16)

This should be contrasted to the displacement of a free particle with constant velocity, whose distance from the starting point grows linearly in time. A diffusing particle thus moves much more slowly from the starting point than would a free particle. We can visualize the above in the following figure. In Fig. 10.1 we have assumed that our distribution is given by a normal distribution with variance σ² = 2Dt, centered at x = 0. The distribution reads

    w(x,t)dx = (1/√(4πDt)) exp(−x²/(4Dt)) dx.  (10.17)

At a later time 2t the new variance is σ² = 4Dt, implying that the root mean square value has grown by a factor √2.

[Figure 10.1: Time development of a normal distribution with variance σ² = 2Dt. The solid line represents the distribution at the earlier time, the dotted line at the later time.]

10.2.2 Random walks

Consider now a random walker in one dimension which can jump either to the left or to the right, every step having length Δx = ±l (see Fig. 10.2).

[Figure 10.2: One-dimensional walker which can jump either to the left or to the right. Every step has length Δx = ±l.]

The mean displacement after n time steps is

    ⟨x(n)⟩ = Σᵢⁿ Δxᵢ = 0,  Δxᵢ = ±l,  (10.18)

since we have an equal probability of jumping either to the left or to the right. The value of ⟨x(n)²⟩ is

    ⟨x(n)²⟩ = (Σᵢⁿ Δxᵢ)² = Σᵢⁿ Δxᵢ² + Σ_{i≠j} ΔxᵢΔxⱼ = l²n + Σ_{i≠j} ΔxᵢΔxⱼ.  (10.19)

For many enough steps the non-diagonal contribution averages to zero, since

    ⟨ΔxᵢΔxⱼ⟩ = 0,  i ≠ j,  (10.20)

with Δxᵢ = ±l taken with equal probability. The variance is then

    ⟨x(n)²⟩ − ⟨x(n)⟩² = l²n.  (10.21)

It is also rather straightforward to compute the variance for unequal probabilities L and R of moving to the left and right; the result is

    ⟨x(n)²⟩ − ⟨x(n)⟩² = 4LR l²n.  (10.22)

In Eq. (10.21) the variable n represents the number of time steps. If we define n = t/Δt, we can then couple the variance result from a random walk in one dimension with the variance from the diffusion equation of Eq. (10.15) by defining the diffusion constant as

    D = l²/(2Δt).  (10.23)

In the next section we show in detail that this correspondence also holds for the underlying probability distributions.
