www.elsolucionario.net

Preface

Thermodynamics deals with the general principles and laws that govern the behaviour of matter and with the relationships between material properties. The origins of these laws and quantitative values for the properties are provided by statistical mechanics, which analyses the interaction of molecules and provides a detailed description of their behaviour. This book presents a unified account of equilibrium thermodynamics and statistical mechanics using entropy and its maximisation. A physical explanation of entropy based upon the laws of probability is introduced. The equivalence of entropy and probability that results represents a return to the original viewpoint of Boltzmann, and it serves to demonstrate the fundamental unity of thermodynamics and statistical mechanics, a point that has become obscured over the years. The fact that entropy and probability are objective consequences of the mechanics of molecular motion provides a physical basis and a coherent conceptual framework for the two disciplines.

The free energy and the other thermodynamic potentials are shown simply to be the total entropy of a subsystem and reservoir; their minimisation at equilibrium is nothing but the maximum of the entropy mandated by the second law of thermodynamics, and is manifest in the peaked probability distributions of statistical mechanics. A straightforward extension to nonequilibrium states by the introduction of appropriate constraints allows the description of fluctuations and the approach to equilibrium, and clarifies the physical basis of the equilibrium state.

Although this book takes a different route to other texts, it shares with them the common destination of explaining material properties in terms of molecular motion. The final formulae and interrelationships are the same, although new interpretations and derivations are offered in places. The reasons for taking a detour on some of the less-travelled paths of thermodynamics and statistical mechanics are to view the vista from a different perspective, and to seek a fresh interpretation and a renewed appreciation of well-tried and familiar results. In some cases this reveals a shorter path to known solutions, and in others the journey leads to the frontiers of the disciplines.

The book is basic in the sense that it begins at the beginning and is entirely self-contained. It is also comprehensive, and contains an account of all of the modern techniques that have proven useful in modern equilibrium, classical statistical mechanics. The aim has been to make the subject matter broadly accessible to advanced students, whilst at the same time providing a reference text for graduate scholars and research scientists active in the field. The later chapters deal with more advanced applications, and while their details may be followed step-by-step, it may require a certain experience and sophistication to appreciate their point and utility. The emphasis throughout is on fundamental principles and upon the relationship between various approaches. Despite this, a deal of space is devoted to applications, approximations, and computational algorithms; thermodynamics and statistical mechanics were in the final analysis developed to describe the real world, and while their generality and universality are intellectually satisfying, it is their practical application that is their ultimate justification. For this reason a certain pragmatism that seeks to convince by physical explanation rather than to convict by mathematical sophistry pervades the text; after all, one person's rigor is another's mortis.

The first four chapters of the book comprise statistical thermodynamics. This takes the existence of weighted states as axiomatic, and from certain physically motivated definitions, it deduces the familiar thermodynamic relationships, free energies, and probability distributions. It is in this section that the formalism that relates each of these to entropy is introduced. The remainder of the book comprises statistical mechanics, which in the first place identifies the states as molecular configurations, shows the common case in which these have equal weight, and then goes on to derive the material thermodynamic properties in terms of the molecular ones. In successive chapters the partition function, particle distribution functions, and system averages, as well as a number of applications, approximation schemes, computational approaches, and simulation methodologies, are discussed. Appended is a discussion of the nature of probability.

The paths of thermodynamics and statistical mechanics are well-travelled and there is an extensive primary and secondary literature on various aspects of the subject. Whilst very many of the results presented in this book may be found elsewhere, the presentation and interpretation offered here represent a sufficiently distinctive exposition to warrant publication. The debt to the existing literature is only partially reflected in the list of references; these in general were selected to suggest alternative presentations, or further, more detailed, reading material, or as the original source of more specialised results. The bibliography is not intended to be a historical survey of the field, and, as mentioned above, an effort has been made to make the book self-contained.

At a more personal level, I acknowledge a deep debt to my teachers, collaborators, and students over the years. Their influence and stimulation are impossible to quantify or detail in full. Three people, however, may be fondly acknowledged: Pat Kelly, Elmo Lavis, and John Mitchell, who in childhood, school, and PhD taught me well.

Chapter 1
Prologue

1.1 Entropy in Thermodynamics and Statistical Mechanics

All systems move in the direction of increasing entropy. Thus Clausius introduced the concept of entropy in the middle of the 19th century.
In this - the second law of thermodynamics - the general utility and scope of entropy is apparent, with the implication being that entropy maximisation is the ultimate goal of the physical universe. Thermodynamics is based upon entropy and its maximisation. The fact that the direction of motion of thermal systems is determined by the increase in entropy differentiates thermodynamics from classical mechanics, where, as Newton showed in the 17th century, it is energy and its minimisation that plays the primary role.

It was quickly clear that entropy in some sense measured the disorder of a system, but it was not until the 1870s that Boltzmann articulated its physical basis as a measure of the number of possible molecular configurations of the system. According to Boltzmann, systems move in the direction of increasing entropy because such states have a greater number of configurations, and the equilibrium state of highest entropy is the state with the greatest number of molecular configurations. Although there had been earlier work on the kinetic theory of gases, Boltzmann's enunciation of the physical basis of entropy marks the proper beginning of statistical mechanics.

Despite the fact that thermodynamics and statistical mechanics have entropy as a common basis and core, they are today regarded as separate disciplines. Thermodynamics is concerned with the behaviour of bulk matter, with its measurable macroscopic properties and the relationships between them, and with the empirical laws that bind the physical universe. These laws form a set of fundamental principles that have been abstracted from long experience. A relatively minor branch of thermodynamics deals with the epistemological consequences of the few axioms. For the most part pragmatism pervades the discipline, the phenomenological nature of the laws is recognised, and the main concern lies with the practical application of the results to specific systems.

Statistical mechanics is concerned with calculating the macroscopic properties of matter from the behaviour of the microscopic constituents. 'Mechanics' refers to the fact that the particles' interactions are described by an energy function or Hamiltonian, and that their trajectories behave according to the usual laws of motion, either classical or quantum. 'Statistical' denotes the fact that a measurement yields the value of an observable quantity averaged over these trajectories in time. A typical macroscopic system has on the order of 10^23 molecules. It is not feasible to follow the motion of the individual particles, and in any case the number of measurable thermodynamic parameters of the system is very much less than the number of possible configurations of the particles. In theory and in practice the microscopic states of the system are inaccessible. One instead seeks the probability distribution of the microstates, and the consequent macroscopic quantities follow as averages over these distributions. Despite its probabilistic nature, statistical mechanics is able to make precise predictive statements because the number of microstates is so huge that the relative fluctuations about the average are completely negligible.

Entropy has always remained central to thermodynamics, but nowadays it has been relegated to a secondary role in statistical mechanics, where it generally emerges as just another thermodynamic property to be calculated from the appropriate probability distribution. This is unfortunate because what has become obscured over the years is the fundamental basis of these probability distributions. In fact these distributions are properly derived by entropy maximisation; indeed, from this basis can be derived all of equilibrium statistical mechanics. Likewise obscured in both disciplines is the nature of the various thermodynamic potentials and free energies. Despite their frequent recurrence throughout thermodynamics and statistical mechanics, it is not commonly appreciated that they represent nothing more than the maximum entropy of the system. The physical interpretation of entropy as the number or weight of the microscopic states of the system removes much of the mystery that traditionally shrouds it in thermodynamics.

The theme of this book is entropy maximisation. One aim is to show the unity of thermodynamics and statistical mechanics and to derive both from a common basis. A second goal is to derive both disciplines precisely and in their entirety, so that all of the various quantities that occur are defined and fully interpreted. A benefit of using the present self-contained maximum entropy approach consistently is the transparency of the results, which makes for relatively straightforward generalisations and extensions of the conventional results.

To begin, a simple example that clarifies the nature of entropy and the rationale for entropy maximisation will be explored. The basic notions of states, weights, and probability are then formally introduced and defined, and the fundamental equation for entropy that is the foundation of thermodynamics and statistical mechanics is given. The particular reservoir and constraint methodology used in the following thermodynamic chapters is presented in a general context and illustrated with reference to the initial example.

1.2 An Example: Scrambled Eggs

Figure 1.1: The 6!/2!4! = 15 configurations of 2 eggs in a 6-cell egg carton.

The direction toward equilibrium is that of increasing entropy, and systems remain at equilibrium once they attain it. Withstanding even royal command, the ultimate state initiated by Humpty Dumpty's fall was irreversible, as the anonymous author clearly noted. Nowadays the fractured, final fate of the unfortunate ovoid is recognised as a state of high entropy. Here the nature of entropy and the connection between entropy and equilibrium is illustrated with a simple example.

Suppose that n eggs are placed in a carton able to hold N eggs. The number of distinct arrangements, W_N(n), is the number of ways of choosing n objects from N in any order. This is the binomial coefficient, W_N(n) = N!/n!(N-n)! (cf. Fig. 1.1). This arises because there are N possible cells in which the first egg can be placed, which leaves N-1 for the second, ..., and finally N-n+1 possible cells for the last egg. That is, there are N!/(N-n)! ways of placing the eggs in the carton in order. However, these eggs are indistinguishable, and the n! permutations of the eggs cannot be counted as distinct configurations. This gives the remaining term in the denominator.

Suppose that the eggs are scrambled by choosing two cells at random and swapping their contents.¹ Obviously if both are empty or both are occupied nothing happens, but if only one is occupied a new configuration is generated.

It will prove convenient to deal with the logarithm of the number of configurations, rather than the number of configurations itself. Accordingly one defines

S_N(n) = k_B ln W_N(n).   (1.1)

This quantity is in fact the entropy (k_B is Boltzmann's constant), but for the present one need only note that because the logarithm is a monotonic function of its argument, maximising the number of configurations is the same as maximising the entropy.

¹ In general the transition rule determines the microstate weights, and as such it is central to equilibrium statistical mechanics.

Now introduce a second carton of size M containing m eggs. The number of configurations for this carton is W_M(m) = M!/m!(M-m)!, and its entropy is S_M(m) = k_B ln W_M(m). The total number of distinct configurations if the two cartons are kept separate is the product W_{N,M}(n, m) = W_N(n)W_M(m), since for each of the W_M(m) configurations of the second system there are W_N(n) configurations of the first. The total entropy is the sum of the two individual entropies, S_{N,M}(n, m) = k_B ln W_{N,M}(n, m) = S_N(n) + S_M(m). This shows the advantage of working with entropy rather than the number of configurations. Like the number of eggs, or the number of cells, it is an additive quantity, and such quantities in general are easier to deal with than products.

1.2.1 Equilibrium Allocation

What happens if interchange of eggs between the cartons is allowed via the scrambling procedure described above?
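Before answering, the counting above is easy to check numerically. A minimal sketch (the function names are illustrative, not from the text), verifying the Figure 1.1 count and the additivity of the entropy for two separate cartons:

```python
from math import comb, factorial, log

def W(N, n):
    """Number of configurations of n indistinguishable eggs in an N-cell carton."""
    return comb(N, n)

def S(N, n):
    """Entropy S_N(n) = ln W_N(n), in units of Boltzmann's constant k_B."""
    return log(W(N, n))

# Figure 1.1: 2 eggs in a 6-cell carton give 6!/2!4! = 15 configurations.
print(W(6, 2))                                    # 15

# The ordered count N!/(N-n)! divided by the n! permutations of identical eggs:
assert W(6, 2) == factorial(6) // (factorial(4) * factorial(2))

# Separate cartons: the weights multiply, so the entropies add.
N, n, M, m = 12, 4, 24, 8
assert abs(log(W(N, n) * W(M, m)) - (S(N, n) + S(M, m))) < 1e-12
```

Working with the logarithm turns the product of weights into a sum, which is the advantage of the entropy noted above.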
Intuitively one expects that the carton with the greatest concentration of eggs will lose them to the less concentrated carton. Eventually a steady state will be reached where the eggs are as likely to be transferred in one direction as the other. Here one expects the two cartons to have an equal concentration of eggs, and this steady state is called the equilibrium state. The reason that concentration is the determining quantity rather than simply the number of eggs is because at equilibrium a large carton will have proportionally more eggs than a small carton. In fact, each time an egg goes from one carton to the other an unoccupied cell goes in the opposite direction, which suggests that the steady state will treat occupied and unoccupied cells in an equivalent fashion.

To be more precise one needs to calculate the probability of moving an egg between the cartons. The probability of a cell chosen at random in the first carton being occupied is just the ratio of the number of eggs to the number of cells, namely n/N, and similarly the chance of choosing a free cell is (N-n)/N. For an interchange between cartons to occur the two cells must be in different cartons. The probability of choosing any cell in the first carton is N/(N+M), and the probability of choosing any cell in the second carton is M/(N+M). Hence the probability of the two chosen cells being in different cartons is 2NM/(N+M)^2, the factor of 2 arising because it doesn't matter which carton is chosen first. For an egg to leave the first carton, one must choose different cartons and both an occupied cell in the first carton and an unoccupied cell in the second. The chance of this is just the product of the probabilities, [2NM/(N+M)^2](n/N)[(M-m)/M] = 2n(M-m)/(N+M)^2. Conversely, the probability of an egg going from the second carton to the first is 2m(N-n)/(N+M)^2.

For the equilibrium or steady state situation the net flux must be 0, so these two must balance. The equilibrium number of eggs in each carton is denoted by n̄ and m̄. Equating the fluxes, the equilibrium condition is

n̄/(N-n̄) = m̄/(M-m̄), or n̄/N = m̄/M.   (1.2)

The concentration of eggs in each carton is indeed equal at equilibrium, as is the ratio of occupied cells to free cells. (For simplicity, one may assume that the numbers have been chosen so that this equilibrium condition possesses an integral solution.)

1.2.2 Maximum Entropy

The number of configurations, and hence the entropy, is a maximum at the equilibrium allocation. This may be proven by showing that the number of configurations decreases monotonically moving away from equilibrium. For the case that there are too many eggs in the first carton, define l = n - n̄ > 0. If the number of configurations corresponding to n is greater than the number of configurations for n+1, then the entropy is increasing moving toward equilibrium. This may be proven by taking the ratio of the respective numbers of configurations,

W_N(n̄+l)W_M(m̄-l) / W_N(n̄+l+1)W_M(m̄-l-1)
= [(n̄+l+1)!(N-n̄-l-1)! / (n̄+l)!(N-n̄-l)!] [(m̄-l-1)!(M-m̄+l+1)! / (m̄-l)!(M-m̄+l)!]   (1.3)
= (n̄+l+1)(M-m̄+l+1) / (N-n̄-l)(m̄-l)
> n̄(M-m̄) / (N-n̄)m̄ = 1.   (1.4)

Hence S_N(n̄+l) + S_M(m̄-l) > S_N(n̄+l+1) + S_M(m̄-l-1), and an analogous argument with l changed to -l gives S_N(n̄-l) + S_M(m̄+l) > S_N(n̄-l-1) + S_M(m̄+l+1). That is, the entropy decreases moving away from equilibrium, which is to say that it is a concave function of the allocation of eggs that attains its maximum at the equilibrium allocation.

The total number of distinct configurations in the case that eggs are transferable between the two cartons is W_{N+M}(n+m) = (N+M)!/(n+m)!(N+M-n-m)!, since all cells are now accessible to all eggs, and the entropy is S_{N+M}(n+m) = k_B ln W_{N+M}(n+m). The total number of configurations of the combined cartons must be greater than any of those of the isolated cartons with a fixed allocation of the eggs, because it includes all the configurations of the latter, plus all the configurations with a different allocation of the eggs. That is, W_{N+M}(n+m) ≥ W_N(n)W_M(m), or, in terms of the entropy, S_{N+M}(n+m) ≥ S_N(n) + S_M(m), for any n, m. (This is a strict inequality unless N or M is 0, or unless there are no eggs, n+m = 0, or no empty cells, n+m = N+M.) One concludes that the entropy of the two cartons able to exchange eggs is greater than the entropy of the two isolated cartons with the equilibrium allocation of eggs, which is greater than the entropy of any other allocation of eggs to the isolated cartons,

S_{N+M}(n+m) ≥ S_N(n̄) + S_M(m̄) ≥ S_N(n) + S_M(m),   (1.5)

where n̄ + m̄ = n + m, and n̄/N = m̄/M. This behaviour of the constrained entropy is shown in Fig. 1.2.

Figure 1.2: The number of configurations as a function of the number of eggs in the first carton, for a dozen eggs in total, n + m = 12, allocated to an ordinary carton, N = 12, and a twin carton, M = 24. The inset shows the constrained total entropy, in units of k_B, with the arrow indicating the equilibrium occupancy, n̄ = 4, and the dotted line giving the unconstrained entropy, ln[36!/12!24!].

1.2.3 Direction of Motion

It has been shown that the equilibrium or steady state is the one with the maximum entropy, and now it is shown that the system out of equilibrium is most likely to move in the direction of increasing entropy. The probability that the N carton with n eggs will gain an egg, p(n+1|n), is required. (Obviously the other carton will simultaneously lose an egg, m → m-1.) This was given above in equating the fluxes to find the equilibrium state,

p(n+1|n) = 2m(N-n)/(N+M)^2.   (1.6)

Similarly,

p(n-1|n) = 2n(M-m)/(N+M)^2.   (1.7)

The probability that the number of eggs in each carton is unchanged is obviously p(n|n) = 1 - p(n+1|n) - p(n-1|n). Now if there are too few eggs in the first carton compared to the equilibrium allocation, n < n̄ and m > m̄, then the odds of increasing the number of eggs in this carton are

p(n+1|n)/p(n-1|n) = m(N-n)/n(M-m) > m̄(N-n̄)/n̄(M-m̄) = 1.   (1.8)

Hence it is more likely that the transition n → n+1 will be made than n → n-1 if n < n̄ and m > m̄. On the other side of equilibrium one similarly sees that the transition tends in the opposite direction, so that in both cases the most likely transition is toward the equilibrium allocation of eggs. Given the monotonic decrease of entropy away from equilibrium, one concludes that the most likely flux of eggs is in the direction of increasing entropy.

1.2.4 Physical Interpretation

How are these results to be interpreted?
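Before interpreting them, the scrambling dynamics can be simulated directly. A minimal sketch (the setup matches Fig. 1.2: N = 12, M = 24, with a dozen eggs started in the first carton; variable names are illustrative), showing the occupancy drifting toward the equilibrium allocation n̄ = 4 at which the constrained entropy peaks:

```python
import random
from math import comb, log

random.seed(1)
N, M = 12, 24                                     # carton sizes, as in Fig. 1.2
eggs = 12
cells = [True] * eggs + [False] * (N + M - eggs)  # all eggs start in the first carton

samples = []
for step in range(200_000):
    i, j = random.sample(range(N + M), 2)         # choose two distinct cells at random
    cells[i], cells[j] = cells[j], cells[i]       # swap their contents
    samples.append(sum(cells[:N]))                # n, the occupancy of the first carton

# The long-run occupancy settles at the equal-concentration allocation
# n/N = m/M, i.e. n-bar = 4, apart from fluctuations about equilibrium.
avg = sum(samples[1000:]) / len(samples[1000:])
print(round(avg, 1))                              # close to 4

# The constrained entropy S_N(n) + S_M(m), in units of k_B, peaks at n-bar = 4.
S = lambda n: log(comb(N, n)) + log(comb(M, eggs - n))
assert max(range(eggs + 1), key=S) == 4
```

The swap move is exactly the transition rule of the text: swapping two empty or two occupied cells changes nothing, while swapping an occupied cell with an empty one in the other carton transfers an egg, with the probabilities of equations (1.6) and (1.7).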
First one distinguishes between a microstate and a macrostate. A microstate is one of the configurations of the eggs (i.e., a specification of the occupied cells). A macrostate is a specification of the number of eggs in each carton, but not the cells that they occupy. Hence there are many microstates for each macrostate. Specifying a macrostate corresponds to isolating the cartons and to disallowing the swapping of eggs between them. This acts as a constraint on the configurations that the system can assume, and hence the number of configurations for a specified macrostate is less than the number of configurations if no macrostate is specified. In other words, the entropy of a system constrained to be in a given macrostate is less than the entropy of an unconstrained system.

The flux of eggs from the initial allocation was interpreted as the approach to equilibrium. This did not occur because the total number of configurations or the total entropy was increasing; once the cartons were allowed to exchange eggs the number of possible configurations was fixed. Rather, the approach to equilibrium was a macroscopic flux, and isolating the cartons at any instant would have given a macrostate with a number of configurations larger than before. Hence it is the entropy constrained by the current value of the quantity in flux that increases approaching equilibrium, and that reaches its maximum value for the equilibrium macrostate. Obviously one can only make such statements on average, since there is nothing to prevent a temporary reversal in the flux direction due to the transitions between microstates as one approaches equilibrium. Likewise one expects fluctuations about the equilibrium macrostate once it is attained.

The preceding paragraph describes the increase in the constrained entropy during the approach to equilibrium, but it does not explain why such an increase occurs. As mentioned, once the cartons are allowed to exchange eggs the

with familiar notions of pressure and volume changes. For a reversible process, dS = 0, and one has

p(S, V, N) = p_0,   (B.8)

so that the internal and external pressures are in balance. Again this is consistent with everyday experience: when the internal and external pressures are equal, the volume doesn't change. It is indeed appropriate to make the physical identification of the pressure of the system with the volume derivative of the energy at constant entropy, p(S, V, N), which is equivalent to the original definition of p(E, V, N) in terms of the entropy derivative at constant energy.

In general, the physical identification of thermodynamic derivatives proceeds from the mechanical notion of work, and equates the appropriate coefficients with the conjugate energy derivative at constant entropy. The procedure is valid for isolated (or insulated) systems undergoing reversible (or quasi-static) changes. The latter may be called adiabatic transformations, but some caution should be exercised, since adiabatic is often used inappropriately to denote an insulated system itself. As was seen above, isolating a system is not sufficient to ensure constant entropy. It is necessary both for the system to be insulated and for the change to be reversible to be able to identify the thermodynamic derivatives.

B.2 Conjugate Variables

Pressure and volume form a conjugate pair, as do chemical potential and number, and temperature and entropy. Conjugate pairs consist of an intensive and an extensive variable whose product has the dimensions of energy. As was defined above, extensive variables scale with the size of the system since they are linear additive quantities. Examples that have already been met are V, N, E, and S. The fundamental quantity of thermodynamics is the
entropy as a function of all the extensive variables of the system. Intensive variables such as p, μ, and T do not scale with the size of the system; they may also be called field variables. In the example above the pressure is the field, and the volume may be regarded as the response of the system to an external field.

In general, one can consider an intensive field variable x and an extensive response X. One can attribute part of the total energy to external sources such that the change in the external energy with X is dE^ext = x^ext dX, where x^ext is the externally applied field. For example, changing the volume of the system by dV against an external pressure p^ext gives dE^ext = p^ext dV. This change in the external part of the energy may also be called the work done on the system. If the external field tends to decrease X (i.e., E^ext decreases if X decreases), then x^ext is positive.² If the external field variable is a constant, then the external energy is E^ext = x^ext X. (In certain cases the field variable may contain contributions from the system that vary with X, and the integration does not have this simple form.)

² In quantum mechanics, an infinitely small, infinitely slow perturbation induces what is known as an adiabatic transformation. Time-dependent perturbation theory says that the eigenstates are conserved and the system stays in the same state during such a transformation. The probability distribution of microstates is thus unchanged, as is the entropy, which is a functional of these.

The explicit expression for the energy due to external sources serves to identify the conjugate variables. For a truly isolated system in the absence of external fields and with the extensive parameter constrained, one defines the field variable of the system as the entropy derivative at constant internal energy,

(∂S(E^int, X)/∂X)_{E^int} = x/T.   (B.9)

Only the relevant extensive independent variables are shown here, the remaining ones being held constant during the differentiation. Of course in the absence of the external field the internal energy is the same as the total energy. When the external field is present, one separates the total energy into the internal part and the external part, dE^tot = dE^int + dE^ext. Above it was shown that for a reversible volume change the internal and external pressures were in balance. This property is preserved by the above definitions. The entropy derivative at constant total energy is

(∂S(X|E^int, x^ext)/∂X)_{E^tot} = (∂S/∂E^int)_X (∂E^int/∂X)_{E^tot} + (∂S/∂X)_{E^int} = -x^ext/T + x/T.   (B.10)

The first term arises because the change in entropy with internal energy is the inverse temperature, and because dE^int = -dE^ext if the total energy is fixed. One can see that the entropy of the system is maximised at fixed total energy when the internal and the external fields are in balance, x(X) = x^ext.

This procedure may be explicitly illustrated using pressure as an example. Suppose that the system has height h and cross-sectional area A, and its upper boundary is a movable piston of mass M in a gravitational field g. In this case the external energy is E^ext = Mgh = p^ext V, where the external pressure is p^ext = Mg/A. As far as the system is concerned, the origin of the external pressure is irrelevant. The total energy is E^tot = E^int + E^ext = E^int + p^ext V (this is also called the enthalpy). The external energy has a trivial effect on the entropy, so that one can write S(V|E^tot, p^ext) = S_0(V|E^int). The right-hand side is the entropy in the absence of the external pressure, and hence one defines

(∂S_0/∂E^int)_V = 1/T,  (∂S_0/∂V)_{E^int} = p/T,   (B.11)

where p is the internal pressure of the system. One also has

(∂S(V|E^int, p^ext)/∂V)_{E^tot} = (∂S_0/∂E^int)_V (∂E^int/∂V)_{E^tot} + (∂S_0/∂V)_{E^int} = -p^ext/T + p/T.   (B.12)

That is, the constrained entropy is maximal at the equilibrium volume, which is the one at which the internal pressure balances the external pressure. These results follow from the general formalism with V = X and p = x.

There are a number of common examples of non-pV work. A rod or length of wire under a tension f tends to expand, so that the change in the external part of the energy due to a change in length is dE^ext = -f dL. Hence the length is equivalent to X and the tension is equivalent to -x, so that

(∂S/∂L)_{E^int} = -f/T.   (B.13)

Note how the tension is of opposite sign convention to the pressure. Note also that it has not been specified whether this particular tension is at constant volume or at constant area. Similarly one can define a surface tension γ that is conjugate to the area of the system, so that dE^ext = -γ dA, where the surface tension satisfies

(∂S/∂A)_{E^int} = -γ/T.   (B.14)

These two examples are intended for illustration only, and more precise definitions will be required for specific cases.

Another common class of systems is those acted upon by an external field, with the extensive parameter being a constrained property of the system rather than a conserved one. For example, a system of mass M in a gravitational field g changes energy with height by an amount dE^ext = Mg dh. Accordingly, X = h and x = Mg, and hence

(∂S/∂h)_{E^int} = Mg/T.   (B.15)

Charge is a conserved quantity, and the conjugate variable is the electric potential. The change in the external part of the energy due to a change in the charge of the system is dE^ext = φ dQ. Here φ is the potential difference
between the system and the point to which the charge is transferred to If the system is at the higher potential, then transferring charge to the system increases the external part of the total energy, which is to say t h a t a high potential tends to decrease the charge in a system Accordingly one has X = Q, x = ~, and hence / 10/ _ _ A more complicated class of systems is those with spatially varying external fields Consider an external electric field t h a t polarises dielectric material, with the total polarisation being an unconserved extensive quantity In this example the external electric field, E~Xt(r), is due to external charges and is the field t h a t would be present in the absence of the dielectric material The interaction of a given polarisation density, P ( r ) , with this external electric field contributes to the total energy an amount V ~xt = - { dr E ~xt (r)- P ( r ) Jv www.elsolucionario.net (B.17) APPENDIX 412 B WORK AND CONJUGATE VARIABLES (Here the symbol U is used for the energy to avoid confusion with the electric field.) Evidently the external field tends to align the dipoles of the system, since this energy decreases as the number of dipoles pointing in the same direction as the field increases The functional derivative of this yields (SU ext - E e x t ( r ) 95P(F) (B.18) Hence X = P and x = - E e x t , and the entropy derivative at constant internal energy is (.) 
    (δS/δP(r))_{U_int} = −E(r)/T.    (B.19)

This is the field produced by an isolated system constrained to have a certain polarisation and energy. It is also the external field that would have to be applied to produce this polarisation in an equilibrated system.

When a paramagnetic sample is placed in a magnetic field B^ext it becomes magnetised, with the magnetisation density M(r) pointing in the same direction as the field. The analysis is identical to that of a polarised dielectric in an electric field, and one similarly obtains

    (δS/δM(r))_{U_int} = −B(r)/T.    (B.20)

B.2.1 Momentum

Linear additivity and conservation are the crux of the formalism for statistical thermodynamics. In classical mechanics there are just seven linearly additive constants of the motion: the energy and the components of linear and angular momentum. Hence the momentum derivatives of the entropy are important quantities.

A system of mass M in uniform motion with velocity v has total momentum P = Σ_i p_i = Mv. The peculiar momentum is that measured in a reference frame moving with the system, p̄_i = p_i − m_i v, so that Σ_i p̄_i = 0. Similarly, the peculiar kinetic energy is K̄ = Σ_i p̄_i²/2m_i = K − Mv²/2, and the peculiar energy is Ē = K̄ + U. The temperature is the peculiar energy derivative of the entropy,

    ∂S(Ē, N, V, P)/∂Ē = 1/T,    (B.21)

and since the number of configurations only depends upon the peculiar energy, holding the latter constant during momentum exchange fixes the entropy,

    ∂S(Ē, N, V, P)/∂P = 0.    (B.22)

Now consider a second system with velocity v^ext able to exchange momentum with the subsystem. From momentum conservation, dP^ext = −dP, the change in the energy of the second system following momentum exchange is dE^ext = v^ext·dP^ext = −v^ext·dP. Hence energy conservation gives dE = v^ext·dP. Also, the change in the peculiar energy of the subsystem is dĒ = d(E − P²/2M) = (v^ext − v)·dP, whereas that of the second system is dĒ^ext = d(E^ext − (P^ext)²/2M^ext) = (v^ext − v^ext)·dP^ext = 0. The change in the total entropy during momentum exchange at fixed total energy is

    (∂S_total(Ē, P; Ē^ext, P^ext)/∂P)_{E_total} = (∂S/∂Ē)_P (∂Ē/∂P)_{E_total} + (∂S/∂P)_{Ē} = (v^ext − v)/T.    (B.23)

The first equality follows because the entropy of the second system doesn't change; as shown above, its peculiar energy remains fixed. This result shows that equilibrium occurs when the two systems move with the same velocity. It also follows that the momentum derivative of the entropy of the subsystem at constant energy is the negative of its centre-of-mass velocity divided by the temperature,

    (∂S(E, N, V, P)/∂P)_E = −v/T.    (B.24)

A similar treatment of a system's macroscopic angular momentum J leads to the definition

    (∂S/∂J)_E = −ω/T,    (B.25)

where ω = J/I is the rotational velocity and I is the moment of inertia of the system. One concludes that the entropy depends nontrivially on the microscopic internal energy, and only trivially on the macroscopic kinetic and rotational energies.

Appendix C

Mathematical Results

C.1 Notation and Context

It seems impossible to develop an unambiguous mathematical notation that is independent of the context in which it is used. In the text a function is generally denoted f(x), where f is the function and x is the variable. Generally the argument is enclosed in parentheses, but on occasion, such as when dealing with a function of a function, brackets may be used for clarity, as in f(g[x]). However, brackets are also used to denote a functional, as in f[g], which depends for its value not on a single point x, but on all the values of the function g(x) on the appropriate interval. Parentheses (and brackets and braces) are also used to specify the hierarchy of arithmetical operations, and so, depending upon the context, the quantity f(x + y) could denote either the sum of the products fx and fy or the function of the sum x + y. The equality sign is used to denote
both equality, as in x² = 2, and to define a function, as in f(x) = x². The ambiguity in writing f(x) = 2 (does this define the constant function, or determine that x = √2?) can only be resolved from the context. Occasionally the equivalence sign, β ≡ 1/k_B T, is used to define a symbol that can be replaced by the string of symbols on the right-hand side wherever it occurs; as often as not, the equality sign is used instead.

Usually the argument of a function is merely a placeholder that assumes specific values; f(x) = f(y) when x = y. Although it would be preferable to use different symbols to denote distinct functions, in order to prevent a proliferation of symbols distinct functions are often denoted in the text by the same symbol; the arguments of the functions serve as well to distinguish them in these cases. For example, S(N, V, E) is the entropy of an isolated system, whereas S(N, V, T) is the total entropy of a subsystem in contact with a heat reservoir; these are not equal to each other even if E is numerically equal to T. It is arguable that it would be better to distinguish these by using subscripts, as in S_T(N, V, T), and this has been done when ambiguity is not precluded by the context. Because arguments are used as both variables and distinguishing marks, mixing up the order of the arguments in a function should not cause great confusion.

C.2 Partial Differentiation

In the case of partial differentiation, usually all arguments of the function are held fixed except that used for the derivative. Often this is made clear by showing the function with all its arguments in the numerator, or by placing the fixed variables as subscripts,

    ∂S(E, N, V)/∂E,    or    (∂S/∂E)_{N,V}.    (C.1)

The latter procedure is always followed when it is not the arguments of the function themselves that are fixed, but rather some combination thereof.

Much of thermodynamics is an exercise in partial differentiation, and here are summarised some of the main relationships. The partial derivative of a function of two variables, f(x, y), with respect to x with y held fixed is

    (∂f/∂x)_y = lim_{x₁→x₂} [f(x₁, y₁) − f(x₂, y₁)] / (x₁ − x₂).    (C.2)

Sometimes one may write ∂f(x, y)/∂x, or f_x(x, y), or f′(x), when no ambiguity exists. Because the derivative is simply the ratio of two differences, one has

    (∂f/∂x)_y = (∂x/∂f)_y⁻¹.    (C.3)

This assumes f = f(x, y) and x = x(f, y). The total differential is

    df = (∂f/∂x)_y dx + (∂f/∂y)_x dy.    (C.4)

Setting df = 0 and rearranging yields

    (∂y/∂x)_f = −(∂f/∂x)_y / (∂f/∂y)_x.    (C.5)

The derivative holding some function z(x, y) constant is

    (∂f/∂x)_z = (∂f/∂x)_y + (∂f/∂y)_x (∂y/∂x)_z.    (C.6)

For the proof one simply chooses two nearby points such that z(x₁, y₁) = z(x₂, y₂), and takes the limit of

    [f(x₁, y₁) − f(x₂, y₂)] / (x₁ − x₂) = [f(x₁, y₁) − f(x₂, y₁)] / (x₁ − x₂) + {[f(x₂, y₁) − f(x₂, y₂)] / (y₁ − y₂)} × (y₁ − y₂)/(x₁ − x₂).    (C.7)

Finally,

    (∂f/∂z)_y = (∂f/∂x)_y (∂x/∂z)_y,    (C.8)

as follows from the limit of

    [f(x₁, y₁) − f(x₂, y₁)] / [z(x₁, y₁) − z(x₂, y₁)] = {[f(x₁, y₁) − f(x₂, y₁)] / (x₁ − x₂)} × (x₁ − x₂)/[z(x₁, y₁) − z(x₂, y₁)].    (C.9)

C.3 Asymptotic Analysis

For asymptotic analysis three notations are used. The tilde symbol is used for the most precise characterisation of the asymptote:

    f(x) ~ ax⁻ⁿ, x → ∞,    means    lim_{x→∞} f(x)/(ax⁻ⁿ) = 1.    (C.10)

This says that for any small positive number ε there exists a large number X(ε) such that |f(x) − ax⁻ⁿ| < ε for all x > X(ε). The so-called 'big O' notation is used to signify the functional form of the asymptote:

    f(x) = O(x⁻ⁿ), x → ∞,    means    lim_{x→∞} f(x)/x⁻ⁿ = M,    (C.11)

where M is a bounded constant whose (unknown) value is normally nonzero. Finally, the 'little o' notation is used to set a bound on the asymptotic behaviour:

    f(x) = o(x⁻ᵐ), x → ∞,    means    lim_{x→∞} f(x)/x⁻ᵐ = 0,    (C.12)

which is to say that the function decays faster than x⁻ᵐ. Quite often a series is terminated by an ellipsis, ..., which is the same as saying that the neglected terms are little o of
the last exhibited term, or big O of what is obviously the next term in the sequence.

At the simplest level asymptotic expansions can be derived as simple Taylor expansions,

    f(x) ~ f(0) + x f′(0) + (x²/2) f″(0) + ...,    x → 0.    (C.13)

One of the more well-known asymptotic expansions is Stirling's approximation for the factorial,

    ln n! ~ n ln n − n + ln √(2πn) + O(n⁻¹),    n → ∞.    (C.14)

Perhaps the most common asymptotic expansions are based upon the binomial expansion,

    (1 + x)ⁿ ~ 1 + nx + [n(n − 1)/2!] x² + ...,    x → 0.    (C.15)

This terminates for positive integral values of n, but not more generally. The so-called harmonic series follows from this,

    1/(1 − x) ~ 1 + x + x² + ...,    x → 0.    (C.16)

Obviously x → 0 is the same as x⁻¹ → ∞; other limit points may be used. Asymptotic expansions are often differentiated and integrated fearlessly.

The distinction between a convergent series and an asymptotic expansion is that for a fixed value of the parameter x a convergent series becomes increasingly accurate as the number of terms in the series is increased, whereas the series representing an asymptotic expansion often diverges. An asymptotic expansion becomes more accurate for a fixed number of terms as the parameter approaches the limit point. In many cases increasing the number of terms in an asymptotic expansion means that the parameter must be taken closer to its limit point to achieve the same level of accuracy. For example, the asymptotic expansion represented by the harmonic series for fixed x > 1 is not a convergent series; however, for any |x| < 1 it is convergent. A more striking example is provided by the complementary error function, which has the asymptotic expansion

    √π z e^{z²} erfc z ~ 1 − 1/(2z²) + 3!/[2!(2z²)²] − ... + (2m)!/[(−2)ᵐ m!(2z²)ᵐ] + ...,    z → ∞.    (C.17)

Clearly, for fixed z the series would diverge, but for a fixed number of terms m the tail becomes increasingly small as z increases.

For the case of integration one might have a function f(x, y) that becomes sharply peaked about some x₀ in the asymptotic limit, y → ∞. In this case one can perform a Taylor expansion of a more slowly varying function g about the location of the peak and evaluate the integral,

    I(y) = ∫_{−∞}^{∞} dx g(x) f(x, y)
         = ∫_{−∞}^{∞} dx [g(x₀) + (x − x₀) g′(x₀) + ...] f(x, y)
         = g(x₀) I₀(y) + g′(x₀) I₁(y) + ...    (C.18)

One tends to find that the moment integrals are of the form Iₘ = O(y⁻ᵐ), or similar, so that this procedure generates an asymptotic expansion. Successive integration by parts is another fecund source of expansions. The asymptotic evaluation of convolution integrals was discussed in §9.6. There are of course many other and more powerful asymptotic techniques.¹ ²

¹ M. Abramowitz and I. A. Stegun, Handbook of Mathematical Functions, Dover, New York, 1965.
² N. G. de Bruijn, Asymptotic Methods in Analysis, Dover, New York, 1981.

Index

activity, 139, 188
  density expansion of, 197, 199
  functional, 254-256
addition theorem, 315
additive, 11, 18, 19, 409
adiabatic, 409
adiabatic incompressibility, 95
Aristotle
  logic of, 393
Asakura-Oosawa, 282
asymptote
  double layer
    long-range, 347
    planar, 343, 346
    spherical, 338
  electrolyte, 330-336
  pair, 210, 235-242
  solute, 283
  triplet, 224
average, 66
  definition of, 18
  phase space, 90
Axilrod-Teller potential, 157
Carnahan-Starling, 200
cavity, 178, 278, 280
  hard-sphere solvent, 292
cavity function, 226
charge, 153, 317, 411
  effective, 331, 333, 344
chemical potential, 36, 169, 232-235, 311
  definition of, 20
  density expansion of, 199
  hypernetted chain, 252
  mixture, 286
  simulation of, 365, 368
Clausius virial, 162
closure, 214-221
  functional expansion, 219-221
  mixture, 279
coexistence, 204
compressibility, 49, 72, 76, 80, 174, 229-231, 288
  density expansion of, 199
  ideal gas, 138, 145
  ratio of, 145
concavity, see entropy or potential
configuration integral, 158, 181
conjugate variables, 409
conserved variables, 19
constrained, see entropy or potential
contact theorem, 177, 226, 307
Coulomb potential, 153, 317
coupling constant, 232, 268, 271, 369
critical point, 230, 238, 240
  solute asymptote, 284
Curie temperature, 127
cylindrical solute, 297
Bayes' theorem, 383, 392, 394, 399
Bernoulli trial, 398
BGY, 310, 313, 315
  density, 306, 309
BGY hierarchy, 222-224
black-body radiation, 116
Born, see BGY
Bose-Einstein distribution, 114
bosons, 114
Bragg-Williams, 125
bridge diagram, 218, 226, 289
bridge function, 216, 233
  generator, 259
de Broglie wavelength, 159
Debye length, 319, 323, 333
Debye-Hückel, 320-322, 328
density, 31
  n-particle, 171
  activity expansion of, 196
  asymptote of, 173
  fluctuation potential, 147
  functional of, 151
  ideal functional of, 148
  ideal gas, 173
  pair, 171, 193, 207
  particle, 170, 192
  profile, 309, 337, 344
  singlet, 146, 170, 171, 193, 309
density expansion, 183
density functional, 245
  approximations, 252
  thermodynamic potential, 211
dependent variables, 20, 21, 29
depletion attraction, 282
Derjaguin approximation, 291
  cylinder, 299
diagrams
  cluster, 185-188
  functional derivative of, 191
  labelled, 186
dielectric, 154
diffusion coefficient, 371
dipole, 154, 412
  potential, 154, 157
direct correlation function, 211, 236, 253, 266
  generator, 211, 247
  singlet, 198, 309
distribution
  pair, 208
  particle, 170, 174
dumb-bell solute, 304
ensemble, 94
enthalpy, 41, 77, 105-108, 410
  concavity of, 80
entropy
  and equilibrium,
  and flux, 4
  and probability, 6, 12, 383
  Boltzmann, 10
  concavity of, 27, 59
  constant temperature, 100
  constrained, 22
  continuum, 13, 93, 387
  cross-, 248, 256, 260, 264
  definition of, 3, 4, 10-14, 17, 387
  density, 148
  density expansion of, 200
  derivatives of, 55
  Gibbs, 11
  history of,
  maximisation of,
  principle of maximum, 100, 394, 401-406
  quadratic form, 29, 59
  total, 65
  total differential of, 21
equilibrium, 21, 410
  approach to, 59
  definition of, 17
  state, 58
  thermal, 23
equipartition theorem, 165
equivalence of systems, 50, 80
ergodic hypothesis, 94
Euler's theorem, 163
extensive, 30, 51, 409
electric double layer, 322, 336
electric field, 411
electric potential, see potential
electro-neutral, 319, 337, 340, 349
energy
  average, 160, 175, 228
  density expansion of, 200
  derivatives of, 407
  fluctuation, 161
Fermi-Dirac distribution, 112
fermions, 111
ferromagnetism, 124
fluctuation
  energy, 138, 140, 144
  particle, 140
  volume, 141
fluctuation potential, see potential, constrained
fluctuations, 67
Fourier transform, 213, 279, 288
  two-dimensional, 298, 313
free energy, see potential, thermodynamic
  Gibbs, 40, 52, 105
  Helmholtz, 24, 52, 101
    density expansion of, 199
functional derivative, 189-192
  uniform, 190
Gauss' principle, 375
Gibbs, see free energy
Gibbs-Duhem equation, 52
Green, see BGY
Hamiltonian, 84, 352
Hankel transform, 298, 313
hard sphere, 155, 200-202, 225
  virial coefficient, 200
hard wall, 177
harmonic oscillator, 118, 129
heat, 408
heat bath, see reservoir
heat capacity, 48, 71, 77, 161, 169
  Debye, 120
  Einstein, 119
  ideal gas, 138, 142
  magnetic, 124, 128
  ratio of, 145
  solids, 118
Helmholtz, see free energy
hypernetted chain, 216, 221, 234, 241, 261
  density functional, 249
  singlet, 311
hypersphere, 131, 135
ideal gas, 135-145
  inhomogeneous, 145
independent variables, 21
intensive, 31, 409
interacting pores, 302
interaction
  spheres, 280, 338
  walls, 272, 274, 289, 345
interaction free energy, 271, 291, 306
irreversible processes, 408
Ising model, 122
isobaric, see reservoir, volume
isothermal, see reservoir, heat
Jaynes, 12, 394, 402
Jeffreys prior, 400
Kirkwood superposition, 224
Lagrange multiplier, 149, 151, 403, 405
Landau potential, 24
Legendre
  polynomial, 314
  transform, 25, 315
Lennard-Jones, 182, 202-204, 227
  critical point, 203
  potential, 155
Liouville's theorem, 95
Lovett-Mou-Buff, 310, 314, 315
macroion, 337
macrostate, see state
magnetic field, 122, 412
magnetisation, 124
magnetism
  model of, 121
maxent, see entropy, principle of maximum
Maxwell equal area, 204
Maxwell relation, 50
Mayer f-function, 182, 235
Mayer irreducible clusters, 197
mean field
  magnetisation, 124
mean field theory, 150-152, 318
mean spherical approximation, 217, 219, 234, 242
membrane, 300
Metropolis algorithm, 360, 377
microstate, see state
mixture, 39, 210, 214, 231, 239, 285
  closure, 279
  electrolyte, 323
modes
  normal, 129
molecular dynamics, see simulation
moment, 324, 327
  second, 325, 337
  zeroth, 324, 337
momentum, 412
  angular, 35-36, 413
  linear, 35-36
Monte Carlo, see simulation
multipole, 154
nonequilibrium potential, see potential, constrained
one-dimensional crystal, 129
open system, see reservoir, particle
Ornstein-Zernike equation, 211-214
  electrostatic, 333, 341
  inhomogeneous, 308-316
    planar, 312, 348
    spherical, 314
  singlet
    cylinder, 297
    dumb-bell, 304
    planar, 287-296, 341
    spherical, 277-287, 337
pair potential
  functional of, 263
partition function, 65, 158
  definition of, 17
  density expansion of, 194, 198
  general derivative of, 66
pendulum, 133
Percus-Yevick, 217, 220, 234, 242
periodic boundary conditions, 356
perturbation theory, 264-275
  grand potential, 268, 271
  Helmholtz free energy, 270
phase coexistence, 204
phase space, 84
  adiabatic incompressibility of, 95
  average, 90
  entropy, 93
  probability of, 89, 99
  trajectory, 85
Poisson-Boltzmann, 318-320
polarization, 411
pore, 300
potential
  chemical, see chemical potential
  constrained, 23, 53, 61
    convexity, 25
    energy, 24, 71, 101
    number, 38, 72
    pair, 263
    particle, 103
    variational, 25
    volume, 40, 42, 44, 74, 77, 79, 105, 107
  electric, 341, 343, 411
  fluctuation, 70
    density, 151, 247
    fluctuations of, 248
  grand, 38, 52, 103
  many-body, 158
  mean electrostatic, 318, 322, 341
  mean field, 150
  of mean force, 215
  pair, 153
  singlet, 157
  tail of, 266, 359
  thermodynamic, 23, 61
    activity functional, 254-256
    concavity of, 46-50, 64
    density functional, 211, 245-249
    derivatives of, 62
    Gibbs, 105
    Helmholtz, 101
    quadratic form, 65
    total derivative of, 63
  triplet, 151, 157, 163, 176, 196, 198, 222
pressure
  definition of, 20
  density expansion, 184
  direct correlation, 295
  virial, 162, 175
probability, 55-58
  belief, 392
  causality, 395
  density, 248, 383
  energy, 99
  frequency, 390
  Gaussian, 14, 60
  laws of, 382
  magnetisation, 124
  nature of, 389
  objective, 391, 394, 400
  of states, 9, 382, 383
  particle, 102
  phase space, see phase space
  prior, 398
  propensity, 391
  proportion, 391
  reservoir, 15
  subjective, 392, 394, 402
  time evolution of, 95
  transition, 376
  volume, 105, 106
radial distribution function, 175, 208, 223
reference system, 266
reservoir
  definition of, 21
  egg, 14
  exchange with, 56
  formalism of, 16
  generalised, 23
  heat, 21-27, 70-72, 98-101
  meta-, 244, 246, 257
  particle, 36-39, 72-74, 102-103
  probability, see probability, reservoir
  volume, 40-44, 74-78, 103-105
reversible processes, 408
ring diagrams, 259
rotation, 413
screening, 324, 349
  algebraic, 347
  exponential, 325
  length, 319, 329, 331, 333, 344
series function, 216
  generator, 259
set theory, 381
simulation
  average, 355, 359
  constrained, 374
  equilibration, 354
  grand canonical, 365-366
  isobaric, 364, 372
  isothermal, 373, 374, 376
  microscopic reversibility, 361
  molecular dynamics, 352-359
  Monte Carlo, 359-362
  neighbour table, 362-364
  periodic image, 356
  potential cutoff, 358
  preferential sampling, 367
  stochastic, 376
  time correlation, 370
  umbrella sampling, 367
singlet hypernetted chain, 311
slit pore, 289, 304
solute
  aggregation of, 285
  charged, 337, 340
  cylinder, 297
  free energy, 286
  hard-sphere, 278, 280, 292
  interaction of, 281
  planar, 177, 287-296, 340
  simulation of, 366
  spherical, 178, 277-287, 292, 337
solvation free energy, 286
  slab, 294
spectral distribution, 116
spin-lattice, 121
spinodal line, 230, 238, 240
  solute asymptote, 284
stability
  thermal, 30
state, 7,
  probability of, see probability, of states
  uniqueness of, 29
Stillinger-Lovett, 325
superposition approximation, 224
surface energy, see tension
susceptibility
  magnetic, 127
symmetry number, 186
system
  isolated, 19
tail correction, 359
temperature
  average, 167
  constant, see reservoir, heat
  definition of, 20
tension, 164, 178, 411
  direct correlation expansion, 296
thermodynamic limit, 32
thermodynamic potential, see potential, thermodynamic
thermodynamics
  first law, 407
  second law, 21, 28, 59
  zeroth law, 21
three-body potential, see potential, triplet
total correlation function, 208
  density expansion of, 209
  functional of, 256-263
trajectory, see phase space, trajectory
Triezenberg-Zwanzig, 309, 314, 315
van der Waals, 156
vibrational states, 115, 118
virial, see pressure
  coefficient, 185
    hard-sphere, 200, 227
    Lennard-Jones, 202
  equation, 162, 176
  expansion, 181, 185, 198
  pressure
    hard-sphere, 226
weight
  density, 383
  of states, 9, 383
Weiss, 125
Wertheim, 310, 314, 315
Widom, 170
work, 407, 409
Yvon, see BGY
Yvon equation, 219