chemical thermodynamics, and batteries. Chapter 4 is devoted to the thermodynamics of phase transitions and the use of thermodynamic stability theory in analyzing these phase transitions. We discuss first-order phase transitions in liquid–vapor–solid transitions, with particular emphasis on the liquid–vapor transition and its critical point and critical exponents. We also introduce the Ginzburg–Landau theory of continuous phase transitions and discuss a variety of transitions which involve broken symmetries, and we introduce the critical exponents which characterize the behavior of key thermodynamic quantities as a system approaches its critical point. In Chapter 5, we derive the probability density operator for systems in thermal contact with the outside world but isolated chemically (the canonical ensemble). We use the canonical ensemble to derive the thermodynamic properties of a variety of model systems, including semiclassical gases, harmonic lattices, and spin systems. We also introduce the concept of scaling of free energies as we approach the critical point, and we derive values for critical exponents using Wilson renormalization theory for some particular spin lattices. In Chapter 6, we derive the probability density operator for open systems (the grand canonical ensemble), and use it to discuss adsorption processes, properties of interacting classical gases, ideal quantum gases, Bose–Einstein condensation, Bogoliubov mean field theory, diamagnetism, and superconductors. The discrete nature of matter introduces fluctuations about the average (thermodynamic) behavior of systems. These fluctuations can be measured and give valuable information about decay processes and the hydrodynamic behavior of many-body systems. Therefore, in Chapter 7 we introduce the theory of Brownian motion, which is the paradigm theory describing the effect of underlying fluctuations on macroscopic quantities.
The relation between fluctuations and decay processes is the content of the so-called fluctuation–dissipation theorem, which is derived in this chapter. We also derive Onsager's relations between transport coefficients, and we introduce the mathematics needed to incorporate the effect of causality on correlation functions. We conclude this chapter with a discussion of thermal noise and Landauer conductivity in ballistic electron waveguides. Chapter 8 is devoted to hydrodynamic processes for systems near equilibrium. We derive the Navier–Stokes equations from the symmetry properties of a fluid of point particles, and we use the derived expression for entropy production to obtain the transport coefficients for the system. We also use the solutions of the linearized Navier–Stokes equations to predict the outcome of light-scattering experiments. We next derive a general expression for the entropy production in binary mixtures and use this theory to describe thermal and chemical transport processes in mixtures, and in electrical circuits. We conclude Chapter 8 with a derivation of hydrodynamic equations for superfluids and consider the types of sound that can exist in such fluids. In Chapter 9, we derive microscopic expressions for the coefficients of diffusion, shear viscosity, and thermal conductivity, starting both from mean free path arguments and from the Boltzmann and Lorentz–Boltzmann equations. We obtain explicit microscopic expressions for the transport coefficients of a hard-sphere gas.
Finally, in Chapter 10 we conclude with the fascinating subject of nonequilibrium phase transitions. We show how nonlinearities in the rate equations for chemical reaction–diffusion systems lead to nonequilibrium phase transitions which give rise to chemical clocks, nonlinear chemical waves, and spatially periodic chemical structures, while nonlinearities in the Rayleigh–Bénard hydrodynamic system lead to spatially periodic convection cells. The book contains Appendices with background material on a variety of topics. Appendix A gives a review of basic concepts from probability theory and the theory of stochastic processes. Appendix B reviews the theory of exact differentials, which is the mathematics underlying thermodynamics. In Appendix C, we review ergodic theory. Ergodicity is a fundamental ingredient for the microscopic foundations of thermodynamics. In Appendix D, we derive the second quantized formalism of quantum mechanics and show how it can be used in statistical mechanics. Appendix E reviews basic classical scattering theory. Finally, in Appendix F, we give some useful math formulas and data. Appendix F also contains solutions to some of the problems that appear at the end of each chapter. The material covered in this textbook is designed to provide a solid grounding in the statistical physics underlying most modern physics research topics.
2 Complexity and Entropy

2.1 Introduction

Thermodynamics and statistical physics describe the behavior of systems with many interacting degrees of freedom. Such systems have a huge number of microscopic states available to them, and they are continually passing between these states. The reason that we can say anything about the behavior of such systems is that symmetries (and conservation laws) exist that must be respected by the microscopic dynamics of these systems.

If we have had a course in Newtonian mechanics or quantum mechanics, then we are familiar with the effects of conservation laws on the dynamics of classical or quantum systems. However, in such courses, we generally deal only with very special systems (usually integrable systems) that have few degrees of freedom. We are seldom taught the means to deal with the complexity that arises when interacting systems have many degrees of freedom. Fortunately, nature has given us a quantity, called entropy, that is a measure of complexity. Thermodynamics shows us that entropy is one of the essential building blocks, together with conservation laws, for describing the macroscopic behavior of complex systems. The tendency of systems to maximize their entropy gives rise to effective forces (entropic forces). Two examples of entropic forces are the pressure of an ideal gas and the tension in an elastic band.

In this chapter, we focus on tools for measuring the complexity of systems with many degrees of freedom. We first describe methods for counting microscopic states. Then we introduce the measure of complexity, the entropy, that will play a fundamental role in everything we discuss in the remainder of the book.

2.2 Counting Microscopic States

The first step in counting the number of microscopic states, for a given system, is to identify what these states are. Once the states are identified, we can start the counting process.
A Modern Course in Statistical Physics, 4th Edition. Linda E. Reichl. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA. Published 2016 by WILEY-VCH Verlag GmbH & Co. KGaA.

It is useful to keep in mind two very important counting
principles [125, 146, 183]:

1. Addition principle: If two operations are mutually exclusive and the first can be done in m ways while the second can be done in n ways, then one or the other can be done in m + n ways.
2. Multiplication principle: If an operation can be performed in n ways, and after it is performed in any one of these ways a second operation is performed which can be performed in any one of m ways, then the two operations can be performed in n × m ways.

Let us consider some very simple examples which illustrate the use of these counting principles. As a first example (Exercise 2.1), we count the number of distinct signals that a ship can send if it has one flagpole and four distinct (distinguishable) flags. The number of distinct signals depends on the rules for distinguishing different signals.

Exercise 2.1

A ship has four distinct flags, W, X, Y, and Z, that it can run up its flagpole. How many different signals can it send (assuming at least one flag must be on the flagpole to create a signal)? Consider two different rules for defining a signal (a state): (a) the order of the flags on the flagpole is important and (b) the order of the flags is not important. (Note that the cases of one flag, two flags, three flags, and four flags on the flagpole are mutually exclusive. Therefore, we must find the number of signals for each case and add them.)

(a) Order of flags important. With one flag there are 4!∕(4 − 1)! = 4 signals, with two flags 4!∕(4 − 2)! = 12 signals, with three flags 4!∕(4 − 3)! = 24 signals, with four flags 4!∕(4 − 4)! = 24 signals, for a total of 4 + 12 + 24 + 24 = 64 signals. (b) Order of flags not important. With one flag there are 4!∕((4 − 1)!1!) = 4 signals, with two flags 4!∕((4 − 2)!2!) = 6 signals, with three flags 4!∕((4 − 3)!3!) = 4 signals, with four flags 4!∕((4 − 4)!4!) = 1 signal, for a total of 4 + 6 + 4 + 1 = 15 signals.
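The two counts in Exercise 2.1 can also be checked by direct enumeration. The following is a minimal Python sketch (the variable names are ours, not from the text) that builds every ordered and unordered selection of the four flags with `itertools`:

```python
from itertools import combinations, permutations

flags = ["W", "X", "Y", "Z"]

# (a) Order matters: count permutations of the 4 flags taken r at a time,
# for r = 1..4 flags on the pole, and add the mutually exclusive cases.
ordered = sum(len(list(permutations(flags, r))) for r in range(1, 5))

# (b) Order does not matter: count combinations instead.
unordered = sum(len(list(combinations(flags, r))) for r in range(1, 5))

print(ordered, unordered)  # 64 15
```

Summing over r is exactly the addition principle: the one-, two-, three-, and four-flag signals are mutually exclusive cases.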
In Exercise 2.1(a), the number of signals is given by the number of permutations of the flags, while for Exercise 2.1(b) the number of signals corresponds to the number of combinations of flags. Below we discuss these two quantities in more detail. A permutation is any arrangement of a set of N distinct objects in a definite order. The number of different permutations of N distinct objects is N!. To prove this, assume that we have N ordered spaces and N distinct objects with which to fill them. The first space can be filled in N ways, and after it is filled, the second space can be filled in (N − 1) ways, etc. Thus, the N spaces can be filled in N(N − 1)(N − 2) × ⋯ × 1 = N! ways. The number of different permutations of N objects taken R at a time is N!∕(N − R)!. To prove this, let us assume we have R ordered spaces to fill. Then the first can be filled in N ways, the second in (N − 1) ways, …, and the Rth in
(N − R + 1) ways. The total number of ways P^N_R that R ordered spaces can be filled using N distinct objects is P^N_R = N(N − 1) × ⋯ × (N − R + 1) = N!∕(N − R)!. A combination is a selection of N distinct objects without regard to order. The number of different combinations of N objects taken R at a time is N!∕((N − R)!R!). R distinct objects have R! permutations. If we let C^N_R denote the number of combinations of N distinct objects taken R at a time, then R!C^N_R = P^N_R and C^N_R = N!∕((N − R)!R!).

Exercise 2.2

A bus has seven seats facing forward, F, and six seats facing backward, B, so that F∕B → (∩ ∩ ∩ ∩ ∩ ∩ ∩)∕(∪ ∪ ∪ ∪ ∪ ∪). Nine (distinct) students get on the bus, but three of them refuse to sit facing backward. In how many different ways can the nine students be distributed among the seats on the bus?

Answer: Three students must face forward. The number of ways to seat three students in the seven forward-facing seats is equal to the number of permutations of seven objects taken three at a time, or 7!∕(7 − 3)! = 210. After these three students are seated, the number of ways to seat the remaining six students among the remaining ten seats is equal to the number of permutations of ten objects taken six at a time, or 10!∕(10 − 6)! = 151 200. Now using the multiplication principle, we find that the total number of distinct ways to seat the students is (210) × (151 200) = 31 752 000, which is an amazingly large number.

It is also useful to determine the number of distinct permutations of N objects when some of them are identical and indistinguishable. The number of permutations of a set of N objects which contains n1 identical elements of one kind, n2 identical elements of another kind, …, and nk identical elements of a kth kind is N!∕(n1!n2! ⋯ nk!), where n1 + n2 + ⋯ + nk = N. A simple example of this is given in Exercise 2.3.

Exercise 2.3

(a) Find the number of permutations of the letters in the word ENGINEERING.
(b) In how many ways are three E's together? (c) In how many ways are (only) two E's together?

Answer: (a) The number of permutations is 11!∕(3!3!2!2!) = 277 200, since there are 11 letters but two identical pairs (I and G) and two identical triplets (E and N). (b) The number of permutations with three E's together = the number of permutations of ENGINRING = 9!∕(3!2!2!) = 15 120. (c) The number of ways that only two E's are together = 8 × (15 120) = 120 960, since there are eight ways to insert EE into ENGINRING and its permutations.

When we are considering a physical system with N particles, the number of microscopic states can be enormous for even moderate values of N. In Exercise 2.4, we count the number of different microscopic magnetic states available to a collection of N spin-1∕2 particles.
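The formulas P^N_R, C^N_R, and N!∕(n1! ⋯ nk!) are easy to verify numerically. Below is a small Python sketch (the helper name `multiset_permutations` is ours); Python's standard `math.perm` and `math.comb` compute P^N_R and C^N_R directly, and the helper implements the repeated-element formula used in Exercises 2.2 and 2.3:

```python
import math
from collections import Counter

# math.perm(N, R) = N!/(N-R)!  and  math.comb(N, R) = N!/((N-R)! R!)

# Exercise 2.2: seat 3 students in the 7 forward seats, then the remaining
# 6 students in the 10 remaining seats; multiply (multiplication principle).
bus_seatings = math.perm(7, 3) * math.perm(10, 6)
print(bus_seatings)  # 210 * 151200 = 31752000

def multiset_permutations(word):
    """Distinct arrangements of `word`: N! / (n1! n2! ... nk!)."""
    total = math.factorial(len(word))
    for count in Counter(word).values():
        total //= math.factorial(count)
    return total

# Exercise 2.3(a): ENGINEERING has 11 letters, with E and N tripled
# and G and I doubled.
print(multiset_permutations("ENGINEERING"))  # 277200

# Exercise 2.3(b): glue the three E's into one symbol and permute ENGINRING.
print(multiset_permutations("ENGINRING"))    # 15120
```

Integer division (`//`) is exact here, because the multinomial coefficient is always an integer.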