Statistical thermodynamics 1: the concepts

Statistical thermodynamics provides the link between the microscopic properties of matter and its bulk properties. Two key ideas are introduced in this chapter. The first is the Boltzmann distribution, which is used to predict the populations of states in systems at thermal equilibrium. In this chapter we see its derivation in terms of the distribution of particles over available states. The derivation leads naturally to the introduction of the partition function, which is the central mathematical concept of this and the next chapter. We see how to interpret the partition function and how to calculate it in a number of simple cases. We then see how to extract thermodynamic information from the partition function. In the final part of the chapter, we generalize the discussion to include systems that are composed of assemblies of interacting particles. Very similar equations are developed to those in the first part of the chapter, but they are much more widely applicable.

The preceding chapters of this part of the text have shown how the energy levels of molecules can be calculated, determined spectroscopically, and related to their structures. The next major step is to see how a knowledge of these energy levels can be used to account for the properties of matter in bulk. To do so, we now introduce the concepts of statistical thermodynamics, the link between individual molecular properties and bulk thermodynamic properties.

The crucial step in going from the quantum mechanics of individual molecules to the thermodynamics of bulk samples is to recognize that the latter deals with the average behaviour of large numbers of molecules. For example, the pressure of a gas depends on the average force exerted by its molecules, and there is no need to specify which molecules happen to be striking the wall at any instant. Nor is it necessary to consider the fluctuations in the pressure as different numbers of molecules collide with the wall at different moments. The fluctuations in pressure are very small compared with the steady pressure: it is highly improbable that there will be a sudden lull in the number of collisions, or a sudden surge. Fluctuations in other thermodynamic properties also occur, but for large numbers of particles they are negligible compared to the mean values.

This chapter introduces statistical thermodynamics in two stages. The first, the derivation of the Boltzmann distribution for individual particles, is of restricted applicability, but it has the advantage of taking us directly to a result of central importance in a straightforward and elementary way. We can use statistical thermodynamics once we have deduced the Boltzmann distribution. Then (in Section 16.5) we extend the arguments to systems composed of interacting particles.

Chapter contents:

The distribution of molecular states
16.1 Configurations and weights
16.2 The molecular partition function
I16.1 Impact on biochemistry: The helix–coil transition in polypeptides

The internal energy and the entropy
16.3 The internal energy
16.4 The statistical entropy

The canonical partition function
16.5 The canonical ensemble
16.6 The thermodynamic information in the partition function
16.7 Independent molecules

Checklist of key ideas
Further reading
Further information 16.1: The Boltzmann distribution
Further information 16.2: The Boltzmann formula
Further information 16.3: Temperatures below zero
Discussion questions
Exercises
Problems
The distribution of molecular states

We consider a closed system composed of N molecules. Although the total energy is constant at E, it is not possible to be definite about how that energy is shared between the molecules. Collisions result in the ceaseless redistribution of energy not only between the molecules but also among their different modes of motion. The closest we can come to a description of the distribution of energy is to report the population of a state, the average number of molecules that occupy it, and to say that on average there are ni molecules in a state of energy εi. The populations of the states remain almost constant, but the precise identities of the molecules in each state may change at every collision.

The problem we address in this section is the calculation of the populations of states for any type of molecule in any mode of motion at any temperature. The only restriction is that the molecules should be independent, in the sense that the total energy of the system is a sum of their individual energies. We are discounting (at this stage) the possibility that in a real system a contribution to the total energy may arise from interactions between molecules. We also adopt the principle of equal a priori probabilities, the assumption that all possibilities for the distribution of energy are equally probable. A priori means in this context loosely 'as far as one knows'. We have no reason to presume otherwise than that, for a collection of molecules at thermal equilibrium, vibrational states of a certain energy, for instance, are as likely to be populated as rotational states of the same energy.

One very important conclusion that will emerge from the following analysis is that the populations of states depend on a single parameter, the 'temperature'. That is, statistical thermodynamics provides a molecular justification for the concept of temperature and some insight into this crucially important quantity.

16.1 Configurations and weights

Any individual molecule may exist in states with energies ε0, ε1, . . . . We shall always take ε0, the lowest state, as the zero of energy (ε0 = 0), and measure all other energies relative to that state. To obtain the actual internal energy, U, we may have to add a constant to the calculated energy of the system. For example, if we are considering the vibrational contribution to the internal energy, then we must add the total zero-point energy of any oscillators in the sample.

(a) Instantaneous configurations

At any instant there will be n0 molecules in the state with energy ε0, n1 with ε1, and so on. The specification of the set of populations n0, n1, . . . in the form {n0, n1, . . . } is a statement of the instantaneous configuration of the system. The instantaneous configuration fluctuates with time because the populations change. We can picture a large number of different instantaneous configurations. One, for example, might be {N,0,0, . . . }, corresponding to every molecule being in its ground state. Another might be {N − 2,2,0,0, . . . }, in which two molecules are in the first excited state. The latter configuration is intrinsically more likely to be found than the former because it can be achieved in more ways: {N,0,0, . . . } can be achieved in only one way, but {N − 2,2,0, . . . } can be achieved in ½N(N − 1) different ways (Fig. 16.1; see Justification 16.1).
At this stage in the argument, we are ignoring the requirement that the total energy of the system should be constant (the second configuration has a higher energy than the first). The constraint of total energy is imposed later in this section.

Fig. 16.1 Whereas a configuration {5,0,0, . . . } can be achieved in only one way, a configuration {3,2,0, . . . } can be achieved in the ten different ways shown here, where the tinted blocks represent different molecules.
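As a brief computational aside (not part of the text), the counting behind Fig. 16.1 can be checked directly. The short Python sketch below enumerates the choices of which molecules occupy the excited state and confirms the ½N(N − 1) rule quoted above; the variable names are our own.

```python
# A small combinatorial check (not part of the text): count the distinct ways
# of choosing which molecules occupy the excited state(s) of a configuration.
from itertools import combinations
from math import comb

# Fig. 16.1: with N = 5 distinguishable molecules, the configuration
# {3,2,0, . . . } is fixed by choosing which 2 molecules sit in state 1.
N = 5
ways = sum(1 for _ in combinations(range(N), 2))
print(ways)                      # 10, the ten arrangements of Fig. 16.1

# {N - 2,2,0, . . . } in general: C(N, 2) = N(N - 1)/2 distinct arrangements.
for n in (5, 10, 1000):
    assert comb(n, 2) == n * (n - 1) // 2
```

The same counting argument, generalized to arbitrary populations, is what eqn 16.1 expresses below.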
If, as a result of collisions, the system were to fluctuate between the configurations {N,0,0, . . . } and {N − 2,2,0, . . . }, it would almost always be found in the second, more likely state (especially if N were large). In other words, a system free to switch between the two configurations would show properties characteristic almost exclusively of the second configuration. A general configuration {n0,n1, . . . } can be achieved in W different ways, where W is called the weight of the configuration. The weight of the configuration {n0,n1, . . . } is given by the expression

W = N!/(n0!n1!n2! · · · )     (16.1)

Equation 16.1 is a generalization of the formula W = ½N(N − 1), and reduces to it for the configuration {N − 2,2,0, . . . }.

Comment 16.1 More formally, W is called the multinomial coefficient (see Appendix 2). In eqn 16.1, x!, x factorial, denotes x(x − 1)(x − 2) . . . 1, and by definition 0! = 1.

Justification 16.1 The weight of a configuration

First, consider the weight of the configuration {N − 2,2,0,0, . . . }. One candidate for promotion to an upper state can be selected in N ways. There are N − 1 candidates for the second choice, so the total number of choices is N(N − 1). However, we should not distinguish the choice (Jack, Jill) from the choice (Jill, Jack) because they lead to the same configurations. Therefore, only half the choices lead to distinguishable configurations, and the total number of distinguishable choices is ½N(N − 1).

Now we generalize this remark. Consider the number of ways of distributing N balls into bins. The first ball can be selected in N different ways, the next ball in N − 1 different ways for the balls remaining, and so on. Therefore, there are N(N − 1) . . . 1 = N! ways of selecting the balls for distribution over the bins. However, if there are n0 balls in the bin labelled ε0, there would be n0! different ways in which the same balls could have been chosen (Fig. 16.2). Similarly, there are n1! ways in which the n1 balls in the bin labelled ε1 can be chosen, and so on. Therefore, the total number of distinguishable ways of distributing the balls so that there are n0 in bin ε0, n1 in bin ε1, etc. regardless of the order in which the balls were chosen is N!/n0!n1! . . . , which is the content of eqn 16.1.

Fig. 16.2 The N = 18 molecules shown here can be distributed into four receptacles (distinguished by the three vertical lines) in 18! different ways. However, 3! of the selections that put three molecules in the first receptacle are equivalent, 6! that put six molecules into the second receptacle are equivalent, and so on. Hence the number of distinguishable arrangements is 18!/3!6!5!4!.

Illustration 16.1 Calculating the weight of a distribution

To calculate the number of ways of distributing 20 identical objects with the arrangement 1, 0, 3, 5, 10, 1, we note that the configuration is {1,0,3,5,10,1} with N = 20; therefore the weight is

W = 20!/(1!0!3!5!10!1!) = 9.31 × 10⁸

Self-test 16.1 Calculate the weight of the configuration in which 20 objects are distributed in the arrangement 0, 1, 5, 0, 8, 0, 3, 2, 0, 1. [4.19 × 10¹⁰]
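Equation 16.1 also lends itself to a direct numerical check. The following sketch is our own (the helper name weight is not from the text); it evaluates the multinomial weight with exact integer arithmetic and reproduces the figures quoted in Illustration 16.1 and Self-test 16.1.

```python
# A minimal sketch of eqn 16.1: W = N!/(n0! n1! n2! ...), evaluated exactly.
from math import factorial, prod

def weight(populations):
    """Weight W of the configuration {n0, n1, ...} (eqn 16.1)."""
    N = sum(populations)
    return factorial(N) // prod(factorial(n) for n in populations)

print(f"{weight([1, 0, 3, 5, 10, 1]):.3g}")             # 9.31e+08 (Illustration 16.1)
print(f"{weight([0, 1, 5, 0, 8, 0, 3, 2, 0, 1]):.3g}")  # 4.19e+10 (Self-test 16.1)
```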
It will turn out to be more convenient to deal with the natural logarithm of the weight, ln W, rather than with the weight itself. We shall therefore need the expression

ln W = ln[N!/(n0!n1!n2! · · · )]
     = ln N! − ln(n0!n1!n2! · · · )
     = ln N! − (ln n0! + ln n1! + ln n2! + · · · )
     = ln N! − ∑i ln ni!

where in the first line we have used ln(x/y) = ln x − ln y and in the second ln xy = ln x + ln y.

One reason for introducing ln W is that it is easier to make approximations. In particular, we can simplify the factorials by using Stirling's approximation in the form

ln x! ≈ x ln x − x     (16.2)

Then the approximate expression for the weight is

ln W = (N ln N − N) − ∑i (ni ln ni − ni) = N ln N − ∑i ni ln ni     (16.3)

The final form of eqn 16.3 is derived by noting that the sum of ni is equal to N, so the second and fourth terms in the second expression cancel.

Comment 16.2 A more accurate form of Stirling's approximation is x! ≈ (2π)^(1/2) x^(x+1/2) e^(−x), and is in error by less than 1 per cent when x is greater than about 10. We deal with far larger values of x, and the simplified version in eqn 16.2 is adequate.

(b) The Boltzmann distribution

We have seen that the configuration {N − 2,2,0, . . . } dominates {N,0,0, . . . }, and it should be easy to believe that there may be other configurations that have a much greater weight than both. We shall see, in fact, that there is a configuration with so great a weight that it overwhelms all the rest in importance to such an extent that the system will almost always be found in it. The properties of the system will therefore be characteristic of that particular dominating configuration. This dominating configuration can be found by looking for the values of ni that lead to a maximum value of W. Because W is a function of all the ni, we can do this search by varying the ni and looking for the values that correspond to dW = 0 (just as in the search for the maximum of any function), or equivalently a maximum value of ln W. However, there are two difficulties with this procedure.

The first difficulty is that the only permitted configurations are those corresponding to the specified, constant, total energy of the system. This requirement rules out many configurations; {N,0,0, . . . } and {N − 2,2,0, . . . }, for instance, have different energies, so both cannot occur in the same isolated system. It follows that, in looking for the configuration with the greatest weight, we must ensure that the configuration also satisfies the condition

Constant total energy:     ∑i ni εi = E     (16.4)

where E is the total energy of the system.

The second constraint is that, because the total number of molecules present is also fixed (at N), we cannot arbitrarily vary all the populations simultaneously. Thus, increasing the population of one state by 1 demands that the population of another state must be reduced by 1. Therefore, the search for the maximum value of W is also subject to the condition

Constant total number of molecules:     ∑i ni = N     (16.5)
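To make this constrained search concrete before quoting its analytical solution, the following brute-force sketch (our own construction, assuming a toy system with evenly spaced levels εj = jε and energies measured in units of ε) enumerates every configuration compatible with eqns 16.4 and 16.5 and reports the one of greatest weight.

```python
# A brute-force search (our own sketch) for the dominating configuration of a
# toy system: N molecules on evenly spaced levels of energy j (in units of ε),
# subject to constant N (eqn 16.5) and constant total energy E (eqn 16.4).
from math import factorial, prod

def weight(ns):                       # eqn 16.1, as in the previous sketch
    return factorial(sum(ns)) // prod(factorial(n) for n in ns)

def configurations(n_left, e_left, level, top):
    """Yield all {n_level, ..., n_top} using n_left molecules and e_left energy."""
    if level == top:
        # last level: the remaining molecules must account for the remaining energy
        if n_left * level == e_left:
            yield (n_left,)
        return
    for n in range(n_left + 1):
        cost = n * level
        if cost > e_left:
            break
        for rest in configurations(n_left - n, e_left - cost, level + 1, top):
            yield (n,) + rest

N, E = 20, 10                         # 20 molecules sharing 10 units of ε
best = max(configurations(N, E, 0, E), key=weight)
print(best, weight(best))
# The winner piles most molecules into the lowest levels, with populations that
# fall off roughly exponentially with energy - the shape eqn 16.6a below predicts.
```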
We show in Further information 16.1 that the populations in the configuration of greatest weight, subject to the two constraints in eqns 16.4 and 16.5, depend on the energy of the state according to the Boltzmann distribution:

ni/N = e^(−βεi) / ∑i e^(−βεi)     (16.6a)

where ε0 ≤ ε1 ≤ ε2 ≤ . . . . Equation 16.6a is the justification of the remark that a single parameter, here denoted β, determines the most probable populations of the states of the system. We shall see in Section 16.3b that

β = 1/kT     (16.6b)

where T is the thermodynamic temperature and k is Boltzmann's constant. In other words, the thermodynamic temperature is the unique parameter that governs the most probable populations of states of a system at thermal equilibrium. In Further information 16.3, moreover, we see that β is a more natural measure of temperature than T itself.

16.2 The molecular partition function

From now on we write the Boltzmann distribution as

pi = e^(−βεi)/q     (16.7)

where pi is the fraction of molecules in the state i, pi = ni/N, and q is the molecular partition function:

q = ∑i e^(−βεi)     [16.8]

The sum in q is sometimes expressed slightly differently. It may happen that several states have the same energy, and so give the same contribution to the sum. If, for example, gi states have the same energy εi (so the level is gi-fold degenerate), we could write

q = ∑(levels i) gi e^(−βεi)     (16.9)

where the sum is now over energy levels (sets of states with the same energy), not individual states.

Example 16.1 Writing a partition function

Write an expression for the partition function of a linear molecule (such as HCl) treated as a rigid rotor.

Method To use eqn 16.9 we need to know (a) the energies of the levels, (b) the degeneracies, the number of states that belong to each level. Whenever calculating a partition function, the energies of the levels are expressed relative to 0 for the state of lowest energy. The energy levels of a rigid linear rotor were derived in Section 13.5c.

Answer From eqn 13.31, the energy levels of a linear rotor are hcBJ(J + 1), with J = 0, 1, 2, . . . . The state of lowest energy has zero energy, so no adjustment need be made to the energies given by this expression. Each level consists of gJ = 2J + 1 degenerate states of energy εJ = hcBJ(J + 1). Therefore,

q = ∑(J = 0 to ∞) (2J + 1) e^(−βhcBJ(J+1))

The sum can be evaluated numerically by supplying the value of B (from spectroscopy or calculation) and the temperature. For reasons explained in Section 17.2b,
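As a hedged illustration of that numerical evaluation (our own script, not part of the Example; the rotational constant B ≈ 10.59 cm⁻¹ quoted for HCl is an assumed illustrative value and should be replaced by a spectroscopic one where precision matters), direct summation converges rapidly at ordinary temperatures:

```python
# Direct numerical evaluation (our own sketch) of the rigid-rotor partition
# function of Example 16.1: q = sum over J of (2J + 1) exp(-β h c B J(J + 1)).
from math import exp

k = 1.380649e-23     # Boltzmann's constant / J K^-1
h = 6.62607015e-34   # Planck's constant / J s
c = 2.99792458e10    # speed of light / cm s^-1 (so h*c*B is in joules for B in cm^-1)

def q_linear_rotor(B, T, J_max=200):
    """Sum the rotor partition function up to J_max (ample at ordinary temperatures)."""
    beta = 1.0 / (k * T)
    return sum((2 * J + 1) * exp(-beta * h * c * B * J * (J + 1))
               for J in range(J_max + 1))

# Assumed illustrative value: B(HCl) ≈ 10.59 cm^-1.
print(round(q_linear_rotor(10.59, 298.15), 1))   # ≈ 19.9 at room temperature
```

The result, about 19.9 at 298 K, is close to the high-temperature estimate kT/hcB ≈ 19.6, consistent with many rotational levels being significantly populated at room temperature.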