

Statistical Thermodynamics: An Engineering Approach

Dr. Daily is currently Professor of Mechanical Engineering at the University of Colorado at Boulder. He studied mechanical engineering at the University of Michigan (BS 1968, MS 1969) and at Stanford University (PhD 1975). Prior to starting college he worked on sports and racing cars, owning his own business. Between the MS and PhD degrees he worked as a heat transfer analyst at Aerojet Liquid Rocket Company. After receiving the PhD he was a faculty member at the University of California at Berkeley until 1988, when he moved to the University of Colorado. He has served as the Director of the Center for Combustion Research and as Chair of the Mechanical Engineering Department at the University of Colorado. His academic career has been devoted to the field of energy, focusing on combustion and environmental studies. He has worked on combustion and heat transfer aspects of propulsion and power generation devices, studying such topics as fluid mechanics of mixing, chemical kinetics, combustion stability, and air pollution. He also works on the development of advanced diagnostic instrumentation (including laser-based) for studying reacting flows and environmental monitoring. Most recently he has been working in the areas of biomass thermochemical processing and source characterization, wildfire behavior, the environmental consequences of combustion, and optical biopsy of cancer. He is a founder of Precision Biopsy Inc., a company developing technology for the optical detection of prostate cancer. Dr. Daily served as a member of the San Francisco Bay Area Air Quality Management District Advisory Council for 10 years. He served on and chaired the State of Colorado Hazardous Waste Commission for over 10 years and was on the State of Colorado Air Quality Control Commission. He is a Fellow of the American Institute of Aeronautics and Astronautics (AIAA) and serves as chair of its Publications Committee.

Statistical Thermodynamics: An Engineering Approach

JOHN W. DAILY
University of Colorado Boulder

University Printing House, Cambridge CB2 8BS, United Kingdom
One Liberty Plaza, 20th Floor, New York, NY 10006, USA
477 Williamstown Road, Port Melbourne, VIC 3207, Australia
314–321, 3rd Floor, Plot 3, Splendor Forum, Jasola District Centre, New Delhi – 110025, India
79 Anson Road, #06–04/06, Singapore 079906

Cambridge University Press is part of the University of Cambridge. It furthers the University's mission by disseminating knowledge in the pursuit of education, learning, and research at the highest international levels of excellence.

www.cambridge.org
Information on this title: www.cambridge.org/9781108415316
DOI: 10.1017/9781108233194

© John W. Daily 2019

This publication is in copyright. Subject to statutory exception and to the provisions of relevant collective licensing agreements, no reproduction of any part may take place without the written permission of Cambridge University Press.

First published 2019

Printed and bound in Great Britain by Clays Ltd, Elcograf S.p.A.

A catalogue record for this publication is available from the British Library.

Library of Congress Cataloging-in-Publication Data
Names: Daily, John W. (John Wallace), author.
Title: Statistical thermodynamics : an engineering approach / John W. Daily (University of Colorado Boulder).
Other titles: Thermodynamics
Description: Cambridge ; New York, NY : Cambridge University Press, 2019. | Includes bibliographical references and index.
Identifiers: LCCN 2018034166 | ISBN 9781108415316 (hardback : alk. paper)
Subjects: LCSH: Thermodynamics–Textbooks.
Classification: LCC TJ265 .D2945 2019 | DDC 621.402/1–dc23
LC record available at https://lccn.loc.gov/2018034166

ISBN 978-1-108-41531-6 Hardback

Cambridge University Press has no responsibility for the persistence or accuracy of URLs for external or third-party Internet Web sites referred to in this publication and does not guarantee that any content on such Web sites is, or will remain, accurate or appropriate.

Brief Contents

List of Figures
List of Tables
Preface
Comment on Software
Acknowledgments
1 Introduction
2 Fundamentals of Macroscopic Thermodynamics
3 Microscopic Thermodynamics
4 Quantum Mechanics
5 Ideal Gases
6 Ideal Gas Mixtures
7 The Photon and Electron Gases
8 Dense Gases
9 Liquids
10 Crystalline Solids
11 Thermodynamic Stability and Phase Change
12 Kinetic Theory of Gases
13 Spectroscopy
14 Chemical Kinetics
Appendices
References
Index

Contents

List of Figures
List of Tables
Preface
Comment on Software
Acknowledgments

1 Introduction
  1.1 The Role of Thermodynamics
  1.2 The Nature of Matter
  1.3 Energy, Work, Heat Transfer, and the 1st Law
  1.4 Equilibrium
  1.5 Thermodynamic Properties
  1.6 The Fundamental Problem of Thermodynamics
  1.7 Analysis of Non-equilibrium Behavior
  1.8 Summary
  1.9 Problems

2 Fundamentals of Macroscopic Thermodynamics
  2.1 The Postulates of Macroscopic (Classical) Thermodynamics
  2.2 Simple Forms of the Fundamental Relation
    2.2.1 Van der Waals Substance
    2.2.2 Ideal Gas
  2.3 Equilibrium and the Intensive Properties
    2.3.1 Thermal Equilibrium: The Meaning of Temperature
    2.3.2 Mechanical Equilibrium: The Meaning of Pressure
    2.3.3 Matter Flow and Chemical Equilibrium: The Meaning of Chemical Potential
  2.4 Representation and the Equations of State
  2.5 The Euler Equation and the Gibbs–Duhem Relation
  2.6 Quasi-static Processes and Thermal and Mechanical Energy Reservoirs
  2.7 Equilibrium in the Energy Representation
  2.8 Alternative Representations – Legendre Transformations
    2.8.1 Example 2.1
  2.9 Transformations of the Energy
  2.10 Transformations of the Entropy
  2.11 Reversible Work
  2.12 Maxwell's Relations
  2.13 Building Property Relations
  2.14 Sources for Thermodynamic Properties
  2.15 Summary
    2.15.1 Postulates and the Fundamental Relation
    2.15.2 Equilibrium and Intensive Parameters
    2.15.3 Representation and Equations of State
    2.15.4 The Euler Equation and the Gibbs–Duhem Relation
    2.15.5 Alternative Representations
    2.15.6 Maxwell's Relations
    2.15.7 Property Relations
  2.16 Problems

3 Microscopic Thermodynamics
  3.1 The Role of Statistics in Thermodynamics
  3.2 The Postulates of Microscopic Thermodynamics
  3.3 The Partition Function and its Alternative Formulations
  3.4 Thermodynamic Properties
  3.5 Fluctuations
  3.6 Systems with Negligible Inter-particle Forces
  3.7 Systems with Non-negligible Inter-particle Forces
  3.8 Summary
    3.8.1 Statistics in Thermodynamics and Ensembles
    3.8.2 The Postulates of Microscopic Thermodynamics
    3.8.3 The Partition Function
    3.8.4 Relationship of Partition Function to Fundamental Relation
    3.8.5 Fluctuations
    3.8.6 Systems with Negligible Inter-particle Forces
    3.8.7 Systems with Non-negligible Inter-particle Forces
  3.9 Problems

4 Quantum Mechanics
  4.1 A Brief History
    4.1.1 Wave–Particle Duality – Electromagnetic Radiation Behaves Like Particles
    4.1.2 Particle–Wave Duality – Particles Can Display Wave-Like Behavior
    4.1.3 Heisenberg Uncertainty Principle
  4.2 The Postulates of Quantum Mechanics
  4.3 Solutions of the Wave Equation
    4.3.1 The Particle in a Box
    4.3.2 Internal Motion
    4.3.3 The Hydrogenic Atom
    4.3.4 The Born–Oppenheimer Approximation and the Diatomic Molecule
  4.4 Real Atomic Behavior
    4.4.1 Pauli Exclusion Principle
    4.4.2 Higher-Order Effects
    4.4.3 Multiple Electrons
  4.5 Real Molecular Behavior
  4.6 Molecular Modeling/Computational Chemistry
    4.6.1 Example 4.1
  4.7 Summary
  4.8 Problems

5 Ideal Gases
  5.1 The Partition Function
  5.2 The Translational Partition Function
  5.3 Monatomic Gases
    5.3.1 Example 5.1
  5.4 Diatomic Gases
    5.4.1 Rotation
    5.4.2 Example 5.2
    5.4.3 Vibration
    5.4.4 Properties
  5.5 Polyatomic Gases
  5.6 Summary
    5.6.1 Monatomic Gas
    5.6.2 Simple Diatomic Gas
    5.6.3 Polyatomic Molecules
  5.7 Problems

6 Ideal Gas Mixtures
  6.1 Non-reacting Mixtures
    6.1.1 Changes in Properties on Mixing
    6.1.2 Example 6.1
  6.2 Reacting Mixtures
    6.2.1 General Case
    6.2.2 Properties for Equilibrium and 1st Law Calculations
    6.2.3 Example 6.2
    6.2.4 The Equilibrium Constant
    6.2.5 Example 6.3
    6.2.6 The Principle of Detailed Balance
  6.3 Summary
    6.3.1 Non-reacting Mixtures
    6.3.2 Reacting Mixtures
  6.4 Problems

7 The Photon and Electron Gases
  7.1 The Photon Gas
    7.1.1 Example 7.1
  7.2 The Electron Gas
    7.2.1 Example 7.2
    7.2.2 Example 7.3
  7.3 Summary
    7.3.1 Photon Gas
    7.3.2 Electron Gas
  7.4 Problems

8 Dense Gases
  8.1 Evaluating the Configuration Integral
  8.2 The Virial Equation of State
  8.3 Other Properties
  8.4 Potential Energy Functions
    8.4.1 Example 8.1
    8.4.2 Example 8.2
  8.5 Other Equations of State
  8.6 Summary
    8.6.1 Evaluating the Configuration Integral
    8.6.2 Virial Equation of State
    8.6.3 Other Properties
    8.6.4 Potential Energy Function
    8.6.5 Other Equations of State
  8.7 Problems

9 Liquids
  9.1 The Radial Distribution Function and Thermodynamic Properties
    9.1.1 Example 9.1
  9.2 Molecular Dynamics Simulations of Liquids
  9.3 Determining g(r) from Molecular Dynamics Simulations
  9.4 Molecular Dynamics Software
    9.4.1 Example 9.2
  9.5 Summary
  9.6 Problems

10 Crystalline Solids
  10.1 Einstein Crystal
  10.2 Debye Crystal
    10.2.1 Example 10.1
  10.3 Summary
  10.4 Problems

11 Thermodynamic Stability and Phase Change
  11.1 Thermodynamic Stability
  11.2 Phase Change
    11.2.1 Example 11.1
    11.2.2 Example 11.2
  11.3 Gibbs Phase Rule
  11.4 Thermodynamic versus Dynamic Stability
  11.5 Summary
    11.5.1 Thermodynamic Stability
    11.5.2 Phase Change
    11.5.3 Gibbs Phase Rule
  11.6 Problems

12 Kinetic Theory of Gases
  12.1 Transport Phenomena
    12.1.1 Simple Estimates of Transport Rates
    12.1.2 Example 12.1
  12.2 The Boltzmann Equation and the Chapman–Enskog Solution
    12.2.1 Momentum Diffusion
    12.2.2 Example 12.2
    12.2.3 Thermal Diffusion
    12.2.4 Example 12.3
    12.2.5 Mass Diffusion
    12.2.6 Example 12.4
  12.3 Transport Data Sources
  12.4 Summary
    12.4.1 Transport Phenomena
    12.4.2 Boltzmann Equation and the Chapman–Enskog Solution
  12.5 Problems

13 Spectroscopy
  13.1 The Absorption and Emission of Radiation
  13.2 Spectral Line Broadening
  13.3 Atomic Transitions
  13.4 Molecular Transitions
    13.4.1 Rotational Transitions
    13.4.2 Vibrational Transitions
    13.4.3 Electronic Transitions
  13.5 Absorption and Emission Spectroscopy
    13.5.1 Example 13.1
  13.6 Laser-Induced Fluorescence
  13.7 Rayleigh and Raman Scattering
  13.8 Summary
    13.8.1 The Absorption and Emission of Radiation
    13.8.2 Spectral Line Broadening
    13.8.3 Spectral Transitions
    13.8.4 Types of Spectroscopies
  13.9 Problems

14 Chemical Kinetics
  14.1 Reaction Rate
  14.2 Reaction Rate Constant and the Arrhenius Form
    14.2.1 Unimolecular Reactions
    14.2.2 Example 14.1
  14.3 More on Reaction Rates
    14.3.1 Transition State Theory
    14.3.2 Statistical Theories: RRKM
  14.4 Reaction Mechanisms
    14.4.1 Example 14.2
  14.5 Summary
    14.5.1 Reaction Rate
    14.5.2 Reaction Rate Constant and the Arrhenius Form
    14.5.3 Unimolecular Reactions
    14.5.4 More on Reaction Rates
    14.5.5 Reaction Mechanisms
  14.6 Problems

Appendices
  A Physical Constants
  B Combinatorial Analysis
  C Tables
  D Multicomponent, Reactive Flow Conservation Equations
  E Boltzmann's Equation
  F Bibliography for Thermodynamics
References
Index

List of Figures

1.1 The fundamental problem
2.1 Interatomic potential energy for the diatomic molecule
2.2 Thermal equilibrium
2.3 Mechanical energy reservoir
2.4 Thermal energy reservoir
2.5 Graphical illustration of the transformation process
2.6 Constant-temperature work
2.7 Joule–Thomson coefficient for several substances (data from Perry's Chemical Engineers' Handbook [8])
3.1 An ensemble of ensemble members
3.2 Equilibrium rotational population distribution for CO
3.3 Expected value of <Nk>
3.4 Expected value of <Nk> for a Bose–Einstein system
3.5 Expected value of <Nk> for a Fermi–Dirac system
4.1 Spectral distribution of blackbody radiation
4.2 Photoelectric emission
4.3 The Compton effect
4.4 A wave packet
4.5 The particle in a box
4.6 Spherical coordinate system
4.7 The hydrogenic atom
4.8 Hydrogen atom energy levels
4.9 Morse and harmonic potentials
4.10 Rotational motion of a diatomic molecule
4.11 Vibrational motion of a diatomic molecule
4.12 Rotational and vibrational energy levels for a diatomic molecule
4.13 Fine structure splitting for a hydrogenic atom (Incropera [7])
4.14 The periodic table
4.15 Sodium energy-level diagram
4.16 OH energy-level diagram (from Radzig and Smirnov [9])
4.17 Predicted structure of CH2
5.1 The Maxwellian speed distribution
5.2 Electronic partition function for Li
5.3 Equilibrium rotational population distribution for CO
5.4 Effect of number of terms on the value of qr. Horizontal lines are the high-temperature limit. Although not plotted, the Euler–MacLaurin series converges in only a few terms
5.5 Effect of number of terms on error using the high-temperature limit
5.6 Typical potential energy function
5.7 cv/k as a function of temperature for CO
6.1 Adiabatic mixing at constant total volume
7.1 Spectral distribution of blackbody radiation
7.2 Electron energy distribution in a metal matrix
7.3 Electron energy distribution
7.4 Specific heat as a function of temperature
8.1 Rigid sphere, square well, Sutherland (weakly attractive sphere), and Lennard–Jones potential functions (clockwise around figure)
9.1 Radial distribution function for crystal structures
9.2 Radial distribution function for liquids
9.3 Water molecules in simulation box
9.4 Radial distribution function
10.1 Crystal structure in 2D
10.2 The dependence of average potential energy (per atom) on position
10.3 Properties of the Einstein crystal
10.4 Frequency distributions for Einstein and Debye models
10.5 Specific heat for Einstein and Debye models (data from Kittel [10] and White and Collocott [11])
11.1 The fundamental relation
11.2 Example fundamental relation
11.3 Isotherms of typical pvT relationship
11.4 Single isotherms of typical pvT relationship
11.5 Gibbs potential as a function of p
11.6 Phase diagram for simple compressible substance
11.7 Isotherm of typical pv diagram
11.8 Van der Waals equation
11.9 Global versus local stability
12.1 Relationship between Knudsen number and mathematical models (adapted from Bird [12])
12.2 Gradient transport
12.3 The sphere of influence
12.4 Random path of colliding particle
12.5 Straightened path of colliding particle
12.6 Viscosity of several gases
12.7 Viscosity of helium
12.8 Thermal conductivity of helium
12.9 Self-diffusion coefficient of helium
13.1 The Einstein radiative processes
13.2 Spectral line shape functions
13.3 Sodium energy-level diagram
13.4 Rotational absorption spectrum
13.5 Vibrational/rotational absorption spectrum (horizontal axis is wavelength) (note that the J to J transition that is shown dashed is forbidden; also, the rotational spacings are greatly exaggerated)
13.6 Illustration of the Franck–Condon principle
13.7 Electronic transitions resulting in a band spectrum. In this case, only v = 0 is allowed. The band locations and shapes depend on the detailed energy-level parameters. The v = 0 bands will overlap if the higher-order terms in G(v) are zero
13.8 LIF optical arrangement
13.9 Two-level system showing radiative and collisional rate terms
13.10 Rayleigh and Raman scattering processes
13.11 The Einstein radiative processes
14.1 Potential energy barrier
14.2 Reaction probability as a function of collision energy
14.3 Collision geometry
14.4 Acetaldehyde unimolecular decomposition rate constant kuni at 1500 K
14.5 H2/O2 explosion behavior
14.6 Temperature, H2, O2, and H2O vs. time
14.7 H2O2, HO2, H, and O vs. time
D.1 Conservation balance
D.2 Geometry for calculating number flux
D.3 Mass conservation balance
E.1 Phase space
E.2 Flux in physical space
E.3 Flux in velocity space

List of Tables

1.1 Characteristic times of transport processes
2.1 Equations of state
2.2 Transformations of the energy representation
2.3 Transformations of the entropy representation
3.1 Types of ensembles
3.2 Types of partition functions
4.1 Degeneracy of translational quantum states
4.2 Electronic energy levels of sodium
4.3 Methods of molecular modeling
4.4 Molecular modeling software
4.5 Simple solutions of the wave equation
6.1 Mole fractions predicted by Matlab script
7.1 Electronic properties of metals
10.1 Debye temperatures for some monatomic crystalline solids
11.1 Number of phases for a two-component system
12.1 Non-dimensional parameters of fluid flow
13.1 Main transition bands
14.1 Detailed H2/O2 reaction mechanism (Li et al. [13])
14.2 Rate constant in units of cm³/molecule·sec and temperature in K
D.1 Forms of the densities
E.1 Lennard–Jones 6,12 collision integrals (from Hirschfelder, Curtiss, and Bird [14]; also in Bird, Stewart, and Lightfoot [15])

Preface

I have been teaching advanced thermodynamics for over 40 years, first at the University of California at Berkeley from 1975 through 1988, and since then at the University of Colorado at Boulder. I have mostly had mechanical and aerospace engineering students who are moving toward a career in the thermal sciences, but also a goodly number of students from other engineering and scientific disciplines. While working on my Master’s degree at the University of Michigan I took statistical thermodynamics from Professor Richard Sonntag using his and Gordon Van Wylen’s text Fundamentals of Statistical Thermodynamics [1]. Later at Stanford I took Charles Kruger’s course using Introduction to Physical Gas Dynamics by Vincenti and Kruger [2]. This course had a large dose of statistical thermodynamics combined with gas dynamics. Both experiences sharpened my interest in the subject matter. Then, early in my teaching career, I had the good fortune to be introduced to the wonderful text Thermodynamics and an Introduction to Thermostatistics by Professor Herbert B. Callen [3] of the University of Pennsylvania. My first reading of his postulatory approach to classical thermodynamics was one of the most exciting learning experiences of my career. As one epiphany followed another, I realized that here was a teacher’s dream, the opportunity to teach classical thermodynamics in a way that makes Gibbs’ [4] ensemble approach to statistical thermodynamics transparent. I have therefore taught advanced thermodynamics in this fashion over the years, but with a serious handicap. There was no available text that was suitable for engineering students and that fully integrates Callen’s postulatory approach with the ensemble approach of statistical mechanics. Eldon Knuth’s book Introduction to Statistical Thermodynamics [5] is a good companion to Callen, but uses different notation. 
Tien and Lienhard's Statistical Thermodynamics [6] introduces the postulatory approach, but in a very short first chapter. They also spend a great deal of time on classical statistics, which I feel is unnecessary when ensemble statistics are used. All are quite dated, especially in terms of advances in computation techniques and modern property compilations. I also feel indebted to Frank Incropera's book Introduction to Molecular Structure and Thermodynamics [7], which provides a particularly easy-to-follow presentation of quantum mechanics suitable for engineering students. Hence this book. It assumes the reader is a mechanical or aerospace engineering graduate student who already has a strong background in undergraduate engineering thermodynamics and is ready to tackle the underlying fundamentals of the subject. It is designed for those entering advanced fields such as combustion, high-temperature gas dynamics, environmental sciences, or materials processing, or who wish to build a
background for understanding advanced experimental diagnostic techniques in these or similar fields. The presentation of the subject is quite different from that encountered in engineering thermodynamics courses, where little fundamental explanation is given and the student is required to accept concepts such as entropy and the 2nd Law. Here, the underlying meaning of entropy, temperature, and other thermodynamic concepts will be definitively explored, quantum mechanics learned, and the physical basis of gas, liquid, and solid phases established. In addition, the molecular basis of transport phenomena and chemical kinetics will be explored. Modern tools for solving thermodynamic problems will also be explored, and the student is assured that he or she will gain knowledge of practical usefulness.

Comment on Software

In a number of locations throughout the text, various software programs will be mentioned. Some are open source, others commercial. Two packages are mentioned multiple times: Mathcad and Mathematica. Both are commercial, but almost all universities have site licenses for engineering students, and student licenses are very affordable. At the University of Colorado we have favored Matlab for many years, and it is expected that students will be adept in its usage. Where other commercial programs are mentioned, there is almost always an open-source alternative given as well. As is usually the case, the commercial programs are more polished, with easier-to-use interfaces. However, the open-source programs can work well, and in some cases the science is more up to date. I realize that in this day and age electronic anything tends to come and go. I have tried to reference software that is likely to have staying power for some time. However, it is incumbent on any engineer or scientist to stay current on available tools, so I expect that the conscientious student (and teacher) will find suitable alternatives if necessary.


Acknowledgments

In any endeavor such as writing a book of this nature, it is clear that one owes debts to a number of people. My start came from being blessed with being born into an academic family. My father, James W. Daily, studied at Stanford and Cal Tech, before teaching at MIT for 18 years, and later at the University of Michigan, serving as Chair of the Applied Mechanics Department. As a youth I met many giants in engineering and science, including G. I. Taylor, Hermann Schlichting, Theodore von Kármán, and Harold Eugene “Doc” Edgerton. I have already mentioned studying thermodynamics under Richard Sonntag at Michigan. One of my PhD advisors at Stanford was Charles Kruger. I also had classes from Bill Reynolds and Milton Van Dyke, both great minds. And while teaching at Berkeley I had many scintillating conversations with Rick Sherman, Chang Tien, George Pimentel, Yuan Lee, Bill Miller, and Henry “Fritz” Schaefer III. While at Boulder I have developed wonderful collaborations with G. Barney Ellison, John Stanton, Peter Hamlington, Melvin Branch, Greg Rieker, Nicole Labbe, and others. My many colleagues around the world have kept me honest through their reviews of papers and proposals, provided spirited discussions at meetings and meals, and generally challenged me to be my best. And needless to say, my graduate students have provided great joy as we transition from teacher–student into lifelong equals and friends. At home my wife Carol has been an inspiration. As I write this she is battling ovarian cancer with courage and grace. I am not surprised. She raised four children while working as a teacher and psychotherapist helping children, all while managing to ski, backpack, run marathons, and compete in triathlons. We have been rewarded with ten wonderful grandchildren. Of course, thanks go to the people at Cambridge University Press, including Steven Elliott and Brianda Reyes. 
One of our current graduate students, Jeff Glusman, was particularly helpful with proofreading and made many valuable suggestions. To all these people I give my heartfelt thanks. Because of them I have had a great life that has given me the opportunity to write this book.


1 Introduction

1.1 The Role of Thermodynamics

With an important restriction, the discipline of thermodynamics extends classical dynamics to the study of systems for which internal, microscopic, modes of motion are important. In dynamics, we are concerned about the macroscopic motion of matter. In thermodynamics it is motion at the microscopic level that mostly absorbs our interest. In dynamics we use the concepts of kinetic and potential energy and work to describe motion. In thermodynamics we add to these the concepts of internal energy and heat transfer, along with related properties such as temperature, pressure, and chemical potential. The important restriction, which we will discuss in detail later, is that thermodynamics is limited to analyzing changes between equilibrium states.

Systems for which internal modes of motion are important include power generation, propulsion, refrigeration, chemical processes, and biology. The goal of applied thermodynamic analysis is to understand the relationship between the design parameters and the performance of such systems, including specification of all appropriate state properties and energy flows. This task may be cast in terms of the following steps:

1. Identify any process or series of processes, including components of cyclic processes.
2. Select control masses or volumes as appropriate.
3. Identify interactions between subsystems (i.e. work, heat transfer, mass transfer).
4. Sketch a system diagram showing control surfaces and interactions, and a process diagram showing state changes.
5. Obtain all necessary properties at each state given sufficient independent properties – for example u, v, h, s, T, p, and chemical composition.
6. Calculate interactions directly where possible.
7. Apply the 1st Law to any process or set of processes.
8. Calculate the behavior of an isentropic process, or of a non-isentropic process given the isentropic efficiency.
9. Put it all together and solve the resulting system of nonlinear algebraic equations.
10. Calculate the system performance, including 2nd Law performance.
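Several of these steps can be made concrete with a small numerical sketch. The following is purely illustrative and is not an example from the text: it analyzes one device (an adiabatic compressor) with air modeled as an ideal gas with constant specific heats; all numbers (inlet state, pressure ratio, gamma, R) are assumed values.

```python
# Illustrative sketch of steps 5-8 for a single adiabatic compressor.
# Working-fluid model and all numbers are assumptions, not from the text.

def isentropic_compressor(T1, p1, p2, gamma=1.4, R=287.0):
    """Return (T2, w): exit temperature in K and specific work input in J/kg.

    Isentropic ideal-gas relation: T2/T1 = (p2/p1)**((gamma - 1)/gamma).
    1st Law for an adiabatic steady-flow device: w = cp * (T2 - T1).
    """
    T2 = T1 * (p2 / p1) ** ((gamma - 1.0) / gamma)
    cp = gamma * R / (gamma - 1.0)  # specific heat at constant pressure, J/(kg K)
    return T2, cp * (T2 - T1)

# Assumed inlet state 300 K, 100 kPa, compressed to 800 kPa:
T2, w = isentropic_compressor(T1=300.0, p1=100e3, p2=800e3)
print(f"T2 = {T2:.1f} K, w = {w/1e3:.1f} kJ/kg")
```

A real system analysis would chain several such component relations together and solve the resulting nonlinear algebraic system simultaneously (step 9), but the single-device case already exercises the property evaluation and 1st Law steps.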

Most of these tasks are addressed in undergraduate engineering thermodynamics, at least for systems involving simple compressible substances as the working fluid. However, the level of conceptual understanding necessary to address more complex substances and to understand and carry out 2nd Law performance analysis is usually left for graduate study. This is the focus of this book.

1.2 The Nature of Matter

As we know, matter is composed of atoms and molecules. The typical small atom is composed of a positively charged nucleus and negatively charged electrons. The nucleus, in turn, is composed of positively charged protons and neutral neutrons. The charge on electrons and protons is 1.602 × 10⁻¹⁹ C. It is the electrostatic forces that arise from charged electrons and protons that hold atoms together, allow for the formation of molecules, and determine the overall phase of large collections of atoms and/or molecules as solid, liquid, gas, or plasma. The spatial extent of electrostatic forces for a typical small atom is approximately 5 Å, or 5 × 10⁻¹⁰ m. There are about 2 × 10⁹ atoms per lineal meter in a solid, resulting in about 8 × 10²⁷ atoms/m³. Thus, macroscopic systems are composed of a very large number of atoms or molecules.

In macroscopic systems, we describe behavior using the equations of motion derived from Newton's Law. In principle, we should be able to solve the equations of motion for each atom or molecule to determine the effect of microscopic modes of motion. However, even if we ignore the fact that the behavior of individual atoms and molecules is described by quantum mechanics, it would be impossible to simultaneously solve the enormous number of equations involved. Clearly, an alternative approach is required, as some kind of averaging must take place. Fortunately, nature has been kind in devising the laws of averaging in ways that allow for great simplification (although we will explore solving the classical equations of motion for small subsets of atoms as a way of estimating thermodynamic and other properties).

Thus, the solution of thermodynamics problems breaks down into two great tasks. The first is developing the rules for macroscopic behavior, given basic knowledge of microscopic behavior. We call this subject classical or macroscopic thermodynamics.
Providing microscopic information is the subject of statistical or microscopic thermodynamics.
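The number densities quoted above follow from simple arithmetic on the atomic spacing; a quick check, assuming a 5 Å spacing:

```python
# Back-of-the-envelope check of the numbers quoted above: an atomic spacing of
# about 5 angstroms in a solid gives ~2e9 atoms per linear meter and ~8e27 atoms/m^3.
spacing = 5.0e-10            # m, typical small-atom diameter (assumed)
per_meter = 1.0 / spacing    # atoms along one linear meter
per_m3 = per_meter ** 3      # atoms per cubic meter
print(per_meter, per_m3)
```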

1.3

Energy, Work, Heat Transfer, and the 1st Law

The basis of the concepts of energy, kinetic and potential, and work can be derived from Newton's Law:

F = ma    (1.1)

Consider applying a force to a macroscopic body of mass m, causing it to follow some trajectory. Integrating Newton's Law over the trajectory, we obtain

∫_{x1}^{x2} F · dx = ∫_{x1}^{x2} ma · dx = ∫_{x1}^{x2} m (dV/dt) · dx = ∫_{V1}^{V2} mV dV = (1/2)m(V2² − V1²)    (1.2)

We normally identify

W12 = ∫_{x1}^{x2} F · dx    (1.3)

as the work done during the process of causing the mass to move from point x1 to x2. The work will depend on the path function F(x). Indeed, different force functions can result in the same amount of work. As a result, we say that work is a path, or process, integral. In contrast, the integral of ma depends only on the value of the velocity squared at the end points. We identify

KE = (1/2)mV²    (1.4)

as the kinetic energy. The energy is a point, or state, property, and the integral of (1/2)m dV² is an exact differential.

The concept of potential energy arises out of the behavior of a body subject to a potential force field. A potential field is one in which the force imposed on the body is a function of position only. Gravity is the most common example of a potential field encountered in practice. Consider the case where a body subjected to an applied force is in a gravitational field whose effect is the constant weight W. If the gravitational field operates in the z direction, then Newton's Law takes on the form

Fz − Wz = maz    (1.5)

In the absence of an applied force, Wz = maz. Defining g as the effective acceleration due to gravity, Wz = mg. Adding this to the applied force and integrating as above gives

W12 − mg(z2 − z1) = (1/2)m(V2² − V1²)    (1.6)

Note that the integral of the potential force term is in an exact form, and depends only on the value of mgz at the end points. Therefore, we normally call

PE = mgz    (1.7)

the potential energy, and write Eqs (1.1)–(1.6) as

(1/2)m(V2² − V1²) + mg(z2 − z1) = W12    (1.8)

or

ΔE = ΔKE + ΔPE = W12    (1.9)

where

E = KE + PE    (1.10)


This is a statement of the 1st Law of Thermodynamics for a body with no internal energy. As can be seen, it means that energy can be viewed as a property that measures the ability of matter to do work. Furthermore, rather than the absolute value of the energy, the important quantity is the change in energy.

The concepts of work and energy can also be applied at the microscopic level. Atoms and molecules, and nuclei and electrons, can have kinetic energy, and the electrostatic fields within and between atoms can lead to potential energy. Furthermore, electrostatic forces can result in work being done on individual particles. If we identify the total kinetic and potential energy at the microscopic level as U, then the total energy of a macroscopic body becomes

E = U + KE + PE    (1.11)

Heat transfer is work carried out at the microscopic level. It arises from random individual molecular interactions occurring throughout a material, or at a surface between two materials. The non-random, or coherent, motion leads to macroscopic work; the random component leads to heat transfer. We typically identify heat transfer as Q, and the complete form of the 1st Law becomes

ΔE = W12 + Q12    (1.12)

or, in differential form,

dE = δW + δQ    (1.13)

where δ indicates that work and heat transfer are path functions, not exact differentials.
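The distinction between path and state quantities can be made concrete numerically. In this sketch, with arbitrary illustrative forces and masses, work is computed as a path integral of F(x), while the kinetic-energy change needs only the end-point velocities:

```python
# Illustrative (made-up) force profiles: work is a path integral whose value
# depends on the whole function F(x), whereas the change in kinetic energy
# depends only on the end-point velocities.
def work(F, x1, x2, n=100000):
    # midpoint-rule approximation of the path integral of F from x1 to x2
    dx = (x2 - x1) / n
    return sum(F(x1 + (i + 0.5) * dx) for i in range(n)) * dx

F_const = lambda x: 10.0        # N, constant force
F_linear = lambda x: 20.0 * x   # N, linearly growing force

W1 = work(F_const, 0.0, 1.0)    # these two different profiles happen to
W2 = work(F_linear, 0.0, 1.0)   # deliver the same 10 J of work ...
W3 = work(F_linear, 0.0, 2.0)   # ... but in general W depends on the path

m, V1, V2 = 2.0, 3.0, 5.0
dKE = 0.5 * m * (V2**2 - V1**2)  # exact differential: end points only
print(W1, W2, W3, dKE)
```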

1.4

Equilibrium

As mentioned in the first paragraph of this book, thermodynamics involves the study of systems that undergo change between a set of very restricted states called equilibrium states. Equilibrium is the stationary limit reached by some transient process, and one must establish that equilibrium has been reached before thermodynamic analysis can be used. As we shall see, the statistics of large numbers lead to great simplifications when considering stationary, or equilibrium, states.

At the microscopic level, processes are usually very fast. Typical internal motions such as rotation or vibration occur with periods of 10⁻¹²–10⁻¹⁵ s. In a gas, collisions occur every 10⁻⁹–10⁻¹² s. Thus, the internal modes tend toward equilibrium, at least locally. At the macroscopic level, however, processes such as flow, heat transfer, and mass transfer can be quite slow. Table 1.1 lists various processes and characteristic times for them to occur. (Here, L is a characteristic length, V a characteristic velocity, α the thermal diffusivity, and D the mass diffusivity.) When it is necessary to understand the system behavior while these transient processes are occurring, one must use the principles of fluid mechanics, heat transfer, mass transfer, and so on.

This leads to a very important concept, that of "local thermodynamic equilibrium" (LTE). If local molecular relaxation processes are very fast compared to


Table 1.1 Characteristic times of transport processes

Process        | Characteristic time
Flow           | L/V
Heat transfer  | L²/α
Mass transfer  | L²/D

the characteristic times of Table 1.1, then locally within the flow thermodynamic equilibrium will hold, allowing the use of all the normal thermodynamic relationships. Indeed, this approximation holds for almost all flow, heat transfer, and mass transfer processes normally encountered, with the exception of very high-speed flows.
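To see why LTE is so robust, compare the Table 1.1 estimates for illustrative, air-like values (all numbers here are rough assumptions) with a typical molecular collision time:

```python
# Hypothetical numbers to put Table 1.1 in perspective: macroscopic transport
# times for air-like values versus a typical gas collision time.
L = 0.1          # m, characteristic length (assumed)
V = 1.0          # m/s, characteristic velocity (assumed)
alpha = 2.2e-5   # m^2/s, approximate thermal diffusivity of air
D = 2.0e-5       # m^2/s, approximate mass diffusivity

t_flow = L / V
t_heat = L**2 / alpha
t_mass = L**2 / D
t_collision = 1e-10   # s, order of magnitude between gas collisions

# Every macroscopic time is enormously longer than the molecular time scale,
# which is why local thermodynamic equilibrium is such a good approximation.
print(t_flow, t_heat, t_mass, t_flow / t_collision)
```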

1.5

Thermodynamic Properties

Properties are quantities that describe the state of a system. Position, rotational orientation, and velocity, for example, describe the instantaneous dynamic state of a solid body. In thermodynamics we are concerned with problems involving work and heat transfer. Therefore, energy must be an important property, as it is a measure of the ability of matter to do work or transfer heat. Work and heat transfer are dynamic quantities, and are not descriptive of the state of a system. However, the amount of work or heat transfer required to bring a system to a given state will clearly be influenced by the size of the system, and thus volume and mass are also important thermodynamic properties. Composition is also an important property, as it will clearly affect the microscopic behavior of a system.

To summarize, we have identified energy, volume, and mass (or moles) as important properties. For a multicomponent system we must also specify either the mass or moles of each species or phase present. The properties thus identified, U, V, and Ni (where Ni is the number of moles of component i), have an important feature in common: they are all extensive in nature. By that we mean that they are a measure of the size of the system. In fact, were the size of a system to change, all other things being held constant, each of these properties would change in the same proportion. We thus formally call them extensive properties. If, for a closed system, the extensive properties U, V, and Ni are specified, then the thermodynamic state is completely specified.

In addition to the extensive properties, we shall be concerned with several intensive properties. Intensive properties are properties that do not scale with the size of a system, but rather are functions of the normalized extensive properties. Temperature and pressure are examples of important intensive properties. As we shall see, the intensive properties are each paired with a related extensive property, and are defined in terms of the extensive properties. The extensive properties can be cast in an intensive form by normalization, usually in terms of the volume or the total mass or moles. However, as we shall see, they remain fundamentally different in character from the true intensive properties.


1.6

The Fundamental Problem of Thermodynamics

There are four types of thermodynamic problems. These are:

1. Initial state/final state problems. These involve specifying the initial states of two or more subsystems, which may include reservoirs. Work, heat transfer, or mass transfer is then allowed, and the final states of the subsystems are determined.
2. Specified interaction problems. In specified interaction problems, one specifies the nature and value of the interactions of a system with its surroundings. Consider compressing a gas in an insulated piston–cylinder arrangement. If the initial state of the gas and the work done in compression are specified, then one can calculate the final state of the gas.
3. Limiting process problems. In this case, the initial and final states of a system are specified, and the maximum or minimum amount of heat transfer or work required is obtained. Predicting the maximum output of an ideal gas turbine is an example.
4. Cycle analysis. The analysis of a cyclical sequence of processes, such as the Rankine vapor power cycle.

In fact, each of the above problems is a subset of the first. Consider the adiabatic cylinder shown in Fig. 1.1. A piston separates the cylinder into two subsystems. Several possibilities can occur:

1. The piston is adiabatic, fixed, and impermeable.
2. The piston becomes, in the language of thermodynamics, diathermal. This means that heat transfer can occur through the piston.
3. The piston is allowed to move. Thus, work can take place.
4. The piston becomes porous. Mass transfer is allowed.

Figure 1.1 The fundamental problem.

The first case means that the system is completely closed to any kind of interaction, and if the subsystems are individually in equilibrium they remain so. Each additional change removes a constraint and results in a possible spontaneous process leading to a new equilibrium state. The fundamental problem of thermodynamics is to find the final state once a given constraint is removed. Once it is possible to solve the fundamental problem, problems of types 1–4 can all be solved by breaking the system down into its component processes and analyzing each process individually. This will usually result in a simultaneous set of nonlinear algebraic equations.

1.7

Analysis of Non-equilibrium Behavior

As we have seen, thermodynamics is the study of equilibrium states. That is, given a change in system constraints, what new equilibrium state arises? Thermodynamics does not address the nature of the process or processes that result in a change of equilibrium state. The dynamics part of the word thermodynamics is thus something of a misnomer. Many workers have suggested using the word thermostatics. However, two centuries of usage are not easily put aside, and our use of the name thermodynamics is unlikely to change. More common is the use of the term equilibrium thermodynamics.

The question naturally arises: how do we deal with processes and non-equilibrium states? This is the subject of kinetic theory, which leads to the disciplines of fluid mechanics, heat transfer, and mass transfer. As we shall see from our study of microscopic thermodynamics, the equilibrium state arises from the statistical averaging of the possible microscopic states allowed by the macroscopic constraints. In complete forms of kinetic theory, equations are derived that describe the departure of these state relations from equilibrium. For example, in a gas there is a unique distribution of atomic velocities that occurs at equilibrium. For momentum transport or heat transfer to occur, this distribution must depart from its equilibrium form. We will explore this subject in a later chapter.

1.8

Summary

In this chapter we explored some of the fundamental concepts upon which the field of thermodynamics rests. We started with a short discussion of how matter is composed of atoms and molecules, and how the average motions of these particles at the microscopic level have important implications at the macroscopic level. Essential to understanding thermodynamics are the concepts of energy, work, heat transfer, and the 1st Law. Work, of course, is a force acting through a distance:

W12 = ∫_{x1}^{x2} F · dx    (1.14)

Using Newton's Law F = ma, we derived the concept of kinetic energy as

∫_{x1}^{x2} F · dx = (1/2)m(V2² − V1²)    (1.15)

or

KE = (1/2)mV²    (1.16)

After some manipulation, we also derived the potential energy

PE = mgz    (1.17)

Putting these concepts together results in the 1st Law of Thermodynamics

ΔE = ΔKE + ΔPE = W12    (1.18)

Adding in the possibility of kinetic energy and work taking place at the microscopic level, the 1st Law becomes

ΔE = ΔU + ΔKE + ΔPE = W12 + Q12    (1.19)

Work and heat transfer are processes that depend on the details, or path, by which they take place. Internal, kinetic, and potential energies, on the other hand, are properties related to the amount of work or heat transfer that takes place. The composition of any working substance is important as well. Finally, one can cast all thermodynamic problems in terms of the fundamental problem described in Section 1.6. We explore this in detail in Chapter 2.

1.9

Problems

1.1 Compare the kinetic energy of a 0.5-km diameter asteroid approaching the Earth at 30,000 m/s with the 210,000 TJ released by a 50-megaton hydrogen bomb.

1.2 Calculate the potential energy required for a 160-lbm person to hike to the top of Long's Peak (14,259 ft) from downtown Boulder, CO (5,430 ft). Compare that to the energy in a McDonald's large Big Mac Meal with a chocolate shake.

1.3 Estimate the number of molecules in the Earth's atmosphere.

1.4 Calculate the number of molecules in one cubic meter of air at STP. Then estimate the total kinetic energy of this mass assuming that the average speed of the molecules is the speed of sound in air at STP. Compare this to the potential energy associated with lifting this mass of air 20 m.

1.5 Consider two equal 1000-cm³ cubes of copper. Initially separated, one has a temperature of 20 °C and the other is at 100 °C. They are then brought into contact along one wall, but otherwise isolated from their surroundings. Estimate how long it will take for the two cubes to come into equilibrium.

2

Fundamentals of Macroscopic Thermodynamics

In this chapter we explore the basis of macroscopic thermodynamics, introducing the concept of entropy and providing a deeper understanding of properties such as temperature, pressure, and chemical potential. We end with a number of property relationships that can be used in practical ways. To proceed at the macroscopic level, however, it is necessary to introduce some postulatory basis for our considerations. Clearly, without starting at the microscopic level there is little we can say about the macroscopic averages. Even in statistical mechanics, however, we must make statistical postulates before the mathematical model can be constructed. (Note that we closely follow Callen [3].)

2.1

The Postulates of Macroscopic (Classical) Thermodynamics

The postulates are:

I. There exist certain states (called equilibrium states) of simple systems that, macroscopically, are characterized completely by the internal energy U, the volume V, and the mole numbers N1, N2, . . . , Nr of the chemical components, where r is the number of chemical components.

This postulate has the important consequence that previous history plays no role in determining the final state. This implies that all atomic states must be allowed, which is equivalent to making an assumption called a priori equal probability. This assumption is not always met. (Two examples of non-equilibrium states that are quite stable are heat-treated steel and a non-Newtonian fluid with hysteresis. We can often live with systems in metastable equilibrium if it does not otherwise interfere with our thermodynamic analysis.)

II. There exists a function called the entropy, S, of the extensive properties of any composite system, defined for all equilibrium states and having the following property: The values assumed by the extensive properties are those that maximize the entropy over the manifold of constrained equilibrium states.

This postulate is based on the following argument. Suppose we repeatedly observe a composite system evolve from a given initial state to its final equilibrium state. This is the fundamental problem. If the final state is the same time after time, we would


naturally conclude that some systematic functional dependency exists between the initial state of the system and the final state. Mathematically, we would say that the final state is a stable point of this function. That being the case, there must be some function that displays the property of stability at the final state, perhaps a maximum or a minimum. We make that assumption, calling the function entropy, and define it to have a maximum at the final state.

We can solve the basic problem if the relationship between S and the other extensive properties is known for the matter in each subsystem of a thermodynamic composite system. This relationship, called the fundamental relation, is of the form

S = S(U, V, Ni)    (2.1)

and plays a central role in thermodynamics. To evaluate the fundamental relation from first principles requires the application of statistical mechanics, the subject we will study during the second part of this course. As we shall see then, even today it is not always possible to theoretically determine the fundamental relation for complex materials.

It turns out that one could define the entropy in many different ways. The second postulate only asserts that the function be a maximum at the final equilibrium state, and says nothing about other mathematical properties such as linearity, additivity, and so on. The third postulate addresses these issues.

III. The entropy of a composite system is additive over the constituent subsystems. The entropy is continuous and differentiable and is a monotonically increasing function of energy.

This postulate ensures that entropy is a well-behaved function, and simplifies the task of finding the final equilibrium state. Additivity assures that the entropy of a composite system can be written as

S = Σ_j Sj    (2.2)

where j is an index identifying subsystems of the composite system. Sj is a function of the properties of the subsystem:

Sj = Sj(Uj, Vj, Nij)    (2.3)

A consequence of the additivity property is that the entropy must be a homogeneous, first-order function of the extensive properties:

S(λU, λV, λNi) = λS(U, V, Ni)    (2.4)

where λ is a constant multiplier. This is consistent with the idea that U, V, and Ni are extensive properties, that is, they are a measure of the extent of the system. Thus, entropy must also be an extensive property. The monotonic property implies that

(∂S/∂U)_{V,Ni} > 0    (2.5)


The continuity, differentiability, and monotonic properties ensure that S can be inverted with respect to U, and that the energy always remains single valued. This allows us to write

U = U(S, V, Ni)    (2.6)

as being equivalent to the entropy form of the fundamental relation. Each form contains all thermodynamic information. Note that because entropy is extensive, we can write

S(U, V, Ni) = N S(U/N, V/N, Ni/N)    (2.7)

where N is the total number of moles in the system. For a single-component system, this becomes

S(U, V, N) = N S(U/N, V/N, 1)    (2.8)

If we define the intensified, or normalized, properties

u ≡ U/N,  v ≡ V/N,  s ≡ S/N    (2.9)

then the fundamental relation becomes

s = s(u, v)    (2.10)

This form is most familiar to undergraduate engineering students. As will be seen later, the final postulate is necessary to determine the absolute value of the entropy.

IV. The entropy of any system vanishes in the state for which

(∂U/∂S)_{V,Nj} = 0    (2.11)

The fourth postulate is also known as the 3rd Law of Thermodynamics, or the Nernst postulate. As we shall see, this postulate states that

S → 0 as T → 0    (2.12)

2.2

Simple Forms of the Fundamental Relation

As stated above, the fundamental relation plays a central role in thermodynamic theory. It contains all the necessary information about the equilibrium behavior of specific substances, and it is necessary for solving the fundamental problem. For a limited number of cases, analytic forms of the fundamental relation are available. We present two here, without derivation, to use in subsequent examples.


2.2.1

Van der Waals Substance

In 1873 van der Waals [16] proposed a form for the fundamental relation of a simple compressible substance based on heuristic arguments regarding the nature of intermolecular interactions. His arguments have subsequently been shown to be qualitatively correct, and the resulting fundamental relation is very useful for illustrating typical behaviors. Van der Waals recognized that when two atoms first approach they experience an attractive force due to the net effect of electrostatic forces between the electrons and the nuclei present. However, if the nuclei are brought into close proximity, the repulsive force of like charges dominates the interaction. The potential energy associated with this force, defined as

V(r) = −∫ F dr    (2.13)

is illustrated in Fig. 2.1. The point of zero force, where the attractive and repulsive forces just balance, corresponds to the minimum in the potential energy. It is this property that allows molecules to form, and subsequently allows atoms and molecules to coalesce into liquids or solids.

Figure 2.1 Interatomic potential energy for the diatomic molecule.

Based on these arguments, and utilizing a specific form of the potential energy function, van der Waals showed that one form of the fundamental relation that satisfies the postulates can be written as

S = NR ln[(V/V0 − b)(U/U0 + aV0/V)^c]    (2.14)

where a, b, and c are parameters of the relationship and R is the universal gas constant.

2.2.2

Ideal Gas

In the limit where intermolecular forces play little role in the macroscopic behavior of a substance, v becomes large compared to b, and u large compared to a/v. In this case, called the ideal or dilute gas limit, the fundamental relation becomes

S = S0 + NR[c ln(U/U0) + ln(V/V0) − (c + 1) ln(N/N0)]    (2.15)

where cR is the specific heat at constant volume (e.g. for a monatomic gas c = 3/2) and S0, U0, V0, and N0 are reference values. The ideal gas fundamental relation does not satisfy the fourth postulate. This is because it is a limiting expression, not valid over the entire range of possible values of the independent properties.
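Because the fundamental relation contains all thermodynamic information, the equations of state follow from its derivatives. As a numerical sanity check of Eq. (2.15), using arbitrary reference values and finite differences, the familiar results 1/T = cR/u and p/T = R/v are recovered:

```python
import math

# Numerical check (with arbitrary reference values, an assumption for
# illustration) that the ideal-gas fundamental relation (2.15) reproduces
# 1/T = cR/u and p/T = R/v through its partial derivatives.
R = 8.314
c = 1.5                               # monatomic gas
S0, U0, V0, N0 = 0.0, 1.0, 1.0, 1.0   # arbitrary reference state

def S(U, V, N):
    return S0 + N * R * (c * math.log(U / U0) + math.log(V / V0)
                         - (c + 1) * math.log(N / N0))

U, V, N = 3740.0, 0.025, 1.0   # roughly 1 mol of monatomic gas near 300 K
h = 1e-6

dSdU = (S(U + h, V, N) - S(U - h, V, N)) / (2 * h)   # central difference: 1/T
dSdV = (S(U, V + h, N) - S(U, V - h, N)) / (2 * h)   # central difference: p/T

T = 1.0 / dSdU
p = T * dSdV
print(T, p)
```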

2.3

Equilibrium and the Intensive Properties

To solve the fundamental problem we follow the dictate of the second postulate. That is, for the composite system we are analyzing, we calculate the entropy of each subsystem as a function of the extensive properties. The final values of the unconstrained properties are those that maximize the entropy subject to any system-wide constraints. The maximum of a function with respect to its independent variables is found by setting the total derivative of the function to zero. That is, if

S = S(U, V, Nj)    (2.16)

then

dS = (∂S/∂U)_{V,Ni} dU + (∂S/∂V)_{U,Ni} dV + Σ_j (∂S/∂Nj)_{U,V,Ni≠j} dNj = 0    (2.17)

The partial derivatives, which we have not yet interpreted, are functions of the extensive properties. However, they are in a normalized form, being essentially ratios of extensive properties. As we shall see, they relate directly to the intensive properties with which we are familiar.

Of course, setting the derivative to zero does not guarantee that a maximum in entropy has been found. To be certain, the second derivative must be negative; otherwise one may have found a saddle point or a minimum. Furthermore, the overall stability of an equilibrium point will be determined by the character of the entropy function near the point in question. This is the subject of thermodynamic stability, which plays an important role in studying phase equilibrium.


Figure 2.2 Thermal equilibrium.

2.3.1

Thermal Equilibrium: The Meaning of Temperature

Consider the problem of thermal equilibrium, as illustrated in Fig. 2.2. The composite system consists of two adjacent subsystems, thermally isolated from the surroundings and separated by a barrier that is initially an insulator. The problem is to find the final state of the system after the barrier is allowed to conduct heat, or become diathermal. In this problem, only the internal energy can change; both V and Ni are constrained because the barrier is fixed and impervious to mass transfer. The second postulate would thus have us write

S = SA + SB    (2.18)

so that

dS = dSA + dSB    (2.19)

or

dS = (∂S/∂U)_A dUA + (∂S/∂U)_B dUB    (2.20)

Now, although the internal energy within each subsystem can change, the overall energy is constrained because the composite system is isolated from its surroundings. Thus,

U = UA + UB = const.    (2.21)

This is a constraint, or conservation relation. Indeed, this is the statement of the 1st Law of Thermodynamics for this problem, and it determines the allowed relationship between UA and UB. Thus, we can write

dU = 0 = dUA + dUB    (2.22)

and use the result to eliminate dUB in Eq. (2.20), so that

dS = [(∂S/∂U)_A − (∂S/∂U)_B] dUA    (2.23)


At equilibrium we must have dS = 0. Since dUA is the independent variation, at equilibrium

(∂S/∂U)_A = (∂S/∂U)_B    (2.24)

The thermodynamic property we associate with heat transfer and thermal equilibrium is temperature. Heat transfer occurs from a higher-temperature to a lower-temperature body, and two systems in thermal equilibrium have equal temperatures. Thus, one might be tempted to associate these partial derivatives with temperature. A short thought experiment would prove that were we to do so, however, the heat transfer between two bodies out of equilibrium would be from cold to hot! Therefore, it is conventional to define

1/T ≡ (∂S/∂U)_{V,Ni},  or equivalently  T ≡ (∂U/∂S)_{V,Ni}    (2.25)

and the equilibrium requirement becomes

TA = TB    (2.26)

Temperature and entropy are closely related quantities. Their product must have the dimension of energy, but their individual units are arbitrary. Since the units for temperature were defined empirically long before the concept of entropy was developed, it is conventional to use the historic unit of temperature to define those for entropy. In the SI system the unit of temperature is the Kelvin, so the units of entropy must be

J/K    (2.27)
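The maximization postulate can be demonstrated directly. In this sketch, with all values chosen purely for illustration, a fixed total energy is divided between two ideal-gas subsystems at fixed volume; a brute-force search for the entropy maximum lands on the split at which the temperatures, computed from 1/T = cR/u, are equal:

```python
import math

# Discrete version of the thermal-equilibrium problem: distribute a fixed
# total energy U between two ideal-gas subsystems and locate the split that
# maximizes the total entropy. All numbers are illustrative assumptions.
R, c = 8.314, 1.5
NA, NB = 1.0, 2.0          # moles in subsystems A and B
U_total = 9000.0           # J, fixed by the 1st Law

def S_sub(U, N):
    # ideal-gas entropy at fixed V and N, up to an additive constant
    return N * R * c * math.log(U / N)

best_UA, best_S = None, -1e30
for i in range(1, 9000):   # try every 1 J split of the total energy
    UA = float(i)
    Stot = S_sub(UA, NA) + S_sub(U_total - UA, NB)
    if Stot > best_S:
        best_S, best_UA = Stot, UA

# at the entropy maximum the temperatures 1/T = cR/u are equal
TA = best_UA / (NA * c * R)
TB = (U_total - best_UA) / (NB * c * R)
print(best_UA, TA, TB)
```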

2.3.2

Mechanical Equilibrium: The Meaning of Pressure

Suppose we now replace the barrier separating subsystems A and B in Fig. 2.2 with a diathermal piston. Now, in addition to heat transfer, the volumes can change and work will be done as the piston moves. Unlike the previous problem, both internal energy and volume can change. Therefore, the total derivative of the system entropy must be written

dS = (∂S/∂U)_A dUA + (∂S/∂V)_A dVA + (∂S/∂U)_B dUB + (∂S/∂V)_B dVB    (2.28)

where it is implicitly assumed that in evaluating each partial derivative, the other independent properties are held constant. As in the previous problem, the total energy is constrained, and Eqs (2.21) and (2.22) still hold. In addition, we must consider the constraint on volume,

V = VA + VB = const.    (2.29)

which leads to

dV = 0 = dVA + dVB    (2.30)


Using the conservation relations of Eqs (2.22) and (2.30) to replace dUB and dVB, we get

dS = [(∂S/∂U)_A − (∂S/∂U)_B] dUA + [(∂S/∂V)_A − (∂S/∂V)_B] dVA    (2.31)

As in the thermal case, dS must go to zero, resulting in the requirements that

(∂S/∂U)_A = (∂S/∂U)_B  and  (∂S/∂V)_A = (∂S/∂V)_B    (2.32)

We have already interpreted the meaning of the first relation: the temperatures of the two subsystems must be equal at thermal equilibrium. Likewise, mechanical equilibrium clearly requires that the force on each side of the piston be equal, which in the case of a piston requires that the pressures on each side be equal. Thus, one would not be too surprised to find the partial derivative of entropy with respect to volume to be related to pressure. The dimensions of the derivative must be entropy over volume. However, we set the dimensions of entropy in our discussion of thermal equilibrium, so

Entropy/Volume = Energy/(Volume × Temperature) = (Force × Length)/(Length³ × Temperature) = Force/(Length² × Temperature)    (2.33)

Therefore, the derivative has dimensions of pressure over temperature, and we define the thermodynamic pressure with the relationship

p/T ≡ (∂S/∂V)_{U,Ni}    (2.34)

and the full equilibrium requirement becomes

TA = TB and pA = pB    (2.35)

The observant student will wonder why we included heat transfer in the problem of mechanical equilibrium. Interestingly, without heat transfer the problem has no stable solution. Indeed, were the piston released from an initial condition of non-equilibrium without heat transfer (or, equivalently, friction), it would oscillate forever.

2.3.3

Matter Flow and Chemical Equilibrium: The Meaning of Chemical Potential

Now consider the case where the fixed, diathermal barrier of the first problem becomes permeable to one or more chemical components while remaining impermeable to all others. For this problem the internal energy and the mole number (say N1) of the species under consideration become the independent variables, and

dS = (∂S/∂U)_A dUA + (∂S/∂N1)_A dN1A + (∂S/∂U)_B dUB + (∂S/∂N1)_B dN1B    (2.36)

As above, we set the derivative to zero, leading to the restrictions that

(∂S/∂U)_A = (∂S/∂U)_B  and  (∂S/∂N1)_A = (∂S/∂N1)_B    (2.37)


Again, we have already identified the derivative of entropy with respect to internal energy as the inverse of temperature, so that as before equilibrium requires that the subsystem temperatures be equal. The dimensions of $\partial S/\partial N$ are entropy over mole number, or

$$\frac{\text{Entropy}}{\text{Moles}} = \frac{\text{Energy}}{\text{Mole}\cdot\text{Temperature}} \tag{2.38}$$

It is conventional to identify the derivative as

$$\frac{\mu_1}{T} \equiv -\left(\frac{\partial S}{\partial N_1}\right)_{U,V,N_{i\neq 1}} \tag{2.39}$$

where $\mu$ is called the chemical potential and plays the same role as pressure in determining a state of equilibrium with respect to matter flow. That is, the subsystem chemical potentials must be equal at equilibrium for each species involved in mass transfer, and we have

$$T_A = T_B \quad\text{and}\quad \mu_A = \mu_B \tag{2.40}$$

This reasoning can be applied to chemically reacting systems as follows. Consider the case of a closed system, whose overall energy and volume are fixed, but within which an initial batch of $r$ chemical compounds is allowed to chemically react. In this case, the entropy is a function only of the mole numbers of the reacting species. Therefore

$$dS = -\sum_{i=1}^{r} \frac{\mu_i}{T}\,dN_i \tag{2.41}$$

Setting this expression to zero results in one equation with $r$ unknown mole numbers. For a chemically reacting system, atoms are conserved, not moles. The mole numbers are only constrained in a way that ensures conservation of atoms. If $n_{ki}$ is the number of $k$-type atoms in species $i$, and $p_k$ is the number of moles of $k$-type atoms initially present in the system, then

$$p_k = \sum_{i=1}^{r} n_{ki} N_i \tag{2.42}$$

These are $a$ equations, where $a$ is the number of different types of atoms within the system. If the fundamental relation is known, then these equations may be solved for the unknown equilibrium mole numbers using the method of undetermined multipliers, as described in Chapter 5 for ideal gas mixtures.
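As an illustration of the atom-conservation constraint of Eq. (2.42), the sketch below (the species set and mole numbers are assumptions for illustration, not from the text) builds the $n_{ki}$ table for an H2/O2/H2O mixture and checks that the atom-mole totals $p_k$ are unchanged by the reaction 2 H2 + O2 → 2 H2O even though the species mole numbers change.

```python
# Illustrative sketch of Eq. (2.42) for a hypothetical H2/O2/H2O mixture
# (species set and mole numbers are assumptions, not from the text).
# n[k][i] is the number of k-type atoms in species i.
species = ["H2", "O2", "H2O"]
n = {"H": [2, 0, 2],  # H atoms in H2, O2, H2O
     "O": [0, 2, 1]}  # O atoms in H2, O2, H2O

def atom_moles(N):
    """p_k = sum_i n_ki N_i for mole numbers N (one entry per species)."""
    return {k: sum(nki * Ni for nki, Ni in zip(row, N)) for k, row in n.items()}

N_initial = [2.0, 1.0, 0.0]  # before reaction
N_final = [0.0, 0.0, 2.0]    # after complete reaction 2 H2 + O2 -> 2 H2O

p_initial = atom_moles(N_initial)
p_final = atom_moles(N_final)
print(p_initial, p_final)
```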

2.4 Representation and the Equations of State

We have been writing the functional form of the fundamental relation as

$$S = S(U, V, N_i) \tag{2.43}$$

This form is called the entropy representation. Based on the considerations of the previous section, we can now write the differential form of this relation as


Table 2.1 Equations of state

| Van der Waals fluid | Ideal gas |
|---|---|
| $\dfrac{1}{T} = \dfrac{cR}{u + a/v}$ | $\dfrac{1}{T} = \dfrac{cR}{u}$ |
| $\dfrac{p}{T} = \dfrac{R}{v - b} - \dfrac{acR}{uv^2 + av}$ | $\dfrac{p}{T} = \dfrac{R}{v}$ |

$$dS = \frac{1}{T}\,dU + \frac{p}{T}\,dV - \sum_{i=1}^{r} \frac{\mu_i}{T}\,dN_i \tag{2.44}$$

We could also write

$$U = U(S, V, N_i) \tag{2.45}$$

Because of the statement regarding the monotonic relationship between entropy and energy in the third postulate, this relation, called the energy representation, contains precisely the same information as its expression in the entropy representation. It is, therefore, also a form of the fundamental relation. The differential form can be written as

$$dU = T\,dS - p\,dV + \sum_{i=1}^{r} \mu_i\,dN_i \tag{2.46}$$

In both representations the coefficients in the differential form are intensive parameters that can be calculated as functions of the independent properties from the fundamental relation. Such relationships are generally called equations of state. In the energy representation, for example, the equations of state take on the functional forms

$$T = T(S, V, N_i), \qquad p = p(S, V, N_i), \qquad \mu_i = \mu_i(S, V, N_i) \tag{2.47}$$

Note that if the equations of state are known, then the fundamental relation can be recovered by integrating its differential form. The equations of state in the energy representation for a single-component van der Waals fluid and ideal gas are given in Table 2.1.
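The Table 2.1 entries can be evaluated directly. The sketch below (the constants $a$, $b$, $c$ and the state point are hypothetical values, not from the text) computes $T$ and $p$ for a van der Waals fluid from given $u$ and $v$, and checks that the result agrees with the familiar mechanical equation of state $p = RT/(v-b) - a/v^2$.

```python
# Sketch evaluating the Table 2.1 van der Waals equations of state;
# the constants a, b, c below are hypothetical, not from the text.
R = 8.314
a, b, c = 0.14, 3.0e-5, 1.5  # assumed vdW constants (molar SI units)

def T_of(u, v):
    # 1/T = cR/(u + a/v)
    return (u + a / v) / (c * R)

def p_of(u, v):
    # p/T = R/(v - b) - acR/(u v^2 + a v)
    T = T_of(u, v)
    return T * (R / (v - b) - a * c * R / (u * v**2 + a * v))

u, v = 5000.0, 1.0e-3  # assumed molar energy (J/mol) and volume (m^3/mol)
T = T_of(u, v)
p = p_of(u, v)
print(T, p)
```

Algebraically the second Table 2.1 entry is equivalent to $p = RT/(v-b) - a/v^2$, and in the limit $a \to 0$, $b \to 0$ the ideal-gas forms are recovered.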

2.5 The Euler Equation and the Gibbs–Duhem Relation

There are two very useful general relationships that can be derived directly from the properties of the fundamental relation. Called the Euler equation and the Gibbs–Duhem relation, they relate the extensive and intensive properties in particularly direct ways. Starting from the homogeneous, first-order property, we can write, for any $\lambda$,

$$U(\lambda S, \lambda V, \lambda N_i) = \lambda\,U(S, V, N_i) \tag{2.48}$$


Differentiating with respect to $\lambda$,

$$\frac{\partial U}{\partial(\lambda S)}\frac{\partial(\lambda S)}{\partial\lambda} + \frac{\partial U}{\partial(\lambda V)}\frac{\partial(\lambda V)}{\partial\lambda} + \sum_{i=1}^{r}\frac{\partial U}{\partial(\lambda N_i)}\frac{\partial(\lambda N_i)}{\partial\lambda} = U \tag{2.49}$$

Noting that $\partial(\lambda S)/\partial\lambda = S$, and so on, and taking $\lambda = 1$,

$$\frac{\partial U}{\partial S}S + \frac{\partial U}{\partial V}V + \sum_{i=1}^{r}\frac{\partial U}{\partial N_i}N_i = U \tag{2.50}$$

or

$$U = TS - pV + \sum_{i=1}^{r}\mu_i N_i \tag{2.51}$$

This is called the Euler equation. In the entropy representation it can be written

$$S = \frac{1}{T}U + \frac{p}{T}V - \sum_{i=1}^{r}\frac{\mu_i}{T}N_i \tag{2.52}$$

The Gibbs–Duhem relation can be derived as follows. Start with the energy representation Euler equation in differential form:

$$dU = T\,dS + S\,dT - p\,dV - V\,dp + \sum_{i=1}^{r}\mu_i\,dN_i + \sum_{i=1}^{r}N_i\,d\mu_i \tag{2.53}$$

However, the differential form of the fundamental relation (energy representation) is

$$dU = T\,dS - p\,dV + \sum_{i=1}^{r}\mu_i\,dN_i \tag{2.54}$$

Subtracting the two, we obtain the Gibbs–Duhem relation

$$S\,dT - V\,dp + \sum_{i=1}^{r}N_i\,d\mu_i = 0 \tag{2.55}$$

The Gibbs–Duhem relation shows that not all the intensive parameters are independent of each other. The actual number of independent intensive parameters is called the thermodynamic degrees of freedom. A simple system of r components thus has r + 1 degrees of freedom. As we shall see, this corresponds directly to Gibbs’ phase rule.
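The Euler equation can be verified numerically. The following sketch (the reference constants and state point are hypothetical) differentiates the simple ideal-gas fundamental relation $S(U,V,N)$ by central differences to obtain $T$, $p$, and $\mu$, and confirms that $TS - pV + \mu N$ reproduces $U$.

```python
# Sketch (hypothetical numbers): numerical check of the Euler equation
# U = T S - p V + mu N for the simple ideal gas. T, p, and mu are
# obtained from central-difference derivatives of S(U, V, N).
import math

R, c = 8.314, 1.5
s0, u0, v0 = 50.0, 2500.0, 0.01  # assumed reference constants

def S(U, V, N):
    u, v = U / N, V / N
    return N * (s0 + c * R * math.log(u / u0) + R * math.log(v / v0))

U, V, N = 7500.0, 0.05, 2.0
h = 1e-6

dSdU = (S(U * (1 + h), V, N) - S(U * (1 - h), V, N)) / (2 * U * h)
dSdV = (S(U, V * (1 + h), N) - S(U, V * (1 - h), N)) / (2 * V * h)
dSdN = (S(U, V, N * (1 + h)) - S(U, V, N * (1 - h))) / (2 * N * h)

T = 1.0 / dSdU   # 1/T = (dS/dU)_{V,N}
p = T * dSdV     # p/T = (dS/dV)_{U,N}
mu = -T * dSdN   # mu/T = -(dS/dN)_{U,V}

euler = T * S(U, V, N) - p * V + mu * N
print(euler, U)
```

The agreement is a direct numerical consequence of $S$ being homogeneous first order in its extensive arguments, which is exactly the property the Euler derivation exploits.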

2.6 Quasi-static Processes and Thermal and Mechanical Energy Reservoirs

Many thermodynamic systems of interest interact with surroundings that are much larger than the system under study and whose properties therefore change minimally upon interaction with the system. Examples include heat exchange with a large body of water or the atmosphere, and expansion against the atmosphere. Idealizing this interaction simplifies system analysis. Part of our idealization will be that during


interaction with a reservoir, the reservoir evolves in a quasi-static manner. By quasi-static we mean that the processes occur slowly enough that the reservoir can be treated as though it were in equilibrium at all times, and in the case of the mechanical energy reservoir described below, the process is frictionless. Some texts use the term quasi-equilibrium to describe quasi-static processes. The quasi-static idealization is very important in thermodynamic theory, as it forms the basis of reversibility, a concept we will use to analyze the limiting behavior of systems.

First consider the mechanical energy reservoir (MER), represented in Fig. 2.3 as a very large piston and cylinder without friction. We use this reservoir to model work done on (or by) the surroundings. If a system does work on the MER, the 1st Law can be written

$$dU = \delta W \tag{2.56}$$

If the work is carried out in a quasi-static manner, then

$$dS = \frac{1}{T}\,dU + \frac{p}{T}\,dV \tag{2.57}$$

Since $\delta W = -p\,dV$, this leads to the MER being isentropic.

The thermal energy reservoir (TER) is illustrated in Fig. 2.4. We use this reservoir to model heat transfer with the surroundings. If a system transfers heat to the TER, the 1st Law for the TER becomes

$$dU = \delta Q \tag{2.58}$$

Figure 2.3 Mechanical energy reservoir.

Figure 2.4 Thermal energy reservoir.


For this constant-volume system, the differential form of the fundamental relation becomes

$$dS = \frac{1}{T}\,dU \tag{2.59}$$

Thus, the entropy of a TER is not constant. In fact, by the 1st Law, $dU = \delta Q$, so that

$$dS = \frac{\delta Q}{T} \tag{2.60}$$

This is Clausius' famous relationship, with which he defined entropy.

2.7 Equilibrium in the Energy Representation

In thermodynamic systems analysis we are very much interested in understanding the relationship between independent and dependent variables. In the entropy representation, $U$, $V$, and $N_i$ are the independent variables while $S$, $T$, $p$, and $\mu_i$ are the dependent variables. The fundamental problem is to determine the values of the dependent properties in each subsystem given that certain processes are allowed. In the entropy representation the properties are determined by maximizing the entropy of the overall system.

The energy representation offers a different set of independent variables, namely $S$, $V$, and $N_i$. Since $S$ is not a conserved quantity, thermal and mechanical energy reservoirs are required to control the independent properties in the energy representation. It is logical to ask how the final values of the dependent properties in the subsystems are determined. In fact, the equilibrium state is determined by minimizing the energy of the system (not including the reservoirs), a result that will be important in considering other combinations of dependent and independent properties.

To prove the energy minimization principle, consider a system that is allowed to interact with reservoirs. The second postulate (2nd Law) holds for the system and the reservoirs. Now, assume that the energy of the system (not including the reservoirs) does not have the smallest possible value consistent with a given total entropy. Then withdraw energy from the system as work into a MER, maintaining the entropy of the system constant. From the fundamental relation,

$$dS = \frac{\delta Q}{T} \tag{2.61}$$

Thus, since $S$ is constant,

$$dU = -p\,dV \tag{2.62}$$

which is the work done on the MER. Now, return the system to its original energy by transferring heat from a TER. However, from the fundamental relation,

$$dU = T\,dS \tag{2.63}$$

Since the temperature is positive, S must have increased. Thus, the original state could not have been in equilibrium.


2.8 Alternative Representations – Legendre Transformations

In practice, there are several possible combinations of dependent and independent properties. We have focused so far on using the fundamental relation in the entropy representation and touched briefly on the energy representation. However, there are a variety of thermodynamic problems for which these representations are not appropriate. In many problems we may wish to treat one of the intensive properties as an independent variable. Because the intensive properties are derivatives of the extensive properties, however, rearranging the fundamental relation must be carried out with care using the method of Legendre transformations (see [17]).

Consider the case of a fundamental relation with a single independent variable,

$$Y = Y(X) \tag{2.64}$$

The intensive property is

$$z = \frac{\partial Y}{\partial X} \tag{2.65}$$

We wish to obtain a suitable function with $z$ as the independent variable that contains all the information of the original function. We could eliminate $X$ between these two relationships, so that

$$Y = Y(z) \tag{2.66}$$

However, the replacement would only be accurate to within an integration constant. In other words, to invert the relationship we must integrate $z(Y)$ to recover $X$, or

$$X = \int \frac{dY}{z} + C \tag{2.67}$$

where $C$ is unknown. A method which takes this problem into account is the Legendre transformation. As illustrated in Fig. 2.5, it involves defining the transformed relationship in terms of both the local slope, $z$, and the $Y$ intercept, $\psi$. For a given value of $X$,

$$z = \frac{Y - \psi}{X - 0} \tag{2.68}$$

Figure 2.5 Graphical illustration of the transformation process.


thus

$$\psi = Y - zX \tag{2.69}$$

The procedure is to use Eqs (2.68) and (2.69) to eliminate $X$ and $Y$ in Eq. (2.64).

2.8.1 Example 2.1

Perform the forward and reverse Legendre transformation for

$$Y = X^2 \qquad [2.8.1]$$

Then

$$z = 2X$$

Thus

$$Y = \left(\frac{z}{2}\right)^2 = \frac{z^2}{4}$$

which with

$$X = \frac{z}{2}$$

leads to

$$\psi = \frac{z^2}{4} - \frac{z^2}{2} = -\frac{z^2}{4}$$

For the reverse problem we are given

$$\psi = \psi(z)$$

We wish to recover

$$Y = Y(X)$$

Since

$$\psi = Y - zX$$

we can write

$$d\psi = dY - z\,dX - X\,dz$$

but $dY = z\,dX$, so that

$$d\psi = -X\,dz$$

or

$$-X = \frac{d\psi}{dz}$$

Use this relation and Eq. [2.8.1] to eliminate $z$ and $\psi$. Using our example,

$$-X = -\frac{z}{2}$$

thus

$$Y = X^2$$
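The example above is easy to check numerically. This sketch carries out the forward transform of $Y = X^2$ by finite differences and then inverts $\psi(z) = -z^2/4$ using $-X = d\psi/dz$.

```python
# Numerical sketch of Example 2.1: forward and reverse Legendre
# transformation of Y = X^2 (so z = 2X and psi(z) = -z^2/4).
def Y(X):
    return X**2

def forward(X, h=1e-6):
    """Return (z, psi): slope z = dY/dX and intercept psi = Y - zX."""
    z = (Y(X + h) - Y(X - h)) / (2 * h)
    return z, Y(X) - z * X

def psi(z):
    return -z**2 / 4.0

def reverse(z, h=1e-6):
    """Recover (X, Y) from psi(z) using -X = dpsi/dz."""
    X = -(psi(z + h) - psi(z - h)) / (2 * h)
    return X, psi(z) + z * X  # Y = psi + zX

z, psi_val = forward(1.5)
X, Y_val = reverse(z)
print(z, psi_val, X, Y_val)
```

Starting from $X = 1.5$ the round trip recovers the same point on $Y = X^2$, illustrating that the transform loses no information.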


We can generalize by writing the form of the fundamental relation as

$$Y = Y(X_0, X_1, \ldots, X_t) \tag{2.70}$$

where $Y$ is the entropy or energy and the $X_i$ are the appropriate extensive properties. The intensive properties are then

$$z_k \equiv \left(\frac{\partial Y}{\partial X_k}\right)_{X_{i\neq k}} \tag{2.71}$$

The transformation procedure then involves fitting planes, instead of straight lines, with

$$\psi = Y - \sum_k z_k X_k \tag{2.72}$$

where the summation is over the independent properties we wish to replace. To invert $\psi$, take the total derivative

$$d\psi = dY - \sum_k z_k\,dX_k - \sum_k X_k\,dz_k = -\sum_k X_k\,dz_k \tag{2.73}$$

Thus

$$-X_k = \left(\frac{\partial\psi}{\partial z_k}\right)_{z_{i\neq k}} \tag{2.74}$$

Use this and Eq. (2.70) to replace $\psi$ and $z_k$. Note that we don't need to replace every independent variable, only those that are convenient.

2.9 Transformations of the Energy

We now consider transformations of the energy representation. These functions are called thermodynamic potential functions because they can be used to predict the ability of given systems to accomplish work, heat transfer, or mass transfer. In the energy representation the $z_k$ of the previous section correspond to $T$, $-p$, and $\mu_i$. For a simple compressible substance, in addition to the energy there are seven possible transformations, as listed in Table 2.2. In the table, the notation $U[\ ]$ is used for those functions which do not have an established name. (The term "canonical" was introduced by Gibbs to describe different system/reservoir combinations that he analyzed using ensemble statistical mechanics. We will explore these combinations in Chapter 5.)

2.10 Transformations of the Entropy

The transformations of the entropy are of more practical use when using statistical mechanics to determine the fundamental relation. In general, these transforms are called Massieu functions, and the bracket notation $S[\ ]$ is used to identify them. They are listed in Table 2.3.


Table 2.2 Transformations of the energy representation

| Function name | Form | Derivative |
|---|---|---|
| Energy | $U = U(S, V, N)$ | $dU = T\,dS - p\,dV + \mu\,dN$ |
| Helmholtz | $F = F(T, V, N) = U - TS$ | $dF = -S\,dT - p\,dV + \mu\,dN$ |
| Enthalpy | $H = H(S, p, N) = U + pV$ | $dH = T\,dS + V\,dp + \mu\,dN$ |
| | $U[S, V, \mu] = U - \mu N$ | $dU[S, V, \mu] = T\,dS - p\,dV - N\,d\mu$ |
| Gibbs | $G = G(T, p, N) = U - TS + pV$ | $dG = -S\,dT + V\,dp + \mu\,dN$ |
| Grand canonical | $U[T, V, \mu] = U - TS - \mu N$ | $dU[T, V, \mu] = -S\,dT - p\,dV - N\,d\mu$ |
| | $U[S, p, \mu] = U + pV - \mu N$ | $dU[S, p, \mu] = T\,dS + V\,dp - N\,d\mu$ |
| Microcanonical | $U[T, p, \mu] = U - TS + pV - \mu N = 0$ | $dU[T, p, \mu] = -S\,dT + V\,dp - N\,d\mu = 0$ |

Table 2.3 Transformations of the entropy representation

| Function name | Form | Derivative |
|---|---|---|
| Entropy | $S = S(U, V, N)$ | $dS = \frac{1}{T}dU + \frac{p}{T}dV - \frac{\mu}{T}dN$ |
| Canonical | $S[1/T, V, N] = S - \frac{1}{T}U$ | $dS[1/T, V, N] = -U\,d\frac{1}{T} + \frac{p}{T}dV - \frac{\mu}{T}dN$ |
| | $S[U, p/T, N] = S - \frac{p}{T}V$ | $dS[U, p/T, N] = \frac{1}{T}dU - V\,d\frac{p}{T} - \frac{\mu}{T}dN$ |
| | $S[U, V, \mu/T] = S + \frac{\mu}{T}N$ | $dS[U, V, \mu/T] = \frac{1}{T}dU + \frac{p}{T}dV + N\,d\frac{\mu}{T}$ |
| | $S[1/T, p/T, N] = S - \frac{1}{T}U - \frac{p}{T}V$ | $dS[1/T, p/T, N] = -U\,d\frac{1}{T} - V\,d\frac{p}{T} - \frac{\mu}{T}dN$ |
| Grand canonical | $S[1/T, V, \mu/T] = S - \frac{1}{T}U + \frac{\mu}{T}N$ | $dS[1/T, V, \mu/T] = -U\,d\frac{1}{T} + \frac{p}{T}dV + N\,d\frac{\mu}{T}$ |
| | $S[U, p/T, \mu/T] = S - \frac{p}{T}V + \frac{\mu}{T}N$ | $dS[U, p/T, \mu/T] = \frac{1}{T}dU - V\,d\frac{p}{T} + N\,d\frac{\mu}{T}$ |
| Microcanonical | $S[1/T, p/T, \mu/T] = S - \frac{1}{T}U - \frac{p}{T}V + \frac{\mu}{T}N = 0$ | $dS[1/T, p/T, \mu/T] = -U\,d\frac{1}{T} - V\,d\frac{p}{T} + N\,d\frac{\mu}{T} = 0$ |

2.11 Reversible Work

The transformations of the energy are particularly useful in the following way. Suppose we ask what is the maximum amount of work that can be done by a system that is held at a constant temperature (Fig. 2.6). In particular, we assume that all processes occur in a quasi-static fashion. Then, by the 1st Law,

$$U_{SYS} + U_{TER} + U_{MER} = U_{TOT} \tag{2.75}$$

or

$$dU_{MER} = -(dU_{SYS} + dU_{TER}) \tag{2.76}$$

However, from our discussion of reservoirs,

$$dU_{TER} = -T\,dS \tag{2.77}$$

Figure 2.6 Constant-temperature work.

Therefore,

$$dU_{MER} = -dF \tag{2.78}$$

Thus, the Helmholtz function is a measure of the energy available to do useful work by a constant-temperature system. For this reason the Helmholtz function is also called the Helmholtz free energy. Likewise, the enthalpy is the measure of the energy available to do reversible work by a constant-pressure system, or

$$dU_{MER} = -dH \tag{2.79}$$

The Gibbs function is the measure of the energy available to do reversible work by a constant-temperature and constant-pressure system, or

$$dU_{MER} = -dG \tag{2.80}$$

One can show that each of the thermodynamic potentials is a measure of the ability of a system to perform reversible work when the indicated intensive parameters are held constant.
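As a numerical check (the state values are hypothetical), for an isothermal ideal-gas expansion the work delivered to the MER equals the decrease in the Helmholtz free energy, $-\Delta F = NRT\ln(V_2/V_1)$; the sketch compares this with a direct quadrature of $\int p\,dV$.

```python
# Sketch (hypothetical state values): for an isothermal ideal-gas
# expansion, the work delivered to the MER is -dF = N R T ln(V2/V1);
# compare with a numerical quadrature of p dV with p = N R T / V.
import math

R = 8.314
N, T = 1.0, 300.0
V1, V2 = 0.010, 0.025

work_from_F = N * R * T * math.log(V2 / V1)  # -(F2 - F1) at constant T

# trapezoidal integration of p(V) dV
steps = 10000
dV = (V2 - V1) / steps
work_integral = sum(
    0.5 * (N * R * T / (V1 + i * dV) + N * R * T / (V1 + (i + 1) * dV)) * dV
    for i in range(steps)
)
print(work_from_F, work_integral)
```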

2.12 Maxwell's Relations

Maxwell [18] derived a set of relationships, based on the properties of differentiation, that relate the various intensive and extensive properties. The relations are particularly helpful in utilizing experimental tabular data. Consider the general function of two variables

$$f = f(x, y) \tag{2.81}$$

The total differential is

$$df = \left(\frac{\partial f}{\partial x}\right)_y dx + \left(\frac{\partial f}{\partial y}\right)_x dy = M\,dx + N\,dy \tag{2.82}$$

Now

$$\left(\frac{\partial M}{\partial y}\right)_x = \frac{\partial^2 f}{\partial x\,\partial y} \quad\text{and}\quad \left(\frac{\partial N}{\partial x}\right)_y = \frac{\partial^2 f}{\partial y\,\partial x} \tag{2.83}$$

but

$$\frac{\partial^2 f}{\partial x\,\partial y} = \frac{\partial^2 f}{\partial y\,\partial x} \tag{2.84}$$

so that

$$\left(\frac{\partial M}{\partial y}\right)_x = \left(\frac{\partial N}{\partial x}\right)_y \tag{2.85}$$

Applying this result to the energy of a simple compressible substance, we note that $M$ and $N$ correspond to $T$ and $-p$ in the energy representation. In other words,

$$U = U(S, V) \tag{2.86}$$

$$dU = T\,dS - p\,dV \tag{2.87}$$

Therefore,

$$\left(\frac{\partial T}{\partial V}\right)_S = \left(\frac{\partial(-p)}{\partial S}\right)_V \tag{2.88}$$

Likewise, from $dF = -S\,dT - p\,dV$,

$$\left(\frac{\partial(-S)}{\partial V}\right)_T = \left(\frac{\partial(-p)}{\partial T}\right)_V \tag{2.89}$$

from $dH = T\,dS + V\,dp$,

$$\left(\frac{\partial T}{\partial p}\right)_S = \left(\frac{\partial V}{\partial S}\right)_p \tag{2.90}$$

and from $dG = -S\,dT + V\,dp$,

$$\left(\frac{\partial(-S)}{\partial p}\right)_T = \left(\frac{\partial V}{\partial T}\right)_p \tag{2.91}$$
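Maxwell's relations are easy to spot-check numerically. The sketch below (reference constants are hypothetical) verifies the per-mole form of Eq. (2.89), $(\partial s/\partial v)_T = (\partial p/\partial T)_v$, for the simple ideal gas by central differences.

```python
# Sketch (hypothetical constants): finite-difference check of the
# Maxwell relation (ds/dv)_T = (dp/dT)_v for the simple ideal gas,
# the per-mole form of Eq. (2.89).
import math

R, c = 8.314, 1.5
s0, u0, v0 = 50.0, 2500.0, 0.01  # assumed reference constants

def s(T, v):
    # molar entropy with u = cRT substituted into the fundamental relation
    return s0 + c * R * math.log(c * R * T / u0) + R * math.log(v / v0)

def p(T, v):
    return R * T / v

T0, vv = 350.0, 0.02
h = 1e-6
ds_dv = (s(T0, vv * (1 + h)) - s(T0, vv * (1 - h))) / (2 * vv * h)
dp_dT = (p(T0 * (1 + h), vv) - p(T0 * (1 - h), vv)) / (2 * T0 * h)
print(ds_dv, dp_dT)
```

Both derivatives come out equal to $R/v$, as the relation requires.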

2.13 Building Property Relations

In practice, we seek properties as functions of the independent variables, not just the fundamental relation. We can use the various transforms discussed above and Maxwell's relations to assist us in doing so. Let us start by considering the practical case of using laboratory data to construct relations.

In the laboratory, only certain of the extensive and intensive properties are amenable to direct measurement. These include temperature, pressure, volume, mole number, and force. For a simple compressible substance, the relationship between the first four provides the pvT relationship. By measuring the force necessary to change the volume, relative energy changes can be measured, which in turn allow determination of the specific heats. As we shall see, given the pvT relation and the


specific heats, the fundamental relation can be determined. Once the fundamental relation is known in one representation, it can be transformed to any other representation.

Consider the differential form of the Massieu function for a simple compressible substance:

$$dS[1/T, V, N] = -U\,d\frac{1}{T} + \frac{p}{T}\,dV - \frac{\mu}{T}\,dN \tag{2.92}$$

If we know the pVTN relationship, then the second term is determined, and it remains to determine the energy and chemical potential. The normalized form of the energy can be written as

$$u = u(T, v) \tag{2.93}$$

The total derivative for this expression is

$$du = \left(\frac{\partial u}{\partial T}\right)_v dT + \left(\frac{\partial u}{\partial v}\right)_T dv \tag{2.94}$$

The first term is the specific heat at constant volume,

$$c_v = \left(\frac{\partial u}{\partial T}\right)_v \tag{2.95}$$

which can be measured. It remains to interpret the second term. Consider the normalized form of the fundamental relation in the energy representation,

$$du = T\,ds - p\,dv \tag{2.96}$$

Express entropy as a function of temperature and specific volume, $s = s(T, v)$; then

$$ds = \left(\frac{\partial s}{\partial T}\right)_v dT + \left(\frac{\partial s}{\partial v}\right)_T dv \tag{2.97}$$

Inserting this into Eq. (2.96),

$$du = T\left(\frac{\partial s}{\partial T}\right)_v dT + \left[T\left(\frac{\partial s}{\partial v}\right)_T - p\right] dv \tag{2.98}$$

The coefficient of the first term must be the specific heat at constant volume, so that

$$c_v = T\left(\frac{\partial s}{\partial T}\right)_v \tag{2.99}$$

From Maxwell's relations,

$$\left(\frac{\partial s}{\partial v}\right)_T = \left(\frac{\partial p}{\partial T}\right)_v \tag{2.100}$$

so that

$$du = c_v\,dT + \left[T\left(\frac{\partial p}{\partial T}\right)_v - p\right] dv \tag{2.101}$$

This determines the first term in the Massieu function.
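Equation (2.101) identifies $(\partial u/\partial v)_T$ with $T(\partial p/\partial T)_v - p$. For a van der Waals gas with $p = RT/(v-b) - a/v^2$ this coefficient reduces analytically to $a/v^2$; the sketch below (hypothetical constants) checks this by finite differences.

```python
# Sketch (hypothetical constants): for a van der Waals gas with
# p = RT/(v - b) - a/v^2, the coefficient of dv in Eq. (2.101),
# T (dp/dT)_v - p, reduces analytically to a/v^2.
R = 8.314
a, b = 0.14, 3.0e-5

def p(T, v):
    return R * T / (v - b) - a / v**2

T, v, h = 400.0, 1.0e-3, 1e-6
dp_dT = (p(T * (1 + h), v) - p(T * (1 - h), v)) / (2 * T * h)
coeff = T * dp_dT - p(T, v)
print(coeff, a / v**2)
```

For an ideal gas ($a = b = 0$) the coefficient vanishes, recovering the familiar result that the ideal-gas internal energy depends on temperature alone.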


The chemical potential can be obtained from the Gibbs–Duhem relation, which in the entropy representation is

$$d\mu = T\left(u\,d\frac{1}{T} + v\,d\frac{p}{T}\right) \tag{2.102}$$

Hence, by supplying $c_v$ and the pvT relation we have recovered the fundamental relation in a Massieu transformation form.

By invoking Maxwell's relations a number of other useful relations can be obtained, including

$$c_p - c_v = T\left(\frac{\partial v}{\partial T}\right)_p \left(\frac{\partial p}{\partial T}\right)_v \tag{2.103}$$

$$c_p - c_v = -T\left(\frac{\partial v}{\partial T}\right)_p^2 \left(\frac{\partial p}{\partial v}\right)_T \tag{2.104}$$

It is common to define the coefficients of thermal expansion and isothermal compressibility as

$$\alpha \equiv \frac{1}{v}\left(\frac{\partial v}{\partial T}\right)_p \tag{2.105}$$

and

$$\kappa_T \equiv -\frac{1}{v}\left(\frac{\partial v}{\partial p}\right)_T \tag{2.106}$$

Then

$$c_p - c_v = \frac{vT\alpha^2}{\kappa_T} \tag{2.107}$$

The Joule–Thomson coefficient [19] is defined as

$$\mu_{JT} \equiv \left(\frac{\partial T}{\partial p}\right)_h \tag{2.108}$$

and describes how the temperature changes in adiabatic throttling:

$$\mu_{JT}\ \begin{cases} < 0 & \text{temperature increases} \\ = 0 & \text{temperature constant} \\ > 0 & \text{temperature decreases} \end{cases}$$

This is illustrated in Fig. 2.7 for several substances.

Determining the fundamental relation from first principles requires statistical methods, as discussed in the Introduction. That is the subject of the following chapters.
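Equation (2.107) is easily confirmed for the ideal gas, where $\alpha = 1/T$ and $\kappa_T = 1/p$, giving $c_p - c_v = pv/T = R$. The sketch below evaluates $\alpha$ and $\kappa_T$ by finite differences on the ideal-gas $v(T, p)$ relation.

```python
# Sketch: finite-difference evaluation of alpha, kappa_T, and
# c_p - c_v = v T alpha^2 / kappa_T for an ideal gas; the result should
# equal R (for the ideal gas alpha = 1/T and kappa_T = 1/p).
R = 8.314

def v_of(T, p):
    return R * T / p  # molar volume of an ideal gas

T, p, h = 300.0, 1.0e5, 1e-6
v = v_of(T, p)
alpha = (v_of(T * (1 + h), p) - v_of(T * (1 - h), p)) / (2 * T * h) / v
kappa_T = -(v_of(T, p * (1 + h)) - v_of(T, p * (1 - h))) / (2 * p * h) / v
cp_minus_cv = v * T * alpha**2 / kappa_T
print(cp_minus_cv)
```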

2.14 Sources for Thermodynamic Properties

There are numerous sources for thermodynamic properties. Data are contained in textbooks and in various handbooks. One can also find numerous open-source and commercial property programs. Developing and compiling thermodynamic data is part of NIST's mission. Their program REFPROP [20] (https://www.nist.gov/srd/refprop) is a good compilation of data for a variety of substances, particularly refrigerants.

Figure 2.7 Joule–Thomson coefficient for several substances (data from Perry's Chemical Engineers' Handbook [8]).

2.15 Summary

2.15.1 Postulates and the Fundamental Relation

We started by outlining the postulates of classical thermodynamics:

I. There exist certain states (called equilibrium states) of a simple system that, macroscopically, are characterized completely by the internal energy $U$, the volume $V$, and the mole numbers $N_1, N_2, \ldots, N_r$ of the chemical components, where $r$ is the number of chemical components.

II. There exists a function called the entropy, $S$, of the extensive parameters of any composite system, defined for all equilibrium states and having the following property: The values assumed by the extensive parameters are those that maximize the entropy over the manifold of constrained equilibrium states.

III. The entropy of a composite system is additive over the constituent subsystems. The entropy is continuous and differentiable and is a monotonically increasing function of energy.

IV. The entropy of any system vanishes in the state for which

$$\left(\frac{\partial U}{\partial S}\right)_{V,N_j} = 0 \tag{2.109}$$

From these postulates arises the concept of the fundamental relation

$$S = S(U, V, N_i) \tag{2.110}$$

Examples of fundamental relations are those for a van der Waals substance and an ideal gas:

$$s = s_0 + R\ln\left[(v - b)(u + a/v)^c\right] \tag{2.111}$$

and

$$s = s_0 + cR\ln\frac{u}{u_0} + R\ln\frac{v}{v_0} \tag{2.112}$$

2.15.2 Equilibrium and Intensive Parameters

We then derived the thermodynamic definitions of temperature

$$T \equiv \left(\frac{\partial U}{\partial S}\right)_{V,N_i} = \left[\left(\frac{\partial S}{\partial U}\right)_{V,N_i}\right]^{-1} \tag{2.113}$$

pressure

$$\frac{p}{T} \equiv \left(\frac{\partial S}{\partial V}\right)_{U,N_i} \tag{2.114}$$

and chemical potential

$$\frac{\mu_1}{T} \equiv -\left(\frac{\partial S}{\partial N_1}\right)_{U,V,N_{i\neq 1}} \tag{2.115}$$

2.15.3 Representation and Equations of State

Representation refers to the idea that once the fundamental relation is known, then by suitable transformation dependent and independent variables can be exchanged for convenience. For example, the third postulate allows us to invert energy and entropy in Eq. (2.110), resulting in

$$U = U(S, V, N_i) \tag{2.116}$$

We say that this is the energy representation and Eq. (2.110) is in the entropy representation. Both equations can be differentiated and take on the forms

$$dS = \frac{1}{T}\,dU + \frac{p}{T}\,dV - \sum_{i=1}^{r}\frac{\mu_i}{T}\,dN_i \tag{2.117}$$

and


$$dU = T\,dS - p\,dV + \sum_{i=1}^{r}\mu_i\,dN_i \tag{2.118}$$

In both representations the coefficients in the differential form are intensive parameters that can be calculated as functions of the independent properties from the fundamental relation. Such relationships are generally called equations of state. In the energy representation, for example, the equations of state take on the functional forms

$$T = T(S, V, N_i), \qquad p = p(S, V, N_i), \qquad \mu_i = \mu_i(S, V, N_i) \tag{2.119}$$

2.15.4 The Euler Equation and the Gibbs–Duhem Relation

Two useful relations we derived are the Euler relation

$$U = TS - pV + \sum_{i=1}^{r}\mu_i N_i \tag{2.120}$$

and the Gibbs–Duhem relation

$$S\,dT - V\,dp + \sum_{i=1}^{r}N_i\,d\mu_i = 0 \tag{2.121}$$

2.15.5 Alternative Representations

By performing Legendre transformations, we showed that we could exchange dependent and independent variables in either the entropy or the energy representation. This led to the Massieu functions and the thermodynamic potentials, respectively. The transforms are given in Tables 2.2 and 2.3.

2.15.6 Maxwell's Relations

A useful set of relations can be derived by noting that the cross derivatives of the coefficients of a differential relation are equal. This leads to the following four relationships, known as Maxwell's relations:

$$\left(\frac{\partial T}{\partial V}\right)_S = \left(\frac{\partial(-p)}{\partial S}\right)_V \tag{2.122}$$

$$\left(\frac{\partial(-S)}{\partial V}\right)_T = \left(\frac{\partial(-p)}{\partial T}\right)_V \tag{2.123}$$

$$\left(\frac{\partial T}{\partial p}\right)_S = \left(\frac{\partial V}{\partial S}\right)_p \tag{2.124}$$

$$\left(\frac{\partial(-S)}{\partial p}\right)_T = \left(\frac{\partial V}{\partial T}\right)_p \tag{2.125}$$

2.15.7 Property Relations

By utilizing the various relationships we have derived, we can also derive various practical property relations. Typically we require two types of relations for practical problem solving: a pvT relation and a caloric equation of state. The pvT relation is obtained either from experiment, or by calculation from first principles using the methods of statistical thermodynamics. The caloric equation of state is most easily obtained from a relation like Eq. (2.126):

$$du = c_v\,dT + \left[T\left(\frac{\partial p}{\partial T}\right)_v - p\right] dv \tag{2.126}$$

where the specific heat is either measured or calculated. The chemical potential can likewise be found using

$$d\mu = T\left(u\,d\frac{1}{T} + v\,d\frac{p}{T}\right) \tag{2.127}$$

2.16 Problems

2.1

Fundamental equations can come in various functional forms. Consider the following equations and check to see which ones satisfy Postulates II, III, and IV. For each, functionally sketch the relation between S and U. (In cases where there are fractional exponents, take the positive root.)

(a) $S \sim (NVU)^{2/3}$

(b) $S \sim (NU/V)^{1/3}$

(c) $U \sim \dfrac{S^3}{V^2}\,e^{S/NR}$

(d) $U \sim \dfrac{S^2}{V}\left(1 + \dfrac{S}{NR}\right)e^{-S/NR}$

2.2

Two systems of monatomic ideal gases are separated by a diathermal wall. In system A there are 2 moles initially at 175 K, in system B there are 3 moles initially at 400 K. Find UA , UB as a function of R and the common temperature after equilibrium is reached.

2.3

Two systems have these equations of state:

$$\frac{1}{T_1} = \frac{3}{2}R\,\frac{N_1}{U_1} \quad\text{and}\quad \frac{1}{T_2} = \frac{5}{2}R\,\frac{N_2}{U_2}$$


where R is the gas constant per mole. The mole numbers are N1 = 3 and N2 = 5. The initial temperatures are T1 = 175 K and T2 = 400 K. What are the values of U1 and U2 after equilibrium is established?

2.4

Two systems have the following equations of state:

$$\frac{1}{T_1} = \frac{3}{2}R\,\frac{N_1}{U_1}, \qquad \frac{P_1}{T_1} = R\,\frac{N_1}{V_1}$$

and

$$\frac{1}{T_2} = \frac{5}{2}R\,\frac{N_2}{U_2}, \qquad \frac{P_2}{T_2} = R\,\frac{N_2}{V_2}$$

The two systems are contained in a closed cylinder, separated by a fixed, adiabatic, and impermeable piston. N1 = 2.0 and N2 = 1.5 moles. The initial temperatures are T1 = 175 K and T2 = 400 K. The total volume is 0.025 m³. The piston is allowed to move and heat transfer is allowed across the piston. What are the final energies, volumes, temperatures, and pressures once equilibrium is reached?

2.5

Find the three equations of state for a system with the fundamental relation

$$U = C\,\frac{S^4}{NV^2}$$

Show that the equations of state are homogeneous zero order; that is, T, P, and μ are intensive parameters. (C is a positive constant.)

2.6

A system obeys the following two equations of state:

$$T = \frac{2As}{v^2}$$

and

$$P = \frac{2As^2}{v^3}$$

and

where A is a constant. Find μ as a function of s and v, and then find the fundamental equation. 2.7

Find the three equations of state for the simple ideal gas (see Table 2.1) and show that these equations of state satisfy the Euler relation.

2.8

Find the relation between the volume and the temperature of an ideal van der Waals fluid in an isentropic expansion.

2.9

Find the fundamental relation for a monatomic gas in the Helmholtz, enthalpy, and Gibbs representations. Start with Eq. (2.15) and assume a monatomic gas with cv = 3/2.

2.10 A system obeys the fundamental relation

$$(s - s_0)^4 = Avu^2$$

Find the Gibbs potential G(T, P, N).

2.11 Prove that cv must vanish for any substance as T goes to zero.

2.12 Find the Joule–Thomson coefficient for a gas that obeys

$$V = \frac{RT}{P} + aT^2 \quad\text{and}\quad c_p = A + BT + CP$$

where a, A, B, C, and R are constants.

2.13 What is temperature?

2.14 What is pressure?

2.15 What is chemical potential?

2.16 What is entropy?

3

Microscopic Thermodynamics

Our next great task is determining the form of the fundamental relation based on the microscopic nature of matter. As we shall see, that task reduces to relating the macroscopic independent properties to a distribution of microscopic quantum states that characterizes the equilibrium macroscopic state. In the next section, we provide background and explore the plausibility of the postulates. We then present two postulates that prescribe how we are to proceed. From then on we explore the consequences of the postulates and provide a relationship between the microscopic and macroscopic scales.

3.1 The Role of Statistics in Thermodynamics

In the Introduction, we discussed some aspects of the problem. We know that matter is composed of atoms and molecules. The size of an atom is determined by the range of electrostatic forces due to the positively charged nucleus and negatively charged electrons, and is approximately 5 Å, or 5 × 10⁻¹⁰ m. There are about 2 × 10⁹ atoms per lineal meter in a solid, resulting in about 8 × 10²⁷ atoms/m³. In a gas at standard conditions, there are about 10²⁴ atoms/m³. Thus, macroscopic systems are composed of a very, very large number of atoms or molecules. Even if we ignore the fact that the behavior of individual atoms and molecules is governed by quantum mechanics, it would be impossible to simultaneously solve the enormous number of equations of motion involved to describe the evolution of a macroscopic system to equilibrium. Clearly, an alternative approach is required; fortunately, nature has been kind in devising the laws of averaging in ways that allow for great simplification.

Consider the following thought experiment. Suppose N atoms are placed in a box of fixed volume V, which is then closed and adiabatic. N is chosen small enough so that the system is always in the gas phase. The kinetic energy of each particle is the same and nonzero, and when the particles are put in the box they are all placed in one corner. Only kinetic energy is considered and, for simplicity, we will assume that the trajectory of each particle is described by Newton's Law. How will this system evolve?

We know by conservation of energy that the average kinetic energy of the particles must remain the same no matter how the overall dynamic state of the system evolves. However, because the particles have kinetic energy they will move within the volume, beginning to distribute themselves around the available space. They must remain within the volume, so they might collide with the wall. They might also collide with each other.


Because the number of particles is very large, it would not take very much time before it would be practically impossible to reconstruct the initial placement of the particles. (For mathematicians, this leads to a chaotic state.) Finally, after some period of time that is long compared to diffusional and convective times, we might expect that the particles would be roughly evenly distributed throughout the volume.

In fact, the above description is exactly what would happen. We call the final state the equilibrium state. Were we to measure the local number and energy densities (looking at a volume that is small compared to the dimensions of the box but large compared to the average molecular spacing), we would find them to be uniform throughout the volume and in time, even though the particles continue to move, colliding with each other and the walls. We would also find the positions and velocities of individual particles to be highly randomized. (We can actually make such a measurement using laser scattering to confirm this behavior.)

We know from quantum mechanics that, in the absence of applied forces, the dynamic state of an atom or molecule will relax to a fixed quantum state. The atom or molecule arrives at that state as a result of interactions with other particles. The state is constantly changing, because interactions occur at a very high frequency, 10⁹ or 10¹⁰ per second at room temperature and pressure. Indeed, it is through intermolecular interactions that equilibrium is reached. At any instant in time, the complete specification of the quantum state of every atom or molecule describes the quantum state of the entire macroscopic system. Thus, the system quantum state is continually changing as well. However, the system is subject to the macroscopic constraints: fixed particle numbers and energy for a closed system, for example. Therefore, the system quantum state is constrained.
Suppose we knew the system quantum state, meaning that we knew exactly what quantum state each atom or molecule is in. In that case we could calculate the system value of any mechanical property by summing the value of that property for each particle. However, we have painted the following picture. At equilibrium, it appears at the macroscopic level that the local mechanical properties of the system are constant. When a macroscopic constraint is changed, the system evolves to a new equilibrium state with new mechanical properties. However, even when a system is in equilibrium, the system quantum state is constantly changing. Therefore, what system quantum state could we use to characterize the equilibrium system?

The number of system quantum states that can satisfy a given set of macroscopic constraints is enormous, approximately e^N, where N is the number of particles (see Appendix E in Knuth [5]). Some of these physically allowed states cannot possibly be equilibrium states. For example, in our thought experiment above, the initial state with all the particles in one corner of the box satisfies the macroscopic constraints, yet we clearly will not actually observe that state for more than a few microseconds. Equilibrium always occurs when the constraints remain fixed for a sufficient period of time. Therefore, most system quantum states must look very similar (i.e. states that result in equilibrium behavior must be overwhelmingly more probable than other states).

We might be tempted to make the following argument. The system quantum state is constantly changing. No individual allowed system quantum state is physically preferred over any other state. (This is called equal a priori probability.) If we were to wait long


enough, we would see every allowed state. Therefore, equilibrium is really the average over all the allowed states. In fact, this argument is incorrect. While every state that satisfies the macroscopic constraints is physically allowable, dynamics precludes a system from evolving through all its allowed states in any reasonable time. For example, the collision frequency in a gas at standard conditions is approximately 10^10 per second per molecule, or about 10^34 per second in a 1-m^3 system. At that rate it would take about e^(10^24) sec to access every allowed state once. This time is longer than the age of the universe! This argument reaffirms the conclusion from above that allowed quantum states which look like equilibrium must be overwhelmingly more probable than states that do not.

If equilibrium quantum states are so much more probable than non-equilibrium ones, then one way to proceed is to look at the relative probability of each allowed system quantum state, and characterize equilibrium with the most probable one. Indeed, that is the approach we will follow. The only remaining question is how to find that state. There are two historical approaches. The first makes the equal a priori assumption discussed above. It involves watching a single system and finding the most probable of all the allowed states. The difficulty with this approach is that it has no physical analog, it being physically impossible for all allowed states to appear in a single system.

The second approach, which we will take, is that followed by Gibbs [4]. It involves the following argument. Imagine a very large number of identical macroscopic systems. Gibbs called this collection an ensemble, and each individual system a member of the ensemble (Fig. 3.1). If the number of ensemble members is large enough (which it can be, since we are imagining the ensemble), then at any instant in time all allowed quantum states will appear. There are two major advantages of this concept.
The first, and most obvious, is that we need not make the equal a priori approximation. Rather than observe over impossibly large times, we take a snapshot at a fixed time. The second advantage is that we can construct the ensemble to mimic real systems. That is, we can include a reservoir. The only requirement is that the reservoir be very large, so that all appropriate intensive properties are held constant for every ensemble member. Of course, we will have to construct the ensemble so that all members experience the same conditions. All constraints and allowed interactions with the reservoir are the same. We will also treat the entire ensemble as closed.

Figure 3.1 An ensemble of ensemble members, each in contact with a reservoir.


Figure 3.2 Equilibrium rotational population distribution for CO.

Suppose we are able to find the most probable system quantum state for a given set of independent properties. How would we characterize the state and what could we do with it? Recall that the quantum state of a macroscopic system is specified by the distribution of microscopic quantum states. By distribution, we mean a normalized count of how many particles currently exist in each possible quantum state. For example, Fig. 3.2 shows a distribution of rotational energy for a diatomic gas at standard conditions, plotted as a function of a rotational energy index, J. From the plot we can find the fraction of molecules that have a given rotational energy. There will be similar distributions for translational, vibrational, and electronic energy. Suppose we index all possible system quantum states with the index j. Then, if n_j is the number of ensemble members that exist in system quantum state j, and n is the number of ensemble members, we can calculate the mechanical parameters U, V, and N_i:

$$\langle A \rangle = \frac{1}{n} \sum_j n_j A_j \qquad (3.1)$$

The brackets indicate an average or expectation value, and

j = ensemble member quantum state index
A_j = U_j, V_j, or N_ij
n_j = number of ensemble members in quantum state j
n = total number of ensemble members

Note that the set of n_j's is the quantum state distribution for the ensemble. Many different sets are possible; our task is to find the most probable.
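Equation (3.1) is just a weighted average. A minimal numerical sketch, using a small made-up ensemble (the occupation numbers and property values here are illustrative, not from the text):

```python
# Ensemble average <A> = (1/n) * sum_j n_j * A_j  (Eq. 3.1)
# n_j: number of ensemble members found in quantum state j
# A_j: value of the mechanical property in state j

n_j = [500, 300, 150, 50]    # hypothetical distribution over 4 states
A_j = [1.0, 2.0, 3.0, 4.0]   # hypothetical property values

n = sum(n_j)                 # total number of ensemble members
A_avg = sum(nj * Aj for nj, Aj in zip(n_j, A_j)) / n
print(A_avg)                 # → 1.75
```

The same pattern applies whether A_j is an energy U_j, a volume V_j, or a particle number N_ij.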


3.2 The Postulates of Microscopic Thermodynamics

The postulates we will invoke were first stated by Gibbs in 1902. In the foreword of his book he stated, "The only error into which one can fall, is the want of agreement between the premises and the conclusions, and this, with care, one may hope, in the main, to avoid." It turns out that he was right, and his formulation forms the backbone of modern statistical thermodynamics. The postulates are:

I. The macroscopic value of a mechanical–thermodynamic property of a system in equilibrium is characterized by the expected value computed from the most probable distribution of ensemble members among the allowed ensemble member quantum states.

II. The most probable distribution of ensemble members among ensemble member quantum states is that distribution for which the number of possible quantum states of the ensemble and reservoir is a maximum.

Postulate I states that we can calculate ⟨A⟩ using

$$\langle A \rangle = \frac{1}{n} \sum_j n_j^* A_j = \sum_j p_j A_j \qquad (3.2)$$

where p_j = n_j^*/n is the normalized probability distribution and * indicates that we are using the most probable distribution of n_j's or p_j's.

Postulate II describes how to determine the most probable distribution. To apply the postulate we must obtain an expression for the number of allowed quantum states as a function of the n_j, and then find the n_j that maximize this function. For an ensemble involving a reservoir, the members are in some quantum state and the reservoir in another. If Ω is the number of states available to the members, and Ω_R the number available to the reservoir, then the total number available to the entire ensemble including the reservoir is

$$\Omega_{TOT} = \Omega\,\Omega_R \qquad (3.3)$$

This is the counting rule for two independent random processes. It will turn out that we don't need Ω_R. We will, however, need an expression for Ω in terms of the n_j. The question is: how many ways (permutations) can n members in a given distribution (combination) of quantum states be arranged? From basic probability mathematics,

$$\Omega = \frac{n!}{\prod_j n_j!} \qquad (3.4)$$

Thus,

$$\Omega_{TOT} = \Omega_R \frac{n!}{\prod_j n_j!} \qquad (3.5)$$


We wish to maximize this function, subject to any appropriate constraints. It will turn out to be easier to maximize the natural log of the function, applying Stirling's approximation [21] to simplify the algebra. Stirling's approximation is (you can test this yourself)

$$\ln(x!) \simeq x \ln x - x, \quad x \gg 1 \qquad (3.6)$$

Taking the logarithm of Ω_TOT and applying the approximation, we obtain

$$\ln \Omega_{TOT} = \ln \Omega_R + n \ln n - \sum_j n_j \ln n_j \qquad (3.7)$$
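Stirling's approximation is easy to check numerically. A quick sketch using the standard library (`math.lgamma(x + 1)` gives ln(x!) exactly, avoiding overflow of the factorial itself):

```python
import math

def stirling_rel_error(x: int) -> float:
    """Relative error of the approximation ln(x!) ≈ x ln x − x."""
    exact = math.lgamma(x + 1)          # ln(x!) without computing x!
    approx = x * math.log(x) - x
    return abs(exact - approx) / exact

for x in (10, 100, 1000):
    print(x, stirling_rel_error(x))     # error shrinks as x grows
```

Even at x = 100 the relative error is below 1%, and thermodynamic applications involve x of order 10^20 or more, where the approximation is essentially exact.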

3.3 The Partition Function and its Alternative Formulations

To proceed further, we need to make a decision about what type of ensemble to use. The possible choices are shown in Table 3.1. Note the correspondence with the macroscopic Massieu functions. We will start with the grand canonical representation. Thus, only the volume of the ensemble members is fixed, and we will need constraints on energy and number of particles. We also need to recognize that there is a constraint on the sum of all the n_j, which must equal n. Thus, the constraints become

$$n = \sum_j n_j \qquad (3.8)$$

$$U_{TOT} = U_R + \sum_j n_j U_j \qquad (3.9)$$

$$N_{i,TOT} = N_{iR} + \sum_j n_j N_{ij} \qquad (3.10)$$

where U_j and N_ij are, respectively, the ensemble member energies and numbers of i-type particles associated with ensemble member quantum state j. (These will eventually be determined using quantum mechanics.) We will use Lagrange's method of undetermined multipliers (see Courant and Hilbert [22]) to maximize Ω_TOT subject to the constraints. We start by taking the total derivative of ln Ω_TOT and setting it to zero:

Table 3.1 Types of ensembles

Gibbs' name        Independent parameters    Fundamental relation
Microcanonical     U, V, N_i                 S(U, V, N_i)
Canonical          1/T, V, N_i               S[1/T]
                   U, V, −μ_i/T              S[−μ_i/T]
Grand canonical    1/T, V, −μ_i/T            S[1/T, V, −μ_i/T]
                   U, p/T, N_i               S[U, p/T, N_i]
                   1/T, p/T, N_i             S[1/T, p/T, N_i]
                   U, p/T, −μ_i/T            S[U, p/T, −μ_i/T]
                   1/T, p/T, −μ_i/T          S[1/T, p/T, −μ_i/T]


$$d \ln \Omega_{TOT} = \frac{\partial \ln \Omega_R}{\partial U_R}\, dU_R + \sum_i \frac{\partial \ln \Omega_R}{\partial N_{iR}}\, dN_{iR} - \sum_j (1 + \ln n_j)\, dn_j = 0 \qquad (3.11)$$

Since n, U_TOT, and N_i,TOT are constants, differentiation of the constraints gives

$$0 = \sum_j dn_j \qquad (3.12)$$

$$0 = dU_R + \sum_j U_j\, dn_j \qquad (3.13)$$

$$0 = dN_{iR} + \sum_j N_{ij}\, dn_j \qquad (3.14)$$

Since these differentials are each equal to zero, there is no reason why each cannot be multiplied by a constant factor, nor why they can't be added to or subtracted from the expression for d ln Ω_TOT. Historically, the multipliers chosen are α − 1, β, and γ_i, and the constraint relations are subtracted from Eq. (3.11). Doing so:

$$\left(\beta - \frac{\partial \ln \Omega_R}{\partial U_R}\right) dU_R + \sum_i \left(\gamma_i - \frac{\partial \ln \Omega_R}{\partial N_{iR}}\right) dN_{iR} + \sum_j \left(\ln n_j + \alpha + \beta U_j + \sum_i \gamma_i N_{ij}\right) dn_j = 0 \qquad (3.15)$$

Since U_R, N_iR, and the n_j are independent variables, the coefficients of their differentials must each be zero. Thus,

$$\beta = \left(\frac{\partial \ln \Omega_R}{\partial U_R}\right)_{N_{iR},\,V} \qquad (3.16)$$

$$\gamma_i = \left(\frac{\partial \ln \Omega_R}{\partial N_{iR}}\right)_{U_R,\,N_{kR,\,k \neq i},\,V} \qquad (3.17)$$

and

$$\alpha = -\ln n_j - \beta U_j - \sum_i \gamma_i N_{ij} \qquad (3.18)$$

The most probable distribution, the n_j^*'s, can be found from the equation for α:

$$n_j^* = \exp\left(-\alpha - \beta U_j - \sum_i \gamma_i N_{ij}\right) \qquad (3.19)$$

If we sum over the n_j^*, then

$$n = e^{-\alpha} \sum_j e^{-\beta U_j - \sum_i \gamma_i N_{ij}} \qquad (3.20)$$

or

$$e^{\alpha} = \frac{1}{n} \sum_j e^{-\beta U_j - \sum_i \gamma_i N_{ij}} \qquad (3.21)$$

and thus

$$p_j = \frac{n_j^*}{n} = \frac{e^{-\beta U_j - \sum_i \gamma_i N_{ij}}}{\sum_j e^{-\beta U_j - \sum_i \gamma_i N_{ij}}} \qquad (3.22)$$

The denominator of this expression plays a very important role in thermodynamics. It is called the partition function, in this case the grand partition function:

$$Q_G(\beta, \gamma_i) \equiv \sum_j e^{-\beta U_j - \sum_i \gamma_i N_{ij}} \qquad (3.23)$$

The physical significance of the partition function is that it determines how ensemble members are distributed, or partitioned, over the allowed quantum states. Also, note that α has dropped out of the expressions for both p_j and Q_G. The Lagrange multipliers β and γ_i are associated with U_j and N_ij, respectively. It would not be too surprising if they turned out to be related to 1/T and −μ_i/T. However, the product βU_j must be unitless, as must γ_i N_ij. This can be accomplished by dividing by Boltzmann's constant, k, the gas constant per molecule, so that

$$\beta = \frac{1}{kT} \qquad (3.24)$$

$$\gamma_i = \frac{-\mu_i}{kT} \qquad (3.25)$$

By carrying out an analysis similar to that above for all the possible representations, we would find that in each case the probability distribution can be represented in the form

$$p_j = \frac{e^{-f_j}}{\sum_j e^{-f_j}} \qquad (3.26)$$
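A concrete numerical sketch of Eq. (3.26) in the canonical case, where f_j = U_j/kT; the member energies here are made up for illustration:

```python
import math

kT = 1.0                                  # work in units where kT = 1
U = [0.0, 0.5, 1.0, 2.0]                  # hypothetical member energies U_j

Q = sum(math.exp(-Uj / kT) for Uj in U)   # partition function (the denominator)
p = [math.exp(-Uj / kT) / Q for Uj in U]  # normalized probabilities p_j

print(sum(p))        # sums to 1 (up to roundoff): the p_j are normalized
print(p[0] > p[-1])  # → True: lower-energy states are more probable
```

Dividing by Q is exactly what "partitions" unit probability over the allowed states.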

where the form of f_j depends on the representation. The denominator is, in each case, identified as the partition function. Furthermore, one can show that the natural logarithm of the partition function times Boltzmann's constant is the fundamental relation in the form of entropy or its associated Massieu function:

$$S[\;\cdot\;] = k \ln Q \qquad (3.27)$$

This result is tabulated in Table 3.2.

3.4 Thermodynamic Properties

It remains to interpret the multipliers and relate what we have learned to the fundamental relation. To do so, assume that we are working with the canonical representation, so that

$$Q = \sum_j \exp(-\beta U_j) \qquad (3.28)$$


Table 3.2 Types of partition functions

Gibbs' name        Independent parameters    Fundamental relation       Partition function
Microcanonical     U, V, N_i                 S(U, V, N_i)               Q = Ω
Canonical          1/T, V, N_i               S[1/T]                     Q = Σ_j exp(−U_j/kT)
                   U, V, −μ_i/T              S[−μ_i/T]                  Q = Σ_j exp(Σ_i μ_i N_ij/kT)
Grand canonical    1/T, V, −μ_i/T            S[1/T, V, −μ_i/T]          Q = Σ_j exp(−U_j/kT + Σ_i μ_i N_ij/kT)
                   U, p/T, N_i               S[U, p/T, N_i]             Q = Σ_j exp(−pV_j/kT)
                   1/T, p/T, N_i             S[1/T, p/T, N_i]           Q = Σ_j exp(−U_j/kT − pV_j/kT)
                   U, p/T, −μ_i/T            S[U, p/T, −μ_i/T]          Q = Σ_j exp(−pV_j/kT + Σ_i μ_i N_ij/kT)
                   1/T, p/T, −μ_i/T          S[1/T, p/T, −μ_i/T]        Q = Σ_j exp(−U_j/kT − pV_j/kT + Σ_i μ_i N_ij/kT)

and

$$p_j = \frac{e^{-\beta U_j}}{Q} \qquad (3.29)$$

Apply the first postulate to calculate the energy,

$$\langle U \rangle = \sum_j p_j U_j \qquad (3.30)$$

Taking the derivative of ⟨U⟩,

$$d\langle U \rangle = \sum_j U_j\, dp_j + \sum_j p_j\, dU_j \qquad (3.31)$$

The first term can be treated by noting that, using the definition of p_j, we can express U_j as

$$U_j = -\frac{1}{\beta}\left(\ln p_j + \ln Q\right) \qquad (3.32)$$

Also, since Σ_j dp_j = 0,

$$\sum_j \ln p_j\, dp_j = d\left(\sum_j p_j \ln p_j\right) \qquad (3.33)$$

Thus,

$$\sum_j U_j\, dp_j = -\frac{1}{\beta}\, d\left(\sum_j p_j \ln p_j\right) \qquad (3.34)$$


The second term can be simplified by introducing the concept of ensemble member pressure. Recall that pressure is the negative derivative of internal energy with respect to volume. For a single member state j this becomes (here we temporarily use capital P for pressure to avoid confusion with the probabilities p_j)

$$P_j = -\frac{\partial U_j}{\partial V} \qquad (3.35)$$

Therefore,

$$P = \sum_j p_j P_j = -\sum_j p_j \frac{\partial U_j}{\partial V} \qquad (3.36)$$

Utilizing these results,

$$d\langle U \rangle = -\frac{1}{\beta}\, d\left(\sum_j p_j \ln p_j\right) - P\, dV \qquad (3.37)$$

Compare this result with the differential relation

$$dU = T\, dS - P\, dV \qquad (3.38)$$

If we assume that

$$\beta = \frac{1}{kT} \qquad (3.39)$$

then

$$S = -k \sum_j p_j \ln p_j \qquad (3.40)$$

If we substitute the expression for p_j into this result for S, then

$$S = \frac{1}{T}\langle U \rangle + k \ln Q \qquad (3.41)$$

Compare this with the definition of S[1/T],

$$S\left[\frac{1}{T}\right] = S - \frac{1}{T} U \qquad (3.42)$$

thus

$$S\left[\frac{1}{T}\right] = k \ln Q \qquad (3.43)$$

One can show that these results apply for any representation. The function

$$I = -\sum_j p_j \ln p_j \qquad (3.44)$$

is known as the “information” in information theory. One can show that I is maximized when all the pj are equal or all states are equally probable. Thus, if we start in one quantum state, then the natural tendency is to evolve in such a way as to pass through many other states. Equilibrium means that things have evolved long enough to “forget” the


initial conditions and access distributions that look like the most probable distribution. Of course, in thermodynamics the distribution is constrained, but the principle remains.

One way to interpret the importance of entropy is to study the effect of quasi-static work or heat transfer on the entropy. From the above, recall that

$$\langle U \rangle = \sum_j p_j U_j \qquad (3.45)$$

and

$$d\langle U \rangle = \sum_j U_j\, dp_j + \sum_j p_j\, dU_j \qquad (3.46)$$

The second term on the right was shown to be equivalent to −P dV. Therefore,

$$d\langle U \rangle = \sum_j U_j\, dp_j - P\, dV \qquad (3.47)$$

Comparing this to the 1st Law,

$$dU = \delta Q + \delta W \qquad (3.48)$$

If we assume that the work is reversible, then we can make the associations

$$\delta Q = \sum_j U_j\, dp_j \qquad (3.49)$$

$$\delta W = \sum_j p_j\, dU_j \qquad (3.50)$$

For work to occur, only the U_j's need change, not the p_j's. If that is the case, the entropy is constant and we call the work isentropic. For there to be heat transfer, however, the p_j's must change, and thus the entropy changes. Note that reversible work is possible only when the allowed quantum states themselves are allowed to change, by changing the volume for example. Otherwise, the only way to change the energy is by heat transfer.
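The identity S = ⟨U⟩/T + k ln Q (Eq. 3.41) can be checked directly against the definition S = −k Σ p_j ln p_j (Eq. 3.40). A numerical sketch with arbitrary, made-up member energies and k set to 1:

```python
import math

k, T = 1.0, 2.0
beta = 1.0 / (k * T)
U = [0.0, 1.0, 3.0, 7.0]                   # hypothetical member energies

Q = sum(math.exp(-beta * Uj) for Uj in U)
p = [math.exp(-beta * Uj) / Q for Uj in U]

S_direct = -k * sum(pj * math.log(pj) for pj in p)    # Eq. (3.40)
U_avg = sum(pj * Uj for pj, Uj in zip(p, U))          # Eq. (3.30)
S_identity = U_avg / T + k * math.log(Q)              # Eq. (3.41)

print(abs(S_direct - S_identity))   # agrees to machine precision
```

The agreement is exact (not approximate) because Eq. (3.41) follows algebraically from substituting ln p_j = −βU_j − ln Q into Eq. (3.40).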

3.5 Fluctuations

Having found the means to obtain the fundamental relation based on the most probable distribution, let us now turn to asking what level of fluctuations we might expect in real systems under equilibrium conditions. A measure of the width of the distribution function is the variance, which is defined as

$$\sigma^2(A) \equiv \left\langle (A - \langle A \rangle)^2 \right\rangle \qquad (3.51)$$

Plugging in the expression for the expectation value of a mechanical variable, this can be written as

$$\sigma^2(A) = \sum_j p_j A_j^2 - \langle A \rangle^2 \qquad (3.52)$$


Since we know the p_j, we can calculate the variance. Consider, for example, the variance of energy in the canonical representation:

$$\sigma^2(U) = \sum_j p_j U_j^2 - \langle U \rangle^2 \qquad (3.53)$$

This can be written as

$$\sigma^2(U) = \frac{\sum_j U_j^2 e^{-\beta U_j}}{\sum_j e^{-\beta U_j}} - \left\{\frac{\sum_j U_j e^{-\beta U_j}}{\sum_j e^{-\beta U_j}}\right\}^2 \qquad (3.54)$$

or

$$\sigma^2(U) = -\frac{\partial}{\partial \beta}\left\{\frac{\sum_j U_j e^{-\beta U_j}}{\sum_j e^{-\beta U_j}}\right\} = \frac{\partial^2 \ln Q}{\partial \beta^2} \qquad (3.55)$$

However,

$$\langle U \rangle = -\left(\frac{\partial \ln Q}{\partial \beta}\right)_{V,N} \qquad (3.56)$$

so that

$$\sigma^2(U) = -\left(\frac{\partial \langle U \rangle}{\partial \beta}\right)_{V,N} = kT^2 \left(\frac{\partial \langle U \rangle}{\partial T}\right)_{V,N} \qquad (3.57)$$

For an ideal gas the caloric equation of state is

$$U = c_v N T \qquad (3.58)$$

so that

$$\sigma^2(U) = c_v N k T^2 \qquad (3.59)$$

and

$$\sigma(U) = \langle U \rangle \sqrt{\frac{k}{c_v N}} \qquad (3.60)$$

Since c_v = (3/2)k for a monatomic gas, k and c_v are of the same order. Clearly, fluctuations only become important when the number of molecules is very small. This development can be generalized for any mechanical property that is allowed to fluctuate in any representation:

$$\sigma^2(A) = \frac{\partial^2 \ln Q}{\partial a^2} \qquad (3.61)$$

where a is the multiplier associated with A, that is β, π, or γ_i. (π is the usual symbol for p/kT.)
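The relation σ²(U) = ∂² ln Q/∂β² (Eq. 3.55) can be verified numerically: a finite-difference second derivative of ln Q reproduces the variance computed directly from the distribution. A sketch with hypothetical energy levels:

```python
import math

U = [0.0, 1.0, 2.0, 5.0]   # hypothetical member energies

def lnQ(beta):
    return math.log(sum(math.exp(-beta * Uj) for Uj in U))

def variance_direct(beta):
    """sigma^2(U) = sum_j p_j U_j^2 - <U>^2, straight from Eq. (3.53)."""
    Q = sum(math.exp(-beta * Uj) for Uj in U)
    p = [math.exp(-beta * Uj) / Q for Uj in U]
    mean = sum(pj * Uj for pj, Uj in zip(p, U))
    return sum(pj * Uj**2 for pj, Uj in zip(p, U)) - mean**2

beta, h = 1.0, 1e-4
# central second difference approximating d^2 lnQ / d beta^2
d2 = (lnQ(beta + h) - 2 * lnQ(beta) + lnQ(beta - h)) / h**2

print(d2, variance_direct(beta))   # the two values agree
```

The same trick works for any multiplier a in Eq. (3.61): differentiating ln Q twice with respect to the multiplier gives the variance of the conjugate mechanical property.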


3.6 Systems with Negligible Inter-particle Forces

If a system is composed of particles that are indistinguishable from one another and do not, for the vast majority of the time, interact with one another, then for all practical purposes the mechanical properties of the system are those of each particle, summed over all the particles. We start with some definitions:

U_j = energy of ensemble member in quantum state j
N_j = number of particles in ensemble member in quantum state j
U_ij = energy associated with type-i particles of member in quantum state j
N_ij = number of i-type particles in member in quantum state j
ε_ik = energy of an i-type particle in particle quantum state k
N_ikj = number of i-type particles in particle quantum state k in member in quantum state j
i = particle type index
j = member quantum state number
k = particle quantum state number

(Note: when discussing statistics, N usually means number rather than moles.)

In Chapter 4 we will see that, using quantum mechanics, we can obtain the ε_ik and N_ikj, so that we are now in a position to start calculating the mechanical properties and the partition function. In the grand canonical representation, we are concerned with the ensemble member energy and particle numbers, which become

$$U_j = \sum_i U_{ij} = \sum_i \sum_k \varepsilon_{ik} N_{ikj} \qquad (3.62)$$

$$N_j = \sum_i N_{ij} = \sum_i \sum_k N_{ikj} \qquad (3.63)$$

The partition function then becomes

$$Q = \sum_j \exp\Big(-\beta U_j - \sum_i \gamma_i N_{ij}\Big) = \sum_j \exp\Big(-\sum_i \sum_k (\beta \varepsilon_{ik} + \gamma_i) N_{ikj}\Big) = \sum_j \prod_i \prod_k \exp\big(-(\beta \varepsilon_{ik} + \gamma_i) N_{ikj}\big) \qquad (3.64)$$

There is a simplification of the final form of Eq. (3.64) that is quite useful:

$$Q = \prod_i \prod_k \sum_{\eta=0}^{\mathrm{Max}\, N_{ikj}} e^{-(\beta \varepsilon_{ik} + \gamma_i)\eta} \qquad (3.65)$$

where Max N_ikj is the maximum number of i-type particles which may simultaneously occupy the kth particle quantum state. The advantage of this form for the partition function is that the sum is for a single quantum state of a single type of particle. We will


shortly express this sum in closed algebraic form, even without specifying the exact functional form for ik . (Derivation of this form is left to a homework problem.) To proceed further, we must take note of a physical phenomenon that can have a profound impact on the final form of the partition function. For some types of particles, it can be shown that within a given ensemble member, no two particles can occupy the same quantum state. This is called the Pauli exclusion principle (see Pauling and Wilson [23]) and is based on the symmetry properties of the wave functions of the particles being considered. It is found that for some indistinguishable particles, the sign of the wave function changes if the coordinates of the particles are reversed. If so, the function is anti-symmetric, otherwise it is symmetric. For particles with anti-symmetric wave functions, it is also found that only one particle can occupy a given quantum state at one time. This is the exclusion principle. Particles are labeled according to their symmetry properties: Symmetry

Max Nikj

Particle name

System name

Examples

Anti-symmetric Symmetric

1 infinity

fermion boson

Fermi–Dirac Bose–Einstein

D, He3 , e H, He4 , photons

One may show that the partition function for the Fermi–Dirac and Bose–Einstein cases becomes

$$Q_{FD,BE} = \prod_i \prod_k \left(1 \pm e^{-\beta \varepsilon_{ik} - \gamma_i}\right)^{\pm 1} \qquad (3.66)$$

so that

$$\ln Q_{FD,BE} = \pm \sum_i \sum_k \ln\left(1 \pm e^{-\beta \varepsilon_{ik} - \gamma_i}\right) \qquad (3.67)$$

Note that we can now evaluate ⟨N_i⟩, since

$$\langle N_i \rangle = -\left(\frac{\partial \ln Q}{\partial \gamma_i}\right)_{\beta,\,V,\,\gamma_{k \neq i}} \qquad (3.68)$$

giving

$$\langle N_i \rangle_{FD,BE} = \sum_k \frac{1}{e^{\beta \varepsilon_{ik} + \gamma_i} \pm 1} \qquad (3.69)$$

Likewise,

$$\langle N_{ik} \rangle_{FD,BE} = \frac{1}{e^{\beta \varepsilon_{ik} + \gamma_i} \pm 1} \qquad (3.70)$$

A special case exists when e^(βε_ik + γ_i) ≫ 1. Called the Maxwell–Boltzmann limit, it arises when the density is so low that the issue of symmetry is no longer important. In this case,

$$\ln Q_{MB} = \sum_i \sum_k e^{-\beta \varepsilon_{ik} - \gamma_i} \qquad (3.71)$$


Figure 3.3 Expected value of ⟨N_k⟩.

$$\langle N_i \rangle_{MB} = \sum_k e^{-\beta \varepsilon_{ik} - \gamma_i} \qquad (3.72)$$

$$\langle N \rangle_{MB} = \sum_i \langle N_i \rangle = \ln Q_{MB} \qquad (3.73)$$

The Maxwell–Boltzmann limit is important because it leads to the ideal gas law. Recall that since

$$k \ln Q = \frac{PV}{T} \qquad (3.74)$$

we must have

$$PV = NkT \qquad (3.75)$$

This is actually the proof that β = 1/kT.

For those instances where the Maxwell–Boltzmann limit does not apply, the thermodynamic behavior is strongly influenced by whether the particles are bosons or fermions. We can illustrate this point by plotting ⟨N_k⟩ as a function of βε_k + γ in Fig. 3.3. Note that for a Fermi–Dirac system, ⟨N_k⟩ can never exceed unity. For a Bose–Einstein system, ⟨N_k⟩ can exceed unity for small values of βε_k + γ. One may interpret this difference as an effective attraction between bosons and an effective repulsion between fermions. For large values of βε_k + γ, the Maxwell–Boltzmann limit holds and inter-particle forces are irrelevant. The effect of temperature and chemical potential on the distribution of particles among the particle quantum states for Bose–Einstein and Fermi–Dirac systems is illustrated in Figs 3.4 and 3.5, respectively.
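The three occupation formulas are easy to compare numerically. A sketch evaluating ⟨N_k⟩ for the Fermi–Dirac, Bose–Einstein, and Maxwell–Boltzmann cases as a function of x = βε_k + γ (values of x chosen for illustration):

```python
import math

def n_fd(x):  # Fermi–Dirac:      <N_k> = 1 / (e^x + 1)
    return 1.0 / (math.exp(x) + 1.0)

def n_be(x):  # Bose–Einstein:    <N_k> = 1 / (e^x − 1), requires x > 0
    return 1.0 / (math.exp(x) - 1.0)

def n_mb(x):  # Maxwell–Boltzmann limit: <N_k> = e^{−x}
    return math.exp(-x)

for x in (0.1, 1.0, 5.0):
    print(x, n_fd(x), n_be(x), n_mb(x))
```

For every x the ordering is n_fd < n_mb < n_be (the "effective repulsion/attraction" noted above), the Fermi–Dirac occupation never exceeds unity, and by x ≈ 5 all three formulas agree to within about 1%, which is the Maxwell–Boltzmann limit.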


Figure 3.4 Expected value of ⟨N_k⟩ for a Bose–Einstein system.

Figure 3.5 Expected value of ⟨N_k⟩ for a Fermi–Dirac system.

Note first that, for a Bose–Einstein system, ⟨N_k⟩ is limited to positive values by the physical restriction that ⟨N_k⟩ be equal to or greater than zero. Thus, the chemical potential is limited to values less than ε_k. Given that, for a fixed temperature, the number of particles with a given quantum energy increases with decreasing energy. As the temperature increases, the population increases. If N is nonzero and T = 0 K, the chemical potential must equal ε_k; otherwise the population would be zero.

The behavior of a Fermi–Dirac system is substantially different from that of a Bose–Einstein system. In this case, at a fixed temperature, if μ → −∞ then the number density of particles in the system goes to zero, whereas if μ → ∞ then the number of particles approaches infinity. Note that the energy level ε_k = μ takes on a special physical significance: there the expectation value of ⟨N_k⟩ is always one-half, regardless of the temperature. As T → 0 K, one particle occupies each quantum state with ε_k < μ, with no particles in higher-energy quantum states. We shall see that this behavior drives the behavior of electrons in metals and semiconductors.


3.7 Systems with Non-negligible Inter-particle Forces

If inter-particle forces become important, then a complete specification of the quantum state includes specifying the positions, or configuration, of the particles. The dilemma we face when forced to consider configuration is that there is no longer a simple way to calculate the partition function. For simplicity, consider the canonical partition function

$$Q = \sum_j e^{-\beta U_j} \qquad (3.76)$$

For a system in which particle interactions are important, the energy of the member in quantum state j becomes

$$U_j = \sum_i \sum_k \varepsilon_{ik} N_{ikj} + \phi(\vec{r}_1, \vec{r}_2, \vec{r}_3, \ldots, \vec{r}_N) \qquad (3.77)$$

where φ is the energy associated with interactions between particles. Therefore,

$$Q = \sum_j \exp\left[-\beta\left(\sum_i \sum_k \varepsilon_{ik} N_{ikj} + \phi(\vec{r}_1, \vec{r}_2, \vec{r}_3, \ldots, \vec{r}_N)\right)\right] \qquad (3.78)$$

We will show that

$$Q = \frac{q_{int}\, q_{tr}\, Z_N}{N!} \qquad (3.79)$$

where q_int and q_tr are partition functions for internal motion and translational motion, respectively, and

$$Z_N = \int \cdots \int e^{-\phi/kT}\, d\vec{r}_1\, d\vec{r}_2 \cdots d\vec{r}_N \qquad (3.80)$$

is called the configuration integral. We will have to evaluate Z_N to calculate the partition function.

3.8 Summary

3.8.1 Statistics in Thermodynamics and Ensembles

This section can be summarized as: "there are lots of particles!" Therefore, we characterize the macroscopic state of a very large system of particles with probability distributions. For example, the average energy can be written as

$$\langle U \rangle = \frac{1}{n} \sum_j n_j U_j \qquad (3.81)$$

where the sum is over the energy of all the members of an "ensemble," conceptually illustrated in Fig. 3.1, and n_j is the distribution of the values of the energy over the ensemble members.

3.8.2 The Postulates of Microscopic Thermodynamics

How we calculate the appropriate distribution functions is prescribed by the following postulates:

I. The macroscopic value of a mechanical–thermodynamic property of a system in equilibrium is characterized by the expected value computed from the most probable distribution of ensemble members among the allowed ensemble member quantum states.

II. The most probable distribution of ensemble members among ensemble member quantum states is that distribution for which the number of possible quantum states of the ensemble and reservoir is a maximum.

Postulate I states that we can calculate ⟨A⟩ using

$$\langle A \rangle = \frac{1}{n} \sum_j n_j^* A_j = \sum_j p_j A_j \qquad (3.82)$$

where p_j = n_j^*/n is the normalized probability distribution and * indicates that we are using the most probable distribution of n_j's or p_j's. Postulate II describes how to determine the most probable distribution. To apply the postulate we must obtain an expression for the number of allowed quantum states as a function of the n_j and then find the n_j that maximize this function:

$$\ln \Omega_{TOT} = \ln \Omega_R + n \ln n - \sum_j n_j \ln n_j \qquad (3.83)$$

Using Lagrange's method of undetermined multipliers, we showed how to obtain the probability distribution functions.

3.8.3 The Partition Function

In this chapter we discussed at some length the type of ensemble to use. Table 3.1 shows the possibilities. It really doesn't matter which one uses, and the choice is usually made on the basis of convenience. In our case we used the grand canonical form. After the math we found that the desired probability distribution function is

$$p_j = \frac{n_j^*}{n} = \frac{e^{-\beta U_j - \sum_i \gamma_i N_{ij}}}{\sum_j e^{-\beta U_j - \sum_i \gamma_i N_{ij}}} \qquad (3.84)$$

The denominator of this expression is called the partition function, the grand partition function in this case:

$$Q_G(\beta, \gamma_i) \equiv \sum_j e^{-\beta U_j - \sum_i \gamma_i N_{ij}} \qquad (3.85)$$

We then showed that

$$\beta = \frac{1}{kT} \quad \text{and} \quad \gamma_i = \frac{-\mu_i}{kT} \qquad (3.86)$$


Were we to derive the partition functions in other representations, they would be of the form shown in Table 3.2.

3.8.4 Relationship of Partition Function to Fundamental Relation

Most importantly, we showed that the partition function is directly related to the fundamental relation. This takes on the general form for Massieu functions:

$$S\left[\frac{1}{T}\right] = k \ln Q \qquad (3.87)$$

3.8.5 Fluctuations

Once we have the distribution function, we can calculate its statistical properties. Of most interest is the variance, or square of the standard deviation. For the internal energy the variance is

$$\sigma^2(U) = kT^2 \left(\frac{\partial \langle U \rangle}{\partial T}\right)_{V,N} \qquad (3.88)$$

which for an ideal gas becomes

$$\sigma^2(U) = c_v N k T^2 \qquad (3.89)$$

3.8.6 Systems with Negligible Inter-particle Forces

An important distinction in studying the macroscopic behavior of matter is the degree to which individual atoms and molecules influence their neighbors through electrostatic interactions. If, on average, they are too far apart to influence each other, then we can neglect the effect of inter-particle forces. If that is the case, we showed that the partition function becomes

$$Q = \prod_i \prod_k \sum_{\eta=0}^{\mathrm{Max}\, N_{ikj}} e^{-(\beta \varepsilon_{ik} + \gamma_i)\eta} \qquad (3.90)$$

By taking into account Pauli's exclusion principle (which we discuss in more detail in the next chapter), we showed that the final form is

$$\ln Q_{FD,BE} = \pm \sum_i \sum_k \ln\left(1 \pm e^{-\beta \varepsilon_{ik} - \gamma_i}\right) \qquad (3.91)$$

where FD stands for Fermi–Dirac, particles with anti-symmetric wave functions, and BE for Bose–Einstein, particles with symmetric wave functions. These particles are called fermions and bosons, respectively, as shown below:

Symmetry          Max N_ikj    Particle name    System name      Examples
Anti-symmetric    1            fermion          Fermi–Dirac      D, He-3, e−
Symmetric         infinity     boson            Bose–Einstein    H, He-4, photons


The most important difference between fermions and bosons is that, in a macroscopic system of fermions, only one particle is allowed to exist in any given quantum state, while for bosons there is no limit. For example, electrons are fermions, and this explains why the periodic table is organized as it is. More on this in the next chapter.

3.8.7 Systems with Non-negligible Inter-particle Forces

For systems in which interatomic forces are important, the local arrangement of the particles becomes important. We call this the configuration. We will show that the partition function becomes

$$Q = \frac{q_{int}\, q_{tr}\, Z_N}{N!} \qquad (3.92)$$

where q_int and q_tr are partition functions for internal motion and translational motion, respectively, and

$$Z_N = \int \cdots \int e^{-\phi/kT}\, d\vec{r}_1\, d\vec{r}_2 \cdots d\vec{r}_N \qquad (3.93)$$

is called the configuration integral. We will have to evaluate Z_N to calculate the partition function.

3.9 Problems

3.1 Consider four hands of cards, each hand containing 13 cards, with the 52 cards forming a conventional deck. How many different deals are possible? Simplify your result using Stirling's approximation. The order of the cards in each hand is not important.

3.2 Calculate and plot the error in using Stirling's approximation, ln x! ≈ x ln x − x, for 1 < x < 100.

3.3 Use the method of Lagrange multipliers to maximize −Σ_i p_i ln p_i subject to the constraint Σ_i p_i = 1. Show that when this quantity is a maximum, p_i = constant.

3.4 Using Lagrange's method of undetermined multipliers, show that the partition function for a canonical ensemble is

$$Q = \sum_j \exp\left(-\frac{U_j}{kT}\right)$$

3.5 Show that for a system of indistinguishable particles (single component) with negligible inter-particle forces,

$$S = -k \sum_k \left[\langle N_k \rangle \ln \langle N_k \rangle \pm (1 \mp \langle N_k \rangle) \ln(1 \mp \langle N_k \rangle)\right]$$

Hint: Start with the Massieu function for the grand canonical representation, then solve for S and substitute in the quantum statistical expressions for U and N.


3.6 Prove that for a system of indistinguishable particles (single component) with negligible inter-particle forces,

$$\ln Q_{FD} < \ln Q_{MB} < \ln Q_{BE}$$

3.7 Show that Eqs (3.66) and (3.67) are equivalent.

3.8 If we wished to do so, we could take ⟨U³⟩ ≡ ⟨(U_j − Ū)³⟩ as a measure of energy fluctuations in the canonical ensemble. Show that

$$\langle U^3 \rangle = -\frac{\partial^3 \ln Q}{\partial \beta^3}$$

3.9 At what approximate altitude would a 1-cm³ sample of air exhibit 1% fluctuations in internal energy? Use data from the US Standard Atmosphere (see engineeringtoolbox.com).

4 Quantum Mechanics

In this chapter we learn enough about quantum mechanics to gain a conceptual understanding of the structure of atoms and molecules and to provide quantitative relationships that will allow us to compute macroscopic properties from first principles. Quantum mechanics developed as a consequence of the inability of classical Newtonian mechanics and Maxwell's electromagnetic theory to explain certain experimental observations. The development of quantum mechanics is a fascinating episode in the history of modern science. Because it took place over a relatively short period from the end of the nineteenth century through the first third of the twentieth century, it is well documented and is the subject of many books and treatises. See, for example, Gamow [24]. In the next section we give an abbreviated version of the history, outlining the major conceptual hurdles and advances that led to our modern understanding. We then introduce four postulates that describe how to calculate quantum-mechanical properties. The remainder of the chapter discusses specific atomic and molecular behaviors. (This chapter largely follows Incropera [7].)

4.1 A Brief History

We start with a brief history of the atom. The Greek philosopher Democritus [25] was the first to develop the idea of atoms. He postulated that if you divided matter over and over again, you would eventually find the smallest piece, the atom. However, his ideas were not explored seriously for more than 2000 years. In the 1800s scientists began to carefully explore the behavior of matter. An English scientist, John Dalton, carried out experiments that seemed to indicate that matter was composed of some type of elementary particles [26]. In 1897 the English physicist J. J. Thomson discovered the electron and proposed a model for the structure of the atom that included positive and negative charges. He recognized that electrons carried a negative charge but that matter was neutral. (His general ideas were confirmed in a series of scattering experiments over the period 1911–1919 by Rutherford, which firmly established the presence of electrons and protons.) In 1861–1862 James Clerk Maxwell [18] published early forms of what are now called "Maxwell's equations." These equations describe how electric and magnetic fields interact with charged matter and how electromagnetic radiation is generated. Electromagnetic radiation, which includes visible light, was described as a wave


phenomenon in which oscillating electric and magnetic fields interact. It is considered a "classical" theory in this sense. For an electromagnetic wave, there is a fixed relation between the frequency and the wavelength given by

ν = c/λ    (4.1)

where ν is the frequency, λ is the wavelength, and c is the speed of light. By the end of the nineteenth century, then, it was recognized that matter was composed of elementary particles, or atoms, that contained positive and negative charges, and that light (more generally radiation) was composed of electromagnetic waves. Both were viewed through the lens of classical theory.

4.1.1 Wave–Particle Duality – Electromagnetic Radiation Behaves Like Particles

4.1.1.1 Blackbody Radiation

The first experiment to challenge classical theories was the observation of the spectrum of blackbody radiation. It was known that any surface with a temperature greater than absolute zero emits radiation. A black body is one that absorbs all radiation striking it. Such a surface can be approximated by a cavity with highly absorbing internal walls. If a small hole is placed in the wall, any radiation entering will reflect numerous times before exiting, and will be absorbed almost entirely. At the same time, the atoms making up the walls are vibrating. Classical electromagnetic theory predicts that when two charges oscillate with respect to each other, they emit radiation. At equilibrium, the amount of radiation energy entering the cavity must equal the amount leaving. Assuming the cavity walls are at a fixed temperature T, the radiation leaving the cavity is referred to as blackbody emission.

The spectral energy distribution of blackbody radiation is shown in Fig. 4.1. As can be seen, the energy per unit wavelength approaches zero at high frequencies, reaches a peak, and then declines again at low frequencies. Every attempt to predict this functional relationship using classical theory failed. The curve labeled "Rayleigh–Jeans formula" is based on classical electromagnetic theory. It matched the long-wavelength region of the spectrum but diverged sharply as the wavelength decreased. At the time, this was called the "ultraviolet catastrophe." Wien's formula, on the other hand, predicted the high-frequency behavior well, but failed at lower frequencies.

In 1901, Max Planck provided the first important concept that led to modern physics. He did what any engineer would do; he performed an empirical curve fit to the data. The correlation was of the form

u_ν = (8πhν³/c³) 1/(e^{hν/kT} − 1)    (4.2)

The constant h is now known as Planck's constant and is equal to 6.626 × 10⁻³⁴ J-sec. After some thought, he concluded that the energy associated with the atomic oscillators could not be continuously distributed. This became known as "Planck's postulate," the essence of which is that

ε_n = nhν    (4.3)


Figure 4.1 Spectral distribution of blackbody radiation.

where n is an integer (n = 1, 2, 3, . . . ). Thus, the oscillator energy is “quantized.” If the energy of an oscillator is quantized, then it may only exchange energy in discrete amounts.
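As a quick numerical sketch (not from the text; constants are rounded CODATA values), Eq. (4.2) can be evaluated and compared against the classical Rayleigh–Jeans result: the two agree at low frequency, while the classical form diverges at high frequency — the "ultraviolet catastrophe."

```python
import math

h = 6.626e-34   # Planck's constant, J*s
k = 1.381e-23   # Boltzmann's constant, J/K
c = 2.998e8     # speed of light, m/s

def planck_u(nu, T):
    """Spectral energy density of blackbody radiation, Eq. (4.2)."""
    return (8.0 * math.pi * h * nu**3 / c**3) / (math.exp(h * nu / (k * T)) - 1.0)

def rayleigh_jeans_u(nu, T):
    """Classical Rayleigh-Jeans result, valid only when h*nu << kT."""
    return 8.0 * math.pi * nu**2 * k * T / c**3

T = 1000.0          # illustrative wall temperature, K
nu_low = 1.0e11     # h*nu/kT ~ 0.005: classical limit
nu_high = 1.0e15    # h*nu/kT ~ 48: catastrophe region

# At low frequency the two expressions agree; at high frequency the
# classical formula keeps growing while Planck's is exponentially small.
ratio_low = planck_u(nu_low, T) / rayleigh_jeans_u(nu_low, T)
ratio_high = planck_u(nu_high, T) / rayleigh_jeans_u(nu_high, T)
```

The temperature and frequencies above are arbitrary illustrative choices; any values with hν ≪ kT and hν ≫ kT show the same behavior.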

4.1.1.2 The Photoelectric Effect

Consider the experimental arrangement illustrated in Fig. 4.2. The experiment consists of illuminating a metallic surface with radiation of fixed frequency and measuring the current due to electrons that are emitted from the surface because of radiation/surface interactions. What was observed is that below a certain frequency no electrons were emitted, regardless of the intensity of the radiation. Classical theory would predict that as the intensity of radiation is increased, the oscillators would acquire more and more energy, eventually gaining enough kinetic energy that an electron would escape the surface. Einstein, in 1905, proposed an alternative theory in which the incident radiation behaved not as a wave, but rather as particles. He called these particles "photons" or "quanta" of light. The energy of each photon is

ε = hν    (4.4)

where h is Planck's constant. The energy of an electron emitted from the surface is then (T is historically used to denote the electron energy)

T = hν − V − W    (4.5)


Figure 4.2 Photoelectric emission.

Figure 4.3 The Compton effect.

where V is the energy required to bring the electron to the surface and W is the energy to eject it from the surface. W is called the work function and depends on the material properties of the surface. Therefore, unless the photon energy is large enough to overcome the work function and V, no electron emission will take place. Einstein’s theory completely explained the photoelectric effect. However, the scientific community was then faced with the so-called “wave–particle” duality of radiation.
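Equation (4.5) can be sketched numerically. The work function below is an illustrative assumption (2.3 eV, roughly that of sodium), and V is neglected; the point is only that below threshold no electron emerges, regardless of intensity.

```python
h = 6.626e-34    # Planck's constant, J*s
eV = 1.602e-19   # joules per electron-volt

def electron_energy(nu, W_eV, V_eV=0.0):
    """Kinetic energy T = h*nu - V - W of an emitted electron, Eq. (4.5), in eV.
    Returns None when the photon cannot overcome V + W: no emission occurs,
    no matter how intense the illumination."""
    T = h * nu / eV - V_eV - W_eV
    return T if T > 0.0 else None

W = 2.3                              # assumed work function, eV
T_red = electron_energy(4.3e14, W)   # red light: h*nu ~ 1.78 eV < W
T_uv = electron_energy(1.0e15, W)    # ultraviolet: h*nu ~ 4.14 eV > W
```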

4.1.1.3 The Compton Effect

In 1923, Arthur Compton published the results of an experiment involving the scattering of X-rays by a metallic foil (Fig. 4.3). It was observed that for X-rays of a given wavelength, the scattered X-rays emerged at a different wavelength that depended on the scattering angle θ. Compton proposed that the "photons" scattered like "particles." He derived the relation

λ′ − λ = (h/m_e c)(1 − cos θ)    (4.6)


where λ is the incident wavelength, λ′ the scattered wavelength, and m_e the mass of an electron. The theory was based on photons carrying the property of momentum in the amount

p = h/λ    (4.7)

Compton’s finding removed any doubt regarding the dual nature of radiation.
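A minimal check of Eq. (4.6), using rounded constants: at θ = 90° the shift equals the Compton wavelength h/(m_e c), about 2.43 × 10⁻¹² m, and at θ = 0 there is no shift.

```python
import math

h = 6.626e-34    # Planck's constant, J*s
me = 9.109e-31   # electron mass, kg
c = 2.998e8      # speed of light, m/s

def compton_shift(theta):
    """Wavelength shift of a scattered photon, Eq. (4.6), in metres."""
    return h / (me * c) * (1.0 - math.cos(theta))

shift_90 = compton_shift(math.pi / 2.0)  # equals h/(me*c) ~ 2.43e-12 m
shift_0 = compton_shift(0.0)             # forward scattering: no shift
```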

4.1.2 Particle–Wave Duality – Particles Can Display Wave-Like Behavior

4.1.2.1 The De Broglie Postulate

In 1924 Louis de Broglie postulated that particles can behave like waves. Waves typically have the properties of wavelength, phase velocity, and group velocity. Properties associated with matter before quantum mechanics were mass, velocity, and momentum. A connection between these two worlds is the de Broglie relation

λ = h/p    (4.8)

which attributes a wavelength to a particle with momentum p. This can also be written as

p = ħk

(4.9)

where ħ = h/2π and k = 2π/λ is the wavenumber. The kinetic energy of a classical particle can then be written as a function of the wavenumber

K.E. = p²/2m = ħ²k²/2m    (4.10)

4.1.2.2 Davisson–Germer Experiment

In 1927 Davisson and Germer confirmed the wave-like nature of matter. They observed that a beam of electrons with momentum p was scattered by a nickel crystal like X-rays of the same wavelength. The relation between momentum and wavelength was given by the de Broglie relation [Eq. (4.8)].
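Equation (4.8) can be evaluated for the electrons Davisson and Germer used (about 54 eV); with p = √(2mE) for a nonrelativistic particle, the wavelength comes out near 1.67 × 10⁻¹⁰ m, comparable to the atomic spacing in a nickel crystal, which is why diffraction was observable.

```python
import math

h = 6.626e-34    # Planck's constant, J*s
me = 9.109e-31   # electron mass, kg
eV = 1.602e-19   # joules per electron-volt

def de_broglie_wavelength(E_eV, m=me):
    """Eq. (4.8): lambda = h/p, with p = sqrt(2*m*E) for a slow particle."""
    p = math.sqrt(2.0 * m * E_eV * eV)
    return h / p

lam = de_broglie_wavelength(54.0)   # 54-eV electrons, ~1.67e-10 m
```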

4.1.3 Heisenberg Uncertainty Principle

In 1927, Heisenberg developed what has come to be called the "Heisenberg uncertainty principle." It is based on a wave-packet representation of a particle. A wave-packet is a wave-like structure that is spatially localized, as illustrated in Fig. 4.4. A so-called "plane wave" can be described as

ψ(x) ∝ e^{ik₀x} = e^{ip₀x/ħ}    (4.11)

where we have used the de Broglie relation to relate wavenumber and momentum. A plane wave, however, has infinite extent, but a wave similar to that shown in Fig. 4.4 can be represented by an expansion of plane waves:

ψ(x) ∝ Σ_n A_n e^{ip_n x/ħ}    (4.12)


Figure 4.4 A wave packet.

where the A_n are expansion coefficients that represent the relative contribution of each mode p_n to the overall wave. In the continuum limit of an infinite number of modes, one can define a wave function

ψ(x) = (1/√(2πħ)) ∫_{−∞}^{∞} φ(p) e^{ipx/ħ} dp    (4.13)

This form of the wave function is normalized, that is the integral is unity. Since the probability of finding the particle between x locations a and b is

P[a ≤ x ≤ b] = ∫_a^b |ψ(x)|² dx    (4.14)

the normalization insures that the particle is somewhere in space. However, for a wave packet with finite width, this integral will have a value only over the spatial domain of the particle. Its average position will be

x̄ = ∫_{−∞}^{∞} x |ψ(x)|² dx    (4.15)

and the dispersion is

D_x = ∫_{−∞}^{∞} x² |ψ(x)|² dx    (4.16)

It turns out that the function φ(p) is the Fourier transform of ψ(x). Were we to calculate the dispersion in the momentum, we would obtain the result that

Δx Δp ≥ ħ/2    (4.17)

This is Heisenberg’s uncertainty principle. It states that the position and momentum of a particle cannot be measured simultaneously with arbitrarily high precision; there is a minimum uncertainty in the product of the two. It comes about purely from the wave character of the particle.
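The minimum-uncertainty case can be sketched numerically for a Gaussian wave packet. Here Δx is computed by direct numerical integration of |ψ(x)|², while Δp uses the known analytic result that the Fourier transform of a Gaussian of width σ is a Gaussian of width ħ/(2σ) — an assumption imported from Fourier analysis, not derived in the code. Units with ħ = 1 are used to keep the arithmetic well scaled.

```python
import math

# A Gaussian packet psi(x) ~ exp(-x^2/(4*sigma^2)) has |psi|^2 with
# standard deviation sigma; its momentum-space transform phi(p) is a
# Gaussian with standard deviation hbar/(2*sigma), so the product
# Delta-x * Delta-p equals hbar/2, the minimum allowed by Eq. (4.17).
hbar = 1.0
sigma = 0.7        # arbitrary packet width

N = 4000
L = 10.0 * sigma   # integration half-range, wide enough for the tails
dx = 2.0 * L / N
xs = [-L + i * dx for i in range(N + 1)]

psi2 = [math.exp(-x * x / (2.0 * sigma**2)) for x in xs]  # |psi|^2, unnormalized
norm = sum(psi2) * dx
xbar = sum(x * w for x, w in zip(xs, psi2)) * dx / norm
var_x = sum((x - xbar) ** 2 * w for x, w in zip(xs, psi2)) * dx / norm
delta_x = math.sqrt(var_x)

delta_p = hbar / (2.0 * sigma)   # analytic width of phi(p) for a Gaussian
product = delta_x * delta_p      # should come out to hbar/2
```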

4.2 The Postulates of Quantum Mechanics

By the early 1930s, the foundations of modern quantum mechanics were fairly well established. It then became reasonable to consolidate the theory with a set of postulates. (Here, system means a single particle or a collection of particles such as an atom or molecule.)

I. Each system can be characterized by a wave function, which may be complex,

Ψ(r̄, t)    (4.18)

and which contains all the information that is known about the system.

II. The wave function is defined such that

p(r̄, t)dV = Ψ*(r̄, t)Ψ(r̄, t)dV = |Ψ(r̄, t)|² dV    (4.19)

is the probability of finding the system in the volume element dV. The consequence of this is that

∫_{−∞}^{∞} Ψ*(r̄, t)Ψ(r̄, t)dV = 1    (4.20)

That is, the system will be found somewhere.

III. With every dynamical variable, there are associated operators:

r̄:  r_op = r̄    (4.21)
p̄:  p_op = −iħ∇    (4.22)
ε:  ε_op = iħ ∂/∂t    (4.23)
B(r̄, p̄):  B_op = B(r̄, −iħ∇)    (4.24)

IV. The expectation value of any physical observable is

⟨B⟩ = ∫_{−∞}^{∞} Ψ*(r̄, t) B_op Ψ(r̄, t)dV    (4.25)

The wave function is described by Schrödinger's equation, which starts with the classical form for conservation of energy (H is the Hamiltonian of classical mechanics)

H = p²/2m + V(r̄, t) = ε    (4.26)

and substitutes the operators so that

−(ħ²/2m)∇²Ψ(r̄, t) + V(r̄, t)Ψ(r̄, t) = iħ ∂Ψ(r̄, t)/∂t    (4.27)

Schrödinger chose this form for two reasons:

1. He needed a linear equation to describe the propagation of wave packets.
2. He wanted the equation to be symmetric with the classical conservation of energy statement.


4.3 Solutions of the Wave Equation

As thermodynamicists, we are interested in obtaining the allowed quantum states of individual atoms and molecules so that we may calculate the allowed quantum states of ensemble members. The complete specification of the atomic or molecular quantum state requires understanding translational and electronic motion in the case of atoms, with rotational and vibrational motion also important for molecules.

The wave equation is a partial differential equation, first order in time and second order in space. The potential energy function acts as a source term, and can take on a variety of forms depending on the circumstances. For isolated atoms and molecules, the potential energy will be either zero, or a function of the relative positions of subatomic or molecular particles, but not time. As we shall see, this leads to the ability to solve the wave equation using the method of separation of variables, familiar from solving the heat equation. The solutions in this case lead to stationary (constant) expectation values for the mechanical, dynamical variables.

In real macroscopic systems, the atoms and molecules interact with each other at very fast rates, via collisions in the gas phase, or through close, continuous contact in the liquid and solid phases. However, between collisions, they rapidly relax to stationary states, and the properties of those states dominate the overall thermodynamic behavior. As a result, we will focus our attention on stationary solutions that describe translational, rotational, vibrational, and electronic behavior.

We start, then, by assuming that the potential energy term V is a function only of position:

V(r̄, t) = V(r̄)    (4.28)

As a result, the wave equation can be written as

iħ ∂Ψ(r̄, t)/∂t = −(ħ²/2m)∇²Ψ(r̄, t) + V(r̄)Ψ(r̄, t)    (4.29)

We assume that the wave function can be written as a product of a time-dependent function and a spatially dependent function:

Ψ(r̄, t) = φ(t)ψ(r̄)    (4.30)

Substituting this into the wave equation and then dividing through by the product φψ, we obtain

(iħ/φ) dφ/dt = (1/ψ)[−(ħ²/2m)∇²ψ + V(r̄)ψ]    (4.31)

Since the two sides are functions of different variables, they must each be equal to the same constant, so that the time-dependent wave equation becomes

(iħ/φ) dφ/dt = C    (4.32)

and the spatially dependent wave equation becomes

(1/ψ)[−(ħ²/2m)∇²ψ + V(r̄)ψ] = C    (4.33)

The time-dependent part is a simple first-order, ordinary differential equation, and can be solved once and for all. Its solution is

φ(t) = exp(−iCt/ħ)    (4.34)

Therefore,

Ψ(r̄, t) = ψ(r̄) exp(−iCt/ħ)    (4.35)

It turns out that the separation constant C has a very important physical meaning, which we can understand if we use the wave function to calculate the expectation value of the energy:

⟨ε⟩ = ∫_V Ψ* iħ ∂Ψ/∂t dV = ∫_V ψ* e^{iCt/ħ} C ψ e^{−iCt/ħ} dV = C    (4.36)

Thus, C is equal to the expectation value of the energy, and the spatially dependent wave equation becomes

−(ħ²/2m)∇²ψ + V(r̄)ψ = ⟨ε⟩ψ    (4.37)

This equation is also called the stationary wave equation, because its solutions result in expectation values of the dynamical properties that do not depend on time. To solve it requires two boundary conditions in space, and knowledge of the potential energy function.

We can solve for the motion of a single particle using the above equations. For a two-particle system, say a diatomic molecule or an atom with a single electron, the energy becomes

ε = p₁²/2m₁ + p₂²/2m₂ + V(r̄)    (4.38)

One can show [7] that transforming the wave equation into center of mass coordinates results in the following equations for external and internal motion:

(ħ²/2m_t)∇²ψ_e + ε_e ψ_e = 0    (4.39)

(ħ²/2μ)∇²ψ_int + (ε_int − V(r))ψ_int = 0    (4.40)

where

μ = m₁m₂/(m₁ + m₂)    (4.41)

is the reduced mass.

Figure 4.5 The particle in a box.

4.3.1 The Particle in a Box

We first consider the linear motion of a single particle in a square box, as illustrated in Fig. 4.5. While we expect that V(x) is zero within the box, we also expect that the particle will remain in the box. Therefore, V(x) cannot be zero at the walls. Indeed, V(x) must be essentially infinite at the walls, so that the particle experiences a strong opposing force as it approaches the wall. This is illustrated conceptually in the lower half of Fig. 4.5, where V(x) is represented as delta functions at the walls. The wave equation in one dimension becomes

−(ħ²/2m) d²ψ/dx² = ε_x ψ    (4.42)

The boundary conditions are quite simple; the wave function must be zero at each boundary. This can be deduced from the second postulate, which states that the probability of finding the particle within a given differential volume is

p(r̄, t)dV = Ψ*(r̄, t)Ψ(r̄, t)dV = ψ*(r̄)ψ(r̄)dV    (4.43)

As a result, we have

ψ(0) = ψ(L) = 0    (4.44)

The solution to Eq. (4.42) is simply

ψ(x) = A sin(√(2mε_x/ħ²) x) + B cos(√(2mε_x/ħ²) x)    (4.45)

Note that the first condition requires that B be zero, and the second that

0 = A sin(√(2mε_x/ħ²) L)    (4.46)

For this relation to be satisfied, the argument of the sine must be equal to zero or a multiple of π. The only parameter within the argument that is not a fixed constant of the problem is the energy ε_x. Therefore, while it is physically restricted to being zero or positive, it must also be restricted to values that meet the boundary conditions. These values become

ε_x = (ħ²π²/2mL²) n_x²    (4.47)

where n_x is a positive integer. (The value n_x = 0 is excluded, since it would make the wave function vanish everywhere.) It is because not all energies are allowed that we use the term "quantum." The constant A is evaluated by normalizing the wave function

∫₀^L |ψ|² dx = 1    (4.48)

so that

A = √(2/L)    (4.49)

and the stationary wave function becomes

ψ(x) = √(2/L) sin(πn_x x/L),  n_x = 1, 2, 3, . . .    (4.50)
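As a numeric sanity check (a sketch under assumed rounded constants, not part of the text), the solution of Eq. (4.50) can be substituted back into Eq. (4.42) with a central-difference second derivative, for an electron in an illustrative 1-nm box:

```python
import math

hbar = 1.055e-34   # J*s
me = 9.109e-31     # electron mass, kg
L = 1.0e-9         # assumed box size: 1 nm
n = 1

def psi(x):
    """Stationary wave function, Eq. (4.50)."""
    return math.sqrt(2.0 / L) * math.sin(n * math.pi * x / L)

def energy(n):
    """Allowed energies, Eq. (4.47)."""
    return hbar**2 * math.pi**2 * n**2 / (2.0 * me * L**2)

# Check -(hbar^2/2m) psi'' = eps * psi at an interior point.
x = 0.3 * L
step = 1.0e-13     # central-difference step
psi_dd = (psi(x + step) - 2.0 * psi(x) + psi(x - step)) / step**2
lhs = -hbar**2 / (2.0 * me) * psi_dd
rhs = energy(n) * psi(x)
```

For this box the ground-state energy works out to a few tenths of an electron-volt, illustrating how widely spaced translational levels become when the box is atom-sized.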

The three-dimensional problem is now easily solved. Because the potential function is zero, the three coordinate directions are independent of each other. Therefore, the solutions for the y and z directions are the same as for the x direction, assuming the appropriate dimension is used. Assuming that the box is a cube, we thus have

ε = ε_x + ε_y + ε_z = (h²/8mV^{2/3})(n_x² + n_y² + n_z²)    (4.51)

and

ψ(x, y, z) = √(8/L³) sin(πn_x x/L) sin(πn_y y/L) sin(πn_z z/L)    (4.52)

An important finding is that the energy is a function of the volume of the box. Indeed, we will see that for an ideal gas, this leads to all the volume dependency in the fundamental relation.

It is very important to note that different combinations of the three quantum numbers can result in the same energy. This is illustrated in Table 4.1.

Table 4.1 Degeneracy of translational quantum states

n_x   n_y   n_z   n_x² + n_y² + n_z²   g
1     1     1     3                    1
2     1     1     6                    3
1     2     1     6
1     1     2     6

When more than one quantum state has the same energy, the states are often lumped together and identified as an "energy state," in contrast to a "quantum state" or "eigenstate." The energy state is then said to be degenerate. The letter g is used to denote the value of the degeneracy. In Table 4.1, for example, the first state listed is not degenerate; however, the next three have the same total quantum number, and thus energy, and the degeneracy is three. For large values of n, the degeneracy can be very large.

We shall see that the average kinetic energy of an atom or molecule is equal to

⟨ε⟩ = (3/2)kT    (4.53)

where k is Boltzmann's constant. Thus, a typical value of the translational quantum number at room temperature is about 10⁸. [Calculate ⟨ε⟩ for 300 K and then use Eq. (4.51) to estimate an average quantum number.] We will work out what the degeneracy would be as a function of n_trans in Chapter 5.
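Both points above can be sketched in a few lines. The degeneracy count reproduces Table 4.1 by brute-force enumeration, and the quantum-number estimate assumes an N₂-like molecule (m ≈ 4.65 × 10⁻²⁶ kg) in an illustrative 1-cm box, taking the mean one-dimensional energy ε_x = kT/2 and inverting Eq. (4.47):

```python
import math
from itertools import product as iproduct
from collections import Counter

# Degeneracy by enumeration: count how many (nx, ny, nz) triples share
# the same value of nx^2 + ny^2 + nz^2 (cf. Table 4.1).
counts = Counter(nx * nx + ny * ny + nz * nz
                 for nx, ny, nz in iproduct(range(1, 20), repeat=3))
g3, g6 = counts[3], counts[6]   # (1,1,1) alone; three permutations of (2,1,1)

# Typical translational quantum number at room temperature.
h = 6.626e-34    # J*s
k = 1.381e-23    # J/K
m = 4.65e-26     # assumed molecular mass, kg (roughly N2)
Lbox = 0.01      # assumed box size, m
T = 300.0
eps_x = 0.5 * k * T                                  # mean 1-D energy
n_typical = Lbox * math.sqrt(8.0 * m * eps_x) / h    # from eps = h^2 n^2 / (8 m L^2)
```

The estimate lands near 10⁸, consistent with the value quoted in the text.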

4.3.2 Internal Motion

To explore the internal motions of atoms and molecules, we must take into account the fact that even the most simple models involve multiple particles. Atoms are composed of nuclei and electrons, and molecules of multiple atoms. We will only explore analytic solutions for the simplest cases, the one-electron or hydrogen atom and the diatomic molecule. We will resort to numerical methods to explore more realistic behavior. We start with the stationary wave equation

(ħ²/2μ)∇²ψ_int + (ε_int − V(r))ψ_int = 0    (4.54)

Converting this equation into spherical coordinates (Fig. 4.6), we obtain

(1/r²) ∂/∂r(r² ∂ψ_int/∂r) + (1/r² sin θ) ∂/∂θ(sin θ ∂ψ_int/∂θ) + (1/r² sin²θ) ∂²ψ_int/∂φ²
  + (2μ/ħ²)[ε_int − V_int(r)]ψ_int = 0    (4.55)

Again we pursue separation of variables, defining

ψ_int(r, θ, φ) = R(r)Y(θ, φ)    (4.56)

Figure 4.6 Spherical coordinate system.

and inserting into Eq. (4.55):

(1/R) d/dr(r² dR/dr) + (2μr²/ħ²)[ε_int − V_int(r)] = −(1/Y)[(1/sin θ) ∂/∂θ(sin θ ∂Y/∂θ) + (1/sin²θ) ∂²Y/∂φ²]    (4.57)

Calling the separation variable α, we get a radial equation and one that contains all the angular dependence:

d/dr(r² dR/dr) + [(2μr²/ħ²)(ε_int − V_int(r)) − α]R = 0    (4.58)

(1/sin θ) ∂/∂θ(sin θ ∂Y/∂θ) + (1/sin²θ) ∂²Y/∂φ² = −αY    (4.59)

Now, separate the angular wave equation using

Y(θ, φ) = Θ(θ)Φ(φ)    (4.60)

resulting in

d²Φ/dφ² + βΦ = 0    (4.61)

and

(1/sin θ) d/dθ(sin θ dΘ/dθ) + (α − β/sin²θ)Θ = 0    (4.62)


where β is the separation constant. Note that neither equation depends on the potential energy term, V(r), so they can be solved once and for all. The general solution for Φ is simply

Φ(φ) = exp(iβ^{1/2}φ)    (4.63)

Note that this function must be continuous and single valued, so

Φ(φ) = Φ(φ + 2π)    (4.64)

This will only be true if β^{1/2} is equal to an integer, which we will call m_l, where m_l = 0, ±1, ±2, . . . Therefore,

Φ(φ) = exp(im_l φ)    (4.65)

Plugging the expression for β into the equation for Θ:

(1/sin θ) d/dθ(sin θ dΘ/dθ) + (α − m_l²/sin²θ)Θ = 0    (4.66)

This is a Legendre equation. (Legendre was a nineteenth-century mathematician who worked out the series solution of equations of this form.) To satisfy the boundary conditions we must have α = l(l + 1), l = i + |m_l|, i = 0, 1, 2, 3, . . . The solution is the associated Legendre function

Θ = P_l^{|m_l|}(cos θ) = (1/2^l l!) (1 − cos²θ)^{|m_l|/2} d^{|m_l|+l}/d(cos θ)^{|m_l|+l} (cos²θ − 1)^l    (4.67)

Thus, the angular component of the wave function becomes

Y(θ, φ) = C_{l,m_l} P_l^{|m_l|}(cos θ) e^{im_l φ}    (4.68)

where C_{l,m_l} is a normalization constant:

C_{l,m_l} = [(2l + 1)(l − |m_l|)!/2(l + |m_l|)!]^{1/2} (1/(2π)^{1/2})    (4.69)

The integer constants l and m_l turn out to be quantum numbers associated with angular momentum. Recall Eq. (4.25):

⟨B⟩ = ∫_{−∞}^{∞} Ψ*(r̄, t) B_op Ψ(r̄, t)dV    (4.70)

If we calculated the angular momentum of our system, we would get

⟨L²⟩ = l(l + 1)ħ²    (4.71)

Thus l determines the expectation value of the angular momentum and is called the angular momentum quantum number. Likewise, if we calculated the z component of angular momentum, we would obtain

⟨L_z⟩ = m_l ħ    (4.72)


Figure 4.7 The hydrogenic atom.

Normally, atoms or molecules are randomly oriented. However, in the presence of a magnetic field they will align. Thus, ml is called the magnetic quantum number.

4.3.3 The Hydrogenic Atom

We can obtain a relatively simple and illustrative, although inaccurate, solution to the radial wave equation of an atom if we assume that the atom is composed of a positively charged nucleus and a single electron, as illustrated in Fig. 4.7, where e is the electronic charge, 1.602 × 10⁻¹⁹ C. We also assume that the potential energy term is given by Coulomb's law:

V(r) = ∫_∞^r (e²/4πε₀r′²) dr′ = −e²/4πε₀r    (4.73)

Starting with the radial wave equation

d/dr(r² dR/dr) + [(2μr²/ħ²)(ε_int − V_int(r)) − α]R = 0    (4.74)

and inserting V and α (the two-particle separation constant), we obtain

(1/r²) d/dr(r² dR/dr) + [(2μ/ħ²)(ε_int + e²/4πε₀r) − l(l + 1)/r²]R = 0    (4.75)

It is useful to transform this equation so that

(1/ρ²) d/dρ(ρ² dR/dρ) + [n/ρ − 1/4 − l(l + 1)/ρ²]R = 0    (4.76)

where ρ = 2r/na₀, ε_n = −ħ²/2μa₀²n², and a₀ = 4πε₀ħ²/μe².

Equation (4.76) is an example of an ordinary differential equation that is amenable to solution using series methods. Many such equations were solved in the nineteenth century by mathematicians, this one by Edmond Laguerre. The solution is of the form

R_nl(ρ) = −{(n − l − 1)!/2n[(n + l)!]³}^{1/2} (2/na₀)^{3/2} ρ^l exp(−ρ/2) L_{n+1}^{2l+1}(ρ)    (4.77)


Figure 4.8 Hydrogen atom energy levels.

where

L_{n+1}^{2l+1}(ρ) = Σ_{k=0}^{n−l−1} (−1)^{k+1} [(n + l)!]² ρ^k / (n − l − 1 − k)!(2l + 1 + k)!k!    (4.78)

is the associated Laguerre polynomial (see Abramowitz and Stegun [27]). Again we obtain an index, n, called the principal quantum number. It can take on values n = 1, 2, 3, . . . Values of l, the angular momentum quantum number, are restricted to being less than n, or l = 0, 1, 2, . . . , n − 1. Solving for the energy, we obtain (where Z is the nuclear charge)

ε_n = −(Z²e⁴μ/32π²ε₀²ħ²)(1/n²),  n = 1, 2, 3, . . .    (4.79)
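Equation (4.79) is easy to evaluate numerically. Using rounded constants, with the reduced mass approximated by the electron mass, the ground state comes out near −13.6 eV, and the n = 1 → 2 spacing corresponds to the Lyman-α wavelength of about 121.6 nm:

```python
import math

e = 1.602e-19      # electron charge, C
me = 9.109e-31     # electron mass, kg (approximates the reduced mass)
eps0 = 8.854e-12   # vacuum permittivity, F/m
hbar = 1.055e-34   # J*s
h = 6.626e-34      # J*s
c = 2.998e8        # m/s

def hydrogen_energy(n, Z=1, mu=me):
    """Eq. (4.79), in joules (negative: bound states)."""
    return -Z**2 * e**4 * mu / (32.0 * math.pi**2 * eps0**2 * hbar**2) / n**2

E1_eV = hydrogen_energy(1) / e                  # ground state, ~ -13.6 eV
dE = hydrogen_energy(2) - hydrogen_energy(1)    # n = 1 -> 2 spacing, ~10.2 eV
lam_lyman_alpha = h * c / dE                    # ~1.22e-7 m
```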

This result is illustrated in Fig. 4.8. Note that by convention, energy is referenced to zero for n → ∞, where the nucleus and electron are infinitely far apart. (In that case we say the atom is ionized.) n = 1 is called the ground state and is the state with the lowest energy. For this ideal hydrogenic atom, the other quantum numbers, l and m_l, do not affect the energy. However, their values are limited by the value of n. One can show that the allowed values of l and m_l are limited as

l = 0, 1, 2, 3, . . . , n − 1;  l = i + |m_l|;  i = 0, 1, 2, . . .    (4.80)

so that

g_n = n²    (4.81)

One further complication involves the electrons. Quantum mechanics tells us that electrons, in addition to orbiting the nuclei, have spin. The spin angular momentum is given by

⟨S²⟩ = s(s + 1)ħ²    (4.82)

where s is the spin angular momentum quantum number. The value of s is limited to 1/2. As with the general case,

⟨S_z⟩ = m_s ħ    (4.83)

where

m_s = ±1/2    (4.84)

Therefore, for our simple hydrogenic atom the total degeneracy is actually

g_n = 2n²    (4.85)

4.3.4 The Born–Oppenheimer Approximation and the Diatomic Molecule

In general, the analytic solution of the wave equation for molecules is a complex, time-dependent problem. Even for a diatomic molecule it is a many-body problem, since all the electrons must be accounted for. However, there is a great simplification if the electronic motion can be decoupled from the nuclear motion, as pointed out by Max Born and J. Robert Oppenheimer in 1927. This turns out to be possible because the mass of an electron is about 1/1836th that of a proton. This means that the electrons are moving much faster than the nuclei, and approach steady motion rapidly. As a result, the effect of the electrons on the nuclei is an effective force between the nuclei that only depends on the distance between them. Thus, the force can be described as due to a potential field.

We know that stable potentials (i.e. ones that can result in chemical bonding) are attractive at a distance and repulsive as the nuclei approach each other. One simple function that contains this behavior is the Morse potential, which is illustrated in Fig. 4.9:

V(r) = D_e [1 − e^{−β(r−r_e)}]²    (4.86)

Real potentials must either be determined experimentally or from numerical solutions of the wave equation. However, most bonding potentials look more or less like Fig. 4.9. If we concern ourselves with molecules in quantum states such that only the lower-energy portion of the potential is important, then the typical potential looks much like that for a harmonic oscillator (Fig. 4.9). In this case the force between the nuclei is

F = k(r − r_e)    (4.87)

where r_e is the equilibrium nuclear spacing and k is a force constant. The potential energy is then

V(r) = k(r − r_e)²/2    (4.88)
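The harmonic approximation can be illustrated numerically. The Morse parameters below are illustrative placeholders (not data for any particular molecule); matching curvatures at r_e gives k = 2D_eβ², and the two potentials then agree near the bottom of the well but diverge far from it:

```python
import math

# Illustrative (assumed) Morse parameters:
De = 8.0e-19     # well depth, J
beta = 2.0e10    # 1/m
re = 1.2e-10     # equilibrium spacing, m

def V_morse(r):
    """Eq. (4.86)."""
    return De * (1.0 - math.exp(-beta * (r - re)))**2

def V_harmonic(r):
    """Eq. (4.88) with the curvature-matched force constant k = 2*De*beta^2."""
    k = 2.0 * De * beta**2
    return 0.5 * k * (r - re)**2

# Close to re the two potentials agree; several 1/beta away from re the
# harmonic form keeps rising while the Morse form levels off at De.
near = 0.02e-10
ratio_near = V_harmonic(re + near) / V_morse(re + near)
ratio_far = V_harmonic(re + 5.0 / beta) / V_morse(re + 5.0 / beta)
```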

We now consider the two extra modes of motion available to a molecule. The first is rotation (Fig. 4.10). In a real molecule, rotation will exert a centripetal force that will cause the bond between the nuclei to stretch. However, if we assume that the two


Figure 4.9 Morse and harmonic potentials.

Figure 4.10 Rotational motion of a diatomic molecule.

nuclei are rigidly affixed to each other, then we can use the angular solution to the wave equation that we developed above. This is called the rigid rotor approximation. The angular momentum is

L = Iω    (4.89)

where I is the moment of inertia

I = μr²    (4.90)

and μ is the reduced mass

μ = m₁m₂/(m₁ + m₂)    (4.91)

The energy of rotation then becomes

ε_R = ½Iω² = L²/2I = l(l + 1)ħ²/2μr²    (4.92)

For molecular rotation it is common to use J as the rotational quantum number, so

ε_R = J(J + 1)ħ²/2μr²    (4.93)

Next we consider vibration, or the relative motion of the nuclei with respect to each other (Fig. 4.11). If we neglect rotation and consider that the internuclear force acts like a linear spring, then we can use the harmonic potential of Eq. (4.88) in the radial wave equation

(ħ²/2μr²) d/dr(r² dR/dr) + [ε − V(r) − l(l + 1)ħ²/2μr²]R = 0    (4.94)

This equation can be solved analytically using series methods. Introducing the transformations

K(r) = rR(r) and x = r − r_e    (4.95)

the equation becomes

d²K/dx² + (2μ/ħ²)(ε_V − kx²/2)K = 0

Introducing a non-dimensional form for energy (where ν_V is a characteristic frequency)

λ = 2ε_V/hν_V, where ν_V = (1/2π)√(k/μ)    (4.96)

Figure 4.11 Vibrational motion of a diatomic molecule.


and defining a new independent variable

y = (2πν_V μ/ħ)^{1/2} x    (4.97)

one obtains a non-dimensional form of the radial wave equation

d²K/dy² + (λ − y²)K = 0    (4.98)

Let us examine the solution as y → ∞:

d²K/dy² − y²K ≅ 0  ⇒  K ≅ exp(−y²/2)    (4.99)

Assume that we can express K(y) as a factor H(y) times the large-y limiting solution

K(y) = H(y) exp(−y²/2)    (4.100)

Then

d²H/dy² − 2y dH/dy + (λ − 1)H = 0    (4.101)

This is another equation solved by a nineteenth-century mathematician, Charles Hermite. It has solutions only when

(λ − 1) = 2v,  v = 0, 1, 2, 3, . . .    (4.102)

H(y) is the Hermite polynomial of degree v:

H_v(y) = (−1)^v e^{y²} d^v/dy^v (e^{−y²})    (4.103)

The energy is

ε_v = (v + ½)hν_v, where v = 0, 1, 2, 3, . . .    (4.104)

Note that the energy does NOT go to zero; ε_{v=0} = ½hν_v is called the zero-point energy. In addition, the degeneracy is unity (g_v = 1). The resulting vibrational energy levels are shown superimposed on Fig. 4.12, along with what the real energy levels might look like.

Putting the rotational and vibrational solutions together, the total energy of these two modes becomes

ε_R + ε_v = (ħ²/2I_e) J(J + 1) + hν_v (v + ½)    (4.105)

where

J = 0, 1, 2, 3, . . . and v = 0, 1, 2, 3, . . .    (4.106)

with degeneracies

g_J = 2J + 1 and g_v = 1    (4.107)


Figure 4.12 Rotational and vibrational energy levels for a diatomic molecule.

The energy is more commonly expressed as

F(J) = ε_R/hc = B_e J(J + 1) and G(v) = ε_v/hc = ω_e (v + ½)    (4.108)

where the units are cm⁻¹. This nomenclature was developed by spectroscopists, who noted that the inverse of the wavelength of spectroscopic transitions is proportional to the energy difference between the two quantum states involved in the transition because

Δε = hν = hc/λ    (4.109)

Alternatively, we can express the energy in terms of characteristic temperatures:

T_R ≡ ħ²/2I_e k = B_e/(k/hc) and T_V ≡ hν_v/k = ω_e/(k/hc)    (4.110)

For N₂, for example, B_e = 1.998 cm⁻¹, ω_e = 2357.6 cm⁻¹, T_R = 2.87 K, and T_V = 3390 K.
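The conversion in Eq. (4.110) is a one-liner: hc/k ≈ 1.4388 cm·K turns an energy in cm⁻¹ into a temperature. Applying it to the N₂ constants quoted above reproduces the tabulated characteristic temperatures:

```python
# hc/k ~ 1.4388 cm*K converts a spectroscopic energy in cm^-1 to kelvin.
hc_over_k = 1.4388   # cm*K

def char_temp(energy_cm):
    """Characteristic temperature, Eq. (4.110), for an energy in cm^-1."""
    return energy_cm * hc_over_k

Be_N2 = 1.998    # cm^-1, from the text
we_N2 = 2357.6   # cm^-1, from the text

TR = char_temp(Be_N2)   # ~2.87 K: rotation fully excited at room temperature
TV = char_temp(we_N2)   # ~3390 K: vibration barely excited at room temperature
```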

Note that B_e ≪ ω_e. The term symbol, or classification, is of the form

^{2S+1}L_J    (4.131)

L is designated by the following capital letter notation:

L:      0  1  2  3  4  ...
Symbol: S  P  D  F  G  ...

An important property is the multiplicity. This is defined as the number of different values of J which are possible for given values of L and S. It is equal to 2S + 1 for S ≤ L and 2L + 1 for L < S. The superscript on the term classification is equal to the multiplicity only if S ≤ L. A multiplet is a group of quantum states that are in every way


Table 4.2 Electronic energy levels of sodium

Optical electron configuration   Term classification   Energy (cm⁻¹)   Degeneracy
3s                               ²S_1/2                0               2
3p                               ²P_1/2,3/2            16,965          6
4s                               ²S_1/2                25,740          2
3d                               ²D_5/2,3/2            29,173          10
4p                               ²P_1/2,3/2            30,269          6
5s                               ²S_1/2                33,201          2

Figure 4.15 Sodium energy-level diagram.

equal, except for differing energy levels. Because of spin–orbit coupling, these states may have slightly different energies. The degeneracy of an energy level with given J is

g_J = 2J + 1    (4.132)

If the energies of all components of a multiplet are equal, the degeneracy of the multiplet is

g_mult = Σ_{i=1}^{N} g_{J,i}    (4.133)

where N is the number of components in the multiplet. Data on atomic structure are typically presented in tabular and graphical form. An example of the lower electronic energy levels for sodium is given in Table 4.2. The energy levels can be represented graphically using an energy-level diagram. An example for sodium is given in Fig. 4.15. Note the 2 P1/2,3/2 multiplet. The radiative transitions to these two components from the 2 S1/2 state are the famous Fraunhofer D lines.
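Equations (4.132) and (4.133) reproduce the degeneracy column of Table 4.2 directly; for example, the ²P_1/2,3/2 multiplet contributes (2·½ + 1) + (2·3/2 + 1) = 6 states:

```python
from fractions import Fraction

def g_multiplet(J_values):
    """Eq. (4.133): sum of g_J = 2J + 1 over the multiplet components."""
    return sum(2 * J + 1 for J in J_values)

g_2S = g_multiplet([Fraction(1, 2)])                   # 2
g_2P = g_multiplet([Fraction(1, 2), Fraction(3, 2)])   # 2 + 4 = 6
g_2D = g_multiplet([Fraction(5, 2), Fraction(3, 2)])   # 6 + 4 = 10
```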


4.5 Real Molecular Behavior

All of the above complexities, such as spin–orbit coupling, carry through to molecular behavior. The electronic quantum state is characterized by the axial orbital angular momentum quantum number Λ, the total electron spin quantum number S, and the axial spin quantum number Σ. Λ can take on values of 0, 1, 2, . . .; S may assume integer or half-integer values; and Σ can take on the values

Σ = −S, −S + 1, . . . , S − 1, S    (4.134)

The term classification symbol for the diatomic molecule is

^{2S+1}Λ    (4.135)

As with atoms, the value of Λ is represented by letters, in this case Greek:

Λ:  0  1  2  3  4  ...
    Σ  Π  Δ  Φ  Γ  ...

Multiplicity is also an important property in molecules. For states with Λ > 0, orbital–spin interactions can cause small variations in energy for states with different Σ. The number of such states is equal to 2S + 1. Thus, the degeneracy of each level is

g_e = 2S + 1    (4.136)

for Λ = 0 and

g_e = 2    (4.137)

for Λ > 0. As an example, the state classifications for the lowest four electronic states of NO are

Classification   Energy (cm−1)   g_e
²Π_{1/2}         0               2
²Π_{3/2}         121             2
²Σ⁺              43,965          2
²Π               45,930          4

In most molecules, the potential energy function is not exactly harmonic, nor is the rigid rotator approximation exactly met. For small departures from ideal conditions, a perturbation analysis is appropriate. In this case, the energy terms become

G(v) + F_v(J) ≅ ω_e(v + 1/2) − ω_e x_e(v + 1/2)² + ω_e y_e(v + 1/2)³ − · · · + B_v J(J + 1) − D_v J²(J + 1)² + · · ·    (4.138)

where x_e, y_e, and D_v are additional constants depending on the particular molecule and its electronic state. Data for a wide variety of diatomic molecules were tabulated by Herzberg [28] and can be found in the NIST Chemistry Webbook.
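A sketch of evaluating Eq. (4.138) numerically; the function and argument names are ours, and the constants must come from a compilation such as Herzberg [28] or the NIST Chemistry Webbook:

```python
# Term values (in cm^-1) from the anharmonic/centrifugal expansion of
# Eq. (4.138), truncated after the terms shown.
def term_value(v, J, we, wexe, Bv, Dv, weye=0.0):
    """G(v) + F_v(J) per Eq. (4.138); all constants in cm^-1."""
    G = we * (v + 0.5) - wexe * (v + 0.5) ** 2 + weye * (v + 0.5) ** 3
    F = Bv * J * (J + 1) - Dv * J ** 2 * (J + 1) ** 2
    return G + F

# With the anharmonic and centrifugal corrections switched off, this
# reduces to the rigid rotator/harmonic oscillator result.
rrho = term_value(0, 1, we=2357.6, wexe=0.0, Bv=1.998, Dv=0.0)
print(rrho)  # = 2357.6/2 + 2*1.998 cm^-1
```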

Figure 4.16 OH energy-level diagram (from Radzig and Smirnov [9]).

As with atoms, molecular energy states can be represented graphically. However, it is usual to plot the energy levels in terms of the electronic potential functions as a function of separation of the nuclei. The vibrational and rotational states are then plotted as horizontal lines on each potential. As an example, Fig. 4.16 shows the energy-level diagram for OH. Not all electronic configurations result in stable potentials. The ⁴Σ⁻ state is an example of an unstable state, in that it always results in a repulsive force between the two nuclei. Such states are called predissociative. There are situations in which such states can be used in laser-induced fluorescence instrumentation schemes. As the number of nuclei increases, the complexity of describing their structure increases correspondingly. In a molecule with n atoms, there are 3n degrees of freedom of motion. Three degrees are associated with translational motion. For the general case, three degrees are associated with the angular orientation of the molecule, or rotational motion. Thus, in general, there are 3n − 6 degrees of freedom for vibrational motion. The exceptions are linear molecules, which have only two degrees of rotational freedom and thus 3n − 5 degrees of vibrational freedom. In some cases, the Born–Oppenheimer approximation holds, and simple electronic potentials result. In this case, the vibrational and rotational expressions given above are descriptive. In many cases,


however, the complexity prevents this simple view. See Herzberg [29] for a detailed discussion of the structure and spectral properties of polyatomic molecules. In practice, formulas such as those given by Eq. (4.138) are sometimes not sufficiently accurate for precision spectroscopic purposes. The approximations made in their derivation do not hold exactly, and the resulting errors can result in incorrect identification of spectral features. As a result, more precise structural data are usually required. One compilation of such data is LIFBASE, a program written by Luque and Crosley [30] at the Stanford Research Institute to assist in interpreting laser-induced fluorescence measurements. The program contains data for OH, OD, NO, CH, and CN.

4.6 Molecular Modeling/Computational Chemistry

Molecular modeling, or computational chemistry, is the discipline of using numerical methods to model atomic and molecular behavior. It is used to study structure, energies, electron density distribution, moments of the electron density distribution, rotational and vibrational frequencies, reactivity, and collision dynamics. It can be used to model individual atoms or molecules, gases, liquids, and solids. Methods range from the highly empirical to ab initio, which means from basic principles. Ab initio methods are based on quantum mechanics and fundamental constants. Other methods are called semi-empirical, as they depend on parametrization using experimental data.

While the simple solutions to the wave equation that we derived in Section 4.3 are extremely useful for understanding the underlying physics of atomic and molecular structure, they are not very accurate. Two good examples are the rigid rotator and the harmonic oscillator. A key assumption of the rigid rotator model is that the distance between the atoms is fixed. In fact, it depends on the vibrational state, thus distorting the simple solution. The harmonic oscillator ignores non-harmonic behavior as the molecule approaches the dissociation state. Even the polynomial correction formulas given in the previous section are not particularly accurate if fine details are desired, or as the potential function becomes more asymmetric.

Molecular modeling methods include the main branches listed in Table 4.3. The reason that different methods have evolved has to do with computational efficiency and the need for certain levels of accuracy. The least-expensive methods are

Table 4.3 Methods of molecular modeling

Method                                   Description
Molecular mechanics/molecular dynamics   Modeling the motion of atoms and molecules using Newton's laws. Forces between atoms are modeled algebraically.
Hartree–Fock (HF)                        Numerical solution of the wave equation. There are many variations of HF.
Density functional theory (DFT)          Based on the concept of density functionals. Less accurate than HF methods but computationally less expensive.
Semi-empirical                           Methods that parameterize some portion of an HF or DFT model to speed calculation.


Table 4.4 Molecular modeling software

Gaussian    General-purpose commercial software package. Includes MM, HF and enhancements, DFT, QM/MM, and other methods.
GAMESS      Open source ab initio quantum chemistry package.
Amsterdam   Commercial DFT and reactive MD package.
PSI         Open source HF and enhancements package.

those based on molecular mechanics, which we discuss in more detail in Chapter 9 on liquids. Necessarily, they are the least accurate. At the other limit, Hartree–Fock methods and their more advanced forms, which include electron correlations or coupled-cluster theory, offer the highest accuracy, but at very high computational cost. Indeed, typically they are limited to systems with no more than 10 or so atoms. Semi-empirical methods were developed to be less costly than full ab initio methods, while maintaining some of the benefits of the full methods. DFT methods, while less accurate than HF methods, are much more accurate than semi-empirical methods and are mostly used to study solid-state behavior. There are also combined methods, such as QM/MM (quantum mechanics/molecular mechanics), which are used to model reactions between large biomolecules where the reaction involves only a few atoms. There are many commercial and open source computational chemistry software packages available. Table 4.4 provides a short list of packages that find common use in engineering. Gaussian is probably the most popular commercial package for those needing HF and post-HF methods, although it covers all four major method areas. Amsterdam is mostly used by materials researchers. GAMESS is one of the most used open source ab initio codes. PSI is a simpler open source alternative to GAMESS. An important consideration is how the user is to interact with the calculations. The output of these codes involves large sets of data, which are difficult to parse. Therefore, it is necessary to have some form of visualization capability. Gaussian and Amsterdam come with their own visualization software, while GAMESS and PSI require third-party packages, of which fortunately there are many. VMD and Avogadro are two examples.

4.6.1 Example 4.1

Calculate the equilibrium structure of the molecule CH2 using GAMESS and plot the result using VMD. Methylene is a radical species, meaning it would rather be CH4, and so is highly reactive. Below is an input file for this calculation.

Listing 4.1: GAMESS input for CH2

 $CONTRL SCFTYP=RHF RUNTYP=OPTIMIZE COORD=ZMT NZVAR=0 $END
 $SYSTEM TIMLIM=1 $END
 $STATPT OPTTOL=1.0E-5 $END
 $BASIS  GBASIS=STO NGAUSS=2 $END
 $GUESS  GUESS=HUCKEL $END
 $DATA
Methylene...1-A-1 state...RHF/STO-2G
Cnv 2

C
H 1 rCH
H 1 rCH 2 aHCH

rCH=1.09
aHCH=110.0
 $END

The output file is quite large. However, it can be read by VMD or Avogadro. Doing so, the predicted structure can be displayed and the geometry measured. In this case the bond distances are each 1.124 Å and the bond angle is 99.3◦ . See Fig. 4.17.
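The geometry measurement that VMD or Avogadro performs is simple vector arithmetic on the optimized Cartesian coordinates. In the sketch below the coordinates are hypothetical, constructed to reproduce the reported values rather than read from an actual GAMESS output:

```python
# Measuring bond length and bond angle from Cartesian coordinates, as a
# visualization package would. The coordinates are hypothetical,
# constructed to match the reported rCH = 1.124 A and aHCH = 99.3 deg.
import math

def bond_length(a, b):
    return math.dist(a, b)

def bond_angle(a, center, b):
    """Angle a-center-b in degrees."""
    v1 = [ai - ci for ai, ci in zip(a, center)]
    v2 = [bi - ci for bi, ci in zip(b, center)]
    dot = sum(x * y for x, y in zip(v1, v2))
    cosang = dot / (math.hypot(*v1) * math.hypot(*v2))
    return math.degrees(math.acos(cosang))

theta = math.radians(99.3)
C = (0.0, 0.0, 0.0)
H1 = (1.124, 0.0, 0.0)
H2 = (1.124 * math.cos(theta), 1.124 * math.sin(theta), 0.0)

print(round(bond_length(C, H1), 3))     # 1.124
print(round(bond_angle(H1, C, H2), 1))  # 99.3
```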

Figure 4.17 Predicted structure of CH2 .

4.7 Summary

The most important result of quantum mechanics is that atoms and molecules typically exist in discrete states, known as quantum states. It turns out that at the very small length and time scales of atomic and molecular behavior, Newtonian physics does not apply. Instead, their behavior is the domain of quantum mechanics. In this chapter we provided a brief history of how quantum mechanics came to be developed, including the wave–particle duality question and Heisenberg's uncertainty principle. We then introduced the Schrödinger wave equation:


Table 4.5 Simple solutions of the wave equation

                      Quantum number(s)   Energy                                      Degeneracy
Particle in box       n_x, n_y, n_z       ε_tr = h²(n_x² + n_y² + n_z²)/(8mV^{2/3})   (see Chapter 5)
Rigid rotator         J                   ε_R = J(J + 1)B_e                           g_J = 2J + 1
Harmonic oscillator   v                   ε_V = ω_e(v + 1/2)                          g_v = 1
Hydrogenic atom       n                   ε_n = −Z²e⁴μ/(32π²ε_0²ħ²n²)                 g_n = 2n²

−(ħ²/2m)∇²Ψ(r, t) + V(r, t)Ψ(r, t) = iħ ∂Ψ(r, t)/∂t    (4.139)

We explored some simple solutions of the wave equation: particle in a box, rigid rotator, harmonic oscillator, and hydrogenic atom. In each case the allowed solutions were discrete in nature. One finding is that the solutions are a function of integer numbers, known as quantum numbers. In addition, for some cases, multiple allowed solutions resulted in the atom or molecule having the same energy. This is known as degeneracy. The important results for thermodynamics are the quantum numbers, the allowed energies, and the degeneracy associated with each energy. These are summarized in Table 4.5. For real atoms and molecules, the situation is more complex. For atoms, the energies and degeneracies do not follow the simple rules of the hydrogenic atom. One must take into account angular momentum and the coupling of various rotational motions. These lead to quite different results, and are difficult to calculate without advanced numerical methods. Furthermore, the Pauli exclusion principle must be taken into account. We used this effect to build up the periodic table. Typically, for energies and degeneracies we utilize data compilations such as are available in the NIST Chemistry Webbook. This is also true of molecules, although there are empirical algebraic expressions for diatomic molecules that can be used to calculate properties. However, for complex situations we are forced to use data from experiments and/or advanced numerical calculations.

4.8 Problems

4.1 The umbrella of ozone in the upper atmosphere is formed from the photolysis of O2 molecules by solar radiation according to the reaction

O2 + hν → O + O

Calculate the cutoff wavelength above which this reaction cannot occur. Then find a plot of the solar spectrum in the upper atmosphere and identify which portion of the spectrum will cause the O2 to dissociate.

4.2 What is the physical interpretation of the wave function? How is this quantity obtained and, once known, how may it be used?

4.3 Why is the wave function normalized?

4.4 What is meant by the "expectation value" of a dynamical variable?

4.5 What is degeneracy?

4.6 What is the expectation value of the x-momentum of a particle in a box of dimension L on each side?

4.7 What assumption enabled the description of internal motion in spherical coordinates?

4.8 For a molecule that can be modeled as a rigid rotator, what is the degeneracy of a rotational state with quantum number J = 23?

4.9 What is the physical significance of m_l?

4.10 Consider an O2 molecule. Assume that the x-direction translational energy, the rotational energy, and the vibrational energy each equal about kT/2. What would the translational, rotational, and vibrational quantum numbers be at 300 K and a volume of 1 cm³?

4.11 A combustion engineer suggests that a practical means of eliminating NO from engine exhaust is by photodecomposition according to the reaction

NO + hν → N + O

He suggests using He–Ne laser beams at 632.8 nm as the radiation source. Would the idea work?

4.12 For nitrogen molecules occupying a volume of 1 cm³, calculate the translational energies corresponding to states for which (n1, n2, n3) are (1, 1, 1), (100, 100, 100), and (1000, 1000, 1000).

4.13 Consider the energy levels of a hydrogenic atom for which n = 3. In terms of appropriate quantum number values, list the different eigenstates, first assuming no spin–orbit coupling and then with spin–orbit coupling. Compare the degeneracies for the two cases (including electron spin) and sketch the component levels that result from spin–orbit coupling. Use Fig. 4.13 as a guide.

4.14 For the diatomic molecule assigned to you in class (one per person), calculate and plot the ground electronic state vibrational and rotational-level energies. The plot should be drawn like the figure below, but also include rotational levels. Include the first five vibrational levels and the first five rotational levels for each v. Remember you can get the harmonic oscillator constant k from ω_e.

4.15 For the diatomic molecule I2, determine the rotational quantum number for the rotational energy state that is equal in value to the energy difference between two adjacent vibrational energy levels. Assume the rigid rotator/harmonic oscillator solution applies.


Energy levels of diatomic molecule.

4.16 Carry out a series of calculations on the same diatomic molecule you were assigned in class. The sequence of steps is given below. Do this for two semi-empirical methods (AM1 and PM3) and then two ab initio methods. For the two ab initio methods choose the Small (3-21G) and Large (6-31G**) basis sets. In all cases just use the other default settings.

(a) Draw the molecule.
(b) Select a method (semi-empirical or ab initio).
(c) Optimize the geometry and note the bond length.
(d) Do a single point calculation.
(e) Do a vibrational calculation.
(f) Display the vibrational results and note the vibrational frequency.

Use the NIST database to obtain experimental values of the bond length and vibrational frequency. Compare these with your calculated results. Turn in a summary of your findings.

5 Ideal Gases

Here we deal with non-interacting, or ideal, gases. We find that we are in a position to calculate the partition function for such gases in a fairly straightforward way. We shall see that the partition function can be separated into several components, representing each of the various modes of motion available to the atom or molecule. In some cases, simple algebraic relations are obtained. For the monatomic gas, only translational and electronic motion must be considered. For molecules, vibrational and rotational motion must be added. This task is simplified if the Maxwell–Boltzmann limit holds, and we will make that assumption for monatomic and molecular gases.

5.1 The Partition Function

We start with the expression for the single-component grand canonical partition function in a gas that satisfies the Maxwell–Boltzmann limit:

ln Q_MB = Σ_k e^{−βε_k − γ}    (5.1)

Because γ is not a function of the eigen-index k, we can factor e^{−γ} out of the summation, so that

ln Q_MB = e^{−γ} Σ_k e^{−βε_k}    (5.2)

The sum q = Σ_k e^{−βε_k} is called the "molecular partition function." We can separate the molecular partition function by noting that the energy can be written as the sum of the translational and internal energy

ε = ε_tr + ε_int    (5.3)

and that in general

e^{A+B} = e^A e^B    (5.4)

If A and B are statistically independent of each other, then

Σ_{A,B} e^{−A} e^{−B} = Σ_A e^{−A} Σ_B e^{−B}    (5.5)


Thus, we can write

ln Q_MB = e^{−γ} (Σ_k e^{−βε_tr,k})(Σ_k e^{−βε_int,k})    (5.6)

and

ln Q_MB = e^{−γ} q_tr q_int    (5.7)

where the q's are the molecular partition functions. It remains to evaluate them.

5.2 The Translational Partition Function

To evaluate the translational partition function requires the quantum-mechanical expression for the translational energy, which is

ε_tr = (h²/8m)[(n_x/l_x)² + (n_y/l_y)² + (n_z/l_z)²]    (5.8)

Here, the n's are the translational quantum numbers and the l's the macroscopic dimensions in the three coordinate directions. Therefore, if the motions in the three coordinate directions are statistically independent, then

q_tr = q_x q_y q_z = Σ_{n_x} e^{−βh²n_x²/(8m l_x²)} Σ_{n_y} e^{−βh²n_y²/(8m l_y²)} Σ_{n_z} e^{−βh²n_z²/(8m l_z²)}    (5.9)

We can obtain an analytic expression for q_tr if we note that, in practice, we are dealing with very large quantum numbers. Thus, we can treat the exponentials as continuous rather than discrete functions. For example,

q_x = Σ_{n_x} e^{−βh²n_x²/(8m l_x²)} ≅ ∫_0^∞ e^{−βh²n_x²/(8m l_x²)} dn_x    (5.10)

If we let x = (βh²/8m)^{1/2} n_x/l_x, then

q_x ≅ (8m/βh²)^{1/2} l_x ∫_0^∞ e^{−x²} dx    (5.11)

The value of the definite integral can be found in math tables and is equal to √π/2. Substituting that value and repeating for each coordinate direction, we finally obtain (β = 1/kT)

q_tr = (2πmkT/h²)^{3/2} V    (5.12)

where V = l_x l_y l_z.

Let us now explore the distribution of particles over the allowed translational energy states. Recall that the number of particles in a given state is

N_k = e^{−γ} e^{−βε_k}    (5.13)
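Before moving on, Eq. (5.12) is easy to evaluate numerically; a sketch for O2 at 300 K in a volume of 1 m³ (cf. Problem 5.1):

```python
# Translational partition function of Eq. (5.12),
# q_tr = (2*pi*m*k*T/h^2)^(3/2) * V, for O2 at 300 K and V = 1 m^3.
import math

K_B = 1.380649e-23    # J/K
H = 6.62607015e-34    # J*s
AMU = 1.66053907e-27  # kg

def q_tr(m_kg, T, V):
    return (2 * math.pi * m_kg * K_B * T / H**2) ** 1.5 * V

q = q_tr(32.0 * AMU, 300.0, 1.0)
print(f"{q:.3g}")  # on the order of 1e32
```

The enormous value illustrates why the continuum approximation used above is so good: vastly more translational states are accessible than there are particles.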


and since

N = e^{−γ} Σ_k e^{−βε_k}    (5.14)

the probability of finding a particle in a given state is

N_k/N = e^{−βε_k} / Σ_k e^{−βε_k}    (5.15)

This is called the Boltzmann ratio and is generally applicable for any mode of motion. For translational energy the differences between energy levels are so small that the energy varies continuously, and we can write

f(ε)dε = e^{−βε} dg / Σ_k e^{−βε_k}    (5.16)

where f is a probability distribution function, f(ε)dε is the fraction of particles in the interval ε to ε + dε, and dg is the degeneracy for the energy increment. The denominator is, of course, the translational energy partition function that we have already derived. To relate dε to dg, first consider that

ε = h²n²/(8mV^{2/3})    (5.17)

where n is the total translational quantum number, equal to (n_x² + n_y² + n_z²)^{1/2}. Therefore, n is the radius of a one-eighth sphere in the positive octant of translational quantum number space. All states on the surface of the sphere have the same energy. Therefore, each state with energy between ε and ε + dε occupies the one-eighth spherical shell between n and n + dn. The volume of this shell is

dg = (1/8) 4πn² dn    (5.18)

Differentiating Eq. (5.17) with respect to n and substituting into Eq. (5.18), one obtains

dg = 2πV (2m/h²)^{3/2} √ε dε    (5.19)

Substituting this into Eq. (5.16), we obtain

f(ε) = [2ε^{1/2} / (π^{1/2}(kT)^{3/2})] e^{−ε/kT}    (5.20)

Now, the translational energy of an individual molecule is

ε = mc²/2    (5.21)

where c is the particle speed, so that

dε = mc dc    (5.22)


Figure 5.1 The Maxwellian speed distribution.

Note that we must have

f(ε)dε = f(c)dc    (5.23)

which follows from the idea that the probability of a particle assuming a certain energy must correspond to the probability of it assuming the corresponding speed. Substituting Eqs (5.21)–(5.23) into Eq. (5.20), we obtain

f(c) = (2/π)^{1/2} (m/kT)^{3/2} c² e^{−mc²/2kT}    (5.24)

This relationship, which plays a very important role in the kinetic theory of gases, is called the Maxwellian speed distribution and is illustrated in Fig. 5.1. Useful properties that can be obtained from the Maxwellian speed distribution include the average speed

c̄ = ∫_0^∞ c f(c) dc = (8kT/πm)^{1/2}    (5.25)

the most probable speed (obtained by differentiating the Maxwellian and setting the result to zero)

c_mp = (2kT/m)^{1/2}    (5.26)


and the mean-square speed

c̄² = ∫_0^∞ c² f(c) dc = (3/2)(2kT/m) = (3/2) c_mp²    (5.27)

whose square root is the root-mean-squared speed, c_rms = (3kT/m)^{1/2}.
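Equations (5.25)–(5.27) evaluated for O2 at 300 K (a sketch; note the fixed ordering c_mp < c̄ < c_rms):

```python
# Characteristic speeds of the Maxwellian distribution, Eqs (5.25)-(5.27),
# for O2 at 300 K.
import math

K_B = 1.380649e-23    # J/K
AMU = 1.66053907e-27  # kg

def speeds(m_kg, T):
    c_bar = math.sqrt(8 * K_B * T / (math.pi * m_kg))  # average, Eq. (5.25)
    c_mp = math.sqrt(2 * K_B * T / m_kg)               # most probable, Eq. (5.26)
    c_rms = math.sqrt(3 * K_B * T / m_kg)              # root-mean-square
    return c_mp, c_bar, c_rms

c_mp, c_bar, c_rms = speeds(32.0 * AMU, 300.0)
print(round(c_mp), round(c_bar), round(c_rms))  # about 395, 446, 484 m/s
```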

5.3 Monatomic Gases

Let us next consider a gas made up entirely of atoms. In that case the only other form of motion is that of the electrons orbiting the nucleus. The electronic partition function

q_e = Σ_k e^{−βε_k}    (5.28)

is a sum over quantum states. However, electronic energy states, in contrast to quantum states, are degenerate. Thus, we could also write

q_e = Σ_k g_k e^{−βε_k} = g_0 + g_1 e^{−ε_1/kT} + g_2 e^{−ε_2/kT} + · · ·    (5.29)

where now the summation is over energy states rather than quantum states. To evaluate q_e we need to know the details for each specific atom. The electronic states of many atoms can be found in the NIST Atomic Spectra Database (https://www.nist.gov/pml/atomic-spectra-database). However, recall that electronic energies are quite large, typically 20,000–80,000 K in terms of characteristic temperature. Therefore, at reasonable temperatures, only a few terms in the summation are significant. In fact, for most atoms, the room-temperature partition function is essentially equal to the first term in the summation. If that is the case, then

q_e = g_0    (5.30)

5.3.1 Example 5.1

Consider the lithium atom, Li. Calculate q_e as a function of temperature. Determine the temperature at which the partition function is increased by 10% above the case where there is no excitation. The first few electronic states of Li are given in the table below:

Optical electron configuration   Term classification   Energy (cm−1)   Degeneracy
1s²2s                            ²S_{1/2}              0               2
1s²2p                            ²P_{1/2}              14,903.622      2
1s²2p                            ²P_{3/2}              14,903.957      4
1s²3s                            ²S_{1/2}              27,206.066      2
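A sketch of the calculation the example calls for, using only the four levels above (higher levels, omitted here, shift the 10% temperature slightly; the function names are ours):

```python
# Electronic partition function of Li from Eq. (5.29), using the four
# tabulated levels (energies in cm^-1, degeneracies g). A bisection
# finds the temperature where q_e exceeds g0 by 10%.
import math

HC_OVER_K = 1.4388  # cm*K
LEVELS = [(0.0, 2), (14903.622, 2), (14903.957, 4), (27206.066, 2)]

def q_e(T):
    return sum(g * math.exp(-E * HC_OVER_K / T) for E, g in LEVELS)

def t_10_percent(lo=1000.0, hi=20000.0):
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if q_e(mid) > 1.1 * LEVELS[0][1]:
            hi = mid
        else:
            lo = mid
    return hi

print(round(t_10_percent()))  # a few hundred kelvin above 6000 K
```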


Figure 5.2 Electronic partition function for Li.

q_e is calculated using Eq. (5.29) and plotted in Fig. 5.2. The temperature at which q_e increases by 10% is 6151 K. This illustrates why, at temperatures not exceeding 2000–3000 K, electronic excitation can be ignored.

Ignoring electronic excitation, the fundamental relation for a monatomic ideal gas becomes

S(1/T, −μ/T) = k e^{μ/kT} (2πmkT/h²)^{3/2} V g_e    (5.31)

Now, recall that

dS(1/T, −μ/T) = −U d(1/T) + (p/T) dV + N d(μ/T)    (5.32)

where −U, p/T, and N are the equations of state in the grand canonical representation. Evaluating N,

N = [∂S(1/T, −μ/T)/∂(μ/T)]_{1/T} = (1/k) S(1/T, −μ/T) = pV/kT    (5.33)

This, of course, is the perfect gas law. Evaluating U, we obtain

U = −[∂S(1/T, −μ/T)/∂(1/T)]_{μ/T} = (3/2) T S(1/T, −μ/T) = (3/2) pV = (3/2) NkT    (5.34)


The average internal energy per atom is

u = U/N = (3/2) kT    (5.35)

The entropy becomes

S = S(1/T, −μ/T) + U(1/T) − N(μ/T) = k(N + U/kT − Nμ/kT) = kN(5/2 − μ/kT)    (5.36)

Finally, it is useful to calculate the specific heat c_v:

c_v = (∂u/∂T)_V = (3/2) k    (5.37)

Each of these property expressions is valid if our assumption about the electronic partition function holds. If not, then q_e is a function of temperature, and the derivatives must be evaluated taking that into account. We can assess the importance of higher-order electronic terms using the Boltzmann ratio. It can be rewritten as the ratio of the populations of two states:

N_k/N_j = g_k e^{−βε_k} / (g_j e^{−βε_j})    (5.38)

For example, consider the ratio of the first excited electronic state of the nitrogen atom to the ground state. For nitrogen, ε_1 = 19,227 cm−1 and g_0, g_1 = 4, 10, respectively. If the temperature is 300 K, then

N_1/N_0 = (10/4) e^{−19227hc/kT} = 1.67 × 10^{−40}    (5.39)
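Equation (5.38) in a few lines of Python (hc/k ≈ 1.4388 cm·K converts the wavenumber energy; the precise digits depend slightly on the constants used):

```python
# Population ratio of Eq. (5.38) for atomic nitrogen:
# N1/N0 = (g1/g0) * exp(-epsilon_1*hc/kT), epsilon_1 = 19,227 cm^-1.
import math

HC_OVER_K = 1.4388  # cm*K

def boltzmann_ratio(E_cm, g_upper, g_lower, T):
    return (g_upper / g_lower) * math.exp(-E_cm * HC_OVER_K / T)

r300 = boltzmann_ratio(19227.0, 10, 4, 300.0)
r2000 = boltzmann_ratio(19227.0, 10, 4, 2000.0)
print(f"{r300:.2g} {r2000:.2g}")  # ~1e-40 at 300 K, ~1e-6 at 2000 K
```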

However, if the temperature were 2000 K, then the ratio would be about 2.4 × 10^{−6}. Clearly, the Boltzmann ratio is a very strong function of temperature. Even so, for nitrogen only the first term of the electronic partition function sum is important at these temperatures. For other atomic species this may not be the case, and one must treat each species on a case-by-case basis.

Note, finally, that we can fairly precisely evaluate the limits of the Maxwell–Boltzmann approximation. The exact criterion is that

e^{βε_k + γ} >> 1    (5.40)

for all values of k. Since e^{βε_k} ≥ 1 for all values of k, the criterion must apply to e^γ. Using the fundamental relation to calculate e^γ, we obtain

e^γ = (kT/p)(2πmkT/h²)^{3/2} g_e    (5.41)

Since the electronic degeneracy is order unity, the inequality can be written

(kT/p)(2πmkT/h²)^{3/2} g_e >> 1    (5.42)


For hydrogen at 300 K and 1 atm, the left-hand side has a value of 3.9 × 10^{34}, clearly satisfying the inequality.
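The left-hand side of Eq. (5.42) can be checked for any species and conditions; a sketch for H2 at 300 K and 1 atm with g_e = 1 (cf. Problem 5.3; the exact number depends on the mass and degeneracy assumed):

```python
# Left-hand side of the Maxwell-Boltzmann criterion, Eq. (5.42):
# (kT/p) * (2*pi*m*k*T/h^2)^(3/2) * ge >> 1, for H2 at 300 K and 1 atm.
import math

K_B = 1.380649e-23    # J/K
H = 6.62607015e-34    # J*s
AMU = 1.66053907e-27  # kg
ATM = 101325.0        # Pa

def mb_criterion_lhs(m_kg, T, p, ge=1.0):
    return (K_B * T / p) * (2 * math.pi * m_kg * K_B * T / H**2) ** 1.5 * ge

lhs = mb_criterion_lhs(2.016 * AMU, 300.0, ATM)
print(f"{lhs:.2g}")  # far greater than 1, so the MB limit holds
```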

5.4 Diatomic Gases

For a diatomic gas, the total energy is

ε = ε_tr + ε_e + ε_v + ε_r    (5.43)

Following the arguments made in discussing the monatomic gas, we write the internal energy portion of the partition function as

Σ_k e^{−βε_k} = (Σ_k e^{−βε_tr,k})(Σ_k e^{−βε_e,k})(Σ_k e^{−βε_v,k})(Σ_k e^{−βε_r,k}) = q_tr q_e q_v q_r    (5.44)

Thus, the partition function becomes

ln Q = e^{−γ} q_tr q_e q_v q_r    (5.45)

We have already discussed the evaluation of the translational and electronic partition functions. We now consider rotation and vibration.

5.4.1 Rotation

Recall that for a rigid rotator,

ε_r = k Θ_r J(J + 1)    (5.46)

where Θ_r is the characteristic rotational temperature. The rotational degeneracy is g_J = 2J + 1. Thus,

q_r = Σ_J (2J + 1) e^{−ε_J/kT} = Σ_J (2J + 1) e^{−(Θ_r/T) J(J+1)}    (5.47)
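Direct numerical evaluation of Eq. (5.47) is a one-liner, and comparing it against the high-temperature limit T/Θ_r anticipates Problem 5.5. For N2, Θ_r = 2.87 K:

```python
# Direct numerical evaluation of the rotational sum, Eq. (5.47),
# compared with the high-temperature result q_r ~ T/Theta_r.
import math

def q_rot_sum(theta_r, T, j_max=400):
    return sum((2 * J + 1) * math.exp(-theta_r * J * (J + 1) / T)
               for J in range(j_max + 1))

theta_r = 2.87  # K, N2
T = 300.0
exact = q_rot_sum(theta_r, T)
approx = T / theta_r
print(exact, approx)  # the two agree to well under 1% at 300 K
```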

One can approach the evaluation of this sum in several ways. The most accurate is to evaluate it directly numerically. The number of terms required will depend on the ratio Θ_r/T; the higher the temperature, the more terms will be required. Alternatively, for (Θ_r/T)J(J + 1) << 1, the sum can be approximated by an integral, giving q_r ≅ T/Θ_r (Eq. (5.49)); in that limit the rotational contribution to the specific heat, for T >> Θ_r, is k.

For vibration, however, at temperatures below and near Θ_v, the contribution to the energy, and thus the specific heat, is a function of temperature. c_v/k for CO is shown in Fig. 5.7.

Figure 5.7 c_v/k as a function of temperature for CO.

As can be seen, the curve is S-shaped, representing the fact that at low temperatures all the molecules are in their ground vibrational states and the contribution of vibration is insensitive to temperature. As T increases, more molecules are in higher vibrational states. At high T, the vibrational contribution to the energy

reaches the limiting value of kT. Therefore, as T increases from values of T << Θ_v to values of T >> Θ_v, c_v goes from (5/2)k to (7/2)k. Once the vibrational contribution to c_v reaches k, vibration is said to be "fully excited." Actually, this phenomenon occurs for all modes of motion. For translation and rotation, full excitation is reached at low temperatures. Electronic motion is never fully excited, because dissociation or ionization occurs at lower temperatures than full excitation.

We have seen that real diatomic molecules do not behave exactly like the simple rigid rotator/harmonic oscillator models. If the vibrational and rotational states are merely distorted from the simple model, then one can still separate the molecular partition functions and use the more complete term expressions or experimental data to calculate the q's. However, if there is coupling between the modes, then q cannot be easily separated, and must be directly evaluated. For example, suppose that the sum of vibrational and rotational energies is

G(v) + F_v(J)    (5.65)

where the subscript on F indicates that the rotational energy depends on the vibrational state. Then we must write

q_v,r = (1/σ) Σ_v Σ_J (2J + 1) e^{−(hc/kT)[G(v) + F_v(J)]}    (5.66)

In practice, one must examine each case to determine whether the simple models provide sufficient accuracy for a given application. Most property compilations have used numerical, rather than analytic, procedures to calculate the partition functions and the equations of state. (Details regarding numerical procedures can be found in the NIST-JANNAF Thermochemical Tables [32].)
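For the harmonic oscillator, the vibrational contribution works out to c_v,vib/k = (Θ_v/2T)²/sinh²(Θ_v/2T), which reproduces the S-shaped curve of Fig. 5.7; a sketch for CO (Θ_v ≈ 3122 K, from ω_e = 2169.8 cm⁻¹):

```python
# Vibrational contribution to cv/k for a harmonic oscillator:
# cv_vib/k = (Theta_v/2T)^2 / sinh^2(Theta_v/2T).
import math

def cv_vib_over_k(theta_v, T):
    x = theta_v / (2.0 * T)
    return (x / math.sinh(x)) ** 2

theta_v = 3122.0  # K, CO
for T in (300.0, 1000.0, 3000.0, 10000.0):
    print(T, round(cv_vib_over_k(theta_v, T), 3))
# rises from near 0 at low T toward the fully excited limit of 1
```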

5.5 Polyatomic Gases

The principles for evaluating the partition function for polyatomic gases are similar to those we have already applied. The results for translational energy are identical to those for monatomic and diatomic gas assemblies, as translational energy describes the motion of the center of mass of any molecular structure. However, evaluating the internal modes of energy is somewhat more difficult because of the many modes of motion and the likelihood that they interact. For a body made up of n atoms, there are 3n degrees of freedom. Three of these are taken up by translational motion, leaving 3n − 3 modes for vibrational and rotational motion. For a linear molecule, such as CO2, there are only two rotational modes, and thus 3n − 5 possible vibrational modes. For the general nonlinear case, there are three rotational modes and thus 3n − 6 vibrational modes. If we assume that each vibrational and rotational mode is independent, then we can separate the partition function for each mode. Furthermore, if we assume that the harmonic oscillator and rigid rotator models apply, then we can write for the general case


q_v = e^{D_e/kT} ∏_{i=1}^{3n−6} [e^{−Θ_v,i/2T} / (1 − e^{−Θ_v,i/T})]    (5.67)

and

q_r = (1/σ) [π T³ / (Θ_r,1 Θ_r,2 Θ_r,3)]^{1/2}    (5.68)

where σ is the rotational symmetry factor. As with atoms and diatomic molecules, however, real polyatomic behavior rarely follows the simple models, and numerical methods must be used to evaluate the partition function.

There is a concept called the "equipartition of energy." We have noted that "fully excited" translational energy contributes (3/2)k to c_v, rotation (1/2)k for each axis of rotation, and (1/2)k each for the potential and kinetic energy of each vibrational mode. This concept can be used to estimate c_v in large molecules, as illustrated by Problem 5.7.
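The equipartition estimate can be sketched as a short function (names ours); it is an upper bound at moderate temperatures, since vibration is rarely fully excited:

```python
# Equipartition estimate of the fully excited cv for an n-atom molecule:
# 3/2 k (translation) + k/2 per rotational axis + k per vibrational
# mode (k/2 kinetic + k/2 potential). Result is in units of k.
def cv_over_k_equipartition(n_atoms, linear):
    n_rot = 2 if linear else 3
    n_vib = 3 * n_atoms - 3 - n_rot
    return 1.5 + 0.5 * n_rot + 1.0 * n_vib

print(cv_over_k_equipartition(2, linear=True))   # 3.5, i.e. 7/2 k for a diatomic
print(cv_over_k_equipartition(3, linear=True))   # 6.5 for a linear triatomic (CO2)
```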

5.6 Summary

In this chapter we began the process of relating macroscopic thermodynamic properties to the partition function for specific situations, non-interacting gases in this case. These include ideal gases, reacting mixtures of ideal gases, the photon gas, and the electron gas. Our first development involved defining the molecular partition functions. Recall that the grand canonical partition function for a gas that satisfies the Maxwell–Boltzmann limit is

ln Q_MB = Σ_k e^{−βε_k − γ}    (5.69)

Factoring e^{−γ} out of the sum, we obtain

ln Q_MB = e^{−γ} Σ_k e^{−βε_k}    (5.70)

The sum q = Σ_k e^{−βε_k} is called the molecular partition function, and because of the independence of the various modes of motion we can write (as appropriate)

ln Q_MB = e^{−γ} q_tr q_rot q_vib q_e    (5.71)

We then evaluated each of these molecular partition functions.

5.6.1 Monatomic Gas

For monatomic gases, the only modes of motion are translation and electronic. Starting with translation, we obtained

q_tr = (2πmkT/h²)^{3/2} V    (5.72)

While we were able to derive this algebraic expression for translation, the electronic partition function must be calculated numerically. In many cases, the first excited electronic


states are so energetic that all but the first term in the partition function can be ignored. In that case, q_e = g_e. Thus, combining with the translational partition function, we obtain

ln Q = e^{−γ} (2πmkT/h²)^{3/2} V g_e    (5.73)

Thus, the fundamental relation for a monatomic ideal gas with no electronic excitation becomes

S(1/T, −μ/T) = k e^{μ/kT} (2πmkT/h²)^{3/2} V g_e    (5.74)

5.6.2 Simple Diatomic Gas

For a diatomic gas we also took rotation and vibration into account. The molecular partition functions for these are

q_r = \frac{1}{\sigma} \frac{T}{\theta_r}    (5.75)

where σ is a symmetry factor, with σ = 1 for heteronuclear and σ = 2 for homonuclear molecules, and

q_v = e^{D_e/kT}\, \frac{e^{-\theta_v/2T}}{1 - e^{-\theta_v/T}} = e^{D_e/kT}\, \frac{1}{2\sinh(\theta_v/2T)}    (5.76)

Substituting all the molecular partition functions into the full partition function, we have the fundamental relation for \(T \gg \theta_r\) and \(q_e = g_e\):

S\!\left[\frac{1}{T}, -\frac{\mu}{T}\right] = k\, e^{\mu/kT} \left(\frac{2\pi m k T}{h^2}\right)^{3/2} \frac{T}{\sigma\theta_r}\, \frac{e^{D_e/kT}}{2\sinh(\theta_v/2T)}\, g_e V    (5.77)

Of course, this expression is only valid for the rigid rotator/harmonic oscillator. As we discussed in this chapter, real diatomic molecules are more complex.
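Equations (5.75) and (5.76) are easy to evaluate numerically. The sketch below (Python, not the text's Matlab) uses approximate literature values of θ_r and θ_v for N2, which are assumptions of this example, and drops the D_e factor:

```python
import math

# Rigid-rotator / harmonic-oscillator partition functions, Eqs. (5.75)-(5.76),
# with the D_e factor omitted. theta_r and theta_v are approximate literature
# values for N2 and are assumptions of this illustrative sketch.
theta_r = 2.88     # K, rotational characteristic temperature of N2 (approx.)
theta_v = 3392.0   # K, vibrational characteristic temperature of N2 (approx.)
sigma = 2          # homonuclear molecule

def q_rot(T):
    return T / (sigma * theta_r)

def q_vib(T):
    return 1.0 / (2.0 * math.sinh(theta_v / (2.0 * T)))

T = 300.0
print(f"q_rot = {q_rot(T):.1f}")   # ~52: many rotational states are populated
print(f"q_vib = {q_vib(T):.2e}")   # small: vibration is nearly frozen at 300 K
```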

5.6.3 Polyatomic Molecules

Recall that for a body made up of n atoms, there are 3n degrees of freedom. Three of these are taken up by translational motion, leaving 3n − 3 modes for vibrational and rotational motion. For a linear molecule, such as CO2, there are only two rotational modes, thus there are 3n − 5 possible vibrational modes. For the general nonlinear case, there are three rotational modes and thus 3n − 6 vibrational modes. If we assume that the rigid rotator approximation holds and that the various modes of vibration in a polyatomic molecule are independent, then the simple expressions for the rotational and vibrational partition functions given in Section 5.5 hold:

q_v = e^{D_e/kT} \prod_i^{3n-6} \frac{e^{-\theta_{v,i}/2T}}{1 - e^{-\theta_{v,i}/T}}    (5.78)

and

q_r = \frac{1}{\sigma} \left(\frac{\pi T^3}{\theta_{r,1}\theta_{r,2}\theta_{r,3}}\right)^{1/2}    (5.79)

Be sure to note that for a linear molecule, CO2 for example, there are only two degrees of rotational freedom.
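Equation (5.78) can be checked numerically. The following sketch (not from the text) uses rough characteristic vibrational temperatures for H2O, derived from its three fundamental frequencies; these values are assumptions of the example, and the D_e factor is omitted:

```python
import math

# Vibrational partition function of a polyatomic molecule, Eq. (5.78), with
# the D_e factor omitted. The three characteristic temperatures are rough
# values for H2O (fundamentals near 3657, 1595, and 3756 cm^-1) and are
# assumptions of this illustrative sketch.
theta_v = [5262.0, 2295.0, 5404.0]  # K

def q_vib(T):
    q = 1.0
    for th in theta_v:
        q *= math.exp(-th / (2.0 * T)) / (1.0 - math.exp(-th / T))
    return q

for T in (300.0, 1000.0, 3000.0):
    print(f"T = {T:6.0f} K   q_v = {q_vib(T):.3e}")  # grows strongly with T
```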

5.7 Problems

5.1 Calculate the value of the translational partition function of O2 at 300 and 1000 K for a volume of 1 m³.

5.2 Calculate the average molecular speed of O2 at 300 and 1000 K.

5.3 For H2, find the temperature at which Eq. (5.42) is satisfied such that the left-hand side is equal to 10⁶.

5.4 Plot the population ratio of electronic state 1 to state 0 for sodium atoms from 300 to 3000 K. Will electronic excitation contribute significantly to the electronic partition function anywhere within this range?

5.5 Plot the rotational partition function of N2 as a function of temperature from 10 to 300 K. At what temperature does the approximation of Eq. (5.49) result in less than a 1% error?

5.6 Plot the vibrational partition function for CO from 300 to 1000 K. Ignore the dissociation energy term.

5.7 Consider the molecule D-glucose, C6H12O6, the base molecule in cellulose.
(a) Calculate the total number of energy modes and the number of translational, rotational, and vibrational modes. (I want numbers.)
(b) Using the "equipartition of energy" concept, estimate ideal gas cp (not cv) in units of kB for the "low-temperature" regime where no vibrational modes are activated and for the "high-temperature" regime where all the vibrational modes are activated. (I want numbers.)
(Note: The hydrogens attached to carbons are not shown. All C–C bonds are single bonds.)

6 Ideal Gas Mixtures

Here we explore the properties of mixtures of ideal gases. For non-reacting gas mixtures of known composition, this is fairly straightforward. For reacting mixtures, a maximization or minimization process is required, depending on the choice of representation.

6.1 Non-reacting Mixtures

The treatment of ideal gas non-reacting mixtures is well covered in most undergraduate textbooks. Here we briefly summarize the important relations. The prediction of the properties of ideal gas mixtures is based on Dalton's law of additive pressures and Amagat's law of additive volumes. Dalton's law states that the pressure of a gas mixture is equal to the sum of the pressures each component would exert if it were alone at the mixture temperature and volume. Amagat's law states that the volume of a mixture is equal to the sum of the volumes each component would occupy if it were alone at the mixture temperature and pressure. One can derive these results directly by evaluating the partition function for the mixture. For the Gibbs representation in which temperature, pressure and mass, mole number, or molecular number are the independent variables, the following simple summation relations over all r species hold:

U = \sum_j^r U_j = \sum_j^r u_j N_j    (6.1)

H = \sum_j^r H_j = \sum_j^r h_j N_j    (6.2)

S = \sum_j^r S_j = \sum_j^r s_j N_j    (6.3)

for the extensive properties. Similarly for the normalized or specific properties:

u = \sum_j^r x_j u_j    (6.4)

h = \sum_j^r x_j h_j    (6.5)

s = \sum_j^r x_j s_j    (6.6)

c_v = \sum_j^r x_j c_{v,j}    (6.7)

where x_j is the mole fraction for species j, defined as

x_j = \frac{N_j}{N_{tot}}    (6.8)

We have seen that internal energy and enthalpy are functions only of temperature. Thus, in these summations, h_j and u_j are evaluated at the mixture temperature. In the Gibbs representation, however, entropy is a function of both temperature and pressure. The s_j must be evaluated at the mixture temperature and the partial pressure, or

s(T, p) = \sum_j^r x_j s_j(T, p_j)    (6.9)

It is common to rewrite this expression in terms of the mole fraction and total pressure. The Gibbs representation expression for the entropy is

s_2 - s_1 = c_p \ln\frac{T_2}{T_1} - R \ln\frac{p_2}{p_1}    (6.10)

For application to a mixture, this relation is used to relate the entropy at the partial pressure to that at the total pressure. Thus,

s_j(T, p_j) = s_j(T, p) - R \ln x_j

(6.11)

6.1.1 Changes in Properties on Mixing

When a mixture undergoes a state change, calculating the change in properties is straightforward. For example:

\Delta u_{mix} = \sum_j^r x_j \Delta u_j    (6.12)

\Delta h_{mix} = \sum_j^r x_j \Delta h_j    (6.13)

\Delta s_{mix} = \sum_j^r x_j \Delta s_j    (6.14)

Figure 6.1 Adiabatic mixing at constant total volume.

However, when the ideal gases are mixed, special care must be taken with the entropy because of its pressure dependence:

\Delta s_j = s_j(T_2, p_2) - s_j(T_1, p_1)    (6.15)

Consider the divided volume shown in Fig. 6.1, where sides A and B of the volume are at the same temperature and pressure but contain different species. The barrier between the two sides is then removed and the gases allowed to mix. The change of entropy for the system becomes

\Delta s = -x_A R_A \ln\frac{p_A}{p} - x_B R_B \ln\frac{p_B}{p}    (6.16)

Since both p_A and p_B are less than p, there is an entropy increase because of mixing.
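At constant T and p, the same result on a molar basis reduces to \(\Delta \bar{s} = -\bar{R} \sum_j x_j \ln x_j\). A minimal sketch (not from the text):

```python
import math

# Entropy increase on mixing ideal gases at the same T and p, per mole of
# mixture, on a molar basis: ds = -R * sum(x_j ln x_j). Equal moles of two
# distinct gases give ds = R ln 2.
R = 8.314462618  # J/mol/K

def mixing_entropy(mole_fractions):
    return -R * sum(x * math.log(x) for x in mole_fractions if x > 0.0)

ds = mixing_entropy([0.5, 0.5])
print(f"ds_mix = {ds:.3f} J/mol/K")  # R ln 2, about 5.763 J/mol/K
```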

6.1.2 Example 6.1

Methane, ethane, and propane are mixed in a tank. They are supplied separately from storage cylinders at different temperatures (10, 30, and 20 °C) and pressures (15, 10, and 6 bar). A pressure relief valve keeps the pressure in the tank at 5 bar. In the final mixture the mole fractions are 0.7, 0.2, and 0.1, respectively. Determine the final temperature in the tank and the total entropy change for the process. (Assume constant specific heats.)

The molecular weights of methane, ethane, and propane are 16.043, 30.070, and 44.097 kg/kmol; cp = 2.2537, 1.7662, and 1.6794 kJ/kg K; R = 0.5182, 0.2765, and 0.1976 kJ/kg K. The molar mass of the mixture is

M_mix = x_CH4 MW_CH4 + x_C2H6 MW_C2H6 + x_C3H8 MW_C3H8
      = 0.7 × 16.043 + 0.2 × 30.070 + 0.1 × 44.097 = 21.654 kg/kmol

Then, the mass fractions in the mixture become

y_CH4 = x_CH4 MW_CH4 / M_mix = 0.519
y_C2H6 = x_C2H6 MW_C2H6 / M_mix = 0.278
y_C3H8 = x_C3H8 MW_C3H8 / M_mix = 0.204

Applying the conservation of energy equation for steady-state flow, one can obtain the temperature of the mixture:

y_CH4 h_CH4(10 °C) + y_C2H6 h_C2H6(30 °C) + y_C3H8 h_C3H8(20 °C) = y_CH4 h_CH4(T_mix) + y_C2H6 h_C2H6(T_mix) + y_C3H8 h_C3H8(T_mix)

Noting that Δh = c_p(T_2 − T_1), the temperature becomes 16.61 °C. The entropy change for each component is

\Delta s = c_p \ln\frac{T_2}{T_1} - R \ln\frac{p_2}{p_1}

and

\Delta s_{mix} = y_CH4 Δs_CH4 + y_C2H6 Δs_C2H6 + y_C3H8 Δs_C3H8 = 1.011 kJ/kg K
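A short sketch (Python rather than the text's Matlab) verifying the mixture temperature found above; with constant specific heats, the energy balance reduces to a cp-weighted average of the inlet temperatures:

```python
# Verify the mixture temperature of Example 6.1. With constant specific heats,
# the steady-flow energy balance sum(y_j cp_j (T_j - T_mix)) = 0 gives a
# cp-weighted average of the inlet temperatures.
y    = [0.519, 0.278, 0.204]      # mass fractions: CH4, C2H6, C3H8
cp   = [2.2537, 1.7662, 1.6794]   # kJ/kg K
T_in = [10.0, 30.0, 20.0]         # inlet temperatures, deg C

T_mix = sum(yi * ci * Ti for yi, ci, Ti in zip(y, cp, T_in)) \
        / sum(yi * ci for yi, ci in zip(y, cp))
print(f"T_mix = {T_mix:.2f} C")  # 16.61 C, as in the example
```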

6.2 Reacting Mixtures

6.2.1 General Case

The most straightforward way to find the equilibrium state of a reacting mixture is to minimize the appropriate thermodynamic potential. For example, if the pressure and temperature are to be controlled, then we could minimize the Gibbs function

G(T, p, N_1, \ldots, N_r)    (6.17)

where r is the total number of compounds being considered. If T and V are held constant, then minimize the Helmholtz function:

F(T, V, N_1, \ldots, N_r)    (6.18)

In differential form these are

dG = -S\,dT + V\,dp + \sum_j^r \mu_j\, dN_j    (6.19)

and

dF = -S\,dT - p\,dV + \sum_j^r \mu_j\, dN_j    (6.20)

where j is the species index. In either case, we hold the independent parameters constant except for the N_j and set the total derivative equal to zero. Thus,

\sum_j^r \mu_j\, dN_j = 0    (6.21)

The minimization is subject to a set of constraints, namely that atoms, the basic building blocks of molecules, be conserved and that no mole numbers be negative. We can write

b_k = \sum_j^r n_{kj} N_j    (6.22)

where n_{kj} is the number of k-type atoms in j-type molecules. Thus, b_k is the total number of k-type atoms in the mixture. (The form of the non-negative mole number requirement depends on the specific numerical method to be used.) The temperature, pressure, or volume and number dependency is contained in the equation of state for the μ_j. The ideal gas expression for the chemical potential can be understood by recalling that

G \equiv U - TS + pV    (6.23)

By Euler's relation

U = TS - pV + \sum_j^r \mu_j N_j    (6.24)

Thus,

G = \mu_1 N_1 + \mu_2 N_2 + \cdots + \mu_r N_r    (6.25)

The chemical potential can be thought of as the specific Gibbs function, since

g = \frac{G}{N}    (6.26)

In an ideal gas mixture, specific or partial properties are evaluated at the given temperature and partial pressure of the component in question, or

G = \sum_j^r \mu_j(T, p_j)\, N_j    (6.27)

However, we need a relationship for the μ_j that is explicit in p_j or x_j. Noting that

g = h - Ts = h(T) - T\,s(T, p)    (6.28)

one can write, using Eq. (6.11),

\mu_j(T, p_j) = g_j(T, p_j) = g_j(T, p) + RT \ln x_j    (6.29)

The minimization function thus becomes

G = \sum_j^r N_j \left[ g_j(T, p) + RT \ln x_j \right]    (6.30)


There are a number of numerical methods to minimize a function subject to constraints. In Listing 6.1 (see Example 6.2) we use the Matlab function “fmincon” to solve the problem of equilibrium between gaseous water, hydrogen, and oxygen. However, to proceed requires knowing the Gibbs function gj (T, p).

6.2.2 Properties for Equilibrium and 1st Law Calculations

To use either the general equations of Section 6.2.1 or the equilibrium constants of Section 6.2.4 requires specification of the Gibbs free energy. In addition, if one wants to carry out 1st and 2nd Law analysis, one must be able to calculate other properties such as the internal energy, enthalpy, and entropy. For reacting mixtures it is important to recognize that internal energies must be properly referenced so that energies of reaction, either exothermic or endothermic, are properly accounted for. The usual way to provide proper referencing is to consider the so-called standard reference state. Every molecule's internal energy, enthalpy, and Gibbs free energy are referenced to the energy required to form the molecule from stable atoms. This is normally done at standard temperature and pressure, and then the relations of Section 6.1 are used to calculate the properties at other temperatures and pressures.

A concerted effort to compile properties was one goal of the JANNAF (Joint Army–Navy–NASA–Air Force) Interagency Propulsion Committee. JANNAF was formed in 1946 to consolidate rocket propulsion research under one organization. It still functions as the US national resource for worldwide information, data, and analysis on chemical, electrical, and nuclear propulsion for missile, space, and gun propulsion systems. Much of the property work has been carried out by NIST, the National Institute of Standards and Technology. The property values compiled under the JANNAF program were published as the NIST–JANNAF Thermochemical Tables [32]. The tables are available online at http://kinetics.nist.gov/janaf/. The tables provide C_p°, S°, [G° − H°(T_r)]/T, H − H°(T_r), Δ_f H°, Δ_f G°, and log K_f. (The NIST notation for K_p is K_f, where the subscript f stands for formation.) It is important to understand the definitions of the various parameters in the tables.
C_p° is the constant-pressure ideal gas specific heat, as we have discussed before. S° is the absolute entropy evaluated at the standard reference pressure of 1 bar, going to zero at 0 K. The term H − H°(T_r) refers to the change in enthalpy with respect to the standard reference temperature, T_r = 298.15 K. Thus, the change in enthalpy between any two temperatures can be obtained as

\Delta H = \left[ H(T_2) - H^\circ(T_r) \right] - \left[ H(T_1) - H^\circ(T_r) \right]    (6.31)

To obtain the entropy at some pressure other than 1 bar, use the relation

S(T, p) = S^\circ - R \ln\frac{p}{p_{ref}}    (6.32)

The JANNAF tables also list the heat of formation Δ_f H°. This is the enthalpy required to form the compound from elements in their standard state at constant temperature and pressure. Thus, oxygen and hydrogen, for example, would be in the diatomic forms O2 and H2, while carbon would be in its standard solid form, graphite. The value of Δ_f H° is negative when energy is released as a compound is formed from its elements. To convert Δ_f H° between the gas and liquid phases, add the heat of vaporization:

\Delta_f H^\circ_{298.15}(\mathrm{gas}) = \Delta_f H^\circ_{298.15}(\mathrm{liq}) + h_{fg,298.15}    (6.33)

Δ_f G° is called the Gibbs free energy of formation. It is zero for elements in their equilibrium form, but not for compounds. See Example 6.2 for Δ_f G° used in a numerical Gibbs free energy minimization. (The term [G° − H°(T_r)]/T was included in the tables when hand calculations were more common. See the introduction to the JANNAF tables for more information.) Also listed in the tables are the logarithms of the equilibrium constants, log10 K_f, for formation of the compound from elements in their standard state. log10 K_f is zero for elements in their standard state, but nonzero for compounds. We will discuss the equilibrium constants in more detail in Section 6.2.4.

For computational work, it is most convenient to have the properties in the form of algebraic relations. It has been shown that this can be done by fitting properties to polynomial functions. For many application programs, these polynomials are

\frac{C_p^\circ}{R} = a_1 + a_2 T + a_3 T^2 + a_4 T^3 + a_5 T^4    (6.34)

\frac{H^\circ}{RT} = a_1 + \frac{a_2}{2} T + \frac{a_3}{3} T^2 + \frac{a_4}{4} T^3 + \frac{a_5}{5} T^4 + \frac{a_6}{T}    (6.35)

\frac{S^\circ}{R} = a_1 \ln T + a_2 T + \frac{a_3}{2} T^2 + \frac{a_4}{3} T^3 + \frac{a_5}{4} T^4 + a_7    (6.36)

In programs such as the NASA equilibrium code [33] or CHEMKIN [34], only the coefficients are stored. In addition, it has been found that to get acceptable fits, the full temperature range of interest must be split in two. Thus, 14 coefficients must be stored, as shown here for the molecule CH4:

CH4              L 8/88 C 1 H 4 0 0 G   200.000  3500.000  1000.000
 7.48514950E-02 1.33909467E-02-5.73285809E-06 1.22292535E-09-1.01815230E-13
-9.46834459E+03 1.84373180E+01 5.14987613E+00-1.36709788E-02 4.91800599E-05
-4.84743026E-08 1.66693956E-11-1.02466476E+04-4.64130376E+00 1.00161980E+04
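As an illustration of Eq. (6.34), this sketch (not from the text) evaluates cp/R for CH4 at 300 K using the five leading coefficients of the lower temperature range (values 8 through 12 of the 14-coefficient entry above):

```python
# Evaluate cp/R for CH4 from Eq. (6.34) using the lower-temperature-range
# (200-1000 K) coefficients a1..a5 of the 14-coefficient CH4 entry above.
a = [5.14987613e+00, -1.36709788e-02, 4.91800599e-05,
     -4.84743026e-08, 1.66693956e-11]

def cp_over_R(T):
    return a[0] + a[1]*T + a[2]*T**2 + a[3]*T**3 + a[4]*T**4

T = 300.0
print(f"cp/R(CH4, {T:.0f} K) = {cp_over_R(T):.3f}")  # about 4.30, i.e. cp ~ 35.8 J/mol/K
```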

6.2.3 Example 6.2

Consider one mole of water vapor, H2O. If it is heated to a high enough temperature, then it can dissociate into O2 and H2. Here we solve the equilibrium problem at 3000 K numerically using Matlab.


Listing 6.1: Simple equilibrium script

close all
clear
clc
R = 0.0083144598; % kJ/mol/K
T = 3000; % K
% Species names
species = {'H2O' 'O2' 'H2'};
% Delta_f G^0 for included species at given T
% from JANNAF Tables
DfG0H2O = -77.163;
DfG0O2 = 0;
DfG0H2 = 0;
g0 = [DfG0H2O DfG0O2 DfG0H2]; % kJ/mol
% Number of k type atoms in j molecule
nkj = [1 2 0   % oxygen
       2 0 2]; % hydrogen
% Initial conditions - number of atoms
bk = [1   % oxygen atoms
      2]; % hydrogen atoms
% Lower bounds on mole numbers
lb = [0 0 0]; % no mole numbers less than zero allowed
% Initial guesses for moles of each molecule
nj0 = [1e-3 1e-3 1e-3];
% Run minimization
gfun = @(nj) sum(nj.*(g0/(R*T) + log(nj/sum(nj))));
options = optimset('Algorithm','sqp');
[nj, fval] = fmincon(gfun, nj0, [], [], nkj, bk, lb, [], [], options);
% Mole fractions
molfrac = (nj/sum(nj))';
fprintf('# Species       Nj         xj\n')
for i = 1:numel(nj)
    fprintf('%5d%10s %10.3g %10.3g\n', i, species{i}, nj(i), molfrac(i))
end

Running the above script resulted in the mole fractions at 3000 K as shown in Table 6.1.
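For readers working outside Matlab, the same minimization can be sketched with NumPy/SciPy. This is an illustrative translation of Listing 6.1, not code from the text; the solver choice, bounds, and starting point are assumptions:

```python
import numpy as np
from scipy.optimize import minimize

# Python sketch of the Gibbs minimization in Listing 6.1: H2O/O2/H2 at 3000 K.
R = 0.0083144598                      # kJ/mol/K
T = 3000.0                            # K
g0 = np.array([-77.163, 0.0, 0.0])    # Delta_f G^0, kJ/mol (JANNAF, 3000 K)
nkj = np.array([[1, 2, 0],            # oxygen atoms per molecule
                [2, 0, 2]])           # hydrogen atoms per molecule
bk = np.array([1.0, 2.0])             # total O and H atoms

def gibbs(nj):
    # Dimensionless Gibbs function, Eq. (6.30)
    return float(np.sum(nj * (g0 / (R * T) + np.log(nj / nj.sum()))))

res = minimize(gibbs, x0=[0.9, 0.05, 0.1], method="SLSQP",
               bounds=[(1e-10, None)] * 3,
               constraints={"type": "eq", "fun": lambda nj: nkj @ nj - bk})
x = res.x / res.x.sum()
print(dict(zip(["H2O", "O2", "H2"], np.round(x, 4))))
```

The starting point is chosen to satisfy the atom-balance constraints; the converged mole fractions should reproduce Table 6.1.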

Table 6.1 Mole fractions predicted by Matlab script

H2O      O2       H2
0.794    0.0687   0.137

6.2.4 The Equilibrium Constant

The discussion above provides a methodology for calculating the equilibrium composition of a reacting gas mixture. However, it is best suited to numerical calculations. For the case of one or two reactions, the equilibrium constant approach may be more useful.

We start by recalling that the equilibrium condition requires minimizing the Gibbs free energy

dG_{mix} = 0    (6.37)

which, for a mixture of ideal gases, is

dG_{mix} = 0 = \sum_j^r dN_j\, g_j(T, p_j) = \sum_j^r dN_j \left[ g_j(T, p) + RT \ln(p_j/p) \right]    (6.38)

Consider the reaction

aA + bB + \cdots \rightleftharpoons eE + fF + \cdots    (6.39)

The change in the number of moles of each species is directly proportional to its stoichiometric coefficient (a, b, etc.) times a progress of reaction variable α. Note that we can write

dN_A = -\alpha a, \quad dN_B = -\alpha b, \; \ldots, \quad dN_E = +\alpha e, \quad dN_F = +\alpha f    (6.40)

If we substitute this into the equation for dG_{mix}, we obtain

\Delta G_T = -RT \ln K_p    (6.41)

where

\Delta G_T = (e g_E + f g_F + \cdots - a g_A - b g_B - \cdots)    (6.42)

and

K_p = \frac{(p_E/p)^e (p_F/p)^f \cdots}{(p_A/p)^a (p_B/p)^b \cdots}    (6.43)

Here, K_p is the equilibrium constant. It is common to define the Gibbs free energy of formation, Δ_f G°, as the value of ΔG at a reference pressure, typically the standard state pressure of 1 bar = 0.1 MPa. This makes it a function of temperature only. Then one can still use Eq. (6.41) as long as the equilibrium constant is written

K_p = \frac{(p_E/p_{ref})^e (p_F/p_{ref})^f \cdots}{(p_A/p_{ref})^a (p_B/p_{ref})^b \cdots}    (6.44)

where p_ref is the standard state pressure. Noting that

p_j = x_j p = \frac{N_j}{N_{tot}}\, p    (6.45)

we can write

K_p = \frac{N_E^e N_F^f \cdots}{N_A^a N_B^b \cdots} \left( \frac{p}{N_{tot}\, p_{ref}} \right)^{e+f+\cdots-a-b-\cdots}    (6.46)

Both Δ_f G° and log10 K_p are tabulated in the JANNAF tables. Using the ideal gas relation, they are related by

\Delta_f G^\circ(T) = -RT \ln K_p(T)    (6.47)

The molar concentration can be written (where the bracketed notation is common when discussing chemical reactions)

[C_j] = x_j \frac{p}{RT} = \frac{p_j}{RT}    (6.48)

so we can define an equilibrium constant based on molar concentrations:

K_p = K_c \left( \frac{RT}{p_{ref}} \right)^{e+f+\cdots-a-b-\cdots}    (6.49)

where

K_c = \frac{[E]^e [F]^f \cdots}{[A]^a [B]^b \cdots}    (6.50)

6.2.5 Example 6.3

Repeat Example 6.2 using the equilibrium constant from the JANNAF tables. Note that K_p is given for the formation of the compound from its elements in their natural state. Thus, the reaction we are concerned with is

H_2 + \tfrac{1}{2} O_2 \rightleftharpoons H_2O

Write the reaction process as

H_2O \rightarrow x\,H_2O + y\,H_2 + z\,O_2

Carrying out an atom balance:

H: 2 = 2x + 2y, so y = 1 − x
O: 1 = x + 2z, so z = (1 − x)/2
N_tot = x + y + z = 1.5 − 0.5x

Thus, using Eq. (6.44) and noting that

K_p = \frac{x}{(1-x)\left[(1-x)/2\right]^{1/2}} \left( \frac{p}{N_{tot}\, p_{ref}} \right)^{1-1-1/2}

from the JANNAF tables, log10 K_p = 1.344 for the formation reaction at 3000 K, so K_p = 22.1 (consistent with Δ_f G° = −77.163 kJ/mol used in Listing 6.1). Solving the equation for x, one obtains the same values as in Example 6.2.
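The algebraic route of this example is easy to automate. The sketch below (not from the text) solves the K_p relation for x by bisection, using log10 K_p = 1.344 for the formation reaction at 3000 K (consistent with the Δ_f G° = −77.163 kJ/mol of Listing 6.1) and p = 1 bar = p_ref:

```python
import math

# Solve Example 6.3 for x (moles of H2O remaining) by bisection.
# Formation reaction H2 + 1/2 O2 <-> H2O at 3000 K, p = 1 bar = p_ref.
Kp = 10.0 ** 1.344

def f(x):
    n_tot = 1.5 - 0.5 * x
    # Kp(x) = x / ((1-x) * sqrt((1-x)/2)) * sqrt(n_tot) at p = p_ref
    return x / ((1.0 - x) * math.sqrt((1.0 - x) / 2.0)) * math.sqrt(n_tot) - Kp

lo, hi = 1e-6, 1.0 - 1e-6   # f is negative at lo, positive at hi
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if f(lo) * f(mid) <= 0.0:
        hi = mid
    else:
        lo = mid
x = 0.5 * (lo + hi)
n_tot = 1.5 - 0.5 * x
fractions = [x / n_tot, (1.0 - x) / n_tot, (1.0 - x) / 2.0 / n_tot]
print([round(v, 4) for v in fractions])  # H2O, H2, O2 - matching Table 6.1
```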

6.2.6 The Principle of Detailed Balance

Now consider the reaction

A + B \rightleftharpoons C + D    (6.51)

where k_f and k_b are the forward and backward reaction rate coefficients. Suppose we wish to explore the time behavior of this reaction when starting from a non-equilibrium state, say all A and B but no C and D. It can be shown that the rate of destruction of A can be written as

\frac{d[A]}{dt} = -k_f [A][B] + k_b [C][D]    (6.52)

The reaction rate coefficients contain information about the nature of reacting collisions between molecules and are typically only a function of temperature. The concentration terms reflect the dependence of the collision frequency on concentration and thus pressure. If the mixture is in equilibrium, then the concentrations should be unchanging. Thus,

0 = -k_f [A][B] + k_b [C][D]    (6.53)

or

\frac{[C][D]}{[A][B]} = \frac{k_f(T)}{k_b(T)} = K_c(T)    (6.54)

This leads to the incredibly important principle of detailed balance. If the k's are a function of molecular structure and the translational energy is in equilibrium, then this relation should hold even when the mixture is not in chemical equilibrium. Using Eq. (6.54), only one of the two reaction rate coefficients is independently required, assuming the equilibrium constant is available.
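Detailed balance is used exactly this way in kinetics codes: store k_f and K_c, and compute k_b from them. A minimal sketch (the Arrhenius parameters and the K_c value are hypothetical, chosen only to illustrate the bookkeeping):

```python
import math

# Detailed balance: given a forward rate coefficient kf(T) and the
# equilibrium constant Kc(T), the backward rate coefficient is kb = kf / Kc.
# All numerical values below are hypothetical illustrations.
def kf(T, A=1.0e13, Ea=150.0, R=0.0083144598):
    # Arrhenius form; Ea in kJ/mol (hypothetical parameters)
    return A * math.exp(-Ea / (R * T))

def kb(T, Kc):
    return kf(T) / Kc

T = 2000.0
Kc = 2.5   # hypothetical equilibrium constant at this temperature
print(f"kf = {kf(T):.3e}, kb = {kb(T, Kc):.3e}")
```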

6.3 Summary

We explored ideal gas mixtures in this chapter.

6.3.1 Non-reacting Mixtures

We started with non-reacting mixtures. For these mixtures, calculating the mixture properties is straightforward given the mole fractions of the components. For the Gibbs representation in which temperature, pressure and mass, mole number, or molecular number are the independent variables, the following simple summation relations over all r species hold:

U = \sum_j^r U_j = \sum_j^r u_j N_j    (6.55)

H = \sum_j^r H_j = \sum_j^r h_j N_j    (6.56)

S = \sum_j^r S_j = \sum_j^r s_j N_j    (6.57)

for the extensive properties. Similarly for the normalized or specific properties:

u = \sum_j^r x_j u_j    (6.58)

h = \sum_j^r x_j h_j    (6.59)

s = \sum_j^r x_j s_j    (6.60)

c_v = \sum_j^r x_j c_{v,j}    (6.61)

In the Gibbs representation, however, entropy is a function of both temperature and pressure. The s_j must be evaluated at the mixture temperature and the partial pressure:

s(T, p) = \sum_j^r x_j s_j(T, p_j)    (6.62)

s_2 - s_1 = c_p \ln\frac{T_2}{T_1} - R \ln\frac{p_2}{p_1}    (6.63)

6.3.2 Reacting Mixtures

For reacting mixtures we first discussed the general case of the equilibrium of a reacting mixture. Minimizing the Gibbs free energy, we derived the minimization function

G = \sum_j^r N_j \left[ g_j(T, p) + RT \ln x_j \right]    (6.64)

The minimization can easily be carried out numerically, as shown in Listing 6.1. We then derived the equilibrium constant and showed how it can be used in Example 6.3. This was followed by a discussion of detailed balance:

\frac{[C][D]}{[A][B]} = \frac{k_f(T)}{k_b(T)} = K_c(T)    (6.65)

6.4 Problems

Reacting mixtures. For the following problems, "theoretical air" (or oxygen) means the percentage of air (or oxygen) that is provided compared to the stoichiometric amount. (You will need either the NIST database or your undergraduate thermodynamics books to do these problems, along with the NASA program CEA2.)

6.1 Liquid ethanol (C2H5OH) is burned with 150% theoretical oxygen in a steady-state, steady-flow process. The reactants enter the combustion chamber at 25 °C and the products leave at 65 °C. The process takes place at 1 bar. Assuming complete combustion (i.e. CO2 and H2O as products along with any excess O2), calculate the heat transfer per kmole of fuel burned.

6.2 Gaseous propane at 25 °C is mixed with air at 400 K and burned; 300% theoretical air is used. What is the adiabatic flame temperature? Again, assume complete combustion.

6.3 Repeat Problems 6.1 and 6.2 using CEA2. Compare the product composition using CEA2 with the assumption of complete combustion. What do you observe?

6.4 Using CEA2, calculate the equilibrium composition of a mixture of 1 mol of CO2, 2 mol of H2O, and 7.52 mol of N2 at 1 bar and temperatures ranging from 500 to 3000 K in steps of 100 K. Then plot the mole fraction of NO as a function of temperature.

7 The Photon and Electron Gases

It is an interesting fact that both equilibrium radiation fields and electrons in metals can be treated as non-interacting ideal gases. Here we explore the consequences.

7.1 The Photon Gas

We have discussed the idea that electromagnetic radiation can, under certain circumstances, be thought of as being composed of photons, or quanta, that display particle-like characteristics. It would be useful if we could treat an electromagnetic field in an enclosure as a collection of monatomic particles and predict its equilibrium behavior using the principles of statistical mechanics. This is because many bodies emit radiation with a spectral distribution similar to that of an equilibrium field. A body that does so is said to be a "black body," and the radiation from the surface is completely characterized by the temperature. Indeed, in carrying out radiation heat transfer calculations, one usually expresses the properties of surfaces in terms of how closely they resemble a black body.

We can treat a collection of photons contained in an enclosure as an ideal monatomic gas except for two restrictions:

1. Photons are bosons and Bose–Einstein statistics must be used. However, photons do not interact with each other, so no approximation is made by neglecting inter-particle forces.
2. Photons can be absorbed and emitted by the walls of the container, so that no constraint can be placed on the number of photons even in the grand canonical representation.

Recall that for a dilute assembly of bosons, the distribution of particles over the allowed energy states is

N_k = \frac{g_k}{e^{\gamma + \epsilon_k/kT} - 1}    (7.1)

However, this expression was derived including the constraint on N that is now removed, and \(\gamma = -\mu/kT = 0\). Thus,

N_k = \frac{g_k}{e^{\epsilon_k/kT} - 1}    (7.2)

If we assume that the differences between energy levels are so small that the energy varies continuously, we can write

dN = \frac{dg}{e^{\epsilon/kT} - 1}    (7.3)

where dg is the degeneracy for the energy increment \(\epsilon\) to \(\epsilon + d\epsilon\). As we derived in our discussion of translational energy, the degeneracy is

dg = 2\, \frac{4\pi n^2\, dn}{8}    (7.4)

where the factor of 2 arises because an electromagnetic field can be polarized in two independent directions. For a molecular gas, we related dn to \(d\epsilon\) using Newtonian physics, namely that

\epsilon = \frac{mc^2}{2} \quad \text{or} \quad p^2 = 2m\epsilon    (7.5)

However, photons are relativistic, and de Broglie's relation (where here c is the speed of light)

p = \frac{\epsilon}{c}    (7.6)

must be used. Therefore,

\epsilon^2 = \frac{h^2 c^2}{4 V^{2/3}}\, n^2    (7.7)

and

dg = \frac{8\pi V}{h^3 c^3}\, \epsilon^2\, d\epsilon    (7.8)

Substituting this into the expression for dN and noting that \(\epsilon = h\nu\),

\frac{1}{V}\frac{dN}{d\nu} = \frac{8\pi}{c^3}\, \frac{\nu^2}{e^{h\nu/kT} - 1}    (7.9)

This is the number of photons (per unit volume) in the frequency range ν to ν + dν. We seek the spectral energy density u_ν:

u_\nu\, d\nu = h\nu\, \frac{dN}{V}    (7.10)

Thus,

u_\nu = \frac{8\pi h \nu^3}{c^3}\, \frac{1}{e^{h\nu/kT} - 1}    (7.11)

This is Planck's Law. The limits for low and high frequency are readily obtained. For large ν, \(h\nu/kT \gg 1\) and we get the Wien formula:

u_\nu \cong \frac{8\pi h \nu^3}{c^3}\, e^{-h\nu/kT}    (7.12)
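A useful numerical check of Planck's law is that its integral over all frequencies reproduces the blackbody energy density \(u = aT^4\) with \(a = 8\pi^5 k^4/(15 h^3 c^3)\). A standard-library sketch (not from the text; the frequency cutoff and grid size are assumptions):

```python
import math

# Numerically integrate Planck's law, Eq. (7.11), over frequency and compare
# with u = a T^4, where a = 8 pi^5 k^4 / (15 h^3 c^3).
k = 1.380649e-23      # J/K
h = 6.62607015e-34    # J s
c = 2.99792458e8      # m/s

def u_nu(nu, T):
    return 8.0 * math.pi * h * nu**3 / c**3 / math.expm1(h * nu / (k * T))

T = 300.0
nu_max = 30.0 * k * T / h     # integrand is negligible beyond ~30 kT/h
N = 200000
dnu = nu_max / N
total = sum(u_nu((i + 0.5) * dnu, T) for i in range(N)) * dnu  # midpoint rule
a = 8.0 * math.pi**5 * k**4 / (15.0 * h**3 * c**3)
print(f"numerical u = {total:.4e} J/m^3, a*T^4 = {a * T**4:.4e} J/m^3")
```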


Figure 7.1 Spectral distribution of blackbody radiation.

For small ν, \(h\nu/kT \ll 1\) and we get the Rayleigh–Jeans formula:

u_\nu \cong \frac{8\pi \nu^2}{c^3}\, kT    (7.13)

> Q12 but (W12 + W21)