Lecture Notes for Solid State Physics

Lecture Notes for Solid State Physics (3rd Year Course 6), Hilary Term 2012. © Professor Steven H. Simon, Oxford University. January 9, 2012.

Chapter 1: About Condensed Matter Physics

This chapter is just my personal take on why this topic is interesting. It seems unlikely to me that any exam would ask you why you study this topic, so you should probably consider this section to be not examinable. Nonetheless, you might want to read it to figure out why you should think this course is interesting if that isn't otherwise obvious.

1.1 What is Condensed Matter Physics

Quoting Wikipedia:

    Condensed matter physics is the field of physics that deals with the macroscopic and microscopic physical properties of matter. In particular, it is concerned with the "condensed" phases that appear whenever the number of constituents in a system is extremely large and the interactions between the constituents are strong. The most familiar examples of condensed phases are solids and liquids, which arise from the electromagnetic forces between atoms.

The use of the term "condensed matter", as being more general than just solid state, was coined and promoted by Nobel laureate Philip W. Anderson.

1.2 Why Do We Study Condensed Matter Physics?

There are several very good answers to this question.

1. Because it is the world around us. Almost all of the physical world that we see is in fact condensed matter. Questions such as

• why are metals shiny and why do they feel cold?
• why is glass transparent?
• why is water a fluid, and why does fluid feel wet?
• why is rubber soft and stretchy?

These questions are all in the domain of condensed matter physics. In fact almost every question you might ask about the world around you, short of asking about the sun or stars, is probably related to condensed matter physics in some way.

2. Because it is useful. Over the last century our command of condensed matter physics has enabled us humans to do remarkable things.
We have used our knowledge of physics to engineer new materials and exploit their properties to change our world and our society completely. Perhaps the most remarkable example is how our understanding of solid state physics enabled new inventions exploiting semiconductor technology, which enabled the electronics industry, which enabled computers, iPhones, and everything else we now take for granted.

3. Because it is deep. The questions that arise in condensed matter physics are as deep as those you might find anywhere. In fact, many of the ideas that are now used in other fields of physics can trace their origins to condensed matter physics. A few examples for fun:

• The famous Higgs boson, which the LHC is searching for, is no different from a phenomenon that occurs in superconductors (the domain of condensed matter physicists). The Higgs mechanism, which gives mass to elementary particles, is frequently called the "Anderson-Higgs" mechanism, after the condensed matter physicist Phil Anderson (the same guy who coined the term "condensed matter"), who described much of the same physics before Peter Higgs, the high energy theorist.

• The ideas of the renormalization group (Nobel prize to Kenneth Wilson in 1982) were developed simultaneously in both high-energy and condensed matter physics.

• The ideas of topological quantum field theories, while invented by string theorists as theories of quantum gravity, have been discovered in the laboratory by condensed matter physicists.

• In the last few years there has been a mass exodus of string theorists applying black-hole physics (in N dimensions) to phase transitions in real materials. The very same structures exist in the lab that are (maybe) somewhere out in the cosmos.

That this type of physics is deep is not just my opinion. The Nobel committee agrees with me. During this course we will discuss the work of no fewer than 50 Nobel laureates (see the index of scientists at the end of this set of notes).

4.
Because reductionism doesn't work. (Begin rant.) People frequently have the feeling that if you continually ask "what is it made of?" you learn more about something. This approach to knowledge is known as reductionism. For example, asking what water is made of, someone may tell you it is made of molecules, then that molecules are made of atoms, atoms of electrons and protons, protons of quarks, and quarks of who-knows-what. But none of this information tells you anything about why water is wet, about why protons and neutrons bind to form nuclei, why atoms bind to form water, and so forth. Understanding physics inevitably involves understanding how many objects all interact with each other. And this is where things get difficult very quickly. We understand the Schroedinger equation extremely well for one particle, but the Schroedinger equation for four or more particles, while in principle solvable, in practice is never solved because it is too difficult, even for the world's biggest computers. Physics involves figuring out what to do then. How are we to understand how many quarks form a nucleus, or how many electrons and protons form an atom, if we cannot solve the many-particle Schroedinger equation?

Even more interesting is the possibility that we understand the microscopic theory of a system very well, but then discover that macroscopic properties emerge from the system that we did not expect. My personal favorite example is that when one puts together many electrons (each with charge −e) one can sometimes find new particles emerging, each having one third the charge of an electron.[1] Reductionism would never uncover this; it misses the point completely. (End rant.)

5. Because it is a laboratory. Condensed matter physics is perhaps the best laboratory we have for studying quantum physics and statistical physics.
Those of us who are fascinated by what quantum mechanics and statistical mechanics can do often end up studying condensed matter physics, which is deeply grounded in both of these topics. Condensed matter is an infinitely varied playground for physicists to test strange quantum and statistical effects.

I view this entire course as an extension of what you have already learned in quantum and statistical physics. If you enjoyed those courses, you will likely enjoy this one as well. If you did not do well in those courses, you might want to go back and study them again, because many of the same ideas will arise here.

[1] Yes, this truly happens. The Nobel prize in 1998 was awarded to Dan Tsui, Horst Stormer and Bob Laughlin for the discovery of this phenomenon, known as the fractional quantum Hall effect.

Chapter 2: Specific Heat of Solids: Boltzmann, Einstein, and Debye

Our story of condensed matter physics starts around the turn of the last century. It was well known (and you should remember from last year) that the heat capacity[1] of a monatomic (ideal) gas is C_v = 3k_B/2 per atom, with k_B being Boltzmann's constant. The statistical theory of gases described why this is so.

As far back as 1819, however, it had also been known that for many solids the heat capacity[2] is given by

    C = 3 k_B per atom

or

    C = 3R per mole

which is known as the law of Dulong-Petit.[3] While this law is not always correct, it frequently is close to true. For example, at room temperature we have the values shown in Table 2.1. With the exception of diamond, the law C/R = 3 seems to hold extremely well at room temperature, although at lower temperatures all materials start to deviate from this law, and typically

[1] We will almost always be concerned with the heat capacity C per atom of a material. Multiplying by Avogadro's number gives the molar heat capacity, or heat capacity per mole. The specific heat (often denoted c rather than C) is the heat capacity per unit mass.
However, the phrase "specific heat" is also used loosely to describe the molar heat capacity, since they are both intensive quantities (as compared to the total heat capacity, which is extensive, i.e., proportional to the amount of mass in the system). We will try to be precise with our language, but one should be aware that frequently things are written in non-precise ways and you are left to figure out what is meant. For example, really we should say C_v per atom = 3k_B/2 rather than C_v = 3k_B/2 per atom, and similarly we should say C per mole = 3R. To be more precise I really would have liked to title this chapter "Heat Capacity per Atom of Solids" rather than "Specific Heat of Solids". However, for over a century people have talked about the "Einstein theory of specific heat" and the "Debye theory of specific heat", and it would have been almost scandalous not to use this wording.

[2] Here I do not distinguish between C_p and C_v because they are very close to the same. Recall that C_p − C_v = V T α²/β_T, where β_T is the isothermal compressibility and α is the coefficient of thermal expansion. For a solid, α is relatively small.

[3] Both Pierre Dulong and Alexis Petit were French chemists. Neither is remembered for much else besides this law.

    Material    C/R
    Aluminum    2.91
    Antimony    3.03
    Copper      2.94
    Gold        3.05
    Silver      2.99
    Diamond     0.735

    Table 2.1: Heat capacities of some solids.

C drops rapidly below some temperature. (And for diamond, when the temperature is raised, the heat capacity increases towards 3R as well; see Fig. 2.2 below.)

In 1896 Boltzmann constructed a model that accounted for this law fairly well. In his model, each atom in the solid is bound to neighboring atoms. Focusing on a single particular atom, we imagine that atom as being in a harmonic well formed by the interaction with its neighbors.
In such a classical statistical mechanical model, the heat capacity of the vibration of the atom is 3k_B per atom, in agreement with Dulong-Petit. (Proving this is a good homework assignment that you should be able to answer with your knowledge of statistical mechanics and/or the equipartition theorem.)

Several years later, in 1907, Einstein started wondering about why this law does not hold at low temperatures (for diamond, "low" temperature appears to be room temperature). What he realized is that quantum mechanics is important!

Einstein's assumption was similar to that of Boltzmann. He assumed that every atom is in a harmonic well created by the interaction with its neighbors. Further, he assumed that every atom is in an identical harmonic well and has an oscillation frequency ω (known as the "Einstein" frequency).

The quantum mechanical problem of a simple harmonic oscillator is one whose solution we know. We will now use that knowledge to determine the heat capacity of a single one-dimensional harmonic oscillator. This entire calculation should look familiar from your statistical physics course.

2.1 Einstein's Calculation

In one dimension, the eigenstates of a single harmonic oscillator are

    E_n = ℏω(n + 1/2)

with ω the frequency of the harmonic oscillator (the "Einstein frequency"). The partition function is then[4]

    Z_1D = Σ_{n≥0} e^{−βℏω(n+1/2)} = e^{−βℏω/2} / (1 − e^{−βℏω}) = 1 / (2 sinh(βℏω/2))

[4] We will very frequently use the standard notation β = 1/(k_B T).

The expectation of the energy is then

    ⟨E⟩ = −(1/Z) ∂Z/∂β = (ℏω/2) coth(βℏω/2) = ℏω [n_B(βℏω) + 1/2]        (2.1)

where n_B is the Bose occupation factor[5]

    n_B(x) = 1 / (e^x − 1)

This result is easy to interpret: the mode ω is an excitation that is excited on average n_B times, or equivalently there is a "boson" orbital which is "occupied" by n_B bosons.
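Not part of the notes, but Eq. 2.1 is easy to check by brute force: sum over the oscillator levels directly and compare to the closed form. (A sketch, in units where ℏω = 1 and k_B = 1, so β = 1/T.)

```python
import math

def energy_direct(beta, nmax=2000):
    # <E> = sum_n E_n e^{-beta E_n} / Z  with  E_n = n + 1/2
    Z = sum(math.exp(-beta * (n + 0.5)) for n in range(nmax))
    return sum((n + 0.5) * math.exp(-beta * (n + 0.5)) for n in range(nmax)) / Z

def energy_closed_form(beta):
    # Eq. 2.1: <E> = n_B(beta) + 1/2 in these units
    n_B = 1.0 / (math.exp(beta) - 1.0)   # Bose occupation factor
    return n_B + 0.5

for beta in (0.1, 1.0, 5.0):
    assert abs(energy_direct(beta) - energy_closed_form(beta)) < 1e-9
```

At high temperature (small β) the closed form approaches k_B T, the classical equipartition result, as it should.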
Differentiating the expression for the energy, we obtain the heat capacity for a single oscillator,

    C = ∂⟨E⟩/∂T = k_B (βℏω)² e^{βℏω} / (e^{βℏω} − 1)²

Note that the high temperature limit of this expression gives C = k_B (check this if it is not obvious).

Generalizing to the three-dimensional case,

    E_{n_x,n_y,n_z} = ℏω[(n_x + 1/2) + (n_y + 1/2) + (n_z + 1/2)]

and

    Z_3D = Σ_{n_x,n_y,n_z ≥ 0} e^{−β E_{n_x,n_y,n_z}} = (Z_1D)³

resulting in ⟨E_3D⟩ = 3⟨E_1D⟩, so correspondingly we obtain

    C = 3 k_B (βℏω)² e^{βℏω} / (e^{βℏω} − 1)²

Plotted, this looks like Fig. 2.1.

[5] Satyendra Bose worked out the idea of Bose statistics in 1924, but could not get it published until Einstein lent his support to the idea.

[Figure 2.1: Einstein heat capacity per atom in three dimensions, C/(3k_B) plotted against k_B T/(ℏω).]

Note that in the high temperature limit k_B T ≫ ℏω we recover the law of Dulong-Petit: 3k_B heat capacity per atom. However, at low temperature (T ≪ ℏω/k_B) the degrees of freedom "freeze out", the system gets stuck in only the ground state eigenstate, and the heat capacity vanishes rapidly.

Einstein's theory reasonably accurately explained the behavior of the heat capacity as a function of temperature with only a single fitting parameter, the Einstein frequency ω. (Sometimes this frequency is quoted in terms of the Einstein temperature, ℏω = k_B T_Einstein.) In Fig. 2.2 we show Einstein's original comparison to the heat capacity of diamond.

For most materials, the Einstein frequency ω is low compared to room temperature, so the Dulong-Petit law holds fairly well (room temperature being relatively high compared to the Einstein frequency). However, for diamond, ω is high compared to room temperature, so the heat capacity is lower than 3R at room temperature.
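Again not from the notes, but both limits of the Einstein heat capacity are easy to verify numerically (units: k_B = ℏ = ω = 1, so T is measured in units of ℏω/k_B):

```python
import math

def c_einstein(T):
    # 3D Einstein heat capacity per atom: C = 3 k_B x^2 e^x / (e^x - 1)^2,  x = beta*hbar*omega
    x = 1.0 / T
    return 3.0 * x**2 * math.exp(x) / (math.exp(x) - 1.0)**2

# High temperature: recover Dulong-Petit, C -> 3 k_B per atom.
assert abs(c_einstein(100.0) - 3.0) < 1e-3

# Low temperature: degrees of freedom freeze out and C vanishes (exponentially) fast.
assert c_einstein(0.05) < 1e-5
```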
The reason diamond has such a high Einstein frequency is that the bonding between atoms in diamond is very strong and the atomic mass is relatively low (hence a high oscillation frequency ω = √(κ/m), with κ a spring constant and m the mass). These strong bonds also result in diamond being an exceptionally hard material.

[Figure 2.2: Plot of the molar heat capacity of diamond from Einstein's original 1907 paper. The fit is to the Einstein theory. The x-axis is k_B T in units of ℏω, and the y-axis is C in units of cal/(K·mol); in these units, 3R ≈ 5.96.]

Einstein's result was remarkable, not only in that it explained the temperature dependence of the heat capacity, but more importantly in that it told us something fundamental about quantum mechanics. Keep in mind that Einstein obtained this result 19 years before the Schroedinger equation was discovered![6]

2.2 Debye's Calculation

Einstein's theory of specific heat was extremely successful, but still there were clear deviations from the predicted equation. Even in the plot in his first paper (Fig. 2.2 above) one can see that at low temperature the experimental data lie above the theoretical curve.[7] This result turns out to be rather important! In fact, it was known that at low temperatures most materials have a heat capacity that is proportional to T³. (Metals also have a very small additional term proportional to T, which we will discuss later in section 4.2. Magnetic materials may have other additional terms as well.[8] Nonmagnetic insulators have only the T³ behavior.) At any rate, Einstein's formula at low temperature is exponentially small in T, not agreeing at all with the actual experiments.

In 1912 Peter Debye[9] discovered how to better treat the quantum mechanics of oscillations of atoms, and managed to explain the T³ specific heat. Debye realized that the oscillation of atoms is the same thing as sound, and sound is a wave, so it should be quantized the same way as Planck quantized light waves.
Besides the fact that the speed of light is much faster than that of sound, there is only one minor difference between light and sound: for light there are two polarizations for each k, whereas for sound there are three modes for each k (a longitudinal mode, where the atomic motion is in the same direction as k, and two transverse modes, where the motion is perpendicular to k; light has only the transverse modes). For simplicity of presentation here we will assume that the transverse and longitudinal modes have the same velocity, although in truth the longitudinal velocity is usually somewhat greater than the transverse velocity.[10]

[6] Einstein was a pretty smart guy.
[7] Although perhaps not obvious, this deviation turns out to be real, and not just experimental error.
[8] We will discuss magnetism in part VII.
[9] Peter Debye later won a Nobel prize in Chemistry for something completely different.
[10] We have also assumed the sound velocity to be the same in every direction, which need not be true in real materials. It is not too hard to include anisotropy in Debye's theory as well.

We now repeat essentially what was Planck's calculation for light. This calculation should also look familiar from your statistical physics course. First, however, we need some preliminary information about waves.

2.2.1 About Periodic (Born-von Karman) Boundary Conditions

Many times in this course we will consider waves with periodic or "Born-von Karman" boundary conditions. It is easiest to describe this first in one dimension. Here, instead of having a one-dimensional sample of length L with actual ends, we imagine that the two ends are connected together, making the sample into a circle. The periodic boundary condition means that any wave e^{ikr} in this sample is required to have the same value at a position r as it has at r + L (we have gone all the way around the circle). This then restricts the possible values of k to be

    k = 2πn/L

for n an integer.
If we are ever required to sum over all possible values of k, then for large enough L we can replace the sum with an integral,[11] obtaining

    Σ_k → (L/2π) ∫_{−∞}^{∞} dk

A way to understand this mapping is to note that the spacing between allowed points in k-space is 2π/L, so the integral ∫ dk can be replaced by a sum over k points times the spacing between the points.

[11] In your previous courses you may have used particle-in-a-box boundary conditions, where instead of plane waves e^{i2πnr/L} you used particle-in-a-box wavefunctions of the form sin(nπr/L). This gives you instead

    Σ_k → (L/π) ∫_0^{∞} dk

which will inevitably result in the same physical answers as for the periodic boundary condition case. All calculations can be done either way, but periodic Born-von Karman boundary conditions are almost always simpler.

In three dimensions, the story is extremely similar. For a sample of size L³, we identify opposite ends of the sample (wrapping the sample up into a hypertorus) so that if you go a distance L in any direction, you get back to where you started.[12] As a result, our k values can only take the values

    k = (2π/L)(n_1, n_2, n_3)

for integer values of the n_i, so here each k point now occupies a volume of (2π/L)³. Because of this discretization of values of k, whenever we have a sum over all possible k values we obtain

    Σ_k → (L³/(2π)³) ∫ dk

[12] Such boundary conditions are very popular in video games. It may also be possible that our universe has such boundary conditions, a notion known as the doughnut universe. Data collected by the Cosmic Background Explorer (led by Nobel laureates John Mather and George Smoot) and its successor, the Wilkinson Microwave Anisotropy Probe, appear consistent with this structure.
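The one-dimensional replacement Σ_k → (L/2π)∫dk is easy to test numerically (my sketch, not from the notes): sum a smooth function over the allowed k = 2πn/L and compare the spacing-weighted sum to the exact integral. For a Gaussian, ∫ e^{−k²} dk = √π.

```python
import math

L = 60.0
f = lambda k: math.exp(-k * k)

# Sum over allowed k points 2*pi*n/L; the spacing between points is 2*pi/L,
# so (spacing) * (sum over points) should approximate the integral over all k.
mode_sum = sum(f(2 * math.pi * n / L) for n in range(-1000, 1001))
integral_estimate = (2 * math.pi / L) * mode_sum

assert abs(integral_estimate - math.sqrt(math.pi)) < 1e-6
```

The agreement improves rapidly as L grows, which is exactly why the replacement is safe for a macroscopic sample.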
Here the bold dk denotes an integral over all three dimensions of k-space.

One might think that wrapping the sample up into a hypertorus is very unnatural compared to considering a system with real boundary conditions. However, these boundary conditions tend to simplify calculations quite a bit, and most physical quantities you might measure could be measured far from the boundaries of the sample anyway, and would then be independent of what you do with the boundary conditions.

2.2.2 Debye's Calculation Following Planck

Debye decided that the oscillation modes were waves with frequencies ω(k) = vk, with v the sound velocity, and that for each k there should be three possible oscillation modes, one for each direction of motion. Thus he wrote an expression entirely analogous to Einstein's expression (compare to Eq. 2.1):

    ⟨E⟩ = 3 Σ_k ℏω(k) [n_B(βℏω(k)) + 1/2]
        = 3 (L³/(2π)³) ∫ dk ℏω(k) [n_B(βℏω(k)) + 1/2]

Each excitation mode is a boson of frequency ω(k), and it is occupied on average n_B(βℏω(k)) times.

By spherical symmetry, we may convert the three-dimensional integral to a one-dimensional integral,

    ∫ dk → 4π ∫_0^∞ k² dk

(recall that 4πk² is the area of the surface of a sphere of radius k),[13] and we also use k = ω/v to obtain

    ⟨E⟩ = 3 (4πL³/(2π)³) ∫_0^∞ ω² dω (1/v³) ℏω [n_B(βℏω) + 1/2]

It is convenient to replace nL³ = N, where n is the density of atoms. We then obtain

    ⟨E⟩ = ∫_0^∞ dω g(ω) ℏω [n_B(βℏω) + 1/2]        (2.2)

where the density of states is given by

    g(ω) = N [12πω² / ((2π)³ n v³)] = N (9ω²/ω_d³)        (2.3)

where

    ω_d³ = 6π² n v³        (2.4)

This frequency will be known as the Debye frequency, and below we will see why we chose to define it this way, with the factor of 9 removed.

The meaning of the density of states[14] here is that the total number of oscillation modes with frequencies between ω and ω + dω is given by g(ω) dω. Thus the interpretation of Eq. 2.2 is

[13] Or to be pedantic, ∫ dk → ∫_0^{2π} dφ ∫_0^π dθ sinθ ∫ k² dk, and performing the angular integrals gives 4π.
[14] We will encounter the concept of density of states many times, so it is a good idea to become comfortable with it!

simply that we should count how many modes there are per frequency (given by g), then multiply by the expected energy per mode (compare to Eq. 2.1), and finally integrate over all frequencies. This result, Eq. 2.2, for the quantum energy of the sound waves is strikingly similar to Planck's result for the quantum energy of light waves, only we have replaced 2/c³ by 3/v³ (replacing the 2 light modes by 3 sound modes). The other change from Planck's classic result is the +1/2 that we obtain as the zero-point energy of each oscillator.[15] At any rate, this zero-point energy gives us a contribution which is temperature independent.[16] Since we are concerned with C = ∂⟨E⟩/∂T, this term will not contribute and we will separate it out. We thus obtain

    ⟨E⟩ = (9Nℏ/ω_d³) ∫_0^∞ dω ω³/(e^{βℏω} − 1) + T-independent constant

By defining a variable x = βℏω this becomes

    ⟨E⟩ = 9Nℏ/(ω_d³ (βℏ)⁴) ∫_0^∞ dx x³/(e^x − 1) + T-independent constant

The nasty integral just gives some number; in fact, the number is π⁴/15.[17] Thus we obtain

    ⟨E⟩ = 9N (k_B T)⁴/(ℏω_d)³ (π⁴/15) + T-independent constant

Notice the similarity to Planck's derivation of the T⁴ energy of photons. As a result, the heat capacity is

    C = ∂⟨E⟩/∂T = N k_B [(k_B T)³/(ℏω_d)³] (12π⁴/5) ∼ T³

This correctly obtains the desired T³ specific heat. Furthermore, the prefactor of T³ can be calculated in terms of known quantities such as the sound velocity and the density of atoms.
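The value π⁴/15 for the nasty integral can be checked without any tables (a quick sketch, not part of the notes): expanding 1/(e^x − 1) = Σ_{n≥1} e^{−nx} and using ∫_0^∞ x³ e^{−nx} dx = 6/n⁴ turns the integral into a rapidly convergent sum.

```python
import math

# int_0^infty x^3/(e^x - 1) dx = sum_{n>=1} 6/n^4 = 6*zeta(4); claimed value pi^4/15.
total = sum(6.0 / n**4 for n in range(1, 200001))

assert abs(total - math.pi**4 / 15.0) < 1e-10
```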
Note that the Debye frequency in this equation is sometimes replaced by a temperature via ℏω_d = k_B T_Debye, known as the Debye temperature, so that this equation reads

    C = ∂⟨E⟩/∂T = N k_B (T³/T_Debye³) (12π⁴/5)

[15] Planck should have gotten this energy as well, but he didn't know about zero-point energy; in fact, since it was long before quantum mechanics was fully understood, Debye didn't actually have this term either.

[16] Temperature independent, and also infinite! Handling infinities like this is something that gives mathematicians nightmares, but physicists do it happily when they know that the infinity is not really physical. We will see below in section 2.2.3 how this infinity gets properly cut off by the Debye frequency.

[17] If you want to evaluate the nasty integral, the strategy is to reduce it to the famous Riemann zeta function. We start by writing

    ∫_0^∞ dx x³/(e^x − 1) = ∫_0^∞ dx x³ e^{−x}/(1 − e^{−x}) = Σ_{n≥1} ∫_0^∞ dx x³ e^{−nx} = Σ_{n≥1} 6/n⁴

The resulting sum is a special case of the famous Riemann zeta function, defined as

    ζ(p) = Σ_{n=1}^∞ n^{−p}

where here we are concerned with the value of ζ(4). Since the zeta function is one of the most important functions in all of mathematics,[18] one can just look up its value in a table to find that ζ(4) = π⁴/90, thus giving the above-stated result that the nasty integral is π⁴/15. However, in the unlikely event that you are stranded on a desert island and do not have access to a table, you can even evaluate this sum explicitly, which we do in the appendix to this chapter.

[18] One of the most important unproven conjectures in all of mathematics is known as the Riemann hypothesis, and is concerned with determining for which values of p one has ζ(p) = 0. The hypothesis was written down in 1859 by Bernhard Riemann (the same guy who invented Riemannian geometry, crucial to general relativity) and has defied proof ever since.
The Clay Mathematics Institute has offered one million dollars for a successful proof.

2.2.3 Debye's "Interpolation"

Unfortunately, Debye now has a problem. In the expression derived above, the heat capacity is proportional to T³ up to arbitrarily high temperature. We know, however, that the heat capacity should level off to 3k_B N at high T. Debye understood that the problem with his approximation is that it allows an infinite number of sound wave modes, up to arbitrarily large k. This would imply more sound wave modes than there are atoms in the entire system. Debye guessed (correctly) that really there should be only as many modes as there are degrees of freedom in the system. We will see in sections 8-12 below that this is an important general principle.

To fix this problem, Debye decided not to consider sound waves above some maximum frequency ω_cutoff, with this cutoff frequency chosen such that there are exactly 3N sound wave modes in the system (3 dimensions of motion times N particles). We thus define ω_cutoff via

    3N = ∫_0^{ω_cutoff} dω g(ω)        (2.5)

We correspondingly rewrite Eq. 2.2 for the energy (dropping the zero-point contribution) as

    ⟨E⟩ = ∫_0^{ω_cutoff} dω g(ω) ℏω n_B(βℏω)        (2.6)

Note that at very low temperature this cutoff does not matter at all, since for large β the Bose factor n_B goes to zero very rapidly at frequencies well below the cutoff frequency anyway.

Let us now check that this cutoff gives us the correct high temperature limit. For high temperature,

    n_B(βℏω) = 1/(e^{βℏω} − 1) → k_B T/(ℏω)

Thus in the high temperature limit, invoking Eqs. 2.5 and 2.6, we obtain

    ⟨E⟩ = k_B T ∫_0^{ω_cutoff} dω g(ω) = 3 k_B T N

yielding the Dulong-Petit high temperature heat capacity C = ∂⟨E⟩/∂T = 3k_B N, i.e., 3k_B per atom.

For completeness, let us now evaluate our cutoff frequency:

    3N = ∫_0^{ω_cutoff} dω g(ω) = 9N ∫_0^{ω_cutoff} dω ω²/ω_d³ = 3N (ω_cutoff³/ω_d³)

We thus see that the correct cutoff frequency is exactly the Debye frequency ω_d. Note that k_d = ω_d/v = (6π²n)^{1/3} (from Eq.
2.4) is on the order of the inverse interatomic spacing of the solid.

More generally (in neither the high nor the low temperature limit) one has to evaluate the integral of Eq. 2.6, which cannot be done analytically. Nonetheless it can be done numerically, and the result can then be compared to actual experimental data, as shown in Fig. 2.3. It should be emphasized that the Debye theory makes predictions without any free parameters, as compared to the Einstein theory, which had the unknown Einstein frequency ω as a free fitting parameter.

2.2.4 Some Shortcomings of the Debye Theory

While Debye's theory is remarkably successful, it does have a few shortcomings.

[Figure 2.3: Plot of the heat capacity of silver. The y-axis is C in units of cal/(K·mol); in these units, 3R ≈ 5.96. Over the entire experimental range the fit to the Debye theory is excellent. At low T it correctly recovers the T³ dependence, and at high T it converges to the law of Dulong-Petit.]

• The introduction of the cutoff seems very ad hoc. This seems like a successful cheat rather than real physics.

• We have assumed that sound waves follow the law ω = vk even for very large values of k (on the order of the inverse lattice spacing), whereas the entire idea of sound is a long-wavelength idea, which doesn't seem to make sense for high enough frequency and short enough wavelength. At any rate, it is known that at high enough frequency the law ω = vk no longer holds.

• Experimentally, the Debye theory is very accurate, but it is not exact at intermediate temperatures.

• At very low temperatures, metals have a term in the heat capacity that is proportional to T, so the overall heat capacity is C = aT + bT³, and at low enough T the linear term will dominate.[19] You can't see this contribution in the plot of Fig. 2.3, but at very low T it becomes evident.
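The numerical evaluation of Eq. 2.6 mentioned above can be sketched in a few lines (my sketch, not from the notes; units k_B = ℏ = ω_d = 1 and N = 1, so C is per atom): integrate up to the cutoff and differentiate numerically.

```python
import math

def debye_energy(T, steps=100000):
    # <E> = int_0^1 g(w) * w * n_B(w/T) dw  with  g(w) = 9 w^2  (per atom, cutoff at omega_d = 1)
    h = 1.0 / steps
    total = 0.0
    for i in range(steps):
        w = (i + 0.5) * h               # midpoint rule avoids the w = 0 endpoint
        total += 9.0 * w**3 / (math.exp(w / T) - 1.0)
    return total * h

def heat_capacity(T, dT=1e-4):
    # C = dE/dT by central difference
    return (debye_energy(T + dT) - debye_energy(T - dT)) / (2 * dT)

# High T: the cutoff yields Dulong-Petit, C -> 3 k_B per atom.
assert abs(heat_capacity(20.0) - 3.0) < 1e-2

# Low T: the cutoff is irrelevant and C follows the T^3 law, C = (12 pi^4/5) k_B T^3.
T = 0.02
assert abs(heat_capacity(T) / (12 * math.pi**4 / 5 * T**3) - 1.0) < 0.05
```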
Of these shortcomings, the first three can be handled more properly by treating the details of the crystal structure of materials accurately (which we will do much later in this course). The final issue requires us to carefully study the behavior of electrons in metals to discover the origin of this linear-T term (see section 4.2 below).

Nonetheless, despite these problems, Debye's theory was a substantial improvement over Einstein's.[20]

[19] In magnetic materials there may be still other contributions to the heat capacity, reflecting the energy stored in magnetic degrees of freedom. See part VII below.
[20] Debye was pretty smart too... even though he was a chemist.

2.3 Summary of Specific Heat of Solids

• (Much of the) heat capacity (specific heat) of materials is due to atomic vibrations.
• The Boltzmann and Einstein models consider these vibrations as N simple harmonic oscillators.
• Boltzmann's classical analysis obtains the law of Dulong-Petit, C = 3Nk_B = 3R.
• Einstein's quantum analysis shows that at temperatures below the oscillator frequency, degrees of freedom freeze out, and the heat capacity drops exponentially. The Einstein frequency is a fitting parameter.
• The Debye model treats oscillations as sound waves. No fitting parameters.
  – ω = vk, similar to light (but three polarizations, not two)
  – quantization similar to Planck's quantization of light
  – a maximum frequency cutoff (ℏω_Debye = k_B T_Debye) is necessary to obtain a total of only 3N degrees of freedom
  – obtains Dulong-Petit at high T and C ∼ T³ at low T.
• Metals have an additional (albeit small) linear-T term in the heat capacity, which we will discuss later.

References

Almost every book covers the material introduced in this chapter, but frequently it is done late in the book, only after the idea of phonons is introduced. We will get to phonons in chapter 8.
Before we get there, the following references cover this material without discussion of phonons:

• Goodstein, sections 3.1 and 3.2
• Rosenberg, sections 5.1 through 5.13 (good problems included)
• Burns, sections 11.3 through 11.5 (good problems included)

Once we get to phonons, we can look back at this material again. Discussions are then given also by

• Dove, sections 9.1 and 9.2
• Ashcroft and Mermin, chapter 23
• Hook and Hall, section 2.6
• Kittel, beginning of chapter 5

2.4 Appendix to this Chapter: ζ(4)

The Riemann zeta function, as mentioned above, is defined as

    ζ(p) = Σ_{n=1}^∞ n^{−p}

This function occurs frequently in physics, not only in the Debye theory of solids, but also in the Sommerfeld theory of electrons in metals (see chapter 4 below), as well as in the study of Bose condensation. As mentioned above in footnote 18 of this chapter, it is also an extremely important quantity to mathematicians.

In this appendix we are concerned with the value of ζ(4). To evaluate this, we write a Fourier series for the function x² on the interval [−π, π]. The series is given by

    x² = a_0/2 + Σ_{n>0} a_n cos(nx)

with coefficients given by

    a_n = (1/π) ∫_{−π}^{π} dx x² cos(nx)

These can be calculated straightforwardly to give

    a_n = 2π²/3           for n = 0
    a_n = 4(−1)ⁿ/n²       for n > 0

We now calculate an integral in two different ways. First we can directly evaluate

    ∫_{−π}^{π} dx (x²)² = 2π⁵/5

On the other hand, using the above Fourier decomposition of x², we can write the same integral as

    ∫_{−π}^{π} dx (x²)² = ∫_{−π}^{π} dx [a_0/2 + Σ_{n>0} a_n cos(nx)] [a_0/2 + Σ_{m>0} a_m cos(mx)]
                        = ∫_{−π}^{π} dx (a_0/2)² + Σ_{n>0} ∫_{−π}^{π} dx [a_n cos(nx)]²

where we have used the orthogonality of Fourier modes to eliminate cross terms in the product.
We can do these integrals to obtain
    ∫_{−π}^{π} dx (x^2)^2 = π (a_0^2/2 + Σ_{n>0} a_n^2) = 2π^5/9 + 16π ζ(4).
Setting this expression equal to 2π^5/5 gives us the result ζ(4) = π^4/90.

Chapter 3

Electrons in Metals: Drude Theory

The fundamental characteristic of a metal is that it conducts electricity. At some level the reason for this conduction boils down to the fact that electrons are mobile in these materials. In later chapters we will be concerned with the question of why electrons are mobile in some materials but not in others, being that all materials have electrons in them. For now, we take as given that there are mobile electrons and we would like to understand their properties.

J. J. Thomson's 1896 discovery of the electron ("corpuscles of charge" that could be pulled out of metal) raised the question of how these charge carriers might move within the metal. In 1900 Paul Drude[1] realized that he could apply Boltzmann's kinetic theory of gases to understanding electron motion within metals. This theory was remarkably successful, providing a first understanding of metallic conduction.[2]

Having studied the kinetic theory of gases, Drude theory should be very easy to understand. We will make three assumptions about the motion of electrons:

1. Electrons have a scattering time τ. The probability of scattering within a time interval dt is dt/τ.
2. Once a scattering event occurs, we assume the electron returns to momentum p = 0.
3. In between scattering events, the electrons, which are charge −e particles, respond to the externally applied electric field E and magnetic field B.

The first two of these assumptions are exactly those made in the kinetic theory of gases.[3] The third assumption is just a logical generalization to account for the fact that, unlike gas molecules,

[1] Pronounced roughly "Drood-a".

[2] Sadly, neither Boltzmann nor Drude lived to see how much influence this theory really had; in unrelated tragic events, both of them committed suicide in 1906. Boltzmann's famous student, Ehrenfest, also committed suicide some years later.
Why so many highly successful statistical physicists took their own lives is a bit of a mystery.

[3] Ideally we would do a better job with our representation of the scattering of particles. Every collision should consider two particles having initial momenta p_1^initial and p_2^initial and then scattering to final momenta p_1^final and p_2^final so as to conserve both energy and momentum. Unfortunately, keeping track of things so carefully makes the problem extremely difficult to solve. Assumption 1 is not so crazy as an approximation, being that there really is a typical time between scattering events in a gas. Assumption 2 is a bit more questionable, but on average the final

electrons are charged and must therefore respond to electromagnetic fields.

We consider an electron with momentum p at time t and we ask what momentum it will have at time t + dt. There are two terms in the answer: there is a probability dt/τ that it will scatter to momentum zero. If it does not scatter to momentum zero (with probability 1 − dt/τ), it simply accelerates as dictated by its usual equation of motion dp/dt = F. Putting the two terms together we have
    ⟨p(t + dt)⟩ = (1 − dt/τ)(p(t) + F dt) + 0 (dt/τ)
or, keeping only terms first order in dt,[4]
    dp/dt = F − p/τ     (3.1)
where here the force F on the electron is just the Lorentz force
    F = −e(E + v × B).
One can think of the scattering term −p/τ as just a drag force on the electron. Note that in the absence of any externally applied field the solution to this differential equation is just an exponentially decaying momentum
    p(t) = p_initial e^{−t/τ}
which is what we should expect for particles that lose momentum by scattering.

3.1 Electrons in Fields

3.1.1 Electrons in an Electric Field

Let us start by considering the case where the electric field is nonzero but the magnetic field is zero. Our equation of motion is then
    dp/dt = −eE − p/τ.
In steady state, dp/dt = 0, so we have
    mv = p = −eτE
with m the mass of the electron and v its velocity.
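The steady-state result mv = −eτE can be checked by integrating the Drude equation of motion directly in time. A minimal sketch in toy units (e = m = 1; the field, scattering time, and step sizes are all arbitrary illustrative choices):

```python
import numpy as np

e, m = 1.0, 1.0                 # toy units
tau = 0.5                       # scattering time (illustrative)
E = np.array([2.0, 0.0, 0.0])   # static electric field (illustrative)

p = np.zeros(3)                 # start from rest
dt = 1e-4
for _ in range(200_000):        # integrate dp/dt = -eE - p/tau out to t = 20 >> tau
    p += dt * (-e * E - p / tau)

print(p, -e * tau * E)          # both close to [-1, 0, 0]
```

After many scattering times the momentum has relaxed onto the drift value −eτE regardless of the initial condition, which is what the drag-force picture suggests.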
Now, if there is a density n of electrons in the metal, each with charge −e, and they are all moving at velocity v, then the electrical current is given by
    j = −env = (e^2 τ n / m) E
or, in other words, the conductivity of the metal, defined via j = σE,[5] is given by
    σ = e^2 τ n / m     (3.2)
By measuring the conductivity of the metal (assuming we know both the charge and mass of the electron) we can determine the product of the density and the scattering time of the electron.

3.1.2 Electrons in Electric and Magnetic Fields

Let us continue on to see what other predictions come from Drude theory. Consider the transport equation (3.1) for a system in both an electric and a magnetic field. We now have
    dp/dt = −e(E + v × B) − p/τ.
Again setting this to zero in steady state, and using p = mv and j = −nev, we obtain an equation for the steady state current
    0 = −eE + (j × B)/n + (m/(neτ)) j
or
    E = (1/(ne)) j × B + (m/(ne^2 τ)) j.
We now define the 3 by 3 resistivity matrix ρ̃, which relates the current vector to the electric field vector via
    E = ρ̃ j
such that the components of this matrix are given by
    ρ_xx = ρ_yy = ρ_zz = m/(ne^2 τ)
and, if we imagine B oriented in the ẑ direction, then
    ρ_xy = −ρ_yx = B/(ne)
and all other components of ρ̃ are zero.

[3] (continued) momentum after a scattering event is indeed zero (if you average momentum as a vector). However, obviously it is not correct that every particle has zero kinetic energy after a scattering event. This is a defect of the approach.

[4] Here we really mean ⟨p⟩ when we write p. Since our scattering is probabilistic, we should view all quantities (such as the momentum) as being an expectation over these random events. A more detailed theory would keep track of the entire distribution of momenta rather than just the average momentum. Keeping track of distributions in this way leads one to the Boltzmann transport equation, which we will not discuss.
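The resistivity matrix just written down can be assembled and sanity-checked numerically. A minimal sketch in toy units (all numerical values are arbitrary illustrative choices, not material data):

```python
import numpy as np

n, e, m, tau, B = 2.0, 1.0, 1.0, 0.25, 3.0   # illustrative toy values

rho = np.zeros((3, 3))
rho[0, 0] = rho[1, 1] = rho[2, 2] = m / (n * e**2 * tau)
rho[0, 1] = B / (n * e)                      # rho_xy
rho[1, 0] = -B / (n * e)                     # rho_yx = -rho_xy

j = np.array([1.0, 0.0, 0.0])                # drive a current along x
E_field = rho @ j                            # E = rho j

# A Hall field E_y = rho_yx j_x appears, perpendicular to both j and B
print(E_field)                               # [m/(n e^2 tau), -B/(n e), 0]

# At B = 0, inverting rho recovers the Drude conductivity times the identity
rho_B0 = np.diag(3 * [m / (n * e**2 * tau)])
print(np.linalg.inv(rho_B0))                 # (n e^2 tau / m) * identity
```

Note that with B nonzero the full inverse of ρ̃ (the conductivity matrix) also acquires off-diagonal components; only its zz entry keeps the simple form of equation 3.2.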
This off-diagonal term in the resistivity is known as the Hall resistivity, named after Edwin Hall, who discovered in 1879 that when a magnetic field is applied perpendicular to a current flow, a voltage can be measured perpendicular to both current and magnetic field (see Fig. 3.1). As a homework problem you might consider a further generalization of Drude theory to finite frequency conductivity, where it gives some interesting (and frequently accurate) predictions.

The Hall coefficient R_H is defined as
    R_H = ρ_yx / B
which in the Drude theory is given by
    R_H = −1/(ne).

[5] A related quantity is the mobility, defined by v = μE, which is given in Drude theory by μ = eτ/m. We will discuss mobility further in section 16.1.1 below.

Figure 3.1: Edwin Hall's 1879 experiment. The voltage measured perpendicular to both the magnetic field and the current is known as the Hall voltage.

The Hall voltage is proportional to B and inversely proportional to the electron density (at least in Drude theory). This then allows us to measure the density of electrons in a metal.

Aside: One can also consider turning this experiment on its head. If you know the density of electrons in your sample, you can use a Hall measurement to determine the magnetic field. This is known as a Hall sensor. Since it is hard to measure small voltages, Hall sensors typically use materials, such as semiconductors, where the density of electrons is low, so R_H and hence the resulting voltage is large.

Let us then calculate n = −1/(eR_H) for various metals and divide it by the density of atoms. This should give us the number of free electrons per atom. Later on we will see that it is frequently not so hard to estimate the number of electrons in a system. A short description is that electrons bound in the core shells of the atoms are never free to travel throughout the crystal, whereas the electrons in the outer shell may be free (we will discuss later when these electrons are free and when they are not).
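The procedure just described is a one-line calculation: the Drude relation R_H = −1/(ne) gives the carrier density from a measured Hall coefficient, which is then divided by the atomic density. The numbers below are illustrative placeholders, not measured data:

```python
e = 1.602e-19          # electron charge magnitude, C
R_H = -5.0e-11         # hypothetical measured Hall coefficient, m^3/C

n = -1.0 / (e * R_H)   # carrier density from R_H = -1/(n e)
n_atoms = 8.5e28       # hypothetical atomic density, m^-3

print(n)               # ~1.25e29 electrons per m^3
print(n / n_atoms)     # ~1.47 free electrons per atom
```

If Drude theory held exactly, the final ratio would equal the valence of the atom; the table below shows how well (and how badly) this works in practice.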
The number of electrons in the outermost shell is known as the valence of the atom.

    Material | (−1/eR_H) / (density of atoms) | Valence
    Li       |  0.8                           | 1
    Na       |  1.2                           | 1
    K        |  1.1                           | 1
    Cu       |  1.5                           | 1 (usually)
    Be       | −0.2 (but anisotropic)         | 2
    Mg       | −0.4                           | 2

Table 3.1: Comparison of the valence of various atoms to the measured number of free electrons per atom (measured via the Hall resistivity and the atomic density). In Drude theory the middle column should give the number of free electrons per atom, which is the valence.
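Once the density is known from a Hall measurement, equation 3.2 lets us extract the scattering time τ from the measured conductivity. A rough sketch with textbook-scale numbers for copper (the values of σ and n are approximate and should be treated as illustrative):

```python
sigma = 5.9e7          # conductivity of copper, S/m (approximate, room temperature)
n = 8.5e28             # electron density of copper, m^-3 (approximate)
e = 1.602e-19          # electron charge magnitude, C
m = 9.109e-31          # electron mass, kg

tau = sigma * m / (n * e**2)   # invert sigma = n e^2 tau / m
print(tau)                     # roughly 2.5e-14 s
```

A scattering time of order 10^-14 s is typical of metals at room temperature in this picture.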
