Looking back and ahead at condensed matter physics
JUN 01, 2006
What began as independent and disparate specialties has matured into a unified field whose remarkable diversity has influenced the development of many others, including quantum physics and chemistry, computer science, and engineering.
The American Institute of Physics, founded in 1931, has grown and prospered during the past 75 years. And so has the field of condensed matter physics (CMP). Today, CMP is the largest branch of physics in the US and in most countries around the world. But the field’s stature has its origins in the slow knitting together of a diverse set of specialties that flourished in the early 20th century. Indeed, one can argue that solid-state physics did not exist in 1931—at least in name. Only starting roughly in the mid-1930s did the term gradually become the familiar label attached to conferences, journals, and research groups. Not until 1947 did the American Physical Society establish the division of solid-state physics (DSSP) (ref. 1).
The name change from solid-state physics to CMP was also gradual, motivated partly by physicists’ desire to include the study of materials like quantum liquids and liquid crystals, which have nonsolid phases but are nevertheless thought to lie within the research purview of work on solids. Today, both names are used, although by the late 1970s the label CMP became somewhat official in the US when the DSSP became the division of condensed matter physics (DCMP). Many researchers also associate the field of materials science with CMP; when determining the size of the condensed matter community, people often include the membership of APS’s division of materials physics with that of the DCMP.
Partly because of its breadth, one can argue that CMP is at the center of physics. The size and energy scales of interest to CMP researchers are in the middle, to put it loosely, of what physicists study. CMP’s ties to fields outside physics proper are strong as well: Its sizes and energy scales are not too different from the sizes and energies of objects studied by chemists, biologists, engineers, and computer scientists. Collaborative work among researchers in CMP and other fields, both inside and outside physics, is common.
Within physics, some CMP studies can be viewed as basic research, others more applied. What counts as basic or applied, as with other labels, is subjective and difficult to quantify. However, CMP has clearly been central to the development of modern technology and has greatly influenced other branches of physics. It’s no surprise then that the field has grown so large and robust.
Looking back
Scientists knew a lot about the properties of solids by the time AIP was founded. X-ray diffraction studies, for instance, provided the positions of atoms in a crystal lattice, so researchers could infer a great deal about crystal structure. And Felix Bloch, Arnold Sommerfeld, and others had by 1931 used quantum theory to augment the classical model of electrons in metals, proposed by Paul Drude around 1900. In fact, it was possible for Sommerfeld and Hans Bethe to write the now classic review (ref. 2) that covered a major part of the theory of solids in the 1930s, just as Bethe did for nuclear physics in the series of Reviews of Modern Physics articles known today as the Bethe bible. No one would attempt to write a comprehensive bible of CMP today; the subject is too diverse. In 1931, Léon Brillouin introduced the concept of zones in a periodic lattice to represent the allowed energy states of an electron in a crystal (see figure 1). The concept is now ubiquitous in the theory of solids. But fascinating phenomena were still waiting to be explained. The microscopic origins of Heike Kamerlingh Onnes’s 1911 discovery of superconductivity, for example, would remain mysterious until mid-century, even though theorists such as Werner Heisenberg and Albert Einstein had tried to understand them.
Figure 1. The reciprocal lattice, a constructed patchwork of points in momentum space and lines representing Bragg diffraction planes in a crystal, provides a geometric description of the allowed states that electrons or phonons can occupy—that is, what momenta they can take while moving throughout a crystal. Those states fall within various zones, determined by the planes and named after Léon Brillouin, the scientist who originally developed the idea.
(Adapted from L. Brillouin, Die Quantenstatistik, Springer, Berlin [1931] as reproduced in ref. 2.)
Quantum models for magnetism, electron transport, thermal effects, and other solid-state properties explained many puzzles that scientists could not solve previously because of the limitations of classical physics. But the ability to predict properties of specific materials hadn’t developed very far in the 1930s. The progress achieved in atomic physics—sorting out electronic energy levels and optical properties, for example—could not be duplicated for solids at that time. Because the sharp atomic levels spread into bands that overlap when atoms form into solids, the optical spectra resulting from electronic transitions in solids are broad and featureless, especially compared to atomic spectra. And early band-structure calculations were crude. The need for detailed electronic models became more apparent after John Bardeen, Walter Brattain, and William Shockley’s invention of the transistor in 1947. Researchers also quickly realized the need for purer materials, cleaner surfaces and interfaces, and a better understanding of the microscopic nature of solids.
Making models
How has the conceptual basis of the field evolved during the past 75 years? Physicists currently have two general models of a solid. The first model is based on the perspective of a solid as a collection of interacting atoms. Within crystals, the cores of those atoms—that is, the atomic nuclei and core electrons—are arranged in a periodic array while the outer valence electrons in different atoms interact with each other to form metallic, covalent, van der Waals, and ionic bonds. In this collection-of-atoms model (model 1), one views and calculates solid-state properties of a crystal as if they arise from interacting atoms (see figure 2(a)).
Figure 2. Two models of a solid. (a) One way to view a solid is as a collection of interacting atoms. Outer valence electrons are only weakly bound to atoms in a periodic lattice and therefore can roam throughout the crystal, although they are somewhat concentrated in the bonds between atoms. Those outer electrons interact weakly with each other and with nuclei and the tightly bound core electrons. (b) Alternatively, a solid can be modeled as a system that responds to probes. Light, heat, and external fields, for instance, represented here by a hammer, produce elementary excitations in the form of quantized vibrational, electronic, or spin waves that propagate inside the solid. When “heard” and analyzed, those excitations reveal details about the material’s properties.
Model 2 is the elementary-excitation model. We know about the properties of solids by way of experiments that probe them. Measuring a solid’s response to a probe, and explaining that response in terms of a response function, reveals details about the solid. So instead of talking about the motions of individual atomic cores, we talk about their collective behavior in terms of phonons, quantized lattice vibrations that propagate through the solid. Similarly, the collective excitations of electrons are known as plasmons. Hence, in model 2, the solid responds as if elementary excitations, which in some cases bear little resemblance to interacting atoms, dominate its properties (see figure 2(b)).
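As a simple illustration of a collective excitation (a standard textbook result, not a calculation from this article), a one-dimensional chain of atoms of mass $M$ connected by springs of stiffness $K$ supports phonons with the dispersion relation

```latex
\omega(k) \;=\; 2\sqrt{\frac{K}{M}}\,\left|\sin\frac{ka}{2}\right| ,
```

where $a$ is the lattice spacing and the wave vector $k$ ranges over the first Brillouin zone, $-\pi/a \le k \le \pi/a$. The vibrations of the chain are described entirely by these collective modes; no reference to the motion of any individual atom is needed.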
Both models motivate much of the current—and probably future—experimental and theoretical work in CMP. Scientists now have excellent ways to use model 1 to calculate and predict the properties of solids. I focus on one approach in which I have some expertise.
Even 75 years ago, researchers knew that the valence electrons, at least in some metals, behaved as if they were nearly free inside the metal (for historical perspective, particularly on Bloch theory, see the article by Hans Bethe and David Mermin in Physics Today, June 2004, page 53). This behavior also meant that the effective electron–core interaction had to be weak and the electron–electron correlations could not dominate the properties of the electron gas or liquid for those systems. Two arguments that justified treating electrons as nearly free have their roots in work by Enrico Fermi and Lev Landau. Fermi’s work focused on the electron–core interaction in atoms, and Landau’s looked at the electron–electron interaction in solids.
Fermi’s contribution arose from a calculation he did in 1934 on the highly excited states of alkali atoms (ref. 3).
He reasoned that the behavior of those outer electron states should depend only on the properties of the wavefunction’s tail rather than on the properties of the wavefunction near the ionic core of the alkali atom. Wavefunction behavior near the core usually involves large oscillations and is difficult to treat, while far from the core the wavefunction is normally a smooth function of distance. So Fermi’s reasoning greatly simplified calculations of valence electron interactions with core electrons and nuclei: He simply replaced the true wavefunction with a smooth pseudowavefunction and the strong ionic Coulomb potential with a weak pseudopotential.
Because solid-state effects usually depend on the outer parts of the wavefunction, the Fermi scheme is applicable to solids. Considerable work on the pseudopotential approach has followed in the decades since Fermi introduced his version, much of it quite distinct from Fermi’s approach. For example, James Phillips and Leonard Kleinman proved in 1959 that because the Pauli principle requires that valence-electron wavefunctions be orthogonal to those of core electrons, the valence electrons are effectively pushed out of the core region (ref. 4).
Using a scheme proposed by Conyers Herring, Phillips and Kleinman showed that the orthogonalization could be represented as a repulsive potential, which when added to the attractive ionic core potential produces a net effective weak pseudopotential of the kind Fermi envisioned.
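In modern textbook form (schematic, and not Phillips and Kleinman’s original notation), the effective potential felt by a smooth valence pseudowavefunction of energy $E$ is

```latex
V_{\mathrm{PK}} \;=\; V \;+\; \sum_{c} \left(E - E_{c}\right) |\psi_{c}\rangle\langle\psi_{c}| ,
```

where $V$ is the strong attractive ionic potential and the sum runs over core states $\psi_c$ with energies $E_c$. Because $E > E_c$ for valence states, the second term is repulsive and largely cancels $V$ near the core, leaving the weak net pseudopotential of the kind Fermi envisioned.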
A calculational and conceptual breakthrough came when the pseudopotential was fit to experiment. It became possible to show that the electronic structure of solids could be calculated accurately enough for the theory to make successful predictions. Researchers used the empirical pseudopotential method (EPM), as the technique was called (ref. 5), to obtain electronic structure and optical response functions such as the reflectivity of solids. In the 1960s and 1970s, the EPM solved the problem of how features arose in the optical spectra of semiconductors in the visible and UV region, where the optical structure is dominated by electronic interband transitions. In a sense, solid-state physics had caught up to atomic physics in using quantum theory to identify spectra. Theorists could assign a pseudopotential to each atom and, knowing only the atomic number, they were able to extend the Fermi scheme to produce ionic pseudopotentials for any atom.
Walter Kohn, Pierre Hohenberg, and Lu Sham added density functional theory in the mid-1960s to account for electron–electron interactions (ref. 6).
Thereafter, to calculate the pseudopotentials and electronic properties, theorists required only the atomic numbers of the constituent atoms and the crystal structure.
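In the resulting Kohn–Sham framework (written here in a standard textbook form), each electron moves in an effective one-particle potential determined self-consistently by the density:

```latex
\left[ -\frac{\hbar^{2}}{2m}\nabla^{2} + v_{\mathrm{ion}}(\mathbf{r}) + v_{\mathrm{H}}(\mathbf{r}) + v_{\mathrm{xc}}(\mathbf{r}) \right] \psi_{i}(\mathbf{r}) = \varepsilon_{i}\,\psi_{i}(\mathbf{r}),
\qquad
n(\mathbf{r}) = \sum_{i\,\in\,\mathrm{occ}} |\psi_{i}(\mathbf{r})|^{2},
```

where $v_{\mathrm{ion}}$ can be taken as the ionic pseudopotential, $v_{\mathrm{H}}$ is the Hartree potential of the electron density, and $v_{\mathrm{xc}}$ is the exchange–correlation potential of density functional theory.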
One of the most interesting challenges in the 1970s was the study of chemical bonds. Using wavefunctions obtained through the empirical pseudopotential, researchers could calculate the electronic charge density as a function of position in a crystal. The recipe was simply to square the wavefunction of each state and sum the contributions of all occupied states. Figure 3 compares the results of a calculation for silicon with the valence charge density measured from an x-ray scattering experiment.
Figure 3. The charge density of bulk silicon, as measured using x-ray scattering (top) and calculated using pseudopotential theory (bottom). The contour plots reveal the density of valence electrons that make up the covalent bonds between silicon atoms (red); white signifies low electron density and dark blue high density. A comparison of the two plots reveals the striking agreement with experiment that can be achieved using the pseudopotential method, which describes the interaction of a valence electron with an atomic core.
(Experimental plot adapted from ref. 15, theoretical plot adapted from ref. 16.)
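The recipe above can be sketched in a few lines. This toy example (particle-in-a-box states on a one-dimensional grid standing in for Bloch states; all numbers are illustrative, not material-specific) squares each occupied wavefunction and sums the results:

```python
import numpy as np

# Toy 1D "crystal": particle-in-a-box states stand in for Bloch states.
# Real calculations expand the wavefunctions in 3D plane waves.
L = 10.0                                  # box length (arbitrary units)
x = np.linspace(0.0, L, 1000, endpoint=False)
dx = x[1] - x[0]

def psi(n):
    """Normalized state n of a particle in a box of length L."""
    return np.sqrt(2.0 / L) * np.sin(n * np.pi * x / L)

occupied = [1, 2, 3]                      # hypothetical occupied states
rho = sum(np.abs(psi(n))**2 for n in occupied)   # charge density

# Each state is normalized, so the density integrates to the
# number of occupied states (here, three "electrons").
print(rho.sum() * dx)                     # ≈ 3.0
```

Integrating the resulting density over the box recovers the number of occupied states, as it must, since each wavefunction is normalized.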
The next major advance was the development of a method to predict the total energy of a solid in different structures. Essentially, one attaches a pseudopotential to each atom and then allows the electrons to rearrange in response to the change in lattice configurations. The atomic mass is added as input, and the vibrational structure, electron–lattice interactions, structural properties, and even superconducting properties can be computed from first principles. This total-energy method, combined with the electronic-structure methods described above, has evolved into the standard model of solids (ref. 7).
The standard model is both a model and an approach for explaining and predicting a host of ground-state properties of solids, surfaces, clusters, nanosystems, and molecules. Added to the standard model is the so-called GW scheme, developed by Mark Hybertsen and Steven Louie (ref. 8), which allows theorists to accurately calculate excited-state properties as well.
Fermi liquids
Landau’s description of the electron–electron interaction is commonly referred to as the Fermi liquid model (ref. 9).
In contrast to the free or nearly free electron gas approach, Landau considered electron correlations that would change the properties associated with a free electron gas model. Eugene Wigner and others had earlier examined the ground-state properties of an electron gas or liquid. Landau’s focus on excited-state properties showed that a one-to-one correspondence exists between elementary excitations known as quasiparticles (or quasielectrons in this case) and the electrons of the system. He used phase-space-based arguments to reveal how quasiparticles could be long lived and act like “dressed electrons”—that is, electrons whose mass is effectively enhanced in response to the presence of other electrons.
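In standard textbook notation (not Landau’s original), the quasiparticle energy near the Fermi surface takes the free-particle-like form

```latex
\varepsilon(p) \;\approx\; \varepsilon_{F} + v_{F}^{*}\,(p - p_{F}),
\qquad
m^{*} = \frac{p_{F}}{v_{F}^{*}} ,
```

so the interactions are absorbed into a renormalized effective mass $m^{*}$. Phase-space restrictions near the Fermi surface make the quasiparticle decay rate vanish as $(p - p_{F})^{2}$, which is why the quasiparticles are long lived and the dressed-electron picture holds.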
Like the pseudopotential theory, Landau’s theory was phenomenological in its early stages and became more ab initio as it evolved in theorists’ hands. Using many-body techniques, theorists could calculate the quasiparticle energy from the pole of a suitable Green’s function; the imaginary part of the pole’s position gave an estimate of the quasiparticle’s lifetime. Most of the calculations done with the standard model assume Fermi liquid theory to be valid.
Although phenomenally successful, the standard model has its limitations. Perhaps most important, its commonly used form cannot describe well the properties of highly correlated electrons in such materials as transition-metal oxides and high-Tc superconductors. But recent additions to the standard model have allowed researchers to calculate novel properties of highly correlated electron systems.
Model 2—a solid as collection of elementary excitations—can be viewed as a way of describing response functions. If the probe is an electromagnetic field, the response function is the frequency- and wave-vector-dependent dielectric function. If a magnetic field is the probe, then the magnetic susceptibility is the response function. Similarly for temperature, the response function is the heat capacity. The list goes on.
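As a minimal sketch of a response function (a generic Lorentz-oscillator model from textbooks, not tied to any material in this article; every parameter below is illustrative), the code evaluates a frequency-dependent dielectric function and the normal-incidence reflectivity that follows from it:

```python
import numpy as np

# Generic Lorentz-oscillator dielectric function.
omega = np.linspace(0.1, 6.0, 500)   # probe frequency (arbitrary units)
omega_0 = 2.0                        # resonance frequency
omega_p = 1.5                        # oscillator (plasma) strength
gamma = 0.1                          # damping

eps = 1.0 + omega_p**2 / (omega_0**2 - omega**2 - 1j * gamma * omega)

# Normal-incidence reflectivity from the Fresnel formula.
n = np.sqrt(eps)                     # complex refractive index
R = np.abs((n - 1.0) / (n + 1.0))**2

print(omega[np.argmax(R)])           # peak lies just above the resonance
```

The reflectivity is large in the band just above the resonance, where the real part of the dielectric function is negative, loosely analogous to the interband features in semiconductor spectra discussed above.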
Speaking metaphorically, as pictured in figure 2(b), you hit a sample with a hammer, listen, and then describe what you hear in terms of elementary excitations that were created by the impact. Most elementary excitations can be put in one of two categories: collective excitations and quasiparticles. Collective excitations—like phonons, plasmons, and magnons—behave like bosons and don’t resemble their parent particles. On the other hand, quasiparticles do resemble particles in the solid. A polaron, for instance, is a quasielectron with augmented mass; a hole is a quasiparticle that represents the absence of an electron; and quasiparticles that arise in the context of quantum Hall experiments can resemble electrons but have fractional charges.
If you can describe via a response function the excited states of a system, such as a superconductor, and explain the nature of the measurement in terms of excited quasiparticles, which for a superconductor have electron- and hole-like character, then you’ve achieved a bona fide description of nature and can believe in those admittedly fictitious particles.
Models 1 and 2 can both be associated with philosophical discussions about emergence and reductionism. Model 2 is clearly an emergence idea, whereby the organization of component parts largely determines properties. New particles or an entirely new state of the solid emerges as a result of the particular way that components interact. Using the formalism of relativistic quantum electrodynamics one can describe how elementary excitations can be created and destroyed. In reductionism, in contrast, physicists partition nature into its simplest component parts: Matter is reduced to molecules, then to atoms, then to nuclei and electrons, and so on. Model 1 is more intuitively that kind of approach because the interactions are reduced to those among valence electrons and atomic cores. Theorists know the particles and their interactions with each other, and they straightforwardly deduce properties using quantum mechanical methods. Often they can use model 1 to calculate the properties of the elementary excitations of model 2.
As a practicing CMP theorist, when asked if I’m a believer in emergence or reductionism, I answer the way I do when a student asks me if light is a particle or a wave: I say yes.
Looking ahead
How do physicists plan for the future in a diverse field like CMP? How do professors interest students in it? How does the physics community interest the public and the funding agencies? Such questions are difficult to answer. As APS president during last year’s 100th anniversary of Einstein’s miraculous year, I wrestled with them while traveling around the world talking about the wonders of physics. I reached out at many levels and spoke enthusiastically about all branches of physics—the latest advances, the benefits of physics to humankind, and the fundamental knowledge it brings to the world.
I’ve heard that Fermi said all branches of physics are equally interesting. I felt that way when I was a graduate student and still do. However, last year I noticed a big difference in my ability to promote different fields. Particle physics and astrophysics were easiest to showcase, and CMP was hardest. I could point to the excellent recent report that listed nine fundamental questions in particle physics and astrophysics (ref. 10), how they would be addressed, and what machines should be built to help solve fundamental problems.
Most people can understand and appreciate questions that address how the universe began, how old and big it is, and the nature of matter. Indeed, many lay people in the US have been exposed through the popular press to questions about string theory, unification of the forces, and the efforts to merge quantum theory and general relativity. Some seem to think that once scientists know the answers to those questions, the rest of physics is a simple homework assignment.
It’s hard to imagine a list of basic questions about condensed matter that a majority of APS’s DCMP members would endorse, so diverse is the field. It’s not in the nature of CMP to pose a small number of central problems that cover everything. And there is the issue of emergence: At a particular level of organization, things happen that are unexpected. Everything does not hang together neatly, but that’s part of the excitement.
Last year while giving talks about CMP and Einstein’s contributions to the field, I decided to decline requests to make a list of fundamental questions, and I continue the policy here. I reached out to the public instead by levitating a magnet with a superconductor, by stating that a current-carrying cold superconducting ring would sustain its current and associated magnetic field for longer times than the age of the universe, and by claiming that Alex Zettl and other colleagues at the University of California, Berkeley, could build out of nanotubes a motor that could sit on the back of a virus (ref. 11).
I got students involved in the intellectual questions of quantum computing, phase transitions, magnetism, optical properties, transport, and superconductivity—or more generally, what makes the stuff around us behave as it does. Rather than hide the complexity and diversity, I emphasized them in the basic and applied problems of CMP; some have big material payoffs and chance applications, others are studied purely to understand matter and energy.
New materials and better instrumentation largely drive the field forward. Certainly advancements in both are crucial. Computer-assisted methods can help to synthesize materials and are revolutionizing instrumentation. It is routine nowadays to control the growth of nearly defect-free crystals to meet the high standards of purity and perfection required for today’s semiconductors and frontier research. Using modern instrumentation to characterize such materials has led to the discovery of new phenomena. Microscopists can now routinely image atoms in all kinds of solids. And there is the old science folklore about the experimentalist who goes to heaven and is granted a wish for a light-emitting box with only two dials—one for wavelength and one for intensity; with synchrotrons and lasers, we are getting there (see Philip Bucksbaum’s article in this issue on page 57).
Analogous advancements on the theoretical side include new developments in computational physics and increases in computer speed and memory that accommodate ever more intricate calculations of materials. Still, the main thrust in CMP theory is the development of physical and mathematical models that form the basis of the research. Theorists use a combination of modeling and computational techniques to reveal electron-density distributions in a crystal, an approach that serves as a theoretical microscope to look at bonds and occasionally to predict new materials before they are made in the laboratory (see figure 4; ref. 12).
This kind of work is sometimes referred to as quantum alchemy.
Figure 4. Boron nitride nanotubes are materials whose existence theorists predicted before they were synthesized in laboratories. The pictured image is a simulation of a single nanotube. This material has the intriguing property that the greatest density of conduction electrons is along the tube axis. Boron nitride nanotubes thus can ballistically conduct electrons down the center of the tube.
Which subfields of condensed matter physics are growing rapidly? Nanoscience is in the popular press daily, although clear definitions of what it is are lacking. Strong funding for it is spurring activity and is providing scientists flexibility to occasionally redirect their research objectives. New physics often emerges when systems are confined, and reduced dimensionality leads to new symmetries that, together with confinement, yield interesting phenomena. More experimental and theoretical research on transport in nanosystems is sorely needed. Already an active field, molecular electronics is likely to attract considerably more researchers trying to understand the details of how molecules conduct electricity and heat.
In the heyday of semiconductor research, from the 1960s through the 1980s, successful transitions from basic studies to useful applications were common. And the synergy between research and development has strongly shaped how modern computers and information technology have evolved. Many people think a similar era is beginning with systems composed of a thousand atoms or fewer and with research applications related to thin films. It is likely that current microelectromechanical system (MEMS) technology will combine with its nanoscale version, NEMS.
Nanotubes currently appear to be the most popular nanostructures for exploitation. Researchers are making carbon and compound nanotubes, composed of boron nitride and other materials, longer, purer, and more defect free. Along with the continued engineering of those systems, devices are likely to be fabricated to test the fundamental properties of quantum mechanics. So, the payback to physics from engineering is not only better instrumentation but new devices with unusual quantum properties.
The study of noncrystalline solids is another research field with active basic- and applied-science aspects. Amorphous semiconductors, glasses, and the general field of disorder have been in the spotlight as frontier research subjects since Philip Anderson’s pioneering work on localization (ref. 13).
The study of disordered materials, combined with soft condensed matter, may provide links between CMP and biophysics. Researchers are already excited about using molecules such as DNA as structural templates or scaffolds for nanophysics studies. Generally, physicists are giving their instruments to biologists, and in return, the biologists are giving back novel materials.
Superconductivity, magnetism, and optics are core fields of CMP. Research on copper oxide materials is very active now and will probably continue at least until consensus about the theory emerges. Studies of high-Tc superconductors have raised new questions about Mott insulators, Fermi liquid theory, and electron correlations in general. To explain the properties of the superconducting oxides, it’s likely that John Bardeen, Leon Cooper, and Robert Schrieffer’s tremendously successful 50-year-old theory (ref. 14) will have to be augmented in fundamental ways.
Research on spintronics, nanomagnetism, colossal magnetoresistance, and magnetic semiconductors is proceeding at a rapid rate. Measuring optical properties of materials has always been a central area of CMP, and today, new developments in time-resolution spectroscopy, pump–probe techniques, nonlinear optics, and high-resolution angular-resolved photoemission studies are yielding extremely important data in many subfields of CMP.
Forces outside traditional CMP will probably exert a large influence on it as well. The needs of society, including the demand for new medicines and sources of energy, can spur applied research. A lot of physics goes into the instrumentation used in the medical profession, and the development of such alternative energy sources as solar power has proceeded for decades.
There hasn’t emerged the kind of urgency about energy research that one might expect considering experts’ warnings about threats from global warming and limited fossil and fissionable fuel sources. It’s not clear that society is now ready for a commitment in energy research on the scale of the Manhattan Project. But that level of commitment may yet be appropriate. If so, CMP will clearly be an important ingredient in the search for solutions.
Predictions are hard to make. But when I consider the growth, robustness, and diversity of condensed matter physics, I’m reminded of an old adage: As we get older we become more like ourselves. I believe a similar statement can be said about the future development of CMP.
This article is adapted from a talk given at AIP’s 75th-anniversary celebration, held in Washington, DC, on 3 May 2006.
References
1. S. R. Weart, in Out of the Crystal Maze: Chapters from the History of Solid-State Physics, L. Hoddeson, E. Braun, J. Teichmann, S. R. Weart, eds., Oxford U. Press, New York (1992), chap. 9.
2. A. Sommerfeld, H. Bethe, in Handbuch der Physik, vol. 24, part 2, Springer, Berlin (1933), chap. 3.
Marvin Cohen is a University Professor of Physics at the University of California, Berkeley, and Senior Faculty Scientist in the materials sciences division at Lawrence Berkeley National Laboratory.