Life’s Parameters
DOI: 10.1063/1.1564328
Planck’s units (roughly 10⁻⁵ g, 10⁻³³ cm, and 10⁻⁴⁴ s) are derived from fundamental parameters that appear in the most basic theories of physics. They are constructed from suitable combinations of the speed of light c, the quantum of action h, and the Newtonian gravitational constant G. These quantities are the avatars of Lorentz symmetry, wave-particle duality, and the bending of spacetime by matter, respectively.
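As a quick numerical aside (mine, not the column's), the Planck units can be reconstructed directly from the CGS values of the constants. A sketch in Python, using the conventional definitions built on the reduced quantum ħ = h/2π:

```python
import math

# Assumed standard CGS values of the fundamental constants
c    = 2.998e10   # speed of light, cm/s
hbar = 1.055e-27  # reduced quantum of action h/2pi, g cm^2/s
G    = 6.674e-8   # Newtonian gravitational constant, cm^3/(g s^2)

# Conventional Planck units, built from c, hbar, and G
m_planck = math.sqrt(hbar * c / G)     # ~2.2e-5 g
l_planck = math.sqrt(hbar * G / c**3)  # ~1.6e-33 cm
t_planck = l_planck / c                # ~5.4e-44 s
```

These are the unique combinations of c, ħ, and G carrying the dimensions of mass, length, and time, which is why they require no further input from particle physics.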
The mismatch between Planck’s units and the practical units of mass, length, and time—to wit, 1 g, 1 cm, 1 s—is so enormous as to be grotesque. (Our discussion will be smoother using these, rather than the “standard” SI units.) Quantitative disparities of this order pose qualitative challenges for our understanding of the world. Why do we find it helpful to use units that are so far removed from the fundamentals?
The central mission of my recent trilogy of Reference Frame columns, “Scaling Mount Planck,” was to explain how the value of the proton mass, a skimpy 10⁻¹⁸ Planck units, might emerge from a theory in which Planck units are basic. Superficially, the appearance of such a small number represents a gross violation of the guiding principle of dimensional analysis, which is that natural quantities in natural units should be expressed as numbers of order unity. But a profound and well-established dynamical effect, the logarithmic running of couplings, together with the basic understanding of proton structure provided by quantum chromodynamics, allows us to understand where the small number comes from.
From a conventional, reductionist perspective this calculation of the proton mass solves the main problem involved in relating Planck’s units to mundane reality. The business of fundamental physics, from that perspective, is to understand the basic building blocks. To put it crudely: Having understood protons, you’re entitled to declare victory. (Strictly speaking, electrons count for something too.) But for a broader vision, such insight, although important, poses a new challenge. We can certainly aspire to understand in a more detailed and comprehensive way how the texture of our everyday world, the macrocosm, relates to the fundamentals.
Along that road, surely an important step is to understand the basis for the practical shorthand we actually find convenient in describing that world. We need to deconstruct the macros that we use to construct the macrocosm. Which brings us back to our problem: Why do we find it helpful to use grams, centimeters, and seconds—CGS units?
As the words “we” and “helpful” suggest, this is not a conventional question in pure physics. It has very much to do with what we are, as physical beings, and how we interact with the physical world.
Planck mass and practical mass
Let’s start with mass. We began with a mismatch of 10⁵ between the gram and Planck’s unit of mass. Our earlier triumph (see Physics Today, June 2001, page 12) reduced the problem to explaining a mismatch of about 10²⁴ between the gram and the proton mass.
That number, of course, is essentially Avogadro’s number, the number of protons in a gram. Why is that number so big? Well, we find it convenient to use grams, because we are—very roughly—gram-sized. It takes of order a million protons and neutrons to make a functional protein or macromolecule, a billion such building blocks (with their aqueous environment) to make a functional cell, and a billion cells to make a simple tissue fragment. Multiply these factors, and there’s your 10²⁴. Biology, as it has evolved on Earth, requires that kind of hierarchy of structures to build up an architecture complicated enough to support beings capable of doing physics.
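The bookkeeping in this estimate is simple enough to spell out. A sketch in Python, using the column's order-of-magnitude stage sizes (estimates, not measured values):

```python
# Order-of-magnitude building blocks per stage, as given in the text
nucleons_per_macromolecule = 1e6  # protons and neutrons in a functional protein
blocks_per_cell            = 1e9  # such building blocks (plus water) per cell
cells_per_tissue           = 1e9  # cells in a simple gram-sized tissue fragment

# The product recovers the gram-to-proton mismatch, essentially Avogadro's number
nucleons_per_gram = (nucleons_per_macromolecule
                     * blocks_per_cell
                     * cells_per_tissue)  # 1e24
```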
This quick explication does not do justice to the very interesting question of why, at each stage of the hierarchy, large numbers of basic units from the previous stage are required, nor to the specific values of these large numbers. Certainly ensuring stability of function against quantum and thermal fluctuations is a key at the first stage. The complexity of metabolism, which entails the necessity of many different catalytic agents and organelles, is a key at the second stage.
The third stage, passing from cells to intelligent creatures, appears much more contingent. Indeed, the emergence of multicellular life is a relatively recent evolutionary event, and to this day single-cell forms remain common. And among multicellular forms we find a vast range of masses, including some very unintelligent giants such as apatosaurs and trees. Altogether, intelligence seems to be a biological epiphenomenon. Not all species evolve toward it as a function of time, nor at any one time do even the most massive creatures necessarily accommodate it.
For better or worse, we have only one (semi-)convincing example of evolved intelligence to look at: the human brain. And while our understanding of that structure is advancing rapidly, it is still primitive. We can’t understand deeply why the human brain is the size it is when we still don’t know how that brain works. However, there are two observations that, when combined, suggest that human intelligence could not be supported with a brain of significantly smaller mass. First is the extremely suggestive historical fact that rich cultural artifacts emerged simultaneously with a vast increase in the size of human brains, both occurring on timescales that are very short by evolutionary standards. Second is the fact that human childbirth is made difficult by the size of neonatal brains (and neonates are far from finished products). Together, these observations suggest that sheer brain mass is crucial to the emergence of intelligence, and that, consistent with functionality, there has been substantial evolutionary pressure to keep the mass as small as possible.
Planck length and practical length
In any case, given an explanation of the disparity in mass units, the disparity in length units is a straightforward consequence. A centimeter is roughly 10⁸ Bohr radii, or atomic sizes, and so a cubic centimeter is just what encompasses those same 10²⁴ atoms that make a gram.
I hasten to confess that the Bohr radius, which is undoubtedly the key thing here, itself has a problematic value when expressed in Planck units. The Bohr radius is most naturally expressed as r_B = h²/m_e e², where h is Planck’s quantum of action, m_e is the electron mass, and e is the electron charge. Alternatively, we can write it as r_B = (h/c) × (1/α) × (1/m_e), where c is the speed of light and α is the fine-structure constant. Now h and c are, of course, both just unity in Planck units, and α does not pose a major difficulty. (In the framework of unified gauge theories, α corresponds to a near-unity value of the unified coupling at the Planck scale.) But m_e, at about 10⁻²² times the Planck mass, is so very small as to be a very big embarrassment. The best that can be said is that this embarrassment is not a new one.
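The numbers quoted here are easy to check against the CGS values of the constants. A sketch in Python (the constant values are standard assumptions, with ħ standing in for the quantum of action, as in the conventional formula):

```python
import math

# Assumed standard CGS values
hbar = 1.055e-27   # reduced quantum of action, g cm^2/s
c    = 2.998e10    # speed of light, cm/s
m_e  = 9.109e-28   # electron mass, g
e    = 4.803e-10   # electron charge, esu
G    = 6.674e-8    # gravitational constant, cm^3/(g s^2)

r_bohr   = hbar**2 / (m_e * e**2)   # Bohr radius, ~5.3e-9 cm
alpha    = e**2 / (hbar * c)        # fine-structure constant, ~1/137
m_planck = math.sqrt(hbar * c / G)  # Planck mass, ~2.2e-5 g

bohr_radii_per_cm   = 1.0 / r_bohr    # ~2e8, the text's 10^8
m_e_in_planck_units = m_e / m_planck  # ~4e-23, the "10^-22" embarrassment
```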
Planck time and practical time
In these derivations of the practical mass and length scales, I used simple arguments and crude estimates. I’m confident, though, that with some work they could be firmed up and enriched considerably. By comparison, when it comes to the unit of time, I’m somewhat at a loss.
Indeed, hints that biological timescales are highly negotiable seem to be all around us. Watching trees adapt to their environment, we run out of patience and must resort to time-lapse photography; flies, meanwhile, elude our swats, and to follow the beating of their wings we need slow-motion movies.
So why does it take about a second to have a thought? At a mechanistic level, this timescale is tied up with the diffusion rate of signal molecules across synapses, opening and closing of receptors, the capacitance and conductivity of the fatty membranes that facilitate nerve impulses, reaction rates for secondary messengers, and possibly other factors. From the point of view of physics, these are complicated phenomena, and it seems extremely difficult to tie them to the fundamentals—or, therefore, to perceive fundamental constraints on their values. It is not at all obvious that evolution has been driven to optimize the rate of thought. If not, then we might expect that this rate could be drastically modulated by means of physiochemistry (“speed” worthy of the name!), or altered by genetic engineering. These observations also suggest that in seeking the deep source of the second, a bottom-up approach from microphysics is doomed, and we must consider environmental and possibly historical (evolutionary) factors.
A possible clue is that the value g ≈ 10³ cm/s² of the acceleration due to near-Earth gravity is, by the standards of this discussion, of order unity in practical CGS units. Since g sets the tempo for purposeful motion near Earth’s surface, creatures working at this tempo will adopt as the natural unit of time, accommodating the gram and centimeter, something close to √(cm/g). And indeed we do.
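As a rough numerical check (mine, not the column's): with the measured g ≈ 980 cm/s², this gravitational timescale comes out a few hundredths of a second, within a couple of orders of magnitude of the second:

```python
import math

# Assumed measured value of near-Earth gravitational acceleration
g_accel = 980.0   # cm/s^2

# Natural timescale for gram- and centimeter-scale motion in this field:
# sqrt(1 cm / g) has units sqrt(cm / (cm/s^2)) = s
t_gravity = math.sqrt(1.0 / g_accel)   # ~0.03 s
```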
By way of contrast, it is striking that the practical unit of temperature, the degree (regarded as a unit of energy), corresponds to the strange value 1 °C ~ 10⁻¹⁶ g·cm²/s². This temperature provides some rough measure of available energy sources and heat sinks, and so it is very relevant to the physics of computation. But if we try to find the deep source of the time unit in √(g·cm²/°C), we’d wind up with 10⁸ s! Obviously this does not correspond to the speed of thought; but then again we do not think with macroscopic units (like Tinkertoy™ computers), but at a molecular level. If instead we translate the degree directly into an atomic timescale, using Planck’s constant, we find the unit h/°C ~ 10⁻¹¹ s. This is so much smaller than the practical unit tailored to our thought processes as to strongly suggest, again, that we operate far from the physical limit.
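Both timescales quoted here can be checked directly. A sketch in Python, with the degree taken as Boltzmann's constant in CGS (an assumption consistent with the text's 10⁻¹⁶ g·cm²/s² per degree):

```python
import math

# Assumed CGS values: the "degree" as an energy is Boltzmann's constant,
# and h is Planck's (unreduced) quantum of action
degree = 1.381e-16   # g cm^2/s^2 per degree
h      = 6.626e-27   # g cm^2/s

# Macroscopic route: sqrt(1 g cm^2 / degree) -> ~1e8 s, nothing like a thought
t_macroscopic = math.sqrt(1.0 / degree)

# Atomic route: h / degree -> ~5e-11 s, the text's 10^-11 s
t_atomic = h / degree
```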
There is direct evidence for this conclusion. The impressively tiny—and still shrinking—size and timescales of our artificial thinking progeny, electronic computers, are much closer to fundamental physics. They were designed that way! Careful use of the laws of physics makes possible a higher density of more rapid thought than did biological evolution. When the coming quick-witted second-generation silicon physicists define their own practical units, they’ll use different ones, and they’ll have a much easier time understanding where they came from.
More about the Authors
Frank Wilczek is the Herman Feshbach Professor of Physics at the Massachusetts Institute of Technology in Cambridge, Massachusetts.