Supernovae are among the most dramatic events in nature. A star undergoes a thermonuclear explosion and within a few seconds produces a total energy flux comparable to that of an entire galaxy.
Computer simulations are beginning to capture the essential features of these events. Recently, a large Chicago-led simulation showed an attractive new mechanism for one kind of supernova (reference 1).
Previous calculations have often been done for spherically symmetrical situations in which the event is triggered by a heating event at the center of the star. The new calculations look at off-center triggering, which sends a very asymmetrical blob of hot material rising up through the star. When the bubble surfaces, the outer stellar layers are strongly accelerated, fly over the star's surface, and come together on the opposite side of the star. That point of convergence then forms the center of the resulting explosion.
This entire sequence has been studied by a computer simulation that gives many attractive results for comparison with observational data. The sequence of events might well explain previously noted anomalies in the angular anisotropy of the events and in the mass and velocity spectra of the ejecta. If the mechanism survives further testing, the results might help us understand one class of supernova events.
Computer models are representations of nature. Sometimes they can give faithful and accurate representations. Sometimes their accuracy is weak or unknown. Robert Batterman, a philosopher of science, has argued that the most accurate models do not necessarily give the most worthwhile results. His position is somewhat counterintuitive. Nonetheless, I argue for it, using the example of this excellent and provocative supernova simulation.
The simulated event starts with two imperfectly known processes. One is the off-center trigger. We simply do not know enough about fluctuations in stars to predict the likelihood of such a triggering event. The other is the rising blob of hotter and less dense material started by the trigger and formed by the Rayleigh–Taylor instability.
To visualize the instability, imagine a dense fluid placed above one of lesser density, with the two separated by an interface that is not quite flat. The pull of gravity will trigger a motion in which the two fluids interpenetrate one another to form a kind of mixing zone. The motion will be inhibited by the interfacial tension and the fluids’ viscosities. If these inhibiting effects can be neglected, the motion will be basically ballistic. A simple dimensional-analysis argument then gives an estimate of the height of the mixing zone: the height will grow in proportion to the square of the time, with a constant of proportionality called α. Arguments based upon the concepts of universality, scale invariance, and the renormalization group back up the dimensional analysis. This approach, put forward by a group at Stony Brook University, suggests that once the flow gets well started, α might be a universal constant, independent of everything except perhaps the ratio of the densities of the two fluids (reference 2).
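For concreteness, the dimensional-analysis result can be written out explicitly (the notation here is mine, following standard usage in the Rayleigh–Taylor literature, not anything taken from the article): the height h of the mixing zone grows as

$$
h(t) \;\approx\; \alpha\, A\, g\, t^{2},
\qquad
A \;=\; \frac{\rho_{\text{heavy}} - \rho_{\text{light}}}{\rho_{\text{heavy}} + \rho_{\text{light}}},
$$

where g is the gravitational acceleration and the Atwood number A encodes the density ratio on which α might still depend.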
Stockpile stewardship
The studies mentioned here are part of the stockpile stewardship program of the US Department of Energy weapons laboratories. The stewardship program uses extensive computer simulations to maintain the reliability of our nuclear weapons. It also tests the simulations through university groups, in an effort called ASC (Advanced Simulation and Computing), that explore situations akin to ones that might be relevant to weapons stewardship. This program supports part of my research.
One important focus of ASC is the test of computer programs that simulate Rayleigh–Taylor instabilities. These instabilities play an important role in many explosive events, including supernovae. Gravity can produce the basic instability. In explosive situations, the role of gravity is replaced by that of the rapid acceleration of the interfaces. Thus, it is natural that ASC has sponsored more than a dozen experimental, theoretical, and simulational studies of the Rayleigh–Taylor instability.
The results of the studies were surprising and disturbing. Contrary to expectations, variations in initial conditions, in computational methodology, or in computational precision gave measurable variations in the “universal” parameter α. The Chicago group even found a kind of dynamical phase transition in its calculations within the computationally interesting range. As the resolution of the calculation was improved, the finger-like structure produced by the Rayleigh–Taylor instability lost its left–right symmetry and became quite asymmetrical. This change had a substantial effect on the mixing of the fluids. Taken together, these different studies suggested that several different complex causes will result in a variation in the apparent value of α by roughly a factor of two within the parameter range studied. Even larger variations might be expected if one tried to extrapolate to new situations. These variations produce a substantial impediment to accurate predictions.
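To make concrete what “measuring α” involves, here is a schematic sketch (my own construction under simple assumptions, not the groups’ actual diagnostics) of two common ways to extract α from a simulation’s mixing-zone height h(t). For ideal self-similar growth, h = αAgt², the two agree exactly; on output that remembers its initial conditions and its numerics, they need not, which is one small illustration of how reported values can drift apart.

```python
# Schematic sketch only: two simple estimators of alpha from mixing-zone data.
import numpy as np

def alpha_from_fit(t, h, atwood, g):
    """Slope of a least-squares fit of h against A*g*t**2 through the origin."""
    x = atwood * g * t**2
    return float(np.sum(x * h) / np.sum(x * x))

def alpha_from_growth_rate(t, h, atwood, g):
    """Use (dh/dt)**2 = 4*alpha*A*g*h, which follows from h = alpha*A*g*t**2."""
    dhdt = np.gradient(h, t)
    return float(np.mean(dhdt**2 / (4 * atwood * g * h)))

# Purely synthetic data: quadratic growth plus a linear "memory" of the
# initial perturbation.  The two estimators then disagree noticeably.
atwood, g, alpha_true = 0.5, 9.8, 0.05
t = np.linspace(0.5, 5.0, 100)
h = alpha_true * atwood * g * t**2 + 0.5 * t

print("fit estimate:        ", round(alpha_from_fit(t, h, atwood, g), 3))
print("growth-rate estimate:", round(alpha_from_growth_rate(t, h, atwood, g), 3))
print("true value:          ", alpha_true)
```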
Perhaps these outcomes should not have been entirely surprising. The mathematics of the situation does indicate that in the limit of low viscosity and surface tension, the problem of Rayleigh–Taylor interface instability is ill-posed, which means that the results are potentially very sensitive to small variations in initial data or computational method. Thus, empirical and mathematical evidence suggest that in this kind of instability problem, predictions are likely to remain subject to doubt no matter how much effort is invested in simulations of the Rayleigh–Taylor problem.
Inaccurate and important
Despite these adverse results for a universal α, the Chicago ASC group came up with two different applications of the same techniques—one to novae, and the other to the supernova calculation described above. The group pointed out that both results were important additions to knowledge of these astrophysical phenomena. Fluid instabilities are significant ingredients in both calculations. The Rayleigh–Taylor study, however, indicated that there may be difficulties with numerically modeling these instabilities. Thus the calculations were at the same time inaccurate and important! How can that be?
As I see it, there are two ways in which a simulation may be useful. First, the simulation may prove something. In such a case, the simulation will include carefully checked routines that accurately model all the physical processes involved. These routines will have been validated and verified by a long series of investigations open to the usual processes of broad scientific criticism and comparison to a broad variety of observational data.
A second kind of simulation is the exploratory one that suggests new mechanisms for complex physical processes. Such simulations are likely to contain components that represent the best state of our knowledge, but are not necessarily “true.” They are most effective when they use a simplified model of the situation to isolate the essential elements of the new processes. Their results are likely to be remembered in the words that describe the processes, rather than in any numerical computer output. These exploratory calculations represent some sort of argument. Are they just a rhetorical device? Do they represent an important improvement over the traditional method for conducting such arguments, via words supplemented by order-of-magnitude estimates?
I’ll come back to these questions. But first, I should point out that only a few physically interesting calculations fall into the first category, that of proof. In a previous paper, I suggested that the computational studies of neutrino production within the Sun may have achieved that status (reference 3).
However, studies of magnetic field generation by dynamos clearly fall into the second category, that is, exploratory calculations. Glass formation and fracture in solids remain mostly in this latter category as well. I expect that most astrophysical calculations involving turbulent mixing will also remain heuristic in character for a long time to come, perhaps until we really understand turbulence.
To deny these calculations the honorable name of proof is not to deny their value. Nothing can be more valuable to science than to suggest how things might happen. The Watson–Crick model of the double helix implied suggestions for how biology might work. Those suggestions were of magnificent value. The suggestion of a new mechanism for how a supernova might go off is of considerable value, even if the simulation does not directly prove that the mechanism is correct.
Making a star go boom
How does the power of argumentation provided by exploratory simulations compare to that of rhetorical or order-of-magnitude discussions? Since the simulations must include everything needed to make a star go boom, they provide an internal check of consistency and completeness not available through words. On the other hand, some intermediate steps in the argument may have their weaknesses hidden in unexamined computer processes. Words may be better than computer output for showing up weak arguments. Computer arguments often force us to rely upon the integrity and care of the investigators. So computers provide a useful but dangerous tool for the exploration of complex systems.
There is an important technical reason for the increasing use of computers in exploring physical systems. In the past, analysis of point particles used ordinary differential equations to follow their positions and momenta. More recent work has focused upon fluids, plasmas, and even solids that are studied as continuum systems, usually with the aid of partial differential equations. These systems and equations can develop structures with very short scales or very long ones. The new processes and structures are often completely unexpected. We need methods to protect ourselves from being overtaken by unexpected occurrences. Computer simulations of simple nonlinear systems can be one of our best tools for explorations that uncover unexpected possibilities.
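As a minimal illustration of that point (my own toy example, not one from the article), here is a sketch in which a very simple nonlinear continuum equation, Burgers' equation, takes smooth, large-scale initial data and steepens it into a near-shock: structure on a scale far shorter than anything present at the start.

```python
# Toy example: Burgers' equation u_t + u*u_x = nu*u_xx on a periodic domain.
# Smooth initial data (a single sine wave) develops a steep, shock-like front.
import numpy as np

nx = 400
L = 2 * np.pi
nu = 0.02                      # small viscosity
dx = L / nx
x = np.arange(nx) * dx
u = np.sin(x)                  # smooth initial condition with one large scale

dt = 0.2 * dx                  # time step chosen small enough for stability
t_end = 1.5
steps = int(t_end / dt)

for _ in range(steps):
    # first-order upwind differencing for the nonlinear advection term,
    # centered differencing for the viscous term (periodic boundaries via roll)
    dudx_up = np.where(u > 0,
                       (u - np.roll(u, 1)) / dx,
                       (np.roll(u, -1) - u) / dx)
    d2udx2 = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2
    u = u + dt * (-u * dudx_up + nu * d2udx2)

# The maximum slope started at 1 (for sin x) and is now much larger,
# concentrated near the forming shock at x = pi.
print("max |du/dx| at t =", t_end, "is about",
      round(float(np.max(np.abs(np.gradient(u, dx)))), 1))
```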
But there are also major dangers in simulations. Often simulations are directly aimed at confirming our expectations, thereby throwing away the possibility of finding anything new. In addition, we simulators must be most careful to distinguish between simulations as argument and simulations as proof. There is a considerable risk of confounding the two approaches. It is tempting to say that “supercomputer simulations show …” when what is meant is more like “recent investigations have raised the possibility that ….” In our writing, we all are tempted to replace “it would please us if …” with “we know that ….” And if we scientists and engineers join with all those around us—in places high and low—who confound possibility with proof, and desire with truth, who then will believe us in anything we say?
I had very helpful discussions with Alan Calder, Robert Rosner, Tomasz Plewa, Steve Libby, Don Lamb, Robert Batterman, and Guy DiMonte.
Computer simulation shows a bubble rising through a model star. The bubble was started in a slightly off-center position in a spherically symmetrical environment. This completely asymmetrical structure was a surprising outcome and has an important influence on the subsequent history of the star and its transition to supernova behavior. Another simulation (see Plewa et al., reference 1) shows that after the bubble surfaces, material flies over the star's surface and dredges up material from it.
Image was constructed by the ASC center at the University of Chicago and Argonne National Laboratory.