Computational scientists have always pushed machines to their processing limits, and not just for the heady thrill of it all. A calculation performed today on an old and slow machine might have already been done by someone else back when the same machine was new and fast.
To meet the need for speed, computational scientists have striven to squeeze the most out of their machines—which raises the question: What can we learn from the computational scientists of the past, whose puny machines demanded ingenious and frugal programming?
For an answer to that question, I searched Physics Today's back issues for articles about computing. The oldest I found was Robert Richtmyer and Nicholas Metropolis's "Modern Computing," which appeared in October 1949, six months before the magazine's second anniversary.
To illustrate Richtmyer and Metropolis’s article, Paul Bond drew whimsical sketches on so-called matrix sheets, which were used for designing computer wiring schemes.
Both men were pioneers of computational science. The theorem that Richtmyer derived with Peter Lax defines the conditions under which finite-difference methods converge. The most famous application of the Monte Carlo method bears Metropolis’s name.
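The Monte Carlo connection is easiest to see in code. Below is a minimal sketch of the Metropolis acceptance rule in Python, sampling from a standard Gaussian; the target density, step size, and sample count are illustrative choices of mine, not anything taken from the essay or from Metropolis's original papers.

```python
# Minimal Metropolis sampler for an unnormalized 1D density proportional
# to exp(-x**2 / 2), i.e., a standard Gaussian (illustrative choice).
import math
import random

def metropolis(n_samples, step=1.0):
    """Draw samples using the Metropolis accept/reject rule."""
    def log_density(x):
        return -0.5 * x * x          # log of exp(-x^2/2), up to a constant

    x = 0.0                          # arbitrary starting point
    samples = []
    for _ in range(n_samples):
        proposal = x + random.uniform(-step, step)   # symmetric proposal
        # Accept with probability min(1, p(proposal) / p(x)).
        if math.log(random.random()) < log_density(proposal) - log_density(x):
            x = proposal
        samples.append(x)
    return samples

if __name__ == "__main__":
    draws = metropolis(10_000)
    mean = sum(draws) / len(draws)
    print(f"sample mean is approximately {mean:.3f}")  # near 0 for a standard Gaussian
```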
Perhaps because computers were so new and rare in 1949, Richtmyer and Metropolis focused on explaining how computers work and what they can do, rather than on teaching Physics Today's readers how to use one. Remarkably, if you discount the authors' reference to vacuum-tube technology, their opening paragraph retains its relevance:
The intricacies of automatic computing methods have been popularized by pictures, visual and verbal, of complicated wiring diagrams, great banks of electron tubes, and dramatic control boards, as well as by certain romantic analogies between the machines and the human brain. There remains, however, a need for defining the limits of computing machine operation, as well as its promise.
In the 65 years since Richtmyer and Metropolis wrote their article, the frontiers of computational science have expanded greatly but not limitlessly. Operational constraints imposed by the bulky size of vacuum tubes have been superseded by operational constraints imposed by the tiny sizes of interconnects and capacitors.
Communicating effectively with a computer continues to be a challenge. The first high-level programming language written for an electronic computer, Short Code, was conceived by John Mauchly in 1949 and implemented the following year by William Schmitt. The concept of a programming language is not mentioned in Richtmyer and Metropolis’s article, but it’s foreshadowed by their discussion of basic vocabularies, degrees of inflection, and variable addresses.
As you might guess, Richtmyer and Metropolis end with speculations about the future of computational science. Two of their predictions were far out of reach in 1949, yet not so visionary that they were inconceivably difficult to realize. Solving algebraic equations on a computer was achieved in the 1960s by Tony Hearn, Martinus Veltman, and other pioneers of symbolic manipulation. And in 1969, satisfying Richtmyer and Metropolis’s hope that computers be applied to biological systems, Michael Levitt and Arieh Warshel computed the energy landscape of a single protein for the first time.
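For readers who haven't seen symbolic manipulation in action, here is a small illustrative sketch using the modern SymPy library, rather than the historical systems such as Hearn's REDUCE or Veltman's Schoonschip: it solves an algebraic equation exactly and differentiates an expression symbolically, the kind of work those pioneers first automated.

```python
# Illustrative symbolic manipulation with SymPy (a modern stand-in for the
# 1960s computer-algebra systems mentioned above).
import sympy as sp

x = sp.symbols("x")

# Solve a quadratic exactly, keeping the roots in symbolic form.
roots = sp.solve(sp.Eq(x**2 - 2*x - 1, 0), x)
print(roots)             # [1 - sqrt(2), 1 + sqrt(2)]

# Differentiate an expression symbolically.
expr = sp.sin(x) * sp.exp(x)
print(sp.diff(expr, x))  # exp(x)*sin(x) + exp(x)*cos(x)
```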
This essay by Charles Day first appeared on page 72 of the January/February 2015 issue of Computing in Science & Engineering, a bimonthly magazine published jointly by the American Institute of Physics and IEEE Computer Society.