Supercomputing has a future in clean energy
DOI: 10.1063/PT.3.1162
As supercomputing surges toward the exascale—10¹⁸ floating-point operations per second (flops), or 1000 times as powerful as today’s top-performing petaflop machines—the US Department of Energy and its national laboratories are promoting the use of high-performance computing (HPC) to power the development of clean-energy technologies.
Lawrence Livermore National Laboratory (LLNL), home to some of the world’s most powerful supercomputers, cosponsored a conference in Washington, DC, in May that focused on the application of HPC to clean-energy technology. The laboratory anticipates becoming host to one of the world’s two fastest computers when the IBM system it calls Sequoia is completed next year.
“Energy is fundamentally a materials issue,” said Thom Mason, director of Oak Ridge National Laboratory, where a Cray supercomputer also due for completion in 2012 is expected to equal Sequoia’s 20-petaflop performance. Most energy sources are limited by the performance of materials, he noted, and added that some materials science problems will require exascale computing to solve. DOE has requested $126 million for exascale computing research in fiscal year 2012.
Predictive modeling has been instrumental in the design of more efficient internal combustion engines, such as the homogeneous-charge compression-ignition (HCCI) engine now being pursued across the auto industry. Steven Koonin, DOE undersecretary for science, said further efficiency gains of 30% are possible. But, said Andy McIlroy, senior manager for chemical sciences at Sandia National Laboratories, those gains will require a better understanding of random events during combustion and further optimization of fuel injection. As computational power increases, models will become capable of simulating each of the thousands of chemical reactions that occur when a fuel burns, said Charles Westbrook, a retired LLNL physicist who worked on combustion simulations. Existing models and hardware can’t handle the calculations required for that level of detail, so today’s simulations rely on simplified reaction mechanisms.
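To make concrete what simulating even a handful of those reactions involves, here is a minimal sketch, not drawn from the article, that integrates the rate equations for an invented two-step mechanism (fuel to intermediate to product) with SciPy; the species names and rate constants are hypothetical. A detailed fuel mechanism couples thousands of such equations for hundreds of species at every point in a turbulent flow, which is why current engine codes use reduced mechanisms.

```python
# Toy kinetics sketch: a hypothetical two-step mechanism F -> I -> P
# with made-up rate constants, integrated as a system of ODEs.
# A real fuel mechanism would couple thousands of such rate equations.
from scipy.integrate import solve_ivp

k1, k2 = 5.0, 1.0  # invented rate constants, 1/s

def rates(t, y):
    fuel, intermediate, product = y
    r1 = k1 * fuel          # F -> I
    r2 = k2 * intermediate  # I -> P
    return [-r1, r1 - r2, r2]

# Start with pure fuel and integrate for 5 seconds.
sol = solve_ivp(rates, (0.0, 5.0), [1.0, 0.0, 0.0])
print(sol.y[:, -1])  # final concentrations of F, I, P
```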
Narrowing the field
High-performance computing will be used to predict how long the carbon dioxide expected to be captured from fossil-fuel boilers will stay bottled up once it is injected into deep geological formations. Visualizations from the models will help explain to the public how the CO₂ will behave in the subsurface, noted David Sholl, an engineering professor at Georgia Tech. By conservative estimates, 10 000 materials could be considered candidates for membranes to capture CO₂. Supercomputing tools are providing quantitative estimates of their performance and narrowing the field for serious examination, Sholl said. Financial institutions backing CO₂ capture and storage will use HPC modeling to minimize the risk that their investments will sour over time, noted Julio Friedmann, carbon management program leader at LLNL. But John Tombari, president of Schlumberger Carbon Services, cautioned that acquiring the data needed for subsurface modeling could be an order of magnitude more costly than the analysis itself.
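As a rough illustration of the kind of screening Sholl describes, the sketch below, with invented material names, property values, and thresholds, keeps only candidates whose predicted CO₂ permeability and CO₂/N₂ selectivity clear minimum bars; in practice those numbers would come from molecular simulations run on HPC systems.

```python
# Hypothetical screening pass over candidate membrane materials.
# All names, property values, and thresholds below are invented.
candidates = {
    # material: (predicted CO2 permeability, CO2/N2 selectivity)
    "zeolite_A": (120.0, 35.0),
    "MOF_X":     (450.0, 18.0),
    "polymer_B": ( 80.0, 52.0),
}

MIN_PERMEABILITY = 100.0
MIN_SELECTIVITY = 30.0

shortlist = [
    name for name, (perm, sel) in candidates.items()
    if perm >= MIN_PERMEABILITY and sel >= MIN_SELECTIVITY
]
print(shortlist)  # the few materials worth closer examination
```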
The smart grid that will be needed to transmit significant new quantities of intermittent wind and solar generation will be gathering and processing 17 000 terabytes of data from controls, sensors, and other digital devices each year, said Steve Pullins, president of the Horizon Energy Group, a designer of microgrids and energy storage systems. He noted that HPC modeling tools will enable engineers to design more efficient distribution circuits and thus reduce power lost over the lines.
Improving the energy efficiency of buildings, where 70% of US electricity is consumed, presents yet another opportunity for HPC. Nationwide, a threefold improvement in buildings’ efficiency is achievable, Koonin said. No more than 20% of commercial buildings are equipped with energy management systems, said James Braun of Purdue University. But it’s unlikely that small architectural firms will want to purchase their own supercomputer, noted James Sexton, program director at the IBM Thomas J. Watson Research Center. Instead, he predicted, HPC services will become available for purchase over the internet from service providers. Google’s data centers are high-performance computers “by anyone’s standards,” said Sexton. A big challenge will be making HPC systems accessible to nonexperts, something, he added, that the HPC community hasn’t done much of.
At its current rate of advancement, exascale computing will be realized by the end of this decade. The limiting factor is the energy that such powerful systems will require. Breakthroughs in hardware architectures that dramatically reduce electrical consumption are a necessity, said David Turek, vice president of deep computing at IBM. Were Los Alamos National Laboratory’s Roadrunner, an IBM design that in 2008 was the first to attain petaflop speed and is now listed as the world’s seventh-fastest computer, to be scaled to exascale dimensions, it would require 2 gigawatts or more to run, Turek said. That is roughly the output of two commercial nuclear reactors. Scaling up Oak Ridge National Laboratory’s 1.8-petaflop Jaguar, now ranked the world’s second-fastest system, would take four reactors’ worth of electricity, said Mason.
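A hedged reconstruction of the arithmetic behind those estimates, assuming power scales linearly with performance and taking Roadrunner's reported draw of roughly 2.3 MW at about one petaflop:

```latex
P_{\mathrm{exa}} \approx 2.3\,\mathrm{MW} \times
\frac{10^{18}\,\mathrm{flops}}{10^{15}\,\mathrm{flops}}
\approx 2.3\,\mathrm{GW}
```

With a large commercial reactor generating about 1 GW of electricity, that is Turek's two reactors; applying the same linear scaling to Jaguar's roughly 7 MW draw at 1.8 petaflops gives close to 4 GW, Mason's four reactors' worth.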
First place is short-lived
China currently holds the computing speed record with its 2.5-petaflop Tianhe-1A machine, located at the national supercomputing center in Tianjin. But Japan is expected to take the title next year, upon completion of the Fujitsu-RIKEN K computer, a system that will have five times the computing power of the Tianhe-1A. The K computer will in turn be eclipsed within months by LLNL’s Sequoia and ORNL’s Titan. The LLNL system, which employs IBM’s PowerPC architecture, will consume only 2 MW of power, compared with the 16 MW that will be needed by the K computer, and will cost one-tenth of the K computer’s $1.2 billion price tag, Turek said.
Large US manufacturers have been using HPC to reduce the cost and time required for new product development. “Modeling and simulation has become a matter of economic survival,” said Rajeeb Hazra, general manager of high-performance computing for Intel. J. Gary Smyth, executive director for General Motors’ global R&D, said the automaker operates teraflop-scale computers that run predictive simulations of the combustion process. At Oak Ridge, GM is sifting through candidates for a thermoelectric material that could someday replace the alternator in vehicles, said Mason. Engine maker Cummins applied HPC codes first developed by DOE to design a new diesel engine that meets the most recent federal emissions standards for heavy-duty trucks, said Wayne Eckerle, the company’s vice president for corporate research and technology. Only one prototype was built before the engine entered production.
But small- and medium-sized companies don’t have the resources or staff to dedicate to HPC. At the Obama administration’s request, the Council on Competitiveness (CoC), a consortium of US corporations, universities, nonprofits, and labor unions, organized a public–private partnership to move HPC into modest-sized businesses. In March the CoC launched a coalition in which four original equipment manufacturers (OEMs) will encourage their suppliers to adopt HPC. The National Digital Engineering and Manufacturing Consortium, whose members include universities, nonprofit organizations, four federal agencies, and state governments, could extend HPC use to as many as 30 of the small and medium manufacturers that supply OEM members Procter & Gamble, Lockheed Martin, General Electric, and John Deere, according to Cynthia McIntyre, CoC senior vice president. The firms will have access to academic supercomputing centers located at the University of Illinois, the Ohio State University, and Purdue University.
The CoC has written 15 case studies on how HPC has helped manufacturers accelerate their product development cycles. The most recent study, released in April, relates how AAI Corp, a maker of unmanned aircraft systems, used advanced computational fluid dynamics codes to create a virtual wind tunnel. The virtual wind tunnel allowed engineers to analyze the impact of design changes to the prototype’s propeller, fuselage, landing gear, and other components.
Another recent CoC case study relates how automotive supplier Dana Holding Corp used HPC simulations to identify the optimal configuration of layers, metals, geometries, and coatings in its metal gasket products; simulations that once took months can now be completed in a few days.

The Jaguar computer at Oak Ridge National Laboratory, currently the second fastest in the world, is dedicated solely to unclassified research. Previous US performance leaders, such as Los Alamos National Laboratory’s seventh-place Roadrunner, were built primarily for nuclear weapons simulations. (Photo: Oak Ridge National Laboratory)

David Kramer (dkramer@aip.org)