NSF TeraGrid
DOI: 10.1063/1.4796222
Four groups will share $53 million in NSF funding over three years to develop the TeraGrid, a distributed supercomputer network capable of performing 11.6 trillion calculations per second (11.6 teraflops) and transferring data at 40 billion bits per second.
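As a back-of-the-envelope illustration of what those two figures imply together, here is a short Python sketch; the 1-terabyte dataset size is a hypothetical chosen for the example, not a number from the announcement.

```python
# Rough arithmetic from the TeraGrid figures quoted above.
# The 1 TB dataset size below is an assumed example value.

peak_flops = 11.6e12   # 11.6 trillion calculations per second
link_bps = 40e9        # 40 billion bits per second

# Time to move a hypothetical 1-terabyte dataset across the grid
dataset_bits = 1e12 * 8          # 1 TB expressed in bits
transfer_s = dataset_bits / link_bps
print(f"1 TB at 40 Gb/s: {transfer_s:.0f} s (~{transfer_s / 60:.1f} min)")

# Calculations the full grid could complete, at peak, in that window
print(f"Peak calculations meanwhile: {peak_flops * transfer_s:.2e}")
```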
“This will be the largest, most comprehensive information infrastructure ever deployed for open scientific research,” says Dan Reed, director of the National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana–Champaign and one of the TeraGrid’s principal investigators. “Unprecedented amounts of data are being generated … and groups of scientists are conducting new simulations of increasingly complex phenomena.”
The TeraGrid will be a user facility, available competitively to US scientists. All data- and computation-intensive research will be fair game, with anticipated applications in genomics, particle physics, astrophysics, and storm, climate, and earthquake prediction, among other areas. The TeraGrid is slated to start up next year.
NCSA’s TeraGrid partners are the San Diego Supercomputer Center at the University of California at San Diego, Argonne National Laboratory, and Caltech. NSF may expand the TeraGrid to include the Pittsburgh Supercomputing Center—which is expected to reach its peak performance of 6 teraflops this fall—and the National Center for Atmospheric Research in Boulder, Colorado. ▪