The Future of Scientific Computing

Coffee Klatch · Invited

Abstract

Computing technologies are undergoing a dramatic transition. Multicore chips with up to eight cores are now available from many vendors, and the number of cores per chip will continue to increase. Many-core chips, such as NVIDIA GPUs, are also being seriously explored in many areas of scientific computing. This technology shift presents a challenge for computational science and engineering: in the future, significant performance increases will come only from the increased exploitation of parallelism. At the same time, petascale computers based on these technologies are being deployed at sites across the world. The opportunities arising from petascale computing are enormous: predicting the behavior of complex biological systems, understanding the production of heavy elements in supernovae, designing catalysts at the atomic level, predicting changes in the Earth's climate and ecosystems, and designing complex engineered systems. But petascale computers are themselves very complex systems, built from multicore and many-core chips, with hundreds of thousands to millions of cores, hundreds of terabytes to petabytes of memory, and tens of thousands of disk drives. This architecture has significant implications for the design of the next generation of science and engineering applications. In this presentation, we provide an overview of directions in computing technologies and describe the petascale computing systems being deployed in the U.S. and elsewhere.
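As a minimal sketch of what "exploiting parallelism" means in practice on a multicore chip, the C example below uses OpenMP (an illustrative choice of programming model, not one prescribed by the talk) to divide an independent loop across the available cores:

```c
/* Minimal sketch of on-chip parallelism with OpenMP (an assumed,
   illustrative programming model). Each core works on a slice of
   the loop; the reduction combines the per-thread partial sums. */
#include <stdio.h>
#include <stdlib.h>
#include <omp.h>

int main(void) {
    const long n = 1L << 24;                 /* ~16 million elements */
    double *a = malloc(n * sizeof *a);
    if (!a) return 1;
    for (long i = 0; i < n; i++)
        a[i] = 1.0 / (double)(i + 1);

    double sum = 0.0;
    /* The iterations are independent, so OpenMP can distribute them
       among however many cores the chip provides. */
    #pragma omp parallel for reduction(+:sum)
    for (long i = 0; i < n; i++)
        sum += a[i];

    printf("threads available: %d, sum = %f\n",
           omp_get_max_threads(), sum);
    free(a);
    return 0;
}
```

Compiled with gcc -fopenmp, the loop scales with the number of cores available; the same independent-iteration pattern is what many-core GPUs exploit at far larger scale.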

Authors

  • Thom Dunning

    National Center for Supercomputing Applications, University of Illinois at Urbana-Champaign