NSF awards $53 million supercomputing contract
Aug 10, 2001 — by Rick Lehrbaum — from the LinuxDevices Archive
By Stephen Shankland, special to ZDNet News
The National Science Foundation has awarded contracts worth $53 million to build a grid that connects supercomputer clusters across the country into a single large computing resource called the Distributed Terascale Facility.
The main part of the work will be handled by the National Center for Supercomputing Applications (NCSA) and the San Diego Supercomputer Center, said NCSA Director Dan Reed.
But a big winner will be IBM, which will build four Linux supercomputer clusters and take home tens of millions of dollars, said Mike Nelson, director of Internet technology and strategy at IBM. The NCSA's cluster will be able to perform 6.1 trillion calculations per second (teraflops), and SDSC's will handle 4 teraflops, Nelson said. Argonne National Laboratory will have a 1 teraflop machine and the California Institute of Technology a 0.4 teraflop machine.
The supercomputers will be built around Intel's “McKinley” CPU, the second-generation model in the Itanium line, the National Science Foundation said in a statement. In addition, Qwest will link the computers with a high-speed network that can transfer data at 40 gigabits per second.
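For scale, a back-of-the-envelope calculation in Python shows what that bandwidth means for moving data between sites. Only the 40 gigabits per second rate comes from the article; the dataset size is an assumption chosen for illustration.

```python
# Rough transfer time on the planned 40 Gbps Qwest link.
# Only the 40 Gbps rate is from the article; the dataset size is illustrative.
LINK_GBPS = 40        # network rate in gigabits per second
DATASET_TB = 1.0      # example dataset size in terabytes

dataset_gigabits = DATASET_TB * 1000 * 8   # TB -> GB -> gigabits (decimal units)
seconds = dataset_gigabits / LINK_GBPS
print(f"{DATASET_TB} TB at {LINK_GBPS} Gbps ~ {seconds:.0f} s ({seconds/60:.1f} min)")
# -> 1.0 TB at 40 Gbps ~ 200 s (3.3 min)
```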
IBM has embarked on a project to improve grid computing — which federates high-powered computers to give researchers access to supercomputer calculation facilities — and to speed up access to large databases of information. Big Blue believes the technology, chiefly appealing to academics at present, will become useful to corporations as well.
“We're pulling the pieces together. We'll be providing a lot of hardware that uses the McKinley chip,” Nelson said. “We think grid computing could be just as big as Linux.”
The system, which Reed and SDSC Director Fran Berman called the TeraGrid, will be used for work involving national and international collaborations, Reed said. Computing jobs will include work in the areas of astronomy, cosmology, earthquake simulation, genetics, protein research, drug design, brain research and high-energy physics, Berman and Reed said.
A national board will decide how the computing power is allocated, but using it should be simpler thanks to the choice of open-source grid software from an organization called the Globus Project, Reed said.
Scientists won't have to worry about where exactly data is stored or what computers are churning through their calculations. “It's trying to take a distributed cluster and data architecture and build an easy-to-use interface on top of it,” Reed said.
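The short Python program below is a purely illustrative sketch of that interface idea, not the Globus Project's actual API: a scientist requests computing capacity, and a toy scheduler decides which cluster actually runs the job. The load figures and the submit_job function are hypothetical; only the four site names and their teraflop ratings come from the article.

```python
from dataclasses import dataclass

@dataclass
class Site:
    """One cluster on the grid (free_fraction values are hypothetical)."""
    name: str
    teraflops: float
    free_fraction: float  # share of the machine currently idle

    def available_tflops(self) -> float:
        return self.teraflops * self.free_fraction

# The four DTF clusters described in the article, with made-up load figures.
GRID = [
    Site("NCSA", 6.1, 0.25),
    Site("SDSC", 4.0, 0.50),
    Site("Argonne", 1.0, 0.90),
    Site("Caltech", 0.4, 0.10),
]

def submit_job(job_name: str, tflops_needed: float) -> str:
    """Dispatch a job without the caller ever naming a machine.

    This mirrors the grid idea: the scheduler, not the scientist,
    decides where the computation actually runs.
    """
    candidates = [s for s in GRID if s.available_tflops() >= tflops_needed]
    if not candidates:
        raise RuntimeError(f"no site can currently supply {tflops_needed} TFLOPS")
    best = max(candidates, key=lambda s: s.available_tflops())
    return f"{job_name} dispatched to {best.name}"

if __name__ == "__main__":
    # The scientist asks only for capacity, not for a specific cluster.
    print(submit_job("earthquake-simulation", 1.5))
```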
Ultimately, the grid will grow to include smaller research networks, link to other grids overseas and even incorporate countless sensors across the world, Reed said.