
Intel and Los Alamos National Lab build largest InfiniBand cluster

Nov 21, 2002 — by LinuxDevices Staff — from the LinuxDevices Archive

Baltimore — (press release excerpt) — Intel announced at Supercomputing 2002 the largest InfiniBand cluster test-bed built to date. The 128-node cluster is housed at Los Alamos National Laboratory and will initially be used for InfiniBand software stack validation and hardware testing; ultimately, it will be available for protocol research and development.

The InfiniBand cluster comprises 128 computers, each containing dual Intel Xeon processors and InfiniBand Host Channel Adapters, connected in a 4X InfiniBand fabric. The cluster runs the Linux operating system.

“InfiniBand architecture is emerging as the leading HPC interconnect technology,” said Jim Pappas, director of initiative marketing for Intel's Enterprise Platform Group. “Los Alamos National Laboratory has a long history in developing some of the most powerful clusters on the planet. We look forward to working closely with them in testing their InfiniBand cluster.”

Intel is among the founding companies of the InfiniBand Trade Association, and is a leader in delivering InfiniBand ecosystems to Intel Architecture servers. To generate broad, industry-wide implementation of an InfiniBand infrastructure, Intel has founded a wide range of industry enabling programs including the InfiniBand Evaluation Program, targeted at early implementations of the fabric technology.

What is InfiniBand?

The following is an excerpt from the Linux InfiniBand Project website:

The InfiniBand Architecture (IBA) is an industry standard that defines a new high-speed switched fabric subsystem designed to connect processor nodes and I/O nodes to form a system area network. This new interconnect method moves away from the local transaction-based I/O model across busses to a remote message-passing model across channels. The architecture is independent of the host operating system (OS) and the processor platform.

IBA provides both reliable and unreliable transport mechanisms in which messages are enqueued for delivery between end systems. Hardware transport protocols are defined that support reliable and unreliable messaging (send/receive), and memory manipulation semantics (e.g., RDMA read/write) without software intervention in the data transfer path.
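The queue-based send/receive model described above can be illustrated with a small toy simulation. This is a purely conceptual sketch in plain Python; the class and function names are invented for illustration and are not the real InfiniBand verbs API, which is a hardware-backed interface:

```python
from collections import deque

class QueuePair:
    """Toy model of an InfiniBand queue pair: applications enqueue
    work requests on a send or receive queue, and completions appear
    on a completion queue once the simulated 'hardware' processes
    them. (Conceptual sketch only, not the actual verbs interface.)"""

    def __init__(self):
        self.send_queue = deque()
        self.recv_queue = deque()
        self.completion_queue = deque()

    def post_send(self, message):
        # The application only enqueues a work request; it takes no
        # further part in the data transfer itself.
        self.send_queue.append(message)

    def post_recv(self, buffer):
        # Pre-post a buffer for an incoming message.
        self.recv_queue.append(buffer)

def hw_transfer(src: QueuePair, dst: QueuePair):
    """Simulate the channel adapter moving one message: it pairs a
    posted send with a posted receive buffer, copies the data, and
    signals completion on both ends."""
    if src.send_queue and dst.recv_queue:
        msg = src.send_queue.popleft()
        buf = dst.recv_queue.popleft()
        buf[:] = msg  # data lands directly in the receiver's buffer
        src.completion_queue.append("send done")
        dst.completion_queue.append("recv done")

# Usage: two endpoints exchange one message
a, b = QueuePair(), QueuePair()
recv_buf = bytearray(5)
b.post_recv(recv_buf)
a.post_send(b"hello")
hw_transfer(a, b)
```

The key point the sketch mirrors is that once work requests are posted, the transfer itself proceeds without software intervention; software re-enters the picture only to reap completions.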

The InfiniBand specification primarily defines the hardware electrical, mechanical, link-level, and management aspects of an InfiniBand fabric, but does not define the lowest layers of the operating system stack needed to communicate over an InfiniBand fabric. The remainder of the operating system stack to support storage, networking, IPC, and systems management is left to the operating system vendor for definition.

Linux InfiniBand Project

The Linux InfiniBand Project is a collection of sub-projects and activities all focused around the common goal of providing the operating system software components needed to support an InfiniBand fabric, specifically for the Linux operating system. The architecture for several of these components is further influenced by existing and emerging standards that define uniform protocols for components of the operating system. Examples here are emerging protocols like Internet Protocol over InfiniBand (IPoIB) and the SCSI RDMA Protocol (SRP) and proposed definitions for standard InfiniBand transport and driver APIs.
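As an example of one such protocol, IPoIB (later standardized as RFC 4391, well after this article was written) carries each IP datagram in an InfiniBand message behind a minimal 4-byte encapsulation header. A sketch of that framing, assuming the RFC 4391 layout; the function and constant names here are our own:

```python
import struct

# EtherType-style protocol number for IPv4 (constant name is ours)
IPOIB_TYPE_IPV4 = 0x0800

def ipoib_encapsulate(ip_packet: bytes) -> bytes:
    """Prepend the 4-byte IPoIB encapsulation header (16-bit protocol
    type followed by a 16-bit reserved field) to an IP datagram; the
    resulting frame is what would be posted to an InfiniBand send
    queue."""
    return struct.pack("!HH", IPOIB_TYPE_IPV4, 0) + ip_packet

# Usage: frame a dummy payload standing in for an IPv4 datagram
payload = b"\x45\x00" + b"\x00" * 18
frame = ipoib_encapsulate(payload)
```

Compared with Ethernet's 14-byte header, the encapsulation is deliberately thin because addressing and delivery are handled by the InfiniBand transport underneath.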

This project is focused on promoting, enabling and delivering the software components needed to support an InfiniBand fabric for the Linux operating system.

The Linux InfiniBand Project is hosted at SourceForge.

This article was originally published on LinuxDevices.com and has been donated to the open source community by QuinStreet Inc.
