New interconnect technology based on the PCI Express bus is under development and promises another jump in communication speed inside a cluster. Versions of the technology are being developed independently by PLX Technologies and A3Cube.

Interconnect technology has continued to evolve since the first cluster appeared on the Top500 list. The interconnect allows the machines in the cluster to exchange information as calculations progress. Amdahl’s Law states that the performance of a parallel system is limited by the serial portions of its programs, which often take the form of communication between processes. A faster interconnect is therefore essential to shorten the time it takes for programs to complete, unless the work they are doing is “embarrassingly” parallel.
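As a rough illustration of why that serial (communication) share matters, here is a minimal Python sketch of Amdahl’s Law; the serial fractions and the 100-processor count are made-up numbers for illustration, not measurements from any particular cluster.

# Rough sketch of Amdahl's Law: speedup = 1 / (serial_fraction + parallel_fraction / n)
# The serial fraction here stands in for time spent in communication between processes.
def amdahl_speedup(serial_fraction, n_processors):
    """Upper bound on speedup when serial_fraction of the work cannot be parallelized."""
    parallel_fraction = 1.0 - serial_fraction
    return 1.0 / (serial_fraction + parallel_fraction / n_processors)

# Illustrative (made-up) numbers: even with 100 processors, a 5% serial/communication
# share caps the speedup at about 17x; shrinking it to 1% raises the cap to about 50x.
for frac in (0.05, 0.01):
    print(f"serial fraction {frac:.0%}: max speedup on 100 CPUs = {amdahl_speedup(frac, 100):.1f}x")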


The first cluster to appear on the Top500 list was the product of the Network of Workstations (NOW) project at the University of California at Berkeley. The idea was to demonstrate that a properly configured collection of small machines could provide the performance of large, specialized hardware at lower cost. The machine contained 100 processors and ran the Linpack benchmark at 33 gigaflops, which put it at number 344 among the 500 fastest machines measured in the world at that time.

The interconnect used in the NOW machine was Myrinet, with links capable of bi-directional transfer rates of 160 MB/s (1,280 Mb/s), compared with the standard 10 Mb/s Ethernet of the time, which was used to connect the cluster to the outside world.

Clusters have developed significantly since then. The initial machines built from commercial off-the-shelf (COTS) components demonstrated the viability of the idea, but components have since become more specialized. The core densities of nodes have increased greatly, and the interconnects have become much faster. Infiniband is currently the most widely used interconnect for data transfer within numerical cluster installations.

For the sake of comparison, here are some numbers. 1 Gb/s Ethernet is very common today, and 10 Gb/s Ethernet is widely available, although somewhat expensive. QDR Infiniband has a single-link transfer rate of 8 Gb/s, close to that of 10 Gb/s Ethernet, with the advantage that links can be aggregated in units of 4 or 12. With four links aggregated, QDR Infiniband provides a data transfer speed of up to 32 Gb/s.
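As a quick sanity check of those figures, here is a small Python sketch that multiplies out the per-link rate quoted above for the common 1X, 4X, and 12X aggregation widths (the 8 Gb/s per-link rate comes from the text; the rest is simple arithmetic):

# Back-of-the-envelope check of the Infiniband rates quoted above (in Gb/s).
qdr_per_link_gbps = 8          # QDR Infiniband effective data rate per link
for links in (1, 4, 12):       # links are commonly aggregated as 1X, 4X, or 12X
    print(f"QDR Infiniband {links}X: {qdr_per_link_gbps * links} Gb/s")
# 4X -> 32 Gb/s, matching the figure in the text; 12X -> 96 Gb/s.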

The PCI Express 3.0 standard provides a data rate of roughly 8 Gb/s per lane, and with 16 lanes data can be transferred at 128 Gb/s in each direction. PCIe 4.0 promises to double that to 256 Gb/s in each direction. At that rate it would take only about 31 seconds to transfer a terabyte of data. In a world of data-intensive computing, being able to move data around faster for analysis will be very helpful. Couple this with lower cost and power usage, and the future looks bright for a PCIe interconnect.
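The terabyte figure is straightforward arithmetic; the short Python sketch below works it out from the nominal per-lane rates quoted above, assuming a 16-lane link and the decimal convention of 1 TB = 1,000 GB:

# Time to move 1 TB over a 16-lane PCIe link, using the nominal rates quoted above.
lane_rate_gbps = {"PCIe 3.0": 8, "PCIe 4.0": 16}   # approximate usable rate per lane
lanes = 16
terabyte_gbits = 1000 * 8                          # 1 TB = 1,000 GB = 8,000 Gb
for gen, per_lane in lane_rate_gbps.items():
    link_gbps = per_lane * lanes
    seconds = terabyte_gbits / link_gbps
    print(f"{gen} x16: {link_gbps} Gb/s per direction -> {seconds:.1f} s to move 1 TB")
# Roughly a minute for PCIe 3.0 x16 and about half that for PCIe 4.0 x16,
# matching the ~31 second figure in the text.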

Read more about how PLX Technologies and A3Cube are developing PCIe networking technology at http://www.enterprisetech.com/2014/03/13/pci-express-switching-takes-ethernet-infiniband/


Need help finding an efficient and cost-effective system of software, hardware, and processes?

Contact us