Both an I/O architecture and a specification for the transmission of data between processors and I/O devices, InfiniBand has been gradually replacing the PCI bus in high-end servers and PCs. Whereas PCI sends data in parallel, InfiniBand sends data serially and can carry multiple channels of data at the same time over a multiplexed signal. The principles of InfiniBand mirror those of mainframe computer systems, which are inherently channel-based. InfiniBand channels are created by attaching host channel adapters (HCAs) and target channel adapters (TCAs) through InfiniBand switches. HCAs are I/O engines located within a server. TCAs enable remote storage and network connectivity into the InfiniBand interconnect infrastructure, called a fabric. The InfiniBand architecture is capable of supporting tens of thousands of nodes in a single subnet.
InfiniBand is a trademarked term. The technology is a result of the merger of two competing designs -- Future I/O, which was developed by Compaq, IBM and Hewlett-Packard, and Next Generation I/O, which was developed by Intel, Microsoft and Sun Microsystems. InfiniBand was previously called System I/O.
InfiniBand transmission rates begin at 2.5 Gbps per lane; wider links aggregate multiple lanes to multiply that rate.
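As a rough sketch of how that base rate scales (an illustrative calculation, not part of the original definition): the 2.5 Gbps figure is the per-lane signaling rate of the original single data rate (SDR) links, which use 8b/10b encoding, so 10 bits on the wire carry 8 bits of data. Standard link widths of 1x, 4x, and 12x aggregate lanes accordingly.

```python
SIGNAL_RATE_GBPS = 2.5        # per-lane signaling rate (SDR)
ENCODING_EFFICIENCY = 8 / 10  # 8b/10b line encoding: 8 data bits per 10 wire bits

def effective_data_rate(lanes: int) -> float:
    """Usable data rate in Gbps for an InfiniBand link of the given width."""
    return lanes * SIGNAL_RATE_GBPS * ENCODING_EFFICIENCY

for width in (1, 4, 12):      # standard 1x, 4x, 12x link widths
    print(f"{width}x link: {effective_data_rate(width):.0f} Gbps data rate")
    # 1x -> 2 Gbps, 4x -> 8 Gbps, 12x -> 24 Gbps
```

Later generations (DDR, QDR, and beyond) raise the per-lane signaling rate, but the width-times-lane-rate arithmetic stays the same.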