National Conference on Innovative Paradigms in Engineering & Technology (NCIPET-2012) Proceedings published by International Journal of Computer Applications® (IJCA)

ABSTRACT
Today's systems ship with ever-faster processors. As CPU performance grows day by day, applications such as data mining, data warehousing, and e-business have become commonplace. This growth in computational power requires that the I/O subsystem be able to deliver data to the processor subsystem at the rate at which it is needed. In the past couple of years, it has become clear that the current shared-bus architecture will become the bottleneck of the servers that host these powerful but demanding applications. The Peripheral Component Interconnect (PCI) bus is the dominant bus used in both desktop and server machines for attaching I/O peripherals to the CPU/memory units. Its most common configurations are given in the table below:

Bus Width (bits)   Clock Rate (MHz)   Bandwidth (MB/s)
32                 33                 133
64                 33                 266
64                 66                 533

Today's desktop machines have plenty of spare capacity on the PCI bus in the typical configuration, but server machines are starting to hit the upper limits of the shared-bus architecture. To ease this bandwidth limitation, a number of interim solutions such as PCI-X and PCI DDR have become available in the market, but these versions also fall short to some extent. For example, the PCI-X specification allows for a 64-bit version of the bus operating at a clock rate of 133 MHz, but this is achieved by relaxing some of the timing constraints. Because of the shared-bus nature of these versions, PCI-X must lower its fanout in order to achieve the high clock rate of 133 MHz.
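The bandwidth figures in the table follow directly from bus width times clock rate (the nominal "33 MHz" and "66 MHz" PCI clocks are actually 33⅓ and 66⅔ MHz). A minimal sketch of the arithmetic:

```python
def pci_bandwidth_mb_s(bus_width_bits, clock_mhz):
    # Peak theoretical bandwidth: bus_width_bits transferred once per clock,
    # truncated to whole MB/s as commonly quoted.
    return int(bus_width_bits / 8 * clock_mhz)

# Nominal "33 MHz" PCI clocks at 33 1/3 MHz, "66 MHz" at 66 2/3 MHz.
for bits, mhz in [(32, 100 / 3), (64, 100 / 3), (64, 200 / 3)]:
    print(f"{bits}-bit @ {int(mhz)} MHz -> {pci_bandwidth_mb_s(bits, mhz)} MB/s")
# -> 133, 266, and 533 MB/s, matching the table above
```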
So, despite the temporary relief these upgrade technologies provide from the PCI bandwidth limitation, a long-term solution is needed that does not rely on a shared-bus architecture. InfiniBand breaks through the bandwidth and fanout limitations of the PCI bus by migrating from the traditional shared-bus architecture to a switched-fabric architecture. The InfiniBand™ Architecture (IBA) is an industry standard that defines a new high-speed switched-fabric subsystem designed to connect processor nodes and I/O nodes to form a system area network. This new interconnect method moves away from the local transaction-based I/O model across busses to a remote message-passing model across channels. The architecture is independent of the host operating system (OS) and the processor platform. IBA provides both reliable and unreliable transport mechanisms in which messages are enqueued for delivery between end systems. Hardware transport protocols are defined that support reliable and unreliable messaging (send/receive) and memory manipulation semantics (e.g., RDMA read/write) without software intervention in the data transfer path.

I. INTRODUCTION
InfiniBand is a switched-fabric communication link primarily used in high-performance computing. The architecture is based on a serial switched fabric that, in addition to defining link bandwidths between 2.5 and 30 Gbit/s, resolves the scalability, expandability, and fault-tolerance limitations of the shared-bus architecture through the use of switches and routers in the construction of its fabric. The InfiniBand architecture specification defines a connection between processor nodes and high-performance I/O nodes such as storage devices. Like Fibre Channel, PCI Express, SATA, and many other modern interconnects, InfiniBand is a point-to-point bidirectional serial link intended for the connection of processors with high-speed peripherals such as disks.

II.
INFINIBAND NETWORK

Figure 1: InfiniBand Network

Figure 1 above represents the simplest configuration of an InfiniBand network. An endnode represents either a host device such as a server or an I/O device such as a RAID subsystem. Two or more endnodes connected through a switch form a subnet. Each node connects to the fabric through a channel adapter. There are two types of channel adapters: the Host Channel Adapter (HCA) and the Target Channel Adapter (TCA). Each processor node contains an HCA and each peripheral node has a TCA. Each channel adapter may have one or more ports, providing multiple paths between a source and a destination. The fabric is able to achieve transfer rates at the full capacity of the channel, avoiding the congestion issues that arise in the shared-bus architecture. Furthermore, it provides alternative paths, which increases reliability and availability, since another path remains available for routing the data if one of the links fails. Two or more subnets are connected using routers.

Vivek D. Deshmukh
Assistant Professor, S.B. Jain Institute of Technology, Management & Research, Nagpur

InfiniBand: A New Era in Networking
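The availability benefit of multi-ported channel adapters can be illustrated with a toy model (not the IBA specification itself; all node names here are hypothetical): a subnet as a graph in which an HCA and a TCA each attach to two switches, so that traffic reroutes when a link fails.

```python
from collections import deque

# Toy subnet: an HCA and a TCA, each with two ports, attached to two switches.
fabric = {
    "hca": ["sw1", "sw2"],   # host channel adapter, two ports
    "sw1": ["hca", "tca"],
    "sw2": ["hca", "tca"],
    "tca": ["sw1", "sw2"],   # target channel adapter, two ports
}

def find_path(graph, src, dst, failed_links=frozenset()):
    """BFS for a path from src to dst that avoids failed links
    (each failed link given as a frozenset of its two endpoints)."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == dst:
            return path
        for nxt in graph[node]:
            if nxt not in seen and frozenset((node, nxt)) not in failed_links:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no surviving path

print(find_path(fabric, "hca", "tca"))                               # via sw1
print(find_path(fabric, "hca", "tca", {frozenset(("hca", "sw1"))}))  # reroutes via sw2
```

With the hca–sw1 link marked failed, the second call still reaches the TCA through sw2, which is exactly the redundancy argument made above; a single-ported adapter on a shared bus has no such fallback.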