
HP InfiniBand solution for Oracle RAC environments white paper

write/write). In both cases, the node that holds the updated data block ships the block to the
requesting node across the high-speed cluster interconnect.
In the write/read scenario, a node wants to update a block that has already been read and
cached by a remote instance. An update operation typically involves reading the relevant block into
memory and then writing the updated block back to disk. In Oracle's Parallel Server product
(the predecessor to Oracle RAC), once the update was complete and the updated block had been
written to disk, the node waiting for the block would then read the new version of the block from
disk. This "disk pinging" created additional I/O, resulting in lower system performance. In an Oracle
RAC environment, the node holding the updated block can transfer it directly across the
cluster interconnect. In this scenario, the disk read is avoided and performance improves because
the block is shipped from the cache of the remote node into the cache of the requesting node.
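
To make the difference concrete, the following toy model is a minimal sketch comparing the per-block cost of disk pinging with a cache-to-cache transfer over the interconnect. The timing figures are illustrative assumptions chosen for the example, not HP or Oracle measurements.

```python
# Illustrative sketch only: a toy latency model contrasting "disk pinging"
# (Oracle Parallel Server) with a Cache Fusion transfer over the cluster
# interconnect (Oracle RAC). All timing figures below are assumptions for
# illustration, not measurements from any HP or Oracle benchmark.

DISK_WRITE_MS = 8.0     # assumed time for the holder to write the block to disk
DISK_READ_MS = 8.0      # assumed time for the requester to read it back from disk
INTERCONNECT_MS = 0.2   # assumed block transfer time over the cluster interconnect

def disk_ping_time() -> float:
    """Write/read scenario under Parallel Server: block goes to disk, then back off disk."""
    return DISK_WRITE_MS + DISK_READ_MS

def cache_fusion_time() -> float:
    """Write/read scenario under RAC: block is shipped cache-to-cache."""
    return INTERCONNECT_MS

if __name__ == "__main__":
    print(f"Disk pinging:  {disk_ping_time():.1f} ms per contended block")
    print(f"Cache Fusion:  {cache_fusion_time():.1f} ms per contended block")
```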

Advantages of InfiniBand in an Oracle RAC environment

Because resolving contention for database blocks involves sending the blocks across the cluster
interconnect, efficient inter-node messaging is the key to coordinating fast block transfers between
nodes. The efficiency of inter-node messaging depends on three primary factors:
• The number of messages required for each synchronization sequence
• The frequency of synchronization – the less frequent, the better
• The latency, or speed, of inter-node communications
The first two factors depend mostly on the application being deployed on the RAC database. The
performance of the cluster interconnect itself can be greatly enhanced through the use of InfiniBand
technology. InfiniBand is a low-latency, high-bandwidth interconnect that improves the
performance of inter-node messaging. In addition to its high-performance design, InfiniBand
supports uDAPL, a user-mode API for memory-to-memory transfers between applications running
on different nodes. uDAPL greatly reduces the latency and CPU overhead associated with inter-node
communication, allowing the cluster to scale significantly better than it would with standard Ethernet technologies.
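
As a rough illustration of how the three factors above combine, the sketch below estimates per-second messaging overhead for two assumed interconnect latencies. The message counts, synchronization rate, and latency values are hypothetical and workload-dependent, not benchmark results.

```python
# Illustrative sketch only: a back-of-the-envelope model of how interconnect
# latency (the third factor) scales total synchronization overhead, given the
# number of messages per synchronization sequence and the synchronization rate.
# All figures are assumptions chosen to illustrate the relationship.

def sync_overhead_ms(messages_per_sync: int,
                     syncs_per_second: float,
                     latency_ms: float) -> float:
    """Estimated time per second spent on inter-node messaging."""
    return messages_per_sync * syncs_per_second * latency_ms

if __name__ == "__main__":
    messages = 3      # assumed messages per synchronization sequence
    rate = 500.0      # assumed synchronizations per second (workload-dependent)

    for name, latency_ms in [("Gigabit Ethernet (assumed)", 0.120),
                             ("InfiniBand with uDAPL (assumed)", 0.020)]:
        overhead = sync_overhead_ms(messages, rate, latency_ms)
        print(f"{name:32s} ~{overhead:6.1f} ms of messaging time per second")
```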

HP InfiniBand solution for Oracle RAC

Solution overview

Many HP customers have successfully implemented Oracle RAC solutions using the award-winning HP
ProLiant line of servers and the HP StorageWorks line of Fibre Channel connectivity products. With the
addition of InfiniBand networking products, customers can implement an Oracle RAC configuration
that uses InfiniBand as a high-speed cluster interconnect.
A sample configuration could be composed of the following components:
• HP ProLiant DL380 G4 servers
• HP StorageWorks Fibre Channel SAN
– HP StorageWorks Modular Smart Array (MSA) 1000
– HP StorageWorks FCA2214DC Fibre Channel HBA
– HP StorageWorks MSA SAN Switch 2/8
• HP InfiniBand cluster interconnect
– HP NC570C Dual Port PCI-X InfiniBand HCA
– HP 24 Port 4x Copper Fabric Switch