HP XP20000/XP24000 Configuration Manual


HP StorageWorks
XP Disk Array Configuration Guide
HP XP24000 Disk Array
HP XP20000 Disk Array
HP XP12000 Disk Array
HP XP10000 Disk Array
HP 200 Storage Virtualization System
Abstract
This guide provides requirements and procedures for connecting an XP disk array or SVS 200 to a host system, and for
configuring the disk array for use with a specific operating system. This document is intended for system administrators, HP
representatives, and authorized service providers who are involved in installing, configuring, and operating XP disk arrays.
HP Part Number: T5278-96047
Published: May 2011
Edition: First

Summary of Contents for HP XP20000/XP24000

  • Page 1 HP StorageWorks XP Disk Array Configuration Guide HP XP24000 Disk Array HP XP20000 Disk Array HP XP12000 Disk Array HP XP10000 Disk Array HP 200 Storage Virtualization System Abstract This guide provides requirements and procedures for connecting an XP disk array or SVS 200 to a host system, and for configuring the disk array for use with a specific operating system.
  • Page 2 © Copyright 2003, 2011 Hewlett-Packard Development Company, L.P. Confidential computer software. Valid license from HP required for possession, use or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor's standard commercial license.
  • Page 3: Table Of Contents

    Contents 1 Overview....................10 What's in this guide........................10 Audience..........................10 Features and requirements.......................10 Fibre Channel interface......................11 Device emulation types......................12 Failover..........................12 SNMP configuration........................13 RAID Manager command devices.....................13 2 HP-UX.....................14 Installation roadmap.......................14 Installing and configuring the disk array..................14 Defining the paths......................15 Setting the host mode and host group mode for the disk array ports.........15 Setting the system option modes..................16 Configuring the Fibre Channel ports..................17 Installing and configuring the host.....................17...
  • Page 4 Verifying the host recognizes array devices................36 Configuring disk devices......................36 Writing signatures......................36 Creating and formatting disk partitions.................37 Verifying file system operations ...................37 4 Novell NetWare..................39 Installation roadmap.......................39 Installing and configuring the disk array..................39 Defining the paths......................39 Setting the host mode and host group mode for the disk array ports.........40 Configuring the Fibre Channel ports..................41 Installing and configuring the host.....................42 Loading the operating system and software................42...
  • Page 5 Setting the host mode for the disk array ports................59 Setting the UUID........................60 Setting the system option modes..................62 Configuring the Fibre Channel ports..................62 Installing and configuring the host.....................63 Loading the operating system and software................63 Installing and configuring the FCAs ..................63 Clustering and fabric zoning....................63 Fabric zoning and LUN security for multiple operating systems..........64 Configuring FC switches......................64 Connecting the disk array......................64...
  • Page 6 Creating the file systems.....................84 Creating file systems with ext2..................85 Creating the mount directories.....................85 Creating the mount table....................85 Verifying file system operation.....................86 9 Solaris....................87 Installation roadmap.......................87 Installing and configuring the disk array..................87 Defining the paths......................87 Setting the host mode and host group mode for the disk array ports.........88 Setting the system option modes..................90 Configuring the Fibre Channel ports..................90 Installing and configuring the host.....................91...
  • Page 7 Mounting and verifying the file systems................113 11 Citrix XenServer Enterprise..............115 Installation roadmap......................115 Installing and configuring the disk array..................115 Defining the paths......................115 Setting the host mode and host group mode for the disk array ports........116 Configuring the Fibre Channel ports...................117 Setting the system option modes..................118 Installing and configuring the host...................118 Installing and configuring the FCAs ...................118...
  • Page 8 NonStop..........................146 Supported emulations.......................146 Emulation specifications....................146 OpenVMS...........................147 Supported emulations.......................147 Emulation specifications....................147 VMware..........................150 Supported emulations.......................150 Emulation specifications....................150 Linux...........................153 Supported emulations.......................153 Emulation specifications....................153 Solaris..........................156 Supported emulations.......................156 Emulation specifications....................156 IBM AIX..........................159 Supported emulations.......................159 Emulation specifications....................159 Disk parameters by emulation type..................161 Byte information table.......................167 Physical partition size table....................169 D Using Veritas Cluster Server to prevent data corruption........171 Using VCS I/O fencing......................171 E Reference information for the HP System Administration Manager (SAM)..174...
  • Page 9 Contents Contents...
  • Page 10: Overview

    1 Overview What's in this guide This guide includes information on installing and configuring P9000 disk arrays. The following operating systems are covered: HP-UX Windows Novell Netware NonStop OpenVMS VMware Linux Solaris IBM AIX For additional information on connecting disk arrays to a host system and configuring for a mainframe, see the HP StorageWorks P9000 Mainframe Host Attachment and Operations Guide.
  • Page 11: Fibre Channel Interface

    For all operating systems, before installing the disk array, ensure the environment conforms to the following requirements: Fibre Channel Adapters (FCAs): Install FCAs, all utilities, and drivers. For installation details, see the adapter documentation. HP StorageWorks XP Remote Web Console or HP StorageWorks P9000 or XP Command View Advanced Edition Software for configuring disk array ports and paths.
  • Page 12: Device Emulation Types

    Device emulation types The XP family of disk arrays and the SVS 200 support these device emulation types: OPEN-x devices: OPEN-x logical units represent disk devices. Except for OPEN-V, these devices are based on fixed sizes. OPEN-V is a user-defined size based on a CVS device. Supported emulations include OPEN-3, OPEN-8, OPEN-9, OPEN-E, OPEN-L, and OPEN-V devices.
  • Page 13: Snmp Configuration

    Your HP representative might need to set specific disk array system modes for these products. Check with your HP representative for the current versions supported. For I/O path failover, different products are available from Oracle, Veritas, and HP. Oracle supplies software called STMS for Solaris 8/9 and Storage Multipathing for Solaris 10. Veritas offers VxVM, which includes DMP.
  • Page 14: Hp-Ux

    2 HP-UX You and the HP service representative each play a role in installation. The HP service representative is responsible for installing the disk array and formatting the disk devices. You are responsible for configuring the host server for the new devices with assistance from the HP service representative. Installation roadmap Perform these actions to install and configure the disk array: “Installing and configuring the disk array”...
  • Page 15: Defining The Paths

    Defining the paths Use P9000 or XP Command View Advanced Edition Software or XP Remote Web Console (shown) to define paths between hosts and volumes (LUNs) in the disk array. This process is also called “LUN mapping.” In the XP Remote Web Console, LUN mapping includes: Configuring ports Enabling LUN security on the ports Creating host groups...
  • Page 16: Setting The System Option Modes

    CAUTION: The correct host mode must be set for all new installations (newly connected ports) to HP-UX hosts. Do not select a mode other than 08 for HP-UX. Changing a host mode after the host has been connected is disruptive and requires the server to be rebooted. When a new host group is added, additional host group modes (options) can be configured.
  • Page 17: Configuring The Fibre Channel Ports

    Configuring the Fibre Channel ports Configure the disk array Fibre Channel ports by using P9000 or XP Command View Advanced Edition Software or XP Remote Web Console (shown). Select the settings for each port based on your SAN topology. Use switch zoning if you connect different types of hosts to the array through the same switch.
  • Page 18: Fabric Zoning And Lun Security For Multiple Operating Systems

    Figure 2 Multi-cluster environment (HP-UX) Within the SAN, the clusters can be homogeneous (all the same operating system) or heterogeneous (mixed operating systems). How you configure LUN security and fabric zoning depends on the operating system mix and the SAN configuration. Fabric zoning and LUN security for multiple operating systems You can connect multiple clusters with multiple operating systems to the same switch and fabric using appropriate zoning and LUN security as follows:...
  • Page 19: Verifying Device Recognition

    Use the ioscan -f command, and verify that the rows shown in the example are displayed. If these rows are not displayed, check the host adapter installation (hardware and driver installation) or the host configuration. Example # ioscan -f Class I H/W Path Driver S/W State H/W Type Description
  • Page 20: Configuring Disk Array Devices

    z = LUN. c stands for controller, t stands for target ID, and d stands for device. The numbers x, y, and z are hexadecimal. Table 3 Device file name example (HP-UX): SCSI bus instance number, Hardware path, SCSI TID, File name. 14/12.6.0 c6t0d0
  • Page 21: Verifying The Device Files And Drivers

    Verifying the device files and drivers The device files for new devices are usually created automatically during HP-UX startup. Each device must have a block-type device file in the /dev/dsk directory and a character-type device file in the /dev/rdsk directory. However, some HP-compatible systems do not create the device files automatically.
  • Page 22 repeat the procedures in “Verifying device recognition” (page 19) to verify new device recognition and the device files and driver. Example # insf -e insf: Installing special files for mux2 instance 0 address 8/0/0 Failure of the insf -e command indicates a SAN problem. If the device files for the new disk array devices cannot be created automatically, you must create the device files manually using the mknod command as follows: Retrieve the device information you recorded earlier.
  • Page 23: Creating The Physical Volumes

    Create the device files for all disk array devices (SCSI disk and multiplatform devices) using the mknod command. Create the block-type device files in the /dev/dsk directory and the character-type device files in the /dev/rdsk directory. Example # cd /dev/dsk Go to /dev/dsk directory. # mknod /dev/dsk/c2t6d0 b 31 0x026000 Create block-type file.
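    For illustration only, the matching character-type device file for the same hypothetical device can be created in /dev/rdsk; the character (raw) major number shown here (188, commonly used by the sdisk driver on HP-UX 11.x) and the minor number are assumptions and should be confirmed with lsdev and the device information recorded earlier. Example # cd /dev/rdsk Go to /dev/rdsk directory. # mknod /dev/rdsk/c2t6d0 c 188 0x026000 Create character-type file.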
  • Page 24 The physical volumes that make up one volume group can be located either in the same disk array or in other disk arrays. To allow more volume groups to be created, use SAM to modify the HP-UX system kernel configuration. See Reference information for the HP System Administrator Manager SAM for details.
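    A minimal command sketch of the physical volume and volume group steps, using illustrative names only (/dev/dsk/c2t6d0 and /dev/vg06 are examples, and the group file minor number 0x060000 must be unique on your system): Example # pvcreate /dev/rdsk/c2t6d0 Create the physical volume. # mkdir /dev/vg06 Create the volume group directory. # mknod /dev/vg06/group c 64 0x060000 Create the group special file. # vgcreate /dev/vg06 /dev/dsk/c2t6d0 Create the volume group.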
  • Page 25: Creating Logical Volumes

    Use vgdisplay -v to verify that the volume group was created correctly. The -v option displays the detailed volume group information. Example # vgdisplay -v /dev/vg06 - - - Volume groups - - - VG Name /dev/vg06 VG Write Access read/write VG Status available
  • Page 26 To create logical volumes: Use the lvcreate -L command to create a logical volume. Specify the volume size (in megabytes) and the volume group for the new logical volume. HP-UX assigns the logical volume numbers automatically (lvol1, lvol2, lvol3). Use the following capacity values for the size parameter: OPEN-K = 1740 OPEN-3 = 2344
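    For example, assuming the volume group /dev/vg06 from the earlier steps (an illustrative name), a logical volume sized for an OPEN-3 device could be created as follows: Example # lvcreate -L 2344 /dev/vg06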
  • Page 27: Creating The File Systems

    Creating the file systems Create the file system for each new logical volume on the disk array. The default file system types are: HP-UX OS version 10.20 = hfs or vxfs, depending on entry in the /etc/defaults/fs file. HP-UX OS version 11.0 = vxfs HP-UX OS version 11i = vxfs To create file systems: Use the newfs command to create the file system using the logical volume as the argument.
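    As an illustration, assuming a vxfs file system on logical volume lvol1 in volume group /dev/vg06 (names are examples, not taken from this guide), newfs is run against the raw logical volume: Example # newfs -F vxfs /dev/vg06/rlvol1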
  • Page 28: Creating The Mount Directories

    Example # pvchange -t 60 /dev/dsk/c0t6d0 Physical volume "/dev/dsk/c0t6d0" has been successfully changed. Volume Group configuration for /dev/vg06 has been saved in /etc/lvmconf/vg06.conf. Verify that the new I/O timeout value is 60 seconds using the pvdisplay command: Example # pvdisplay /dev/dsk/c0t6d0 --- Physical volumes --- PV Name /dev/dsk/c0t6d0...
  • Page 29: Setting And Verifying The Auto-Mount Parameters

    /dev/vg00/lvol1 59797 59364 100% /dev/vg06/lvol1 2348177 2113350 /AHPMD-LU00 As a final verification, perform some basic UNIX operations (for example file creation, copying, and deletion) on each logical device to make sure that the devices on the disk array are fully operational.
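    A sample /etc/fstab entry for automatic mounting, assuming the logical volume and mount directory used in the earlier examples (adjust the file system type and mount options to your configuration): Example /dev/vg06/lvol1 /AHPMD-LU00 vxfs delaylog 0 2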
  • Page 30 Use the bdf command to verify the file system again.
  • Page 31: Windows

    3 Windows You and the HP service representative each play a role in installation. The HP service representative is responsible for installing the disk array and formatting the disk devices. You are responsible for configuring the host server for the new devices with assistance from the HP service representative. Installation roadmap Perform these actions to install and configure the disk array: “Installing and configuring the disk array”...
  • Page 32: Setting The Host Mode And Host Group Mode For The Disk Array Ports

    In P9000 or XP Command View Advanced Edition Software, LUN mapping includes: Configuring ports Creating storage groups Mapping volumes and WWN/host access permissions to the storage groups For more information about LUN mapping, see the HP StorageWorks XP LUN Manager User’s Guide, HP StorageWorks XP LUN Configuration and Security Manager user guide: HP XP12000 Disk Array, HP XP10000 Disk Array, HP 200 Storage Virtualization System, or Remote Web Console online help.
  • Page 33 The available host mode settings are as follows: Table 6 Host mode settings (Windows) Host mode Description 2C (available on some array HP recommended. For use with LUSE volumes when online LUN models) expansion is required or might be required in the future. HP recommended.
  • Page 34: Setting The System Option Modes

    Table 8 Host group modes (options) Windows. Function: Parameter Setting Failure for TPRLO. Default: Inactive. When using the Emulex FCA in the Windows environment and the parameter setting for TPRLO has failed after receiving TPRLO and FCP_CMD, respectively, PRLO will respond when HostMode=0x0C/0x2C and HostModeOption=0x06.
  • Page 35: Installing And Configuring The Host

    Installing and configuring the host This section explains how to install and configure Fibre Channel adapters (FCAs) that connect the host to the disk array. Loading the operating system and software Follow the manufacturer's instructions to load the operating system and software onto the host. Load all OS patches and configuration utilities supported by HP and the FCA manufacturer.
  • Page 36: Connecting The Disk Array

    Figure 3 Multi-cluster environment (Windows) Connecting the disk array The HP service representative performs the following steps to connect the disk array to the host: Verifying operational status of the disk array channel adapters, LDEVs, and paths. Connecting the Fibre Channel cables between the disk array and the fabric switch or host. Verifying the ready status of the disk array and peripherals.
  • Page 37: Creating And Formatting Disk Partitions

    Click OK to update the system configuration and start the Write Signature wizard. For each new disk, click OK to write a signature, or click No to prevent writing a signature. When you have performed this process for all new disks, the Disk Management main window opens and displays the added disks.
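    On Windows Server 2003 and later, the diskpart utility offers a command-line alternative to the Disk Management wizard; this is an illustrative sketch only (the disk number and drive letter are examples, not values from this guide): Example C:\> diskpart DISKPART> list disk DISKPART> select disk 2 DISKPART> create partition primary DISKPART> assign letter=E DISKPART> exit C:\> format E: /FS:NTFS /Q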
  • Page 38 Copy a file from an existing drive to each new drive to verify the new drives are working, and then delete the copies.
  • Page 39: Novell Netware

    4 Novell NetWare You and the HP service representative each play a role in installation. The HP service representative is responsible for installing the disk array and formatting the disk devices. You are responsible for configuring the host server for the new devices with assistance from the HP service representative. Installation roadmap Perform these actions to install and configure the disk array: “Installing and configuring the disk array”...
  • Page 40: Setting The Host Mode And Host Group Mode For The Disk Array Ports

    Creating host groups Assigning Fibre Channel adapter WWNs to host groups Mapping volumes (LDEVs) to host groups (by assigning LUNs) In P9000 or XP Command View Advanced Edition Software, LUN mapping includes: Configuring ports Creating storage groups Mapping volumes and WWN/host access permissions to the storage groups For details see the HP StorageWorks XP LUN Manager User’s Guide.
  • Page 41: Configuring The Fibre Channel Ports

    CAUTION: The correct host mode must be set for all new installations (newly connected ports) to Novell NetWare hosts. Do not select a mode other than 0A for Novell NetWare. The host modes must be set for certain middleware environments (for example, Novell High Availability Server, NHAS, System Fault Tolerance, SFT III).
  • Page 42: Installing And Configuring The Host

    Installing and configuring the host This section explains how to install and configure Fibre Channel adapters (FCAs) that connect the host to the disk array. Loading the operating system and software Follow the manufacturer's instructions to load the operating system and software onto the host. Load all OS patches and configuration utilities supported by HP and the FCA manufacturer.
  • Page 43: Clustering And Fabric Zoning

    Click Partitions, New, and select a device. Click Create, click NSS pools, click New, and name the pool. The pool name and volume name can be the same. Click Create, click NSS Logical Volume, select New, name the volume, then select the pool. Select Allow volume quota to grow to pool size.
  • Page 44: Connecting The Disk Array

    Table 10 Fabric zoning and LUN security settings (Novell NetWare) Environment OS Mix Fabric Zoning LUN Security Standalone SAN homogeneous (a single OS type present Not required Must be used when multiple (non-clustered) in the SAN) hosts or cluster nodes connect through a shared port Clustered SAN heterogeneous (more than one OS type...
  • Page 45 The Available Disk Drives screen lists the devices by device number. Record the device numbers. On the Available Disk Drives screen, select the device to partition, and then press Enter. If the partition table has already been initialized, skip this step. If the partition table has not been initialized, the partition table message is displayed.
  • Page 46: Assigning The New Devices To Volumes

    11. Select the disk to be included in the pool, and click Next. 12. On the Create Pool – Attribute Information screen, check Activate on Creation to make the new pool active, and then click Finish. 13. Select a label for the partition (optional). 14.
  • Page 47: Mounting The New Volumes

    NetWare 6.0 Using ConsoleOne, right-click the targeted server and click Properties. Click the Media tab and select NSSPools. Click New... to open the Create a New Logical Volume screen and enter the name for the new pool. Then click Next. On the Create Logical Volume—Storage Information screen, select the desired pool/device, enter the desired Volume Quota, and click Next.
  • Page 48: Verifying Client Operations

    NetWare 6.5 Enter NSSMU at the server console. In the main menu, select Volumes. Press Insert and enter a name for the new volume, then click Next. Select the desired pool/device, enter the desired Volume Quota, then click Next. Review and change volume attributes as necessary. Select Create.
  • Page 49: Helpful Multipath Commands

    LOAD QL2300.HAM SLOT=3 /LUNS /ALLPATHS /PORTNAMES /CONSOLE ######## End HAM Drivers ######## Restart the server. To see a list of the failover devices and paths, at the server prompt enter: list failover devices Example failover device path listing 0x20 [V6E0-A2-D0:0] HP OPEN-3 rev:HP16 Up 0x0D [V6E0-A2-D0:0] HP OPEN-3 rev:HP16 Priority = 0 selected 0x1B [V6E0-A3-D0:0] HP OPEN-3 rev:HP16...
  • Page 50: Configuring Netware 6.X Servers For Cluster Services

    Use the NWCONFIG NetWare utility to create partitions/Volumes for each LUN. For additional information consult these websites: http://www.novell.com. http://www.support.novell.com. Configuring NetWare 6.x servers for Cluster Services The following requirements must be met in order to use clustering: NetWare 6.x on each server in the cluster. All servers must be in the same NDS tree.
  • Page 51: Creating Logical Volumes

    Click Next to accept the default shared media settings, if prompted. Select Start Clustering on newly added or upgraded servers after installation. Install the licenses: Insert the appropriate Cluster License diskette into drive A: of the client. Click Next. Click Next to select all available licenses. Click Next at the summary screen.
  • Page 52: Nonstop

    5 NonStop You and the HP service representative each play a role in installation. The HP service representative is responsible for installing the disk array and formatting the disk devices. You are responsible for configuring the host server for the new devices with assistance from the HP service representative. The HP NonStop operating system runs on HP S-series and Integrity NonStop servers to provide continuous availability for applications, databases, and devices.
  • Page 53: Setting The Host Mode And Host Group Mode For The Disk Array Ports

    two different clusters of the disk array, and give each host group access to separate but identical LUNs. This arrangement minimizes the shared components among the four paths, providing both mirroring and greater failure protection. NOTE: For the highest level of availability and fault tolerance, HP recommends the use of two XP disk arrays, one for the Primary disks and one for the Mirror disks.
  • Page 54: Setting System Option Modes

    Ask your service representative if these modes apply in your situation. Table 11 System option modes (NonStop). System Option Mode, Minimum microcode version: XP128/XP1024, XP10000/XP12000, XP20000/XP24000. 21-09-02-00/00 or later; Available from initial release; Available from initial release; 21-14-02-00/00 or later; 21-14-35-00/00 or later
  • Page 55: Configuring The Fibre Channel Ports

    System option mode 685 enhances the performance of the XP storage systems during the repair or replacement of a cache board. When this mode is used, XP storage systems display consistent I/O processing response times throughout the repair action. To use system option mode 685, four or more cache PC boards must be installed.
  • Page 56: Fabric Zoning And Lun Security For Multiple Operating Systems

    Fabric zoning and LUN security for multiple operating systems You can connect multiple clusters of various operating systems to the same switch using appropriate switch zoning and array LUN security as follows: Use LUN Manager for LUN isolation when multiple NonStop systems connect through a shared array port.
  • Page 57: Openvms

    6 OpenVMS You and the HP service representative each play a role in installation. The HP service representative is responsible for installing the disk array and formatting the disk devices. You are responsible for configuring the host server for the new devices with assistance from the HP service representative. Installation roadmap Perform these actions to install and configure the disk array: “Installing and configuring the disk array”...
  • Page 58: Defining The Paths

    IMPORTANT: For optimal performance when configuring any XP disk array with a Tru64 host, HP does not recommend: Sharing of CHA (channel adapter) microprocessors Multiple host groups sharing the same CHA port NOTE: As illustrated in “Microprocessor port sharing (OpenVMS)” (page 58), there is no microprocessor sharing with 8-port module pairs.
  • Page 59: Setting The Host Mode For The Disk Array Ports

    Path configuration for OpenVMS requires the following steps: Define one command device LUN per array and present it to the OpenVMS hosts across all connected paths. If host mode option 33 is not enabled, for all LUNs, determine the device number as follows (once OpenVMS sees the XP disks): OpenVMS device name ($1$dgaxxx), where xxx = CU with LDEV appended (Then convert the created number from hex to decimal)
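    For example (hypothetical values), a LUN with CU:LDEV 01:2A gives the hexadecimal number 012A, which converts to decimal 298, so the resulting OpenVMS device name would be $1$dga298.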
  • Page 60: Setting The Uuid

    When a new host group is added, additional host group modes (options) can be configured. The storage administrator must verify if an additional host group mode is required for the host group. The following host group mode (option) is available for OpenVMS: Table 14 Host mode setting (OpenVMS) Host Mode Description...
  • Page 61 CU:LDEV value. If the CU:LDEV value is 01:FF, then the UUID must be set to 51 1 (the decimal value of 01FF). Thus, none of these volumes can have a CU:LDEV value greater than 7F:FF. Additionally, these volumes must use LUN numbers 1 to 255. These are limitations of the AlphaServer firmware used (both for the definition of known paths by the wwidmgr and by the boot code).
  • Page 62: Setting The System Option Modes

    Figure 6 Set UUID window (OpenVMS) Enter a UUID in the UUID field of the Set UUID window. When an OpenVMS server host is used, the UUID can be a numerical value between 1 and 32,767. Click OK to close the Set UUID window. Click Apply in the LUN Manager window.
  • Page 63: Installing And Configuring The Host

    Installing and configuring the host This section explains how to install and configure Fibre Channel adapters (FCAs) that connect the host to the disk array. Loading the operating system and software Follow the manufacturer's instructions to load the operating system and software onto the host. Load all OS patches and configuration utilities supported by HP and the FCA manufacturer.
  • Page 64: Fabric Zoning And Lun Security For Multiple Operating Systems

    Within the SAN, the clusters can be homogeneous (all the same operating system) or heterogeneous (mixed operating systems). How you configure LUN security and fabric zoning depends on the operating system mix and the SAN configuration. WARNING! For OpenVMS, HP recommends that a volume be presented to one OpenVMS cluster or standalone system at a time.
  • Page 65: Configuring Disk Array Devices

    Check the list of peripherals on the host to verify the host recognizes all disk array devices. If any devices are missing: If host mode option 33 is enabled, check the UUID values in the XP Remote Web Console LUN mapping If host mode option 33 is not enabled, check the CU:LDEV mapping To ensure the created OpenVMS device number is correct, check the values do not conflict with other device numbers or LUNs already created on the SAN...
  • Page 66: Verifying File System Operation

    Verifying file system operation Use the show device dg command to list the devices: Example $ show device dg NOTE: Use the show device/full dga100 command to show the path information for the device: Example: $ show device/full $1$dga100: Disk $1$DGA100: (NODE01), device type HP OPEN-V, is online, file-oriented device, shareable, device has multiple I/O paths, served to cluster via MSCP Server, error logging is enabled.
  • Page 67 $ directory Directory $1$DGA100:[USER] TEST.TXT;1 Total of 1 file. Verify the content of the data file: Example $ type test.txt this is a line of text for the test file test.txt Delete the data file: Example $ delete test.txt; $ directory %DIRECT-W-NOFILES, no files found $ type test.txt %TYPE-W-SEARCHFAIL,error searching for...
  • Page 68: Vmware

    7 VMware You and the HP service representative each play a role in installation. The HP service representative is responsible for installing the disk array and formatting the disk devices. You are responsible for configuring the host server for the new devices with assistance from the HP service representative. Installation roadmap Perform these actions to install and configure the disk array: “Installing and configuring the disk array”...
  • Page 69: Setting The Host Mode And Host Group Mode For The Disk Array Ports

    Web Console (shown). If XP Remote Web Console is not available, the HP service representative can set the host mode using the SVP. The host mode setting for VMware is 0C for the XP10000/XP12000 and 01 for the XP20000/XP24000. Installing and configuring the disk array...
  • Page 70: Setting The System Option Modes

    Figure 8 Host mode setting 01 (VMware)(XP20000/XP24000 only) CAUTION: The correct host mode must be set for all new installations (newly connected ports) to VMware hosts. Do not select a mode other than 0C (XP10000/XP12000) or 01 (XP20000/ XP24000) for VMware. Changing a host mode after the host has been connected is disruptive and requires the server to be rebooted.
  • Page 71: Installing And Configuring The Host

    Installing and configuring the host This section explains how to install and configure Fibre Channel adapters (FCAs) that connect the host to the disk array. Loading the operating system and software Follow the manufacturer's instructions to load the operating system and software onto the host. Load all OS patches and configuration utilities supported by HP and the FCA manufacturer.
  • Page 72: Fabric Zoning And Lun Security For Multiple Operating Systems

    Figure 9 Multi-cluster environment (VMware) Within the SAN, the clusters can be homogeneous (all the same operating system) or heterogeneous (mixed operating systems). How you configure LUN security and fabric zoning depends on the operating system mix and the SAN configuration. Fabric zoning and LUN security for multiple operating systems You can connect multiple clusters with multiple operating systems to the same switch and fabric using appropriate zoning and LUN security as follows:...
  • Page 73: Configuring Vmware Esx Server

    Configuring VMware ESX Server VMware ESX Server 2.5x Open the management interface, select the Options tab, and then click Advanced Settings..In the “Advanced Settings” window, scroll down to Disk.MaskLUN. Verify that the value is large enough to support your configuration (default=8). If the value is less than the number of LUNs you have presented then you will not see all of your LUNs.
  • Page 74: Setting Up Virtual Machines (Vms) And Guest Operating Systems

    Setting up virtual machines (VMs) and guest operating systems Setting the SCSI disk timeout value for Windows VMs To ensure Windows VMs (Windows 2000 and Windows Server 2003) wait at least 60 seconds for delayed disk operations to complete before generating errors, you must set the SCSI disk timeout value to 60 seconds by editing the registry of the guest operating system as follows: CAUTION: Before making any changes to the registry file, make a backup copy of the existing...
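    The registry value involved is typically the following, set as a DWORD with a decimal value of 60; verify against the VMware and Microsoft documentation for your guest OS version: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Disk\TimeoutValue = 60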
  • Page 75 Select the Bus Sharing mode (virtual or physical) appropriate for your configuration, and then click OK. Setting up virtual machines (VMs) and guest operating systems...
  • Page 76: Selecting The Scsi Emulation Driver

    NOTE: Sharing VMDK disks is not supported. VMware ESX Server 3.0x In VirtualCenter, select the VM you plan to edit, and then click Edit Settings. Select the SCSI controller for use with your shared LUNs. NOTE: If only one SCSI controller is present, add another disk that uses a different SCSI bus than your current configured devices.
  • Page 77 Linux For the 2.4 kernel use the LSI Logic SCSI driver. For the 2.6 kernel use the BusLogic SCSI driver. Setting up virtual machines (VMs) and guest operating systems...
  • Page 78: Linux

    8 Linux You and the HP service representative each play a role in installation. The HP service representative is responsible for installing the disk array and formatting the disk devices. You are responsible for configuring the host server for the new devices with assistance from the HP service representative. Installation roadmap Perform these actions to install and configure the disk array: “Installing and configuring the disk array”...
  • Page 79: Setting The Host Mode And Host Group Mode For The Disk Array Ports

    This process is also called “LUN mapping.” In the XP Remote Web Console, LUN mapping includes: Configuring ports Enabling LUN security on the ports Creating host groups Assigning Fibre Channel adapter WWNs to host groups Mapping volumes (LDEVs) to host groups (by assigning LUNs) In P9000 or XP Command View Advanced Edition Software, LUN mapping includes: Configuring ports Creating storage groups...
  • Page 80: Configuring The Fibre Channel Ports

    CAUTION: The correct host mode must be set for all new installations (newly connected ports) to Linux hosts. Do not select a mode other than 00 for Linux. Changing a host mode after the host has been connected is disruptive and requires the server to be rebooted. When a new host group is added, additional host group modes (options) can be configured.
  • Page 81: Setting The System Option Modes

    your SAN topology. Use switch zoning if you connect different types of hosts to the array through the same switch. Setting the system option modes The HP service representative sets the system option mode(s) based on the operating system and software configuration of the host.
  • Page 82: Fabric Zoning And Lun Security For Multiple Operating Systems

    Figure 10 Multi-cluster environment (Linux) Within the SAN, the clusters can be homogeneous (all the same operating system) or heterogeneous (mixed operating systems). How you configure LUN security and fabric zoning depends on the operating system mix and the SAN configuration. Fabric zoning and LUN security for multiple operating systems You can connect multiple clusters with multiple operating systems to the same switch and fabric using appropriate zoning and LUN security as follows:...
  • Page 83: Verifying New Device Recognition

    Power on the display of the Linux server. Power on all devices other than the Linux server. Confirm ready status of all devices. Power on the Linux server. Verifying new device recognition Verify that the FCA driver is installed using the lsmod command. View the device information in the /proc/scsi/scsi file.
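    A quick illustrative check (the qla2xxx driver name is an example for QLogic FCAs; Emulex FCAs use lpfc, and the output shown is a sample, not taken from this guide): Example # lsmod | grep qla2xxx # cat /proc/scsi/scsi Host: scsi0 Channel: 00 Id: 00 Lun: 00 Vendor: HP Model: OPEN-V (actual entries vary by configuration)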
  • Page 84: Partitioning The Devices

    “Partitioning the devices” (page 84) “Creating the file systems” (page 84) “Creating the mount directories” (page 85) “Creating the mount table” (page 85) “Verifying file system operation” (page 86) Creating scripts to configure all devices at once could save you considerable time. Partitioning the devices In a Linux environment, one LUN can be divided into a maximum of four primary partitions (using fdisk) or a maximum of one extended partition.
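    An illustrative fdisk session creating a single primary partition on a hypothetical device /dev/sdd: Example # fdisk /dev/sdd Command (m for help): n (new partition) Command action: p (primary), partition number 1, accept the default first and last cylinders Command (m for help): w (write the partition table and exit)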
  • Page 85: Creating File Systems With Ext2

    Creating file systems with ext2 Enter mkfs -t ext2 /dev/device_name. Example # mkfs -t ext2 /dev/sdd Repeat step 1 for each device on the disk array. Creating the mount directories Create mount directories using the mkdir command. Choose names for the mount directories which identify both the logical volume and partition.
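    A sample /etc/fstab entry for one of the new partitions, assuming the device and mount point names used in this chapter's examples: Example /dev/sdd1 /A5700F-LU00 ext2 defaults 0 2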
  • Page 86: Verifying File System Operation

    Display the mounted devices using the df -h command and verify that the devices were automounted. Example # df -h Filesystem Size Used Avail Used% Mounted on /dev/sda1 1.8G 890M 866M /dev/sdb1 1.9G 1.0G 803M /usr /dev/sdc1 2.2G 13k 2.1G /A5700F-LU00 Verifying file system operation Verify file system operation by copying a file to each device.
  • Page 87: Solaris

    9 Solaris You and the HP service representative each play a role in installation. The HP service representative is responsible for installing the disk array and formatting the disk devices. You are responsible for configuring the host server for the new devices with assistance from the HP service representative. Installation roadmap Perform these actions to install and configure the disk array: “Installing and configuring the disk array”...
  • Page 88: Setting The Host Mode And Host Group Mode For The Disk Array Ports

    This process is also called “LUN mapping.” In the XP Remote Web Console, LUN mapping includes: Configuring ports Enabling LUN security on the ports Creating host groups Assigning Fibre Channel adapter WWNs to host groups Mapping volumes (LDEVs) to host groups (by assigning LUNs) In P9000 or XP Command View Advanced Edition Software, LUN mapping includes: Configuring ports Creating storage groups...
  • Page 89 CAUTION: The correct host mode must be set for all new installations (newly connected ports) to Solaris hosts. Do not select a mode other than 09 for Solaris. Changing a host mode after the host has been connected is disruptive and requires the server to be rebooted. When a new host group is added, additional host group modes (options) can be configured.
  • Page 90: Setting The System Option Modes

    Table 19 Host group modes (options) Solaris (continued). Host Group Mode 13: SIM report at link failure. Default: Inactive. Select HMO 13 to enable SIM notification when the number of link failures detected between ports exceeds the threshold. Optional; this mode is common to all host platforms.
  • Page 91: Installing And Configuring The Host

    Installing and configuring the host This section explains how to install and configure Fibre Channel adapters (FCAs) that connect the host to the disk array. Loading the operating system and software Follow the manufacturer's instructions to load the operating system and software onto the host. Load all OS patches and configuration utilities supported by HP and the FCA manufacturer.
  • Page 92: Configuring Fcas With The Oracle San Driver Stack

    Table 20 Max throttle (queue depth) requirements for the devices (Solaris) Queue depth option Requirements Option 1 XP10000, XP12000, SVS 200: Queue_depth 1024 default. XP20000, XP24000: Queue_depth 2048 default. CAUTION: The number of issued commands must be completely controlled. Because queuing capacity of the disk array is either 1024 or 2048 per port (depending on the disk array), you must adjust the number of issued commands from Solaris system to less than 1024 or 2048.
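    As an illustrative sketch only, the command queue is commonly limited in /etc/system; whether the sd or ssd driver applies, and the value itself, depend on your FCA stack and LUN count (for example, a 1024 queue capacity divided across a hypothetical 128 LUNs gives 8), and a reboot is required for the change to take effect: Example set ssd:ssd_max_throttle=8 (or set sd:sd_max_throttle=8)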
  • Page 93 NOTE: Ensure host group mode 7 is set for the XP array or SVS 200 ports where the host is connected to enable automatic LUN recognition using this driver. To configure the FCA: Check with your HP representative to determine which non-Oracle branded FCAs are supported by HP with the Oracle SAN driver Stack, and if a specific System Mode or Host Group Mode setting is required for Oracle and non-Oracle branded FCAs.
  • Page 94: Configuring Emulex Fcas With The Lpfc Driver

    Configuring Emulex FCAs with the lpfc driver NOTE: The lpfc driver cannot be used with Oracle StorEdge Traffic Manager/Oracle Storage VM Multipathing. Emulex does not support using both the lpfc driver and the emlxs driver (provided with the Oracle SAN driver stack) concurrently. To use the emlxs driver, see Configuring FCAs with the Oracle SAN driver stack.
  • Page 95: Configuring Qlogic Fcas With The Qla2300 Driver

    Configuring QLogic FCAs with the qla2300 driver NOTE: The qla2300 driver cannot be used with Oracle StorEdge Traffic Manager/Oracle Storage Multipathing. To configure a QLogic FCA using the Oracle SAN driver stack, see Configuring FCAs with the Oracle SAN driver stack.
  • Page 96: Amcc/Jni 2 Gbit Fcas

    remain the same when the system is rebooted. Persistent bindings can be set by editing the configuration file as shown in the examples that follow. Make sure the target in the driver configuration file and in the kernel file (/kernel/drv/sd.conf) match. Replace the WWNs shown in the examples with the correct WWNs for your array ports.
  • Page 97: Fabric Zoning And Lun Security For Multiple Operating Systems

    multi-cluster environment with three clusters, each containing two nodes. The nodes share access to the disk array. Figure 11 Multi-cluster environment (Solaris) Within the SAN, the clusters can be homogeneous (all the same operating system) or heterogeneous (mixed operating systems). How you configure LUN security and fabric zoning depends on the operating system mix and the SAN configuration.
  • Page 98: Verifying Host Recognition Of Disk Array Devices

    Verifying host recognition of disk array devices Verify that the host recognizes the disk array devices as follows: Use format to display the device information. Check the list of disks to verify the host recognizes all disk array devices. If any devices are missing or if no array devices are shown, check the following: SAN (zoning configuration and cables) Disk array path configuration (FCA HBA WWNs, host group 09 and host group mode...
  • Page 99: Creating The File Systems

    Repeat this labeling procedure for each new device (use the disk command to select another disk). When you finish labeling the disks, enter quit or press Ctrl-D to exit the format utility. For further information, see the System Administration Guide - Devices and File Systems at: http://www.oracle.com/technetwork/indexes/documentation.
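    After labeling, a file system can be created and mounted on each device; the slice and mount point names below are examples only, not values from this guide: Example # newfs /dev/rdsk/c1t2d0s0 # mkdir /XP-LU00 # mount /dev/dsk/c1t2d0s0 /XP-LU00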
  • Page 100 not need to be installed separately. With VxVM 4.x versions, you need to download and install the ASL from the Symantec/Veritas support website (http://support.veritas.com): Select Volume Manager for Unix/Linux as product and search the XP array model for Solaris as the platform. Read the TechFile that appears and follow the instructions to download and install the ASL.
  • Page 101: 10 Ibm Aix

    10 IBM AIX You and the HP service representative each play a role in installation. The HP service representative is responsible for installing the disk array and formatting the disk devices. You are responsible for configuring the host server for the new devices with assistance from the HP service representative. Installation roadmap Perform these actions to install and configure the disk array: “Installing and configuring the disk array”...
  • Page 102: Setting The Host Mode And Host Group Mode For The Disk Array Ports

    Assigning Fibre Channel adapter WWNs to host groups Mapping volumes (LDEVs) to host groups (by assigning LUNs) In P9000 or XP Command View Advanced Edition Software, LUN mapping includes: Configuring ports Creating storage groups Mapping volumes and WWN/host access permissions to the storage groups For details see the HP StorageWorks XP LUN Manager User’s Guide.
  • Page 103 CAUTION: The correct host mode must be set for all new installations (newly connected ports) to AIX hosts. Do not select a mode other than 0F for AIX. Changing a host mode after the host has been connected is disruptive and requires the server to be rebooted. When a new host group is added, additional host group modes (options) can be configured.
  • Page 104: Setting The System Option Modes

    Table 21 Host group mode (option) IBM AIX. Function: Veritas Storage Foundation for Oracle RAC, DBE+RAC Database Edition/Advanced Cluster for Real Application Clusters, or if Veritas Cluster Server 4.0 or later with I/O fencing function is used. Default: Inactive. Comments: Previously MODE186. Do not apply this option to Oracle Cluster.
  • Page 105: Installing And Configuring The Host

    Installing and configuring the host This section explains how to install and configure Fibre Channel adapters (FCAs) that connect the host to the disk array. Loading the operating system and software Follow the manufacturer's instructions to load the operating system and software onto the host. Load all OS patches and configuration utilities supported by HP and the FCA manufacturer.
  • Page 106: Fabric Zoning And Lun Security For Multiple Operating Systems

    Figure 12 Multi-cluster environment (IBM AIX) Within the SAN, the clusters can be homogeneous (all the same operating system) or heterogeneous (mixed operating systems). How you configure LUN security and fabric zoning depends on the operating system mix and the SAN configuration. Fabric zoning and LUN security for multiple operating systems You can connect multiple clusters with multiple operating systems to the same switch and fabric using appropriate zoning and LUN security as follows:...
  • Page 107: Configuring Disk Array Devices

    If the disk array LUNs are defined after the IBM system is powered on, issue a cfgmgr command to recognize the new devices. Use the lsdev command to display system device data and verify that the system recognizes the newly installed devices. The devices are listed by device file name.
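    An illustrative sequence (the hdisk number and location code in the sample output are examples, not taken from this guide): Example # cfgmgr # lsdev -C -c disk hdisk3 Available 30-60-01 Other FC SCSI Disk Drive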
  • Page 108 Table 24 Device parameters-queue depth (IBM AIX) Parameter Recommended Value Queue depth per LU Queue depth per port (MAXTAGS) 1024 The recommended queue depth settings might not provide the best I/O performance for your system. You can adjust the queue depth setting to optimize the I/O performance of the disk array. Displaying the device parameters using the AIX command line At the command line prompt, enter lsattr -E -l hdiskx, where hdiskx is the device file name.
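    The queue depth of a device can also be changed from the command line with chdev; the hdisk name and value below are examples only, so use the values recommended for your configuration (the device must not be in use when it is changed, or use chdev -P to apply the change at the next restart): Example # chdev -l hdisk3 -a queue_depth=2 # lsattr -E -l hdisk3 -a queue_depth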
  • Page 109: Assigning The New Devices To Volume Groups

    Security & Users Communications Applications and Services Print Spooling Problem Determination Performance & Resource Scheduling System Environments Processes & Subsystems Applications Using SMIT (information only) Select Fixed Disk. Select Change/Show Characteristics of a Disk. Select the desired device from the Disk menu. The Change/Show Characteristics of a Disk screen for that device is displayed.
  • Page 110 System Environments Processes & Subsystems Applications Using SMIT (information only) Select Logical Volume Manager. Example System Storage Management (Physical & Logical Storage) Move cursor to desired item and press Enter. Logical Volume Manager File Systems Files & Directories Removable Disk Management System Backup Manager Select Volume Groups.
  • Page 111: Creating The Journaled File Systems

    Physical partition SIZE in megabytes PHYSICAL VOLUME names [hdisk1] Activate volume group AUTOMATICALLY at system restart? Volume Group MAJOR NUMBER Enter yes or no in the Activate volume group AUTOMATICALLY at system restart? field. If you are not using HACMP (High Availability Cluster Multi-Processing) or HAGEO (High Availability Geographic), enter yes.
  • Page 112 Removable Disk Management System Backup Manager Select Add / Change / Show / Delete File Systems. Example File Systems Move cursor to desired item and press Enter. List All File Systems List All Mounted File Systems Add / Change / Show / Delete File Systems Mount a File System Mount a Group of File Systems Unmount a File System...
  • Page 113: Mounting And Verifying The File Systems

    Enter values for the following fields: SIZE of file system (in 512-byte blocks). Enter the lsvg command to display the number of free physical partitions and physical partition size. Calculate the maximum size of the file system as follows: (FREE PPs - 1) x (PP SIZE) x 2048. Mount Point: Enter mount point name.
  • Page 114 /dev/hd3 24576 11608 0% /tmp /dev/hd1 8192 7840 1% /home /dev/lv00 4792320 4602128 1% /VG00 (OPEN-3) /dev/lv01 4792320 4602128 1% /VG01 (OPEN-3) /dev/lv02 14401536 13949392 1% /VG02 (OPEN-9) Verify that the file system is usable by performing some basic operations (for example, file creation, copying, and deletion) on each logical device.
  • Page 115: 1 Citrix Xenserver Enterprise

    11 Citrix XenServer Enterprise You and the HP service representative each play a role in installation. The HP service representative is responsible for installing the disk array and formatting the disk devices. You are responsible for configuring the host server for the new devices with assistance from the HP service representative. Installation roadmap Perform these actions to install and configure the disk array: “Installing and configuring the disk array”...
  • Page 116: Setting The Host Mode And Host Group Mode For The Disk Array Ports

    Creating host groups Assigning Fibre Channel adapter WWNs to host groups Mapping volumes (LDEVs) to host groups (by assigning LUNs) In P9000 or XP Command View Advanced Edition Software, LUN mapping includes: Configuring ports Creating storage groups Mapping volumes and WWN/host access permissions to the storage groups For details see the HP StorageWorks XP LUN Manager User’s Guide.
  • Page 117: Configuring The Fibre Channel Ports

    CAUTION: The correct host mode must be set for all new installations (newly connected ports) to Linux hosts. Do not select a mode other than 00 for Linux. Changing a host mode after the host has been connected is disruptive and requires the server to be rebooted. When a new host group is added, additional host group modes (options) can be configured.
  • Page 118: Setting The System Option Modes

    your SAN topology. Use switch zoning if you connect different types of hosts to the array through the same switch. Setting the system option modes The HP service representative sets the system option mode(s) based on the operating system and software configuration of the host.
  • Page 119: Fabric Zoning And Lun Security For Multiple Operating Systems

    Figure 13 Multi-cluster environment (Linux) Within the SAN, the clusters can be homogeneous (all the same operating system) or heterogeneous (mixed operating systems). How you configure LUN security and fabric zoning depends on the operating system mix and the SAN configuration. Fabric zoning and LUN security for multiple operating systems You can connect multiple clusters with multiple operating systems to the same switch and fabric using appropriate zoning and LUN security as follows:...
  • Page 120: Verifying New Device Recognition

    Power on the display of the Linux server. Power on all devices other than the Linux server. Confirm ready status of all devices. Power on the Linux server. Verifying new device recognition Verify that the FCA driver is installed using the sr-probe command. # xe sr-probe type=lvmohba Error code: SR_BACKEND_FAILURE_107 Error parameters: , The SCSIid parameter is missing or incorrect, <?xml version="1.0"...
  • Page 121: Configuring Disk Array Devices

    [root@cb-xen-srv31 ~]# Configuring disk array devices Disks in the disk array are configured using the same procedure for configuring any new disk on the host. This includes the following procedures: Configuring multipathing Creating a Storage Repository Adding a Virtual Disk to a domU Adding a dynamic LUN Configuring multipathing Follow these steps to configure multipathing using XenCenter.
  • Page 122 Select the General tab and then click Properties. Select the Multipathing tab, check the Enable multipathing on this server check box, and then click OK. 122 Citrix XenServer Enterprise...
  • Page 123 Right-click the domU that was placed in maintenance mode and select Exit Maintenance Mode. Open a command line interface to the dom0 and edit the /etc/multipath-enable.conf file with the appropriate array. NOTE: HP recommends that you use the RHEL 5.x device mapper config file and multipathing parameter settings on HP.com.
  • Page 124: Creating A Storage Repository

    Creating a Storage Repository Follow these steps to create a Storage Repository using XenCenter. Open XenCenter, create a pool, and then add all of the dom0s to the pool. Select one of the dom0s in the pool, click the Storage tab, and then click New SR. Select the type of virtual disk storage for the storage array and then click Next.
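    The same Storage Repository can also be created from the dom0 command line; this is a sketch only, and the name label is an example while the SCSIid value must be taken from the xe sr-probe output for your LUN: Example # xe sr-probe type=lvmohba # xe sr-create name-label="XP_SR" shared=true type=lvmohba content-type=user device-config:SCSIid=<SCSIid>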
  • Page 125: Adding A Virtual Disk To A Domu

    NOTE: For Fibre Channel, select Hardware HBA. Complete the template and then click Finish. Adding a Virtual Disk to a domU After the Storage Repository has been created on the dom0, the vdisk from the Storage Repository can be assigned to the domU. This section describes how to pass vdisks to the domU. HP ProLiant
  • Page 126 Virtual Console can be used with HP Integrated CitrixXen Server Enterprise Edition to complete this process. Select the domU. Select the Storage tab and then click Add. 126 Citrix XenServer Enterprise...
  • Page 127: Adding A Dynamic Lun

    Type a name, description, and size for the new disk and then click Add. Adding a dynamic LUN To add a LUN to a dom0 dynamically, follow these steps. Create and present a LUN to a dom0 from the array. Enter the following command to rescan the sessions that are connected to the arrays for the new LUN: xe sr-probe type=lvmohba.
  • Page 128: 12 Troubleshooting

    12 Troubleshooting This chapter includes resolutions for various error conditions you may encounter. If you are unable to resolve an error condition, ask your HP support representative for assistance. Error conditions Depending on your system configuration, you may be able to view error messages (R-SIMS) as follows: In XP Remote Web Console (Status tab) In P9000 or XP Command View Advanced Edition Software (Alerts window)
  • Page 129 Table 27 Error conditions (continued) Error condition Recommended action The host detects a parity error. Check the FCA and make sure it was installed properly. Reboot the host. The host hangs or devices are declared Make sure there are no duplicate disk array TIDs and that disk array TIDs and the host hangs.
  • Page 130: 13 Support And Other Resources

    13 Support and other resources Contacting HP For worldwide technical support information, see the HP support website: http://www.hp.com/support Before contacting HP, collect the following information: Product model names and numbers Technical support registration number (if applicable) Product serial numbers Error messages Operating system type and revision level Detailed questions Subscription service...
  • Page 131: Conventions For Storage Capacity Values

    Conventions for storage capacity values HP XP storage systems use the following values to calculate physical storage capacity values (hard disk drives): 1 KB (kilobyte) = 1,000 (10^3) bytes; 1 MB (megabyte) = 1,000^2 bytes; 1 GB (gigabyte) = 1,000^3 bytes; 1 TB (terabyte) = 1,000^4 bytes
  • Page 132: A Path Worksheet

    A Path worksheet Worksheet Table 28 Path worksheet. Columns: LDEV (CU:LDEV) (CU = control unit), Device Type, SCSI Bus Number, Path 1, Alternate Paths. 0:00 TID: TID: TID: LUN: LUN: LUN: 0:01 TID: TID: TID: LUN: LUN: LUN: 0:02 TID: TID: TID: LUN: LUN:
  • Page 133: B Path Worksheet (Nonstop)

    B Path worksheet (NonStop) Worksheet Table 29 Path worksheet (NonStop) LUN # CU:LDEV Array Emulation Array Array Port NSK Server NSK SAC NSK SAC Path Group type Port name (G-M-S-S) volume name Example: 00 01:00 1- 1 1 OPEN-E 50060E80 /OSDNSK3 1 10-2-3- 1 50060B00...
  • Page 134: C Disk Array Supported Emulations

    C Disk array supported emulations HP-UX This appendix provides information about supported emulations and device type specifications. Some parameters might not be relevant to your array. Consult your HP representative for information about supported configurations for your system. Supported emulations HP recommends using OPEN-V as the emulation for better performance and features that may not be supported with the legacy emulations (OPEN-[389LE]).
  • Page 135: General Notes

    Table 31 Emulation specifications (HP-UX) (continued) Emulation Category Product name Blocks Sector size # of Heads Sectors Capacity (512 bytes) (bytes) cylinders per track OPEN-3 CVS SCSI disk OPEN-3-CVS Footnote Footnote Footnote OPEN-8 CVS SCSI disk OPEN-8-CVS Footnote Footnote Footnote OPEN-9 CVS SCSI disk OPEN-9-CVS...
  • Page 136 OPEN-V: The number of cylinders for a CVS volume = # of cylinders = (capacity (MB) specified by user) × 16/15 Example For an OPEN-V CVS volume with capacity = 49 MB: # of cylinders = 49 × 16/15 = 52.26 (rounded up to next integer) = 53 cylinders OPEN-3/8/9/E: The number of cylinders for a CVS LUSE volume = # of cylinders = (capacity (MB) specified by user) ×...
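    Restated compactly with an explicit round-up (the same relationships given above; the CVS LUSE case additionally multiplies by the number n of concatenated volumes):

        OPEN-V CVS:        # of cylinders = ceil( capacity (MB) × 16/15 )
        OPEN-3/8/9/E CVS:  # of cylinders = ceil( capacity (MB) × 1024/720 )

    For example, a 49 MB OPEN-V CVS volume gives ceil(49 × 16/15) = 53 cylinders.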
  • Page 137: LUSE Device Parameters

    LUSE device parameters Table 32 LUSE device parameters (HP-UX) Device type Physical extent size (PE) Max physical extent size (MPE) OPEN-K/3/8/9/E OPEN-3/K*n (n= 2 to 36) default default OPEN-3/K-CVS OPEN-3/K*n-CVS (n = 2 to 36) OPEN-8/9*n n = 2 to 17 default default n = 18...
  • Page 138: SCSI TID Map For Fibre Channel Adapters

    Table 32 LUSE device parameters (HP-UX) (continued) Device type Physical extent size (PE) Max physical extent size (MPE) n = 22 38205 n = 23 39942 n = 24 41679 n = 25 43415 n = 26 45152 n = 27 46889 n = 28 48625...
  • Page 139 adapters. The controller number (the dks value in /dev/dsk/dks*d*l*s*) depends on the server configuration, and a different value is assigned per each column. The mapping cannot be done when these conditions exist: Disk array devices and other types of devices are connected in the same loop Information for unused devices remains in the server system Multiple ports participate in the same arbitrated loop Table 33 SCSI TID map (HP-UX)
  • Page 140: Windows

    Windows This appendix provides information about supported emulations and emulation specifications. Some parameters might not be relevant to your array. Consult your HP representative for information about supported configurations for your system. Supported emulations HP recommends using OPEN-V as the emulation for better performance and features that may not be supported with the legacy emulations (OPEN-[389LE]).
  • Page 141 Table 35 Emulation specifications (Windows) (continued) Emulation Category Product name Blocks Sector size # of Heads Sectors Capacity (512 bytes) (bytes) cylinders per track OPEN-8 CVS SCSI disk OPEN-8-CVS Footnote Footnote Footnote OPEN-9 CVS SCSI disk OPEN-9-CVS Footnote Footnote Footnote OPEN-E CVS SCSI disk OPEN-E-CVS...
  • Page 142 Example For an OPEN-V CVS volume with capacity = 49 MB: # of cylinders = 49 × 16/15 = 52.26 (rounded up to next integer) = 53 cylinders OPEN-3/8/9/E: The number of cylinders for a CVS LUSE volume = # of cylinders = (capacity (MB) specified by user) ×...
  • Page 143: Novell NetWare

    Novell NetWare This appendix provides information about supported emulations and emulation specifications. Some parameters might not be relevant to your array. Consult your HP representative for information about supported configurations for your system. Supported emulations HP recommends using OPEN-V as the emulation for better performance and features that may not be supported with the legacy emulations (OPEN-[389LE]).
  • Page 144 Table 37 Emulation specifications (Novell NetWare) (continued) Emulation Category Product name Blocks Sector size # of Heads Sectors Capacity (512 bytes) (bytes) cylinders per track OPEN-V SCSI disk OPEN-V Footnote Footnote Footnote CVS LUSE OPEN-3*n CVS SCSI disk OPEN-3*n-CVS Footnote Note 6 Note 7 OPEN-8*n CVS...
  • Page 145 OPEN-3/8/9/E: The number of cylinders for a CVS LUSE volume = # of cylinders = (capacity (MB) specified by user) × 1024/720 × n Example For a CVS LUSE volume with capacity = 37 MB and n = 4: # of cylinders = 37 ×...
  • Page 146: NonStop

    NonStop This appendix provides information about supported emulations and emulation specifications. Some parameters might not be relevant to your array. Consult your HP representative for information about supported configurations for your system. Supported emulations HP recommends using OPEN-V as the emulation for better performance and features that may not be supported with the legacy emulations (OPEN-[389LE]).
  • Page 147: OpenVMS

    OpenVMS This appendix provides information about supported emulations and device type specifications. Some parameters might not be relevant to your array. Consult your HP representative for information about supported configurations for your system. Supported emulations HP recommends using OPEN-V as the emulation for better performance and features that may not be supported with the legacy emulations (OPEN-[389LE]).
  • Page 148 Table 41 Emulation specifications (OpenVMS) (continued) Emulation Category Product Blocks Sector size # of Heads Sectors Capacity MB* name (512 bytes) (bytes) cylinders per track OPEN-E SCSI disk OPEN-E-CVS Footnote Footnote Footnote OPEN-V SCSI disk OPEN-V Footnote Footnote Footnote CVS LUSE OPEN-3*n SCSI disk OPEN-3*n-CVS...
  • Page 149 OPEN-V: The number of cylinders for a CVS volume = # of cylinders = (capacity (MB) specified by user) × 16/15 Example For an OPEN-V CVS volume with capacity = 49 MB: # of cylinders = 49 × 16/15 = 52.26 (rounded up to next integer) = 53 cylinders OPEN-3/8/9/E: The number of cylinders for a CVS LUSE volume = # of cylinders = (capacity (MB) specified by user) ×...
  • Page 150: VMware

    VMware This appendix provides information about supported emulations and device type specifications. Some parameters might not be relevant to your array. Consult your HP representative for information about supported configurations for your system. Supported emulations HP recommends using OPEN-V as the emulation for better performance and features that may not be supported with the legacy emulations (OPEN-[389LE]).
  • Page 151 Table 43 Emulation specifications (VMware) (continued) Emulation Category Product Blocks Sector size # of Heads Sectors Capacity MB* name (512 bytes) (bytes) cylinders per track OPEN-8 SCSI disk OPEN-8-CVS Footnote Footnote Footnote OPEN-9 SCSI disk OPEN-9-CVS Footnote Footnote Footnote OPEN-E SCSI disk OPEN-E-CVS Footnote...
  • Page 152 For an OPEN-3 CVS volume with capacity = 37 MB: # of cylinders = 37 × 1024/720 = 52.62 (rounded up to next integer) = 53 cylinders OPEN-V: The number of cylinders for a CVS volume = # of cylinders = (capacity (MB) specified by user) ×...
  • Page 153: Linux

    Linux This appendix provides information about supported emulations and device type specifications. Some parameters might not be relevant to your array. Consult your HP representative for information about supported configurations for your system. Supported emulations HP recommends using OPEN-V as the emulation for better performance and features that may not be supported with the legacy emulations (OPEN-[389LE]).
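    A quick check of the emulation presented to a Linux host (a generic Linux command, not a procedure from this guide) is to read the SCSI inquiry data; the model field of an XP LU shows the emulation name:

        # Show vendor/model strings for attached SCSI devices
        cat /proc/scsi/scsi

        # Illustrative output fragment only:
        #   Vendor: HP       Model: OPEN-V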
  • Page 154 Table 45 Emulation specifications (Linux) (continued) Emulation Category Product name Blocks Sector size # of Heads Sectors Capacity (512 bytes) (bytes) cylinders per track OPEN-8 CVS SCSI disk OPEN-8-CVS Footnote Footnote Footnote OPEN-9 CVS SCSI disk OPEN-9-CVS Footnote Footnote Footnote OPEN-E CVS SCSI disk OPEN-E-CVS...
  • Page 155 Example For an OPEN-V CVS volume with capacity = 49 MB: # of cylinders = 49 × 16/15 = 52.26 (rounded up to next integer) = 53 cylinders OPEN-3/8/9/E: The number of cylinders for a CVS LUSE volume = # of cylinders = (capacity (MB) specified by user) ×...
  • Page 156: Solaris

    Solaris This appendix provides information about supported emulations and device type specifications. Some parameters might not be relevant to your array. Consult your HP representative for information about supported configurations for your system. Supported emulations HP recommends using OPEN-V as the emulation for better performance and features that may not be supported with the legacy emulations (OPEN-[389LE]).
  • Page 157 Table 47 Emulation specifications (Solaris) (continued) Emulation Category Product name Blocks Sector size # of Heads Sectors Capacity (512 bytes) (bytes) cylinders per track OPEN-8 CVS SCSI disk OPEN-8-CVS Footnote Footnote Footnote OPEN-9 CVS SCSI disk OPEN-9-CVS Footnote Footnote Footnote OPEN-E CVS SCSI disk OPEN-E-CVS...
  • Page 158 Example For an OPEN-V CVS volume with capacity = 49 MB: # of cylinders = 49 × 16/15 = 52.26 (rounded up to next integer) = 53 cylinders OPEN-3/8/9/E: The number of cylinders for a CVS LUSE volume = # of cylinders = (capacity (MB) specified by user) ×...
  • Page 159: IBM AIX

    IBM AIX This appendix provides information about supported emulations and device type specifications. Some parameters might not be relevant to your array. Consult your HP representative for information about supported configurations for your system. Supported emulations HP recommends using OPEN-V as the emulation for better performance and features that may not be supported with the legacy emulations (OPEN-[389LE]).
  • Page 160 Table 49 Emulation specifications (IBM AIX) (continued) Emulation Category Product name Blocks Sector size # of Heads Sectors Capacity (512 bytes) (bytes) cylinders per track OPEN-8 CVS SCSI disk OPEN-8-CVS Note 5 Footnote Footnote OPEN-9 CVS SCSI disk OPEN-9-CVS Note 5 Footnote Footnote OPEN-E CVS...
  • Page 161: Disk Parameters By Emulation Type

    Example For an OPEN-V CVS volume with capacity = 49 MB: # of cylinders = 49 × 16/15 = 52.26 (rounded up to next integer) = 53 cylinders OPEN-3/8/9/E: The number of cylinders for a CVS LUSE volume = # of cylinders = (capacity (MB) specified by user) ×...
  • Page 162 Table 50 OPEN-3 parameters by emulation type (IBM AIX) (continued) Emulation Type Parameter OPEN-3 OPEN-3*n (n=2 OPEN-3 CVS OPEN-3 CVS*n to 36) (n=2 to 36) a partition size Set optionally Set optionally Set optionally Set optionally b partition size Set optionally Set optionally Set optionally Set optionally...
  • Page 163 Table 51 OPEN-8 parameters by emulation type (IBM AIX) (continued) Emulation Type Parameter OPEN-8 OPEN-8*n (n=2 to 36) OPEN-8 CVS OPEN-8 CVS*n (n=2 to 36) Number of all cylinders 9,966 9,966*n Depends on configuration of ... Depends on configuration of ... Number of rotations of the disk 6,300 6,300 6,300...
  • Page 164 Table 51 OPEN-8 parameters by emulation type (IBM AIX) (continued) Emulation Type Parameter OPEN-8 OPEN-8*n (n=2 OPEN-8 CVS OPEN-8 CVS*n to 36) (n=2 to 36) b partition fragment size 1,024 1,024 1,024 1,024 c partition fragment size 1,024 1,024 1,024 1,024 d partition fragment size 1,024...
  • Page 165 Table 52 OPEN-9 parameters by emulation type (IBM AIX) (continued) Emulation Type Parameter OPEN-9 OPEN-9*n (n=2 to 36) OPEN-9 CVS OPEN-9 CVS*n (n=2 to 36) c partition size 14,423,040 14,423,040*n Depends on configuration of ... Depends on configuration of ... d partition size Set optionally Set optionally Set optionally...
  • Page 166 Table 53 OPEN-E parameters by emulation type (IBM AIX) (continued) Emulation Type Parameter OPEN-E OPEN-E*n (n=2 to OPEN-E CVS OPEN-E CVS*n (n=2 to 36) Number of rotations of the disk 6,300 6,300 6,300 6,300 a partition offset (Starting block Set optionally Set optionally Set optionally Set optionally...
  • Page 167: Byte Information Table

    Table 53 OPEN-E parameters by emulation type (IBM AIX) (continued) Emulation Type Parameter OPEN-E OPEN-E*n (n=2 to OPEN-E CVS OPEN-E CVS*n (n=2 to 36) e partition fragment size 1,024 1,024 1,024 1024 f partition fragment size 1,024 1,024 1,024 1,024 g partition fragment size 1,024 1,024...
  • Page 168 Table 54 Byte information (IBM AIX) Category LU product name Number of bytes per Inode OPEN-3 OPEN-3 OPEN-3*2 to OPEN-3*28 4096 OPEN-3*29 to OPEN-3*36 8192 OPEN-8 OPEN-8 OPEN-8*2 to OPEN-8*9 4096 OPEN-8*10 to OPEN-8*18 8192 OPEN-8*19 to OPEN-8*36 16384 OPEN-9 OPEN-9 OPEN-9*2 to OPEN-9*9 4096 OPEN-9*10 to OPEN-9*18...
  • Page 169: Physical Partition Size Table

    Physical partition size table Table 55 Physical partition size (IBM AIX) Category LU product name Physical partition size in megabytes OPEN-3 OPEN-3 OPEN-3*2 to OPEN-3*3 OPEN-3*4 to OPEN-3*6 OPEN-3*7 to OPEN-3*13 OPEN-3*14 to OPEN-3*27 OPEN-3*28 to OPEN-3*36 OPEN-8 OPEN-8 OPEN-8*2 OPEN-8*3 to OPEN-8*4 OPEN-8*5 to OPEN-8*9 OPEN-8*10 to OPEN-8*18...
  • Page 170 Table 55 Physical partition size (IBM AIX) (continued) Category LU product name Physical partition size in megabytes OPEN-x*n CVS 35 to 1800 1801 to 2300 2301 to 7000 7001 to 16200 16201 to 32400 32401 to 64800 64801 to 126000 126001 to 259200 259201 to 518400 518401 and higher 1024...
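    The value from this table is supplied when the AIX volume group is created on the LU. A minimal sketch (the volume group name, hdisk number, and 64 MB partition size are hypothetical placeholders; pick the physical partition size that Table 55 lists for your LU capacity):

        # Create a volume group on the XP LU with a 64 MB physical partition size
        mkvg -y vgxp01 -s 64 hdisk4

        # Confirm the physical partition size and free partitions
        lsvg vgxp01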
  • Page 171: D Using Veritas Cluster Server To Prevent Data Corruption

    D Using Veritas Cluster Server to prevent data corruption Using VCS I/O fencing By issuing SCSI-3 Persistent Reserve commands, VCS provides an I/O fencing feature that prevents data corruption if cluster communication stops. To accomplish I/O fencing, each VCS node registers reservation keys for each disk in an imported disk group.
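    On a Linux cluster node, the registered keys and the active reservation on an XP LUN can be inspected with the generic sg_persist utility from sg3_utils (shown here only as an illustration; the device name is a placeholder and this is not a command taken from this guide):

        # List the SCSI-3 persistent reservation keys registered on the LUN
        sg_persist --in --read-keys --device=/dev/sdc

        # Show the current reservation holder and reservation type
        sg_persist --in --read-reservation --device=/dev/sdc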
  • Page 172 Figure 14 Nodes and ports 172 Using Veritas Cluster Server to prevent data corruption...
  • Page 173 Table 56 Port 1A Key Registration Entries (the rotated column headings include the reserve entry, registration key, LU - Disk Group, port, and visible entries; the individual key values are not legible in this extraction)...
  • Page 174: E Reference Information For The HP System Administration Manager (SAM)

    E Reference information for the HP System Administration Manager (SAM) The HP System Administration Manager (SAM) is used to perform HP-UX system administration functions, including: Setting up users and groups Configuring the disks and file systems Performing auditing and security activities Editing the system kernel configuration This appendix provides instructions for: Using SAM to configure the disk devices...
  • Page 175: Setting The Maximum Number Of Volume Groups Using SAM

    To configure the newly-installed disk array devices: Select Disks and File Systems, then select Disk Devices. Verify that the new disk array devices are displayed in the Disk Devices window. Select the device to configure, select the Actions menu, select Add, and then select Using the Logical Volume Manager.
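    When SAM is not available, the same configuration can usually be done from the HP-UX command line with standard LVM commands. This is a minimal sketch only; the device files, volume group name, minor number, and sizes are hypothetical placeholders and are not taken from this guide:

        # Create a physical volume on the new array device
        pvcreate /dev/rdsk/c12t0d1

        # Create the volume group directory and group file, then the volume group and a logical volume
        mkdir /dev/vgxp01
        mknod /dev/vgxp01/group c 64 0x010000
        vgcreate /dev/vgxp01 /dev/dsk/c12t0d1
        lvcreate -L 1024 -n lvol1 /dev/vgxp01

        # Build and mount a file system on the logical volume
        newfs -F vxfs /dev/vgxp01/rlvol1
        mkdir /mnt/xp01
        mount /dev/vgxp01/lvol1 /mnt/xp01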
  • Page 176: F HP Clustered Gateway Deployments

    F HP Clustered Gateway deployments Windows The HP Clustered Gateway and HP Scalable NAS software both use HP PolyServe software as their underlying clustering technology and have similar requirements for the XP disk array. Both have been tested with XP disk arrays, and this appendix details configuration requirements specific to XP deployments using HP PolyServe software on Windows.
  • Page 177: Linux

    them as described previously, you can then use them to create dynamic volumes and file systems, mount them on the cluster nodes, and assign drive letters or junction points. For details on importing and deporting disks, dynamic volume creation and configuration, and file system creation and configuration, see the HP StorageWorks Scalable NAS File Serving Software Administration Guide.
  • Page 178: Snapshots

    Snapshots To take hardware snapshots on XP storage arrays, you must install the latest version of firmware on the array controllers. The latest versions of XP Business Copy and XP Snapshot must also be installed on the array controllers. On the servers, you must install and configure the latest version of RAID Manager, with both local and remote HORCM instances running on each server, and with all file system LUNs (P-VOLs) controlled by the local instance and all snapshot V-VOLs (S-VOLs) controlled by the remote instance.
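    Once both HORCM instances are running, the snapshot pairs themselves are driven with the standard RAID Manager pair commands. A minimal sketch (the group name VG01 and the instance number 0 are hypothetical placeholders, not values from this guide):

        # From the node controlling the P-VOLs, create the pair and wait until it reaches PAIR status
        paircreate -g VG01 -vl -IM0
        pairevtwait -g VG01 -s pair -t 600 -IM0

        # Split the pair to take the point-in-time snapshot, then check status
        pairsplit -g VG01 -IM0
        pairdisplay -g VG01 -fcx -IM0

        # Resynchronize the pair before the next snapshot cycle
        pairresync -g VG01 -IM0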
  • Page 179: Glossary

    Glossary AL-PA Arbitrated loop physical address. A 1-byte value that the arbitrated loop topology uses to identify the loop ports. This value becomes the last byte of the address identifier for each public port on the loop. command device A volume in the disk array that accepts Continuous Access, Business Copy, or P9000 for Business Continuity Manager control operations, which are then executed by the array.
  • Page 180 port A physical connection that allows data to pass between a host and a disk array. R-SIM Remote service information message. Service information message. SNMP Simple Network Management Protocol. A widely used network monitoring and control protocol. Data is passed from SNMP agents, which are hardware and/or software processes reporting activity in each network device (hub, router, bridge, and so on) to the workstation console used to oversee the network.
  • Page 181: Index

    Index logical, not recognized by host, LUSE device parameters, auto-mount parameters, setting, mounting, parameters change using SMIT command line, client, verifying operations, changing, clustering, 17, 43, 48, 50, 56, 63, 71, 81, 97, 105, changing using AIX command line, 118 show using AIX command line, command device(s) partitioning, 84,...
  • Page 182 configuring, 17, 42, 63, 71, 81, 91, 105, 118 interface, Fibre Channel, Emulex, installation, verifying, JNI, Journaled File Systems, creating, 111 multiple with shared LUNs, Oracle, QLogic, labeling devices, supported, LDEV(s) verify driver installation, 83, one designated as a command device, verifying configuration, Linux FCSA(s)
  • Page 183 partitions setting maximum number, creating, groups, assigning new device, path(s) logical, adding, auto-mount parameters, defining, 15, 31, 39, 53, 58, 68, 87, 101, 115 cannot be created, SCSI, creating, worksheet, file systems, 27, physical volume(s) mounting, creating, physical creating groups, cannot be created, port(s) creating,...
