Dell PowerVault MD3200i Hardware Installation And Troubleshooting Manual

Dell PowerVault MD3200i and
MD3220i Storage Arrays With
Microsoft Windows Server
Failover Clusters

Hardware Installation and Troubleshooting Guide


Summary of Contents for Dell PowerVault MD3200i

  • Page 1: Hardware Installation

    Dell PowerVault MD3200i and MD3220i Storage Arrays With Microsoft Windows Server Failover Clusters Hardware Installation and Troubleshooting Guide...
  • Page 2: Notes And Cautions

    Other trademarks and trade names may be used in this document to refer to either the entities claiming the marks and names or their products. Dell Inc. disclaims any proprietary interest in trademarks and trade names other than its own.
  • Page 3: Table Of Contents

    Contents: Introduction ... Overview ... Cluster Solution ...
  • Page 4: ... Connecting a PowerEdge Cluster to Multiple PowerVault MD3200i or MD3220i Storage Systems ...
  • Page 5: Introduction

    It provides information and specific configuration tasks that enable you to deploy the shared storage for your cluster. For more information on deploying your cluster, see the Dell Failover Clusters with Microsoft Windows Server Installation and Troubleshooting Guide at support.dell.com/manuals.
  • Page 6: Cluster Solution

    Cluster Solution Your iSCSI cluster supports a minimum of two nodes and a maximum of sixteen nodes, and provides the following features: • Internet Small Computer System Interface (iSCSI) technology • High availability of system services and resources to network clients •...
  • Page 7: Cluster Nodes

    TCP/IP Offload Engine (TOE) NICs are also supported for iSCSI access traffic. For a list of recommended operating systems, hardware components, and driver or firmware versions for your Dell Windows Server Failover Cluster, see the Dell Cluster Configuration Support Matrices at dell.com/ha.
  • Page 8: Cluster Storage

    Power and cooling requirements: Two integrated hot-swappable power supply/cooling fan modules. Physical disks: At least two physical disks in the PowerVault MD3200i or MD3220i RAID enclosure. Multiple clusters and stand-alone systems: In a switch-attached configuration, clusters and stand-alone systems can share one or more PowerVault MD3200i or MD3220i systems.
  • Page 9: Cluster Storage Management Software

    Dell PowerVault Modular Disk Storage Manager The software runs on the management station or any host attached to the array to centrally manage the PowerVault MD3200i and MD3220i RAID enclosures. You can use Dell PowerVault Modular Disk Storage Manager (MDSM) to perform tasks such as creating or managing RAID arrays, binding virtual disks, and downloading firmware.
  • Page 10 Advanced Features Advanced features for the PowerVault MD3200i and MD3220i RAID storage systems include: • Snapshot Virtual Disk—Captures point-in-time images of a virtual disk for backup, testing, or data processing without affecting the contents of the source virtual disk. •...
  • Page 11: Supported Cluster Configurations

    Supported Cluster Configurations Figure 1-1. Direct-Attached Cluster Configuration (the figure shows Node 1 through Node N on the corporate, public, or private network, attached directly to the storage array through MD32xxi RAID controller module 0 and RAID controller module 1). NOTE: The configuration can have up to four nodes (N is either 2, 3, or 4). The nodes can be: one cluster •...
  • Page 12 Figure 1-2. Redundant Network-Attached Cluster Configuration (the figure shows up to 32 hosts on the corporate, public, or private network, attached to the storage array through MD32xxi RAID controller module 0 and RAID controller module 1). NOTE: The configuration can have up to 32 nodes. The nodes can be: one cluster (up to 16 nodes) •...
  • Page 13: Other Documents You May Need

    Warranty information may be included within this document or as a separate document. NOTE: To configure Dell blade system modules in a Dell PowerEdge Cluster, see the Using Dell Blade Servers in a Dell PowerEdge High Availability Cluster document at support.dell.com/manuals.
  • Page 14 • The Dell PowerVault MD Getting Started Guide provides an overview of setting up and cabling your storage array. • The Dell PowerVault MD3200i and MD3220i Storage Arrays Deployment Guide provides installation and configuration instructions to configure the storage system for initial use.
  • Page 15: Cabling Your Cluster Hardware

    Figure 2-1 and Figure 2-2 illustrate recommended methods for power cabling of a cluster solution consisting of two Dell PowerEdge systems and one storage system. To ensure redundancy, the primary power supplies of all the components are grouped onto one or two circuits and the redundant power supplies are grouped onto a different circuit.
  • Page 16 Figure 2-1. Power Cabling Examples With One Power Supply in the PowerEdge Systems (the figure shows MD32xxi RAID controller modules 0 and 1, with the primary power supplies on one AC power strip or AC PDU [not shown] and the redundant power supplies on another AC power strip or AC PDU [not shown]).
  • Page 17 Figure 2-2. Power Cabling Example With Two Power Supplies in the PowerEdge Systems (the figure shows MD32xxi RAID controller modules 0 and 1, with the primary power supplies on one AC power strip or AC PDU [not shown] and the redundant power supplies on another AC power strip or AC PDU [not shown]).
  • Page 18: Cabling Your Public And Private Networks

    Cabling Your Public and Private Networks The network adapters in the cluster nodes provide at least two network connections for each node. These connections are described in Table 2-1. Table 2-1. Network Connections (columns: Network Connection, Description). Public Network: All connections to the client LAN. At least one public network must be configured for mixed mode (public mode and private mode) for private network failover.
  • Page 19: Cabling Your Public Network

    Figure 2-3. Example of Network Cabling Connection (the figure shows the public network, the private network adapters, and the private network link between cluster node 1 and cluster node 2). Cabling Your Public Network Any network adapter supported by a system running TCP/IP may be used to connect to the public network segments. You can install additional network adapters to support additional public network segments or to provide redundancy in the event of a faulty primary network adapter or switch port.
  • Page 20: Using Dual-Port Network Adapters For Your Private Network

    Table 2-2. Private Network Hardware Components and Connections (columns: Method, Hardware Components, Connection). Method: Network switch. Hardware Components: Gigabit or 10 Gigabit Ethernet network adapters and switches. Connection: Depending on the hardware, connect the CAT5e or CAT6 cables, the multimode optical cables with Local Connectors (LCs), or the twinax cables from the network adapters in the nodes to a switch.
  • Page 21: Cabling The Storage Systems

    Cabling the Storage Systems This section provides information for connecting your cluster to a storage system. NOTE: To configure Dell blade system modules in a Dell PowerEdge Cluster, see the Using Dell Blade Servers in a Dell PowerEdge High Availability Cluster document at support.dell.com/manuals. NOTE:...
  • Page 22 Install a network cable from the cluster node 1 iSCSI NIC 2 (or NIC port 2) to the RAID controller module 1 port In-1. 2 Connect cluster node 2 to the storage system: Install a network cable from the cluster node 2 iSCSI NIC 1 (or NIC port 1) to the RAID controller module 1 port In-0.
  • Page 23 Figure 2-4. Direct-Attached Cluster Configuration (the figure shows the public network, the private network, cluster node 1 and cluster node 2, the Ethernet management ports (2), the SAS out ports (2), and MD32xxi RAID controller modules 0 and 1). NOTE: The SAS out port provides SAS connection for cabling to MD1200 or MD1220 expansion enclosure(s).
  • Page 24: Cabling The Cluster In Network-Attached Configuration

    Cabling the Cluster in Network-Attached Configuration In the network-attached configuration, each cluster node attaches to the storage system through redundant, industry-standard 1 Gb Ethernet IP storage area network (SAN) switches, using either one dual-port iSCSI NIC or two single-port iSCSI NICs. If a component in the storage path fails, such as an iSCSI NIC, a cable, a switch, or a storage controller, the multipath software automatically re-routes the I/O requests to the alternate path so that the storage array continues to operate without interruption.
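The redundancy claim above can be illustrated with a minimal Python sketch (not part of any Dell tooling). It assumes a simplified cabling model for one node: each iSCSI NIC cables to one IP SAN switch, and each switch cables to a port on both RAID controller modules. The component names are hypothetical.

```python
# Hypothetical path model for one cluster node: each iSCSI NIC cables to one
# IP SAN switch, and each switch cables to a port on both RAID controller modules.
paths = [
    ("iscsi_nic_1", "san_switch_1", "raid_controller_0"),
    ("iscsi_nic_1", "san_switch_1", "raid_controller_1"),
    ("iscsi_nic_2", "san_switch_2", "raid_controller_0"),
    ("iscsi_nic_2", "san_switch_2", "raid_controller_1"),
]

components = {c for path in paths for c in path}

def surviving_paths(failed):
    """Paths that remain usable after a single component failure."""
    return [p for p in paths if failed not in p]

for component in sorted(components):
    remaining = surviving_paths(component)
    # At least one path survives any single failure, which is what allows the
    # multipath software to re-route I/O without interrupting the storage array.
    assert remaining, f"no path left after losing {component}"
    print(f"{component} failed -> {len(remaining)} path(s) remain")
```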
  • Page 25 Repeat step a and step b for each additional cluster node. 3 Repeat step 2 to connect additional clusters or stand-alone systems to the iSCSI network. Cabling Your Cluster Hardware...
  • Page 26 Figure 2-5. Network-Attached Cluster Configuration (the figure shows the public network, the private network, 2 to n cluster nodes, an IP SAN built on dual Gigabit Ethernet switches (2), the SAS out ports (2), the Ethernet management ports (2), and MD32xxi RAID controller modules 0 and 1).
  • Page 27: Connecting a PowerEdge Cluster to Multiple PowerVault MD3200i or MD3220i Storage Systems

    Connecting a PowerEdge Cluster to Multiple PowerVault MD3200i or MD3220i Storage Systems You can increase your cluster storage capacity by attaching multiple storage systems to your cluster using redundant network switches. The PowerEdge cluster systems support configurations with multiple PowerVault MD3200i or MD3220i storage systems attached to clustered systems.
  • Page 28 Figure 2-6. Network-Attached Cluster Configuration With Multiple Storage Arrays (the figure shows the public network, the private network, 2 to n cluster nodes, an IP SAN built on dual Gigabit Ethernet switches (2), storage array 1 and storage array 2, and MD32xxi RAID controller modules 0 and 1).
  • Page 29 When attaching multiple PowerVault MD3200i and MD3220i storage systems to your cluster, the following rules apply: • A maximum of four PowerVault MD3200i and MD3220i storage systems per cluster. • The shared storage systems and firmware must be identical. Using dissimilar storage systems and firmware for your shared storage is not supported.
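The two rules above lend themselves to a quick sanity check. The sketch below is illustrative only: the array names, model strings, and firmware placeholders are made up, and real values would come from MD Storage Manager.

```python
# Hypothetical inventory of attached storage arrays; model and firmware values
# would come from MD Storage Manager, not from this script.
arrays = [
    {"name": "array1", "model": "MD3200i", "firmware": "07.xx.xx.xx"},
    {"name": "array2", "model": "MD3200i", "firmware": "07.xx.xx.xx"},
]

MAX_ARRAYS_PER_CLUSTER = 4  # rule stated in the manual

def check_arrays(arrays):
    """Flag violations of the two multi-array rules described above."""
    problems = []
    if len(arrays) > MAX_ARRAYS_PER_CLUSTER:
        problems.append(f"{len(arrays)} arrays attached; maximum is {MAX_ARRAYS_PER_CLUSTER}")
    if len({(a["model"], a["firmware"]) for a in arrays}) > 1:
        problems.append("arrays do not share the same model/firmware combination")
    return problems

issues = check_arrays(arrays)
print("OK" if not issues else "; ".join(issues))
```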
  • Page 31: Preparing Your Systems For Clustering

    NOTE: For more information on step 3 through step 7 and step 10 through step 12, see the "Preparing your systems for clustering" section of the Dell Failover Clusters with Microsoft Windows Server 2008 Installation and Troubleshooting Guide at support.dell.com/manuals.
  • Page 32 5 Configure each server node as a member server in the same Windows Active Directory Domain. NOTE: You can configure the cluster nodes as Domain Controllers. For more Dell Failover information, see the "Selecting a Domain Model" section of the Clusters with Microsoft Windows Server 2008 Installation and Troubleshooting Guide at support.dell.com/manuals.
  • Page 33: Installation Overview

    Installation Overview Each node in your Dell Windows Server failover cluster must have the same release, edition, service pack, and processor architecture of the Windows Server operating system installed. For example, all nodes in your cluster may be configured with Windows Server 2008 R2, Enterprise x64 Edition.
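Because every node must run the same release, edition, service pack, and processor architecture, it can help to compare a small inventory of the nodes before forming the cluster. The following Python sketch uses hypothetical node data; in practice the values would be collected from each server (for example, from systeminfo output).

```python
# Hypothetical inventory of cluster node operating systems; in practice this
# information would be gathered from each node rather than hard-coded.
nodes = {
    "node1": {"release": "Windows Server 2008 R2", "edition": "Enterprise",
              "service_pack": "SP1", "arch": "x64"},
    "node2": {"release": "Windows Server 2008 R2", "edition": "Enterprise",
              "service_pack": "SP1", "arch": "x64"},
}

def check_uniform(nodes):
    """Flag any attribute that is not identical across all cluster nodes."""
    mismatches = []
    for key in ("release", "edition", "service_pack", "arch"):
        values = {info[key] for info in nodes.values()}
        if len(values) > 1:
            mismatches.append((key, values))
    return mismatches

problems = check_uniform(nodes)
if problems:
    for key, values in problems:
        print(f"Mismatch in {key}: {sorted(values)}")
else:
    print("All nodes report the same release, edition, service pack, and architecture.")
```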
  • Page 34: Installing the iSCSI NICs

    For a list of recommended operating systems, hardware components, and driver or firmware versions for your Dell Windows Server Failover Cluster, see the Dell Cluster Configuration Support Matrices at dell.com/ha. Enabling TOE NIC The purpose of TOE is to offload the processing of TCP/IP packets from the system processor(s) to the NIC.
  • Page 35: Installing the Microsoft iSCSI Software Initiator

    Microsoft iSCSI Initiator is installed natively on Windows Server 2008. Installing and Configuring the Storage Management Software The PowerVault MD3200i and MD3220i storage software installer provides features that include the core software, providers, and optional utilities. The core software feature includes the host-based storage agent, multipath driver, and MDSM application used to configure, manage and monitor the storage array solution.
  • Page 36 – Full (recommended)—This package installs core software, providers, and utilities. It includes the necessary host-based storage agent, multipath driver, MD Storage Manager, providers, and optional utilities. – Host Only—This package includes the host-based storage agent, multipath drivers, and optional utilities required to configure the host. –...
  • Page 37: Configuring The Shared Storage System

    Configuring the Shared Storage System Before you begin configuring iSCSI, you must fill out the "iSCSI Configuration Worksheet" on page 73. Gathering this type of information about your network prior to starting the configuration steps helps you complete the process faster. Terminology The following table outlines the terminology used in the iSCSI configuration steps later in this section.
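The worksheet on page 73 is a paper form, but the same information can be captured in a small data structure before the configuration steps begin. The sketch below is only an illustration; the field names are assumptions and are not produced by any Dell tool.

```python
from dataclasses import dataclass, field

@dataclass
class IscsiWorksheet:
    """Rough equivalent of the iSCSI configuration worksheet for one host."""
    host_name: str
    host_iscsi_ports: dict = field(default_factory=dict)   # port name -> IP address
    controller_ports: dict = field(default_factory=dict)   # controller/port -> IP address
    management_ports: dict = field(default_factory=dict)   # controller -> IP address
    target_chap_secret: str = ""
    mutual_chap_secret: str = ""

    def missing_fields(self):
        """List anything still blank, so gaps are found before configuration starts."""
        gaps = []
        if not self.host_iscsi_ports:
            gaps.append("host iSCSI port addresses")
        if not self.controller_ports:
            gaps.append("storage array iSCSI port addresses")
        if not self.management_ports:
            gaps.append("management port addresses")
        return gaps

sheet = IscsiWorksheet(host_name="cluster-node-1")
print("Still to gather:", ", ".join(sheet.missing_fields()))
```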
  • Page 38 Using Internet Storage Naming Service Server Internet Storage Naming Service Server (iSNS) eliminates the need to manually configure each individual storage array with a specific list of initiators and target IP addresses. Instead, iSNS automatically discovers, manages, and configures all iSCSI devices in your environment. For more information on iSNS, including installation and configuration, go to microsoft.com.
  • Page 39 2 For Windows, click StartAll Programs Dell MD Storage SoftwareModular Disk Configuration Utility. 3 For Linux, click the MDCU icon on the desktop or navigate to the /opt/dell/mdstoragesoftware/mdconfigurationutility directory in a terminal window and run the MDCU.
  • Page 40 storage array has a single controller (simplex) or dual controllers (duplex) and whether to use IPv4 or IPv6 protocol to communicate with the management port of the storage array. The next screen displays a list of the iSCSI-based MD storage arrays that were discovered based on the discovery process selected in step 3.
  • Page 41 13 In the CHAP Configuration screen, select the CHAP method and click Next. For more information on CHAP see "Understanding CHAP Authentication" on page 42. 14 In the Summary screen, review the information that you entered for the storage array. 15 Click Apply to save the changes to the storage array.
  • Page 42 Click Next if you want to enter the login information for another controller or click Apply to commit the log in information. 19 In the Connect to Additional Arrays screen, select if you want to connect to another storage array. To connect to another storage array, repeat the steps above starting from step d.
  • Page 43 NOTE: If you elect to use CHAP authentication, you must configure it on both the storage array (using MD Storage Manager) and the host server (using the iSCSI initiator) before preparing virtual disks to receive data. If you prepare disks to receive data before you configure CHAP authentication, you will lose visibility to the disks after CHAP is configured.
  • Page 44 NOTE: If you choose to configure mutual CHAP authentication, you must first configure target CHAP. Remember, in terms of iSCSI configuration, the term target always refers to the storage array. Configuring Target CHAP Authentication on the Storage Array 1 From MD Storage Manager, click the iSCSI tab and then Change Target Authentication.
  • Page 45 Configuring Mutual CHAP Authentication on the Storage Array The initiator secret must be unique for each host server that connects to the storage array and must not be the same as the target CHAP secret. 1 From MD Storage Manager, click on the iSCSI tab, then select Enter Mutual Authentication Permissions.
  • Page 46 6 Under Target Portals, click Add and re-enter the IP address or DNS name of the iSCSI port on the storage array (removed above). 7 Click Advanced and set the following values on the General tab: • Local Adapter: Must always be set to Microsoft iSCSI Initiator. •...
  • Page 47 • Source IP: The source IP address of the host server you want to connect from. Target Portal: Select the iSCSI port on the storage array controller • that you want to connect to. • Data Digest and Header Digest: Optionally, you can specify that a digest of data or header information be compiled during transmission to assist in troubleshooting.
  • Page 48 Controller 1: IP: 192.168.128.102 Subnet Mask: 255.255.255.0 NOTE: The management station you are using must be configured for network communication to the same IP subnet as the PowerVault MD3200i or MD3220i iSCSI host ports. 1 Establish an iSCSI session to the MD3200i or MD3220i RAID storage array.
  • Page 49: Click Next

    4 Select the relevant option in the Do you plan to use the storage partitions in this storage array? field and click Next. The Specify Host Port Identifiers window is displayed. NOTE: Select Yes if your cluster shares the array with other clustered or stand-alone system(s), and No otherwise.
  • Page 50 Creating Disk Groups and Virtual Disks In some cases, the virtual disks may have been bound when the system was shipped. However, it is important that you install the management software and verify that the desired virtual disk configuration exists. You can manage your virtual disks remotely using PowerVault Modular Disk Storage Manager.
  • Page 51 2 Click Next. The Disk Group Name and Physical Disk Selection window is displayed. 3 Type a name (up to 30 characters) for the disk group in Disk Group Name field. 4 Select the appropriate configuration method of Physical Disk selection from the following: –...
  • Page 52 – To create a virtual disk from unconfigured capacity in the storage array—On the Logical tab, select an Unconfigured Capacity node and select Virtual Disk Create. Alternatively, you can right-click the Unconfigured Capacity node and select Create Virtual Disk from the pop-up menu.
  • Page 53 8 Select the appropriate Preferred RAID controller module. For more information on how to create disk groups and virtual disks, see the Dell PowerVault Modular Disk Storage Manager User’s Guide at support.dell.com/manuals. It is recommended that you create at least one virtual disk for each application.
  • Page 54: Troubleshooting Tools

    Mappings pane in the Mappings tab are updated to display the mappings. Troubleshooting Tools The Dell PowerVault MDSM establishes communication with each managed array and determines the current array status. When a problem occurs on a storage array, the MDSM provides several ways to troubleshoot the problem.
  • Page 55 • Detail view—Shows details about a selected event. To view the event log: 1 In the Array Management window, select Advanced Troubleshooting View Event Log. The Event Log is displayed. By default, the summary view is displayed. 2 Select View Details to view the details of each selected log entry. A Detail pane is added to the event log that contains information about the log item.
  • Page 56 Storage Profile The storage array profile provides a description of all components and properties of the storage array. The storage array profile also provides the option to save the storage array profile information in a text file. You can also use the storage array profile as an aid during recovery or as an overview of the current configuration of the storage array.
  • Page 57 Click the Find button again to search for additional occurrences of the term. 5 To save the storage array profile, perform these steps: Click Save As. To save all sections of the storage array profile, select All Sections. To save information from particular sections of the storage array profile, select the Select Sections option and click on the check boxes corresponding to the sections that you want to save.
  • Page 58 Configuring the RAID Level for the Shared Storage Subsystem The virtual disks in your shared storage subsystem must be configured into disk groups or virtual disks using the Dell PowerVault MDSM software. All virtual disks, especially if they are used for the quorum resource, must be bound and must incorporate the appropriate RAID level to ensure high availability.
  • Page 59 Naming and Formatting Drives on the Shared Storage System Each virtual disk being created in the PowerVault Modular Disk Storage Manager becomes a physical disk in Windows Disk Management. For each physical disk, perform the following: • Write the disk signature •...
  • Page 60 5 In the dialog box, create a partition with the size of the entire drive (the default) and then click OK. NOTE: A virtual disk that is mapped or assigned from the storage system to a cluster node(s) is represented as a physical disk within the Windows operating system on each node.
  • Page 61 For instructions about this process, see the Premium Feature Activation card that shipped along with your Dell PowerVault MD3200i or MD3220i storage system. These premium features increase the high availability for your cluster solution.
  • Page 62 Snapshot Virtual Disk Snapshot Virtual Disk captures point-in-time images of a virtual disk for backup, testing, or data processing without affecting the contents of the source virtual disk. You can use either Simple Path or Advanced Path to create a snapshot for your cluster disk. The Snapshot Virtual Disk can be mapped to the primary node (the node owning the source disk) or the secondary node (the node not owning the source disk) for backup, testing, or data processing.
  • Page 63: Configuring A Failover Cluster

    NOTE: For a cluster configuration with multiple Snapshot Virtual Disks, each virtual disk must be mapped to the node owning the associated source disk first. The primary node for a Snapshot Virtual Disk may not be the primary node for another Snapshot Virtual Disk.
  • Page 64 For more information on deploying your cluster, see the Dell Failover Clusters with Microsoft Windows Server 2008 Installation and Troubleshooting Guide at support.dell.com/manuals. Preparing Your Systems for Clustering...
  • Page 65: A Troubleshooting

    Troubleshooting This appendix provides troubleshooting information for your cluster configurations. Table A-1 describes general cluster problems you may encounter and the probable causes and solutions for each problem. Table A-1. General Cluster Troubleshooting (columns: Problem, Probable Cause, Corrective Action). Problem: The nodes cannot access the storage system. Probable Cause: The storage system is not ... Corrective Action: Ensure that the cables are ...
  • Page 66 (continued) Table A-1. General Cluster Troubleshooting Problem Probable Cause Corrective Action One of the nodes takes The node-to-node Check the network cabling. a long time to join the network has failed due to Ensure that the node-to-node cluster. a cabling or hardware interconnection and the public failure.
  • Page 67 Cluster installation. about assigning the network IPs, see "Assigning Static IP Addresses to Your Cluster Resources and Components" in the Dell Failover Clusters with Microsoft Windows Server 2008 Installation and Troubleshooting Guide. The private (point-to- Ensure that all systems are...
  • Page 68 (continued) Table A-1. General Cluster Troubleshooting Problem Probable Cause Corrective Action Unable to add a node The new node cannot Ensure that the new cluster node to the cluster. access the shared disks. can enumerate the cluster disks using Windows Disk Administration.
  • Page 69 (continued) Table A-1. General Cluster Troubleshooting Problem Probable Cause Corrective Action Virtual Disk Copy The Virtual Disk Copy To perform a Virtual Disk Copy operation fails. operation uses the cluster operation on the cluster share disk as the source disk. disk, create a snapshot of the disk, and then perform a Virtual Disk Copy of the snapshot...
  • Page 71: B Cluster Data Form

    Cluster Data Form You can attach the following form in a convenient location near each cluster node or rack to record information about the cluster. Use the form when you call for technical support. Table B-1. Cluster Configuration Information (fields include Cluster Information, Cluster Solution, and the cluster name and IP address ...)
  • Page 72 Table B-2. Cluster Node Configuration Information Node Name Service Tag Public IP Address Private IP Address Number Table B-3. Additional Network Information Additional Networks Table B-4. Storage Array Configuration Information Array Array Service Tag IP Address Number of Attached DAEs Virtual Disks Cluster Data From...
  • Page 73: iSCSI Configuration Worksheet

    iSCSI Configuration Worksheet IPv4 Settings. Host server: Mutual CHAP Secret. PowerVault MD32xxi storage array: Target CHAP Secret. Default addresses: management network ports 192.168.128.101 and 192.168.128.102; iSCSI ports In-0 192.168.130.101/.102, In-1 192.168.131.101/.102, In-2 192.168.132.101/.102, In-3 192.168.133.101/.102 (the .101 addresses belong to one RAID controller module and the .102 addresses to the other).
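The worksheet defaults follow a simple pattern: one /24 subnet per In-n port pair plus 192.168.128.x for the management ports, with .101 on one RAID controller module and .102 on the other. The Python sketch below just records that scheme for reference; treating .101 as controller 0 and .102 as controller 1 is an assumption based on the management-port example on page 48.

```python
# Default IPv4 addressing from the iSCSI configuration worksheet.
# Assumption: the .101 addresses belong to RAID controller module 0 and the
# .102 addresses to RAID controller module 1 (matching the management-port
# example of 192.168.128.102 for controller 1 given earlier).
port_subnets = {
    "management": "192.168.128",
    "In-0": "192.168.130",
    "In-1": "192.168.131",
    "In-2": "192.168.132",
    "In-3": "192.168.133",
}

defaults = {
    "controller_0": {port: f"{subnet}.101" for port, subnet in port_subnets.items()},
    "controller_1": {port: f"{subnet}.102" for port, subnet in port_subnets.items()},
}

for controller, ports in defaults.items():
    for port, address in ports.items():
        print(f"{controller} {port}: {address}")
```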
  • Page 74: IPv6 Settings

    IPv6 Settings. Host server: Mutual CHAP Secret. PowerVault MD32xxi storage array: Target CHAP Secret. If you need additional space for more than one host server, use an additional sheet. Fill-in address fields follow for Host iSCSI port 1 and Host iSCSI port 2.
  • Page 75 iSCSI controller 0, In 3 FE80 : 0000 : 0000 : 0000 : ____ : ____ : ____ : ____ IP address ____ : ____ : ____ : ____ : ____ : ____ : ____ : ____ Routable IP address 1 ____ : ____ : ____ : ____ : ____ : ____ : ____ : ____ Routable IP address 2 ____ : ____ : ____ : ____ : ____ : ____ : ____ : ____...
  • Page 77: Index

    Index: advanced features (snapshot virtual disk, 10; virtual disk copy, 10); assigning drive letters and mount points, 58; cabling (cluster in direct-attached configuration, 21; cluster in network-attached ...); event log, 54; initial storage array setup, 38; installing (iSCSI NICs, 34; Microsoft iSCSI software initiator, 35); installing and configuring storage management software, 35; ...
  • Page 78 NIC teaming, 20 virtual disk copy, 63 operating system Windows Server 2003, Enterprise Edition installing, 33 installing, 33 PowerVault 22xS storage system clustering, 59 recovery guru, 55 snapshot virtual disk, 62 status icons, 57 storage profile, 56 supported cluster configurations, 11 troubleshooting general cluster, 65 Index...
