DS4000 Implementation Cookbook v1.0


Transcript of DS4000 Implementation Cookbook v1.0

Page 1: DS4000 Implementation Cookbook v1.0

DS4000 Implementation Cookbook

This cookbook is primarily for new DS4000 implementations and is intended as a personal productivity tool. It is not intended to be comprehensive, and is provided for guidance only, on an "as is" basis without warranty of any kind. Please be aware that its contents have not been certified by IBM. Business Partners are responsible for assuring their own technical solutions to customers.

For IBM and IBM Business Partner internal use only. IBM Confidential.

Version 1.0 dated 9/6/2005

Jodi Toft, Field Technical Sales Support
email: [email protected]

Page 2: DS4000 Implementation Cookbook v1.0


Prior to Installation

TOOLS: Please have the following tools: #2 Phillips screwdriver, label maker, null modem serial cable (this is a serial crossover cable), Ethernet cables (one for each controller in a DS4000 unit).

Field Tip: It can be very handy to have a small 8-port Ethernet switch. This will allow you to connect your laptop to both DS4000 controllers and multiple DS4000 units when doing the configuration.

Review:

• IBM TotalStorage FAStT Storage Manager Version 9 Installation and Support Guide from http://ssddom02.storage.ibm.com/techsup/webnav.nsf/support/disk or http://www-1.ibm.com/servers/storage/support/fastt/index.html
• IBM TotalStorage FAStT Best Practices Guide from www.ibm.com/redbooks

Power – Each DS4000 and EXP unit will require two power sources. The power cord is 125 V, 10 A, 2.8 m.

Rack Mounting – Review the documentation that comes with your rack cabinet.
• Maintain 15 cm (6 in.) of clearance around your controller unit for air circulation.
• Ensure that the room air temperature is below 35°C (95°F).
• Plan the controller unit installation starting from the bottom of the rack.
• Remove the rack doors and side panels to provide easier access during installation.
• Position the template to the rack so that the edges of the template do not overlap any other devices.
• Connect all power cords to electrical outlets that are properly wired and grounded.
• Take precautions to prevent overloading the electrical outlets when you install multiple devices in a rack.

Software – Make sure you download the latest firmware, NVSRAM, and drive code for the DS4000 storage system!

IMPORTANT: To be notified of important product updates, you must first register at the IBM Support and Download Web site: www-1.ibm.com/servers/storage/disk/ Perform the following steps to register at the IBM Support and Download Web site:

Page 3: DS4000 Implementation Cookbook v1.0


1. Click My Support in the Additional Support box on the right side of the DS4000 Support web page.
2. The Sign In window displays. Either enter your IBM ID and password in the sign-in area to sign in and proceed to step 6, or, if you are not currently registered with the site, click Register Now.
3. The My IBM Registration window Step 1 of 2 opens. Enter the appropriate information and click Continue to get to the second My IBM Registration window.
4. In the My IBM Registration window Step 2 of 2, enter the appropriate information and click Submit to register.
5. The My IBM Registration window opens with the following message: "Thank you for registering with ibm.com. Continue to explore ibm.com as a registered user." Click Edit Profile in the My IBM box on the right side of the window.
6. The My Support window opens. Click Add Products to add products to your profile.
7. Use the pull-down menus to choose the appropriate DS4000 storage server and expansion enclosures that you want to add to your profile.
8. To add the product to your profile, select the appropriate box or boxes next to the product names and click Add Product.
9. Once the product or products are added to your profile, click the Subscribe to Email folder tab.
10. Select Storage in the pull-down menu. Select "Please send these documents by weekly email" and select "Downloads and drivers" and "Flashes" to receive important information about product updates. Click Updates.
11. Click Sign Out to log out of My Support.

IP Requirements

• Obtain 2 IP addresses for each DS4000 storage subsystem: IP ________________________ IP ___________________
• Obtain 1 IP address for each SAN switch: IP _______________________________ IP _________________________

Host Information

• Please review the README.txt files for any particular OS limitations. Please understand these limitations before going any farther in this document.
• Verify the OS, SAN, and HBA interoperability matrix at: http://www.storage.ibm.com/disk/FAStT/supserver.htm
• Be careful not to put all the high-speed adapters on a single system bus; otherwise the computer bus becomes the performance bottleneck.
• OS: ________________ HBA ________________________
• OS: ________________ HBA ________________________
• OS: ________________ HBA ________________________
• OS: ________________ HBA ________________________

BladeCenter Fabric Support: Details for SAN attachment are in the IBM eServer BladeCenter interoperability guide.

• McData – Be sure to acquire the OPM (Optical Pass-thru Module) on the BladeCenter in order to attach to McData; otherwise no support is available for the ED5000 and ES1000, as they cannot operate in "open" mode.

McData announcement – McData is working with QLogic to enable their switches to run in McData mode in an effort to better support BladeCenter connectivity. With this feature, BladeCenter customers with the QLogic switching module can connect to a McData fabric without an outage to switch the McData fabric into Open Mode. Expect this functionality to GA in Q4/04.

Page 4: DS4000 Implementation Cookbook v1.0

Boot from SAN – At present, you can only boot from the Windows NT, Windows 2000, Red Hat Enterprise Linux, AIX, and Solaris operating systems. Boot support for NetWare and HP-UX is not available at this time.

** NEW in 9.12 **

• Starting with DS4000 Storage Manager (SM) host software version 9.12 or later, the Storage Manager client script window looks for files with the file type ".script" as the possible script command files. In previous versions of the DS4000 Storage Manager host software, the script window looks for the file type ".scr" instead.

• New DS4000 host-software installation wizard that automatically installs the appropriate host-software components depending on the selected server type (i.e., host or management station).

• Serviceability enhancement – the ability to extract SMART data for SATA-technology drives.

• Usability enhancement – Added a wizard to the Storage Manager SMclient to aid users with Storage Partitioning creation with hosts, adapters, and mappings. Added a Task Assistant to Storage Manager which helps guide users through common task wizards for both the Enterprise and Subsystem management tasks. Starting with DS4000 Storage Manager version 9.12, all of the host software packages are included in a single DS4000 Storage Manager host software installer wizard. During the execution of the wizard, the user can choose to install all or only certain software packages depending on the need for a given server.

** NEW in 9.14 (DS4800 ONLY) **

• The minimum version of controller firmware for the DS4800 is 06.14.xx.xx. The Storage Manager client can be used to manage all other DS4000 units.
• Microsoft Windows host attachment to the DS4800 requires the additional purchase of the IBM DS4800 Host Kit Option or Feature Code.
• There are no controller firmware packages of version 06.14.xx.xx for DS4000 storage subsystems other than the DS4800. Do not download this controller firmware on any other DS4000 storage subsystem.
• The DS4800 does not support attachment of the EXP700 or EXP500 drive expansion enclosures.
• Support for the FC/SATA intermix feature option on the DS4300 storage subsystem with the standard/base option and controller firmware version 06.12.xx.xx.
• Support for Veritas Volume Manager (DMP) version 4.0.

** NEW in 9.15 (DS4800 ONLY) **

• Version 06.15.11.xx dated 6/28/2005 – Enables support for controller cache memory above 1 GB for the DS4800 storage subsystems (M/T 1815, models 82A/H and 84A/H).

IMPORTANT NOTE: You cannot do a concurrent firmware download on the DS4800 from 6.14.xx to 6.15.xx (ALL I/Os must be stopped during the upgrade of the DS4800 controller firmware and NVSRAM).

Page 5: DS4000 Implementation Cookbook v1.0


DS4100 and EXP100 NOTES

• DO NOT use the tens digit (x10) setting on the EXP enclosures. Use only the ones digit (x1) setting to set unique enclosure IDs.
• EXP100 – IBM recommends 2 hot spares per EXP100 drive expansion enclosure: one in an even slot and the other in an odd slot.
• DS4100 – When configuring a DS4100, it is highly recommended to configure the 14 disks in the base frame to use the same controller. This will prevent I/O shipping and a possible degradation in performance.
• AIX version 4.3.3 and earlier are not supported with Storage Manager version 8.41 and EXP100 drawers.
• Booting from a DS4000 subsystem utilizing SATA drives for the boot image is supported but not recommended for performance reasons.

Storage controller firmware versions for SATA: DS4100 Standard: SNAP_282X_06120302
Storage controller NVSRAM versions for SATA: DS4100 (FAStT100): N1724F100R912V05
ESM: Version 9554

SATA NOTE: The ideal configuration for SATA drives is one drive in each EXP per array, one logical drive per array, and one OS disk partition per logical drive. This configuration minimizes the random head movements that increase stress on the SATA drives. As the number of drive locations to which the heads have to move increases, application performance and drive reliability may be impacted. If more logical drives are configured, but not all of them are used simultaneously, some of the randomness can be avoided. SATA drives are best used for long sequential reads and writes.


DS4000 Hardware Setup

Drive-side Fibre Channel cabling – This is the IBM recommended cabling.

Note: The in and out ports of the drive-side mini-hubs on the DS4000 do not matter. In other words, you can connect to the top or bottom port; just connect to only one (top or bottom).

IMPORTANT FLASH: To prevent a drive enclosure group loss, THE DS4500 CABLING DIFFERS FROM THE DS4100, 4300 & 4400. YOU SHOULD PAIR MINI-HUBS 1 & 3 TOGETHER TO CREATE DRIVE LOOPS A & B, AND PAIR MINI-HUBS 2 & 4 TO CREATE DRIVE LOOPS C & D. This diagram shows mini-hubs 1 & 2 paired together and 3 & 4 paired together. DO NOT FOLLOW this diagram for a DS4500!

1. Start with the first expansion unit of Drive enclosures group 1 and connect the In port on the left ESM board to the Out port on the left ESM board of the second (next) expansion unit.
2. Start with the first expansion unit of Drive enclosures group 1 and connect the In port on the right ESM board to the Out port on the right ESM board of the second (next) expansion unit.
3. If you are cabling more expansion units to this group, repeat steps 1 and 2, starting with the second expansion unit.
4. If you are cabling a second group, repeat steps 1 to 3 and reverse the cabling order; connect from the Out ports on the ESM boards to the In ports on successive expansion units according to the illustration on the left.
5. Connect the port of drive-side mini-hub 4 (leftmost drive side) to the In port on the left ESM board of the last expansion unit in Drive enclosures group 1.
6. Connect the port of drive-side mini-hub 3 (mini-hub 2 if DS4500) to the Out port on the right ESM board of the first expansion unit in Drive enclosures group 1.
7. If you are cabling a second group, connect the port of drive-side mini-hub 2 (mini-hub 3 if DS4500) to the In port on the left ESM board of the first expansion unit in Drive enclosures group 2; then connect the port of drive-side mini-hub 1 (rightmost drive side) to the Out port on the right ESM board of the last expansion unit in Drive enclosures group 2.

When instructed to remove and reinsert or to replace a hard drive, wait at least 70 seconds before inserting either the removed existing drive or the new drive into the drive slot. Similarly, wait at least 70 seconds before reinserting either the removed existing ESM module or the new ESM module into the empty ESM slot in the EXP drive expansion enclosure. There is no work-around.

Page 8: DS4000 Implementation Cookbook v1.0


DS4100 (100), 200 and 600 dual expansion unit Fibre Channel cabling

1. If you are cabling two expansion units to the storage server, use a Fibre Channel cable to connect the In port on the left ESM board of the first expansion unit to the Out port on the left ESM board of the second expansion unit. Connect the In port on the right ESM board of the first expansion unit to the Out port on the right ESM board of the second expansion unit.
2. Connect the SFP Expansion port on Controller A to the In port on the left ESM board of the second expansion unit. Connect the SFP Expansion port on Controller B to the Out port on the right ESM board of the first expansion unit.
3. Ensure that each expansion unit has a unique ID (switch setting).

The two host ports in each controller are independent. They are not connected in the controller module as they would be in a hub configuration. So there are a total of 4 host ports in the DS4300 (600) and 2 in the FAStT200.

When instructed to remove and reinsert or to replace a hard drive, wait at least 70 seconds before inserting either the removed existing drive or the new drive into the drive slot. Similarly, wait at least 70 seconds before reinserting either the removed existing ESM module or the new ESM module into the empty ESM slot in the EXP drive expansion enclosure. There is no work-around.

Page 9: DS4000 Implementation Cookbook v1.0


• I/O paths to each controller should be established for full redundancy and failover protection.

DS4800 Cabling

Each drive-side Fibre Channel port shares a loop switch with the other disk array controller. When attaching enclosures, drive loops are configured as redundant pairs utilizing one port from each controller. This ensures data access in the event of a path/loop or controller failure.

Height: 6.9 in (17.5 cm)   Depth: 24.8 in (63.0 cm)   Width: 19.0 in (48.3 cm)
Unit height: 4U   Weight: 80.5 lb (36.5 kg)
Power: U.S.: 115 V, 15 A, NEMA 5-15; International: 230 V, 10 A

• Configure the DS4800 with drive trays in multiples of four.
• Distribute the drives equally between the drive trays.
• For each disk array controller, use four Fibre Channel loops if you have more than 4 expansion units.
• Based on the recommended cabling from above:
  – Dedicate the drives in tray stacks 2 & 4 to disk array controller "B"
  – Dedicate the drives in tray stacks 1 & 3 to disk array controller "A"

[Cabling diagram: Controllers A and B each provide four drive channels (Ch 1-4) over dual-ported connections (P1-P4), cabled to the In/Out ports of the drive trays in the four drive stacks.]

Page 10: DS4000 Implementation Cookbook v1.0


Ensure the drive enclosures have different IDs.

• There is a switch/dial on the back of each EXP500/700 that identifies the enclosure. Each enclosure must have a different ID; otherwise you will receive conflict errors.
• DS4100 (100), FAStT200 and DS4300 (600) enclosures are always ID 0.
• BEST PRACTICE – Change the drive enclosure IDs to something other than the default of '00'. New EXP drawers always come with an ID of '00', and this will prevent errors in the event you forget to change it before adding it to the DS4000 subsystem.

Within a subsystem, each drive enclosure must have a unique ID. Within a loop, the enclosure IDs should be unique in the ones column. All drive trays on any given loop should have completely unique IDs assigned to them. Example for a maximum-config DS4000 900: trays on one loop should be assigned IDs 10-17 and trays on the second loop IDs 20-27. Tray IDs 00-09 should not be used, and tray IDs with the same ones digit, such as 11 and 21, should not be used on the same drive loop.

• When connecting EXP100 enclosures, DO NOT use the tens digit (x10) setting. Use only the ones digit (x1) setting to set unique enclosure IDs. This is to prevent the possibility that the controller blade has the same ALPA as one of the drives in the EXP100 enclosures under certain DS4000 controller reboot scenarios.
• For the DS4800, it is recommended to use the tens digit (x10) enclosure ID setting to distinguish between different loops, and the ones digit (x1) enclosure ID setting to distinguish storage expansion enclosure IDs within a redundant loop. (See the example below.)
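The ID rules above can be checked mechanically. The sketch below is illustrative only (the loop layouts are made-up examples, not a real configuration): within one drive loop, no two tray IDs may share the same ones digit.

```python
# Sketch: flag EXP tray IDs on a drive loop whose ones digits collide,
# per the cookbook rule (e.g. IDs 11 and 21 must not share a loop).
# The example IDs below are illustrative, not from a real subsystem.

def conflicting_ids(tray_ids):
    """Return pairs of tray IDs on one loop whose ones digits collide."""
    seen = {}
    conflicts = []
    for tray in tray_ids:
        ones = tray % 10  # only the ones (x1) digit matters within a loop
        if ones in seen:
            conflicts.append((seen[ones], tray))
        else:
            seen[ones] = tray
    return conflicts

print(conflicting_ids([10, 11, 12, 13]))  # [] -- valid loop
print(conflicting_ids([11, 21]))          # [(11, 21)] -- same ones digit
```

A valid maximum configuration, as in the example above, would assign 10-17 to one loop and 20-27 to the other; both pass this check individually.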

The power-down procedure includes the following steps:

1. Turn off servers
2. Turn off the DS4000 controller
3. Turn off switches
4. Turn off drives

The power-up procedure includes the following steps:

1. Turn on drives and wait one minute
2. Turn on switches
3. Turn on the DS4000 controller
4. Turn on servers

NOTE: With firmware 05.30 and above, the controllers have a built-in pause/delay to wait for the drives to stabilize, but it is still a good practice to follow the proper power-up sequence to prevent any loss of data.

Page 11: DS4000 Implementation Cookbook v1.0

• Verify the status of the box by checking the status lights. Verify the 2 Gb light and the Conflict light. The hub provides two status LEDs for each port. Use these LEDs to help you quickly diagnose and recover from problems.

Green LED   Amber LED   Port State
Off         Off         No GBIC installed
On          Off         Operational GBIC; valid signal
Off         On          Faulty GBIC; port bypassed
On          On          Operational GBIC; no valid signal; port bypassed

IMPORTANT NOTE: When powering up a new system for the first time, it is recommended that you power up one EXP unit at a time and then add only 2 drives at a time! This means that you should pull out every drive in a new system and slowly add them into the system (2 at a time) until recognized. There have been problems with the controller discovering large configurations all at once, which can result in loss of drives, ESMs, GBICs, etc.

Field Tip: My recommendation is to:

1. Power up the controller and only 1 EXP unit with 1 drive installed.
2. Install the Storage Manager client on a workstation and connect to the DS4000 (see "Setting up the Network" and "DS4000 Storage Manager Setup").
3. Once you have Storage Manager connected to the DS4000, continue adding drives (2 at a time) and EXP units. Verify with Storage Manager that the DS4000 sees the drives before you continue to add units/drives.
4. Continue with the rest of the setup ("Collect your Storage System Profile").

Connect Ethernet cables between the RAID controllers and the network switch.

• There is a RETAIN tip regarding a sensing problem. If you have problems locating the unit over Ethernet, try the following:
1. Make sure the Ethernet switch is set to auto-sense. It does not work well with ports hard-set at 100 Mb.
2. If that doesn't work, hard-set the ports to 10 Mb.
3. The DS4000 controller sometimes won't work well with 100 Mb or auto-sensing.

Setting up the Network - Logging into the Controllers

If the storage subsystem controllers have firmware version 05.30 or later, the DS4000 will have the default IP settings below only if NO DHCP/BOOTP server is found.

Controller   IP address        Subnet mask
A            192.168.128.101   255.255.255.0
B            192.168.128.102   255.255.255.0
A2           192.168.129.101   255.255.255.0   (DS4800 only)
B2           192.168.129.102   255.255.255.0   (DS4800 only)

Note: The DS4800 has four Ethernet ports, two on each controller blade (A, B, A2 and B2).
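To talk to the controllers at their defaults, your laptop's IP must fall in the same subnet. A minimal sketch of that check using Python's standard ipaddress module (the controller addresses are the factory defaults from the table above; the laptop address is an example):

```python
# Sketch: verify a workstation IP can reach the DS4000 default controller
# IPs directly (same subnet). Controller addresses are the documented
# factory defaults; the laptop IP is an illustrative example.
import ipaddress

DEFAULTS = {
    "A":  "192.168.128.101",
    "B":  "192.168.128.102",
    "A2": "192.168.129.101",   # DS4800 only
    "B2": "192.168.129.102",   # DS4800 only
}

def same_subnet(host_ip, controller_ip, mask="255.255.255.0"):
    """True if host and controller share the subnet defined by mask."""
    net = ipaddress.ip_network(f"{controller_ip}/{mask}", strict=False)
    return ipaddress.ip_address(host_ip) in net

# A laptop set to 192.168.128.10/255.255.255.0 reaches A and B directly,
# but not the DS4800-only second ports on the 192.168.129.x subnet:
print(same_subnet("192.168.128.10", DEFAULTS["A"]))   # True
print(same_subnet("192.168.128.10", DEFAULTS["A2"]))  # False
```

This mirrors the field tip later in the document about setting the laptop to something like 192.168.128.10 with a 255.255.255.0 mask before discovering the subsystem.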

Page 12: DS4000 Implementation Cookbook v1.0


In 5.30 code and above, you can change the IP via the Storage Manager GUI, but you will have to change the TCP/IP settings on your laptop or workstation to an IP address on the same subnet, something like 192.168.128.10 with a mask of 255.255.255.0. I use a Linksys switch to connect my laptop to both controllers to do this. First discover the DS4000, then right-click on each controller and select Change > IP. If you can't do this via Ethernet, you'll have to do it through the serial port, which requires the null modem serial cable.

Field Tip: The newer laptops from IBM, like the T and R models, do not come with a serial port – an obvious issue when you need to log into the controllers. Some have successfully used the IOGEAR GUC232A USB-to-serial 9-pin adapter with a standard null modem cable. Some say the Belkin product did not work for several people. Just a tip that may save some time when trying to figure out how to get the storage system configured.

How to Connect to the DS4000 system using PuTTy from Windows

1. Launch PuTTY.exe
2. Type in the controller IP address
3. Select the "Rlogin" protocol
4. Click Open
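Before launching PuTTY, it can save time to confirm the controller's Ethernet interface is answering at all. Rlogin uses TCP port 513, so a plain socket test works; this is a hedged sketch (the IP shown in the comment is the factory default for controller A, substitute your own):

```python
# Sketch: test whether a DS4000 controller is reachable on the rlogin
# port (TCP 513) before opening a PuTTY session. Not an IBM tool.
import socket

def rlogin_port_open(host, port=513, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (factory-default IP for controller A; adjust for your subsystem):
# print(rlogin_port_open("192.168.128.101"))
```

If this returns False, check the Ethernet cabling and the auto-sense/10 Mb switch-port tips earlier in this document before suspecting the controller itself.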

How to Connect to the DS4000 system using Hyperterm

1. Attach the null modem serial cable to Controller A or Controller B.
   • Launch HyperTerm.
   • Try to set the HyperTerm connection to 38400 or 57600 baud, 8 data bits, 1 stop bit, parity None. Using flow control other than "None" can cause HyperTerm lock-ups when connected to DS4000 controllers.

Note: Sending a single serial break tells the controller someone is requesting shell access and to print "Press within 5 seconds: <ESC> for SHELL, <BREAK> for baud rate." Two consecutive serial breaks cause the controller to cycle one step through its baud rate table. A pause of at least one second between serial breaks is a good idea.

Initial Shell Access Procedure:

2. Press <CTRL> and <BREAK> (you may have to keep doing this until you get a response).
3. When you see "Press Space Bar within 5 Seconds To Set Baud rate," press the space bar. Wait until you see that the baud rate reports being set before going to step 4.
4. Press <CTRL> and <BREAK> once.
5. When you see "Press Space Bar within 5 Seconds To Set Baud rate, or Press <ESC> To Exit To Shell", press <ESC>.
6. You will now be prompted for the password: infiniti

Field Tip: If you cannot break into the controller, the culprit is usually the cable. Please ensure you have a null modem serial cable.

Troubleshooting: If you cannot break into the controller and you’re sure you have a null-modem serial cable, I find that re-booting my laptop resolves the problem. Sometimes the COM1 port on your laptop gets hung. This especially happens if you’ve been connecting and re-connecting to various serial ports.

Page 13: DS4000 Implementation Cookbook v1.0


Please follow the procedure outlined here exactly, because some commands that can be issued from the serial console can cause data loss. You will need one IP address for each controller. Ctlr A __________________________ Ctlr B ___________________________

1. Log into Controller A.
2. Type netCfgSet.
   a. This command will display each line, one line at a time. When each line is displayed, the cursor will be placed to the right of the current value, waiting for user input. Entering a carriage return at the cursor will cause the current line to be skipped with no changes made to its value. Entering a "." (period) at the prompt will cause the value of the current line to be reset to the factory default. To change the value of a field, simply enter the new value at the cursor. Make sure to include the "." in IP addresses and to precede the Network Init Flags with "0x" if this value is to be changed. As a minimum requirement, the following fields must have non-default values in order to grant a particular host access to a controller:
      i. "My IP Address"
      ii. "Gateway IP Address"
      iii. "Subnet Mask"
3. Repeat the process for Controller B.

Notes:
• It is strongly recommended that during an initial configuration all values be set to their default values using a ".", except for the three required settings.
• Change the Network Init Flags to 0x01 if using a static IP address (keep it at 0x00 if using a DHCP or BOOTP server).

By request, here is everything you wanted to know about the Network Init Flags: The Network Init Flags are used to control the initialization of the network interfaces of a controller. The one-byte field displayed next to the Network Init Flags title when the netCfgSet command is run can be used to modify these flags. Each bit in this one-byte field corresponds to a flag. The function of each of these flags is listed below:

bit 0: 1 = Do not use BOOTP for any reason; 0 = Use BOOTP as needed
bit 1: 1 = Use BOOTP unconditionally; 0 = Use BOOTP only as necessary
bit 2: 0 = Start NFS services; 1 = Do not start NFS services
bit 3: 0 = Use "0.0.0.0" default route; 1 = Do not use "0.0.0.0" default route
bit 4: 0 = Do not mount all NFS volumes; 1 = Mount all NFS volumes
bit 5: 0 = Allow remote login to shell; 1 = Disable remote login to shell
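The flag byte is just the sum of the individual bit values, so it can be composed or decoded mechanically. A small sketch (the flag names paraphrase the table in this document; this is not an IBM utility):

```python
# Sketch: compose/decode the DS4000 Network Init Flags byte.
# Flag names paraphrase the cookbook's bit table.

FLAGS = {
    0x01: "do not use BOOTP",
    0x02: "use BOOTP unconditionally",
    0x04: "do not start NFS services",
    0x08: "do not use 0.0.0.0 default route",
    0x10: "mount all NFS volumes",
    0x20: "disable remote shell login",
    0x40: "do not require remote access authorization",
}

def compose_flags(*bits):
    """OR individual flag values into a single byte (hex addition)."""
    value = 0
    for b in bits:
        value |= b
    return value

def decode_flags(value):
    """Return the names of the flags set in a Network Init Flags byte."""
    return [name for bit, name in sorted(FLAGS.items()) if value & bit]

# 0x21 sets bits 5 and 0: static IP (no BOOTP), remote login disabled.
print(hex(compose_flags(0x01, 0x20)))  # 0x21
print(decode_flags(0x21))
```

This matches the worked example in the text: a value of 0x21 combines the 0x01 (no BOOTP) and 0x20 (no remote shell login) flags.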


bit 6: =0: Use remote access authorization    =1: Do not require authorization

The preceding bits can be hex-added in order to enable more than one flag. For example, a Network Init Flags value of 0x21 sets bits 5 and 0, disabling remote login to the shell and causing the controller software to not broadcast to a BOOTP server. While all of the flags available in the network software are listed above, the only flags which the end user should ever need to modify are 0x01, 0x02, and 0x20. If the controller network interface is being manually configured via the controller shell, then bit 0 should be set to 1. Changes to any of these flags can lead to network connection problems. The 0x20 flag can be used for added protection if network security is an issue: if this flag is not set, then anyone who knows the IP address of a controller and the controller shell password can access the controller shell. On the other hand, leaving this flag unset can be useful for debugging purposes, since it enables access to a controller shell via rlogin.

Verify Network IP Settings

• Check the settings using the netCfgShow command via the serial connection, and/or ping the IP address from a command prompt. netCfgShow will dump the following:

-> netCfgShow

==== NETWORK CONFIGURATION ====
Interface Name       : dse0
My Host Name         : DS_a
My IP Address        : 100.100.100.236
Server Host Name     : host
Server IP Address    : 0.0.0.0
Gateway IP Address   : 100.100.100.9
Subnet Mask          : 255.255.255.0
Network Init Flags   : 0x01
Network Mgmt Timeout : 30
Shell Password       : ************
User Name            : guest
User Password        : ************
NFS Root Path        : (null)
NFS Group ID Number  : 0
NFS User ID Number   : 0
value = 27 = 0x1b
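The dump above can be sanity-checked off-box before relying on ping. The sketch below is a hypothetical illustration using only the Python standard library: it decodes the Network Init Flags byte per the bit table in the previous section (the flag labels are informal paraphrases, not an IBM-defined API) and verifies that the gateway lies on the controller's subnet, using the example values from the dump.

```python
import ipaddress

# Informal labels for the Network Init Flags bits (paraphrasing the
# bit table in the previous section; not an IBM-defined API).
FLAGS = {
    0x01: "do not use BOOTP",
    0x02: "use BOOTP unconditionally",
    0x04: "do not start NFS services",
    0x08: "do not use 0.0.0.0 default route",
    0x10: "mount all NFS volumes",
    0x20: "disable remote login to shell",
    0x40: "do not require remote access authorization",
}

def decode_flags(flags):
    """Return the labels of all bits set in a Network Init Flags byte."""
    return [name for bit, name in sorted(FLAGS.items()) if flags & bit]

def gateway_on_subnet(ip, gateway, netmask):
    """True if the gateway address lies on the controller's subnet."""
    network = ipaddress.ip_network(f"{ip}/{netmask}", strict=False)
    return ipaddress.ip_address(gateway) in network

# Values from the netCfgShow example above: a static-IP controller.
print(decode_flags(0x01))        # ['do not use BOOTP']
print(gateway_on_subnet("100.100.100.236", "100.100.100.9",
                        "255.255.255.0"))        # True
```

A hex-added value such as 0x21 decodes to bits 0 and 5, matching the worked example in the flags discussion above.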

DACStore

DACStore is a region on each drive that is reserved for the use of the DS4000 controller unit. It stores information about the drive state/status, logical drive state/status and other information needed by the controller. The DACStore contains the following information:

1. Failed drive information
2. Global Hot Spare state/status
3. Storage array password


4. Media scan rate
5. Cache configuration of the storage array
6. Storage user label
7. MEL logs
8. Logical drive/LUN mappings, host types, etc.
9. Controller NVSRAM

• The DACStore region extends to 512MB on the DS4500 (900) at firmware level 5.3.

DACStore Cleaning Methods

If you don't want to keep any of the data: use Storage Manager > Configuration > Reset, or run sysWipe/sysReboot via the shell over the serial port on both controllers. THIS WIPES OUT ALL DATA!

To preserve data on the DS4000 while cleaning one drive:

1. Assign a disk as a Global Hot Spare.
2. Fail the GHS, then remove it.
3. Put the suspected dirty DACStore disk into the GHS slot. A new DACStore will be written.
4. Remove the now-clean disk and put it into the empty slot.

Adding EXP Units to an existing DS4000 configuration

Tips:

• Call Support first (1-800-IBM-SERV).
• When adding drives to an expansion unit, do not add more than two drives at a time.
• The subsystem should be in an optimal state.
• Make sure the drive tray IDs are set, and the speed is set to 2Gb.
• Connect one fibre cable at a time; wait until that loop sees the new tray, then repeat the process on the other loop.
• Always save a copy of the profile and configuration before doing any additions.

a. Run the DisableDriveMigration.scr script on the DS4000 from the DS4000 Storage Manager Enterprise Management Window. This ensures that if you have a drive with dirty DACStore, it will not propagate to the DS. You can clean the DACStore by following the procedure above.
b. Only put one (1) HDD in the newly connected EXP unit.
c. Check the DS4000 Storage Manager. You should be able to see the new HDD in the new EXP unit.
d. Add two (2) new HDDs at a time, verifying each time that each drive appears in the DS4000 Storage Manager.
e. Run the EnableDriveMigration.scr script on the DS4000 from the DS4000 Storage Manager Enterprise Management Window.

EnableDriveMigration.scr: The enable drive migration script scans the drives for configurations when they are added to a storage system and will also attempt to use configurations found on the newly added drives. The NVSRAM default enables drive migration.

DisableDriveMigration.scr: The disable drive migration script allows the addition of disk drives while the system is running. In addition, it identifies the drives as unused capacity regardless of past drive configuration. The NVSRAM default enables drive migration.


Host Server Preparation Tasks

Install Server HBAs and Update the Driver

Check http://knowledge.storage.ibm.com/HBA/HBASearch for the latest supported host adapters, driver levels, BIOS, and updated readme.

• Be careful not to put all the high-speed adapters on a single system bus; otherwise the computer bus becomes the performance bottleneck.
• Make a note of which slot each adapter occupies in each server.
• Record the WWPN of each HBA and what slot it's in.

INTEL:

• Ensure the HBAs are in a higher-priority slot than any ServeRAID adapters. If not booting from the host HBAs, it doesn't matter whether or not they are at a higher PCI scan priority.

• Install and update the driver for the IBM FAStT Host Adapter.
1. Install the hardware by using the instructions that come with the adapter.
2. Install the IBM FAStT Host Adapter driver by using the instructions provided in the readme.txt file located in the Host Adapter directory on the installation CD.

DS4300 (600) Note for Windows, Novell NetWare or x86 Linux: Direct connection of the DS4300 (600) to a host system running Microsoft Windows, Novell NetWare or x86 Linux is only supported using the IBM FAStT FC2-133 Host Bus Adapter (HBA), IBM feature code 2104, with the last 6 digits of the 13-digit serial number of the HBA being H21160 or higher. For example, if the serial number is FFC0308H21161, the last 6 digits are H21161, which indicates that this HBA meets the DS4300 (600) direct-connect prerequisite.

Install RDAC on the hosts according to the installation manual.

• RDAC is recommended regardless of whether or not there are multiple HBAs.
• AIX - The AIX "fcp.array" driver suite files (RDAC) are not included on the DS4000 installation CD. Either install them from the AIX Operating System CD, if the correct version is included, or download them from the following Web site: techsupport.services.ibm.com/server/fixes


DS4000 Storage Manager Setup

The following components are Mandatory for all DS4000 Environments:

• RDAC (regardless of whether or not there are multiple paths) for Windows NT, Windows 2000 and Solaris
• QLRemote (regardless of whether or not there are multiple paths) for Linux
• A client somewhere on the network to be able to configure the solution

The following components are optional based on the needs of the customer:

• Agent (all operating systems) - This is only needed if you wish to configure the storage through a direct Fibre Channel connection. If you only want to manage the DS4000 unit over the network, it is not necessary.

• IBMSAN.CDM - This is the multi-path software for NetWare 5.1. It is only needed if you have multiple paths.
• SMxUtil - These utilities are not required but RECOMMENDED, because they add additional functionality for troubleshooting and hot-adding devices to the OS. NOTE: In SM 8.0 they are required for FlashCopy functionality. If you plan to use SM Client through a firewall, note that SM Client uses TCP port 2463.

• FAStT MSJ - Not required but RECOMMENDED, because it adds Fibre path diagnostic capability to the system. It is recommended that customers always install this software and leave it on the system.

Be sure you install the host bus adapter and driver before you install the storage management software. For in-band management, you must install the software on the host in the following order (note: Linux does not support in-band management):

1. Microsoft Virtual Machine (Windows NT 4.0 and Windows Server 2003 only)
2. SMclient
3. RDAC
4. SMagent
5. SMutil

For out-band management, you must install the storage management software on a management station in the following order:
1. Microsoft Virtual Machine (Windows NT 4.0 and Windows Server 2003 only)
2. SMclient

Install Storage Manager from CD or download. You must use the 9.15 Storage Manager Client to manage some DS4000 systems (firmware 8.34, for example). If you are upgrading a system to 9.15 and have previously installed the Event Monitor, make sure all of the Event Monitor services are stopped before you upgrade.


• Determine whether the client or the agent needs to be installed (Client for out-band; Agent AND Client for in-band).
• Execute the SETUP.EXE file.

During the installation, it will ask whether you want to install the Event Monitor. Only install the Event Monitor on a workstation that will be responsible for monitoring the DS. Otherwise, skip it.

Launch Storage Manager

To perform an initial automatic discovery of storage subsystems, perform the following steps:

1. Click Start → Programs.
2. Click IBM DS4000 Storage Manager Client v09.1.G5.xx. The client software starts and displays the Enterprise Management window and the Confirm Initial Automatic Discovery window. Note: The Enterprise Management window can take several minutes to open. No wait cursor (such as an hourglass) is displayed.

3. Click Yes to begin an initial automatic discovery of hosts and storage subsystems attached to the local subnetwork. After the initial automatic discovery is complete, the Enterprise Management window displays all hosts and storage subsystems attached to the local subnetwork.

Note: The Enterprise Management window can take up to a minute to refresh after an initial automatic discovery.

Direct Management: If the Automatic Discovery doesn't work:

• Go to Edit > Add Device.
• Enter the IP address of controller A. Click Add.
• Enter the IP address of controller B. Click Add.
• Click Done.

The storage controllers should appear. It is likely that they will show "Needs Attention"; this is common, since the battery will be charging. Power cycle the DS4000 controller if it doesn't appear.

Manage the DS4000

Once you have discovered the DS4000 systems, you will start to manage them individually. Go through these items for each DS4000 controller.

• Double click on the DS4000 system and launch the manager.

Rename the DS4000 Controller

• Click Storage Subsystem → Rename. The Rename Storage Subsystem window opens.
• Type the name of the storage subsystem, then click OK.


If you have multiple controllers, it is helpful to enter the IP addresses or some other unique identifier for each subsystem controller.

Change Enclosure Order

The EXP enclosures will likely show in the GUI differently from how you have them installed in the rack. You can change the GUI to reflect how the enclosures are installed: go to File > Change > Enclosure Order and move the enclosures up or down to correctly reflect how they are installed in the rack.

Check that each Controller is Online and Active

• Right-click each controller, place it Online, and change it to Active (if applicable).

Collect your DS4000 Storage System Profile

• Go to View > Storage System Profile.
• Click on the Controller tab (these examples are from the 8.4 client code, which everyone can use).
• Make note of the NVSRAM and Firmware versions listed.

Firmware version: _______________________
NVSRAM version: _______________________

• Click on the "Drives" tab and find the product ID and firmware version. Example shown:

HDD Firmware ___________

TRAY, SLOT  STATUS   CAPACITY  CURRENT DATA RATE  PRODUCT ID  FIRMWARE VERSION
0, 1        Optimal  36.72 GB  2 Gbps             B337        F454  << Look for this
0, 2        Optimal  36.72 GB  2 Gbps             B337        F454

• Click on the "Enclosures" tab and find the ESM firmware version for each EXP unit (all EXP700s should be identical, etc.). Example shown:

ESM _____________


ESM card Status: Optimal
Firmware version: 9140  << Look for this

Update Microcode

Field Tip: Do not upgrade the Firmware/NVSRAM/ESM or HDD code if the DS4000 is in anything but 'optimal' state!! Doing so may cause disastrous results.

Always check the README files (especially the Dependencies section) that are packaged together with the firmware files for any required minimum firmware levels and for the firmware download sequence for the DS4000 drive expansion enclosure ESM, the DS4000 storage server controller and the hard drive firmware.

If you are upgrading the firmware/microcode with defined storage partitions/mappings and an operating system other than Windows NT is set as the default host group, you will need to change the default host type back to this operating system host type after upgrading. The DS4000 Storage Manager upgrade resets the default host type to Windows NT.

AIX concurrent download: Depending on your system's current firmware and AIX device driver levels, you might be able to use concurrent download. You have to be at 5.4.xx.xx going to another level of 5.4.xx.xx, or at 6.1x.xx.xx going to another level of 6.1x.xx.xx. You cannot go from 5.4 to 6.1 with concurrent download. Concurrent download is a method of downloading and installing firmware that does not require you to stop I/O to the controllers during the process. You cannot use concurrent firmware download if you change the default setting of the Object Data Manager (ODM) attribute switch_retries. The default is 5.

IMPORTANT NOTE: You cannot do a concurrent firmware download on the DS4800 from 6.14.xx to 6.15.xx (ALL I/Os must be stopped during the upgrade of the DS4800 controller firmware and NVSRAM).

Fibre code levels as of 8/5/2005 are as follows (this is for 9.14!):
Storage Controller Firmware versions:
• DS4800: FW_DS4800_06151100
• DS4500: FW_06120300_06100400
• DS4400: FW_06120300_06100400
• DS4300 Standard: SNAP_288X_06120300 (EXP700/710 or FC/SATA intermix)
• DS4300 Turbo: SNAP_288X_06120300 (EXP700/710 or FC/SATA intermix)
• DS4300 Standard: SNAP_282X_06120300 (EXP100 attachment ONLY)


Storage Controller NVSRAM versions:

ATTENTION: The DS4000 storage subsystem controller firmware version 06.12.xx.xx uses the FC/SATA intermix premium key file to enable the FC/SATA intermix functionality. It does not rely on a certain version of the NVSRAM file like controller firmware version 06.10.xx.xx. Do not apply version 06.12.xx.xx of the firmware until a FC/SATA intermix premium feature key file is generated and available.

• DS4800: N1815D480R914V05
• DS4500: N1742F900R912V06
• DS4400: N1742F700R912V05
• DS4300 Standard and Turbo: N1722F600R912V05 (for EXP700/EXP710 ONLY attachment, or when the FC/SATA intermix premium feature is enabled)
• DS4300 Standard and Turbo: N1722F600R28enc2 (for EXP100/SATA ONLY attachment)

Important: You must install the firmware update prior to downloading the NVSRAM update.

1. Controller firmware (5 to 15 minutes)
2. NVSRAM (2 to 5 minutes)
3. ESM firmware (10 to 15 minutes) NOTE: EXP500 firmware 9166 before EXP700 firmware 9324
4. Hard drive firmware (1 minute per drive, but you can do many concurrently)
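Using the worst-case timings listed above, a rough maintenance-window estimate can be sketched. This is a hypothetical helper under the assumption that drive firmware goes out in fixed-size concurrent batches; actual times vary by model and code level.

```python
import math

def estimate_window_minutes(esm_count, drive_count, drive_batch=4):
    """Worst-case minutes for a full code update, per the sequence above:
    controller firmware (15), NVSRAM (5), ESM firmware (15 per enclosure),
    and drive firmware at 1 minute per concurrent batch of drives
    (batch size is an assumption, not an IBM specification)."""
    controller = 15
    nvsram = 5
    esm = 15 * esm_count
    drive_batches = math.ceil(drive_count / drive_batch)
    return controller + nvsram + esm + drive_batches

# Example: 4 EXP enclosures, 56 drives -> 15 + 5 + 60 + 14 = 94 minutes.
print(estimate_window_minutes(4, 56))  # 94
```

Treat the result as a planning floor, not a ceiling: the Field Tip above still applies, and nothing should be upgraded unless the subsystem is optimal.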

DS4100 (100) Note: BEWARE OF OLD EXP100s. To attach an EXP100 to a DS4100 (100), it must have the Release 2 ESM code to allow ESM failover. If you have an EXP100 that does not have R2 code on it, you must update the ESM firmware FIRST, before you update the firmware of the DS. If the EXP100 does not have R2 code, the DS4100 (100) will not recognize it.

Updating the Firmware and NVSRAM

• If you click Stop while a firmware download is in progress, the current download will finish before the operation stops. The Status field for the remaining enclosures changes to Canceled.

• Online firmware and NVSRAM upgrades are only supported when upgrading from 05.40.06.XX to a new version of 05.40.XX.XX. It is highly recommended that online FW upgrades be scheduled during low-peak I/O loads.

• Online Firmware and NVSRAM upgrades of SATA arrays are only supported when upgrading from firmware 05.41.56.XX to a higher firmware version of 05.41.5x.xx.

AIX Notes: Online upgrades are not supported on AIX 4.3.3. I/O must be quiesced prior to performing the upgrade.

Using concurrent download: Depending on your system's current firmware and AIX device driver levels, you might be able to use concurrent download.

Attention:
1. You cannot use concurrent firmware download if you change the default setting of the Object Data Manager (ODM) attribute switch_retries. The default is 5.


2. If you do not have the correct firmware versions to use concurrent download, you must ensure that all I/O to the controllers is stopped before you upgrade the firmware or NVSRAM.

The upgrade procedure needs two independent connections to the DS4000 Storage Server, one for each controller. It is not possible to perform a microcode update with only one controller connected. Therefore, both controllers must be accessible either via Fibre Channel or Ethernet. Both controllers must also be in the active state.

• Update the firmware first, and then the NVSRAM.
• Download the firmware and NVSRAM from http://ssddom02.storage.ibm.com/techsup/webnav.nsf/support/disk or http://www-1.ibm.com/servers/storage/disk/

To download firmware, do the following:
a. Open the Subsystem Management window.
b. Click Advanced => Download => Firmware (follow the online instructions).

To download NVSRAM, do the following:
a. Open the Subsystem Management window.
b. Click Advanced => Download => NVSRAM (follow the online instructions).

• After updating the NVSRAM, the system resets all settings stored in the NVSRAM to their defaults, so if you made any changes manually using a script, you will have to reapply them.

Updating ESM (not common for new installs) - Allow approximately 5-10 minutes per ESM to complete the firmware update.

• With Storage Manager 9.1 and controller firmware 05.4x.xx.xx or higher, it is possible to update the ESM firmware during host I/O to the logical drives. However, you must suspend all I/O during the ESM firmware download if you select multiple enclosures for downloading; if you select only one enclosure at a time, you can download ESM firmware with I/Os running. Regardless, IBM recommends that you suspend all I/O activity while performing firmware upgrades.

• The ESM firmware version must be the same in all EXP drive enclosures of the same type in a given DS.
• Updating ESM firmware requires downtime if the controller firmware level is 05.21 or lower.

Perform the following steps to download the ESM card firmware:

1. From the Subsystem Management window, select Advanced → Download → Environmental (ESM) Card Firmware.
2. In the Select Enclosures field, highlight each enclosure to which you want to download firmware, or click Select All to highlight all drive enclosures in the storage subsystem. Each drive enclosure that you select should have the same product ID.
3. Enter the firmware file to download.
4. Click Start. Note: The Start button will be unavailable until both a drive enclosure and a firmware file are selected.
5. Confirm your selections and then click Yes to continue with the firmware download.

Update Drive Code (not common for new installs)

Note: Drive firmware download is an offline management event. You must schedule downtime for the download, because no I/Os to the storage server are allowed during the drive firmware download process.

Storage Manager 9.1 adds support for parallel hard drive firmware download: up to four different drive firmware packages can be downloaded to multiple drives of four different drive types simultaneously. If you have both SATA (EXP100) and Fibre drives (EXP700/EXP710) behind the same DS4000 storage server, do not download drive firmware to both SATA and Fibre Channel drives at the same time; download the drive firmware to drives of a single drive technology (either SATA or FC) at a time.

Do not pull or insert drives during the drive firmware download. In addition, ALL I/Os must be stopped during the drive firmware download. Otherwise, drives may be shown as missing, unavailable or failed.

• Download the latest HDD code if you haven't already.
• Update the drive code using SM 9.1 > Advanced > Update Drive Code.
• You will update each drive independently, taking approximately 5 minutes per drive.

Set Storage Subsystem Clock - Since the DS4000 Storage Server stores its own event log, synchronize the controller clocks with the time of the host system. This simplifies error determination when you start comparing the different event logs. Be sure that your local system is set to the correct time, then click Storage Subsystem > Set Controller Clock.


Ensure Storage Partitioning is Enabled.

• The DS4300 (600) does not come with Storage Partitioning enabled. There is an activation kit that comes with the storage system that will point you to a website for key activation. You can still get the key from support or Level 2 (see below).

• DS4400 has 64 storage partitions installed. No additional storage partitioning premium feature options can be purchased for these storage subsystems.

• DS4500 has 16 storage partitions installed. An upgrade from 16 to 64 partitions can be purchased as a storage partitioning premium feature option.

Storage partitioning allows you to connect multiple host systems to the same storage server. It is a way of assigning logical drives to specific host systems or groups of hosts; this is known as LUN masking. Logical drives in a storage partition will only be visible and accessible to their assigned group or individual hosts. Without the use of Storage Partitioning, all logical drives appear within what is called the Default Host Group, and they can be accessed by any fibre channel initiator that has access to the DS4000 host port. When homogeneous host servers are directly attached to the DS4000 storage server, access to all logical drives may be satisfactory; when attached to a SAN, zoning within the fabric can be used to limit access to the DS4000 host ports to a specific set of hosts.

• Go to Storage Subsystem > Premium Features > List.
• You should see "Storage Partitioning Enabled".

o If you do not see that it is enabled, you will have to get the feature key and enable it. Make note of the 32-digit feature key number. Customers: call 800-IBM-SERV, enter the 4-digit machine type, and tell the help desk that you need a feature key generated. An FTSS can go to http://ssgtech1.sanjose.ibm.com/Raleigh/FastT%20PE%20Support.nsf/Home?OpenPage > Feature Key Generator > enter the customer name, machine type, serial number and SM version.

1. Download the key file to the Storage Manager client or agent.
2. To enable the feature, go to Storage Subsystem > Premium Features > Enable.
3. Point to the key file.

Create Hot Spares

• 1 hot spare per drive tray is optimal, but it will depend on your capacity requirements. I recommend no fewer than 1 spare for every 20-30 drives. Also keep the rebuild times in mind, depending upon the size of the drives installed.
• EXP100: IBM recommends 2 hot spares per EXP100 drive expansion enclosure, one in an even slot and the other in an odd slot.
• Ensure that spare drives are placed on different disk drive channels (odd and even slots) to reduce the risk of a single disk channel causing a loss of access to all spare drives in the subsystem.
• In a DS4400/4500 configuration with two redundant drive loops, it is recommended to put half of the hot spares in one redundant drive loop and the rest in the other redundant drive loop. The DS4800 has 4 drive loops, so try to put at least one spare in each drive loop.

Note: a total of 15 hot spares can be defined per DS4000 storage server configuration.
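The sizing guidance above can be expressed as a small rule-of-thumb helper (a hypothetical calculator, not an IBM tool): at least one spare per 20-30 drives, ideally one per drive tray, never exceeding the 15-spare limit.

```python
import math

def recommended_hot_spares(total_drives, trays):
    """Rule of thumb from the notes above: at least 1 spare per 30 drives
    (the low end of the 20-30 recommendation), ideally 1 per drive tray,
    capped at the 15-spare limit per DS4000 configuration."""
    minimum = math.ceil(total_drives / 30)
    ideal = trays  # 1 hot spare per drive tray is optimal
    return min(max(minimum, ideal), 15)

print(recommended_hot_spares(56, 4))    # 4 (one per tray)
print(recommended_hot_spares(224, 16))  # 15 (capped at the limit)
```

Placement still matters more than the raw count: spread the spares across odd and even slots and across drive loops, per the bullets above.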


Performance and Configuration Notes

Tips:

• More physical disks for the same overall capacity give you better performance. By doubling the number of physical drives, you can expect up to a 50% increase in throughput performance. Capacity per LUN may impose restrictions.

• Cable and configure the DS4000 storage controller to use all drive loops if you have more than 2/4 EXP710s (not applicable to the DS4300 (600)). You will get better performance.

• Spread your arrays across multiple enclosures and drive channels rather than in one enclosure so that a failure of a single enclosure does not take a whole array offline. Another benefit is performance increases since the I/O requests are processed by multiple ESM boards along multiple paths (loop side).

• Recommended drives per LUN varies:
o Capacity per LUN may impose restrictions
o Application queue depth per LUN may impose restrictions
o Up to 10 drives per LUN is reasonable

10K drives - "Typical" environments yield between 130-190 IOPS per drive max
15K drives - "Typical" environments yield between 150-220 IOPS per drive max

Filesystem Partitioning

• Never create a filesystem and its log on the same RAID logical disk; this can cause thrashing of the disks.
• Creating a single disk partition encompassing all usable space is recommended.
• Creating more than 3 concurrently active disk partitions will also cause disk thrashing, degrading performance.

What type of I/O is best for which RAID level:

• Raw I/O is best suited for small transaction based I/O applications such as databases. Buffered filesystem I/O could be used here but better performance is generally found using raw I/O.

• Large sequential I/O is best suited for direct I/O with a filesystem. You can achieve near-raw performance and gain the benefits of having a filesystem.
• RAID 1, 1/0 and 5 benefit from concurrent I/Os
• I/O rates plateau around 14 drives

SVC – If you are configuring a DS4000 for an SVC, either a 4+P or an 8+P RAID5 configuration is recommended.


Selecting RAID Parameters

• Number of Disks to use in a RAID Array

  o High Bandwidth Applications: RAID 5 8+1 drive configurations

  o High I/O Rate Applications: for large I/O environments, consider 4+1 or 8+1

• Single I/O on a single data stripe (segment size X number of drives)
• RAID 1/0 striped in multiples of 2 drives
• More spindles will provide higher I/O rates
• Capacity requirements will limit the number of spindles

Important Configuration Note: DS4000 SM will automatically configure any mirrored array with more than 4 drives as RAID10 (rather than RAID1, which is 2 drives mirrored). If you want RAID1 (not RAID10), keep your array no larger than 4 drives; if you want a RAID10 array, you must use more than 4 drives.

Important Note for RAID5: I recommend no more than 12+P as the maximum RAID5 array size. 8-12 disks per array is the sweet spot if you don’t know what the workload is. An array that contains more than 12 drives results in:

• Substantial increase in the time needed to rebuild a failed drive onto a spare drive, increasing the probability of data loss due to a second drive failure in the same array during the rebuild process

• Elongation of the time required to perform a CHKDSK operation against the logical drive to multiple hours, even days.
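The drive-count rules above can be expressed as a small helper. This is a sketch of the behavior as described in this cookbook, not actual Storage Manager code:

```python
# Sketch of the array-sizing rules above: mirrored arrays of more than
# 4 drives become RAID10, and RAID5 arrays beyond 12+P risk very long
# rebuilds and CHKDSK runs. Illustrative only.

def check_array(raid_level: str, drives: int) -> str:
    if raid_level == "RAID1" and drives > 4:
        return "SM will configure this as RAID10 (more than 4 drives)"
    if raid_level == "RAID10" and drives <= 4:
        return "Use more than 4 drives to get a RAID10 array"
    if raid_level == "RAID5" and drives > 13:  # 12+P = 13 drives total
        return "Warning: >12+P RAID5 means long rebuilds and CHKDSK times"
    return "OK"

print(check_array("RAID5", 14))
print(check_array("RAID1", 6))
```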


• Most field recommendations are 4+P, 6+P, 8+P, or 12+P configurations. This has to do with the segment sizing working out evenly across the physical disks (# of disks x segment size = size of a data stripe).
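The stripe arithmetic in the note above can be checked directly. The 64KB segment size here is a common power-of-two choice used for illustration, not a mandated value:

```python
# Stripe size = number of data disks x segment size (from the note above).
# Shows why 4+P and 8+P line up neatly with power-of-two host I/O sizes.

def stripe_size_kb(data_disks: int, segment_kb: int) -> int:
    return data_disks * segment_kb

# A 4+P RAID5 array (4 data disks) with a 64KB segment:
print(stripe_size_kb(4, 64))   # -> 256 (a 256KB host I/O fills one stripe)
# An 8+P RAID5 array (8 data disks) with a 64KB segment:
print(stripe_size_kb(8, 64))   # -> 512
```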

Write-back caching and Write-through

• Write-through means that data is always written directly to the disk drives, bypassing cache. Good for freeing up cache for reads.
• With write-back caching, data is not written straight to the disk drives; it is only written to the cache.

  o Enable write-back caching with cache mirroring
    - Allows immediate write-complete acknowledgement to the host
    - Flushes to disk when the cache is full (demand) or by age (10 seconds)

  o Write-back mode appears to be faster than write-through mode, but if the workload is high, the cache will become inefficient

Maximizing Performance for IOPS/Sec. – Monitor these settings with Performance Monitor

• Disable array prefetch for random I/O
  o Set on a LUN basis

• Enable prefetch if there is some sequential host I/O

Segment Size >> The segment is the amount of data that the controller writes on a single drive in a logical drive before writing data on the next drive.


• If you have a large I/O environment, try to get a single I/O on a single data stripe (segment size X number of drives)
• If the I/O size is larger than the segment size, consider increasing the segment size
• The most common segment size misconception is that a small segment size is needed for small I/O sizes. This may be true for other vendors’ products, but not the DS4000
• DS4000 9.1 Storage Manager has 512KB segments for larger full-stripe widths, which can be a performance enhancement for large database applications.
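Following the guidance above, one way to sketch segment-size selection for a large-I/O environment (my own illustration, not an IBM procedure) is to pick the smallest segment that lets one host I/O land on a single stripe:

```python
# Pick a segment size so one host I/O fits in one data stripe
# (I/O size <= segment size x number of data drives). The candidate
# segment sizes below are assumed typical DS4000 choices.

SEGMENT_SIZES_KB = [8, 16, 32, 64, 128, 256, 512]

def pick_segment_kb(io_size_kb: int, data_drives: int) -> int:
    for seg in SEGMENT_SIZES_KB:
        if seg * data_drives >= io_size_kb:
            return seg
    return SEGMENT_SIZES_KB[-1]

# A 512KB host I/O on an 8-data-drive array: 64KB segments suffice,
# because 64KB x 8 drives = one 512KB stripe.
print(pick_segment_kb(512, 8))  # -> 64
```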

Cache Block Size

• Set array cache block size to 4K
• Minimum memory allocation unit
• Two cache block sizes, 4K (default) and 16K
• Highest efficiency if I/O is aligned and fills a full cache block

Set Cache Read Ahead Multiplier to 4 (default is 1). This is good for most installations. If you have a very heavy sequential workload, you can increase it, but monitor it with the Performance Monitor; you should see a higher cache hit ratio if it is working. In DSM 9.1, a pre-fetch multiplier of zero disables automatic pre-fetch, and any other value enables it. This method still uses a multiplier, but the value is strictly internal and is automatically adjusted up or down depending on cache hits and misses of sequential host read requests.

Determining I/O bottlenecks

The Performance Monitor can be used to help determine a number of items:

Actual I/O size being received by the storage subsystem
  – Max. throughput / max. number of IOPS = I/O size

How many IOPS are being processed by a given RAID set
  – Number of IOPS divided by the number of data drives in the RAID set gives the average number of IOPS per disk in the set
  – Continuous high values could indicate a disk bottleneck
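The two Performance Monitor calculations above are simple division; as a sketch, with example inputs of my own invention:

```python
# The two rules of thumb above, as functions. Inputs come from the
# Performance Monitor (max throughput, max IOPS, per-RAID-set IOPS).

def avg_io_size_kb(max_throughput_kb_s: float, max_iops: float) -> float:
    """Actual average I/O size = max throughput / max IOPS."""
    return max_throughput_kb_s / max_iops

def iops_per_disk(raid_set_iops: float, data_drives: int) -> float:
    """Average IOPS per disk in a RAID set."""
    return raid_set_iops / data_drives

print(avg_io_size_kb(20_000, 2_500))  # 20MB/s at 2500 IOPS -> 8.0 KB I/Os
print(iops_per_disk(1_600, 8))        # -> 200.0, near the 15K-drive max
```

A sustained per-disk figure near the drive's maximum (see the 10K/15K ranges earlier in this section) suggests a disk bottleneck.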

Drive Failure on EXP700 – When a drive fails on the EXP700, I recommend pulling the bad or failed drive after it has been successfully spared out. (The drive’s electronics remain active whether or not the drive has failed; as a result, a failed drive can continue causing problems.)


How long does it take media scan to complete? (created by Hans Hudelson)

The 30-day setting for media scan is the maximum time in which the scan will complete on all LUNs that have media scan (MS) activated. It runs continuously in the background, using spare cycles to complete its work. It reads a stripe into cache memory and, if all blocks can be read, discards them and proceeds to the next stripe. If one block cannot be read, it retries 3 times to make sure the block really cannot be read. When the read fails, the data for that block is reconstructed, and the controller issues a write-with-verify command. Write-with-verify tells the drive to write the data to the block, read it back, and tell the controller when it has successfully written the data and read it back. In the case of a bad block, the write will either overwrite data that was weakened, or it will fail to write to the block so that the data can be read back. If the write to that block fails, the drive will allocate other blocks until the data has been verified.

This continues, LUN by LUN, until all LUNs on the controller have been verified, and then it starts over. If completion takes less than the setting, it starts again and schedules its next completion for 30 days out. Along the way, it calculates how much more scanning it has to do and raises the priority of the scan when the calculations show that it will not complete on schedule. As it gets closer to the set completion time (30 days by default), it continues increasing priority so that the 30-day end time will be met.

For example, suppose scanning all the LUNs on a system is set for 30 days. There are plenty of spare processor and cache cycles, so the scan of all the LUNs completes in 21 days. It starts over right away and schedules the next completion 30 days from the time this scan started. During these 30 days, the customer uses the controller a lot more, and spare processor cycles decrease substantially. At 10 days, the controller calculates it will take 21 days to finish, so the priority of the MS increases until the scheduled end date is 20 days away. At 15 days, the controller calculates it will take 16 days to finish, so the priority of the MS application when assigning processor cycles increases again. As the end date approaches, the priority of MS may get high enough that production is slowed, because the controller’s priority is to finish the MS in the time allotted, 30 days.

The same process holds for whatever duration you set. We have seen no effect on I/O with a 30-day setting unless the processor is utilized in excess of 95%. The length of time it takes to scan the LUNs depends on the total capacity of all the LUNs on the system and the utilization of the controller.
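The deadline-driven pacing described above can be sketched as a simple rate check. This models my reading of the narrative, not the actual controller firmware:

```python
# Sketch of media-scan pacing: raise scan priority when the projected
# finish time would overshoot the configured duration (30 days default).
# The real controller logic is internal; this only illustrates the idea.

def needs_priority_boost(elapsed_days: float, fraction_done: float,
                         duration_days: float = 30.0) -> bool:
    """True if, at the current rate, the scan would miss its deadline."""
    if fraction_done <= 0:
        return True
    projected_total_days = elapsed_days / fraction_done
    return projected_total_days > duration_days

print(needs_priority_boost(10, 10 / 31))  # on pace for ~31 days -> True
print(needs_priority_boost(10, 0.5))      # on pace for 20 days -> False
```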

Create arrays and logical drives

IMPORTANT: In previous checklists I said to let the DS4000 create your array. I no longer recommend this. When you select the drives for an array, make sure to select a drive from each EXP700/710 and alternate drive slots (odd and even). This ensures maximum availability and performance. Try to avoid creating an array where more than one drive is in one tray; this may not be feasible for many installations.
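The selection rule above (one drive per enclosure, alternating odd and even slots) can be sketched like this. The enclosure and slot numbering is illustrative; real drive addressing varies by model:

```python
# Pick one drive per enclosure, alternating odd and even slot numbers,
# per the recommendation above. Enclosure/slot numbers are hypothetical.

def pick_array_drives(enclosures: list[int], width: int) -> list[tuple[int, int]]:
    """Return (enclosure, slot) picks spread across enclosures."""
    picks = []
    for i in range(width):
        enclosure = enclosures[i % len(enclosures)]
        slot = 1 + (i % 2)  # alternate an odd slot (1) and an even slot (2)
        picks.append((enclosure, slot))
    return picks

# A 5-drive (4+P) array spread over 5 EXP710s: one drive per enclosure.
print(pick_array_drives([1, 2, 3, 4, 5], 5))
```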

DS4100: When configuring a DS4100, it is highly recommended to configure the 14 disks in the base frame to use the same controller. This will prevent I/O shipping and possible performance degradation.

SATA NOTE: The ideal configuration for SATA drives is one drive in each EXP per array, one logical drive per array, and one OS disk partition per logical drive. This configuration minimizes the random head movements that increase stress on the SATA drives. As the number of drive locations to which the heads have to move increases, application performance and drive reliability may be impacted. If more logical drives are configured, but not all of them are used simultaneously,


some of the randomness can be avoided. SATA drives are best used for long sequential reads and writes.

A good practice is to make the logical LUN size the size of the array. This may not be feasible or recommended in some configurations (you may have only 10 drives and want multiple RAID10 LUNs). You are limited to no more than 30 disks in an array.

Most hosts will be able to have 256 LUNs mapped per storage partition. Windows NT, Solaris with RDAC, NetWare 5.1, and HP-UX 11.0 are restricted to 32 LUNs. If you try to map a logical drive to a LUN number greater than 32 on these operating systems, the host will be unable to access it. Solaris will require Veritas DMP for failover to use 256 LUNs.

Click Logical Drive → Create. The Logical Drive Create wizard starts. Follow the online instructions to create arrays and logical drives. Be sure to select your drive capacity! By default, all the available space in the array will be configured as one logical drive. If you want to define more than one logical drive, enter the size of the first logical drive. Continue setting up drives for either this same array or a different array. Depending upon the size of the storage system, the formatting may take many hours to complete. Start this process and come back the following day!

Verifying and defining the default host type

Homogeneous Host Attachment – The host type determines how the DS4000 will work with each connected host. If all host computers connected to the same storage subsystem are running the same operating system, and you do not want to define partitioning, you can define a default host type.

• Click Storage subsystem → Change → Default host-type. The Default Host-type window opens.
• From the pull-down list, select the host type.
• Click OK.

Heterogeneous Host Attachment – Important: To use the heterogeneous host feature, you must meet the following conditions:

• You must enable storage partitioning.
• During host-port definition, you must set each host type to the appropriate operating system so that the firmware on each controller can respond correctly to the host.


SAN Configuration Guide

Configure SAN Switch –
• Follow the SAN switch installation and user’s guide for setting the IP address of the switch(es).
• Set the switch name
• Set the Domain ID
• Verify the firmware of the switch and upgrade if necessary, following the installation guide.

BROCADE Setup – These are Telnet commands to configure the Brocade switch. Brocade needs a straight-through serial cable, unlike the DS4000.

- To set the IP address ( ipaddrset )

- To set the Domain ID ( configure ), hit Ctrl+D to accept all the rest of the defaults and commit changes

- To set the Core PID setting ( configure ) – this should be set to ‘1’; hit Ctrl+D to accept all the rest of the defaults and commit changes

- Set the telnet timeout value ( timeout 10 ) [optional, but recommended]

Fabric Topology – It is a best practice that all Fibre Channel (FC) switches be zoned such that a single FC host HBA can only access one controller per storage array. This zoning requirement ensures the maximum number of host connections can be seen and log into the DS controller FC host port. If an FC HBA port is seen by both controller A and controller B host ports, it is counted as two host connections to the storage subsystem – one for the controller A port and one for the controller B port.

Zoning – Most errors and problems with setting up DS4000 storage systems are made in the zoning configurations. To avoid possible problems at the host level, all Fibre Channel switches should be zoned such that a single host bus adapter can only access one controller per storage array. I tend to get a lot of feedback about different ways to zone, but this is the recommended method and will not cause strange behaviors. Understanding the DS4000 storage controllers will help you understand how zoning should be configured.

This is an example of a more complicated configuration. It shows 2 dual-HBA hosts that want to access data from 2 DS4000 storage systems. Notice how each DS4000 has both ports of controller A going to one switch (I’ll call it SAN A) and both ports of controller B going to SAN B. You would then create zones where:

HBA1/Server1 is zoned with ControllerA/Port1 >> in SAN A
HBA2/Server1 is zoned with ControllerB/Port1 >> in SAN B
HBA1/Server2 is zoned with ControllerA/Port2 >> in SAN A
HBA2/Server2 is zoned with ControllerB/Port2 >> in SAN B

…. Then repeat the process for the other DS4000 storage system.
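The zoning pattern above can be generated mechanically. The pairing rule (one HBA to one controller port, controller A in SAN A, controller B in SAN B) is from the text; the alias names are hypothetical, since real fabrics use WWPN-based aliases:

```python
# Generate one-HBA-to-one-controller-port zones, following the example
# above. Alias names are hypothetical placeholders, not real WWPN aliases.

def build_zones(servers: list[str], ds4000: str) -> list[tuple[str, str, str]]:
    """Return (hba_alias, controller_port_alias, fabric) triples."""
    zones = []
    for i, server in enumerate(servers, start=1):
        zones.append((f"{server}_HBA1", f"{ds4000}_CtrlA_P{i}", "SAN_A"))
        zones.append((f"{server}_HBA2", f"{ds4000}_CtrlB_P{i}", "SAN_B"))
    return zones

for zone in build_zones(["Server1", "Server2"], "DS4500_1"):
    print(zone)
```

Note that each HBA ends up in exactly one zone and sees exactly one controller, which is the property the best practice is after.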


AIX: You can attach up to 4 DS4000s to a single pSeries server. You will need two HBAs in the server and 10 ports on the SAN (two from the HBAs to the switch, and two each from the switch to each DS4000). Four connections from the switch to each DS4000 would be even better. Make sure you check your zoning so the following rules apply:

• Every adapter in the AIX system can see only one controller (these are AIX-specific zoning restrictions, not HACMP-specific).
• Other storage devices, such as tape devices or other disk storage, must be connected through separate HBAs and SAN zones. (AIX specific)
• Multiple HBAs in the same server cannot “see” the same DS4000 controller port (each in their separate zone).
• The HBAs are isolated from each other (zoned) if they are connected to the same switch that is connected to the same DS4000 controller port.
• Each HBA and controller port must be in its own fabric zone, if they are connecting through a single switch.

Connect the SAN switches to the mini-hub host ports of the DS4000 controller (indicate port per switch).

DS4300 (600) NOTE >> The two host ports in each controller are independent; they are not connected in the controller module as they would be in a hub configuration. So there are a total of 4 host ports in the DS4300 (600).

• I recommend creating an alias for each host HBA and each DS4000 controller to easily identify the WWPNs.
• Zone the fibre switches. Each HBA is to be zoned to each controller.

NOTE: The DS4000 controller host ports and the Fibre Channel HBA ports cannot be connected to Cisco FC switch ports with "trunking" enabled. You might encounter failover and failback problems if you do not change the Cisco FC switch ports to "non-trunking" using the following procedure:


a. Launch the Cisco FC switch Device Manager GUI.
b. Select one or more ports with a single click.
c. Right-click the port(s) and select Configure; a new window pops up.
d. Select the "Trunk Config" tab from this window; a new window opens.
e. In this window, under Admin, select the "non-trunk" radio button (it is set to auto by default).
f. Refresh the entire fabric.


Host to LUN Mapping

For a new installation, after creating new arrays and logical drives:

• If your host type is not Windows NT, create a partition with your host type and map the logical drives to this partition.
• If your host type is Windows NT, you can map your logical drives to the "Default Host Group" or create a partition with an NT host type.

1. At the Storage Manager – Subsystem Management screen, click Mapping Views and OK.
2. Right-click on Default Host Group.
3. Create a new host group.
4. Right-click on the new host group name and Define New Host.
5. Right-click on the name of the new host and Define Host Ports. A pull-down menu will list all HBAs the DS4000 controller recognizes. Verify the WWPNs listed in Storage Manager against the host HBA WWPNs. Make sure that you select the right host type.
6. Define the second HBA by following the same steps.
7. Select the host group or host.
8. Highlight the group that you are adding LUNs to.
9. From the list of drives available, select the drive to map and set the Logical Unit Number to "0".
10. Click Add.
11. Continue mapping new drives (incrementing the Logical Unit Number each time) until complete for this host.
12. Then click Close.
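Steps 9-11 above assign consecutive LUN numbers starting at 0; as a sketch, with hypothetical logical drive names:

```python
# Build a host-to-LUN mapping with consecutive LUN numbers starting at 0,
# as in the mapping steps above. Logical drive names are made-up examples.

def map_drives(logical_drives: list[str]) -> dict[int, str]:
    """Assign LUN 0, 1, 2, ... to each logical drive in order."""
    return {lun: drive for lun, drive in enumerate(logical_drives)}

mapping = map_drives(["DB_Data", "DB_Logs", "FileShare"])
print(mapping)  # -> {0: 'DB_Data', 1: 'DB_Logs', 2: 'FileShare'}
```

Keeping the numbers consecutive also matters for Linux hosts, whose SCSI layer of this era stops scanning at a skipped LUN (see the Linux section later).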

Repeat these steps for all host server storage assignments.

If you have a single server in a host group that has one or more LUNs assigned to it, it is recommended to assign the mapping to the host and not the host group. All servers having the same host type, for example all Windows NT servers, can be in the same group if you want, but by mapping at the host level you can define which specific server accesses which specific LUN. If you have a cluster, it is good practice to assign the LUNs to the host group, so that all of the servers in the host group have access to the LUNs. In a normal partition, assign the LUNs to the host or host port.

To make the logical drives available to the host systems without rebooting, the DS4000 Utilities package provides the hot_add command-line tool for some operating systems. You simply run hot_add, and all host bus adapters are re-scanned for new devices and the devices are assigned within the operating system.

Delete the Access Logical Volume (LUN 31) – The DS4000 storage system will automatically create a LUN 31 for each host attached. This is used for in-band management, so if you do not plan to manage the DS4000 storage subsystem from that host, you can delete LUN 31, which gives you one more LUN to use per host. If you attach a Linux or AIX 4.3 host to the DS4000 Storage Server, you need to delete the mapping of the access LUN.


NEW > In-band Storage Management is supported on AIX 5.1 and 5.2.

• Right-click on LUN 31 and remove the mapping

Save the configuration:

1. From the Storage Manager window, click Storage Subsystem – Configuration – Save
2. From the Subsystem Management window, click Configure → Save Configuration

Reset MEL log and RLS Statistics

Save Profile

1. To create a storage subsystem profile, select View → Storage Subsystem Profile in the Storage Subsystem Management window and click the Save As button when the Storage Subsystem Profile window opens.


Windows/Intel

To run the Storage Manager Client software on a Windows XP Professional OS, use the software package for Microsoft Windows Server 2003 Intel Architecture 32-bit (IA32). The version of the Storage Manager Client host software installer wizard for 9.15 is SMIA-WS32-09.15.35.03.exe.

1. IBM DS4000 Storage Manager Client version: 09.15.G5.01
2. IBM DS4000 Storage Manager RDAC version: 09.01.35.11 (Note: This RDAC requires the Windows2000-KB822831-x86-ENU.exe hot fix in addition to Service Pack 4 in a Microsoft Windows 2000 operating system environment)
3. IBM DS4000 Storage Manager Agent version: 09.14.35.02
4. IBM DS4000 Storage Manager Utilities version: 09.14.35.00

A Windows "signature" needs to be configured for each host in order to see the drives assigned to it.

NOTE (Dual pathing in Win2K):

1) The paths must be zoned so that the LUN is not seen on both paths at the same time.
2) The LUN and both HBAs must be in the same partition.
3) On each host server, go to Start – Programs – Administrative Tools – Computer Management. Then click Disk Management.
4) The Signature Upgrade Disk Wizard should start.
5) Click to select disks to write a signature to.
6) At the Upgrade to Dynamic Disks screen, deselect all disks and click OK.
7) Right-click unallocated space on the first disk and click Create Partition. The Create Partition Wizard begins.
8) If the Upgrade Disk Wizard doesn't start, right-click DiskX and choose Upgrade to Dynamic Disk.
9) Confirm Primary Partition is selected.
10) Confirm the maximum amount of disk space is selected.
11) Assign a drive letter to this drive.
12) On the Format Partition screen, leave all defaults and perform a Quick Format.
13) Click Finish.
14) Repeat the same process for each drive.
15) Repeat the same process for each host.

DS4300, 4400, 4500, 4800 >> Veritas Volume Manager (DMP) 4.2 is supported with Windows Server 2003 and Windows 2000.
FAStT200, 500 >> Veritas Volume Manager (DMP) 3.1 is supported with Windows 2000.
Refer to http://www-1.ibm.com/support/docview.wss?uid=psg1MIGR-57485&rs=555 for details.


Limitations of Booting from Windows 2000

After the server is configured with DS4000 storage access and path redundancy, the following limitations have been observed:

1. You cannot boot from a DS4000 Storage Server and use it as a clustering device. This is a Microsoft limitation.
2. If there is a path failure and the host is generating I/O, the boot drive will move to the other path. However, while this transition is occurring, the system will appear to freeze for up to 30 seconds.
3. If you have two adapters and reboot the system while the primary path is failed, you must manually go into the QLogic BIOS for both adapters, disable the BIOS on the first adapter, and enable the BIOS on the secondary adapter.
4. You cannot enable the BIOS for both adapters at the same time. If you do, and there is a path failure on the primary adapter (and the adapter is still active), the system will trap with an INACCESSIBLE_BOOT_DEVICE error on reboot.
5. If the boot device (LUN 0) is not on the same path as the bootable HBA port, you will receive an INACCESSIBLE_BOOT_DEVICE error message.
6. If you suffer major path problems (LIPs) or controller panics, the server can hang indefinitely as RDAC tries to find a stable path.
7. By booting from the DS4000 storage device, most of the online diagnostic strategies are effectively canceled, and path problem determination must be done from the Ctrl+Q diagnostics panel instead of DS4000 MSJ.
8. The IDE disk devices should not be re-enabled.

Windows 2003: If the host type of either the Default Group or the defined host ports in a storage mapping partition is set to "Windows NT ...", it can take up to 2 hours to boot a Windows Server 2003 host after installing the Windows Server 2003 RDAC driver. Make a management connection to the storage subsystem via the Ethernet ports (out-of-band management) and set the host type to the "Windows 2000/..." setting.

Rebooting a host server might cause its mapped LUNs to move to a non-preferred path. Use the DS4000 Storage Manager client program to redistribute the LUNs to their preferred paths.

Windows 2000 Service Pack 3 (SP3) Performance Issue: http://support.microsoft.com/default.aspx?scid=kb;en-us;332023

Install FAStT MSJ

1) Locate and execute the FAStT MSJ SETUP.EXE
2) Select GUI or Agent (or both). Install GUI and Agent on the management workstation and Agent on each host server
3) The password to update any advanced features of FAStT MSJ is "config"
4) Run through the features of the FAStT MSJ utility


Ensure Java Plug-in is enabled on management machine

1) Go to Start > Settings > Control Panel.
2) Double-click Internet Options.
3) Click the Advanced tab.
4) Scroll down to Microsoft VM and place a check in the Java Console Enabled box. Click OK.
5) Restart the machine.

BladeCenter attached to a Brocade switch – (This only applies if you don’t have a Brocade module in the BladeCenter.) The Brocade switch must be in interoperability mode to be FC-SW2 compliant. Interoperability mode cannot be set using Brocade’s Web Tools; use the Brocade CLI.

ATTENTION!! This procedure requires a reboot of the switch.

Login: admin
Password: xxxxxxxx
Brocade3800:admin> switchdisable
Brocade3800:admin> interopmode 1   (run this command without the 1 to see its current setting)
Brocade3800:admin> fastboot

Notes:

• The RDAC driver must be digitally signed by Microsoft in order for it to work correctly. Always use the IBM-provided signed RDAC driver package.
• RDAC for Windows supports round-robin load balancing.
• You must always uninstall IBM DS4000 Storage Manager RDAC before you uninstall the host bus adapter driver. Failure to do so may result in a system hang or blue-screen condition.
• If you define a large number of arrays, you may not be able to right-click a logical drive and get a pop-up menu in the Physical View of the Subsystem Management window. The workaround is to use the Logical Drive pull-down menu to select the logical drive options.


Linux

There are two versions of the Linux RDAC: version 09.00.A5.09 is for Linux 2.4 kernels only (such as Red Hat EL 3 and SuSE SLES 8), and RDAC package version 09.01.B5.XX is for Linux 2.6 kernel environments.

• Make sure you read the README.TXT files for the V9.15 Linux RDAC, HBA, and Storage Manager for Linux packages.
• When using the Linux RDAC as the multipathing driver, the "LNXCL" host type must be used.
• The UTM (Access LUN) does not need to be removed from the LNXCL storage partitioning partition.
• When using the Linux RDAC as the failover/failback driver, the host type should be set to LNXCL instead of Linux. If Linux RDAC is not used, the host type of Linux must be used instead.

• The Linux RDAC driver cannot co-exist with an HBA-level multipath failover/failback driver such as the 6.06.63-fo driver. You might have to modify the driver makefile for it to be compiled in non-failover mode.

• Auto Logical Drive Transfer (ADT/AVT) mode is not supported at this time. ADT (AVT) is automatically enabled in the Linux storage partitioning host type; it must be disabled using the script DisableAVT_Linux.scr.

• The Linux SCSI layer does not support skipped (sparse) LUNs. If the mapped LUNs are not contiguous, the Linux kernel will not scan the remaining LUNs, so the LUNs after the skipped LUN will not be available to the host server. Always map LUNs using consecutive LUN numbers.

• Although the host server can have FC HBAs from multiple vendors or different FC HBA models from the same vendor, only one model of FC HBA can be connected to the IBM DS4000 storage servers.

• If a host server has multiple HBA ports and each HBA port sees both controllers (via an un-zoned switch), the Linux RDAC driver may return I/O errors during controller failover.

• Linux SCSI device names can change when the host system reboots. We recommend using a utility such as devlabel to create user-defined device names that map devices based on a unique identifier (UUID). devlabel is available as part of the Red Hat Enterprise Linux 3 distribution, or online at: http://www.lerhaupt.com/devlabel/devlabel.html

• Linux RDAC supports re-scanning to recognize a newly mapped LUN without rebooting the server. The utility program is packaged with the Linux RDAC driver and can be invoked using either the "hot_add" or the "mppBusRescan" command ("hot_add" is a symbolic link to mppBusRescan; there are man pages for both commands). However, the Linux RDAC driver does not support LUN deletion: you must reboot the server after deleting mapped logical drives.
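The rescan described above can be wrapped in a small helper. This is a hedged sketch: mppBusRescan (and its hot_add symlink) ship with the Linux RDAC driver package, but the wrapper function and its fallback message below are this sketch's own, not part of RDAC.

```shell
# Sketch: rescan for newly mapped DS4000 LUNs without rebooting.
# Assumes the Linux RDAC driver package is installed (it provides mppBusRescan).
rescan_ds4000_luns() {
  if command -v mppBusRescan >/dev/null 2>&1; then
    mppBusRescan    # recognizes newly mapped LUNs; hot_add is a symlink to this
  else
    echo "Linux RDAC utilities not found; install the RDAC package first" >&2
    return 1
  fi
}

rescan_ds4000_luns || echo "rescan skipped"
```

Remember that LUN deletion still requires a reboot; this helper only covers newly mapped LUNs.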

Page 40: DS4000 Implementation Cookbook v1.0


AIX Setup Requirements

Make sure you have these fileset versions or later. PTFs/APARs can be downloaded from: http://techsupport.services.ibm.com/server/aix.fdc

AIX 5.3, BASE:
devices.fcp.disk.array.diag 5.3.0.0
devices.fcp.disk.array.rte 5.3.0.20
devices.common.IBM.fc.rte 5.3.0.0
devices.pci.df1000f7.com 5.3.0.21
devices.pci.df1000f7.rte 5.3.0.0
devices.pci.df1000f9.rte 5.3.0.0
devices.pci.df1000fa.rte 5.3.0.10
Important: With AIX 5.3, download the complete maintenance package and update all PTFs together. Do not install each PTF separately.

AIX 5.2, Maintenance Level 4 and the following AIX fileset versions:
devices.fcp.disk.array.diag 5.2.0.30
devices.fcp.disk.array.rte 5.2.0.60
devices.common.IBM.fc.rte 5.2.0.50
devices.pci.df1000f7.com 5.2.0.60
devices.pci.df1000f7.rte 5.2.0.30
devices.pci.df1000f9.rte 5.2.0.30
devices.pci.df1000fa.rte 5.2.0.50

AIX 5.1, Maintenance Level 6 and the following AIX fileset versions:
devices.fcp.disk.array.diag 5.1.0.51
devices.fcp.disk.array.rte 5.1.0.65
devices.common.IBM.fc.rte 5.1.0.51
devices.pci.df1000f7.com 5.1.0.64
devices.pci.df1000f7.rte 5.1.0.37
devices.pci.df1000f9.rte 5.1.0.37

* AIX 4.3 contains no support for features beyond Storage Manager 8.3.

Host bus adapter(s): IBM Feature Code 6227, 6228, 6239 or 5716

Page 41: DS4000 Implementation Cookbook v1.0


HBA Firmware Levels:
FC 6227 - 3.30X1
FC 6228 - 3.91A1
FC 6239 - 1.81X1
FC 5716 - 1.90A4

For booting from the DS, the following HBA firmware levels are required:
FC 6227 - 3.22A1 or above
FC 6228 - 3.82A01 or above
FC 6239 - 1.00X5 or above

EXP100 Limitations

• Booting from a DS4000 subsystem utilizing SATA drives for the boot image is supported, but not recommended for performance reasons.

AIX Configuration and Usage Notes

1. Dynamic Volume Expansion (DVE) is only supported on AIX 5.2 and 5.3. AIX 5.3 must have PTF U499974 installed before using DVE.
2. In-band storage management is supported on AIX 5.1 and 5.2; booting from a DS4000 device is supported only on AIX 5.1 and 5.2.
   SAN (switch) ports should be configured as "F" ports (some switches/directors default ports to type 'Gx'). Use the SAN switch/director management console to force the port to "F".

3. Online upgrades are not supported on AIX 4.3.3. I/O must be quiesced prior to performing the upgrade.
4. Online concurrent firmware and NVSRAM upgrades of FC arrays are only supported when upgrading from 06.10.06.XX to another version of 06.10.XX.XX. There is an exception for the DS4800 for 9.12 to 9.15. Required APARs: AIX 5.1 = IY64463, AIX 5.2 = IY64585, AIX 5.3 = IY64475.
   It is highly recommended that online firmware upgrades be scheduled during low peak I/O loads. Upgrading firmware from 05.xx.xx.xx to 06.xx.xx.xx must be performed with no I/Os. There is no work-around.

5. Interoperability with the IBM 2105 and SDD software is supported on separate HBA and switch zones.
6. Interoperability with tape devices is supported on separate HBA and switch zones.
7. When using FlashCopy, the repository volume failure policy must be set to "Fail FlashCopy logical drive", which is the default setting. The "Fail writes to base logical drive" policy is not supported on AIX, as data could be lost on the base logical drive.
8. It is important to set the queue depth to a correct size for AIX hosts. Too large a queue depth can result in lost filesystems and host panics.
9. F-RAID Manager is not supported.

Page 42: DS4000 Implementation Cookbook v1.0


10. For most installations, AIX hosts attach to the DS4000 with pairs of HBAs. For each adapter pair, one HBA must be configured to connect to controller "A" and the other to controller "B". An AIX host with 4 HBAs requires you to configure 2 DS partitions (or host groups).

11. Each AIX host (server) can support 1 or 2 DS4000 partitions (or host groups), each with a maximum of 256 logical drives (AIX 5.1 or 5.2 and SM8.4).
    a. AIX 4.3.3 is restricted to 32 logical drives on each partition.

12. Single-switch configurations are allowed, but each HBA and DS4000 controller combination must be in a separate SAN zone.
13. Single-HBA configurations are allowed, but each single-HBA configuration requires that both controllers in the DS4000 be connected to the host.
    • In a switch environment, both controllers must be connected to the switch within the same SAN zone as the HBA.
    • In direct-attach configurations, both controllers must be "daisy-chained" together. This can only be done on the DS4400/4500 (FAStT700/900).

14. When you boot from a DS4000 device, both paths to the boot device must be up and operational.
15. Path failover is not supported during the AIX boot process. Once the AIX host has started, failover operates normally.

SAN ZONING Notes:

1. Multiple HBAs within the same server cannot "see" the same DS4000 controller port.
2. The HBAs must be isolated from each other (zoned) if they are connected to the same switch that is connected to the same DS4000 controller port.
3. Each fibre-channel HBA and controller port must be in its own fabric zone if they are connecting through a single fibre-channel switch, such as a 2109-F16.

Recommendation: The DS4000 should be configured with at least 1 LUN assigned to the AIX server before the AIX server is allowed to see the DS. This prevents problems with the auto-generated dac/dar relationship.

For most direct-attach applications, the connections (FC-AL) of DS4000 storage arrays on AIX should be configured with two HBAs for complete path availability. As such, dual-path configurations are restricted to the following:

DS4300 (600) - one or two server configurations only (2 or 4 HBAs). Each HBA pair must be connected to both the A and B host-side controller ports.
DS4400 (700)/DS4500 (900) - one or two server configurations only (2 or 4 HBAs). Each HBA pair must be connected to both the A and B controllers. Only 1 connection on each host-side mini-hub can be used.

Single-HBA configurations are allowed, but each single-HBA configuration requires that both controllers in the DS4000 be connected to the host. In a switch environment, both controllers must be connected to the switch within the same SAN zone as the HBA. In direct-attach configurations, both controllers must be "daisy-chained" together.

AIX RESTRICTION - Restrictions when booting up your system

• If you create more than 32 LUNs on a partition, you cannot use the release CD to install AIX on a DS4000 device on that partition. Therefore, if your system boots from a DS4000 device, do not create more than 32 LUNs on the partition that you boot from.

Page 43: DS4000 Implementation Cookbook v1.0


• When you boot your system from a DS4000 device, both paths to the DS4000 storage server must be up and running. The system cannot use path failover during the AIX boot process. Once the AIX host has started, failover operates normally.

• You cannot boot your system from any device that has one or more EXP100 SATA expansion units attached.

Partitioning restrictions

• The maximum number of partitions per AIX host, per DS4000 storage server, is two.
• All logical drives that are configured for AIX must be mapped to an AIX host group. For more information, see "Storage Partitioning: Defining an AIX host group".
• On each controller, you must configure at least one LUN with an ID between 0 and 31 that is not a UTM or access logical drive.

AIX RDAC (FCP.ARRAY filesets)

All AIX hosts in your storage subsystem must have the RDAC multipath driver installed.

In a single-server environment, AIX allows load sharing (also called load balancing); you can set the load_balancing parameter to yes. In case of heavy workload on one path, the driver moves LUNs to the controller with less workload and, when the workload subsides, back to the preferred controller. A problem that can occur is disk thrashing: the driver moves a LUN back and forth from one controller to the other, so the controllers are more occupied with moving disks around than with servicing I/O. The recommendation is to NOT load balance on an AIX system; the performance increase is minimal (performance could actually get worse). RDAC (fcp.array filesets) for AIX supports round-robin load-balancing.

Setting the attributes of the RDAC driver for AIX

The AIX RDAC driver files are not included on the DS4000 installation CD. Either install them from the AIX operating system CD, if the correct version is included, or download them from the following Web site: techsupport.services.ibm.com/server/fixes

You must change some of these parameters for AIX to operate properly, while others can be changed to enhance the operability of your configuration.

• Attribute settings for dar devices: for multi-initiator configurations, the autorecovery attribute must be set to no.
• On single-host systems, the load_balancing attribute can be set to yes (I usually recommend no).
• On multihost systems, the load_balancing attribute must be set to no.
• Setting the queue_depth attribute to the appropriate value is important for system performance. For large, multihost configurations, always set the attribute to less than 10.
• Use the following formula to determine the maximum queue depth for your system:

512 / (number-of-hosts * LUNs-per-host)

Attention: If you do not set the queue depth to the proper level, you might experience loss of filesystems and system panics.
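A worked example of the formula above, with illustrative host and LUN counts. The chdev line is commented out because the hdisk name is hypothetical and the change requires the disk to be free for reconfiguration; it only shows where the computed value would be applied.

```shell
# Maximum queue depth = 512 / (number-of-hosts * LUNs-per-host)
hosts=4            # illustrative: 4 hosts attached to the DS4000
luns_per_host=32   # illustrative: 32 LUNs mapped to each host
max_queue_depth=$(( 512 / (hosts * luns_per_host) ))
echo "max queue depth: $max_queue_depth"   # prints: max queue depth: 4
# Then apply it per disk on AIX (hypothetical device name):
#   chdev -l hdisk3 -a queue_depth=$max_queue_depth
```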

fast_fail - Enables fast I/O failure. Fast I/O failure can be useful in multipath configurations: it can decrease I/O fail times due to link loss between the storage device and the switch, and can allow faster failover to alternate paths.

>> chdev -l fscsi0 -a fc_err_recov=fast_fail (do this for every HBA)

Page 44: DS4000 Implementation Cookbook v1.0


delayed_fail is the default setting and should be used in a single-path environment.

Notes:

1. The fast_fail attribute only affects failover that occurs between the switch and the DS4000 storage server. It does not affect failover that occurs between the host and the switch.

2. Set the fast_fail attribute on each HBA that is configured to the DS4000 storage server. 3. You can use fast I/O failure only in a SAN environment. You cannot use it in a direct-attach environment.

HACMP NOTES

Configuration limitations The following limitations apply to HACMP configurations:

• Switched fabric connection only; no direct connection is allowed between the host nodes and the DS4000.
• HACMP C-SPOC cannot be used to add a DS4000 disk to AIX using the Add a Disk to the Cluster facility.
• HACMP C-SPOC does not support enhanced concurrent mode volume groups.
• Single-node quorum is not supported in a two-node GPFS cluster with DS4000 disks in the configuration.

Other HACMP usage notes The following notations are specific to HACMP environments:

• HACMP clusters can support from two to 32 servers on each DS4000 partition. If you run this kind of environment, be sure to read and understand the AIX device driver queue depth settings.
• You can attach non-clustered AIX hosts to a DS4000 that is running DS4000 Storage Manager and is attached to an HACMP 4.5 cluster. However, you must configure the non-clustered AIX hosts on separate host partitions on the DS4000.
• The DS4300 (600) can only be connected to 2 Gb switches, directors, or HBAs.
• Every adapter in the AIX system can see only one controller (these are AIX-specific zoning restrictions).

Discover Disks

After the DS4000 storage subsystem has been set up, volumes have been assigned to the host, and the RDAC driver has been installed:

• Run # cfgmgr -v
• Verify that the device driver recognizes the DS: run lsdev -Cc disk

Connectivity

• Determine how many DS4000s are seen by the AIX server: there should be 1 dar for each DS4000. If you see more than one dar per DS4000, check your zoning, rmdev all your devices, and run cfgmgr.

Page 45: DS4000 Implementation Cookbook v1.0


lsdev -C | grep dar
dar0 Available 1742 Disk Array Router

• Determine how many DS4000 controllers are seen by the AIX server:

lsdev -C | grep dac
dac0 Available 91-08-01 1742 Disk Array Controller
dac1 Available 11-08-01 1742 Disk Array Controller

• Using the fget_config command

The fget_config command displays the current owner of each hdisk; it is a quick way to determine which LUN (hdisk) is actively owned by which controller. The following example shows that both controllers (dac0 and dac1) are in the Active state, which is normal when the DS4000 storage subsystem is configured correctly. Other possible states are:
NONE > The controller is not defined or is offline.
RESET > The controller is in the reset state.

fget_config -A | -l | -v dar_name
-A Displays output for all the configured dars in the subsystem. If you use this parameter, do not specify a dar name.
-l Displays output only for the dar that you specify.
-v Displays more information about the dar, such as the user array names that were created when you configured the DS4000 subsystem.

• Check the attribute settings for a disk array router (dar0)

lsattr -El dar0

If you want or need to change the preferred ownership of LUNs from one controller to another (for example, from controller A to controller B), you must rmdev all of the hdisks from the AIX system; otherwise your changes on the DS4000 will revert. Make sure your SAN zoning matches!

rmdev -dl hdiskx > removes the hdisk; run for each hdisk
rmdev -dl dac0 > removes DS4000 controller A
rmdev -dl dac1 > removes DS4000 controller B
rmdev -dl dar0 > removes the DS
rmdev -Rdl fcsx > removes the fibre adapter and other child devices
cfgmgr -l <hba> > optional; done to make sure the HBAs are there before bringing the disks in

Page 46: DS4000 Implementation Cookbook v1.0


cfgmgr -v > maps the DS4000 LUNs to the AIX system.

To find the LUN number of an hdisk, run:
lsattr -El hdiskx
lscfg -vl hdiskx
(the output of one of the commands may be sufficient). The LUN number is in the output of either lsattr or lscfg, and you can compare it to the DS4000 profile file.

Data collection on an AIX host
_________________________________________________________________________________
Run "snap -gfiLGc" and redirect the output to a file. Valid formats for the output file are: snap.tar.Z, snap.tar, snap.pax.Z and snap.pax. Upload the file to the Austin Datacase server @ http://fieldsupport.austin.ibm.com/cgi-bin/pfe/pfe.pl

Page 47: DS4000 Implementation Cookbook v1.0


Solaris Setup – No DMP

The DS4000 Storage Manager version 9.14 and higher host software installer wizard requires a graphics adapter in the Sun Solaris server in order to run. For Sun Solaris servers without a graphics adapter, individual host software installation packages are provided on the DS4000 Storage Manager Version 9.1x (where x is an appropriate release version, such as 9.14, 9.15, ...) Support for Sun Solaris OSes CD under the directory named "/Solaris/Individual packages". The version of the host software installer wizard for this release is SMIA-SOL-09.15.05.03.bin (09.15.xx.x3, dated 7/12/2005).

1. IBM DS4000 Storage Manager Client version: 09.15.G5.01
2. IBM DS4000 Storage Manager RDAC version: 09.10.05.01
3. IBM DS4000 Storage Manager Agent version: 09.14.05.03
4. IBM DS4000 Storage Manager Utilities version: 09.14.05.01
5. IBM DS4000 Storage Manager RunTime version: 09.14.05.01

The patches listed in this document can be superseded by more recent versions.

1. Solaris 7 with the following patches (minimum versions): 106541-23; 108376-42 (or later); 107544-03
2. Solaris 8 with the following patches (minimum versions): 06 Jumbo Patch - 108528-18; 111293-04; 111310-01; 111111-03; 108987-12
3. Solaris 9.0 > Request the patch associated with Sun BugID #4630273. Install 113454-14.
4. RDAC and dynamic multipathing (DMP) coexistence is not supported.
5. Dynamic multipathing (DMP) requires the installation of VERITAS Volume Manager.
6. Verify interoperability at > http://www.storage.ibm.com/disk/FAStT/pdf/interop-matrix.pdf
7. Multiple HBAs within the same server must be unable to "see" the same DS4000 controller port.
8. The JNI HBAs must be isolated from each other if they are connected to the same switch that is connected to the same DS4000 controller port.
9. Each HBA and controller port must be in its own fabric zone if they are connecting through a single switch, such as a 2109-F16.
10. When configuring DS4000 logical volumes to be used with VERITAS Volume Manager on Solaris, the maximum capacity per disk is limited to 999 GB. For 1024 GB disk group support using VERITAS Volume Manager, it is recommended to use multiple disks of sizes less than 999 GB.
11. Auto LUN/Array Transfer is not supported. If a controller fails over arrays to the alternate controller and the failed controller is replaced and brought back online, the arrays do not automatically transfer back to the preferred controller. This must be done manually by redistributing arrays.

Note: Before RDAC is installed, Solaris will see each LUN once per HBA in the host; if you have one LUN and two HBAs, it will see two LUNs, one for each target (HBA).

DO NOT INSTALL RDAC BEFORE YOU CONFIGURE AND EDIT YOUR HOST BUS ADAPTER SETTINGS AND SD.CONF

Page 48: DS4000 Implementation Cookbook v1.0


ZONE:

• Attach the JNI HBAs to the SAN switch.
• Attach the DS4000 controllers to the SAN switch.
• Create an alias for each device connected to the switch and give it a name that tells you what it is, for example: DS_controllerA_port1
• Create a zone that separates each HBA and controller. DO NOT PUT BOTH FAStT CONTROLLERS INTO THE SAME ZONE!
• Reboot Solaris and look in the /var/adm/messages file.
  o Make a note of what target ID was assigned to each HBA >>> jnic146x0, jnic146x1, jnic146x2, jnic146x3, etc.

Configuring the HBA and Persistent Binding - Do not begin unless you have attached to the SAN and completed the zoning

1. Download and install the most current adapter driver package. Refer to the IBM website for the correct HBA driver package: http://knowledge.storage.ibm.com/HBA/HBASearch

The RDAC package includes shell scripts that you can use to configure JNI cards for use with connections through fibre-channel switches.

a. For FCI-1063 cards: /etc/raid/bin/genfcaconf
b. For all jnic.conf cards: /etc/raid/bin/genjniconf

Page 49: DS4000 Implementation Cookbook v1.0


c. For FC64-1063 and all other cards, /etc/raid/bin/genscsiconf, which calls other scripts as appropriate

2. Open and edit the JNI configuration file.
3. Loop settings: applies to the FCE-6460, FCE2-1473, or FCE-21473

i. If you have a Brocade 2 Gb switch, you need to force the HBA to be a public loop device:
FcLoopEnable=1
FcFabricEnable=1

ii. If you have a Cisco, McData, or Brocade 1 Gb switch, the loop setting will be a private loop device:
FcLoopEnable=0
FcFabricEnable=1

This is an example of the JNI loop settings. Note that there are two jnic146x entries, a target 0 and a target 1, because there are two HBAs. Make sure you set these parameters for each HBA installed in the server.

jnic146x0-FcLoopEnabled=1;
jnic146x0-FcFabricEnabled=0;
jnic146x1-FcLoopEnabled=1;
jnic146x1-FcFabricEnabled=0;

You can determine which target was assigned to each HBA by looking in the /var/adm/messages file. You should have a Solaris target ID for each HBA:

Hba: JNI,FCR Model: FCE-6460-N
May 6 10:02:06 solar unix: jnic146x0: FCode: Version 3.8.9 [ba79]
Hba: JNI,FCR Model: FCE-6460-N
May 6 10:02:06 solar unix: jnic146x1: FCode: Version 3.8.9 [ba79]

Persistent Binding: This function allows a subset of discovered targets to be bound to an HBA. Solaris does not guarantee that devices will always be allocated the same SCSI target and LUN IDs after a reboot. Once a configuration has been set, it will survive reboots and any hardware configuration changes. Binding can be implemented by WWNN or WWPN.

1. Make note of the WWPN of the DS4000 controllers that each JNI HBA has seen. You can see this in the messages file too, by looking for an entry like this:

May 6 10:02:11 solar unix: jnic146x0: Port 011000 (WWN 200200a0b80f478e:200300a0b80f478f) online.

Page 50: DS4000 Implementation Cookbook v1.0


So the WWPN 200300a0b80f478f is the WWPN of the DS4000 controller, and it will be bound to target 0.

target0_hba = "jnic146x0";
target0_wwpn = "200300a0b80f478f";
target1_hba = "jnic146x1";
target1_wwpn = "200400a0b80f478f";

2. Uncomment and change the other HBA settings according to the Installation Guide for Solaris (FailoverDelay, JniCreationDelay, etc.).
3. Edit the sd.conf file to reflect your bindings.
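As an illustration of step 3, a hypothetical sd.conf fragment matching the two bound targets above might look like the following. The target numbers and LUN counts are examples only and must agree with your <hba>.conf bindings and the LUNs mapped to the host:

```
name="sd" class="scsi" target=0 lun=0;
name="sd" class="scsi" target=0 lun=1;
name="sd" class="scsi" target=1 lun=0;
name="sd" class="scsi" target=1 lun=1;
```

After editing sd.conf, a reconfiguration reboot (reboot -- -r) is needed for Solaris to probe the new entries.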

Complete the installation and verify:

1. Install RDAC according to the installation guide.
2. Verify with SMdevices.

FAILOVER: Auto Volume Transfer (AVT) is disabled on Solaris hosts. Therefore, if a controller failover occurs, you must manually redistribute logical drives to their preferred paths once the failed controller is repaired. Complete the following steps to redistribute logical drives to their preferred paths:

• Repair or replace any faulty components.
• Redistribute volumes to their preferred paths by clicking Subsystem Management —> Storage Subsystem —> Redistribute Logical Drive.

The following is a list of packages that are known to cause problems, as of 11/16/03. Please call IBM Support:
rm5, rm6, symsm, SUNWosamn, SUNWosafw, SUNWosar, SUNWosau, stkarray, fcsm, SMasl

Setting the parameters for the Sun host system

The sd_max_throttle parameter specifies the maximum number of commands that the system can queue to the HBA device driver. This value is global, affecting each sd (disk) device recognized by the driver. The sd_max_throttle variable assigns the default value lpfc will use to limit the number of outstanding commands per sd device.

Page 51: DS4000 Implementation Cookbook v1.0


The default (and maximum) value is 256, but you must set the parameter to a value less than or equal to the maximum queue depth for each connected LUN. Determine the value by using the following formula:

256 / LUNs per adapter = sd_max_throttle value

where LUNs per adapter is the largest number of LUNs assigned to a single adapter. To set the sd_max_throttle parameter for the DS LUNs, add the following line to the /etc/system file (round the calculated value down to a whole number):

set sd:sd_max_throttle=<calculated value>

For example, consider a server with two HBAs installed, 10 LUNs defined to HBA1, and 16 LUNs defined to HBA2. HBA1 = 256 / 10 = 25.6 and HBA2 = 256 / 16 = 16. Rounding down yields 25 for HBA1 and 16 for HBA2. In this example, the correct sd_max_throttle setting is the lowest value obtained, or 16.

Solaris SAN - Please follow the installation and planning guide for Solaris. The following diagram was provided by Murray Finch to help you understand the persistent binding requirement for Solaris.
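The calculation above can be sketched in shell. The LUN counts are the example values from the text; integer division performs the round-down:

```shell
luns_hba1=10   # LUNs defined to HBA1 (example)
luns_hba2=16   # LUNs defined to HBA2 (example)
t1=$(( 256 / luns_hba1 ))   # 25 -- 25.6 rounded down
t2=$(( 256 / luns_hba2 ))   # 16
# sd_max_throttle is global, so use the lowest value across all adapters:
if [ "$t1" -lt "$t2" ]; then sd_max_throttle=$t1; else sd_max_throttle=$t2; fi
echo "set sd:sd_max_throttle=$sd_max_throttle"   # line to append to /etc/system
```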

Page 52: DS4000 Implementation Cookbook v1.0

[Figure: "Persistent WWPN Binding" (Murray Finch, 10/27/2003) - a graphical representation of persistent binding in Solaris using FAStT storage. It depicts a Sun server with two adapters ('0' and '1') connected through switches, via primary and alternate paths, to FAStT Controller A (mini-hub P1, wwpn="200800a0b80f1840", owning LUN 0) and FAStT Controller B (mini-hub P2, wwpn="200900a0b80f1840", owning LUN 1). The <hba>.conf entries bind each target to a controller WWPN (e.g. name="fca-pci" parent="/pci@1f,40..." unit-address="1" target1_hba="fca-pci0" target1_wwpn="200800a0b80f1840";), the sd.conf entries enumerate the LUNs per target (name="sd" class="scsi" target=1 lun=0; target=1 lun=1; ... target=2 lun=0; ...), and SMdevices output confirms the mapping, e.g. /dev/rdsk/c3t0d0s2 [Storage Subsystem CLSS700, Logical Drive CL_L0, LUN 0, Preferred Path (Controller-A): In Use] and /dev/rdsk/c2t1d1s2 [Storage Subsystem CLSS700, Logical Drive CL_L1, LUN 1, Preferred Path (Controller-B): In Use]. The figure's note: "This is a graphical representation of persistent binding in Solaris, using FAStT Storage. It is intended to provide some understanding of the relevant components (sd.conf file, <hba>.conf file, and SMdevices output). This document is not a replacement for the SMclient Installation Guide for Unix."]