Sun Installation & Configuration Participant Guide


Contents

Sun Storage 6000 Product Line Installation and Configuration Course Description . . . xi
    Sun Storage 6x80 product overview . . . xiii
    Sun Storage 6540 product overview . . . xiii
    Sun Storage 6140 product overview . . . xiii
    Sun Storage CSM200 expansion module overview . . . xiii
    Sun Storage hardware installation . . . xiv
    Sun Storage Common Array Manager . . . xiv
    Array configuration using Sun Storage Common Array Manager . . . xv
    Storage Domains . . . xv
    Integrated data services: Snapshot . . . xvi
    Integrated data services: Volume Copy . . . xvi
    Integrated data services: Remote Replication . . . xvi
    Monitoring performance and dynamic features . . . xvi
    Problem determination . . . xvii
    Maintaining the storage array . . . xvii
    SSCS and Command Line Interface . . . xvii

Preface: About this course . . . 1
    Course goals . . . 1

Sun Storage 6x80 product overview . . . 3
    Objectives . . . 3
    Sun Storage modular disk family positioning . . . 4
    The mid-range family . . . 4
    Compare the Sun Storage 6140, 6540, and 6x80 arrays . . . 5
    High performance computing with the 6x80 . . . 6
    6x80 controller module overview . . . 7
    6x80 controller module: Front view . . . 8
    6x80 controller module power-fan canister . . . 9
    Interconnect-battery canister . . . 11
    6x80 controller module: Power distribution . . . 15
    6x80 controller: Inside view . . . 17
    Cache DIMM memory . . . 17
    USB persistent cache . . . 18
    Host cards . . . 19
    Controller base board . . . 19
    6x80 controller module: Back view . . . 20
    Host ports . . . 21
    Drive ports . . . 21
    Other ports . . . 22
    Controller LEDs . . . 23
    6x80 summary . . . 28
    Knowledge check . . . 31

Sun Storage 6540 Product Overview . . . 33
    Objectives . . . 33
    Sun Storage 6540 product overview . . . 34
    Hardware overview: Components of the Sun Storage 6540 . . . 36
    Controller Module . . . 37
    Power-fan canister . . . 39
    Interconnect-battery canister . . . 42
    Power distribution and battery system . . . 46
    6540 controller canister highlights . . . 48
    6540 controller canister . . . 49
    6540 4Gb/s host interface ports . . . 51
    6540 4Gb/s disk expansion ports . . . 52
    6540 drive channels and loop switches . . . 54
    Dual 10/100 Base-T Ethernet ports with EEPROM . . . 55
    Serial port connector . . . 56
    Seven segment display . . . 57
    Controller service indicators . . . 59
    6540 summary . . . 62
    Knowledge check . . . 64

Sun Storage 6140 product overview . . . 67
    Objectives . . . 67
    Sun Storage 6140 product overview . . . 68
    Compare the Sun StorEdge™ 6130 and the Sun Storage 6140 Arrays . . . 69
    Hardware components of the Sun Storage 6140 . . . 70
    Hardware overview . . . 72
    Controller module . . . 72
    DACstore . . . 75
    Back view of controller module . . . 82
    6140 controller module details . . . 83
    The 6140 controller canister . . . 84
    Battery . . . 91
    The power-fan canister . . . 93
    Controller architecture . . . 95
    6140 summary . . . 95
    Knowledge check . . . 97

Sun Storage CSM200 expansion module overview . . . 99
    Objectives . . . 99
    Sun Storage CSM200 expansion module overview . . . 100
    Hardware overview . . . 100
    Hardware components of the Sun Storage 6x80 and 6540 . . . 100
    CSM200 expansion module . . . 101
    CSM200 expansion module - Front view . . . 102
    DACstore . . . 104
    CSM200 expansion module - Back view . . . 111
    Architecture overview . . . 119
    Switched bunch of disks (SBOD) architecture . . . 119
    CSM200 summary . . . 120
    Knowledge check . . . 121

Sun Storage 6000 hardware installation . . . 123
    Objectives . . . 123
    Overview of the installation process . . . 124
    Cabling procedures . . . 125
    Cable types . . . 125
    Recommended cabling practices . . . 127
    Cabling for redundancy – Top-down-bottom-up . . . 128
    Cabling for performance . . . 129
    Hot-adding an expansion module . . . 131
    Cabling summary . . . 134
    Recommended cabling practices for the 6x80 . . . 135
    Recommended cabling practices for the 6540 and 6140 . . . 137
    Considerations for drive channel speed . . . 145
    Proper power procedures . . . 146
    Turning on the power . . . 146
    Turning off the power . . . 148
    Set the controller IP addresses . . . 149
    Configuring dynamic IP addressing . . . 149
    Configuring static IP addressing . . . 149
    Serial port service interface . . . 150
    Serial port recovery interface procedure . . . 151
    Use the hardware compatibility matrix to verify SAN components . . . 152
    Attach the host interface cables . . . 153
    Host cabling for redundancy . . . 153
    Connecting data hosts directly . . . 154
    Connecting data hosts through an external FC switch . . . 154
    Hardware installation summary . . . 155
    Knowledge check . . . 156

Sun Storage Common Array Manager . . . 159
    Objectives . . . 159
    What is Sun Storage Common Array Manager? . . . 160
    The CAM interface . . . 162
    SMI-S overview . . . 162
    Software components . . . 164
    Sun Storage Management host software . . . 164
    CAM management methods . . . 165
    Out-of-band management method . . . 165
    In-band management method . . . 167
    Sun Storage data host software . . . 169
    Host Bus Adapter (HBA): Compatibility and configuration . . . 170
    Multi-path drivers . . . 170
    Common Array Manager installation . . . 175
    Firmware and NVSRAM files . . . 176
    Sun Storage Common Array Manager navigation . . . 177
    Common Array Manager banner . . . 178
    Common Array Manager’s navigation tree . . . 179
    Common Array Manager’s content area . . . 180
    Additional navigation aids . . . 180
    Administration functions and parameters . . . 182
    Accessing the management software . . . 182
    Auto Service Request (ASR) . . . 182
    Initial Common Array Manager configuration . . . 183
    Configure IP addressing . . . 184
    Naming an array . . . 184
    Configuring the array password . . . 185
    Setting the array time . . . 185
    Default host type . . . 185
    Adding additional users . . . 186
    Setting module IDs . . . 186
    Common Array Manager summary . . . 186
    Knowledge check . . . 188

Array configuration using Sun Storage Common Array Manager . . . 189
    Objectives . . . 189
    Common Array Manager configuration components . . . 190
    Creating a volume with Common Array Manager . . . 192
    Storage profiles . . . 193
    Storage pools . . . 197
    Volumes . . . 197
    Volume configuration preparation . . . 198
    Volume parameters . . . 199
    Virtual Disks . . . 201
    Administration functions and parameters . . . 202
    Auto Service Request (ASR) . . . 202
    Array name . . . 203
    Default host type . . . 204
    Hot spares . . . 204
    Storage array cache settings . . . 205
    Disk Scrubbing . . . 206
    Failover alert delay . . . 206
    Array time . . . 206
    Manage passwords . . . 207
    Array configuration summary . . . 207
    Knowledge check . . . 209

Storage Domains . . . 211
    Objectives . . . 211
    What are Storage Domains? . . . 212
    Storage Domains benefits (pre-sales) . . . 213
    Storage Domains benefits (technical) . . . 214
    Storage Domain terminology . . . 215
    Steps for creating a Storage Domain . . . 218
    How Storage Domains work . . . 220
    What the host sees . . . 222
    What the storage array sees . . . 223
    Storage Domains - How many domains are required? . . . 224
    LUNs - How do you number these LUNs? . . . 225
    Summary of creating Storage Domains . . . 225
    Storage Domains summary . . . 226
    Knowledge check . . . 227

Integrated data services – Snapshot . . . 231
    Objectives . . . 231
    Data services overview . . . 232
    Snapshot . . . 233
    Snapshot terminology . . . 233
    Snapshot - Benefits . . . 237
    Pre-Sales benefits . . . 237
    Technical benefits . . . 238
    How does Snapshot work? . . . 239
    Examples of how Snapshot works . . . 241
    Disabling and recreating . . . 249
    Snapshot considerations . . . 250
    Creating Snapshots . . . 251
    Creating a Snapshot . . . 251
    Calculating Reserve Volume capacity . . . 253
    Creating a Snapshot . . . 254
    Snapshot summary . . . 254
    Knowledge check . . . 255

Integrated data services – Volume Copy . . . 257
    Objectives . . . 257
    Volume Copy overview . . . 258
    Volume Copy terminology . . . 259
    Volume Copy – Benefits (pre-sales) . . . 261
    Volume Copy - Benefits (technical) . . . 263
    How Volume Copy works . . . 264
    Factors affecting Volume Copy . . . 265
    Volume Copy states . . . 265
    Volume Copy – Read/write restrictions . . . 267
    Creating a Volume Copy . . . 268
    Functions that can be performed on a copy pair . . . 268
    Recopying a volume . . . 268
    Stopping a Volume Copy . . . 269
    Removing Copy Pairs . . . 270
    Changing Copy priority . . . 270
    Volume permissions . . . 271
    Volume Copy compatibility with other data services . . . 271
    Storage domains . . . 272
    Snapshot . . . 272
    Remote Replication . . . 273
    Configuring a Volume Copy . . . 274
    Configuring a Volume Copy with Common Array Manager . . . 274
    Enabling the Volume Copy feature . . . 275
    Creating a Volume Copy . . . 276
    Recopying a Volume Copy . . . 277
    Changing the copy priority . . . 277
    Stopping a Volume Copy . . . 278
    Removing Copy Pairs . . . 279
    Volume permissions . . . 280
    Volume Copy summary . . . 280
    Knowledge check . . . 282

Integrated data services – Remote Replication . . . 283
    Objectives . . . 283
    Remote Replication overview . . . 284
    Benefits of Remote Replication . . . 285
    Remote Replication terminology . . . 286
    Summary of Remote Replication modes . . . 294
    Technical features of Remote Replication . . . 295
    Remote replication distances . . . 296
    Configuring data replication with CAM . . . 297
    Activating and deactivating data replication . . . 298
    Disabling data replication . . . 299
    Configuring the hardware for data replication . . . 300
    Setup the hardware . . . 300
    Creating replication sets . . . 302
    What happens when an error occurs? . . . 305
    Suspend and resume . . . 307
    Role reversal . . . 309
    Changing replication modes . . . 310
    Testing replication sets . . . 311
    Removing a mirror relationship . . . 311
    Remote Replication summary . . . 312
    Knowledge check . . . 313

Monitoring performance and dynamic features . . . 315
    Objectives . . . 315
    First principle of storage array performance . . . 316
    40/30/30 rule . . . 317
    Context for performance tuning . . . 318
    Analyzing I/O characteristics . . . 319
    Factors that affect storage array performance . . . 320
    Cabling . . . 320
    Choosing a disk type . . . 322
    Selecting a RAID level . . . 323
    Number of spindles in a v-disk . . . 326
    Calculating an optimal segment size . . . 329
    Cache parameters . . . 331
    Read Caching Pre-fetch enabled . . . 332
    Enabling write caching and enabling write caching with mirroring . . . 332
    Number of volumes in a virtual disk . . . 333
    Choosing an optimal volume modification priority . . . 333
    Setting array-wide global parameters . . . 334
    Performance Monitor . . . 336
    The Performance Monitor pages . . . 336
    Fine tuning . . . 336
    Performance and dynamic features summary . . . 339
    Knowledge check . . . 340

Problem determination . . . 343
    Objectives . . . 343
    Problem determination . . . 344
    Utilizing the tools available for problem determination . . . 344
    Visual Cues . . . 344
    Compatibility matrix . . . 345
    Problems and recovery . . . 345
    Service Advisor . . . 346
    Collect support data through the command line . . . 348
    Support Data bundle . . . 348
    Fault Management Service (FMS) . . . 350
    Alarms . . . 351
    FRU - Field Replaceable Units . . . 356
    Events . . . 357
    Array administration . . . 358
    Health administration . . . 359
    Notification . . . 360
    Activity log . . . 361
    Problem determination summary . . . 362
    Knowledge check . . . 363

Maintaining the storage array . . . 365
    Objectives . . . 365
    Dynamic volume expansion (DVE) . . . 366
    Disk scrubbing . . . 367
    Installing baseline firmware . . . 368
    Upgrading to 7.xx firmware . . . 369
    Command line firmware upgrade utility . . . 369
    Maintaining the storage array summary . . . 370
    Knowledge check . . . 371

SSCS and Command Line Interface . . . 373
    Objectives . . . 373
    Sun Storage Common Array Manager CLI (SSCS) . . . 374
    Features . . . 374
    Benefits . . . 374
    Usage . . . 374
    Other useful information to collect . . . 375
    Other command line interface tools . . . 376
    Fault Management Service (ras_admin) . . . 376
    Command Service Module (csmservice) . . . 376
    Collect support data . . . 377
    Service command line . . . 378
    SSCS and CLI summary . . . 380
    Knowledge check . . . 381

Appendix A . . . 383
    Glossary of acronyms . . . 383
    References . . . 383

Appendices . . . 389
    Knowledge check solutions . . . 391
        Sun Storage 6x80 product overview . . . 391
        Sun Storage 6540 product overview . . . 394
        Sun Storage 6140 product overview . . . 396
        Sun Storage CSM200 expansion module overview . . . 398
        Sun Storage 6000 hardware installation . . . 399
        Sun Storage Common Array Manager . . . 402
        Array configuration using Sun Storage Common Array Manager . . . 403
        Storage Domains . . . 405
        Integrated data services: Snapshot . . . 408
        Integrated data services: Volume Copy . . . 409
        Integrated data services: Remote Replication . . . 410
        Monitoring performance and dynamic features . . . 412
        Maintaining the storage array . . . 414
        Problem determination . . . 415
        SSCS and Command Line Interface . . . 416


Sun Storage 6000 Product Line Installation and Configuration Course Description

Course overview: This technical training course contains information about the operation and management of the Sun Storage 6000 modular product line of storage arrays. The basic objective of this course is to familiarize individuals with the essential concepts associated with the configuration of the Common Array Manager (CAM) software and 6000 disk storage arrays. The information contained herein is derived from end-user publications and engineering data. It reflects the latest information available at the time of printing but does not include modifications made after the date of publication. In all cases, if there is a discrepancy between this information and official publications issued by Sun, the Sun official publications take precedence.

Prerequisites: A basic understanding of computer networks, RAID technology, storage area network (SAN) terminology, Fibre Channel topology and operating systems such as Windows and Solaris.

Course description: This course focuses on identifying Sun Storage 6000 modular product line hardware and using Common Array Manager software to configure 6000 storage arrays. Participants will have the opportunity to install and configure the Common Array Manager and map LUNs to hosts. They will also learn how to use CAM to configure, tune, and maintain a storage array. The data services Storage Domains, Snapshot, Volume Copy, and Remote Replication will be covered.

Copyright 2008 Sun Microsystems, Inc. All Rights Reserved. Sun Services, June 2009, Revision 3.0



Course length: Approximately three days.

Course objectives: After completing this course, you will be able to:
• Recognize Sun 6000 modular product line components
• Install Common Array Manager software
• Configure Sun 6000 modular product line components using Common Array Manager
• Explain the data services available in Common Array Manager
• Perform basic problem determination functions on Sun 6000 modular product line components

Course topics:
• Sun Storage 6x80 Product Overview
• Sun Storage 6540 Product Overview
• Sun Storage 6140 Product Overview
• Sun Storage CSM200 Expansion Module Overview
• Sun Storage 6000 Hardware Installation
• Sun Storage Common Array Manager
• Array Configuration using Sun Storage Common Array Manager
• Storage Domains
• Integrated Data Service: Snapshot
• Integrated Data Service: Volume Copy
• Integrated Data Service: Volume Replication
• Monitoring Performance and Dynamic Functions
• Problem Determination
• Maintaining the Storage Array
• SSCS and Command Line Interface




Course outline:

Sun Storage 6x80 product overview
Objectives: After completing this module, you will be able to:
• Describe how the key features of 6x80 storage systems address the needs of high performance computing environments
• Identify the canisters of the 6x80 controller module
• Identify the upgradable components inside the controller canister
• Identify the LEDs of 6x80 controller modules

Sun Storage 6540 product overview
Objectives: After completing this module, you will be able to:
• Describe the Sun Storage 6540 key features
• Identify the hardware components of the 6540 controller module
• Describe the functionality of the 6540 components
• Interpret LEDs for proper parts replacement

Sun Storage 6140 product overview
Objectives: After completing this module, you will be able to:
• Provide an overview of the Sun Storage 6140
• Identify the hardware components of the 6140
• Describe the functionality of the 6140 controller module
• Interpret LEDs for proper parts replacement

Sun Storage CSM200 expansion module overview
Objectives: After completing this module, you will be able to:
• Describe the Sun Storage Common Storage Module (CSM) 200 expansion module key features
• Identify the hardware components of the CSM200 expansion module
• Describe the functionality of the CSM200 expansion module
• Interpret LEDs for proper parts replacement

Sun Storage hardware installation
Knowledge Objectives: After completing this module, you will be able to:
• List the basic steps for installing the Sun Storage 6x80, 6540 and 6140
• Describe proper cabling techniques and methodologies
• List the basic steps of hot-adding CSM200 expansion modules to a 6x80, 6540 and 6140
• Perform the proper power sequence for the 6x80, 6540 and 6140 storage array
• Describe the procedure to set static IP addresses for the 6x80, 6540 and 6140
Skill Objectives:
• Cable a storage array
• Set a static IP address for each controller in the 6540

Sun Storage Common Array Manager
Knowledge Objectives: After completing this module, you will be able to:
• Describe the functionality of Common Array Manager (CAM)
• Differentiate management and data host install
• Describe the management methods used by CAM
• Explain the function of a multi-path driver
• Describe logging into and navigating within CAM
• List initial CAM configuration steps
Skill Objectives:
• Install CAM
• Register Devices (manual and auto discovery)
• Set up Alert Notification




Array configuration using Sun Storage Common Array Manager
Knowledge Objectives: After completing this module, you will be able to:
• Describe how to provision the storage array with Common Array Manager
• Describe additional provisioning components and how they relate to volume creation
• Describe the profile parameters that are selected when creating a volume
Skill Objectives:
• Set Module IDs
• Enable background media scan
• Create Storage Profiles
• Create Storage Pools
• Configure available storage capacity into volumes
• Select appropriate volume parameters (RAID level, cache settings, segment size, etc.); a small worked example follows this list
• Configure a global hot spare
• Access the volumes on the storage array from the host
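To give a feel for the arithmetic behind the volume-parameter bullet above, the following Python sketch computes approximate usable capacity and full-stripe width for a virtual disk. It is an illustration under assumed values (the drive sizes, RAID levels, and the decision to ignore metadata overhead are hypothetical), not a formula taken from this guide.

    def usable_capacity_gb(raid_level, drive_count, drive_size_gb):
        """Approximate usable capacity of a v-disk, ignoring metadata overhead."""
        if raid_level == 0:
            return drive_count * drive_size_gb
        if raid_level == 1:                      # RAID 1/10: half the drives hold mirror copies
            return (drive_count // 2) * drive_size_gb
        if raid_level in (3, 5):                 # one drive's worth of parity
            return (drive_count - 1) * drive_size_gb
        if raid_level == 6:                      # two drives' worth of parity
            return (drive_count - 2) * drive_size_gb
        raise ValueError("unsupported RAID level")

    def full_stripe_kb(raid_level, drive_count, segment_size_kb):
        """Data written per full stripe: segment size times the number of data drives."""
        data_drives = {0: drive_count, 1: drive_count // 2,
                       3: drive_count - 1, 5: drive_count - 1,
                       6: drive_count - 2}[raid_level]
        return segment_size_kb * data_drives

    # Hypothetical example: a 5-drive RAID 5 v-disk of 300 GB drives with 128 KB segments
    print(usable_capacity_gb(5, 5, 300))   # 1200 (GB usable)
    print(full_stripe_kb(5, 5, 128))       # 512  (KB per full stripe)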

Storage Domains
Knowledge Objectives: After completing this module, you will be able to:
• Explain the benefits of Storage Domains
• Define Storage Domain terminology
• Describe the functionality of Storage Domains
• Calculate Storage Domain usage
Skill Objectives:
• Create Hosts and/or Host Groups
• Map volumes to Hosts and/or Host Groups (a conceptual sketch of this mapping follows this list)
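As a purely conceptual illustration of the mapping skill above, a storage domain can be thought of as a per-initiator LUN-to-volume table: each host or host group sees only the volumes mapped to it, under its own LUN numbers. The names below are invented for illustration and this is not how CAM stores its configuration.

    # Hypothetical hosts, host groups, volumes, and LUN numbers for illustration only.
    mappings = {
        "host_group_finance": {0: "fin_data", 1: "fin_logs"},   # LUN number -> volume name
        "host_backup":        {0: "backup_vol"},
    }

    def volumes_visible_to(initiator):
        """Return the LUN-to-volume view presented to one host or host group."""
        return mappings.get(initiator, {})

    print(volumes_visible_to("host_group_finance"))  # {0: 'fin_data', 1: 'fin_logs'}
    print(volumes_visible_to("host_other"))          # {} in this simplified model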





Integrated data services: Snapshot
Knowledge Objectives: After completing this module, you will be able to:
• List the benefits and application of Snapshot
• Explain how Snapshot is implemented (see the conceptual sketch after this list)
Skill Objectives:
• Create and use Snapshot volumes
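A snapshot preserves a point-in-time image of a base volume while consuming only a small reserve area for changed data. The copy-on-write idea behind this is sketched below in a few lines of Python; it is a conceptual model, not the array firmware's actual logic.

    class CopyOnWriteSnapshot:
        """Toy model: old data is copied to a reserve area before the base is overwritten."""
        def __init__(self, base_blocks):
            self.base = base_blocks      # the live base volume (a list of block contents)
            self.reserve = {}            # block index -> original contents, filled on first write

        def write_base(self, index, data):
            if index not in self.reserve:               # first overwrite since the snapshot
                self.reserve[index] = self.base[index]  # preserve old data in the reserve volume
            self.base[index] = data

        def read_snapshot(self, index):
            return self.reserve.get(index, self.base[index])

    base = ["a", "b", "c"]
    snap = CopyOnWriteSnapshot(base)
    snap.write_base(1, "B-new")
    print(snap.read_snapshot(1))   # 'b'  -- the point-in-time view is unchanged
    print(base)                    # ['a', 'B-new', 'c']  -- the base volume moves on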

Integrated data services: Volume Copy
Knowledge Objectives: After completing this module, you will be able to:
• Describe the benefits and application of Volume Copy
• Explain how Volume Copy is implemented
• Explain the functions that can be performed on a Copy Pair
Skill Objectives:
• Create and use Volume Copy volumes

Integrated data services: Remote Replication
Knowledge Objectives: After completing this module, you should be able to:
• Describe the benefits and applications of Remote Replication
• Explain how Replication is implemented
• Differentiate between synchronous and asynchronous replication modes (a conceptual sketch of the difference follows this list)
Skill Objectives:
• Create and use Replication volumes
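The synchronous/asynchronous distinction listed above comes down to when the write is acknowledged to the application host relative to the remote copy. The Python fragment below is only a conceptual contrast under a toy model (a list stands in for the replication link); it is not product code.

    remote_volume = {}      # stands in for the secondary array
    async_queue = []        # writes waiting to cross the link in asynchronous mode

    def write_sync(block, data):
        remote_volume[block] = data        # must complete on the remote array first
        return "host acknowledged"         # host waits out the full round trip

    def write_async(block, data):
        async_queue.append((block, data))  # queued for later transmission
        return "host acknowledged"         # host is acknowledged immediately

    def drain_link():
        while async_queue:
            block, data = async_queue.pop(0)
            remote_volume[block] = data

    print(write_sync(0, "journal-entry"))
    print(write_async(1, "data-page"))
    drain_link()
    print(remote_volume)   # {0: 'journal-entry', 1: 'data-page'}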

Monitoring performance and dynamic features
Knowledge Objectives: After completing this module, you will be able to:
• List the factors that influence storage array performance
• Explain how cache parameters affect performance
• Recognize how dynamic functions impact performance
• Explain the data presented by the CAM built-in Performance Monitor
Skill Objectives:
• Dynamically modify volumes using the following dynamic features:
  • Dynamic RAID Migration (DRM)
  • Dynamic Volume Expansion (DVE)
  • Dynamic Capacity Expansion (DCE)
  • Dynamic Segment Sizing (DSS) (a rule-of-thumb sketch for choosing a segment size follows this list)
  • Defragmentation
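Segment size and stripe width come up repeatedly when tuning performance. The calculation below is a common rule of thumb stated as an assumption (not an official formula from this guide): keep a small random I/O on a single drive, and spread a large sequential I/O across a full stripe. The 32 KB floor is likewise an assumed example value.

    def suggested_segment_kb(io_size_kb, data_drives, workload):
        """Rule-of-thumb segment size from typical host I/O size and data-drive count."""
        if workload == "random":
            return max(io_size_kb, 32)                  # one drive services each small I/O
        if workload == "sequential":
            return max(io_size_kb // data_drives, 32)   # full stripe ~ one large I/O
        raise ValueError("workload must be 'random' or 'sequential'")

    print(suggested_segment_kb(16, 4, "random"))         # 32
    print(suggested_segment_kb(1024, 4, "sequential"))   # 256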

Problem determination
Objectives: After completing this module, you will be able to:
• Describe the tools in CAM to analyze storage array problems
• Explain how to use the Service Advisor to solve problems

Maintaining the storage array
Objectives: After completing this module, you will be able to:
• Describe Dynamic Volume Expansion
• Explain the benefits of disk scrubbing
• Describe the process to install baseline firmware

SSCS and Command Line Interface
Objectives: After completing this module, you will be able to:
• Utilize the SSCS to export and import the configuration
• Use the fault management command line tools (FMS)









Preface: About this course

Course goals
Upon completion of this course, you will be able to:
• Describe the features, functions and terminology of the Sun Storage 6000 Module series
• Describe the customer benefits and requirements to migrate to or use Sun Storage series arrays
• Describe the architecture of Sun Storage 6000 series arrays
• Install Sun Storage 6000 array hardware
• Install Common Array Manager storage management software
• Configure CAM-based storage systems
• Attach production hosts to Sun Storage 6000 series arrays
• Configure and use Snapshots
• Configure and use Volume Copies
• Configure and use a Replication set
• Common Array Manager (CAM)
• Diagnose basic problems using available tools
• Use common commands in the SSCS command line interface








Module 1

Sun Storage 6x80 product overview

Objectives
Upon completion of this module, you will be able to:
• Describe how the key features of 6x80 storage systems address the needs of high performance computing environments
• Identify the canisters of the 6x80 controller module
• Identify the upgradable components inside the controller canister
• Identify the LEDs of 6x80 controller modules



Sun Storage modular disk family positioning

Figure 1-1: Modular disk family positioning

The mid-range family

Figure 1-2: Flexline series to 6000 modular series


Compare the Sun Storage 6140, 6540, and 6x80 arrays

The Sun Storage 6140 storage array is targeted for the small-to-medium business (SMB) market, while the Sun Storage 6x80 and 6540 storage arrays are targeted for enterprise environments.

Table 1-1: Module 6000 product line comparison

Controller CPU
    6140-2: 667 MHz Xscale
    6140-4: 667 MHz Xscale
    6540: 2.4 GHz Xeon
    6580: 2.8 GHz Xeon
    6780: 2.8 GHz Xeon

XOR engine
    6140-2: On CPU
    6140-4: On CPU
    6540: Dedicated ASIC
    6580: Dedicated ZIP ASIC
    6780: Dedicated ZIP ASIC

Host ports
    6140-2: 1/2/4 Gb/s; 2 per controller
    6140-4: 1/2/4 Gb/s; 4 per controller
    6540: 1/2/4 Gb/s; 4 per controller
    6580: 1/2/4 Gb/s; 8 per controller
    6780: 1/2/4 Gb/s or 2/4/8 Gb/s; 8 per controller

Drive ports
    6140-2: 2/4 Gb/s; 2 per controller
    6140-4: 2/4 Gb/s; 2 per controller
    6540: 2/4 Gb/s; 4 per controller
    6580: 4 Gb/s; 8 per controller
    6780: 4 Gb/s; 8 per controller

Controller cache
    6140-2: 1 GB per controller
    6140-4: 2 GB per controller
    6540: 2/4/8 GB per controller
    6580: 8 GB per controller
    6780: 8/16 GB per controller

Ethernet ports
    6140-2: 10/100; 2 per controller
    6140-4: 10/100; 2 per controller
    6540: 10/100; 2 per controller
    6580: 10/100/1000; 2 per controller
    6780: 10/100/1000; 2 per controller

Expansion module IOM
    6140-2: FC
    6140-4: FC
    6540: FC
    6580: FC
    6780: FC

No. of drives per module
    6140-2: 16
    6140-4: 16
    6540: 16
    6580: 16
    6780: 16

Drive types supported
    6140-2: 2/4 Gb/s; FC, SATA II
    6140-4: 2/4 Gb/s; FC, SATA II
    6540: 2/4 Gb/s; FC, SATA II
    6580: 4 Gb/s; FC, SATA II
    6780: 4 Gb/s; FC, SATA II

No. of expansion modules supported
    6140-2: 3
    6140-4: 6
    6540: 14
    6580: 16
    6780: 16

Maximum drives supported
    6140-2: 64
    6140-4: 112
    6540: 224
    6580: 256
    6780: 448

Configuration

Maximum hosts supported
    6140-2: 512 (256 redundant)
    6140-4: 512 (256 redundant)

Maximum volumes
    6140-2: 1024
    6140-4: 1024
    6540: 2048
    6580: 2048
    6780: 2048

Performance Targets

Burst I/O rate (cache read)
    6140-2: 120,100
    6140-4: 120,100
    6540: 575,000
    6580: ~600,000
    6780: ~700,000

Sustained I/O rate (disk reads)
    6140-2: 30,235
    6140-4: 44,000
    6540: 85,000
    6580: ~115,000
    6780: ~172,000

Sustained I/O rate (disk writes)
    6140-2: 5,789
    6140-4: 9,000
    6540: 22,000
    6580: ~30,000
    6780: ~45,000

Sustained throughput (disk reads)
    6140-2: 750 MB/s
    6140-4: 990 MB/s
    6540: 1600 MB/s
    6580: 3200 MB/s
    6780: ~6400 MB/s

Sustained throughput (disk writes)
    6140-2: 698 MB/s
    6140-4: 850 MB/s
    6540: 1300 MB/s
    6580: 2800 MB/s
    6780: ~5200 MB/s

High performance computing with the 6x80

The 6x80 controller module provides the power and speed demanded by high performance computing (HPC) environments that store and use vast amounts of data for high-bandwidth programs and complex application processing. The controller used in the 6x80 controller module is very sophisticated and uses the state-of-the-art XBB2 (RAIDCore 2) architecture. These factors enable the 6x80 to use fast cache memory, USB-based drives for persistent cache storage, 4Gb/s FC host and drive ports, high-speed busses and multiple processing elements to optimize resource utilization.

All CAM-managed enterprise-class controllers run similar firmware. This unique implementation creates a lower total cost of ownership (TCO) and higher return on investment (ROI) by enabling common features and functionality, centralized management, a consistent interface and reduced training and support costs.

The 6x80 controller's high speed, dedicated XOR engine generates RAID parity with no performance penalty, enabling this compute-intensive task to be handled efficiently and effortlessly. A separate processor focuses on data movement control, allowing setup and control instructions to be processed and dispatched independent of data.



Two 6x80 controllers are integrated into one controller module and, combined with one or more expansion trays, create a fully featured storage system. The dual controllers are fully redundant. The 6x80 controller module supports up to 16 host connections at 1, 2, or 4Gb/s FC and 16 4Gb/s Fibre Channel (FC) drive connections using Fibre Channel-Switched Loop (FC-SW) protocols, giving the 6x80 access to a total of 448 FC/SATA II drives. Extensive compatibility and the ability to auto-negotiate host connectivity speeds result in minimal or no impact on storage networks, protecting customers' existing infrastructure investment.

There are several configuration possibilities for the 6x80 controller module based on the host ports available: 4 and 8Gb/s FC and 10Gb/s iSCSI. This flexibility makes it easy for customers to purchase exactly the storage system they need.

6x80 controller module overview

Figure 1-3  Front and back views of 6x80 controller module

The CAM-managed 6x80 controller module is a cabinet-mounted storage system that directs and manages the I/O activity between a host and the volumes configured on the storage system. The 6x80 shares many of the characteristics of other members of the CAM-managed storage line: it is a 4U module that fits the standard 19-inch (48.3 cm) wide cabinet, has virtually the same-sized canisters in it and uses many of the same LED indicators.

The 6x80 is unique, though, in some respects. For example, the 6x80 power-fan canisters do not contain chargers for the batteries. These have been moved into the interconnect-battery canister with the batteries themselves.



From the back of the controller module, it is very easy to distinguish between the 6x80 and all other CAM-managed storage systems. It has eight host and eight drive ports. This gives the 6x80 greater availability for host/SAN attachments, as well as expansion tray attachments. In fact, the 6x80 can support more drives than any other CAM-managed array: up to 448 FC and/or SATA II drives (16 fully configured SBOD expansion trays).

Figure 1-4  6x80 canisters

The 6x80 controller module has five FRU canisters which can be replaced onsite:
• Two controller canisters
• Two power-fan canisters (also referred to as controller support modules)
• One interconnect-battery canister

The 6x80 controller module does not have a mid-plane. It is designed so that all of the canisters interconnect with one another through the interconnect canister.

The 6x80 controller module has two identical controllers. The controller canisters install from the rear of the module. The top controller is controller A, and the bottom controller is controller B. All connections to the hosts and drives are through the controllers. Controllers A and B are inverted 180 degrees from each other so that the power connections are on the outside of each controller, which makes the power cables more manageable.

6x80 controller module: Front view

From the front of the controller module, the power-fan canisters and interconnect-battery canister are accessible. All of these canisters are field replaceable, which makes the 6x80 easy and fast to service.




6x80 controller module power-fan canister

Figure 1-5  6x80 power-fan canisters

The power-fan canister (controller support module), as the name suggests, provides power and cooling to the storage system. Each power-fan canister contains:
• A 525-watt power supply that provides power to the controllers by converting incoming AC voltage to 12-V DC
• Two cooling fans that provide redundant cooling even if either power supply fails
• A thermal sensor that prevents power supplies from overheating. Under normal operating conditions with an ambient air temperature of 40°F to 104°F (5°C to 40°C), the cooling fans maintain the correct operating temperature inside the module

Factors that can cause power supplies to overheat:
• An unusually high room temperature
• A fan failure
• Defective circuitry in the power supply
• A blocked air vent
• A failure in other devices installed in the cabinet

If the internal temperature rises above 158°F (70°C), one or both power supplies automatically shut down, and the Common Array Manager software reports the error. Critical event notifications are also issued if event monitoring is enabled and event notification is configured.



A black connector at the rear of the power-fan canister connects it to its respective controller. The power-fan canister on the left has the connector at the bottom and therefore connects to controller B. The power-fan canister on the right has the connector on the top and connects to controller A.

Figure 1-6  Power-fan canister connection to controller

Power-fan canister LEDs

Figure 1-7  Power-fan canister LEDs

Information about the condition of the power supplies and fans is conveyed by LEDs on the front of each power-fan canister. However, the LEDs are only visible if the front cover of the 6x80 controller module is removed. Typically, a one-to-one relationship exists between the Service Action Required (SAR) and Service Action Allowed (SAA) LEDs. However, there are exceptions. An example is if both power-fan canisters have a fault, one due to a power fault, and the other due to a fan fault. The power-fan canister with the power fault should be removed and replaced first.



If the power-fan canister with the fan fault is removed first, the storage system is left without power. This causes the storage system to shut down and possibly lose data. In this scenario, the power-fan canister with the fan fault would have the SAR LED on and the SAA LED off, indicating a problem but that the canister should not be removed.

Interconnect-battery canister

Figure 1-8  6x80 interconnect-battery canister

The interconnect-battery canister (ICC) acts like a mid-plane for the controller status lines, power distribution lines and drive channels, and it also houses the batteries that allow data in cache to be transferred to the on-board USB-based flash drives in the event of a power failure.

Note – If there is an unexpected power outage, data in cache is transferred to USB-based persistent cache flash drives. These drives are considered long-term storage, so extended power outages are no longer a concern for maintaining data in cache.

However, there is no true mid-plane or backplane in the 6x80. The interconnect-battery canister simply acts as a way for the different canisters of the controller module to interact (and access the battery packs). To do so, the ICC board connects to all components of the ICC, as well as to both controller canisters. If the ICC is not present, the other canisters might still function (depending on what the problem is), so in the strictest sense of the term, the ICC is not a true mid-plane.

The interconnect-battery canister contains:
• A removable “mid-plane” that provides cross-coupled signal connections between the controller canisters. The control output from each controller canister is connected to the control input of the alternate controller canister
• Two battery backup (BBU) packs. Each battery backup pack is sealed and contains 18 cells of lithium ion batteries. Each battery pack is dedicated to one controller
• An audible alarm that provides a warning of potentially serious problems with the controller module. The 6x80 module may be shipped with the alarm enabled or disabled, depending on OEM specifications. If it is enabled, the alarm can be muted with the mute button on the front of the canister or by using the Common Array Manager
• Front bezel LEDs that are visible through the front cover, including summary LEDs and an audible alarm for the entire controller module, as well as LEDs specific to the ICC

Caution – Never remove the interconnect-battery canister unless directed to do so by a Customer Support representative. Removing the interconnect-battery canister after either a controller or a power-fan canister has been removed could result in loss of data access.

Interconnect-battery canister LEDs

Figure 1-9  Interconnect-battery canister LEDs

The power and locate LEDs, as well as the audible alarm, are general indicators for the entire controller module, not specifically for the interconnect-battery canister. The Service Action Allowed (SAA), Service Action Required (SAR) and battery LEDs, however, are specifically for the interconnect-battery canister itself. Both the global and ICC LEDs can be seen through the front cover.

In the unlikely event that an interconnect-battery canister must be replaced (for example, due to a bent pin or as a last resort to resolve a problem), the Common Array Manager provides details about the procedure. Data access is limited to controller A when the interconnect-battery canister is removed, because ICC removal automatically suspends controller B.



Correct preparation is required before removing the interconnect-battery canister. Removal and replacement steps are performed in this order:

1. Place controller B offline so that host failover software can detect the offline controller and re-route all I/O to controller A
2. Turn on the SAA LED for the interconnect-battery canister using the Common Array Manager
3. Remove and replace the interconnect-battery canister
4. Turn off the SAA LED using CAM
5. Place controller B online and rebalance volumes

Smart battery backup pack

Figure 1-10  6x80 interconnect-battery canister with battery backup packs

The battery backup packs (BBUs), also referred to as the controller's internal UPS, in the 6x80 are called Smart BBUs because they have more capabilities than normal battery packs. Each BBU contains battery cells, a charger, a battery "Gas Gauge" chip, a discharge load and control logic.

As with traditional battery units, the controller firmware monitors the charge level, battery status and the health of the battery pack (for example, the number of times the batteries have been charged and the temperature of the cells). It uses "learn cycles" to perform this monitoring, and the information gathered allows the firmware to determine exactly how long a BBU can hold the cache "up" in case of a power outage and what action to take if it cannot.



To calibrate the battery gas gauge, a learn cycle has to be implemented. During a learn cycle, a fully charged BBU goes through a controlled discharge to a predetermined threshold into the discharge load, goes through a rest period and then fully recharges. The learn cycle interval is scheduled in weeks, so that the start time for each learn cycle occurs on the same day of the week, at the same time of day. Because Smart BBUs monitor for voltage, it is no longer necessary to set the date in CAM when batteries are replaced.

Batteries function and allow data to be written to cache as long as a minimum power level is maintained (normally 20 minutes' worth). Battery capacity is expected to remain above the minimum application capacity during learn cycles, and write caching continues normally. However, if the capacity unexpectedly falls below the minimum application capacity during the learn cycle, write caching is disabled for all volumes except those that have the "cache without batteries" attribute enabled, and an alert is generated. Service Advisor shows that the BBU needs replacing.

Even though data in cache is stored in the USB-based persistent cache drives if a power failure occurs, it is still important to maintain the BBUs. Not only do they provide power to cache memory for approximately 30 minutes, long enough to transfer the data in cache to the USB persistent cache drives, but they also keep one set of fans in the power-fan canisters running so the controller module does not overheat. Each BBU is dedicated to one controller. Therefore, it is important that BOTH battery packs are charged and functional. If one BBU fails, the other will not spare for it.
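Because the interval is a whole number of weeks, the next start time follows directly from the previous one. A minimal sketch, assuming a simple date calculation (the function name and example values are hypothetical, not CAM code):

```python
# Illustrative sketch only: with the learn cycle interval set in whole weeks,
# each cycle starts on the same weekday and time of day as the previous one.
from datetime import datetime, timedelta

def next_learn_cycle(last_start, interval_weeks):
    # Adding a whole number of weeks preserves the weekday and the time of day.
    return last_start + timedelta(weeks=interval_weeks)

start = datetime(2009, 6, 7, 2, 0)   # example: a Sunday at 02:00
print(next_learn_cycle(start, 8))    # 2009-08-02 02:00:00, also a Sunday
```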




6x80 controller module: Power distribution

Figure 1-11  Power distribution in controller module

As mentioned above, the 6x80 module does not have a mid-plane or backplane. Therefore, all the canisters in the module are interconnected, with power passing through the interconnect-battery canister. All power flows through the controllers. Power will continue to flow through a controller canister to the other canisters even if the controller itself becomes inoperable. The power from the left power-fan canister is distributed by controller B, and power from the right power-fan canister is distributed by controller A. Both controllers must be in place for each one to provide redundant power to its partner controller.

Service Advisor procedures must be followed carefully if there are multiple failures in a controller module. For example, if the power supply connected to controller A fails and controller B fails, removing controller B before replacing the failed power-fan canister will cause controller A to lose power, resulting in a total loss of data access. This occurs because power distribution is through the controller physically connected to the power-fan canister.

Caution – Because there is an interdependency between the power-fan, battery and controller canisters, follow Service Advisor procedures when removing any of these canisters.
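The ordering rule in that example can be stated compactly: do not remove a controller while the power-fan canister attached to its partner controller has failed. A minimal sketch of that rule, with an assumed data layout (illustrative only, not a Sun tool):

```python
# Illustrative sketch only: power from each power-fan canister reaches the rest
# of the 6x80 module through the controller it plugs into, so a controller
# should not be pulled while the power-fan canister attached to its partner
# controller has already failed.

def safe_to_remove_controller(controller, power_fan_ok):
    """controller: 'A' or 'B'.
    power_fan_ok: dict mapping 'A'/'B' to the health of the power-fan canister
    attached to that controller (hypothetical representation)."""
    partner = 'B' if controller == 'A' else 'A'
    # If the partner's power-fan canister is down, the controller being removed
    # is currently the only power path for the whole module.
    return power_fan_ok[partner]

# The scenario above: the power supply feeding controller A has failed and
# controller B has also failed. Replace the failed power-fan canister first.
print(safe_to_remove_controller('B', {'A': False, 'B': True}))   # False
```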




Quick check

Question: Two failures exist on the 6x80 controller module: The left power supply has failed and one of the right fans has failed. Which LEDs will be on for the failed components? Which component should be replaced first?

Question: Two failures exist on the 6x80 controller module: The left power supply has failed and controller A has failed. Which LEDs will be on for the failed components? Which component should be replaced first?




6x80 controller: Inside view

Controller cache and the persistent cache USB flash drives are in the 6x80 controllers, and the host "daughter" cards are changeable. Therefore, it is worth taking a look inside the controller to see where these are located.

Figure 1-12  6x80 controller inside view

Cache DIMM memory

Controller cache is an unusually large set of physical memory chips dedicated to I/O operations between the controller and hosts and between the controller and drives. Cache DIMM memory is used exclusively for host I/O, while processor memory is used for RAID application code and data, the underlying OS, and so on.

Cache is used even for volumes that do not have any write caching enabled. An incoming write operation results in two independent operations: one from the host-side buffers to cache, and another from cache to the drive-side buffers. The response is not sent to the host until the drive-side write operation completes and the data has been written to the drive(s).

Each cache DIMM slot in the 6x80 can accommodate either a 1 or 2GB module. However, all DIMMs must be the same size (all 1GB or all 2GB). The 6x80 currently supports either 8GB or 16GB cache (total for redundant controllers). DIMM slots must be populated in a certain order: either all slots (1 through 8) or slots 2, 4, 5 and 7. Any other configuration is invalid and will trigger an error.



It is important to note that DIMM memory is inserted by the manufacturer and is not a FRU.
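A minimal sketch of the DIMM population rules described above (the helper function is an assumption for illustration, not controller firmware or CAM):

```python
# Illustrative sketch only: checks a cache DIMM layout against the rules stated
# above for the 6x80 controller: all installed DIMMs must be the same size
# (1GB or 2GB), and either all eight slots or only slots 2, 4, 5 and 7 may be
# populated.

VALID_SLOT_SETS = [{1, 2, 3, 4, 5, 6, 7, 8}, {2, 4, 5, 7}]

def check_dimm_config(slots):
    """slots maps slot number (1-8) to DIMM size in GB for installed DIMMs."""
    sizes = set(slots.values())
    if sizes not in ({1}, {2}):            # mixed or unsupported DIMM sizes
        return False
    return set(slots) in VALID_SLOT_SETS   # only the two documented layouts

# Examples: 8 x 1GB (valid), slots 2/4/5/7 with 2GB each (valid),
# and a mixed-size layout (invalid).
print(check_dimm_config({i: 1 for i in range(1, 9)}))    # True
print(check_dimm_config({2: 2, 4: 2, 5: 2, 7: 2}))       # True
print(check_dimm_config({2: 1, 4: 2, 5: 1, 7: 2}))       # False
```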

Figure 1-13  Fully configured DIMM slots: 1GB in each slot

Figure 1-14  Alternate DIMM configuration: DIMMs in slots 2, 4, 5, 7

USB persistent cache

Cache memory is random access memory (RAM), so if there is a power failure, the contents of cache are lost, including all the "dirty" data (writes not yet written to drives). There is no way for an application using the storage system to tell exactly how many writes are lost in a case like this, and consequently, recovery can be very difficult or impossible.

To combat this problem, the 6x80 controller has persistent USB-based cache backup devices which store cache contents for an indefinite length of time. As long as the controller has a battery with enough capacity to let it write the full contents of cache to the persistent cache backup device, cache contents are not lost during a power failure. When the backup to persistent cache completes, the controller firmware turns off the batteries. Unlike a traditional battery-backed cache, the batteries are enabled on a power outage even if there is no dirty data in cache.



The 6x80 uses USB-based modules for persistent cache. They are 4GB each, and each controller base board has four module slots (a total of 16GB). An important fact to keep in mind is that the flash module capacity needs to be equal to the DIMM cache capacity, since the persistent cache modules are used to store cache during prolonged power outages.
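A quick worked check of that capacity match, using the per-controller figures above (illustrative arithmetic only; the DIMM layout shown is one example configuration):

```python
# Illustrative arithmetic only: persistent cache capacity per controller
# (four USB flash module slots x 4GB each) compared with an installed cache
# DIMM capacity of eight 2GB DIMMs.
flash_capacity = 4 * 4          # 16 GB of USB-based persistent cache
dimm_cache = 8 * 2              # 16 GB of cache DIMMs (example configuration)
print(flash_capacity, dimm_cache, flash_capacity == dimm_cache)   # 16 16 True
```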

Host cards

The 6x80 can be configured with various types of daughter host cards so the controller module can match an enterprise's specific needs. Because the 6x80 is scalable, it has room for two host cards, giving customers room to grow. Model numbers are based on the number and type of host cards in the controller canister. Just like cache modules, though, host cards are not FRUs. The host cards are inserted by the manufacturer based on Sun's specifications.

Figure 1-15  6x80 model and host card configurations

Controller base board

The controller's base board is also worth a closer look. It boasts a fast, dedicated ZIP chip which performs both XOR (RAID 5) calculations and P+Q (RAID 6) calculations. There is a separate 2.8GHz LV Xeon CPU for each controller, and each controller can accommodate up to 1GB of dedicated memory for CPU processes.

Between the ZIP chip and the host and drive ports there are two 2GB/s PCI-Express busses, which give the 6x80 exceptional bandwidth. The busses for host card connections are 4GB/s PCI-Express busses, with one bus for each FC chip. One FC chip is dedicated to the local controller ports, while the other is dedicated to interconnecting with the alternate controller. This makes host-side I/O transfer extremely fast, with 1GB/s of bandwidth per host port.



The busses for drive channels are also 4GB/s PCI-Express busses. There is one bus dedicated to each 4Gb/s FC "switch" chip (loop switch), which is, in turn, directly connected to the XOR ZIP chip. Additionally, each loop switch has a dedicated connection to the alternate controller, making multi-pathing and failover very efficient. There are two busses dedicated to cache mirroring; these busses are 2GB/s PCI-Express busses. Each connection type has two PCI-E busses dedicated to it, making the 6x80 an extremely fast and efficient controller.

Figure 1-16  6x80 controller architecture

6x80 controller module: Back view

Figure 1-17  Host and drive ports on 6x80 controller


Each 6x80 controller has eight drive ports and the capacity for eight host ports. Both the host-side and drive-side ports are 4Gb/s FC.

Host ports

Each host card has four host ports on it, and as mentioned previously, the 6x80 supports two separate host cards. The controllers perform link-speed negotiation on each host port (also referred to as auto-negotiation) by interacting with the host or SAN fabric switch to determine the fastest compatible speed available between the controllers and the host or switch. This becomes the operating speed of the link. The link speed is limited to the speeds supported by the Small Form-factor Pluggable (SFP) transceiver on that channel.

The controllers enter into auto-negotiation at controller boot-up or when the controller detects a link-up event after a previous link-down event. If the auto-negotiation process fails, the controllers consider the link to be down until negotiation is attempted again.
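A minimal sketch of that negotiation rule, assuming each end advertises a set of supported speeds (illustrative only, not controller firmware):

```python
# Illustrative sketch only: the negotiated link speed is the fastest rate
# supported by both ends, capped by the speeds the SFP transceiver on that
# channel supports.

def negotiate_link_speed(controller_speeds, sfp_speeds, peer_speeds):
    """Each argument is a set of supported speeds in Gb/s; returns the
    operating speed, or None if negotiation fails (link stays down)."""
    common = controller_speeds & sfp_speeds & peer_speeds
    return max(common) if common else None

# Example: a 1/2/4 Gb/s host port with a 4Gb/s SFP talking to a 2Gb/s HBA.
print(negotiate_link_speed({1, 2, 4}, {2, 4}, {2}))   # 2
```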

Drive ports

Each controller has eight drive ports that connect to four loop switches. This means that two drive ports share one loop switch, but each individual drive port is capable of delivering 400MB/s of bandwidth. This is possible because the loop switches effectively double the usable bandwidth: one loop switch can process I/Os on both of its ports concurrently and independently.
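The aggregate drive-side figures implied by those numbers, as illustrative arithmetic rather than a measured performance claim:

```python
# Illustrative arithmetic only: theoretical drive-side bandwidth implied by the
# figures above (roughly 400MB/s of usable payload per 4Gb/s FC drive port, and
# each loop switch serving its two ports concurrently).
mb_per_port = 400
per_loop_switch = 2 * mb_per_port        # 800 MB/s per loop switch
per_controller = 8 * mb_per_port         # 3200 MB/s across eight drive ports
print(per_loop_switch, per_controller)   # 800 3200
```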



Drive channels/loops operate at 4Gb/s and only attach to 4Gb/s-capable expansion trays. The total number of drives supported by the 6x80 is 256 (16 fully populated CSM200, 16-slot expansion trays).

Figure 1-18  Each loop switch connects to two drive-side ports

When cabling expansion trays to the controllers, it is important that expansion trays are cabled to both controller A and B to ensure redundancy.

Other ports

Figure 1-19  Other connections on 6x80 controller

Each controller in the 6x80 controller module has two RJ-45 10/100/1000 Base-T Ethernet ports: one for out-of-band management; the other for field service personnel or service diagnostics. By having a separate Ethernet connection for service, the customer’s LAN is not exposed to unknown connections.



Each controller canister also has one RS-232 serial port (used only for diagnostic purposes), an AC power connection and an on/off switch.

Controller LEDs

Figure 1-20  Controller LEDs

The 6x80 has a set of three LEDs and a 7-segment display that are used for problem diagnosis. The green LED indicates if there is data in cache, the amber LED (the SAR) indicates that there is a problem with the controller canister that needs to be addressed, and the blue LED (SAA) indicates that the controller canister can be removed without loss of data access.

LED | Color | What It Means
Service Action Allowed (SAA) | Blue LED | Off = Normal; On = Controller safe to remove
Service Action Required (SAR) | Amber LED | Off = Normal; On = Controller needs attention
Cache Active LED | Green LED | On = Data in cache; Off = No data in cache

7-Segment display

Figure 1-21  7-Segment display



Each controller module has a pair of 7-segment displays that shows two digits. Each digit has a decimal point and is rotated 180 degrees relative to the other digit. With this orientation, the display looks the same regardless of controller orientation.

The numeric display shows either the module identification number or a diagnostic error code. The default controller module ID is set at 85 by the controller firmware and will automatically adjust during power-on to avoid conflicts with existing expansion tray IDs. If a module ID is displayed, the diagnostic light is off and the heartbeat light is blinking. Both controllers show the same ID. However, if one controller has a problem, it may show a diagnostic code while the other still displays the module ID.

The Diagnostic-Fault LED (the small decimal point at the upper left-hand corner of the second digit), the heartbeat LED (the small decimal point at the lower right-hand corner of the first digit) and all seven segments of both digits light whenever the controller is powered on. After a power reset or cycle occurs, the 7-segment display may temporarily show diagnostic codes as it goes through its self-test. After initial diagnostics are complete, though, the controller module ID is displayed.

If an error occurs and the controller canister's amber Service Action Required (SAR) LED is lit, the 7-segment display shows diagnostic information and the Diagnostic-Fault LED is lit. The heartbeat light remains off when an error sequence displays.

Note – If the Diagnostic-Fault LED is on and no diagnostic code appears, the controller has failed.

The 6x80 uses codes to identify specific problems detected on the controller. The hardware displays error codes whenever a controller has an error or when it is held in reset by its alternate controller, so in general, these codes display only when the controller is in a non-operational state. A code may be dynamic (a repeating sequence of more than one two-digit code) or static (a single code).

The amount of time each two-digit code is displayed during a dynamic code sequence is a fixed value under hardware control. Each sequence of dynamic codes minimally consists of a two-digit category code followed by a two-digit detail code. The entire display is blanked at the end of the sequence (all segments off, the diagnostic light off), then the sequence repeats. Longer sequences are displayed if there is more than one event that needs to be reported. These nominally consist of a series of category-detail sequences with a delimiter between each category-detail pair. The entire display is blanked at the end of the sequence, then the sequence repeats.



Static diagnostic codes are in the form of Lx, where "x" is a hexadecimal digit indicating controller state information. Dynamic diagnostic codes start with a general category code, such as OS (operational state), then specific codes such as CF (component failure).

Note – Dynamic error code example: OE + L1 = Operational error, missing ICC

The controller might be non-operational due to a configuration problem (such as mismatched cache types), or it might be non-operational due to hardware faults. If the controller is non-operational due to array configuration, the controller SAR light is off. If the controller is non-operational due to a hardware fault, the controller SAR light is on.

Table 1-2  Static numeric display diagnostic codes

Value | Controller State | Description | CAM View
L0 | Suspend | Mismatched controller types | Service Action Required condition for board type mismatch
L1 | Suspend | Missing interconnect-battery canister | Service Action Required condition for a missing interconnect-battery canister
L2 | Suspend | Persistent memory errors | Service Action Required condition for offline controller; possible cache DIMM mismatch
L3 | Suspend | Persistent hardware errors | Service Action Required condition for an offline controller
L4 | Suspend | Persistent data protection errors | Service Action Required condition for an offline controller
L6 | Suspend | Unsupported host card | Service Action Required condition for incorrect host card
L8 | Suspend | Memory configuration error | Service Action Required condition; incorrect cache inserted
L9 | Suspend | Link speed mismatch | Service Action Required; incorrect expansion trays or link speeds set
Lb | Suspend | Host card configuration error | Service Action Required for incorrect host card
LC | Suspend | Persistent cache backup configuration error | Service Action Required for incorrect USB-based persistent cache configuration
Ld | Suspend | Mixed cache memory DIMMs | Service Action Required; all DIMM banks must be the same size
LE | Suspend | Uncertified cache memory DIMM sizes | Service Action Required to change DIMM banks to allowed sizes
LF | Suspend | Lockdown with limited SYMbol support |
LH | Suspend | Controller firmware mismatch | Service Action Required to reload firmware to the controller not upgraded
88 | Reset | Controller is held in reset by the alternate controller |

Static numeric codes can be used in conjunction with dynamic codes.

Table 1-3  Dynamic numeric display diagnostic codes

Category | Category Code | Detail Codes
Startup Error | SE+ | 88+ Power-on default; dF+ Power-on diagnostic fault
Operational Error | OE+ | Lx+ Lock-down codes
Operational State | OS+ | OL+ Offline (held in reset); bb+ Battery backup (operating on batteries); CF+ Component failure
Component Failure | CF+ | dx+ Processor/cache DIMM (x = location); Cx+ Cache DIMM (x = location); Px+ Processor DIMM (x = location); Hx+ Host card (x = location); Fx+ Flash card (x = location)
Category Delimiter | dash+ | Separator between category-detail code pairs
End-of-Sequence Delimiter | blank- | End-of-sequence indicator
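A minimal sketch of how such a sequence could be decoded, using a small assumed subset of the codes from Tables 1-2 and 1-3 (illustrative only, not controller firmware):

```python
# Illustrative decoder only: maps a dynamic code sequence such as "OE", "L1" to
# a readable message using a small subset of the category and static codes in
# Tables 1-2 and 1-3 above.

CATEGORIES = {"SE": "Startup error", "OE": "Operational error",
              "OS": "Operational state", "CF": "Component failure"}
DETAILS = {"L1": "missing interconnect-battery canister",
           "L2": "persistent memory errors",
           "OL": "offline (held in reset)",
           "bb": "operating on batteries"}

def decode(sequence):
    """sequence: list of two-character codes shown by the 7-segment display,
    e.g. ["OE", "L1"]; delimiters between pairs are assumed already stripped."""
    messages = []
    for category, detail in zip(sequence[0::2], sequence[1::2]):
        cat = CATEGORIES.get(category, category)
        det = DETAILS.get(detail, detail)
        messages.append(f"{cat}: {det}")
    return "; ".join(messages)

# The example from the note above: OE + L1 = operational error, missing ICC.
print(decode(["OE", "L1"]))   # Operational error: missing interconnect-battery canister
```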

Port and connection LEDs

The host and drive port LEDs indicate if there is a communication link. If the green LED is lit, the link is working; if the amber LED is lit, the link is not working. Fault conditions can result from SFP speed mismatches, HBA speed mismatches or expansion tray speed mismatches.



The 10/100/1000 Base-T Ethernet port LEDs also must be read in a special way. The two inside LEDs indicate the speed (100 or 1000Mb/s) at which the link is running; the outside LEDs indicate if there is an active connection to the port.

Host port link (green LEDs):
• Both LEDs off = No connection, link down
• Left LED on = 1Gb/s (4Gb/s card) or 2Gb/s (8Gb/s card)
• Right LED on = 2Gb/s (4Gb/s card) or 4Gb/s (8Gb/s card)
• Both LEDs on = 4Gb/s (4Gb/s card) or 8Gb/s (8Gb/s card)

Note – Controller A is upside down, so its LEDs read the opposite of controller B.

Drive port link (green LEDs):
• Off/Off = No connection, link down
• On = Connection active
• Right LED on = 2Gb/s
• Both LEDs on = 4Gb/s

Drive port bypass (amber LED):
• Off = Normal
• On = Port is bypassed

Ethernet links (green LEDs):
• Left LED off = 10/100Base-T; Left LED on = 1000Base-T
• Right LED off = No link; Right LED on = Link established; Right LED blinking = Activity

6x80 summary

• All CAM-managed storage systems' modules are designed to fit into a typical 19-inch (48.3-cm) wide by 72-inch (183-cm) cabinet
• The 6x80 controller is a very sophisticated and high performing controller
• The 6x80 controller uses x8 4GB PCI-Express busses for drive and host channels; it uses x8 2GB PCI-Express busses for cache mirroring
• The 6x80 uses USB-based flash drives for persistent cache in the event of a power failure
• The 6x80 controller module includes cache DIMMs, CPU DIMMs and USB-based persistent cache drives
• The 6x80 can accommodate two separate host cards of four ports each
• The 6x80 supports 4Gb/s FC or SATA II drives and expansion trays

6x80 controller canister summary

• When viewing the 6x80 controller module from the front, the left power-fan canister is right-side up (controller B); the right canister is upside down (controller A)
• Each power-fan canister contains 1 power supply, 2 fans, 1 thermal sensor and 1 voltage charger for (respectively) controller A or B's BBU
• The interconnect-battery canister (center) serves as a pass-through for controller status lines, power distribution lines and drive channels, as well as supplying the battery backup power source
• The ICC canister contains 2 Smart battery backup packs, the module audible alarm and global module LEDs
• Smart BBUs have learn cycles where they do self-monitoring
• All canisters in the 6x80 controller module are hot-swappable as long as power interdependencies are taken into consideration
• When viewing the 6x80 controller module from the back, the top canister (inverted) is controller A; the bottom canister is controller B
• Each 6x80 controller canister provides housing for the controller, data cache DIMM modules, up to eight independent host ports, eight drive ports, two 10/100/1000 Base-T Ethernet ports and one RS-232 serial port
• The LEDs on each canister of the 6x80 controller module indicate status and/or service needs
• The 7-segment display shows the controller module ID or a diagnostic code




Knowledge check

1. List the Field Replaceable Units found in the 6x80.
2. List the upgradable components in the 6x80 controller canister.
3. True or False: The 6x80 controller module can support up to 512 drives. True / False
4. Which power-fan canister is connected to controller A, the left or the right canister (when viewed from the front of the module)?
5. What does the SAA LED indicate? The SAR? What color is each?
6. Does the 6x80 controller module have a mid- or back-plane? If not, how does power flow through the module?
7. If one BBU fails, will the other one spare for it? Yes / No
8. What are the USB-based flash modules used for in the 6x80 controller module?
9. Can you "mix and match" two different types of host cards in one 6x80 controller module? Yes / No
10. What is a loop switch? Where would you find it?
11. What does it mean if a drive port has an amber LED on?



Module 2

Sun Storage 6540 Product Overview

Objectives

Upon completion of this module, you will be able to:
• Describe the Sun Storage 6540 key features
• Identify the hardware components of the 6540 controller module
• Describe the functionality of the 6540
• Interpret LEDs for proper parts replacement




Sun Storage 6540 product overview

Today's open systems environments create unique challenges for storage arrays. Round-the-clock processing requires the highest availability and online administration. Varying applications result in a range of performance requirements: from transaction-heavy (I/O per second) to throughput-intensive (MB per second). Unpredictable capacity growth demands efficient scalability. Finally, the sheer volume of storage in today's enterprise requires centralized administration and simple storage management.

Sun Storage provides storage arrays that are designed specifically to address the needs of the open systems environment: the Sun Storage 6140 and 6540. Both storage arrays are high-performance, enterprise-class, full 4-gigabit per second (Gbps) Fibre Channel/SATA II solutions that combine outstanding performance with the highest reliability, availability, flexibility and manageability. This section focuses on the Sun Storage 6540 storage array.

The Sun Storage 6540 storage array provides the performance demanded by high performance computing (HPC) environments that store and utilize vast amounts of data for high-bandwidth programs and complex application processing. The Sun Storage 6540 has the powerful 6998 controller architecture and 4Gb/s interfaces, which are ideally suited for bandwidth-oriented applications such as sophisticated data-intensive research, visualization, 3-D computer modeling, rich media, seismic processing, data mining and large-scale simulation.

The 6998 controller used in the 6540 storage array is the most sophisticated and highest-performing controller to date from the Sun Storage 6000 mid-range disk product line. Its sixth-generation XBB architecture boasts the fastest cache memory in the line, 4Gb/s Fibre Channel host and drive interfaces, high-speed busses, and multiple processing elements to optimize resource utilization. The 6998 controller's high speed XOR (parity generating) engine generates RAID parity with no performance penalty, enabling this compute-intensive task to be handled efficiently and effortlessly. A separate 2.4 GHz Xeon processor focuses on data movement control, allowing setup and control instructions to be processed and dispatched independent of data.

Two 6998 controllers are integrated into a controller module and combined with one or more drive modules to create a fully featured 6540 storage array. These dual controller arrays are fully redundant and support up to eight 4, 2 or 1Gb/s Fibre Channel host connections and 224 Fibre Channel or SATA disk drives.



The 6540 storage array has eight 4Gb/s FC-AL host or FC-SW SAN connections and eight 4Gb/s FC-AL drive expansion connections. Extensive compatibility and the ability to auto-negotiate 4, 2, or 1Gb/s FC host connectivity speeds result in minimal or no impact on existing storage networks, protecting customers' infrastructure investment.

The Sun Storage 6x80, 6540, and 6140 storage arrays run similar firmware. This unique implementation creates a lower total cost of ownership and higher return on investment by enabling seamless data and model migration, common features and functionality, centralized management, a consistent interface and reduced training and support costs. Additionally, the 6140 storage array can be upgraded to a high-performance 6540 HPC storage array. In each instance, all configuration and user data remains intact on the drives.

The Sun Storage 6540 storage array is modular and rack mountable, and scalable from a single controller module (CRM, Controller RAID Module) plus one expansion module (CSM, Common Storage Module) to a maximum of 13 additional expansion modules.

Summary of the features offered by the Sun Storage 6540 storage array:
• The Sun Storage 6540 has two 6998 controllers.
• Each 6998 controller has four 4Gb/s Fibre Channel host I/O ports (eight per dual-controller storage array) supporting direct host or SAN attachments. The eight 4Gb/s Fibre Channel host ports support 4, 2 and 1Gb/s connectivity.
• Each 6998 controller has a powerful 2.4 GHz Intel Xeon processor. Each controller also has a dedicated next-generation ASIC to perform the RAID parity calculation, offloading this function from the processor.
• Supports up to 224 drives, FC or SATA.
• HotScale™ technology enables online capacity expansion up to 67 TB with FC drives (224 x 300 GB), or 168 TB with SATA drives (224 x 750 GB); a quick arithmetic check follows this list.
• 4GB, 8GB and 16GB cache options are available (2GB, 4GB or 8GB per controller, respectively).
• Four drive channels per controller that can support either 2Gb/s or 4Gb/s expansion modules.
• All components are hot-swappable.
• RoHS compliant.
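The HotScale figures above follow directly from the maximum drive count; a quick check in decimal terabytes:

```python
# Worked figures only, using the drive counts and capacities quoted above.
fc_tb = 224 * 300 / 1000      # 224 x 300GB FC drives   -> 67.2, quoted as 67 TB
sata_tb = 224 * 750 / 1000    # 224 x 750GB SATA drives -> 168.0, quoted as 168 TB
print(fc_tb, sata_tb)         # 67.2 168.0
```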




Hardware overview: Components of the Sun Storage 6540

The Sun Storage 6540 storage array is comprised of two main modules: the 6540 controller module and a minimum of one expansion module. The expansion module is also known as the Common Storage Module 2 (CSM200).

Figure 2-1  Sun Storage 6540 storage array

This section describes the main components of the Sun Storage 6540 controller module (CRM). The CSM200 expansion module is covered in another chapter.




Controller Module

The figure below shows a block diagram for the Sun Storage 6540. The blocks represent placement of controllers, power-fan canisters and removable mid-plane canister.

Figure 2-2  Block diagram for the Sun Storage 6540

The Sun Storage 6540 controller module has five main canisters:
• Two Power-Fan canisters
• One Interconnect canister (removable mid-plane)
• Two controller canisters

There are also two battery FRUs (Field Replaceable Units) within the Interconnect-Battery canister, bringing the total number of FRUs for the 6540 controller module to seven. The module does not have a mid-plane but instead has been designed such that all the canisters interconnect with one another.

Caution – Service Advisor procedures should be followed when removing a FRU because there is interdependency between the FRUs.




6540 controller module field replaceable unit (FRU) details

Figure 2-3  6540 FRUs

The two Power-Fan canisters and the Interconnect-Battery canister are located behind the front cover. The Power-Fan canister on the left is right-side up, and the Power-Fan canister on the right is upside down. The two controllers are located in the rear of the module. All canisters are hot swappable as long as interdependencies between the FRUs are taken into consideration.




Power-fan canister

Figure 2-4  6540 power-fan canister LEDs

Table 2-1  Power-fan canister LEDs

Light | Color | Normal Status | Problem Status
Power | Green | On - Power supply fan canister is powered on | Off
Battery Charging | Green | On - Battery charged and ready; Blinking - Battery charging | Off - Battery failed or discharged
Battery Charging | Green | On - Battery charged and ready; Blinking - Battery charging | Off - Battery failed or discharged
Needs Attention (SAR) | Amber | Off | On - Power supply canister needs attention
OK to Remove (SAA) | Blue | Off | On - Safe to remove. May also indicate open circuit breaker



The main purpose of the Power-Fan canister is, as the name suggests, to provide power and cooling to the storage array. Each Power-Fan canister contains:
• A power supply - provides power to the controllers by converting incoming AC voltage to the appropriate DC voltages. In addition to the AC-to-DC power supply, a DC-to-DC power supply will be supported when it becomes available (there is a DC connector on the controller canister, but it is not currently functional).
• Two array cooling fans - the fans are powered by the power supplies in both Power-Fan canisters. If either power supply fails, the fans will continue to operate.
• Two battery chargers - the battery chargers perform battery tests when the 6540 module is first powered on, and every 25 hours thereafter. If needed, the batteries are recharged at that time. The batteries are located in the Interconnect-Battery canister.
• A thermal sensor - prevents power supplies from overheating. Under normal operating conditions, with an ambient air temperature of 5°C to 40°C (40°F to 104°F), the cooling fans maintain a proper operating temperature inside the module.

Factors that can cause power supplies to overheat:
• Unusually high room temperature
• Fan failure
• Defective circuitry in the power supply
• Blocked air vent
• Failure in other devices installed in the cabinet

If the internal temperature rises above 70°C (158°F), one or both power supplies will automatically shut down, and the storage management software will report the exception. Critical event notifications will also be issued if event monitoring is enabled and event notification is configured.




Figure 2-5  Back view - Power-fan canister

In the figure above, note the black connector when looking at the back of the canister. This connector connects to one of the controllers. The Power-Fan canister on the right has the connector at the top and therefore connects to controller A. The Power-Fan canister on the left is upside down with the connector on the bottom and therefore connects to controller B.

Information about the condition of the power supplies, fans and battery chargers is conveyed by indicator lights on the front of each Power-Fan canister. You must remove the front cover of the 6540 module to see the indicator lights.

Typically there is a one-to-one relationship between the Needs Attention/Service Action Required (SAR) and OK to Remove/Service Action Allowed (SAA) LEDs. However, there are exceptions. An example is if both Power-Fan canisters have a fault, one due to a power fault and the other due to a fan fault. The Power-Fan canister with the power fault should be removed and replaced first. If the Power-Fan canister with the fan fault were removed, the array would be left with no power. In this case, the Power-Fan canister with the fan fault would have the SAR LED ON, but the SAA LED OFF.




Interconnect-battery canister

Figure 2-6  6540 Battery interconnect canister LEDs

The purpose of the Interconnect-Battery canister is to serve as a mid-plane for pass-through of controller status lines, power distribution lines, and drive channels. Additionally, it contains the batteries to hold data in cache in the event of a loss of power, summary indicators for the entire storage array, and the audible alarm. The Interconnect-Battery canister contains:
• A removable mid-plane - provides cross-coupled signal connection between the controller canisters. The control output from each controller canister is connected to the control input of the alternate controller canister.
• Two battery packs - provide backup power to the controller cache memory. Each battery pack is sealed and contains two clusters of lithium ion batteries. Each battery pack is connected to both controllers - one cluster to controller A, the other to controller B. The battery pack voltage ranges from 9 to 13 V. When two battery packs are present, the 6540 storage array data cache is backed up for 3 days.
• Front bezel LEDs - the LEDs that are displayed through the front cover are located on the Interconnect-Battery canister.

Table 2-2  ICC LEDs

Light | Color | Normal Status | Problem Status
Battery Needs Attention | Amber | Off | On - Battery missing or failed
Power | Green | On - Array module is powered on | Off - Array module is powered off
Service Action Required | Amber | Off | On - A component in the array module has developed a fault. Inspect the Service Action Required lights on the other canisters to isolate the fault
Locate | White | Off | On - Command module locate
Service Action Allowed | Blue | Off | On - Safe to remove

Figure 2-7  Alarm mute button on ICC

Information about the condition of the interconnect-battery canister is conveyed by indicator lights on the front of the Interconnect-Battery canister.



The Power, Service Action Required, and Locate lights are general indicators for the entire command module, not specifically for the Interconnect-Battery canister. The Service Action Required light turns on if a fault condition is detected in any component in the controller module. The Power, Service Action Required, and Locate lights shine through the front cover. The Service Action Allowed LED is for the Interconnect-Battery canister itself.

Caution – Never remove the Interconnect-Battery canister unless directed to do so by Customer Support. Removing the Interconnect-Battery canister after either a controller or a Power-Fan canister has already been removed could result in loss of data access.

In the unlikely event an Interconnect-Battery canister must be replaced (for example, due to a bent pin or as a last resort to resolve a problem), the storage management software provides details on the procedure. Data access is limited to only one controller (controller A) when the Interconnect-Battery canister is removed. Removal of the Interconnect-Battery canister automatically suspends controller B, and all I/O is performed by controller A.

It is recommended that you prepare for the removal of the Interconnect-Battery canister instead of just pulling it out. Preparation involves:
• Placing controller B offline so that host failover software can detect the offline controller and re-route all I/O to controller A
• Turning ON the Service Action Allowed LED using the storage management software
• Removing and replacing the Interconnect-Battery canister
• Turning OFF the Service Action Allowed LED using the storage management software
• Placing controller B online and re-balancing the volumes




Interconnect-battery canister - Battery pack.

Figure 2-8  6540 Interconnect battery canister showing a single battery pack

The above figure shows the Interconnect-Battery canister with the access cover removed. For clarity, the picture shows only one battery pack; there would normally be two. The battery pack is mounted to a sheet metal bracket. You can see the flange at the end of the bracket closest to the access opening; grasp the flange to remove the battery pack. When replacing the battery pack, push it firmly into the Interconnect-Battery canister to ensure it completely engages with the connectors at the back of the canister.

Figure 2-9  Back view of Interconnect Battery Canister



The battery system spans all the canisters:
• The two battery packs are in the Interconnect-Battery canister. Half of each battery pack is dedicated to each controller.
• There are two charging circuits in each of the Power-Fan canisters - one charger for one battery cluster in each of the battery packs.
• A voltage regulator in each controller ensures the lithium ion batteries are not over-charged.

Power distribution and battery system

(The figure labels identify controller A and B with their cache memory voltage regulators, the left and right Power-Fan canister power supplies and battery chargers, and the interconnect canister battery packs.)

Figure 2-10

6540 as seen from the top, showing the power distribution

The 6540 module does not have a midplane (sometimes also referred to as a backplane) like the one found in all pre-sixth-generation Sun Storage 6140 and 6130 products. This diagram shows how the canisters are interconnected and gives an overview of how the power distribution and battery system work. Power from the left Power-Fan canister is distributed via controller B, and power from the right Power-Fan canister is distributed via controller A. Both controllers must be in place in order to provide redundant power to each controller.




Figure 2-11

Which component should be replaced first: Right power-fan canister or left power-fan canister?

Service Advisor procedures must be followed carefully if both the power supply connected to controller B (the left Power-Fan canister) and controller A fail. Removing controller A before replacing the failed Power-Fan canister will cause controller B to lose power, resulting in loss of data access. This occurs because power distribution from each Power-Fan canister is through the controller physically connected to that Power-Fan canister.

Figure 2-12

Which component should be replaced first: Controller A or left power-fan canister?



6540 controller canister highlights

Figure 2-13

6540 Controller diagram

Processors
• 6091-0901 controller model number (also referred to as 6998)
• Next-generation hardware XOR engine
• 2.4 GHz Xeon processor

Data cache
• Optional 2, 4, or 8 GB of cache per controller

Host channels
• Four independent 4Gb/s FC channels per controller (8 independent ports per dual-controller array)
• Auto-negotiate to 1, 2, and 4Gb/s speeds

Drive channels
• Two 4Gb/s FC loop switches per controller
• Run at 2Gb/s and/or 4Gb/s
• Auto-detect drive-side speed
• Can support both 2Gb/s and 4Gb/s expansion modules behind the same controller on different drive channels

Dual 10/100 Ethernet for out-of-band management
• One for customer out-of-band management
• One for service diagnostics and serviceability
• Totally isolated to prevent exposure to the customer's LAN

RS-232 interface for diagnostics

6540 controller canister

The 6540 command module has two 6998 controllers. Both controllers are identical. The controllers install from the rear of the command module. The top controller is controller A and the bottom controller is controller B. All connections to the hosts and the drives in the storage array are through the controller canisters. The host-side connections support fibre-optic connections. The drive-side connections support either copper or fibre-optic connections. The 6998 controller inside the controller canister is comprised of two circuit boards:
• The base controller board - contains the 2.4 GHz processor, the DIMM slots for cache memory, and two Emulex SOC 422 "loop switch" chips for the two drive channels on each controller. Each loop switch combines two loops together for one drive channel, and also provides an external connection for each loop.



• The host interface card - plugs into the base controller board and provides the four 4Gb/s host-side connections. In the future there will be several variations of the host interface card, allowing the customer to order a host interface card with the number, type and speed of host connections that meets their needs.

Figure 2-14

6540 controllers - Back view

Each 6540 controller canister provides the following connections and LED indicators which are described in detail in the following sections:

• Four 4Gb/s host interface ports
• Four 4Gb/s disk expansion ports
• Dual 10/100 Base-T Ethernet ports with EEPROM
• Serial port connector
• Seven-segment display
• Controller service indicators
• AC or DC power (DC power connector present, but DC power not currently implemented)




6540 4Gb/s host interface ports

Figure 2-15

6540 host channels

The 6540 storage array has eight 4Gb/s FC-AL host or FC-SW SAN connections (4 per controller).
• The host-side connections perform link speed negotiation on each host channel port (also referred to as auto-negotiation) for 4, 2, or 1Gb/s FC host connectivity speeds, resulting in minimal or no impact on the existing storage network. Link speed negotiation for a given host channel is limited to the link speeds supported by the Small Form-factor Pluggable (SFP) transceiver on that channel. The controllers enter auto-negotiation at these points in time:
  • Controller boot-up sequence
  • Detection of a link-up event after a previous link-down event
If the auto-negotiation process fails, the controllers consider the link to be down until negotiation is again attempted at one of these points in time. For a 4-Gb controller, the supported link speeds are 1, 2, and 4Gb/s.




Auto-negotiation The Fibre Channel host interface performs link speed negotiation on each host channel Fibre Channel port. This process, referred to as auto-negotiation, means that it will interact with the host or switch to determine the fastest compatible speed between the controller and the other device. The fastest compatible speed will become the operating speed of the link. If the device on the other end of the link is a fixed speed device or is not capable of negotiating, the controller will automatically detect the operating speed of the other device and set its link speed accordingly.
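The negotiation behavior described above can be illustrated with a short sketch. This is not controller firmware; the function and its names are invented for illustration, and only the speeds named in this course (1, 2, and 4Gb/s) are assumed.

# Illustrative sketch only: how FC link-speed auto-negotiation settles on a rate.
SUPPORTED_SPEEDS_GBPS = (1, 2, 4)   # speeds a 4-Gb controller can negotiate

def negotiate_link_speed(controller_speeds, peer_speeds=None, peer_fixed_speed=None):
    """Return the operating speed of the link, or None if negotiation fails.

    controller_speeds: speeds the controller port supports (limited by its SFP)
    peer_speeds:       speeds the host or switch port can negotiate, if it negotiates
    peer_fixed_speed:  detected speed of a fixed-speed (non-negotiating) device
    """
    if peer_fixed_speed is not None:
        # Fixed-speed peer: the controller detects the peer's rate and matches it.
        return peer_fixed_speed if peer_fixed_speed in controller_speeds else None
    common = set(controller_speeds) & set(peer_speeds or ())
    # The fastest compatible speed becomes the operating speed of the link.
    return max(common) if common else None

# Example: a 4-Gb controller talking to a 1/2-Gb HBA settles on 2 Gb/s.
print(negotiate_link_speed(SUPPORTED_SPEEDS_GBPS, peer_speeds=(1, 2)))  # -> 2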

6540 4Gb/s disk expansion ports

Figure 2-16

Disk expansion ports

Each 6540 controller canister has two dual-ported drive channels: each drive channel has two ports, and each port has an external connection, so each 6540 controller provides four drive-side port connections. The connections for Channel 1 and Channel 2 are on controller A. The connections for Channel 3 and Channel 4 are on controller B. When attaching expansion modules, it is important that the expansion module is cabled to a drive channel on each controller to ensure redundancy.



The drive channels can operate at 2Gb/s or 4Gb/s. The drive channels perform link speed detection (which is different from link speed negotiation): the controller automatically matches the link speed of the attached expansion modules. Drive channels can operate at different link speeds, but both ports of a single channel must run at the same speed. Two LEDs indicate the speed of the channel of the disk drive ports, as shown in the figure below.

Figure 2-17

Disk expansion ports

The behavior of the LEDs is as follows (a small decode sketch follows the list):
• When both LEDs are OFF, there is no FC connection or the link is down.
• When the left LED is OFF and the right LED is ON, the port is operating at 2Gb/s.
• When both LEDs are ON, the port is operating at 4Gb/s.
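As referenced above, the LED states map directly to a port speed. The following is a minimal, illustrative decode helper; the function name and return strings are assumptions, not part of any Sun software.

# Minimal sketch: decode the two drive-port speed LEDs described above.
def drive_port_speed(left_led_on: bool, right_led_on: bool) -> str:
    if not left_led_on and not right_led_on:
        return "no connection / link down"
    if not left_led_on and right_led_on:
        return "2 Gb/s"
    if left_led_on and right_led_on:
        return "4 Gb/s"
    return "unexpected LED state"   # left on / right off is not a documented state

print(drive_port_speed(False, True))   # -> 2 Gb/s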

Fibre Channel port by-pass indicator The fibre channel port by-pass indicator has two settings: on and off.

Figure 2-18

Port By-Pass Indicator

When the indicator is off, either no SFP is installed or the port is enabled. When it is on (AMBER), no valid device is detected and the channel or port is internally bypassed.




6540 drive channels and loop switches

Figure 2-19

6540 drive channels and loop switches

Each drive port is capable of delivering 400 MB/s of bandwidth; however, both ports of a loop switch (one channel) will run at the same link speed - either both ports will run at 4Gb/s or 2Gb/s. Each controller has two dual-ported 4Gb/s FC Chips. Each FC Chip is attached to a Loop Switch chip on both Controller A and Controller B, therefore both controllers are connected to all four Drive Channels. Both the FC Chips and the Loop Switch chips support concurrent full link speed on both ports of each chip. Each Loop Switch chip represents a Drive Channel. Each Drive Channel can support a maximum of 126 devices (drives, IOMs and controllers). The 6540 storage array supports a maximum of 224 disks.



Each drive channel comprises two independent loops, represented by the two ports per drive channel.

Figure 2-20

Each drive channel has 2 ports

Host and drive side cabling will be covered after a hardware overview of the CSM200 expansion module.

Dual 10/100 Base-T Ethernet ports with EEPROM

Figure 2-21

Ethernet status LEDs

The 6540 has two RJ-45 ports per controller canister. Ethernet port 1 should be used for the management host, while port 2 is reserved for service diagnostics; it is recommended practice not to use port 2 for management of the storage array. Default IP addresses (the default subnet mask is 255.255.255.0; a small scripting example follows the list):
• Controller A interface 0: 192.168.128.101
• Controller A interface 1: 192.168.129.101
• Controller B interface 0: 192.168.128.102
• Controller B interface 1: 192.168.129.102
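The scripting example mentioned above is a minimal sketch that checks whether a management host address shares a subnet with one of the factory-default controller addresses. The helper name is invented; only the default addresses and netmask listed above are taken from the course.

# Illustrative only: check that a management station can reach a controller's
# factory-default address on the same subnet before first configuration.
import ipaddress

DEFAULT_NETMASK = "255.255.255.0"
DEFAULTS = {
    "controller_a_port1": "192.168.128.101",
    "controller_b_port1": "192.168.128.102",
}

def same_subnet(host_ip: str, controller_ip: str, netmask: str = DEFAULT_NETMASK) -> bool:
    host_net = ipaddress.ip_network(f"{host_ip}/{netmask}", strict=False)
    return ipaddress.ip_address(controller_ip) in host_net

print(same_subnet("192.168.128.50", DEFAULTS["controller_a_port1"]))  # True
print(same_subnet("10.0.0.5", DEFAULTS["controller_b_port1"]))        # False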

Table 2-3

Ethernet port LEDs

Light | Color | Normal Status
Ethernet Link Speed | Green LED | Off = 10 Base-T; On = 100 Base-T
Ethernet Link Activity | Green LED | Off = no link established; On = link established; Blinking = activity

Serial port connector

Figure 2-22

6540 controller serial ports

To access the serial port, use an RS-232 DB9 null modem serial cable. This port is used to access the Service Serial Interface, which is used for viewing or setting a static IP address for the controllers. This interface can also clear the storage array password.

Figure 2-23


RS232 null modem cable
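Where a scripted connection to the Service Serial Interface is wanted, a pyserial session similar to the following sketch could be used. The device name, baud rate, and framing shown here are assumptions only; confirm the actual serial settings in the service documentation before use.

# Illustrative sketch only: opening the service serial interface with pyserial.
import serial  # pip install pyserial

with serial.Serial(
    port="/dev/ttyS0",      # assumption: first serial port on the service laptop
    baudrate=38400,         # assumption: verify against the service documentation
    bytesize=serial.EIGHTBITS,
    parity=serial.PARITY_NONE,
    stopbits=serial.STOPBITS_ONE,
    timeout=5,
) as console:
    console.write(b"\r")            # wake the service interface
    print(console.read(256).decode(errors="replace"))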




Seven segment display

Figure 2-24

Seven-segment display and heartbeat

The numeric display consists of two seven-segment LEDs that provide information about module identification and diagnostics. When the controller module is operating normally, the numeric display shows the module identification (module ID) of the controller module. The controller module ID is intentionally set in the range 80-99 by the controller firmware and automatically adjusts during power-on to avoid conflicts with existing expansion module IDs. You cannot change the controller module ID through the storage management software, and it should not be changed to an ID below 80, because the module will not work properly. Each digit of the numeric display has a decimal point and is rotated 180 degrees relative to the other digit. With this orientation, the display looks the same regardless of controller orientation. The numeric display shown in Figure 2-24 shows the module identification (module ID) or a diagnostic error code. The heartbeat is the small decimal point in the lower right-hand corner of the first digit; when the heartbeat is blinking, the number displayed is the module ID. The diagnostic light is the small decimal point in the upper left-hand corner of the second digit; when the diagnostic light is solid amber, the number displayed is a diagnostic code.
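A small sketch can summarize the rule above for reading the display: a blinking heartbeat decimal means the digits show the module ID, and a lit diagnostic decimal means they show a diagnostic code. The helper below is illustrative only, not controller firmware.

# Illustrative sketch only: interpreting the seven-segment display described above.
def interpret_display(digits: str, heartbeat_blinking: bool, diagnostic_on: bool) -> str:
    if heartbeat_blinking and not diagnostic_on:
        return f"module ID {digits}"
    if diagnostic_on:
        return f"diagnostic code {digits}"
    return "display state indeterminate - check controller status"

print(interpret_display("85", heartbeat_blinking=True, diagnostic_on=False))
print(interpret_display("H4", heartbeat_blinking=False, diagnostic_on=True))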



The module ID is an attribute of the 6540 command module; both controllers display the same module ID. It is possible, however, that one controller will display the module ID while the other controller displays a diagnostic code.
• Power-on behavior - The Diagnostic Light, the Heartbeat Light, and all seven segments of both digits are on when a power-on or reset occurs. The module ID display may be used to temporarily display diagnostic codes after each power cycle or reset. The Diagnostic Light remains on until the module ID is displayed. After diagnostics are completed, the current module ID is displayed.
• Diagnostic behavior - Diagnostic codes in the form of Lx or Hx, where x is a hexadecimal digit, are used to indicate state information. In general, these codes are displayed only when the canister is in a non-operational state. The canister may be non-operational due to a configuration problem (such as mismatched controller types), or due to hardware faults. If the controller is non-operational due to array configuration, the controller Fault Light will be off. If the controller is non-operational due to a hardware fault, the controller Fault Light will be on.

Table 2-4

Diagnostic codes

Value | Description
-- | Boot FW is booting up
FF | Boot diagnostic executing
88 | This controller/IOM is being held in reset by the other controller/IOM
AA | ESM-A application is booting
bb | ESM-B application is booting
L0 | Mismatched IOM types
L2 | Persistent memory errors
L3 | Persistent hardware errors
L9 | Over temperature
H0 | SOC (Fibre Channel interface) failure
H1 | SFP speed mismatch (2Gb SFP installed when operating at 4Gb)
H2 | Invalid/incomplete configuration
H3 | Maximum reboot attempts exceeded
H4 | Cannot communicate with the other IOM
H5 | Mid-plane harness failure
H6 | Firmware failure
H7 | Current module Fibre Channel rate different than rate switch
H8 | SFP(s) present in currently unsupported slot (2A or 2B)
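When capturing display readings in notes or scripts, the codes in Table 2-4 lend themselves to a simple lookup. The mapping below merely restates the table; the dictionary and helper function are illustrative and are not part of the storage management software.

# Illustrative lookup of the diagnostic codes from Table 2-4.
DIAGNOSTIC_CODES = {
    "--": "Boot FW is booting up",
    "FF": "Boot diagnostic executing",
    "88": "Held in reset by the other controller/IOM",
    "AA": "ESM-A application is booting",
    "bb": "ESM-B application is booting",
    "L0": "Mismatched IOM types",
    "L2": "Persistent memory errors",
    "L3": "Persistent hardware errors",
    "L9": "Over temperature",
    "H0": "SOC (Fibre Channel interface) failure",
    "H1": "SFP speed mismatch (2Gb SFP installed when operating at 4Gb)",
    "H2": "Invalid/incomplete configuration",
    "H3": "Maximum reboot attempts exceeded",
    "H4": "Cannot communicate with the other IOM",
    "H5": "Mid-plane harness failure",
    "H6": "Firmware failure",
    "H7": "Current module Fibre Channel rate different than rate switch",
    "H8": "SFP(s) present in currently unsupported slot (2A or 2B)",
}

def describe(code: str) -> str:
    return DIAGNOSTIC_CODES.get(code, "Unknown code - consult the storage management software")

print(describe("H4"))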

Controller service indicators

Figure 2-25

Controller service indicator LEDs

Service Action Allowed (SAA) LED

Figure 2-26

Service Action Allowed LED

• Normal status is OFF.



• Problem status is ON - OK to remove the canister. A service action can be performed on the designated component with no adverse consequences (BLUE).

Each drive, power-fan, and controller/IOM canister has a Service Action Allowed light. The Service Action Allowed light lets you know when you can remove a component safely. Caution – Potential loss of data access – Never remove a drive, power-fan, controller, or IOM canister unless the Service Action Allowed light is turned on.
• If a drive, power-fan, or controller/IOM canister fails and must be replaced, the Service Action Required (Fault) light on that canister turns on to indicate that service action is required. The Service Action Allowed light will also turn on if it is safe to remove the canister. If there are data availability dependencies or other conditions that dictate that a canister should not be removed, the Service Action Allowed light will remain off.

The Service Action Allowed light automatically turns on or off as conditions change. In most cases, the Service Action Allowed light turns on when the Service Action Required (Fault) light is turned on for a canister. Note – IMPORTANT. If the Service Action Required (Fault) light is turned on but the Service Action Allowed light is turned off for a particular canister, you might have to service another canister first. Check your storage management software to determine the action you should take.
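The removal rule above reduces to a small decision: a canister may be pulled only when its Service Action Allowed LED is on. The sketch below is illustrative only; the function name and messages are invented.

# Illustrative decision helper for the SAA/SAR rule described above.
def next_service_step(sar_on: bool, saa_on: bool) -> str:
    if saa_on:
        return "Safe to remove this canister."
    if sar_on:
        return ("Fault present but removal not allowed; another canister may need "
                "service first - check the storage management software.")
    return "No service action indicated for this canister."

print(next_service_step(sar_on=True, saa_on=False))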

Service Action Required (SAR) LED (fault)

Figure 2-27

Service Action Required LED

• Normal status is OFF.
• Problem status is ON. A condition exists that requires service; the canister has failed. Use the storage management software to diagnose the problem (AMBER).




Cache Active indicator

Figure 2-28

Cache Active indicator

• OFF - No data is in cache; all cache data has been written to disk.
• ON (GREEN) - Data is in cache.

Table 2-5

Summary of 6540 controller canister LED definitions

Light | Color | Normal Status | Problem Status
Host Channel Speed (L1 and L2 LEDs) | Green | L1 Off, L2 Off = no connection / link down; L1 On, L2 Off = 1Gb/s; L1 Off, L2 On = 2Gb/s; L1 On, L2 On = 4Gb/s | Not applicable
Drive Port Bypass (one light per port) | Amber | Off | On = bypass
Drive Channel Speed (L1 and L2 LEDs) | Green | L1 Off, L2 Off = no connection / link down; L1 Off, L2 On = 2Gb/s; L1 On, L2 On = 4Gb/s | Not applicable
Drive Port Bypass (one light per port) | Amber | Off | On = bypass
Service Action Allowed (SAA) | Blue | Off | On = controller safe to remove
Service Action Required (SAR) | Amber | Off | On = controller needs attention
Cache Active LED | Green | On = data in cache; Off = no data in cache | Not applicable
Ethernet Link Speed | Green | Off = 10 Base-T; On = 100 Base-T | Not applicable
Ethernet Link Activity | Green | Off = no link established; On = link established; Blinking = activity | Not applicable
Numeric Display (Module ID and Diagnostic Display) | Green/yellow seven-segment display | Diagnostic LED off = module ID | Diagnostic LED on = diagnostic code

6540 summary
• When viewing the 6540 controller module from the front, the canisters are arranged as follows:
  • Two power-fan canisters: the left canister is right-side up and provides power through controller B; the right canister is upside down and provides power through controller A
  • Each power-fan canister contains a power supply, two fans, two battery chargers and a thermal sensor
  • One interconnect-battery canister that contains two battery packs, an audible alarm and the front bezel LEDs
• When viewed from the rear, the top canister is controller A and the bottom canister (inverted) is controller B
• All 6540 replaceable components are fully redundant and hot-swappable
• In the 6540 controller module, the interconnect-battery canister serves as a pass-through for controller status lines, power distribution lines and drive channels, as well as supplying a back-up power source (battery) for the cache memory
• In the 6540 controller module, each controller canister provides housing for the controller, data cache chips, four independent host channels, four drive channels per storage system, two 10/100 Ethernet ports and one RS-232 serial port
• Drive channels 1 and 2 are located on controller A; drive channels 3 and 4 are located on controller B



• FC host ports perform link-speed negotiation between the controller and the host bus adapter or FC switch port on each host channel
• To avoid data loss, the Service Action Allowed (SAA) LED must be on before removal of any canister in the 6540 module



Knowledge check

Complete the following:

1. Identify the module shown above. _______________________________

2. Using the letters, identify the parts of the component shown above.
A _______________________________________ B _______________________________________ C _______________________________________ D _______________________________________ E _______________________________________ F _______________________________________

(The figure for questions 3.a and 3.b shows the disk expansion ports P1 and P2 of Channel 2 on controllers B and A, with their speed LEDs.)

3.a. If both LEDs in the middle are on, what speed is the port operating at?

3.b. What is the function of the LEDs to the far left and far right?



4. Explain how module IDs are set. How can you change them?

5.a. Why are there two Ethernet ports?

5.b. Which port should be used for normal operation? __________________

6. Why should you never remove the Interconnect-Battery canister without Customer Support approval?

7. Power from the left power-fan canister is distributed via controller _______. Power from the right power-fan canister is distributed via controller ________.

8. If one drive port on one channel operates at 4Gb/s link speed and the other operates at 2Gb/s, what will be the speed for both ports?

9. What is meant when a port is said to be able to "auto negotiate"?

10. Where can you find the "heart beat" of the controller?

11. What is the default controller module ID?



Module 3

Sun Storage 6140 product overview

Objectives

Upon completion of this module, you will be able to:
• Describe the Sun Storage 6140 key features
• Identify the hardware components of the 6140
• Describe the functionality of the 6140 controller module
• Interpret LEDs for proper parts replacement




Sun Storage 6140 product overview

Today's open systems environments create unique challenges for storage arrays. Round-the-clock processing requires the highest availability and online administration. Varying applications result in a range of performance requirements, from transaction-heavy (I/O per second) to throughput-intensive (MB per second). Unpredictable capacity growth demands efficient scalability. Finally, the sheer volume of storage in today's enterprise requires centralized administration and simple storage management. There is a storage array designed specifically to address the needs of the open systems environment: the Sun Storage 6140. The 6140 storage array is a high-performance, enterprise-class, full 4-gigabit-per-second (Gb/s) Fibre Channel/SATA II solution that combines outstanding performance with the highest reliability, availability, flexibility and manageability. The Sun Storage 6140 storage array is modular, rack mountable and scalable from a single controller module (CRM) to a maximum of six additional expansion modules (CEM).

Figure 3-1

Sun Storage 6140

The Sun Storage 6140 storage array offers these features:
• End-to-end 4-Gb/s FC
• Mix FC and SATA II in the same module
• 8 host ports (4 per controller)
• 16 drives per module



• 112 drives in 7 modules (1 controller module and 6 expansion modules)
• 4GB cache (2GB per controller)
• 120K IOPS, 1,500 MB/s
• Serviceability:
  • Battery is a separate FRU
  • RS-232 interface on the IOM
  • Removable drive cage
  • ANSI standard LEDs
• RoHS compliant

Compare the Sun StorEdge™ 6130 and the Sun Storage 6140 Arrays

The Sun Storage 6140 storage array is comprised of two module types, the controller module (CRM) and the expansion module (CEM). Both module types utilize the Common Storage Module 2 (CSM2) and are differentiated by the module in the controller bay. A CRM uses a RAID controller whereas a CEM uses an I/O Module (IOM).

Table 3-1

Comparison Chart: 6130 and 6140 differences

Item | StorEdge™ 6130 | Sun Storage 6140-2 | Sun Storage 6140-4
Controller CPU processor | 600 MHz Xscale w/ XOR | 667 MHz Xscale w/ XOR | 667 MHz Xscale w/ XOR
Host ports | 1/2 Gb, 2 per ctlr | 1/2/4 Gb, 2 per ctlr | 1/2/4 Gb, 4 per ctlr
Expansion ports | 1 per ctlr | 2 per ctlr | 2 per ctlr
Controller cache | 1 GB per ctlr | 1 GB per ctlr | 2 GB per ctlr
Ethernet ports | 1 per ctlr | 2 per ctlr | 2 per ctlr
Controller | 2882 | 3992 | 3994
Expansion module IOM | FC or SATA | FC | FC
Disk drives per module | 14 | 16 | 16
Disk types | 1/2 Gb: FC, SATA | 2/4 Gb: FC, SATA II | 2/4 Gb: FC, SATA II
Expansion modules | 7 | 3 | 6
Maximum disks | 112 | 64 | 112
Maximum hosts | 256 | 512 (256 redundant) | 512 (256 redundant)
Maximum volumes | 1024 | 1024 | 1024
Burst I/O rate, cache read | 77,000 | 120,100 | 120,000
Sustained I/O rate, disk read | 25,000 | 30,235 | 44,000
Sustained I/O rate, disk write | 5,000 | 5,789 | 9,000
Burst throughput, cache read | 800 MB/s | 1,270 MB/s | 1,500 MB/s
Sustained throughput, disk read | 400 MB/s | 750 MB/s | 990 MB/s
Sustained throughput, disk write | 300 MB/s | 698 MB/s | 850 MB/s

Hardware components of the Sun Storage 6140

The Sun Storage 6140 storage array is comprised of two main modules, the controller module and the expansion module. The expansion module is also known as the Common Storage Module 2 (CSM2).



Each module has 16 FC or SATA II drives, a switched architecture, two power-fan canisters and a removable drive cage. The controller module can be a stand-alone storage array, or you can add up to six expansion modules. The difference between the controller module and the CSM2 is the controller canisters versus the Input/Output Modules (IOMs). Two dual-active controllers are located in the controller module. Each controller canister has:
• 2 or 4 host/SAN connections - 1, 2, or 4Gb/s speed, auto-negotiated
  • 6140-2 has 2 host ports
  • 6140-4 has 4 host ports
• 2 expansion ports - 2 or 4Gb/s (set by the link rate switch on the front of the module)
• 2 Ethernet connections - 1 for storage management, 1 reserved for service
• Serial port
• 7-segment display for module ID and diagnostics

Two Input/Output Modules (IOMs) are located in the expansion module (CSM2). Each IOM has:
• 2 expansion ports - 2 or 4Gb/s (set by the link rate switch on the front of the module)
• Serial port
• 7-segment display for module ID and diagnostics
• 2 expansion ports reserved for future functionality



The figure below shows a block diagram for the Sun Storage 6140 and CSM200 expansion module. The blocks represent the placement of drives, the drive cage, the power-fan canisters and either the controller canister or IOM.

Figure 3-2

Diagram of Sun Storage 6140

Hardware overview

This section describes the main components of the Sun Storage 6140 controller module (CRM).

Controller module

The controller module contains up to 16 drives, two controller canisters, two power-fan canisters and a removable drive cage.




Field replaceable drive cage

The field replaceable drive cage holds sixteen 3.5 inch drives. The mid-plane is located on the back of the cage as shown below.

Figure 3-3

Drive cage

The front of the controller module has a molded frame that contains the global lights and the Link Rate switch.

Figure 3-4

Sun Storage 6140 front view

Drive field replaceable unit (FRU)

Each disk drive is housed in a removable, portable canister. The FC drives are low-profile, hot-swappable, dual-ported Fibre Channel disk drives.



The SATA II drives utilize the same canister as the FC drives, but since SATA II drives are single-ported, an additional SATA II Interface Card (SIC) is added to the rear of the canister. The card provides a Fibre Channel connector, simulates a dual-port configuration, provides 3Gb/s-to-4Gb/s buffering, and performs SATA II to FC protocol translation. The SATA II drive negotiates between 2Gb and 4Gb based on the setting of the Link Rate switch on the module.

Figure 3-5

SATA II Interface Card (SIC)

The drives are removed by gently lifting on the lower portion of the handle, which releases the handle. Caution – Only add or remove drives while the storage array is powered on. The drive should not be physically removed from its slot until it has stopped spinning. Release the handle of the drive to pop the drive out of its connector. The drive can be removed from its slot after it has spun down; this usually takes 30 to 60 seconds. Disk drive options include:
• 10K RPM FC drives
  • 2Gb/s interface
  • 146 GB and 300 GB
• 15K RPM FC drives
  • 2Gb/s or 4Gb/s interface
  • 73 GB, 146 GB and 300 GB
• 7.2K RPM SATA II drives
  • 3Gb/s drive with a 4Gb/s interface
  • Native command queuing
  • 500 GB, 750 GB and 1 TB




DACstore

DACstore is written to all online, non-failed drives in the storage array and, beginning with FW 7.xx, contains a complete configuration database with all the information needed to identify the virtual disk each drive belongs to (as well as the other drives that are part of the virtual disk) and the volumes stored on it. Information stored in the DACstore database includes (a small data-model sketch follows the list):
• Database identifier
• Controller serial numbers
• Drive state and status
• Storage array world wide ID (WWID)
• Volumes contained on the drive
• Volume state and status
• Virtual-disk definitions
• Controller serial numbers (used to enable the storage system to determine whether the controllers are native or foreign to the storage system)
• Failed drives
• Global hot spares state and status
• Storage array password
• Media scan rate
• Cache configuration of the storage system
• Storage system user label
• Event logs for controller events
• Volume-to-LUN mappings, host type mappings and other information used by storage domains
• Copy of the current controller NVSRAM values
• Permissions allowed for the controller
• Premium feature keys

Note – Starting with firmware 7.xx, premium feature keys are no longer imported if a v-disk is migrated from one storage system to another.
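The data-model sketch mentioned above shows a few of the listed fields expressed as a Python dataclass. The field names, types, and sample values are invented for illustration; the actual on-disk DACstore format is not documented here.

# Illustrative data model only; not the real DACstore layout.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class DacstoreRecord:
    database_id: str
    array_wwid: str
    controller_serials: List[str]
    drive_state: str                      # e.g. "optimal", "failed"
    vdisk_definitions: List[str]          # virtual disks this drive participates in
    volume_to_lun: Dict[str, int] = field(default_factory=dict)
    media_scan_rate_days: int = 30

record = DacstoreRecord(
    database_id="db-0001",
    array_wwid="600A0B80001234560000000000000000",
    controller_serials=["SN-A", "SN-B"],
    drive_state="optimal",
    vdisk_definitions=["Email"],
    volume_to_lun={"Email-vol0": 0},
)
print(record.vdisk_definitions)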



The major advantage of the upgrade to DACstore in FW 7.xx is that, because a copy of the configuration database is stored on every drive in the storage array, changes to the database are replicated to all drives. This means all drives have a complete "view" of the storage array. If for any reason an interruption occurs during the update process, either all changes or no changes are made to the database. Upgrades to the new DACstore must be complete; in other words, all drives in the array must be upgraded.

Sundry drive

Because pre-7.xx firmware does not store a complete configuration database in DACstore, it creates a special drive in each virtual disk called a sundry drive. This drive acts as a "master DACstore" drive that stores all the information about all the drives in the storage system (similar to the information that the configuration database stores). The controllers assign a minimum of three sundry drives in a storage system and at least one sundry drive to each virtual disk. There is no limit on the maximum number of sundry drives in a storage system. Sundry drive information resides within the DACstore area, and the controllers make every attempt to assign the sundry drives to drives on different channels. Because sundry drives are included in every virtual disk, this guarantees that if a v-disk is removed for migration, at least one sundry drive migrates to the new destination. After migration, all drives' DACstores are merged into the new storage system on import.

DACstore benefits

Figure 3-6


Customer starts with two disk modules




Figure 3-7

Months later the customer adds two more modules

Figure 3-8

Drives can be rearranged for optimization




Data intact migration for virtual disks


Figure 3-9

Migrate the Email virtual disk

Figure 3-10

Export the Email virtual disk to a new storage array




Figure 3-11

Import the Email virtual disk to the new storage array

Disk drive LEDs

Figure 3-12

Disk drive LEDs

The LEDs are:
• Drive Service Action Allowed (BLUE): Normal status is OFF. Problem status is ON; if this LED is on, it is OK to remove the drive.
• Drive Fault (AMBER): Normal status is OFF. BLINKING indicates the drive, volume or storage array locate function. Problem status is ON.
• Drive Active (GREEN): ON (not blinking) means no data is being processed. BLINKING means data is being processed. Problem status is OFF.




Global controller module and expansion module LEDs

Each component in a module has LEDs that indicate functionality for that individual component. Global LEDs indicate functionality for the entire module.

Figure 3-13

Global LEDs on the 6140 controller or expansion module

The global LEDs are as follows:
1. Global Locate (WHITE): Normal status is OFF. On only when the user is performing the locate function.
2. Global Summary Fault (AMBER): Normal status is OFF. Problem status is ON.
3. Global Power (GREEN): Normal status is ON. Problem status is OFF.

Alarm mute button

The 6140 controller and expansion module have a configurable audible alarm. The Alarm Mute button is located on the front bezel to the right of the global LEDs, as shown below. The audible alarm is turned off by a setting in NVSRAM for the Sun Storage 6140.

Figure 3-14


Alarm mute button




Link Rate switch

The Link Rate switch shown below enables you to select the data transfer rate between the IOMs, drives and controllers. Setting the Link Rate switch determines the speed of the back-end drive channel.

Figure 3-15

Link Rate switch

Important things to remember (a small decision sketch follows this list):
• The correct position is 4Gb/s to the left; 2Gb/s to the right.
• All modules of a 6140 must be set to operate at the same data transfer rate.
• The drives in the controller and expansion module must support the selected link rate speed.
• If a 6140 is set to operate at 4Gb/s:
  • A 2Gb/s drive will be bypassed.
  • If an expansion module is set to operate at 2Gb/s, the module will be bypassed.
• If a 6140 is set to operate at 2Gb/s, all 4Gb/s drives will operate at 2Gb/s.

Caution – Change the Link Rate switch only when there is no power to the CRM or CEM module.
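The decision sketch referenced above restates the link-rate rules in code form. The helper name and return strings are illustrative; only the 2Gb/s and 4Gb/s behaviors listed in this section are assumed.

# Illustrative sketch of the back-end link-rate rules listed above.
def backend_component_state(switch_gbps: int, component_gbps: int, is_expansion_module: bool = False) -> str:
    if switch_gbps == 4:
        if is_expansion_module and component_gbps == 2:
            return "module bypassed"
        if not is_expansion_module and component_gbps == 2:
            return "drive bypassed"
        return "runs at 4 Gb/s"
    if switch_gbps == 2:
        return "runs at 2 Gb/s"   # 4 Gb/s drives operate at 2 Gb/s
    raise ValueError("Link Rate switch supports only 2 or 4 Gb/s")

print(backend_component_state(4, 2))                            # drive bypassed
print(backend_component_state(4, 2, is_expansion_module=True))  # module bypassed
print(backend_component_state(2, 4))                            # runs at 2 Gb/s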



Back view of controller module

The Sun Storage 6140 controller module is 4Gb/s capable and comes in two versions. At the back of the controller module, the controller canisters and the power-fan canisters on the top are inverted 180 degrees from the canisters on the bottom, as shown in Figure 3-16. In a fully configured array, the replaceable components are fully redundant. If one component fails, its counterpart can maintain operations until you replace the failed component.

Figure 3-16

Sun Storage 6140-2

The Sun Storage 6140-2 with two host ports is 4Gb capable, front and back. The 6140-2 auto-negotiates 1Gb, 2Gb and 4Gb speeds on the host side. With dual controllers, there are a total of 4 host ports per storage array. The 6140-2 controller has 1GB of cache memory. The two expansion ports support 2Gb or 4Gb speeds selected by the Link Rate Switch.

Figure 3-17


Sun Storage 6140-4



The Sun Storage 6140-4 with 4 host ports is 4Gb capable, front and back. The controller auto-negotiates 1Gb, 2Gb and 4Gb speeds. With dual controllers, there are a total of 8 host ports per storage array. Each 6140-4 controller has 2GB of cache memory. The two expansion ports support 2Gb or 4Gb speeds, selected by the Link Rate switch. Caution – Never insert a 6140-2 controller and a 6140-4 controller into the same unit. This will cause the storage array to become inoperable.

6140 controller module details

Figure 3-18

6140 back view

• The top-left controller (A) is inverted from the bottom-right controller (B).
• The top-right power-fan canister is inverted from the bottom-left power-fan canister.
• The battery is located in its own separate removable FRU, to the left of controller A and to the right of controller B.




The 6140 controller canister

Figure 3-19

6140-4 controller canister

FC host ports

The Fibre Channel host port LEDs indicate the speed of the ports, as shown below.

Figure 3-20

Host port LEDs

The ports:
• Support speeds of 1/2/4Gb/s using the Agilent DX4+
• Auto-negotiate for speed
• Host port LEDs - two LEDs indicate the speed of the port:
  • OFF and OFF = no connection / link down
  • ON and OFF = 1Gb/s (GREEN)
  • OFF and ON = 2Gb/s (GREEN)
  • ON and ON = 4Gb/s (GREEN)




Auto-negotiation

The Fibre Channel host interface performs link speed negotiation on each host channel Fibre Channel port. This process, referred to as auto-negotiation, means that it will interact with the host or FC switch to determine the fastest compatible speed between the controller and the other device. The fastest compatible speed becomes the operating speed of the link. If the device on the other end of the link is a fixed-speed device or is not capable of negotiating, the controller automatically detects the operating speed of the other device and sets its link speed accordingly.

Dual 10/100 Base-T Ethernet ports with EEPROM

Figure 3-21

Ethernet status LEDs

Ethernet port 1 should be used for management, while port 2 is reserved for service; it is recommended practice not to use port 2 for management of the storage array.

Light | Color | Normal Status
Ethernet Link Speed | Green LED | Off = 10 Base-T; On = 100 Base-T
Ethernet Link Activity | Green LED | Off = no link established; On = link established; Blinking = activity




Serial port connector

To access the serial port, use an RS-232 DB9-to-MINI DIN 6 null modem serial cable. This port is used to access the Service Serial Interface, which is used for viewing or setting a static IP address for the controllers. This interface can also clear the storage array password.

Figure 3-22

Serial port connector

The illustration below shows the RS232 DB9-MINI DIN 6. Use with a null modem cable for serial port access.

Figure 3-23

RS232 DB9-MINI DIN 6

Dual disk expansion ports

Two LEDs indicate the speed of the channel of the disk drive ports, as shown below.

Figure 3-24

Disk expansion ports

The behavior of the LEDs is as follows:
• When both LEDs are OFF, there is no FC connection or the link is down.



• With the left LED in the OFF position and the right LED in the ON position, the port is at 2Gb/s.
• When both LEDs are in the ON position, the port is at 4Gb/s.

Fibre Channel port by-pass indicator

The Fibre Channel port by-pass indicator has two settings: on and off. The figure below shows the indicator.

Figure 3-25

Port by-pass indicator

When the indicator is off, either no SFP is installed or the port is enabled (no light). When it is on (AMBER), no valid device is detected and the channel or port is internally bypassed.

Seven-segment display

Each controller module has a pair of seven-segment displays that form a two-digit display. Each digit has a decimal point and is rotated 180 degrees relative to the other digit. With this orientation, the display looks the same regardless of controller orientation. The numeric display, as shown below, shows the module identification (module ID) or a diagnostic error code.

Figure 3-26

Seven-segment display and heartbeat

The heartbeat is the small decimal on the lower right hand corner of the 1st digit. The diagnostic light is the small decimal in the upper left hand corner of the 2nd digit.



The controller module ID is set at 85 by the controller firmware. The controller module ID should not be changed to an ID below 80, because the module will not work properly. The expansion module IDs are automatically set during power-on to avoid conflicts with existing expansion module IDs. Values on each display are shown as if the digits had the same orientation. During normal operation, the seven-segment display shows the module ID. The display may also be used for diagnostic codes. The Diagnostic Light (upper digit decimal point) indicates the current usage; it is off when the display is showing the current module ID. The module ID is an attribute of the module; in other words, both controllers always display the same module ID. It is possible, however, that one controller may display the module ID while the other controller displays a diagnostic code.
• Power-on behavior - The Diagnostic Light, the Heartbeat Light, and all seven segments of both digits are on when a power-on or reset occurs. The module ID display may be used to temporarily display diagnostic codes after each power cycle or reset. The Diagnostic Light remains on until the module ID is displayed. After diagnostics are completed, the current module ID is displayed.
• Diagnostic behavior - Diagnostic codes in the form of Lx or Hx, where x is a hexadecimal digit, are used to indicate state information. In general, these codes are displayed only when the canister is in a non-operational state. The canister may be non-operational due to a configuration problem (such as mismatched IOM and/or controller types), or due to hardware faults. If the controller/IOM is non-operational due to array configuration, the controller/IOM Fault Light will be off. If the controller/IOM is non-operational due to a hardware fault, the controller/IOM Fault Light will be on.

Figure 3-27

Numeric display diagnostic codes

Value | Description
-- | Boot firmware is booting up
FF | Boot diagnostic executing
88 | The controller is being held in reset by the alternate controller
AA | Used by IOMs only: IOM-A application is booting
bb | Used by IOMs only: IOM-B application is booting
L0 | Used by IOMs only: mismatched IOM types exist
L1 | NOT USED
L2 | Persistent memory errors exist
L3 | Persistent hardware errors exist
L4 | NOT USED
L5 | NOT USED
L6 | NOT USED
L7 | NOT USED
L8 | NOT USED
L9 | Over temperature
H0 | SOC (Fibre Channel port) failure
H1 | SFP speed mismatch (for example, a 4Gb/s SFP is installed when operating at 2Gb/s)
H2 | Invalid or incomplete configuration
H3 | Maximum reboot attempts exceeded
H4 | Communication failure with redundant canister
H5 | Mid-plane harness failure
H6 | Firmware failure
H7 | Current module FC rate different than rate switch
H8 | One or more SFPs are present in currently unsupported slot(s)
H9 | Non-catastrophic hardware failure

Controller service indicators

Figure 3-28

Controller service indicators




Service Action Allowed (SAA) LED

Figure 3-29

Service Action Allowed LED

• Normal status is OFF.
• Problem status is ON - OK to remove the canister. A service action can be performed on the designated component with no adverse consequences (BLUE).

Each drive, power-fan, and controller/IOM canister has a Service Action Allowed light. The Service Action Allowed light lets you know when you can remove a component safely. Caution – Potential loss of data access – Never remove a drive, power-fan, controller, or IOM canister unless the Service Action Allowed light is turned on.
• If a drive, power-fan, or controller/IOM canister fails and must be replaced, the Service Action Required (Fault) light on that canister turns on to indicate that service action is required. The Service Action Allowed light will also turn on if it is safe to remove the canister. If there are data availability dependencies or other conditions that dictate that a canister should not be removed, the Service Action Allowed light will remain off.

The Service Action Allowed light automatically turns on or off as conditions change. In most cases, the Service Action Allowed light turns on when the Service Action Required (Fault) light is turned on for a canister. Note – IMPORTANT. If the Service Action Required (Fault) light is turned on but the Service Action Allowed light is turned off for a particular canister, you might have to service another canister first. Check your CAM software to determine the action you should take.




Service Action Required (SAR) LED (fault)

Figure 3-30

Service Action Required LED

Normal status is OFF.

Problem status is ON. A condition exists that requires service. The canister has failed. Use the storage management software to diagnose the problem (AMBER).

Cache active indicator

Figure 3-31

Cache active indicator

• OFF - No data is in cache; all cache data has been written to disk.
• ON (GREEN) - Data is in cache.

Battery

Figure 3-32

6140-4 controller with battery

Batteries are used to preserve the contents of controller cache memory during power outages. An error is reported if any of the batteries are missing.



An installation date is tracked for each battery FRU installed in the storage array. The battery installation date is set when the battery packages are installed into the storage array during manufacturing. If a battery package is replaced, the system administrator must set the installation date to the current date by resetting the battery age for that battery package in CAM or the command line interface. Each day, the storage array controllers determine the age of each battery package in the storage array by comparing the current date to the installation date. If a battery package has reached its expiration age, a cache battery failure event notification occurs. The storage array can be configured to generate a cache battery near-expiration event notification before the expiration age is reached. The controller module has a removable battery canister. The lithium ion battery needs to be replaced every three years. It will hold data in cache for up to 72 hours.
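The daily age check described above can be sketched as a simple date comparison. This is not array firmware; the three-year expiration reflects the replacement interval stated above, while the 90-day warning window is an assumption for the example (the actual warning threshold is configurable).

# Illustrative sketch of the daily battery-age comparison described above.
from datetime import date, timedelta
from typing import Optional

EXPIRATION = timedelta(days=3 * 365)          # three-year service life
NEAR_EXPIRATION_WARNING = timedelta(days=90)  # assumed warning window

def battery_status(installed_on: date, today: Optional[date] = None) -> str:
    today = today or date.today()
    age = today - installed_on
    if age >= EXPIRATION:
        return "cache battery expired - replace battery and reset battery age"
    if age >= EXPIRATION - NEAR_EXPIRATION_WARNING:
        return "cache battery near expiration"
    return "battery OK"

print(battery_status(date(2006, 6, 1), today=date(2009, 6, 1)))  # expired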

Figure 3-33


Battery removal



Sun Storage 6140 product overview

Figure 3-34

Battery LEDs

• Service Action Allowed (OK to Remove): Normal status is OFF. Problem status is ON.
• Service Action Required (Fault): Normal status is OFF. Problem status is ON.
• Battery Charging: Normal operating status is ON. Blinking means charging. Problem status is OFF.

The power-fan canister

The controller module has two removable power-fan canisters. Each power-fan canister contains one power supply and two fans. The four fans pull air through the canister from front to back across the drives. The fans provide redundant cooling, which means that if one of the fans in either fan housing fails, the remaining fans continue to provide sufficient cooling to operate the array. Cooling is improved by using side cooling for the controllers and IOMs. The 600-watt power supplies provide power to the internal components by converting incoming AC voltage to DC voltage. If one power supply is turned off or malfunctions, the other power supply maintains electrical power to the module. The power-fan canister contains:
• One 600-watt redundant switching power supply
  • Each power supply generates +5 and +12 volts.
  • The two power supplies are tied to a common power bus on the midplane using active current sharing between the redundant pair.



  • The power supplies have power-factor correction and support wide-ranging AC or DC input.
  • They can operate from 90 VAC to 264 VAC (50 Hz to 60 Hz) or, if the DC supply is selected, from -36 VDC to -72 VDC.
• Two integrated +12V blower fans
  • If one blower fails, the second blower automatically increases to maximum speed to maintain cooling until a replacement power supply is available.
  • Blower speed is monitored and controlled by a microcontroller and thermal sensor within the power supply.

Figure 3-35

Power-fan canister LEDs

• Power (AC): Indicates input power is being applied to the power supply and the power switch is on. Normal status ON. Problem status OFF (GREEN).
• Service Action Allowed (OK to remove): Normal status OFF. Problem status ON (BLUE).
• Service Action Required (Fault) glows amber when:
  • The power cord is plugged in, the power switch is on, and the power supply is not correctly connected to the mid-plane.
  • The power cord is plugged in, the power switch is on, the power supply is correctly seated in the mid-plane, and a power supply or blower fault condition exists.
  Normal status OFF. Problem status ON (AMBER).
• Direct Current Enabled: the DC Power LED glows green to indicate the DC power rails are within regulation. Normal status ON. Problem status OFF (GREEN).




Controller architecture

Figure 3-36

Controller block diagram showing both controllers and interconnect

• Uses a 667 MHz Xscale processor with an embedded XOR engine
• 1 DIMM memory slot
  • Shared memory bus (the first 128MB is used by the processor)
  • 6140-4: 2 GB per controller
  • 6140-2: 1 GB per controller

6140 summary
• The 6140-2 has 1GB controller cache, two host ports, two drive ports on one channel per controller, and somewhat lower overall controller performance than the 6140-4
• The 6140-4 has 2GB controller cache, four host ports, two drive channels per controller, and better performance than the 6140-2
• The 6140-4 controller-expansion tray includes the power-fan canisters, controller canisters, mid-plane interconnect and a removable 16-drive cage
• The 6140-4 supports both FC and SATA II drives



• The battery in the 6140-4 controller-expansion tray is located in its own separate, removable unit
• FC host ports perform link-speed negotiation between the controller and the host bus adapter or FC switch port for each host channel
• To avoid data loss, the Service Action Allowed (SAA) LED should be on before removal of any canister in the module




Knowledge check

1. Identify the module shown above. _______________________________

2. Using the letters, identify the parts of the component shown above.
A _______________________________________ B _______________________________________ C _______________________________________ D _______________________________________ E _______________________________________ F _______________________________________

(The figure for questions 3.a through 3.c shows the disk expansion ports P1 and P2 of Channel 2 on controllers B and A, with their speed LEDs.)

3.a. On which module would you find this set of ports and LEDs? _________________________________

3.b. If both LEDs in the middle are on, what speed is the port operating at?

3.c. What is the function of the LEDs to the far left and far right?

4. List three benefits of DACstore.



5.	Differentiate the functionality of the sundry drives compared to the other drives in the array.

6.	Explain how module IDs are set. How can you change them?

7.	How do you differentiate the 6140-2 and 6140-4 controllers?

8.a.	Why are there two Ethernet ports (numbered 1 and 2 in the figure above)?

8.b.	Which port should be used for normal operation? __________________

9.	Why are the controllers inverted in a 6140 controller module?



Module 4

Sun Storage CSM200 expansion module overview

Objectives

Upon completion of this module, you will be able to:

Describe the Sun Storage Common Storage Module (CSM)200 expansion module key features

Identify the hardware components of the CSM200 expansion module

Describe the functionality of the CSM200 expansion module

Interpret LEDs for proper parts replacement



Sun Storage CSM200 expansion module overview

Sun Storage CSM200 expansion module overview The CSM200 is the latest disk expansion module in the Sun Storage mid-range 6000 Series of products. This 3U module has 4Gb/s Fibre Channel (FC) interfaces, and supports up to 16 disk drives. The 4Gb/s ready CSM200 expansion module offers a 16-bay disk module for attachment to selected mid-range 6000 storage arrays, with up to 4.8 terabytes (TB) physical capacity per expansion unit using sixteen 300GB FC disk drives. The CSM200 supports 2Gb/s and 4Gb/s FC drives, SATAII drives and the intermix of FC drives and SATA II drives, all within the same module. The CSM200 contains redundant (AC) power and cooling modules and IOM interfaces. Summary of the features offered by the CSM200 expansion module: •

16 drives per module

Support for multiple drive types: •

2Gb/s 10K RPM FC drives: 146GB and 300GB

4Gb/s 15K RPM FC drives: 73 GB and 146GB

3Gb/s 7.2K RPM SATAII drives: 500GB and 750GB

SATA II and FC drives can be intermixed in the same module (controller firmware dependent)

Module has selectable loop speed switch allowing module to run at 2Gb/s or 4Gb/s speed (not auto-sensing)

Switched loop design improves RAS (Reliability, Availability and Serviceability) and reduces latency

All components are hot-swappable

RoHS compliant

Hardware overview

Hardware components of the Sun Storage 6x80 and 6540

The Sun Storage 6x80 or 6540 storage array comprises two main module types: the controller module and a minimum of one expansion module. The expansion module is also known as the Common Storage Module 200 (CSM200).



Sun Storage CSM200 expansion module overview

Figure 4-1

Sun Storage CSM200 expansion module

This module describes the main components of the CSM200 expansion module.

CSM200 expansion module

Below is a block diagram for the CSM200 expansion module. The blocks represent placement of IOMs, power-fan canisters, and the removable mid-plane canister.

Figure 4-2

Block diagram for the CSM200 expansion module

The CSM200 module has the following FRUs:




Disk drive canisters - Fibre Channel and/or SATA

Power-Fan canisters

IOM canister

The module has a removable drive cage, and removable midplane. Caution – Service Advisor procedures should be followed when removing a FRU.

CSM200 expansion module - Front view

The expansion module contains up to 16 drives, two IOM canisters, two power-fan canisters, and a removable drive cage. The front of the expansion module has a molded frame that contains global lights and the Link Rate switch.

Figure 4-3

Sun Storage CSM200 expansion module front view

Drive field replaceable unit (FRU) Each disk drive is housed in a removable, portable canister. The FC drives are low-profile hot-swappable, dual-ported fibre channel disk drives. The SATA II drives utilize the same canister as the FC drives, but have a SATA II Interface Card (SIC) added to the rear of the canister. The SIC card serves three purposes:




Provides redundant paths to the disk. SATA II drives are single-ported, so the SIC card acts as a multiplexer, effectively simulating a dual-ported disk.

Provides SATA II to FC protocol translation thereby enabling a SATAII disk to function within an FC expansion module.

Provides speed-matching. The SIC card negotiates between 2Gb/s and 4Gb/s based on the setting of the Link Rate Switch on the expansion module. SATAII drives run at 3Gb/s, the SIC card does the 3Gb/s to 4Gb/s buffering so the SATAII drive effectively runs at 4Gb/s speed (and similarly can run at 2Gb/s speed).

Figure 4-4

SATA II Interface Card (SIC)

The drives are removed by gently lifting the lower portion of the handle, which releases it and pops the drive out of its connector. Caution – Only add or remove drives while the storage array is powered on. Do not physically remove the drive from its slot until it has stopped spinning; this usually takes 30 to 60 seconds.

7.2K RPM SATA II drives •

3Gb/s (with SIC can run at either 2Gb/s or 4Gb/s speed)

Native command queuing

500GB, 750GB, and 1TB



Hardware overview

DACstore DACstore is written to all online, non-failed drives in the storage array, and beginning with FW 7.xx, contains a complete configuration database that has all the information needed to identify the virtual-disk each drive belongs to—as well as the other drives that are part of the virtual-disk—and volumes stored on it. Information stored in the DACstore database includes: •

Database identifier

Controller serial numbers

Drive state and status

Storage array world wide ID (WWID)

Volumes contained on the drive

Volumes state and status

Virtual-disk definitions

Controller serial numbers (used to enable the storage system to determine if the controllers are native or foreign to the storage system)

Failed drives

Global hot spares state and status

Storage array password

Media scan rate

Cache configuration of the storage system

Storage system user label

Event Logs for controller events

Volume-to-LUN mappings, host type mappings, and other information used by storage domains

Copy of the current controller NVSRAM values

Permissions allowed for the controller

Premium feature keys

Note – Starting with firmware 7.xx, premium feature keys are no longer imported if a v-disk is migrated from one storage system to another.



Sun Storage CSM200 expansion module overview The major advantage of the upgrade to DACstore in FW 7.xx is that, because a copy of the configuration database is stored on every drive in the storage array, changes to the database are replicated to ALL drives. This means all drives have a complete “view” of the storage array. If for any reason an interruption occurs during the update process, either all or no changes are made to the database. Upgrades to the new DACstore must be complete. In other words, all drives in the array must upgrade.

Sundry drive Because pre-7.xx firmware does not store a complete configuration database in DACstore, it creates a special drive in each virtual-disk called a sundry drive. This drive acts as a “master DACstore” drive that stores all the information about all the drives in the storage system (similar to the information that the configuration database stores). The controllers assign a minimum of three sundry drives in a storage system and at least one sundry drive to each virtual-disk. There is no limit on the maximum number of sundry drives on a storage system. Sundry drives’ information resides within the DACStore area, and the controllers make every attempt to assign the sundry drives to drives on different channels. Because sundry drives are included in every virtual-disk, this guarantees that if a v-disk is removed for migration, at least one sundry drive migrates to the new destination. After migration, all drives’ DACstores will be merged to the new storage system on import.

DACstore benefits DACstore exists on every drive and can be read by all 6000 controllers. Therefore, when an entire virtual disk is moved from one storage array to a new storage array, the data remains intact and can be read by the controllers in the new storage array. Investment protection through “data intact” upgrades and migrations include: •

All LSI controllers recognize configuration and data from other LSI storage arrays

Storage-array-level relocation

DACStore enables relocation of drives within the same storage array in order to maximize availability. When expansion modules are added, DACstore gives you the ability to relocate drives so that drives are striped vertically across all expansion modules, and no one module has more than one drive of a virtual disk.



Hardware overview


Figure 4-5

Customer starts with two disk modules

Figure 4-6

Months later the customer adds two more modules



Sun Storage CSM200 expansion module overview

Figure 4-7

Drives can be rearranged for optimization

Data intact migration for virtual disks

Figure 4-8

Migrate the Email virtual disk



Hardware overview


Figure 4-9

Export the Email virtual disk to a new storage array

Figure 4-10

Import the Email virtual disk to the new storage array



Sun Storage CSM200 expansion module overview

Field replaceable drive cage

The field replaceable drive cage holds sixteen 3.5-inch drives. The mid-plane is located on the back of the cage as shown below.

Figure 4-11

Drive cage

Disk drive LEDs

The disk drive LEDs are illustrated below.

Figure 4-12

Disk drive LEDs

The LEDs are: •

Drive Service Action Allowed: If this LED is on it is OK to remove. Normal status is OFF. Problem status is ON (BLUE).




Drive Fault: Normal status is OFF. If BLINKING - Drive, volume or storage array locate function. Problem status is ON (AMBER).

Drive Active: ON (not blinking) - No data is being processed. BLINKING - Data is being processed. Problem status is OFF (GREEN).

Global CSM200 expansion module LEDs

Each component in a module has LEDs that indicate functionality for that individual component. Global LEDs indicate functionality for the entire module. Global LEDs are shown in Figure 4-13.

Figure 4-13

Global LEDs on CSM200 expansion module

The global LEDs are as follows:

1.	Global Locate: Normal status is OFF. Only on when the user is performing the locate function (WHITE).

2.	Global Summary Fault: Normal status is OFF. Problem status is ON (AMBER).

3.	Global Power: Normal status is ON. Problem status is OFF (GREEN).

Link Rate switch

The Link Rate switch shown below enables you to select the data transfer rate between the IOMs, drives, and controllers. Setting the Link Rate switch determines the speed of the back-end drive channel.

Figure 4-14	Link Rate switch



Important things to remember:

The correct position is 4Gb/s to the left; 2Gb/s to the right.

All modules on a pair of drive channels must be set to operate at the same data transfer rate.

The drives in the expansion module must support the selected link rate speed.

If a module is set to operate at 4Gb/s, all 2Gb drives in that module will be bypassed.

If a module is set to operate at 2Gb/s, all 4Gb/s drives will operate at 2Gb/s.

Caution – Change the Link Rate switch only when there is no power to the storage array.
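To make the bypass and down-shift rules above concrete, here is a minimal Python sketch (illustrative only; the function name and logic are not part of any Sun software):

def drive_status(module_rate_gb, drive_rate_gb):
    """Apply the Link Rate switch rules above to one FC drive."""
    if module_rate_gb == 4 and drive_rate_gb == 2:
        return "bypassed"                    # 2Gb drive in a module set to 4Gb/s
    if module_rate_gb == 2 and drive_rate_gb == 4:
        return "running at 2Gb/s"            # 4Gb/s drive downshifts to 2Gb/s
    return "running at %dGb/s" % module_rate_gb

for drive_rate in (2, 4):
    print("Module at 4Gb/s, %dGb drive:" % drive_rate, drive_status(4, drive_rate))
    print("Module at 2Gb/s, %dGb drive:" % drive_rate, drive_status(2, drive_rate))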

CSM200 expansion module - Back view

Figure 4-15

Back view of expansion module (CSM200)



At the back of the expansion module, the IO modules (IOMs) and the power-fan canisters on the top are inverted 180 degrees from the canisters on the bottom. In a fully configured array, the field replaceable canisters are fully redundant. If one component fails, its counterpart can maintain operations until the failed component is replaced. IOM modules are also sometimes referred to as ESMs (Environmental Services Modules).

Figure 4-16	IOM LEDs



The IOM has 4 drive ports. However, only two are available to use. Do not use the drive ports (2A and 2B) nearest the seven-segment display. These are reserved for future functionality. The IOM is 2Gb or 4Gb, determined by the switch on the front side of the expansion module.

Figure 4-17

IOM (IO module)

The following environmental conditions are monitored by the IOM: •

The presence and absence of disk drives and two power-fan canisters

The operational status line of two power-fan canisters

Module temperature reading. Temperature shutdown will occur when preset limits are exceeded; the preset values are:

•	60 degrees Celsius for the “high warning” fault

•	68 degrees Celsius for the “high critical” fault

Fan rotational speed for all four fans (two per power-fan canister)

Voltage level reading for 5V, and 12V supply buses

Voltage level reading for 1.2V, 1.8V, 2.5V, and 3.3V on board supply bus

Control the fault status lines for the drives

Control of the Locator LED, Summary Fault LED, and Service Action Allowed LED

Presence of the second 4Gb FC IOM in the module
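As a simple illustration of the temperature thresholds listed above, the following Python sketch (not IOM firmware; the names are hypothetical) classifies a module temperature reading:

HIGH_WARNING_C = 60    # "high warning" fault threshold from the list above
HIGH_CRITICAL_C = 68   # "high critical" fault threshold from the list above

def temperature_status(celsius):
    """Classify a module temperature reading against the preset values."""
    if celsius >= HIGH_CRITICAL_C:
        return "high critical fault"
    if celsius >= HIGH_WARNING_C:
        return "high warning fault"
    return "normal"

print(temperature_status(55))   # normal
print(temperature_status(62))   # high warning fault
print(temperature_status(70))   # high critical fault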



Hardware overview

Dual disk expansion ports

Only disk drive ports 1A and 1B should be used. Two LEDs indicate the speed of the channel of the disk drive ports, as shown in Figure 4-18.

Figure 4-18	Disk expansion ports (P1 and P2 for Ch 2 (Ctrl B) and Ch 2 (Ctrl A), with 4Gb/s and 2Gb/s speed LEDs)

The behavior of the LEDs is as follows: •

When both LEDs are OFF, there is no FC connection or the link is down.

With the left LED in the OFF position and the right LED in the ON position, the port is operating at 2Gb/s.

When both LEDs are in the ON position, the port is at 4Gb/s.
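The LED behavior can be summarized with a minimal Python sketch (illustrative only; this helper is not part of the array firmware or CAM):

def drive_port_link_state(left_led_on, right_led_on):
    """Decode the two speed LEDs on an IOM drive port, per the rules above."""
    if not left_led_on and not right_led_on:
        return "no FC connection or link down"
    if not left_led_on and right_led_on:
        return "link up at 2Gb/s"
    if left_led_on and right_led_on:
        return "link up at 4Gb/s"
    return "undocumented LED state"   # left LED on by itself is not described above

print(drive_port_link_state(left_led_on=True, right_led_on=True))   # link up at 4Gb/s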

Fibre Channel port by-pass indicator

The Fibre Channel port by-pass indicator has two settings: on and off. Figure 4-19 shows the indicator.

Figure 4-19

Port by-pass indicator

When the indicator is OFF, either no SFP is installed or the port is enabled and operating normally. When it is ON (AMBER), no valid device is detected and the channel or port is internally bypassed.



Sun Storage CSM200 expansion module overview

Seven-segment display and IOM service indicators

Figure 4-20

Seven-segment display

The numeric display consists of two seven-segment LEDs that provide information about module identification and diagnostics. When the IOM is operating normally, the numeric display shows the module identification (module ID) of the IOM. The IOM module ID is automatically set by the controller firmware and automatically adjusts during power-on to avoid conflicts with other existing expansion module IDs.

Each digit of the numeric display has a decimal point and is rotated 180 degrees relative to the other digit. With this orientation, the display looks the same regardless of IOM orientation. The numeric display shown in the figure above shows either the module identification (module ID) or a diagnostic error code. The heartbeat is the small decimal point in the lower right-hand corner of the first digit: when the heartbeat is blinking, the number displayed is the module ID. The diagnostic light is the small decimal point in the upper left-hand corner of the second digit: when the diagnostic light is solid amber, the number displayed is a diagnostic code.

The module ID is an attribute of the CSM200 module; both IOMs display the same module ID. It is possible, however, that one IOM will display the module ID while the other IOM displays a diagnostic code.

Power on behavior - The Diagnostic Light, the Heartbeat Light, and all 7 segments of both digits will be on if a power-on or reset occurs. The module ID display may be used to temporarily display diagnostic codes after each power cycle or reset. The Diagnostic Light will remain on until the module ID is displayed. After diagnostics are completed, the current module ID will be displayed.




Diagnostic behavior - Diagnostic codes in the form of Lx or Hx, where x is a hexadecimal digit, are used to indicate state information. In general, these codes are displayed only when the canister is in a non-operational state. The canister may be non-operational due to a configuration problem (such as mismatched IOM types), or it may be non-operational due to hardware faults. If the IOM is non-operational due to array configuration, the IOM Fault Light will be off. If the IOM is non-operational due to a hardware fault, the IOM Fault Light will be on.

Table 4-1	Diagnostic codes

Value	Description
--	Boot firmware is booting up
FF	Boot diagnostic executing
88	The IOM is being held in reset by the alternate IOM
AA	IOM-A application is booting
bb	IOM-B application is booting
L0	Mismatched IOM types exist
L1	NOT USED
L2	Persistent memory errors exist
L3	Persistent hardware errors exist
L4	NOT USED
L5	NOT USED
L6	NOT USED
L7	NOT USED
L8	NOT USED
L9	Over temperature
H0	SOC (Fibre Channel port) failure
H1	SFP speed mismatch (for example, a 4Gb/s SFP is installed when operating at 2Gb/s)
H2	Invalid or incomplete configuration
H3	Maximum reboot attempts exceeded
H4	Communication failure with redundant IOM
H5	Mid-plane harness failure
H6	Firmware failure
H7	Current module FC rate different than rate switch
H8	One or more SFPs present in currently unsupported slots (2A or 2B)
H9	Non-catastrophic hardware failure
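For reference while troubleshooting, the following Python sketch (illustrative only; the dictionary covers just a subset of Table 4-1 and is not part of any Sun or LSI software) combines the heartbeat/diagnostic-light rule with a code lookup:

DIAG_CODES = {
    "L0": "Mismatched IOM types exist",
    "L2": "Persistent memory errors exist",
    "L3": "Persistent hardware errors exist",
    "L9": "Over temperature",
    "H0": "SOC (Fibre Channel port) failure",
    "H1": "SFP speed mismatch",
    "H2": "Invalid or incomplete configuration",
    "H3": "Maximum reboot attempts exceeded",
    "H4": "Communication failure with redundant IOM",
    "H5": "Mid-plane harness failure",
    "H6": "Firmware failure",
    "H7": "Current module FC rate different than rate switch",
    "H8": "SFP present in unsupported slot (2A or 2B)",
    "H9": "Non-catastrophic hardware failure",
}

def interpret_display(value, heartbeat_blinking, diag_light_on):
    """Interpret the two-digit display per the heartbeat/diagnostic rule above."""
    if heartbeat_blinking:
        return "Module ID %s" % value
    if diag_light_on:
        return "Diagnostic code %s: %s" % (value, DIAG_CODES.get(value, "see Table 4-1"))
    return "Display state not documented above"

print(interpret_display("H7", heartbeat_blinking=False, diag_light_on=True))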



Sun Storage CSM200 expansion module overview

The power-fan canister

The CSM200 module has two removable power-fan canisters. Each power-fan canister contains one power supply and two fans. The four fans pull air through the canister from front to back across the drives. The fans provide redundant cooling, which means that if one of the fans in either fan housing fails, the remaining fans continue to provide sufficient cooling to operate the array. Cooling is improved by using side cooling for the IOMs. The 600-watt power supplies provide power to the internal components by converting incoming AC voltage to DC voltage. If one power supply is turned off or malfunctions, the other power supply maintains electrical power to the module. The power-fan canister contains:

One 600 watt redundant switching power supply •

Each power supply will generate +5 and +12 volts,

The two power supplies are tied to a common power bus on the midplane using active current share between the redundant pair.

The power supplies have power-factor correction and support wide-ranging AC or DC input.

They are able to operate in ranges from 90 VAC to 264 VAC (50 Hz to 60 Hz) or, if the DC supply is selected, from –36 VDC to –72 VDC.

Two integrated +12V blower fans •

If one blower fails, the second blower will automatically increase to maximum speed to maintain cooling until a replacement power supply is available.

Blower speed control will be monitored and controlled by a microcontroller and thermal sensor within the power supply.



Power-fan canister LEDs:

Figure 4-21

Power-fan canister LEDs

Power (AC): Indicates input power is being applied to the power supply and the power switch is on. Normal status ON. Problem status OFF (GREEN).

Service Action Allowed (OK to remove): Normal status OFF. Problem status ON (BLUE).

Service Actions Required (Fault) glows amber when: •

The power cord is plugged in, the power switch is on and the power supply is not correctly connected to the mid-plane.

Power cord is plugged in, the power switch is on, the power supply is correctly seated in the mid-plane, and a power supply or blower fault condition exists. Normal status OFF. Problem status ON (AMBER).


Direct Current Enabled: DC Power LED glows green to indicate the DC power rails are within regulation. Normal status ON. Problem status OFF (GREEN).



Sun Storage CSM200 expansion module overview

Architecture overview

The following section shows the architecture for the CSM200 expansion module, which is a switched bunch of disks (SBOD).

Figure 4-22

Just a Bunch of Disks (JBOD): Loops in JBOD include controller, IOMs and drives

Figure 4-23

Switched Bunch of Disks (SBOD): Loop switch technology enables direct FC communication with each individual drive

Switched bunch of disks (SBOD) architecture

The loop-switch technology enables direct and detailed FC communication with each drive. A loop switch allows devices on an FC loop to operate as though they were on a private Fibre Channel Arbitrated Loop (FC-AL), but with the performance and diagnostic advantages of a Fibre Channel fabric. A SOC (switch-on-a-chip) allows FC-AL devices to communicate directly with each other, which reduces the loop latency inherent in a true arbitrated loop. Because Fibre Channel communication is essentially point-to-point with a loop switch, diagnosis and isolation of loop problems is simplified.



The figure below shows the SBOD architecture.

Figure 4-24

SBOD architecture

CSM200 summary


DACstore is a 512MB region on each drive that stores information about the physical and logical states and statuses, and other information needed by the controller

The IOM (input-output module) board in the expansion trays provides connectivity to and from the controllers to the drives; reports drive exceptions to the controller; and supervises and controls power, temperature and fans in the expansion tray

SBOD 4Gb/s expansion tray IOMs have embedded “loop switches” that isolate each drive on a private loop between it and the IOM to ensure direct FC communication to each drive

Beginning with firmware 7.xx, SBOD IOMs can by-pass drives that exceed error thresholds



Sun Storage CSM200 expansion module overview

Knowledge check

1.	Identify the module shown above. _______________________________

2.	Using the letters, identify the parts of the component shown above:
	A _______________________________________
	B _______________________________________
	C _______________________________________
	D _______________________________________
	E _______________________________________

3.	Explain the purpose of the SATA II interface card.

4.	What is the main difference between the JBOD and SBOD technology?





Module 5

Sun Storage 6000 hardware installation

Objectives

Upon completion of this module, you will be able to:

List the basic steps for installing the Sun Storage 6x80, 6540 and 6140

Describe proper cabling techniques and methodologies

List the basic steps of hot-adding CSM200 expansion modules to a 6x80, 6540 and 6140

Perform the proper power sequence for the 6x80, 6540 and 6140 storage array

Describe procedure to set static IP addresses for the 6x80, 6540 and 6140



Overview of the installation process

The following list outlines the tasks required for installing the Sun Storage 6000 hardware. The installation of the management software, Common Array Manager (CAM), is covered in another module. The first three tasks are not covered in this section; use the unpacking and hardware installation instructions that ship with the product to complete them.

1.	Unpack the hardware according to the directions in the unpacking guide that should be attached to the outside of the shipping carton.

2.	Install the cabinet, controller module, and expansion modules by following the directions in the hardware installation guide.

3.	Attach the power cables.

4.	Attach Ethernet cables - one to each controller.

5.	Cable the controller and expansion modules.

6.	Check the link rate switch.

7.	Turn on the power.

8.	Set the controllers’ IP addresses.

9.	Use the hardware compatibility matrix to evaluate array set-up.

10.	Attach the host interface cables.

(Items in bold are covered in detail in this module.)

Standard 19-inch cabinets can be customized for maximum flexibility and can contain a combination of twelve modules. Always start loading the cabinet from the bottom up. Always push the cabinet from the front.

U - A unit of measurement used to measure the height of computer equipment components and the height of the standard racks in which these components are mounted. 1U is equal to 1.75 inches (44.45 millimeters), so, for example, a 2U component is 3.5 inches high. Specific examples:


The height of an expansion module is 3 U.

The height of a controller module is 4 U.



The 72-inch cabinet is approximately 41 U. This means that 11 expansion modules and 1 controller module will fit in a 72-inch cabinet.
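The arithmetic behind that statement can be checked with a short Python sketch (numbers taken from the U definitions above):

U_INCHES = 1.75
CABINET_U = 41            # 72-inch cabinet is approximately 41 U
CONTROLLER_U = 4          # controller module height
EXPANSION_U = 3           # expansion module height

used_u = 11 * EXPANSION_U + 1 * CONTROLLER_U
print(used_u, "U used of", CABINET_U, "U")            # 37 U used of 41 U
print("Fits:", used_u <= CABINET_U)                   # Fits: True
print("Cabinet height in inches:", CABINET_U * U_INCHES)   # 71.75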

Cabling procedures

The following section highlights proper cabling methods for the controller and expansion modules, keeping in mind how to cable for redundancy.

Cable types

Fiber-optic cables and small form-factor pluggable (SFP) transceivers are used for connections to the host. If the array will be cabled with fiber-optic cables, you must install active SFPs into each port where a cable will be connected before plugging in the cable.

Figure 5-1

Fibre optic cable and copper cable with LC connector



Copper cables do not require separate SFP transceivers; the SFP transceivers are integrated into the cables themselves. Copper cables are used to connect the expansion modules. The two types of cables for expansion cabling include:

2Gb = Molex

4Gb = Tyco

Note – Host connections require the use of fiber-optic cables, but either copper or fiber-optic cables can be used to connect expansion modules.

Comparing copper and fibre optic The choice between optical fiber and electrical (or “copper”) transmission for a particular array is made based on a number of trade-offs. Optical fiber is generally chosen for arrays with higher bandwidths, spanning longer distances, than copper cabling can provide. The main benefits of fiber are its exceptionally low loss, allowing long distances between amplifiers or repeaters; and its inherently high data-carrying capacity, such that thousands of electrical links would be required to replace a single high bandwidth fiber. Typically copper cables are used for short distances, such as inter-connecting expansion modules. Fibre cables are used for long distances, such as connecting the storage array directly to servers or to a FC switch.

Cable considerations •

Light is transmitted through the fiber optic cables; therefore, kinks or tight bends can degrade performance or damage cables.

Fiber optic cables are fragile. Bending, twisting, folding, or pinching fiber optic cables can cause damage to the cables, degraded performance or data loss. To avoid damage, do not step on, twist, fold, or pinch the cables. Do not bend the cables tighter than a 2-inch radius.

Install SFP transceivers only in the ports that are used. Caution – Fiber optic cables are fragile. Do not bend, twist, fold, pinch, or step on the fiber optic cables. Doing so can degrade performance or cause loss of data connectivity.



Sun Storage 6000 hardware installation

Recommended cabling practices

This section explains recommended cabling practices. To ensure that your cabling topology results in optimal performance and reliability, observe these practices.

What’s wrong with this cabling method?

Figure 5-2

If an expansion module fails, neither Drive Channel 1 nor 3 can access the remaining expansion modules.

If both redundant drive channels are cabled in the same direction, then a loss of power or communication to one expansion module can result in loss of access to the remaining expansion modules.



Recommended cabling practices

Cabling for redundancy – Top-down-bottom-up

Figure 5-3

If an expansion module fails, the remaining expansion modules can be accessed with Drive Channel 3.

When attaching expansion modules, create a cabling topology that uses the redundant paths to eliminate inter-module connections as a potential single point of failure. To ensure that the loss of an expansion module itself does not affect access to other modules, cable one drive channel from controller A of the 6540 or 6140 top-down, and one drive channel from controller B bottom-up. Thus, the loss of a single module will not prevent modules on the other side of the failure from being accessed by the other path. Figure 5-3 shows full redundancy cabling on the drive channel side. Each expansion module is cabled to both controllers - that is, from each expansion module, one IOM is cabled to Controller A, and the other IOM is cabled to Controller B. Drive Channel 1 from Controller A is cabled top-down. The redundant loop, Drive Channel 3 from Controller B, is cabled bottom-up. Even if a whole expansion module fails, the connection to all other expansion modules is not lost. Each 6x80 controller has four drive channels with two expansion ports for each channel; each 6540 controller has two drive channels, also with two expansion ports. Splitting the modules between drive channels, or between the two ports of a single channel, further isolates the effect of a module failure by half.
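A minimal Python sketch of this top-down-bottom-up pattern follows (illustrative only, not a supported configuration tool; the left/right IOM and port 1A/1B usage follow the cabling summary later in this module):

def redundancy_cabling(num_modules):
    """List the cable hops for Controller A top-down and Controller B bottom-up."""
    modules = list(range(1, num_modules + 1))
    plan = []

    def chain(controller, channel, side, order):
        prev = None
        for m in order:
            src = ("%s Drive Channel %d" % (controller, channel) if prev is None
                   else "module %d %s IOM port 1A" % (prev, side))
            plan.append("%s -> module %d %s IOM port 1B" % (src, m, side))
            prev = m

    chain("Controller A", 1, "left", modules)             # top-down
    chain("Controller B", 3, "right", reversed(modules))  # bottom-up
    return plan

for hop in redundancy_cabling(4):
    print(hop)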



Sun Storage 6000 hardware installation

Cabling for performance

Figure 5-4	6540 best practice for creating redundant drive-side channels (port view: Stacks 1-4 on enclosure “A” and “B” ports 1-4, split between the top and bottom halves of the loop)

Figure 5-5	6x80 best practice for creating redundant drive-side channels

Generally speaking, performance is enhanced by maximizing bandwidth, or the ability to process more I/O across more channels. Therefore, a configuration that maximizes the number of host channels and the number of drive channels available to process I/O will maximize performance. Of course, faster processing speeds also maximize performance. Expansion modules should be balanced across controller backend loops to achieve maximum throughput performance. Balancing expansion modules also provides some additional module loss protection if Virtual Disks are properly configured across modules.



Recommended cabling practices

Backend AL-PA assignments

Each controller talks to the loop through a Tachyon chip. The Tachyon chip has an embedded loop address table which is used to manage the 126 addresses on an FC loop. A configuration of seven expansion modules plus the two controllers uses 121 addresses:

Every disk drive slot uses an address whether there is a disk drive present or not. •

Every expansion module also reserves one address for the IOM that sits on that loop. •

7 (modules) * 16 (slots) = 112 (addresses)

7 (modules) * 1 IOM = 7 (addresses)

Each controller uses an address •

2 controllers = 2 addresses

Any more than seven expansion modules would exceed the available addresses: •


112 (drives) + 2 (controllers) + 7 (ESM modules) = 121 (addresses)
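The address budget above can be reproduced with a few lines of Python (arithmetic only):

LOOP_ADDRESSES = 126
SLOTS_PER_MODULE = 16   # every drive slot uses an address, populated or not
IOMS_PER_MODULE = 1     # one address for the IOM on this loop
CONTROLLERS = 2

def addresses_used(modules):
    """Loop addresses consumed by a stack of expansion modules plus the controllers."""
    return modules * (SLOTS_PER_MODULE + IOMS_PER_MODULE) + CONTROLLERS

for n in (7, 8):
    used = addresses_used(n)
    print("%d modules: %d addresses (%s)"
          % (n, used, "fits" if used <= LOOP_ADDRESSES else "exceeds 126"))
# 7 modules: 121 addresses (fits)
# 8 modules: 138 addresses (exceeds 126)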



Figure 5-6	Backend AL-PA assignments (Tray IDs 11-17 behind controllers A and B, sharing the Tachyon chip’s 126 loop addresses)

Hot-adding an expansion module

The top-down-bottom-up cabling methodology has the added benefit of enabling hot-adding an expansion module while the storage array is in production. After power has been applied to a storage array and it is in production, the cabling methodology along with the Hot-add or HotScale technology enables online array expansion and reconfiguration with no forced downtime.



You can add expansion modules or hosts on the fly without suspending user access or compromising availability in any way.

Note – This is only a high-level overview of the procedure; please refer to the appropriate user documentation for details.

The array expansion is a simple process:

1.	Install the new expansion module in the rack but do not apply power to it yet.

Figure 5-7


Hot Add step 1 - Install new expansion module in the rack but do not apply power to it yet



2.	Add the new module to the top-down loop (in this example, Drive Channel 1).

Figure 5-8

Hot Add step 2 - Cable top down

3.	Power up the new expansion module.

4.	Verify that the storage management software recognizes and displays the new expansion module.



5.	Re-cable the bottom-up loop to include the new module (in this example, Drive Channel 3).

Figure 5-9

Hot Add step 5 - Cable bottom up

Cabling summary


Have two Fibre Channel drive channels to each expansion module for redundancy: one drive channel from Controller A to the left-side IOM of an expansion module, and the redundant drive channel from Controller B to the right-side IOM of an expansion module.

Have drive channels travel in opposite directions across all of the expansion modules on those loops for robustness in case of an expansion module failure.

Use all of the drive side channels (drive channels) available for improved performance

From the controller module, cable to the 1B port of the expansion module IOM.

Use odd numbered drive channels as a redundant pair, and even numbered drive channels as a redundant pair.



Sun Storage 6000 hardware installation

Recommended cabling practices for the 6x80

To ensure that your cabling topology results in optimal performance and reliability, observe the following practices. Because the 6x80 array has eight drive ports (with 4 SOC chips) on each controller, up to eight expansion modules can be cabled to separate drive ports. Therefore, cabling for performance is easy to achieve. If the 6x80 is cabled to eight or fewer expansion modules, daisy-chaining is not necessary. This ensures that each expansion module receives full bandwidth from the port. Even in a fully configured array of 16 expansion modules, each only has to be daisy-chained to one other expansion module. As with other controller models in the 6000 series, cable from the controller to port 1B on the IOM. Cable from one expansion module to the next from IOM port 1A to 1B. In other words, consider port 1B as the “in” port and 1A as the “out” port.

Figure 5-10

One 6x80 controller module with 4 expansion modules



Recommended cabling practices for the 6x80

Figure 5-11

One 6x80 controller module with 8 expansion modules

In the event that any expansion module fails, all other expansion modules are still accessible through their drive ports. When more than eight expansion modules are added behind a single 6x80 controller module, an expansion-module-to-expansion-module cabling scheme needs to be used. Be sure to use the top-down-bottom-up cabling guidelines.
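The arithmetic behind "eight or fewer modules need no daisy-chaining" is simple; the sketch below (Python, illustrative only) spreads a module count across the eight drive ports of one 6x80 controller:

def modules_per_port(total_modules, ports=8):
    """Distribute expansion modules as evenly as possible across the drive ports."""
    base, extra = divmod(total_modules, ports)
    return [base + (1 if p < extra else 0) for p in range(ports)]

print(modules_per_port(8))    # [1, 1, 1, 1, 1, 1, 1, 1] - no daisy-chaining needed
print(modules_per_port(16))   # [2, 2, 2, 2, 2, 2, 2, 2] - each port chains two modules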



Sun Storage 6000 hardware installation

Figure 5-12

One 6x80 controller module with 16 expansion modules - Fully configured

Also, remember that the 6x80 array can only support 4Gb/s CSM200 expansion modules.

Recommended cabling practices for the 6540 and 6140

This section explains recommended cabling practices for the 6540 and 6140. To ensure that your cabling topology results in optimal performance and reliability, observe these practices. The first path is created by cabling the expansion modules sequentially from Controller A. For example, Controller A is connected to expansion module 1 through its port 1B, which is then connected to expansion module 2 from expansion module 1’s port 1A to expansion module 2’s port 1B, which is connected to expansion module 3 through port 1B, which is connected to expansion module 4 through port 1B.



The alternate path is created by cabling the drive modules in the reverse order from Controller B. For example, Controller B is connected to expansion module 4, which is connected to expansion module 3, which is connected to expansion module 2, which is connected to expansion module 1. In the event that expansion module 2 fails, expansion modules 3 and 4 are still accessible through the alternate path. While identical cabling topologies are simpler, a single point of failure exists. If an expansion module fails, all expansion modules beyond the failure are no longer accessible. This topology is vulnerable to loss of access to data due to an expansion module failure.

Figure 5-13	A fully configured 6540 with 14 4Gb/s CSM200 expansion modules (Stacks 1-4)



Sun Storage 6000 hardware installation

Figure 5-14	6140 redundant cabling with 1 controller module and 4 expansion modules

The following figures show one 6140 controller module cabled to a full complement of six CSM200 expansion modules.



Recommended cabling practices for the 6540 and 6140

Figure 5-15	One controller module and 1 expansion module

Figure 5-16	One controller module and 2 expansion modules



Sun Storage 6000 hardware installation

Figure 5-17	One controller module and 3 expansion modules



Recommended cabling practices for the 6540 and 6140

Figure 5-18	One controller module and 4 expansion modules



Sun Storage 6000 hardware installation

Figure 5-19	One controller module and 5 expansion modules



Recommended cabling practices for the 6540 and 6140

Figure 5-20	One controller module and 6 expansion modules



Sun Storage 6000 hardware installation

Considerations for drive channel speed

When multiple expansion modules are connected to the same 6540 or 6140 controller module, all expansion modules attached to the same drive channel must operate at the same speed. The drive channels are used in pairs for redundancy: on the 6540, Drive Channels 1 and 3 are used as a redundant pair, and Drive Channels 2 and 4 are used as a redundant pair. Therefore, all modules attached to the redundant pair of drive channels must operate at the same speed.

Figure 5-21	A fully configured 6540 with both 4Gb/s CSM200 (4Gb 16-slot) and 2Gb/s CSM100 (2Gb 14-slot) expansion modules, split between the top and bottom halves of the drive channels

The 6140 controller module has only one drive channel. Therefore, all expansion modules attached to the 6140 must be set to operate at the same speed. Before powering on the array, check to see if the Link Rate switch is set to the appropriate data transfer rate. If the Link Rate switch is not set to the correct data transfer rate, move the switch to the correct position.

•	4Gb/s to the left

•	2Gb/s to the right

Since the switch is recessed, you will need to use a small tool to slide the switch to the proper position.



Before powering on the array, check that the Link Rate switch is set to the appropriate data transfer rate.

Figure 5-22

Link Rate switch

Proper power procedures

The following section highlights the proper power-on procedures for the controller and expansion disk modules.

Turning on the power

The process of powering on a storage array is easy if the right procedure is followed. First, make sure that all the modules have been cabled correctly. Then, the key to this process is that you should power on the expansion modules before the controller module.



The controllers read the storage array configuration from the DACstore on the drives; therefore, all drives need to have power before the controllers are turned on. The first thing a controller does is issue a Drive Spin Up command to each drive. After all drives are spun up, the controller reads the DACstore information from each drive. Caution – Potential damage to drives: repeatedly turning the power off and on without waiting for the drives to spin down can damage the drives. Always wait at least 30 seconds from when you turn off the power until you turn on the power again.

Caution – If you are connecting a power cord to an expansion module, turn off both power switches on the controller/expansion module first. If the main circuit breaker in the cabinet is turned off, be sure both power switches are turned off on each module in the cabinet before turning on the main circuit breakers.

Power-on procedure

1.	Are the main circuit breakers turned on?
	a.	YES - Turn off both power switches on each module you intend to turn on.
	b.	NO - Turn off both power switches on all modules in the cabinet.

2.	If the main circuit breakers are turned off, turn them on.

Note – IMPORTANT: Turn on the power to the expansion modules before turning on the power to the controller module to ensure that the controllers acknowledge each attached expansion module. If the controllers have power before the drives, the controllers could interpret this as a drive loss situation.

3.	Turn on both power switches on the back of each expansion module.

4.	Turn on both power switches on the back of the controller module.

A controller module can take up to 10 seconds to power on and can take up to 15 minutes to complete its controller battery self-test. During this time, the lights on the front and back of the module blink intermittently. a. Check the status of the LEDs on the front and back of each module. Green lights indicate a normal status. Amber lights may indicate a hardware fault.



b.	If any fault lights are on, diagnose and correct the fault.

Note – To diagnose and correct the fault, you may need help from the storage management software. The use of CAM to recover from faults will be covered in a later section.

Turning off the power

Storage arrays are designed to run continuously, 24 hours a day. After you turn on power to a module, it should remain on unless you need to perform certain upgrade and service procedures. There is the possibility of data loss during power off if it is not done correctly. This data loss can occur from data stored in cache, or from I/Os in the process of being written from a server or to the drives. Always ensure that I/Os from the server are stopped, drive activity has ceased, and the “Cache Active” LED on the controller is off; then power off the entire rack, or power off the controller module first and then the expansion modules.

Power-off procedure

1.	Stop all I/O activity to each module you are going to power off.

Note – Always wait until the Cache Active light on the back of the controller module turns off and all drive active lights stop blinking before turning off the power.

2.	Check the lights on the back of the controller and expansion modules.
	a.	If one or more fault lights are on, do not continue with the power-off procedure until you have corrected the fault.

3.	Turn off the power switches on each fan-power canister in the controller module.

4.	Turn off the power switches on each fan-power canister in each expansion module.

Note – Power on: first the expansion modules, then the controller module. Power off: first the controller module, then the expansion modules.
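The ordering rule in the note above can be captured in a short Python sketch (illustrative only; switch_power is a hypothetical stand-in for whatever actually controls the power switches or PDU, and the delay value is only an example):

import time

def power_on(expansion_modules, controller, switch_power, settle_seconds=30):
    for tray in expansion_modules:          # expansion modules (drives) first
        switch_power(tray, on=True)
    time.sleep(settle_seconds)              # example pause to let drives spin up
    switch_power(controller, on=True)       # controller last, so it can read DACstore

def power_off(expansion_modules, controller, switch_power):
    # Caller must first stop host I/O and confirm the Cache Active LED is off.
    switch_power(controller, on=False)      # controller module first
    for tray in expansion_modules:          # then the expansion modules
        switch_power(tray, on=False)

power_on(["tray-1", "tray-2"], "controller-tray",
         switch_power=lambda unit, on: print("%s: %s" % (unit, "ON" if on else "OFF")))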



Sun Storage 6000 hardware installation

Set the controller IP addresses

All controllers in the 6000 series have two Ethernet ports. Ethernet port 1 is used for management. Ethernet port 2 is reserved for future use; do not use it for management.

Figure 5-23

Ethernet ports on the 6540 controller

To configure the controller’s IP address for Ethernet port 1, you need an IP connection between the controllers and a management host. You can configure the controllers with either a dynamic or a static IP address. Note – Each controller must have its own IP address. The default IP address for controller A port 1 is 192.168.128.101. The default IP address for controller B port 1 is 192.168.128.102.

Configuring dynamic IP addressing

Dynamic IP addresses for each controller can be assigned through a DHCP server. The dynamic IP address from a DHCP server can be used if BOOTP services are available.

Configuring static IP addressing

If a DHCP server is not available, the controllers use the following default internal IP addresses:

Controller A1: 192.168.128.101




Controller B1: 192.168.128.102

Controller A2: 192.168.129.101

Controller B2: 192.168.129.102
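As a quick check of which factory-default addresses answer from a directly connected management host, the following Python sketch can be used (illustrative only; the TCP port probed is an assumption, not a documented management port):

import socket

DEFAULT_IPS = {
    "Controller A, port 1": "192.168.128.101",
    "Controller B, port 1": "192.168.128.102",
    "Controller A, port 2": "192.168.129.101",
    "Controller B, port 2": "192.168.129.102",
}

def reachable(ip, port=80, timeout=2.0):
    """Best-effort TCP probe of a default controller address."""
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return True
    except OSError:
        return False

for name, ip in DEFAULT_IPS.items():
    print("%s (%s): %s" % (name, ip, "reachable" if reachable(ip) else "no response"))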

There are several ways to change the controller’s default IP address to the desired static IP addresses. •

Connect the controller module directly to a management host using a crossover Ethernet cable and change the IP address using the management software CAM.

Connect the controller module to a management host using an Ethernet hub and change the IP address using the management software CAM.

Connect the controller module on an existing subnet and change the IP address using the management software CAM.

Utilize the Serial Port Service Interface through the serial port

Serial port service interface

With the Sun Storage 6000, the IP address can be changed through the serial port service interface. Use this interface if a DHCP server is not available or a static IP address cannot be set through Ethernet. This interface:

Displays network parameters

Sets network parameters

Clears the storage array password.

To connect to the 6540 serial port, use the null-modem cable. This should be supplied with the controller module.

Figure 5-24	6-pin to 9-pin serial converter and null-modem cable



Sun Storage 6000 hardware installation

Serial port recovery interface procedure

Once you have a physical connection on the serial port, use the following steps to complete your connection:

1.	Connect to the serial port of controller A with a terminal emulation program (38,400, 8, none, 1).

2.	Send a <break> for the interface or baud rate change.

3.	Within 5 seconds, press “S” (<shift>+s) to enter the Serial Port Interface.

4.	Enter the password within 60 seconds or access will terminate.
	a.	password = kra16wen

5.	Make a selection from the menu.
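Steps 1 through 4 can also be scripted from a management host with a serial connection, as in the hedged sketch below (assumes the third-party pyserial package and an example device path; the menu navigation in step 5 is left to the operator, and this is not a supported Sun tool):

import time
import serial  # third-party pyserial package

PASSWORD = b"kra16wen\r"

with serial.Serial("/dev/ttyS0", baudrate=38400, bytesize=serial.EIGHTBITS,
                   parity=serial.PARITY_NONE, stopbits=serial.STOPBITS_ONE,
                   timeout=1) as port:
    port.send_break()          # step 2: request the service interface
    time.sleep(1)
    port.write(b"S")           # step 3: within 5 seconds, send an uppercase S
    time.sleep(1)
    port.write(PASSWORD)       # step 4: supply the password within 60 seconds
    print(port.read(512).decode(errors="replace"))   # show the main menu text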

The two figures below show sample screens of the service interface main menu and the Ethernet port configuration screen.

Figure 5-25

Service interface main menu

Figure 5-26

Display IP configuration



The figure below shows the screen where “Change IP Configuration” can be selected.

Figure 5-27

Change IP configuration

If you answer “Y” to configure using DHCP, the array tries for 20 seconds to connect to the DHCP server. If no DHCP server is found, the array cycles back to the main menu.

Use the hardware compatibility matrix to verify SAN components

Interoperability and solution test labs conduct rigorous testing on components and arrays. Upon successful completion of comprehensive testing, these products are added to the appropriate compatibility list. Always utilize the hardware compatibility matrix to verify that all SAN components are certified with the Sun Storage 6000. The components include the data host, OS version, host bus adapters, and FC switches. Always verify firmware levels and BIOS settings for new arrays or firmware upgrades.

Note – Refer to the Sun web site for the Interoperability Matrix.



Sun Storage 6000 hardware installation

Attach the host interface cables

You can connect data hosts to the Sun Storage 6000 storage arrays through Fibre Channel switches or directly to the array. The 6x80 array has 16 host connections: 8 per 7900 controller. The 6540 array has eight host connections: four per 6998 controller. The 6140 has either two or four host ports, depending on the model. This allows redundant hosts to be directly connected to the array.

Caution – If you will be using Remote Replication with the 6998 or 3994 controller, do not use host port 4 on controller A or controller B. If using Remote Replication on the 6x80, do not use the last host port on either controller. When Remote Replication is activated, the last host port on each controller is reserved for replication, and any data host connected to it will be logged out.

Host cabling for redundancy

To ensure that, in the event of a host channel failure, the storage array will remain accessible to the host, establish two physical paths from each host or switch to the controllers, and install a path failover driver such as MPxIO on the host. This cabling topology, when used with a path failover driver, ensures a redundant path from the host to the controllers.

Figure 5-28

Attach hosts directly or through a switch


Connecting data hosts directly

A direct point-to-point connection is a physical connection in which the HBAs are cabled directly to the storage array's host ports. Before you connect data hosts directly to the array, check that the following prerequisites have been met:

•	Fiber-optic cables of the appropriate length are available to connect the array host ports to the data host HBAs.

•	Redundant connections from the host to each controller module are available.

•	Certified failover software is enabled on the host.

Note – Check the hardware/software compatibility matrix to determine the certified failover solutions.

Connecting data hosts through an external FC switch

You can connect the Sun Storage 6000 storage array to data hosts through external FC switches. Always check the compatibility matrix to ensure the switch is on the matrix as part of a certified solution. Before you connect data hosts, check that the following prerequisites have been met:

•	The FC switches are installed and configured as described in the vendor's installation documentation.

•	Redundant switches inherently provide two distinct connections to the storage array.

•	Interface cables are connected and routed between the host bus adapters (HBAs), the switches, and the installation site.

•	Fiber-optic cables of adequate length are available to connect the array to the FC switch.


Hardware installation summary

•	Use a Fibre Channel drive channel from each controller to each expansion tray to create a redundant drive channel

•	For best performance, use all available drive-side channels

•	In the 6540, use odd-numbered drive channels as redundant pairs and even-numbered channels as redundant pairs

•	Cable one side of the expansion tray top down and cable the opposite side of the expansion tray from the bottom up to build in redundancy and availability

•	Turn on the power to the storage system by powering on all expansion trays at the same time (use the cabinet breakers), then the controller module

•	To turn off the power, use the cabinet-mounted circuit breakers to turn off all modules at once, or power off the controller module first, then the expansion trays

•	Follow the correct steps to cable a new expansion tray to an existing online storage system so the controllers remain online and data remains available


Knowledge check

1. On the diagram below, number the expansion modules and design a cabling scheme for the Sun Storage 6540 that has one controller module and six expansion modules:



2. On the diagram below, design a cabling scheme for the Sun Storage 6140 that has one controller module and six expansion modules:


3. Why is it important to have a unique module ID assigned to an expansion module?

4. Why would you choose to use fiber-optic cables instead of copper cables?

5. Why is top-down, bottom-up cabling important?

6. What is the best way to power on an entire storage array?



Module 6

Sun Storage Common Array Manager

Objectives

Upon completion of this module, you will be able to:

•	Describe the functionality of Common Array Manager (CAM)

•	Differentiate between a management host installation and a data host installation

•	Describe the management methods used by CAM

•	Explain the function of a multi-path driver

•	Describe logging in to and navigating within CAM

•	List initial CAM configuration steps


What is Sun Storage Common Array Manager?

The Sun Storage Common Array Manager (CAM) allows storage administrators to monitor, configure, and maintain Sun Storage mid-range 6000 storage arrays over existing LANs and WANs. CAM features online administration of all 6000 array functions. Fully dynamic reconfiguration allows for the creation, assignment, or reassignment of volumes without interruption to other active volumes. Maintenance of the storage array is also simplified because storage administrators can receive important event information about the status of the storage array, where and when needed, through e-mail notification.

CAM management activities include:

•	Centralized management – monitor and manage 6000 arrays from any location on the network

•	Web-based GUI – the CAM GUI displays information about the storage array's logical components (storage volumes and virtual disks), physical components (controllers and disk drives), topological elements (host groups, hosts, host ports), and volume-to-LUN mappings.

•	Volume configuration flexibility – the characteristics of a volume are defined not only during volume creation, but also by the storage pool and profile that are associated with the volume. The volume characteristics ensure the most optimal configuration settings are used to create volumes that meet the I/O requirements for specific types of applications.

Volume characteristics include:

•	Capacity
•	Segment size
•	Modification priority
•	Enable/disable read cache
•	Enable write cache (write back)
•	Disable write cache (write through)
•	Enable/disable write cache mirroring
•	Read-ahead multiplier
•	Enable/disable background media scan, with or without redundancy check



•	Online administration – CAM enables most management tasks to be performed while the storage remains online with complete read/write data access. This allows storage administrators to make configuration changes, conduct maintenance, or expand the storage capacity without disrupting I/O to attached hosts. CAM's online capabilities include:

	Dynamic expansion, which enables new expansion modules to be added, virtual disks to be configured, and volumes to be created without disrupting access to existing data. Once a newly created volume is defined, LUNs are immediately available to be mapped and accessed by the data host.

	Dynamic capacity expansion (DCE), which adds up to two drives at a time to an existing virtual disk, creating free capacity for volume creation or expansion and improving the IOPS performance of the volumes residing on the virtual disk.

	Dynamic volume expansion (DVE), which allows you to expand the capacity of an existing volume by using the free capacity on an existing virtual disk.

	Dynamic virtual disk defragmentation, which allows you to consolidate all free capacity on a selected virtual disk. A fragmented virtual disk can result from volume deletion leaving groups of free data blocks interleaved between configured volumes. Using the Defrag option on a virtual disk allows you to maximize the amount of free capacity available to create additional volumes on that virtual disk.

•	Highest availability – CAM software ensures uninterrupted access to data with online storage management and support for up to 15 global hot spares with firmware 6.1x and unlimited hot spares with firmware 7.xx.

•	Intuitive diagnostics and recovery – the Service Advisor provides valuable troubleshooting assistance by diagnosing storage array problems and determining the appropriate procedure to use for recovery.

•	Extensive operating system support – CAM software provides a broad range of platform support for open systems environments, including Windows Server 2003, Solaris, HP-UX, Linux, AIX, NetWare, and IRIX. CAM itself, however, can be installed only on Windows, Linux, and Solaris (SPARC and x86).

In summary, CAM allows management from one or more points on the network, centralization of capacity allocation decisions, and remote support and management.


The CAM interface

The Sun Storage Common Array Manager (CAM) provides a common management interface for Sun-supported storage solutions. It greatly reduces the complexity of data storage implementations by providing:

•	A standard interface for storage management and event reporting

•	Standard terms associated with Sun's storage arrays

•	A shorter learning curve when transitioning to newer storage products

The Common Array Manager allows users to manage multiple storage devices from a single interface by adhering to the Storage Management Initiative Specification (SMI-S) created by the Storage Networking Industry Association (SNIA).

SMI-S overview

Storage management today is a myriad of different software packages from different vendors that are not coordinated with each other. Furthermore, many of these applications lack the functionality, security, and dependability needed to ensure greater business efficiency. Incompatible application programming interfaces (APIs) for storage management are spread throughout today's multi-vendor SANs. The Storage Management Initiative Specification (SMI-S) helps administrators gather and examine data from dissimilar vendors' products and puts it in a common format. This lets CAM manage all devices on the SAN from a centralized application.



The figure below displays an overview of SMI-S.

Figure 6-1

SMI-S overview

SMI-S is derived from the Web-based Enterprise Management (WBEM) initiative. WBEM contains the Common Information Model (CIM) for managing network infrastructures, together with a data model, a transport mechanism that uses Hypertext Transfer Protocol (HTTP), and encoding that uses Extensible Markup Language (XML). SMI-S goes further than the open management functionality found in the Internet Engineering Task Force's (IETF) longstanding and much-used Simple Network Management Protocol (SNMP).

There are two current versions of SMI-S, version 1.0.1 and version 1.0.2. The Common Array Manager uses version 1.0.2, which allows for the most up-to-date support for SAN infrastructures. SMI-S provides heterogeneous vendor support and functionally rich, dependable, and secure monitoring and control of mission-essential resources. This interface sweeps away the deficiencies associated with legacy management.


Software components

The management software is delivered on compact disc (CD) or can be downloaded from the Sun web site. The management software consists of a number of components:

•	Sun Storage Management Host Software (Common Array Manager – CAM)

•	Sun Storage Remote Management Host Software

Management host – used to manage the storage array. This can be any host that has a network connection to the storage array and has the CAM management host software installed.

Data host – used to read and write data to the storage array. This can be any host that has an FC connection to the storage array and has the CAM data host software installed.

Note – Hosts that have both network and FC connections to the storage array can act as both management and data hosts.

Sun Storage Management host software

The management host software is the Sun Storage Common Array Manager (CAM) 6.xx package and contains the following:

•	Graphical user interface (GUI), also referred to as the browser user interface (BUI), which includes the Java™ platform and the Sun Web Console. CAM's web-based Java console is the primary interface for configuration and administration of the storage array. It enables users to manage the storage array from any system with a web browser that is on the same network as the management host. For a list of supported browsers, see the release notes.

•	Sun Storage Configuration Service (SSCS), which is the command line interface (CLI). The SSCS CLI provides the same control and monitoring capability as the web browser. In addition, the CLI is scriptable for running frequently performed tasks (see the sketch after this list).

•	Built-in Service Advisor and background Fault Management Service (FMS). Both were formerly features of a separate product called the Sun Storage Automated Diagnostic Environment (StorADE) but have now been incorporated into CAM. The Service Advisor provides online advice for replacing components and for diagnosing and resolving issues. The Fault Management Service is a service, or daemon, that runs in the background and monitors the storage arrays for exceptions. Via CAM you can configure the FMS to monitor the storage arrays on a 24-hour basis, collecting information that enhances the reliability, availability, and serviceability (RAS) of the storage array. CAM automates the transmission of alerts, which can be sent via e-mail, pager, or other diagnostic software (for example, an SNMP service) installed on a management host on the network.
Sun Storage Remote Management host software

The Remote Management Host software contains only the CLI. It can be installed on remote Solaris and non-Solaris systems, allowing users to manage the storage array remotely.

Note – The Remote Management Host software that comes on the CD is only for the Solaris OS on the SPARC® platform. Versions for other platforms can be downloaded from the Sun web site.

Use of the Remote Management Host software still requires a CAM management host. The CLI simply communicates with the CAM management host to perform the desired tasks.

CAM management methods

CAM uses the out-of-band management method on Linux, Solaris (SPARC and x86), and Windows 2003 Server, and in-band management on Linux and Solaris SPARC. With firmware 7.xx, in-band management is also available with Windows and Solaris x86.

Out-of-band management method

The out-of-band management method allows storage management commands to be sent to the controllers in the storage array directly over the network through each controller's Ethernet connection.



Out-of-band management requires that each controller already have an IP address that was either set statically or assigned via a DHCP server.

Note – Full storage array management requires that both controllers be accessible via Ethernet. If only one controller is accessible, then only a subset of the storage array management functions will be available.

To manage the storage array through an Ethernet connection:

1. Attach cables from the Ethernet connections on the storage array to the network.

2. The 6000 storage array has two Ethernet ports. Be sure that the Ethernet cable used for management is connected to Ethernet port 1.

3. Install CAM on a management host.

4. Register the storage array with CAM by completing an auto-discovery (Scan the Subnet) or by entering the IP address of one of the storage array controllers.

Note – Multiple users can be logged in to the CAM management server concurrently.
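As a quick sanity check before registering the array, verify that the management host can reach both controllers over Ethernet. In this sketch the controller addresses are the factory defaults described later in this module; substitute the addresses used at your site:

    # Verify IP connectivity to both controllers:
    ping 192.168.128.101
    ping 192.168.128.102
    # Then point a browser at the management host to register the array:
    #   https://<management-host>:6789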

Figure 6-2


Out-of-Band management and location of CAM installation


In-band management method

In-band management uses a proxy agent running on a data host to communicate with a managed storage array. Sun Storage Common Array Manager software discovers the proxy agents on the subnet and then queries storage arrays registered with the software. The proxy agent receives the queries over Ethernet and passes them on to the array over the data path between the data host and the array.

New arrays can be registered with the software using the registration wizard. The wizard can auto-discover the array via the proxies, or you can specify the IP address of the proxy agent. Once an array is registered, management of the array appears the same as it does with an out-of-band connection. Volume creation, deletion, and mapping are accomplished in the same manner.

In-band management uses a special access LUN mapping to facilitate communications between the management software and the storage array. You can view all mappings on the array on the Mapping Summary page in the Sun Storage Common Array Manager software. For in-band communication, an access volume is mapped to LUN 31. This special access LUN (also called the UTM LUN) is mapped to the default domain. (All storage arrays have a default domain for volumes not registered with a storage domain.) With new storage arrays, the mapping of the access LUN to the default domain is installed at the factory. If you lose this mapping, before installing in-band management, use out-of-band management and the Common Array Manager software to re-map the access LUN to the default domain. See the online help in the software for more information about mapping.

Note – For CAM release 6.0.x, only Linux and Solaris SPARC are supported for in-band management. The SMruntime package needs to be installed before the SMagent package. For CAM release 6.1.x, in-band management support is also available for Windows and Solaris x86.


Figure 6-3


In-band management


Sun Storage data host software

Data host software controls the data path between the data host and the storage array. It contains tools that manage the data path I/O connections between the data host and the storage array, including drivers and utilities that enable hosts to connect to, monitor, and transfer data in a storage area network (SAN).

Figure 6-4

Data host software

The type of data host software you need depends on your operating system. You can obtain most of the data host software from the Sun Download Center.

Figure 6-5

Storage I/O stack


Host Bus Adapter (HBA): Compatibility and configuration

•	HBAs consist of single, dual, or quad port cards (QLogic, Emulex, LSI, etc.)

•	Check the storage vendor interoperability matrix for appropriate drivers

•	Verify proper HBA BIOS settings

•	Verify required operating system tweaks (OS service patches)

•	Be careful about downloading the "latest" driver from the HBA vendors' web sites

Figure 6-6

Always use a compatible HBA driver and settings
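On a Solaris 10 data host, for example, you can confirm which HBA ports and firmware levels the OS sees before comparing them against the interoperability matrix. This is only a sketch; the output fields vary by driver, and the port WWN shown is a hypothetical placeholder:

    # List FC HBA ports, including model, driver, and firmware versions:
    fcinfo hba-port
    # Show remote ports (array controller ports) visible on a given HBA port WWN:
    fcinfo remote-port -p 210000e08b123456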

Multi-path drivers

Multipathing solutions are designed to provide failover through the use of redundant physical path components (host bus adapters, cables, and switches) between the server and storage array. In the event that one or more of these components fails, applications can still access their data. Fault tolerance is not the only benefit of multipathing solutions. Multipathing software also serves to redistribute the read/write load among multiple paths between the server and storage, thereby helping to remove bottlenecks and to balance workloads.

General features of a multi-path driver:

•	Multi-path discovery and configuration

•	Path failover

•	Controller failover



•	Path failback

•	I/O load balancing

•	Shared storage support for cluster-based solutions

•	Volume rebalance across controllers (preferred LUN owner model)

•	Generic solution or hardware specific (protocol support: SCSI, SAS, InfiniBand, iSCSI)

•	Dynamic addition of LUNs, paths, and targets

Certified multi-path drivers

•	Sun Traffic Manager (MPxIO) for Solaris

•	Veritas Dynamic Multi-Pathing (DMP) + ASL, array support library (Solaris and Windows)

•	LSI's Multi-Path Proxy driver ("MPP" driver) (Linux)

•	Classic RDAC (SCSI-centric dual-path drivers) (Windows 2000, 2003)

•	Microsoft MPIO + DSM, device-specific module (Windows 2003)

Characteristics of each multi-path solution

Table 6-1    Multi-path solutions

Operating System | Failover Driver Type | Storage Array Mode | No. Paths Supported | No. Volumes Supported | Failover Through Single HBA Support? | Cluster Support?
Windows | RDAC | Mode Select | 4 default, 32 maximum | 254 | Yes, as long as at least one good path to each controller is detected | Yes
Windows | MPIO | Mode Select | 4 default, 32 maximum | 254 | Yes, as long as at least one good path to each controller is detected | Yes
Linux | RDAC | Either Mode Select or AVT | 4 default, 32 maximum | 256 for Linux 2.4, 256 for Linux 2.6 | Yes, as long as at least one good path to each controller is detected | Yes
Linux 2.4 | Veritas Volume Manager (Veritas DMP) | AVT | 4 default, 32 maximum | 256 | Yes | Yes
Linux 2.6 | Veritas Volume Manager (Veritas DMP) | AVT | 4 default, 32 maximum | 256 | Yes | Yes
Solaris | RDAC | Mode Select | 2 | 32 | Yes | Veritas only
Solaris | Veritas Volume Manager (Veritas DMP) | AVT | At least 2, no hard-coded limit | 255 | Yes | Veritas: Yes; Solaris: No
Solaris | MPxIO | Mode Select | 4 | 255 | Yes | Yes

For Solaris:

•	RDAC/MPP is supported on Solaris 8 and 9

•	MPxIO is also supported on Solaris 8 and 9

•	MPxIO must be disabled if using RDAC/MPP as the multi-path solution on Solaris 8 or 9

•	MPxIO is the recommended default for Solaris 10

•	Do not install the RDAC/MPP driver on Solaris 10

•	There is no supported version of RDAC/MPP for Solaris 10
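For example, on a Solaris 10 data host, a minimal sketch for enabling MPxIO and confirming the multipathed devices might look like the following; the commands are standard Solaris 10 utilities, and the reboot prompt and resulting device names depend on your host:

    # Enable MPxIO on the supported FC controller ports (prompts for a reboot):
    stmsboot -e
    # After the reboot, list multipathed logical units and check their path counts:
    mpathadm list lu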


Redundant controller failover methods

Explicit method (RDAC mode): Delivery of a special SCSI command (mode sense/select) to the storage array causes all volumes to move to one controller or the other.

Implicit method (AVT, auto volume transfer): I/O directed to a volume through the non-owning controller effects an ownership change of that volume.

Forced condition: A storage array controller has suffered a fault condition. The alternate controller initiates a forced takeover of all volumes.

Explicit method (RDAC mode)

When a path fails, delivery of a special SCSI command (mode sense/select) to the storage array causes all volumes to move to one controller or the other.

Figure 6-7

SCSI Mode Select method


Implicit method (AVT: auto volume transfer)

When a path fails, I/O directed to a volume through the non-owning controller effects an ownership change of that volume.

Figure 6-8


AVT method


Forced condition

A storage array controller has suffered a fault condition. The alternate controller initiates a forced takeover of all volumes.

Figure 6-9

Forced method

Common Array Manager installation

Common Array Manager is installed using the CD provided with the 6000, or downloaded from the Sun web site, and currently runs on the Solaris SPARC and x86, Linux, and Windows 2003 Server operating systems. Following the installation steps will ensure proper function once installation is complete.

Note – Detailed installation instructions are provided in the Sun Storage Common Array Manager Software Installation Guide.

Before you start the installation, ensure the following requirements are met:

•	The root password of the management host is available (for running the installation script). Note that the root (administrator) password is required for the initial login to the Sun Java Web Console after the software is installed.

•	The following amount of space is required for the installation:



	625 MB on Solaris
	705 MB on Linux
	620 MB on Windows

Note – Review the release notes for the most up-to-date list of supported operating systems.

The installation script verifies these requirements. If a requirement is not met, the script informs the user or, in some cases, exits. The installation wizard provides two choices for installation: typical and custom. In a typical installation, the entire management host software is installed. If the custom installation is selected, the user can choose either the management software or the remote CLI client to install.

Note – During the software installation, the progress indicator may reflect 0% for a considerable portion of the installation process. This is the expected progress indication when the "typical" installation process is selected.

Firmware and NVSRAM files

The controller firmware, NVSRAM, IOM, and drive firmware are bundled with CAM.



When registering a storage array, the CAM software confirms that the firmware on the controller is compatible with the version of CAM. All versions of CAM are backwards compatible, which allows higher levels of CAM to manage storage arrays running on lower levels of firmware. The registration results page displays a message if the firmware is not at the baseline that matches this version of the Common Array Manager software. If the detected array is not at a baseline firmware level, the firmware can be upgraded at a later time from the administration page of each storage array.

In addition to firmware, non-volatile static random access memory (NVSRAM) is a file that specifies default settings for the controllers. This file is also upgraded as part of the firmware upgrade for each controller.

Caution – Each 6000 controller model has a unique NVSRAM file. Inappropriate application of this file can cause serious problems, including loss of connectivity with the storage array.

Sun Storage Common Array Manager navigation

Navigation through the Common Array Manager Java console is performed in the same manner used to navigate a typical web page. The navigation tree to the left of most screens provides navigation among pages within an application. On-screen links can be clicked for additional details. In addition, information displayed on a page can be sorted and filtered. When the cursor is moved over a button, tree object, link, icon, or column, a tool tip with a brief description of the object is displayed. Most screens are broken into three sections: the banner, the navigation tree, and the content area.


Common Array Manager banner

The banner consists of access buttons across the top and quick status displays on the left and right sides. The figure below displays the page banner.

Figure 6-10

Page banner

The access buttons provide the following functions:

•	Console – returns to the Sun Java Web Console page

•	Version – displays the version of the component currently being viewed on screen

•	Refresh – updates the current view

•	Log Out – logs the current user out and then displays the Sun Java Web Console login page

•	Help – opens the online help system

Note – There are additional buttons specific to some screens.

The quick status display on the left of the banner provides the current user's role and the server name. The display on the right provides the number of users currently logged in, the date and time the array was last refreshed (by the Refresh button), and current alarms.


Common Array Manager's navigation tree

The navigation tree is only displayed in the Sun Storage Configuration Service console. It is used to navigate between areas of the interface that allow users to view, configure, manage, and monitor the array. Each folder can be expanded or collapsed by clicking the triangle on the left side of the folder. The Common Array Manager's navigation tree is displayed in the figure below.

Figure 6-11

Common Array Manager navigation tree

The main headings in the tree are:

•	Logical Storage – enables users to configure volumes, snapshots, replication sets, virtual disks, storage pools, and storage profiles.

•	Physical Storage – allows users to configure initiators, ports, virtual disks, modules, and disks.

•	Mappings – used to view the mappings for the selected array.

•	Jobs – provides access to current configuration processes (jobs running). This area also provides a history of jobs.

•	Administration – allows users to configure base array parameters as well as perform administrative tasks.


Common Array Manager's content area

The content area of the Common Array Manager displays information about either data storage arrays or hosts, depending on what has been requested. Content area pages are generally displayed as tables or forms. Each may contain links to additional information or steps, drop-down menus, and text boxes. An example of a content area is displayed in the figure below.

Figure 6-12

Common Array Manager content area

Additional navigation aids

There are a variety of other icons, drop-down menus, and links that will help you navigate and organize information presented by the Common Array Manager. The following table provides a list of the more commonly used items.

Note – Not all of the navigation aids in the table are available in every content screen.



Table 6-2    Common navigation aids

Icon/Indicator – Description

(Filter menu) – Filters out undesirable information. To filter information, choose the filter criterion from the drop-down menu. When filtering tables, use the following guidelines: a filter must have at least one defined criterion, and a filter applies to the current server only.

(Paging toggle) – Toggles between displaying a page at a time and displaying 15 or 25 rows at a time. When the top icon is displayed, click the icon to page through all data. When the bottom icon is displayed, click the icon to page through 15 or 25 rows at a time.

(Select all/deselect all) – Selects or deselects all check boxes in a table. The icon on the left selects all items and the icon on the right deselects all items.

(Ascending sort indicator) – Indicates that the column in the table is sorted in ascending order, for example, 0 to 9. A highlighted symbol indicates that it is the active column being used to sort the data.

(Descending sort indicator) – Indicates that the column in the table is sorted in descending order, for example, 9 to 0. A highlighted symbol indicates that it is the active column being used to sort the data.

(Page indicator) – Indicates the current page out of the total number of pages. You can also type in the desired page and click "Go" to jump to a desired page.

Red asterisk – Indicates a required field.

Double down arrows – Displays the part of the form indicated by the text next to the icon.

Double up arrows – Click to return to the top of the form.


Administration functions and parameters

Once CAM has been properly installed, the user must set up the storage array for management and use. The following functions and parameters can be set via the General Configuration tab in CAM to complete the initial array configuration.

Accessing the management software

To access the management software, open a browser and type:

https://[management host IP address]:6789

or

https://localhost:6789

Then log in to the Common Array Manager as root, using the root password for the management host (on Windows this is typically the Administrator login). Then select Sun Storage Configuration Service from the Storage section of the Sun Java Web Console page.
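If the login page does not load, the Sun Java Web Console service may not be running on the management host. On Solaris, a hedged example of checking and starting it is shown below; smcwebserver is the Java Web Console control utility, and the exact output differs by release:

    # Check whether the Java Web Console is running:
    smcwebserver status
    # Start it if necessary, then retry https://<management-host>:6789
    smcwebserver start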

Auto Service Request (ASR)

Auto Service Request (ASR) monitors the array health and performance and automatically notifies the Sun Technical Support Center when critical events occur. Critical alarms generate an Auto Service Request case. The notifications enable Sun Service to respond faster and more accurately to critical on-site issues.

The Common Array Manager provides the interface to activate Auto Service Request on behalf of the devices it manages. It also provides the fault telemetry to notify the Sun service database of fault events on those devices. To use ASR, you must provide account information to register devices to participate in the ASR service. After you register with ASR, you can choose which arrays you want to be monitored and enable them individually.

ASR uses SSL security and leverages Sun online account credentials to authenticate transactions. The service levels are based on the contract level and response times of the connected devices. ASR is available to all customers with current Storage Warranty or Storage Spectrum contracts. The service runs continuously from activation until the end of the warranty or contract period.



Only the event information is collected. Your stored data is not read and remains secure. The event information is sent by secure connection to https://cnsservices.sun.com.

Event information collected by ASR:

1. Activation event: Static information collected for the purpose of client registration and entitlement.

2. Heartbeat event: Dynamic pulse information periodically collected to establish whether a device is capable of connecting.

3. Alarm event: Critical events trigger Auto Service Request and generate a case. Additional events are collected to provide context for existing or imminent cases.

Subscribing to and editing properties of Auto Service Request

During the initial storage array registration process, the Common Array Manager prompts you to register with the Auto Service Request service by displaying the Auto Service Request (ASR) Setup page. This page continues to display until you either fill out the page and click OK, or click Decline to either decline or defer ASR service registration. After you register with ASR, you can choose which arrays you want to be monitored.

Initial Common Array Manager configuration

Once CAM has been properly installed, the user must set up the storage arrays for management and use. The following procedures are performed to complete the initial CAM configuration:

•	Configure IP addresses

•	Access the management software

•	Register the array

•	Name the array

•	Set the array password

•	Set the array time

•	Add any additional users

•	Set up the Sun Storage Automated Diagnostic Environment


Configure IP addressing

To configure the IP address for each controller's Ethernet port, an IP connection between the controller modules and a management host must already have been established using the controllers' default IP addresses. It is important that both controllers are configured with an IP address to ensure proper function. The controllers' Ethernet ports can be configured with either a dynamic or a static IP address.

Configuring dynamic IP addresses

Dynamic IP addresses for the controllers' Ethernet ports are assigned by a dynamic host configuration protocol (DHCP) server. The address from the DHCP server will then be used if bootstrap protocol (BOOTP) services are available.

Configuring static IP addresses

The Sun Storage 6000 array has the following default internal IP addresses for the first port:

•	Controller A: 192.168.128.101

•	Controller B: 192.168.128.102

In order to change the controllers' default IP addresses to desired static IP addresses, first set up an Ethernet interface on the management host with an IP address of 192.168.128.100 (or any IP address on the 192.168.128.0 subnet, provided it does not conflict with the controller module's IP address). Connect the management host to the storage. Note – If connecting directly to the storage without an Ethernet hub or switch, a crossover cable may need to be used. Review the Getting Started Guide for details.
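As an illustration on a Solaris management host, a temporary interface on the 192.168.128.0 subnet could be plumbed as follows; the interface name e1000g0 is only an assumption for your hardware, so substitute the interface present on your host:

    # Bring up a temporary address on the controllers' default subnet:
    ifconfig e1000g0 plumb
    ifconfig e1000g0 192.168.128.100 netmask 255.255.255.0 up
    # Confirm you can reach controller A at its default address:
    ping 192.168.128.101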

Naming an array

The storage array comes with a default name or is unnamed; you should change this to a unique name to simplify identification. The array name can be changed on the Administration Details page. The name should be unique, meaningful, and up to 30 characters in length.


Configuring the array password

One of the additional options on the Administration Details page is the Manage Passwords button located at the top of the screen. A new Sun Storage 6000 array is shipped with a blank, or empty, password field. Sun recommends that an array password be created during initial setup for security purposes. The password prevents other management hosts from unauthorized access to the configuration of the array. If an array is moved from one management host to another, the user will need to provide the password when registering the array that was moved.

Note – The password can be unique between different arrays on the same management host. However, if a single array is being managed by more than one management host, the password must be the same on each management host.

The management software stores an encrypted copy of the array password, known as the local password, on the management host. Use the Update Array Password in Array Registration Database option to ensure that there is no password conflict with another instance of the management software.

Setting the array time

Another option on the Administration Details page is the array time and date. When the time and date for a selected array are set, the values are updated across the array. The time is set automatically using the network's Network Time Protocol (NTP) server. If an NTP server is used in the network, click Synchronize with Server to synchronize the time on the array with your management host. This saves steps, since the time does not have to be set manually.

Default host type

The host type defines how the controllers in the storage array work with the particular operating system on the data hosts that are connected to it when volumes are accessed. The host type represents an operating system (Windows 2000, for example) or a variant of an operating system (Windows 2000 running in a clustered environment). Generally, you will use this option only if all hosts connected to the storage array have the same operating system (a homogeneous host environment).



If you are in an environment where there are attached hosts with different operating systems (a heterogeneous host environment), you will define the individual host types as part of creating storage domains.

Adding additional users

There are two types of privileges that can be assigned to users. The assignable privileges are:

•	Storage – the storage role can view and modify all attributes

•	Guest – the guest role can only view (monitor) all attributes

To be eligible for privileges to the CAM interface, users must have valid Solaris or Windows user accounts and have access to the management host. The users can then log in to the CAM interface using their Solaris or Windows user names and passwords. If multiple users are logged in to the array with Storage privileges, there is a risk of one user's changes overwriting those of another user. For this reason, storage administrators should develop procedures to manage this risk.

Setting module IDs

Although setting the module IDs is not a requirement for configuring the Common Array Manager, it is good practice to ensure the module IDs are unique and in order. By default, the controller module is set to ID 85; thus, when viewed, it may be at the top or bottom of your screen. Each additional expansion module added to the array should be numbered one, two, three, and so on. For example, if you have two expansion modules attached, their IDs should be set to 1 and 2, respectively. Module IDs can be changed by selecting modules from the CAM navigation tree.

Common Array Manager summary

•	CAM is the management software that enables configuration and maintenance of the 6000 arrays

•	CAM allows for two types of management, in-band and out-of-band

•	Data host software includes the HBA driver and multi-path drivers

•	There are three types of failover methods: explicit, implicit, and forced

•	CAM can be installed on Linux, Windows, and Solaris SPARC and x86

•	To start CAM, open a browser window


Knowledge check


1. What is the difference between a "data host" and a "management host"?

2. Describe the main difference between in-band and out-of-band management.

3. What is the purpose of the "access" volume?

4. List the three types of failover methods.

5. List at least four initial configuration steps.



Module 7

Array configuration using Sun Storage Common Array Manager

Objectives

Upon completion of this module, you will be able to:

•	Describe how to provision the storage array with Common Array Manager

•	Describe additional provisioning components and how they relate to volume creation

•	Describe the profile parameters that are selected when creating a volume


Common Array Manager configuration components

Prior to administering the Sun Storage 6000, it is important to understand the basic configuration components used in the Common Array Manager (CAM) interface. To configure storage resources you must work with both physical and logical components.

Figure 7-1

Physical components

The physical components are:


•	Initiator – A port on a Fibre Channel (FC) host bus adapter (HBA) that allows a data host to gain access to the storage array for data I/O purposes. The initiator has a World Wide Name (WWN) that is globally unique.

•	Hosts – A server, or data host, with one or more initiators that can store data on an array. A host can be viewed as a logical grouping of initiators. You can define volume-to-logical unit number (LUN) mappings to an individual host or assign a host to a host group.

•	Host Groups – A collection of one or more data hosts in a clustered environment. A host can be part of only one host group at a time. You can map one or more volumes to a host group to enable the hosts in the group to share access to a volume.

•	Controllers – The RAID controllers in the Sun Storage 6000 array.

•	Ports – The physical ports in the Sun Storage 6000 array.

•	Modules – A module that contains from 5 to 16 disks.



•	Disks – A non-volatile, randomly addressable, re-writable data storage device. Physical disks are managed as a pool of storage space for creating volumes.

Figure 7-2

Logical components

The logical components are:

•	Virtual disks – One or more physical disks that are configured with a given RAID level (also called a RAID set). All physical disks in a virtual disk must be of the same type, FC or SATA II.

•	Volume – A container into which applications, databases, and file systems store data. Volumes are created from a virtual disk, based on the characteristics of a storage pool. You assign a LUN number to a volume and map it to a host or host group.

•	Profiles – A set of attributes that are used to create a storage pool. The array has a pre-defined set of storage profiles. You can choose a profile suitable for the application that is using the storage, or you can create a custom profile.

•	Pools – A collection of volumes with the same configuration. A storage pool is associated with a storage profile, which defines the storage properties and performance characteristics of a volume.



•	Storage domain – A logical entity that defines the mappings between volumes and hosts or host groups.

•	Snapshot – A point-in-time copy of a primary volume. The snapshot can be mounted by an application and used for backup, application testing, or data mining without requiring you to take the primary volume offline. Snapshots are a premium feature that requires a right-to-use license.

•	Data replication – The data replication feature is a volume-level replication tool that protects your data. It can be used to replicate volumes between physically separate primary and secondary arrays in real time. The replication is active while your applications access the volumes, and it continuously replicates the data between volumes.

The diagram below shows the relationship of basic logical and physical configuration components.


Figure 7-3

Relationship of basic configuration components

Creating a volume with Common Array Manager

To configure a volume on the Storage 6000, perform the following steps:

1. Select or create a profile

2. Create storage pools



3. Create a volume

Storage profiles

A storage profile consists of a set of attributes that are applied to a storage pool. Each disk or virtual disk must meet the attributes defined by the storage profile to be a member of a storage pool. The use of storage profiles simplifies configuration by setting the basic attributes that have been optimized for a specific application or data type. Prior to configuring an array, it is important to review the available storage profiles and ensure a profile exists that matches the user's targeted application and performance needs; if not, the user can create a new storage profile. The Sun Storage 6000 array provides several predefined storage profiles that meet most storage configuration requirements; see the table below.

Table 7-1    CAM default profiles

Name | RAID Level | Segment Size | Read-Ahead Mode | Drive Type | Number of Drives
Default | RAID-5 | 512 KB | Enabled | FC | Variable
High_Capacity_Computing | RAID-5 | 512 KB | Enabled | SATA | Variable
High_Performance_Computing | RAID-5 | 512 KB | Enabled | FC | Variable
Mail_Spooling | RAID-1 | 512 KB | Enabled | FC | Variable
Microsoft_Exchange | RAID-5 | 32 KB | Enabled | FC | 4
Microsoft_NTFS | RAID-5 | 64 KB | Enabled | ANY | 4
Microsoft_NTFS_HA | RAID-1 | 64 KB | Enabled | FC | Variable
NFS_Mirroring | RAID-1 | 512 KB | Enabled | FC | Variable
NFS_Striping | RAID-5 | 512 KB | Enabled | FC | Variable
Oracle_10_ASM_VxFS_HA | RAID-5 | 256 KB | Enabled | FC | 4
Oracle_8_VxFS | RAID-5 | 128 KB | Enabled | FC | 4
Oracle_9_VxFS | RAID-5 | 128 KB | Enabled | FC | 4
Oracle_9_VxFS_HA | RAID-1 | 128 KB | Enabled | FC | 4
Oracle_DSS | RAID-5 | 512 KB | Enabled | FC | Variable
Oracle_OLTP | RAID-5 | 512 KB | Enabled | FC | Variable
Oracle_OLTP_HA | RAID-1 | 512 KB | Enabled | FC | Variable
Random_1 | RAID-1 | 512 KB | Enabled | FC | Variable
Sequential | RAID-5 | 512 KB | Enabled | FC | Variable
Sun_SAM-FS | RAID-5 | 128 KB | Enabled | ANY | 4
Sun_ZFS | RAID-5 | 128 KB | Enabled | ANY | 4
Sybase_DSS | RAID-5 | 512 KB | Enabled | FC | Variable
Sybase_OLTP | RAID-5 | 512 KB | Enabled | FC | Variable
Sybase_OLTP_HA | RAID-1 | 512 KB | Enabled | FC | Variable

To view the Storage Profile Summary screen, select Profiles from the navigation pane. In addition to the profile parameters, the Storage Profile Summary screen provides the state of each profile. The possible states are In Use and Not In Use. The details of each profile can be viewed by clicking the profile name. The predefined storage profiles cannot be modified; custom storage profiles created by the user can be modified.

Note – The last profile listed is Test. The Test profile is a custom profile and is selectable by clicking the check box to the left of the profile name.

If the provided storage profiles do not meet the performance needs of a specific application, a custom profile can be created based on the parameters listed below. To create a new profile, perform the following steps:

1. Click the New button.

2. The New Storage Profile screen is displayed and you will be prompted to provide the storage profile parameters listed below.

3. Click OK.



The new profile is displayed in the Storage Profile Summary list. Once selected, a custom profile can be copied or deleted. The copy function allows the user to copy a custom profile from one array to another. The default profiles cannot be copied; there is no need, since they already exist on the other array by default. Additionally, default profiles cannot be deleted.

Storage profile parameters

Name is the unique identifier for the storage profile. The profile name can be up to 32 characters.

Description is a typed description of the profile. This parameter is optional.

RAID Level can be 0, 1, 3, 5, or 10. This is the RAID level that will be configured across all disks within a virtual disk.

Note – RAID 1 is used for two drives only. RAID 10 is used if RAID 1 is chosen and more than two drives are specified.

Segment size is the amount of data, in kilobytes (KB), that the controller writes on a single drive in a volume before writing data on the next drive. Data blocks store 512 bytes of data and are the smallest units of storage. The size of a segment determines how many blocks it contains. For example, an 8 KB segment holds 16 data blocks; a 64 KB segment holds 128 data blocks.

For optimal performance in a multi-user database or file system storage environment, set your segment size to minimize the number of drives needed to satisfy an I/O request. Using a single drive for a single request leaves other drives available to simultaneously service other requests. If the volume is in a single-user, large-I/O environment (such as multimedia), performance is maximized when a single I/O request can be serviced with a single data stripe; this is the segment size multiplied by the number of drives in the volume group that are used for I/O. In this case, multiple disks are used for the same request, but each disk is only accessed once (a worked example follows below).

Read ahead allows the controller, while it is reading and copying host-requested data blocks from disk into the cache, to copy additional data blocks into the cache. This increases the chance that a future request for data can be fulfilled from the cache.
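As a worked example of the full-stripe sizing described above: with a 128 KB segment size on a virtual disk whose RAID-5 layout uses four data drives for I/O, one full stripe is 128 KB x 4 = 512 KB, so a single 512 KB sequential request can be serviced by touching each data drive exactly once.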



Cache read-ahead is important for multimedia applications that use sequential I/O. The cache read-ahead multiplier value is multiplied by the segment size of the volume to determine the amount of data that will be read ahead. The multiplier is chosen by the controllers based on the I/O pattern of the data. Setting this value to Disabled turns off read ahead. Setting this value to Enabled tells the controllers to determine the most optimal multiplier value.

Number of Disks can be set to a value between 1 and 30, or to the value Variable. This parameter specifies the number of disks to be grouped together in a virtual disk. For example, if you create a storage pool with a profile that has the number of disks parameter set to a number, all virtual disks that are part of that storage pool must have the same number of disks. If the number of disks parameter is set to the Variable value, you are prompted for the number of disks when storage is added to the pool.

Note – The maximum number of disk drives is 30, but the actual limitation is based on the 2-terabyte size restriction for a volume.

Note – Module loss protection is achieved when all the drives that comprise the Virtual Disk are located in different expansion modules.

Disk Type specifies the drive type to be used for the volume. It can be set to FC, SATA, or Any. Mixing drive types (SATA and Fibre Channel) within a single virtual disk is not permitted. If the available disk drives have different capacities and/or different speeds, the overall capacity of the virtual disk is based on the smallest-capacity drive, and its performance on the slowest drive.


Storage pools

An array can be divided into storage pools. Each pool is associated with a profile and acts as a container for volumes or physical storage devices that meet the storage profile. This allows users to optimize each storage pool to the type of application that it will be used with.

Note – Removing a storage pool destroys all stored data in the pool and deletes all volumes that are members of the pool. The data can be restored from backup after new storage pools are added, but it is far easier to avoid the difficulty in the first place.

Volumes

A volume is a “container” into which applications, databases, and file systems can store data. A volume is created from a Virtual Disk that is part of a storage pool. The creation of a volume is comparable to partitioning a disk drive, in that a volume is a part of a Virtual Disk. There are several different types of volumes:

• Standard volume - A standard volume is a logical structure created on a storage array for data storage. When you create a volume, initially it is a standard volume. Standard volumes are the typical volumes that users will access from data hosts.

• Source volume - A standard volume becomes a source volume when it participates in a volume copy operation as the source of the data to be copied to a target volume. The source and target volumes maintain their association through a copy pair. When the copy pair is removed, the source volume reverts back to a standard volume.

• Target volume - A standard volume becomes a target volume when it participates in a volume copy operation as the recipient of the data from a source volume. The source and target volumes maintain their association through a copy pair. When the copy pair is removed, the target volume reverts back to a standard volume.

• Replicated volume - A replicated volume is a volume that participates in a replication set. A replication set consists of two volumes, each located on a separate array. After you create a replication set, the software ensures that the replicated volumes contain the same data on an ongoing basis.

• Snapshot volume - A snapshot volume is a point-in-time image of a standard volume. The management software creates a snapshot volume when you use the snapshot feature. The standard volume on which a snapshot is based is also known as the base or primary volume.

• Reserve volume - There are two types of reserve volumes: a snapshot reserve volume and a remote replication reserve volume. Every snapshot created results in the automatic creation of a snapshot reserve volume. The snapshot reserve volume is used to save original data from the base volume as changes are made to the base volume. The remote replication reserve volume is automatically created when the Remote Replication feature is activated. One remote replication reserve volume is created for each controller; each reserve is fixed in size (128 MB) and is used to store information about the state of the Remote Replication volumes.

Volume configuration preparation

Creating a volume involves a number of tasks and decisions about a variety of elements in your storage configuration. On a brand-new array that does not have anything configured, the creation of a volume will automatically result in the creation of a Virtual Disk. Prior to creating a volume, be prepared to provide the following information:


• Volume name - Provide a unique name that identifies the volume.

• Volume capacity - Identify the capacity of the volume in megabytes, gigabytes, or terabytes. The capacity is the amount of disk space on the Virtual Disk that will be used for this volume.

• Storage Profile - Check the list of configured profiles to see if any contain the desired characteristics (RAID level, drive type, segment size, number of drives, and so on). If a suitable profile does not exist, create a new profile.

• Storage Pool - The storage pool selected is associated with a storage profile, which determines the volume's characteristics. The management software supplies a default storage pool. This pool uses the default storage profile, which implements RAID-5 storage characteristics suitable for the most common storage environments. Other pools may also have been configured. Choose a storage pool associated with the storage profile whose attributes best suit the application.

• Disk Selection method for creating the Virtual Disk - A volume can be created on a virtual disk as long as the RAID level, the number of disks, and the disk type (either FC or SATA) of the virtual disk match the storage profile associated with the volume's pool. The virtual disk must also have enough capacity for the volume.

In addition, the method of determining which virtual disk will be used to create the volume must be chosen. The following options are available:

• Automatic - The management software automatically searches for and selects a virtual disk that matches the storage profile of the selected storage pool. If none is available, it creates a new virtual disk based on the profile.

• Create Volume on an Existing Virtual Disk - Manually select the virtual disk on which to create the volume from the list of all available virtual disks. Be sure that the virtual disk you select has enough capacity for the volume.

• Create a New Virtual Disk - A new virtual disk is created by specifying the number of disks or by selecting from a list of available disks. The Virtual Disk is then used to create the volume. Be sure that the disks you select have enough capacity for the volume, accounting for the parity used by the chosen RAID level.

• Whether you want to map the volume now or later - You can add the volume to an existing storage domain, including the default storage domain, or create a new one by mapping the volume to a host or host group.

Once the volume or volumes have been successfully mapped to a host or host group, the storage resource will be available to the hosts' operating systems.

Volume parameters

The following parameters can be viewed, and most can be modified dynamically after the Volume has been created.

Cache settings

Cache is high-speed memory designed to hold data that was recently accessed or is about to be accessed. Cache works like this: when data is needed, the cache hardware and software check to see if the information is already in cache. If it is, the data is returned from cache; this is called a cache hit. If it is not, it is called a cache miss, and the controller has to access the disk, which is slower. The use of cache increases controller performance in three ways:

• Cache acts as a buffer, so that host and drive data transfers do not need to be synchronized.


• The data for a read or write operation from the host may already be in the cache from a previous operation, eliminating the need to access the drive itself.

• If write caching is enabled, the host can continue before the write operation to disk actually occurs.

Read caching allows read operations from the host to be stored in controller cache memory. If a host requests data that is not in the cache, the controller reads the needed data blocks from the disk and then places them in the cache. Until the cache is flushed, all other requests for this data are fulfilled with cache data rather than from a physical disk read, increasing throughput. Read caching is enabled by default and cannot be modified.

Write caching allows write operations from the host to be stored in cache memory. Unwritten volume data in cache is written to disk, or flushed, automatically every 10 seconds.

Write caching with replication allows cached data to be mirrored across redundant controllers with the same cache size. Data written to the cache memory of one controller is also written to the cache memory of the other controller. Therefore, if one controller fails, the other can complete all outstanding write operations. This option is available only when write caching is also enabled.

Write caching without batteries allows write caching to continue even if the controller batteries are discharged completely, not fully charged, or not present. If you select this parameter without a UPS for backup power, you could lose data if power fails.

Caution – This option should never be used in production environments.

Disk Scrubbing for an individual volume enables the disk scrubbing process for that volume, which finds media errors before they disrupt normal drive reads and writes. The disk scrubbing process scans all volume data to verify that it can be accessed and, optionally, scans the volume redundancy data. Disk Scrubbing with Redundancy scans the blocks in a RAID 3 or RAID 5 volume and checks the redundancy information for each block, or it compares data blocks on RAID 1 mirrored pairs. Errors are corrected where possible, and all errors are reported to the Event Log.

Preferred controller ownership of a volume or Virtual Disk designates which controller owns that volume or Virtual Disk.

Modification Priority defines how much processing time is allocated for volume modification operations relative to array performance. You can increase the volume modification priority, although this might affect array performance. Operations affected by the Modification Priority include:

• Copyback

• Reconstruction

• Initialization

• Changing segment size

• Defragmentation of a Virtual Disk

• Expanding a Virtual Disk (adding more drives to an existing Virtual Disk)

• Dynamic Volume Expansion (DVE)

• Changing from one Storage Profile to another that would result in a change of RAID level or segment size

Modification Priority Rates - The Lowest priority rate favors array performance, but the modification operation will take longer. The Highest priority rate favors the modification operation, but array performance might be compromised.

Virtual Disks

During the configuration of a volume, the Common Array Manager creates a Virtual Disk automatically. Virtual disks are created and removed indirectly through the process of creating or deleting volumes or snapshots. A Virtual Disk is the RAID set that contains the specified number of disks and is created based on the RAID level assigned in the storage profile. The disk drives that participate in the virtual disk must all be of the same type, either Serial Advanced Technology Attachment (SATA) or Fibre Channel (FC). Once established, Virtual Disks can be modified in the following ways:

• Defragment the Virtual Disk

Defragmenting the Virtual Disk ensures that all volumes in the Virtual Disk are contiguous. For example, if there were three volumes in a Virtual Disk and the middle volume was deleted, the defragment feature will move the third volume into the place previously occupied by the second.

• Place the Virtual Disk offline

• Expand the Virtual Disk by adding additional drives to the Virtual Disk.

Summary and detail information on existing virtual disks can be displayed. Summary information about the disk drives and volumes associated with each virtual disk can also be displayed.

Administration functions and parameters

Once CAM has been properly installed, the user must set up the arrays for management and use. The following functions and parameters can be set via the General Configuration tab in CAM to complete the initial array configuration.

Auto Service Request (ASR)

Auto Service Request (ASR) monitors the array health and performance and automatically notifies the Sun Technical Support Center when critical events occur. Critical alarms generate an Auto Service Request case. The notifications enable Sun Service to respond faster and more accurately to critical on-site issues.

The Common Array Manager provides the interface to activate Auto Service Request on behalf of the devices it manages. It also provides the fault telemetry to notify the Sun service database of fault events on those devices. To use ASR, you must provide account information to register devices to participate in the ASR service. After you register with ASR, you can choose which arrays you want to be monitored and enable them individually.

ASR uses SSL security and leverages Sun online account credentials to authenticate transactions. The service levels are based on contract level and response times of the connected devices. ASR is available to all customers with current Storage Warranty or Storage Spectrum Contracts. The service runs continuously from activation until the end of the warranty or contract period.

Only the event information is collected. Your stored data is not read and remains secure. The event information is sent by secure connection to https://cnsservices.sun.com.

Event information collected by ASR:

1. Activation Event: Static information collected for the purpose of client registration and entitlement.

2. Heartbeat Event: Dynamic pulse information periodically collected to establish whether a device is capable of connecting.

3. Alarm Event: Critical events trigger Auto Service Request and generate a case. Additional events are collected to provide context for existing or imminent cases.

Subscribing to and Editing Properties of Auto Service Request

During the initial storage array registration process, the Common Array Manager prompts you to register with the Auto Service Request service by displaying the Auto Service Request (ASR) Setup page. This page continues to display until you either fill out the page and click OK, or click Decline to either decline or defer ASR service registration. After you register with ASR, you can choose which arrays you want to be monitored.

Array name

When naming storage systems, keep the following in mind:

• There is a 30-character limit. All leading and trailing spaces will be deleted from the name.

• Use a unique, meaningful name that is easy to understand and remember.

• A name can consist of letters and numbers; the only special characters allowed are the dash (-) and the underscore (_). No spaces are allowed.

Note – The storage management software does not check for duplicate names. Verify that the name chosen is not already in use by another array.
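As an illustration only, the naming rules above can be expressed as a small validation helper (the function name and regular expression are hypothetical; CAM performs its own checks and, as noted, does not check for duplicates):

    import re

    # At most 30 characters after trimming; letters, digits, dash, and
    # underscore only; no spaces.
    NAME_PATTERN = re.compile(r"^[A-Za-z0-9_-]{1,30}$")

    def is_valid_array_name(name: str) -> bool:
        trimmed = name.strip()   # leading/trailing spaces are dropped anyway
        return bool(NAME_PATTERN.match(trimmed))

    print(is_valid_array_name("finance_array-01"))   # True
    print(is_valid_array_name("finance array 01"))   # False (contains spaces)
    print(is_valid_array_name("x" * 31))             # False (too long)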


Default host type

The host type defines how the controllers in the storage array will work with the particular operating system on the data hosts that are connected to it when volumes are accessed. The host type depicts an operating system (Windows 2000, for example) or variant of an operating system (Windows 2000 running in a clustered environment). Generally, you will use this option only if all hosts connected to the storage array have the same operating system (homogeneous host environment). If you are in an environment where there are attached hosts with different operating systems (heterogeneous host environment), you will define the individual host types as part of creating Storage Domains.

Hot spares

A valuable strategy to keep data available is to assign available drives in the storage system as hot spare drives. A global hot spare (GHS) is a drive within the storage system that has been defined by the user as a spare drive to be used in the event that a drive that is part of a volume with redundancy fails. When the failure occurs and a GHS is configured, the controller begins reconstructing the data from the failed drive onto the GHS drive. When the failed drive is replaced with a good drive, the copy-back process starts automatically. Your storage system volume remains online and accessible while you are replacing the failed drive, since the hot spare drive automatically substitutes for the failed drive.

Reconstruction is the process of reading data from the remaining drives and the parity drive. This data is processed through an XOR operation to recreate the missing data, which is then written to the hot spare.

Copy-back is the process of copying the data from the GHS drive to the drive that has replaced the failed drive.

The time to reconstruct the GHS drive varies and depends on the activity of the storage system, the size of the failed volume, and the speed of the drives.

A hot spare drive is not dedicated to a specific volume group; instead it is global, which means it can be used for any failed drive in the storage system with the same or smaller capacity. Hot spare drives are only available for a RAID level 1, 3, or 5 volume group.

When creating a global hot spare, keep the following in mind:

• Select a drive with a capacity equal to or larger than the total capacity of the drive you want to cover with the hot spare. Generally, you should not assign a drive as a hot spare unless its capacity is equal to or greater than the capacity of the largest drive in the storage system.

• The maximum number of hot spare drives per array is 15 for FW 6.xx and 30 for FW 7.xx.
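The reconstruction step described earlier can be sketched in a few lines (purely conceptual; real controllers operate on 512-byte blocks and whole stripes, and the data shown is made up): in a parity-protected stripe, XOR-ing the surviving segments with the parity segment regenerates the missing segment, which is then written to the hot spare.

    def xor_bytes(*chunks: bytes) -> bytes:
        """Bitwise XOR of equally sized byte strings."""
        result = bytearray(len(chunks[0]))
        for chunk in chunks:
            for i, b in enumerate(chunk):
                result[i] ^= b
        return bytes(result)

    # A 3+1 RAID 5 stripe: parity is the XOR of the three data segments.
    d0, d1, d2 = b"AAAA", b"BBBB", b"CCCC"
    parity = xor_bytes(d0, d1, d2)

    # Suppose the drive holding d1 fails. Reconstruction reads the
    # surviving segments plus parity and XORs them to regenerate d1,
    # which is then written to the global hot spare.
    rebuilt_d1 = xor_bytes(d0, d2, parity)
    assert rebuilt_d1 == d1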

Storage array cache settings

There are cache settings that can be set at the storage system level that are in effect for all volumes in the storage system.

Cache start and stop percentages - The start value (percentage) indicates when unwritten cache data should be written to disk (flushed). The stop value (percentage) indicates when a cache flush should stop. When the cache holds the specified start percentage of unwritten data, a flush is triggered. When the cache flushes down to the specified stop percentage, the flush is stopped. For example, you can specify that the controller start flushing the cache when the cache reaches 80% full and stop flushing the cache when the cache reaches 16% full.

Note – Unwritten data in cache is written to disk every 10 seconds; this is not affected by the cache settings. For best performance, keep the start and stop values equal.
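The start/stop behavior can be illustrated with a minimal sketch (conceptual only; the function and its parameters are invented for this example and are not controller firmware logic):

    def cache_flush_state(dirty_pct: float, start_pct: float, stop_pct: float,
                          flushing: bool) -> bool:
        """Whether the controller should be flushing cache, given the
        current percentage of unwritten (dirty) data and the start/stop
        thresholds described above."""
        if not flushing and dirty_pct >= start_pct:
            return True      # start threshold reached: begin flushing
        if flushing and dirty_pct <= stop_pct:
            return False     # flushed down to the stop threshold: stop
        return flushing

    # Example with the 80% start / 16% stop values from the text:
    print(cache_flush_state(85, 80, 16, flushing=False))  # True  -> start flush
    print(cache_flush_state(50, 80, 16, flushing=True))   # True  -> keep flushing
    print(cache_flush_state(15, 80, 16, flushing=True))   # False -> stop flush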

Cache block size

The cache block size indicates the cache block size used by the controller in managing the cache. For the 6540, the default cache block size is set to 16 KB and cannot be modified. This parameter is applied to the entire storage system: the cache block size applies to all volumes in the storage system. For redundant controller configurations, this includes all volumes owned by both controllers within the storage system.

• 4 KB (a good choice for file system or database application use)

• 16 KB (a good choice for applications that generate sequential I/O, such as multimedia)


Disk Scrubbing

The Disk Scrubbing feature provides a means of detecting drive media errors before they are found during a normal read or write to the drive. It is intended to provide an early indication of an impending drive failure and to reduce the possibility of encountering a media error during host operations. The feature also provides an option to verify data/parity consistency for those volumes that include redundancy information. When enabled, it runs on all volumes in the storage system that:

• are optimal

• have no modification operations in progress

• have the Disk Scrubbing parameter enabled on the volume Properties dialog

The disk scrubbing Interval specifies the number of days over which disk scrubbing should be run on the eligible volumes. The controller uses this duration, in conjunction with its knowledge of which volumes must be scanned, to determine a constant rate at which to perform disk scrubbing activities. This rate is maintained regardless of host I/O activity. By default, this parameter is not enabled. Additional disk scrubbing options exist for individual volumes.
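As a rough illustration of how an interval translates into a constant scan rate (the formula and numbers are assumptions for illustration, not taken from the firmware), the rate is simply the total eligible capacity divided by the interval:

    def scrub_rate_gb_per_hour(total_eligible_gb: float, interval_days: int) -> float:
        """Constant scan rate needed to cover all eligible volume data
        once within the configured scrubbing interval."""
        return total_eligible_gb / (interval_days * 24)

    # Example: 10 TB of eligible volume data scrubbed over a 30-day interval.
    print(round(scrub_rate_gb_per_hour(10_000, 30), 1))  # ~13.9 GB per hour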

Failover alert delay

The Failover Alert Delay specifies how long to wait before logging a critical event when a Volume is transferred to its non-preferred controller. A value of 0 creates a log entry immediately.

Array time

The Array Time option synchronizes the storage system controller clocks with the storage management station. This option ensures that event timestamps written by controllers to the Event Log match event timestamps written to host log files. Controllers remain available during synchronization. You also have the option to manually set the date and time.


Manage passwords

Implementing destructive commands on a storage system can cause serious damage, including data loss. Unless a password is specified, all options are available within the storage management software. If you specify a password, any option that is destructive will be password protected. A destructive option includes any function that changes the state of the storage array, such as creation of Volumes, modification of cache settings, and so on. The password is stored on the storage system; therefore, a password needs to be set for each storage system in the management domain. When selecting a password, keep the following in mind:

• The maximum length is 30 characters.

• The password is case sensitive.

• Trailing spaces are not stripped from the password.

Note – If you have forgotten the password, contact your customer support representative.

Array configuration summary

• Create a profile based on the I/O characteristics of the application

• With FW 7.xx, RAID 6 is available on the 6140

• Drive types (SATA and FC) cannot be mixed in a vDisk

• Drives of the same type (capacity, speed, RPM) should be used in a vDisk for maximum performance

• Segment size is the amount of data (in KB) that the controller writes on a single drive before writing to the next drive in the volume

• A global hot spare (GHS) is a drive within the storage array that is set aside to be used in the event that a disk in a vDisk with redundancy (RAID 1, 10, 3, 5, 6) fails

• With firmware 7.xx, there is an option to "promote" a global hot spare drive after a drive failure and rebuild so that it becomes a permanent drive of the vDisk

• Disk scrubbing continually scans sectors of the drives configured into volumes looking for errors during reads; it can also perform a parity check on everything it scans


Knowledge check

4. You can mix drive types (SATA and Fibre Channel) in a single module. True / False

5. Why is it important to know what type of data you'll be working with when determining segment size?

6. What is a preferred controller?

7. What is cache? What effect does it have on a volume?

8. What is disk scrubbing?

9. What does "global" refer to in relation to a hot spare?

10. What is the difference between "reconstruction" and "copy-back" in relation to a hot spare?

11. Why should you name your storage array?

12. What can happen if you do not set your controller clocks to match your management station?

13. What part of the storage array takes advantage of the cache block size? What does it do with it?

14. Why is it important to keep a copy of all the support data?



Module 8

Storage Domains

Objectives

Upon completion of this module, you will be able to:

• Explain the benefits and application of Storage Domains

• Define Storage Domains terminology

• Describe the functionality of Storage Domains

• Calculate Storage Domain usage




What are Storage Domains?

Storage Domains allow a single physical storage array to be shared among multiple servers regardless of server type, application, or operating system. The storage array is “partitioned” into several virtual storage arrays, so the job of several small storage arrays can be done with a single larger array. Storage Domains also manage and control host access to volumes.

Figure 8-1

Partitioning one physical storage array into several virtual storage arrays.

Storage Domains provide:

• The ability to create:
  • FW 6.1x (6000): 4, 8, 16, or 64 domains
  • FW 7.xx (6140): 4, 8, 16, 32, 96, or 128 domains
  • FW 7.xx (6540): 4, 8, 16, 32, 96, 128, or 256 domains

• Heterogeneous host support

• A licensable feature: the license details show the number of storage domains in use


Figure 8-2

Storage domain features

Storage Domains benefits (pre-sales)

• Consolidation through Storage Domains capitalizes on the power of Sun 6000 series storage arrays and delivers significant benefits to the IT environment. By solving many of the challenges faced by IT organizations today, Storage Domains enables higher storage ROI through improved efficiency, cost avoidance, and lower TCO.

• More efficient utilization of storage capacity is possible, allowing “islands” of isolated server-attached storage to be eliminated or minimized. There is no need for extra capacity on each server, as available capacity can easily be allocated to servers as needed. This means that unused storage does not sit wasted on a given server.

• More efficient storage management is also a benefit, as busy storage administrators can reduce the number of individual storage arrays that need to be managed. Fewer storage arrays are needed to support many servers, allowing administrators to spend less time and money managing storage.

• Improved storage flexibility comes from de-coupling servers from storage and eliminating server-captive storage limitations. The typical one-to-one relationship between servers and storage can become a many-to-one or one-to-many relationship, allowing new servers or additional storage to be added quickly and easily. Existing servers and storage can also be reconfigured without the need to unload and reload data or interrupt data availability.

• Storage Domains can also play a significant role in reducing storage total cost of ownership (TCO). Through storage consolidation, enabled by Storage Domains, servers with low capacity requirements can take full advantage of larger storage arrays in a cost-effective manner, delivering greater performance, expanded functionality, and higher availability than is typically offered by the low-cost solutions designed for smaller capacity requirements. Storage Domains also enable sharing the cost of high-performing, highly available storage across multiple servers or clusters, including servers where it was not previously economically feasible.

Storage Domains benefits (technical)


• Storage Domains provide the same functionality as “LUN masking” or “LUN mapping.” LUN masking can be accomplished at various levels: the host adapter driver/software level, the fabric switch level, or the storage array level.

• Storage Domains reside in the storage array and are not dependent on a particular HBA or driver. This allows usage of standard HBA drivers (certified drivers are recommended), minimizing compatibility issues in shared server and OS environments.

• Management of the storage domains is done through CAM, providing a consistent interface across all host platforms. This eliminates the need to handle multiple operating-system- or vendor-specific LUN masking/mapping mechanisms. Storage can be consolidated and centrally managed. Having a single point of storage management allows users to centralize and simplify administrative tasks, such as managing growth and allocating capacity.

• Storage Domains enable large-scale storage consolidation by providing multiple domains (up to 64) per storage array.

• In contrast to software or HBA driver controlled storage access management, Storage Domains protect Volumes against rogue hosts in the SAN. Volumes are not visible to or accessible by any host unless a specific mapping has been done. Any other host will not have access.

• Storage Domains heterogeneous host support allows the storage array to tailor its behavior to the needs of the host operating systems. This provides each individual host the view of the storage array that it would experience if it had exclusive access to the storage.


• Storage Domains' controller-based implementation ensures data integrity: Volume access is enforced at the controller level, ensuring complete data integrity in multi-host, multi-OS environments.

• Finally, logical storage domains enable administrators to choose from a range of volumes with different characteristics to meet a server's exact needs for a given LUN.

Storage Domain terminology

Figure 8-3

Storage Domains terminology

Storage Domain: A storage domain consists of one or more volumes that can be accessed by a single host or shared among hosts (known as a host group). A storage domain is created when the first Volume is mapped to the host or host group. This volume-to-LUN mapping allows you to define what host or host group will have access to a particular Volume in your storage array. Hosts and host groups can only access data through assigned volume-to-LUN mappings.

Characteristics:

• Configuring domains manages access to Volumes.

• Hosts residing in different domains are isolated from each other. This allows attachment of multiple hosts to a single storage array, even if the hosts are running different operating systems.

• Storage Domains can be licensed in steps: 4, 8, 16, or 64 domains, so as many as 64 virtual arrays can be created on a single storage array. Each domain represents a virtual storage array and consists of one or more Volumes assigned to a host or group of hosts.

Default Storage Domain: The default storage domain is a collection of hosts and volumes that do not already belong to a defined storage domain.

Characteristics: All hosts in the Default Storage Domain share access to Volumes in the Default Storage Domain. A Volume resides in the Default Storage Domain only if it was assigned a default LUN number during Volume creation (otherwise the Volume has a state of Free, waiting to be assigned a LUN number). The Default Storage Domain includes the following:

• All host groups and hosts that do not have a volume explicitly mapped to them.

• All volumes that have a default volume-to-LUN mapping assigned.

• All automatically detected initiators.

Any volumes within the default storage domain can be accessed by all hosts and host groups within that storage domain. Creating an explicit volume-to-LUN mapping for any host or host group and volume within the default storage domain causes the management software to remove the specified host or host group and volume from the default storage domain and create a new, separate storage domain.

Host Group: A label for one or more hosts that need to share access to a Volume.

Characteristics: Define a Host Group only if you have two or more hosts that will share access to the same Volumes.

Host: A label for a host that contains one or more FC ports that are connected to the storage array.

Characteristics: A host is a computer that is attached to the storage array and accesses various Volumes on the storage array through its host ports (host adapters).

Host Ports: A host port is a physical connection on a host adapter that resides within a host. This physical connection is represented by a world wide port name (WWPN) in the storage management software.

Characteristics:


• When the host adapter only has one physical connection (host port), the terms host port and host adapter are synonymous.

• Host ports are automatically discovered by the storage management software after the storage array has been connected and powered up. A host port is the actual physical connection that allows a host to gain access to the Volumes in the storage array. Therefore, if you want to define specific volume-to-LUN mappings for a particular host and create storage domains, you must define the host's associated host ports.

• Initially, all discovered host ports belong to the Default Host Group and have access to any Volumes that were automatically assigned default LUN mappings by the controller firmware during Volume creation.

A host is identified by the WWPN of its HBA. A list of HBA WWPNs can be viewed in the Mappings View of the storage management software. This list holds all HBAs that:

• did a Fibre Channel port login into the storage array

• are not already configured as host ports

The WWPN of the server's HBA must be matched with a WWPN in the list. The following tools can be used to determine the WWPN of the HBAs in the server:

• HBA vendor tools such as SANsurfer, HBAnywhere, or EZFibre

• The HBA BIOS

• Querying the name server of the Fibre Channel switch, if you know the port number the HBA is plugged into

• Looking in the Solaris/Linux system logs for messages showing the WWPNs of discovered HBAs

Note – If you move or change a host adapter in a server, remember to remap any volume-to-LUN mappings. Access to your data will be lost until this is done.

Host Type: The type of OS or OS variant (for example, W2K or W2K Clustered) running on the host. The Host Type defines the behavior of the Volume (such as LUN reporting and error conditions).

Characteristics: The Host Type allows hosts running different operating systems to access a single storage array. A Host Type can be set to completely different operating systems (such as Solaris and Windows 2000) or to variants of the same operating system (such as Windows 2000 Clustered and Windows 2000 Non-Clustered). When a Host Type is specified, the storage array tailors the behavior of the mapped Volume to the needs of the operating system.

Logical Unit Number: A LUN is the number a host uses to access a Volume on a storage array.

Characteristics:

• After a Volume on the storage array is mapped to a host or group of hosts, it is presented to the host or host group with a Logical Unit Number (LUN). Because each host has its own LUN address space, you can use the same LUN in more than one volume-to-LUN mapping, as long as that LUN is available for use by each host within the host group. This allows the storage to present up to 256 LUNs to a single host or host group, and up to 2048 Volumes in total.

• A Volume can only be mapped to a single LUN.

• A LUN cannot be mapped to more than one host group or host.

Default volume-to-LUN mapping: This mapping defines hosts and Volumes that belong to the Default Host Group.

Characteristics: During Volume creation, you can specify that you want the controller to automatically assign a LUN to the Volume, or that you want to map the Volumes later (available only with the Storage Domains premium feature enabled). These "default" volume-to-LUN mappings can be accessed by all host groups and hosts that do not have specific volume-to-LUN mappings.

Specific volume-to-LUN mapping: This mapping defines hosts and Volumes that belong to a defined Storage Domain.

Characteristics: A specific volume-to-LUN mapping occurs when you select a defined host group or host and assign a specific logical unit number to a Volume (a volume-to-LUN mapping). This designates that only the selected host group or host has access to that particular Volume through the assigned LUN. You can define one or more specific volume-to-LUN mappings for a host group or host.

Note – Volume-to-LUN mappings are dynamic, meaning that a mapping can be created and changed at any time without the need to reboot the storage array.

Steps for creating a Storage Domain


1. Enable the Storage Domains feature - If Storage Domains is not already enabled on a storage array, a Feature Key file is needed. The Feature Key file can be created by your storage supplier by sending your supplier the Feature Enable Identifier specific to the storage array. You can obtain the Feature Enable Identifier by selecting the Administration->Licensing->List option in the SMW. After your storage supplier has sent back a Feature Key file, you can use it to enable Storage Domains.

2. Create or select the Storage Profile and Storage Pool with the appropriate characteristics for your application.

3. Create Volumes using 1-100% of the Virtual Disk.

As part of the Volume creation, specify one of the following volume-to-LUN mapping settings:

• Automatic - This setting specifies that a LUN be automatically assigned to the Volume using the next available LUN within the Default Host Group. This setting grants Volume access to host groups or hosts that have no specific volume-to-LUN mappings. If Storage Domains is disabled, this will be the default setting.

• Map later using the Mappings View - This setting specifies that a LUN not be assigned to the Volume during creation. It allows definition of a specific volume-to-LUN mapping and creation of storage domains using the Mappings View tab. The volume will reside in the Undefined Mappings node until a specific volume-to-LUN mapping is defined. If Storage Domains is enabled, choose this setting.

4. Map the volume to hosts or host groups and assign LUN numbers by defining the following:

• host groups and/or hosts

• host ports for each host

• host port identifier (WWPN)

• host type

• host port name

• volume-to-LUN mappings:
  • select a defined host group or host
  • define an additional mapping
  • select the Volume
  • select the next available LUN number

• The relationship between a host (or host group) and one or more volumes is a storage domain.

Figure 8-4

Creating storage domains

How Storage Domains work


• During volume creation, each volume is assigned a unique ID, referred to as the volume ID. When the user defines a storage domain, the CAM wizard builds a lookup table mapping a host initiator's WWPN to a LUN. When the host sends an I/O request with its initiator WWPN and the LUN number it wishes to access, the controller verifies that the request is an allowed combination by checking the lookup table. The lookup table then returns a volume ID for that LUN, and the I/O request is completed. The lookup table is stored in the DACstore region of every configured drive as well as in the controller's memory.
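A minimal sketch of the lookup just described is shown below (conceptual only; the WWPNs, volume IDs, and function name are made up for illustration, and the real table lives in controller memory and DACstore, not in a Python dictionary):

    # Hypothetical lookup table: (initiator WWPN, LUN) -> volume ID.
    domain_table = {
        ("21:00:00:e0:8b:05:05:04", 0): "volume-A",
        ("21:00:00:e0:8b:05:05:04", 1): "volume-G",
        ("21:00:00:e0:8b:1a:22:33", 0): "volume-B",
    }

    def resolve_io_request(wwpn, lun):
        """Return the volume ID if the (WWPN, LUN) pair is an allowed
        combination, or None if the host has no mapping for that LUN."""
        return domain_table.get((wwpn, lun))

    print(resolve_io_request("21:00:00:e0:8b:05:05:04", 1))  # volume-G
    print(resolve_io_request("21:00:00:e0:8b:1a:22:33", 1))  # None -> access denied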

Figure 8-5

Storage Domains use a lookup table of WWPNs to determine whether a host has access to a particular volume

• There are two types of mappings: default mapping or specific mapping. Default mapping means any Volumes in the default group can be accessed by any host attached to the storage, as long as that host is not already part of a domain. Host- or host-group-specific mapping means a given server can only see and access the Volumes in its domain.

Note – A host group or host can access Volumes with either default mappings or specific mappings, but not both.

• When using the Storage Domains wizard, there is only one volume-to-LUN mapping allowed. If more than one server needs to access a single volume, a host group should be used. All servers in a host group can access all the Volumes in that domain.

• Server clusters need to use host groups so that all the servers can share access to the same Volumes. However, servers in a host group do not necessarily need to run clustering software. Keep in mind that without file-sharing software, all servers in a host group can access the same Volumes, which can lead to data integrity issues.

• A given host can be part of a host group and have its own individual mappings.

• Each host has its own LUN address space within a domain, meaning the same LUN number can be used in multiple volume-to-LUN mappings, just not in the same domain.


What the host sees

Figure 8-6

Two volumes are mapped to Host A

If we look at what the host sees, it has a LUN number that maps to a volume on the storage array. Each host can only see the volume in its domain. For instance, host A has two LUNs that map to two volumes (shown in red). It has no idea that there is additional capacity on the storage array. The same is true for host B, which sees only its two blue volumes, and host group C, which sees only its three green volumes. Unmapped volumes can be assigned to any of the domains.


What the storage array sees

Figure 8-7

The two volumes mapped to Host A are on different Virtual Disks—and could even be on different drive technologies (A1 on FC and A3 on SATA)

From the storage array's perspective, it maps a volume ID to the world wide port name of a host adapter. It doesn't matter where the Volume resides within the storage array. The volume-to-LUN mapping implementation of Storage Domains creates valuable flexibility for the storage administrator, as any available Volume can be mapped to any attached server. So, while the individual server sees a virtual storage array that consists of only its mapped LUNs, the physical Volumes can be intermixed throughout the storage array within one or more RAID Virtual Disks. The previous diagram showed that Host A had two red Volumes that comprised its domain. On this diagram, you can see those two Volumes reside on two different RAID Virtual Disks. Volume A, which is Host A's LUN 0, is in Virtual Disk A1, and volume G, which is Host A's LUN 1, is in Virtual Disk A3.

This is a powerful feature, as it enables administrators to choose from a range of Volumes with different characteristics to meet a server's exact needs for a given LUN. Each Volume can have unique configuration settings and reside on different drive types with different RAID levels. In this example, Virtual Disk A1 could be on high-speed FC drives configured as RAID 1, and Virtual Disk A3 could be on low-cost SATA drives configured as RAID 5. This flexibility enables a range of hosts with different capacity, performance, or data protection demands to effectively share a single Sun storage array.

Storage Domains - How many domains are required?

Figure 8-8


Calculate storage domain usage


LUNs - How do you number these LUNs?

Figure 8-9

Number the LUNs

Summary of creating Storage Domains

1. Enable the Storage Domains premium feature.

2. Create volumes on the storage array (during creation, check "Map Later with Storage Domains").

3. Define the storage topology using the Mappings tab:
   • host group or hosts
   • host ports for each host
   • host type for each host port

4. Define volume-to-LUN mappings.

5. Verify mappings from the host:
   • Run the OS-native utility to rescan the Fibre Channel loop or fabric (for example, devfsadm may be necessary in Solaris; on Windows, use Disk Management->Tools->Rescan Disks).
   • The new volume should be recognized by the host (for example, in Solaris the new volume will be listed by the format command; in Windows the new volume will be listed in Disk Management).

Now the volume(s) will be ready for use by the host (or host groups).


Storage Domains summary


• Storage domains allow a single physical storage array to be shared among multiple servers running heterogeneous operating systems

• Storage domains create virtual storage arrays and control access to the hosts and volumes associated with each partition

• Storage domains can be licensed in steps: 4, 8, 16, 32, 96, or 128 partitions for the 6140, and 4, 8, 16, 32, 96, 128, 256, or 512 for the 6540 and 6x80


Knowledge check

True or false

1. A storage domain is created when a host group or a single host is associated with a volume-to-LUN mapping. True / False

2. A host group or host can access volumes with default mappings and specific mappings. True / False

3. You can not use the same LUN number in more than one volume-to-LUN mapping. True / False

4. A Default Host Group shares access to any volumes that were automatically assigned default LUN numbers. True / False

Multiple choice

5. After defining the first specific volume-to-LUN mapping for a host,
   a. Host ports must be defined
   b. The host type can no longer be changed
   c. The LUN number can not be used by other hosts in the topology
   d. The host and host ports move out of the Default host group

6. In a heterogeneous environment,
   a. Each host type must be set to the appropriate operating system during host port definition
   b. Volumes can have more than one volume-to-LUN number
   c. Hosts with different operating systems can share volumes
   d. A host can access volumes with either default mappings or specific volume-to-LUN mappings.

Customer scenario

Mr. Customer has the following servers and one storage array (6540). The servers: three W2003 servers (each with two single-ported HBAs), a Linux server (one dual-ported HBA), and a Solaris SPARC server (one dual-ported HBA).

The Finance Dept. has requested a 'disk' for storing employee expense statements. The application that accesses the employee expense statements will run on both W2003 servers with Microsoft Cluster Server software. One of the W2003 servers will be running the Exchange application, and the Exchange administrator has requested 2 'volumes': one for the database, the other for a log file. The Linux server will be used for software development and will require disk space for source code and development tools (2 volumes). The Solaris server will be running the engineering document database and will require 1 volume.

First draw a diagram showing the servers and the storage, so you and the customer have the same understanding of the requested configuration.

7. List the Host Groups that will be created:

8. List the Hosts that will be created under each Host Group:

9. List the number of Host Ports under each Host:

10. List the Host Types used for each Host Port:

11. Will the Default Host Group be empty?



12. How many domains will the customer require?

13. What needs to be done by the user or storage administrator when an HBA is replaced in one of the servers?

14. How many storage domains would you need for the configuration below?_______



Module 9

Integrated data services – Snapshot

Objectives

Upon completion of this module, you will be able to:

• Describe the benefits and application of Snapshot

• Explain how Snapshot is implemented




Data services overview

The Sun Storage 6000 array offers separate licenses for the integrated data services software features:

• Volume Snapshot - A point-in-time (PiT) image of a volume in a Sun Storage 6000 array.

• Volume Copy - A complete (byte-by-byte or block-by-block) PiT replication of one volume to another within a storage array.

• Remote Replication - A real-time copy of volumes between two storage arrays over a remote distance through an FC SAN.

These features are ideal for data protection surrounding backup, business continuance, and disaster recovery situations. All three features require a license and can be enabled or disabled as you choose.

Table 9-1    Integrated data services (7.xx)

Array     Snapshots (Per Volume)    Remote Replication Mirrors (Base)    Remote Replication Mirrors (Optional)
6140-2    8                         16                                   64
6140-4    8                         16                                   64
6540      16                        64                                   128
6x80      16                        64                                   128



Snapshot

Snapshot creates a “static” or point-in-time (PiT) image of a volume. The Snapshot volume is created almost instantaneously and appears and functions as a volume, as shown in the figure below.

Figure 9-1

Snapshot volume: Shows base, reserve and snapshot

Snapshot terminology

To better understand Snapshot, there are several terms which must be defined. This list includes:

• Base volume

• Snapshot volume

• Reserve volume

• Original data blocks


Figure 9-2

Snapshot terminology

Base Volume

Definition: The base volume is the volume from which the Snapshot is taken.

Characteristics: It must be a standard volume in your storage array, as you cannot take a Snapshot of a Snapshot. The base volume remains online and user-accessible regardless of the state of the Snapshot.

Note – Invalid base volumes include snapshot reserve volumes, snapshot volumes, mirror reserve volumes, and target volumes participating in a volume copy.

Snapshot Volume

Definition: A snapshot volume is a logical point-in-time image of another volume in the storage array. It is the logical equivalent of a complete physical copy, but is created much more quickly than a physical copy and requires less disk space. Taking a Snapshot is like taking a photograph, freezing the state of the data. The exact state is kept, while the source volume can be used again for reading and writing purposes.

Characteristics: The Snapshot is treated the same as any other volume that can be mapped to a host. The Snapshot volume has all of the characteristics of the Base volume, such as:

• Same size and RAID level as the Base volume at the time of the Snapshot

• Mappable to any host

• Can be read from and written to

• Has a unique WWN (WWD) - A Snapshot volume has a unique World Wide Device Name (WWD). This allows operating systems and applications to recognize it as an individual volume instead of as an alternative path to the Base volume

Additionally:

• Snapshots can be disabled (stopped)

• Snapshots can be re-created at a later time

• With firmware 6.1x, a maximum of 4 Snapshots per Base volume can exist

• Starting with firmware 7.xx, a license can be purchased:
  • 6140 controllers allow a maximum of 8 snapshots per base volume
  • 6540 controllers allow a maximum of 16 snapshots per base volume

• The maximum number of Snapshots per storage array is one half the total number of volumes supported by the controller model

The Snapshot is virtual, but actually consists of the Base and Reserve Volumes.

Note – Due to the dependency on the Base volume, a Snapshot should not be used for data migration or disaster recovery purposes to protect against catastrophic failure of the original volume or storage array.

Reserve Volume

Definition: The Reserve volume (also called a Reserve) is a physical volume. It is used to hold the metadata (copy-on-write map) and the original data of the blocks that have been modified on the Base volume.

Characteristics: The reserve volume:

• Requires capacity-allocation planning during volume creation, so as to retain free capacity for Reserve volumes


• Can be smaller than the Base volume (defaults to 20% of the Base; minimum 8 MB), but can be expanded with Dynamic Volume Expansion

• Some space is consumed by metadata, but the metadata is very small (192 KB), so it does not need to be taken into account when determining the size of the Reserve

• Can be expanded later via DVE (Dynamic Volume Expansion) regardless of OS. When you create a Snapshot, there is the possibility that the Reserve may need to be expanded because more modifications are made to the Base volume than originally anticipated; therefore, ensure that enough free capacity exists on the same Virtual Disk next to the Reserve Volume in order to expand it without delay

• Configurable warning/alert threshold

• Configurable response when the Reserve is full

• Reserve volumes cannot be mapped; no host I/O

• One Reserve per Snapshot

• Can reside in a different Virtual Disk from the Base

Original data blocks Definition: The original data blocks are data blocks that were on the base volume at the time the Snapshot was taken. Characteristics: Original data blocks continue to reside on the Base volume if those blocks have not been modified since the Snapshot was taken. Original data blocks are copied to the Reserve volume if those blocks on the Base volume are overwritten (modified) with new data after the Snapshot was taken. The Snapshot volume consists of original data blocks that are still on the Base volume and original data blocks that have been copied to the Reserve volume.





Snapshot - Benefits

Figure 9-3  Benefits of using snapshot

Pre-Sales benefits
Data Protection - The time and cost to back up data is a major consideration, but more important is the ability to recover data and restore it in a timely manner. A PiT image or backup is needed to protect against the most common reason to restore - user or operator error. Even sophisticated disaster recovery sites with redundantly mirrored disk arrays cannot protect against the need to go back to a point before corruption occurred. Tape is much slower than disk. Additionally, a Snapshot can be taken multiple times throughout the day, whereas tape backup is typically feasible only once per day. This allows more recent information to be recovered should an unfortunate need arise. Taking these multiple Snapshots by automated scripting means no operator intervention is required. Tape, being the least expensive medium, can be used for longer-term archives.




Application Testing - The Snapshot feature expedites application testing by utilizing the Snapshot volume in a test environment. The Snapshot is taken instantaneously and uses less disk space, thus providing an efficient data set for application development and testing. This facilitates enhanced data processing capabilities to create a competitive advantage. Upgrades and modifications can be tested on the Snapshot, which saves time compared to making full copies of the data. A disk-to-disk copy of 1 TB would take approximately 1 hour, whereas a Snapshot would be nearly instantaneous and typically take only 200 GB of storage (based on a typical configuration where the Snapshot Reserve is 20% of the Base volume).

Summary of benefits (pre-sales):
• Improves data utilization. Snapshot enables non-production servers to access an up-to-date copy of production data for a variety of applications - including backup, application testing, or data mining - while the production data remains online and user-accessible.
• Improves employee productivity by providing an immediate copy. No more waiting for large volumes of data to copy; Snapshot is nearly instantaneous.
• Protects data by providing a readily available online copy that reduces restore time.
• Reduces disk space requirements by using an innovative copy-on-write technology. The Snapshot image only requires a fraction of the original volume's disk space.
• Provides a copy to use as the source of a backup. This allows continuous processing during the backup procedure.
• Provides more rapid application development through the immediate creation of a test environment and capitalizing on the ability to write to the Snapshot image.

Technical benefits
Snapshot holds several benefits, including:
• It provides an instantaneous PiT image of the data.
• Snapshot utilizes only a small fraction of the original disk space.
• It enables quick, frequent and non-disruptive backups.




• Snapshot allows the Snapshot to be read, written and copied. The ability to write to a Snapshot is a valuable capability that opens up new techniques for creating immediate test and small data mining environments. This is also a distinguishing capability of the Snapshot feature.
• It utilizes the same high-availability characteristics as the original (Base) volume, such as RAID protection and redundant path fail-over.
• It provides placement flexibility – the Snapshot can be mapped and made accessible to any host on the SAN. Snapshot data can be made available to secondary hosts for read and write access by mapping the Snapshot volume to those hosts.
• It is integrated into the CAM software for consistent, simple management.
• It provides an easy-to-use GUI along with a command line interface for the flexibility to script Snapshot functions, such as automated backups from the copy.
• Up to four copies can be created per volume, with a maximum of 512 copies in the 6140 storage array.
• Expandable Reserve capacity with full warning and statistical information.

How does Snapshot work?
A Snapshot is a Point in Time (PiT) logical view of data that is created by saving the original data to a Reserve whenever data in the Base volume is overwritten. The technique that allows a Snapshot to be created instantaneously is the innovative "copy-on-write" technology. Essentially, the Snapshot process creates an empty Reserve that will hold original values that later change in the Base volume after the time of Snapshot creation. The Snapshot only takes as long as needed to create an empty Reserve volume and the Snapshot volume pointers - again, a nearly instantaneous creation. It is recommended that the Base volume be quiesced during the Snapshot so that a stable image of this moment in time is available. The Snapshot is actually seen by combining the Reserve of original data with the Base volume; thus, the Snapshot presents an exact copy of the data at the moment the Snapshot was taken. This copy-on-write technology enables the instantaneous nature of the Snapshot while requiring only a fraction of the Base volume disk space. This instant creation and small size compared to the original volume distinguishes a Snapshot from a full-volume copy. The full-volume copy must physically copy all of the data, which can take more than an hour for a 500 GB volume.
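To make the copy-on-write mechanism concrete, the following minimal sketch models a Base volume, its Reserve and a virtual Snapshot as plain Python dictionaries. It is only an illustration of the technique described above, not the controller firmware; the class and function names are invented for this example.

# Minimal copy-on-first-write sketch (illustrative only, not array firmware).
# Blocks are modeled as entries in plain dictionaries keyed by block number.

class SnapshotPair:
    def __init__(self, base_blocks):
        self.base = dict(base_blocks)   # Base volume: live, host-accessible data
        self.reserve = {}               # Reserve: originals saved on first overwrite
        self.snap_writes = {}           # data written directly to the Snapshot

    def write_base(self, block, data):
        # Copy-on-FIRST-write: preserve the original block in the Reserve only once,
        # and only if the Snapshot has not already overwritten that block itself.
        if block not in self.reserve and block not in self.snap_writes:
            self.reserve[block] = self.base[block]
        self.base[block] = data         # then complete the write to the Base

    def read_snapshot(self, block):
        # The Snapshot is virtual: data written to the Snapshot wins, then the
        # Reserve (original data), otherwise the unchanged block on the Base.
        if block in self.snap_writes:
            return self.snap_writes[block]
        if block in self.reserve:
            return self.reserve[block]
        return self.base[block]

    def write_snapshot(self, block, data):
        # Writes to the Snapshot land in the Reserve area; the image is then
        # no longer a pure point-in-time copy of the Base.
        self.snap_writes[block] = data

# Walk through the 11:05 - 12:00 example from the following pages.
pair = SnapshotPair(dict(enumerate("ABCDEFGH")))   # Snapshot taken at 11:05
pair.write_base(0, "Z")       # 11:30 - A is copied to the Reserve, then overwritten
pair.write_base(0, "X")       # 11:45 - re-write: no further copy-on-write needed
print(pair.read_snapshot(0))  # 'A' - served from the Reserve
print(pair.read_snapshot(2))  # 'C' - still unchanged on the Base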




The Snapshot "appears" as a volume containing the original data at the time of creation, but is actually an image seen by combining the Reserve with the original Base volume. The Reserve, which houses the original data changed after the PiT, is the only additional disk space needed for the Snapshot. This is typically 10 to 20 percent of the Base volume, and will vary depending on the amount of changes to the original data. The longer a Snapshot is active, the larger the Reserve that is needed. The CAM Snapshot wizard provides notification upon reaching a user-defined saturation point for the Reserve, thus notifying the administrator that the Reserve has reached a certain capacity limit and needs to be expanded. The default size of the Snapshot Reserve is 20 percent of the size of the Base volume. It should be noted that the Snapshot is dependent on the Base volume in order to reconstruct the PiT image. Note – Due to the dependency on the Base volume, a Snapshot should not be utilized for data migration or disaster recovery purposes in protecting against catastrophic failure of the original volume or storage array.





Examples of how Snapshot works
In order to better understand the relationship between the Base, Reserve and Snapshot volumes, consider the following examples.

Standard Read – No Snapshot
This shows a standard read I/O from a Base volume. In this example, a single Base volume exists on the storage array with 8 data blocks. At 11 am, the host issues a read request for block A to the storage array. The data block resides on the Base volume, and the read data comes directly from there.

Figure 9-4  Standard read – no snapshot





Snapshot is created
At 11:05, a Snapshot is created. When the Snapshot is "taken," the controller suspends I/O to the Base volume for a few seconds while it creates a physical Reserve volume to store the Snapshot metadata and copy-on-write data. The logical Snapshot volume is also created and is immediately available for mapping.

Figure 9-5  Snapshot is created

Notice that the Snapshot volume is identical to the Base volume at the time the Snapshot is created - in this example, data blocks A, B, C, D, E, F, G and H. No matter how much the Base volume changes after 11:05, the Snapshot volume will look the same as the Base volume did at 11:05. The Snapshot will always reflect the original data.





Read From Snapshot (first case)
The first use everyone thinks of for a Snapshot is backups, so the Snapshot needs to be readable.

Figure 9-6  Read from Snapshot

At 11:15, the host issues a read for data block A from the Snapshot volume. As mentioned previously, no physical data resides in the Snapshot volume. The Reserve volume combined with the original Base volume creates the logical Snapshot volume. So, when data is requested from the Snapshot volume, the disk array determines if the data is in the Base volume or the Reserve volume. In this case, data block A resides in the Base volume, so the read comes directly from there.





Write to base
Now at 11:30, the host issues a write to the Base volume. Blocks Z,Y are going to overwrite blocks A,B. As blocks A,B are needed for the Snapshot volume, a copy-on-write occurs, copying blocks A,B into the Reserve volume for safekeeping. Once this is done, the write of blocks Z,Y completes to the Base volume and is acknowledged to the host.

Figure 9-7  Snapshot – Write to Base




Rewrite to base
Re-write changes do not need to do anything to the Snapshot, because the original data is already in the Reserve, as shown below.

Figure 9-8  Snapshot – Rewrite to base

At 11:45, the host issues another write to the Base volume. This time block X is overwriting block Z. Snapshot is more correctly described as copy on FIRST write technology. No additional copy on write operation is needed because the original data block—block A—was moved to the Reserve when the first write to this block took place, therefore, subsequent writes to the block do not require any action. In this example, block X simply overwrites block Z, and the I/O write is acknowledged to the host.





Read From Snapshot (second case)
Reads from the Snapshot are physically served from the Base and the Reserve volumes. The metadata map in the Reserve is consulted to determine whether the data should be read from the Base, because it has not changed, or from the Reserve, because the data in the Base has been modified since the time the Snapshot was taken.

Figure 9-9  Read from Snapshot

At 12:00, the host issues a read for blocks A,B,C,D from the Snapshot volume. When we read A at 11:15 (see Figure 9-6), block A was still in its original location in the Base volume. Since then, however, it has been overwritten. So now when the host issues a read for blocks A,B,C,D to the Snapshot volume, the storage array uses the metadata map. A,B are in the Reserve volume and the data is read directly from there. C,D are still on the Base volume and have not been overwritten since the Snapshot was taken, so the Snapshot (metadata map) simply points to the original blocks still on the Base volume, and blocks C,D are read directly from the Base volume.

Write to Snapshot
If a write is performed to the Snapshot, the original data in the Reserve is overwritten, and as a result the Snapshot is no longer a PiT image of the original data. Writes to the Snapshot are stored in the Reserve, as the Snapshot is not a physical volume and therefore cannot store data.




In Figure 9-10, the host is overwriting data block D in the Snapshot volume with block M. As the Snapshot volume is not a physical volume, block M has to go somewhere. Writing to the Snapshot volume puts the data directly into the Reserve.

Figure 9-10  Write to Snapshot





Write to base (first case)
Since writes, or updates, to the Snapshot are written to the Reserve, any changes to those same blocks on the Base are not saved, because the data written to the Snapshot supersedes the point-in-time data. Once write data is issued to a Snapshot volume, it is no longer a PiT image of the Base volume, as shown below.

Figure 9-11  Snapshot – Write to base - First example




Write to base (second case)
If another write is performed to a block that was already modified (such as W), there will be no change to the Reserve, as this data is not original data. If a write is performed to a block that has NOT been modified (such as C), the copy-on-write procedure is performed again, as shown below.

Figure 9-12  Snapshot – Write to base - Second example

Disabling and recreating
With the Snapshot volume enabled, a performance impact is experienced due to the copy-on-write procedure. If the Snapshot is no longer required, it can be Disabled (stopped) and the copy-on-write penalty on the Base goes away. An example of when a Snapshot should be Disabled is when a backup completes. If the Snapshot volume is Disabled, it can be retained along with its associated Reserve volume. When it is needed again, it can be re-created using the "recreate" option, utilizing the same volumes from the previous Snapshot and taking less time. If a Snapshot has been Disabled, it can be Re-created (that is, re-snapped) later. A new point-in-time image is created in less time because a Re-create uses the existing Reserve volume definitions and parameters for Snapshot creation.





Snapshot considerations
There are several things to consider when you are creating a snapshot, including:
• Performance
• Volume failover considerations
• Handling deletion of base or snapshot repository volumes
• Maintaining the state of your volumes

Performance
The copy-on-write process consumes a portion of the available performance. Highly loaded arrays might experience performance degradation while one or more Snapshots are active. Writes in particular might be slower, because the original data has to be copied to the Reserve first. Read operations from the Snapshot might be slower than reads from the Base volume, because the metadata map in the Reserve has to be consulted first.

Volume failover
Ownership changes affect the Base and ALL of its Snapshots. The Base volume, Snapshot and Reserve are all owned by the same controller. The rules that apply to the Base volume for AVT and RDAC modes also apply to the associated Snapshots and Reserves. All "related" volumes change controller ownership as a group.

Deleting a base or Snapshot
Base, Snapshot and Reserve volumes are all associated. Each Snapshot requires its own Reserve. A Snapshot cannot exist without a Base or a Reserve. When you delete a Base volume, all Snapshots of this volume and the associated Reserves will also be deleted. When you delete a Snapshot, the associated Reserve will also be deleted.

Logically identical volumes
When using cloning operations like Snapshot and Volume Copy, the most important consideration is to ensure that the source volume is in a consistent state. By nature, a clone is an identical copy of its source volume. If the source volume is not consistent, the clone is also not consistent. An inconsistent volume might be unusable for its intended purpose.




Open applications like databases, or even mounted file systems, keep files open. Flags on the physical disks usually indicate the opened status. If we now take a Snapshot, the Snapshot will also indicate the opened state. If this had been a file system or database, the application would ask for a database check or file system check, which can easily take a couple of hours. To be fully consistent, it is preferred to bring a database into quiesce or hot backup mode. This closes the transaction logs and creates redo or recovery points. A file system can be unmounted (remove the drive letter in Windows) to make it consistent. Hosts use buffers - reserved space in the memory of the host - which act as a kind of cache space. The data used most, such as directory structures or bitmap tables, is quite often kept in buffers to improve overall disk performance. Snapshot and Volume Copy can only copy what is physically on disk, not what is stored in the host's memory.

Creating Snapshots
The following section covers things to consider when creating and managing a snapshot using CAM.

Creating a Snapshot
Prior to creating a snapshot with the Common Array Manager, it is important to plan the following aspects:
• The names of the snapshot and reserve volumes - When a snapshot is created, it must be assigned a unique name. This simplifies identification of the primary volume.
• Each snapshot has an associated reserve volume that stores information about the data that has changed since the snapshot was created. It too must have a unique name, making it easy to identify as the reserve volume of the snapshot to which it corresponds.
• The capacity of the reserve volume - To determine the appropriate capacity, calculate both the management overhead required and the percentage of change expected on the base volume.
• The warning threshold - When a snapshot volume is created, you can specify the threshold at which the management software will generate messages to indicate the level of space left in the reserve volume. By default, the software generates a warning notification when data in the reserve volume reaches 50 percent of the available capacity. The percentage of space used can be monitored on the Snapshot Details page for the snapshot.

• The method used to handle snapshot failures - When a snapshot volume is created, you can determine how the management software will respond when the reserve volume for the snapshot becomes full. The management software can do either of the following (a minimal sketch of this logic follows this list):
  • Fail the snapshot volume. In this case the snapshot becomes invalid, but the base volume continues to operate normally.
  • Fail the base volume. In this case, attempts to write new data to the primary volume fail. This leaves the snapshot as a valid copy of the original base volume.

• The virtual disk selection method - A snapshot can be created on a virtual disk as long as the virtual disk has enough capacity for the snapshot. The following options are available:
  • Automatic - The management software automatically searches for and selects a virtual disk that matches the necessary criteria. If there is none, and enough space is available, it creates a new virtual disk.
  • Create Volume on an Existing Virtual Disk - You manually select the virtual disks on which you want to create the volume from the list of all available virtual disks. Be sure that the virtual disks you select have enough capacity for the volume.
  • Create a New Virtual Disk - Creates a new virtual disk on which to create the volume. Be sure that the virtual disk that you create has enough capacity for the volume.

• The snapshot mapping option - The snapshot can be added to an existing host or host group. During snapshot creation, you can choose between the following mapping options:
  • Map Snapshot to One Host or Host Group - This option enables you to explicitly map the snapshot to a specific host or host group, or to include the snapshot in the default storage domain.
  • Do Not Map this Snapshot - This option causes the management software to automatically include the snapshot in the default storage domain.

Note – A host or host group will be available as a mapping option only if an initiator is associated with each individual host and each host is included in a host group.
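As a rough illustration of the warning-threshold and failure-handling choices planned above, the sketch below shows the kind of decision logic involved: warn once usage crosses a configurable percentage (50 percent by default) and, when the reserve is full, either fail the snapshot or fail writes to the base, depending on the chosen policy. The function and constant names are assumptions made for this sketch; this is not CAM code.

# Hypothetical sketch of reserve-threshold monitoring; not CAM's implementation.

FAIL_SNAPSHOT = "fail_snapshot"    # snapshot becomes invalid, base keeps working
FAIL_BASE_WRITES = "fail_base"     # base writes fail, snapshot stays valid

def check_reserve(used_mb, capacity_mb, warn_pct=50, policy=FAIL_SNAPSHOT):
    # Return the action implied by the current reserve usage.
    if used_mb >= capacity_mb:
        return policy                              # reserve full: apply chosen policy
    if 100.0 * used_mb / capacity_mb >= warn_pct:
        return "warn"                              # notify the admin to expand the reserve
    return "ok"

print(check_reserve(used_mb=120, capacity_mb=200))                 # 'warn' at 60%
print(check_reserve(used_mb=200, capacity_mb=200,
                    policy=FAIL_BASE_WRITES))                      # 'fail_base'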





Calculating Reserve Volume capacity
When creating a snapshot, you must specify the size of the snapshot reserve volume that will store the snapshot data and any other data that is needed during the life of the snapshot. When prompted to specify the size of the snapshot reserve volume, the size is entered as a percentage of the size of the base volume, as long as that percentage does not translate to a size of less than 8 megabytes. The capacity needed for the snapshot reserve volume varies, depending on the frequency and size of I/O writes to the base volume and how long the snapshot volume will be kept. In general, choose a large capacity for the reserve volume if you intend to keep the snapshot volume for a long period of time or if you anticipate heavy I/O activity, which will cause a large percentage of data blocks to change on the base volume during the life of the snapshot volume. Use historical performance monitoring data or other operating system utilities to help you determine typical I/O activity on the base volume. As noted earlier, when the snapshot reserve volume reaches a specified capacity threshold, a warning is given. This threshold is set at the time of the snapshot volume creation. The default threshold level is 50 percent. If you receive a warning and determine that the snapshot reserve volume is in danger of filling up before you have finished using the snapshot volume, increase its capacity by navigating to the Snapshot Details page and clicking Expand. If the snapshot reserve volume fills up before you have finished using the snapshot, the snapshot failure handling conditions specify the action that will be taken. Use the following information to determine the appropriate capacity of the snapshot reserve volume (a worked sizing example follows this list):

• A snapshot reserve volume cannot be smaller than 8 megabytes.
• The amount of write activity to the base volume after the snapshot volume has been created dictates how large the snapshot reserve volume needs to be. As the amount of write activity to the base volume increases, the number of original data blocks that need to be copied from the base volume to the snapshot reserve volume also increases.
• The estimated life expectancy of the snapshot volume contributes to determining the appropriate capacity of the snapshot reserve volume. If the snapshot volume is created and remains enabled for a long period of time, the snapshot reserve volume runs the risk of reaching its maximum capacity.
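A worked sizing example follows, assuming the 20 percent default and the 8 MB minimum stated above; the 5 percent change-rate figure is an invented assumption used only to show the arithmetic.

# Reserve sizing sketch: a percentage of the base volume, never below 8 MB.

MIN_RESERVE_MB = 8                     # minimum reserve size stated above

def reserve_size_mb(base_size_mb, reserve_pct=20):
    # Reserve capacity as a percentage of the base volume (default 20%).
    return max(MIN_RESERVE_MB, base_size_mb * reserve_pct / 100.0)

# Example: a 500 GB (512,000 MB) base volume with the default 20% reserve.
print(reserve_size_mb(512_000))                   # 102400.0 MB, i.e. 100 GB

# If monitoring suggests only about 5% of blocks will change during the
# snapshot's life (an assumed figure), a smaller reserve could be considered:
print(reserve_size_mb(512_000, reserve_pct=5))    # 25600.0 MB, i.e. 25 GB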





Creating a Snapshot
The illustration shows an overall view of creating the snapshot.

Figure 9-13  Creating Snapshot flow chart

Snapshot summary
• Snapshot is a point-in-time (PiT) logical image of a standard volume.
• Starting with firmware 7.xx:
  • 6140 controllers allow a maximum of 8 snapshots per base volume
  • 6x80 and 6540 controllers allow a maximum of 16 snapshots per base volume
• Starting with firmware 7.xx, the snapshot feature is tiered at 4, 8 or 16 snapshots (depending on the controller) per base volume.
• Snapshot can be mapped to a storage domain just like a standard volume.
• The reserve volume cannot be accessed by a host.
• The snapshot cannot be used for disaster recovery.




Knowledge check
1. A snapshot, a method for creating a point-in-time image of a volume, is immediately out of date as soon as a new write is made to the array. True or False?
2. Why is a snapshot referred to as a "point-in-time" (PiT) image?
3. Why should snapshot never be used for disaster recovery?
4. What is the maximum number of snapshots that can be created on one base volume?
5. What happens if a data block on the base volume is changed more than once after the snapshot is taken?
6. What is the difference between disabling and deleting a snapshot?








Module 10

Integrated data services – Volume Copy

Objectives
Upon completion of this module, you will be able to:
• Describe the benefits and application of Volume Copy
• Explain how Volume Copy is implemented
• Explain the functions that can be performed on a Copy Pair




Volume Copy overview
The volume copy premium feature is used to copy data from one volume, the source volume, to another volume, the target volume, in a single storage array. Volume copy creates a complete physical replication of a source volume at a suspended point in time (PiT) to a target volume.

Figure 10-1  Volume Copy




Volume Copy terminology

Figure 10-2  Volume copy terminology

To better understand volume copy, there are several terms which must be defined. This list includes:
• Source volume
• Target volume
• Copy pair

Source volume
Definition: The source volume is the volume that accepts host I/O and stores application data. When a volume copy is started, data from the source volume is copied in its entirety to the target volume. In order to maintain the data integrity of the point-in-time target, volume copy suspends writes to the source volume during the copy process. Therefore, in order to maintain normal I/O activity and ensure data availability, Volume Copy must be used in conjunction with Snapshot, where the Snapshot volume is the source volume for the volume copy. A source volume can be any of the following volume types:
• Standard volume
• Snapshot





• Base volume of a snapshot
• Target volume - You can copy one source volume to several different target volumes.

Target volume
Definition: The target volume is a standard volume that maintains a copy of the data from the source volume. The target volume will be identical to the source volume after the copy completes. Caution – A volume copy will overwrite all data on the target volume and automatically make the target volume read-only to hosts. Ensure that you no longer need the data, or have backed up the data, on the target volume before starting a volume copy. While the copy is in progress, the target volume is not available for any I/O from a host. When the copy is complete, the target volume by default will be read-only, but it can be modified by the user to be read and write accessible. The target volume must be of the same or greater capacity as the source volume, but can be of a different RAID level. A target volume can be a:
• Standard volume
• Base volume of a failed or disabled Snapshot volume
• Remote Mirror primary volume

If you choose the base volume of an active Snapshot volume as the target volume, you must disable all Snapshot volumes associated with the base volume before creating a volume copy.

Copy pair
The source volume and its associated target volume for a single volume copy are known as a copy pair. The copy pair relationship links the source and target volumes together. A copy pair can be:
• Stopped - stops the copy, but the copy pair relationship is maintained
• Re-copied - re-copies the source to the target, thereby overwriting the previous data on the target




• Removed - severs the copy pair relationship, leaving the data on the source and target intact.

Note – A maximum of eight copy pairs can have a status of “In Progress” at one time. If more than eight volume copies are created, they will each have a status of “Pending” until one of the volume copies with a status of “In Progress” completes. For example, if a ninth copy pair is defined, it will be placed in a queue until one of the existing eight copy processes completes, at which time the ninth copy process will begin.
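The eight-in-progress limit behaves like a small fixed-capacity queue: the ninth request waits in Pending until an earlier copy completes. The sketch below is a generic illustration of that queuing behavior with made-up names; it is not the array's actual scheduler.

# Illustrative scheduler for the "8 In Progress, remainder Pending" behavior.
from collections import deque

MAX_IN_PROGRESS = 8
in_progress = set()
pending = deque()

def request_copy(pair_name):
    # A new copy pair runs immediately if a slot is free, otherwise it queues.
    if len(in_progress) < MAX_IN_PROGRESS:
        in_progress.add(pair_name)
        return "In Progress"
    pending.append(pair_name)
    return "Pending"

def copy_completed(pair_name):
    # Finishing a copy frees a slot and promotes the oldest Pending pair.
    in_progress.discard(pair_name)
    if pending:
        in_progress.add(pending.popleft())

for n in range(1, 10):                       # nine requests: the ninth must wait
    print("pair-%d:" % n, request_copy("pair-%d" % n))
copy_completed("pair-1")
print("pair-9 now running:", "pair-9" in in_progress)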

Volume Copy – Benefits (pre-sales)

Figure 10-3  Volume copy benefits

There are several benefits to implementing a volume copy. These include:
• Volume Copy creates an exact point-in-time "clone" of production data that can be mapped to a separate data host for analysis. The target volume can be mapped to any host and enables data analysis, data mining and application testing to run without degrading the performance of the production volume.
• Using the target volume for backup can eliminate I/O contention to a source volume compared to using a Snapshot. If the application write activity to the production volume is heavy while the Snapshot is being backed up, the production application performance may be affected. In these instances, Volume Copy can be utilized to make a separate copy of the production data faster than the Snapshot can be transferred to tape. Once the copy is complete, the Snapshot can be deleted – removing the performance overhead of its maintenance – and the copy then backed up. This enables the production volume to sustain a performance "hit" for a minimum amount of time, while still creating a complete point-in-time copy of that volume for backup.

• Data can be backed up to and restored directly from the target volume. This enables faster backups and restoration compared to tape.
• Another benefit of Volume Copy is the ability to redistribute data or migrate data to newer, faster or larger drives (copy data from a virtual disk that uses smaller capacity drives to a virtual disk using larger capacity drives). You can even migrate volumes to a virtual disk with more drives, or a more effective RAID level.

Figure 10-4  Migrate data to larger drives and change RAID level




Volume Copy - Benefits (technical)

Figure 10-5  Technical details of volume copy

• The Volume Copy function is controller based and resides in the storage array, and therefore requires no host interaction or server CPU cycles to perform the copy, thereby minimizing the performance impact to the server.
• Eight concurrent copies can be taking place at any given time.
• Volume Copy can be configured via an intuitive GUI, or can be scripted for automation via the CLI (command line interface).
• Volume Copy is a background operation with five priority settings that define how much of the storage array's resources are used to complete a volume copy versus fulfill I/O requests (i.e., the higher the priority, the quicker the volume copy will complete, but the greater the performance impact on storage array I/O).
• The copy progress is checked every 60 seconds throughout the copy process. Interruptions while the copy is in progress (i.e., a controller reset or failover) will be recovered by continuing from the last known progress boundary.
• As long as the copy pair relationship is maintained, the target volume can be set to read-only upon copy completion so that the point-in-time clone cannot be modified.





How Volume Copy works
A Volume Copy Wizard walks you through creating a volume copy. When configuration is completed through the Wizard, the application host sends a volume copy request to the controller that owns the source volume. Data from the source volume is read and copied to the target volume. Operation in Progress icons are displayed on the source and target volumes while the volume copy is completing.

Figure 10-6  Volume Copy functionality

During a volume copy, the same controller must own both the source and target Volumes. If both Volumes do not have the same preferred controller when the volume copy starts, the ownership of the target Volume is automatically transferred to the preferred controller of the source Volume. When the Volume Copy is completed or is stopped, ownership of the target Volume is restored to its original controller owner. If ownership of the source Volume is changed during the volume copy, ownership of the target Volume is also changed.




If the storage array controllers experience a reset while a volume copy is in progress, the request to the controllers is restored during start-of-day processing and the copy will continue from the point when the controllers were reset. For example, if a volume copy was at 65% complete when a controller reset occurred, the volume copy will resume from the 65% complete point when start-of-day processing begins.
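The resume behavior described above can be pictured as a block-by-block copy loop that records a progress boundary at intervals and, after a reset, restarts from the last recorded boundary rather than from the beginning. The checkpoint interval, data structures and names below are illustrative assumptions, not controller firmware; a copy-priority setting would, in a sketch like this, govern how often the loop yields to host I/O.

# Illustrative checkpointed copy loop (not controller firmware).

def volume_copy(source, target, checkpoint, batch=4, stop_at=None):
    # Copy source blocks to target, recording a progress boundary every `batch` blocks.
    start = checkpoint.get("next_block", 0)          # resume from the last boundary
    end = len(source) if stop_at is None else stop_at
    for i in range(start, end):
        target[i] = source[i]
        if (i + 1) % batch == 0:
            checkpoint["next_block"] = i + 1         # persisted progress boundary

src = list(range(20))
dst = [None] * 20
ckpt = {}

volume_copy(src, dst, ckpt, stop_at=10)              # simulate a reset after block 9
print("resume boundary:", ckpt["next_block"])        # 8 - the last recorded checkpoint

volume_copy(src, dst, ckpt)                          # start-of-day: continue from boundary
print("copy complete:", dst == src)                  # True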

Factors affecting Volume Copy
Several factors contribute to the storage array's performance during a volume copy, including:
• I/O activity
• Volume redundant array of independent disks (RAID) level
• Volume configuration (number of drives and cache parameters)
• Volume type (snapshot volumes may take more time to copy than standard volumes)

When you create a new volume copy, you define the copy priority to determine how much controller processing time is allocated for the volume copy process and diverted from I/O activity. There are five relative priority settings. The Highest priority rate supports the volume copy at the expense of I/O activity. The Lowest priority rate supports I/O activity at the expense of volume copy speed. You can specify the copy priority:
• Before the volume copy process begins
• While it is in progress
• After it has finished (in preparation for re-copying the volume)

Volume Copy states
While creating and maintaining a volume copy, there are several states it will go through, both during and after the volume copy.





During the Volume Copy
Once the volume copy starts an operation from the source volume to the target volume, no read or write requests to the target volume are allowed. The volume copy goes from an idle to an active state, displaying either an In Progress or Pending (resources not available) status. These status conditions are displayed in the Jobs window of CAM.

Table 10-1  Volume Copy states during a Volume Copy

State        Description
In Progress  This status is displayed when data on the source volume is being read and then written to the target volume. While a volume copy has this status, the host has read-only access to the source volume, and read and write requests to the target volume will not take place until the volume copy has completed.
Pending      This status is displayed when a volume copy has been created, but array resources do not allow it to start. While in this status, the host has read-only access to the source volume, and read and write requests to the target volume will not take place until the volume copy has completed.

After the Volume Copy
After the volume copy is complete, by default the target volume automatically becomes read-only to hosts, and write requests to the target volume will be rejected.

Table 10-2  Status after Volume Copy

State          Description
Copy Complete  This status signifies that the data on the source volume has been successfully copied to the target volume. This status is accompanied by a timestamp attribute.
Copy Failed    This status is displayed when an error occurs during the volume copy. A status of Failed can occur because of a read error from the source volume, a write error to the target volume, or because of a failure on the storage array that affects the source volume or target volume. A critical event is logged in the Event Log and a Critical Alarm icon is displayed.




Volume Copy – Read/write restrictions
The following restrictions apply to the source volume, target volume and storage array:
• The source volume is available for read I/O activity only while a volume copy has a status of In Progress or Pending. Write requests are allowed after the volume copy is completed.
• A volume that is the source or target volume in another volume copy with a status of Failed, In Progress, or Pending cannot be used as a source or target volume.
• A volume with a status of Failed cannot be used as a source or target volume.
• A volume with a status of Degraded cannot be used as a target volume.

Figure 10-7  Volume copy read and write restrictions

Note – If a modification operation is running on a source volume or target volume, and the volume copy has a status of In Progress, Pending, or Failed, the volume copy will not take place. If a modification operation is running on a source or target volume after a volume copy has been created, the modification operation must complete before the volume copy can start. If a volume copy has a status of In Progress, modification operations will not be allowed.





Creating a Volume Copy
Before a volume copy is created, a target and source Volume must either already exist on the storage array or be created by the user at that point. When a volume copy is created, the data from the source Volume is written to the target Volume. To ensure that all the data is copied, the target Volume's capacity must be equal to or greater than the source Volume's capacity. After the volume copy has completed, the target Volume automatically becomes read-only to hosts, and write requests to the target Volume will not be permitted. Perform the following before starting a Volume Copy:
1. Stop all I/O activity to the source and target Volumes.
2. Unmount any file systems on the source and target Volumes.

Functions that can be performed on a copy pair
The source Volume and target Volume for a single volume copy are known as a copy pair. The Volume Details page of either the source or the target volume can be used to re-copy a copy pair, stop a copy pair with a status of In Progress, remove copy pairs (which removes the copy pair association information from the storage array but leaves the data intact on both the source and target Volumes), change the volume copy priority, and disable the target Volume's Read-Only attribute.

Recopying a volume
The Re-Copy option enables you to create a new volume copy for a previously defined copy pair that may have been Stopped, Failed, or has Completed. This option can be used for creating scheduled, complete backups of the target Volume that can then be copied to a tape drive for off-site storage. After starting the Re-Copy option, the data on the source Volume is copied in its entirety to the target Volume. Volume Copy does not support the ability to resynchronize the target with only the changes that occurred to the source after the copy was completed. The copy process is a full, block-by-block replication at a given point in time. It is not mirroring technology, which continuously updates the target.




You can also set the copy priority for the volume copy at this time. The higher priorities will allocate storage array resources to the volume copy at the expense of the storage array's performance.

Re-Copy considerations
• This option will overwrite existing data on the target Volume and make the target Volume read-only to hosts. This option will fail all snapshot Volumes associated with the target Volume, if any exist.
• Only one copy pair at a time can be selected to be re-copied.
• The operation is similar to a Snapshot re-create.
• It creates a new, full-size, point-in-time copy using the same source and target.
• The Re-Copy option is always available EXCEPT when there is already a copy pending, a copy is already in progress (the option is available when the copy has Failed), the target is a degraded Volume, or the source or target Volume is also a secondary Remote Replication Volume, an offline Volume, a failed Volume or a missing Volume.

Stopping a Volume Copy
The Stop Copy option allows you to stop a volume copy that has a status of In Progress, Pending, or Failed. Using this option on a volume copy with a status of Failed clears the Critical Alarm status displayed for the storage array in the Current Alarms of the storage management software. After the volume copy has been stopped, the Re-Copy option can be used to create a new volume copy using the original copy pair. Note – When the volume copy is stopped, all mapped hosts will have write access to the source Volume. If data is written to the source Volume, the data on the target Volume will no longer match the data on the source Volume.

Stopping a Volume Copy considerations
• Stop Copy is available when the status is Pending, In Progress or Failed.
• The operation stops, but the copy pair relationship is still maintained.





Removing Copy Pairs
The Remove Copy Pairs option allows you to remove one or more volume copy pairs. Any volume copy-related information for the source Volume and target Volume is removed from the Volume Properties and Storage Array Profile dialogs. After the volume copy is removed, the target Volume can be selected as a source Volume or target Volume for a new volume copy. Removing a volume copy also permanently removes the Read-Only attribute for the target volume. Note – If the volume copy has a status of In Progress, you must stop the volume copy before you can remove the copy pair.

Removing a Copy Pair considerations
The data on the source volume or target volume is not deleted.

Changing Copy priority
The Change Copy Priority dialog allows you to set the rate at which the volume copy completes. The copy priority setting defines how much of the storage array's resources are used to complete a volume copy versus fulfill I/O requests. There are five relative settings ranging from Lowest to Highest. The Highest priority rate supports the volume copy, but I/O activity may be affected. The Lowest priority rate supports I/O activity, but the volume copy will take longer. You can change the copy priority for a copy pair:
• Before the volume copy begins
• While the volume copy has a status of In Progress
• After the volume copy has completed, when recreating a volume copy using the Re-Copy option

Changing Copy priority considerations
• Available whenever a copy is Pending or In Progress.
• Enables resource balancing between copy and I/O.




Volume permissions
Read and write requests to the target Volume will be rejected while the volume copy has a status of In Progress, Pending, or Failed. After the volume copy has completed, the target Volume automatically becomes read-only to hosts. You may want to keep the Read-Only attribute enabled in order to preserve the data on the target Volume. Examples of when you may want to keep the Read-Only attribute enabled include:
• If you are using the target Volume for backup purposes
• If you are copying data from one array to a larger array for greater accessibility
• If you are planning to use the data on the target Volume to copy back to the base Volume in case of a disabled or failed Snapshot Volume

If you decide to allow host write access to the data on the target Volume after the volume copy is completed, use the Volume details page in CAM to disable the Read-Only attribute for the target Volume.

Volume permission considerations
• Setting target Volume permissions is not available when the copy is Pending, In Progress or Failed.
• Target permissions toggle between read and write access to target Volumes.

Note – Some operating systems may report an error when accessing a read-only device, in which case read-only access on the target Volume must be disabled in order to allow read and write access. Unix operating systems may allow access to a read-only device as long as it is mounted as a read-only device.

Volume Copy compatibility with other data services
Volume Copy can be used in conjunction with other integrated data services.





Storage domains
When a volume copy is created, the target volume automatically becomes read-only to hosts, to ensure that the data is preserved. Hosts that have been mapped to a target volume will not have write access to the volume. Any attempt to write to the read-only target volume will result in a host I/O error. If you want hosts to have read and write access to the data on the target volume, use the Volume Details page in CAM to disable the read-only attribute for the target volume.

Snapshot
In order to maintain the data integrity of the point-in-time clone, Volume Copy suspends writes to the source during the copy process. If the volume being copied is large, this can result in an extended period of time without the ability to make updates or changes. Even though the source volume does support read-only access, many operating systems still try to write to the volume when it is in a read-only mode. If this happens, the server can hang.

Figure 10-8  Copying the Snapshot

Therefore, in order to maintain normal I/O activity and ensure server availability, Volume Copy must be used in conjunction with Snapshot, where the Snapshot is the source for the volume copy.




Copying the Snapshot creates the same full point-in-time clone of the desired source volume, and it does so while I/O continues to the production volume. The process is straightforward. First, a Snapshot of the volume is created. Then Volume Copy uses the Snapshot volume as its source volume. Once the copy is complete, the Snapshot volume can be deleted. The volume for which the Snapshot is created is known as the base volume and must be a standard volume in the storage array. For the volume copy feature, the base volume of a Snapshot volume is permitted to be selected as the source volume for a volume copy. Note – If you choose the base volume of a Snapshot volume as your target volume, you must disable all Snapshot volumes associated with the base volume before you can select it as a target volume. Otherwise, the base volume cannot be used as a target volume. When you create a Snapshot volume, a Snapshot reserve volume is automatically created. The Snapshot reserve volume stores information about all the data altered since the Snapshot volume was created, and cannot be selected as a source volume or target volume in a volume copy. The Snapshot premium feature can be used in conjunction with the Volume Copy premium feature to back up data on the same storage array, and to restore the data on the Snapshot volume back to its original base volume.
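The snapshot-as-source workflow above reduces to three steps: snapshot the production volume, copy from the snapshot to the backup target, then delete the snapshot. The outline below mirrors only that order of operations; the function names are hypothetical and do not correspond to CAM's API.

# Hypothetical outline of the snapshot-then-copy backup sequence (not CAM's API).

def create_snapshot(base):
    # Nearly instantaneous: only an empty reserve and pointers are set up.
    return {"base": base, "reserve": {}}

def copy_from_snapshot(snapshot, target):
    # Full block-by-block copy of the point-in-time image to the target volume.
    for i, block in enumerate(snapshot["base"]):
        target[i] = snapshot["reserve"].get(i, block)

def delete_snapshot(snapshot):
    snapshot["reserve"].clear()     # copy-on-write overhead on the base goes away

production = list("ABCDEFGH")
backup_target = [None] * len(production)

snap = create_snapshot(production)       # 1. snapshot the production volume
copy_from_snapshot(snap, backup_target)  # 2. copy the snapshot; I/O continues on the base
delete_snapshot(snap)                    # 3. drop the snapshot once the copy is done

print(backup_target)                     # ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H']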

Remote Replication
In the Remote Replication premium feature, a mirrored volume pair is created, consisting of a primary volume on a primary storage array and a secondary volume on a secondary storage array. A primary volume participating in a Remote Replication can be selected as the source volume for a volume copy. A secondary volume participating in a Remote Replication cannot be selected as either the source or target volume for a volume copy. Note – If a primary volume is selected as the source volume for a volume copy, you must ensure that the capacity of the target volume is equal to, or greater than, the usable capacity of the primary volume. The usable capacity of the primary volume is the minimum of the primary and secondary volumes' actual capacities.




If a catastrophic failure occurs on the storage array containing the primary volume (also participating in a volume copy as a source volume), the secondary volume is promoted to the primary volume role, allowing hosts to continue accessing data and business operations to continue. Any volume copies that are In Progress will fail and cannot be restarted until the primary volume is demoted back to its original secondary volume role. If the primary storage array is recovered but is unreachable due to a link failure, a forced promotion of the secondary volume will result in both the primary and secondary volumes viewing themselves in the primary volume role. If this occurs, the original primary volume and any associated volume copies will be unaffected.

Configuring a Volume Copy
The following section covers how to create and manage a volume copy using CAM.

Configuring a Volume Copy with Common Array Manager
Before a volume copy is created, a target and source volume must either already exist on the storage array or be created by the user at that point. After the volume copy has completed, the target volume automatically becomes read-only to hosts, and write requests to the target volume will not be permitted. When creating a volume copy, be prepared to do the following:
• Select a source volume from the Volume Summary page or from the Snapshot Summary page. Note – In order for a volume to be used as a target volume, its snapshots need to be either failed or disabled.





• Select a target volume from the list of target volume candidates. Caution – Remember, a volume copy will overwrite all data on the target volume and automatically make the target volume read-only to hosts. After the volume copy process has finished, you can enable hosts to write to the target volume by changing the target volume's Read-Only attribute on the Volume Details page. Note – Because a target volume can have only one source volume, it can participate in only one copy pair as a target. However, a target volume can also be a source volume for another volume copy, enabling you to make a volume copy of a volume copy.
• Set the copy priority for the volume copy. During a volume copy, the storage array's resources may be diverted from processing I/O activity to completing a volume copy, which may affect the storage array's overall performance.

Enabling the Volume Copy feature
To enable the volume copy feature:
1. Click Sun Storage Configuration Service.
2. The Array Summary page is displayed. Click the array on which you want to use the volume copy feature.
3. The Volume Summary page for that array is displayed. In the navigation pane, click Administration > Licensing.
4. The Licensable Feature Summary page is displayed. Click Add License.
5. The Add License page is displayed. Select Volume Copying from the License Type menu.
6. Enter the version number and the key digest, and click OK.

Note – If you disable the volume copy feature, but volume copy pairs still exist, you can still remove the copy pair, start a copy using the existing copy pair, and change the setting of the read-only attribute for target volumes. However, you cannot create new volume copies.





Creating a Volume Copy
Before creating a volume copy, be sure that a suitable target volume exists on the storage array, or create a new target volume specifically for the volume copy. You can create a copy of a standard volume, a target volume, or a snapshot volume. To create a volume copy of a standard volume or a target volume:
1. From the Volume Summary page, click the name of the volume whose contents you want to copy to another volume. The volume you select must be either a standard volume, a snapshot volume, or a target volume.
2. The Volume Details page for that volume is displayed. Click Copy.
3. When prompted to continue, click OK.
4. The Copy Volume page is displayed. Select the copy priority. The higher the priority you select, the more resources will be allocated to the volume copy operation at the expense of the storage array's performance.
5. Select the target volume you want from the Target Volumes list. Select a target volume with a capacity similar to the usable capacity of the source volume to reduce the risk of having unusable space on the target volume after the volume copy is created.
6. Before starting the volume copy process:
   a. Stop all I/O activity to the source and target volumes.
   b. Unmount any file systems on the source and target volumes, if applicable.
7. Review the specified information on the Copy Volume page. If you are satisfied, click OK to start the volume copy.
8. A message confirms that the volume copy has successfully started. After the volume copy process has finished:
   a. Remount any file systems on the source volume and target volume, if applicable.
   b. Enable I/O activity to the source volume and target volume.


Recopying a Volume Copy

A volume copy can be recopied for an existing copy pair. Recopying a volume copy is useful when you want to perform a scheduled, complete backup of the target volume that can then be moved to a tape drive for off-site storage.

Caution – Recopying a volume copy will overwrite all data on the target volume and automatically make the target volume read-only to hosts. Ensure that you no longer need the data, or have backed up the data, on the target volume before recopying a volume copy.

To recopy a volume copy:

1.	Click Sun Storage Configuration Service.

2.	The Array Summary page is displayed. Click the array for which you want to recopy a volume copy.

3.	The Volume Summary page for that array is displayed. Click the name of the target volume that you want to recopy.

4.	The Volume Details page for that volume is displayed. Stop all I/O activity to the source volume and target volume.

5.	Unmount any file systems on the source volume and target volume, if applicable.

6.	Click Recopy. The management software recopies the source volume to the target volume and displays a confirmation message.

7.	Remount any file systems on the source volume and target volume, if applicable.

8.	Enable I/O activity to the source volume and target volume.

Changing the copy priority

To change the copy priority for a volume copy:

1.	From the Array Summary page, click the array for which you want to change the copy priority of a volume copy.

2.	The Volume Summary page for that array is displayed. Click the name of the volume for which you want to change the copy priority.



3.	The Volume Details page for the selected volume is displayed. In the Copy Priority field, select the copy priority you want. The higher the priority you select, the more resources will be allocated to the volume copy operation at the expense of the storage array’s performance.

4.	Click OK.

A confirmation message indicates that the change was successful.

Re-copying a volume

After starting the Re-Copy option, the data on the source volume is copied in its entirety to the target volume. Volume copy does not support resynchronizing the target with only the changes that occurred to the source after the copy was completed. The copy process is a full, block-by-block replication at a given point in time; it is not mirroring technology, which continuously updates the target. You can also set the copy priority for the volume copy at this time. Higher priorities allocate more storage array resources to the volume copy at the expense of the storage array’s performance.

There are several things to consider when performing a re-copy:

•	This option will overwrite existing data on the target volume and make the target volume read-only to hosts.

•	This option will fail all Snapshot volumes associated with the target volume, if any exist.

•	Only one copy pair at a time can be selected to be re-copied.

•	The operation is similar to a Snapshot re-create: a new full-size point-in-time copy using the same source and target.

•	The Re-Copy option is always available except when a copy is pending, a copy is already in progress (the option is available when the copy has Failed), the target is a degraded volume, or the source or target volume is also a secondary Remote Replication volume, an offline volume, a failed volume, or a missing volume.

Stopping a Volume Copy

The Stop Copy option allows you to stop a volume copy that has a status of In Progress, Pending, or Failed. Using this option on a volume copy with a status of Failed clears the Critical Alarm status displayed for the storage array in the Current Alarms of the storage management software.



After the volume copy has been stopped, the Re-Copy option can be used to create a new volume copy using the original copy pair.

Note – When the volume copy is stopped, all mapped hosts will have write access to the source volume. If data is written to the source volume, the data on the target volume will no longer match the data on the source volume.

When you stop a volume copy, the following applies:

•	Stop Copy is available when the status is Pending, In Progress, or Failed.

•	The operation stops, but the copy pair relationship is still maintained.

Removing Copy Pairs

The Remove Copy Pairs option allows you to remove one or more volume copy pairs. Any volume copy-related information for the source volume and target volume is removed from the Volume Properties and Storage Array Profile dialogs. After the volume copy is removed, the target volume can be selected as a source volume or target volume for a new volume copy. Removing a volume copy also permanently removes the Read-Only attribute for the target volume.

Note – If the volume copy has a status of In Progress, you must stop the process before you can remove the copy pair.

The data on the source volume or target volume is not deleted when you remove a copy pair.
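The relationship between a copy's status and the operations the management software permits can be summarized in a small model. The following is an illustrative sketch only, written under assumed, simplified status names; it is not CAM code, and the rules listed for the Completed and Stopped states beyond those stated in the text are assumptions made for the example.

# Illustrative sketch (not CAM code): which copy-pair operations are permitted
# for each volume copy status, per the rules described in this module.
ALLOWED_OPS = {
    "Pending":     {"stop", "change_priority"},
    "In Progress": {"stop", "change_priority"},
    "Failed":      {"stop", "recopy"},          # stopping a Failed copy clears the alarm
    "Completed":   {"recopy", "remove_pair", "set_target_read_write"},  # assumption
    "Stopped":     {"recopy", "remove_pair"},                            # assumption
}

def check(operation: str, status: str) -> None:
    """Raise if the requested operation is not valid for the given copy status."""
    if operation not in ALLOWED_OPS.get(status, set()):
        raise ValueError(f"{operation!r} is not allowed while the copy is {status}")

# Example: an In Progress copy must be stopped before the pair can be removed.
check("stop", "In Progress")                 # allowed
try:
    check("remove_pair", "In Progress")
except ValueError as err:
    print(err)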

Changing Copy Priority

The Change Copy Priority dialog allows you to set the rate at which the volume copy completes. Changing the copy priority:

•	Is available whenever a copy is Pending or In Progress.

•	Enables resource balancing between the copy and host I/O.



Volume Copy summary

Volume permissions

Read and write requests to the target volume will be rejected while the volume copy has a status of In Progress, Pending, or Failed. After the volume copy has completed, the target volume automatically becomes read-only to hosts. You may want to keep the Read-Only attribute enabled in order to preserve the data on the target volume. Examples of when you may want to keep the Read-Only attribute enabled include:

•	You are using the target volume for backup purposes.

•	You are copying data from one virtual disk to a larger virtual disk for greater accessibility.

•	You are planning to use the data on the target volume to copy back to the base volume in case of a disabled or failed Snapshot volume.

If you decide to allow host write access to the data on the target volume after the volume copy is completed, use the Volume Details page in CAM to disable the Read-Only attribute for the target volume. Consider the following when changing the volume permissions:

•	Setting target volume permissions is not available when the copy is Pending, In Progress, or Failed.

•	Target permissions toggle between read and write access to the target volume.


•	Volume Copy performs a block-by-block copy from one volume (the source) to another volume (the target) in the same storage array.

•	Up to eight volume copies can be in progress at the same time.

•	Volume Copy is a background operation with priority settings that define how much of the storage array's resources to use for the volume copy and how much to use for host I/O.

•	The source volume is available for read I/O activity while a volume copy has a status of In Progress or Pending.

•	After the volume copy starts, no read or write requests to the target volume are allowed.

•	The target volume is set to read-only by default upon copy completion.



•	Snapshot can be used with Volume Copy to allow data reads and writes to continue on the source volume during the copying process.

The access rules in this summary are illustrated in the short sketch below.
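The following is a minimal, illustrative model of these access rules, written as a sketch rather than as CAM or controller code. The function name and status strings are invented for the example, and the treatment of writes to the source volume while a copy is active (beyond the read access stated above) is an assumption.

# Illustrative sketch only (not CAM code). Status strings and the handling of
# source writes while a copy is active are assumptions for the example.
MAX_CONCURRENT_COPIES = 8   # up to eight volume copies may be In Progress at once

def io_allowed(role: str, operation: str, copy_status: str,
               target_read_only: bool = True) -> bool:
    """role: 'source' or 'target'; operation: 'read' or 'write';
    copy_status: 'Pending', 'In Progress', 'Failed', or 'Completed'."""
    if role == "source":
        if copy_status in ("Pending", "In Progress"):
            return operation == "read"      # source stays readable during the copy
        return True                         # normal access once the copy is done
    # Target volume rules:
    if copy_status in ("Pending", "In Progress", "Failed"):
        return False                        # no reads or writes to the target
    if operation == "read":
        return True                         # a completed target is readable
    return not target_read_only             # writes only after Read-Only is cleared

# Example: a write to the target right after completion is rejected until the
# Read-Only attribute is cleared on the Volume Details page.
print(io_allowed("target", "write", "Completed"))                          # False
print(io_allowed("target", "write", "Completed", target_read_only=False))  # True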


Knowledge check

1.	Volume Copy source and target volumes can have different RAID levels and configurations. True / False

2.	During the copy process, controller A can be the preferred owner of the source volume and controller B can be the preferred owner of the target. True / False

3.	Reads and writes can continue to the source volume during a volume copy. True / False

4.	What volumes are included in a “copy pair”?

5.	What is the maximum number of copy pairs that can be in progress at one time?

6.	Why would you want to change the copy priority?

7.	Explain why using snapshot with volume copy is a best practice.



Module 11

Integrated data services — Remote Replication

Objectives

Upon completion of this module, you will be able to:

•	Describe the benefits and applications of Remote Replication

•	Explain how Remote Replication is implemented

•	Differentiate between synchronous and asynchronous replication modes


Remote Replication overview

Remote Replication provides the ability to maintain synchronous or asynchronous, online, real-time copies of data between two Sun Storage 6000 arrays over a remote distance. Remote Replication is ideally suited for disaster recovery. In the event of a disaster, all data is mirrored to an alternate site, which comprises storage components and workstations. From a business continuance perspective, critical data can be mirrored to a remote location to enable the continuity of critical business activities such as billing, ordering, and production.

When a disaster occurs at one site, the secondary or backup site takes over responsibility for computer services. Users and hosts that were previously mapped to the primary storage array can then have access to the secondary storage array. Essentially, this supports a Business Continuity and Disaster Recovery (BCDR) plan in which a robust business continuance strategy keeps essential services operational during and after a failure or disaster.

Figure 11-1	Business continuance strategy


Note – The terms “local” and “remote” are relative. Support for cross mirrors between storage arrays means a given array can be considered both local and remote with both primary and secondary volumes.

Benefits of Remote Replication

Remote Replication offers the following benefits:

•	Disaster recovery – Remote Replication allows the storage system to replicate critical data at one site to another storage system at another site. Data transfers occur at Fibre Channel speeds, providing an exact mirror duplicate at the remote secondary site. If the primary site fails, mirrored data at the remote site is used for data host failover and recovery. Operations can then be shifted to the remote mirror site for continued operation of all services normally provided by the primary site.

•	Data vaulting and data availability – Remote Replication allows data to be sent off site, where it can be protected from hardware failures and other threats. The off-site copy of the data can then be used for testing, or backed up without interrupting critical operations at the primary site.

•	High-performance remote copy – Remote Replication provides a complete copy of data on a second storage system for use in application testing. This removes the processing burden from the original array with no impact on the host server. The secondary data host and storage system simply break the mirror, use the copy, and resynchronize for the next testing cycle.

•	Two-way data protection – Remote Replication allows two storage systems to back each other up by mirroring critical volumes on each storage array to volumes on the other storage array. This allows each array to recover data from the other array in the event of a service interruption.


Remote Replication terminology

Figure 11-2	Remote Replication terminology

To better understand remote replication, there are several terms that must be defined. This list includes:

•	Primary volume

•	Secondary volume

•	Mirror reserve volume

•	Replication set

•	Synchronous mirroring

•	Asynchronous mirroring

•	Asynchronous mirroring with write consistency

Primary volume

The volume residing in the primary or local storage array is the primary volume. The primary volume accepts host I/O and stores application data. The data on a primary volume is replicated to the secondary volume.



When a mirror pair is first created, data from the primary volume is copied in its entirety to the secondary volume. This process is known as full synchronization and is directed by the controller that owns the primary volume. During a full synchronization, the primary volume remains fully accessible for all read and write host I/O.

Secondary volume

The volume residing in the secondary or remote storage array is the secondary volume. This volume is used to maintain a mirror (or copy) of the data on its associated primary volume. The controller that owns the secondary volume receives remote writes for the volume from the controller that owns the primary volume. The controller that owns the secondary volume does not accept host write requests.

The secondary volume can be mapped to a host for use in disaster recovery situations; however, only read host I/O will be allowed. The secondary volume remains read-only to host applications while mirroring is underway. The secondary volume can be used for backups and analysis, so this capacity is not wasted while waiting for a disaster to occur. In the event of a disaster or catastrophic failure of the primary volume, a role reversal can be performed to promote the secondary volume to a primary role. Hosts will then be able to access the newly promoted volume and business operations can continue.

The secondary volume must be of equal or greater size than the primary. RAID level and drive type do not matter. The secondary volume can also be the base volume for a Snapshot.

Mirror reserve volume

A mirror reserve volume is a special volume in the storage system created as a resource for the controller to store mirroring information, such as specifics about remote writes that have not yet been written to the secondary volume. The controller can use this information to recover from controller resets and accidental powering-down of the storage array.

Two mirror reserves are required per storage array, one for each controller. Each mirror reserve volume is 128 MB (256 MB total per storage array). Unlike Snapshot reserves, mirror reserves are not required for each mirrored pair, because actual read/write data is not stored in the mirror reserve. The delta log and the FIFO log are kept in the mirror reserve.



The delta log is used to track changes to the primary volume that have not yet been replicated to the secondary volume. If an interruption occurs in the communication between the two storage arrays, the delta log can be used to re-synchronize the data between the secondary and primary volumes. The delta log is a bitmap (maximum 1 million bits per mirror), where each bit represents a section of the primary volume that was written by the host but has not yet been copied to the secondary volume. The number of blocks represented by a single bit is computed based on the usable capacity of the primary volume. The minimum amount of data represented by a single bit is 64 KB, that is, 128 512-byte blocks. For example, for a 2 TB volume, each bit represents a data range of about 2 MB (a worked example follows Figure 11-3).

The FIFO log is used during Write Consistency mirroring mode to ensure writes are completed in the same order on both the primary and secondary volumes.

Figure 11-3	Replication bitmap
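As a quick worked example of the granularity described above, the arithmetic below derives the data range covered by one delta-log bit from the primary volume size. This is simple illustrative arithmetic under the stated limits (1 million bits maximum, 64 KB minimum per bit), not the controller's actual rounding rules.

# Worked example: data range represented by one delta-log bit.
BLOCK_SIZE = 512                    # bytes per block
MAX_BITS   = 1_000_000              # maximum delta-log bits per mirror
MIN_RANGE  = 128 * BLOCK_SIZE       # a bit never covers less than 64 KB (128 blocks)

def bytes_per_bit(primary_volume_bytes: int) -> int:
    """Approximate data range represented by one delta-log bit."""
    return max(MIN_RANGE, -(-primary_volume_bytes // MAX_BITS))   # ceiling division

# A 2 TB primary volume: each bit tracks roughly 2 MB of the volume.
print(bytes_per_bit(2 * 1024**4) / 1024**2, "MB per bit")          # ~2.1 MB

# A small 20 GB volume never goes below the 64 KB minimum granularity.
print(bytes_per_bit(20 * 1024**3) // 1024, "KB per bit")           # 64 KB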

Replication set

When you create Remote Replication, a mirrored pair is created that consists of one primary volume at a local storage array and one secondary volume at a remote storage array. A replication set has the following characteristics:


•	A volume can only belong to one mirrored pair at any given time; that is, a single primary volume cannot have two secondary volumes.

•	A mirror pair (the mirror relationship) is on a volume-per-volume basis, not on a file basis.

•	Only standard volumes may be included in a replication set.

•	With 6.xx firmware, a maximum of 32 replication sets are permitted on each storage array.

•	With 7.xx firmware, a maximum of 128 replication sets for the 6540 and 64 replication sets for the 6140 are permitted.

•	The primary volume is the volume that accepts host I/O.



•	When the replication set is first created, the controller that owns the primary volume copies all of the data from the primary volume to the secondary volume. This is a full synchronization.

•	Both volumes in a mirror pair must be owned by the same controller in each storage array. Volume ownership is determined by the owner of the primary volume. An ownership change on the primary volume will automatically cause a subsequent ownership change on the associated secondary volume on the next I/O. AVT and failover controller ownership change requests to the secondary volume will be rejected.

Figure 11-4	Replication set


Synchronous mirroring

Figure 11-5	Synchronous mirroring

A write I/O from a host must be written to both the primary and secondary volumes before the I/O is reported as complete. When the controller owner of the primary volume receives a write request from a host, the controller first logs the information about the write to the mirror reserve, then writes the data to the primary volume. The controller then initiates a remote write operation to copy the affected data blocks from the primary to the secondary volume. After the host write request has been written to the primary volume and the data has been successfully copied to the secondary volume, the controller removes the log entry from the mirror reserve and sends an I/O completion status back to the data host. This mirroring mode is called synchronous because the controller does not send the I/O completion to the host until the data has been copied to both the primary and secondary volumes.

When a read request is received from a host system, the controller that owns the primary volume handles the request normally. No communication takes place between the primary and secondary storage arrays.

Synchronous mirroring provides continuous mirroring between primary and secondary volumes to ensure absolute synchronization. Application performance is impacted because an I/O is not complete until it has made the round-trip journey to the secondary storage array.
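The cost of that round trip can be estimated with some rough, back-of-the-envelope arithmetic. The figures below (local write time, link round-trip time, remote write time) are assumed values chosen only for illustration, not measured numbers for any particular configuration.

# Rough, illustrative arithmetic: why synchronous mirroring adds the inter-site
# round trip to every write. All input values below are assumptions.
def sync_write_latency_ms(local_write_ms: float, link_round_trip_ms: float,
                          remote_write_ms: float) -> float:
    """A synchronous write is not acknowledged to the host until the data has
    been written locally and the remote write has completed."""
    return local_write_ms + link_round_trip_ms + remote_write_ms

# Example: 1 ms local write, ~2 ms round trip over ~160 km of fibre, 1 ms remote write.
latency = sync_write_latency_ms(1.0, 2.0, 1.0)
print(f"per-write latency: {latency} ms")                     # 4.0 ms
print(f"single-threaded write rate: {1000 / latency:.0f} writes/s")   # ~250 writes/s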


Asynchronous mirroring

Figure 11-6	Asynchronous Remote Replication

Host write requests are written to just the primary volume before the controller sends an I/O completion status back to the host system, regardless of when the data is successfully copied to the secondary storage array. The asynchronous write mode offers faster I/O performance but does not guarantee that the copy has been successfully completed before processing the next write request.

In asynchronous mirroring, the primary storage array does not wait for the I/O to complete to the secondary storage array before sending an I/O completion status to the server. Therefore, there can be multiple outstanding I/Os to the secondary storage array. Remote Replication supports up to 128 outstanding I/Os per mirror pair. After the 128th I/O has been issued to the secondary volume, the primary volume suspends any new I/Os until one of the outstanding I/Os to the secondary volume has completed and freed up space in the queue for pending I/Os.

Asynchronous mirroring offers the following benefits:

•	Queues remote writes to offer faster host I/O performance, thereby improving response to applications using the primary volume.

•	Can effectively replicate over longer distances, since longer latency times are acceptable.

•	Allows the secondary volume to fall behind during peak times.



The following are things to consider when dealing with an asynchronous mirror:

•	The remote site may not have all of the “latest-greatest” data.

•	Non-peak times are needed for the secondary volume to catch up with the primary volume.

•	The maximum number of outstanding write requests is 128 per mirror.

Asynchronous Remote Replication with Write Consistency

Figure 11-7	Preserved write order

Write consistency is a configuration option that ensures writes to the remote storage array complete in the same order as on the local storage array. This method of remote replication is critical for maintaining data integrity in multi-volume applications, such as databases, by eliminating out-of-order updates at the remote site that can cause logical corruption. The write consistency option is available for any primary and secondary volumes participating in an asynchronous remote replication relationship.

When asynchronous remote replication mode is selected, write requests to the primary volume are completed by the controller without waiting for an indication of a successful write to the secondary storage array. As a result, write requests are not guaranteed to be completed in the same order on the secondary volume as they are on the primary volume. If the order of write requests is not retained, data on the secondary volume may become inconsistent with the data on the primary volume and could jeopardize any attempt to recover data if a disaster occurs on the primary storage array.



When the write consistency option is selected for multiple volumes on the same storage array, the order in which data is synchronized is preserved. Selecting the write consistency option for a single mirror pair does not change anything, because the process by which data is replicated does not change; more than one mirror pair must have the write consistency option selected for the replication process to change.

When multiple replication pairs exist on the same storage arrays and have been configured for asynchronous mirroring with write consistency, they are considered to be an interdependent group known as a write consistency group. All mirror pairs in the write consistency group maintain the same order when sending writes from the primary volume to their corresponding secondary volume. The data on the secondary volumes cannot be considered fully synchronized until all mirror pairs in the write consistency group are synchronized. If one mirror pair in a write consistency group becomes unsynchronized, all of the mirrored pairs in the write consistency group become unsynchronized, and any write activity to the remote site is prevented to protect the write consistency of the remote data set.

Implementing an asynchronous mirror with write consistency maintains data integrity in multi-volume databases. There are some things to consider, however:

•	Asynchronous mirroring with the write consistency option will have decreased performance compared to plain asynchronous mirroring, because I/Os for all the mirror pairs in the write consistency group are serialized.

•	There is only one write consistency group per storage array.


Summary of Remote Replication modes

Figure 11-8	Asynchronous notes

The following table highlights the remote replication modes; a short sketch of the three write paths follows the table.

Table 11-1	Summary of Remote Replication modes

Synchronous:
1.	Host issues write to primary volume
2.	Primary controller adds entry to metadata log
3.	Write to primary volume
4.	Copy to secondary volume
5.	Notify host, write is complete
6.	Entry removed from metadata log

Asynchronous:
1.	Host issues write to primary volume
2.	Primary controller adds entry to metadata log
3.	Write to primary volume
4.	Notify host, write is complete
5.	Copy to secondary volume
6.	Entry removed from metadata log

Preserved Write Order:
1.	Host issues write to primary volume
2.	Primary controller adds entry to metadata log
3.	Write to primary volume
4.	Primary controller moves entry to the FIFO log
	a.	Read first entry from FIFO log
	b.	Copy to secondary volume
	c.	Remove first entry from FIFO log
5.	Notify host, write is complete
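The same three write paths can also be expressed as a small code sketch. This is an illustrative model only, not controller firmware: the helper functions (write_local, write_remote, the log object) are placeholders, and the exact point at which the host is acknowledged in the asynchronous variants is simplified.

# Illustrative sketch of the three write paths in Table 11-1 (not firmware code).
from collections import deque

MAX_OUTSTANDING = 128    # asynchronous mode: outstanding remote writes per mirror pair

def synchronous_write(data, write_local, write_remote, log):
    entry = log.add(data)           # 2. add entry to the metadata log
    write_local(data)               # 3. write to the primary volume
    write_remote(data)              # 4. copy to the secondary volume
    log.remove(entry)               # 6. remove the entry from the metadata log
    return "complete"               # 5. notify the host only after the remote copy

def asynchronous_write(data, write_local, write_remote, log, outstanding):
    entry = log.add(data)
    write_local(data)
    if len(outstanding) >= MAX_OUTSTANDING:
        outstanding.popleft()()     # wait for an earlier remote write to finish
    outstanding.append(lambda: (write_remote(data), log.remove(entry)))
    return "complete"               # host is acknowledged before the remote copy runs

def write_consistent_write(data, write_local, log, fifo):
    entry = log.add(data)
    write_local(data)
    fifo.append((data, entry))      # 4. move the entry to the FIFO log
    return "complete"

def drain_fifo(fifo, write_remote, log):
    while fifo:                     # 4a-4c: always replicate the oldest entry first
        data, entry = fifo.popleft()
        write_remote(data)
        log.remove(entry)

# Tiny demonstration with stub helpers: three writes replicated strictly in order.
class _Log:
    def __init__(self): self.entries = set()
    def add(self, data): self.entries.add(id(data)); return id(data)
    def remove(self, entry): self.entries.discard(entry)

log, fifo = _Log(), deque()
for block in ("w1", "w2", "w3"):
    write_consistent_write(block, lambda d: None, log, fifo)
drain_fifo(fifo, lambda d: print("replicated", d), log)   # prints w1, w2, w3 in order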


Technical features of Remote Replication

The following remote replication features are available:

•	Synchronous, asynchronous, and write-order-consistency mirroring modes enable administrators to choose the replication method that best meets protection, distance, or performance requirements.

•	Dynamic mode switching without suspending the mirror accommodates changing application and bandwidth requirements.

•	Suspend/resume with delta resynchronization reduces the vulnerability associated with re-establishing the mirror.

•	Without interrupting the normal mirroring from the local to the remote site, Remote Replication provides read-only and Snapshot access to the secondary volume. This enables the remote data to be used prior to a disaster (for backup, vaulting, data mining, application testing, and so on) without sacrificing protection of the primary site data.

•	The storage-based implementation has no host server or application overhead, for high performance.

•	Multiple remote arrays can mirror to a single array for centralized data protection, mining, or backups.

•	Cross-mirroring data between storage arrays protects the data on each storage array.

•	Remote Replication is a premium feature that is fully integrated into the storage management software for a single point of control for all storage administration and replication needs.

•	User-selectable synchronization priority controls the impact of data transfers on application performance.

•	Managed by the controllers, Remote Replication is transparent to the data host and applications.

•	Once replication is established, data synchronization begins. Data is copied to the secondary volume in the background. After synchronization, online replication continues.


Remote replication distances

Figure 11-9	Remote replication distances

The replication distances supported between storage arrays participating in a mirror relationship are governed by the distance limitations of the Fibre Channel standard. The following distances have been tested using an FC fabric in conjunction with CNT routers:


•	Synchronous – 100 miles (160 km)

•	Asynchronous – 3200 miles (5150 km)

•	Write Consistency – 800 miles (1285 km)


Configuring data replication with CAM

Figure 11-10 Activating remote replication

Installing the license for the Sun Storage Data Replicator software premium feature on an array enables data replication for that array only. Because two arrays participate in a replication set, you must install a license on both arrays that you plan to have participate in a replication set.

Note – The array dedicates Fibre Channel (FC) port 4 on each controller for use with the Sun Storage Data Replicator software premium feature. Before enabling data replication on an array, you must ensure that FC port 4 on each controller is not currently in use. If it is in use, you must move all connections from FC port 4 to FC port 1, 2, or 3.

To enable data replication on an array:

1.	Click Sun Storage Configuration Service.

2.	The Array Summary page is displayed. Click the array on which you want to enable data replication.

3.	The Volume Summary page for that array is displayed. In the navigation pane, click Administration > Licensing.

4.	The Licensable Feature Summary page is displayed. Click Add License.

5.	The Add License page is displayed. Select Sun StorEdge Data Replicator Software from the License Type menu.



6.	Enter the version number and the key digest, and click OK.

Activating and deactivating data replication

Activating the Sun Storage Data Replicator software premium feature prepares the array to create and configure replication sets. After data replication is activated, the secondary ports for each of the array’s controllers are reserved and dedicated to data replication. In addition, a replication reserve volume is automatically created for each controller in the array. Activating the feature does the following:

•	Reserves the last host port on each controller for mirroring operations – Remote Replication requires a dedicated host port between storage systems for mirroring data. After Remote Replication has been activated, one Fibre Channel host-side I/O port on each controller is solely dedicated to mirroring operations. Host-initiated I/O operations are not accepted by the dedicated port, and any requests received on this dedicated port are only accepted from another controller participating in the mirror relationship. Controller ports dedicated to Remote Replication must be attached to a Fibre Channel fabric environment with support for the Directory Service and Name Service interfaces.

•	Creates the mirror reserves – When you activate Remote Replication on the storage system, you create two mirror reserve volumes, one for each controller in the storage system. During this process you have the option to decide whether the mirror reserve volumes will reside on free capacity in an existing virtual disk or in a newly created virtual disk. Because of the critical nature of the data being stored, the RAID level of the mirror reserve volumes must not be RAID 0 (data striping with no redundancy). Each mirror reserve volume is a fixed size of 128 MB. An individual mirror reserve volume is not needed for each mirror pair.

Caution – Before activating Remote Replication, verify that creating the replication reserves will not exceed the volume limits of the storage array.


Note – The replication reserve volumes require a total of 256 MB of available capacity on an array. The two replication reserve volumes are created with a size of 128 MB, one for each controller.

If no replication sets exist and the Sun Storage Data Replicator software premium feature is no longer required, you can deactivate data replication in order to re-establish normal use of the dedicated ports on both storage arrays and delete both replication reserve volumes.

Note – You must delete all replication sets before you can deactivate the premium feature.

To activate or deactivate the Sun Storage Data Replicator software premium feature:

1.	Click Sun Storage Configuration Service.

2.	The Array Summary page is displayed. Click the array containing the primary volume in the data replication set.

3.	The Volume Summary page for that array is displayed. In the navigation pane, click Administration > Licensing.

4.	The Licensable Feature Summary page is displayed. Click Replication Sets.

5.	The Licensable Feature Details - Replication Sets page is displayed. Click Activate or Deactivate, as appropriate.

A confirmation dialog box indicates success or failure.

Disabling data replication

When data replication is in the disabled/activated state, previously existing replication sets can still be maintained and managed; however, new data replication sets cannot be created. When in the disabled/deactivated state, no data replication activity can occur.

To disable data replication:

1.	Click Sun Storage Configuration Service.



2.	The Array Summary page is displayed. Click the array on which you want to locate the primary volume in the data replication set.

3.	The Volume Summary page for that array is displayed. In the navigation pane, click Administration > Licensing.

4.	The Licensable Feature Summary page is displayed. Click the check box to the left of Replication Sets. This enables the Disable button.

5.	Click Disable.

Configuring the hardware for data replication

To configure remote (data) replication for the Sun Storage 6000 arrays, verify the following:

•	Confirm that you are able to manage each array that will be part of the remote replication with its appropriate host through the array’s IP address(es).

•	Each storage array that will be part of a remote replication set should be on a Fibre Channel switch. If there are more arrays on the switch than will be in that specific replication set, you must zone the switch.

Set up the hardware

Remote Replication requires two storage arrays and an FC switch. The FC switch provides a Name Service function so storage arrays can identify and access other storage arrays on the SAN. The FC switch also provides diagnostics on the FC link between the local and remote sites.



The figure below shows the hardware ports for Remote Replication. The last host port on each controller is dedicated to Remote Replication (port 4 or 8 on the 6x80 [depending on how many host cards are installed], port 4 on the 6540 and 6140, and port 2 on the 6140-2).

Figure 11-11 Dedicated Remote Replication ports

Caution – The last port on each controller is dedicated to the replication function once Remote Replication is activated. Any hosts connected to this port will be logged out. Use this last port to connect the array to the SAN for Remote Replication.

To configure the remote replication environment:

•	Direct-attach the management host to the storage array using any of ports 1 through 3. Port 4 is dedicated to remote replication.

•	Using Fibre Channel cables, connect port 4 from each controller, controller A and controller B, to the Fibre Channel switch. Do this for each controller module in the replication set.



•	Additional zones can be configured to group the controller A connections from the arrays together and the controller B connections from the arrays together, further isolating group A from group B.

Creating replication sets

Figure 11-12 Creating replication sets

Before any mirror relationships can be created, volumes must exist at both the primary and secondary sites. If a primary volume does not exist, one must be created on the primary storage system. If a secondary volume does not exist, one must be created on the secondary storage system. Consider the following when creating the secondary volume:

•	The secondary volume must be of equal or greater size than the associated primary volume.

•	The RAID level and drive type of the secondary volume do not have to be the same as the primary volume.

When adequate volumes exist at both sites, mirror relationships can be created using the Create Replication Set wizard. Before creating the replication set:

•	Stop all I/O activity and unmount any file systems on the secondary volume. Do this just before creating the replication set.



•	Log in to the array using the storage user role.

The Create Replication Set wizard enables you to create a replication set, either standalone or as part of the consistency group. To create a replication set:

1.	Click Sun Storage Configuration Service.

2.	The storage array Summary page is displayed. Click the name of the storage array containing the primary volume that you want to replicate to the secondary volume.

3.	The Volume Summary page is displayed. Click the name of the primary volume that you want to replicate to the secondary volume.

4.	The Volume Details page for the selected volume is displayed.

Note – You cannot replicate a volume that is already in a replication set.

5.	Click Replicate.

6.	The Create Replication Set wizard is displayed. Follow the steps in the wizard. The Create Replication Set wizard also allows you to include the new replication set in the consistency group, if desired.

When creating the replication set, the array copies all data from the primary volume to the secondary volume, overwriting any existing data on the secondary volume. If replication is suspended, either manually or due to an array or communication problem, and then resumed, only the differences in data between the volumes are copied.

When creating a replication set, you have the option to select the Synchronization Priority Level. You can choose from five synchronization priorities for the primary volume, ranging from lowest to highest, that determine how much priority the full synchronization receives relative to host I/O activity and, therefore, how much of a performance impact there will be. The following guidelines roughly approximate the differences between the five priorities; note that volume size can cause these estimates to vary widely. A small worked example follows the note below.

•	A full synchronization at the Lowest Synchronization Priority Level will take approximately eight times as long as a full synchronization at the Highest Synchronization Priority Level.



•	A full synchronization at the Low Synchronization Priority Level will take approximately six times as long as a full synchronization at the Highest Synchronization Priority Level.

•	A full synchronization at the Medium Synchronization Priority Level will take approximately three and a half times as long as a full synchronization at the Highest Synchronization Priority Level.

•	A full synchronization at the High Synchronization Priority Level will take approximately twice as long as a full synchronization at the Highest Synchronization Priority Level.

Figure 11-13 Synchronization priorities

The Synchronization Priority Level of a mirror relationship defines the amount of array resources used to synchronize the data between the primary and secondary volumes of a mirror relationship. If the highest priority level is selected for a mirror relationship, the data synchronization uses a high amount of array resources to perform the full synchronization, but may decrease performance for host I/O, including other mirror relationships. Conversely, if the lowest synchronization level is selected, there is less impact on overall array performance, but the full synchronization may be slower.

Note – Use the highest replication synchronization priority that your applications will permit. Lower priorities let applications run faster but slow the synchronization rate; higher priorities speed up synchronization but slow the applications.
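The worked example below simply applies the rough multipliers listed above to an assumed baseline; the baseline figure is invented for illustration only, and real synchronization times vary widely with volume size and load.

# Worked example using the approximate multipliers above (illustrative only).
PRIORITY_MULTIPLIER = {
    "highest": 1.0,
    "high":    2.0,
    "medium":  3.5,
    "low":     6.0,
    "lowest":  8.0,
}

def estimated_full_sync_hours(baseline_hours_at_highest: float, priority: str) -> float:
    """Scale an assumed Highest-priority baseline by the guideline multiplier."""
    return baseline_hours_at_highest * PRIORITY_MULTIPLIER[priority]

# If a full synchronization takes about 2 hours at the Highest priority,
# the same synchronization at Medium priority takes roughly 7 hours.
print(estimated_full_sync_hours(2.0, "medium"))   # 7.0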


Note – An alternative method of creating a replication set is to go to the Replication Set Summary page and click on the New button. In this case, an additional step in the wizard prompts you to filter and select the primary volume from the current array.

What happens when an error occurs?

Figure 11-14 Communication link interruption

When processing write requests, the primary controller may be able to write to the primary volume, but a link interruption prevents communication with the secondary volume. In this case, the remote write cannot complete to the secondary volume, and the primary and secondary volumes are no longer correctly mirrored. The primary controller transitions the mirror pair to an unsynchronized status and sends an I/O completion to the primary host. The primary host can continue to write to the primary volume, but remote writes will not take place.



When connectivity is restored between the primary volume and the secondary volume, a resynchronization will take place, either automatically or manually, depending on which method you chose when setting up the mirror pair. During resynchronization, only the blocks of data that have changed on the primary volume during the link interruption are copied to the secondary volume. After the resynchronization begins, the mirrored pair will transition from an unsynchronized status to a synchronization-in-progress status.

Figure 11-15 Resynchronization options

The primary controller will also mark the mirror pair as unsynchronized when a volume error on the secondary prevents the remote write from completing. For example, an offline or a failed secondary volume can cause the mirror pair to become unsynchronized. When the volume error is corrected (the secondary volume is placed online, or recovered to an optimal status), a synchronization (automatic or manual) is required, and the mirror pair transitions to synchronization in progress.
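Delta-based resynchronization can be pictured with a small sketch. The code below is illustrative only, assuming a simple list-of-bits delta log and a placeholder copy_region helper; it is not controller firmware, but it shows why only the changed regions, rather than the whole volume, are copied after an interruption.

# Illustrative sketch: resynchronization copies only the regions whose
# delta-log bits were set while the mirror was unsynchronized or suspended.
def mark_dirty(delta_log: list, lba: int, blocks_per_bit: int) -> None:
    """Record a primary-volume write that could not be replicated."""
    delta_log[lba // blocks_per_bit] = 1

def resynchronize(delta_log: list, blocks_per_bit: int, copy_region) -> int:
    """Copy only the changed regions to the secondary, then clear the log."""
    copied = 0
    for bit, dirty in enumerate(delta_log):
        if dirty:
            copy_region(bit * blocks_per_bit, blocks_per_bit)   # start LBA, length
            delta_log[bit] = 0
            copied += 1
    return copied

# Example: three writes landed in two distinct regions during a link outage,
# so only two regions (not the whole volume) are copied when the link returns.
log = [0] * 16
for lba in (10, 11, 5000):
    mark_dirty(log, lba, blocks_per_bit=4096)
print(resynchronize(log, 4096, lambda start, length: None))    # 2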


Suspend and resume

Figure 11-16 Suspend and resume

A mirror pair can be suspended to stop data transfer between the primary and secondary volumes. When a mirror pair is in a suspended state, no attempt is made to contact the secondary volume. Any data that is written to the primary volume while the mirror pair is suspended is logged in the mirror reserve and is automatically written to the secondary volume when the mirror relationship is resumed. A full synchronization is not required.

A mirror pair can be resumed to restart data transfer between a primary volume and a secondary volume participating in a mirror relationship, after the mirror has been suspended or unsynchronized. After the mirror pair is resumed, only the regions of the primary volume known to have changed since the mirror pair was suspended are written to the secondary volume. A mirror that was either manually suspended or stopped due to an unplanned communication error does not need to restart the lengthy process of establishing the mirror; when the mirror is resumed, only the data blocks written to the primary volume while the mirror was suspended are copied to the secondary volume. This delta resynchronization process is user defined, initiated either as an operator command or automatically when communication is restored. The suspend-and-resume feature works in conjunction with major database solutions to extend backup and recovery best practices for enhanced business continuity.

Suspending and resuming data replication

To suspend or resume data replication in an existing replication set:



1.	Click Sun Storage Configuration Service.

2.	The Array Summary page is displayed. Click the name of the array containing the replication set for which you want to suspend or resume replication.

3.	The Volume Summary page is displayed. Click the Replication Sets tab.

4.	The Replication Set Summary page is displayed. Click the name of the replication set for which you want to suspend or resume replication.

5.	The Replication Set Details page is displayed. Do one of the following:

	•	If you want to suspend replication and track changes between the volumes, click Suspend.

	Note – If the replication set is already in a Suspended, Unsynchronized, or Failed/Suspended state, only the Resume button is available. Suspending a replication set will stop the coordination of data between the primary and the secondary volume. Any data that is written to the primary volume will be tracked while the replication set is suspended and will automatically be written to the secondary volume when replication is resumed. A full synchronization will not be required.

	•	If you want to resume replication and copy only the data changes, not the entire contents of the volume, click Resume.

	Note – Any data that is written to the primary volume will be tracked while the replication set is suspended and will automatically be written to the secondary volume when replication is resumed. A full synchronization will not be required.

6.	When prompted to confirm the selected action, click OK.

Note – If you are suspending or resuming replication for a replication set that is part of the consistency group, all other replication sets in the group with primary volumes on the primary array will also be suspended or resumed.


Role reversal

Figure 11-17 Role reversal

A role reversal is the act of promoting the secondary volume to be the primary volume within the mirrored volume pair, and/or demoting the primary volume to be the secondary volume. The role reversal process always requires a user to initiate the process. When the secondary volume becomes a primary volume, any hosts that are mapped to the volume through a volume-to-LUN mapping will be able to read or write to the volume.

Reversing roles

It is possible to perform the role reversal from either volume in the replication set. For example, when you promote the secondary volume to a primary role, the existing primary volume is automatically demoted to a secondary role (unless the array cannot communicate with the existing primary volume). A role reversal may be performed using one of the following methods:

•	Changing a secondary mirrored volume to a primary volume – This option promotes the selected secondary volume to become the primary volume of the mirrored pair and would be used when a catastrophic failure has occurred. For step-by-step instructions, refer to “Changing a Secondary volume to a Primary volume” in the online help.



•	Changing a primary mirrored volume to a secondary volume – This option demotes the selected primary volume to become the secondary volume of the mirrored pair and would be used during normal operating conditions. For step-by-step instructions, refer to “Changing a Primary volume to a Secondary volume” in the online help.

If a communication problem between the secondary and primary sites prevents the demotion of the remote primary volume, an error message is displayed. However, you are given the opportunity to proceed with the promotion of the secondary volume, even though this will lead to a dual-primary condition. This condition can be remedied because the communication problem between the storage systems does not prevent the original primary from being demoted by a user. If communication with the remote storage array is down, you can force a role reversal even when there will be a resulting dual-primary or dual-secondary condition. Use the Recovery Guru to recover from one of these conditions after communication is restored with the remote array.

Note – If the role of a volume in a replication set that is a member of the consistency group is changed, the replication set will become a member of the consistency group on the storage array that hosts the newly promoted primary volume.

To reverse the role of volumes within a replication set:

1.	Click Sun Storage Configuration Service.

2.	The storage array Summary page is displayed. Click the name of the storage array containing the volume in the replication set whose role you want to reverse.

3.	The Volume Summary page is displayed. Click the Replication Sets tab.

4.	The Replication Set Summary page is displayed. Click the name of the replication set that includes the volume.

5.	The Replication Set Details page is displayed. Click Role to Secondary or Role to Primary, as appropriate.

6.	A confirmation message is displayed. Click OK.

Changing replication modes

A number of factors must be considered and a number of decisions must be made before changing the replication mode of a replication set. Ensure you have a full understanding of the replication modes before changing them.



To change the replication mode of a replication set:

1.	Click Sun Storage Configuration Service.

2.	The Array Summary page is displayed. Click the name of the array containing the replication set whose replication mode you want to change.

3.	The Volume Summary page is displayed. Click the Replication Sets tab.

4.	The Replication Set Summary page is displayed. Click the name of the replication set whose replication mode you want to change.

5.	The Replication Set Details page is displayed. Select Asynchronous or Synchronous, as appropriate, from the drop-down list. If you select Asynchronous, write order consistency is disabled by default. To enable write order consistency for all replication sets using asynchronous mode, select the Consistency Group check box.

6.	Click OK to save the changes.

Testing replication sets

You can test communication between volumes in a replication set by clicking the Test Communication button on the Replication Set Details page. If a viable link exists between the primary and secondary volumes, a message indicates that communication between the primary and secondary volume is normal. If there is a problem with the link, a message displays details about the communication problem.

Removing a mirror relationship

Removing a mirror relationship between a primary and secondary volume does not affect any of the existing data on either volume; only the relationship between the volumes is removed, and they are no longer tied together. The primary volume continues normal I/O operation. The secondary volume becomes a standard volume and can be mapped to a host for read and write access.

Note – For backup routines, use the Suspend option rather than removing the mirror relationship.


Remote Replication summary

•	A minimum of two storage arrays and a fabric switch are required.

•	Remote Replication requires enabling AND activation.

•	All storage arrays involved in Remote Replication must be registered with the same CAM server.

•	Primary and remote volumes must be standard volumes.

•	The mirroring relationship is on a per-volume basis.

•	The remote volume must already exist on the remote storage array:

	•	Any existing data will be overwritten.

	•	It must be of equal or greater capacity than the primary volume.

	•	It must be owned by the same controller as the primary.

	•	It can be mapped to a host – in read-only mode.

Figure 11-18 Replication scenario summary


Knowledge check

1.	Remote replication continuously copies from one volume to another to produce an exact copy of the source volume. True / False

2.	Asynchronous replication is faster than synchronous mirroring. True / False

3.	When using remote replication, your mirrored volume must be located offsite. True / False

4.	Why are there two mirror reserve volumes on an array?

5.	What are the two logs kept in the mirror reserve volume? Briefly describe what each does.

6.	How does “write consistency mode” differ from “asynchronous mode”?

7.	What happens if there is a link interruption during the remote replication process?



Module 12

Monitoring performance and dynamic features

Objectives

Upon completion of this module, you will be able to:

•	List the factors that influence storage array performance

•	Explain how cache parameters affect performance

•	Recognize how dynamic functions impact performance

•	Explain the data presented by the CAM built-in Performance Monitor



First principle of storage array performance

Figure 12-1 First principle of performance

Even with the best storage arrays, constantly changing data transfer environments create challenges that make optimizing performance complex and difficult to manage. The first principle of monitoring, improving, or fine-tuning storage array performance is that it depends:

• Each environment is unique. What works in one environment may not work in another. Continuous hardware and software changes create a constantly changing environment that must be tweaked to adjust to the new conditions.

• Settings depend on the unique goals. The characteristics of the actual I/O are important to determine. Identify the relevant parameters, create a benchmark, and constantly monitor storage array performance.

• Actual mileage may vary. No two environments are the same. What has been successful in one environment may not produce the same effect in another.


40/30/30 rule

Figure 12-2 40%-30%-30% performance rule

Fine-tuning a storage array begins with recognition of the 40/30/30 performance rule, which states:

• 40% of storage array performance comes from the storage array hardware setup
• 30% of storage array performance is found in the server platform, including hardware, operating system, and device drivers
• 30% of storage array performance is found in the application software itself



Context for performance tuning

Figure 12-3 Storage array I/O path

It is important to understand how all aspects of the storage area network interact while accurately measuring application performance. Tuning must be done in alignment with all components in the data path. These components include HBAs and their settings and drivers, Fibre Channel switches, volume managers, operating systems, and server hardware. I/O characteristics at the application or file system level may not be the same characteristics seen at the controller level. Tuning a storage array becomes even more complex when multiple applications, and multiple hosts with different HBAs and different multi-path drivers, all share the same storage array.


Analyzing I/O characteristics

Figure 12-4 I/O characteristics

When monitoring and optimizing storage array performance, the best place to start is to analyze the I/O for each application. Answer the following questions for each application:

• Is the I/O primarily random or sequential?
  • Random I/O is any I/O load whose consecutively issued read and/or write requests do not specify adjacently located data.
  • Sequential I/O is any I/O load consisting of consecutively issued read or write requests to adjacently located data.

• Is the size of the I/O large or small?
  • An I/O greater than 256 KB is usually considered large.
  • To determine the I/O size, use this formula (see the sketch after this list):
    I/O size = current KB/sec ÷ current I/Os/sec

• Is the I/O mostly reads or writes?
  • The Read Percent statistic is reported in the performance monitor.

• Are the I/Os concurrent, or are there multiple threads?
  • Creating more sustained I/O produces the best overall results, up to the point of controller saturation. However, write-intensive applications are an exception.
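The following is a minimal sketch (in Python, not part of the course tooling) of the I/O size formula above; the sample readings are hypothetical:

    # Average I/O size derived from Performance Monitor readings (sketch only).
    def average_io_size_kb(kb_per_sec: float, ios_per_sec: float) -> float:
        """Return the average I/O size in KB: current KB/sec divided by current I/Os/sec."""
        if ios_per_sec == 0:
            return 0.0
        return kb_per_sec / ios_per_sec

    # Hypothetical readings: 20,000 KB/sec at 5,000 I/Os per second -> 4 KB (small I/O).
    print(average_io_size_kb(20_000, 5_000))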



Factors that affect storage array performance

Figure 12-5 Storage array factors that affect performance

Cabling

Figure 12-6 Cabling for the 6540

The 6540 controller module is distinguished by two drive-side channels with two ports each, for a total of four ports per controller. One port each from controller A and controller B is paired to connect a logical stack of drive modules. Cabling the drive modules in this way ensures redundancy and utilizes all available back-end channels. Using all channels yields the maximum aggregate bandwidth.



Best practices for performance include:

• Pair the odd-numbered channels together and the even-numbered channels together
• Use all the available ports - each port is capable of 400 MB/s of bandwidth
• Balance drive modules between stacks 1 and 3 and stacks 2 and 4
• Assign volumes on stacks 1 and 3 to controller A; assign volumes on stacks 2 and 4 to controller B

Figure 12-7 6140 cabling

The 6140 has only one channel with two ports on each controller, and therefore two redundant back-end loops. Remember that, unlike the 6540, it is not possible to mix drive module speeds: the entire 6140 back end must operate at either 4 Gb/s or 2 Gb/s.


Choosing a disk type

Figure 12-8 Disk type considerations

Types of hard disk drives

For those looking for performance, the typical evaluation metric is either the revolutions per minute (RPM) rotational speed or simply the disk drive interface (Fibre Channel, SAS, or SATA). However, the key to deciding what type of disk drive to use is aligning the disk to the data and, more importantly, to how the data will be used and accessed. Several disk drive technologies are available:

• Fibre Channel
  • 10K and 15K RPM
  • 2 or 4 Gb/s

• SAS (serial attached SCSI)
  • Good random access performance
  • Point-to-point technology
  • 15K RPM

• SATA (serial ATA)
  • Low cost per GB
  • Larger capacity
  • 7.2K RPM
  • 3 Gb/s


Capacity

Although the capacities of all types of drives keep increasing, the lowest capacities are found in the enterprise-class drives, where performance is more important than capacity. The highest capacities are found in nearline storage, where disks are used for secondary storage and as disk-to-disk backup, or for storing less frequently used data that still requires online access.

Selecting a RAID level

Figure 12-9 RAID levels

Advantages and disadvantages of RAID levels

RAID 0 stripes data across multiple drives; it can be created with as few as one drive. With 7.xx firmware, the maximum can be the number of disks in the entire storage array; with 6.xx firmware, the maximum is 30 disks.

• Advantages: Performance due to parallel operation of the accesses.
• Disadvantages: No redundancy. If one drive fails, data is lost.
• Application: Good performance for both IOPS and MB/s.

RAID 1 mirrors a disk's data to another drive. It needs two disks.

• Advantages: Performance, as multiple requests can be fulfilled simultaneously.
• Disadvantages: Storage costs are doubled.
• Application: Good performance for IOPS.

RAID 1+0 mirrors data to another disk and then stripes it across multiple drives. It must have at least four disks and, beyond that, an even number of disks up to the maximum. With 7.xx firmware, the maximum is the number of disks in the entire storage array; with 6.xx firmware, it is 30.

• Advantages: Performance, as multiple requests can be fulfilled simultaneously.
• Disadvantages: Storage costs are doubled.
• Application: Good performance for IOPS.

RAID 3 distributes data across multiple drives in lockstep. It must have at least three disks. The maximum number of disks with 6.xx and 7.xx firmware is 30.

• Advantages: High performance for large, sequentially accessed files (image, video, graphical, and so on).
• Disadvantages: Degraded performance with 8-9 I/O threads, random I/Os, and smaller, more numerous I/Os.
• Application: Good performance for MB/s.

RAID 5 drives operate independently, with data and parity blocks distributed across all drives in the group. It must have at least three disks. The maximum number of disks with 6.xx and 7.xx firmware is 30.

• Advantages: Good for reads, small I/Os, many concurrent I/Os, and random I/Os.
• Disadvantages: Writes are particularly demanding.
• Application: Good performance for both IOPS and MB/s.

RAID 6 drives operate independently, with data and double parity blocks distributed across all drives in the group. It must have at least five disks. The maximum number of disks with 6.xx and 7.xx firmware is 30.

• Advantages: Tolerates the failure of two disks in a RAID group, twice the tolerance of RAID 5.
• Disadvantages: Writes are particularly demanding.
• Application: Good for applications that need a high level of redundancy.
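As a rough illustration only (not from the course materials), the usable capacity implied by each RAID level can be estimated as follows; the drive counts and drive size in the example are hypothetical:

    # Approximate usable capacity per RAID level (illustrative sketch, not a sizing tool).
    def usable_capacity_gb(raid_level: str, drives: int, drive_gb: float) -> float:
        raw = drives * drive_gb
        if raid_level == "0":
            return raw                                # striping only, no redundancy
        if raid_level in ("1", "1+0"):
            return raw / 2                            # every block is mirrored
        if raid_level in ("3", "5"):
            return raw * (drives - 1) / drives        # one drive's worth of parity
        if raid_level == "6":
            return raw * (drives - 2) / drives        # two drives' worth of parity
        raise ValueError("unknown RAID level")

    # Example: a 5-drive RAID 5 group of 146 GB drives leaves about 584 GB for data.
    print(usable_capacity_gb("5", 5, 146))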


Dynamic RAID Migration (DRM)

Figure 12-10 DRM: Dynamic RAID migration

In the illustration above, changing RAID 5 to RAID 1 for a heavily read-intensive application could improve performance, especially the IOPS. The RAID level of a selected virtual disk can be changed by applying a storage profile with the desired RAID level to an existing virtual disk. Applying the new profile changes the RAID level of every volume that comprises the virtual disk. Performance might be slightly affected during the operation, depending on the setting of the modification priority.

Important:

• You cannot cancel this operation after it begins.
• Your data remains available during this operation.
• The virtual disk must be in an optimal state before you can perform this operation.
• If you do not have enough capacity in the virtual disk to convert to the new RAID level, an error message is displayed and the operation will not continue. If you have unassigned drives, add additional drives to the virtual disk and then retry the operation.


Number of spindles in a v-disk

Figure 12-11 Number of disks in a v-disk

For IOPS or transaction-oriented applications, the number of disks in a vDisk becomes more important, since disk drive random I/O rates are relatively low.

• Select a number of disks that matches the per-vDisk I/O rate needed to support the application.
• Account for the number of drives that are needed for the selected RAID level.

For high bandwidth, use enough disks to enable a full stripe write for the typical application I/O size. Host I/O sizes of 512 KB, 1 MB, and 2 MB are common in high-bandwidth applications. A RAID 5 virtual disk of 4 data disks + 1 parity disk with a segment size of 256K or 512K, or 8 data disks + 1 parity disk with a segment size of 128K or 256K, is a good match for large I/O sizes.


Dynamic Capacity Expansion (DCE)

Figure 12-12 Ability to add more disks to a v-disk

Dynamic Capacity Expansion (DCE) is a modification operation used to increase the available free capacity on a virtual disk. The increase in capacity is achieved by selecting unassigned drives to be added to the virtual disk. Once the capacity expansion is completed, additional free capacity is available on the virtual disk for creating other volumes or adding free capacity to an existing volume. This modification operation is considered "dynamic" because you retain the ability to access data on virtual disks, volumes, and disk drives throughout. You can add one or two previously unassigned or newly inserted drives at a time to a vDisk.

What does DCE really do?

• It can improve performance by providing more drive I/Os.
• It expands capacity and reduces the parity overhead of a virtual disk as a percentage of total capacity.
• It removes gaps from previous configurations, if they exist.



An added advantage of adding more spindles to a parity group is that parity becomes a smaller percentage of total capacity. For example, in a five-drive RAID 5 virtual disk, the capacity of one of the disks is dedicated to parity, even though parity is spread across all five disks in the group. As a percentage, a five-drive virtual disk has a 20% overhead for parity data. As the number of disks in the group increases, the percentage of parity decreases: a six-drive virtual disk has only a 17% overhead for parity data.

DCE example

Before expansion: 5 x 146 GB disks in a RAID 5 virtual disk. Volume 1 capacity is 200 GB; Volume 2, which was 50 GB, was deleted; Volume 3 capacity is 300 GB. The remaining capacity of this vDisk is approximately 84 GB. However, the 84 GB is not contiguous. Parity overhead is approximately 20%.

Figure 12-13 Before DCE

After expansion: 7 x 146 GB disks in a RAID 5 virtual disk. Volume 1 capacity is still 200 GB. The deleted Volume 2 capacity is now part of the free capacity, which is now contiguous. Volume 3 is still 300 GB. The free capacity of this virtual disk is now approximately 300 GB. Parity overhead is approximately 14%.

Figure 12-14 After DCE
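A minimal sketch (illustrative only) of the parity-overhead arithmetic used in the example above:

    # Parity overhead of a RAID 5 virtual disk as a fraction of raw capacity (sketch only).
    def raid5_parity_overhead(drives: int) -> float:
        """One drive's worth of parity spread across the group."""
        return 1 / drives

    # 5 drives -> 20%, 6 drives -> 17%, 7 drives -> 14%, matching the example above.
    for n in (5, 6, 7):
        print(n, "drives:", round(raid5_parity_overhead(n) * 100), "% parity overhead")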


Calculating an optimal segment size

Figure 12-15 Optimize segment size with I/O size

A segment is the amount of data, in kilobytes, that the controller writes on a single drive in a volume before writing data on the next drive. Data blocks store 512 bytes of data and are the smallest units of storage. The size of a segment determines how many data blocks it contains. For example, an 8K segment holds 16 data blocks and a 64K segment holds 128 data blocks.

For example, in a RAID 5 4+1 vDisk with a segment size of 128 KB, the first 128 KB of the volume is written to the first drive, the next 128 KB to the second drive, and so forth. For a RAID 1 2+2 vDisk, 128 KB of an I/O would be written to each of the two data drives and to the mirrors. If the I/O size is larger than the number of drives times 128 KB, this pattern repeats until the entire I/O is completed.

A segment size is set during volume creation, along with the virtual disk RAID level and the other volume I/O characteristics specified in the Storage Profile. Supported segment sizes are:

• 8K, 16K, 32K, 64K, 128K, 256K, 512K


Selecting an optimal segment size

For very large I/O requests, the optimal segment size is one that distributes a single host I/O across all data drives. The formula for optimal segment size is:

    segment size = stripe width ÷ number of data drives

In other words, you want to complete a full stripe write. For RAID 5, the number of data drives is equal to the number of drives in the pool minus 1 (the parity drive). For example: RAID 5, 4+1 with a 64 KB segment size => (5-1) x 64 KB = 256 KB stripe width. For RAID 1, the number of data drives is equal to the number of drives divided by 2. For example: RAID 1/0, 2+2 with a 64 KB segment size => 2 x 64 KB = 128 KB stripe width.

For small I/O requests, the segment size should be large enough to minimize the number of segments (drives in the vDisk) that need to be accessed to satisfy the I/O request, that is, to minimize segment boundary crossings. For IOPS environments, set the segment size to 64 KB or 128 KB or larger, so that the stripe width is at least as large as the median I/O size.
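A minimal sketch (illustrative only) of the stripe-width arithmetic above; the RAID 5 4+1 and RAID 1/0 2+2 layouts mirror the examples in the text, and the 1 MB host I/O is a hypothetical value:

    # Full-stripe-write sizing (sketch; values mirror the examples above).
    def stripe_width_kb(data_drives: int, segment_kb: int) -> int:
        """Stripe width = segment size x number of data drives."""
        return data_drives * segment_kb

    def optimal_segment_kb(host_io_kb: float, data_drives: int) -> float:
        """Segment size that spreads one host I/O across all data drives."""
        return host_io_kb / data_drives

    print(stripe_width_kb(4, 64))        # RAID 5, 4+1 with 64 KB segments -> 256 KB stripe width
    print(stripe_width_kb(2, 64))        # RAID 1/0, 2+2 with 64 KB segments -> 128 KB stripe width
    print(optimal_segment_kb(1024, 8))   # 1 MB host I/O across 8 data drives -> 128 KB segments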

Dynamic Segment Size

Dynamic segment size is the ability to change the segment size of a volume while actual I/O operations continue. The process includes creating a new profile and pool with the new segment size, then changing the volume's pool from the volume characteristics screen. Remember:

• You can only change one parameter in the profile/pool at a time.
• You cannot cancel this operation once it begins.
• Do not begin this operation unless the virtual disk is optimal.
• Allowed transitions typically are double or half of the current segment size. For example, if the current volume segment size is 32K, a new volume segment size of either 16K or 64K is allowed (see the sketch after this list).
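A small sketch (illustrative only, not a CAM API) of the doubling/halving rule for segment size transitions described above:

    # Allowed dynamic segment size transitions: double or half of the current size (sketch only).
    SUPPORTED_KB = (8, 16, 32, 64, 128, 256, 512)

    def allowed_new_segment_sizes(current_kb: int) -> list:
        """Return the supported segment sizes reachable in a single transition."""
        return [s for s in SUPPORTED_KB if s in (current_kb // 2, current_kb * 2)]

    print(allowed_new_segment_sizes(32))   # -> [16, 64]
    print(allowed_new_segment_sizes(8))    # -> [16] (8K is the smallest supported size)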


How long does a change segment size operation take?

The operation is slower than other modification operations (for example, changing RAID levels or adding free capacity to a virtual disk) because of how the data is reorganized and because of the temporary internal backup procedures that occur during the operation. How long a Change Segment Size operation takes depends on many variables, including:

• The I/O load from the hosts
• The modification priority of the volume
• The number of drives in the virtual disk
• The number of drive channels
• The processing power of the storage array controllers

If you want this operation to complete faster, you can change the modification priority, although this may decrease array I/O performance. To change the priority, view the jobs tab from the CAM tree.

Cache parameters

Figure 12-16 Read and write cache should always be enabled


Read caching pre-fetch enabled

Enabling read pre-fetch caching for a volume might be helpful if parts of the workload are sequential. For firmware 6.xx and later, the read-ahead pre-fetch is accomplished by a controller algorithm, so the feature is either enabled (any nonzero value) or disabled (0), and in most cases should be left enabled.

Enabling write caching and enabling write caching with mirroring

Enabling write cache on a volume generally improves performance for applications with significant write content, unless the application features a continuous stream of writes. However, write caching does introduce some small risk of data loss in the unlikely event of a controller failure. To eliminate any chance of data loss from a controller failure, the Write Cache Mirroring option ensures that a volume's write data is cached in both controllers. This option historically trades write performance for the highest possible availability, although recent firmware improvements significantly reduce this penalty for bandwidth environments.

Note – Never use write caching without batteries in a production environment.


Number of volumes in a virtual disk

Figure 12-17 Keep the ratio of vDisk to volume at one

Creating a virtual disk that contains only one volume is recommended. If you make a virtual disk that has more than one volume, try not to make more than three volumes. Having more than three active volumes on a virtual disk could cause disk thrashing and thus poor I/O performance.

Choosing an optimal volume modification priority

The modification priority defines how much processing time is allocated for volume modification operations relative to array performance. You can increase the volume modification priority, although this may affect array performance. Operations affected by the modification priority include:

• Copyback
• Reconstruction
• Initialization
• Changing segment size
• Defragmentation of a virtual disk
• Adding free capacity to a virtual disk
• Changing the RAID level of a virtual disk


Modification priority rates

The following priority rates are available:

• Lowest
• Low
• Medium
• High
• Highest

The Lowest priority rate favors array performance, but the modification operation will take longer. The Highest priority rate favors the modification operation, but array performance may be compromised.

Setting array-wide global parameters

Figure 12-18 Array-wide global parameters that affect performance


Setting the global cache flush

Two global parameters, Start Flushing and Stop Flushing, are provided to control the flushing of write data from the controller cache to the drives. Flushing begins when the percentage of unwritten data in cache exceeds the Start Flushing level and stops when the percentage hits the Stop Flushing mark. Best practice recommends setting both parameters to the same value, causing brief flushing operations that maintain a specified level of free space.

Cache block size

The cache block is the size of the block of data that is written into or read from cache. With 6.xx firmware and CAM version 6.0x, the cache block size defaulted to 16 KB. Starting with CAM 6.1, the cache block size can be changed on the Administration page. Starting with firmware 7.xx, an 8 KB block size is available.

NVSRAM settings: AVT (Auto Volume Transfer)

If you enable the per-host failover functionality of Auto Volume Transfer (AVT), cache management and flushing behavior can be affected. If AVT is not required for failover for any host platforms using the storage array, consider disabling AVT in all host regions. This can improve performance for some workloads.

Setting the global disk scrubbing

The impact of disk scrubbing is minimal, but the extra reads do represent a finite workload. Therefore, consider the performance demands when setting disk scrubbing.

• In most cases, enable disk scrubbing and set the scan frequency to 30 days to enable periodic scans of the disk media configured into volumes.
• When absolute maximum performance is the objective, do not enable disk scrubbing.



Performance Monitor

Use the CAM built-in Performance Monitor to monitor storage array performance in real time and to save performance data to a file for later analysis. You can monitor the performance of all volumes, an individual volume, or just the controllers. Totals for the entire array are also available; this data combines the statistics for all volumes and both controllers in an active-active controller pair. Do not run the Performance Monitor while volumes are being initialized or a modification operation is occurring, since these operations negatively impact performance.

The Performance Monitor pages

• Performance Summary page - allows you to set performance monitoring options and view performance statistics for the array.
• Performance Statistics Summary, Volumes page - enables you to view performance statistics for all volumes.
• Performance Statistics, Controller Details page - enables you to view performance statistics for both controllers A and B.
• Performance Statistics, Volume Details page - enables you to view performance statistics for the selected volume.

The statistics displayed on each page are described in the CAM online help.

Fine tuning

The following describes how some of the data fields can be used to analyze the performance of the storage array.

Total IOPS - This data field is useful for monitoring the I/O activity to a specific controller and a specific volume. This field helps you identify possible I/O "hot spots." If the I/O rate is slow on a volume, try increasing the number of drives in the virtual disk by using the dynamic capacity expansion (DCE) option.



You might notice a disparity in the Total I/Os (workload) of the controllers; for example, the workload of one controller is heavy or is increasing over time while that of the other controller is lighter or more stable. In this case, consider changing the controller ownership of one or more volumes to the controller with the lighter workload. Use the volume Total I/O statistics to determine which volumes to move. If you notice that the workload across the storage array (the Total IOPS statistic on the Performance Monitoring page) continues to increase over time while application performance decreases, this might indicate the need to add additional storage arrays to your enterprise so that you can continue to meet application needs at an acceptable performance level. Since I/O loads are constantly changing, it can be difficult to perfectly balance the I/O load across controllers and volumes. The volumes and data accessed during your polling session depend on which applications and users were active during that time period. It is important to monitor performance during different time periods and gather data at regular intervals so you can identify performance trends.

Read Percentage - Use this statistic for a volume to determine actual application behavior. If there is a low percentage of read activity relative to write activity, consider changing the RAID level of a virtual disk from RAID 5 to RAID 1 for faster performance.

Cache Hit Rate - A higher percentage is desirable for optimal application performance. There is a positive correlation between the cache hit percentage and I/O rates. The cache hit percentage of all of the volumes may be low or trending downward. This may indicate inherent randomness in access patterns, or, at the storage array or controller level, it can indicate the need to install more controller cache memory if you do not have the maximum amount of memory installed. If an individual volume is experiencing a low cache hit percentage, consider enabling cache read-ahead for that volume. Cache read-ahead can increase the cache hit percentage for a sequential I/O workload.

Total Data Transferred - The transfer rates of the controller are determined by the application I/O size and the I/O request rate. In general, a small application I/O request size results in a lower transfer rate but provides a faster I/O request rate and a shorter response time. With larger application I/O request sizes, higher throughput rates are possible. Understanding your typical application I/O patterns can give you an idea of the maximum I/O transfer rates that are possible for a given storage array.



Consider a storage array equipped with controllers and Fibre Channel interfaces that support a maximum transfer rate of 100 MB (100,000 KB) per second. Suppose you are typically achieving an average transfer rate of 20,000 KB per second on the storage array. This KB-per-second average transfer rate is a function of the typical I/O size for the applications using the storage array. (If the typical I/O size for your applications is 4K, then 5,000 I/Os must be transferred per second to reach an average transfer rate of 20,000 KB.) In this case, the I/O size is small and there is array overhead associated with each I/O transferred, so you can never expect to see transfer rates that approach 100,000 KB per second. However, if your typical I/O size is large, a transfer rate in the range of 80,000 to 90,000 KB per second might be achieved.

Because of the dependency on I/O size and transmission media, the only technique you can use to improve transfer rates is to improve the I/O request rate. Use host operating system utilities to gather I/O size data so you understand the maximum transfer rates possible. Then use the tuning options available in the storage management software to optimize the I/O request rate so you can reach the maximum possible transfer rate.

Average IOPS - Factors that affect I/Os per second include the access pattern (random or sequential), I/O size, RAID level, segment size, and the number of drives in the virtual disks or storage array. The higher the cache hit rate, the higher the I/O rates will be. Performance improvements caused by changing the segment size can be seen in the I/Os-per-second statistics for a volume. Experiment to determine the optimal segment size, or use the file system or database block size. Higher write I/O rates are experienced with write caching enabled than with it disabled. In deciding whether to enable write caching for an individual volume, consider the current and maximum I/Os per second. You should expect to see higher rates for sequential I/O patterns than for random I/O patterns. Regardless of your I/O pattern, it is recommended that write caching be enabled to maximize the I/O rate and shorten application response time.

If the Total IOPS or Average IOPS is not as expected, one factor might be host-side file fragmentation. Minimize disk accesses by defragmenting your files. Each access of the drive to read or write a file results in movement of the read/write heads. Make sure the files on your volume are defragmented; when they are, the data blocks making up each file are contiguous, so the read/write heads do not have to travel all over the disk to retrieve the separate parts of the file. Fragmented files are detrimental to the performance of a volume with sequential I/O access patterns.
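A small sketch (illustrative only) of the transfer-rate arithmetic in the example above, using the hypothetical 20,000 KB/sec and 4 KB figures from the text:

    # Relationship between transfer rate, I/O size, and I/O request rate (sketch only).
    def required_iops(target_kb_per_sec: float, io_size_kb: float) -> float:
        """I/Os per second needed to sustain a target transfer rate at a given I/O size."""
        return target_kb_per_sec / io_size_kb

    def transfer_rate_kb_per_sec(iops: float, io_size_kb: float) -> float:
        """Transfer rate achieved at a given I/O request rate and I/O size."""
        return iops * io_size_kb

    print(required_iops(20_000, 4))           # 4 KB I/Os -> 5,000 I/Os per second
    print(transfer_rate_kb_per_sec(5_000, 4)) # back-check: 20,000 KB per second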


Polling interval

The frequency with which performance data is obtained from the storage array is controlled by the polling interval. Each time the polling interval elapses, the Performance Monitor re-queries the storage array for performance statistics. If you are monitoring the array via CAM, update the statistics frequently by selecting a short polling interval, for example, 3 or 5 seconds. If you are saving results to a file to examine later via SSCS, choose a slightly longer interval, for example, 30 to 60 seconds, to decrease the array overhead and the performance impact.

Note – Best practice: Be sure to monitor during different time periods to account for user and application variance.

Performance and dynamic features summary

• Know your I/O characteristics
• Use all available host-side and drive-side channels on the controller
• Balance I/O across the dual controllers
• Cable according to best practices
• For IOPS, select faster drives - FC, 15K RPM, 4 Gb/s
• For throughput, select the number of drives and segment size that will allow for a full stripe write
• Use read and write cache
• Configure the entire capacity of a vDisk into one volume



Knowledge check

1. Explain the 40/30/30 rule.

2. A high read cache hit rate is desirable for what kind of environments?

3. What are the cache parameters that can be set for each volume?

4. Which volume cache parameters have a negative effect on performance?

5. How is cabling important for performance?

True or False

6. Increasing the segment size will always improve performance. True / False

7. The Performance Monitor can monitor specific virtual disks, volumes, or controllers, but not specific disks. True / False

8. The higher the modification priority is set, the faster the I/Os are serviced, and the modification operations complete at a slower pace. True / False

1. A segment is the amount of data, in kilobytes, that the controller writes on a single drive in a volume before writing data on the next drive. True / False



2. The dynamic functions (DSS, DRM, and DCE) will terminate if the storage array is powered off. True / False

Multiple Choice

9. What is performance?
   a. How well a storage array stores or retrieves data for various host workloads.
   b. The probability that a disk array is available 7 x 24.
   c. The maximum ratio of read operations to write operations that a storage array can execute.
   d. The number of requests that can be fulfilled simultaneously to retrieve data.

10. You would enable write cache with mirroring when
   a. You need top performance
   b. You need additional reliability
   c. You need to have an extra copy of the volume
   d. You need to have more cache

11. Applications with a high read percentage do very well using
   a. RAID 0
   b. RAID 1
   c. RAID 3
   d. RAID 5

12. The Add Free Capacity option allows the addition of capacity to a virtual disk. How many drives can be added at one time?
   a. Only 1 drive at a time
   b. 1 or 2 drives
   c. A maximum of 2 for RAID 1 and a maximum of 3 for RAID 3 and RAID 5
   d. As many drives as are available

13. If your typical I/O size is larger than your segment size,
   a. Increase your segment size in order to minimize the number of drives needed to satisfy an I/O request.
   b. Decrease your segment size in order to maximize the number of drives needed to satisfy an I/O request.
   c. The number of drives should be equal to the segment size.
   d. Multiply the segment size by the number of drives in the virtual disk to optimize striping.



Module 13

Problem determination

Objectives

Upon completion of this module, you will be able to:

• Describe the tools in CAM to analyze storage array problems
• Explain how to use the Service Advisor to solve problems



Problem determination

Ask yourself these questions about the storage array environment:

• What has changed or is changing?
• What does the physical setup look like (LEDs, cabling)?
• What is the configuration?
• Are there other indicators?
• What other questions should you ask?
• What tools are available to aid in observation?

Figure 13-1 What can go wrong?

Utilizing the tools available for problem determination

Visual cues:

• LEDs
• Audible alarm
• Icons in CAM



The limiting factor with "visual cues" is that they do not provide details; they are only indicators that a problem exists.

Compatibility matrix

A table of all third-party hardware and software components that should be used with a particular level of controller firmware. When determining compatibility, it is important to verify:

• Controller FW release (05.40, 6.10, 6.12...)
• Vendor (Qlogic, Emulex, LSI...)
• Component type (HBA, switch...)
• OS version certified (Win2003, RH7.2...)
• Component description, which lists versions specific to the respective component (for example, HBA driver, BIOS, FW...)

Problems and recovery

Figure 13-2 Problems and recovery


Service Advisor

Figure 13-3 Service Advisor

• A collection of "service" procedures
• Manually locate the one you need, or
• Arrive via an alarm

Service Advisor tasks

FRU Removal/Replacement Procedures (for both the 6540 and CSM200):

• Controllers
• Batteries
• Interconnect Canister
• Disk Drives
• I/O Module (IOM)
• Battery
• Power Supply
• SAS Interface Cable

X-Options:

• Adding Array Capacity
• Removing Array Capacity

• Adding Expansion Modules
• Removing Expansion Modules

Troubleshooting and Recovery:

• Offline/Online Controllers
• Reset Controller
• Correcting an IOM Firmware Mismatch
• Redistribute Volumes
• Setting the Drive Channel to Optimal
• Reviving a Disk Drive
• Recovering from an Overheated Power Supply

Portable Virtual Disk Management (07.xx):

• Export
• Import

Service Only:

• Module Midplane Removal/Replacement

Support Data:

• Various types of inventory, status, and performance data that can help troubleshoot any problems with your storage array
• Gathered into a zipped-file format
• Gathered through CAM or through the command line interface (CLI)

Figure 13-4 Collect support data through Service Advisor


Collect support data through the command line

/opt/SUNWsefms/bin/supportData

usage: supportData -d <identifier> -p <path> -o <output file>

Valid identifiers are the deviceKey, array name, controller IP number, or controller DNS name. The exact form of identifier used must be on record in order for the array to be recognized. Use the command 'ras_admin device_list' to find a valid array name and IP number.

ex: supportData -d SUN.XXXXXXXXXXX.YYYYYYYYYY -p mp -o outputfile
ex: supportData -d Array-15 -p mp -o outputfile
ex: supportData -d 123.456.789.101 -p mp -o outputfile

Support Data bundle

Key: C = current configuration info, S = current state information, PS = performance / statistical information, E = event tracking

1. NVSRAMdata.txt (C, current NVSRAM configuration) A controller file that specifies the default settings for the controllers.

2. stateCaptureData.dmp* (S, current state of the controller from the viewpoint of the controller firmware. This log is nothing more than a series of controller shell commands and their output.)

   evfShowOwnership - show volume ownership
   rdacMgrShow - show controller logged in
   vdmShowDriveModules - show expansion module information
   vdmDrmShowHSDrives - show hot spare information
   evfShowVol - show volume information
   vdmShowVGInfo - show v-disk information
   bmgrShow - battery functions
   bidShow - battery functions at driver level, for development use only
   tditnall - summary host-side information
   iditnall - summary drive-side information
   fcnShow chall - summary channel information
   luall - summary logical unit information
   ionShow - show drive state information
   fcDump - Fibre Channel information
   fcAll 10
   showSdStatus
   ionShow 99
   discreteLineTableShow
   ssmShowTree
   ssmDumpEncl
   socShow - displays SOC statistics
   showModules
   excLogShow - displays the exception log
   hwLogShow - hardware log memory errors
   spmShowMaps - show volume to LUN mapping
   spmShow - show volume to LUN mapping
   fcHosts - host summary information
   getObjectGraph_MT - determine individual component status
   ccmShowState - cache information
   netCfgShow - Internet address information
   inetstatShow - list established network connections
   dqprint - debug queue, for development use only
   dqlist - debug queue, for development use only
   taskInfoAll - summary of currently running tasks in firmware
   fcAll - displays status and cumulative error counts for source and destination fibre loops

3. socStatistics.csv (S) A detailed list of all the statistics and errors gathered by Fibre Channel loop-switch devices.



4. objectBundle (S) The information the controller firmware has reported back to the management software concerning the current state of the storage array, normally intended for developer use.

5. driveDiagnosticData.bin (S) A binary file used for failure analysis, intended for developer use.

6. storageArrayProfile.txt (C) The current physical and logical configuration for the storage array.

7. performanceStatistics.csv (PS) Point-in-time I/O performance information.

8. majorEventLog.txt (E) Used for event tracking on the storage array. A detailed list of events that occur on the storage array. The list is stored in the DACstore region on the disks in the storage array and records configuration events and storage array component failures.

9. alarms.txt (S) Used for current alarms on the storage array.

10. badBlocksData.txt (E) Contains Volume, DateTime, Volume LBA, Drive Location, Drive LBA, and Failure Type.

11. readLinkStatus.csv (S) Used to diagnose drive-side channel component errors (IOM, SFP, drives). This log is commonly used to isolate component errors in configurations with JBOD expansion modules; be mindful of the back-end architecture (SBOD or JBOD) to best interpret the information.

12. persistentReservation.txt (S) For viewing LUN persistent reservation locks. The only time this log would be viewed is when the storage array is being used in a clustered application with multiple hosts accessing the same LUN.

Fault Management Service (FMS)

The Fault Management Service (FMS) is a software component of the Sun Storage Common Array Manager that can be used to monitor and diagnose the storage arrays. The primary monitoring and diagnostic functions of the software are:

• Array health monitoring
• Event and alarm generation
• Notification to configured recipients
• Diagnostics
• Device and device component reporting

An FMS agent, which runs as a background process, monitors all devices managed by the Sun Storage Common Array Manager.



The agent runs at configured intervals, or can be run manually, to probe devices. Events are generated with content, such as probable cause and recommended action, to help facilitate isolation to a single field-replaceable unit (FRU).

Alarms

Figure 13-5 Alarms

Events are generated to signify a health transition in a monitored device or device component. Events that require action are classified as alarms. There are four event severity levels:

• Down - Identifies a device or component as not functioning and in need of immediate service
• Critical - Identifies a device or component in which a significant error condition is detected that requires immediate service
• Major - Identifies a device or component in which a major error condition is detected and service may be required
• Minor - Identifies a device or component in which a minor error condition is detected or an event of significance is detected

Figure 13-6 Current alarms

Summary List:

• Acknowledge - changes the state of any selected alarms from Open to Acknowledged
• Re-Open - changes the state of any selected alarms from Acknowledged to Open. This button is grayed out until the alarm has been acknowledged
• Delete - removes selected alarms. This button is grayed out for any auto-clear alarm
• Auto Clear - indicates whether or not the alarm will automatically be cleared when the underlying problem is resolved. Alarms that do not have the auto-clear state will need to be deleted by the user when the underlying problem is resolved


Alarms aggregate problems

CAM aggregates events to provide better fault isolation.

Example: Controller failure. Other software may report:

• Failed controller
• 2x "Loss of communication with unknown device" (this is the batteries)
• Module path redundancy failure
• Volumes not on preferred path

CAM reports this as a single failed-controller alarm.

Example: Loss of power to a controller module. Other software reports 22 events; CAM reduces this to 1 entry.

Service Advisor links, alarms, and solutions

Figure 13-7 Service Advisor links alarms and solutions


Links to exact places

Figure 13-8 Links to exact places

With pictures

Figure 13-9 With pictures


Active links check status

Figure 13-10 Active links from service advisor

Troubleshooting link from the CAM navigation tree

Figure 13-11 Troubleshooting link from CAM navigation tree

The Troubleshooting link from the CAM navigation tree includes:

• Troubleshooting diagnostic tests
• Field Replaceable Units (FRUs)
• Storage Array Events
  • This page displays summary information on all events in the major event log
  • This includes date, type, component status, and details


Controller diagnostics

When these tests are run, you will get a message that "the target controller will be quiesced," which means data transfer to and from the controller is disabled while the tests are running. If you run the diagnostics while a host is using the volumes owned by the selected controller, the I/O will be rejected. Before starting the diagnostics, verify that the volumes owned by the controller are not in use, or that a multi-path driver exists and is functional.

• Controller read test: Checks for data integrity and redundancy errors
• Controller write test: Initiates a write command to the diagnostics region (DACstore) on a specified drive
• Internal loopback test: Passes data through each controller's drive-side channel, out onto the loop, and then back again to determine channel error conditions
• All controller tests: All controller tests are run
• Remote Peer Communication Check: Run only if remote replication has been configured

FRU - Field Replaceable Units

Figure 13-12 FRU summary


Summary pages

• Physical aspect of managed components
• Alarms link to the alarm page
• Installed and Slot Count determine the configuration

Component summary

• Name links to a Details page, which contains FRU properties
• State and Status
• Revision or Firmware version
• FRU ID tied to the physical element

Events

Figure 13-13 Events

• Summary of all events for the device
• Filter available
• Some events turn into alarms
• Some events get aggregated into a single event
• Events can be sent using e-mail notification

Array administration

Figure 13-14 Array administration

Administration:

• Manage Passwords
• Redistribute Volumes
• Reset Configuration
• Upgrade Firmware
• Change Array Name
• Define Default Host Type
• Define Start/Stop Cache %
• Configure Background Disk Scrubbing
• Configure Alert Fail-Over Delay
• Set Time Manually
• Synchronize Time with Time Server
• Array Health Monitoring
• Enable Health Monitoring
• Configure Performance Monitoring
• Enable/Disable Performance Monitoring
• Set Polling Interval
• Set Data Retention Period
• Add Licenses
• Disable Licenses
• View Activity Log
• View Array Specific Alarms

Health administration

Use the Array Health Monitoring Setup page to display the monitoring status for all storage arrays registered with this instance of Sun Storage Common Array Manager software, and to display and edit the health monitoring status for an individual array.

Agent information

• Active - The status of the agent.
• Categories to Monitor - The types of arrays to be monitored. You can select more than one type of array by using the Shift key.
• Monitoring Frequency - How often, in minutes, the agent monitors the selected array categories.
• Maximum Monitoring Thread Allowed - The maximum number of arrays to be monitored concurrently. If the number of arrays to be monitored exceeds the number selected to be monitored concurrently, the agent monitors the additional arrays serially.

Problem determination Copyright 2008 Sun Microsystems, Inc. All Rights Reserved. Sun Services, June 2009, Revision 3.0

13-359


Problems and recovery

Figure 13-15 Health administration

Health Agent configuration

• Enable/disable or manually run a monitoring cycle
• Select device types to monitor
• Select monitoring frequency
• Set the number of unique monitoring threads
• Adjust timeout settings for monitoring activity

Notification

E-mail:

• User e-mail or pager
• Filters available per e-mail address


SNMP traps:

• Programmatic data sent to SNMP trap listeners
• Management integration: SunMC, HP OpenView...

Figure 13-16 Notification

Activity log

This page enables you to view all user-initiated management activity that has occurred on the array. The following describes the fields on the Activity Log Summary page:

• Time - The date and time when an operation occurred on the array.
• Event - The type of operation that occurred, including the creation, deletion, or modification of an object type.
• Details - Information about the operation performed, including the specific object affected and whether the operation was successful.

Figure 13-17 Activity log

Problem determination summary

• Service Advisor is a collection of service procedures
• Always collect the support data bundle after every configuration change
• The Fault Management Service (FMS) monitors and diagnoses the registered storage arrays
• Alarms are the primary indicator of problems
• Events record the log of storage array status changes
• The activity log stores the storage array management history


Knowledge check

1. What is the limiting factor of using visual cues for determining problems on the storage array?

2. List two ways to collect the support data bundle.

3. What function does the Service Advisor serve?

4. Why is the major event log important?

5. What data is contained in the storage array profile?

6. List the four levels of alarms.

7. List two things you need to verify before running controller diagnostics.



Module 14

Maintaining the storage array

Objectives

Upon completion of this module, you will be able to:

• Describe dynamic volume expansion
• Explain the benefits of disk scrubbing
• Describe the process to install baseline firmware



Dynamic volume expansion (DVE)

Dynamic volume expansion (DVE) is the ability to seamlessly increase the capacity of standard volumes and reserve volumes. DVE allows you to expand the capacity of an existing volume either by using free capacity on an existing virtual disk or by adding unconfigured capacity to that virtual disk through dynamic capacity expansion. You can expand a volume dynamically without losing access to it or to any other volumes.

Note – Increasing the capacity of a standard volume is only supported on certain operating systems.

If you increase the volume capacity on a host operating system that is unsupported, the expanded capacity will be unusable and you cannot restore the original volume capacity. However, in the case of Snapshot reserve volumes, since they are not mapped to hosts, expansion is supported for all host environments.

The DVE option is not available if:

• The volume has a non-optimal status
• There is no free capacity on the virtual disk or there is no unconfigured capacity on the storage array

The availability of the capacity added to an existing volume depends on whether free capacity large enough for the expansion is located directly before or after the volume to modify. By nature, a volume must cover a contiguous disk capacity within a virtual disk. This leads to three possible scenarios:

• Free capacity is available directly before or after the volume: the added capacity will be available immediately
• There is enough free capacity in the virtual disk, but not directly before or after the volume to expand: all volumes between the volume to expand and the free capacity have to be relocated. Once this background process to relocate the volumes is finished, the added capacity will be available
• There is not enough free capacity in the virtual disk: the virtual disk needs to be expanded via capacity expansion. DCE is then coupled with DVE. Once the restripe (DCE) is finished, the capacity will be available

As soon as the free capacity is positioned properly, the extra capacity is available to the host.



Dynamic volume expansion is considered an exclusive operation. Other exclusive operations include Dynamic Capacity Expansion (DCE), Dynamic Segment Sizing (DSS), and Dynamic RAID Migration (DRM). Only one such operation can be active per virtual disk. While dynamic volume expansion is in progress:

• The DVE operation cannot be stopped
• The affected volume(s)/group cannot be deleted

Disk scrubbing

Disk scrubbing is a background process performed by the storage array controllers to provide error detection on the drive media. Disk scrubbing detects errors and reports them to the event log. Before disk scrubbing can run, it must be enabled on the storage array. Disk scrubbing then runs on all volumes on the storage array. You can disable disk scrubbing on any volume that you do not want to have scrubbed, and later re-enable it for any volume on which you disabled it.

The advantage of disk scrubbing is that the process can find media errors before they disrupt normal drive reads and writes. Disk scrubbing scans all volume data to verify that it can be accessed. If you enable a redundancy check, it also scans the volume redundancy data.

Disk scrubbing discovers the following errors and reports them to the event log:

Unrecovered media error: The data could not be read on its first attempt, or on any subsequent retries. On volumes with redundancy protection, the data is reconstructed, rewritten to the drive, and verified, and the error is reported to the event log. On volumes without redundancy protection, the error is not corrected but is reported to the event log.

Recovered media error: The drive could not read the requested data on its first attempt, but succeeded on a subsequent attempt. The data is rewritten to the drive and verified. The error is reported to the event log.

Redundancy mismatches: Redundancy errors are found. The first 10 redundancy mismatches found on a volume are reported to the event log.

Note – The media scan checks for redundancy only if the optional redundancy check is enabled.



Unfixable error: The data could not be read, and parity or redundancy information could not be used to regenerate it. For example, redundancy information cannot be used to reconstruct data on a degraded volume. The error is reported to the event log.

Installing baseline firmware

As part of the installation of the Common Array Manager software, current firmware files are placed in a directory on the management host. When you upgrade the firmware, CAM compares the firmware installed on the storage array with this baseline. If the controller firmware and NVSRAM match the baseline installed on the management host, CAM reports that the storage array is at baseline. CAM also reviews the IOM firmware level and the disk drive firmware level; if these are current as well, a message states that the storage array is at baseline and no firmware upgrade is necessary. If the firmware level is not at baseline, you are given the option to upgrade no disks or to upgrade all, and you can choose to install the baseline firmware on the storage array.

It is recommended that the firmware on all storage arrays be kept at the level of the current firmware baseline. New features are not supported with non-baseline firmware.

Note – Always check the latest Common Array Manager and storage array release notes for release-specific information about firmware and a list of firmware files for your storage array.

When dealing with the IOM firmware:

• Do not make changes to the storage array while updating the firmware

• Verify that all your IOM cards are visible before continuing work

When dealing with the drive firmware:

• Download firmware packages to the drives only if you are having firmware-related limitations or performance issues

• Stop all I/O activity

• Verify the behavior against a limited number of drives



• Do not remove any drives while performing the updates

Upgrading to 7.xx firmware

The 7.xx firmware must be installed by Sun Service. This is a one-time upgrade from 6.xx, and a special upgrade utility must be used. During this upgrade, the DACstore regions on all drives are rewritten from version 3 DACstore to version 4 DACstore, so it is absolutely imperative that the storage arrays be in optimal condition. Once the upgrade to 7.xx is complete, downgrading to 6.xx is not possible. Once 7.xx is installed on the storage array, future upgrades can again be performed through the standard CAM upgrade process.

Command line firmware upgrade utility

Command Service Module (csmservice)
Solaris (SPARC, x86): /opt/SUNWsefms/bin/csmservice
Windows 2003 server: c:/program files/Sun/common array manager/component/fms/bin

csmservice is a field utility for analyzing and updating array firmware baselines (a collection of controller, IOM, and disk firmware that has been tested). Unlike CAM, csmservice supports stand-alone mode, the display of current firmware versions, and individual array component updates (for example, updating just the disk drives or an IOM).

Example:
csmservice -i -a <array-name> [-f] [-w] [-t <fru-type>] [-x <fru-type>] [[-p <path>] <-c <component>>]
csmservice -v [-a <array-name>] [-t <fru-type>] [-x <fru-type>]
csmservice -s



csmservice -h

Table 14-1 Parameters to use with csmservice

Parameter   Means
-i          Install the array firmware CAM baseline
-v          View components not at the baseline firmware level
-s          Service mode menu-driven install mode
-h          Help; displays command usage statement
-f          Install in force mode
-o          Install the offline upgrade FRU type(s)
-j          Install as a background job
-w          Suppress the warning and continue to install
-p          Path to firmware file to override baseline version installation
-c name     Install the named product or FRU with the given firmware file
-a array    Select array(s) for view or installation
-t type     Filter view or install command to the given FRU type(s)
-x type     Filter view or install command to exclude the given FRU type(s)
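As a brief sketch of the two most common forms shown above (array01 is a hypothetical array name that is assumed to be already registered with CAM):

csmservice -v -a array01
csmservice -i -a array01

The first command lists which components of array01 are below the baseline firmware level; the second installs the CAM firmware baseline on that array.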

Maintaining the storage array summary

• Dynamic Volume Expansion allows additional capacity to be added to volumes
  – OS dependent

• Disk scrubbing checks for media errors
  – Only checks the disks configured into volumes

• Upgrade to 7.xx firmware requires use of a special utility
  – DACstore is rewritten on all the drives
  – I/O must be stopped during this process


Knowledge check

1. Increasing the capacity of a standard volume is only supported on certain operating systems.
   True    False

2. Disk scrubbing examines and reports media errors on all disk capacity in a storage array.
   True    False

3. If the controller firmware is not at baseline, the storage array reports as degraded.
   True    False

4. A special utility is required for upgrading firmware from 6.xx to 7.xx.
   True    False



Module 15

SSCS and Command Line Interface

Objectives

Upon completion of this module, you will be able to:

• Utilize the SSCS to export and import the configuration

• Use the fault management command line tools (FMS)



Sun Storage Common Array Manager CLI (SSCS)

Features

• Remote CLI shared across the product line

• Multi-platform support

• Full feature set

• Scriptable

• Backward compatible – continued support for 6120, 6320, 6920

• Man pages (UNIX)

Benefits

• All processing performed on the server

• New features installed on the server are immediately available to all clients

• Client upgrade not necessary with server upgrade

• Performance independent of client machine

• Code sharing with GUI

Usage

Login
You must log in to a CAM host before executing SSCS commands.
Example: ./SSCS -h localhost -u root

Built-in help
Keyword: help

Correct syntax is shown as part of error messages.

To manage the Sun Storage arrays, use the /opt/SUNWsesscs/cli/bin/sscs command.



From a terminal window, type the sscs command with a subcommand and any applicable parameters.

Note – The sscs command has an inactivity timer. The session terminates if you do not issue any sscs commands for 30 minutes. You must log in again after the timeout to issue a command.

Examples:

Log in:
sscs login -h <localhost | IP Address> -u <root|administrator>

Export/import of array configuration: sscs export array <array name>

Export the storage array configuration in XML format; an output file name can be specified:
sscs export -array <array name> > emp/configfile.xml
sscs import -x <xml file> array <array name>
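A minimal end-to-end sketch using the syntax above; the array name array01 and the output path are hypothetical placeholders:

sscs login -h localhost -u root
sscs export -array array01 > /var/tmp/array01-config.xml
sscs import -x /var/tmp/array01-config.xml array array01

The exported XML file documents the array configuration and can later be imported to restore it, or to clone the configuration to another array.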

Other useful information to collect

Pool:
sscs list -a <array name> pool <pool name>

Profile:
sscs list -a <array name> profile <profile name>

Virtual disk:
sscs list -a <array name> vdisk <vdisk name>

Disk:
sscs list -a <array name> disk



Other command line interface tools

Fault Management Service (ras_admin)
/opt/SUNWsefms/bin/ras_admin

• The "backend" for many of the Browser User Interface (BUI) and CLI functions

• Useful for diagnosing why an array cannot be registered through the BUI or CLI

• Offers in-band and device-list methods for discovering arrays

• Can also perform other functions:
  – List and delete alerts
  – List and delete devices
  – Add, list, and delete email addresses for notifications
  – List and display reports
  – List and display topologies
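As a minimal example (assuming a default Solaris installation path), the device list subcommand referenced later in this module can be run directly to show the arrays the management software currently knows about:

/opt/SUNWsefms/bin/ras_admin device_list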

Command Service Module (csmservice)
/opt/SUNWsefms/bin/csmservice

csmservice is a field utility for analyzing and updating array firmware baselines (a collection of controller, IOM, and disk firmware that has been tested). It supports stand-alone mode, the display of current firmware versions, and individual array component updates (for example, updating just the disk drives).

Note – On Windows, csmservice assumes that MSVCR71.dll is present. If it is missing, download it from the web (search for MSVCR71.dll).

Example:
csmservice -i -a <array-name> [-f] [-w] [-t <fru-type>] [-x <fru-type>] [[-p <path>] <-c <component>>]
csmservice -v [-a <array-name>] [-t <fru-type>] [-x <fru-type>]



csmservice -s
csmservice -h

Table 15-1 Parameters to use with csmservice

Parameter   Means
-i          Install the array firmware CAM baseline
-v          View components not at the baseline firmware level
-s          Service mode menu-driven install mode
-h          Help; displays command usage statement
-f          Install in force mode
-o          Install the offline upgrade FRU type(s)
-j          Install as a background job
-w          Suppress the warning and continue to install
-p          Path to firmware file to override baseline version installation
-c name     Install the named product or FRU with the given firmware file
-a array    Select array(s) for view or installation
-t type     Filter view or install command to the given FRU type(s)
-x type     Filter view or install command to exclude the given FRU type(s)

Collect support data
/opt/SUNWsefms/bin/supportData

Usage: supportData -d <identifier> -p <path> -o <output file>

Valid identifiers are the deviceKey, array name, controller IP number or controller DNS name. The exact form of identifier used must be on record in order for the array to be recognized. Use the command 'ras_admin device_list' to find a valid array name and IP number.




Examples:
supportData -d SUN.XXXXXXXXXXX.YYYYYYYYYY -p mp -o outputfile
supportData -d Array-15 -p mp -o outputfile
supportData -d 123.456.789.101 -p mp -o outputfile

Service command line
/opt/SUNWsefms/bin/service

Service commands

The CLI command "service" has been implemented to support running many of the management operations that are typically performed through the Service Advisor. These commands are intended for expert use; the preferred approach is to access this functionality through the Service Advisor, because it details the proper sequencing of operations. These commands require the target array's password to be in the CAM database. The sscs command set does not include NVSRAM settings.

Note – Commands that change a component's state, such as "revive", may work, only to have the array automatically return the component to a failed state because of an underlying failure condition.

Available service commands:

Attempts to place the controller or drive into a failed state:
service -d <deviceid> -c fail -t <a|b|tXctrlY|tXdriveY>

Attempts to place the controller or drive into the optimal state:
service -d <deviceid> -c revive -t <a|b|tXctrlY|tXdriveY>

Redistributes the volumes back to their preferred owners:
service -d <deviceid> -c redistribute -q volumes



Turns on the drive, module, or array locator LED(s). When the "off" target is used, all locator LEDs are turned off. Not to be confused with fault LEDs, though they are mixed on some array types:
service -d <deviceid> -c locate -t <tXdriveY|tX|array|off>

Changes the array's name:
service -d <deviceid> -c set -q array <name=newname>

Changes the array's controller redundancy setting:
service -d <deviceid> -c set -q redundancy -t <simplex|duplex>

Sets a byte to the given value at the indicated offset of the identified NVSRAM region. Host-specific regions require the host-specific modifier. Both controllers' NVSRAM settings are modified unless they are in simplex mode:
service -d <deviceid> -c set -q nvsram <region=0xXX> <offset=0xXX> <value=0xXX> [host=0xXX]

Reads the specified NVSRAM region from both controllers (unless the array is in simplex mode). Host-specific regions require the host-specific modifier:
service -d <deviceid> -c read -q nvsram <region=0xXX> [host=0xXX]

Resets a controller, resets the battery age, clears the array's MEL log, or resets (baselines) the array's RLS counters, depending upon the target selected:
service -d <deviceid> -c reset -t <a|b|tXctrlY|tXbatY|mel|rls>

Sets the indicated drive channel to optimal:
service -d <deviceid> -c reset -q driveChannel -t <channel>

Prints just the storage array profile:
service -d <deviceid> -c print -t arrayprofile

Prints just the Major Event Log:
service -d <deviceid> -c print -t MEL
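A brief sketch of how these commands might be combined during troubleshooting. The values array01 and t85drive3 are hypothetical placeholders following the <deviceid> and <tXdriveY> forms shown above; the device identifier is assumed to be an array already registered in CAM:

service -d array01 -c locate -t t85drive3
service -d array01 -c locate -t off
service -d array01 -c print -t MEL

The first command lights the locator LED of drive 3 in tray 85, the second turns all locator LEDs off again, and the third dumps the Major Event Log for review.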




SSCS and CLI summary

• SSCS is a remote CLI shared across the product line with multi-platform OS support

• SSCS contains a full feature set and is scriptable

• Fault Management Service (ras_admin) is the "backend" for many of the Browser User Interface (BUI) and CLI functions

• ras_admin is useful for diagnosing why an array cannot be registered through the BUI or CLI

• Command Service Module (csmservice) is a field utility for analyzing and updating array firmware baselines

• The collect support data commands collect all or part of the support bundle used by technicians for problem diagnosis

• The service command supports running many of the management operations that are typically performed through the Service Advisor


Knowledge check

1.

How do you log in to the SSCS?

2.

Why is it important to export the storage array configuration?

3.

What is the function of the Ras_Admin command line?

4.

What is the function of the Service command line?



Appendix A

Glossary of acronyms

This glossary is not meant to be a complete compendium of acronyms used in Storage Area Networks; however, it is a starting place and covers the majority of acronyms used in this manual. Below are some web references that can be used for further research.

References
http://en.wikipedia.org
http://glossary.eea.europa.eu/EEAGlossary/R/RoHS_directive
http://searchstorage.techtarget.com/
http://www.arcelect.com/NEBS.htm
http://www.cisco.com/univercd/ccd/doc/cisintwk
http://www.cnet.com/Resources/Info/Glossaryerms/led.html
http://www.dhcp.org/
http://www.emulex.com/support/glossary.htm
http://www.microsoft.com/windowsserversystem/storage/storgloss.mspx#R
http://www.sdgcomputing.com/glossary.htm
http://www.webopedia.com/
http://www.zerocut.comech/fibre.html

AL-PA (Arbitrated Loop Physical Address): The address of a Fibre Channel node in an arbitrated loop.
CAM (Common Array Manager): One of the windows in Common Array Manager. It shows the array and includes menus, toolbars, and buttons to help manage the storage system.
AVT (Auto Volume Transfer): The ability for a volume to dynamically transfer its data to another volume upon failover.
canister (Customer Replaceable Unit): Any module that can be replaced on-site by the customer without the technician's assistance.



DAS (Direct Attached Storage): Storage that is directly connected to a server by connectivity media such as parallel SCSI cables. This direct connection provides fast access to the data; however, storage is only accessible from that server.
DCE (Dynamic Capacity Expansion): A modification operation that increases the available free capacity on a virtual disk by adding unassigned drives or newly inserted drives to the group.
DHCP (Dynamic Host Configuration Protocol): An Internet protocol for automating the configuration of computers that use TCP/IP. DHCP automatically assigns IP addresses, delivers TCP/IP stack configuration parameters such as the subnet mask and default router, and provides other configuration information such as the addresses for printers and time and news servers.
DMP (Dynamic MultiPathing driver): The Solaris VERITAS Volume Manager multi-pathing driver used for redundancy and failover.
DRM (Dynamic RAID Migration): DRM is used to change the RAID level on a selected virtual disk. Performance might be slightly affected during the operation.
DSM (Device Specific Module): Device-Specific Modules (DSM) export a set of behaviors to MPIO. This allows the physical devices to be recognized by MPIO and for the DSM to enhance and improve the utilization and performance of the device.
DSS (Dynamic Segment Size): An operation that changes the segment size on a volume. The controller firmware determines the segment size transitions that are allowed; transitions that are inappropriate from the current segment size are unavailable on the SANtricity menu. Allowed transitions typically are double or half of the current segment size.
DVE (Dynamic Volume Expansion): The ability to seamlessly increase the capacity of standard volumes and reserve volumes by either using free capacity on an existing virtual disk or by adding unconfigured capacity. This process is dynamic, which means it can be executed without losing access to the affected volume or to any other volumes.
EMW (Enterprise Management Window): One of the windows in Common Array Manager. It shows the entire storage system for the enterprise.
IOM (Environmental Service Monitor): A canister in a module that houses fans, batteries, and power supplies.



FC (Fibre Channel): A high-speed interconnect used in storage area networks (SANs) to connect servers to shared storage. Fibre Channel components include HBAs, hubs, switches, and cabling. The term Fibre Channel also refers to the storage protocol.
FC-AL (Fibre Channel-Arbitrated Loop): A ring-style network topology usually configured as a double loop to protect the array against device failure.
FRU (Field Replaceable Unit): Any part of a storage system that can be replaced on location by a technician.
GB (Gigabyte): Approximately 1 billion bytes.
GBIC (Gigabit Interface Converter): A Fibre Channel optical or copper transceiver that is easily swapped to offer a flexible choice of copper or fiber optic media. An optical GBIC supports both shortwave and longwave optical transmissions, which is an important criterion when trying to go the distance.
Gb/s (Gigabits per second): A measurement of throughput on a storage system.
GHS (Global Hot Spare): A drive within the storage system that a user defines as a spare drive to be used in the event a drive that is part of a volume with redundancy fails.
GUI (Graphical User Interface): A software interface that uses graphics to interact with the user to make the program easier to use.
HBA (Host Bus Adapter): The intelligent hardware residing on the host server that controls the transfer of data between the host and the target storage device.
HPC (High Performance Computing): An environment that demands high performance computers and large amounts of storage to use vast amounts of data for high-bandwidth-oriented applications such as data-intensive research, visualization, 3-D computer modeling, rich media, seismic processing, data mining, and large-scale simulation.
ICC (Interconnect Battery Canister): The canister that contains the battery for the 6140 controller-expansion tray.
IOPS (Input-Output Per Second): The standard measurement for input/output operations per second.
ITW (Invalid Transmission Word): Errors encountered on the fiber network.
JBOD (Just a Bunch Of Disks): A group of drives housed in its own box; JBOD differs from RAID in not having any storage controller intelligence or data redundancy capabilities.



LC (Local Connector): Commonly used fiber-optic cable connector in networks; it replaced the standard connectors (SC) in most networks.
LED (Light Emitting Diode): A type of light commonly used on electronic devices as indicators. LEDs produce either visible or infrared light and require very little power.
LUN (Logical Unit Number): A logical unit is a conceptual division (a subunit) of a storage drive or a set of drives. Logical units directly correspond to a volume. Each logical unit has an address, known as the logical unit number (LUN), which allows it to be uniquely identified.
MB (Megabytes): Approximately 1 million bytes.
MPIO (Multi-path I/O): Microsoft Multi-path I/O (MPIO) is a Driver Development Kit (DDK) used to build high-availability/multi-path solutions for Windows Server 2000 and 2003 operating systems. Future OS releases (e.g., Windows Server 2008 [Longhorn]) will provide MPIO "in-box" as part of the OS.
MPP (Multi-path Proxy): A multi-path driver that can handle several different paths to the storage system. MPP is commonly bundled with Common Array Manager, but it can also be packaged separately.
NAS (Network Attached Storage): A server that runs an operating system specifically designed for handling files (rather than block data). Network-attached storage is accessible directly on the local area network (LAN) through LAN protocols such as TCP/IP.
NEBS (Network Equipment Building System): NEBS criteria are a universal measure of network product excellence. Products that are NEBS certified are expected to be top performers in enterprise network environments. They are designed to be easy to install, operate reliably, and efficiently occupy building space. Physical configurations and compatibility of equipment with a set of environmental conditions help reduce product installation and maintenance costs.
NVSRAM (Non-volatile Static Random Access Memory): NVSRAM contains storage system parameters that determine how the storage system reports itself.
OLTP (OnLine Transaction Processing): The use of computers to run the on-going operation of a business.
PDHM (Proactive Drive Health Monitoring): A feature built into controller firmware starting with version 6.23 that tries to predict imminent drive failures.



PiT (Point in Time): A physical point in time view of a volume used by some features to take a "snapshot" of the volume.
RAID (Redundant Array of Independent Disks): A way of storing the same data over multiple physical drives to ensure that if a hard drive fails a redundant copy of the data can be accessed elsewhere on the storage system.
RAS (Reliability, Availability, Serviceability): A measurement of how well a storage system performs in a live environment.
RDAC (Redundant Dual Active Controller): A method of failover that is platform-dependent. RDAC is available for Windows, Solaris, and Linux.
RLS (Read Link Status): A link error in the traffic flow of a Fibre Channel loop. The errors detected are counted over a period of time. This count provides a coarse measure of the integrity of the components and devices on the Fibre Channel loop.
RoHS (Reduction of Hazardous Substance): Directive 2002/95/EC on the restriction of the use of certain hazardous substances in electrical and electronic equipment. It approximates the laws of the [EU] Member States on the restrictions of the use of hazardous substances and contributes to the protection of human health and the environmentally sound recovery and disposal of electrical and electronic equipment waste. This Directive bans placing any new electrical and electronic equipment containing more than agreed levels of lead, cadmium, mercury, hexavalent chromium, polybrominated biphenyl (PBB) and polybrominated diphenyl ether (PBDE) flame retardants on the EU market.
RSM (Remote Service Monitoring): The ability to monitor an array from a remote location.
Remote Replication: This premium feature of CAM provides disaster recovery by associating two volumes from two different storage systems to continuously mirror one to the other.
SAA (Service Action Allowed): An LED that indicates that a module in the storage array can be removed for service or replacement.
SAN (Storage Area Network): A specialized network that provides access to high performance and highly available storage systems using block storage protocols.
SAR (Service Action Required): An LED that indicates that a module in the storage array needs attention due to a failed state.
SATA (Serial ATA): A type of high capacity drive used primarily for backup in storage systems.
SBOD (Switched Bunch Of Disks): A type of high speed drive used in storage networks. SBODs use a switched loop environment.



SC (Standard Connector): Fibre cable connector that has been replaced by the LC connector in most networks.
SCSI (Small Computer System Interface): A set of standards that allow computers to communicate with attached devices, such as storage devices (drives, tape libraries, and so on) and printers. SCSI also refers to a parallel interconnect technology that implements the SCSI protocol.
SFP (Small Form-factor Pluggable): The port that receives an SC connection in either a controller module or host.
SIC (SATA Interface Card): A connector used on SATA drives that provides a fiber channel connector and simulates a dual-port configuration, 3Gb/s-to-4Gb/s buffering, and SATA II to FC protocol translation.
SMB (Small to Medium Businesses): A marketing term that describes a division of customers.
SNMP (Simple Network Management Protocol): An application layer protocol that facilitates the exchange of management information between network devices. It is part of the Transmission Control Protocol/Internet Protocol (TCP/IP) protocol suite. SNMP enables network administrators to manage network performance, find and solve network problems, and plan for network growth.
SOC (Switch On a Chip): A network device, built into a single chip, that channels incoming data flow from any of multiple input ports to the output port appropriate for its destination.
TCO (Total Cost of Ownership): Total amount it costs to own and run a storage system.
TB (Terabyte): Approximately 1 trillion bytes.
TCP/IP (Transmission Control Protocol/Internet Protocol): A suite of communications protocols that connects hosts on the Internet. TCP/IP is built into the UNIX operating system and is used by the Internet, making it the de facto standard for transmitting data over networks. Even network operating systems that have their own protocols, such as Netware, also support TCP/IP.
WEEE (Waste Electrical and Electronic Equipment Directive): Required recycling of electrical and electronic equipment products; became European law in February 2003.
WWPN (World Wide Port Name): The "name" of a host port that is a physical connection on a host bus adapter (HBA) that resides within a host.



Appendices



Appendix B

Knowledge check solutions

Sun Storage 6x80 product overview

1.

List the Field Replaceable Units found in the 6x80.
• Power-fan canisters (2)
• Interconnect-battery canister (1)
• Controller canisters (2)
• Battery packs (2)

2.

List the upgradable components in the 6x80 controller canister.
• Persistent cache USB drives
• Cache DIMM banks
• CPU DIMM banks
• Host cards

3.

True or False: The 6x80 controller module can support up to 512 drives.
False; it can support up to 256 drives now and 448 in the future.

4.

Which power-fan canister is connected to controller A, the left or the right canister (when viewed from the front of the module)? Right

5.

What does the SAA LED indicate? The SAR? What color is each?
• SAA = Blue, Service Action Allowed; indicates that the canister can be safely removed without interrupting data I/O
• SAR = Amber, Service Action Required; indicates that the canister has failed and needs to be replaced

6.

Does the 6x80 controller module have a mid- or back-plane? If not, how does power flow through the module?
• No, it does not have a mid- or back-plane.
• Power flows through the ICC to both controller A and controller B because the ICC connects to both.

7.

If one BBU fails, will the other one spare for it?



No; each BBU is dedicated to one controller or the other; therefore, if one BBU fails, the other will not spare for it.

8.

What are the USB-based flash modules used for in the 6x80 controller module? They are used as persistent cache that can hold data in cache for indefinite periods of time in case of disaster

9.

Can you “mix and match” two different types of host cards in one 6x80 controller module? Yes; this option offers great configuration flexibility



10. What is a SOC? Where would you find it?
They are "Switch on a Chip" loop chips that create the drive channels connected to the drive ports. They are found on the controller board.

11. What does it mean if a drive port has an amber LED on?
It means the port has been by-passed. No I/O can pass through the port.




Sun Storage 6540 product overview

1.

Identify the module, shown above. The module is the 6998 controller canister module.

2.

Using the letters, identify the parts of the component shown above.

A – Host side ports
B – Ethernet ports
C – Controller Service Indicators (Service Action Allowed, Service Action Required, Data in Cache)
D – 7-segment display for module ID and fault identification
E – Drive side ports
F – Serial port

3.a. If both LEDs in the middle are on, what speed is the port operating at? 4Gb

3.b. What is the function of the LEDs to the far left and far right? Port by-pass indicator; Off: no SFP installed or the port is enabled; On (amber): no valid device is detected and the channel port is internally bypassed.

4.

Explain how module IDs are set. How can you change them? Module IDs are soft-set by the controller to avoid module ID conflicts. You can change them through CAM or through the SSCS command line.



5.a. Why are there two Ethernet ports?
Ethernet port 1 is for normal operation. Ethernet port 2 is available for support to use.

5b. Which port should be used for normal operation? Ethernet port 1.

6. Why should you never remove the Interconnect Battery canister without Customer Support approval?
It serves as a midplane for pass-through of controller status lines, power distribution lines, and drive channels.

7.

The left power-fan canister is distributed via controller ___B____. The right is distributed via controller ___A_____.

8.

If one drive port on one channel is set at 4Gb/s link speed and the other is set at 2Gb/s what will be the speed for both ports? Both ports on a drive channel must run at the same speed.

9.

What is meant when a port is said to be able to “auto negotiate”? The port will interact with the host HBA or switch to determine the fastest compatible speed between the controller and the other device.

10. Where can you find the “heart beat” of the controller? On the lower right hand corner of the left box of the 7-segment display

11. What is the default controller module ID? 85




Sun Storage 6140 product overview

1.

Identify the module, shown above. The module is the 3994 controller module.

2.

Using the letters, identify the parts of the component shown above

A – Host side ports
B – Ethernet ports
C – Service action allowed
D – 7-segment display for module ID and fault identification
E – Drive side ports
F – Serial port

(Figure: drive channel 2 ports P1 and P2 on controllers A and B, with link rate LEDs.)

3.a. On which module would you find this set of ports and LEDs?
Controller module

3.b. If both LEDs in the middle are on, what speed is the port operating at?
4Gb

3.c. What is the function of the LEDs to the far left and far right?
Port by-pass indicator; Off: no SFP installed or the port is enabled; On (amber): no valid device is detected and the channel port is internally bypassed.

4.

List 3 benefits of DACstore.
1. All controllers recognize configuration and data from other storage arrays.
2. Storage array level relocation.



3. DACstore also enables relocation of drives within the same storage array in order to:
a. Maximize performance – as customers add expansion units, DACstore allows the customer to relocate drives so that the drives within an array are spread across all drive channels.
b. Maximize availability – as the customer adds expansion units, DACstore allows the user to relocate drives so that drives are striped vertically across all expansion modules, and no one module has more than one disk of a virtual disk.

5.

Differentiate the functionality of the Sundry drives compared to the other drives in the array. The sundry drive contains information about the entire array, whereas all the other drives contain only their own information in the DACstore.

6.

Explain how module IDs are set. How can you change them? Module IDs are soft-set by the controller to avoid module ID conflicts. You can change them through the CAM GUI or through the command line.

7.

How do you differentiate the 6140-2 and 6140-4 controllers? The 6140-2 controller has two host ports. The 6140-4 controller has four host ports.

8.a. Why are there two ethernet ports? Ethernet port 1 is for normal operation. Ethernet port 2 is available for support to use.

8.b. Which port should be used for normal operation? Ethernet port 1.

9.

Why are the controllers inverted in a 6140 controller module? Cooling and power cord management




Sun Storage CSM200 expansion module overview

1.

Identify the module, shown above. The module is the CSM200 IOM module.

2.

Using the letters, identify the parts of the component shown above.

A – Drive expansion ports
B – IOM Service Indicators
C – 7-segment display for module ID and fault identification
D – Serial port
E – Reserved ports

3.

Explain the purpose of the SATA II interface card. The SIC card serves three purposes:
1. Provides redundant paths to the disk. SATA II drives are single-ported, so the SIC card acts as a multiplexer and effectively simulates a dual-ported disk.
2. Provides SATA II to FC protocol translation, thereby enabling a SATA II disk to function within an FC expansion module.
3. Provides speed matching. The SIC card negotiates between 2Gb/s and 4Gb/s based on the setting of the Link Rate Switch on the expansion module. SATA II drives run at 3Gb/s; the SIC card performs the 3Gb/s-to-4Gb/s buffering so the SATA II drive effectively runs at 4Gb/s (and similarly can run at 2Gb/s).

4.

What are the main differences between the JBOD and SBOD technology?
Just a Bunch of Disks (JBOD): loops in a JBOD include the controller, IOMs, and drives (arbitrated loop).
Switched Bunch of Disks (SBOD): loop switch technology enables direct FC communication with each individual drive (point-to-point).




Sun Storage 6000 hardware installation

1.

On the diagram below, design a cabling scheme for the Sun Storage 6140 that has one controller module and 6 expansion modules. Cable the 6140 with best practices.
This is only one solution of many valid ones.

2.

Why is it important to have a unique module ID assigned to an expansion module? To assign a hard ID to each module.



3.

Why would you choose to use fibre cabling over copper?

• SPEED: Fiber optic networks operate at high speeds - up into the gigabits
• BANDWIDTH: large carrying capacity
• DISTANCE: Signals can be transmitted further without needing to be "refreshed" or

strengthened. Anything over 100 meters and requiring 100 Mbps bandwidth or better should use fiber. RESISTANCE: Greater resistance to electromagnetic noise such as radios, motors or other nearby cables. MAINTENANCE: Fiber optic cables costs much less to maintain.

Why is top-down bottom-up cabling important? In order to preserve access to the other modules in an array if one of the modules have a problem.

5.

What is the best way to power on an entire storage array? First Expansion modules, then controller module

Knowledge check solutions Copyright 2008 Sun Microsystems, Inc. All Rights Reserved. Sun Services, June 2009, Revision 3.0

Appendix B-401


Sun Storage Common Array Manager

Sun Storage Common Array Manager 1.

What is the difference between a “data host” and “management host”? Management Host - used to manage the storage array. This can be any host that has a network connection to the storage array and has the CAM Management Host Software installed. Data Host - used to read and write data to the storage array. This can be any host that has a FC connection to the storage array and has the CAM Data Host Software installed. Hosts that have both network and FC connection to the storage array can act as both Management and Data hosts.

2.

Describe the main difference between in-band and out-of-band management. In-Band management sends management commands through the FC data path and uses an special agent and access volume. Out-of-band management uses Ethernet connections to each controller and uses the ethernet connections for management commands.

3.

What is the purpose of the “access” volume? To allow communication through the FC path for in-band management

4.

List the 3 types of failover methods Explicit method (RDAC) Implicit method (AVT) Forced (controller failure)

5.

List at least 4 initial configuration steps. Name the storage array - set the storage array password Set up users - set module IDs Set the array time Set IP addresses

Appendix B-402

Sun StorageTek™ 6000 Product Line Installation and Configuration Copyright 2008 Sun Microsystems, Inc. All Rights Reserved. Sun Services, June 2009, Revision 3.0


Knowledge check solutions

Array configuration using Sun Storage Common Array Manager 1.

You can mix drive types (SATA and Fibre Channel) in a single module. True

2.

Why is it important to know what type of data you'll be working with when determining segment size? Segment size can have a major impact on performance for both IOPS and MBps. Segment size should be equal to or larger than your IO size in IOPS environments and be very large in MBps environments

3.

What is a preferred controller? A volume is assigned to one of the two active controllers. This controller controls the I/O between the volume and the application host along the I/O path.

4.

What is cache? What effect does it have on a volume? Cache memory is an area of temporary volatile storage on the controller that has a faster access time than the drive media. Cache can speed writes as the host can receive acknowledgement that the write has been written to cache which is faster than writing to disk. For reads, if the requested read is already in cache, it can be immediately sent to the host saving disk access time.

5.

What is disk scrubbing? A background process that checks the physical disks for defects by reading the raw data from the disk and writing it back. This detects possible problems caused by bad sectors of the physical disks before they disrupt normal data reads or writes.

6.

What does a "global" refer to in relation to a hot spare? The hot spare disk can spare for any disk for which it has enough capacity and same type. A hot spare is not assigned to a specific vdisk.

7.

What is the difference between "reconstruction" and "copy-back" in relation to a hot spare? Reconstruction is the process of building data on a drive from RAID parity or RAID mirror. Copy back is the process of copying the data from the hot spare drive after reconstruction completes to the newly replaced drive in a vdisk.

Knowledge check solutions Copyright 2008 Sun Microsystems, Inc. All Rights Reserved. Sun Services, June 2009, Revision 3.0

Appendix B-403


Array configuration using Sun Storage Common Array Manager 8.

Why should you name your storage array? For identification.

9.

What can happen if you do not set your controller clocks to match your management station? The support data in the Major Event Log, activity log and other data will not have the proper time stamp in case of an issue.

10. What part of the storage array takes advantage of the cache block size? What does it do with it? The size of the cache memory allocation unit - currently either 4k or 16k. This setting if set effectively can improve caching effectiveness. Small block IOs leave at 4k - large block IOs increase to 16k

11. Why is it important to keep a copy of all the support data? In order to have a record of the storage array when it is optimal - in order to compare in a time of a problem.

Appendix B-404

Sun StorageTek™ 6000 Product Line Installation and Configuration Copyright 2008 Sun Microsystems, Inc. All Rights Reserved. Sun Services, June 2009, Revision 3.0


Knowledge check solutions

Storage Domains True or False 1.

A storage domain is created when a host group or a single host is associated with a volume-to-LUN mapping. True

2.

A host group or host can access volumes with default mappings and specific mappings. False

3.

You can not use the same LUN number in more than one volume-to-LUN mapping. False

4.

A Default Host Group shares access to any volumes that were automatically assigned default LUN numbers. True

Multiple Choice 5.

After defining the first specific volume-to-LUN mapping for a host, a. Host ports must be defined b. the host type can no longer be changed c. The LUN number can not be used by other hosts in the topology d. The host and host ports move out of the Default host group

6.

In a heterogeneous environment, a. Each host type must be set to the appropriate operating system during host port definition

b. Volumes can have more than one volume-to-LUN number c. Hosts with different operating systems can share volumes d. A host can access volumes with either default mappings or specific volume-to-LUN mappings.

Customer Scenario Mr. Customer has the 3 servers and one storage array (6540). The servers: Three W2003 (each has two single ported HBA's), Linux (one dual ported HBA) and Solaris Sparc (one single dual-ported HBA).

Knowledge check solutions Copyright 2008 Sun Microsystems, Inc. All Rights Reserved. Sun Services, June 2009, Revision 3.0

Appendix B-405


Storage Domains The Finance Department has requested a 'disk' for storing employee expense statements. The application to access the employee expense statement will run on both W2003 servers with Microsoft Cluster server software. One of the W2003 server will be running the Exchange application and the Exchange Administrator has requested 2 “volumes”: one for the database, the other for a log file. The Linux server will be used for software development and will require disk space for source code and development tools (2 volumes). The Solaris server will be running the engineering document database and will require 1 volume. First draw a diagram showing the servers and the storage, so you and the customer have the same understanding of the requested configuration. 7.

List the Host Groups that will be created: W2003 with Microsoft Cluster Server

8.

List the Hosts that will be created under each Host Group: W2003 host group- two W2003 hosts, W2003, Linux host, solaris host

9.

List the number of Host Ports under each Host: 2 host ports for each of the 5 hosts

10. List the Host Types used for each Host Port: Windows 2003 clustered for the host group, Windows 2003 nonclustered, Linux, AIX

11. Will the Default Host Group be empty? Yes

12. How many domains will the customer require? 8 - 5 will be utilized but must buy the license for 8

13. What needs to be done by the user or storage administrator when an HBA is replaced in one of the servers? Create new host ports, delete the old ones, add the newly created to the host

Appendix B-406

Sun StorageTek™ 6000 Product Line Installation and Configuration Copyright 2008 Sun Microsystems, Inc. All Rights Reserved. Sun Services, June 2009, Revision 3.0


Knowledge check solutions 14. How many storage domains would you need for the configuration below?___5____

Knowledge check solutions Copyright 2008 Sun Microsystems, Inc. All Rights Reserved. Sun Services, June 2009, Revision 3.0

Appendix B-407


Integrated data services: Snapshot

Integrated data services: Snapshot 1.

A snapshot, a method for creating a point-in-time image of a volume, is immediately out of date as soon as a new write is made to the array. False

2.

Why is a snapshot referred to as a "point-in-time" (PiT) image? It is really only a logical volume - and no updates are made to the snapshot even if the data changes in the base volume.

3.

Is snapshot a true disaster recovery feature? Why or why not? A snapshot is not disaster recovery because if the base volume is corrupt or deleted, the snapshot volume is not valid as well.

4.

What is the maximum number of snapshots that can be created on one base volume? Currently at 6.19 fw 4 snapshots can be created on one base volume

5.

What happens if a data block on the base volume is changed more than once after the snapshot is taken? Nothing - snapshot implementation is copy of first write so if the block changes for the second time nothing happens

6.

What is the difference between disabling and deleting a snapshot? Disable means that the naming and location of the reserve volume is maintained just no updates to the reserve volume happens. Deleting the snapshot removes the name and physical location of the reserve volume.

Appendix B-408

Sun StorageTek™ 6000 Product Line Installation and Configuration Copyright 2008 Sun Microsystems, Inc. All Rights Reserved. Sun Services, June 2009, Revision 3.0


Knowledge check solutions

Integrated data services: Volume Copy 1.

Volume Copy source and target volumes can have the same RAID level and configuration. True

2.

During the copy process, controller A can be the preferred owner of the source volume and controller B can be the preferred owner of the target. False

3.

Reads and writes can continue to the source volume during a volume copy. False

4.

What volumes are included in a "copy pair"? The source and the target.

5.

What is the maximum number of copy pairs that can be in progress at one time? 8 copy pairs

6.

Why would you want to change the copy priority? To complete the process quicker or have less impact on the performance of the array.

7.

Explain why using snapshot with volume copy is a best practice. Use snapshot with volume copy so that reads and writes can continue to the source volume while the copy operation is in progress

Knowledge check solutions Copyright 2008 Sun Microsystems, Inc. All Rights Reserved. Sun Services, June 2009, Revision 3.0

Appendix B-409


Integrated data services: Remote Replication

Integrated data services: Remote Replication 1.

Remote replication continuously copies from one volume to another to produce an exact copy of the source volume. True

2.

Asynchronous mirroring is faster than synchronous mirroring. False

3.

When using remote replication, your mirrored volume must be located offsite. False

4.

Why are there two mirror reserve volumes on an array? One for each of the controllers. These are used to track the completion of writes to the secondary array, in order to keep the mirrors synchronized. No actual data is written to the reserve volumes. They are used for status and control data in relation to the mirror relationships.

5.

What are the two logs kept in the mirror reserve volume? Briefly describe what each does. The delta log and the FIFO log are kept in the mirror reserve. The delta log is used to track changes to the primary volume that have not yet been replicated to the secondary volume. Therefore, if an interruption occurs to the communication between the two storage arrays, the delta log can be used to re-synchronize the data between the secondary and primary volumes. The delta log is a bit map (maximum 1 million bits per mirror), where each bit represents a section of the primary volume that was written by the host, but has not yet been copied to the secondary volume. The number of blocks represented by a single bit is computed based on the usable capacity of the primary volume. The minimum amount of data represented by a single bit is 64K, that is 128-512-byte blocks. For example, for a 2TB volume, each bit will represent a data range of 2 MB. The FIFO log is used during Write Consistency mirroring mode to ensure writes are completed in the same order on both the primary and secondary volumes.

6.

How does “write consistency mode” differ from “asynchronous mode”? With the write consistency mode, the write order to the secondary volume is preserved and is used for multiple replication relationships like database volumes and log volumes.

Appendix B-410

Sun StorageTek™ 6000 Product Line Installation and Configuration Copyright 2008 Sun Microsystems, Inc. All Rights Reserved. Sun Services, June 2009, Revision 3.0


Knowledge check solutions 7.

What happens if there is a link interruption during the remote mirror process? The replication set process is suspended - once the link interruption has been corrected the replication set is then resynchronized.

Knowledge check solutions Copyright 2008 Sun Microsystems, Inc. All Rights Reserved. Sun Services, June 2009, Revision 3.0

Appendix B-411


Monitoring performance and dynamic features

Monitoring performance and dynamic features 1.

Explain the 40/30/30 rule. 100% of the storage is based on tuning each of the 3 aspects for performance 40% for the storage array 30% for the host and SAN infrastructure 30% for the host

2.

A high read cache hit rate is desirable for what kind of environments? sequential data - throughput environments

3.

What are the cache parameters that can be set for each volume? read, write, write with mirroring, write without batteries, prefetch for reads

4.

Which Volume cache parameters have a positive effect on performance? read cache, write cache, prefetch if the reads are for sequential data

5.

How is cabling important for performance? Provides 400 MB of bandwidth for each port used.

True or False 6.

Increasing the segment size will always improve performance. False

7.

The Performance Monitor can monitor specific Virtual Disks, Volumes or Controllers, but not specific disks. True

8.

The higher the modification priority is set, the faster the I/Os are serviced, and the modification operations complete at a slower pace. False
1. A segment is the amount of data, in kilobytes, that the controller writes on a single drive in a Volume before writing data on the next drive. True
2. The Immediate Availability Feature allows reads and writes to a Volume while initialization is still taking place. True
3. The Dynamic Functions (DSS, DRM, DCE and DVE) will terminate if the storage array is powered off. False

Appendix B-412

Sun StorageTek™ 6000 Product Line Installation and Configuration Copyright 2008 Sun Microsystems, Inc. All Rights Reserved. Sun Services, June 2009, Revision 3.0



Multiple Choice

9.

What is performance?

a. How well a storage array stores or retrieves data for various host workloads.
b. The probability that a disk array is available 7 x 24.
c. The maximum ratio of read operations to write operations that a storage array can execute.
d. The number of requests that can be fulfilled simultaneously to retrieve data.

10.

You would enable write cache with mirroring when

a. You need top performance
b. You need additional reliability
c. You need to have an extra copy of the volume
d. You need to have more cache

11.

Applications with a high read percentage do very well using

a. RAID 0
b. RAID 1
c. RAID 3
d. RAID 5

12.

The Add Free Capacity option allows the addition of capacity to a virtual disk. How many drives can be added at one time?

a. Only 1 drive at a time
b. 1 or 2 drives
c. A maximum of 2 for RAID 1 and a maximum of 3 for RAID 3 and RAID 5
d. As many drives as are available

13.

If your typical I/O size is larger than your segment size,

a. Increase your segment size in order to minimize the number of drives needed to satisfy an I/O request.
b. Decrease your segment size in order to maximize the number of drives needed to satisfy an I/O request.
c. The number of drives should be equal to the segment size.
d. Multiply the segment size by the number of drives in the virtual disk to optimize striping.
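To make the reasoning behind question 13 concrete (the sizes below are hypothetical): with a 64 KB segment size and a typical 256 KB host I/O, each request spans 256 / 64 = 4 drives; increasing the segment size to 256 KB lets a single drive satisfy the request, which reduces drive contention when many requests arrive in parallel.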



Maintaining the storage array

1.

Increasing the capacity of a standard volume is only supported on certain operating systems. True

2.

Disk scrubbing examines and reports media errors on all disk capacity in a storage array. False

3.

If the controller firmware is not at baseline, the storage array reports as degraded. True

4.

A special utility is required for upgrading firmware from 6.xx to 7.xx. True




Problem determination

1.

What is the limiting factor of using visual cues for determining problems on the storage array? Visual cues provide no detail, and observations made without tools can vary from person to person.

2.

List two ways to collect the support data bundle. Through the Service Advisor, or with the supportData command from the command line.

3.

What function does the Service Advisor serve? It is a collection of service procedures used for maintenance and problem resolution.

4.

Why is the major event log important? It contains the historical list of all events reported by the controllers and is used for problem determination and resolution.

5.

What data is contained in the storage array profile? Current physical and logical configuration information of a storage array

6.

List the 4 levels of alarms. Down, Critical, Major, and Minor.

7.

List two things you need to verify before running controller diagnostics. An operational multi-path driver, and no I/O running on the selected controller (the controller is not in use).



SSCS and Command Line Interface

1.

How do you log in to the SSCS? sscs login -u <username> -h <host>, where <host> is either localhost or the IP address of the management host.
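An illustrative session, assuming a CAM management host reachable as localhost and a user named root (the list and logout subcommands shown are typical of the sscs CLI, but verify them against your CAM release):

sscs login -h localhost -u root    # prompts for the password
sscs list array                    # show the arrays registered with CAM
sscs logout                        # end the CLI session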

2.

Why is it important to export the storage array configuration? To save the storage array configuration so that it can be restored if there is a problem with the configuration; the exported file can also be used to clone the configuration to another storage array.
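A sketch of exporting a configuration from the CLI (the array name and file name are placeholders, and the exact export and import subcommand syntax varies between CAM releases, so check the sscs man page):

sscs export -a array01 array > array01-config.xml    # save the current configuration to an XML file

The matching import subcommand can later read this file back to restore the configuration or to clone it onto another array.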

3.

What is the function of the Ras_Admin command line? It is the “back-end” for the browser interface and can perform most of the commands found in CAM.

4.

What is the function of the Service command line? It supports many of the functions found in the Service Advisor, but from the command line.


