
Welcome to VNX Foundations.

Copyright © 1996, 2000, 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010, 2011 EMC Corporation. All Rights Reserved. EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice. THE INFORMATION IN THIS PUBLICATION IS PROVIDED “AS IS.” EMC CORPORATION MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.

EMC2, EMC, Data Domain, RSA, EMC Centera, EMC ControlCenter, EMC LifeLine, EMC OnCourse, EMC Proven, EMC Snap, EMC SourceOne, EMC Storage Administrator, Acartus, Access Logix, AdvantEdge, AlphaStor, ApplicationXtender, ArchiveXtender, Atmos, Authentica, Authentic Problems, Automated Resource Manager, AutoStart, AutoSwap, AVALONidm, Avamar, Captiva, Catalog Solution, C-Clip, Celerra, Celerra Replicator, Centera, CenterStage, CentraStar, ClaimPack, ClaimsEditor, CLARiiON, ClientPak, Codebook Correlation Technology, Common Information Model, Configuration Intelligence, Configuresoft, Connectrix, CopyCross, CopyPoint, Dantz, DatabaseXtender, Direct Matrix Architecture, DiskXtender, DiskXtender 2000, Document Sciences, Documentum, eInput, E-Lab, EmailXaminer, EmailXtender, Enginuity, eRoom, Event Explorer, FarPoint, FirstPass, FLARE, FormWare, Geosynchrony, Global File Virtualization, Graphic Visualization, Greenplum, HighRoad, HomeBase, InfoMover, Infoscape, Infra, InputAccel, InputAccel Express, Invista, Ionix, ISIS, Max Retriever, MediaStor, MirrorView, Navisphere, NetWorker, nLayers, OnAlert, OpenScale, PixTools, Powerlink, PowerPath, PowerSnap, QuickScan, Rainfinity, RepliCare, RepliStor, ResourcePak, Retrospect, RSA, the RSA logo, SafeLine, SAN Advisor, SAN Copy, SAN Manager, Smarts, SnapImage, SnapSure, SnapView, SRDF, StorageScope, SupportMate, SymmAPI, SymmEnabler, Symmetrix, Symmetrix DMX, Symmetrix VMAX, TimeFinder, UltraFlex, UltraPoint, UltraScale, Unisphere, VMAX, Vblock, Viewlets, Virtual Matrix, Virtual Matrix Architecture, Virtual Provisioning, VisualSAN, VisualSRM, Voyence, VPLEX, VSAM-Assist, WebXtender, xPression, xPresso, YottaYotta, the EMC logo, and where information lives, are registered trademarks or trademarks of EMC Corporation in the United States and other countries. All other trademarks used herein are the property of their respective owners.

© Copyright 2011 EMC Corporation. All rights reserved. Published in the USA.



Upon completion of this course, you should be able to describe the VNX and VNXe models, their features, data services, architecture, and management.



This module provides an overview of VNX and VNXe models and architecture, VNX VG2 and VG8 Gateways, and VNX and VNXe software packs and suites.



This lesson covers VNX and VNXe models, basic components, architecture, and features.



The VNX unified storage systems are grouped into two series: VNXe and VNX. The VNXe series includes the VNXe3100 and VNXe3300, which are both file and iSCSI block solutions. The VNX series includes the VNX5100 (FC block only), VNX5300, VNX5500, VNX5700, and VNX7500. Unified storage platforms combine Block array and File serving components into a single Unified Block and File, File only, or Block only storage solution.

The VNX series storage systems leverage Intel multi-core CPUs and PCI Express 2.0 interconnects to deliver uncompromising scalability and flexibility while providing market-leading simplicity and efficiency. The VNX series platforms are designed to comply with the emerging Energy Star storage server power efficiency guidelines. The VNX series implements a modular architecture that concurrently supports native NAS, iSCSI, Fibre Channel, and FCoE protocols for host connectivity, with a 6 Gb Serial Attached SCSI (SAS) back-end topology. The high-end VNX5700 and VNX7500 utilize a Storage Processor Enclosure (SPE) architecture; the mid-range models utilize a Disk Processor Enclosure (DPE) architecture.



The VNXe family consists of the VNXe3100 and VNXe3300. These two models are designed to be powerful, with their multi-core CPUs, but also flexible, in that they support File and Block storage via IP networking. This convergence and simplification make the product ready for the channel market.



Shown here are the specifications for the VNXe3100. It is scalable up to 96 drives, supports SAS and NL-SAS drive types and the NFS, CIFS, and iSCSI protocols, and includes snapshots for file.



Shown here are the specifications for the VNXe3300. It is scalable up to 120 drives, supports SAS, NL-SAS, and Flash drive types, and supports the NFS, CIFS, and iSCSI protocols. It has more memory than the VNXe3100, at 12 GB per storage processor, and includes snapshots for file.



VNX combines all the protocols needed in today’s IT environment with simple unified management. Simple to use does not mean “simplistic,” however: the VNX features advanced replication, management, and Fully Automated Storage Tiering (FAST). The architecture is Modular Unified (configured for purpose: File, Block, and Object). The VNX is designed for high performance, optimized for multi-core CPUs and Flash, and includes a 6 Gb SAS back-end infrastructure. The VNX family is flexible: using Expanded UltraFlex I/O, the VNX can natively support Fibre Channel, iSCSI, CIFS, and NFS. The new packaging is denser and greener than ever, with new Energy Star ratings on energy efficiency.



Shown here are the specifications for the VNX5100. It is scalable up to 75 drives, supports SAS, NL-SAS, and Flash drive types, and supports the FC protocol. The VNX5100 is FC block only.



These are the specifications for the VNX5300. It is scalable up to 125 drives, supports SAS, NL-SAS, and Flash drive types, and supports the NFS, CIFS, MPFS, pNFS, FC, iSCSI, and FCoE protocols.



Shown here are the specifications for the VNX5500. It is scalable up to 250 drives, supports SAS, NL-SAS, and Flash drive types, and supports the NFS, CIFS, MPFS, pNFS, FC, iSCSI, and FCoE protocols. Additionally, the VNX5500 platform can now deliver up to a fifty percent increase in data transfer rate by adding an additional four-port 6 Gb SAS UltraFlex I/O module. In an environment where bandwidth is at a premium, this will significantly reduce downtime for block-specific applications.



These are the specifications for the VNX5700. It is scalable up to 500 drives, supports SAS, NL-SAS, and Flash drive types, and supports the NFS, CIFS, MPFS, pNFS, FC, iSCSI, and FCoE protocols.



Shown here are the specifications for the VNX7500. It is scalable up to 1000 drives, supports SAS, NL-SAS, and Flash drive types, and supports the NFS, CIFS, MPFS, pNFS, FC, iSCSI, and FCoE protocols.



VNX systems are available utilizing the dense-storage capabilities of the 60-drive Disk Array Enclosure (DAE) and dense rack. The 60-drive DAE provides three times greater capacity density than the conventional 15-drive DAE, affording greater rotating-drive performance for a given rack footprint. As a result, VNX configurations can be reduced by a factor of three in deployments where data center real estate is scarce. For example, a 975-drive Unified VNX7500 with 15-drive DAEs requires six 40U cabinets; a 975-drive VNX7500 with 60-drive DAEs requires just two 40U deep racks.

Because a VNX5700 or VNX7500 has considerably fewer discrete DAEs when using the 60-drive DAEs, up to 240 drives are accessible via one hop from the Storage Processor Enclosure (SPE), assuring minimum access latency. For a VNX7500 optionally configured with 8 back-end SAS buses, up to 480 drives can be directly accessed by the SPE. VNX FAST Cache capabilities are facilitated by the 60-drive DAE, which supports Flash, 10K, and near-line high-capacity drives simultaneously. FAST Cache extends the storage system’s existing caching capacity using Flash drives for better system-wide performance.

The 60-drive DAEs require, and are shipped in, a 44-inch cabinet, five inches deeper than a standard EMC cabinet. This allows sufficient room in the rear of the 35-inch DAE for proper cable management and access to the CRUs in the rear. It is a standard 19-inch rack, so all VNX hardware (SPE, DME, CS, DAE, etc.) can be installed in the dense rack.



The high-end VNX gateway systems (NAS head only), the VG2 and VG8, can connect to a standard VNX for Block system, a Symmetrix, or a CLARiiON. The basic design principle of the VG8 uses X-Blades and Control Stations to separate the data flow from the control flow, which is handled by the Control Station. The VG8 supports up to eight X-Blades with advanced failover (N+1 or N+M) under a single point of management and control, and presents all the benefits of a true cluster. Each VG8 X-Blade uses the UltraFlex I/O module concept, allowing up to five I/O modules per X-Blade. The I/O module options are:

• Four 10/100/1000 Ethernet (copper) ports
• Two 10/100/1000 Ethernet (copper) ports plus two Gigabit Ethernet optical ports
• Two 10 Gigabit Ethernet ports
• FCoE I/O modules (in place of Fibre Channel) to facilitate smooth integration with the VNX for Block, the Symmetrix, and any other legacy systems that support FCoE

Each X-Blade is configured with one 4-port Fibre Channel I/O module for storage array connectivity and tape connectivity (for NDMP), or one dual-port FCoE I/O module for storage connectivity. The Fibre Channel module can be either 4 Gb/s or 8 Gb/s; the FCoE I/O module is 10 Gb/s optical.

X-Blade configuration types cannot be mixed in a VG8 system. Although the Flex I/O modules can be added in any combination, the first slot must contain a 4-port FC module or an FCoE I/O module. X-Blade configurations are allowed with combinations of all possible connectivity types: FCoE, copper Ethernet, copper plus optical Ethernet, and optical 10 Gigabit Ethernet. The X-Blades feature EMC’s VNX OE for File embedded system software, which is optimized for file I/O. The Control Station is used to configure, manage, and upgrade the X-Blades, as well as to manage X-Blade failover.

The VG2 offers a single blade with fast reboot or dual blades with N+1 failover. The VG8 offers N+1 and N+M advanced failover. The standby blades wait, fully booted, for a primary X-Blade to fail; there is no performance degradation on failover. When a failover occurs, the standby X-Blade presents itself to the network with the identity of the failed X-Blade.

VNX VG2 and VG8, like all members of the VNX series, have the high-availability features you would expect from EMC, including dual power supplies, dual fans, full battery backup, and the “call-home” card for remote and predictive diagnostics. The VG2 and VG8 optionally provide dual Control Stations for Control Station failover.



The VNX series unified modular architecture delivers a highly flexible and scalable storage solution. A VNX7500 can scale up to 60 CPU cores of processing power: 12 CPU cores are dedicated to high-performance block serving, using six-core CPUs on two Storage Processors, and up to 48 CPU cores can be dedicated to networked file system management and data sharing, using six-core CPUs on eight X-Blades. Block connectivity is via FC, FCoE, and iSCSI. File connectivity is via NAS, including NFS, CIFS, MPFS, and pNFS. The pNFS protocol is available only with VNX arrays.



VNX series systems use X-Blades for File front-end access and Storage Processors for block access to the back end. The data control flow is handled by the Storage Processors in block-only systems and by X-Blades in File-enabled systems. The Control Station is used to configure, manage, and upgrade the X-Blades, as well as to manage X-Blade failover.

Each X-Blade Enclosure contains up to two X-Blades running VNX OE system software optimized for file. Depending on the model, a VNX system can contain up to eight X-Blades. Each X-Blade is configured with one 4-port 8 Gb Fibre Channel I/O module for storage array connectivity and tape connectivity (for NDMP). Multi-blade systems are typically configured with N+1 or N+M advanced failover (where N is the number of active X-Blades and M is a pool of standby X-Blades): either one X-Blade is configured as standby, or a number of X-Blades are configured as a pool of failover X-Blades for the active blades.

The Disk Processor Enclosure (DPE) or Storage Processor Enclosure (SPE) uses dual active Storage Processors (SPs) for disk I/O. These processors run the VNX OE for Block. The SPE supports automatic failover should one of the SPs fail. The disk array enclosures are one of the following configurations:

• 15 by 3.5-inch disk shelves (Flash, SAS, and NL-SAS)
• 25 by 2.5-inch disk shelves (Flash and SAS)
• 60 by 2.5- or 3.5-inch disk shelves (Flash, SAS, and NL-SAS)



Shown here is a Storage Processor Enclosure or Data Mover Enclosure, along with the FRUs accessible from the front. The Storage Processors and X-Blades are also referred to as CPU Modules. The two power supply/cooling modules in front of a CPU Module must be removed before the CPU Module itself can be removed. These components are held in place using latches. Instructions for replacing the CPU DIMMs are located on the top of the CPU Module housing; the blue plastic cover must be removed to access the DIMMs. Note that the SP or X-Blade must be shut down prior to removal and/or replacement. You must follow established procedures to avoid Electrostatic Discharge (ESD) damage to the components. Download, install, and run the VNX Procedure Generator from Powerlink to print out complete instructions.



Shown here are a DPE power supply/cooling module and a Storage Processor removed from the Disk Processor Enclosure. Both of these components are held in place using latches. Instructions for replacing the SP DIMMs are located on the top of the Storage Processor housing. The VNX5100, 5300, and 5500 use the DPE-based architecture. Note that the SP must be shut down prior to removal and/or replacement. You must follow established procedures to avoid Electrostatic Discharge (ESD) damage to the components. Download, install, and run the VNX Procedure Generator from Powerlink to print out complete instructions.



Shown here is the 15-drive DAE. Note that the front view looks identical to the 15-drive DPE, but the enclosure is shorter in depth and the components in the rear are completely different. The Field Replaceable Units include the 15 disk drives, two Link Control Cards (LCCs), and two power supply/cooling modules, PS A and PS B. The LCCs and power supplies are locked into place using captive screws to ensure proper connection to the midplane. All DAE FRUs are hot-swappable, but precautions must be taken to ensure non-disruptive operation: if a power supply is removed from the enclosure, the enclosure will shut down after two minutes. Be sure to download and run the Procedure Generator before removing or installing any components.



Shown here is a 25-drive DAE. Note that the front view looks identical to the 25-drive DPE, but the enclosure is shorter in depth and the components in the rear are completely different. The FRUs include the twenty-five 2.5-inch drives, LCC A and B, and power supply/cooling modules A and B. In these DAEs, Power Supply B is on the left; Power Supply A is on the right; LCC B is on top; and LCC A is on the bottom. Notice the power supplies and LCCs are held in place with locking latches to secure them to the midplane, as opposed to the captive screws used in the 15-drive DAEs. The Power Supplies have Power On and Fault LEDs.



Shown here is a 60-drive DAE. The FRUs include the sixty 2.5-inch or 3.5-inch drives, LCC A and B, power supplies, fan modules, and InterConnect Modules (ICMs). In these DAEs, Power Supply B is on the left; Power Supply A is on the right; LCC B is on top; and LCC A is on the bottom. Notice the LCCs are held in place with locking latches to secure them to the midplane, as opposed to captive screws. The Power Supplies have Power On and Fault LEDs.



The VNX series Data Mover Enclosure can contain one or two X-Blades. The X-Blades provide file connectivity via the VNX operating system. The different VNX series models use the same DMEs and vary only in the maximum number of X-Blades, the maximum number of I/O modules per X-Blade, and the X-Blade CPU speed and memory specifications.

Each X-Blade has an associated Management Module. The Management Module comes with three integrated Ethernet ports, labeled 0, 1, and 2. These ports provide an interface for connecting to the Control Station, Storage Processor, and additional X-Blade Management ports. All three RJ-45 ports have Link and Activity LEDs. The Management Module includes an RS-232 DB-9 serial connector for service laptop connection and a DME ID numeric display; possible DME IDs are 0 through 3. The Power/Fault LED is in the upper left-hand corner. If the Power/Fault LED is illuminated green, the Management Module is powered up; if it is amber, the module is faulted; if it is off, the Management Module is powered down.

The VNX series X-Blades include a four-port 8 Gb Fibre Channel I/O module in slot 0. Two ports are for connectivity to the Storage Processors and two are for connectivity to a backup tape device. A blue link LED on an 8 Gb FC port indicates the port is linked up at 8 Gb/s. The rest of the I/O slots are used for File connectivity; there must be at least one network I/O module in each X-Blade. Each UltraFlex I/O module includes a power/fault LED and a combination link/activity LED for each port. File connectivity I/O module options are:

• Four-port 1 GbE I/O module with two copper ports and two optical ports
• Four-port 1 GbE I/O module with four copper ports
• Two-port 10 GbE I/O module with two optical or Twinax ports



The Control Station software provides VNX for File’s controlling system. It runs an EMC value-added version of the industry-standard Red Hat Enterprise Linux 5 operating system. The Control Station also provides a secure user interface to all file-server components: a single point of management for the whole VNX solution, which can be isolated to a secure, private network.

Control Station software is used to install, manage, and configure the X-Blades; monitor the environmental conditions and performance of all components; and implement the call-home and dial-in support features. The unified user interface used to manage the VNX talks directly to the Control Station when managing file functionality. Typical administrative functions for File include managing volumes and file systems, configuring network interfaces, creating file systems, exporting file systems to clients, performing file system consistency checks, and extending file systems (both manually and automatically). Control Station administrative functions are accessible via Unisphere, through a command-line interface over secure shell, and via the VNX Startup Assistant and VNX Provisioning Wizard in the VNX installation toolbox.



This slide shows a gateway system (external storage connected via a switch). The functionality is exactly the same in the non-Gateway VNX systems. Each X-Blade is an independent, autonomous file server that remains unaffected even if a problem affects the other X-Blades. The multiple X-Blades (up to a maximum of eight) are managed as a single physical entity. X-Blades are hot-pluggable and offer N+1 and N+M advanced failover. In addition, X-Blades continue operation independent of any Control Station halts or restarts. The X-Blades run a mature EMC operating system called VNX OE for File, which is optimized to move data between the storage (VNX for Block components in the case of an integrated VNX system, or Symmetrix/CLARiiON for the gateway) and the IP network.



The base configuration for the X-Blades includes a four-port 8 Gb Fibre Channel I/O module in slot 0 for connectivity to the Storage Processors and a backup tape device.

The rest of the available slots are used for client I/O. The VNX5500, 5700, and 7500 can be configured with up to 256 TB of usable file capacity per X-Blade. This ensures that File-only systems can scale to the maximum capacity of the storage system.

The I/O slots and back-end ports per Storage Processor are listed here. Note that the DPE-based models use onboard 6 Gb SAS ports, while the SPE-based models use four-port 6 Gb SAS I/O modules for back-end connectivity. The VNX5700 requires one four-port back-end SAS I/O module; the VNX7500 can have up to two. This is why there may be three or four I/O slots available for client I/O. Although the VNX5100 does not support use of the I/O slots, four onboard 8 Gb Fibre Channel ports are available for client I/O and/or replication, and the VNX5100 supports an option to upgrade from the base of four to eight 8 Gb FC ports per storage system.

The VNX5500 has an available high-bandwidth option. If this option is chosen, the VNX5500 can add a SAS I/O module per Disk Processor Enclosure (DPE), expanding the total to six 6 Gb SAS buses (or 50% more bandwidth). Additionally, an 8 Gb Fibre Channel I/O module is also needed in the other slot in the DPE to enable the high-bandwidth configuration.



This chart is a Protocol Support Summary comparing the VNX and VNXe. Note that the VNXe supports only IP connectivity, while the VNX supports both IP and FC connectivity.



This lesson covers VNX and VNXe series basic functionality, DAE and drive options, and benefits.



VNXe installation and setup is straightforward and quick. Most components are customer replaceable and upgradeable. The VNXe is an ideal storage option for a complete application solution. Application storage wizards are provided to easily create storage, enabling storage management to occur at the application level. With this application view in mind, the VNXe3100 and VNXe3300 use best practices when providing storage resources to specific applications. The applications include Exchange, VMware, Hyper-V, Shared Folders, and Generic iSCSI. Storage is allocated, protected, monitored, managed and serviced with Unisphere. An online community is available, providing the administrator a way to participate in online chat sessions, link to training, and troubleshoot and resolve problems.



The VNXe3100 and VNXe3300 easily connect to an existing IP network. More robust network connectivity options are available. Storage becomes available on the network via CIFS, NFS, and iSCSI. The VNXe3300 is configured with dual Storage Processors (SP) for high availability. The VNXe3100 can be configured with either one or two SPs. To allow for scalability, both VNXe models can be upgraded to accommodate additional storage capacity and additional IP ports for connectivity.



The VNX series is designed for mid-market to enterprise environments. It is managed with Unisphere: all storage-related tasks can be accomplished with consistent, intuitive navigation for both file and block environments. Unified upgrades allow block and file software to be upgraded with one tool. An online community enables problem resolution via online help, chat, eLearning, FAQs, and access to the knowledge base. Some hardware components can be ordered online and replaced by the storage administrator. The flexibility and scalability offered by optional I/O modules and a wide array of disk types provide the ability to customize solutions for any environment. 2U DAEs housing twenty-five 2.5-inch drives, 3U DAEs housing fifteen 3.5-inch drives, and 4U DAEs housing sixty 2.5- or 3.5-inch drives provide even more flexible configurations.

A 6 Gb SAS back end, Intel’s latest CPUs, and up to 24 GB of memory per SP, along with FAST Cache and FAST Virtual Pools (FAST VP) greatly enhance performance for both file and block data. Proven EMC replication technologies, including RecoverPoint/SE, Replication Manager, and Data Protection Advisor ensure the integrity and availability of both file and block data.



VNX hardware and software optimizations enable the VNX to virtualize Exchange, SQL, and Oracle while providing increased performance for the application. Booting thousands of desktops and managing the VMware environment are quick and easy. ROI is maximized and previously missed SLAs can now be met with optimized performance using FAST Cache. Automatic tiering with FAST VP optimizes disk resources thus reducing TCO. Disk cost can also be reduced by saving up to 50% disk space with compression and deduplication. Web/Cloud applications and storage-as-a-service are supported via Atmos VE. Host encryption, file-level retention, and anti-virus checking provide data security. EMC’s proven replication technologies allow for simple setup, monitoring, notification, and reporting of local and remote replication. Maintenance, upgrades, and troubleshooting are simple and intuitive with an ecosystem designed for quick answers, software downloads, and problem resolution.



VNX Thin Provisioning further improves capacity utilization. File systems as well as FC/FCoE/native iSCSI LUNs can be logically sized to required capacities and physically provisioned with less capacity. This means storage need not sit idle in a file system or LUN until it is used. Provisioning is simplified, and capacity utilization is improved. VNX Thin Provisioning safeguards allow users to keep track of thinly provisioned file systems and LUNs: by reporting on actual physical usage, total logical size, and available pooled capacity, administrators can both predict and set alerts to avoid running out of physical capacity. In addition, with Automatic File System Extension and Dynamic LUN Extension, physical allocation can be increased in real time, as needed.
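
To make the allocate-on-demand idea concrete, here is a short Python sketch. It is purely illustrative; the class names, slice size, and alert threshold are invented and this is not VNX code. A thin LUN draws physical slices from a shared pool only on first write and raises an alert as the pool fills:

```python
# Illustrative sketch of thin provisioning concepts only; not VNX code.
class ThinPool:
    def __init__(self, physical_slices, alert_pct=80):
        self.free = physical_slices          # physical slices available in the pool
        self.total = physical_slices
        self.alert_pct = alert_pct           # hypothetical safeguard threshold

    def allocate(self):
        if self.free == 0:
            raise RuntimeError("pool out of physical capacity")
        self.free -= 1
        used_pct = 100 * (self.total - self.free) / self.total
        if used_pct >= self.alert_pct:       # warn before space actually runs out
            print(f"ALERT: pool {used_pct:.0f}% full")

class ThinLUN:
    SLICE_BLOCKS = 256                       # blocks backed by one physical slice

    def __init__(self, pool, logical_blocks):
        self.pool = pool
        self.logical_blocks = logical_blocks # logical size seen by the host
        self.mapped = set()                  # slices that have physical backing

    def write(self, block):
        slice_no = block // self.SLICE_BLOCKS
        if slice_no not in self.mapped:      # allocate on first write only
            self.pool.allocate()
            self.mapped.add(slice_no)

pool = ThinPool(physical_slices=10)
lun = ThinLUN(pool, logical_blocks=100_000)  # logically large, physically thin
lun.write(0); lun.write(1); lun.write(5000)
print(f"logical={lun.logical_blocks} blocks, physical slices used={len(lun.mapped)}")
```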



Deduplication is a process used to compress redundant data, allowing space to be saved on a file system. When multiple files have identical data, the file system stores only one copy of the data and shares that data between the multiple files. Different instances of the file can have different names, security attributes, and timestamps; none of the metadata is affected by deduplication.

VNX for File performs all deduplication processing as a background, asynchronous operation that acts on file data after it has been written into the file system. It does not process active data or data as it is written into the file system. It avoids active data because active data is more likely to be accessed, modified, or deleted in a short time period. Inactive data, which represents the largest component of most datasets (roughly 80 percent), is targeted for the deduplication processes. The system is designed to process only those files that are not being actively used by clients, thereby both maximizing the space savings and minimizing the impact on end users and applications. By default, the system selects files based on their size (minimum and maximum) and age (access and modification time). The administrator can tune these selection criteria if desired and optionally add filters to exclude specific file types (by extension) and/or files in directories with specific names.

By running all the processing in the background and avoiding active files, a performance penalty on the data with which you are running your business is avoided. Deduplication activity is also throttled to avoid impact on processes serving client I/Os. When running on an X-Blade, deduplication will process one file system at a time and throttle its activity if the X-Blade CPU utilization exceeds 75 percent. This means that VNX File deduplication and compression will process the bulk of the data in a file system in the background, using otherwise idle CPU cycles, without impacting the production workload.
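
As a rough illustration of these selection and sharing concepts, consider the Python sketch below. The filter values, field names, and hashing scheme are assumptions made for the example; this is not the VNX algorithm:

```python
# Conceptual sketch of file-level deduplication; not the VNX algorithm.
import hashlib, time

def eligible(f, now, min_size=1024, min_age_days=30, excluded_ext=(".tmp",)):
    """Mimic selection by size, access/modification age, and extension filters."""
    age_days = (now - f["mtime"]) / 86400
    return (f["size"] >= min_size
            and age_days >= min_age_days
            and not f["name"].endswith(excluded_ext))

def dedupe(files):
    store, savings = {}, 0
    now = time.time()
    for f in files:
        if not eligible(f, now):
            continue                      # skip active, small, or excluded files
        digest = hashlib.sha256(f["data"]).hexdigest()
        if digest in store:
            savings += f["size"]          # identical data: share the stored copy
        else:
            store[digest] = f["data"]     # first instance: keep the single copy
        f["data_ref"] = digest            # per-file names/metadata are untouched
    return store, savings

old_mtime = time.time() - 90 * 86400      # "inactive" files, modified 90 days ago
files = [
    {"name": "a.doc", "size": 4096, "mtime": old_mtime, "data": b"x" * 4096},
    {"name": "b.doc", "size": 4096, "mtime": old_mtime, "data": b"x" * 4096},
]
store, saved = dedupe(files)
print(f"unique copies: {len(store)}, bytes saved: {saved}")
```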



VNX’s FailSafe Network, or “FSN,” provides high availability in the event of an Ethernet switch failure by connecting the two components of the FSN to separate switches. Unlike EtherChannel and Link Aggregation, an FSN can maintain full bandwidth when failed over, given the same bandwidth on both the active and passive configurations, and it does not require any special switch configuration.

FailSafe Networks are configured as sets of ports, FastEtherChannels, Link Aggregations, or combinations of these. Only one connection in an FSN is active at a time. If the FailSafe device detects that the active connection has failed, the blade automatically switches to the surviving partner, which assumes the same identity as the failed connection. It is recommended that the FSN not be configured with an explicit Active and Passive relationship; instead, simply group the links together in the FSN, and one of them will be passive depending on the order of configuration. When automatic failover occurs and the failed link is then restored, automatic failback does NOT occur. This recommended configuration prevents a flip-flop effect if intermittent network failures occur.
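
The failover behavior just described, including the deliberate absence of automatic failback, can be modeled in a few lines of Python. This is a conceptual sketch only, with invented names, not VNX OE code:

```python
# Simplified model of FailSafe Network behavior; not VNX OE code.
class FailSafeNetwork:
    def __init__(self, connections):
        # connections: ordered names, e.g. one link aggregation per switch
        self.connections = {name: True for name in connections}  # name -> link up?
        self.active = connections[0]     # first configured connection becomes active

    def link_event(self, name, up):
        self.connections[name] = up
        if name == self.active and not up:
            self._failover()
        # A restored link does NOT trigger automatic failback, which
        # avoids flip-flopping on intermittent failures.

    def _failover(self):
        for name, up in self.connections.items():
            if up:
                self.active = name       # surviving partner takes over the identity
                print(f"failover: {name} is now active")
                return
        raise RuntimeError("no surviving connection in FSN")

fsn = FailSafeNetwork(["lacp_switchA", "lacp_switchB"])
fsn.link_event("lacp_switchA", up=False)   # switch A dies -> switch B takes over
fsn.link_event("lacp_switchA", up=True)    # switch A returns; B stays active
print("active:", fsn.active)
```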



A quota is a limit placed on the number of allocated disk blocks and/or files (literally, inodes) that a user/group/tree can have on a production file system (PFS). In other words, quotas provide a way of controlling the amount of disk space and the number of files that a user/group/tree can consume. Quotas can be managed via the VNX Control Station (CLI or GUI) or via the Windows 2000/03/08 interface. Limiting usage is not the only application of quotas: the quota tracking capability can be useful for tracking and reporting usage by simply setting the quota limits to zero. There are three implementation choices: hard quota, soft quota, and tracking.

• Hard quota: denies space on disk and generates an error.
• Soft quota: offers a grace period before starting to deny space on disk.
• Tracking: disk usage is tracked, but no limits are imposed.
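
A minimal Python sketch of these three behaviors follows; the names, error strings, and default grace period are hypothetical, not the VNX implementation:

```python
# Toy model of hard/soft/tracking quota behavior; hypothetical names throughout.
import time

class TreeQuota:
    def __init__(self, hard=0, soft=0, grace_secs=7 * 86400):
        self.hard, self.soft = hard, soft    # limits of 0 mean "tracking only"
        self.grace_secs = grace_secs
        self.used = 0
        self.soft_exceeded_at = None

    def charge(self, blocks, now=None):
        now = now or time.time()
        new_used = self.used + blocks
        if self.hard and new_used > self.hard:
            raise OSError("EDQUOT: hard quota exceeded")      # deny space outright
        if self.soft and new_used > self.soft:
            self.soft_exceeded_at = self.soft_exceeded_at or now
            if now - self.soft_exceeded_at > self.grace_secs:
                raise OSError("EDQUOT: soft quota grace period expired")
        self.used = new_used                 # tracking always records usage

q = TreeQuota(hard=100, soft=80)
q.charge(85)                 # over the soft limit: allowed, grace clock starts
try:
    q.charge(20)             # would exceed the hard limit: denied
except OSError as e:
    print(e)
print("used:", q.used)
```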



User mapping with VNX is needed to uniquely identify users and groups from Windows and UNIX/Linux environments that access the VNX. Windows environments use Security Identifiers (SIDs) for identifying users and groups. The VNX uses the UxFS file system, which employs a traditional UNIX/Linux UID/GID scheme.

When Windows users access the VNX, the user and group SIDs need to be mapped to UIDs and GIDs in order to allow the user to access data. User mapping provides this correlation of Windows SIDs to UNIX/Linux UIDs and GIDs. User mapping is required in a CIFS-only user environment and in a mixed CIFS and NFS user environment; Usermapper itself is recommended only for CIFS-only environments. User mapping is not required for NFS-only user environments, because the VNX uses the UIDs and GIDs provided with NFS access.
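
Conceptually, the mapping service maintains a persistent SID-to-UID/GID table, allocating new IDs the first time a Windows identity is seen. The Python sketch below illustrates the idea only; the allocation ranges and method names are invented, not Usermapper’s internals:

```python
# Conceptual SID -> UID/GID mapping, in the spirit of Usermapper; not EMC code.
import itertools

class UserMapper:
    def __init__(self, uid_start=32768, gid_start=32768):
        self._uids = itertools.count(uid_start)   # hypothetical allocation ranges
        self._gids = itertools.count(gid_start)
        self.users, self.groups = {}, {}

    def uid_for(self, sid):
        # Return a stable UID for a Windows user SID, allocating on first sight.
        if sid not in self.users:
            self.users[sid] = next(self._uids)
        return self.users[sid]

    def gid_for(self, sid):
        # Same idea for group SIDs mapped to GIDs.
        if sid not in self.groups:
            self.groups[sid] = next(self._gids)
        return self.groups[sid]

m = UserMapper()
alice = "S-1-5-21-1111111111-2222222222-3333333333-1001"
print(m.uid_for(alice))   # first access allocates a UID
print(m.uid_for(alice))   # subsequent accesses return the same UID
```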



File locking offers another level of security. It provides a mechanism for ensuring file integrity when more than one user attempts to access the same file. File locking manages attempts to read, write, or lock a file that is held by another user. The VNX OE allows flexible implementation of file locking in CIFS, NFS, and mixed environments.



The AntiVirus integration solution is a component of the VNX Event Enabler (VEE) infrastructure, an alerting API that notifies external applications when an event, for example a file save, occurs within the VNX. The Event Enabler framework is also used for quota management tool integration. The VNX Anti-Virus Sizing Tool is used after installation to identify the need for additional anti-virus engines to ensure performance of the system. The Anti-Virus Sizing Tool comes with VEE, which is Windows Management Interface (WMI)-enabled; there is also a pre-install sizing tool available for initial anti-virus sizing. You can scale the solution by adding virus-checking servers as required; your server vendors should be able to provide you with an understanding of how many dedicated servers you would need. You can also use different server types, for example McAfee, Symantec, Trend Micro, CA, Sophos, and Kaspersky, concurrently, as per their original anti-virus implementation. Performance of anti-virus solutions tends to be measured in server overhead and might vary from one environment to another depending on application and workload.



The Event Publishing Agent uses the VNX Event Enabler framework. The same framework is used for anti-virus integration to deliver an event notification framework for integration with third-party quota management applications such as Northern Parklife (NSS) and NTP Software (QFS). It is a mechanism whereby applications can register to receive event notification and context from Celerra. CEPA (Celerra Event Publishing Agent) delivers to the consuming application both event notification and associated context (file/directory metadata needed to make a business policy decision) in one message. The VEE alerts the agents on file and directory actions. Multiple agents can be deployed for high availability. The VEE is Windows (CIFS)-based and is available concurrently with anti-virus integration on each server that runs the VNX Event Enabler license.



Network Data Management Protocol (NDMP) is an industry-standard, LAN-less backup protocol for NAS devices. NDMP backups use the LAN only for control information; the data is transferred to the local backup device. The NDMP architecture allows backup activity to be localized to a single backup blade, so only one blade needs to be physically attached to the Tape Library Unit.



To secure its infrastructure, VNX provides a variety of features that can be used to tighten management operations on the Control Station. Within Unisphere, authentication with LDAP domains such as Windows Active Directory can be configured: domain users can be defined as VNX administrative users, and the Control Station authenticates those users with the domain. Role-based administrative access is also provided; the VNX has predefined role-based groups for different levels of administrative access, and users are assigned to the groups according to the administrative access level required.



VNX SAN Copy comes by default with the base software on the VNX. It is not part of any of the Suites of software currently packaged. However, it is software that is important to highlight in this training. SAN Copy software runs on a VNX storage system. SAN Copy copies data at a block level between VNX storage systems, within VNX storage systems, between VNX and Symmetrix storage systems, or between qualified non-EMC storage systems. SAN Copy software copies data directly from a logical unit on one storage system to destination logical units on another, without using host resources. SAN Copy software can perform multiple copies—each in its own copy session—simultaneously. You can use SAN Copy software to create full copies and incremental copies of a source logical unit.



This slide details how SAN Copy can interoperate with other VNX Replication software.



Customers can use SAN Copy to simultaneously move information, regardless of the host operating system or application. This is valuable for content distribution, moving applications, or supporting application data to distributed environments to aid in performance. It acts as the facilitator of data movement from system to system over the SAN or LAN/WAN infrastructure, eliminating the need for critical server CPU cycles and LAN bandwidth.



This lesson covers the VNX and VNXe software packs and suites and gives a basic overview of what software is provided within each pack and suite.



The VNX software packaging simplifies selling and ordering, making it easy to provide the functionality required by the environment. Packs and suites also provide the flexibility to change the underlying technology. No longer do you need to purchase multiple individual software titles to benefit from advanced efficiency features, security and compliance features, local and remote protection, and application protection. For the VNX series, the packs are the Total Protection and Total Efficiency packs. The Total Protection pack includes the Local Protection, Remote Protection, and Application Protection suites. The Total Efficiency pack includes every suite.



As you respond to requirements for data protection, application availability, and business continuity and compliance, the VNXe series offers incremental capabilities in the form of system and integration software. These titles can be purchased in suites to meet specific needs, or in value-priced packs for even greater value. The Total Protection Pack includes the Local Protection, Remote Protection, and Application Protection suites, while the Total Value Pack includes all the available suites. Note that the VNXe systems do not support the FAST Suite.



The FAST Suite is available only for VNX series arrays and includes VNX FAST VP, VNX FAST Cache, Unisphere Analyzer, and Unisphere Quality of Service Manager.



With FAST VP (auto-tiering), data at the sub-LUN level can be moved from tier to tier to improve the performance of the array. More active data can be moved up to faster tiers and less active data moved down to slower tiers without interrupting the user.
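
A toy Python model of sub-LUN tiering helps illustrate the idea. The tier names, thresholds, and policy here are invented for the sketch and are not FAST VP’s actual relocation algorithm:

```python
# Conceptual sub-LUN auto-tiering sketch; the policy here is invented, not FAST VP's.
TIERS = ["flash", "sas", "nl_sas"]        # fastest to slowest

def retier(slices, hot_threshold=1000, cold_threshold=10):
    """slices: list of dicts with per-slice I/O counts gathered over a period."""
    moves = []
    for s in slices:
        idx = TIERS.index(s["tier"])
        if s["io_count"] >= hot_threshold and idx > 0:
            moves.append((s["id"], s["tier"], TIERS[idx - 1]))   # promote hot slice
            s["tier"] = TIERS[idx - 1]
        elif s["io_count"] <= cold_threshold and idx < len(TIERS) - 1:
            moves.append((s["id"], s["tier"], TIERS[idx + 1]))   # demote cold slice
            s["tier"] = TIERS[idx + 1]
    return moves  # relocations happen in the background, invisible to hosts

slices = [
    {"id": 0, "tier": "sas", "io_count": 5000},   # busy slice -> flash
    {"id": 1, "tier": "sas", "io_count": 2},      # idle slice -> NL-SAS
    {"id": 2, "tier": "sas", "io_count": 300},    # stays put
]
for slice_id, src, dst in retier(slices):
    print(f"slice {slice_id}: {src} -> {dst}")
```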



FAST Cache is a system-wide resource. It is easy to set up and can allocate up to 2 TB to cache (read or read/write). The use of FAST Cache can result in immediately improved response from the system, while reducing the number of physical drives required for I/O transactions.
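
The promote-on-repeated-access idea behind an extended flash cache can be sketched as follows. This is illustrative only; the promotion threshold and eviction policy are assumptions, not FAST Cache internals:

```python
# Minimal sketch of a promote-on-repeated-access flash cache; not FAST Cache internals.
from collections import OrderedDict

class FlashCache:
    def __init__(self, capacity_pages, promote_after=3):
        self.capacity = capacity_pages
        self.promote_after = promote_after   # hypothetical promotion threshold
        self.hits = {}                       # page -> access count on spinning disk
        self.cache = OrderedDict()           # page -> data, kept in LRU order

    def read(self, page, backend):
        if page in self.cache:
            self.cache.move_to_end(page)
            return self.cache[page]          # served from flash
        data = backend(page)                 # served from rotating drives
        self.hits[page] = self.hits.get(page, 0) + 1
        if self.hits[page] >= self.promote_after:
            if len(self.cache) >= self.capacity:
                self.cache.popitem(last=False)   # evict the least recently used page
            self.cache[page] = data          # promote the hot page to flash
        return data

disk = lambda page: f"data-{page}"           # stand-in for the rotating back end
fc = FlashCache(capacity_pages=2)
for _ in range(3):
    fc.read(42, disk)                        # third access promotes page 42
print("cached pages:", list(fc.cache))
```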



Unisphere Analyzer was designed and developed specifically to identify bottlenecks and hot spots in VNX storage systems. Data may be collected by the storage system or by a Windows host in the environment running the appropriate software. The parameters are displayed in graphical form using the familiar Unisphere GUI.

Data may be displayed in real time or, for later analysis, saved as a .nar (Unisphere Archive) file. The two display modes show data in slightly different ways: Performance Summary is useful when the display shows minima and maxima but the time at which those values were reached is not important, while Performance Detail is a time-related display. In addition to those views, a Performance Survey chart shows up to five commonly used performance parameters, with a clear visual indication if they have exceeded user-defined thresholds. Sometimes the sizes of transfers from the host can be a factor in performance issues, so Analyzer displays I/O size information in two different ways. Data may be exported to text as comma-separated value (.csv) files, and graphics are exported as .jpg files.



Unisphere Quality of Service Manager (UQM) measures, monitors, and controls application performance on the VNX storage system. The monitoring feature is the first step in improving performance of high-priority applications because it gives users a more logical view of system performance, both for the entire storage system and for specific applications. This can be a powerful method of evaluating the storage system to determine the current service levels and to provide guidance on what service levels are possible, given the specific environment. UQM may be managed by the Unisphere GUI, Secure CLI, or Unisphere Client. Because UQM is array-resident, there is no host component to load and no performance impact on the host. UQM controls array performance by allocating resources to user-defined classes of I/O. This resource allocation allows the specified I/O classes to meet predefined performance goals; other I/O classes may see decreased performance as a result of the resource assignment.



The Security and Compliance Suite is available for the VNX and VNXe series arrays. For the VNXe series arrays, the Event Enabler (VEE) and File Level Retention (FLR) solutions are available. For the VNX series arrays, the Event Enabler (VEE), File Level Retention (FLR), and VNX Host Encryption solutions are available.



The VNX Event Enabler (VEE) provides event notification of activities on VNX to external processes requiring instant knowledge of antivirus activities. VEE allows VNX antivirus activities to integrate with VNX OE for file with minimal impact to system resource usage. Auditing applications monitor and report all key actions performed on computer and storage systems to identify changes that may be malicious or may relate to system availability issues, thereby improving system availability.



File-Level Retention (FLR) is an optional VNX/VNXe OE for File software feature for protecting files from modification or deletion until a specified retention date. Using VNX File-Level Retention, you can archive data to storage on standard, rewritable magnetic disks through NFS or CIFS operations. File-Level Retention allows you to create a permanent, unalterable set of files and directories and ensure the integrity of data. File-Level Retention has been available for a number of years; it provides Write Once, Read Many (WORM) functionality and has recently been updated to include a compliance option. File-Level Retention includes capabilities that allow you to enforce a basic set of self-governance, or compliance, policies that limit write access for a specified retention period. When customers purchase File-Level Retention, they get access to both the FLR-E and FLR-C options.



VNX Host Encryption provides data encryption at the storage-device level to protect data from unauthorized access or from the removal of a disk drive or array from a secured environment. It enables a consistent encryption technology that leverages RSA Key Manager to centrally manage and automate encryption keys. The primary benefit of VNX Host Encryption is that it protects information in the event it becomes compromised through unauthorized access or disk removal. Adding this level of protection enables compliance with internal, private, and government standards, including the Payment Card Industry Data Security Standard (PCI DSS), which is one of the most widely accepted compliance standards in the market today. As a host-based encryption product, VNX Host Encryption lets you choose the LUNs or volumes that contain sensitive data and need to be encrypted, thereby minimizing management. There is no need to encrypt the entire environment or array. Since data is encrypted as it leaves the host, the data is secure from the host to disk on storage where it resides. In other words, your data is protected anywhere it goes outside of the server.



The Local Protection Suite is available for the VNX and VNXe series arrays. For the VNXe series arrays, only VNX SnapSure is available. For the VNX series arrays, VNX SnapView, VNX SnapSure, and RecoverPoint/SE Continuous Data Protection solutions are available.



VNX SnapView is array-based software that runs on the VNX. Having the software present on the array has several advantages over host-based products: because snapshots execute on the storage system, no host processing cycles are spent managing information. Snapshots allow companies to make efficient use of their most valuable resource, information, by enabling parallel information access: multiple business processes can have concurrent, parallel access to information. SnapView creates block-based logical point-in-time views of production information (snapshots) and full point-in-time copies (clones). Snapshots use only a fraction of the original disk space, while clones require the same amount of disk space as the source.



VNX SnapSure creates a point-in-time view of a file system. SnapSure creates a checkpoint “file system” that is not a copy or a mirror image of the original file system. Rather, the checkpoint “file system” is a calculation of what the production file system looked like at a particular time; it is not an actual file system at all. The checkpoint is a read-only view of the production file system as it existed at that particular time, prior to subsequent changes.
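
The “calculation” works by saving the original contents of a block the first time it changes after the checkpoint, then reading unchanged blocks straight from the production file system. Here is a Python sketch of that copy-on-first-write behavior; it is illustrative only, and SnapSure’s actual SavVol mechanics differ:

```python
# Copy-on-first-write checkpoint sketch, illustrating the SnapSure idea only.
class FileSystem:
    def __init__(self, blocks):
        self.blocks = list(blocks)       # the production file system (PFS)
        self.checkpoints = []

    def checkpoint(self):
        ckpt = {}                        # SavVol-like store: only pre-change blocks
        self.checkpoints.append(ckpt)
        return ckpt

    def write(self, idx, data):
        for ckpt in self.checkpoints:
            if idx not in ckpt:
                ckpt[idx] = self.blocks[idx]   # save original before first change
        self.blocks[idx] = data

    def read_checkpoint(self, ckpt, idx):
        # The checkpoint "file system" is a calculation: the saved block if it
        # changed, otherwise a read-through to the unchanged production block.
        return ckpt.get(idx, self.blocks[idx])

fs = FileSystem(["a", "b", "c"])
ckpt = fs.checkpoint()
fs.write(1, "B")
print(fs.read_checkpoint(ckpt, 1))   # "b": the point-in-time view
print(fs.blocks[1])                  # "B": current production data
```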



RecoverPoint CDP is a continuous synchronous product that mirrors SAN volumes in real time between one or more arrays at a local site. RecoverPoint CDP maintains a history journal of all changes that can be used to roll back the mirrors to any point in time.



The Remote Protection Suite is available for the VNX and VNXe series arrays. For the VNXe series arrays, only the VNX Replicator is available. For the VNX series arrays, VNX MirrorView, VNX Replicator, and RecoverPoint/SE Continuous Remote Replication solutions are available.



Provisioning for disaster recovery is the major benefit of MirrorView mirroring. Destruction of the data at the primary site would cripple or ruin many organizations. After a disaster, MirrorView lets data processing operations resume with minimal overhead, enabling a quick recovery by creating and maintaining a copy of the data on another storage system. The criticality of business applications and information defines the recovery objectives: RPO defines the amount of data loss that is tolerable in the event of a disaster, while RTO defines the amount of time required to bring critical business applications back online after a disaster occurs.



VNX Replicator is an IP-based replication solution that produces a read-only, point-in-time copy of a source (production) file system. The VNX Replication service periodically updates this copy, making it consistent with the production file system. Replicator uses internal checkpoints to ensure availability of the most recent point-in-time copy; these internal checkpoints are based on SnapSure technology. This read-only replica can be used by an X-Blade in the same VNX cabinet (local replication) or by an X-Blade at a remote site (remote replication) for content distribution, backup, and application testing.

In the event that the primary site becomes unavailable for processing, Replicator enables you to fail over to the remote site for production. When the primary site becomes available, you can use Replicator to synchronize the primary site with the remote site and then fail back to the primary site for production. You can use the failover/reverse features to perform maintenance at the primary site or testing at the remote site. When a replication session is first started, a full backup is performed; after the initial synchronization, Replicator sends only changed data over IP.
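
The incremental-update idea, diffing two internal checkpoints and shipping only the changed blocks over IP, can be sketched as follows. This is a conceptual model, not the Replicator wire protocol:

```python
# Sketch of checkpoint-based incremental replication; not the Replicator protocol.
def diff(prev_ckpt, curr_ckpt):
    """Return only the blocks that changed between two point-in-time views."""
    return {idx: data for idx, data in curr_ckpt.items()
            if prev_ckpt.get(idx) != data}

def replicate(source_ckpts, replica):
    prev = {}
    for ckpt in source_ckpts:            # each periodic update of the session
        delta = diff(prev, ckpt)
        replica.update(delta)            # only changed data crosses the IP link
        print(f"sent {len(delta)} changed block(s)")
        prev = ckpt

replica = {}
ckpt1 = {0: "a", 1: "b", 2: "c"}         # initial sync sends everything
ckpt2 = {0: "a", 1: "B", 2: "c"}         # later update sends just block 1
replicate([ckpt1, ckpt2], replica)
print(replica)                           # read-only, point-in-time consistent copy
```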



RecoverPoint CRR differs from CDP in that replication occurs over the WAN to a remote site. The same components exist within a CRR environment, such as array-based CX splitters or host splitters. Just as in CDP, the CRR journals are used to track changes to every protected LUN within the environment. CRR, just like CDP, allows for application-aware bookmarks for Exchange and SQL which have been integrated into a RecoverPoint/SE replication environment.



The Application Protection Suite is available for the VNX and VNXe series arrays. Replication Manager is available for both VNXe and VNX arrays. Data Protection Advisor for Replication is available for VNX arrays only.



Replication Manager is designed to manage and automate snapshots and clones for EMC’s various replication products. It provides point-and-click management for VNX SnapView, VNX SAN Copy, and VNX SnapSure. Replication Manager provides a simple graphical interface to create, mount, restore, and expire data replicas within and between storage arrays using the array’s native technologies. The Replication Manager creates disk-based replicas of information on Unified storage arrays. It creates these replicas simply, quickly, and easily by automating many important data replication procedures. The Replication Manager automatically controls the complexities associated with replications of file systems and application volumes, such as Microsoft Exchange and SQL Server volumes, using a GUI to schedule and manage replication jobs.



EMC Data Protection Advisor (DPA) is the software that monitors the data center. It collects, correlates, and analyzes disparate data in order to present actionable advice, alerting, and trending feedback about your backup and replication infrastructure. DPA is capable of meeting the demands of businesses of any size, and offers a single solution to manage customer’s data protection operations. Its powerful analysis engine issues alerts about problematic conditions and failures, and its sophisticated reporting capabilities provide a graphical view of the quality and scope of data protection efforts.



This module covered VNX and VNXe models and architecture, VNX VG2 and VG8 Gateways, and VNX and VNXe theory of operations and basic functionality.



This module covers VNX and VNXe software suites and packs, data integrity and availability features, management objects, and storage objects.



This lesson covers VNX data integrity features including RAID types, disk formatting, parity shedding, rebuild avoidance, and Sniffer.



VNX supports multiple RAID levels and these levels can be mixed within an array. Once a RAID type is assigned to a disk or group of disks, all LUNs bound to that RAID group will be assigned that RAID type.

• RAID 0: striped data with no protection.
• RAID 1: a mirrored pair providing duplicate copies of data for protection.
• RAID 1/0: mirrored stripes providing duplicate copies of data for protection.
• RAID 3: striped data with parity protection stored on a single disk in the RAID group.
• RAID 5: striped data with parity protection stored across all disks in the RAID group.
• RAID 6: striped data with double parity protection stored across all disks in the RAID group.
• Individual Disk: no protection at all.
• Hot Spare: provides protection for faulted disks.



The RAID type of a LUN determines the type of redundancy, and therefore the data integrity that the LUN provides. A LUN in a RAID Group supports the RAID type of the first LUN bound on the Group. Any additional LUNs bound on the Group have that RAID type. A LUN in a non-RAID Group storage system has the RAID type that you select when you create it. The RAID types shown on this slide are available.



VNX disks are formatted at the factory at 520 bytes per sector. Of these, 512 bytes hold host-accessible data; the additional 8 bytes contain information used by the array for Background Verify, which performs data checking and possible correction at the Storage Processor. A Shed Stamp is used with RAID 5 and RAID 3 to perform “parity shedding” in a group with a single faulted disk (degraded mode). A Write Stamp is used with RAID 5 to detect incomplete writes of less than a full stripe. A Time Stamp is used with RAID 5 and RAID 3 to detect incomplete writes of a full stripe. A Checksum is used for all RAID types; it can detect, but not correct, errors in the sector.
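
To illustrate the layout, here is a Python sketch of a 520-byte sector with an 8-byte trailer. The field widths and XOR checksum used here are assumptions made for the example, not the actual on-disk format:

```python
# Illustrative 520-byte sector layout with an 8-byte metadata trailer; the field
# sizes and checksum algorithm here are assumptions, not the array's actual format.
import struct
from functools import reduce

def format_sector(data_512, shed_stamp=0, write_stamp=0, time_stamp=0):
    assert len(data_512) == 512
    # XOR-fold the data into a 16-bit checksum (a hypothetical scheme).
    words = struct.unpack(">256H", data_512)
    checksum = reduce(lambda a, b: a ^ b, words)
    trailer = struct.pack(">HHHH", shed_stamp, write_stamp, time_stamp, checksum)
    return data_512 + trailer            # 520 bytes as stored on disk

def verify_sector(sector_520):
    data, trailer = sector_520[:512], sector_520[512:]
    _, _, _, stored = struct.unpack(">HHHH", trailer)
    words = struct.unpack(">256H", data)
    return stored == reduce(lambda a, b: a ^ b, words)   # detect, not correct

sector = format_sector(b"\x00" * 512)
print(verify_sector(sector))                             # True
corrupted = b"\xff" + sector[1:]
print(verify_sector(corrupted))                          # False
```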



When a disk fails in a parity RAID group, writes destined for the failed disk are applied to the parity position, causing it to become a “shed” position. Once the stripe has a shed position, it no longer has any protection. Subsequent reads of the failed drive are served from the shed data at the previous parity position. RAID 6 has two parity positions in a stripe, referred to as Row Parity and Diagonal Parity; RAID 6 parity shedding therefore allows for the loss of two drives in the stripe without data loss, with two possible shed positions (one for each missing drive).
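
Since parity in these RAID types is an XOR across the data positions, the degraded-mode read and the shed write can be sketched in Python. This is a conceptual illustration, not the array’s implementation:

```python
# XOR-based sketch of RAID 5 parity shedding in degraded mode; illustrative only.
def xor(*blocks):
    out = bytearray(len(blocks[0]))
    for b in blocks:
        for i, byte in enumerate(b):
            out[i] ^= byte
    return bytes(out)

# A 4+1 stripe: data on disks 0-3, parity on disk 4.
data = [b"AAAA", b"BBBB", b"CCCC", b"DDDD"]
parity = xor(*data)
stripe = data + [parity]

failed = 2                                   # disk 2 dies
# Degraded read: rebuild the missing block from the survivors plus parity.
recovered = xor(*(stripe[i] for i in range(5) if i != failed))
assert recovered == b"CCCC"

# Parity shedding: write the recovered data INTO the parity position, so later
# reads of disk 2 come from the shed data; the stripe now has no redundancy.
stripe[4] = recovered
print("shed position holds:", stripe[4])
```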

Copyright © 2011 EMC Corporation. Do not Copy - All Rights Reserved.

76


Rebuild Avoidance prevents unnecessary rebuilds during a Backend Path Failure (BPF), the condition in which drives become inaccessible on one SP but remain accessible on the other SP.

Reasons for failure may include a cable failure, an LCC failure, a single-port failure on dual-ported drives, or a single-port failure on a dual-ported SATA paddle card.

Copyright © 2011 EMC Corporation. Do not Copy - All Rights Reserved.

77


By default, when a LUN is bound, Sniff Verify is enabled at a very low priority; whenever a LUN is idle, however, Sniff runs at an increased rate, which reduces the overall time required to complete a full pass. Background Verify starts automatically if, during the trespass (failover) of a LUN from a faulted SP to its peer, the OS determines that an inconsistent state is possible on the LUN. This can happen when I/O operations are in progress and left uncompleted when the SP faults. The OS verifies the state of the sectors at a much higher rate and, while the speed of the operation depends on several factors including host I/O load, normally completes within 24 hours.
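
The idle-rate behavior reduces to a few lines; the rate values below are arbitrary placeholders, not actual Sniffer settings.

```python
def sniff_rate(lun_idle: bool) -> int:
    """Return a sniff-verify rate in sectors per second (hypothetical
    numbers): a very low background rate under host load, and an
    increased rate while the LUN is idle so the pass finishes sooner."""
    LOW_RATE, IDLE_RATE = 10, 1000  # illustrative values only
    return IDLE_RATE if lun_idle else LOW_RATE
```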

Copyright © 2011 EMC Corporation. Do not Copy - All Rights Reserved.

78


This lesson covers VNX and VNXe data availability features including data mover failover, hot sparing, and ALUA.

Copyright © 2011 EMC Corporation. Do not Copy - All Rights Reserved.

79


The VNX series storage systems support X-Blade failover. By default, an X-Blade failover group includes one primary X-Blade and one standby X-Blade; the primary and standby X-Blades within a group must have identical I/O Module configurations. Multiple failover groups are possible, and different failover groups do not need to be configured identically. Creating a standby X-Blade ensures continuous access to file systems. When a primary X-Blade fails over to a standby, the standby assumes the identity and functions of the failed X-Blade. To act as a standby server, an X-Blade must first be configured as a standby for one or more primary X-Blades; for example, one standby X-Blade can act as a standby for three primaries. If one of the primary X-Blades fails, the standby immediately assumes the IP and MAC addresses and functions of the failed X-Blade, routing data along an alternate path to avoid interrupting service. The former standby is then a primary and is no longer available in a standby capacity. Any FTP, archive, or Network Data Management Protocol (NDMP) sessions that are active when the failure occurs are automatically disconnected and must be manually restarted.
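
A minimal sketch of the identity takeover described above, using hypothetical names; a real X-Blade failover also recovers file system and network state, which is omitted here.

```python
from dataclasses import dataclass

@dataclass
class XBlade:
    name: str
    ip: str = ""
    mac: str = ""
    role: str = "standby"   # "primary", "standby", or "faulted"

def fail_over(failed: XBlade, standby: XBlade) -> None:
    """The standby assumes the failed primary's IP and MAC addresses
    and becomes a primary; it is no longer available as a standby."""
    standby.ip, standby.mac = failed.ip, failed.mac
    standby.role = "primary"
    failed.role = "faulted"

primary = XBlade("server_2", ip="10.0.0.2", mac="00:60:16:aa:bb:01", role="primary")
standby = XBlade("server_3")
fail_over(primary, standby)
assert standby.role == "primary" and standby.ip == "10.0.0.2"
```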

Copyright © 2011 EMC Corporation. Do not Copy - All Rights Reserved.

80


When a disk failure occurs within a parity-protected RAID Group (RAID 3, RAID 5, or RAID 6) and a hot spare is invoked, the storage processor reads the remaining blocks in each stripe, calculates the missing data, and writes it to the hot spare. In the example above, Disk 2 has failed. The storage processor reads Blocks 1, 2, and 4 and Parity 1-4 from the remaining disks in the RAID Group (0, 1, 3, and 4). With this information, the storage processor can calculate the data for missing Block 3 and write it to the hot spare. It continues this action for each stripe in each LUN in the RAID Group until all missing data/parity information is reconstructed. This process is known as a rebuild.
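
The rebuild loop can be sketched directly from this description, again assuming XOR parity; the stripe layout and block sizes are simplified for illustration.

```python
def xor_blocks(*blocks: bytes) -> bytes:
    """Byte-wise XOR across equal-length blocks."""
    out = bytearray(len(blocks[0]))
    for blk in blocks:
        for i, b in enumerate(blk):
            out[i] ^= b
    return bytes(out)

def rebuild_to_hot_spare(stripes, failed_disk):
    """For each stripe, XOR the surviving blocks (data plus parity) to
    recover the failed disk's block, and write it to the hot spare."""
    hot_spare = []
    for stripe in stripes:
        survivors = [blk for i, blk in enumerate(stripe) if i != failed_disk]
        hot_spare.append(xor_blocks(*survivors))  # parity is among survivors
    return hot_spare

# A 4+1 stripe: Blocks 1-4 plus their XOR parity; Disk 2 (Block 3) failed.
stripe = [b"\x01" * 4, b"\x02" * 4, b"\x03" * 4, b"\x04" * 4]
stripe.append(xor_blocks(*stripe))           # Parity 1-4
spare = rebuild_to_hot_spare([stripe], failed_disk=2)
assert spare[0] == b"\x03" * 4               # Block 3 reconstructed
```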

Copyright © 2011 EMC Corporation. Do not Copy - All Rights Reserved.

81


When a disk failure occurs within a mirror protected RAID Group (RAID1 or RAID1/0) and a hot spare is invoked, the storage processor will read the blocks from the mirror of the faulted disk and write the missing data to the hot spare. In this example, Disk 2 has failed. The storage processor reads Blocks 1, 3, 5, 7, etc. from the mirror of the faulted disk (0) and writes this data to the hot spare. The Storage Processor continues this action for each stripe in each LUN in the RAID Group until the data/parity information is fully reconstructed. This process is known as a “rebuild”.
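
The mirror case needs no parity math: a rebuild is a straight copy from the surviving mirror, as this minimal sketch (hypothetical structures) shows.

```python
def rebuild_mirror(surviving_mirror: list, hot_spare: list) -> None:
    """RAID 1 / RAID 1/0 rebuild: copy every block of the surviving
    mirror to the hot spare; no reconstruction math is required."""
    hot_spare[:] = list(surviving_mirror)

disk0 = [b"blk%d" % i for i in range(8)]   # mirror of failed Disk 2
spare: list = []
rebuild_mirror(disk0, spare)
assert spare == disk0
```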

Copyright © 2011 EMC Corporation. Do not Copy - All Rights Reserved.

82


A proactive hot spare reduces the frequency of DU/DL (data unavailable/data loss) incidents by addressing the two-disk-failure scenario. Proactive hot sparing allows a drive to be replaced without operating in degraded mode during the hot spare rebuild operation. The copy to hot spare can be initiated by the OS or manually via the GUI or Secure CLI. When a proactive spare is requested for a drive, the system determines whether it meets the criteria for a Copy to Hot Spare operation (redundant RAID Group, units bound in the RAID Group, state of the units), then picks the most appropriate hot spare drive to invoke (the smallest drive that will fit). A proactive copy is a copy operation, NOT a rebuild operation. Proactive Copy (PaCO) checkpoints are tracked, and informational and error messages are logged in the ktrace, user, and event logs. General rules (a selection sketch follows the rules):

• A Copy to Hot Spare operation can only take place on a redundant RAID Group.
• There can be only one active Copy to Hot Spare operation per RAID Group.
• There can be multiple Copy to Hot Spare operations taking place as long as they are on different RAID Groups.
• There must be at least one hot spare left to be used for hot sparing only.

Exception: If only one hot spare is bound in the storage system, it can be used as a Proactive Spare (PS).
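
A sketch of how these selection rules might be applied; the argument structure and the reserve-spare interpretation are assumptions, and real criteria also consider drive type and location.

```python
def pick_proactive_spare(drive_gb, free_spares, total_spares,
                         rg_redundant, rg_has_active_paco):
    """Apply the Copy to Hot Spare rules above: redundant RAID Group
    only, one active operation per group, smallest fitting spare, and
    keep one spare free for ordinary hot sparing -- unless only one
    spare is bound in the system (the documented exception)."""
    if not rg_redundant or rg_has_active_paco:
        return None
    fitting = sorted(gb for gb in free_spares if gb >= drive_gb)
    if not fitting:
        return None
    if total_spares > 1 and len(free_spares) <= 1:
        return None            # must leave one spare in reserve
    return fitting[0]          # smallest drive that will fit

# Example: a 300 GB drive failing proactively; 300 GB and 600 GB spares bound.
assert pick_proactive_spare(300, [300, 600], 2, True, False) == 300
```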

Copyright © 2011 EMC Corporation. Do not Copy - All Rights Reserved.

83


ALUA (Asymmetric Logical Unit Access) is a request-forwarding implementation. In other words, the LUN is still owned by a single SP; however, if an I/O is received by an SP that does not own the LUN, that I/O is redirected to the owning SP using an inter-SP communication method. ALUA terminology:

• An optimized path is a path to the SP that owns the LUN.
• A non-optimized path is a path to an SP that does not own the LUN.

• This implementation should not be confused with an active-active model, because I/O is not serviced by both SPs for a given LUN (as it is in a Symmetrix array). LUN ownership is still in place; I/O is redirected to the SP owning the LUN.

• One port may provide full-performance access to a logical unit, while another port, possibly on a different physical controller, provides either lower-performance access or a subset of the available SCSI commands.

ALUA uses failover mode 4. In the event of a front-end path failure, there is no need to trespass LUNs immediately: the Upper Redirector driver routes the I/O to the SP owning the LUNs through the CMI channel. In the event of a back-end path failure, there is likewise no need to trespass LUNs immediately: the Lower Redirector routes the I/O to the owning SP through the CMI channel. The host is unaware of the failure, and the LUNs do not have to be trespassed. An additional benefit of the Lower Redirector is internal: the replication software drivers (including metaLUN components) are also unaware of the redirect. A sketch of this redirection follows.
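
A minimal sketch of the request-forwarding model, with hypothetical names; the real redirectors are layered drivers inside the array software, not objects like these.

```python
class StorageProcessor:
    def __init__(self, name, peer=None):
        self.name, self.peer, self.owned_luns = name, peer, set()

    def service_io(self, lun: int, io: str) -> str:
        """ALUA request forwarding: an I/O arriving on the non-owning
        SP is redirected to the owner over the CMI channel; the host
        never sees a failure or an explicit trespass."""
        if lun in self.owned_luns:
            return f"{self.name}: serviced {io} on LUN {lun} (optimized path)"
        # Redirector passes the I/O to the peer SP (non-optimized path).
        return self.peer.service_io(lun, io) + " [redirected via CMI]"

spa, spb = StorageProcessor("SPA"), StorageProcessor("SPB")
spa.peer, spb.peer = spb, spa
spa.owned_luns.add(7)
print(spb.service_io(7, "read"))   # arrives on SPB, serviced by SPA
```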

Copyright © 2011 EMC Corporation. Do not Copy - All Rights Reserved.

84


This lesson covers the VNX and VNXe management options including Unisphere and CLI.

Copyright © 2011 EMC Corporation. Do not Copy - All Rights Reserved.

85


Unisphere is web-based software that allows you to configure, administer, and monitor the VNX series.

Unisphere provides an overall view of what is happening in your environment plus an intuitive and easy way to manage EMC unified storage.

Copyright © 2011 EMC Corporation. Do not Copy - All Rights Reserved.

86


After you select a system from the storage system drop-down list, the Dashboard view is presented. Here you can view system alerts, system information, and a storage capacity summary for the current array.

Copyright © 2011 EMC Corporation. Do not Copy - All Rights Reserved.

87


From the Unisphere System view, you can select Hardware, Monitoring and Alerts, or Reports. The Hardware option lets you configure, view, and service the system's hardware components. The Monitoring and Alerts option lets you monitor system health and configure notifications for important events. The Reports option lets you view and generate several reports, including fault analysis, trespassed LUNs, and high-availability verification.

Copyright © 2011 EMC Corporation. Do not Copy - All Rights Reserved.

88


From the Unisphere Storage view, you can select the Shared Folders, LUNs, Virtual Tape, Data Migration, and Storage Configuration options.

The Shared Folders option is used to create and manage CIFS shares and NFS exports for File. The LUNs option lets users create and manage LUNs for Block. The Virtual Tape option lets users create and manage storage that emulates physical tape devices. The Data Migration option is used to create and manage file system migrations and SAN Copy sessions. The Storage Configuration option allows the creation and management of file systems, storage pools, and volumes.

Copyright © 2011 EMC Corporation. Do not Copy - All Rights Reserved.

89


From the Unisphere Hosts view, you can select Host List, Virtualization, and Storage Groups. The Host List option shows properties of hosts connecting to the storage system such as connectivity and assigned LUNs. The Storage Group option allows you to create and manage storage groups. The Virtualization option allows you to see properties of VMware servers and virtual machines connected to the storage system.

Copyright © 2011 EMC Corporation. Do not Copy - All Rights Reserved.

90


The Data Protection view allows users to manage their replication technologies. Wizards for Snapshots, Clones, and Mirrors are provided here.

Copyright © 2011 EMC Corporation. Do not Copy - All Rights Reserved.

91


The Settings view allows for the management of Network, Security, and Data Mover Parameters.

Copyright © 2011 EMC Corporation. Do not Copy - All Rights Reserved.

92


The Support view provides resources to help the user manage Unisphere. These include “How To” procedures, Unisphere help, Community resources, product support pages, downloads, and EMC support.

Copyright © 2011 EMC Corporation. Do not Copy - All Rights Reserved.

93


There are two ways to use the CLI for the VNX Series Platform:

1. The Control Station runs a customized Linux kernel and provides the VNX for File management services used to configure, manage, and monitor the X-Blades. A second Control Station may be present in some models for redundancy. If VNX for File or Unified is present, you can connect to the Control Station via serial console or SSH to troubleshoot many VNX for File hardware components.

2. If VNX for Block is present, Navisphere Secure CLI can be used. It is a client application that allows simple operations on the EMC VNX series platform and some legacy storage systems. It uses the Navisphere 6.X security model, which includes role-based management, auditing of all user change requests, management data protected with SSL, and centralized user account management.
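
For scripted use, Secure CLI commands can be wrapped from Python, as in the sketch below. The getagent query is a commonly documented naviseccli command, but exact flags and output depend on the installed CLI version, so treat the command line as an assumption.

```python
import subprocess

def naviseccli(sp_address: str, *args: str) -> str:
    """Run a Navisphere Secure CLI command against one SP. Assumes
    naviseccli is installed and that a security file or credentials
    have already been configured on this host."""
    cmd = ["naviseccli", "-h", sp_address, *args]
    return subprocess.run(cmd, capture_output=True, text=True,
                          check=True).stdout

# Example: query basic agent/array information from SP A.
# print(naviseccli("10.0.0.10", "getagent"))
```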

Copyright © 2011 EMC Corporation. Do not Copy - All Rights Reserved.

94


Click the Launch button to view this presentation. When the presentation is complete, return to the course and advance to the next slide.

Copyright © 2011 EMC Corporation. Do not Copy - All Rights Reserved.

95


This lesson covers VNX and VNXe storage objects including LUNs and Storage Pools.

Copyright © 2011 EMC Corporation. Do not Copy - All Rights Reserved.

96


Physical disks are held in Disk Array Enclosures (DAEs) within the array and are combined into RAID Groups. A RAID Group is a set of disks on which you bind one or more logical units (LUNs). A logical unit is a portion of a RAID Group that is presented to the client as a logical disk; logical units let users subdivide their RAID Groups into convenient sizes for host use. With a traditional LUN, all of its space is allocated at creation time.

Copyright © 2011 EMC Corporation. Do not Copy - All Rights Reserved.

97


A RAID Group is a set of disks (up to 16 in a group) with the same capacity and redundancy, on which you create one or more traditional LUNs. A RAID 6 group must have an even number of disks, with a minimum of four; a RAID 5 group must include at least three disks; a RAID 3 group must include five or nine disks; and a RAID 1/0 group must have an even number of disks. The storage-system model determines the number of RAID Groups it can support. All the capacity in the group is available to the server. A RAID Group should consist of all SAS or all Flash drives, not a mix of the two. Most RAID types can be expanded, with the exception of RAID 1, RAID 3, RAID 6, and hot spares. Most RAID types can be defragmented to reclaim gaps in the RAID Group, with the exception of RAID 6.
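
The disk-count rules above translate into a small validation helper; this is an illustrative sketch, not an EMC tool.

```python
def valid_raid_group(raid_type: str, ndisks: int) -> bool:
    """Disk-count rules from the text: RAID 6 needs an even number of
    disks (minimum four), RAID 5 at least three, RAID 3 exactly five
    or nine, RAID 1/0 an even number; any group holds at most 16 disks."""
    if not 1 <= ndisks <= 16:
        return False
    if raid_type == "RAID6":
        return ndisks >= 4 and ndisks % 2 == 0
    if raid_type == "RAID5":
        return ndisks >= 3
    if raid_type == "RAID3":
        return ndisks in (5, 9)
    if raid_type == "RAID1/0":
        return ndisks >= 2 and ndisks % 2 == 0
    return True   # other types: no count rule listed in the text

assert valid_raid_group("RAID5", 5)
assert not valid_raid_group("RAID3", 6)
```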

Copyright © 2011 EMC Corporation. Do not Copy - All Rights Reserved.

98


Storage on VNX can be provisioned from two types of storage pools: Pools and RAID Groups. A Pool is a collection of disks dedicated for use by thin LUNs. A Pool is somewhat similar to a RAID Group; however, a Pool can contain from a few disks to hundreds of disks, whereas a RAID Group is limited to 16 disks. Pools are simple to create because they require only three user inputs:

• Pool Name
• Resources (number of disks)
• Protection level: RAID 5 or RAID 6

Pools are more flexible: they can consist of any supported disk drives, and a storage system can contain one or many Pools. The smallest Pool is three drives for RAID 5 and four drives for RAID 6. Pools are also easy to modify: you can expand a Pool by adding drives and contract it by removing drives. Note: EMC recommends a minimum of five drives for RAID 5 and eight drives for RAID 6.
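
The three inputs map onto a simple constructor, as sketched below; the minimum and recommended drive counts come straight from the text, while the class itself is purely illustrative.

```python
from dataclasses import dataclass

MIN_DRIVES = {"RAID5": 3, "RAID6": 4}    # smallest legal Pool
RECOMMENDED = {"RAID5": 5, "RAID6": 8}   # EMC-recommended minimums

@dataclass
class Pool:
    name: str
    drives: int
    protection: str   # "RAID5" or "RAID6"

    def __post_init__(self):
        if self.protection not in MIN_DRIVES:
            raise ValueError("Pool protection must be RAID5 or RAID6")
        if self.drives < MIN_DRIVES[self.protection]:
            raise ValueError(f"{self.protection} pool needs at least "
                             f"{MIN_DRIVES[self.protection]} drives")

    def expand(self, extra: int) -> None:
        """Pools grow simply by adding drives."""
        self.drives += extra

pool = Pool("Pool 0", drives=5, protection="RAID5")
pool.expand(5)   # now 10 drives
```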

Copyright © 2011 EMC Corporation. Do not Copy - All Rights Reserved.

99


Click the Launch button to view this presentation. When the presentation is complete, return to the course and advance to the next slide.

Copyright © 2011 EMC Corporation. Do not Copy - All Rights Reserved.

100


This module covered VNX and VNXe software suites and packs, data integrity and availability features, management objects, and storage objects.

Copyright © 2011 EMC Corporation. Do not Copy - All Rights Reserved.

101


This course covered VNX and VNXe models, features, data features, architecture, and management. This concludes the training. Proceed to the course assessment on the next slide.

Copyright © 2011 EMC Corporation. Do not Copy - All Rights Reserved.

102


Copyright © 2011 EMC Corporation. Do not Copy - All Rights Reserved.

103

