
Copyright 2009 EMC Corporation. Do not Copy - All Rights Reserved.

EMC Storage Virtualization Foundations

2009 EMC Corporation. All rights reserved.

Welcome to EMC Storage Virtualization Foundations. Copyright 2009 EMC Corporation. All rights reserved. These materials may not be copied without EMC's written consent. EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice. THE INFORMATION IN THIS PUBLICATION IS PROVIDED "AS IS." EMC CORPORATION MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Use, copying, and distribution of any EMC software described in this publication requires an applicable software license. EMC, EMC, EMC ControlCenter, AdvantEdge, AlphaStor, ApplicationXtender, Avamar, Captiva, Catalog Solution, Celerra, Centera, CentraStar, ClaimPack, ClaimsEditor, ClaimsEditor Professional, CLARalert, CLARiiON, ClientPak, CodeLink, Connectrix, Co-StandbyServer, Dantz, Direct Matrix Architecture, DiskXtender, DiskXtender 2000, Document Sciences, Documentum, EmailXaminer, EmailXtender, EmailXtract, enVision, eRoom, Event Explorer, FLARE, FormWare, HighRoad, InputAccel, InputAccel Express, Invista, ISIS, Max Retriever, Navisphere, NetWorker, nLayers, OpenScale, PixTools, Powerlink, PowerPath, Rainfinity, RepliStor, ResourcePak, Retrospect, RSA, RSA Secured, RSA Security, SecurID, SecurWorld, Smarts, SnapShotServer, SnapView/IP, SRDF, Symmetrix, TimeFinder, VisualSAN, VSAM-Assist, WebXtender, where information lives, xPression, xPresso, Xtender, Xtender Solutions; and EMC OnCourse, EMC Proven, EMC Snap, EMC Storage Administrator, Acartus, Access Logix, ArchiveXtender, Authentic Problems, Automated Resource Manager, AutoStart, AutoSwap, AVALONidm, C-Clip, Celerra Replicator, CLARevent, Codebook Correlation Technology, Common Information Model, CopyCross, CopyPoint, DatabaseXtender, Digital Mailroom, Direct Matrix, EDM, E-Lab, eInput, Enginuity, FarPoint, FirstPass,
Fortress, Global File Virtualization, Graphic Visualization, InfoMover, Infoscape, MediaStor, MirrorView, Mozy, MozyEnterprise, MozyHome, MozyPro, NetWin, OnAlert, PowerSnap, QuickScan, RepliCare, SafeLine, SAN Advisor, SAN Copy, SAN Manager, SDMS, SnapImage, SnapSure, SnapView, StorageScope, SupportMate, SymmAPI, SymmEnabler, Symmetrix DMX, UltraFlex, UltraPoint, UltraScale, Viewlets, VisualSRM are trademarks of EMC Corporation. All other trademarks used herein are the property of their respective owners.

EMC Storage Virtualization Foundations - 1


Course Objectives
Upon completion of this course, you will be able to:
- Define a virtual infrastructure
- List VMware product differences
- Cite basic concepts of file-level virtualization
- Describe Rainfinity features, functions, and benefits
- Identify benefits, features, and advantages of an Invista solution
- Explain the concept and benefits of storage virtualization


The objectives for this course are shown here. Please take a moment to read them.


Virtualization Technologies
- Server virtualization with VMware Infrastructure
- File virtualization with EMC Rainfinity File Virtualization Appliance
- Block-storage virtualization with EMC Invista
[Slide diagram: virtual machines (APP/OS stacks) hosted on VMware Infrastructure (*formerly ESX Server); a Global Namespace over an IP network fronting a NAS storage pool of NetApp and EMC file servers; and Invista virtual volumes, running in the storage network, over physical storage]


Complementing virtualization services are the virtualization technologies: server, file, and block. We all know these EMC virtualization technologies well: VMware, Rainfinity, and Invista. Take the time to learn about these technologies if you are not familiar with them. There are occasions when one of them is an obvious recommendation to address a customer problem.


VMware vSphere 4 Virtualization Overview


Upon completion of this module, you will be able to:
- Define virtualization concepts
- Describe the VMware vSphere infrastructure and its components
- Describe the architecture of an ESX/ESXi host
- Describe the architecture of a Virtual Machine (VM)
- Describe vCenter Server and its usage


The objectives for this module are shown here. Please take a moment to read them.


What is Virtualization?
- Virtualization allows you to run multiple operating systems and applications on a single computer
- Virtualization allows consolidation of many servers into a single physical computer
- Two implementation solutions:
  - Hypervisor
  - Hosted


Virtualization is a technology that enables consolidation of many servers into a single physical computer. Virtualization allows multiple operating systems to run simultaneously on a single hardware platform. The virtualization layer is a thin piece of software installed between the hardware and the operating system (OS). It dynamically partitions the physical resources, such as CPU, memory, and I/O devices, among the concurrently running machines in a virtual environment. The task of the virtualization software is to provide an independent operating environment for each operating system and maintain the logical separation of physical resources.

Virtualization is often compared incorrectly with simulation, emulation, or terminal services, so it is important to understand how virtualization differs from these technologies. Simulation provides the look and feel of an environment but does not represent the physical environment; examples include virtual reality games or the flight simulators used to train pilots. Emulation is the ability of a software program or hardware device to imitate another software program or hardware device; it is predominantly used for training, demonstration, and testing when the original hardware and software are not available. Terminal services allow remote access to a server by many users simultaneously; Citrix MetaFrame is an example. In contrast to these technologies, virtualization allows multiple operating systems to be hosted on a single computer, giving the users and applications within each operating system all the computing resources they need, as if they owned the entire computer system.

There are two implementation solutions for virtualization on x86 (the CPU architecture implemented by Intel, AMD, Cyrix, and others). A hosted virtualization solution installs a thin virtualization layer on top of an operating system, as an application, and provides virtual resources for the guest operating systems to run. Examples of hosted virtualization solutions are VMware Server, Workstation, ACE, and VMware Player. A hypervisor virtualization solution is a thin layer of software installed directly on top of the hardware (bare metal) that creates virtual resources for the guest operating systems. Examples of hypervisor virtualization solutions are ESX and ESXi.


VMware vSphere 4 Infrastructure

[Slide diagram: vSphere 4 infrastructure and application services, including firewall, anti-virus, intrusion prevention and detection; dynamic resource sizing; live migration; storage management and replication; virtual appliances; and network management]


Virtualization enables consolidation of service offerings to end users in a cost-effective manner. Flexibility, scalability, availability, and manageability are the key benefits of virtualization, providing an optimal computing infrastructure. Services in a datacenter are provisioned for internal users in a virtual environment, which can be referred to as an internal cloud, allowing seamless access to resources such as CPU, storage, and network. Extending the services to external users with an external cloud is also a key requirement in a virtualized environment. VMware has adopted a new approach with the release of the vSphere 4 suite of products to better integrate the various components and create a unified cloud infrastructure (internal and external).

Infrastructure services are the segment of vSphere that focuses on the physical components of a computing environment. Infrastructure services consist of three components: vCompute, vStorage, and vNetwork. vCompute virtualizes server resources (CPU, memory, BIOS, chipsets, etc.) by using a hypervisor, which creates an aggregated pool of resources for operating systems (OS) and applications to use. The OS and applications are installed within a virtual object known as a Virtual Machine (VM). The VM draws resources from the resource pool so that various OS/applications can run on the server with complete logical isolation. ESX/ESXi hosts provide the physical resources used to run virtual machines. They are bare-metal, efficient, and reliable hypervisors that provide a virtualization layer by abstracting the processor, memory, storage, network, and other hardware components. ESX is managed with a built-in service console or the vSphere command-line interface (vCLI). ESXi is lightweight (32 MB footprint) software and is managed with a BIOS-like direct console or vCLI. VMware offers ESXi free of charge so that customers can experience the usefulness of virtualization technologies.

The vStorage service addresses virtualization of storage devices such as SCSI, FC, iSCSI, and NFS storage systems. The ESX/ESXi server virtualizes storage into a logical storage unit known as a datastore. There are three types of datastores supported by an ESX/ESXi host: Virtual Machine File System (VMFS), Raw Device Mapping (RDM), and Network File System (NFS). vStorage services also provide thin provisioning to improve the utilization of expensive storage systems. With thin provisioning, the OS can see and use a larger disk than the physical storage actually allocated to it, so customers can better utilize and manage expensive storage systems.

The vNetwork service addresses virtualization of network connectivity. The vNetwork service supports virtual distributed switches (VDS) as well as virtual switches. The virtual switch provides network connectivity for VM to VM, VM to external host, and VMkernel services such as NFS and iSCSI to network-attached storage. It also provides connectivity to the service console for management of the ESX server.
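The thin-provisioning idea can be sketched as a simple accounting exercise. The sketch below is a hypothetical model with an invented `Datastore` class and made-up capacities, not VMware's implementation; it only illustrates why a guest can see a larger disk than the physical space consumed.

```python
# Hypothetical accounting model for thick vs. thin provisioning.
# Class names and capacities are invented for illustration.

class Datastore:
    def __init__(self, physical_gb):
        self.physical_gb = physical_gb
        self.allocated_gb = 0   # thick disks: reserved up front
        self.written_gb = 0     # thin disks: consumed only as data is written

    def provision_thick(self, size_gb):
        # Thick provisioning reserves the full virtual disk size immediately.
        if self.allocated_gb + size_gb > self.physical_gb:
            raise RuntimeError("out of physical capacity")
        self.allocated_gb += size_gb

    def provision_thin(self, size_gb):
        # Thin provisioning records only the logical size; physical blocks
        # are consumed later, as the guest actually writes data.
        return {"logical_gb": size_gb, "written_gb": 0}

    def write(self, thin_disk, gb):
        thin_disk["written_gb"] += gb
        self.written_gb += gb

ds = Datastore(physical_gb=100)
disk = ds.provision_thin(80)   # the guest OS sees an 80 GB disk...
ds.write(disk, 10)             # ...but only 10 GB of physical space is used
ds.provision_thick(50)         # a thick 50 GB disk is reserved immediately
print(ds.allocated_gb, ds.written_gb)  # -> 50 10
```

The over-commit risk is also visible in the model: the thin disk's logical size (80 GB) plus the thick reservation (50 GB) already exceed the 100 GB of physical capacity, which is why thin-provisioned datastores need capacity monitoring.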


VMware vSphere 4 Infrastructure (Cont)



Application services provide availability, security, and scalability for the virtual IT environment; they focus on the logical aspects of the infrastructure. To achieve business goals for availability, VMware vSphere has introduced VMotion, HA, FT, and DRS.

VMotion technology allows the movement of a live VM from one server to another, as long as both servers are in the same cluster and share the same datastore. This is particularly useful for providing availability during hardware maintenance windows or for moving VMs off a failing server. VMotion can also be used for load balancing when there is performance degradation on an ESX host. High Availability (HA) is a technology that allows you to create a cluster without any vendor-specific clustering solution. It provides the functionality of a cluster, enabling the OS and applications to restart on a secondary server if the primary server fails. Fault Tolerance (FT) provides data protection and business continuity; FT technology ensures zero data loss for applications. FT is based on vLockstep technology, in which the primary and secondary virtual machines maintain the same data and instruction state at any given point in time.

vSphere also provides greater security and addresses compliance requirements. vShield Zones maintain network fencing for applications in a virtualized shared-resource-pool environment, creating trust and network segmentation of users and data accessibility. VMsafe is a set of Application Programming Interfaces (APIs) that third-party vendors can use to write software protecting virtual machines in terms of access to memory, CPU, networking, process execution, and storage resources; this feature is planned for a future release. Network/application firewalls, antivirus protection, Intrusion Detection (ID), and Intrusion Prevention (IP) are some of the other features available with vSphere.

VMware vSphere 4 is highly scalable. A VM can now use up to 8 vCPUs and 255 GB of memory. New virtual hardware can be added to a VM while it is up and running; for example, you can add CPU, memory, an Ethernet card, or a hard disk to a powered-on virtual machine, and disk capacity can also be increased while the VM is running. The core component of vSphere is the ESX/ESXi server; the next slide details its architecture.


Introduction to ESX

X86 Architecture


ESX is a hypervisor (also known as the VMkernel) that installs on bare-metal x86 hardware to create a virtual platform. The bottom portion of the slide depicts the physical hardware components (CPU, memory, storage, and network) on which ESX is installed. The hypervisor performs certain tasks (scheduling CPU resources, memory management, and all low-level I/O jobs) on the physical hardware components based on requests from the Virtual Machine Monitor (VMM). The VMM is a process that runs within the VMkernel for each Virtual Machine (VM) running on an ESX server. It provides the abstraction of hardware to the guest OS, which runs inside the VM.

The VM presents a virtual hardware environment with virtual CPU, memory, network/storage controllers, and disks. A virtual machine provides a complete system environment that the guest OS treats as a physical system, although the VM is just a set of configuration, state, and storage files that represent a virtual chipset (Intel 440BX), BIOS, VGA, CPU, RAM, disk/network controllers, and disks. VMware ESX ships in two versions: ESX and ESXi. We will learn about the new features of ESX and ESXi hosts in the next slide.


ESX and ESXi Architecture


Here we learn about the ESX and ESXi servers. The major differentiator between ESX and ESXi is the service console: ESX has a service console, whereas ESXi does not. In vSphere, the service console is encapsulated in a VM, which is the first VM to be installed on an ESX server. The new features of vSphere ESX and ESXi 4.0 are:
- 64-bit system architecture: both the ESX and ESXi hypervisors now support 64-bit CPUs (Intel and AMD).
- Improved performance and scalability: ESX 4 has greater transaction I/O processing capacity and a more scalable architecture, supporting up to 64 CPU cores and 1 TB RAM per host.
- Network and storage optimization: improved paravirtualized SCSI and iSCSI device drivers, and support for 10 GigE networking with Jumbo Frames. Support for IPv6 is a new feature as well.
- Better support for storage and network consolidation through dynamic storage/file system expansion, thin provisioning, and distributed virtual switches (allowing network aggregation across hosts).
- DirectPath I/O: VMware DirectPath I/O gives a guest OS direct access to a physical network or storage controller.
- Improved security with the VMsafe API and Trusted Platform Module (TPM).
- Many more management options, including CLI options, virtual management appliances, programming interfaces, and clients.


Virtual Machine (VM)


- Set of virtual hardware where OS/applications run
  - Virtual hardware (version 7)
- Viewed as a set of files
- Provisioning methods
  - Cloning
  - Template
- Virtual appliances
- Flexibility
  - Hot-pluggable devices
  - Virtual disk size increase
  - vApp
- Scalability
  - 8-way vCPU, 256 GB RAM
  - VMDirectPath for VM

[Slide diagram: a Virtual Machine (VM) containing applications and an operating system running on virtualized hardware: CPU, NIC, memory, HDD, CD]

The Virtual Machine (VM) is a virtual hardware representation on which the OS and its applications run. The OS running in a VM is called a guest operating system. VMware provides a list of guest operating systems supported on its platform; it is always good practice to check the latest compatibility document on the VMware website for current operating system and hardware device support. The new VM hardware version 7 is compatible with ESX/ESXi 4.0 hosts and includes many new features:
- New storage virtual devices: SAS (Serial Attached SCSI) and IDE virtual devices
- VMXNET Generation 3
- 8-way Virtual SMP
- 256 GB RAM
- Enhanced VMotion Compatibility (EVC)
- Virtual Machine Hot Plug Support (memory, CPU, and devices without VM shutdown)

A virtual machine is a discrete set of files. The following files constitute a virtual machine:
- Configuration file (VM_name.vmx)
- Virtual disk characteristics (VM_name.vmdk)
- Preallocated virtual disk (VM_name-flat.vmdk)
- VM BIOS (VM_name.nvram)
- Swap file (VM_name.vswp)
- Virtual machine snapshot (VM_name.vmsd)
- Log file (vmware.log)

Because a VM is a discrete set of files, it is portable: it can be cloned and used as a template to provision multiple VMs from a source VM. VMware, along with other companies, has developed the Open Virtualization Format (OVF) to simplify VM deployment and packaging. OVF is a format for packaging an OS, preconfigured applications, and other software assembled for a specific purpose. For example, web, database, and application servers can be preconfigured and packaged into an appliance using the OVF format.

VMware virtual machines are highly flexible: devices can be hot-plugged and virtual disk size increased while the VM is up and running. You can add CPU, memory, and other I/O devices to a VM, and increase the size of a virtual disk, without shutting it down. The maximum memory size of a VM has been increased to 256 GB to meet application requirements. A VM can also take advantage of virtual SMP (symmetric multiprocessing), up to 8 virtual CPUs, for CPU-intensive workloads. The VMDirectPath for Virtual Machines feature is primarily targeted at applications that can benefit from direct access by the guest OS to I/O devices. If a guest OS is enabled with this feature, other virtualization features such as VMotion, hardware independence, and sharing of physical I/O devices are not available to that VM.
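Because a VM's configuration file (.vmx) is plain key = "value" text, the "VM as a set of files" idea is easy to see programmatically. Below is a hedged sketch: the parser is minimal (real .vmx files have more syntax than this handles), and the sample keys and values are invented for illustration, not taken from a real VM.

```python
# Hypothetical minimal parser for .vmx-style key = "value" text.
# Sample content is made up; a real .vmx file contains many more keys.

def parse_vmx(text):
    config = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and comments
        key, _, value = line.partition("=")
        config[key.strip()] = value.strip().strip('"')
    return config

sample = '''
displayName = "web01"
memsize = "4096"
numvcpus = "2"
scsi0:0.fileName = "web01.vmdk"
'''

cfg = parse_vmx(sample)
print(cfg["displayName"], cfg["memsize"])  # -> web01 4096
```

The same property is what makes cloning and templating simple: copying a VM is, at bottom, copying a handful of files and adjusting a few keys like the display name and disk file references.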


vSphere Components
- VMware vSphere Client
- VMware vSphere Web Access
- VMware vSphere vStorage VMFS
- VMware Virtual SMP


Let us look in more detail at some important components of VMware vSphere.
- VMware vSphere Client: the interface to the vCenter and ESX/ESXi servers. It allows users to connect remotely to the vCenter and ESX/ESXi servers and perform tasks according to their user rights.
- VMware vSphere Web Access: allows an internet browser to manage virtual machines and access the remote console.
- VMware vSphere vStorage VMFS: a cluster file system that allows simultaneous read and write access to a storage device, and the default file system for virtual machines. It allows live migration of running virtual machines from one physical server to another, and restarts a failed virtual machine on a healthy physical server as long as that server participates in the cluster configuration. VMFS is optimized to store large files, such as virtual disks and server memory images, and can grow to a maximum size of 64 TB.
- VMware Virtual SMP: allows a virtual machine (VM) to use multiple physical CPUs simultaneously, which is very useful for CPU-intensive applications. This is a licensed component and requires a separate license to activate.


VMware vCenter Components


The VMware vCenter architecture is a collection of services and interfaces, organized into the following types:
- Core services: management services that perform VM provisioning, host and VM configuration, resource and inventory management, statistics and logging, alarms and event management, and task scheduling.
- Distributed services: services such as VMotion, Distributed Resource Scheduler (DRS), and High Availability (HA).
- Plug-ins: components installed separately from the base products that require plug-ins in the vCenter Server, such as VMware Update Manager and Converter.
- Active Directory interface: for client login and authentication of domain users.
- vSphere Application Programming Interface (API): for third parties to develop custom applications that use vCenter.
- ESX/ESXi management: the vCenter server installs an agent (vpxa) on the ESX/ESXi host for communication with the host agent (hostd).
- Database interface: provides access to the database. vCenter comes bundled with the SQL Express database for small instances (five hosts and fifty VMs); however, it is a best practice to use an external database instance for production and enterprise use. The supported databases are MS-SQL 2005/2008 and Oracle 10g and 11g; for a complete list of database requirements and compatibility, please check the vCenter installation guide.

In the next slide, we will see more details about vCenter Server functionality.


VMware vCenter Server


[Slide diagram: administrators access vCenter Server via a web browser or the vSphere Client]


The vCenter server provides centralized management, configuration, provisioning, and automation of the virtual IT environment. VMware vCenter Server is a suite of products that allows you to configure, manage, and monitor the VMware virtual infrastructure through the vSphere Client or the vSphere web-based client. vCenter Server is installed on a Windows server, physical or virtual (VM); it cannot be installed on other operating systems such as Linux or UNIX. VMware add-on components such as VMotion, High Availability (HA), and Distributed Resource Scheduler (DRS) are installed and managed through VMware vCenter Server.

Multiple vCenter servers can be managed by a single vSphere Client, providing a consolidated view of all of them. vCenter servers are interconnected in Linked Mode, which allows administrators to share roles and licenses across multiple vCenter servers; Linked Mode uses MS-ADAM (Microsoft Active Directory Application Mode). Host Profiles in a vCenter server are policy-based rules that enforce compliance, along with configuration settings for network, storage, and security, across multiple hosts; this simplifies host configuration management. vServices simplify deployment and ongoing management of multi-tiered applications running on multiple VMs by encapsulating them into a single vService entity. Licensing is an application in the vCenter server suite that centralizes the reporting and management of license keys in the vCenter 4.0 environment.

vCenter also provides performance charts that give a single view of CPU, disk, memory, and network usage. These aggregated views can be drilled down to a detailed view of resources, such as a datastore, or of events. Alarms can be set for events, and low-level hardware and host events can be displayed rapidly for fault isolation. vCenter also increases visibility through virtual infrastructure reports and topology maps, and provides detailed resource usage statistics for CPU and memory at both the VM and resource pool levels. The VI Update Service allows remote upgrade of older virtual infrastructure, rollback, and post-installation scripts.


Module Summary
Key points covered in this module:
- Virtualization concepts
- VMware vSphere infrastructure architecture
- Architecture of an ESX/ESXi host
- Virtual Machine definition
- List of VMware vSphere components
- VMware vSphere vCenter Server framework


These are the key points covered in this module. Please take a moment to review them.


EMC Rainfinity
Upon completion of this module, you will be able to:
- Describe basic concepts of file-level virtualization
- Identify Rainfinity terminology
- Describe Rainfinity theory of operations
- Describe Rainfinity features and functions
- Identify Rainfinity platforms
- List the benefits of a Rainfinity solution


The objectives for this module are shown here. Please take a moment to read them.


File Level Virtualization Basics


Before file-level virtualization (IP network → NAS devices/platforms):
- Every NAS device is an independent entity, physically and logically
- Underutilized storage resources
- Downtime caused by data migrations

After file-level virtualization (IP network → NAS devices/platforms):
- Break dependencies between end-user access and data location
- Storage utilization is optimized
- Non-disruptive data migrations


EMC Rainfinity virtualizes NAS environments by dynamically moving information without disruption to clients or applications, making those environments simple to manage. Rainfinity is an out-of-band file-system virtualization solution that enables non-disruptive data movement in multi-vendor NAS environments.


Rainfinity Overview
- Rainfinity is a dedicated hardware/software solution that manages file-oriented (NFS/CIFS) storage access
- Provides transparent data mobility
- Acts as a bridge between clients and storage servers
- Manages data and servers in heterogeneous environments
  - Microsoft Windows
  - UNIX / Linux
- Assists with:
  - Data migration and consolidation
  - Storage optimization
  - Global Namespace Management
  - Tiered storage management


Rainfinity is a dedicated hardware/software platform solution that supports the management of file-oriented data and the servers that hold it. File-oriented data is data that is accessed via CIFS or NFS. Rainfinity allows clients to transparently access the data that it is managing. Virtualization is an abstraction of the logical and physical paths to data: the client is unaware of where the data physically resides. The namespace can be managed by industry-standard mechanisms such as Distributed File System (DFS) in a Windows environment, and NIS/Automount and LDAP in a UNIX environment. Rainfinity does not create its own namespace; it integrates with these existing industry namespaces.

Rainfinity assists administrators with file-oriented storage optimization, consolidation, and disaster recovery. It leverages existing industry namespaces and allows for a single point of namespace management. As data becomes less valuable over time, it can be moved from one storage tier to a less expensive tier while remaining available on disk rather than on tape. Freeing up expensive storage enables a new project or application to come online; consequently, tiered storage results in much more effective storage utilization. These management functions work in a heterogeneous environment: Rainfinity supports Microsoft Windows as well as UNIX and Linux sites.

Rainfinity is designed to be installed as a bridge between file servers and clients on the network. To achieve this, Rainfinity functions as a Layer 2 switch. This enables Rainfinity to see and process traffic between clients and file servers with minimal modification to the existing network.
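The namespace abstraction can be illustrated with a toy lookup table. The logical paths and server names below are invented, and a Python dict stands in for what is really DFS, NIS/Automount, or LDAP; the point is only that a migration rewrites the mapping, not the clients.

```python
# Toy sketch of a global-namespace lookup. Paths and servers are made up;
# real deployments use DFS, NIS/Automount, or LDAP, not a dict.

namespace = {
    "/corp/engineering": "//celerra-01/eng_share",
    "/corp/finance":     "//netapp-02/fin_share",
}

def resolve(logical_path):
    # Clients address data by a stable logical path; the namespace layer
    # maps it to whichever file server currently holds the data.
    for prefix, physical in namespace.items():
        if logical_path.startswith(prefix + "/"):
            return logical_path.replace(prefix, physical, 1)
    raise KeyError(logical_path)

print(resolve("/corp/engineering/specs/design.doc"))
# -> //celerra-01/eng_share/specs/design.doc

# After a migration, only the mapping changes; client paths stay the same.
namespace["/corp/engineering"] = "//celerra-02/eng_share"
print(resolve("/corp/engineering/specs/design.doc"))
# -> //celerra-02/eng_share/specs/design.doc
```

This is exactly the dependency break the slide describes: end-user access is bound to the logical name, while data location is a private detail of the mapping layer.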


File Virtualization Appliance (FVA)


- Abstracts file-based storage access over IP-based networks
  - Physical storage location is transparent to users and applications
  - File-based storage systems are seen as a logical pool of resources
  - Provides constant access to data while moving NFS and/or CIFS data
- Simplifies multiprotocol data migration
- Replicates files and preserves multiprotocol attributes

[Slide diagram: clients connect through the FVA, acting as a Layer 2 switch (bridge), to a NAS storage pool of file servers: NetApp, Centera, and Celerra]

The File Virtualization Appliance, or FVA, focuses on abstracting file-level storage access over IP-based networks. FVA is the term used to describe the values and technologies that extend file-based storage systems to appear as a logical pool of resources, from which you can freely allocate capacity wherever and whenever it is needed.

NAS and file-serving devices are typically implemented with a single file system per device. When a file system expands to the limit of a device's physical disk capacity, the data must be repopulated to another device with more capacity, and clients must be remounted/remapped to the new location. The file system must be taken offline during this process, disrupting users and applications, because active data cannot change physical location. When there are multiple devices, the storage administrator must manage each device independently, as islands of storage capacity.

FVA provides a layer of transparency to users and applications. Distribution transparency masks the distributed nature of data while it is being repopulated, allowing users and applications full read/write access. FVA simplifies multiprotocol data migration: FVA's data mobility engine replicates data and metadata, maps permissions, synchronizes access and changes, and redirects client access to the authoritative copy of the data, all for multiprotocol mixed access as well as NFS-only or CIFS-only access.



Rainfinity Basic Operations


- Bi-modal
  - Out-of-band mode
    - Stays out-of-band until data mobility is needed
    - No network performance impact of in-band appliances
    - Can still monitor servers
  - In-band mode
    - File servers go in-band to prevent the user disruption typical of out-of-band devices
    - Can monitor protocol connections and sessions
    - Keeps source and destination in sync as long as needed
    - Tracks client access to source and destination
- Monitors storage system performance: CPU, I/O, capacity
- Performs actual data movement during migration
- Integrates to provide a single point of namespace management
  - DFS, Automounter, and standalone deployment
EMC Storage Virtualization Foundations - 19

When Rainfinity is doing a move, the two file servers involved in the move must be on the private, or server, side. In this case, Rainfinity is said to be in-band, and those file servers are also referred to as in-band. When Rainfinity is not doing a move or redirecting access to certain file servers, those file servers may be moved to the public, or client, side. In this case, Rainfinity is said to be out-of-band for these file servers, and those file servers are also referred to as out-of-band.

Rainfinity is aware of file-sharing protocols. It is this application-layer intelligence that allows Rainfinity to move data without interrupting client access. Not only can Rainfinity move data transparently, it can also be implemented as a data-gathering device. A key to the Rainfinity technology suite is that the appliance has multiple network interfaces and essentially performs like a bridge, bringing hosts in- and out-of-band as required to move data accordingly.

Rainfinity also leverages existing namespaces into a single point of management. For example, Rainfinity can integrate with a customer's DFS, NIS/Automount, and LDAP namespace environments to provide a single view of the data.
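The bi-modal behavior can be pictured as moving file-server ports between VLANs. This is a toy sketch of the idea, not Rainfinity's actual mechanism; the class and server names are invented.

```python
# File servers sit on the public VLAN (out-of-band) until a move involves
# them, at which point their ports join Rainfinity's private VLAN (in-band).

class BridgeVlans:
    def __init__(self, servers):
        self.vlan = {s: "public" for s in servers}   # all out-of-band at first

    def start_move(self, source, destination):
        for s in (source, destination):
            self.vlan[s] = "private"                 # now in-band via Rainfinity

    def finish_move(self, source, destination):
        for s in (source, destination):
            self.vlan[s] = "public"                  # back out-of-band


b = BridgeVlans(["filer-a", "filer-b", "filer-c"])
b.start_move("filer-a", "filer-b")
# Only the servers involved in the move are in-band; others are untouched.
print(b.vlan["filer-a"], b.vlan["filer-c"])
```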



Rainfinity FVA Theory of Operation

[Diagram: Rainfinity FVA appliance in the network, integrated with the namespace services (DFS, AD, Automount, NIS, LDAP)]

1. Data mobility: Rainfinity is triggered
2. Redirection: global namespace updated
EMC Storage Virtualization Foundations - 20


Rainfinity installs by plugging into the network switch. There are no changes required to client mount points. Rainfinity installs in the network but is not in the data path. When you install Rainfinity, you set up a separate VLAN in the network. Clients continue to access storage with no disruption.

When data migration must take place for cost or optimization reasons, the ports associated with the involved file servers are associated with the private VLAN of Rainfinity. Rainfinity is now in the data path for these file servers and can ensure client access to the data even though it is being dynamically relocated. Any updates during the migration are synchronized across both the original source and the new destination. If Rainfinity is removed from the network in the middle of a transaction, there is no data integrity risk. All updates are reflected on the source. The clients are still mounting the source. You can plug Rainfinity back in and the transaction resumes.

Once the data relocation is complete, Rainfinity updates the global namespace (DFS for Windows, Automount for UNIX, login scripts, or homegrown namespace solutions). The namespace in turn updates the clients with the new location of the data. The original source reflects a point-in-time copy at the end of the transaction and reflects updates made up to that point. Updating client mappings takes time, however, so Rainfinity remains in the data path and redirects client access to the new location. Over time, the number of sessions to redirect decreases as new sessions mount directly to the new location.
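The migration sequence described above can be sketched in a few lines. This is a hypothetical simplification, not the product's implementation: dictionaries stand in for file systems and the namespace, and the function names are invented.

```python
# Sketch of the migration sequence: copy data, mirror writes to source and
# destination while in-band, then point the namespace at the new location.

def migrate(source, destination, namespace, logical_path):
    # 1. Data mobility: copy existing files to the destination.
    destination.update(source)

    # 2. While in-band, every client write lands on both copies, so the
    #    source stays authoritative if the migration is interrupted.
    def mirrored_write(name, data):
        source[name] = data
        destination[name] = data

    # 3. Redirection: update the global namespace to the new location.
    namespace[logical_path] = destination
    return mirrored_write


src = {"a.txt": "v1"}
dst = {}
ns = {}
write = migrate(src, dst, ns, "/corp/data")
write("a.txt", "v2")   # an update during migration hits both copies
print(src["a.txt"], dst["a.txt"])
```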



Rainfinity FVA Theory of Operation (Cont.)

[Diagram: Rainfinity FVA appliance in the network, integrated with the namespace services (DFS, AD, Automount, NIS, LDAP)]

1. Data mobility: Rainfinity is triggered
2. Redirection: global namespace updated
3. Transaction complete without downtime
EMC Storage Virtualization Foundations - 21


When all of the client sessions have been remapped to the new location, Rainfinity completes the migration and the servers move out of the Rainfinity private VLAN into the public VLAN. Rainfinity is now out of the data path.

Rainfinity virtualizes an environment 100% of the time, with the namespace providing a logical abstraction layer, and selectively virtualizes traffic on the wire based on the particular optimization or relocation events that need to take place.

Rainfinity can also handle multiple transactions at a time: only one move transaction is active at a time and the others are queued, but Rainfinity can perform redirection simultaneously for multiple transactions.



File Management Appliance (FMA)


- Hardware/software appliance solution
- FMA archives and recalls files based on configured rules
- Full function:
  - File archival and recall
  - Rule and policy creation and preview
  - Scheduling
  - Orphan file management
  - Stub recovery

[Diagram: FMA and FMA-HA appliances between NAS clients and a NetApp file server, with Centera as the archive target]
EMC Storage Virtualization Foundations - 22


File Management Appliance is a hardware and software appliance solution. FMA provides archival and retrieval functionality in a NAS environment. The archive decision is based on configured rules. After archival, a stub file resides on the primary storage and points to the archived file on the secondary storage. All of the data necessary to retrieve a file resides in the stub itself, not in a database on the FMA appliance.

The archival and recall functions reside on the FMA appliance. The FMA-High Availability (FMA-HA) appliance complements an existing FMA by adding high availability and load-balancing capabilities when recalling archived data to primary storage. The FMA-HA can be used for recall only; it cannot be used for archiving or any other FMA function.

FMA provides full archiving functionality. These features include archival and recall, rule and policy creation and preview, scheduling, orphan file management, and stub file recovery.
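Because everything needed to retrieve a file lives in the stub itself, a stub can be thought of as a small self-describing record left on primary storage. The layout below is an illustrative sketch with invented field names, not the real FMA stub format.

```python
# Toy stub: carries everything needed to locate the archived file, so no
# database lookup on the appliance is required to service a recall.

import json

def make_stub(archive_target, object_id, original_size, original_mtime):
    return json.dumps({
        "archive_target": archive_target,  # e.g. a Centera cluster address
        "object_id": object_id,            # content address / object key
        "size": original_size,
        "mtime": original_mtime,
    })

def read_stub(stub_bytes):
    # Any recall-capable appliance can service the recall from the stub alone.
    return json.loads(stub_bytes)


stub = make_stub("centera-01", "C1A7F9", 1048576, 1230768000)
print(read_stub(stub)["archive_target"])  # -> centera-01
```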



Tiered Storage Management with FMA


- Allows for the efficient placement of data, based on capacity and service-level agreements
- Primary storage: top tier of storage in terms of performance and availability
- Secondary storage: second level of storage, with lower cost and performance than primary storage
- Policy engine: intelligence that classifies and migrates files based on pre-established policy

[Diagram: the FMA policy engine directs NAS client data to primary or secondary storage]
EMC Storage Virtualization Foundations - 23

Tiered storage management, or file archival, allows for efficient placement of data based on capacity and service-level agreements. Intelligently placing data in optimal tiers of performance and price lowers the average cost of storage. Enterprises can use lower-cost storage, such as ATA drives or tape, to store less critical data at a fraction of the cost of high-performance storage.

Intelligent software that automatically classifies and migrates data based on policy makes this file placement feasible. Based on configured policies, policy engines migrate data from one storage tier to another. These policies can be based on the size of the data, the length of time since the last access, or the file extension type.

File Management Appliance, or FMA, provides the policy engine functionality that supports the archival process. FMA is used by companies that need to conform to government regulations, policies, and the Information Lifecycle Management process.
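A policy engine of the kind described — classifying files by size, time since last access, and extension — can be sketched as a simple rule function. The thresholds and rule shapes below are invented for illustration, not FMA defaults.

```python
# Minimal policy-engine sketch: decide which files qualify for archival
# based on size, idle time since last access, or file extension.

import time

def should_archive(f, min_size=10 * 1024**2, max_idle_days=180,
                   extensions=(".log", ".bak")):
    idle_days = (time.time() - f["atime"]) / 86400
    return (f["size"] >= min_size
            or idle_days >= max_idle_days
            or f["name"].endswith(extensions))


files = [
    {"name": "report.pdf", "size": 2_000_000, "atime": time.time()},
    {"name": "build.log",  "size": 1_000,     "atime": time.time()},
]
to_archive = [f["name"] for f in files if should_archive(f)]
print(to_archive)  # -> ['build.log']
```

In a real deployment the rules would be previewed against the file server before being scheduled, matching the "rule and policy creation and preview" feature listed above.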



Rainfinity FMA Theory of Operation

[Diagram: Rainfinity FMA appliance in the network, integrated with the namespace services (DFS, AD, Automount, NIS, LDAP)]

1. File management: policy-based file archiving; Rainfinity archives files
2. End-user retrieval: access to the stub file retrieves the file

EMC Storage Virtualization Foundations - 24

Once data is migrated from primary to secondary storage, a stub (or tag) file exists on the primary storage to direct access to the actual location of the data (secondary storage). When a write occurs to the data, it is typically fully recalled to the primary storage. When a read occurs, the data can be read from the secondary storage, or it can be partially or fully migrated from secondary to primary storage, depending on the storage technology used. FPolicy, the NetApp file archival application interface, requires a full recall of the data on reads and writes.
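The access-time behavior described above can be summarized in a small decision function. This is a sketch of the described semantics, with invented names; the `full_recall_on_read` flag stands in for platform differences such as the FPolicy behavior mentioned above.

```python
# Where is an I/O to a stubbed file served from, and is the file recalled?

def on_access(op, full_recall_on_read=True):
    """Return (tier the I/O is served from, whether a full recall occurs)."""
    if op == "write":
        return ("primary", True)      # writes force a full recall to primary
    if full_recall_on_read:           # some platforms recall fully on reads too
        return ("primary", True)
    return ("secondary", False)       # read served from the archive tier


print(on_access("write"))
print(on_access("read", full_recall_on_read=False))
```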



Rainfinity Hardware
Front View
[Front views: FVA5 / FMA5 and FVA6 / FMA6 appliances, each with hot-swap disk drives]


EMC Storage Virtualization Foundations - 25

Shown here are the front views of two different Rainfinity boxes. The FVA and FMA appliances use the same hardware and run the same customized Linux operating system. The FVA5 and FMA5 appliances are based on Dell 2950 G5 hardware; the FVA6 and FMA6 are based on Dell 2950 G6 hardware.

On the front of the appliance, the CD-ROM drive is used for full CD system upgrades and fresh installs. There are two hot-swappable SCSI hard drives on the G5 hardware, and three hot-swappable drives on the G6 hardware.



Rainfinity Hardware (Cont.)

Back view, G5 and G6 hardware:

[Rear view: Ethernet interfaces Eth0-Eth13, including bridging Ethernet interfaces, clustering Ethernet interfaces, and hot-swap power supplies]

EMC Storage Virtualization Foundations - 26

This slide displays the back view of the G5 and G6 Rainfinity boxes seen on the previous slide. The back of the box looks identical for either the G5 or the G6. Notice the two on-board copper ports used for clustering appliances in an active-standby configuration. Please take a moment to review the slide.



FVA GUI and Applications


- Capacity Management
- Performance Management
- Migration and Consolidation
- Rainfinity Platform


EMC Storage Virtualization Foundations - 27

Rainfinity FVA application software provides the functionality of the system. The GUI screen shown here can be used, along with the CLI, to access the features provided by Rainfinity. The first two applications, Capacity Management and Performance Management, drive storage optimization by visualizing usage trends and exceptions. The next application, Migration and Consolidation, moves data between storage devices. The Rainfinity Platform feature is used to add servers and set up proxy servers, among other things.



Why Rainfinity?
Built for the enterprise:
- Most scalable, safest, easiest to deploy
- Industry-standards based
- Enterprise service and support: EMC's world-class technical service organization and 24x7 global hardware and software support


EMC Storage Virtualization Foundations - 28


In order to streamline the operations of your file server and NAS environments, the Rainfinity File Virtualization Appliance delivers optimized utilization of storage resources, accelerated storage consolidation, simplified management, and increased protection of critical files. It does this by simplifying capacity management through non-disruptive data movement and namespace management updates, maintaining a virtual file system of your physical file-serving and NAS resources.



Rainfinity Benefits
- Supports heterogeneous, multi-vendor environments
- No client or server software required
- Data consolidation
- Leverages industry-standard namespaces
- Increases data mobility
- Operates transparently to clients and applications
- Supports multiprotocol moves (CIFS and NFS)


EMC Storage Virtualization Foundations - 29


Rainfinity leverages industry-standard global namespaces with a scalable, transparent, file-protocol switching capability. There are many advantages to this approach: it limits risk and performance concerns, and it leverages the continuing investment made by large vendors and standards bodies, whether the namespace is DFS or Automount. Rainfinity can also work with existing environments in which a standard namespace is not deployed, such as login scripts.

Rainfinity is a virtualization solution that provides complete transparency during consolidation and data migration: not just transparency of file access, but transparency to the environment. This solution does not require mount-point changes or the deployment of agents on clients or servers. It supports migration of data that is accessed by both NFS and CIFS clients. Virtualization increases data mobility by providing location independence of files and file systems from the applications and the people who use them.



Module Summary
Key points covered in this module:
- Virtualization is the newest approach for consolidating many servers into one
- Rainfinity is a dedicated hardware/software solution that manages file-oriented (NFS/CIFS) storage access
- Rainfinity combines applications that monitor and move data between storage devices
- FVA streamlines operations of file server and NAS environments through non-disruptive data movement, and leverages the customer's existing environment


EMC Storage Virtualization Foundations - 30

These are the key points covered in this module. Please take a moment to review them.



EMC Invista
Upon completion of this module, you will be able to:
- Understand the concept and benefits of storage virtualization
- Identify the benefits, features, and competitive advantages of an Invista solution
- List the hardware and software components of Invista and how they work together to achieve storage virtualization
- Understand the advanced software functionality enabled by an Invista solution
- Cite how to integrate Invista into an existing SAN and how to design a highly available Invista configuration
- Identify the new features and functionality included in the latest release of Invista
EMC Storage Virtualization Foundations - 31

The objectives for this module are shown here. Please take a moment to read them.



EMC Invista
Network-based storage virtualization:

- Performance architecture
  - Leverages next-generation intelligent SAN switches for high performance
  - Designed to work in enterprise-class environments
- Runs in the storage network
- Provides advanced functionality
  - Dynamic volume mobility
  - Network-based volume management
  - Heterogeneous point-in-time copies (clones)
- Enterprise management
  - EMC ControlCenter integration
  - EMC Replication Manager integration
- Supports heterogeneous environments
  - Works with EMC and third-party storage

[Diagram: hosts see virtual volumes presented by Invista, whose Data Path Controllers and Control Path Cluster sit in the storage network; virtual initiators connect Invista to the physical storage]
EMC Storage Virtualization Foundations - 32

Invista is a SAN-based storage virtualization solution. Its architecture leverages new intelligent SAN switch hardware from EMC's Connectrix partners that enables higher levels of scalability and functionality. Unlike other storage virtualization products, Invista is not appliance-based.

Invista delivers consistent, scalable performance across a heterogeneous storage environment, even with highly random I/O applications. Because Invista uses the processing capabilities of intelligent switches, it eliminates the latency and bandwidth issues associated with an in-band appliance approach. By using purpose-built switches with port-level processing, this split-path architecture delivers wire-speed performance with negligible latency.

EMC's unique network-based approach to storage virtualization enables key functionality, such as the ability to move active applications to different tiers of storage non-disruptively, and the ability to leverage clones across a heterogeneous storage environment. These functions work uniformly across qualified hosts and heterogeneous storage arrays.

In addition to integrating discovery and monitoring functions for virtual volumes into EMC ControlCenter, Invista can also be easily managed from a GUI or a command-line interface (CLI). Invista supports the five major operating systems and storage arrays from EMC, IBM, Hitachi Data Systems, and Hewlett-Packard.



Advanced Software Functionality

- Heterogeneous remote replication of virtual volumes: create remote copies of data for disaster recovery and business continuity
- Network-based volume management: pool storage and manage volumes at the network level
- Continuous data protection of virtual volumes: point-in-time recovery and application checkpoints
- Non-disruptive data movement: move and change primary volumes while the application remains online
- Heterogeneous point-in-time copies: create local copies of data for testing and repurposing across multiple types of storage

EMC Storage Virtualization Foundations - 33

The next-generation hardware, combined with powerful Invista software, enables some unique capabilities:

Dynamic volume mobility allows administrators to move primary volumes between heterogeneous storage arrays while the application remains online. This enabler of Information Lifecycle Management allows you to move applications non-disruptively to the appropriate storage tier, based on application requirements and service levels.

Network-based volume management is the basis for what is commonly considered virtualization. Invista enables you to create and configure virtual volumes from a heterogeneous storage pool and present them to hosts. It makes sense for the network to be the control point for this: abstracting and aggregating the back-end storage, configuring it, and making it available to all of the connected hosts.

Invista creates clones of virtual volumes. This allows you to extend the use of clones to areas where it was previously impossible due to compatibility issues. For example, you can now create a clone from a high-tier primary storage array and extend it to a lower-tier, lower-cost storage array. This gives you another local replication option in your tiered storage environment.

Invista integration with EMC RecoverPoint software provides a disaster-recovery capability by enabling IT managers to employ virtualization technologies across multiple sites. RecoverPoint provides continuous data protection by enabling recovery of data from any point-in-time backup or application checkpoint.



Key Invista Benefits


- Support for EMC and third-party arrays: leverages existing investments in storage capacity and resources
- Delivers Information Lifecycle Management: enables data movement across multiple storage tiers
- Reduces complexity: single interface for managing all tiers of storage
- Increases operational efficiency by simplifying:
  - Movement of data to optimize performance
  - Provisioning of storage among multiple vendor arrays
EMC Storage Virtualization Foundations - 34

EMC Invista provides support for EMC and third-party arrays, which allows an enterprise to leverage existing investments in storage capacity and resources. Invista also supports Information Lifecycle Management by enabling data movement across multiple storage tiers. Invista reduces management complexity by establishing a single interface for managing all tiers of storage. Finally, Invista increases operational efficiency by simplifying both the movement of data to optimize performance, and the provisioning of storage among multiple vendor arrays.



Invista 2.x Hardware Architecture


[Diagram: a two-node CPC (SP A and SP B on Intel hosts) with dual FC links for cluster interconnect, attached to Fabric A and Fabric B, two DPCs, an IP network, and a SAN-based metadata store]

- Distributed Control Path Cluster (CPC) with dual Fibre Channel links for cluster interconnect and failover support
- IP packet filtering is used to restrict communication to Invista components
- Dual Data Path Controllers (DPC) for redundancy
- Metadata stored on an array connected via the SAN
EMC Storage Virtualization Foundations - 35

The illustration shows the hardware components of an Invista version 2 deployment. The CPC is implemented as a two-node host cluster running on Intel-based hosts. Cluster interconnect between the two nodes is implemented using a pair of dedicated point-to-point FC links called CMI links. In addition to allowing distributed deployment, the CPC platforms use their local hard drives for the bootstrap PSM and for diagnostic information. They also eliminate the need for standby power supplies, as each SP has its own redundant power supply.

The metadata store resides on SAN volumes which the CPC nodes access via the mirrored fabrics. In Invista terminology, the metadata store is referred to as an Invista Configuration Repository Volume, or ICRV.

A functional IP link between the CPC nodes and the DPCs is critical to the operation of the Invista instance. This is provided by a private IP network of two Allied Telesis or two Cisco Catalyst switches, a standard feature of an Invista installation.

Invista V2.0 Patch 2 introduced the IP packet filtering configuration, which is the preferred IP deployment for Invista. Previous versions of Invista only supported firewalled configurations. In firewalled deployments, each Invista component is assigned an IP address on a private network behind the firewall, and NAT (Network Address Translation) is used to translate an external IP address to an IP address on the private network. In packet filtering deployments, customers supply IP addresses for the components of the Invista instance, and packet filtering rules added to the IP switch configuration restrict communication to the Invista components. Packet filtering preserves the network protection provided by a firewall while eliminating the need for NAT and the limitations NAT imposes.



ICRV: Metadata Store (SAN Volume)


What is Invista metadata?

- Information about the Invista configuration, including:
  - Virtual Frames
  - HBAs, hosts (front end)
  - Storage elements, storage arrays (back end)
  - Virtual volumes and mappings to storage elements
  - Meta-volume relationships
  - Clone data or CPLs (Clone Private LUNs)
- The Invista code
- In general, all configuration- and operation-related data that is not production data on the storage elements
EMC Storage Virtualization Foundations - 36

Metadata is critical to the operation of Invista. Invista version 2 makes the metadata repository highly available via SAN provisioning. Invista metadata contains information about the Invista configuration, including Virtual Frames, HBAs and hosts, storage elements and arrays, virtual volumes and their mappings to storage elements, meta-volume relationships, and the Clone Private LUNs, or CPLs. Also included in the metadata is the Invista code itself. In general, metadata is all configuration- and operation-related data that is not the customer's production data on the storage elements.



Theory of Operation - CPC


- Runs the storage and management applications used to configure and control the Invista instance
- The CPC stores information about physical and virtual storage, including:
  - Storage elements dedicated to Invista
  - Imported storage elements and associated storage elements
  - Virtual volumes and associated imported storage elements
  - Virtual Frames and the hosts and virtual volumes that belong to them
  - Clone Groups and the storage volumes that belong to them
- The CPC downloads information to the DPC


EMC Storage Virtualization Foundations - 37

Invista management applications run on the CPC. These applications are the Invista Element Manager GUI and the Invista CLI, both of which can be used on a remote platform to monitor and manage the Invista instance. The CPC stores the following configuration metadata about the Invista instance on the SAN-attached ICRV:

Storage element (back-end array volume) information: these volumes have been assigned to the Invista instance. The back-end volumes must be allocated exclusively for Invista use by the administrator of the storage arrays.

Imported storage element information: imported storage elements are simply storage elements that have been imported into the Invista instance. This identifies storage array capacity that Invista intends to use for creating virtual volumes.

Virtual volume information: includes the virtual volume name, storage volume identification (ID), and the imported storage element used to create the virtual volume.

Virtual Frame information: identifies one or more virtual volumes and the host allowed to access them.

Clone Group information: includes a data (source) volume and clone (copy) volumes.
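The metadata records described above can be pictured as a few linked structures. The field names below are invented for illustration and are not the real ICRV on-disk format; they only show how imported storage elements, virtual volumes, and Virtual Frames relate.

```python
# Sketch of CPC metadata relationships: a Virtual Frame grants a host access
# to virtual volumes, each backed by an imported storage element on an array.

from dataclasses import dataclass, field

@dataclass
class ImportedStorageElement:
    array: str          # back-end array identifier
    lun: int            # back-end volume dedicated to Invista
    capacity_gb: int

@dataclass
class VirtualVolume:
    name: str
    storage_volume_id: str
    backed_by: ImportedStorageElement

@dataclass
class VirtualFrame:
    host: str                           # host allowed to access the volumes
    volumes: list = field(default_factory=list)


se = ImportedStorageElement("SYM-1234", 17, 100)
vv = VirtualVolume("vv_finance", "SV-0007", se)
vf = VirtualFrame("host-a", [vv])
print(vf.host, vf.volumes[0].backed_by.array)
```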



Theory of Operation - DPC


- Currently supported DPCs: Brocade AP-7600B, Brocade PB-48K-AP, and Cisco SSM module
- Receives part of its configuration from the CPC
- Examines all incoming/outgoing FC frames
  - Read/write frames are mapped to the appropriate virtual target or initiator and FC port
  - Control frames are passed between the requesting host and the CPC
- Serves as a virtual target for hosts
- Serves as a virtual initiator for storage arrays

[Photos: Brocade AP-7600B, Cisco SSM module, Brocade PB-48K-AP (Scimitar)]
EMC Storage Virtualization Foundations - 38

The Data Path Controller (DPC) resides in the intelligent switch component of Invista and receives part of its configuration from the CPC. The DPC is the center of all traffic in Invista, located in the path of all host I/O. The DPC examines each read/write Fibre Channel frame generated by hosts and storage arrays and forwards it to the appropriate device, based on the physical-to-virtual mapping stored in the metadata. Control frames, or Fibre Channel frames that are not read/write operations, are forwarded to the CPC for processing. To the host, the DPC is a virtual target; to a storage array, the DPC is a virtual initiator.

The three currently supported DPCs are shown on the slide. The Brocade AP-7600B is a departmental switch. The Brocade PB-48K-AP and Cisco SSM module are blades that fit into a slot in large enterprise switches.
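The split-path dispatch the DPC performs can be sketched as a classification step plus a table lookup. This is a simplified illustration, not switch firmware; the mapping table and frame fields are invented.

```python
# Read/write frames are remapped at the port using the virtual-to-physical
# table downloaded from the CPC; everything else goes to the CPC as an
# exception (control frames such as Inquiry, Report LUNs, reservations).

VIRTUAL_TO_PHYSICAL = {
    ("vt-1", 0): ("array-a", 42),   # (virtual target, LUN) -> (array, LUN)
}

def dispatch(frame):
    if frame["op"] in ("read", "write"):
        array, lun = VIRTUAL_TO_PHYSICAL[(frame["target"], frame["lun"])]
        return {"route": "data-path", "array": array, "lun": lun}
    return {"route": "cpc"}         # control frame: forward to the CPC


print(dispatch({"op": "read", "target": "vt-1", "lun": 0}))
print(dispatch({"op": "inquiry", "target": "vt-1", "lun": 0}))
```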



Separation of Data and Control Path Operations


Control frames:
- Inquiry (page 80, 83)
- Read Capacity
- Report LUNs
- Test Unit Ready
- Request Sense
- Reservation
- Group Reservation
- Persistent Reservation
- Format
- Verify
- Rezero Unit

Data frames:
- Read
- Write

[Diagram: control operations flow to the Control Path Cluster (CPC), where Invista virtualization services run on the SAL (Switch Abstraction Layer) and its SAL agent; I/O streams are parsed and redirected by the Data Path Controller (DPC), and the processed I/O stream continues to the array]
EMC Storage Virtualization Foundations - 39

The illustration shows how I/O packets are processed by the Invista intelligent switch. When a command arrives at the DPC, there are two places where processing can occur. The data path processors are the port-level ASICs that handle the incoming I/O from the host and do the remapping to the back-end storage. In typical operations, more than 95% of I/O is handled by the DPCs.

Whatever cannot be handled by the DPCs is termed an exception. An exception might be a SCSI inquiry about the device, or an I/O for which the DPC does not have mapping information. These exceptions are handled by the CPC, which is where the storage application actually runs. When the system starts up, the CPC loads the mapping tables for the virtual volumes into the DPCs.



Invista Differentiators
- Uses intelligent switches and directors with port-level real-time processing
- Full wire speed
- Sustains high performance across applications with highly random I/O
- All data safely written to attached storage before writes are acknowledged to the host, ensuring a consistent image of data on attached storage
- No risk of lost data due to failure of the virtualization system
- Metadata stored on the SAN for additional protection

EMC Storage Virtualization Foundations - 40

Invista differentiators include:

Intelligent switches and directors with port-level real-time processing. By placing the virtualization function on the switch, Invista uses a split-path architecture that leverages the dedicated port processing of intelligent switches to perform virtualization functions in real time. Because the virtualization function happens at ASIC speed, the caching used in first-generation virtualization solutions is unnecessary. Read and write operations are performed as if hosts were talking directly to physically attached arrays. This preserves investment in the processing power of the attached storage arrays, and it eliminates the bottlenecks and data integrity risk imposed by putting a caching virtualization controller in front of a storage controller.

Full wire speed. Virtualization is done at full wire speed, so no performance degradation occurs for I/O sent over the Fibre Channel SAN. Invista sustains high performance across applications with highly random I/O.

Write safety. All data is safely written to attached storage before writes are acknowledged to the host, ensuring a consistent image of data on attached storage.

Redundancy. There is no risk of lost data due to failure of the virtualization system, because of the redundant architecture of the Invista components. A typical Invista configuration includes at least two DPCs and CPCs, along with redundant connections to the host and array ports. Metadata is stored on ICRVs that have dual paths from the CPCs through the SAN for additional protection.



Invista Logical Topology

DPC

Virtual Targets

Virtual Volumes

Virtual Initiators
EMC Storage Virtualization Foundations - 41

The illustration shows the logical view of Invista. Virtual Targets are abstract entities that are created by designating specific ports on the switch to be used as front-end ports. On each port designated as front end, a Virtual Target is created that becomes visible in the NameServer on the switch. Each Virtual Target has a unique World Wide Port Name (WWPN). Invista uses virtual targets to map Virtual Volumes to logical devices on the back-end arrays. Each Virtual Volume presented to a host is mapped to a logical device on a back-end array. Virtual initiators are also abstract entities that are created when the intelligent switch is imported. The number of virtual initiators created is switch dependent. A Cisco SSM module creates nine virtual initiators per SSM blade; however, only eight are usable. A Brocade AP-7600 or PB-48K-AP creates 16 virtual initiators (one per port). Similar to virtual targets, each virtual initiator has a unique WWPN. Invista initiates I/O to the back-end storage arrays (targets) using the virtual initiators.
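The target-side mapping described above can be modeled as a small lookup structure. The sketch below is hypothetical (the WWPN and device names are invented for illustration); it only shows the idea that a virtual target, identified by a unique WWPN, resolves a host-visible virtual volume to a logical device on a back-end array.

```python
# Illustrative model of an Invista-style virtual target (not real code):
# a WWPN-identified front-end entity holding a virtual-volume-to-LUN map.
from dataclasses import dataclass, field

@dataclass
class VirtualTarget:
    wwpn: str                                     # unique World Wide Port Name
    lun_map: dict = field(default_factory=dict)   # virtual volume -> back-end device

    def map_volume(self, vvol: str, backend_device: str) -> None:
        self.lun_map[vvol] = backend_device

    def resolve(self, vvol: str) -> str:
        # The host addresses the virtual volume; the target resolves it
        # to the logical device on the back-end array.
        return self.lun_map[vvol]

vt = VirtualTarget(wwpn="50:06:01:60:00:00:00:01")   # hypothetical WWPN
vt.map_volume("vvol_01", "array_A/LUN_7")
print(vt.resolve("vvol_01"))  # -> array_A/LUN_7
```

Virtual initiators would sit on the other side of this map, issuing the resolved I/O to the back-end array ports.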



High Availability Configuration


Mirrored SAN
Two separate SANs, dual-HBA hosts
Supports non-disruptive code upgrades to virtualization components
Provides HA for switch configurations through fault isolation
Invista LUNs can be exposed on both fabrics
Hosts

Layer 2 SAN (A/B Fabric)

CPC uses 2 Control Path Nodes


Active-Active cluster
LUN ownership model follows CPC nodes

Invista Core

DPC

CPC

DPC

Layer 2 SAN (A/B Fabric)

Multiple DPCs
Failover LUNs across DPCs
Support for switch upgrades (hardware and firmware)
2009 EMC Corporation. All rights reserved.

Storage Arrays

EMC Storage Virtualization Foundations - 42

In the illustration, each host has two HBAs. Each HBA is cabled into a unique front-end layer 2 fabric, and each layer 2 fabric is cabled into a separate DPC. Both DPCs share the same CPC for redundancy and for the ability to share volume mapping if a component on one of the paths fails. Each DPC is cabled into a layer 2 back-end SAN, which is cabled into one port on the back-end arrays. In the example, the CPC references the dual-node CPC cluster present in the standard Invista 2.x configuration.
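The dual-fabric redundancy described above amounts to simple path failover from the host's point of view. The sketch below uses hypothetical names (not Invista or PowerPath code) to show that when a component on one fabric fails, I/O continues over the surviving HBA/fabric/DPC path.

```python
# Hypothetical path-failover model for a dual-HBA, dual-fabric host.
paths = [
    {"hba": "HBA0", "fabric": "A", "dpc": "DPC-A", "up": True},
    {"hba": "HBA1", "fabric": "B", "dpc": "DPC-B", "up": True},
]

def pick_path(paths):
    # Choose any live path; with both fabrics up, I/O can use either.
    live = [p for p in paths if p["up"]]
    if not live:
        raise RuntimeError("no path to storage")
    return live[0]

paths[0]["up"] = False          # simulate a failure on fabric A
print(pick_path(paths)["dpc"])  # -> DPC-B
```

The same model explains non-disruptive upgrades: take one fabric's components down deliberately while the other carries the I/O.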



Invista Sharing an Existing SAN


A B C D E

DPC A

Invista Instance

DPC B

Fabric A

L2 SAN

Fabric B

Heterogeneous Storage Arrays


EMC Storage Virtualization Foundations - 43

The diagram shows how an Invista configuration may look when it coexists with a traditional SAN. DPC A and the Fabric A switches are cabled together and are managed as one fabric. DPC B and Fabric B are configured in the same manner. In this scenario, hosts C, D, and E are directly connected to the Invista environment. Hosts A and F are directly attached to the L2 SAN environment. Host B has one connection to the Invista instance and another to the L2 SAN. Hosts may be connected in this manner for a number of reasons: they may not be taking part in the virtualized environment, or they may be preparing for volume migration into it. Hosts A, C, D, E, and F can be separated from Invista by zoning the HBA to the array port. By not zoning the HBA to a virtual target, the host bypasses Invista; however, physical connectivity to the DPC is preserved in case the host is migrated in the future.
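The zoning decision described above can be expressed as a membership test: a host's I/O goes through Invista only if its HBA is zoned to a virtual target WWPN rather than directly to an array port. The sketch below uses invented zone and WWPN names purely for illustration.

```python
# Hypothetical zoning sketch: whether a host's zone routes it through
# Invista depends on whether a virtual target WWPN is a zone member.
zones = {
    "zone_hostC": {"hostC_hba0", "vt_wwpn_1"},      # zoned to a virtual target
    "zone_hostA": {"hostA_hba0", "array_port_3"},   # zoned direct to the array
}

def uses_invista(zone_members, virtual_targets=frozenset({"vt_wwpn_1"})):
    # The host is virtualized only if its zone includes a virtual target.
    return bool(zone_members & virtual_targets)

print(uses_invista(zones["zone_hostC"]))  # -> True
print(uses_invista(zones["zone_hostA"]))  # -> False
```

Migrating a host into the virtualized environment is then a zoning change, not a recabling exercise, which is why physical connectivity to the DPC is worth preserving.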



Invista Element Manager


EMC Storage Virtualization Foundations - 44

The illustration shows the main Invista Element Manager window. To start Element Manager, first log in to a computer that has IP connectivity to the Invista CPCs. The computer must be running a supported browser and Java Runtime Environment (JRE). The Invista Element Manager requires a current JRE version, which is downloadable from java.sun.com. Start the browser and ensure that any pop-up blockers are either disabled or configured to allow the Invista GUI to launch. Enter the IP address of either Invista SP. When the Network Address Translation (NAT) Configuration dialog box appears, leave the box checked (this is the default option) and click OK. The software displays the Invista login dialog box, which requests a username and password. Once logged in, the main Element Manager window appears. From this window, the Storage Admin or Operator can perform or view Invista operations.



Volume Management
Simplify volume presentation and management
Create, delete, and change functionality
Provides front-end LUN masking and mapping of storage volumes to the host
Single HBA driver for all arrays
Concatenated volumes

Centralized volume management and control


Single Invista console to manage virtual volumes, clones, and mobility jobs

Virtual Volumes

Reduce management complexity of heterogeneous storage


Single management interface to allocate and reallocate storage resources
EMC Storage Virtualization Foundations - 45

Invista provides a robust volume management capability. The illustration shows how the storage elements from the back-end arrays, shown in yellow, red, green, and blue, are created. Several storage elements can be concatenated together to form a single virtual volume that can then be configured to the host; a concatenated volume is shown in the example. A virtual volume can consist of an entire storage element, as in the case of the red virtual volume, or of a smaller chunk of a storage element, as shown by the green virtual volume. Use Element Manager to create, delete, and modify virtual volumes. Element Manager is also used to configure or unconfigure virtual volumes to a host. With Element Manager, administrators have a single tool that provides all the capabilities needed to manage the Invista instance.
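The concatenation described above implies a simple address translation: a host logical block address (LBA) on the virtual volume is mapped to an offset within one of the underlying storage elements. The sketch below is an assumption-laden illustration (element names and sizes are invented), not Invista's internal layout.

```python
# Illustrative address translation for a concatenated virtual volume:
# host LBA -> (storage element, offset within element).
elements = [("yellow_SE", 100), ("green_SE", 50)]  # (name, size in blocks)

def translate(lba):
    base = 0
    for name, size in elements:
        if lba < base + size:
            return name, lba - base   # falls inside this element
        base += size                  # skip past this element
    raise ValueError("LBA beyond virtual volume")

print(translate(30))   # -> ('yellow_SE', 30)
print(translate(120))  # -> ('green_SE', 20)
```

A "smaller chunk" virtual volume, like the green one in the illustration, would simply map to a sub-range of a single element instead of the whole thing.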



Heterogeneous Point-in-Time Copies: Cloning


Key features
Can clone a virtual volume to another virtual volume of the same size across heterogeneous array types
High-performance data copy

Use cases
Heterogeneous backup and recovery
Testing, development, training
Parallel processing, reporting, queries

Integrated management
Replication Manager
EMC ControlCenter
Microsoft VSS
EMC Storage Virtualization Foundations - 46

The illustration shows how a virtual volume, shown in blue, can be cloned to other virtual volumes of the same size. In the example, there are three clones, shown in yellow, red, and green. Invista permits users to create one or more full copies of a virtual volume. This functionality is performed by Invista, not by hosts or arrays, and does not consume host CPU cycles. Administrators can use the Element Manager console or the Invista CLI to control cloning operations. Active clones are managed as a Clone Group, which consists of a source volume and one or more clone volumes. Clones can be built on volumes that span heterogeneous arrays. Invista cloning requires that the source and clone volumes be the same size. Clones created with Invista can be used for backups, restoring data, testing, report creation, and similar tasks.
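The clone-group rules stated above (one source, one or more clones, sizes must match, arrays may differ) can be captured in a tiny model. This is a hypothetical sketch, not the Invista CLI or API.

```python
# Hypothetical Clone Group model: a source volume plus same-size clones.
class CloneGroup:
    def __init__(self, source_name, source_blocks):
        self.source = (source_name, source_blocks)
        self.clones = []

    def add_clone(self, name, blocks):
        # Clones may live on heterogeneous arrays, but must match the
        # source size exactly.
        if blocks != self.source[1]:
            raise ValueError("clone must match source size")
        self.clones.append(name)

cg = CloneGroup("blue_vvol", 1000)
cg.add_clone("yellow_clone", 1000)   # accepted: same size
try:
    cg.add_clone("red_clone", 500)   # rejected: size mismatch
except ValueError as e:
    print(e)                         # -> clone must match source size
```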



Dynamic Volume Mobility


Key features
Non-disruptive, high-speed movement of data across heterogeneous arrays

Use cases
Reduce planned application downtime
Roll out applications to production
Move legacy applications to lower tiers of storage

Reduce migration costs


Perform lease roll-overs or technology refreshes faster

Increase ability to meet service levels


Match storage AND host capacity to application performance requirements
Integral component of Information Lifecycle Management
EMC Storage Virtualization Foundations - 47

In Invista, data mobility refers to the high-speed, non-disruptive movement of data from one virtual volume to another. The source and destination arrays must be available to Invista. The move is transparent to the host: there is no requirement to reboot or take other action due to the migration, and the host sees the same virtual volume before and after the data has been moved, regardless of which storage array contains the data. In the example, the data on the green volume is being moved to the blue volume. Data mobility is a valuable tool any time the customer needs to move data without impacting the application. For example, it is useful when a lease expires on a storage array and the data needs to be retained; in this case, the data can be moved to the new array while the application is running.
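Why is the move transparent? Because the host addresses the virtual volume by name, not by its backing device, the virtualization layer can copy the data and then repoint the mapping. The sketch below is an illustrative assumption about the mechanism, not Invista internals; all names are invented.

```python
# Illustrative mobility sketch: copy the data, then atomically repoint
# the virtual volume. The host-visible name never changes.
volume_map = {"app_vvol": "green_array/LUN_2"}   # host name -> backing device
backing = {"green_array/LUN_2": b"app data"}

def move(vvol, dest):
    src = volume_map[vvol]
    backing[dest] = backing[src]   # background copy to the new array
    volume_map[vvol] = dest        # transparent switch; host path unchanged
    del backing[src]               # source can now be reclaimed

move("app_vvol", "blue_array/LUN_9")
print(volume_map["app_vvol"])      # -> blue_array/LUN_9
print(backing["blue_array/LUN_9"]) # -> b'app data'
```

A real implementation must also mirror in-flight writes to both sides during the copy; that bookkeeping is omitted here for brevity.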



Remote Replication: Virtual to Virtual Volumes


Applications

RecoverPoint performs remote replication at the network level, enabling virtual-volume-to-virtual-volume replication
Leading-edge compression reduces bandwidth 3x to 15x, reducing monthly connectivity charges

Applications

SAN

SAN

Invista
Data Path Controller Control Path Cluster Data Path Controller

RecoverPoint WAN

RecoverPoint
Data Path Controller

Invista
Control Path Cluster Data Path Controller

Active volume

Local CDP journal

Remote CDP journal

Remote volume

Data

Data

Physical storage

Physical storage
EMC Storage Virtualization Foundations - 48

RecoverPoint is a network-based remote-replication product. It provides:
Disaster recovery capability for Invista
Virtual-volume-to-virtual-volume replication, enabling IT managers to employ virtualization technologies across multiple sites for disaster tolerance and enhanced availability
Synchronous or asynchronous bi-directional replication between sites
RecoverPoint utilizes leading-edge compression technology to reduce bandwidth requirements (by 3x to 15x) and save on leased-line costs. In addition, it incorporates continuous data protection (CDP) to protect against data corruption and to ensure data consistency across application volumes at remote sites.
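The bandwidth claim above is easy to make concrete with back-of-envelope arithmetic. The change rate below is a hypothetical figure chosen for illustration, not a RecoverPoint specification.

```python
# Back-of-envelope sketch of the 3x-15x compression claim: the WAN
# bandwidth needed to replicate a given application change rate.
change_rate_mbps = 300  # hypothetical application write rate to replicate

for ratio in (3, 15):
    print(f"{ratio}x -> {change_rate_mbps / ratio:.0f} Mb/s WAN")
# 3x  -> 100 Mb/s WAN
# 15x -> 20 Mb/s WAN
```

The same arithmetic is what turns compression into a monthly connectivity-cost argument: a smaller required WAN pipe means a cheaper leased line.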



Remote Replication: Virtual to Physical Volumes


Applications

RecoverPoint supports heterogeneous storage, enabling virtual to non-virtual replication


Lowers cost of remote replication of virtual volumes

Applications

SAN

SAN

Invista
Data Path Controller Control Path Cluster Data Path Controller

RecoverPoint WAN

RecoverPoint

Active volume

Local CDP journal

Remote CDP journal

Remote volume

Data

Physical storage
EMC Storage Virtualization Foundations - 49

Unlike some products from EMC's competitors, RecoverPoint supports heterogeneous storage, as well as virtual-to-virtual or virtual-to-non-virtual deployments. With Invista and RecoverPoint, you have the option of remotely replicating from a virtual environment to Tier 1 or Tier 2 storage in a traditional storage configuration. You do not need to purchase a second Invista instance at the remote site to obtain disaster tolerance, which reduces the cost and complexity of your remote site.



Module Summary
Key points covered in this module: Storage virtualization concepts Invista design and benefits Major hardware components of Invista Invista management interfaces Invista services (functionality) Invista theory of operations Invista configuration strategies


EMC Storage Virtualization Foundations - 50

These are the key points covered in this module. Please take a moment to review them.



Course Summary
Key points covered in this course: Virtual infrastructure VMware product differences File-level virtualization basic concepts Rainfinity features, functions, and benefits Benefits, features, and advantages of an Invista solution Concepts and benefits of storage virtualization


EMC Storage Virtualization Foundations - 51

These are the key points covered in this training. Please take a moment to review them. This concludes the training. Please proceed to the Course Completion slide to take the assessment.

