EMC VPLEX
Architecture and Design
April 2010

Support: Education Services

Welcome to EMC VPLEX Architecture and Design. Click the play button in the lower right-hand corner of this screen to continue.

Copyright © 2010 EMC Corporation. All rights reserved. These materials may not be copied without EMC's written consent. EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

THE INFORMATION IN THIS PUBLICATION IS PROVIDED "AS IS." EMC CORPORATION MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.

Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.

EMC², EMC, EMC ControlCenter, AdvantEdge, AlphaStor, ApplicationXtender, Avamar, Captiva, Catalog Solution, Celerra, Centera, CentraStar, ClaimPack, ClaimsEditor, ClaimsEditor Professional, CLARalert, CLARiiON, ClientPak, CodeLink, Connectrix, Co-StandbyServer, Dantz, Direct Matrix Architecture, DiskXtender, DiskXtender 2000, Document Sciences, Documentum, EmailXaminer, EmailXtender, EmailXtract, enVision, eRoom, Event Explorer, FLARE, FormWare, HighRoad, InputAccel, InputAccel Express, Invista, ISIS, MaxRetriever, Navisphere, NetWorker, nLayers, OpenScale, PixTools, Powerlink, PowerPath, Rainfinity, RepliStor, ResourcePak, Retrospect, RSA, RSA Secured, RSA Security, SecurID, SecurWorld, Smarts, SnapShotServer, SnapView/IP, SRDF, Symmetrix, TimeFinder, VisualSAN, VSAM-Assist, WebXtender, where information lives, xPression, xPresso, Xtender, Xtender Solutions; and EMC OnCourse, EMC Proven, EMC Snap, EMC Storage Administrator, Acartus, Access Logix, ArchiveXtender, Authentic Problems, Automated Resource Manager, AutoStart, AutoSwap, AVALONidm, C-Clip, Celerra Replicator, CLARevent, Codebook Correlation Technology, Common Information Model, CopyCross, CopyPoint, DatabaseXtender, Digital Mailroom, Direct Matrix, EDM, E-Lab, eInput, Enginuity, FarPoint, FirstPass, Fortress, Global File Virtualization, Graphic Visualization, InfoMover, Infoscape, MediaStor, MirrorView, Mozy, MozyEnterprise, MozyHome, MozyPro, NetWin, OnAlert, PowerSnap, QuickScan, RepliCare, SafeLine, SAN Advisor, SAN Copy, SAN Manager, SDMS, SnapImage, SnapSure, SnapView, StorageScope, SupportMate, SymmAPI, SymmEnabler, Symmetrix DMX, UltraFlex, UltraPoint, UltraScale, Viewlets, and Visual SRM are trademarks of EMC Corporation.

All other trademarks used herein are the property of their respective owners.


Course Overview

Description
This course provides detailed coverage of VPLEX in typical data center environments. It comprehensively addresses product architecture, host-to-virtual-storage implementation, system environment sizing, and management and monitoring of VPLEX environments.

Audience
This course is intended for audiences who are presently, or planning to be, engaged in positioning VPLEX and performing VPLEX solutions design.

Objectives
Upon successful completion of this course, you should be able to:
- Describe VPLEX system architecture and configuration options
- Position solutions utilizing VPLEX, and describe their benefits to the customer
- Describe key VPLEX features, how they can be effectively used, and high-level tasks for implementing them
- Explain how VPLEX can be integrated into your customer's production environment
- Perform planning and design for VPLEX deployment

EMC believes the information in this course is accurate as of its publication date. It is based on pre-GA product information, which is subject to change without notice. For the most current information, see the EMC Support Matrix and product release notes in Powerlink.

This course provides an introduction to EMC VPLEX. It describes VPLEX system architecture, key features, and recommended implementations. The training provides familiarity with major VPLEX solutions design concerns. It also includes a high-level view of implementation tasks related to specific VPLEX features, functionality, and management.

Course Modules
- Module 1: VPLEX Technology and Positioning
- Module 2: Architecture - Physical and Logical Components
- Module 3: VPLEX Functionality and Management
- Module 4: Planning and Design Considerations

This eLearning course is structured into four modules:
- Module 1 briefly covers EMC's vision on block storage virtualization, and how VPLEX is being positioned.
- Module 2 discusses the underlying technology and architecture.
- Module 3 covers the major features and capabilities available in the current release.
- Module 4 addresses the significant planning and design considerations relevant to VPLEX deployment.

Module 1: VPLEX Technology and Positioning

This module introduces fundamental concepts relevant to VPLEX technology, local federation, and distributed federation.
Upon successful completion of this module, you should be able to:
- Articulate how VPLEX can enable EMC's vision of the journey to the private cloud
- Describe VPLEX local and distributed federation
- Provide a high-level system view of VPLEX Local and VPLEX Metro
- Describe typical scenarios where VPLEX technology can be effectively applied

The introductory module briefly outlines EMC's vision on block storage virtualization, and positions VPLEX-enabled solutions within the broader context of that vision.

Journey to the Private Cloud: Information Infrastructure
- Reduce CapEx and OpEx: leverage efficiency technologies
- Manage at Scale: simplify and automate
- Optimize Service Levels: tier and consolidate
- Deliver Always-On: 24 x forever availability
Transitioning to Private Cloud

When EMC thinks of the Private Cloud, it is describing a strategy for your infrastructure that enables optimized resource use. This means you are optimized for energy, power, and cost savings. You can scale up and out simply and apply automated policies, and you can guarantee greater availability and access for your production environment, significantly reducing or eliminating downtime.

EMC Vision: Virtual Storage

Capabilities that free information from physical storage (Automated, Integrated, On-Demand, Efficient, Always-on, Secure):
- Move thousands of VMs over thousands of miles
- Batch process in low-cost energy locations
- Dynamic workload balancing and relocation
- Aggregate big data centers from separate ones
- 24 x forever: run applications without restart. Ever!
FAST + Federation + Storage Virtualization

For years, users have relied on physical storage to meet their information needs. New and evolving changes, such as virtualization and the adoption of Private Cloud computing, have placed new demands on how storage and information are managed. To meet these new requirements, storage must evolve to deliver capabilities that free information from a physical element to a virtualized resource that is fully automated, integrated within the infrastructure, consumed on demand, cost-effective and efficient, always on, and secure. The technology enablers needed to deliver this combine unique EMC capabilities such as FAST, Federation, and storage virtualization. The result is a next-generation Private Cloud infrastructure that allows users to:
- Move thousands of VMs over thousands of miles
- Batch process in low-cost energy locations
- Enable boundary-less workload balancing and relocation
- Aggregate big data centers
- Deliver 24 x forever, and run or recover applications without ever having to restart.

EMC VPLEX Architecture

Local and Distributed Federation: next-generation data mobility and access, across EMC and non-EMC arrays.
- Scale-Out Cluster Architecture: start small and grow big with predictable service levels
- Advanced Data Caching: improve I/O performance and reduce storage array contention
- Distributed Cache Coherence: automatic sharing, balancing, and failover of storage domains within and across VPLEX Engines
- AccessAnywhere
Available April 2010.

EMC VPLEX is a next-generation architecture for data mobility and information access. It is based on unique technology that combines scale-out clustering and advanced data caching, with unique distributed cache coherence intelligence, to deliver radically new and improved approaches to storage management. This architecture allows data to be accessed and shared between locations over distance via a distributed federation of storage resources. The first products being introduced based on this architecture include configurations that support local and metro environments, with additional products planned for future releases.

EMC VPLEX Capabilities

Local Federation (storage virtualization) and Distributed Federation (AccessAnywhere), across EMC and non-EMC arrays:
- Streamline storage refreshes, consolidations and migrations - within, across, and between data centers over distance
- Simplify multi-array allocation, management, and provisioning - and enable information to be accessed anywhere
- Pool storage capacity to extend the useful life of N-1 storage assets - and provide just-in-time storage services via scale-out

Distributed federation builds on traditional virtualization by adding the ability to transparently move and migrate data within and across data centers. This simplifies multi-array storage management and multi-site information access, and allows capacity to be pooled and efficiently scaled on demand.

VPLEX Local: Overview
- Simplify provisioning and volume management
  - Centralize management of block storage in the data center
  - Simplify storage provisioning, management and monitoring
  - Physical storage needs to be provisioned just once, to the virtualization layer
- Non-disruptive data mobility: optimize performance, redistribute and balance workloads among arrays
- Workload resiliency: improve reliability, scale out performance
- Storage pooling: manage available capacity across multiple frames based on SLAs
(Diagram: VPLEX Local, single cluster)

Around 2003, storage virtualization was introduced as a viable solution. The primary value proposition of storage virtualization was moving data non-disruptively. Customers looked to this technology for transparent tiering, moving back-end storage data without having to disrupt hosts, simplified operations over multiple frames, as well as ongoing data moves for tech refreshes and lease rollovers. Customers required tools that enabled storage moves to be made without forcing interaction and work at the host and database administration levels. This concept of a virtualization controller was introduced and took its place in the market. While EMC released its own version of this with the Invista split-path architecture, we also continued development on both Symmetrix and CLARiiON to integrate multiple tiers of storage within a single array. Today, we offer Flash, Fibre Channel and SATA within EMC arrays, and a very transparent method of moving data across different storage types and tiers with our virtual LUN capability. We found that providing both choices for customers allowed our products to meet a wider set of challenges than if we only offered one of the two options. The challenges addressed by traditional storage virtualization, which can be broadly categorized as simplified storage management, still exist today. VPLEX local federation can solve this class of problems within the context of a single data center. However, we have also seen these data center issues evolve. Newer, different problems have emerged that require new solutions, as we will see next when we discuss distributed federation.

VPLEX Metro: Overview
- AccessAnywhere: block storage access within, between and across data centers
  - Within synchronous distances: approximately 60 miles or 100 kilometers
- Connects two VPLEX storage clusters together over distance
  - Enables virtual volumes to be shared by both clusters
  - Provides unique distributed cache coherency for all reads and writes
  - Both clusters maintain the same identity for a volume, and preserve the same SCSI state for the logical unit
- Enables VMware VMotion over distance
(Diagram: Cluster 1 / Site A and Cluster 2 / Site B - VPLEX Metro, two clusters)

With VPLEX distributed federation, it becomes possible to configure shared volumes to hosts that are in different sites or failure domains. This enables a new set of solutions that can be implemented over synchronous distances, where earlier these solutions could reside only within a single data center. VMware VMotion over distance is a prime example of such a solution. Another key technology that enables AccessAnywhere is remote access. This makes it possible for block storage to be accessed as though it were local, even though it is remote.

Example: Current Workload Relocation within Sites

(Diagram: two failure domains, Domain 1 / Site 1 and Domain 2 / Site 2, separated by synchronous distance of up to 100 km. Each site runs virtualized Microsoft workloads - MS Exchange mail servers, SharePoint 2007 web front ends and Excel services, SQL Server 2008, and Windows 2008 file and print servers - on VMFS volumes backed by Symmetrix, CLARiiON and third-party arrays behind each site's SAN. VMotion is used within each site for planned events.)

Challenges:
- Uneven resource utilization across sites
- Planned events requiring shutdown

This typical scenario deals with a dual-site environment with virtualized Microsoft application servers at each site. VMotion can currently leverage shared SAN storage to move VMs across ESX servers within each site. However, the customer is now looking to expand the scope of VMotion beyond site boundaries to further improve resource utilization, and to handle planned events that may affect an entire site.

Proposed: VMotion Over Distance with VPLEX

(Diagram: the same two sites, now connected by an FC MAN over synchronous distance of up to 100 km. The VMFS volume is placed on a VPLEX distributed device visible at both sites, and VMs are moved between sites with Distance VMotion.)

Addressing the challenges:
- Distance VMotion: load balance across sites
- Planned site-wide events: move applications proactively to the other site

Other potential benefits:
- Disaster avoidance
- Improved infrastructure availability and performance
- Power/energy savings by moving VMs across sites

The proposed solution can accomplish this as follows. It involves a VPLEX Metro spanning the sites, with the application VMs using shared datastores built on VPLEX distributed devices. This enables non-disruptive distance VMotion across sites, thereby addressing the customer's primary challenges. Distance VMotion also opens up other possibilities for this customer, as listed here.

VPLEX Local: Single Cluster
- 1 to 4 Virtualization Engines per rack
- Up to 8,000 total Virtual Devices per cluster
- N+1 performance scaling
- Cache write-through to preserve array functionality

Supported user environments at General Availability:
Host Platforms: ESX, Windows, Solaris, AIX, HP-UX, Linux
Multipathing: PowerPath, VMware NMP
Volume Managers: VxVM, AIX LVM, HP LVM
Arrays (at GA): VMAX, DMX, CLARiiON, HDS 99x0, USP-V, USP-VM
SAN Fabrics: Brocade, McData and Cisco

Shown is a summary of the key characteristics of a VPLEX Local, or single-cluster, configuration. Among our key value propositions: you can start small and scale up, and you get centralized management as well as predictable performance and availability. The engines are arranged in a true cluster, which means I/O that enters the cluster from anywhere can be serviced from anywhere. The engines are arranged in an N+1 configuration, which means that as you add more engines, you increase the memory, ports and performance of the total cluster. The cluster can withstand the failure of any device and any component, and will continue to operate and provide storage services as long as just one device survives. You get transparent mobility across heterogeneous arrays. If you have a need to extend these capabilities out over distance, or across multiple failure domains within a single site, a VPLEX Metro configuration may be a more appropriate choice.

VPLEX Metro: Dual Cluster (MetroPlex)
- Up to 8 Virtualization Engines
- 16K total Virtual Devices (8K per cluster, or shared)
- Within or across data centers
- Synchronous distance support

Here is a brief synopsis of VPLEX Metro configurations, limits and key capabilities. As we saw with VPLEX Local, each single cluster can support 8,000 back-end Storage Volumes and 8,000 Virtual Volumes, regardless of whether you specify 1, 2 or 4 engines. The number of engines influences the total number of FE/BE ports available, and thus scalability and obtainable performance relative to the number of hosts and storage array ports to be serviced. A VPLEX Metro dual cluster can support a total of 16,000 front-end and 16,000 back-end devices. However, when creating distributed RAID 1 devices, remember that you are consuming two devices, one from each cluster in the Metro, so if all devices are DR1s the limit is 8,000 front-end devices. One view of a MetroPlex is each cluster servicing a different physical site, with up to 100 km between sites. An equally useful alternate view is two joined clusters at a single site with shared LUNs between them. You may choose to implement these two clusters as two different targets within separate failure domains, for example in the same data center. At GA, VPLEX will support clustered host file systems including VMFS. With this deployment, multiple VMFS servers can read and write the same file system simultaneously, while individual virtual machine files are locked. We will also extend support over time to include Sun Cluster, HP cluster, IBM cluster and CXFS. Currently there is a limitation for stretched host clusters over distance: if one site fails, you need to perform a manual restart of the application on the failed site.

Module 2: Architecture - Physical and Logical Components

This module describes the physical and logical components comprising a VPLEX system, the currently available federation features, and their internal operation.
Upon successful completion of this module, you should be able to:
- Provide a comprehensive view of VPLEX Local and VPLEX Metro
- Describe VPLEX hardware and software architecture at a high level

This module describes the physical components and logical components comprising a VPLEX system.

VPLEX Architecture

(Diagram: Cluster 1 / Site A and Cluster 2 / Site B. At each site, hosts connect to VPLEX front-end ports; Virtual Volumes are served by VPLEX Directors within VPLEX Engines, which communicate over LCOM; VPLEX back-end ports connect to EMC and non-EMC arrays. Each cluster has a VPLEX Management Server on the IP network, and the two clusters are linked by an FC MAN.)

Let's look at a typical production SAN environment, and how VPLEX fits and works within it. The basic building block of a VPLEX system is the Engine. Multiple engines can be configured to form a single VPLEX cluster for scalability. Each Engine includes two high-availability Directors with front-end and back-end Fibre Channel ports for integration with the customer's fabrics. VPLEX does not rely on (or require) any particular fabric intelligence. The Director FE and BE ports show up as standard F-ports on the fabrics. VPLEX technology can work equally well with Brocade or Cisco fabrics, with no dependency on switching hardware or firmware. Directors within a cluster communicate with each other via redundant, private Fibre Channel links called LCOM links.

Each cluster includes a 1U Management Server with a public IP port for system management and administration over the customer's network. The Management Server also has private, redundant IP network connections to each Director within the cluster.

VPLEX implementation fundamentally involves three tasks: presenting SAN volumes from back-end arrays to VPLEX engines via each Director's back-end ports; packaging these into sets of VPLEX Virtual Volumes with the desired configurations and protection levels; and presenting Virtual Volumes to production hosts in the SAN via the VPLEX front end.

Currently a VPLEX system can support a maximum of two clusters. A dual-cluster system is called a MetroPlex. For a dual-cluster implementation, the two sites must be less than 100 km apart, with round-trip latency of 5 ms or less on the FC links. VPLEX clusters within a MetroPlex communicate via FC over the Directors' FC MAN ports. VPLEX implements a VPN tunnel between the Management Servers of the two clusters. This enables each Management Server to communicate with Directors in either cluster via the private IP networks. With this design, it is possible to conveniently manage a MetroPlex from either of the two sites.

VPLEX Engine: Characteristics
- Dual HA Directors per engine
- GeoSynchrony software runs on each Director to provide VPLEX features and functionality
- 32 x 8 Gb/s Fibre Channel FE/BE ports, for fabric connectivity to hosts and storage arrays
- Intel multi-core CPUs
- 64 GB (raw) of cache memory
- Redundant power supplies
- Integrated battery backup
- Built-in Call Home support
(Diagram: each Director provides 8 Gb/s Fibre Channel host and array ports, a multi-core CPU complex and global memory; the two Directors are connected by a Fibre Channel interconnect.)

The engine itself is designed with a very highly available hardware architecture. It hosts two Directors with a total of 32 Fibre Channel ports, 16 FE and 16 BE. All major engine components are redundant. The engine is built for performance with a large cache, and has fully redundant power supplies, battery backup and EMC Call Home capabilities to align with our support best practices.

Distributed Cache Coherency

(Diagram: an engine cache coherency directory maps block addresses 1 through 13 to owning caches A, C, E and G, backed by per-director cache directories A through H. One host issues a new write to block 3 while another host reads block 3.)

The VPLEX environment is dynamic and uses a hierarchy to keep track of where I/Os go. An I/O request can come from anywhere and will be serviced by any available engine in the VPLEX cluster. VPLEX abstracts the ownership model into a high-level directory that is updated for every I/O and shared across all engines. The directory uses a small amount of metadata, and tells all other engines in the cluster, in 4K blocks, which block of data is owned by which engine and at what time. The communication that actually occurs is much less than the 4K blocks that are actually being updated. If a read request comes in, VPLEX automatically checks the directory for an owner. Once the owner is located, the read request goes directly to that engine. Once a write is done and the table is modified, if another read request comes in from another engine, it checks the table and can then pull the read directly from that engine's cache. If it is still in cache, there is no need to go to the disk to satisfy the read. This model also enables VPLEX to stretch the cluster, as we can distribute this directory between clusters and, therefore, between sites. The design has minimal overhead, is very efficient, and enables effective communication over distance.

VPLEX Hardware Components: Engine

(Photos: VPLEX Engine front, with Director B above Director A, and VPLEX Engine back.)
- Directors: front-end ports provide active/active access to virtual volumes and process Fibre Channel SCSI commands from hosts

The two directors within a VPLEX engine are designated A and B; Director A is below Director B. Each director contains dual Intel quad-core CPUs that run at 2.4 GHz, 32 GB of read cache memory, and a total of sixteen 8 Gbps FC ports, 8 front-end and 8 back-end. Both directors are active during cluster operations.

VPLEX Hardware Components: I/O Modules

(Photo: front-end, back-end, and COM/GigE I/O modules.)

There are a total of 12 I/O modules in a VPLEX engine. Ten of these modules are Fibre Channel and two are GigE. The Fibre Channel ports can negotiate up to 8 Gbps. Four FC modules are dedicated to front-end use and four to the back end. The two remaining FC modules are used for inter- and intra-cluster communication. The two GigE I/O modules are not utilized in this release of VPLEX.

VPLEX Hardware Components: DAE

(Photos: internal DAE behind its screen, internal DAE with the screen removed, and an SSD drive carrier.)

VPLEX internal SSDs can be accessed from the front of a VPLEX system. Each director is assigned one SSD, and boots from it. SSDs reside within an SSD drive carrier behind the DAE screen. Each SSD drive carrier can hold two 2.5-inch SSDs; however, only one SSD is installed per drive carrier. Each SSD has a drive capacity of 30 GB.

VPLEX Hardware Components: I/O Module Carrier

(Photo: I/O module carrier.)

A VPLEX engine contains two I/O module carriers, one for Director A and one for Director B. The one on the right is for Director A and the one on the left is for Director B. There are two I/O modules per carrier. The carrier shown in this picture contains a Fibre Channel module and a GigE module. As we just discussed, the Fibre Channel module is used for inter- and intra-cluster communication within a VPLEX system.

VPLEX Hardware Components: I/O Module Types
- 4-port 8 Gbps Fibre Channel IOM
- Used for FC COM and FC WAN connectivity within an I/O module carrier

This is the FC I/O module from an I/O module carrier, which is used for inter- and intra-cluster communication. In this module, ports 0 and 1 are used for local COM, and ports 2 and 3 are used for WAN COM between clusters in a MetroPlex. In medium and large configurations, FC I/O COM ports run at 4 Gbps. In terms of physical hardware, this FC I/O module is identical to the I/O modules used for front-end and back-end connectivity in the director slots.

VPLEX Hardware Components: Management and Power

(Photo: power supplies and management modules.)
- Management modules allow for daisy-chain connection between engines within a cluster
- USB port unused

Each engine contains two management modules and two power supplies. Each management module contains two serial ports and two Ethernet ports. The upper of the two serial ports is open, and can be utilized by EMC field personnel for BIOS and POST access. The lower serial port ships pre-cabled; it is used to monitor the SPS and UPS. The Ethernet ports are used to connect to the Management Server and also to other Directors within the cluster, in a daisy-chain fashion.

VPLEX Hardware Components: VPLEX Management Server
- Central point of management

The VPLEX Management Server is the central point of management for a VPLEX Local and VPLEX Metro system. It ships with a dual-core Xeon processor, a 250 GB SATA near-line drive and 4 GB of memory. The Management Server interfaces between the customer network and the VPLEX cluster; it isolates the VPLEX internal management networks from the customer LAN. It communicates with VPLEX firmware layers within the directors over the private IP connections. A Management Server ships with each VPLEX cluster. Note that the loss of a Management Server does not impact host I/O to VPLEX-provided virtual storage.

Within a MetroPlex there are two Management Servers, one for each cluster. Both clusters can be controlled from either Management Server. A MetroPlex utilizes a secure management connection between the two Management Servers via a VPN connection. A VPLEX cluster can be controlled through the Management Console, which runs on the Management Server. The Management Server also enables remote support via an ESRS Gateway. With this functionality in place, VPLEX is able to send Call Home events and system reports to the ESRS Gateway.

VPLEX Hardware Components: Fibre Channel COM Switches
- Connectrix DS-300B: creates a redundant Fibre Channel network for COM

Connectrix DS-300B switches are used for intra-cluster communication in a VPLEX medium or large configuration. A pair of DS-300B switches ships pre-cabled with medium and large configurations. These switches create redundant Fibre Channel networks for the internal LCOM connections. Each director has two independent LCOM paths to every other director. A VPLEX medium configuration uses 4 ports per switch and a VPLEX large configuration uses 8 ports per switch; 16 ports remain disabled, unused and unlicensed. Each port runs at 4 Gbps. The LCOM networks are completely private: no customer connections are permitted on these switches. Each Connectrix DS-300B utilizes an independent UPS.

VPLEX Local: Supported Configurations

(Diagram: single-engine, dual-engine and quad-engine racks. Each engine has redundant SPSs; the dual and quad configurations add redundant FC switches with UPSs, and every rack includes a Management Server.)

All supported VPLEX configurations ship in a standard, single rack. The shipped rack contains the selected number of engines, one Management Server, redundant standby power supplies (SPSs) for each engine, and any other needed internal components. For the dual and quad configurations only, these include redundant internal FC switches for LCOM connection between the Directors. In addition, dual and quad configurations contain redundant uninterruptible power supplies (UPSs) that service the FC switches and the Management Server. The software is pre-installed, and the system is pre-cabled and pre-tested.

Engines are numbered 1 to 4 from the bottom to the top. Any spare space in the shipped rack is to be preserved for potential engine upgrades in the future; the customer may not repurpose this space for unrelated uses. Since the engine number dictates its physical position in the rack, numbering will remain intact as engines get added during a cluster upgrade.

Configurations at a Glance

                                        Single Engine   Dual Engine   Quad Engine
Directors                               2               4             8
Redundant Engine SPSs                   Yes             Yes           Yes
FE Fibre Channel ports                  16              32            64
BE Fibre Channel ports                  16              32            64
Cache                                   64 GB           128 GB        256 GB
Management Servers                      1               1             1
Internal FC switches (for LCOM)         None            2             2
Uninterruptible Power Supplies (UPS)    None            2             2

Start small and transparently scale out engines.

This table provides a quick comparison of the three different VPLEX single-cluster configurations available at GA.

VPLEX Management: IP Infrastructure

(Diagram: a management client on the customer LAN reaches the Management Server over HTTPS or SSH; the Management Server connects to the Directors over redundant internal IP networks within the EMC VPLEX cluster.)

Shown is a high-level architectural view of single-cluster management. The Management Server is the only VPLEX component that gets configured with a public IP on the customer network. From the customer network, the Management Server can be accessed by a VPLEX storage administrator via an SSH session. Within the SSH session, the administrator can run a CLI utility, called VPlexcli, to manage all aspects of the cluster. A browser-based GUI is also available.

VPLEX Management: VPlexcli (CLI) and VPLEX Management Console (GUI)

VPLEX provides two ways of management: the VPlexcli and the VPLEX Management Console. The VPlexcli can be accessed via a telnet session to TCP port 49500 on the Management Server. The VPLEX Management Console is accessed by pointing a browser at the Management Server IP using the HTTPS protocol. Currently the VPLEX CLI is the more mature interface, providing complete support for all documented features and functionality. The Management Console has known limitations in some areas; for example, mobility operations can only be performed using the CLI. Every time the VPlexcli is accessed, it creates a session log in the /var/log/VPlex/cli/ directory. Logging in through the Management Console also creates a session file in /var/log/VPlex/cli.

VPLEX Management Console:
- Accessed via an HTTPS session to the Management Server
- Intuitive, easy-to-use interface for simplified storage management
- Incorporates comprehensive online help
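As a concrete illustration of the access paths described above (the IP address and login account below are placeholders, not values from this course):

  ssh service@10.10.10.100        # SSH to the Management Server's public IP; address and account are placeholders
  telnet localhost 49500          # from the Management Server shell, attach to the VPlexcli service on TCP port 49500
  # the GUI is reached by browsing to https://10.10.10.100 from the management client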

VPLEX Federation: Constructs

(Diagram: storage volumes are carved into extents, and extents are combined into devices.)

Let's examine the various types of managed storage objects within EMC VPLEX, their interrelationships, and how they relate to entities external to VPLEX, such as customer hosts and customer storage arrays. Back-end storage arrays are configured to present LUNs to VPLEX back-end ports. Each presented back-end LUN maps to one VPLEX Storage Volume. Storage Volumes are initially in the unclaimed state. Unclaimed storage volumes may not be used for any purpose within VPLEX other than to create metavolumes, which are for system-internal use only. Once a Storage Volume has been claimed within VPLEX, it may be carved into one or more contiguous Extents. A single Extent may map to an entire Storage Volume; however, it cannot span multiple Storage Volumes. A VPLEX Device is the entity that enables RAID implementation across multiple storage arrays. VPLEX supports RAID 0 for striping, RAID 1 for mirroring, and RAID C for concatenation. The simplest possible device is a single RAID 0 device comprising one extent, as shown here. Shown next is a more complex device, for example a striped RAID 0 device across two extents. Note that the underlying extents could even be from multiple back-end storage arrays.
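To make the object model concrete, here is a hypothetical VPlexcli-style sequence that takes one of the storage volumes shown later in this course (VPD83T3:600601606bb02500aab2affa35b5de11) through claim, extent, device, and virtual volume. The object names are invented and the exact option spellings can differ by release; treat this as a sketch and verify the syntax against the CLI guide.

  # Sketch only - names are invented, option spellings should be checked in the CLI guide
  storage-volume claim -d VPD83T3:600601606bb02500aab2affa35b5de11 --name sym_0136_vol01
  extent create -d sym_0136_vol01                           # one extent spanning the whole claimed storage volume
  local-device create --name dev_sym_0136_vol01 --geometry raid-0 --extents extent_sym_0136_vol01_1
  virtual-volume create --device dev_sym_0136_vol01         # the top-level device becomes presentable to hosts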

VPLEX Federation: Constructs (continued)

(Diagram: a Storage View ties host initiators and VPLEX front-end ports to a Virtual Volume built on a Top-Level Device (TLD), which may itself be layered over other devices, extents, and storage volumes.)

Devices may be layered on top of other devices. For example, we could create a RAID 1 mirrored device with two dissimilar mirror legs, as shown in this example. Only devices at the top level may have a front-end SCSI personality and be presented to hosts. These are called Top-Level Devices.

A Storage View is the masking construct that controls how virtual storage is exposed through the front end. An operational Storage View is configured with three sets of entities, as shown next. First, any host that the Storage View must present storage to should have one or more initiator ports (HBAs) in the Storage View. Host initiators should be registered with one of several specifically recognized and supported host personality types within VPLEX, such as "default" (which corresponds to most open systems hosts: Windows and Linux), HP-UX, and VCS. A high-availability host should have a minimum of two registered initiator ports within its Storage View. Second, one or more VPLEX front-end ports need to be configured as part of the Storage View. A typical high-availability configuration would use a minimum of one front-end port per fabric, each of them servicing a separate host initiator. Third, a Virtual Volume that maps to the appropriate Top-Level Device needs to be created and then configured as part of the Storage View.

Once a Storage View is properly configured as described and operational, the host should be able to detect and use Virtual Volumes after initiating a bus scan on its HBAs. Every front-end path to a Virtual Volume is an active path, and the current version of VPLEX presents volumes with the product ID "Invista". The host requires supported multipathing software in a typical high-availability implementation.
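The Storage View assembly described above might look like the following in the CLI. The view name, registered initiator name, front-end port name, and virtual volume name are invented (real FE port names follow the director and port layout of your system), and option spellings should be checked against the CLI guide; only the initiator WWN is taken from the encapsulation example later in this module.

  # Sketch only - in practice you would add one initiator and one FE port per fabric
  export initiator-port register -i host1_hba0 -p 0x10000000c987422a      # register the HBA; "default" host type assumed
  export storage-view create -n host1_view -p P000000003CA00136-A0-FC00   # FE port name is illustrative
  export storage-view addinitiatorport -v host1_view -i host1_hba0
  export storage-view addvirtualvolume -v host1_view -o dev_sym_0136_vol01_vol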

Module 3: VPLEX Functionality and Management

This module describes core VPLEX product functionality available at GA.
Upon successful completion of this module, you should be able to:
- Describe local federation capabilities within a VPLEX cluster
- Describe distributed federation capabilities in a MetroPlex
- Explain the VPLEX internal data flow operations for host-to-storage I/O under various scenarios
- Describe key VPLEX administration and maintenance features

This module provides a detailed look at the core VPLEX capabilities that are available at GA.

Provisioning: Using the VPLEX Management Console

(Screenshot: the Management Console home page, showing a provisioning overview task diagram and the Provision Storage and Help links.)

This is the home section of the EMC VPLEX Management Console, and a good logical starting point for many VPLEX management operations. On the right of the screen there are storage provisioning steps; these steps are also links that will redirect a person to the page that implements the step. On the left of the screen there is a picture showing the task sequence to provision virtual volumes out of VPLEX. To the right of the Home button there are two more links, Provision Storage and Help. The Provision Storage link will take the user to an alternative page from which provisioning can be implemented. The Help link will take the user to the VPLEX Online Help page.

Brownfield Implementation: Encapsulation

Encapsulation: the process of converting existing production SAN volumes on hosts to VPLEX volumes, via one-for-one mapping.
- EMC VPLEX maintains physical separation of metadata from host data
  - VPLEX metadata is stored separately on metadata volumes
  - Basis for simple data-in-place mobility
- High-level steps:
  1. Present the native array LUN with existing data to the VPLEX back end
  2. Claim the LUN as a storage volume from VPLEX
  3. Create one extent consisting of the entire storage volume
  4. Create a RAID 0 device on the extent
  5. Create a Virtual Volume on the device
  6. Unprovision the native LUN from the host
  7. Present the VPLEX Virtual Volume to the host
- One-time disruption to the host

Encapsulation is basically data-in-place migration of existing production data into VPLEX, and therefore does not require any additional storage. Encapsulation is disruptive, since you cannot simultaneously present storage both through VPLEX and directly from the storage array without risking data corruption, due to read caching at the VPLEX level. You have to cut over from direct array access to VPLEX virtualized access. This implies a period where all paths to storage are unavailable to the application. With proper planning and execution, this downtime can be minimized. When PowerPath Migration Enabler (PPME) support is put in place, it can help eliminate any disruption.

An alternative migration strategy for existing production hosts is to perform host-based replication from native array volumes to net-new VPLEX volumes. This is non-disruptive but requires additional storage. Host-based copy also consumes cycles on the host, and may need to be planned in a live production environment.

Encapsulation: Migrating a Host to VPLEX

(Diagram: a host connected through Fabric A and Fabric B to VPLEX and the back-end array.)
- Host initiator ports detected: UNREGISTERED 0x10000000c987422a, UNREGISTERED 0x10000000c987422b
- Array storage volumes found: VPD83T3:600601606bb02500aab2affa35b5de11, VPD83T3:600601606bb025006a17a18d5bfade11, VPD83T3:600601606bb02500ba7b6b1c49fade11
- Virtual Volumes detected by the host

This example illustrates the process of cutting over from native SAN volumes to VPLEX volumes via encapsulation. Observe the system state transitions as you step through this task sequence. The basic idea is to logically integrate VPLEX into your production fabrics, between your hosts and storage arrays. To do this, the back-end ports of VPLEX are first connected to the production fabrics. Via suitable zoning and LUN masking, the VPLEX back-end ports, which are technically initiators, detect the back-end storage arrays and volumes. Native array volumes or LUNs are then claimed by VPLEX, allowing your storage administrator to layer VPLEX virtual volumes on them for presentation to hosts.

Front-end configuration is the next logical step. VPLEX front-end ports are connected to the fabrics, and the zoning configuration is modified to allow hosts to detect these ports as targets. Once this is done, VPLEX can detect the host initiators (HBAs), which should then be registered with the appropriate host personality. At this point, by creating a suitable storage view within VPLEX, it becomes possible to present VPLEX volumes to the host initiators. Note that in this process, the original SAN volumes from the array are now repackaged as VPLEX volumes and presented via new FC targets (i.e., the VPLEX FE ports). The recommendation is to remove host access to the original SAN volumes before presenting the encapsulating VPLEX volumes.

Storage Provisioning: Devices
- RAID 1: mirrored VPLEX Device
  - Use arrays from the same tier
  - Ideal for nesting other devices
- RAID 0: striped VPLEX Device
  - Ideal for encapsulated devices
  - Consider stripe depth
  - Avoid striping striped storage volumes
- RAID C: concatenated VPLEX Device
  - Most flexible to grow

The VPLEX device construct forms the basis of the core RAID capabilities supplied by VPLEX. The key value-add is that VPLEX can enable RAID functionality across storage arrays. A RAID 1 VPLEX Device mirrors data to two extents or devices. A RAID 0 VPLEX Device stripes data across multiple extents or devices; the simplest possible device is a RAID 0 device that uses one extent, which is typically what you would configure during encapsulation. A RAID C VPLEX Device concatenates multiple extents or devices. Viewing these as building blocks allows you to consider an organized system of device nesting to meet your customer's specific needs.

Provisioning: Multipathing with EMC PowerPath

(Screenshot: PowerPath device listing for a VPLEX Virtual Volume on a Linux host.)

By default, EMC VPLEX volumes appear with vendor ID "EMC" and product ID "Invista". Thus, any version of PowerPath that can manage Invista volumes can also recognize and manage EMC VPLEX volumes. This example shows a Virtual Volume on a front-end Linux host, as reported by PowerPath. Note that the default load-balancing policy with PowerPath for a VPLEX volume is ADaptive. Other multipathing options, including native OS multipathing, are discussed later, in the Planning and Design module.
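On the host side, a quick way to confirm the behavior described above is the standard PowerPath listing command; no VPLEX-specific syntax is involved, and the exact output layout varies by PowerPath version.

  powermt display dev=all    # PowerPath lists VPLEX virtual volumes under its Invista class, with the ADaptive policy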

Extent Mobility
- Mobility of block data across extents, non-disruptive to the host
- Extent mobility can only be performed within a cluster
- The original extent is freed up for reuse
- Fundamental use: non-disruptive data mobility across heterogeneous storage arrays
(Diagram: a virtual volume on a device whose data is moved from an extent on one storage volume to an extent on another.)

VPLEX Local supports mobility of Extents, potentially across storage array frames, that is completely transparent to any layered virtual volume that is actively servicing I/O requests from a host. As this example shows, the device-to-extent mapping changes at the end of a committed Mobility operation; however, the host to which the volume is provisioned is not even aware of this change. Note that extent mobility requires that both the source extent and the target extent belong to the same VPLEX cluster.

Device Mobility

(Diagram: a virtual volume is moved from one device, built on the extents of one set of storage volumes, to another device built on different storage volumes.)

Another Mobility option with VPLEX Local is mobility at the device level. This could be used, for example, to move data across disparate storage arrays, or even to change the RAID level of a device without disruption. Device mobility is supported across clusters as well, in a MetroPlex environment.

Mobility: Typical Task Sequence
1. dm migration start -n <name> -f <extent/device> -t <extent/device>
2. dm migration commit -m <name> --force
3. dm migration clean -m <name> --force
4. dm migration remove -m <name> --force
(Diagram: the source device or extent is joined with the target under a temporary RAID 1 while the data is copied.)

There are four basic operations involved in moving extents or devices: start, commit, clean, and remove. Data mobility is accomplished by using RAID 1 operations. The start operation first creates a RAID 1 device on top of the source device: it specifies the source device as one of its legs and the target device as the other leg, and then copies the source's data to the target device or extent. This operation can be canceled as long as it is not committed. The commit operation removes the pointer to the source leg; it is not best practice to commit the operation immediately. At this point in time the target device is the only device accessible through the Virtual Volume. The clean operation breaks the source device down all the way to the storage volume level. This operation is optional; the data on the source device is not deleted. The remove operation removes the record from the mobility operation list. Data mobility operations can also be paused and resumed. These commands may be used in conjunction with the VPLEX scheduler to mitigate or eliminate disruption to production I/O.
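As a worked example of the four-step sequence above, using an invented migration name and invented source and target devices:

  dm migration start -n move_db01 -f device_sym_0136_vol01 -t device_cx4_0100_vol01   # builds a temporary RAID 1 and starts the copy
  dm migration commit -m move_db01 --force     # after the copy completes, detach the source leg
  dm migration clean -m move_db01 --force      # optional: tear the source back down to its storage volume
  dm migration remove -m move_db01 --force     # drop the record from the migration list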

Batched Mobility
- Enables scripting of extent and device mobility
- A batch can process either extents or devices, but not a mix of both

Task sequence for batched mobility:
1. Create migration plan: batch-migrate create-plan plan.txt -f <source> -t <destination>
2. Check plan for errors: batch-migrate check-plan plan.txt
3. Start migration, copy data to targets: batch-migrate start plan.txt
4. Commit migration: batch-migrate commit plan.txt
5. Clean up migration: batch-migrate clean --file plan.txt
6. Remove migration record: batch-migrate remove

Batched mobility provides the ability to script large-scale migrations without having to specify individual extent-by-extent or device-by-device migration jobs.
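A hypothetical end-to-end run of the batch commands above; the plan file name and the wildcard source and target device patterns are illustrative, and the last command's argument is an assumption since the slide shows it without one.

  batch-migrate create-plan plan.txt -f dev_cx4_0100_* -t dev_sym_0136_*   # pair source devices with targets in a plan file
  batch-migrate check-plan plan.txt       # validate pairings and capacities before starting
  batch-migrate start plan.txt            # temporary RAID 1s are built and data is copied
  batch-migrate commit plan.txt           # cut over to the targets
  batch-migrate clean --file plan.txt
  batch-migrate remove plan.txt           # plan-file argument assumed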

AccessAnywhere with VPLEX Metro

(Diagram: two AccessAnywhere configurations over synchronous distance. Left, a Distributed Device: hosts at Cluster 1 / Site A and Cluster 2 / Site B share one Virtual Volume built on a distributed device mirrored across arrays at both sites. Right, a Remote Device: a Virtual Volume built on a device at one site is also presented to hosts at the other site.)

AccessAnywhere provides a logical device with full read/write access to multiple hosts at multiple locations, separated by synchronous distance - up to 100 km with the current release.

A key enabling VPLEX Metro technology for AccessAnywhere is distributed mirroring. It enables you to configure a RAID 1 mirrored device with two legs, one on each cluster. Hosts at either site may issue I/O to this shared volume concurrently. Distributed coherent shared cache preserves data integrity of this volume. This mirrored device has the same volume identity at both clusters, while being presented via distinct FC targets (i.e., VPLEX FE ports at each cluster).

Another enabling VPLEX Metro technology for AccessAnywhere is remote access. This allows a device configured on one site to be presented to initiators on the other site for full read/write access. For remote exports, VPLEX's use of sequential read detection logic within the caching layer can significantly improve performance. Feasible configurations therefore include hosts with no SAN storage within their local site.

Distributed Device: I/O Operation

(Diagram: a host at Cluster 2 / Site B writes to a shared volume on a distributed device spanning both sites over the FC MAN.)
1. The host in Cluster 2 / Site B writes data to the shared volume.
2. The data is written through cache to back-end storage at both sites.
3. The data is acknowledged by the back-end arrays.
4. The write is acknowledged to the host once the data has been written to disk.

Let's examine the mechanics of I/O access for each of these enabling technologies in greater detail. With a distributed device, when a host issues a write to the device, the data is placed in the cache of the ingress Director and then written through to the storage arrays at both sites. Only after the storage arrays have acknowledged write completion does the host get the write-complete acknowledgement from VPLEX. This design completely eliminates the risk of losing host data in the event of VPLEX component failures.

Remote Device: I/O Operation

(Diagram: a device exported from Cluster 2 / Site B is accessed by hosts at both sites over the FC MAN. Hosts at either site write to and read from the volume; in each case the write is acknowledged to the host once the data has been written to disk at the site that owns the back-end storage.)

With remote access:
- Writes from hosts on the same cluster as the exported device work the same as writes to any local device: the data is written to the back-end array before the acknowledgement is sent to the host.
- Reads from remote hosts can effectively exploit local cache, remote cache and sequential read-ahead for near-local performance.
- For a write from a remote host, the new data is cached at the remote site. Existing data in the local cache is invalidated with an RPC message; then the new data is sent to the local site and written to the back-end storage.

Distributed Device: Handling Split-brain

Consider a distributed system with two sites, Site A and Site B, connected by an FC MAN. From Site A's perspective, the following two conditions are indistinguishable:
- Partition failure (the inter-site link is down)
- Site failure (Site B is down)
Addressing this is fundamental to the design of distributed applications. With a MetroPlex distributed device, it is handled with a configurable detach rule.

Let's examine the logistics of failure handling in a MetroPlex environment. There are two types of failures in a MetroPlex: partition failures and site failures. Partition failures typically occur more often than site failures. However, from one site's point of view, both partition failures and site failures are handled the same way. MetroPlex handles both types of failures using detach rules, as we'll see next.

Distributed Device: Configuring Detach Rule
- Can specify a predefined rule-set or a customized rule-set

Failure handling behavior is configured by tying a specific detach rule to each distributed device. In the example shown, the rule-set "cluster-1-detaches" implies that upon failure, if cluster-1 survives then it will continue to provide read/write access to the volume, while cluster-2 will suspend I/O activity to this device at the other site. The detach rule can be changed by selecting the distributed device's supporting device and then selecting a different cluster to detach from. Detach rules may be customized to meet specific needs.

Distributed Devices: Supported Detach Options

Detach options currently supported with VPLEX distributed devices in a MetroPlex:
- Biased site detach
- Non-biased site detach
- Manual detach: use with an automated script on the production host(s) to activate read/write access from either site after a failure event

There are three major categories or approaches to detach rules. Either biased site detach or non-biased site detach is simple to implement, with predefined rule-sets in place. Either of these may adequately address the customer's needs; for example, when one site can be clearly viewed as the production site while the other is secondary, within the context of a given DR1. To enable complete control of the VPLEX DR1 environment from a stretched host cluster, the use of manual detach with scripting is recommended.

Monitoring: VPLEX Performance
- Creating monitors: monitor create --name <name> --period <time> --director <Director_Name> --stats <stat>
- Listing monitors
- Destroying monitors: monitor destroy <monitor>

Performance data can be collected on the VPLEX system by creating monitors and sinks. Monitors collect performance statistics on various VPLEX components. These monitors are created within the VPlexcli using the monitor command. By default, monitors collect statistics every 30 seconds; this collection time can be modified if desired. Once a monitor is created, it can be found in the /monitoring directory. Monitors only start collecting data when they have at least one associated sink, as we'll see next. Monitors can be destroyed using the monitor destroy command.
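For example (the director name, statistic name, and period format below are placeholders and assumptions; run monitor stat-list, shown on the next slide, to see which statistics are actually available on your system):

  monitor create --name fe_ops_mon --period 10s --director director-1-1-A --stats fe-prt.ops   # stat name and period format assumed
  monitor destroy fe_ops_mon      # remove the monitor when it is no longer needed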

Monitoring: VPLEX Performance (Cont'd)
- Listing statistics available for monitoring: monitor stat-list
- Monitor collect: updates a performance monitor immediately (ad hoc manual collection of data)
- Supported monitor sink types: console, file, SNMP
- Adding sinks for monitors: monitor add-file-sink -n <name> -f <file_location> -m <monitor_to_add>
- Removing a sink: monitor remove-sink <sink>

To be able to activate and view the statistics collected by a monitor, at least one sink must be created. Sinks are files created to hold output from monitors. Sink files can then be uploaded to other programs, such as MS Excel, to better view the information collected. Three different types of sinks can be created: console, file, and SNMP. SNMP sinks are not supported. Sinks are composed of comma-separated values, and therefore .csv is a useful file name extension. Console sinks have limited use because they interfere with console typing.
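Continuing the hypothetical monitor from the previous slide, a file sink could be attached so that samples land in a CSV file; the sink name and file path are illustrative, and the argument form of monitor collect is assumed.

  monitor add-file-sink -n fe_ops_sink -f /var/log/VPlex/cli/fe_ops_mon.csv -m fe_ops_mon
  monitor collect fe_ops_mon        # force an immediate sample instead of waiting for the period (argument form assumed)
  monitor remove-sink fe_ops_sink   # detach the sink when finished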

VPLEXArchitectureandDesign

Monitoring:EventHandlingandReportGeneration

Engine

ManagementServer
ConnectEMC CallHomeListener SYR EMA_Adaptor VPlexcli

TCPports22, 9010,443and 5901

ESRSGateway

2010 EMC Corporation. All rights reserved.

Module 3: VPLEX Functionality and Management

51

ShownisthehighlevelarchitectureofeventhandlingandmessagingflowfromtheEnginetothe Managementserver,toaproperlyconfiguredESRSgateway. VPlexcli,whichrunsontheManagementServer,pullseventseverysecondfromaprocessonaDirector. TheCallHomeListenerontheManagementserverlooksattheeventsanddetermines,whicheventsshould initiateacallhome.Itthenplacesthoseeventsintothe/opt/emc/VPlex/Event_Msg_Folderdirectoryas .txtfiles. TheEMA_adaptorsjobistotakethetextfilesfromtheEvent_Msg_Folder directoryandcreatethe requiredXMLfilesusingtheEMAAPI.TheEMA_adaptorthenplacesthosefilesintothe /opt/emc/connectemc/poll directory.TheConnectEMCprocesspicksuptheXMLeventfiles andsends themtoESRSGateway.Iftheeventsaresuccessfullysenttothegateway,theyarealsocopiedintothe /opt/emc/connectemc/archive directory.Iftransmissionfailsforsomereason,thecorrespondingevents areplacedintothe/opt/emc/connectemc/failed directory. TCPports22,9010,443,and5901mustbeopenbetweentheManagementServerandtheESRSGateway. TheESRSGatewayclassifiesincomingeventsasbelongingtothis VPLEXinstanceviatheTopLevel Assembly fieldwithineachevent.TheTopLevelAssembly isaclusteruniqueidentifierthatispresetat thefactoryonallenginesofaVPLEXcluster.


Generating System Reports: SYR
SYR generates a complete report of the VPLEX system
Configure SYR (sends a weekly report to the ESRS Gateway): scheduleSYR add -d <day> -t <hour> -m <minute>
List SYR: scheduleSYR list
Manually run SYR: syrcollect


SYR is a process that collects VPLEX system reports to send to the ESRS gateway. SYR reports use the same directories as ESRS events. SYR can be run manually using the syrcollect command, or it can be run at a scheduled time using the scheduleSYR command. SYR reports are sent to the ESRS Gateway by the ConnectEMC process. Once SYR has been scheduled, it will run weekly at the scheduled time.
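For example, using only the commands from the table above (the day, hour, and minute values here are hypothetical, and their exact encoding should be verified against the CLI help before use):
scheduleSYR add -d 0 -t 2 -m 30   (schedule a weekly collection, assuming day 0 and 24-hour time)
scheduleSYR list                  (confirm the schedule)
syrcollect                        (collect and send a report immediately)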


Collecting VPLEX Log Files
collect-diagnostics

Collects logs, cores, and configuration information from the Management Server and the directors
Places a tar.gz file in /diag/collect-diagnostics-out


The collect-diagnostics command can be used when attempting to troubleshoot VPLEX issues. This command produces a tar.gz file containing logs, cores, and configuration information about the Management Server and directors within a VPLEX system. This file is very large and should be moved off the system once it has been generated.
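A minimal sketch of that workflow follows; the user name, Management Server address, archive file name, and destination path are placeholders rather than values from the course material:
collect-diagnostics   (run from the VPlexcli)
scp <user>@<mgmt-server>:/diag/collect-diagnostics-out/<archive>.tar.gz <destination>   (copy the archive off the system from a remote host, then remove it from the Management Server)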


Scheduling: cron-style
schedule: manage and control the timing of specific tasks


The VPlexcli schedule command may be used to run commands in batch mode at an arbitrary time, or periodically on a schedule. This can be particularly useful to offload certain types of activity, for example mobility, to off-production hours.


Maintenance: Non-disruptive Code Upgrade (NDU)
NDU process for VPLEX: code upgrades with no disruption to production hosts performing I/O to VPLEX virtual volumes

Requires best practices to be followed for host connectivity, and supported multipathing software
Uses a notion of first upgraders and second upgraders
First: Director A of every engine is upgraded, then rebooted
Second: Director B of every engine is upgraded, then rebooted
VPLEX Metro upgrade: both clusters are upgraded with a single ndu operation issued on one Management Server


These are the steps to perform an NDU. I/O continues while one side of each engine is being upgraded. The time to complete an NDU should be roughly the same regardless of the number of engines in the system, because an NDU upgrades all A directors at once and then all B directors at once.
First upgraders: every engine's A director is upgraded
The A directors' firmware is shut down during the upgrade
I/O is automatically redirected to the B directors
Once upgraded, the A directors reboot
The A directors begin serving I/O again
Second upgraders: every engine's B director is upgraded
The B directors' firmware is shut down during the upgrade
I/O is automatically redirected to the A directors
Once upgraded, the B directors reboot
The B directors begin serving I/O again


Module 4: Planning and Design Considerations

This module covers key planning and design considerations relevant to VPLEX solutions. Upon successful completion of this module, you should be able to:
Perform planning and design for VPLEX deployment
State and explain the rationale for recommended best practices with VPLEX implementations


This module covers planning and design considerations during deployment of a VPLEX solution.


VPLEX Physical Connectivity: SAN Best Practices
[Diagram: hosts and arrays connected through two mirrored fabrics, Fabric A and Fabric B, to VPLEX front-end (FE) and back-end (BE) ports, with Volume 1 and Volume 2 presented through the cluster]



Deploy mirrored fabrics
Connect every host and every storage array to both fabrics
For each VPLEX Director, distribute front-end ports over both fabrics
For each VPLEX Director, distribute back-end ports over both fabrics
For each FE module and BE module, distribute ports over both fabrics



When deploying the VPLEX cluster, the general rule is to use a configuration that provides the best combination of simplicity and redundancy. In many instances connectivity can be configured to varying degrees of redundancy; however, there are some minimal requirements that should be met.
Deploy mirrored fabrics: this is standard EMC practice. In addition, it is preferable to isolate the front-end fabrics from the back-end fabrics. This ensures clean separation of hosts from storage arrays, and is appropriate in environments where all encapsulation of existing production data is complete and any future provisioning to hosts will be exclusively from VPLEX.
Connect every host and every storage array to both fabrics.
Each Director should be assigned ports on both fabrics; otherwise, a fabric failure could reduce the paths and computing power of the VPLEX, doubling the workload for the surviving Directors. Distribute the FE ports of each director over both fabrics, and distribute the BE ports of each director over both fabrics. These two rules ensure that a complete outage on one fabric does not render a Director non-operational on either the front end or the back end; thus the processing power of the VPLEX system is not compromised by a fabric outage.
Distribute the four ports of each I/O module over both fabrics. Again, this minimizes loss of VPLEX efficiency and processing power in the event of a complete failure on one fabric.


VPLEX Logical Connectivity: Back-end

[Diagram: a director's back-end paths through Fabric A and Fabric B to a volume on a VMAX (ports A0, A1, B0, B1) and a LUN on a CX4-960]

Each director must be provided access to every BE volume in the cluster
Active/Active array: for each director, provide at least one BE path to each volume via each fabric
Active/Passive array: for each director, provide BE paths via both controllers to each LUN via each fabric
VPLEX BE port initiator personality: open systems host; use failovermode=1 with CLARiiON arrays

It is a requirement that each Director have at least one viable, active path to every Storage Volume in a VPLEX cluster. This means that, to be usable, a Storage Volume must be presented to every Director in the same cluster. For active/passive storage arrays, make sure that a given BE port of a Director has both active and passive paths to the storage volume.


VPLEX Logical Connectivity: Front-end
[Diagram: hosts connected through Fabric A and Fabric B to front-end ports on Director A and Director B of Engine 1 and Engine 2, with Volume 1 and Volume 2 presented from the back-end arrays]

Single Engine configuration: for each host, configure FE paths to both Director A and Director B
Dual Engine and Quad Engine configuration: for each host, configure FE paths to A and B directors of separate engines

Front-end hosts should be configured with paths to VPLEX front-end ports, which serve as virtualization targets, via separate fabrics. In a single-engine system, configure at least one front-end path to each director. This enables the host to maintain I/O access to VPLEX volumes during an NDU code upgrade. With dual-engine or quad-engine systems, additional resiliency can be obtained by using A and B directors on distinct engines. This ensures that the host does not lose I/O access to volumes even during a planned or unplanned shutdown of one engine.


SAN Volume Requirements: VPLEX Meta Volume
One active VPLEX meta volume per cluster
Used internally for storing metadata
Failure impact: does not affect production I/O to existing VPLEX volumes
Meta Volume Best Practices:
Required capacity: 78 GB or larger
Recommended: run VPLEX meta volume backup periodically
General requirements for SAN volumes to be used for metas:
Highest possible availability
Not demanding of performance:
Low write I/O, only during configuration changes
High read I/O, only during Director boot and NDU


Listed are the requirements and best practices for VPLEX Meta Volumes. I/O throughput capability is not a serious consideration for a meta volume, since it is updated only during configuration changes; availability is the overriding concern here. It is critical to mirror the Meta Volume onto two different arrays. An additional recommendation is to create meta volumes on two arrays with different refresh timelines, thus avoiding having to migrate the data off both arrays at once. It is important to periodically make backups of the Meta Volume, especially after VPLEX configuration changes or upgrades. The reason for this is to eliminate the possibility of the system ever losing access to newly created VPLEX objects.


SAN Volume Requirements: VPLEX Logging Volume
Required only in a Metro-Plex: at least one logging volume per cluster
Used internally to track changes between legs of distributed RAID 1 devices during loss of connectivity between clusters
Required capacity: 1 bit for every 4 Kbyte page of distributed device
One 10 GB logging volume can support 320 TB of distributed devices
General requirements for SAN volumes to be used for logging:
Very high performance requirement
No I/O activity on logging volumes under normal conditions
High random, small-block write I/O rate during loss of connectivity
High small-block read I/O rate during incremental resynchronization
Highest possible availability
Use striped and mirrored volumes to meet these requirements


Listed are the requirements and best practices for VPLEX logging volumes. A prerequisite for creating a distributed device, or a remote device, is that you must have a logging volume at each cluster. Single-cluster systems, and systems that do not have distributed devices, do not require logging volumes. Logging volumes keep track of changed blocks during an inter-cluster link failure. After a link is restored, the system uses the information in the logging volumes to synchronize the distributed devices by sending only the changed block regions across the link. The logging volume must be large enough to contain one bit for every page of distributed storage space. So, for example, you only need about 10 GB of logging volume space for 320 TB of distributed devices in a Metro-Plex. The logging volume receives a large amount of I/O during and after link outages, so it must be able to handle I/O quickly and efficiently.
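As a quick check of that sizing guideline: 320 TB of distributed devices divided by the 4 KB page size gives roughly 8 x 10^10 pages; at 1 bit per page, that is about 10^10 bytes, or roughly 10 GB of logging volume capacity, matching the one 10 GB logging volume per 320 TB rule of thumb above.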


Storage Views: Best Practices
Each storage view should have:
At least two registered initiators (HBA ports) from each host
Recommended: HBAs distributed over redundant fabrics
At least two VPLEX FE ports: one from an A director, one from a B director
Recommended: ports from different engines when possible, and distributed over redundant fabrics
Create one storage view for all the hosts that need access to the same storage

[Diagram: a storage view containing host initiators, VPLEX FE ports, and virtual volumes]


When creating storage views, follow these best practices:
Create one storage view for all hosts that need access to the same storage, and then add all required volumes to the view.
Redundancy requirements are based on standard EMC guidelines for SAN configuration. Each host should have at least two registered initiators in the view, and access to the volumes should be enabled via at least two VPLEX front-end ports in the view.
When selecting the front-end ports for a storage view, make sure to follow the previously discussed best practices: use ports from at least one A director and one B director and, whenever possible, from directors in separate engines.



Partition Alignment
VPLEX page size = 4K
VMAX track size = 32K
Minimum recommended alignment = 64K
Can't go wrong with 1M


When creating VPLEX virtual volumes, pay attention to partition alignment in order to avoid host-to-storage performance problems in production. Follow these best practices for partition alignment:
Best practices that apply to directly accessed storage volumes also apply to virtual volumes
I/O operations to a storage device that cross page, track, or cylinder boundaries must be minimized; these lead to multiple read or write operations to satisfy a single I/O request
Misaligned partitions can consume additional resources in VPLEX and the underlying storage array(s), leading to less-than-optimal performance
Align partitions for any x86-based OS platform
Align partitions on 32 KB boundaries
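As an illustrative sketch only (the device name is a placeholder, and the choice of tool is ours rather than prescribed by the course), a Linux host could create a partition starting on a 1 MiB boundary, which satisfies the alignment guidance above:
parted -s /dev/<device> mklabel gpt
parted -s /dev/<device> mkpart primary 1MiB 100%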


VPLEX Encapsulation: Best Practices
Data-in-place migration: minimizes downtime
Best Practices:
Claim storage volumes using the application-consistent flag
Prevents reconfigurations other than one-for-one (single extent spanning the entire SAN volume)
Ensures that production data does not become unavailable or corrupted
Migrate into VPLEX in phases
Divide migrations by hosts or initiator groups
Limitation:
Capacity of the encapsulation target must be an integral multiple of 4 Kbytes
Avoid concurrent I/O activity from a host to the native array volume and to the VPLEX encapsulated volume


Here are some of the best practices and requirements for encapsulation. A storage volume to be encapsulated must have a capacity that is an integral multiple of 4 Kbytes; otherwise, encapsulation will render it inaccessible to the host. During encapsulation, hosts should be allowed to perform I/O to virtual volumes or to storage volumes, but not both at the same time, as that can cause data corruption. Migrations should be performed on an initiator-group basis. This way, any necessary driver updates can be conveniently handled on a host-by-host basis.


VPN and MAN COM: Best Practices
[Diagram: Cluster 1 / Site A and Cluster 2 / Site B, each with a VPLEX Management Server and engines (Engine 1 and Engine 2, each with Director A and Director B) attached to local switches; the Management Servers communicate over an IPsec tunnel across the WAN, and the director MAN COM ports connect across sites over two inter-switch links, ISL1 and ISL2]

Metro-Plex requirement: distance <= 100 km; FC MAN round-trip latency < 5 milliseconds
Supported distance extension technologies: FC over dark fibre; DWDM
Best Practice:
Two physical MAN links with similar characteristics, such as latency
Configure long-distance links between VPLEX clusters using ISLs
Redundant MAN fabrics; one connection to each MAN fabric from every VPLEX Director

The diagram illustrates the requirements for IP and FC connectivity between the two clusters in a Metro-Plex. A fundamental requirement, without which the Metro-Plex cannot be installed, is IP connectivity between the VPLEX Management Servers. As part of the initial Metro-Plex install, a VPN tunnel is established for secure connection and interchange of configuration data between these servers. Additionally, the VPLEX Directors of each cluster need visibility to the Directors of the other cluster via their MAN COM ports. Currently, distances of up to 100 km between clusters are supported, and round-trip latency on this link must be less than 5 milliseconds. Bandwidth requirements will obviously depend on the specific customer application; in general, a minimum of 45 Mbps is the guideline. The FC MAN links can use either dark fibre or DWDM. When configuring a Metro-Plex, it is best to make use of two fabrics for the FC MAN connection, allowing a Director to communicate with all the other Directors on either of the two fabrics. This provides the best possible performance and fault tolerance. If MAN traffic must share the same physical link as customer production traffic, then logical isolation must be implemented using VSANs or LSANs. Note that there are specific zoning practices to be followed when exposing Director FC MAN ports to each other; refer to the product installation guide for details.


Mobility Recommendations
Device Mobility
Mobility between dissimilar arrays
Relocate hot devices from one array type to another
Relocate devices across clusters in a Metro-Plex
Batch Mobility
For non-disruptive tech refreshes and lease rollovers
For non-disruptive cross-Plex device mobility
Only 25 devices or extents can be in transit at one time
Additional mobility jobs are queued if more than 25 are submitted
Extent Mobility
Load balance across storage volumes


Listed are some typical applications for each supported type of mobility. Extent mobility can be used for load balancing across storage volumes. This can also be used for array mobility where the source and target arrays have a similar configuration, i.e. the same number of storage volumes, identical capacities, etc. Device mobility can be used for data mobility between dissimilar arrays, relocating a hot device from one type of storage to another.


Distributed Devices: Host Connect Topologies
Local Access
Each host accesses the volume via FE ports on one cluster only
Spanned Access (NOT supported in V4.0)
Each host accesses the volume via FE ports on both clusters


There are two fundamental models for host access to DR1 volumes in a Metro-Plex. With Local Access, the fabrics at the two sites remain separate, with hosts at each site accessing DR1 volumes via local VPLEX FE ports only. With Spanned Access, the hosts have access to fabrics at both sites and can therefore access DR1 volumes through FE ports at both sites. This provides additional resiliency in a stretched host cluster, since with this access model the host can tolerate the loss of an entire VPLEX cluster at either site. Note that Spanned Access is not supported in v4.0.


Scalability and Limits
Parameter: Maximum #
Virtual volumes: 8000 per cluster
Storage volumes: 8000 per cluster
Initiators (HBA ports): 400
Extents: 24000
Meta volume size: 78 GB
RAID 1 mirror legs: 2
Active intra-cluster rebuilds: 25
Active inter-cluster rebuilds: 25
Storage volume size: up to 32 TB
Virtual volume size: up to 32 TB
Total storage provisioned in a system: 8 PB


Shown are some key design limits; a complete table of all EMC VPLEX-related design limits is published in the Release Notes. Always refer to the current version of the product Release Notes for these limits, which are subject to change until GA.


Volume Limits in a Metro-Plex: Example
[Diagram: Cluster 1 / Site A and Cluster 2 / Site B; 2000 stretched volumes built on 2000 distributed devices (layered on 2000 local devices in each cluster) are presented to hosts at both sites, while each cluster also presents 6000 local volumes on 6000 local devices to its own hosts]


Here is an example to illustrate how the maximum limit of 8000 volumes per cluster can be effectively exploited in a Metro-Plex solution. In this scenario, we have 2000 distributed devices with the corresponding 2000 stretched volumes that can be presented to hosts at both sites. These volumes can potentially be shared by hosts across sites, for example to accommodate distance VMotion or stretched host clustering applications. Note that our 2000 top-level distributed devices (that is, devices that are enabled for front-end presentation) are layered upon 2000 local devices within each cluster. In addition, you can configure up to 6000 more top-level local devices at each site, which are presented to local hosts only. These would be suitable for data that does not need to be shared across sites. This example shows how to conform to the 8000-volumes-per-cluster limit while also maximizing the benefit to the customer.
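Checking the arithmetic against the limits table: each cluster presents 2000 volumes on distributed devices plus 6000 volumes on local-only devices, for a total of 8000 volumes, exactly the per-cluster maximum.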


EMC VPLEX: Solution Design Tools
Simple Support Matrix (SSM)
VPLEX Sizing Tool (VST)
Currently a calculator to determine cluster size
Plan is to integrate with BCSD in the future
HEAT
Check for host compatibility with VPLEX
VPLEX Deployment Tool (VDT)
Assists with configurations, implementations, and modifications in VPLEX clusters
Executable that runs on Windows
SVC Qualifier


These are the current VPLEX solution design tools in active development. Network quality and latency assessment is recommended.


VPLEX Sizing Tool


The VPLEX Sizing Tool can be used to validate a proposed VPLEX solution, either single cluster or Metro-Plex. It requires basic information about the type of workload, volume count, host initiator count, and so on. Given this data, the tool checks whether the proposed design is capable of handling the workload from a performance standpoint, and also whether it conforms to the complete list of configuration limits, as listed in the Release Notes.


Simple Support Matrix (SSM)
Current VPLEX SSM is downloadable from:
https://elabnavigator.emc.com/emcpubs/elab/esm/pdf/EMC_VPLEX.pdf


The Simple Support Matrix provides a comprehensive view of current interoperability statements within a compact layout. It will be accessible through eLab Navigator. Supported operating system base platforms, multipathing options, volume management options, and host clustering options are presented here in an easy-to-read format, for quick reference.


Interoperability: Current Limitations
TimeFinder/Clone/Snap: NOT supported at this time
MirrorView/SRDF: can be used only when target or R2 site volumes are not virtualized with VPLEX
Only 1:1 mapping between a VPLEX virtual volume and an array physical volume is supported, because the remote site (target/R2) won't be virtualized
Currently VPLEX supports only thick-to-thick data moves
Virtual provisioning and support for thick-to-thin non-disruptive mobility in VPLEX are planned to be added over time
RecoverPoint: not integrated and supported with VPLEX


Shown are some of the key interoperability limitations at launch time. In v4.0, TimeFinder/Clone/Snap is not supported. MirrorView/SRDF can be used on the VPLEX back end as long as the target or R2 site volumes are not virtualized with VPLEX. This also means that only 1:1 mapping between a VPLEX virtual volume and an array physical volume is supported, because the remote site (target/R2) won't be virtualized. In v4.0, VPLEX supports only thick-to-thick data moves; virtual provisioning and support for thick-to-thin non-disruptive mobility in VPLEX are planned to be added over time. RecoverPoint is not integrated and supported with v4.0. This functionality will be added over time.


Course Summary
EMC VPLEX represents innovative local and distributed federation technology. It is positioned to address non-disruptive workload relocation, distributed data access, workload resiliency, and simplified storage management.
VPLEX Local supports local federation, including consolidation, heterogeneous pooling, and non-disruptive mobility within a data center.
VPLEX Metro supports the above, as well as distributed federation across sites or failure domains within synchronous distances (up to 100 km, latency < 5 msec).
VPLEX offers AccessAnywhere, with key enablers including distributed virtual volumes over distance, remote access, and mobility within and across clusters.


This concludes the instructional portion of this training. These are the key points that have been covered in this course. Please proceed to take the assessment.
