Software-Defined Storage Solutions: Storage Management Plane Functionality in Windows Server 2012 R2 and System Center 2012 R2
Hector Linares, Principal Program Manager
DCIM-B338
Survey results on storage automation:
- 86%: No automation. "I have to wait for someone else to provision storage that I need for a deployment."
- 14%: Targeted automation. "My company invests in storage automation primarily to reduce human error."
- 50%: No automation. "I do not have the expertise to automate storage operations."
- 50%: No automation. "I am in API overload. The datacenter contains multiple arrays, each with a different API surface."
System Center capabilities:
- Infrastructure provisioning: enterprise-class multi-tenant infrastructure for hybrid environments
- Application performance monitoring: deep insight into application health
- Automation and self-service: application-owner agility while IT retains control
- IT service management: flexible service delivery
- Infrastructure monitoring: comprehensive monitoring of physical, virtual, and cloud infrastructure
Insight. Flexibility. Automation.
Scalable Provisioning
- Standards-based management
- Management of pools, LUNs, file shares, zones, and zone aliases
- Storage classification
Storage Automation Built in Context
- VM, host, and cluster storage management
Extensive Device Support
- SAN, NAS, FC switch
- End-to-end discovery and mapping
- Allocation and assignment
Hybrid Cloud Scenarios
- Storage monitoring and capacity trending
- Cloud integration for self-service
Ken, Windows System Engineer at one of the largest casino operators in Europe, manages a Hyper-V virtualization environment. Storage is managed by the storage operations group. One of Ken's business units decides to purchase its own array because of the application's performance requirements.
Ken integrated the new system within minutes. He classified the new storage as BU1-HIIOPS and allocated the classification to the tenant's cloud. Ken has the flexibility to assign the storage to some or all Hyper-V servers and is confident that other business units cannot consume this new capacity. For the application owners, it's business as usual: they simply target the BU1-HIIOPS classification in their templates.
Private Cloud Storage Provisioning
Storage Classification
- Assign classification to storage: create multiple classifications; classify discovered storage pools, disks, and file shares; storage classification inheritance
- Streamline VM deployment: express storage intent in templates; placement is classification aware
- Differentiate capacity in a multi-tenant environment: create clouds with specific classifications; self-service users are restricted to allocated storage based on classification
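The classification workflow above can be sketched with VMM's PowerShell cmdlets. This is a minimal sketch, not the presenter's script: the names "GOLD" and "Pool1" are illustrative, and it assumes an active VMM connection with a discovered storage pool.

```powershell
# Create a classification and assign it to a discovered pool (illustrative names).
$class = New-SCStorageClassification -Name "GOLD" -Description "Tier 1 high-IOPS storage"
$pool  = Get-SCStoragePool -Name "Pool1"
Set-SCStoragePool -StoragePool $pool -StorageClassification $class
# LUNs subsequently carved from Pool1 inherit the GOLD classification.
```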
Assign Classification to Storage: Storage Pool Classification (diagram: the GOLD classification is assigned to Pool1, yielding Pool1 (GOLD))
Assign Classification to Storage: Storage Pool Discovery
Assign Classification to Storage: Inherited Classification (diagram: LUN3 carved from Pool1 (GOLD) inherits the GOLD classification; LUN1 and LUN2 remain unclassified)
Assign Classification to Storage: Storage Disk Classification
Allocate Storage to Cloud
Cloud scope / template scope / instance scope:
- No classification assigned to cloud: all classifications available on the host group; placement selects storage based on available capacity.
- Gold classification assigned to cloud: limited to the Gold classification; placement selects Gold storage with available capacity.
- Gold and Silver classifications assigned to cloud: Gold and Silver classifications available; placement selects storage based on the selection in the template and available capacity.
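Scoping a cloud to a classification can be sketched as follows. The cloud name is illustrative, and the -AddCloudResource usage should be verified against your VMM version's cmdlet help.

```powershell
# Limit a cloud to the Gold classification (illustrative names).
$cloud = Get-SCCloud -Name "Tenant1Cloud"
$gold  = Get-SCStorageClassification -Name "GOLD"
Set-SCCloud -Cloud $cloud -AddCloudResource $gold
# Self-service placement in this cloud is now restricted to Gold storage.
```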
Important classification fix in UR2: http://support.microsoft.com/kb/2932926
Storage Management API (SMAPI) architecture (diagram): applications (Server Manager in Windows Server, System Center Virtual Machine Manager, and ISV or storage vendor tools) call the Storage Management API (WMI), which manages SMP-based subsystems, Storage Spaces enclosures, and SMI-S-compliant subsystems, NAS devices, and FC switches.
CIM Pass-Through Architecture: VMM Server (diagram): the Storage Management Service (SMAPI) on the VMM server talks to the SMI-S provider for the FC/iSCSI array (alongside the SAN admin tool) to discover arrays, pools, and LUNs; create, snapshot, and clone LUNs; and mask and unmask them. On each host, SMAPI works with the HBA provider (port-to-LUN mappings), the NPIV provider (virtual-to-physical port mappings; create and delete virtual ports), and the iSCSI initiator (discovery of portals and targets; log on and log off), and handles enumerate, rescan, mount/unmount, volume-to-disk mapping, and disk-to-LUN mapping.
VMM storage provider architecture (diagram): the VMM service on the VMM server uses SMAPI with three provider types: SMP (SMI-S Storage Service), SMP (partner provider), and SMP (Spaces). VMM reaches the library server, standalone Hyper-V hosts, and Hyper-V clusters (and their VMs) over WSMAN and WMI. Backends include a scale-out file server over SSD/SAS JBODs, a SAN with an embedded SMI-S provider, proxy SMI-S providers for NAS, SAN/NAS, and SAN over FC/iSCSI (which some arrays require), and a proxy SMI-S fabric provider for the Fibre Channel switch, all speaking CIM-XML; proprietary paths remain for SANs without standards-based providers.
Jacquelynn, Infrastructure Engineer at Microsoft, manages the engineering team architecting and designing a new private cloud platform. Jacquelynn asks her team to evaluate a Microsoft-only solution based on Windows Server 2012 R2 and System Center 2012 R2.
Jacquelynn presents the final evaluation and recommendation to her management. Key decisions to virtualize all workloads and use a converged Ethernet infrastructure give Jacquelynn the opportunity to deploy a file-based storage solution.
Meeting Customer Challenges
Addressing customer needs:
- Tenant network isolation
- Allowing migration to standard API access
- Moving to a standard tenant portal
Increasing operational service efficiency:
- Seamless live migration with network connections preserved
- Reducing the number of customer VHD copies on disk
- Rapid copy of VHDs from customer shares into the VMM library
- Top-to-bottom stack monitoring
- Datacenter savings: 41 labs closed; 3,781 machines recycled
- Scalable provisioning: 30,000 VMs/day; >12 million VMs served
- Scalable VM hosting platform: 75,000 VM capacity
- Large-scale infrastructure: 4,000 hosts; 8 PB storage
Scale-out File Server Bare-metal Deployment
Components (diagram): bare-metal servers, WDS server, DHCP server, library server (VHD, drivers), VMM server (host profile for the contoso domain), and the resulting file server nodes. Deployment steps:
1. OOB reboot
2. Boot from PXE
3. Authorize PXE boot
4. Download VMM customized WinPE
5. Run GCE (Generic Command Execution) scripts and configure partitions
6. Download VHD and inject drivers
7. Customize and domain join
8. Install VMM agent; enable the Failover Clustering feature and File Server role
9. Create the SOFS cluster role on the cluster nodes
New in R2
Host Profile / Physical Computer Profile: Role Selection
The workflow is capable of deploying Hyper-V or File Server; the role is selected in the profile. There is no support for both roles on the same node.
Profile options: backwards compatible with VM host profiles (pre-R2). Cannot create teamed NICs with the Windows File Server role; NIC teaming is tightly coupled with the logical switch implementation.
New in R2
Create Physical Computer Profile
$VHD = Get-SCVirtualHardDisk -Name "WINR2MP.vhd"
$RunAsAccount = Get-SCRunAsAccount -Name "esdcvsec"
$AdminCredentials = Get-Credential
$NicProfilesArray = @()
$LogicalNetwork = Get-SCLogicalNetwork -Name "dcmanager"
$NicProfile1 = New-SCPhysicalComputerNetworkAdapterProfile -SetAsGenericNIC -SetAsPhysicalNetworkAdapter -UseStaticIPForIPConfiguration -ConsistentDeviceName "PROD" -LogicalNetwork $LogicalNetwork
$NicProfilesArray += $NicProfile1
New-SCPhysicalComputerProfile -Name "MySOFSProfile" -Description "Profile for scale-out file server" -DiskConfiguration "MBR=1:PRIMARY:QUICK:4:FALSE:OSDisk::0:BOOTPARTITION;" -Domain "dcmanager.lab" -TimeZone 4 -RunAsynchronously -FullName "DEMO" -OrganizationName "DEMO" -ProductKey "11111-11111-11111-11111-11111" -UseAsFileServer -VirtualHardDisk $VHD -BypassVHDConversion $false -DomainJoinRunAsAccount $RunAsAccount -LocalAdministratorCredential $AdminCredentials -PhysicalComputerNetworkAdapterProfile $NicProfilesArray
New File Server Cluster Wizard: Unified Experience
- Bare-metal OS deployment to one or more servers
- Enables the File Server, Failover Clustering, and MPIO features
- Claims specific MPIO devices by ID (including MSFT2011SASBusType_0xA)
- Creates the Scale-out File Server cluster role
- Specify existing servers on the network; automatically brings the scale-out file server under management
- Adds the cluster-aware Spaces provider under management; discovers the file server, storage subsystem, pools, file shares, and physical disks
New in R2
Scale-out File Server Provisioning
Scale-out File Server Management
- Scale-out file server: bare-metal provisioning of file servers; clustering of file server nodes; management using the new cluster-aware Spaces provider
- Spaces pool provisioning: discovery of physical disks; pool creation from physical disks
- Intent-based share provisioning: share creation from a pool, volume, or explicit path; fully automated file-share-from-pool provisioning; parity and mirror Spaces support; NTFS and ReFS support
- File share ACLing: permissions are set on the file share so that each node can access the share; the host management account is added as well for VMM operations
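Intent-based share provisioning can be sketched with the New-SCStorageFileShare cmdlet. The server, pool, and share names are illustrative, and the parameter set should be checked against your VMM version's cmdlet help before use.

```powershell
# Create a file share directly from a pool on a managed SOFS (illustrative names).
$fileServer = Get-SCStorageFileServer -Name "sofs1.dcmanager.lab"
$pool       = Get-SCStoragePool -Name "Pool1"
New-SCStorageFileShare -Name "VMShare1" -StorageFileServer $fileServer `
    -StoragePool $pool -SizeMB 512000
```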
Diagram: Hyper-V hosts running VMs access a scale-out file share backed by a CSV on a Space, carved from a storage pool of SAS disks shared by the file server cluster nodes.
New in R2
Assign Classification to Storage: File Share Classification
Assign Classification to Storage: Inherited Classification (diagram: the SDS classification is assigned to Pool1; Share3 created from Pool1 (SDS) inherits the SDS classification, while Share1 and Share2 remain unclassified)
File Server Cluster Witness Disk
A witness disk is recommended for clusters with 4 nodes or fewer that require N-2 availability. VMM creates the new disk using the New-Volume API in SMAPI. The witness disk is a 2-way mirror, 1 GB in size; the default name is 'Witness-<ID>'.
Agent Management
- Add the file server node as an infrastructure node (the agent version must match the VMM server version)
- Deploy agents to file server nodes using the File Server properties
- After upgrading the VMM server (e.g., the next Update Rollup), the agent version will no longer match; upgrade all agents from Infrastructure Servers in the Fabric workspace
Efficient VHD Ingestion
Diagram: VMM, the VMM library server (library share containing file symlinks), the tenant file share (tenant VHD), and a Hyper-V server. Workflow:
1. "Upload Image" (supply the file share path)
2. Portal submits a request for a new link
3. Workflow creates the symlink
4. Workflow calls the library refresher
5. Workflow creates a temporary template
6. Tenant deploys a new VM
7. Workflow creates the new VM from the template
8. VM is created on the Hyper-V server
Symlinks in Library Share
- For tenants that modify VHDs often; avoids frequent uploads to the library share
- Space efficiency: leverages capacity on tenant file shares
- Network impact: the symlink approach is for rapid ingestion; the library refresher is required to pick up the file (no eventing); VMM relies on the CopyFile2 APIs to offload the file copy; worst case, there is network impact on all three servers (tenant file server, VMM library server, Hyper-V server); works with ODX, the differencing disk optimization, and BITS
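The symlink step itself is ordinary NTFS symbolic-link creation. On Windows Server 2012 R2 (PowerShell 4.0) this can be sketched with mklink, followed by a library refresh so VMM indexes the new file; all paths and share names here are illustrative.

```powershell
# Create a symlink in the VMM library share pointing at the tenant's VHD
# (mklink requires an elevated prompt; paths are illustrative).
cmd /c mklink "D:\VMMLibrary\Tenant1\app.vhdx" "\\tenantfs\share\app.vhdx"

# Trigger the library refresher so the new file is indexed (no eventing).
Get-SCLibraryShare | Where-Object { $_.Path -eq "D:\VMMLibrary" } | Read-SCLibraryShare
```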
Diagram: network traffic flows between the tenant file share (tenant VHD), the VMM library server (symlink in the library share, VM template), and the Hyper-V server (VHD, VM).
MCS Engagement: one of the largest public sector organizations in the Netherlands
This Microsoft MCS engagement focused on improving the customer's application delivery pipeline, minimizing downtime, and delivering cloud efficiencies on-premises. The customer managed multiple home-grown applications that had accumulated over the years. These applications experienced severe downtime due to improper testing and maintenance. Each application was responsible for its own HA and DR, and developers had many platforms to develop on, adding to operational complexity. The final solution delivered a high degree of abstraction by providing PaaS services and capabilities with built-in 'HA' across sites. The environment offered tenant self-service and application automation from development and test through acceptance to production.
Multi-site guest cluster (diagram): Site 1 and Site 2 each host a two-node Hyper-V failover cluster (NODE1, NODE2) connected to a SAN through FC fabrics A and B. Guest cluster nodes vNODE1 (Site 1) and vNODE2 (Site 2) use vHBAs to reach cluster resources on LUNs. The two SANs are linked by a synchronous replication channel (<10 km).
Tenant 1: Production Environment
- Self-service: the service provider manages 'environments' (Dev, Test, Acceptance, and Production); within an environment, the tenant can deploy applications and manage promotion through DTAP. Components: Service Manager
- Automation: environments are highly available using virtualized multi-site guest clusters with replicated storage. Components: Orchestrator, Virtual Machine Manager (service templates, storage management), HP 3PAR SMI-S provider, Brocade SMI-S provider, HP CLX APIs
Fibre Channel Fabric Management
Topologies (diagram):
- Fibre Channel: server (physical or virtual machine) to block storage SAN, controlled by zoning and masking
- Ethernet (iSCSI): server (physical or virtual machine) to block storage SAN, controlled by masking
- Ethernet (SMB3): server (virtual machine) to file storage
Manage FC Fabric
$provider = Get-SCStorageProvider -Name "DCMRRSMISCISCO"
$fabric = Get-SCStorageFabric -Name "2002547FEE79EE81"
$classification = New-SCStorageFabricClassification -Name "PRODUCTION"
Set-SCStorageFabric -EnableManagement -StorageFabric $fabric -StorageFabricClassification $classification
Enumerate zonesets, zones, zone aliases, and zone members. Sample output:
ObjectId : CISCO_Vsan.CreationClassName="CISCO_Vsan",Name="2002547FEE79EE81"
ElementName : VSAN0002
Classification : PRODUCTION
StorageZoneSets : {Vsan2Active, Vsan2Active}
StorageZoneAliases : {t_alias1, t_alias3}
StorageZoneMemberships : {CISCO_ZoneMemberSettingData.InstanceID="2\2002547FEE79EE81\vm_0_zone\2\500507680220214F\true" ...}
StorageZones : {dcmrr25r19n11, sfcvm1_zone, dcmrr25r19n11, sfcvm1_zone...}
FC Fabric Classification
- Identify the fabric using a friendly name
- Classification-aware placement
Storage fabric classification (diagram): Hyper-V clusters connect through virtual HBAs and virtual SANs to zones on FC switches in two fabrics, PROD1 and PROD2.
NOTE: Modify and remove fabric classifications using the CLI.
Create Zone and Add Members (VMM)
#add list of zone members
$zoneset_newZone = Get-SCStorageZoneSet -Name "Active"
$newZone = New-SCStorageZone -Name "MyZone" -StorageZoneSet $zoneset_newZone -AddZoneMembership @("21240002AC000C63", "20230002AC000C63", "10000000C9C17FCA", "10000000C9C17FCB")
Activate Zoneset (VMM)
#Activate zoneset
If ($zoneset_newZone.Active -eq $false) {
    Set-SCStorageZoneSet -StorageZoneSet $zoneset_newZone -Enable
}
Services with FC Storage
Create Virtual SAN and Add Members
#get VM host and fabric classification
$vmHost = Get-SCVMHost -Name "dcmrr25r19n08.dcmanager.lab"
$class = Get-SCStorageFabricClassification -Name Production
#get all HBA adapters and filter down to the ones installed in the
#VM host and attached to the correct fabric
$adaptersToAdd = @()
$adaptersToAdd = Get-SCVMHostFibreChannelHba | where {$_.VMHost -eq $vmHost -and $_.FabricClassification -eq $class}
#create virtual SAN in a job group and use the VM host as the trigger
$jobguid = [Guid]::NewGuid()
New-SCVMHostFibreChannelVirtualSAN -Name "VirtualSAN_CISCO" -Description "Virtual SAN connected to Cisco FC ports" -HostFibreChannelHba $adaptersToAdd -JobGroup $jobguid
Set-SCVMHost -VMHost $vmHost -JobGroup $jobguid
Computer Tier LUN Masking
$virtualMachines = @()
$virtualMachines += Get-SCVirtualMachine -Name "FCDataTier002.dcmanager.lab"
$virtualMachines += Get-SCVirtualMachine -Name "FCDataTier001.dcmanager.lab"
$logicalUnit_0 = Get-SCStorageLogicalUnit -Name "MyLUN"
Register-SCStorageLogicalUnit -StorageLogicalUnit $logicalUnit_0 -VM $virtualMachines
Computer Tier LUN Masking (Scale-Out)
$virtualMachines = @()
$virtualMachines += Get-SCVirtualMachine -Name "FCDataTier001.dcmanager.lab"
$virtualMachines += Get-SCVirtualMachine -Name "FCDataTier002.dcmanager.lab"
$logicalUnit_0 = Get-SCStorageLogicalUnit -Name "MyLUN"
#new VM created by scaling out tier
$newVM = Get-SCVirtualMachine -Name "FCDataTier003.dcmanager.lab"
Register-SCStorageLogicalUnit -StorageLogicalUnit $logicalUnit_0 -VM $virtualMachines -AddVM $newVM
Computer Tier LUN Unmasking (Scale-In)
$virtualMachines = @()
$virtualMachines += Get-SCVirtualMachine -Name "FCDataTier001.dcmanager.lab"
$virtualMachines += Get-SCVirtualMachine -Name "FCDataTier002.dcmanager.lab"
$logicalUnit_0 = Get-SCStorageLogicalUnit -Name "MyLUN"
#mask LUN before scaling in tier
$removeVM = Get-SCVirtualMachine -Name "FCDataTier003.dcmanager.lab"
Unregister-SCStorageLogicalUnit -StorageLogicalUnit $logicalUnit_0 -VM $virtualMachines -RemoveVM $removeVM
Partnerships
Overwhelming support!
SNIA SMI Lab Plugfests (Booth 241)
- Microsoft is committed to standards-based management
- Active plugfest participant: 5 per year
Storage Manageability Certification
Certification stack (diagram): VMM-based scenario tests, native SMAPI functionality tests, and the SNIA SMI CTP, run against the Storage Management API (WMI) with SMP-based subsystems, Storage Spaces enclosures, and SMI-S-compliant subsystems, NAS devices, and FC switches.
Breakout Sessions (session codes and titles)
DCIM-B211 - Service Provider Datacenter Architecture
DCIM-B322 - Implementing Enterprise-Scale Disaster Recovery with Hyper-V Recovery Manager, Network Virtualization, and Microsoft System Center 2012 R2
DCIM-B321 - Windows Azure Pack: Automation Essentials for Tenant Provisioning
DCIM-B212 - The Enterprise: Be Your Own Cloud Provider
DCIM-B337 - File Server Networking for a Private Cloud Storage Infrastructure in Windows Server 2012 R2
DCIM-B346 - Best Practices for Deploying Tiered Storage Spaces in Windows Server 2012 R2
DCIM-B349 - Software-Defined Storage in Windows Server 2012 R2 and Microsoft System Center 2012 R2
DCIM-B377 - Building Disaster Recovery Plans for Microsoft Workloads and Applications with Hyper-V Recovery Manager and Desired State Configuration
DCIM-B421 - Delivering Disaster Recovery Solutions Using Windows Server 2012 R2, Microsoft System Center 2012 R2 and Microsoft Azure
Related content
Find Me Later At...
TechExpo Welcome Reception - Datacenter & Infrastructure Management: Cloud & Datacenter Infrastructure Solutions
Microsoft System Center Alliance Partner Event
User Feedback Program: http://aka.ms/ITProUserPanelSurvey
Let's Talk After TechEd...
E-mail: [email protected]
Twitter: @hectoralinares
SNIA SMI Lab
Brocade, Microsoft, and Hitachi Support for SMI-S
Come Visit Us in the Microsoft Solutions Experience!
Look for Datacenter and Infrastructure Management
TechExpo Level 1 Hall CD
For More Information
- Windows Server 2012 R2: http://technet.microsoft.com/en-US/evalcenter/dn205286
- Microsoft Azure: http://azure.microsoft.com/en-us/
- System Center 2012 R2: http://technet.microsoft.com/en-US/evalcenter/dn205295
- Azure Pack: http://www.microsoft.com/en-us/server-cloud/products/windows-azure-pack
Resources
- Learning: Microsoft Certification & Training Resources, www.microsoft.com/learning
- MSDN: Resources for Developers, http://microsoft.com/msdn
- TechNet: Resources for IT Professionals, http://microsoft.com/technet
- Sessions on Demand: http://channel9.msdn.com/Events/TechEd
Complete an evaluation and enter to win! Scan the QR code to evaluate this session.
© 2014 Microsoft Corporation. All rights reserved. Microsoft, Windows, and other product names are or may be registered trademarks and/or trademarks in the U.S. and/or other countries. The information herein is for informational purposes only and represents the current view of Microsoft Corporation as of the date of this presentation. Because Microsoft must respond to changing market conditions, it should not be interpreted to be a commitment on the part of Microsoft, and Microsoft cannot guarantee the accuracy of any information provided after the date of this presentation. MICROSOFT MAKES NO WARRANTIES, EXPRESS, IMPLIED OR STATUTORY, AS TO THE INFORMATION IN THIS PRESENTATION.