VMAX: What’s New with VMAX All Flash
Scale-Out Storage
Sanel Samardzic, Sr. Systems Engineer - MIT
VMAX AFA and XtremIO
Internal Use - Confidential
VMAX All Flash
15.4TB SSDs for the highest IOPS/TB/floor-tile density
Latency
Density
Simplicity
Engineered
Performance numbers based on 8 V-Bricks, 8K random read hit (RRH)
6+M IOPS, <0.5ms Latency
150GB/s Bandwidth
Appliance-Like Packaging
Software Included
Simple, Simple, Simple
One Tier, Any Skew, No HDDs
The VMAX All Flash Family
Software Package Highlights
• F software: SnapVX, Compression, SRDF, D@RE, eNAS
• FX software: all of the above, plus ViPR Suite, PowerPath/VE, and more…
VMAX 250F: 1M IOPS (RRH-8K), 1PBe capacity, 64 FC/iSCSI ports, 1 to 2 V-Bricks
VMAX 950F: 6.7M IOPS (RRH-8K), 4PBe capacity, 192 FC/iSCSI or 256 FICON ports, 1 to 8 V-Bricks
VMAX All Flash 950F/FX Configuration Details
• V-Bricks in single increments
– Redundant dual director engine design
– 72 Broadwell CPU cores @ 2.3GHz (+ Turbo)
• Up to 3 I/O module pairs per V-Brick
– Each 4x 16Gb FC, 16Gb FICON, or 10Gb iSCSI
– eNAS 10Gb IP
• 2 DAEs per V-Brick
– 240 x 2.5” flash drives per V-Brick
• RAID 5 (7+1) or RAID 6 (14+2)
VMAX All Flash 250F/FX Configuration Details
• V-Bricks in single increments
– Redundant dual director engine design
– 48 Broadwell CPU cores @ 2.2GHz
• Up to 4 I/O module pairs per V-Brick
– Each 4x 16Gb FC or 10Gb iSCSI (no mainframe/FICON support)
– eNAS 10Gb IP
• 2 DAEs per V-Brick (12Gb SAS)
– 50 x 2.5” flash drives per V-Brick
• RAID 5 (3+1, 7+1) or RAID 6 (6+2)
VMAX All Flash 250F/FX sample configuration
• Dual V-Brick system
– 96 cores, 4TB cache
– 80 active drives (+2 spares)
– Up to 64 host ports
– Up to 4 eNAS Data Movers
• 80 x 7.7TB flash (7+1)
– ~500TB writable
– ~1PB effective (2:1 compression)
– ~1.3PB host visible capacity (thin/allocated)
• <5 kVA, <600 lb. (w/o rack)
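The capacity figures above can be sanity-checked with simple RAID arithmetic. This is an illustrative back-of-envelope sketch, not a sizing tool; real usable capacity also depends on spares, vault space, and drive-formatting overhead.

```python
# Back-of-envelope check of the 250F sample configuration's capacity numbers.
# Illustrative only: spares, vault space, and formatting overhead are ignored.
def writable_tb(drives, drive_tb, data_disks, parity_disks):
    """Usable TB for RAID groups of (data + parity) members, e.g. RAID 5 (7+1)."""
    group_size = data_disks + parity_disks
    groups = drives // group_size
    return groups * data_disks * drive_tb

writable = writable_tb(80, 7.7, 7, 1)   # 80 x 7.7TB drives in RAID 5 (7+1)
effective = writable * 2.0              # slide's assumed 2:1 compression
print(round(writable), "TB writable,", round(effective), "TB effective")
# roughly 539 TB writable and 1078 TB effective, matching the slide's ~500TB / ~1PB
```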
VMAX All-Flash Technology Refresh Example (800TB usable)
Replacing a 9-bay VMAX 20K system with a single-bay VMAX All Flash system delivers:
• 10X more performance
• 40% lower TCO
• 87% less energy
• 92% smaller footprint
• 98% fewer drive replacements
VMAX All-Flash Enterprise Data Services
VMAX All Flash Enterprise Data Services
• Extensive ecosystem: eNAS block & file; VMware VAAI & VVols; Microsoft Hyper-V & ODX offload; broadest OS, server/DB, and cluster support
• Proven protection: local replication at scale; multi-site remote replication; SRDF/Metro active/active; ProtectPoint app recovery
• Management automation: embedded Unisphere; Unisphere 360 (200 arrays); AppSync app & DB integration; ViPR storage automation
• Massive consolidation: scale up and scale out; 64 ports; 64,000 LUNs; Priority I/O Control QoS; CloudArray object/cloud integration
VMAX All-Flash Online Code Updates: True NDU (unique in the industry)
• <10-second array OS upgrade
• No component downtime
– No rolling-outage upgrade
– No failover/failback processes involved
– No switching of LUN ownership/trespass required
• Ports never drop light
– Servers never see logout/login (no fabric RSCN)
• Online downgrades work the same way
• Historical feature going back many generations
Downtime cost: $1.8 million per day / $45,000 per hour / $750 per minute
“Thank you VMAX for giving me back my weekends”
“VMAX NDU is the gold standard for upgrades”
“Nobody knows it’s happening – it just works”
Remote Replication Gold Standard – SRDF
• Synchronous (up to 100km): zero data loss, high performance
• Asynchronous (unlimited distance): extended distance, multi-cycle mode, remote link resiliency
• Metro (up to 100km, with witness): active/active, automated failover/failback
• Scalable consistency; non-disruptive migrations; 2-site, 3-site, and 4-site replication
• Simple: <2 minutes to configure
VMAX Non-disruptive Migration (NDM): migrations simplified
• Three simple steps: Create, Cutover, Commit
• Customer usable and free of charge
• Application-level migrations; large-scale migrations
• VMAX to VMAX All Flash, with a broad host-support matrix
• Maintains existing replication: snapshots & SRDF
[Diagram: a host (single or cluster) with multipathing software, connected over metro distances to the source VMAX (5876) and the target VMAX All Flash (5977), which are linked by SRDF technology]
TimeFinder SnapVX
• Increased agility: up to 256 snaps per source; up to 1,024 linked targets per source
• Ease of use: user-defined names/versions; create group snaps in one click; automatic expiration
• Reduced impact: target-less snapshots
[Diagram: a production volume with a chain of snapshots and a linked target]
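Target-less, pointer-based snapshots can be sketched in a few lines. This is a conceptual illustration only, not Dell EMC's SnapVX implementation; the class and method names are made up.

```python
# Minimal sketch of pointer-based, target-less snapshots in the SnapVX style.
# Conceptual illustration only, not Dell EMC code.
class Volume:
    def __init__(self, blocks):
        self.blocks = dict(blocks)   # block number -> data pointer
        self.snapshots = {}          # user-defined name -> frozen pointer map

    def snap(self, name):
        # A snapshot is just a copy of the pointer map; no data is moved
        # and no target volume is required (hence "target-less").
        self.snapshots[name] = dict(self.blocks)

    def write(self, block, data):
        self.blocks[block] = data    # snapshots keep pointing at the old data

    def link(self, name):
        # A linked target presents a snapshot as a usable volume.
        return Volume(self.snapshots[name])

prod = Volume({0: "a", 1: "b"})
prod.snap("before-upgrade")
prod.write(1, "B")                      # production moves on
target = prod.link("before-upgrade")    # target still sees the old block
print(prod.blocks[1], target.blocks[1])  # B b
```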
ProtectPoint Storage-Integrated Protection: dramatically faster backup and recovery
• Faster backup & recovery: 20x faster backup, 10x faster recovery
• Eliminates application impact
• Reduces cost and complexity
VMAX All-Flash Storage Efficiency: reduces TCO
• Inline compression (block and file)
• Snaps
• Thin provisioning
• Zero-space reclaim
4:1 storage efficiency overall; 2:1* compression
* Compression rates vary depending on customer applications and environments. A 2:1 compression ratio is expected for typical OLTP workloads.
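One way the 2:1 compression figure can compound into the 4:1 overall efficiency claim: compression multiplies with space never consumed thanks to thin provisioning and snaps. The 50% thin-provisioning saving used below is an assumed example figure, not a number from the slide.

```python
# Illustrative arithmetic for 4:1 overall efficiency from 2:1 compression.
# The 50% thin/snap saving is an assumed example figure.
compression_ratio = 2.0     # from the slide: typical OLTP workloads
thin_snap_saving = 0.5      # assumption: half of provisioned space never written
overall_efficiency = compression_ratio / (1 - thin_snap_saving)
print(f"{overall_efficiency:.0f}:1")   # 4:1
```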
Introducing XtremIO X2
Why XtremIO?
• Consistent performance: inline, all-the-time data services with no performance impact
• App-integrated copies: rich application integration, no-compromise copy services
• Unmatched efficiency: maximize efficiency with deduplication and compression
Flash Optimized with Multi-Dimensional Scalability
• New multi-dimensional scalable HW
• SW-driven performance/efficiency improvements
• iCDM use case enhancements
• New simple HTML5 UI
• New metadata-aware native replication (mid-2018)
XtremIO In-Line, All-The-Time Data Services
• Thin provisioning: all volumes thin, optimized for data savings
• Deduplication: inline, each block written once, no post-process
• Compression: inline, compressed blocks only, no post-process
• XtremIO Virtual Copies: super efficient, in-memory metadata copy
• D@RE: always-on encryption with no performance impact
• XtremIO Data Protection: single “RAID model”, double parity with 89% usable
XtremIO Scale-Out Capabilities
• Scale up an X-Brick to 138TB
• Scale out a cluster to 8 X-Bricks, 1.1 PBu
• Up to 5.5PB of effective capacity (assumes 6:1 data reduction)
[Diagram: X-Bricks, each with two active controllers and a shelf of flash SSDs]
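Effective capacity is usable capacity times the data-reduction ratio (dedupe plus compression). The sketch below treats the slide's 138TB per X-Brick as usable capacity (an assumption); on that basis the quoted 5.5PB corresponds to roughly 5:1 on 1.1 PBu, while 6:1 would give about 6.6PB.

```python
# Effective capacity = usable capacity x data-reduction ratio.
# Assumption: the slide's 138TB per X-Brick is treated as usable capacity.
usable_pb = 8 * 0.138                      # 8 X-Bricks -> ~1.1 PBu
for ratio in (5.0, 6.0):
    print(f"{ratio:.0f}:1 -> {usable_pb * ratio:.1f} PB effective")
```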
Increased Storage Efficiency
A 100TB environment needs 25TB on XtremIO (4:1 reduction) but only 20TB on X2 (5:1 reduction): on average, 25% better data reduction.
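The 25% figure compares reduction ratios (5:1 vs. 4:1), not stored footprints; a quick check:

```python
# The slide's "25% better data reduction" compares ratios, not footprints.
env_tb = 100
x1_tb = env_tb / 4                 # 4:1 on XtremIO X1 -> 25 TB stored
x2_tb = env_tb / 5                 # 5:1 on X2         -> 20 TB stored
ratio_gain = (5 - 4) / 4           # 0.25 -> 25% better reduction ratio
footprint_gain = (x1_tb - x2_tb) / x1_tb   # 0.20 -> 20% less storage used
print(ratio_gain, footprint_gain)
```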
PCIe NV-RAM Unit (replaces the BBU in X1)
• Used for data and metadata vaulting
• Increased reliability
• Reduced complexity
• Reduced cabling
• Reduced overall cluster RUs
• Leverages a super capacitor
• Allows odd X-Brick support
XtremIO Virtual Copies Are Busy! XtremIO Virtual Copies handle ~40% of the I/O workload
Across the entire install base, comparing I/Os to snapshots vs. volumes: 41% of total read I/Os and 40% of total write I/Os go to snapshots; the remaining 59% and 60% go to volumes.
XtremIO Metadata-Aware Native Replication
• Uses XtremIO in-memory snapshots
• Wizard based
• Full operational disaster recovery
• RPO as low as 30 seconds
• Immediate RTO
• Up to 1,000 recovery points
• “Fan-in” configurations
• Supports XtremIO high performance
• Efficient, metadata-aware, compression-aware replication
Easy operation | Best protection | Superior performance
XtremIO Native Replication
With up to 75% data reduction and RPOs as low as 30 seconds.
[Diagram: only DELTA(S1, S2), the difference between snapshots S1 and S2, is sent from the primary site to the DR site; deduplicated and net-new blocks are transferred, while existing blocks need just pointer updates]
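The delta-transfer idea can be sketched as follows. This is illustrative code, not XtremIO's implementation; the fingerprint values and function names are made up.

```python
# Sketch of metadata-aware delta replication between snapshots S1 and S2:
# only blocks new or changed since S1 cross the wire, and blocks whose
# fingerprints already exist at the DR site need just pointer updates.
# Illustration only, not XtremIO code.
def delta(s1, s2):
    """Blocks changed or added between snapshot pointer maps s1 and s2."""
    return {blk: fp for blk, fp in s2.items() if s1.get(blk) != fp}

def replicate(s1, s2, dr_fingerprints):
    changed = delta(s1, s2)
    transfer = {b: f for b, f in changed.items() if f not in dr_fingerprints}
    pointer_updates = {b: f for b, f in changed.items() if f in dr_fingerprints}
    return transfer, pointer_updates

s1 = {0: "fp_a", 1: "fp_b"}
s2 = {0: "fp_a", 1: "fp_c", 2: "fp_b"}   # block 1 rewritten, block 2 added
transfer, ptrs = replicate(s1, s2, dr_fingerprints={"fp_a", "fp_b"})
print(transfer)  # {1: 'fp_c'}  -> net-new data is sent
print(ptrs)      # {2: 'fp_b'}  -> deduplicated, pointer update only
```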
Why XtremIO X2?
• Improved performance: 25% faster boot times; 80% better application latency; 2X better copy operations
• Improved efficiency: 25% better data reduction; 2X the number of XtremIO Virtual Copies; 4X better rack density
• TCO savings: 4,000 virtual desktops hosted per X-Brick; up to 40% more VDI users; 33% lower $/desktop; up to 1/3rd lower $/GB
NVMe
Emergent Non-volatile Media Impact: addresses memory/storage latency/capacity gaps
[Chart: access latency from 1ns to 10ms vs. relative capacity (not to scale). Processor SRAM and DRAM sit at the fast end with memory access semantics; the emergent memory domain of faster NV media fills the capacity/latency gap; high-speed storage (NAND flash: SLC, MLC, TLC, QLC) and low-speed storage (HDD) sit at the slow end with I/O block access semantics. Cost per bit falls from $$$$ to << $ as latency rises.]
NVM Express and I/O Latency
[Chart: total I/O latency broken into drive latency, controller latency (i.e. SAS HBA), and software latency for HDD + SAS, NAND + SAS, NAND + NVMe, and SCM + NVMe]
• NVMe drives down connection latency
• NAND technology offers ~100x latency reduction versus HDD
• Storage-class memory technology offers ~10x latency reduction versus NAND
Source: Storage Technologies Group, Intel. Comparisons between memory technologies based on in-market product specifications and internal Intel specifications.
Slide credit: Intel and NVM Express
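Applying the slide's ratios to an assumed ~10 ms HDD baseline gives the rough orders of magnitude (real figures vary by device; the baseline is an assumption, not from the slide):

```python
# Rough order-of-magnitude latencies implied by the slide's ratios.
# Assumption: an HDD baseline of ~10 ms (seek + rotate); devices vary.
hdd_us = 10_000                 # ~10 ms
nand_us = hdd_us / 100          # ~100x faster than HDD -> ~100 us
scm_us = nand_us / 10           # ~10x faster than NAND -> ~10 us
print(hdd_us, nand_us, scm_us)
```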
Dell EMC Storage Technology Evolution
Dell EMC is working closely with NVMe and Storage-Class Memory suppliers and will be a leader in integrating, optimizing, and delivering next generation flash solutions
• 1988, SCSI + HDD: industry’s first intelligent cached disk array, combining cache and commodity HDDs
• 2008, SAS + SLC: industry’s first enterprise array to support SSD flash and automated tiering
• Next, NVMe + NAND: leadership for enterprise arrays delivering NVMe-connected SSDs
• Future, NVMe + SCM: leadership for enterprise arrays delivering NVMe-connected SCM
Q&A
Thank you