Storage: EMC VMAX


    See the extended discussion for this topic on SQLblog

    Enterprise Storage Systems

I generally do not get involved in high-end SAN systems. It is almost impossible to find meaningful information on the hardware architecture from the vendor. And it is just as impossible to get configuration information from the SAN admin. The high-end SAN is usually a corporate resource managed in a different department from the database team.

The SAN admin is generally hard set on implementing the SAN vendor doctrine of "Storage as a Service" and does not care to hear input on special considerations from the database team. In addition to unpredictable or poor performance, and sometimes both, it is often necessary to fight for every GB of storage space via email requests, filling out forms or endless meetings. This is especially ridiculous because storage capacity at the component level is really cheap. It only becomes a precious resource in a SAN.

Still, I am expected to answer questions on what is wrong with SQL Server when there are performance problems against a magically all-powerful enterprise SAN, so this is my best understanding. The example I am using is the EMC Symmetrix line, but the concepts here could be applied to other systems if details were available.

The EMC Symmetrix VMAX was introduced in 2009 using Intel Core2 architecture processors (45nm Penryn) with RapidIO fabric. A second generation came out in 2012, with the VMAX 10K, 20K and 40K models using Intel Xeon 5600 (32nm Westmere) processors. The predecessor to the VMAX was the Symmetrix DMX-4, which used PPC processors and a cross-bar architecture connecting front-end, memory and back-end units.

The basic information here is from the EMC documents. Because the details on the internal architecture of the VMAX are not found in a single authoritative source, much of it has to be pieced together. Some of the assessments here are speculation, so anyone with hard knowledge is invited to provide corrections.

    VMAX (2009)

The original VMAX architecture is comprised of up to 8 engines. Each engine is comprised of a pair of directors. Each director is a 2-way quad-core Intel Xeon 5400 system with up to 64GB memory (compared with 16GB for the CLARiiON CX4-960).


Each director has 8 back-end 4Gb/s FC ports (comprised of quad-port HBAs?) and various options for the front-end including 8 x 4Gb/s FC ports.

The engine with 2 directors has 16 back-end FC ports (2 ports making 1 loop) and can have 16 ports on the front-end in the FC configuration. Assuming 375MB/s net realizable throughput with 4Gbps FC, each director could support an aggregate of 3.0GB/s on both the front and back-end ports.

In the full VMAX system of 8 engines (16 directors) with FC front-end configuration there are 128 x 4Gb/s FC ports on the front and back ends. Then in theory, the combined front-end and back-end bandwidth of the full system is 16 x 3.0GB/s (or 128 x 375MB/s) = 48 GB/s.
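
A quick back-of-envelope check of these figures; the 375MB/s per 4Gbps FC port and the port counts are the assumptions stated above, not measured values:

```python
# Aggregate FC bandwidth estimate for the full original VMAX (one side,
# i.e. front-end or back-end). Figures assumed from the text above.
FC_NET_MB_S = 375        # net realizable throughput per 4Gb/s FC port
PORTS_PER_DIRECTOR = 8   # FC ports per director, per side
DIRECTORS = 16           # 8 engines x 2 directors

per_director_gb_s = PORTS_PER_DIRECTOR * FC_NET_MB_S / 1000
system_gb_s = DIRECTORS * per_director_gb_s

print(f"per director: {per_director_gb_s:.1f} GB/s")  # 3.0 GB/s
print(f"full system:  {system_gb_s:.1f} GB/s")        # 48.0 GB/s
```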

Of course, there is no documentation on the actual sequential (or large block) IO capability of the V-Max system. There is a VMAX Oracle document mentioning 10GB/s on 2 engines (not sure whether this is the 2009 VMAX or the second generation VMAX).

To support the above director, I would guess that the system architecture should have 6 x8 PCI-E slots. Based on a quad-port FC HBA, the 8 back-end ports require 2 x8 slots, and there are also 2 x8 slots for the front-end for any supported interface.

Without discussing the nature of the interconnect between directors in an engine, and the Virtual Matrix Interface, I am supposing that each requires one x8 slot. The above diagram does show a connection between the two directors in one engine.

So there should be 2 back-end, 2 front-end, 1 VM and 1 other x8 PCI-E in all. It could also be presumed that the slots are not connected through an expander, as this would result in an arrangement with unbalanced bandwidth.

At this point I would like to digress to review the Intel Core2 system architecture. The original memory controller hub (MCH or chipset) for the 2-socket Core2 system was the 5000P in 2006, with 1432 pins. The 5000P has 24 PCI-E lanes and the ESI, which is equivalent to 4 lanes. So this is clearly inadequate to support the VMAX director.


In late 2007-early 2008 or so, late in the product life of the Core2 architecture processors, Intel produced the 5400 MCH chipset, codename Seaburg, with 1520 pins supporting 36 PCI-E lanes plus the ESI, equivalent to 4 PCI-E lanes.

This MCH chipset was not used by any server system vendor, so why did Intel make it if there were no apparent customers? It is possible the 5400 MCH was built specifically to the requirements of the high-end storage system vendors. I mentioned this briefly in System Architecture 2011 Q3.

The 5400 MCH can support 5 x8 PCI-E slots. I think this is done by using the ESI plus 1 x4 on the upstream side of the Enterprise South Bridge to support x8 on the downstream side. So there is something wrong with my estimate of the PCI-E slot count required for the VMAX engine.

When the original EMC VMAX came out in 2009, I could find no documentation on the Virtual Matrix interface. I had assumed it was InfiniBand, as FC would not have been suitable on bandwidth or protocol support. Later I came across a slide deck illustrating VMI implemented with an ASIC connecting x8 PCI-E to RapidIO. The second generation VMAX specification sheets explicitly list RapidIO as the interconnect fabric.

RapidIO is an open-standard switched fabric. In short, RapidIO has protocols for additional functionality that was not necessary in PCI-E, a point-to-point protocol. (Some of these may have been added to PCI-E in later versions?) RapidIO can "seamlessly encapsulate PCI-E". The other aspect of RapidIO is that the packet overhead is far lower than Ethernet layer 2, and even more so than Ethernet layer 2 plus layer 3 (IP) plus layer 4 (TCP), as there is no requirement to handle world-wide networks. The RapidIO protocol overhead is also slightly lower than PCI-E.

The first version of serial RapidIO supported 1.25, 2.5 and 3.125 Gbaud, and x1 and x4 links. Version 2 added 5 and 6.25 Gbaud and x2, x8 and x16 links.

The diagram below is for the original VMAX using two Xeon L5410 processors. I neglected to note the source, so some help on this would be appreciated.

In the diagram above, the VMI ASIC is connected by x8 PCI-E to the director system, and by 2 x4 RapidIO links for the interconnect. The RapidIO raw data rate is 3.125 Gbaud including encoding. The data rate before 8b/10b encoding is 2.5Gb/s per lane, or 1.25GB/s bandwidth for the x4 connection in each direction. The bandwidth per connection cited at 2.5GB/s full duplex is the combined bandwidth of both directions on the RapidIO side.

The bandwidth on the PCI-E side is 2.5Gbit/s raw per lane, or 2Gbps of unencoded data (8b/10b), for 2.0GB/s on the x8 slot. This is the nominal bandwidth of the full PCI-E packet including header and payload. The PCI-E packet overhead is 22 bytes.

The net bandwidth that I have seen for disk IO on x8 PCI-E gen 1 is 1.6GB/s. I am not sure what the average payload size was for this. It could have been 512 bytes, the disk sector size commonly used. In any case, the packet overhead is much less than 20%, so there is a difference between the net achievable bandwidth and the net bandwidth after PCI-E packet overhead.
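
A rough sketch of that overhead arithmetic, assuming the 22-byte packet overhead and a 512-byte payload as discussed above (both are assumptions, not measurements):

```python
# PCI-E gen 1 x8: nominal rate after 8b/10b, and the effect of a 22-byte
# packet overhead on a 512-byte payload.
LANES = 8
RAW_GBPS_PER_LANE = 2.5
ENCODING = 8 / 10            # 8b/10b
OVERHEAD_BYTES = 22
PAYLOAD_BYTES = 512

link_gb_s = LANES * RAW_GBPS_PER_LANE * ENCODING / 8          # 2.0 GB/s
packet_efficiency = PAYLOAD_BYTES / (PAYLOAD_BYTES + OVERHEAD_BYTES)
payload_gb_s = link_gb_s * packet_efficiency                  # ~1.92 GB/s

print(f"after 8b/10b:        {link_gb_s:.2f} GB/s")
print(f"packet efficiency:   {packet_efficiency:.1%}")        # ~95.9%
print(f"after packet header: {payload_gb_s:.2f} GB/s vs ~1.6 GB/s observed")
```

With roughly 4% packet overhead the protocol limit is about 1.9GB/s, so the 1.6GB/s observed is set by something other than the PCI-E packet overhead itself.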

The VMAX diagram above shows one x8 PCI-E for VMI and 4 x8 PCI-E for disks (back-end) and front-end channels (HBAs). The 4 BE and FE slots are labeled at 1.5GB/s each and 6.0GB/s for the set of four. Presumably this is the 4 x 375MB/s FC bandwidth, and not the PCI-E x8 bandwidth of 2.0 GB/s including packet overhead.

A dedicated interconnect between the two directors in one engine is not shown. So this would represent a valid configuration for the 5400 MCH, except that 4 of the x8 PCI-E slots should connect to the MCH, and only 1 x8 to the ICH (ICH was the desktop I/O controller hub, ESB was the server version).

The main observation here is that EMC decided it is a waste of time and money to continue building custom architecture in silicon when there are alternatives. It is better to use Intel Xeon (or AMD Opteron) components along with an open-standard fabric. There are ASIC and FPGA vendors that provide a base PCI-E to RapidIO interface design that can be customized. I am presuming the EMC VMI ASIC is built on this.

Below is EMC's representation of the VMAX system, showing 8 engines (16 directors) interconnected via the Virtual Matrix.

The diagram is pretty, but conveys very little understanding of what it is. Knowing that the Virtual Matrix interface is RapidIO is all that we need to know. The Virtual Matrix is a RapidIO switch, or rather a set of RapidIO switches.

Each of 16 directors is connected to the VM with 2 x4 RapidIO ports. A single switch with 128 (16x2x4) RapidIO lanes could connect the full VMAX system. A second possibility is two switches with 64 (16x4) RapidIO lanes. Each switch connects one x4 port on each director. Other possibilities with fewer than 64 lanes include 8 switches of 16 lanes, or some arrangement involving more than 1 switch between directors.

IDT makes both RapidIO switches and PCI-E to RapidIO bridges. Their PCI-E switches are available with up to 64 lanes. The RapidIO switches are available with up to 48 lanes. There are other vendors that make RapidIO switches and I do not know the source for the EMC VMAX.


IDT describes their 48-port RapidIO switch, capable of operating at 6.25Gbaud, as having 240Gb/s throughput. So they felt it was more correct to cite the unencoded bandwidth, single direction, not the full duplex, and not the encoded data rate.

    The diagram below shows the full VMax system comprising 11 racks with the maximum disk configuration!

The center rack is for the VMax engines, the other 10 are storage bays. Each storage bay can hold up to 240 drives. There are 160 disk array enclosures, 64 directly connected, and 96 daisy chained. There are 8 VMax engines, with the disk enclosures in matching color.

The 2009 VMAX only supported 3.5in drives initially? (I misplaced or did not keep the original VMAX documentation, oops.) The back-end interface on both the original and second generation (!@#$%^&) VMAX is 4Gbps FC. The 3.5in disk drives are also FC. The 2.5in disk drives for the second generation VMAX are listed as SAS, so presumably the disk enclosure converts the external FC interface to SAS internally. There are flash drive options for both 3.5in and 2.5in, the 3.5in being FC and the 2.5in SAS?

The mid-range VNX moved off FC disks in 2011. Perhaps the size of the VMAX with all 11 racks is beyond the cable limits of SAS? But why 4Gb/s FC and not 8Gbps? Is this to maintain compatibility with the previous generation DMX? I am inclined to think it is not a good idea to saddle a new generation with the baggage from the older generation. Perhaps in the next generation FC on the back-end would be replaced by SAS?

    VMAX Second Generation (2012)

The second generation EMC VMAX employs the Intel Xeon 5600 series (Westmere-EP) processors with up to six cores. There are three series, the VMAX 10K, 20K and 40K. The complete system is comprised of one or more engines. There can be up to 8 engines in the 20K and 40K, and up to 4 engines in the 10K.

Each engine is comprised of 2 directors. A director is a computer system. The 10K director originally had a single quad-core processor; later versions have a single six-core processor. The 20K director has two quad-core processors. The 40K director has two six-core processors. Both the 10K and 20K (directors) have dual Virtual Matrix Interface (VMI or just VM?). The 40K (director) has quad-VM.

It is very hard to find useful detailed SAN system architecture information. I came across the following from an EMC VMAX 40K Oracle presentation, which appears to be just an update of the original VMAX engine diagram to the second generation VMAX 40K.

But notice that the interconnect between the two directors (2-socket servers) is labeled as CMI-II. CMI is of course the acronym for CLARiiON Messaging Interface, which in turn was once Configuration Manager Interface (prior to marketing intervention?). This makes sense. There is no reason to develop different technologies to perform the same function in the two product lines. So the separation between VNX and VMAX is that the latter has VMI to cluster multiple engines together.

Along the same lines, does there need to be a difference in the chips to perform the CMI and VMI functions? It does not matter if the software stacks are different.

To support the VMAX 40K director, there should be 2 x8 PCI-E slots each for both the front-end and back-end ports as before in the original VMAX. I am also assuming a single x8 PCI-E slot for the CMI-II. The difference is that the 40K director needs 2 x8 PCI-E slots to support 4 VM connections, each x4 RapidIO. This makes a total of 7 x8 PCI-E slots.

The 2-socket Xeon 5600 system architecture is shown below with two 5520 IOH devices each supporting 36 PCI-E gen2 lanes for 72 lanes total, not counting the ESI (equivalent to PCI-E gen 1 x4).


The full Xeon 5600 system can support 8 PCI-E gen2 x8 slots, plus 2 gen2 x4 (because the extra x4 on each IOH cannot be combined into a single x8?). So this time there are more PCI-E slots than necessary. Note also that all of these are PCI-E gen2 slots. The back-end FC on the 2nd generation VMAX is still 4Gb/s FC. The front-end FC can be 8Gbps FC. It could be that all FC HBAs in the second generation can support 8Gbps, just that the back-end ports operate at 4Gbps?

    Virtual Matrix and RapidIO

The original VMAX used RapidIO at 3.125 Gbaud. After removing the 8b/10b encoding overhead, the data rate is 2.5Gbps per lane. In an x4 link, the combined data rate is 10 Gbit/s or 1.25 GByte/s per direction. As with modern serial protocols, data transmission is simultaneous bi-directional. So the bandwidth in both directions combined is 2.5GB/s full duplex.
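
The same link arithmetic written out, as a minimal sketch of the figures above (nothing here is measured):

```python
# RapidIO x4 link figures for the original VMAX.
baud_per_lane = 3.125      # Gbaud, raw signaling rate per lane
encoding = 8 / 10          # 8b/10b
lanes = 4                  # x4 link

data_gbps_per_lane = baud_per_lane * encoding        # 2.5 Gbps unencoded
link_gb_s = data_gbps_per_lane * lanes / 8           # 1.25 GB/s per direction
full_duplex_gb_s = 2 * link_gb_s                     # 2.5 GB/s both directions combined

print(data_gbps_per_lane, link_gb_s, full_duplex_gb_s)   # 2.5 1.25 2.5
```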

In a server system, citing full duplex bandwidth for storage is not meaningful because IO is almost always heavily in one direction (except for backups directly to disk). However, it should be pointed out that the bi-directional capability is immensely valuable because the primary stream is not disrupted by minor traffic in the opposite direction (including acknowledgement packets). Just do not confuse this with the full duplex bandwidth being a useful value.

In a storage system, it could be legitimate to cite the full duplex bandwidth for the engine, because each engine could be simultaneously processing data in-bound from and out-bound to other engines. So the engine must be able to handle the full duplex bandwidth.

Now considering the complete storage system, any traffic that leaves one engine must arrive at another engine. The total traffic is the sum of the bandwidth in a single direction. So it is misleading to cite the sum total full duplex bandwidth. But marketing people can be relied upon to mislead, and we can trust marketing material to be misleading.

The VMI ASIC bridges 8 PCI-E lanes to 8 RapidIO lanes. In the original VMAX, this is PCI-E gen 1 to RapidIO at 3.125 Gbaud. In the second generation VMAX with Westmere-EP processors, the PCI-E is gen2 and the RapidIO is now presumed to be 6.25 Gbaud. PCI-E gen1 is 2.5Gbps and gen2 is 5Gbps.

I suppose that there is a good reason RapidIO was defined to 3.125 Gbaud at the time PCI-E was 2.5Gbps. Consider sending data from one system to another. In the first system, data is first transmitted over PCI-E (hop 1). A device converts the data to be transmitted over RapidIO (hop 2). At the other end, a device converts back for transmission over PCI-E (hop 3) to the final destination.

It would seem reasonable that if all interfaces had equal data rates, there would be some loss of efficiency due to the multiple hops. So for lack of hard analysis I am just speculating that there was a deliberate reason in the RapidIO specification.

Another speculation is that it was known that RapidIO would be interconnecting systems with PCI-E, and the extra bandwidth would allow encapsulated PCI-E packets on RapidIO with the upstream and downstream PCI-E ports running at full bandwidth?

The point of the above discussion is that the bandwidth on the RapidIO side of the VMI ASIC is less material to the storage professional. The bandwidth on the PCI-E side is closer to the net storage IO bandwidth.

In the table below, I am trying to make sense of the Virtual Matrix bandwidth of the original VMAX, and the second generation VMAX 10K, 20K and 40K. The original VMAX 2009 had 3.125 Gbaud RapidIO, so each x4 link had 1.25GB/s unencoded bandwidth per direction. Each director has dual Virtual Matrix, so the combined full duplex bandwidth of 4 VM for the engine is 10GB/s unencoded. The combined full duplex bandwidth on the PCI-E side is 8GB/s per engine.

                    Original       10K           20K        40K
Processor           Core2          Westmere      Westmere   Westmere
Sockets             2              1             2          2
Cores               4              4-6           4          6
VMI/dir             2              2             2          4
VMI/eng             4              2             4          8
RapidIO             3.125 Gbaud    6.25 Gbaud    ?          6.25 Gbaud
Unencoded 8b/10b    2.5 Gbaud      5 Gbaud       ?          5 Gbaud
x4 link             1.25GB/s       2.5GB/s       ?          2.5GB/s
x4 link bi-dir      2.5GB/s        5GB/s         ?          5GB/s
Engine VM BW        10GB/s         50GB/s?       24GB/s     50GB/s
System VM BW        80GB/s?        200GB/s?      192GB/s    400GB/s

The second generation VMAX should be on RapidIO at 6.25 Gbaud and PCI-E gen 2 at 5Gbps. The VMAX 40K specification sheet cites Virtual Matrix bandwidth of 50GB/s for the engine and the full system with 8 engines VM at 400GB/s. The VMAX 20K specification sheet cites VM bandwidth of 24GB/s for the engine and the full system with 8 engines VM at 192GB/s. The VMAX 10K specification sheet cites the full system (4 engines) VM bandwidth at 200GB/s, implying a single engine VM bandwidth of 50GB/s.

Given that the VMAX 40K has twice as many Virtual Matrix interfaces and double the signaling rate, the cited VM value of 50GB/s can only mean the bi-directional encoded rate of 6.25 Gbaud over 8 x4 links on the RapidIO side. The VMAX 20K value of 24GB/s is perplexing. Why is it not the full duplex rate of 25GB/s for 6.25 Gbaud over its 4 x4 links?

The VMAX 10K system value of 200GB/s is also perplexing. There are only 4 engines maximum, meaning each engine would be 50GB/s. The other documents or slide decks indicate the VMAX 10K director is dual VMI? So the VM bandwidth should be 25GB/s full duplex encoded?

On the assumption that the VMAX 40K engine has 50GB/s Virtual Matrix encoded full duplex bandwidth, then the unencoded bi-directional bandwidth is 40GB/s on the RapidIO side, and the unencoded bi-directional bandwidth is 32GB/s on the PCI-E side, corresponding to 4 x8 PCI-E gen 2 slots (32 lanes). So the useful bandwidth for the engine VM is 16GB/s single direction.
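
The chain of conversions written out as a sketch. The 50GB/s starting point is the vendor figure; the rest follows from 8b/10b and the assumed 32 PCI-E gen2 lanes per engine:

```python
# VMAX 40K engine: from the cited Virtual Matrix figure down to the
# single-direction bandwidth that matters for storage IO.
encoded_full_duplex = 50.0                            # GB/s, RapidIO side, both directions
unencoded_full_duplex = encoded_full_duplex * 8 / 10  # 40 GB/s after 8b/10b
pcie_full_duplex = 32 * 0.5 * 2                       # 32 gen2 lanes x 500 MB/s x 2 dirs = 32 GB/s
single_direction = pcie_full_duplex / 2               # 16 GB/s usable in one direction

print(unencoded_full_duplex, pcie_full_duplex, single_direction)  # 40.0 32.0 16.0
```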

    Bandwidth Calculation and Speculation

For lack of hard data on what the VMAX IO bandwidth capability actually is, I will speculate. The original VMAX director could have 8 x 4Gbps FC ports on both front-end and back-end. As discussed above, based on 375MB/s for each 4Gbps FC, the director FE and BE bandwidth is then 3.0 GB/s.

I will further assume that the sole purpose of the CMI-II between the two directors in each engine is to maintain a duplicate of the write cache for fault tolerance. This means all other traffic between directors must go through the VMI.

In the circumstance that every IO request coming to a particular port on one of the directors accesses data only on RAID groups directly attached to that director, we would have 100% locality and there would be nearly zero traffic over the VM. Not only is this highly improbable and extremely difficult to contrive, it also goes against one of the key principles of the central SAN argument. The idea is to pool a very large number of disks into one system such that every volume from all hosts could access the aggregate IOPS capability of the complete storage system.

A RAID group must be built only from the disks directly attached to the director. So the aggregate concept is achieved by pooling all RAID groups together. Volumes are created by taking a (small) slice of each RAID group across all directors. Each volume now has access to the IOPS capability of the entire set of disks. This is why the SAN shared storage concept is valid for transaction processing systems but not for DW systems that would benefit from sequential large block IO.

In this scenario, the presumption is that IO requests arriving at any director are evenly distributed to all directors. In the full system of 8 engines, 16 directors, 6.25% (1/16) of IO is on local disks accessed via the back-end ports and 93.75% (15/16) must come through the VM from the other directors.

Then the SAN system bandwidth is constrained by the more limiting of the front-end channels, the back-end channels, and the adjusted Virtual Matrix single direction (not full duplex) bandwidth. The adjustment accounts for the percentage of traffic that must come through the VM. If the VM must handle 15/16 of the total traffic, then the upper limit is 16/15 times the VM bandwidth. On the VM, the PCI-E is more limiting than the RapidIO side, so quoting bi-directional bandwidth is misleading, and so is quoting the RapidIO side bandwidth instead of the PCI-E bandwidth.
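
A small sketch of that locality adjustment. The even-distribution assumption and the ~1.6GB/s net PCI-E figure are taken from the discussion above; the function itself is only illustrative:

```python
# Upper limit on per-director throughput imposed by the Virtual Matrix link,
# given that only 1/N of the traffic is local to each of N directors.
def vm_limited_bandwidth(directors: int, vm_bw_per_director_gb_s: float) -> float:
    remote_fraction = (directors - 1) / directors   # share that must cross the VM
    return vm_bw_per_director_gb_s / remote_fraction

print(vm_limited_bandwidth(16, 1.6))   # 8 engines: ~1.71 GB/s per director
print(vm_limited_bandwidth(8, 1.6))    # 4 engines: ~1.83 GB/s per director
print(vm_limited_bandwidth(4, 1.6))    # 2 engines: ~2.13 GB/s per director
```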

The PCI-E bandwidth to the VM in the original VMAX is 2.0 GB/s (x8 gen 1) including PCI-E protocol overhead. The actual net bandwidth is less than 2GB/s but possibly more than the 1.6GB/s cited earlier as the maximum that I have seen in direct-attach IO. This is more limiting than the 3GB/s on the 8 x 4Gbps FC front-end or back-end ports.

The second generation VMAX allows 8 x 8Gbps FC ports on the front-end for an aggregate bandwidth of 6GB/s based on 750MB/s per 8Gbps FC port. However, the back-end ports are still 4Gbps FC for an aggregate of the same 3GB/s as in the original VMAX. The 40K VMAX engine is described as 50GB/s VM bandwidth, not mentioning this is the full-duplex value encoded on the RapidIO side. The single direction encoded data rate on a single director is 12.5GB/s. The unencoded rate is 10GB/s on the RapidIO side. The single direction unencoded rate on the PCI-E side is 8GB/s (16 PCI-E gen 2 lanes). Still, this is much more than either the FE or BE ports.
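
The per-director figures for the second generation 40K side by side, using the assumed values above (750MB/s per 8Gbps FC port, 375MB/s per 4Gbps FC port, 500MB/s per PCI-E gen2 lane):

```python
# VMAX 40K, per-director front-end, back-end and VM (PCI-E side) bandwidth.
fe_gb_s = 8 * 750 / 1000     # 6.0 GB/s, 8 x 8Gbps FC front-end ports
be_gb_s = 8 * 375 / 1000     # 3.0 GB/s, 8 x 4Gbps FC back-end ports
vm_gb_s = 16 * 500 / 1000    # 8.0 GB/s, 16 PCI-E gen2 lanes to the VMI, one direction

print(f"FE {fe_gb_s} GB/s, BE {be_gb_s} GB/s, VM (PCI-E) {vm_gb_s} GB/s")
print(f"per-director limiter: {min(fe_gb_s, be_gb_s, vm_gb_s)} GB/s (the 4Gbps FC back-end)")
```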

Note that with fewer engines and correspondingly fewer directors, more of the traffic is local. With 4 engines and 8 directors, the local traffic is 12.5% and 87.5% is remote. With 2 engines and 4 directors, the local traffic is 25% and 75% is remote.

All of the above is for read traffic, and does not consider whether there are other more limiting elements. Another consideration is memory bandwidth. A read from "disk" could be first written to memory, then read from memory. (The latency due to the CPU cycles involved is not considered.) An 8-byte wide DDR DRAM channel at 1333MHz has 10.7GB/s bandwidth, but this is only for memory read.

The memory write bandwidth to SDR/DDR is one-half the nominal bandwidth. In the really old days, a disk access involving a memory write followed by a memory read would be constrained by one-third of the nominal memory bandwidth. Intel server systems from 2006 or so on used memory with a buffer chip that is described as supporting simultaneous read at the nominal bandwidth and write at one-half the nominal bandwidth.
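
Where the one-third figure comes from, as a quick sketch (the 10.7GB/s channel rate is from the paragraph above; the half-rate write is the stated assumption):

```python
# Buffered disk IO touches memory twice: one write (at half rate) and one
# read (at full rate). The effective throughput is the harmonic combination.
nominal_gb_s = 10.7              # one 8-byte DDR channel at 1333 MHz, read
write_gb_s = nominal_gb_s / 2    # writes run at half the nominal rate

effective = 1 / (1 / write_gb_s + 1 / nominal_gb_s)
print(f"{effective:.2f} GB/s, i.e. {effective / nominal_gb_s:.0%} of nominal")  # ~3.57 GB/s, ~33%
```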

In writes to storage, the write IO is first sent to memory on the local director, then copied across the CMI-II(?) to the other director in the same engine? So the net bandwidth across the CMI is also limiting.

Now that SQL Server 2012 allows clustering (AlwaysOn?) with tempdb on local (preferably SSD) storage, I recommend this to avoid burdening the SAN with writes. Or a SAN vendor can bother to understand the nature of tempdb and allow write cache mirroring to be selectively disabled?

Even with all this, there is not a definitive statement from EMC on the actual bandwidth capability of the VMAX, original or extra-crispy second generation. Some slides mention a 3X increase in bandwidth. Was that a particular element, or the realizable bandwidth? Is it possible that the original VMAX could do only 1/3 the back-end aggregate of 48GB/s, and that the second generation can do the full back-end limit?

    Summary

Regardless of the SAN, focus on the IOPS and bandwidth that can be realized by actual SQL Server queries. SAN vendor big meaningless numbers are not helpful. The standard SAN vendor configuration should be able to produce reasonable IOPS, but will probably be sadly deficient on the bandwidth that can be realized by SQL Server. I do like SSD. I do not like auto-tiering, flash-cache or 7200RPM drives for the main line-of-business database. If there are SSDs, these should be made available as pure SSD. It should be the database administrator's responsibility to isolate hot data with filegroups and partitioning.

Considering that a 10K 900GB drive lists for $600, why bother with the 7200RPM (3TB) drive in an enterprise system, unless it is because the markup is so ridiculously high? Or perhaps data that needs to be on 7200RPM drives for cost reasons should not be on a high-cost enterprise storage system?

    Symmetrix DMX (pre-2009)

The old DMX-4 architecture is shown below. The front-end and back-end units used PPC processors(?) connected with a matrix to memory controllers?

    A history of the EMC Symmetrix product line can be found on Storage Nerve.

There can be up to 8 front-end units. Each FE can have 8 FC ports for a total of 64 FE ports? Assuming that this was designed for 4Gbps FC, with a realizable bandwidth of 375MB/s on each 4Gbps FC port, then each FE unit of 8 ports would in theory have a maximum BW of 3.0GB/sec. The combined 8 FE units with 64 ports total would have a theoretical upper bound of 24GB/s. Now it is possible that the DMX was originally designed for 2Gbps FC, for an upper bound design of 12GB/s.

Various EMC documents mention the interconnect bandwidth as a sum total of the individual component bandwidths. But nowhere in the EMC documentation is there a mention of the actual DMX bandwidth capability. I have heard that due to some internal architecture aspect, the actual bandwidth capability of the DMX is in fact 3GB/s.

Lonny Niederstadt provided the following link: Agilysys Audit.
