Enabling Technologies for Future Data Center Networking: A Primer

Min Chen and Hai Jin, Huazhong University of Science and Technology
Yonggang Wen, Nanyang Technological University
Victor C. M. Leung, The University of British Columbia

Abstract
The increasing adoption of cloud services is demanding the deployment of more data centers. Data centers typically house a huge amount of storage and computing resources, in turn dictating better networking technologies to connect the large number of computing and storage nodes. Data center networking (DCN) is an emerging field to study networking challenges in data centers. In this article, we present a survey on enabling DCN technologies for future cloud infrastructures through which the huge amount of resources in data centers can be efficiently managed. Specifically, we start with a detailed investigation of the architecture, technologies, and design principles for future DCN. Following that, we highlight some of the design challenges and open issues that should be addressed for future DCN to improve its energy efficiency and increase its throughput while lowering its cost.

This work is supported in part by the 1000 Young Talents Plan and the China National Natural Science Foundation under grant No. 61133006. We would like to thank Professor Limei Peng for her very helpful suggestions and comments during the delicate stages of concluding this article.


Currently, an increasing number of science and engineering applications involve big data and intensive computing, which present increasingly high demands for network bandwidth, response speed, and data storage. Due to complicated management and low operation efficiency, it is difficult for existing computing and service modes to meet these demands.

As a novel computing and service mode, cloud computing has already become prevalent, and is attracting extensive attention from both academia and industry. Companies find such a mode appealing because it requires smaller investments to deploy new businesses and greatly reduces operation and maintenance costs, with lower risk when supporting new services. The core idea of cloud computing is the unification and dispatch of networked resources via a resource pool to provide virtual processing, storage, bandwidth, and so on. To achieve this goal, it is critical to evolve the existing network architecture into a cloud network platform that offers high performance, large bandwidth capacity, and good scalability while maintaining a high level of quality of service (QoS). The cloud network platform where applications obtain services is referred to as a data center network (DCN); in this article, depending on the context, DCN denotes data center network or data center networking.

A traditional DCN interconnects servers by electronic switching with a limited number of switching ports, and usually employs a multitier interconnection architecture to extend the number of ports and provide full connectivity without blocking, which suffers from poor scalability. However, with the convergence of cloud computing with social media and mobile communications, the types of data traffic are becoming more diverse while the number of clients is increasing exponentially. Thus, the traditional DCN has two major shortcomings in terms of scalability and flexibility:
• From the interconnection architecture point of view, it is hard to extend the network capacity of the traditional DCN physically to scale with fluctuating traffic volumes and satisfy the increasing traffic demand.
• From the multi-client QoS support point of view, the traditional design is insensitive to the varied QoS requirements of a large number of clients. Also, it is challenging to virtualize a private network for each individual client to meet specific QoS requirements while minimizing resource redundancy.

The modern data center (DC) can contain as many as 100,000 servers, and the required peak communication bandwidth can reach up to 100 Tb/s [1]. Meanwhile, the supported number of users can be very large. As predicted in [2], the number of mobile cloud computing subscribers worldwide is expected to grow rapidly over the next five years, rising from 42.8 million subscribers in 2008 (approximately 1.1 percent of all mobile subscribers) to just over 998 million in 2014 (nearly 19 percent). Given the requirements to provide huge bandwidth capacity and support a very large number of clients, how to design a future DCN is a hot topic, and the following challenging issues should be addressed.

Scalable bandwidth capacity with low cost: In order to satisfy the requirement of non-blocking bisection bandwidth among servers, huge bandwidth capacity should be provided by an efficient interconnection architecture, while the cost and complexity should be kept as low as possible.

Energy efficiency: DCs consume a huge amount of power and account for about 2 percent of the greenhouse gas emissions that are exacerbating global warming. Typically, the annual energy use of a DC (2 MW) is equal to the amount of energy consumed by around 5000 U.S. cars over the same period. The power consumption of DCs mainly comes from several aspects, including switching/transmitting data traffic, storage/computation within numerous servers, cooling systems, and power distribution loss. Energy-aware optimization policies are critical for a green DCN.

User-oriented QoS provisioning: A large-scale DC carries various kinds of requests with different importance or priority levels from many individual users. QoS provisioning should be differentiated among different users. Even for the same user, the QoS requirements can change dynamically over time.

Survivability/reliability: In case of system failures, communications should be maintained so that services remain almost uninterrupted. Thus, it is crucial to design finely tuned redundancy that achieves the desired reliability and stability with the least resource waste.

In this article, the technologies for building a future DCN are classified into three categories: DCN architecture, inter-DCN communications, and technologies for supporting large-scale clients. The rest of this article is organized around these three categories. We first present various DCN interconnection architectures. The emerging communication techniques to connect multiple DCNs are then described, followed by the design issues in supporting the QoS requirements of large-scale clients. We then provide a detailed description of a novel testbed, Cloud3DView, for a modular DC, and outline some future research issues and trends. Finally, we give our concluding remarks.

Architecture for Data Center Networking

A large DCN may comprise hundreds of thousands or even more servers. These servers are typically connected through a two-level hierarchical architecture (i.e., a fat-tree topology). In the first level, the servers in the same rack are connected to the top-of-rack (ToR) switch. In the second level, ToR switches are interconnected through higher-layer switches. The key to meeting the requirements of huge bandwidth capacity and high-speed communications for a DCN is to design an efficient interconnection architecture.
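As a rough illustration of how such a two-level hierarchy scales, the sketch below computes the server and switch counts of the canonical k-ary fat-tree of Al-Fares et al., one common instantiation of this structure; the construction rules are standard for that topology rather than taken from this article.

    def fat_tree_size(k):
        """Size of a canonical k-ary fat-tree built from k-port switches.

        Each of the k pods holds k/2 edge (ToR) and k/2 aggregation
        switches, every edge switch attaches k/2 servers, and (k/2)^2
        core switches interconnect the pods at full bisection bandwidth.
        """
        servers = k ** 3 // 4
        edge = aggregation = k * (k // 2)
        core = (k // 2) ** 2
        return servers, edge + aggregation + core

    # Example: 48-port switches -> 27,648 servers using 2,880 switches.
    print(fat_tree_size(48))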

In this section, we first classify the networking architectures inside a DC into four categories — electronic switching, wireless, all-optical switching, and hybrid electronic/optical switching — which are detailed below.

Electronic Switching Technologies

Although researchers have conducted extensive investigations on various structures for DCNs recently, most of the designs are based on electronic switching [3, 4]. In an electronic-switching-based DCN, the number of switching ports supported by an electronic switch is limited. In order to provide a sufficient number of ports to satisfy the requirement of non-blocking communications among a huge number of servers, a server-oriented multitier interconnection architecture is usually employed.

Due to the hierarchical structure, oversubscription and unbalanced traffic are intrinsic problems in electronic-switching-based DCNs. The limitation of such an architecture is that the number of required network devices is very large, the corresponding construction cost is high, and the network energy consumption is high. Therefore, the key to tackling the challenge is to provide balanced communication bandwidth between any arbitrary pair of servers, so that any server in the DCN is able to communicate with any other server at full network interface card (NIC) bandwidth. In order to ensure that the aggregation/core layer of the DCN is not oversubscribed, more links and switches are added to facilitate multipath routing [3, 4]. However, the performance improvement is traded off against the increased cost of a larger amount of hardware and greater networking complexity.
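The notion of oversubscription mentioned above can be made concrete with a small sketch that divides the server-facing capacity of a ToR switch by its uplink capacity; the port counts and line rates below are hypothetical, not measurements from any particular DCN.

    def oversubscription(servers_per_rack, nic_gbps, uplinks, uplink_gbps):
        """Ratio of server-facing bandwidth to uplink bandwidth at a ToR
        switch; 1.0 means every server can drive its full NIC rate toward
        the aggregation/core layer without blocking."""
        return (servers_per_rack * nic_gbps) / (uplinks * uplink_gbps)

    # Hypothetical rack: 40 servers behind 4 x 10 Gb/s uplinks.
    print(oversubscription(40, 1, 4, 10))    # 1.0  -> non-blocking
    print(oversubscription(40, 10, 4, 10))   # 10.0 -> heavily oversubscribed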

Wireless Data Center

Recently, wireless capacity in the 60 GHz spectrum has been utilized to tackle the hotspot problem caused by oversubscription and unbalanced traffic [5]. The 60 GHz transceivers are deployed to connect ToR switches, providing supplemental routing paths in addition to the traditional wired links in DCNs. However, due to the intrinsic line-of-sight limitation of 60 GHz wireless links, the realization of a wireless DC is quite challenging. To alleviate this problem, a novel 3D wireless DC has been proposed to solve the link blockage and radio interference problems [6]. In a 3D wireless DC, wireless signals bounce off the DC ceiling to establish blockage-free wireless connections. In [7], a wireless flyway system is designed to set up the most beneficial flyways and route over them both directly and indirectly to reduce congestion on hot links.

The scheduling problem in wireless DCNs was first identified and formulated in [8], providing an important foundation for further work in this area. An idealized theoretical model was then developed that considers both wireless interference and adaptive transmission rates [9]. A novel solution combining throughput and job completion time is proposed to efficiently improve the global performance of wireless transmissions. As pointed out in [9], channel allocation is a critical research issue for wireless DCNs.
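To make the channel allocation problem concrete, the following is a minimal greedy sketch that assigns 60 GHz channels to flyway links so that mutually interfering links never share a channel; the conflict graph and channel count are hypothetical inputs and the greedy rule is illustrative only, not the models proposed in [8, 9].

    def assign_channels(links, conflicts, num_channels):
        """Greedily give each flyway link the lowest channel not used by
        any of its interfering neighbors; None means no channel is left."""
        channel = {}
        for link in links:                      # e.g., hottest links first
            used = {channel[n] for n in conflicts.get(link, ()) if n in channel}
            free = [c for c in range(num_channels) if c not in used]
            channel[link] = free[0] if free else None
        return channel

    # Hypothetical ToR-to-ToR flyways and their mutual-interference sets.
    links = ["r1-r4", "r2-r4", "r1-r2"]
    conflicts = {"r1-r4": ["r2-r4", "r1-r2"],
                 "r2-r4": ["r1-r4"],
                 "r1-r2": ["r1-r4"]}
    print(assign_channels(links, conflicts, num_channels=2))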

All-Optical Switching

Due to its super-high switching/transmission capacity, the optical fiber transmission system is considered one of the most appropriate transmission technologies for DCNs. Moreover, the super-high switching capacity and flexible multiplexing capability of all-optical switching technology make it possible to flatten the DCN architecture, even for a large-scale DC.

All-optical switching techniques can be divided into two main categories: optical circuit switching (OCS) and optical packet switching (OPS).
• OCS is a relatively mature technology with market readiness, and can be used in the core switching layer to increase the switching capacity of DCNs while significantly alleviating the traffic burden. However, OCS is designed to support static routing over pre-established lightpaths, and statically planned lightpaths cannot handle bursty DC traffic patterns, which leads to congestion on overloaded links. Since OCS is a coarse-grained switching technology at the level of a wavelength channel, it exhibits low flexibility and efficiency when switching bursty and fine-grained DCN traffic.
• OPS has the advantage of fine-grained and adaptive switching capability, but it faces serious technological maturity problems due to the lack of high-speed OPS fabrics and all-optical buffers.

Hybrid Technologies

Compared to an optical-switching-based network, an electronic network exhibits better expandability but poorer energy efficiency. A hybrid electronic/optical-switching-based DCN tries to combine the advantages of both electronic-switching-based and optical-switching-based architectures [10, 11]. An electronic-switching-based network is used to transmit small amounts of delay-sensitive data, while an optical-switching-based network is used to transmit large volumes of traffic. Table 1 compares the features of various representative DCN architectures in terms of different interconnecting technologies. As seen in Fig. 1, suitable DCN architecture design trade-offs should attempt to satisfy application-specific bandwidth and scalability requirements while keeping deployment cost as low as possible.

Inter-DCN Communications

Nowadays, due to the wide deployment of rich media applications via social networks and content delivery networks, the number of DCNs around the world is increasing rapidly, and a DC seldom works alone. Thus, there is a demand to connect multiple DCNs placed in various strategic locations, and the communications among them are referred to as inter-DCN communications in this section. When inter-DCN communications and intra-DCN communications are jointly considered, a two-level structure emerges. Inside a DC, any of the architectures presented earlier can be selected for intra-DCN communications. In this section, we first survey alternative architectures for inter-DCN communications and then the joint design across the two levels.

Optical-Switching-Based Backbone for Inter-DCN Communications

Since optical fiber can provide large bandwidth, it can be utilized to interconnect multiple DCNs to address the problems of traffic congestion and imbalance. As with intra-DCN communications, it is hard to deploy OPS for inter-DCN communications due to the lack of high-speed optical packet switching fabrics and all-optical buffers. Instead, OCS is the better choice because of its technological maturity. Although OCS is characterized by slow reconfiguration on the order of milliseconds, the traffic flow in the backbone network is relatively stable, and thus the cost of lightpath configuration can be amortized over backbone traffic streams that have sufficiently long durations. In this section, two OCS technologies are considered for this purpose: wavelength-division multiplexing (WDM) and coherent optical orthogonal frequency-division multiplexing (CO-OFDM) [13].

Wavelength-Division Multiplexing — Traditionally, a single-carrier laser source is used as the WDM light source. In order to meet the requirements of mass data transmission, the number of laser sources needs to be increased, resulting in a sharp rise in cost and energy consumption. Thus, increasing the number of carriers generated by a WDM light source is critical. In recent years, multicarrier source generation technology [14] and point-to-point WDM transmission systems have attracted much attention. Thousand-channel dense WDM (DWDM) has also been demonstrated successfully. Based on a centralized multicarrier source, an optical broadcast-and-select network architecture is suggested in [14].

CO-OFDM Technology — As one of the OCS technologies, CO-OFDM shows great potential in reducing the construction cost of future DCNs. One great advantage of CO-OFDM is its all-optical traffic grooming capability, compared with legacy OCS networks where optical-to-electronic-to-optical (O/E/O) conversion is required. This is a critical feature for the interconnection of DCNs, where traffic is heterogeneous with extremely large diversity. Thus, CO-OFDM can improve bandwidth capacity with high efficiency and allocation flexibility. However, it suffers from the intrinsic network agility problem that is common to all existing OCS-based technologies.

Software-Defined Networking Switch

GatorCloud [15] was proposed to leverage the flexibility of software-defined networking (SDN) to boost DCNs to over 100 Gb/s bandwidth with cutting-edge SDN switches using OpenFlow. SDN switches separate the data path (packet forwarding fabrics) from the control path (high-level routing decisions) to allow advanced networking and novel protocol designs; this decouples the decision-making logic and makes switches remotely programmable via SDN protocols. In this way, SDN turns switches into economical commodities. With SDN, networks can be abstracted and sliced for better control and optimization of the many demanding data-intensive or computation-intensive applications running over DCNs.

Hence, SDN makes it possible to conceive a fundamentally different parallel and distributed computing engine that is deeply embedded in the DCN, realizing application-aware service provisioning. Also, SDN-enabled networks in DCNs can be adaptively provisioned to boost data throughput for specific applications during reserved time periods. Currently, the main research issues are SDN-based flow scheduling and workload balancing, which remain unsolved.
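The separation of data path and control path can be sketched with plain data structures: a switch that only matches and forwards, and a controller that installs per-flow rules, here balancing new flows over the least-loaded uplink. This is a schematic toy in Python, not the GatorCloud design or a real OpenFlow controller API; all names and the load-balancing rule are illustrative assumptions.

    class FlowTable:
        """Data path: the switch only matches installed rules and forwards."""
        def __init__(self):
            self.rules = {}                       # match -> output port
        def install(self, match, out_port):
            self.rules[match] = out_port
        def forward(self, match):
            return self.rules.get(match)          # a miss would go to the controller

    class Controller:
        """Control path: picks the least-loaded uplink for each new flow."""
        def __init__(self, switch, uplinks):
            self.switch = switch
            self.load = {p: 0 for p in uplinks}   # bytes scheduled per uplink
        def handle_new_flow(self, match, size):
            port = min(self.load, key=self.load.get)
            self.load[port] += size
            self.switch.install(match, port)      # program the data path remotely

    sw = FlowTable()
    ctl = Controller(sw, uplinks=[1, 2, 3, 4])
    ctl.handle_new_flow(("10.0.0.1", "10.0.0.9", 80), size=500)
    print(sw.forward(("10.0.0.1", "10.0.0.9", 80)))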

Joint Intra-DCN and Inter-DCN Design

For large-scale DCNs where many servers and multiple DCNs are interconnected, traditional solutions exhibit various kinds of undesirable load-unbalancing problems. In this section, the whole architecture is jointly designed at three levels according to the characteristics of the traffic flows among servers inside a DCN and between DCNs. At the rack level, hybrid electronic/optical switching is proposed to interconnect similar servers; at the intra-DCN level, optical switching is suggested to connect ToR switches; at the inter-DCN level, multiple DCNs are connected by backbone optical networks.

Rack Level — In the same rack, servers are more apt to communicate with nearby servers. Compared to communications with higher layers, communications between servers in the same rack have a higher tendency to be bursty.

Table 1. A comparison of DCN architectures.

Name            Networking architecture   Switching granularity   Scalability   Energy consumption
Fat-Tree        Electronic                Fine                    Low           High
BCube           Electronic                Fine                    Low           High
DCell           Electronic                Fine                    Low           High
VL2             Electronic                Fine                    Low           High
Schemes in [5]  Wireless                  Fine                    Low           High
Schemes in [6]  Wireless                  Fine                    Low           High
HyPaC           Hybrid                    Medium                  Medium        Medium
Helios          Hybrid                    Medium                  Medium        Medium
DOS [12]        Optical                   Coarse                  High          Low
Scheme in [13]  Optical                   Coarse                  High          Low


To handle dynamic and bursty traffic flows between similar servers, packet switching technology is most suitable. Since it is challenging to deploy OPS networks, electronic switching or hybrid electronic/optical switching is recommended for rack-level communications.

Intra-DCN Level — As mentioned earlier, a typical DCN contains a certain number of racks, and a typical rack accommodates several dozen servers. The bandwidth at the top of the hierarchical tree structure is much smaller than the aggregate bandwidth available at the servers' NICs, resulting in bottlenecks and the hotspot problem of oversubscription at aggregation/core switches. Optical switching technology can provide large communication bandwidth, which can effectively relieve the bandwidth shortage and facilitate improvements in DCN design and implementation. At the intra-DCN level, the traffic flows between racks show no pronounced dynamic changes. Although the deployment of an optical switch incurs a higher cost initially, the device also has a longer life span, which compensates for the higher deployment cost.

Multi-DCN Level — At the multi-DCN level, the amounts of traffic on the downstream and upstream links are usually asymmetric. The upstream data flow from a terminal server to a DCN consists mostly of simple request messages, while the returned content flow (e.g., video streaming) from the DCN to the terminal server may be very large. Such asymmetry causes a serious imbalance in capacity consumption between upstream and downstream links, possibly resulting in wasted bandwidth resources.

While WDM with a multicarrier optical source is a suitable technology for the backbone interconnecting DCNs, several issues need to be addressed. For example, optical carrier distribution algorithms should be designed to meet the demands of dynamically changing flows. Effective use of the optical carriers produced by a multicarrier source can maximize spectrum efficiency. A highly efficient and reliable optical carrier distribution method can then be used to improve carrier quality in terms of optical SNR in order to satisfy the demands of data transmission.
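As a toy example of what an optical carrier distribution algorithm has to do, the sketch below hands out fixed-rate carriers from a centralized multicarrier source to inter-DC demands, largest demand first; the per-carrier rate, demand values, and greedy policy are hypothetical assumptions rather than the scheme of [14].

    import math

    def distribute_carriers(demands_gbps, total_carriers, carrier_gbps=10):
        """Give each inter-DC demand just enough fixed-rate carriers,
        serving the largest demands first until the carrier pool is empty."""
        remaining = total_carriers
        grant = {}
        for link, gbps in sorted(demands_gbps.items(), key=lambda kv: -kv[1]):
            need = math.ceil(gbps / carrier_gbps)
            grant[link] = min(need, remaining)
            remaining -= grant[link]
        return grant

    # Hypothetical demands (Gb/s) between three DC sites, 12 carriers available.
    print(distribute_carriers({"DC1-DC2": 85, "DC1-DC3": 23, "DC2-DC3": 40},
                              total_carriers=12))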

Large-Scale Client-Supporting Technologies

A large-scale DC may process various kinds of requests with different levels of importance or priority from many individual users. In order to support large-scale clients with different QoS requirements, this section mainly investigates two categories of technologies: virtualization and routing. Two kinds of virtualization technologies (i.e., server consolidation and virtual machine migration) are introduced. Then routing issues are investigated by considering intra-DCN traffic characteristics.

Virtualization for Large-Scale Applications

At present, DCs are power hungry: their energy cost accounts for 75 percent of the total operating cost [16]. Servers consume the largest fraction of overall power (up to 50 percent), while approximately the other half is consumed by cooling systems. This is why many research efforts have focused on increasing both server utilization and power efficiency. These efforts include virtual machine (VM) consolidation (migrating VMs onto the minimum number of servers and switching off the rest), dynamic voltage and frequency scaling of server central processing units (CPUs), air flow management (to optimize cooling), and the adaptation of air conditioning systems to current operating conditions.

Server Consolidation — In server consolidation, multiple instances of different applications are moved onto the same server, with the aim of freeing up some servers for shutdown. Server consolidation can thus save the energy consumed by a DC. However, such a use case challenges existing DCN solutions, because a large amount of application data might need to be shifted among different servers, possibly located in different racks.
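A minimal way to see why consolidation both saves energy and moves data around is to treat it as bin packing: the first-fit-decreasing sketch below packs VM CPU demands onto as few servers as possible so the remaining machines can be switched off. The capacities and demands are hypothetical, and real consolidation engines also weigh memory, network traffic, and migration cost.

    def consolidate(vm_loads, server_capacity=1.0):
        """First-fit decreasing: place each VM (largest first) on the first
        active server with room, opening a new server only when necessary;
        fewer active servers means more machines can be powered down."""
        residual = []                               # free capacity per active server
        placement = {}
        for vm, load in sorted(vm_loads.items(), key=lambda kv: -kv[1]):
            for i, free in enumerate(residual):
                if load <= free:
                    residual[i] -= load
                    placement[vm] = i
                    break
            else:
                residual.append(server_capacity - load)
                placement[vm] = len(residual) - 1
        return placement, len(residual)

    # Five hypothetical VMs fit on two servers instead of five.
    print(consolidate({"a": 0.6, "b": 0.5, "c": 0.3, "d": 0.4, "e": 0.2}))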

VM Migration — Another technology suitable for serving a large number of clients is migrating VMs in response to user demand. Similar to the server consolidation use case, this also results in additional data communications among physical servers in different racks. As such, new technologies and strategies should be developed to mitigate the potential detrimental effects.

One possibility is to leverage the existing named-data networking (NDN) technology for distributed file system (DFS) management and VM migration. First, the DFS is a crucial component of DC and cloud computing. In current architectures, metadata management systems are becoming a bottleneck that limits scalability. In our Cloud3DView project, two novel ideas based on NDN and information-centric networking (ICN) have been proposed for DFS metadata management systems. We have shown that our new solution can reduce the lookup overhead from O(log n) to O(1). Initial numerical results verified that the throughput can be increased and the response delay reduced significantly. Second, VM migration is currently the most popular approach for load balancing in DCs. However, the traditional approach often suffers from overwhelming overhead. In this research, we leverage the emerging NDN framework to decouple object locality from the physical infrastructure. In this case, an object can easily be routed to without constantly updating the metadata system. We expect that the overhead cost can be reduced significantly.
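The intuition behind the O(log n)-to-O(1) claim can be shown with a toy contrast between resolving a hierarchical path one directory level at a time and hashing a flat content name straight to its location record; this is only an illustrative sketch under those assumptions, not the actual Cloud3DView metadata design.

    import hashlib

    def lookup_by_path(root, path):
        """Hierarchical metadata: one hop per path component, so the cost
        grows with the depth of the namespace (roughly O(log n))."""
        node = root
        for part in path.strip("/").split("/"):
            node = node["children"][part]
        return node["location"]

    def lookup_by_name(name_index, name):
        """Name-based metadata: a flat content name hashes directly to the
        object's location record in one step (O(1))."""
        return name_index[hashlib.sha1(name.encode()).hexdigest()]

    root = {"children": {"vm": {"children": {
        "img42": {"location": "rack3/srv07", "children": {}}}}}}
    name_index = {hashlib.sha1(b"/vm/img42").hexdigest(): "rack3/srv07"}
    print(lookup_by_path(root, "/vm/img42"), lookup_by_name(name_index, "/vm/img42"))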

Routing for Dynamic Flow Demands

Today's DCs are shared among multiple clients running a wide range of applications. These applications require a network with the flexibility to meet various flow demands. Given a flow matrix, an ideal routing algorithm should be able to find the optimal routing distribution. However, as the flows in the DCN change dynamically and randomly, the routing algorithm cannot know the actual flow matrix in advance.

Figure 1. Trade-off map for DCN architecture design. Moving from electronic switching/wireless/hybrid designs through OPS to OCS trades more distributed bandwidth, lower scalability, higher deployment complexity, and fine granularity for bursty traffic against more centralized bandwidth, higher scalability, lower deployment complexity, and coarse granularity for bursty traffic.

Currently, most DCN routing algorithms are designed only for specific DCN architectures (e.g., electronic-switching-based or optical-switching-based), and use either demand-oblivious routing (DOR) [3] or demand-specific routing (DSR) [11]. As presented earlier, a hybrid electronic/optical-switching-based DCN can exhibit better overall performance. How to design a highly efficient routing algorithm for such a hybrid DCN needs to be further explored. In order to address this issue, this section analyzes DOR and DSR, and proposes a hybrid scheme.

Demand-Oblivious Routing — DOR is a conservative approach to addressing flow randomness. Since it assumes that the routing algorithm does not know the flow characteristics, it cannot make highly efficient routing selections.

Demand-Specific Routing — By comparison, DSR mechanisms need a given flow matrix to make an optimal routing decision. DSR achieves higher performance than DOR when the flow demand is known. However, because dynamic changes in the actual DCN flows render the flow matrix outdated, DSR may suffer from low efficiency when flows change dynamically.

Hybrid Scheme — This section presents a hybrid algorithm combining DOR and DSR to maximize network performance for a large DCN with uncertain, dynamic flows. The hybrid scheme is motivated by combining the advantages of both routing mechanisms. Although DC flows change dynamically, they can be divided into slowly changing flows (e.g., large but stable data flows) and rapidly changing flows (e.g., small bursts of data). Thus, if slowly changing data flows could be estimated accurately, DSR would yield better performance for them, while DOR could be used for rapidly changing flows. The design of the hybrid scheme needs to address the following issues; a small illustrative sketch follows them.

Flow matrix composition and representation: Understanding flow characteristics is key to developing a highly efficient routing algorithm. This includes several questions, such as whether DC flows can be decomposed into steady flows and bursty flows, how to decompose them, and which mathematical model could represent both steady and bursty flows.

Determining the proportion of DSR and DOR: An algorithm is needed to decide which parts of the flows, and how much of them, should be transferred over DSR or DOR paths. The proportion is related to network capacity and link congestion probability.

Algorithm design to meet the congestion probability condition: Assuming that the congestion probability on any link should be lower than a certain threshold, the hybrid DOR/DSR algorithm should maximize the volume of network flows accommodated while meeting this congestion constraint, given a probability density function for the flow matrix.
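The sketch below illustrates one possible shape of such a hybrid scheme under simple assumptions: each flow's steady component is estimated as the mean of recent measurements and routed on a demand-specific path, while the bursty residual is sprayed evenly over ECMP paths in a demand-oblivious way. The estimator, path names, and splitting rule are illustrative assumptions, not the algorithm the authors leave as an open problem.

    def split_flows(samples):
        """Decompose each src-dst flow into a steady part (mean of recent
        samples, suited to DSR) and a bursty residual (excess of the latest
        sample over that mean, left to DOR)."""
        steady, bursty = {}, {}
        for pair, history in samples.items():
            mean = sum(history) / len(history)
            steady[pair] = mean
            bursty[pair] = max(history[-1] - mean, 0.0)
        return steady, bursty

    def route(steady, bursty, dsr_path, ecmp_paths):
        """Send the steady component on its DSR-optimized path and spread
        the bursty residual evenly over the ECMP paths (DOR)."""
        plan = {}
        for pair in steady:
            share = bursty[pair] / len(ecmp_paths)
            plan[pair] = {dsr_path[pair]: steady[pair],
                          **{p: share for p in ecmp_paths}}
        return plan

    samples = {("s1", "s9"): [8.0, 9.0, 14.0, 11.0]}       # measured Gb/s
    print(route(*split_flows(samples),
                dsr_path={("s1", "s9"): "core-A"},
                ecmp_paths=["core-B", "core-C"]))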

Cloud3DView: A Testbed for a Modular Data Center

In this section, as an example, we briefly introduce a modular DC testbed operated at Nanyang Technological University. The purpose of this testbed is to develop dynamic power management technologies, some of which are based on the aforementioned DCN technologies, to reduce the electricity consumption of DC operations.

Testbed Infrastructure

The testbed consists of three subsystems: information and communication technology (ICT), cooling, and power distribution. In this subsection, we illustrate its overall structure and ICT capacity.

Overall Structure — As Fig. 2 shows, the modular DC consists of two modules: an air conditioning module and a server module. The former is stacked on top of the latter to save space, a precious resource in Singapore.

The air conditioning module makes use of outside air for cooling whenever feasible, such as when the outside air temperature is lower than the temperature inside the DC. The fresh intake air flows from the cold area to the hot area through the servers and carries away the heat they generate. This saves energy by reducing the hours during which mechanical cooling has to run.

In the modular DC of Cloud3DView, the server module is organized into 10 racks, each of which contains up to 27 servers and two layer 2 switches. We adopt a fat-tree topology for rack-level internetworking, and all the ToR switches are connected to an aggregation router, which is connected to the Internet. Cloud3DView, a software system developed at NTU, monitors and manages all the equipment.

ICT Capacity — The modular DC contains 270 servers, which together provide 394.2 Tbytes of disk, 1080 Gbytes of memory, and 540 CPUs (i.e., roughly 1.46 Tbytes of disk, 4 Gbytes of memory, and 2 CPUs per server). All of these servers are connected to the ToR switches via Gigabit Ethernet. The operating system for the servers is CentOS 6.3, chosen for stability.

System Management Suite

One goal of this project is to create a novel, gamified DC management system. Cloud3DView aims to replace the traditional command-line-based DC management style with a gamification-based solution for controlling the DC with high efficiency. To achieve this goal, Cloud3DView contains five subsystems.

Data Collection Subsystem — This subsystem focuses on collecting data from DC equipment (e.g., servers, switches, power distribution units, and air conditioners) and applications (e.g., Hadoop, Apache, and MySQL). The collected data are used for monitoring, analysis, visualization, and management.

Figure 2. The infrastructure of Cloud3DView: a) data center testbed; b) hot aisle; c) cold aisle.

Data Storage Subsystem — This subsystem focuses on storing data efficiently in both a traditional relational database management system (RDBMS) and a NoSQL store. It also provides effective application programming interfaces (APIs) for data visualization and data mining.

Data Mining Subsystem — This subsystem focuses on analyzing the collected data and providing solutions to various design problems (e.g., green DC).

System Monitoring Subsystem — This subsystem provides visualization of the collected data in 3D with a game-like experience. Administrators can view the real-time status of DC equipment and applications. If anything goes wrong, an alert message can be sent to the administrators.

System Management Subsystem — This subsystem mainly focuses on managing physical servers, VMs, applications, and network devices. Traditionally, in order to deploy an application in a DC, administrators and clients may spend a lot of time on background tasks (e.g., creating VMs and installing software). Cloud3DView can eliminate these background tasks and deploy applications to fulfill clients' requirements automatically.

Two Trial Applications

The testbed is primarily used to support two applications: multiscreen cloud social TV [20] and big data analytics.

Multiscreen Cloud Social TV — Our multiscreen social TV technology has been touted by global media (1600+ news articles from 29+ countries) as an innovative technology that transforms traditional "laid-back" TV viewing behavior into a proactive "lean-forward" social networking experience, marrying TV to today's social networking lifestyle. This platform, when fully developed and commercialized, could transform the value of TV and potentially save it from a downfall similar to that of newspapers. In our system, examples of salient and sticky features include, but are not limited to, a virtual living room experience that allows remote viewers to watch TV programs together with text, audio, and video communication modalities, and a video teleportation experience that allows viewers to seamlessly migrate programs across different screens (e.g., TV, smartphone, and tablet) with minimal learning.

Figure 3. Multiscreen cloud social TV application deployment: a) social TV deployment architecture (users on TV, PC, and phone reach the modular data center through a network operation center, load balancer, service proxy, and HTTP server, backed by portal, intensive-computing, storage, media transcoding, and XMPP social networking servers over a cloud message bus); b) data center performance for social TV (per-server load, memory, network, and CPU monitoring graphs for the portal, SMP, media, and storage servers).

Moreover, to meet the requirements of various customers, the platform will provide a set of APIs for other developers to design, implement, and deploy novel value-added content services for specific customer needs (e.g., home care for the elderly, TV ad workflow redesign, real-time TV shopping, collaborative e-learning, and autism diagnosis and assistive treatment, to name a few).

In Fig. 3, we present the deployment architecture for multiscreen cloud social TV in the modular DC, together with its measured performance. In the deployment architecture shown in Fig. 3a, different servers are configured for specific functions, including the portal, XMPP messaging, media players, storage, and so on. Using the Cloud3DView management system, we have been able to monitor the performance statistics of each individual server. As shown in Fig. 3b, the three graphs in each column show, from top to bottom, the CPU, storage, and networking performance. The visualization of these performance metrics is instrumental for operators to dynamically control the workload for better QoS or quality of experience (QoE).

Big Data Analytics — In the testbed, we have also implemented a reference architecture for big data analytics based on the popular Hadoop framework. Our reference architecture is illustrated in Fig. 4a, in which the key component is a cross-rack Hadoop deployment. The reference architecture includes four major steps in the big data analytics process:
• Source data input
• Data-intensive computing
• Result consumption
• Intelligence
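The map and reduce stages of Fig. 4a can be illustrated with a few lines of pure Python that count request types in toy log records; this is only a schematic stand-in for the testbed's actual cross-rack Hadoop jobs, whose data and code are not described here.

    from collections import defaultdict

    def map_stage(records):
        """Map: emit a (token, 1) pair for every token in every record."""
        for record in records:
            for token in record.split():
                yield token, 1

    def reduce_stage(pairs):
        """Reduce: sum the counts emitted for each distinct token."""
        totals = defaultdict(int)
        for key, value in pairs:
            totals[key] += value
        return dict(totals)

    # Toy "source data import" followed by the data-intensive computing step.
    logs = ["GET /index", "GET /video", "POST /login", "GET /video"]
    print(reduce_stage(map_stage(logs)))   # e.g., {'GET': 3, '/video': 2, ...}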

In Fig. 4b, we present an alternative view of the system status by combining the computation loads from all the servers into the same diagram. In this diagram, each server is labeled with a specific color, and the total computation load is the sum over all the individual servers (or colors). With this aggregate view, it is easy to keep track of the total load of the computation task, and the aggregate load metric is a good indicator of when more resources should be acquired from the DC.

Conclusions

In this article, we have surveyed the major aspects of building a future DCN in terms of architecture, multiple-DCN interconnection, user-oriented QoS provisioning, and other supporting technologies. In Fig. 5, we summarize the core design issues with respect to several DCN system components. With regard to large-scale applications and clients, further open issues for building a future DCN, such as efficient interconnection architectures, resource assignment, reliability, energy efficiency, and traffic characteristics, need to be investigated in future work.

Figure 4. Big data analytics application deployment: a) big data analytics reference architecture (source data such as science, industry, legacy, and system data in CSV, text, video, log, SQL, and XML formats is imported into the modular data center, processed by Hadoop map and reduce stages across master and data nodes, and the results are consumed through RDBMS, visualization, and statistics tools to produce intelligence); b) big data performance (Hadoop aggregated one-minute load over the last hour, summed across all servers).


References
[1] http://www.datacenterknowledge.com/
[2] ABI Research, "Mobile Cloud Computing Subscribers to Total Nearly One Billion by 2014," tech. rep., http://www.abiresearch.com/press/1484/, 2009.
[3] A. Greenberg et al., "VL2: A Scalable and Flexible Data Center Network," ACM SIGCOMM 2009, Barcelona, Spain.
[4] C. Guo et al., "BCube: A High Performance, Server-Centric Network Architecture for Modular Data Centers," ACM SIGCOMM 2009, Barcelona, Spain.
[5] D. Halperin et al., "Augmenting Data Center Networks with Multi-Gigabit Wireless Links," ACM SIGCOMM 2011, Toronto, Canada.
[6] X. Zhou et al., "Mirror Mirror on the Ceiling: Flexible Wireless Links for Data Centers," ACM SIGCOMM 2012, Helsinki, Finland.
[7] S. Kandula and P. Bahl, "Flyways to De-Congest Data Center Networks," HotNets 2009.
[8] Y. Cui, H. Wang, and X. Cheng, "Channel Allocation in Wireless Data Center Networks," IEEE INFOCOM 2011.
[9] Y. Cui et al., "Dynamic Scheduling for Wireless Data Center Networks," IEEE Trans. Parallel and Distributed Systems, http://doi.ieeecomputersociety.org/10.1109/TPDS.2013.5, Jan. 2013.
[10] N. Farrington et al., "Helios: A Hybrid Electrical/Optical Switch Architecture for Modular Data Centers," ACM SIGCOMM 2010, New Delhi, India.
[11] G. Wang et al., "c-Through: Part-Time Optics in Data Centers," ACM SIGCOMM 2010, New Delhi, India.
[12] X. Ye et al., "DOS: A Scalable Optical Switch for Datacenters," ANCS 2010, La Jolla, CA, 2010.
[13] L. Peng et al., "A Novel Approach to Optical Switching for Intra-Datacenter Networking," IEEE/OSA J. Lightwave Tech., vol. 30, no. 2, Jan. 2012, pp. 252–66.
[14] Y. Cai et al., "Design and Evaluation of an Optical Broadcast-and-Select Network Architecture with a Centralized Multi-Carrier Light Source," IEEE/OSA J. Lightwave Tech., vol. 27, no. 21, Nov. 2009, pp. 4897–4906.
[15] "GatorCloud," http://www.s3lab.ece.ufl.edu/projects/
[16] S. L. Sams, "Discovering Hidden Costs in Your Data Centre: A CFO Perspective," IBM white paper.
[17] D. Li et al., "Exploring Efficient and Scalable Multicast Routing in Future Data Center Networks," Proc. IEEE INFOCOM, Shanghai, China, 2011.
[18] R. Dai, L. Li, and S. Wang, "Adaptive Load-Balancing in WDM Mesh Networks with Performance Guarantees," Photonic Network Communications.
[19] Y. G. Wen, G. Y. Shi, and G. Q. Wang, "Designing an Inter-Cloud Messaging Protocol for Content Distribution as a Service (CoDaaS) over Future Internet," 6th Int'l. Conf. Future Internet Technologies 2011, Seoul, Korea, June 2011.
[20] Y. C. Jin et al., "On Monetary Cost Minimization for Content Placement in Cloud-Centric Media Network," 2013 IEEE Int'l. Conf. Multimedia and Expo, July 15–19, 2013, San Jose, CA.

Biographies

MIN CHEN [M'08, SM'09] ([email protected]) is a professor at the School of Computer Science and Technology of Huazhong University of Science and Technology (HUST). He was an assistant professor at the School of Computer Science and Engineering of Seoul National University (SNU) from September 2009 to February 2012. He worked as a postdoctoral fellow in the Department of Electrical and Computer Engineering of the University of British Columbia (UBC) for three years. Before joining UBC, he was a postdoctoral fellow at SNU for one and a half years. He has more than 180 paper publications. He received the Best Paper Award from IEEE ICC 2012 and was the Best Paper Runner-Up at QShine 2008. He has been a Guest Editor for IEEE Network, IEEE Wireless Communications, and other publications. He was Symposium Co-Chair for IEEE ICC 2012 and 2013, and General Co-Chair for IEEE CIT 2012. He is a TPC member for IEEE INFOCOM 2014. He was a Keynote Speaker at CyberC 2012 and Mobiquitous 2012.

YONGGANG WEN [S'99, M'08] ([email protected]) is an assistant professor with the School of Computer Engineering of Nanyang Technological University, Singapore. He received his Ph.D. degree in electrical engineering and computer science (with a minor in Western literature) from the Massachusetts Institute of Technology (MIT), Cambridge, in 2008; his M.Phil. degree (with honors) in information engineering from the Chinese University of Hong Kong in 2001; and his B.Eng. degree (with honors) in electronic engineering and information science from the University of Science and Technology of China (USTC), Hefei, Anhui, in 1999. Previously, he worked at Cisco as a senior software engineer and a system architect for content networking products. He also worked as a research intern at Bell Laboratories, Sycamore Networks, and Mitsubishi Electric Research Laboratories (MERL). He has published more than 60 papers in top and prestigious conferences. His system research has resulted in two patents and has been featured in international media (e.g., Straits Times, Business Times, Lianhe Zaobao, Channel News Asia, ZDNet, CNet, ACM Tech News, United Press International, Times of India, Yahoo News). His research interests include cloud computing, mobile computing, multimedia networking, cyber security, and green ICT.

HAI JIN [M'99, SM'06] ([email protected]) is a Cheung Kong Scholars Chair Professor of computer science and engineering at HUST, where he is now dean of the School of Computer Science and Technology. He received his Ph.D. degree in computer engineering from HUST in 1994. In 1996, he was awarded a German Academic Exchange Service fellowship to visit the Technical University of Chemnitz in Germany. He worked at the University of Hong Kong between 1998 and 2000, and as a visiting scholar at the University of Southern California between 1999 and 2000. He was awarded an Excellent Youth Award from the National Science Foundation of China in 2001. He is the chief scientist of ChinaGrid, the largest grid computing project in China, and the chief scientist of the National 973 Basic Research Program Project on Virtualization Technology of Computing Systems. He is a member of the ACM. His research interests include computer architecture, virtualization technology, cluster computing and grid computing, peer-to-peer computing, network storage, and network security.

VICTOR C. M. LEUNG [S'75, M'89, SM'97, F'03] ([email protected]) is a professor of electrical and computer engineering and holder of the TELUS Mobility Research Chair at the University of British Columbia, Vancouver, Canada. He has contributed some 650 technical papers, 25 book chapters, and five books in the areas of wireless networks and mobile systems. He was a Distinguished Lecturer of the IEEE Communications Society. Several papers he has co-authored have won best paper awards. He has been serving on the editorial boards of IEEE Transactions on Computers, IEEE Wireless Communications Letters, and several other journals, and has contributed to the organizing and technical program committees of numerous conferences. He was a winner of the 2012 UBC Killam Research Prize and the IEEE Vancouver Section Centennial Award. He is a Fellow of the Canadian Academy of Engineering and the Engineering Institute of Canada.

Figure 5. Core functional components for future DCN design. The core design components for future DCNs comprise: an application manager (cloud media, gaming, healthcare, social TV, big data analytics); the architecture (electronic switching, wireless, hybrid electrical/optical, all-optical switching); user-oriented QoS provisioning (VM migration, server consolidation, QoS routing); inter-DCN communications (SDN, all-optical switching, NDN); and technologies for other issues (green/energy efficiency, survivability/reliability, security/privacy, sustainability).
