Advanced Software Engineering (高级软件工程) - SJTU

Page 1

Advanced Software Engineering

Lecture 9: Cloud Computing and Big Data

Lecturer: 刘驰 (Liu Chi)

November 18, 2013

Page 2

Content

Cloud Computing
Hadoop
HDFS
MapReduce
HBase

Page 3

Basic concepts of cloud computing
Definition of cloud computing

Characteristics of cloud computing

Classification of cloud computing

Cloud computing compared with other computing paradigms

Advantages of cloud computing and the changes it brings

Page 4

Why do we need cloud computing?

Case 1:

You are writing a file.

Your computer's hard disk fails and the file is lost.

A file stored in the cloud would not be lost.

Page 5

Why do we need cloud computing?

Case 2:

Chatting on QQ: download, install, use

Using C++: download, install, use

......

Instead, obtain these as services from the cloud.

Page 6

Why do we need cloud computing?

Case 3:

The Washington Post suddenly needed a large amount of computing resources for document format conversion.

With the newspaper's own computing capacity, each page took 30 minutes.

The time-sensitivity of news did not allow this.

Solution: use computing resources from Amazon EC2.

Page 7

The driving forces behind cloud computing

Page 8

Why is it called "cloud" computing?

The cloud: the Internet? Transparency? Or simply "lost in the clouds"?

Page 9

What is cloud computing?

Page 10

What is cloud computing?

Page 11

Cloud computing system architecture

Page 12

Characteristics of cloud computing

Page 13

Classification of cloud computing

By service type

Page 14

Classification of cloud computing

Classification by service type

Category | Type of service | Flexibility of use | Ease of use
IaaS | Near-raw computing and storage capacity | High | Hard
PaaS | Hosted environment for applications | Medium | Medium
SaaS | Specific functionality | Low | Easy

Page 15

Classification of cloud computing

Examples of different types of clouds

Page 16

Classification of cloud computing

By service mode

Page 17

Basic concepts of cloud computing

Cloud computing compared with other computing paradigms:
Cloud computing vs. parallel computing

Cloud computing vs. grid computing

Cloud computing vs. utility computing

Advantages of cloud computing and the changes it brings

Page 18

Cloud computing vs. parallel computing

Parallel computing (high-performance computing, supercomputing): a collection of homogeneous processing units that communicate and cooperate in order to solve large-scale computational problems faster.

Page 19

Page 20

Cloud computing vs. grid computing

Grid computing (distributed computing): connects idle servers, storage systems, and networks scattered across a network into one integrated system, providing users with powerful computing and storage capacity for specific tasks.

Page 21

Page 22

Cloud computing vs. utility computing

Utility computing: IT resources are provided on demand according to user requirements and billed according to actual usage.

Page 23

Cloud computing vs. other types of computing

Page 24

Basic concepts of cloud computing

Cloud computing compared with other computing paradigms

Advantages of cloud computing and the changes it brings:
Advantages of cloud computing

Changes brought by cloud computing

Driving forces behind cloud computing

Page 25

Advantages of cloud computing

Optimizes the industrial layout. Example: the distribution of Google's data centers.

Page 26

Advantages of cloud computing

Promotes specialization of labor. Example: a small or mid-sized company's data center vs. a large data center run by a specialized provider.

Data center attribute | Small/mid-sized data center | Large data center
Number of servers | <2,000 | >20,000
Servers managed per administrator | <500 | >500
PUE | 2.0-2.5 | 1.0-1.5
Server power supply | AC | DC
Electricity price | High | Low
Cooling | Air cooling | Water + air cooling
Cost per unit of computing power | High | Low

Page 27

Advantages of cloud computing

Improves resource utilization. Example: a start-up outsources its IT operations to a specialized cloud computing provider for management.

Page 28

Advantages of cloud computing

Reduces up-front investment. Example: an enterprise uses computing resources and services in the cloud without purchasing hardware or licenses.

Page 29

Advantages of cloud computing

Lowers management overhead: application management becomes dynamic, efficient, and automated.

Page 30

Changes brought by cloud computing

Opportunities and challenges; the roles in the cloud computing industry structure.

Page 31

Layers of the cloud architecture

Public cloud (public services delivered over the Internet)

Hybrid cloud (public and private services delivered over the Internet and an intranet)

Private cloud (private services delivered over an intranet)

Application layer: Software as a Service (SaaS)

Platform layer: Platform as a Service (PaaS)

Infrastructure layer: Infrastructure as a Service (IaaS)

Page 32

Service layers of the cloud architecture

Infrastructure as a Service: provides virtualized computing, storage, and network resources. Example: Amazon EC2.

Platform as a Service: lets developers make full use of open resources to build custom applications. Example: Google App Engine.

Software as a Service: software or applications offered to users on a rental basis. Examples: Salesforce.com, Google Gmail and Docs.

Page 33

Basic functions of IaaS

Page 34

Example: Amazon EC2

Built on Xen virtualization; computing resources are dynamically provided to users in the form of Xen virtual machines.

Users are billed by the amount of resources used and the length of time they are used.

http://aws.amazon.com/ec2/

Page 35

PaaS

Development and testing environment: application model, API code libraries, development and test tools

Runtime environment: validation, configuration, deployment, activation

Operations environment: upgrading, monitoring, retirement, billing

Page 36

Example: Google App Engine

Run your own web applications on Google's infrastructure.

Provides services such as URL fetch, mail, memcache, image manipulation, and scheduled tasks.

Currently supports Java and Python.

Page 37

SaaS

Accessible through a browser, with open APIs.

Pay according to actual usage.

Strong ability to integrate with other cloud applications.

Page 38

SaaS categories

Standard applications: document processing, e-mail, calendaring, etc.; the providers are usually large, well-funded IT giants.

Customer applications: customer relationship management (CRM) and enterprise resource planning (ERP) systems; the providers are smaller specialized companies.

Niche applications: the subway timetable service Mutiny, the option-trading service The Option Lab; the providers are mostly small development teams.

Page 39

Example: Google Docs & Docs for Facebook

• Online document editing
• Multi-user collaborative editing

Page 40

Web QQ: one-stop web services

Page 41

Salesforce.com: a typical example of a customer application

Uses a multi-tenant architecture.

All users and applications share a single instance, while different customers' requirements can still be met on demand.

Page 42

An Ecosystem for Cloud Computing

Page 43

Problem

Batch (offline) processing of huge data sets on commodity hardware is not enough for real-time applications

Strong desire for linear scalability

Need infrastructure to handle all the mechanics and allow developers to focus on the processing logic/algorithms

Page 44

Explosive Data! - Storage

New York Stock Exchange: 1 TB of data per day

Facebook: 100 billion photos, 1 PB (1000 TB)

Internet Archive: 2 PB of data, growing by 20 TB per month

Can't put the data on a SINGLE node

Strong need for distributed file systems

Page 45

Java/Python/C interfaces

Page 46

Page 47

Commercial Hardware

A typical two-tier architecture:
- Nodes are ordinary commodity PCs
- 30-40 nodes per rack
- 3-4 Gbps bandwidth from the top tier to each rack
- 1 Gbps bandwidth from the rack to each node

Page 48

Hadoop History

Dec 2004 - The Google GFS paper is published
July 2005 - Nutch uses MapReduce
Feb 2006 - Becomes a Lucene sub-project
Apr 2007 - Yahoo! builds a 1,000-node cluster
Jan 2008 - Becomes an Apache top-level project
Jul 2008 - A 4,000-node test cluster is built
Sept 2008 - Hive becomes a Hadoop sub-project
......

Page 49

Who is Using Hadoop?

Page 50

Example: Facebook's Hadoop clusters

Production cluster
4,800 cores, 600 machines, 16 GB per machine (April 2009)
8,000 cores, 1,000 machines, 32 GB per machine (July 2009)
Each machine has four 1 TB SATA disks
Two-tier network topology, 40 machines per rack
Total cluster size is 2 PB and will keep growing

Test cluster
• 800 cores, 16 GB each

Page 51

A Distributed File System

Page 52

Single-Node Architecture

[Figure: a single machine with CPU, memory, and disk, running machine learning, statistics, and "classical" data mining workloads]

Page 53

Commodity Clusters

Web data sets can be very large: tens to hundreds of TB

Cannot mine them on a single server

Standard architecture emerging:
Cluster of commodity Linux nodes
Gigabit Ethernet interconnect

How to organize computations on this architecture?

Mask issues such as hardware failure

Page 54

Cluster Architecture

[Figure: a two-level cluster network; each node has its own CPU, memory, and disk, nodes in a rack share a rack switch, and rack switches connect through backbone switches]

Each rack contains 16-64 nodes
1 Gbps between any pair of nodes in a rack
2-10 Gbps backbone between racks

Page 55

Stable storage

First-order problem: if nodes can fail, how can we store data persistently?
Answer: a distributed file system

Provides a global file namespace
Examples: Google GFS; Hadoop HDFS; Kosmix KFS

Typical usage pattern:
Huge files (100s of GB to TB)
Data is rarely updated in place
Reads and appends are common

Page 56

Page 57

Page 58

Namenode and Datanodes

Master/slave architecture

One Namenode: a master server that manages the file system namespace and regulates access to files by clients.

Many DataNodes, usually one per node in the cluster:
manage the storage attached to the node;
serve read and write requests;
perform block creation, deletion, and replication upon instruction from the Namenode.

HDFS exposes a file system namespace and allows user data to be stored in files.
A file is split into one or more blocks, and the set of blocks is stored in DataNodes.

Page 59

Namespace

Hierarchical file system with directories and files: create, remove, move, rename, etc.

The Namenode maintains the file system; any metadata change to the file system is recorded by the Namenode.

An application can specify the number of replicas a file needs: the replication factor of the file. This information is stored in the Namenode.

Page 60

Data Replication

Very large files are stored across machines in a large cluster.
Each file is a sequence of blocks of the same size.
Blocks are replicated 2-3 times.
Block size and replication factor are configurable per file.
The Namenode receives a Heartbeat and a BlockReport from each DataNode in the cluster.
A BlockReport lists all the blocks on a DataNode.
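
To make the per-file configurability concrete, here is a minimal sketch (not from the slides) that sets a file's replication factor and block size through the standard HDFS Java API; the path and the chosen values are illustrative assumptions.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class ReplicationExample {
        public static void main(String[] args) throws Exception {
            // Picks up core-site.xml / hdfs-site.xml from the classpath.
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(conf);
            Path file = new Path("/user/demo/data.txt");   // hypothetical path

            // create(path, overwrite, bufferSize, replication, blockSize):
            // this file gets 3 replicas and a 128 MB block size, regardless of defaults.
            FSDataOutputStream out =
                    fs.create(file, true, 4096, (short) 3, 128L * 1024 * 1024);
            out.writeBytes("hello HDFS\n");
            out.close();

            // The replication factor of an existing file can be changed later as well.
            fs.setReplication(file, (short) 2);
            fs.close();
        }
    }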

Page 61

Replica Placement

Rack-aware placement. Goal: improve reliability, availability, and network bandwidth utilization. (A research topic.)

The Namenode determines the rack id of each DataNode.
Replicas are placed: one on a node in the local rack, one on a different node in the local rack, and one on a node in a different rack.
One third of the replicas are on one node, two thirds on one rack, and the remaining third distributed evenly across the remaining racks.

Page 62

HDFS: Data Node Distance

Page 63

Replication Pipelining

When the client receives a response from the Namenode, it flushes its block in small pieces (4 KB) to the first replica, which in turn copies them to the next replica, and so on. Data is thus pipelined from one Datanode to the next.

Page 64

Replica Selection

Replica selection for a READ operation: HDFS tries to minimize bandwidth consumption and latency.
If there is a replica on the reader's node, that replica is preferred.
An HDFS cluster may span multiple data centers: a replica in the local data center is preferred over a remote one.

Page 65

Datanode

A Datanode stores data in files in its local file system.
The Datanode has no knowledge of the HDFS file system; it stores each block of HDFS data in a separate local file.
The Datanode does not create all files in the same directory; it uses heuristics to determine the optimal number of files per directory and creates directories appropriately. (A research issue?)
When the file system starts up, the Datanode generates a list of all HDFS blocks and sends this report to the Namenode: the Blockreport.

Page 66

HDFS: File Read

Page 67

HDFS: File Write

Page 68

Communication Protocol

All HDFS protocols are layered on top of TCP/IP.
A client establishes a connection to a configurable TCP port on the Namenode machine and speaks the ClientProtocol with the Namenode.
Datanodes talk to the Namenode using the DatanodeProtocol.
An RPC abstraction wraps both the ClientProtocol and the DatanodeProtocol.
The Namenode is purely a server and never initiates a request; it only responds to RPC requests issued by DataNodes or clients.

Page 69

DataNode Failure and Heartbeat

A Datanode may lose connectivity with the Namenode.
The Namenode detects this condition by the absence of a Heartbeat message.
The Namenode marks Datanodes without a recent Heartbeat as failed and does not send any I/O requests to them.
Any data registered to a failed Datanode is no longer available to HDFS.

Page 70

Cluster Rebalancing

The HDFS architecture is compatible with data rebalancing schemes.
A scheme might move data from one Datanode to another if the free space on a Datanode falls below a certain threshold.
In the event of a sudden high demand for a particular file, a scheme might dynamically create additional replicas and rebalance other data in the cluster.
These types of data rebalancing are not yet implemented: a research issue.

Page 71

APIs

HDFS provides a Java API for applications to use.
Python access is also used in many applications.
A C-language wrapper for the Java API is also available.
An HTTP browser can be used to browse the files of an HDFS instance.
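
As a concrete illustration of the Java API mentioned above, here is a minimal sketch of listing a directory and reading a file from HDFS; the directory and file names are made up for the example, and error handling is kept to a minimum.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HdfsReadExample {
        public static void main(String[] args) throws Exception {
            FileSystem fs = FileSystem.get(new Configuration());

            // Walk part of the namespace: list a (hypothetical) directory.
            for (FileStatus status : fs.listStatus(new Path("/user/demo"))) {
                System.out.println(status.getPath() + "\t" + status.getLen() + " bytes");
            }

            // Open a file and stream its lines. The Namenode supplies block locations;
            // the actual bytes are read directly from the Datanodes.
            BufferedReader reader = new BufferedReader(
                    new InputStreamReader(fs.open(new Path("/user/demo/words.txt"))));
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line);
            }
            reader.close();
            fs.close();
        }
    }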

Page 72

FS Shell, Admin and Browser Interface

HDFS organizes its data in files and directories.
It provides a command-line interface called the FS shell that lets the user interact with the data in HDFS.
The syntax of the commands is similar to bash and csh.
Example: to create a directory /foodir:

/bin/hadoop dfs -mkdir /foodir

There is also a DFSAdmin interface available.
A browser interface is also available for viewing the namespace.

Page 73

A Distributed Computation Framework for Large Data Sets

Page 74

What is Map/Reduce?

A programming model.

Decompose a processing job into Map and Reduce stages.

The developer needs to provide code for the Map and Reduce functions and configure the job; Hadoop handles the rest.

Page 75

MapReduce Model

Page 76

Architecture Overview

[Figure: the user submits a job to the JobTracker on the master node; TaskTrackers on slave nodes 1..N run the worker tasks]

Page 77

Inside Hadoop

Page 78

Warm up: Word Count

We have a large file of words, one word per line.
Count the number of appearances of each distinct word.

Sample application: analyze web server logs to find popular URLs.

Page 79

Word Count (2)

Case 1: the entire file fits in memory
Case 2: the file is too large for memory, but all <word, count> pairs fit in memory
Case 3: the file is on disk, and there are too many distinct words to fit in memory

sort datafile | uniq -c

Page 80

Word Count (3)

To make it slightly harder, suppose we have a large corpus of documents.
Count the number of times each distinct word occurs in the corpus:

words(docs/*) | sort | uniq -c

where words takes a file and outputs the words in it, one per line.

The above captures the essence of MapReduce; the great thing is that it is naturally parallelizable.

Page 81

MapReduce

Input: a set of key/value pairs.
The user supplies two functions:

map(k, v) -> list(k1, v1)
reduce(k1, list(v1)) -> v2

(k1, v1) is an intermediate key/value pair.
The output is the set of (k1, v2) pairs.

Page 82

What is Map?

Map each data entry into a <key, value> pair.

Examples:
Map each log file entry into <URL, 1>
Map each day's stock trading record into <STOCK, price>

Page 83

What is the Shuffle/Merge phase?

Hadoop merges (shuffles) the output of the Map stage into <key, value1, value2, value3, ...>.

Examples:
<URL, 1, 1, 1, 1, 1, 1>
<STOCK, price on day 1, price on day 2, ...>

Page 84

What is Reduce?

Reduce the entries produced by Hadoop's merge step into a single <key, value> pair.

Examples:
Reduce <URL, 1, 1, 1> to <URL, 3> (summing the counts)
Reduce <STOCK, 3, 2, 10> to <STOCK, 10> (here, taking the maximum)

Page 85

Pseudo-Code: Word Count

map(key, value):
    // key: document name; value: text of the document
    for each word w in value:
        emit(w, 1)

reduce(key, values):
    // key: a word; values: an iterator over counts
    result = 0
    for each count v in values:
        result += v
    emit(key, result)
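
For readers who want to see the same logic against the real Hadoop library, the following is a hedged Java sketch of the word-count mapper and reducer using the org.apache.hadoop.mapreduce API; the class names TokenizerMapper and IntSumReducer are chosen here for illustration, and the job-driver code is sketched later, on the slide about the number of map and reduce tasks.

    import java.io.IOException;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;

    public class WordCount {

        // map: key = byte offset of the line, value = the line of text.
        public static class TokenizerMapper
                extends Mapper<LongWritable, Text, Text, IntWritable> {
            private static final IntWritable ONE = new IntWritable(1);
            private final Text word = new Text();

            @Override
            protected void map(LongWritable key, Text value, Context context)
                    throws IOException, InterruptedException {
                for (String token : value.toString().split("\\s+")) {
                    if (!token.isEmpty()) {
                        word.set(token);
                        context.write(word, ONE);          // emit(w, 1)
                    }
                }
            }
        }

        // reduce: sum all the counts emitted for one word.
        public static class IntSumReducer
                extends Reducer<Text, IntWritable, Text, IntWritable> {
            @Override
            protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                    throws IOException, InterruptedException {
                int sum = 0;
                for (IntWritable v : values) {
                    sum += v.get();
                }
                context.write(key, new IntWritable(sum));  // emit(key, result)
            }
        }
    }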

Page 86

Word Count

map(key=url, val=contents):
    For each word w in contents, emit (w, "1")

reduce(key=word, values=uniq_counts):
    Sum all the "1"s in the values list
    Emit the result (word, sum)

Input:
see bob run
see spot throw

After map:
see 1
bob 1
run 1
see 1
spot 1
throw 1

After reduce:
bob 1
run 1
see 2
spot 1
throw 1

Page 87

Widely Applicable

MapReduce programs in the Google source tree. Example uses:

distributed grep
distributed sort
web link-graph reversal
term-vector per host
web access log stats
inverted index construction
document clustering
machine learning
statistical machine translation
...

Page 88

Implementation Overview

• 100s/1000s of 2-CPU x86 machines, 2-4 GB of memory
• Limited bisection bandwidth
• Storage is on local IDE disks
• GFS: a distributed file system manages the data (SOSP'03)
• Job scheduling system: jobs are made up of tasks; the scheduler assigns tasks to machines

The implementation is a C++ library linked into user programs.

Page 89

Distributed Execution Overview

[Figure: the user program forks a master and workers; the master assigns map tasks over input splits 0-2 and reduce tasks to workers; map workers read their splits and write intermediate data to local disk; reduce workers perform remote reads and sort, then write output files 0 and 1]

Page 90

Data Flow

The input and the final output are stored on HDFS.
The scheduler tries to schedule map tasks "close" to the physical storage location of their input data.
Intermediate results are stored on the local file systems of the map and reduce workers.
The output is often the input to another MapReduce task.

Page 91

Coordination

Master data structures:
Task status: (idle, in-progress, completed)
Idle tasks get scheduled as workers become available.
When a map task completes, it sends the master the locations and sizes of its R intermediate files, one for each reducer; the master pushes this information to the reducers.
The master pings workers periodically to detect failures.

Page 92

Failures

Map worker failure:
Map tasks completed or in progress at that worker are reset to idle.
Reduce workers are notified when a task is rescheduled on another worker.

Reduce worker failure:
Only in-progress tasks are reset to idle.

Master failure:
The MapReduce job is aborted and the client is notified.

Page 93

Execution

Page 94

Parallel Execution 

Page 95

How Many Map and Reduce Tasks?

M map tasks, R reduce tasks.
Rule of thumb: make M and R much larger than the number of nodes in the cluster.
One DFS chunk per map task is common.
This improves dynamic load balancing and speeds recovery from worker failure.
Usually R is smaller than M, because the output is spread across R files.
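
As a concrete illustration, a hedged driver sketch for the word-count classes sketched earlier: R is set explicitly with setNumReduceTasks, while M falls out of the input splits (by default roughly one map task per HDFS block of input). The input/output paths and the value 8 are placeholders.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class WordCountDriver {
        public static void main(String[] args) throws Exception {
            Job job = Job.getInstance(new Configuration(), "word count");
            job.setJarByClass(WordCountDriver.class);
            job.setMapperClass(WordCount.TokenizerMapper.class);
            job.setReducerClass(WordCount.IntSumReducer.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(IntWritable.class);

            // R: the number of reduce tasks (and therefore of output files) is set here.
            job.setNumReduceTasks(8);

            // M is not set directly: it follows from the input splits, roughly one
            // map task per HDFS block of the input directory.
            FileInputFormat.addInputPath(job, new Path("/user/demo/docs"));
            FileOutputFormat.setOutputPath(job, new Path("/user/demo/wordcount-out"));

            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }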

Page 96

Combiners

Often a map task will produce many pairs of the form (k, v1), (k, v2), ... for the same key k; e.g., popular words in Word Count.

Network time can be saved by pre-aggregating at the mapper:

combine(k1, list(v1)) -> v2
same as the reduce function
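
In Hadoop, this pre-aggregation is enabled by registering a combiner class on the job; for word count the reducer itself can serve as the combiner because summation is associative and commutative. Assuming the WordCount classes and driver sketched earlier, it is one extra line in the driver:

    // In the driver sketched above, reuse the reducer as the combiner:
    job.setCombinerClass(WordCount.IntSumReducer.class);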

Page 97

Partition Function

Inputs to map tasks are created by contiguous splits of the input file.
For reduce, we need to ensure that records with the same intermediate key end up at the same worker.
The system can use a default partition function, e.g., hash(key) mod R.
Sometimes it is useful to override it: e.g., hash(hostname(URL)) mod R ensures that URLs from the same host end up in the same output file.
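
A hedged sketch of such an override in Hadoop: a custom Partitioner that hashes the hostname of the URL key, so all URLs from one host land on the same reducer. The key/value types (Text URL keys with IntWritable counts) are assumptions for illustration.

    import java.net.URI;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Partitioner;

    public class HostPartitioner extends Partitioner<Text, IntWritable> {
        @Override
        public int getPartition(Text url, IntWritable value, int numReduceTasks) {
            String host;
            try {
                host = URI.create(url.toString()).getHost();   // hostname(URL)
            } catch (IllegalArgumentException e) {
                host = null;
            }
            if (host == null) {
                host = url.toString();                         // fall back to the raw key
            }
            // hash(hostname(URL)) mod R, kept non-negative
            return (host.hashCode() & Integer.MAX_VALUE) % numReduceTasks;
        }
    }
    // Registered on the job with: job.setPartitionerClass(HostPartitioner.class);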

Page 98

Execution Summary

How is this distributed?

1. Partition the input key/value pairs into chunks and run map() tasks in parallel.

2. After all map()s are complete, consolidate all emitted values for each unique emitted key.

3. Partition the space of output map keys and run reduce() in parallel.

If a map() or reduce() task fails, re-execute it!

Page 99

Example: Trading Data Processing

Input: Historical stock data
Records are in a CSV (comma-separated values) text file.
Each line: stock_symbol, low_price, high_price
1987-2009 data for all stocks, one record per stock per day.

Output:
Maximum interday delta for each stock
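
The map and reduce functions on the following slides were shown as code screenshots that are not reproduced in this transcript. As a stand-in, here is a hedged Java sketch of what they could look like, assuming the delta for a record is high_price minus low_price and the CSV field order given above; this is an illustration, not the lecturer's original code.

    import java.io.IOException;
    import org.apache.hadoop.io.DoubleWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;

    public class MaxDelta {

        // Emit (stock_symbol, high_price - low_price) for every well-formed CSV record.
        public static class DeltaMapper
                extends Mapper<LongWritable, Text, Text, DoubleWritable> {
            @Override
            protected void map(LongWritable offset, Text line, Context context)
                    throws IOException, InterruptedException {
                String[] fields = line.toString().split(",");
                if (fields.length < 3) {
                    return;                                   // skip malformed lines
                }
                try {
                    double low = Double.parseDouble(fields[1].trim());
                    double high = Double.parseDouble(fields[2].trim());
                    context.write(new Text(fields[0].trim()),
                                  new DoubleWritable(high - low));
                } catch (NumberFormatException e) {
                    // skip header rows or bad numeric fields
                }
            }
        }

        // Keep only the maximum delta seen for each stock symbol.
        public static class MaxReducer
                extends Reducer<Text, DoubleWritable, Text, DoubleWritable> {
            @Override
            protected void reduce(Text symbol, Iterable<DoubleWritable> deltas, Context context)
                    throws IOException, InterruptedException {
                double max = Double.NEGATIVE_INFINITY;
                for (DoubleWritable d : deltas) {
                    max = Math.max(max, d.get());
                }
                context.write(symbol, new DoubleWritable(max));
            }
        }
    }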

Page 100

Map Function: Part I

Page 101

Map Function: Part II

Page 102

Reduce Function

Page 103

Running the Job: Part I

Page 104

Running the Job: Part II

Page 105

A Distributed Storage System

Page 106

What is HBase?

A distributed, column-oriented database on top of HDFS

Modeled after Google's BigTable data store

Random reads/writes on top of the sequential, stream-oriented HDFS

Billions of rows * millions of columns * thousands of versions

Page 107

Where is HBase?

HBase is built on top of HDFS.

HBase files are internally stored in HDFS.

Page 108

Logical View

Row key: "com.cnn.www"

Time Stamp | Column "contents" | Column family "anchor" (referred by/to) | Column "mime"
T9 | | cnnsi.com -> cnn.com/1 |
T8 | | my.look.ca -> cnn.com/2 |
T6 | "<html>.." | | text/html
T5 | "<html>.." | |
T3 | "<html>.." | |

Page 109

Physical View

Column "contents":
Row Key | Time Stamp | Value
com.cnn.www | T6 | "<html>.."
com.cnn.www | T5 | "<html>.."
com.cnn.www | T3 | "<html>.."

Column family "anchor":
Row Key | Time Stamp | Value
com.cnn.www | T9 | cnnsi.com -> cnn.com/1
com.cnn.www | T5 | my.look.ca -> cnn.com/2

Column "mime":
Row Key | Time Stamp | Value
com.cnn.www | T6 | text/html
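
To make the data model concrete, a hedged sketch of writing and reading the example row with the (2013-era) HBase Java client; the table name "webtable" is an assumption for illustration, while the row key, column family, and value come from the example above.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Get;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.util.Bytes;

    public class HBaseExample {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();
            HTable table = new HTable(conf, "webtable");      // hypothetical table name

            // Write: row key "com.cnn.www", column family "anchor", qualifier "cnnsi.com".
            Put put = new Put(Bytes.toBytes("com.cnn.www"));
            put.add(Bytes.toBytes("anchor"), Bytes.toBytes("cnnsi.com"),
                    Bytes.toBytes("cnn.com/1"));
            table.put(put);

            // Random read of the same row key.
            Result result = table.get(new Get(Bytes.toBytes("com.cnn.www")));
            byte[] value = result.getValue(Bytes.toBytes("anchor"),
                                           Bytes.toBytes("cnnsi.com"));
            System.out.println(Bytes.toString(value));

            table.close();
        }
    }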

Page 110

Region Servers

Tables are split into horizontal regions; each region comprises a subset of rows.

HDFS: Namenode, DataNode
MapReduce: JobTracker, TaskTracker
HBase: Master Server, Region Server

Page 111

HBase Architecture

Page 112

HBase vs. RDBMS

HBase tables are similar to RDBMS tables. Differences:
Rows are sorted by row key.
Columns can be added on the fly by the client, as long as the column family they belong to already exists.