Page 1

Hadoop and MapReduce

Shannon Quinn

(with thanks to William Cohen of CMU)

Page 2

Slide credit: J. Leskovec, A. Rajaraman, J. Ullman: Mining of Massive Datasets, http://www.mmds.org

Page 3

MapReduce!

• Sequentially read a lot of data
• Map:
  – Extract something you care about
• Group by key: sort and shuffle
• Reduce:
  – Aggregate, summarize, filter, or transform
• Write the result

Page 4

MapReduce: The Map Step

[Figure: the map step. Input key-value pairs (k, v) are fed to map tasks; each map call emits intermediate key-value pairs (k, v).]

Page 5

MapReduce: The Reduce Step

[Figure: the reduce step. Intermediate key-value pairs are grouped by key into key-value groups (k, <v, v, ...>); each group is passed to a reduce call, which emits output key-value pairs.]

Page 6

More Specifically

• Input: a set of key-value pairs
• Programmer specifies two methods (sketched below in Hadoop's Java API):
  – Map(k, v) → <k', v'>*
    • Takes a key-value pair and outputs a set of key-value pairs
      – E.g., key is the filename, value is a single line in the file
    • There is one Map call for every (k, v) pair
  – Reduce(k', <v'>*) → <k', v''>*
    • All values v' with the same key k' are reduced together and processed in v' order
    • There is one Reduce function call per unique key k'
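A concrete sketch of these two methods for the word-count task introduced a few slides later, written against Hadoop's Java mapreduce API; the class names (TokenizerMapper, IntSumReducer) are our own, not from the slides:

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

// Map(k, v) -> <k', v'>*: called once per input record (here, one line
// of text keyed by its byte offset), emitting (word, 1) for each word.
public class TokenizerMapper
    extends Mapper<LongWritable, Text, Text, IntWritable> {
  private static final IntWritable ONE = new IntWritable(1);
  private final Text word = new Text();

  @Override
  public void map(LongWritable key, Text value, Context context)
      throws IOException, InterruptedException {
    StringTokenizer itr = new StringTokenizer(value.toString());
    while (itr.hasMoreTokens()) {
      word.set(itr.nextToken());
      context.write(word, ONE);
    }
  }
}

// Reduce(k', <v'>*) -> <k', v''>*: called once per unique key with all
// of that key's values; here it sums the counts for one word.
class IntSumReducer
    extends Reducer<Text, IntWritable, Text, IntWritable> {
  private final IntWritable result = new IntWritable();

  @Override
  public void reduce(Text key, Iterable<IntWritable> values, Context context)
      throws IOException, InterruptedException {
    int sum = 0;
    for (IntWritable val : values) {
      sum += val.get();
    }
    result.set(sum);
    context.write(key, result);
  }
}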

Page 7

Large-scale Computing

• Large-scale computing for data mining problems on commodity hardware
• Challenges:
  – How do you distribute computation?
  – How can we make it easy to write distributed programs?
  – Machines fail:
    • One server may stay up 3 years (1,000 days)
    • If you have 1,000 servers, expect to lose 1/day
    • People estimated Google had ~1M machines in 2011
      – 1,000 machines fail every day!

Page 8

Idea and Solution

• Issue: Copying data over a network takes time
• Idea:
  – Bring computation close to the data
  – Store files multiple times for reliability
• Map-reduce addresses these problems
  – Google's computational/data manipulation model
  – Elegant way to work with big data
  – Storage infrastructure: a file system
    • Google: GFS. Hadoop: HDFS
  – Programming model: Map-Reduce

Page 9

Storage Infrastructure

• Problem:
  – If nodes fail, how to store data persistently?
• Answer:
  – Distributed file system:
    • Provides a global file namespace
    • Google GFS; Hadoop HDFS
• Typical usage pattern
  – Huge files (100s of GB to TB)
  – Data is rarely updated in place
  – Reads and appends are common

Page 10

Distributed File System

• Chunk servers
  – File is split into contiguous chunks
  – Typically each chunk is 16-64 MB
  – Each chunk is replicated (usually 2x or 3x)
  – Try to keep replicas in different racks
• Master node
  – a.k.a. Name Node in Hadoop's HDFS
  – Stores metadata about where files are stored
  – Might be replicated
• Client library for file access
  – Talks to the master to find chunk servers
  – Connects directly to chunk servers to access data

Page 11

Distributed File System

• Reliable distributed file system
• Data kept in "chunks" spread across machines
• Each chunk replicated on different machines
  – Seamless recovery from disk or machine failure

[Figure: chunks C0, C1, C2, C3, C5 and D0, D1 replicated across chunk servers 1 through N. Bring computation directly to the data! Chunk servers also serve as compute servers.]

Page 12

Programming Model: MapReduce

Warm-up task:
• We have a huge text document
• Count the number of times each distinct word appears in the file
• Sample application:
  – Analyze web server logs to find popular URLs

Page 13

Task: Word Count

Case 1:
  – File too large for memory, but all <word, count> pairs fit in memory

Case 2:
• Count occurrences of words:
  – words(doc.txt) | sort | uniq -c
  – where words takes a file and outputs the words in it, one per line
• Case 2 captures the essence of MapReduce
  – Great thing is that it is naturally parallelizable (a Hadoop driver sketch follows below)
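A minimal Hadoop driver for this word-count job; a sketch that reuses the TokenizerMapper and IntSumReducer classes from the earlier example and takes the input and output paths from the command line:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {
  public static void main(String[] args) throws Exception {
    // Declare which classes implement Map and Reduce, and what the
    // output key/value types are.
    Job job = Job.getInstance(new Configuration(), "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);

    // Input and output both live on the distributed file system.
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));

    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}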

Page 14

Data Flow

• Input and final output are stored on a distributed file system (FS):
  – Scheduler tries to schedule map tasks "close" to the physical storage location of input data
• Intermediate results are stored on the local FS of Map and Reduce workers
• Output is often input to another MapReduce task

Page 15

Coordination: Master

• Master node takes care of coordination:
  – Task status: (idle, in-progress, completed)
  – Idle tasks get scheduled as workers become available
  – When a map task completes, it sends the master the location and sizes of its R intermediate files, one for each reducer
  – Master pushes this info to reducers
• Master pings workers periodically to detect failures

Page 16

Dealing with Failures

• Map worker failure
  – Map tasks completed or in-progress at the worker are reset to idle
  – Reduce workers are notified when a task is rescheduled on another worker
• Reduce worker failure
  – Only in-progress tasks are reset to idle
  – Reduce task is restarted
• Master failure
  – MapReduce task is aborted and the client is notified

Page 17

Task Granularity & Pipelining

• Fine-granularity tasks: map tasks >> machines
  – Minimizes time for fault recovery
  – Can do pipeline shuffling with map execution
  – Better dynamic load balancing

Page 18

Refinement: Combiners

• Often a Map task will produce many pairs of the form (k, v1), (k, v2), … for the same key k
  – E.g., popular words in the word count example
• Can save network time by pre-aggregating values in the mapper:
  – combine(k, list(v1)) → v2
  – Combiner is usually the same as the reduce function
• Works only if the reduce function is commutative and associative (see the one-line sketch below)
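In Hadoop, when the reduce function has this property (as word count's sum does), enabling the reducer as a combiner is a single extra line in the driver sketched earlier:

// Run the reducer class as a combiner on each mapper's local output,
// pre-aggregating (word, 1) pairs before they are shuffled.
job.setCombinerClass(IntSumReducer.class);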

Page 19

Refinement: Combiners

• Back to our word counting example:
  – Combiner combines the values of all keys of a single mapper (single machine)
  – Much less data needs to be copied and shuffled!

Page 20

Refinement: Partition Function

• Want to control how keys get partitioned
  – Inputs to map tasks are created by contiguous splits of the input file
  – Reduce needs to ensure that records with the same intermediate key end up at the same worker
• System uses a default partition function:
  – hash(key) mod R
• Sometimes useful to override the hash function:
  – E.g., hash(hostname(URL)) mod R ensures URLs from a host end up in the same output file (sketched below)
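That URL example as a custom Hadoop Partitioner might look like the following sketch; the HostPartitioner name and the assumption that keys are URLs held in Text objects are ours:

import java.net.URI;

import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Partitioner;

// Implements hash(hostname(URL)) mod R: every URL from the same host
// is routed to the same reducer (and hence the same output file).
public class HostPartitioner extends Partitioner<Text, Text> {
  @Override
  public int getPartition(Text key, Text value, int numReduceTasks) {
    String host = null;
    try {
      host = URI.create(key.toString()).getHost();
    } catch (IllegalArgumentException e) {
      // Malformed URL: fall through and hash the whole key instead.
    }
    if (host == null) {
      host = key.toString();
    }
    // Mask the sign bit so the result is non-negative before taking mod R.
    return (host.hashCode() & Integer.MAX_VALUE) % numReduceTasks;
  }
}

It is installed in the driver with job.setPartitionerClass(HostPartitioner.class).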

Page 21

Cost Measures for Algorithms

• In MapReduce we quantify the cost of an algorithm using:

  1. Communication cost = total I/O of all processes
  2. Elapsed communication cost = max of I/O along any path
  3. (Elapsed) computation cost: analogous, but counts only the running time of processes

Note that here big-O notation is not the most useful (adding more machines is always an option).

Page 22

Example: Cost Measures

• For a map-reduce algorithm:
  – Communication cost = input file size + 2 × (sum of the sizes of all files passed from Map processes to Reduce processes) + the sum of the output sizes of the Reduce processes
  – Elapsed communication cost is the sum of the largest input + output for any map process, plus the same for any reduce process
  – For instance (hypothetical numbers): a job reading a 1 TB input whose mappers pass 100 GB to the reducers, which emit 1 GB of output, has communication cost 1 TB + 2 × 100 GB + 1 GB ≈ 1.2 TB

Page 23

What Cost Measures Mean

• Either the I/O (communication) or the processing (computation) cost dominates
  – Ignore one or the other
• Total cost tells what you pay in rent to your friendly neighborhood cloud provider
• Elapsed cost is wall-clock time using parallelism

Page 24

Performance

• IMPORTANT
  – You may not have room for all reduce values in memory
  – In fact, you should PLAN not to have memory for all values
  – Remember, small machines are much cheaper
    • You have a limited budget

Page 25

Implementations

• Google
  – Not available outside Google
• Hadoop
  – An open-source implementation in Java
  – Uses HDFS for stable storage
  – Download: http://hadoop.apache.org/
• Spark
  – A competitor to MapReduce
  – Uses several distributed filesystems
  – Download: http://spark.apache.org/
• Others

Page 26

Reading

• Jeffrey Dean and Sanjay Ghemawat: MapReduce: Simplified Data Processing on Large Clusters
  – http://labs.google.com/papers/mapreduce.html
• Sanjay Ghemawat, Howard Gobioff, and Shun-Tak Leung: The Google File System
  – http://labs.google.com/papers/gfs.html

Page 27

Further Reading

• Programming model inspired by functional language primitives
• Partitioning/shuffling similar to many large-scale sorting systems
  – NOW-Sort ['97]
• Re-execution for fault tolerance
  – BAD-FS ['04] and TACC ['97]
• Locality optimization has parallels with Active Disks/Diamond work
  – Active Disks ['01], Diamond ['04]
• Backup tasks similar to Eager Scheduling in the Charlotte system
  – Charlotte ['96]
• Dynamic load balancing solves a similar problem as River's distributed queues
  – River ['99]

Page 28: [figure only]

Page 29

Git

• Basics
  – http://git-scm.com/docs/gittutorial
• [Advanced] Cheat sheet
  – https://github.com/tiimgreen/github-cheat-sheet

Page 30

Some useful Hadoop-isms

• Testing and setup
  – Excellent Hadoop 2.x setup and testing tutorial: http://www.highlyscalablesystems.com/3597/hadoop-installation-tutorial-hadoop-2-x/
• You won't need to worry about setup!
  – On GACRC, I'm setting up the VMs
  – On AWS, Amazon handles it
• But the testing tips near the bottom are excellent
  – Putting files into HDFS
  – Reading existing content/results in HDFS
  – Submitting Hadoop jobs on the command line
• If you use the AWS GUI, you won't need to worry about any of this

Page 31

Some useful Hadoop-isms

• Counters
  – Initialize in main() / run()
  – Increment in mapper / reducer
  – Read in main() / run()
  – Example: http://diveintodata.org/2011/03/15/an-example-of-hadoop-mapreduce-counter/
  – A sketch of the full lifecycle follows below
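A minimal sketch of that lifecycle; the MyCounters enum, the tab-separated record format, and the class names are hypothetical:

import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Counter;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;

public class CounterExample {
  // Declared once; shared by the driver and the tasks.
  public enum MyCounters { MALFORMED_RECORDS }

  public static class ParsingMapper
      extends Mapper<LongWritable, Text, Text, LongWritable> {
    @Override
    public void map(LongWritable key, Text value, Context context)
        throws IOException, InterruptedException {
      String[] fields = value.toString().split("\t");
      if (fields.length < 2) {
        // Increment in the mapper; Hadoop aggregates across all tasks.
        context.getCounter(MyCounters.MALFORMED_RECORDS).increment(1);
        return;
      }
      context.write(new Text(fields[0]), new LongWritable(1));
    }
  }

  // Back in main()/run(), after job.waitForCompletion(true) returns:
  static void printCounters(Job job) throws Exception {
    Counter c = job.getCounters().findCounter(MyCounters.MALFORMED_RECORDS);
    System.out.println("Malformed records: " + c.getValue());
  }
}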

Page 32

Some useful Hadoop-isms

• Joins
  – Join values together that have the same key
  – Map-side
    • Faster and more efficient
    • Harder to implement: requires a custom Partitioner and Comparator
    • http://codingjunkie.net/mapside-joins/
  – Reduce-side
    • Easy to implement: the shuffle step does the work for you!
    • Less efficient, as data is pushed over the network
    • http://codingjunkie.net/mapreduce-reduce-joins/
• MultipleInputs
  – Specify a specific mapper class for a specific input path (see the sketch below)
  – https://hadoop.apache.org/docs/current/api/org/apache/hadoop/mapreduce/lib/input/MultipleInputs.html
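For example, the driver of a reduce-side join over two differently formatted datasets might use MultipleInputs like this sketch; the paths and the UserMapper / OrderMapper class names are hypothetical:

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.lib.input.MultipleInputs;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;

// Inside run(), given a configured Job: each input path gets its own
// mapper, but both mappers must emit the same intermediate key/value
// types so the shuffle can bring matching keys together in one reducer.
MultipleInputs.addInputPath(job, new Path("users/"),
    TextInputFormat.class, UserMapper.class);
MultipleInputs.addInputPath(job, new Path("orders/"),
    TextInputFormat.class, OrderMapper.class);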

Page 33

Some useful Hadoop-isms

• setup()
  – Optional method override in a Mapper / Reducer subclass
  – Executed before map() / reduce()
  – Useful for initializing variables (see the sketch below)
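A minimal sketch; the my.threshold parameter and the class name are hypothetical:

import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class ThresholdMapper
    extends Mapper<LongWritable, Text, Text, IntWritable> {
  private int threshold;

  // Runs once per task, before the first map() call: a good place to
  // read job parameters or load small side data.
  @Override
  protected void setup(Context context)
      throws IOException, InterruptedException {
    threshold = context.getConfiguration().getInt("my.threshold", 10);
  }

  @Override
  public void map(LongWritable key, Text value, Context context)
      throws IOException, InterruptedException {
    // Emit only lines at least `threshold` characters long.
    if (value.getLength() >= threshold) {
      context.write(value, new IntWritable(1));
    }
  }
}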

Page 35

A little MapReduce: NB Variables

1. |V|       : size of the vocabulary (unique words)
2. |L|       : size of the label space (unique labels)
3. Y=*       : number of documents
4. Y=y       : number of documents with label y
5. Y=y, W=*  : number of words in a document with label y
6. Y=y, W=w  : number of times w appears in a document with label y

(A mapper sketch for emitting counts 3-6 follows below.)
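One way these counts map onto a single MapReduce pass; a sketch assuming each input line is a labeled document in the hypothetical form label<TAB>text, with a sum reducer (like IntSumReducer above) aggregating the emitted counts. |V| and |L| then fall out as the numbers of distinct W=w and Y=y keys:

import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class NBCountMapper
    extends Mapper<LongWritable, Text, Text, IntWritable> {
  private static final IntWritable ONE = new IntWritable(1);

  @Override
  public void map(LongWritable key, Text value, Context context)
      throws IOException, InterruptedException {
    String[] parts = value.toString().split("\t", 2);
    if (parts.length < 2) {
      return;  // skip malformed records
    }
    String y = parts[0];
    String[] words = parts[1].split("\\s+");

    context.write(new Text("Y=*"), ONE);              // 3. # documents
    context.write(new Text("Y=" + y), ONE);           // 4. # documents with label y
    context.write(new Text("Y=" + y + ",W=*"),
        new IntWritable(words.length));               // 5. # words under label y
    for (String w : words) {
      context.write(new Text("Y=" + y + ",W=" + w), ONE);  // 6. count of (y, w)
    }
  }
}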

Page 36

Introduction to MapReduce

• 3 main phases
  – Map (send each input record to a key)
  – Sort (put all of one key in the same place)
    • Handled behind the scenes in Hadoop
  – Reduce (operate on each key and its set of values)
• Terms come from functional programming:
  – map(lambda x: x.upper(), ["shannon", "p", "quinn"])
    → ['SHANNON', 'P', 'QUINN']
  – reduce(lambda x, y: x + "-" + y, ["shannon", "p", "quinn"])
    → "shannon-p-quinn"

Page 37

MapReduce in slow motion

• Canonical example: Word Count
• Example corpus: (figures on the following pages)

Pages 38-40: [figures only]

Page 41

Others:
• KeyValueInputFormat
• SequenceFileInputFormat
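Both are alternatives to the default TextInputFormat and are selected in the driver. A sketch, assuming Hadoop's current mapreduce API (where the key-value variant is named KeyValueTextInputFormat):

import org.apache.hadoop.mapreduce.lib.input.KeyValueTextInputFormat;
import org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat;

// Inside run(), given a configured Job:
// treat each line as "key<TAB>value" instead of (byte offset, line):
job.setInputFormatClass(KeyValueTextInputFormat.class);

// Or read binary key-value records, e.g. the output of a previous job:
// job.setInputFormatClass(SequenceFileInputFormat.class);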

Pages 42-43: [figures only]

Page 44

Is any part of this wasteful?

• Remember: moving data around and writing to/reading from disk are very expensive operations
• No reducer can start until:
  – all mappers are done
  – data in its partition has been sorted

Page 45

Common pitfalls

• You have no control over the order in which reduces are performed
• You have "no" control over the order in which you encounter reduce values
  – Sort of (more later)
• The only ordering you should assume is that Reducers always start after Mappers

Page 46

Common pitfalls

• You should assume your Maps and Reduces will be taking place on different machines with different memory spaces
• Don't make a static variable and assume that other processes can read it
  – They can't.
  – It may appear that they can when run locally, but they can't.
  – No really, don't do this. (The sketch below shows a correct way to share small read-only values.)
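Instead, pass small read-only values through the job Configuration, which Hadoop ships to every task; a sketch (the my.stopword parameter is hypothetical):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

// Driver side: store the value in the configuration before submitting.
Configuration conf = new Configuration();
conf.set("my.stopword", "the");   // hypothetical parameter
Job job = Job.getInstance(conf, "example job");

// Task side, inside a Mapper/Reducer subclass: read it back in setup().
//   @Override
//   protected void setup(Context context) {
//     String stopword = context.getConfiguration().get("my.stopword", "");
//   }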

Page 47

Common pitfalls

• Do not communicate between mappers or between reducers
  – Overhead is high
  – You don't know which mappers/reducers are actually running at any given point
  – There's no easy way to find out what machine they're running on
    • Because you shouldn't be looking for them anyway

Page 48

Assignment 2

• Naïve Bayes on Hadoop
• Document classification
• Due Friday, Sept 18 by 11:59:59 pm