
Transcript of Scalable Computing In Education Workshop

Page 1: Scalable Computing In Education Workshop

2008 NSF Data-Intensive Scalable Computing in Education Workshop

Module II: Hadoop-Based Course Components

This presentation includes content © Google, Inc. Redistributed under the Creative Commons Attribution 2.5 license.

All other contents: © Spinnaker Labs, Inc.

Page 2: Scalable Computing In Education Workshop

Overview

• University of Washington Curriculum
– Teaching Methods
– Reflections
– Student Background
– Course Staff Requirements

• MapReduce Algorithms

Page 3: Scalable Computing In Education Workshop

UW: Course Summary

• Course title: “Problem Solving on Large Scale Clusters”

• Primary purpose: developing large-scale problem solving skills

• Format: 6 weeks of lectures + labs, 4-week project

Page 4: Scalable Computing In Education Workshop

UW: Course Goals

• Think creatively about large-scale problems in a parallel fashion; design parallel solutions

• Manage large data sets under memory, bandwidth limitations

• Develop a foundation in parallel algorithms for large-scale data

• Identify and understand engineering trade-offs in real systems

Page 5: Scalable Computing In Education Workshop

Lectures

• 2 hours, once per week
• Half formal lecture, half discussion
• Mostly covered systems & background
• Included group activities for reinforcement

Page 6: Scalable Computing In Education Workshop

Classroom Activities

• Worksheets included pseudo-code programming, working through examples
– Performed in groups of 2–3

• Small-group discussions about engineering and systems design
– Groups of ~10
– Course staff facilitated, but mostly open-ended

Page 7: Scalable Computing In Education Workshop

Readings

• No textbook

• One academic paper per week
– E.g., “Simplified Data Processing on Large Clusters”
– Short homework covered comprehension

• Formed basis for discussion

Page 8: Scalable Computing In Education Workshop

Lecture Schedule

• Introduction to Distributed Computing
• MapReduce: Theory and Implementation
• Networks and Distributed Reliability
• Real-World Distributed Systems
• Distributed File Systems
• Other Distributed Systems

Page 9: Scalable Computing In Education Workshop

Intro to Distributed Computing

• What is distributed computing?
• Flynn’s Taxonomy
• Brief history of distributed computing
• Some background on synchronization and memory sharing

Page 10: Scalable Computing In Education Workshop

MapReduce

• Brief refresher on functional programming

• MapReduce slides
– More detailed version of module I

• Discussion on MapReduce

Page 11: Scalable Computing In Education Workshop

Networking and Reliability

• Crash course in networking

• Distributed systems reliability
– What is reliability?
– How do distributed systems fail?
– ACID, other metrics

• Discussion: Does MapReduce provide reliability?

Page 12: Scalable Computing In Education Workshop

Real Systems

• Design and implementation of Nutch
• Tech talk from Googler on Google Maps

Page 13: Scalable Computing In Education Workshop

Distributed File Systems

• Introduced GFS
• Discussed implementation of NFS and AndrewFS for comparison

Page 14: Scalable Computing In Education Workshop

Other Distributed Systems

• BOINC: Another platform

• Broader definition of distributed systems
– DNS
– One Laptop per Child project

Page 15: Scalable Computing In Education Workshop

Labs

• Also 2 hours, once per week
• Focused on applications of distributed systems
• Four lab projects over six weeks

Page 16: Scalable Computing In Education Workshop

Lab Schedule

• Introduction to Hadoop, Eclipse Setup, Word Count
• Inverted Index
• PageRank on Wikipedia
• Clustering on Netflix Prize Data

Page 17: Scalable Computing In Education Workshop

Design Projects

• Final four weeks of quarter
• Teams of 1–3 students
• Students proposed topic, gathered data, developed software, and presented solution

Page 18: Scalable Computing In Education Workshop

Example: Geozette

Image © Julia Schwartz

Page 19: Scalable Computing In Education Workshop

Example: Galaxy Simulation

Image © Slava Chernyak, Mike Hoak

Page 20: Scalable Computing In Education Workshop

Other Projects

• Bayesian Wikipedia spam filter
• Unsupervised synonym extraction
• Video collage rendering

Page 21: Scalable Computing In Education Workshop

Ongoing research: traceroutes

• Analyze time-stamped traceroute data to model changes in Internet router topology
– 4.5 GB of data/day * 1.5 years
– 12 billion traces from 200 PlanetLab sites

• Calculates prevalence and persistence of routes between hosts

Page 22: Scalable Computing In Education Workshop

Ongoing research: dynamic program traces

• Dynamic memory trace data from simulators can reach hundreds of GB

• Existing work focuses on sampling

• New capability: record all accesses and post-process with Hadoop

Page 23: Scalable Computing In Education Workshop

Common Features

• Hadoop!
• Used publicly-available web APIs for data
• Many involved reading papers for algorithms and translating into MapReduce framework

Page 24: Scalable Computing In Education Workshop

Background Topics

• Programming Languages

• Systems:
– Operating Systems
– File Systems
– Networking

• Databases

Page 25: Scalable Computing In Education Workshop

Programming Languages

• MapReduce is based on functional programming map and fold

• FP is taught in one quarter, but not reinforced
– “Crash course” necessary
– Worksheets to pose short problems in terms of map and fold
– Immutable data a key concept

Page 26: Scalable Computing In Education Workshop

Multithreaded programming

• Taught in OS course at Washington
– Not a prerequisite!

• Students need to understand multiple copies of same method running in parallel

Page 27: Scalable Computing In Education Workshop

File Systems

• Necessary to understand GFS
• Comparison to NFS, other distributed file systems relevant

Page 28: Scalable Computing In Education Workshop

Networking

• TCP/IP
• Concepts of “connection,” network splits, other failure modes
• Bandwidth issues

Page 29: Scalable Computing In Education Workshop

Other Systems Topics

• Process Scheduling
• Synchronization
• Memory coherency

Page 30: Scalable Computing In Education Workshop

Databases

• Concept of shared consistency model
• Consensus

• ACID characteristics
– Journaling
– Multi-phase commit processes

Page 31: Scalable Computing In Education Workshop

Course Staff

• Instructor (me!)

• Two undergrad teaching assistants
– Helped facilitate discussions, directed labs

• One student sysadmin
– Worked only about three hours/week

Page 32: Scalable Computing In Education Workshop

Preparation

• Teaching assistants had taken previous iteration of course in winter

• Lectures retooled based on feedback from that quarter
– Added reasonably large amount of background material

• Ran & solved all labs in advance

Page 33: Scalable Computing In Education Workshop

The Course: What Worked

• Discussions
– Often covered broad range of subjects

• Hands-on lab projects
• “Active learning” in classroom
• Independent design projects

Page 34: Scalable Computing In Education Workshop

Things to Improve: Coverage

• Algorithms were not reinforced during lecture
– Students requested much more time be spent on “how to parallelize an iterative algorithm”

• Background material was very fast-paced

Page 35: Scalable Computing In Education Workshop

Things to Improve: Projects

• Labs could have used a moderated/scripted discussion component
– Just “jumping in” to the code proved difficult
– No time was devoted to Hadoop itself in lecture
– Clustering lab should be split in two

• Design projects could have used more time

Page 36: Scalable Computing In Education Workshop

Part 2: Algorithms

Page 37: Scalable Computing In Education Workshop

Algorithms for MapReduce

• Sorting
• Searching
• Indexing
• Classification
• TF-IDF
• PageRank
• Clustering

Page 38: Scalable Computing In Education Workshop

MapReduce Jobs

• Tend to be very short, code-wise
– IdentityReducer is very common

• “Utility” jobs can be composed

• Represent a data flow, more so than a procedure

Page 39: Scalable Computing In Education Workshop

Sort: Inputs

• A set of files, one value per line
• Mapper key is file name, line number
• Mapper value is the contents of the line

Page 40: Scalable Computing In Education Workshop

Sort Algorithm

• Takes advantage of reducer properties: (key, value) pairs are processed in order by key; reducers are themselves ordered

• Mapper: Identity function for value
(k, v) -> (v, _)

• Reducer: Identity function
(k’, _) -> (k’, “”)

Page 41: Scalable Computing In Education Workshop

Sort: The Trick

• (key, value) pairs from mappers are sent to a particular reducer based on hash(key)

• Must pick the hash function for your data such that k1 < k2 => hash(k1) < hash(k2)

[Figure: mappers M1–M3 send (key, value) pairs through the partition-and-shuffle stage to reducers R1 and R2]
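To make the required ordering property concrete, below is a minimal sketch (not from the original slides) of an order-preserving partitioner in the pre-0.20 Hadoop “mapred” API; the first-byte bucketing scheme and the class name are illustrative assumptions, standing in for split points chosen to fit the actual key distribution.

import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.Writable;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.Partitioner;

// Routes keys so that k1 < k2 implies partition(k1) <= partition(k2);
// concatenating reducer outputs in partition order then yields one
// globally sorted result.
public class OrderedPartitioner implements Partitioner<Text, Writable> {
  public void configure(JobConf conf) { }

  public int getPartition(Text key, Writable value, int numPartitions) {
    // Illustrative scheme: bucket by the first byte of the key.
    int firstByte = key.getLength() > 0 ? (key.getBytes()[0] & 0xFF) : 0;
    return firstByte * numPartitions / 256;
  }
}

Installed via JobConf.setPartitionerClass(OrderedPartitioner.class), this replaces the default hash partitioner.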

Page 42: Scalable Computing In Education Workshop

Final Thoughts on Sort

• Used as a test of Hadoop’s raw speed
• Essentially “IO drag race”
• Highlights utility of GFS

Page 43: Scalable Computing In Education Workshop

Search: Inputs

• A set of files containing lines of text
• A search pattern to find

• Mapper key is file name, line number
• Mapper value is the contents of the line
• Search pattern sent as special parameter

Page 44: Scalable Computing In Education Workshop

Search Algorithm

• Mapper:
– Given (filename, some text) and “pattern”, if “text” matches “pattern” output (filename, _)

• Reducer:
– Identity function
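A minimal sketch of such a mapper in Java (old “mapred” API); the configuration key "search.pattern" is a hypothetical name for the special parameter, and the file name is read from the standard "map.input.file" property.

import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.*;

public class SearchMapper extends MapReduceBase
    implements Mapper<LongWritable, Text, Text, NullWritable> {
  private String pattern;
  private Text filename;

  public void configure(JobConf conf) {
    pattern = conf.get("search.pattern");            // the search parameter
    filename = new Text(conf.get("map.input.file")); // file being processed
  }

  public void map(LongWritable offset, Text line,
      OutputCollector<Text, NullWritable> out, Reporter reporter)
      throws IOException {
    if (line.toString().contains(pattern)) {
      out.collect(filename, NullWritable.get());     // (filename, _)
    }
  }
}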

Page 45: Scalable Computing In Education Workshop

Search: An Optimization

• Once a file is found to be interesting, we only need to mark it that way once

• Use Combiner function to fold redundant (filename, _) pairs into a single one
– Reduces network I/O

Page 46: Scalable Computing In Education Workshop

Indexing: Inputs

• A set of files containing lines of text

• Mapper key is file name, line number
• Mapper value is the contents of the line

Page 47: Scalable Computing In Education Workshop

Inverted Index Algorithm

• Mapper: For each word in (file, words), map to (word, file)

• Reducer: Identity function

Page 48: Scalable Computing In Education Workshop

Index: MapReduce

map(pageName, pageText):
  foreach word w in pageText:
    emitIntermediate(w, pageName);
  done

reduce(word, values):
  foreach pageName in values:
    AddToOutputList(pageName);
  done
  emitFinal(FormattedPageListForWord);
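For comparison, a minimal Java rendering of the pseudo-code above (old “mapred” API; class names are illustrative, and simple whitespace tokenization is assumed):

import java.io.IOException;
import java.util.Iterator;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.*;

public class InvertedIndex {
  public static class IndexMapper extends MapReduceBase
      implements Mapper<LongWritable, Text, Text, Text> {
    private Text word = new Text();
    private Text page = new Text();

    public void configure(JobConf conf) {
      page.set(conf.get("map.input.file")); // page name = input file
    }

    public void map(LongWritable offset, Text line,
        OutputCollector<Text, Text> out, Reporter r) throws IOException {
      for (String w : line.toString().split("\\s+")) {
        if (w.length() > 0) {
          word.set(w);
          out.collect(word, page);          // emitIntermediate(w, pageName)
        }
      }
    }
  }

  public static class IndexReducer extends MapReduceBase
      implements Reducer<Text, Text, Text, Text> {
    public void reduce(Text word, Iterator<Text> pages,
        OutputCollector<Text, Text> out, Reporter r) throws IOException {
      StringBuilder list = new StringBuilder(); // FormattedPageListForWord
      while (pages.hasNext()) {
        if (list.length() > 0) list.append(", ");
        list.append(pages.next().toString());
      }
      out.collect(word, new Text(list.toString())); // emitFinal(...)
    }
  }
}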

Page 49: Scalable Computing In Education Workshop

Index: Data Flow

Page A: “This page contains so much text”
Page B: “My page contains text too”

A map output:
This: A, page: A, contains: A, so: A, much: A, text: A

B map output:
My: B, page: B, contains: B, text: B, too: B

Reduced output:
contains: A, B
much: A
My: B
page: A, B
so: A
text: A, B
This: A
too: B

Page 50: Scalable Computing In Education Workshop

An Aside: Word Count

• Word count was described in module I

• Mapper for Word Count is (word, 1) for each word in input line
– Strikingly similar to inverted index
– Common theme: reuse/modify existing mappers

Page 51: Scalable Computing In Education Workshop

Bayesian Classification

• Files containing classification instances are sent to mappers

• Map (filename, instance) -> (instance, class)

• Identity Reducer

Page 52: Scalable Computing In Education Workshop

Bayesian Classification

• Existing toolsets can perform Bayes classification on an instance
– E.g., WEKA, already in Java!

• Another example of discarding input key

Page 53: Scalable Computing In Education Workshop

TF-IDF

• Term Frequency – Inverse Document Frequency
– Relevant to text processing
– Common web analysis algorithm

Page 54: Scalable Computing In Education Workshop

The Algorithm, Formally

• tfidf(ti, dj) = tf(ti, dj) × idf(ti)

• tf(ti, dj) = n(i,j) / Σk n(k,j), where n(i,j) is the number of times term ti occurs in document dj

• idf(ti) = log( |D| / |{d : ti ∈ d}| )

• |D| : total number of documents in the corpus

• |{d : ti ∈ d}| : number of documents where the term ti appears (that is, n(i,j) ≠ 0)
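A small worked example (numbers invented for illustration): if a term occurs 3 times in a 100-term document, tf = 3/100 = 0.03; if it appears in 10 documents of a 1,000-document corpus, idf = ln(1000/10) ≈ 4.6, so tf·idf ≈ 0.03 × 4.6 ≈ 0.14. A word appearing in every document gets idf = ln(1) = 0, which is what drives the weights of very common words toward zero.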

Page 55: Scalable Computing In Education Workshop

Information We Need

• Number of times term X appears in a given document
• Number of terms in each document
• Number of documents X appears in
• Total number of documents

Page 56: Scalable Computing In Education Workshop

Job 1: Word Frequency in Doc

• Mapper
– Input: (docname, contents)
– Output: ((word, docname), 1)

• Reducer
– Sums counts for word in document
– Outputs ((word, docname), n)

• Combiner is same as Reducer
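A minimal sketch of Job 1 in Java (old “mapred” API); the composite (word, docname) key is encoded as a tab-separated Text key, and all names are illustrative:

import java.io.IOException;
import java.util.Iterator;
import org.apache.hadoop.io.*;
import org.apache.hadoop.mapred.*;

public class WordFreqInDoc {
  public static class Map extends MapReduceBase
      implements Mapper<LongWritable, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private String docname;

    public void configure(JobConf conf) {
      docname = conf.get("map.input.file");
    }

    public void map(LongWritable offset, Text line,
        OutputCollector<Text, IntWritable> out, Reporter r) throws IOException {
      for (String w : line.toString().split("\\s+"))
        if (w.length() > 0)
          out.collect(new Text(w + "\t" + docname), ONE); // ((word, docname), 1)
    }
  }

  // Registered as both reducer and combiner, since summing is associative.
  public static class Reduce extends MapReduceBase
      implements Reducer<Text, IntWritable, Text, IntWritable> {
    public void reduce(Text wordDoc, Iterator<IntWritable> counts,
        OutputCollector<Text, IntWritable> out, Reporter r) throws IOException {
      int n = 0;
      while (counts.hasNext()) n += counts.next().get();
      out.collect(wordDoc, new IntWritable(n));           // ((word, docname), n)
    }
  }
}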

Page 57: Scalable Computing In Education Workshop

Job 2: Word Counts For Docs

• Mapper
– Input: ((word, docname), n)
– Output: (docname, (word, n))

• Reducer
– Sums frequency of individual n’s in same doc
– Feeds original data through
– Outputs ((word, docname), (n, N))

Page 58: Scalable Computing In Education Workshop

Job 3: Word Frequency In Corpus

• Mapper
– Input: ((word, docname), (n, N))
– Output: (word, (docname, n, N, 1))

• Reducer
– Sums counts for word in corpus
– Outputs ((word, docname), (n, N, m))

Page 59: Scalable Computing In Education Workshop

Job 4: Calculate TF-IDF

• Mapper
– Input: ((word, docname), (n, N, m))
– Assume D is known (or, easy MR to find it)
– Output: ((word, docname), TF*IDF)

• Reducer
– Just the identity function

Page 60: Scalable Computing In Education Workshop

Working At Scale

• Buffering (doc, n, N) counts while summing 1’s into m may not fit in memory
– How many documents does the word “the” occur in?

• Possible solutions
– Ignore very-high-frequency words
– Write out intermediate data to a file
– Use another MR pass

Page 61: Scalable Computing In Education Workshop

Final Thoughts on TF-IDF

• Several small jobs add up to full algorithm

• Lots of code reuse possible
– Stock classes exist for aggregation, identity

• Jobs 3 and 4 can really be done at once in same reducer, saving a write/read cycle

• Very easy to handle medium-large scale, but must take care to ensure flat memory usage for largest scale

Page 62: Scalable Computing In Education Workshop

PageRank: Random Walks Over The Web

• If a user starts at a random web page and surfs by clicking links and randomly entering new URLs, what is the probability that s/he will arrive at a given page?

• The PageRank of a page captures this notion
– More “popular” or “worthwhile” pages get a higher rank

Page 63: Scalable Computing In Education Workshop

PageRank: Visually

[Figure: example web graph with links among www.cnn.com, en.wikipedia.org, and www.nytimes.com]

Page 64: Scalable Computing In Education Workshop

PageRank: Formula

Given page A, and pages T1 through Tn linking to A, PageRank is defined as:

PR(A) = (1-d) + d (PR(T1)/C(T1) + ... + PR(Tn)/C(Tn))

C(P) is the cardinality (out-degree) of page P
d is the damping (“random URL”) factor
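A small worked example (numbers invented for illustration): suppose page A is linked to by T1 (PR = 1.0, 2 out-links) and T2 (PR = 0.6, 3 out-links), with d = 0.85. Then PR(A) = 0.15 + 0.85 × (1.0/2 + 0.6/3) = 0.15 + 0.85 × 0.7 = 0.745.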

Page 65: Scalable Computing In Education Workshop

PageRank: Intuition

• Calculation is iterative: PR(i+1) is based on PR(i)

• Each page distributes its PR(i) to all pages it links to. Linkees add up their awarded rank fragments to find their PR(i+1)

• d is a tunable parameter (usually = 0.85) encapsulating the “random jump factor”

PR(A) = (1-d) + d (PR(T1)/C(T1) + ... + PR(Tn)/C(Tn))

Page 66: Scalable Computing In Education Workshop

Graph Representations

• The most straightforward representation of graphs uses references from each node to its neighbors

Page 67: Scalable Computing In Education Workshop

Direct References

• Structure is inherent to object

• Iteration requires linked list “threaded through” graph

• Requires common view of shared memory (synchronization!)

• Not easily serializable

class GraphNode {
  Object data;
  Vector<GraphNode> out_edges;
  GraphNode iter_next;
}

Page 68: Scalable Computing In Education Workshop

Adjacency Matrices

• Another classic graph representation. M[i][j]= '1' implies a link from node i to j.

• Naturally encapsulates iteration over nodes

[Figure: example adjacency matrix for a 4-node graph, with numbered rows and columns]

Page 69: Scalable Computing In Education Workshop

Adjacency Matrices: Sparse Representation

• Adjacency matrix for most large graphs (e.g., the web) will be overwhelmingly full of zeros.

• Each row of the matrix is absurdly long

• Sparse matrices only include non-zero elements

Page 70: Scalable Computing In Education Workshop

Sparse Matrix Representation

1: (3, 1), (18, 1), (200, 1)
2: (6, 1), (12, 1), (80, 1), (400, 1)
3: (1, 1), (14, 1)
…

Page 71: Scalable Computing In Education Workshop

Sparse Matrix Representation

1: 3, 18, 200
2: 6, 12, 80, 400
3: 1, 14
…

Page 72: Scalable Computing In Education Workshop

PageRank: First Implementation

• Create two tables 'current' and 'next' holding the PageRank for each page. Seed 'current' with initial PR values

• Iterate over all pages in the graph, distributing PR from 'current' into 'next' of linkees

• current := next; next := fresh_table();

• Go back to iteration step or end if converged

Page 73: Scalable Computing In Education Workshop

Distribution of the Algorithm

• Key insights allowing parallelization:
– The 'next' table depends on 'current', but not on any other rows of 'next'
– Individual rows of the adjacency matrix can be processed in parallel
– Sparse matrix rows are relatively small

Page 74: Scalable Computing In Education Workshop

Distribution of the Algorithm

• Consequences of insights:
– We can map each row of 'current' to a list of PageRank “fragments” to assign to linkees
– These fragments can be reduced into a single PageRank value for a page by summing
– Graph representation can be even more compact; since each element is simply 0 or 1, only transmit column numbers where it's 1

Page 75: Scalable Computing In Education Workshop

Map step: break page rank into even fragments to distribute to link targets

Reduce step: add together fragments into next PageRank

Iterate for next step...

Page 76: Scalable Computing In Education Workshop

Phase 1: Parse HTML

• Map task takes (URL, page content) pairs and maps them to (URL, (PRinit, list-of-urls))
– PRinit is the “seed” PageRank for URL
– list-of-urls contains all pages pointed to by URL

• Reduce task is just the identity function

Page 77: Scalable Computing In Education Workshop

Phase 2: PageRank Distribution

• Map task takes (URL, (cur_rank, url_list))
– For each u in url_list, emit (u, cur_rank/|url_list|)
– Emit (URL, url_list) to carry the points-to list along through iterations

PR(A) = (1-d) + d (PR(T1)/C(T1) + ... + PR(Tn)/C(Tn))

Page 78: Scalable Computing In Education Workshop

Phase 2: PageRank Distribution

• Reduce task gets (URL, url_list) and many (URL, val) values
– Sum vals and fix up with d
– Emit (URL, (new_rank, url_list))

PR(A) = (1-d) + d (PR(T1)/C(T1) + ... + PR(Tn)/C(Tn))
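A minimal Java sketch of one Phase 2 iteration (old “mapred” API). The record encoding is an assumption for illustration: each line is url <tab> cur_rank <tab> comma-separated-links, and the carried-along link list is distinguished from rank fragments by a "links=" prefix.

import java.io.IOException;
import java.util.Iterator;
import org.apache.hadoop.io.*;
import org.apache.hadoop.mapred.*;

public class PageRankIteration {
  static final double D = 0.85; // damping factor

  public static class Map extends MapReduceBase
      implements Mapper<LongWritable, Text, Text, Text> {
    public void map(LongWritable offset, Text line,
        OutputCollector<Text, Text> out, Reporter r) throws IOException {
      String[] f = line.toString().split("\t");
      String url = f[0];
      double rank = Double.parseDouble(f[1]);
      String[] links = f.length > 2 ? f[2].split(",") : new String[0];
      for (String target : links)     // distribute rank fragments to linkees
        out.collect(new Text(target),
                    new Text(Double.toString(rank / links.length)));
      // carry the points-to list through to the next iteration
      out.collect(new Text(url), new Text("links=" + (f.length > 2 ? f[2] : "")));
    }
  }

  public static class Reduce extends MapReduceBase
      implements Reducer<Text, Text, Text, Text> {
    public void reduce(Text url, Iterator<Text> values,
        OutputCollector<Text, Text> out, Reporter r) throws IOException {
      double sum = 0.0;
      String links = "";
      while (values.hasNext()) {
        String v = values.next().toString();
        if (v.startsWith("links=")) links = v.substring(6);
        else sum += Double.parseDouble(v); // add up awarded fragments
      }
      double newRank = (1 - D) + D * sum;  // "fix up with d"
      out.collect(url, new Text(newRank + "\t" + links));
    }
  }
}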

Page 79: Scalable Computing In Education Workshop

Finishing up...

• A subsequent component determines whether convergence has been achieved (Fixed number of iterations? Comparison of key values?)

• If so, write out the PageRank lists - done!

• Otherwise, feed output of Phase 2 into another Phase 2 iteration

Page 80: Scalable Computing In Education Workshop

PageRank Conclusions

• MapReduce runs the “heavy lifting” in iterated computation

• Key element in parallelization is independent PageRank computations in a given step

• Parallelization requires thinking about minimum data partitions to transmit (e.g., compact representations of graph rows)
– Even the implementation shown today doesn't actually scale to the whole Internet; but it works for intermediate-sized graphs

Page 81: Scalable Computing In Education Workshop

Clustering

• What is clustering?

Page 82: Scalable Computing In Education Workshop

Google News

• They didn’t pick all 3,400,217 related articles by hand…

• Or Amazon.com
• Or Netflix…

Page 83: Scalable Computing In Education Workshop

Other less glamorous things...

• Hospital Records

• Scientific Imaging
– Related genes, related stars, related sequences

• Market Research
– Segmenting markets, product positioning

• Social Network Analysis
• Data mining
• Image segmentation…

Page 84: Scalable Computing In Education Workshop

The Distance Measure

• How the similarity of two elements in a set is determined, e.g.
– Euclidean Distance
– Manhattan Distance
– Inner Product Space
– Maximum Norm
– Or any metric you define over the space…

Page 85: Scalable Computing In Education Workshop

Types of Algorithms

• Hierarchical Clustering vs.
• Partitional Clustering

Page 86: Scalable Computing In Education Workshop

Hierarchical Clustering

• Builds or breaks up a hierarchy of clusters.

Page 87: Scalable Computing In Education Workshop

Partitional Clustering

• Partitions set into all clusters simultaneously.


Page 89: Scalable Computing In Education Workshop

K-Means Clustering

• Simple Partitional Clustering
• Choose the number of clusters, k
• Choose k points to be cluster centers
• Then…

Page 90: Scalable Computing In Education Workshop

K-Means Clustering

iterate {
  Compute distance from all points to all k-centers
  Assign each point to the nearest k-center
  Compute the average of all points assigned to each k-center
  Replace the k-centers with the new averages
}
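A minimal single-machine sketch of one such iteration, for points in the plane with Euclidean distance (all names illustrative); in the MapReduce version each iteration becomes one job, with the map step assigning points to the nearest center and the reduce step averaging:

import java.util.List;

public class KMeansStep {
  static double dist(double[] a, double[] b) {
    double dx = a[0] - b[0], dy = a[1] - b[1];
    return Math.sqrt(dx * dx + dy * dy);
  }

  // One iteration: assign every point to its nearest center, then
  // replace each center with the average of its assigned points.
  static void iterate(List<double[]> points, double[][] centers) {
    int k = centers.length;
    double[][] sums = new double[k][2];
    int[] counts = new int[k];
    for (double[] p : points) {
      int best = 0;
      for (int c = 1; c < k; c++)
        if (dist(p, centers[c]) < dist(p, centers[best])) best = c;
      sums[best][0] += p[0];
      sums[best][1] += p[1];
      counts[best]++;
    }
    for (int c = 0; c < k; c++)
      if (counts[c] > 0) {
        centers[c][0] = sums[c][0] / counts[c];
        centers[c][1] = sums[c][1] / counts[c];
      }
  }
}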

Page 91: Scalable Computing In Education Workshop

But!

• The complexity is pretty high:
– k * n * O(distance metric) * num(iterations)

• Moreover, it can be necessary to send tons of data to each Mapper Node. Depending on your bandwidth and memory available, this could be impossible.

Page 92: Scalable Computing In Education Workshop

Furthermore

• There are three big ways a data set can be large:
– There are a large number of elements in the set
– Each element can have many features
– There can be many clusters to discover

• Conclusion – Clustering can be huge, even when you distribute it.

Page 93: Scalable Computing In Education Workshop

Canopy Clustering

• Preliminary step to help parallelize computation.

• Clusters data into overlapping Canopies using a super cheap distance metric

• Efficient
• Accurate

Page 94: Scalable Computing In Education Workshop

Canopy Clustering

While there are unmarked points {
  pick a point which is not strongly marked
  call it a canopy center
  mark all points within some threshold of it as in its canopy
  strongly mark all points within some stronger threshold
}
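A minimal single-machine sketch of this selection loop (illustrative names; Manhattan distance stands in for the “super cheap” metric). Points within the stronger threshold t2 of a chosen center are strongly marked and never become centers themselves; the looser in-canopy test is applied later, when assigning points to canopies:

import java.util.ArrayList;
import java.util.List;

public class CanopySelection {
  static List<double[]> pickCenters(List<double[]> points, double t2) {
    List<double[]> centers = new ArrayList<double[]>();
    boolean[] strongly = new boolean[points.size()];
    for (int i = 0; i < points.size(); i++) {
      if (strongly[i]) continue;            // skip strongly marked points
      double[] center = points.get(i);      // call it a canopy center
      centers.add(center);
      for (int j = 0; j < points.size(); j++)
        if (cheapDistance(center, points.get(j)) < t2)
          strongly[j] = true;               // strongly mark nearby points
    }
    return centers;
  }

  // The cheap metric: Manhattan distance in the plane as a stand-in.
  static double cheapDistance(double[] a, double[] b) {
    return Math.abs(a[0] - b[0]) + Math.abs(a[1] - b[1]);
  }
}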

Page 95: Scalable Computing In Education Workshop

After the canopy clustering…

• Resume hierarchical or partitional clustering as usual.

• Treat objects in separate canopies as being at infinite distances

Page 96: Scalable Computing In Education Workshop

MapReduce Implementation:

• Problem – Efficiently partition a large data set (say… movies with user ratings!) into a fixed number of clusters using Canopy Clustering, K-Means Clustering, and a Euclidean distance measure.

Page 97: Scalable Computing In Education Workshop

The Distance Metric

• The Canopy Metric ($)

• The K-Means Metric ($$$)

Page 98: Scalable Computing In Education Workshop

Steps!

• Get Data into a form you can use (MR)
• Picking Canopy Centers (MR)
• Assign Data Points to Canopies (MR)
• Pick K-Means Cluster Centers

• K-Means algorithm (MR)

– Iterate!

Page 99: Scalable Computing In Education Workshop

Selecting Canopy Centers

[Pages 100–114: animation frames illustrating canopy center selection]

Page 115: Scalable Computing In Education Workshop

Assigning Points to Canopies

[Pages 116–120: animation frames illustrating assignment of points to canopies]

Page 121: Scalable Computing In Education Workshop

K-Means Map

[Pages 122–128: animation frames illustrating the k-means map step]

Page 129: Scalable Computing In Education Workshop

Elbow Criterion

• Choose a number of clusters s.t. adding a cluster doesn’t add interesting information

• Rule of thumb to determine what number of clusters should be chosen

• Initial assignment of cluster seeds has bearing on final model performance.

• Often required to run clustering several times to get maximal performance

Page 130: Scalable Computing In Education Workshop

Clustering Conclusions

• Clustering is slick
• And it can be done super efficiently
• And in lots of different ways

Page 131: Scalable Computing In Education Workshop

Conclusions

• Lots of high-level algorithms

• Lots of deep connections to low-level systems

• Discussion-based classes helped students think critically about real issues

• Code labs made them real