
ID2223 Lecture 2: Distributed ML and Linear Regression

Terminology

•Observations. Entities used for learning/evaluation

•Features. Attributes (typically numeric) used to represent an observation

•Labels. Values/categories assigned to observations

•Model. Parameters/weights that are adjusted by training to predict label(s) given observations.

•Training, Validation, and Test Data. Observations for training and evaluating a learning algorithm

- Training data is given to the algorithm for training

- Validation data is withheld from training and is used to measure the performance of training

- Test data is withheld at train time
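To make the three-way split concrete, here is a minimal, hedged sketch assuming a Spark DataFrame named observations is already loaded (e.g., in spark-shell); randomSplit and the 60/20/20 ratios are illustrative choices, not from the slides.

// Hedged sketch: `observations` is an assumed, already-loaded DataFrame
val Array(trainDF, validationDF, testDF) =
  observations.randomSplit(Array(0.6, 0.2, 0.2), seed = 42)
// trainDF      -> given to the learning algorithm
// validationDF -> withheld from training, used for model selection
// testDF       -> withheld until the final evaluation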


Supervised Learning

[Figure: Labelled Observations + Input Data → Supervised Learning → Prediction]

Learn from labeled observations. Labels ‘teach’ the algorithm to learn a mapping from observations to labels.

Supervised Learning

•Classification. Assign a category to each item

- Categories are discrete


[Image from Sze et al, 2017]

Supervised Learning

•Regression. Predict a real value for each item

- Labels are continuous

- Can define ‘closeness’ when comparing prediction with label


Unsupervised Learning

[Figure: List of Data Points → Unsupervised Learning → List of Cluster Labels]

•Clustering. Partition observations into homogeneous regions

•Dimensionality Reduction. Transform an initial feature representation into a more concise representation

Non-Distributed Representations

•Not compositional

- Nearest Neighbour

- Decision Trees (DTs)

• Random Forests

• Gradient Boosted DTs

- Clustering

[Figure: regions of the input space defined by learned prototypes]

[Bengio, BayArea DL School, 16]

Parametric Learning

“A learning model that summarizes data with a set of parameters of fixed size (independent of the number of training examples) is called a parametric model. No matter how much data you throw at a parametric model, it won’t change its mind about how many parameters it needs.” — Artificial Intelligence: A Modern Approach, page 737

Parametric vs Non-Parametric Learning

•Parametric = bounded number of parameters

•Non-parametric = unbounded number of parameters

•Whether a model is parametric or non-parametric affects

- the way the learned model is stored

- the method used for learning

•Possible combinations:

- Supervised Parametric
- Supervised Non-Parametric
- Unsupervised Parametric
- Unsupervised Non-Parametric

Supervised Parametric Learning

•Imagine a machine where you can input some data, turn some dials and observe its performance

- Output data is a prediction

[Grokking Deep Learning, Manning ’16]

Supervised Parametric Learning Algorithm

1. Predict

2. Compare to actual result (true pattern)

3. Adjust the dials (parameters) to improve predictions

[Grokking Deep Learning, Manning ’16]
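As a minimal sketch of the predict / compare / adjust loop (my own illustration, not from the slides), the Scala snippet below adjusts a single ‘dial’ (one weight) for one input/label pair; the numbers are arbitrary.

var weight = 0.5                       // the dial
val input = 2.0                        // one observation
val goal = 0.8                         // its true label
val stepSize = 0.01
for (_ <- 1 to 1000) {
  val prediction = input * weight      // 1. predict
  val error = prediction - goal        // 2. compare to the actual result
  weight -= stepSize * error * input   // 3. adjust the dial to improve the prediction
}
println(weight)                        // approaches goal / input = 0.4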

Machine Learning Pipeline


Machine Learning Hierarchy of Needs

[Figure: the hierarchy as a pyramid, from bottom to top (side labels: Analytics, Prediction)]

- Reliable Data Pipelines, ETL, Unstructured and Structured Data Storage, Real-Time Data Ingestion
- B.I. Analytics, Metrics, Aggregates, Features, Training/Test Data
- Labeled Data, ML Experimentation
- Deep Learning, RL, Automated ML
- DDL (Distributed Deep Learning)

Importing Data

•Import the raw data (observations) from some source

- Real-time data (Kafka)

- Data-at-rest (HDFS)

•Different data formats likely

•Data may have duplicates, missing columns, or invalid values

- Clean and wrangle the data

•Store the data in a format that is efficient for querying (partitioned)
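A hedged sketch of this step with Spark (the paths, format, and the partitioning column "date" are hypothetical; assumes a spark-shell session with spark in scope):

// Read raw data-at-rest, clean it, and store it partitioned for efficient querying
val raw = spark.read.json("hdfs:///data/raw/events")   // hypothetical path and format
val clean = raw.dropDuplicates().na.drop()             // drop duplicates and rows with missing values
clean.write.mode("overwrite")
  .partitionBy("date")                                 // hypothetical partition column
  .parquet("hdfs:///data/clean/events")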


Feature Extraction

•Extract features to represent the observations

- Exploit domain knowledge

•Nearly always want numeric features

•Choice of features is crucial to the success of the entire pipeline

Supervised Learning

•Train a supervised model using labeled data

•Classification or Regression model


Learning

•Q: How do we determine the quality of the model we’ve just trained?

•A: We can evaluate it on test / hold-out data, i.e., labeled data not used for training

•If we don’t like the results, we iterate…


Predict

•Once we’re happy with our model, we can use it to make predictions on future observations, i.e., data without a known label

[Figure: Acquire Raw Data → Clean/Partition Data → Feature Extraction → Supervised Learning → Evaluation → Predict]

Classification

•Goal: Learn a mapping from observations to discrete labels given a set of training examples (supervised learning)


Classification Examples

•Spam Classification

- Emails → {spam, ham}

•Anomaly detection

- Network activity → {anomaly, not anomaly}

•Fraud detection

- Shopping activity → {fraud, not fraud}

•Clickthrough rate prediction

- User viewing an ad → {click, no click}


Classification Example – 20 Newsgroups

•1000s of documents from 20 Usenet Newsgroups

- alt.atheism, soc.religion.Christian, talk.politics.guns

- comp.sys.ibm.pc.hardware, comp.sys.mac.hardware

•Train a model to classify documents from the 20 Newsgroups data set into two categories according to whether or not the documents are computer related.

•Goal: classify each document as IS_COMPUTERS or NOT_COMPUTERS

Classification Pipeline – Newsgroups20

•Raw data consists of a set of labeled training observations

Example raw observations (post excerpts):

>>I have been at a shooting range where
>>gang members were "practicing" shooting.

In article <C5qsBF.IEK@ms.uky.edu> billq@ms.uky.edu (Billy Quinn) writes:
>I built a little project using the radio shack 5vdc relays to switch
>

Classification Pipeline

Observation                  Label
talk.politics.guns/54279     NOT_COMPUTERS
sci.electronics/53909        IS_COMPUTERS
…                            …


Classification Pipeline

•Feature extraction usually converts each observation into a vector of real numbers (features)

•Choosing a good description for an observation has a huge bearing on the success or failure of a classifier

[Figure: the example post is mapped to a numeric feature vector (feature indices 0–19 and 7830–7839 shown)]
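A hedged sketch of this feature-extraction step using Spark ML’s text transformers (the DataFrame rawDF with a "text" column and the vector size are assumptions, not from the slides):

import org.apache.spark.ml.feature.{Tokenizer, HashingTF}

val tokenizer = new Tokenizer().setInputCol("text").setOutputCol("words")
val hashingTF = new HashingTF().setInputCol("words").setOutputCol("features").setNumFeatures(10000)

val words    = tokenizer.transform(rawDF)   // rawDF: assumed DataFrame of raw documents
val featured = hashingTF.transform(words)   // each document becomes a sparse numeric vector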

Classification Pipeline

case class newsgroupsCaseClass(id: String, text: String, topic: String)
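A hedged sketch of loading the raw posts into this case class (the HDFS path and the convention that the parent directory encodes the topic are assumptions; assumes spark-shell with sc, spark and spark.implicits._ available):

import spark.implicits._

val newsgroupsDF = sc.wholeTextFiles("hdfs:///data/20news/*/*")   // (filePath, fileContent) pairs
  .map { case (path, text) =>
    val parts = path.split("/")                                   // e.g. .../sci.electronics/53909
    newsgroupsCaseClass(id = parts.takeRight(2).mkString("/"),
                        text = text,
                        topic = parts.takeRight(2).head)
  }
  .toDF()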


Classification Pipeline

•Show only documents that are related to computer topics
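A hedged sketch, continuing from the newsgroupsDF above: derive the binary IS_COMPUTERS label from the topic, or filter down to the computer-related groups; the "comp." prefix rule is an assumption.

import org.apache.spark.sql.functions.when
// the $-syntax needs spark.implicits._ (imported in the previous sketch)

val labelled = newsgroupsDF.withColumn(
  "label", when($"topic".startsWith("comp."), 1.0).otherwise(0.0))   // 1.0 = IS_COMPUTERS

val computersOnly = labelled.filter($"label" === 1.0)                // show only computer-related documents
computersOnly.show(5)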


Classification Pipeline

•Supervised Learning: Train classifier using training data

- Common classifiers include Logistic Regression, SVMs, Decision Trees, Random Forests, etc.

•Training (especially at scale) often involves iterative computations, e.g., gradient descent
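A hedged sketch of this training step as a Spark ML Pipeline (logistic regression on hashed term-frequency features; the labelled DataFrame and the parameter values are assumptions carried over from the earlier sketches):

import org.apache.spark.ml.Pipeline
import org.apache.spark.ml.feature.{Tokenizer, HashingTF}
import org.apache.spark.ml.classification.LogisticRegression

val tokenizer = new Tokenizer().setInputCol("text").setOutputCol("words")
val hashingTF = new HashingTF().setInputCol("words").setOutputCol("features")
val lr        = new LogisticRegression().setMaxIter(20).setRegParam(0.01)

val pipeline = new Pipeline().setStages(Array(tokenizer, hashingTF, lr))
val Array(train, test) = labelled.randomSplit(Array(0.7, 0.3), seed = 42)
val model = pipeline.fit(train)   // the iterative (gradient-based) optimization happens here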



Logistic Regression

•Goal: Find linear decision boundary

- Parameters to learn are feature weights and offset

- Nice probabilistic interpretation

- Covered in more detail later in course


[Distributed Machine Learning with Apache Spark, Berkeley ‘16 ]

Evaluation

•How can we evaluate the quality of our classifier?

•We want good predictions on unobserved data

- ’Generalization’ ability

•Accuracy on training data is overly optimistic since classifier has already learned from it

- We might be ‘overfitting’


Overfitting and Generalization

•Fitting training data does not guarantee generalization, e.g., lookup table

•Which figure below is a better classifier?

- Left: perfectly fits training samples, but it is complex and overfits the data

•Favour simple models over complex ones, ceteris paribus


[Distributed Machine Learning with Apache Spark, Berkeley ‘16 ]

Classification Pipeline

•How can we evaluate the quality of our classifier?

•Idea: Create test set to simulate unobserved data

•Evaluation: Split dataset into training / testing datasets

- Various ways to compare predicted and true labels

- Evaluation criterion is called a ‘loss’ function

- Accuracy (or 0-1 loss) is common for classification
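A hedged sketch of computing accuracy on the held-out test set (continuing from the fitted model and the test split in the earlier training sketch):

import org.apache.spark.ml.evaluation.MulticlassClassificationEvaluator

val predictions = model.transform(test)
val evaluator = new MulticlassClassificationEvaluator()
  .setLabelCol("label")
  .setPredictionCol("prediction")
  .setMetricName("accuracy")          // accuracy = 1 - (0-1 loss)
println(s"Test accuracy = ${evaluator.evaluate(predictions)}")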


Classification Pipeline

•Split data set into separate training (70%) and test (30%) data sets


[Distributed Machine Learning with Apache Spark, Berkeley ‘16 ]

Classification Pipeline

•Predict: Final classifier can then be used to make predictions on future observations


[Distributed Machine Learning with Apache Spark, Berkeley ‘16 ]

Linear Regression


Regression

•Goal: Learn a mapping from observations (features) to continuous values/labels given a training set (supervised learning)


Linear Least Squares Regression

•For each observation we have a feature vector, x, and label, y

$\mathbf{x}^T = [x_1\;\; x_2 \cdots x_d]$

•We assume a linear mapping between features and label:

$y \approx w_0 + w_1 x_1 + \cdots + w_d x_d$

Linear Least Squares Regression

•We can augment the feature vector to incorporate offset:

$\mathbf{x}^T = [1\;\; x_1\;\; x_2 \cdots x_d]$

•We can then rewrite this linear mapping as a scalar product:

$y \approx \hat{y} = \sum_{i=0}^{d} w_i x_i = \mathbf{w}^T\mathbf{x}$    (with $x_0 = 1$)

Least Squares Regression

•Given n training samples with d features, we define:

$\mathbf{X} \in \mathbb{R}^{n \times d}$ : matrix storing the points (one row per observation)

$\mathbf{y} \in \mathbb{R}^{n}$ : real-valued labels

$\hat{\mathbf{y}} \in \mathbb{R}^{n}$ : predicted labels, where $\hat{\mathbf{y}} = \mathbf{X}\mathbf{w}$

$\mathbf{w} \in \mathbb{R}^{d}$ : model parameters

Evaluating Predictions

•What is an appropriate evaluation metric or ‘loss’ function?

•Absolute loss: $|\hat{y} - y|$

•Squared loss: $(\hat{y} - y)^2$

Learning a Model is an Optimization Problem

•Assume we have n training points, where $\mathbf{x}^{(i)}$ denotes the ith point

•Idea: Find the model $\mathbf{w}$ that minimizes squared loss over the training points:

$\min_{\mathbf{w}} \sum_{i=1}^{n} \left(\mathbf{w}^T\mathbf{x}^{(i)} - y^{(i)}\right)^2$

where $\hat{y}^{(i)} = \mathbf{w}^T\mathbf{x}^{(i)}$

Least Squares Regression

•Least Squares Regression: Learn the mapping (w) from features to labels that minimizes residual sum of squares:

$\min_{\mathbf{w}} \|\mathbf{X}\mathbf{w} - \mathbf{y}\|_2^2$

•Equivalent by definition of the Euclidean norm:

$\min_{\mathbf{w}} \sum_{i=1}^{n} \left(\mathbf{w}^T\mathbf{x}^{(i)} - y^{(i)}\right)^2$

Closed Form Solution

•Solve by setting derivative to zero

- (left as exercise)

- Hint: find the minimum by differentiating the loss (error) function and setting to zero. Then you will need two linear algebra steps.

•Solution:

$\mathbf{w} = (\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T\mathbf{y}$

$\mathbf{X}^T\mathbf{X}$ must be invertible (non-singular) for this solution to exist.
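A hedged sketch of the closed-form solution using the Breeze linear-algebra library (the tiny dataset is made up; Breeze itself is not part of the slides):

import breeze.linalg.{DenseMatrix, DenseVector, inv}

val X = DenseMatrix((1.0, 1.0), (1.0, 2.0), (1.0, 3.0))   // first column is the bias feature x_0 = 1
val y = DenseVector(1.0, 2.0, 2.0)
val w = inv(X.t * X) * (X.t * y)                          // w = (X^T X)^{-1} X^T y; needs X^T X invertible
println(w)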


Overfitting and Generalization

•We want good predictions on new data, i.e., ’generalization’

•Least squares regression minimizes training error, and could overfit

- Simpler models are more likely to generalize (Occam’s razor)

- Can we change the problem to penalize for model complexity?

- Intuitively, models with smaller weights are simpler

•Can we penalize overly complex models?

•Ridge Regression does this by adding a regularization term


Ridge Regression

•Ridge Regression: Learn mapping (w) that minimizes the residual sum of squares along with a regularization term:

$\min_{\mathbf{w}} \|\mathbf{X}\mathbf{w} - \mathbf{y}\|_2^2 + \lambda \|\mathbf{w}\|_2^2$

The first term is the training error, the second term measures model complexity, and λ is a free parameter that trades off between them.
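For reference (a standard result, not shown on the slide): ridge regression also has a closed-form solution, and adding λI makes the matrix invertible for any λ > 0:

$\mathbf{w} = (\mathbf{X}^T\mathbf{X} + \lambda \mathbf{I}_d)^{-1}\mathbf{X}^T\mathbf{y}$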

Hyperparameters

$\lambda \|\mathbf{w}\|_2^2$

•How do we find good values for this hyperparameter?

- Need to tune ‘hyperparameters’

•Problem: If we try out different hyperparameter values and evaluate them using the test set, there is a danger of overfitting the data. Why?

- Problem arises as the test set should reflect unobserved data

•Solution: Use a 2nd hold out dataset to evaluate different tunings for hyperparameters


Evaluating with the Validation Set

•Hyperparameter tuning

- Training: train different models

- Validation: evaluate different models

- Test: evaluate the accuracy of the final model

[Distributed Machine Learning with Apache Spark, Berkeley ‘16 ]

Hyperparameter Search

•Grid Search: Exhaustively search through hyperparameter space

- Define and discretize search space (linear or log scale)


[Distributed Machine Learning with Apache Spark, Berkeley ‘16 ]
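A hedged sketch of grid search with Spark ML’s tuning utilities (a ridge-style linear regression, a log-scale grid for the regularization parameter, and an 80/20 train/validation split; the trainingData DataFrame is an assumption):

import org.apache.spark.ml.regression.LinearRegression
import org.apache.spark.ml.evaluation.RegressionEvaluator
import org.apache.spark.ml.tuning.{ParamGridBuilder, TrainValidationSplit}

val lr = new LinearRegression().setElasticNetParam(0.0)   // pure L2 (ridge) regularization
val grid = new ParamGridBuilder()
  .addGrid(lr.regParam, Array(1e-4, 1e-2, 1.0))           // discretized on a log scale
  .build()
val tvs = new TrainValidationSplit()
  .setEstimator(lr)
  .setEvaluator(new RegressionEvaluator().setMetricName("rmse"))
  .setEstimatorParamMaps(grid)
  .setTrainRatio(0.8)                                     // 80% training, 20% validation
// val bestModel = tvs.fit(trainingData)                  // trainingData: assumed ("features", "label") DataFrame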

Evaluation

•Evaluate final model

- Training set: train various models

- Validation set: evaluate various models

- Test set: evaluate final model’s accuracy

[Distributed Machine Learning with Apache Spark, Berkeley ‘16 ]

Predict

•The trained model can then be used to make predictions on unseen observations


[Distributed Machine Learning with Apache Spark, Berkeley ‘16 ]

Distributed Machine Learning


Computational Complexity - Review


Time Complexity

•The (worst-case) time complexity of a problem is the time complexity of the fastest algorithm that solves the problem.

•O(n): there is an algorithm that solves the problem within this (order of) time

- This has to hold for all possible inputs, that is, in the worst case

What is the time complexity of the following code snippet?

for ( i=0 ; i<n ; i++ )
  for( j=0 ; j<n ; j++ )
    for( k=0 ; k<n ; k++ )
      sum[i][j] += entry[i][j][k];

(Three nested loops of n iterations each, so O(n³).)

Orders-of-Growth


Common Orders-of-Growth

•O(1) – constant

•O(log(n)) – logarithmic time

•O(n) – linear

•O(n log(n)) – linearithmic time

•O(n²) – quadratic time

•O(2ⁿ) – exponential time

Orders-of-Growth Example


Big-Oh (O)

•$f(x) = O(g(x))$

- ignore constants and lower-order terms

•Big-Oh gives an upper bound on a function:

$f(n) = O(g(n)) \iff \exists\, c > 0 \text{ and } n_0 > 0 \text{ such that } f(n) \le c\,g(n) \;\; \forall\, n \ge n_0$

Space Complexity

•The space complexity of an algorithm is an expression for the worst-case amount of memory that the algorithm will use.


Linear Regression: Computational Complexity

•Can we count the number of arithmetic operations needed to compute the following?

- Assume we have d features and n samples

$\mathbf{w} = (\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T\mathbf{y}$

Linear Regression: Computational Complexity

•If we count the number of arithmetic operations for:

$\mathbf{w} = (\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T\mathbf{y}$

•Cost of the operations (n is #samples, d is #features):

- $\mathbf{X}^T\mathbf{X}$ : O(nd²)
- Matrix inverse of $\mathbf{X}^T\mathbf{X}$ : O(d³)
- $\mathbf{X}^T\mathbf{y}$ : O(nd)
- Product of the inverse matrix and $\mathbf{X}^T\mathbf{y}$ : O(d²)

•The time complexity is O(nd² + d³), ignoring lower-order terms

Linear Regression: Space Complexity

•Storage costs:

- $\mathbf{X}^T\mathbf{X}$ and its inverse: O(d²) floats
- $\mathbf{X}$ : O(nd) floats

•Space complexity: O(nd + d²) floats

Big n and Small d

•Assume O(d³) computation and O(d²) memory is feasible on a single machine

•Storing $\mathbf{X}$ and computing $\mathbf{X}^T\mathbf{X}$ are the bottlenecks

•How can we distribute storage and computation?

- Partition the rows of $\mathbf{X}$ over hosts

- Compute $\mathbf{X}^T\mathbf{X}$ as a sum of outer products (see the worked examples and the sketch below)

Matrix Multiplication using Inner Products

•Each entry of the output matrix is the inner product of a row of the first input matrix with a column of the second

$\begin{pmatrix} 3 & 4 & 6 \\ 2 & 3 & 1 \end{pmatrix} \cdot \begin{pmatrix} 2 & 3 \\ 1 & -1 \\ 4 & 5 \end{pmatrix} = \begin{pmatrix} 34 & 35 \\ 11 & 8 \end{pmatrix}$

- $3 \times 2 + 4 \times 1 + 6 \times 4 = 34$
- $3 \times 3 + 4 \times (-1) + 6 \times 5 = 35$
- $2 \times 2 + 3 \times 1 + 1 \times 4 = 11$
- $2 \times 3 + 3 \times (-1) + 1 \times 5 = 8$

Matrix Multiplication via Outer Products

•The output matrix is the sum of the outer products of the columns of the first input matrix with the corresponding rows of the second

$\begin{pmatrix} 3 & 4 & 6 \\ 2 & 3 & 1 \end{pmatrix} \cdot \begin{pmatrix} 2 & 3 \\ 1 & -1 \\ 4 & 5 \end{pmatrix} = \begin{pmatrix} 6 & 9 \\ 4 & 6 \end{pmatrix} + \begin{pmatrix} 4 & -4 \\ 3 & -3 \end{pmatrix} + \begin{pmatrix} 24 & 30 \\ 4 & 5 \end{pmatrix} = \begin{pmatrix} 34 & 35 \\ 11 & 8 \end{pmatrix}$

Computational Complexity with Map-Reduce

[Distributed Machine Learning with Apache Spark, Berkeley ‘16 ]
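A hedged sketch of the pattern in those figures: X^T X computed as a sum of per-row outer products. Rows are held in a local Seq here (using Breeze); with Spark, the same map(...).reduce(...) would run over an RDD of rows partitioned across workers.

import breeze.linalg.{DenseMatrix, DenseVector}

val rows: Seq[DenseVector[Double]] = Seq(               // rows of X, d = 2 here
  DenseVector(1.0, 2.0), DenseVector(3.0, 4.0), DenseVector(5.0, 6.0))

val xtx: DenseMatrix[Double] =
  rows.map(x => x * x.t)   // map: one d x d outer product per row (O(d^2) each)
      .reduce(_ + _)       // reduce: sum the small d x d matrices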

Big n and Big d

•With big d, storing $\mathbf{X}$ and computing $\mathbf{X}^T\mathbf{X}$ are still the bottlenecks

•The O(d³) computation in the reduce step is not easily parallelized

Big n and Big d: Not Computationally Feasible

•Distributing storage and processing doesn’t change the cubic computational complexity

•Options to scale include:

1. Exploit data sparsity to reduce the dimensionality, or

2. Make computation (and storage) linear in n and d

Iterative Algorithms are more Efficient

•We need methods that are linear in time and space

•Gradient descent is an iterative algorithm that requires O(nd) computation and O(d) local storage per iteration

Gradient Descent

[Distributed Machine Learning with Apache Spark, Berkeley ‘16 ]
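A hedged sketch of batch gradient descent for least squares (Breeze again, with the same tiny made-up dataset as the closed-form sketch; the step size and iteration count are arbitrary): each iteration costs O(nd) and only the d weights are stored.

import breeze.linalg.{DenseMatrix, DenseVector}

val X = DenseMatrix((1.0, 1.0), (1.0, 2.0), (1.0, 3.0))
val y = DenseVector(1.0, 2.0, 2.0)
var w = DenseVector.zeros[Double](X.cols)
val stepSize = 0.1
for (_ <- 1 to 500) {
  val gradient = X.t * (X * w - y)   // O(nd) work per iteration
  w = w - gradient * stepSize
}
println(w)                           // approaches the closed-form solution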

References

•Distributed Machine Learning with Apache Spark, UCLA/Berkeley Course 2016

•Andrew Ng, Machine Learning CS 229, Stanford

•Deep Learning Book, Section 4.5

•Sze et al., “Efficient Processing of Deep Neural Networks: A Tutorial and Survey”, https://arxiv.org/abs/1703.09039
