H2O Distributed Deep Learning by Arno Candel 071614

Description

Deep Learning has been dominating recent machine learning competitions with better predictions. Unlike the neural networks of the past, modern Deep Learning methods have cracked the code for training stability and generalization. Deep Learning is not only the leader in image and speech recognition tasks, but is also emerging as the algorithm of choice in traditional business analytics. This talk introduces Deep Learning and implementation concepts in the open-source H2O in-memory prediction engine. Designed for the solution of enterprise-scale problems on distributed compute clusters, it offers advanced features such as adaptive learning rate, dropout regularization and optimization for class imbalance. World-record performance on the classic MNIST dataset, best-in-class accuracy for eBay text classification and others showcase the power of this game-changing technology. A whole new ecosystem of Intelligent Applications is emerging with Deep Learning at its core.

About the Speaker: Arno Candel. Prior to joining 0xdata as Physicist & Hacker, Arno was a founding Senior MTS at Skytree where he designed and implemented high-performance machine learning algorithms. He has over a decade of experience in HPC with C++/MPI and had access to the world's largest supercomputers as a Staff Scientist at SLAC National Accelerator Laboratory, where he participated in US DOE scientific computing initiatives. While at SLAC, he authored the first curvilinear finite-element simulation code for space-charge dominated relativistic free electrons and scaled it to thousands of compute nodes. He also led a collaboration with CERN to model the electromagnetic performance of CLIC, a ginormous e+e- collider and potential successor of LHC. Arno has authored dozens of scientific papers and was a sought-after academic conference speaker. He holds a PhD and Masters summa cum laude in Physics from ETH Zurich.

Transcript of H2O Distributed Deep Learning by Arno Candel 071614

  • Deep Learning with H2O. 0xdata / H2O.ai: Scalable In-Memory Machine Learning. Hadoop User Group, Chicago, 7/16/14. Arno Candel
  • Who am I? PhD in Computational Physics (2005, ETH Zurich, Switzerland). 6 years at SLAC (accelerator physics modeling), 2 years at Skytree, Inc. (machine learning), 7 months at 0xdata/H2O (machine learning). 15 years in HPC: C++, MPI, supercomputing. @ArnoCandel
  • Outline: Intro & Live Demo (5 mins); Methods & Implementation (20 mins); Results & Live Demos (25 mins): MNIST handwritten digits, text classification, weather prediction; Q & A (10 mins)
  • H2O: Open Source in-memory Prediction Engine for Big Data. Distributed in-memory math platform: GLM, GBM, RF, K-Means, PCA, Deep Learning. Easy-to-use SDK / API: Java, R, Scala, Python, JSON, browser-based GUI. Businesses can use ALL of their data (with or without Hadoop): modeling without sampling. Big Data + Better Algorithms = Better Predictions
  • About H2O (aka 0xdata): pure Java, Apache v2 Open Source. Join the www.h2o.ai/community! +1 Cyprien Noel for prior work
  • Customer Demands for Practical Machine Learning. Requirements and their value: In-Memory -> Fast (Interactive); Distributed -> Big Data (No Sampling); Open Source -> Ownership of Methods; API / SDK -> Extensibility. H2O was developed by 0xdata to meet these requirements.
  • H2O Integration: H2O runs standalone, over YARN, or on MRv1 (Hadoop MapReduce), reading data from HDFS; client interfaces include Java, R, Scala, Python, and JSON.
  • H2O Architecture: distributed in-memory K-V store with column compression; memory manager; in-memory MapReduce; machine learning algorithms (e.g. Deep Learning); R engine; nano-fast scoring engine / prediction engine.
  • H2O - The Killer App on Spark: http://databricks.com/blog/2014/06/30/sparkling-water-h20-spark.html
  • John Chambers (creator of the S language, R-core member) names the H2O R API among the top three promising R projects. H2O R CRAN package.
  • H2O + R = Happy Data Scientist. Machine Learning on Big Data with R: Data resides on the H2O cluster!
  • H2O Deep Learning in Action. Live Demo: build an H2O Deep Learning model on the MNIST train/test data (a Python sketch of this demo appears after the transcript). MNIST = digitized handwritten digits database (Yann LeCun). Train: 60,000 rows, 784 integer columns, 10 classes. Test: 10,000 rows, 784 integer columns, 10 classes. Data: 28x28 = 784 pixels with gray-scale values in 0 to 255. Yann LeCun: "Yet another advice: don't get fooled by people who claim to have a solution to Artificial General Intelligence. Ask them what error rate they get on MNIST or ImageNet."
  • What is Deep Learning? Wikipedia: "Deep learning is a set of algorithms in machine learning that attempt to model high-level abstractions in data by using architectures composed of multiple non-linear transformations." Example: input data (an image) -> prediction (who is it?). Facebook's DeepFace (Yann LeCun) recognises faces as well as humans.
  • Deep Learning is Trending (Google Trends, 2011-2013). Businesses are using Deep Learning techniques! Google Brain (Andrew Ng, Jeff Dean & Geoffrey Hinton); FBI FACE: $1 billion face recognition project; Chinese search giant Baidu hires the man behind the Google Brain (Andrew Ng).
  • Deep Learning History: slides by Yann LeCun (now at Facebook). Deep Learning wins competitions AND makes humans, businesses and machines (cyborgs!?) smarter.
  • What is NOT Deep: linear models are not deep (by definition); neural nets with 1 hidden layer are not deep (no feature hierarchy); SVMs and kernel methods are not deep (2 layers: kernel + linear); classification trees are not deep (they operate on the original input space).
  • Deep Learning in H2O: 1970s multi-layer feed-forward Neural Network (supervised learning with stochastic gradient descent using back-propagation) + distributed processing for big data (H2O in-memory MapReduce paradigm on distributed data) + multi-threaded speedup (H2O Fork/Join worker threads update the model asynchronously) + smart algorithms for accuracy (weight initialization, adaptive learning, momentum, dropout, regularization) = top-notch prediction engine!
  • Example Neural Network: a fully connected, directed graph of neurons with information flowing from input to output. Inputs: age, income, employment; outputs: married, single. Input layer (3 neurons) -> hidden layer 1 (4 neurons) -> hidden layer 2 (3 neurons) -> output layer (2 neurons), i.e. 3x4, 4x3 and 3x2 connections.
  • Prediction: Forward Propagation. Neurons activate each other via weighted sums. Hidden layer 1: y_j = tanh(sum_i(x_i*u_ij) + b_j); hidden layer 2: z_k = tanh(sum_j(y_j*v_jk) + c_k); output layer: p_l = softmax(sum_k(z_k*w_kl) + d_l), where softmax(x_k) = exp(x_k) / sum_k(exp(x_k)) yields per-class probabilities with sum(p_l) = 1. Activation function: tanh; alternative: the rectifier x -> max(0,x). b_j, c_k, d_l are bias values (independent of the inputs). p_l is a non-linear function of x_i: with enough layers it can approximate ANY function. (A NumPy sketch of this forward pass, including the weight initialization from the next slide, appears after the transcript.)
  • Data preparation & Initialization. Automatic standardization of data x_i: mean = 0, stddev = 1; horizontalize (one-hot encode) categorical variables, e.g. {full-time, part-time, none, self-employed} -> {0,1,0} = part-time, {0,0,0} = self-employed. Automatic initialization of weights: poor man's initialization: random weights w_kl; default (better): uniform distribution in +/- sqrt(6/(#units + #units_previous_layer)). Neural Networks are sensitive to numerical noise and operate best in the linear regime (not saturated).
  • Training: Update Weights & Biases. For each training row, we make a prediction and compare with the actual label (supervised learning), e.g. predicted 0.8 for "married" (actual 1) and 0.2 for "single" (actual 0). Objective: minimize the prediction error (MSE or cross-entropy). Mean Square Error = (0.2^2 + 0.2^2)/2, which penalizes differences per class; cross-entropy = -log(0.8), which strongly penalizes non-1-ness. Stochastic Gradient Descent: update weights and biases via the gradient of the error (via back-propagation): w <- w - rate * dE/dw.
  • Backward Propagation. How to compute dE/dw_i for the update w_i <- w_i - rate * dE/dw_i? Naive: for every i, evaluate E twice at (w_1, ..., w_i, ..., w_N). Slow! Backprop: compute dE/dw_i via the chain rule, going backwards through net = sum_i(w_i*x_i) + b, y = activation(net), E = error(y): dE/dw_i = dE/dy * dy/dnet * dnet/dw_i = d(error(y))/dy * d(activation(net))/dnet * x_i. (A NumPy sketch of this chain-rule update appears after the transcript.)
  • H2O Deep Learning Architecture. Nodes/JVMs communicate synchronously via the H2O atomic in-memory K-V store; threads communicate asynchronously. Initial model: weights and biases w. Map: each node trains a copy of the weights and biases with (some* or all of) its local data, using asynchronous Fork/Join threads. Reduce: model averaging: average the weights and biases from all nodes into the updated model w*, e.g. w* = (w1+w2+w3+w4)/4; speedup is at least #nodes/log(#rows) (arXiv:1209.4129v3). Keep iterating over the data (epochs), score from time to time, and query & display the model via JSON, WWW. (*The user can specify the number of total rows per MapReduce iteration.) A toy NumPy sketch of this map/reduce averaging appears after the transcript.
  • Secret Sauce to Higher Accuracy. Adaptive learning rate (ADADELTA, Google): automatically set the learning rate for each neuron based on its training history. Grid search and checkpointing: run a grid search to scan many hyper-parameters, then continue training the most promising model(s). Regularization: L1 penalizes non-zero weights; L2 penalizes large weights; Dropout randomly ignores certain inputs.
  • Detail: Adaptive Learning Rate. Compute a moving average of dw_i^2 at time t for window length rho: E[dw_i^2]_t = rho * E[dw_i^2]_{t-1} + (1-rho) * dw_i^2. Compute the RMS of dw_i at time t with smoothing epsilon: RMS[dw_i]_t = sqrt(E[dw_i^2]_t + epsilon). Adaptive annealing / progress: gradient-dependent learning rate; the moving window prevents freezing (unlike ADAGRAD: no window). Adaptive acceleration / momentum: accumulate previous weight updates. (A small sketch of this moving-average bookkeeping appears after the transcript.)
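
The live-demo slide above trains H2O Deep Learning on MNIST. As a rough sketch of what that demo looks like from code, here is the equivalent call with the h2o Python package (the talk itself used R and the browser UI). The file names, layer sizes, and parameter values below are illustrative assumptions, not the settings used in the talk.

```python
import h2o
from h2o.estimators.deeplearning import H2ODeepLearningEstimator

h2o.init()  # start (or connect to) a local H2O cluster

# Hypothetical file names: any MNIST CSVs with 784 pixel columns + 1 label column.
train = h2o.import_file("mnist_train.csv")
test = h2o.import_file("mnist_test.csv")

y = train.columns[-1]            # label column (digit 0-9)
x = train.columns[:-1]           # 784 gray-scale pixel columns
train[y] = train[y].asfactor()   # treat the label as categorical -> classification
test[y] = test[y].asfactor()

model = H2ODeepLearningEstimator(
    hidden=[128, 64],            # two hidden layers; sizes are illustrative
    epochs=10,
    activation="RectifierWithDropout",
)
model.train(x=x, y=y, training_frame=train, validation_frame=test)
print(model.model_performance(test_data=test))
```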
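The forward-propagation and initialization slides condense into a few lines of NumPy. This is a minimal sketch of the formulas as written on the slides (tanh hidden layers, softmax output, uniform initialization in +/- sqrt(6/(#units + #units_previous_layer))) for the 3-4-3-2 example network; it is not H2O's implementation, and the input values are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_weights(n_in, n_out):
    # default (better) initialization: uniform in +/- sqrt(6/(n_in + n_out))
    limit = np.sqrt(6.0 / (n_in + n_out))
    return rng.uniform(-limit, limit, size=(n_in, n_out))

def softmax(a):
    e = np.exp(a - a.max())          # subtract max for numerical stability
    return e / e.sum()

# 3 inputs (age, income, employment) -> 4 -> 3 -> 2 outputs (married, single)
U, b = init_weights(3, 4), np.zeros(4)
V, c = init_weights(4, 3), np.zeros(3)
W, d = init_weights(3, 2), np.zeros(2)

x = np.array([0.3, -1.2, 0.5])       # one standardized input row (mean 0, stddev 1)
y = np.tanh(x @ U + b)               # hidden layer 1: y_j = tanh(sum_i(x_i*u_ij) + b_j)
z = np.tanh(y @ V + c)               # hidden layer 2: z_k = tanh(sum_j(y_j*v_jk) + c_k)
p = softmax(z @ W + d)               # output layer:   p_l = softmax(sum_k(z_k*w_kl) + d_l)
print(p, p.sum())                    # per-class probabilities; they sum to 1
```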
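The training and back-propagation slides describe one stochastic-gradient-descent update. Below is a minimal single-neuron sketch of that chain-rule computation, assuming a tanh activation and a squared-error loss; it illustrates the update rule on the slide, not H2O's multi-layer code, and the numbers are made up.

```python
import numpy as np

x = np.array([0.3, -1.2, 0.5])   # inputs x_i for one training row
w = np.array([0.1, -0.2, 0.05])  # weights w_i
b = 0.0                          # bias
target = 1.0                     # actual label
rate = 0.1                       # learning rate

net = w @ x + b                  # net = sum_i(w_i*x_i) + b
y = np.tanh(net)                 # y = activation(net)
E = 0.5 * (y - target) ** 2      # E = error(y), squared error for one neuron

# chain rule: dE/dw_i = dE/dy * dy/dnet * dnet/dw_i
dE_dy = y - target               # derivative of the squared error
dy_dnet = 1.0 - y ** 2           # derivative of tanh
dE_dw = dE_dy * dy_dnet * x      # dnet/dw_i = x_i

w = w - rate * dE_dw             # SGD update: w_i <- w_i - rate * dE/dw_i
print(E, dE_dw, w)
```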
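The architecture slide describes map/reduce model averaging. The toy NumPy loop below mimics that pattern for a simple linear model trained on four simulated "nodes": each node refines its own copy of the weights on local rows (map), then the copies are averaged into w* (reduce). It is a stand-in to show the averaging step, not H2O's distributed Fork/Join implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))                      # the full "distributed" data set
target = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0])   # labels from a known linear model

def train_local(w, X_local, y_local, rate=0.01, steps=200):
    """map: one node refines its copy of the weights on its local rows (SGD)."""
    w = w.copy()
    for _ in range(steps):
        i = rng.integers(len(y_local))
        grad = (X_local[i] @ w - y_local[i]) * X_local[i]
        w -= rate * grad
    return w

w = np.zeros(5)                                     # initial model: same weights on all nodes
node_data = list(zip(np.array_split(X, 4), np.array_split(target, 4)))

for epoch in range(10):                             # keep iterating over the data (epochs)
    copies = [train_local(w, Xl, yl) for Xl, yl in node_data]   # map: each "node" trains a copy
    w = np.mean(copies, axis=0)                     # reduce: w* = average of all copies
print(w)                                            # approaches [1.0, -2.0, 0.5, 0.0, 3.0]
```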
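Finally, the adaptive-learning-rate slide keeps a moving average of squared weight updates and its RMS. Here is a minimal sketch of that bookkeeping for a single weight, with illustrative rho and epsilon values (not H2O's defaults) and made-up update magnitudes.

```python
import numpy as np

rho, epsilon = 0.95, 1e-6        # window length and smoothing (illustrative values)

E_dw2 = 0.0                      # moving average E[dw^2] for one weight
for dw in [0.10, 0.05, -0.20, 0.02]:                 # a few successive updates dw of that weight
    E_dw2 = rho * E_dw2 + (1.0 - rho) * dw ** 2      # E[dw^2]_t = rho*E[dw^2]_{t-1} + (1-rho)*dw^2
    rms = np.sqrt(E_dw2 + epsilon)                   # RMS[dw]_t = sqrt(E[dw^2]_t + epsilon)
    print(rms)                                       # feeds the per-weight learning rate
```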