Throughput Prediction of Asynchronous SGD in TensorFlow

Transcript of the ICPE 2020 presentation (18 pages).

Page 1: Throughput Prediction of Asynchronous SGD in TensorFlow

Leana Golubchik · Marco Paolieri · Wumo Yan · Zhuojin Li
QED Research Group | qed.usc.edu

ICPE, April 23, 2020 · icpe2020.spec.org

Page 2: Training of Deep Neural Networks

● Machine learning models with millions of adjustable parameters (weights)
● Training with millions of labeled examples
● Scaling up with GPUs

Image Classification: Convolutional NN [Krizhevsky et al., 2012]
Speech Recognition: Recurrent NN + HMM [Hinton et al., 2012]
Machine Translation: RNN Encoder-Decoder [Sutskever et al., 2014]

[Figure: CNN architecture, from adeshpande3.github.io]

Page 3: Asynchronous SGD with Parameter Server

Worker Nodes:
● Receive weights (downlink)
● Process batch of examples (compute)
● Send update (uplink)

Parameter Server: apply updates to weights (update)

[Figure: Training throughput (examples/s) of Inception-v3 on AWS p3.2xlarge instances (NVIDIA V100 GPU)]
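As an illustration (not part of the original slides), here is a minimal Python sketch of these roles on a toy linear model; the class and helper names are illustrative, not TensorFlow APIs. The parameter server applies each update as soon as it arrives, and workers never wait for each other.

```python
import numpy as np

# Minimal sketch of asynchronous SGD with a parameter server (toy
# linear model; illustrative names, not TensorFlow APIs).
class ParameterServer:
    def __init__(self, dim, lr=0.01):
        self.w = np.zeros(dim)
        self.lr = lr

    def fetch_weights(self):        # downlink: worker pulls current weights
        return self.w.copy()

    def push_update(self, grad):    # update: applied immediately on arrival
        self.w -= self.lr * grad

def worker_step(ps, x, y):
    w = ps.fetch_weights()          # downlink
    grad = 2 * x * (x @ w - y)      # compute: gradient of the squared error
    ps.push_update(grad)            # uplink

ps = ParameterServer(dim=3)
rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0, 0.5])
for _ in range(200):                # in a real deployment, many workers
    x = rng.normal(size=3)          # execute these steps concurrently
    worker_step(ps, x, x @ true_w)
```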

Page 4: Overlap of Computation and Communication

[Lin et al.] A Model-Based Approach to Streamlining Distributed Training for Asynchronous SGD. MASCOTS'18
[Zheng et al.] Cynthia: Cost-Efficient Cloud Resource Provisioning for Predictable Distributed DNN Training. ICPP'19

● Weights are split into multiple tensors (arrays of weights)
● Dependencies between communication and computation operations
● Computation during communication! (toy timing sketch below)
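To see why this overlap matters, consider a toy timing calculation (all per-tensor and per-layer times below are made-up values): if a layer can start computing as soon as its own tensor has arrived, part of the communication time is hidden behind computation.

```python
# Toy illustration (assumed times, in seconds) of hiding communication
# behind computation. Layer i can start once its tensor has been
# received and layer i-1 has finished.
downlink = [0.020, 0.015, 0.005]    # per-tensor transfer times
compute  = [0.010, 0.030, 0.010]    # per-layer compute times

no_overlap = sum(downlink) + sum(compute)   # transfer all, then compute

t_comm, t_done = 0.0, 0.0
for d, c in zip(downlink, compute):
    t_comm += d                              # tensor fully received
    t_done = max(t_done, t_comm) + c         # layer starts when ready
print(no_overlap, t_done)                    # 0.090 vs 0.075 here
```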

Page 5: Simulation Approach to Throughput Prediction

Replay single-worker traces with multiple workers, accounting for reduced bandwidth

Real traces: hundreds of operations

Page 6: Profiling Challenges in TensorFlow

Problems of recorded durations in profiling traces:
● Communication overhead included at the end
● Tensor transmission can be stopped and resumed

[Figure: recorded duration of a communication operation = transmission + communication overhead]
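For reference, TensorFlow timelines are exported in the Chrome trace event format; a minimal sketch of summing per-operation durations from such a file follows (field names are those of the Chrome trace format). Because of the two problems above, durations of communication operations still need correction afterwards.

```python
import json

# Sketch: aggregate per-operation durations from a profiling trace in
# Chrome trace event format (as exported by TensorFlow's timeline).
# Caveats from the slide: for communication ops, "dur" also includes
# the trailing overhead, and one tensor transmission may be split
# across several events if it was stopped and resumed.
def op_durations(trace_path):
    with open(trace_path) as f:
        events = json.load(f)["traceEvents"]
    totals = {}
    for ev in events:
        if ev.get("ph") == "X":      # "complete" events carry a duration
            name = ev["name"]
            totals[name] = totals.get(name, 0.0) + ev.get("dur", 0.0)
    return totals                    # durations in microseconds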

Page 7: Estimation of Communication Overhead

Linear Model: transmission overhead = 𝜶 × size + 𝜷

Parameters 𝜶, 𝜷 estimated once for each platform (private cluster, cloud CPU cluster, cloud GPU cluster).

Overhead due to tensor deserialization and copies between memory buffers.
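Estimating 𝜶 and 𝜷 reduces to an ordinary least-squares fit over measured (size, overhead) pairs. A sketch follows; the sample measurements are made up, while on a real platform they would come from profiling transfers of known sizes.

```python
import numpy as np

# Sketch: fit the linear model  overhead = alpha * size + beta
# by least squares over platform measurements (values below are
# illustrative, not real profiling data).
sizes = np.array([1e5, 1e6, 5e6, 1e7, 5e7])              # bytes
overheads = np.array([0.4, 1.1, 4.2, 8.5, 41.0]) * 1e-3  # seconds

A = np.column_stack([sizes, np.ones_like(sizes)])
(alpha, beta), *_ = np.linalg.lstsq(A, overheads, rcond=None)

def predicted_overhead(size):
    return alpha * size + beta       # overhead for a new tensor size
```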

Page 8: Multiplexing Model of Downlink and Uplink

Each stream is transmitted up to the size of the control window.

Next, pending streams are transmitted until completion.
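A sketch of that two-phase behavior is below; the window and stream sizes are assumed values, and real gRPC/HTTP-2 flow control is more dynamic than this simplification.

```python
# Sketch of the two-phase multiplexing rule described above: each
# stream first transmits up to the control window; pending streams
# then transmit their remainder until completion.
def multiplex_schedule(stream_sizes, window=65_535):
    chunks = []
    for sid, size in enumerate(stream_sizes):       # phase 1: windowed
        chunks.append((sid, min(size, window)))
    for sid, size in enumerate(stream_sizes):       # phase 2: drain rest
        if size > window:
            chunks.append((sid, size - window))
    return chunks

print(multiplex_schedule([10_000, 200_000, 80_000]))
# [(0, 10000), (1, 65535), (2, 65535), (1, 134465), (2, 14465)]
```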

End-time Prediction Error

DNN Model      Metric            Private Cluster   AWS Cloud
AlexNet        Mean              1.82%             2.89%
               95th Percentile   3.35%             9.71%
GoogLeNet      Mean              1.69%             3.43%
               95th Percentile   3.74%             9.14%
ResNet-50      Mean              1.26%             4.36%
               95th Percentile   2.32%             9.70%
Inception-V3   Mean              1.02%             9.23%
               95th Percentile   3.92%             20.98%

Page 9: Networking Optimizations

[Figure panels: Default · Flow-control Disabled · Flow-control Disabled, TIC ordering]

[Hashemi et al.] TicTac: Accelerating distributed deep learning with communication scheduling. SysML’19

● Multiplexing of multiple streams can increase the duration of a training step (if required tensors are delayed)
● Flow control can be disabled in gRPC and transmissions ordered (see the ordering sketch below)
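A sketch of transmission ordering in the spirit of TIC [Hashemi et al.]: with flow control disabled, tensors can be sent in the order in which the computation graph needs them, so multiplexing cannot starve a critical tensor. The need times and tensor names below are hypothetical inputs derived from the dependency graph.

```python
import heapq

# Sketch: order tensor transmissions by the time at which the worker's
# computation first needs each tensor (earliest-needed first).
def send_order(tensors):
    """tensors: iterable of (need_time, name, size_bytes)."""
    heap = list(tensors)
    heapq.heapify(heap)
    order = []
    while heap:
        _, name, _ = heapq.heappop(heap)
        order.append(name)
    return order

print(send_order([(0.030, "conv2/weights", 4_000_000),
                  (0.001, "conv1/weights", 300_000),
                  (0.050, "fc6/weights", 80_000_000)]))
# ['conv1/weights', 'conv2/weights', 'fc6/weights']
```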

Page 10: Simulation with Multiple Workers

Given a system configuration, including:
● Network bandwidth B
● Number of worker nodes W
● Number of parameter servers M
● Parameters 𝜶, 𝜷 of the communication overhead model

We simulate a sequence of SGD steps with W workers by sampling steps from the profiling trace.

Each worker replays the sampled step (a graph of communication and computation operations), but:
● Tensor transmissions are scheduled using our multiplexing model
● When multiple workers are in the downlink or uplink phase, bandwidth is shared equally (sketched below)
● Parsing overhead is added after the reception of a tensor
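The equal-sharing rule can be illustrated with a small fluid-model sketch: with n active workers each gets B/n, and shares are recomputed whenever a transfer completes. This is a simplification of the authors' discrete-event simulator, and the transfer sizes are assumed values.

```python
# Sketch of equal bandwidth sharing among concurrent transfers.
def completion_times(sizes_bytes, B):
    """sizes_bytes: remaining bytes per worker; B: total bandwidth (B/s)."""
    remaining = dict(enumerate(sizes_bytes))
    t, done = 0.0, {}
    while remaining:
        share = B / len(remaining)               # equal split of bandwidth
        wid = min(remaining, key=remaining.get)  # next transfer to finish
        dt = remaining[wid] / share
        t += dt
        for w in remaining:                      # drain all active transfers
            remaining[w] -= share * dt
        done[wid] = t
        del remaining[wid]
    return done

print(completion_times([1e6, 4e6, 4e6], B=1.25e8))  # 1 Gbps = 1.25e8 B/s
```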

Page 11: Experimental Setup

Validation Platforms:
● Private cluster of nodes with 4-core CPU, 16 GB RAM, 1 Gbps Ethernet
● AWS c4.8xlarge instances: 36-core CPU, 60 GB RAM, 10 Gbps Ethernet
● AWS p3.2xlarge instances: 8-core CPU, NVIDIA V100 GPU, 10 Gbps Ethernet

Platform Profiling: estimate the parameters 𝜶, 𝜷 of the communication overhead model

Job Profiling: for each job, run 100 steps with a single worker node to obtain a profiling trace

Prediction: run the trace simulator with 2, …, W workers for 1000 steps to evaluate the mean throughput along the trace

Validation: run clusters with 2, …, W workers, skip 50 steps, compute throughput on the next 50 (see the sketch below)
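The validation measurement reduces to a windowed average; a minimal sketch, where step durations and batch size are placeholders:

```python
# Sketch of the validation metric: skip the first 50 steps as warm-up,
# then report examples/s over the next 50. step_durations is a
# hypothetical list of per-step times in seconds.
def measured_throughput(step_durations, batch_size, skip=50, count=50):
    window = step_durations[skip:skip + count]
    return len(window) * batch_size / sum(window)
```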

Page 12: Private CPU Cluster

[Figure panels: Batch Sizes · DNN Models]

Page 13: Private CPU Cluster: Networking Optimizations

[Figure panels: Flow-control disabled · Flow-control disabled, TIC ordering · Flow-control disabled, various orderings (AlexNet, batch size = 4)]

Page 14: Cloud Cluster: CPU-only

Page 15: Cloud Cluster: GPU-enabled

Page 16: Cloud Cluster: GPU-enabled, two PS

[Figure: VGG-11 weights partition across the two parameter servers]

Limited improvement from two parameter servers in VGG-11 (h) due to the uneven split of DNN weights (a balanced-assignment sketch follows)
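A more even split can be obtained by assigning tensors greedily to the least-loaded server, similar in spirit to TensorFlow's greedy load-balancing placement strategy; the tensor names and sizes below are illustrative, not the actual VGG-11 partition from the slide.

```python
import heapq

# Sketch: balance DNN tensors across parameter servers by assigning
# each tensor (largest first) to the currently least-loaded server.
def balanced_partition(tensor_sizes, num_ps):
    loads = [(0, ps) for ps in range(num_ps)]   # (bytes assigned, server id)
    heapq.heapify(loads)
    assignment = {}
    for name, size in sorted(tensor_sizes.items(), key=lambda kv: -kv[1]):
        load, ps = heapq.heappop(loads)
        assignment[name] = ps
        heapq.heappush(loads, (load + size, ps))
    return assignment

print(balanced_partition({"fc6": 411_041_792, "fc7": 67_108_864,
                          "conv5": 9_437_184, "conv4": 7_077_888}, num_ps=2))
# {'fc6': 0, 'fc7': 1, 'conv5': 1, 'conv4': 1}
```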

Page 17: Cost and Time Savings

Prediction is faster and less expensive (simulation of the computation, on CPU nodes instead of p3.2xlarge)

Page 18: Conclusions

● Approach to the prediction of training throughput of asynchronous SGD in TensorFlow
  ○ Tracing information from minimal single-worker profiling
  ○ Discrete-event simulation to generate synthetic traces with multiple worker nodes
● Faster and less expensive than direct measurements with multiple workers
● Good accuracy across DNN models, batch sizes, platforms, and networking optimizations
● Future work: more fine-grained analytical models

[Figure: Inception-V3, batch=64, p3.2xlarge]