AI Compiler @ Alibaba · 2019-12-17 · How TVM is used @ Alibaba

Page 1:

AI Compiler @ Alibaba

Xiaoyong Liu

PAI (Platform of AI), Alibaba Cloud Intelligence

Presenting the work of many people

Page 2:

AICompiler Stack

[Stack diagram: the AI Compiler within PAI, feeding PAI products (PAI EAS, PAI TAO, PAI Tensorflow, PAI Blade), built on TVM and high-performance libraries, and targeting CPU, GPU, ASIC, and FPGA.]

Page 3:

How TVM is used @ Alibaba

• An End-to-End Deep Learning Compiler
  - Empower AI services
  - Generate high-performance operators (a minimal kernel sketch follows this list)
    • subgraph & kernel
    • heterogeneous computing
• An Optimizer & Compiler
  - Enable chips such as CPU, GPU, and DSP; potentially FPGA and AI chips
  - Deploy algorithms automatically
• All Scenarios
  - Cloud, Edge & IoT
  - Training & Inference
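As a rough illustration of the "generate high-performance operators" point, here is a minimal sketch using TVM's tensor-expression API (the modern tvm.te module; names differ slightly in 2019-era releases). It is a generic example, not code from the talk: one compute definition, which is then scheduled and built per target.

```python
import tvm
from tvm import te

# A toy element-wise kernel, the kind of operator TVM generates automatically.
n = te.var("n")
A = te.placeholder((n,), name="A")
B = te.placeholder((n,), name="B")
C = te.compute((n,), lambda i: A[i] + B[i], name="C")

# The schedule is where target-specific optimization happens; the same compute
# definition can be scheduled differently for CPU, GPU, or other backends.
s = te.create_schedule(C.op)
fadd = tvm.build(s, [A, B, C], target="llvm", name="vadd")
```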

Page 4:

TVM + AI Service: PAI-Blade

Page 5:

Things We Experienced

• The current approach takes too much engineering effort, which makes it hard to offer as a platform service
• TVM is good at
  - Generating high-performance, compute-intensive kernels
    • Automation is the key (see the tuning sketch after this list)
  - Being heterogeneous-hardware friendly, if an ISA is provided
    • Performance portability
  - A software architecture friendly to AutoTVM / scheduling…
  - Whole-graph optimization
• Challenges
  - Ease of deployment, including coverage, quality & compatibility
    • Correctness, performance & ease of enabling new devices
  - Systems don't interoperate
  - Maturity / standardization…
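To make the "automation is the key" point concrete, here is a hedged sketch of the standard AutoTVM tuning loop (real TVM APIs in their modern form; a generic example rather than Alibaba's internal setup, with a tiny conv2d module only to keep the snippet self-contained):

```python
import tvm
from tvm import relay, autotvm

# A tiny Relay conv2d so the sketch is self-contained.
data = relay.var("data", shape=(1, 3, 224, 224), dtype="float32")
weight = relay.var("weight", shape=(16, 3, 3, 3), dtype="float32")
conv = relay.nn.conv2d(data, weight, kernel_size=(3, 3), padding=(1, 1), channels=16)
mod = tvm.IRModule.from_expr(relay.Function([data, weight], conv))

# Extract tunable tasks and search each one's schedule space automatically.
tasks = autotvm.task.extract_from_program(mod, params={}, target="llvm")
measure_option = autotvm.measure_option(
    builder=autotvm.LocalBuilder(),
    runner=autotvm.LocalRunner(number=10, repeat=3),
)
for task in tasks:
    tuner = autotvm.tuner.XGBTuner(task)
    tuner.tune(
        n_trial=64,  # small budget for illustration; real tuning uses far more trials
        measure_option=measure_option,
        callbacks=[autotvm.callback.log_to_file("tuning.log")],
    )

# Apply the best configurations found during tuning when building the model.
with autotvm.apply_history_best("tuning.log"):
    with tvm.transform.PassContext(opt_level=3):
        lib = relay.build(mod, target="llvm")
```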

Page 6:

Contributed to TVM Community

• Automatic Tensor Core scheduling
  - NVIDIA Tensor Cores in V100/T4
• Schedule algorithm enablement, e.g. batch matmul
  - Know what/how
• Support for TFLite models (see the import sketch after this list)
  - Automatically
• C++ RPC server
  - Tune your program in an embedded environment without Python
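For reference, importing a TFLite model into Relay looks roughly like this (standard TVM and tflite-package APIs; the model path and input name are placeholders, not details from the talk):

```python
import tflite  # flatbuffer bindings from the `tflite` pip package
import tvm
from tvm import relay

# Load a TFLite flatbuffer (placeholder path).
tflite_model_buf = open("model.tflite", "rb").read()
tflite_model = tflite.Model.GetRootAsModel(tflite_model_buf, 0)

# Convert to Relay; shapes and dtypes are keyed by the model's input tensor name.
mod, params = relay.frontend.from_tflite(
    tflite_model,
    shape_dict={"input": (1, 224, 224, 3)},
    dtype_dict={"input": "float32"},
)

# Build for an ARM64 edge device.
target = "llvm -mtriple=aarch64-linux-gnu -mattr=+neon"
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target=target, params=params)
```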

[Figure: NMT]

Page 7:

Ongoing Efforts for the Community

• Automatic Tensor Core scheduling enhancement
  - vthread support
• Operators: new ops in Relay / TVM
  - HashTable, Embedding, …
• Standardize GraphRuntime exports into a single DLL
  - A way to unify runtime model exports
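As context for the single-DLL export item, this is roughly how a compiled model is exported and reloaded with current TVM APIs (a generic sketch with a trivial model; in 2019-era releases the graph JSON, weights, and library were separate artifacts, which is what this effort unifies):

```python
import numpy as np
import tvm
from tvm import relay
from tvm.contrib import graph_executor  # called graph_runtime in older releases

# A trivial Relay function so the sketch is self-contained.
x = relay.var("x", shape=(1, 8), dtype="float32")
mod = tvm.IRModule.from_expr(relay.Function([x], relay.nn.relu(x)))

with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target="llvm")

# Export everything needed at runtime (code, graph, weights) as one shared library.
lib.export_library("deploy.so")

# On the deployment side, load the single DLL and run it.
loaded = tvm.runtime.load_module("deploy.so")
runtime = graph_executor.GraphModule(loaded["default"](tvm.cpu()))
runtime.set_input("x", np.zeros((1, 8), dtype="float32"))
runtime.run()
out = runtime.get_output(0).numpy()
```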

Page 8:

Product-driven TVM enhancement

• Brings an online inference service
• Compiles for heterogeneous hardware in the cloud and at the edge (see the target-string sketch at the end of this slide)
  - NVIDIA server GPU
    • V100/T4 on FP16/INT8/INT4/INT1
  - Intel x86 server CPU
    • on INT8/FP32/BF16
  - ARM64 CPU
    • on INT8/FP32
  - ARM32 CPU

Any general solution is planned to be contributed back to the TVM community!

• Enhances infrastructure
  - HiFi 4 DSP
  - Hexagon DSP
  - PowerVR GPU
  - Intel GPU
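A hedged sketch of what TVM target strings for some of the devices above typically look like (illustrative examples only; the exact flags depend on the board and TVM version and are not from the talk):

```python
# Typical TVM target strings for the hardware classes listed above (illustrative only).
targets = {
    "nvidia_server_gpu": "cuda",                                          # V100 / T4
    "intel_x86_server":  "llvm -mcpu=skylake-avx512",
    "arm64_edge_cpu":    "llvm -mtriple=aarch64-linux-gnu -mattr=+neon",
    "arm32_iot_cpu":     "llvm -mtriple=armv7l-linux-gnueabihf -mattr=+neon",
}
# Each string is passed as `target=...` to relay.build / tvm.build.
```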

Page 9:

TVM Delivers Better Performance

• Compared against
  - The chip supplier's latest manually optimized high-performance library
  - An assembly-level-optimized edge machine learning framework
• Optimized to achieve solid performance on various products
  - Server NVIDIA GPU
  - Edge ARM64
  - IoT ARM32

Automatic Tensor Core scheduling + tensorization + Tensor Cores

Page 10:

Performance on V100 (FP16)

M, N, K          cuBLAS Tensor Core   TVM Tensor Core   Speedup
512, 16, 512     7.7470 us            5.2570 us         1.47x
512, 32, 512     8.0140 us            6.0220 us         1.33x
512, 64, 512     8.7530 us            6.2390 us         1.40x
512, 128, 512    9.0290 us            7.1610 us         1.26x
256, 256, 256    6.9380 us            4.5930 us         1.51x
1024, 32, 512    8.3320 us            6.3770 us         1.30x
2048, 32, 512    9.0640 us            7.5070 us         1.21x
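For readers who want to reproduce this kind of measurement, here is a hedged sketch of building and timing a cuBLAS-backed matmul baseline in TVM (generic APIs in their modern form; the talk does not specify its exact measurement methodology):

```python
import numpy as np
import tvm
from tvm import te
from tvm.contrib import cublas

# One (M, N, K) shape from the table above.
M, N, K = 512, 16, 512
A = te.placeholder((M, K), name="A", dtype="float16")
B = te.placeholder((K, N), name="B", dtype="float16")
C = cublas.matmul(A, B, dtype="float16")  # offload to cuBLAS as the baseline

s = te.create_schedule(C.op)
func = tvm.build(s, [A, B, C], target="cuda -libs=cublas")

dev = tvm.cuda(0)  # tvm.gpu(0) in older releases
a = tvm.nd.array(np.random.uniform(size=(M, K)).astype("float16"), dev)
b = tvm.nd.array(np.random.uniform(size=(K, N)).astype("float16"), dev)
c = tvm.nd.array(np.zeros((M, N), dtype="float16"), dev)

# time_evaluator averages many runs to get stable microsecond-level numbers.
timer = func.time_evaluator(func.entry_name, dev, number=100)
print("cuBLAS matmul: %.4f us" % (timer(a, b, c).mean * 1e6))
```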

Page 11:

Performance on T4

Page 12:

AliOS enhances TVM on vehicles

• To accelerate NLU and AR navigation models
• ARM64 CPU performance on INT8/FP32
  - NHWC layout, im2col + pack, no tensorize, co-optimized with LLVM (see the im2col sketch after this list)
  - Planning to contribute back to the community
• Hexagon DSP
  - vrmpy tensorize / LLVM codegen
  - Can run an end-to-end MobileNet V2 INT8 model
• Intel GPU
  - Schedule algorithm
  - Boosts LaneNet model performance by 1.6x
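To illustrate the "im2col + pack" idea mentioned above, here is a NumPy sketch of the general technique (my own illustration, not the AliOS kernel; function and variable names are hypothetical):

```python
import numpy as np

def im2col_nhwc(x, kh, kw, stride=1):
    """Unfold an NHWC tensor into a (N*OH*OW, KH*KW*C) matrix so that a
    convolution becomes one big GEMM, which is then packed/tiled for the CPU."""
    n, h, w, c = x.shape
    oh = (h - kh) // stride + 1
    ow = (w - kw) // stride + 1
    cols = np.empty((n, oh, ow, kh, kw, c), dtype=x.dtype)
    for i in range(kh):
        for j in range(kw):
            cols[:, :, :, i, j, :] = x[:, i:i + stride * oh:stride,
                                          j:j + stride * ow:stride, :]
    return cols.reshape(n * oh * ow, kh * kw * c)

# A conv2d in NHWC then reduces to a single matmul:
#   im2col_nhwc(x, kh, kw) @ w.reshape(kh * kw * c, out_channels)
```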

Page 13:

Performance on ARM64 INT8

[Bar chart: normalized performance on a Raspberry Pi 3B+ (AARCH64) for MobileNetV1, MobileNetV2, and LaneNet, comparing TFLite (1 core / 4 cores), QNNPACK (1 core / 4 cores), and TVM (1 core / 4 cores); the baseline is 1.00 and the largest plotted speedup is 8.87x.]

Page 14:

Performance on ARM64 FP32

[Bar chart: TVM vs. MNN performance ratios on AARCH64 (Cortex-A53 and Cortex-A72) for MobileNet V1 and MobileNet V2; the plotted TVM/MNN ratios range from 1.03x to 1.17x.]

Page 15:

AI Labs Compiles TMallGenie Models

• ARM32 CPU
  - Overflow-aware quantization (INT16 = INT8 * INT8); see the overflow sketch after this list
  - GEMM tensorize
• HiFi 4 DSP
  - GEMM tensorize, 10x speedup
• PowerVR GPU
  - Schedule algorithm
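A small NumPy illustration of why the quantization scheme has to be overflow-aware when INT8 x INT8 products are accumulated in INT16 (my own example, not the AI Labs kernel):

```python
import numpy as np

# Each INT8 x INT8 product fits in INT16 (at most 127 * 127 = 16129), but summing
# even a few such products can exceed the INT16 range, so the quantizer must bound
# how many products are accumulated (or rescale between accumulations).
a = np.full(64, 127, dtype=np.int8)
b = np.full(64, 127, dtype=np.int8)
prod = a.astype(np.int16) * b.astype(np.int16)

acc16 = np.cumsum(prod, dtype=np.int16)  # silently wraps once the sum exceeds 32767
acc32 = np.cumsum(prod, dtype=np.int32)  # reference accumulation
print((acc16.astype(np.int32) == acc32).all())  # False: the INT16 accumulator overflowed
```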

Page 16:

Performance

[Bar chart: MobileNetV2_1.0_224 on an MTK8167S (ARM32 Cortex-A35 @ 1.5 GHz), comparing TF Lite 8-bit, NCNN 8-bit, QNNPACK 8-bit, MNN 8-bit, and ACE overflow-aware (assembly); the plotted values are 322, 336, 230, 214, and 140.]

Page 17:

A DL Compiler in T-HEAD SoC

• TVM has been integrated into the WuJian SoC toolchain
• Supports a Caffe frontend
  - Tests pass for AlexNet / ResNet-50 / MobileNet V1 / MobileNet V2 / …

[Toolchain diagram: TensorFlow / Caffe frontends → TVM → T-HEAD NN compiler and LLVM → WuJian SoC with a customized AI accelerator.]

Page 18:

TVM Roadmap @ Alibaba

• Keep contributing general efforts back to the community
• Auto-scheduling (with the Berkeley team)
  - Auto* is the key to building machine-learning-powered systems

• Interoperate with top frameworks

• Automatic heterogeneous hardware placement at the system level
• Infrastructure maturity
  - Completeness & seamless deployment, e.g. quantization and model compatibility
• Workload characterization
  - Improve the key workloads within the community
• AI service & operators
  - More chips, more models

Page 19:

Alibaba & OpenSource

Embrace Open Source · Contribute to Open Source · Win-Win with Open Source

Page 20:

Takeaways

• A golden age of deep learning compilers
• Industry-grade deep learning compilation solutions are still evolving
• We are working to contribute to TVM
  - Development & research
• You are welcome to join us in contributing to TVM
  - Xiaoyong Liu ([email protected])

Page 21:


Thank you!