Deep Learning on GPUs

March 2016

AGENDA
§ What is Deep Learning
§ GPUs and DL
§ DL in practice
§ Scaling up DL

What is Deep Learning

DEEP LEARNING EVERYWHERE
§ Internet & Cloud: Image Classification, Speech Recognition, Language Translation, Language Processing, Sentiment Analysis, Recommendation
§ Medicine & Biology: Cancer Cell Detection, Diabetic Grading, Drug Discovery
§ Media & Entertainment: Video Captioning, Video Search, Real Time Translation
§ Security & Defense: Face Detection, Video Surveillance, Satellite Imagery
§ Autonomous Machines: Pedestrian Detection, Lane Tracking, Recognize Traffic Sign

Traditional machine perception
Hand-crafted feature extractors: raw data → feature extraction → classifier/detector → result
§ SVM, shallow neural net, …
§ Speaker ID, speech transcription, …: HMM, shallow neural net, …
§ Topic classification, machine translation, sentiment analysis, …: clustering, HMM, LDA, LSA

Deep learning approach
Train: labeled examples (dog, cat, raccoon, honey badger) are fed through the MODEL and the errors are propagated back to correct it.
Deploy: the trained MODEL classifies new data (e.g. “dog”).

Artificial neural network
A collection of simple, trainable mathematical units that collectively learn complex functions.
[Diagram: input layer, hidden layers, output layer]
Given sufficient training data, an artificial neural network can approximate very complex functions mapping raw data to output decisions.

Artificial neurons
[Diagram: biological neuron vs. artificial neuron with inputs x1, x2, x3 and weights w1, w2, w3 – from Stanford cs231n lecture notes]
y = F(w1·x1 + w2·x2 + w3·x3), with F(x) = max(0, x)
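As a concrete reading of the formula above, here is a minimal sketch of one artificial neuron in NumPy; the weight and input values are illustrative, not from the deck.

```python
import numpy as np

def relu(z):
    # F(z) = max(0, z): the rectifier nonlinearity named on the slide
    return np.maximum(0.0, z)

def neuron(w, x):
    # y = F(w1*x1 + w2*x2 + w3*x3): weighted sum of inputs, then nonlinearity
    return relu(np.dot(w, x))

# Illustrative values (not from the deck)
w = np.array([0.5, -1.2, 0.3])
x = np.array([1.0, 2.0, 3.0])
print(neuron(w, x))  # relu(0.5 - 2.4 + 0.9) = relu(-1.0) = 0.0
```

A network stacks many such units into layers, which is why the layer-level computation becomes the dense matrix products discussed on the later slides.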
Deep neural network (DNN)
Raw data → low-level features → mid-level features → high-level features
Application components:
§ Task objective, e.g. identify face
§ Training data: 10-100M images
§ Network architecture: ~10 layers, ~1B parameters
§ Learning algorithm: ~30 Exaflops, ~30 GPU-days

Deep learning benefits
§ Robust: no need to design the features ahead of time – features are automatically learned to be optimal for the task at hand; robustness to natural variations in the data is automatically learned
§ Generalizable: the same neural net approach can be used for many different applications and data types
§ Scalable: performance improves with more data; the method is massively parallelizable

Baidu Deep Speech 2
End-to-end deep learning for English and Mandarin speech recognition
§ Transition from English to Mandarin made simpler by end-to-end DL: no feature engineering or Mandarin-specific components required
§ More accurate than humans: error rate 3.7% vs. 4% for human tests
http://svail.github.io/mandarin/
http://arxiv.org/abs/1512.02595

AlphaGo
First computer program to beat a human Go professional
§ Training DNNs: 3 weeks, 340 million training steps on 50 GPUs
§ Play: asynchronous multi-threaded search; simulations on CPUs, policy and value DNNs in parallel on GPUs
§ Single machine: 40 search threads, 48 CPUs, and 8 GPUs
§ Distributed version: 40 search threads, 1202 CPUs and 176 GPUs
§ Outcome: beat both the European and World Go champions in best-of-5 matches
http://www.nature.com/nature/journal/v529/n7587/full/nature16961.html
http://deepmind.com/alphago.html

Deep Learning for Autonomous Vehicles

Deep Learning Synthesis
Texture synthesis and transfer using CNNs. Timo Aila et al., NVIDIA Research

THE AI RACE IS ON
[Chart: ImageNet accuracy rate, 2009-2016 – traditional CV vs. deep learning]
Milestones: Baidu Deep Speech 2 beats humans; IBM Watson achieves breakthrough in natural language processing; Facebook launches Big Sur; Google launches TensorFlow; Toyota invests $1B in AI labs; Microsoft and U. Science Tech, China beat humans on IQ

The Big Bang in Machine Learning
DNN + BIG DATA + GPU
“Google’s AI engine also reflects how the world of computer hardware is changing. (It) depends on machines equipped with GPUs… And it depends on these chips more than the larger tech universe realizes.”

GPUs and DL
Use more processors to go faster

Deep learning development cycle
[Diagram]

Three kinds of networks
§ DNN – all fully connected layers
§ CNN – some convolutional layers
§ RNN – recurrent neural network, LSTM

DNN
§ Key operation is dense M × V (matrix-vector multiply)
§ Backpropagation uses dense matrix-matrix multiply, starting from the softmax scores

DNN batching
§ Batching for training and latency-insensitive inference turns the key operation into M × M
§ The batched M × M operation gives reuse of weights; without batching, each element of the weight matrix would be used only once
§ Want 10-50 arithmetic operations per memory fetch on modern compute architectures
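The M × V vs. batched M × M point can be made concrete in a few lines of NumPy. This is a minimal sketch with illustrative layer sizes (not from the deck); on a GPU the same reuse argument applies to weights held in on-chip memory.

```python
import numpy as np

n_in, n_out, batch = 1024, 4096, 32   # illustrative sizes, not from the deck
W = np.random.randn(n_out, n_in)      # one fully connected layer's weights

# Without batching: a dense M x V product per sample. Every weight is
# fetched for a single multiply-add, so the operation is memory-bound.
x = np.random.randn(n_in)
y = W @ x

# With batching: stack the samples into a matrix and do one M x M product.
# Each weight fetched can be reused across all `batch` samples, pushing
# arithmetic operations per memory fetch toward the 10-50 target above.
X = np.random.randn(n_in, batch)
Y = W @ X
assert Y.shape == (n_out, batch)
assert np.allclose(Y[:, 0], W @ X[:, 0])  # column 0 matches the unbatched result
```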
CNN
§ Requires convolution and M × V
§ Filters conserved through the plane: the same filter weights are reused at every spatial position
§ Multiply-limited – even without batching

Other operations
§ Needed to finish building a DNN; these are not limiting factors with appropriate GPU use
§ Complex networks have hundreds of millions of weights

Lots of parallelism available in a DNN

13x faster training with Caffe
§ A GPU server with 4x Tesla M40, the world’s fastest accelerator for deep learning training, reduces training time from 13 days (dual-CPU server) to just 1 day
§ Tesla M40: 3072 CUDA cores, 7 TFLOPS peak single precision, 12 GB GDDR5 memory, 288 GB/s bandwidth, 250 W (28 Gflop/W)
Note: Caffe benchmark with AlexNet; CPU server uses 2x E5-2680 v3 (12-core, 2.5 GHz), 128 GB system memory, Ubuntu 14.04

Comparing CPU and GPU – server class
Xeon E5-2698 and Tesla M40; see the NVIDIA whitepaper “GPU-based deep learning inference: a performance and power analysis.”

DL in practice

The Engine of Modern AI
The NVIDIA GPU platform underpins frameworks and projects across the ecosystem – BIG SUR, TENSORFLOW, WATSON, CNTK, TORCH, CAFFE, THEANO, MATCONVNET, MOCHA.JL, PURINE, CHAINER, DL4J, KERAS, OPENDEEP, MINERVA, MXNET, SCHULTS, VITRUVIAN – in education, startups, and laboratories (U. Washington, CMU, Stanford, TuSimple, NYU, Microsoft, U. Alberta, MIT, NYU Shanghai)

CUDA for Deep Learning Development
§ Deep Learning SDK: DIGITS, cuDNN, cuSPARSE, cuBLAS, NCCL
§ Hardware and systems: TITAN X, DEVBOX, GPU CLOUD

cuDNN
§ GPU-accelerated deep learning subroutines
§ High-performance neural network training
§ Accelerates major deep learning frameworks: Caffe, Theano, Torch, TensorFlow
§ Tiled FFT up to 2x faster than FFT
§ Up to 3.5x faster AlexNet training in Caffe than baseline GPU
[Charts: speedup of deep learning primitives in Caffe; millions of images trained per day, cuDNN 1 through cuDNN 4]
developer.nvidia.com/cudnn

Caffe performance
CUDA boosts deep learning 5x in 2 years
[Chart: AlexNet training performance – K40 (11/2013), K40+cuDNN1 (9/2014), M40+cuDNN3 (7/2015), M40+cuDNN4 (12/2015)]
AlexNet training throughput based on 20 iterations; CPU: 1x E5-2680 v3 (12-core, 2.5 GHz), 128 GB system memory, Ubuntu 14.04

NVIDIA DIGITS
Interactive deep learning GPU training system: process data, configure DNN, monitor progress, visualize layers, test image
developer.nvidia.com/digits

ONE ARCHITECTURE – END-TO-END AI
§ Jetson for embedded
§ DRIVE PX for auto
§ Titan X for PC gaming
§ Tesla for cloud

Scaling DL

Scaling neural networks: data parallelism
[Diagram: the same model W replicated on machine 1 and machine 2, each training on different images, with a sync step]
§ Need to sync model across machines
§ Largest models do not fit on one GPU
§ Requires P-fold larger batch size
§ Works across many nodes – parameter server approach – linear speedup
(A toy sketch of the gradient-averaging step appears after the final slide.)
Adam Coates, Brody Huval, Tao Wang, David J. Wu, Andrew Ng and Bryan Catanzaro

Multiple GPUs
Near-linear scaling – data parallel.
Ren Wu et al., Baidu, “Deep Image: Scaling up Image Recognition.” arXiv 2015

Scaling neural networks: model parallelism
[Diagram: one model W split across machine 1 and machine 2, both working on the same image]
§ Allows for larger models than fit on one GPU
§ Requires much more frequent communication between GPUs
§ Most commonly used within a node – GPU P2P
§ Effective for the fully connected layers
(See the second sketch after the final slide.)
Adam Coates, Brody Huval, Tao Wang, David J. Wu, Andrew Ng and Bryan Catanzaro

Scaling neural networks: hyper-parameter parallelism
§ Try many alternative neural networks in parallel – on different CPUs, GPUs, or machines
§ Probably the most obvious and effective way to parallelize

Deep Learning Everywhere
NVIDIA DRIVE PX · NVIDIA Tesla · NVIDIA Jetson · NVIDIA Titan X
Contact: jbarker@nvidia.com
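To make the data-parallelism slide concrete, here is a toy, single-process sketch in NumPy. A linear least-squares model stands in for the network, a Python list stands in for the P machines, and `shard_gradient` / `data_parallel_step` are hypothetical names; a real system would compute gradients by backpropagation and average them with an all-reduce or a parameter server.

```python
import numpy as np

def shard_gradient(w, x_shard, y_shard):
    # Toy stand-in for backpropagation: gradient of 0.5*||X.w - y||^2 / n
    return x_shard.T @ (x_shard @ w - y_shard) / len(y_shard)

def data_parallel_step(w, shards, lr=0.1):
    # Each "machine" holds a replica of w and its own slice of the data.
    # All replicas compute gradients independently; the sync step on the
    # slide corresponds to averaging them before the shared update.
    grads = [shard_gradient(w, x, y) for x, y in shards]
    return w - lr * np.mean(grads, axis=0)

# Illustrative data split across 4 "machines"
rng = np.random.default_rng(0)
X, y = rng.normal(size=(512, 8)), rng.normal(size=512)
shards = list(zip(np.split(X, 4), np.split(y, 4)))
w = np.zeros(8)
for _ in range(100):
    w = data_parallel_step(w, shards)
```

Because the shards are equal-sized, the averaged gradient equals the gradient over the combined batch, which is exactly the P-fold larger effective batch size noted on the slide.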
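A matching toy sketch of the model-parallelism slide, again with NumPy arrays standing in for two GPUs: the weight matrix of one fully connected layer is split across devices, each computes its slice of the output, and a gather step plays the role of the GPU P2P communication.

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(4096, 1024))   # one fully connected layer's weights
x = rng.normal(size=1024)

# Each "GPU" stores only the weights for half of the output neurons.
W_dev0, W_dev1 = np.split(W, 2, axis=0)

# Both halves are computed independently (concurrently on real hardware) ...
y_dev0 = W_dev0 @ x
y_dev1 = W_dev1 @ x

# ... and the communication step gathers the partial outputs.
y = np.concatenate([y_dev0, y_dev1])
assert np.allclose(y, W @ x)
```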