Parallel Processing with CUDA

Parallel Programming with CUDA, by Matthew Guidry and Charles McClendon

Description

This presentation gives introductory information and key concepts about CUDA.

Transcript of Parallel Processing with CUDA

Page 1: Parallel Processing with CUDA

Parallel Programming with CUDA

Matthew Guidry and Charles McClendon

Page 2: Parallel Processing with CUDA

Introduction to CUDA

• CUDA is a platform for performing massively parallel computations on graphics accelerators

• CUDA was developed by NVIDIA

• It was first available with their G8X line of graphics cards

• Approximately 1 million CUDA capable GPUs are shipped every week

• CUDA presents a unique opportunity to develop widely-deployed parallel applications

Page 3: Parallel Processing with CUDA

CUDA

• Because of the Power Wall, the Latency Wall, etc. (the free lunch is over), we must find a way to keep our processor-intensive programs from slowing to a crawl

• With CUDA developments it is possible to do things like simulating networks of brain neurons

• CUDA brings the possibility of ubiquitous supercomputing to the everyday computer…

Page 4: Parallel Processing with CUDA

CUDA

• CUDA is supported on all of NVIDIA’s G8X and above graphics cards

• The current CUDA GPU Architecture is branded Tesla

• 8-series GPUs offer 50-200 GFLOPS

Page 5: Parallel Processing with CUDA

CUDA Compilation

• As a programming model, CUDA is a set of extensions to ANSI C

• CPU code is compiled by the host C compiler and the GPU code (kernel) is compiled by the CUDA compiler. Separate binaries are produced
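A minimal sketch of how this split looks in practice, assuming the standard nvcc driver and an illustrative file name:

// example.cu (illustrative name): host and device code live in one source file
#include <stdio.h>

__global__ void fill(int *out) {   // device code (kernel): compiled by the CUDA compiler
    out[threadIdx.x] = threadIdx.x;
}

int main(void) {                   // host code: compiled by the host C compiler
    printf("host code runs on the CPU\n");
    return 0;
}

Compiling with nvcc example.cu -o example drives both compilers behind the scenes.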

Page 6: Parallel Processing with CUDA

CUDA Stack

Page 7: Parallel Processing with CUDA

Limitations of CUDA

• Tesla does not fully support the IEEE spec for double precision floating point operations (although double precision will be resolved with Fermi)

• Code only supported on NVIDIA hardware

• No use of recursive functions (can work around)

• Bus latency between host CPU and GPU

Page 8: Parallel Processing with CUDA

Thread Hierarchy

Thread – Distributed by the CUDA runtime (identified by threadIdx)

Warp – A scheduling unit of up to 32 threads

Block – A user defined group of 1 to 512 threads. (identified by blockIdx)

Grid – A group of one or more blocks. A grid is created for each CUDA kernel function
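As a small, hedged sketch (not from the original slides), these identifiers are typically combined inside a kernel so that each thread picks a unique element to work on:

__global__ void scale(float *data, int n) {
    // blockIdx, blockDim, and threadIdx together give this thread a unique global index
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)                 // guard: the grid may contain more threads than elements
        data[i] = 2.0f * data[i];
}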

Page 9: Parallel Processing with CUDA

CUDA Memory Hierarchy

• The CUDA platform has three primary memory types:

Local Memory – per thread memory for automatic variables and register spilling

Shared Memory – per block low-latency memory to allow for intra-block data sharing and synchronization. Threads can safely share data through this memory and can perform barrier synchronization through __syncthreads()

Global Memory – device level memory that may be shared between blocks or grids
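A minimal sketch of shared memory and barrier synchronization (illustrative kernel, assuming it is launched with at most 256 threads per block; it reverses each block's chunk of an array):

__global__ void reverse_block(int *data) {
    __shared__ int tile[256];                   // shared memory: visible to every thread in the block
    int t = threadIdx.x;
    int base = blockIdx.x * blockDim.x;

    tile[t] = data[base + t];                   // each thread stages one element
    __syncthreads();                            // barrier: wait until the whole tile is loaded

    data[base + t] = tile[blockDim.x - 1 - t];  // read the staged data back in reverse order
}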

Page 10: Parallel Processing with CUDA

Moving Data

CUDA allows us to copy data from one memory type to another.

This includes dereferencing pointers, even in the host's memory (main system RAM)

To facilitate this data movement, CUDA provides cudaMemcpy()
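A minimal host-side sketch of a round trip through device memory with cudaMemcpy() (buffer size and names are illustrative):

#include <cuda_runtime.h>
#include <stdio.h>

int main(void) {
    const int n = 1024;
    const size_t bytes = n * sizeof(float);

    float h_data[1024];                         // host buffer (main system RAM)
    for (int i = 0; i < n; ++i) h_data[i] = (float)i;

    float *d_data = NULL;                       // device buffer (GPU global memory)
    cudaMalloc((void **)&d_data, bytes);

    cudaMemcpy(d_data, h_data, bytes, cudaMemcpyHostToDevice);   // host -> device
    cudaMemcpy(h_data, d_data, bytes, cudaMemcpyDeviceToHost);   // device -> host

    cudaFree(d_data);
    printf("round trip complete: h_data[10] = %f\n", h_data[10]);
    return 0;
}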

Page 11: Parallel Processing with CUDA

Optimizing Code for CUDA

• Prevent thread starvation by breaking your problem down (128 execution units are available for use, thousands of threads may be in flight); see the sketch after this list

• Utilize shared memory and avoid latency problems (communicating with system memory is slow)

• Keep in mind there is no built-in way to synchronize threads in different blocks

• Avoid thread divergence in warps by blocking threads with similar control paths
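One common way to break a problem down so that thousands of threads stay busy regardless of input size is a grid-stride loop; this is a hedged sketch with illustrative names, not code from the slides:

__global__ void scale_all(float *data, int n) {
    // each thread starts at its global index and advances by the total number of
    // threads in the grid, so any launch configuration covers an arbitrarily large array
    int stride = gridDim.x * blockDim.x;
    for (int i = blockIdx.x * blockDim.x + threadIdx.x; i < n; i += stride)
        data[i] *= 2.0f;
}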

Page 12: Parallel Processing with CUDA

Code Example

Will be explained more in depth later…
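The code itself is not reproduced in this transcript; the following is a hedged reconstruction of a typical first CUDA example (vector addition), not necessarily the code shown on the slide:

__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // one thread per output element
    if (i < n)
        c[i] = a[i] + b[i];
}

// Host-side launch: enough 256-thread blocks to cover n elements
// (d_a, d_b, d_c are device pointers prepared with cudaMalloc/cudaMemcpy):
// vecAdd<<<(n + 255) / 256, 256>>>(d_a, d_b, d_c, n);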

Page 13: Parallel Processing with CUDA

Kernel Functions

• A kernel function is the basic unit of work within a CUDA thread

• Kernel functions are CUDA extensions to ANSI C that are compiled by the CUDA compiler and the object code generator
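As an illustration of those extensions (a sketch, not taken from the slides): __global__ marks a kernel that the host launches, while __device__ marks a helper callable only from GPU code.

__device__ float square(float x) {               // device helper: callable only from device code
    return x * x;
}

__global__ void squares(float *out, int n) {     // kernel: launched from the host
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = square((float)i);
}

// launched from host code with the <<<blocks, threads>>> configuration syntax, e.g.:
// squares<<<4, 256>>>(d_out, n);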

Page 14: Parallel Processing with CUDA

Kernel Limitations

• There must be no recursion; there’s no call stack

• There must be no static variable declarations

• Functions must have a non-variable number of arguments

Page 15: Parallel Processing with CUDA

CUDA Warp

• CUDA utilizes SIMT (Single Instruction, Multiple Thread)

• Warps are groups of 32 threads. Each warp receives a single instruction and “broadcasts” it to all of its threads.

• CUDA provides “zero-overhead” warp and thread scheduling. Also, the overhead of thread creation is on the order of 1 clock.

• Because a warp receives a single instruction, it will diverge and converge as each thread branches independently
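A small sketch of what such divergence looks like (illustrative kernel): threads in the same warp take different branches, so the warp executes both paths in turn before reconverging.

__global__ void divergent(int *out) {
    if (threadIdx.x % 2 == 0)      // even and odd threads of one warp diverge here
        out[threadIdx.x] = 1;
    else
        out[threadIdx.x] = 2;
    // the warp reconverges after the branch; grouping threads with similar
    // control paths (as suggested earlier) avoids this serialization
}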

Page 16: Parallel Processing with CUDA

CUDA Hardware

• The primary components of the Tesla architecture are:
– Streaming Multiprocessor (The 8800 has 16)
– Scalar Processor
– Memory hierarchy
– Interconnection network
– Host interface

Page 17: Parallel Processing with CUDA

Streaming Multiprocessor (SM)

- Each SM has 8 Scalar Processors (SP)

- IEEE 754 32-bit floating point support (incomplete support)

- Each SP is a 1.35 GHz processor (32 GFLOPS peak)

- Supports 32 and 64 bit integers

- 8,192 dynamically partitioned 32-bit registers

- Supports 768 threads in hardware (24 SIMT warps of 32 threads)

- Thread scheduling done in hardware

- 16KB of low-latency shared memory

- 2 Special Function Units (reciprocal square root, trig functions, etc)

Each GPU has 16 SMs…

Page 18: Parallel Processing with CUDA

The GPU

Page 19: Parallel Processing with CUDA

Scalar Processor

• Supports 32-bit IEEE floating point instructions:

FADD, FMAD, FMIN, FMAX, FSET, F2I, I2F

• Supports 32-bit integer operations:

IADD, IMUL24, IMAD24, IMIN, IMAX, ISET, I2I, SHR, SHL, AND, OR, XOR

• Fully pipelined

Page 20: Parallel Processing with CUDA

Code Example: Revisited

Page 21: Parallel Processing with CUDA

Myths About CUDA

• GPUs are the only processors in a CUDA application
– The CUDA platform is a co-processor, using the CPU and GPU

• GPUs have very wide (1000s) SIMD machines
– No, a CUDA Warp is only 32 threads

• Branching is not possible on GPUs
– Incorrect.

• GPUs are power-inefficient
– Nope, performance per watt is quite good

• CUDA is only for C or C++ programmers
– Not true, there are third party wrappers for Java, Python, and more

Page 22: Parallel Processing with CUDA

Different Types of CUDA Applications

Page 23: Parallel Processing with CUDA

Future Developments of CUDA

• The next generation of CUDA, called “Fermi,” will be the standard on the GeForce 300 series

• Fermi will have full support for IEEE 754 double precision

• Fermi will natively support more programming languages

• Also, there is a new project, OpenCL, that seeks to provide an abstraction layer over CUDA and similar platforms (AMD’s Stream)

Page 24: Parallel Processing with CUDA

Things to Ponder…

• Is CUDA better than Cell??

• How do I utilize 12,000 threads??

• Is CUDA really relevant anyway, in a world where web applications are so popular??

Page 25: Parallel Processing with CUDA

“Parallel Programming with CUDA”

By: Matthew Guidry and Charles McClendon