Parallel Computing: An Introduction
Bing Bing Zhou
School of Information Technologies University of Sydney
2
The Goals
! This course presents:
  foundational concepts of high performance computing
  advanced computer architectures
  the basics of parallel programming – algorithm design and analysis
! The main theme: A New Way of Thinking
3
Contents
! Issues in High Performance Computing ! Parallel Architectures ! Parallel Algorithm Design ! Performance Analysis ! MPI and Pthread
4
The Course
! Lectures:
  Mon: 14:30 – 16:05, C12-N402
  Wed: 19:00 – 21:30, C12-N403
  Fri: 16:25 – 18:00, C12-N403
! Assignments: two small MPI programming projects
  To pass this subject, you must get at least the pass mark for both assignments.
5
References
! “Introduction to Parallel Computing” by Ananth Grama, Anshul Gupta, George Karypis, and Vipin Kumar, Addison Wesley, 2003
! “Introduction to Parallel Processing: Algorithms and Architectures” by Behrooz Parhami, Plenum Press, 1999
! “Parallel Programming in C with MPI and OpenMP” by Michael J. Quinn, McGraw-Hill, 2003
6
High Performance Computing
! High performance computing (HPC) is the main motivation for using parallel supercomputers, computer clusters and other advanced parallel/distributed computing systems:
  High speed
  High throughput
7
Obtaining Performance
! Increasing the speed of sequential microprocessors:
  Increasing clock frequency
  Implicit parallelism (ILP)
! Parallelizing (explicitly) the process of computing!
! Specialized computers for a certain class of problems.
8
Parallel Computing
! Parallel Computing (or Processing) refers to a large class of methods (algorithms, architectures, software tools, etc.) that attempt to increase computing speed by performing more than one computation concurrently.
9
Why Parallel Computing?
! The promise of parallelism has fascinated researchers for at least three decades.
! In the past, parallel computing efforts have shown promise and gathered investment, but in the end, uniprocessor computing always prevailed.
! We argue general-purpose computing is taking an irreversible step toward parallel architectures:
  Technology push
  Application driven
10
Moore’s Law
! Each new chip contained roughly twice as much capacity as its predecessor.
! Each new chip was released within 18–24 months of the previous chip.
© Intel Corp
11
Processor Performance
! Performance improved by 52% per year between 1986 and 2002.
! Since 2002, performance has improved less than 20% per year.
12
Technological Limitations
! “Frequency wall”: Increasing frequencies and deepening pipelines have reached diminishing returns on performance.
! “Power wall”: The chip will melt if run any faster (at a higher clock rate).
! “ILP wall”: There are diminishing returns on finding more ILP (instruction-level parallelism).
! “Memory wall”: Loads and stores are slow, but multiplies are fast. A modern microprocessor can take many more clock cycles to access Dynamic Random Access Memory (DRAM) than to perform a floating-point multiply.
17
Multi-Core Technology
! Intel Announcement (12/2006): Intel Corporation researchers have developed the world’s first programmable processor that delivers supercomputer-like performance (1+ TFLOPS, 1+ Tb/s) from a single 80-core chip not much larger than a fingernail, while using less electricity (~65 W) than most of today’s home appliances.
21
The New Wave
! The rate of technological progress for networking is an astounding 10-fold increase every 4 years (77.8% yearly compound rate).
! The emergence of network-centric computing (as opposed to processor-centric) – distributed high performance/throughput computing
22
Parallel vs Distributed Computing
Parallel computing splits a single application up into tasks that are executed at the same time; it is more like a top-down approach.
Distributed computing considers a single application which is executed as a whole but at different locations; it is more like a bottom-up approach.
23
Parallel vs Distributed Computing
Parallel computing is about decomposition:
  how we can perform a single application concurrently,
  how we can divide a computation into smaller parts which may potentially be executed in parallel.
Distributed computing is about composition:
  what happens if many distributed processes interact with each other,
  whether a global function can be achieved although there is no global time or state.
24
Parallel vs Distributed Computing
Parallel computing considers how to reach a maximum degree of concurrency:
  Scientific computing
Distributed computing considers reliability and availability:
  Information/resource sharing
25
Parallel vs Distributed Computing
The differences are now blurred, especially after the introduction of Grid computing and Cloud computing.
The two related fields have many things in common:
  Multiple processors
  Networks connecting the processors
  Multiple computing activities and processes
  Input/output data distributed among processors
26
The Network is the Computer
[Diagram: LANs & WANs (Local Area Networks, Wide Area Networks) linking machines into a “Supercomputer” – the network is the computer!]
“When the network is as fast as the computer's internal links, the machine disintegrates across the net into a set of special purpose appliances”
28
Cluster Computing
! A computer cluster is a group of linked computers working together closely so that in many respects they form a single computer.
! The components of a cluster are commonly, but not always, connected to each other through fast local area networks.
! Clusters are usually deployed to improve performance and/or availability over that of a single computer, while typically being much more cost-effective than single computers of comparable speed or availability.
29
The Internet
30
Computational Grids
31
Grid Computing
! Grid computing is the combination of computer resources from multiple administrative domains applied to a common task, usually a scientific, technical or business problem that requires a great number of computer processing cycles or needs to process large amounts of data.
! It is a form of distributed computing whereby a “super and virtual computer” is composed of a cluster of networked, loosely coupled computers acting in concert to perform very large tasks.
! This technology has been applied to computationally intensive scientific, mathematical, and academic problems, and used in commercial enterprises for data-intensive applications.
32
Cloud Computing
! Cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction (NIST’s definition).
! Cloud computing describes a new supplement, consumption and delivery model for IT services based on the Internet; it typically involves the provision of dynamically scalable and often virtualized resources (storage, platform, infrastructure, and software) as a service over the Internet.
33
Computing Milieu
! When the cost of computing improves, the opportunities to use computers multiply.
! Science
  — storm forecasting and climate prediction
  — understanding biochemical processes of living organisms
! Engineering
  — computational fluid dynamics and airplane design
  — earthquake and structural modelling
  — molecular nanotechnology
! Business
  — computational finance
  — data mining
34
Course Theme: A New Way of Thinking
In sequential computing, operations are performed one at a time, making it straightforward to reason about the correctness and performance characteristics of a program.
In parallel computing, many operations take place at once, complicating our reasoning about correctness and performance.
35
Course Theme: A New Way of Thinking
! A sequential algorithm is evaluated by its runtime.
! The asymptotic runtime of a sequential program is identical on any serial platform.
! The parallel runtime of a program depends on the input size, the number of processors, and the communication parameters of the machine.
! A parallel algorithm must therefore be analyzed in the context of the underlying platform.
36
Parallel Programming
[Diagram: a single piece of “Work” is partitioned into parts w1, w2, w3, each handled by a “worker”; the workers produce results r1, r2, r3, which are combined into the final “Result”.]
37
Parallel Programming
[Diagram: a TASK (JOB) is decomposed into Tasks, the Tasks are assigned to Processes, and the Processes are mapped onto a Distributed Environment.]
NP-complete problems (Non-deterministic Polynomial time complete)
38
Parallel Programming
! Problems to consider:
  How do we assign tasks to processes?
  What if we have more tasks than processes?
  What if processes need to share partial results?
  How do we aggregate partial results?
  How do we know all the processes have finished?
  What if processes die?
  What is the performance of a parallel program?
39
Main Topics
! Parallel/distributed computing architectures
! Parallel algorithm design
! Analytical modelling of parallel programs
! Examples
40
References
! Jack Dongarra (U. Tenn.) CS 594 slides http://www.cs.utk.edu/~dongarra/WEB-PAGES/cs594-2010.htm