Copyright © 2013 Russel Winder 1
GPars Workshop
Russel Winder
email: [email protected]
twitter: @russel_winder
http://www.russel.org.uk
Aims, Goals and Objectives
● Gain practical experience of the various models of concurrent and parallel behaviour available in GPars: actors, dataflow, data parallelism, etc.
● Have some fun.
Subsidiary Aims, Goals and Objectives
● Show that shared mutable memory multi-threading should return to being an operating systems development technique and not continue to be pushed as an applications programming technique – remember…
…people should tremble in fear at the prospect of using
shared mutable memory multi-threading.
Structure
Introduction.
Actors.
Dataflow.
Data Parallelism.
Analysis.
Closing.
Protocol
Short presentation.
(Short presentation → Practical period) × 3
Interaction.
Short presentation.
Questions or comments welcome at any time.
Interstitial Advertisement
Introduction
It is no longer contentious that the Multicore Revolution
is well underway.
Quad core laptops and phones.
Eight and twelve core workstations.
Servers with “zillions” of cores.
Parallel capable hardware is the norm.
Software technology in use is now lagging hardware technology by decades.
Operating systems manage cores with kernel threads.
Operating systems are fundamentally shared mutable memory multi-threaded systems.
Operating systems rightly use all the lock, semaphore, monitor, etc. technologies.
Computationally intensive systems or subsystems definitely have to be parallel.
Other systems likely use concurrency but not parallelism.
Concurrency
Execution as co-routines:
Sequences of code give up the execution to pass it to another coroutine.
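The coroutine hand-off can be sketched with Python generators; a language-neutral illustration, not GPars. Each worker gives up execution at `yield`, and a tiny round-robin scheduler passes control to the next coroutine; one thread, no parallelism, just time-division multiplexing. All names are illustrative.

```python
# Concurrency as coroutines: each worker gives up execution at `yield`,
# and the scheduler passes control to the next coroutine. One thread,
# no parallelism: time-division multiplexing.

def worker(name, steps):
    for i in range(steps):
        yield f"{name} step {i}"   # give up execution here

def round_robin(*coroutines):
    """Run each coroutine until it yields, then move to the next."""
    trace = []
    pending = list(coroutines)
    while pending:
        for coroutine in list(pending):
            try:
                trace.append(next(coroutine))
            except StopIteration:
                pending.remove(coroutine)
    return trace

trace = round_robin(worker('a', 2), worker('b', 2))
print(trace)  # → ['a step 0', 'b step 0', 'a step 1', 'b step 1']
```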
More Concurrency
Concurrency is a technique founded in a uniprocessor view of the world.
Time-division multiplexing.
Parallelism
Having multiple executions active at the same time.
Concurrency is a tool for structuring execution where a single processor is used by multiple computations.
Parallelism is about making a computation complete faster than using a single processor.
Squirrel behaviour emulates synchronized software behaviour.
Locks deny parallelism.
The whole purpose of a lock is to prevent parallelism.
Parallelism is performance improvement.
Performance improvement requires parallelism.
Locks deny performance improvement.
Locks are needed only if there is mutable shared state.
Avoid mutable shared state.
…but how…
Use appropriate architectural models.
It's all about controlling concurrency and parallelism with tools that applications programmers find usable.
Shared mutable memory multi-threading is an operating system technique.
Applications and tools programmers need computational models with integrated synchronization.
Use processes and message passing.
It's all easier if processes are single threaded.
Actors: Independent processes communicating via asynchronous exchange of messages.
Dataflow: Operators connected by channels, with activity triggered by the arrival of data on the channels.
Data Parallelism: Transform a sequence to another sequence where all the individual actions happen at the same time.
Active Objects: An object that is actually an actor but looks like a full service object.
Agents: A wrapper for some shared mutable state.
Software Transactional Memory: Wrappers for mutable values that use transactions rather than locks.
Fork/Join: A toolkit for tree structured concurrency and parallelism.
Actors
Actors: Independent processes communicating via asynchronous exchange of messages.
Need an example.
The Sleeping Barber Problem
The Sleeping Barber Problem
● The barber's shop has a single cutting chair and a row of waiting seats.
● The barber sleeps in the cutting chair unless trimming a customer.
● Customers arrive at the shop at intervals.
● If the barber is asleep, the customer wakes the barber, takes the cutting chair, and gets a trim.
● If the barber is cutting, a new customer checks to see if there is a free waiting seat.
● If there is, they join the queue to be trimmed.
● If there isn't, they leave disgruntled.
Problem originally due to Edsger Dijkstra.
A new customer enters the shop, checks whether they can go straight to the cutting chair; if not, whether they can take a waiting chair; and if not, leaves.
[Diagram: the barber's shop, showing the cutting chair and the row of waiting chairs.]
The Wikipedia article presents the classic operating systems approach using locks and semaphores.
http://en.wikipedia.org/wiki/Sleeping_barber_problem
Code!
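Before reaching for the full GPars version, the actor structure of the problem can be sketched with nothing but threads and mailboxes. This is an illustrative, GPars-free Python sketch: the barber is a single-threaded "actor" that sleeps on its mailbox, customers are messages, and there is no shared mutable state. All names (`run_shop`, `WAITING_SEATS`, etc.) are invented for this sketch, and the seat check via queue size is an approximation of the waiting row.

```python
import queue
import threading

# The barber actor sleeps on its mailbox; customers are messages.
# Communication is entirely by message passing: no locks, no shared state.

WAITING_SEATS = 3

def barber(mailbox, results):
    """The barber: blocks (sleeps) until a customer message arrives."""
    while True:
        customer = mailbox.get()
        if customer is None:                  # closing time
            return
        results.put(('trimmed', customer))

def run_shop(customers):
    mailbox = queue.Queue()                   # cutting chair + waiting row
    results = queue.Queue()
    worker = threading.Thread(target=barber, args=(mailbox, results))
    worker.start()
    for customer in customers:
        if mailbox.qsize() <= WAITING_SEATS:  # a seat is free
            mailbox.put(customer)
        else:                                 # no free seat: leave disgruntled
            results.put(('left', customer))
    mailbox.put(None)                         # no more customers today
    worker.join()
    return [results.get() for _ in customers]

print(run_shop(['Alice', 'Bob', 'Carol']))
```

Every customer produces exactly one outcome, trimmed or left; which customers are turned away depends on timing, exactly as in the real shop.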
Dataflow
Dataflow: Operators connected by channels, with activity triggered by the arrival of data on the channels.
Code!
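The heart of the model is the single-assignment variable: readers block until a value arrives, so downstream activity is triggered purely by the arrival of data. A plain-Python sketch of that idea (illustrative only; in GPars these roles are played by `DataflowVariable` and `Dataflow.task`):

```python
import threading

# Single-assignment dataflow variables: get() blocks until bind() has
# been called, so each task fires as soon as its inputs arrive.

class DataflowVariable:
    """A write-once value: get() blocks until bind() has been called."""
    def __init__(self):
        self._arrived = threading.Event()
        self._value = None

    def bind(self, value):
        self._value = value
        self._arrived.set()       # data has arrived: wake all readers

    def get(self):
        self._arrived.wait()      # activity is triggered by data arrival
        return self._value

def task(function):
    """Run one operator of the dataflow graph on its own thread."""
    thread = threading.Thread(target=function)
    thread.start()
    return thread

x, y, z = DataflowVariable(), DataflowVariable(), DataflowVariable()

task(lambda: z.bind(x.get() + y.get()))  # fires only once x and y arrive
task(lambda: x.bind(40))
task(lambda: y.bind(2))

print(z.get())  # → 42
```

Note that the order in which the three tasks are started does not matter: the data dependencies, not the program text, determine the execution order.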
If you want the code, clone the Git repository:
http://www.russel.org.uk/Git/SleepingBarber.git
Or if you just want to browse:
http://www.russel.org.uk/gitweb
Data Parallelism
Data Parallelism: Transform a sequence to another sequence where all the individual actions happen at the same time.
Need an example.
What is the Value of π?
Easy, it's known exactly.
It's π.
Obviously.
It's simples! (Aleksandr Orlov, 2009)
Approximating π
● What is its value represented as a floating point number?
● We can only obtain an approximation.
● A plethora of possible algorithms to choose from; a popular one is to employ the following integral equation:

\[ \frac{\pi}{4} = \int_0^1 \frac{1}{1 + x^2} \, dx \]
One Possible Algorithm
● Use quadrature to estimate the value of the integral – which is the area under the curve.
\[ \pi \approx \frac{4}{n} \sum_{i=1}^{n} \frac{1}{1 + \left( \frac{i - 0.5}{n} \right)^2} \]

With n = 3 not much to do, but potentially lots of error. Use n = 10⁷ or n = 10⁹?
Embarrassingly parallel.
Because addition is commutative and associative, the expression can be decomposed into sums of partial sums.
a + b + c + d + e + f
=
( a + b ) + ( c + d ) + ( e + f )
Scatter – Gather
map reduce
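The scatter–gather decomposition applied to the quadrature can be sketched in Python (illustrative, not the GPars version; note that CPython's GIL serializes these CPU-bound tasks, whereas the same shape on the JVM with GPars yields real parallelism):

```python
from concurrent.futures import ThreadPoolExecutor

# Scatter the n quadrature terms into slices, sum each slice as a
# separate task (map), then reduce the partial sums — valid because
# addition is commutative and associative.

def partial_sum(start, end, n):
    """Sum of 1 / (1 + ((i - 0.5)/n)^2) for i in [start, end)."""
    return sum(1.0 / (1.0 + ((i - 0.5) / n) ** 2) for i in range(start, end))

def pi_quadrature(n=1_000_000, tasks=8):
    slice_size = n // tasks
    with ThreadPoolExecutor(max_workers=tasks) as pool:
        futures = [pool.submit(partial_sum,                      # scatter
                               t * slice_size + 1,
                               (t + 1) * slice_size + 1,
                               n)
                   for t in range(tasks)]
        return 4.0 / n * sum(f.result() for f in futures)        # gather

print(pi_quadrature(100_000, 4))  # ≈ 3.14159265…
```

The answer is identical whatever the number of tasks; only the wall-clock time changes.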
Code!
If you want the code, clone the Git repository:
http://www.russel.org.uk/Git/Pi_Quadrature.git
Or if you just want to browse:
http://www.russel.org.uk/gitweb
An End
Multicore and multiprocessor are now the norm, not the exception.
Parallelism only matters ifcomputational performance matters.
Unstructured synchronization of concurrent systems is not a feasible approach.
Actors, dataflow, and data parallelism (and CSP, agents, fork/join, …)
are the future of applications structure.
Passing messages between processes is the way forward.
Shared memory concurrency is a dead end for applications.
Squirrels deny parallelism.
Squirrels deny performance enhancement.
Don't be a squirrel.
Do not use explicit locking algorithms.
Use computational architectures that promote parallelism and hence performance improvement:
Actors
Dataflow
Data Parallelism
Use GPars.
Go on, you know you want to…
Interstitial Advertisement
The End
GPars Workshop
Russel Winder
email: [email protected]
twitter: @russel_winder
website: http://www.russel.org.uk