Dataflow Process Networks (Lee & Parks) and Synchronous Dataflow (Lee & Messerschmitt). Abhijit Davare, Nathan Kitchen


  • Slide 1
  • Slide 2
  • Dataflow Process Networks (Lee & Parks); Synchronous Dataflow (Lee & Messerschmitt). Abhijit Davare, Nathan Kitchen
  • Slide 3
  • Dataflow: data oriented. Host language: Java, C/C++. Coordination language: SDF, cyclo-static dataflow.
  • Slide 4
  • Kahn Process Networks. Dataflow process networks are a special case of Kahn process networks. (The slide shows processes P1, P2, P3 connected by unbounded channels carrying tokens a, b, c and A, B, C.)
  • Slide 5
  • Properties of Kahn Processes. The processes we deal with are usually continuous and monotonic, which means that we can realistically compute them.
  • Slide 6
  • Continuous Processes. Formally: see the statement below. Practically: a continuous process will not wait for the end of an infinite input stream before producing an output.
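
A standard formal statement of continuity in this setting (a reconstruction, since the slide gives only "Formally:"; this is the usual Scott-continuity condition for Kahn processes): for every increasing chain of input streams $X_0 \sqsubseteq X_1 \sqsubseteq \dots$ under the prefix order,

$$F\Big(\bigsqcup_i X_i\Big) \;=\; \bigsqcup_i F(X_i).$$
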
  • Slide 7
  • Monotonic Processes. Adding to the input does not change the output already produced. All continuous processes are monotonic. Example (monotonic): P maps [1, 2] to [2, 4] and [1, 2, 3] to [2, 4, 6]. Example (not monotonic): P maps [1, 2] to [2, 4] but [1, 2, 3] to [4, 2, 6], changing output it had already produced.
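
In the same order-theoretic notation (a reconstruction of the slide's verbal definition): a process $F$ is monotonic if

$$X \sqsubseteq X' \;\Longrightarrow\; F(X) \sqsubseteq F(X'),$$

where $\sqsubseteq$ is the prefix order on streams.
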
  • Slide 8
  • Computing Continuous Processes Continuity means: We can start producing the output stream before we have received all the input. We can process an infinite stream with finite resources. Networks of processes are determinate.
  • Slide 9
  • Nondeterminism. Pros: compact system definition; incomplete specification is OK. Cons: loss of continuity; analysis is more difficult.
  • Slide 10
  • Nondeterminism (2). Nondeterminism can be added to KPNs by allowing: processes that test for empty channels; internal nondeterminism (e.g., a rand() call); channels with multiple sources; channels with multiple sinks; processes with shared variables.
  • Slide 11
  • Streams. Most relevant in real-time signal processing, where implementations must be good at losing data, recycling memory, and storing data. Two views of streams: the recursive camp (streams defined recursively) and the channel camp (streams as channels).
  • Slide 12
  • Dataflow Processes. When a dataflow actor fires, it consumes inputs and produces outputs (e.g., an adder actor consumes 2 and 2 and produces 4). We get a dataflow process by repeatedly firing an actor (see the sketch below).
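
A minimal sketch of this idea in Python (my own helper names, not Ptolemy code): a firing function consumes one token from each input and produces one output, and repeating it over the input streams yields the dataflow process.

```python
def fire_add(a, b):
    """One firing of an adder actor: consume one token from each input."""
    return a + b

def process(actor, *input_streams):
    """Repeatedly fire the actor, yielding the output stream."""
    for tokens in zip(*input_streams):   # one token from each input per firing
        yield actor(*tokens)

# Example: the adder from the slide consumes 2 and 2 and produces 4.
outputs = list(process(fire_add, [2, 1, 5], [2, 3, 7]))
print(outputs)  # [4, 4, 12]
```
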
  • Slide 13
  • Firing Rules for Actors Dataflow processes are continuous if: Each actor firing is functional (i.e., the outputs depend only on the inputs). The firing rules are sequential. A firing rule is a set of patterns that the inputs have to match.
  • Slide 14
  • Examples of Firing Rules. Adder (at least one token on each input): R1 = { [*], [*] }. Select (the control input chooses which input channel to take a token from): R1 = { [*], ⊥, [T] }, R2 = { ⊥, [*], [F] }. (See the sketch below.)
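
A small sketch of how firing rules can be checked, using hypothetical helper names (the paper defines rules abstractly as token patterns): "*" stands for the wildcard and None stands for the bottom pattern, meaning the channel is not consulted.

```python
WILD = "*"

def rule_enabled(rule, channels):
    """True if every channel has tokens matching the corresponding pattern."""
    for pattern, chan in zip(rule, channels):
        if pattern is None:          # bottom: no tokens required on this channel
            continue
        if len(chan) < len(pattern):
            return False
        for want, have in zip(pattern, chan):
            if want != WILD and want != have:
                return False
    return True

# Select actor: R1 = { [*], bottom, [True] }, R2 = { bottom, [*], [False] }
R1 = [[WILD], None, [True]]
R2 = [None, [WILD], [False]]
channels = [[7], [], [True]]         # data0 has a token, control says "take data0"
print(rule_enabled(R1, channels))    # True
print(rule_enabled(R2, channels))    # False
```
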
  • Slide 15
  • Firing Rules in SDF. A synchronous dataflow actor has a single firing rule containing only wildcards; it is enabled by a fixed number of tokens. Example: a 4-to-1 downsampler has R = { [*, *, *, *] }.
  • Slide 16
  • Firing Rules with a Problem (a merge actor). At least one token on either input: R1 = { [*], ⊥ }, R2 = { ⊥, [*] }. Which rule should be checked first? If we check with a blocking read, we will deadlock. These rules are not sequential.
  • Slide 17
  • Sequential Firing Rules. If we can avoid deadlock by checking rules in the right order, the rules are sequential. Example: the select actor, with R1 = { [*], ⊥, [T] } and R2 = { ⊥, [*], [F] }. We read the third (control) input first; then we know which other input to read. (A blocking-read sketch follows below.)
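
Because the rules are sequential, the select actor can be written with ordinary blocking reads as long as the control input is read first. A minimal sketch, assuming Python queue.Queue objects as channels (not Ptolemy's API):

```python
import queue

def select(data_t: queue.Queue, data_f: queue.Queue,
           control: queue.Queue, out: queue.Queue):
    """Select actor: read the control token first, then only the chosen input."""
    while True:
        c = control.get()                             # blocking read on control
        token = data_t.get() if c else data_f.get()   # blocking read on chosen data input
        out.put(token)
```

Reading a data input before the control input could block forever on a channel that never receives a token, which is exactly the deadlock the previous slide warns about.
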
  • Slide 18
  • Implications of Sequentiality All sequential processes are continuous, therefore determinate. Networks of functional actors with sequential firing rules can be scheduled; we do not have to synchronize processes using blocking reads.
  • Slide 19
  • Execution Models. For dataflow process networks, the result of execution is independent of the scheduling policy. Three options: 1. concurrent processes; 2. dynamic scheduling; 3. static scheduling.
  • Slide 20
  • Concurrent Processes (animation over several slides): processes A, B, and C communicate over channels; a process that needs input becomes hungry, a hungry process with nothing to read is suspended, and a suspended process becomes enabled again once its input is available.
  • Slide 24
  • Concurrent Processes (2). Large overhead from context switching. In general, increasing process granularity decreases the relative overhead.
  • Slide 25
  • Dynamic Scheduling. Data dependencies can make static scheduling impossible; in particular, dynamic scheduling is necessary when the number of input and output tokens for each actor cannot be defined a priori. Can be done in hardware or software. Actors are activated when their inputs are available.
  • Slide 26
  • Static Scheduling. The number of input and output tokens must be predetermined for all actors. The best schedule is chosen among several based on an objective (e.g., minimum buffer size or minimum code size). Used for: code generation by code stitching, HW synthesis.
  • Slide 27
  • Formalism for Static Scheduling. Rank: the number of linearly independent vectors. (The slide shows an example SDF graph and its topology matrix.)
  • Slide 28
  • Formalism for Static Scheduling. Rank: the number of linearly independent vectors. Here the topology matrix has full rank, so there is no periodic solution. (Example graph shown on the slide.)
  • Slide 29
  • Formalism for Static Scheduling. Rank: the number of linearly independent vectors. (Example graph shown on the slide; the underlying condition is stated below.)
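
The condition these three slides illustrate, restated in the notation of Lee & Messerschmitt (a reconstruction, since the slide figures are not in the transcript): let $\Gamma$ be the topology matrix of a connected SDF graph with $s$ actors, where $\Gamma_{ij}$ is the number of tokens actor $j$ produces on arc $i$ per firing (negative if it consumes). A nontrivial periodic schedule with repetitions (firing) vector $q$ requires

$$\Gamma q = 0, \qquad \operatorname{rank}(\Gamma) = s - 1.$$

If $\Gamma$ has full rank $s$, the only solution is $q = 0$ and no periodic schedule exists.
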
  • Slide 30
  • Correctness of SDF Graphs. Three example graphs with actors A, B, and C, differing only in the token production and consumption rates on their arcs (figures).
  • Slide 33
  • Delays. On startup, we need to make sure that arcs have enough initial tokens to activate nodes. b holds the number of tokens in each buffer and evolves as b(n+1) = b(n) + T v(n), where T is the topology matrix and v(n) indicates which node fires at step n. The initial state b(0) affects which startup options are legal.
  • Slide 34
  • Single-Processor Static Scheduling. Class S algorithms: given a firing vector q and the initial number of tokens in each buffer, repeatedly select a runnable node and update b(n). The list of selected nodes forms the schedule. If the schedule cannot be completed, deadlock has occurred. (A sketch follows below.)
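
A minimal Python sketch of a class S scheduler (variable names are my own; the papers specify the algorithm only abstractly). Gamma is the topology matrix with one row per buffer and one column per actor, q is the firing vector, and b0 holds the initial token counts.

```python
def class_s_schedule(Gamma, q, b0):
    b = list(b0)                       # current token count in each buffer
    remaining = list(q)                # firings still owed to each actor
    schedule = []
    while any(remaining):
        fired = False
        for actor in range(len(q)):
            if remaining[actor] == 0:
                continue
            # Runnable: firing this actor must not drive any buffer negative.
            new_b = [b[i] + Gamma[i][actor] for i in range(len(b))]
            if all(x >= 0 for x in new_b):
                b = new_b
                remaining[actor] -= 1
                schedule.append(actor)
                fired = True
        if not fired:
            raise RuntimeError("deadlock: no runnable node")
    return schedule

# Example: actor 0 produces 1 token on the single buffer, actor 1 consumes 1.
Gamma = [[1, -1]]
print(class_s_schedule(Gamma, q=[1, 1], b0=[0]))   # [0, 1], i.e. fire 0 then 1
```

Different policies for choosing among runnable nodes yield different schedules (e.g., favoring minimum buffer size or minimum code size); the greedy left-to-right choice above is just one instance.
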
  • Slide 35
  • Parallel-Processor Static Scheduling. Exploit concurrency to increase throughput. In the slide's example, nodes 1 and 3 can be scheduled at the same time on different processors. Periodic admissible schedules: {1, 3, 1, 2}, {3, 1, 1, 2}, {1, 1, 3, 2}.
  • Slide 36
  • Static Buffering. Minimize execution time by embedding memory locations as constants in the code rather than as variables. Condition: i·q = K·N, where i = number of tokens emitted per firing, q = number of firings in one period, N = size of the buffer, and K is an integer.
  • Slide 37
  • Functional Behavior and Hierarchy. Dataflow process networks allow hierarchy, so functionality may be present at the higher level; however, delay, internal state, and unbalanced subsystems can cause nodes to be non-functional.
  • Slide 38
  • Dataflow and Functional Languages. Actors correspond to first-order functions. Processes are higher-order functions: they take functions (the firing function) as arguments. Process F results from applying the first-order function f to a stream that starts with R and continues with X (see the sketch below).
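
A minimal sketch of this higher-order view, assuming Python generators as streams (the paper works in a functional-language setting): make_process plays the role of the higher-order function, and applying it to a first-order firing function f yields the process F.

```python
def make_process(f, tokens_per_firing=1):
    """Higher-order function: given firing function f, return process F over streams."""
    def F(stream):
        firing_inputs = []
        for token in stream:                    # the stream starts with prefix R ...
            firing_inputs.append(token)
            if len(firing_inputs) == tokens_per_firing:
                yield f(*firing_inputs)          # ... f is applied to that prefix
                firing_inputs = []               # and F continues on the rest, X
    return F

scale_by_two = make_process(lambda x: 2 * x)
print(list(scale_by_two([1, 2, 3])))            # [2, 4, 6]
```
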
  • Slide 39
  • Language Features: recursion, iteration, carrying state, polymorphism, parallelism.
  • Slide 40
  • Recursion. Not needed in dataflow to carry state (we have feedback loops). Can be used with higher-order functions for compact models (e.g., FFT). Expensive unless unrolled at setup time.
  • Slide 41
  • Polymorphism. Tokens can have arbitrary type: arrays as a sequence of tokens, or arrays as single tokens. In Ptolemy, one actor can operate on several types, e.g., add doubles or add ints (without converting to doubles).
  • Slide 42
  • Parallelism. Functional languages: parallelism is explicit and is thwarted by recursion; use higher-order functions for parallelism instead. Dataflow: parallelism is implicit; higher-order functions are used as syntactic sugar, unrolled at setup time, and still parallel!
  • Slide 43
  • Credits. Sean Connery as James Bond: http://www.speakeasy.org/~wvt3rd/BONDLIT1.HTM. Screen Beans © 1995, 1996, 1997, 1998, 1999, 2000 A Bit Better Corporation; Screen Beans is a registered trademark of A Bit Better Corporation. Figures from "Dataflow Process Networks," Lee & Parks, Proc. IEEE, May 1995, and "Synchronous Data Flow," Lee & Messerschmitt, Proc. IEEE, Sep 1987. All episodes are filmed before a live studio audience. No animals were harmed during the production of this presentation.
  • Slide 44
  • Tagged Token Model. Tokens have tags and values. Out-of-order execution: channels are not FIFO; actors fire when their input tags match. Graphs that would deadlock as dataflow may execute under this model. More expressive, but of limited value.