Actors Model, Asynchronous Design with Non Blocking IO and SEDA

Posted on 15-Jan-2015


The idea of this presentation is to demonstrate the differences between event-based systems and the traditional concurrency model, how the traditional model affects the state of objects, and why it makes a truly scalable system impossible. I will show how the Actors Model can be the basis of an efficient concurrency system that supports thousands of simultaneous requests without requiring a complex, expensive, and inefficient infrastructure architecture. I will also raise some questions about Non Blocking IO and its intersection with SEDA, in order to arrive at a final solution prepared to meet this growing demand and make better use of resources in Cloud Computing.

Transcript of Actors Model, Asynchronous Design with Non Blocking IO and SEDA

Actors Model, Asynchronous Design with Non Blocking IO and SEDA

Felipe Oliveira – felipe.oliveira@soaexpert.com.br

Agenda

A little bit about Concurrency

Dealing with State (Shared Mutable, Isolated Mutable, Persistent Data Structures)

Strategies

Concurrency with Intensive IO

Scalability

STM – Software Transactional Memory

Actors Model and SEDA

What's Concurrency?

In a concurrent program, two or more actions take place simultaneously.

We often write concurrent programs using threads

Starting threads is easy, but their execution sequence is non-deterministic!

Coordinating threads and ensuring they handle data consistently is very difficult.

Three prominent options for concurrency

The “Synchronize and Suffer” model

The Software-Transactional Memory (STM) model

The Actor-based Concurrency model

Exploring Design Options

Shared Mutable Design

Isolated Mutable Design

Purely Immutable Design (with functional languages)

Three ways to avoid problems

Synchronize Properly

Don't Share State!

Don't Mutate State

“Avoiding mutable state is the secret weapon to winning concurrency battles”

Strategies

Sequential to Concurrent

Divide and Conquer

Decide the Number of Threads:

Runtime.getRuntime().availableProcessors();

We can compute the total number of threads we’d need as:

Number of threads = Number of Available Cores / (1 - Blocking Coefficient)

Concurrency with Intensive IO

An IO-intensive application has a large blocking coefficient and will benefit from more threads than the number of available cores.

A computation-intensive task has a blocking coefficient of 0, and an IO-intensive task has a value close to 1; a fully blocked task is doomed, so we don't have to worry about the value reaching 1.

In order to determine the number of threads, you need to know two things: the number of available cores and the blocking coefficient of your tasks.
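As a rough illustration of the formula above, here is a minimal Java sketch; the blocking coefficient of 0.9 is an assumed value for an IO-intensive workload, not something from the slides.

```java
public class PoolSizing {
    public static void main(String[] args) {
        int cores = Runtime.getRuntime().availableProcessors();

        // Blocking coefficient: 0 for pure computation, close to 1 for IO-heavy work.
        // 0.9 here is an assumed value; measure your own workload.
        double blockingCoefficient = 0.9;

        int poolSize = (int) (cores / (1 - blockingCoefficient));
        System.out.println("Cores: " + cores + ", suggested pool size: " + poolSize);
    }
}
```

With 8 cores and a coefficient of 0.9 this suggests around 80 threads, while a purely computational task (coefficient 0) gets one thread per core.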

Speedup for the IO Intensive App

Concurrent Computation of Prime Numbers

Speedup for the Computationally Intensive App

Managing Threads with ExecutorService
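To tie the prime-number example and the ExecutorService slide together, here is a hedged sketch of how the work might be split across a fixed thread pool; the limit, the chunking, and the isPrime helper are illustrative choices rather than the code from the talk.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ConcurrentPrimes {
    static boolean isPrime(int n) {
        if (n < 2) return false;
        for (int i = 2; i * i <= n; i++) {
            if (n % i == 0) return false;
        }
        return true;
    }

    public static void main(String[] args) throws Exception {
        int limit = 1_000_000;                               // illustrative upper bound
        int parts = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(parts);

        // Divide the range into one chunk per core and submit each chunk as a task.
        List<Future<Integer>> results = new ArrayList<>();
        int chunk = limit / parts;
        for (int p = 0; p < parts; p++) {
            final int from = p * chunk + 1;
            final int to = (p == parts - 1) ? limit : (p + 1) * chunk;
            results.add(pool.submit(() -> {
                int count = 0;
                for (int n = from; n <= to; n++) {
                    if (isPrime(n)) count++;
                }
                return count;
            }));
        }

        // Collect the partial counts.
        int total = 0;
        for (Future<Integer> f : results) total += f.get();
        pool.shutdown();
        System.out.println("Primes up to " + limit + ": " + total);
    }
}
```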

Software Transactional Memory – STM

Separation of Identity and State

Clojure STM
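The slides point to Clojure's STM here; as a loose Java analogy for separating identity from state (closer to a Clojure atom than to full transactional memory), immutable values can be held behind an atomic reference. The Account type below is purely illustrative.

```java
import java.util.concurrent.atomic.AtomicReference;

public class IdentityVsState {
    // State: an immutable value. "Changing" a balance means producing a new value.
    record Account(long balanceCents) {
        Account deposit(long cents) { return new Account(balanceCents + cents); }
    }

    public static void main(String[] args) {
        // Identity: a stable reference whose current value can be swapped atomically.
        AtomicReference<Account> account = new AtomicReference<>(new Account(0));

        // Threads update the identity by applying a pure function to the current state.
        account.updateAndGet(a -> a.deposit(500));

        System.out.println("Balance: " + account.get().balanceCents());
    }
}
```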

Actors Model – Isolating Mutability

Life Cycle of an Actor

Actors Model

Lock-free approach to concurrency

No shared state between actors

Asynchronous message passing

Mailboxes to buffer incoming messages
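To make those three points concrete, here is a minimal, framework-free Java sketch of an actor: private state, a mailbox buffering incoming messages, and a single thread that processes them asynchronously. It is a teaching sketch, not how Akka implements actors.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class CounterActor {
    private final BlockingQueue<String> mailbox = new LinkedBlockingQueue<>();
    private long count = 0; // state touched only by the actor's own thread

    public CounterActor() {
        Thread worker = new Thread(() -> {
            try {
                while (true) {
                    String msg = mailbox.take();   // blocks until a message arrives
                    if ("increment".equals(msg)) count++;
                    else if ("print".equals(msg)) System.out.println("count = " + count);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        worker.setDaemon(true);
        worker.start();
    }

    // Senders never touch the state; they only enqueue messages (fire and forget).
    public void tell(String message) {
        mailbox.offer(message);
    }

    public static void main(String[] args) throws InterruptedException {
        CounterActor actor = new CounterActor();
        for (int i = 0; i < 10; i++) actor.tell("increment");
        actor.tell("print");
        Thread.sleep(100); // give the actor's thread time to drain the mailbox
    }
}
```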

SEDA – Staged Event-Driven Architecture

Decomposes a complex, event-driven application into a set of stages connected by queues.

The most fundamental aspect of the SEDA architecture is the programming model that supports stage-level backpressure and load management.
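A hedged sketch of that idea in plain Java: two stages connected by a bounded queue, where the bounded capacity is what pushes back on the upstream stage when the downstream one falls behind. The stage names and the capacity of 100 are illustrative.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class SedaSketch {
    public static void main(String[] args) {
        // Bounded queue between stages: when it is full, the producer blocks (backpressure).
        BlockingQueue<String> parseToProcess = new ArrayBlockingQueue<>(100);

        // Stage 1: accepts raw requests and parses them.
        Thread parseStage = new Thread(() -> {
            try {
                for (int i = 0; i < 1_000; i++) {
                    String parsed = "request-" + i;
                    parseToProcess.put(parsed); // blocks if stage 2 is overloaded
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        // Stage 2: consumes parsed requests at its own pace.
        Thread processStage = new Thread(() -> {
            try {
                while (true) {
                    String item = parseToProcess.take();
                    System.out.println("processing " + item); // business logic would go here
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        processStage.setDaemon(true);

        parseStage.start();
        processStage.start();
    }
}
```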

Stages

One actor class per stage

Shared dispatcher

Individually tunable (I/O bound, CPU bound)

Easier to reason about

Code reuse

Dispatchers

ThreadBasedDispatcher – binds one actor to its own thread

ExecutorBasedEventDrivenDispatcher – must be shared between actors

ExecutorBasedEventDrivenWorkStealingDispatcher – must be shared between actors of the same type
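Those dispatcher names come from the Akka version the talk is based on; conceptually the choice boils down to a dedicated thread per actor versus a shared executor that dispatches each message as a small task. A rough plain-Java illustration of the two styles (not the Akka API itself):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class DispatcherStyles {
    public static void main(String[] args) {
        // Thread-based style: each actor gets its own dedicated thread.
        Runnable actorA = () -> System.out.println("actor A handling its mailbox");
        new Thread(actorA, "actor-a-thread").start();

        // Event-driven style: many actors share one executor, and each message
        // is dispatched as a small task onto the shared pool.
        ExecutorService shared = Executors.newFixedThreadPool(4);
        for (int i = 0; i < 10; i++) {
            final int msg = i;
            shared.execute(() -> System.out.println("shared pool handling message " + msg));
        }
        shared.shutdown();
    }
}
```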

Queues

SEDA has a queue-per-stage model

Akka actors have their own mailbox

How do we evenly distribute work?

Work Stealing

"Actors of the same type can be set up to share this dispatcher and during execution time the different actors will steal messages from other actors if they have less messages to process"
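Outside of Akka, the JDK exposes the same idea through Executors.newWorkStealingPool() (Java 8+), which builds a ForkJoinPool whose idle worker threads steal queued tasks from busier ones. A minimal usage sketch:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class WorkStealingSketch {
    public static void main(String[] args) throws InterruptedException {
        // A work-stealing pool sized to the number of available processors.
        ExecutorService pool = Executors.newWorkStealingPool();

        for (int i = 0; i < 20; i++) {
            final int task = i;
            pool.execute(() ->
                System.out.println(Thread.currentThread().getName() + " ran task " + task));
        }

        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```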

Fault Tolerance

Supervisors

Restart actors

Stop after x restarts within y milliseconds

Restart Strategies: OneForOne, AllForOne
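Akka supervisors provide this behaviour out of the box; to show what the policy means, here is a simplified, framework-free sketch that restarts a failing task and gives up after a maximum number of restarts inside a time window. The limits and the failure simulation are arbitrary examples.

```java
public class SimpleSupervisor {
    public static void main(String[] args) {
        int maxRestarts = 3;            // "x times"
        long windowMillis = 10_000;     // "within y milliseconds"

        Runnable flakyActor = () -> {
            if (Math.random() < 0.7) throw new IllegalStateException("simulated crash");
            System.out.println("actor finished normally");
        };

        int restarts = 0;
        long windowStart = System.currentTimeMillis();
        while (true) {
            try {
                flakyActor.run();
                break; // completed without failing
            } catch (RuntimeException e) {
                long now = System.currentTimeMillis();
                if (now - windowStart > windowMillis) {
                    // Window expired: reset the counter, like a fresh supervision window.
                    windowStart = now;
                    restarts = 0;
                }
                restarts++;
                if (restarts > maxRestarts) {
                    System.out.println("too many failures, stopping the actor");
                    break;
                }
                System.out.println("restarting actor (attempt " + restarts + ")");
            }
        }
    }
}
```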

Final Product

I hope this presentation was useful in opening your mind to a new model for building scalable APIs.

Thanks – Felipe Oliveira @soaexpertbr