The Architect's Two Hats

Posted on 21-May-2015


This presentation, given on the Software Architecture course at Brunel University, discusses the interplay between architecture and design, and how designer and architect are really different roles, often with competing goals.

Transcript of The Architect's Two Hats

The Architect’s Two Hats

Ben Stopford, Architect

High Performance Computing, RBS

Once upon a time

In a land far far away

There was a designer

And an architect

And they didn’t really get on

OK – that’s not entirely true – but they have competing objectives

This is the story of the architect’s two hats

So what are the two hats all about?

Design

The internal structure of software at a code level.

Controlling the complexity of the software as it grows.

Keeping it comprehensible and fungible. Basically everything that Steve has been telling you so far.

Architecture

The non-functional stuff that fights against design.

The stuff we do because we have to, because we need to build systems that are fast and resilient. This includes everything from machine specifications to grid infrastructures.

So let's look at how design works in industry.

We'll look at architecture for performance a bit later; right now let's look at design.

Software Design

Why do we need to worry about Design?

Software evolves over time. Unmanaged change leads to spaghetti code bases where classes are highly coupled to one another.

This makes them brittle and difficult to understand.

So how do we avoid this?

We design our solution well so that it is easy to understand and change

And we all know how to do that right?

I’m going to play with you a bit now….

We Design our System Up Front

… with UML

AND…

We get to spot problems early on in the project lifecycle.

Why is that advantageous?

Why? Because it costs less to get the design right early on:

[Chart: the cost of change is low early in the lifecycle and rises steeply later.]

So we have a plan for how to build our application before we start coding.

All we need to do is follow the plan!!

Well…..

That’s what we used to do…

…but problems kept cropping up…

It was really hard to get the design right up front.

The problems we face are hard. The human brain is bad at predicting all the implications of a complex solution up front.

When we came to implementing solutions, our perspective would inevitably change, and so would our design.

And then…

…when we did get the design right…

…the users would go and change the requirements and we'd have to redesign our model.

Ahhh, those pesky users, why can’t they make up their minds?

So…

…in summary…

We find designing up front hard…

...and a bit bureaucratic

…and when we do get it right the users generally go and change the requirements…

…and it all changes in the next release anyway.

So are we taking the right approach?

Software is supposed to be soft.

That means it is supposed to be easy to change.

So is fixing the design up front the right way to do it?

But we’ve seen that changing the design later in the project life cycle is

expensive!

So does the cost curve have to look like this?

Can we design our systems so that we CAN change them later in the lifecycle?

[Chart: the cost of change in an Agile application stays relatively flat across the lifecycle.]

How?

By designing for change.

But not designing for any specific changes.

And writing lots and lots of tests.

Agile Development facilitates this

[Diagram: a code base surrounded on all sides by tests.]
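A minimal sketch of the kind of test meant here (JUnit-style; Pricer and its price method are hypothetical, invented for illustration). The test pins the observable behaviour down, so the design underneath stays free to evolve:

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class PricerTest {

    // Any refactoring that breaks this behaviour fails fast.
    @Test
    public void discountsLargeOrders() {
        Pricer pricer = new Pricer();                      // hypothetical class under test
        assertEquals(90.0, pricer.price(100, 1.0), 0.001); // 100 units at 1.0 with a 10% bulk discount
    }
}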

Dynamic Design

Similar to up-front design except that it is done little and often.

Design just enough to solve the problem we are facing now, and NO MORE.

Refactor it later when a more complex solution is needed.

This means that your system’s design will constantly evolve.

So what does this imply for the Designer?

It implies that the designer must constantly steer the application so that it remains easy to understand and easy to change.

With everyone being responsible for the design.

So the designer's role becomes about steering the application's design through others.

Shepherding the team!

Shepherding the team

People will always develop software in their own way. You can’t change this.

You use the techniques you know to keep the team moving in the right direction.

Occasionally one will run off in some tangential direction, and when you see this you move them back.

How? Encourage preferred patterns. Encourage reuse. Use software process, communication, and frameworks.

In industry we see the design of a system evolve. It does not all happen up front.

We use agile methods to let us get away with changing things later.

We design for change using the patterns Steve has discussed: Separation of Concerns, interfaces, etc.

Now we can switch to the architect's hat.

Architecting for performance

And why it can fight against good design

If speed is going to be a problem….

Then evolutionary design alone may not work

We need to seed the design with clever architectural decisions to ensure we get the speed we need.

We use patterns and frameworks that increase speed

But often at the expense of a clear programming model, i.e. these patterns can obfuscate the implementation.

How it fits together: A project timeline

Set up architecture → design (push patterns, reuse, etc.) → amend architecture.

So let's look at how we architect for speed.

Load Balancing

Ideal for applications that cleanly segregate into discrete pieces of work, e.g. commercial websites.

Load Balancing: Horizontal Scalability

Load balancing is a great way to get horizontal scalability without affecting the programming model much!!
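As a minimal sketch of the idea (a toy class, not any particular product; the "host:port" strings are illustrative): requests are simply dealt out to the servers in turn, and the code making the requests never changes.

import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// A toy round-robin load balancer: each request is routed to the next
// server in the pool, spreading load evenly across identical replicas.
class RoundRobinBalancer {
    private final List<String> servers;              // e.g. "host:port" strings
    private final AtomicInteger next = new AtomicInteger();

    RoundRobinBalancer(List<String> servers) {
        this.servers = servers;
    }

    String pickServer() {
        // floorMod keeps the index non-negative even if the counter overflows.
        return servers.get(Math.floorMod(next.getAndIncrement(), servers.size()));
    }
}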

But it is limited by the time taken for a single request

i.e. it allows us to handle more load, but does not allow us to reduce the time taken to execute a single process.

What if we want to make a single (possibly long-running) process faster?

Let's look at an example.

Sum the numbers between 1 and 10,000,000

long total = 0;
for (int n = 1; n <= 10_000_000; n++) {
    total += n;   // the sum overflows an int, so use a long
}

A service has a hard limit on how fast it can execute this, defined by its clock speed.

Load balancing won’t speed this up.

How do we speed this up in process-intensive cases?

Exploit the parallelism of multiple cores or machines

Make it faster, using threads and multiple cores

import java.util.concurrent.atomic.AtomicLong;

AtomicLong total = new AtomicLong(); // thread-safe shared total

new Thread(() -> {
    for (int n = 1; n <= 100; n++) total.addAndGet(n);
}).start();

new Thread(() -> {
    for (int n = 101; n <= 200; n++) total.addAndGet(n);
}).start();
// … and so on for the remaining ranges
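On a modern JVM (Java 8+) the same idea can be expressed without hand-managed threads; a minimal sketch, letting the runtime partition the range across the available cores:

import java.util.stream.LongStream;

class ParallelSum {
    public static void main(String[] args) {
        // The runtime splits the range across cores and combines the
        // partial sums, with no shared mutable state to get wrong.
        long total = LongStream.rangeClosed(1, 10_000_000).parallel().sum();
        System.out.println(total); // 50000005000000
    }
}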

Larger Machines - Azul

Up to 768 processors in a single machine.

Each processor has up to 1GB local memory.

RAID 10 based disk access (redundancy for speed).

Customised JVM distributes processing over available processors and handles garbage collection in hardware (why is this a problem?).

Expensive (£300-400K). Still has a hard limit of 768 processors.

This is scaling up

But it costs

But we can also scale out, i.e. lots of normal hardware linked together.

Scaling Out is Cost Effective

Scaling out is much more cost effective per teraflop than scaling up.

But it adds complexity to the programming model

Key point: the role of an architect is to construct an application that performs, and that often means a distributed system.

Distributed systems can be much harder to manage than those that run on a single machine.

Why?

Because standard computing models don’t work in a distributed world.

The Trials of Distributed Computing

As we distribute our application across multiple machines, complexity is moved from hardware to software.

The major problem is the lack of fast shared memory

Why?

Read from memory: ~1-10 μs
Read across the wire: ~1-10 ms

You cannot abstract such latencies away. (why not?) You must architect around them.
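For example, using the numbers above: fetching 1,000 objects one at a time across the wire at ~1 ms each takes about a second, while the same 1,000 reads from local memory at ~10 μs take about 10 ms. That factor of a hundred is an architectural fact, not something a clever library can hide.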

We can’t ignore ‘wire’ time. We need to make sure programmers think

about it.

The complexity of shared memory has been moved from the hardware domain to the software domain.

Simple problems like accessing other objects are now more time-consuming.

But let's step back a little and look at why we need to distribute processing.

Sum the numbers between 1 and 10,000,000

long total = 0;
for (int n = 1; n <= 10_000_000; n++) {
    total += n;
}

A service has a hard limit on how fast it can execute this

To execute faster: batch for parallel execution

On machine (1):
int total = 0;
for (int n = 1; n <= 100; n++) total += n;

On machine (2):
int total = 0;
for (int n = 101; n <= 200; n++) total += n;

This leads to the concept of Grid Computing.

Split work into jobs with the data they need. Send these jobs to n machines for processing.
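A local simulation of the idea, with a thread pool standing in for the grid nodes (the names and the four-way split are illustrative):

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

class GridSum {
    public static void main(String[] args) throws Exception {
        ExecutorService grid = Executors.newFixedThreadPool(4); // four "grid nodes"
        List<Future<Long>> results = new ArrayList<>();

        long chunk = 2_500_000; // split 1..10,000,000 into four jobs
        for (int i = 0; i < 4; i++) {
            final long from = i * chunk + 1, to = (i + 1) * chunk;
            results.add(grid.submit(() -> {   // each job sums its own range
                long sum = 0;
                for (long n = from; n <= to; n++) sum += n;
                return sum;
            }));
        }

        long total = 0;
        for (Future<Long> r : results) total += r.get(); // combine partial results
        grid.shutdown();
        System.out.println(total); // 50000005000000
    }
}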

Parallel execution on a grid

[Diagram: the server sends code + data (1)-(4) to four grid nodes and receives the results back.]

Processing time is ~1/4 that of the synchronous case (assuming processing time >> wire time).

Grid computing solves the processing problem, i.e. we can do complex computations very quickly by doing them in parallel.

But it complicates the programming model

Example: Report Generation

for (Report report : reports) {
    Data data = getDataFromDB(report);
    format(data);
    present(data);
}

... so visually...

[Diagram: loop n times: get data from DB → format → present.]

...but on the grid...

[Diagram: the loop becomes a synchronous call into a Grid Runner, which dispatches the get data → format → present steps to the grid asynchronously; results come back via a Grid Callback.]

Note how this muddles the design

A simple loop has become a set of distributed invocations and asynchronous callbacks.
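A sketch of roughly what that looks like in code. GridRunner and GridCallback are hypothetical interfaces, invented here to show the shape, not a real API; Report, Data, getDataFromDB, format and present are the types from the loop above:

import java.util.concurrent.Callable;

// Hypothetical grid API: the job is shipped to a grid node, and the
// result arrives later on an asynchronous callback.
interface GridCallback<T> {
    void onResult(T result);
}

interface GridRunner {
    <T> void submit(Callable<T> job, GridCallback<T> callback);
}

// The simple report loop, rewritten against that API: the body now
// runs remotely, and presentation happens whenever the callback fires.
for (Report report : reports) {
    gridRunner.submit(() -> {
        Data data = getDataFromDB(report);   // runs on a grid node
        format(data);
        return data;
    },
    data -> present(data));                  // fires asynchronously on the client
}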

Grids allow us to process computations quickly through parallelism.

But this leads to the problem of getting fast enough access to the data we want to operate on.

Problems With Data Bottlenecks

Data tends to exist in only one place (the DB).

This is a bottleneck in the architecture.

In a distributed architecture we must avoid bottlenecks, and databases often represent one of these.

Why is the database a bottleneck?

[Diagram: the server sends code to four grid nodes and receives results, but every node must get its data from a single database.]

Databases are slow(ish).

ACID (Atomic, Consistent, Isolated, Durable).

Must write to disk, and this is slow.

What other options do you have?

We can scale up by using memory, not disk.

Memory access is much faster than disk access.

In memory databases

In-memory databases do not have the overhead of writing to disk. This makes them FAST.

However, they have drawbacks:

Durability: what happens if the power is cut or the application crashes? Is the data consistent?

Scalability: memory is limited to what one machine holds. The maximum heap of a 32-bit JVM, for example, is a little over 3GB (why?).

But Single Machine Data Stores Don’t Scale, even if they are in memory

[Diagram: as before, but with an in-memory database instead; the single data store is still the BOTTLENECK.]

What we need is a distributed data source.

Welcome to the world of distributed caching

Distributed Caching Solves This Problem by Splitting the Data Over All Servers

[Diagram: five servers each hold 1/5 of the data, serving four clients.]

This is parallel processing for data access. Data requests are split across multiple machines.
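A minimal sketch of the partitioning trick (a toy, not any particular caching product): each key hashes to exactly one node, so both the data and the request load spread across the cluster.

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy partitioned cache: each in-memory map stands in for one server.
class PartitionedCache<K, V> {
    private final List<Map<K, V>> nodes = new ArrayList<>();

    PartitionedCache(int nodeCount) {
        for (int i = 0; i < nodeCount; i++) nodes.add(new HashMap<>());
    }

    // A key always hashes to the same node, so requests go straight there.
    private Map<K, V> nodeFor(K key) {
        return nodes.get(Math.floorMod(key.hashCode(), nodes.size()));
    }

    void put(K key, V value) { nodeFor(key).put(key, value); }
    V get(K key)             { return nodeFor(key).get(key); }
}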

Now we have removed the data bottleneck

[Diagram: the server sends code to the grid nodes (1) and receives results (2); the nodes read from a data fabric that holds 1/6 of the data on each of six machines.]

This gives us access to a fast shared memory across multiple machines

We are now massively parallel

With lightning fast data access.

But we can get faster than this. How?

[Diagram: the compute grid and the data fabric start as two separate layers, so every data access still crosses the wire between them.]

Superimpose compute and data fabrics into one entity.

[Diagram: each machine now both holds 1/6 of the data and runs the code sent to it; the server sends code (1) and receives results (2), and data access is local.]
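In code terms the trick is to ship the function to the node that already owns the data; a sketch of a method that could be added to the toy PartitionedCache above (again illustrative, not a real fabric API):

import java.util.function.Function;

// Run the task on the node that owns the key: the read is local to that
// machine, so no data crosses the wire, only the code and the result.
<R> R executeOn(K key, Function<V, R> task) {
    V local = nodeFor(key).get(key);
    return task.apply(local);
}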

So to speed up a system:

Load balance services.

Run compute-intensive tasks in parallel on a grid.

Avoid data bottlenecks with distributed caching.

So remember the two hats

In industry we use Agile methods to evolve the system's design. It does not all happen up front.

We accept change as part of our world and use the patterns Steve has discussed to control it: Separation of Concerns, interfaces, etc.

But architecting for speed means distributed computing.

Distributed/parallel computing adds significant complexity.

It is hard to factor in later (although not impossible)

The trick is balancing them!

How much architecture is enough?

But fundamentally, you need them both!

Thanks

Slides: http://www.BenStopford.com

Vacation Placements: Benjamin.Stopford@rbs.com