
Coding for Atomic Shared Memory Emulation

Viveck R. Cadambe (MIT)

Joint with Prof. Nancy Lynch (MIT), Prof. Muriel Médard (MIT) and Dr. Peter Musial (EMC)


Erasure Coding for Distributed Storage

• Locality, repair bandwidth, caching and content distribution [Gopalan et al. 2011, Dimakis-Godfrey-Wu-Wainwright 10, Wu-Dimakis 09, Niesen-Ali 12]

• Queuing theory [Ferner-Medard-Soljanin 12, Joshi-Liu-Soljanin 12, Shah-Lee-Ramchandran 12]

This talk: theory of distributed computing; considerations for storing data that changes

Consistency: the value changes, and a read should return the "latest" version

Also: failure tolerance, low storage costs, fast reads and writes


Shared Memory Emulation - History

Atomic (consistent) shared memory
• [Lamport 1986]
• Cornerstone of distributed computing and multi-processor programming

Emulation over distributed storage systems
• "ABD" algorithm [Attiya-Bar-Noy-Dolev 95], 2011 Dijkstra Prize
• Amazon Dynamo key-value store [DeCandia et al. 2008]
• Replication-based

Costs of emulation (this talk)
• Low-cost coding-based algorithm
• Communication and storage costs
• [C-Lynch-Medard-Musial 2014], preprint available

Atomicity [Lamport 86], aka linearizability [Herlihy, Wing 90]

[Figure: timelines of overlapping write and read operations, with examples of executions that are atomic and executions that are not atomic.]

Informally, an execution is atomic if every operation appears to take effect at a single point in time between its invocation and its response, consistent with the real-time order of non-overlapping operations; a checker for this condition is sketched below.
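To make the figures' distinction concrete, here is a minimal brute-force checker for a single read/write register: it searches for a total order of the operations that respects real-time precedence and in which every read returns the most recently written value. This is only an illustration for tiny histories; the function and variable names are invented for this sketch and are not part of any protocol discussed in the talk.

```python
from itertools import permutations

# An operation is a tuple (kind, value, invocation_time, response_time),
# where kind is "write" or "read" and value is the value written or returned.

def is_atomic(history, initial=None):
    """Brute-force linearizability check for one register (small histories only)."""
    n = len(history)
    for order in permutations(range(n)):
        pos = {idx: i for i, idx in enumerate(order)}
        # 1) Respect real-time order: if a responds before b is invoked,
        #    a must come before b in the candidate total order.
        if any(history[a][3] < history[b][2] and pos[a] > pos[b]
               for a in range(n) for b in range(n)):
            continue
        # 2) Register semantics: every read returns the latest preceding write
        #    (or the initial value if no write precedes it).
        current, ok = initial, True
        for idx in order:
            kind, value, _, _ = history[idx]
            if kind == "write":
                current = value
            elif value != current:
                ok = False
                break
        if ok:
            return True
    return False

# Two executions in the spirit of the slides (times are illustrative):
atomic_history = [("write", 1, 0, 4), ("read", 1, 2, 6), ("read", 1, 5, 7)]
stale_history = [("write", 1, 0, 4), ("read", 1, 2, 3), ("read", None, 5, 7)]
print(is_atomic(atomic_history))   # True
print(is_atomic(stale_history))    # False: the later read returns an older value
```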



Distributed Storage Model

• Client-server architecture; nodes can fail (the number of server failures is limited)
• Point-to-point reliable links (arbitrary delay)
• Nodes do not know whether other nodes have failed
• An operation should not have to wait for other operations to complete

[Figure: write clients and read clients connected to a set of servers.]

Requirements and cost measure

Design the write, read and server protocols such that:
• Atomicity is guaranteed
• Operations may be concurrent; no waiting

Communication overhead: number of bits sent over the links
Storage overhead: (worst-case) server storage cost

The ABD algorithm (sketch)

Quorum set: every majority of server nodes. Any two quorum sets intersect in at least one node. The algorithm works as long as at least one quorum set is available.


Write: send the time-stamped value to every server; return after receiving acks from a quorum.

Read: send a read query; wait for responses from a quorum; send the latest value back to the servers; return the latest value after receiving acks from a quorum.

Servers: store the latest value received; send an ack. Respond to a read query with the stored value and its time-stamp.
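A minimal single-process sketch of the ABD read and write logic, with messages replaced by direct method calls, so the "wait for a quorum" steps degenerate into counting replies; there is no asynchrony, failure, or networking here, and all class and method names are invented for this illustration rather than taken from the ABD paper.

```python
class Server:
    """Stores the value with the highest time-stamp seen so far."""
    def __init__(self):
        self.ts, self.value = (0, 0), None      # time-stamp = (counter, client_id)

    def write(self, ts, value):                 # store if newer, then ack
        if ts > self.ts:
            self.ts, self.value = ts, value
        return "ack"

    def read_query(self):                       # reply with current (time-stamp, value)
        return self.ts, self.value


class ABDClient:
    def __init__(self, servers, client_id):
        self.servers, self.client_id, self.counter = servers, client_id, 0
        self.quorum = len(servers) // 2 + 1     # any majority of servers

    def write(self, value):
        # Single-writer sketch: a multi-writer version would first query the
        # servers for the highest time-stamp before picking a new one.
        self.counter += 1
        ts = (self.counter, self.client_id)
        acks = [s.write(ts, value) for s in self.servers]   # send to every server
        assert acks.count("ack") >= self.quorum              # return after a quorum of acks

    def read(self):
        # Phase 1: query the servers and pick the latest (time-stamp, value) pair.
        ts, value = max(s.read_query() for s in self.servers)
        # Phase 2: write the latest pair back before returning, so that any later
        # read is guaranteed to see a value at least this recent (atomicity).
        acks = [s.write(ts, value) for s in self.servers]
        assert acks.count("ack") >= self.quorum
        return value


servers = [Server() for _ in range(5)]
writer, reader = ABDClient(servers, client_id=1), ABDClient(servers, client_id=2)
writer.write("hello")
print(reader.read())    # prints "hello" once the write has completed
```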

The ABD algorithm (summary)

• The ABD algorithm ensures atomic operations.
• Termination of operations is ensured as long as a majority of server nodes do not fail.
• Implication: a networked distributed storage system can be used as a shared memory.
• Replication is used to ensure failure tolerance.

Performance Analysis

[Table: storage cost, read communication cost, and write communication cost of ABD.]

• f represents the number of failures
• A lower communication cost algorithm appears in [Fan-Lynch 03]


Shared Memory Emulation – Erasure Coding

• [Hendricks-Ganger-Reiter 07, Dutta-Guerraoui-Levy 08, Dobre et al. 13, Androulaki et al. 14]

• New algorithm, with a formal analysis of costs

• Outperforms previous algorithms in certain aspects:
  - Previous algorithms incur infinite worst-case storage costs
  - Previous algorithms incur large communication costs


Erasure Coded Shared Memory

Example: (6,4) MDS code
• Value recoverable from any 4 coded packets
• Size of a coded packet is ¼ the size of the value: smaller packets, smaller overheads
• New constraint: a reader needs 4 packets with the same time-stamp
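A minimal sketch of a (6,4) MDS code of the kind this example describes, written as a systematic Reed-Solomon code over the prime field GF(257): the value is split into 4 symbols, each server receives one evaluation of the degree-3 interpolating polynomial, and any 4 of the 6 coded packets recover the value. The field choice and all names here are illustrative assumptions, not the construction used in the talk.

```python
P, N, K = 257, 6, 4   # prime field size, number of servers, number of data symbols

def lagrange_eval(points, x):
    """Evaluate the unique degree < len(points) polynomial through `points` at x (mod P)."""
    total = 0
    for j, (xj, yj) in enumerate(points):
        num = den = 1
        for m, (xm, _) in enumerate(points):
            if m != j:
                num = num * (x - xm) % P
                den = den * (xj - xm) % P
        total = (total + yj * num * pow(den, -1, P)) % P
    return total

def encode(symbols):
    """Map K data symbols to N coded packets (x, y); packets 1..K carry the data itself."""
    assert len(symbols) == K
    data_points = list(enumerate(symbols, start=1))            # (1, s1), ..., (4, s4)
    return [(x, lagrange_eval(data_points, x)) for x in range(1, N + 1)]

def decode(packets):
    """Recover the K data symbols from any K coded packets."""
    assert len(packets) >= K
    return [lagrange_eval(packets[:K], x) for x in range(1, K + 1)]

value = [11, 42, 7, 200]            # a "value" split into 4 symbols (each < 257)
packets = encode(value)             # one coded packet per server
survivors = [packets[1], packets[3], packets[4], packets[5]]   # any 4 of the 6
print(decode(survivors) == value)   # True: value recoverable from any 4 packets
```

Each packet is a single symbol, i.e. ¼ the size of the 4-symbol value, which is where the communication and storage savings over replication come from.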

Coded Shared Memory – Quorum set-up

Quorum set: every subset of 5 server nodes. Any two quorum sets intersect in at least 4 nodes. The algorithm works as long as at least one quorum set is available.


Coded Shared Memory – Why is it challenging?

Challenges:
- Reveal coded elements to readers only when enough elements have propagated
- Discard old versions safely

Solutions:
- Write in multiple phases
- Servers store multiple versions
- Store all the write-versions that are concurrent with a read


Coded Shared Memory – Protocol overview

Write: send a time-stamped coded symbol to every server; send a finalize message after getting acks from a quorum; return after receiving acks from a quorum.

Read: send a read query; wait for time-stamps from a quorum; send a request with the latest time-stamp to the servers; decode and return the value after receiving acks/symbols from a quorum.

Servers:
- Store the coded symbol; keep the latest δ codeword symbols and delete older ones; send an ack.
- Set the finalize flag for a time-stamp on receiving a finalize message; send an ack.
- Respond to a read query with the latest finalized time-stamp.
- Finalize the requested time-stamp; respond to the read request with the codeword symbol if it exists, else send an ack.
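A small sketch of the server-side state this overview implies: a bounded map from time-stamps to coded symbols (at most δ entries, with older ones pruned) plus a finalize flag per time-stamp. The class name, tag format, and pruning rule are illustrative assumptions, not the exact bookkeeping of the algorithm in [C-Lynch-Medard-Musial 2014].

```python
class CodedServer:
    """Server state sketch: bounded version store plus finalize flags."""
    def __init__(self, delta):
        self.delta = delta        # maximum number of codeword symbols kept
        self.symbols = {}         # time-stamp -> coded symbol
        self.finalized = set()    # time-stamps whose finalize message has arrived

    def on_write(self, ts, symbol):
        self.symbols[ts] = symbol
        # Keep only the latest delta versions; delete older ones.
        for old in sorted(self.symbols)[:-self.delta]:
            del self.symbols[old]
        return "ack"

    def on_finalize(self, ts):
        self.finalized.add(ts)
        return "ack"

    def on_read_query(self):
        # Report the latest finalized time-stamp (None if nothing is finalized yet).
        return max(self.finalized, default=None)

    def on_read_request(self, ts):
        # Finalize the requested time-stamp; return the symbol if still stored, else an ack.
        self.finalized.add(ts)
        return self.symbols.get(ts, "ack")


s = CodedServer(delta=2)
s.on_write((1, "w1"), symbol=17); s.on_finalize((1, "w1"))
s.on_write((2, "w1"), symbol=93)
s.on_write((3, "w1"), symbol=5)         # version (1, "w1") is pruned here
print(s.on_read_query())                 # (1, 'w1'): latest finalized time-stamp
print(s.on_read_request((2, "w1")))      # 93: this symbol is still stored
```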

Coded Shared Memory – Protocol overview

• Use an (N, k) MDS code, where N is the number of servers
• Ensures atomic operations
• Termination of operations is ensured as long as:
  - the number of failed nodes is smaller than (N-k)/2 (see the quorum arithmetic below)
  - the number of writes concurrent with a read is smaller than δ
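A short back-of-the-envelope, assuming quorums of size ⌈(N+k)/2⌉ (the choice consistent with the earlier example, where N = 6 and k = 4 give quorums of 5 servers); this is a reconstruction of the arithmetic behind the conditions above, not a statement quoted from the slides.

```latex
% Quorum size for an (N,k) MDS code:
\[
  Q \;=\; \left\lceil \tfrac{N+k}{2} \right\rceil
\]
% Any two quorums intersect in at least k servers, so a reader that hears from a
% quorum can collect k codeword symbols with a common time-stamp and decode:
\[
  |Q_1 \cap Q_2| \;\ge\; 2Q - N \;\ge\; k
\]
% Liveness: an operation can always find a responsive quorum if N - f >= Q, i.e.
\[
  f \;\le\; \left\lfloor \tfrac{N-k}{2} \right\rfloor
\]
% consistent with the slide's (N-k)/2 threshold on the number of failed nodes.
% Example: N = 6, k = 4 gives Q = 5, intersections of size >= 4, and f <= 1.
```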

Performance comparisons

[Table: storage, read communication, and write communication costs of ABD versus our algorithm.]

• N represents the number of nodes, f represents the number of failures
• δ represents the maximum number of writes concurrent with a read
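The table's entries are not recoverable from this transcript, but a rough storage comparison follows from the protocol description above, assuming each server keeps at most δ coded symbols of size 1/k of the value and ignoring time-stamps and other metadata; the exact constants in the paper may differ.

```latex
% Worst-case storage per server, in multiples of the value size:
\[
  \text{ABD (replication)}: \; 1
  \qquad
  \text{coded algorithm}: \; \le \frac{\delta}{k}
\]
% Summed over all N servers:
\[
  \text{ABD}: \; N
  \qquad
  \text{coded}: \; \le \frac{N\delta}{k}
\]
% Under this assumption, coding stores less than replication whenever \delta < k,
% i.e. for moderate client activity (few writes concurrent with a read), which is
% the regime highlighted in the Main Insights slide below.
```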


Proof Steps

• After every operation terminates:
  - there is a quorum of servers with the codeword symbol
  - there is a quorum of servers with the finalize label
  - because every pair of quorums intersects in at least k servers, readers can decode the value

• When a codeword symbol is deleted at a server:
  - every operation that wants that time-stamp has terminated
  - (or the concurrency bound δ is violated)

Main Insights

• Significant savings on network traffic overheads
  - Reflects the classical gain of erasure coding over replication

• (New insight) Storage overheads depend on client activity
  - Storage overhead is proportional to the number of writes concurrent with a read
  - Better than classical techniques for moderate client activity


Storage costs

[Figure: storage overhead versus the number of writes concurrent with a read, with curves for ABD and for our algorithm. What is the fundamental cost curve?]


Future Work – Many open questions

Refinements of our algorithm
- (Ongoing) More robustness to client node failures

Information-theoretic bounds on costs
- New coding schemes

Finer network models, finer source models
- Erasure channels, different topologies, wireless channels
- Correlations across versions

Dynamic networks
- An interesting replication-based algorithm in [Gilbert-Lynch-Shvartsman 03]
- Study of costs in terms of network dynamics