Distributed Anemone: Transparent Low-Latency Access to Remote Memory

Transcript of Distributed Anemone: Transparent Low-Latency Access to Remote Memory

Page 1: Distributed Anemone: Transparent Low-Latency Access to Remote Memory

Distributed Anemone: Transparent Low-Latency Access to Remote Memory

Reporter: Min-Jyun Chen

Authors: Michael R. Hines, Jian Wang, Kartik Gopalan

Submission year: 2006

Page 2: Distributed Anemone: Transparent Low-Latency Access to Remote Memory


Abstract

Performance of large memory applications degrades rapidly once the system hits the physical memory limit and starts paging to local disk.

Distributed Anemone (Adaptive Network Memory Engine) is a lightweight and distributed system that pools together the collective memory resources of multiple machines across a gigabit Ethernet LAN.

Our kernel-level prototype features fully distributed resource management, low-latency paging, resource discovery, load balancing, soft-state refresh, and support for 'jumbo' Ethernet frames.

Page 3: Distributed Anemone: Transparent Low-Latency Access to Remote Memory


Outline

Introduction

Design and Implementation

Performance

Conclusions

Page 4: Distributed Anemone: Transparent Low-Latency Access to Remote Memory

Introduction

Memory resource management is distributed across the whole cluster. There is no single control node.

Clients can perform load-balancing across multiple memory servers, taking into account their memory usage and paging load.


Page 5: Distributed Anemone: Transparent Low-Latency Access to Remote Memory

Introduction (cont.)

A distributed resource discovery mechanism enables clients to discover newly available servers and track memory usage across the cluster.

A soft-state refresh mechanism enables memory servers to track the liveness of clients and their pages.


Page 6: Distributed Anemone: Transparent Low-Latency Access to Remote Memory


Design and Implementation

Client and Server Modules

Transparent Virtualization

Distributed Resource Discovery

Soft-State Refresh

Server Load Balancing

Fault-tolerance

Page 7: Distributed Anemone: Transparent Low-Latency Access to Remote Memory


Client and Server Modules (1/6)

Page 8: Distributed Anemone: Transparent Low-Latency Access to Remote Memory


Transparent Virtualization (2/6)

To enable large memory applications (LMAs) to transparently access remote memory, the client module exports a block device interface (BDI) to the pager.

Any single client can transparently access memory from multiple servers as one pool via the BDI.

Any single server can share its unused memory pool among multiple clients simultaneously.
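
To illustrate the pooling idea, the following user-space C sketch shows one way a BDI-style client could map a linear page number on its virtual block device to a (server, slot) pair, so that the memory offered by several servers looks like a single device to the pager. The server names, capacities, and first-fit mapping policy are assumptions made for this example, not the actual Anemone code.

    /* Hypothetical sketch (not the Anemone implementation): map a linear
     * page number on the client's virtual block device onto one of several
     * memory servers, so the pager sees a single pooled device. */
    #include <stdint.h>
    #include <stdio.h>

    struct mem_server {
        const char *name;      /* placeholder server identifier */
        uint64_t    capacity;  /* pages this server has offered */
    };

    /* Example pool of two servers; the capacities are made-up numbers. */
    static struct mem_server pool[] = {
        { "server-A", 262144 },    /* 1 GB worth of 4 KB pages */
        { "server-B", 131072 },    /* 512 MB */
    };

    /* Walk the pool in order and return the (server, slot) pair that
     * backs the requested page, or -1 if it is beyond pooled capacity. */
    static int map_page(uint64_t page_no, int *server, uint64_t *slot)
    {
        for (unsigned i = 0; i < sizeof(pool) / sizeof(pool[0]); i++) {
            if (page_no < pool[i].capacity) {
                *server = (int)i;
                *slot = page_no;
                return 0;
            }
            page_no -= pool[i].capacity;
        }
        return -1;
    }

    int main(void)
    {
        int s;
        uint64_t slot, page_no = 300000;       /* some swapped-out page */
        if (map_page(page_no, &s, &slot) == 0)
            printf("page %llu -> %s slot %llu\n",
                   (unsigned long long)page_no, pool[s].name,
                   (unsigned long long)slot);
        return 0;
    }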

Page 9: Distributed Anemone: Transparent Low-Latency Access to Remote Memory


Distributed Resource Discovery (3/6)

Seamlessly absorb the increase/decrease in cluster-wide memory capacity, insulating LMAs from resource fluctuations.

Allow any server to reclaim part or all of its contributed memory.
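
A minimal sketch, assuming a simple UDP broadcast, of how a memory server might periodically announce the number of pages it is willing to lend so that clients can discover it and track cluster-wide capacity. The port number, message layout, and the use of UDP are assumptions for illustration, not the actual Anemone wire format.

    /* Hypothetical sketch of a resource-discovery announcement: the server
     * broadcasts how many pages it can offer; clients listening on the same
     * port would update their server lists accordingly. */
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdint.h>
    #include <sys/socket.h>
    #include <unistd.h>

    #define DISCOVERY_PORT 9930        /* assumed port, not from the paper */

    struct announce {
        uint32_t free_pages;           /* pages the server is willing to lend */
        uint32_t recent_requests;      /* paging requests served recently */
    };

    int main(void)
    {
        int sock = socket(AF_INET, SOCK_DGRAM, 0);
        int yes = 1;
        setsockopt(sock, SOL_SOCKET, SO_BROADCAST, &yes, sizeof(yes));

        struct sockaddr_in dst = { 0 };
        dst.sin_family = AF_INET;
        dst.sin_port = htons(DISCOVERY_PORT);
        dst.sin_addr.s_addr = htonl(INADDR_BROADCAST);

        struct announce msg = { htonl(250000), htonl(42) };  /* example values */
        for (int i = 0; i < 3; i++) {          /* a real server would loop forever */
            sendto(sock, &msg, sizeof(msg), 0,
                   (struct sockaddr *)&dst, sizeof(dst));
            sleep(1);                          /* periodic announcement interval */
        }
        close(sock);
        return 0;
    }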

Page 10: Distributed Anemone: Transparent Low-Latency Access to Remote Memory


Soft-State Refresh (4/6)

Soft-state also permits servers to track the liveness of clients whose pages they store.

Each client periodically transmits a Session Refresh message to each server that hosts its pages, which carries a client-specific session ID.
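
A small sketch of the server-side bookkeeping this implies, assuming a per-client record of the last session ID and refresh time: if the session ID changes the client has restarted and its old pages are stale, and if refreshes stop arriving the client is treated as dead. The timeout value and field names are assumptions, not taken from the prototype.

    /* Hypothetical sketch of soft-state tracking on a memory server. */
    #include <stdint.h>
    #include <stdio.h>
    #include <time.h>

    #define REFRESH_TIMEOUT 30         /* seconds; assumed, not from the paper */

    struct client_state {
        uint32_t session_id;           /* ID carried in Session Refresh messages */
        time_t   last_refresh;         /* when the last refresh arrived */
        long     pages_stored;         /* pages held on behalf of this client */
    };

    /* Called when a Session Refresh message arrives from the client. */
    static void on_refresh(struct client_state *c, uint32_t session_id)
    {
        if (c->session_id != session_id) {
            /* New session ID: the client restarted, so its old pages are stale. */
            printf("client restarted, dropping %ld stale pages\n", c->pages_stored);
            c->pages_stored = 0;
            c->session_id = session_id;
        }
        c->last_refresh = time(NULL);
    }

    /* Called periodically to expire clients that have stopped refreshing. */
    static int is_dead(const struct client_state *c)
    {
        return time(NULL) - c->last_refresh > REFRESH_TIMEOUT;
    }

    int main(void)
    {
        struct client_state c = { 7, time(NULL), 1200 };
        on_refresh(&c, 7);             /* normal refresh: same session ID */
        on_refresh(&c, 8);             /* ID changed: client rebooted */
        printf("client dead? %s\n", is_dead(&c) ? "yes" : "no");
        return 0;
    }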

Page 11: Distributed Anemone: Transparent Low-Latency Access to Remote Memory


Server Load Balancing (5/6)

Clients balance the paging load across servers using two metrics:

The number of pages stored at each active server.

The number of paging requests serviced by each active server.
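
A hypothetical sketch of the client-side server choice, assuming the two metrics above are simply added with equal weight; the actual weighting used by Anemone is not given in this summary.

    /* Hypothetical sketch: pick the memory server with the smallest combined
     * load, where "load" mixes pages already stored there and paging
     * requests it has recently serviced (equal weights, purely illustrative). */
    #include <stdint.h>
    #include <stdio.h>

    struct server_info {
        const char *name;
        uint64_t pages_stored;         /* pages kept on this server */
        uint64_t requests_served;      /* paging requests recently serviced */
    };

    static int pick_server(const struct server_info *s, int n)
    {
        int best = 0;
        uint64_t best_cost = UINT64_MAX;
        for (int i = 0; i < n; i++) {
            uint64_t cost = s[i].pages_stored + s[i].requests_served;
            if (cost < best_cost) {
                best_cost = cost;
                best = i;
            }
        }
        return best;
    }

    int main(void)
    {
        struct server_info servers[] = {
            { "server-A", 50000, 1200 },
            { "server-B", 20000,  800 },
            { "server-C", 20000, 4000 },
        };
        int i = pick_server(servers, 3);
        printf("send the next page to %s\n", servers[i].name);  /* server-B */
        return 0;
    }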

Page 12: Distributed Anemone: Transparent Low-Latency Access to Remote Memory


Fault-tolerance (6/6)

To maintain a local disk-based copy of every memory page swapped out over the network.

To keep redundant copies of each page on multiple remote servers.
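
A rough sketch of how the two options above could combine on the page write-out path: each evicted page is sent to several remote servers and also written to a local disk copy. The replication degree and the stub send/write functions are invented for this illustration and are not the actual RMAP code.

    /* Hypothetical sketch of page replication on write-out. */
    #include <stdio.h>

    #define REPLICAS 2                 /* assumed replication degree */

    static void send_to_server(int server, unsigned long page_no)
    {
        /* Stub: a real client would transmit the page over the network. */
        printf("  replica -> server %d stores page %lu\n", server, page_no);
    }

    static void write_local_copy(unsigned long page_no)
    {
        /* Stub: a real client would also write the page to local disk. */
        printf("  backup  -> local disk stores page %lu\n", page_no);
    }

    static void write_out_page(unsigned long page_no, int n_servers)
    {
        /* Redundant copies on multiple remote servers. */
        for (int r = 0; r < REPLICAS && r < n_servers; r++)
            send_to_server(r, page_no);
        /* Disk-based copy of every page swapped out over the network. */
        write_local_copy(page_no);
    }

    int main(void)
    {
        write_out_page(12345, 3);
        return 0;
    }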

Page 13: Distributed Anemone: Transparent Low-Latency Access to Remote Memory


Performance

Paging Latency

Application Speedup

Tuning the Client RMAP Protocol

Page 14: Distributed Anemone: Transparent Low-Latency Access to Remote Memory


Paging Latency (1/3)

Page 15: Distributed Anemone: Transparent Low-Latency Access to Remote Memory


Application Speedup (2/3)

Page 16: Distributed Anemone: Transparent Low-Latency Access to Remote Memory


Tuning the Client RMAP Protocol (3/3)

Page 17: Distributed Anemone: Transparent Low-Latency Access to Remote Memory


Conclusions

We are incorporating fault-tolerance mechanisms into Anemone using page replication across servers as well as local disk.

Page 18: Distributed Anemone: Transparent Low-Latency Access to Remote Memory


Thank You!