Informed prefetching in distributed multi level storage systems

Informed Prefetching in Distributed Multi-Level Storage Systems

Maen M. Al Assaf

Advisor: Xiao Qin

Department of Computer Science and Software Engineering, Auburn University, Auburn, AL

Download the dissertation at: http://www.eng.auburn.edu/~xqin/theses/PhD-Al-Assaf-Infomed-Prefetching.pdf

The abstract of this dissertation can be found at: http://etd.auburn.edu/etd/handle/10415/2935

Description

In this dissertation, we present pipelined prefetching mechanisms that use application-disclosed access patterns to prefetch hinted blocks in multi-level storage systems. The fundamental concept in our approach is to split an informed prefetching process into a set of independent prefetching steps among multiple storage levels (e.g., main memory, solid state disks, and hard disk drives). In the first part of this study, we show that a prefetching pipeline across multiple storage levels is a viable and effective technique for allocating file buffers at the multiple-level storage devices. Our approaches (iPipe and IPO) extend previous ideas of informed prefetching in two ways: (1) they reduce applications' I/O stalls by keeping hinted data in caches residing in the main memory, solid state disks, and hard drives; and (2) they pipeline the prefetching so that multiple informed prefetching mechanisms work semi-dependently to fetch blocks from low-level (slow) to high-level (fast) storage devices. Our iPipe and IPO strategies, integrated with the pipelining mechanism, significantly reduce overall I/O access time in multiple-level storage systems. Next, we propose a third prefetching scheme, called IPODS, that aims to maximize the benefit of informed prefetches and to hide network latencies in a distributed storage system. Finally, we develop a simulator to evaluate the performance of the proposed informed prefetching schemes in the context of multiple-level storage systems, and we implement a prototype to validate the simulator's accuracy. Our results show that iPipe improves system performance by 56% in most informed prefetching cases, while IPO and IPODS improve system performance by 56% and 6%, respectively, in critical informed prefetching cases across a wide range of real-world I/O traces.

Transcript of Informed prefetching in distributed multi level storage systems

Page 1: Informed prefetching in distributed multi level storage systems

Informed Prefetching in Distributed Multi-Level Storage Systems

Maen M. Al Assaf

Advisor: Xiao Qin

Department of Computer Science and Software Engineering, Auburn University, Auburn, AL

Download the dissertation at: http://www.eng.auburn.edu/~xqin/theses/PhD-Al-Assaf-Infomed-Prefetching.pdf

The abstract of this dissertation can be found at: http://etd.auburn.edu/etd/handle/10415/2935

Page 2: Informed prefetching in distributed multi level storage systems


Background

• Informed Prefetching.

• Parallel multi-level storage system.

• Distributed parallel multi-level storage system.

• Prefetching pipelining to the upper level.

• Bandwidth limitation (enough vs. limited).

Page 3: Informed prefetching in distributed multi level storage systems


Informed Caching and Prefetching

• My research's core reference:

(TIP) R. H. Patterson, G. A. Gibson, E. Ginting, D. Stodolsky, and J. Zelenka: Informed prefetching and caching. In Proceedings of the 15th ACM Symposium on Operating Systems Principles, pages 79-95, Copper Mountain, CO, USA, 1995.

Page 4: Informed prefetching in distributed multi level storage systems


Informed Caching and Prefetching

• Applications can disclose hints about their future accesses.

• Exploit the parallelism of parallel storage.

• Make the application more CPU-bound than I/O-bound.
– Reducing I/O stalls.
– Reducing the application's elapsed time.

• Tdisk: disk read latency (prefetching time).

Page 5: Informed prefetching in distributed multi level storage systems


Informed Caching and Prefetching

• Cost-benefit model to determine the number of prefetching buffer caches (TIP).

R. H. Patterson, G. A. Gibson, E. Ginting, D. Stodolsky, and J. Zelenka: Informed prefetching and caching. In Proceedings of the 15th ACM Symposium on Operating Systems Principles, pages 79-95, Copper Mountain, CO, USA, 1995.
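The numeric examples later in this deck follow a simple per-group stall model (my paraphrase of the cost-benefit reasoning, not TIP's full model): with Xcache prefetch buffers outstanding, one fetch of latency Tdisk overlaps the consumption of Xcache buffers, so

\[
T_{\text{stall}}(X_{\text{cache}}) \;=\; \max\bigl(0,\; T_{\text{disk}} - X_{\text{cache}}\,(T_{\text{cpu}} + T_{\text{hit}} + T_{\text{driver}})\bigr) \quad \text{every } X_{\text{cache}} \text{ accesses},
\]

and stalls vanish once \(X_{\text{cache}} \ge T_{\text{disk}} / (T_{\text{cpu}} + T_{\text{hit}} + T_{\text{driver}})\), TIP's prefetch horizon.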

Page 6: Informed prefetching in distributed multi level storage systems


Multi-level storage system

• Speed (access time) vs. capacity.

Page 7: Informed prefetching in distributed multi level storage systems


Parallel (Multi-Level) Storage Systems

• High I/O performance, bandwidth, scalability, and reliability.

• Disk arrays (enough / limited bandwidth).

Page 8: Informed prefetching in distributed multi level storage systems


Distributed Parallel (Multi-level) Storage Systems

• High I/O performance, bandwidth, scalability, and reliability.

T. M. Madhyastha, G. A. Gibson, and C. Faloutsos: Informed prefetching of collective input/output requests. In Proceedings of the 1999 ACM/IEEE Conference on Supercomputing, Portland, Oregon, USA, 1999.

Page 9: Informed prefetching in distributed multi level storage systems


Research Motivations

• The growing need for multi-level storage systems,

• The I/O access hints offered by applications, and

• The possibility of multiple prefetching mechanisms to work in parallel.

Page 10: Informed prefetching in distributed multi level storage systems


Research Objectives

• Minimizing the prefetching time (I/O Delays).

• Reducing application's stalls and elapsed time.

• Reducing prefetching time in distributed storage systems.

Page 11: Informed prefetching in distributed multi level storage systems


My Solutions

• iPipe: Informed Prefetching Pipelining for Multi-Level Storage Systems.

• IPO: Informed Prefetching Optimization in Multi-Level Storage Systems (optimizes iPipe).

• IPODS: Informed Prefetching in Distributed Multi-Level Storage Systems.

Page 12: Informed prefetching in distributed multi level storage systems


Research Tasks

• iPipe: Informed Prefetching Pipelining for Multi-Level Storage Systems.

• IPO: Informed Prefetching Optimization in Multi-Level Storage Systems.

• IPODS: Informed Prefetching in Distributed Multi-Level Storage Systems.

• Prototyping: prototyping results for the solutions.

Page 13: Informed prefetching in distributed multi level storage systems


iPipe and IPO Architecture

Page 14: Informed prefetching in distributed multi level storage systems


IPODS Architecture

• Network and server latency. Data is striped.

T. M. Madhyastha, G. A. Gibson, and C. Faloutsos: Informed prefetching of collective input/output requests. In Proceedings of the 1999 ACM/IEEE Conference on Supercomputing, Portland, Oregon, USA, 1999.

Page 15: Informed prefetching in distributed multi level storage systems


Assumptions

• Pipelined data are copies: (1) only a small portion is copied; (2) data are not moved back and forth.

• Initial data placement is on the HDD: (1) the worst case; (2) the larger capacity.

• Writes and data consistency.

Page 16: Informed prefetching in distributed multi level storage systems


Validations and Prototyping Test Bed

• The following are our lab's test bed devices:
– Memory: Samsung 3 GB RAM main memory.
– HDD: Western Digital 500 GB SATA, 16 MB cache, WD5000AAKS.
– SSD: Intel 2 Gb/s SATA SSD, 80G sV 1A.
– Network switch: Dell PowerConnect 2824.

Page 17: Informed prefetching in distributed multi level storage systems


iPipe and IPO Parameters

Parameter            Description                                Value
Block size           Block size in MB                           10 MB
Tcpu+Thit+Tdriver    Time to consume a buffer                   0.00192 s
Thdd-cache           Read time from HDD to cache                0.12 s
Tss-cache            Read time from SSD to cache                0.052 s
Thdd-ss              Read time from HDD to SSD                  0.122 s
MaxBW                Maximum # of concurrent reading requests   15
Xcache               Number of prefetching buffers              1-63 / 1-15

Page 18: Informed prefetching in distributed multi level storage systems


IPODS Parameters

Parameter            Description                                Value
Block size           Block size in MB                           200 MB
Tcpu+Thit+Tdriver    Time to consume a buffer                   0.037 s
Thdd-network-cache   Read time from HDD to cache                4.43 s
Tss-network-cache    Read time from SSD to cache                4.158 s
Thdd-ss              Read time from HDD to SSD                  4.5 s
MaxBW                Maximum # of concurrent reading requests   15
Xcache               Number of prefetching buffers              1-15
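For intuition (my arithmetic, not a slide figure): with 200 MB blocks, these latencies correspond to effective transfer rates of roughly

\[
\frac{200\ \text{MB}}{4.43\ \text{s}} \approx 45\ \text{MB/s (HDD over the network)}, \qquad \frac{200\ \text{MB}}{4.158\ \text{s}} \approx 48\ \text{MB/s (SSD over the network)},
\]

so the network dominates and narrows the HDD/SSD gap, one plausible reason IPODS's improvement is smaller than iPipe's.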

Page 19: Informed prefetching in distributed multi level storage systems


iPipe

• Parallel multi-level storage system (SSD and HDD).

• Pipelines data to the uppermost level to reduce Tdisk.

• Reduces stalls and elapsed time.

• Assumes enough bandwidth and scalability.

• iPipe pipelines the informed prefetching process in the SSD (a sketch follows below):
– Prefetching pipelining start (Pstart).
– Prefetching pipelining depth (Pdepth).
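To make the two cooperating prefetchers concrete, here is a minimal sketch in Python (my reconstruction for illustration; the function name, callback interface, and window-advance policy are assumptions, not the dissertation's code):

```python
# Sketch of iPipe's two cooperating prefetchers. The consumer walks the
# hinted blocks in order; the informed prefetcher keeps Xcache reads
# outstanding into main-memory buffers, while the pipelined prefetcher
# stages hinted blocks from HDD into SSD starting at offset Pstart,
# Pdepth blocks at a time, so later informed prefetches hit the SSD.

def ipipe(hints, x_cache, p_start, p_depth, read_to_cache, stage_to_ssd):
    n = len(hints)
    staged = set()
    # Warm-up: issue the initial pipelining window (HDD -> SSD).
    for j in range(p_start, min(p_start + p_depth, n)):
        stage_to_ssd(hints[j])
        staged.add(j)
    # Warm-up: the first Xcache informed prefetches are issued up front.
    for k in range(min(x_cache, n)):
        read_to_cache(hints[k], from_ssd=(k in staged))
    for i in range(n):
        # Keep the informed-prefetch window Xcache blocks ahead of the consumer.
        k = i + x_cache
        if k < n:
            read_to_cache(hints[k], from_ssd=(k in staged))
        # Slide the pipelining window forward one hinted block per access.
        j = p_start + p_depth + i
        if j < n:
            stage_to_ssd(hints[j])
            staged.add(j)

# Toy run: print the schedule for 10 hinted blocks.
ipipe(list(range(10)), x_cache=3, p_start=4, p_depth=3,
      read_to_cache=lambda b, from_ssd: print(f"cache <- {'ssd' if from_ssd else 'hdd'}: block {b}"),
      stage_to_ssd=lambda b: print(f"ssd <- hdd: block {b}"))
```

In this toy run, every informed prefetch from block 4 onward is served by the SSD, which is exactly the effect the Tss-cache < Thdd-cache numbers above reward.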

Page 20: Informed prefetching in distributed multi level storage systems


iPipe: Example Parameters

• Assume:
– Tcpu+Thit+Tdriver = 1 time unit.
– Thdd-cache = 5 time units.
– Tss-cache = 4 time units.
– Thdd-ss = 8 time units.
– Xcache = 3.
– Tstall-hdd = 2 time units every 3 accesses.
– Tstall-ss = 1 time unit every 3 accesses.

(The two stall figures are checked against the model below.)
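A quick check of the stall figures under the per-group model from the informed-caching slide (a sketch; the model is my reading of the examples):

```python
# Per group of Xcache accesses, the consumer stalls for whatever part of
# the fetch latency is not hidden by consuming the Xcache buffered blocks.
def stall_per_group(t_fetch, x_cache, t_consume):
    return max(0, t_fetch - x_cache * t_consume)

assert stall_per_group(t_fetch=5, x_cache=3, t_consume=1) == 2  # Tstall-hdd
assert stall_per_group(t_fetch=4, x_cache=3, t_consume=1) == 1  # Tstall-ss
```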

Page 21: Informed prefetching in distributed multi level storage systems

iPipe: Informed Prefetching

Tcpu+Thit+Tdriver = 1, Thdd-cache = 5, Xcache = 3; Tstall-hdd = 2 every 3 accesses.


Page 22: Informed prefetching in distributed multi level storage systems


iPipe: Informed Prefetching

Tcpu+Thit+Tdriver = 1, Thdd-cache = 5, Xcache = 3; Tstall-hdd = 2 every 3 accesses.

Page 23: Informed prefetching in distributed multi level storage systems


iPipe: Informed Prefetching

Tcpu+Thit+Tdriver = 1, Thdd-cache = 5, Xcache = 3; Tstall-hdd = 2 every 3 accesses.

Stall time = 16 time units; elapsed time = 46 time units.

Page 24: Informed prefetching in distributed multi level storage systems

iPipe: Pipelining Algorithm


Page 25: Informed prefetching in distributed multi level storage systems


iPipe: Pipelining Algorithm

Tcpu+Thit+Tdriver = 1, Thdd-cache = 5, Tss-cache = 4, Thdd-ss = 8, Xcache = 3; Tstall-hdd = 2 every 3 accesses; Tstall-ss = 1 every 3 accesses.

Stall time = 9 time units; elapsed time = 39 time units < 46 time units.

Page 26: Informed prefetching in distributed multi level storage systems

iPipe: Performance Improvement - Elapsed Time 1

56% improvement in most cases.

Total simulation elapsed time when using 1 to 9 prefetching buffers.

[Chart: elapsed times for the LASR1 and LASR2 traces; ~56% improvement in most cases.]

Page 27: Informed prefetching in distributed multi level storage systems

iPipe: Performance Improvement - Elapsed Time 2

Total simulation elapsed time when using 11 to 30 prefetching buffers.

56% improvement.

Page 28: Informed prefetching in distributed multi level storage systems

iPipe: Performance Improvement - Elapsed Time 3

Total simulation elapsed time when using 35 to 63 prefetching buffers.

Page 29: Informed prefetching in distributed multi level storage systems


Research Tasks

• iPipe: Informed Prefetching Pipelining for Multi-Level Storage Systems.

• IPO: Informed Prefetching Optimization in Multi-Level Storage Systems.

• IPODS: Informed Prefetching in Distributed Multi-Level Storage Systems.

• Prototyping: prototyping results for the solutions.

Page 30: Informed prefetching in distributed multi level storage systems


IPO

• Parallel multi-level storage system (SSD and HDD).

• Pipelines data to the uppermost level to reduce Tdisk.

• Reduces stalls and elapsed time.

• Assumes limited bandwidth and scalability:
– MaxBW = 15.
– Pipelining depth = MaxBW - Xcache.

• IPO pipelines the informed prefetching process in the SSD (a sketch follows below):
– Prefetching pipelining start (Pstart).
– Next to prefetch (Pnext).
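A minimal sketch of IPO's bandwidth split (my illustration; the function name, interface, and window policy are assumptions, not the dissertation's code):

```python
# With at most MaxBW concurrent read requests, Xcache slots serve the
# informed prefetcher and the remaining slots pipeline blocks HDD -> SSD.
# Pnext tracks the next hinted block to stage one level up.
def ipo_issue(hints, max_bw, x_cache, p_start, p_next, stage_to_ssd):
    p_depth = max_bw - x_cache          # pipelining slots left over
    p_next = max(p_next, p_start)       # never stage before Pstart
    end = min(p_next + p_depth, len(hints))
    for j in range(p_next, end):        # fill the pipelining window
        stage_to_ssd(hints[j])
    return end                          # new Pnext once these complete

p_next = ipo_issue(list(range(100)), max_bw=15, x_cache=4, p_start=5,
                   p_next=0, stage_to_ssd=lambda b: print("ssd <- hdd:", b))
print("Pnext =", p_next)  # 16: blocks 5..15 staged in 11 = MaxBW - Xcache slots
```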

Page 31: Informed prefetching in distributed multi level storage systems


IPO: Example Parameters

• Assume:
– Tcpu+Thit+Tdriver = 1 time unit.
– Thdd-cache = 5 time units.
– Tss-cache = 4 time units.
– Thdd-ss = 8 time units.
– Xcache = 2.
– Tstall-hdd = 3 time units every 2 accesses.
– Tstall-ss = 2 time units every 2 accesses.
– MaxBW = 5 concurrent reading requests.

(The stall figures follow the same model; see the check below.)
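As a check, the stall figures above match the per-group model from the informed-caching slide:

\[
T_{\text{stall-hdd}} = 5 - 2 \cdot 1 = 3 \text{ every 2 accesses}, \qquad T_{\text{stall-ss}} = 4 - 2 \cdot 1 = 2 \text{ every 2 accesses}.
\]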

Page 32: Informed prefetching in distributed multi level storage systems


IPO: Informed Prefetching

Tcpu+Thit+Tdriver = 1, Thdd-cache = 5, Xcache = 2; Tstall-hdd = 3 every 2 accesses.

Page 33: Informed prefetching in distributed multi level storage systems


IPO: Informed Prefetching

Tcpu+Thit+Tdriver = 1, Thdd-cache = 5, Xcache = 2; Tstall-hdd = 3 every 2 accesses.

Page 34: Informed prefetching in distributed multi level storage systems


IPO: Informed Prefetching

Tcpu+Thit+Tdriver = 1, Thdd-cache = 5, Xcache = 2; Tstall-hdd = 3 every 2 accesses.

Page 35: Informed prefetching in distributed multi level storage systems


IPO: Informed Prefetching

Stall time = 45 time units; elapsed time = 81 time units.

Tcpu+Thit+Tdriver = 1, Thdd-cache = 5, Xcache = 2; Tstall-hdd = 3 every 2 accesses.

Page 36: Informed prefetching in distributed multi level storage systems


IPO: Pipelining Algorithm

Tcpu+Thit+Tdriver = 1, Thdd-cache = 5, Tss-cache = 4, Thdd-ss = 8, Xcache = 2; Tstall-hdd = 3 every 2 accesses; Tstall-ss = 2 every 2 accesses.

Page 37: Informed prefetching in distributed multi level storage systems


IPO: Pipelining Algorithm

Tcpu+Thit+Tdriver = 1, Thdd-cache = 5, Tss-cache = 4, Thdd-ss = 8, Xcache = 2; Tstall-hdd = 3 every 2 accesses; Tstall-ss = 2 every 2 accesses.

Page 38: Informed prefetching in distributed multi level storage systems


IPO: Pipelining Algorithm

Tcpu+Thit+Tdriver = 1, Thdd-cache = 5, Tss-cache = 4, Thdd-ss = 8, Xcache = 2; Tstall-hdd = 3 every 2 accesses; Tstall-ss = 2 every 2 accesses.

Page 39: Informed prefetching in distributed multi level storage systems


IPO: Pipelining Algorithm

Stall time = 40 time units; elapsed time = 76 time units < 81.

Tcpu+Thit+Tdriver = 1, Thdd-cache = 5, Tss-cache = 4, Thdd-ss = 8, Xcache = 2; Tstall-hdd = 3 every 2 accesses; Tstall-ss = 2 every 2 accesses.

Page 40: Informed prefetching in distributed multi level storage systems

IPO: Performance Improvement - Elapsed Time

56% improvement in critical cases.

Total simulation elapsed time when using 1 to 15 prefetching buffers (MaxBW = 15).

[Chart: elapsed times for the LASR1 and LASR2 traces; improvements of 0%, 28%, and 56% at different buffer counts.]

Page 41: Informed prefetching in distributed multi level storage systems


IPODS

• Parallel multi-level storage systems can be implemented on a distributed system.

• Several storage nodes; data are striped.

• Tnetwork and Tserver are added to Tdisk (see the note below).

• IPO's pipelining is used.

• Assumes limited bandwidth and scalability (depends on the # of nodes):
– MaxBW = 15.
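In a distributed setting, the fetch latency seen by the client decomposes as (an illustrative decomposition consistent with the bullet above, not a formula from the slides):

\[
T_{\text{hdd-network-cache}} \;\approx\; T_{\text{network}} + T_{\text{server}} + T_{\text{disk}},
\]

which is why the measured IPODS read times in the parameters table are dominated by the network rather than by the HDD/SSD gap.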

Page 42: Informed prefetching in distributed multi level storage systems


IPODS Architecture

• Network and server latency. Data is striped.

T. M. Madhyastha, G. A. Gibson, and C. Faloutsos: Informed prefetching of collective input/output requests. In Proceedings of the 1999 ACM/IEEE Conference on Supercomputing, Portland, Oregon, USA, 1999.

Page 43: Informed prefetching in distributed multi level storage systems

IPODS: Performance Improvement - Elapsed Time

6% improvement in critical cases.

Total simulation elapsed time when using 1 to 15 prefetching buffers (MaxBW = 15).

[Chart: elapsed times for the LASR1 and LASR2 traces; improvements of 0%, 2%, 4%, and 6% at different buffer counts.]

Page 44: Informed prefetching in distributed multi level storage systems

Prototyping Development

• We aim to validate our simulations' results.

• 1000 I/O reading requests were sufficient to show the pattern.

• Predict the results the simulated application would produce in the prototype.

• The simulators used LASR traces:
– LASR1: 11686 I/O reading requests.
– LASR2: 51206 I/O reading requests.

• Compare the variations.


Page 45: Informed prefetching in distributed multi level storage systems


iPipe Prototyping

Prototyping:
Xcache             1          3          5          7          9
LASR1 with iPipe   559.6273   190.3077   117.2071   86.52069   67.93586
LASR1 no iPipe     1161.996   408.5916   259.0295   185.8635   151.6153

Xcache             11         13         15         17         19
LASR1 with iPipe   57.32708   49.60041   44.31705   39.6187    36.04687
LASR1 no iPipe     132.4959   117.7423   114.4435   93.45855   85.93546

Simulation:
Xcache             1          3          5          7          9
LASR1 with iPipe   607.612    202.537    121.522    86.8018    67.5125
LASR1 no iPipe     1402.32    467.44     280.464    200.331    155.813

Xcache             11         13         15         17         19
LASR1 with iPipe   55.2375    46.7394    40.5075    35.7419    31.9796
LASR1 no iPipe     127.484    107.871    93.488     82.4894    73.8063

(3-18%) variation.


Page 46: Informed prefetching in distributed multi level storage systems


iPipe Prototyping

Prototyping:
Xcache             25         35         45         55         63
LASR1 with iPipe   29.69927   28.52599   27.40414   24.96726   24.22578
LASR1 no iPipe     79.44797   63.66883   59.02622   39.04141   37.02078

Simulation:
Xcache             25         35         45         55         63
LASR1 with iPipe   24.3045    22.4358    22.4365    22.4369    22.4371
LASR1 no iPipe     56.0928    40.0663    31.1627    25.4967    22.4371

35% variation when Xcache = 63; (3-18%) variation in most cases.

Page 47: Informed prefetching in distributed multi level storage systems


IPO Prototyping

Prototyping:
Xcache           1          3          5          7
LASR1 with IPO   560.1602   236.8811   209.8946   169.9799
LASR1 no IPO     1161.996   408.5916   259.0295   185.8635

Xcache           9          11         13         15
LASR1 with IPO   147.101    131.3518   116.7431   114.4435
LASR1 no IPO     151.6153   132.4959   117.7423   114.4435

Simulation:
Xcache           1          3          5          7
LASR1 with IPO   607.876    202.792    201.128    172.036
LASR1 no IPO     1402.32    467.516    280.552    200.392

Xcache           9          11         13         15
LASR1 with IPO   155.768    127.448    107.878    93.5731
LASR1 no IPO     155.87     127.547    107.878    93.5731

(3-18%) variation.

Page 48: Informed prefetching in distributed multi level storage systems


IPODS Prototyping

Prototyping:
Xcache             1          3          5          7
LASR1 with IPODS   48590.39   16377.7    10383.65   7604.548
LASR1 no IPODS     51768.98   17772.19   10705.66   7803.21

Xcache             9          11         13         15
LASR1 with IPODS   6061.563   5003.758   4264.514   3750.33
LASR1 no IPODS     6151.16    5022.374   4276.2     3750.33

Simulation:
Xcache             1          3          5          7
LASR1 with IPODS   48590.9    16200      9933.42    7246.72
LASR1 no IPODS     51769      17259.2    10357.2    7397.95

Xcache             9          11         13         15
LASR1 with IPODS   5750.29    4704.81    3982.53    3454.88
LASR1 no IPODS     5754.39    4708.83    3982.53    3454.88

(1-8%) variation.

Page 49: Informed prefetching in distributed multi level storage systems


Conclusion & Future Work

• Conclusion:– Informed prefetching shows better performance when

Tdisk is low.– Reading data from upper levels is faster.– Three solutions (iPipe, IPO, IPODS).– Elapsed time improvement in about 56% and 6%.– Many Contributions.

• Future work:– Data migration– More experimental work