1
uFLIP: Understanding Flash IO Patterns
Luc Bouganim, INRIA Rocquencourt, France
Philippe Bonnet, DIKU Copenhagen, Denmark
Björn Þór Jónsson, RU Reykjavík, Iceland
2
Why should we consider flash devices?
• NAND flash chip typical timings (SLC chip):
  – Read a 2 KB page: read page (25 µs), transfer (60 µs)
  – Write a 2 KB page: transfer (60 µs), write page (200 µs)
  – Erase before rewrite! (2 ms for a 128 KB block)
• A single flash chip could potentially deliver: read throughput of 23 MB/s, write throughput of 6 MB/s (a back-of-the-envelope check follows below)
• And… random access is potentially as fast as sequential access! An SSD contains many (e.g., 8, 16) flash chips: potential parallelism!
• Flash devices have a high potential
[Figure: flash chip internals (flash cells and RAM buffer)]
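The quoted throughputs can be sanity-checked from the page timings. The snippet below is a back-of-the-envelope calculation, not part of uFLIP; amortizing the 2 ms erase over the 64 pages of a 128 KB block for the write figure is an assumption.

```c
/* Rough check of the SLC chip throughputs quoted above, using the timings
   on this slide. The erase cost is amortized over the 64 pages of a 128 KB
   block (assumption); the results land close to the 23 MB/s / 6 MB/s figures. */
#include <stdio.h>

int main(void) {
    double page_kb  = 2.0;            /* 2 KB page                      */
    double read_us  = 25.0 + 60.0;    /* read page + transfer           */
    double write_us = 60.0 + 200.0;   /* transfer + program page        */
    double erase_us = 2000.0 / 64.0;  /* 2 ms erase amortized per page  */

    /* KB per µs * 1000 = KB per ms ~= MB/s */
    printf("read  : %.1f MB/s\n", page_kb * 1000.0 / read_us);
    printf("write : %.1f MB/s\n", page_kb * 1000.0 / (write_us + erase_us));
    return 0;
}
```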
3
but …
• Flash chips have many constraints:
  – IO granularity: a flash page (2 KB)
  – No update: erase before write; erase granularity: a block (64 pages)
  – Writes must be sequential within a flash block
  – Limited lifetime: max 10^5 – 10^6 erase cycles
  – Usually, a software layer (the Flash Translation Layer) handles these constraints
• Flash devices are not flash chips:
  – They do not behave as the flash chips they contain
  – No access to the flash chip API, only to the device API
  – Complex architecture and software, proprietary and undocumented
  – Flash devices are black boxes!
• How can we model flash devices? First step: understand their performance
Need for a benchmark.
4
The Flash Translation Layer
• Emulates a normal block device, handling flash constraints:
  – Maps logical addresses (LBA) to physical locations (mapping information)
  – Distributes erases across the device (wear levelling)
  – When possible, redirects writes to previously erased locations (update blocks)
  – Maintains other FTL data structures
• Flash device API: Read(LBA, &data) / Write(LBA, data)
[Figure: FTL architecture. RAM holds the FTL map and other FTL structures; flash holds data blocks and update blocks (free, unfilled, and filled blocks). Example requests Read(@100), Read(@101), Write(@900), Write(@200) are served through the map. A tiny FTL sketch follows below.]
• IO cost thus depends on:
  – The mode of the IO (i.e., read or write)
  – Recent IOs (caching in the device RAM)
  – The device state (i.e., flash state and data structures)
• The device state depends on the entire history of previous IO requests
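To make the mapping and write redirection concrete, here is a deliberately tiny page-mapped FTL sketch. Real FTLs are proprietary and far more complex; every name and size below is an assumption, and garbage collection and wear levelling are omitted.

```c
/* Minimal, purely illustrative page-mapped FTL: an LBA-to-physical map,
   and every write redirected to a previously erased page instead of an
   in-place update. Not the FTL of any real device. */
#include <stdio.h>

#define LOGICAL_PAGES  1024
#define PHYSICAL_PAGES 2048

static int map[LOGICAL_PAGES];  /* logical page -> physical page, -1 if unmapped */
static int next_free = 0;       /* naive allocator over erased pages (no GC)     */

static void ftl_write(int lba) {
    map[lba] = next_free++;     /* never rewrite in place: use a fresh page */
    printf("Write(@%d) -> physical page %d\n", lba, map[lba]);
}

static void ftl_read(int lba) {
    printf("Read(@%d)  -> physical page %d\n", lba, map[lba]);
}

int main(void) {
    for (int i = 0; i < LOGICAL_PAGES; i++) map[i] = -1;
    ftl_write(100); ftl_write(101); ftl_write(900); ftl_write(200);
    ftl_write(100);                 /* update of @100 goes to a new page  */
    ftl_read(100);  ftl_read(101);  /* reads follow the current mapping   */
    return 0;
}
```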
5
Benchmarking flash devices: Goal and difficulties
• Why do we need to benchmark flash devices?
  – DB technology relies on the HD characteristics …
  – … flash devices will replace or complement HDs …
  – … and we have a poor knowledge of flash devices:
    Flash devices are black boxes (complex and undocumented FTLs)
    Large range, from USB flash drives to high-performance flash boards
• Benchmarking flash devices is difficult:
  – Need to design a sound benchmarking methodology
    IO cost is highly variable and depends on the whole device history!
  – Need to define a broad benchmark
    No safe assumption can be made about the device behavior (black box)
    Moreover, we do not want to restrict the benchmark usage!
6
Methodology (1): Device state
• Measuring Samsung SSD RW performance Out-of-the-box …
[Plot: Random Writes, Samsung SSD, out of the box: response time (ms, log scale) vs. IO number (100–500), individual rt and Avg(rt)]
7
Methodology (1): Device state
• Measuring Samsung SSD RW performance Out-of-the-box … and after filling the device!!! (similar behavior on Intel SSD)
[Plots: Random Writes, Samsung SSD, out of the box vs. after filling the device: response time (ms, log scale) vs. IO number (100–500), individual rt, Avg(rt), and the out-of-the-box Avg(rt) for comparison]
8
Methodology (2): Startup and running phases
• When do we reach a steady state? How long to run each test?
[Plots: (a) startup and running phases for the Mtron SSD (RW): response time (ms, log scale) vs. IO number (0–300), with rt and Avg(rt) including/excluding the startup phase; (b) running phase for the Kingston DTI flash drive (SW): response time (ms, log scale) vs. IO number (0–300), with rt and Avg(rt)]
9
Methodology (3): Interferences between consecutive runs
[Plot: setup experiment for the Mtron SSD: response time (ms, log scale) vs. IO number (0–13000) for Seq. Reads, then Random Writes, then Seq. Reads, with the pause length between runs marked]
10
Proposed methodology:
• Device state: enforce a well-defined device state by performing random write IOs of random size over the whole device. The alternative, sequential IOs, is less stable and thus more difficult to enforce.
• Startup and running phases: run experiments to define IOIgnore (the number of IOs ignored when computing statistics) and IOCount (the number of measures needed for those statistics to converge); see the sketch below.
• Interferences: introduce a pause between runs. Run the experiment SR, then RW, then SR (with a large IOCount), measure the interferences (about 3000 IOs in the previous experiment), and overestimate the length of the pause.
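A minimal sketch of how IOIgnore and IOCount could be applied when summarizing a run, assuming the response times have already been collected in an array. This is illustrative code, not the uFLIP tool itself.

```c
/* Drop the first IOIgnore response times (startup phase), then compute the
   run statistics over the next IOCount measures. */
#include <math.h>
#include <stdio.h>

void summarize(const double *rt_ms, int n, int io_ignore, int io_count) {
    if (io_ignore + io_count > n) io_count = n - io_ignore;   /* defensive */
    double sum = 0.0, sum_sq = 0.0;
    for (int i = io_ignore; i < io_ignore + io_count; i++) {
        sum += rt_ms[i];
        sum_sq += rt_ms[i] * rt_ms[i];
    }
    double mean = sum / io_count;
    double var  = sum_sq / io_count - mean * mean;
    printf("mean = %.3f ms, stddev = %.3f ms (over %d IOs, %d ignored)\n",
           mean, sqrt(var > 0 ? var : 0), io_count, io_ignore);
}

int main(void) {
    /* fake run: a slow startup phase, then a stable running phase */
    double rt[300];
    for (int i = 0; i < 300; i++) rt[i] = (i < 50) ? 20.0 : 0.4;
    summarize(rt, 300, 50, 250);
    return 0;
}
```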
11
uFLIP (1): Basic construct: IO Pattern
• An IO Pattern is a sequence of IOs:
  – An IO is defined by 4 attributes (time, size, LBA, mode)
  – Baseline patterns: Seq. Read, Random Read, Seq. Write, Random Write
  – More patterns are obtained by using parameterized functions for each attribute:
    time: Consecutive, Pause (Pause), Burst (Pause, Burst)
    size: Size (Size)
    LBA: Sequential, Random, Ordered (Incr), Partitioned (Partitions)
    mode: Read, Write
[Figure: example patterns (Consecutive Sequential, Consecutive Random, Pause Sequential, Burst Sequential, Ordered, Partitioned)]
12
uFLIP (1): Basic construct: IO Pattern
• An IO Pattern is a sequence of IOs: an IO is defined by 4 attributes (time, size, LBA, mode); baseline patterns (Seq. Read, Random Read, Seq. Write, Random Write); more patterns by using parameterized functions for each attribute
• Potentially relevant IO patterns:
  – Basic patterns: one function for each attribute
  – Mixed patterns: combining basic patterns
  – Parallel patterns: replicating a basic pattern, or mixing basic patterns in parallel
• Problems: the IO pattern space is too large! Mixed and parallel patterns may be too complex to analyze
(A small sketch of the LBA functions follows below.)
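The LBA attribute functions named above can be illustrated with a small generator. The sketch below is illustrative only (the sector units, sizes, and the wrap-around inside the target area are assumptions), not the uFLIP implementation.

```c
/* Possible forms of the four LBA functions: address of the i-th IO of a pattern.
   Incr = -1 yields the reverse pattern, Incr = 0 the in-place pattern. */
#include <stdio.h>
#include <stdlib.h>

#define IO_SIZE     64         /* 32 KB IOs in 512-byte sectors (assumption) */
#define TARGET_SIZE (1 << 16)  /* addressed area, in sectors (assumption)    */

long lba_sequential(int i) { return (long)i * IO_SIZE; }

long lba_random(int i) {                       /* uniform inside the target area */
    (void)i;
    return (long)(rand() % (TARGET_SIZE / IO_SIZE)) * IO_SIZE;
}

long lba_ordered(int i, int incr) {            /* sequential with an increment,  */
    long n = TARGET_SIZE / IO_SIZE;            /* wrapped inside the target area */
    long slot = (((long)i * incr) % n + n) % n;
    return slot * IO_SIZE;
}

long lba_partitioned(int i, int partitions) {  /* round-robin over partitions,   */
    long part_len = TARGET_SIZE / partitions;  /* sequential inside each one     */
    return (long)(i % partitions) * part_len + (long)(i / partitions) * IO_SIZE;
}

int main(void) {
    for (int i = 0; i < 8; i++)
        printf("IO %d: SEQ=%ld  REV(Incr=-1)=%ld  PART(4)=%ld  RAND=%ld\n",
               i, lba_sequential(i), lba_ordered(i, -1),
               lba_partitioned(i, 4), lba_random(i));
    return 0;
}
```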
13
uFLIP (2): What is a uFLIP micro-benchmark?
• An execution of a reference pattern is a run: measure the response time of individual IOs and compute statistics (min, max, mean, standard deviation) to summarize it.
• A collection of runs of the same pattern is an experiment, restricted to a single varying parameter for sound analysis.
• A collection of related experiments is a micro-benchmark, defined over the baseline patterns with the same varying parameter.
• 9 varying parameters, thus 9 micro-benchmarks:
  – Basic patterns: IOSize, IOShift, TargetSize, Partitions, Incr, Pause, Burst
  – Mixed patterns: Ratio (mix only two baseline patterns)
  – Parallel patterns: ParallelDegree (replicate each baseline pattern in parallel)
• Nesting: IOs within runs, runs within experiments, experiments within a micro-benchmark (an illustrative data layout follows below)
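One possible way to picture this nesting as data structures; the type and field names below are illustrative, not the uFLIP code.

```c
/* Data-layout sketch of the nesting IO < run < experiment < micro-benchmark. */
typedef struct {                /* one run = one execution of a pattern        */
    double *rt_us;              /* response time of each IO, in microseconds   */
    int     io_count;
    double  min, max, mean, stddev;
} Run;

typedef struct {                /* one experiment = runs of the same pattern,  */
    const char *pattern;        /* e.g. "SW" or "RW",                          */
    const char *varying_param;  /* with a single varying parameter             */
    Run  *runs;
    int   run_count;
} Experiment;

typedef struct {                /* e.g. "Locality": one experiment per baseline pattern */
    const char *name;
    Experiment *experiments;
    int         experiment_count;
} MicroBenchmark;
```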
14
uFLIP (3): The 9 micro-benchmarks
1. Granularity (IOSize): basic performance? device latency?
2. Alignment (IOShift): penalty for badly aligned IOs?
3. Locality (TargetSize): IOs focused on a reduced area?
4. Partitioning (Partitions): IOs in several partitions?
5. Order (Incr): reverse pattern, in-place pattern, IOs with gaps?
6. Parallelism (ParallelDegree): IOs in parallel?
7. Mix (Ratio): mixing two baseline patterns?
8. Pause (Pause): device capacity to benefit from idle periods?
9. Bursts (Burst): asynchronous overhead accumulation over time?
15
Results
[Plot: Granularity for the Memoright SSD: response time (ms) vs. IO size (0–500 KB), for SR, RR, SW and RW]
• For SR, SW and RR: linear behavior, almost no latency; good throughput with large IO sizes
• For RW: 5 ms for a 16 KB – 128 KB IO
[Plot: Locality for the Samsung, Memoright and Mtron SSDs: response time relative to SW vs. TargetSize (1–128 MB)]
• When limited to a focused area, RW performs very well
16
Results: summary

| Device | SR (ms) | RR (ms) | SW (ms) | RW (ms) | Locality RW (MB) | Partitioning SW (Partitions) | Ordered, Reverse (Incr = -1) | Ordered, In-Place (Incr = 0) | Parallelism |
|---|---|---|---|---|---|---|---|---|---|
| SSD Memoright | 0.3 | 0.4 | 0.3 | 5 | 8 (=) | 8 (=) | = | = | no |
| SSD Mtron | 0.4 | 0.5 | 0.4 | 9 | 8 (2) | 4 (1.5) | = | = | no |
| SSD Samsung | 0.5 | 0.5 | 0.6 | 18 | 16 (1.5) | 4 (2) | 1.5 | 0.6 | no |
| Module Transcend | 1.2 | 1.3 | 1.7 | 18 | 4 (2) | 4 (2) | 3 | 2 | no |
| SSD Transcend MLC | 1.4 | 3.0 | 2.6 | 233 | 4 (=) | 4 (2) | 2 | 2 | no |
| USB Kingston DTHX | 1.3 | 1.5 | 1.8 | 270 | 16 (20) | 8 (20) | 7 | 6 | no |
| USB Kingston DTI | 1.9 | 2.2 | 2.9 | 256 | No | 4 (5) | 8 | 40 | no |

(Baseline pattern response times are for 32 KB IOs.)
• SR, RR and SW are very efficient
• Flash devices incur large latency for RW
• Random writes should be limited to a focused area
• Sequential writes should be limited to a few partitions
• Good support for reverse and in-place patterns
• Surprisingly, no device supports parallel IO submission
17
Conclusion
• The uFLIP benchmark:
  – Sound methodology: device preparation & setup, stable measurements
  – Broad: 9 micro-benchmarks, 39 experiments
  – Detailed: 1400 runs, 1 to 5 million IOs ... for a single device!
  – Simple: an experiment = a 2-dimensional graph
  – Publicly available: www.uflip.org
• First results: flash devices exhibit similar behaviors, despite their differences in cost / complexity / interface
• Current & future work:
  – Short term: visualization tool, with several levels of summarization
  – Enhance the software: setup parameters, benchmark duration, ...
  – Exploit the benchmark results!
21
www.uflip.org
Questions?
22
23
Selecting a device
24
Device result summary
[Screenshot: per-device result matrix. Rows: Granularity, Alignment, Locality, Parallelism, Pause and Bursts (SR, RR, SW, RW), Partitioning and Order (R, W), Mix (SR/RR, SR/SW, SR/RW, RR/SW, RR/RW, SW/RW). Each experiment is marked as interesting, not interesting, or not performed.]
25
Experiment analysis
26
Experiment color analysis
[Screenshots: color-coded analysis at the run level and at the IO level]
27
Details