Ext2 On Singularity
Scott Finley
University of Wisconsin – Madison
CS 736 Project
Basic, default Linux file system
Almost exactly the same as FFS
◦ Disk broken into “block groups”
◦ Super-block, inode/block bitmaps, etc.
Ext2 Defined
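The on-disk layout named above (super-block, bitmaps, block groups) can be sketched in a few lines. This is a minimal illustrative parser, not code from the project: the field offsets follow the standard little-endian ext2 superblock layout, and the superblock bytes are fabricated for the example.

```python
import struct

EXT2_SUPER_MAGIC = 0xEF53

def parse_superblock(sb: bytes) -> dict:
    """Pull a few well-known fields out of a raw ext2 superblock.

    On disk the superblock starts at byte offset 1024; the offsets
    used here are the standard ext2 field positions (little-endian).
    """
    inodes_count, blocks_count = struct.unpack_from("<II", sb, 0)
    (log_block_size,) = struct.unpack_from("<I", sb, 24)
    (magic,) = struct.unpack_from("<H", sb, 56)
    if magic != EXT2_SUPER_MAGIC:
        raise ValueError("not an ext2 volume")
    return {
        "inodes": inodes_count,
        "blocks": blocks_count,
        # block size is 1 KB shifted left by s_log_block_size
        "block_size": 1024 << log_block_size,
    }

# Fabricated superblock: 2048 inodes, 8192 blocks, 1 KB blocks, valid magic.
raw = bytearray(1024)
struct.pack_into("<II", raw, 0, 2048, 8192)
struct.pack_into("<I", raw, 24, 0)
struct.pack_into("<H", raw, 56, EXT2_SUPER_MAGIC)
info = parse_superblock(bytes(raw))
```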
New from the ground up
Reliability as #1 goal
Reevaluate conventional OS structure
Leverage advances of the last 20 years
◦ Languages and compilers
◦ Static analysis of whole system
Microsoft Research’s Singularity
Implement Ext2 on Singularity
Focus on read support with caching
Investigate how Singularity’s design impacts FS integration
Investigate performance implications
My Goals
I have Ext2 “working” on Singularity
◦ Reading fully supported
◦ Caching improves performance!
◦ Limited write support
Singularity design
◦ Garbage collection hurts performance
◦ Reliability is good: I couldn’t crash it.
Overview of Results
1. Singularity Details
2. Details of my Ext2 implementation
3. Results
Outline
Singularity
Everything is written in C#
◦ Small pieces of kernel (< 5%) in assembly and C++ as needed
Kernel and processes are garbage collected
No virtual machine
◦ Compiled to native code
Very aggressive optimizing compiler
Programming Environment
Singularity is a micro kernel
Everything else is a SIP
◦ “Software Isolated Process”
No hardware based memory isolation
◦ SIP “Object Space” isolation guaranteed by static analysis and safe language (C#)
◦ Context switches are much faster
Process Model
All SIP communication is via message channels
No shared memory
Messages and data passed via Exchange Heap
Object ownership is tracked
Zero copy data passing via pointers
Communication Channels
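The ownership rule above is the key idea: a buffer in the Exchange Heap has exactly one owner, and sending it over a channel transfers ownership instead of copying bytes. A toy Python model of that rule (class and method names are illustrative, not Singularity APIs; Singularity enforces this statically, not at run time):

```python
class ExchangeHeapBuffer:
    """Toy model of an Exchange Heap block: one owner at a time."""
    def __init__(self, size: int):
        self.data = bytearray(size)
        self.owner = None


class Endpoint:
    """One end of a channel. send() hands the buffer's ownership to
    the peer without copying the bytes, mirroring the zero-copy rule."""
    def __init__(self, name: str):
        self.name = name
        self.peer = None

    def send(self, buf: ExchangeHeapBuffer) -> ExchangeHeapBuffer:
        # In Singularity this check is done by the compiler's static
        # analysis; here we assert it dynamically for illustration.
        assert buf.owner is self, "sender must own the buffer"
        buf.owner = self.peer  # ownership transfer, not a copy
        return buf


app, fs = Endpoint("app"), Endpoint("fs")
app.peer, fs.peer = fs, app

buf = ExchangeHeapBuffer(4096)
buf.owner = app
app.send(buf)  # app may no longer touch buf; fs owns it now
```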
[Diagram on each slide: App, File System, Disk Driver, and an empty/full buffer in the Exchange Heap]
1. Application creates buffer in ExHeap
2. Application sends read request to file system
◦ File system owns the buffer
3. File system sends read request to driver
◦ Driver owns the buffer
4. Driver fills buffer and replies to file system
5. File system replies to application
6. Application consumes the buffer

Disk Read Example
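The six steps of the read path can be traced as a sequence of ownership hand-offs; at every point exactly one party owns the buffer. The party names and helper below are illustrative, not Singularity APIs.

```python
# Trace of the disk-read round trip: each hop moves buffer ownership
# to the next party; the bytes themselves are never copied.
buffer_owner = None
trace = []


def hand_off(new_owner: str, step: str) -> None:
    """Record one hop of the read path and its resulting owner."""
    global buffer_owner
    buffer_owner = new_owner
    trace.append((step, new_owner))


hand_off("app", "app allocates empty buffer in ExHeap")
hand_off("fs", "app sends read request to file system")
hand_off("driver", "fs sends read request to disk driver")
hand_off("fs", "driver fills buffer, replies to fs")
hand_off("app", "fs replies to app")
# The app consumes the buffer; it still owns it at the end.
```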
Ext2 Implementation
Ext2Control: Command line application
Ext2ClientManager: Manages mount points
Ext2FS: Core file system functionality
Ext2Contracts: Defines communication
Required Pieces
System service (SIP) launched at boot
Accessible at known location in /dev directory
Does “Ext2 stuff”
Operates on Ext2 volumes and mount points
Exports “Mount” and “Unmount”
◦ Would also provide “Format” if implemented
300 lines of code
Ext2 Client Manager
Command line application
Allows access to the Ext2 Client Manager interface
Not used by other applications
500 lines of code
Ext2 Control
Core Ext2 file system
Separate instance (SIP) for each mount point
◦ Exports “Directory Service Provider” interface
Clients open files and directories by attaching a communication channel
◦ Internally paired with an inode
Reads implemented, writes in progress
2400 lines of code
Ext2Fs
Client wants to read file “/mnt/a/b.txt”
◦ Ext2 mounted at “/mnt”
1. App --CH0--> SNS: <Bind, “/mnt/a/b.txt”, CH1>
2. App <--CH0-- SNS: <Reparse, “/mnt”>
3. App --CH0--> SNS: <Bind, “/mnt”, CH1>
4. App <--CH0-- SNS: <AckBind>
File Read Sequence
5. App --CH1--> Ext2Fs: <Bind, “/a/b.txt”, CH2>
6. App <--CH1-- Ext2Fs: <AckBind>
7. App --CH2--> Ext2Fs: <Read, Buff, BOff, FOff>
8. App <--CH2-- Ext2Fs: <ReadAck, Buff>
9. …
File Read Sequence 2
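The Bind/Reparse handshake above amounts to: the system name service (SNS) answers a Bind that crosses a mount point with a Reparse naming the mount prefix, the client binds to the file system at that prefix, then resends the remaining path. A toy resolver, with a hypothetical mount table (names are illustrative):

```python
# Illustrative mount table: prefix -> service, mirroring "Ext2 mounted
# at /mnt" from the sequence above.
MOUNTS = {"/mnt": "Ext2Fs"}


def bind(path: str):
    """Return (service, remaining_path) after following one Reparse.

    If the path crosses a mount point, the SNS would answer
    <Reparse, prefix>; the client then re-binds at the prefix and
    sends the suffix to the mounted file system's SIP.
    """
    for prefix, service in MOUNTS.items():
        if path.startswith(prefix + "/"):
            return service, path[len(prefix):]
    return "SNS", path  # no mount crossed; SNS serves it directly


service, rest = bind("/mnt/a/b.txt")
```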
1. Inodes: Used on every access
2. Block Numbers: Very important for large files
3. Data Blocks: Naturally captures others
All use LRU replacement
Large files unusable without caching
◦ 8300X faster reading 350 MB file
Caching
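All three caches use LRU replacement. A minimal sketch of such a cache, built on Python's `OrderedDict` (the capacity, key names, and API are illustrative, not taken from the project):

```python
from collections import OrderedDict


class LruCache:
    """Minimal LRU cache of the kind described for inodes, block
    numbers, and data blocks: recently used entries survive, the
    least recently used entry is evicted at capacity."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.entries = OrderedDict()

    def get(self, key):
        if key not in self.entries:
            return None
        self.entries.move_to_end(key)  # mark as most recently used
        return self.entries[key]

    def put(self, key, value) -> None:
        self.entries[key] = value
        self.entries.move_to_end(key)
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)  # evict least recently used


cache = LruCache(2)
cache.put("inode:2", "root")
cache.put("block:99", b"data")
cache.get("inode:2")             # touch, so block:99 becomes the LRU entry
cache.put("block:100", b"more")  # capacity exceeded: block:99 is evicted
```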
Results
Athlon 64 3200, 1 GB RAM
Disk: 120 GB, 7200 RPM, 2 MB buffer, PATA
Measured sequential reads
Varied read buffer size from 4 KB to 96 MB
Timed each request
File sizes ranged from 13 MB to 350 MB
Testing
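The charts that follow report throughput computed from these timed requests. The conversion is simple arithmetic; note that 1 MB = 2**20 bytes is an assumption here, as the slides do not state which convention the charts use.

```python
def read_speed_mb_s(bytes_read: int, seconds: float) -> float:
    """Throughput in MB/s, matching the charts' units.

    Assumes 1 MB = 2**20 bytes; the deck does not say whether it
    used binary or decimal megabytes.
    """
    return bytes_read / (2 ** 20) / seconds


# e.g. reading a 350 MB file in 14 s works out to 25 MB/s
speed = read_speed_mb_s(350 * 2 ** 20, 14)
```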
[Chart: 350 MB File Sequential Read. Read Speed (MB/s) vs. Buffer Size (Bytes); series: Linux, Average, buffer sizes 4 KB to 96 MB, Cold]
[Chart: Request Service Time for reads of 350 MB file with 16 KB buffers. Request Service Time (ms) vs. Time]
[Chart: 200 Sequential 16 KB Buffer Reads of 350 MB File. Read Speed (MB/s) vs. Time]
Linux is faster
◦ Not clear that this is fundamental
Performance is not horrible
◦ “Good enough” objective met
◦ Garbage collection overhead < 0.1%
Not sensitive to file size
Results
System programming in a modern language
System programming with no crashes
Micro kernel is feasible
◦ Hurts feature integration: mmap, cache sharing
◦ Clean, simple interfaces
Conclusion
Questions
Extras
[Chart: 13 MB File Sequential Read. Read Speed (MB/s) vs. Buffer Size (Bytes); series: Linux, Average, buffer sizes 4 KB to 96 MB, Cold]
[Chart: 64 KB Buffer Reads of 350 MB File. Read Speed (MB/s) vs. Time]
[Chart: 64 MB Buffer Reads of 350 MB File. Read Speed (MB/s) vs. Time]
[Chart: Average Sequential Read Speeds. Read Speed (MB/s) vs. Buffer Size; series: 13 MB File, 350 MB File]