Hashing Directory Scheme for NAND flash file system
Seung-Ho Lim, Chul Lee and Kyu-Ho Park
Department of Electrical Engineering and Computer Science, KAIST
ICACT 2007, 2007.9.19
Speaker: Kim, Jung-Kuk
Introduction
Log-based FFS designs embed metadata into data nodes. As flash capacity increases:
- more mount time is needed to identify metadata contents
- more memory footprint is needed to store the FS direct mapping structure in core
Proposed scheme: an efficient metadata management scheme for giga-scale flash memory
- Hashing directory structure for directory management
- Two-level index structure for file index management
- Reduces mount time and memory footprint
Related Works
JFFS2
- Metadata and data, with versioning, are written to free data nodes on flash in a sequential manner
- Needs to scan the entire flash media at mount time
YAFFS/2
- File id (inode #) and chunk number (offset) are stored in the spare region
- Needs to scan only the spare region (inverse mapping), so it is faster than JFFS2
CFFS
- InodeMapBlock: a dedicated flash region (UF flag)
- InodeBlockHash kept in core
- Pseudo hot-cold separation: separate metadata and data regions
- Improves mount time and memory footprint
FFS Architecture
Design motivation
- Reduce the additional page consumption overhead (pages consumed by metadata updates) when a file is created or written
- Separate data and metadata blocks
- The entire directory tree does not need to reside in core during runtime
Block types
- Hashing directory blocks: hashing table of directory files
- Inode blocks: files' inodes
- Data blocks: files' real data
- Super block: map of blocks
Directory Structure
Directory entries: (filename, inode #) pairs
- Entry data can be large, so indirect indexing would be needed
- That causes lots of additional page consumption, proportional to the degree of the indexing level
Proposed scheme: one page for each inode
- Directory entries are stored directly: only the inode # of each file is stored, avoiding a higher degree of indexing
- The filename is stored in the file's own inode page
Hashing directory blocks
- A list of (hashed directory inode #, physical location) pairs, 8 bytes each
Directory Structure
Directory hash table management
- The hashed inode # is generated from the absolute path name using a hashing function
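The paper does not specify the hash function, so the sketch below assumes FNV-1a over the absolute path name, together with a hypothetical layout for the 8-byte (hashed directory inode #, physical location) entry:

```c
#include <stdint.h>

/* Hypothetical layout of one 8-byte entry in a hashing directory block:
 * the hash of the directory's absolute path, and the physical location
 * of its inode page on flash. */
struct hdir_entry {
    uint32_t hashed_ino;   /* hashed directory inode # */
    uint32_t phys_page;    /* physical location of the directory inode */
};

/* Hashing the absolute path name; FNV-1a is an assumption, since the
 * slides only say "a hashing function" is applied to the path. */
static uint32_t hash_abs_path(const char *abs_path)
{
    uint32_t h = 2166136261u;                  /* FNV offset basis */
    for (const unsigned char *p = (const unsigned char *)abs_path; *p; p++) {
        h ^= *p;
        h *= 16777619u;                        /* FNV prime */
    }
    return h;
}
```

Because the hash is computed from the absolute path alone, a lookup can locate a directory's inode without walking the on-flash directory tree component by component.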
Directory Structure
Dentry structure and file create/lookup method
- Reduces file creation / lookup overhead
- Lookup proceeds from the Hashing Directory Block to the Inode Block
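As a rough illustration of the lookup step, here is a minimal in-core hash table sketch mapping a hashed directory inode # to the physical page of its inode; open addressing and the table size are assumptions, since the slides only say a hash table is built in core at mount time:

```c
#include <stdint.h>

#define HTBL_SIZE 1024                 /* assumed in-core table size */

struct hslot { uint32_t hashed_ino; uint32_t phys_page; };
static struct hslot htbl[HTBL_SIZE];   /* phys_page == 0 marks an empty slot */

/* Insert a (hashed directory inode #, physical page) pair with
 * linear probing. Returns 0 on success, -1 if the table is full. */
static int htbl_insert(uint32_t hashed_ino, uint32_t phys_page)
{
    for (unsigned i = 0; i < HTBL_SIZE; i++) {
        unsigned s = (hashed_ino + i) % HTBL_SIZE;
        if (htbl[s].phys_page == 0) {          /* free slot found */
            htbl[s].hashed_ino = hashed_ino;
            htbl[s].phys_page  = phys_page;
            return 0;
        }
    }
    return -1;
}

/* Look up the physical inode page for a hashed directory inode #.
 * Returns 0 and fills *phys_page on success, -1 if absent. */
static int htbl_lookup(uint32_t hashed_ino, uint32_t *phys_page)
{
    for (unsigned i = 0; i < HTBL_SIZE; i++) {
        unsigned s = (hashed_ino + i) % HTBL_SIZE;
        if (htbl[s].phys_page == 0)
            return -1;                         /* empty slot: not present */
        if (htbl[s].hashed_ino == hashed_ino) {
            *phys_page = htbl[s].phys_page;
            return 0;
        }
    }
    return -1;
}
```

A create then inserts the new directory's hashed inode # into this table and appends the 8-byte entry to a hashing directory block, while a lookup resolves the hash to an inode page in one table probe.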
File Data Indexing
Inode type: a whole flash page is used for each inode
- e.g. 448 four-byte indexing entries per 2KB page (256 bytes reserved for attributes)
- i-class1 / i-class2 (size threshold: 1MB)
- Indirect index pointer
- Double indirect index pointer
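The per-page numbers above can be captured in a hypothetical C layout, assuming a 2KB flash page with 256 bytes reserved for attributes; the struct name and field names are illustrative, not from the paper:

```c
#include <stdint.h>

#define FLASH_PAGE_SIZE 2048
#define ATTR_BYTES      256
#define N_INDEX ((FLASH_PAGE_SIZE - ATTR_BYTES) / 4)   /* 448 index entries */
#define ICLASS1_MAX     (1u << 20)                     /* 1MB size threshold */

/* One inode occupies exactly one flash page. */
struct fs_inode {
    uint8_t  attr[ATTR_BYTES];  /* attributes, including the filename */
    uint32_t index[N_INDEX];    /* i-class1: direct data-page pointers;
                                 * i-class2: indirect / double-indirect */
};

/* i-class1 for files up to the 1MB threshold, i-class2 above it. */
static int inode_class(uint32_t file_size)
{
    return file_size <= ICLASS1_MAX ? 1 : 2;
}
```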
File Data Indexing
Existing FS: Ext2 allocates only 12 entries for direct indexing (up to 6KB)
- This represents only 2.4% of the space when the file size is 1MiB
- The probability of higher-level indexing is much higher, even for small files
- Lots of additional flash page consumption, proportional to the degree of the indexing level
Proposed scheme
- For small files, only the inode page itself is consumed, because most small files fall into i-class1
- For large file writes, two additional pages are consumed; this overhead is negligible because most operations on large files are reads
Analysis and Evaluation
FS mount time
- Read the super block and the hashing directory blocks, then build the hash table and tree directory structure in core
- Significantly reduced
Memory footprint
- Only the hashing values of directory files are managed in core
- With 2KB-page flash memory: 256 directories per page and 16K directories per block
- Fewer than ten hashing directory blocks are allocated even for a giga-scale file system
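The footprint figures follow from simple arithmetic on the 8-byte entries; the 64 pages-per-block value below is an assumption (typical large-block NAND with 2KB pages and 128KB erase blocks), not stated on the slide:

```c
/* Footprint arithmetic behind the "256 directories per page,
 * 16K directories per block" figures. */
#define PAGE_SIZE       2048
#define HDIR_ENTRY_SIZE 8     /* (hashed directory inode #, location) pair */
#define PAGES_PER_BLOCK 64    /* assumed large-block NAND geometry */

enum {
    DIRS_PER_PAGE  = PAGE_SIZE / HDIR_ENTRY_SIZE,        /* 256 directories */
    DIRS_PER_BLOCK = DIRS_PER_PAGE * PAGES_PER_BLOCK,    /* 16K directories */
};
```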
Analysis and Evaluation
Directory update: for a 2KB inode page,
- 1504 bytes (188 files) for direct entries
- seven entries (1792 files) for direct indexing
- one entry (almost 100K files) for indirect indexing
Hundreds of directory entries are enough for one directory, so it is sufficient to update only one directory inode page in almost all cases
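A sketch of the arithmetic behind these counts, assuming 8-byte entries and 256-entry index pages, which reproduces the 188 and 1792 figures; under these assumptions the two-level count works out to 65,536, so the slide's "almost 100K" presumably rests on a slightly different entry layout:

```c
/* Directory-capacity arithmetic for a 2KB directory inode page. */
#define PAGE_SIZE      2048
#define DENTRY_SIZE    8      /* assumed size of one directory entry */
#define DIRECT_BYTES   1504   /* room in the inode page for direct entries */
#define PER_INDEX_PAGE (PAGE_SIZE / DENTRY_SIZE)   /* 256 entries per page */

enum {
    DIRECT_FILES = DIRECT_BYTES / DENTRY_SIZE,             /* 188 files  */
    ONE_LEVEL    = 7 * PER_INDEX_PAGE,                     /* 1792 files */
    TWO_LEVEL    = PER_INDEX_PAGE * PER_INDEX_PAGE,        /* 65536 files */
};
```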
Conclusion
An efficient metadata management scheme for giga-scale capacity flash memory
- Hashing directory structure for directory management
- Two-level index structure for file index management
Reduces mount time and memory footprint
Reduces file lookup and creation latency when the directory hierarchy is deep