04/19/23 PSU’s CS 587 (Slide 1)
Chapter 10,11 Indexes
10. Tree-structured Indexes
• Intuition for tree indexes
• ISAM: a static structure
• B+ Trees
  • Operations: Search, Insert, Delete
  • Special cases: Key Compression, Bulk Loading, Order
  • Updates may change rids
11. Hash-based Indexes
• Static Hashing
• Extendible Hashing: Directory, Insert, Delete
• Linear Hashing: When to split?
• Extendible vs. Linear Hashing
Learning Objectives
• Describe the pros and cons of ISAM
• Perform inserts and deletes on B+ trees, extendible and linear hash indexes
• For B+ trees: describe algorithms for bulk loading and key compression
• Explain why hash indexes are rarely used
Indexes in Real DBMSs
• SQL Server, Oracle, DB2: B+ tree only
• Postgres: B+ tree; Hash (discouraged); GiST (http://gist.cs.berkeley.edu), a generalized index type that models R-trees* and other indexes; GIN (http://www.sai.msu.su/~megera/oddmuse/index.cgi/Gin), primarily for text search
• MySQL: depends on the storage engine; mainly B+ tree, some hash, R-tree*
• Bottom line: hash indexes are rare
*R-trees index rectangles and higher-dimensional structures
10.1 Intuition ``Find all students with gpa > 3.0’’
If data is in sorted file, do binary search to find first such student, then scan to find others.
Cost of binary search on disk can be high. Two simple ideas:
• Create an `index’ file.
• Use a large fanout F, since each dereference is expensive (one disk I/O).
Can do binary search on (smaller) index file!
[Figure: data file of pages 1, 2, 3, ..., N, with a smaller index file of entries k1, k2, ..., kN pointing to the data pages]
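To make the fanout argument concrete, here is a small Python sketch (not from the slides; the function names are invented) comparing the page accesses of binary search on a sorted file against a tree index with fanout F:

```python
import math

def binary_search_ios(n_pages):
    # Binary search on a sorted file touches about log2(N) pages.
    return math.ceil(math.log2(n_pages))

def tree_index_ios(n_pages, fanout):
    # A tree index with fanout F dereferences about log_F(N) pages.
    return math.ceil(math.log(n_pages, fanout))

# For a million-page file, fanout 256 cuts 20 page reads down to 3.
print(binary_search_ios(1_000_000))    # 20
print(tree_index_ios(1_000_000, 256))  # 3
```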
10. Trees
ISAM
Index file may still be quite large. But we can apply the idea repeatedly!
Leaf pages contain data entries.
[Figure: ISAM structure; index entry format <P0, K1, P1, K2, P2, ..., Km, Pm>; non-leaf pages above; leaf (primary) pages below, with overflow pages hanging off them]
Review of Indexes
As for any index, 3 alternatives for data entries k*:
• Data record with key value k
• <k, rid of data record with search key value k>
• <k, list of rids of data records with search key k>
Choice is orthogonal to the indexing technique used to locate data entries k*.
Tree-structured indexing techniques support both range searches and equality searches.
ISAM: static structure; B+ tree: dynamic, adjusts gracefully under inserts and deletes.
Comments on ISAM
• File creation: leaf (data) pages allocated sequentially, sorted by search key; then index pages allocated, then space for overflow pages.
• Index entries: <search key value, page id>; they `direct’ search for data entries, which live in the leaf pages.
• Search: start at root; use key comparisons to go to a leaf. Cost log_F N; F = # entries/index page, N = # leaf pages.
• Insert: find the leaf the data entry belongs to, and put it there, perhaps in an overflow page.
• Delete: find and remove from the leaf; if an overflow page becomes empty, de-allocate it. Thus data pages remain sequential/contiguous.
• Static tree structure: inserts/deletes affect only leaf pages.
Example ISAM Tree
Each node can hold 2 entries; no need for `next-leaf-page’ pointers. (Why?)
[Figure: ISAM tree; root 40; index nodes (20, 33) and (51, 63); leaf pages 10* 15* | 20* 27* | 33* 37* | 40* 46* | 51* 55* | 63* 97*]
After Inserting 23*, 48*, 41*, 42* ...
[Figure: same tree with overflow pages; 23* chained under leaf 20* 27*; 48* 41*, then 42*, chained under leaf 40* 46*; index pages unchanged]
... Then Deleting 42*, 51*, 97*
Note that 51 still appears in the index level, but 51* no longer appears in any leaf!
[Figure: tree after the deletions; index unchanged (root 40; nodes 20, 33 and 51, 63); leaves 10* 15* | 20* 27* | 33* 37* | 40* 46* | 55* | 63*; overflow pages 23* and 48* 41* remain]
Pros and Cons of ISAM
Cons
• After many inserts and deletes, long overflow chains can develop.
• Overflow records may not be sorted.
Pros
• Inserts and deletes are fast, since there’s no need to balance the tree.
• No need to lock nodes of the original tree for concurrent access.
• If the tree has had few updates, interval queries are fast.
B+ Tree: Most Widely Used Index
• Insert/delete at log_F N cost; tree is kept height-balanced. (F = fanout, N = # leaf pages)
• Minimum 50% occupancy (except for the root): each node contains m entries, where d <= m <= 2d. The parameter d is called the order of the tree. This ensures that the height stays relatively small.
• Supports equality and range-searches efficiently.
[Figure: B+ tree layers; index entries above (direct search); data entries at the leaf level (the "sequence set")]
Example B+ Tree
• Search begins at the root, and key comparisons direct it to a leaf (as in ISAM).
• Search for 5*, 15*, all data entries >= 24* ...
• Based on the search for 15*, we know it is not in the tree!
[Figure: B+ tree; root 13, 17, 24, 30; leaves 2* 3* 5* 7* | 14* 16* | 19* 20* 22* | 24* 27* 29* | 33* 34* 38* 39*]
Inserting a Data Entry into a B+ Tree
• Find correct leaf L. Put data entry onto L.
• If L has enough space, done!
• Else, must split L (into L and a new node L2):
  • Redistribute entries evenly, copy up middle key.
  • Insert index entry pointing to L2 into parent of L.
• This can happen recursively. To split an index node, redistribute entries evenly, but push up the middle key. (Contrast with leaf splits.)
• Splits “grow” the tree; a root split increases its height. Tree growth: gets wider, or one level taller at the top.
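The copy-up vs. push-up distinction above can be sketched in a few lines of Python (a simplified sketch with invented names, using plain lists for nodes and the keys from the 8* example):

```python
def split_leaf(entries, d):
    """Split an over-full leaf of order d: the middle key is COPIED UP,
    so it goes to the parent but also stays in the new right leaf."""
    assert len(entries) > 2 * d
    left, right = entries[:d], entries[d:]
    copy_up = right[0]          # separator entered into the parent
    return left, right, copy_up

def split_index(keys, d):
    """Split an over-full index node of order d: the middle key is
    PUSHED UP, so it moves to the parent and appears in neither child."""
    assert len(keys) > 2 * d
    left, push_up, right = keys[:d], keys[d], keys[d + 1:]
    return left, push_up, right

# Leaf 2* 3* 5* 7* receives 8*: 5 is copied up, 5* stays in the leaf.
print(split_leaf([2, 3, 5, 7, 8], 2))      # ([2, 3], [5, 7, 8], 5)
# Index node 5, 13, 17, 24, 30 overflows: 17 is pushed up.
print(split_index([5, 13, 17, 24, 30], 2)) # ([5, 13], 17, [24, 30])
```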
Inserting 8* into Example B+ Tree
• Observe how minimum occupancy is guaranteed in both leaf and index page splits.
• Note the difference between copy-up and push-up; be sure you understand the reasons for this.
[Figure: leaf split: 2* 3* | 5* 7* 8*, with entry 5 to be inserted in the parent node. (Note that 5 is copied up and continues to appear in the leaf.) Index split: 5, 13, 17, 24, 30 becomes 5, 13 | 24, 30, with entry 17 to be inserted in the parent node. (Note that 17 is pushed up and appears only once in the index. Contrast this with a leaf split.)]
Example B+ Tree After Inserting 8*
Notice that root was split, leading to increase in height.
In this example, we can avoid split by re-distributing entries; however, this is usually not done in practice.
[Figure: tree after inserting 8*; root 17; children 5, 13 and 24, 30; leaves 2* 3* | 5* 7* 8* | 14* 16* | 19* 20* 22* | 24* 27* 29* | 33* 34* 38* 39*]
Deleting a Data Entry from a B+ Tree
• Start at the root, find leaf L where the entry belongs. Remove the entry.
• If L is at least half-full, done!
• If L has only d-1 entries:
  • Try to re-distribute, borrowing from a sibling (adjacent node with the same parent as L).
  • If re-distribution fails, merge L and the sibling.
• If a merge occurred, must delete the entry (pointing to L or the sibling) from the parent of L.
• Merges could propagate to the root, decreasing height.
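The redistribute-or-merge decision above can be sketched as a small Python helper (hypothetical, invented names; leaves are plain lists, and only a right sibling is considered), using the numbers from the deletion example on the next slides:

```python
def handle_underflow(leaf, right_sibling, d):
    """Fix a leaf that has fallen to d-1 entries using its right sibling.
    Borrow if the sibling can spare an entry; otherwise merge."""
    if len(right_sibling) > d:             # sibling has more than d entries
        leaf.append(right_sibling.pop(0))  # redistribute (borrow one)
        new_separator = right_sibling[0]   # key copied up into the parent
        return "redistribute", new_separator
    leaf.extend(right_sibling)             # (d-1) + d entries fit in one node
    return "merge", None                   # parent must drop its separator

# Deleting 19* and 20*: leaf [22*] borrows 24* from sibling [24*, 27*, 29*];
# 27 is copied up as the new separator.
leaf, sib = [22], [24, 27, 29]
print(handle_underflow(leaf, sib, 2))  # ('redistribute', 27)
# Then deleting 24*: the sibling cannot spare an entry, so the leaves merge.
leaf, sib = [22], [27, 29]
print(handle_underflow(leaf, sib, 2))  # ('merge', None)
```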
Example Tree After (Inserting 8*, Then) Deleting 19* and 20* ...
Deleting 19* is easy. Deleting 20* is done with re-distribution.
Notice how middle key is copied up.
[Figure: tree after the deletions; root 17; children 5, 13 and 27, 30; leaves 2* 3* | 5* 7* 8* | 14* 16* | 22* 24* | 27* 29* | 33* 34* 38* 39*]
... And Then Deleting 24*
Must merge. Observe the `toss’ of an index entry (on the right), and the `pull down’ of an index entry (below).
[Figure: right child of the root after the merge; index entry 30 is tossed, leaving leaves 22* 27* 29* | 33* 34* 38* 39*. Final tree: the root’s 17 is pulled down, giving a single index node 5, 13, 17, 30 over leaves 2* 3* | 5* 7* 8* | 14* 16* | 22* 27* 29* | 33* 34* 38* 39*]
Example of Non-leaf Re-distribution
• The tree is shown below during deletion of 24*. (What could be a possible initial tree?)
• In contrast to the previous example, we can re-distribute an entry from the left child of the root to the right child.
[Figure: root 22; left child 5, 13, 17, 20; right child 30; leaves 2* 3* | 5* 7* 8* | 14* 16* | 17* 18* | 20* 21* | 22* 27* 29* | 33* 34* 38* 39*]
After Re-distribution
Intuitively, entries are re-distributed by `pushing through’ the splitting entry in the parent node.
It suffices to re-distribute index entry with key 20; we’ve re-distributed 17 as well for illustration.
[Figure: after re-distribution; root 17; left child 5, 13; right child 20, 22, 30; same leaves as before]
B+ Trees in Practice: typical values for B+ tree parameters
• Page size: 8K
• Key: at most 8 bytes (compression: later)
• Pointer: at most 4 bytes
• Thus index entries are at most 12 bytes, and a page can hold at least 682 of them.
• At 67% occupancy, a page holds roughly 455 entries; estimate that conservatively as 256 = 2^8.
• Top two levels often fit in memory: the root of the tree is 1 page = 8K bytes; the next level is 2^8 pages = 2^8 * 2^3 KB = 2 Megabytes.
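The arithmetic on this slide can be checked directly (a sketch; variable names are mine):

```python
PAGE = 8 * 1024      # 8K page
ENTRY = 8 + 4        # 8-byte key + 4-byte pointer, 12 bytes per index entry
full = PAGE // ENTRY             # entries in a fully packed index page: 682
at_67 = full * 2 // 3            # roughly 67% occupancy: ~455
fanout = 2 ** 8                  # conservative round estimate: 256
next_level_bytes = fanout * PAGE # one level below the root, fully in memory
print(full, at_67, fanout, next_level_bytes)
```

Note 8192 // 12 is 682, not the 683 sometimes quoted: 683 entries would need 8196 bytes, which no longer fits in an 8K page.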
B-Trees vs Hash Indexes
• A typical B-tree height is 2-3:
  • Height 0 supports 2^8 = 256 records
  • Height 2 supports 2^24 = 16M records
  • Height 3 supports 2^32 = 4G records
• A B-tree of height 2-3 requires 2-3 I/Os, including one I/O to access the data, assuming the top two levels are in memory, and assuming alternative 2 or 3.
• This is why DBMSs either don’t support or don’t recommend hash indexes on base tables, though hashing is widely used elsewhere.
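With the conservative fanout of 2^8 from the previous slide, the capacity figures above follow mechanically (a sketch; the function name is invented):

```python
FANOUT = 2 ** 8  # conservative entries per page (previous slide)

def records_supported(height):
    # A tree of the given height with this fanout reaches
    # FANOUT ** (height + 1) data entries at the leaf level.
    return FANOUT ** (height + 1)

print(records_supported(0))  # 256
print(records_supported(2))  # 16777216  (16M)
print(records_supported(3))  # 4294967296  (4G)
```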
Prefix Key Compression
• Important to increase fan-out. (Why?)
• Key values in index entries only `direct traffic’; we can often compress them.
  • E.g., if we have adjacent index entries with search key values Dannon Yogurt, David Smith and Devarakonda Murthy, we can abbreviate David Smith to Dav. (The other keys can be compressed too ...)
  • Is this correct? Not quite! What if there is a data entry Davey Jones? (Then we can only compress David Smith to Davi.)
  • In general, while compressing, we must leave each index entry greater than every key value (in any subtree) to its left.
• Insert/delete must be suitably modified.
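The rule above, "shortest prefix that is still greater than everything to its left", can be sketched as (invented helper name; `left_max` is the largest key value in the left subtree):

```python
def shortest_separator(left_max, right_min):
    """Shortest prefix of right_min that is strictly greater than
    left_max, so it can replace right_min as a compressed index entry."""
    for i in range(1, len(right_min) + 1):
        prefix = right_min[:i]
        if prefix > left_max:   # lexicographic string comparison
            return prefix
    return right_min            # no shorter separator exists

# Slide example: with Dannon Yogurt on the left, Dav suffices ...
print(shortest_separator("Dannon Yogurt", "David Smith"))  # Dav
# ... but with Davey Jones on the left, we need Davi.
print(shortest_separator("Davey Jones", "David Smith"))    # Davi
```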
Bulk Loading of a B+ Tree
• If we have a large collection of records and want to create a B+ tree on some field, doing so by repeatedly inserting records is very slow.
• Bulk loading can be done much more efficiently.
• Initialization: sort all data entries, insert a pointer to the first (leaf) page in a new (root) page.
[Figure: sorted pages of data entries, not yet in the B+ tree: 3* 4* | 6* 9* | 10* 11* | 12* 13* | 20* 22* | 23* 31* | 35* 36* | 38* 41* | 44*; a new root page points to the first leaf]
Bulk Loading (Contd.)
• Index entries for leaf pages are always entered into the right-most index page just above the leaf level. When this fills up, it splits. (The split may go up the right-most path to the root.)
• Much faster than repeated inserts, especially when one considers locking!
[Figure: two snapshots of the build; first, a root with separators 6, 10 over the leading leaves, the remaining data entry pages not yet in the B+ tree; later, root 12, 23 with right-most index pages 6, 10 and 20 and 35, 38, the trailing data entry pages still not yet in the B+ tree]
Summary of Bulk Loading
• Option 1: multiple inserts. Slow. Does not give sequential storage of leaves.
• Option 2: bulk loading.
  • Has advantages for concurrency control.
  • Fewer I/Os during the build.
  • Leaves will be stored sequentially (and linked, of course).
  • Can control the “fill factor” on pages.
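The leaf-packing step of bulk loading can be sketched as follows (a simplification with invented names; it only builds the leaf level and the separator keys that would feed the right-most index page):

```python
def bulk_load_leaves(sorted_entries, capacity, fill=1.0):
    """Pack already-sorted data entries into sequential leaf pages at the
    given fill factor; return the leaves plus the separator keys to be
    entered, left to right, into the index level above."""
    per_leaf = max(1, int(capacity * fill))
    leaves = [sorted_entries[i:i + per_leaf]
              for i in range(0, len(sorted_entries), per_leaf)]
    separators = [leaf[0] for leaf in leaves[1:]]  # first key of each leaf
    return leaves, separators

# Slide data, 2 entries per leaf:
entries = [3, 4, 6, 9, 10, 11, 12, 13, 20, 22, 23, 31, 35, 36, 38, 41, 44]
leaves, seps = bulk_load_leaves(entries, 2)
print(leaves[0], seps)  # [3, 4] [6, 10, 12, 20, 23, 35, 38, 44]
```

A `fill` below 1.0 leaves slack in each leaf, which is the "fill factor" control mentioned above.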
A Note on `Order’
• The order (d) concept is replaced by a physical space criterion in practice (`at least half-full’).
• Index pages can typically hold many more entries than leaf pages.
• Variable-sized records and search keys mean different nodes will contain different numbers of entries.
• Even with fixed-length fields, multiple records with the same search key value (duplicates) can lead to variable-sized data entries (if we use Alternative (3)).
10.8.4 Effect of Inserts and Deletes on RIDs
The text raises this problem:
• Suppose there is an index using alternative 1, as happens in SQL Server and Oracle when a primary index is declared on a table.
• RIDs will change with updates and deletes. Why? Splits and merges.
• Then pointers in other, secondary, indexes will be wrong.
• The text suggests that index pointers can be updated. This is impractical.
What do SQL Server and Oracle do? They use logical RIDs in secondary indexes.
Logical Pointers in Data Entries
What is a logical pointer? A primary key value, for example an Employee ID.
• Thus a data entry for an age index might be <42, C24>: 42 is the age, C24 is the ID of an employee aged 42.
• To find that employee with age 42, we must use the primary key index!
• This approach makes primary key indexes faster (alternative 1 instead of 2) but secondary key indexes slower.
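The two-step lookup this implies can be sketched with in-memory dictionaries standing in for the two indexes (a toy illustration; the record contents are invented):

```python
# Primary key index: employee ID -> full record (alternative 1).
primary = {"C24": {"id": "C24", "name": "Pat", "age": 42}}

# Secondary age index with logical pointers: age -> list of employee IDs.
age_index = {42: ["C24"]}

def find_by_age(age):
    # Every secondary-index probe costs an extra traversal of the
    # primary key index to resolve the logical pointer.
    return [primary[emp_id] for emp_id in age_index.get(age, [])]

print(find_by_age(42)[0]["id"])  # C24
```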
11. Hash-based Indexes: Review
As for any index, 3 alternatives for data entries k*:
• Data record with key value k
• <k, rid of data record with search key value k>
• <k, list of rids of data records with search key k>
The choice is orthogonal to the indexing technique.
• Hash-based indexes are best for equality selections. They cannot support range searches.
• Static and dynamic hashing techniques exist; the trade-offs are similar to ISAM vs. B+ trees.
11.Hash
Static Hashing
• # primary pages fixed, allocated sequentially, never de-allocated; overflow pages if needed.
• h(k) mod M = bucket to which the data entry with key k belongs. (M = # of buckets)
[Figure: key -> h -> h(key) mod M selects one of M primary bucket pages (0 to M-1), each with a possible chain of overflow pages]
Static Hashing (Contd.)
• Buckets contain data entries.
• The hash function works on the search key field of record r. It must distribute values over the range 0 ... M-1.
• h(key) = (a * key + b) usually works well; a and b are constants, and a lot is known about how to tune h.
• Long overflow chains can develop and degrade performance.
• Extendible and Linear Hashing: dynamic techniques to fix this problem.
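A minimal static-hashing sketch of the scheme above (the constants a and b are arbitrary choices here, not tuned values):

```python
M = 4          # number of primary buckets, fixed for the file's lifetime
A, B = 17, 5   # constants of h(key) = a * key + b (arbitrary here)

def bucket_of(key):
    # Static hashing: h(key) mod M; the bucket never changes once M is fixed.
    return (A * key + B) % M

# Distribute a few integer keys into the M buckets.
buckets = [[] for _ in range(M)]
for k in [3, 8, 12, 15]:
    buckets[bucket_of(k)].append(k)
print(buckets)
```

When a bucket's primary page fills, a static scheme can only chain overflow pages, which is exactly the degradation the dynamic schemes below avoid.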
11.2 Extendible Hashing
• Situation: a bucket (primary page) becomes full. Why not re-organize the file by doubling the # of buckets? Reading and writing all pages is expensive!
• Idea: use a directory of pointers to buckets; double the # of buckets by doubling the directory, splitting just the bucket that overflowed!
• The directory is much smaller than the file, so doubling it is much cheaper. Only one page of data entries is split. No overflow page!
• The trick lies in how the hash function is adjusted!
Insert Example
• The directory is an array of size 4. To find the bucket for r, take the last `global depth’ # bits of h(r); we denote r by h(r).
• If h(r) = 5 = binary 101, it is in the bucket pointed to by 01.
• Insert: if the bucket is full, split it (allocate a new page, re-distribute).
• If necessary, double the directory. (As we will see, splitting a bucket does not always require doubling; we can tell by comparing global depth with local depth for the split bucket.)
[Figure: directory 00, 01, 10, 11 (global depth 2) pointing to buckets A-D, each with local depth 2; A: 4* 12* 32* 16*; B: 1* 5* 21* 13*; C: 10*; D: 15* 7* 19*]
Insert h(r)=20 (Causes Doubling)
[Figure: before and after. Inserting 20* doubles the directory to entries 000-111 (global depth 3). Bucket A splits into A: 32* 16* and split image A2: 4* 12* 20*, both with local depth 3; buckets B (1* 5* 21* 13*), C (10*), D (15* 7* 19*) keep local depth 2, with two directory entries each pointing to them]
Points to Note
• 20 = binary 10100. The last 2 bits (00) tell us r belongs in A or A2; the last 3 bits are needed to tell which.
• Global depth of directory: max # of bits needed to tell which bucket an entry belongs to.
• Local depth of a bucket: # of bits used to determine if an entry belongs to this bucket.
• When does a bucket split cause directory doubling? Before the insert, local depth of the bucket = global depth. The insert causes local depth to become > global depth; the directory is doubled by copying it over and `fixing’ the pointer to the split image page. (Use of least significant bits enables efficient doubling via copying of the directory!)
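The least-significant-bits trick above can be sketched in two small functions (invented names; the directory is a plain list of bucket references):

```python
def bucket_index(h, global_depth):
    # Use the last `global_depth` bits of the hash as the directory index.
    return h & ((1 << global_depth) - 1)

def double_directory(directory):
    # Doubling with least significant bits is just copying the directory:
    # entries i and i + old_size point to the same bucket afterwards,
    # until the pointer for the overflowed bucket's split image is fixed.
    return directory + directory[:]

# 20 = binary 10100: last 2 bits say "A or A2", last 3 bits say which.
print(bucket_index(20, 2))  # 0  (bits 00)
print(bucket_index(20, 3))  # 4  (bits 100)
print(double_directory(["A", "B", "C", "D"]))
```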
Performance, Deletions
• If the directory fits in memory, an equality search is answered with one disk access; else two.
  • A 100MB file with 100-byte records contains 1,000,000 records (as data entries). If pages are 4K, the file requires 25,000 directory elements; chances are high that the directory will fit in memory.
• The directory grows in spurts, and, if the distribution of hash values is skewed, the directory can grow large.
• Multiple entries with the same hash value cause problems!
• Delete: if removal of a data entry makes a bucket empty, it can be merged with its `split image’. If each directory element points to the same bucket as its split image, we can halve the directory.
11.3 Linear Hashing
• This is another dynamic hashing scheme, an alternative to Extendible Hashing.
• LH handles the problem of long overflow chains without using a directory, and handles duplicates.
• Idea: use a family of hash functions h0, h1, h2, ...
  • h_i(key) = h(key) mod (2^i N); N = initial # buckets
  • h is some hash function (its range is not 0 to N-1)
  • If N = 2^d0 for some d0, h_i consists of applying h and looking at the last d_i bits, where d_i = d0 + i.
  • h_{i+1} doubles the range of h_i (similar to directory doubling).
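The hash family above is tiny to write down (a sketch; h is the identity here purely for illustration, any base hash function works):

```python
N = 4  # initial number of buckets, a power of two (d0 = 2)

def h_i(key, i):
    # Family of hash functions: h_i(key) = h(key) mod (2^i * N).
    # With h = identity, this keeps the last (d0 + i) bits of the key.
    return key % (2 ** i * N)

# Doubling the range: h1 refines h0's buckets.
print(h_i(43, 0))  # 3  (bucket 11 under h0)
print(h_i(43, 1))  # 3  (still bucket 011 under h1)
print(h_i(44, 0))  # 0  (bucket 00 under h0)
print(h_i(44, 1))  # 4  (bucket 100 under h1: the split image)
```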
Linear Hashing (Contd.)
• A directory is avoided in LH by using overflow pages, and by choosing the bucket to split round-robin.
• Splitting proceeds in `rounds’. A round ends when all N_R initial (for round R) buckets are split. Buckets 0 to Next-1 have been split; Next to N_R are yet to be split.
• The current round number is Level.
• Search: to find the bucket for data entry r, compute h_Level(r):
  • If h_Level(r) is in the range `Next to N_R’, r belongs here.
  • Else, r could belong to bucket h_Level(r) or bucket h_Level(r) + N_R; must apply h_{Level+1}(r) to find out.
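The search rule above can be sketched as one function (invented name; again h is the identity for illustration):

```python
def find_bucket(key, level, next_ptr, n0=4):
    """LH lookup sketch: buckets below `next_ptr` were already split this
    round, so entries hashing there need the finer function h_{level+1}."""
    b = key % (2 ** level * n0)             # h_Level(key)
    if b < next_ptr:                        # bucket b was already split
        b = key % (2 ** (level + 1) * n0)   # h_{Level+1} picks b or b + N_R
    return b

# Level=0, Next=1, N=4 (as in the example on the next slides):
print(find_bucket(32, 0, 1))  # 0  (h0=0, split; h1 keeps it in bucket 000)
print(find_bucket(44, 0, 1))  # 4  (h0=0, split; h1 sends it to the image)
print(find_bucket(9, 0, 1))   # 1  (h0=1, not yet split this round)
```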
Overview of LH File
In the middle of a round:
[Figure: buckets 0 to Next-1 have been split in this round and have `split image’ buckets created at the end of the file; buckets Next to N_R-1 are still in the range of h_Level. If h_Level(search key value) lands in the already-split range, h_{Level+1} must be used to decide whether the entry is in the bucket or its split image.]
When to split?
• Insert: find the bucket by applying h_Level / h_{Level+1}. If the bucket to insert into is full:
  • Add an overflow page and insert the data entry.
  • (Maybe) split the Next bucket and increment Next.
• Can choose any criterion to `trigger’ a split. Since buckets are split round-robin, long overflow chains don’t develop!
• Doubling of the directory in Extendible Hashing is similar; the switching of hash functions is implicit in how the # of bits examined is increased.
Example of Linear Hashing
On a split, h_{Level+1} is used to re-distribute the entries.
[Figure: Level=0, N=4, Next=0; bucket 00: 32* 44* 36*; 01: 9* 25* 5*; 10: 14* 18* 10* 30*; 11: 31* 35* 7* 11*. (The h1 column, 000-011, is for illustration only.) After inserting 43* (h0(43) = binary 11, a full bucket): an overflow page with 43* is added to bucket 11, bucket 0 is split using h1 into 000: 32* and split image 100: 44* 36*, and Next=1]
Example: End of a Round
[Figure: left, Level=0, Next=3: inserting 22* finds its bucket full, so an overflow page is added and the Next bucket is split; 43* sits on an overflow page of bucket 11. Right, after the last original bucket splits (on insert of 50*), the round ends: Level=1, Next=0, and the file has 8 primary buckets (000-111), with 43* still on an overflow page of bucket 011]
LH Described as a Variant of EH
The two schemes are actually quite similar:
• Begin with an EH index where the directory has N elements.
• Use overflow pages, and split buckets round-robin.
• The first split is at bucket 0. (Imagine the directory being doubled at this point.) But elements <1, N+1>, <2, N+2>, ... are the same, so we need only create directory element N, which now differs from element 0.
  • When bucket 1 splits, create directory element N+1, etc.
• So, the directory can double gradually. Also, primary bucket pages are created in order. If they are allocated in sequence too (so that finding the i’th is easy), we actually don’t need a directory at all! Voila, LH.