Transcript of “More Mastering the Art of Indexing”
Copyright 2010 Sun Microsystems, Inc. The World’s Most Popular Open Source Database
More Mastering the Art of Indexing
Yoshinori Matsunobu
Lead of MySQL Professional Services APAC, Sun Microsystems
Table of contents
• Case 1: Lock contention and indexing
• Case 2: Deadlock caused by indexes
• Case 3: Covering index and range scan / LIMIT
• Case 4: Covering index and long text/blob
• Case 5: Sorting, indexing and query execution plans
This is the second half of the “Mastering the art of indexing” session. (Independent from the first half; no prerequisite.)
The first half session was presented last year: http://www.mysqlconf.com/mysql2009/public/schedule/detail/6661
Speaker’s profile
• Name: Yoshinori Matsunobu
• Living in Tokyo, Japan
• Leading MySQL Consulting Business in Japan and APAC at Sun Microsystems (Oracle)
• Joined MySQL Inc. as a consultant in September 2006
  – Before joining MySQL, I worked at Sony Corporation as a software architect for 5.5 years, using Oracle/WebLogic/SAP R3 and MySQL/JBoss
• Published a couple of MySQL/Linux/Java books/articles (all written in Japanese)
• Contact:
  – E-mail: [email protected]
  – Blog: http://yoshinorimatsunobu.blogspot.com
  – @matsunobu on Twitter
Case 1: Lock contention
• “user” table (InnoDB)
• 1 million records
• PK on user_id
• No index on status column
• Auto-committed queries
user_id(PK) | name   | status | …
1           | Ronald | 0      | …
…           | …      | …      | …
100         | Messi  | 0      | …
…           | …      | …      | …
1000000     | Raul   | 9      | …
Why was query 2 blocked by query 1?
Session 1:
  1) DELETE FROM user WHERE status = 9;
     … (taking a long time to scan & delete) …
     Query OK, 1000 rows affected (13.66 sec)
Session 2:
  2) UPDATE user SET status=9 WHERE user_id=100;
     … (waiting for query 1 to finish) …
     Query OK, 1 row affected (11.27 sec)
     Rows matched: 1  Changed: 1  Warnings: 0
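While query 2 is waiting, the blocking relationship can be inspected. This query is an illustrative addition (not on the original slides); the INFORMATION_SCHEMA lock tables it uses require the InnoDB Plugin in MySQL 5.1 or later:

```sql
-- Which transaction is waiting, and which one blocks it
SELECT r.trx_id    AS waiting_trx,
       r.trx_query AS waiting_query,
       b.trx_id    AS blocking_trx,
       b.trx_query AS blocking_query
FROM information_schema.INNODB_LOCK_WAITS w
JOIN information_schema.INNODB_TRX r ON r.trx_id = w.requesting_trx_id
JOIN information_schema.INNODB_TRX b ON b.trx_id = w.blocking_trx_id;
```

On the builtin InnoDB (or 5.0), SHOW ENGINE INNODB STATUS\G shows similar information in its TRANSACTIONS section.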
Understanding how statement-based binary logging works
user_id(PK) | name   | status | …
1           | Ronald | 0      | …
…           | …      | …      | …
100         | Messi  | 0      | …
…           | …      | …      | …
1000000     | Raul   | 9      | …

1) DELETE FROM user WHERE status = 9;
   (fully scans the user table, deleting rows where status == 9)
2) UPDATE user SET status=9 WHERE user_id=100;

What happens if query 1) does not lock row user_id=100?
Binary log (commit order):
  1. UPDATE user SET status=9 WHERE user_id=100;
  2. DELETE FROM user WHERE status = 9;
Row user_id=100 on slaves: does not exist (the replayed DELETE removes it).
Row user_id=100 on the master:
  - 2) finishes before 1)
  - 1) had already checked row user_id=100 and did not delete it, because status was still 0
  - Final result: the row exists, with status=9
Data consistency is broken!
Next-Key Locking in InnoDB
• Locks not only “modified” rows, but also “scanned” rows
• To avoid master/slave mismatch
  – And to make point-in-time recovery from binary logs work
• UPDATE/DELETE … WHERE … sets an exclusive next-key lock on every record the search encounters
• INSERT … SELECT … puts a shared next-key lock on all the selected rows
• Disadvantage: low concurrency
If status column is indexed
• 1) and 2) can run in parallel (concurrency improved!)
  – If query 2 sets status=9, it is blocked by query 1
• Even though the cardinality of the status column is very low, indexing is still helpful to avoid massive row-lock contention
user_id(PK) | name   | status | …
1           | Ronald | 0      | …
…           | …      | …      | …
100         | Messi  | 0 -> 1 | …
…           | …      | …      | …
1000000     | Raul   | 9      | …

1) DELETE FROM user WHERE status = 9;
2) UPDATE user SET status=1 WHERE user_id=100;

[Secondary index on status: (status, PK) entries such as (0, 100), (0, 1001), …, (9, 12345), …, (9, 1000000). Query 1 scans and locks only the status=9 entries, so query 2 is not blocked.]
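The fix itself is a one-line DDL statement; the index name below is illustrative (the slides do not give one):

```sql
ALTER TABLE user ADD INDEX idx_status (status);
```

With this index, the DELETE locks only the index entries with status=9 and their rows, so an update of a status=0 row no longer has to wait.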
Sequential Access vs Random Access
• Updating/deleting rows by full table scan does sequential reads/writes
• Updating/deleting rows by index scan does random reads/writes
• A full table scan is not always worse than an index scan
[Figure: index scan for 2 million rows vs full table scan for 100 million rows, elapsed seconds. The index scans achieved 297–900 rows/s (buffer pool 5G vs 10G), while the full table scan achieved 502,310 rows/s.]
Read Committed + Row Based Binary Logging
• Next-key locking can be disabled with --binlog-format=row and --transaction-isolation=read-committed in MySQL 5.1
• Performance disadvantages:
  – Row based binary logging
  – Read Committed is less efficient than Repeatable Read with many concurrent sessions in InnoDB (see http://www.facebook.com/note.php?note_id=244956410932)
• In 5.0 or earlier: innodb_locks_unsafe_for_binlog
  – Enable only if you do not use the binlog or you do not care about data consistency
Session 1:
  1) DELETE FROM user WHERE status = 9;
     … (taking a long time to scan & delete) …
     Query OK, 1000 rows affected (13.66 sec)
Session 2:
  2) UPDATE user SET status=9 WHERE user_id=100;
     Query OK, 1 row affected (0.00 sec)
     Rows matched: 1  Changed: 1  Warnings: 0
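As a sketch, the two options quoted above would typically be set in my.cnf (the section layout is the standard one, not from the slides):

```
[mysqld]
binlog-format         = ROW
transaction-isolation = READ-COMMITTED
```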
Case 2: Deadlock

user_id(PK) | name   | status | …
1           | Ronald | 0      | …
2           | John   | 1      | …
…           | …      | …      | …
10000000    | Raul   | 9      | …
Why did auto-committed, single-row UPDATE statements cause a deadlock error?
- user table
- Index on status column
- Auto-committed

Session 1:
  mysql> UPDATE user SET status=4 WHERE status=1 ORDER BY user_id LIMIT 1;
  Query OK, 1 row affected (0.00 sec)
  Rows matched: 1  Changed: 1  Warnings: 0
Session 2:
  mysql> UPDATE user SET status=2 WHERE user_id=2;
  ERROR 1213 (40001): Deadlock found when trying to get lock; try restarting transaction
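When ERROR 1213 occurs, the standard way to see what happened is SHOW ENGINE INNODB STATUS (a debugging hint added here, not part of the original slides):

```sql
-- The "LATEST DETECTED DEADLOCK" section of the output lists both
-- statements, the locks each transaction held and waited for,
-- and which transaction was chosen as the rollback victim
SHOW ENGINE INNODB STATUS\G
```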
Updating (deleting) a single row is not a single step

user_id(PK) | name   | status | …
1           | Ronald | 0      | …
2           | John   | 1      | …
100         | Vieri  | 0 -> 1 | …
…           | …      | …      | …
1000000     | Raul   | 9      | …

1) UPDATE user SET status=4,… WHERE status=1 ORDER BY user_id LIMIT 1;
2) UPDATE user SET status=2 WHERE user_id=2;

Lock acquisition order (secondary index on status vs clustered index):
  a) Query 1: exclusive lock on the secondary-index entry status=1, obtaining PK value 2
  b) Query 2: exclusive lock on the clustered-index record PRIMARY=2, reading its values (status=1)
  c) Query 1: exclusive lock on PRIMARY=2 -> waits for b
  d) Query 2: exclusive lock on the secondary-index entry status=1 -> waits for query 1 -> Deadlock!
Example table
“diary” table
CREATE TABLE diary (
  diary_id INT UNSIGNED AUTO_INCREMENT,
  user_id INT UNSIGNED NOT NULL,
  post_date TIMESTAMP NOT NULL,
  status TINYINT UNSIGNED NOT NULL,
  rating FLOAT NOT NULL,
  title VARCHAR(100) NOT NULL,
  body TEXT,
  PRIMARY KEY (diary_id),
  INDEX user_date(user_id)
) CHARSET utf8 ENGINE=InnoDB;
Wide-range queries are not fast
- If 100 entries match user_id=1, 100 random disk reads might happen
- One random read for the index leaf block, but 100 random reads for the table records
- A single HDD can do only 100-200 random disk reads per second (very slow)
SELECT diary_id FROM diary WHERE user_id=1 AND status=0 AND post_date >= '2009-03-01 00:00:00';

[Index on user_id: a branch block points to leaf blocks whose entries are (user_id, RowID) pairs such as (1, 5), (1, 10000), …, (1, 15321). Each RowID is then looked up in the table records, e.g. 5: post_date=‘2009-03-01..’, status=0; 10000: post_date=‘2009-04-04..’, status=0; 15321: post_date=‘2009-04-23..’, status=0.]
Covering Index: Reading only an index
- If all columns in the SQL statement (SELECT/WHERE/etc.) are contained within a single index, MySQL chooses a “Covering Index” execution plan
- Very efficient because random disk I/O does not happen
- In InnoDB, the RowID is the PK (diary_id), so a covering index can be used more frequently
- The status column is not useful to filter records, but it is useful to make the index a covering index
[Covering index leaf block: each entry holds (user_id, post_date, status, RowID), e.g. user_id=1, status=0 entries with post_date from 2009-03-29 through 2009-04-30 and RowIDs 5, 100, 400, 1000, 10000, 15321. The query is answered from the index alone; the table records are never read.]
SELECT diary_id FROM diary WHERE user_id=1 AND status=0 AND post_date >= '2009-03-01 00:00:00';
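The slides do not show the DDL for this covering index. One plausible definition, keeping post_date before status so that entries stay sorted by post_date within each user (which also helps the ORDER BY post_date examples that follow), is sketched below; the index name and column order are assumptions:

```sql
ALTER TABLE diary ADD INDEX user_date_status (user_id, post_date, status);
```

EXPLAIN for the SELECT above should then show Extra: Using index.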
Covering Index = “Using index”
mysql> EXPLAIN SELECT count(ind) FROM t\G
           id: 1
  select_type: SIMPLE
        table: t
         type: index
possible_keys: NULL
          key: ind
      key_len: 5
          ref: NULL
         rows: 100000181
        Extra: Using index

mysql> select count(ind) from t;
+------------+
| count(ind) |
+------------+
|  100000000 |
+------------+
1 row in set (15.98 sec)

mysql> EXPLAIN SELECT count(c) FROM t\G
           id: 1
  select_type: SIMPLE
        table: t
         type: ALL
possible_keys: NULL
          key: NULL
      key_len: NULL
          ref: NULL
         rows: 100000181
        Extra:

mysql> select count(c) from t;
+-----------+
| count(c)  |
+-----------+
| 100000000 |
+-----------+
1 row in set (28.99 sec)
LIMIT without covering index is not fast
- LIMIT 30,10 requires at least 40 random reads (the first 30 rows must be read and skipped)
- If most records are not status=0, many more random reads will happen
SELECT diary_id FROM diary WHERE user_id=1 AND status=0 AND post_date >= '2009-03-01 00:00:00' ORDER BY post_date LIMIT 30, 10;

[Index leaf entries (user_id, post_date, RowID) are scanned in post_date order; for every entry, the table record is fetched by RowID (a random read) just to check whether status=0.]
LIMIT with covering index is fast
- A covering index scan is sequential access
- All entries are likely to reside in the same leaf block
- A single disk I/O is enough to get all the data
- Reading 40 records or 10 records does not matter
[Covering index leaf block: entries hold (user_id, post_date, status, RowID); all 40 entries needed for LIMIT 30, 10 sit next to each other in the same leaf block, so no table-record access is needed.]

SELECT diary_id FROM diary WHERE user_id=1 AND status=0 AND post_date >= '2009-03-01 00:00:00' ORDER BY post_date LIMIT 30, 10;
LIMIT performance example
• Disk reads happened in both cases (worst-case example)
• The number of random disk reads was N times higher with Normal Index + LIMIT
• The number of disk reads was almost equal with Covering Index + LIMIT
• Similar effects apply to counting records (COUNT)
             | Normal Index | Covering Index
LIMIT 0, 10  | 1.787s       | 0.800s
LIMIT 30, 10 | 5.173s       | 0.831s
SELECT diary_id FROM diary WHERE user_id=? AND status=0 AND post_date >= '2009-03-01 00:00:00' ORDER BY post_date LIMIT X, 10;
Time to execute from 50 clients
What if a covering index can not be used?
- Use LIMIT 0, X (OFFSET 0) so that you minimize the number of random reads
1) SELECT diary_id FROM diary WHERE user_id=1 AND status=0 AND post_date >= '2009-03-01 00:00:00' ORDER BY post_date LIMIT 0, 10;
2) Remember the highest post_date from the result (in a hidden HTML tag, etc.), e.g. 2009-04-29
3) SELECT diary_id FROM diary WHERE user_id=1 AND status=0 AND post_date > '2009-04-29 00:00:00' ORDER BY post_date LIMIT 0, 10;

[Each query starts its index scan at the first matching post_date, so only the returned rows need table-record lookups to check status=0.]
“diary” table example
CREATE TABLE diary (
  diary_id INT UNSIGNED AUTO_INCREMENT,
  user_id INT UNSIGNED NOT NULL,
  post_date TIMESTAMP NOT NULL,
  status TINYINT UNSIGNED NOT NULL,
  rating FLOAT NOT NULL,
  title VARCHAR(100) NOT NULL,
  body TEXT,
  PRIMARY KEY (diary_id),
  INDEX user_date(user_id, post_date),
  INDEX user_rating(user_id, rating)
) CHARSET utf8 ENGINE=InnoDB;
- body is about 1KB/row
- The remaining columns are less than 50B/row in total
- 20 million rows (20+GB) in total
- The workload is almost entirely INSERT and SELECT
- 90% of SELECT statements do not fetch body:
SELECT user_id, post_date, title FROM diary WHERE diary_id=?
- 10% of SELECT statements fetch body:
SELECT body FROM diary WHERE diary_id=?
Page (block) structure in InnoDB

[1 page (block) = 16KB: page header, rows, row-position info, page trailer. A row stores diary_id, user_id, post_date, rating, title, and a body prefix of up to 768B (20B in the InnoDB Plugin’s DYNAMIC format); the rest of body goes to the same page or to an overflow page.]

- The rest of body is stored within the same page if space is available
- If not, it is stored in a separate page called an “Overflow Page”
- The “diary” table is insert-mostly, so the rest of body is stored in the same page in most cases
Large TEXT/BLOB slows down all queries

diary_id(BIGINT PK) | user_id(BIGINT, INDEX) | post_date(DATETIME, INDEX) | title(VARCHAR(100))    | body(TEXT)
1                   | 5544321                | 2009/09/13 21:10:14        | MySQL Cluster overview | …(2000 bytes)
2                   | 5544321                | 2009/10/13 22:13:34        | UEFA Champions League  | …(700 bytes)
3                   | 2345                   | 2009/11/7 22:12:23         | Giants win the Japan Series for the first time in 7 years | …(3000 bytes)
SELECT user_id, post_date, title FROM diary WHERE diary_id=?  (90% of queries)
SELECT body FROM diary WHERE diary_id=?  (10% of queries)

[The InnoDB buffer pool caches whole blocks from the InnoDB data file; each block holds (diary_id, user_id, post_date, title, body).]

• Even though 90% of queries don’t fetch body, body is loaded into the buffer pool because it resides in the same block
• body values occupy most of the InnoDB buffer pool space, so fewer records can be cached
Optimization approach: 1 to 1 relationship

CREATE TABLE diary_head (
  diary_id INT UNSIGNED AUTO_INCREMENT,
  user_id INT UNSIGNED NOT NULL,
  post_date TIMESTAMP NOT NULL,
  status TINYINT UNSIGNED NOT NULL,
  rating FLOAT NOT NULL,
  title VARCHAR(100) NOT NULL,
  PRIMARY KEY (diary_id),
  INDEX user_date(user_id, post_date),
  INDEX user_rating(user_id, rating)
) CHARSET utf8 ENGINE=InnoDB;

CREATE TABLE diary_body (
  diary_id INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
  body TEXT
) CHARSET utf8 ENGINE=InnoDB;

- Two tables, both with diary_id as the primary key
- diary_head has all columns except body
- diary_body has only body and the PK
- diary_body can be NoSQL
- Normalization is broken
Table Size

Table      | PRIMARY KEY (+ records) | Secondary Indexes
diary      | 24GB                    | 1.2GB
diary_head | 1.2GB                   | 1.2GB
diary_body | 22GB                    | 0

- Since body is by far the biggest column, the diary_head table becomes much smaller
- 90% of SELECT statements access diary_head:
SELECT user_id, post_date, title FROM diary_head WHERE diary_id=?
- 10% of SELECT statements access diary_body:
SELECT body FROM diary_body WHERE diary_id=?
Why is the 1:1 relationship effective?

SELECT user_id, post_date, title FROM diary_head WHERE diary_id=?  (90% of queries)
SELECT body FROM diary_body WHERE diary_id=?  (10% of queries)

[diary_head blocks contain only (diary_id, user_id, post_date, title); body lives in separate diary_body blocks in the InnoDB data and log files.]

- 90% of queries do not read blocks that contain body
- Blocks of the diary_head table are (much) less frequently evicted from the buffer pool
Queries per second
• If only a small fraction of queries (20% or less) fetch the body column, the 1:1 relationship is beneficial in this case
Queries that fetch body | diary table (qps) | 1:1 relationship (qps) | Improvement
2%                      | 323.61            | 15166.22               | 46.9x
5%                      | 333.78            | 6567.11                | 19.7x
10%                     | 352.27            | 3215.27                | 9.13x
20%                     | 395.81            | 1370.24                | 3.46x
33%                     | 474.84            | 782.51                 | 1.45x
50%                     | 632.87            | 539.86                 | 0.85x

Workload: SELECT user_id, post_date, title FROM diary_head WHERE diary_id=?
2, 5, … 50% of queries: SELECT body FROM diary_body WHERE diary_id=?
Covering Index as an alternative
• A covering index can be a replacement for the 1:1 relationship, without breaking normalization
• ALTER TABLE diary ADD INDEX diary_covering (diary_id, user_id, post_date, status, rating, title);
  – Including all columns except body
• The execution plan of “SELECT user_id, post_date, … FROM diary WHERE diary_id=?” should then be “Using index” (covering index)!
Covering index that covers all columns except body
• This query’s execution plan should be “Using index” (covering index)
• The index size is much smaller than the diary table (close to diary_head)
• It should be very well cached, so it should be fast
[diary_covering leaf block: entries hold (diary_id, user_id, post_date, status, rating, title, RowID); the entry for diary_id=5 already contains every column the query needs, so the table record (with body) is not read.]

SELECT user_id, post_date, rating, status, title FROM diary WHERE diary_id=5;
Be careful about query execution plan, always
• PRIMARY is the clustered index in InnoDB
  – All columns *including body* will be accessed
• Why did MySQL not use “diary_covering”?
mysql> EXPLAIN SELECT user_id, post_date, title FROM diary WHERE diary_id=5\G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: diary
         type: const
possible_keys: PRIMARY,diary_covering
          key: PRIMARY
      key_len: 4
          ref: const
         rows: 1
        Extra:
1 row in set (0.00 sec)
Difference between type=const and type=ref
• const: unique lookup (guaranteed by a primary/unique index)
• ref: non-unique lookup
• MySQL prioritizes a “const” plan over a “ref” plan
  – A unique key lookup reads at most one record, so it is the fastest if we do not consider row length at all
• Using diary_covering is a non-unique lookup
  – Even though it actually returns at most one record
• Control the optimizer’s plan with FORCE INDEX:
  – SELECT user_id, post_date, title FROM diary FORCE INDEX (diary_covering) WHERE diary_id=?
Optimized query execution plan
mysql> EXPLAIN SELECT user_id, post_date, title FROM diary FORCE INDEX (diary_covering) WHERE diary_id = 5\G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: diary
         type: ref
possible_keys: diary_covering
          key: diary_covering
      key_len: 4
          ref: const
         rows: 1
        Extra: Using index
1 row in set (0.00 sec)
Queries per second – Covering index
• The covering index is very effective: close to, or above, the 1:1 relationship
Queries that fetch body | diary table | Covering Index | 1:1 relationship | Up (normal -> covering)
2%                      | 323.61      | 14275.93       | 15166.22         | 44.11x
5%                      | 333.78      | 6174.59        | 6567.11          | 18.50x
10%                     | 352.27      | 3198.16        | 3215.27          | 9.08x
20%                     | 395.81      | 1557.52        | 1370.24          | 3.94x
33%                     | 474.84      | 852.98         | 782.51           | 1.80x
50%                     | 632.87      | 550.08         | 539.86           | 0.87x

Workload: SELECT user_id, post_date, title FROM diary FORCE INDEX(diary_covering) WHERE diary_id=?
2, 5, … 50% of queries: SELECT body FROM diary_body WHERE diary_id=?
What about insertion time and table size?
• No big difference in insertion time
• SQL statement parsing overhead is relatively small on disk-I/O-bound workloads
• The “diary_covering” index is sorted by diary_id (AUTO_INCREMENT PK), so index entries were inserted sequentially
• The secondary index size was about 1.0GB bigger; it depends on the covered columns
                               | Normal diary table | 1:1 relationship | Covering index version
Time to insert 20 million rows | 3 hours 4 min      | 3 hours 17 min   | 3 hours 9 min
Table size                     | 24GB               | 1.4GB + 22GB     | 24GB
Secondary index size           | 1.2GB              | 1.2GB + 0        | 2.2GB

- In general, adding indexes slows down insertion
- The 1:1 relationship requires executing INSERT twice
Note: Sequential order INSERT is fast
INSERT INTO diary (diary_id, user_id, …) VALUES (NULL, 3, …)

[Because diary_id is an AUTO_INCREMENT PK, all “diary_covering” entries are appended to the right-most leaf block, which stays cached in memory; when leaf block 1 fills up (e.g. entries 1-60), inserts continue into a fresh leaf block 2.]

・No fragmentation
・Small number of blocks, small size
・Highly recommended for the InnoDB PRIMARY KEY
Sorting and Indexing
- Index entries are sorted. When an indexed column is used with ORDER BY, filesort (sorting all matched records) can be skipped
SELECT * FROM tbl WHERE key1 < 30 ORDER BY key1

[Index on key1: leaf entries (key1, PK) such as (1, 10000), (2, 5), (3, 15321), …, (60, 431) are already in key1 order, so the matching rows are returned sorted without a filesort; each PK is looked up in the table records.]
Sorting and Indexing (2)
- If the column(s) in ORDER BY are not indexed, sorting all matched entries is required
- EXPLAIN shows Extra: Using filesort
- Calculation time is O(N log N)
SELECT * FROM tbl WHERE key1 < 30 ORDER BY col2

[The key1 index filters the rows, the table records are fetched by PK, and then all matched rows are sorted by col2 (filesort).]
Sorting and Indexing (3)
- You have two indexes, key1 and key2
- Either key1 or key2 is used; both indexes can not be used at the same time
- If key2 is used, filesort does not happen, but key1 is not used to filter records, so a full index scan happens
- The MySQL optimizer chooses key1 or key2 (cost based)
SELECT * FROM tbl WHERE key1 < 30 ORDER BY key2

[Either the key1 index filters the rows and the result is sorted by key2 (filesort), or the key2 index is scanned in order (no filesort) while key1 < 30 is checked row by row.]
Note: Index merge
- key1 and key2 are two different indexes
- One index access for key1, one for key2, merging the matching entries, then one access to the data per matched row
- The more records match, the more overhead is added
- Index Merge can be used to filter records, but can not be used to skip sorting
SELECT * FROM tbl WHERE key1 = 2 AND key2 = 3

[key1’s leaf block holds (key1, RowID) pairs and key2’s leaf block holds (key2, RowID) pairs; the RowID lists for key1=2 and key2=3 are intersected (e.g. RowID 999 appears in both), and the surviving RowIDs are fetched from the table records, such as 999: col2=‘a’, col3=7.]
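When a query always filters on both columns, an Index Merge intersection can usually be replaced by one composite index; a sketch (the index name is illustrative):

```sql
ALTER TABLE tbl ADD INDEX key1_key2 (key1, key2);
```

A single index access then filters on key1 = 2 AND key2 = 3 directly, and unlike Index Merge, a composite index can also be used to skip sorting (e.g. WHERE key1 = ? ORDER BY key2).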
ORDER BY LIMIT N
What MySQL query execution plans can be considered?
SELECT * FROM tbl WHERE cond ORDER BY keyX LIMIT 20
Plan A: Using cond as an index
Using the cond index, sorting the matched records, returning the top 20

mysql> EXPLAIN SELECT * FROM tbl WHERE cond < 10
    -> ORDER BY keyX LIMIT 20\G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: tbl
         type: range
possible_keys: cond
          key: cond
      key_len: 5
          ref: NULL
         rows: 10
        Extra: Using where; Using filesort
1 row in set (0.00 sec)

If cond is very complex, this plan might not be possible
Plan B: Using keyX as an index
Using the keyX index, skipping sorting, checking cond one by one, stopping the scan when 20 records match the criteria

mysql> EXPLAIN SELECT * FROM tbl WHERE cond < 10
    -> ORDER BY keyX LIMIT 20\G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: tbl
         type: index
possible_keys: NULL
          key: keyX
      key_len: 5
          ref: NULL
         rows: 20
        Extra: Using where
1 row in set (0.00 sec)
Plan C: Full table scan
Scanning the whole table, filtering by cond, sorting, then returning the top 20 records

mysql> EXPLAIN SELECT * FROM tbl WHERE cond < 10
    -> ORDER BY keyX LIMIT 20\G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: tbl
         type: ALL
possible_keys: NULL
          key: NULL
      key_len: NULL
          ref: NULL
         rows: 4012
        Extra: Using where; Using filesort
1 row in set (0.00 sec)
Which plan is the fastest?
A: Using cond as an index, sorting matched records, returning top 20 (type=range, key=cond, Using filesort)
B: Using keyX as an index, skipping sorting, checking cond one by one, stopping scanning when 20 records match criteria (type=index, key=keyX)
C: Scanning whole table, filtering by cond, sorting, then returning top 20 records (type=ALL, key=NULL, Using filesort)
SELECT * FROM tbl WHERE cond ORDER BY keyX LIMIT 20
The fastest execution plan depends on the data

SELECT * FROM tbl WHERE cond < 10 ORDER BY keyX LIMIT 20

[Plan A scans the cond index leaf (cond, RowID) entries, sorts the matches by keyX, and returns the top 20. Plan B scans the keyX index leaf (keyX, RowID) entries in order and stops when 20 rows meet cond < 10.]

- When a small number of records match cond < 10, plan A should be the fastest; otherwise massive random reads happen
- When many records match cond < 10, plan B should be the fastest; otherwise massive random reads happen
- When both A and B are slow, C (full table scan) should be the fastest
- The database optimizer can not decide the fastest plan without reading records
- Sometimes the optimizer chooses a slower plan
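When the optimizer picks a slower plan, index hints can steer it; a sketch using the index names from this example:

```sql
-- Prefer plan A: filter through the cond index
SELECT * FROM tbl FORCE INDEX (cond) WHERE cond < 10 ORDER BY keyX LIMIT 20;

-- Rule out plan B: forbid keyX, leaving plan A or C to the optimizer
SELECT * FROM tbl IGNORE INDEX (keyX) WHERE cond < 10 ORDER BY keyX LIMIT 20;
```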
Example case: DBT-1 (similar to TPC-W)

SELECT i_id, i_title, a_fname, a_lname FROM item, author WHERE item.i_title LIKE '%AAA%' AND item.i_a_id = author.a_id ORDER BY item.i_title ASC LIMIT 50;
*************************** 1. row ***************************
  select_type: SIMPLE
        table: item
         type: index
possible_keys: i_i_a_id
          key: i_i_title
      key_len: 63
          ref: NULL
         rows: 10005
        Extra: Using where
*************************** 2. row ***************************
  select_type: SIMPLE
        table: author
         type: eq_ref
possible_keys: PRIMARY
          key: PRIMARY
      key_len: 5
          ref: test.item.i_a_id
         rows: 1
        Extra:

- The item table has 10000 records; the author table has 2500 records
- Index i_title on item; primary key a_id on author
- Join from item to author
- The index on i_title can not be used for filtering
- Plan B was chosen in this case
- But almost no records matched the WHERE condition
type=index without a covering index is not efficient

SELECT i_id, i_title, a_fname, a_lname FROM item, author WHERE i_title LIKE '%aaa%' AND i_a_id = a_id ORDER BY i_title ASC LIMIT 50;

[The i_title index leaf holds (i_title, i_id(PK)) entries; for every entry, the table record (PK, i_a_id, …) must be fetched.]

- type=index means a full index scan
- Records are read one by one (random access!)
- WHERE conditions are checked, then the author table is joined
- Scanning stops when 50 records meet the criteria
- In this case, only 5 records met the criteria, so all index entries were scanned, with a random access for each
- A full table scan is the better plan here (ideally, full-text search is the best)
Query execution plan, TPS, and CPU scalability

[Figure: DBT-1 throughput (BT/s) vs number of connections (4-28), for 8 cores and 4 cores, each with the default “bad index” plan (type=index) and with type=ALL (IGNORE INDEX).]

- The full index scan and massive random reads caused serious global mutex contention inside InnoDB, which degraded CPU scalability
Conclusion
• The query execution plan is very important
  – Always check the EXPLAIN plan
• In some cases, control the query execution plan yourself
  – e.g. a covering index lookup on the primary key
• Indexes can also be used to reduce record-lock contention
Enjoy the conference!
• The slides will be published at Slideshare very soon
• My talk on Thursday: SSD Deployment Strategies for MySQL
  – April 15th (Thu), 14:00-14:45, Ballroom E