Think Exa!
![Page 1: Think Exa!](https://reader034.fdocuments.net/reader034/viewer/2022052203/5580aa79d8b42aa0448b51a9/html5/thumbnails/1.jpg)
Think Exa!
Learning what you need to learn about Exadata
Forgetting some of what we thought important
![Page 2: Think Exa!](https://reader034.fdocuments.net/reader034/viewer/2022052203/5580aa79d8b42aa0448b51a9/html5/thumbnails/2.jpg)
![Page 3: Think Exa!](https://reader034.fdocuments.net/reader034/viewer/2022052203/5580aa79d8b42aa0448b51a9/html5/thumbnails/3.jpg)
Who We Are
• Oracle-centric Consulting Partner focusing on the Oracle Technology Stack
• Exadata Specialized Partner status (one of a handful globally)
• 200+ successful Exadata implementations
• Dedicated, in-house Exadata lab (POV, Patch Validation)
• Exadata specific: capacity planning, patching, POC, troubleshooting
• Presence in the US, UK, DE and NL.
• That means we are open to a challenge in NL too!
![Page 4: Think Exa!](https://reader034.fdocuments.net/reader034/viewer/2022052203/5580aa79d8b42aa0448b51a9/html5/thumbnails/4.jpg)
www.facebook.com/enkitec
@enkitec
www.enkitec.com
![Page 7: Think Exa!](https://reader034.fdocuments.net/reader034/viewer/2022052203/5580aa79d8b42aa0448b51a9/html5/thumbnails/7.jpg)
Where did you say you come from?
![Page 8: Think Exa!](https://reader034.fdocuments.net/reader034/viewer/2022052203/5580aa79d8b42aa0448b51a9/html5/thumbnails/8.jpg)
![Page 9: Think Exa!](https://reader034.fdocuments.net/reader034/viewer/2022052203/5580aa79d8b42aa0448b51a9/html5/thumbnails/9.jpg)
Introduction
Why Exadata works
![Page 10: Think Exa!](https://reader034.fdocuments.net/reader034/viewer/2022052203/5580aa79d8b42aa0448b51a9/html5/thumbnails/10.jpg)
Plenty of reasons to migrate
• End of life on hardware
• Entire platform decommissioned
• Consolidation on a single hardware platform
• No more support from engineering
• Save on licenses
• ...
![Page 11: Think Exa!](https://reader034.fdocuments.net/reader034/viewer/2022052203/5580aa79d8b42aa0448b51a9/html5/thumbnails/11.jpg)
Why and where Exadata can work
• Shared infrastructure
– Sharing your storage with everyone is not efficient
– Sketchy I/O performance
• Old hardware
– End of life for your system
• Consolidation
– You are consolidating databases
![Page 12: Think Exa!](https://reader034.fdocuments.net/reader034/viewer/2022052203/5580aa79d8b42aa0448b51a9/html5/thumbnails/12.jpg)
Where you might come from
All logos/trademarks belong to their rightful owners
![Page 13: Think Exa!](https://reader034.fdocuments.net/reader034/viewer/2022052203/5580aa79d8b42aa0448b51a9/html5/thumbnails/13.jpg)
Migration strategies (1)
Lift and Shift
• Take existing application
• Move to Exadata
– Minimum adjustments
– Just Enough Optimisations (JeOS)
• Regression test
• Go live
Exadata Optimised
• Take existing application
• Analyse workload
– Review workload characteristics
– Memory, CPU, I/O patterns, user activity
– Classify into BAU and peak
• Consolidate
– 11.2 consolidation
– 12.1 consolidation
• Review, Assess, Rinse, Repeat
![Page 14: Think Exa!](https://reader034.fdocuments.net/reader034/viewer/2022052203/5580aa79d8b42aa0448b51a9/html5/thumbnails/14.jpg)
Migration strategies (2)
• Lift and Shift is not bad
– You need to get started!
– Don't over-engineer the solution
– First results quickly
• But
– Don't stop there
– Analyse workload
– Optimise for Exadata: Think Exa!
![Page 15: Think Exa!](https://reader034.fdocuments.net/reader034/viewer/2022052203/5580aa79d8b42aa0448b51a9/html5/thumbnails/15.jpg)
What you would miss
• If you don't invest in understanding Exadata
– … you don't learn about Smart I/O and
– more specifically Smart Scans
– You miss out on the use of Hybrid Columnar Compression
– … and how to use it most efficiently
– … you don't get to use I/O Resource Manager
• And we forgot to mention all the other useful features!
![Page 16: Think Exa!](https://reader034.fdocuments.net/reader034/viewer/2022052203/5580aa79d8b42aa0448b51a9/html5/thumbnails/16.jpg)
![Page 17: Think Exa!](https://reader034.fdocuments.net/reader034/viewer/2022052203/5580aa79d8b42aa0448b51a9/html5/thumbnails/17.jpg)
Don’t stop here!
You are almost there!
![Page 18: Think Exa!](https://reader034.fdocuments.net/reader034/viewer/2022052203/5580aa79d8b42aa0448b51a9/html5/thumbnails/18.jpg)
Take the long road…and walk it
![Page 19: Think Exa!](https://reader034.fdocuments.net/reader034/viewer/2022052203/5580aa79d8b42aa0448b51a9/html5/thumbnails/19.jpg)
Take the long road…and walk it
[Diagram: Hardware decommissioning → Migrate database to Exadata → Done]
![Page 20: Think Exa!](https://reader034.fdocuments.net/reader034/viewer/2022052203/5580aa79d8b42aa0448b51a9/html5/thumbnails/20.jpg)
Take the long road…and walk it
[Diagram: Hardware decommissioning → Migrate database to Exadata → Simplify, optimise, …]
![Page 21: Think Exa!](https://reader034.fdocuments.net/reader034/viewer/2022052203/5580aa79d8b42aa0448b51a9/html5/thumbnails/21.jpg)
Common scenario
• Highly visible application moving to Exadata
– Lots of TB of old, cold, historic data
– Mixed workload: OLTP and reporting
– Database started as 7.x on Solaris 2.4
– Thousands of data files due to UFS limitations
• No one dares to touch it
• Killed with hardware in the past
– Run out of more powerful hardware to kill the problem with
![Page 22: Think Exa!](https://reader034.fdocuments.net/reader034/viewer/2022052203/5580aa79d8b42aa0448b51a9/html5/thumbnails/22.jpg)
How to migrate?
• Endianness conversion needed
– Source platform is big endian
– Exadata is Linux = little endian
– This takes time
• “The Best Way” to migrate depends on your environment
– Many use a combination of TTS and replication (a sketch follows below)
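As an illustration of the endianness step, here is a minimal sketch of converting a transported datafile on the Exadata side with RMAN. The platform name, staging path and disk group are illustrative, and the Data Pump export/import of the transportable-tablespace metadata is not shown.

RMAN> CONVERT DATAFILE '/nfs_stage/users01.dbf'
        FROM PLATFORM 'Solaris[tm] OE (64-bit)'
        FORMAT '+DATA';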
![Page 23: Think Exa!](https://reader034.fdocuments.net/reader034/viewer/2022052203/5580aa79d8b42aa0448b51a9/html5/thumbnails/23.jpg)
One way to migrate
[Diagram: old live system, NFS export, logical replication]
1. Convert datafiles
2. Apply transactions
![Page 24: Think Exa!](https://reader034.fdocuments.net/reader034/viewer/2022052203/5580aa79d8b42aa0448b51a9/html5/thumbnails/24.jpg)
Think Exa!
• You still have thousands of data files
– All of which are 2 GB in size
– Think about backup time
• You are not using Exadata features yet
– Simplify
– Optimise
• Consider using bigfile tablespaces (a sketch follows below)
• Time to convert to locally managed tablespaces :)
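A minimal sketch of that cleanup step, with illustrative tablespace, disk group and object names: one bigfile, locally managed, ASSM tablespace replaces thousands of small datafiles, and segments are moved into it.

-- one bigfile, locally managed, ASSM tablespace instead of thousands of 2 GB files
CREATE BIGFILE TABLESPACE app_data
  DATAFILE '+DATA' SIZE 100G AUTOEXTEND ON NEXT 10G
  EXTENT MANAGEMENT LOCAL
  SEGMENT SPACE MANAGEMENT AUTO;

-- move a segment out of the old smallfile tablespaces
ALTER TABLE app_owner.orders MOVE TABLESPACE app_data;
-- a move leaves indexes UNUSABLE, so rebuild them afterwards
ALTER INDEX app_owner.orders_pk REBUILD TABLESPACE app_data;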
![Page 25: Think Exa!](https://reader034.fdocuments.net/reader034/viewer/2022052203/5580aa79d8b42aa0448b51a9/html5/thumbnails/25.jpg)
![Page 26: Think Exa!](https://reader034.fdocuments.net/reader034/viewer/2022052203/5580aa79d8b42aa0448b51a9/html5/thumbnails/26.jpg)
Hybrid Columnar Compression
![Page 27: Think Exa!](https://reader034.fdocuments.net/reader034/viewer/2022052203/5580aa79d8b42aa0448b51a9/html5/thumbnails/27.jpg)
Concepts Guide, chapter 2 (Tables and Table Clusters), on Hybrid Columnar Compression: "With Hybrid Columnar Compression, the database stores the same column for a group of rows together. The data block does not store data in row-major format, but uses a combination of both row and columnar methods. Storing column data together, with the same data type and similar characteristics, dramatically increases the storage savings achieved from compression. The database compresses data manipulated by any SQL operation, although compression levels are higher for direct path loads. Database operations work transparently against compressed objects, so no application changes are required."
![Page 28: Think Exa!](https://reader034.fdocuments.net/reader034/viewer/2022052203/5580aa79d8b42aa0448b51a9/html5/thumbnails/28.jpg)
Oracle compression
This means HCC is radically different from the other compression methods available in Oracle:
• Table compression / OLTP compression
– Values are stored in a symbol table per block; rows use pointers into the symbol table.
• Index compression
– One or more columns are stored in a symbol table per block; "rows" use pointers into the symbol table.
• These compression types are essentially deduplication (a syntax sketch follows below).
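To make the distinction concrete, a small syntax sketch (table names are illustrative; only the HCC variants require Exadata or other Oracle storage):

-- block-level compression: a symbol table per block, essentially deduplication
ALTER TABLE orders COMPRESS FOR OLTP;

-- Hybrid Columnar Compression: column-organised compression units
ALTER TABLE orders_history COMPRESS FOR QUERY HIGH;
ALTER TABLE orders_archive COMPRESS FOR ARCHIVE LOW;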
![Page 29: Think Exa!](https://reader034.fdocuments.net/reader034/viewer/2022052203/5580aa79d8b42aa0448b51a9/html5/thumbnails/29.jpg)
HCC: tests
• Consider the following base table:

TS@//enkx3db02/frits > desc hcc_base
Name          Null  Type
------------  ----  --------------
ID                  NUMBER
CLUSTERED           NUMBER
SCATTERED           NUMBER
RANDOMIZED          NUMBER
SHORT_STRING        VARCHAR2(4000)  -- 30 random characters
LONG_STRING1        VARCHAR2(4000)  -- 130 random characters
LONG_STRING2        VARCHAR2(4000)  -- 130 random characters
LONG_NUMBER         NUMBER          -- random number 1000000000 - 9999999999
RANDOM_DATE         DATE

TS@//enkx3db02/frits > select count(*) from hcc_base;

  COUNT(*)
----------
   3000000

TS@//enkx3db02/frits > select bytes/1024/1024/1024 "GB" from user_segments where segment_name = 'HCC_BASE';

        GB
----------
    1.1875
![Page 30: Think Exa!](https://reader034.fdocuments.net/reader034/viewer/2022052203/5580aa79d8b42aa0448b51a9/html5/thumbnails/30.jpg)
HCC: tests
• Let’s introduce HCC compression to the table
• The table I just created is a normal heap table
• Dictionary attributes
– COMPRESSION
– COMPRESS_FOR

TS@//enkx3db02/frits > select table_name, compress_for
  2  from user_tables where table_name = 'HCC_BASE';

TABLE_NAME                     COMPRESS_FOR
------------------------------ ------------
HCC_BASE
![Page 31: Think Exa!](https://reader034.fdocuments.net/reader034/viewer/2022052203/5580aa79d8b42aa0448b51a9/html5/thumbnails/31.jpg)
Let’s do some tests
Let's make this table HCC. We start from the normal heap table we just created:
TS@//enkx3db02/frits > select table_name, compress_for from user_tables where table_name = 'HCC_BASE';

TABLE_NAME                     COMPRESS_FOR
------------------------------ ------------
HCC_BASE
![Page 32: Think Exa!](https://reader034.fdocuments.net/reader034/viewer/2022052203/5580aa79d8b42aa0448b51a9/html5/thumbnails/32.jpg)
HCC: tests
• Add HCC compression now:

TS@//enkx3db02/frits > alter table hcc_base compress for query high;

• Check the data dictionary:

TS@//enkx3db02/frits > select table_name, compress_for from user_tables where table_name = 'HCC_BASE';

TABLE_NAME                     COMPRESS_FOR
------------------------------ ------------
HCC_BASE                       QUERY HIGH
![Page 33: Think Exa!](https://reader034.fdocuments.net/reader034/viewer/2022052203/5580aa79d8b42aa0448b51a9/html5/thumbnails/33.jpg)
HCC: tests
• But is our table HCC compressed?
• Look at the size:

TS@//enkx3db02/frits > select bytes/1024/1024/1024 "GB" from user_segments where segment_name = 'HCC_BASE';

        GB
----------
    1.1875

(that's still the same)
![Page 34: Think Exa!](https://reader034.fdocuments.net/reader034/viewer/2022052203/5580aa79d8b42aa0448b51a9/html5/thumbnails/34.jpg)
HCC: tests
The data dictionary ({user|all|dba}_tables.compress_for) shows the configured state, not necessarily the actual state!
Use DBMS_COMPRESSION.GET_COMPRESSION_TYPE() to find the actual compression state.
The GET_COMPRESSION_TYPE function determines this per row (by rowid).
![Page 35: Think Exa!](https://reader034.fdocuments.net/reader034/viewer/2022052203/5580aa79d8b42aa0448b51a9/html5/thumbnails/35.jpg)
HCC: tests
• DBMS_COMPRESSION.GET_COMPRESSION_TYPE()
TS@//enkx3db02/frits > select decode(
  DBMS_COMPRESSION.GET_COMPRESSION_TYPE(user, 'HCC_BASE', rowid),
    1, 'No Compression',
    2, 'Basic/OLTP Compression',
    4, 'HCC Query High',
    8, 'HCC Query Low',
   16, 'HCC Archive High',
   32, 'HCC Archive Low',
   64, 'Compressed row',
       'Unknown Compression Level') compression_type
from hcc_base where rownum < 2;

COMPRESSION_TYPE
-------------------------
No Compression
![Page 36: Think Exa!](https://reader034.fdocuments.net/reader034/viewer/2022052203/5580aa79d8b42aa0448b51a9/html5/thumbnails/36.jpg)
HCC: tests
Actually, if an HCC mode is set on a table, a direct path insert method (kcbl* code) is needed in order to make the rows HCC compressed.
This is not entirely uncommon; basic compression works the same way.
![Page 37: Think Exa!](https://reader034.fdocuments.net/reader034/viewer/2022052203/5580aa79d8b42aa0448b51a9/html5/thumbnails/37.jpg)
HCC: tests
Direct path insert methods include:
- Insert /*+ append */
- Create table as select
- Parallel DML
- SQL*Loader direct path loads
- Alter table move
- Online table redefinition
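For example (the staging table name is illustrative), assuming the HCC attribute is already set on hcc_base:

-- conventional insert: new rows are NOT stored in HCC format
INSERT INTO hcc_base SELECT * FROM hcc_staging;

-- direct path insert: new rows are compressed into HCC compression units
INSERT /*+ APPEND */ INTO hcc_base SELECT * FROM hcc_staging;
COMMIT;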
![Page 38: Think Exa!](https://reader034.fdocuments.net/reader034/viewer/2022052203/5580aa79d8b42aa0448b51a9/html5/thumbnails/38.jpg)
HCC: tests
Now that we have an HCC mode set on this table, we can use 'alter table move' to make it truly HCC compressed:

TS@//enkx3db02/frits > alter table hcc_base move;

Let's look at the size again:

TS@//enkx3db02/frits > select bytes/1024/1024/1024 "GB" from user_segments where segment_name = 'HCC_BASE';

        GB
----------
  0.640625   -- was 1.1875
![Page 39: Think Exa!](https://reader034.fdocuments.net/reader034/viewer/2022052203/5580aa79d8b42aa0448b51a9/html5/thumbnails/39.jpg)
HCC: tests
Actually, this can be done in one go:

TS@//enkx3db02/frits > alter table hcc_base move compress for query high;
Now let’s look with DBMS_COMPRESSION.GET_COMPRESSION_TYPE again:
![Page 40: Think Exa!](https://reader034.fdocuments.net/reader034/viewer/2022052203/5580aa79d8b42aa0448b51a9/html5/thumbnails/40.jpg)
HCC: tests

TS@//enkx3db02/frits > select decode(
  DBMS_COMPRESSION.GET_COMPRESSION_TYPE(user, 'HCC_BASE', rowid),
    1, 'No Compression',
    2, 'Basic/OLTP Compression',
    4, 'HCC Query High',
    8, 'HCC Query Low',
   16, 'HCC Archive High',
   32, 'HCC Archive Low',
   64, 'Compressed row',
       'Unknown Compression Level') compression_type
from hcc_base where rownum < 2;

COMPRESSION_TYPE
-------------------------
HCC Query High
![Page 41: Think Exa!](https://reader034.fdocuments.net/reader034/viewer/2022052203/5580aa79d8b42aa0448b51a9/html5/thumbnails/41.jpg)
HCC: tests
What compression do I achieve on my set?
• Non compressed size: 1.19 GB
• Compress for query low: 0.95 GB
• Compress for query high: 0.64 GB
• Compress for archive low: 0.64 GB
• Compress for archive high: 0.62 GB
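If you want an estimate like this before rewriting a segment, DBMS_COMPRESSION.GET_COMPRESSION_RATIO can sample the table. A minimal sketch follows; the scratch tablespace name is illustrative and the parameter order and constant names are the 11.2 ones, so check the packaged documentation for your version.

SET SERVEROUTPUT ON
DECLARE
  l_blkcnt_cmp    PLS_INTEGER;
  l_blkcnt_uncmp  PLS_INTEGER;
  l_row_cmp       PLS_INTEGER;
  l_row_uncmp     PLS_INTEGER;
  l_cmp_ratio     NUMBER;
  l_comptype_str  VARCHAR2(100);
BEGIN
  -- sample HCC_BASE and estimate the QUERY HIGH ratio using a scratch tablespace
  DBMS_COMPRESSION.GET_COMPRESSION_RATIO(
    'SCRATCH_TBS', user, 'HCC_BASE', NULL,
    DBMS_COMPRESSION.COMP_FOR_QUERY_HIGH,
    l_blkcnt_cmp, l_blkcnt_uncmp, l_row_cmp, l_row_uncmp,
    l_cmp_ratio, l_comptype_str);
  DBMS_OUTPUT.PUT_LINE(l_comptype_str || ' estimated ratio: ' || l_cmp_ratio);
END;
/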
![Page 42: Think Exa!](https://reader034.fdocuments.net/reader034/viewer/2022052203/5580aa79d8b42aa0448b51a9/html5/thumbnails/42.jpg)
HCC: tests
Now let's update our HCC compressed table:

TS@//enkx3db02/frits > update hcc_base set id = id+1000000;
TS@//enkx3db02/frits > commit;

Now look at the size of the table, which was previously 0.64 GB:

TS@//enkx3db02/frits > select segment_name, bytes/1024/1024/1024 "GB" from user_segments where segment_name = 'HCC_BASE';

SEGMENT_NAME                                                          GB
------------------------------------------------------------ ----------
HCC_BASE                                                          1.6875   -- noncompressed: 1.1875
![Page 43: Think Exa!](https://reader034.fdocuments.net/reader034/viewer/2022052203/5580aa79d8b42aa0448b51a9/html5/thumbnails/43.jpg)
Let’s do some tests
Now look at the size of the previously 0.64 GB table:

TS@//enkx3db02/frits > select segment_name, bytes/1024/1024/1024 "GB" from user_segments where segment_name = 'HCC_BASE';

SEGMENT_NAME                                                          GB
------------------------------------------------------------ ----------
HCC_BASE                                                          1.6875
![Page 44: Think Exa!](https://reader034.fdocuments.net/reader034/viewer/2022052203/5580aa79d8b42aa0448b51a9/html5/thumbnails/44.jpg)
HCC: tests
Let's take a look at the compression type again:

TS@//enkx3db02/frits > select decode(
  DBMS_COMPRESSION.GET_COMPRESSION_TYPE(user, 'HCC_BASE', rowid),
    1, 'No Compression',
    2, 'Basic/OLTP Compression',
    4, 'HCC Query High',
    8, 'HCC Query Low',
   16, 'HCC Archive High',
   32, 'HCC Archive Low',
   64, 'Compressed row',
       'Unknown Compression Level') compression_type
from hcc_base where rownum < 2;

COMPRESSION_TYPE
-------------------------
Compressed row
![Page 45: Think Exa!](https://reader034.fdocuments.net/reader034/viewer/2022052203/5580aa79d8b42aa0448b51a9/html5/thumbnails/45.jpg)
HCC: tests
In versions up to 11.2.0.2*:
• A row change in an HCC compressed segment would result in:
– An extra OLTP compressed block being allocated.
– The modified row being stored in the OLTP compressed block.
– The row pointer in the HCC CU header being changed to point to the row in the OLTP compressed block.

This had a big performance implication: for every changed row an extra I/O via 'cell single block physical read' was needed, and the 'table fetch continued row' statistic increased!
![Page 46: Think Exa!](https://reader034.fdocuments.net/reader034/viewer/2022052203/5580aa79d8b42aa0448b51a9/html5/thumbnails/46.jpg)
HCC: tests
For versions 11.2.0.3+:
• A changed row is compressed as type 64: 'Compressed row'.
• The changed HCC segment increases in size.
• No 'cell single block physical read' waits, and no accompanying increase in the 'table fetch continued row' statistic.
• A whole-table scan is still done as a smart scan (!)

This makes updates a lot less intrusive. Still, the increase in size means you should avoid updates to HCC compressed segments! (A quick check follows below.)
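One way to see whether updates have hurt you is to compare this session statistic before and after a test query against the updated segment; a minimal sketch, assuming you have access to the v$ views:

SELECT sn.name, ms.value
FROM   v$mystat ms
JOIN   v$statname sn ON sn.statistic# = ms.statistic#
WHERE  sn.name = 'table fetch continued row';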
![Page 47: Think Exa!](https://reader034.fdocuments.net/reader034/viewer/2022052203/5580aa79d8b42aa0448b51a9/html5/thumbnails/47.jpg)
HCC: compression / decompression
• HCC compression is always done on the compute layer.
• With smart scans, the cells decompress the needed rows and columns as part of the smart scan.
• A cell can decide not to smart scan and revert to block mode.
• With non-smart scans (block mode), the compute layer reads and decompresses the blocks.
![Page 48: Think Exa!](https://reader034.fdocuments.net/reader034/viewer/2022052203/5580aa79d8b42aa0448b51a9/html5/thumbnails/48.jpg)
HCC: Conclusion
Use HCC with care.
• Use HCC in combination with partitioning.
• HCC means trading space for CPU cycles.
• Make (absolutely) sure the data is 'cold'.
• Only for TABLES
– Indexes could end up being larger than the table.
• Work out an HCC strategy.
• If data changes, consider another alter table move (a sketch follows below).
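For partitioned segments, that rewrite can be limited to the partitions that actually saw DML. A sketch with illustrative names:

-- rewrite only the partition that was modified, back into HCC format
ALTER TABLE orders_history MOVE PARTITION p_2013_q1 COMPRESS FOR QUERY HIGH;
-- local index partitions are left UNUSABLE by the move, so rebuild them
ALTER INDEX orders_history_ix REBUILD PARTITION p_2013_q1;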
![Page 49: Think Exa!](https://reader034.fdocuments.net/reader034/viewer/2022052203/5580aa79d8b42aa0448b51a9/html5/thumbnails/49.jpg)
![Page 50: Think Exa!](https://reader034.fdocuments.net/reader034/viewer/2022052203/5580aa79d8b42aa0448b51a9/html5/thumbnails/50.jpg)
Some unlearning is in order
Taking a different approach on Exadata
![Page 51: Think Exa!](https://reader034.fdocuments.net/reader034/viewer/2022052203/5580aa79d8b42aa0448b51a9/html5/thumbnails/51.jpg)
Exadata processing
• Storage tier is database-aware
– Filtering can be done at the storage tier
• Faster storage connection
– InfiniBand runs at 40 Gbit/s
• Storage can just send (partial) row data to the database tier
– Not shipping entire blocks
• Storage has more horsepower
– 1 CPU core per spinning disk
• Lots of flash!
– The X4 has 3.2 TB of flash per storage server
![Page 52: Think Exa!](https://reader034.fdocuments.net/reader034/viewer/2022052203/5580aa79d8b42aa0448b51a9/html5/thumbnails/52.jpg)
The buffer cache size
• Size does matter
– Warehouse workloads benefit from a small buffer cache
– You need direct path reads for smart scans (see the check below)
• Small Table Threshold
• Size of the segment
– Shrinking the SGA is OK for warehouse workloads
• Mixed workload in the same database is a different story
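To verify whether a statement actually used direct path reads and was offloaded, these session statistics are a quick check (a sketch, assuming access to the v$ views):

SELECT sn.name, ms.value
FROM   v$mystat ms
JOIN   v$statname sn ON sn.statistic# = ms.statistic#
WHERE  sn.name IN ('physical reads direct',
                   'cell physical IO bytes eligible for predicate offload',
                   'cell physical IO interconnect bytes returned by smart scan');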
![Page 53: Think Exa!](https://reader034.fdocuments.net/reader034/viewer/2022052203/5580aa79d8b42aa0448b51a9/html5/thumbnails/53.jpg)
The question about partitioning
• On non-Exadata platforms you tried to
– Eliminate as much I/O as possible
• Star schema
• Star transformation
• Bitmap indexes
• Subpartitions
• On Exadata
– Not such a good idea
– See why
![Page 54: Think Exa!](https://reader034.fdocuments.net/reader034/viewer/2022052203/5580aa79d8b42aa0448b51a9/html5/thumbnails/54.jpg)
The partitioning issue (1)
• Somewhat extreme test case
![Page 55: Think Exa!](https://reader034.fdocuments.net/reader034/viewer/2022052203/5580aa79d8b42aa0448b51a9/html5/thumbnails/55.jpg)
The partitioning issue (2)

MARTIN@DB12C1:1> select partition_name, subpartition_name, blocks, num_rows
  2  from user_tab_subpartitions
  3  where table_name = 'T1_SUBPART' and rownum < 11;

PARTITION_NAME                 SUBPARTITION_NAME                  BLOCKS   NUM_ROWS
------------------------------ ------------------------------ ---------- ----------
SYS_P8116                      SYS_SUBP8112                           23        250
SYS_P8116                      SYS_SUBP8113                           23        250
SYS_P8116                      SYS_SUBP8114                           23        250
SYS_P8116                      SYS_SUBP8115                            0          0
SYS_P8122                      SYS_SUBP8117                           23        250
SYS_P8158                      SYS_SUBP8154                           23        250
SYS_P8158                      SYS_SUBP8155                           23        250
SYS_P8158                      SYS_SUBP8156                           23        250
SYS_P8158                      SYS_SUBP8157                            0          0
SYS_P8182                      SYS_SUBP8181                            0          0

MARTIN@DB12C1:1> select count(blocks), blocks
  2  from user_tab_subpartitions
  3  where table_name = 'T1_SUBPART'
  4  group by blocks;

COUNT(BLOCKS)     BLOCKS
------------- ----------
         3960         23
          991          0
            4         67
![Page 56: Think Exa!](https://reader034.fdocuments.net/reader034/viewer/2022052203/5580aa79d8b42aa0448b51a9/html5/thumbnails/56.jpg)
Smart Scan Is Always Better™

SQL ID: 5yc3hmz41jf3q Plan Hash: 2481424394

select /* sdr_always */ count(1)
from t1_subpart

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2      2.82      10.99      19996      30894          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      2.82      11.00      19996      30894          0           1

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  library cache lock                              1        0.00          0.00
  library cache pin                               1        0.00          0.00
  SQL*Net message to client                       2        0.00          0.00
  reliable message                             4954        0.00          2.96
  enq: KO - fast object checkpoint             9902        0.00          1.21
  Disk file operations I/O                        1        0.00          0.00
  cell smart table scan                        7936        0.02          4.44
  latch: ges resource hash list                   3        0.00          0.00
  KJC: Wait for msg sends to complete             2        0.00          0.00
  SQL*Net message from client                     2        0.00          0.00
********************************************************************************

Well, maybe not
![Page 57: Think Exa!](https://reader034.fdocuments.net/reader034/viewer/2022052203/5580aa79d8b42aa0448b51a9/html5/thumbnails/57.jpg)
In this case, surely not

SQL ID: ctp93ksgpr72s Plan Hash: 2481424394

select /* sdr_auto */ count(1)
from t1_subpart

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2      0.11       0.12          0      30894          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      0.11       0.12          0      30894          0           1

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client                       2        0.00          0.00
  SQL*Net message from client                     2        6.55          6.55
********************************************************************************
![Page 58: Think Exa!](https://reader034.fdocuments.net/reader034/viewer/2022052203/5580aa79d8b42aa0448b51a9/html5/thumbnails/58.jpg)
Think Exa!
• Smart Scans are great for data retrieval
– Data processing <> data retrieval
– Data to be retrieved should be large
• Smart Scans don't help retrieve small amounts of data
– Classic OLTP-style workload
– Refrain from setting _serial_direct_read = ALWAYS system-wide (see the note below)
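If a specific reporting session really does benefit from forced direct path reads, scope the setting to that session rather than the whole system; a sketch (hidden underscore parameters should only be changed under guidance of Oracle Support):

-- force direct path (and thus candidate smart scan) reads for this session only
ALTER SESSION SET "_serial_direct_read" = ALWAYS;
-- run the reporting statement(s), then return to the default decision logic
ALTER SESSION SET "_serial_direct_read" = AUTO;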
![Page 59: Think Exa!](https://reader034.fdocuments.net/reader034/viewer/2022052203/5580aa79d8b42aa0448b51a9/html5/thumbnails/59.jpg)
Think Exa!
• Runtime partition pruning used to be essential
– Small and smallest partitions
– Index based access paths
– Very little I/O, good response time, happy user
• Exadata can scoop up lots of data effectively
– Don't stop partitioning your data (-> ILM, performance)
– But review the strategy
![Page 60: Think Exa!](https://reader034.fdocuments.net/reader034/viewer/2022052203/5580aa79d8b42aa0448b51a9/html5/thumbnails/60.jpg)
![Page 61: Think Exa!](https://reader034.fdocuments.net/reader034/viewer/2022052203/5580aa79d8b42aa0448b51a9/html5/thumbnails/61.jpg)
Drop all your Indexes
Myth debunking
![Page 62: Think Exa!](https://reader034.fdocuments.net/reader034/viewer/2022052203/5580aa79d8b42aa0448b51a9/html5/thumbnails/62.jpg)
Drop indexes?
• Should you drop all your indexes when going to the Exadata platform?
• What does an index actually do?
![Page 63: Think Exa!](https://reader034.fdocuments.net/reader034/viewer/2022052203/5580aa79d8b42aa0448b51a9/html5/thumbnails/63.jpg)
Drop indexes?
• There are two essential methods to find a certain row in a table:
– Scan the whole table from beginning to end for row(s) matching your criteria.
– Look up the rows you need in an ordered subset of the data*, then retrieve the rows via their rowids.
– Partition pruning
![Page 64: Think Exa!](https://reader034.fdocuments.net/reader034/viewer/2022052203/5580aa79d8b42aa0448b51a9/html5/thumbnails/64.jpg)
Drop indexes?
• Let’s take the HCC_BASE table from the HCC example. (Uncompressed)
– Table size: 1.19GB, number of blocks: 155648.
– The ID column contains a unique ID/number.
• Just like the PK in a lot of tables.
![Page 65: Think Exa!](https://reader034.fdocuments.net/reader034/viewer/2022052203/5580aa79d8b42aa0448b51a9/html5/thumbnails/65.jpg)
Drop indexes?

TS@//enkx3db02/frits > select * from hcc_base where id = 1;

Row source statistics from sql_trace:

TABLE ACCESS STORAGE FULL HCC_BASE (cr=149978 pr=149971 pw=0 time=358570 us cost=40848 size=10074560 card=1657)

149978 consistent reads, 149971 physical reads, roughly 0.36 seconds elapsed.
![Page 66: Think Exa!](https://reader034.fdocuments.net/reader034/viewer/2022052203/5580aa79d8b42aa0448b51a9/html5/thumbnails/66.jpg)
Drop indexes?
• Let's create an index on hcc_base.id:

TS@//enkx3db02/frits > create index i_hcc_base on hcc_base ( id );
• It results in an object with the following size:
– Index size: 0.05GB, number of blocks: 7168
![Page 67: Think Exa!](https://reader034.fdocuments.net/reader034/viewer/2022052203/5580aa79d8b42aa0448b51a9/html5/thumbnails/67.jpg)
Drop indexes?
Row source statistics from sql_trace:

TABLE ACCESS BY INDEX ROWID HCC_BASE (cr=4 pr=0 pw=0 time=15 us cost=4 size=6080 card=1)
  INDEX RANGE SCAN I_HCC_BASE (cr=3 pr=0 pw=0 time=9 us cost=3 size=0 card=1)

3 blocks read from the index (root, branch, leaf), and 1 block read to get the row belonging to the id!
Total time needed is 0.000015 seconds.
![Page 68: Think Exa!](https://reader034.fdocuments.net/reader034/viewer/2022052203/5580aa79d8b42aa0448b51a9/html5/thumbnails/68.jpg)
Drop indexes: conclusion
• Dropping all indexes on Exadata is a myth.
– Some table constraints require an index (PK, unique).
![Page 69: Think Exa!](https://reader034.fdocuments.net/reader034/viewer/2022052203/5580aa79d8b42aa0448b51a9/html5/thumbnails/69.jpg)
Drop indexes: conclusion
• However…
– Sometimes response time can be improved by removing indexes.
– Almost always these are unselective indexes.
• Exadata has far better full scan capability than the average non-Exadata platform.
– This makes the point where a full scan gives the better response time different on Exadata versus non-Exadata.
![Page 70: Think Exa!](https://reader034.fdocuments.net/reader034/viewer/2022052203/5580aa79d8b42aa0448b51a9/html5/thumbnails/70.jpg)
Drop indexes: conclusion
• The CBO has no Exadata-specific decisions.
– But we just concluded that the dynamics of full scans are different on Exadata.
• Resolution: Exadata-specific system stats (a sketch follows below):
– exec dbms_stats.gather_system_stats('EXADATA');
– Sets the optimizer's internally calculated MBRC value to 128 (instead of 8), which makes full scans "cheaper".
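A minimal sketch of gathering and verifying these stats (availability of the 'EXADATA' option depends on your version and bundle patch level):

EXEC DBMS_STATS.GATHER_SYSTEM_STATS('EXADATA');

-- the optimizer reads MBRC (and IOSEEKTIM/IOTFRSPEED) from here
SELECT pname, pval1
FROM   sys.aux_stats$
WHERE  sname = 'SYSSTATS_MAIN';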
![Page 71: Think Exa!](https://reader034.fdocuments.net/reader034/viewer/2022052203/5580aa79d8b42aa0448b51a9/html5/thumbnails/71.jpg)
![Page 72: Think Exa!](https://reader034.fdocuments.net/reader034/viewer/2022052203/5580aa79d8b42aa0448b51a9/html5/thumbnails/72.jpg)
Simplify
![Page 73: Think Exa!](https://reader034.fdocuments.net/reader034/viewer/2022052203/5580aa79d8b42aa0448b51a9/html5/thumbnails/73.jpg)
Simplify
• Try to make everything as simple as possible.
– Do NOT use privilege separation, unless explicitly needed.
– Do NOT change the compute node filesystem layout.
• Especially with the new computenodeupdate script.
– Use as few Oracle homes as possible.
• Having only one grid home and one database Oracle home is actually common!
– Do not apply the resecure step in onecommand.
• This keeps ssh keys among other things.
![Page 74: Think Exa!](https://reader034.fdocuments.net/reader034/viewer/2022052203/5580aa79d8b42aa0448b51a9/html5/thumbnails/74.jpg)
Simplify
• Run exachk monthly.
– When applying defaults, fewer errors will be detected.
– exachk changes with new insights and new standards implemented in the O/S image.
– This means a new version of exachk can come up with new or different checks.
![Page 75: Think Exa!](https://reader034.fdocuments.net/reader034/viewer/2022052203/5580aa79d8b42aa0448b51a9/html5/thumbnails/75.jpg)
Simplify
• Tablespaces
– Use ASSM tablespaces.
– Make the tablespaces bigfile tablespaces.
• There are exceptions in specific cases, like many sessions using temp.
– Group all data belonging together into a single tablespace.
• Of course there can be exceptions, if there is a good reason.
– Use autoextend; limit tablespace size if there is a need to.
![Page 76: Think Exa!](https://reader034.fdocuments.net/reader034/viewer/2022052203/5580aa79d8b42aa0448b51a9/html5/thumbnails/76.jpg)
Simplify
• Tablespaces (continued)
– Try to reduce the number of tablespaces as much as possible.
– Move the audit table (AUD$) out of the SYSTEM tablespace (a sketch follows below).
– Use an 8 KB blocksize, even for a DWH.
• If you have performance concerns, do a POC to measure the performance impact of 8 KB versus 16 KB (32 KB?) blocksizes.
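Moving the standard audit trail can be done with DBMS_AUDIT_MGMT; a minimal sketch, with an illustrative target tablespace name:

BEGIN
  -- relocate AUD$ (the standard audit trail) out of SYSTEM
  DBMS_AUDIT_MGMT.SET_AUDIT_TRAIL_LOCATION(
    audit_trail_type           => DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD,
    audit_trail_location_value => 'AUDIT_DATA');
END;
/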
![Page 77: Think Exa!](https://reader034.fdocuments.net/reader034/viewer/2022052203/5580aa79d8b42aa0448b51a9/html5/thumbnails/77.jpg)
Thank you!