Amazon Aurora Deep Dive
Debanjan Saha
General Manager, Amazon Aurora
© 2015, Amazon Web Services, Inc. or its Affiliates. All rights reserved.
What is Amazon Aurora?
MySQL-compatible relational database
Performance and availability of commercial databases
Simplicity and cost-effectiveness of open-source databases
Delivered as a managed service

Aurora at a glance
Diagram: a Master and Read Replicas spread across AZ 1, AZ 2, and AZ 3, on top of massively scaled-out storage distributed across 3 AZs, with continuous backup to Amazon S3.
Perfect fit for enterprise workloads

Performance and scale
Up to 500K/sec reads and 100K/sec writes
15 low-latency (10 ms) read replicas
Up to 64 TB database-optimized storage volume

Enterprise-class availability
6-way replication across 3 data centers
Failover in less than 30 seconds
Near-instant crash recovery

Fully managed service
Instant provisioning and deployment
Automated patching and software upgrades
Backup and point-in-time recovery
Compute and storage scaling
Aurora customer adoption
Customers are using Amazon Aurora: it is the fastest-growing service in AWS history.
Expedia: Online travel marketplace
World's leading online travel company, with a portfolio that includes more than 150 travel sites in 70 countries.
Challenge:
Real-time business intelligence and analytics on a growing corpus of online travel marketplace data.
The current SQL Server-based architecture is too expensive, and performance degrades as data volume grows.
Cassandra with a Solr index requires a large memory footprint and hundreds of nodes, adding cost.
Aurora benefits:
Aurora meets the scale and performance requirements at much lower cost.
25,000 inserts/sec with peaks up to 70,000; 30 ms average response time for writes and 17 ms for reads.
Alfresco: Enterprise content management
Leading the convergence of enterprise content management and business process management. More than 1,800 organizations in 195 countries rely on Alfresco, including leaders in financial services, health care, and the public sector.
Challenge:
Scaling Alfresco document repositories to billions of documents.
Supporting user applications that require sub-second response times.
Aurora benefits:
Scaled to 1 billion documents with a throughput of 3 million per hour, 10 times faster than their previous environment.
Moving from large data centers to cost-effective management with AWS and Aurora.
Earth Networks: Internet of Things
More than 10,000 exclusive neighborhood-level sensors deliver precise reports of weather conditions, local forecasts, and critical warnings when, where, and how people need them most.
Challenge:
Earth Networks processes over 25 terabytes of real-time data daily and needs a scalable database that can grow rapidly with its expanding data analysis.
Aurora benefits:
Aurora scales both up and, via read replicas, out, keeping pace with their rapid data growth.
Moving from SQL Server to Aurora was easy and produced huge cost savings.
SQL Benchmark Results
Using MySQL SysBench against an Amazon Aurora R3.8XL instance (32 cores, 244 GB RAM)
Write performance: 4 client machines with 1,000 connections each
Read performance: single client machine with 1,600 connections
Reproducing these results
https://d0.awsstatic.com/product-marketing/Aurora/RDS_Aurora_Performance_Assessment_Benchmarking_v1-2.pdf
Diagram: four R3.8XLARGE EC2 clients driving a single Amazon Aurora R3.8XLARGE instance.
• Create an Amazon VPC (or use an existing one).
• Create four EC2 R3.8XL client instances to run the SysBench client. All four should be in the same AZ.
• Enable enhanced networking on your clients.
• Tune your Linux settings (see the whitepaper).
• Install SysBench version 0.5.
• Launch an r3.8xlarge Amazon Aurora DB instance in the same VPC and AZ as your clients.
• Start your benchmark! (A sketch of a driver script follows this list.)
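As a rough illustration of what each client machine runs, here is a hedged Python wrapper around the SysBench 0.5 OLTP Lua test. The endpoint, credentials, and Lua script path are placeholders you must replace; the flags are the standard SysBench 0.5 OLTP options, not anything Aurora-specific.

import subprocess

# Placeholder: substitute your own Aurora cluster endpoint and credentials.
ENDPOINT = "mycluster.cluster-XXXX.us-east-1.rds.amazonaws.com"

common = [
    "sysbench",
    "--test=/usr/local/share/sysbench/tests/db/oltp.lua",  # path varies by install
    f"--mysql-host={ENDPOINT}",
    "--mysql-user=admin",
    "--mysql-password=secret",
    "--mysql-db=sbtest",
    "--oltp-tables-count=250",   # 250 tables, as in the scaling tests below
    "--num-threads=1000",        # 1,000 connections per client machine
]

subprocess.run(common + ["prepare"], check=True)   # load the test tables once
subprocess.run(common + ["--max-time=1800", "--max-requests=0", "run"],
               check=True)                         # 30-minute timed run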
Beyond benchmarks
If only real-world applications saw benchmark performance!
POSSIBLE DISTORTIONS
Real-world requests contend with each other
Real-world metadata rarely fits in the data dictionary cache
Real-world data rarely fits in the buffer cache
Real-world production databases need to run highly available
Scaling User Connections
SysBench OLTP workload, 250 tables

Connections | Amazon Aurora | RDS MySQL, 30K IOPS (single AZ)
         50 |        40,000 | 10,000
        500 |        71,000 | 21,000
      5,000 |       110,000 | 13,000

UP TO 8x FASTER
Scaling Table Count
SysBench write-only workload, 1,000 connections, default settings
Number of writes per second:

Tables | Amazon Aurora | MySQL I2.8XL local SSD | MySQL I2.8XL RAM disk | RDS MySQL, 30K IOPS (single AZ)
    10 |        60,000 |                 18,000 |                22,000 | 25,000
   100 |        66,000 |                 19,000 |                24,000 | 23,000
 1,000 |        64,000 |                  7,000 |                18,000 |  8,000
10,000 |        54,000 |                  4,000 |                 8,000 |  5,000

UP TO 11x FASTER
Scaling Data Set Size

SYSBENCH WRITE-ONLY
DB Size | Amazon Aurora | RDS MySQL, 30K IOPS (single AZ)
   1 GB |       107,000 | 8,400
  10 GB |       107,000 | 2,400
 100 GB |       101,000 | 1,500
   1 TB |        26,000 | 1,200

UP TO 67x FASTER

CLOUDHARMONY TPC-C
DB Size | Amazon Aurora | RDS MySQL, 30K IOPS (single AZ)
  80 GB |        12,582 | 585
 800 GB |         9,406 |  69

UP TO 136x FASTER
Running with Read Replicas
SysBench write-only workload, 250 tables
Replica lag:

Updates per second | Amazon Aurora | RDS MySQL, 30K IOPS (single AZ)
             1,000 |       2.62 ms |   0 s
             2,000 |       3.42 ms |   1 s
             5,000 |       3.94 ms |  60 s
            10,000 |       5.38 ms | 300 s

UP TO 500x LOWER LAG
How Do We Achieve These Results?

DO LESS WORK
Do fewer I/Os
Minimize network packets
Cache prior results
Offload the database engine

BE MORE EFFICIENT
Process asynchronously
Reduce the latency path
Use lock-free data structures
Batch operations together

DATABASES ARE ALL ABOUT I/O
NETWORK-ATTACHED STORAGE IS ALL ABOUT PACKETS/SECOND
HIGH-THROUGHPUT PROCESSING DOES NOT ALLOW CONTEXT SWITCHES
A service-oriented architecture applied to the database
1. Moved the logging and storage layer into a multi-tenant, scale-out, database-optimized storage service
2. Integrated with other AWS services like Amazon EC2, Amazon VPC, Amazon DynamoDB, Amazon SWF, and Amazon Route 53 for control plane operations
3. Integrated with Amazon S3 for continuous backup with 99.999999999% durability

Diagram: the data plane (SQL, transactions, caching, logging + storage) sits apart from the control plane (Amazon DynamoDB, Amazon SWF, Amazon Route 53), with Amazon S3 for backup.
Aurora Cluster
Diagram: an Aurora primary instance on top of a cluster volume that spans AZ 1, AZ 2, and AZ 3, with continuous backup to Amazon S3.

Aurora Cluster with Replicas
Diagram: the same cluster with Aurora Replicas in the other AZs; the primary and replicas all share the cluster volume that spans 3 AZs.
IO Traffic in RDS MySQL
MYSQL WITH STANDBY: the primary instance in AZ 1 writes to Amazon Elastic Block Store (EBS), which mirrors each write; writes are also staged to a standby instance in AZ 2, which writes to its own mirrored EBS volumes; backups go to Amazon S3.
Types of write: binlog, data, double-write, log, FRM files.

IO FLOW
1-2. Issue the write to EBS; EBS issues it to its mirror and acks when both are done
3. Stage the write to the standby instance
4-5. Issue the write to EBS on the standby instance, which again mirrors it

OBSERVATIONS
Steps 1, 3, and 5 are sequential and synchronous
This amplifies both latency and jitter
Many types of writes for each user operation
Data blocks must be written twice to avoid torn writes

PERFORMANCE
30-minute SysBench write-only workload, 100 GB dataset, RDS Single AZ, 30K PIOPS:
780K transactions
7,388K I/Os per million transactions (excludes mirroring and standby)
Average of 7.4 I/Os per transaction
IO Traffic in Aurora (Database)
AMAZON AURORA: the primary instance in AZ 1 and a replica instance in AZ 2 sit on shared storage spanning all three AZs, with asynchronous 4/6-quorum distributed writes and continuous backup to Amazon S3.
Types of write: redo log records only (no binlog, data, double-write, or FRM file writes in the I/O path).

IO FLOW
Boxcar redo log records, fully ordered by LSN
Shuffle to the appropriate segments, partially ordered
Boxcar to storage nodes and issue the writes

OBSERVATIONS
Only redo log records are written; all steps are asynchronous
No data block writes (no checkpoint or cache-replacement writes)
6X more log writes, but 9X less network traffic
Tolerant of network and storage outlier latency

PERFORMANCE
30-minute SysBench write-only workload, 100 GB dataset:
27,378K transactions (35X MORE)
950K I/Os per 1M transactions, a 6X amplification (7.7X LESS)
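To make the 4/6 quorum concrete, here is a small, hedged asyncio sketch; it is not Aurora's actual protocol, and send_to_node and its latencies are invented. The point it demonstrates: a write returns as soon as any four of the six storage nodes acknowledge, so one slow or unreachable node never sits on the commit path.

import asyncio, random

NODES, WRITE_QUORUM = 6, 4

async def send_to_node(node_id, lsn):
    # Stand-in for shipping a boxcar of redo records to one storage node;
    # the random delay models outlier network/storage latency.
    await asyncio.sleep(random.uniform(0.001, 0.050))
    return node_id

async def quorum_write(lsn):
    writes = [asyncio.create_task(send_to_node(n, lsn)) for n in range(NODES)]
    acks = 0
    for done in asyncio.as_completed(writes):
        await done
        acks += 1
        if acks >= WRITE_QUORUM:
            return lsn    # durable at 4/6; stragglers finish in the background

print(asyncio.run(quorum_write(42)))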
IO Traffic in Aurora (Read Replica)

MYSQL READ SCALING
Logical: ship SQL statements to the replica
Write workload is similar on both instances
Independent storage; single-threaded binlog apply
Can result in data drift between master and replica
(Diagram: the MySQL master runs 30% reads / 70% writes; the replica spends the same 70% on writes, leaving only 30% for new reads; each has its own data volume.)

AMAZON AURORA READ SCALING
Physical: ship redo from the master to the replica
The replica shares storage; no writes are performed
Cached pages have redo applied
Advance the read view when all commits have been seen
(Diagram: the Aurora master runs 30% reads / 70% writes; the Aurora replica serves 100% new reads against the shared multi-AZ storage.)
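A hedged sketch of the replica-side rule just described, with invented names (ReplicaCache, on_redo): redo is applied only to pages the replica already caches, since the shared storage volume can materialize any other page on read, and the read view advances only once all commits up to an LSN have been seen.

class ReplicaCache:
    def __init__(self):
        self.pages = {}          # page_id -> cached page image
        self.read_view_lsn = 0   # the replica serves reads as of this LSN

    def on_redo(self, lsn, page_id, apply_change):
        # Apply redo only to pages already in cache; uncached pages need no
        # work here because shared storage replays redo on demand at read time.
        if page_id in self.pages:
            self.pages[page_id] = apply_change(self.pages[page_id])

    def on_all_commits_seen(self, lsn):
        # Advance the read view only when every commit up to lsn has arrived.
        self.read_view_lsn = max(self.read_view_lsn, lsn)

cache = ReplicaCache()
cache.pages["p7"] = "v1"
cache.on_redo(lsn=34, page_id="p7", apply_change=lambda img: img + "+v2")
cache.on_all_commits_seen(lsn=34)
print(cache.pages["p7"], cache.read_view_lsn)    # -> v1+v2 34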
Asynchronous group commits
Diagram: concurrent transactions T1 through Tn (reads, writes, commits) generate log records at LSNs 10 through 50; pending commits T1 through T8 wait in a commit queue in LSN order while the durable LSN at the head node advances over time, forming a group commit.
TRADITIONAL APPROACH
Maintain a buffer of log records to write out to disk
Issue the write when the buffer is full or a timeout expires
The first writer pays a latency penalty when the write rate is low

AMAZON AURORA
Request I/O with the first write; fill the buffer until the write is picked up
An individual write is durable when 4 of 6 storage nodes ACK
Advance the DB durable point up to the earliest pending ACK (sketched below)
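As a hedged illustration of the last two bullets, with invented names throughout and the LSNs borrowed from the diagram: writes are tracked in LSN order, each becomes durable on its own 4/6 ack, and the database-level durable point advances only up to the earliest LSN still awaiting its quorum.

import heapq

class DurableTracker:
    def __init__(self):
        self.pending = []        # min-heap of LSNs still awaiting a 4/6 ack
        self.acked = set()
        self.durable_lsn = 0     # commits at or below this LSN may complete

    def issued(self, lsn):
        heapq.heappush(self.pending, lsn)

    def quorum_ack(self, lsn):
        self.acked.add(lsn)
        # Advance past every leading LSN that has its quorum; stop at the
        # earliest pending ACK, exactly as described above.
        while self.pending and self.pending[0] in self.acked:
            self.durable_lsn = heapq.heappop(self.pending)

t = DurableTracker()
for lsn in (10, 12, 22):
    t.issued(lsn)
t.quorum_ack(12)      # out-of-order ack: the durable point stays at 0
t.quorum_ack(10)      # now both 10 and 12 have quorums
print(t.durable_lsn)  # -> 12; the commit queue can release T1 and T2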
Adaptive thread pool

MYSQL THREAD MODEL
Standard MySQL: one thread per connection; doesn't scale with connection count
MySQL EE: connections assigned to thread groups; requires careful stall-threshold tuning

AURORA THREAD MODEL
Re-entrant connections multiplexed to active threads
Kernel-space epoll() inserts into a latch-free event queue
Dynamically sized thread pool
Gracefully handles 5,000+ concurrent client sessions on an r3.8xl
(Diagram: client connections feed through epoll() into the latch-free task queue, which the thread pool drains.)
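A minimal, hedged sketch of the epoll-plus-worker-pool idea (Linux-only). A plain echo handler and a fixed-size pool stand in for query execution and Aurora's dynamic sizing, and Python's Queue stands in for the latch-free queue; none of this is Aurora's actual code.

import queue, select, socket, threading

def worker(tasks, ep):
    while True:
        fd, conn = tasks.get()
        data = conn.recv(4096)
        if data:
            conn.sendall(data)    # stand-in for real query processing
            ep.modify(fd, select.EPOLLIN | select.EPOLLONESHOT)  # re-arm fd
        else:
            ep.unregister(fd)
            conn.close()

listener = socket.socket()
listener.bind(("127.0.0.1", 3307))
listener.listen(1024)
listener.setblocking(False)

ep = select.epoll()
ep.register(listener.fileno(), select.EPOLLIN)

tasks = queue.Queue()
for _ in range(8):                # fixed pool; Aurora sizes its pool dynamically
    threading.Thread(target=worker, args=(tasks, ep), daemon=True).start()

conns = {}
while True:                       # one event loop multiplexes all connections
    for fd, _events in ep.poll():
        if fd == listener.fileno():
            conn, _addr = listener.accept()
            conn.setblocking(True)
            conns[conn.fileno()] = conn
            # EPOLLONESHOT: a ready connection is handed to exactly one worker
            ep.register(conn.fileno(), select.EPOLLIN | select.EPOLLONESHOT)
        else:
            tasks.put((fd, conns[fd]))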
Continuing the Improvements

BATCH OPERATIONS
Write batch size tuning
Asynchronous send for read/write I/Os
Purge thread performance
Bulk insert performance

OTHER
Failover time reductions
Malloc reduction
System call reductions
Undo slot caching patterns
Cooperative log apply

CUSTOMER FEEDBACK
Binlog and distributed transactions
Lock compression
Read-ahead

LOCK CONTENTION
Hot row contention
Dictionary statistics
Mini-transaction commit code path
Query cache read/write conflicts
Dictionary system mutex
Availability
"Performance only matters if your database is up."

Storage node availability
Quorum system for reads and writes; latency tolerant
Peer-to-peer gossip replication to fill in holes (sketched below)
Continuous backup to S3 (designed for 11 9s durability)
Continuous scrubbing of data blocks
Continuous monitoring of nodes and disks for repair
10 GB segments as the unit of repair or hotspot rebalance, so load can be rebalanced quickly
Quorum membership changes do not stall writes
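A toy, hedged sketch of peer-to-peer gossip hole-filling (Node and its fields are invented): two storage nodes compare which LSNs each holds and copy across whatever the other is missing.

class Node:
    def __init__(self, records):
        self.records = dict(records)             # lsn -> redo record

    def gossip_with(self, peer):
        # Each side fills its holes from the other; repeated pairwise gossip
        # converges the whole peer group for a segment.
        for lsn in peer.records.keys() - self.records.keys():
            self.records[lsn] = peer.records[lsn]
        for lsn in self.records.keys() - peer.records.keys():
            peer.records[lsn] = self.records[lsn]

a = Node({1: "r1", 3: "r3"})                     # missing LSN 2
b = Node({1: "r1", 2: "r2"})                     # missing LSN 3
a.gossip_with(b)
assert a.records.keys() == b.records.keys() == {1, 2, 3}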
Instant crash recovery

Traditional databases
Have to replay logs since the last checkpoint
Typically 5 minutes between checkpoints
Single-threaded in MySQL; requires a large number of disk accesses
A crash at T0 requires re-application of the SQL in the redo log since the last checkpoint (checkpointed data plus redo log)

Amazon Aurora
The underlying storage replays redo records on demand as part of a disk read
Parallel, distributed, and asynchronous
No replay needed at startup
A crash at T0 results in redo logs being applied to each segment on demand, in parallel, asynchronously
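A hedged sketch of replay-on-read; Segment, append_redo, and the string-concatenation "pages" are all invented stand-ins. Each segment keeps its tail of redo records and materializes a page by applying them the first time that page is read, which is why a restart needs no global replay.

class Segment:
    def __init__(self):
        self.pages = {}       # page_id -> materialized page image
        self.redo = {}        # page_id -> list of pending redo records

    def append_redo(self, page_id, record):
        self.redo.setdefault(page_id, []).append(record)

    def read_page(self, page_id):
        # Apply any pending redo on demand; after a crash, every segment does
        # this independently and in parallel, so there is no startup replay.
        image = self.pages.get(page_id, "")
        for record in self.redo.pop(page_id, []):
            image += record   # toy "apply": real redo mutates page bytes
        self.pages[page_id] = image
        return image

seg = Segment()
seg.append_redo("p1", "+insert_row")
seg.append_redo("p1", "+update_row")
print(seg.read_page("p1"))    # redo applied lazily, only when the page is read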
Survivable caches
We moved the cache out of the database process
The cache remains warm in the event of a database restart
Lets you resume fully loaded operations much faster
Instant crash recovery + survivable cache = quick and easy recovery from DB failures
(Diagram: the SQL, transactions, and caching layers, with the caching process outside the DB process so it remains warm across a database restart.)
Faster, more predictable failover

MYSQL
DB failure, failure detection, DNS propagation, recovery, recovery, app running again: 15-20 sec
AURORA WITH MARIADB DRIVER
DB failure, failure detection, DNS propagation, recovery, app running again: 3-20 sec
Simulate failures using SQL

To cause the failure of a component at the database node:
ALTER SYSTEM CRASH [{INSTANCE | DISPATCHER | NODE}]

To simulate the failure of disks:
ALTER SYSTEM SIMULATE percent_failure DISK failure_type IN
[DISK index | NODE index] FOR INTERVAL interval

To simulate the failure of networking:
ALTER SYSTEM SIMULATE percent_failure NETWORK failure_type
[TO {ALL | read_replica | availability_zone}] FOR INTERVAL interval