Optimizing Application Performance in AWS (2018-08-02)
© 2018 Asperitas Consulting. All rights reserved.
Scott Wheeler
Principal Cloud Architect
Asperitas Consulting
Optimizing Application Performance
In AWS
Agenda
What Affects Performance in AWS?
Networking Performance
Compute Performance
Storage Performance
What Affects Performance?
Software Optimization (out of scope)
Network
Compute
Storage
Application Architecture (out of scope)
Network
Network Performance
Instance Types
+ Larger is better
+ Newer is better
AWS Enhanced Networking (SR-IOV)
+ Increased bandwidth (closer to advertised)
x Issues with driver support for non-AWS Linux
AWS Placement Groups
+ Low latency between instances
x May have instance type availability issues
x Only within an AZ
Application Location
Instance Type Considerations
Network Bandwidth   Instance Types
Very Low            t2.nano
Low                 t1.micro
Low to Moderate     t2 (micro, small, medium, large)
Moderate            medium (m3), large (m3, m4, c4, r3), xlarge (d2, r3, t2), 2xlarge (t2)
High                xlarge (m3, m4, c4), 2xlarge (c4, m3, m4, p2, d2, r3), 4xlarge (c4, m4, d2, r3)
Up to 10 Gbps       large (c5, m5, i3, r4), xlarge (c5, m5, r4, i3, x1e), 2xlarge (c5, m5, r4, h1, i3, p3, x1e), 4xlarge (m5, c5, r4, i3, h1, x1e, g3), 8xlarge (x1e)
10 Gbps             8xlarge (c4, r3, r4, d2, h1, i3, g3, p2), 9xlarge (c5), 10xlarge (m4), 12xlarge (m5), 16xlarge (x1, x1e)
25 Gbps             16xlarge (c5, m4, r4, h1, i3, g3, p2, p3), 18xlarge (c5), 24xlarge (m5), 32xlarge (x1, x1e)
Network – Third Party Observed Performance
Source: CloudHarmony & flux

AWS Network Performance Ranges Observed (2017)

Instance Size   Low (Mbps)   High (Mbps)
t2.nano         20           30
t2.micro        70           100
t2.small        125          200
*.medium        250          400
*.large         450          600
*.xlarge        700          900
*.2xlarge       900          1,100
*.4xlarge       1,800        2,200
*.8xlarge       6,000        8,500
Network – Third Party Observed Performance
Source: Andreas Wittig, cloudonaut

[Chart: minimum, average, and maximum observed network throughput per instance type, 0 to 25 Gbps]
Network – New Performance Enhancements
EC2 to S3
• Increased to 25 Gbps, from 5 Gbps
EC2 to EC2
• 5 Gbps single-flow traffic, 25 Gbps multi-flow traffic between AZs within a region
EC2 to EC2 (Cluster Placement Group)
• 10 Gbps single-flow traffic, 25 Gbps multi-flow traffic
Source: Amazon Web Services, Jan 2018
Other Network Considerations
Transit VPCs
x VGW IPsec VPN connections limited to 1.25 Gbps
x Virtual routers may only have 2.5 Gbps real-world throughput
Compute
Compute Performance
[Chart: PassMark score (0 to 50,000) by AWS instance family: T2, M4, M5, C4, C5, X1e, R5, R4, H1, I3, D2, Z1d]
Source: Amazon Web Services
Storage
EBS & Instance Storage Performance
EBS
• Magnetic: 250-500 IOPS, 250-500 MiB/s throughput
• Solid State
  • Standard (gp2): up to 10,000 IOPS, 160 MiB/s throughput
  • PIOPS (io1): up to 32,000 IOPS, 500 MiB/s throughput
Instance-Attached Storage
• NVMe: up to 3.3 million IOPS, 16 GB/s sequential read
EBS GP2 vs IO1 Performance
Use Case: 500 GB @ 3,000 IOPS
• gp2: $100/mo (requires a 1 TB volume to reach 3,000 IOPS)
• io1: $258/mo (500 GB @ 3,000 IOPS)
Use Case: 1 TB @ 10,000 IOPS
• gp2: $340/mo (requires a ~3.4 TB volume to reach 10,000 IOPS)
• io1: $750/mo (1 TB @ 10,000 IOPS)
Source: Amazon Web Services
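The comparison above can be sketched as a small cost calculator. The prices below are assumptions (2018-era us-east-1 list prices, not stated in the deck): gp2 at $0.10/GB-month, io1 at $0.125/GB-month plus $0.065 per provisioned IOPS-month, and a gp2 baseline of 3 IOPS per GB.

```python
# Assumed 2018 us-east-1 EBS prices; verify against current pricing.
GP2_PER_GB = 0.10          # $/GB-month
IO1_PER_GB = 0.125         # $/GB-month
IO1_PER_IOPS = 0.065       # $/provisioned-IOPS-month
GP2_IOPS_PER_GB = 3        # gp2 baseline IOPS scale with volume size

def gp2_monthly_cost(size_gb: float, target_iops: float) -> float:
    # gp2 IOPS come from capacity, so hitting an IOPS target can force
    # over-provisioning the volume.
    billed_gb = max(size_gb, target_iops / GP2_IOPS_PER_GB)
    return billed_gb * GP2_PER_GB

def io1_monthly_cost(size_gb: float, target_iops: float) -> float:
    # io1 bills capacity and provisioned IOPS separately.
    return size_gb * IO1_PER_GB + target_iops * IO1_PER_IOPS

print(gp2_monthly_cost(500, 3000))  # 100.0 -- matches the $100/mo case
print(io1_monthly_cost(500, 3000))  # ~257.5 -- the deck rounds to $258
```

The crossover logic is the takeaway: gp2 wins while the capacity needed for the IOPS target is cheap, and io1 wins only when you need IOPS far beyond what the gp2 size-scaling provides.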
S3 Storage Performance
Keys Matter
• Sequential key names can bottleneck at around 100 requests/sec per prefix
• Add a hash string to the key prefix to distribute load
Transfer Acceleration
• Provides up to 30% more throughput by reducing latency
Multipart Upload
• Break larger files into multiple chunks uploaded in parallel
Utilize CloudFront
• Reduces latency
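The hash-prefix trick above can be sketched as follows; the key names are illustrative, not from the deck. Prepending a short hash of the key spreads sequential names (dates, counters, log files) across S3 key-space partitions instead of piling them onto one hot prefix.

```python
import hashlib

def hashed_key(key: str, prefix_len: int = 4) -> str:
    # Prefix the object key with the first few hex chars of its MD5,
    # so lexicographically adjacent names land under different prefixes.
    digest = hashlib.md5(key.encode("utf-8")).hexdigest()
    return f"{digest[:prefix_len]}/{key}"

for name in ("2018-08-01.log", "2018-08-02.log", "2018-08-03.log"):
    print(hashed_key(name))
```

The cost is that listing objects in date order now requires an index elsewhere, since the hash destroys the lexicographic ordering of the keys.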
EFS Storage Performance
File System Size   Aggregate Read/Write Throughput
100 GiB            Burst to 100 MiB/s for up to 72 min/day; drive up to 5 MiB/s continuously
1 TiB              Burst to 100 MiB/s for up to 12 hours/day; drive up to 50 MiB/s continuously
10 TiB             Burst to 1 GiB/s for up to 12 hours/day; drive up to 500 MiB/s continuously
Source: Amazon Web Services
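The burst numbers in the table follow from a simple credit model. This sketch assumes the 2018-era EFS bursting behavior: baseline throughput scales at 50 MiB/s per TiB stored, burst rate at 100 MiB/s per TiB with a 100 MiB/s floor, and credits allow bursting for (baseline / burst) of each day.

```python
def efs_burst_profile(size_tib: float):
    # Assumed scaling factors -- check the EFS docs for current values.
    baseline_mibps = 50.0 * size_tib
    burst_mibps = max(100.0, 100.0 * size_tib)
    # Credits accrue at the baseline rate, so the fraction of the day
    # you can burst is baseline/burst.
    burst_minutes_per_day = baseline_mibps / burst_mibps * 24 * 60
    return baseline_mibps, burst_mibps, burst_minutes_per_day

print(efs_burst_profile(0.1))   # ~100 GiB: 5 MiB/s baseline, 72 min/day of burst
print(efs_burst_profile(1.0))   # 1 TiB: 50 MiB/s baseline, 12 h/day of burst
print(efs_burst_profile(10.0))  # 10 TiB: 500 MiB/s baseline, 12 h/day of burst
```

The practical implication is the one the table shows: small file systems get very little sustained throughput, so padding an EFS file system with dummy data to raise the baseline was a common workaround at the time.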
Real World Project
Project Overview
Validate very low latency data feed performance in AWS.
Refactor Into Services
Utilize Containers (Docker)
Benchmark Performance
• Same instance
• Same AZ
• Different AZs
• Placement Groups
• Dedicated Instances
Project Results
Messages/sec

Instance Type   Same AZ   Same AZ Dedicated   Plcmt Group   Plcmt Grp Dedicated
C5 large        68,000    150,000             165,000       150,000
C5 4xlarge      78,000    190,000             175,000       198,000
C5 18xlarge     40,000    180,000             160,000       192,000
Conclusions, Observations & Recommendations
C5 4xlarge offered the best price/performance.
Placement Groups should be used when low latency is needed.
Dedicated Instances have benefit, but may not be needed.
Performance was relatively constant in long-running tests.
Separate AZs should only be used for DR and failover.
Contact Information
Scott Wheeler
Principal Cloud Architect
Asperitas Consulting
@dscottwheeler
Please complete the session survey in the summit mobile app.
Thank you!