The Case for HTTP/2 - Internetdagarna 2015 - Stockholm
http://www.flickr.com/photos/arnybo/2679622216
The Case for HTTP/2
@AndyDavies Internetdagarna, Nov 2015
1999: RFC 2616 - HTTP/1.1
The Web of 1999…
…is not the Web of 2015
http://www.flickr.com/photos/7671591@N08/1469828976
HTTP/1.x doesn’t use the network efficiently
and we’ve been hacking around some of the limitations
https://www.flickr.com/photos/rocketnumber9/13695281
Each TCP connection only supports one request at a time*
https://www.flickr.com/photos/39551170@N02/5621408218
*HTTP Pipelining is broken in practice
So browsers allowed us to make more requests in parallel
Very old browsers - 2 parallel connections
Today’s browsers - 4 plus connections
To make even more parallel requests we split resources across hosts
www.bbc.co.uk
static.bbci.co.uk
news.bbcimg.co.uk
node1.bbcimg.co.uk
Increasing the risk of network congestion and packet loss
https://www.flickr.com/photos/dcmaster/4585119705
Every request has an overhead
https://www.flickr.com/photos/tholub/9488778040
HTTP/1.x - Higher latency = slower load times
[Chart: Page Load Time (s) vs Round Trip Time (ms), 0-240 ms]
Mike Belshe - “More Bandwidth Doesn’t Matter (much)”
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
Accept-Encoding: gzip,deflate,sdch
Accept-Language: en-US,en;q=0.8
Cache-Control: no-cache
Cookie: NTABS=B5; BBC-UID=75620c7ca040deb7c0d3275d81c51c2361684a959e281319b3b5da4dab5958880Mozilla%2f5%2e0%20%28Macintosh%3b%20Intel%20Mac%20OS%20X%2010%5f9%5f1%29%20AppleWebKit%2f537%2e36%20%28KHTML%2c%20like%20Gecko%29%20Chrome%2f31%2e0%2e1650%2e63%20Safari%2f537%2e36; ckns_policy=111; BGUID=55b28cbc20d2e32f221f3ed0e1be9624c960f93b1e483329c3752a6d253efd40; s1=52CC023F3812005F; BBCCOMMENTSMODULESESSID=L-k22bbkde3jkqf928himljnlkl3; ckpf_ww_mobile_js=on; ckpf_mandolin=%22footer-promo%22%3A%7B%22segment%22%3Anull%2C%22end%22%3A%221392834182609%22%7D; _chartbeat2=ol0j0uq4hkq6pumu.1389101635322.1392285654268.0111111111111111; _chartbeat_uuniq=1; ecos.dt=1392285758216
Host: www.bbc.co.uk
Pragma: no-cache
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/32.0.1700.107 Safari/537.36
Headers sent with every request
Contain lots of repeated data and aren't compressed
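A rough back-of-the-envelope sketch of that repetition (my own illustration, with abbreviated header values, not figures from the talk): the same header block is re-sent in full on every HTTP/1.x request on a connection.

```python
# Toy illustration: bytes spent re-sending identical headers on every
# HTTP/1.x request. Values here are shortened stand-ins; real cookies
# and user-agent strings are often much larger.
headers = {
    "User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_1) AppleWebKit/537.36",
    "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
    "Accept-Encoding": "gzip,deflate",
    "Cookie": "BBC-UID=75620c7c...; ckns_policy=111",
}

# HTTP/1.x sends "Name: value\r\n" per header, per request, uncompressed.
per_request = sum(len(f"{name}: {value}\r\n") for name, value in headers.items())
requests_per_page = 100  # not unusual for a 2015-era page

print(per_request, "bytes of headers per request")
print(per_request * requests_per_page, "bytes repeated across one page load")
```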
And small responses may not fill the TCP Congestion Window
Could have sent more segments in this round-trip
Small response
So we follow recipes e.g. Reduce Requests
https://www.flickr.com/photos/nonny/116902484
Create CSS and JavaScript bundles
Whole bundle is invalidated if a single file changes
More to download and parse
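The invalidation problem can be sketched like this (my illustration, with made-up file names): bundles are typically cached under a hash of their contents, so editing one file changes the whole bundle's URL.

```python
import hashlib

# Toy illustration: a bundle is cached under a content hash, so one
# changed file forces the entire bundle to be re-downloaded.
files = {"a.js": b"function a(){}", "b.js": b"function b(){}", "c.js": b"function c(){}"}

def bundle_hash(files):
    bundle = b"".join(files[name] for name in sorted(files))
    return hashlib.sha256(bundle).hexdigest()[:8]

before = bundle_hash(files)
files["b.js"] = b"function b(){return 1}"  # edit a single file
after = bundle_hash(files)
print(before, "->", after)  # cached copy of every file in the bundle is lost
```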
and mush images together as sprites
Browser must download and decode the whole image
To get just one sprite …
We override the browser’s priorities
https://www.flickr.com/photos/skoupidiaris/5025176923
Embed binary* data using DataURIs
url(data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABkAAAAZCAMAAADzN3VRAAAAGXRFWHRTb2Z0d2FyZQBBZG9iZSBJbWFnZVJlYWR5ccllPAAAAAZQTFRF/wAAAAAAQaMSAwAAABJJREFUeNpiYBgFo2AwAIAAAwACigABtnCV2AAAAABJRU5ErkJggg==)
*dataURIs can be text too e.g. SVG
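Building one of these is simple (a sketch with placeholder bytes, not a real image): base64-encode the resource and prefix the media type. The request is saved, but the bytes can no longer be cached separately and base64 inflates them by roughly a third.

```python
import base64

# Toy illustration: embed (placeholder) image bytes as a data URI.
png_bytes = b"\x89PNG\r\n\x1a\n" + b"\x00" * 100  # stand-in, not a valid image
data_uri = "data:image/png;base64," + base64.b64encode(png_bytes).decode("ascii")

print(len(png_bytes), "raw bytes ->", len(data_uri), "characters embedded in CSS/HTML")
```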
and inline ‘critical resources’
<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en-us">
<head>
<meta http-equiv="content-type" content="text/html; charset=utf-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<link rel='stylesheet' href='http://fonts.googleapis.com/css?family=Open+Sans:400,300,600' type='text/css'>
<link rel="alternate" href="http://blog.yoav.ws/index.xml" type="application/rss+xml" title="Yoav's blog thing" />
<style>
body { font-family: 'Open Sans', 'Helvetica Neue', Helvetica, sans-serif; color: rgb(72, 70, 68); }
img { max-width: 100%; }
.li-page-header { color: rgb(255, 255, 255); padding: 16px 0px; background-color: #8a1e1b; }
.container { position: relative; width: 90vw; max-width: 760px; margin: 0px auto; padding: 0px; }
.clearfix:before, .clearfix:after, .row:before, .row:after { content: '\0020'; display: block; overflow: hidden; visibility: hidden; width: 0; height: 0; }
.row:after, .clearfix:after { clear: both; }
.row, .clearfix { zoom: 1; }
.container .column, .container .columns { float: left; display: inline; margin-left: 10px; margin-right: 10px; }
There’s a tension between development and delivery
https://www.flickr.com/photos/domiriel/7376397968
Build tools and optimisation services help plug gaps
and won’t be going away…
But what if we could use the network more efficiently?https://www.flickr.com/photos/belsymington/4102783610
HTTP/2
HTTP/1.1 HTTP/2
https://http2.golang.org/gophertiles
Impressive!
But is it a real world test?
• HTTP methods, status codes and semantics remain the same
• Binary headers
• Header compression
• Multiplexed
• Server can push resources
HTTP/2
Each request becomes a stream
DATA frame
HEADERS frame
HTTP/1.1 200 OK
Date: Mon, 07 Sep 2015 17:39:33 GMT
Expires: -1
Cache-Control: private, max-age=0
Content-Type: text/html; charset=UTF-8
Content-Encoding: gzip
X-XSS-Protection: 1; mode=block
X-Frame-Options: SAMEORIGIN
Transfer-Encoding: chunked

<!doctype html>
<html>
<head>
<meta charset="utf-8">
<title>This is my title</title>
<link rel="stylesheet" href="styles.css" type="text/css" />
<script src="script.js"></script>
HTTP/1.1 HTTP/2
Streams are divided into frames
DATA frame
DATA frame
Frames are multiplexed over a TCP connection
… Stream 1 DATA
Stream 2 HEADERS
Stream 2 DATA
Stream 1 DATA …
Stream 4 DATA
Client Server
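The interleaving above can be sketched as a toy scheduler (my illustration, ignoring flow control and priorities): each frame carries its stream id, so the receiver reassembles every stream independently no matter how frames are mixed on the wire.

```python
from collections import deque

# Toy illustration: frames from several streams multiplexed round-robin
# over one connection. Stream ids here are arbitrary example values.
streams = {
    1: ["HEADERS", "DATA", "DATA"],
    3: ["HEADERS", "DATA"],
    5: ["HEADERS", "DATA", "DATA", "DATA"],
}

def interleave(streams):
    queues = {sid: deque(frames) for sid, frames in streams.items()}
    wire = []
    while any(queues.values()):
        for sid, q in sorted(queues.items()):
            if q:
                wire.append((sid, q.popleft()))  # frame tagged with its stream id
    return wire

for sid, frame in interleave(streams):
    print(f"stream {sid}: {frame} frame")
```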
TCP connection comparison HTTP/2 vs HTTP/1.1
HTTP/1.1
HTTP/2
Prioritised using Weights and Dependencies
https://nghttp2.org/blog/2014/04/27/how-dependency-based-prioritization-works/
Weight: 200 id: 2
Weight: 100 id: 4
Weight: 1 id: 6
Weight: 100 id: 12
Weight: 100 id: 8
Weight: 100 id: 10
What is the optimal order… Does it change as page loads?https://www.flickr.com/photos/add1sun/4993432274
Header compression
https://http2.github.io/http2-spec/compression.html
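HPACK's central idea can be sketched in a few lines (a deliberately simplified toy, not the real HPACK wire format, which also has a static table and Huffman coding): the first time a name/value pair is sent it goes over in full and both ends index it; repeats then cost roughly one byte.

```python
# Toy sketch of HPACK's indexing idea (NOT the real encoding): literals
# are sent in full once, then referenced by table index on repeats.
def encode(headers, table):
    cost = 0
    for pair in headers:
        if pair in table:
            cost += 1  # a small index reference
        else:
            cost += len(pair[0]) + len(pair[1]) + 2  # full literal
            table[pair] = len(table)                 # both ends index it
    return cost

table = {}
request = [("user-agent", "Mozilla/5.0 ..."), ("accept-encoding", "gzip"), (":path", "/")]
first = encode(request, table)
repeat = encode(request, table)
print(first, "bytes-ish for the first request,", repeat, "for each repeat")
```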
Does it make any difference?
Host: Ireland, Test Agent: Singapore, Cable
What about when server and client are close?
Host: Ireland, Test Agent: Ireland, Cable
HTTP/1.1
HTTP/2
and evenly matched when server and client are close
Host: Ireland, Test Agent: Ireland, Cable
Time to mobile load event:
h2 capable: 7,200 ms
h2 enabled: 5,325 ms
h2 unsupported: 6,160 ms
Sample is 1 month of data on https://next.ft.com
https://speakerdeck.com/surma/2-101
Opportunities for new kinds of optimisations
https://www.flickr.com/photos/inucara/14981917985
Browser Server
Server builds page
GET index.html
<html><head>…
Network idle
Request other page resources
Server push
Browser Server
Server builds page
GET index.html
<html><head>…
Request other page resources
Push critical resources e.g. CSS
Server push
Browser can reject push but may have already received data
Server push
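The saving is easy to estimate with rough numbers (mine, not the talk's): pushing the CSS alongside the HTML removes the round trip the browser would otherwise spend discovering and requesting it.

```python
# Back-of-the-envelope sketch with made-up timings: push saves the
# discover-and-request round trip for a critical resource.
rtt_ms = 100          # one network round trip
server_think_ms = 200  # time for the server to build the page

without_push = server_think_ms + rtt_ms + rtt_ms  # HTML arrives, then fetch CSS
with_push = server_think_ms + rtt_ms              # CSS arrives alongside the HTML
print(f"without push: {without_push} ms, with push: {with_push} ms")
```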
Many opportunities for server push
HTML → DOM
CSS → CSSOM
JavaScript
Render Tree → Layout → Paint
Fonts and background images discovered when render tree builds
Could we push them?
Multiplexing offers interesting possibilities too
How much of an image do we need to make it usable - 5%?
Experiment by John Mellor at Google
Parallel version looks usable with just 15% of bytes
And almost complete with 25% of the image bytes!
There are some questions over the user experience with progressive images
Sequential version needs 80% of bytes to match up…
When do we kill off some HTTP/1.1 optimisation techniques?
http://www.flickr.com/photos/tonyjcase/7183556158
Browser support for HTTP/2 is relatively good
[Table: browser versions with HTTP/2 support]
a. Opera Mini doesn’t support HTTP/2
b. Server-Push not supported yet
Server Support
https://github.com/http2/http2-spec/wiki/Implementations
Server implementations are getting there
Choose your server carefully…
Does it respect stream and connection flow control?
Does it support dependencies and weights?
Does it support server-push?
How does it help optimisation?
Implementations are still young…
Resources pushed in reverse order
(Fixed in h2o 1.5.3)
And sometimes browsers have unexpected behaviour…
[Waterfall: with push vs without push, 300ms gap in waterfall]
http://webpagetest.org
Shows pushed resources in Firefox tests
chrome://net-internals
GET request
Received header frame
Received data frame
Server pushed resource
Pushed resource matched to page request
https://github.com/tatsuhiro-t/nghttp2
Check server conformance with h2spec
https://github.com/summerwind/h2spec
Missing telnet for debugging? Try h2i
https://github.com/bradfitz/http2/tree/master/h2i
No content until DNS, TCP and TLS negotiation complete
Efficient TLS is Important
Session Resumption, Certificate Stapling and improvements in TLS v1.3 all help
istlsfastyet.com
www.ssllabs.com/ssltest
Bulletproof SSL and TLS, Ivan Ristic
Balancing HTTP/1.1 & HTTP/2
https://www.flickr.com/photos/kyletaylor/589628071
Some good practices remain constant across HTTP/1.1 and HTTP/2
Shrinking content - compression, minification, image optimisation
Reducing redirects
Effective caching
Reducing latency e.g. using a CDN
Reducing DNS lookups and TCP connections
Others need to vary to make the most of each
Replace inlining with server push
Reduce CSS/JS concatenation and image spriting
Avoiding sharding
HTTP/2 combines connections for shards when they:
Refer to same IP address
Use same certificate (wildcard, or SAN)
DNS lookup, but no new TCP connection or TLS negotiation
(Can also check in Network Tab in Chrome DevTools)
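Those two conditions can be sketched as a toy check (the DNS records and certificate SANs below are made-up example data, not real BBC or FT records):

```python
from fnmatch import fnmatch

# Toy sketch of HTTP/2 connection coalescing: a shard can reuse the
# origin's connection if it resolves to the same IP and the origin's
# certificate (wildcard or SAN) covers the shard's hostname.
dns = {
    "www.example.com": "203.0.113.10",
    "static.example.com": "203.0.113.10",
    "cdn.othersite.com": "198.51.100.7",
}
cert_sans = ["example.com", "*.example.com"]  # SANs on the origin's cert

def can_coalesce(origin, shard):
    same_ip = dns[origin] == dns[shard]
    covered = any(fnmatch(shard, san) for san in cert_sans)
    return same_ip and covered

print(can_coalesce("www.example.com", "static.example.com"))  # True
print(can_coalesce("www.example.com", "cdn.othersite.com"))   # False
```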
Will the complexity be the end of hand crafted optimisations?
http://www.flickr.com/photos/simeon_barkas/2557059247
Will automation be the only sane way to manage this?
https://www.flickr.com/photos/skynoir/12342499794
https://www.flickr.com/photos/mariachily/3335708242
Still plenty of challenges…
Use of Third-Parties is still growing
Requests by Domain Bytes by Domain
W3C Resource Hints should help
<link rel="dns-prefetch" href="//example.com">
<link rel="preconnect" href="//example.com">
<link rel="preload" href="//example.com/font.woff" as="font">
If you want to learn more…
hpbn.co/http2
http://daniel.haxx.se/http2
Go explore!
http://www.flickr.com/photos/atoach/6014917153
http://www.flickr.com/photos/auntiep/5024494612
@andydavies
http://slideshare.net/andydavies