Periodic Broadcast and Patching Services - Implementation,
Measurement, and Analysis in an Internet Streaming Video
Testbed
Michael K. Bradshaw, Bing Wang, Subhabrata Sen, Lixin Gao, Jim Kurose,
Prashant Shenoy, and Don Towsley
ACM Multimedia 2001
Introduction
Multimedia streaming places significant load on both server and network resources.

Multicast approaches: batching, periodic broadcast, and patching.

Issues: control/signaling overhead, the interaction between disk and CPU scheduling, and multicast join/leave times.
Batching
The server batches requests that arrive close together in time and multicasts the stream to the set of batched clients. A drawback is that client playback latency increases with the amount of client request aggregation.
Periodic Broadcast
The server divides a video object into multiple segments and continuously broadcasts the segments over a set of multicast addresses. Earlier portions are broadcast more frequently than later ones to limit playback startup latency. Clients simultaneously listen to multiple addresses, storing future segments for later playback.
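As a toy illustration of why startup latency is bounded by the first segment's length: if (hypothetically) segment 1 of length l seconds is rebroadcast back-to-back on its channel starting at times 0, l, 2l, ..., a client arriving at any time waits less than l seconds before playback can begin.

```python
def startup_wait(arrival, l):
    """Seconds until the next broadcast of segment 1 begins, assuming
    (illustratively) that broadcasts start at times 0, l, 2l, ..."""
    return (l - arrival % l) % l

# The wait is always strictly less than the first-segment length l.
```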
Patching (stream tapping)
The server streams the entire video sequentially to the very first client. Client-side workahead buffering allows a later-arriving client to receive its future playback data by listening to an existing, ongoing transmission of the same video; the server need only additionally transmit the earlier frames that the later-arriving client missed.
Server and Client Architecture
Server Architecture
Server Control Engine (SCE): one listener thread, a pool of free scheduler threads, and one transmission schedule per video.

Server Data Engine (SDE): a global buffer cache manager, a disk thread (DT) with round length δ, and a network thread (NT) with round length τ.
Schedule Data Structure
Signaling between Server and Client
Testbed (1)
100 Mbps switched Ethernet LAN. Three machines (server, workload generator, and client), each with a Pentium-II 400 MHz CPU and 400 MB RAM, running Linux. The workload generator produces a background load of client requests according to a Poisson process and logs the timing information for each request to be served.
Testbed (2)
Periodic broadcast: L. Gao, J. Kurose, and D. Towsley, "Efficient schemes for broadcasting popular videos" (Greedy Disk-conserving Broadcasting segmentation scheme).

l-GDB: the initial segment is l seconds long; segment i has length 2^(i-1) · l for 1 ≤ i ≤ ⌈log2(L/l)⌉, where L is the video length in seconds (the final segment is truncated to the remaining video length).
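The l-GDB segmentation just described can be sketched as follows (a hypothetical helper, not the paper's implementation; the 900-second length below approximates the 15-minute Blade2 video):

```python
def gdb_segments(l, video_len):
    """l-GDB segmentation sketch: segment i has nominal length
    2**(i-1) * l seconds; the final segment is truncated to the
    remaining video length."""
    segs, remaining, i = [], video_len, 1
    while remaining > 0:
        s = min(l * 2 ** (i - 1), remaining)
        segs.append(s)
        remaining -= s
        i += 1
    return segs
```

For a 900-second video this yields 9 segments for 3-GDB, 7 for 10-GDB, and 5 for 30-GDB, matching the segment counts in the Testbed (3) table.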
Testbed (3)

Sample videos for the experiments:

| Video  | Format | Length (min) | Frame rate (fps) | Bandwidth (Mbps) | File size (MB) | # of RTP pkts |
|--------|--------|--------------|------------------|------------------|----------------|---------------|
| Blade1 | MPEG-1 | 12           | 30               | 1.99             | 180.1          | 155146        |
| Blade2 | MPEG-1 | 15           | 30               | 3                | 337            | 296706        |
| Demo   | MPEG-2 | 2.7          | 30               | 2                | 40.6           | 35138         |

GDB segmentation of the 3 Mbps, 15 min MPEG-1 Blade2 video (final segment: actual length, with nominal length in parentheses):

| Scheme | Segs. | Segment lengths (sec)                          |
|--------|-------|------------------------------------------------|
| 3-GDB  | 9     | 3, 6, 12, 24, 48, 96, 192, 384, 134.5 (768)    |
| 10-GDB | 7     | 10, 20, 40, 80, 160, 320, 270.9 (640)          |
| 30-GDB | 5     | 30, 60, 120, 240, 450.9 (480)                  |
Testbed (4)
Patching algorithm: L. Gao and D. Towsley, "Supplying instantaneous video-on-demand services using controlled multicast" (threshold-based controlled multicast scheme).

When the client arrival rate for a video is Poisson with parameter λ and the length of the video is L seconds, the threshold is chosen to be (sqrt(2Lλ + 1) − 1)/λ seconds.
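The threshold formula, plus the patch-or-new-stream decision it drives, can be sketched as below (hypothetical helpers; `server_cost` is an illustrative model of the server's extra transmission, not the paper's code):

```python
import math

def patch_threshold(L, lam):
    """Controlled-multicast threshold (seconds) for video length L
    (seconds) and Poisson arrival rate lam (requests/second)."""
    return (math.sqrt(2 * L * lam + 1) - 1) / lam

def server_cost(delta, threshold, L):
    """Illustrative decision: a client arriving delta seconds after the
    last full stream started receives a patch of length delta if delta
    is within the threshold; otherwise a new full stream is started."""
    return delta if delta <= threshold else L
```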
Performance Metrics
Server side: System Read Load (SRL), Server Network Throughput (SNT), Deadline Conformance Percentage (DCP).

Client side: Client Frame Interarrival Time (CFIT), Reception Schedule Latency (RSL).
Caching Implications (1): PB
Caching Implications (2): Patching
Caching Implications (3)
SRL for patching and 10-GDB with LFU caching
Component Benchmarks

| Configuration | # Videos | # Addresses per video | Bandwidth per video | NT completion time | DT completion time |
|---------------|----------|-----------------------|---------------------|--------------------|--------------------|
| I             | 3        | 8                     | 16 Mbits            | 1.60 ms / 33 ms    | 6.16 ms / 1 sec    |
| II            | 1        | 24                    | 48 Mbits            | 5.08 ms / 33 ms    | 8.39 ms / 1 sec    |
End-End Performance (1)

PB: Client Frame Interarrival Time (CFIT) histogram under 3-GDB, 10-GDB, and 30-GDB at 600 requests per minute.
End-End Performance (2)

Patching:

| Request rate | Network load | CFIT              | DCP   |
|--------------|--------------|-------------------|-------|
| 1 per minute | 20.85 Mbps   | similar to 30-GDB | 99.9% |
| 5 per minute | 55.27 Mbps   | similar to 30-GDB | 99.9% |
| Higher rates | bottleneck occurs | -            | -     |
Scheduling Among Videos
Conclusions
Network bandwidth, rather than server resources, is likely to be the bottleneck (PB: 600 requests per minute; patching: fully loading a 100 Mbps network).

An initial client startup delay of less than 1.5 sec is sufficient to handle startup signaling and absorb data jitter.

Dramatic reductions in server read load can be gained via application-level data caching with an LFU replacement policy.
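The LFU replacement policy referred to above can be sketched minimally (an illustrative, hypothetical helper — not the paper's server implementation, which caches video data blocks rather than generic key/value pairs):

```python
from collections import defaultdict

class LFUCache:
    """Minimal least-frequently-used cache sketch (illustrative only)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = {}                # key -> cached value
        self.freq = defaultdict(int)   # key -> access count

    def get(self, key):
        if key not in self.store:
            return None
        self.freq[key] += 1
        return self.store[key]

    def put(self, key, value):
        if self.capacity <= 0:
            return
        if key not in self.store and len(self.store) >= self.capacity:
            # Evict the entry with the lowest access count.
            victim = min(self.store, key=lambda k: self.freq[k])
            del self.store[victim]
            del self.freq[victim]
        self.store[key] = value
        self.freq[key] += 1
```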