P2P Media Streaming
Rutuja Raghoji, Ramya Tridandapani, Malini Karunagaran
P2P with respect to media streaming
Users contribute content they have already consumed back to other users of the application.
Considerable intelligence is located at the edge of the network.
Distributes the resources required for content delivery throughout the Internet.
An efficient delivery model at scale, but with no guarantee of a minimum level of resource availability.
Live Streaming Metrics
Startup delay
Source-to-end delay
Playback continuity
These metrics are directly related to user satisfaction.
CoolStreaming/DONet: A Data-Driven Overlay Network for Efficient Live Media Streaming
An efficient, robust, and resilient system that is easy to implement.
Periodic exchange of data availability information: each node retrieves unavailable data from its partners and supplies available data to them.
A public Internet-based DONet implementation, called CoolStreaming v.0.9.
Evaluated over PlanetLab; the public release attracted 30,000 distinct users, with a peak load of 4,000 simultaneous users.
CoolStreaming
Need for media streaming
Multimedia applications have become more popular (e.g., net TV, news broadcasts), increasing network traffic.
IP multicast is the most efficient delivery vehicle, but issues arise from the lack of incentives:
to install multicast routers
to carry multicast traffic
Solution: overlay networks at the application level.
IP Multicast
Generally advocates a tree-based structure.
Works well with dedicated infrastructure, but mismatches application-level overlays with dynamic nodes and is vulnerable to failure.
Partially solved by structures like mesh and forest.
Multicast systems fall into two categories: proxy-assisted and peer-to-peer based.
Overlay construction algorithms
DONet uses the peer-to-peer paradigm.
Tree-based protocols and extensions: two types of distribution trees
Centralized, e.g., CoopNet
Distributed, e.g., SpreadIt, ZIGZAG
Gossip-based protocol: used by DONet for membership management.
A newly generated message is sent to randomly selected nodes; this is repeated until all nodes get the message.
Suffers from significant redundancy, which may be severe for streaming applications.
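The gossip dissemination described above can be sketched as follows. The fanout value and random-partner selection here are illustrative assumptions, not parameters from DONet; counting transmissions exposes the redundancy the slide mentions, since nodes often receive the same message more than once.

```python
import random

def gossip(nodes, origin, fanout=3, rng=None):
    """Gossip-style dissemination: each node that receives a new membership
    message forwards it to `fanout` randomly selected nodes; repeated until
    no node receives it for the first time.  `fanout` is an illustrative
    parameter, not a value from DONet."""
    rng = rng or random.Random(0)
    received = {origin}          # nodes that have seen the message
    frontier = [origin]          # nodes that will forward it this round
    transmissions = 0            # total sends, to expose the redundancy
    while frontier:
        next_frontier = []
        for node in frontier:
            for target in rng.sample(nodes, min(fanout, len(nodes))):
                transmissions += 1
                if target not in received:
                    received.add(target)
                    next_frontier.append(target)
        frontier = next_frontier
    return received, transmissions
```

With 50 nodes and a fanout of 3, the transmission count comfortably exceeds the number of nodes reached; this redundancy is why DONet gossips only small membership messages, not video data.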
Data-centric Approach
Adopted for the following reasons:
Migrating to the application layer allows greater flexibility.
No prescribed roles for the nodes: the availability of data guides flow directions, unrestricted by the overlay structure.
Suitable for overlays with highly dynamic nodes.
DONet targets live streaming with semi-synchronized nodes.
Design issues:
How are partnerships formed?
How is data availability information encoded and exchanged?
How is video data supplied and retrieved among partners?
Design and Optimization of DONet
Design and Optimization of DONet (contd.)
Terms and definitions:
Membership manager: maintains a partial view of the other overlay nodes.
Partnership manager: establishes and maintains partnerships with the partner nodes.
Scheduler: schedules the transmission of video data.
Source (origin) node: always a supplier; other nodes can be a receiver, a supplier, or both, depending on segment availability information.
Node Join and Membership Management
Each node maintains a membership cache (mCache) containing a partial list of identifiers for the active nodes.
In the join algorithm, a newly joined node contacts the origin node, which redirects the request for the partner list to a deputy node.
The Scalable Gossip Membership protocol (SCAM) distributes membership messages periodically.
Two other events also trigger an mCache update:
forwarding a membership message through gossip
serving as a deputy, which includes the entry in the candidate list
Partnership in DONet
Buffer Map Representation and Exchange
A video stream is divided into segments of equal length.
A Buffer Map (BM) represents the availability of segments in the buffer of a node.
Each node exchanges its BM with its partners before scheduling segment transfers.
The playback progresses of the nodes are semi-synchronized: a sliding window of 120 one-second video segments.
The BM contains 120 bits, one for each segment, with value 1 indicating the segment is available and 0 otherwise.
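The 120-bit Buffer Map exchange can be illustrated with a small sketch; the helper names and segment numbering are assumptions for illustration, and DONet's wire format is not reproduced here.

```python
WINDOW = 120  # sliding window: 120 segments of 1-second video

def make_bm(buffered, window_start):
    """Encode which of the segments [window_start, window_start + WINDOW)
    are present in `buffered` as a 120-bit availability map."""
    return [1 if (window_start + i) in buffered else 0 for i in range(WINDOW)]

def wanted_segments(my_bm, partner_bm, window_start):
    """Segments a partner holds that we lack: the candidates to request
    when scheduling segment transfers."""
    return [window_start + i for i in range(WINDOW)
            if partner_bm[i] and not my_bm[i]]
```

A node would compute `wanted_segments` against each partner's BM and hand the union to the scheduler described next.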
Scheduling Algorithm
For a homogeneous and static network, a simple round-robin scheduler works; a dynamic network needs a more intelligent scheduler!
Two constraints:
the playback deadline for each segment
heterogeneous streaming bandwidth from the partners
A variation of the parallel machine scheduling problem, known to be NP-hard!
Heuristic: segments are ordered by their number of potential suppliers, and the supplier with the highest bandwidth and enough available time is selected.
Execution time of the implementation: 15 ms per execution.
Adopts the TCP-Friendly Rate Control (TFRC) protocol; the origin node advertises its BM if needed.
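A minimal sketch of this heuristic, assuming per-partner bandwidths and per-segment deadlines are known. The time-accounting model and all names are simplifications invented for illustration, not DONet's implementation.

```python
def schedule(segments, availability, bandwidth, deadline, seg_time=1.0):
    """Heuristic scheduler: segments with the fewest potential suppliers are
    assigned first; among the suppliers that can still deliver before the
    playback deadline, the one with the highest bandwidth is selected.
    `seg_time` is the nominal transfer cost of one segment at unit bandwidth."""
    busy_until = {p: 0.0 for p in bandwidth}   # when each partner is free
    suppliers = {s: [p for p, have in availability.items() if s in have]
                 for s in segments}
    plan = {}
    for seg in sorted(segments, key=lambda s: len(suppliers[s])):
        feasible = [p for p in suppliers[seg]
                    if busy_until[p] + seg_time / bandwidth[p] <= deadline[seg]]
        if not feasible:
            continue  # this segment would miss its deadline
        best = max(feasible, key=lambda p: bandwidth[p])
        busy_until[best] += seg_time / bandwidth[best]
        plan[seg] = best
    return plan
```

The rarest-first ordering mirrors the paper's intuition: segments with a single potential supplier must be claimed from that supplier before its capacity is consumed by others.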
Failure Recovery and Partnership Refinement
A departure can be detected after an idle period of TFRC or BM exchange.
Affected segments are rescheduled using the remaining BM information; the probability of concurrent departures is small.
Both graceful departures and node failures are handled to enhance resilience.
Each node periodically establishes new partnerships, which helps maintain a stable number of partners and lets nodes explore partners of better quality.
Each node i calculates a score for partner j: max{S_i,j, S_j,i}, the maximum of the average segment exchange rates in either direction.
The partner with the lowest score is rejected.
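The partner-scoring rule can be written down directly; here `sent` and `recv` stand for the average per-partner segment rates in each direction, and the names are illustrative.

```python
def worst_partner(sent, recv):
    """For node i, sent[j] / recv[j] are the average numbers of segments
    exchanged with partner j per unit time in each direction.  The score of
    partner j is max(S_ij, S_ji); the lowest-scoring partner is the one to
    reject when a better replacement is explored."""
    scores = {j: max(sent[j], recv[j]) for j in sent}
    return min(scores, key=scores.get)
```

Taking the maximum of the two directions keeps a partner that is useful in either role, supplier or receiver.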
PlanetLab-based Performance Evaluation
A collection of machines distributed over the globe: 435 machines hosted by 203 sites, spanning over 25 countries.
Comparison with Tree-based Overlay
Comparison with Tree-based Overlay (contd.)
Conclusion
Two interesting facts from the results:
The current Internet has enough available bandwidth to support TV-quality streaming (≥ 450 Kbps).
The larger the data-driven overlay is, the better the streaming quality it delivers.
Drawbacks:
Copyright issues with content!
Nodes behind a NAT gateway are often restricted to serve as receivers only; 30% of users are behind NAT.
With TCP connections, 95% of the nodes can become relaying nodes.
Supports VBR encoding only.
ZigZag – Introduction
What is ZIGZAG?
A peer-to-peer protocol for single-source media streaming to multiple receivers.
Uses a tree structure.
What problem is it solving?
The protocol can maintain the stream under the effects of network dynamics and unpredictable client behavior, i.e., on a failure it achieves a quick and graceful recovery.
Shortens the delay from source to receivers.
Limits the required overhead on the receivers.
The basic tree structure is fairly common, but ZIGZAG connects the peers differently.
ZIGZAG
ZigZag – Design Objectives
A peer-to-peer technique for single-source media streaming.
The end-to-end delay from source to client should be low.
The node degree should be small.
Adapts to receivers joining and leaving freely.
Minimizes the amount of control overhead.
ZigZag Protocol
Administrative organization: represents the logical relationships among peers.
Multicast tree: specifies which peer data is received from; built based on the C-rules, which help limit the degree (number of outbound links) of a peer.
Control protocol: specifies the exchange of state information, the policies for adjusting the tree, and the maintenance of the tree's robustness.
Administrative Organization
A multi-layer hierarchy of clusters: peers are partitioned into clusters of size [k, 3k], and the roles of "head" and "associate head" are assigned to certain peers.
Properties (H = number of layers):
H is bounded by [log_{3k} N, log_k N + 1].
The maximum cluster size of 3k prevents a cluster from becoming undersized when a client leaves after a split.
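The layer bound follows from the cluster-size constraint and can be checked numerically; this is a sketch of the stated bounds, not part of the protocol.

```python
import math

def layer_bounds(n_peers, k):
    """Bounds on the number of layers H of the administrative hierarchy
    when every cluster holds between k and 3k peers:
    log_{3k} N <= H <= log_k N + 1."""
    return math.log(n_peers, 3 * k), math.log(n_peers, k) + 1
```

For example, with 2,000 peers and k = 5 the hierarchy stays only a handful of layers deep, which is what keeps paths short.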
Administrative Organization (contd.)
Organized into a tree of clusters, each cluster having at least 4 nodes (except the top).
One of those nodes is the head of the cluster.
The head node of a cluster becomes a member of the parent cluster.
Relationship among clusters and peers
Terms
Subordinate: the non-head peers of a cluster headed by a peer X are called "subordinates" of X.
Foreign head: a non-head (or server) clustermate of a peer X at layer j > 0 is called a "foreign head" of the layer-(j-1) subordinates of X.
Foreign subordinate: the layer-(j-1) subordinates of X are called "foreign subordinates" of any layer-j clustermate of X.
Foreign cluster: the layer-(j-1) cluster of X is called a "foreign cluster" of any layer-j clustermate of X.
Multicast tree
Built based on the administrative organization; the C-rules specify the actual data flow from the source to any peer:
A peer, when not at its highest layer, has neither an outgoing nor an incoming link.
Non-head members of a cluster must receive the content directly from the cluster's associate head; in other words, this associate head links to every other non-head member.
The associate head of a cluster, except for the server, must get the content directly from a foreign head of the cluster.
Some nodes stream data to more than one peer.
Assumption: the uplink capacity of a peer is enough for streaming content to multiple peers.
Multicast tree
Multicast tree – Properties
The workload is shared among clients.
The worst-case node degree is 6k-3.
The end-to-end delay is small: the maximum height of the tree is 2 log_k N + 1.
The use of associate heads for delivering media keeps the number of outbound links low, so bottlenecks are less likely to appear at higher levels.
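These two properties can be captured as a small calculator; it is a sketch that simply evaluates the bounds stated above.

```python
import math

def tree_bounds(n_peers, k):
    """Worst-case node degree (6k - 3) and maximum multicast-tree height
    (2 log_k N + 1), per the properties above."""
    return {"max_degree": 6 * k - 3,
            "max_height": 2 * math.log(n_peers, k) + 1}
```

Both bounds grow slowly: degree is independent of N, and height only logarithmically in N.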
Control protocol
Goals:
Minimize the number of peers that need to be contacted.
Exchange information only with the parent, children, and clustermates.
Exchange as few states as possible.
Each node in a cluster periodically communicates with its clustermates, children, and parent on the tree:
which nodes it is communicating with
which clusters have room for another node
which nodes have paths to the lowest level of the tree
The control overhead for an average member is a constant. A worst-case node has to communicate with more peers, which is acceptable since the information exchanged consists only of soft-state refreshes.
Client Join/Departure
Basic principle: maintain the C-rules so that the nice properties of degree and end-to-end delay are preserved.
Direct solution: reconstruct the administrative organization and multicast tree; costly in terms of state-information exchange.
The proposed join/departure algorithms limit the number of nodes contacted during a join to O(k log_k N) and the number of peers that need to reconnect to 6k-2.
Client Join
When a peer sends a join request to the server, the request is pushed down the multicast tree until a leaf node with room in its cluster is found.
Clusters are periodically checked; if one grows too big, it is split.
Procedure (forwarding a join request for peer P at node X):
If X is a layer-0 associate head: add P to the only cluster of X and make P a new child of X.
Else if Addable(X): select a child Y such that Addable(Y) and D(Y)+d(Y,P) is minimum, and forward the join request to Y.
Else: select a child Y such that Reachable(Y) and D(Y)+d(Y,P) is minimum, and forward the join request to Y.
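The join procedure above can be sketched as a loop that pushes the request down the tree. The Node fields, the distance function d, and the example values are assumptions standing in for Addable(X), Reachable(X), D(Y), and d(Y,P).

```python
class Node:
    """Minimal stand-in for a ZIGZAG peer.  The boolean flags model
    Addable(X) (a cluster in X's subtree has room) and Reachable(X)
    (a path to layer 0 exists); depth_delay models D(X)."""
    def __init__(self, name, assoc_head=False, addable=True,
                 reachable=True, depth_delay=0.0):
        self.name = name
        self.assoc_head = assoc_head      # layer-0 associate head?
        self.addable = addable
        self.reachable = reachable
        self.depth_delay = depth_delay
        self.children = []
        self.cluster = []

def join(x, p, d):
    """Push the join request for peer `p` down from node `x`, preferring
    addable children and minimizing D(Y) + d(Y, P), until a layer-0
    associate head absorbs the newcomer."""
    while not x.assoc_head:
        candidates = [y for y in x.children if y.addable] or \
                     [y for y in x.children if y.reachable]
        x = min(candidates, key=lambda y: y.depth_delay + d(y, p))
    x.cluster.append(p)    # P joins X's only cluster
    x.children.append(p)   # and becomes a new child of X
    return x
```

Because the request descends one layer per step, the number of contacted nodes stays within the O(k log_k N) bound stated earlier.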
Client Departure
When a peer departs from the tree, its parent, subordinates, and children are notified, and a non-head clustermate is selected and asked to take over forwarding the data. If many departures cause a cluster to become undersized, two clusters at the same layer are merged.
Tasks on the departure of a client X:
The parent removes its link to X.
The children of X need a new parent.
Each layer-i cluster X belongs to needs a new head.
A layer-j cluster may require a new associate head.
Client Departure (contd.)
If X's highest layer is 0:
If X is not the associate head, no extra work is needed.
If X is the associate head, the head of the cluster chooses another member to take up the responsibilities.
If X's highest layer is j (non-zero), X is a head in layers [0, j-1]:
A non-head peer X' at layer 0 is randomly chosen to take over the head responsibility.
Each child Y of X is given a new parent chosen to have minimum degree.
Performance Optimization
The performance optimization procedure distributes the service load fairly among the peers without violating the multicast tree rules, by periodically checking the administrative organization and multicast tree.
A node with many children may switch parenthood of some of the children to a clustermate (balancing based on the number of nodes), or switch based on capacity or bandwidth (balancing based on bandwidth).
Refinement handling:
Degree-based switch: balances degree by transferring service load.
Capacity-based switch: balances peer busyness among all non-head peers of a cluster.
Performance Analysis
Various scenarios are examined on a network simulator running 3,240 nodes and 2,000 clients, with 5-15 nodes per cluster.
Scenario 1: No failure. Overheads such as adding new nodes were examined; on average 2.4% of the population was contacted. If a node has too many children, a new client must ask most of them. Performance improved when a cluster split occurred.
Scenario 2: Failures possible. Nodes were failed sequentially, 200 at a time (out of 2,000); mostly fewer than 20 reconnections (2% of the population) resulted.
Scenario 3: ZIGZAG vs. NICE.
Scenario 1: No failure – join and split overhead
(a) The server has too many children and a new client has to ask all of them, resulting in many contacts. (b) The split procedure takes place after detecting that a cluster at the second-highest layer is oversized, which results in very few children.
Scenario 2: Failure possible
Recovery overhead is always bounded by a constant, regardless of the client population size.
Merge overhead is always small, regardless of the client population size.
Scenario 3: ZIGZAG vs. NICE
Comparison with other P2P systems
SpreadIt: single distribution tree; vulnerable to disruptions; long blocking time.
CoopNet: multiple distribution trees; heavy control overhead on the source.
Narada: multi-sender, multi-receiver streaming; works only on small P2P networks.
Advantages
Oddities in the overhead of streaming multimedia over the Internet are not felt as much by the end user.
Efficient methods for dropping and adding nodes.
Uses two trees: one to maintain the administration, one for the data flow.
Short end-to-end delay, thanks to the tree structure.
Low administration overhead: nodes "talk" to each other on a regular basis.
Disadvantages
A lot of overhead is possible per node, especially for head nodes.
Needs to optimize at random intervals to be effective, which may cause large overhead.
Conclusion
A P2P media streaming scheme in which both the maximum degree and the end-to-end delay are small.
A client join/leave algorithm is proposed, aimed at reducing the control overhead.
Simulation results suggest that 5,000 peers can be supported at a maximum degree of 15.
AnySee
An Internet-based peer-to-peer live streaming system, released in the summer of 2004 on CERNET in China.
Over 60,000 users enjoy live videos, including TV programs, movies, and academic conferences.
What is it trying to achieve?
Improving global resource utilization.
Distributing traffic evenly over all physical links.
Assigning resources based on locality and delay.
Guaranteeing streaming service quality by using the nearest peers.
How is it different from the previously discussed systems? It adopts inter-overlay optimization.
Intra-Overlay Optimization
Resources cannot join multiple overlays.
S1 -> A -> D = 3 + 5 = 8: not globally optimal!
S1 -> S2 -> D = 2 + 2 = 4: globally optimal!
Inter-Overlay Optimization
Resources can join multiple overlays.
Path S1->A->D (cost 8) is replaced by S1->S2->D (cost 4).
Path S2->G->H (cost 9) is replaced by S2->G->C->G (cost 7).
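The gain from inter-overlay optimization can be reproduced with a shortest-path search restricted to one overlay versus all peers, using the edge delays from the S1/S2 example above; the graph encoding is an assumption for illustration.

```python
import heapq

def shortest_path(graph, src, dst, allowed):
    """Dijkstra over `graph` (node -> {neighbor: delay}) restricted to
    nodes in `allowed`; returns (cost, path)."""
    queue = [(0, src, [src])]
    done = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return cost, path
        if node in done:
            continue
        done.add(node)
        for nxt, delay in graph.get(node, {}).items():
            if nxt in allowed and nxt not in done:
                heapq.heappush(queue, (cost + delay, nxt, path + [nxt]))
    return float("inf"), []

# Edge delays from the example: S1-A = 3, A-D = 5, S1-S2 = 2, S2-D = 2.
EXAMPLE = {"S1": {"A": 3, "S2": 2}, "A": {"D": 5}, "S2": {"D": 2}}
```

Restricting the search to overlay 1 ({S1, A, D}) yields the cost-8 path; allowing the peer S2 from another overlay yields the cost-4 path, which is exactly the replacement the slide describes.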
Inter-Overlay Optimization – Key Challenges
Efficient neighbor discovery
Resource assignment
Overlay construction
Optimization
AnySee – System Overview
AnySee – System Overview: Mesh-based Overlay Manager
All peers belonging to different streaming overlays join one substrate, the mesh-based overlay.
Every peer has a unique identifier.
The key goal is to match this overlay to the underlying physical topology, done using a location-aware topology-matching technique.
Uses ping and pong messages and flooding techniques to form the mesh.
AnySee – System Overview: Single Overlay Manager
Intra-optimization schemes similar to Narada, DONet, etc.
Deals with join/leave operations of peers.
AnySee – System Overview: Inter-Overlay Optimization Manager
Each peer maintains an active streaming path set and a backup streaming path set.
The inter-overlay optimization algorithm is called when the number of backup streaming paths falls below a threshold.
A path from the backup set is promoted when an active streaming path is cut off due to poor streaming QoS.
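A minimal sketch of the two path sets and their triggers; the class name, threshold value, and promotion policy are illustrative assumptions, not AnySee's actual data structures.

```python
class PathSetManager:
    """Illustrative model of a peer's two streaming path sets."""
    def __init__(self, backup_threshold=2):
        self.active = []   # active streaming paths
        self.backup = []   # backup streaming paths
        self.backup_threshold = backup_threshold

    def needs_optimization(self):
        # the inter-overlay optimization algorithm is called when the
        # number of backup streaming paths drops below the threshold
        return len(self.backup) < self.backup_threshold

    def on_poor_qos(self, path):
        # an active path cut off for poor QoS is replaced from the backups
        if path in self.active:
            self.active.remove(path)
            if self.backup:
                self.active.append(self.backup.pop(0))
```

Keeping the backup set topped up ahead of time is what lets the switch happen without interrupting playback.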
AnySee – System Overview: Inter-Overlay Optimization Manager (contd.)
Two main tasks: backup streaming path set management and active streaming path set management.
Backup streaming path set management is done using the Reverse Tracing algorithm.
AnySee – System Overview: Key Node Manager and Buffer Manager
Key Node Manager: allocates and manages resources; enforces the admission control policy.
Buffer Manager: responsible for receiving valid media data from multiple providers in the active streaming path set and keeping media playback continuous.
A small buffer space suffices, because inter-overlay optimization yields a shorter startup delay.
System Evaluation
Two topologies are considered:
Physical topology: a real topology with Internet characteristics, generated with BRITE.
Logical topology: an overlay P2P topology built on top of the physical topology, obtained with a Gnutella-based crawler.
System Evaluation – Simulation Parameters
S: number of streaming overlays
M: number of neighbors
N: size of one overlay
r: streaming playback rate
C: number of total bandwidth connections
System Evaluation – Performance Metrics
Resource utilization: the ratio of used connections to all connections.
Continuity index: the number of segments that arrive before the playback deadline over the total number of segments; represents playback continuity.
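The continuity index can be computed directly from arrival times and deadlines; the dictionary encoding below is an assumption for illustration.

```python
def continuity_index(arrival, deadline):
    """arrival: {segment: arrival time, or None if the segment was lost};
    deadline: {segment: playback deadline}.  Returns the fraction of
    segments that arrived before their playback deadline."""
    on_time = sum(1 for seg, dl in deadline.items()
                  if arrival.get(seg) is not None and arrival[seg] <= dl)
    return on_time / len(deadline)
```

Segments that arrive late count the same as segments that never arrive, since neither can be played on time.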
Continuity index vs. streaming rates when N=400, S=12, and the initial buffer size is 40 seconds.
Resource utilization: overlay size vs. the number of streaming overlays when M=12 and r=300 Kbps.
Continuity index under dynamic environments when M=5, N=400, r=300 Kbps, and the initial buffer size is 40 seconds.
Resource utilization under dynamic environments when M=5, N=400, and r=300 Kbps.
Conclusion
Efficient and scalable live-streaming overlay construction has become a hot topic.
Important metrics to consider in any live streaming application: startup delay, source-to-end delay, and playback continuity.
Conclusion (contd.)
Previously studied intra-overlay optimization techniques have drawbacks: low resource utilization, high startup and source-to-end delay, and inefficient resource assignment in global P2P networks.
Conclusion (contd.)
AnySee uses the inter-overlay optimization technique.
AnySee peers are able to construct efficient paths using peers in different overlays, rather than selecting better paths within the same overlay.
Experimental results show that AnySee outperforms existing intra-overlay live streaming schemes, such as CoolStreaming.
CDN vs. P2P
CDN:
Excellent quality for end-users when the workload is within provisioning limits
Constrained by the specifics of the existing servers and bandwidth
Not highly scalable
Higher operation costs
P2P:
High scalability
Low maintenance cost
Low stream quality with undesirable disruptions
Unfairness in the face of heterogeneous peer resources
LiveSky
A hybrid CDN-P2P system for live video streaming.
Incorporates the best of both technologies; each mutually offsets the other's deficiencies.
LiveSky – System Architecture
References
Duc A. Tran, Kien A. Hua, and Tai T. Do, "ZIGZAG: An Efficient Peer-to-Peer Scheme for Media Streaming," INFOCOM 2003.
Xinyan Zhang et al., "CoolStreaming/DONet: A Data-Driven Overlay Network for Peer-to-Peer Live Media Streaming," INFOCOM 2005.
Xiaofei Liao, Hai Jin, Yunhao Liu, Lionel M. Ni, and Dafu Deng, "AnySee: Peer-to-Peer Live Streaming," INFOCOM 2006.
H. Yin et al., "Design and Deployment of a Hybrid CDN-P2P System for Live Video Streaming: Experiences with LiveSky," ACM Multimedia 2009.
Duc A. Tran, Kien A. Hua, and Tai T. Do, "A Peer-to-Peer Architecture for Media Streaming."
Thank You