The State of the Art of P2P Video Streaming


Slide courtesy: Dr. Sumi Helal & Dr. Choonhwa Lee at University of Florida, USA; Prof. Darshan Purandare at University of Central Florida, USA; Dr. Meng Zhang, Dyyno Inc., USA; Jan David Mol, Delft University of Technology, Netherlands

Outline
- Introduction
- Video Streaming Approaches
- IP Multicast
- Content Distribution Network
- Application Layer Multicast
- Mesh-Pull P2P Streaming
- CoolStreaming
- PPLive
- Mesh-Push-Pull Mechanism
- Mobile P2P Streaming

P2P Protocols
- 1999: Napster, End System Multicast (ESM)
- 2000: Gnutella, eDonkey
- 2001: Kazaa
- 2002: eMule, BitTorrent
- 2003: Skype
- 2004: CoolStreaming, GridMedia, PPLive
- 2005~: TVKoo, TVAnts, PPStream, SopCast, ...

P2P Is More Than File Download
- Next: VoD, IPTV, Gaming

Internet Traffic

Internet Video Streaming
- Large-scale video broadcast over the Internet
- Real-time video streaming
- Large numbers of viewers
  - AOL Live 8 broadcast peaked at 175,000 viewers (July 2005)
  - CBS NCAA broadcast peaked at 268,000 viewers (March 2006)
  - NFL Super Bowl 2007 had 93 million viewers in the U.S. (Nielsen Media Research)
- Very high data rate
  - TV-quality video encoded with MPEG-4 would require 1.5 Tbps aggregate capacity for 100 million viewers

Video Streaming Approaches
- IP Multicast
- Content Distribution Networks
  - Expensive: Akamai, Limelight, etc.
- Application Layer Multicast
  - Synonyms: peer-to-peer multicast, overlay multicast
  - Alternative to IP Multicast
  - Scalable, no setup cost
  - Taxonomy
    - Overlay structure: tree / mesh
    - Fetching mechanism: push / pull

IP Multicast
- Network layer solution: Internet routers are responsible for multicasting
  - Group membership: remember group members for each multicast session
  - Multicast routing: route data to members
- Efficient bandwidth usage: the network topology is best known at the network layer
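At the socket level, joining a multicast group is a single membership request that the local router learns about via IGMP. A minimal receiver sketch in Python (the group address 239.1.1.1 and port 5004 are arbitrary example values, not from the slides):

```python
import socket
import struct

GROUP = "239.1.1.1"   # example administratively scoped multicast group
PORT = 5004           # example port

# Create a UDP socket and bind to the multicast port.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# Ask the network layer to add this host to the group; via IGMP the
# routers now know a member exists on this subnet and will forward
# the group's traffic here.
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

while True:
    data, sender = sock.recvfrom(2048)
    print(f"received {len(data)} bytes of the stream from {sender}")
```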

IP Multicast: drawbacks
- Per-group state in routers
  - High complexity, especially in core routers
  - Scalability concern
  - Violates the end-to-end design principle (routers should stay stateless)
- Slow deployment
  - Requires changes at the infrastructure level
  - IP multicast is often disabled in routers
- Difficult to support higher-layer functionality
  - E.g., error control, flow control, and congestion control

Content Distribution Networks (CDNs)
- CDN nodes are deployed at strategic locations
- These nodes cooperate with each other to satisfy an end user's request
- A user request is forwarded to the nearest CDN node, which has a cached copy
- QoS improves, as the end user receives the best possible connection
- Examples: Akamai, Limelight, etc.
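The key CDN mechanism implied above is request routing: send each user to a nearby replica. A minimal sketch of one simple policy, choosing the edge node with the lowest measured RTT; the node names and the RTT probe are illustrative assumptions, not any real CDN's API:

```python
import random

# Hypothetical edge nodes; a real CDN would resolve these via DNS redirection or anycast.
EDGE_NODES = ["edge-us-east.example.net",
              "edge-eu-west.example.net",
              "edge-ap-south.example.net"]

def probe_rtt(node: str) -> float:
    """Stand-in for a real RTT probe (e.g., a tiny HTTP request to each node)."""
    return random.uniform(0.01, 0.20)   # simulated round-trip time in seconds

def pick_edge_node(nodes):
    # Redirect the client to the replica that answered fastest.
    return min(nodes, key=probe_rtt)

print("serving request from", pick_edge_node(EDGE_NODES))
```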

Application Layer Multicast
- Application layer solution: multicast functionality in end hosts
- End systems participate in multicast via an overlay structure
  - The overlay consists of application-layer links
  - An application-layer link is a logical link consisting of one or more links in the underlying network
- Initial approaches adopt a tree topology (Tree-Push)
  - Tree construction and maintenance
  - Disruption in the event of churn and node failures
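Tree-push delivery can be pictured as each end host re-sending a packet to its overlay children over ordinary unicast links. A minimal sketch under that assumption (the OverlayNode structure is illustrative, not from the slides); it also shows why an interior node's failure cuts off its entire sub-tree:

```python
from dataclasses import dataclass, field

@dataclass
class OverlayNode:
    name: str
    children: list = field(default_factory=list)   # application-layer links

    def push(self, packet: bytes) -> None:
        # Deliver locally (e.g., to the player), then push down the tree.
        print(f"{self.name}: playing packet of {len(packet)} bytes")
        for child in self.children:
            child.push(packet)   # in practice: a unicast send to the child

# Source -> relay -> leaves: each interior node re-uploads the stream,
# so if "relay" fails, both of its leaves lose the stream until the tree is repaired.
leaves = [OverlayNode("leaf1"), OverlayNode("leaf2")]
relay = OverlayNode("relay", children=leaves)
root = OverlayNode("source", children=[relay, OverlayNode("leaf3")])
root.push(b"\x00" * 1316)
```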

Application Layer Multicast: advantages
- Easy to deploy
  - No change to the network infrastructure
- Programmable end hosts
  - Overlay construction algorithms at end hosts can be applied easily
  - Application-specific customizations

Mesh-Pull P2P Streaming
- Data-driven/swarming protocol, as in BitTorrent
  - Media content is broken into small pieces and disseminated in a swarm
  - Neighbor nodes use a gossip protocol to exchange their buffer maps (see the buffer-map sketch below)
  - Nodes trade the pieces they are missing
- CoolStreaming
- PPLive, SopCast, Feidian, and TVAnts are derivatives of CoolStreaming
  - Proprietary; their working philosophy is not published
  - Reverse-engineering and measurement studies have been released

Why Is P2P Streaming Hard?
- Real-time constraints: pieces are needed in sequential order and on time
- Bandwidth constraints: download speed must be >= the video bit rate
- High user expectations: users are spoiled with low start-up times and little or no loss
- High churn rate: a robust network topology is needed to minimize the impact of churn
- Fairness is difficult to achieve: high-bandwidth peers have no incentive to contribute
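The mesh-pull exchange above revolves around buffer maps: each peer advertises which pieces of its sliding playback window it holds, and a neighbor pulls the pieces it is missing. A minimal sketch assuming a one-bit-per-piece map over a window of piece indices (not any particular system's wire format):

```python
def make_buffer_map(window_start: int, window_size: int, have: set) -> list:
    """One bit per piece in the current sliding playback window."""
    return [1 if (window_start + i) in have else 0 for i in range(window_size)]

def pieces_to_pull(window_start: int, my_map: list, neighbor_map: list) -> list:
    """Piece indices the neighbor has and we do not, in playback order."""
    return [window_start + i
            for i, (mine, theirs) in enumerate(zip(my_map, neighbor_map))
            if theirs and not mine]

# Example: the window covers pieces 100..107.
mine     = make_buffer_map(100, 8, have={100, 101, 104})
neighbor = make_buffer_map(100, 8, have={100, 101, 102, 103, 104, 105})
print(pieces_to_pull(100, mine, neighbor))   # -> [102, 103, 105]
```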

CoolStreaming
- The video is chopped into chunks and disseminated in a swarm
- A node, upon arrival, obtains a list of 40 peers from the server
- The node contacts these peers to join the swarm
- Each node typically has 4-8 neighbors and periodically shares its buffer map with them
- Nodes exchange missing chunks with their neighbors
- Deployed on the Internet and highly successful

CoolStreaming components
- Membership Manager
  - Maintains a list of members in the group
  - Periodically generates membership messages
  - Distributes them using the Scalable Gossip Membership protocol (SCAM)
- Partnership Manager
  - Partners are members that have expected data segments
  - Exchanges Buffer Maps (BM) with partners
  - A Buffer Map contains availability information about segments
- Scheduler
  - Determines which segment should be obtained from which partner (see the sketch below)
  - Downloads segments from partners and uploads the segments they want

Diagram of the CoolStreaming system (figure)
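The published CoolStreaming scheduler prefers segments with fewer potential suppliers and accounts for partner bandwidth and deadlines; the sketch below keeps only the "fewest suppliers first, then spread load across partners" idea, so it is a simplification rather than the paper's algorithm:

```python
from collections import defaultdict

def schedule(missing_segments, partner_maps, max_per_partner=4):
    """
    missing_segments: segment ids this node still needs (playback order)
    partner_maps: {partner_id: set of segment ids that partner holds}
    Returns {partner_id: [segments to request from it]}.
    """
    # Who can supply each missing segment?
    suppliers = {s: [p for p, held in partner_maps.items() if s in held]
                 for s in missing_segments}
    plan = defaultdict(list)
    # Rarest (fewest suppliers) first, so scarce segments are not starved.
    for seg in sorted(missing_segments, key=lambda s: len(suppliers[s])):
        # Among capable partners, pick the least-loaded one so requests spread out.
        candidates = [p for p in suppliers[seg] if len(plan[p]) < max_per_partner]
        if candidates:
            plan[min(candidates, key=lambda p: len(plan[p]))].append(seg)
    return dict(plan)

maps = {"A": {10, 11, 12}, "B": {12, 13}, "C": {11, 13, 14}}
print(schedule([10, 11, 12, 13, 14], maps))
```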

PPLive
- Data-driven P2P streaming
  - Gossip-based protocols for peer management and channel discovery (see the gossip sketch below)
- Very popular P2P IPTV application
  - Over 100,000 simultaneous viewers and 40,000 viewers daily
  - Over 200 channels
  - Windows Media Video and Real Video formats
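The "gossip-based" bullet can be illustrated with a generic membership gossip round: two peers exchange random samples of the peers they know, so knowledge of the swarm spreads without a central directory. PPLive's actual protocol is proprietary; the sketch below is a generic stand-in with illustrative sample sizes and addresses:

```python
import random

def gossip_round(my_peers: set, my_addr: str,
                 partner_peers: set, partner_addr: str,
                 sample_size: int = 8):
    """One symmetric gossip exchange between two peers' membership lists."""
    # Each side sends a random sample of the peers it knows, plus itself.
    my_sample = set(random.sample(sorted(my_peers), min(sample_size, len(my_peers))))
    partner_sample = set(random.sample(sorted(partner_peers), min(sample_size, len(partner_peers))))
    # Each side merges what it received; neither ever needs the full member list.
    my_peers |= partner_sample | {partner_addr}
    partner_peers |= my_sample | {my_addr}
    return my_peers, partner_peers

a = {"10.0.0.2:4000", "10.0.0.3:4000"}
b = {"10.0.0.9:4000"}
a, b = gossip_round(a, "10.0.0.1:4000", b, "10.0.0.8:4000")
print(sorted(a), sorted(b))
```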

Tree-Push vs. Mesh-Pull

Tree-Push based
- Content flows from the root to children along the tree
- Node failures affect a complete sub-tree
- Long recovery time

Mesh-Pull based
- Nodes exchange data availability information with neighbor nodes
- Resilient to node failure
- High control overhead: meta-data exchange consumes bandwidth
- Longer delay for downloading each chunk (request-response)

Hybrid Pull-Push Protocol
- A pull-based protocol trades control overhead against delay
- To minimize delay:
  - A node notifies its neighbors of each packet arrival immediately
  - Neighbors also request the packet immediately, resulting in large control overhead
- To decrease the overhead:
  - A node waits until a group of packets has arrived before informing its neighbors
  - Neighbors can also request a batch of packets at a time, resulting in considerable delay
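The overhead/delay trade-off comes down to how often a node announces newly received packets. A small sketch of the batched variant, where one control message covers a whole batch; setting batch = 1 recovers the low-delay, high-overhead mode (the Announcer class and callback are illustrative, not from any specific protocol):

```python
class Announcer:
    """Collects newly received packet sequence numbers and notifies
    neighbors once per batch, trading extra delay for fewer control messages."""

    def __init__(self, batch: int, notify):
        self.batch = batch        # batch = 1 reproduces the immediate-notification mode
        self.notify = notify      # callback standing in for sending a control message
        self.pending = []

    def on_packet(self, seq: int) -> None:
        self.pending.append(seq)
        if len(self.pending) >= self.batch:
            self.notify(self.pending)   # one announcement covers the whole batch
            self.pending = []

ann = Announcer(batch=4, notify=lambda seqs: print("announce", seqs))
for s in range(10):
    ann.on_packet(s)   # prints two announcements; packets 8 and 9 wait for the next batch
```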


Pull-Push Streaming Mechanism
- Pull mechanism is used at startup
- Successful pulls trigger packet pushes by the neighbors
- Every node subscribes to pushed packets from its neighbors
- Packets lost during the push interval are recovered by the pull mechanism

Pull-Push Streaming Mechanism (cont.)
- n sub-streams: a packet with sequence number s belongs to sub-stream s % n (see the sketch below)
- Loop avoidance
- For n sub-streams, there are n packets in a packet group
- A packet party is composed of multiple packet groups
- Push switching is determined by the pull results of the first packet group in a packet party
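The s % n rule partitions the stream into n sub-streams, and a node subscribes to each sub-stream at a chosen neighbor so pushed packets arrive without per-packet requests. A minimal sketch of that mapping (the subscription table is an illustrative assumption):

```python
N_SUBSTREAMS = 4

def substream_of(seq: int, n: int = N_SUBSTREAMS) -> int:
    """A packet with sequence number s belongs to sub-stream s % n."""
    return seq % n

# Hypothetical subscription table: which neighbor pushes which sub-stream.
subscriptions = {0: "peerA", 1: "peerB", 2: "peerA", 3: "peerC"}

def expected_pusher(seq: int) -> str:
    return subscriptions[substream_of(seq)]

# Packets 100..107 interleave across the subscribed neighbors; any packet
# that misses its deadline is recovered with an explicit pull instead.
for seq in range(100, 108):
    print(seq, "expected from", expected_pusher(seq))
```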

Mobile P2P Streaming
- Mobile video streaming
  - Rapid growth of mobile P2P communication
  - Video streaming expected to rise to as high as 91% of Internet traffic in 2014
- Mobile environment
  - Increase of mobile and wireless peers
  - Unsteady network connections
  - Battery power
  - Various video codings for mobile devices
  - Frequent node churn
  - Security

Mobile P2P Streaming (cont.)
- Mobile node issues
  - Uplink vs. downlink bandwidth
  - Battery power
  - Multiple interfaces
  - Geo-targeting
- Other mobility considerations
  - Processing power
  - Link-layer mobility
  - Mobile IP & Proxy Mobile IP
  - Tracker mobility

Pioneering Approaches
- Video proxy located at the edge of the network
  - Adaptive video transcoding considering the network conditions and constraints of mobile users
- Distributed transcoding by fixed nodes
  - Sub-streams from multiple parents are assembled
  - Resilient to peer churn

Pioneering Approaches (cont.)
- Hierarchical overlay
  - Multiple network interfaces: access link vs. sharing link
  - A peer fetches a video through cellular networks (WAN) to share it with others over local networks (LAN)
- Cooperative video streaming
  - P2P-based application-layer channel bonding in resource-constrained mobile environments
  - Similar, in spirit, to channel/link bundling at the link layer, efficiently leveraging the combined capacity of all access links

Questions?

(Annotations from the pull-push timeline figure: a node enters, adds new partners, recovers initial packets by pull, then subscribes to pushed video packets from partners at the beginning of each push time interval.)