
2012 International Conference on Future Communication Networks (ICFCN), Baghdad, Iraq, 2-5 April 2012
978-1-4673-0260-9/12/$31.00 ©2012 IEEE

Impact of Video Transcoding Profiles in Wireless Multicast Streaming

Estabraq Makkiya and Emad Hassan Al-Hemiary
Network Engineering Department
College of Information Engineering - Al Nahrain University
P.O. Box 65074, Al-Jadria, Baghdad, Iraq
[email protected]
[email protected]

Abstract—Standards developed for individual purposes can be combined to achieve better overall systems. This paper presents a comparison between video streaming profiles of different video codecs against transmission protocols. Experiments took place in a multicast environment in order to reach better reception and bandwidth utilization. Using the new generation of the IEEE 802.11 standard, namely 'n', the effects of hardware appliances and indoor wave propagation were taken into account. The results show that matching codecs to specific protocols can contribute to higher throughput with better video quality. Statistical analysis performed in SPSS17 using a multi-group ANOVA test gave enough evidence to reject the null hypothesis: we are 95% confident that there is a statistically significant difference in outputs, distances and frequencies across the different pathways, notably for the (RTP, MPEG-2) profile.

Keywords—video; streaming; transcoding profiles; wireless; multicast; codec; protocols; 802.11n.

I. INTRODUCTION

Video applications over wireless networks have been in high demand recently owing to the major upsurge in both the bandwidth of wireless channels and the computational abilities of mobile devices. In order to provide effective distribution among many users concurrently, multicast has been a useful solution since it spares network resources by sharing one data stream across multiple recipients. Yet the packet loss ratio and the diversity of wireless channels, besides the heterogeneity of the users, make video multicast over wireless networks a challenging issue [1].

Multicasting is the operation of transmitting one video signal simultaneously to multiple users. All observers receive the same signal at the same time. Using special protocols, the network is guided to make replicas of the video stream for every recipient. This copying takes place inside the network rather than at the video source: replicas are made only at the points in the network where they are needed [2].
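As a hedged illustration of how a receiver asks the network to replicate the stream toward it, the following Python sketch joins an IPv4 multicast group via IGMP; the group address and port are hypothetical examples, not values from the experiment:

```python
import socket
import struct

def join_multicast_group(group: str, port: int) -> socket.socket:
    """Open a UDP socket subscribed to an IPv4 multicast group.

    The IP_ADD_MEMBERSHIP option triggers an IGMP join, telling the
    network to replicate the stream onto this receiver's branch only.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", port))  # listen on all local interfaces
    # mreq = group address + local interface (0.0.0.0 = let the OS choose)
    mreq = struct.pack("4s4s", socket.inet_aton(group),
                       socket.inet_aton("0.0.0.0"))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    return sock

# Usage (hypothetical group/port for a video stream):
# sock = join_multicast_group("239.1.1.1", 5004)
# data, addr = sock.recvfrom(2048)
```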

In order to receive a clear, efficient and acceptable-quality stream, many factors need to be combined; such factors may be transmission protocols, coding techniques and environmental conditions. The ANOVA test is used to verify whether there is a significant difference in output, distances and frequencies.

II. RELATED WORK

Substantial work has been done on altering transcoding parameters. In [3], the authors present different parameters including frame size, color depth and Q-scale. In [4], HTTP Dynamic Streaming delivery is introduced in Flash Players, which enables live and on-demand streaming over standard HTTP infrastructures; H.264-specific settings, keyframe intervals, bitrate switching, MP4 stream packaging and encoding variants are the recommended settings.

Adopting transcoding profiles along with the multicast technique in our work aims to achieve better reception of a live stream under the varying conditions of non-regulated channel spectra.

III. VIDEO TRANSCODING AND MULTICAST

IP multicasting preserves network bandwidth by reducing the amount of redundant network traffic in one-to-many or many-to-many communications. It can provide cost-effective and high-quality stream delivery. Three mechanisms are required, namely: a group addressing mechanism, a host joining/leaving mechanism, and multicast-enabled routing protocols [5].

The use of multicast streams presents a challenge for implementing QoS. Unlike unicast, multicast comprises multiple receivers, each with a potentially different service level agreement (SLA), communicating with the same server. Moreover, the dynamic nature of multicast group membership makes it difficult to predict the network resources consumed by multicast streams.

Though IP multicast permits efficient delivery of streaming video to thousands of receivers by replicating packets throughout the network, problems appear when a node is located far away from the multicast publishing points. When interframe compression is used in streaming video, a reference frame is required; out-of-order video packets or absent reference frames may cause the video to halt. To deal with this issue, one can replicate the multicast closer to the user [5].

Two widespread IP multicast models exist. The Any Source Multicast (ASM) model supports both one-to-many and many-to-many communication, where the receiver joins any available multicast group; using network-level source discovery in ASM simplifies applications at the expense of highly complex network operation. The Source Specific Multicast (SSM) model supports only one-to-many communication; here, the receiver joins a multicast source (instead of a group). Using application-level source discovery in SSM moderates network complexity, thus lowering the cost of operation. SSM also offers resilience against denial-of-service (DoS) attacks [5]. Delivering a stream with acceptable quality to users in a bandwidth-efficient manner requires many factors [1].

A. Protocols and Transmission Factors

To provide a very high quality of experience for end users, video streaming services require high transmission rates and hence high bandwidth capacity from the underlying network. The transmission rate depends on the compression and coding technology used. For example, an MPEG-2 coded standard definition (SD) video on demand (VoD) or IPTV stream requires 3.5 Mb/s to 5 Mb/s per TV channel. For an H.264 (MPEG-4 Part 10) SD VoD or IPTV stream, the desired bandwidth is 2 Mb/s per channel. A high definition (HD) video stream using H.264 coding requires 8-12 Mb/s [6].
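The per-channel rates above translate directly into capacity planning. The following minimal Python sketch estimates how many simultaneous streams a link can carry; the rate values are the upper ends of the ranges quoted above, and the link capacity is an assumed example:

```python
# Per-stream bandwidth requirements quoted above, in Mb/s (upper ends of ranges)
RATES_MBPS = {
    "MPEG-2 SD": 5.0,   # 3.5-5 Mb/s per SD channel
    "H.264 SD": 2.0,    # 2 Mb/s per SD channel
    "H.264 HD": 12.0,   # 8-12 Mb/s per HD channel
}

def channels_per_link(link_mbps: float, codec: str) -> int:
    """Number of simultaneous streams of `codec` that fit in the link."""
    return int(link_mbps // RATES_MBPS[codec])

print(channels_per_link(100.0, "H.264 SD"))   # 50
print(channels_per_link(100.0, "H.264 HD"))   # 8
```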

The most common compression techniques for video are Moving Picture Experts Group MPEG-2 and MPEG-4 (H.264). H.264 is one of the video compression standards created by the Joint Video Team (JVT), which consists of the International Telecommunication Union (ITU) and the MPEG. The ITU has approved the name H.264 for the standard, while ISO/IEC designates it as the MPEG-4 Part 10 (AVC) standard [6].

Now, in order to deliver a stream to the recipients, several transport protocols are to be considered.

UDP (User Datagram Protocol), as is well known, offers no reliability. Yet this has noticeable advantages for video: in many cases it is desirable to simply skip a missing packet and move on to the next chunk of data, rather than freezing the video while waiting to see whether the network can deliver that packet on the next attempt. On the other hand, UDP has no built-in packet ordering, so the responsibility of putting sequence numbers inside the datagrams falls on the application. Applications have to do their own accounting to figure out whether a packet has been dropped [7].

UDP has several advantages over its counterpart, TCP. If a packet is lost, a UDP server can simply keep transmitting packets to the receiver, unlike TCP (Transmission Control Protocol), which stops everything to retransmit the dropped packet, causing jitter. Also unlike TCP, UDP has no flow control that slows the stream when congestion occurs and speeds it up when it does not; it sends at a constant pace. UDP is a simple protocol: it puts the packet in an envelope, stamps it and sends it on, and applications take care of everything else [7].
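The application-side sequence accounting described above can be sketched as follows; this is an illustrative helper (not from the paper) that scans application-level sequence numbers for gaps:

```python
def detect_missing(received_seqs):
    """Return sequence numbers missing from a batch of received datagrams.

    The application stamps each datagram with a sequence number; after
    collecting a batch, it can tell which packets UDP silently dropped.
    """
    seqs = sorted(received_seqs)
    missing = []
    for prev, cur in zip(seqs, seqs[1:]):
        missing.extend(range(prev + 1, cur))  # gap between consecutive numbers
    return missing

print(detect_missing([0, 1, 2, 4, 5, 8]))  # [3, 6, 7]
```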

RTP (Real-time Transport Protocol) is a network protocol that delivers end-to-end transport functions suitable for applications transmitting real-time data such as audio and video. An RTP session is an association among a set of participants communicating with RTP; any participant can be involved in multiple RTP sessions at the same time. In a multimedia session, each medium is conveyed in a separate RTP session with its own control packets, unless the encoding itself multiplexes multiple media into one data stream. A participant distinguishes multiple RTP sessions by receiving them on different pairs of destination transport addresses, where each pair of transport addresses consists of one network address plus a pair of ports for RTP and its control protocol [8].
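To make the packet layout concrete, here is a hedged sketch of the 12-byte RTP fixed header defined by RFC 3550, packed in Python; the sequence number, timestamp and SSRC values are arbitrary examples:

```python
import struct

def rtp_fixed_header(seq: int, timestamp: int, ssrc: int,
                     payload_type: int = 33, marker: int = 0) -> bytes:
    """Pack the 12-byte RTP fixed header (RFC 3550).

    Payload type 33 is the static assignment for MPEG-2 TS (RFC 3551).
    """
    byte0 = 2 << 6                       # version=2, padding=0, ext=0, CC=0
    byte1 = (marker << 7) | payload_type
    return struct.pack("!BBHII", byte0, byte1,
                       seq & 0xFFFF, timestamp & 0xFFFFFFFF, ssrc & 0xFFFFFFFF)

hdr = rtp_fixed_header(seq=1, timestamp=90000, ssrc=0x1234ABCD)
print(len(hdr), hex(hdr[0]))  # 12 0x80
```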

TCP (Transmission Control Protocol) and HTTP (Hyper-Text Transfer Protocol), on the other hand, are considered loss-free protocols that guarantee well-ordered delivery of packets; they may be more suitable for transmitting compressed video because it is highly sensitive to information loss. Unlike UDP, TCP is a connection-oriented transport protocol that ensures packets are received correctly. This reliability is attained by first establishing a session and then resending corrupted or lost packets. HTTP is mostly based on TCP [5].

B. Transcoding Profiles

Video transcoding transforms a previously compressed video signal into another one with a distinct format, such as a different bit rate, frame rate, frame size, or even a different compression standard. Owing to the spread and variety of multimedia applications and a communication infrastructure consisting of different underlying networks and protocols, there has been a growing necessity for inter-network multimedia communication over heterogeneous networks.

In the case of real-time video, adaptation can be implemented by the encoder adjusting its coding parameters. Yet visual quality has to be sacrificed, because the bit rate of the encoded video must match the "weakest link".

There is a rising need for conversion among videos coded by diverse standards. Moreover, in the case of multicast, where a video source has to distribute the same video stream to various clients through channels with varying capacities, the encoded video stream needs to be converted to a particular bit rate for each outgoing channel. The same problem may also occur in multipoint video conferencing, where multiplexing multiple video streams may exceed the capacity of the channel and require bit rate conversion. Video transcoding is the technique dedicated to resolving these issues.
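A minimal sketch of the per-channel rate decision described above; the safety margin and function name are illustrative assumptions, not from the paper:

```python
def target_bitrate_kbps(source_kbps: int, channel_kbps: int,
                        safety: float = 0.9) -> int:
    """Pick a transcode target for one outgoing channel.

    The stream is downconverted so it never exceeds the channel's usable
    capacity (with a safety margin), and never upconverted past the source.
    """
    return min(source_kbps, int(channel_kbps * safety))

# A 13000 Kbps source sent to three clients on unequal channels:
print([target_bitrate_kbps(13000, c) for c in (20000, 10000, 4000)])
# [13000, 9000, 3600]
```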

The pre-encoded, high-quality, high-bit-rate videos are kept at the video source. On the other side, the user clients maintain client profiles, which include the following parts: a Transmission Profile, responsible for monitoring the dynamic conditions of the transmission channel (for example effective channel bandwidth, channel error rate, etc.); a Device Profile, defining the capability of the device, such as screen size and processing power; and a User Profile defining the user preferences [9].

Some of the transcoding profiles that comprise the audio/video coding techniques are:

• H.264 + AAC (MP4)

MPEG-4 (Moving Picture Experts Group) H.264 is an international standard for compressing video, recognized by the ISO. The MPEG-4 standard defines a container format, meaning that one file can contain many different types of data, stored as tracks; the overall file synchronizes and interleaves the data. So the video or audio in an MPEG-4 container may also be accompanied by metadata, cover art, subtitles, and other textual or visual data that can be extracted by a player.

The AAC codec is a lossy compression method for audio; it has been used since MPEG-2 but was updated for MPEG-4. It offers much higher quality, since it can support capture at up to 96 kHz and up to 48 channels, with backward compatibility. AAC delivers higher audio quality than MP3 while keeping similar or smaller file sizes [10].

• MPEG-2 + MPGA (TS)

The main feature that distinguishes MPEG-2 from MPEG-1 is the efficient coding of interlaced video, which is the key enabler of digital storage media and TV broadcast.

MPEG-2 mainly operates on the formats specified in ITU-R BT.601 with 4:2:0 color format, and it delivers broadcast-quality video at bit rates of 4 to 8 Mb/s and high-quality video at bit rates of 10 to 15 Mb/s. It also handles HD video or other color formats at even higher bit rates [11]. An MPEG-2 Transport Stream (TS) comprises a sequence of 188-byte transport packets. The easiest way of transporting these packets is to pack seven of them into the payload of an IP packet. This method works well in a "closed" network where traffic can be controlled and sufficient bandwidth can be provided to maintain quality of service. In an "open" network such as the Internet, however, MPEG-2 TS packets have to be encapsulated in transport-protocol packets and then transported over the IP network.
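The "seven packets per IP payload" rule above can be sketched as follows (an illustrative helper, not from the paper):

```python
TS_PACKET = 188  # bytes per MPEG-2 transport packet

def ts_to_payloads(ts_stream: bytes, packets_per_datagram: int = 7):
    """Split a TS byte stream into UDP payloads of 7 x 188 = 1316 bytes,
    which fits comfortably under a 1500-byte Ethernet MTU."""
    chunk = TS_PACKET * packets_per_datagram
    return [ts_stream[i:i + chunk] for i in range(0, len(ts_stream), chunk)]

payloads = ts_to_payloads(bytes(TS_PACKET * 14))  # 14 TS packets
print([len(p) for p in payloads])  # [1316, 1316]
```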

• WMV + WMA (ASF)

The Advanced Systems Format (ASF) is the file format used by Windows Media Technologies. Audio and/or Video content compressed with a large variety of codecs can be stored in an ASF file and played back with the Windows Media Player.

ASF is an extensible file format designed to store synchronized multimedia data. It supports data delivery over a wide variety of networks and protocols while remaining suitable for local playback. ASF supports advanced multimedia capabilities including extensible media types, component download, scalable media types, author-specified stream prioritization, multiple-language support, and extensive bibliographic capabilities, including document and content management. The ASF file container stores the following in one file: audio, multi-bit-rate video, metadata (such as the file's title and author), and index and script commands (such as URLs and closed captioning).

IV. IEEE 802.11N

The 802.11n standard offers bit rates of up to a theoretical 600 Mbit/s. These high data rates, in addition to improved reliability, are demanded by a number of new applications: wireless computer networks that require higher data transmission rates among various computers at home and (because of the emergence of fiber-to-the-home) higher transfer rates from the computer to the wired Internet port at users' homes; audio and video (AV) applications, e.g., transfer of video from laptops, hard-disk video recorders, and DVD players to TVs; and Voice over Internet Protocol (VoIP) applications, which require lower data rates but higher reliability [12].

802.11n achieves high data rates mainly by two methods: aggregating frames along with using block acknowledgments in the MAC layer, and using multiple-antenna techniques and increasing the available bandwidth from 20 to 40 MHz in the physical layer [12]. Frame aggregation is one of several MAC enhancements that maximize goodput and increase efficiency. There are two chief techniques to perform frame aggregation, known as Aggregate MAC Service Data Unit (A-MSDU) and Aggregate MAC Protocol Data Unit (A-MPDU). The key distinction between an MSDU and an MPDU is that the former enters or exits at the top of the MAC sublayer while the latter enters or exits at the bottom. Aggregate frame sequences are typically acknowledged with Block ACKs, a new form of control frame that contains a bitmap recording each individual MSDU and its received status [13].

In the A-MSDU scheme, multiple MSDUs are aggregated to form one MPDU, which may consist of many subframes either from multiple sources or for multiple destinations. An A-MSDU consists of multiple subframes; each subframe has a subheader (destination address, source address, length), the MSDU, and padding bytes. The maximum length of an A-MSDU frame can be 3839 or up to 7935 bytes [14]. The concept of MPDU aggregation is to gather multiple MPDU subframes under a single leading PHY header. A chief difference from A-MSDU aggregation is that A-MPDU operates after the MAC header encapsulation process. The maximum length an A-MPDU can attain is 65,535 bytes.
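A hedged sketch of the A-MSDU subframe arithmetic described above; the 14-byte subheader follows from the DA/SA/Length fields (6 + 6 + 2 bytes), and subframes are padded to a 4-byte boundary:

```python
SUBHEADER = 6 + 6 + 2  # destination address + source address + length field

def amsdu_subframe_len(msdu_len: int, last: bool = False) -> int:
    """Length of one A-MSDU subframe: subheader + MSDU, padded so the
    next subframe starts on a 4-byte boundary (no padding after the last)."""
    raw = SUBHEADER + msdu_len
    return raw if last else raw + (-raw) % 4

def fits_in_amsdu(msdu_lens, limit: int = 3839) -> bool:
    """Check whether a set of MSDUs fits under one A-MSDU length limit."""
    total = sum(amsdu_subframe_len(n) for n in msdu_lens[:-1])
    total += amsdu_subframe_len(msdu_lens[-1], last=True)
    return total <= limit

print(amsdu_subframe_len(100))         # 116
print(fits_in_amsdu([1500, 1500]))     # True
```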

Multiple-input, multiple-output (MIMO) describes a system with a transmitter having multiple antennas transmitting through the propagation environment to a receiver with multiple receive antennas. IEEE 802.11n employs a variety of physical-layer diversity mechanisms for attaining higher throughput and improved packet reception capabilities. In 802.11n, receiver diversity is implemented using Maximum Ratio Combining (MRC), a technique that combines signals from multiple antennas taking into account the signal-to-noise ratio (SNR) of the signals received at the different antennas [15].
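A simplified numerical sketch of MRC's benefit, using the textbook property that branch SNRs add under ideal MRC; the specific values are illustrative, not measurements from this experiment:

```python
from math import log10

def mrc_output_snr(branch_snrs_linear):
    """Under ideal Maximum Ratio Combining, the post-combining SNR
    equals the sum of the per-antenna branch SNRs (linear scale)."""
    return sum(branch_snrs_linear)

def to_db(linear: float) -> float:
    """Convert a linear power ratio to decibels."""
    return 10 * log10(linear)

# Two receive antennas each seeing 10 dB (= 10x linear) SNR:
combined = mrc_output_snr([10.0, 10.0])
print(round(to_db(combined), 1))  # 13.0  (a 3 dB diversity-combining gain)
```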

IEEE 802.11n also introduces two different channel bandwidths: 20 MHz and 40 MHz. Theoretically, using a 40 MHz band should double the throughput achieved using a 20 MHz band. However, all the 40 MHz channels partly overlap in the 2.4 GHz band, in contrast to the 20 MHz channels 1, 6 and 11, which are non-overlapping. Hence, using 40 MHz channels can also degrade throughput owing to increased interference with neighboring channels [15].

V. MATERIALS AND METHODS

The implementation of this experiment took place from January 2011 to June 2011 at a research lab in the Network Engineering Department, Al-Nahrain University, Baghdad. The network on which measurements were taken consisted of a streaming server, a wireless router and three receiving stations. The results were analyzed with a multiple-group ANOVA test comparing parametric data from three group settings via the SPSS17 software.

Figure 1 shows the network architecture. One receiver is mobile in order to set benchmarks according to distance from the streamer. To obtain better results, measurements were carried out over the new-generation wireless standard 802.11n. The 300 Mbps wireless TL-WR940N router has its own coverage area, which was measured according to the building design, considering walls and barriers.

Figure 2 shows the map of the ground floor of the building along with the measurement benchmarks, and Figure 3 captures a frame of the video transmitted. Indoor channels are extremely dependent on the placement of walls and partitions within the building, as these direct the signal pathways inside it. In such cases, a model of the location is a vital design tool in assembling a layout that leads to efficient communication approaches. The building construction design at which the experiment took place can be judged a special case of a more general layout, with minor variances in signal propagation.

Figure 1. Implemented Network

As with any electromagnetic wave, a Wi-Fi signal is affected by the characteristics of the materials in its propagation field; walls, ceilings, etc. affect its overall behavior. As demonstrated in Figure 4, the signal propagation takes a particular shape depending on the power of the 11n router and the distance from the streamer.

Using an AirView spectrum analyzer, the channels were captured during the period in which the experiment was held; the channel spectra are shown in Figure 5. Since the experiment took place in an area with no channel regulation, in both the 2.4 and 5 GHz bands, a wide noise floor affects system performance and must be considered when discussing results. Based on the channel spectra, measurements were taken in both busy and clear channels (channels 12 and 1, respectively) with the aim of capturing the network's performance under both conditions.

VI. SCENARIOS AND RESULTS

Changing the transcoding profiles with diverse streaming protocols has shown variation for each protocol with its corresponding live compression codec. The protocols tested were UDP, RTP and HTTP, in correspondence with the H.264, MPEG-2 and WMV coding systems.

Figure 2. Ground Floor Plan with Power Measurements

Figure 3. A frame picture of the video transmitted at the site of the experiment

Figure 4. Propagation over an 18x18 m2 floor area using a 300 Mb/s TL-WR940N router

Taking into account the conditions of the channels and the noise floor, what is called a "clear channel" may not be exactly as named, owing to the lack of channel regulation and supervision in the testing area.

The mentioned protocols were tested along with the transcoding profiles according to the parameters described in Table 1: frame rate, bit rate, and the furthest distance from the streamer at which an acceptable live flow of received images was retained.

Applying the parameters listed in Table 1 produced the comparison presented in the figures, showing the best outcomes collected with those parameters. Any increase or alteration in one of these limits would cause transmission blackout and image halting.


Figure 5. Channel Spectra

TABLE I. MEASURED PARAMETERS

UDP
  AV Codec     H.264+AAC (MP4)   MPEG-2+MPGA (TS)   WMV+WMA (ASF)
  Frame rate   30 f/s, W:1200, H:800 (all codecs)
  Distance     15 m              16 m               18 m
  Bit rate     10000 Kbps        13000 Kbps         10000 Kbps

RTP
  AV Codec     H.264+AAC (MP4)   MPEG-2+MPGA (TS)   WMV+WMA (ASF)
  Frame rate   30 f/s, W:1200, H:800 (all codecs)
  Distance     12 m              17 m               18 m
  Bit rate     8000 Kbps         13300 Kbps         7500 Kbps

HTTP
  AV Codec     WMV+WMA (ASF)
  Frame rate   30 f/s, W:1200, H:800
  Distance     18 m
  Bit rate     9000 Kbps

Streaming over UDP showed variation when changing AV codecs, with the biggest share going to MPEG-2, presenting about an 11 Mbps multicast stream to three recipients, alongside values around 5 Mbps and 4 Mbps for H.264 and WMV, respectively. It should be mentioned that, as expected, streaming over a clear or low-noise channel presents better video resolution and higher bit rates.

Figure 6. BitRate Variation for Transcoding Profiles and Channels for UDP

When it comes to RTP, H.264 shows readings similar to those for UDP, as viewed in Figure 7, while MPEG-2 rises again to more than 13 Mbps in a more stable stream covering further distances while maintaining a live flow of images. WMV now shows a somewhat wider spread, from 3 to 5 Mbps, when streaming on different channels, and it is noticeably more susceptible to the noise of those channels.

Many payload types have been defined for the transmission of MPEG-2 streams over RTP [16]; one of them is the payload based on the encapsulated MPEG-2 transport stream (TS). This type was established to accommodate hardware MPEG-2 codec implementations that operate directly on the TS. This packetization method benefits from the MPEG-2 timing model, which is based on the MPEG-2 Program Clock Reference (PCR), Decoding Time Stamps (DTS) and Presentation Time Stamps (PTS). The RTP timestamps are not used in the decoding part; yet they can be effective in estimating and minimizing network-induced jitter and in synchronizing time between the streamer and the recipient, which, given the sensitivity of the MPEG-2 timing system to network jitter, may prove to be significant functionality. In this manner, the TSs are packetized so that each packet consists of an integral multiple of the 188-byte MPEG-2 transport packets, so as to increase transmission efficiency [16].

Figure 7. BitRate Variation for Transcoding Profiles and Channels for RTP

Since ASF is the file format used by Windows Media Technologies, audio and/or video content can be compressed with a large variety of codecs, stored in an ASF file and played back with Windows Media Player; it is therefore the best choice for streaming over TCP, or HTTP in our case, because it stores the stream for local playback in Windows Media Player. The bit rate received over HTTP is illustrated in Figure 8.

Figure 8. BitRate Variation for Transcoding Profiles and Channels for HTTP

The analysis of variance (ANOVA) test is the applicable statistical method for comparing more than two groups. Generally, if k groups are involved, a total of [k · (k − 1)] / 2 pairwise comparisons are possible; in our example with three groups the number of comparisons is 3. It is not very useful to apply t-tests, because multiple testing is associated with a "true" significance level that is larger than the nominal value of, say, 0.05. In that case, a null hypothesis of no difference would be rejected even if the probability that the difference occurred by chance is larger than the pre-specified significance level.
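The pairwise-comparison count and the one-way F statistic underlying such a test can be sketched in Python (generic formulas applied to synthetic data; the paper's own analysis was multivariate and run in SPSS17):

```python
def pairwise_comparisons(k: int) -> int:
    """Number of pairwise comparisons among k groups: k(k-1)/2."""
    return k * (k - 1) // 2

def one_way_anova_f(groups):
    """F = (between-group mean square) / (within-group mean square)."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

print(pairwise_comparisons(3))  # 3
print(one_way_anova_f([[1, 2, 3], [2, 3, 4], [3, 4, 5]]))  # 3.0
```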

The null hypothesis is that there is no difference in outputs, frequencies or distances when different codecs are used, while the alternative hypothesis is that there is a significant difference in outputs, frequencies or distances when different codecs are used.

Figure 9. MPEG-2 shows the best readings for frequency and distance

All data in this table show statistically significant p-values (<0.05), which means that we have enough evidence to reject the null hypothesis and conclude that there is a statistically significant difference between codecs with regard to distance.

TABLE II. DESCRIPTIVE STATISTICS

                   codec          Mean      Std. Deviation   N
Output (Mb/s)      H.264+AAC      4.5500    .81670           6
                   MPEG-2+MPGA    10.1571   3.77397          7
                   WMV+WMA        4.6600    1.97664          10
                   Total          6.3043    3.77334          23
Frequency (Mb/s)   H.264+AAC      7.6667    2.25093          6
                   MPEG-2+MPGA    10.2857   3.77334          7
                   WMV+WMA        7.8000    2.13698          10
                   Total          8.4217    2.88601          23

TABLE III. MULTIVARIATE TESTS (c)

Effect      Test                 Value   F        Hypothesis df   Error df   Sig.   Partial Eta2
Intercept   Pillai's Trace       .811    38.624a  2.000           18.000     .000   .811
            Wilks' Lambda        .189    38.624a  2.000           18.000     .000   .811
            Hotelling's Trace    4.292   38.624a  2.000           18.000     .000   .811
            Roy's Largest Root   4.292   38.624a  2.000           18.000     .000   .811
Distance    Pillai's Trace       .432    6.837a   2.000           18.000     .006   .432
            Wilks' Lambda        .568    6.837a   2.000           18.000     .006   .432
            Hotelling's Trace    .760    6.837a   2.000           18.000     .006   .432
            Roy's Largest Root   .760    6.837a   2.000           18.000     .006   .432
Codec       Pillai's Trace       .642    4.489    4.000           38.000     .005   .321
            Wilks' Lambda        .361    5.971a   4.000           36.000     .001   .399
            Hotelling's Trace    1.758   7.472    4.000           34.000     .000   .468
            Roy's Largest Root   1.753   16.655b  2.000           19.000     .000   .637

a. Exact statistic. b. The statistic is an upper bound on F that yields a lower bound on the significance level. c. Design: Intercept + distance + codec.

VII. CONCLUSIONS

Observing the results obtained, it is clear that the permutation of transcoding profiles of the video stream with the real-time transmission protocols has a measurable impact. Streams coded with H.264 and AAC, for video and audio respectively, could not exceed 10 Mb/s input and about 5.4 Mb/s output at distances of no more than 12 meters for RTP and 15 meters for UDP. Monitoring the flow showed that the stream exhibited fluctuations and recurrent sharp edges, which degraded the quality of the received video stream. WMV showed better reception over longer distances, reaching 18 meters, although its bit rate could not exceed 9 Mb/s input and 8.7 Mb/s output, with a more stable flow than the H.264's.

The best received video transmission was achieved with the MPEG-2 and MPGA transcoding profile conveyed over RTP, with about 13 Mb/s input and output, 16 to 17 meters of coverage, and acceptable video quality.

Using the ANOVA test, there is enough evidence to reject the null hypothesis and to state with 95% confidence that there is a statistically significant difference in outputs, distances and frequencies with different pathways, namely the (RTP, MPEG-2) profile.

REFERENCES

[1] O. Z. Alay, T. Korakis, Y. Wang, and S. S. Panwar, "Layered wireless video multicast using relays," IEEE Transactions on Circuits and Systems for Video Technology, vol. 20, no. 8, August 2010.

[2] S. M. Weiss, “Video over IP: IPTV, Internet video, H.264, P2P, WebTV, and streaming: A complete guide to understanding the technology”, Chapter 9, pp. 249 – 272, 2008.

[3] V. Samanta, R. Oliveira, A. Dixit, P. Aghera, P. Zerfos, S. Lu, “Impact of video encoding parameters on dynamic video transcoding”, University of California Los Angeles, COMSWARE’06, January 8–12, 2006, New Delhi, India

[4] M. Levkov, “Video encoding and transcoding recommendations for HTTP dynamic streaming on the adobe flash platform”, Adobe Systems, Inc., Oct 2010

[5] B. Bing, "Broadband video networking," IEEE Globecom, Georgia Institute of Technology, Artech House, 2009.

[6] S. Paul, "Digital video distribution in broadband, television, mobile and converged networks," pp. 11–25, 1st Edition, John Wiley & Sons, August 2011.

[7] D. Stolarz, “Mastering Internet Video: A Guide to Streaming and On-Demand Video”, Chapter 5, pp. 212 – 220, Aug 2004.

[8] RFC3550 – “RTP: A Transport Protocol for Real-Time Applications”

[9] Z. Lei, “A cooperative video adaptation and streaming scheme for mobile and heterogeneous devices in a community network”, DISCOVER, SITE, University of Ottawa, ICME'09 Proceedings of the 2009 IEEE international conference on Multimedia and Expo, IEEE Press Piscataway, NJ, USA, 2009.

[10] D. Hassoun, “Exploring flash player support for high-definition H.264 video and AAC audio”, Adobe Developer Connection, Flash Player Developer Center, 2008.

[11] C. W. Chen, “Intelligent multimedia communication: Techniques and applications”, chapter 2, pp. 79, 2010 Springer-Verlag Berlin Heidelberg

[12] A. F. Molisch, "Wireless Communications," Chapter 29, pp. 739, 2nd Edition, Wiley, 2010.

[13] D. Skordoulis, Q. Ni, U. Ali & M. Hadjinicolaou, “Analysis of concatenation and packing mechanisms in IEEE 802.11n”, Department of Electrical and Computer Engineering, School of Engineering and Design, Brunel University, London, UK – 2007

[14] S. T. Srikanth S, “A frame aggregation scheduler for IEEE 802.11n”, Communications (NCC), 2010 National Conference, pp. 1-5, Jan 2010.

[15] V. Shrivastava, S. Rayanchu, J. Yoonj, S. Banerjee, “802.11n under the microscope”, Proceedings of the 8th ACM SIGCOMM conference on Internet measurement, Vouliagmeni, Greece, Oct 2008.

[16] A. Basso, G. L. Cash, M. R. Civanlar, “Transmission of MPEG-2 Streams over Non-Guaranteed Quality of Service Networks”, AT&T Labs – Research, 2000
