SCHEDULING AND RESOURCE ALLOCATION IN BROADBAND MULTIMEDIA WIRELESS LOCAL AREA NETWORKS
Richard Wayne Kautz
A thesis submitted in conformity with the requirements for the degree of Doctor of Philosophy
Graduate Department of Electrical and Computer Engineering University of Toronto
© Copyright by Richard Wayne Kautz 1998
National Library of Canada / Bibliothèque nationale du Canada
Acquisitions and Bibliographic Services
395 Wellington Street, Ottawa ON K1A 0N4, Canada

The author has granted a non-exclusive licence allowing the National Library of Canada to reproduce, loan, distribute or sell copies of this thesis in microform, paper or electronic formats.

The author retains ownership of the copyright in this thesis. Neither the thesis nor substantial extracts from it may be printed or otherwise reproduced without the author's permission.
Abstract
SCHEDULING AND RESOURCE ALLOCATION IN BROADBAND MULTIMEDIA WIRELESS LOCAL AREA NETWORKS
Richard Wayne Kautz
Doctor of Philosophy in Electrical and Computer Engineering
University of Toronto
Two main topics of research in modern computer networks are the development of new wireless architectures and technologies, and the addition of new multimedia services. These two topics have converged in the design of multimedia Wireless Local Area Networks (WLANs). The synthesis of wireless and multimedia networks has opened new problem areas in transmission scheduling and resource allocation. Techniques suitable for wireless telephony are unsuitable for a multimedia environment, and techniques for wired multimedia networks are not immediately applicable to a wireless medium. The problems of transmission scheduling and resource allocation are explored in three areas: Time-Division Multiple Access (TDMA), hybrid TDMA/Code Division Multiple Access (CDMA), and multicellular TDMA environments.
The problem of scheduling is explored in TDMA networks through Distributed Fair Queueing (DFQ), a centralized scheduling protocol. The necessary concepts of Fair Queueing are reviewed, and the resource allocation problem for multimedia services is addressed. The DFQ architecture is then introduced, and the problems due to physical and error control overhead are studied. Behaviour of a mix of multimedia services is simulated to determine average system performance.
The problems of Quality-of-Service (QoS) delivery in hybrid TDMA/Code Division Multiple Access (CDMA) are addressed by introducing differential power control for QoS preservation. The optimal power levels are determined in order to maximize the capacity of the network. Two scheduling methodologies are introduced, differing in efficiency and complexity.
Finally, the problem of channel allocation in an unlicensed, distributed-architecture environment is explored through a simple interference avoidance protocol named Active Channel Avoidance (ACA). The ACA protocol attempts to minimize interference between unrelated networks in an environment while allowing communication between cells of multicellular networks. The performance of simple network models under ACA is calculated, in order to estimate performance for real-world networks and provide a theoretical framework for further refinement.
This thesis is dedicated to my parents, who have instilled a lifelong love of learning in me.
I would like to thank my supervisors, Professors Leon-Garcia and Pasupathy, who have
given me direction and constructive criticism throughout my program.
Contents
List of Figures

List of Tables

1 Multimedia Wireless LAN Protocols
  1.1 Multimedia Services and Requirements
    1.1.1 Service Types and Quality of Service
    1.1.2 Multimedia Service Categories
  1.2 Wireless LAN Architecture
    1.2.1 Wireless LAN Stations
    1.2.2 Multimedia Protocol Layers
  1.3 Wireless LAN Medium Access Alternatives
    1.3.1 Access Techniques
    1.3.2 Access Topologies
    1.3.3 Access Scheduling
  1.4 Multimedia Network Service Models
    1.4.1 Asynchronous Transfer Mode (ATM)
    1.4.2 Internet Protocols
  1.5 WLAN Evaluation Criteria
  1.6 Outline
    1.6.1 New Contributions

2 Fair Queueing and Generalized Processor Sharing
  2.1 Literature Review
    2.1.1 Homogeneous Service Priority
    2.1.2 Heterogeneous Service Priority
  2.2 Resource Allocation for GPS Systems
    2.2.1 GPS Leaky-Bucket Performance
    2.2.2 Service Share Allocation Algorithm
    2.2.3 Example
    2.2.4 Parameter Translations for Packet Networks
  2.3 Conclusions

3 Distributed Fair Queueing
  3.1 DFQ Architecture
    3.1.1 Multi-QoS Support
    3.1.2 Distributed Architecture Support
  3.2 Service Usage for DFQ
    3.2.1 Forward Error Control and Physical Layer Overhead
    3.2.2 ARQ Overhead
    3.2.3 Tag Poll Overhead
    3.2.4 Usage Modifications for Service Share Allocation
  3.3 Best-Effort Traffic Support
  3.4 Similar Protocols
  3.5 Simulation Results
    3.5.1 Mixed-traffic Simulation
    3.5.2 CBR-traffic Simulation
    3.5.3 Bursty-traffic Simulation
  3.6 Conclusions

4 Hybrid CDMA/TDMA Networks
  4.1 Architecture
  4.2 Hybrid Network Capacity
    4.2.1 Interference Control
    4.2.2 Service Partitions
    4.2.3 Residual Capacity Calculations
  4.3 Calculations
  4.4 Conclusions

5 SUPERNet Channel Allocation
  5.1 SUPERNet Architecture
  5.2 SUPERNet Transmission Rules
  5.3 Channel Allocation Strategies
    5.3.1 Static Allocation
    5.3.2 Active Channel Avoidance
  5.4 MAC Channel Allocation Support
    5.4.1 Channel Beacons
    5.4.2 Interference Notification and Channel Change
    5.4.3 Intracluster Transmission
    5.4.4 Intercluster Communication
  5.5 Analysis
    5.5.1 Static Allocation
    5.5.2 Active Channel Avoidance
  5.6 Conclusions

A Service Share Algorithm Pseudocode
  A.1 GPS Algorithm
  A.2 DFQ Algorithm

References
List of Figures
1.1 The fusion of multimedia networks and wireless networks is the multimedia wireless network. Access points provide interconnections to other networks.
1.2 Cellular structure of networks. Networks are grouped into cells, which are interconnected with either wired or wireless links.
1.3 Hidden terminal problem: Both stations A and B decide to transmit at the same time to the same receiver C.
1.4 OSI Basic Reference Model. Solid lines indicate real interfaces between elements. Dotted lines indicate virtual interfaces between peer network layers.
1.5 Distributed network architecture. All units are peers.
1.6 Centralized network architecture. Communication between units is controlled by a base station.
1.7 Asynchronous Transfer Mode cell format. Index numbers 0 to 52 represent octets in order of transmission.
1.8 Standard ATM protocol stack.
1.9 Wireless ATM protocol stack, with Data Link and MAC sublayers.
1.10 IP protocol stack.
1.11 Wireless IP protocol stack, showing lower layer functions.
1.12 Internet Protocol v6 packet format.
1.13 System design alternatives for multimedia wireless LAN networks.
2.1 FIFO queue with three services.
2.2 Round-robin queue with three services.
2.3 Round-robin queue with variable size packets.
2.4 Virtual time in a GPS system. φ1 ... φ4 indicate the services' allocated shares.
2.5 SCFQ virtual time v(t) for an example arrival/departure sequence.
2.6 Leaky-bucket-limited aggregate arrival process. Original arrival process A(0, t) is limited to arrival process Â(0, t).
2.7 Universal service curve for three services. Services 2 and 3 are locally stable; service 1 is locally unstable. Dk is the maximum delay experienced by a bit for service k.
2.8 Locally stable service and maximum delay.
2.9 Locally unstable service and maximum delay.
2.10 Detail of relevant line segments for a locally unstable service.
2.11 Qualitative diagram of the service share allocation algorithm.
2.12 State transition diagram for service sets.
2.13 Service share φ versus the number of bursty services in the system.
2.14 Leaky-bucket interpretation of the Generic Cell Rate Algorithm (GCRA).
3.1 Distributed Fair Queueing WLAN architecture.
3.2 Transmission cycles. a) Downstream transmission. b) Upstream data transmission. c) Upstream poll transmission.
3.3 Stop-and-wait ARQ transmission cycles.
3.4 Service burstiness and its effect on tag poll generation.
3.5 Maximum possible tag poll rate for a service where L:-/L~ = 0.1. X axis indicates the service's service share φi, and each curve represents a different value for average rate ρi. Both axes represent fractions of total channel bandwidth.
3.6 Two-state Markov traffic model.
3.7 Average delay and delay jitter for bursty sources. X axis is average load ρ, y axis is average delay in cell times.
3.8 Average delay and delay jitter for CBR sources. X axis is average load ρ, y axis is average delay in cell times.
3.9 Average delay and delay jitter for Poisson sources. X axis is average load ρ, y axis is average delay in cell times.
3.10 Number of tag polls per cell time for remote connections. X axis is average load ρ.
3.11 Delay and delay jitter for varying numbers of CBR services with constant offered load.
3.12 Delay and delay jitter for bursty services with constant offered load and varying Peak Cell Rate.
4.1 Hybrid architecture data paths.
4.2 Hybrid transmission: One capsule is transmitted in each timeslot from each transmitting code.
4.3 Transmitted power and received power in a network, and the near-far effect.
4.4 Two partition schemes. a) Unpartitioned: determine services to transmit each timeslot. b) Multiple partitions: determine partition to transmit each timeslot.
4.5 System capacity for an unpartitioned system for Gp = 10.
4.6 System capacity for an unpartitioned system for Gp = 100.
4.7 System capacity for a multiple-partition system for Gp = 10.
4.8 System capacity for a multiple-partition system for Gp = 100.
5.1 Two networks, designated A and B, each comprised of two clusters, 1 and 2, which have overlapping coverage areas.
5.2 Stations are subjected to three levels of control for data transmission.
5.3 The physical layer packets that individual stations transmit and receive are part of the cluster burst that the SUPERNet allows to be transmitted.
5.4 Hidden terminal problem: Both clusters A and B decide to transmit on the same channel.
5.5 An example interference graph. Each vertex represents a cluster; edges between vertices represent the possibility of interference between them.
5.6 Channel beacon operation.
5.7 Channel testing. The test consists of the broadcast message CHTST, the test interval Ttest, and the timeout interval Ttimeout.
5.8 Forwarding Request-to-Send/Clear-to-Send protocol between Access Points of two cooperating clusters.
5.9 Probability of co-occupation on a channel for the static allocation algorithm. Nc = 15.
5.10 Cycle where the ACA channel allocation algorithm would not converge to a solution if channels were released immediately. Allocations would alternate in each cluster between the two remaining channels.
5.11 Cluster burst length versus loading λ. Tsetup = 1, and Ttimeout is varied from 0.1 to 2.
5.12 Control channel occupancy pcontrol versus loading λ. Tsetup = 1, and Ttimeout is varied from 0.1 to 2.
5.13 Probability of co-occupation on the chosen channel versus the number of channel allocations, given a new cluster. Nc = 15.
5.14 Co-occupancy time for different cluster load levels and timeout periods. The co-occupancy time is in multiples of TBCN.
List of Tables
1.1 ATM service classes and Quality of Service provisions.
1.2 IPv6 service classes and priority levels.
2.1 Parameters for bursty and non-bursty service classes.
3.1 Simulation service mix.
4.1 Voice/data service mix. Voice services are transmitted with each of the three data types, one at a time in each example.
Chapter 1
Multimedia Wireless LAN
Protocols
The recent explosive growth of affordable, personal technology has brought with it a need for more sophistication in communications systems. The ability of personal computers to handle many types of multimedia services such as voice and video, coupled with new telephony data services, demands networks which can effectively and flexibly support these services. As well, technological advances and a favourable regulatory climate have opened up new possibilities in wireless, mobile access to communication services. The fusion of these two technologies is the multimedia wireless access network (Figure 1.1).
The goal of a multimedia wireless network is to allow the use of many different
services, such as voice, video, and data, in a wireless environment. Historically, wireless
services have been limited to voice, usually a fixed-bandwidth, single quality of service
connection. More recently, wireless data has been introduced, either as a separate network
such as wireless Ethernet, or as a low-speed adjunct to voice services. A multimedia WLAN
requires much more flexibility in Quality of Service (QoS) provisioning to support new
services such as video and high-speed data.
In a multimedia network, resource allocation becomes a central problem. Network
resources, such as allocated bandwidth and buffer space, must be divided amongst the
users of the network. In homogeneous networks, like the Advanced Mobile Phone System (AMPS) cellular network [1], all users employ the same service, and so have equal priorities and equal characteristics. Therefore, network resources can be divided equally between all
Figure 1.1: The fusion of multimedia networks and wireless networks is the multimedia wireless network. Access points provide interconnections to other networks.
users. In other homogeneous networks, such as IEEE 802.3 or Ethernet [2], all users employ
the same access method to the network, and therefore have equal priority in the network.
In a multimedia network with heterogeneous services, where each user may employ different services with different priorities and requirements, resource allocation becomes a complex tradeoff.
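The contrast can be made concrete with a small sketch: a homogeneous network divides capacity equally, while a heterogeneous network divides it in proportion to per-service weights (the approach formalized as Generalized Processor Sharing in Chapter 2). The capacity figure and weights below are purely illustrative, not values from the thesis.

```python
# Sketch: dividing link bandwidth among users. Illustrative only; the
# formal treatment is Generalized Processor Sharing (Chapter 2).

def equal_shares(capacity, n_users):
    """Homogeneous network: every user receives the same share."""
    return [capacity / n_users] * n_users

def weighted_shares(capacity, weights):
    """Heterogeneous network: user i receives capacity * w_i / sum(w)."""
    total = sum(weights)
    return [capacity * w / total for w in weights]

if __name__ == "__main__":
    # A hypothetical 10 Mb/s link, four identical voice users: 2.5 Mb/s each.
    print(equal_shares(10e6, 4))
    # Same link, one video user weighted 6 alongside three weight-1 data
    # users: the video user receives 6/9 of the capacity.
    print(weighted_shares(10e6, [6, 1, 1, 1]))
```

The weighted case is the essence of the multimedia tradeoff: raising one service's share necessarily shrinks everyone else's.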
This thesis explores the problem of Medium Access Control (MAC) in three different environments: Distributed Fair Queueing (DFQ), Hybrid CDMA/TDMA, and SUPERNet Active Channel Avoidance (ACA). Each protocol is designed for a specific environment, determined by network architecture, topology, and technology. The performance of the protocols must be evaluated based on suitable criteria: Quality of Service preservation, efficiency, and compatibility with current standards.
To set the stage for the theoretical developments of this thesis, this chapter introduces the following concepts:

• Multimedia Services and Requirements: explains how multimedia services are defined, and what they require of the network.

• Wireless LAN Architecture: explains what components are required in a wireless LAN, and how they fit together.

• Multimedia WLAN Access Alternatives: introduces some of the possible designs that may comprise a wireless LAN.

• Multimedia Network Service Models: introduces Asynchronous Transfer Mode (ATM), and multimedia Internet Protocols that may be used in a network.

• WLAN Evaluation Criteria: details the judgement areas that are of interest in wireless LAN performance.

• Outline: introduces the structure of the thesis and details its new contributions to the field.
1.1 Multimedia Services and Requirements
Multimedia services have traditionally been divided into three categories: voice, video, and data services. Each category has different transmission requirements and different source characteristics. A multimedia network must support all the service categories simultaneously: that is, it must be possible to support a mix of voice, video, and data services transmitting concurrently.
Many multimedia services, such as voice and video, are inherently connection-oriented. This differentiates them from connectionless services, such as some data services including electronic mail. A connection-oriented service must be admitted to a network through call setup, so that resources may be set aside for its data, before it may transmit any data. After the service has completed its transmission (for example, when a party using a voice service wishes to hang up), its connection must be terminated and its resources released. A connectionless service does not need to be admitted to the network, and may transmit without any admission/termination overhead. However, network resources may not be allocated exclusively to the connectionless service. Therefore, connection-oriented services may enjoy a guaranteed level of service, at the cost of call setup and termination overhead. Connectionless services have no such overhead, but cannot be guaranteed any level of performance besides best-effort.
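The admission/termination lifecycle above amounts to simple resource bookkeeping, which the following toy admission controller sketches. This is not a protocol from the thesis; the class, its method names, and the capacity figure are all hypothetical illustrations.

```python
# Minimal sketch of connection admission: a connection-oriented service
# reserves resources at call setup and releases them at termination;
# connectionless traffic shares only whatever remains (best-effort).

class AdmissionController:
    def __init__(self, capacity):
        self.capacity = capacity
        self.reserved = {}          # connection id -> reserved bandwidth

    def call_setup(self, conn_id, bandwidth):
        """Admit the connection only if resources can be set aside."""
        if sum(self.reserved.values()) + bandwidth > self.capacity:
            return False            # call blocked: guarantee impossible
        self.reserved[conn_id] = bandwidth
        return True

    def call_termination(self, conn_id):
        """Release the connection's reserved resources."""
        self.reserved.pop(conn_id, None)

    def best_effort_capacity(self):
        """Connectionless traffic shares the unreserved remainder."""
        return self.capacity - sum(self.reserved.values())
```

For example, on a 10-unit link, admitting a 4-unit voice call leaves 6 units for best-effort traffic, and a further 7-unit request is blocked until the voice call terminates; this is exactly the guarantee-versus-overhead tradeoff described above.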
1.1.1 Service Types and Quality of Service
Service requirements can be qualitatively defined in terms of QoS parameters. A
service's requirements are listed in a service contract, which is negotiated via the user-
network interface. While the specific parameters in the contract depend on the service
model, they will contain at least some of the following:
• Delay: A service may require that data be transmitted from source to destination in less than a given amount of time, or else the data becomes useless. Real-time voice and video services have delay requirements, for example, as data must be received at the destination before it is required for playback, or else the data has no value.

• Delay variation (Delay jitter): A service may require that data be transmitted in a regular fashion. Since received data is buffered until it is required by the destination, unpredictable arrivals may cause the receive buffer to underflow or overflow. This may occur even if the service's delay guarantee is met.

• Throughput: A service may require a given level of data to be transmitted per unit time, even though it has no specific requirements on delay. Some data services may require a minimum throughput, or else the service application may assume that the connection is stalled or broken and generate an error condition.

• Error/Loss Rate: Most services will have a limit on the data error or loss rate, but the magnitude of the limit may vary considerably from service to service. Voice services, for example, may tolerate relatively high bit error rates, as the human ear can tolerate a large amount of noise before the signal quality becomes unacceptable. Data services may require far lower bit error rates, as even single errors in transmissions may make the data worthless.
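A service contract built from these parameters can be represented as a simple record, with an absent field meaning "no requirement". The field names below are illustrative assumptions, since the actual parameters depend on the service model in use.

```python
# Sketch: a QoS service contract as a record of optional bounds, plus a
# check of measured flow behaviour against every bound the contract sets.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ServiceContract:
    max_delay_ms: Optional[float] = None        # end-to-end delay bound
    max_jitter_ms: Optional[float] = None       # delay-variation bound
    min_throughput_bps: Optional[float] = None  # guaranteed rate
    max_loss_rate: Optional[float] = None       # error/loss bound

def meets_contract(c, delay_ms, jitter_ms, throughput_bps, loss_rate):
    """True if the measured flow satisfies every specified bound."""
    if c.max_delay_ms is not None and delay_ms > c.max_delay_ms:
        return False
    if c.max_jitter_ms is not None and jitter_ms > c.max_jitter_ms:
        return False
    if c.min_throughput_bps is not None and throughput_bps < c.min_throughput_bps:
        return False
    if c.max_loss_rate is not None and loss_rate > c.max_loss_rate:
        return False
    return True
```

A hypothetical voice contract, for instance, might set only `max_delay_ms` and `max_loss_rate` and leave throughput unspecified, while a bulk data contract might set only `min_throughput_bps` and a very strict `max_loss_rate`.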
1.1.2 Multimedia Service Categories
Multimedia services are, by definition, separable into different categories: they are commonly grouped into voice, video, and data. Each service category has different source characteristics and network requirements.
Multimedia services can be divided by source characteristics into constant bit rate (CBR) services and variable bit rate (VBR) services. Constant bit rate services, as the name implies, generate a fixed rate of data. Variable bit rate services may generate traffic according to a random process. The traffic arrival process may be characterized by the network in terms of a few quantitative parameters, or limited by a windowing algorithm. For the purposes of this document, all services are assumed to transmit their data in packets, of either fixed or variable length.
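As a sketch of the two source types, the generators below emit a per-slot packet count: a CBR source at a fixed rate, and a VBR source driven by a two-state (on/off) Markov chain, one common way to model bursty traffic (a two-state Markov model also appears in the simulations of Chapter 3). The rates and transition probabilities are illustrative assumptions, not parameters from the thesis.

```python
# Sketch: CBR versus on/off (two-state Markov) VBR packet sources.
import random

def cbr_source(rate, n_slots):
    """Constant bit rate: the same number of packets every slot."""
    return [rate] * n_slots

def vbr_source(peak_rate, p_on_to_off, p_off_to_on, n_slots, seed=1):
    """On/off VBR: emits peak_rate while 'on', nothing while 'off'."""
    rng = random.Random(seed)
    on, out = False, []
    for _ in range(n_slots):
        if on and rng.random() < p_on_to_off:
            on = False
        elif not on and rng.random() < p_off_to_on:
            on = True
        out.append(peak_rate if on else 0)
    return out
```

In steady state the on/off source is active a fraction p_off_to_on / (p_on_to_off + p_off_to_on) of the time, so its mean rate is that fraction of the peak rate; characterizing such a source by peak rate, mean rate, and burst length is exactly the kind of quantitative parameterization mentioned above.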
Each category has different contract requirements due to their intrinsic natures. Of interest in this thesis are:

• Voice Services: Much effort has been expended to design voice coding and decoding to provide acceptable QoS over many transmission methods. Voice coding has been developed for low rate applications, as well as high quality ones. The resources available in the network and the QoS guarantees available to the service will determine the best coding method for that network.

• Video Services: Video coding algorithms have also been designed for a wide range of transmission qualities and transmission networks. Services such as the Motion Picture Experts Group (MPEG) standards can provide High Definition Television (HDTV) quality picture and sound, but require high bitrate VBR or CBR connections. Other services such as H.261 can provide only videoconference quality picture, but may operate using low bitrate CBR connections [3].

• Data Services: Data services are usually delay-insensitive, error-sensitive services which are currently carried over both wireless and wired best-effort computer networks. They include connection-oriented services such as terminal emulation and bulk data exchange, as well as connectionless services such as World Wide Web (WWW).
1.2 Wireless LAN Architecture
The evolution of modern networks has led to the development of certain basic structures in their architectures, whether wireless or wired, homogeneous or multimedia. A multimedia wireless LAN must support its services using the existing framework. Of particular concern are the definition and construction of stations, and the architecture of the network protocols that allow the network to operate.
1.2.1 Wireless LAN Stations
The basic problem addressed by wireless networks is the wireless transmission of data between entities called stations. These stations belong to a hierarchy of groupings. The characteristics of these stations within these hierarchies may influence the design of network protocols. Of interest in this thesis are station mobility, station complexity, inter-station distance and station number.
Station Groupings
While the simplest networks consist of one monolithic group of stations, most networks consist of smaller, interconnected groupings of stations. These groupings may be referred to as cells or clusters (Figure 1.2). The cells may be interconnected via wireless or wired links. Two main reasons for such network subdivision are the between-station distance and station number.

Since stations are restricted by both design and regulation to a maximum transmission range, a network that covers too much area will contain stations which cannot receive transmissions from other stations in the network. Besides the obvious effect of complicating transmission between such stations, their situation may interfere with other stations'
Figure 1.2: Cellular structure of networks. Networks are grouped into cells, which are interconnected with either wired or wireless links.
Figure 1.3: Hidden terminal problem: Both stations A and B decide to transmit at the same time to the same receiver C. Neither A nor B can sense the overlapping transmissions. However, receiver C is within the coverage area of both A and B, and receives overlapping (and therefore probably corrupted) messages.
communication by the Hidden Terminal effect (or Hidden Terminal problem). The Hidden Terminal problem occurs when two transmitting stations, distant enough from each other to not hear each other's transmissions, attempt to transmit to a receiver which hears both. The two transmitters are unaware of the overlapping transmissions. Since most MAC protocols attempt to prevent such overlapping transmissions by transmitter channel-sensing, this may cause serious network problems if not addressed in the design of the MAC protocol (Figure 1.3). A similar but opposite hidden terminal problem may occur when a transmitter hears interference the receiver does not, causing the transmitter to suspend transmission even when the receiver would have experienced no interference. Division of the network into geographically smaller cells of proper size can alleviate these problems.
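Under a simple disc propagation model, hidden-terminal pairs can be identified mechanically: two transmitters are hidden from each other with respect to a receiver when both reach the receiver but neither reaches the other. The geometry below is a hypothetical illustration of this condition, not a model taken from the thesis.

```python
# Sketch: detecting hidden-terminal pairs with a fixed transmission
# range r (disc propagation model). Stations A and B are hidden from
# each other w.r.t. receiver C when both can reach C but neither can
# hear the other.
import math

def in_range(a, b, r):
    return math.dist(a, b) <= r

def hidden_pairs(stations, receiver, r):
    """Return transmitter pairs whose frames can collide unheard at `receiver`."""
    ids = sorted(stations)
    pairs = []
    for i, a in enumerate(ids):
        for b in ids[i + 1:]:
            if (in_range(stations[a], receiver, r)
                    and in_range(stations[b], receiver, r)
                    and not in_range(stations[a], stations[b], r)):
                pairs.append((a, b))
    return pairs

if __name__ == "__main__":
    r = 1.0
    stations = {"A": (-0.9, 0.0), "B": (0.9, 0.0)}
    receiver = (0.0, 0.0)   # C hears both, but A and B are 1.8 apart
    print(hidden_pairs(stations, receiver, r))   # [('A', 'B')]
```

Shrinking the cell so that all of its stations fall within one transmission range of each other empties this list, which is precisely why subdividing the network alleviates the problem.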
A high number of stations in a cell may adversely affect network performance. Many MAC protocols are either limited in the number of stations that may belong to a given cell (for example, most cellular telephony protocols) or may experience serious decline in performance or instability with increasing load (for example, ALOHA and many of its derivatives). Dividing the network into smaller cells may improve each cell's performance at the cost of more complexity in the interconnection network.
Station Characteristics
While stations in different networks and different applications may vary in many ways, we concern ourselves with a few key attributes: mobility (fixed or mobile), complexity (simple or complex), and distance and number (Wide, Metropolitan, or Local Area).

Station mobility may affect every aspect of network design. While a fixed wireless network is useful for avoiding wiring costs and simplifying occasional reconfigurations, most wireless networks are designed with mobility in mind. A mobile wireless network must allow stations to move from cell to cell while maintaining connection integrity and keeping track of current station locations.
The complexity of stations may affect design as well. Stations may be simple, such as wireless telephone handsets, or complex, such as notebook and sub-notebook personal computers. Simple stations usually support one type of service, such as voice, and are limited in their processing capabilities. Complex stations may support many simultaneous services of different categories and have large processing capabilities. A protocol designed for simple stations may be too inflexible for complex stations, and a protocol designed for complex stations may be too inefficient and computationally complex for simple stations.
An important determining factor in design is the scale of the stations' separation
and the number of stations in the network. Networks may be Wide-Area Networks (WANs),
which cover large areas and have thousands or millions of stations: for example, telephone
service and the Internet. Metropolitan-Area Networks (MANs) cover city-size and campus-
size areas and may have hundreds or thousands of stations; these include building intranets
and local distribution systems. Local-Area Networks (LANs) cover building-size and room-
size areas, and generally have a few tens or fewer stations. The scale of the network has
implications for its design. The distance between stations (high in WANs and MANs, low in
LANs) determines the delay between transmission and reception of data. Thus, a protocol
that relies on many short messages may perform poorly in a WAN or MAN due to this
propagation delay. The number of stations in the network may affect the performance as
well, as the performance of some protocols may drop quickly with increasing numbers of
stations.
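The effect of station separation on delay can be made concrete with a rough calculation. In the sketch below, the propagation speed is taken as the free-space speed of light, and the distance chosen for each network scale is an illustrative assumption, not a measurement:

```python
# Rough one-way propagation delays for the network scales discussed above.
# The distances are illustrative assumptions for a LAN, MAN, and WAN.
C = 3e8  # approximate propagation speed, metres per second

scales = {"LAN": 100.0, "MAN": 50e3, "WAN": 5e6}  # assumed distances in metres

for name, distance in sorted(scales.items(), key=lambda kv: kv[1]):
    delay_us = distance / C * 1e6
    print(f"{name}: {distance / 1e3:g} km -> one-way delay ~{delay_us:.1f} us")
```

A 100 m LAN sees sub-microsecond propagation delay, while a 5000 km WAN path sees tens of milliseconds of round-trip delay; a protocol that exchanges many short messages pays that cost on every exchange.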
1.2.2 Multimedia Protocol Layers
The network protocol stack determines the exact QoS parameters used by the mul-
timedia services, the restrictions placed on transmission, and the QoS guarantees possible
in the network.
The network protocol stack is composed of several layers, which employ a peer-to-
peer communication model [4]. Each layer, except for the lowest, deals with an abstraction
of the channel. Each layer deals with a different problem of transmission: the lower levels
are concerned with generating symbols in the transmission medium, routing, congestion,
and data integrity; these are generally considered to be handled by the network software
and hardware. The higher layers are concerned with access rights, encryption, compression,
and other manipulations; these are generally considered to be part of the application. The
modularity this arrangement permits has many advantages, including design simplicity and
interchangeability. The classic example of a network protocol stack is the Open Systems
Interconnect (OSI) protocol stack (Figure 1.4). In each layer, the transmitted data is
contained in a Protocol Data Unit (PDU). In the transmitter, each layer wraps the PDU of
the next higher layer with a header of its own to generate its PDU. Similarly, in the receiver,
each layer strips off the header of its peer and delivers the data to the next higher layer.
This way, each layer sees only its own PDU and none of the headers of the layers below it,
and data passed to the layers above it is not modified and should not be interpreted.
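The header wrapping and stripping described above can be sketched in a few lines. The layer names and textual headers below are purely illustrative; real headers are binary structures defined by each protocol:

```python
# Each layer's PDU = its own header + the next higher layer's PDU.
# Layer names and header contents are illustrative only.
LAYERS = ["transport", "network", "data-link"]

def encapsulate(data: bytes) -> bytes:
    """Transmitter: wrap the payload with one header per layer, top first."""
    pdu = data
    for layer in LAYERS:
        pdu = f"[{layer}]".encode() + pdu  # prepend this layer's header
    return pdu

def decapsulate(pdu: bytes) -> bytes:
    """Receiver: each layer strips its peer's header, outermost first."""
    for layer in reversed(LAYERS):
        header = f"[{layer}]".encode()
        assert pdu.startswith(header), f"expected {layer} header"
        pdu = pdu[len(header):]
    return pdu

frame = encapsulate(b"hello")
print(frame)  # b'[data-link][network][transport]hello'
assert decapsulate(frame) == b"hello"
```

Note that each layer sees only its own PDU: `decapsulate` never inspects the bytes inside the higher layers' headers, mirroring the peer-to-peer model.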
Of the seven layers, the four lower layers are the most relevant for the resource
allocation problem. These layers are usually handled in kernel-level software and system
hardware, while the top three layers are handled by the application software and not further
dealt with in this thesis. The functions of the bottom four layers are as follows:
• Physical layer: provides source coding and modulation to generate symbols over the
transmission medium to the destination and decode incoming symbols. This allows the
next higher layer to treat the medium as a "bit pipe", where 1's and 0's are transmitted
and received, and not concern itself with symbols and carrier/bit synchronization.
• Data link layer (DLL): provides packet-level synchronization and error control. Head-
ers are added to data received from the network layer in order that its peer DLL may
recognize the beginning and end of separate packets. Medium Access Control may
be implemented for a medium where multiple transmitters and receivers exist. Error
Figure 1.4: OSI Basic Reference Model. Solid lines indicate real interfaces between elements. Dotted lines indicate virtual interfaces between peer network layers. (Example layer functions shown: file transfer and directory at the Application layer; routing, addressing, and multiplexing at the Network layer; error control and medium access at the Data Link layer; timing at the Physical layer.)
control is added to provide more reliable transmission. This allows the next higher
layer to treat the medium as somewhat-reliable links that carry packets.
• Network layer: provides routing and flow control. Routing enables packets to be sent
between the source and a destination that could be many links away. Flow control
enables the packets to be delayed or rerouted in case of failures or congestion. This
allows the next higher layer to treat the medium as a set of possible destinations where
packets can be sent with some reliability.
• Transport layer: provides packetization and connection functions. Source traffic may
be packetized or repacketized to meet network constraints. Traffic from separate
sources may be handled as different connections, which may be treated with further
error protection to ensure reliable transmission. These functions allow the next higher
layer to treat the medium as a set of reliable connections to other processes.
While the OSI model is a useful example, real protocols may split, combine, move,
or omit some of these functions, especially in the higher layers. However, OSI provides a
baseline for protocol design and analysis, as well as a good conceptual model for network
functions.
1.3 Wireless LAN Medium Access Alternatives
A wireless network consists of a number of stations, which transmit and receive
their data using a band of the electromagnetic spectrum. The band may have restrictions on
its use (for example, modulation technique), limited propagation characteristics, noise, and
interference from other sources and other networks. Medium Access Control (MAC) proto-
cols must allow stations to communicate with each other despite the channel impairments
such that the QoS guarantees to the services are met.
To allow internetwork communication, the network should employ a standard ser-
vice model that supports multimedia services. These include Asynchronous Transfer Mode
(ATM), Internet Protocol version 6 (IPv6), and the Integrated Services Internet Protocol
(ISIP).
1.3.1 Access Techniques
Depending on the modulation strategy employed, the MAC protocol may use one
or more multiple access techniques:
• Frequency Division Multiple Access (FDMA): Transmissions are modulated by dif-
ferent carrier frequencies so that they are non-overlapping in the frequency domain.
This is the technique used for almost all analog radio devices, including AMPS cellular
voice service and commercial broadcast radio. While relatively simple to implement,
bandwidth allocations for each service are fixed and cannot respond to changes in
traffic conditions. This makes the technique useful for constant bit rate services such
as voice, but makes transmission of variable bit rate traffic very inefficient.
• Time Division Multiple Access (TDMA): Transmissions are scheduled at different
times so that they are non-overlapping in the time domain. This category includes
systems where the transmitter contends for access to the medium, as well as the case
where fixed timeslots are reserved. New cellular voice standards, such as GSM and
IS-54, are included in this category [1]. This technique is more difficult to implement
than FDMA, but is very flexible and is much more amenable to multimedia traffic.
• Code Division Multiple Access (CDMA): Transmissions are modulated by a digital
code sequence (and, most likely, a sinusoidal carrier), so that the transmissions are
minimally overlapping in the code domain. This category includes some spread-
spectrum cellular voice standards such as IS-95 [1]. Note that a system can be spread-
spectrum without using CDMA: some wireless LAN standards operating in frequency
bands which require spread-spectrum use TDMA exclusively, such as IEEE 802.11 [6].
CDMA allows much freedom in data transmission, since any service is allowed to
transmit at any time. However, mapping QoS parameters meant for TDMA systems
into a variable-interference environment created by CDMA is quite problematic, and
high error rates due to interservice interference may be intolerable for services with
high accuracy requirements.
Hybrid multiple access techniques are also possible. For example, GSM cellular
voice/data incorporates both FDMA and TDMA, as calls are time-division multiplexed over
many different frequency channels [1]. Another alternative hybrid, CDMA and TDMA, will
be analyzed in Chapter 4.
Several multiple access protocols for wireless multimedia access have been proposed
in the open literature. All are TDMA protocols, with some using frequency division between
cells or reverse-direction data channels.
Wireless multimedia MAC protocols also incorporate either dynamic TDMA or a
contention mechanism. In classic static TDMA, all services are allocated regular periodic
timeslots: while suitable for constant bit rate services, it has no provision for variable bit
rate services.
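The contrast between static TDMA and a dynamic, backlog-driven schedule can be sketched as follows. The frame length, service names, and queue backlogs are invented for illustration, and real protocols must also signal the computed schedule to the stations:

```python
# Minimal sketch contrasting static and dynamic TDMA slot assignment.
# Frame length, service names, and backlogs are illustrative assumptions.
SLOTS_PER_FRAME = 8

def static_tdma(services):
    """Each service gets the same fixed share of every frame, used or not."""
    share = SLOTS_PER_FRAME // len(services)
    return [s for s in services for _ in range(share)]

def dynamic_tdma(backlog):
    """Grant slots one at a time to the service with the largest backlog."""
    schedule, queues = [], dict(backlog)
    while len(schedule) < SLOTS_PER_FRAME and any(queues.values()):
        s = max(queues, key=queues.get)  # most-backlogged service next
        schedule.append(s)
        queues[s] -= 1
    return schedule

print(static_tdma(["voice", "video"]))
# voice keeps its slots even when idle; video cannot claim them
print(dynamic_tdma({"voice": 1, "video": 6}))
# the variable bit rate video service absorbs the unused capacity
```

The static schedule is what the text calls suitable for constant bit rate services; the dynamic variant reallocates per frame, which is what variable bit rate traffic requires.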
1.3.2 Access Topologies
A MAC protocol may also be classified by the network topology it supports. The
two main types of topologies for MAC protocols are:
• Distributed: A distributed protocol treats all stations as equals, with equal control
over access to the medium. This is the model used in many computer networks such as
Ethernet, where the stations are complex and have no obvious hierarchy. Distributed
protocols can generally support dynamic networks, where network topologies change
frequently, quite easily. However, the lack of a hierarchy may make QoS guarantees
very difficult to enforce (Figure 1.5).
• Centralized: A centralized network has a designated base station, or controller, which
arbitrates to some extent the access to the network for all the stations. The other
stations, called remotes, must obey the base's access control. This is the model used
by voice cellular and cordless systems, where many limited-function stations (the
handsets) must access a central, more complex device (the exchange), and a hierarchy
is clear. Centralized architectures can usually support QoS guarantees more easily
than dynamic architectures. However, the placement of base stations usually requires
installation, and cannot deal with dynamic changes (Figure 1.6).
Certain overlapping of the two architectures is possible: for example, a system
with a distributed network topology may elect a base station from its stations and imple-
ment a centralized MAC methodology. Similarly, a centralized system may use the base
station primarily as an access point to other networks, and implement a distributed MAC
methodology.
Figure 1.5: Distributed network architecture. All units are peers.
Figure 1.6: Centralized network architecture. Communication between units is controlled by a base station.
1.3.3 Access Scheduling
TDMA and TDMA-hybrid protocols may arbitrate access to the medium in several
ways, each not mutually exclusive of another:
• Polling: In a centralized network, a designated base station allows other stations to
transmit by sending them a poll message. The amount of data that may be sent from
one station may be limited by the protocol, or it may transmit all that it has. The
polling algorithm used by the base station has a large effect on the performance of
the system. The polling algorithm may enforce a fixed polling order, or may poll
dynamically based on system conditions. Designs that make use of polling include
the Institute of Electrical and Electronic Engineers (IEEE) 802.11 wireless LAN pro-
posal [6]; Distributed-Queue Request Update Multiple Access (DQRUMA) [7];
and sectorized round-robin polling [8].
• Reservation: In both centralized and distributed networks, a station may attempt
to reserve access on the channel for future transmissions by generating a reservation
message. The reservation message is a short transmission stating a station's intent
to transmit. Depending on the protocol and the type of service, the station may
reserve transmission time for a single packet, multiple packets, or an entire connection.
In a centralized network, a remote station may make reservations with the base in
order to allow the base to schedule transmissions efficiently. In a wireless distributed
system, reservations by way of a Request to Send/Clear to Send (RTS/CTS) protocol
may improve efficiency by preventing the Hidden Terminal problem (Section 1.2.1).
Widely-known systems using reservation are Dynamic TDMA/Time Division Duplex
(DTDMA/TDD) [9, 10] and DQRUMA.
• Contention: Both centralized and distributed networks may use contention. Each
station transmits without explicit permission from other stations. In order to avoid
collisions, where two or more stations generate overlapping transmissions, the net-
work may use channel sensing protocols to determine that the channel is idle before
transmitting data. The previously-mentioned IEEE 802.11 makes use of contention
for certain modes of operation.
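The polling alternative above can be sketched in a few lines. The remote names, queue contents, and per-poll limit below are hypothetical and are not drawn from any of the cited protocols:

```python
# Toy fixed-order polling pass for a centralized network: the base polls
# each remote in turn, and a remote may answer with at most MAX_PER_POLL
# packets. All names and the limit are illustrative assumptions.
MAX_PER_POLL = 2

def poll_cycle(queues):
    """One polling pass over all remotes; returns (remote, packet) pairs."""
    sent = []
    for remote in queues:                      # fixed polling order
        burst = queues[remote][:MAX_PER_POLL]  # remote answers the poll
        del queues[remote][:len(burst)]
        sent.extend((remote, pkt) for pkt in burst)
    return sent

queues = {"A": ["a1", "a2", "a3"], "B": [], "C": ["c1"]}
print(poll_cycle(queues))  # [('A', 'a1'), ('A', 'a2'), ('C', 'c1')]
print(queues)              # A still holds 'a3' for the next cycle
```

A dynamic polling algorithm would reorder or skip remotes based on system conditions instead of iterating in a fixed order, which is exactly the design freedom the text attributes to the base station.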
1.4 Multimedia Network Service Models
Two multimedia service models have gained prominence: ATM [12] and Internet
Protocol (IP) [13]. ATM, developed by the International Telecommunications Union (ITU,
formerly CCITT) and further developed by an industrial consortium called the ATM Forum,
was originally intended as a multimedia transmission protocol for large, high bandwidth
optical-transmission-based systems. Since its inception, the scope of the protocol has been
broadened to include lower bandwidth systems with electrical and wireless interfaces. IP
version 6 (IPv6) and Integrated Services IP (ISIP), developed by the Internet Consortium,
are intended to replace the current IPv4 protocol used on the Internet.
Both ATM and IP service models have their own protocol stack, which roughly
follows the OSI standard model (Figure 1.4). The Medium Access Control for wireless
networks is a modification to the standard protocol stack at the proper level.
1.4.1 Asynchronous Transfer Mode (ATM)
ATM was first introduced as a solution for integrated broadband multimedia op-
tical communications. The protocol was designed to smoothly interleave different services
and transport them with little processing through a high bit rate "backbone" channel. It is
designed to support voice and video connections very well, and is essentially a telephony-
driven technology.
In order to maintain compatibility and eliminate internetworking through the ac-
cess layer, various proposals have been made for ATM access over lower bandwidth, more
error-prone media such as twisted pair and wireless. The ATM access technologies must
preserve the essential characteristics of ATM over a substantially different medium than
was originally intended.
Multimedia transmissions in a standard ATM network are carried in fixed-length
cells, which are 53-octet packets containing 48 octets of data and 5 octets of header infor-
mation (Figure 1.7). No standards yet exist for a wireless ATM cell, whose header may be
compressed to conserve bandwidth and carry extra fields for wireless functions.
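As a sketch of the standard cell format, the fragment below packs a 53-octet cell using the usual UNI header field widths (4-bit GFC, 8-bit VPI, 16-bit VCI, 3-bit payload type, 1-bit CLP) and computes the HEC as the ITU-T CRC-8 (generator x^8 + x^2 + x + 1) over the first four header octets, XORed with the coset value 0x55. The field values chosen are arbitrary:

```python
# Pack a standard 53-octet ATM cell: 5-octet header + 48-octet payload.
def crc8(data: bytes, poly: int = 0x07) -> int:
    """Bitwise CRC-8 with generator x^8 + x^2 + x + 1 (0x07)."""
    crc = 0
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = ((crc << 1) ^ poly) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc

def atm_cell(gfc: int, vpi: int, vci: int, pt: int, clp: int,
             payload: bytes) -> bytes:
    assert len(payload) == 48, "ATM payload is always 48 octets"
    # UNI header layout: GFC(4) | VPI(8) | VCI(16) | PT(3) | CLP(1)
    word = (gfc << 28) | (vpi << 20) | (vci << 4) | (pt << 1) | clp
    header4 = word.to_bytes(4, "big")
    hec = crc8(header4) ^ 0x55  # HEC octet protects the other four
    return header4 + bytes([hec]) + payload

cell = atm_cell(gfc=0, vpi=1, vci=42, pt=0, clp=0, payload=bytes(48))
assert len(cell) == 53
```

The fixed 5:48 header-to-payload ratio is what makes ATM cells cheap to switch in hardware; a wireless ATM header, as noted above, might compress these fields and add wireless-specific ones.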
To quantify the QoS guarantees, services in ATM are assigned to one of several
different service classes (Table 1.1). The classes are:
• Constant Bit Rate (CBR): These services generate a constant rate of traffic, and
require limits on delay and delay jitter. These services include constant rate voice
and constant rate video services.
Figure 1.7: Asynchronous Transfer Mode cell format. Index numbers 0 to 52 represent octets in order of transmission. (Header fields: GFC: Generic Flow Control; VPI: Virtual Path Identifier; VCI: Virtual Channel Identifier; PLT: Payload Type; CLP: Cell Loss Priority; HEC: Header Error Check.)
• Variable Bit Rate (VBR): These services generate a variable rate of traffic. Real-time
VBR (RT-VBR) services require limits on delay and delay jitter, while non-real-
time VBR (nRT-VBR) services only require limits on cell loss. The arrival process
of the traffic is constrained by policing by the ATM network. Real-time services
include variable rate voice and variable rate video, while non-real-time services include
transaction data.
• Available Bit Rate (ABR): These services do not require specific delay and delay jitter
performance, but require a certain level of throughput. They do not generate traffic
at a fixed rate, but may generate traffic at a rate specified by the network. This allows
the network to throttle the traffic generated by these services depending on congestion
conditions in the network. These services would be ATM-aware data transfer services.
• Unspecified Bit Rate (UBR): These services do not require delay and delay jitter
performance, and tolerate cell losses well. They may generate traffic at any rate they
wish. However, the ATM network gives no guarantees on their delivery, and may drop
the cells for any reason. These services would likely include IP-over-ATM services.

                                CBR   RT-VBR   nRT-VBR   ABR   UBR
Cell Loss Rate                  yes   yes      yes       yes   no
Cell Transfer Delay             yes   yes      no        no    no
Cell Delay Variation (Jitter)   yes   yes      no        no    no
Flow Control                    no    no       no        yes   no

Table 1.1: ATM service classes and Quality of Service provisions.

The standard ATM protocol stack is similar to the OSI protocol stack (Figure 1.8).
However, in ATM the stack is divided into a user plane, a control plane, and a management
plane. Information in the management plane, such as fault and configuration data, is not
constrained by the peer-to-peer OSI model. In a wireless ATM network, the ATM layer is
subdivided to perform further error control and MAC functions. These functions are placed
in parallel with the standard protocol stack (Figure 1.9).

Figure 1.8: Standard ATM protocol stack. (Shown: management, user, and control planes; user and control higher layers; ATM Adaptation Layer; and physical layers.)

Figure 1.9: Wireless ATM protocol stack, with Data Link and MAC sublayers.
Figure 1.10: IP protocol stack.
ISIP and IPv6 are extensions of the current IP version 4 protocol in worldwide
use on the Internet [14, 13, 15, 16]. While multimedia applications such as telephony and
teleconferencing have been introduced on the Internet using the IPv4 protocol, these appli-
cations must rely on best-effort service. The new IP protocols will support these applica-
tions better, because of increased flexibility in priority allocation and improved addressing.
IP is the natural evolution path of the Internet, and is essentially a data industry-driven
technology.
The main purpose of the ISIP protocol is to develop extensions to the IP service
model to support multimedia traffic, while IPv6 is intended to be a replacement for IPv4
which supports prioritized traffic and extended addressing. Thus, ISIP is primarily a set
of service definitions and QoS contract specifications, while IPv6 is much more a concrete
standard of packet format, addressing methods, and management protocols. The ISIP and
IPv6 working groups are independent efforts, and it is unclear if, when, and how the groups
will cooperate. Nevertheless, the ISIP service model should serve as a guide to the types of
QoS contracts that would be made available in a next-generation multimedia IP network.
The IP protocol stack is similar to the OSI stack (Figure 1.10). The Medium
Access Control functions are placed in the lower layers (Figure 1.11).
Figure 1.11: Wireless IP protocol stack, showing lower layer functions (Logical Link Control, Medium Access Control, and Physical).
Congestion controlled       0 (lowest) . . . 7 (highest)
Non-congestion controlled   8 (lowest) . . . 15 (highest)

Table 1.2: IPv6 service classes and priority levels.
IP version 6
Transmissions in an IPv6 network are carried in variable-length packets, incorporating
a variable-length header and variable-length data segment (Figure 1.12).
Services in IPv6 are divided into two traffic classes (Table 1.2):
• Congestion-controlled Traffic: This class is intended for relatively delay-insensitive
services which may be ordered to back off in case of heavy network load. Services
are prioritized within this class, with Internet control traffic as highest priority, and
unattended data transfer and filler data as lowest priority.
• Non-congestion-controlled Traffic: This class is intended for real-time services with
a relatively constant data rate, which are exempt from network congestion control.
Again, services are prioritized within this class. However, non-congestion-controlled
traffic does not imply higher priority than congestion-controlled traffic.
Since ISIP is primarily an abstract service model, a single packet format has not
been specified. The primary work in ISIP has been the development of multimedia traf-
fic classes. An ISIP traffic contract is broken into Tspec parameters, which are traffic
characteristics, and Rspec parameters, which are bandwidth and delay requirements.
Service categories in ISIP are defined differently than in IPv6, and are currently
being added and revised. As of this writing, they include:
• Guaranteed Service: The highest class of service, this class guarantees a maximum
queueing delay based on the service's leaky-bucket parameters [17]. The service must
supply a maximum packet size M bytes, and a minimum policed packet size m bytes.
The minimum policed packet size allows the network to treat any smaller packet
Figure 1.12: Internet Protocol v6 packet format. (The fixed header is laid out in 32-bit words and carries VER: Version; PRI: Priority; a Flow Label; the Payload Length; NXT: Next Header; HOP: Hop Limit; and the Source and Destination Addresses. It may be followed by optional extension headers, including hop-by-hop options, routing, fragment, authentication, encapsulation security payload, and destination options headers, and then the payload.)
than m as if it were of size m, which simplifies certain network operations and
discourages small, wasteful packets.
• Committed Rate: This class guarantees a service a minimum transmission rate, but
does not guarantee any specific maximum delay or queueing backlog [18]. It requires
the same Rspec parameters as Guaranteed Service.
• Controlled Load: This class attempts to provide a service with the performance
available in a lightly-loaded network, and provides no guarantees on either maximum
delay or minimum bandwidth. Controlled-load service is intended for current IPv4
real-time services, which function acceptably in best-effort systems, but suffer perfor-
mance degradation in heavily-loaded networks [19].
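The Guaranteed Service bound above is computed from leaky-bucket (token-bucket) policing parameters. The following is a minimal sketch of such a policer, applying the minimum policed size m as described for Guaranteed Service; the rate, bucket depth, and packet sizes are invented for illustration:

```python
# Illustrative token-bucket ("leaky-bucket") policer of the kind the
# Guaranteed Service delay bound is computed from. The parameter names
# echo the Tspec discussion above; the numeric values are assumptions.
class TokenBucket:
    def __init__(self, rate_bps: float, depth_bytes: float, min_policed: int):
        self.r, self.b, self.m = rate_bps, depth_bytes, min_policed
        self.tokens = depth_bytes  # bucket starts full
        self.t = 0.0               # time of last arrival

    def conforms(self, arrival_time: float, size: int) -> bool:
        """Accept the packet iff enough token credit has accumulated."""
        size = max(size, self.m)  # packets smaller than m count as size m
        self.tokens = min(self.b, self.tokens + (arrival_time - self.t) * self.r)
        self.t = arrival_time
        if size <= self.tokens:
            self.tokens -= size
            return True
        return False

tb = TokenBucket(rate_bps=1000.0, depth_bytes=500.0, min_policed=64)
print(tb.conforms(0.0, 400))  # True: the bucket starts full
print(tb.conforms(0.1, 400))  # False: only ~200 bytes of credit remain
```

A flow that always conforms to rate r and depth b is exactly what lets the network compute a worst-case queueing delay for Guaranteed Service, and the `max(size, m)` step shows why the minimum policed size discourages small, wasteful packets.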
1.5 WLAN Evaluation Criteria
Multimedia wireless LAN systems are complex, and so many performance mea-
surements and comparisons may be made. While the design itself affects which performance
measurements are critical and which are not, this thesis will deal with three main perfor-
mance evaluation areas: QoS delivery, efficiency, and service model compatibility. The
current Internet using IPv4 is used as an example of a deficient multimedia network; while
it performs admirably in its role as a data network and multimedia-over-IPv4 is a fruitful
research topic, it can only be considered an improvised, interim multimedia technology.
• QoS delivery: A network's ability to deliver a service's guaranteed level of QoS is of
primary concern in a multimedia network. A system should be able to carry traffic for
heterogeneous services without contract violation, and should be flexible enough to
deal with a range of different service contracts. For example, the Internet using IPv4
may be flexible in supporting various service contracts using the Real-Time Transport
Protocol (RTP) [20], but is still a best-effort service and cannot maintain guarantees.
The Plain Old Telephone Service (POTS) may offer QoS guarantees, but is inflexible
and only offers one level of service.
• Efficiency: A network should also use its resources efficiently. Efficiency can be ex-
pressed in two ways: high system throughput, and accurate QoS contract delivery.
High throughput is achieved through low protocol overhead and efficient medium ac-
cess control. Inaccurate QoS delivery causes resource waste by requiring a higher level
of service than actually necessary for a given service. For example, RTP provides low
delay by essentially overspecifying the bandwidth required by a service and sending
redundant packets, such that a fraction of the packets arrive within the service's de-
lay bound. This extra bandwidth could be used for other services in a more efficient
network.
• Service model compatibility: Finally, the network should be compatible with ATM and
IP resource models. The network should use the same performance guarantee metrics
as found in ATM or IP service contracts, such as delay and packet/cell error rates.
As well, it should use the same or similar source characterization metrics as in ATM
or IP contracts, such as average rates, peak rates, and burst lengths.
This thesis is concerned with several problems in multimedia WLAN resource al-
location in the lower protocol levels. Solutions to resource allocation problems for different
access techniques and topologies are proposed using different access scheduling methodolo-
gies. These alternatives include (Figure 1.13):
• Access Techniques: The system's multiple access technique may consist of TDMA,
FDMA, CDMA, or a hybrid of the three.
• Topologies: The system may be of centralized architecture or distributed architecture.
• Technologies: The system's physical layer may be either spread-spectrum or "nar-
rowband" (non-spread-spectrum).
Chapter 2 deals with the theoretical problems of Fair Queueing, a technique for
TDMA access assuming a centralized architecture. Chapter 3 deals with a practical Fair
Queueing architecture for TDMA, centralized access using a polling scheduling methodol-
ogy. This is a TDMA, centralized, narrowband system.
Chapter 4 deals with the theoretical problems of hybrid TDMA/CDMA access
techniques for wireless multimedia services using centralized topologies. This is a hybrid
TDMA/CDMA, centralized, spread-spectrum system.
Figure 1.13: System design alternatives for multimedia wireless LAN networks. (Access Techniques: Time Division, Frequency Division, and Code Division Multiple Access; Topologies: centralized and distributed; Technologies: spread spectrum and narrowband.)
Chapter 5 deals with the FDMA channel access problem for the new SUPERNet
frequency allocation for unlicensed high-bandwidth networks. This is a hybrid FDMA/-
TDMA, centralized or distributed, narrowband system.
1.6.1 New Contributions
Section 2.1 is an introduction to Fair Queueing and contains no new contribu-
tions; however, the service share allocation algorithm in Section 2.2 is new material, and
determines minimum Fair Queueing service shares to guarantee heterogeneous QoS levels
for delay-sensitive systems. Chapter 3 introduces Distributed Fair Queueing as a wireless
architecture and presents simulation results for heterogeneous delay-sensitive services, cal-
culates the expected resource usage of services given the nature of their protocol overhead,
and develops a reservation mechanism for remote best-effort traffic. Chapter 4 defines the
architecture for a hybrid CDMA/TDMA wireless multimedia network, and determines the
optimal differential power levels for services in order to maximize a definition of resid-
ual capacity, then introduces two TDMA scheduling mechanisms and compares theoretical
performance for voice/data service mixes. Chapter 5 introduces the architecture of a dis-
tributed unlicensed network, and develops a channel allocation protocol based on minimal
group interactions supporting both unrelated and cellular station groupings, and calculates
theoretical performance for inter-group interference.
Chapter 2
Fair Queueing and Generalized
Processor Sharing
Fair Queueing (FQ) refers to a class of scheduling algorithms that allow network
resources to be allocated to services in controlled manners. A FQ scheduling algorithm
is able to limit the share of resources that a single service may use. By doing this, all
services in the network may have a guaranteed level of resource utilization. While originally
developed for homogeneous data networks, the theory has been extended to multimedia
networks.
This chapter has several goals:
• Introduce the previous literature on Fair Queueing;
• Propose a new Service Share Allocation Algorithm, for the calculation of Fair Queue-
ing parameters based on service policing parameters and delay requirements;
• Provide Service Share Allocation algorithm complexity information and parameter
translation for ATM and IP services.
Fair Queueing and the Service Share Allocation algorithm provide a theoretical
basis for the Distributed Fair Queueing WLAN protocol introduced in the next chapter.
Figure 2.1: FIFO queue with three services.
2.1 Literature Review
FQ theory has shown considerable evolution in the last decade. The original
proposals address homogeneous networks, with only one QoS level. The homogeneous theory
was then extended to heterogeneous priority levels suitable for multimedia networks. This
section introduces the original FQ proposal, then explains Generalized Processor Sharing as
a theoretical model for multimedia services. A packetized version of Generalized Processor
Sharing, called Self-Clocked Fair Queueing, is introduced.
2.1.1 Homogeneous Service Priority
The first FQ proposal was motivated by the desire to create a fair homogeneous
datagram network [21]. Up to this point, research in scheduling for data networks had
focused on buffer minimization. This proposal, however, asserted that buffer minimization
was not an overriding factor in scheduling and other factors should be considered as well.
Interestingly, the fairness question is derived from an economic argument called the
tragedy of the commons [21], where individuals using a common resource in a self-interested
manner may overwhelm the common resource. Consider a First-In First-Out (FIFO) input
queue to a network, shared by several different services, A, B, and C (Figure 2.1). In order
for a service to maximize its throughput, it should send as many packets as possible (service
A). This allows the service to "crowd out" the other services; in essence, this service may
transmit at the expense of other services. However, if the other services attempt the same
greedy strategy and flood the network with packets, A's packets will now be crowded out by
its competitors. Each service will then attempt to flood the network with its own packets,
to avoid being crowded out by the other services. The network then must attempt to service
a large volume of traffic, and may collapse under the load. In this way, the optimal strategy
for an individual service is not optimal for the whole network.
Figure 2.2: Round-robin queue with three services.
Figure 2.3: Round-robin queue with variable size packets.
If round-robin queueing is introduced, the optimal strategy changes (Figure 2.2).
In this case, a service that generates a large amount of traffic interferes minimally with
those that generate small amounts of traffic. Such a service will only delay its own packets.
For a service's packets to experience minimal delay through the network, the service should
maintain a queue length of one in each switch. In this way, each individual service's optimal
strategy does not detrimentally affect the operation of the whole network.
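The round-robin behaviour of Figure 2.2 can be checked with a tiny simulation (the packet counts are invented): a flooding service delays only its own packets.

```python
# Round-robin service over three per-service queues: the greedy service A
# queues many packets, but B and C are still served in the first round.
from collections import deque

queues = {"A": deque(f"A{i}" for i in range(6)),  # greedy service
          "B": deque(["B0"]),
          "C": deque(["C0"])}

order = []
while any(queues.values()):
    for q in queues.values():  # one packet per service per round
        if q:
            order.append(q.popleft())

print(order)
# ['A0', 'B0', 'C0', 'A1', 'A2', 'A3', 'A4', 'A5']:
# B's and C's packets depart in the first round despite A's backlog
```

Under FIFO, B0 and C0 would instead wait behind all six of A's packets, which is precisely the crowding-out incentive described above.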
While the FQ strategy controls services' access to network resources, it does not
control the number of services allowed into the network. A malicious sender may initiate
many services to send its data, instead of the single service required. This allows the set of
services greater access to network resources, while the access for any single service is still
limited. The question of malicious senders has been explicitly ignored for this analysis, as
it is more properly treated as a network security problem.
Variable packet size was not initially considered, but has considerable implications
for fairness [22]. Consider a round-robin scheduler with variable size packets (Figure 2.3).
While the round-robin scheduler may allocate resources fairly based on the number of pack-
ets, service A is obviously benefitting to the others' detriment by queueing larger packets.
To remove this unfairness, the idea of bit-by-bit round-robin scheduling was introduced. In
principle, only one bit is serviced from each user by the scheduler.
We define R(t) to be the number of rounds completed since time t = 0. The
number of rounds completed depends on the number of active sessions at each time t,
N_ac(t). Therefore, the rate at which rounds are executed is inversely proportional to
N_ac(t):

    dR(t)/dt = C / N_ac(t),                                          (2.1)

where C is the bit rate of the transmission channel served by the scheduler. Therefore, a
service i with a packet of size L_i which starts transmitting at time t_0 will require L_i
rounds of service; therefore, its last bit of service will occur at time t such that
R(t) = R(t_0) + L_i. If we now define S_i^k and F_i^k as the values of R(t) when packet k
of service i starts and finishes service respectively, we can state that

    F_i^k = S_i^k + L_i^k,                                           (2.2)

and, if a_i^k is the packet's arrival time,

    S_i^k = max(F_i^{k-1}, R(a_i^k)).                                (2.3)

Note that the start time in (2.3) is determined by the arrival time of the packet if the queue
is empty, or the finish time of the last packet if the queue is not.
In practice, of course, such bit-by-bit scheduling is infeasible in a datagram network.
However, the behaviour may be simulated by packet-by-packet transmission. In
this case, the scheduler calculates the theoretical finishing times for the packets in the
queues, and transmits the packet with the smallest F_i^k. This may be done either preemptively
(that is, a new packet with smaller F_i^k than the currently transmitting packet will
preempt that packet's transmission) or non-preemptively. It has been proven that over sufficiently
long time periods a non-preemptive packetized algorithm asymptotically approaches
the bit-by-bit algorithm in performance [23].
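The non-preemptive packetized discipline can be sketched in a few lines of Python. This is an illustrative sketch, not code from the thesis: the helper name and the static-backlog simplification (all packets queued at t = 0, so each packet's start tag equals its predecessor's finish tag) are assumptions made for the example.

```python
def fq_transmission_order(queues):
    """Finish-tag order for packet-by-packet fair queueing, a sketch.

    `queues` maps a service name to its list of packet lengths, all
    assumed backlogged at t = 0.  With every queue busy from the start,
    each packet's start tag equals its predecessor's finish tag, so the
    finish tag is just the running sum of that service's packet lengths
    (in rounds).  Packets are sent non-preemptively in order of the
    smallest finish tag.
    """
    tagged = []
    for name, lengths in queues.items():
        finish = 0
        for k, length in enumerate(lengths):
            finish += length             # F = S + L, with S = previous F
            tagged.append((finish, name, k))
    return [(name, k) for _, name, k in sorted(tagged)]

# Service A queues large packets; B and C are still served promptly.
tx_order = fq_transmission_order({"A": [1000, 1000], "B": [100, 100], "C": [100]})
```

Despite service A's much larger packets, the small packets of B and C carry smaller finish tags and are transmitted first, which is exactly the fairness property discussed above.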
2.1.2 Heterogeneous Service Priority
In order to generalize the FQ results to heterogeneous, multimedia services, a
priority mechanism must be established. The priority mechanism allows different services
to be treated in different ways, depending on their QoS requirements. Generalized Processor
Sharing (GPS) [24] provides a theoretical framework for multimedia scheduling. A practical
implementation of GPS for packetized networks is Self-Clocked Fair Queueing (SCFQ) [23];
this protocol allows the use of GPS theoretical results in a packetized multimedia network.
Generalized Processor Sharing
GPS was introduced by Parekh and Gallager in a 1992 paper [24]. GPS assumes
that information is available on the amount of resources to be allocated to each session.
The resources are divided proportionally to each active session on that basis.

Assume that each service i has a unique associated transmit queue. For each
service i there is associated a positive number φ_i, named the service share of the service.
The share of the total resource, g_i, given to each service at any point in time is defined as:

    g_i = φ_i C / Σ_{j∈S} φ_j,    (2.4)

where C is the bit rate of the channel served by the scheduler and is constant, and S is the
set of currently backlogged sessions.
For example, consider two backlogged sessions. If φ_1 = φ_2, each session would be
served at rate C/2; if the shares are unequal, the service rates divide in proportion to the shares.

As with bit-by-bit round-robin scheduling, such a scheme is impractical and a
packet-oriented approach is needed. This is called Packet-by-Packet GPS (PGPS). As with
FQ, the service order of the packets is based on the least finishing time F_i^k of the queued
packets. The finishing times are calculated by simulating a "pure" GPS system.
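The share definition above (g_i = φ_i C / Σ φ_j over the backlogged set) can be stated directly in code. A minimal sketch, with invented session names; the function assumes the backlogged set S is supplied by the caller.

```python
def gps_rates(shares, backlogged, capacity):
    """Instantaneous GPS rates, a direct sketch of the share definition:
    each backlogged session i is served at g_i = phi_i * C / sum(phi_j),
    the sum running over the currently backlogged set S.
    """
    total = sum(shares[j] for j in backlogged)
    return {i: shares[i] * capacity / total for i in backlogged}

# Session s3 is idle, so its share is redistributed 2:1 between s1 and s2.
g = gps_rates({"s1": 2.0, "s2": 1.0, "s3": 1.0}, {"s1", "s2"}, capacity=3.0)
```

Note that an idle session's share is not wasted: the denominator shrinks, so the remaining backlogged sessions absorb the spare capacity in proportion to their shares.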
The concept of virtual time is used to implement PGPS. The virtual time v(t)
indicates how quickly or slowly a backlogged queue is serviced, due to the number of backlogged
queues. If dv(t)/dt is high, a service has little competition from other services for
scheduler service; if dv(t)/dt is low, a service has much competition. Quantitatively, if t_j is
defined as a time when the membership of the set of backlogged services S changes, then:

    v(t_j + τ) = v(t_j) + τ C / Σ_{i∈S} φ_i    (2.5)

for any time interval τ such that the membership of S does not change in
the interval [t_j, t_j + τ). The virtual time function, therefore, is continuous, piecewise linear, and monotonically
increasing in each piecewise-linear segment. An example virtual time curve is given in
Figure 2.4. In a GPS bit-by-bit server, each queue i receives service at rate φ_i dv(t_j + τ)/dτ.
This is analogous to R(t) in (2.3) in the case where all φ_i are equal. Similarly to (2.2) and
(2.3) we have

    F_i^k = S_i^k + L_i^k / φ_i    (2.6)

    S_i^k = max(F_i^(k-1), v(a_i^k)),    (2.7)

where L_i^k is the length of the kth packet of service i.

Figure 2.4: Virtual time in a GPS system. φ_1 ... φ_4 indicate the services' allocated shares.
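The piecewise-linear virtual time can be tracked numerically between changes of the backlogged set. A sketch under the definition above (v grows at rate C divided by the sum of the backlogged shares); the event-list input format is invented for the example.

```python
def virtual_time(events, shares, capacity=1.0):
    """Track the piecewise-linear GPS virtual time v(t), a sketch.

    Between changes of the backlogged set S, v grows at rate
    capacity / sum(phi_j for j in S).  `events` is a time-sorted list
    of (t_j, backlogged_set) pairs (an invented input format); the
    function returns v evaluated at each event instant.
    """
    v, t_prev, active = 0.0, 0.0, set()
    values = []
    for t, backlogged in events:
        if active:                       # v is flat while nothing is backlogged
            v += (t - t_prev) * capacity / sum(shares[j] for j in active)
        values.append(v)
        t_prev, active = t, set(backlogged)
    return values

# One session backlogged on [0, 1), two on [1, 3): the slope halves.
vts = virtual_time([(0.0, {"a"}), (1.0, {"a", "b"}), (3.0, {"a", "b"})],
                   {"a": 1.0, "b": 1.0})
```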
The authors were able to derive worst-case packet delays and burstiness for services
policed with a leaky bucket algorithm.
Self-Clocked Fair Queueing
In a 1994 paper, the Self-Clocked Fair Queueing (SCFQ) algorithm was introduced,
which alleviated the computational complexity of GPS. The GPS algorithm requires calculation
of the virtual time by the scheduler, which tries to simulate continuous GPS while
PGPS is implemented. The SCFQ algorithm dispenses with the virtual time calculation
while scheduling the packets in a near-optimal manner.
In SCFQ the virtual time v(t) is replaced with a new definition of the virtual time,
denoted v̂(t). Instead of representing the work done in a hypothetical continuous service
model, v̂(t) represents the work actually done in the system. The virtual time is now defined
as the finishing time (now referred to as the service tag) F of the packet currently in service.
The start and finish equations become:

    S_i^k = max(F_i^(k-1), v̂(t_i^k))    (2.8)

    F_i^k = S_i^k + L_i^k / φ_i,    (2.9)

where t_i^k is the arrival time of the kth packet of service i.

Figure 2.5: SCFQ virtual time v̂(t) for an example arrival/departure sequence. a_i^j indicates the arrival of packet j from service i, while d_i^j indicates the departure of packet j from service i.
While this model is considerably less complex to implement, it cannot discriminate
between the arrival times of packets which arrive during the same packet transmission.
While v(t) in GPS is piecewise-linear (Figure 2.4), v̂(t) is a piecewise-constant function
(Figure 2.5). This means that packets arriving during the same packet transmission have
the same value of v̂(t). Therefore, packets may be sent in a different order than GPS.
However, as explained later in Section 2.2.4, delay bounds for services in a SCFQ system
can be shown to exhibit a maximum difference from GPS delay bounds, the magnitude
of which depends on the maximum packet length allowed in the network (cf. Equation
(2.29)).
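A small event-driven sketch of SCFQ tagging: each arriving packet is stamped with max(previous tag of its flow, tag of the packet in service) plus L/φ, and packets are sent non-preemptively in tag order. The flow names and the event-loop structure are assumptions made for the example, not the thesis's implementation.

```python
import heapq

def scfq_schedule(arrivals, shares, capacity=1.0):
    """Event-driven SCFQ sketch (not the thesis's code).

    `arrivals` is a list of (arrival_time, flow, length) tuples.  Each
    packet receives the tag F = max(F_prev of its flow, v_hat) + L/phi,
    where v_hat is the tag of the packet currently in service (0 before
    any service).  Packets are transmitted non-preemptively in order of
    smallest tag.  Returns the transmission order as (flow, length).
    """
    arrivals = sorted(arrivals)
    last_tag, queue, order = {}, [], []
    t, v_hat, seq, i = 0.0, 0.0, 0, 0
    while i < len(arrivals) or queue:
        # Admit every packet that has arrived; if idle, jump to the next one.
        while i < len(arrivals) and (arrivals[i][0] <= t or not queue):
            at, flow, length = arrivals[i]
            t = max(t, at)
            tag = max(last_tag.get(flow, 0.0), v_hat) + length / shares[flow]
            last_tag[flow] = tag
            heapq.heappush(queue, (tag, seq, flow, length))
            seq += 1
            i += 1
        tag, _, flow, length = heapq.heappop(queue)
        v_hat = tag                      # virtual time := tag of packet in service
        order.append((flow, length))
        t += length / capacity           # non-preemptive transmission
    return order

# Flow b's short packet earns the smallest tag and is sent first.
sent = scfq_schedule([(0.0, "a", 2.0), (0.0, "a", 2.0), (0.0, "b", 1.0)],
                     {"a": 1.0, "b": 1.0})
```

The key simplification over PGPS is visible in the loop: v_hat is read off the packet in service rather than integrated from the backlogged-set history, which is exactly the complexity reduction described above.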
2.2 Resource Allocation for GPS Systems
In the analysis of GPS systems, it is assumed that the service share φ_i for service i
is available, and the performance of the system is then analyzed. However, in a real system,
the reverse is desired: given a specified performance level for the services in the network,
we wish to calculate the service shares required to achieve that level. Delay bounds for
services constrained by leaky-bucket [24, 25] and exponentially-bounded burstiness (EBB) arrival models [26] have been developed given the services' resource allocations and their
arrival parameters. However, the problem of calculating resource allocation based on delay
tolerances and arrival parameters had not been addressed in the seminal literature.
This problem of resource allocation for delay-sensitive multimedia services with
leaky-bucket-constrained arrival models and deterministic delay bounds is addressed in this
section. It is assumed that arrival process constraints are translated into GPS leaky bucket
parameters. The algorithm will derive the minimal resource allocation for the services. Any
remaining resources may be used on other, delay-insensitive services. First, equations for
the minimum required service shares of leaky-bucket-limited services are derived, given their
leaky bucket parameters, delay tolerances, and the universal service curve of the network
(to be defined later). Then, an iterative algorithm using these equations is developed which
constructs the universal service curve and therefore defines the minimum required service
shares for each delay-sensitive service in the network.
2.2.1 GPS Leaky-Bucket Performance
To understand the behaviour of leaky-bucket-limited services in a GPS network,
some theoretical background must be introduced. This section introduces the leaky-bucket-limited
service, the universal service curve which determines worst-case behaviour, and the
concepts of locally-stable and locally-unstable services.
The worst-case delay performance of a GPS system depends on the arrival processes
of the transmitting services. One important class of arrival processes are leaky-bucket-limited
arrival processes (Figure 2.6). These processes have an associated pool of
credits of maximum size σ_i and credit arrival rate of ρ_i credits per second. Each transmitted
bit consumes one credit, and the bucket may hold a maximum of σ_i credits. Any credits
arriving to a full bucket are lost. A bit arriving to an empty credit bucket is discarded. Any
bit that is not discarded is placed in the service's transmit queue to be serviced in FIFO
sequence. Each service may then be characterized by the 3-tuple (σ_i, ρ_i, φ_i). Each service i
has an arrival process A_i(t_1, t_2): this is the number of bits generated by the service in the
interval [t_1, t_2). It is assumed that many bits may be generated by the service at the same
time, and so there may be discontinuities in A_i(0, t). For this analysis, A_i(t_1, t_2), ρ_i and σ_i
are normalized by the bit rate of the channel C, such that a service whose credit generation
rate is C has ρ_i = 1.

Figure 2.6: Leaky-bucket-limited aggregate arrival process. The original arrival process is limited to the arrival process A_i(0, t).
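The (σ_i, ρ_i) bucket just described can be sketched as a simple credit filter. Following the text's simplifying assumption, fractional credits may be spent; the function name and input format are invented for the example.

```python
def leaky_bucket_filter(arrivals, sigma, rho):
    """(sigma, rho) leaky-bucket limiter, a sketch of the model above.

    `arrivals` is a time-sorted list of (time, bits).  Credits accrue
    at rho per second up to a maximum of sigma, the bucket starts full,
    and each admitted bit spends one credit; per the text's simplifying
    assumption, fractional credits may be spent, and traffic arriving
    to an empty bucket is discarded.  Returns the admitted amounts.
    """
    credits, t_prev, admitted = sigma, 0.0, []
    for t, bits in arrivals:
        credits = min(sigma, credits + rho * (t - t_prev))
        t_prev = t
        ok = min(bits, credits)
        credits -= ok
        admitted.append(ok)
    return admitted

# A full bucket admits an initial burst of sigma; afterwards only the
# sustainable rate rho gets through.
passed = leaky_bucket_filter([(0.0, 5.0), (1.0, 5.0)], sigma=4.0, rho=2.0)
```

The example shows the two regimes that drive the worst-case analysis below: an initial burst of at most σ bits, followed by a long-run rate of at most ρ.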
In a stable GPS system such that Σ_j ρ_j < 1, busy periods in the system are of
finite duration as long as the bucket sizes σ_i are all strictly positive. The worst-case delay
and queue backlog for each service is attained if each service is greedy. A greedy service
will transmit as much traffic as fast as it can, so at the beginning of a busy period each
greedy service will start with a full credit bucket and queue as much traffic as possible with
its available credits. Each service will empty its queue at a time called its finishing time, e_i,
and cease transmission.
If all services in the network are greedy and begin transmission simultaneously
with full credit buffers, this causes the maximum load that the system allows and is the
worst-case arrival pattern. The service function that GPS dictates under this worst-case
condition is termed the universal service curve. The finishing order of the services and their
service shares determine the universal service curve, from which the maximum delay and
queue backlog can be graphically determined (Figure 2.7).
Greedy-service analysis for GPS systems assumes that once the service's bucket
has emptied, the service halts transmission. It is assumed then that the service is storing
credits for a subsequent worst-case burst. If it did not do this and continued to transmit at
a reduced rate, the worst-case condition would not reoccur. Thus, GPS assumes periodic
repetition of the above-described worst-case condition, since a one-time occurrence of a
slightly worse condition would not have as much impact on network performance.
For a greedy leaky-bucket-limited service,

    A_i(0, t) = σ_i + ρ_i t,    (2.10)

as the service will expend all its available credits at t = 0 and expend any new ones as
soon as they arrive. Here we allow the expenditure of fractional credits as a simplifying
assumption. The straight line function A_i(0, t)/φ_i is plotted for each service i. Each service
i has an aggregate service function S_i(0, t), which is the total number of bits for service
i that have been transmitted in the interval [0, t), normalized by the channel bit rate C.
Since the channel is shared according to (2.4), the rate at which a service is served is:

    dS_i(0, t)/dt = φ_i / Σ_{j∈S} φ_j

and therefore

    (1/φ_i) dS_i(0, t)/dt = (1/φ_k) dS_k(0, t)/dt  for all i, k ∈ S.

Since S_k(0, 0) = 0 for all k ∈ S, we can integrate to prove:

    S_i(0, t)/φ_i = S_k(0, t)/φ_k = S(0, t),

where S(0, t) is the universal service curve. This is equal to v(t), the GPS virtual time, as
defined in (2.5).
Thus, the virtual time of the system v(t) is a measure of the work accomplished
in the system. The service rate for each service i is

    dS_i(0, t)/dt = φ_i dv(t)/dt,

where v(0) = 0. Since dv(t)/dt is nondecreasing except where it is undefined, the virtual
time function for the worst-case situation is convex; that is, d²v(t)/dt² is always positive
wherever it is defined. Service i's finishing time e_i occurs when A_i(0, e_i) = S_i(0, e_i) and
therefore A_i(0, e_i)/φ_i = v(e_i); after this, the service is idle until the start of the next busy
period. For clarity of presentation, we place the members of the set {e_i} in ascending order as
e_1 ... e_N and define e_0 = 0. Therefore, the worst-case v(t) is a monotonically increasing,
convex function defined in the interval [0, max(e_i)].

Figure 2.7: Universal service curve for three services. Services 2 and 3 are locally stable; service 1 is locally unstable. D_k is the maximum delay experienced by a bit for service k.

As previously stated, all values σ_i, ρ_i, and v(t) are normalized by the system rate
C; therefore, for stability the sum of the average rates Σ ρ_i < 1. This causes no loss of
generality. Under this assumption, the slopes of the N linear segments of v(t) are given as:

    m_k = 1 / (1 - Σ_{i: e_i ≤ e_(k-1)} φ_i),  e_(k-1) ≤ t < e_k.

Therefore, dv(t)/dt = 1 for 0 ≤ t < e_1 and dv(t)/dt ≥ 1 for all t > 0.
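The all-greedy worst case can also be explored numerically: every service starts with a full (normalized) bucket σ_i, generates at rate ρ_i, and the backlogged services share unit capacity in proportion to φ_i until their queues empty. A forward-Euler sketch; the step size, service names, and the requirement Σρ_i < 1 are assumptions of the example.

```python
def universal_curve(services, dt=1e-3):
    """Forward-Euler sketch of the all-greedy GPS worst case.

    `services` maps a name to (sigma, rho, phi), with sigma and rho
    already normalized by the channel rate C.  Each service starts
    with backlog sigma and generates at rate rho; backlogged services
    share unit capacity in proportion to phi, and a service leaves
    when its queue empties (its finishing time e_i).  Requires
    sum(rho) < 1, otherwise some backlog never drains.
    """
    backlog = {i: p[0] for i, p in services.items()}
    finish, t = {}, 0.0
    while backlog:
        total_phi = sum(services[i][2] for i in backlog)
        for i in list(backlog):
            _, rho, phi = services[i]
            backlog[i] += (rho - phi / total_phi) * dt
            if backlog[i] <= 0.0:
                finish[i] = t
                del backlog[i]
        t += dt
    return finish

# Two equal-share services; the small bucket drains first, after which
# the remaining service receives the whole (normalized) channel.
fin = universal_curve({"burst": (0.4, 0.1, 0.5), "smooth": (0.1, 0.1, 0.5)})
```

The simulated finishing times mark the vertices of the universal service curve; the change of slope after the first departure is the m_k behaviour described above.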
Depending on a given service's parameters (σ_i, ρ_i, φ_i), the service may be classed
as locally stable or locally unstable. Locally stable services have a net reduction in their
backlog at all times; that is,

    dA_i(0, t)/dt < dS_i(0, t)/dt  for all t.    (2.18)

Locally unstable services, therefore, may have a temporary increase in their backlog at some
times:

    dA_i(0, t)/dt > dS_i(0, t)/dt  for some t.    (2.19)
The local stability of a service is critical in determining the location of the maximum
delay. From the properties of locally-stable and -unstable services, we can determine
the service shares φ_i and finishing times to allow a maximum delay of exactly D_i in most
cases. Derivations for locally-stable and -unstable services are given separately in the next
two sections.
Locally Stable Services
Examples of locally stable services are services 2 and 3 in Figure 2.7. A more
detailed graph is shown in Figure 2.8. From (2.10), dA_i(0, t)/dt = ρ_i. From (2.16) and
(2.17), dS_i(0, t)/dt ≥ φ_i. From the stability condition (2.18), therefore, ρ_i < φ_i, ρ_i/φ_i < 1,
and 1/φ_i - 1/ρ_i < 0. From this, we can prove the following:

Theorem 1 For a locally stable service i, under the worst-case conditions, the maximally
delayed bit is the last bit of the initial burst, at work level σ_i/φ_i.
Proof: Assume the maximally delayed bit has arrival time t_A = 0 and service time t_S.
Consider a later bit in the service with work measured as σ_i/φ_i + Δw, where Δw > 0.
The later bit will arrive in the queue at time t'_A = t_A + Δw/ρ_i. The bit will be
serviced at time t'_S = t_S + ∫ dw/(dS/dt) ≤ t_S + Δw/φ_i. Therefore, the delay of the bit
will be t'_S - t'_A ≤ (t_S + Δw/φ_i) - (t_A + Δw/ρ_i) = t_S - t_A + Δw(1/φ_i - 1/ρ_i) < t_S - t_A.

Consider an earlier bit in the service with work measured as σ_i/φ_i - Δw. Since
this is part of the original backlog, t'_A = 0. The bit is serviced at time t'_S = t_S - ∫ dw/(dS/dt) < t_S.
Therefore, t'_S - t'_A < t_S - t_A. In either case, t'_S - t'_A < t_S - t_A. □

This result is easy to see graphically, as the maximum delay is the widest horizontal
separation between the S_i and A_i curves.

Figure 2.8: Locally stable service and maximum delay.
This allows us to calculate the service share necessary for a given value of maximum
delay D_i given the leaky-bucket parameters of the service, σ_i and ρ_i. We assume for now
that we have already constructed v(t) up to the point where maximum delay occurs: that
is, we know the slope of the segment m_k = dv/dt where maximum delay occurs, and the
coordinates (e_(k-1), v_(k-1)) where this segment started.

From Figure 2.8, finding the intercept for a straight line gives:

    σ_i/φ_i = v_(k-1) + m_k (D_i - e_(k-1))    (2.20)

and therefore

    φ_i = σ_i / (v_(k-1) + m_k (D_i - e_(k-1))).    (2.21)

We can calculate the finishing time of the service, assuming there are no other services that
finish before it, by calculating where the arrival curve and the service curve would intersect:

    (σ_i + ρ_i e_i)/φ_i = v_(k-1) + m_k (e_i - e_(k-1))    (2.22)

and therefore

    e_i = (σ_i - φ_i (v_(k-1) - m_k e_(k-1))) / (φ_i m_k - ρ_i).    (2.23)

However, if φ_i is made too small, such that ρ_i/φ_i ≥ 1, the service will be made
locally unstable. This means that the service will have to be locally unstable to cause a
maximum delay of D_i.
Locally Unstable Services
An example of a locally unstable service is service 1 in Figure 2.7. A more detailed
graph is given in Figure 2.9. In this case, following the derivation at the beginning of Section
2.2.1, ρ_i/φ_i > 1. Since backlog can accumulate, the maximum delay may occur for traffic
in the original backlog or traffic which has accumulated during the busy period.

Due to the convex nature of v(t), backlog will grow to a point where the work
done is w_i^P, then decline to 0 as dv(t)/dt becomes large enough. Therefore, dA_i(0, t)/dt - dS_i(0, t)/dt > 0 for S_i(0, t) < w_i^P and dA_i(0, t)/dt - dS_i(0, t)/dt < 0 for S_i(0, t) > w_i^P.

Theorem 2 For a locally unstable service i, under the worst-case conditions, the maximally
delayed bit occurs where A_i(0, t) = max(w_i^P, σ_i/φ_i).
Proof: Assume σ_i/φ_i ≥ w_i^P. Therefore, for all w > σ_i/φ_i, dA_i(0, t)/dt - dS_i(0, t)/dt < 0
and Theorem 1 holds.

Assume σ_i/φ_i < w_i^P. Assume the maximally delayed bit has arrival time t_A and
service time t_S. For a later bit with work measured as w_i^P + Δw, the arrival time
will be t'_A = t_A + Δw/(dA/dt). The bit will be serviced at time t'_S = t_S + ∫ dw/(dS/dt).
Since v(t), and therefore S_i(0, t), is convex, t'_S < t_S + Δw/(dS/dt) for w > w_i^P. Therefore,
t'_S - t'_A < t_S - t_A + Δw/(dS/dt) - Δw/(dA/dt). Since dA/dt - dS/dt < 0 for w > w_i^P,
Δw/(dS/dt) - Δw/(dA/dt) < 0 and t'_S - t'_A < t_S - t_A.

For an earlier bit with work measured as w_i^P - Δw, t'_A = max(t_A - Δw/(dA/dt), 0).
Now, dS/dt is nondecreasing in w and bounded by its value at w_i^P, so t'_S = t_S - ∫ dw/(dS/dt) ≤ t_S - Δw/(dS/dt)
for w < w_i^P. Therefore, t'_S - t'_A ≤ t_S - t_A - Δw/(dS/dt) + Δw/(dA/dt). Since
dA/dt - dS/dt ≥ 0 for w < w_i^P, Δw/(dS/dt) - Δw/(dA/dt) ≥ 0 and t'_S - t'_A ≤ t_S - t_A.
□

Again, this result is easy to see graphically.

Figure 2.9: Locally unstable service and maximum delay.
To calculate the service share φ_i necessary to meet the delay tolerance D_i, we may
use the same assumptions as in the locally-stable case. For the case where σ_i/φ_i ≥ w_i^P, we
may use the technique given for the locally-stable case. Otherwise, we now assume that we
have constructed v(t) up to the point where v_(k-1) = w_i^P. To determine φ_i, we consider the
dotted triangle in Figure 2.9, detailed in Figure 2.10.

Figure 2.10: Detail of relevant line segments for a locally unstable service.
From elementary geometry,

    D_i = e_(k-1) - (φ_i v_(k-1) - σ_i)/ρ_i    (2.24)

and therefore

    φ_i = (σ_i + ρ_i (e_(k-1) - D_i)) / v_(k-1).    (2.25)

Since it is assumed that v_(k-1) = w_i^P, the service's backlog will be strictly decreasing from
this point onwards. The finishing time of the service, assuming again that there are no
other services finishing before it, is the same as that derived for the locally-stable case in
Equation (2.23).
2.2.2 Service Share Allocation Algorithm
The knowledge of where maximum delay values will be experienced for a given
service allows the construction of a service share allocation algorithm. Given a set of
services with leaky-bucket parameters (σ_i, ρ_i) and maximum delay tolerances D_i, we may
use the results of the previous section to construct an iterative algorithm to determine
minimum service shares φ_i for each delay-sensitive service i.

The solutions to D_i given in Section 2.2.1 assume information about virtual time
and finishing times that is not immediately available. As well, the state of a service
(locally stable or unstable) is not known from the original data. The allocation algorithm must
develop this information. To do this, the universal service curve is constructed from t = 0
to the highest finishing time of the services, where each iteration develops a vertex in the
universal service curve. A qualitative representation is given in Figure 2.11.
Figure 2.11: Qualitative diagram of the service share allocation algorithm. Three services are shown; each iteration develops a vertex in the virtual time v(t). Each iteration allocates services which extend beyond the last determined vertex a revised, lower service share.
To develop this information, the algorithm iterates, such that each iteration k
calculates the next finishing time e_k in v(t), starting from e_0 = 0. At each iteration, trial
service shares φ̂_i and trial finishing times ê_i are calculated for any service whose final values
have not been conclusively determined. The lowest ê_i is therefore the next finishing time
e_(k+1) in the virtual time. Any services where the value of e_(k+1) would not affect the calculation
of φ_i are finished and removed from further φ_i calculations in the next iterations. The next
iteration re-estimates φ̂_i and ê_i given the new information. The algorithm terminates when
all services have a stable value for φ_i.
To explain the allocation algorithm, the concept of service sets is introduced. At
the beginning of the algorithm, services are placed in a set corresponding to the initial state
of the allocation process. Each iteration, services may change states depending on their
new values of service share. The algorithm terminates when all services are in a terminal
state. To show that the algorithm produces useful results, it is shown that the service
share allocated to a service may not increase between iterations. Finally, implementation
concerns for the algorithm are discussed.
Service States and State Transitions
Each service may belong to one of four sets: unresolved, unstable, resolved, and
allocated. Unresolved services do not have final values for φ_i, and may be found to be
unstable. Unstable services are definitely locally unstable, but do not have final values for
φ_i either. Resolved services have final values for φ_i, but their tentative finishing times ê_i
have not been incorporated into v(t); that is, e_k < ê_i. Allocated services have final values
for φ_i, and their finishing times e_i have already been incorporated into v(t). A state diagram
for the system is given in Figure 2.12.
Services are initially classified as unresolved. Estimates are then calculated for φ_i for each unresolved or unstable service i (Section 2.2.1). If the estimate for φ_i is too low,
such that ρ_i/φ_i ≥ m_k, then the delay tolerance D_i is large enough that the tolerance will
be satisfied with a locally unstable allocation. The service's allocation at each iteration k is
limited to φ_i = ρ_i/m_k. If it is not apparent whether the maximum delay will occur at σ_i/φ_i
or at w_i^P, then the service is kept in the unresolved set. If it is known that the maximum
delay will occur at w_i^P (since v_k has already passed σ_i/φ_i and the maximum delay has not
been achieved), then the service is placed in the unstable set.
Figure 2.12: State transition diagram for service sets.
Any services that have a finite ê_i which are not in the allocated set are checked to
find min ê_i. This is now the next finishing time e_(k+1), and the corresponding service(s) are
placed in the allocated set. Any unresolved or unstable services where the maximum delay
occurs at a time less than or equal to e_k will not be affected by further calculations: their
service share allocation is now finalized and they are moved to the resolved set. The next v_(k+1)
and m_(k+1) are calculated and the next iteration is begun.
As long as there is at least one finite ê_i, at least one service will be moved into
the allocated set each iteration and the algorithm will terminate. However, there may
be instances where no finite ê_i are generated: this means that all remaining services are
unstable. These services have high enough average rates and low enough delay tolerances
that efforts to lower their service shares enough to exactly meet worst-case delay tolerances
will offer the possibility of their backlogs increasing without bound. In these cases, their
service shares are set at the minimum possible to avoid unstable backlogs.
Service Share Iterations
To properly ensure that unresolved and unstable services do not affect the performance
of resolved and allocated services, the estimates of φ_i for unresolved and unstable
services must not increase between iterations. If φ_i estimates were allowed to increase, finishing
time estimates could decrease to where they would invalidate the finishing times of
allocated services.

From the convexity property of v(t), m_k > m_(k-1) and v_k/e_k > v_(k-1)/e_(k-1). Therefore,
from Equations (2.21) and (2.25), estimates of φ_i for unresolved and unstable services
will be strictly decreasing with each iteration. Estimates of φ_i which make the service unstable
(that is, ρ_i/φ_i > m_k) are limited to φ_i = ρ_i/m_k. Since m_k increases each iteration,
the lower limit on φ_i decreases each iteration. The algorithm is given in pseudocode in
Appendix A.1.
Upon termination of the algorithm, each service will have an associated service
share φ_i. If Σ φ_i ≤ 1, the set of services can be supported without violating delay tolerances,
and the remaining service share 1 - Σ φ_i may be used to support other classes of service
without causing delay-sensitive service QoS guarantees to be violated. If Σ φ_i > 1, the set
of services cannot be supported without the possibility of QoS guarantee violation.
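As a much-simplified stand-in for the full iterative algorithm (which is given in pseudocode in Appendix A.1 and not reproduced here), one can over-allocate by assuming every service is locally stable and its maximum delay falls on the first segment of the universal curve, where the slope is the conservative initial value m_0 = 1 so that v(t) = t. The sketch below applies that rule together with the Σφ_i ≤ 1 admission test; it is an illustration, not the thesis's algorithm, and the service names are invented.

```python
def conservative_shares(services):
    """Single-segment share allocation, a simplified stand-in for the
    iterative allocation algorithm of this section.

    Assumes each service is locally stable and its maximally delayed
    bit is served on the first segment of the universal service curve,
    where the slope is the conservative initial value m_0 = 1, so that
    v(t) = t.  The last bit of the initial burst sigma_i is then served
    when phi_i * t = sigma_i, i.e. after delay sigma_i / phi_i, giving
    phi_i = sigma_i / D_i.  This over-allocates whenever the true
    maximum delay falls on a later, steeper segment.
    `services` maps a name to (sigma_i, D_i), normalized by C.
    """
    shares = {i: sigma / d for i, (sigma, d) in services.items()}
    admitted = sum(shares.values()) <= 1.0
    return shares, admitted

# Two illustrative services; both fit, so the set is admissible.
phi, ok = conservative_shares({"video": (0.02, 0.1), "voice": (0.001, 0.02)})
```

Because later segments of v(t) are steeper, the true minimum shares from the iterative algorithm are never larger than these conservative values, so the simple test errs on the safe side of the admission condition.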
Algorithm Complexity and Implementation
The algorithm given in Appendix A.1 requires two main calculations for each service
not in the allocated set each iteration: the service share calculation and the finishing
time calculation, which involve multiplications and divisions. At least one service is placed
in the allocated set each iteration. Therefore (pessimistically), the algorithm
will require at maximum Σ_{k=1}^{N} 2k = N(N + 1) major calculations. If
the services belong to N_st distinct service types, all services in the same type will be placed
in the allocated set in the same iteration, and
therefore the complexity of the algorithm is at maximum N_st(N_st + 1).
Since at least one service is moved to the resolved set each iteration, the algorithm
terminates in at most N iterations, where N is the number of delay-sensitive services in
the network. Typically, many fewer iterations are required. If the services belong to service
types with the same QoS parameters, the algorithm will terminate in at most N_st iterations,
where N_st is the number of distinct service types in the network.
To calculate the minimum service share allocations φ_i, all service shares must be
updated upon the admission or termination of any service. While this may be acceptable
for a low number of service types and add/drop events, the add/drop mechanism may be
simplified if processing requirements dictate.
The service share allocation algorithm assumes that any unallocated service share
(that is, 1 - Σ φ_i) can be used by greedy services with no arrival process restrictions
while the already-admitted services meet their QoS contracts. This is done by setting the
initial service curve slope m_0 = 1 and not m_0 > 1 (Appendix A.1). Therefore, any new
services may be added to the system without causing QoS contract violations in the already-admitted
services, as long as Σ φ_i ≤ 1. In this way, a new service can be added without
recalculating service shares for all services; however, a recalculation of all service shares
may lead to lower service shares for already-admitted services (since now the leaky-bucket
characteristics of the new service have been taken into account). This service may then be
dropped without further recalculation. However, a service admitted upon a full calculation
of all service shares should be dropped upon a full calculation of all service shares, since the
remaining services' service shares are calculated based on the dropped service's leaky-bucket
parameters, which are no longer valid.
This allows two methods of delay-sensitive service addition: full addition and fast
addition. Full addition requires recalculation of all service shares: this allocates the minimum
necessary resources, but is more processing-intensive, and requires more processing
upon call drop as well. Fast addition does not require as much processing, but may allocate
more resources than necessary. The decision between full and fast addition depends on the
base system load and the network call load, and is a decision for the system and network
management software.

Table 2.1: Parameters for bursty and non-bursty service classes.
To show operation of the algorithm, a system with two service classes is modelled.
One service is a bursty service, with a large bucket size σ and low delay tolerance; the
other has a smaller bucket size and greater delay tolerance (Table 2.1). Therefore, it is
expected that the bursty service would have a lower finishing time than the non-bursty
service. The number of bursty services is varied from none to the maximum number of
supportable services. One non-bursty service is modelled. A graph of the calculated service
share assignments is shown in Figure 2.13.
Since the bursty services are the first to finish the worst-case burst, their service
share allocations are always identical regardless of the number of services. However, the
allocation for the non-bursty service decreases with the number of bursty services. This may
seem counter-intuitive at first, but may be explained by noting that the greater the number
of bursty services, the more resources are accounted for in the network, as the leaky-bucket
properties of the services are known. After the bursty services' finishing time, the network
cannot know how the newly-freed resources are to be used, and must assume the worst-case
greedy services will be added. Consequently, the service shares for the non-bursty services
will be greater if the bursty services are not admitted.
Figure 2.13: Service share φ versus the number of bursty services in the system.
2.2.4 Parameter Translations for Packet Networks

While GPS analysis is based quite naturally on the burstiness (σ_i) and average
rate (ρ_i) parameters of the service, these parameters are not directly available. In the case
of ATM, the information is provided through the Generic Cell Rate Algorithm (GCRA)
parameters [27]. The GCRA uses a leaky-bucket model, but with different parameters. The
GCRA parameters may be converted to their GPS equivalents through some analysis.
The GCRA leaky bucket is of the same structure as the GPS leaky bucket, but
uses different parameters (Figure 2.14). Each packet queued for transmission from service
i adds I_i "credits" to the bucket. The bucket has a capacity of L_i + I_i credits, and credits
are drained from the bucket at a rate of one per time unit. Clearly, the magnitude of
the system's time unit ΔT controls the value of I_i.

To determine the GPS average rate ρ_i from I_i, we determine the equilibrium arrival
rate to the bucket, such that the bucket will maintain the same filling level. We assume the
channel rate is C, and therefore the unnormalized average bit arrival rate is C ρ_i. Since
the packet arrival rate is then C ρ_i / L_i, and credits drain at a rate of 1/ΔT, the credit
balance equation is:

    I_i C ρ_i / L_i = 1/ΔT

Figure 2.14: Leaky-bucket interpretation of the Generic Cell Rate Algorithm (GCRA).
and therefore

    ρ_i = L_i / (C I_i ΔT).

The burst limit is simply

    σ_i = (I_i + L_i) / C.
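The translation can be wrapped in a small helper. The constants below follow the credit-balance reading of this section (inflow I_i Cρ_i/L_i credits per second against a drain of 1/ΔT, with bucket capacity L_i + I_i); they are a reconstruction of the chapter's garbled equations, not verbatim thesis formulas, and the numeric example is illustrative only.

```python
def gcra_to_gps(L_i, I_i, dT, C):
    """GCRA -> GPS parameter translation, a reconstruction (the exact
    constants follow this section's credit-balance reading, not a
    verbatim thesis formula).

    Each L_i-bit packet adds I_i credits; the bucket holds L_i + I_i
    credits and drains 1/dT credits per second.  Equating credit
    inflow (C * rho_i / L_i packets/s times I_i credits each) with the
    drain rate gives rho_i; the burst parameter comes from the bucket
    capacity.  Both results are normalized by the channel rate C.
    """
    rho = L_i / (C * I_i * dT)           # normalized average rate
    sigma = (L_i + I_i) / C              # normalized burst limit
    return sigma, rho

# Illustrative numbers only: 53-byte cells (424 bits) on a 155 Mbit/s link.
sigma_ex, rho_ex = gcra_to_gps(L_i=424.0, I_i=2.0, dT=1e-3, C=155e6)
```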
Because of the discretized nature of virtual time in a SCFQ system, the delay
performance for a given service share is somewhat worse than in GPS. From [23], the added
delay for a service i in a SCFQ system, beyond the maximum delay calculated by GPS, is:

    ΔD_i = Σ_{j∈S, j≠i} L_max / C = (N - 1) L_max / C,    (2.29)

where L_max is the longest packet length allowed in the system and N is the number of
services sharing the channel. Therefore, to meet a delay
constraint of D_i^SCFQ in a SCFQ system, the service must be allocated a service share that
allows it to meet a delay constraint of D_i^SCFQ - ΔD_i using the GPS service share allocation
algorithm.
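The adjustment from a SCFQ delay tolerance to a GPS-domain target can be sketched as follows. The added delay is taken here as one maximum-length packet time per competing service, the form of Golestani's bound; treat that constant, and the function name and parameters, as assumptions of the example.

```python
def gps_delay_target(d_scfq, n_services, l_max, capacity):
    """Convert a SCFQ delay tolerance into the tighter GPS-domain
    target fed to the share allocation algorithm.  The added SCFQ
    delay is modelled as (n_services - 1) * l_max / capacity, i.e.
    one maximum-length packet time per competing service (the form
    of Golestani's bound); treat the exact constant as an assumption.
    """
    delta = (n_services - 1) * l_max / capacity
    d_gps = d_scfq - delta
    if d_gps <= 0.0:
        raise ValueError("delay tolerance is tighter than the SCFQ penalty")
    return d_gps

# Illustrative: a 10 ms tolerance among 5 services with 424-byte packets.
target = gps_delay_target(0.010, 5, 424 * 8, 155e6)
```

When the tolerance is smaller than the packetization penalty, no GPS-domain target exists and the service cannot be supported at that delay in SCFQ, which the sketch signals with an exception.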
2.3 Conclusions
Fair Queueing scheduling algorithms, such as Generalized Processor Sharing and
Self-Clocked Fair Queueing, can guarantee delay performance for certain types of services.
Since the algorithms can support heterogeneous services, they are very suitable as scheduling
protocols for multimedia networks.

Through the use of an iterative algorithm, service shares for heterogeneous delay-sensitive
services in a GPS system can be allocated in such a way that under maximum
loading conditions deterministic delay constraints may be exactly satisfied for most services.
As well, the algorithm may be used as an admission condition: if Σ φ_i > 1, the services
cannot be supported together without the possibility of QoS guarantee violation.
In practice, services are overwhelmingly locally stable or critically stable (where
4 = p). In end-tcxnd network studies, where bdy-unstable services pose d y t i c a l
probiems, locally-unstable allocations may be promoted to n i t i d stability, sacrificing the
possibiiîty of small reductions in total service share for more accurate end-to-end network
analysia.
Chapter 3
Distributed Fair Queueing
As shown in Chapter 2, Fair Queueing and particularly Self-Clocked Fair Queue-
ing provides a workable solution to the multimedia scheduling problem. Distributed Fair
Queueing (DFQ) is developed as a wireless LAN protocol using SCFQ as a basis for schedul-
ing. DFQ is a centralized, polling protocol, using TDMA in a Time-Division Duplex (TDD)
channel, intended to support both ATM and Integrated Services IP service models.
The goals of this Chapter are to:
Introduce the DFQ architecture and its relationship to GPS and SCFQ scheduling;
Map ATM and IP service classes to GPS and SCFQ parameters;
Prove the ability of DFQ to maintain SCFQ scheduling in a wireless environment;
Estimate worst-case resource usage for services with different characteristics;
Simulate DFQ performance for different service scenarios and compare results to theo-
retical worst-case performance.
3.1 DFQ Architecture
The architecture under consideration consists of one base station and a set of
remote terminals (Figure 3.1). All communication, both upstream (remote to base) and
downstream (base to remote), is performed over a single physical channel. The base station
is more complex than the remote stations, and is considered to have wired power and
external network interfaces. This architecture is identical to many other WLAN proposals
Figure 3.1: Distributed Fair Queueing WLAN architecture.
[28, 6], which recommend a unique, complex, and more expensive base station and simpler,
cheaper remote stations.
Both ATM and IP service models implement service contracts for at least some
of their service categories. The DFQ scheduler may then use this information to determine
scheduling priority. In this way, it is possible for the base station to use a polling strategy
for Medium Access Control (MAC). The service order is determined by the SCFQ tags of
the cells or packets queued in the network. The main problems solved by a DFQ system,
therefore, are the timely generation and exchange of SCFQ tags. This section deals with
the generation and exchange of tags in a wireless system for different service classes.
The ATM cells or IP packets are transmitted in physical data units called capsules.
Each capsule may contain multiple or partial ATM cells, or variable-length IP packets.
Upstream transmissions are initiated with a short poll transmission from the base. It is
assumed that the Hidden Terminal problem exists: that is, all remotes can hear the base
station and the base station can hear all remotes, but each remote may not be able to hear
every other remote (Section 1.2.1). Therefore, remote-to-remote transmission or detection
is not allowed.
The SCFQ scheduling methodology allows DFQ to handle many different delay-
sensitive and -insensitive service classes. In an ATM system, the MAC is designed to
handle Constant Bit Rate (CBR), Variable Bit Rate (VBR), Available Bit Rate (ABR),
and Unspecified Bit Rate (UBR) services (Section 1.4.1). In an IP system, the MAC is
designed to handle at least Guaranteed Service, Committed Rate, and Controlled Load
services (Section 1.4.2). The delay tolerances and/or negotiated rates of the services are
available to the MAC layer. The arrival probability distributions of the services are not
available to the MAC layer, and are not required.
Quality of service parameters are assumed to be broken down into node-by-node
performance requirements [12]. That is, the end-to-end QoS guarantees for services that
span several links are divided, not necessarily equally, between the links. In this way,
the end-to-end QoS requirements may be disregarded and only the link QoS requirements
considered.
3.1.1 Multi-QoS Support
The scheduling methodology used for DFQ is Self-Clocked Fair Queueing (Section
2.1.2). As shown in Section 2.2, SCFQ is suited for multimedia packetized networks, and
is well-developed theoretically. From SCFQ theory, the scheduling order is dependent on
the service share of a given service, φi, and the capsule length for any capsule k of service
i, Lik. While the service share algorithm for delay-sensitive services is presented in Chapter
2, provisions must be made for the handling of delay-insensitive traffic as well.
The capsule length Lik affects the scheduling order of capsules and the performance
of the network. In a DFQ network, limits are required on the capsule lengths: a maximum
capsule size Lmax to allow SCFQ delay guarantees; and a minimum capsule size Limin for
each service, to properly schedule polls (to be introduced shortly). If we do not allow
fractional-cell capsules in an ATM network, the minimum capsule length is one cell length;
in an ISIP network, the minimum capsule length is service-dependent and defined by m, the
minimum policed unit for that service. The maximum capsule length Lmax is determined
by the maximum number of cells per capsule in an ATM network, and the maximum packet
size M in an ISIP network.
Delay-sensitive ATM Service Support
For a system to be able to support multimedia services, it must be able to deal
with a wide range of QoS guarantees. For ATM CBR and VBR services, the GCRA
policing parameters are supplied as in Section 2.2.4, and translatable into (σ, ρ) format.
Delay and loss requirements for delay-sensitive classes depend on the class of service (Table
1.1). Guaranteed Service for ISIP is handled similarly to ATM VBR. Delay guarantees
can be supported by proper selection of the service's service share φi by a service share
allocation algorithm similar to that of Section 2.2, which is introduced in Section 3.2.
In a SCFQ network, delay jitter guarantees are not supported: a service with stringent
jitter requirements must use delay tolerance to guarantee its QoS contract will not be
violated. Therefore, a service with jitter tolerance Jimax should submit a delay tolerance
of min(Dimax, Jimax). Cell loss and bit error rate requirements must be satisfied by proper
buffer allocation and error control policies, which are beyond the scope of this chapter.
Delay-insensitive ATM Service Support
Delay-insensitive services are not handled by the service share allocation algorithm
of Chapter 2. In an ATM system, delay-insensitive traffic such as nrt-VBR and ABR
traffic (Section 1.4.1) have average bit rates as part of the QoS contract. Likewise, ISIP
Committed Rate services have minimum bit rate requirements, but no delay requirements.
ISIP Controlled Load services may be assigned an average bit rate as part of their Call
Admission.
Non-real-time VBR services supply the same traffic parameters as RT-VBR, but do not
specify maximum delay or jitter. They may be supported by treating them as RT-VBR
services with infinite delay tolerance and submitted to the service share allocation algorithm.
ABR services supply a Minimum Cell Rate (MCR), and optionally a Peak Cell
Rate (PCR) [29]. The actual cell rate at which the service may transmit is determined
by network feedback: the service's route is composed of one or more rate feedback loops,
consisting of virtual source and virtual destination pairs. Each virtual destination in the
route is the virtual source of the next rate feedback loop. In each loop, the ABR service
transmits cells at a specified rate determined by resource availability. This rate may be
changed by the transmission of Resource Management (RM) cells. In a DFQ system, the
wireless link will form a rate feedback loop, such that the remote and base form a virtual
source/virtual destination pair, which allows the base to communicate wireless link available
rate information to the remotes. The ABR rate in the wireless link may be maintained by
selecting a service share for the service φi(t) = ρi(t), where ρi(t) is the current ABR rate
as delivered to the base station. This will ensure the ABR rate is maintained at a minimum
of ρi(t), subject to the variation due to the service lag given in the next subsection.
Unspecified Bit Rate or other best-effort traffic is handled through a reservation
mechanism (Section 3.3). The remaining service share after delay-sensitive and ABR sources
(as defined in Section 3.2) is used to service any best-effort traffic in a First-Come First-
Served (FCFS) manner.
ISIP Service Support
ISIP services may either be supported natively over DFQ, or may be supported as
ISIP-over-ATM. The ISIP service classes map well to Fair Queueing service parameters.
Native support for ISIP service classes requires translation of ISIP service classes
to FQ parameters. Guaranteed Service services supply leaky-bucket-compatible source char-
acteristics, and supply a maximum delay tolerance; these parameters can be directly used in
the Service Share Allocation Algorithm. Committed Rate services may be allocated service
shares similarly to ATM ABR services. Controlled Load services may be allocated service
shares similarly to ATM ABR services, but may be allocated a lower bandwidth than Com-
mitted Rate services: the actual bandwidth to allocate will depend on typical Controlled
Load service characteristics and is beyond the scope of this thesis.
ISIP-over-ATM may be accomplished by mapping ISIP service classes to ATM
service classes and using AAL5 adaptation for segmentation of ISIP packets. Guaranteed
Service services may be mapped to VBR services, while Committed Rate and Controlled
Load services may be mapped to ABR services. ISIP-over-ATM will suffer increased over-
head compared to native ISIP due to the smaller average capsule size of the ATM capsule,
but will enjoy superior jitter performance due to the smaller capsule size.
IPv6 Service Support
IPv6 service classes are divided into Congestion-controlled traffic and Non-con-
gestion-controlled traffic, and each type of traffic is divided into eight priority levels. Pri-
oritization between priority levels of each class is, at this time, implementation-specific.
Congestion-controlled traffic is generally delay-insensitive, while Non-congestion-controlled
traffic is generally delay-sensitive. Source characteristics are not mandatory, but may be
provided by establishing a flow label for certain services, in which source characteristics and
QoS requirements may be specified [13]. Currently, standards for flow label establishment
have not been formulated.
IPv6 service classes are not as strictly defined as in ATM or ISIP, and provide
guidelines for prioritization rather than QoS preservation guarantees. If a flow label is
not provided, the traffic must be handled through a reservation mechanism (Section 3.3).
If source characteristics and QoS guarantees are provided by the establishment of a flow
label, the flow label establishment may be employed as a call setup message and service
share allocation may be handled similarly to ATM VBR or ATM ABR services. Further
work on IPv6 prioritization must wait until the de facto or de jure establishment of flow
label establishment standards.
Service Tag Generation
Service tag generation in DFQ is identical to that for SCFQ. When a new capsule
k for service i is queued for transmission, a corresponding service tag Fik is generated using
the equation:

    Fik = max(Fik-1, v̂(aik)) + Lik / φi,

where aik is the arrival time of the service's kth capsule. We define Fi0 = 0. The scheduler
chooses the lowest among the head-of-line service tags to transmit.
We define the virtual time of a service i by the finishing time of the last-arrived
capsule of the service, v̂i(t) = Fik. We can define the service lag of the system as δi(t) =
v̂(t) - v̂i(t). A major result of SCFQ is a bounded service lag. From [23],

    0 ≤ δi(t) ≤ Limax / φi,  ∀t, ∀i ∈ B(t),   (3.2)

where B(t) is the set of backlogged services at time t. We prove in the next section that
the DFQ service lag is identical to the SCFQ service lag, which shows that the performance
limits for SCFQ and DFQ are the same.
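The tag recurrence above is a one-line update per capsule. As a minimal illustration (my own sketch, not the thesis's code), with Fi0 = 0:

```python
class ScfqService:
    """Tracks SCFQ service tags for one service with share phi_i."""

    def __init__(self, phi):
        self.phi = phi
        self.last_tag = 0.0  # F_i^0 = 0

    def new_tag(self, v_arrival, length):
        """F_i^k = max(F_i^{k-1}, v(a_i^k)) + L_i^k / phi_i."""
        self.last_tag = max(self.last_tag, v_arrival) + length / self.phi
        return self.last_tag
```

The scheduler would then serve the service whose head-of-line tag is smallest; a backlogged service builds tags on its own previous tag, while an idle service restarts from the system virtual time at arrival.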
3.1.2 Distributed Architecture Support
Fair Queueing was developed to deal with a local queueing system, where arrival
queues are located in the same physical unit. More importantly, FQ assumes unlimited
access to the state of the queues, especially the service tags. This is clearly infeasible in a
wireless network environment, since the queues are distributed among the stations and all
communication must be done over the wireless channel. This means that the scheduler's
access to the states of remote queues is limited.
Since all downstream services have queues located in the base, the scheduler has
unlimited access to the states of these queues, and no modifications need be made. However,
the upstream services are all located in the remotes. Therefore, any transmission of virtual
time information downstream or service tag information upstream must be done through
inband signalling. In this analysis, we define a local service as a downstream service (with
its queue in the base), and a remote service as an upstream one (with its queue in a remote
station).
Transmission scheduling is done exclusively through polling. A remote may only
transmit a capsule when explicitly ordered by the base to do so using a poll message. The
scheduling of capsules is accomplished by ordering the capsules' SCFQ tag values.
Before a capsule is scheduled for transmission, the scheduler must have received a
SCFQ tag from the service for that particular capsule. In the downstream direction, this
is not a problem: the service queues are located in the base station, and the tags may be
delivered internally with no effect on the air interface.
In the upstream direction, the service queues are located in the remotes. Therefore,
mechanisms must be established to deliver tags to the base station in time for the correct
scheduling of the remote services' capsules. To do this, the tag for capsule k for a given
service is piggybacked onto the transmission of capsule k - 1. If this is not possible, when
capsule k has not arrived in the service queue by the time of transmission of capsule k - 1,
a special poll, called a tag poll, must be sent to the remote. The remote must then respond
with the tag value for capsule k, if it exists.
Therefore, in normal operation, there are three possible transmission cycles: down-
stream transmission, upstream data transmission, and upstream poll transmission (Figure
3.2). The upstream data/poll transmission decision is based on whether the base has a valid
SCFQ tag for the service's next capsule.
To ensure proper scheduling, the remote services without valid tags must be polled
for tags in a timely manner. The tags must be received at the base early enough that the
capsule's scheduled arrival time is not missed, but the tag polls should be as infrequent as
possible to increase efficiency. The minimum virtual time difference between consecutive
tags for the same service i is Limin/φi; therefore, if a service with no valid tag is polled at
these intervals, scheduling should be preserved, as will be proven by Theorem 3.
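The base station's per-service decision can be sketched as follows (an illustrative helper of my own, not from the thesis): issue a data poll if a valid tag is held, otherwise schedule a tag poll one minimum-capsule virtual interval ahead.

```python
def next_poll(held_tag, v_now, L_min, phi):
    """Return the base station's next action for a remote service.

    held_tag: the SCFQ tag for the service's next capsule, or None if
    the base has no valid tag (the capsule had not arrived when the
    previous one was transmitted)."""
    if held_tag is not None:
        return ("data_poll", held_tag)
    # Tag polls are spaced L_i^min / phi_i apart in virtual time.
    return ("tag_poll", v_now + L_min / phi)
```

With no valid tag, repeated calls keep pushing the tag poll forward by Limin/φi, matching the polling interval argued for above.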
Figure 3.2: Transmission cycles. a) Downstream transmission. b) Upstream data transmission. c) Upstream poll transmission.
In order to generate the proper tags, remote stations must have up-to-date infor-
mation on the virtual time v̂(t) in the network. To allow this, the base station transmits
v̂(t), the tag value of the capsule in service, on every poll and downstream transmission. All
stations can then strip v̂(t) off of the base's transmission during address decoding. Since in
a SCFQ network v̂(t) only changes between transmissions, and the base station transmits
v̂(t) on every transmission, the remote stations always have an up-to-date virtual time value.
We can prove that the service lag bound (3.2) holds in a DFQ system:
Theorem 3 For a DFQ system, assuming a noiseless channel,

    0 ≤ δi(t) ≤ Limax / φi,  ∀t, ∀i ∈ B(t).

Proof: If the service i is local, tags are generated and examined as in SCFQ and
(3.2) holds. If the service i is remote and has a backlog of more than one capsule, the tag
of the next capsule cik+1 is piggybacked on the capsule in service. Since Fik+1 ≥ Fik and
v̂(t) = v̂i(t) = Fik when connection i is serviced, the service tags are in proper nondecreasing
order and the situation is identical to SCFQ; therefore, (3.2) holds. If the connection's
backlog is one, a tag may not be ready for the next capsule by the time i's current capsule
enters service; then a tag poll is scheduled at Fi,tpk+1 = v̂(t) + Limin/φi > Fik. Then i is checked
for new tags when v̂(t) = Fi,tpk+1. If a new tag Fik+1 was generated, it was generated at
v̂(t') > v̂(t). Therefore, Fik+1 = v̂(t') + Lik+1/φi ≥ Fi,tpk+1. Since this tag is now known by
the base, the connection will be polled in the proper order and the connection is polled at
the correct virtual time. If no tag is present, another tag poll is scheduled and the argument
is repeated. If the backlog is 0 (an idle channel), a tag poll is scheduled upon call admission
and the previous argument holds.
□
Therefore, SCFQ scheduling in a wireless environment is possible, but at the cost
of a certain amount of overhead in the form of tag polls. The magnitude of the overhead
depends on the characteristics of the services involved, and is explored in the next section.
The restrictions on Fik+1 provide error checking capability for the scheduler, in
case tag information transmitted from remote to base is corrupted. A service i may only
return a tag in the range [Fik, Fik + Lmax/φi]. If a value is returned outside that range, a
tag poll is scheduled instead of a data poll, and the tag information is retransmitted.
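This range check is simple to state in code; the sketch below is my own illustration (names are hypothetical, not from the thesis):

```python
def tag_in_range(returned_tag, last_tag, L_max, phi):
    """A remote's returned tag is accepted only if it lies in
    [F_i^k, F_i^k + L_max / phi_i]; otherwise the base schedules a
    tag poll and the tag information is retransmitted."""
    return last_tag <= returned_tag <= last_tag + L_max / phi
```

A corrupted tag that falls outside the window is simply rejected and re-requested, so a single bit error cannot pull a service arbitrarily far out of scheduling order.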
3.2 Service Usage for DFQ
In a pure GPS system, as described in Chapter 2, the only bits transmitted are
data bits, and no overhead is assumed. In a DFQ system, both data and overhead from
each service must be transmitted. As well, the overhead from each service may vary ac-
cording to the direction of transmission (upstream or downstream), coding, and the service
share assigned to the service. Therefore, services which have the same QoS contract may
use different average bandwidths, due to the differing levels of overhead. For proper call
admission and billing, the actual amount of bandwidth used by the service is calculated.
Also, integration of usage information in the service share allocation algorithm of Section
2.2 is accomplished.
We must now differentiate between the data length of a capsule, which is purely
service data, and the total length of the capsule, which includes all physical and error control
overhead. The actual amount of bandwidth used by a service is called its usage, ui. The
usage of service i depends on several factors: the physical layer overhead for the data; the
error control applied to the data; and the tag poll overhead (for remote services). We define
the usage as the total bandwidth used by the service under maximum stable system loading,
since that is the situation where the magnitude of the usage is most crucial. For a stable
system, Σj uj ≤ 1. Maximum stable system loading occurs when Σj uj = 1.
Since the data length and the capsule length are unequal in DFQ, some GPS
equations must be modified. Service i's share of the total bandwidth is ui/(ΣB uj), where
B is the set of currently backlogged services. Therefore, the expected rate of virtual time
progression is

    dv̂(t)/dt = 1 / ΣB uj.   (3.4)

The work done for service i, considering only data and not DFQ overhead as work, will now
progress at the expected rate of φi/(ΣB uj). The following sections derive the usage for three
types of DFQ overhead: Forward Error Control (FEC), Automatic Retransmit reQuest (ARQ),
and tag poll overhead. A service may be affected by one or a combination of overhead types.
3.2.1 Forward Error Control and Physical Layer Overhead
In a full DFQ system, the data to be transmitted by the service will require a
significant amount of overhead information in order to be successfully transmitted. As well,
the service may use an Automatic Retransmit reQuest (ARQ) protocol to decrease errors.
All contribute to the service's usage.
Both physical overhead, such as header, synchronization and guard intervals, and
Forward Error Control (FEC) either add to or multiply the length of the data Li in the
capsule to contribute to ui. Therefore, the usage as a function of service share is ui =
aφi + b, where a and b are arbitrary constants.
If we assume that FEC overhead linearly multiplies the length of the data, then
the coded data length can be represented as αi Li, where αi is the coding multiplier for
service i. If there is no FEC coding, αi = 1; otherwise αi > 1. If the physical overhead is a
fixed amount per capsule, then the length of the coded physical capsule is

    Lic = αi Li + Loh,

where Loh is the fixed overhead per capsule.
To calculate the usage ui, we recall that φi is the fraction of bandwidth allocated
to service i in a GPS system under maximum load (Σj φj = 1). Therefore, ui will be the
service share multiplied by the normalized increase in the size of the capsule due to its
overhead:

    ui = (αi Li + Loh) / Li · φi.

Assuming Lic of 60.5 octets¹ [30, 31], this would give ui = 1.142φi, and a maximum
efficiency of 88%.
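The usage formula is easy to check numerically. The sketch below (my own illustration, using the footnote's 53-octet ATM cell plus 7.5 octets of fixed overhead) reproduces the roughly 1.14 growth factor:

```python
def fec_usage(phi, alpha, L_data, L_oh):
    """u_i = phi_i * (alpha_i * L_i + L_oh) / L_i: the service share
    scaled by the normalized growth of the capsule due to FEC coding
    and fixed physical overhead."""
    return phi * (alpha * L_data + L_oh) / L_data

# One uncoded ATM cell (53 octets) plus 7.5 octets of overhead gives
# u_i / phi_i = 60.5 / 53, roughly 1.14, i.e. about 88% efficiency.
```

Scaling is linear in the share, so a service with φi = 0.2 would consume about 0.228 of the channel under these (assumed) overhead figures.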
3.2.2 ARQ Overhead
An ARQ protocol may offer orders-of-magnitude reduction in error probability for
a service, at the cost of the retransmission delay [4]. The simplest ARQ protocol, stop-and-
wait, can be used effectively to control errors.
To implement stop-and-wait ARQ, the receiving unit must be able to send a one-
bit request number (RQ) to the transmitting unit. In an upstream service, this is easily
accomplished by piggybacking the request number on the data poll. In a downstream
service, the request number of the next capsule is returned from the remote to the base in
a short message after the downstream capsule, called the request message (Figure 3.3). The
¹1 octet turn-around time, 2 octets synchronization, 3.5 octets wireless header, 53 octets ATM cell, 1 octet extra error control.
Figure 3.3: Stop-and-wait ARQ transmission cycles. a) Downstream transmission cycle, with RQ for the next capsule in the request message. b) Upstream transmission cycle, with RQ piggybacked in the data poll.
request number may be a single-bit binary number.
We assume that for any received capsule, the probability that the capsule has
detectable errors is pe. Therefore, the average number of transmissions before a capsule is
transmitted successfully is

    N̄ = 1 / (1 - pe).

Therefore, the effective average rate of the service because of the added ARQ load is in-
creased from ρi in the non-ARQ case to ρi/(1 - pe). From Section 2.2 it can be shown
that, in order to guarantee the same delay tolerance, the service share will be increased to
at most φi/(1 - pe). (The less-than condition occurs because of the shape of the universal
service curve; a service with increased average rate does not necessarily need a correspond-
ing increase in service share to preserve the same delay guarantee.) Therefore, the usage
has a least upper bound

    uiARQ = ui / (1 - pe).
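Numerically, the stop-and-wait penalty is small at realistic error rates; a quick sketch (illustrative only, with my own function names):

```python
def mean_transmissions(p_e):
    """Average transmissions per capsule with stop-and-wait ARQ, when
    each capsule is received with detectable errors with probability p_e."""
    return 1.0 / (1.0 - p_e)

def arq_usage_bound(u_i, p_e):
    """Least upper bound on usage with ARQ: u_i^ARQ = u_i / (1 - p_e)."""
    return u_i * mean_transmissions(p_e)
```

At pe = 0.01 the usage grows by roughly 1%, while the residual error probability drops by orders of magnitude, which is the trade the section describes.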
3.2.3 Tag Poll Overhead
In a remote service, tag polls are generated when the service becomes idle, and the
base must obtain the tag for the next capsule to be transmitted from that service. These
Figure 3.4: Service burstiness and its effect on tag poll generation. Arrows indicate the time that a tag poll is invoked. Activity is defined over the time period [0, T) for an arbitrary T.
tag polls add significantly to the usage of the service. This type of overhead is unique to a
DFQ system. In this section, we prove that the highest tag poll rate is incurred when the
service is minimally bursty, and a least upper bound is derived for the tag poll rate based
on this.
The emptying of a remote transmission queue at time t0 causes a tag poll to occur
at v̂(t) = v̂(t0) + Limin/φi. If the queue is still empty then, additional tag polls are scheduled
at intervals of Δv̂(t) = Limin/φi until the queue is no longer idle.
The time between tag polls depends on the rate of the virtual time in the network,
by Equation (3.4). Under maximum stable load, ΣB uj = 1 and therefore dv̂(t)/dt = 1.
Then the expected time between tag polls under maximum stable load is simply Δt =
Limin/φi. For less than full load, by definition there is extra capacity in the network, and
the tag poll rate is not critical. Therefore, the equations are derived for the critical full load
case.
The burstiness of the service affects the rate of tag poll generation as well. A
service may exhibit minimal burstiness, in which case its transmission queue has either one
or no capsules and experiences busy periods of one capsule in length. At the other end of the
scale, the burst length is limited by the service's worst-case finishing time as determined in
Section 2.2. We define T-burstiness as the behaviour of a service such that in each time
interval of length T, the service's transmission queue is occupied for one contiguous interval,
and idle for the rest of the interval (Figure 3.4). Since the service is, on average, busy ρi
fraction of the time, minimal burstiness is simply T-burstiness such that the queue busy
period is only one minimum-length capsule, and where T = Limin/ρi. We can now prove that
the burstiness which incurs the highest tag poll rate is the minimum burstiness case, where
T = Limin/ρi.
Theorem 4 The tag poll rate for service i has a least upper bound of σitag = φi(1 - ρi)/Limin + ρi/Limin.
Proof: In the interval [0, T), the T-bursty service will generate one tag poll when the
burst period ends (Figure 3.4). As well, the service will generate tag polls at intervals of
Limin/φi until t = T. The idle period is of length T - ρiT; therefore, the number of tag
polls generated after the first one is ⌊(T - ρiT)/(Limin/φi)⌋ = ⌊φi(1 - ρi)T/Limin⌋, where ⌊·⌋
is the floor function. Therefore, the number of tag polls generated in the interval [0, T) is:

    Ntag = 1 + ⌊φi(1 - ρi)T/Limin⌋.

An upper bound for this is obviously

    Ntag ≤ 1 + φi(1 - ρi)T/Limin.

Therefore, the tag poll rate upper bound is

    σitag ≤ 1/T + φi(1 - ρi)/Limin.

Since T ≥ Limin/ρi, the tag poll rate upper bound independent of T is

    σitag = ρi/Limin + φi(1 - ρi)/Limin.

This bound is the least upper bound, since it is satisfied by equality when φi(1 - ρi)T/Limin
is an integer.
□
To determine the maximum amount of extra bandwidth used by the remote service
for tag polls, the tag poll rate must be multiplied by the tag poll overhead Ltp, so that the
maximum bandwidth used for tag polls is

    uitp = σitag Ltp = (φi(1 - ρi) + ρi) Ltp / Limin.

For a non-ARQ remote service, the usage is then the usage due to the capsules added to
the maximum overhead due to the tag polls:

    ui = (αi Li + Loh)/Li · φi + (φi(1 - ρi) + ρi) Ltp / Limin.   (3.6)
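The Theorem 4 bound and the resulting tag poll bandwidth can be sketched as follows (an illustration under this section's full-load assumptions; all names are my own):

```python
def tag_poll_rate_bound(phi, rho, L_min):
    """Least upper bound on the tag poll rate (Theorem 4):
    sigma_tag = (phi_i * (1 - rho_i) + rho_i) / L_i^min."""
    return (phi * (1.0 - rho) + rho) / L_min

def tag_poll_bandwidth_bound(phi, rho, L_min, L_tp):
    """Maximum bandwidth consumed by tag polls: the rate bound times
    the per-poll overhead L_tp."""
    return tag_poll_rate_bound(phi, rho, L_min) * L_tp

# With L_tp / L_min = 0.1 (as in Figure 3.5), phi = 0.5, rho = 0.25:
# rate bound = 0.625 polls per unit time, bandwidth bound = 0.0625.
```

Note how the bound grows with the service share and shrinks with the minimum capsule length, which is the sensitivity the following discussion emphasizes.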
Figure 3.5: Maximum possible tag poll rate for a service where Ltp/Limin = 0.1. The x axis indicates the service's service share φi, and each curve represents a different value for the average rate ρi (ρ = 0.1, 0.25, 0.5). Both axes represent fractions of total channel bandwidth.
If the service employs ARQ, the increased capsule rate must be reflected in the usage,
replacing φi with φi/(1 - pe). The usage, therefore, is heavily dependent on the minimum
capsule length Limin,
and service share φi. A small minimum capsule length or large service share may cause
a heavy tag poll load for the service. Since the average tag poll rate is dependent on the
service's arrival process, an estimate of the average tag poll rate cannot be obtained from
the QoS contract data. Therefore, a least upper bound is the best guide to the increased
usage; however, under normal loading conditions the average tag poll rate may be far lower
than the maximum tag poll rate.
The worst-case tag poll rate for a service where Ltp/Limin = 0.1 is plotted for
varying φi and ρi (Figure 3.5). The tag poll rate is plotted as a fraction of the total channel
bandwidth. For a high service share, the maximum possible tag poll rate is a significant
fraction of the system bandwidth, and can represent considerable waste of resources. A
lower, simulation-derived limit is suggested in Section 3.5.
Some waste of resources may be avoided by special treatment of CBR services.
Since, by definition, their capsule generation rate is known, a "shadow" data poll generator
at the base station could automatically generate a data poll for a CBR service i at rate
Cρi/Li. This would schedule a data poll for the service at the same rate as the service generates
capsules, and tag polls and their overhead would not be required for the service. This would
increase efficiency, at the cost of base station complexity.
3.2.4 Usage Modifications for Service Share Allocation
To incorporate usage information, the service share allocation algorithm of Section
2.2 must be modified. In a theoretical GPS system, there is no overhead, so each service
only transmits information bits. In a DFQ system, each service transmits its information
bits, plus causes overhead bits due to packetization, coding, and polling.
Since the service's traffic contract is only concerned with the treatment of its infor-
mation bits, the DFQ service share algorithm considers only information bits in calculating
maximum delays. This means that most of the algorithm is identical to the GPS algo-
rithm. However, the algorithm must now consider usage in determining one service's effect
on another service.
Usage therefore affects the universal service curve virtual time slope mk. In the
GPS case, the amount of resources a service i consumes is φi, since there is no overhead.
In the DFQ case, service i now consumes ui resources. This affects the amount of resources
left for the rest of the services. As a consequence, the mk calculation must be made based
on ui. The modified algorithm is given in Appendix A.2.
3.3 Best-Effort Traffic Support
In a polling environment with intelligent remotes, a mechanism must be available
for the remotes to initiate connections. As well, best-effort traffic must be sent in both
downstream and upstream directions. Since the distribution of this traffic is not known and
the service given to this traffic is best-effort only, it is not well suited to the treatment given
to delay-sensitive traffic. This section presents a mechanism for transmitting best-effort
traffic in both upstream and downstream directions. Upstream traffic requires a reservation
mechanism, which allows a remote with best-effort traffic to schedule its packets.
The basic strategy for this traffic is First-Come First-Served (FCFS). Any best-
effort traffic noted by the scheduler is placed in the same FIFO queue. The remaining
usage in the network is then u' = 1 - Σ uj, where Σ uj is the sum of the usages of all
delay-sensitive and ABR traffic admitted to the network. Since the UBR traffic does not
have FEC, ARQ, or poll overhead, by (3.6), the service share available for this best-effort
traffic is:

    φ' = u' Lmin / (Lmin + Loh),

where Lmin is the minimum policed length for the network. Whenever the best-effort queue
is polled (using φ' as its service share, and Lmin as the capsule length) then the HOL capsule
in the best-effort queue is transmitted.
While this mechanism is workable for downstream traffic, the upstream direction presents more of a problem. The scheduler cannot poll best-effort services for tags, as it cannot determine the rate at which to poll them and in many cases may not know of their existence. Instead, the scheduler periodically broadcasts an open poll. Any remote with best-effort traffic to send may attempt to respond to this poll, so collisions are possible. If a remote successfully transmits a response to the open poll, then that station is placed in the best-effort queue. If a remote has more best-effort traffic to send, a request for its transmission may be piggybacked on a best-effort capsule. In this way, remotes will only require use of the open poll mechanism when they have new best-effort traffic and previously had none. This is similar to the treatment of VBR and video traffic in [30, 10].
The service share for open polls is a fraction of the best-effort service share, and correspondingly reduces the service share available for best-effort traffic. The actual fraction used determines the throughput of the best-effort system; analysis and simulation of similar systems is given in [30, 10].
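The open-poll and piggybacking behaviour described above can be sketched as a toy model; `open_poll`, `transmit_hol`, the response probability `p_respond`, and the remote-state dictionaries are all hypothetical illustration, not the DFQ implementation.

```python
import random
from collections import deque

def open_poll(remotes, queue, p_respond=0.5, rng=random):
    """One open-poll round: remotes with new best-effort traffic (not already
    queued) may respond; exactly one responder succeeds, more is a collision."""
    responders = [r for r in remotes
                  if r["backlog"] > 0 and not r["queued"]
                  and rng.random() < p_respond]
    if len(responders) == 1:
        responders[0]["queued"] = True      # station enters the best-effort queue
        queue.append(responders[0]["id"])
        return "success"
    return "collision" if responders else "idle"

def transmit_hol(remotes_by_id, queue):
    """Serve the HOL station; piggyback a further request if backlog remains."""
    rid = queue.popleft()
    r = remotes_by_id[rid]
    r["backlog"] -= 1
    if r["backlog"] > 0:
        queue.append(rid)       # piggybacked request keeps the station queued
    else:
        r["queued"] = False     # must use the open poll again for new traffic
```

A station thus contends only when it has fresh traffic and previously had none, exactly the behaviour the text describes.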
3.4 Similar Protocols
Several different multimedia WLAN protocols have been developed for the same environment and service scenarios. While all differ, it is not surprising that protocols developed for the same environment and service scenarios should share several characteristics. Two influential protocols are Dynamic TDMA/Time Division Duplex (DTDMA/TDD) [9, 10] and Distributed-Queuing Request Update Multiple Access (DQRUMA) [11, 7].
DTDMA/TDD is a reservation-based protocol. Time is divided into frames, where each frame consists of reservation slots and data slots. CBR services are allocated fixed data slots in each frame, while VBR services must contend for data slots at the beginning of a traffic burst, and retain data slots until the burst is over. Best-effort traffic must contend for access for each packet. In DFQ, CBR and best-effort traffic are treated similarly to CBR and VBR traffic in DTDMA/TDD: CBR traffic is allocated a given service share for the duration of the call, while best-effort traffic must contend for access at the beginning of each burst. However, VBR traffic in DFQ is treated similarly to CBR traffic in DFQ, while it is treated differently in DTDMA/TDD. DFQ offers the advantage of better QoS control for CBR and VBR services; however, the DTDMA/TDD protocol is easier to implement in hardware due to the fixed frame format.
DQRUMA is a polling/reservation protocol. It provides a Frequency-Division Duplex (FDD) polling structure which does not specify a particular polling algorithm. Services contend for request access to the channel, then may retain access to the channel by piggybacking further access requests on transmitted packets, similarly to DFQ. Downstream polls and packet transmissions are synchronized with upstream transmissions, so poll frequency is restricted by other downstream transmissions. DFQ uses similar piggybacking and request contention (during call setup for CBR and VBR, and at all times for best-effort services). However, DFQ provides tag polls to allow idle delay-sensitive services access to the channel without submitting access requests. DFQ therefore can provide better QoS control for delay-sensitive services, while DQRUMA is easier to implement due to its FDD design and fixed packet transmission intervals.
3.5 Simulation Results
In order to examine the behaviour of a distributed SCFQ system for delay-sensitive traffic, simulations were performed for different source models and load levels. Three simulations were performed: a mixed-traffic simulation consisting of traffic of different source models and delay constraints transmitted together; a CBR-traffic simulation consisting of CBR sources from 2 services to 51 services transmitted together; and a bursty-traffic simulation consisting of identical bursty traffic with burstiness varying from low to high.
The three types of source models used in the simulation were deterministic constant bit rate (CBR) sources, Poisson sources, and two-state Markov traffic sources [32]. The two-state Markov traffic source alternates between idle and burst states. The burst state generates CBR traffic, while the idle state generates no traffic (Figure 3.6).
Figure 3.6: Two-state Markov traffic model.
Since a local-area network environment was assumed, propagation delay was set to zero. Propagation delay would be an important factor in a wider study of DFQ in metropolitan-area networks.
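The two-state Markov source of Figure 3.6 can be sketched as follows. This is an illustrative generator with hypothetical parameter names, using per-slot Bernoulli state transitions (giving geometric holding times) rather than whatever transition discipline the original simulator used.

```python
import random

def two_state_markov_source(p_start, p_end, n_slots, rng):
    """Two-state (idle/burst) Markov traffic source: the burst state emits one
    cell per slot (CBR at the peak rate), the idle state emits nothing.
    Per-slot Bernoulli transitions give geometric state holding times."""
    cells, state = [], "idle"
    for t in range(n_slots):
        if state == "burst":
            cells.append(t)                     # CBR emission while bursting
            if rng.random() < p_end:
                state = "idle"
        elif rng.random() < p_start:
            state = "burst"
    return cells

arrivals = two_state_markov_source(p_start=0.02, p_end=0.01, n_slots=10_000,
                                   rng=random.Random(42))
```

With these hypothetical transition probabilities, the long-run fraction of time in the burst state is p_start/(p_start + p_end) = 2/3.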
3.5.1 Mixed-traffic Simulation
To test the DFQ protocol in a heterogeneous environment, simulations were performed with a mix of CBR, bursty, and Poisson traffic from medium to high offered load. Results were obtained for average delay and delay jitter, showing multiplexer-like performance with smooth response to increased load. Tag poll rates are obtained for upstream traffic, showing actual tag poll rates well below the derived maximum rates according to usage calculations.
To test the protocol in a mixed-QoS, mixed-traffic environment, simulations were run for all three traffic sources (CBR, Poisson, and bursty traffic) for both remote and local sources. The traffic was composed of fixed services (Table 3.1). The channel cell rate was fixed at 10^5 cells/s, and offered load was varied from ρ = 0.5 to ρ = 0.8. The simulation was run for 50 000 cell times. To change the offered load, the average rate for each service was proportionally increased. The capsule size was estimated at 60.5 bytes, and the poll at 7.5 bytes; this is a realistic estimate of capsule size for non-FEC traffic. Any increase would cause a commensurate reduction in system efficiency. Service shares were calculated using the GPS service share calculation algorithm for delay-sensitive services (Section 2.2); under these conditions, the calculated service shares are close to the normalized average rates of the services.
Table 3.1: Simulation service mix. Columns: type, number, direction, average rate (cells/s), maximum rate, delay tolerance, ρ, σ. Bursty services (remote and local): average rate 4000 cells/s. Poisson services (remote and local): average rate 15 000 cells/s.
Values had to be chosen for the leaky-bucket sizes σ_i for the services. In an operational network, this would be left to the service creators and would not be a responsibility of the network; however, in this case values did not exist. For the bursty services, state changes were implemented every 100 bursty-state intercell times; therefore, more than 99 percent of the bursts would be less than 300 cells long (since ρ = 4000/50 000 = 0.08). Therefore, σ = 300/100 000 = 0.003. The Poisson service would most likely correspond to a data service, requiring low cell loss: given an interval of T cells, the distribution of the number of cells generated within that interval can be approximated by a Gaussian distribution of mean ρT and variance ρT. Assume we would wish the bucket to overflow at 5 standard deviations: then 5σ_rand = 5√(ρT) = T − ρT = (1 − ρ)T. Then T = 1021 cell times. At worst case, a bucket would have to hold all cells generated within that time, so the bucket size would have to be 1021ρ = 306 cells, so σ = 1021ρ/C = 0.003. For CBR, the bucket size was set at 1, so σ = 1/C = 10⁻⁵.
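The bucket-size arithmetic above normalizes each bucket depth by the channel cell rate C; as a quick numerical check (the helper name is ours):

```python
C = 100_000                      # channel cell rate in cells/s, as in the simulation

def bucket_sigma(bucket_cells, cell_rate=C):
    """Normalize a leaky-bucket depth (in cells) by the channel cell rate."""
    return bucket_cells / cell_rate

sigma_bursty  = bucket_sigma(300)   # bursty services: 300-cell bucket -> 0.003
sigma_poisson = bucket_sigma(306)   # Poisson: 5-sigma Gaussian bucket -> ~0.003
sigma_cbr     = bucket_sigma(1)     # CBR: single-cell bucket          -> 1e-5
```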
Average delay and average delay jitter values for bursty, CBR, and Poisson sources are given in Figures 3.7, 3.8, and 3.9 respectively. The numbers of tag polls for remote sources are given in Figure 3.10.
The total usage Σ u_i exceeded 1 for offered loads of 0.68, even when delay performance was very good. Therefore, the usage equations overestimate usage by polls. As calculated in Section 3.2, local services would require u_i = 1.14φ_i; since the simulation assumes one poll as 0.25 cell times, remote services would require u_i = 1.14 · 1.25φ_i = 1.43φ_i without estimating tag polls. With σ_i ≈ ρ_i and φ_i ≪ 1, the rightmost term in (3.14) is approximately 0.5φ_i, so each remote service would require usage of approximately u_i = 1.43φ_i + 0.5φ_i = 1.93φ_i. Assuming a nearly equal volume of remote and local services,
Figure 3.7: Average delay and delay jitter for bursty sources. X axis is average load ρ, y axis is average delay in cell times.
Figure 3.8: Average delay and delay jitter for CBR sources. X axis is average load ρ, y axis is average delay in cell times.
Figure 3.9: Average delay and delay jitter for Poisson sources. X axis is average load ρ, y axis is average delay in cell times.
the maximum total service share Σ φ_j = 2/(1.14 + 1.93) = 0.65.
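The per-service usage arithmetic above can be checked numerically. The function and parameter names are ours; the constants (1.14 FEC/ARQ overhead factor, 1.25 poll factor for a 0.25-cell-time poll, and the approximate 0.5φ tag-poll term from (3.14)) are taken from the text.

```python
def remote_usage(phi, fec_arq=1.14, poll=1.25, tag_poll=0.5):
    """Usage of a remote service with share phi: overhead-scaled share plus
    the approximate tag-poll term from (3.14)."""
    return fec_arq * poll * phi + tag_poll * phi

def max_total_share(local=1.14, remote=1.93):
    """Largest total service share for an equal mix of local and remote
    services, keeping total usage at most 1."""
    return 2 / (local + remote)
```

remote_usage(phi) evaluates to 1.925·phi (1.93·phi after the text's rounding), and max_total_share() is approximately 0.65, matching the value above.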
The graphs show typical exponential delay behaviour for increasing loads. Each service experienced delays significantly below its delay tolerance for ρ < 0.75, and low jitter. No significant difference between the performance of local and remote services of the same type is apparent, except for CBR services at marginally stable load (ρ = 0.8). This is due to the increased overhead requirements of remote services; since they have the same delay requirement and higher overhead, they have slightly higher usage than the equivalent local service. The effect is only apparent at marginally stable loads.
The frequency of tag polls decreased substantially with increasing load. This is to be expected, since greater load means that connections will be backlogged more of the time. From (3.11), the least upper bound on the tag poll rate at full load is approximately 0.01 polls/cell time for bursty sources and approximately 0.034 polls/cell time for Poisson sources. From Figure 3.10, the measured rates were 0.0029 polls/cell time and 0.0077 polls/cell time, respectively. Therefore, the average rates were measured to be roughly 20% of the calculated maxima. This suggests the maxima may be excessively pessimistic, and suggests the use of average poll rate estimates based on a similar fraction of the calculated maxima.
Figure 3.10: Number of tag polls per cell time for remote connections. X axis is average load ρ.
3.5.2 CBR-traffic Simulation
To determine the effect of varying numbers of services on the delay performance of DFQ, simulations were performed with homogeneous services of constant offered load and varying number. One to 50 services of 700 cells/s and equal delay tolerance were added to the network running at 100 000 cells/s, while a different CBR service kept the offered load at 70%. Therefore, the simulation demonstrates the effect of different numbers of services being supported by the network under the same offered load. By the GPS service share allocation algorithm, the worst-case cell delay would be (700 cells/s / 70%)⁻¹ = 1 millisecond = 100 cell times. Simulated performance showed a maximum delay of approximately 30 cell times, within the guaranteed delay bounds.
Average delay for the CBR services increased with increasing numbers of services. This is to be expected, as each service may face increasing numbers of services with identical tag values: since tag value ties are resolved randomly, the expected queueing delay of each service should increase linearly with increasing numbers of services. Delay jitter reached a maximum of approximately 10 cell times. It is expected that jitter would depend on the "phases" of the CBR arrival processes: a system with many in-phase CBR arrival processes would suffer higher jitter than a system with out-of-phase CBR arrival processes, due to
Figure 3.11: Delay and delay jitter for varying numbers of CBR services with constant offered load. X axis is number of services.
the random resolution of tag ties. In this case, CBR arrival process phases were distributed uniformly over the CBR cell arrival period. The increasing delay of services in an SCFQ system due to increasing numbers of services is predicted in [25].
3.5.3 Bursty-traffic Simulation
To determine the effect of service burstiness on the delay performance of the protocol, simulations were performed with homogeneous services of constant average offered load and varying burstiness. Ten identical bursty services of average rate 7000 cells/s were supported in a network running at 100 000 cells/s. This maintained a 70% offered load. Peak cell rate was varied from 10 000 cells/s to 50 000 cells/s. Delay tolerance was adjusted such that 10 services of peak cell rate 50 000 cells/s were barely supported (that is, Σ u = 1). This simulation therefore demonstrates the effect of service burstiness on delay and delay jitter while offered load to the network is held constant.
Average delay increased approximately linearly with increasing service burstiness. This is expected, due to the increased backlogs during busy periods propagating throughout the simulation. The relative phases of the bursty service arrival processes obviously affect the average delay; in this case, their phases were uncorrelated. Delay jitter was relatively
Figure 3.12: Delay and delay jitter for bursty services with constant offered load and varying Peak Cell Rate.
low throughout the simulation: it is expected that the low value was due to the uncorrelated phases of the bursty service arrival processes, such that the average length of the service queues was fairly constant at all times.
3.6 Conclusions
This work has shown the feasibility of extending SCFQ to a distributed environment. With a centralized WLAN, we can schedule delay-sensitive connections with different arrival processes and different QoS requirements to meet their respective requirements as well as possible. The service share of the connection was calculated as a function of the connection's delay requirements, with good results. The concept of usage allows more accurate determination of traffic levels for real DFQ systems.
The DFQ system can handle constant- and variable-rate traffic with user-specified maximum delay bounds and service rates, therefore allowing QoS preservation. The polling scheme allows efficient use of the available bandwidth, as packet collisions do not occur. The increased overhead due to tag polls was shown to be low in simulation.
DFQ may be used with both ATM and IP service models; it is not yet clear whether IPv6 may be supported. Difficulties with IPv6 implementation may be encountered due to its static priority levels, where non-congestion-controlled traffic has priority over congestion-controlled traffic. Fair queueing implements dynamic priority levels through the tagging mechanism, where priority corresponds to tag value: there is no mechanism to give one capsule a lower tag than another regardless of their arrival times. However, if non-congestion-controlled traffic is always handled through the best-effort traffic mechanism, IPv6-like performance may be supported. This depends on how rigorously IPv6 traffic priority levels are enforced in practice.
Chapter 4
Hybrid CDMA/TDMA Networks
Most wireless network designs are either Time Division Multiple Access or Code Division Multiple Access (Section 1.3.1). Certain frequency bands, including the Industrial, Scientific, and Medical (ISM) bands, require the use of spread spectrum modulation for unlicensed operation. Using TDMA in such a network, where only one service may transmit at once, wastes the simultaneous-transmission capability of spread-spectrum systems. Using CDMA, such that any service can transmit at any time, requires that the worst-case condition of all services transmitting simultaneously be considered, which may cause gross overallocation of resources so that QoS guarantees are not violated under worst-case conditions. A hybrid CDMA/TDMA architecture can mitigate the weaknesses of both pure approaches. This chapter proposes a hybrid TDMA/CDMA architecture, where stations may transmit on different spread-spectrum codes as in CDMA, but are restricted in the times they may transmit as in TDMA.
Calculation of capacity in a hybrid network is quite different from calculations of capacity in a TDMA network. Allowing different codes to transmit at once while attempting to limit interference changes the nature of the problem to a two-dimensional one of interference and delay. As well, the difference in QoS between services makes the problem quite different from a homogeneous network capacity problem, where all services have the same requested QoS.
The goals of this chapter are to:
Mathematically develop differential power control as a method for interference control in a heterogeneous service environment;
Develop a definition of capacity for the network;
Develop partitioning schemes for either "on-the-fly" interference control or interference control at call setup;
Introduce examples of heterogeneous service capacity for simple voice/data service mixes.
Figure 4.1: Hybrid architecture data paths.
4.1 Architecture
The hybrid TDMA/CDMA system is a centralized system with a base and (possibly mobile) remotes. The physical architecture is Direct Sequence (DS) CDMA. The spreading factor and chip rate are fixed for each code, so that each code has a data rate R and a spreading gain of G_p. Transmission is allowed from base to remote and remote to base, and remote-to-remote transmission is prohibited. At any time, the base may transmit, or one or more remotes may transmit. Each remote may transmit data from one or more services over separate spread-spectrum codes at once, and more than one remote may transmit data at once (Figure 4.1). Similarly, the base may transmit data from one or more services over separate spread-spectrum codes at once. Any simultaneous transmissions must occur over different spread-spectrum codes.
Figure 4.2: Hybrid transmission: one capsule is transmitted in each timeslot from each transmitting code.
Therefore, it is assumed that both remotes and base stations are able to transmit several codes simultaneously with different power levels. This is a nontrivial assumption; remote station implementations for current wireless LANs and Personal Communication Systems (PCS) employ a single transmitter with a saturating preamplifier [33]. Forward power control laws [34, 35] and variable-QoS power control schemes [36, 37] assume such a structure for base-to-remote communications, but not for remote units. Thus, a hybrid CDMA/TDMA transceiver would have to use a more expensive non-saturating preamplifier.
All data is delivered to the MAC sublayer in units called capsules, as in Chapter 3. The capsule includes error control and header information. Capsules are allowed to transmit at the beginning of timeslots (Figure 4.2). Each timeslot may or may not be the same duration; however, the timeslot must be long enough to accommodate the longest capsule which may be transmitted in the timeslot.
4.2 Hybrid Network Capacity
A service in a hybrid TDMA/CDMA network will have two types of QoS guarantees at the cell level: time transparency tolerances and semantic transparency tolerances. Time transparency tolerances indicate the service's ability to withstand delay and cell delay variation. Semantic transparency tolerances indicate the service's ability to withstand noise and interference. This architecture relies on differential power control to control interference. Traditionally, power control strategies are meant to equalize received power in transmissions from remotes to base [35] and to maximize system capacity in a multicellular environment from base to remotes [34]. In these strategies, a power control law φ(r_ji) is obtained which is a function of the distance r_ji between the units. The transmitted power for each service is φ(r_ji)P_0, where P_0 is the received power.
Figure 4.3: Transmitted power and received power in a network, and the near-far correction φ(r_ji).
Differential power control to support multiple QoS requirements has been investigated by Yun et al. [37] and Sampath et al. [36]. In the differential power control case, P_0 is replaced by a service-dependent received power P_j such that the expected received power E[P_j] = P_0. The set of received powers {P_j} is determined by minimizing the received power while meeting the services' QoS constraints. The resultant powers must then be corrected for near-far effects by φ(r_ji) to determine the power levels at which the data should be transmitted (Figure 4.3).
4.2.1 Interference Control
Unlike a pure CDMA system or a pure TDMA system, a hybrid system may use both time scheduling and power level assignment to guarantee QoS. The system may trade off semantic and time transparency: for example, a transmission may be delayed from a timeslot with many interferers to one with few, decreasing semantic impairment but increasing time impairment.
In this study, each service j is assumed to have a transmission rate R_j, received power P_j, and an interference power tolerance I_j^max. The transmission rate is normalized to the channel bit rate: a service which ran at the same bit rate as the channel would have R_j = 1. The received power determines how much interference the given service causes to other services when transmitting. The interference tolerance is compared to the sum of the powers of the simultaneously transmitting services, such that the QoS contract of service j is violated if the total transmitted power from all other services transmitting simultaneously exceeds I_j^max:

Σ_{i∈S, i≠j} P_i > I_j^max,

where S is the set of simultaneously transmitting services at that time.
Therefore, a set of services S can only be transmitted together if

I_j^max ≥ Σ_{i∈S, i≠j} P_i,   ∀ j ∈ S.

Adding P_j to each side provides a more useful criterion:

I_j^max + P_j ≥ Σ_{i∈S} P_i,   ∀ j ∈ S,

then

min_{j∈S} (I_j^max + P_j) ≥ Σ_{i∈S} P_i,     (4.5)

where P^T = min_{j∈S} (I_j^max + P_j) is the power threshold.
We would like to find the conditions on P_j and I_j^max such that the maximum number of services is supported. We do this indirectly by using the interference residual capacity, C_S^I, derived from (4.5). We define P_0 as the average power of all services in S such that P_0 = Σ_{i∈S} P_i / N_S, where N_S is the number of services in S. So:

C_S^I = (P^T − Σ_{i∈S} P_i) / P_0.     (4.6)

This determines how many "neutral" services of power P_0 and interference tolerance (P^T − P_0) may be added to the set without violating QoS constraints.
There is a tight relationship between I_j^max and P_j. The service's BER tolerance determines the minimum acceptable Signal to Interference Ratio (SIR); therefore, the service's minimum SIR should be held constant. The minimum SIR allowable for service j is G_p P_j / I_j^max, where G_p is the system's spread-spectrum processing gain. Therefore, P_j / I_j^max = SIR_j / G_p, where SIR_j is constant for each service j.
To maximize C_S^I, we prove the following:
Theorem 5. C_S^I is maximized under the condition P_j / I_j^max = SIR_j / G_p ∀ j if and only if all (I_j^max + P_j) are equal; that is, (I_j^max + P_j) = P^T ∀ j ∈ S.
Proof. Necessity: From (4.5), min_j (I_j^max + P_j) = P^T. Assume there is a service k with (I_k^max + P_k) = βP^T such that β > 1, and C_S^I is maximized. From (4.6), C_S^I = (P^T − Σ_{i∈S} P_i)/P_0.
Now replace (I_k^max + P_k) with (I_k^max + P_k)/β = P^T. To keep SIR_k constant, we must replace P_k with P_k/β. This means that P_0 is replaced with P_0/α, where α > 1, since now the average power is lower. Now the numerator of (4.6) is larger and the denominator smaller, so C_S^I increases, and the maximum C_S^I assumption is violated. Therefore, if C_S^I is maximized then (I_j^max + P_j) exceeds the minimum for no service and so (I_j^max + P_j) = P^T ∀ j ∈ S. If more than one service has (I_k^max + P_k) > P^T, the above process can be repeated for each service.
Sufficiency: Assume C_S^I is not maximized, and (I_j^max + P_j) = P^T ∀ j ∈ S. To increase C_S^I, either P^T must be increased or Σ P_i must be decreased.
Assume P^T is increased by replacing it with βP^T, β > 1. This means (I_j^max + P_j) is replaced by β(I_j^max + P_j). To retain the same SIR for each service, P_j is replaced by βP_j. Therefore, the numerator and denominator of (4.6) are both scaled by β, and therefore C_S^I cannot be increased, according to the premise.
Assume some subset of {P_j} is decreased by replacing P_j with P_j/α, α > 1. Therefore, to retain the same SIR, I_j^max is replaced by I_j^max/α. Therefore, P^T = min_j (I_j^max + P_j) is replaced by P^T/α. However, since only a subset of {P_j} is changed, all (I_j^max + P_j) are not equal and the assumptions are violated. If all P_j in S are replaced by βP_j, 0 < β < 1, then the scaling argument above applies again.
Therefore, if C_S^I is not maximized, all (I_j^max + P_j) cannot be equal, and the theorem is proven. □
To determine the value of P^T, we make use of the equations I_j^max + P_j = P^T and SIR_j = G_p P_j / I_j^max. From this, for any service j,

P_j = P^T SIR_j / (G_p + SIR_j),     (4.11)

and therefore

Σ_{i∈S} P_i = P^T Σ_{i∈S} SIR_i / (G_p + SIR_i),     (4.12)

and therefore

P^T / P_0 = N_S / Σ_{i∈S} [SIR_i / (G_p + SIR_i)].

Then, from (4.5) and (4.12) we generate an admission policy. In order for there to be no violations of QoS in S:

Σ_{i∈S} SIR_i / (G_p + SIR_i) ≤ 1.     (4.13)

From (4.11) and (4.13) we can calculate the relative magnitudes of P_j, j ∈ S:

P_j / P_k = [SIR_j / (G_p + SIR_j)] / [SIR_k / (G_p + SIR_k)].     (4.14)
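The admission policy and relative power assignment derived in this section can be sketched as follows. The helper names are ours, and the formulas assume the relation P_j / I_j^max = SIR_j / G_p with all (I_j^max + P_j) equal to P^T, as in Theorem 5.

```python
def power_fraction(sir, g_p):
    """Service j's received power as a fraction of the power threshold P^T:
    P_j / P^T = SIR_j / (G_p + SIR_j), from I_j^max + P_j = P^T and
    SIR_j = G_p * P_j / I_j^max."""
    return sir / (g_p + sir)

def admissible(sirs, g_p):
    """A set of services can transmit simultaneously without QoS violations
    iff its power fractions sum to at most 1 (the admission policy)."""
    return sum(power_fraction(s, g_p) for s in sirs) <= 1.0
```

For example, with G_p = 10 and services at SIR = 5 (the value implied by the two-interferer example in Section 4.3), each fraction is 1/3: three simultaneous services are admissible, four are not.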
In this framework, noise of power spectral density η_0/2 in a system with a bandwidth of W rad/s can be modelled with noise power P_N = η_0 W. The interference from other cells at the receivers is modelled with interference power P_I. However, the equations assume that any interference is on another code and attenuated by G_p; the noise power is not, and must be modelled by G_p P_N. From (4.5),

P^T ≥ Σ_{i∈S} P_i + G_p P_N + P_I.

The proof of Theorem 5 for (G_p P_N + P_I)/P_0 held constant is a trivial extension of the noiseless derivation. Following the derivation given in (4.11) to (4.14),

Σ_{i∈S} SIR_i / (G_p + SIR_i) ≤ 1 − (G_p P_N + P_I) / P^T.

Since P^T scales with the average power P_0, the right-hand side can be increased arbitrarily close to 1 by increasing the average power P_0.
4.2.2 Service Partitions
The selection of services to simultaneously transmit is constrained by the results of Theorem 5: if we wish to transmit with maximum efficiency, all the services we transmit simultaneously have the same value of (I_j^max + P_j). However, this does not mean that all services in the network need have the same value of (I_j^max + P_j), as they can be arranged into subsets of services, or partitions. A service in a given partition may only be transmitted simultaneously with another service in the same partition. The same codes may be used by different services as long as the services are in different partitions.
This allows at least two Medium Access Control (MAC) schemes. In the unpartitioned case, all services in the network are divided into one downstream set and one upstream set. Then the system must determine a subset that can be transmitted at a given time such that C_S^I ≥ 0 for that subset. In the multiple-partition case, the services are divided a priori into partitions S_1 … S_M such that C_m^I ≥ 0 for each partition. The partitions would have to be time-scheduled so that the QoS contracts for each service would not be violated (Figure 4.4). Each partition would have to consist of only downstream or only upstream services, and not a mixture of both. Otherwise, stations would have to be capable of transmitting and receiving at the same time and resolving a complex near-far problem. Thus, the unpartitioned system performs interference control "on the fly", while the multiple-partition system performs interference control at call setup by assigning partitions.
An unpartitioned system has the advantage of greater flexibility, since the transmission of each service's data is dependent on the QoS contract of that service only. However, interference control decisions must be made every slot time. A multiple-partition system has the advantage of greater simplicity, since capacity decisions are made on call admission. But each partition must be characterized by its service with the most stringent time tolerances. Overcapacity in one partition cannot be used by another partition, which segments the available bandwidth. The effectiveness of service partitioning will depend on the service mix, as a system with a small set of similar services will be easier to partition than a system with a number of widely dissimilar services.
4.2.3 Residual Capacity Calculations
The interference parameters complicate capacity calculations compared to those of TDMA scenarios. The structure of the problem does not allow for a simple measure of capacity in bits per second, but depends on the nature of the services to be supported.
However, we do not necessarily require the exact capacity of a multimedia system. Since services are admitted to the network through Call Admission Control (CAC) on a case-by-case basis, we only need the residual, or remaining, capacity of the network. The resources required for each prospective new service can be compared to the residual capacity, and if the residual capacity is greater than the new service's requirements, the service can be admitted. The measurement of residual system capacity depends on whether the system is unpartitioned or multiple-partition.
Unpartitioned Systems
For an unpartitioned system, we assume the services are divided into an upstream set S_u and a downstream set S_d. Each set has a power tolerance P_u^T and P_d^T. Each service i in each set has an allocated bit rate (as defined by Call Admission Control) of R_{i,u} or R_{i,d}, and each set S_u and S_d is serviced at an average bit rate of R_u and R_d = R − R_u, respectively. We define the residual capacity of the sets as

C_u = (R_u − Σ_{i∈S_u} R_{i,u}) / R

and

C_d = (R_d − Σ_{i∈S_d} R_{i,d}) / R.

This gives an idea of the amount of capacity left in the set. This is the analog of C_S^I (4.6), but formulated in terms of bit rates instead of the theoretical idea of average services.
Figure 4.4: Two partition schemes. a) Unpartitioned: determine services to transmit each timeslot. b) Multiple partitions: determine partition to transmit each timeslot.
Multiple-partition Systems
Partition allocation in a multiple-partition system involves assigning services to partitions in such a way that a service's partition is scheduled in a manner acceptable to the service's QoS contract, in as efficient a manner as possible. Services are assigned to partitions upon call setup; since the scheduler now deals exclusively with the time scheduling of partitions, the problem is reduced to a TDMA scheduling problem. To support best-effort traffic, one or more partitions may be allocated for unscheduled traffic. These partitions would obviously provide no guarantees on interference.
To determine how efficient a partitioning is, we must have a measure of the inefficiency in the system. Whereas the residual interference capacity C_m^I in each partition can be used for new services, a certain amount of capacity is wasted and cannot be used for new services. A partition S_m is scheduled at an average of R_m bits/s: if a service i's allocated bandwidth in S_m is less, that amount of bandwidth is wasted since it cannot be used by that service, and it cannot be allocated to another service since a partition must be able to transmit all its services simultaneously without QoS violations.
We can define the wasted capacity for each partition S_m as

C_{w,m} = (1/R) Σ_{i∈S_m} (R_m − R_{i,m}).

Therefore, the residual capacity C_m of the partition is the capacity remaining after the allocated and wasted capacity are accounted for.
The sum Σ_m C_m for a given upstream/downstream direction is generally less than the unpartitioned result, as the probable increase in simplicity of scheduling a multiple-
Table 4.1: Voice/data service mix. Voice services are transmitted with each of the three data types, one at a time, in each example.
Chip rate R_c: 10 Mchip/s
Spreading gain G_p: 10, 100
Voice bit rate R_v: 8 kb/s
Data SIR: 12, 10, 5
partition system is offset by a loss in efficiency. This is analogous to the statistical multiplexing gain offered by wireline ATM systems: unpartitioned systems can allocate resources based on the needs of the moment (statistical multiplexing), while multiple-partition systems must partition resources based on call admission characteristics (static TDMA multiplexing).
4.3 Calculations
In a hybrid system, as in a pure CDMA system, performance depends on the selection of spreading factor and therefore the processing gain G_p and the code bit rate R. A low spreading factor allows high bit rates and low packetization delay, but results in high interference levels and less antimultipath protection. The high interference levels would necessitate a conservative design with high interference margins and therefore lower capacity. A high spreading factor mitigates the interference problem, but results in higher packetization delays and more use of multi-code service transmissions. Multi-code services, when combined with ARQ, would require complex reassembly and large buffer space, and codes are a finite resource for a station.
We consider a very simple voice/data service mix as given in [36]. We consider low bit rate voice and low bit rate data with fixed SIR for voice and several SIR values for data (Table 4.1). Here we ignore transmission overhead, and all transmission is in one direction. Capacity graphs for an unpartitioned system for Gp = 10 are given in Figure 4.5. Graphs for Gp = 100 are given in Figure 4.6. These capacities were calculated using (4.21); this reduces to solving:
Figure 4.5: System capacity for an unpartitioned system for Gp = 10.
where n_v and n_d are the numbers of voice and data channels, respectively. Capacity graphs for a multiple-partition system for Gp = 10 are given in Figure 4.7. Graphs for Gp = 100 are given in Figure 4.8. These capacities were calculated by an algorithm which assigns partitions to voice and data services. The minimum number of partitions are created for each service type, and only traffic of the same type may be placed in a partition.
The difference in capacity for the different spreading gains seems surprising at first. The difference is due to the lack of self-interference. For example, in the case where only voice calls are transmitted at Gp = 10, 2 interferers can be tolerated at once by each service. Therefore, the system can support the service plus 2 interferers: 3 services at once. When Gp = 100, 20 interferers can be tolerated at once by each service. Therefore, the system can support the service plus 20 interferers: 21 services at once. The bit rate R for Gp = 100 is 1/10 of the bit rate for Gp = 10, but the system cannot support 10 times the services at Gp = 100.
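The interferer arithmetic above can be checked with a short calculation. This is a sketch only; the voice SIR tolerance of 5 is inferred from the 2- and 20-interferer figures quoted in the text, not stated explicitly.

```python
def max_simultaneous_services(spreading_gain, sir_tolerance):
    """Number of equal-power services one channel can carry at once.

    Each of the n services sees (n - 1) interferers, each contributing
    interference at 1/Gp of the signal power after despreading, so the
    SIR constraint is Gp / (n - 1) >= SIR, i.e. n <= Gp / SIR + 1.
    """
    return int(spreading_gain / sir_tolerance) + 1

# Voice-only examples, assuming a voice SIR tolerance of 5:
print(max_simultaneous_services(10, 5))    # 3 services (2 interferers)
print(max_simultaneous_services(100, 5))   # 21 services (20 interferers)
```

Note that going from Gp = 10 to Gp = 100 multiplies the per-channel service count by 7, not 10, while the bit rate falls by a factor of 10, matching the capacity gap discussed above.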
The pronounced staircase features of the multiple-partition graphs are due to the partition structure. Every time a service type acquires a new partition, the full capacity of the partition is acquired at once. The lower number of services supported by the multiple-partition system is due to the residual capacity of each partition, which cannot be used by the services in the system. It is not wasted capacity, as all services in each partition are
Figure 4.6: System capacity for an unpartitioned system for Gp = 100.
Figure 4.7: System capacity for a multiple-partition system for Gp = 10.
Figure 4.8: System capacity for a multiple-partition system for Gp = 100 (curves shown for data bit rates of 4, 8, and 20 kb/s).
identical and therefore have the same allocated bit rate.
The capacity difference in practice will depend strongly on the details of execution. A system with a low Gp will be more susceptible to variations in individual interferers and multipath interference, which will translate into larger safety margins and a decrease in capacity.
4.4 Conclusions
A theoretical framework for the design of a hybrid CDMA/TDMA wireless network is developed. Given service SIR tolerances, it is possible to derive the optimum received power levels to maximize the capacity of the hybrid system. Two design strategies, unpartitioned and multiple-partition design, are developed that exploit this capability in different ways. The performance of unpartitioned systems is better than multiple-partition systems for simple service mixes, although the design simplicity of the latter case may offset this deficiency. Both types of systems have higher capacities at low spreading factors. The actual scheduling methodology is left for future work.
While the hybrid architecture is suitable for an ATM system, it would be difficult to implement such an architecture for an IP system. While best-effort traffic can be handled through best-effort partitions, variable-size IP packets would make simultaneous transmission of capsules difficult without large inefficiencies due to overly long timeslots.
The development of a hybrid TDMA/CDMA system will require future research into the time scheduling of either individual services or partitions, and partitioning methods for typical multimedia services. If partitioning is done well, simple time scheduling such as Weighted Fair Queueing should suffice. As well, protocol considerations regarding upstream and downstream transmission, as well as service add/drop, must be investigated.
Chapter 5
SUPERNet Channel Allocation
While most WLAN environments to date have been heavily regulated to use certain protocols, modulation techniques, and channelization schemes, some new frequency bands are being opened for unlicensed use with very few a priori restrictions. Networks using these bands have the advantage of operating from a clean slate, but must be able to coexist with other networks that may be using the same bands. In this chapter, we develop a spectrum sharing protocol for a current 5 GHz proposed unlicensed regulatory environment. This protocol allows different networks to share the environment with little interference from other networks, and allows communication between some networks. Performance results for the protocol are analytically derived.
Both communications equipment manufacturers and the Federal Communications Commission (FCC) have started development of a new class of networks: unlicensed, private, wideband, multimedia wireless Local Area Networks (WLANs) for industrial, commercial, educational, and other sites. A Notice of Proposed Rulemaking [38, 39] has been released by the FCC to regulate the new Shared Unlicensed PERsonal NETworks (SUPERNets). The SUPERNets will be allocated several frequency blocks in the 5 GHz range. To introduce the SUPERNet environment, it is necessary to discuss the environment, channel selection, and multicellular structures dictated by the regulations.
The goals of this chapter are to:

- Review the SUPERNet environment and regulations;
- Develop a grouping methodology for SUPERNet devices to support typical service scenarios;
- Develop Active Channel Avoidance (ACA) as a simple channel allocation algorithm;
- Analyze the interference-avoidance performance of ACA for simple traffic models.
5.1 SUPERNet Architecture
SUPERNet designs are based on a philosophy of continuous invention, as the networks in a given environment may be incompatible in their physical, MAC, datalink, and other layers as long as they follow the few regulations proposed by the FCC. The proposed FCC regulations are designed to allow SUPERNets to share communications channels (thus the Shared designation), and limit interference. Therefore, several different SUPERNets may have overlapping coverage areas and share the same frequency space. However, the problem of exactly how to share the frequency space is deliberately left open in the regulations.
The SUPERNet frequency blocks are divided into many separate data channels. Each SUPERNet may use one or more of the data channels according to its MAC architecture. When a SUPERNet wishes to transmit, it must choose one or more of these data channels on which to transmit its burst. Since the SUPERNets are allowed to be ad hoc, temporary, and mobile, a fixed channel reuse scheme is impractical. As channel activity is bursty, channel sensing schemes would not be reliable for channel selection. As well, the SUPERNet proposal explicitly allows different networks to share the same channel. These restrictions and capabilities are quite different from those of cellular and Personal Communication Services (PCS), and demand new solutions for channel allocation.
While the SUPERNet regulations are tailored to support the sharing of the medium by unrelated groups of users, it is essential that a mechanism exists for supporting groups of compatible type in a cellular configuration. In this document, a group of users which cooperate at the Medium Access Control (MAC) level is termed a cluster. A network is made up of one or more clusters which may forward messages and hand off mobiles to one another. A cluster in the same network is termed a cooperating cluster, and a cluster of a different network is termed competing. Clusters correspond to micro/picocells in a centralized network architecture (Figure 5.1).
A multi-cluster network must support communication between its component clusters through Access Points (APs). The APs are stations with the capability of forwarding
Figure 5.1: Two networks, designated A and B, each are comprised of two clusters, 1 and 2, which have overlapping coverage areas. These clusters must be able to share the same channels with either cooperating or competing clusters. Inter-cluster Access Points are indicated by AP.
Figure 5.2: Stations are subjected to three levels of control for data transmission.
data from their cluster to one or more other clusters in the same network. Since a multi-cluster network will have pre-planned coverage and infrastructure, a semi-fixed AP plan is assumed for multi-cluster networks.
Therefore, a cluster may share the medium with both competing clusters, with which it only seeks to minimize interference; and cooperating clusters, with which it seeks to minimize interference, but with which it may also transmit or receive signalling and data transmissions. The goal of SUPERNet channel allocation is to allow the most efficient allocation strategy which supports the goals of both cooperating and competing clusters.

In essence, a station is subject to three levels of control for data transmission: channel selection, channel sharing (between different clusters), and Medium Access Control (Figure 5.2). This chapter primarily concentrates on the channel selection layer.

Channel allocation has been studied extensively for cellular systems [40, 41, 42]. However, the packet nature of SUPERNet traffic, as well as the dynamic nature of the cells themselves, makes these strategies unsuitable for SUPERNet application.
5.2 SUPERNet Transmission Rules
In order to further define SUPERNet channel access, the regulations for channel access are introduced. We propose that a single control channel be made available as a common resource, which is the default channel for all clusters. A SUPERNet station will access a data channel only when data traffic is being transmitted in the cluster; otherwise, the station will listen on the control channel.

All SUPERNets are mandated to use three frequency bands, at 5.15-5.25 GHz, 5.25-5.35 GHz, and 5.725-5.825 GHz. While the bands must be divided into broadband digital channels, the channelization scheme is deliberately left an open problem. However, it is expected that data channels will have bandwidths of at least 20 MHz. The rules proposed by the FCC for SUPERNets are as follows:
- Low power (maximum 200 mW to 4 W EIRP, depending on the subband);
- Packet data (no circuit switched operation);
- Compliance to out-of-band emissions standards.
Any other standardization is to be left to industrial guidelines.

None of the rules require that all SUPERNets adhere to the same physical, MAC, data link, or other layer standards. Therefore, it is possible and likely that a given environment will need to support heterogeneous SUPERNets efficiently and fairly. While handoff between SUPERNets should be supported, in many cases it will not be possible. This is a different situation from microcellular environments, where the goal is to create a seamless environment for mobile transmitters: here, the goal is to enable separate, heterogeneous networks to coexist within the same medium.
In this chapter, we assume that each of the three frequency bands in the SUPERNet frequency allocation is divided into multiple data channels and a control channel. Access to a data channel by a SUPERNet is only required when a station has data to transmit; otherwise all stations monitor the control channel. The period of time that a SUPERNet accesses a data channel is termed a cluster burst. This cluster burst may consist of one or more physical layer packets from different stations (Figure 5.3). Ordering of the packets in the transmission of a cluster burst is the responsibility of the SUPERNet's MAC layer. A cluster burst may be transmitted on one or more channels, depending on the MAC protocol. The channel allocation problem is then the process of allowing transmissions of cluster bursts from different clusters with as little interference as possible.
Therefore, a SUPERNet cluster must be able to indicate to its component stations when to transmit and on what channel, in order to form the burst. To determine when the burst may be transmitted, the SUPERNet cluster must be able to detect when the chosen channel is idle.
Figure 5.3: The physical layer packets that individual stations transmit and receive are part of the cluster burst that the SUPERNet allows to be transmitted.
However, determining whether the channel is idle or not is not straightforward, as one station in a cluster may be able to sense interference while another may not. This is due to possible differences in range and propagation characteristics from one location to another. As well, collisions between bursts may still occur, where a receiving station in one of the clusters experiences interference from another cluster. This is due to the Hidden Terminal effect introduced in Section 1.2.1, where transmitters may be far enough away from each other to not sense each other's interference, but one or more of the receivers may experience the interference (Figure 5.4). Therefore, while an idle channel detection mechanism is required for each cluster, the mechanism cannot by itself cause all possible collisions to be avoided.
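The hidden terminal situation can be illustrated with a one-dimensional range check. The positions and sensing range below are made-up illustrative values, not from the thesis:

```python
# Two transmitters cannot sense each other, yet a receiver between them
# hears both, so their simultaneous bursts collide at the receiver.
def in_range(a, b, sense_range):
    """True if two stations at positions a and b can hear each other."""
    return abs(a - b) <= sense_range

sense_range = 10
a1, b1 = 0, 18   # transmitters in clusters A and B
a2 = 9           # receiver in cluster A, between the two transmitters

print(in_range(a1, b1, sense_range))  # False: transmitters sense the channel idle
print(in_range(a1, a2, sense_range))  # True: A2 hears its own transmitter...
print(in_range(b1, a2, sense_range))  # True: ...and also hears B1 (collision)
```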
5.3 Channel Allocation Strategies
For best system performance, transmission channels should be allocated based on a strategy that minimizes the interference between clusters. First, basic channel access rules are introduced, as well as the concept of Access Points between cooperating clusters. The
Figure 5.4: Hidden terminal problem: Both clusters A and B decide to transmit on the same channel. Both A1 and B1 sense the channel idle, since neither could detect transmissions from the other. However, A2 and B2 lie within the coverage of both clusters, and experience interference, although both transmitters have sensed no interference.
strategy of Active Channel Avoidance (ACA) is introduced, which uses locally available information to avoid interference. For comparison, the simplest channel allocation strategy of random allocation is introduced.
The transmission medium is divided into Nc broadband data channels and one control channel. For now, we assume the data channels are of equal width for simplicity. Each cluster may attempt to use any one of the Nc data channels at any time for data transmission. The medium is used by many clusters, whose number may change with time as clusters move, form, and disperse. A cluster transmitting on one channel will not interfere with a cluster transmitting on another channel.
In a multi-cluster network, each pair of Access Points (APs) between clusters requires a channel for intercluster communication (Figure 5.1). Each cluster may have multiple APs in different stations.
One cluster using the medium may or may not be able to interfere with another cluster's transmissions on the same channel, due to the distance between the clusters, intervening walls, or other propagation conditions. This allows us to construct an interference graph, such that vertices represent clusters and edges represent possible interference between the connected vertices (Figure 5.5). Each vertex v_i or edge has an associated set of channels upon which the cluster represented by the vertex transmits. The edges in the interference graph are directed, since differences in receiver design or transmit power levels may mean that cluster i may interfere with cluster j's transmissions, but not vice versa.
Two clusters are deemed neighbours if their vertices are connected with an edge, and therefore are at risk of interfering with each other. A cluster which may interfere with cluster i is an in-neighbour of i, and any cluster i may interfere with is an out-neighbour of i, similar to the fan-in and fan-out designations in an electronic circuit. For example, in Figure 5.5, v2 is an in-neighbour of v4 and v4 is an out-neighbour of v2, as they are connected by a unidirectional edge e2. Similarly, e2 is an in-edge of v4 and an out-edge of v2, while v4 is the in-vertex of e2 and v2 is the out-vertex of e2. A bidirectional edge is shorthand for two unidirectional edges. The more connectivity in the graph, the worse the possible interference problem is for the medium.
Figure 5.5: An example interference graph. Each vertex represents a cluster; edges between vertices represent the possibility of interference between them.
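The directed interference graph and the neighbour terminology above can be expressed directly. This is a minimal sketch; the vertex and edge names follow the v2/v4 example from the text:

```python
# Directed interference graph: an edge (i, j) means cluster i may
# interfere with cluster j's transmissions (not necessarily vice versa).
edges = {("v2", "v4")}  # the unidirectional edge e2 from the example

def in_neighbours(v, edges):
    """Clusters that may interfere with v (fan-in)."""
    return {i for (i, j) in edges if j == v}

def out_neighbours(v, edges):
    """Clusters that v may interfere with (fan-out)."""
    return {j for (i, j) in edges if i == v}

print(in_neighbours("v4", edges))   # {'v2'}: v2 is an in-neighbour of v4
print(out_neighbours("v2", edges))  # {'v4'}: v4 is an out-neighbour of v2
```

A bidirectional edge would simply be represented by including both (i, j) and (j, i) in the edge set.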
5.3.1 Static Allocation
The simplest channel allocation strategy is static allocation. Each cluster picks a channel at random upon initiation, and does not change channels at any time during the cluster's existence. Each channel has an equal probability 1/Nc of being chosen by a cluster. Collision avoidance is simply done by a Listen Before Talk (LBT) mechanism, and no control channel mechanisms are necessary.
To allow intercluster communication, the control channel would have to be reintroduced, as there is no other mechanism for contacting other clusters. Therefore, this protocol would only be useful for competing clusters. This protocol is useful as a "baseline" to determine the relative performance of other protocols.
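Under static allocation, two neighbouring clusters co-occupy the same channel with probability 1/Nc, which a short simulation confirms (a sketch; the channel count and trial count are arbitrary):

```python
import random

def static_allocation(n_channels, rng):
    """Pick a channel once, uniformly at random, and keep it forever."""
    return rng.randrange(n_channels)

# Estimate the probability that two independent neighbouring clusters
# statically pick the same channel out of Nc = 20.
n_channels, trials = 20, 100_000
rng = random.Random(1)
hits = sum(
    static_allocation(n_channels, rng) == static_allocation(n_channels, rng)
    for _ in range(trials)
)
print(hits / trials)  # close to 1/20 = 0.05
```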
5.3.2 Active Channel Avoidance
If clusters are allowed to communicate with each other, much of the collision problem may be solved by sharing allocation information. In order for the communication to be useful, it must supply timely information to the clusters, but at the same time it must not impose too heavy a processing burden on them. The clusters' task is to choose transmission channels which are not currently being incompatibly used by neighbouring clusters.
Because the frequency band of the control channel is adjacent to the frequency band of the data channels, their propagation characteristics are assumed to be similar. Therefore, the propagation range of the control and data channels will be similar, and clusters which are possible interferers with each other on a data channel will be able to hear most of the others' transmissions on the control channel. It is assumed that any cluster which cannot hear this activity on the control channel is unlikely to experience or cause interference from the other clusters.
Once the cluster is aware of the channel choices of the surrounding clusters, it may change its own channel to minimize possible interference by choosing an unoccupied one. This new channel number will then be broadcast over the control channel. This allows each cluster to determine the local state of the medium, without the burden of tight control.
An effective tool for analyzing the ACA algorithm is the interference graph (Figure 5.5). Each vertex in the graph will have associated with it a current channel number. Any other vertex connected to the original vertex with the same channel number is a possible interferer.

While competing clusters only seek to limit interference, cooperative clusters must be able to communicate within their network as well as avoid interference from clusters both inside the network and outside. In order to allow communication between cooperating clusters, Access Points on neighbouring cooperative clusters should be able to allocate a channel for their use. Each cluster may then use several channels: one data channel for its internal use (an intracluster channel), and other data channels to communicate with its neighbours in its network (intercluster channels). A cluster may only use one of its channels at a time.
To improve efficiency, further restrictions must be placed on the channel selections. Since the cooperating in-neighbours of a network now will use the same channel as the network, the network's channel choice is restricted to those most usable by both the network and its cooperating in-neighbours.
A cluster must avoid the intracluster channels assigned to its in-neighbours. As well, a cluster must avoid the intercluster channels allocated to the neighbouring APs that do not communicate with its own APs. Since different out-neighbours of the same cluster may hear a different subset of the cluster's APs, the channel allocation for multicluster networks is based on graph edges, and not on vertices. A cluster i may use a channel for intracluster transmission if:

- the channel is not an intracluster channel of any of its in-neighbours, and
- the channel is not an intercluster channel of any of its in-neighbours, except if that channel is used to communicate with cluster i.
A cluster i, in order to communicate with cluster j, may choose a channel for intercluster transmission using the same rules, as well as using any channel used only by cluster j. The extra channels are allowed since cluster j will not be using them if it is communicating with cluster i, so any conflict will be avoided.

If all channels are prohibited by the above rules, the channel assignment may be made randomly. Any out-neighbours of the cluster must then reevaluate their channel allocations to attempt to resolve the conflict, or cope with channel sharing on that channel.
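The intracluster eligibility rules above can be sketched as a predicate over the allocation state. The dictionaries below are a hypothetical representation; the thesis does not prescribe a data structure:

```python
def can_use_intracluster(channel, in_neighbours, intra, inter, me):
    """ACA intracluster rule for cluster `me`: the channel must not be an
    intracluster channel of any in-neighbour, and must not be an
    intercluster channel of any in-neighbour, except when that channel
    is the one used to communicate with `me`.

    intra: dict cluster -> its intracluster channel
    inter: dict cluster -> {peer cluster: intercluster channel}
    """
    for n in in_neighbours:
        if intra.get(n) == channel:
            return False  # rule 1: in-neighbour's intracluster channel
        for peer, ch in inter.get(n, {}).items():
            if ch == channel and peer != me:
                return False  # rule 2: in-neighbour's AP channel to someone else
    return True

# Example: in-neighbour B uses channel 3 internally, channel 5 to talk
# to us (cluster A), and channel 7 to talk to cluster C.
intra = {"B": 3}
inter = {"B": {"A": 5, "C": 7}}
print(can_use_intracluster(3, {"B"}, intra, inter, "A"))  # False
print(can_use_intracluster(5, {"B"}, intra, inter, "A"))  # True (exception clause)
print(can_use_intracluster(7, {"B"}, intra, inter, "A"))  # False
print(can_use_intracluster(4, {"B"}, intra, inter, "A"))  # True (unallocated)
```

If no channel passes the predicate, the cluster falls back to a random assignment, as described above.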
5.4 MAC Channel Allocation Support
In order for channel allocation strategies to work, the Medium Access Control (MAC) layer must be able to respond to current channel allocations in its environment, notify other clusters of its allocations, and change its currently allocated channel or channels. These functions are made difficult by the inability to guarantee that all a cluster's stations can hear all transmissions from its neighbours, and all neighbours can hear a given station's transmissions. Intra- and intercluster signalling is introduced to support ACA.
In order for these protocols to work, it must be assumed that a broadcast facility exists in each cluster. This allows a station to notify every station in the cluster of an event. This is a lenient assumption, as broadcasting is either the default or an option on most wireless network protocols. The cluster must be able to determine whether or not it is idle, or at least whether it has been idle for more than a certain period of time. As well, the cluster must be able to generate a cluster ID number that is unique as far as it can determine.
Channel allocation support involves several tasks:

Channel Allocation Notification: The cluster must inform its neighbours of its current channel allocations.

Interference Notification: The cluster must inform its stations of channel allocations of other clusters. This must be propagated through the cluster, as one station may be able to hear channel allocation messages from a neighbouring cluster while another, more distant station in the same cluster may not.

Channel Test/Not Clear To Send Protocol: If performance is poor, or a new data channel is used, the cluster must ensure that no interference is detected from its neighbours.

Begin Cluster Burst: Any station must be able to initiate a cluster burst if the cluster is otherwise idle.

Change Channels: The cluster must be able to change channels if interference conditions dictate.

Inter-Cluster Communication: An AP must be able to communicate with the APs of its neighbouring clusters in the same network.
The channel allocation support protocol consists of basic allocation tables at each station, AP allocation tables for each AP, channel beacons, and a set of messages to transmit internal information. All channel allocation signalling takes place on a control channel common to the entire environment. Each station monitors the control channel, until one station wishes to begin data transmission. The data transmissions occur on one or more of the cluster's assigned data channels.
Multiple access on the control channels is done via Idle Sense Multiple Access, where messages are transmitted when the transmitter senses the channel to be idle. However, the Hidden Terminal problem, as well as simultaneous transmission attempts by different stations, may cause collisions between messages. The allocation support protocol must then be able to cope with these undetected collisions. This is done by allowing for redundant message transmission and simple transmission sequences.
The basic allocation table simply indicates whether interference exists on each data channel as perceived by that cluster. Each basic allocation table in the cluster should be identical. AP allocation tables indicate whether interference from other than the AP pair's cluster exists. Each AP will have a unique AP allocation table. The basic allocation table will have to be transmitted from station to station, but table corruptions should not be able to cause serious problems in the network. Corruption may be reduced by suitable error coding, while robust update protocols can reduce the effect of the corruption.
In order to notify neighbours of a cluster's current channel allocation, each station must transmit Channel Beacon messages (CHBCN) at regular intervals. This beacon allows other stations in other clusters and other networks to determine the state of the transmission channels in the environment. Upon initialization, a cluster will monitor the control channel for beacons for a length of time first to determine channel states. During normal operation, the cluster will monitor the control channel for beacons and determine whether there has been any change in channel allocation.
The beacon consists of the cluster ID number, plus a jamming interval. The jamming interval consists of a transmission of a certain length and no particular symbol pattern, surrounded by two guard bands of zero transmit energy. The channel number is determined by the length of the jamming interval, such that a channel number is uniquely represented by a certain jamming interval length. The jamming-interval method allows other networks which do not use the same modulation method to read the transmitted message; since different networks may not share AP pairs, origin and destination information is not needed. The cluster ID field allows other members of the cluster to recognize the transmission as its own and not interpret it as a neighbour's transmission, as well as providing origin information for the channel allocation rules. The jamming interval should be set significantly shorter than the length of control channel messages, to avoid confusion between jamming intervals and control channel messages using different modulation.
Beacons from Access Points to announce intercluster channels are generated from the AP only. Since only the AP may use that channel for that purpose, any cluster outside the AP's transmission range is not affected by this allocation. The cluster ID of the AP's pair is included in the beacon message, so that the AP pair's cluster may ignore the allocation.
The CHBCN messages must be transmitted frequently enough that all stations have accurate state information, but the beacon traffic must not overwhelm the control channel. A CHBCN message is transmitted by each station upon initialization or channel switching. The CHBCN is also transmitted by each station at intervals of TBCN.

If beacons from an occupied channel are not heard for a suitably long period of time, it should be assumed that the channel is now idle and the allocation may be dropped from the allocation table. This timeout is not expected to be a critical parameter in most installations, as long as it is significantly longer than the channel change reaction time (explained in Section 5.5.2). However, extremely long timeouts would reduce the number of channels available for transmission and reduce the effectiveness of ACA. Such a parameter should be decided upon as a part of the system design process.
5.4.2 Interference Notification and Channel Change
When a station receives a CHBCN from another cluster, it may need to inform its own cluster of the neighbour's allocation. It cannot assume that all stations in its network have received the message, so it broadcasts an Interference Information message (INTINFO). The INTINFO message carries an updated copy of the basic allocation table. When a foreign CHBCN message is received for a previously idle channel, an INTINFO message is triggered (Figure 5.6). As well, if a station stops receiving a certain foreign CHBCN for a suitably long period of time, it may assume that the relevant channel is now idle and broadcast an INTINFO message to that effect. Of course, if another station is still receiving that CHBCN, another INTINFO message may be broadcast to reinstate the interference state of the channel.
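The beacon-driven table update of Figure 5.6 can be sketched as follows. The table representation (channel number mapped to an occupied flag) is an assumption:

```python
def on_foreign_chbcn(table, channel):
    """Handle a CHBCN heard from another cluster.

    Returns the INTINFO payload (an updated copy of the basic allocation
    table) if the channel was previously idle and a broadcast is needed,
    or None if the channel was already marked occupied.
    """
    if not table.get(channel, False):
        table[channel] = True       # mark interference on this channel
        return dict(table)          # broadcast an updated copy as INTINFO
    return None                     # no state change, no broadcast

table = {1: False, 2: False}
print(on_foreign_chbcn(table, 2))  # {1: False, 2: True} -> broadcast INTINFO
print(on_foreign_chbcn(table, 2))  # None -> already marked, stay silent
```

Broadcasting only on a state change keeps redundant INTINFO traffic off the control channel while still propagating the neighbour's allocation through the cluster.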
When a station determines that a better channel exists than the current allocation, it may issue a Channel Change (CHANCHG) message. The CHANCHG message includes both the cluster ID and a copy of the current basic allocation table. The copy of the basic allocation table is included to prevent stations with different basic allocation tables (through corrupted INTINFO messages) from constantly transmitting CHANCHG messages to contradict each other. New beacons issued from the cluster will then carry the new channel assignment.
5.4.3 Intracluster Transmission
A station must be able to initiate a cluster burst when it has data to transmit and the cluster is otherwise idle. As well, an interference detection mechanism is introduced to test the channel for interference from other clusters.
When a station wishes to transmit, it broadcasts a Begin Cluster Burst (BEGBURST) message on the control channel. The stations may then transmit on the data channel until the channel is idle for Ttimeout. When the cluster has determined itself to be idle, each station reverts to the control channel.
When a new data channel is used or performance is poor, a station may initiate a
Figure 5.6: Channel beacon operation. Station A2 broadcasts a Channel Beacon indicating operation on channel number y. This is received by station B2, which updates its basic allocation table to indicate interference on channel y, and broadcasts an Interference Information message on cluster B. This INTINFO message is received by station B1, which updates its basic allocation table. The symbol x indicates the channel is occupied.
Figure 5.7: Channel testing. The test consists of the broadcast message CHTST, the test interval Ttest, and the timeout interval Ttimeout.
test of the data channel to determine whether excessive interference exists on the channel. The station broadcasts a Channel Test (CHTST) on the control channel. All stations in the cluster listen to the relevant data channel for a sounding interval Ttest, then revert to the control channel. If any station experiences interference during the sounding interval, that station broadcasts a Channel Not Clear to Send (CHCTS) on the control channel after the sounding period. The CHCTS is a jamming interval longer than a CHBCN message. Any station in the cluster which did not experience interference will thereby be made aware of the interference. If no station broadcasts CHCTS before a short timeout interval, the channel is considered clear (Figure 5.7). Channel testing is considered to be a management function, and the rules for initiation of a CHTST message are not defined in this chapter. Generally, a station should initiate a channel test whenever a new data channel is accessed or if performance on a channel is poor. If interference from other clusters is noticed, then a channel change should be attempted. If it is impossible to change channels due to traffic, a channel test should be done before each new cluster burst in order to avoid collisions.
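The outcome of a CHTST round reduces to a simple any/all decision, sketched below (the list-of-reports representation is an assumption; in practice each report is a CHCTS jamming interval, or silence, on the control channel):

```python
def channel_test(station_reports):
    """Outcome of one CHTST round.

    Each element of station_reports is True if that station heard
    interference on the data channel during the sounding interval Ttest
    (and so would broadcast CHCTS). The channel is clear only if the
    timeout passes with no CHCTS from any station.
    """
    return not any(station_reports)

print(channel_test([False, False, False]))  # True: no CHCTS, clear to send
print(channel_test([False, True, False]))   # False: one station sends CHCTS
```

This captures why the protocol uses a "not clear" message rather than a "clear" one: silence from every station is the only state that means the whole cluster is safe to transmit.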
5.4.4 Intercluster Communication
To support multi-cluster networks, an Access Point of one cluster must be able to communicate with the AP of another cluster. The initiating AP must be able to signal the receiving AP to accept a transmission, as well as delaying any new intracluster transmissions until after the AP to AP communication. The process may be referred to as forwarding, since transmissions are forwarded from one cluster to another. The data transmission between the clusters must be of cluster burst format, to comply with the FCC regulations.
Figure 5.8: Forwarding Request-to-Send/Clear-to-Send protocol between Access Points of two cooperating clusters.
Forwarding is accomplished using Forwarding Request-to-Send (FRTS), Forwarding Clear-to-Send (FCTS), and Forwarding Done (FDONE) messages. The initiating AP sends a FRTS message to the receiving AP, including the number of the data channel on which to transmit. The other stations in the transmitter's cluster may not initiate transmissions and enter a waiting-only state. If the receiving AP is free, the AP transmits a FCTS message to the initiating AP (Figure 5.8). The other stations in the receiver's cluster then enter a waiting-only state. The two APs then switch to a data channel and begin transmission. If the receiving AP is unable to respond to the FRTS, due to an ongoing transmission or message collision, the initiating AP may retransmit the FRTS message after a short timeout period, or give up and broadcast an FDONE message. After the intercluster transmission is complete, the APs revert to the control channel and issue FDONE messages. New transmissions may then be initiated.
The FRTS/FCTS protocol notifies the other stations in the initiating and receiving clusters that the forwarding is taking place, and prevents intracluster transmissions from being initiated. The FDONE message allows the clusters to know when the transmission is complete. However, a station may miss an FDONE message through collisions. To prevent the station from remaining in listen-only mode, a station in listen-only mode will time out and return to its normal state if no FDONE message is received.
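The timeout rule that guarantees a return to the home state can be sketched as a small state machine; this is a sketch only, and the state names and the fdone_timeout parameter are illustrative, not taken from the protocol specification:

```python
from enum import Enum

class State(Enum):
    HOME = "listening on control channel, free to transmit"
    WAIT_ONLY = "deferring while AP-to-AP forwarding is in progress"

class Station:
    def __init__(self, fdone_timeout: float):
        self.state = State.HOME
        self.fdone_timeout = fdone_timeout
        self.wait_started = None

    def hear_frts_or_fcts(self, now: float):
        # Overhearing FRTS or FCTS puts a non-participating station
        # into the waiting-only state.
        self.state = State.WAIT_ONLY
        self.wait_started = now

    def hear_fdone(self):
        self.state = State.HOME
        self.wait_started = None

    def tick(self, now: float):
        # If the FDONE was missed (e.g. through a collision), time out
        # back to the home state, so HOME is always reachable.
        if (self.state is State.WAIT_ONLY
                and now - self.wait_started >= self.fdone_timeout):
            self.hear_fdone()
```

Because every non-home state carries a timeout back to HOME, a lost FDONE can delay, but never permanently block, a station's return to normal operation; this is the signalling-correctness property used in Section 5.5.2.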
5.5 Analysis
The intent of Active Channel Avoidance is to allow clusters to avoid transmitting on channels which are in use by neighbouring clusters. This situation is called co-occupation. Co-occupation means that two or more clusters are sharing a communication channel, and therefore have lower throughput than if each cluster was the sole transmitter on its channel in its neighbourhood.

The most useful metrics for ACA performance are co-occupation probability and co-occupancy time. Co-occupation probability is the likelihood of a cluster co-occupying a channel after a change in the network topology. The co-occupancy time is the expected length of time a cluster will remain in the co-occupied state before the ACA protocol can react to the situation. For comparison, the simple static allocation case is analyzed.

If clusters co-occupy a channel, the situation is either one of ALOHA (if channel testing is not done before new cluster bursts) or analogous to Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA) (if channel sensing is done), and should be analyzed as such. This situation would depend heavily on the type of MAC used by the clusters and is outside the scope of this chapter.
5.5.1 Static Allocation
The static allocation strategy, as mentioned in Section 5.3.1, is the simplest strategy. The channels are randomly determined during startup and the only mechanism for collision avoidance is a Listen Before Talk (LBT) protocol.

If a cluster i is in an environment where n_i' channels are in use, the number of interferers on cluster i's chosen channel is binomially distributed. Each adjacent allocation on the interference graph occupies the same channel as cluster i with
Figure 5.9: Probability of co-occupation on a channel versus the number of channel allocations, for the static allocation algorithm. N_c = 15.
a probability of 1/N_c, so the probability of k other allocations on the same channel as i is

    P(k interferers) = (n_i' choose k) (1/N_c)^k (1 - 1/N_c)^(n_i' - k),    (5.1)

and the probability of at least one interferer is

    P(co-occupation) = 1 - (1 - 1/N_c)^(n_i').    (5.2)

The probability of co-occupation on the chosen data channel from (5.2) is shown in Figure 5.9.
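Equations (5.1) and (5.2) can be evaluated directly; a minimal sketch (the function names are ours):

```python
from math import comb

def p_k_interferers(k: int, n_adj: int, n_channels: int) -> float:
    """Eq. (5.1): probability that k of the n' adjacent allocations land
    on cluster i's channel, each choosing it independently with
    probability 1/Nc."""
    p = 1.0 / n_channels
    return comb(n_adj, k) * p ** k * (1.0 - p) ** (n_adj - k)

def p_cooccupation_static(n_adj: int, n_channels: int) -> float:
    """Eq. (5.2): P(at least one interferer) = 1 - P(0 interferers)."""
    return 1.0 - (1.0 - 1.0 / n_channels) ** n_adj
```

For N_c = 15 and a single adjacent allocation this gives 1/15, rising quickly with the number of allocations (cf. Figure 5.9).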
5.5.2 Active Channel Avoidance
The performance of ACA depends on the ability of the cluster to choose an idle channel upon initialization, and the length of time it requires to sense co-occupation and then change channels. The protocols should be correct, so that a station or cluster cannot enter a state or set of states that cannot lead back to normal operation.

To determine the responsiveness of the cluster to beacon messages, the fraction of time the cluster spends on the control channel for a given level of activity must be analyzed. This requires an estimate of the cluster burst length distribution.
ACA Correctness
The correctness problem may be broken down into a signalling correctness problem and a channel selection correctness problem. The signalling correctness problem deals with the inter- and intra-cluster messages, while the channel selection correctness problem deals with the actual choice of the intracluster channel.

Signalling correctness is an issue in two interactions: the CHTST/CHCTS signalling (Section 5.4.3) and the FRTS/FCTS signalling (Section 5.4.4). In the CHTST/CHCTS signalling, the "home" state for a station is listening on the control channel. In the FRTS/FCTS signalling, the home state for a station is listening on the control channel and free to initiate transmissions. Both states are always accessible due to station timeout periods: if a message is missed, the station will time out to the home state. This ensures signalling correctness.
Channel selection correctness is more complex. The protocol is incorrect if the channel selection algorithm can cause unbounded-length oscillations in intracluster channel selection with a nonzero probability. This may only occur in interference graph loops, where a channel change may eventually propagate back to the originating cluster. As well, the cluster must be forced into the same state deterministically, so that the process will infinitely repeat. If data channels are released immediately when made idle, such a situation may occur for cycles of odd-numbered hops when only two channels are free in the vicinity (Figure 5.10).

However, channels are not released immediately, since the channel must be idle for a significant time (Section 5.4.1). Therefore, the allocation tables in Figure 5.10 will fill after one cycle, causing the clusters to choose randomly among the allocated channels (Section 5.4.2), breaking the loop.
Cluster Burst Distribution
When the cluster is idle, all stations are tuned to the control channel. If a station wishes to transmit a packet, it issues a BEGBURST message. The station's packet is then transmitted according to the cluster's MAC protocol. If other stations wish to transmit a packet during channel setup, they must defer transmission until the setup is complete. Any other packets may be transmitted according to the cluster's MAC protocol, until the channel is idle for an interval of T_timeout, when stations revert to the control channel. The
Legend: unallocated channel; current data channel; occupied channel; sequence number.

Figure 5.10: Cycle where the ACA channel allocation algorithm would not converge to a solution if channels were released immediately. Allocations would alternate in each cluster between the two remaining channels.
channel is determined to be idle if no packets from the cluster are initiated. For simplicity, it is assumed that collision avoidance is successful, and no interference occurs during the burst. The statistics of the cluster burst will naturally depend on the cluster MAC. As an analytically tractable example, we may model the MAC as a scheduler which has full knowledge of all station queue states. Therefore, the channel will always be occupied when there is a packet to transmit in the cluster. The MAC can be modelled as a single transmit queue, whose service order is not necessarily First-Come First-Served. If the packet arrival rate within the cluster is Poisson with mean λ, and the packet length is of arbitrary distribution with mean l, the length of the cluster burst can be analyzed as an M/G/1 queueing system.
The "queuen is initially idle, as the cluster is assurned to have no baddog except
for the initiating packet. A packet is queued at time t = O (the initiating packet). Assuming
the BEGBURST and channel change require an interval of Tsetupl the system is then ide
for t E [O, Tm,,). The burst lasts until the queue is ide for At 2 Ttimmut. For tractability,
it is assumed that no other packet arrives during t E [O, T*tup), and the cluster burst length
is not constrained.
From M/G/1 queueing theory, the average busy interval is l/(1 - λl). The probability that an idle period exceeds T_timeout is P_timeout = e^(-λ T_timeout). The expected cluster burst length E(T_burst) follows from these quantities. Graphs of E(T_burst) for different values of loading λl and T_timeout are given in Figure 5.11.
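The two building blocks of the burst-length estimate can be computed directly; a sketch under the M/G/1 assumptions above (the function names are ours):

```python
from math import exp

def mean_busy_period(lam: float, mean_pkt_len: float) -> float:
    """M/G/1 mean busy period l/(1 - λl); requires loading λl < 1."""
    rho = lam * mean_pkt_len
    if rho >= 1.0:
        raise ValueError("queue is unstable for loading >= 1")
    return mean_pkt_len / (1.0 - rho)

def p_timeout(lam: float, t_timeout: float) -> float:
    """Probability that an exponential idle gap exceeds T_timeout."""
    return exp(-lam * t_timeout)
```

As the loading λl approaches 1, the busy period (and hence the burst) grows without bound, while a larger T_timeout shrinks P_timeout and strings more busy periods into one burst.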
Since the exponential arrival process is memoryless, the expected idle period between bursts is simply 1/λ. Therefore, the fraction of the cluster's time spent on the control channel is:

    p_control = (1/λ) / (E(T_burst) + 1/λ) = 1 / (λ E(T_burst) + 1).

This result applies to renewal processes other than Poisson arrival processes, and is therefore more general. Graphs of the control channel occupancy for different values of loading and timeout
Figure 5.11: Cluster burst length versus loading λl. T_setup = l = 1, and T_timeout is varied from 0.1l to 2l.
are given in Figure 5.12.
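The control channel fraction follows directly from the expected burst length; a minimal sketch, taking E(T_burst) as an input rather than deriving it from the M/G/1 model:

```python
def control_channel_fraction(lam: float, mean_burst: float) -> float:
    """p_control = (1/λ) / (E(T_burst) + 1/λ) = 1 / (λ·E(T_burst) + 1)."""
    return 1.0 / (lam * mean_burst + 1.0)
```

As the loading grows, bursts lengthen and p_control falls, making the cluster slower to hear beacons from foreign clusters.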
Whether a long burst is desirable from an efficiency perspective, or a short burst is desirable for control message responsiveness, depends on the expected service scenarios. The burst length calculation assumes that the burst was not delayed by channel interference or forwarded traffic. If the burst is delayed, the initial packet backlog must be modelled.
Collision Avoidance Performance
From the above analysis, it is possible to estimate co-occupancy time given several network parameters. It is assumed that no cluster has more than N_c channels in use in its neighbourhood, so that each cluster is able to find free channels. The case of an extremely congested environment, where a cluster may have more than N_c occupied channels in its neighbourhood, is not considered for this analysis.

If the environment is in equilibrium, such that no channel changes are required or initiated by any cluster, the most likely stimuli for channel changes are new cluster initialization or mobility. If a new cluster is initialized, it will listen to the common control channel for a fixed length of time T_init for beacon messages, then choose unoccupied data channels for transmission. The cluster will choose an occupied channel only if that cluster's
Figure 5.12: Control channel occupancy p_control versus loading λl. T_setup = l = 1, and T_timeout is varied from 0.1l to 2l.
beacon messages are all missed during T_init. Environment changes due to mobility will require the affected clusters to react to new channel beacons.
The intracluster channel beacons are generated by each station in a cluster. Therefore, a cluster is likely to receive beacons from multiple stations in a foreign cluster. If a cluster i can receive transmissions from m_j stations in cluster j, then the cluster beacon interval is defined as:

    T'_BCN = T_BCN / m_j.    (5.7)

Intercluster channel beacons are generated only by the APs, and therefore, for intercluster beacons,

    T'_intercluster BCN = T_intercluster BCN.    (5.8)
The probability of missing a given cluster's beacon messages generated at an average rate 1/T'_BCN for a time T_init is p_miss = e^(-T_init/T'_BCN), and the probability that l channels are mistakenly deemed idle is

    P(l missed) = (n_i' choose l) p_miss^l (1 - p_miss)^(n_i' - l).    (5.9)

If the cluster's beacons are missed, the new cluster must then choose the occupied channel to transmit upon for interference to occur.
The nature of the channel allocation distribution of the in-neighbours depends on the connectedness of the interference graph. If the in-neighbours are all unaware of each other's existence, the channel allocation distribution is Bose-Einstein [44], as in the static allocation case. However, if the in-neighbours form a fully-connected subgraph, each in-neighbour occupies its own channel and the channel allocation distribution is Fermi-Dirac. Realistic situations will likely fall between these two cases. However, the Fermi-Dirac case is the more pessimistic one, and may be used for a pessimistic performance evaluation.
In the Fermi-Dirac case, each in-neighbour will occupy a different data channel. If we assume that there are n_i' channels in use in the neighbourhood, and l channel allocations have been missed, the cluster believes N_c - n_i' + l channels are free, of which l are actually occupied; the probability of choosing an occupied channel is therefore l/(N_c - n_i' + l). The total co-occupation probability given n_i' in-neighbours and Fermi-Dirac channel statistics is:

    P(co-occupation) = Σ_{l=1}^{n_i'} (n_i' choose l) p_miss^l (1 - p_miss)^(n_i' - l) · l/(N_c - n_i' + l).    (5.10)
Graphs of this function for different ratios of T_init/T'_BCN are given in Figure 5.13. Increasing this ratio greatly decreases the risk of co-occupation. The ACA protocol offers an improvement in interferer avoidance of several orders of magnitude over static allocation (Figure 5.9).
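The pessimistic Fermi-Dirac co-occupation probability of (5.10) can be evaluated as follows; a sketch of the reconstructed formula (function and parameter names are ours):

```python
from math import comb, exp

def p_cooccupation_aca(n_adj: int, n_channels: int,
                       t_init: float, t_bcn: float) -> float:
    """Eq. (5.10): sum over l missed in-neighbour allocations of
    C(n', l) p^l (1-p)^(n'-l) * l / (Nc - n' + l),
    with p_miss = e^{-T_init / T'_BCN} from (5.9)."""
    p_miss = exp(-t_init / t_bcn)
    total = 0.0
    for l in range(1, n_adj + 1):
        p_l = comb(n_adj, l) * p_miss ** l * (1.0 - p_miss) ** (n_adj - l)
        total += p_l * l / (n_channels - n_adj + l)
    return total
```

Lengthening the listening interval T_init relative to T'_BCN drives p_miss, and hence the co-occupation probability, down exponentially, which is the orders-of-magnitude gain over static allocation.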
If a co-occupied channel condition is created, either through initialization or mobility, the affected clusters may react and change channels through the CHANCHG mechanism. The delay between the co-occupation and the interference avoidance process depends on how quickly the clusters can receive a channel beacon from an interfering cluster. Since a cluster may only receive beacons when it is listening to the control channel, and the beacons from the interfering cluster are generated at an expected interval T'_BCN, the expected co-occupation time is:

    E(T_coocc) = T'_BCN / p_control.    (5.11)
Assuming T'_BCN is independent of cluster load, T_coocc is shown in Figure 5.14. Realistically, the average co-occupancy time will be less, since all clusters should become aware of the problem, so that the actual T_coocc is the minimum calculated T_coocc over all the affected clusters.
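Combining (5.11) with the control channel fraction gives a simple estimate; a sketch (names are ours), taking the network-wide co-occupancy time as the minimum over the affected clusters:

```python
def expected_cooccupancy_time(t_bcn: float, lam: float,
                              mean_burst: float) -> float:
    """Eq. (5.11): E(T_coocc) = T'_BCN / p_control, with
    p_control = 1 / (λ·E(T_burst) + 1)."""
    p_control = 1.0 / (lam * mean_burst + 1.0)
    return t_bcn / p_control

def network_cooccupancy_time(per_cluster_times: list[float]) -> float:
    """The actual T_coocc is the minimum over all affected clusters."""
    return min(per_cluster_times)
```

An idle cluster (p_control = 1) reacts within about one beacon interval; a heavily loaded one spends most of its time on data channels and the reaction time grows in proportion to λ·E(T_burst) + 1.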
Figure 5.13: Probability of co-occupation on the chosen channel versus the number of channel allocations, given a new cluster. The ratio is the fraction T_init/T'_BCN. N_c = 15.
Figure 5.14: Co-occupancy time for different cluster load levels and timeout periods. The co-occupancy time is in multiples of T'_BCN.
5.6 Conclusions
The SUPERNet proposal consists of a small set of regulations for wireless, ad hoc multimedia LANs. These LANs are expected to operate in an environment where data channels may be shared with other SUPERNets. One problem encountered in these networks is channel allocation, such that each individual network can operate as efficiently as possible.

The use of Active Channel Avoidance offers orders-of-magnitude improvement over naive channel allocation in terms of co-occupation avoidance. Under moderate cluster densities and cluster loading, the ACA-related protocol overhead does not have a substantial impact on network efficiency, and co-occupied channels are rare and exist for short periods. The results were obtained using simple, analytically tractable models. To accurately model network performance, estimates must be obtained for the expected number of adjacent clusters n_i' and the number of received stations per cluster m_j, as well as accurate models for the MAC performance; this must be done through teletraffic and service scenario modelling [45].

The major problem associated with unlicensed ad-hoc networks such as SUPERNet is the allocation of stations to clusters. Cluster membership may change due to mobility, and the cluster division and amalgamation problems are far from trivial. The clusters should be small enough that full connectivity in the cluster is assured, but large enough that the channel allocation problem is not overwhelming. As well, routing between Access Points of cooperating clusters in a dynamic environment must be investigated. The effect of ACA on each of the MAC protocols used in a SUPERNet environment must be investigated.
Chapter 6
Conclusions
The body of this thesis proposed three solutions to multimedia service transport over a wireless LAN channel. The solutions differed based on the constraints under which the system was placed:

DFQ: This system is a centralized, TDMA, narrowband system using polling to provide Medium Access Control. The system provides fine control over QoS and demonstrably supports ATM and ISLP service classes well. The system will perform best where polling systems generally perform best: in a small system with demanding QoS requirements.

Hybrid TDMA/CDMA: This system is a centralized, hybrid TDMA/CDMA, spread spectrum system using controlled simultaneous transmission. In theory, the hybrid should provide efficient use of the given bandwidth; however, it is expected that a simple TDMA algorithm to schedule transmissions will limit the ability of the system to handle demanding time-related QoS requirements.

SUPERNet ACA: This algorithm allows the efficient sharing of unlicensed spectrum between related and unrelated groups of stations. The ability of SUPERNet devices to support ATM and IP services is, of course, highly dependent on the underlying MAC algorithms used by the devices; however, the nature of the unlicensed medium limits QoS contracts to statistical guarantees and not deterministic guarantees. It is expected that users will accept QoS degradations to gain the mobility and flexibility advantages of an unlicensed environment.
Appendix A
Service Share Algorithm Pseudocode
A.1 GPS Algorithm
This algorithm calculates the service shares φ_i for each delay-sensitive service as explained in Section 2.2. The inputs to the algorithm are (σ_i, ρ_i, D_i) for each DS service i. The useful outputs are φ_i and the service set for each DS service i.
e_0 ← 0;  v_0 ← 0;  m_1 ← 1;  k ← 1
all services → unresolved set
begin
  estimate_shares:
    for each i ∈ unresolved set:
      φ_i ← σ_i / (v_{k-1} + m_k (D_i - e_{k-1}))
    for each i ∈ unstable set:
      φ_i ← ρ_i / m_k
  estimate_finishing_times:
    for each i ∉ allocated set:
      if φ_i < ρ_i / m_k:
        φ_i ← ρ_i / m_k;  e_i* ← ∞
      else:
        e_i* ← (σ_i + φ_i (m_k e_{k-1} - v_{k-1})) / (φ_i m_k - ρ_i)
  find_next_finishing_time:
    e_k ← min over i ∉ allocated set of e_i*
    service(s) arg min e_i* → allocated set
    v_k ← v_{k-1} + m_k (e_k - e_{k-1})
  reallocate_sets:
    for each i ∉ allocated set:
      if σ_i / φ_i < v_k:
        if e_i* = ∞: service i → unstable set
        else: service i → resolved set
      else: service i → unresolved set
  k ← k + 1
while unresolved set ≠ {} OR (resolved set ≠ {} AND unstable set ≠ {})
A.2 DFQ Algorithm
This algorithm calculates the service shares φ_i for each delay-sensitive service as explained in Section 2.2 for a DFQ system. The inputs to the algorithm are (σ_i, ρ_i, D_i) for each DS service i. The useful outputs are φ_i and the service set for each DS service i. Usage information for each service is assumed to be determined by a function u(i, σ_i, ρ_i, φ_i).
e_0 ← 0;  v_0 ← 0;  m_1 ← 1;  k ← 1
all services → unresolved set
begin
  estimate_shares:
    for each i ∈ unresolved set:
      φ_i ← σ_i / (v_{k-1} + m_k (D_i - e_{k-1}))
    for each i ∈ unstable set:
      φ_i ← (ρ_i (e_{k-1} - D_i) + σ_i) / v_{k-1}
  estimate_finishing_times:
    for each i ∉ allocated set:
      if φ_i < ρ_i / m_k:
        φ_i ← ρ_i / m_k;  e_i* ← ∞
      else:
        e_i* ← (σ_i + φ_i (m_k e_{k-1} - v_{k-1})) / (φ_i m_k - ρ_i)
  find_next_finishing_time:
    e_k ← min over i ∉ allocated set of e_i*
    service(s) arg min e_i* → allocated set
    v_k ← v_{k-1} + m_k (e_k - e_{k-1})
  reallocate_sets:
    for each i ∉ allocated set:
      if σ_i / φ_i < v_k:
        if e_i* = ∞: service i → unstable set
        else: service i → resolved set
      else: service i → unresolved set
    for each i ∈ allocated set:
      update usage via u(i, σ_i, ρ_i, φ_i)
  k ← k + 1
while unresolved set ≠ {} OR (resolved set ≠ {} AND unstable set ≠ {})
References
[1] T. S. Rappaport, Wireless Communications: Principles & Practice. New Jersey: Prentice Hall, 1996.
[2] ISO/IEC 8802-3, Carrier sense multiple access with collision detection (CSMA/CD) access method and physical layer specifications. ISO/IEC, 4th ed., 1993.
[3] ITU/CCITT, H.261: Video CODEC for Audiovisual Services at p × 64 kbits/s. ITU/CCITT, 1990.
[4] D. Bertsekas and R. Gallager, Data Networks. Prentice Hall, 1991.
[5] D. Minoli and G. Dabrowski, Principles of Signaling for Cell Relay and Frame Relay. Boston: Artech House, 1995.
[6] IEEE 802.11-93/190, DFWMAC: Distributed Foundation Wireless Medium Access Control. IEEE, Nov. 1993.
[7] M. Karol, Z. Liu, and K. Eng, "Distributed-queueing request update multiple access (DQRUMA) for wireless packet (ATM) networks," in ICC '95, pp. 1224-1231, 1995.
[8] D. D. Falconer and G. M. Stamatelos, "Wireless access to broad-band services through microcellular indoor systems," Canadian Journal of Electrical and Computer Engineering, vol. 19, no. 1, pp. 7-12, 1994.
[9] D. Raychaudhuri, "Wireless ATM networks: Architecture, system design, and prototyping," IEEE Personal Communications Magazine, vol. 3, pp. 42-49, Aug. 1996.
[10] N. D. Wilson, R. Ganesh, K. Joseph, and D. Raychaudhuri, "Packet CDMA versus dynamic TDMA for multiple access in an integrated voice/data PCN," IEEE Journal on Selected Areas in Communications, vol. 11, Aug. 1993.
[11] E. Ayanoglu, K. Eng, and M. Karol, "Wireless ATM: Limits, challenges, and proposals," IEEE Personal Communications Magazine, vol. 3, pp. 18-33, Aug. 1996.
[12] M. de Prycker, Asynchronous Transfer Mode: Solution for Broadband ISDN. Toronto: Ellis Horwood, 1993.
[13] W. Stallings, "IPv6: The new Internet protocol," IEEE Communications Magazine, vol. 34, pp. 96-108, July 1996.
[14] C. Huitema, IPv6: The New Internet Protocol. New Jersey: Prentice Hall, 1996.
[15] P. White, "RSVP and integrated services in the Internet: A tutorial," IEEE Communications Magazine, vol. 35, pp. 100-106, May 1997.
[16] R. Braden, D. Clark, and S. Shenker, RFC 1633: Integrated Services in the Internet Architecture: An Overview. IETF, July 1994.
[17] S. Shenker, C. Partridge, and R. Guerin, Specification of Guaranteed Quality of Service. IETF, Aug. 1996.
[18] F. Baker, R. Guerin, and D. Kandlur, Specification of Committed Rate Quality of Service. IETF, June 1996.
[19] J. Wroclawski, Specification of the Controlled-Load Network Element Service. IETF, Nov. 1996.
[20] H. Schulzrinne, S. Casner, R. Frederick, and V. Jacobson, RTP: A Transport Protocol for Real-Time Applications. IETF, Mar. 1995.
[21] J. Nagle, "On packet switches with infinite storage," IEEE Transactions on Communications, vol. COM-35, pp. 435-438, Apr. 1987.
[22] A. Demers, S. Keshav, and S. Shenker, "Analysis and simulation of a fair queueing algorithm," in Proceedings of SIGCOMM '89, pp. 1-12, 1989.
[23] S. J. Golestani, "A self-clocked fair queueing scheme for broadband applications," in Proceedings of INFOCOM '94, pp. 636-646, 1994.
[24] A. K. Parekh and R. Gallager, "A generalized processor sharing approach to flow control in integrated services networks: The single-node case," IEEE/ACM Transactions on Networking, vol. 1, pp. 344-357, June 1993.
[25] S. J. Golestani, "Network delay analysis of a class of fair queueing algorithms," IEEE Journal on Selected Areas in Communications, vol. 13, pp. 1057-1070, Aug. 1995.
[26] Z. Zhang, D. Towsley, and J. Kurose, "Statistical analysis of the generalized processor sharing scheduling discipline," IEEE Journal on Selected Areas in Communications, vol. 13, pp. 1071-1080, Aug. 1995.
[27] ATM Forum, User Network Interface Specification 3.1. ATM Forum, 1995.
[28] D. F. Bantz and F. J. Bauchot, "Wireless LAN design alternatives," IEEE Network, March/April 1994.
[29] ATM Forum/AF-tm-0056-000, Traffic Management Specification version 4.0. ATM Forum, 1996.
[30] R. Keh and K. C. Chua, "QoS issues in interconnected wireless and wireline ATM networks," tech. rep., Centre for Wireless Communication, University of Singapore, 1995.
[31] R. Keh, "DFQ capsule structure." Private communication, National University of Singapore, 1997.
[32] G. Woodruff and R. Kositpaiboon, "Multimedia traffic management principles for guaranteed ATM network performance," IEEE Journal on Selected Areas in Communications, vol. 8, pp. 437-446, Apr. 1990.
[33] W. Baumberger and H. Kaufmann, "Monolithic integration of a spread-spectrum transmitter and receiver front end for wireless LAN applications," in Mobile Communications: Advanced Systems and Components (C. G. Günther, ed.), pp. 501-509, Springer-Verlag, 1994.
[34] M. Zorzi, "Simplified forward-link power control law in cellular CDMA," IEEE Transactions on Vehicular Technology, vol. 43, pp. 1088-1093, Nov. 1994.
[35] R. V. Nettleton and H. Alavi, "Power control for a spread spectrum cellular mobile radio system," in IEEE Vehicular Technology Conference, (Toronto), pp. 242-246, 1983.
[36] A. Sampath, P. S. Kumar, and J. Holtzman, "Power control and resource management for a multimedia CDMA wireless system," in PIMRC '95, (Toronto), pp. 21-25, 1995.
[37] L. Yun and D. Messerschmitt, "Power control for variable QOS on a CDMA channel," in MILCOM '94, pp. 178-182, 1994.
[38] Federal Communications Commission, FCC 97-005, Amendment of the Commission's Rules to Provide for Operation of Unlicensed NII Devices in the 5 GHz Frequency Range. FCC, 1997.
[39] Federal Communications Commission, FCC 96-199, Notice of Proposed Rule Making, NII/SUPERNet at 5 GHz. FCC, 1996.
[40] J. Chuang, "Performance issues and algorithms for dynamic channel assignment," IEEE Journal on Selected Areas in Communications, vol. 11, pp. 955-962, Aug. 1993.
[41] P. Agrawal and B. Narendran, "A network flow framework for on-line dynamic channel allocation," in GLOBECOM '96, pp. 229-234, 1996.
[42] J. Chuang, "Autonomous adaptive frequency assignment for TDMA portable radio systems," IEEE Transactions on Vehicular Technology, pp. 627-635, Aug. 1991.
[43] A. Leon-Garcia, Probability and Random Processes for Electrical Engineering. Toronto: Addison Wesley, 1994.
[44] E. A. Hylleraas, Mathematical and Theoretical Physics, vol. 1. Toronto: Wiley-Interscience, 1970.
[45] B. Jabbari, "Teletraffic aspects of evolving and next-generation wireless communication networks," IEEE Personal Communications, vol. 3, pp. 4-9, Dec. 1996.