TITLE PAGE
COMPARISON OF ASYNCHRONOUS TRANSFER MODE (ATM)
NETWORK CELL ROUTING ALGORITHMS
BY
NWACHI-IKPOR, JULIANA O.
PG/M.ENG/10/57771
DEPARTMENT OF ELECTRONIC ENGINEERING
FACULTY OF ENGINEERING
UNIVERSITY OF NIGERIA NSUKKA
DECEMBER, 2015.
APPROVAL PAGE
COMPARISON OF ASYNCHRONOUS TRANSFER MODE (ATM) NETWORK
CELL ROUTING ALGORITHMS
BY
NWACHI-IKPOR, JULIANA O.
(PG/M.ENG/10/57771)
A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE
AWARD OF MASTER OF ELECTRONIC ENGINEERING (TELECOMMUNICATION
OPTION) IN THE DEPARTMENT OF ELECTRONIC ENGINEERING, UNIVERSITY OF
NIGERIA, NSUKKA.
____________________________ ______________________
NWACHI-IKPOR, JULIANA O. DATE
(STUDENT)
____________________________ ______________________
PROF. C.I. ANI DATE
(SUPERVISOR)
__________________________ _____________________
EXTERNAL EXAMINER DATE
___________________________ _____________________
DR. M.A AHANEKU DATE
(HEAD OF DEPARTMENT)
___________________________ _____________________
PROF. E.S. OBE DATE
(CHAIRMAN, FACULTY POSTGRADUATE COMMITTEE)
CERTIFICATION
This is to certify that NWACHI-IKPOR, JULIANA O., a postgraduate student of the
Department of Electronic Engineering with registration number PG/M.ENG/10/57771, has
satisfactorily completed the requirements for the award of Master of Engineering
(M.ENG) in Electronic Engineering.
____________________________ ____________________________
PROF. C.I. ANI                                   DR. M.A. AHANEKU
(SUPERVISOR)                                     (HEAD OF DEPARTMENT)
___________________________
PROF. E.S. OBE
(CHAIRMAN, FACULTY OF ENGINEERING POSTGRADUATE COMMITTEE)
DECLARATION
I, NWACHI-IKPOR, JULIANA O, a postgraduate student of the Department of
Electronic Engineering, University of Nigeria, Nsukka declare that the work embodied in
this thesis is original and has not been submitted by me in part or full for any other
diploma or degree of this or any other university.
____________________________ ______________________
NWACHI-IKPOR, JULIANA O DATE
DEDICATION
This work is dedicated to God Almighty for His infinite mercy and guidance towards me
throughout this program.
ACKNOWLEDGEMENT
I am grateful to God Almighty for his sustenance, guidance and protection throughout the
period of my training.
I also wish to express my profound indebtedness to my supervisor, Prof. C.I. Ani, who
supervised me at all stages of the work. His achievement and attitude have inspired me in
many ways and have led me to new ideas for solving the problems I have faced. He never
denied me his fatherly advice and guidance which helped me to succeed in this work.
I will remain grateful to my lecturers and the entire staff of the Department of Electronic
Engineering, University of Nigeria, Nsukka, most especially, Prof. C. I. Ani, Prof. O.U
Okparaku, Prof. C.C. Osuwagu, Prof. A.N Nzeako, Dr. M.A.Ahaneku and Dr. I.Oge.
Their advice and constructive criticisms contributed in no mean measure to the
successful completion of my programme.
There are many people who have made this dissertation possible; without them it would
have been impossible for me to finish it. Though it is difficult to name them without
omitting someone, I am happy to acknowledge them here, and I apologize if I have
omitted anyone. My endless thanks go to my husband, Sir Paul Ikpor Nwachi, whose
constant love, care, encouragement and understanding guided me throughout the period
of my study, and to my children and in-laws. Your love, financial and moral support
throughout this study were the engine that kept me going.
I would also wish to express my sincere appreciation to all the members of my class,
Victor, George, Iyke, Essein, and my little friend and daughter Ify. To my Boss, Mr. E.U.
Ezeorah, and colleagues, a very big thanks to you all. Without your generous help,
continuous encouragement and moral support, this work could not have been completed.
ABSTRACT
The demand for telecommunication services is increasing rapidly. The Asynchronous Transfer
Mode (ATM) network, as a connection-oriented technology with fixed-size cells, reduces the
complexity of networks and improves the flexibility of traffic performance. The ATM network
achieves this simplicity by the introduction of virtual path (VP) concept which helps to simplify
traffic control and resource management, by bundling several virtual channels (VCs) together
that have a common path, thus decreasing the number of entities to be managed. This work
presents cell routing in VP-based ATM network. Network routing has to do with forwarding of
calls from one end to another while determining feasible paths from each source to destination.
The routing techniques can be implemented both in connection-oriented network and
connectionless networks. However, proper choice of routing algorithm is difficult because its
performance depends on the type of network. Two routing algorithms were investigated and
analyzed. These routing algorithms are: Deterministic Reservation Least Loaded Routing
Technique (LLR_D) and Deterministic Reservation Least Loaded Routing Algorithm with
Deterministic VP capacity sharing (LLR_VP). These algorithms were presented using flowcharts
and simulated in MATLAB environment. The quality of service (QoS) parameters such as delay,
utilization, and loss were compared with traffic intensity. A typical ATM model was developed
and these two algorithms were deployed and analyzed on this network. From the results
obtained, it is seen that the LLR_VP routing algorithm has the least delay, experiences the least
loss and better utilizes the network.
TABLE OF CONTENTS
Title Page…………………………………………………………………………………………..i
Approval Page………………………………………………………...…………………………..ii
Certification……………………………………………………………..………………………..iii
Dedication……………………………………………...…………………...………………...…..iv
Acknowledgement………………………………………………………..…………………….…v
Abstract………………………………………………………………..……………………….....vi
Table of Contents…………………………………………………...………………………...….vii
List of Figures……………………………………………………...…………………………....viii
List of Tables………………………………………………………….……………………….....ix
List of Abbreviations………………………………………………….………………………….x
CHAPTER ONE: INTRODUCTION
1.0 Background of Study -----------------------------------------------------------------------1
1.1 Problem statement --------------------------------------------------------------------------3
1.2 Objectives of the Research-----------------------------------------------------------------3
1.3 Scope of the Research----------------------------------------------------------------------4
1.4 Research Methodology --------------------------------------------------------------------4
1.5 Organization of the work ------------------------------------------------------------------4
CHAPTER TWO: LITERATURE REVIEW
2.0. Introduction ----------------------------------------------------------------------------------5
2.1 Brief overview on principles and operations of ATM Network----------------------5
2.2 Asynchronous Transfer Mode (ATM)----------------------------------------------------6
2.3 BISDN/ATM Protocol Architecture------------------------------------------------------9
2.3.1 ATM Adaptation Layer-------------------------------------------------------------------11
2.3.2 ATM Layer --------------------------------------------------------------------------------12
2.3.3 ATM Physical Layer----------------------------------------------------------------------12
2.4 Cell Network ------------------------------------------------------------------------------ 13
2.4.1 Structure of an ATM cell---------------------------------------------------------------- 14
2.5 ATM Network Traffic ------------------------------------------------------------------- 16
2.5.1 Network Traffic parameters -------------------------------------------------------------16
2.5.2 Traffic Flow Control ----------------------------------------------------------------------16
2.6 Quality of Service Parameters -----------------------------------------------------------17
2.7 ATM Connection Setup-------------------------------------------------------------------19
2.7.1 Virtual Connection ------------------------------------------------------------------------20
2.8 Statistical Multiplexing-------------------------------------------------------------------21
2.9 Connection Admission Control----------------------------------------------------------22
2.10 Traffic Model ------------------------------------------------------------------------------22
2.11 Fluid Flow Model ------------------------------------------------------------------------23
2.12 Virtual Path Concept ---------------------------------------------------------------------23
2.13 General Overview of Network Routing------------------------------------------------27
2.13.1 Routing Metrics --------------------------------------------------------------------------29
2.13.2 Types of Routing Schemes--------------------------------------------------------------30
2.13.2.1 Static Routing -----------------------------------------------------------------------------30
2.13.2.2 Dynamic Routing--------------------------------------------------------------------------30
2.13.2.2.1 Types of Dynamic Routing---------------------------------------------------------------32
2.14 Routing in ATM Network----------------------------------------------------------------35
2.15 Related Works------------------------------------------------------------------------------36
2.16 Conclusion ---------------------------------------------------------------------------------39
CHAPTER THREE: MODELING
3.0 Introduction --------------------------------------------------------------------------------40
3.1 The Network Architecture----------------------------------------------------------------41
3.2 The Network Model-----------------------------------------------------------------------42
3.3 Routing Algorithms for Comparison----------------------------------------------------43
3.3.1 Deterministic Reservation Least Loaded Routing Technique (LLR_D)-----------43
3.3.2 Deterministic Reservation Least Loaded Routing Algorithm with Deterministic
Virtual Path Capacity Sharing (LLR_VP)----------------------------------------------46
3.4 Conclusion ---------------------------------------------------------------------------------46
CHAPTER FOUR: RESULT AND ANALYSIS
4.0 Introduction --------------------------------------------------------------------------------47
4.1 Simulation Model Validation ------------------------------------------------------------51
4.2 Model Simulation -------------------------------------------------------------------------52
4.3 Simulation Result -------------------------------------------------------------------------53
4.3.1 Cell Loss Rate against Traffic Intensity For LLR_D And LLR_VP Algorithms for
the Entire Network -----------------------------------------------------------------------53
4.3.2 Server Utilization against Traffic Intensity for LLR_D and LLR_VP Algorithm
for the Entire Network.--------------------------------------------------------------------55
4.3.3 Cell delay against Traffic Intensity for LLR_D and LLR_VP Algorithms for the
Entire Network ---------------------------------------------------------------------------56
4.3.4 VP Utilization for LLR_D and LLR_VP Algorithms --------------------------------58
CHAPTER FIVE: SUMMARY, CONCLUSION AND RECOMMENDATION
5.0 Introduction---------------------------------------------------------------------------------60
5.1 Conclusion ---------------------------------------------------------------------------------60
5.2 Recommendation--------------------------------------------------------------------------60
5.3 Contributions to Knowledge ------------------------------------------------------------60
REFERENCES
ACRONYMS
ACM Access Control Machine
ATM Asynchronous Transfer Mode
ABR Available Bit Rate
BISDN Broadband Integrated Services Digital Network
BRA Basic Rate Access
CAC Call Admission Control
CBR Constant Bit Rate
CCITT International Telegraph and Telephone Consultative Committee
CLP Cell Loss Priority
CMT Connection Management Mechanism
CP Complete Partitioning
C-PLAN Control Plane
CRC Cyclic Redundancy Check
CS Complete Sharing
CSMA/CD Carrier Sensing Multiple Access/ Collision Detection
DA Destination Address
FDDI Fiber Distributed Data Interface
FDM Frequency Division Multiplexing
FXS Foreign Exchange Station
GFC Generic Flow Control
GM Guaranteed Minimum
HDLC High Level Data Link Control
IDN Integrated Digital Network
ISDN Integrated Services Digital Network
ISP Internet Service Provider
LLR_D Deterministic Reservation Least Loaded Routing Technique
LLR_VP Deterministic Reservation Least Loaded Routing Algorithm
With Deterministic Virtual Path Capacity Sharing
PDU Protocol Data Unit
PHY Physical Layer Protocol
PLCP Physical Layer Convergence Protocol
PRA Primary Rate Access
PRM Protocol Reference Model
PVC Permanent Virtual Channel
QOS Quality of Service
SDM Space Division Multiplexing
SMDS Switched Multimegabit Data Service
SMT Station Management
TDM Time Division Multiplexing
TDS Time Division Switching
TR Trunk Reservation
UBR Unspecified Bit Rate
UNI User Network Interface
UL Upper Limit
U-PLAN User Plane
VBR Variable Bit Rate
VC Virtual Channel
VP Virtual Path
VPI Virtual Path Identifier
LIST OF FIGURES
Figure 2.1: ATM Network Interface -------------------------------------------------------------------7
Figure 2.2: ATM Physical Architecture Interface --------------------------------------------------- 8
Figure 2.3: BISDN Reference Model ------------------------------------------------------------------9
Figure 2.4: ATM Protocol Architecture ---------------------------------------------------------10
Figure 2.5: ATM Cell Structure -----------------------------------------------------------------------14
Figure 2.6: ATM Connection --------------------------------------------------------------------------20
Figure 2.7: Virtual Path Network ---------------------------------------------------------------------24
Figure 2.8: VP Borrowing model ---------------------------------------------------------------------25
Figure 2.9: Flow chart of operation of an LLR -----------------------------------------------------36
Figure 3.1: An ATM network architecture ----------------------------------------------------------41
Figure 3.2: Network model ----------------------------------------------------------------------------42
Figure 3.3: Deterministic Reservation LLR Routing Technique (LLR_D) ---------------------44
Figure 3.4: Deterministic Reservation LLR Algorithm with Deterministic VP Capacity
Sharing --------------------------------------------------------------------------------------45
Figure 4.1: Block diagram of an ATM Based Network -------------------------------------------47
Figure 4.2: MATLAB Simulink Simevent Model for an ATM Based Network ---------------48
Figure 4.3: Heterogeneous Traffic source module -------------------------------------------------49
Figure 4.4: Traffic pattern from the different sources ----------------------------------------------49
Figure 4.5: Transmission Facility Module (VP&VC) ----------------------------------------------50
Figure 4.6: Cell loss computation module -----------------------------------------------------------50
Figure 4.7: Blocking probability against traffic intensity for video-related model and enterprise-wide
network traffic model for a trunk capacity of 15Mbps ----------------------------------------51
Figure 4.8: Cell Loss Rate against Traffic Intensity for LLR_D and LLR_VP algorithms for
the entire network. ------------------------------------------------------------------------53
Figure 4.9: Server Utilization against Traffic Intensity for LLR_D and LLR_VP Algorithms
for the Entire Network. ------------------------------------------------------------------54
Figure 4.10: Cell delay against Traffic Intensity for LLR_D and LLR_VP Algorithms for the
Entire Network. ---------------------------------------------------------------------------56
Figure 4.11: VP Utilization for LLR_D and LLR_VP Algorithms --------------------------------57
LIST OF TABLES
Table 1: ATM Architectural Diagram -----------------------------------------------------------10
CHAPTER ONE
INTRODUCTION
1.0 BACKGROUND OF STUDY
The rapid increase in the demand for telecommunication services has brought many technological
advancements. Several new high-speed technologies are available today, such as Fiber
Distributed Data Interface (FDDI), Integrated Services Digital Network (ISDN), Asynchronous
Transfer Mode (ATM) and Digital Subscriber Line (DSL) [1, 2]. These networks have the
capability of transmitting information at high speed, and offer a wide range of Quality of
Service (QoS) properties. These advancements have spurred the users of the network to demand
remote data access, web services, and great computing capabilities regardless of user location and
mobility in use [3, 1]. As a result, a higher-quality and more versatile communication infrastructure
with more bandwidth, capable of handling such demands, is needed.
Broadband offers a new brand of services in which data, voice and video, commonly known as
multimedia, are delivered together as one packet. It is often referred to as “high-speed” access to
the internet because of its high rate of data transmission [4]. Broadband Integrated Services
Digital Network (B-ISDN) is a standard network that provides a wide range of services. ATM,
being begotten from B-ISDN, is an appropriate network capable of offering such high-grade
services. It plays an important role in modern communication technology because of
its ability to handle high bandwidth and low delay, its packet-like switching and multiplexing
technique, and its support for quality of service (QoS) guarantees.
ATM is considered to reduce the complexity of the network and improve the flexibility of traffic
performance [4]. Data in an ATM network is sent out in fixed-size packets called cells; each
ATM cell consists of 53 bytes. Out of these 53 bytes, 5 bytes are reserved for the header
field, which contains information used to route cells from source to destination through the fixed
path set up during the connection phase, and 48 bytes are reserved for the data field. ATM integrates
the multiplexing and switching functions and allows communication between devices that operate at
different speeds by allowing different traffic types with varied traffic characteristics
and different QoS requirements to co-exist within Virtual Path (VP) subnetworks in the ATM
network [5].
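The 53-byte cell layout described above can be sketched in code. This is an illustrative sketch only (not code from the thesis): it packs the 5-byte UNI header and 48-byte payload, using the standard UNI field widths of GFC (4 bits), VPI (8), VCI (16), PT (3), CLP (1) and HEC (8); the HEC checksum itself is left as zero for simplicity.

```python
# Illustrative sketch, not from the thesis: building a 53-byte ATM cell at the UNI.
def pack_uni_header(gfc, vpi, vci, pt=0, clp=0):
    # First 4 header bytes as one 32-bit big-endian word; byte 5 is the HEC.
    word = (gfc << 28) | (vpi << 20) | (vci << 4) | (pt << 1) | clp
    return word.to_bytes(4, "big") + b"\x00"  # HEC left as 0 in this sketch

def make_cell(vpi, vci, payload):
    assert len(payload) == 48, "ATM cell payload must be exactly 48 bytes"
    return pack_uni_header(gfc=0, vpi=vpi, vci=vci) + payload

cell = make_cell(vpi=5, vci=33, payload=bytes(48))
assert len(cell) == 53  # 5-byte header + 48-byte data field
```

The fixed 53-byte size is what lets ATM hardware switch and multiplex cells with minimal delay variation, as the text notes.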
ATM, as a connection-oriented technique with fixed-size cells, establishes a fixed
channel between source and destination nodes, and appropriate resources, e.g. bandwidth and
buffers, are reserved before data transfer begins. This means that a virtual connection over a
virtual path has to be set up between two end-points across the ATM network prior to the transfer
of any data. The network consists of nodes (switches) interconnected by point-to-point or
point-to-multipoint links and supports services with different traffic characteristics and quality
of service (QoS) requirements. Virtual circuit systems ensure that packets sent are received in
their correct chronological order, but require a route to be established through the network before
transmission of data takes place. The virtual connection is identified by the combination of a
virtual path identifier (VPI) and a virtual channel identifier (VCI). The values of the VPI/VCI
have local significance on a given link, and these values are part of the ATM cell header.
Based on this, ATM cells are switched from one link to another. A standard has been adopted by
the two bodies that support ATM technology, the ITU and the ATM Forum, which standardized the
routing and signaling protocols for establishing point-to-point connections. Routing is the
process of computing the route to be used for a connection, that is, selecting the path in a
network along which to send network traffic [1]. To compute efficient routes, the routing
protocol must provide a method of gathering and maintaining topology information. The topology
information comprises state information concerning the links and nodes in the network, and it is
essential to the computation of efficient routes. An efficient route leads to better utilization
of the network resources. Since the route selected for a new connection may remain in use for a
long period of time, the consequences of an inefficient routing decision affect the connection
for as long as that connection remains active. Therefore, it is imperative that path selection
be done carefully.
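The link-local significance of the VPI described above means each switch rewrites it from a translation table set up at connection time. The following is a hypothetical sketch of forwarding at a pure VP switch (the table entries are invented for illustration); the VCI passes through unchanged because only the path, not the channel, is switched.

```python
# Hypothetical VP-switch forwarding table: (input port, incoming VPI) maps to
# (output port, outgoing VPI). Entries here are illustrative, not from the thesis.
vp_table = {
    (1, 5): (2, 9),   # cells arriving on port 1 with VPI 5 leave port 2 as VPI 9
    (1, 6): (3, 4),
}

def switch_cell(in_port, vpi, vci):
    out_port, out_vpi = vp_table[(in_port, vpi)]
    return out_port, out_vpi, vci  # VCI unchanged within the virtual path

assert switch_cell(1, 5, 33) == (2, 9, 33)
```

Bundling many VCs under one VP entry is exactly what reduces the number of managed entities, as the VP concept intends.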
When setting up a call, the user of an ATM network is allowed to specify the quality of service
(QoS) and the bandwidth parameter values to be guaranteed for that call. To set up a new
connection, the source end-system must send a connection request into the ATM network across
its User-to-Network Interface (UNI). The request will include the destination address, traffic
parameters, QoS requirements and other essential information for the network to find the path from
source to destination. The request is propagated through the network, setting up the connection as
it moves, until it reaches the destination end-system. Call establishment consists of two
operations: the selection of the route (path), and the setup of the connection state at each point
along the route. In selecting the route, the chosen route must appear to support the QoS and
bandwidth request based on the currently available information about the network. Routing
protocols do not specify any single required algorithm for route selection. The call processing at
each node along the route confirms that the resources requested are available; if not, crankback
occurs, which causes a new route, if any, to be computed. Therefore, the final outcome is either
the establishment of a route satisfying the request or total denial of the call.
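The setup-with-crankback behaviour just described can be sketched as follows, under simplifying assumptions (the candidate routes and link bandwidths are hypothetical, and real crankback is hop-by-hop rather than centralized): try candidate routes in order, admit the call on the first route whose links all have enough free bandwidth, otherwise fall back to the next candidate.

```python
# Sketch under simplifying assumptions, not the thesis's algorithm.
def select_route(candidates, free_bw, demand):
    for route in candidates:             # route = list of link identifiers
        if all(free_bw[link] >= demand for link in route):
            for link in route:
                free_bw[link] -= demand  # reserve capacity along the chosen route
            return route
    return None                          # total denial of the call

free_bw = {"A-B": 10, "B-C": 3, "A-C": 8}
route = select_route([["A-B", "B-C"], ["A-C"]], free_bw, demand=5)
assert route == ["A-C"]                  # first candidate "cranked back" at link B-C
```

Either a route satisfying the request is reserved, or `None` is returned, mirroring the two possible outcomes stated above.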
This work is mainly based on the route selection option. As stated earlier, routing is the process
of selecting paths in a network along which to send network traffic, and it is performed in many
networks, such as the telephone network (circuit switching), electronic data networks (the
internet), and transportation networks [6].
1.1 PROBLEM STATEMENT
In an ATM network, it is expected that information sent from a source node to a destination node
may follow any feasible path, and path selection in network routing must be fully specified. In
practice this does not always work, since in a dynamic environment problems are encountered when
routing, due to fluctuations in traffic load, link failures and topology changes. The Virtual Path
(VP) concept is implemented to allow the management of virtual circuits (VCs), reducing the
control cost of connections sharing common paths through the network into a single entity and
also simplifying the network architecture.
1.2 OBJECTIVES OF THE RESEARCH
The aims of this study include:
1. To compare two cell routing algorithms in ATM networks, namely: Deterministic
Reservation Least Loaded Routing Technique (LLR_D) and Deterministic Reservation
Least Loaded Routing Algorithm with Deterministic Virtual Path Capacity Sharing
(LLR_VP);
2. To determine whether the bandwidth reserved on a VP affects the QoS requirements,
namely cell delay, cell loss rate and network utilization;
3. To minimize the cell loss rate and cell delay while still maintaining high throughput in a
VP-based ATM network.
1.3 SCOPE OF THE RESEARCH
This study is limited to two cell routing algorithm in ATM network. These algorithms will be
investigated with the following set of Quality of Service (QoS) parameters in view: cell delay,
cell Loss/blocking rate and network utilization. A typical ATM network will be modeled using
MATLAB Simulink, and the two set of algorithms for investigation implemented on it.
Simulation results generated will be analyzed using Microsoft Excel. t.
1.4 RESEARCH METHODOLOGY
To realize the objectives of this work, the following methodology was adopted:
i. Review of The ATM network architecture and implementation,
ii. Review of some cell routing algorithm in ATM network.
iii. Compare three routing algorithms from the review
iv. Develop flowcharts and models for implementing the proposed schemes
v. Simulate the model and obtain data
vi. Analysis in terms of performance metrics
1.5 ORGANISATION OF THE WORK
This work is further organized as follows: Chapter Two presents an overview of the ATM network
and a review of the literature. Chapter Three presents the models and simulations of
the cell routing algorithms. Chapter Four presents the results. In Chapter Five, conclusions are
drawn and recommendations made.
CHAPTER TWO
LITERATURE REVIEW
2.0 INTRODUCTION
In Chapter One, an introduction to the research topic, the aims of the research, the scope of the
work, its significance and the research method employed were discussed. This chapter explores
ATM principles and operations, the BISDN/ATM protocol architecture, and the structure of an
ATM cell.
2.1 BRIEF OVERVIEW ON PRINCIPLES AND OPERATIONS OF ATM NETWORK
Several network applications require higher bandwidth and generate a heterogeneous mix of
network traffic. The ATM network has the capability of efficiently supporting a diversity of
traffic with various service requirements, such as voice, video and data, in one transmission and
switching fabric technology. It promises to provide greater integration of capabilities and
services, more flexible access to the network, and more efficient and economical service.
The ATM network employs small, fixed-length packets called cells to ensure that the switching and
multiplexing functions can be carried out quickly, easily, and with the least delay variation, and
also to support delay-intolerant interactive voice services.
ATM is a connection-oriented technology in the sense that before two terminals on the network
can communicate, they should inform all intermediate switches about their service requirements
and traffic parameters. In ATM networks, each connection is called a virtual circuit or virtual
channel (VC), because it allows the capacity of each link to be shared by connections using that
link on a demand basis rather than by fixed allocations. The connections allow the network to
guarantee the quality of service (QoS) by limiting the number of VCs. Typically, a user declares
key service requirements at the time of connection setup, declares the traffic parameters, and
may agree to control these parameters dynamically as demanded by the network.
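The admission idea above, guaranteeing QoS by limiting the number and declared rates of VCs on a link, can be illustrated with a toy connection admission check. This is a simplifying peak-rate-allocation assumption for illustration, not the thesis's CAC scheme:

```python
# Toy peak-rate admission check (an illustrative assumption, not the thesis's CAC):
# a new VC is admitted only if the declared rates of all VCs on the link,
# including the new one, fit within the link capacity.
def admit(declared_rates, new_rate, capacity):
    return sum(declared_rates) + new_rate <= capacity

link_vcs = [20.0, 30.0]                  # Mbps declared by existing VCs on the link
assert admit(link_vcs, 40.0, capacity=100.0) is True
assert admit(link_vcs, 60.0, capacity=100.0) is False
```

Real ATM CAC uses statistical multiplexing gains rather than strict peak-rate sums, which is why limiting VCs this way is conservative.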
ATM was intended to provide a single unified networking standard that could support both
synchronous and asynchronous technologies and services, while offering multiple levels of
quality of service for packet traffic [7].
ATM networks use negotiated service connections: as a connection-oriented cell network, ATM
provides services based on a negotiated connection contract that satisfies the QoS
requirements. It is, on the whole, simpler to provide QoS in ATM systems than in connectionless
IP networks [8]. Quality of Service (QoS) in ATM networks is provided by specifying the
performance requirements for the requested logical connections along with the amount of
bandwidth needed to meet the pre-specified execution level, and by directing a Connection
Admission Control (CAC) to verify that the performance of the current connections is not
degraded by adding new connections. This contrasts with traditional LANs, which broadcast data
across the network without knowledge of how the path is established or where the end user is
physically connected, resulting in large network-management overheads [9].
2.2 ASYNCHRONOUS TRANSFER MODE (ATM)
An overview of the ATM network was given in Chapter One. This chapter concentrates on the
network architecture and management, and on the routing protocols and routing algorithms of the
network. The ATM network is reviewed as the technology that presently exists, according to the
physical architecture illustrated in figure 2.1 below.
Figure 2.1: ATM Network Interface [6]
The ATM architecture interface is a combination of hardware and software that can provide either
an end-to-end network or form a high-speed backbone. The structure of ATM and its software
components comprise the ATM architecture. An ATM backbone is a better option for services
that employ multimedia-type transmission [10]. One of the merits of implementing ATM in
any network is its ability to provide a channel for time-dependent transmission. Deploying ATM
may also result in a more future-proof network. ATM provides more flexibility when scaling up
from smaller to larger configurations. It also allows the creation of virtual LANs [11]. Virtual
networks provide ways of interconnecting all systems at all sites of an organization [12]. Using
ATM as the backbone network simplifies network management by reducing some of the
problems encountered in a complex interworking environment; this backbone characteristic is
one of the reasons that make ATM popular. However, if the components are not properly
selected and correctly configured, ATM can create a more complex networking environment.
As shown in figure 2.1, some network technologies connect directly to the ATM central switch
(backbone switch) while others connect through a local switch (gateway switch). ATM networks
include two types of network architecture: private networks and public networks. A private
network is also known as a Customer Premises network.
Figure 2.2: ATM Physical Architecture Interface [8]
The User-to-Network Interface (UNI): is the interface between the end user and the ATM
network that the user subscribes to.
The Network-to-Network Interface (NNI): is the interface between one ATM switch and
another ATM switch within the same carrier's ATM network.
The Broadband ISDN (BISDN) Inter-Carrier Interface (B-ICI): is the interface between two
public ATM carriers.
The ATM signaling protocol is private or public depending on the type of interface over which
the signaling is carried out. If the UNI is between an end-user terminal and a public ATM
network, the public UNI signaling protocol is used. If the UNI is between an end-user terminal
and the end user's private ATM network, the private UNI signaling protocol is used.
The ATM signaling protocol used between the ATM nodes within the same ATM network is the
private NNI signaling protocol, which is referred to as the Private NNI or PNNI protocol.
Between the ATM nodes in two different public ATM networks, the public NNI protocol is used.
This public NNI signaling protocol is referred to as the B-ICI protocol.
2.3 BISDN/ATM PROTOCOL ARCHITECTURE
This model contains three (3) planes: the Control plane, the Management plane and the User
plane. The model is shown in figure 2.3.
Figure 2.3: BISDN Reference Model [8].
B-ISDN is the network capable of delivering such high-grade services. The three (3) planes have
different functions, which are discussed in detail as follows:
A. The Control Plane: This deals with call-establishment and call-release functions and other
connection-control functions necessary for providing switched services. The Control plane
structure shares the physical and ATM layers with the User plane. It also includes ATM
adaptation layer (AAL) procedures and higher-layer signaling protocols. [13]
B. The Management Plane: This plane provides management functions and has the ability
to exchange information between the User plane and the Control plane. The Management plane
contains two sections: layer management and plane management. The former performs
management functions relating to resources and parameters residing in its protocol entities. The
latter performs management functions related to a system as a whole and provides coordination
between all the planes.
C. The User Plane: This is concerned with the transfer of user data, including flow control and
error recovery. It has three basic layers that together provide support for user applications, as
shown in figure 2.4: the ATM adaptation layer, the ATM layer and the Physical layer [6].
Figure 2.4: ATM Protocol Architecture [8]
Table 1: ATM Architectural Diagram
ATM Adaptation Layer: CS Sublayer, SAR Sublayer
ATM Layer
Physical Layer: TC Sublayer, PM Sublayer
Each layer and sub-layer is described briefly below:
2.3.1 The ATM Adaptation Layer (AAL)
This layer has two (2) sublayers: the Convergence sublayer and the Segmentation and
Reassembly sublayer. It converts the higher-layer service data unit (SDU) into the 48-byte
blocks used inside ATM cells. The user information is converted into sequences of cells that can
be transported by the ATM network. AAL entities on the receiver (destination) side reassemble
and deliver the information in a manner consistent with the requirements of a given application.
AAL entities reside in the terminal equipment and communicate on an end-to-end basis across
the ATM network. This layer enhances the services provided by the ATM layer to the level
required by the next higher layer. It performs functions for the user, control and management
planes and supports the mapping between the ATM layer and the next higher layer. It is the
AAL's function to adapt all the different services needed by higher layers to fit the ATM's
48-byte payload. The AAL offers different services, as follows:
AAL1: A connection-oriented service, suitable for handling circuit-emulation applications,
such as voice and video conferencing. It requires timing synchronization between the source and
destination.
AAL2: It supports variable bit rate services with a timing relation between source and
destination. It is nearly identical to AAL1, except that it transfers service data units at a variable
bit rate, not a constant bit rate.
AAL3/4: It supports both connection-oriented and connectionless data. Compressed video and
frame relay use AAL3/4 to send data over an ATM network.
AAL5: AAL5 provides a way for non-isochronous (not time-dependent), variable bit rate,
connectionless applications to send and receive data. AAL5 was developed to provide a
more efficient transfer of network traffic than AAL3/4. AAL5 merely adds a trailer to the
payload to indicate size and provide error detection. AAL5 is the preferred AAL when sending
connection-oriented or connectionless LAN protocol traffic over an ATM network.
Windows Server 2003 supports AAL5.
AAL5 provides a straightforward framing at the Common Part Convergence Sublayer (CPCS)
that behaves more like LAN technologies, such as Ethernet.
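The AAL5 framing just described, pad the payload, append an 8-byte CPCS trailer (CPCS-UU, CPI, Length, CRC-32), then cut the PDU into 48-byte cell payloads, can be sketched as below. This is a simplified illustration: `zlib.crc32` stands in for the AAL5 CRC-32, whose exact bit conventions are not reproduced here.

```python
import struct
import zlib

CELL_PAYLOAD = 48  # bytes carried in each ATM cell

def aal5_segment(payload: bytes) -> list[bytes]:
    """Frame a packet AAL5-style and split it into 48-byte cell payloads.

    Trailer layout (8 bytes): CPCS-UU (1), CPI (1), Length (2), CRC-32 (4).
    Padding is inserted so the PDU fills a whole number of cells.
    """
    length = len(payload)
    # Pad so that payload + pad + 8-byte trailer fills whole cells.
    pad_len = (-(length + 8)) % CELL_PAYLOAD
    body = payload + b"\x00" * pad_len
    # Stand-in CRC over body + first trailer fields (simplified convention).
    crc = zlib.crc32(body + struct.pack(">BBH", 0, 0, length)) & 0xFFFFFFFF
    pdu = body + struct.pack(">BBHI", 0, 0, length, crc)
    return [pdu[i:i + CELL_PAYLOAD] for i in range(0, len(pdu), CELL_PAYLOAD)]

cells = aal5_segment(b"hello, ATM")
print(len(cells), all(len(c) == CELL_PAYLOAD for c in cells))  # 1 True
```

The receiver reverses the process: it concatenates cell payloads until the end-of-PDU indication, reads the Length field, and strips the padding and trailer.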
2.3.2 The ATM Layer
This layer is solely concerned with the sequenced transfer of ATM cells over a connection set up
across the network. It accepts 48 bytes of user information from the AAL entities and adds a
5-byte header to form the ATM cell. The cell header carries a label that identifies the connection;
switches use it to determine the next hop in the path a cell follows and the type of priority
scheduling the cell will receive. The layer can also provide different QoS to different
connections, abiding by the service contract negotiated between user and network during
connection setup. The following functions are performed by the ATM layer:
• Cell Multiplexing/Demultiplexing: cells belonging to different virtual channels or virtual
paths are multiplexed/demultiplexed onto/from the same cell stream,
• Cell VPI/VCI Translation: the routing function is performed by mapping the virtual
path identifier/virtual channel identifier (VPI/VCI) of each cell received on an input link
onto a new VPI/VCI for the output link, defining where to send the cell,
• Cell Header Generation/Extraction: the header is generated (extracted) when a cell is
received from (delivered to) the AAL,
• Generic Flow Control: flow control information can be coded into the cell header at the
UNI [14].
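The VPI/VCI translation function above amounts to a table lookup and a label rewrite at each switch. A minimal sketch, with illustrative table entries rather than a real configuration:

```python
# Per-switch translation table: (in_port, in_vpi, in_vci) -> (out_port, out_vpi, out_vci).
# Real switches populate this state at connection setup; these values are illustrative.
translation_table = {
    (1, 0, 100): (3, 5, 42),
    (2, 5, 42):  (4, 0, 101),
}

def switch_cell(in_port: int, vpi: int, vci: int):
    """Look up the outgoing link and rewrite the cell's VPI/VCI label."""
    try:
        out_port, out_vpi, out_vci = translation_table[(in_port, vpi, vci)]
    except KeyError:
        return None  # no connection state for this label: cell is discarded
    return out_port, out_vpi, out_vci

print(switch_cell(1, 0, 100))  # (3, 5, 42): forwarded with a new label
print(switch_cell(1, 9, 9))    # None: unknown connection
```

Because the label has only link-local significance, each hop is free to pick any free VPI/VCI on the outgoing link, which is what keeps the identifier fields small.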
2.3.3 The Physical Layer
This layer handles the transmission and reception of ATM cells across a physical medium
between two ATM devices: between an ATM endpoint and an ATM switch, or between two
ATM switches. The physical layer is subdivided into a Physical Medium Dependent (PMD)
sublayer and a Transmission Convergence (TC) sublayer [6, 15]. The physical layer has the
following subsections:
Physical Medium Dependent sublayer: This sublayer is responsible for transmission functions
and is highly dependent on the medium used. Its principal functions are the transmission and
reception of waveforms suitable for the medium, the insertion and extraction of bit timing
information, and line coding (if required). The primitives identified at the border between the
PMD and TC sublayers are a continuous flow of logical bits or symbols with their associated
timing information [15].
The Transmission Convergence sublayer performs the following functions:
• Transmission frame generation and recovery: This function performs the generation and
recovery of transmission frame.
• Transmission frame adaptation: This function performs the actions which are necessary
to structure the cell flow according to the payload structure of the transmission frame
(transmit direction) and to extract this cell flow out of the transmission frame (receive
direction).
• Cell delineation: Cell delineation prepares the cell flow in order to enable the receiving
side to recover cell boundaries according to the self-delineating mechanism. In the
transmit direction, the ATM cell stream is scrambled. In the receive direction, cell
boundaries are identified and confirmed (using the HEC mechanism) and the cell flow is
descrambled.
• HEC sequence generation and cell header verification: In the transmit direction, the HEC
sequence is calculated and inserted in the header. In the receive direction, cell headers are
checked for errors and, where possible, header errors are corrected. Cells whose headers are
determined to be errored and non-correctable are discarded.
• Cell rate decoupling: Cell rate decoupling includes insertion and suppression of idle cells,
in order to adapt the rate of valid ATM cells to the payload capacity of the transmission
system.
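The cell rate decoupling step above can be sketched as a toy transmit-side filler: one cell is emitted per transmission slot, with idle cells inserted whenever no valid cell is waiting. Cell contents are plain strings for illustration:

```python
IDLE_CELL = "IDLE"

def decouple(cell_queue, line_slots):
    """Emit exactly one cell per transmission slot, inserting idle cells
    whenever no valid cell is waiting (transmit side); the receive side
    simply discards them again."""
    out = []
    queue = list(cell_queue)
    for _ in range(line_slots):
        out.append(queue.pop(0) if queue else IDLE_CELL)
    return out

stream = decouple(["A", "B", "C"], line_slots=6)
print(stream)                      # ['A', 'B', 'C', 'IDLE', 'IDLE', 'IDLE']
received = [c for c in stream if c != IDLE_CELL]
print(received)                    # ['A', 'B', 'C']
```

This is what keeps the line rate constant even though the valid cell rate varies: the payload capacity of the transmission system is always filled.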
2.4 CELL NETWORKS
The adoption of cell network seems to solve many problems associated with frame
internetworking. A cell is a small data unit of fixed size. In a cell network, which uses the cell as
the basic unit of data exchange, all data are loaded into identical cells that can be transmitted
with complete predictability and uniformity. As frames of different sizes and formats reach the
cell network from a tributary network, they are split into multiple small data units of equal length
and are loaded into cells. The cells are then multiplexed with other cells and routed through the
cell network. Because each cell is the same size and all are small, the problems associated with
multiplexing different-sized frames are avoided [16].
2.4.1 Structure of an ATM Cell
The unit of transmission, multiplexing and switching in ATM is the fixed-length cell of 53 bytes
[17]. Its fixed length is chosen to simplify the design of electronics in ATM switches and
multiplexers, because hardware manipulation of variable-length packets is more complex than
processing a fixed-length cell. ATM defines two different cell formats:
UNI (User-Network Interface), which interfaces the ATM endpoints and the ATM switches;
NNI (Network-Network Interface), which interfaces two ATM switches [13, 18].
Figure 2.5: ATM Cell Structure [6]
GFC = Generic Flow Control (4 bits) (default: 4 zero bits)
VPI = Virtual Path Identifier (8 bits UNI) or (12 bits NNI)
VCI = Virtual Channel Identifier (16 bits)
PT = Payload Type (3 bits)
CLP = Cell Loss Priority (1 bit)
HEC = Header Error Control (8 bits) (checksum of header only)
Generic Flow Control: This field consists of the first four bits of the first byte of the ATM
header. It is used to control the flow of traffic across the user-to-network interface (UNI) and is
only used at the UNI. When the GFC function is not in use, the value of this field is set to zeros.
This field has local significance only and can be used to provide standardized local flow control
functions for the end users. The value encoded in the GFC field is not carried end-to-end and
will be overwritten by the ATM switches [6].
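The header fields listed under figure 2.5 can be packed into the five UNI header octets as follows, assuming the bit order GFC, VPI, VCI, PT, CLP, HEC given above (the HEC octet is left at zero here):

```python
def pack_uni_header(gfc, vpi, vci, pt, clp, hec=0):
    """Pack the five header octets of a UNI cell:
    GFC(4) | VPI(8) | VCI(16) | PT(3) | CLP(1) | HEC(8)."""
    b0 = (gfc & 0xF) << 4 | (vpi >> 4) & 0xF
    b1 = (vpi & 0xF) << 4 | (vci >> 12) & 0xF
    b2 = (vci >> 4) & 0xFF
    b3 = (vci & 0xF) << 4 | (pt & 0x7) << 1 | clp & 0x1
    return bytes([b0, b1, b2, b3, hec & 0xFF])

def unpack_uni_header(h):
    """Recover the field values from the five header octets."""
    gfc = h[0] >> 4
    vpi = (h[0] & 0xF) << 4 | h[1] >> 4
    vci = (h[1] & 0xF) << 12 | h[2] << 4 | h[3] >> 4
    pt = (h[3] >> 1) & 0x7
    clp = h[3] & 0x1
    return gfc, vpi, vci, pt, clp, h[4]

hdr = pack_uni_header(gfc=0, vpi=5, vci=42, pt=0, clp=1)
print(unpack_uni_header(hdr))  # (0, 5, 42, 0, 1, 0)
```

The NNI format differs only in that the four GFC bits are reclaimed as the top of a 12-bit VPI.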
Payload Type (PT): This field is used to designate various special kinds of cells for Operation
and Management (OAM) purposes and to delineate packet boundaries in some AALs. It
identifies the cell content: whether it is a data cell, an idle cell, an OAM cell, VCC-level OAM
information, Explicit Forward Congestion Indication (EFCI), AAL information or Resource
Management information.
Cell Loss Priority (CLP): The CLP bit indicates the relative priority of cells. It acts as an
indicator of whether or not the cell is expendable should the network start becoming congested:
1 = the cell may be discarded; 0 = the cell should not be discarded.
Header Error Control (HEC): Several of ATM's link protocols use the HEC field to drive a
cell delineation algorithm, which allows the position of ATM cells to be found with no overhead
beyond what is otherwise needed for header protection. It is an 8-bit field that allows an ATM
switch or ATM endpoint to correct a single-bit error or to detect multi-bit errors in the first 4
bytes of the ATM header. Cells with multi-bit header errors are silently discarded. The HEC
only covers the ATM header, not the ATM payload. The HEC must be recomputed at every
switch, since the VPI/VCI value changes at every hop [15].
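The HEC computation can be sketched as below; the generator polynomial x^8 + x^2 + x + 1 and the final XOR with 0x55 follow ITU-T I.432, while the example header bytes are illustrative:

```python
def hec(header4: bytes) -> int:
    """CRC-8 over the first four header octets, generator x^8 + x^2 + x + 1,
    XORed with 0x55 as ITU-T I.432 prescribes."""
    crc = 0
    for byte in header4:
        crc ^= byte
        for _ in range(8):
            crc = ((crc << 1) ^ 0x07 if crc & 0x80 else crc << 1) & 0xFF
    return crc ^ 0x55

# An all-zero header has CRC 0, so only the 0x55 coset offset remains.
print(hex(hec(bytes(4))))  # 0x55

cell_header = bytes([0x00, 0x50, 0x02, 0xA1])
h = hec(cell_header)
# The receiver recomputes the HEC and compares it with the fifth octet.
print(hec(cell_header) == h)  # True
```

For delineation, a receiver slides along the bit stream testing this check at candidate offsets; repeated matches at 53-byte spacing confirm the cell boundary.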
Virtual Path Identifier (VPI): The VPI identifies a path between two locations in an ATM
network that provides transport for a group of virtual channels. It is an 8-bit field at the UNI and
a 12-bit field at the NNI, and it tells each switch along which virtual path the circuit will travel.
A UNI interface supports a maximum of 256 virtual paths, and an NNI 4096 virtual paths [23].
When the endpoint has no data to transmit, the VPI field is set to all zeros to indicate the idle
condition. To permit a larger VPI value to be carried in the cell header, the four GFC bits
become an extension of the VPI field.
Virtual Channel Identifier (VCI): The VCI field is 16 bits long. The VPI/VCI is a local
identifier for a given connection on a given link; its value changes from one switch to another.
This structure supports a large number of connections and provides scalability to very large
networks [6].
2.5 ATM NETWORK TRAFFIC
Network traffic is made up of voice, video and data, whose attributes give network
administrators the opportunity to manipulate the various services freely in terms of connection
acceptance, negotiation of the QoS, congestion control and resource allocation. Therefore, the
feasibility and efficiency of the QoS management architecture are strongly dependent on the
nature of the traffic to be accommodated.
2.5.1 Network Traffic Parameters
In [19], the following parameters may be used to describe network traffic characteristics:
• Cell peak arrival rate when the source is in the active state (peak rate);
• Average cell arrival rate;
• Burstiness (i.e. the ratio between the peak rate and the average rate); and
• Average duration of the active state.
These traffic parameters are used for connection admission control (CAC), usage parameter
control (UPC) and resource allocation. The values of the traffic parameters are negotiated
between the user and the network during the call set-up phase; combined with the traffic
characteristics of the aggregate cell arrival stream in the network, they are used by the
admission control for deciding whether or not a new connection is to be accepted.
In the usage parameter control, the algorithm monitors the user to know whether there is
violation of the traffic characteristic parameters negotiated during the connection establishment
phase. For the resource allocation purposes, the traffic parameters are used by network
administrators as the basis for allocating resources to user demands.
2.5.2 Traffic Flow Control
The network traffic flow has to be controlled in a predictable manner in order to agree with the
allocation of network resources. Flow control is a set of protocols that maintain the flow of
traffic within limits compatible with the amount of available resources. These limits may be
fixed or dynamically adjusted based on traffic status to ensure efficient network operations,
guarantee fairness to a certain degree in resource sharing, and protect the network from
congestion and deadlock [20]. It provides means for regulating the traffic inside the network so
that the behavior of internal traffic is more easily manageable. When user demands are allowed
to exceed network capacity it leads to congestion. The traffic is required to be kept within certain
bounds, such as peak bandwidth, maximal burst, and the network is committed to providing
certain service guarantees, such as maximal delay, loss rate etc [21]. Data flows between sources
and destination are disrupted if one or both resources are lacking anywhere along their network
paths. To ensure the integrity of the traffic, QoS parameters must be met at each point in the
entire network.
2.6 QUALITY OF SERVICE (QOS) PARAMETERS
According to [22], the performance of today's networks is measured by QoS parameters.
Particular traffic and Quality of Service (QoS) parameters are requested in every ATM
application when establishing VCs; this allows an end-user to send a request to the network,
which will in turn verify the set parameters and ensure that the services requested are delivered
by the network with a certain quality. ITU-T defines QoS as the "collective effect of service
performance which determines the degree of satisfaction of a user of the service" [23].
According to [23], the performance of today's networks is measured by QoS parameters
such as:
• Throughput: This is a measure of the capacity of a system to transfer data. There are different
ways to define and measure throughput, including: the cell rate across the network; the cell rate
of a specific application flow; the cell rate of end-to-end aggregated flows; and the cell rate of
network-to-network aggregated flows. The amount of bandwidth allocated to different types of
cell affects throughput.
• Delay (or latency): This is the amount of time that elapses from the moment a cell enters the
source UNI to the moment it exits at the destination UNI. A number of factors contribute to the
delay experienced by a cell as it traverses the network, including propagation delay, processing
delay and queuing delay. The end-to-end delay can be calculated as the sum of the individual
propagation, processing and queuing delays occurring at each multiplexer and switch in the
network.
• Jitter: This is the variation in delay over time experienced by consecutive cells that are part of
the same flow. It is measured using the mean, standard deviation, maximum or minimum of the
inter-cell arrival times for consecutive cells in a given flow. End-to-end jitter is never constant,
because the level of network congestion changes from time to time and from place to place.
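Measuring jitter as described, from the statistics of the inter-cell arrival times, looks like this on illustrative timestamps:

```python
from statistics import mean, stdev

# Arrival timestamps (ms) of consecutive cells in one flow; illustrative values.
arrivals = [0.0, 2.1, 4.0, 6.4, 8.1, 10.5]

# Inter-cell arrival times between consecutive cells.
gaps = [b - a for a, b in zip(arrivals, arrivals[1:])]

print(f"mean gap: {mean(gaps):.2f} ms")
print(f"jitter (std dev of gaps): {stdev(gaps):.2f} ms")
print(f"min/max gap: {min(gaps):.2f}/{max(gaps):.2f} ms")
```

A perfectly paced CBR flow would show a standard deviation of zero; queuing in congested switches is what spreads the gaps out.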
• Cell Loss: This is a situation where cells in a network fail to reach their destination due to a
break in the link, corruption of cells or buffer overflow. The amount of cell loss in a network is
typically expressed as the probability that the network will discard a given cell. The loss is
measured as a rate: the number of cells lost out of the total number transmitted.
• Cell Blocking Probability: The probability that all the buffers are full and any subsequent
cells are dropped (blocked).
• Cell Error Rate: Sometimes cells are misdirected, combined together or corrupted while en
route to their destination. The number of such cells out of the total number transmitted within a
given period gives the error rate. On detecting an erroneous cell, the receiver drops the cell and
either asks the source to repeat it or corrects it directly. To ensure the integrity of traffic, QoS
parameters must be met over the entire network by applying an appropriate resource allocation
strategy.
The ATM Forum defined the following traffic parameters for describing traffic injected into
the ATM network at the UNI [6, 24, 25].
• Peak Cell Rate (PCR): This refers to the maximum bit rate that may be transmitted from
the source.
• Cell Delay Variation Tolerance (CDVT): This refers to the level of cell delay variation
that must be tolerated in a given connection.
• Sustainable Cell Rate (SCR): This is the average traffic bandwidth the connection is
allowed to generate, that is, the average cell rate that may be transmitted from the source.
• Maximum Burst Size (MBS): This is the maximum number of cells that the source may
transmit at the PCR.
• Minimum Cell Rate (MCR): This is simply the minimum cell rate guaranteed by the
network.
• Maximum Cell Transfer Delay (maxCTD): Maximum allowed difference between
reception and transmission time for a cell between two end-user points.
• Peak-to-peak Cell Delay Variation (peak-to-peak CDV): Maximum allowed
difference between the maxCTD and minCTD for a cell between two end-user points. The
minCTD represents the minimum transfer time for a cell.
To help manage the growing complexity of specifying and routing on QoS, the ATM Forum's
UNI 3.1 signaling specification defines the following separate QoS classes, which describe
general profiles of QoS parameters. These classes are:
• Constant Bit Rate (CBR): CBR services include voice and circuit emulation, with traffic
specified by PCR and QoS specified by CTD, CDV and CLR. The cell transmission rate is
constant throughout the duration of the connection [6].
• Variable Bit Rate (VBR): VBR has real-time and non-real-time variants. Real-time VBR is
for traffic with rigorous timing requirements, such as video, with traffic specified by PCR, SCR
and MBS and QoS specified by CLR, CTD and CDV; it is designed for applications that are
sensitive to cell delay variation. Non-real-time VBR is used for "bursty" traffic and allows users
to send traffic at a rate that varies with time depending on the availability of user information.
Multimedia email is an example of VBR-NRT.
• Available Bit Rate (ABR): ABR is for sources that can dynamically adapt the rate at which
cells are transmitted in response to feedback from the network. It provides rate-based flow
control and is aimed at data traffic such as file transfer and e-mail.
• Unspecified Bit Rate (UBR): UBR is an ATM service category that does not provide any QoS
guarantee and is appropriate for non-critical applications that can tolerate, or readily adjust to,
the loss of cells. This class is widely used today for TCP/IP.
2.7 ATM CONNECTION SETUP
An ATM network, being connection-oriented, consists of endpoints and switches. It provides
different QoS to different connections, and on this basis a service contract is negotiated between
the user and the network when the connection is set up. The user describes the traffic
requirements and the needed QoS when requesting a connection. If the network accepts the
request, a contract is implemented to ensure that traffic using the connection complies with the
stipulated traffic description or is discarded.
ATM supports two types of connection in terms of users: point-to-point connections, which can
be unidirectional or bidirectional, and point-to-multipoint connections, which are always
unidirectional. In terms of duration, ATM provides permanent virtual connections (PVCs) and
switched virtual connections (SVCs). A PVC acts as a permanent leased line between user sites;
it is a connection that is set up and taken down manually by a network manager, with the set of
network switches between the ATM source and destination programmed with predefined
VPI/VCI values. An SVC is a connection that is set up automatically by a signalling protocol. It
is widely used because it does not require manual setup, but it is not reliable.
2.7.1 Virtual Connections
A connection across the network between two endpoints is established through transmission
paths (TPs), virtual paths (VPs) and virtual circuits (VCs). These are the three major concepts in
ATM.
Figure 2.6: ATM Connection (virtual channels carried within virtual paths over a physical
circuit)
Physical Transmission Circuit/Path: A transmission path is a bundle of VPs. VCs are
concatenated to create VPs, which in turn are concatenated to create a transmission path. A
physical link can be shared by many VPs with different bandwidth allocations, as long as the
total does not exceed the link capacity; the peak bit rate transmitted on a VP must never exceed
the VP's bandwidth allocation.
The Virtual Path (VP): This is a generic term for a bundle of virtual channel links; all the links
in a bundle have the same endpoints. A VPI identifies a group of VC links, at a given reference
point, that share the same VPC. A specific VPI value is assigned each time a VP is switched in
the network. A VP link is a unidirectional capability for the transport of ATM cells between two
consecutive ATM entities where the VPI value is translated. VP connections provide several
benefits, such as efficient routing, in the sense that the intermediate nodes are not involved in
call setup when a VCC is assigned to a pre-existing VPC. Also, traffic capacity and
communication resources can be reserved for VPCs so as to consolidate and manage traffic with
similar characteristics. VPCs also allow fast recovery from link failure, since an alternative path
can be set up immediately [26].
Virtual Channel (VC): A generic term used to describe a unidirectional communication
capability for the transport of ATM cells. A virtual channel link is a segment of a virtual channel
connection between two adjacent nodes and is identified by a VCI for a given virtual path
connection (VPC). A specific VCI value is assigned each time a VC is switched in the network.
A VC link is a unidirectional capability for the transport of ATM cells between two consecutive
ATM entities where the VCI value is translated. A VC link is originated or terminated by the
assignment or removal of the VCI value [26].
2.8 STATISTICAL MULTIPLEXING
Since the Virtual Path concept inherently increases call blocking as a result of decreased
capacity sharing, it is important to consider the effect of statistical multiplexing, especially on
the bandwidth of a VP, which can be shared between end-to-end connections through the
establishment/release of end-to-end connections and a bandwidth management scheme, while
considering the variability of traffic on a connection. This technique concentrates traffic from
multiple users or terminals onto a shared communication link. The aggregation of cell flows into
a single transmission line by the multiplexer reduces the system cost by reducing the number of
transmission lines, which leads to an improvement in system efficiency. Statistical multiplexing
at the cell level during the connection lifetime is a powerful feature of ATM, inherited from
packet switching networks [6, 28, 29].
In this technique, cells from all traffic streams are merged into a queue and transmitted in
first-in-first-out (FIFO) fashion, so that the entire bandwidth is allocated to the first cell out of
the queue. The result is a smaller average delay per cell. When several sources are active and
combined on a single link at the same time, the required total bandwidth is less than the sum of
the individual connections' peak demands. The statistical multiplexing gain is determined by the
acceptable cell loss rates of the connections [28, 30].
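The statistical multiplexing gain can be illustrated with a simple bufferless model: n independent ON/OFF sources, each active with probability p, need much less capacity than n times the peak rate once a small overflow probability is tolerated. The parameter values below are illustrative:

```python
from math import comb

def min_capacity(n, r, p, eps):
    """Smallest link rate C = k*r such that the probability of more than
    k sources being simultaneously ON stays below the loss target eps
    (bufferless binomial model of n independent ON/OFF sources)."""
    for k in range(n + 1):
        # P(number of ON sources > k) for a Binomial(n, p) count.
        tail = sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1, n + 1))
        if tail <= eps:
            return k * r
    return n * r

n, r, p = 100, 1.0, 0.2          # 100 sources, peak 1 Mbit/s, 20% activity
c = min_capacity(n, r, p, eps=1e-6)
print(c, "Mbit/s needed vs", n * r, "Mbit/s for peak allocation")
```

The resulting capacity sits between the mean load (n*p*r) and the peak allocation (n*r); the ratio n*r/c is the multiplexing gain, and it grows as the loss target eps is relaxed.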
2.9 CONNECTION ADMISSION CONTROL
This scheme negotiates the traffic description between the user and the network and reserves
bandwidth for virtual channel/virtual path connections to guarantee the quality of service (QoS).
The CAC decides how many VCs can be carried on a link while still maintaining the required
QoS. CAC in an ATM network works as follows: when a new call arrives at a local switch, the
total capacity required by the new call and the existing VCs is calculated and compared to the
unused bandwidth of the VP, to determine whether the call can be accommodated. This
calculation is based on a set of parameters, established during call establishment, that represent
the cell arrival process. The call is only accepted into the network when the required bandwidth
has been verified. The most common CAC method is equivalent-bandwidth CAC, which
converts the traffic parameters and QoS into an equivalent bandwidth for the connection; the
resulting value is compared to the link's unreserved bandwidth to see whether the request can be
supported. The operation of CAC schemes depends on at least the following factors: the source
traffic characteristics, the QoS and the free resources on the VP. The concept of effective
bandwidth helps to reduce the complexity of CAC in ATM networks; the idea is to find the
effective bandwidth that can support the needed QoS. There are two ways of calculating the
effective bandwidth: the fluid-flow model and the stationary bit rate model.
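A minimal sketch of the equivalent-bandwidth CAC decision described above, with illustrative bandwidth figures:

```python
def admit(existing_bw, new_bw, vp_capacity):
    """Equivalent-bandwidth CAC: accept the call only if the equivalent
    bandwidths of all connections, including the new one, fit in the VP."""
    return sum(existing_bw) + new_bw <= vp_capacity

vp_capacity = 150.0                 # Mbit/s reserved for this VP (illustrative)
existing = [40.0, 55.0, 30.0]       # equivalent bandwidths of admitted VCs
print(admit(existing, 20.0, vp_capacity))  # True: 125 + 20 <= 150
print(admit(existing, 30.0, vp_capacity))  # False: 125 + 30 > 150
```

The per-connection equivalent bandwidth itself comes from the traffic parameters and QoS target, for example via the fluid-flow formula in section 2.11.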
2.10 TRAFFIC MODEL
Each single connection in an ATM network has a variable bit rate bounded by its peak rate. To
ascertain the effective bandwidth of a connection, an appropriate model has to be selected to
specify its characteristics with known parameters. In this work, the properties of individual
connections are of concern, hence the adoption of the fluid flow model. The two-state fluid flow
model, which captures the basic behaviour of a traffic input source, is adopted because a traffic
source can be either ON or OFF.
During the ON state, cells are transmitted at the peak rate, r, and no cells are transmitted during
the OFF state, i.e. the bit rate is zero. The advantages of such a traffic source are its simplicity
and flexibility; for example, it can be used for connections ranging from bursty to continuous bit
streams.
In this two-state fluid flow model, the ON and OFF states are the times when the source is
active or idle, respectively. They are assumed to be exponentially distributed, and therefore the
source is completely characterized by three parameters, namely the peak rate r, the utilization ρ,
and the mean ON period b, where ρ is the fraction of time the source is active and b is the mean
duration of the ON state.
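A quick simulation of the two-state source, assuming exponential ON periods of mean b and OFF periods chosen so the long-run activity factor is ρ, confirms that the average rate approaches ρr:

```python
import random

def mean_rate(r, rho, b, horizon=200_000.0, seed=1):
    """Simulate a two-state fluid source: exponential ON periods of mean b,
    during which it emits at peak rate r, and exponential OFF periods of
    mean b*(1 - rho)/rho, so the long-run fraction of time ON is rho."""
    rng = random.Random(seed)
    off_mean = b * (1 - rho) / rho
    t, sent, on = 0.0, 0.0, True
    while t < horizon:
        period = rng.expovariate(1 / b if on else 1 / off_mean)
        period = min(period, horizon - t)  # truncate the last period
        if on:
            sent += r * period
        t += period
        on = not on
    return sent / horizon

r, rho, b = 10.0, 0.4, 2.0
print(round(mean_rate(r, rho, b), 2))  # close to rho * r = 4.0
```

The parameter values (r = 10, ρ = 0.4, b = 2) are illustrative; any consistent units for rate and time will do.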
2.11 FLUID FLOW MODEL
The fluid flow model is adopted when focusing on individual connections, which are
statistically independent. Each traffic source is assumed to be of the two-state fluid flow type,
i.e. it alternates between ON and OFF states. During ON intervals, cells are transmitted at the
peak rate, r, and no cells are transmitted during OFF intervals, i.e. the bit rate is zero. The
durations of the ON and OFF intervals are exponentially distributed; therefore the traffic source
is characterized by three parameters: the peak rate r, the utilization ρ (i.e. the fraction of time the
source is in the ON state) and the mean ON period b. More specifically, for sources with
maximum cell loss probability ε, peak bit rate r and utilization ρ, and assuming the buffer size of
the ATM multiplexer is B cells, the required effective bandwidth can be derived as follows [31]:

ê = [αb(1 − ρ)r − B + √((αb(1 − ρ)r − B)² + 4Bαbρ(1 − ρ)r)] / (2αb(1 − ρ))

where α = ln(1/ε).

For CBR sources, ρ = 1 and b = ∞, so that ê = r = Rpeak. This formula is used to calculate the
effective bandwidth.
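The effective bandwidth expression can be evaluated numerically as follows; the parameter values are illustrative:

```python
from math import log, sqrt

def effective_bandwidth(r, rho, b, B, eps):
    """Fluid-flow effective bandwidth of one ON/OFF source with peak rate r,
    utilization rho and mean ON period b, multiplexed into a buffer of
    B cells with cell loss target eps."""
    alpha = log(1 / eps)
    y = alpha * b * (1 - rho) * r - B
    return (y + sqrt(y * y + 4 * B * alpha * b * rho * (1 - rho) * r)) / (2 * alpha * b * (1 - rho))

e = effective_bandwidth(r=10.0, rho=0.4, b=2.0, B=100, eps=1e-6)
print(round(e, 3))  # lies between the mean rate (4.0) and the peak rate (10.0)
```

As expected, the result lies between the mean rate ρr and the peak rate r: a larger buffer B or a looser loss target ε pushes it toward the mean, while a stricter ε pushes it toward the peak.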
2.12 VIRTUAL PATH CONCEPT
Routing in ATM networks is based on Virtual Path Connections (VPCs); a route is defined as a concatenation of VPCs. ATM standards specify two types of connections: Virtual Path (VP) connections and Virtual Channel (VC) connections. While VCs are used as the virtual circuits on which data is transferred, VPs are used to bundle several VCs together, thus decreasing the number of entities to be managed. The virtual path concept is the key to the development of a cost-effective, flexible network, since the network architecture and node processing are simplified [28].
Figure 2.7: Virtual Path Network [28]
The virtual path concept is used for segregating different types of traffic, i.e. a VP with dedicated resources is assigned to each type of traffic between a source and destination pair. As a result, more than one VP may be established between the same source and destination pair, each carrying a different type of traffic. In this work, it is assumed that traffic of the same type requires identical end-to-end QoS.
According to [6, 28, 32], the introduction of this concept in ATM networks allows VCs to be grouped in bundles, processed and transmitted together. Managing VCs in bundles brings significant advantages, such as a reduction in node cost and the simplification of the network architecture, thereby supporting the required operation, administration and management functions. The fundamental
importance of the VP concept is that individual connections are grouped together so that they share a common path through the network as a single unit. Management actions are applied to a small number of groups of connections instead of a large number of individual connections. This results in a lower total processing requirement per VC and a better use of network resources.
Figure 2.8: VP Borrowing model
Another advantage of the VP concept is that it allows borrowing from one VP to another if they share the same source node and the set of links in the former VP is a subset of that of the latter. The deterministic reservation Least Loaded Routing algorithm with dynamic VP capacity sharing helps to achieve this: if a call is blocked on the direct path, the algorithm checks whether there is another VP with free capacity that can be shared by the direct VP, as illustrated in Figure 2.8.
The implementation of VPs reduces the processing and delay associated with the call acceptance control (CAC) function, and therefore plays an important role in call admission control in BISDN [33]. The concept reserves capacity on a VP connection in anticipation of later call arrivals, so a new VC connection can be established by executing simple control functions at the endpoint of the VP connection (the terminator). No call processing is carried out at the transit nodes in a VP network; as a result, a cost-effective network with enhanced performance is realized [28].
An ATM network is constructed with nodes and links, and a VC is defined by creating a connection between two endpoints which exchange information. A route is assigned to each VC in the network; cells are transported along the route assigned to the VC to which they belong. In this way, several node functions are performed by each node recognizing the outgoing link to which incoming cells should be sent [28]. But one thing is common in a transfer network like ATM
technology: queuing delay and cell loss always occur whenever statistical cell multiplexing is performed at the nodes. The delay and cell loss probability depend on the maximum queuing buffer size at the node and on the link capacity between nodes. A traffic network is accommodated within a large-capacity transmission facilities network; therefore, in designing ATM networks, the minimum capacity of a virtual path between switches must be devised so as to provide the required service quality, such as cell blocking probability and cell transfer quality, for each virtual circuit. A design algorithm that determines the VP capacities of the nodes and the link capacities must be developed to realize an ATM network [34].
VPs play an important role in traffic control and resource management in ATM networks. A VP is defined as a logical direct link between two nodes in the network that are connected through two or more sequential physical links. A VP is identified by its Virtual Path Identifier (VPI), and each VP has its own bandwidth, which limits the number of VCs it can accommodate. The number of VPs on each link is not limited; the only constraint is that the sum of the bandwidths of the VPs does not exceed the capacity of the link. Setting up a path in the network is done once for all VCs using the same path, and the required node functions are effectively simplified: rewriting the routing table of a transit node is not necessary at call setup, because a field known as the virtual path identifier (VPI) is reserved in the cell header and can be compared, on cell arrival at the transit node, with the VPIs in the routing table. The routing table is concerned only with the VP, so the routing procedure at call setup is also eliminated at the transit node, because path selection is done by choosing an appropriate VP at the end nodes terminating the VP [28]. Transit nodes are also freed from the bandwidth allocation process at call setup, since the bandwidth of the requested connection is compared with the unused bandwidth of the VP at the end nodes.
Reserving bandwidth for VPs allows VC connections to be established quickly and simply, because bandwidth along their path is guaranteed and the allocation function is performed only at the beginning of a VP. Eliminating processing at the transit nodes leads to low node construction cost, which is a valuable property in building an economical ATM network, since transmission cost has already been reduced by the development of high-capacity transmission systems. The VP concept also provides logical service separation on network service access, and adaptability to varying traffic and network failures through dynamic resource management [28]. Implementation of priority control is possible by segregating traffic with different Quality of
Service (QoS), where each VP is considered a logical link for a certain service. This leads to building VP subnetworks for different services within the network. Each VP is assigned a number of physical links and an effective bandwidth that assures its QoS requirements. Several VPs can be multiplexed on the same physical link, and under varying traffic conditions and network failures the VPs can be retuned to accommodate the changing network conditions while still maintaining network performance.
Despite these merits of the VP concept, there are still some demerits, such as the reservation of capacity in anticipation of new traffic, which decreases capacity sharing and leads to under-utilization of the available bandwidth. It also does not exploit redundant bandwidth in another VP, so the network throughput decreases as the total call blocking rate increases, and the network transmission cost also increases. This work looks into how to use this redundant bandwidth in another VP when there are VPs that need the bandwidth, thereby increasing the throughput and decreasing the call blocking rate.
The dynamic bandwidth control method helps to flexibly reassign individual VP bandwidths: when the number of connections on one VP increases, the remaining link bandwidth can be assigned to the busy VP by statistically sharing the transmission facilities among the VPs. With this, no link bandwidth lies idle, so transmission efficiency is improved because each VP on the link is well utilized. Although this control may increase the processing load, the advantage of reduced node processing is expected to be maintained by changing the bandwidth less frequently than call setup and clearing occur. Since a VP's bandwidth can be varied by merely modifying the bandwidth data stored in the processors of the end nodes, the transit nodes and the switches do not need to be accessed for bandwidth changes.
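The idea of reassigning spare link bandwidth to a busy VP at the end nodes can be sketched as follows (a toy model with hypothetical names and capacities, not the control software of a real ATM node):

```python
# Illustrative sketch (all names and numbers hypothetical): a link of fixed
# capacity is shared among several VPs, and spare link bandwidth is handed to
# a busy VP by updating bandwidth records at the end nodes only.

class Link:
    def __init__(self, capacity):
        self.capacity = capacity
        self.vps = {}                      # vp_id -> allocated bandwidth

    def allocated(self):
        return sum(self.vps.values())

    def grow_vp(self, vp_id, extra):
        """Give `extra` bandwidth to a busy VP if the link has spare capacity.

        Only the bandwidth record changes; no per-call processing at the
        transit nodes is implied.
        """
        if self.allocated() + extra <= self.capacity:
            self.vps[vp_id] += extra
            return True
        return False                       # no spare capacity on the link

link = Link(capacity=150)
link.vps = {"VP1": 50, "VP2": 50}          # 50 units of the link still unused
link.grow_vp("VP1", 40)                    # VP1 is busy: grows to 90, within 150
link.grow_vp("VP2", 20)                    # 90 + 50 + 20 = 160 > 150 -> refused
```

The point of the sketch is that the reassignment touches only the bandwidth table, mirroring the claim above that transit switches need not be accessed for bandwidth changes.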
2.13 GENERAL OVERVIEW OF NETWORK ROUTING
The task of routing data from source to a destination is an important procedure for any network.
An efficient routing protocol, in conjunction with efficient connection admission control, allows
for correct operation of the network by ensuring that cells are delivered to their destination in a
correct chronological order. The overall objective of a routing policy is to increase the network
throughput in terms of call admissions, while guaranteeing the performance of the network
within specified levels. The design of an efficient routing policy is of enormous complexity,
since it depends on a number of variable and sometimes uncertain parameters. This complexity is
increased by the diversity of bandwidth and performance requirements of different connection
types in a multi-class network environment. Furthermore, the routing policy should be adaptive
to cater for changes in the network: topological changes due to faults or equipment being taken
in and out of service; and changing traffic conditions.
Various types of traffic like CBR for voice, VBR for video, and ABR for data or best effort
traffic, are moved across the network using different types of routing techniques. Diverse
requirements for Quality of Service (QoS) from different users must be satisfied. In order to
guarantee the QoS, some connections may have to be blocked by the connection admission
control (CAC) mechanism during the connection setup, and traditional routing meets a new
challenge to avoid network congestion and be able to route around congested regions when there
is congestion in the network. Routing over ATM networks should be connection-oriented and dynamic, responsive not only to the physical connectivity topology of the network but also to the congestion status of each link and node, and it should scale to large networks while reasonably limiting the overhead in memory, bandwidth and processing time required for storing and exchanging routing information. The connection blocking probability, as well as the rerouting probability, should be low.
Many existing routing algorithms suffer from a lack of cooperation between congestion control and routing. Routing has a very close relationship with congestion control, and especially with CAC, which affects the selection of paths: a path selected by the routing algorithm may be rejected by CAC if the new connection request would seriously degrade the QoS of other existing connections. Another problem is that a conventional routing algorithm causes significant overhead when the network size gets very large or rerouting occurs frequently due to varying link state. In an ATM network the number of switch nodes may be very large, so reducing routing overhead must be taken into account in the design of a routing algorithm [35].
To obtain high utilization under the QoS requirement in ATM, CAC must decide whether to
accept a new connection, based on not only the new connection’s anticipated traffic
characteristics and the QoS requirement of connected calls including the new call, but also the
bandwidth capacity of the links along the path which is selected by the routing algorithm. A
good routing algorithm should select a path with a very low probability of rejection by the CAC, in addition to satisfying the lowest-cost criterion.
Routing has a strong relationship with CAC in ATM networks. Routing plays an important role
in ATM congestion control; because it can prevent congestion by dynamically selecting paths
according to the traffic load and link bandwidth conditions [35]. It can also mitigate congestion
occurrence by routing around the congested region. To do this, we extract a sub topology from
the entire network topology, by checking all the links and including only the links that have high
probability to accommodate the new connection before the path selection. Routing on this
abstracted effective topology will lead to low call blocking probability and rerouting probability,
thus improving the network performance and reducing the routing overhead. To keep the routing-overhead processing time as low as possible, only a few link metrics and QoS metrics should be used, so that less bandwidth, memory and processing time are consumed while still providing enough information for routing.
Routing, as stated, is the act of moving network traffic along a selected path towards its destination, and is performed in many kinds of networks. The path selection in network routing is typically formulated as a shortest-path problem. There is also the problem of routing in a dynamic environment, due to fluctuations in traffic load, link failures and topology changes [36].
2.13.1 Routing Metrics
Metrics are a way to measure or compare. Routing protocols use metrics to determine which
route is the best path. There are cases when a routing protocol learns of more than one route to
the same destination. To select the best path, the routing protocol must be able to evaluate and
differentiate among the available paths. For this purpose, a metric is used. A metric is a value
used by routing protocols to assign costs to reach remote networks. The metric is used to
determine which path is most preferable when there are multiple paths to the same remote
network. Metrics used in routing protocols include the following:
• Hop count: A simple metric that counts the number of routers a packet must traverse.
• Bandwidth: Influences path selection by preferring the path with the highest bandwidth.
• Load: Considers the traffic utilization of a certain link.
• Delay: Considers the time a packet takes to traverse a path.
• Reliability: Assesses the probability of a link failure, calculated from the interface error
count or previous link failures.
• Cost: A value determined by the network administrator to indicate preference for a route.
Cost can represent a metric, a combination of metrics, or a policy.
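As a toy illustration of how a routing protocol might apply such metrics (the composite formula below is invented for illustration and is not any real protocol's calculation):

```python
# Toy illustration of metric-based path selection: several candidate routes
# to the same destination are compared, and the lowest metric wins.
# All route data here are hypothetical.

routes = [
    {"via": "A", "hops": 3, "min_bandwidth": 100, "delay": 20},
    {"via": "B", "hops": 2, "min_bandwidth": 10,  "delay": 50},
    {"via": "C", "hops": 4, "min_bandwidth": 100, "delay": 15},
]

def hop_metric(route):
    # Hop count: fewer routers traversed is better.
    return route["hops"]

def composite_metric(route):
    # Invented composite: prefer high minimum bandwidth and low delay.
    return 1000.0 / route["min_bandwidth"] + route["delay"]

best_by_hops = min(routes, key=hop_metric)           # route via "B" (2 hops)
best_composite = min(routes, key=composite_metric)   # route via "C" (10 + 15 = 25)
```

Note that the two metrics disagree: the shortest path by hop count traverses the low-bandwidth link, which is exactly why protocols that carry bandwidth and delay in their metric can make different choices than hop-count protocols.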
2.13.2 Types of Routing Schemes
Various routing schemes proposed and implemented in current public and commercial networks are discussed here from an ATM network perspective. Routing schemes can be classified in many ways; based on their responsiveness, they are either static or dynamic [6, 33, 37].
2.13.2.1 Static Routing:
This approach is simply the process of manually entering routes into a device's routing table via
a configuration file that is loaded when the routing device starts up. As an alternative, these
routes can be entered by a network administrator who configures the routes manually. Since
these manually configured routes don't change after they are configured (unless a human changes
them) they are called 'static' routes. Static routing is the simplest form of routing, but it is a
manual process.
Static routing is used when there are very few devices to configure and the routes are unlikely ever to change. It also does not handle failures in external networks well, because any manually configured route must be updated or reconfigured by hand to repair lost connectivity. Static routes have a metric of zero.
2.13.2.2 Dynamic Routing:
Dynamic routing is an efficient method of traffic control where cell routing is frequently altered
due to the status of the network or anticipated demand shifts, so that the network can respond
quickly and properly to the changes in traffic and facility conditions. The routing decisions are
influenced by the current traffic conditions. Dynamic routing gives a better chance of success to
an individual cell by increasing the number of ways the cell can traverse the network. When
there is a new connection request between a pair of switches, it is possible that the call is
established along the direct route or along an alternative allowable route. Normally routing
schemes attempt the direct route first and if it is unavailable then the alternative routes are
considered. The dynamic routing scheme increases network efficiency by routing calls away
from busy areas through lightly loaded portions of the network. The routing algorithms differ mainly in how they choose one route from the set of allowable routes.
Dynamic routing is complementary to alternative and adaptive routing; the time scales over which traffic conditions are assessed differ. Dynamic routing can exploit the non-coincidence of busy hours across a large network and, if the VP concept is used to the full, it can effectively mean reconfiguration of the VPN layer [38].
Usually two-link alternative routes are considered. Removing the restriction of two VPs allows a
wider choice of alternative routes and as such tends to reduce the blocking probability. On the
other hand, it also tends to reduce the effective capacities of the physical links [39]. In general
the use of multiple VPs for a single call means inefficient use of network resources, because the
same resources could be used to complete several separate calls [39]. The dynamic routing method has two parts: the routing protocol, used between neighboring routers to convey information about their network environment, and the routing algorithm, used to determine paths through that network.
• Routing Protocol: The protocol defines the method used to share information externally. Routing protocols capture the state information (e.g. available resources) and disseminate it throughout the network. A routing protocol is the language a router speaks with other routers in order to share information about the reachability and status of networks.
• Routing Algorithm: Routing algorithms use this information to compute appropriate paths, that is, they process the information internally. Several routing algorithms can be defined by changing factors such as the metric parameter of the VP cost function, the composition rule for the alternative route cost, the route selection scheme and the determination of available alternate routes. The cost parameter associates a VP with a certain value that is adjusted dynamically according to the varying load of the VP. The calculation is based on knowledge of the current network load, the traffic descriptors and the QoS requirements.
Dynamic routing protocols not only perform these path determination and route table update
functions but also determine the next-best path if the best path to a destination becomes
unusable. The capability to compensate for topology changes is the most important advantage
dynamic routing offers over static routing.
2.13.2.2.1 Types of Dynamic Routing Protocols
� Distance-Vector: A distance-vector routing protocol sends a full copy of its routing table
to its directly attached neighbours. This is a periodic advertisement, meaning that even if there
have been no topological changes, a distance-vector routing protocol will, at regular intervals, re-
advertise its full routing table to its neighbors [40]. Obviously, this periodic advertisement of
redundant information is inefficient. Ideally, you want a full exchange of route information to
occur only once and subsequent updates to be triggered by topological changes. Another
drawback to distance-vector routing protocols is the time they take to converge, which is the time
required for all routers to update their routing table in response to a topological change in a
network. Hold-down timers can speed the convergence process. After a router makes a change to
a route entry, a hold-down timer prevents any subsequent updates for a specified period of time.
This approach helps stop flapping routes (which are routes that oscillate between being available
and unavailable) from preventing convergence. Yet another issue with distance-vector routing
protocols is the potential of a routing loop.
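One distance-vector update step of the kind described above can be sketched as follows (a minimal Bellman-Ford-style merge; the table format and router names are illustrative):

```python
# Minimal sketch of one distance-vector update step: a router merges a
# neighbour's advertised routing table into its own, keeping a route only if
# going via that neighbour is cheaper than what it already knows.

def merge_advertisement(own_table, neighbour, neighbour_table, link_cost):
    """own_table / neighbour_table map dest -> (cost, next_hop)."""
    changed = False
    for dest, (adv_cost, _) in neighbour_table.items():
        cost_via_neighbour = link_cost + adv_cost
        if dest not in own_table or cost_via_neighbour < own_table[dest][0]:
            own_table[dest] = (cost_via_neighbour, neighbour)
            changed = True
    return changed            # a change would trigger a new advertisement

r1 = {"N1": (0, "local")}                          # router R1's initial table
r2_adv = {"N2": (0, "local"), "N3": (1, "R3")}     # advertisement from R2
merge_advertisement(r1, "R2", r2_adv, link_cost=1)
# R1 now reaches N2 at cost 1 and N3 at cost 2, both via R2.
```

A periodic protocol would repeat this merge at fixed intervals even when nothing changed, which is exactly the inefficiency the paragraph above points out.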
� Routing Information Protocol (RIP): A distance-vector routing protocol that uses a
metric of hop count. The maximum number of hops between two routers in an RIP-based
network is 15. Therefore, a hop count of 16 is considered to be infinite. Also, RIP is an IGP.
Three primary versions of RIP exist. RIPv1 periodically broadcasts its entire IP routing table,
and it supports only fixed-length subnet masks. RIPv2 supports variable-length subnet masks,
and it uses multicasts (to a multicast address of 224.0.0.9) to advertise its IP routing table, as
opposed to broadcasts. RIP next generation (RIPng) supports the routing of IPv6 networks, while
RIPv1 and RIPv2 support the routing of IPv4 networks. The Routing Information Protocol (RIP)
was the first dynamic routing protocol to be used in an internetwork, so it was created and used
primarily with UNIX hosts for the purpose of sharing routing information [41]. RIP uses the hop count metric; the best path is the route with the lowest hop count.
� Interior Gateway Protocol (IGP) and IGRP: An interior gateway protocol (IGP) exchanges routes between routers in a single Autonomous System (AS). Common IGPs include OSPF and EIGRP; although less popular, RIP and IS-IS are also considered IGPs. BGP, in contrast, is used as an EGP; however, interior BGP (iBGP) can be used within an AS [42]. Cisco's Interior Gateway Routing Protocol (IGRP) uses bandwidth, delay, reliability, and load metrics; the best path is the route with the smallest composite metric value calculated from these parameters, although by default only bandwidth and delay are used.
� Enhanced Interior Gateway Routing Protocol (EIGRP): EIGRP is classified as an
advanced distance-vector routing protocol, because it improves on the fundamental
characteristics of a distance-vector routing protocol. For example, EIGRP does not periodically
send out its entire IP routing table to its neighbors. Instead it uses triggered updates, and it
converges quickly. Also, EIGRP can support multiple routed protocols (for example, IPv4 and
IPv6). EIGRP can even advertise network services (for example, route plan information for a
unified communications network) using the Cisco Service Advertisement Framework (SAF).
By default, EIGRP uses bandwidth and delay in its metric calculation; however, other parameters can be considered, including reliability, load, and maximum transmission unit (MTU) size. The best path is the route with the smallest composite metric value calculated from these parameters.
� Link State: A link-state routing protocol allows routers to build a topological map of a
network. Then, similar to a global positioning system (GPS) in a car, a router can execute an
algorithm to calculate an optimal path (or paths) to a destination network. Routers send link-state
advertisements (LSA) to advertise the networks they know how to reach. Routers use those
LSAs to construct the topological map of a network. The algorithm run against this topological
map is Dijkstra’s Shortest Path First algorithm. Unlike distance-vector routing protocols, link-
state routing protocols exchange full routing information only when two routers initially form
their adjacency. Then, routing updates are sent in response to changes in the network, as opposed
to being sent periodically. Also, link-state routing protocols benefit from shorter convergence
times, as compared to distance-vector routing protocols.
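The shortest-path computation that a link-state router runs over its topological map can be sketched with Dijkstra's algorithm (the topology and link costs below are invented for illustration):

```python
import heapq

# Sketch of the shortest-path computation a link-state router runs over its
# topological map: Dijkstra's algorithm on a cost-weighted graph.

def dijkstra(graph, source):
    """graph maps node -> list of (neighbour, link_cost); returns node -> cost."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry, already improved
        for v, w in graph.get(u, []):
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w           # found a cheaper path to v
                heapq.heappush(heap, (d + w, v))
    return dist

topology = {
    "A": [("B", 1), ("C", 4)],
    "B": [("C", 2), ("D", 6)],
    "C": [("D", 3)],
}
costs = dijkstra(topology, "A")
# From A: B costs 1, C costs 3 (via B), D costs 6 (via B and C).
```

Because every router holds the same topological map built from the flooded LSAs, each one can run this computation independently and still arrive at consistent forwarding decisions.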
� Open Shortest Path First (OSPF): A more powerful routing protocol developed
subsequent to RIP, defined originally as RFC 1131 and more recently as RFC 2178, is called
Open Shortest Path First (OSPF). It is the preferred routing protocol for medium or large
networks which, in OSPF, are referred to as autonomous systems (ASs) [42]. OSPF is a link-state routing protocol that uses a metric of cost, based on the link speed between two routers. It is a popular IGP because of its scalability, fast convergence, and vendor interoperability.
� Intermediate System to Intermediate System(ISIS): This link-state routing protocol
operates similarly to OSPF. It uses a configurable, yet dimensionless, metric associated with
an interface and runs Dijkstra’s Shortest Path First algorithm. Although using IS-IS as an IGP
offers the scalability, fast convergence, and vendor interoperability benefits of OSPF, it has not
been as widely deployed as OSPF.
� Path Vector: A path-vector routing protocol includes information about the exact path
packets take to reach a specific destination network. This path information typically consists of a
series of autonomous systems through which packets travel to reach their destination.
� Border Gateway Protocol (BGP): Border Gateway Protocol (BGP) is the only path-
vector protocol you are likely to encounter in a modern network. Also, BGP is the only EGP in
widespread use today. In fact, BGP is considered to be the routing protocol that runs the Internet,
which is an interconnection of multiple autonomous systems. BGP’s path selection is not solely
based on AS hops, however. BGP has a variety of other parameters that it can consider.
Interestingly, none of those parameters are based on link speed. Also, although BGP is incredibly
scalable, it does not quickly converge in the event of a topological change. The current version of
BGP is BGP version 4 (BGP-4). However, an enhancement to BGP-4, called Multiprotocol BGP
(MP-BGP), supports the routing of multiple routed protocols, such as IPv4 and IPv6.
2.14 ROUTING IN ATM NETWORKS
The standard cell routing techniques in ATM networks are based on the virtual path (VP) concept, which plays an important role in traffic control and resource management [43].
Dynamic routing gives a better chance of success to an individual cell by increasing the number
of ways the cell can traverse the network. When there is a new connection request between a
pair of switches, it is possible that the call is established along the direct route or along an
alternative allowable route. Normally routing schemes attempt the direct route first and if it is
unavailable then the alternative routes are considered. The dynamic routing scheme increases
network efficiency by routing calls away from congested areas through lightly loaded portions of
the network. The routing algorithms differ mainly in how they choose the one route from the set
of allowable routes.
Dynamic routing algorithms in ATM networks can be classified into two categories: Least Loaded Routing-based (LLR-based) and Markov Decision Process-based (MDP-based). In this work, the least loaded routing approach is chosen to check the traffic load in the network, by comparing three routing algorithms based on LLR. The LLR approach tries to route a call on the direct link first; if the call is blocked because there is no free circuit, the least busy path, i.e. the one with the maximum number of free circuits, is then tried. The MDP approach can result in optimal or least-cost routes, but is reported to be more computationally intensive [44, 45]. As a result, knowing the computational complexity of each algorithm can be helpful. The effectiveness of most of these algorithms depends on the loading conditions of the network, while some others have increased hardware requirements or require considerable execution time, which renders them unsuitable for real-time applications [43]. Most dynamic routing schemes implemented in the real world are variations of LLR, and each has its merits and demerits.
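The LLR decision just described can be sketched as follows (a simplified illustration with hypothetical path names and capacities, not the simulation code used in this work):

```python
# Simplified sketch of the LLR decision: try the direct path first; otherwise
# pick the alternate path with the most free capacity; drop the call if no
# path can carry it. All names and capacities are hypothetical.

def route_call(direct, alternates, demand):
    """direct / alternates are dicts with 'name' and 'free' (free capacity)."""
    if direct["free"] >= demand:
        return direct["name"]             # direct path carries the call
    candidates = [p for p in alternates if p["free"] >= demand]
    if not candidates:
        return None                       # no path fits: drop the call
    least_loaded = max(candidates, key=lambda p: p["free"])
    return least_loaded["name"]           # least loaded = most free capacity

direct = {"name": "VP-direct", "free": 2}
alternates = [{"name": "VP-a", "free": 5}, {"name": "VP-b", "free": 8}]
route_call(direct, alternates, demand=4)   # direct too full -> "VP-b"
route_call(direct, alternates, demand=10)  # nothing fits -> None (call dropped)
```

The variations compared in this work differ mainly in how they refine this selection step, for example by reserving capacity or by sharing VP capacity dynamically.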
Figure 2.9: Flow chart of operation of an LLR (Start → Initialization → Scanning/waiting for arrival → on any arrival, check the least loaded path → if a path is available, direct the call to the least loaded path; otherwise, drop the call)
2.16 RELATED WORKS
Shun-Ping and Chi-Ming [31] proposed a novel routing scheme, Random Early Blocking (REBR), based on least loaded routing in virtual-path-based ATM networks, and derived an efficient approximation method to find call-level performance measures such as call blocking and cell delay. Their results show that REBR performs much better than LLR when the traffic load is light, but REBR approaches LLR as the traffic load becomes heavier. REBR is a modification of least loaded routing. It first considers the direct link/route from the source to
destination. If the direct link/route is not available or is not sufficient for the transmission, it then
looks for a pair of alternate routes. If those alternate routes are occupied, it looks for another pair;
for each pair of free alternate routes, the one with the highest free bandwidth is chosen for
transmission. Where the bandwidths are equal, one is chosen at random.
Hon-Wai and Tsang [44] proposed a dynamic routing algorithm based on LLR with packing and
compared its performance with that of other routing algorithms, while considering the different
bandwidth requirements of the direct and alternate routes. They stated that all the dynamic routing
schemes outperform direct routing, although the difference among the dynamic schemes
themselves is very small, as they show approximately the same performance.
Ren-Hung [45], in his work on the routing problem in homogeneous VP-based ATM networks,
stated that the network blocking probability can be significantly reduced by LLR routing.
He further described the LLR routing algorithms: the free capacity of a VP is measured by the
maximum number of direct calls that can still be added to the VP. As shown in Figure 2.7, when an
alternate call is added to a VP, the maximum number of direct calls that can be added to this VP
may decrease by more than one.
Two algorithms based on a deterministic strategy were studied, namely the Deterministic
Reservation LLR algorithm (LLR_D) and the Deterministic Reservation LLR algorithm with
Dynamic VP Capacity Sharing (LLR_DS). In LLR_D, when a call arrives at the source node, the
call is first offered to the direct VP. If the direct VP does not have enough capacity available to
carry the call, the call is offered to the alternate path with the maximum free capacity, where the
free capacity of an alternate path is defined as the minimum free capacity over the path's VPs. If
the call still cannot be carried by the alternate path with the maximum free capacity, the call is
blocked.
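The LLR_D selection rule just described can be sketched in Python. The VP and path data structures below are invented for illustration; only the decision rule (direct VP first, then the alternate path with the maximum free capacity, where a path's free capacity is the minimum over its VPs) follows the description in [45]:

```python
# Sketch of LLR_D: offer the call to the direct VP first; otherwise offer
# it to the alternate path with maximum free capacity, where a path's free
# capacity is the minimum free capacity over its constituent VPs.

def free_capacity(path_vps):
    """Free capacity of a path = minimum free capacity over its VPs."""
    return min(vp["capacity"] - vp["used"] for vp in path_vps)

def llr_d_route(call_bw, direct_vp, alternate_paths):
    if direct_vp["capacity"] - direct_vp["used"] >= call_bw:
        return "direct"
    if alternate_paths:
        best = max(alternate_paths, key=lambda p: free_capacity(p["vps"]))
        if free_capacity(best["vps"]) >= call_bw:
            return best["name"]
    return None  # call blocked

direct = {"capacity": 10, "used": 9}
alt1 = {"name": "via-C", "vps": [{"capacity": 10, "used": 2},
                                 {"capacity": 10, "used": 6}]}
alt2 = {"name": "via-D", "vps": [{"capacity": 10, "used": 5},
                                 {"capacity": 10, "used": 5}]}
print(llr_d_route(2, direct, [alt1, alt2]))  # -> via-D
```

Here the direct VP has only one unit free, so the call of bandwidth 2 is offered to the alternate paths; "via-D" has free capacity min(5, 5) = 5, which exceeds "via-C"'s min(8, 4) = 4, so "via-D" is chosen.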
Antonios et al. [43] presented a simple heuristic routing algorithm suitable for real-time
applications, which achieves an increase in network throughput irrespective of the network traffic
load. The algorithm is enhanced with the trunk reservation concept, applied according to a
probability that increases linearly as the network load increases; this policy aims at a better overall
performance of the algorithm irrespective of the traffic load. The effectiveness of this algorithm
rests on a cost metric that achieves a successful trade-off between the use of minimum-hop routes
and load balancing. To this end, an efficient cost metric for the route selection was designed. The
cost of a link i is defined as:
c_i = u_i (B_i^new − B_i^used) ……………………….………………………….(1)[43]
where, for each link i:
B_i^used = the already used equivalent bandwidth of link i,
u_i = the expected utilization of link i after accepting the new call,
B_i^new = the expected equivalent bandwidth of link i after accepting the new call,
B_eq = the equivalent bandwidth that corresponds to the new call alone.
The costs of the links that belong to the candidate route are added, while the resources required in
terms of the number of hops of the route are also taken into account. Hence, the cost of route j,
with hop count h_j, is defined as:
C_j = h_j Σ c_i (sum over the links i of route j) ……………………………. (2)[43]
The difference B_i^new − B_i^used is the additional equivalent bandwidth needed to establish the
new call, but it is not necessarily equal to the equivalent bandwidth B_eq that would correspond to
the new call if it were considered independently of the already established calls. In fact, due
to statistical multiplexing it generally holds that:
B_i^new ≤ B_i^used + B_eq
From equation (2)[43], it can be noticed that the cost of the route j increases as:
• The required additional bandwidth for establishing the new call increases.
• The congestion level of the links (expressed by their utilization factor) that comprise a
specific route increases.
• The number of hops of the particular route increases.
These statements show that the proposed cost function combines the number of hops, the required
equivalent bandwidth and the congestion level of the routes.
In order to minimize the processing required at the intermediate nodes, satisfying one of the main
targets of ATM networks, a call-level execution of the routing algorithm at the source nodes is
proposed (source routing). In summary, the following on-line algorithm is defined.
When a new call request occurs:
1. For each candidate route, estimate the cost of each of its links using equation (1), according to
the CAC algorithm that is used.
2. Compute the cost of each route j using equation (2)[43].
3. Select the route with the minimum cost.
4. Apply the trunk reservation concept to the selected route with a probability Pr that increases
linearly with the network load.
4.1. IF the trunk reservation concept is applied:
IF the route is accepted by the CAC AND the route is a minimum-hop one, establish the call.
ELSE select the route with the next minimum cost and run step 4 again.
4.2. IF the trunk reservation concept is not applied:
IF the route is accepted by the CAC, establish the call.
ELSE select the route with the next minimum cost and run step 4 again.
5. IF all the routes have been rejected, the incoming call is blocked.
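The on-line procedure above can be sketched in Python. The link-cost form (utilization times additional equivalent bandwidth), the stand-in CAC check, and the trunk-reservation probability Pr = load are simplifying assumptions made for this sketch, not the exact formulas of [43]:

```python
import random

# Hedged sketch of the source-routing heuristic summarised above.
# Each route is a list of per-link tuples (used_bw, new_bw, utilization).

def link_cost(used_bw, new_bw, utilization):
    # Cost grows with the additional equivalent bandwidth (new_bw - used_bw)
    # weighted by the link's expected utilization.
    return utilization * (new_bw - used_bw)

def route_cost(links):
    # Route cost: sum of link costs scaled by the route's hop count.
    return len(links) * sum(link_cost(*l) for l in links)

def select_route(routes, cac_accepts, load, min_hops):
    """routes: {name: [(used_bw, new_bw, utilization), ...]}.
    cac_accepts: name -> bool, a stand-in for the CAC decision."""
    for name in sorted(routes, key=lambda n: route_cost(routes[n])):
        # Trunk reservation is applied with a probability that grows
        # linearly with the network load (assumed here to be Pr = load).
        reserved = random.random() < load
        if cac_accepts(name) and (not reserved or len(routes[name]) == min_hops):
            return name
    return None  # every candidate route rejected: the call is blocked

routes = {"r1": [(2, 3, 0.5)], "r2": [(2, 3, 0.9), (1, 2, 0.5)]}
print(select_route(routes, lambda n: True, load=0.0, min_hops=1))  # -> r1
```

Visiting the candidate routes in increasing cost order mirrors steps 3–5 of the listed procedure under the stated assumptions: the cheapest CAC-accepted route wins, a reserved route is kept only if it is minimum-hop, and the call is blocked when every candidate is rejected.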
Siebenhar [46] carried out a simulative comparison of call routing algorithms in VP-based ATM
networks. He proposed a dynamic algorithm known as Minimum Free Capacity Routing (MFCR)
for VP-based ATM networks with several direct paths connecting a pair of nodes. The algorithm
selects the direct path with the smallest residual capacity among those paths having enough
residual capacity; in this way, MFCR tries to aggregate the unused bandwidth on one path. For the
VC routing task, many routing algorithms such as fixed alternate routing, Least Loaded Routing
(LLR) or MFCR can be used. These algorithms were compared, and MFCR was assumed to be the
more favorable for VP-based ATM networks.
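The MFCR selection rule can be sketched as follows; the path structure and the capacity figures are invented for the example, and only the rule itself (smallest residual capacity among feasible paths) follows [46]:

```python
# Sketch of MFCR: among the direct paths that still have enough residual
# capacity for the call, pick the one with the SMALLEST residual capacity,
# packing calls together so the unused bandwidth stays aggregated elsewhere.

def mfcr_select(paths, call_bw):
    feasible = [p for p in paths if p["capacity"] - p["used"] >= call_bw]
    if not feasible:
        return None  # no path can carry the call: it is blocked
    return min(feasible, key=lambda p: p["capacity"] - p["used"])

paths = [{"name": "p1", "capacity": 10, "used": 8},   # residual 2
         {"name": "p2", "capacity": 10, "used": 4}]   # residual 6
print(mfcr_select(paths, 2)["name"])  # -> p1
```

Note the contrast with LLR: where LLR spreads load by choosing the path with the most free capacity, MFCR deliberately chooses the tightest feasible path so that large blocks of free bandwidth remain available for future wide calls.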
2.17 CONCLUSION
Having discussed the different types of routing techniques, it is observed that dynamic routing
algorithms make better use of network resources while admitting traffic into the network. It is also
noticed that most dynamic routing schemes implemented in the real world are variations of LLR.
The next chapter discusses two of the routing techniques presented above: the Deterministic
Reservation LLR Routing Technique (LLR_D) and the Deterministic Reservation LLR Algorithm
with Deterministic VP Capacity Sharing (LLR_VP). These algorithms will be investigated to
determine their performance with respect to server utilization, cell loss rate and cell delay in the
network.
CHAPTER THREE
MODELING
3.0 INTRODUCTION
From the literature reviewed, it is observed that many types of models exist, ranging from
mathematical, analytical, flowchart and computer-program models to graphical models. For the
purpose of this work, graphical and flowchart models will be used to compare three routing
algorithms, as shown in the previous chapter.
In this study, the cell routing model is therefore designed and implemented using the MATLAB
Simulink SimEvents package. A simulation model was chosen because no single analytical model
is tractable while still handling all the QoS parameters seen in the models reviewed; hence the
advantage of the computer simulation technique.
The two dynamic routing algorithms mentioned earlier were considered as the basis for the
analysis of their variants: the Deterministic Reservation LLR Routing Technique (LLR_D) and the
Deterministic Reservation LLR Algorithm with Deterministic VP Capacity Sharing (LLR_VP).
The network architecture and model used for the simulation and analysis are presented, with
simplified flowcharts showing the stages involved in selecting a route in an ATM network.
3.1 NETWORK ARCHITECTURE
The network architecture shown in Figure 3.1 supports data, voice and video traffic based on the
virtual path technique. Each node realizes the usual functions of traffic switching, processing and
transmission. The input traffic comes from varied sources comprising data, voice and video, which
are bundled into virtual paths.
Figure 3.1: An ATM network architecture
The virtual path concept is introduced to simplify the control and management of the network.
ATM networks consist of several VP subnetworks whose nodes are interconnected by VPs.
The network topology is modeled by a directed graph G(V, E), where V is the set of nodes and E
represents the links. The physical network topology, with its nodes, links and link capacities,
makes up the first set of input parameters for the cell routing problem. Another input parameter is
the traffic demand matrix in terms of the offered traffic between node pairs.
From the input parameters, the designed model produces a set of cells with their routes, that is,
the start, intermediate and end nodes together with the allocated capacities. The model also
determines the combination of VPs to be assigned in order to route the VCs (cells).
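A minimal illustration of this topology model, with the node set V, the capacitated link set E and a traffic demand matrix; all node names, capacities and demand values below are made up for illustration:

```python
# Directed-graph model G(V, E) of the network topology described above,
# plus a traffic demand matrix. Values are illustrative only.

nodes = ["A", "B", "C", "D"]                       # V: set of nodes
links = {("A", "B"): 155, ("A", "C"): 155,          # E: links with
         ("C", "B"): 155, ("B", "D"): 622}          # capacities (e.g. Mbit/s)
demand = {("A", "D"): 40, ("C", "D"): 25}           # traffic demand matrix

def out_links(node):
    """Unidirectional VPs (logical links) leaving a node."""
    return [e for e in links if e[0] == node]

print(out_links("A"))  # -> [('A', 'B'), ('A', 'C')]
```

In this representation a cell route is simply a sequence of links from E, and the routing problem consists of assigning each demand entry a route whose links all have sufficient remaining capacity.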
No restrictions are imposed on the topology of the network.
All nodes can be the source or destination of network traffic. The virtual paths (network links) are
assumed to be unidirectional logical links which can be established between nodes in the network.
There cannot be more than one VP with the same endpoints, and VPs are also assumed to have
deterministic bandwidth that is not subject to statistical multiplexing with cells from other VPs.
All VCs (cells) are routed entirely over VPs; no VC carries traffic without being assigned to a VP.