
UTRAN Transport Architecture - UTRAN architecture and transmission management

Document number: UMT/SYS/DD/023087

Document issue: 02.04 / EN

Document status: Standard

Date: 30/July/2008

External document

Copyright 2008 Alcatel-Lucent, All Rights Reserved

Printed in Swindon, UK

UNCONTROLLED COPY: The master of this document is stored on an electronic database and is

“write protected”; it may be altered only by authorized persons. While copies may be printed, it is

not recommended. Viewing of the master electronically ensures access to the current issue. Any

hardcopies taken must be regarded as uncontrolled copies.

ALCATEL-LUCENT CONFIDENTIAL: The information contained in this document is the property of

Alcatel-Lucent. Except as expressly authorized in writing by Alcatel-Lucent, the holder shall keep

all information contained herein confidential, shall disclose the information only to its employees

with a need to know, and shall protect the information from disclosure and dissemination to third

parties. Except as expressly authorized in writing by Alcatel-Lucent, the holder is granted no rights

to use the information contained herein. If you have received this document in error, please notify

the sender and destroy it immediately.


PUBLICATION HISTORY

10/Aug/2007

Issue 01.01 / EN, Draft

First draft.

17/Aug/2007

Issue 01.02 / EN, Draft

Update draft including contribution on Bandwidth Pools (chapter 7)

14/Sep/2007

Issue 01.03 / EN, Draft

Update draft after review.

28/Sep/2007

Issue 01.04 / EN, Preliminary

Update draft after comments (no formal review).

29/Oct/2007

Issue 01.05 / EN, Standard

Update version to add details on IMA and update congestion control algorithm after Bandwidth Pool

modification (no formal review).

14/Mar/2008

Issue 02.01 / EN, Draft

Include details for SS7 Dual Stack Support

Include description of IMA Link Failure Defence for OneBTS

08/April/2008

Issue 02.02 / EN, Preliminary

Include details of VP Shaping, VCC usage.

25/April/2008

Issue 02.03 / EN, Standard

Update after Review

30/July/2008

Issue 02.04 / EN, Standard

Additional Updates from Review


CONTENTS

1 INTRODUCTION ................................................................................................6

1.1 OBJECT....................................................................................................................................6

1.2 SCOPE OF THIS DOCUMENT .......................................................................................................7

1.3 AUDIENCE FOR THIS DOCUMENT ................................................................................................7

1.4 DEFINITIONS AND SPECIFICATION PRINCIPLES .............................................................7

2 RELATED DOCUMENTS........................................................................................8

2.1 APPLICABLE DOCUMENTS ..........................................................................................................8

2.2 REFERENCE DOCUMENTS..........................................................................................................9

3 TRANSMISSION NETWORK OVERVIEW – COMMON ASPECTS TO ALL INTERFACES................... 10

3.1 SDH/SONET AND APS..........................................................................................................11

3.2 ATM ......................................................................................................................................12

3.2.1 VP/VC structure.............................................................................................................12

3.2.2 IMA ................................................................................................................................13

3.2.2.1 General description ........................................................................... 13

3.2.2.2 Defence in case of IMA link failure ......................................................... 15

3.2.3 Fractional E1/T1 [GLOBAL MARKET] ..........................................................................17

3.2.4 PNNI..............................................................................................................................18

3.2.5 Connection to an AAL-2 switch .....................................................................................18

3.2.6 VP Shaping [USA MARKET].......................................................................20

3.2.6.1 Introduction .................................................................................... 20

3.2.6.2 Dynamic distribution of UL Traffic.......................................................... 22

3.2.6.3 LINK SHAPING .................................................................................. 23

3.2.6.4 Itf-B LINK........................................................................................ 25

3.2.6.5 Aggregate Bandwidth ......................................................................... 26

3.2.6.6 Upgrade and fallback ......................................................................... 26

3.2.7 Ethernet Passthrough OAM VCC..................................................................................27

3.3 IP ..........................................................................................................................................27

3.4 UTRAN SHARING ...................................................................................................................28

4 IUB SPECIFIC ASPECTS...................................................................................... 29

4.1 SYNCHRONIZATION ASPECTS ...................................................................................................29


4.1.1 Introduction....................................................................................................................29

4.1.2 Synchronization of DL data ...........................................................................................34

4.1.3 Synchronization of UL data ...........................................................................................36

4.2 HYBRID CONFIGURATION, TNL MIXITY [GLOBAL MARKET].....................................................37

5 IU-CS/PS SPECIFIC ASPECTS ............................................................................... 38

5.1 NGN ARCHITECTURE ..............................................................................................................38

5.2 IU-FLEX ..................................................................................................................................39

6 IUR SPECIFIC ASPECTS...................................................................................... 40

7 FUNCTIONAL DESCRIPTION................................................................................ 41

7.1 FRAMEWORK ..........................................................................................................................41

7.1.1 Principle.........................................................................................................................41

7.1.2 Bandwidth Pool Capacity ..............................................................................................42

7.1.2.1 ATM case ........................................................................................ 42

7.1.2.2 IP case ........................................................................................... 42

7.1.2.3 BP capacity change............................................................................ 42

7.1.3 CAC...............................................................................................................................43

7.1.3.1 AAL2 and IP CAC ............................................................................... 43

7.1.3.2 CAC on Iub interface .......................................................................... 44

7.1.3.3 CAC on Iur/Iu-CS interfaces ................................................................. 45

7.1.3.4 CAC on Iu-PS interface........................................................................ 45

7.1.3.5 Support for Transport Bearer Replacement on SRNC (FRS 30091) .................... 45

7.1.4 ALCAP introduction on Iub............................................................................................46

7.1.5 Congestion Control .......................................................................................................46

7.1.5.1 Bandwidth Pool congestion control ........................................................ 46

7.1.5.2 25.902 mechanism, DL and UL case [GLOBAL MARKET] ................................ 49

7.2 GUARANTEED BIT RATE/MIN BR OVER HSDPA AND FAIR SHARING OF RESOURCES....................64

7.2.1 Guaranteed Bit Rate for Streaming traffic over HSDPA ...............................................65

7.2.2 Min Bit Rate for Interactive/ Background Traffic over HSDPA......................................66

8 ABBREVIATIONS AND DEFINITIONS ...................................................................... 67

8.1 ABBREVIATIONS ......................................................................................................................67

8.2 DEFINITIONS ...........................................................................................................................72


1 INTRODUCTION

1.1 OBJECT

UA06.0 is a release in which a large number of transport features are introduced, in order:

- to cope with new transport media (support of IP: 33334, 33365) and improve the management of transport resources (Iub bandwidth pools resiliency: 34125, HSUPA and HSDPA traffic separation on Iub: 34054, transport bearer replacement on SRNC: 30091),

- to improve Iub interface openness, by better compliance with the standard (3GPP compliant Iub Alcap: 28018).

Several features are also introduced that combine actions in the radio and transport domains to provide the desired end-to-end service:

- new services (Guaranteed bit rate over HSDPA: 29804),

- improvements on quality of service management, priority management and congestion management (fair sharing between HSDPA and DCH: 33694, HSPA congestion control: 33367 and 33332, UTRAN sharing: 34105).

A number of these new features rely on the improved concept of bandwidth pools: 34202, and the

advanced QoS transport framework: 23479.

SS7 Dual Stack support over the Iur interface (34220) is also introduced in UA06.0. This provides the ability to support the ANSI SS7 stack over an Iur interface to neighbouring RNCs whilst simultaneously supporting an ITU stack on other Iur interfaces. This feature also introduces support of the ANSI SS7 stack over the Iu-PS interface. ANSI over Iu-CS is not supported.

This document takes the opportunity of the introduction of all these new features to provide a global description of the UTRAN transmission network architecture. Rather than giving a description strictly coupled to each feature, it aims to provide a better overview of how the various features interact to provide the expected services to the operator.

Due to the scope of this document, the table of contents does not quite follow the usual outline of a Functional Note. For instance, there is no chapter on "interfaces" and no O&M section, because these sections are not really relevant here. O&M parameters, counters, alarms and traces, for instance, are described in other, more detailed FNs (with the notable exception of section 7.1.5.2, because the corresponding feature is not described elsewhere). As far as possible, references have been included indicating where to find the complementary information.


It thus also covers aspects that do not specifically evolve in UA06.0, such as Iu-flex and Iur congestion control (34012).

Figure 1-1: UTRAN main logical interfaces (Node Bs connected to their RNC over Iub, RNCs interconnected over Iur, and each RNC connected to the MSC over Iu-CS and to the SGSN over Iu-PS)

1.2 SCOPE OF THIS DOCUMENT

This document applies to the UA06.0 version of Alcatel-Lucent UTRAN.

1.3 AUDIENCE FOR THIS DOCUMENT

This is an external document.

1.4 DEFINITIONS AND SPECIFICATION PRINCIPLES

The present document addresses several markets, with potentially different behaviours in each. The definitions of "Global Market" and "USA Market" are:


Global Market: customers other than those belonging to the USA Market defined below.

USA Market: customers with a UTRAN where the Alcatel-Lucent 939X Node B (formerly the Lucent Flexent Node B) is deployed.

For the purpose of the present document, the following notations apply:

[Global Market] This tagging of a word indicates that the word preceding the tag "[Global

Market]" applies only to the Global Market. This tagging of a heading indicates that the heading

preceding the tag "[Global Market]" and the section following the heading applies only to the Global

Market. This tagging shall, in particular, be used under the parameter name of parameters specific

to the Global Market.

[USA Market] This tagging of a word indicates that the word preceding the tag "[USA

Market]" applies only to the USA Market. This tagging of a heading indicates that the heading

preceding the tag "[USA Market]" and the section following the heading applies only to the USA

Market. This tagging shall, in particular, be used under the parameter name of parameters specific

to the USA Market.

[Global Market - …] This tagging indicates that the enclosed text following the "[Global Market -"

applies only to the Global Market. Multiple sequential paragraphs applying only to Global Market are

enclosed separately to enable insertion of USA Market specific (or common) paragraphs between the

Global Market specific paragraphs.

[USA Market - …] This tagging indicates that the enclosed text following the "[USA Market - "

applies only to the USA Market. Multiple sequential paragraphs applying only to USA Market are

enclosed separately to enable insertion of Global Market specific (or common) paragraphs between

the USA Market specific paragraphs.

Text that is not identified via one of the above tags is common to all markets.

2 RELATED DOCUMENTS

2.1 APPLICABLE DOCUMENTS

Ref. # Document Identifier Document Title

[A1] 3GPP TS 22.105 (ed. 6.4.0) Services and service capabilities

[A2] 3GPP TS 23.107 (ed. 6.4.0) Quality of Service (QoS) concept and architecture

[A3] 3GPP TS 23.236 (ed. 6.3.0) Intra-domain connection of Radio Access Network (RAN) nodes to multiple Core Network (CN) nodes

[A4] 3GPP TS 25.402 (ed. 6.5.0) Synchronisation in UTRAN Stage 2

[A5] 3GPP TS 25.413 (ed. 6.c.0) UTRAN Iu interface RANAP signalling

[A6] 3GPP TS 25.426 (ed. 6.5.0) UTRAN Iur and Iub interface data transport & transport signalling for DCH data streams

[A7] 3GPP TS 25.427 (ed. 6.8.0) UTRAN Iub/Iur interface user plane protocol for DCH data streams

[A8] 3GPP TR 25.853 (ed. 4.0.0) Delay Budget within the Access Stratum

[A9] 3GPP TR 25.902 (ed. 6.1.0) Iub/Iur congestion control

[A10] (ATM Forum) AF-PHY-0086.001 Inverse Multiplexing for ATM (IMA) Specification Version 1.1

2.2 REFERENCE DOCUMENTS

Ref. # Document Identifier Document Title

[R1] UMT/IRC/APP/011676 Iu Transport Engineering Guide

[R2] UMT/IRC/APP/0164 Iub Transport Engineering Guide

[R3] UMT/SYS/DD/023092 IP in UTRAN FN

[R4] UMT/SYS/DD/016708 UTRAN Iu-flex Functional Note

[R5] UMT/IRC/APP/005184 Addressing Transport Engineering Guide

[R6] UMT/SYS/DD/023094 UTRAN sharing FN

[R7] UMT/SYS/DD/023235 Bandwidth Pools FN

[R8] UMT/SYS/DD/013319 HSDPA system specification

[R9] UMT/SYS/DD/018826 CAC FN

[R10] UMT/SYS/DD/018827 E-DCH system specification


[R11] UMT/SYS/DD/000128 IRM

3 TRANSMISSION NETWORK OVERVIEW – COMMON ASPECTS TO ALL INTERFACES

3GPP defines 2 alternatives for the UTRAN transmission: ATM transport network type (which is

largely deployed) and IP transport network type (which has been introduced more recently in the

standard). ATM infrastructure has been provided since the first UTRAN release, whereas IP

infrastructure is provided (to some extent) in UA06.0.

ATM infrastructure relies on a transmission layer that encompasses two sublayers:

- The PMD (Physical Medium Dependent) sublayer: the type of medium: microwave, copper (including various DSL possibilities) or optical fiber,

- The TC (Transmission Convergence) sublayer, which covers:
  - Frame format,
  - Payload field,
  - Overhead fields: synchronization, error detection, etc.

Modems, or more generally transmission nodes, ensure compatibility with the various physical interfaces, e.g. microwave or ADSL.

Two possible transmission formats may co-exist within a telecommunications network: PDH and/or

SDH/SONET.

PDH (Plesiochronous Digital Hierarchy) is used for copper medium or microwave, whereas

SDH/SONET (Synchronous Digital Hierarchy / Synchronous Optical NETwork) is commonly used for

optical fiber medium but may also be used for copper medium and microwave.

Several link levels are specified by ITU-T for PDH, in particular E1 (2,048 kbit/s). ANSI specifies in particular T1 (1,544 kbit/s).


SDH/SONET links typically provide higher throughput than PDH links. PDH links may be encapsulated

within SDH/SONET containers.

PDH/SDH formats are described in [R1].

IP infrastructure is built on top of Ethernet interfaces at RNCs, Node B and CN nodes. Routers

ensure compatibility of the Ethernet interface to other interfaces when needed.

For ATM, the RNC supports SDH STM-1 / SONET OC3 interfaces (155.52 Mbit/s throughput, of which 149.76 Mbit/s is usable at ATM level). The Node B supports (depending on the configuration) E1 or T1 interfaces.

For IP, the RNC offers Gigabit Ethernet interfaces, and the Node B 10/100 Base-T Ethernet

interfaces.

3.1 SDH/SONET AND APS

APS (Automatic Protection Switching) is supported in the RNC. It provides a protection mechanism for both the line card and the transmission line between the RNC and its peer. APS may be used between two neighbouring RNCs, between the RNC and the Core Network nodes, or between the RNC and an SDH concentrator on the Iub interface.

If a failure is detected on the line interface (based on received signal information and quality), the transmission path is switched from the working line to the protected line. This is done by the receiving side, as both paths transmit the same information at the source.

The protected line can be configured either on the same line card as the working line or, preferably, on a different one.

APS can be configured either in unidirectional mode (APS switching only occurs in the Node

detecting the failure), or in bidirectional mode (APS switching first occurs in the Node detecting the

failure, which then informs the remote side of the failure, which then switches as well).

APS can be configured either in revertive or non-revertive mode. In revertive mode, the system switches back to the initial configuration once there is no longer any problem on the path that provoked the switch.

The APS is normally configured in bidirectional/revertive mode.
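To make the switching modes concrete, the following Python sketch models the revertive/bidirectional behaviour in a simplified, hypothetical way (class and callback names are illustrative, not the RNC implementation):

# Illustrative sketch of 1+1 APS switch-over logic (not the RNC implementation).
# Both lines carry the same signal; the receiving side selects one of them.

class ApsPair:
    def __init__(self, revertive=True, bidirectional=True):
        self.revertive = revertive          # switch back once the working line recovers
        self.bidirectional = bidirectional  # inform the peer so that it switches too
        self.active = "working"             # currently selected line

    def on_line_failure(self, line, notify_peer):
        if line == self.active:
            self.active = "protected" if line == "working" else "working"
            if self.bidirectional:
                notify_peer(self.active)    # remote end aligns on the same line

    def on_line_recovery(self, line, notify_peer):
        if self.revertive and line == "working" and self.active == "protected":
            self.active = "working"         # revertive mode: return to the initial line
            if self.bidirectional:
                notify_peer(self.active)

In non-revertive mode the pair would simply stay on the protected line after recovery, which is the only behavioural difference in this sketch.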

APS features are described in [R1].


3.2 ATM

The UTRAN supports ATM infrastructures. ATM is compliant with ITU-T I.361.

ATM is a connection-oriented protocol, using a hierarchical structure of connections: Virtual Path

Connections (VPCs) contain Virtual Channel Connections (VCCs). A physical link may carry one or

several VPCs.

Both the User-Network Interface (UNI) and the Network-Network Interface (NNI) are supported.

3.2.1 VP/VC STRUCTURE

Some VCCs must be configured for control and transport. There is not really any constraint for

grouping these VCCs in one or several VPCs. However, it is usually easier for the operator to limit

the number of connections to handle in the transmission network, and this can be better achieved

by configuring a limited number of VPCs, inside which the VCCs remain transparent to a VP

transmission backbone.

On the Iub interface, the number of user plane VCCs is limited to 16 per Node B. Before UA06.0, 3

transport QoS VCCs were used, usually mapped on VBR-RT, VBR-nRT and UBR ATM Transfer

Capabilities (ATC) (see ATM forum or ITU-T recommendation I.371 for definition of ATC), and UTRAN

traffic was mapped on these 3 VCC QoS classes:

- Delay-Sensitive (DS) data, corresponding to conversational traffic on DCH, was mapped on VBR-RT,

- Non Delay-Sensitive (NDS) data, corresponding to interactive and background traffic on DCH, was mapped on VBR-NRT,

- HSPA traffic (interactive and background) was mapped on UBR,

- Streaming traffic could be mapped to either DS or NDS VCCs.

Control plane VCCs (up to 8 are possible per Node B) were mapped on VBR-RT, except the O&M VCC, which was mapped on UBR.

The implementation relied on strict priority of R99 Delay-Sensitive (conversational) traffic over R99 Non-Delay-Sensitive (interactive or background) traffic, and of the latter over HSPA I/B traffic (i.e. in the RNC and in the ATM VC cross-connects, UBR VCC data is only sent in the holes left by VBR-NRT VCC data, and VBR-NRT VCC data is only sent in the holes left by VBR-RT VCC data).

In UA06.0, an additional QoS is introduced to permit the introduction of guaranteed bit rate traffic

flows on HSDPA (see section 7.2). 4 QoS VCCs are then configured for user plane:

- QoS0 (normally mapped on CBR) for conversational traffic on DCH, applicable for traffic coming from Iu-CS,

- QoS1 (normally mapped on VBR-RT) for streaming on DCH or HSDPA, applicable for traffic coming from either Iu-CS or Iu-PS,


- QoS2 (normally mapped on VBR-NRT) for interactive/background on DCH, applicable for the Iu-PS domain,

- QoS3 (normally mapped on UBR+) for interactive/background on HSDPA, applicable for the Iu-PS domain.

The introduction of this additional VCC can be done as part of the migration to UA06.0 (creation of

a VC internally declared with a null ECR, so as not to change the ECRs of the other VCs, in order to

start with the same configuration and services in UA06.0 as in UA05.1). The implementation still relies on strict priority of QoS0 over QoS1 over QoS2 over QoS3.
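The UA06.0 mapping and the strict-priority serving order can be summarised with a short Python sketch (illustrative only; the dictionary simply restates the mapping above, and the scheduling function is a simplification of the strict-priority behaviour):

# Illustrative mapping of UA06.0 user-plane traffic to the 4 QoS VCCs (see text above).
QOS_VCC_MAPPING = {
    "QoS0": {"atc": "CBR",     "traffic": "conversational on DCH (Iu-CS)"},
    "QoS1": {"atc": "VBR-RT",  "traffic": "streaming on DCH or HSDPA (Iu-CS or Iu-PS)"},
    "QoS2": {"atc": "VBR-NRT", "traffic": "interactive/background on DCH (Iu-PS)"},
    "QoS3": {"atc": "UBR+",    "traffic": "interactive/background on HSDPA (Iu-PS)"},
}

def next_cell(queues):
    """Strict priority: QoS0 over QoS1 over QoS2 over QoS3.

    'queues' maps each QoS class to a list of pending cells; lower classes are
    only served in the holes left by the higher ones."""
    for qos in ("QoS0", "QoS1", "QoS2", "QoS3"):
        if queues.get(qos):
            return queues[qos].pop(0)
    return None  # nothing to send in this cell slot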

The recommended configurations are described in [R1] and [R2].

If the operator requires strict compliance with the traffic contract, a VPT (Virtual Path Termination) should be configured. Inside a VPT, all VCCs are handled commonly in terms of traffic management. In particular, they are shaped as a whole, so that the VCCs can share the VPT bandwidth instead of each requiring its own bandwidth, which is more efficient.

VCC Bandwidth can be declared independently for UL and DL, and both directions will be considered

by CAC. In particular, if the operator wants to force separation of E-DCH traffic and HSDPA traffic

on 2 different VCCs, he/she should configure a null UL BW for the HSDPA VCC and a null DL BW for

the E-DCH VCC.

The ALCAP, combined NodeBCP + CCP NBAP and UP VCCs may optionally be supported on the same Virtual Path (same VPI). The VCCs may alternatively have different VPIs; the implementation does not apply any restrictions as to which VPI values are provisioned.

In the case where all VCCs for a given Iub link (i.e. combined NodeBCP+CCP, ALCAP and user plane VCCs) are provisioned in the same Virtual Path, this Virtual Path is terminated on the Node B and on the External ATM Switch associated with the RNC (as opposed to the RNC INode itself). Path shaping in the downlink for the Virtual Path is done on the RNC's External ATM Switch, which is also used to perform VC switching. This implies that the Virtual Path does not extend as far as the RNC INode, and that the RNC INode only sees the individual VCCs and is therefore provisioned to simply terminate the individual VCCs (i.e. "PVC" switching mode).

3.2.2 IMA

3.2.2.1 GENERAL DESCRIPTION

Inverse Multiplexing for ATM (IMA) is an optional intermediate layer between the physical layer and the ATM layer, defined by the ATM Forum [A10]. It provides to the ATM layer a bit rate which is (almost) the sum


of the bandwidths of the physical links building the considered interface. It is used in particular on

Iub interface, where the commonly used E1s or T1s individually provide a low bandwidth compared

to the rates that are expected on the UMTS radio, in particular with HSPA services. For instance

with 3 E1s, instead of providing 3 separate ATM layers at 2 Mbit/s, a single ATM layer at (almost) 6

Mbit/s can be used thanks to IMA, thus allowing for individual connections at up to nearly 6 Mbit/s,

and also improving the sharing and thus use of the available bandwidth.
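The aggregate rate offered to the ATM layer can be approximated with the Python sketch below (an estimate under assumed figures: 30 usable 64 kbit/s timeslots per E1 for ATM, i.e. 1920 kbit/s of payload, and roughly one ICP cell per IMA frame of M cells on each link; the exact values depend on the framing and on the configured IMA frame length):

# Rough estimate of the ATM-layer bandwidth offered by an IMA group (illustrative only).
ATM_CELL_BITS = 53 * 8  # one ATM cell, header included

def ima_group_cell_rate(num_links, link_payload_kbps=1920.0, ima_frame_len=128):
    """Approximate user cell rate of an IMA group, in cells/s.

    link_payload_kbps: assumed ATM payload of one E1 (30 timeslots * 64 kbit/s);
                       a T1 would use a different value.
    ima_frame_len:     IMA frame length M; about one ICP cell per M cells is overhead."""
    cells_per_link = link_payload_kbps * 1000 / ATM_CELL_BITS
    return num_links * cells_per_link * (ima_frame_len - 1) / ima_frame_len

# Example: 3 E1s yield roughly 13500 cells/s, i.e. close to 6 Mbit/s at ATM level.
if __name__ == "__main__":
    rate = ima_group_cell_rate(3)
    print(round(rate), "cells/s ~", round(rate * ATM_CELL_BITS / 1e6, 2), "Mbit/s")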

IMA provides link redundancy: the operator may configure the interface such that the number of E1/T1 links on the interface is higher than what is really needed, and thus keep the nominal capacity on loss of one (or more) E1/T1. Alternatively, if the interface is configured to its target nominal capacity, the loss of one E1/T1 reduces the available bit rate without interrupting the service, contrary to what would happen with separate E1/T1 links if the interface control channels were carried on the lost E1/T1. To work efficiently, this feature requires a proper defence mechanism in case of loss of an E1/T1, as described in the next section.

The principle of IMA is to carry ATM cells successively on its various operational links. The IMA

Control Protocol (ICP) supervises the state of the links and their synchronization. It is possible that a

different delay is experienced by ATM cells depending on the link on which they are carried. IMA is

designed to compensate this differential delay up to a certain limit, which allows using links that

follow different physical paths. If this maximum difference is exceeded on one link, the link will be

removed from service, and if the delay comes back in the supported range, the link will be put back

into service. When running over E1/T1 physical links, the IMA interface should compensate for link

differential delays of at least 25 milliseconds (requirement 75 of the IMA standard [A10]). However,

a lower value can be configured. For the UMTS application, which carries delay-sensitive services,

such a high value is not advisable, because it would deteriorate the voice quality. In Alcatel-Lucent

infrastructure, the default and recommended maximum differential delay is configured to 2 ms.

This value can be increased by the operator if desired. A value of 5 ms is acceptable, but the AMR

QoS might be degraded for higher values.

IMA is supported by the Node B, but is not terminated in the RNC, as the RNC does not directly

support E1 or T1 interfaces.

IMA is described in [R2].

Note that IMA is only optional. It is possible to support n E1s without IMA, even for cells supporting

HSPA. It is not optimal, but allows installing UTRAN when the transport network does not support

IMA. Each E1 can be configured to carry either R99 traffic only (plus control traffic), or HSPA traffic

only, or a mix of both.

It is also possible to have a mixed configuration of IMA + n E1s, or several IMA groups. For instance,

it can be interesting for the operator to use this physical separation to carry R99 and control traffic

(with strong timing constraints and high reliability requirements) on a “secure” medium and carry

HSPA traffic (less constraining) on a cheaper medium.


Figure 3-1: Examples of E1/T1 Iub configurations with or without IMA group(s) (Node Bs connected to the RNC through an ATM or SONET/SDH backbone; the RNC side uses STM-1/VC4 or OC-3/VC3, the Node B side up to 8 T1/E1 links in total, either without IMA, with one IMA group, or with several IMA groups, possibly carried in VC11/VC12 containers)

3.2.2.2 DEFENCE IN CASE OF IMA LINK FAILURE

As indicated above, the RNC does not terminate the IMA group. According to IMA specification, the

loss of one or several E1/T1 links within an IMA group does not lead to the loss of the ATM

connections carried on that group (depending however on the configurable parameter defining the


minimum number of links that need to be working to consider the IMA group as operational). The loss of one or several E1/T1 links does, however, clearly impact the interface capacity. The RNC therefore needs to react in terms of CAC and congestion control to take into account the actually available bandwidth; otherwise, data are likely to be lost in the equipment terminating the IMA, which would suddenly face an excess of incoming traffic from the RNC. The RNC thus first needs to be informed about the failure, and no standard mechanism exists for that.

There are two different schemes to report IMA link failures, depending on the type of Node B, i.e. the algorithms used with the iBTS and the OneBTS are different. This is detailed in the following subsections.

3.2.2.2.1 IBTS [GLOBAL MARKET]

In UA06.0 and before, the Node B informs the RNC of the loss of one or several E1/T1 links by sending F5 AIS/RDI alarms on a number of VCs representing a capacity equivalent to the capacity lost with the failed E1/T1. The Node B is configured with an ordered list of VCCs that should be failed on loss

of E1/T1 link. When receiving the AIS or RDI, the RNC will update the interface capacity (or BP

capacity: see section 7.1.2.3) accordingly. This will impact CAC and congestion control (see section

7.1.5.1). Note that in some cases, the E1s that are failed carry cell control information. In that

case, the corresponding cells temporarily fail, but the RNC reestablishes these control channels on

the remaining VCCs.

Because of this IMA link failure strategy, there is normally 1 VCC of each QoS (except HSPA) defined

on each E1. Before UA06.0, this means one VCC for DS data + one VCC for NDS data per E1. In

addition, there is one VCC for HSDPA. In UA06.0, there is normally one VCC for QoS0, one VCC for QoS1 and one VCC for QoS2 per E1, plus one VCC for HSPA for the Iub interface. Other configurations are

possible, e.g. with 2 VCCs for HSPA in case separation of HSDPA and E-DCH traffic flows on 2

different VCCs is required.

The IMA defence mechanism must thus be adapted in order to cope with the maximum number of

UP VCCs.
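A simplified sketch of this defence principle is given below (illustrative only, not the iBTS algorithm; the ordered VCC list and the declared capacities are hypothetical examples):

# Illustrative sketch of the IMA link-failure defence principle on the iBTS side.

def select_vccs_to_fail(ordered_vccs, lost_kbps):
    """Walk the configured, ordered VCC list and pick VCCs whose declared bandwidth
    roughly covers the capacity lost with the failed E1/T1 links.
    Each entry is (vcc_id, declared_kbps)."""
    failed, covered = [], 0
    for vcc_id, declared_kbps in ordered_vccs:
        if covered >= lost_kbps:
            break
        failed.append(vcc_id)          # an F5 AIS/RDI would be raised on this VCC
        covered += declared_kbps
    return failed

# Example: one E1 lost (~1920 kbit/s of payload) out of a 3-E1 IMA group.
ordered = [("vcc-q2-e1a", 700), ("vcc-q1-e1a", 700), ("vcc-q0-e1a", 600), ("vcc-q2-e1b", 700)]
print(select_vccs_to_fail(ordered, 1920))   # -> the first three VCCs are failed

On reception of the corresponding AIS/RDI, the RNC would subtract the declared bandwidth of the failed VCCs from the interface or bandwidth pool capacity, as described above.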

Refer to [R7] for more details.

3.2.2.2.2 ONEBTS [USA MARKET]

The OneBTS does not support F5 OA&M cells but uses a proprietary ALCAP message instead to

inform the RNC about IMA link failures and capacity changes of the Iub link. The advantage of using

a proprietary ALCAP message compared to sending F5 AIS/RDI alarms is that there are no

restrictions on the number of E1/T1 links and the number of User Plane VCCs.

When the OneBTS detects a change in the IMA configuration, e.g. caused by an E1/T1 link failure it

sends an ALCAP UBL message (unblock) to the RNC. This message is extended by the Link

Characteristics (LC) parameter. The RNC acknowledges the UBL message with a UBC message which does not include the LC parameter, i.e. the UBC message is fully standards compliant.

The LC parameter is normally not present in the UBL message and thus it is used as an escape to

indicate the proprietary application. Without the additional LC parameter UBL and UBC will still

work as defined by the ALCAP standard in Q.2630.1.


The Link Characteristics (LC) parameter used to indicate the newly available bandwidth is composed

of several fields. The meaning of these fields is given below:

- Maximum CPS-SDU Bit Rate in forward direction: UL bandwidth of the full IMA group
- Maximum CPS-SDU Bit Rate in backward direction: DL bandwidth of the full IMA group
- Average CPS-SDU Bit Rate in forward direction: available UL bandwidth after link failure
- Average CPS-SDU Bit Rate in backward direction: available DL bandwidth after link failure

Note that when reporting the bandwidth after all links have been restored, the average and maximum rates will be equal and will contain the configured IMA bandwidth.

The granularity of the CPS-SDU bit rate in the forward and the backward direction is 1000 bit/s

rather than 64 bit/s as is defined for the LC parameter in other messages by Q.2630.2. This yields a

supported bandwidth range of 0 to 65,535 kbit/s. The data rate in LC shall be the raw data rate

available, i.e. the rate including ATM headers, which is (cell rate * 53 * 8 / 1000), where cell rate is

the user cell rate and does not include IMA cells.
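A minimal sketch of the LC field computation (illustrative; the function and field names are hypothetical and not taken from the OneBTS code, only the cell rate * 53 * 8 / 1000 conversion and the field semantics come from the description above):

# Illustrative computation of the Link Characteristics (LC) fields carried in the
# proprietary ALCAP UBL message (granularity 1000 bit/s, raw rate including ATM headers).

def lc_rate_kbps(user_cell_rate):
    """Raw data rate for the LC parameter: cell_rate * 53 * 8 / 1000 (IMA cells excluded)."""
    return user_cell_rate * 53 * 8 // 1000

def build_lc(full_ul_cells, full_dl_cells, avail_ul_cells, avail_dl_cells):
    return {
        "max_cps_sdu_fwd": lc_rate_kbps(full_ul_cells),    # UL bandwidth of the full IMA group
        "max_cps_sdu_bwd": lc_rate_kbps(full_dl_cells),    # DL bandwidth of the full IMA group
        "avg_cps_sdu_fwd": lc_rate_kbps(avail_ul_cells),   # available UL after link failure
        "avg_cps_sdu_bwd": lc_rate_kbps(avail_dl_cells),   # available DL after link failure
    }

# Example: a 3-link IMA group losing one link; once all links are restored,
# the "average" and "maximum" fields carry the same (full) values again.
print(build_lc(13500, 13500, 9000, 9000))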

The UBL message including the LC parameter is sent on all User Plane VCCs. This allows the RNC to

figure out which bandwidth pools are concerned (see note 1).

There is hysteresis in sending bandwidth reduction and bandwidth increase messages, to protect against message floods in case of unstable, toggling links. Also, after initialisation, the OneBTS sends a UBL message to indicate the actually available bandwidth to the RNC.

Upon receiving bandwidth reports on the VCCs of an IMA group, the RNC shall adapt CAC and congestion management using the reported figures. If reducing the data rate of user calls does not free sufficient capacity to get out of Iub congestion, the RNC may eventually need to pre-empt calls, in which case the existing pre-emption procedures apply.

3.2.3 FRACTIONAL E1/T1 [GLOBAL MARKET]

At the opposite end of the spectrum from IMA, very small configurations may require sharing one E1 between two Node Bs. This can be achieved e.g. by using fractional E1/T1, which consists in mapping the ATM cells of the two Node Bs into two groups of time slots inside the T1 or E1. The data for each Node B are then separated at PDH level. This is supported in the line card of the Node B, which can realise drop and insert. The RNC does not see the fractional structure (but of course needs to use adapted traffic descriptors, as in any configuration).
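As a simple numerical illustration of the split (indicative only, assuming 64 kbit/s timeslots and hypothetical timeslot groups, and ignoring framing details):

# Illustrative fractional E1 split between two Node Bs sharing the same link.
TIMESLOT_KBPS = 64

def fractional_split(group_node_b1, group_node_b2):
    """Each Node B gets its own group of timeslots; the returned values are the
    PDH-level payloads (kbit/s) available to each Node B for its ATM cells."""
    return (len(group_node_b1) * TIMESLOT_KBPS, len(group_node_b2) * TIMESLOT_KBPS)

# Example: timeslots 1-20 for the first Node B, 21-30 for the second one.
print(fractional_split(range(1, 21), range(21, 31)))   # -> (1280, 640) kbit/s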

Fractional E1/T1 feature is described in [R2].

Note 1: In the AT&T network there is currently only a single bandwidth pool.


3.2.4 PNNI

Private Network Network Interface (PNNI) refers to both a topology and a set of protocols and is specified by the ATM Forum. The purpose of PNNI is to simplify configurations and improve the reliability and availability of the network. PNNI includes in particular routing protocols, which provide information to the PNNI nodes to let them decide on how and where to send ATM cells.

A PNNI network can be either flat or hierarchical:

A flat PNNI network is composed of one single routing area. As a consequence, all nodes within the network receive the same routing information and have knowledge of the complete network.

A two-layer PNNI hierarchy is split into several routing areas. A routing area is called a peer group. The routing information is flooded only within peer groups.

Besides the routing protocol, PNNI also specifies a signalling protocol, in charge of the setup / maintenance / release of the ATM connections. Specific VCCs are pre-defined by the standard to carry the PNNI routing and signalling protocols.

PNNI features are described in [R1].

3.2.5 CONNECTION TO AN AAL-2 SWITCH

The RNC is often connected by ATM connections to the CN nodes, neighbouring RNCs and Node Bs.

However, on Iu-CS and Iur interfaces, it is also possible to connect the RNC to one or several

intermediate AAL-2 switches. For instance, this AAL-2 switch can be part of a MSC or MGW (see

section 5.1 for overview of MGW meaning).

This is mostly handled by routing tables inside the RNC and is part of the feature FRS 26614 - RNC

implements QAAL2 Address Translation.

Redundancy is achieved thanks to FRS 26521 - RNC implements “QAAL-2 alternate routing feature”

on Iu and Iur.

The objective of the Q AAL2 alternate routing feature is to offer:

- Protection of Iu/Iur interfaces against transmission network or adjacent AAL-2 switch failure (see

Figure 3-2),

- Load balancing over AAL-2 routes.


A remote AAL-2 endpoint node is either a MGW (or MSC) or a neighbor RNC. An AAL-2 endpoint node

is identified by one AAL-2 service Endpoint Address (A2EA), i.e. the address used at AAL-2 signalling

layer, which is used to control the establishment of AAL-2 bearers.

Example: a RAB assignment request sent by the MSC server contains A2EA2 as Transport Layer Address. Due to connectivity loss between the RNC and MGW2, the RNC sends a QAAL-2 ERQ to MGW1, which forwards it to MGW2. This assumes a full mesh between MGWs.

Figure 3-2: Q AAL-2 alternate routing (the RNC is connected to MGW1, reached via ALCAP point code DPC1 and addressed by A2EA1, and to MGW2, reached via ALCAP point code DPC2 and addressed by A2EA2)

The feature consists in filling the RNC AAL2 address translation table with at least two routes. Thanks to address prefixes, it is possible to use a hierarchical structure of AAL-2 addresses, which for instance allows interworking with other vendors that may allocate the AAL-2 addresses indicated by their RNC according to the Node B that is used.
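A minimal sketch of the translation-table principle follows (hypothetical table contents, addresses and field names; longest-prefix matching and the choice between routes are simplified compared to the RNC behaviour):

# Illustrative AAL2 address translation with prefix matching and alternate routes.
# Each A2EA prefix maps to an ordered list of ALCAP routes (adjacent point codes).

TRANSLATION_TABLE = {
    "A2EA2": ["DPC2", "DPC1"],   # primary route to MGW2, alternate via MGW1 (Figure 3-2)
    "A2EA":  ["DPC1"],           # shorter prefix: default route for this address family
}

def select_route(a2ea, route_is_available):
    """Longest-prefix match, then the first available route (real alternate routing
    and load balancing policies are richer than this)."""
    for prefix in sorted(TRANSLATION_TABLE, key=len, reverse=True):
        if a2ea.startswith(prefix):
            for dpc in TRANSLATION_TABLE[prefix]:
                if route_is_available(dpc):
                    return dpc
    return None

# Example: connectivity to MGW2 is lost, so the ERQ for A2EA2 is sent towards MGW1.
print(select_route("A2EA2", lambda dpc: dpc != "DPC2"))   # -> "DPC1"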

Thanks to this feature, it is possible to support any configuration, such as:

- Direct connectivity (no AAL2 switches):
  - one ALCAP Point Code (PC) for Iu-CS,
  - several ALCAP PCs for Iu-CS,
  - one ALCAP PC per Iur,
  - several ALCAP PCs per Iur.

- Connection through AAL2 switches:
  - one AAL2 switch = one ALCAP PC,
  - several AAL2 switches,
  - dedicated AAL2 switches per interface,
  - AAL2 switches common to several interfaces as well as interface types.

- Peer nodes with one A2EA.

- Peer nodes with several A2EAs.

This feature is described in [R1]. Addressing is described in more detail in [R5].

3.2.6 VP SHAPING [USA MARKET]

3.2.6.1 INTRODUCTION

Before release UA06 the Iub used shared PVCs for user plane traffic, i.e. all traffic classes used the same transport channel. The advantage of this approach is that no bandwidth is reserved for a given traffic class in a way that could not be used by other services when the corresponding service does not require it. The PVCs carry data as scheduled by the sender, i.e. the RNC in the downlink and the Node B in the UL direction.

When the OneBTS is connected to the RNC 9370 the shared VCC concept no longer applies. Instead there are dedicated VCCs for the different services, which are:

- "Conversational" (Conversational Voice & CS Data)

- "Strm" (R99 & HSPA Streaming)

- "R99 I/B" (R99 Interactive 1,2,3 & Background)

- "HSPA I/B" (HSPA Interactive 1,2,3 & Background)

All VCCs are in the same Virtual Path (VP) and each of them can use the whole bandwidth,

depending on its priority. Low priority VCCs only get bandwidth if it is not needed by the high priority ones. The scheme is highly dynamic in offering bandwidth to the services that actually need it. Since it is not required to statically allocate and reserve network resources for certain services, it guarantees the efficient utilisation of transmission links.

In the downlink the RNC manages traffic by applying the concept of sharing bandwidth between VCs

in bandwidth pools. According to priority it transmits data over the Iub VCs of a bandwidth pool as

long as there is no congestion. For more information on Bandwidth Pools, see the Bandwidth Pools FN [R7].

For streaming services, the CAC in the RNC reserves bandwidth for the GBR, corrected by an overbooking factor. If overbooking is too aggressive, it may be necessary to discard data.

Conversational services are not subject to congestion management; call admission control has to make sure that a link will not be congested even when these services operate at maximum capacity.

For more information about congestion control procedures refer to section 7.1.5 of this document.

Finally, the ATM switch co-located with the RNC performs traffic shaping at path level to make the downlink data stream compliant with the QoS parameters of the virtual path (VP), covering all VCs of the VP.

In the uplink the Node B has to manage priority and congestion. For the OneBTS it was assumed that

R99 traffic does not cause congestion. Even if there is congestion caused by this type of traffic, the Node B cannot influence the amount of traffic sent by the UEs; only the RNC could change data rates. This is currently only considered during call admission control, where some assumptions about the users' behaviour are made.


The UL traffic of R99 calls is prioritised into two categories:

- A non-segmented channel, which does not need AAL2 segmentation. This is mainly used for voice services and has the highest priority.

- A segmented channel, which is used for all other R99 traffic that needs segmentation at AAL2 level.

Further, there is one channel that is used to transmit E-DCH traffic. It uses AAL2 segmentation and

has the lowest priority, i.e. the corresponding queue will only be served when there is nothing to

transmit for R99 calls. The E-DCH queue is supervised and when its size exceeds a certain threshold

the E-DCH scheduler is informed. Then the scheduler reduces the transmission opportunities of the

UEs until the size of the transmission queue goes below a low water mark.
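The queue supervision can be sketched as a simple high/low watermark mechanism (illustrative only; the thresholds and the scheduler interface are hypothetical and the real interaction with the E-DCH scheduler is richer):

# Illustrative high/low watermark supervision of the E-DCH transmission queue.

class EdchQueueMonitor:
    def __init__(self, high_watermark, low_watermark):
        self.high = high_watermark
        self.low = low_watermark
        self.throttled = False   # whether the E-DCH scheduler has been asked to slow down

    def update(self, queue_size_cells, scheduler):
        if not self.throttled and queue_size_cells > self.high:
            self.throttled = True
            scheduler.reduce_grants()      # fewer transmission opportunities for the UEs
        elif self.throttled and queue_size_cells < self.low:
            self.throttled = False
            scheduler.restore_grants()     # queue has drained below the low water mark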

In the UL, for releases prior to UA06, the OneBTS shapes the traffic stream of each VC individually. Since before this release all user plane traffic used a single shared VC, this VC was designed to use the full bandwidth of the physical link minus the bandwidth needed for signalling and OA&M traffic. The PVCs for the latter do not need a big share of the link capacity, thus most of the bandwidth is used for user plane traffic. Because the VC is used for all services, bandwidth utilisation is dynamic between the different service classes, with prioritisation as described above.

A pace controller is available on the LIU which provides transmission opportunities to each VC.

These transmission opportunities depend on the VCs’ cell rates and burst sizes.

The pace controller is implemented by using a number of transmission slots where each slot provides the opportunity to send a certain amount of ATM cells which depends on the link speed and the number of slots. When data arrive for UL transmission the pace controller looks for the next unused transmission opportunity and queues a number of ATM cells for the particular PVC. The number depends on the VC’s QoS parameters (PCR, SCR, MBS). If there are more ATM cells to be transmitted the pace controller calculates a gap between time slots used to queue data for the particular VC. This results in a data stream compliant to an ATM traffic contract for each ATM PVC.

From the pace controller’s view there is only a single priority in serving all ATM PVCs. Prioritisation is done at a higher layer providing the ATM cells in the appropriate sequence to the pace controller. The following diagram illustrates the operation.

Figure 3-3: Operation of the Pace Controller before UA06 (diagram: the micro channel (P1), segmentation channel (P2) and E-DCH channel (P3), together with the Itf-B, NBAP and ALCAP flows, feed a shared AAL2 rtVBR user plane PVC and AAL5 nrtVBR PVCs; the pace controller serves all VCs with a single priority across transmission slots 1 to n)
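A highly simplified Python sketch of the slot-based pacing described above (illustrative only; the real pace controller on the LIU works with hardware timers and the full set of ATM traffic descriptors):

# Illustrative slot-based pacing of a single PVC (pre-UA06 behaviour, single priority).
# The gap between used slots is derived from the link rate and the PVC's sustained rate.

def slot_gap(link_cell_rate, pvc_scr):
    """Number of transmission slots between bursts for this PVC so that its long-term
    rate does not exceed its SCR (a very rough model of the pace controller)."""
    return max(1, round(link_cell_rate / pvc_scr))

def schedule(cells_to_send, burst_size, gap, num_slots):
    """Assign bursts of ATM cells to transmission slots, leaving gap-sized holes."""
    slots = [0] * num_slots
    slot, remaining = 0, cells_to_send
    while remaining > 0 and slot < num_slots:
        slots[slot] = min(burst_size, remaining)
        remaining -= slots[slot]
        slot += gap
    return slots

# Example: a PVC at roughly one quarter of the link rate uses every 4th slot.
print(schedule(cells_to_send=10, burst_size=2, gap=slot_gap(4000, 1000), num_slots=24))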


3.2.6.2 DYNAMIC DISTRIBUTION OF UL TRAFFIC

Keeping the shaping at VC level would result in wasted bandwidth when the actual traffic does not exactly match the given PVC configuration. For example, as illustrated in the VC shaping part of Figure 3-4 below, if at a given time VC2 needs more bandwidth than configured, it will not get it because the pace controller limits the traffic to the configured rate, even if there is spare capacity on the link because VC1 needs less bandwidth than configured. Vice versa, at another point in time VC1 may need more bandwidth than configured. Again, the spare capacity cannot be used because the pace controller restricts the bandwidth, although spare bandwidth is available. The bandwidth shown in grey in the figure below refers to unused bandwidth.

This behaviour does not apply to UBR PVCs; for such PVCs there is no limitation of the transmitted data other than the speed of the physical link.

Figure 3-4: VC Shaping (diagram: with VC shaping, VC1 and VC2 are each limited to their configured rate and part of the physical link bandwidth remains unused; with VP shaping, the actually used bandwidth is dynamically adapted to the bandwidth required by VC1 and VC2 over time)

On the other hand, when using path shaping (or link shaping), each VC can take the required bandwidth when available. If the aggregate bandwidth exceeds the physical bandwidth, the VC with higher priority can use it and cells of the VC with lower priority will be dropped.

If VCs have the same priority, cells from both will be dropped in case of congestion.

The Node B shall support link shaping in the uplink, i.e. each PVC shall be able to take bandwidth

from others if it is not required there, even if the PVC has lower priority than the one that has

spare capacity.


The term “link shaping” is used here rather than “path shaping” because the chosen solution does

not just consider VCs in the same path but all VCs sharing the physical link.

3.2.6.3 LINK SHAPING

With the change from the shared PVC concept to using multiple PVCs for the different services, VC shaping would limit the traffic sent uplink as defined by the ATM QoS parameters. In particular, the sum of the SCRs of all VCs cannot be greater than the total link bandwidth. Thus the available physical bandwidth needs to be distributed over all VCs. This in turn means that if one service needs more bandwidth than the corresponding PVC can carry, data cannot be sent even if other PVCs have plenty of spare capacity and the total link rate is below the physical speed.

In order to improve link utilisation the uplink shaping behaviour of the OneBTS shall be changed.

Only the total traffic on the link, including all PVCs, shall be limited; the distribution over the individual PVCs shall be flexible and it shall be possible to change it dynamically.

Changing the pace controller's behaviour to support several priority levels for ATM cells makes it possible to control the aggregate rate of the egress data of all PVCs rather than the rate of individual PVCs. This is illustrated in the figure below.

If there are several priority levels, the pace controller can use transmission opportunities for the highest priority PVCs first; if something is left, it can be used for the second highest priority; if there is still free transmission capacity, the next priority level will be served, and so on.

Note that the highest priority level can completely consume all available bandwidth; thus all PVCs except the one with the highest priority can starve when this PVC is allowed to use the whole link transmission rate. Therefore it is advised not to assign the full link speed to the high priority PVC, in order to leave some space for low priority traffic.

The maximum bandwidth of a PVC can also be restricted when supporting multiple priorities. As with the single priority approach, the pace controller will not schedule a PVC that has less bandwidth than the physical speed in subsequent slots. These gaps can then be used for lower priority traffic.

On the other hand low priority PVCs may be assigned to use the full bandwidth. Because they only

get transmission slots assigned when there is no higher priority traffic they can use the full PVC or

link speed only if the bandwidth is not required for higher priority traffic.

The transmitted total rate is determined by the frequency at which the slots are served. This is controlled by a programmable counter that paces the speed of transmitted data. The actually achievable link rates depend on the physical layer (E1/T1, IMA, E3/T3, STM-1/OC-3). In particular, for high speed links it may not be possible to achieve an aggregated rate of all PVCs equal to the physical link speed because of performance limits of the Universal Radio Controller (URC) board, which manages the physical links via its Line Interface Unit.
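A minimal sketch of the multi-priority slot allocation principle (illustrative only; the priority levels, per-PVC limits and slot counts are hypothetical and do not reflect the actual URC/LIU implementation):

# Illustrative multi-priority slot allocation: higher-priority PVCs get transmission
# opportunities first; lower priorities only use what is left (link shaping principle).

def allocate_slots(pvcs, total_slots):
    """pvcs: list of dicts with 'name', 'priority' (1 = highest), 'demand' (slots wanted)
    and optional 'limit' (max slots, i.e. a per-PVC shaping cap). Returns slots granted."""
    granted = {p["name"]: 0 for p in pvcs}
    free = total_slots
    for pvc in sorted(pvcs, key=lambda p: p["priority"]):
        want = min(pvc["demand"], pvc.get("limit", total_slots))
        take = min(want, free)
        granted[pvc["name"]] = take
        free -= take
    return granted

# Example: the priority-1 PVC is capped so that lower priorities cannot starve completely.
pvcs = [
    {"name": "AMR/signalling", "priority": 1, "demand": 90, "limit": 60},
    {"name": "R99 I/B",        "priority": 3, "demand": 50},
    {"name": "HSPA I/B",       "priority": 4, "demand": 80},
]
print(allocate_slots(pvcs, total_slots=100))  # -> {'AMR/signalling': 60, 'R99 I/B': 40, 'HSPA I/B': 0}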


Figure 3-5: Link Shaping (diagram: the micro channel (P1), segmentation channel (P2) and HSDAT channel (P3), together with the Itf-B (P3), NBAP (P1) and ALCAP (P1) flows, feed the AMR, R99 I/B and HSPA I/B PVCs; the total data per transmission slot is limited and shared by the priority levels 1 to 3 across slots 1 to n)

The pace controller in the LIU shall support five priority levels.

By default the priorities shall be used as given below, where 1 is the highest and 5 is the lowest

priority:

Priority Service

1 Delay sensitive data (e.g. voice), signalling (NBAP, ALCAP) and Itf-B (UL)

2 Streaming data

3 Non-delay sensitive data (e.g. R99 interactive and background traffic)

4 HSDATA interactive and background traffic

5 Ethernet Passthrough and other low priority traffic

The priority shall be set by the OMC when the PVC is created.

Except for the Itf-B PVC the default priorities can be overwritten through OA&M.

The priority level of each control plane, user plane and the Ethernet Passthrough PVC shall be

configurable through OA&M.

The priority of all PVCs except the Itf-B VCC shall be set at creation time. Changing the priority of one or more PVCs requires deleting and recreating the Iub interface object.


Note: The priority of the Itf-B is controlled by a tuneable parameter that cannot be changed through OA&M. Changes only take effect after a reboot.

The pace controller shall be able to limit the cell rate of all PVCs except those having service

category UBR to the values defined by their ATM QoS parameters.

Note: A PVC’s traffic characteristics will be shaped by the pace controller according to its ATM QoS parameters (PCR, SCR, MBS) when the required capacity is available on the link. In particular the highest priority PVC will be shaped to these parameters. Lower priority PVCs may only get a smaller share of the link’s bandwidth if it’s already consumed by higher priority ones. Thus the ATM QoS parameters define merely the upper limit but do not provide any guarantee that a PVC will really get this amount of capacity.

Limiting the bandwidth of high priority PVCs can be used to leave some transmission opportunities

to lower ones, thus avoiding starvation of such channels. However, limiting the BW of any but the

highest priority PVC may result in lower than expected bandwidth because if a slot cannot be used,

the PVC will be rescheduled as though the slot was available!

The service category (CBR, VBRrt, VBRnrt, UBR) of all PVCs except the Itf-B VCC shall be “set on the

fly”, i. e. changes shall take effect only when the Iub interface object is deleted and recreated.

The ATM category of the Itf-B is stored as a backplane parameter in the Network Facilities Data

(NFD) record. It is non-volatile and cannot be changed through OA&M. Changes made in the

backplane require a site visit and only take effect after a reboot.

3.2.6.4 ITF-B LINK

The ATM parameters of the Itf-B VCC are stored in the backplane of the Node B and cannot be

modified through OA&M. Since backplane modifications because of this feature shall be avoided the

Itf-B shall get a pre-defined priority.

Generally, data sent uplink to the OMC over the Itf-B link is not very time critical and does not need high
priority. Also the sizes of the messages are limited. The biggest blocks of data are the upload of

performance measurements which are done once every quarter of an hour. The size of these files is

in the order of a few kilobytes.

Considering this the Itf-B does not necessarily need a high priority. Moreover, call processing in the

Node B will continue if the Itf-B link is congested or even broken. Nevertheless it is desirable to

have the ability to report alarms in real time also during busy hours. Although failed UL data transmissions are repeated and it is unlikely that data cannot be delivered within a reasonable time, the performance management application in particular may get confused if PM data files are missing.

Thus, it is advised to assign a high priority to the Itf-B link and restrict its bandwidth.

The minimum bandwidth that can be assigned to a PVC still allows performance data to be transmitted in less than 1 sec. Only during this period does the Itf-B link take capacity that is then not available for user data. On the other hand, if there is no UL transmission on the Itf-B, the capacity can be used by lower priority PVCs.


Note: As an alternative to transmitting PM data every quarter of an hour, four files may be combined and transmitted once an hour. In this case the file size increases accordingly. A further option would be to compress the performance data.

Traffic shaping in the DL direction shall consider that SW download requires the transmission of a large amount of data. Thus the Itf-B shall be allowed to get more bandwidth in the DL direction than in the UL. It shall also get a lower priority in order to give user traffic precedence.

Preferably, the priority should be lower than that of the PVCs carrying HSDATA interactive and background traffic and higher than or equal to the priority of the Ethernet Passthrough PVC. Alternatively these VPCs could use a different path. The final definitions will be done through the link profiles.

The priority of the Itf-B VCC shall be stored as a tuneable parameter. The default value shall be the

highest priority level.

3.2.6.5 AGGREGATE BANDWIDTH

The aggregate bandwidth of all PVCs is determined by the number of transmission slots and the

frequency at which they are served. The aggregate bandwidth can be less than the speed of the

physical link. In particular for STM-1/OC-3 links the performance of the URC card is probably not

sufficient to send data at physical link speed.

Note: If the aggregate rate is lower than the link speed, the individual ATM cells are still sent at link speed, but idle cells are inserted between cells carrying payload.
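As a small numerical illustration of this note (a sketch only; the 2.4 Mbit/s aggregate is an arbitrary example, not a product value), the share of idle cells on the wire can be derived from the ratio of the aggregate rate to the physical link rate:

    ATM_CELL_BITS = 53 * 8  # one ATM cell = 53 octets

    def idle_cell_fraction(aggregate_bit_rate, link_bit_rate):
        """Fraction of cells on the wire that are idle cells when the
        aggregate PVC rate is below the physical link rate."""
        payload_cell_rate = aggregate_bit_rate / ATM_CELL_BITS   # cells/s carrying payload
        link_cell_rate = link_bit_rate / ATM_CELL_BITS           # cells/s on the wire
        return 1.0 - payload_cell_rate / link_cell_rate

    # Example: an aggregate of 2.4 Mbit/s configured on a 155.52 Mbit/s STM-1 link
    print(round(idle_cell_fraction(2.4e6, 155.52e6), 3))   # about 0.985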

For E1/T1 links and for IMA groups the aggregate bandwidth shall be identical to the physical link

speed.

For E3/T3 links and STM-1/OC-3 links the aggregate bandwidth shall be configurable through OA&M

via an attribute called “aggregatelinkbandwidth” which has been added to the E3T3 and STM1OC3

objects of the OneBTS model.

The aggregate link bandwidth shall be “set on create”, i.e. changes shall take effect only when the

corresponding physical interface object (E3T3port or STM-1OC-3port) is deleted and recreated.

Note: If there is only a single instance of an STM-1/OC-3 interface it always operates at the physical link speed.

3.2.6.6 UPGRADE AND FALLBACK

If more than 3 IMA groups existed before the upgrade the IMA groups with the highest numerical

Identifiers shall be deleted leaving only 3 IMA groups. If the Identifiers of the remaining IMA Groups

have values > 3 they shall be converted to match the range 1 … 3.

If a "Passthrough VCLTP" object exists in u05.0x, it shall be deleted during the upgrade to UA06,

i.e. it will not be migrated to the new release. It shall be created again after the upgrade has been

completed.


Note: The passthrough VCLTP object represents the virtual circuit link termination point for the Ethernet port and contains information such as the virtual path and channel identifiers, the ATM class and the maximum, minimum, peak and sustained cell rates.

During upgrade the ATM parameters of the Itf-B PVC in the backplane shall be changed to the new

settings.

Note: The ATM QoS parameters (PCR, SCR, MBS, ATM class) remain unchanged!

During fallback from UA06 to u05.0x the ATM parameters of the Itf-B PVC in the backplane shall be

set to the u05.0x values.

Note: The ATM QoS parameters (PCR, SCR, MBS, ATM class) remain unchanged!

3.2.7 ETHERNET PASSTHROUGH OAM VCC

Two possibilities for terminating the Ethernet Passthrough VCC have been examined:

Option 1: termination on the RNC9370 INode, with the INode bridging passthrough traffic to a LAN.

Option 2: termination on the RNC9370's External ATM Switch (or another ATM Switch elsewhere in

the ATM Transport Network), with that ATM Switch performing bridging of Ethernet Passthrough

traffic to a LAN.

Following discussions with RNC9370 architects, it was determined that the RNC9370 INode cannot

support the necessary Bridging needed for Option 1 in UA06.0A. Therefore Option 2 is the only

option available.

The Passthrough OAM VCC from a Flexent NodeB must be terminated on an External ATM Switch,

where it must be "Bridged" (rather than "IP Routed") onto a LAN port. This is needed to be

symmetrical with the LU NodeB end of the Passthrough VCC, which is bridged (rather than IP

Routed) onto the Site LAN. Thus only the LU NodeB end of the Ethernet Passthrough VCC shall be

provisioned by WPS/W-NMS. Planning/provisioning of the other end of the Ethernet Passthrough VCC

has to be manually co-ordinated outside of WPS/W-NMS.

Note: This corresponds to Ethernet Passthrough Option 2.

The Ethernet Passthrough OAM VCC shall be outside the Virtual Path used for Iub Link VCCs so that

Passthrough OAM traffic does not impact HSDPA traffic (both being 'ubr'), i.e. it must have a different VPI from the VPI used for Iub Link VCCs. It could however use the same VPI as that used for the Itf-B VCC (which is also outside the Virtual Path used for Iub link VCCs).

3.3 IP

IP infrastructure is a key UA06.0 feature, mainly because it offers operators the hope of reducing their transmission costs. It is thus particularly important for the main source of transmission cost,

the Iub interface, and especially for the user plane.

The Quality of Service aspect may be considered as a risk by some operators, who thus would like to keep, at least initially, the control plane and R99 user plane data plus the HSDPA time-constrained


user plane data (streaming) on a “classical” ATM infrastructure, and move the HSPA

Interactive/Background user plane data onto an IP infrastructure. Thus, Alcatel-Lucent provides in

UA06.0 the “hybrid Iub interface” (mixed ATM/IP Iub configuration).

Figure 3-6: Hybrid Iub logical view (diagram: hybrid BTSs connected through E1/T1 and Ethernet links; signalling, OAM and the R99 + common channels + HSPA streaming user plane are carried on ATM (several ATM VCs) over an STM1 link towards the RNC, while the HSPA Interactive/Background user plane is carried on IP (several DSCPs) over a VLAN/GE link; the RNC, OMC and VR are shown on the network side)

It represents a first move towards a more complete UTRAN over IP portfolio foreseen in UA07.

IP features are described in [R3].

3.4 UTRAN SHARING

UTRAN sharing allows the UTRAN infrastructure to be shared among up to four UMTS operators (the operators jointly own the UTRAN, each being a UMTS licence holder) while complying with their competitive constraints as per national regulations.

This mostly deals with Node B and transmission resources (radio resources are dedicated per operator, as each operator operates his own frequency). Node B and Iub interface resources can be

split between several operators so as to guarantee a minimum amount to each operator. For Iub

transmission, this is achieved thanks to the bandwidth pool concept (see section 7.1). Note that

each operator has its own Iu interface to the RNC.

UTRAN sharing is described in [R6].


4 IUB SPECIFIC ASPECTS

4.1 SYNCHRONIZATION ASPECTS

4.1.1 INTRODUCTION

Synchronization procedures are described in TS 25.402 ([A4]), and cover many different aspects:

Node synchronization is defined in 3GPP TS25.402, and is the function to measure the transmission

delay between CRNC and its Node Bs, and to estimate the phase difference between RFN and BFN of

these nodes. This procedure is currently not used. It could be used prior to the radio link setup

procedure, but the standard allows alternatives.

Connection Frame Number timing determination is done in the RNC. It is used for R99 channels, to

indicate when data for each user are to be sent over the air interface.

[GLOBAL MARKET - Network synchronization covers the means used inside the network to forward

frequency information, in particular down to Node B. When using ATM infrastructure above E1/T1,

the Node B clock may be either synchronized on (one of) the incoming E1 or T1 links, or may be used in

free run. In the latter case, the clock needs to be periodically tuned. The Node B clock accuracy

must be compatible with the constraints of the air interface, which requires a transmission with a

precision on the frequency of 5*10^-8.]

Transport Channel Synchronization procedure is used between SRNC and Node B, along a DCH, to

achieve or restore the synchronization of the DCH (or set of co-ordinated DCHs) data stream in DL

direction. The problem consists in guaranteeing that each frame sent from SRNC to the Node B

arrives prior to the time the Node B has to send it over the radio, and each frame sent from Node B

to SRNC arrives in RNC in time to be sent to CN. The procedure is used when a new radio link is set

up. In addition, timing adjustment may occur during the call, if data sent by RNC arrives too

late/too early in Node B compared to the time when it is indicated to be sent by Node B on the air

interface. (In some cases, timing alignment on Iu interface may occur for e.g. AMR, in order to

adapt the reception of (e.g.) speech packets in the RNC to the time they need to be sent to the

concerned Node Bs, in order to minimize the CN to UE transfer delay, in a way compatible with the

inherent jitter on the Iu interface). This section details the transport channel synchronization and

timing adjustment procedures, because they are key to understanding the transport design constraints.

ITU-T recommendation G.107 presents the so-called “E-model”, a computational model for use in

transmission planning, where the speech quality is rated by a factor denoted “R”:

R = R0 - Is - Id - Ie-eff + A

where R0 is the basic signal to noise ratio, Is represents the impairments to the voice signal (e.g.

quantizing distortion brought by the speech coder), Id represents the impairments brought by delay


aspects (subdivided into impairments brought by talker echo, by listener echo and by absolute delay), Ie-eff represents impairments due to low rate codecs and packet loss, and A represents compensation of impairments. Speech quality thus clearly appears as a trade-off between various

aspects including complexity (speech coder/decoder complexity, echo cancellation techniques, …),

used bandwidth, and transmission delay.

The End-to-End delay requirements for speech are quite tight. The user perception of the QoS for a

speech call directly depends on that delay. ITU-T recommendation G.114 shows the user perception

of the speech quality versus mouth to ear delay, in otherwise perfect conditions.

Figure 4-1 (Figure 1 of ITU-T G.114): Determination of the effects of absolute delay by the E-model (plot of the E-model rating R, from 50 to 100, versus mouth-to-ear delay up to 500 ms, with bands ranging from "Users very satisfied" through "Users satisfied", "Some users dissatisfied" and "Many users dissatisfied" down to "Nearly all users dissatisfied")

3GPP TS 22.105 ([A1]) presents objectives for one-way speech end-to-end delay in a UMTS mobile

network. It also presents requirements for other services, distinguishing between the 4 defined

UMTS traffic classes:

Conversational services (among them speech) are characterized (in particular) by a guaranteed bit

rate and a guaranteed and short transfer delay

Streaming services (for instance audio streaming) are characterized by a guaranteed bit rate and a

guaranteed transfer delay. The transfer delay is normally larger than for conversational services.

Interactive services (e.g. web browsing) do not request any guaranteed bit rate or guaranteed

transfer delay, but expect a “relatively” quick response time. Their expectations are reflected in

the "Traffic Handling Priority" that further refines their expected network responsiveness.

Background services (e.g. mail transfer) do not request any guaranteed bit rate or guaranteed

transfer delay and are tolerant to large transfer delays. Their only constraint is for error ratio.


The following table is a concatenation of excerpts of tables from 3GPP TS 22.105.


Medium | Application | Degree of symmetry | Data rate | End-to-end one-way delay | Delay variation within a call | Information loss
Audio | Conversational voice | Two-way | 4-25 kb/s | < 150 msec preferred, < 400 msec limit (Note 1) | < 1 msec | < 3% FER
Video [conversational] | Videophone | Two-way | 32-384 kb/s | < 150 msec preferred, < 400 msec limit; lip-synch: < 100 msec | [no value, we can probably assume 1 msec] | < 1% FER

Note 1: The overall one-way delay in the mobile network (from UE to PLMN border) is approximately 100 msec.

Medium | Application | Degree of symmetry | Data rate | One-way delay | Delay variation | Information loss
Audio | Voice messaging | Primarily one-way | 4-13 kb/s | < 1 sec for playback, < 2 sec for record | < 1 msec | < 3% FER
Data [interactive] | Web-browsing - HTML | Primarily one-way | | < 4 sec/page | N.A. | Zero

Medium | Application | Degree of symmetry | Data rate | Start-up delay | Transport delay variation | Packet loss at session layer
Audio [streaming] | Speech, mixed speech and music, medium and high quality music | Primarily one-way | 5-128 kb/s | < 10 sec | < 2 sec | < 1% packet loss ratio
Video | Movie clips, surveillance, real-time video | Primarily one-way | 20-384 kb/s | < 10 sec | < 2 sec | < 2% packet loss ratio

Table 1 (concatenated excerpts from tables in 3GPP TS 22.105 - text between [ and ] has been added): End-user Performance Expectations

It should be stressed that these requirements pertain to End-to-End performance. In particular the

“< 1 ms” requirement on delay variation within an audio conversational or interactive call should

not be interpreted as meaning, e.g., that the Iub interface jitter requirement is to achieve a delay variation within an audio call of less than 1 ms. Indeed, the UE also participates in this, and can thus accommodate some buffering time, smoothing out the Iub jitter. This is particularly true in the

case of the interactive service.

End-user expectations can be split inside the end-to-end chain. The UTRAN requirements are

described in 3GPP TS 23.107 ([A2]). This leaves a very large range of values for target delays within

UTRAN, as shown in the following table:

Attribute | Conversational class | Streaming class | Interactive class | Background class
Maximum bitrate (kbps) | <= 16 000 (2) | <= 16 000 (2) | <= 16 000 - overhead (2) (3) | <= 16 000 - overhead (2) (3)
Residual BER | 5*10^-2, 10^-2, 5*10^-3, 10^-3, 10^-4, 10^-5, 10^-6 | 5*10^-2, 10^-2, 5*10^-3, 10^-3, 10^-4, 10^-5, 10^-6 | 4*10^-3, 10^-5, 6*10^-8 (7) | 4*10^-3, 10^-5, 6*10^-8 (7)
SDU error ratio | 10^-2, 7*10^-3, 10^-3, 10^-4, 10^-5 | 10^-1, 10^-2, 7*10^-3, 10^-3, 10^-4, 10^-5 | 10^-3, 10^-4, 10^-6 | 10^-3, 10^-4, 10^-6
Transfer delay (ms) | 100 - maximum value | 300 (8) - maximum value | |
Guaranteed bit rate (kbps) | <= 16 000 (2) | <= 16 000 (2) | |
Traffic handling priority | | | 1, 2, 3 |

2) The granularity of the bit rate attributes shall be studied. Although the UMTS network has capability to support a large number of different bitrate values, the number of possible values shall be limited not to unnecessarily increase the complexity of for example terminals, charging and interworking functions. Exact list of supported values shall be defined together with S1, N1, N3 and R2.

3) Impact from layer 2 protocols on maximum bitrate in non-transparent RLC protocol mode shall be estimated.

7) Values are derived from CRC lengths of 8, 16 and 24 bits on layer 1.

8) If the UE requests a transfer delay value lower than the minimum value, this shall not cause the network (SGSN and GGSN) to reject the request from the UE. The network may negotiate the value for the transfer delay.

Table 2: excerpt from Table 4 in TS 23.107: Value ranges for UMTS Bearer Service Attributes

A more practical and detailed representation has been attempted in 3GPP TR 25.853 ([A8]), but this TR is not constraining (as it is not a TS) and was not maintained (the latest version is 4.0.0). The aim is to derive an allowed transmission time on the Iur+Iub interface. Although this TR does not impose any value to be achieved for the transmission infrastructure, it clearly shows and explains the various


sources of delays, and how transmission times between SRNC and Node B may vary according to the

nature of the interface (e.g. terrestrial or satellite), the topology of the connection (e.g. whether a

DRNC has to be used, whether the connection has to cross many ATM or AAL-2 switches, etc.). It is

also shown there, and the constraint is also explained in TS 25.402 ([A4]), how the UTRAN and in

particular the SRNC needs to cope with these varying transmission times, in order to achieve

simultaneous transmission on the air interface of data from all Node Bs involved in Soft Handover.

As shown by the above references, it is impossible to give a unique transmission delay value even assuming a given target QoS, as it depends on many parameters (such as, taking speech as an example, the speech coding rate and the target QoS: a small increase of delay on Iur/Iub results in a small decrease in speech quality, but may still be compatible with the objective).

In any case, in order to meet the QoS requirements, the transport channel synchronization

procedure should aim at keeping a “low” delay, at avoiding frame loss, at supporting “some”

difference in delay between different Iub links in macro-diversity (possibly including an Iur portion)

and at allowing “some” transport jitter. Typically, the UTRAN is designed to support maximum

differences in delay between Iub links up to 50 ms, by default, but automatically adjusts to smaller

delays, thanks to a low initial target, as further detailed below, starting with Downlink.

4.1.2 SYNCHRONIZATION OF DL DATA

Figure 4-2: Illustration of key transport channel synchronization parameters (diagram for a target CFN of 152: 1. transmission by the RNC of the FP frame; 2. reception in the Node B; 3. buffering in the Node B; 4. transmission over the air by the Node B; the Node B receive window, positioned at TOAWE and of size TOAWS, splits arrivals into too early / early / OK / late / too late regions, with LTOA, the early margin, the Node B buffer capacity, the RNC anticipation (RNCWS) and the Iub delay (Node B, network load, ...) also shown)

When sending data to the Node B, the RNC indicates the target “Connection Frame Number”, which

is the DCH radio frame number for the UE. This information is necessary to make sure that in case

of macro-diversity, the UE receives the same data from all Node Bs of the active set at the same

time.

When receiving data from RNC, the Node B needs some minimum processing time before being able

to send it to the UE. Subtracting this processing time from CFN data transmit time, leads to “Latest


Time Of Arrival” (LTOA) of the data in Node B (shown at CFN150 in the figure, for a target CFN

equal to 152 in this example). Some constant margin is then subtracted from LTOA, to lead to Time

of Arrival Window End (TOAWE), shown in CFN148 in the figure: this margin is kept to cope with

jitter in the transmission network. If data arrives later than TOAWE, the Node B will react by telling

the RNC to perform timing adjustment, i.e. to send data earlier. The Node B indicates in the TIMING

ADJUSTMENT message the actual Time Of Arrival (TOA) of the data, compared to TOAWE. Note that

TOA is used as the 0 time reference and times towards the future are considered negative.

The data are expected by the Node B any time between TOAWE and Time of Arrival Window Start

(TOAWS): if the data arrives within that window, data is buffered in the Node B, processed and

sent. If data arrives too early (i.e. before TOAWS), the Node B sends a TIMING ADJUSTMENT message

to the RNC, indicating TOA, with a positive value. TOA is still (as in the case of late arrival) the time

difference with TOAWE. Data arriving between TOAWS and LTOA are sent to the UE, data arriving

after LTOA are lost, and data arriving before TOAWS may be lost (they may exceed Node B buffering

capacity, even though an early arrival margin exists, as represented in the figure).

In the Alcatel-Lucent implementation, an additional parameter is introduced, called RNCWS (for

RNC Window Start). It is typically a much smaller value than TOAWS, and is the estimated initial

window size aimed at compensating the jitter in the network. More precisely, the RNC targets an

arrival time in the Node B RNCWS before TOAWE.

When a radio link is set up, the RNC sends a message DL SYNCHRONIZATION, including the target

CFN. In the reply (UL SYNCHRONIZATION), the Node B indicates TOA, which allows the RNC to adjust

to the clock difference and transmission delay for this initial DL SYNCHRONIZATION message.

Assuming this initial transmission delay is kept constant, the RNC will send the data in such a way

that it arrives RNCWS before TOAWE. As explained before, if the transmission delay increases by

more than RNCWS, the Node B will react by a TIMING ADJUSTMENT message, and the RNC will adjust

to a new target accordingly.

Note that TOAWS is significantly larger than RNCWS, in order to cope with different transmission

delays in case of macro-diversity. The RNC keeps only one target, and adjusts (as long as it is

possible) to the longest needed transmission delay in the active set, keeping a single transmission

time, common to all Node Bs. It is up to the Node B(s) to buffer data arriving early. The maximum

delay difference the UTRAN can cope with is thus TOAWS - RNCWS (if the target has not

been modified by previous Timing adjustment). If this difference is exceeded, the UTRAN adjusts to

the fastest Node B and only the Node Bs within the window can then contribute to the SHO. As the

RNC keeps a single (adjustable) transmission time whatever the difference in transmission delays to

all Node Bs involved in macro-diversity, the buffering requirement in Node B is increased as

compared to an implementation where RNC would adapt transmission time for each Node B,

according to each specific Iub transmission delay, because the Node B receive window has to cater

for absolute delay between Iub legs as well as drift & jitter. It however significantly simplifies

handling in RNC, and is equivalent in terms of end-to-end delay.

RNCWS being a small value, it is well suited for services with low jitter such as speech. For services

subject to higher jitter, in particular I/B services on DCH, if the network load is low at the time of

the initial transport channel synchronization, the RNC will be a little optimistic when sending the

data: data might arrive late. In some unlucky cases, this may lead to a frame loss at the Node B

(data arriving after LTOA), however, as explained above, the RNC will quickly adjust to the actually


(maximum) experienced transmission delay, so the impact is minimal. Using a small value for

RNCWS allows minimizing transfer delay, which is better for performance.

Default value for TOAWS is 50 ms. LTOA allows by default 5 ms margin after TOAWE, and RNCWS

default value is 3 ms. (This is only indicative and subject to change).
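The window logic described above can be summarized by a small sketch (illustrative Python, not Node B code; it assumes the indicative defaults just quoted and expresses times relative to TOAWE, with TOA = 0 at TOAWE and times towards the future negative, as in the text):

    # Indicative defaults from the text (subject to change)
    TOAWS_MS = 50.0        # receive window size, ahead of TOAWE
    LTOA_MARGIN_MS = 5.0   # margin after TOAWE up to the latest time of arrival

    def classify_dl_frame(toa_ms):
        """Classify a DL FP frame from its Time Of Arrival, expressed relative
        to TOAWE (TOA = 0 at TOAWE, times towards the future are negative)."""
        if toa_ms > TOAWS_MS:
            return "too early: TIMING ADJUSTMENT with positive TOA, data may be lost"
        if toa_ms >= 0:
            return "OK: buffered in the Node B until the target CFN"
        if toa_ms >= -LTOA_MARGIN_MS:
            return "late: still sent, TIMING ADJUSTMENT with negative TOA"
        return "too late: frame lost (arrived after LTOA)"

    print(classify_dl_frame(12.0))   # OK
    print(classify_dl_frame(-2.0))   # late
    print(classify_dl_frame(-8.0))   # too late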

Note that in the case of HSDSCH there is no timing requirement as strong as in DCH (where late data risks being discarded). However, round trip time is important for HSDSCH, especially for UE categories allowing high bit rates, because the RLC window might get blocked if the time to receive the RLC acknowledgements from the UE is too long, which will then reduce the data rate.

4.1.3 SYNCHRONIZATION OF UL DATA

Network delay jitter exists in UL as in DL.

Macro-diversity induced by Soft Handover (SHO) implies that DCH FP data arrive at multiple time

occurrences in the RNC, since delay differences exist between Iub links. The mechanism applied in

UL is similar to that applied in DL, in the sense that parameters ULRNCWS, ULLTOA, ULTOAWE and

ULTOAWS are used, similar respectively to RNCWS, LTOA, TOAWE and TOAWS in DL. ULRNCWS is

used to determine the initial target reception time of UL frames. ULLTOA represents the latest time

of arrival for Node B data to be usable by RNC for transmission towards MAC-d layer. ULTOAWE and

ULTOAWS represent the two limits of the "normal" reception window for data from all Node Bs. ULTOAWS -

ULRNCWS supports the UL delay difference between Iub links. The difference with DL is that there is

no message towards Node Bs for timing adjustment. The adjustment is performed locally, internally

to the RNC.


Figure 4-3: Illustration of key UL synchronization parameters (diagram: 1. transmission by the Node B of the FP frame; 2. reception in the RNC; 3. buffering in the RNC; 4. transport block at CFN = 150 passed from DHO to MAC-d; the RNC receive window, positioned at ULTOAWE and of size ULTOAWS, splits arrivals into too early / early / OK / late / too late regions, each of which triggers ULTAD; ULLTOA, ULRNCWS (covering Node B and UL jitters), the RNC buffer capacity and the UL Iub delay are also shown; ulDhoTimer is suppressed and the maximum delay supported by the infrastructure is ULTOAWS - ULRNCWS, as for DL, with ULTproc = 0 ms for the RNC)

When receiving the very first UL data frame, the RNC creates a periodic ULTTI timer, and transmits

the data to MAC-d after waiting ULRNCWS plus ULTOAWE. In other words, LTOA position is

determined initially according to the arrival of the first UL data frame.

When receiving later UL data frames, the RNC normally keeps the same time reference, based on

the ULTTI timer. This means data are sent by DHO to MAC-d according to the ULTTI timer period.

However, the RNC may perform local timing adjustment to move the UL target time, based on

jitter. If needed, a shift occurs for transmission to MAC-d.

Default values in UL are the same as in DL for the corresponding parameters.

4.2 HYBRID CONFIGURATION, TNL MIXITY [GLOBAL MARKET ]

In the hybrid configuration, control plane, R99 user plane and OAM are carried over ATM and HSPA

user data are carried over IP (with the potential exception of the HSDPA streaming service).

This configuration is described in [R3].

Resiliency can be provided. This means that in case of IP link failure, some HSPA traffic can be

established over the ATM infrastructure (within the limit allowed by CAC). When the IP link operation is resumed, there may be, for some duration, coexistence of HSPA traffic over ATM and over IP. Both

infrastructures are likely to provide different transmission delays. As congestion detection can be

based on transmission delay build up detection, the algorithms need to take into account this

possible coexistence. Refer to section 7.1.5.2 for more details.


5 IU-CS/PS SPECIFIC ASPECTS

5.1 NGN ARCHITECTURE

The MSC can either be made of a single functional entity or be split into an MSC server (also known as Media Gateway Controller: MGC) + one or several Media Gateways (MGW). When the MSC is split,

the MGW is in charge of the user plane and handles ALCAP, whereas the MGC terminates RANAP and

coordinates accordingly with the MGW for the user plane connection setup.

The split of data between MGW and MGC can be done at ATM level (a set of VCCs for ALCAP + a set

of VCCs for User Plane between RNC and MGW, and a set of VCCs for RANAP between RNC and

MGC). It can also be done at SS7 level: ALCAP and RANAP are carried over a single set of VCCs, and

the separation between both data flows is performed at SS7 level in an intermediate Signalling

Transfer Point (STP). This STP is usually part of the MGW function. (This MGW may also be used as

ATM switch for e.g. Iu-PS traffic, and/or AAL-2 switch for e.g. Iur traffic: see section 3.2.5).

Seen from the RNC, the split of the MSC architecture is reflected in routing tables, and in ATM

architecture.

The RNC supports both combined and split MSC architectures, as described in [R1]. Addressing and

routing is described in [R5].

From UA06.0 the Iu-ps interface can be configured to support either the ITU or ANSI variations of

the SS7 stack.

As in non-NGN architecture, the Iu-CS interface may evolve towards IP and associated stacks in

further releases.

Figure 5-1: NGN logical architecture (diagram: Node B connected over Iub to the RNC; the RNC connected over Iu-PS to the SGSN, over Iu-CS-RANAP to the MGC, and over Iu-CS-ALCAPk + Iu-CS UPk to MGW1 ... MGWn; the MSC is split into an MGC handling RANAP and n MGWs handling ALCAP and the user plane)


5.2 IU-FLEX

The RNC is traditionally connected to a single CN-CS node (MSC) and a single CN-PS node (SGSN).

Since UA05.0, it is also possible to connect the RNC to multiple CN nodes, thanks to Iu-flex feature.

This feature allows the RNC to be connected to multiple CN nodes, in order to allow load balancing,

redundancy, etc. as specified in TS 23.236 ([A3]).

Previously, an RNC could connect to only one core network node within each core network domain, i.e. one MSC and one SGSN. This hierarchical network topology leads to some limitations: for instance, in case of core network failure, the geographical coverage of several RNCs will be lost.

This feature allows the connection of a RNC to several core network nodes (for a given operator)

within each core network domain. This new network structure gives more flexibility to the network

and increases its performance as well as capacity and scalability.

Figure 5-2 shows the network topology for UMTS networks without Iu-flex. Figure 5-3 shows the

network structure allowed by this feature.

Figure 5-2: UMTS network topology without Iu-flex (diagram: Node B connected over Iub to the RNC, with one Iu-CS interface to a single MSC and one Iu-PS interface to a single SGSN)


Figure 5-3: UMTS network topology with Iu-flex (diagram: Node B connected over Iub to the RNC, with Iu-CS connections to an MSC pool and Iu-PS connections to an SGSN pool)

The various core network nodes are grouped in pools, and inside a given pool area a mobile will always be attached to the same core network node. An RNC can be connected to several pool areas for the same domain.

Note that the figures above show separate pools for SGSNs and MSCs as each domain is independent

of the other but this doesn’t preclude core implementations where the SGSN and the MSC would be

seen by the UTRAN as one and the same node. Though such a configuration is unlikely to be encountered, nothing in the Iu-Flex feature implementation should prevent it.

It is the responsibility of the RAN to find the core network node where the UE is attached or to

choose a core network node for the UE in case of first attachment or of a mobile entering the pool

area. A mobile will always be served by the CN node it has been attached to.

The details of this feature are described in [R4].

6 IUR SPECIFIC ASPECTS

Iur has similarities both with Iub interface and Iu interface: on the one hand, it has to convey

information between SRNC and DRNC so that the DRNC can transfer it to its Node Bs, and the Iur and

Iub Radio Network Layer control protocols (RNSAP and NBAP) thus include parts that are


very similar, e.g. for Radio Link Setup. The frame protocols on Iub and Iur have to support similar

functionalities and are thus almost identical (3GPP technical specifications for Data Transport and

Transport Signalling for DCH Data Streams (TS 25.426 – [A6]) and User Plane Protocols for DCH Data

Streams (TS 25.427 - [A7]) are common for Iub and Iur interfaces). On the other hand, the Iur

interface does not belong to the “last mile” portion of the transmission network and thus does not

have the same "lightness" constraints. This is reflected in differences in the protocol architecture: for instance, RNSAP is carried over an NNI interface (as on the Iu interface) instead of a UNI as for Iub. Its protocol stack is built on MTP3-B, as on the Iu interface, instead of the lighter SSCF-UNI used for Iub.

At physical layer, the Iur interface is often carried on the same link as the Iu interface (at the RNC

egress), before entering the transmission backbone.

In summary, though Iur uses the same signalling bearer as Iu, its control plane signalling (RNSAP) and user plane are very similar to Iub. On the Iur the RNC can have either a serving or a drift role (on a per-call basis, depending on the RNC coverage in which the call originated).

While in the serving role the RNC is free to control its resources as it chooses, in the drift role the RNC has

to follow the SRNC requests. Thus the Iur 3GPP specifications always address only drift RNC

behaviour. The drift RNC could be compared to a "big Node B”.

From UA06.0 onwards each Iur interface to a neighbouring RNC can be configured to use either the

ANSI standard SS7 stack or the ITU Standard SS7 Stack.

7 FUNCTIONAL DESCRIPTION

7.1 FRAMEWORK

7.1.1 PRINCIPLE

Most transmission features, in particular dealing with Quality of Service, are organized around the

concept of “bandwidth pool”.

A bandwidth pool is a logical grouping of a number of Iub VCCs (or "IP paths", i.e. for the RNC a pair of destination IP address / DSCP range), in terms of Connection Admission Control (see section

7.1.3) and Congestion control (see section 7.1.5.1). It is a method to split the interface bandwidth.

For instance, a bandwidth pool (BP) can be defined corresponding to all VCCs part of an IMA group

at the Node B interface: the bandwidth pool capacity will then be defined equal to the IMA group

capacity and the RNC will then accept calls on this bandwidth pool, according to the IMA group

capacity. The RNC will also monitor the DL traffic according to the IMA group capacity. As the RNC

does not terminate the IMA group, the RNC cannot directly control it.

Another use of bandwidth pool applies for the case of multi-E1/T1 non-IMA configuration: each E1 or

T1 corresponds to a bandwidth pool.

The concept can also be used for UTRAN sharing: the terrestrial capacity can e.g. be split among 2

operators with 3 BPs: one dedicated to operator A, one dedicated to operator B, and one shared

between both operators. Calls will be set up in priority on the dedicated bandwidth pool. When there is no space left on the dedicated bandwidth pool, calls will be set up on the shared bandwidth


pool. If the shared BP is not defined, the operators might lose calls because of terrestrial interface

congestion, whereas the physical capacity of the interface is not totally used. If no dedicated BP is

defined, it may happen that one operator takes over all physical capacity and thus does not allow

the other operator to setup any call. The respective capacities of the BPs are thus defined as a

compromise between the overall efficiency objectives and the guarantee that each operator aims to

achieve.

A bandwidth pool can be defined for ATM VCCs and also for “IP paths”: a BP cannot group ATM VCCs

and IP Paths.

In the case of ATM, the bandwidth pool concept is similar to that of a VP (a number of VCs to be

handled in common in terms of CAC and congestion control), without the ownership relation (a VC is

defined by a VPI + VCI, i.e. the VC belongs to a VP). It is thus more flexible, and it is extendable to

the IP world.
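As a toy illustration of this grouping (a sketch with assumed names; this is not the RNC data model), a bandwidth pool can be modelled as a capacity plus a list of members that are either ATM VCCs or IP paths, never both:

    from dataclasses import dataclass, field
    from typing import List

    @dataclass(frozen=True)
    class AtmVcc:
        vpi: int
        vci: int

    @dataclass(frozen=True)
    class IpPath:
        dest_ip: str
        dscp_range: str        # e.g. "AF4x"

    @dataclass
    class BandwidthPool:
        name: str
        capacity_kbps: float   # value up to which the CAC accepts calls
        members: List[object] = field(default_factory=list)

        def add(self, member):
            # A BP groups either ATM VCCs or IP paths, never a mix of both.
            if self.members and type(member) is not type(self.members[0]):
                raise ValueError("a BP cannot group ATM VCCs and IP paths")
            self.members.append(member)

    ima_bp = BandwidthPool("IMA group", capacity_kbps=3800.0)
    ima_bp.add(AtmVcc(vpi=1, vci=40))
    ima_bp.add(AtmVcc(vpi=1, vci=41))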

7.1.2 BANDWIDTH POOL CAPACITY

7.1.2.1 ATM CASE

The bandwidth pool capacity is defined as the bandwidth value, up to which the CAC can accept

calls.

In case of a single BP on the interface, the BP capacity is equal to the ATM capacity offered by the

interface at its bottleneck (usually the Node B side of the Iub interface), minus the capacity

reserved for control (e.g. 5% of the interface). By construction of the VCC traffic contracts, it is also

made equal to the sum of ECRs of the VCCs part of the BP, where ECR = 2 x (PCR x SCR) / (PCR +

SCR) (ECR: Equivalent Cell Rate, PCR: Peak Cell Rate, SCR: Sustainable Cell Rate).

For multiple BPs, each BP can have an arbitrary capacity, as long as the sum of the capacities of all

BW Pools in the interface is equal to physical interface capacity minus the reservation for control

traffic: there is by definition no sharing of BW among BPs.

For each BP, it is also made equal to the sum of ECRs of the VCCs part of that BP (same as above).

7.1.2.2 IP CASE

As in ATM case, the interface bandwidth capacity can be split among several Bandwidth Pools.

An IP path represents the bandwidth reserved in the IP backbone for a destination IP address and a

range of DSCP (AF4x for example).

Based on Router configuration capabilities, a CR (Committed Rate) and a PR (Peak Rate) are defined

for each IP Path part of the Bandwidth Pool. Then an Equivalent Rate (ER) can be defined as ER = 2

x (CR x PR) / (CR + PR).

The PR and CR must be chosen such that the bandwidth pool capacity equals the sum of the ERs of

the IP Paths building that BW Pool.
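Both equivalent-rate formulas have the same harmonic-mean form, so a single helper can serve for ATM VCCs and IP paths. The following sketch (illustrative only, with made-up numbers) cross-checks a BP capacity against the sum of the equivalent rates of its members:

    def equivalent_rate(peak, sustained):
        """ECR = 2 x (PCR x SCR) / (PCR + SCR) for an ATM VCC,
        or ER = 2 x (CR x PR) / (CR + PR) for an IP path."""
        return 2.0 * peak * sustained / (peak + sustained)

    def pool_capacity(members):
        """BP capacity = sum of the equivalent rates of the VCCs / IP paths in the BP."""
        return sum(equivalent_rate(peak, sustained) for peak, sustained in members)

    # Example with made-up (PCR, SCR) pairs in cells/s for three VCCs of one BP
    vccs = [(4500.0, 1500.0), (2000.0, 2000.0), (1000.0, 500.0)]
    print(round(pool_capacity(vccs), 1))   # 2250.0 + 2000.0 + 666.7 = 4916.7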

7.1.2.3 BP CAPACITY CHANGE

The operator may reconfigure the BP capacity as a reaction to an alarm received from the UL or DL

congestion detection mechanism provided by TR 25.902 ([A9]) (see section 7.1.5.2).


In the ATM case, the BP capacity automatically changes in response to a change of VCC status (e.g.

consecutive to a link failure within an IMA group: ATM VCC maintenance flow (F5) messages from

Node B inform the RNC that some VCCs are no longer usable, as described in section 3.2.2.2).

For the IP part of the hybrid Iub case, the IP connectivity is periodically checked thanks to a

proprietary heartbeat mechanism. When loss of IP connectivity is detected, the whole IP Bandwidth

Pool is lost at once, whereas the loss can be partial in ATM (loss of a single E1 for instance). As

mentioned in section 4.2, when IP connectivity is lost, connections on the corresponding BP are lost,

but new calls can still be set up if the operator has configured the possibility of resiliency (feature

34125 - existence of an ATM VCC within a shared BP, on which HSPA I/B traffic is mapped). This is

further detailed in [R3] and [R7].

7.1.3 CAC

CAC described here only addresses transport aspects. Radio CAC is described in [R9].

7.1.3.1 AAL2 AND IP CAC

AAL2 CAC is applicable to all AAL2-based interfaces of a UTRAN. When an AAL2 interface is

replaced by an IP-based interface, IP CAC is applied instead. IP CAC is functionally

equivalent to AAL2 CAC.

The aim of the AAL2 CAC function is to prevent admission of AAL2 connections in excess of the

available transport bandwidth while at the same time ensuring a most optimized use of this

bandwidth.

The aim of the IP CAC function is similarly to prevent admission of new calls or radio links, in excess

of the available transport bandwidth, while at the same time ensuring a most optimized use of this

bandwidth.

For each end-user service (more precisely for each TC/THP/RbSetQos, and in addition in the case of

DCH, for each configured Max Bit Rate), the RNC is provisioned with an AAL2 max bit rate (also

denoted MBR) as well as an EBR: equivalent bit rate. For streaming service on HSDPA, EBR indicated

to transport includes RLC/MAC-HS overhead, and is derived from the GBR given on Iu interface as

follows:

MacHsGbr = RanapGbr x (RlcHeaderSize + MacdHeaderSize + RlcPduSize) / RlcPduSize

IuxFpGbr = MacHsGbr x (IuxFpHeaderSize + NbMinRlcPdu x (MacdHeaderSize + MacdPduSize)) / (NbMinRlcPdu x (MacdHeaderSize + MacdPduSize))

EBR = IuxFpGbr x HsdpaTransportEbrCorrectiveFactor

The corrective factor above gives the flexibility to the operator to take some risk on GBR, betting

that not all users will be active at the same time, and that some overbooking is thus possible. Using

a value of 1 means that no overbooking is applied.
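A small sketch of this EBR derivation (hedged: it follows the formulas as reconstructed above, and the parameter values in the example are placeholders, not provisioned defaults):

    def ebr_for_hsdpa_streaming(ranap_gbr_kbps,
                                rlc_pdu_size, rlc_header_size,
                                macd_header_size, macd_pdu_size,
                                nb_min_rlc_pdu, iux_fp_header_size,
                                corrective_factor=1.0):
        """Derive the transport EBR from the RANAP GBR of an HSDPA streaming RAB,
        following the formulas above; a corrective factor of 1 means no overbooking."""
        machs_gbr = ranap_gbr_kbps * (rlc_header_size + macd_header_size + rlc_pdu_size) / rlc_pdu_size
        macd_block = nb_min_rlc_pdu * (macd_header_size + macd_pdu_size)
        iux_fp_gbr = machs_gbr * (iux_fp_header_size + macd_block) / macd_block
        return iux_fp_gbr * corrective_factor

    # Example with placeholder sizes in bits (not provisioned defaults)
    print(round(ebr_for_hsdpa_streaming(
        ranap_gbr_kbps=128.0, rlc_pdu_size=320, rlc_header_size=16,
        macd_header_size=4, macd_pdu_size=336, nb_min_rlc_pdu=1,
        iux_fp_header_size=56, corrective_factor=1.0), 1))   # 158.4 with these values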

The EBR of a new connection is added to the sum of the EBRs of the already established connections; if the result does not exceed the provisioned bandwidth, the connection is accepted and the CAC is declared successful. If it is not possible to accept the new connection in any BP, the CAC fails. Note that


for the sake of IP and AAL2 CAC compatibility, the EBR handled by the operator does not include the

transport overhead, which can be different in AAL2 and IP. Transport takes care of correcting the

provisioned EBR by the transport overhead, either for AAL2 or IP.
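A sketch of the admission rule just described for the interface/BP method (illustrative names only, not the RNC algorithm); the transport overhead correction mentioned above is modelled as a simple multiplicative factor:

    def cac_accept(new_ebr_kbps, admitted_ebrs_kbps, bp_capacity_kbps,
                   transport_overhead_factor=1.0):
        """Interface/BP CAC check: accept the connection if the sum of EBRs,
        including the new one, stays within the provisioned capacity.
        The overhead factor models the AAL2- or IP-specific transport overhead
        that transport adds on top of the provisioned EBR."""
        total = (sum(admitted_ebrs_kbps) + new_ebr_kbps) * transport_overhead_factor
        return total <= bp_capacity_kbps

    admitted = [64.0, 128.0, 384.0]
    print(cac_accept(128.0, admitted, bp_capacity_kbps=768.0, transport_overhead_factor=1.1))  # False
    print(cac_accept(64.0, admitted, bp_capacity_kbps=768.0, transport_overhead_factor=1.1))   # True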

To prevent early call blocking, if the AAL2 or IP CAC function is activated on both sides of an

interface, both the algorithms and their provisioned parameters need to be reasonably close.

In a pure Alcatel-Lucent UTRAN configuration, the AAL2 CAC function may be activated only on the RNC (the MGW relies on load balancing performed by the RNC, and blocks the paths when its resource limits are reached). An AAL2 CAC may also be performed on other nodes, hence creating

the need for more flexibility of the RNC AAL2 CAC function to facilitate inter-working: finer

granularity of the algorithm, provisioning of the parameters on a per external interface basis.

3 CAC methods are defined in the ALU RNC; the first 2 of them differ by the threshold used to check whether or not the new connection request can be accepted based on its EBR:

Interface method: the BP capacity (or the whole interface capacity if no BP is configured) is considered as the threshold to accept new calls on that BP. It is denoted aal2If for the AAL2 case and Ipif CAC for the IP case.

Path method: each path capacity is considered as the threshold to accept new calls on that path

(this method is not recommended on Iub interface).

QoS bandwidth reservation method: for each QoS, a portion of the interface or BP bandwidth can

be reserved for that QoS. A new call can be accepted as long as it does not consume the bandwidth

part that is reserved for other QoS. Note that this is estimated from the CAC perspective, i.e. a

static perspective based on EBR. It does not prevent per se that part of the bandwidth reserved for

a QoS is in fact “stolen” by calls of other QoSs, due to mismatch between EBR and actual use of the

bandwidth by the calls. It may also happen for the same reason that calls are rejected due to the

CAC decision because of apparent potential QoS bandwidth reservation violation, although the

actual traffic is below the limit. This is inherent to the static perspective of CAC, which thus needs

to be complemented by a congestion control algorithm, to cope with situations where sum(EBR) is

at least temporarily underestimated compared to actual traffic.

7.1.3.2 CAC ON IUB INTERFACE

On the Iub interface, CAC is performed either on the basis of the whole interface (if the interface is

not split in BPs: calls are accepted if the interface as a whole can accept them), or on a per BP

basis otherwise (a new call is accepted if at least one suitable BP can accommodate it). QoS

bandwidth reservation method can be used on the Iub interface.

The Bandwidth Pool CAC is in charge of accepting or refusing new connection establishments within

a BP. In case the connection is refused, a “fall-back” BP may be tried if it has been configured and

is suitable. Bandwidth pools are classified into two categories:

Primary Bandwidth Pools are selected as a first choice by CAC, when of course they are suitable for

the requested service, which is determined thanks to the TransportMap table. The BP is suitable

when an entry matching (TC, THP, ARP and RbSetQoS) exists in the TransportMap.


Shared Bandwidth Pools are fall-back choices if the Primary BP was not suitable or was considered saturated.

When receiving a call setup request, the CAC in transport checks the QoS value represented by (TC,

THP, ARP, RbSetQoS), selects all suitable BPs (suitable means that a corresponding entry exists in

their TransportMap table), and sorts them by preference (first Primary, then Shared) and then by

decreasing available bandwidth.

From UA06.0 on, asymmetric bandwidth is supported. Therefore both UL and DL requested

bandwidths need to be considered. The more demanding direction is used as the sorting key: when

DL EBR >= UL EBR, BPs are sorted by decreasing DL BW, and when DL EBR < UL EBR, BPs are sorted

by decreasing UL BW.
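A sketch of this selection order (hypothetical structures; the TransportMap lookup is reduced to a set of QoS tuples): suitable BPs are sorted first by preference, Primary before Shared, then by decreasing available bandwidth in the more demanding direction:

    from dataclasses import dataclass
    from typing import Set, Tuple

    Qos = Tuple[str, int, int, str]   # (TC, THP, ARP, RbSetQoS)

    @dataclass
    class IubBandwidthPool:
        name: str
        primary: bool                  # True: primaryForTrafficType, False: shared
        transport_map: Set[Qos]        # QoS tuples this BP is suitable for
        available_dl_kbps: float
        available_ul_kbps: float

    def ordered_candidate_bps(pools, qos, dl_ebr, ul_ebr):
        """Return the suitable BPs in the order the CAC tries them: Primary before
        Shared, then decreasing available BW in the more demanding direction."""
        suitable = [bp for bp in pools if qos in bp.transport_map]
        if dl_ebr >= ul_ebr:
            key_bw = lambda bp: bp.available_dl_kbps
        else:
            key_bw = lambda bp: bp.available_ul_kbps
        return sorted(suitable, key=lambda bp: (not bp.primary, -key_bw(bp)))

    hsdpa_ib = ("interactive", 2, 1, "standard")
    pools = [
        IubBandwidthPool("E1-1", False, {hsdpa_ib}, 900.0, 900.0),
        IubBandwidthPool("IMA",  True,  {hsdpa_ib}, 3000.0, 1200.0),
        IubBandwidthPool("E1-2", False, {hsdpa_ib}, 1500.0, 700.0),
    ]
    print([bp.name for bp in ordered_candidate_bps(pools, hsdpa_ib, dl_ebr=512.0, ul_ebr=64.0)])
    # ['IMA', 'E1-2', 'E1-1']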

The configuration of bandwidth pools should match the physical/logical Iub interface, e.g. for an n*E1 + IMA deployment there are:

o n shared (preference = sharedForAllTrafficTypes) bandwidth pools that handle all traffic types (corresponding to the n E1s)

o one Primary (preference = primaryForTrafficType) BP that handles HSPA I/B traffic (only a selected set of (TC, THP, ARP, RbSetQos)) (corresponding to the IMA group).

7.1.3.3 CAC ON IUR/IU-CS INTERFACES

No bandwidth pool is used on the Iur or Iu interfaces. The path or interface method can be used, as well

as QoS bandwidth reservation method. Note that different values of EBR can be defined on each

interface Iub, Iur and Iu-CS. Transport maps need to be defined on Iur and Iu-CS interfaces for the

selection of the most suitable AAL2 path.

7.1.3.4 CAC ON IU-PS INTERFACE

AAL2 CAC is of course not applicable on Iu-PS interface (since AAL2 is not used on Iu-PS). There is no

equivalent stringent CAC on Iu-PS interface in the sense of a strict check of EBR versus capacity.

The CAC on Iu-PS can be considered as only in charge of selecting the most suitable path, depending

on the RAB request. Transport maps need to be defined on Iu-PS interface for the selection of the

most suitable VC/AAL-5 connection to establish the requested GTP connection.

7.1.3.5 SUPPORT FOR TRANSPORT BEARER REPLACEMENT ON SRNC (FRS 30091)

When operating a synchronised radio link reconfiguration to modify a DCH or an HSPA MAC-d flow,

the 3GPP standard allows the option to keep the same transport bearer or to replace it with another transport bearer.

Transport bearer replacement is an option supported since UA04.1 (FRS26140) on the drift RNC and

on the Node B. In UA06.0 this feature is added on the SRNC.

This feature is targeting the Iur interface but can also be activated on the serving Iub as a

configuration option.


Transport bearer replacement allows the transport CAC to be "naturally" updated on both sides of an

interface. This is particularly useful for Iur. On Iub it is only useful if the peer Node B is also

performing a transport CAC.

Hence the recommendation is to systematically activate it on Iur and not on Iub (unless requested by the customer).

7.1.4 ALCAP INTRODUCTION ON IUB

In UA06.0, ALCAP is introduced on Iub, thus removing the proprietary AAL2 transport bearer allocation scheme. The Iub thus becomes fully 3GPP compliant.

7.1.5 CONGESTION CONTROL

2 main congestion control mechanisms are implemented in the UA06.0 UTRAN.

The first one, provided by the Bandwidth Pool congestion control, checks the last mile load (in Figure 3-1

for instance, the bottleneck is between the IMA termination point and the Node B, or between the

ATM cross-connect closest to the Node B and the Node B: this end part of the RNC-Node B

connection is the one supervised by the BP congestion control and is performed by the CRNC).

The second one checks the transmission backbone as a whole and may detect other problems (cell

losses due to other reasons than last mile, e.g. in an IP router in the hybrid Iub case). It may also

detect problems related to an error in the configured interface capacity. It is also performed by the

CRNC.

[GLOBAL MARKET - Congestion control mainly applies on Iub interface. In this release, there is no

congestion control applied to Iur interface, which is not as critical as Iub interface from

transmission cost point of view (Iub is the “last mile” portion, there are much fewer Iur interfaces),

the dimensioning is thus not as constrained as for Iub. However, in case an Iub interface is

congested due to HSDPA traffic coming from Iur, there is a feed-back from Iub interface congestion

control in DRNC towards the responsible SRNC. This feedback will limit the HSDPA DL traffic from

SRNC to that DRNC, as explained in next section.]

[USA MARKET - In this release congestion control can be applied to the Iur Interface due to the introduction of the Bandwidth Limitation Algorithm on the Iur Interface under feature 34237. This is for the USA market only and is configured off for the global market.]

On the Iu interface, no congestion control or flow control is applied. It is however possible, for interactive, background and streaming services, to activate Iu traffic conformance. This function checks, according to the 3GPP algorithm described in [A2], that the Maximum Bit Rate is not exceeded. This limits, but does not prevent, accumulation of data on the Iu side in case the SGSN sends data at the Maximum Bit Rate at a time when the RNC is only able to send them to the UE at GBR.

7.1.5.1 BANDWIDTH POOL CONGESTION CONTROL

There is no shaping performed in the RNC. Data can be sent on the Iub interface or on a BP at the

rate of the physical interface (up to 155 Mbit/s). There is however obviously a bottleneck in the Iub

interface that has to be taken into account in the CRNC. This is performed thanks to the so-called

“bandwidth limitation” algorithm, also known as congestion control algorithm. It is a sort of

enhanced shaper, in the sense that it does not act directly at ATM level, but performs feed-back to


the RNC source of data (the RLC layer), in order to enable, disable or slow down the RLC layer transmission for some services, based on real-time observation of the data sent by transport. Note that this feedback is applied when the SRNC is also the controlling RNC. In case the controlling RNC is a DRNC, the Iur interface protocol does not provide any possibility for DCH data flow control. Some margin may thus be kept on the Iub interface for DCH traffic potentially coming from Iur. If this margin is insufficient, local traffic is temporarily somewhat penalized by distant traffic, as the Iub counters will detect an excess of traffic on the Iub interface and the CRNC will react by slowing down the local traffic. This situation does not result in data loss. If this margin is too large, it results in the Iub interface being underutilized. For HSDPA traffic, which is much more critical than DCH traffic because it can potentially reach or even exceed the whole interface capacity, the DRNC can stop the SRNC thanks to the HSDPA flow control mechanism (the DRNC will send credits to the SRNC “allowing 0 data”; this is feature 34012, described in more detail in [R8]).

In UA06.0, two congestion control algorithm cases are defined on Iub: the first one will completely block the RABs of one QoS for some time; the other one will limit, for some time, the RABs of all QoSs except conversational to a reduced rate (either GBR or MinBR, see note 2 below). Two corresponding detection mechanisms are developed:

In case 1, compatible with previous releases, the Iub or BP is considered as congested whenever data arrives too late at the Node B and has to be discarded (refer to section 4.1 for details), or if HSDPA traffic is discarded in the network due to ATM buffer overflows.

In case 2, newly introduced in UA06.0, the Iub or BP is considered as congested if a flow (R99 interactive/background traffic on DCH, streaming data or HSDPA I/B data) is blocked by other flows for at least 10 ms (meaning that no data of that QoS could be sent during 10 ms). Note that R99 conversational traffic cannot be blocked in the ALU implementation: it is always handled with top priority, and its RLC source cannot be stopped.

The principle of Congestion Control is based on real-time observation, by means of counters, of the data sent in the various QoSs, and on backpressure. More precisely:

Four traffic counters, denoted Q0, Q01, Q012 and Q0123, respectively monitor the traffic sent on QoS0, QoS0+QoS1, QoS0+QoS1+QoS2 and QoS0+QoS1+QoS2+QoS3.

The value of these counters is permanently compared to traffic thresholds, and if at least one threshold is reached, feedback to the RLC layer is performed (this is congestion case 1 detection). The feedback (an x-off message from transport, with congestionFlag set to false) indicates to the RLC layer to stop for the “x-off time-out” duration.

The counter evolution is also monitored every 10 ms, in order to detect QoS stalling (case 2 congestion). If this evolution shows that no data of at least one QoS could be sent during the last 10 ms, the x-off message will contain an “x-off time-out” and a congestionFlag set to true; during this time-out, the RAB rates have to be slowed down to GBR for streaming RABs and to MinBR for interactive and background RABs. In both cases, the x-off time-out is calculated by the transport layer. The x-off message may be repeated with a new timer value, depending on the counter evolution. Expiry of the x-off time-out indicates a return to a normal situation, i.e. the end of the congestion for all QoSs. From this point, a slow start is applied: the data rate is progressively increased from GBR/MinBR up to the maximum rate allowed by data availability from the Iu side and by radio conditions, or until a new congestion is encountered. Note that the end of a congestion (where the congestionFlag was set to true) can also be indicated by an x-on message.

Note 2: MinBR is defined for interactive (possibly different for each of the 3 THP values) and background services, and a different value can be defined per ARP (thus 12 possible values). This is actually multiplied by 2, as 2 parameters are defined: one for R99 (MinBrForR99) and one for HSDPA (MinBrForHsdpa). Both are designated as MinBR in this document.

For instance, if Q0123 reaches BpTh3 (BackPressureThreshold3), transport will immediately send an x-off message to all QoS3 RABs. This x-off message will indicate an x-off time-out calculated so as to allow the data already sent by the RLC layer to be conveyed to the Node B; the congestionFlag is set to false. When the 10 ms counter evaluation clock ticks, the congestion status is checked (comparison of the counters of all QoSs with their value 10 ms before). If no data of a QoS could be sent during that 10 ms period, an x-off message is sent to all QoS1, QoS2 and QoS3 RABs, indicating an x-off time-out and a congestionFlag set to true. QoS1 and QoS2 RABs will thus decrease their rate to GBR and MinBR respectively, for the duration of the x-off time-out. QoS3 RABs are already stopped by the previously running x-off time-out; at the end of that timer, they will increase their rate to MinBR/GBR. The situation is then checked again periodically and a new x-off message may be sent. If the congestionFlag was set to true, the end of congestion will be notified by transport by sending an x-on message. This message allows all RABs of all QoSs to progressively increase their rate from GBR/MinBR up to the maximum allowed by radio conditions and data availability from the Iu side, provided no new congestion or backpressure situation is encountered.

As a summary:

The counter values are permanently compared to traffic thresholds (see note 3).

If a counter reaches a backpressure threshold, an x-off message indicating an x-off time-out

(CongestionFlag set to False) is immediately sent to all RABs of the QoS corresponding to the

reached threshold. The RLC layer for these RABs then stops data transmission to transport for the

indicated x-off time-out duration.

Every 10 ms, the counters are decremented by Th0 (i.e. sum(ECR) for ATM or sum(ER) for IP, where

the sum applies for all VCCs or IP paths of the considered BP), unless the QoS Bandwidth reservation

CAC method is used (see section 7.1.3.1). If QoS Bandwidth reservation method is used, every 10

ms, the counters are decremented by the bandwidth available for the corresponding QoS, where

Bandwidth available to QoSp = Th0 − qosnBwReser × (sum(ECR_QoSn) / 100), with n ≠ p.

If the UA06.0 enhanced congestion control mechanism is activated, every 10 ms the evolution of the counters is checked (comparison of the value of each counter with that of the same counter 10 ms before). If this evolution shows that data could not be sent for a QoS during the previous 10 ms, an x-off message indicating an x-off time-out (congestionFlag set to true) is sent to all RABs of all non-conversational QoSs. The RLC layer will reduce the rate of all streaming RABs to GBR, and of all interactive/background RABs to MinBR.

Note 3: the thresholds used for the comparison are dynamically updated, also based on real-time traffic observation. Refer to [R7] for details.

In order to balance the load and offer fair sharing of the resource among all contending RABs, the RLC layer applies a slow start mechanism after the x-off time-out (from GBR/MinBR up to the maximum possible rate).

Refer to section 7.2 and to [R7] for more details.
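As an aid to reading, the following Python sketch illustrates the two detection mechanisms described above (case 1: a backpressure threshold is reached; case 2: a QoS is stalled for 10 ms). It is a simplified model, not the RNC implementation: the counter decrement handling, the callback signatures and names such as has_pending_data and send_xoff are assumptions made for this example, and the real thresholds are dynamically updated as per [R7].

```python
# Simplified model of the Iub/BP congestion detections described above.
# Counter names, callback signatures and the pending-data test are
# assumptions for this example, not the actual RNC data structures.
class BpCongestionControl:
    def __init__(self, bp_thresholds, decrement_per_10ms):
        self.thresholds = bp_thresholds      # [BpTh0, BpTh1, BpTh2, BpTh3] (dynamic in practice)
        self.decrement = decrement_per_10ms  # per-counter decrement, e.g. sum(ECR) or per-QoS BW
        self.counters = [0, 0, 0, 0]         # Q0, Q01, Q012, Q0123
        self.sent_in_window = [0, 0, 0, 0]   # per-QoS data sent during the current 10 ms window

    def on_data_sent(self, qos, size, send_xoff):
        """Case 1 check: account data of class `qos` on the cumulative counters."""
        self.sent_in_window[qos] += size
        for i in range(qos, 4):              # QoS q contributes to Qq ... Q0123
            self.counters[i] += size
            if self.counters[i] >= self.thresholds[i]:
                # Backpressure threshold reached: stop the RABs of the matching QoS.
                send_xoff(qos=i, congestion_flag=False)

    def tick_10ms(self, has_pending_data, send_xoff):
        """Case 2 check: a QoS with pending data that could send nothing for 10 ms."""
        stalled = any(self.sent_in_window[q] == 0 and has_pending_data(q)
                      for q in range(1, 4))  # QoS0 (conversational) is never blocked
        if stalled:
            # All non-conversational RABs fall back to GBR/MinBR for the x-off time-out.
            for q in range(1, 4):
                send_xoff(qos=q, congestion_flag=True)
        self.sent_in_window = [0, 0, 0, 0]
        self.counters = [max(0, c - d) for c, d in zip(self.counters, self.decrement)]
```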

7.1.5.2 25.902 MECHANISM, DL AND UL CASE [GLOBAL MARKET]

In UA06.0, the RNC and the Node B perform frame loss detection (respectively for UL and DL), and

the Node B controls DL delay build-up.

The sections hereunder describe these functionalities. The reaction to congestion detection for

radio aspects is described in [R10].

7.1.5.2.1 DOWNLINK DELAY DETECTION [GLOBAL MARKET]

The RNC shall add the “DRT” information element in all HS-DSCH DATA FRAMES (ref. 25.435). DRT is an internal RNC timer and represents the time when the packet is provided to the lower layers for transmission on the Iub.

The DRT reference is unique per Iub; this allows the Node B to maintain the minimum delay of the Iub across the different MAC-d flows. This algorithm is performed on a per H-BBU basis in the Node B.

For every received HS-DSCH DATA FRAME (numbered n), the Node B computes the delay with the

following formula Xn=Tn-DRT(n) where DRT(n) is the Delay Reference Time value included by RNC in

the HS-DSCH DATA FRAME and Tn is the time when the frame is received in Node B. (The clocks of

Node B and RNC are supposed to be synchronized in frequency, but do not need to be in phase). The

Node B compares Xn with Xmin, Xmin being the minimum delay, computed over the Iub;

When the delay exceeds a congestion delay threshold (operator configurable), the Node B marks the

MAC-d flow as congested, i.e. when Xn > Xmin + CongestionThreshold.

When the delay falls below a decongestion delay threshold (operator configurable), the Node B

marks the MAC-d flow as non-congested, i.e. when Xn < Xmin + DecongestionThreshold.

These thresholds can be configured differently in IP & ATM.


Figure (example): the RNC stamps successive HS-DSCH DATA FRAMEs with DRT values 54, 56 and 58; the Node B receives them at times T1 = 149, T2 = 152 and T3 = 153, giving x1 = T1 - DRT1 = 149 - 54 = 95, x2 = T2 - DRT2 = 152 - 56 = 96 and x3 = T3 - DRT3 = 153 - 58 = 95. The second frame therefore shows a +1 ms delay build-up, the others none.

The Node B regularly grants allocation to RNC through the HS-DSCH Capacity Allocation. Every time

the message is sent to RNC, the Node B adds:

The “TNL Congestion – detected by delay build-up” congestion indication if that particular MAC-d flow is marked as congested.

The “No TNL Congestion” congestion indication if that particular MAC-d flow is marked as non-

congested.

It shall be noted that the initial state on Node B for a given MAC-d flow is “non-congested”.
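A minimal sketch of the per-MAC-d-flow delay build-up detection described above, assuming millisecond timestamps; the Xmin maintenance is simplified here (the full background update is given in section 7.1.5.2.3) and the class and variable names are illustrative, not the Node B implementation.

```python
# Illustrative per-MAC-d-flow delay build-up detection; names are assumptions.
class DelayBuildUpDetector:
    def __init__(self, congestion_threshold_ms, decongestion_threshold_ms):
        self.cong_th = congestion_threshold_ms      # operator configurable
        self.decong_th = decongestion_threshold_ms  # operator configurable
        self.x_min = None                           # minimum delay observed over the Iub
        self.congested = False                      # initial state is "non-congested"

    def on_frame(self, t_n_ms, drt_n_ms):
        """Process one HS-DSCH DATA FRAME: Xn = Tn - DRT(n), compared to Xmin."""
        x_n = t_n_ms - drt_n_ms
        if self.x_min is None or x_n < self.x_min:
            self.x_min = x_n                        # simplified; see 7.1.5.2.3 for the real update
        if x_n > self.x_min + self.cong_th:
            self.congested = True                   # reported as "TNL Congestion - delay build-up"
        elif x_n < self.x_min + self.decong_th:
            self.congested = False                  # reported as "No TNL Congestion"
        return self.congested
```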

7.1.5.2.2 DOWNLINK FRAME LOSS DETECTION [GLOBAL MARKET]

RNC shall add the “FSN” information element in all HS-DSCH DATA FRAMES (ref. 25.435). FSN is

initialized with the value ‘1’ for a given MAC-d flow, and increased by ‘1’ unit every time a HS-DSCH

DATA FRAME is sent by RNC for that MAC-d flow.

The value ‘1’ directly follows the value ‘15’; in case the value ‘0’ is received, for example for all the MAC-d flows going through Iur, the congestion detection algorithm is not activated.

The Node B stores, for each MAC-d flow, the FSN Nn of the last HS-DSCH DATA FRAME received with a good CRC. For every received HS-DSCH DATA FRAME with FSN Nn, the Node B checks whether Nn = Nn-1 + 1; in that case no frame loss has occurred.

If Nn > Nn-1 + 1, P frames are missing and might be lost, but this could also be due to an IP de-sequencing issue.

The Node B starts a configurable timer for IP de-sequencing and waits until its expiry before marking the MAC-d flow as congested. If all the P missing frames are received during that interval, the MAC-d flow is NOT marked as congested and the timer is stopped. In that case, the last received frame becomes the new Nn, and the nominal algorithm is resumed.

De-sequencing is due to a router change or to “load sharing”. Load sharing generally has a strong impact on throughput because of de-sequencing, and should be deactivated. De-sequencing would thus only be caused by a router change. When the de-sequencing timer is running and a second jump of the FSN is detected, it is considered that frame loss has occurred. In such a situation, the timer is stopped and the MAC-d flow is marked as congested.

When N consecutive HS-DSCH DATA FRAMEs are received with successive FSNs (Nn = Nn-1 + 1), the MAC-d flow is marked as non-congested, N being configurable.

Figure (example): the Node B receives a frame with FSN = 7 immediately followed by a frame with FSN = 9; the gap triggers frame loss detection.

The Node B regularly grants allocation to RNC through the HS-DSCH Capacity Allocation. Every time

the message is sent to RNC, the Node B adds:

The “TNL Congestion – detected by frame loss” congestion indication if that particular MAC-d flow is marked as congested.

The “No TNL Congestion” congestion indication if that particular MAC-d flow is marked as non-

congested.

If both algorithms are activated and the MAC-d flow is marked as congested by both delay build-up and frame loss, the frame loss cause is reported to the RNC. When the MAC-d flow is no longer marked for frame loss, it may still be reported as delay build-up, depending on that algorithm.
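The following sketch illustrates the FSN-based frame loss detection with the IP de-sequencing timer, under simplifying assumptions: the timer is driven externally via on_desequencing_timer_expiry(), the bookkeeping of the P missing frames is reduced to clearing the wait state when the sequence resumes, and all names are illustrative rather than the Node B implementation.

```python
# Illustrative FSN-based frame loss detection with a simplified de-sequencing wait.
class FrameLossDetector:
    def __init__(self, n_decongestion):
        self.n_decong = n_decongestion   # configurable number of in-sequence frames
        self.last_fsn = None             # FSN of the last frame received with a good CRC
        self.in_sequence = 0
        self.waiting = False             # de-sequencing timer running
        self.congested = False           # initial state is "non-congested"

    @staticmethod
    def next_fsn(fsn):
        return 1 if fsn == 15 else fsn + 1      # FSN runs 1..15; '1' follows '15'

    def on_frame(self, fsn):
        if fsn == 0:                            # FSN 0 disables the detection (e.g. flows via Iur)
            return self.congested
        if self.last_fsn is None or fsn == self.next_fsn(self.last_fsn):
            self.in_sequence += 1
            self.waiting = False                # missing frames (if any) arrived in time
            if self.in_sequence >= self.n_decong:
                self.congested = False          # N consecutive in-sequence frames
        else:
            self.in_sequence = 0
            if self.waiting:                    # second FSN jump while the timer is running
                self.waiting = False
                self.congested = True
            else:
                self.waiting = True             # start the de-sequencing wait timer here
        self.last_fsn = fsn
        return self.congested

    def on_desequencing_timer_expiry(self):
        if self.waiting:                        # the missing frames never arrived
            self.waiting = False
            self.congested = True
```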

7.1.5.2.3 DOWNLINK XMIN COMPUTATION

If Xn < Xmin, the link was congested during the previous Xmin computation and the congestion is now reducing. In that case, and in order to detect a further congestion increase, Xmin needs to be reduced.


On the contrary, if all frames have their Xn >> Xmin for a long period, the transmission network may have changed and Xmin needs to be increased to avoid permanent congestion reporting by the Node B.

To update Xmin, a mechanism is put in place to compute a new candidate Xminbackground during a

certain period, where Xminbackground is the lowest Xn value received during the period. This

period is an internal BTS timer set to 1 minute.

After this period, Xmin is updated according to the following rules:

– If Xminbackground < Xmin - 5 ms: Xmin = Xmin - 5 ms (the reduction is limited to 5 ms; TBD whether the 5 ms value can be tuned during the test phase).

– If Xmin - 5 ms <= Xminbackground <= Xmin: Xmin = Xminbackground (the reduction is taken into account).

– If Xminbackground > Xmin: Xmin = Xmin + 0.1 ms.

This algorithm runs separately for the 2 Xmin computed over ATM and over IP.

Note that the enhancement to maintain an Xmin per Iub depends on RNC implementation, thus it

could be deactivated in Open Iub environment.
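A small sketch of the Xmin update rule above, run once per background period (about 1 minute) and separately for ATM and for IP; the 5 ms and 0.1 ms constants are taken from the text (the 5 ms value is marked TBD), and the function name is illustrative.

```python
# Illustrative Xmin update, run once per background period (about 1 minute),
# separately for the ATM and IP cases.
def update_xmin(x_min, x_min_background, max_reduction_ms=5.0, creep_up_ms=0.1):
    if x_min_background < x_min - max_reduction_ms:
        return x_min - max_reduction_ms   # reduction capped at 5 ms per period (value marked TBD)
    if x_min_background <= x_min:
        return x_min_background           # moderate reduction: adopt the background minimum
    return x_min + creep_up_ms            # slow 0.1 ms increase to track network changes
```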

7.1.5.2.4 UPLINK FRAME LOSS DETECTION [GLOBAL MARKET]

The Node B shall add the “FSN” information element in all E-DCH UL DATA FRAMEs (ref. 25.427). FSN is initialized with the value ‘1’ for a given MAC-d flow, and increased by ‘1’ every time an E-DCH UL DATA FRAME is sent by the Node B for that MAC-d flow.

The value ‘1’ directly follows the value ‘15’; in case the value ‘0’ is received, for example for all the MAC-d flows going through Iur, the congestion detection algorithm is not activated.

For every received E-DCH UL DATA FRAME with FSN FSNn, the RNC checks whether FSNn = FSNn-1 + 1; in that case no frame loss has occurred, otherwise some frames may have been lost.

If FSNn > FSNn-1 + 1, P frames are missing and might be lost, but this could also be due to an IP de-sequencing issue. The RNC starts a configurable timer for IP de-sequencing and waits until its expiry before marking the MAC-d flow as congested. If all the P missing frames are received during that interval, the MAC-d flow is NOT marked as congested and the timer is stopped. In that case, the last received frame becomes the new FSNn, and the nominal algorithm is resumed.


When the de-sequencing timer is running and a second jump of the FSN is detected, it is considered that frame loss has occurred. In such a situation, the timer is stopped and the MAC-d flow is marked as congested.

With 10 ms TTI, when N consecutive E-DCH UL DATA FRAMEs are received with successive FSNs (Nn = Nn-1 + 1), the leg is marked as non-congested, N being configurable. With 2 ms TTI in non-bundling mode, the algorithm uses the parameter N multiplied by 5, as illustrated in the sketch below.
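A one-line illustration of the de-congestion frame count, including the 2 ms TTI non-bundling scaling by 5 stated above; the parameter names are assumptions made for the example.

```python
# Illustrative helper applying the 2 ms TTI (non-bundling) scaling by 5.
def frames_needed_for_decongestion(n_param: int, tti_ms: int, bundling: bool) -> int:
    return n_param * 5 if (tti_ms == 2 and not bundling) else n_param
```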

Figure (example): the RNC receives a frame with FSN = 7 immediately followed by a frame with FSN = 9; the gap triggers frame loss detection.

For a MAC-d flow marked as congested, the RNC regularly sends a “TNL Congestion Indication” message to the Node B on the congested RL, with the cause “TNL Congestion – detected by frame loss”.

When a MAC-d flow marked as congested becomes non-congested, the RNC sends one “TNL Congestion Indication” message to the Node B on the RL that is no longer congested, with the cause “No TNL Congestion”, and stops sending any further “TNL Congestion Indication”.

7.1.5.2.5 CONGESTION AND DE-CONGESTION LEVEL FOR THE UPLINK IUB

The cell is declared as congested, in the Red state, if at least 1 UE is detected as congested by “frame loss” by the RNC. The cell leaves the Red state if no UE is detected as congested by “frame loss” by the RNC.

One timer, edchBLSupervisionTimer, is used to take action on the scheduler. When the timer expires, the cell colour is observed to assess whether the global reduction factor R shall be increased or decreased (the cell colour concept is described in [R11]).

The step for upgrading/downgrading the reduction factor is configurable:

– For downgrading, when frame loss is detected (edchBLStepReductionFrameLoss).

– For upgrading, when no congestion is detected (edchBLStepIncrease).

Every edchBLSupervisionTimer period, the global reduction factor is modified in the following way:

– Decreased by edchBLStepReductionFrameLoss if frame loss is detected.

– Increased by edchBLStepIncrease if no congestion is detected.

7.1.5.2.6 E-DCH SCHEDULER ACTION UPON CONGESTION

Every time the Global Reduction Factor is modified, the scheduler updates the codec limitation of

each UE by multiplying the initial codec limitation (i.e. without congestion) by the global reduction

factor. This modification allows limiting the maximum achievable throughput of each UE. E-DCH

scheduler is described in [R10].

The Global Reduction Factor is modified and the scheduler continues to grant resources to the UE

according to radio conditions and the updated codec limitation.
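The following sketch combines the periodic reduction-factor update (section 7.1.5.2.5) and the scheduler action (section 7.1.5.2.6). It is illustrative only: the 0..100 % clamping and the percentage representation are assumptions, while the step arguments mirror the O&M parameters listed in section 7.1.5.2.8.

```python
# Illustrative combination of the reduction-factor update and the scheduler action.
# The 0..100 % clamping is an assumption; the step names mirror the O&M parameters below.
def on_supervision_timer(reduction_factor, frame_loss_detected,
                         edchBLStepReductionFrameLoss, edchBLStepIncrease):
    """Called at each edchBLSupervisionTimer expiry, based on the observed cell colour."""
    if frame_loss_detected:                         # at least one UE congested: Red state
        reduction_factor -= edchBLStepReductionFrameLoss
    else:
        reduction_factor += edchBLStepIncrease
    return min(100.0, max(0.0, reduction_factor))


def apply_to_ue(initial_codec_limitation_kbps, reduction_factor):
    """Scheduler action: scale each UE's initial codec limitation by the factor."""
    return initial_codec_limitation_kbps * reduction_factor / 100.0
```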

7.1.5.2.7 FP MESSAGES FROM 3GPP 25.435 REL-6 & 25.427 REL-6


Figure 7-1: HS-DSCH DL Data Frame (frame layout not reproduced here; fields include Header CRC, FT, FSN (Frame Seq Nr), CmCH-PI, MAC-d PDU Length, Num of PDUs, User Buffer Size, MAC-d PDUs 1..n, New IE Flags, Flush, DRT, Spare Extension and Payload CRC).

DRT is positioned by RNC for downlink delay detection.

FSN is positioned by RNC for downlink frame loss detection.


Figure 7-2: HS-DSCH Capacity Allocation frame (frame layout not reproduced here; fields include CmCH-PI, Congestion Status, Maximum MAC-d PDU Length, HS-DSCH Credits, HS-DSCH Interval, HS-DSCH Repetition Period and Spare Extension).

The congestion status is positioned by the Node B for the downlink congestion detection, within the

range:

0 for “No TNL Congestion”

1 for “Reserved for future use”

2 for “TNL Congestion – detected by delay build-up”

3 for “TNL Congestion – detected by frame loss”


Figure 7-3: E-DCH UL Data Frame (frame layout not reproduced here; fields include Header CRC, FSN, FT, CFN, Number of Subframes, per-subframe descriptors (Subframe number, Number of HARQ Retransmissions, Number of MAC-es PDUs, DDI and N fields), the MAC-es PDUs, Spare Extension and Payload CRC).

CFN is positioned by Node B for uplink delay detection (not supported in this release).

FSN is positioned by Node B for uplink frame loss detection.

TNL Congestion Indication control frame (frame layout not reproduced here; fields: Congestion Status and a Spare Extension of 0 - 32 octets).


The congestion status is positioned by the RNC to report uplink congestion.

7.1.5.2.8 O&M CONFIGURATION

RNC parameters

Each parameter is listed as: Name: Description (Related object; Class & Category).

Congestion Detection

HsdpaCongestionFieldDRTInclusionAllowed: This parameter controls the inclusion by the RNC of the HSDPA User Plane congestion field DRT in HS-DSCH Data frames sent over Iub. (HSDPA25902; 3)

HsdpaCongestionFieldFSNInclusionAllowed: This parameter controls the inclusion by the SRNC of the HSDPA User Plane congestion field FSN in HS-DSCH Data frames sent over Iub. (HSDPA25902; 3)

EDCHCongestionDelayThreshold: This parameter defines the E-DCH congestion delay before sending a TNL congestion notification with delay build-up to the Node B. (EDCH25902; 3)

EDCHDeCongestionDelayThreshold: This parameter defines the E-DCH congestion delay before sending TNL congestion with no congestion to the Node B after delay congestion was detected. (EDCH25902; 3)

NumberOfFrameBeforeEDCHCongestion: This parameter defines the number of E-DCH frames to be lost in sequence before sending TNL congestion with frame loss congestion to the Node B. (EDCH25902; 3)

NumberOfFrameBeforeEDCHDeCongestion: This parameter defines the number of E-DCH frames to be received in sequence before sending a TNL congestion notification with no congestion to the Node B after frame-loss congestion was detected. (EDCH25902; 3)

edchDeSequencingWaitTimer: This parameter allows configuring a delay before declaring frame loss congestion. (EDCH25902; 3)

EdchCongestionControlNotificationBackoffTimer: UL Iub FP Congestion Indication frames are repeated at this period until the congestion is detected, either as congestion caused by frame loss or as congestion caused by delay build-up. (EDCH25902; 3)

Node B parameters

Each parameter is listed as: Name: Description (Related object; Class & Category).

Congestion Detection

HsdpaCongestionDetection: This parameter controls the activation of the congestion detection on the Node B and the inclusion by the Node B of the congestion status in the HS-DSCH Capacity Allocation. (BtsEquipment; 3)

HsdpaDelayPerIubReference: This parameter controls the activation of the enhanced delay build-up detection algorithm on the Node B. (BtsEquipment; 3)

HsdpaCongestionDelayThresholdATMHigh: This parameter defines the HSDPA congestion delay detection threshold before considering the MAC-d flow going over ATM as congested. (BtsEquipment; 3)

HsdpaCongestionDelayThresholdATMLow: This parameter defines the HSDPA decongestion delay detection threshold before considering the MAC-d flow going over ATM as non-congested. (BtsEquipment; 3)

HsdpaCongestionDelayThresholdIPHigh: This parameter defines the HSDPA congestion delay detection threshold before considering the MAC-d flow going over IP as congested. (BtsEquipment; 3)

HsdpaCongestionDelayThresholdIPLow: This parameter defines the HSDPA decongestion delay detection threshold before considering the MAC-d flow going over IP as non-congested. (BtsEquipment; 3)

NumberOfFrameBeforeHSDPACongestion: This parameter defines the number of consecutive HSDPA Iub frames lost before declaring a MAC-d flow in frame loss congestion. (BtsEquipment; 3)

NumberOfFrameBeforeHSDPADeCongestion: This parameter defines the number of consecutive HSDPA Iub frames (with consecutive FSN) to be received before switching back to “no congestion” after frame loss congestion. (BtsEquipment; 3)

hsdpaDeSequencingWaitTimerIP: This parameter allows configuring a delay before declaring frame loss. It is used only when HSDPA is conveyed over IP. (BtsEquipment; 3)

UL Congestion Management

IsEdchBandwidthLimitationAllowed: This parameter is used to activate the E-DCH congestion control algorithm in the Node B. (BtsEquipment; 3)

edchBLSupervisionTimer: This timer controls the periodicity at which the Node B modifies the reduction factor. (BtsEquipment; 3)

edchBLStepReductionFrameLoss: This parameter controls the reduction applied to the reduction factor when frame loss is detected. (BtsEquipment; 3)

edchBLStepReductionDelay: This parameter controls the reduction applied to the reduction factor when delay build-up is detected. (BtsEquipment; 3)

EdchBLStepIncrease: This parameter controls the increase applied to the reduction factor when no congestion is detected. (BtsEquipment; 3)

edchBLMACdFlowTimer: This parameter corresponds to the internal timer that is started or restarted each time the MAC-d flow status is set to ORANGE or RED. (BtsEquipment; 3)

EdchBLIubBandwidth: This parameter corresponds to the maximum bit rate that is supported on the Iub. (BtsEquipment; 3)

Counters

For IUB congestion detection

Managed Object

./Aal2If

./Aal2If/BP

./IpIf/BP

Definition The counter indicates the number of frames received with a delay higher than the configured threshold.

Counter Name VS.EdchFramesWithDelayBuildUp

Type Cumulative

Triggering Event The counter is incremented every time an FP frame is received,

AND if the delay build-up was above the configured limit.

Screening None

Managed Object

./Aal2If

./Aal2If/BP

./IpIf/BP

Definition The counter indicates the number of frames lost.

Counter Name VS.EdchFramesWithFrameLoss

Type Cumulative

Triggering Event

The counter is incremented every time a gap in the FSN numbering is detected. It is increased by the number of frames that have been lost.


Screening None

Managed Object

./Aal2If

./Aal2If/BP

./IpIf/BP

Definition The counter indicates the number of frames received.

Counter Name VS.EdchTotalFramesReceived

Type Cumulative

Triggering Event The counter is incremented every time a frame is received.

Screening None

Managed Object BTSEquipment

Definition The counter indicates the number of frames received with a delay higher than the configured threshold, for the ATM transport.

Counter Name VS.HsdpaFramesWithDelayBuildUpATM

Type Cumulative

Triggering Event

The counter is incremented every time an FP frame is received

over ATM, AND if the delay build-up was above the configured

limit.

Screening None

Managed Object BTSEquipment

Definition The counter indicates the number of frames received with a delay higher than the configured threshold, for the IP transport.

Counter Name VS.HsdpaFramesWithDelayBuildUpIP

Type Cumulative

Triggering Event The counter is incremented every time an FP frame is received

over IP, AND if the delay build-up was above the configured limit.

Screening None

Managed Object BTSEquipment

Definition The counter indicates the number of frames received over ATM.

Counter Name VS.HsdpaTotalFramesReceivedATM


Type Cumulative

Triggering Event The counter is incremented every time a frame is received on

ATM.

Screening None

Managed Object BTSEquipment

Definition The counter indicates the number of frames received over IP.

Counter Name VS.HsdpaTotalFramesReceivedIP

Type Cumulative

Triggering Event The counter is incremented every time a frame is received on IP.

Screening None

Managed Object BTSEquipment

Definition The counter indicates the number of frames lost over ATM.

Counter Name VS.HsdpaFramesWithFrameLossATM

Type Cumulative

Triggering Event

The counter is incremented every time a gap in the FSN numbering is detected. It is increased by the number of frames that have been lost.

Screening None

Managed Object BTSEquipment

Definition The counter indicates the number of frames lost over IP.

Counter Name VS.HsdpaFramesWithFrameLossIP

Type Cumulative

Triggering Event

The counter is incremented every time a gap in the FSN numbering is detected. It is increased by the number of frames that have been lost.

Screening None

For UL IUB congestion management

Managed Object BtsEquipment


Definition The counter indicates the number of edchBLSupervisionTimer periods during which the reduction factor falls within each of the screening ranges below.

Counter Name VS.EdchBLReductionFactor

Type Value

Triggering Event Each edchBLSupervisionTimer period

Screening

0: reduction factor equal 100% (meaning no Iub congestion was

detected.)

1: the reduction factor is between 95 and 100 (100 excluded)

2: the reduction factor is between 90 and 95 (95 excluded)

3: the reduction factor is between 85 and 90 (90 excluded)

4: the reduction factor is between 80 and 85 (85 excluded)

5: the reduction factor is between 70 and 80 (80 excluded)

6: the reduction factor is between 60 and 70 (70 excluded)

7: the reduction factor is between 50 and 60 (60 excluded)

8: the reduction factor is between 25 and 50 (50 excluded)

9: the reduction factor is between 0 and 25 (25 excluded)

Fault management

No modification.

7.2 GUARANTEED BIT RATE/MIN BR OVER HSDPA AND FAIR SHARING OF RESOURCES

Real Time Media (Video or Audio) Streaming applications require a GBR service, which, before UA06.0, was only supported on DCH.

In addition, some operators request, at least for their high-priority users, the possibility of guaranteeing a minimum bit rate also for Interactive and Background services. This minimum bit rate is to be offered when congestion is encountered in the network. This operator concern applies primarily to HSDPA because DCH, being a dedicated channel, normally provides a guaranteed capacity to the user (at least as long as the user is not moved to CELL_FACH state). However, even for DCH, the transport has to evolve to make sure data is not blocked in case of congestion.

Starting from UA06.0, it is possible to offer GBR services on HSDPA, for PS streaming RABs, and

interactive/background services with operator-defined Minimum bit rate (MinBrForHsdpa). This not

only requires evolution in the radio domain, but also in the transport domain, as explained in

section 7.1. MinBR is also defined for R99 Interactive and background services (MinBrForR99). This

can be achieved if the UA06.0 enhanced congestion control algorithm is activated.


From various sources, it seems that many UEs do not really provide the QoS parameters according to

application needs. In at least some cases, they are not able to provide the Traffic Class, let alone

the Max bit rate, guaranteed bit rate, THP, etc.

The operators have partial means to overcome this difficulty by providing the parameters in lieu of

the UE: the UE sends a SETUP or PDP CONTEXT ACTIVATION message to the Core Network with some

Non Access Stratum parameters, the CN checks the NAS parameters, checks the subscription rights

of the UE (thanks to a dialog between MSC or SGSN and HLR), and builds up the parameters

necessary for the RAB ASSIGNMENT REQUEST message to the RNC.

The CN may also supersede or provide the THP parameters. It also provides the ARP parameters,

which can for instance reflect the user subscription: a user choosing a higher quality (more

expensive) subscription will benefit from better treatment, with better THP and ARP values.

Whether the information is initially coming from the UE or from the HLR, the RNC gets what it

knows about the RAB parameters in the RAB assignment request message. However, some

parameters are optional (e.g. ARP) and the RNC thus sometimes needs to “guess”, e.g. based on

traffic class (for instance a speech call without provided ARP will then be considered as “more

important” than an interactive call without ARP).

Per 3GPP definition (TS 25.413, [A5]), all RABs include a Maximum Bit Rate; only streaming and conversational RABs include the GBR parameter (this GBR can be set from 0 up to MaxBR). Only interactive RABs include the THP. Although TS 23.107 ([A2]) foresees only 3 THP values (which is also the number of different behaviours supported in the Alcatel-Lucent implementation), TS 25.413 authorizes 15 values for THP. Any RAB may include the ARP parameter.

For Interactive and Background services, the lack of GBR potentially prevents the operator from guaranteeing a quality of service to subscribers. In order to overcome this limitation, some operators want the possibility, in the RNC, to configure a Minimum Bit Rate that is modulated according to subscriber priority, as far as it can be known by the RNC, i.e. according to the ARP value and the THP value when applicable. In this release, MinBR can be configured for HSDPA I/B RABs. Use of this MinBR is further detailed in section 7.2.2.

The sections hereunder summarize the way to offer GBR and MinBR over HSDPA, thanks to the BP feature. They also address the evolution of transport related to the support of GBR/MinBR over DCH.

7.2.1 GUARANTEED BIT RATE FOR STREAMING TRAFFIC OVER HSDPA

Starting from UA06.0, streaming data for HSDPA and DCH are by default carried on the same VCC. This VCC benefits from an ATM priority (through its ATM Transfer Capability, also known as Service Class) intermediate between conversational traffic and I/B traffic on DCH. Therefore, in case of congestion, streaming traffic over HSDPA will get a higher priority than Interactive/Background traffic, even when the latter is carried on DCH, thus maintaining the possibility to achieve the guaranteed bit rate.

Congestion on the interface is closely monitored thanks to transport counters of transmitted data per QoS.


In case a “congestion” on streaming data is observed by the transport layer (more precisely, if the amount of data sent for the conversational + streaming QoSs implies that queueing will reach the maximum allowed duration for streaming at the interface point where the bit rate is limited to the Iub interface capacity), the transport layer sends an x-off message to each MAC/RLC context of streaming QoS, including an x-off time-out value that depends on the observed congestion level and is calculated by transport. The message contains a congestionFlag set to false. Data is not transmitted for the duration of the x-off time-out. The x-off message is also sent when there is no problem for conversational + streaming but there is a congestion problem for another QoS (no data of that QoS could be sent during the last 10 ms). In that case, the message sets the congestionFlag to true and indicates to the streaming RABs to limit their rate to at most the MAC-hs GBR during the x-off time-out.

When x-off time-out expires and congestionFlag was false, MAC/RLC layer resumes data

transmission to frame protocol at MAC-hs GBR. From there, it applies a slow start to reach the

optimal rate dictated by credit allocation from Node B and data availability coming from Iu

interface (unless a new x-off slows down the application again before). Credit allocation by Node B

is part of the HSDPA flow control mechanism and is described in [R8]. Data rate is also slowly

increased starting from MAC-hs GBR after expiry of x-off time-out after a congestion period

affecting a QoS other than streaming (i.e. the congestionFlag was set to true). Slow start allows fair sharing to be achieved between contending RABs, not only streaming RABs but also RABs of other QoSs.

Although GBR is not met during the short x-off time-outs, once averaged over longer time periods,

GBR should be met, unless the operator allows too much overbooking.

7.2.2 MIN BIT RATE FOR INTERACTIVE/ BACKGROUND TRAFFIC OVER HSDPA

The operator has the possibility to declare a minimum bit rate for interactive or background

applications. In case of congestion for interactive/background service, the behaviour is similar to

that described in the previous section, except interactive / background services do not benefit from

the high priority QoS dedicated to streaming traffic class services.

In recommended configurations:

Conversational traffic is mapped on QoS0,

Streaming traffic is mapped on QoS1,

Interactive/background traffic on DCH is mapped on QoS2

Interactive/background traffic on HSDPA is mapped on QoS3.

If the amount of traffic measured on QoS0 + QoS1 + QoS2 reaches the corresponding threshold

(denoted BpTh2), an x-off message is immediately sent to QoS2 RABs, indicating an x-off time-out.

During this x-off time-out, no data should be sent, and after x-off time-out, data sending can be

resumed at up to MinBrForR99 for QoS2 RABs, and will then slowly increase up to maximum Bit rate


allowed by the radio bearer configuration, if enough data is available from Iu side and no new

congestion is encountered. If a QoS is blocked for 10 ms (data of that QoS could not be sent during

10 ms), an x-off message is sent to all non-conversational QoS RABs indicating the congestion

situation (congestionFlag set to true) with an x-off time-out. For QoS2 RABs, this will limit data

sending to transport at MinBrForR99 during the x-off time-out. The end of congestion will then be

indicated by an x-on message: the data rate will slowly increase from MinBrForR99 up to maximum

bit rate allowed by data availability from Iu side and by radio configuration.

Similarly, when the amount of traffic measured on QoS0 + QoS1 + QoS2 + QoS3 reaches a threshold

(denoted BpTh3), an x-off message is sent to QoS3 RABs, indicating an x-off time-out. During this x-off time-

out, no data should be sent, and after x-off time-out, data sending can be resumed at up to

MinBrForHsdpa for QoS3 RABs (unless a new x-off is received). If data from a QoS is blocked during

10 ms, all non-conversational QoS RABs will receive an x-off message indicating congestion

(congestionFlag set to true). While the x-off time-out is running, data rate for QoS3 RABs is limited

to MinBR. QoS3 RABs with no MinBR configured should not be allowed to send any data until x-off time-out expiry or reception of an x-on message.

After x-off time-out expiry, data sending for QoS3 RABs may resume, slowly increasing from MinBR

up to the maximum bit rate allowed by credits sent by Node B and by data availability from Iu side.

Data can thus be sent at at least MinBR at all times, except during x-off time-outs. If averaged over a “reasonable” duration, MinBR can then be met, unless the operator allows too much overbooking. In that latter case, there will be too many x-off messages and MinBR cannot be met.
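To summarize sections 7.2.1 and 7.2.2 in one place, the sketch below models the RLC-side reaction of a single RAB to the transport feedback: an x-off with congestionFlag false stops the RAB, an x-off with congestionFlag true caps it at its floor rate (GBR for streaming, MinBR for interactive/background, possibly 0), and after x-on or time-out expiry the rate slow-starts back towards the maximum. The slow-start step, the time units and the class name are assumptions made for this example, not the product algorithm.

```python
# Illustrative RLC-side reaction of one RAB to x-off/x-on feedback from transport.
# Units (kbit/s, seconds), the slow-start step and the class name are assumptions.
class RabRateController:
    def __init__(self, floor_rate_kbps, max_rate_kbps, slow_start_step_kbps=32):
        self.floor = floor_rate_kbps      # GBR for streaming, MinBR for I/B (may be 0)
        self.max = max_rate_kbps
        self.step = slow_start_step_kbps  # assumed slow-start increment per period
        self.rate = max_rate_kbps
        self.blocked_until = 0.0          # no data at all before this time
        self.capped_until = 0.0           # capped at the floor rate before this time

    def on_xoff(self, now, timeout, congestion_flag):
        self.rate = self.floor
        if not congestion_flag or self.floor == 0:
            self.blocked_until = now + timeout   # case 1, or no MinBR configured: stop completely
        else:
            self.capped_until = now + timeout    # case 2: keep sending, but at GBR/MinBR only

    def on_xon(self, now):
        self.blocked_until = self.capped_until = now   # end of congestion: slow start from here

    def next_rate(self, now):
        """Rate usable for the next scheduling period (slow start towards the maximum)."""
        if now < self.blocked_until:
            return 0.0
        if now < self.capped_until:
            return self.floor
        allowed = max(self.rate, self.floor)
        self.rate = min(self.max, allowed + self.step)   # progressive increase after the time-out
        return allowed
```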

8 ABBREVIATIONS AND DEFINITIONS

8.1 ABBREVIATIONS

3G Third Generation

3GPP Third Generation Partnership Project

A2EA AAL-2 service Endpoint Address

AAL-2 ATM Adaptation Layer type 2

ADSL Asymmetric DSL

AESA ATM End System Address

AIS Alarm Indication Signal

ALCAP Access Link Control Application Protocol

ANSI American National Standards Institute

APS Automatic Protection Switch

ATC ATM Transfer Capability

ATM Asynchronous Transfer Mode


BBU Base Band Unit

BP Bandwidth Pool

BpThn Backpressure Threshold level n (n= 1, 2 or 3).

BW BandWidth

CAC Connection Admission Control

CBR Constant Bit Rate

CFN Connection Frame Number

CN Core Network

CR Committed Rate

CRNC Controlling RNC

CS Circuit Switched

DCH Dedicated Channel

DHO Diversity HandOver

DL DownLink

DPC Destination Point Code

DRNC Drift RNC

DRT Delay Reference Time

DS Delay Sensitive

DSCP Differentiated Services Code Point

DSL Digital Subscriber Line

ECR Equivalent Cell Rate

E-DCH Enhanced DCH (=HSUPA)

EBR Equivalent Bit Rate

ER Equivalent Rate

ERQ Establish ReQuest

FN Frame Number

FN Functional Note

FP Frame Protocol

FSN Frame Sequence Number

GBR Guaranteed Bit Rate

GPRS General Packet Radio Service

H-BBU HSPA BBU


HLR Home Location Register

HSDPA High Speed Downlink Packet Access

HSDSCH High Speed Downlink Shared CHannel

HSPA High Speed Packet Access (=HSDPA + HSUPA)

HSUPA High Speed Uplink Packet Access (=E-DCH)

ICP IMA Control Protocol

IETF Internet Engineering Task Force

IMA Inverse Multiplexing for ATM

IP Internet Protocol

ITU-T International Telecommunication Union – Telecommunication Standards

sector

LIU Line Interface Unit

LTOA Latest Time Of Arrival

MAC Medium Access Control

MAC-d MAC entity in charge of DCH

MAC-hs MAC entity in charge of HSDPA

MaxBR Maximum Bit Rate

MinBR Minimum Bit Rate

MGC Media Gateway Controller (also called MSC server)

MGW Media GateWay

MSC Mobile Switching Center

MTP3-B Message Transfer Part Layer 3 – Broadband

NBAP Node B Application Part

NDS Non Delay Sensitive

NFD Network Facilities Data

NGN Next Generation Network

NNI Network Network Interface

O&M Operation & Maintenance

OAM Operation, Administration and Maintenance

OC-3 Optical Carrier 3 (155 Mbit/s)

PC Point Code

PCR Peak Cell Rate

PDH Plesiochronous Digital Hierarchy


PDU Protocol Data Unit

PMD sublayer Physical Medium Dependent sublayer

PNNI Private NNI

PR Peak Rate

PS Packet Switched

PSN Packet Switched Network

PW PseudoWire

PWE3 PseudoWire Edge to Edge Emulation

QAAL2 = AAL type 2 signalling protocol: specified in ITU-T recommendations

Q.2630.1 and Q.2630.2

QoS Quality of Service

RAB Radio Access Bearer

RAN Radio Access Network

RANAP Radio Access Network Application Part

RDI Remote Defect Indication

RLC Radio Link Control

RNC Radio Network Controller

RNCWS RNC Window Start

RNL Radio Network Layer

RNS Radio network subsystem

RNSAP Radio network subsystem Application part

SSCF Service Specific Coordination Function

SCCP Signalling Connection Control Part

SCR Sustainable Cell Rate

SDH Synchronous Digital Hierarchy

SDSL Symmetric DSL

SDU Service Data Unit

SGSN Serving GPRS Support Node

SHO Soft Hand Over

SONET Synchronous Optical Network

SRNC Serving RNC

SS7 Signalling System number 7


SSSAR Service Specific Segmentation and Reassembly Sublayer

STM-1 Synchronous Transport Module level 1 (155 Mbit/s)

STP Signalling Transfer Point

TC Traffic Class

TC sublayer Transmission Convergence sublayer

TNL Transport Network Layer

TOA Time Of Arrival

TOAWE Time Of Arrival Window End

TOAWS Time Of Arrival Window Start

TR Technical Report

TS Technical Specification

TTI Transmission Time Interval

UBR Unspecified Bit Rate

UBR+ Unspecified Bit Rate, with the additional indication of “MDCR”: Minimum

Desired Cell Rate

UE User Equipment

UL UpLink

UMTS Universal Mobile Telecommunications System

UNI User Network Interface

UP User Plane

URC Universal Radio Controller

UTRAN Universal Terrestrial Radio Access Network

VBR Variable Bit Rate

VBR-RT Variable Bit Rate Real Time

VBR-nRT Variable Bit Rate non Real Time

VC Virtual Channel

VCC Virtual Channel Connection

VCI Virtual Channel Identifier

VCL Virtual Circuit Link

VCLTP ATM VCL Termination Point

VP Virtual Path

VPC Virtual Path Connection


VPI Virtual Path Identifier

VPT Virtual Path Termination

8.2 DEFINITIONS

TERM DEFINITION

Network Facilities Data: The Network Facilities Data Record is stored in the backplane memory of the OneBTS. It contains parameters / information relating to such things as ATM ports / links, etc. This data is non-volatile and is read at boot to configure the application.

END OF DOCUMENT