UMTS Performance Optimization and Acceptance Guidelines

Contents

1. INTRODUCTION ... 6
   1.1. OBJECT ... 6
   1.2. Scope of this document ... 6
   1.3. Key Performance Indicator values and Definitions ... 7
   1.4. Audience for this document ... 7
2. Related documents ... 7
   2.1. Applicable documents ... 7
   2.2. Reference documents ... 7
3. General Methodology ... 9
4. Assumptions ... 10
   4.1. General Assumptions ... 10
   4.2. Specific Assumptions ... 11
      4.2.1. Design criteria ... 11
      4.2.2. Performance criteria ... 11
   4.3. Warnings on metric definitions ... 11
   4.4. Notation ... 11
5. Test Areas ... 12
   5.1. Routes ... 12
   5.2. In-building Test Areas ... 12
   5.3. Special Test Areas ... 12
   5.4. Warranty Zone ... 12
6. Simulated Traffic Load (if applicable) ... 13
   6.1. Uplink ... 13
   6.2. Downlink ... 14
7. Test Setup ... 14
8. Performance Metrics ... 14
   8.1. Combined Metric: Successful Call Rate ... 15
   8.2. Detailed Metrics ... 16
9. System Optimization process ... 16
   9.1. Configuration Audit ... 19
   9.2. Cell Shakedown ... 20
      9.2.1. Objective ... 20
      9.2.2. Entrance Criteria ... 20
      9.2.3. RF BASIC checking ... 20
      9.2.4. Drive Routes ... 21
      9.2.5. Test Drives ... 21
      9.2.6. Test Areas (indoor Site) ... 21
      9.2.7. Exit Criteria ... 21
      9.2.8. Issue List ... 21
   9.3. Cluster Optimization ... 22
      9.3.1. Objective ... 22
      9.3.2. Entrance Criteria ... 22
      9.3.3. Drive Routes ... 23
      9.3.4. Test Drives ... 23
      9.3.5. Data Analysis / Problem Resolution ... 26
      9.3.6. Exit Criteria ... 26
      9.3.7. Reports ... 27
      9.3.8. Issue List ... 27
   9.4. System Optimization ... 27
      9.4.1. Objective ... 27
      9.4.2. Entrance Criteria ... 28
      9.4.3. Drive Routes ... 28
      9.4.4. Drive TESTs ... 28
      9.4.5. Data Analysis / Problem Resolution ... 28
      9.4.6. Exit Criteria ... 28
      9.4.7. Reports ... 29
      9.4.8. Issue List ... 29
10. Reports ... 30
   10.1. Performance Result Report ... 30
      10.1.1. Geodetic Information ... 30
      10.1.2. Key Performance Indicators (KPIs) Reports ... 30
   10.2. Performance Measurement Reports ... 31
   10.3. Test Environment Report ... 31
   10.4. Issue List ... 31
11. OCNS unavailability ... 32
12. Abbreviations and definitions ... 32
   12.1. Abbreviations ... 32
   12.2. Definitions ... 34
13. Appendix 1: Test Equipment List ... 36
14. Appendix 2: Performance Metrics ... 37
   14.1. KPI Indicative values ... 37
   14.2. Detailed metrics for acceptance and optimization ... 38
      14.2.1. Acceptance Metrics ... 38
      14.2.2. Optimization Metrics ... 44
   14.3. Measurement Uncertainty ... 53
      14.3.1. Tolerance Interval of One Data Population ... 53
      14.3.2. Tolerance Interval of Several Data Populations ... 54
      14.3.3. Table z-values vs. confidence levels ... 56
15. Appendix 3: Overall planning for performance acceptance ... 57
   15.1. Optimization resources ... 57
      15.1.1. IS-95 Experience ... 57
      15.1.2. Teams ... 57
      15.1.3. Extrapolation to UMTS ... 58

1. INTRODUCTION

1.1. OBJECT

This document describes a generic procedure for performing UMTS network performance optimization and acceptance tests, as well as the main performance metrics that require field assessment for the set of services provided through the network. Procedures, conditions and performance metrics for actual contracts may differ from those presented and described in this document. This is a generic, internal document: it provides guidelines and a base for writing the actual Network Performance Acceptance Test Plan. It must not be communicated directly to the customer.

1.2. Scope of this document

This document defines two key processes carried out before launching a UMTS network: performance optimization and performance acceptance.

Performance Optimization
A detailed system optimization process is needed to ensure that each of the System Performance Metrics can be achieved. The objectives of the performance optimization process are to:
§ define a uniform optimization procedure
§ define uniform measurement standards (test set-up, route selection, test area selection, etc.)
§ set up clear measurement methods agreed between the customer and Nortel Networks
§ set up a process for following up on issues and work yet to be completed (issue lists)
§ track performance improvement trends over time

Performance Acceptance
Performance acceptance can be defined as the final performance test before commercial launch of a network. The so-called "in-service performance acceptance" (when acceptance is actually performed on a commercial network) is outside the scope of this document. The performance acceptance test validates the network design and shows compliance with contractual commitments.
The criteria and process for performance acceptance must be clearly defined in order to ensure consistency over the whole system. The detailed measurement process for each metric will be defined, including all necessary sign-off requirements.
Because UMTS collection and analysis hardware and software are not yet available, the methods presented here will be updated as the equipment becomes available.

1.3. Key Performance Indicator values and Definitions

The Product-KPI values and default definitions may be found in [R1]. However, the definitions are recalled in this document since some have to be slightly adapted to the measurement methods and devices (availability of points of measurement, post-processing ability...). The Network-KPI values (including network engineering) must be evaluated on a case-by-case basis to include contract specificities. In any case, when writing the final, customized Acceptance Test Plan for the customer, values and definitions must be adapted to customer requirements and network peculiarities.

1.4. Audience for this document

This document is internal and is mainly intended to provide generic Performance Acceptance Guidelines that may be used to customize each customer's Acceptance Test Plan. The resulting Acceptance Test Plan must be signed off by both the Nortel Networks and customer acceptance primes.
The intended audience includes teams dealing with:
· Network performance acceptance
· RF engineering
· Network monitoring (metric implementation)
· Network performance contract negotiations

2. Related documents

2.1. Applicable documents

All customized UMTS network performance acceptance test plans.

2.2. Reference documents

[R1] UMT/SYS/DD/72 v01.02 UMTS Key Performance Indicators (KPI) Design [Draft]
[R2] 3GPP TS 21.904 v3.3.0 UE Capability Requirements
[R3] 3GPP TS 22.105 v3.10.0 Service aspects; Services and service capabilities
[R4] 3GPP TS 23.003 v3.7.0 Numbering, addressing and identification
[R5] 3GPP TS 23.107 v3.5.0 QoS Concept and Architecture
[R6] 3GPP TS 24.008 v3.6.0 Mobile radio interface layer 3 – Core Network Protocols – Stage 3
[R7] 3GPP TS 25.101 v3.6.0 UE Radio Transmission and Reception (FDD)
[R8] 3GPP TS 25.215 v3.4.0 Physical layer – Measurements (FDD)
[R9] 3GPP TS 25.303 v3.7.0 Interlayer Procedures in Connected Mode
[R10] 3GPP TS 25.331 v3.6.0 RRC Protocol Specification (Release 1999)
[R11] 3GPP TS 25.413 v3.5.0 UTRAN Iu Interface RANAP Signalling
[R12] ITU-T I.356 Integrated Services Digital Network; Overall network aspects and functions – Performance objectives
[R13] ITU-T P.800 Methods for objective and subjective assessment of quality – Methods for subjective determination of transmission quality
[R14] ITU-T P.861 Methods for objective and subjective assessment of quality – Objective quality measurement of telephone-band (300-3400 Hz) speech codecs
[R15] RFC-2330 Framework for IP Performance Metrics
[R16] RFC-2679 A One-way Delay Metric for IPPM
[R17] RFC-2680 A One-way Packet Loss Metric for IPPM
[R18] PE/IRC/APP/154 E2E Voice Quality User Manual
[R19] TBD 1xRTT RF Acceptance Guidelines
[-] TBD UMTS OCNS settings (not available before the first UMTS commercial release or later)

3. General Methodology

UMTS systems introduce a wide range of applications. Just as roads, streets or buildings are chosen by both parties to be geographically representative, a set of representative applications and metrics has to be mutually agreed between the customer and Nortel Networks. The suggested methodology tries to combine measurement efficiency with user experience.
During the same tests, both CS and PS domains will be tested simultaneously. On the CS domain, voice will be used as the application, while a mixture of ping and file transfer by FTP will be used on the PS domain. Only one combined metric (the successful call rate) will characterize the performance acceptance, based on a pass-or-fail criterion. If more in-depth tests are requested, a specific cluster is selected to test the targeted applications thoroughly. Results are then expected to be extrapolated to the entire network, provided the same tuning is duplicated.
Traffic (voice or data) should be generated by tools for repeatability and controllability purposes. All tests must be done in a controlled environment, meaning that all mobile devices are strictly controlled by the acceptance team and only the test equipment is running in the tested areas.
As defined in the standards [R5], the end-to-end QoS is understood as including all parts of the network. Our goal, as a vendor, is to provide and commit to performance on the network part we are providing to our customer. Therefore, we will concentrate as much as possible on the UMTS Bearer, excluding as far as possible the external bearer and the TE/MT local bearer.
As a reminder, the standards (please refer to [R3]) distinguish several types of information transfer according to three main categories:
· connection oriented vs. connectionless
· Traffic type:
  ▪ guaranteed bit rate vs. non-guaranteed bit rate
  ▪ constant bit rate vs. dynamically variable bit rate
  ▪ real time vs. non real time
· Traffic characteristics:
  ▪ point to point (uni-directional, bi-directional symmetric, bi-directional asymmetric)
  ▪ point to multipoint (uni-directional multicast, uni-directional broadcast)
For optimization and acceptance purposes we will keep the focus on bi-directional, point-to-point, connection-oriented services.
The same document [R3] defines the following attributes for information quality:
· maximum transfer delay (end-to-end one-way delay)
· delay variation
· information loss (error ratio: bit error ratio, SDU error ratio or frame error ratio, whichever is the most appropriate)
· data rate
The UMTS Bearer Attributes defined in the standards [R5] are:
· Traffic Class
· Maximum Bit Rate
· Guaranteed Bit Rate
· Delivery Order
· Maximum SDU Size
· SDU Format Information
· SDU Error Ratio
· Residual Bit Error Rate
· Delivery of Erroneous SDUs
· Transfer Delay
· Traffic Handling Priority
· Allocation/Retention Priority
However, not all of these attributes apply to every traffic class. The following table lists the applicability of the attributes according to the traffic class ([R5] section 6.4.3.3, table 2, "UMTS bearer attributes defined for each bearer traffic class"):

Traffic class / attribute        Conversational  Streaming  Interactive  Background
Maximum bit rate                       X             X           X           X
Delivery order                         X             X           X           X
Maximum SDU size                       X             X           X           X
SDU format information                 X             X
SDU error ratio                        X             X           X           X
Residual bit error ratio               X             X           X           X
Delivery of erroneous SDUs             X             X           X           X
Transfer delay                         X             X
Guaranteed bit rate                    X             X
Traffic handling priority                                        X
Allocation/Retention priority          X             X           X           X

As far as optimization and acceptance metrics are concerned, we will focus on information quality measurements.

4. Assumptions

4.1. General Assumptions

This document is written with the following assumptions:
§ Logging (call trace family) is an RNC feature available and fully operational at optimization and acceptance time.
§ Hardware equipment and the corresponding software tools are available and fully operational.
§ For loaded conditions (if applicable), OCNS is a feature available and fully operational at optimization and acceptance time[1].

4.2. Specific Assumptions

4.2.1. Design criteria

The list of design criteria must be provided in the acceptance document. Some of the network performance metrics are influenced by these criteria (e.g. dropped call rate, call setup success, blocking rate). The list should include at least (but is not limited to):
§ cell edge reliability
§ coverage targets (per environment, per associated quality of service)
§ traffic assumptions (in kbps, per environment, uplink and downlink)
§ services to be offered (various transmission rates, per environment)
§ link budgets (including at the very least quality of service thresholds such as Eb/N0 targets per service or BLER and FER targets, radio margin assumptions and antenna assumptions; a short worked sketch follows this list)
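
As an illustration of how the link-budget entries above fit together, the following minimal Python sketch combines assumed example figures: the UE transmit power, Node-B noise figure, antenna gains and lumped margins are illustrative placeholders, not design or contract values.

    import math

    # Illustrative uplink link-budget check; all numeric defaults are example
    # values, not design or contract figures.
    CHIP_RATE_HZ = 3.84e6          # UMTS chip rate
    THERMAL_NOISE_DBM_HZ = -174.0  # thermal noise density

    def max_allowed_path_loss(service_rate_bps, eb_n0_target_db,
                              ue_tx_power_dbm=21.0, node_b_noise_figure_db=3.0,
                              fractional_load=0.5, antenna_gains_db=17.0,
                              lumped_margins_db=12.0):
        """Maximum allowed uplink path loss (dB) for one service."""
        processing_gain_db = 10 * math.log10(CHIP_RATE_HZ / service_rate_bps)
        noise_power_dbm = (THERMAL_NOISE_DBM_HZ + 10 * math.log10(CHIP_RATE_HZ)
                           + node_b_noise_figure_db)
        interference_margin_db = -10 * math.log10(1 - fractional_load)
        sensitivity_dbm = noise_power_dbm + eb_n0_target_db - processing_gain_db
        return (ue_tx_power_dbm - sensitivity_dbm - interference_margin_db
                + antenna_gains_db - lumped_margins_db)

    # Example: 12.2 kbps voice, 5 dB Eb/N0 target, 50 % load -> roughly 148 dB
    print(round(max_allowed_path_loss(12200, 5.0), 1))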

4.2.2. Performance criteria

A reference should be provided to a table including all performance targets to be reached. A threshold shall be provided for each metric and is to be met during acceptance.
Values can be divided into two main groups: mainly equipment-driven values (such as call setup time or transfer delay) and mainly design-driven values (such as dropped call rate or call setup success rate). Please refer to [R1] for the former and to the specific contract design assumptions for the latter.
These values have to be mutually agreed by the customer and Nortel Networks and must follow from the design criteria and the known or planned performance of the network elements.

4.3. Warnings on metric definitions

The metric definitions in section 14.2 (page 38) do not fully take the measurement methodology into account. Some of the messages may only be accessible through test mobiles or logging features. Counters may not be available or may not have a granularity accurate enough to perform the desired calculations. In many cases, a call context must be used to link the different messages and to itemize the metrics according to causes notified in other signaling messages during the call.
All metrics are intended to be calculated on a network-wide basis or, at the very least, on a per-cluster basis.

4.4. Notation

In the following formulas, messages are generally written in italics with the following notation:

ProtocolAbbreviation_MessageName(Cause)

The abbreviations are detailed in the "Abbreviations and Definitions" chapter, page 32. The message name is the one used in the corresponding standard. The cause generally indicates a group of causes to be used to filter the related messages; it does not necessarily correspond to a cause described in the standards (explanations are given, as far as possible, to relate it to the causes described in the standards).

5. Test Areas

The coverage area for test consists of drive-test routes, in-building and special test areas within the designed service area. The set of driven roads, in-building measurement areas and special test areas shall be mutually agreed between the customer and Nortel Networks.

5.1. Routes

The drive routes shall cover the areas of interest and include both primary and secondary roads. In addition, routes shall be chosen to ensure a sampling of the main environments encountered in the designed service area (such as dense urban, urban, suburban and rural morphologies). During data collection, the test routes shall be driven at speeds representative of normal subscriber behavior. To ensure repeatable results, speed shall be carefully checked all along the drive tests.
Should acceptance be performed with different performance thresholds (such as high requirements in dense urban areas, medium requirements in suburban areas and low requirements in rural areas), the routes will be carefully designed to keep each drive test within an area of one and only one type of coverage requirement (e.g. avoiding a drive that overlaps dense urban and suburban areas in the previous example).

5.2. In-building Test Areas

As far as possible, in-building coverage from external site infrastructure should not be tested. However, building penetration margins specified in the design may be verified at street level using attenuators at the mobile (only RSSI would be checked, since the Ec/N0 stays unchanged because it is a ratio). Specific buildings requiring indoor testing shall be clearly defined (building location, service areas and test areas within the building, interaction with external coverage). The customer shall provide Nortel Networks with access to all defined test areas within these buildings.

5.3. Special Test Areas

Special test areas (such as trains, subways, water areas, etc.) may be included in the performance acceptance. A specific process will be designed for such areas in agreement between the customer and Nortel Networks.

5.4. Warranty Zone

Maps accurately defining the warranty zone have to be provided and appended to the Performance Acceptance Test Plan. The tests will be confined to this zone and any measurement falling outside it shall be discarded.
The warranty zone shall be determined from predictions produced by the Nortel Networks prediction tool (iPlanner) and shall be based on real site coverage (this means that all sites shall be CW-drive tested)[2]. Typically, the warranty zone will be defined from the P-CPICH Ec/I0 (downlink) and the uplink budget.
By definition, the warranty area is the design coverage area diminished by deployment restrictions, exclusions and specific non-warranty areas (e.g. tunnels, specific bridges...). If some of the sites included in the warranty area are not up and running at the beginning of the first drive test, the warranty zone has to be reviewed to include only up-and-running sites.

6. Simulated Traffic Load (if applicable)[3]

The RF performance of the UMTS network should be evaluated with mobile units operating under nominal loading conditions. For performance acceptance, tests may be conducted with a simulated traffic load in addition to the unloaded tests. This traffic load shall correspond to the number of Erlangs and kbps for which the system has been designed, as defined in the Final Cell Plan. The general method of loading shall be as follows.

6.1. Uplink

Uplink load simulation shall be conducted by inserting a fixed attenuator (most likely 3 dB, and not to exceed 5.2 dB) in the transmit path of the mobile unit. The exact value of the attenuator shall be consistent with the equivalent noise rise at the cell site receiver generated by the traffic load (at the busy hour) used in the design assumptions. The equivalent noise rise (interference margin) shall be determined by expressing the number of channels n associated with the traffic load as a fraction of the pole point P:

n = number of channels
P = sector pole point
r = n / P = fractional loading with respect to the pole point

The design value of r shall be 50 % by default (equivalent noise rise of 3 dB). It shall not exceed 70 % in any case (equivalent noise rise of 5.23 dB). During load simulation testing, the noise floor rise should be measured to verify the 3 dB theoretical value (at r = 50 % load).
It has to be noted that this setting is pessimistic: the probability of several overlapping cells all being at their maximum load at the same time is close to zero in normal conditions (unsaturated network). Therefore, the above settings should be corrected by a factor taking into account a given amount of load balancing between cells. The value of this factor is FFS.
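
The 3 dB and 5.23 dB figures above follow the usual pole-capacity relation, noise rise = -10 log10(1 - r). A minimal sketch, assuming that relation, for turning a design load into the attenuator value:

    import math

    def uplink_noise_rise_db(fractional_load):
        """Equivalent noise rise (interference margin) for a load r = n / P."""
        if not 0 <= fractional_load < 1:
            raise ValueError("fractional load r must be in [0, 1)")
        return -10 * math.log10(1 - fractional_load)

    print(round(uplink_noise_rise_db(0.50), 2))  # 3.01 dB -> default ~3 dB attenuator
    print(round(uplink_noise_rise_db(0.70), 2))  # 5.23 dB -> stated upper limit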

6.2. Downlink

Downlink load simulation shall be conducted by using Orthogonal Channel Noise Simulation (OCNS). This feature should be part of Nortel Networks' UMTS RNS[4] and will be used to broadcast downlink orthogonal interference of an appropriate level. This interference shall be consistent with the traffic load (at the busy hour) defined in the Final Cell Plan. The number, level and channel activity of the interfering links shall be defined by mutual agreement between the customer and Nortel Networks. OCNS settings will be detailed in a referenced document [FFS].

7. Test Setup

All tests shall be performed using characterized and calibrated test mobiles that meet or exceed the minimum performance specifications of [R2] and [R6]. Test mobiles shall have fixed attenuators connected between the transceiver and the antenna to compensate for penetration loss, additional vehicle height and the performance of the test vehicle antenna, as appropriate (this assumes that duplexers are used to separate uplink transmission and downlink reception). Additional attenuation may also be added to simulate uplink system load (if applicable) and/or building penetration loss (indoor coverage from outdoor sites). This assumes adequate packaging: screened boxes providing radio isolation at the attenuators need to be used with such a test set. The required test equipment is detailed in "Appendix 1: Test Equipment List", page 36.

8. Performance Metrics

This section describes the test procedures used to determine the system performance metrics as defined between the customer and Nortel Networks for the performance acceptance. All tests should be completed in a totally controlled environment under unloaded conditions. Depending on feature availability and on the load and/or ramp-up load of the network, tests under loaded conditions might be performed according to the design-assumed load (by default, 50 % loading relative to the pole capacity). All metrics will be measured along the drive test routes and in the specific test areas. The preferred way of simplifying acceptance tests is to focus on only one combined metric linked to a simple pass-or-fail criterion.

8.1. Combined Metric: Successful Call Rate

Objective

The suggested method for acceptance is to state a pass-or-fail criterion based on a single successful call metric. This metric actually combines several of the typical metrics (see section 8.2, page 16).

Definition

For a call to be successful, it must be correctly set up within a given time, hold for the average holding time (AHT), successfully hand over if necessary (within the 3G coverage), satisfy all defined quality criteria and correctly release its resources. The successful call rate is defined as the ratio of the number of successful calls to the total number of attempts. By this definition, the "successful call rate" metric combines the call setup success rate, the call setup time, the dropped call rate, the quality thresholds, the successful handover rate and the successful release rate. It has to be emphasized that this metric applies to the 3G coverage only; inter-RAT hard handover must not be part of it. Examples of target values and quality thresholds for this metric can be found in section 14.1, page 37.

Measurement Method

Calls (either voice calls or data calls) will be chained during the entire test. To obtain enough samples, it is recommended to have more than one mobile per tested application (e.g. two mobiles for voice calls, two mobiles for data calls). To get closer to a "one-shot" test, mobile-originated and mobile-terminated calls could be mixed during the test (with a proportion to be agreed between the customer and Nortel Networks, e.g. 50 % mobile-originated, 50 % mobile-terminated). In this case, the impact of paging (paging success rate) must be considered in the target values (either a longer setup time or a lower success rate). Depending on the capabilities of the measurement tool, mobile-terminated and mobile-originated calls could be passed on different physical sets (e.g. one mobile for voice mobile-terminated calls, one mobile for voice mobile-originated calls).
For each call, the post-processing tool should produce a binary result, "pass" or "fail". A call passes if it is successful, as per the definition. The final successful call rate is then computed by dividing the number of successful calls by the total number of call attempts.
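
As a sketch of the intended post-processing (the per-call record fields below are hypothetical; the actual fields depend on the measurement tool), the pass-or-fail decision and the final rate could be computed as follows:

    from dataclasses import dataclass

    @dataclass
    class CallResult:
        # Hypothetical per-call record produced by the post-processing tool
        setup_ok: bool      # set up correctly within the agreed setup time
        held_ok: bool       # no drop during the average holding time (AHT)
        handover_ok: bool   # all intra-3G handovers succeeded (or none needed)
        quality_ok: bool    # all agreed quality thresholds met
        release_ok: bool    # resources correctly released

        def passed(self) -> bool:
            return all((self.setup_ok, self.held_ok, self.handover_ok,
                        self.quality_ok, self.release_ok))

    def successful_call_rate(calls):
        """Number of successful calls divided by the total number of attempts."""
        return sum(c.passed() for c in calls) / len(calls)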

8.2. Detailed Metrics

On the customer's request, detailed metrics may be preferred to the successful call rate. In that case, the metrics that can be measured for acceptance are:
· Dropped Call Rate
· Call Setup Success Rate (originations)
· Call Setup Time (originations and terminations)
· Paging Success Rate (it might be combined with the call setup success rate in a "call setup success rate (terminations)" metric)
· Bit Rate (minimum, average, maximum; both ways)
· File Transfer Time (one-way, measured on both ways)
· Voice Quality (typically MOS- or DMOS-like tests)
For tests under loaded conditions, the following metrics may be added to the previous ones:
· Blocking (radio)
· RAB Unavailability Rate
Please refer to section 14.2, page 38, for detailed descriptions of each metric, including metrics used for optimization but not for acceptance.

9. System Optimization process

The network optimization consists of four different stages, described hereafter.

Configuration Audit
Configuration audits are the preparatory work that has to be done before the sites are on air. This includes system-wide activities and tests performed on each base station (noise floor measurements, installation inspections, etc.). Some or all of this work might already have been done during installation, commissioning and hardware acceptance.

Shakedown
Shakedown tests are done to verify the basic functionality of the RNS rather than its performance. Transmit power, code allocation, RRM algorithm parameters and basic call processing functions are tested on a per-sector basis. Shakedown tests are completed when the correct codes are set up, power levels are measured, and basic call setup and softer handoffs are successfully executed during drive tests around the site.

Cluster Optimization
Cluster optimization provides the first pass of optimization and will adjust parameter settings, antenna tilts and antenna orientations where required. It is estimated to take between 1 and 1.5 days per site for one drive-test team[5] in total across all rounds, depending on the number of problems encountered (i.e. approximately 6 weeks per cluster of approximately 30 cells with one drive-test team, all rounds included). For more information about planning and resource estimation, please see appendix 6, page 38.

System Optimization (Control Route Analysis)
System optimization along the control routes is intended to further tune the system in preparation for commercial clients. In large markets, some of the system optimization activities will be performed in parallel with the optimization of the remaining clusters. Tests will use routes over multiple clusters. Hereafter is a chart describing the process:

9.1. Configuration Audit

Each of the following criteria must be met prior to advancing to the next phase of testing.

Core Network Elements and Access Network Elements

The NEs (Node-Bs, RNCs, WGs, MSCs, GGSNs and other core network elements) should have passed functional acceptance (all features tested and accepted). Further testing might occur to the extent that it does not interfere with the performance optimization tests. Additionally, network testing and inter-working must have progressed to the point that basic call routing (voice and data) has been validated (i.e. local calls can be completed) for each of the NEs.

Cell Site Information

The plan must identify the date on which each cell site will be installed and integrated. To the extent possible, the schedule for Node-B installation and integration should be based on the clusters defined in the RF plan.

Spectrum Clearance
The spectrum should be cleared over the whole service area. Noise floor measurements over the whole channel bandwidth, for all the carriers used, should reach at least the minimum requirements (expected values for the maximum noise power allowed in the channel bandwidth are typically around –105 dBm; a short sanity check of this figure is sketched at the end of this section). A plot from at least 30 % of all the sites in the system (evenly geographically distributed), indicating the noise floor of the system, should be provided. This step is not correlated with Node-B installation: the measurement is done with a spectrum analyzer (on the uplink and downlink bands) and does not require the Node-B equipment to be on site. The noise floor can also be checked with a drive test receiver (such as the Agilent receiver referenced in section 13, page 36).

RF Design Information
Details of the RF design must be provided, including:
· geographical maps from the RF design tool with site positions
· predictive coverage plots (Ec/N0, downlink-uplink balance, uplink (mobile transmitted power) and number of pilots)
· predicted soft/softer handoff plots
· CW drive test data
· on-the-field RF data: as-built antenna heights (at the radiating center), as-built antenna azimuths, as-built antenna tilts.

Antenna Sweep Tests
Antenna sweep data shall be provided for each cell.

Construction Issue List
An issue list should be provided for each site (issues such as bad antenna orientation, bad tilts, cable problems, wrong connections...) together with the plan to solve them.

Directions to each site
Written directions to the location of each site in the system should be provided (maps are welcome to pinpoint the locations).

Cell Database and Datafill
A complete information set (soft copy, DRF format) should be provided for each cell:
· database information with coordinates (longitude, latitude), number of antennas (use of diversity or not), antenna type, antenna model, tilt, construction drawing, cable lengths and equipment configuration; this information should be itemized on a per-sector basis where needed (antenna height, azimuth, scrambling code, antenna type, model, tilts...)
· datafill information: all parameter values from the in-service MIB.
All the information listed above is to be provided as a soft copy prior to the Shakedown phase of the project. The information should be gathered and placed in a binder for each site identified in the system. For border cells (cells belonging to more than one cluster) the information can be photocopied and placed in the appropriate binders.
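
As announced under Spectrum Clearance above, here is a short sanity check of the approximately –105 dBm noise-floor target; this is assumed reasoning for illustration, not a figure taken from the design documents.

    import math

    K_T_DBM_PER_HZ = -174.0     # thermal noise density at roughly 290 K
    RX_BANDWIDTH_HZ = 3.84e6    # WCDMA receiver (chip-rate) bandwidth

    thermal_floor_dbm = K_T_DBM_PER_HZ + 10 * math.log10(RX_BANDWIDTH_HZ)
    print(round(thermal_floor_dbm, 1))          # about -108.2 dBm in 3.84 MHz

    # A -105 dBm limit therefore leaves only ~3 dB for external interference
    # and measurement noise figure above the thermal floor.
    print(round(-105 - thermal_floor_dbm, 1))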

9.2. Cell Shakedown

9.2.1. Objective

The objective of the cell shakedown is to ensure proper cell site operation in the integrated system. Proper operation is verified through drive tests around the site and evaluation of antenna orientation, cable interconnections and correct code assignments.

9.2.2. Entrance Criteria

Each of the following criteria must be met on each cell of the cluster prior to starting cell verification tests.
1. Antenna sweep tests must be successfully completed. The associated sweep test acceptance forms shall be supplied for the cell site. These tests are useful in verifying that the cells are properly cabled and ready to support RF.
2. Integration tests must be passed and accepted. Both customer and Nortel Networks representatives must have accepted the hardware and the connectivity. Integration tests are used to show construction completion and basic network readiness.
3. RNS, NSS and WG datafill parameters must be verified for accuracy.
4. The spectrum must be cleared.

9.2.3. RF BASIC checking

Receiving Antenna Cabling

The transmission and especially the reception cabling have to be checked. This should have been done during I&C (Installation and Commissioning); if it was not, or if there is any doubt, the cabling check has to be part of these tests. A given power (TBD) shall be transmitted in front of the receiving antennas (main and diversity) and the received power shall be checked at the Node-B and compared to the noise level recorded when the receiver is idle (no signal transmitted). This assumes that the power coming from each antenna can be read independently at the Node-B or at the OMC-B.

Effective Transmit Antenna Orientation

EC/N0 by code is analyzed to determine if all antennas are pointing in the right directions. If an antenna is found to be incorrectly installed, then a work order will have to be issued as soon as possible to fix the problem.

9.2.4. Drive Routes

The drive routes for shakedown are defined as circles around the cell at approximately 30 percent of the expected cell coverage area.

9.2.5. Test Drives

These tests are performed by driving circular routes around the base station.
Purpose: to test call setup in each cell and (softer) handoffs between cells, and to verify antenna orientation, Primary Pilot EC/N0 (PCPICH EC/N0), scrambling code allocation for each sector according to the RF design and neighbor list coherence with engineering requirements. UE transmitted power will be analyzed to double-check possible reception cabling problems.
Call type: voice call; file transfer (FTP).
Load: unloaded.
Data collected: PCPICH EC/N0, UE transmitted power, DL UTRA carrier RSSI, DL transmitted code power and UL RTWP.

9.2.6. Test Areas (indoor Site)

These tests are performed in selected indoor areas where coverage is provided by a specific indoor site.
Purpose: to test indoor handoffs (softer, possibly soft and hard) between cells, if applicable; to test handoffs (soft and possibly hard) between indoor cells and outdoor cells, if applicable; to verify antenna orientations, PCPICH EC/N0 and code assignment according to the RF design.
Call type: voice call; FTP.
Load: unloaded.
Data collected: PCPICH EC/N0, UE transmitted power, DL UTRA carrier RSSI, DL transmitted code power and UL RTWP.

9.2.7. Exit Criteria

Cell testing is completed when the correct codes and reasonable power levels are measured and softer handoffs are successfully executed. Calls on both domains (Circuit Switched and Packet Switched) must have been completed successfully on the site.

9.2.8. Issue List

There should be no issue list after shakedown tests: all problems must have been addressed and resolved before exiting this phase. However, if issues remain, the warranty area might be revised before starting the next phase by excluding the areas affected by the remaining issues. The purpose is to have a warranty area compliant with the exit criteria, i.e. without remaining issues.

9.3. Cluster Optimization

9.3.1. Objective

The objective of cluster optimization is to perform RF design verification, identify and categorize network or coverage problems, optimize for the best performance according to the acceptance criteria and serve as a first pass of problem resolution. Satisfactory system performance is quantified by the successful call rate metric allowing no more than 50 % more failures than permitted by the agreed system-wide threshold.
Example: if the system-wide threshold is 95 % successful calls, each cluster would be required to reach at least 92.5 % successful calls.
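
A minimal sketch of the cluster exit threshold computation implied by the example above (the function name and the 50 % slack default simply restate the rule; they are not a tool API):

    def cluster_exit_threshold(system_threshold, slack=0.5):
        """Minimum per-cluster successful call rate, allowing `slack` (50 % by
        default) more failures than the system-wide threshold permits."""
        allowed_failures = (1.0 - system_threshold) * (1.0 + slack)
        return 1.0 - allowed_failures

    print(cluster_exit_threshold(0.95))   # 0.925 -> 92.5 %, as in the example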

If acceptance based on detailed metrics must be done, satisfactory system performance is quantified by the agreed metric values being within their target thresholds. The system performance metrics need not be met to exit cluster optimization, but should be close enough to the targets to ensure acceptance during the system optimization stage. Areas not meeting system performance must be included in the issue list. Due to the high number of possible application mixes, it is not practical to test all clusters thoroughly with all applications.

9.3.2. Entrance Criteria

Each of the following criteria must be met prior to starting optimization on each cluster:
· All cell sites in the cluster must be integrated and accepted. If one or more sites are not ready at cluster optimization time, the cluster must be redefined in order to exclude these sites (see also the impact on the warranty area, page 12).
· Shakedown tests must have been completed on all cell sites in the cluster.
· All cell sites must have cleared the shakedown test exit criteria.
· The cluster must not have any UMTS mobiles operating other than those of the optimization team (i.e. a controlled environment).
· System performance acceptance criteria are clearly defined and agreed by both the customer and Nortel Networks.
· System performance acceptance deliverables are defined and agreed by both the customer and Nortel Networks.
· Predicted coverage maps per cell are available.
· The databases of the system design and propagation tool accurately reflect the system configuration.
· The warranty area is clearly defined and agreed by both the customer and Nortel Networks. Maps of the warranty area are issued.
· If applicable, OCNS settings have been set up and agreed by both the customer and Nortel Networks.
· Maps of drive test routes and specific data collection spots (including indoor sites) have been reviewed and mutually agreed by both the customer and Nortel Networks.
· The network is locked from any hardware or software changes, except for maintenance purposes (e.g. hardware failure, critical software bug...).
· All logistical hardware and software for performance acceptance processing are on site and ready to work (e.g. calibration hardware and software, coordination procedures...).

9.3.3. Drive Routes

Engineering judgment should be used to determine these routes. Marketing concerns and design issues should also be considered when defining the various drive test routes. Drive test routes must also be designed according to the threshold definitions: if several threshold levels have been defined, then each drive test must be contained in areas where only one threshold set applies.
Example: three different threshold levels are defined depending on morphology: one set for dense urban, one for suburban and one for rural. Drive test routes shall then be designed exclusively within dense urban areas, or within suburban areas, or within rural areas.
On average, clusters should comprise around 30 cells. Drive routes for each cluster will be mutually agreed by the customer and Nortel Networks and will be chosen as follows:

Optimization routes

Optimization routes will be selected from primary roads (e.g. highways) and secondary roads (e.g. city streets of more difficult morphology) within the cluster. In some instances, the engineering team may determine that additional roads should be included in the cluster to ensure adequate performance in critical service areas that are not covered by the primary and secondary road classifications. These routes should encompass all land classifications (morphologies) and areas of engineering concern, and should pass through all inward-facing sectors in the cluster. These drive test routes must be mutually agreed between the customer and Nortel Networks. Drive test routes will not be defined through known bad coverage areas in the test area (i.e. areas with missing cells or areas not planned to be covered). It should take approximately five to ten hours to drive all optimization routes within a typical cluster.

Control Routes

Control routes will be used for demonstrating system performance on an individual cluster basis and on a system-wide basis. These routes are used primarily for performance progress monitoring and should reflect typical network performance. The control routes should be a subset of the optimization routes and should require no more than 3 hours of driving for a 30-cell cluster. These routes should also encompass all roads of primary concern to the engineering staff and cover all land classifications defined within the cluster. When possible, control routes should be chosen to be contiguous with those of neighboring clusters. Control routes are extended as each cluster is exited.
Note: driving times may vary since they depend on the size of the cluster area and on how closely the cells are spaced.

9.3.4. Test Drives

To simplify the process and stay closer to the user experience, only one type of test setup will be used for all drive tests. However, iterations of these drive tests have to be planned to improve the network; these iterations are necessary mainly because of the dynamic nature of the CDMA technology. Each iteration has a different purpose.
Each drive test (except the first one, the pilot optimization drive) will use at least four mobiles: two dedicated to voice calls and two dedicated to data calls. The tests shall follow the following processes during each applicable drive:

PS data calls (MO only):
· set up the call
· hold the call during the holding time [ping RTT, file transfer time]
· release the resources [when the holding time is off]

CS (voice) calls (MO and MT):
· set up the call
· hold the call during the holding time [MOS]
· release the resources [when the holding time is off]

PS calls will follow a test pattern composed of an initial ping, subsequent pings, file transfer by FTP (uplink and/or downlink) and idle states (to activate the downsize/upsize feature, also known as "always-on"). Transferred files must be sized according to RAB rates, holding time and test patterns (a rough sizing sketch is given below). The optimization engineers shall use their judgment in deciding whether or not all test drive iterations are required, and how often and in what order each drive test needs to be performed. In the same spirit, several of the drive tests proposed hereafter can be grouped in the same run (e.g. pilot optimization and radio optimization, grouping the scrambling code analyzer with test mobiles).
Note: RNC logging must be set up for all drives except the pilot optimization drive.
It has to be emphasized that the number of drive tests below is higher than it would be with a mature technology using remote optimization techniques and tools as much as possible. This process shall be updated accordingly when such techniques and tools become available.
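
As referenced above, a rough file-sizing sketch; the function and the 0.8 efficiency factor (protocol overhead, radio conditions) are assumptions for illustration, not values from any test plan:

    def ftp_file_size_bytes(rab_rate_kbps, transfer_window_s, efficiency=0.8):
        """Approximate FTP file size so the transfer fills the intended part of
        the holding time at the nominal RAB rate."""
        return int(rab_rate_kbps * 1000 / 8 * transfer_window_s * efficiency)

    # Example: 64 kbps RAB, 60 s of the holding time devoted to the transfer
    print(ftp_file_size_bytes(64, 60))    # 384000 bytes, i.e. roughly 384 kB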

Pilot Optimization Drive

Route: Optimization routes.
Load: unloaded.
Purpose: to determine the actual pilot coverage for each cluster and solve the main RF propagation issues (pilot shoot-up, pilot pollution, scrambling code overlap...).
Equipment: pilot scanner (e.g. Agilent E7476A).
Applications: none.
Data collected: scrambling code analysis (Ec/N0, delay spread, time), top-N analysis with a window of 20.
Analyses: per-PCPICH coverage; best PCPICH coverage; 2nd, 3rd, 4th, 5th and 6th PCPICH coverage; number of PCPICHs above a given Ec/N0 (typically –12 dB in IS-95 networks)[6].

Radio optimization DRIVE

Routes: Optimization routes.
Load: unloaded.
Purpose: mainly to ensure RF coverage control and RF optimization (antenna azimuths, tilts, power settings, neighbor list tuning).
Process: each call is set up, held during the agreed holding time and released; mobile-originated and mobile-terminated for voice; mobile-originated only for data.
Applications: voice, combined ping and FTP.
Data collected: Ec/N0, UE transmitted power, DL UTRA carrier RSSI, DL transmitted code power, UL RTWP, Eb/N0, average number of radio links, number of dropped calls, voice quality, file transfer time.
Metrics calculated: dropped calls, bit rates (FTP only), voice quality.

Loaded Uu Optimization Drive (If applicable)[7]

This test drive is performed on the optimization routes.
Load: simulated load as per page 13 (only if OCNS is available); unloaded otherwise, and only if substantial RF changes have been made.
Purpose: RF optimization under load.
Process: each call is set up, held during the agreed holding time and released; mobile-originated and mobile-terminated for voice; mobile-originated only for data.
Applications: voice, combined ping and FTP.
Data collected: Ec/N0, UE transmitted power, DL UTRA carrier RSSI, DL transmitted code power, UL RTWP, Eb/N0, average number of radio links, number of dropped calls, voice quality, file transfer times, ping RTTs.
Metrics calculated: dropped calls, bit rates (FTP only), transfer delay (FTP only).

Optimization Check and Fine Tuning Drive

This test drive is performed on the optimization routes.
Load: unloaded; if the loaded Uu optimization drive has been done, then simulated load as per page 13.
Purpose: check and fine-tune the optimization results.
Process: each call is set up, held during the agreed holding time and released; mobile-originated and mobile-terminated for voice; mobile-originated only for data.
Applications: voice (originated and terminated); combined ping and FTP.
Data collected: Ec/N0, UE transmitted power, DL UTRA carrier RSSI, DL transmitted code power, UL RTWP, Eb/N0, average number of radio links, number of dropped calls, voice quality, file transfer times, ping RTTs, call setup successes or failures, call setup times.
Metrics calculated: successful call rate (pass-or-fail), dropped call rate, bit rates (FTP only), call setup success rate, paging success rate.

Acceptance Data collection and Final Cluster Check DRIVE

This test drive is performed on the control routes.
Load: unloaded; if drive #2 was done under loaded conditions, then simulated load as per page 13.
Purpose: check and collect acceptance measurements for this cluster.
Applications: voice (originated and terminated); combined ping and FTP.
Data collected: Ec/N0, UE transmitted power, DL UTRA carrier RSSI, DL transmitted code power, UL RTWP, Eb/N0, average number of radio links, number of dropped calls, voice quality, file transfer times, ping RTTs, call setup successes or failures, call setup times.
Metrics calculated: successful call rate (pass-or-fail), dropped call rate, bit rates (FTP only), call setup success rate, paging success rate.
Supplementary drive tests might be needed if additional tests are required (such as IMSI attach (resp. detach) success rate, IMSI attach (resp. detach) time, location update / registration success rate, and so on).

9.3.5. Data Analysis / Problem Resolution

The following are the basic parameters adjusted during cluster optimization.
1. Neighbor list optimization (based on unloaded lists):
· correct the neighbor list entries (minimize neighbor list length)
· ensure proper prioritization of entries
This can be done with a pilot scanner drive test system and the appropriate processing tool (see page 36); a simplified ranking sketch is given after this list.
2. Coverage problems (can include holes and failures due to multiple or rapidly rising pilots):
· adjust antenna azimuths, heights and types
· downtilt / uptilt antennas
· decrease Node-B power to create a dominant server and reduce pilot pollution[8]
Antenna type and azimuth changes must be approved by the local RF engineer.
3. Network timer checking and tuning.
4. Handoff parameter checking and tuning.
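
As referenced in item 1 above, a simplified, hypothetical sketch of how a prioritized neighbor list could be derived from pilot-scanner top-N data (the data layout and threshold are assumptions, not the format of any specific processing tool):

    from collections import Counter, defaultdict

    def rank_neighbors(samples, ec_no_min_db=-14.0, max_list_len=16):
        """For each best server, rank the other scrambling codes by how often
        they are seen above the Ec/N0 threshold at the same positions.

        `samples` is an iterable of per-position top-N lists:
        [(scrambling_code, ec_no_db), ...], sorted best first.
        """
        co_seen = defaultdict(Counter)
        for top_n in samples:
            visible = [(sc, ec) for sc, ec in top_n if ec >= ec_no_min_db]
            if not visible:
                continue
            best_sc = visible[0][0]
            for sc, _ in visible[1:]:
                co_seen[best_sc][sc] += 1
        return {best: [sc for sc, _ in counts.most_common(max_list_len)]
                for best, counts in co_seen.items()}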

9.3.6. Exit Criteria

In order to exit each cluster, the successful call rate results[9] must be close enough[10] to the agreed thresholds (see examples in section 14.1, page 37), or the corresponding cluster must be put on the issue list. Cluster optimization sign-off will be performed by the regional customer manager or representative. A sign-off form with the issue list must be created and agreed with the customer.

9.3.7. Reports

Following is a list of reports available to the customer. The Performance Result Report and issue list will be provided for cluster exit on the control routes of each cluster. The other data will be available for review during optimization and provided to the customer upon network performance acceptance.
1. Performance Result Report
· Drive data information
· Geographical information
· Successful call rate results
2. Performance Measurement Report (informative only)
· Maps of recorded values
3. Test Environment Report
· List of all cell parameters when exiting from a cluster
· List of test equipment
· Load simulation (OCNS) details (one time and if applicable)
4. Issue list

9.3.8. Issue List

The issue list shall contain at a minimum:
· complete list of problems
· opening date for each item
· owners and dates of resolution


Depending on the project organization, one global issue list or one issue list by regional entity might be created.

9.4. System Optimization

9.4.1. Objective

The objective of system optimization (or control route analysis) is to optimize the sum of all clusters covering an entire region. A region can be defined along administrative borders or as an economic entity, but in any case, the meaning of "region" has to be clearly defined between the customer and Nortel Networks. The system performance optimization starts with two optimized clusters and ends with all the clusters in the region. The entire region is fine-tuned, with remaining and new issues resolved. In addition, action plans for future work are defined. The system performance metrics are calculated for the entire region and serve as the warranted values as per the agreement between the customer and Nortel Networks.

9.4.2. Entrance Criteria

All clusters to be optimized must have met the cluster exit criteria or have been signed off by the regional RF manager or designate with a mutually agreed issue list.

9.4.3. Drive Routes

System optimization is typically performed on the control routes. However, additional optimization drive test routes may be defined as required for diagnostic testing of new problem areas that are identified as a result of new clusters being switched on.

9.4.4. Drive Tests

Each of the five drive tests used in Cluster Optimization (page 23) can be used by the optimization engineers as required. RNC logging must be set up for all drives, except the pilot optimization drive. The cluster optimizations should have brought the system to a level where most of the optimization problems are already solved.

Radio optimization checking
Hopefully, the first drive test (performed with test mobiles and a pilot scanner together, equivalent to the pilot optimization drive and radio optimization drive merged) will show very few optimization settings to change.

Optional optimization drives


After possible changes in parameter values for optimization, another drive might be necessary to capture the new tuning and check the resulting improvement.

Acceptance data collection
The next drive shall then be used to collect acceptance data to check whether the successful call rate metric meets the acceptance criteria.

9.4.5. Data Analysis / Problem Resolution

Since System Optimization is based on a network composed of already-tuned clusters, most of the following parameters should have been optimized:
· total cell power and P-CPICH power (typically the ratio between the two is kept constant)
· antenna azimuths, tilts, heights and types
Due to the cross-effects between clusters, handoff parameters, neighbor list contents and neighbor prioritization might need changes at borders between clusters. This is also a time that can be used to finish up the tuning of difficult spots.

9.4.6. Exit Criteria

In order to exit each region, the system performance metrics must meet the threshold requirements. System Optimization sign-off will be performed by the regional RF manager or representative. A sign-off form with the issue list must be created and agreed to by both parties.

9.4.7. Reports

Interim reports

During the System Optimization phase, Nortel Networks will be preparing Control Routes Testing reports as permitted by time. These should include:
· Updated issue lists
· Successful call rate results
· List of test equipment
· Final antenna tilts and azimuths

Final deliverables

At the end of the System Optimization tests, Nortel Networks must submit the following documentation:
1. Performance Result Report


· Drive data information
· Geodetic information
· Successful call rate results
2. Performance Measurement Report
· Maps of recorded values
3. Test Environment Report
· List of all cell parameters as final network datafill
· List of test equipment
· Load simulation (OCNS) details (if applicable)
4. Issue list

9.4.8. Issue List

The issue list shall contain at a minimum:
· complete list of problems
· owners and dates of resolution for issues in the process of being solved
· new or remaining untreated issues
Depending on the organization of the project, the issue list might be global or there might be one issue list per market.

10. Reports

Following are the details of the various reports that are generated throughout the optimization process.

10.1. Performance Result Report

The Performance Result Report will include measurements that can be used to determine the overall quality of system performance and whether the measured network meets, or is trending towards meeting, the agreed system performance metric values. The data collected for these reports will be from control and/or optimization routes. The first part of the Performance Result Report will provide the following information for the latest set of data collected and reported:
· Date of report and name of person who completed the report.
· Number of cells included vs. the total for launch in the region. The number of cells currently undergoing optimization should also be included, as well as a list of these cells with their references.
· Test setups[11] when collecting the data.
· Version of the software loads and other equipment-related information.
· Outstanding actions that will improve or affect overall performance of the area, with target dates.

10.1.1. Geodetic Information

The Performance Result Report should include geographical information in the form of a map showing the control routes as well as areas of acceptable or not acceptable quality, in addition to areas not evaluated. The objective of the geodetic information is to show the extent of the area for which data has been collected and to ensure that the coverage is maintained as the system is optimized. The following information needs to be displayed on the map:
· site locations
· area tested, including drive test routes, where cluster optimization is underway and where cluster optimization is complete
· control routes
· areas/places of coverage problems
· any relevant information that helps in interpreting the results

10.1.2. Key Performance Indicators (KPIs) Reports

The KPI graphs or maps will show both uplink and downlink results after discarding the (1-r) % of the bins having the worst measurement results (r % coverage reliability). The report should also include the length and duration of the drive routes used to collect the latest displayed results, the number of calls of each type made, and any pertinent information on events that have influenced the measurements. Where failures are believed to be caused by issues other than RF, e.g. handset problems or network failures, this information should be indicated.

10.2. Performance Measurement Reports

The measurement scope is wider than that used for the System Performance Metric (successful call rate) calculations. Supplemental measurements are used for optimization and especially for troubleshooting. They are mainly collected in areas where system performance objectives are not met. The performance measurement reports gather these metrics used for optimization.


Drive test data may include the following measurements:
· Dropped call and unsuccessful call attempt plots, counts and percentages
· BLER plots
· Ec/N0 plots
· Eb/N0 plots
· Mobile Transmit Power plots
· Mobile Receive Power plots
· Number of Radio Links plots (handoff state)

10.3. Test Environment Report

This report gathers all information on the test environment:
· List of test equipment, including test mobiles
· Equipment setting details
· List of all cell parameters with their final values
· Antenna tilts and azimuths
· If applicable, load simulation details (including OCNS settings)
It may also include details on issues that affected the network during performance acceptance measurements.

10.4. Issue List

One global Issue List will be kept per region. It will begin with the first cluster and grow as new clusters are added. Some of the problems identified on the issue list may not be regional but rather national problems. National problems can be tracked on a project basis. Each issue will be assigned a priority:
· 1 = currently impacting optimization work
· 2 = must be resolved prior to commercial launch
· 3 = other issues

11. OCNS unavailability

At the time this document is written, the UMTS OCNS feature is not scheduled to be implemented before the second commercial UMTS release, at best. Therefore, a number of network performance acceptances will have to be done without this feature, which means there is no way to run the tests under loaded conditions. In this case, all the tests will be kept under unloaded conditions.


Assumption: the acceptance without OCNS will be done on early networks (opening before the first General Availability version or soon after). The traffic load of such networks at commercial opening will probably be very light, due in part to the tight mobile device market. It is reasonable to do an acceptance under unloaded conditions, together with a close follow-up of the launched network to optimize the parameters as the traffic ramps up. Depending on the contract, optimization services have to be included or proposed to the customer to assist its team in handling the increasing load during the first stage of the network. Based on experience with previous CDMA networks, it is critical to react quickly and adjust the right parameters to avoid abrupt service degradation.

12. Abbreviations and definitions

12.1. Abbreviations

ABR Available Bit Rate
AHT Average Holding Time
B- Background (traffic class)
BLER BLock Error Rate
C- Conversational Real Time (traffic class)
CAIT CDMA Air Interface Tester (Qualcomm©)
CBR Constant Bit Rate
CC Call Control (see [R7] for details)
CPICH Common PIlot CHannel
CRE Cell Reference Event
CRT Conversational / Real Time
CS Circuit Switched
CSSR Call Setup Success Rate
CST Call Setup Time
CW Continuous Wave
DCR Dropped Call Rate
DL DownLink
DRF Data Request Form
Dst Destination (IP address of the destination host)
FDD Frequency Division Duplex
FER Frame Erasure Rate
FFS For Further Study
FQI Frame Quality Indicator
GGSN Gateway GPRS Support Node
GoS Grade of Service
HHO Hard HandOver
I- Interactive (traffic class)
IE Information Element
IPPM Internet Protocol Performance Metrics
ISO International Standard Organization
kbps kilobit per second (1000 bits per second)


KPI Key Performance Indicator
LA Location Area
mCR Minimum Cell Rate
MIB Management Information Base
MO Mobile Originated
MP Measurement Point
MT Mobile Terminated
NAS Non Access Stratum
NB Node-B
NE Network Element
NSS Network Sub-System
OCNS Orthogonal Channel Noise Simulation
OSI Open System Interconnect
OVSF Orthogonal Variable Spreading Factor
PCPICH Primary Common PIlot CHannel
PCR Peak Cell Rate
PDU Protocol Data Unit
PS Packet Switched
PSR Paging Success Rate
QoS Quality of Service
RA Registration Area
RAB Radio Access Bearer
RANAP Radio Access Network Application Part
RAT Radio Access Technology
RFC Request For Comments
RL Radio Link
RNC Radio Network Controller
RNS Radio Network System
RR Radio Resource Management
RRC Radio Resource Control
RRM Radio Resource Management
RSSI Received Signal Strength Indicator
RTT Round Trip Time
RTWP Received Total Wideband Power
S- Streaming (traffic class)
SAP Service Access Point
SCR Sustainable Cell Rate
SDU Service Data Unit
SDUER Service Data Unit Error Rate
SHO Soft HandOff
SM Session Management (see [R3] for details)
Src Source (IP address of the source host)
SSHO Softer HandOff
Str Streaming (traffic class)
TBC To Be Confirmed
TBD To Be Defined
TBR To Be Referenced


TD Transfer Delay
TDD Time Division Duplex
TE/MT Terminal Equipment / Mobile Terminal (parts of the UE)
TrCH Transport CHannel

Type-P packet

Allows defining metrics and methodologies generically; further on, type P might be explicitly defined (i.e., exactly what type of packet is meant), partially defined (e.g., "with a payload of B octets"), or left generic. For more details, please refer to RFC-2330 [R15].

UBR Unspecified Bit Rate
UE User Equipment
UL UpLink
URA UTRAN Registration Area
UTRAN Universal Terrestrial Radio Access Network
VBR Variable Bit Rate
WG Wireless Gateway

12.2. Definitions

Area availability Area availability refers to the percentage of geographical bins over the specified coverage area that meet the system performance objective as agreed, while compensating for the appropriate penetration margin. That is, if any geographical bin within the specified coverage area fails to meet the KPI objective while compensating for the appropriate penetration margin, then a service problem exists.

Average Holding Time Average time during which the call is sustained. This duration is typically used to define the call duration for the acceptance tests. Typical AHT is around 2 min.

B, KB, b, kb, Mb B = Byte = 8 bits; KB = Kilobyte = 1,024 bytes = 8,192 bits; b = bit; kb = kilobit = 1,000 bits; Mb = megabit = 1,000,000 bits

Bin size geographic bins are used to calculate area availability. The bin size is 200m x 200m (or otherwise mutually agreed to)

Call Successfully established instantiation of a service that is provided through the network to a given user.

Cluster Typically around 30 sites that serve a contiguous geographic area
Cluster exit data The final set of data collected for a cluster that demonstrates the overall performance of the cluster along the specified drive routes
Controlled environment Area with clear spectrum (verifying the requirements on noise floor level) and where all working mobiles are test mobiles (no other mobile should be able to roam in the area)

Drive Route A segment of the major roads within a cluster that will be targeted for optimization. During data collection, the test routes shall be driven at speeds mutually agreed to as being representative of normal subscriber behavior. Drive routes will be classified as follows:
Optimization Routes: drive routes selected to encompass all morphology classifications and areas of engineering concern defined in a cluster.
Control Routes: a subset of the optimization routes that include the roads of principal concern for the regional engineering staff and cover most land classifications within the cluster.

Ec/N0 CPICH EC/N0 as defined in the standard [R8] section 5.1.6: the received energy per chip on the primary pilot channel divided by the power density in the band

Final Cell Plan Final issue of the RF design specifying the design coverage area
Issue list A list of system deficiencies (e.g. coverage, operation, construction, etc.) noted during optimization. It will include a description of the deficiency, a priority, a responsible party, and the estimated resolution date.

KPI Key Performance Indicator: considered as a metric that is of particular importance for indicating the level of quality of service

QoS Quality of Service: referring to the UMTS QoS as provided by the UMTS Bearer (see [R5])

Reported set Composed of the active set and the monitored set (with a maximum of the six best received neighbors)

Service Area Area defined to be covered (may vary according to the service). This expression should not be confused with the service area defined in the 3GPP TS, which concerns only a limited group of cells.

Test Area Area defined to be tested (consists of all drive test roads, all specific in-building test areas, all special test areas)

Warranty Area The design coverage area diminished by deployment restrictions, exclusions and specific non-warranty areas (e.g. tunnels, specific bridges...)

13. Appendix 1: Test Equipment List

Most of the tools and equipment are still being studied and developed at the time this document is written. Depending on availability and the very first tests, this part should become much more detailed in the future.
· Agilent E7476A W-CDMA Drive Test System (currently available)


· UMTS Diva-Agilent post-processing tool for SC scanner (currently available)
· Test mobiles (3rd-party vendor, availability to be specified), possibly with Qualcomm-CAIT or similar software (if available on WCDMA/UMTS)
· Commercial mobiles (3rd-party vendor, availability to be specified)
· Test isolation / insertion loss kit (within a suitcase): AITS (Air Interface Test Suitcase)
· Vehicle GPS receiver (Trimble, Garmin, Magellan, Lowrance...)
· UMTS RFO-DIVA (test mobile post-processing software)
· Logging post-processing software (also dependent on logging availability in the equipment)
· UMTS end-to-end tool (voice quality and data quality)
· Radcom protocol analyzer with UTRAN-SPY (post-processing software)

[Figure: Sample test setup for isolation and losses: test UE connected through circulators, an x dB attenuator for simulated load (if applicable), a y dB attenuator for penetration losses, the antenna, and a possible DL bypass. x and y shall be defined according to contract values.]

14. Appendix 2: Performance Metrics


14.1. KPI Indicative values

Most of the network KPIs must be stated on a case-by-case basis to include all network design and contract specificities. Please refer to [R1] for examples of Product-KPI values. Hereafter is an example for the successful call rate. Let us assume that three levels of coverage are designed, according to the marketing targets:
1. Dense coverage: key marketing areas where coverage has been designed to support all services at high quality levels
2. Medium coverage: areas where the most resource-consuming services are expected to be less used, allowing less density in the coverage
3. Light coverage: areas where basic services are provided with very good quality but no highly resource-consuming services are expected.
The split for CS-voice call originations is assumed by default to be 50 % mobile-originated and 50 % mobile-terminated, if no other split is better suited to the client's needs. The split for PS calls is 100 % mobile-originated.

Successful call rate metric[12]

Service | Coverage density | Region threshold | Per-cluster threshold (no more than 50 % vs. Region) | Call success conditions
CS-voice | dense | 95 % | 92 % | over the call: MOS ≥ 3.5; MO call setup delay < 4 s; MT call setup delay < 5 s; holding time = 2 min (without call drop)
CS-voice | medium | 85 % | 78 % | same as dense
CS-voice | light | 80 % | 70 % | same as dense
PS (I/B 64k-UL / 64k-DL)[13] | dense | 95 % | 92 % | initial 500 B (4 kb) ping RTT (call setup) < 5.5 s; continuous 500 B (4 kb) ping RTT < 0.5 s; 800 kb file transfer < 20 s; 4 Mb file transfer < 85 s; holding time = 2 min (without call drop)
PS (I/B 64k-UL / 64k-DL) | medium | 85 % | 78 % | same as dense
PS (I/B 64k-UL / 64k-DL) | light | 80 % | 70 % | same as dense

Due to tool capabilities, it is not recommended at this time to include CS-data tests in acceptance criteria.

14.2. Detailed metrics for acceptance and optimization

14.2.1. Acceptance Metrics

Warnings: for each metric, a measurement source is given where possible. The measurement source, as well as the measurement methodology, may vary depending on tool or feature availability. Moreover, when Iub or Iu is specified (for spying with protocol analyzers), it does not mean that spying on either interface is equivalent, but that one or both at the same time may be required to obtain enough information to compute the metric.

Dropped call rate

Objective

Measure the reliability of a call within the network coverage (which is strictly limited to the warranty zone). Please refer to the design assumptions and [R1] for values to be met within the warranty area.

Definition[14]

The dropped call rate is the ratio of abnormally terminated calls vs. the total number of established calls, which can be expressed by the following formulas:
CS Domain: where Abnormal_Release_Cause is any cause, as defined in 3GPP TS 24.008 [R6], Cause IE, different from:
· Cause #16: Normal Call Clearing
· Cause #31: Normal, unspecified
PS Domain: where Abnormal_Release_Cause refers to the following IE failure cause values appearing in the preceding RRC_Connection_Release as defined in 3GPP TS 25.331 [R10]: unspecified, pre-emptive release, congestion and re-establishment reject.[15]
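The formulas themselves were lost in this transcript. A plausible reconstruction, consistent with the definition above (the exact counted events are an assumption, not a statement from the original document), is:

\[
\mathrm{DCR_{CS}} = \frac{N_{\text{CS calls released with an Abnormal\_Release\_Cause}}}{N_{\text{CS calls established}}}
\qquad
\mathrm{DCR_{PS}} = \frac{N_{\text{PS calls whose RRC\_Connection\_Release carries an Abnormal\_Release\_Cause}}}{N_{\text{PS calls established}}}
\]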

Measurement method


To verify the coverage continuity, long calls will be performed (calls are started and maintained as long as possible). The number of dropped calls, and the time and geographical location at which each failure occurs, will be recorded along each control route. During the same drive, short calls will also be performed. The short call duration will be the Average Holding Time (AHT) as agreed between both parties (the customer and Nortel Networks). The AHT may differ from one service to another (typically, for the speech service, the AHT is between 90 s and 120 s).
Potential source of measurements:
1. Trace mobile with post-processing
2. Protocol analyzers on Iubs and post-processing tools

Call Setup Success Rate (Mobile Originated)

Objective

Measure the service availability when the user wants to activate it. Please refer to the design assumptions and [R1] for values to be met within the warranty area.

Definition

The call setup success rate is the number of successfully established calls vs. the total number of call setup attempts, which can be expressed by the following formulas:
CS Domain:
* the Establishment Cause IE is defined in 3GPP TS 25.331 [R10].
Note 1: this metric could be itemized by traffic class (assuming that the originating cause is correctly assigned when setting up the call).
Note 2: the domain (CS or PS) information appears in the RRC_Initial_Direct_Transfer message that follows in the call setup message flow (please refer to [R1] for more details).
PS Domain:
* the Establishment Cause IE is defined in 3GPP TS 25.331 [R10].
Note 1: this metric could be itemized by traffic class (assuming that the originating cause is correctly assigned when setting up the call).
Note 2: the domain (CS or PS) information appears in the RRC_Initial_Direct_Transfer message that follows in the call setup message flow (please refer to [R1] for more details).
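The formulas themselves are missing from this transcript. A hedged reconstruction consistent with the definition above (the exact counted messages are an assumption) is, per domain (CS or PS):

\[
\mathrm{CSSR} = \frac{N_{\text{calls successfully established}}}{N_{\text{RRC\_Connection\_Request messages sent with an originating Establishment Cause}}}
\]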


Measurement method

The test mobile tool will be configured to originate a sequence of calls to a known non-busy number or IP address (depending on the application). Each call will be terminated when established.
Potential source of measurements:
1. Trace mobile with post-processing
2. Protocol analyzers on Iubs and post-processing tools

Call Setup Time (Mobile Originated and Terminated)

Objective

Measure the responsiveness of the network when the user wants to access it. It is the time it takes for a mobile originated (respectively terminated) call to be set up across the network. Please refer to [R1] for values to be met within the warranty area.

Definition

The call setup time is the time between the initial call establishment request and the call establishment confirmation. It is therefore defined by the following formulas:
CS Domain, mobile originated call:[16], [17]
CS Domain, mobile terminated call:[18]
PS Domain, mobile originated call:[19], [20]
PS Domain, mobile terminated call: not applicable at the time this document was written.
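The per-domain formulas referenced by footnotes [16] to [20] are not reproduced in this transcript. Generically, and as an assumption about the elided details, each of them has the form:

\[
T_{\text{setup}} = t(\text{call establishment confirmation message}) - t(\text{initial call establishment request message})
\]

where the exact bounding messages for each domain and call direction are those specified in the elided footnotes.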

Measurement Method

This test is accomplished in conjunction with the call setup success rate (for origination) and with the paging success rate (for termination). For each call, the call setup messages will be time-stamped and recorded, and the time difference will be calculated as described in the definitions.


Potential source of measurements:
1. Trace mobile with post-processing
2. Protocol analyzers on Iu (mobile originated calls) and PSTN interface (CS mobile terminated calls), with post-processing software

Paging Success Rate

Objective

Measure the network efficiency in finding the appropriate mobile. Please refer to the design assumptions and [R1] for values to be met within the warranty area.

Definition

The paging success rate is the ratio of the number of paging responses vs. the total number of paging attempts. This can be defined by the following formula: [21]
* the Establishment Cause IE is defined in 3GPP TS 25.331 [R10]; all terminating causes must be grouped together.
¤ the CN Domain Identity IE is defined in 3GPP TS 25.331 [R10]; only pages from a given CN domain shall be considered for this metric (meaning there is a paging success rate for CS and a paging success rate for PS).
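The formula itself is missing from this transcript. A plausible reconstruction, per CN domain and consistent with the footnotes above (the exact counted messages are an assumption), is:

\[
\mathrm{PSR} = \frac{N_{\text{RRC\_Connection\_Request messages with a terminating Establishment Cause}}}{N_{\text{paging attempts sent for the considered CN domain}}}
\]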

Measurement method

The test mobile will be in idle mode. A sequence of incoming calls will be set up. Each call will be terminated when established.
Potential source of measurements:
Trace mobile with post-processing

Voice Quality, Mean Opinion Score (MOS)

Objective

The objective of this measurement is to provide a measurement of voice quality. Please refer to [R1] for values to be met.


Definition

The Mean Opinion Score is a subjective evaluation of voice quality [R14]. It is the result of an absolute rating (without reference sample) on a scale going from 1 (bad) to 5 (excellent).

Measurement Method

Please refer to [R18]. The coverage reliability r must be considered in the measurements, and therefore the resulting metric must be computed on the r % best samples only (the worst (1-r) % of the samples must be discarded).
Potential source of measurements:
An end-to-end tool shall be used to compare the received voice sample with the original one.

Bit Rate (data only)

Objective

This metric measures the throughput in kbps delivered to the application (usually at TCP level). The minimum, maximum and average shall be measured to get an idea of the transmission regularity. Please refer to [R1] for the values to be met.

Definition

The bit rate is the number of bits received at the application SAP (usually brought down to the TCP level) over a given period of time (usually measured in kbps).

Measurement Method

The measurement is taken from long calls. The best application for this type of test is a file transfer through FTP. A sufficiently large file will be chosen in order to keep transferring during the entire drive test. The requested data transfer rate will be chosen according to the design assumptions in the driven areas. Bit rates for downlink and bit rates for uplink will be processed separately. If r is the coverage reliability, the worst (lowest bit rate) (1-r) % of the bins on the downlink and the worst (1-r) % of the bins on the uplink shall be excluded from analysis. For performance acceptance, the remaining r % of the bins of each of the downlink and uplink shall satisfy the bit rate requirements. Minimum, maximum and average values will be calculated for each direction.
If key or strategic marketing locations included in the planned design coverage area fall within the (1-r) % of bins that are discarded, these locations shall be put on the issue list.
Potential source of measurements:
Trace mobile with post-processing
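As an illustration of the bin-based processing described above, the short Python sketch below applies the r % coverage-reliability rule to per-bin bit-rate samples. It is not part of the original document; the bin values, the reliability r and the threshold are hypothetical.

```python
# Illustrative sketch only: apply the r% coverage-reliability rule to binned
# bit-rate measurements and check them against a requested bit rate.
from statistics import mean

def evaluate_bit_rate(bins_kbps, r=0.95, threshold_kbps=64.0):
    """Keep the best r fraction of bins, then check the bit-rate requirement."""
    ranked = sorted(bins_kbps, reverse=True)             # best bins first
    kept = ranked[:max(1, int(round(r * len(ranked))))]  # discard worst (1-r)%
    return {
        "bins_total": len(bins_kbps),
        "bins_kept": len(kept),
        "min_kbps": min(kept),
        "max_kbps": max(kept),
        "avg_kbps": round(mean(kept), 1),
        "pass": all(b >= threshold_kbps for b in kept),
    }

# Hypothetical downlink bins (kbps) for a 64 kbps bearer, with r = 90 %:
dl_bins = [70.2, 66.5, 65.1, 71.8, 68.0, 35.4, 69.3, 72.5, 66.9, 70.0]
print(evaluate_bit_rate(dl_bins, r=0.9, threshold_kbps=64.0))
```

The same processing would be run separately for the uplink and downlink bins, as stated above.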

File Transfer Time (Data Only)

Objective

This metric will measure the transmission efficiency between the source application SAP (usually at TCP level) and the destination application SAP (usually at TCP level). Please refer to [R1] for thresholds to be met.

Definition

The easiest way of measuring the transfer delay is to measure a so-called file transfer time, defined as the time to receive without errors a file of a given size (between the first bit of the first received packet and the last bit of the last received packet completing the error-free transmitted file). This definition has the great advantage of not requiring any synchronization between both ends for the measurement tool. According to 3GPP TS 23.107 [R4], the following definition applies for the transfer delay:
"Transfer delay (ms) Definition: Indicates maximum delay for 95th percentile of the distribution of delay for all delivered SDUs during the lifetime of a bearer service, where delay for an SDU is defined as the time from a request to transfer an SDU at one SAP to its delivery at the other SAP.
NOTE 3: Transfer delay of an arbitrary SDU is not meaningful for a bursty source, since the last SDUs of a burst may have long delay due to queuing, whereas the meaningful response delay perceived by the user is the delay of the first SDU of the burst."
The file transfer time is then equivalent to adding all the SDU transfer delays for completing the file at the destination, except the first delay between the time of the request and the transfer time of the first SDU. However, we can consider this time negligible compared to the total file transfer time, assuming file sizes over 100 kb.
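As a compact restatement of the definition above (an addition to this transcript, not original text), the file transfer time can be written as:

\[
T_{\text{file}} = t(\text{last bit of last packet received}) - t(\text{first bit of first packet received})
\;\approx\; \sum_{i=1}^{N_{\mathrm{SDU}}} \mathrm{TD}_i
\]

where \(\mathrm{TD}_i\) is the transfer delay of the i-th SDU and the approximation neglects the initial request-to-first-SDU delay, as stated above.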

Measurement Method

The measurement can be done with the same method as the bit rate measurement and should be done during the same tests. The data processing should follow the same methodology as the one used for bit rate processing.


Potential source for measurements:
An end-to-end tool must be used to monitor the test at both ends (checking that the file was received without errors).

14.2.2. Optimization Metrics

In addition to the performance metrics that will be used (partly or totally) for network performance acceptance, other metrics are measured and computed for optimization purposes without being formally linked to the acceptance process. This section gives a list of, and details about, these metrics. The list is probably non-exhaustive and will have to be adjusted after the first UMTS network performance optimizations.

RF Performance

Primary CPICH Ec/N0

Objective

Troubleshoot for best server reception and interference reduction.

Definition

As reported by the UE in the Intra-frequency measurement quantity IE (defined in TS 3GPP 25.331 [R10]).

Measurement Method

The measurements shall be taken during any drive tests (all opportunities are welcome).
Potential source of measurements:
1. Scrambling-code scanner (such as the Agilent E7476A) with post-processing software
2. Trace mobile and post-processing software
3. RNC logging and post-processing software (measurement report analyses)
4. Protocol Analyzers on Iub with post-processing software

UE Tx Power


Objective

Troubleshoot for best server reception and uplink interference reduction.

Definition

As reported by the User Equipment in the UE Internal measured results IE (defined in TS 3GPP 25.331 [R10]).

Measurement Method

The measurements shall be taken during any drive tests (all opportunities are welcome).
Potential source of measurements:
1. Trace mobile and post-processing software
2. Protocol Analyzers on Iub with post-processing software

Average Number of RLs in the active set

Objective

Neighbor list and SHO tuning.

Definition

The average number of radio links (RLs) in the active set, computed within a given cell.

Measurement Method

The measurements shall be taken during continuous drive tests.
Potential source for measurements:
1. RNC logging and post-processing software (measurement report analyses)
2. Protocol Analyzers on Iub with post-processing software
3. Trace mobile


Average Number of PCPICHs in the Monitored set

Objective

Neighbor list and SHO tuning.

Definition

Average calculated from the number of cells in the monitored set, as measured by the mobile and transmitted to the UTRAN in the measurement report messages, or from UTRAN counters.

Measurement Method[22]

The measurements shall be taken during continuous drive tests.
Potential source for measurements:
1. RNC logging and post-processing software (measurement report analyses)
2. Protocol Analyzers on Iub with post-processing software
3. Trace mobile

Average Number of RLs per call

Objective

Neighbor list and SHO tuning.

Definition

The average number of radio links (RLs) used per call, computed within the test areas (either cluster or system).

Measurement Method

The measurements shall be taken during continuous drive tests.
Potential source for measurements:
1. RNC logging and post-processing software (measurement report analyses)
2. Protocol Analyzers on Iub with post-processing software


BLER (UL and DL)

Objective

Check the radio link transmission quality.

Definition

The Block Error Rate (BLER) is the Transport Channel Block Error Rate as measured by UTRAN in uplink (as specified in 3GPP TS 25.215 [R8]) and as reported by the UE in downlink in the RRC_Measurement_Report(quality_measured_results_list).

Measurement Method

The measurements shall be taken from all continuous drive tests.
Potential source of measurements (TBC):
1. Trace Mobile with post-processing software (DL-BLER only)
2. Protocol Analyzers on Iub with post-processing software

Radio Blocking Rate

Objective

Radio resource shortage.

Definition

[23]

Measurement Method

The measurements shall be taken from drive tests used to measure the call setup success rate.
Potential source of measurements (TBC):
1. RNC logging with post-processing software
2. Protocol Analyzer spying the Iub with post-processing tool


Network Access Performance

RAB Unavailability

Objective

RAB resource shortage. This metric is classified as a Tier-2 metric.

Definition

* where: RANAP_RAB_AssRsp is the RANAP "RAB Assignment Response" message, RABs_failed_SetupOrModify is the "RABs failed to setup or modify" part of the previous message, and the congestion causes are the following failure causes[24]:
· "Requested Traffic Class Non Available" (#18)
· "Requested Maximum Bit Rate Non Available" (#20)
· "Requested Guaranteed Bit Rate Non Available" (#21)
· "Requested Traffic Delay Non Achievable" (#22)
· "No Remaining RAB" (#31)
· "Requested Maximum Bit Rate Downlink Non Available" (#33)
· "Requested Maximum Bit Rate Uplink Non Available" (#34)
· "Requested Guaranteed Bit Rate Downlink Non Available" (#35)
· "Requested Guaranteed Bit Rate Uplink Non Available" (#36)
· "No resource available" (#114)
Details may be found in [R11].
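The formula itself is not reproduced in this transcript. A plausible reconstruction from the definition above (the denominator is an assumption) is:

\[
\mathrm{RAB\ Unavailability} = \frac{N_{\text{RABs listed in RABs\_failed\_SetupOrModify with a congestion cause}}}{N_{\text{RABs requested for setup or modification}}}
\]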

Measurement Method

This metric applies only to loaded networks. Measurements shall be done during the same drive tests as the ones used for the call setup success rate (short calls).
Potential source of measurements (TBC):
1. RNC logging with post-processing software
2. Iu spying with protocol analyzers and post-processing software

Mobility Performance

UTRAN → 2G HHO Success Rate

Objective

Inter-RAT HHO reliability.


Definition

Limits: if the mobile cannot resume the connection to UTRAN (as specified in 3GPP TS 25.331 [R10]), the failure will not be counted.

Measurement Method

Due to the cross-network specificity of this feature, it is highly recommended to avoid including such a metric in network performance acceptance contracts. It would be much better to isolate the test cases for the purpose of demonstrating HHO reliability and correct tuning. Within this context, special drive tests must be done in selected areas, handing calls down to 2G. Sufficient drives must be done to get enough samples.
Potential source of measurements:
1. RNC logging with post-processing software
2. Iub spying with protocol analyzers and post-processing software

Average UTRAN → 2G HHO Time

Objective

Inter-RAT HHO tuning.

Definition

To be measured on the 2G (GSM) part as:

Measurement Method

Measurements should be made on previously selected areas of the network, as agreed between both parties (the customer and Nortel Networks).
Potential source of measurements (TBC):
1. Dual-mode test mobile with post-processing (GSM messages)
2. GSM Call Trace?


ATM

ATM Cell Rate

Objective

Check the validity of the negotiated PCR for the ATM CoS CBR; the SCR for VBR; the couple (MCR, PCR) for ABR.

Definition

The ATM Cell Rate is defined in this document as being the effective cell rate in number of cells per second.

Measurement Method

Measurements should be made on all drives.
Potential source of measurements (TBC):
1. Passport counters
2. Interface spying (either Iub or Iu) with protocol analyzers and post-processing software

ATM Cell Delay Variation (CDV)

Objective

Check the ATM transmission constancy.

Definition

The CDV we will consider in the document is the one defined as 1-point cell delay variation considering the Measurement Point (MP) at the end of the communication link (either a UE or the fixed extremity of the call). As described in ITU-T I.356 [R12]:“The 1-point CDV parameter describes variability in the pattern of cell arrival (entry or exit) event at an MP with reference to the negotiated peak cell rate 1/T (see Recommendation I.371); it includes variability introduced (or removed) in all connection portions between the cell source and the specified MP. It can be related to cell conformance at the MP, and to network queues.” The definition of this metric in ITU-T I.356 [R12] is:“1-point cell delay variation is defined based on the observation of a sequence of consecutive cell arrivals at a single MP.”


Measurement Method

Measurements should be made on all drives.
Potential source of measurements (TBC):
1. Passport counters?
2. Interface spying (either Iub or Iu) with protocol analyzers and post-processing software

ATM Cell transfer Delay (CTD)

Objective

Measure the performance of the ATM transmission as far as the transmission delay through the transmission network is concerned.

Definition

As defined in the ITU-T I.356 [R12]:"Cell Transfer Delay (CTD) is the time, t2 – t1, between the occurrence of two corresponding cell transfer events, CRE1 at time t1 and CRE2 at time t2, where t2 > t1 and t2 – t1 ≤ Tmax. The value of Tmax is for further study, but should be larger than the largest practically conceivable cell transfer delay."

Measurement Method

Measurements should be made on all drives.
Potential source of measurements (TBC):
1. Passport counters
2. Interface spying (either Iub or Iu) with protocol analyzers and post-processing software

IP

IP One-Way Packet Loss

Objective

Check the IP transmission reliability.


Definition

This metric can be defined as the ratio of lost packets vs. the total number of transmitted packets, which can be translated into the following formula:
* where the One_Way_Packet_Loss is defined in RFC-2680 [R17] as: ">>The *Type-P-One-way-Packet-Loss* from Src to Dst at T is 0<< means that Src sent the first bit of a Type-P packet to Dst at wire-time* T and that Dst received that packet. >>The *Type-P-One-way-Packet-Loss* from Src to Dst at T is 1<< means that Src sent the first bit of a type-P packet to Dst at wire-time T and that Dst did not receive that packet."
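The formula referenced above is missing from this transcript; a straightforward reconstruction of the stated ratio is:

\[
\mathrm{IP\ One\text{-}Way\ Packet\ Loss} = \frac{\sum_{i=1}^{N} L_i}{N}, \qquad L_i \in \{0, 1\}
\]

where \(L_i\) is the Type-P-One-way-Packet-Loss indicator of packet i (as quoted from RFC-2680 above) and N is the total number of packets sent from Src to Dst.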

Measurement Method

Measurements should be made on all data drives.
Potential source of measurements (TBC):
1. End-to-end data quality tool

IP Maximum One-Way Delay

Objective

Check the IP transmission efficiency.

Definition

The maximum value of the one-way delay metric for the 95th percentile of the distribution of delay for all delivered packets during the lifetime of the bearer service. The one-way delay is defined in RFC-2679 [R16] as: “For a real number dT, >>the *Type-P-One-way-Delay* from Src to Dst at T is dT<< means that Src sent the first bit of a Type-P packet to Dst at wire-time* T and that Dst received the last bit of that packet at wire-time T+dT. >>The *Type-P-One-way-Delay* from Src to Dst at T is undefined (informally, infinite)<< means that Src sent the first bit of a Type-P packet to Dst at wire-time T and that Dst did not receive that packet.”


Measurement Method

Measurements should be made during long data call drive tests.
Potential source of measurements (TBC):
1. End-to-end data quality tool

IP Maximum Jitter

Objective

Check the IP transmission constancy.

Definition

The IP Maximum Jitter is defined as the maximum variation of the one-way delay metric as defined here above.

Measurement Method

Measurements should be made on all data drives.
Potential source of measurements (TBC):
1. End-to-end data quality tool

14.3. Measurement Uncertainty

This is mostly applicable when a set of different detailed metrics is chosen for the acceptance criteria. (Please see [R19] for more details.)

14.3.1. Tolerance Interval of One Data Population

All of the performance metrics used to assess the quality of a network are estimated using a sample of events (calls, frames, etc.). Let p be the metric to estimate from the data (for example, p could be the FER or the dropped call rate), let n be the total number of events (frames, calls, etc.) in the sample, and let k be the number of successes (i.e., the number of events that meet a certain criterion, such as frame erasures or dropped calls). Then p is estimated as the ratio of k to n. It can be shown that the distribution of the estimation error is approximately normal (Gaussian) with zero mean and a standard deviation that depends on p and n. The accuracy of the estimate is thus influenced by the sample size, with more samples giving a more accurate estimate (smaller deviation of the error). Therefore, this measurement error must be taken into account when selecting pass/fail criteria for qualifying the network. The pass criterion is selected such that the estimate is less than or equal to the target value of the metric plus a Tolerance Interval, the Tolerance Interval being the standard deviation multiplied by the number of standard deviates corresponding to a given confidence level, where:
p is the target value for the metric (between 0 and 1)
z is the number of standard deviates corresponding to a given confidence level
n is the sample size
For acceptance tests, we will use a 99 % confidence level; in this case, the number of standard deviates is z = 2.33. Clearly, the estimate converges to the target value as the sample size approaches infinity or when the target value is close to 1 (this indicates that the success events occur with high probability or certainty, and the sample size does not need to be large to estimate the metric).
Example: assume the metric under consideration is the FER and the target/design value is 2 %. The tolerance intervals and resulting pass criteria for sample sizes of 100 and 1000 are worked out in the reconstruction below.
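The equations and worked numbers were lost in this transcript. A reconstruction consistent with the definitions above (the standard binomial-proportion approximation) is:

\[
\hat{p} = \frac{k}{n}, \qquad
\sigma = \sqrt{\frac{p\,(1-p)}{n}}, \qquad
\mathrm{TI} = z\,\sigma, \qquad
\text{pass criterion: } \hat{p} \;\le\; p + z\sqrt{\frac{p\,(1-p)}{n}}
\]

For the example above (p = 2 %, z = 2.33): with n = 100, TI = 2.33 × sqrt(0.02 × 0.98 / 100) ≈ 0.033, so the estimate must satisfy estimate ≤ 0.053; with n = 1000, TI ≈ 0.010, so the estimate must satisfy estimate ≤ 0.030.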

14.3.2. Tolerance Interval of Several Data Populations

In some cases, it may be desired to calculate the tolerance interval of more than one data population with different sample sizes. For example, it may be desired to calculate the tolerance interval of the system-wide metric from the performance metrics of subsets of the system (e.g., clusters). Or it may be desired to calculate the tolerance interval associated with a cluster that is composed of several samples with different sizes. This section provides a formula to calculate the effective tolerance interval. We assume that the measurements in all subsets (or data populations) are statistically independent. Let the sample sizes of the data populations be n1, n2, n3, ... nN, where N is the number of data populations. The standard deviation of data population i follows the single-population expression of section 14.3.1 with sample size ni. Since the data populations are statistically independent, their standard deviations combine quadratically, and the tolerance interval of the total population follows accordingly (please see section 14.3.1 for details).


where z = 2.33 for 99 % confidence. We note that if the sample size is the same in all data populations, then (for equal sample sizes in all data populations) the tolerance interval decreases by the square root of the number of populations, compared with the single data population case.
Example: assume the FER measurements are collected from 4 clusters. Let the sample sizes of the clusters be 1000, 1000, 10000 and 10000, and let the target/design criterion be p = 2 %. The tolerance intervals associated with the FER measurements in the 4 clusters are 0.01, 0.01, 0.0033 and 0.0033. The composite tolerance interval follows from the formula above (see the reconstruction below).
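The combining formulas and the composite value were lost in this transcript. A reconstruction, under the assumption that the overall estimate is the unweighted average of the N per-population estimates, is:

\[
\sigma_i = \sqrt{\frac{p\,(1-p)}{n_i}}, \qquad
\sigma_{\mathrm{tot}} = \frac{1}{N}\sqrt{\sum_{i=1}^{N} \sigma_i^2}, \qquad
\mathrm{TI}_{\mathrm{tot}} = z\,\sigma_{\mathrm{tot}}
\]

which reduces to TI/sqrt(N) when all sample sizes are equal. With the example figures (per-cluster tolerance intervals 0.01, 0.01, 0.0033 and 0.0033), this gives a composite tolerance interval of approximately 0.0038; this value is a reconstruction, not a figure taken from the original document.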

14.3.3. Table z-values vs. confidence levels

Confidence | z
70 % | 0.524
71 % | 0.553
72 % | 0.583
73 % | 0.613
74 % | 0.643
75 % | 0.674
76 % | 0.706
77 % | 0.739
78 % | 0.772
79 % | 0.806
80 % | 0.842
81 % | 0.878
82 % | 0.915
83 % | 0.954
84 % | 0.994
85 % | 1.036
86 % | 1.080
87 % | 1.126
88 % | 1.175
89 % | 1.227
90 % | 1.282
91 % | 1.341
92 % | 1.405
93 % | 1.476
94 % | 1.555
95 % | 1.645
96 % | 1.751
97 % | 1.881
98 % | 2.054


99 % | 2.326
For any other value, please refer to any table of the standard normal distribution.

15. Appendix 3: Overall planning for performance acceptance

15.1. Optimization resources:

15.1.1. IS-95 Experience

The following figures come from an IS-95 network performance acceptance project and are given for information only. This is considered a typical project, but extrapolation to UMTS must be done carefully. More accurate numbers will be provided over time, after the first UMTS network acceptance projects.

Cluster size (in number of sites): 14-20
Sites (total number for the tested network): 225
Analyst engineers: 5.5
Shakedown teams: 2 (3 to 4 sites per team per day)
Drive test teams: 4
Analysis computers: 5
Weeks: 9
Shifts: 2 (7/17 and 16/2)
(Example of the CDMA-IS95 Telstra network performance acceptance, Brisbane, Australia, 1999)

15.1.2. teams

Shakedown team

A shakedown team is composed of three persons:
1. a driver
2. a technician to collect data and check the RSSI on the received path (main and diversity) at the Node B.

Drive Test Team

A drive test team is composed of one operator and one driver. The operator runs data collection and controls the drive test equipment for all calls (typically voice and data). In some cases, this might need an extra operator to handle calls (e.g. specific data calls such as web browsing).


The operator is also in charge of downloading and post-processing the resulting data and managing the backups and the archives.

Analysis Team

The analysis team is composed of one experienced engineer. S/he analyses the data from each drive test and makes change recommendations. The experienced engineer also has a coordination role: coordinating the drive test teams (briefing before the drive tests, debriefing after the drive tests, advising drive test teams when they encounter problems, possible real-time analyses with the on-site drive test team).

At-the-Switch (RNC) team

A logging operator has to be scheduled as well, to start and stop logging. This could be the on-duty switch (RNC) technician. A technician also has to be present to make datafill changes and perform configuration audits. He would also be in charge of downloading and post-processing the logged data, as well as managing the backups and the archives. Depending on the workload, this task may need a fully dedicated technician.

15.1.3. Extrapolation to UMTS

Shakedown: 1 h / site / team
This does NOT include:
§ driving time to & from the site
§ re-doing the shakedown in case of maintenance problems (equipment failure, either site or measurement equipment, or network problems)

Cluster optimization (data collection): 1.5 sites / team / shift[25]
This estimation includes:
§ pre-driving routes (feasibility, clear description to ensure all subsequent drives will follow the same path)
§ time lost because of equipment problems (either drive test equipment failure or network equipment failure)
§ "sanity check" driving (check after s/w upgrade without much analysis needed)
§ driving to and from clusters

Cluster optimization (analyses): 1 site / engineer / shift
Besides analysis time, this includes:
§ generating change requests
§ reviewing datafill dumps
§ advising drive test teams when they call in with a problem
§ advising rigging crews
§ analyses aborted for miscellaneous reasons (h/w failure at site, data collection problems such as bad configuration, h/w failure, missing data...)

System optimization: 2 weeks (for one team, one shift a day)
Estimation assuming a system of around 10 to 20 clusters, each of around 20 to 30 sites (200 to 600 sites in total)

These estimations are based on the process described in this document. This would lead to around 20 days for a cluster of 20 sites and 30 days for a cluster of 30 sites, for one team (one shift a day). However, if time or money supersedes network optimization (and potentially network tuning accuracy), here is the order of importance of the drive tests:

Drive test type (see process) | Cluster optimization | System optimization
Pilot Optimization | mandatory | recommended (merged with next)[26]
Radio Optimization | mandatory | mandatory
Loaded Uu Optimization | if OCNS available | if OCNS available
Acceptance data collection | mandatory | mandatory

Taking the most risk on optimization and reducing the drive tests to the minimum, this would lead to the following estimations:

Cluster optimization (data collection): 3 sites / team / shift
Besides analysis time, this includes:
§ generating change requests
§ reviewing datafill dumps
§ advising drive test teams when they call in with a problem
§ advising rigging crews
§ analyses aborted for miscellaneous reasons (h/w failure at site, data collection problems such as bad configuration, h/w failure, missing data...)

Cluster optimization (analyses): 2 sites / engineer / shift
Besides analysis time, this includes:
§ generating change requests
§ reviewing datafill dumps
§ advising drive test teams when they call in with a problem
§ advising rigging crews
§ analyses aborted for miscellaneous reasons (h/w failure at site, data collection problems such as bad configuration, h/w failure, missing data...)

System optimization: 1 week
Estimation assuming a system of around 10 to 20 clusters, each of around 20 to 30 sites (200 to 600 sites in total)

This would lead to 10 to 15 days for clusters of 20 to 30 sites.
Warning: the above figures are only estimations and shall be updated according to the first on-field experiences. An example of a possible planning would be the following. Let us assume that:
§ the network area (system) contains 300 sites
§ this area is divided into 10 clusters
§ 4 clusters can be optimized simultaneously (meaning without impacting each other)
§ 4 analyst engineers and 4 drive test teams are on site and ready
§ the shakedowns have been performed on a site-by-site basis, following the I&C and network element acceptance steps
§ cluster optimization can be started simultaneously with the stability period
§ the ending of the cluster optimization (last few clusters) allows the first drive test for system optimization to be done.
With all these assumptions, we would have the following schedule:
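The schedule chart referenced above is not reproduced in this transcript. As a rough, illustrative calculation only (rates taken from the tables above; the resulting figures are an approximation, not the original schedule), a short Python sketch might look like this:

```python
# Illustrative only: estimate the optimization timeline from the assumed rates above.
import math

SITES_TOTAL = 300
CLUSTERS = 10
SITES_PER_CLUSTER = SITES_TOTAL // CLUSTERS      # 30 sites per cluster
PARALLEL_CLUSTERS = 4                            # clusters optimized simultaneously
DATA_COLLECTION_RATE = 1.5                       # sites / team / shift
ANALYSIS_RATE = 1.0                              # sites / engineer / shift
SYSTEM_OPTIMIZATION_DAYS = 14                    # "2 weeks" from the table above

# One drive-test team and one analyst per cluster, one shift per day;
# the slower of the two activities drives the per-cluster duration.
shifts_per_cluster = max(
    math.ceil(SITES_PER_CLUSTER / DATA_COLLECTION_RATE),
    math.ceil(SITES_PER_CLUSTER / ANALYSIS_RATE),
)                                                # 30 shifts (analysis-limited)

waves = math.ceil(CLUSTERS / PARALLEL_CLUSTERS)  # 3 waves of clusters
cluster_phase_days = waves * shifts_per_cluster  # 90 working days
total_days = cluster_phase_days + SYSTEM_OPTIMIZATION_DAYS

print(f"Cluster phase: {cluster_phase_days} working days "
      f"({waves} waves x {shifts_per_cluster} shifts)")
print(f"Total with system optimization, one shift/day: {total_days} working days")
```

Working 7 days a week and/or two shifts a day shortens this roughly in line with the figures quoted below.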

With a 7-days-a-week schedule, this would reduce the overall time frame to 3 months, and with two shifts a day, we reach the minimum time of a little less than two months.

End of Document