
again here, pointing out when the Timetotrigger1a parameter is used, to illustrate how the cell addition works. For the description of the Soft Handover algorithm, the following parameters are needed:

    - AS_Th: Threshold for macro diversity (reporting range);

    - AS_Th_Hyst: Hysteresis for the above threshold;

    - AS_Rep_Hyst: Replacement Hysteresis;

    - T: Time to Trigger;

    - AS_Max_Size: Maximum size of Active Set.

[Figure 91: WCDMA handover example [3gpp25.922]. The figure plots the measurement quantity of CPICH 1, CPICH 2 and CPICH 3 over time, with Cell 1 initially connected, and marks Event 1A (add Cell 2), Event 1C (replace Cell 1 with Cell 3) and Event 1B (remove Cell 3) against the thresholds AS_Th, AS_Th_Hyst, AS_Rep_Hyst and the time-to-trigger period T.]

    As described in the Figure above:

- If Meas_Sign is below (Best_Ss - As_Th - As_Th_Hyst) for a period of T, remove the Worst cell from the Active Set (Event 1B).

- If Meas_Sign is greater than (Best_Ss - (As_Th - As_Th_Hyst)) for a period of T and the Active Set is not full, add the Best cell outside the Active Set to the Active Set (Event 1A).

- If the Active Set is full and Best_Cand_Ss is greater than (Worst_Old_Ss + As_Rep_Hyst) for a period of T, add the Best cell outside the Active Set and remove the Worst cell from the Active Set (Event 1C).

Equivalent to Timetotrigger1a: CPICH 2 > CPICH 1 - (AS_Th - AS_Th_Hyst) for a time > T, then Cell 2 is added.


    Where:

- Best_Ss: the best measured cell present in the Active Set;

- Worst_Old_Ss: the worst measured cell present in the Active Set;

- Best_Cand_Ss: the best measured cell present in the monitored set;

- Meas_Sign: the measured and filtered quantity.

The hysteresis parameter is defined as a sort of guard band in order to avoid event triggering due to insignificant measurement fluctuations.
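To make the three event rules above concrete, here is a minimal Python sketch of the triggering logic. This is only an illustration of the description above, not the 3GPP or Ericsson implementation; the function and variable names are ours, and the time-to-trigger handling is deliberately left out.

    def evaluate_events(meas, active_set, monitored_set,
                        as_th, as_th_hyst, as_rep_hyst, as_max_size):
        """Sketch of the 1A/1B/1C triggering rules described above.

        `meas` maps cell id -> filtered CPICH measurement (dB); the
        time-to-trigger T is assumed to have already elapsed for the
        condition being evaluated.
        """
        best_ss = max(meas[c] for c in active_set)              # best cell in the Active Set
        worst_cell = min(active_set, key=lambda c: meas[c])     # worst cell in the Active Set
        best_cand = max(monitored_set, key=lambda c: meas[c])   # best cell outside the Active Set

        # Event 1B: a cell has fallen too far below the best one -> remove it
        if meas[worst_cell] < best_ss - as_th - as_th_hyst:
            return ("1B remove", worst_cell)

        # Event 1A: a monitored cell enters the reporting range and there is room -> add it
        if (len(active_set) < as_max_size
                and meas[best_cand] > best_ss - (as_th - as_th_hyst)):
            return ("1A add", best_cand)

        # Event 1C: Active Set full and the candidate beats the worst member -> replace
        if (len(active_set) >= as_max_size
                and meas[best_cand] > meas[worst_cell] + as_rep_hyst):
            return ("1C replace", worst_cell, best_cand)

        return None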

Event-triggered periodic measurement reporting is set up for event 1a and event 1c. This means that if the UE sends a report to UTRAN and does not get any ACTIVE SET UPDATE message, the UE will keep reporting the same event every repInterval1a until the Active Set is updated or until the condition for event triggering is no longer valid.

When a MEASUREMENT REPORT message is received at the SRNC from the UE, and the MEASUREMENT REPORT message has been triggered on event 1a, the Soft/Softer Handover evaluation algorithm (SHO_Eval) processes the report and evaluates whether the proposed candidate can be added to the Active Set.

Having described the working principle of the Soft Handover algorithm, we can say that event 1a affects the handover performance in the following ways:

According to Ericsson, too short a setting for this timer causes Active Set update events to occur too quickly after each other, leading to a high signaling load; on the other hand, too long a setting means that, although the criteria have been fulfilled, the Active Set will not be updated, so less optimal cells will be used, which leads to unnecessary interference and wastes UTRAN resources.

Additionally, Ericsson mentions that if this parameter is changed without complete knowledge of all system-internal interactions, the effects may be unexpected and fatal. They also mention that most radio networks are non-homogeneous, so a change of a parameter value does not necessarily have the same effect on the UE or RAN in all parts of the network.

It is important to remember that the measurements used for handover event evaluation are made on the downlink CPICH. This means that using different settings for primaryCpichPower (the power assigned to the CPICH) on neighboring cells will create a more complicated interference situation.

    8.5.2 The Analysis framework

    The experiment setup was done according to the Design Technique called

    Two-factor full factorial design without replications presented in [Jain].


This means that we use two variables (or factors) which are carefully controlled and varied to study their impact on the performance. In our case, these two variables are the Traffic Densities (corresponding to the 5 columns of Table 4) and the parameter TimetoTrigger1a with its possible levels. The following table shows the levels (values) chosen for this parameter.

Minimum value                                              0 milliseconds
Value from Vodafone Netherlands (same as default value)    200 milliseconds
Value from Vodafone Global                                 320 milliseconds
Maximum value                                              5000 milliseconds

Table 61: Chosen levels for the parameter Time to Trigger 1A

Once the input variables have been chosen, we have to define the output variables, i.e. the measured data as referred to in [Jain], although in this case the data is not measured as such but simulated. Knowing the expected behavior from Ericsson, we can monitor the performance in terms of:

    Uplink Interference Load [%]

    Soft Handover Attempts [num]

Each of these indicators is provided by the simulator, so an independent analysis was performed for each one, i.e. a separate matrix and ANOVA (Analysis of Variance) table were constructed for each of the two indicators. The ANOVA test calculations are greatly simplified by the Microsoft Excel Data Analysis Pack, which implements the tests "ANOVA: Two factors without replication" and "ANOVA: Two factors with replication". The difference between the two analyses is the number of repetitions of each experiment; in our case, due to time restrictions, only one experiment was performed per combination of input levels.

To make the analysis, the data has to be organized in a matrix where each column represents one of the Timetotrigger1a levels (named factor A in this description) and each row represents one of the traffic density levels (identified as Mix1_10Erl, Mix2_20Erl, Mix3_40Erl, Mix4_80Erl and Mix5_160Erl respectively, named factor B in this description). Each entry in the matrix, Yij, represents the response of the experiment where factor B is at level i (row) and factor A is at level j (column). For instance, Y21 would be equivalent in our setup to Y(Minimum Value, Mix2).
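As an illustration of this layout, the response matrix can be built in code as follows. This is only a sketch using pandas; the values are the handover-attempt counts that appear later in Table 62, and the row and column labels are the ones used in this description.

    import pandas as pd

    # Rows: factor B (traffic density mixes); columns: factor A (Timetotrigger1a levels).
    # The values are the handover-attempt counts of Table 62.
    Y = pd.DataFrame(
        [[   199,  191,  191,  174],
         [  2132,  403,  424,  372],
         [  1240,  879,  873, 1785],
         [  3019, 1777, 2704, 1658],
         [139329, 8282, 7202, 2770]],
        index=["Mix1_10Erl", "Mix2_20Erl", "Mix3_40Erl", "Mix4_80Erl", "Mix5_160Erl"],
        columns=["0ms", "200ms", "320ms", "5000ms"])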


The grand mean μ is obtained by averaging all observations. Averages per row and per column are also required. Once these averages are calculated, the column effects αj (effect of factor A at level j) are obtained by subtracting the grand mean from each column mean, and the row effects βi (effect of factor B at level i) are calculated in the same way, subtracting the grand mean from each row mean. This gives a first indication of how different the performance of each parameter/load alternative is with respect to the average performance represented by μ.

The next step is to build the matrix of the estimated response, defined as:

Ŷij = μ + αj + βi   (8-2)

Once this is defined, the error matrix can be found by subtracting, position by position, the estimated response matrix from the measured response matrix. Each entry of the error matrix is defined as follows:

Errorij = Yij(measured) - Ŷij   (8-3)

As the values of μ, the αj's and the βi's are computed such that the errors have zero mean, this matrix has the property that the sum of all the entries in any row or column must be zero.
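Continuing the sketch above, the grand mean, the effects and the error matrix of equations (8-2) and (8-3) can be computed as follows. Again, this is only an illustration of the procedure described in [Jain], not the Excel calculation used for the results.

    import numpy as np

    # Y is the response matrix built in the previous sketch.
    mu    = Y.values.mean()                                # grand mean
    alpha = Y.mean(axis=0) - mu                            # column effects (factor A)
    beta  = Y.mean(axis=1) - mu                            # row effects (factor B)

    Y_hat  = mu + np.add.outer(beta.values, alpha.values)  # estimated response (8-2)
    errors = Y.values - Y_hat                              # error matrix (8-3)

    # By construction every row and every column of the error matrix sums to zero.
    assert np.allclose(errors.sum(axis=0), 0) and np.allclose(errors.sum(axis=1), 0)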

The next step is to calculate the sum of squared errors (SSE), which is defined as

SSE = Σ eij²   (8-4)

where the sum runs over all entries of the error matrix.

Next, the total variation SST (which is different from the total variance) is calculated as:

SST = SSA + SSB + SSE   (8-5)

where:

SSA = b * Σ (αj)², where b = number of rows


SSB = a * Σ (βi)², where a = number of columns

At this point, we can also calculate the percentage of variation explained by each factor (which should be higher than the percentage of variation explained by the errors for a parameter to be considered to have a significant impact on the performance). The percentage of variation explained by each factor is defined as:

Percentage of variation explained by A = SSA/SST
Percentage of variation explained by B = SSB/SST
Percentage of variation explained by errors = SSE/SST
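A minimal continuation of the same sketch for equations (8-4) and (8-5) and the percentages of variation:

    b, a = Y.shape                      # b = number of rows, a = number of columns

    SSE = (errors ** 2).sum()           # sum of squared errors            (8-4)
    SSA = b * (alpha ** 2).sum()        # variation explained by factor A
    SSB = a * (beta ** 2).sum()         # variation explained by factor B
    SST = SSA + SSB + SSE               # total variation                  (8-5)

    print("explained by A:", SSA / SST)
    print("explained by B:", SSB / SST)
    print("explained by errors:", SSE / SST)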

To statistically test the SIGNIFICANCE of a factor, we must divide the sums of squares by their corresponding degrees of freedom (Df). In this case, the corresponding Dfs are:

SSA: Df = (a-1)
SSB: Df = (b-1)
SSE (errors): Df = (a-1)*(b-1)

The degrees of freedom of factors A and B are (a-1) and (b-1) because the corresponding effects must add up to 0, so only a-1 (respectively b-1) of them are independent; for the errors, the degrees of freedom are the product of DfA and DfB.

    Next we proceed to calculate the mean squares of all factors as follows:

    MSA = SSA / (a-1) (8-6)

    MSB = SSB / (b-1) (8-7)

    MSE = SSE/ ((a-1)*(b-1)) (8-8)

At this point we can also calculate the standard deviation of each of the factors:

Se = standard deviation of errors = √MSE   (8-9)
Standard deviation of μ = Se / √(ab)   (8-10)
Standard deviation of αj = Se * √((a-1)/ab)   (8-11)
Standard deviation of βi = Se * √((b-1)/ab)   (8-12)


Variance of each factor = (standard deviation of the factor)²   (8-13)

Once the mean and variance of each parameter are known, a confidence interval (defined as a function of the mean and the standard deviation) can be calculated using statistical methods.
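Equations (8-6) to (8-13), together with a 95% confidence interval for the column effects, can be sketched as follows, continuing the earlier sketches. Scipy is assumed to be available, and the t-distribution with the error degrees of freedom is used, following the usual procedure in [Jain].

    from scipy import stats

    df_err = (a - 1) * (b - 1)
    MSA = SSA / (a - 1)                          # (8-6)
    MSB = SSB / (b - 1)                          # (8-7)
    MSE = SSE / df_err                           # (8-8)

    s_e     = MSE ** 0.5                         # standard deviation of the errors   (8-9)
    s_mu    = s_e / (a * b) ** 0.5               # std. dev. of the grand mean        (8-10)
    s_alpha = s_e * ((a - 1) / (a * b)) ** 0.5   # std. dev. of a column effect       (8-11)
    s_beta  = s_e * ((b - 1) / (a * b)) ** 0.5   # std. dev. of a row effect          (8-12)

    # 95% confidence interval for each column effect, using the t-distribution
    # with the error degrees of freedom.
    t = stats.t.ppf(0.975, df_err)
    ci_alpha = [(aj - t * s_alpha, aj + t * s_alpha) for aj in alpha]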

The last part is to calculate the F-ratios to test the statistical significance of the factors (a systematic confirmation of the previous results based on a statistical test). The F-ratios are defined as follows:

Fa = MSA/MSE   (8-14)
Fb = MSB/MSE   (8-15)

Then, factor A is considered significant at level alpha (the significance level; alpha = 0.05 is used for a 95% confidence level) if the computed ratio is greater than FcritA = F[1-alpha, a-1, (a-1)(b-1)], where F is taken from the table of quantiles of F variates. Accordingly, factor B is considered significant at level alpha if the computed ratio is greater than FcritB = F[1-alpha, b-1, (a-1)(b-1)] from the same table. All these values are conveniently arranged by Excel, so once the procedure is known we can present the results and draw the conclusions using the criteria above.
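Outside Excel, the F-test of equations (8-14) and (8-15) can be reproduced with the F quantile function of scipy; a sketch continuing the one above:

    from scipy import stats

    F_A = MSA / MSE                              # (8-14)
    F_B = MSB / MSE                              # (8-15)

    sig_level = 0.05                             # alpha = 0.05 -> 95% confidence
    F_crit_A = stats.f.ppf(1 - sig_level, a - 1, (a - 1) * (b - 1))
    F_crit_B = stats.f.ppf(1 - sig_level, b - 1, (a - 1) * (b - 1))

    print("factor A significant:", F_A > F_crit_A)
    print("factor B significant:", F_B > F_crit_B)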

    8.5.3 Simulation Results

First of all, we show the original matrices with the measured results for the Uplink Interference Load and the Soft Handover Attempts. The next two tables contain the original matrices.

              min-level   Netherlands level   global level   max-level
              (0 msec)    (200 msec)          (320 msec)     (5000 msec)
MIX1_10Erl    199         191                 191            174
MIX2_20Erl    2132        403                 424            372
MIX3_40Erl    1240        879                 873            1785
MIX4_80Erl    3019        1777                2704           1658
MIX5_160Erl   139329      8282                7202           2770

Table 62: Original table for ANOVA, Cell HO attempts


              min-level   Netherlands level   global level   max-level
              (0 msec)    (200 msec)          (320 msec)     (5000 msec)
MIX1_10Erl    10.27%      10.20%              10.18%         10.47%
MIX2_20Erl    20.30%      20.44%              20.52%         20.95%
MIX3_40Erl    42.32%      43.02%              42.61%         47.21%
MIX4_80Erl    72.99%      77.48%              73.75%         76.90%
MIX5_160Erl   88.61%      89.38%              90.02%         90.28%

Table 63: Original table for ANOVA, Uplink Load

Next, we show the Analysis of Variance obtained by applying the Excel Data Analysis Toolkit to the original tables, and afterwards we draw the main conclusions.

8.5.3.1 Analysis of Variance (ANOVA)

The results obtained for each measured response (i.e. Handover Attempts and Uplink Interference Load) after applying the ANOVA two-factor-without-replication analysis were as follows:

Source of Variation   SS            df   MS            F             F crit
Rows                  4695850801     4   1173962700    1.33424525    3.25916
Columns               2778331540     3   926110513.2   1.052553504   3.4903
Error                 10558442984   12   879870248.7
Total                 18032625325   19

Table 64: ANOVA of Handover attempts, additive model

Source of Variation   SS            df   MS            F             F crit
Rows                  1.866556606    4   0.466639151   3032.14678    3.25916
Columns               0.001438522    3   0.000479507   3.115761269   3.4903
Error                 0.001846767   12   0.000153897
Total                 1.869841895   19

Table 65: ANOVA of Uplink load, additive model


The ROWS factor corresponds to the different traffic densities (5 levels) and the COLUMNS factor corresponds to the 4 levels of the parameter Time to Trigger 1a (0, 200, 320 and 5000 msec). The columns of the ANOVA tables are, respectively:

- SS = Squared Sum of each factor
- Df = degrees of freedom (number of independent terms used to obtain the Squared Sum)
- MS = Mean Square value = SSparameter / Dfparameter
- F = computed F factor = MSfactor / MSE, where MSE is the Mean Square Error (SSE/DfError)

According to the ANOVA test of the Handover attempts (Table 64), the percentage of variation explained by each factor is as follows:

Percentage explained by Rows (load) = SSrows / SStotal = 26%
Percentage explained by Columns (time-to-trigger levels) = SScolumns / SStotal = 15%
Percentage explained by errors = SSerrors / SStotal = 59%

According to this first calculation, from the point of view of VARIATION (which is not the same as variance), the time-to-trigger level is not significant for the set of performed experiments. This was confirmed when checking the obtained F (MScolumns/MSerror) for the Time to Trigger parameter, which is not higher than Fcrit for either of the response variables; therefore, with the assumed additive model, one cannot confirm the statistical significance of the columns (time-to-trigger levels). This is an indication that some transformation of the output variable must be tried in order to reduce the variance of the data. In fact, looking at the Maximum output / Minimum output ratio, especially for the Handover Attempts output variable, one can see that it is rather high compared with the order of magnitude of the data obtained; in this case [Jain] suggests trying a logarithmic transformation of the output data (multiplicative model). The multiplicative model with two factors assumes a model as described in the next equation:

Yij = Vi * Wj   (8-16)

If we take logarithms on both sides, we have an additive model:

Log(Yij) = Log(Vi) + Log(Wj)   (8-17)

In the case of two-factor experiments, the additive model assumed so far was:

Yij = μ + αj + βi + eij   (8-18)

If we assume a logarithmic transformation, this means that the model would be:

Log(Yij) = Log(μ) + Log(αj) + Log(βi) + Log(eij)   (8-19)

    Therefore, the output in linear fashion would be:


Yij = 10^μ * 10^αj * 10^βi * 10^eij   (8-20)

where μ, αj, βi and eij are obtained from the two-factor ANOVA performed on the logarithm of each of the output values. The transformed tables for both variables and the corresponding ANOVA tests are shown below.
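As a quick check of the transformation, the first entry of Table 62 (199 handover attempts for Mix1_10Erl at the 0 msec level) gives log10(199) ≈ 2.2989, which is indeed the first entry of Table 66.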

              min-level     Netherlands level   global level   max-level
              (0 msec)      (200 msec)          (320 msec)     (5000 msec)
MIX1_10Erl    2.298853076   2.281033367         2.281033367    2.240549248
MIX2_20Erl    3.3287872     2.605305046         2.627365857    2.57054294
MIX3_40Erl    3.093421685   2.943988875         2.941014244    3.25163822
MIX4_80Erl    3.479863113   3.249687428         3.432006687    3.219584526
MIX5_160Erl   5.14404152    3.918135226         3.857453117    3.442479769

Table 66: Logarithm transformation of the measured variable Handover attempts

              min-level   Netherlands level   global level   max-level
              (0 msec)    (200 msec)          (320 msec)     (5000 msec)
MIX1_10Erl    -0.99       -0.99               -0.99          -0.98
MIX2_20Erl    -0.69       -0.69               -0.69          -0.68
MIX3_40Erl    -0.37       -0.37               -0.37          -0.33
MIX4_80Erl    -0.14       -0.11               -0.13          -0.11
MIX5_160Erl   -0.05       -0.05               -0.05          -0.04

Table 67: Logarithm transformation of the measured variable Uplink load

Source of Variation   SS            df   MS            F             F crit
Rows                  2.041006383    3   0.680335461   19.62286238   4.757055
Columns               0.017800402    2   0.008900201   0.256707802   5.143249
Error                 0.208023309    6   0.034670552
Total                 2.266830094   11

Table 68: ANOVA of Handover attempts with the multiplicative model

Source of Variation   SS            df   MS            F             F crit
Rows                  0.745806368    3   0.248602123   1768.122551   4.757055
Columns               0.00070624     2   0.00035312    2.511480684   5.143249
Error                 0.000843614    6   0.000140602
Total                 0.747356222   11

Table 69: ANOVA of Uplink load with the multiplicative model

Again, the obtained F for the levels of the time-to-trigger parameter is not higher than Fcrit, therefore with the multiplicative model it is not possible either to guarantee statistical significance. Chapter 15 of [Jain] gives a list of graphical tests for determining which kind of transformation is required; three of these tests were tried, but the criteria for applying the corresponding transformations were not fulfilled by the collected data. Therefore, due to the time and resource limitations of this project, this verification with further transformations aimed at reducing the variance of the experiment remains open. It is also suggested to perform more than one simulation per traffic density level and then apply the two-factor ANOVA with replication, which was not possible in this project due to the limitations of time and hardware.
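For completeness, the multiplicative-model check can also be scripted. The following sketch builds on the matrix Y and the dimensions a and b defined in the earlier sketches, log-transforms the data and repeats the two-factor procedure; it is only an illustration of the method, not the exact calculation behind Tables 68 and 69.

    import numpy as np

    # np.log10 applies element-wise to the DataFrame Y from the earlier sketch.
    logY    = np.log10(Y)                                        # additive analysis of Log(Yij), eq. (8-17)
    mu_l    = logY.values.mean()
    alpha_l = logY.mean(axis=0) - mu_l
    beta_l  = logY.mean(axis=1) - mu_l
    err_l   = logY.values - (mu_l + np.add.outer(beta_l.values, alpha_l.values))

    SSA_l = b * (alpha_l ** 2).sum()
    SSB_l = a * (beta_l ** 2).sum()
    SSE_l = (err_l ** 2).sum()

    F_A_l = (SSA_l / (a - 1)) / (SSE_l / ((a - 1) * (b - 1)))    # compare against F[1-alpha, a-1, (a-1)(b-1)]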

Therefore, to conclude the analysis of the given parameter, another approach also described in [Jain] is used. This method is particularly useful when the goal of the experiment is simply to find the best combination of factor levels (the combination that produces the best performance). The method is called the Ranking method and consists of organizing the experiments in order of increasing or decreasing response, so that the experiment with the best response comes first and the one with the worst response comes last. Then, the factor columns are inspected to find levels that consistently produce good or bad results.
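The ranking step itself is mechanical. Assuming a hypothetical pandas DataFrame `experiments` with one row per simulation run and hypothetical column names, the sorting could be done as follows:

    # Hypothetical columns: "load_mix", "time_to_trigger", "ho_attempts", "ul_load".
    # Sort so that the best experiment (fewest handover attempts, lowest uplink load) comes first.
    ranked = experiments.sort_values(["ho_attempts", "ul_load"], ascending=True)
    print(ranked.to_string(index=False))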

For this analysis, the best experiment is defined as the one with the lowest handover-attempts measurement and the lowest uplink load. Therefore, we present below a table in which the rows have been sorted in order of increasing number of handovers and increasing uplink load:

Experiment   Load          Time to trigger   HO attempts   UL Load
16           MIX1_10Erl    5000              174           10.47%
11           MIX1_10Erl    320               191           10.18%
6            MIX1_10Erl    200               191           10.20%
1            MIX1_10Erl    0                 199           10.27%
17           MIX2_20Erl    5000              372           20.95%
7            MIX2_20Erl    200               403           20.44%
12           MIX2_20Erl    320               424           20.52%
13           MIX3_40Erl    320               873           42.61%
8            MIX3_40Erl    200               879           43.02%
3            MIX3_40Erl    0                 1240          42.32%
19           MIX4_80Erl    5000              1658          76.90%
9            MIX4_80Erl    200               1777          77.48%
18           MIX3_40Erl    5000              1785          47.21%
2            MIX2_20Erl    0                 2132          20.30%
14           MIX4_80Erl    320               2704          73.75%
20           MIX5_160Erl   5000              2770          90.28%
4            MIX4_80Erl    0                 3019          72.99%
15           MIX5_160Erl   320               7202          90.02%
10           MIX5_160Erl   200               8282          89.38%
5            MIX5_160Erl   0                 139329        88.61%

Table 70: Ranking method applied over the simulation outcomes

The first interesting thing to notice in Table 70 is that in the Load column nearly all the experiments are ordered from the lowest load (10 Erlangs) to the highest load (160 Erlangs), covering all the time-to-trigger levels within each traffic load level (the exceptions are experiments 2, 18 and 4, which can be attributed to random errors in the simulator). The effect of increased traffic load on the system performance, measured in terms of Handover Attempts and Uplink Load, is therefore visible: the higher the load, the worse the performance.

Looking at the second column, we can observe the pattern 5000, 320, 200 and 0 (with 320 and 200 swapping positions in some cases) throughout most of the table. Again, variations of this pattern can be attributed to experimental errors in the simulator.

We can also see that, for the same level of traffic load, the worst results in terms of Handover Attempts are obtained with the value 0 for the Time to Trigger 1a parameter.
