
Detection of Outliers in Multivariate Data: A Method Based on Influence Eigen

Nazrina Aziz

Abstract—Outliers can be defined simply as an observation (or a subset of observations) that is isolated from the other observations in the data set. There are two main reasons that motivate people to find outliers: the first is the researcher's intention; the second is the effect of an outlier on analyses. This article does not differentiate between the various justifications for outlier detection. The aim is to advise the analyst of observations that are considerably different from the majority. This article focuses on the identification of outliers using the eigenstructure of $S$ and $S_{(i)}$ in terms of eigenvalues, eigenvectors and principal components. Note that $S_{(i)}$ is the sample covariance matrix of the data matrix $X_{(i)}$, where the subscript $i$ in parentheses is read as "with observation $i$ removed from $X$". The idea of using the eigenstructure as a tool for the identification of outliers is motivated by Maximum Eigen Difference (MED), the method for identification of outliers proposed by [1]. This method utilises the maximum eigenvalue and the corresponding eigenvector. It is noted that examination of an observation's effect on the maximum eigenvalue is very significant. The technique for identification of outliers discussed in this article is applicable to a wide variety of settings. In this article, observations that are located far away from the remaining data are considered to be outliers.

Index Terms—outliers, eigenvalue, eigenvector, covariance matrix, principal component.

I. INTRODUCTION

THIS article focuses on the identification of outliers using the eigenstructure of $S$ and $S_{(i)}$ in terms of eigenvalues, eigenvectors and principal components. Note that $S_{(i)}$ is the sample covariance matrix of the data matrix $X_{(i)}$, where the subscript $i$ in parentheses is read as "with observation $i$ removed from $X$".

The idea of using the eigenstructure as a tool for the identification of outliers is motivated by Maximum Eigen Difference (MED). This method utilizes the maximum eigenvalue and the corresponding eigenvector. It is noted that examination of an observation's effect on the maximum eigenvalue is very significant. The reason is that outliers lying in a direction close to that of the maximum eigenvalue will change the maximum eigenvalue [1]. The maximum eigenvalue accounts for the maximum variance; therefore, the outliers detected through the maximum eigenvalue have a greater effect on the variance, and they need extra attention.
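To make this motivation concrete, the following minimal numpy sketch (not from the paper; the planted outlier at $(8, 8, 8)$ and all parameter values are illustrative assumptions) compares how much the maximum eigenvalue of the sample covariance matrix drops when a typical observation versus an outlier is deleted:

```python
import numpy as np

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(size=(50, 3)),     # 50 "good" observations
               [[8.0, 8.0, 8.0]]])           # one planted outlier (assumed)
n = len(X)

lam_max = np.linalg.eigvalsh(np.cov(X.T, bias=True))[-1]
for i in (0, n - 1):                         # a typical point vs the outlier
    S_i = np.cov(np.delete(X, i, axis=0).T, bias=True)
    print(i, lam_max - np.linalg.eigvalsh(S_i)[-1])
# Deleting the outlier shrinks the maximum eigenvalue far more than
# deleting a typical observation.
```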

The article is organized as follows: Section II gives a general idea of the influence of eigenvalues and eigenvectors. Section III explains the influence eigen technique for the identification of outliers. The three scenarios considered for generating data sets from multivariate distributions are described in Section IV. Finally, Section V provides illustrative examples before presenting the conclusion.

Manuscript received March 7, 2012; revised April 14, 2012. Nazrina Aziz is a senior lecturer in the School of Quantitative Sciences, College of Arts and Sciences, Universiti Utara Malaysia (email: [email protected]).

II. INFLUENCE EIGENVALUES AND EIGENVECTORS

Some statistical methods are concerned with eigenstructure problems, and in multivariate analysis a few statistics are functions of eigenvalues. A test statistic based on the eigenvalues of a transition matrix has been considered for testing a Markov chain for independence [5], and eigenstructure methods have been applied to study the collinearity problem in multivariate linear regression [7].

Now, consider the influence on the eigenvalues $\lambda_j$ and eigenvectors $v_j$ of the matrix $X^T X$, where $X$ is an $n \times p$ observation matrix consisting of $n$ observations on $p$ variables.

If the $i$th row of the matrix $X$ is deleted, one can write it as $X_{(i)}$, where the subscript $i$ in parentheses is read as "with observation $i$ removed from $X$"; i.e. if the $i$th row of $X$ is $x_i^T$, then $X_{(i)}^T X_{(i)} = X^T X - x_i x_i^T$. Let $X^T X$ have the eigenvalue-eigenvector pairs

$$(\lambda_1, v_1), (\lambda_2, v_2), \ldots, (\lambda_p, v_p),$$

with the eigenvalues in descending order

$$\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_p, \tag{1}$$

and let $X_{(i)}^T X_{(i)}$ have the eigenvalue-eigenvector pairs

$$(\lambda_{1(i)}, v_{1(i)}), (\lambda_{2(i)}, v_{2(i)}), \ldots, (\lambda_{p(i)}, v_{p(i)}),$$

with the eigenvalues also in descending order

$$\lambda_{1(i)} \ge \lambda_{2(i)} \ge \cdots \ge \lambda_{p(i)}. \tag{2}$$

Define

$$V_{(i)} = [v_{1(i)}, v_{2(i)}, \ldots, v_{p(i)}], \tag{3}$$

and

$$V_{(i)}^T S_{(i)} V_{(i)} = V_{(i)}^T (X_{(i)}^T X_{(i)}) V_{(i)} = \operatorname{diag}[\lambda_{1(i)}, \lambda_{2(i)}, \ldots, \lambda_{p(i)}] = \Lambda_{(i)}. \tag{4}$$

The influence functions of the eigenvalues $\lambda_j$ and eigenvectors $v_j$ are given by [4] as follows:

$$IF(x; \lambda_j) = (x^T v_j)^2 - \lambda_j \tag{5}$$

and

$$IF(x; v_j) = -x^T v_j \sum_{k \ne j} x^T v_k (\lambda_k - \lambda_j)^{-1} v_k. \tag{6}$$
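As a sketch only, the influence functions (5) and (6) can be evaluated numerically as follows; the function name `empirical_influence` and the numpy representation are assumptions, not part of [4]:

```python
import numpy as np

def empirical_influence(x, lam, V):
    """Evaluate influence functions (5) and (6) at a point x, given
    eigenvalues lam (descending) and eigenvector columns V."""
    p = lam.size
    proj = V.T @ x                        # proj[k] = x^T v_k
    if_lam = proj ** 2 - lam              # equation (5)
    if_vec = np.zeros((V.shape[0], p))
    for j in range(p):
        acc = np.zeros(V.shape[0])
        for k in range(p):
            if k != j:                    # sum over k != j in equation (6)
                acc += proj[k] * V[:, k] / (lam[k] - lam[j])
        if_vec[:, j] = -proj[j] * acc
    return if_lam, if_vec
```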

If one wishes to examine the $i$th observation's influence on the eigenvalues and eigenvectors of $X^T X$, one can simply remove the $i$th observation from the full data set and then compare the eigenvalues and eigenvectors of the remaining data with those of the complete data.
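A quick numerical check of this leave-one-out comparison, under assumed random data (a sketch, not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(2)
n, p, i = 15, 4, 3
X = rng.normal(size=(n, p))
X_i = np.delete(X, i, axis=0)

# Rank-one downdate identity: X_(i)^T X_(i) = X^T X - x_i x_i^T.
print(np.allclose(X_i.T @ X_i, X.T @ X - np.outer(X[i], X[i])))  # True

# Compare the eigenvalues of the reduced data with the complete data.
lam = np.linalg.eigvalsh(X.T @ X)[::-1]       # descending order
lam_i = np.linalg.eigvalsh(X_i.T @ X_i)[::-1]
print(np.all(lam >= lam_i - 1e-12))           # Lemma 1(1): λ_j ≥ λ_j(i)
```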

Lemma 1: The properties of the eigenvalues and eigenvectors are given as follows:

1) $\lambda_j \ge \lambda_{j(i)}$;


2) The relationship between the eigenvalues $\lambda_j$ and $\lambda_{j(i)}$ is given by [1]:

$$\lambda_{j(i)} = \lambda_j - \frac{1}{n-1}\left(l_{ij}^2 - \lambda_j\right) - \frac{1}{2(n-1)^2}\, l_{ij}^2 \left[1 + \sum_{k \ne j} \frac{l_{ij}^2}{\lambda_k - \lambda_j}\right] + O\!\left(\frac{1}{n^3}\right), \tag{7}$$

where $l_{ij} = (x_i - \bar{x})^T v_j$;

3) The relationship between the eigenvectors $v_j$ and $v_{j(i)}$, based on the observation matrix $X$, is given by [1] as follows:

$$v_{j(i)} = v_j + \frac{l_{ij}}{n-1} \sum_{k \ne j} \frac{l_{ik} v_k}{\lambda_k - \lambda_j} - \frac{1}{2(n-1)^2} \sum_{k \ne j} \left[\frac{l_{ij}^2 l_{ik}^2 v_j}{(\lambda_k - \lambda_j)^2} - \frac{2 l_{ik}^2 l_{ij}}{\lambda_k - \lambda_j} \sum_{k \ne j} \frac{l_{ik} v_k}{\lambda_k - \lambda_j} + \frac{2 l_{ij}^3 l_{ik} v_k}{(\lambda_k - \lambda_j)^2}\right] + O\!\left(\frac{1}{n^3}\right). \tag{8}$$

Proof: (i) $\lambda_j \ge \lambda_{j(i)}$ is obtained from the following matrix operations. It is noted that

$$X^T X = X_{(i)}^T X_{(i)} + x_i x_i^T,$$

where $X^T X$, $X_{(i)}^T X_{(i)}$ and $x_i x_i^T$ are symmetric matrices and $x_i x_i^T$ is of rank unity, so there exists an orthogonal matrix $Q$ such that

$$Q^T (x_i x_i^T) Q = \begin{pmatrix} s & 0 \\ 0 & 0 \end{pmatrix},$$

where $s$ is the unique non-zero eigenvalue of $x_i x_i^T$. Consider

$$Q^T (X_{(i)}^T X_{(i)}) Q = \begin{pmatrix} t & c^T \\ c & X_*^T X_* \end{pmatrix};$$

then there is an orthogonal matrix $P_{(k-1)\times(k-1)}$ such that

$$P^T (X_*^T X_*) P = \Lambda_* = \operatorname{diag}\{\lambda_1, \lambda_2, \ldots, \lambda_{k-1}\},$$

and one can define an orthogonal matrix

$$G = Q \begin{pmatrix} 1 & 0 \\ 0 & P \end{pmatrix};$$

then

$$G^T (X^T X) G = \begin{pmatrix} 1 & 0 \\ 0 & P^T \end{pmatrix} Q^T (X_{(i)}^T X_{(i)}) Q \begin{pmatrix} 1 & 0 \\ 0 & P \end{pmatrix} + \begin{pmatrix} 1 & 0 \\ 0 & P^T \end{pmatrix} Q^T (x_i x_i^T) Q \begin{pmatrix} 1 & 0 \\ 0 & P \end{pmatrix} = \begin{pmatrix} t + s & c^T P \\ P^T c & \Lambda_* \end{pmatrix},$$

where

$$\sum_{j=1}^{k} \lambda_j = t + s + \sum_{i=1}^{k-1} \lambda_i = t + \sum_{i=1}^{k-1} \lambda_i + s = \sum_{j=1}^{k} \lambda_{j(i)} + s. \tag{9}$$

Note that $s \ge 0$, and $\lambda_j \ge \lambda_{j(i)}$ is obtained for any $i = 1, 2, \ldots, n$.
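As a numerical illustration of equation (7), here is a sketch under the assumption that the $\lambda_j$ in (7) are the eigenvalues of the biased sample covariance matrix $S$; the sample itself is simulated:

```python
import numpy as np

rng = np.random.default_rng(3)
n, p, i, j = 200, 3, 5, 0                # examine λ_1(i) for observation i
X = rng.normal(size=(n, p))

S = np.cov(X.T, bias=True)
lam, V = np.linalg.eigh(S)
lam, V = lam[::-1], V[:, ::-1]           # descending, as in equation (1)

l = V.T @ (X[i] - X.mean(axis=0))        # l_ik = (x_i - x̄)^T v_k
corr = sum(l[j]**2 / (lam[k] - lam[j]) for k in range(p) if k != j)
lam_approx = (lam[j] - (l[j]**2 - lam[j]) / (n - 1)
              - l[j]**2 * (1 + corr) / (2 * (n - 1)**2))

lam_exact = np.linalg.eigvalsh(
    np.cov(np.delete(X, i, axis=0).T, bias=True))[::-1][j]
print(lam_approx, lam_exact)             # agree up to the O(1/n^3) remainder
```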

III. INFLUENCE EIGEN FOR IDENTIFICATION OF OUTLIERS

Let the sample covariance matrix be

$$S = \frac{1}{n} X^T \left(I_n - \frac{1}{n} 1_n 1_n^T\right) X, \tag{10}$$

where $1_n$ is the $n$-vector of ones and $I_n$ is the $n \times n$ identity matrix. Let $X_{(I)}$ and $S_{(I)}$ be the data matrix and sample covariance matrix, respectively, when $m$ observations are deleted; the subscript $I$ in parentheses is read as "with a set of $m$ observations $I$ removed from $X$", where $I = \{i_1, i_2, \ldots, i_m\}$ with $1 \le i_j \le n$ and $j = 1, 2, \ldots, m$.

Therefore, one has

$$S_{(I)} = \frac{1}{n-m}\, X_{(I)}^T \left(I_{n-m} - \frac{1}{n-m} 1_{n-m} 1_{n-m}^T\right) X_{(I)} \tag{11}$$

and

$$S_I = \frac{1}{m}\, X_I^T \left(I_m - \frac{1}{m} 1_m 1_m^T\right) X_I. \tag{12}$$

Lemma 2: It is noted that

1) The relationship among $S$, $S_I$ and $S_{(I)}$ is given as follows:
$$S_{(I)} = \frac{n}{n-m}\, S - \frac{nm}{(n-m)^2} \left[\frac{n-m}{n}\, S_I + (\bar{x}_I - \bar{x})(\bar{x}_I - \bar{x})^T\right];$$

2) If $I = \{i\}$ contains a single observation, then
$$S_{(i)} = \frac{n}{n-1}\, S - \frac{n}{(n-1)^2}\, (x_i - \bar{x})(x_i - \bar{x})^T.$$

Proof: (i) Although equations (10)-(12) are biased estimates, they can be used to develop the relationships in Lemma 2.

$$\begin{aligned}
(n-m)\, S_{(I)} &= X_{(I)}^T \left(I_{n-m} - \frac{1}{n-m} 1_{n-m} 1_{n-m}^T\right) X_{(I)} \\
&= X_{(I)}^T X_{(I)} - \frac{1}{n-m}\, X_{(I)}^T 1_{n-m} 1_{n-m}^T X_{(I)} \\
&= nS - \frac{nm}{n-m}(\bar{x} - \bar{x}_I)(\bar{x} - \bar{x}_I)^T + m\, \bar{x}_I \bar{x}_I^T - X_I^T X_I. \tag{13}
\end{aligned}$$

Now simplify equation (13) as follows:

$$\begin{aligned}
(n-m)\, S_{(I)} &= nS - \frac{nm}{n-m}(\bar{x} - \bar{x}_I)(\bar{x} - \bar{x}_I)^T - m\left(\frac{X_I^T X_I}{m} - \bar{x}_I \bar{x}_I^T\right) \\
&= nS - \frac{nm}{n-m}(\bar{x} - \bar{x}_I)(\bar{x} - \bar{x}_I)^T - m S_I. \tag{14}
\end{aligned}$$

(ii) By using equation (14), one can get the relationship between $S$, $S_I$ and $S_{(I)}$ in (i), where $\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i$ and $\bar{x}_I = \frac{1}{m}\sum_{i \in I} x_i$ represent the mean vector of all observations and the mean vector of the observations indexed by $I$, respectively. Next, set $m = 1$ in the following equation:

$$(n-1)\, S_{(I)} = nS - \frac{nm}{n-m}(\bar{x} - \bar{x}_I)(\bar{x} - \bar{x}_I)^T - m\left(\frac{X_I^T X_I}{m} - \frac{X_I^T 1_m 1_m^T X_I}{m^2}\right);$$

hence one can find equation (ii) in Lemma 2 as follows:

$$\begin{aligned}
(n-1)\, S_{(i)} &= nS - \frac{n}{n-1}(\bar{x} - \bar{x}_i)(\bar{x} - \bar{x}_i)^T - 1 \cdot \left(\frac{X_I^T X_I}{1} - \frac{X_I^T X_I}{1^2}\right) \\
&= nS - \frac{n}{n-1}(x_i - \bar{x})(x_i - \bar{x})^T.
\end{aligned}$$

This completes the proof of Lemma 2.
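Lemma 2(ii) is easy to verify numerically. The following sketch (assumed random data; biased covariances with divisors $n$ and $n-1$, as in equations (10) and (11)) compares the downdating formula with a direct recomputation:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, i = 20, 4, 7
X = rng.normal(size=(n, p))

# Biased sample covariance as in equation (10): divisor n.
S = np.cov(X.T, bias=True)
xbar = X.mean(axis=0)

# Lemma 2(ii): downdated covariance after deleting observation i.
d = (X[i] - xbar).reshape(-1, 1)
S_i_formula = n / (n - 1) * S - n / (n - 1) ** 2 * (d @ d.T)

# Direct recomputation from the reduced data (divisor n - 1).
S_i_direct = np.cov(np.delete(X, i, axis=0).T, bias=True)

print(np.allclose(S_i_formula, S_i_direct))  # True
```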


Lemma 3: Let $\{(\lambda_j, v_j),\ j = 1, 2, \ldots, p\}$ be the eigenvalue-eigenvector pairs of the sample covariance matrix $S$, and let $\{(\lambda_{j(i)}, v_{j(i)}),\ i = 1, 2, \ldots, n\}$ be the eigenvalue-eigenvector pairs of the covariance matrix $S_{(i)}$. One now has

1) $\lambda_{j(i)} = \frac{n}{n-1}\lambda_j - \frac{n}{(n-1)^2}\,\|x_i - \bar{x}\|^2 G_j$, where the weights $G_j$ satisfy $0 \le G_j \le 1$ and $\sum_{j=1}^{p} G_j = 1$;

2) $\frac{n}{n-1}\lambda_{j+1} \le \lambda_{j(i)} \le \frac{n}{n-1}\lambda_j$, $\quad j = 1, 2, \ldots, p$.

Proof: It follows immediately from Theorem 1 in [6].

1) Denote $\alpha_i = (x_i - \bar{x})/\|x_i - \bar{x}\|$; from Lemma 2 one has

$$S_{(i)} = \frac{n}{n-1}\, S - \frac{n}{(n-1)^2}(x_i - \bar{x})(x_i - \bar{x})^T. \tag{15}$$

Substituting $\alpha_i$ into equation (15) implies

$$S_{(i)} = \frac{n}{n-1}\, S - \frac{n}{(n-1)^2}\,\|x_i - \bar{x}\|^2 \alpha_i \alpha_i^T. \tag{16}$$

Given that

$$\frac{n}{n-1}\lambda_j - \frac{n}{(n-1)^2}\|x_i - \bar{x}\|^2 \le \lambda_{j(i)} \le \frac{n}{n-1}\lambda_j, \quad j = 1, 2, \ldots, p,$$

there exist weights $G_j$ satisfying $0 \le G_j \le 1$ such that

$$\lambda_{j(i)} = \frac{n}{n-1}\lambda_j - \frac{n}{(n-1)^2}\,\|x_i - \bar{x}\|^2 G_j. \tag{17}$$

Now, taking the trace of equation (16) gives

$$\operatorname{trace} S_{(i)} = \frac{n}{n-1}\operatorname{trace} S - \frac{n}{(n-1)^2}\,\|x_i - \bar{x}\|^2. \tag{18}$$

From equation (17), one has

$$\operatorname{trace} S_{(i)} = \sum_{j=1}^{p} \lambda_{j(i)} = \frac{n}{n-1}\operatorname{trace} S - \frac{n}{(n-1)^2}\,\|x_i - \bar{x}\|^2 \sum_{j=1}^{p} G_j. \tag{19}$$

As a consequence of equations (18) and (19), one has $\sum_{j=1}^{p} G_j = 1$.

2) The proof is given in Corollaries 1 and 2 in [6].
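The following sketch (assumed random data) checks the bounds in Lemma 3(2) for every observation; for $j = p$ only the upper bound is checked, since $\lambda_{p+1}$ is not defined:

```python
import numpy as np

rng = np.random.default_rng(4)
n, p = 100, 4
X = rng.normal(size=(n, p))
lam = np.linalg.eigvalsh(np.cov(X.T, bias=True))[::-1]   # λ_1 ≥ ... ≥ λ_p

ok, c = True, n / (n - 1)
for i in range(n):
    S_i = np.cov(np.delete(X, i, axis=0).T, bias=True)
    lam_i = np.linalg.eigvalsh(S_i)[::-1]
    for j in range(p):
        ok &= lam_i[j] <= c * lam[j] + 1e-12              # upper bound
        if j + 1 < p:
            ok &= c * lam[j + 1] <= lam_i[j] + 1e-12      # lower bound
print(ok)  # True
```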

Theorem 4: The influence eigen $j$ for each observation $i$ can be denoted by

$$\Delta^*_{j(i)} = (x_i^T v_j)^2 + \sum_{\substack{k=1 \\ k \ne i}}^{n} \left\{(v_j + v_{j(i)})^T x_k x_k^T (v_j + v_{j(i)})\right\}, \tag{20}$$

where $j = 1, 2, \ldots, p$.

Proof: According to [2], an influence interpretation of the Euclidean distance can be considered as the total of the influence eigen:

$$\frac{n}{n-1}(x_i - \bar{x})^T (x_i - \bar{x}) = \sum_{j=1}^{p} \left\{\frac{1}{n-1}\left(l_{ij}^2 - \lambda_j\right) + \frac{1}{2(n-1)^2}\, l_{ij}^2 \left(1 + \sum_{k \ne j} \frac{l_{ij}^2}{\lambda_k - \lambda_j}\right)\right\}. \tag{21}$$

By using the relationship of the influence eigenstructure in Lemma 1, equation (21) can be rewritten as follows:

$$\frac{n}{n-1}(x_i - \bar{x})^T (x_i - \bar{x}) = \sum_{j=1}^{p} \left[(x_i^T v_j)^2 + \sum_{\substack{k=1 \\ k \ne i}}^{n} \left\{(v_j + v_{j(i)})^T x_k x_k^T (v_j + v_{j(i)})\right\}\right]. \tag{22}$$

From equation (22), the influence eigen $j$ for each observation $i$ can be denoted by

$$\Delta^*_{j(i)} = (x_i^T v_j)^2 + \sum_{\substack{k=1 \\ k \ne i}}^{n} \left\{(v_j + v_{j(i)})^T x_k x_k^T (v_j + v_{j(i)})\right\}, \tag{23}$$

where $j = 1, 2, \ldots, p$.

However, if one considers the influence eigen $j$ on the set $I$, Theorem 4 becomes

$$\Delta^*_{j(I)} = \frac{-m}{n-m} \sum_{k=1}^{n} (x_k^T v_j)^2 - \frac{nm}{(n-m)^2}\, v_j^T \left[\frac{n-m}{n}\, S_I + (\bar{x}_I - \bar{x})(\bar{x}_I - \bar{x})^T\right] v_j. \tag{24}$$

Suppose that the influence of an observation, i.e. an outlier, on a statistic such as the $j$th eigenvalue $\lambda_j$ or eigenvector $v_j$ of a sample covariance matrix is simply the change in $\lambda_j$ or $v_j$ when the $i$th observation is deleted from the sample.

Recall that this article considers the maximum eigenvalue and the corresponding eigenvector as the objects of interest. From equation (1), it is given that

$$\max\{\lambda_1, \lambda_2, \ldots, \lambda_p\} = \lambda_{\max} = \lambda_1, \tag{25}$$

where $\lambda_1$ corresponds to $v_1$. Now, let $j = 1$; equation (20) becomes

$$\Delta^*_{1(i)} = (x_i^T v_1)^2 + \sum_{\substack{k=1 \\ k \ne i}}^{n} \left\{(v_1 + v_{1(i)})^T x_k x_k^T (v_1 + v_{1(i)})\right\}. \tag{26}$$

Therefore, one can consider the influence eigen $\Delta^*_{1(i)}$ as a tool to identify a potentially influential observation, i.e. an outlier, in the data matrix $X$. Perhaps the best advice is that an observation that is obviously more extreme than most of the remaining observations in the data set should be examined.

As a consequence, by using $\Delta^*_{1(i)}$, potential outliers in $X$ can be identified from the index plot of $\{i, \Delta^*_{1(i)}\}$. Note that the $i$th observation can be considered a potential outlier if it is located further away than the remaining observations in the data set. By using Lemmas 2 and 3 and equation (26), the algorithm for the influence eigen $\Delta^*_{1(i)}$ is given as follows (a worked sketch in numpy follows the algorithm):

• Step 1: Generate the sample covariance matrices $S$ and $S_{(i)}$;


• Step 2: Compute the eigenstructure of $S$ and $S_{(i)}$. Denote the eigenstructure of $S$ and $S_{(i)}$ as $\{\Lambda, V\}$ and $\{\Lambda_{(i)}, V_{(i)}\}$ respectively;

• Step 3: Choose the maximum eigenvalue and the corresponding eigenvector pair, $\max\{\lambda_j, v_j\}$ and $\max\{\lambda_{j(i)}, v_{j(i)}\}$, of $\{\Lambda, V\}$ and $\{\Lambda_{(i)}, V_{(i)}\}$ respectively, i.e. $\{\lambda_1, v_1\}$ and $\{\lambda_{1(i)}, v_{1(i)}\}$;

• Step 4: Compute $\Delta^*_{1(i)} = (x_i^T v_1)^2 + \sum_{\substack{k=1 \\ k \ne i}}^{n} \left\{(v_1 + v_{1(i)})^T x_k x_k^T (v_1 + v_{1(i)})\right\}$ for each observation;

• Step 5: Develop the index plot of $\{i, \Delta^*_{1(i)}\}$, $i = 1, 2, \ldots, n$.

The outliers that are detectable from the index plot are those which inflate the variances and covariances. If an outlier is the cause of a large increase in the variances of the original variables, then it must be extreme on those variables [2]. Thus, one can identify it by looking at the index plot.
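A compact sketch of Steps 1-5 in numpy follows; the function name `influence_eigen_1` and the sign alignment of the leave-one-out eigenvector are implementation assumptions, not prescribed by the paper:

```python
import numpy as np

def influence_eigen_1(X):
    """Compute the influence eigen Δ*_1(i) of equation (26) for every i."""
    n, _ = X.shape
    S = np.cov(X.T, bias=True)                   # Step 1: full-data S
    v1 = np.linalg.eigh(S)[1][:, -1]             # Steps 2-3: leading eigenvector
    delta = np.empty(n)
    for i in range(n):
        S_i = np.cov(np.delete(X, i, axis=0).T, bias=True)  # Step 1: S_(i)
        v1_i = np.linalg.eigh(S_i)[1][:, -1]
        if v1 @ v1_i < 0:                        # eigenvectors are sign-ambiguous
            v1_i = -v1_i
        proj = X @ (v1 + v1_i)                   # (v1 + v1_i)^T x_k for all k
        # Step 4: Δ*_1(i) = (x_i^T v1)^2 + Σ_{k≠i} ((v1 + v1_i)^T x_k)^2
        delta[i] = (X[i] @ v1) ** 2 + proj @ proj - proj[i] ** 2
    return delta                                 # Step 5: plot {i, delta[i]}
```

The index plot of Step 5 is then simply a plot of `delta` against the observation number; observations whose values sit far above the rest are flagged as potential outliers.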

IV. SIMULATION DATA SET

The influence eigen is tested on simulated data sets. Three different scenarios are considered for generating data sets from multivariate distributions, with a sample size of 3005 and dimension 100. Each sample contains 5 outliers.

A. Scenario 1: outliers with the same shapes but different locations

There are two conditions considered in the first scenario:

• Condition 1: A random sample $x_1, x_2, \ldots, x_n$ is drawn from a $p$-variate normal distribution with mean vector $\mu$ and positive definite covariance matrix $\Sigma$, i.e. $N(\mu, \Sigma)$. Next, $x^*_1, x^*_2, \ldots, x^*_m$ is another random sample drawn from a $p$-variate normal distribution with mean vector $\mu_{c1}$ and the same covariance matrix $\Sigma$, i.e. $N(\mu_{c1}, \Sigma)$. Note that $m$ is the number of outliers. These two sets of data vectors are then merged;

• Condition 2: The random sample $x_1, x_2, \ldots, x_n$ is constructed as in Condition 1. However, $x^*_1, x^*_2, \ldots, x^*_m$ is constructed by using $N(\mu_{c2}, \Sigma)$, which is closer to the parental distribution of the majority of the data in Condition 1, i.e. $\mu_{c2} < \mu_{c1}$.

B. Scenario 2: outliers with different shapes and different locations

In Scenario 2, $x_1, x_2, \ldots, x_n$ is a random sample drawn from a $p$-variate normal distribution with mean vector $\mu$ and positive definite covariance matrix $\Sigma$, and $x^*_1, x^*_2, \ldots, x^*_m$ is another random sample from a $p$-variate distribution with mean vector $\mu_{s2}$ and covariance matrix $\Sigma_{s2}$. Note that $\mu \ne \mu_{s2}$ and $\Sigma \ne \Sigma_{s2}$.

C. Scenario 3: outliers from a different probability law

Let $x_1, x_2, \ldots, x_n$ be a random sample drawn from a $p$-variate normal distribution with mean vector $\mu$ and positive definite covariance matrix $\Sigma$. Now generate $x^*_1, x^*_2, \ldots, x^*_m$ from a $p$-variate Student $t$ distribution with $z$ degrees of freedom and correlation matrix $\Sigma_{s3}$. Note that $\Sigma \ne \Sigma_{s3}$.
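The paper does not report the exact parameter values ($\mu$, $\Sigma$, $\mu_{c1}$, $\mu_{c2}$, $\mu_{s2}$, $\Sigma_{s2}$, $z$), so the generator below is a sketch with assumed, illustrative choices; only the overall construction (3000 good points plus 5 planted outliers in dimension 100, with the outliers appended last) follows the text:

```python
import numpy as np

rng = np.random.default_rng(5)
n, m, p = 3000, 5, 100                    # 3005 observations, 5 of them outliers
mu, Sigma = np.zeros(p), np.eye(p)        # assumed parameters
good = rng.multivariate_normal(mu, Sigma, size=n)

# Scenario 1: same shape, shifted location (Condition 2 uses a smaller shift).
out_c1 = rng.multivariate_normal(mu + 10.0, Sigma, size=m)   # assumed shift c1
out_c2 = rng.multivariate_normal(mu + 4.0, Sigma, size=m)    # assumed c2 < c1

# Scenario 2: different location and different shape.
out_s2 = rng.multivariate_normal(mu + 5.0, 4.0 * Sigma, size=m)

# Scenario 3: a different probability law (p-variate Student t, z dof).
z = 3                                     # assumed degrees of freedom
out_s3 = rng.multivariate_normal(mu, Sigma, size=m) \
         / np.sqrt(rng.chisquare(z, size=(m, 1)) / z)

X1 = np.vstack([good, out_c1])            # outliers are the last m rows
```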

V. ILLUSTRATION BY SIMULATION DATA SET

The technique is applied to the simulated data sets generated under the three scenarios described in the previous section. Recall that the last 5 observations in each simulated data set are the outliers. Fig. 1 clearly displays these 5 outliers at the top of the index plot of $\Delta^*_{1(i)}$. It is noted that in the index plot there is a very large gap between the outliers and the remaining observations, i.e. the good data. Generating a multivariate data set from a population where the good and bad data are closer to each other may cause the suggested technique not to perform as well. Nonetheless, Fig. 2 indicates that all 5 observations that are supposed to be outliers in the data set are identified under Condition 2.

Recall that Scenario 2 generated a data set with different shapes and different locations. It is known that the last 5 observations in this data set are the outliers. Fig. 3 clearly displays these 5 outliers at the top of the index plot of $\Delta^*_{1(i)}$. It is noted that there is a large gap between the outliers and the remaining observations.

Fig. 4 also shows that $\Delta^*_{1(i)}$ is capable of identifying outliers in a high-dimensional data set that contains outliers coming from a different probability law. Note that the outliers are denoted within the black circles.

Fig. 1. 3D Scatterplot for Condition 1, Scenario 1 (vertical axis: influence eigen_1; figure not reproduced).

Fig. 2. 3D Scatterplot for Condition 2, Scenario 1 (vertical axis: influence eigen_1; figure not reproduced).


Fig. 3. 3D Scatterplot for Scenario 2 (vertical axis: influence eigen_1; figure not reproduced).

Fig. 4. 3D Scatterplot for Scenario 3 (vertical axis: influence eigen_1; figure not reproduced).

VI. CONCLUSION

Sometimes the identification of outliers is the main objective of the analysis, whether the outliers are then to be removed or down-weighted prior to fitting a non-robust model. This article does not differentiate between the various justifications for outlier detection. The aim is to advise the analyst of observations that are considerably different from the majority; the technique in this article is therefore exploratory. The technique is demonstrated on large data sets. In this article, observations that are far away from the remaining data are considered to be outliers.

If the $i$th observation is a potential outlier, its value of $\Delta^*_{1(i)}$ is situated at the top of the index plot; see the illustrations of index plots in the previous section. This is because an outlier causes the value of $\lambda_1 - \lambda_{1(i)}$ to be larger than for the other observations: $\lambda_{1(i)}$ is smaller when observation $i$ is an outlier, and it follows that $\Delta^*_{1(i)}$ becomes larger.

REFERENCES

[1] S. Gao, G. Li, and D. Q. Wang, "A new approach for detecting multivariate outliers," Communications in Statistics - Theory and Methods, vol. 34, 2005, pp. 1857-1865.

[2] R. Gnanadesikan and J. R. Kettenring, "Robust estimates, residuals, and outlier detection with multiresponse data," Biometrics, vol. 28, 1972, pp. 81-124.

[3] I. T. Jolliffe, Principal Component Analysis. Springer-Verlag, New York, 1986.

[4] R. Radhakrishnan and A. M. Kshirsagar, "Influence functions for certain parameters in multivariate analysis," Communications in Statistics - Theory and Methods, vol. 10, 1981, pp. 515-529.

[5] D. Q. Wang and D. J. Scott, "Testing a Markov chain for independence," Communications in Statistics - Theory and Methods, vol. 18, 1989, pp. 4085-4103.

[6] S. G. Wang and E. P. Liski, "Effects of observations on the eigensystem of a sample covariance matrix," Journal of Statistical Planning and Inference, vol. 36, 1993, pp. 215-226.

[7] S. G. Wang and H. Nyquist, "Effects on the eigenstructure of a data matrix when deleting an observation," Computational Statistics and Data Analysis, vol. 11, 1991, pp. 179-188.
