Binary Operations on Orthogonal Models, Application to Prime Basis Factorials and Fractional Replicates
This article was downloaded by: [York University Libraries] on: 18 November 2014, at: 20:52. Publisher: Taylor & Francis. Informa Ltd, registered in England and Wales, registered number: 1072954. Registered office: Mortimer House, 37-41 Mortimer Street, London W1T 3JH, UK.

Journal of Statistical Theory and Practice. Publication details, including instructions for authors and subscription information: http://www.tandfonline.com/loi/ujsp20

Binary Operations on Orthogonal Models, Application to Prime Basis Factorials and Fractional Replicates. Vera de Jesus, João Tiago Mexia & Paulo Canas Rodrigues, Faculty of Sciences and Technology-Mathematics Department, Nova University of Lisbon, Monte de Caparica 2829-516, Caparica, Portugal. Published online: 30 Nov 2011.

To cite this article: Vera de Jesus, João Tiago Mexia & Paulo Canas Rodrigues (2009) Binary Operations on Orthogonal Models, Application to Prime Basis Factorials and Fractional Replicates, Journal of Statistical Theory and Practice, 3:2, 505-521, DOI: 10.1080/15598608.2009.10411941

To link to this article: http://dx.doi.org/10.1080/15598608.2009.10411941
Journal of Statistical Theory and Practice
Volume 3, No. 2, June 2009
© Grace Scientific Publishing

Binary Operations on Orthogonal Models, Application to Prime Basis Factorials and Fractional Replicates

Vera de Jesus, Faculty of Sciences and Technology-Mathematics Department, Nova University of Lisbon, Monte de Caparica 2829-516 Caparica, Portugal. Email: [email protected]

João Tiago Mexia, Faculty of Sciences and Technology-Mathematics Department, Nova University of Lisbon, Monte de Caparica 2829-516 Caparica, Portugal. Email: [email protected]

Paulo Canas Rodrigues, Faculty of Sciences and Technology-Mathematics Department, Nova University of Lisbon, Monte de Caparica 2829-516 Caparica, Portugal. Email: [email protected]

Received: December 11, 2007; Revised: September 3, 2008
Abstract

Commutative Jordan algebras (CJA) are used in the study of orthogonal models, either simple or derived through model crossing and model nesting. Once normality is assumed, UMVUE are obtained for relevant parameters.

The general treatment is then applied to models obtained from prime basis factorials or their fractional replicates. Besides model crossing and nesting, factor merging is considered. In this way we may extend our results to factors with any number of levels, instead of only factors whose numbers of levels are powers of the same prime.

AMS Subject Classification: 62J12; 62K15; 17C50.

Key-words: Commutative Jordan algebras; Orthogonal models; Binary operations; Factorial models; Complete sufficient statistics; UMVUE; Confidence regions.
1. Introduction

CJA enable the presentation of orthogonal models in a very convenient way. These algebras are linear spaces constituted by symmetric matrices that commute and that contain the squares of their matrices.

In the next section we present the structure of CJA, emphasizing the role played by orthogonal matrices associated to them. Seely (1971) showed that any CJA has one and only one basis, the principal basis pb(A), whose matrices Q_1, ..., Q_w are pairwise orthogonal orthogonal projection matrices. Now Q_j = A'_j A_j, with the row vectors of A_j constituting an orthonormal basis for the range space R(Q_j) of Q_j, j = 1, ..., w. Then, if \sum_{j=1}^{w} Q_j = I_n, A will be complete and P = [A'_1 ... A'_w]' will be an orthogonal matrix associated to A. If A is not complete, we can join Q_{w+1} = I_n - \sum_{j=1}^{w} Q_j to pb(A), thus obtaining the principal basis of the completion of A.

We will consider binary operations on CJA which, see Fonseca, Mexia and Zmyslony (2006), may be used to derive complex models from simple ones. The basic techniques in these developments are:

• model crossing, where the treatments of the new model are all possible combinations of the treatments of the initial models;

• model nesting, where all the treatments of one model are nested inside each treatment of another model.

Besides models obtained through crossing and nesting, we will also derive models through factor merging. Sub-algebras will be useful in connection with these models.
A mixed model given by

Y = \sum_{i=1}^{m} X_i \beta_i + \sum_{i=m+1}^{w} X_i \beta_i + e,    (1.1)

with \beta_1, ..., \beta_m fixed and the \beta_{m+1}, ..., \beta_w and e independent with null mean vectors and variance-covariance matrices \sigma^2_{m+1} I_{c_{m+1}}, ..., \sigma^2_w I_{c_w} and \sigma^2 I_n, is associated to a CJA A if the matrices M_i = X_i X'_i, i = 1, ..., w, constitute a basis for A (Fonseca, Mexia and Zmyslony, 2006).

In this paper we are interested in models strictly associated to CJA,

Y = \sum_{j=1}^{m} (A'_j \otimes 1_r) \eta_j + \sum_{j=m+1}^{w} (A'_j \otimes 1_r) \eta_j + e,    (1.2)

where \otimes represents the Kronecker matrix product and the components of 1_r are equal to 1. The vectors \eta_1, ..., \eta_m will be fixed, while the \eta_{m+1}, ..., \eta_w and e will be independent with null mean vectors and variance-covariance matrices \sigma^2_{m+1} I_{g_{m+1}}, ..., \sigma^2_w I_{g_w} and \sigma^2 I_n. When the Q_j = A'_j A_j, j = 1, ..., w, constitute the principal basis of a complete CJA A of n° × n° matrices, the model is strictly associated to A. We study these models in the third section. First we obtain general results and then we consider separately fixed effects models, random effects models and mixed models.

In the last section we apply the results on strictly associated models to prime basis factorials and their fractional replicates. Besides crossing and nesting of models we will consider factor merging. This merging will enable us to overcome the requirement that, in these models, the number of factor levels must be a power of a prime. The reason for this requirement was that, see for example Dey and Mukerjee (1999), Montgomery (2005) or Mukerjee and Wu (2006), there is a 1-1 correspondence between the factor levels and the elements of a Galois field.
2. Commutative Jordan algebras

2.1. Orthogonal associated matrices

Jordan algebras were introduced by Jordan, von Neumann and Wigner (1934) to provide an algebraic foundation for Quantum Mechanics. Later on these structures were applied, see for instance Seely (1970), Seely (1971), Seely and Zyskind (1971), Seely (1977), VanLeeuwen, Seely and Birkes (1998) and VanLeeuwen, Birkes and Seely (1999), to carry out linear statistical inference.

Jordan algebras are linear spaces constituted by square matrices that contain the squares of their matrices. If the matrices commute, the Jordan algebra will be commutative. We are interested in CJA. Moreover, Seely (1971) proved that each CJA has a unique basis {Q_1, ..., Q_w}, constituted by orthogonal projection matrices, all of them mutually orthogonal and of the same dimension. This basis is called the principal basis of the algebra.

Let {Q_1, ..., Q_w} be the principal basis of a CJA A. If

Q_1 = \frac{1}{n} 1_n 1'_n = \frac{1}{n} J_n,    (2.1)

where 1_n is an n × 1 vector with elements equal to 1, the CJA will be regular. Moreover, if

\sum_{j=1}^{w} Q_j = I_n,    (2.2)

where I_n is the n × n identity matrix, the CJA will be complete. In what follows we only consider regular and complete CJA.

Let M be an orthogonal projection matrix belonging to A. We have

M = \sum_{j=1}^{w} a_j Q_j,    (2.3)

with a_j = 0 or a_j = 1, j = 1, ..., w, and we write M = \sum_{j \in C} Q_j with C = { j : a_j ≠ 0 }. Thus the orthogonal projection matrices in a CJA will be sums of some or all of the matrices in the principal basis.

If R(Q_j) = ∇_j, j = 1, ..., w, are the range spaces of the matrices Q_j, with {Q_1, ..., Q_w} the principal basis of A, we have, see Mexia (1995), Q_j = A'_j A_j, j = 1, ..., w, where A_j is a matrix whose row vectors constitute an orthonormal basis for ∇_j. Then

P = [A'_1 ... A'_w]'    (2.4)

will be an orthogonal matrix associated to the CJA A with principal basis N(A) = {Q_1, ..., Q_w}. Moreover, if the CJA is complete and regular, we will have Q_1 = \frac{1}{n} J_n = (\frac{1}{\sqrt{n}} 1_n)(\frac{1}{\sqrt{n}} 1'_n) and so A_1 = \frac{1}{\sqrt{n}} 1'_n. Then the elements of the first row of P are equal to 1/\sqrt{n} and P will be, see Mexia (1988), orthogonal standardized.
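As an illustration of (2.1)-(2.4), the principal basis and an associated orthogonal standardized matrix can be checked numerically. This is a minimal sketch, assuming a hypothetical 3 × 3 example (the one-way CJA with principal basis {(1/n)J_n, I_n - (1/n)J_n}, which is not taken from the paper); any orthonormal contrast rows would serve as A_2.

```python
import numpy as np

# Hypothetical example: complete and regular CJA on R^3 with
# principal basis {Q1, Q2}, Q1 = (1/n) J_n and Q2 = I_n - Q1.
n = 3
Q1 = np.full((n, n), 1.0 / n)
Q2 = np.eye(n) - Q1

# A_j: rows are an orthonormal basis of the range of Q_j, so Q_j = A_j' A_j
A1 = np.full((1, n), 1.0 / np.sqrt(n))                        # range of Q1: span(1_n)
A2 = np.array([[1/np.sqrt(2), -1/np.sqrt(2), 0.0],
               [1/np.sqrt(6), 1/np.sqrt(6), -2/np.sqrt(6)]])  # orthonormal contrasts
assert np.allclose(A1.T @ A1, Q1)
assert np.allclose(A2.T @ A2, Q2)

# (2.4): stacking the A_j gives an associated orthogonal matrix P,
# standardized because its first row is constant, equal to 1/sqrt(n)
P = np.vstack([A1, A2])
assert np.allclose(P @ P.T, np.eye(n))
assert np.allclose(P[0], 1.0 / np.sqrt(n))
```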
Lastly we observe that, see Fonseca, Mexia and Zmyslony (2006), if

M = \sum_{j=1}^{w} a_j Q_j    (2.5)

is a matrix of a complete CJA, its eigenvalues will be a_1, ..., a_w. Since M is symmetric, its eigenvalues will be real, with multiplicities g_j = rank(Q_j), j = 1, ..., w, and so

\det(M) = \prod_{j=1}^{w} a_j^{g_j}.    (2.6)

Moreover, since the Q_j, j = 1, ..., w, are pairwise orthogonal, if M is regular we will have

M^{-1} = \sum_{j=1}^{w} a_j^{-1} Q_j.    (2.7)
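The spectral relations (2.5)-(2.7) can be verified numerically. A minimal sketch, again assuming the hypothetical one-way CJA {(1/n)J_n, I_n - (1/n)J_n} with arbitrary coefficients a_1, a_2 (assumptions, not from the paper):

```python
import numpy as np

n = 3
Q1 = np.full((n, n), 1.0 / n)          # (1/n) J_n, rank g1 = 1
Q2 = np.eye(n) - Q1                    # I_n - (1/n) J_n, rank g2 = n - 1

# M = a1*Q1 + a2*Q2 has eigenvalues a1, a2 with multiplicities g1, g2
a1, a2 = 2.0, 5.0
M = a1 * Q1 + a2 * Q2

assert np.isclose(np.linalg.det(M), a1 * a2 ** (n - 1))   # (2.6)

M_inv = (1.0 / a1) * Q1 + (1.0 / a2) * Q2                 # (2.7)
assert np.allclose(M_inv, np.linalg.inv(M))
```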
2.2. Binary operations

We represent by \otimes the Kronecker matrix product. This matrix operation is studied in Steeb (1991).

Let us establish the

Proposition 2.1. With X_j = {Q_{j,1}, ..., Q_{j,w_j}} the principal basis of the CJA A_j constituted by n_j × n_j matrices, j = 1, 2,

A_1 \otimes A_2 = \left\{ \sum_{i=1}^{w_1} \sum_{j=1}^{w_2} a_{i,j} Q_{1,i} \otimes Q_{2,j} \right\}

will be the CJA with principal basis {Q_{1,i} \otimes Q_{2,j}, i = 1, ..., w_1, j = 1, ..., w_2}. If A_1 and A_2 are complete [regular], A_1 \otimes A_2 will be complete [regular].

Proof. As the Kronecker product is associative and the Q_{1,i} \otimes Q_{2,j}, i = 1, ..., w_1, j = 1, ..., w_2, are mutually orthogonal orthogonal projection matrices, A_1 \otimes A_2 will be a linear space constituted by symmetric matrices that commute and that contains the squares of its matrices. Then A_1 \otimes A_2 will be a CJA.

As the Q_{1,i} \otimes Q_{2,j}, i = 1, ..., w_1, j = 1, ..., w_2, are mutually orthogonal, they are linearly independent, so they constitute the principal basis of A_1 \otimes A_2.

Besides this, if \sum_{i=1}^{w_d} Q_{d,i} = I_{n_d}, d = 1, 2, we will have \sum_{i=1}^{w_1} \sum_{j=1}^{w_2} Q_{1,i} \otimes Q_{2,j} = I_{n_1} \otimes I_{n_2} = I_{n_1 n_2} and, if Q_{d,1} = \frac{1}{n_d} J_{n_d}, d = 1, 2, Q_{1,1} \otimes Q_{2,1} = \frac{1}{n_1 n_2} J_{n_1 n_2}, which completes the proof.

We can show that, see Fonseca, Mexia and Zmyslony (2006), if A_1, A_2 and A_3 are CJA,

A_1 \otimes (A_2 \otimes A_3) = (A_1 \otimes A_2) \otimes A_3.    (2.8)

Thus, the \otimes product of CJA is associative.
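Proposition 2.1 can be checked numerically for a small case. A sketch, assuming two hypothetical one-way CJA of sizes n_1 = 2 and n_2 = 3 (choices of example, not from the paper):

```python
import numpy as np
import itertools

def pb_one_way(n):
    """Principal basis {(1/n)J_n, I_n - (1/n)J_n} of a complete regular CJA."""
    J = np.full((n, n), 1.0 / n)
    return [J, np.eye(n) - J]

B1, B2 = pb_one_way(2), pb_one_way(3)
kron_basis = [np.kron(P, Q) for P in B1 for Q in B2]

# each Kronecker product is a symmetric idempotent (orthogonal projection)
for Q in kron_basis:
    assert np.allclose(Q, Q.T) and np.allclose(Q @ Q, Q)

# pairwise orthogonality and completeness: the products sum to I_{n1 n2}
for Qa, Qb in itertools.combinations(kron_basis, 2):
    assert np.allclose(Qa @ Qb, 0)
assert np.allclose(sum(kron_basis), np.eye(6))
```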
If P_l is an orthogonal matrix associated to A_l, l = 1, 2, P_1 \otimes P_2 will be an orthogonal matrix associated to A_1 \otimes A_2.

We also have the

Proposition 2.2. P(A_1 \otimes A_2) = P(A_1) \otimes P(A_2).

Proof. As we saw, P(A_l) = [A'_{l,1} : ... : A'_{l,w_l}]', with Q_{l,j} = A'_{l,j} A_{l,j}, j = 1, ..., w_l, l = 1, 2. Then Q_{1,i} \otimes Q_{2,j} = (A'_{1,i} \otimes A'_{2,j})(A_{1,i} \otimes A_{2,j}), i = 1, ..., w_1, j = 1, ..., w_2, and we will have

P(A_1 \otimes A_2) = [A'_{1,1} \otimes A'_{2,1} : ... : A'_{1,w_1} \otimes A'_{2,w_2}]' = P(A_1) \otimes P(A_2).

Another operation with CJA, in which we again have the \otimes product, will be represented by ~. Given {Q_{j,1}, ..., Q_{j,w_j}} the principal basis, constituted by n_j × n_j matrices, of the complete and regular CJA A_j, j = 1, 2, the principal basis of A_1 ~ A_2 is

\left\{ Q_{1,1} \otimes \frac{1}{n_2} J_{n_2}, ..., Q_{1,w_1} \otimes \frac{1}{n_2} J_{n_2} \right\} \cup \left\{ I_{n_1} \otimes Q_{2,2}, ..., I_{n_1} \otimes Q_{2,w_2} \right\}.

Then, if

P_j = \begin{bmatrix} \frac{1}{\sqrt{n_j}} 1'_{n_j} \\ K_j \end{bmatrix}, \quad j = 1, 2,    (2.9)

is an orthogonal matrix associated to A_j, j = 1, 2,

P = \begin{bmatrix} P_1 \otimes \frac{1}{\sqrt{n_2}} 1'_{n_2} \\ I_{n_1} \otimes K_2 \end{bmatrix}    (2.10)

will be an orthogonal matrix associated to A_1 ~ A_2. Thus, if Q_{j,l} = A'_{j,l} A_{j,l}, l = 1, ..., w_j, with A_{j,1} = \frac{1}{\sqrt{n_j}} 1'_{n_j}, j = 1, 2, we have

\begin{cases} Q_{1,l} \otimes \frac{1}{n_2} J_{n_2} = \left( A_{1,l} \otimes \frac{1}{\sqrt{n_2}} 1'_{n_2} \right)' \left( A_{1,l} \otimes \frac{1}{\sqrt{n_2}} 1'_{n_2} \right), & l = 1, ..., w_1 \\ I_{n_1} \otimes Q_{2,l} = (I_{n_1} \otimes A_{2,l})' (I_{n_1} \otimes A_{2,l}), & l = 1, ..., w_2. \end{cases}    (2.11)

With A_1, A_2 and A_3 complete and regular CJA we have, see Fonseca, Mexia and Zmyslony (2006),

A_1 ~ (A_2 ~ A_3) = (A_1 ~ A_2) ~ A_3,    (2.12)

that is, the ~ product is also associative. We will say that ~ is the restricted Kronecker product of complete and regular CJA.

While the operation \otimes will be used in model crossing, the operation ~ will be useful in the study of model nesting.

Another interesting application of ~ arises when we consider r observations per treatment. Thus, if for r = 1 we use the CJA A, for r > 1 the CJA will be A ~ A(r), with
\{ \frac{1}{r} J_r, \bar{J}_r \}, where \bar{J}_r = I_r - \frac{1}{r} J_r, the principal basis of A(r). We have \bar{J}_r = K'_r K_r, with K_r obtained by deleting the first row, equal to \frac{1}{\sqrt{r}} 1'_r, of an r × r orthogonal matrix.
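The principal basis of A_1 ~ A_2 given above can also be verified numerically. A sketch, assuming again two small hypothetical one-way CJA (sizes n_1 = 2, n_2 = 3 are choices of example); note that the w_1 + w_2 - 1 projectors still sum to the identity, so the restricted product is complete:

```python
import numpy as np

def pb_one_way(n):
    """Principal basis {(1/n)J_n, I_n - (1/n)J_n} of a complete regular CJA."""
    J = np.full((n, n), 1.0 / n)
    return [J, np.eye(n) - J]

n1, n2 = 2, 3
B1, B2 = pb_one_way(n1), pb_one_way(n2)
Jbar2 = np.full((n2, n2), 1.0 / n2)                # (1/n2) J_{n2}

basis = [np.kron(Q, Jbar2) for Q in B1]            # Q_{1,j} x (1/n2) J_{n2}
basis += [np.kron(np.eye(n1), Q) for Q in B2[1:]]  # I_{n1} x Q_{2,j}, j >= 2

# mutually orthogonal projections summing to I_{n1 n2}
for i, Qi in enumerate(basis):
    assert np.allclose(Qi @ Qi, Qi)
    for Qj in basis[i + 1:]:
        assert np.allclose(Qi @ Qj, 0)
assert len(basis) == len(B1) + len(B2) - 1         # w1 + w2 - 1 projectors
assert np.allclose(sum(basis), np.eye(n1 * n2))
```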
2.3. Sub-algebras

A° is a sub-CJA of the CJA A if it is an algebra and is contained in A. With {Q°_1, ..., Q°_{w°}} and {Q_1, ..., Q_w} the principal bases of A° and A respectively, we have

Q°_i = \sum_{j \in C_i} Q_j, \quad i = 1, ..., w°,    (2.13)

where the sets C_i are pairwise disjoint.

If A° is a sub-algebra of A, and

\sum_{i=1}^{m°} Q°_i = \sum_{j=1}^{m} Q_j,    (2.14)

the pair (A°, A) is said to be (m°, m)-compatible. If the pairs (A°_l, A_l) are (m°_l, m_l)-compatible, l = 1, 2, we can order the matrices of the principal bases of A°_1 \otimes A°_2 and of A_1 \otimes A_2 so that the pair (A°_1 \otimes A°_2, A_1 \otimes A_2) will be (m°_1 m°_2, m_1 m_2)-compatible, since

\sum_{i_1=1}^{m°_1} \sum_{i_2=1}^{m°_2} Q°_{1,i_1} \otimes Q°_{2,i_2} = \sum_{j_1=1}^{m_1} \sum_{j_2=1}^{m_2} Q_{1,j_1} \otimes Q_{2,j_2}.    (2.15)

We point out that every CJA is a sub-algebra of itself.

Suppose that the pair (A°_1, A_1) is (m°_1, m_1)-compatible and A°_2 is a sub-algebra of A_2. If these algebras are all complete and regular, A°_1 ~ A°_2 will be a complete and regular sub-algebra of A_1 ~ A_2. The pair (A°_1 ~ A°_2, A_1 ~ A_2) will be (m°_1, m_1)-compatible since

\sum_{i=1}^{m°_1} Q°_{1,i} \otimes \frac{1}{n_2} J_{n_2} = \sum_{j=1}^{m_1} Q_{1,j} \otimes \frac{1}{n_2} J_{n_2}.    (2.16)

Furthermore, if the pair (A°_2, A_2) is (m°_2, m_2)-compatible, the pair (A°_1 ~ A°_2, A_1 ~ A_2) will be (w°_1 + m°_2 - 1, w_1 + m_2 - 1)-compatible.

When we are working with mixed models and their sub-models, m and m° would be the dimensions of A and A°, where A° is associated to the fixed effects part, and m° will be the separation value.
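Relation (2.13) can be illustrated with a small sub-algebra. A sketch, assuming (not from the paper) the CJA of a one-way layout with two groups of two observations, whose principal basis {Q_1, Q_2, Q_3} is coarsened into the sub-CJA with principal basis {Q_1, Q_2 + Q_3}:

```python
import numpy as np

n = 4
J4 = np.full((n, n), 1.0 / n)
# principal basis of A: grand mean, between two groups of 2, within groups
G = np.kron(np.eye(2), np.full((2, 2), 0.5))   # projector on group means
Q1, Q2, Q3 = J4, G - J4, np.eye(n) - G

# sub-CJA A° per (2.13): each Q°_i is a sum over a class C_i of the Q_j
Q1o, Q2o = Q1, Q2 + Q3                         # C_1 = {1}, C_2 = {2, 3}
for Q in (Q1o, Q2o):
    assert np.allclose(Q, Q.T) and np.allclose(Q @ Q, Q)
assert np.allclose(Q1o @ Q2o, 0)               # mutually orthogonal
assert np.allclose(Q1o + Q2o, np.eye(n))       # A° is also complete
```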
3. Strictly associated models

3.1. General results

Let {Q_1, ..., Q_w} be the principal basis of a complete and regular CJA A constituted by n° × n° matrices.
The principal basis of A ~ A(r) will be constituted by the matrices

Q_j \otimes \frac{1}{r} J_r = \left( A_j \otimes \frac{1}{\sqrt{r}} 1'_r \right)' \left( A_j \otimes \frac{1}{\sqrt{r}} 1'_r \right), \quad j = 1, ..., w,    (3.1)

and Q^\perp = I_n - \sum_{j=1}^{w} Q_j \otimes \frac{1}{r} J_r.

If the matrices of A are n° × n°, those of A ~ A(r) will be n × n, with n = n° r, and, if A is complete,

\sum_{j=1}^{w} Q_j = I_{n°}    (3.2)

as well as

Q^\perp = I_{n°} \otimes \bar{J}_r = (A^\perp)' A^\perp,    (3.3)

with A^\perp = I_{n°} \otimes K_r.

We also have

\begin{cases} g_j = \mathrm{rank}\left( Q_j \otimes \frac{1}{r} J_r \right) = \mathrm{rank}(Q_j) = \mathrm{rank}(A_j), & j = 1, ..., w \\ g = \mathrm{rank}(Q^\perp) = \mathrm{rank}(A^\perp) = n°(r - 1). \end{cases}    (3.4)

We put Z ∼ N(\xi, W) when Z is normal with mean vector \xi and variance-covariance matrix W. The model Y ∼ N(\mu, V) will be strictly associated to the CJA A ~ A(r) if

\begin{cases} \mu = \sum_{j=1}^{m} \left( A'_j \otimes \frac{1}{\sqrt{r}} 1_r \right) \eta_j \\ V = \sum_{j=1}^{w} \gamma_j \left( Q_j \otimes \frac{1}{r} J_r \right) + \sigma^2 Q^\perp, \end{cases}    (3.5)

with \gamma_1 = ... = \gamma_m = \sigma^2.

With

\begin{cases} \eta_j = \left( A_j \otimes \frac{1}{\sqrt{r}} 1'_r \right) \mu, & j = 1, ..., w \\ \eta^*_j = \left( A_j \otimes \frac{1}{\sqrt{r}} 1'_r \right) Y, & j = 1, ..., w, \end{cases}    (3.6)

we will have

\eta^*_j ∼ N(\eta_j, \gamma_j I_{g_j}), \quad j = 1, ..., w,    (3.7)

with \eta_j = 0_{g_j}, j = m+1, ..., w, and \gamma_j = \sigma^2_j + \sigma^2, j = 1, ..., w.

Then, with e^* = Q^\perp Y, the model

Y = \sum_{j=1}^{w} \left( A'_j \otimes \frac{1}{\sqrt{r}} 1_r \right) \eta^*_j + e^*,    (3.8)
will be strictly associated to the CJA A ~ A(r).

As the model is normal with mean vector and variance-covariance matrix given by (3.5), it has the density

n(Y) = \frac{ \exp\left( -\frac{1}{2} \left( \sum_{j=1}^{m} \frac{\|\eta^*_j - \eta_j\|^2}{\sigma^2} + \sum_{j=m+1}^{w} \frac{S_j}{\gamma_j} + \frac{S}{\sigma^2} \right) \right) }{ (2\pi)^{n/2} \, \sigma^{\left( n - \sum_{j=m+1}^{w} g_j \right)} \prod_{j=m+1}^{w} \gamma_j^{g_j/2} },    (3.9)

with the complete sufficient statistics

\begin{cases} \eta^*_j ∼ N(\eta_j, \gamma_j I_{g_j}), & j = 1, ..., m \\ S_j = \|\eta^*_j\|^2 ∼ \gamma_j \chi^2_{g_j}, & j = m+1, ..., w \\ S = \|e^*\|^2 ∼ \sigma^2 \chi^2_g. \end{cases}    (3.10)

Given the models

Y_l = \sum_{j=1}^{w_l} \left( A'_{l,j} \otimes \frac{1}{\sqrt{r_l}} 1_{r_l} \right) \eta^*_{l,j} + e^*_l, \quad l = 1, 2,    (3.11)

strictly associated to the CJA A_l ~ A(r_l), l = 1, 2, if we cross them we will obtain the model

Y = \sum_{j_1=1}^{w_1} \sum_{j_2=1}^{w_2} \left( A'_{1,j_1} \otimes A'_{2,j_2} \otimes \frac{1}{\sqrt{r}} 1_r \right) \eta^*_{j_1,j_2} + e^*,    (3.12)

strictly associated to the CJA (A_1 \otimes A_2) ~ A(r).

If the first m_1 [m_2] terms of the first [second] model correspond to the fixed effects part, in the final model the fixed effects part is constituted by the terms with indexes j_l = 1, ..., m_l, l = 1, 2.

If we nest all treatments of the second initial model inside each treatment of the first initial model, we obtain the model

Y = \sum_{j_1=1}^{w_1} \left( A'_{1,j_1} \otimes \frac{1}{\sqrt{n°_2}} 1_{n°_2} \otimes \frac{1}{\sqrt{r}} 1_r \right) \eta^*_{j_1} + \sum_{j_2=2}^{w_2} \left( I_{n°_1} \otimes A'_{2,j_2} \otimes \frac{1}{\sqrt{r}} 1_r \right) \eta^*_{w_1 + j_2 - 1} + e^*,    (3.13)

strictly associated to the CJA (A_1 ~ A_2) ~ A(r).

As the random effects factors do not nest fixed effects factors, if the first model has random effects factors, the second cannot have fixed effects factors. Similarly, if the second model has fixed effects factors, the first can only have fixed effects factors. Thus, representing by Fx [Al, Mt] the fixed effects models [random, mixed], we will have the following possible cases:
1st model    2nd model
Al           Al
Mt           Al
Fx           Fx, Al, Mt

2nd model    1st model
Al           Fx, Al, Mt
Mt           Fx
Fx           Fx
3.2. Fixed effects models

Now we have m = w and \gamma_1 = ... = \gamma_w = \sigma^2. Then, with

\begin{cases} \eta = [\eta'_1 : ... : \eta'_w]' \\ \eta^* = [\eta^{*\prime}_1 : ... : \eta^{*\prime}_w]', \end{cases}    (3.14)

we will have

\eta^* ∼ N(\eta, \sigma^2 I_g), \quad g = \sum_{j=1}^{w} g_j,    (3.15)

independent of S = \|e^*\|^2.

We will be interested in testing hypotheses on vectors G\eta, with G = [G_1 : ... : G_w] a matrix whose row vectors are linearly independent. Taking

\begin{cases} \Psi^* = G\eta^* = \sum_{j=1}^{w} G_j \eta^*_j \\ \Psi = G\eta = \sum_{j=1}^{w} G_j \eta_j, \end{cases}    (3.16)

according to the Blackwell-Lehmann-Scheffé theorem, we have the UMVUE

\begin{cases} \sigma^{*2} = \frac{S}{g} \\ \Psi^* = G\eta^* ∼ N(\Psi, \sigma^2 G G'), \end{cases}    (3.17)

for \sigma^2 and for \Psi. As the \eta^*_j, j = 1, ..., w, are independent of S, \Psi^* will be independent of S.

We can use the statistic S ∼ \sigma^2 \chi^2_g to construct confidence intervals for \sigma^2. The 1 - q level confidence intervals can be used to obtain q level tests for the null hypothesis

H_0 : \sigma^2 = \sigma^2_0,    (3.18)

the hypothesis being rejected when \sigma^2_0 is not contained in the 1 - q level interval. Furthermore, see Mexia (1995),

U_0 = (\Psi^* - \Psi_0)' (G G')^+ (\Psi^* - \Psi_0),    (3.19)

where (G G')^+ is the Moore-Penrose inverse of G G', will be the product by \sigma^2 of a chi-square with c = rank(G G') degrees of freedom and non-centrality parameter

\delta = \frac{1}{\sigma^2} (\Psi - \Psi_0)' (G G')^+ (\Psi - \Psi_0).    (3.20)
We put U_0 ∼ \sigma^2 \chi^2_{c,\delta}, independent of S. Thus to test

H_0 : \Psi = \Psi_0,    (3.21)

we will have the statistic

F = \frac{g}{c} \frac{U_0}{S},    (3.22)

with the F distribution with c and g degrees of freedom and non-centrality parameter \delta. Since \delta = 0 when H_0 holds, the test will be unbiased.
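A minimal numeric sketch of the test statistic (3.22); the dimensions c and g are assumed and the quadratic forms U_0 and S are simulated under H_0 rather than taken from data, and scipy supplies the F distribution.

```python
import numpy as np
from scipy.stats import f as f_dist

rng = np.random.default_rng(0)
c, g, sigma2 = 2, 10, 1.0        # assumed: c = rank(GG'), g = error d.f.

# Under H0 (delta = 0): U0 ~ sigma^2 chi^2_c, independent of S ~ sigma^2 chi^2_g
U0 = sigma2 * rng.chisquare(c)
S = sigma2 * rng.chisquare(g)

F = (g / c) * U0 / S             # (3.22): central F_{c,g} when H0 holds
p_value = f_dist.sf(F, c, g)     # reject H0 at level q when p_value < q
assert 0.0 < p_value < 1.0
```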
3.3. Random effects models

In these models we have m = 1 and

S_j = \|\eta^*_j\|^2 ∼ \gamma_j \chi^2_{g_j}, \; j = 2, ..., w \quad (i) \quad S = \|e^*\|^2 ∼ \sigma^2 \chi^2_g,

where (i) stands for independence.

Besides the \gamma_j, j = 2, ..., w, we may be interested in the \sigma^2_j = \gamma_j - \sigma^2, j = 2, ..., w. The estimators

\begin{cases} \gamma^*_j = \frac{S_j}{g_j}, & j = 2, ..., w \\ \sigma^{*2} = \frac{S}{g}, \end{cases}    (3.23)

and hence

\sigma^{*2}_j = \gamma^*_j - \sigma^{*2}, \quad j = 2, ..., w,    (3.24)

will be UMVUE for the \gamma_j, j = 2, ..., w, \sigma^2, and the \sigma^2_j, j = 2, ..., w, since they are unbiased estimators derived from complete sufficient statistics.

We can reason as above to obtain confidence intervals for \sigma^2 and, through duality, test H_0 : \sigma^2 = \sigma^2_0.

Putting

\upsilon_j = \frac{\gamma_j}{\sigma^2}, \quad j = 2, ..., w,    (3.25)

the statistics

F_j = \frac{g}{g_j} \frac{S_j}{S}, \quad j = 2, ..., w,    (3.26)

will be the products by the \upsilon_j, j = 2, ..., w, of variables with central F distributions with g_j, j = 2, ..., w, and g degrees of freedom. If f_{p,r,s} is the quantile for probability p of the central F distribution with r and s degrees of freedom, we have, for the \upsilon_j, j = 2, ..., w, the 1 - q
level confidence intervals

\left[ \frac{F_j}{f_{1-\frac{q}{2}, g_j, g}} ; \frac{F_j}{f_{\frac{q}{2}, g_j, g}} \right], \quad \left[ \frac{F_j}{f_{1-q, g_j, g}} ; +\infty \right[, \quad \left[ 0 ; \frac{F_j}{f_{q, g_j, g}} \right], \quad j = 2, ..., w.    (3.27)

These intervals may be used to derive q level tests for the hypotheses

H_{0,j}(\upsilon_{j,0}) : \upsilon_j = \upsilon_{j,0}, \quad j = 2, ..., w.    (3.28)

We point out that the usual hypotheses

H_{0,j} : \sigma^2_j = 0, \quad j = 2, ..., w,    (3.29)

are equivalent to the H_{0,j}(1), j = 2, ..., w.

Moreover, if g ≥ 3, F_j has mean value \frac{g}{g-2} \upsilon_j, j = 2, ..., w, so we will have the UMVUE

\upsilon^*_j = \frac{g-2}{g} F_j, \quad j = 2, ..., w.    (3.30)

Next, taking

\lambda_{j,j'} = \frac{\gamma_j}{\gamma_{j'}}, \quad j ≠ j',    (3.31)

the statistics

F_{j,j'} = \frac{g_{j'}}{g_j} \frac{S_j}{S_{j'}}, \quad j ≠ j',    (3.32)

will be the products by the \lambda_{j,j'}, j ≠ j', of variables with central F distributions with g_j and g_{j'}, j ≠ j', degrees of freedom. Thus we obtain the 1 - q level confidence intervals for the \lambda_{j,j'}, j ≠ j', given by

\left[ \frac{F_{j,j'}}{f_{1-\frac{q}{2}, g_j, g_{j'}}} ; \frac{F_{j,j'}}{f_{\frac{q}{2}, g_j, g_{j'}}} \right], \quad \left[ \frac{F_{j,j'}}{f_{1-q, g_j, g_{j'}}} ; +\infty \right[, \quad \left[ 0 ; \frac{F_{j,j'}}{f_{q, g_j, g_{j'}}} \right], \quad j ≠ j',    (3.33)

which can be used, through duality, to test the null hypotheses

H_{0,j,j'}(\lambda_{j,j',0}) : \lambda_{j,j'} = \lambda_{j,j',0}, \quad j ≠ j'.    (3.34)
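A sketch of the two-sided 1 - q interval in (3.27) and the point estimate (3.30) for \upsilon_j = \gamma_j/\sigma^2, with assumed degrees of freedom and an assumed observed value of F_j (none of the numbers come from the paper); scipy's `f.ppf` supplies the quantiles f_{p,r,s}.

```python
from scipy.stats import f as f_dist

g_j, g, q = 4, 12, 0.05          # assumed rank and error d.f., assumed level
F_j = 3.1                        # assumed observed value of (g/g_j) * S_j / S

# two-sided 1 - q confidence interval for upsilon_j, first case of (3.27)
lower = F_j / f_dist.ppf(1 - q / 2, g_j, g)
upper = F_j / f_dist.ppf(q / 2, g_j, g)
assert 0 < lower < upper

# H_{0,j}: sigma_j^2 = 0 is equivalent to upsilon_j = 1, per (3.29);
# reject at level q, through duality, when 1 lies outside the interval
reject = not (lower <= 1.0 <= upper)

# unbiased point estimate (3.30), valid since g >= 3
upsilon_star = (g - 2) / g * F_j
```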
Lastly, if g_{j'} ≥ 3, we have the UMVUE

\lambda^*_{j,j'} = \frac{g_{j'}-2}{g_{j'}} F_{j,j'}, \quad j ≠ j'.    (3.35)

3.4. Mixed models

In these models we have the fixed effects part \sum_{j=1}^{m} (A'_j \otimes \frac{1}{\sqrt{r}} 1_r) \eta^*_j and the random effects part \sum_{j=m+1}^{w} (A'_j \otimes \frac{1}{\sqrt{r}} 1_r) \eta^*_j + e^*. We carry out the inference for the fixed effects part as in subsection 3.2, rewriting matrix G as G = [G_1 : ... : G_m] and taking \Psi^* = \sum_{j=1}^{m} G_j \eta^*_j and \Psi = \sum_{j=1}^{m} G_j \eta_j. For the random effects part we reason as in subsection 3.3, restricting ourselves to the \gamma_j and to the \upsilon_j with j = m+1, ..., w.
4. Prime basis and related factorial designs

4.1. Foreword

As we hope is now clear, the key to applying the preceding results is having an orthogonal matrix associated to the relevant CJA. This is true whether we are working with isolated models or using binary operations to derive complex models from simple ones. Thus, in considering prime basis and related factorials, we will concentrate on deriving the required orthogonal matrices.

4.2. Prime basis factorials

We will now consider the p^N factorial design with N (N > 1, integer) factors, each having a prime number p of levels. The levels will be numbered from 0 to p - 1, which are the elements of the Galois field GF(p) of order p. The p^N treatments may be represented by vectors x = [x_1, ..., x_N]', with x_j ∈ GF(p), j = 1, ..., N. These vectors will constitute a dimension N vector space GF(p)^N over GF(p). The vectors in GF(p)^N can be ordered by the indexes

l(x) = 1 + \sum_{j=1}^{N} x_j p^{j-1}.    (4.1)

Given a ∈ GF(p)^N we have the linear application of GF(p)^N on GF(p) given by

L_a(x) = \left( \sum_{j=1}^{N} a_j x_j \right)_{(p)},    (4.2)

where (p) indicates the use of modulo p arithmetic. These applications constitute a linear space L^N[p] over GF(p), isomorphic to GF(p)^N. Thus the L_{a_1}(x), ..., L_{a_m}(x) are linearly
independent if and only if a_1, ..., a_m are linearly independent.

The linear applications whose first non-null coefficient is 1 will be called reduced linear applications. Let L^N_r[p] be the family of such applications. There are

k_N(p) = \frac{p^N - 1}{p - 1}    (4.3)

reduced applications.

With L_1, ..., L_u linearly independent reduced applications, the set

[L|b] = [L_1, ..., L_u | b_1, ..., b_u]    (4.4)

of treatments x such that (L_i(x))_{(p)} = b_i, i = 1, ..., u, will be called a block. Since the system of equations defining a block enables us to express u components of x as linear combinations of the remaining components, in every [L|b] there will be p^{N-u} treatments, and there will be p^u blocks.

We may order the reduced applications L ∈ L^N_r[p] according to increasing indexes l(a). Thus if l(a_1) < l(a_2) we will have k(a_1) < k(a_2), k(a) = 1, ..., k_N(p).

To each L ∈ L^N_r[p] we can associate a p × p^N matrix C(L) with elements

c_{i,j}(L) = \begin{cases} 0, & L(x_j) ≠ i-1 \\ 1, & L(x_j) = i-1, \end{cases}    (4.5)

i = 1, ..., p and j = 1, ..., p^N. It is easy to see that

C(L) C(L)' = p^{N-1} I_p,    (4.6)

since L takes each of its values for p^{N-1} treatments. Moreover, if L_1 and L_2 are linearly independent,

C(L_1) C(L_2)' = p^{N-2} J_p,    (4.7)

since [L_1, L_2 | b_1, b_2] contains p^{N-2} treatments whatever the pair (b_1, b_2).

With q = p^{(N-1)/2} we consider the matrices

A(L) = \frac{1}{q} K_p C(L), \quad L ∈ L^N_r[p].    (4.8)

If L_1, L_2 ∈ L^N_r[p] are linearly independent, we have, according to (4.7),

A(L_1) A(L_2)' = 0_{(p-1) \times (p-1)}.    (4.9)

We now prove the following.

Proposition 4.1. The matrix

P(p^N) = \left[ \frac{1}{\sqrt{p^N}} 1_{p^N} : A(L_1)' : ... : A(L_{k_N(p)})' \right]'    (4.10)
is orthogonal, and it is associated to the CJA A(p^N) with principal basis

N(p^N) = \left\{ \frac{1}{p^N} J_{p^N}, Q(L_1), ..., Q(L_{k_N(p)}) \right\},    (4.11)

where

Q(L_j) = A(L_j)' A(L_j), \quad j = 1, ..., k_N(p).    (4.12)

Proof. First of all, we have

A(L_j) A(L_j)' = \frac{1}{q^2} K_p C(L_j) C(L_j)' K'_p = \frac{1}{p^{N-1}} K_p (p^{N-1} I_p) K'_p = K_p K'_p = I_{p-1},

and, for i ≠ j,

A(L_i) A(L_j)' = \frac{1}{q^2} K_p C(L_i) C(L_j)' K'_p = \frac{1}{p^{N-1}} K_p (p^{N-2} 1_p 1'_p) K'_p = \frac{1}{p} (K_p 1_p)(K_p 1_p)' = 0_{(p-1) \times (p-1)}.

Thus the \frac{1}{p^N} J_{p^N}, Q(L_1), ..., Q(L_{k_N(p)}) are symmetric, idempotent and mutually orthogonal, so they will constitute the principal basis of the CJA A(p^N).
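The construction of C(L), A(L) and P(p^N) in (4.5)-(4.10) is easy to check by computer for a small case. A sketch for the 2^2 factorial (p = 2, N = 2; the choice of K_2 below, the last row of a standardized 2 × 2 orthogonal matrix, is our own):

```python
import numpy as np
from itertools import product

p, N = 2, 2
# treatments x in GF(p)^N, ordered by the index l(x) of (4.1)
treatments = sorted(product(range(p), repeat=N),
                    key=lambda x: sum(xj * p**j for j, xj in enumerate(x)))

def L(a, x):                      # linear application (4.2)
    return sum(aj * xj for aj, xj in zip(a, x)) % p

def C(a):                         # incidence matrix (4.5)
    M = np.zeros((p, p**N))
    for j, x in enumerate(treatments):
        M[L(a, x), j] = 1.0
    return M

apps = [(1, 0), (0, 1), (1, 1)]   # the k_N(p) = 3 reduced applications
for a in apps:
    assert np.allclose(C(a) @ C(a).T, p**(N - 1) * np.eye(p))          # (4.6)
assert np.allclose(C(apps[0]) @ C(apps[1]).T, p**(N - 2) * np.ones((p, p)))  # (4.7)

K2 = np.array([[1/np.sqrt(2), -1/np.sqrt(2)]])   # K_p: orthogonal matrix minus first row
q = p ** ((N - 1) / 2)
A = [K2 @ C(a) / q for a in apps]                # (4.8)
P = np.vstack([np.full((1, p**N), 1 / np.sqrt(p**N))] + A)   # (4.10)
assert np.allclose(P @ P.T, np.eye(p**N))        # P(p^N) is orthogonal
```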
The matrix

P(p^N r) = \left[ \frac{1}{\sqrt{r p^N}} 1_{p^N r} : A(L_1)' \otimes \frac{1}{\sqrt{r}} 1_r : ... : A(L_{k_N(p)})' \otimes \frac{1}{\sqrt{r}} 1_r : (A^\perp)' \right]',    (4.13)

with A^\perp = I_{p^N} \otimes K_r, will be orthogonal, and it is associated to the CJA A(p^N) ~ A(r).

Let L_1 be the linear space with basis L_1 = {L_1, ..., L_u}. In L_1 there are p^u linear applications, of which k_u(p) are reduced. If we give these reduced applications the indexes 1, ..., k_u(p), we can take

A(Bl) = [A(L_1)' : \cdots : A(L_{k_u(p)})']',    (4.14)

and then

P(Bl) = \left[ \frac{1}{\sqrt{p^N}} 1_{p^N} : A(Bl)' : A(L_{k_u(p)+1})' : \cdots : A(L_{k_N(p)})' \right]'    (4.15)

will be an orthogonal matrix associated to a CJA A(p^N, Bl), which we consider when analyzing designs in which the treatments are grouped in blocks. That grouping of treatments is known as confounding, since the linear applications in L_1 have fixed values for the treatments in any block. Thus the variation ascribable to these applications is confounded with the variation between blocks. The applications in L_1 are said to be confounded.
4.3. Fractional replicates

If we take only the treatments x_1, ..., x_{p^{N-u}} in one of the blocks [L_1, ..., L_u | b_1, ..., b_u], we will have a fractional replicate \frac{1}{p^u} \times p^N. We give the chosen treatments the first p^{N-u} indexes. To study these models we introduce in L[p]^N an equivalence relation \rho_{L_1}, putting

l \, \rho_{L_1} \, g \text{ if } g = (c\,l + l^*)_{(p)},    (4.16)

with c ∈ {1, ..., p - 1} and l^* ∈ L_1, where L_1 is the linear subspace of L[p]^N spanned by L_1. If l^* takes a fixed value b for all chosen treatments, we will have

g(x_j) = (c\,l(x_j) + b)_{(p)}, \quad j = 1, ..., p^{N-u}.    (4.17)

Therefore, if C^*(l) is the sub-matrix constituted by the first p^{N-u} columns of C(l), we see that the rows of C^*(g) are obtained by reordering the rows of C^*(l). Similarly, with

A^*(l) = \frac{1}{\sqrt{p^{N-u-1}}} K_p C^*(l),    (4.18)

we see that the rows of A^*(g) are obtained by reordering the rows of A^*(l).

When r = 1, we have

S^*(l) = \|A^*(l)\,y\|^2    (4.19)

and, when r > 1,

S^*(l) = \left\| \left( A^*(l) \otimes \frac{1}{\sqrt{r}} 1'_r \right) y \right\|^2,    (4.20)

so in both cases we have

S^*(l) = S^*(g).    (4.21)

Then we choose, in each \rho_{L_1} equivalence class, a reduced linear application to which is attributed the difference between the groups of treatments chosen that correspond to the different values taken by the applications. As we saw, if we have the relation l \rho_{L_1} g, the two applications define the same groups of treatments.

The space L_1 should be chosen in such a way that the effects of factorial levels and also low order factorial interactions are isolated in their \rho_{L_1} equivalence classes. For p = 2 and p = 3 this problem has been studied (for instance, see Cochran, 1992).

To study confounding in the case of fractional replicates we complete L_1 = {l_1, ..., l_u} to obtain a basis {l_1, ..., l_u, ..., l_v, ..., l_N} for L[p]^N. Take L_2 = {l_{u+1}, ..., l_v} and L_3 = {l_{v+1}, ..., l_N}, as well as L_j = L(L_j), the subspace spanned by L_j, j = 1, 2, 3. Furthermore we have the subspaces L_1 \oplus L_2 and L_2 \oplus L_3, given by the direct sums of L_1 and L_2 and of L_2 and L_3. With

g_h = \sum_{j=1}^{N} a_{h,j} \, l_j, \quad h = 1, 2,    (4.22)
we have g1ρL1g2 if and only if a2, j = (ca1, j)(p), j = u + 1, . . . ,N, with c ∈ {1, . . . p− 1}.Let us establish the2
Proposition 4.2. L1 is a ρL1 equivalence class. Moreover, there are kN−u(p) classes dis-tinct of L1, each containing pu(p− 1) applications. The ρL1 equivalence classes are ρ-4
saturated and (L1⊕L2)\L1 is the union of kv−u(p) such classes.
Proof. If g1 ∈ L1 we have g1ρL1 g2 if and only if g2 ∈ L1 and, since when g1,g2 ∈ L1,6
g1ρL1g2 we see that L1 is a ρL1 equivalence class. Moreover, if g1 = g1,1 + g1,2 and g2 =g2,1 + g2,2 with g1,1,g2,1 ∈ L1 and g1,2,g2,2 ∈ L2 we have, as we saw, g1ρL1g2 if and only8
if g1,2ρg2,2. So, the ρL1 equivalence classes distinct from L1 will contain one and only oneapplication from (L1⊕L2)r. Thus, there will be kN−u(p) ρL1 equivalence classes distinct10
from L1. Since #(L[p]\L1
)= pN− pu, we see that they will contain pu(p−1) applications.
Lastly, if g1 ∈ L1⊕L2, g1ρL1 g2 when and only when g2 ∈ L1⊕L2 will be ρL1-saturated.12
The rest of the proof is straightforward.
Let us give the indexes 1, . . . ,ku(p) to the reduced applications in L1,r, the indexes ku(p)+14
1, . . . ,ku(p)+ kv(p) to the reduced applications in L2,r, the indexes ku(p)+ kv(p)+ 1, . . .,ku+v(p) to the reduced applications in (L1⊕L2)\(L1,r∪L2,r), the indexes ku+v(p)+1, . . .,16
ku+v(p)+kN−v(p) to the reduced applications in L3,r and the indexes ku+v(p)+kN−v(p), . . .,kN(p) to the reduced applications in L[p]Nr \(L1⊕L2)r. Given l ∈ (L1⊕L2)r, let mo(l) be18
a minimum order application ρL1-equivalent to l. We will then have
S(mo(l)
)= S(l); l ∈ (L1⊕L2)r. (4.23)20
Reasoning as in the previous section, we show that

$$P\Big(\tfrac{1}{p^u} \times p^N\Big) = \Big[\tfrac{1}{\sqrt{p^{N-u}}}\,\mathbf{1}_{p^{N-u}} : A^*(l)';\ l \in (\mathcal{L}_1 \oplus \mathcal{L}_2)_r\Big]' \tag{4.24}$$

is an orthogonal matrix associated to the CJA $\mathcal{A}\big(\tfrac{1}{p^u} \times p^N\big)$, with principal basis

$$N\Big(\tfrac{1}{p^u} \times p^N\Big) = \Big\{\tfrac{1}{p^{N-u}}\,\mathbf{J}_{p^{N-u}},\ Q^*(l);\ l \in (\mathcal{L}_1 \oplus \mathcal{L}_2)_r\Big\}, \tag{4.25}$$

where $Q^*(l) = A^*(l)'A^*(l)$, $l \in (\mathcal{L}_1 \oplus \mathcal{L}_2)_r$. For a more general discussion of this problem see Mukerjee and Wu (2006).
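The principal basis property behind (4.24)–(4.25) is generic: any orthogonal matrix whose first row is the normalized all-ones vector, split into row blocks, yields pairwise orthogonal projection matrices $Q = A'A$ that resolve the identity. A minimal NumPy sketch, where the random orthogonal matrix and the block sizes are stand-ins, not the actual $A^*(l)$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 9

# Build an orthogonal matrix whose first row spans the all-ones direction,
# playing the role of P(1/p^u x p^N) in (4.24).
M = np.column_stack([np.ones(n) / np.sqrt(n), rng.standard_normal((n, n - 1))])
P, _ = np.linalg.qr(M)   # columns orthonormal; first column proportional to 1_n
P = P.T                  # rows are now an orthonormal basis

# Row blocks stand in for 1'_n / sqrt(n) and the A*(l).
blocks = [P[:1], P[1:4], P[4:]]
Q = [B.T @ B for B in blocks]  # Q*(l) = A*(l)' A*(l)

for i, Qi in enumerate(Q):
    assert np.allclose(Qi @ Qi, Qi)        # idempotent: an orthogonal projection
    for Qj in Q[i + 1:]:
        assert np.allclose(Qi @ Qj, 0)     # pairwise orthogonal
assert np.allclose(sum(Q), np.eye(n))      # together they resolve the identity
```

The first block gives $Q_0 = \tfrac{1}{n}\mathbf{J}_n$, the projector on the overall mean, matching the first element of the principal basis in (4.25).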
4.4. Binary operations and factor merging
Crossing and nesting prime basis factorials and/or their fractional replicates is quite straightforward. Only two points need to be considered in detail. Firstly, when we cross models for which there is confounding, the blocks will be the cartesian products of the original ones. That is, in each of the new blocks we have all the combinations between the treatments in a block of each of the original models. If confounding had been used for only one of the original models, the treatments of the other are treated as constituting a single block. Thus, in each of the new blocks we will have all the combinations between the treatments in a block and all the treatments of the model without confounding.
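The cartesian-product structure of the crossed blocks can be sketched directly; the toy block systems below are illustrative, with the second model unconfounded so that all its treatments form a single block:

```python
from itertools import product

# Blocks of the two original models (treatments coded as tuples of factor levels).
blocks_1 = [[(0,), (1,)], [(2,), (3,)]]    # model 1, with confounding: two blocks
blocks_2 = [[('a',), ('b',), ('c',)]]      # model 2, no confounding: one block of all treatments

# Crossing: each new block is the cartesian product of one block from each model,
# treatments being concatenated level tuples.
new_blocks = [[t1 + t2 for t1, t2 in product(b1, b2)]
              for b1, b2 in product(blocks_1, blocks_2)]

assert len(new_blocks) == len(blocks_1) * len(blocks_2)
assert all(len(b) == 2 * 3 for b in new_blocks)  # all combinations within each new block
```

Each new block thus contains every combination of the treatments in one block of model 1 with all treatments of model 2, as described above.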
Dow
nloa
ded
by [
Yor
k U
nive
rsity
Lib
rari
es]
at 2
0:52
18
Nov
embe
r 20
14
The second point also refers to model crossing. Let us assume the factors in the two original models had $p_1$ and $p_2$ levels. Then we may merge one factor in each of the models to obtain a factor with $p_1 p_2$ levels. Let there be $p_l^{N_l - u_l}$ treatments, $l = 1,2$, in the two models, and let $A_l(L_l)$ be the matrix associated to the effects of the merged factor in model $l$, $l = 1,2$. Thus, after the merging, to the new factor will correspond the matrix
$$A' = \Big[A'_1(L_1) \otimes \tfrac{1}{\sqrt{p_2^{N_2-u_2}}}\,\mathbf{1}_{p_2^{N_2-u_2}} : \tfrac{1}{\sqrt{p_1^{N_1-u_1}}}\,\mathbf{1}_{p_1^{N_1-u_1}} \otimes A'_2(L_2) : A'_1(L_1) \otimes A'_2(L_2)\Big]. \tag{4.26}$$
When we replace the sub-matrices by $A'$ we replace $A_1 \otimes A_2$ by a CJA compatible with this factor merging. Likewise, we may merge factorial interactions, taking $A_l(L_l)$, $l = 1,2$, to be the corresponding matrices in the two initial models. This merging of factors and factorial interactions enables us to overcome the limitation that the factors had to have a prime, or a power of a prime, number of levels.
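A numerical check of (4.26): if each $A_l$ has orthonormal rows orthogonal to the all-ones vector, the stacked matrix $A'$ keeps these properties for the merged factor. The sizes `n1`, `n2` and the helper `contrast_rows` below are illustrative stand-ins for $p_l^{N_l-u_l}$ and $A_l(L_l)$:

```python
import numpy as np

def contrast_rows(n):
    """(n-1) x n matrix with orthonormal rows, each orthogonal to 1_n,
    obtained by completing the normalized all-ones vector to an orthonormal basis."""
    q, _ = np.linalg.qr(np.column_stack([np.ones(n) / np.sqrt(n), np.eye(n)[:, :n - 1]]))
    return q[:, 1:].T

n1, n2 = 4, 3  # stand-ins for p1^{N1-u1} and p2^{N2-u2} treatments
A1, A2 = contrast_rows(n1), contrast_rows(n2)
u1 = np.ones((1, n1)) / np.sqrt(n1)
u2 = np.ones((1, n2)) / np.sqrt(n2)

# The three blocks of (4.26): the effects of each original factor and their interaction.
A = np.vstack([np.kron(A1, u2), np.kron(u1, A2), np.kron(A1, A2)])

assert A.shape == (n1 * n2 - 1, n1 * n2)          # all contrasts of the merged factor
assert np.allclose(A @ A.T, np.eye(n1 * n2 - 1))  # rows remain orthonormal
assert np.allclose(A @ np.ones(n1 * n2), 0)       # still orthogonal to the overall mean
```

The block row counts $(n_1-1) + (n_2-1) + (n_1-1)(n_2-1) = n_1 n_2 - 1$ confirm that the merged factor with $n_1 n_2$ levels carries all its contrasts.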
References12
Cochran, W. G., 1992. Experimental designs. 2nd edition. Wiley Classics Library. New York.Dey, A., Mukerjee R., 1999. Fractional Factorial Plans. Wiley Series in Probability and Statistics.14Fonseca, M., Mexia, J.T., Zmyslony, R., 2006. Binary operations on Jordan algebras and orthogonal normal
models. Linear Algebra and its Applications, 417, 75–86.16Jordan, P., Von, Neumann J., Wigner, E.P., 1934. On an algebraic generalization of the quantum mechanical
formalism. Annals of Mathematics, Series II, 35, 29–64.18Mexia, J.T., 1988. Standardized orthogonal matrices and the decomposition of the sum of squares for
treatments. Trabalhos de Investigação N◦2, Departamento de Matemática, FCT-UNL.20Mexia, J.T., 1995. Introdução à Inferência Estatística Linear. Edições Universitárias Lusófonas.Montgomery, D., 1997. Design and Analysis of Experiments. John Wiley & Sons.22Mukerjee, R., Wu, C.F., 2006. A Modern Theory of Factorial Designs. Springer.Seely, J., 1970. Linear spaces and unbiased estimators. Application to a mixed linear model. The Annals of24
Mathematical Statistics, 41(5), 1735–1748.Seely, J., 1971. Quadratic subspaces and completeness. The Annals of Mathematical Statistics, 42(2), 710–26
721.Seely, J., Zyskind, G., 1971. Linear spaces and minimum variance estimation. The Annals of Mathematical28
Statistics, 42(2).Seely, J., 1977. Minimal sufficient statistics and completeness for multivariate normal families. Sankhya,30
39, 170–185.Seely, J., 1971. Quadratic subspaces and completeness. The annals of Mathematical Statistics, 42(2), 710–32
721.Steeb W. H., 1991. Kronecker Product and Applications. Manheim.34VanLeeuwen, D., Seely, J., Birkes D., 1998. Sufficient conditions for orthogonal designs in mixed linear
models. Journal of Statistical Planning and Inference, 73(1-2) 373–389.36VanLeeuwen, D., Birkes D., Seely, J., 1999. Balance and orthogonality in designs for mixed classification
models. The Annals of Statistics, 27(6) 1927–1947.38