Aliasing (factorial experiments)

In the statistical theory of factorial experiments, aliasing is the property of fractional factorial designs that makes some effects "aliased" with each other – that is, indistinguishable from each other. A primary goal of the theory of such designs is the control of aliasing so that important effects are not aliased with each other.[1]

In a "full" factorial experiment, the number of treatment combinations or cells (see below) can be very large.[note 1] This necessitates limiting observations to a fraction (subset) of the treatment combinations. Aliasing is an automatic and unavoidable result of observing such a fraction.[3][4]

The aliasing properties of a design are often summarized by giving its resolution. This measures the degree to which the design avoids aliasing between main effects and important interactions.[5]

Fractional factorial experiments have long been a basic tool in agriculture,[6] food technology,[7][8] industry,[9][10][11] medicine and public health,[12][13] and the social and behavioral sciences.[14] They are widely used in exploratory research,[15] particularly in screening experiments, which have applications in industry, drug design and genetics.[16] In all such cases, a crucial step in designing such an experiment is deciding on the desired aliasing pattern, or at least the desired resolution.

As noted below, the concept of aliasing may have influenced the identification of an analogous phenomenon in signal processing theory.

Overview


Associated with a factorial experiment is a collection of effects. Each factor determines a main effect, and each set of two or more factors determines an interaction effect (or simply an interaction) between those factors. Each effect is defined by a set of relations between cell means, as described below. In a fractional factorial design, effects are defined by restricting these relations to the cells in the fraction. It is when the restricted relations for two different effects turn out to be the same that the effects are said to be aliased.

The presence or absence of a given effect in a given data set is tested by statistical methods, most commonly analysis of variance. While aliasing has significant implications for estimation and hypothesis testing, it is fundamentally a combinatorial and algebraic phenomenon. Construction and analysis of fractional designs thus rely heavily on algebraic methods.

The definition of a fractional design is sometimes broadened to allow multiple observations of some or all treatment combinations – a multisubset of all treatment combinations.[17] A fraction that is a subset (that is, where treatment combinations are not repeated) is called simple. The theory described below applies to simple fractions.

Contrasts and effects

Cell means in a 2 × 3 factorial experiment

         B = 1   B = 2   B = 3
A = 1     μ11     μ12     μ13
A = 2     μ21     μ22     μ23

In any design, full or fractional, the expected value of an observation in a given treatment combination is called a cell mean,[18] usually denoted using the Greek letter μ. (The term cell is borrowed from its use in tables of data.)

A contrast in cell means is a linear combination of cell means in which the coefficients sum to 0. In the 2 × 3 experiment illustrated here, the expression

μ11 − μ12

is a contrast that compares the mean responses of the treatment combinations 11 and 12. (The coefficients here are 1 and −1.)

The effects in a factorial experiment are expressed in terms of contrasts.[19][20] In the above example, the contrast

μ11 + μ12 + μ13 − μ21 − μ22 − μ23

is said to belong to the main effect of factor A, as it contrasts the responses to the "1" level of factor A with those for the "2" level. The main effect of A is said to be absent if this expression equals 0. Similarly,

μ11 − μ12 + μ21 − μ22    and    μ12 − μ13 + μ22 − μ23

are contrasts belonging to the main effect of factor B. On the other hand, the contrasts

μ11 − μ12 − μ21 + μ22    and    μ11 − μ13 − μ21 + μ23

belong to the interaction of A and B; setting them equal to 0 expresses the lack of interaction.[note 2] These designations, which extend to arbitrary factorial experiments having three or more factors, depend on the pattern of coefficients, as explained elsewhere.[21][22]

Since it is the coefficients of these contrasts that carry the essential information, they are often displayed as column vectors. For the example above, such a table might look like this:[23]

cell    A     B     B    AB    AB
11      1     1     0     1     1
12      1    −1     1    −1     0
13      1     0    −1     0    −1
21     −1     1     0    −1    −1
22     −1    −1     1     1     0
23     −1     0    −1     0     1

The columns of such a table are called contrast vectors: their components add up to 0. While there are in general many possible choices of columns to represent a given effect, the number of such columns — the degrees of freedom of the effect — is fixed and is given by a well-known formula.[24][25] In the 2 × 3 example above, the degrees of freedom for A, B and the AB interaction are 1, 2 and 2, respectively.

In a fractional factorial experiment, the contrast vectors belonging to a given effect are restricted to the treatment combinations in the fraction. Thus, in the half-fraction {11, 12, 13} in the 2 × 3 example, the three effects may be represented by the column vectors in the following table:

cell    A     B     B    AB    AB
11      1     1     0     1     1
12      1    −1     1    −1     0
13      1     0    −1     0    −1

The consequence of this truncation — aliasing — is described below.
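
As a concrete illustration, the restriction can be carried out numerically. The following sketch (Python with NumPy; the array layout simply copies the two tables above, and the variable names are illustrative) truncates the five contrast vectors to the half-fraction {11, 12, 13} and shows that the A column is no longer a contrast vector:

```python
import numpy as np

# Contrast vectors for the 2 x 3 experiment, one row per cell (11, 12, 13, 21, 22, 23).
# Columns: A, B, B, AB, AB, as in the table above.
cells = ["11", "12", "13", "21", "22", "23"]
table = np.array([
    [ 1,  1,  0,  1,  1],
    [ 1, -1,  1, -1,  0],
    [ 1,  0, -1,  0, -1],
    [-1,  1,  0, -1, -1],
    [-1, -1,  1,  1,  0],
    [-1,  0, -1,  0,  1],
])

fraction = ["11", "12", "13"]               # the half-fraction observed
rows = [cells.index(c) for c in fraction]
restricted = table[rows, :]                 # 3 x 5: the truncated contrast vectors

print(restricted)
print("column sums:", restricted.sum(axis=0))   # the A column no longer sums to 0
```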

Definitions


The factors in the design are allowed to have different numbers of levels, as in a 2 × 3 factorial experiment (an asymmetric or mixed-level experiment).

Fix a fraction of a full factorial design. Consider a set of contrast vectors representing a given effect (in particular, a main effect or interaction) in the full factorial design, and restrict each of these vectors to the cells of the fraction. One says that the effect is

  • preserved in the fraction if the restricted vectors are all contrast vectors;
  • completely lost in the fraction if the restricted vectors are all constant vectors, that is, vectors whose components are equal; and
  • partly lost otherwise.

Similarly, consider two effects along with the restrictions of their representing vectors to the fraction. The two effects are said to be

  • unaliased in the fraction if each restricted vector of one effect is orthogonal (perpendicular) to every restricted vector of the other;
  • completely aliased in the fraction if each restricted vector of either effect is a linear combination of the restricted vectors of the other;[note 3] and
  • partly aliased otherwise.

Finney[27] and Bush[28] introduced the terms "lost" and "preserved" in the sense used here. Despite the relatively long history of this topic, though, its terminology is not entirely standardized. The literature often describes lost effects as "not estimable" in a fraction,[29] although estimation is not the only issue at stake. Rao[30] referred to preserved effects as "measurable from" the fraction.
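
These definitions lend themselves to direct computation. The sketch below (Python with NumPy; the function names are illustrative, not standard terminology) tests the criteria just given — column sums for preserved/lost, and dot products and ranks for aliasing — and applies them to the half-fraction {11, 12, 13} of the 2 × 3 example:

```python
import numpy as np

def is_preserved(R):
    """Effect preserved: every restricted vector is still a contrast vector."""
    return np.allclose(R.sum(axis=0), 0)

def is_completely_lost(R):
    """Effect completely lost: every restricted vector is constant."""
    return all(np.allclose(col, col[0]) for col in R.T)

def are_unaliased(R1, R2):
    """Unaliased: every vector of one effect is orthogonal to every vector of the other."""
    return np.allclose(R1.T @ R2, 0)

def are_completely_aliased(R1, R2):
    """Completely aliased: the two sets of restricted vectors span the same space."""
    r1 = np.linalg.matrix_rank(R1)
    r2 = np.linalg.matrix_rank(R2)
    both = np.linalg.matrix_rank(np.hstack([R1, R2]))
    return r1 == r2 == both

# Restricted vectors in the half-fraction {11, 12, 13} of the 2 x 3 experiment:
A  = np.array([[1.], [1.], [1.]])
B  = np.array([[1., 0.], [-1., 1.], [0., -1.]])
AB = np.array([[1., 1.], [-1., 0.], [0., -1.]])

print(is_completely_lost(A))                 # True: A is completely lost
print(is_preserved(B), is_preserved(AB))     # True True: both are preserved
print(are_unaliased(B, AB))                  # False
print(are_completely_aliased(B, AB))         # True: B and AB are completely aliased
```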

Resolution


The extent of aliasing in a given fractional design is measured by the resolution of the fraction, a concept first defined by Box and Hunter:[5]

A fractional factorial design is said to have resolution R if every p-factor effect[note 4] is unaliased with every effect having fewer than R − p factors.

For example, a design has resolution 3 if main effects are unaliased with each other (taking p = 1), though it allows main effects to be aliased with two-factor interactions. This is typically the lowest resolution desired for a fraction. It is not hard to see that a fraction of resolution R also has resolution R − 1, etc., so one usually speaks of the maximum resolution of a fraction.

The number R in the definition of resolution is usually understood to be a positive integer, but one may consider the effect of the grand mean to be the (unique) effect with no factors (i.e., with p = 0). This effect sometimes appears in analysis of variance tables.[31] It has one degree of freedom, and is represented by a single vector, a column of 1's.[32] With this understanding, an effect is

  • preserved in a fraction if it is unaliased with the grand mean, and
  • completely lost in a fraction if it is completely aliased with the grand mean.

A fraction then has resolution 2 if all main effects are preserved in the fraction. If it has resolution 3 then two-factor interactions are also preserved.

Computation


The definitions above require some computations with vectors, illustrated in the examples that follow. For certain fractional designs (the regular ones), a simple algebraic technique can be used that bypasses these procedures and gives a simple way to determine resolution. This is discussed below.

Examples


The 2 × 3 experiment


The fraction {11, 12, 13} of this experiment was described above along with its restricted vectors. It is repeated here along with the complementary fraction {21, 22, 23}:

cell    A     B     B    AB    AB
11      1     1     0     1     1
12      1    −1     1    −1     0
13      1     0    −1     0    −1

cell    A     B     B    AB    AB
21     −1     1     0    −1    −1
22     −1    −1     1     1     0
23     −1     0    −1     0     1

In both fractions, the A effect is completely lost (the A column is constant) while the B and AB interaction effects are preserved (each 3 × 1 column is a contrast vector, as its components sum to 0). In addition, the B and AB effects are completely aliased in each fraction: In the first fraction, the vectors for AB are linear combinations of those for B, viz.,

(1, −1, 0)ᵀ = (1, −1, 0)ᵀ    and    (1, 0, −1)ᵀ = (1, −1, 0)ᵀ + (0, 1, −1)ᵀ;

in the reverse direction, the vectors for B can be written similarly in terms of those representing AB. The argument in the second fraction is analogous.

These fractions have maximum resolution 1. The fact that the main effect of A is lost makes both of these fractions undesirable in practice. It turns out that in a 2 × 3 experiment (or in any a × b experiment in which a and b are relatively prime) there is no fraction that preserves both main effects – that is, no fraction has resolution 2.

The 2 × 2 × 2 (or 2³) experiment


This is a "two-level" experiment with factors A, B and C. In such experiments the factor levels are often denoted by 0 and 1, for reasons explained below. A treatment combination is then denoted by an ordered triple such as 101 (more formally, (1, 0, 1), denoting the cell in which A and C are at level "1" and B is at level "0"). The following table lists the eight cells of the full 2 × 2 × 2 factorial experiment, along with a contrast vector representing each effect, including a three-factor interaction:

cell     A     B     C    AB    AC    BC   ABC
000      1     1     1     1     1     1     1
001      1     1    −1     1    −1    −1    −1
010      1    −1     1    −1     1    −1    −1
011      1    −1    −1    −1    −1     1     1
100     −1     1     1    −1    −1     1    −1
101     −1     1    −1    −1     1    −1     1
110     −1    −1     1     1    −1    −1     1
111     −1    −1    −1     1     1     1    −1

Suppose that only the fraction consisting of the cells 000, 011, 101, and 110 is observed. The original contrast vectors, when restricted to these cells, are now 4 × 1, and can be seen by looking at just those four rows of the table. (Sorting the table on the ABC column will bring these rows together and make the restricted contrast vectors easier to see. Sorting twice puts them at the top.) The following can be observed concerning these restricted vectors:

  • The ABC column consists just of the constant 1 repeated four times.
  • The other columns are contrast vectors, having two 1's and two −1's.
  • The columns for A and BC are equal. The same holds for B and AC, and for C and AB.
  • All other pairs of columns are orthogonal. For example, the column for A is orthogonal to that for B, for C, for AB, and for AC, as one can see by computing dot products.

Thus

  • the ABC interaction is completely lost in the fraction;
  • the other effects are preserved in the fraction;
  • the effects A and BC are completely aliased with each other, as are B and AC, and C and AB.
  • all other pairs of effects are unaliased. For example, A is unaliased with both B and C and with the AB and AC interactions.

Now suppose instead that the complementary fraction {001, 010, 100, 111} is observed. The same effects as before are lost or preserved, and the same pairs of effects as before are mutually unaliased. Moreover, A and BC are still aliased in this fraction since the A and BC vectors are negatives of each other, and similarly for B and AC and for C and AB. Both of these fractions thus have maximum resolution 3.
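
The claims about this example can be verified mechanically. A minimal sketch (Python with NumPy; the ±1 coding follows the table above, and the helper names are illustrative) that rebuilds the 2 × 2 × 2 table, restricts it to the fraction {000, 011, 101, 110}, and checks the aliasing pattern:

```python
import numpy as np
from itertools import product

# Build the 2^3 table of contrast vectors, coding level 0 as +1 and level 1 as -1
# (the convention used in the table above).
cells = ["".join(map(str, t)) for t in product((0, 1), repeat=3)]
signs = {c: [1 if d == "0" else -1 for d in c] for c in cells}

effects = ["A", "B", "C", "AB", "AC", "BC", "ABC"]

def column(effect, cell):
    """Entry of the contrast-vector table: product of the factor signs in the effect."""
    a, b, c = signs[cell]
    vals = {"A": a, "B": b, "C": c}
    out = 1
    for letter in effect:
        out *= vals[letter]
    return out

fraction = ["000", "011", "101", "110"]
restricted = {e: np.array([column(e, c) for c in fraction]) for e in effects}

print(restricted["ABC"])                                   # constant: ABC is completely lost
print(np.array_equal(restricted["A"], restricted["BC"]))   # True: A is aliased with BC
print(restricted["A"] @ restricted["B"])                   # 0: A and B are unaliased
```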

Aliasing in regular fractions


The two half-fractions of a 2 × 2 × 2 factorial experiment described above are of a special kind: Each is the solution set of a linear equation using modular arithmetic. More exactly:

  • The fraction {000, 011, 101, 110} is the solution set of the equation t₁ + t₂ + t₃ = 0 (mod 2), where t₁, t₂ and t₃ denote the levels of factors A, B and C. For example, 011 is a solution because 0 + 1 + 1 ≡ 0 (mod 2).
  • Similarly, the fraction {001, 010, 100, 111} is the solution set of the equation t₁ + t₂ + t₃ = 1 (mod 2).

Such fractions are said to be regular. This idea applies to fractions of "classical" sᵏ designs, that is, sᵏ (or "symmetric") factorial designs in which the number of levels, s, of each of the k factors is a prime or the power of a prime.

A fractional factorial design is regular if it is the solution set of a system of one or more equations of the form

c₁t₁ + c₂t₂ + ⋯ + cₖtₖ = b,

where the equation is modulo s if s is prime, and is in the finite field GF(s) if s is a power of a prime.[note 5] Such equations are called defining equations[33] of the fraction. When the defining equation or equations are homogeneous, the fraction is said to be principal.

One defining equation yields a fraction of size sᵏ⁻¹, two independent equations a fraction of size sᵏ⁻², and so on. Such fractions are generally denoted as sᵏ⁻ᵖ designs. The half-fractions described above are 2³⁻¹ designs. The notation often includes the resolution as a subscript, in Roman numerals; the above fractions are thus 2³⁻¹_III designs.
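
A regular fraction can be generated directly from its defining equations by enumerating the treatment combinations and keeping the solutions. The sketch below (Python; function and variable names are illustrative) reproduces the two half-fractions of the 2 × 2 × 2 example, assuming s is prime so that ordinary modular arithmetic applies:

```python
from itertools import product

def regular_fraction(s, k, equations):
    """Solution set of the defining equations; each equation is (coefficients, constant),
    interpreted modulo s (s prime)."""
    runs = []
    for t in product(range(s), repeat=k):
        if all(sum(c * x for c, x in zip(coeffs, t)) % s == b for coeffs, b in equations):
            runs.append("".join(map(str, t)))
    return runs

# The 2^(3-1) half-fraction defined by t1 + t2 + t3 = 0 (mod 2):
print(regular_fraction(2, 3, [([1, 1, 1], 0)]))   # ['000', '011', '101', '110']

# Its complement, defined by t1 + t2 + t3 = 1 (mod 2):
print(regular_fraction(2, 3, [([1, 1, 1], 1)]))   # ['001', '010', '100', '111']
```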

Associated to each expression c₁t₁ + c₂t₂ + ⋯ + cₖtₖ is another, namely the word A₁^c₁ A₂^c₂ ⋯ Aₖ^cₖ, which rewrites the coefficients as exponents. Such expressions are called "words", a term borrowed from group theory. (In a particular example where k is a specific number, the letters A, B, C, … are used rather than A₁, A₂, A₃, ….) These words can be multiplied and raised to powers, where the word I acts as a multiplicative identity, and they thus form an abelian group G, known as the effects group.[34] When s is prime, one has Wˢ = I for every element (word) W; something similar holds in the prime-power case. In 2ᵏ factorial experiments, each element of G represents a main effect or interaction. In sᵏ experiments with s > 2, each one-letter word represents the main effect of that factor, while longer words represent components of interaction.[35][36][37] An example below illustrates this with s = 3.

To each defining expression (the left-hand side of a defining equation) corresponds a defining word. The defining words generate a subgroup H of G that is variously called the alias subgroup,[34] the defining contrast subgroup,[38] or simply the defining subgroup of the fraction. Each element of H is a defining word since it corresponds to a defining equation, as one can show.[39] The effects represented by the defining words are completely lost in the fraction while all other effects are preserved. If H = {I, W₁, W₂, …}, say, then the equation[note 6]

I = W₁ = W₂ = ⋯

is called the defining relation of the fraction.[41][42][43][44][45] This relation is used to determine the aliasing structure of the fraction: If a given effect is represented by the word W, then its aliases are computed by multiplying the defining relation by W, viz.,

W = WW₁ = WW₂ = ⋯,

where the products WW₁, WW₂, … are then simplified. This relation indicates complete (not partial) aliasing, and W is unaliased with all other effects in G.
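
The group computation itself is simple bookkeeping with exponent vectors modulo s. A minimal sketch (Python, for prime s; the helper names are illustrative) that multiplies words and lists the aliases of an effect under a set of defining words:

```python
def multiply(w1, w2, s):
    """Multiply two words given as exponent vectors, reducing exponents modulo s."""
    return tuple((a + b) % s for a, b in zip(w1, w2))

def word_name(w, letters="ABCDEFG"):
    """Render an exponent vector such as (0, 1, 1) as a word such as 'BC'."""
    name = "".join(l if e == 1 else f"{l}^{e}" for l, e in zip(letters, w) if e)
    return name or "I"

def aliases(word, defining_words, s):
    """Multiply a word by each defining word; in a regular fraction these products
    are exactly the effects completely aliased with it."""
    return [word_name(multiply(word, d, s)) for d in defining_words]

# The 2^(3-1) fraction with defining word ABC (exponent vector (1, 1, 1), s = 2):
print(aliases((1, 0, 0), [(1, 1, 1)], 2))   # ['BC']  i.e.  A = BC
print(aliases((0, 1, 0), [(1, 1, 1)], 2))   # ['AC']  i.e.  B = AC
```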

Example 1


In either of the 2³⁻¹ fractions described above, the defining word is ABC, since the exponents on these letters are the coefficients of t₁ + t₂ + t₃. The ABC effect is completely lost in the fraction, and the defining subgroup H is simply {I, ABC}, since squaring does not generate new elements ((ABC)² = I). The defining relation is thus

I = ABC,

and multiplying both sides by A gives A = A²BC, which simplifies to

A = BC,

the alias relation seen earlier. Similarly, B = AC and C = AB. Note that multiplying both sides of the defining relation by the remaining effects does not give any new alias relations.

For comparison, the 2³⁻¹ fraction with defining equation t₁ + t₂ = 0 has the defining word AB (that is, A¹B¹C⁰). The effect AB is completely lost, and the defining relation is I = AB. Multiplying this by A, by C, and by AC gives the alias relations A = B, C = ABC, and AC = BC among the six remaining effects. This fraction only has resolution 2 since all effects (except AB) are preserved but two main effects are aliased. Finally, solving the defining equation t₁ + t₂ = 0 yields the fraction {000, 001, 110, 111}. One may verify all of this by sorting the table above on column AB. The use of arithmetic modulo 2 explains why the factor levels in such designs are labeled 0 and 1.

Example 2


In a 3-level design, factor levels are denoted 0, 1 and 2, and arithmetic is modulo 3. If there are four factors, say A, B, C and D, the effects group G will have the relations

A³ = B³ = C³ = D³ = I.

From these it follows, for example, that A⁴ = A and A⁵ = A². A defining equation such as t₁ + t₂ + t₃ + 2t₄ = 0 would produce a regular 1/3-fraction of the 81 (= 3⁴) treatment combinations, and the corresponding defining word would be ABCD². Since its powers are

(ABCD²)² = A²B²C²D    and    (ABCD²)³ = I,

the defining subgroup H would be {I, ABCD², A²B²C²D}, and so the fraction would have defining relation

I = ABCD² = A²B²C²D.

Multiplying by A, for example, yields the aliases

A = A²BCD² = B²C²D.

For reasons explained elsewhere,[46] though, all powers of a defining word represent the same effect, and the convention is to choose that power whose leading exponent is 1. Squaring the latter two expressions does the trick[47] and gives the alias relations

A = AB²C²D = BCD².

Twelve other sets of three aliased effects are given by Wu and Hamada.[48] Examining all of these reveals that, like A, main effects are unaliased with each other and with two-factor effects, although some two-factor effects are aliased with each other. This means that this fraction has maximum resolution 4, and so is of type 3⁴⁻¹_IV.

The effect BCD² is one of 4 components of the B × C × D interaction, while AB²C²D is one of 8 components of the A × B × C × D interaction. In a 3-level design, each component of interaction carries 2 degrees of freedom.
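
The same exponent arithmetic, together with the normalization to leading exponent 1 described above, reproduces the alias set of A. A short sketch (Python; it takes the defining subgroup from this example as given, and the helper names are illustrative):

```python
def multiply(w1, w2, s):
    return tuple((a + b) % s for a, b in zip(w1, w2))

def power(w, n, s):
    return tuple((n * e) % s for e in w)

def normalize(w, s=3):
    """Replace a word by the power whose leading (first nonzero) exponent is 1."""
    lead = next((e for e in w if e), 0)
    if lead == 0:
        return w
    inv = pow(lead, -1, s)      # modular inverse of the leading exponent (Python 3.8+)
    return power(w, inv, s)

def name(w, letters="ABCD"):
    return "".join(l if e == 1 else f"{l}^{e}" for l, e in zip(letters, w) if e) or "I"

# 3^(4-1) fraction with defining subgroup {I, ABCD^2, A^2B^2C^2D}:
H = [(1, 1, 1, 2), (2, 2, 2, 1)]
A = (1, 0, 0, 0)
alias_set = [name(normalize(multiply(A, d, 3))) for d in H]
print(alias_set)   # ['AB^2C^2D', 'BCD^2']  i.e.  A = AB^2C^2D = BCD^2
```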

Example 3


A 2⁵⁻² design (1/4 of a 2⁵ design) may be created by solving two equations in 5 unknowns, say

t₁ + t₂ + t₄ = 1    and    t₁ + t₃ + t₅ = 1

modulo 2. The fraction has eight treatment combinations, such as 10000, 00110 and 11111, and is displayed in the article on fractional factorial designs.[note 7] Here the coefficients in the two defining equations give defining words ABD and ACE. Setting I = ABD and multiplying through by D gives the alias relation D = AB. The second defining word similarly gives E = AC. The article uses these two aliases to describe an alternate method of construction of the fraction.

The defining subgroup H has one more element, namely the product (ABD)(ACE) = BCDE, making use of the fact that A² = I. The extra defining word BCDE is known as the generalized interaction of ABD and ACE,[49] and corresponds to the equation t₂ + t₃ + t₄ + t₅ = 0, which is also satisfied by the fraction. With this word included, the full defining relation is

I = ABD = ACE = BCDE

(these are the four elements of the defining subgroup), from which all the alias relations of this fraction can be derived – for example, multiplying through by A yields

A = BD = CE = ABCDE.

Continuing this process yields six more alias sets, each containing four effects. An examination of these sets reveals that main effects are not aliased with each other, but are aliased with two-factor interactions. This means that this fraction has maximum resolution 3. A quicker way to determine the resolution of a regular fraction is given below.
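
The generalized interaction and the alias sets can be generated the same way. A brief sketch (Python, arithmetic modulo 2; the names are illustrative) for the defining words used in this example:

```python
def multiply(w1, w2):
    """Multiply two words given as exponent vectors modulo 2."""
    return tuple((a + b) % 2 for a, b in zip(w1, w2))

def name(w, letters="ABCDE"):
    return "".join(l for l, e in zip(letters, w) if e) or "I"

ABD = (1, 1, 0, 1, 0)
ACE = (1, 0, 1, 0, 1)
BCDE = multiply(ABD, ACE)
print(name(BCDE))                                  # 'BCDE', the generalized interaction

defining = [ABD, ACE, BCDE]
A = (1, 0, 0, 0, 0)
print([name(multiply(A, d)) for d in defining])    # ['BD', 'CE', 'ABCDE']
```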

It is notable that the alias relations of the fraction depend only on the left-hand side of the defining equations, not on their constant terms. For this reason, some authors will restrict attention to principal fractions "without loss of generality", although the reduction to the principal case often requires verification.[51]

Determining the resolution of a regular fraction


The length of a word in the effects group is defined to be the number of letters in its name, not counting repetition. For example, the length of the word AB²C is 3.[note 8]

Theorem —  The maximum resolution of a regular fractional design is equal to the minimum length of a defining word.[52][53]

Using this result, one immediately gets the resolution of the preceding examples without computing alias relations:

  • In the 2³⁻¹ fraction with defining word ABC, the maximum resolution is 3 (the length of that word), while the fraction with defining word AB has maximum resolution 2.
  • The defining words of the 3⁴⁻¹ fraction were ABCD² and A²B²C²D, both of length 4, so that the fraction has maximum resolution 4, as indicated.
  • In the 2⁵⁻² fraction with defining words ABD and ACE, the maximum resolution is 3, which is the shortest "wordlength" (the third defining word, BCDE, has length 4).
One could also construct a 2⁵⁻² fraction from the defining words ABCD and ABCE, but the defining subgroup H will also include DE, their product, and so the fraction will only have resolution 2 (the length of DE). This is true starting with any two words of length 4. Thus resolution 3 is the best one can hope for in a fraction of type 2⁵⁻².
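
This procedure is easy to automate: generate the whole defining subgroup from the chosen words and take the minimum wordlength. A sketch (Python, for prime s; illustrative helper names) that reproduces the resolutions quoted above:

```python
from itertools import product

def multiply(w1, w2, s=2):
    return tuple((a + b) % s for a, b in zip(w1, w2))

def defining_subgroup(generators, s=2):
    """All products of powers of the generator words (the defining subgroup)."""
    k = len(generators[0])
    subgroup = set()
    for powers in product(range(s), repeat=len(generators)):
        w = tuple([0] * k)
        for g, p in zip(generators, powers):
            for _ in range(p):
                w = multiply(w, g, s)
        subgroup.add(w)
    return subgroup

def resolution(generators, s=2):
    """Minimum number of letters appearing in any non-identity defining word."""
    words = [w for w in defining_subgroup(generators, s) if any(w)]
    return min(sum(1 for e in w if e) for w in words)

# 2^(5-2) fraction generated by the words ABD and ACE:
print(resolution([(1, 1, 0, 1, 0), (1, 0, 1, 0, 1)]))   # 3

# Two words of length 4, e.g. ABCD and ABCE: their product DE forces resolution 2.
print(resolution([(1, 1, 1, 1, 0), (1, 1, 1, 0, 1)]))   # 2
```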

As these examples indicate, one must consider all the elements of the defining subgroup in applying the theorem above. This theorem is often taken to be a definition of resolution,[54][55] but the Box-Hunter definition given earlier applies to arbitrary fractional designs and so is more general.

Aliasing in general fractions


Nonregular fractions are common, and have certain advantages. For example, they are not restricted to having size a power of s, where s is a prime or prime power. While some methods have been developed to deal with aliasing in particular nonregular designs, no overall algebraic scheme has emerged.

There is a universal combinatorial approach, however, going back to Rao.[56][57] If the treatment combinations of the fraction are written as rows of a table, that table is an orthogonal array. These rows are often referred to as "runs". The columns will correspond to the k factors, and the entries of the table will simply be the symbols used for factor levels, and need not be numbers. The number of levels need not be prime or prime-powered, and they may vary from factor to factor, so that the table may be a mixed-level array. In this section fractional designs are allowed to be mixed-level unless explicitly restricted.

A key parameter of an orthogonal array is its strength, the definition of which is given in the article on orthogonal arrays. One may thus refer to the strength of a fractional design. Two important facts flow immediately from its definition:

  • If an array (or fraction) has strength t, then it also has strength t′ for every t′ with 1 ≤ t′ < t. The array's maximum strength is of particular importance.
  • In a fixed-level array, all factors having s levels, the number of runs is a multiple of sᵗ, where t is the strength. Here s need not be a prime or prime power.
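
Strength can be checked directly from its definition: in every set of t columns, each combination of levels must occur equally often. A small sketch (Python; function names are illustrative) applied to the 2³⁻¹ half-fraction viewed as an orthogonal array:

```python
from itertools import combinations, product
from collections import Counter

def has_strength(array, t, levels):
    """True if every choice of t columns contains each level combination equally often."""
    n_runs, k = len(array), len(array[0])
    for cols in combinations(range(k), t):
        counts = Counter(tuple(row[c] for c in cols) for row in array)
        expected = n_runs / len(levels) ** t
        if any(counts[combo] != expected for combo in product(levels, repeat=t)):
            return False
    return True

# The 2^(3-1) half-fraction {000, 011, 101, 110} as an orthogonal array:
oa = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]
print(has_strength(oa, 2, (0, 1)))   # True: strength 2, hence resolution 3
print(has_strength(oa, 3, (0, 1)))   # False: four runs cannot have strength 3
```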

To state the next result, it is convenient to enumerate the factors of the experiment by 1 through k, and to let each nonempty subset I of {1, …, k} correspond to a main effect or interaction in the following way: {i} corresponds to the main effect of factor i, {i, j} corresponds to the interaction of factors i and j, and so on.

The Fundamental Theorem of Aliasing[58] —  Consider a fraction of strength t on k factors. Let I and J be nonempty subsets of {1, …, k}.

  1. If |I| ≤ t, then the effect corresponding to I is preserved in the fraction.[59]
  2. If I ≠ J and |I ∪ J| ≤ t, then the effects corresponding to I and J are unaliased in the fraction.

Example: Consider a fractional factorial design with factors 1, …, k and maximum strength 3. Then:

  1. All effects up to three-factor interactions are preserved in the fraction.
  2. Main effects are unaliased with each other and with two-factor interactions.
  3. Two-factor interactions are unaliased with each other if they share a factor. For example, the {1, 2} and {1, 3} interactions are unaliased, but the {1, 2} and {3, 4} interactions may be at least partly aliased, as the set {1, 2, 3, 4} contains 4 elements but the strength of the fraction is only 3.

The Fundamental Theorem has a number of important consequences. In particular, it follows almost immediately that if a fraction has strength t then it has resolution t + 1. With additional assumptions, a stronger conclusion is possible:

Theorem[60] —  If a fraction has maximum strength t and maximum resolution R, then R = t + 1.

This result replaces the group-theoretic condition (minimum wordlength) in regular fractions with a combinatorial condition (maximum strength) in arbitrary ones.

Example. An important class of nonregular two-level designs are Plackett-Burman designs. As with all fractions constructed from Hadamard matrices, they have strength 2, and therefore resolution 3.[61] The smallest such design has 11 factors and 12 runs (treatment combinations), and is displayed in the article on such designs. Since 2 is its maximum strength,[note 9] 3 is its maximum resolution. Some detail about its aliasing pattern is given in the next section.

Partial aliasing


In regular 2ᵏ⁻ᵖ fractions there is no partial aliasing: Each effect is either preserved or completely lost, and effects are either unaliased or completely aliased. The same holds in regular sᵏ⁻ᵖ experiments with s > 2 if one considers only main effects and components of interaction. However, a limited form of partial aliasing occurs in the latter. For example, in the 3⁴⁻¹ design described above the overall A × B × C × D interaction is partly lost since its ABCD² component is completely lost in the fraction while its other components (such as ABCD) are preserved. Similarly, the main effect of A is partly aliased with the B × C × D interaction since A is completely aliased with its BCD² component and unaliased with the others.

In contrast, partial aliasing is uncontrolled and pervasive in nonregular fractions. In the 12-run Plackett-Burman design described in the previous section, for example, with factors labeled A through K, the only complete aliasing is between "complementary effects" such as A and BCDEFGHIJK or AB and CDEFGHIJK. Here the main effect of factor A is unaliased with the other main effects and with the two-factor interactions involving A, but it is partly aliased with 45 of the 55 two-factor interactions, 120 of the 165 three-factor interactions, and 150 of the 330 four-factor interactions. This phenomenon is generally described as complex aliasing.[62] Similarly, 924 effects are preserved in the fraction, 1122 effects are partly lost, and only one (the top-level interaction ABCDEFGHIJK) is completely lost.

Analysis of variance (ANOVA)


Wu and Hamada[63] analyze a data set collected on the 3⁴⁻¹ fractional design described above. Significance testing in the analysis of variance (ANOVA) requires that the error sum of squares and the degrees of freedom for error be nonzero. In order to ensure this, two design decisions have been made:

ANOVA in the 3⁴⁻¹ design
Source      df
A            2
B            2
C            2
D            2
AB = CD²     2
AB²          2
AC = BD²     2
AC²          2
AD           2
AD² = BC     2
BC²          2
BD           2
CD           2
Error       54
Total       80
  • Interactions of three or four factors have been assumed absent. This decision is consistent with the effect hierarchy principle.[64]
  • Replication (inclusion of repeated observations) is necessary. In this case, three observations were made on each of the 27 treatment combinations in the fraction, for a total of 81 observations.

The accompanying table shows just two columns of an ANOVA table[65] for this experiment. Only main effects and components of two-factor interactions are listed, including three pairs of aliases. Aliasing between some two-factor interactions is expected, since the maximum resolution of this design is 4.

This experiment studied two response variables. In both cases, some aliased interactions were statistically significant. This poses a challenge of interpretation, since without more information or further assumptions it is impossible to determine which interaction is responsible for significance. In some instances there may be a theoretical basis to make this determination.[66]

This example shows one advantage of fractional designs. The full 3⁴ factorial experiment has 81 treatment combinations, but taking one observation on each of these would leave no degrees of freedom for error. The fractional design also uses 81 observations, but on just 27 treatment combinations, in such a way that one can make inferences on main effects and on (most) two-factor interactions. This may be sufficient for practical purposes.
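
The degrees-of-freedom bookkeeping behind the table can be verified with a few lines of arithmetic (a simple check using only the counts quoted above):

```python
observations = 3 * 27            # three replicates of the 27 runs in the fraction
total_df = observations - 1      # 80
listed_sources = 13              # 4 main effects + 9 two-factor components or alias pairs
model_df = listed_sources * 2    # each listed source carries 2 degrees of freedom
error_df = total_df - model_df
print(total_df, model_df, error_df)   # 80 26 54
```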

History


The first statistical use of the term "aliasing" in print is the 1945 paper by Finney,[67] which dealt with regular fractions with 2 or 3 levels. The term was imported into signal processing theory a few years later, possibly influenced by its use in factorial experiments; the history of that usage is described in the article on aliasing in signal processing.

The 1961 paper in which Box and Hunter introduced the concept of "resolution" dealt with regular two-level designs, but their initial definition[5] makes no reference to lengths of defining words and so can be understood rather generally. Rao actually makes implicit use of resolution in his 1947 paper[68] introducing orthogonal arrays, reflected in an important parameter inequality that he develops. He distinguishes effects in full and fractional designs by using different symbols, but makes no mention of aliasing.

The term confounded is often used as a synonym for aliased, and so one must read the literature carefully. The former term "is generally reserved for the indistinguishability of a treatment contrast and a block contrast",[69] that is, for confounding with blocks. Kempthorne has shown[70] how confounding with blocks in a k-factor experiment may be viewed as aliasing in a fractional design with more than k factors, but it is unclear whether one can do the reverse.


Notes

  1. ^ The number of treatment combinations grows exponentially with the number of factors in the experiment.[2]
  2. ^ Compare the example in the article on interaction.
  3. ^ In a more formal exposition, the restricted vectors of each effect span a vector space, and two effects are completely aliased in the fraction if these two spaces are equal.[26]
  4. ^ A 1-factor effect is the main effect of a single factor. For p ≥ 2, a p-factor effect is an interaction between p factors. The 0-factor effect is the effect of the grand mean, described below.
  5. ^ The case that s is prime is mentioned separately only for clarity, since the set of integers modulo s is itself a finite field, though often denoted Zₛ rather than GF(s).
  6. ^ The equalities in this equation are a convention, and stand for a kind of equivalence of group elements.[40] In a more formal exposition, they represent the actual equality of the spaces of restricted vectors, where the identity element I stands for the space of constant vectors.
  7. ^ That article uses alternate notation for treatment combinations; for example, 10000, 00110 and 11111 are expressed as a, cd and abcde.
  8. ^ This differs from the definition used in group theory, which counts repetitions. According to the latter view, the length of AB²C is 4.
  9. ^ The strength cannot be 3 since 12 is not a multiple of 2³ = 8.

Citations

  1. ^ Cheng (2019, p. 5)
  2. ^ Mukerjee & Wu (2006, pp. 1–2)
  3. ^ Kempthorne (1947, p. 390)
  4. ^ Dean, Voss & Draguljić (2017, p. 495)
  5. ^ a b c Box & Hunter (1961, p. 319)
  6. ^ Jankowski et al. (2016)
  7. ^ Kempthorne (1947, section 21.7)
  8. ^ Cornell (2006, sections 7.6-7.7)
  9. ^ Hamada & Wu (1992, examples 1 and 3)
  10. ^ Box, Hunter & Hunter (2005, sections 6.3 and 6.4)
  11. ^ Dean, Voss & Draguljić (2017, chapter 7)
  12. ^ Hamada & Wu (1992, example 2)
  13. ^ Nair et al. (2008)
  14. ^ Collins et al. (2009)
  15. ^ Kempthorne (1947, p. 390)
  16. ^ Dean & Lewis (2006)
  17. ^ Cheng (2019, p. 117)
  18. ^ Hocking (1985, p. 73). Hocking and others use the term "population mean" for expected value.
  19. ^ Hocking (1985, pp. 140–141)
  20. ^ Kuehl (2000, pp. 186–187)
  21. ^ Bose (1947, p. 110)
  22. ^ Beder (2022, p. 161)
  23. ^ Beder (2022, Example 5.21)
  24. ^ Kuehl (2000, p. 202)
  25. ^ Cheng (2019, p. 78)
  26. ^ Beder (2022, Definition 6.4)
  27. ^ Finney (1945, p. 293)
  28. ^ Bush (1950, p. 3)
  29. ^ Mukerjee & Wu (2006, Theorem 2.4.1)
  30. ^ Rao (1947, p. 135)
  31. ^ Searle (1987, p. 30)
  32. ^ Beder (2022, p. 165)
  33. ^ Cheng (2019, p. 141)
  34. ^ a b Finney (1945, p. 293)
  35. ^ Montgomery (2013, p.397ff)
  36. ^ Wu & Hamada (2009, Section 6.3)
  37. ^ Beder (2022, p. 188)
  38. ^ Wu & Hamada (2009, p. 209)
  39. ^ Beder (2022, p. 224)
  40. ^ Beder (2022, p. 234)
  41. ^ Cheng (2019, p. 140)
  42. ^ Dean, Voss & Draguljić (2017, p. 496)
  43. ^ Montgomery (2013, p. 322)
  44. ^ Mukerjee & Wu (2006, p. 26)
  45. ^ Wu & Hamada (2009, p. 207)
  46. ^ Wu & Hamada (2009, p. 272)
  47. ^ This uses relations such as A⁴ = A.
  48. ^ Wu & Hamada (2009, p. 275)
  49. ^ Barnard (1936, p. 197)
  50. ^ Beder (2022, proof of Proposition 6.6)
  51. ^ See, for example,[50].
  52. ^ Raghavarao (1988, p. 278)
  53. ^ Beder (2022, p. 238). The identity element of the defining subgroup is not a defining word.
  54. ^ Cheng (2019, p. 147)
  55. ^ Wu & Hamada (2009, p. 210)
  56. ^ Rao (1947)
  57. ^ Rao (1973, p. 354)
  58. ^ Beder (2022, Theorem 6.43)
  59. ^ Bush (1950, p. 3)
  60. ^ Beder (2022, Theorem 6.51)
  61. ^ Hedayat, Sloane & Stufken (1999, Theorem 7.5)
  62. ^ Wu & Hamada (2009, Chapter 9)
  63. ^ Wu & Hamada (2009, Section 6.5)
  64. ^ Hamada & Wu (1992, p. 132)
  65. ^ Wu & Hamada (2009, Tables 6.6 and 6.7)
  66. ^ Wu & Hamada (2009, pp. 279–280)
  67. ^ Finney (1945, p. 292)
  68. ^ Rao (1947)
  69. ^ Dean, Voss & Draguljić (2017, p. 495)
  70. ^ Kempthorne (1947, pp. 264–268)

References

  • Barnard, Mildred M. (1936). "Enumeration of the confounded arrangements in the   factorial designs". Supplement to the Journal of the Royal Statistical Society. 3: 195–202. doi:10.2307/2983671. JSTOR 2983671.
  • Beder, Jay H. (2022). Linear Models and Design. Cham, Switzerland: Springer. doi:10.1007/978-3-031-08176-7. ISBN 978-3-031-08175-0. S2CID 253542415.
  • Bose, R. C. (1947). "Mathematical theory of the symmetrical factorial design". Sankhya. 8: 107–166.
  • Box, G. E. P.; Hunter, J. S. (1961). "The 2ᵏ⁻ᵖ fractional factorial designs". Technometrics. 3: 311–351.
  • Bush, K. A. (1950). Orthogonal arrays (PhD thesis). University of North Carolina, Chapel Hill.
  • Cheng, Ching-Shui (2019). Theory of Factorial Design: Single- and Multi-Stratum Experiments. Boca Raton, Florida: CRC Press. ISBN 978-0-367-37898-1.
  • Dean, Angela; Lewis, Susan (2006). Screening: Methods for Experimentation in Industry, Drug Discovery, and Genetics. Cham, Switzerland: Springer. ISBN 978-0-387-28013-4.
  • Jankowski, Krzysztof J.; Budzyński, Wojciech S.; Załuski, Dariusz; Hulanicki, Piotr S.; Dubis, Bogdan (2016). "Using a fractional factorial design to evaluate the effect of the intensity of agronomic practices on the yield of different winter oilseed rape morphotypes". Field Crops Research. 188: 50–61. Bibcode:2016FCrRe.188...50J. doi:10.1016/j.fcr.2016.01.007. ISSN 1872-6852.