In statistics, a rank correlation is any of several statistics that measure an ordinal association: the relationship between rankings of different ordinal variables, or between different rankings of the same variable, where a "ranking" is the assignment of the ordering labels "first", "second", "third", etc. to different observations of a particular variable. A rank correlation coefficient measures the degree of similarity between two rankings and can be used to assess the significance of the relation between them. For example, two common nonparametric significance tests that use rank correlation are the Mann–Whitney U test and the Wilcoxon signed-rank test.

Context

If, for example, one variable is the identity of a college basketball program and another variable is the identity of a college football program, one could test for a relationship between the poll rankings of the two types of program: do colleges with a higher-ranked basketball program tend to have a higher-ranked football program? A rank correlation coefficient can measure that relationship, and a test of the significance of the rank correlation coefficient can show whether the measured relationship is small enough to be plausibly a coincidence.

If there is only one variable, the identity of a college football program, but it is subject to two different poll rankings (say, one by coaches and one by sportswriters), then the similarity of the two different polls' rankings can be measured with a rank correlation coefficient.

As another example, in a contingency table with low, medium, and high income as the row variable and educational level (no high school, high school, university) as the column variable,[1] a rank correlation measures the relationship between income and educational level.
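For ordered categories like these, one applicable coefficient is Goodman and Kruskal's γ (listed below among the common coefficients), which compares concordant and discordant pairs of observations. A minimal sketch in Python, with a contingency table invented purely for illustration:

```python
import numpy as np

# Invented 3x3 table: rows = income (low, medium, high),
# columns = education (no high school, high school, university).
table = np.array([
    [20, 10,  5],
    [10, 15, 10],
    [ 5, 10, 20],
])

# Count weighted concordant (C) and discordant (D) pairs of observations.
C = D = 0
n_rows, n_cols = table.shape
for i in range(n_rows):
    for j in range(n_cols):
        # Observations in cells strictly below and to the right are
        # concordant with those in cell (i, j).
        C += table[i, j] * table[i + 1:, j + 1:].sum()
        # Observations strictly below and to the left are discordant.
        D += table[i, j] * table[i + 1:, :j].sum()

gamma = (C - D) / (C + D)  # Goodman and Kruskal's gamma (ties ignored)
print(gamma)
```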

Correlation coefficients

Some of the more popular rank correlation statistics include

  1. Spearman's ρ
  2. Kendall's τ
  3. Goodman and Kruskal's γ
  4. Somers' D

An increasing rank correlation coefficient implies increasing agreement between rankings. The coefficient lies in the interval [−1, 1] and assumes the value:

  • 1 if the agreement between the two rankings is perfect; the two rankings are the same.
  • 0 if the rankings are completely independent.
  • −1 if the disagreement between the two rankings is perfect; one ranking is the reverse of the other.
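These boundary cases are easy to check against standard implementations. A minimal sketch using SciPy's `spearmanr` and `kendalltau` on invented rankings:

```python
from scipy.stats import kendalltau, spearmanr

ranking = [1, 2, 3, 4, 5]
same    = [1, 2, 3, 4, 5]   # identical ranking  -> coefficient 1
reverse = [5, 4, 3, 2, 1]   # reversed ranking   -> coefficient -1

rho, _ = spearmanr(ranking, same)
tau, _ = kendalltau(ranking, reverse)
print(rho)   #  1.0 (perfect agreement)
print(tau)   # -1.0 (perfect disagreement)
```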

Following Diaconis (1988), a ranking can be seen as a permutation of a set of objects. Thus we can look at observed rankings as data obtained when the sample space is (identified with) a symmetric group. We can then introduce a metric, making the symmetric group into a metric space. Different metrics will correspond to different rank correlations.
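One concrete choice is the Kendall tau distance, which counts the pairs on which two permutations disagree and is the metric associated with Kendall's τ. A minimal sketch (the helper name `kendall_tau_distance` is ours, for illustration):

```python
from itertools import combinations

def kendall_tau_distance(p, q):
    """Count the pairs that permutations p and q order differently.

    This is one example of a metric on the symmetric group; the
    rank correlation it corresponds to is Kendall's tau.
    """
    n = len(p)
    return sum(
        1
        for i, j in combinations(range(n), 2)
        if (p[i] - p[j]) * (q[i] - q[j]) < 0  # the pair (i, j) is discordant
    )

# Distance 0 for identical rankings; n(n-1)/2 = 6 for fully reversed ones.
print(kendall_tau_distance([1, 2, 3, 4], [1, 2, 3, 4]))  # 0
print(kendall_tau_distance([1, 2, 3, 4], [4, 3, 2, 1]))  # 6
```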

General correlation coefficient

Kendall (1970)[2] showed that his $\tau$ (tau) and Spearman's $\rho$ (rho) are particular cases of a general correlation coefficient.

Suppose we have a set of $n$ objects, which are being considered in relation to two properties, represented by $x$ and $y$, forming the sets of values $\{x_i\}_{i \le n}$ and $\{y_i\}_{i \le n}$. To any pair of individuals, say the $i$-th and the $j$-th, we assign an $x$-score, denoted by $a_{ij}$, and a $y$-score, denoted by $b_{ij}$. The only requirement for these functions is that they be anti-symmetric, so $a_{ij} = -a_{ji}$ and $b_{ij} = -b_{ji}$. (Note that in particular $a_{ij} = 0$ if $i = j$.) Then the generalized correlation coefficient $\Gamma$ is defined as

$$\Gamma = \frac{\sum_{i,j=1}^{n} a_{ij} b_{ij}}{\sqrt{\sum_{i,j=1}^{n} a_{ij}^2 \; \sum_{i,j=1}^{n} b_{ij}^2}}$$

Equivalently, if all coefficients are collected into matrices $A = (a_{ij})$ and $B = (b_{ij})$, with $A^{\mathsf T} = -A$ and $B^{\mathsf T} = -B$, then

$$\Gamma = \frac{\langle A, B \rangle_{\mathrm F}}{\|A\|_{\mathrm F} \, \|B\|_{\mathrm F}}$$

where $\langle A, B \rangle_{\mathrm F} = \sum_{i,j} a_{ij} b_{ij}$ is the Frobenius inner product and $\|A\|_{\mathrm F} = \sqrt{\langle A, A \rangle_{\mathrm F}}$ the Frobenius norm. In particular, the general correlation coefficient is the cosine of the angle between the matrices $A$ and $B$.
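The matrix form translates directly into code. A minimal sketch in Python with NumPy (the function name `general_gamma` is ours; the caller supplies the anti-symmetric score matrices):

```python
import numpy as np

def general_gamma(A, B):
    """Kendall's generalized correlation coefficient.

    A and B are anti-symmetric n-by-n score matrices, i.e.
    A.T == -A and B.T == -B.  Gamma is the Frobenius inner product
    of A and B divided by the product of their Frobenius norms:
    the cosine of the angle between the two matrices.
    """
    A = np.asarray(A, dtype=float)
    B = np.asarray(B, dtype=float)
    return np.sum(A * B) / (np.linalg.norm(A) * np.linalg.norm(B))
```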

Kendall's τ as a particular case

If $r_i$, $s_i$ are the ranks of the $i$-th member according to the $x$-quality and the $y$-quality respectively, then we can define

$$a_{ij} = \operatorname{sgn}(r_j - r_i), \qquad b_{ij} = \operatorname{sgn}(s_j - s_i)$$

The sum $\sum a_{ij} b_{ij}$ equals $2(n_c - n_d)$, where $n_c$ is the number of concordant pairs and $n_d$ the number of discordant pairs; the factor 2 arises because each unordered pair is counted once in each order (see Kendall tau rank correlation coefficient). The sum $\sum a_{ij}^2$ is just $n(n-1)$, the number of non-zero terms $a_{ij}$ (assuming no ties in the ranks), as is $\sum b_{ij}^2$. Thus in this case,

$$\Gamma = \frac{2(n_c - n_d)}{n(n-1)},$$

which is exactly Kendall's $\tau$.
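The correspondence can be verified numerically. A minimal sketch that builds the sign-score matrices for invented ranks and compares the result with SciPy's `kendalltau`:

```python
import numpy as np
from scipy.stats import kendalltau

r = np.array([1, 3, 2, 5, 4])  # ranks by the x-quality (invented data)
s = np.array([2, 1, 3, 4, 5])  # ranks by the y-quality (invented data)

# Sign scores: a_ij = sgn(r_j - r_i), b_ij = sgn(s_j - s_i).
A = np.sign(r[None, :] - r[:, None])
B = np.sign(s[None, :] - s[:, None])

gamma = np.sum(A * B) / (np.linalg.norm(A) * np.linalg.norm(B))
tau, _ = kendalltau(r, s)
print(gamma, tau)  # both equal 0.4, up to floating-point rounding
```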

Spearman's ρ as a particular case

If $r_i$, $s_i$ are the ranks of the $i$-th member according to the $x$-quality and the $y$-quality respectively, we may consider the matrices $A$, $B$ defined by

$$a_{ij} := r_j - r_i, \qquad b_{ij} := s_j - s_i$$

The sums $\sum a_{ij}^2$ and $\sum b_{ij}^2$ are equal, since both the $r_i$ and the $s_i$ range over $1, \ldots, n$. Hence

$$\Gamma = \frac{\sum_{i,j} (r_j - r_i)(s_j - s_i)}{\sum_{i,j} (r_j - r_i)^2}$$

To simplify this expression, let $d_i := r_i - s_i$ denote the difference in the ranks for each $i$. Further, let $U$ be a uniformly distributed discrete random variable on $\{1, 2, \ldots, n\}$. Since the ranks $r$ and $s$ are just permutations of $1, 2, \ldots, n$, we can view both as being random variables distributed like $U$. Using basic summation results from discrete mathematics, it is easy to see that for the uniformly distributed random variable $U$ we have $\mathbb{E}[U] = \frac{n+1}{2}$ and $\mathbb{E}[U^2] = \frac{(n+1)(2n+1)}{6}$, and thus $\operatorname{Var}[U] = \mathbb{E}[U^2] - \mathbb{E}[U]^2 = \frac{n^2-1}{12}$. Now, observing symmetries allows us to compute the parts of $\Gamma$ as follows:

$$\begin{aligned}
\frac{1}{n^2} \sum_{i,j} a_{ij} b_{ij}
&= \frac{1}{n^2} \sum_{i,j} (r_j - r_i)(s_j - s_i) \\
&= \frac{2}{n} \sum_i r_i s_i - 2\,\mathbb{E}[U]^2 \\
&= \frac{1}{n} \sum_i \left( r_i^2 + s_i^2 - d_i^2 \right) - 2\,\mathbb{E}[U]^2 \\
&= 2\,\mathbb{E}[U^2] - \frac{1}{n} \sum_i d_i^2 - 2\,\mathbb{E}[U]^2 \\
&= 2\operatorname{Var}[U] - \frac{1}{n} \sum_i d_i^2
\end{aligned}$$

and

$$\frac{1}{n^2} \sum_{i,j} a_{ij}^2 = \frac{1}{n^2} \sum_{i,j} (r_j - r_i)^2 = 2\operatorname{Var}[U] = \frac{1}{n^2} \sum_{i,j} b_{ij}^2$$

Hence

$$\Gamma = \frac{2\operatorname{Var}[U] - \frac{1}{n} \sum_i d_i^2}{2\operatorname{Var}[U]} = 1 - \frac{\sum_i d_i^2}{2n \operatorname{Var}[U]} = 1 - \frac{6 \sum_i d_i^2}{n(n^2 - 1)}$$

where $d_i = r_i - s_i$ is the difference between the ranks of the $i$-th member, which is exactly Spearman's rank correlation coefficient $\rho$.
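The closed-form difference formula can likewise be checked against SciPy's `spearmanr`. A minimal sketch with invented ranks:

```python
import numpy as np
from scipy.stats import spearmanr

r = np.array([1, 3, 2, 5, 4])  # ranks by the x-quality (invented data)
s = np.array([2, 1, 3, 4, 5])  # ranks by the y-quality (invented data)
n = len(r)

d = r - s                                        # rank differences d_i
rho_formula = 1 - 6 * np.sum(d**2) / (n * (n**2 - 1))
rho_scipy, _ = spearmanr(r, s)
print(rho_formula, rho_scipy)  # both equal 0.6, up to floating-point rounding
```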

Rank-biserial correlation

Gene Glass (1965) noted that the rank-biserial can be derived from Spearman's $\rho$. "One can derive a coefficient defined on X, the dichotomous variable, and Y, the ranking variable, which estimates Spearman's rho between X and Y in the same way that biserial r estimates Pearson's r between two normal variables" (p. 91). The rank-biserial correlation had been introduced nine years before by Edward Cureton (1956) as a measure of rank correlation when the ranks are in two groups.

Kerby simple difference formula

Dave Kerby (2014) recommended the rank-biserial as the measure to introduce students to rank correlation, because the general logic can be explained at an introductory level. The rank-biserial is the correlation used with the Mann–Whitney U test, a method commonly covered in introductory college courses on statistics. The data for this test consist of two groups, and for each member of the groups the outcome is ranked for the study as a whole.

Kerby showed that this rank correlation can be expressed in terms of two concepts: the percent of data that support a stated hypothesis, and the percent of data that do not support it. The Kerby simple difference formula states that the rank correlation can be expressed as the proportion of favorable evidence ($f$) minus the proportion of unfavorable evidence ($u$):

$$r = f - u$$

Example and interpretation

To illustrate the computation, suppose a coach trains long-distance runners for one month using two methods. Group A has 5 runners, and Group B has 4 runners. The stated hypothesis is that method A produces faster runners. The race to assess the results finds that the runners from Group A do indeed run faster, with the following ranks: 1, 2, 3, 4, and 6. The slower runners from Group B thus have ranks of 5, 7, 8, and 9.

The analysis is conducted on pairs, defined as a member of one group compared to a member of the other group. For example, the fastest runner in the study is a member of four pairs: (1,5), (1,7), (1,8), and (1,9). All four of these pairs support the hypothesis, because in each pair the runner from Group A is faster than the runner from Group B. There are a total of 20 pairs, and 19 pairs support the hypothesis. The only pair that does not support the hypothesis is the one formed by the runners with ranks 5 and 6, because in this pair the runner from Group B had the faster time. By the Kerby simple difference formula, 95% of the data support the hypothesis (19 of 20 pairs), and 5% do not support it (1 of 20 pairs), so the rank correlation is r = .95 − .05 = .90.
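The pair counting can be written out directly. A minimal sketch in Python using the ranks from this example:

```python
from itertools import product

group_a = [1, 2, 3, 4, 6]  # ranks of the five Group A runners
group_b = [5, 7, 8, 9]     # ranks of the four Group B runners

pairs = list(product(group_a, group_b))        # all 20 cross-group pairs
f = sum(a < b for a, b in pairs) / len(pairs)  # favorable: A runner is faster
u = sum(a > b for a, b in pairs) / len(pairs)  # unfavorable: B runner is faster

# Prints 0.95, 0.05, and 0.90 (up to floating-point rounding): r = f - u.
print(f, u, f - u)
```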

The maximum value for the correlation is r = 1, which means that 100% of the pairs favor the hypothesis. A correlation of r = 0 indicates that half the pairs favor the hypothesis and half do not; in other words, the sample groups do not differ in ranks, so there is no evidence that they come from two different populations. An effect size of r = 0 can be said to describe no relationship between group membership and the members' ranks.

References

  1. Kruskal, William H. (1958). "Ordinal Measures of Association". Journal of the American Statistical Association. 53 (284): 814–861. doi:10.2307/2281954. JSTOR 2281954.
  2. Kendall, Maurice G. (1970). Rank Correlation Methods (4th ed.). Griffin. ISBN 9780852641996.
