        sxy += xt*yt;
    }
    *r=sxy/(sqrt(sxx*syy)+TINY);
    *z=0.5*log((1.0+(*r)+TINY)/(1.0-(*r)+TINY));          /* Fisher's z transformation. */
    df=n-2;
    t=(*r)*sqrt(df/((1.0-(*r)+TINY)*(1.0+(*r)+TINY)));    /* Equation (14.5.5). */
    *prob=betai(0.5*df,0.5,df/(df+t*t));                  /* Student's t probability. */
    /* *prob=erfcc(fabs((*z)*sqrt(n-1.0))/1.4142136); */
    /* For large n, this easier computation of prob, using the short routine erfcc,
       would give approximately the same value. */
}

CITED REFERENCES AND FURTHER READING:

Dunn, O.J., and Clark, V.A. 1974, Applied Statistics: Analysis of Variance and Regression (New York: Wiley).
Hoel, P.G. 1971, Introduction to Mathematical Statistics, 4th ed. (New York: Wiley), Chapter 7.
von Mises, R. 1964, Mathematical Theory of Probability and Statistics (New York: Academic Press), Chapters IX(A) and IX(B).
Korn, G.A., and Korn, T.M. 1968, Mathematical Handbook for Scientists and Engineers, 2nd ed. (New York: McGraw-Hill), §19.7.
Norusis, M.J. 1982, SPSS Introductory Guide: Basic Statistics and Operations; and 1985, SPSS-X Advanced Statistics Guide (New York: McGraw-Hill).

14.6 Nonparametric or Rank Correlation

It is precisely the uncertainty in interpreting the significance of the linear correlation coefficient r that leads us to the important concepts of nonparametric or rank correlation. As before, we are given N pairs of measurements (x_i, y_i). Before, difficulties arose because we did not necessarily know the probability distribution function from which the x_i's or y_i's were drawn.

The key concept of nonparametric correlation is this: If we replace the value of each x_i by the value of its rank among all the other x_i's in the sample, that is, 1, 2, 3, ..., N, then the resulting list of numbers will be drawn from a perfectly known distribution function, namely uniformly from the integers between 1 and N, inclusive. Better than uniformly, in fact, since if the x_i's are all distinct, then each integer will occur precisely once. If some of the x_i's have identical values, it is conventional to assign to all these "ties" the mean of the ranks that they would have had if their values had been slightly different. This midrank will sometimes be an integer, sometimes a half-integer. In all cases the sum of all assigned ranks will be the same as the sum of the integers from 1 to N, namely N(N + 1)/2.
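For example, suppose N = 5 and the sorted x_i's are 0.5, 1.2, 1.2, 1.2, 3.0. The three tied values would have occupied ranks 2, 3, and 4 had they differed slightly, so each is assigned the midrank (2 + 3 + 4)/3 = 3; the resulting ranks 1, 3, 3, 3, 5 sum to 15 = 5 · 6/2, as promised.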
Of course we do exactly the same procedure for the y_i's, replacing each value by its rank among the other y_i's in the sample.

Now we are free to invent statistics for detecting correlation between uniform sets of integers between 1 and N, keeping in mind the possibility of ties in the ranks. There is, of course, some loss of information in replacing the original numbers by ranks. We could construct some rather artificial examples where a correlation could be detected parametrically (e.g., in the linear correlation coefficient r), but could not be detected nonparametrically. Such examples are very rare in real life, however, and the slight loss of information in ranking is a small price to pay for a very major advantage: When a correlation is demonstrated to be present nonparametrically, then it is really there! (That is, to a certainty level that depends on the significance chosen.) Nonparametric correlation is more robust than linear correlation, more resistant to unplanned defects in the data, in the same sort of sense that the median is more robust than the mean. For more on the concept of robustness, see §15.7.

As always in statistics, some particular choices of a statistic have already been invented for us and consecrated, if not beatified, by popular use. We will discuss two, the Spearman rank-order correlation coefficient (r_s), and Kendall's tau (τ).

Spearman Rank-Order Correlation Coefficient

Let R_i be the rank of x_i among the other x's, S_i be the rank of y_i among the other y's, ties being assigned the appropriate midrank as described above. Then the rank-order correlation coefficient is defined to be the linear correlation coefficient of the ranks, namely,

    r_s = \frac{\sum_i (R_i - \bar{R})(S_i - \bar{S})}
               {\sqrt{\sum_i (R_i - \bar{R})^2}\,\sqrt{\sum_i (S_i - \bar{S})^2}}            (14.6.1)

The significance of a nonzero value of r_s is tested by computing

    t = r_s \sqrt{\frac{N - 2}{1 - r_s^2}}                                                   (14.6.2)

which is distributed approximately as Student's distribution with N − 2 degrees of freedom. A key point is that this approximation does not depend on the original distribution of the x's and y's; it is always the same approximation, and always pretty good.

It turns out that r_s is closely related to another conventional measure of nonparametric correlation, the so-called sum squared difference of ranks, defined as

    D = \sum_{i=1}^{N} (R_i - S_i)^2                                                         (14.6.3)

(This D is sometimes denoted D**, where the asterisks are used to indicate that ties are treated by midranking.)

When there are no ties in the data, then the exact relation between D and r_s is

    r_s = 1 - \frac{6D}{N^3 - N}                                                             (14.6.4)
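To make equations (14.6.3) and (14.6.4) concrete, here is a minimal stand-alone sketch, not one of the book's routines: it ranks two small, tie-free arrays by brute force (the helper rank_of and the data values are purely illustrative) and then forms r_s from D. The full routine spear below handles ties and the significance tests as well.

#include <stdio.h>

#define N 5

/* Brute-force rank of x[i] among x[0..N-1]; valid only when all values are distinct. */
static int rank_of(const double x[], int i)
{
    int j, r = 1;
    for (j = 0; j < N; j++)
        if (x[j] < x[i]) r++;
    return r;
}

int main(void)
{
    double x[N] = {1.1, 2.3, 3.1, 4.8, 5.2};   /* hypothetical data, no ties */
    double y[N] = {2.0, 1.0, 4.0, 3.0, 5.0};
    double d = 0.0, rs;
    int i;

    for (i = 0; i < N; i++) {
        double diff = rank_of(x, i) - rank_of(y, i);
        d += diff*diff;                             /* D of equation (14.6.3) */
    }
    rs = 1.0 - 6.0*d/((double)N*N*N - N);           /* equation (14.6.4) */
    printf("D = %g, r_s = %g\n", d, rs);
    return 0;
}

For these particular values D = 4, so r_s = 1 − 24/120 = 0.8.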
When there are ties, then the exact relation is slightly more complicated: Let f_k be the number of ties in the kth group of ties among the R_i's, and let g_m be the number of ties in the mth group of ties among the S_i's. Then it turns out that

    r_s = \frac{1 - \frac{6}{N^3 - N}\left[ D + \frac{1}{12}\sum_k (f_k^3 - f_k)
                + \frac{1}{12}\sum_m (g_m^3 - g_m) \right]}
               {\left[ 1 - \frac{\sum_k (f_k^3 - f_k)}{N^3 - N} \right]^{1/2}
                \left[ 1 - \frac{\sum_m (g_m^3 - g_m)}{N^3 - N} \right]^{1/2}}               (14.6.5)

holds exactly. Notice that if all the f_k's and all the g_m's are equal to one, meaning that there are no ties, then equation (14.6.5) reduces to equation (14.6.4).

In (14.6.2) we gave a t-statistic that tests the significance of a nonzero r_s. It is also possible to test the significance of D directly. The expectation value of D in the null hypothesis of uncorrelated data sets is

    \bar{D} = \frac{1}{6}(N^3 - N) - \frac{1}{12}\sum_k (f_k^3 - f_k)
              - \frac{1}{12}\sum_m (g_m^3 - g_m)                                             (14.6.6)

its variance is

    \mathrm{Var}(D) = \frac{(N - 1)N^2(N + 1)^2}{36}
                      \left[ 1 - \frac{\sum_k (f_k^3 - f_k)}{N^3 - N} \right]
                      \left[ 1 - \frac{\sum_m (g_m^3 - g_m)}{N^3 - N} \right]                (14.6.7)

and it is approximately normally distributed, so that the significance level is a complementary error function (cf. equation 14.5.2). Of course, (14.6.2) and (14.6.7) are not independent tests, but simply variants of the same test. In the program that follows, we calculate both the significance level obtained by using (14.6.2) and the significance level obtained by using (14.6.7); their discrepancy will give you an idea of how good the approximations are. You will also notice that we break off the task of assigning ranks (including tied midranks) into a separate function, crank.

#include <math.h>
#include "nrutil.h"

void spear(float data1[], float data2[], unsigned long n, float *d, float *zd,
    float *probd, float *rs, float *probrs)
/* Given two data arrays, data1[1..n] and data2[1..n], this routine returns their sum-squared
   difference of ranks as D, the number of standard deviations by which D deviates from its
   null-hypothesis expected value as zd, the two-sided significance level of this deviation as
   probd, Spearman's rank correlation r_s as rs, and the two-sided significance level of its
   deviation from zero as probrs. The external routines crank (below) and sort2 (§8.2) are used.
   A small value of either probd or probrs indicates a significant correlation (rs positive) or
   anticorrelation (rs negative). */
{
    float betai(float a, float b, float x);
    void crank(unsigned long n, float w[], float *s);
    float erfcc(float x);
    void sort2(unsigned long n, float arr[], float brr[]);
    unsigned long j;
    float vard,t,sg,sf,fac,en3n,en,df,aved,*wksp1,*wksp2;

    wksp1=vector(1,n);
    wksp2=vector(1,n);
    for (j=1;j<=n;j++) {
        wksp1[j]=data1[j];
        wksp2[j]=data2[j];
    }
    sort2(n,wksp1,wksp2);       /* Sort each of the data arrays, and convert the entries to  */
    crank(n,wksp1,&sf);         /* ranks. The values sf and sg return the sums sum(f_k^3-f_k)*/
    sort2(n,wksp2,wksp1);       /* and sum(g_m^3-g_m), respectively.                         */
    crank(n,wksp2,&sg);
    *d=0.0;
    for (j=1;j<=n;j++)          /* Sum the squared difference of ranks. */
        *d += SQR(wksp1[j]-wksp2[j]);
    en=n;
    en3n=en*en*en-en;
    aved=en3n/6.0-(sf+sg)/12.0;                         /* Expectation value of D, */
    fac=(1.0-sf/en3n)*(1.0-sg/en3n);
    vard=((en-1.0)*en*en*SQR(en+1.0)/36.0)*fac;         /* and variance of D give */
    *zd=(*d-aved)/sqrt(vard);                           /* number of standard devia- */
    *probd=erfcc(fabs(*zd)/1.4142136);                  /* tions and significance. */
    *rs=(1.0-(6.0/en3n)*(*d+(sf+sg)/12.0))/sqrt(fac);   /* Rank correlation coefficient, */
    fac=(*rs+1.0)*(1.0-(*rs));
    if (fac > 0.0) {
        t=(*rs)*sqrt((en-2.0)/fac);                     /* and its t value, */
        df=en-2.0;
        *probrs=betai(0.5*df,0.5,df/(df+t*t));          /* give its significance. */
    } else
        *probrs=0.0;
    free_vector(wksp2,1,n);
    free_vector(wksp1,1,n);
}

void crank(unsigned long n, float w[], float *s)
/* Given a sorted array w[1..n], replaces the elements by their rank, including midranking of
   ties, and returns as s the sum of f^3 - f, where f is the number of elements in each tie. */
{
    unsigned long j=1,ji,jt;
    float t,rank;

    *s=0.0;
    while (j < n) {
        if (w[j+1] != w[j]) {                           /* Not a tie. */
            w[j]=j;
            ++j;
        } else {                                        /* A tie: */
            for (jt=j+1;jt<=n && w[jt]==w[j];jt++);     /* How far does it go? */
            rank=0.5*(j+jt-1);                          /* This is the mean rank of the tie, */
            for (ji=j;ji<=(jt-1);ji++) w[ji]=rank;      /* so enter it into all the tied entries, */
            t=jt-j;
            *s += t*t*t-t;                              /* and update s. */
            j=jt;
        }
    }
    if (j == n) w[n]=n;     /* If the last element was not tied, this is its rank. */
}
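As a calling sketch, not from the book: the following driver shows how spear might be invoked, assuming the supporting Numerical Recipes routines it references (vector, free_vector, sort2, crank, betai, erfcc) are compiled and linked in. The data values are made up, and the arrays follow the unit-offset convention data1[1..n], so element [0] is a placeholder.

#include <stdio.h>

void spear(float data1[], float data2[], unsigned long n, float *d, float *zd,
    float *probd, float *rs, float *probrs);

int main(void)
{
    /* Hypothetical measurements; element [0] of each array is unused. */
    float x[11] = {0.0, 1.2, 2.4, 2.4, 3.1, 4.0, 4.0, 4.0, 5.5, 6.1, 7.3};
    float y[11] = {0.0, 0.9, 2.0, 1.8, 3.3, 3.9, 4.4, 4.1, 5.0, 6.6, 7.0};
    float d, zd, probd, rs, probrs;

    spear(x, y, 10, &d, &zd, &probd, &rs, &probrs);
    printf("D = %g  (zd = %g, probd = %g)\n", d, zd, probd);
    printf("r_s = %g  (probrs = %g)\n", rs, probrs);
    return 0;
}

Comparing the returned probd and probrs then shows directly, as noted above, how closely the approximations (14.6.2) and (14.6.7) agree for a given data set.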
Kendall's Tau

Kendall's τ is even more nonparametric than Spearman's r_s or D. Instead of using the numerical difference of ranks, it uses only the relative ordering of ranks: higher in rank, lower in rank, or the same in rank. But in that case we don't even have to rank the data! Ranks will be higher, lower, or the same if and only if the values are larger, smaller, or equal, respectively. On balance, we prefer r_s as being the more straightforward nonparametric test, but both statistics are in general use. In fact, τ and r_s are very strongly correlated and, in most applications, are effectively the same test.

To define τ, we start with the N data points (x_i, y_i). Now consider all N(N − 1)/2 pairs of data points, where a data point cannot be paired with itself, and where the points in either order count as one pair. We call a pair concordant if the relative ordering of the ranks of the two x's (or for that matter the two x's themselves) is the same as the relative ordering of the ranks of the two y's (or for that matter the two y's themselves). We call a pair discordant if the relative ordering of the ranks of the two x's is opposite from the relative ordering of the ranks of the two y's. If there is a tie in either the ranks of the two x's or the ranks of the two y's, then we don't call the pair either concordant or discordant. If the tie is in the x's, we will call the pair an "extra y pair." If the tie is in the y's, we will call the pair an "extra x pair." If the tie is in both the x's and the y's, we don't call the pair anything at all. Are you still with us?

Kendall's τ is now the following simple combination of these various counts:

    \tau = \frac{\text{concordant} - \text{discordant}}
                {\sqrt{\text{concordant} + \text{discordant} + \text{extra-}y}\,
                 \sqrt{\text{concordant} + \text{discordant} + \text{extra-}x}}              (14.6.8)

You can easily convince yourself that this must lie between 1 and −1, and that it takes on the extreme values only for complete rank agreement or complete rank reversal, respectively.

More important, Kendall has worked out, from the combinatorics, the approximate distribution of τ in the null hypothesis of no association between x and y. In this case τ is approximately normally distributed, with zero expectation value and a variance of

    \mathrm{Var}(\tau) = \frac{4N + 10}{9N(N - 1)}                                           (14.6.9)
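For example, take the four tie-free points (1, 1), (2, 3), (3, 2), (4, 4). Of the 6 possible pairs, only the pair formed by (2, 3) and (3, 2) is discordant; the other 5 are concordant, so equation (14.6.8) gives τ = (5 − 1)/(√6 · √6) = 2/3 ≈ 0.67. For comparison, D = 2 for the same points, so equation (14.6.4) gives r_s = 0.8, illustrating that the two statistics measure the same association on somewhat different numerical scales.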
The following program proceeds according to the above description, and therefore loops over all pairs of data points. Beware: This is an O(N^2) algorithm, unlike the algorithm for r_s, whose dominant sort operations are of order N log N. If you are routinely computing Kendall's τ for data sets of more than a few thousand points, you may be in for some serious computing. If, however, you are willing to bin your data into a moderate number of bins, then read on.

#include <math.h>

void kendl1(float data1[], float data2[], unsigned long n, float *tau,
    float *z, float *prob)
/* Given data arrays data1[1..n] and data2[1..n], this program returns Kendall's τ as tau,
   its number of standard deviations from zero as z, and its two-sided significance level as
   prob. Small values of prob indicate a significant correlation (tau positive) or
   anticorrelation (tau negative). */
{
    float erfcc(float x);
    unsigned long n2=0,n1=0,k,j;
    long is=0;
    float svar,aa,a2,a1;

    for (j=1;j<n;j++) {                 /* Loop over first member of pair, */
        for (k=(j+1);k<=n;k++) {        /* and second member. */
            a1=data1[j]-data1[k];
            a2=data2[j]-data2[k];
            aa=a1*a2;
            if (aa) {                   /* Neither array has a tie. */
                ++n1;