Correlation and dependence

[Figure] Several sets of (x, y) points, with the Pearson correlation coefficient of x and y for each set. Note that the correlation reflects the noisiness and direction of a linear relationship (top row), but not the slope of that relationship (middle row), nor many aspects of nonlinear relationships (bottom row). N.B.: the figure in the center of the middle row has a slope of 0, but in that case the correlation coefficient is undefined because the variance of Y is zero.
The population correlation coefficient ρ_{X,Y} between two random variables X and Y with expected values μ_X and μ_Y and standard deviations σ_X and σ_Y is defined as:

    ρ_{X,Y} = corr(X, Y) = cov(X, Y) / (σ_X σ_Y) = E[(X − μ_X)(Y − μ_Y)] / (σ_X σ_Y)
where E is the expected value operator, cov means covariance, and corr is a widely used alternative notation for Pearson's correlation. The Pearson correlation is defined only if both of the standard deviations are finite and both of them are nonzero. It is a corollary of the Cauchy-Schwarz inequality that the correlation cannot exceed 1 in absolute value. The correlation coefficient is symmetric: corr(X, Y) = corr(Y, X). The Pearson correlation is +1 in the case of a perfect positive (increasing) linear relationship (correlation), −1 in the case of a perfect decreasing (negative) linear relationship (anticorrelation),[5] and some value between −1 and +1 in all other cases, indicating the degree of linear dependence between the variables. As it approaches zero there is less of a relationship (closer to uncorrelated). The closer the coefficient is to either −1 or +1, the stronger the correlation between the variables. If the variables are independent, Pearson's correlation coefficient is 0, but the converse is not true, because the correlation coefficient detects only linear dependencies between two variables. For example, suppose the random variable X is symmetrically distributed about zero and Y = X². Then Y is completely determined by X, so that X and Y are perfectly dependent, but their correlation is zero; they are uncorrelated. However, in the special case when X and Y are jointly normal, uncorrelatedness is equivalent to independence. If we have a series of n measurements of X and Y written as x_i and y_i, where i = 1, 2, ..., n, then the sample correlation coefficient r can be used to estimate the population Pearson correlation ρ between X and Y. The sample correlation coefficient is written

    r = Σ_{i=1..n} (x_i − x̄)(y_i − ȳ) / ((n − 1) s_x s_y)
where x̄ and ȳ are the sample means of X and Y, and s_x and s_y are the sample standard deviations of X and Y. This can also be written as:

    r = Σ_{i=1..n} (x_i − x̄)(y_i − ȳ) / sqrt( Σ_{i=1..n} (x_i − x̄)² · Σ_{i=1..n} (y_i − ȳ)² )
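To make the computation concrete, here is a minimal sketch in Python (an illustration added here, not part of the article; the helper name pearson_r is our own) that evaluates the sample coefficient directly from the second form above and reproduces the Y = X² example of dependent but uncorrelated variables:

    import numpy as np

    def pearson_r(x, y):
        # Sum of products of deviations, normalized by the square root of
        # the summed squared deviations (the second form of r given above).
        x, y = np.asarray(x, float), np.asarray(y, float)
        dx, dy = x - x.mean(), y - y.mean()
        return (dx * dy).sum() / np.sqrt((dx * dx).sum() * (dy * dy).sum())

    rng = np.random.default_rng(0)

    # A noisy increasing linear relationship: r is close to +1.
    x = rng.normal(size=1000)
    y = 2.0 * x + rng.normal(scale=0.5, size=1000)
    print(pearson_r(x, y))        # close to +1

    # Y = X^2 with X symmetric about zero: perfectly dependent, yet r near 0.
    print(pearson_r(x, x**2))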
If x and y are measurements that contain measurement error, as commonly happens in biological systems, the realistic limits on the correlation coefficient are not −1 to +1 but a smaller range.[6]
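As a rough illustration of this attenuation effect (our own simulation, not the analysis in [6]): if independent measurement noise is added to two perfectly correlated quantities, the observed coefficient settles well below 1:

    import numpy as np

    rng = np.random.default_rng(1)
    n = 100_000
    true_x = rng.normal(size=n)
    true_y = true_x.copy()            # the underlying correlation is exactly 1

    # Independent measurement error on each variable.
    obs_x = true_x + rng.normal(scale=0.5, size=n)
    obs_y = true_y + rng.normal(scale=0.5, size=n)

    # The observed correlation is attenuated to about 1 / (1 + 0.25) = 0.8.
    print(np.corrcoef(obs_x, obs_y)[0, 1])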
Correlation matrices
The correlation matrix of n random variables X_1, ..., X_n is the n × n matrix whose (i, j) entry is corr(X_i, X_j). If the measures of correlation used are product-moment coefficients, the correlation matrix is the same as the covariance matrix of the standardized random variables X_i / σ(X_i) for i = 1, ..., n. This applies both to the matrix of population correlations (in which case σ is the population standard deviation) and to the matrix of sample correlations (in which case σ denotes the sample standard deviation). Consequently, each is necessarily a positive-semidefinite matrix. The correlation matrix is symmetric because the correlation between X_i and X_j is the same as the correlation between X_j and X_i.
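The stated properties are easy to check numerically. A short sketch (assuming NumPy; the data here are synthetic) verifies symmetry, the unit diagonal, and positive semidefiniteness of a sample correlation matrix:

    import numpy as np

    rng = np.random.default_rng(2)
    data = rng.normal(size=(3, 500))      # three variables, 500 samples each
    data[1] += 0.8 * data[0]              # make the second depend on the first

    R = np.corrcoef(data)                 # rows are variables: a 3 x 3 matrix
    print(np.allclose(R, R.T))            # True: corr(Xi, Xj) = corr(Xj, Xi)
    print(np.allclose(np.diag(R), 1.0))   # True: ones on the diagonal
    print(np.linalg.eigvalsh(R).min() >= -1e-10)   # True: positive semidefinite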
Common misconceptions
Correlation and causality
The conventional dictum that "correlation does not imply causation" means that correlation cannot be used to infer a causal relationship between the variables.[11] This dictum should not be taken to mean that correlations cannot indicate the potential existence of causal relations. However, the causes underlying the correlation, if any, may be indirect and unknown, and high correlations also overlap with identity relations (tautologies), where no causal process exists. Consequently, establishing a correlation between two variables is not a sufficient condition to establish a causal relationship (in either direction). For example, one may observe a correlation between an ordinary alarm clock ringing and daybreak, though there is no direct causal relationship between these events. A correlation between age and height in children is fairly causally transparent, but a correlation between mood and health in people is less so. Does improved mood lead to improved health, or does good health lead to good mood, or both? Or does some other factor underlie both? In other words, a correlation can be taken as evidence for a possible causal relationship, but cannot indicate what the causal relationship, if any, might be.
In Anscombe's quartet,[12] four pairs of variables share the same correlation coefficient of 0.816 while displaying very different relationships. The second example (top right) is not distributed normally; while an obvious relationship between the two variables can be observed, it is not linear. In this case the Pearson correlation coefficient does not indicate that there is an exact functional relationship: only the extent to which that relationship can be approximated by a linear relationship. In the third case (bottom left), the linear relationship is perfect, except for one outlier, which exerts enough influence to lower the correlation coefficient from 1 to 0.816. Finally, the fourth example (bottom right) shows another case in which one outlier is enough to produce a high correlation coefficient, even though the relationship between the two variables is not linear. These examples indicate that the correlation coefficient, as a summary statistic, cannot replace visual examination of the data. Note that the examples are sometimes said to demonstrate that the Pearson correlation assumes that the data follow a normal distribution, but this is not correct.[4] The coefficient of determination generalizes the correlation coefficient for relationships beyond simple linear regression.
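A small synthetic demonstration of the outlier effect described above (our own data, not Anscombe's actual quartet): ten points on a perfect line, then the same points with a single corrupted value:

    import numpy as np

    x = np.arange(10, dtype=float)
    y = 3.0 * x + 2.0                    # perfect linear relationship
    print(np.corrcoef(x, y)[0, 1])       # exactly 1 (up to rounding)

    y_out = y.copy()
    y_out[-1] += 40.0                    # corrupt a single point
    print(np.corrcoef(x, y_out)[0, 1])   # noticeably below 1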
The formula for ρ can also be expressed in terms of uncentered moments:

    ρ_{X,Y} = ( E[XY] − E[X] E[Y] ) / (σ_X σ_Y)

where E[X] and E[Y] are the expected values of X and Y, respectively, and σ_X and σ_Y are the standard deviations of X and Y, respectively.
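A quick numerical check (our own) that this uncentered form agrees with the covariance-based definition:

    import numpy as np

    rng = np.random.default_rng(3)
    x = rng.normal(size=100_000)
    y = 0.6 * x + rng.normal(size=100_000)

    centered = np.cov(x, y, bias=True)[0, 1] / (x.std() * y.std())
    uncentered = ((x * y).mean() - x.mean() * y.mean()) / (x.std() * y.std())
    print(centered, uncentered)          # identical up to floating-point error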
Partial correlation
If a population or data-set is characterized by more than two variables, a partial correlation coefficient measures the strength of dependence between a pair of variables that is not accounted for by the way in which they both change in response to variations in a selected subset of the other variables.
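The article does not spell out a computation, but a common construction (sketched here under that assumption; the helper partial_corr is our own) regresses each variable on the controlled subset and correlates the residuals:

    import numpy as np

    def partial_corr(x, y, z):
        # Regress x and y on z (with intercept), then correlate the residuals.
        Z = np.column_stack([np.ones_like(z), z])
        rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
        ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
        return np.corrcoef(rx, ry)[0, 1]

    rng = np.random.default_rng(4)
    z = rng.normal(size=5000)
    x = z + rng.normal(size=5000)        # x and y are linked only through z
    y = z + rng.normal(size=5000)

    print(np.corrcoef(x, y)[0, 1])       # about 0.5: raw correlation via z
    print(partial_corr(x, y, z))         # near 0 once z is controlled for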
References
[1] Croxton, Frederick Emory; Cowden, Dudley Johnstone; Klein, Sidney. Applied General Statistics, p. 625.
[2] Dietrich, Cornelius Frank. Uncertainty, Calibration and Probability: The Statistics of Scientific and Industrial Measurement, p. 331.
[3] Aitken, Alexander Craig. Statistical Mathematics, p. 95.
[4] Rodgers, J. L. and Nicewander, W. A. (1988). "Thirteen ways to look at the correlation coefficient". The American Statistician, 42(1): 59–66. http://www.jstor.org/stable/2685263
[5] Dowdy, S. and Wearden, S. (1983). Statistics for Research. Wiley. ISBN 0471086029. p. 230.
[6] Francis, DP; Coats, AJ; Gibson, D (1999). "How high can a correlation coefficient be?". Int J Cardiol 69: 185–199. doi:10.1016/S0167-5273(99)00028-5.
[7] Yule, G.U. and Kendall, M.G. (1950). An Introduction to the Theory of Statistics, 14th Edition (5th Impression 1968). Charles Griffin & Co. pp. 258–270.
[8] Kendall, M. G. (1955). Rank Correlation Methods. Charles Griffin & Co.
[9] Székely, G. J.; Rizzo, M. L.; Bakirov, N. K. (2007). "Measuring and testing independence by correlation of distances". Annals of Statistics, 35/6: 2769–2794. doi:10.1214/009053607000000505. Reprint: http://personal.bgsu.edu/~mrizzo/energy/AOS0283-reprint.pdf
[10] Székely, G. J. and Rizzo, M. L. (2009). "Brownian distance covariance". Annals of Applied Statistics, 3/4: 1233–1303. doi:10.1214/09-AOAS312. Reprint: http://personal.bgsu.edu/~mrizzo/energy/AOAS312.pdf
[11] Aldrich, John (1995). "Correlations Genuine and Spurious in Pearson and Yule". Statistical Science 10(4): 364–376. doi:10.1214/ss/1177009870. JSTOR 2246135.
[12] Anscombe, Francis J. (1973). "Graphs in statistical analysis". The American Statistician 27: 17–21. doi:10.2307/2682899. JSTOR 2682899.
Further reading
Cohen, J., Cohen, P., West, S.G., & Aiken, L.S. (2002). Applied Multiple Regression/Correlation Analysis for the Behavioral Sciences (3rd ed.). Psychology Press. ISBN 0805822232.
External links
Earliest Uses: Correlation (http://jeff560.tripod.com/c.html) - gives basic history and references.
Understanding Correlation (http://www.hawaii.edu/powerkills/UC.HTM) - introductory material by a University of Hawaii professor.
Statsoft Electronic Textbook (http://www.statsoft.com/textbook/stathome.html?stbasic.html&1)
Pearson's Correlation Coefficient (http://www.vias.org/tmdatanaleng/cc_corr_coeff.html) - how to calculate it quickly.
Learning by Simulations (http://www.vias.org/simulations/simusoft_rdistri.html) - the distribution of the correlation coefficient.
Correlation measures the strength of a linear relationship between two variables (http://www.statisticalengineering.com/correlation.htm)
MathWorld page on (cross-)correlation coefficient(s) of a sample (http://mathworld.wolfram.com/CorrelationCoefficient.html)
Compute significance between two correlations (http://peaks.informatik.uni-erlangen.de/cgi-bin/usignificance.cgi) - useful for comparing two correlation values.
A MATLAB Toolbox for computing Weighted Correlation Coefficients (http://www.mathworks.com/matlabcentral/fileexchange/20846)
Proof that the Sample Bivariate Correlation Coefficient has Limits ±1 (http://www.docstoc.com/docs/3530180/Proof-that-the-Sample-Bivariate-Correlation-Coefficient-has-Limits-(Plus-or-Minus)-1)
License
Creative Commons Attribution-Share Alike 3.0 Unported (http://creativecommons.org/licenses/by-sa/3.0/)