Probability Distributions
Contents

Probability distribution

Continuous distributions
• Beta distribution
• Burr distribution
• Cauchy distribution
• Chi-square distribution
• Dirichlet distribution
• F-distribution
• Gamma distribution
• Exponential distribution
• Erlang distribution
• Kumaraswamy distribution
• Inverse Gaussian distribution
• Laplace distribution
• Lévy distribution
• Log-logistic distribution
• Log-normal distribution
• Logistic distribution
• Normal distribution
• Pareto distribution
• Student's t-distribution
• Uniform distribution (continuous)
• Weibull distribution

Discrete distributions
• Bernoulli distribution
• Beta-binomial distribution
• Binomial distribution
• Uniform distribution (discrete)
• Geometric distribution
• Hypergeometric distribution
• Negative binomial distribution
• Yule–Simon distribution
• Zipf's law

Multivariate distributions
• Multinomial distribution
• Multivariate normal distribution
• Wishart distribution

References
• Article Sources and Contributors
• Image Sources, Licenses and Contributors

Article Licenses
• License
Probability distribution
In probability theory and statistics, a probability distribution identifies either the probability of each value of a random variable (when the variable is discrete), or the probability of the value falling within a particular interval (when the variable is continuous).[1] The probability distribution describes the range of possible values that a random variable can attain and the probability that the value of the random variable is within any (measurable) subset of that range.
When the random variable takes values in the set of real numbers, the probability distribution is completely described by the cumulative distribution function, whose value at each real x is the probability that the random variable is smaller than or equal to x.
[Figure: The normal distribution, often called the "bell curve".]
The concept of the probability distribution and the random variables it describes underlies the mathematical discipline of probability theory and the science of statistics. There is spread or variability in almost any value that can be measured in a population (e.g. height of people, durability of a metal, sales growth, traffic flow, etc.); almost all measurements are made with some intrinsic error; in physics, many processes are described probabilistically, from the kinetic properties of gases to the quantum mechanical description of fundamental particles. For these and many other reasons, simple numbers are often inadequate for describing a quantity, while probability distributions are often more appropriate.
Various probability distributions show up in different applications. Two of the most important are the normal distribution and the categorical distribution. The normal distribution, also known as the Gaussian distribution, has a familiar "bell curve" shape and approximates many naturally occurring distributions over real numbers. The categorical distribution describes the result of an experiment with a fixed, finite number of outcomes. For example, the toss of a fair coin follows a categorical distribution, where the possible outcomes are heads and tails, each with probability 1/2.
Formal definition
In the measure-theoretic formalization of probability theory, a random variable is defined as a measurable function X from a probability space (Ω, F, P) to a measurable space (X, A). A probability distribution is the pushforward measure X∗P = P ∘ X⁻¹ on (X, A).
Probability distributions of real-valued random variables
Because a probability distribution Pr on the real line is determined by the probability of a real-valued random variable X being in a half-open interval (−∞, x], the probability distribution is completely characterized by its cumulative distribution function:
F(x) = Pr[X ≤ x]   for all x ∈ R.
Discrete probability distribution
A probability distribution is called discrete if its cumulative distribution function only increases in jumps. More precisely, a probability distribution is discrete if there is a finite or countable set whose probability is 1.
For many familiar discrete distributions, the set of possible values is topologically discrete in the sense that all its points are isolated points. But there are discrete distributions for which this countable set is dense on the real line.
Discrete distributions are characterized by a probability mass function, p, such that
Pr[X = x] = p(x)   and   ∑ₓ p(x) = 1.
Continuous probability distribution
A probability distribution is called continuous if its cumulative distribution function is continuous and, therefore, the probability measure of singletons is zero: Pr[X = x] = 0 for all x.
Another convention reserves the term continuous probability distribution for absolutely continuous distributions. These distributions can be characterized by a probability density function: a non-negative Lebesgue integrable function f defined on the real numbers such that
Pr[a ≤ X ≤ b] = ∫ₐᵇ f(x) dx.
Discrete distributions and some continuous distributions (like the Cantor distribution) do not admit such a density.
Terminology
The support of a distribution is the smallest closed set whose complement has probability zero. It may be understood as the points or elements that are actual members of the distribution.
A discrete random variable is a random variable whose probability distribution is discrete. Similarly, a continuous random variable is a random variable whose probability distribution is continuous.
Simulated sampling
The following algorithm lets one sample from a probability distribution (either discrete or continuous). It assumes access to the inverse of the cumulative distribution function (easy to calculate for a discrete distribution, and approximable for a continuous one) and a computational primitive called "random()" which returns an arbitrary-precision floating-point value in the range [0, 1).
    from random import random

    def sampleFrom(cdfInverse):
        # input:  cdfInverse(x) - the inverse of the CDF of the probability distribution
        #         example: if the distribution is Gaussian, one can use a Taylor
        #                  approximation of the inverse of erf(x)
        #         example: if the distribution is discrete, see the explanation
        #                  below the code
        # output: a real number sampled from the probability distribution
        #         represented by cdfInverse
        r = random()      # uniform value in [0, 1)
        while r == 0:     # make sure r is not equal to 0; a discontinuity is possible there
            r = random()
        return cdfInverse(r)
For discrete distributions, the function cdfInverse (the inverse of the cumulative distribution function) can be calculated from samples as follows: for each element in the sample range (discrete values along the x-axis), calculate the cumulative total of samples up to and including it, and normalize this new discrete distribution. The result is the CDF, which can be turned into an object that acts like a function: calling cdfInverse(query) returns the smallest x-value such that the CDF is greater than or equal to the query.
    def dataToCdfInverse(discreteDistribution):
        # input:  discreteDistribution - a mapping from possible values to
        #         frequencies/probabilities
        #         example: {0: 1 - p, 1: p} would be a Bernoulli distribution with chance p
        #         example: setting p = 0.5 in the above example gives a fair coin where
        #                  P(X=1) -> "heads" and P(X=0) -> "tails"
        # output: a function that represents (CDF^-1)(x)
        total = sum(discreteDistribution.values())   # normalize frequencies to probabilities
        def cdfInverse(x):
            integral = 0.0
            # go through the mapping (key -> value) in sorted order, adding to the integral
            for key in sorted(discreteDistribution):
                integral += discreteDistribution[key] / total
                if integral >= x:   # stop when integral >= x (or > x; it doesn't matter)
                    return key
            return key              # guard against floating-point rounding near x = 1
        return cdfInverse
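Putting the two routines together, here is a minimal usage sketch (assuming the Python versions of sampleFrom and dataToCdfInverse given above):

    coinCdfInverse = dataToCdfInverse({0: 0.5, 1: 0.5})       # a fair coin
    flips = [sampleFrom(coinCdfInverse) for _ in range(10)]   # ten simulated tosses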
Note that mathematics environments and computer algebra systems often have some way to represent probability distributions and sample from them, and this functionality may also be available in third-party libraries. Such packages greatly facilitate sampling, most likely have optimizations for common distributions, and are likely to be more elegant than the above bare-bones solution.
Some properties
• The probability density function of the sum of two independent random variables is the convolution of each of their density functions.
• The probability density function of the difference of two independent random variables is the cross-correlation of their density functions.
• Probability distributions are not a vector space – they are not closed under linear combinations, as these do not preserve non-negativity or total integral 1 – but they are closed under convex combination, thus forming a convex subset of the space of functions (or measures).
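In symbols, for independent random variables X and Y with densities f_X and f_Y, the first two properties read:

    f_{X+Y}(z) = \int_{-\infty}^{\infty} f_X(x)\, f_Y(z - x)\, dx,
    \qquad
    f_{X-Y}(z) = \int_{-\infty}^{\infty} f_X(x)\, f_Y(x - z)\, dx.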
Common probability distributions
The following is a list of some of the most common probability distributions, grouped by the type of process they are related to. For a more complete list, see list of probability distributions, which groups by the nature of the outcome being considered (discrete, continuous, multivariate, etc.).
Note also that all of the univariate distributions below are singly peaked; that is, it is assumed that the values cluster around a single point. In practice, actually observed quantities may cluster around multiple values. Such quantities can be modeled using a mixture distribution.
Related to real-valued quantities that grow linearly (e.g. errors, offsets)
• Normal distribution (aka Gaussian distribution), for a single such quantity; the most common continuous distribution
• Multivariate normal distribution (aka multivariate Gaussian distribution), for vectors of correlated outcomes that are individually Gaussian-distributed
Related to positive real-valued quantities that grow exponentially (e.g. prices, incomes, populations)
• Log-normal distribution, for a single such quantity whose log is normally distributed
• Pareto distribution, for a single such quantity whose log is exponentially distributed; the prototypical power law distribution
Related to real-valued quantities that are assumed to be uniformly distributed over a (possibly unknown) region
• Discrete uniform distribution, for a finite set of values (e.g. the outcome of a fair die)
• Continuous uniform distribution, for continuously distributed values
Related to Bernoulli trials (yes/no events, with a given probability)
Basic distributions
• Bernoulli distribution, for the outcome of a single Bernoulli trial (e.g. success/failure, yes/no)
• Binomial distribution, for the number of "positive occurrences" (e.g. successes, yes votes, etc.) given a fixed total number of independent occurrences
• Negative binomial distribution, for binomial-type observations but where the quantity of interest is the number of failures before a given number of successes occurs
• Geometric distribution, for binomial-type observations but where the quantity of interest is the number of failures before the first success; a special case of the negative binomial distribution
Related to sampling schemes over a finite population
• Binomial distribution, for the number of "positive occurrences" (e.g. successes, yes votes, etc.) given a fixed number of total occurrences, using sampling with replacement
• Hypergeometric distribution, for the number of "positive occurrences" (e.g. successes, yes votes, etc.) given a fixed number of total occurrences, using sampling without replacement
• Beta-binomial distribution, for the number of "positive occurrences" (e.g. successes, yes votes, etc.) given a fixed number of total occurrences, sampling using a Pólya urn scheme (in some sense, the "opposite" of sampling without replacement)
Related to categorical outcomes (events with K possible outcomes, with a given probability for each outcome)
• Categorical distribution, for a single categorical outcome (e.g. yes/no/maybe in a survey); a generalization of the Bernoulli distribution
• Multinomial distribution, for the number of each type of categorical outcome, given a fixed number of total outcomes; a generalization of the binomial distribution
• Multivariate hypergeometric distribution, similar to the multinomial distribution, but using sampling without replacement; a generalization of the hypergeometric distribution
Related to events in a Poisson process (events that occur independently with a given rate)
• Poisson distribution, for the number of occurrences of a Poisson-type event in a given period of time
• Exponential distribution, for the time before the next Poisson-type event occurs
Useful for hypothesis testing related to normally distributed outcomes
• Chi-square distribution, the distribution of a sum of squared standard normal variables; useful e.g. for inference regarding the sample variance of normally distributed samples (see chi-square test)
• Student's t-distribution, the distribution of the ratio of a standard normal variable and the square root of a scaled chi-squared variable; useful for inference regarding the mean of normally distributed samples with unknown variance (see Student's t-test)
• F-distribution, the distribution of the ratio of two scaled chi-squared variables; useful e.g. for inferences that involve comparing variances or involving R-squared (the squared correlation coefficient)
Useful as conjugate prior distributions in Bayesian inference
• Beta distribution, for a single probability (real number between 0 and 1); conjugate to the Bernoulli distribution and binomial distribution
• Gamma distribution, for a nonnegative scaling parameter; conjugate to the rate parameter of a Poisson distribution or exponential distribution, the precision (inverse variance) of a normal distribution, etc.
• Dirichlet distribution, for a vector of probabilities that must sum to 1; conjugate to the categorical distribution and multinomial distribution; generalization of the beta distribution
• Wishart distribution, for a symmetric non-negative definite matrix; conjugate to the inverse of the covariance matrix of a multivariate normal distribution; generalization of the gamma distribution
Notes
[1] Everitt, B.S. (2006). The Cambridge Dictionary of Statistics, Third Edition. pp. 313–314. Cambridge University Press, Cambridge. ISBN 0-521-69027-7.
External links
• An 8-foot-tall (2.4 m) Probability Machine (named Sir Francis) comparing stock market returns to the randomness of the beans dropping through the quincunx pattern (http://www.youtube.com/watch?v=AUSKTk9ENzg), from Index Funds Advisors IFA.com (http://www.ifa.com), youtube.com
• Interactive Discrete and Continuous Probability Distributions (http://www.socr.ucla.edu/htmls/SOCR_Distributions.html), socr.ucla.edu
• A Compendium of Common Probability Distributions (http://www.causascientia.org/math_stat/Dists/Compendium.pdf)
• A Probability Distribution Calculator (http://www.covariable.com/continuous.html), covariable.com
• Sourceforge.net (http://sourceforge.net/projects/distexplorer/), Distribution Explorer: a mixed C++ and C# Windows application that allows you to explore the properties of 20+ statistical distributions and calculate CDF, PDF & quantiles. Written using open-source C++ from the Boost.org (http://www.boost.org) Math Toolkit library.
• Explore different probability distributions and fit your own dataset online – interactive tool (http://www.xjtek.com/anylogic/demo_models/111/), xjtek.com
Beta distribution
In probability theory and statistics, the beta distribution is a family of continuous probability distributions defined on the interval (0, 1) parameterized by two positive shape parameters, typically denoted by α and β. It is the special case of the Dirichlet distribution with only two parameters. Just as the Dirichlet distribution is the conjugate prior of the multinomial distribution and categorical distribution, the beta distribution is the conjugate prior of the binomial distribution and Bernoulli distribution. In Bayesian statistics, it can be seen as the likelihood of the parameter p of a binomial distribution from observing α − 1 independent events with probability p and β − 1 with probability 1 − p.
Characterization
Probability density function
The probability density function of the beta distribution is
f(x; α, β) = x^(α−1) (1 − x)^(β−1) / B(α, β),   0 < x < 1,
where B(α, β) = Γ(α)Γ(β)/Γ(α + β) and Γ is the gamma function. The beta function, B, appears as a normalization constant to ensure that the total probability integrates to unity.
Cumulative distribution function
The cumulative distribution function is
F(x; α, β) = B(x; α, β) / B(α, β) = I_x(α, β),
where B(x; α, β) is the incomplete beta function and I_x(α, β) is the regularized incomplete beta function.
Properties
The expected value, variance (second central moment), skewness, and excess kurtosis of a beta-distributed random variable X with parameters α and β are:
E[X] = α / (α + β),
Var(X) = αβ / ((α + β)² (α + β + 1)).
The skewness is
2(β − α) √(α + β + 1) / ((α + β + 2) √(αβ)).
The excess kurtosis is
6 [(α − β)² (α + β + 1) − αβ (α + β + 2)] / (αβ (α + β + 2) (α + β + 3)).
In general, the kth raw moment is given by
E[X^k] = B(α + k, β) / B(α, β) = (α)_k / (α + β)_k,
where (x)_k = x (x + 1) ⋯ (x + k − 1) is the Pochhammer symbol representing the rising factorial. It can also be written in a recursive form as
E[X^k] = ((α + k − 1) / (α + β + k − 1)) E[X^(k−1)].
One can also show that E[ln X] = ψ(α) − ψ(α + β), where ψ is the digamma function.
Quantities of information
Given two beta-distributed random variables, X ~ Beta(α, β) and Y ~ Beta(α′, β′), the information entropy of X is[1]
h(X) = ln B(α, β) − (α − 1) ψ(α) − (β − 1) ψ(β) + (α + β − 2) ψ(α + β),
where ψ is the digamma function.
The cross entropy is
H(X, Y) = ln B(α′, β′) − (α′ − 1) ψ(α) − (β′ − 1) ψ(β) + (α′ + β′ − 2) ψ(α + β).
It follows that the Kullback–Leibler divergence between these two beta distributions is
D_KL(X ∥ Y) = ln (B(α′, β′) / B(α, β)) + (α − α′) ψ(α) + (β − β′) ψ(β) + (α′ − α + β′ − β) ψ(α + β).
Shapes
The beta density function can take on different shapes depending on the values of the two parameters:
• α < 1, β < 1 gives a U-shaped density, unbounded at both 0 and 1
• α < 1, β ≥ 1 (or α = 1, β > 1) gives a strictly decreasing density
• α = 1, β = 1 gives the uniform distribution on [0, 1]
• α > 1, β ≤ 1 (or α = 1, β < 1) gives a strictly increasing density
• α > 1, β > 1 gives a unimodal density with mode at (α − 1)/(α + β − 2)
Moreover, if α = β then the density function is symmetric about 1/2.
Parameter estimation
Let x̄ be the sample mean and v be the sample variance. The method-of-moments estimates of the parameters are
α̂ = x̄ ( x̄(1 − x̄)/v − 1 ),
β̂ = (1 − x̄) ( x̄(1 − x̄)/v − 1 ).
When the distribution is required over an interval other than [0, 1], say [ℓ, h], then replace x̄ with (x̄ − ℓ)/(h − ℓ) and v with v/(h − ℓ)² in the above equations.[2][3]
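As a concrete illustration, here is a minimal Python sketch of these method-of-moments estimates (the function name and the low/high rescaling arguments are ours):

    import statistics

    def beta_method_of_moments(data, low=0.0, high=1.0):
        # Rescale data from [low, high] to [0, 1], then apply the estimates above.
        scaled = [(x - low) / (high - low) for x in data]
        m = statistics.fmean(scaled)            # sample mean
        v = statistics.variance(scaled)         # sample variance
        common = m * (1.0 - m) / v - 1.0        # valid only when v < m * (1 - m)
        return m * common, (1.0 - m) * common   # (alpha_hat, beta_hat)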
There is no closed form for the maximum likelihood estimates of the parameters.
Related distributions
• If X has a beta distribution, then T = X/(1 − X) has a "beta distribution of the second kind", also called the beta prime distribution.
• The connection with the binomial distribution is mentioned below.
• The Beta(1,1) distribution is identical to the standard uniform distribution.
• If X has the Beta(3/2,3/2) distribution and R > 0 is a real parameter, then Y := 2RX – R has the Wigner semicircle distribution.
• If X and Y are independently distributed Gamma(α, θ) and Gamma(β, θ) respectively, then X / (X + Y) is distributed Beta(α, β).
• If X and Y are independently distributed Beta(α,β) and F(2β, 2α) (Snedecor's F distribution with 2β and 2α degrees of freedom), then Pr(X ≤ α/(α + xβ)) = Pr(Y > x) for all x > 0.
• The beta distribution is a special case of the Dirichlet distribution for only two parameters.
• The Kumaraswamy distribution resembles the beta distribution.
• If X has a uniform distribution on [0, 1], then X^(1/α) has a Beta(α, 1) distribution, a special case of the beta distribution called the power-function distribution.
• Binomial opinions in subjective logic are equivalent to Beta distributions.
• Beta(1/2, 1/2) is the Jeffreys prior for a proportion and is equivalent to the arcsine distribution.
Beta(i, j) with integer values of i and j is the distribution of the i-th order statistic (the i-th smallest value) of a sample of i + j − 1 independent random variables uniformly distributed between 0 and 1. The cumulative probability from 0 to x is thus the probability that the i-th smallest value is less than x; in other words, it is the probability that at least i of the random variables are less than x, a probability given by summing over the binomial distribution with its p parameter set to x. This shows the intimate connection between the beta distribution and the binomial distribution.
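The gamma relationship in the list above also gives a simple way to draw beta variates; a minimal Python sketch (the function name is ours):

    import random

    def beta_sample(alpha, beta):
        # X ~ Gamma(alpha, 1) and Y ~ Gamma(beta, 1) give X/(X+Y) ~ Beta(alpha, beta).
        x = random.gammavariate(alpha, 1.0)
        y = random.gammavariate(beta, 1.0)
        return x / (x + y)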
Applications
Rule of succession
A classic application of the beta distribution is the rule of succession, introduced in the 18th century by Pierre-Simon Laplace in the course of treating the sunrise problem. It states that, given s successes in n conditionally independent Bernoulli trials with probability p, p should be estimated as (s + 1)/(n + 2). This estimate may be regarded as the expected value of the posterior distribution over p, namely Beta(s + 1, n − s + 1), which is given by Bayes' rule if one assumes a uniform prior over p (i.e., Beta(1, 1)) and then observes that p generated s successes in n trials.
Bayesian statistics
Beta distributions are used extensively in Bayesian statistics, since they provide a family of conjugate prior distributions for binomial (including Bernoulli) and geometric distributions. The Beta(0, 0) distribution is an improper prior and is sometimes used to represent ignorance of parameter values.
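Conjugacy makes the posterior update a matter of bookkeeping; a minimal sketch, assuming a Beta(a, b) prior and binomial data (the function name is ours):

    def update_beta(a, b, successes, failures):
        # Beta(a, b) prior + binomial likelihood -> Beta(a + s, b + f) posterior.
        return a + successes, b + failures

    # Example: uniform Beta(1, 1) prior, 7 successes in 10 trials.
    a_post, b_post = update_beta(1, 1, 7, 3)
    posterior_mean = a_post / (a_post + b_post)   # 8/12, cf. the rule of succession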
Task duration modeling
The beta distribution can be used to model events which are constrained to take place within an interval defined by a minimum and maximum value. For this reason, the beta distribution — along with the triangular distribution — is used extensively in PERT, the critical path method (CPM) and other project management / control systems to describe the time to completion of a task. In project management, shorthand computations are widely used to estimate the mean and standard deviation of the beta distribution:
E(X) = (a + 4b + c) / 6,
σ(X) = (c − a) / 6,
where a is the minimum, c is the maximum, and b is the most likely value.
Using this set of approximations is known as three-point estimation. The approximations are exact only for particular values of α and β, specifically when[4]
α = 3 − √2, β = 3 + √2,
or vice versa.
They are notably poor approximations for most other beta distributions, exhibiting average errors of 40% in the mean and 549% in the variance.[5][6][7]
Information theory
One example of the use of the beta distribution in information theory is the information-theoretic performance analysis of communication systems. In sensor array systems, the distribution of the inner product of two vectors is frequently used for performance estimation. Assume that s and v are isotropic i.i.d. vectors in the (M − 1)-dimensional nullspace of h, where s, v and h are in C^M and the elements of h are i.i.d. complex Gaussian random values. Then the magnitude of the inner product, |s^H v|, is Beta(1, M − 2) distributed.
Four parameters
A beta distribution with the two shape parameters α and β is supported on the range [0, 1]. It is possible to alter the location and scale of the distribution by introducing two further parameters representing the minimum and maximum values of the distribution.[8]
The probability density function of the four-parameter beta distribution is given by
f(y; α, β, a, c) = (y − a)^(α−1) (c − y)^(β−1) / (B(α, β) (c − a)^(α+β−1)),   a ≤ y ≤ c.
The standard form can be obtained by letting x = (y − a)/(c − a).
References
[1] A. C. G. Verdugo Lazo and P. N. Rathie. "On the entropy of continuous probability distributions," IEEE Trans. Inf. Theory, IT-24:120–122, 1978.
[2] Engineering Statistics Handbook (http://www.itl.nist.gov/div898/handbook/eda/section3/eda366h.htm)
[3] Brighton Webs Ltd. Data & Analysis Services for Industry & Education (http://www.brightonwebs.co.uk/distributions/beta.asp)
[4] Grubbs, Frank E. (1962). Attempts to Validate Certain PERT Statistics or 'Picking on PERT'. Operations Research 10(6), pp. 912–915.
[5] Keefer, Donald L. and Verdini, William A. (1993). Better Estimation of PERT Activity Time Parameters. Management Science 39(9), pp. 1086–1091.
[6] Keefer, Donald L. and Bodily, Samuel E. (1983). Three-point Approximations for Continuous Random Variables. Management Science 29(5), pp. 595–609.
[7] DRMI Newsletter, Issue 12, April 8, 2005 (http://www.nps.edu/drmi/docs/1apr05newsletter.pdf)
[8] Beta4 distribution (http://www.vosesoftware.com/ModelRiskHelp/Distributions/Continuous_distributions/Beta_distribution.htm)
External links
• Weisstein, Eric W., "Beta Distribution" (http://mathworld.wolfram.com/BetaDistribution.html) from MathWorld.
• "Beta Distribution" (http://demonstrations.wolfram.com/BetaDistribution/) by Fiona Maclachlan, the Wolfram Demonstrations Project, 2007.
• Beta Distribution (http://www.brightonwebs.co.uk/distributions/beta.asp), brightonwebs.co.uk
• Beta Distributions (http://isometricland.com/geogebra/geogebra_beta_distributions.php) – Applet showing beta distributions in action.
Burr distribution
In probability theory, statistics and econometrics, the Burr Type XII distribution or simply the Burr distribution is a continuous probability distribution for a non-negative random variable. It is also known as the Singh–Maddala distribution and is one of a number of different distributions sometimes called the "generalized log-logistic distribution". It is most commonly used to model household income (see: household income in the U.S.).
The Burr distribution has probability density function[1][2]
f(x; c, k) = c k x^(c−1) / (1 + x^c)^(k+1),   x > 0,
and cumulative distribution function
F(x; c, k) = 1 − (1 + x^c)^(−k).
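Since the CDF inverts in closed form, the inverse-CDF sampling recipe from the first article applies directly; a minimal Python sketch (the function name is ours):

    import random

    def burr_sample(c, k):
        # Invert F(x) = 1 - (1 + x**c)**(-k) at a uniform u in [0, 1).
        u = random.random()
        return ((1.0 - u) ** (-1.0 / k) - 1.0) ** (1.0 / c)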
See also
• Log-logistic distribution
References
[1] Maddala, G.S. (1983, 1996). Limited-Dependent and Qualitative Variables in Econometrics. Cambridge University Press.
[2] Tadikamalla, Pandu R. (1980), "A Look at the Burr and Related Distributions" (http://links.jstor.org/sici?sici=0306-7734(198012)48:3<337:ALATBA>2.0.CO;2-Z), International Statistical Review 48 (3): 337–344, doi:10.2307/1402945.
Cauchy distribution
Not to be confused with the Lorenz curve.
Cauchy–Lorentz
[Figure: probability density function; the purple curve is the standard Cauchy distribution.]
[Figure: cumulative distribution function.]
parameters: x₀ (location, real); γ > 0 (scale, real)
pdf: 1 / (πγ [1 + ((x − x₀)/γ)²])
cdf: (1/π) arctan((x − x₀)/γ) + 1/2
mean: not defined
median: x₀
mode: x₀
variance: not defined
skewness: not defined
excess kurtosis: not defined
mgf: not defined
cf: exp(x₀ i t − γ |t|)
The Cauchy–Lorentz distribution, named after Augustin Cauchy and Hendrik Lorentz, is a continuous probability distribution. As a probability distribution, it is known as the Cauchy distribution, while among physicists, it is known as the Lorentz distribution, Lorentz(ian) function, or Breit–Wigner distribution.
Its importance in physics is due to its being the solution to the differential equation describing forced resonance. ^{[}^{1}^{]} In mathematics, it is closely related to the Poisson kernel, which is the fundamental solution for the Laplace equation in the upper halfplane. In spectroscopy, it is the description of the shape of spectral lines which are subject to homogeneous broadening in which all atoms interact in the same way with the frequency range contained in the line shape. Many mechanisms cause homogeneous broadening, most notably collision broadening, and Chantler–Alda radiation. ^{[}^{2}^{]}
Characterization
Probability density function
The Cauchy distribution has the probability density function
f(x; x₀, γ) = 1 / (πγ [1 + ((x − x₀)/γ)²]),
where x₀ is the location parameter, specifying the location of the peak of the distribution, and γ is the scale parameter which specifies the half-width at half-maximum (HWHM). γ is also equal to half the interquartile range. Cauchy himself exploited such a density function in 1827, with infinitesimal scale parameter, in defining a Dirac delta function (see there).
The amplitude of the above Lorentzian function is given by
1 / (πγ).
The special case when x₀ = 0 and γ = 1 is called the standard Cauchy distribution, with the probability density function
f(x; 0, 1) = 1 / (π (1 + x²)).
In physics, a three-parameter Lorentzian function is often used, as follows:
f(x; x₀, γ, I) = I [ γ² / ((x − x₀)² + γ²) ],
where I is the height of the peak.
Cumulative distribution function
The cumulative distribution function (cdf) is
F(x; x₀, γ) = (1/π) arctan((x − x₀)/γ) + 1/2,
and the inverse cumulative distribution function of the Cauchy distribution is
F⁻¹(p; x₀, γ) = x₀ + γ tan(π (p − 1/2)).
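Because the inverse cdf is elementary, the inverse-CDF sampling recipe from the first article applies here too; a minimal Python sketch (the function name is ours):

    import math, random

    def cauchy_sample(x0, gamma):
        # Evaluate the inverse CDF at a uniform p in (0, 1).
        p = random.random()
        while p == 0.0:             # avoid the p = 0 endpoint
            p = random.random()
        return x0 + gamma * math.tan(math.pi * (p - 0.5))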
Properties
The Cauchy distribution is an example of a distribution which has no mean, variance or higher moments defined. Its mode and median are well defined and are both equal to x _{0} .
When U and V are two independent normally distributed random variables with expected value 0 and variance 1, then the ratio U/V has the standard Cauchy distribution.
If X₁, …, Xₙ are independent and identically distributed random variables, each with a standard Cauchy distribution, then the sample mean (X₁ + ⋯ + Xₙ)/n has the same standard Cauchy distribution. (The sample median, which is not affected by extreme values, can be used as a measure of central tendency.) To see that this is true, compute the characteristic function of the sample mean:
φ_X̄(t) = E[e^(i X̄ t)] = (e^(−|t|/n))^n = e^(−|t|),
where X̄ is the sample mean. This example serves to show that the hypothesis of finite variance in the central limit theorem cannot be dropped. It is also an example of a more generalized version of the central limit theorem that is characteristic of all stable distributions, of which the Cauchy distribution is a special case.
The Cauchy distribution is an infinitely divisible probability distribution. It is also a strictly stable distribution.
The standard Cauchy distribution coincides with the Student's tdistribution with one degree of freedom.
Like all stable distributions, the locationscale family to which the Cauchy distribution belongs is closed under linear transformations with real coefficients. In addition, the Cauchy distribution is the only univariate distribution which is closed under linear fractional transformations with real coefficients. In this connection, see also McCullagh's parametrization of the Cauchy distributions.
Characteristic function
Let X denote a Cauchy distributed random variable. The characteristic function of the Cauchy distribution is given by
φ_X(t) = E[e^(i X t)] = exp(i x₀ t − γ |t|),
which is just the Fourier transform of the probability density. It follows that the probability density may be expressed in terms of the characteristic function by
f(x; x₀, γ) = (1/2π) ∫_{−∞}^{∞} φ_X(t) e^(−i x t) dt.
Explanation of undefined moments
Mean
If a probability distribution has a density function f(x), then the mean is
∫_{−∞}^{∞} x f(x) dx.   (1)
The question is now whether this is the same thing as
∫_{0}^{∞} x f(x) dx + ∫_{−∞}^{0} x f(x) dx.   (2)
If at most one of the two terms in (2) is infinite, then (1) is the same as (2). But in the case of the Cauchy distribution, both the positive and negative terms of (2) are infinite. This means (2) is undefined. Moreover, if (1) is construed as a Lebesgue integral, then (1) is also undefined, since (1) is then defined simply as the difference (2) between positive and negative parts.
However, if (1) is construed as an improper integral rather than a Lebesgue integral, then (2) is undefined, and (1) is not necessarily well-defined. We may take (1) to mean
lim_{a→∞} ∫_{−a}^{a} x f(x) dx,
and this is its Cauchy principal value, which is zero; but we could also take (1) to mean, for example,
lim_{a→∞} ∫_{−2a}^{a} x f(x) dx,
which is not zero, as can be seen easily by computing the integral.
Various results in probability theory about expected values, such as the strong law of large numbers, will not work in such cases.
Second moment
Without a defined mean, it is impossible to consider the variance or standard deviation of a standard Cauchy distribution, as these are defined with respect to the mean. But the second moment about zero can be considered. It turns out to be infinite:
E[X²] = (1/π) ∫_{−∞}^{∞} x² / (1 + x²) dx = ∞.
Estimation of parameters
Since the mean and variance of the Cauchy distribution are not defined, attempts to estimate these parameters will not be successful. For example, if N samples are taken from a Cauchy distribution, one may calculate the sample mean as
x̄ = (1/N) Σ_{i=1}^{N} x_i.
Although the sample values x_i will be concentrated about the central value x₀, the sample mean will become increasingly variable as more samples are taken, due to the increased likelihood of encountering sample points with a large absolute value. In fact, the distribution of the sample mean will be equal to the distribution of the samples themselves; i.e., the sample mean of a large sample is no better (or worse) an estimator of x₀ than any single observation from the sample. Similarly, calculating the sample variance will result in values that grow larger as more samples are taken.
Therefore, more robust means of estimating the central value x₀ and the scaling parameter γ are needed. One simple method is to take the median value of the sample as an estimator of x₀ and half the sample interquartile range as an estimator of γ. Other, more precise and robust methods have been developed.[3] For example, the truncated mean of the middle 24% of the sample order statistics produces an estimate for x₀ that is more efficient than using either the sample median or the full sample mean.[4][5] However, because of the fat tails of the Cauchy distribution, the efficiency of the estimator decreases if more than 24% of the sample is used.[4][5]
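A minimal sketch of the simple median/IQR estimators just described (the function name is ours):

    import statistics

    def cauchy_robust_estimates(samples):
        # The median estimates the location x0; half the interquartile range estimates gamma.
        x0 = statistics.median(samples)
        q1, _, q3 = statistics.quantiles(samples, n=4)
        gamma = (q3 - q1) / 2.0
        return x0, gamma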
Maximum likelihood can also be used to estimate the parameters x₀ and γ. However, this tends to be complicated by the fact that it requires finding the roots of a high-degree polynomial, and there can be multiple roots that represent local maxima.[6] Also, while the maximum likelihood estimator is asymptotically efficient, it is relatively inefficient for small samples.[7] The log-likelihood function for the Cauchy distribution for sample size n is
ℓ(x₀, γ) = −n ln(γπ) − Σ_{i=1}^{n} ln(1 + ((x_i − x₀)/γ)²).
Maximizing the log-likelihood function with respect to x₀ and γ produces the following system of equations:
Σ_{i=1}^{n} (x_i − x₀) / (γ² + (x_i − x₀)²) = 0,
Σ_{i=1}^{n} (x_i − x₀)² / (γ² + (x_i − x₀)²) = n/2.
Solving just for x₀ requires solving a polynomial of degree 2n − 1,[6] and solving just for γ requires solving a polynomial of degree 2n (first for γ², then x₀). It is also worthwhile to note that the left-hand side of the second equation is a monotone function of γ, and that the solution γ must satisfy
min_i |x_i − x₀| ≤ γ ≤ max_i |x_i − x₀|.
Therefore, whether solving for one parameter or for both parameters simultaneously, a numerical solution on a computer is typically required. The benefit of maximum likelihood estimation is asymptotic efficiency; estimating x₀ using the sample median is only about 81% as asymptotically efficient as estimating x₀ by maximum likelihood.[5][8] The truncated sample mean using the middle 24% order statistics is about 88% as asymptotically efficient an estimator of x₀ as the maximum likelihood estimate.[5] When Newton's method is used to find the solution for the maximum likelihood estimate, the middle 24% order statistics can be used as an initial solution for x₀.
Multivariate Cauchy distribution
A random vector X = (X₁, …, X_k)′ is said to have the multivariate Cauchy distribution if every linear combination of its components, Y = a₁X₁ + … + a_k X_k, has a Cauchy distribution. That is, for any constant vector a ∈ R^k, the random variable Y = a′X should have a univariate Cauchy distribution.[9] The characteristic function of a multivariate Cauchy distribution is given by
φ_X(t) = exp(i x₀(t) − γ(t)),
where x₀(t) and γ(t) are real functions, with x₀(t) a homogeneous function of degree one and γ(t) a positive homogeneous function of degree one.[9] More formally:[9]
x₀(a t) = a x₀(t) for all real a, and γ(a t) = a γ(t) for all a > 0.
An example of a bivariate Cauchy distribution can be given by[10]
f(x, y; x₀, y₀, γ) = (γ / 2π) [ (x − x₀)² + (y − y₀)² + γ² ]^(−3/2).
Note that in this example, even though there is no analogue to a covariance matrix, x and y are not statistically independent.[10]
Related distributions
• The ratio of two independent standard normal random variables is a standard Cauchy variable, a Cauchy(0, 1). Thus the Cauchy distribution is a ratio distribution.
• The standard Cauchy(0, 1) distribution arises as a special case of Student's t distribution with one degree of freedom.
• Relation to the stable distribution: if X ~ Stable(1, 0, γ, μ), then X ~ Cauchy(μ, γ).
Relativistic Breit–Wigner distribution
In nuclear and particle physics, the energy profile of a resonance is described by the relativistic Breit–Wigner distribution, while the Cauchy distribution is the (nonrelativistic) Breit–Wigner distribution.
References
[1] http://webphysics.davidson.edu/Projects/AnAntonelli/node5.html. Note that the intensity, which follows the Cauchy distribution, is the square of the amplitude.
[2] E. Hecht (1987). Optics (2nd ed.). Addison-Wesley. p. 603.
[3] Cane, Gwenda J. (1974). "Linear Estimation of Parameters of the Cauchy Distribution Based on Sample Quantiles" (http://www.jstor.org/stable/2285535). Journal of the American Statistical Association 69 (345).
[4] Rothenberg, Thomas J.; Fisher, Franklin M.; Tilanus, C.B. (1966). "A note on estimation from a Cauchy sample". Journal of the American Statistical Association 59 (306): 460–463.
[5] Bloch, Daniel (1966). "A note on the estimation of the location parameters of the Cauchy distribution" (http://www.jstor.org/pss/2282794). Journal of the American Statistical Association 61 (316).
[6] Ferguson, Thomas S. (1978). "Maximum Likelihood Estimates of the Parameters of the Cauchy Distribution for Samples of Size 3 and 4" (http://www.jstor.org/pss/2286549). Journal of the American Statistical Association 73 (361).
[7] Cohen Freue, Gabriella V. (2007). "The Pitman estimator of the Cauchy location parameter" (http://faculty.ksu.edu.sa/69424/USEPAP/Coushy dist.pdf). Journal of Statistical Planning and Inference 137.
[8] Barnett, V. D. (1966). "Order Statistics Estimators of the Location of the Cauchy Distribution" (http://www.jstor.org/pss/2283210). Journal of the American Statistical Association 61 (316).
[9] Ferguson, Thomas S. (1962). "A Representation of the Symmetric Bivariate Cauchy Distribution" (http://www.jstor.org/pss/2237984). Journal of the American Statistical Association.
[10] Molenberghs, Geert; Lesaffre, Emmanuel (1997). "Non-linear Integral Equations to Approximate Bivariate Densities with Given Marginals and Dependence Function" (http://www3.stat.sinica.edu.tw/statistica/oldpdf/A7n310.pdf). Statistica Sinica 7.
External links
• Earliest Uses: The entry on Cauchy distribution has some historical information (http://jeff560.tripod.com/c.html).
• Weisstein, Eric W., "Cauchy Distribution" (http://mathworld.wolfram.com/CauchyDistribution.html) from MathWorld.
• GNU Scientific Library – Reference Manual (http://www.gnu.org/software/gsl/manual/gslref.
Chi-square distribution
[Figure: probability density function.]
[Figure: cumulative distribution function.]
notation: χ²(k) or χ²_k
parameters: k ∈ N₁ (degrees of freedom)
support: x ∈ [0, +∞)
pdf: x^(k/2 − 1) e^(−x/2) / (2^(k/2) Γ(k/2))
cdf: γ(k/2, x/2) / Γ(k/2)
mean: k
mode: max{ k − 2, 0 }
variance: 2k
skewness: √(8/k)
excess kurtosis: 12 / k
mgf: (1 − 2t)^(−k/2) for t < ½
cf: (1 − 2it)^(−k/2) [1]
In probability theory and statistics, the chi-square distribution (also chi-squared or χ²-distribution) with k degrees of freedom is the distribution of a sum of the squares of k independent standard normal random variables. It is one of the most widely used probability distributions in inferential statistics, e.g. in hypothesis testing or in the construction of confidence intervals.[2][3][4][5]
The best-known situations in which the chi-square distribution is used are the common chi-square tests for goodness of fit of an observed distribution to a theoretical one, and of the independence of two criteria of classification of
qualitative data. Many other statistical tests also lead to a use of this distribution, like Friedman's analysis of variance by ranks.
The chi-square distribution is a special case of the gamma distribution.
Definition
If X₁, …, X_k are independent, standard normal random variables, then the sum of their squares,
Q = X₁² + ⋯ + X_k²,
is distributed according to the chi-square distribution with k degrees of freedom. This is usually denoted as
Q ~ χ²(k)   or   Q ~ χ²_k.
The chi-square distribution has one parameter: k — a positive integer that specifies the number of degrees of freedom (i.e. the number of X_i's).
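The definition translates directly into a simple (if not the most efficient) sampling routine; a minimal Python sketch (the function name is ours):

    import random

    def chi_square_sample(k):
        # Sum of squares of k independent standard normal variables.
        return sum(random.gauss(0.0, 1.0) ** 2 for _ in range(k))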
Characteristics
Further properties of the chi-square distribution can be found in the summary box above.
Probability density function
The probability density function (pdf) of the chi-square distribution is
f(x; k) = x^(k/2 − 1) e^(−x/2) / (2^(k/2) Γ(k/2))   for x ≥ 0 (and 0 otherwise),
where Γ(k/2) denotes the Gamma function, which has closed-form values at the half-integers.
For derivations of the pdf in the cases of one and two degrees of freedom, see Proofs related to chi-square distribution.
Cumulative distribution function
Its cumulative distribution function is
F(x; k) = γ(k/2, x/2) / Γ(k/2) = P(k/2, x/2),
where γ(k, z) is the lower incomplete Gamma function and P(k, z) is the regularized Gamma function.
In the special case of k = 2 this function has the simple form
F(x; 2) = 1 − e^(−x/2).
Tables of this distribution — usually in its cumulative form — are widely available, and the function is included in many spreadsheets and all statistical packages. For a closed-form approximation for the CDF, see under Noncentral chi-square distribution.
Additivity
It follows from the definition of the chi-square distribution that the sum of independent chi-square variables is also chi-square distributed. Specifically, if X₁, …, Xₙ are independent chi-square variables with k₁, …, kₙ degrees of freedom, respectively, then Y = X₁ + ⋯ + Xₙ is chi-square distributed with k₁ + ⋯ + kₙ degrees of freedom.
Information entropy
The information entropy is given by
h = k/2 + ln(2 Γ(k/2)) + (1 − k/2) ψ(k/2),
where ψ(x) is the digamma function.
Noncentral moments
The moments about zero of a chi-square distribution with k degrees of freedom are given by[6][7]
E[X^m] = k (k + 2) (k + 4) ⋯ (k + 2m − 2) = 2^m Γ(m + k/2) / Γ(k/2).
Cumulants
The cumulants are readily obtained by a (formal) power series expansion of the logarithm of the characteristic function:
κₙ = 2^(n−1) (n − 1)! k.
Asymptotic properties
By the central limit theorem, because the chi-square distribution is the sum of k independent random variables, it converges to a normal distribution for large k (k > 50 is "approximately normal").[8] Specifically, if X ~ χ²(k), then as k tends to infinity, the distribution of (X − k)/√(2k) tends to a standard normal distribution. However, convergence is slow, as the skewness is √(8/k) and the excess kurtosis is 12/k.
Other functions of the chi-square distribution converge more rapidly to a normal distribution. Some examples are:
• If X ~ χ²(k), then √(2X) is approximately normally distributed with mean √(2k − 1) and unit variance (result credited to R. A. Fisher).
• If X ~ χ²(k), then ∛(X/k) is approximately normally distributed with mean 1 − 2/(9k) and variance 2/(9k) (Wilson and Hilferty, 1931).
Related distributions
A chi-square variable with k degrees of freedom is defined as the sum of the squares of k independent standard normal random variables.
If Y is a k-dimensional Gaussian random vector with mean vector μ and rank-k covariance matrix C, then X = (Y − μ)^T C^(−1) (Y − μ) is chi-square distributed with k degrees of freedom.
The sum of squares of statistically independent unit-variance Gaussian variables which do not have mean zero yields a generalization of the chi-square distribution called the noncentral chi-square distribution.
If Y is a vector of k i.i.d. standard normal random variables and A is a k×k idempotent matrix with rank k − n, then the quadratic form Y^T A Y is chi-square distributed with k − n degrees of freedom.
The chi-square distribution is also naturally related to other distributions arising from the Gaussian. In particular,
• Y is F-distributed, Y ~ F(k₁, k₂), if Y = (X₁/k₁)/(X₂/k₂), where X₁ ~ χ²(k₁) and X₂ ~ χ²(k₂) are statistically independent.
• If X is chi-square distributed, then √X is chi distributed.
• If X₁ ~ χ²(k₁) and X₂ ~ χ²(k₂) are statistically independent, then X₁ + X₂ ~ χ²(k₁ + k₂). If X₁ and X₂ are not independent, then X₁ + X₂ is not chi-square distributed.
Generalizations
The chi-square distribution is obtained as the sum of the squares of k independent, zero-mean, unit-variance Gaussian random variables. Generalizations of this distribution can be obtained by summing the squares of other types of Gaussian random variables. Several such distributions are described below.
Chi-square distributions
Noncentral chi-square distribution
The noncentral chi-square distribution is obtained from the sum of the squares of independent Gaussian random variables having unit variance and nonzero means.
Generalized chi-square distribution
The generalized chi-square distribution is obtained from the quadratic form z′Az where z is a zero-mean Gaussian vector having an arbitrary covariance matrix, and A is an arbitrary matrix.
Gamma, exponential, and related distributions
The chi-square distribution X ~ χ²(k) is a special case of the gamma distribution, in that X ~ Γ(k/2, 2) (using the shape parameterization of the gamma distribution).
Because the exponential distribution is also a special case of the Gamma distribution, we also have that if X ~ χ²(2), then X ~ Exp(1/2) is an exponential distribution.
The Erlang distribution is also a special case of the Gamma distribution; thus we also have that if X ~ χ²(k) with even k, then X is Erlang distributed with shape parameter k/2 and scale parameter 1/2.
Applications
The chi-square distribution has numerous applications in inferential statistics, for instance in chi-square tests and in estimating variances. It enters the problem of estimating the mean of a normally distributed population and the problem of estimating the slope of a regression line via its role in Student's t-distribution. It enters all analysis of variance problems via its role in the F-distribution, which is the distribution of the ratio of two independent chi-squared random variables, each divided by its respective degrees of freedom.
Following are some of the most common situations in which the chi-square distribution arises from a Gaussian-distributed sample.
• If X₁, …, Xₙ are i.i.d. N(μ, σ²) random variables, then
Σ_{i=1}^{n} (X_i − X̄)² ~ σ² χ²_{n−1},
where X̄ = (1/n) Σ_{i=1}^{n} X_i.
• The table below shows probability distributions with names starting with chi for some statistics based on X_i ∼ Normal(μ_i, σ²_i), i = 1, ⋯, k, independent random variables:

Name                                 Statistic
chi-square distribution              Σ_{i=1}^{k} ((X_i − μ_i)/σ_i)²
noncentral chi-square distribution   Σ_{i=1}^{k} (X_i/σ_i)²
chi distribution                     √( Σ_{i=1}^{k} ((X_i − μ_i)/σ_i)² )
noncentral chi distribution          √( Σ_{i=1}^{k} (X_i/σ_i)² )
Table of χ² value vs P value
The P-value is the probability of observing a test statistic at least as extreme in a chi-square distribution. Accordingly, since the cumulative distribution function (CDF) for the appropriate degrees of freedom (df) gives the probability of having obtained a value less extreme than this point, subtracting the CDF value from 1 gives the P-value. The table below gives a number of P-values matching χ² for the first 10 degrees of freedom. A P-value of 0.05 or less is usually regarded as statistically significant.
Each row gives the χ² values[9] for the indicated degrees of freedom (df); the column headers give the corresponding P value (probability). P values from 0.95 down to 0.10 are regarded as nonsignificant; P values of 0.05 and below, as significant.

df    0.95    0.90    0.80    0.70    0.50    0.30    0.20    0.10    0.05    0.01    0.001
 1    0.004   0.02    0.06    0.15    0.46    1.07    1.64    2.71    3.84    6.64   10.83
 2    0.10    0.21    0.45    0.71    1.39    2.41    3.22    4.60    5.99    9.21   13.82
 3    0.35    0.58    1.01    1.42    2.37    3.66    4.64    6.25    7.82   11.34   16.27
 4    0.71    1.06    1.65    2.20    3.36    4.88    5.99    7.78    9.49   13.28   18.47
 5    1.14    1.61    2.34    3.00    4.35    6.06    7.29    9.24   11.07   15.09   20.52
 6    1.63    2.20    3.07    3.83    5.35    7.23    8.56   10.64   12.59   16.81   22.46
 7    2.17    2.83    3.82    4.67    6.35    8.38    9.80   12.02   14.07   18.48   24.32
 8    2.73    3.49    4.59    5.53    7.34    9.52   11.03   13.36   15.51   20.09   26.12
 9    3.32    4.17    5.38    6.39    8.34   10.66   12.24   14.68   16.92   21.67   27.88
10    3.94    4.86    6.18    7.27    9.34   11.78   13.44   15.99   18.31   23.21   29.59
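As a rough cross-check of such table entries, the Wilson–Hilferty approximation described above can be used to compute approximate P-values; a minimal Python sketch (the function name is ours, and the result is only an approximation):

    import math

    def chi_square_p_value(x, k):
        # Wilson-Hilferty: (X/k)**(1/3) is approximately N(1 - 2/(9k), 2/(9k)).
        z = ((x / k) ** (1.0 / 3.0) - (1.0 - 2.0 / (9.0 * k))) / math.sqrt(2.0 / (9.0 * k))
        return 1.0 - 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

    # Example: chi_square_p_value(3.84, 1) is approximately 0.05, matching the df = 1 row.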
See also
• Fisher's method for combining independent tests of significance
• Normal distribution
• Wishart distribution
References
Footnotes
[1] M.A. Sanders. "Characteristic function of the central chi-square distribution". Retrieved 2009-03-06.
[2] Abramowitz, Milton; Stegun, Irene A., eds. (1965), "Chapter 26" (http://www.math.sfu.ca/~cbm/aands/page_940.htm), Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables, New York: Dover, pp. 940, MR0167642, ISBN 978-0-486-61272-0.
[3] NIST (2006). Engineering Statistics Handbook – Chi-Square Distribution (http://www.itl.nist.gov/div898/handbook/eda/section3/
[4] Johnson, N.L.; Kotz, S.; Balakrishnan, N. (1994). Continuous Univariate Distributions (Second Ed., Vol. 1, Chapter 18). John Wiley and Sons. ISBN 0-471-58495-9.
[5] Mood, Alexander; Graybill, Franklin A.; Boes, Duane C. (1974). Introduction to the Theory of Statistics (Third Edition, pp. 241–246). McGraw-Hill. ISBN 0-07-042864-6.
[6] Chi-square distribution (http://mathworld.wolfram.com/ChiSquaredDistribution.html), from MathWorld, retrieved Feb. 11, 2009.
[7] M. K. Simon, Probability Distributions Involving Gaussian Random Variables, New York: Springer, 2002, eq. (2.35), ISBN 978-0-387-34657-1.
[8] Box, Hunter and Hunter. Statistics for Experimenters. Wiley. p. 46.
[9] Chi-Square Test (http://www2.lv.psu.edu/jxm57/irp/chisquar.html) Table B.2. Dr. Jacqueline S. McLaughlin at The Pennsylvania State University. In turn citing: R.A. Fisher and F. Yates, Statistical Tables for Biological Agricultural and Medical Research, 6th ed., Table IV.
Notations
• Wilson, E. B.; Hilferty, M. M. (1931). "The distribution of chi-square". Proceedings of the National Academy of Sciences, Washington, 17, 684–688.
External links
• Earliest Uses of Some of the Words of Mathematics: entry on Chi square has a brief history (http://jeff560.tripod.com/c.html)
• Course notes on Chi-Square Goodness of Fit Testing (http://www.stat.yale.edu/Courses/1997-98/101/chigf.htm) from Yale University Stats 101 class.