
Formula Sheet

(Chapter 1)

• The sample mean is
$$\bar{y} = \frac{1}{n}\sum_{i=1}^{n} y_i.$$

• The sample variance is
$$s^2 = \frac{1}{n-1}\sum_{i=1}^{n}(y_i-\bar{y})^2 = \frac{1}{n-1}\left[\sum_{i=1}^{n} y_i^2 - \frac{1}{n}\left(\sum_{i=1}^{n} y_i\right)^2\right] = \frac{\sum_{i=1}^{n} y_i^2 - n\bar{y}^2}{n-1}.$$

• The range is

max(y1 , y2 , ..., yn ) − min(y1 , y2 , ..., yn )

• You can use the rule below to determine the location m of the quantile q(p) in the sorted list.

m = (n + 1)p

• IQR = q(0.75) − q(0.25)

• Suspected low outlier is any value that is less than the lower limit: LL = q(0.25) − 1.5(IQR)

Suspected high outlier is any value that is greater than the upper limit: UL = q(0.75) + 1.5(IQR)
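The quantile rule and the outlier limits above can be sketched in Python. This is a minimal illustration: the function names `quantile` and `iqr_limits`, and the linear interpolation between adjacent order statistics when m = (n+1)p is not an integer, are my assumptions, not prescribed by the sheet.

```python
def quantile(data, p):
    """q(p) via the rule m = (n+1)p: locate position m in the sorted list."""
    ys = sorted(data)
    n = len(ys)
    m = (n + 1) * p          # 1-based location in the sorted list
    k = int(m)               # integer part of m
    if k <= 0:
        return ys[0]
    if k >= n:
        return ys[-1]
    frac = m - k             # interpolate between adjacent order statistics (assumption)
    return ys[k - 1] + frac * (ys[k] - ys[k - 1])

def iqr_limits(data):
    """Lower and upper outlier limits: LL = q(0.25) - 1.5 IQR, UL = q(0.75) + 1.5 IQR."""
    q1, q3 = quantile(data, 0.25), quantile(data, 0.75)
    iqr = q3 - q1
    return q1 - 1.5 * iqr, q3 + 1.5 * iqr
```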

• The sample skewness is
$$g_1 = \frac{\frac{1}{n}\sum_{i=1}^{n}(y_i-\bar{y})^3}{\left[\frac{1}{n}\sum_{i=1}^{n}(y_i-\bar{y})^2\right]^{3/2}}$$

• The sample kurtosis is
$$g_2 = \frac{\frac{1}{n}\sum_{i=1}^{n}(y_i-\bar{y})^4}{\left[\frac{1}{n}\sum_{i=1}^{n}(y_i-\bar{y})^2\right]^{2}}$$
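The four sample summaries above (mean, variance, skewness, kurtosis) can be computed together from the central moments; a plain sketch, with `sample_stats` being an illustrative name:

```python
def sample_stats(ys):
    """Return (ybar, s^2, g1, g2) per the formula-sheet definitions."""
    n = len(ys)
    ybar = sum(ys) / n
    s2 = sum((y - ybar) ** 2 for y in ys) / (n - 1)   # sample variance (1/(n-1))
    m2 = sum((y - ybar) ** 2 for y in ys) / n         # central moments use 1/n
    m3 = sum((y - ybar) ** 3 for y in ys) / n
    m4 = sum((y - ybar) ** 4 for y in ys) / n
    g1 = m3 / m2 ** 1.5                               # sample skewness
    g2 = m4 / m2 ** 2                                 # sample kurtosis
    return ybar, s2, g1, g2
```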

• The sample correlation is
$$r = \frac{S_{xy}}{\sqrt{S_{xx}S_{yy}}}$$
where
$$S_{xx} = \sum_{i=1}^{n}(x_i-\bar{x})^2 = \sum_{i=1}^{n} x_i^2 - \frac{1}{n}\left(\sum_{i=1}^{n} x_i\right)^2$$
$$S_{xy} = \sum_{i=1}^{n}(x_i-\bar{x})(y_i-\bar{y}) = \sum_{i=1}^{n} x_i y_i - \frac{1}{n}\left(\sum_{i=1}^{n} x_i\right)\left(\sum_{i=1}^{n} y_i\right)$$
$$S_{yy} = \sum_{i=1}^{n}(y_i-\bar{y})^2 = \sum_{i=1}^{n} y_i^2 - \frac{1}{n}\left(\sum_{i=1}^{n} y_i\right)^2$$

• The empirical cumulative distribution function is
$$\hat{F}(y) = \frac{\text{number of values in } \{y_1, y_2, \ldots, y_n\} \text{ which are} \le y}{n}$$
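A direct translation of the definition (the helper name `ecdf` is mine):

```python
def ecdf(data, y):
    """F-hat(y) = (number of sample values <= y) / n."""
    return sum(1 for v in data if v <= y) / len(data)
```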
(Chapter 2)

• Likelihood function for θ is

L(θ) = L(θ; y) = P (Y = y; θ) for θ ∈ Ω

• Relative likelihood function is
$$R(\theta) = \frac{L(\theta)}{L(\hat{\theta})}, \quad \theta \in \Omega$$
with $0 \le R(\theta) \le 1$ for all $\theta \in \Omega$ and $R(\hat{\theta}) = 1$.

• Log-relative likelihood function is
$$r(\theta) = \log R(\theta) = \log \frac{L(\theta)}{L(\hat{\theta})} = \ell(\theta) - \ell(\hat{\theta}) \quad \text{for } \theta \in \Omega$$

• Likelihood function for independent experiments: Suppose we have two independent data sets y1
and y2 , corresponding to two independent random variables Y1 and Y2 . The likelihood function
for θ based on the data y1 and y2 is
$$L(\theta) = L_1(\theta) \times L_2(\theta), \quad \theta \in \Omega$$

• Likelihood function for multinomial models:
$$L(\theta) = \frac{n!}{y_1! y_2! \cdots y_k!} \prod_{i=1}^{k} \theta_i^{y_i} \propto \prod_{i=1}^{k} \theta_i^{y_i}$$
The corresponding log-likelihood function (constant term dropped) is
$$\ell(\theta) = \sum_{i=1}^{k} y_i \log \theta_i$$
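Since the log-likelihood drops the multinomial constant, comparing ℓ(θ) at two parameter vectors is a one-line sum; a sketch (the function name is mine), checked against the MLE θ̂j = yj/n, which must maximize ℓ:

```python
import math

def multinomial_loglik(theta, y):
    """Multinomial log-likelihood l(theta) = sum_j y_j * log(theta_j), constant dropped."""
    return sum(yj * math.log(tj) for yj, tj in zip(y, theta))
```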

• Invariance property of MLE: If θ̂ is the MLE of θ, then g(θ̂) is the MLE of g(θ) for a given
function g(·).
(Chapter 4)

• Suppose our assumed model Yi ∼ G(µ, σ), i = 1, 2, . . . , n is reasonable and the observed data are
y1 , y2 , . . . , yn . Then µ̂ = ȳ is the maximum likelihood estimate of µ, and
$$\tilde{\mu} = \bar{Y} = \frac{1}{n}\sum_{i=1}^{n} Y_i, \qquad \tilde{\sigma}^2 = \frac{1}{n}\sum_{i=1}^{n}(Y_i - \bar{Y})^2, \qquad s = \sqrt{\frac{1}{n-1}\sum_{i=1}^{n}(y_i - \bar{y})^2}$$

• For Poisson data with unknown mean θ, the point estimate of θ is $\hat{\theta} = \bar{y} = \frac{1}{n}\sum_{i=1}^{n} y_i$.

• Suppose θ is a scalar and that L(θ) = L(θ; y) is the likelihood function for θ based on the
observed data y . The relative likelihood function R(θ) is defined as

$$R(\theta) = R(\theta; y) = \frac{L(\theta)}{L(\hat{\theta})} \quad \text{for } \theta \in \Omega$$

• A 100p% likelihood interval for θ is the set {θ : R(θ) ≥ p} .

• Table 4.2
values of θ inside a 10% likelihood interval are plausible values of θ in light of the observed data.
values of θ inside a 50% likelihood interval are very plausible values of θ in light of the observed
data.
values of θ outside a 10% likelihood interval are implausible values of θ in light of the observed
data.
values of θ outside a 1% likelihood interval are very implausible values of θ in light of the observed
data.

• The log relative likelihood function is given by

r(θ) = log R(θ) = `(θ) − `(θ̂) for θ ∈ Ω

• For the Binomial distribution,
$$L(\theta) = \theta^y (1-\theta)^{n-y} \quad \text{for } 0 < \theta < 1,$$
and maximizing L(θ) we find θ̂ = y/n.

• The coverage probability for the interval estimator is

C(θ) = P (θ ∈ [L(Y ), U (Y )]) = P [L(Y ) ≤ θ ≤ U (Y )]

• A 100p% confidence interval for a parameter is an interval estimate [L(y), U (y)] for which

P (θ ∈ [L(Y ), U (Y )]) = P [L(Y ) ≤ θ ≤ U (Y )] = p.

• Suppose Y = (Y1 , . . . , Yn ) is a random sample from the G(µ, σ0 ) distribution. The pivotal
quantity is
$$Q = Q(Y; \mu) = \frac{\bar{Y} - \mu}{\sigma_0/\sqrt{n}} \sim G(0, 1)$$

• The 100p% confidence interval (Two sided confidence interval) for µ is of the form:

point estimate ± C× sd of the estimator.


• If Y ∼ Binomial(n, θ), then by the Central Limit Theorem
$$\frac{Y - n\theta}{\sqrt{n\theta(1-\theta)}}$$
has approximately a G(0, 1) distribution for large n.

• The approximate 100p% confidence interval for the Binomial parameter θ is given by
$$\hat{\theta} \pm C\sqrt{\frac{\hat{\theta}(1-\hat{\theta})}{n}}$$
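A sketch of this binomial interval, using the standard library's `statistics.NormalDist` to obtain the table value C (the function name `binomial_ci` is mine):

```python
import math
from statistics import NormalDist

def binomial_ci(y, n, p=0.95):
    """Approximate 100p% CI: theta-hat +/- C*sqrt(theta-hat(1-theta-hat)/n)."""
    c = NormalDist().inv_cdf((1 + p) / 2)   # C with P(Z <= C) = (1+p)/2
    theta_hat = y / n
    half = c * math.sqrt(theta_hat * (1 - theta_hat) / n)
    return theta_hat - half, theta_hat + half
```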

• An approximate 100p% confidence interval for a random sample from the Poisson(θ):
$$\hat{\theta} \pm C\sqrt{\frac{\hat{\theta}}{n}}$$

• An approximate 100p% confidence interval for a random sample from the Exponential(θ):
$$\hat{\theta} \pm C\,\frac{\hat{\theta}}{\sqrt{n}}$$

• A 100p% likelihood interval is an approximate 100q% confidence interval, where
$$q = 2P\!\left(Z \le \sqrt{-2\log p}\right) - 1$$
and Z ∼ N (0, 1).

• If a is a value such that
$$p = 2P(Z \le a) - 1 \quad \text{where } Z \sim N(0, 1),$$
then the likelihood interval
$$\{\theta : R(\theta) \ge e^{-a^2/2}\}$$
is an approximate 100p% confidence interval.
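Both conversions between likelihood intervals and confidence intervals can be checked numerically with the standard library's `statistics.NormalDist`; the function names are illustrative:

```python
import math
from statistics import NormalDist

Z = NormalDist()  # standard normal N(0, 1)

def conf_level_of_likelihood_interval(p):
    """q such that a 100p% likelihood interval is an approximate 100q% CI."""
    return 2 * Z.cdf(math.sqrt(-2 * math.log(p))) - 1

def likelihood_threshold(p_conf):
    """R(theta) cutoff e^(-a^2/2) whose likelihood interval is an approx. 100p% CI."""
    a = Z.inv_cdf((1 + p_conf) / 2)         # a with 2*P(Z <= a) - 1 = p
    return math.exp(-a * a / 2)
```

For example, a 10% likelihood interval turns out to be roughly a 97% confidence interval, and the familiar 95% confidence level corresponds to the cutoff R(θ) ≥ e^(−1.96²/2) ≈ 0.147.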

• The Gamma function is defined as
$$\Gamma(\alpha) = \int_0^{\infty} y^{\alpha-1} e^{-y}\, dy, \quad \alpha > 0$$
$$\Gamma(\alpha) = (\alpha - 1)\Gamma(\alpha - 1)$$
$$\Gamma(\alpha) = (\alpha - 1)! \quad \text{for } \alpha = 1, 2, \ldots$$
$$\Gamma(1/2) = \sqrt{\pi}$$

• The Chi-squared probability density function is
$$f(x; k) = \frac{1}{2^{k/2}\Gamma(k/2)}\, x^{(k/2)-1} e^{-x/2}$$
for x > 0 and k = 1, 2, . . .

• If X ∼ χ2 (k), then E(X) = k and Var(X) = 2k.


• Probabilities for the random variable W ∼ χ2 (1):
$$P(W \le w) = P(|Z| \le \sqrt{w}) = 2P(Z \le \sqrt{w}) - 1$$
where Z ∼ N (0, 1). Also
$$P(W > w) = P(|Z| > \sqrt{w}) = 2P(Z > \sqrt{w}) = 2[1 - P(Z \le \sqrt{w})].$$

• Probabilities for the random variable X ∼ χ2 (2)

P (X ≤ x) = 1 − e−x/2

P (X > x) = 1 − P (X ≤ x) = e−x/2
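The χ²(2) case has the closed form above, which is easy to verify directly (the helper name is illustrative):

```python
import math

def chi2_2_cdf(x):
    """P(X <= x) for X ~ chi-squared(2), via the closed form 1 - e^(-x/2)."""
    return 1 - math.exp(-x / 2)
```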

• The likelihood ratio statistic is
$$\Lambda = \Lambda(\theta; \tilde{\theta}) = -2\log\left[\frac{L(\theta)}{L(\tilde{\theta})}\right] = 2\ell(\tilde{\theta}) - 2\ell(\theta)$$

• A 100p% confidence interval for µ of a Gaussian population with known σ has the form:
point estimate ± C (table value) × sd(estimator).
We find the value C in the Normal table such that
$$P(-C \le Z \le C) = p \quad \text{or} \quad P(Z \le C) = \frac{1+p}{2}$$
where Z ∼ G(0, 1).
Such an interval is often called a "two-sided, equal-tailed" confidence interval.
For a 90% confidence interval, C = 1.645.
For a 95% confidence interval, C = 1.960.
For a 99% confidence interval, C = 2.576.


• Suppose Y1 , . . . , Yn is a random sample from a G(µ, σ) distribution with σ unknown. Then
$$\frac{\bar{Y} - \mu}{S/\sqrt{n}} \sim t(n-1),$$
and the 100p% confidence interval for µ has the form
$$\bar{y} \pm C\,\frac{s}{\sqrt{n}}$$

• A 100p% confidence interval for σ is
$$\left[\sqrt{\frac{(n-1)s^2}{b}},\ \sqrt{\frac{(n-1)s^2}{a}}\right]$$

• To obtain the 100p% C.I. for σ we determine a and b such that
$$P(U \le a) = \frac{1-p}{2}, \qquad P(U \le b) = \frac{1+p}{2}$$
where U = (n − 1)S 2 /σ 2 ∼ χ2 (n − 1).

• A 100p% prediction interval for a future observation Y is given by
$$\left[\bar{y} - cs\sqrt{1 + \frac{1}{n}},\ \ \bar{y} + cs\sqrt{1 + \frac{1}{n}}\right]$$
(Chapter 5)

• P-value for testing H0 using a test statistic D:
$$p\text{-value} = P(D \ge d;\, H_0)$$
If Y ∼ G(µ, σ) with σ known, then under H0 : µ = µ0 ,
$$D = \frac{|\bar{Y} - \mu_0|}{\sigma/\sqrt{n}}.$$
If Y ∼ G(µ, σ) with σ unknown, then under H0 : µ = µ0 ,
$$D = \frac{|\bar{Y} - \mu_0|}{S/\sqrt{n}}.$$
If Y ∼ Binomial(n, θ), then under H0 : θ = θ0 ,
$$D = \frac{|Y - n\theta_0|}{\sqrt{n\theta_0(1-\theta_0)}}.$$

• Test of hypothesis for H0 : σ 2 = σ02 , using the pivotal quantity U = (n − 1)S 2 /σ02 ∼ χ2 (n − 1):
if P (U ≤ u) > 1/2, p-value = 2P (U ≥ u);
if P (U ≤ u) < 1/2, p-value = 2P (U ≤ u).

• Likelihood ratio test with distribution f (y; θ) for H0 : θ = θ0 :
The observed value of Λ(θ0 ) is denoted by λ(θ0 ) = −2 log R(θ0 ). Approximately, Λ(θ0 ) ∼ χ2 (1),
so the p-value for testing H0 : θ = θ0 is
$$p\text{-value} = P[\Lambda(\theta_0) \ge \lambda(\theta_0)] \approx P[W \ge \lambda(\theta_0)], \quad \text{where } W \sim \chi^2(1)$$

P-value                    Interpretation
P-value > 0.1              No evidence against H0 , based on the observed data.
0.05 < P-value ≤ 0.1       Weak evidence against H0 , based on the observed data.
0.01 < P-value ≤ 0.05      Evidence against H0 , based on the observed data.
0.001 < P-value ≤ 0.01     Strong evidence against H0 , based on the observed data.
P-value ≤ 0.001            Very strong evidence against H0 , based on the observed data.
(Chapter 6)

• Sample correlation:
$$r = \frac{S_{xy}}{\sqrt{S_{xx}S_{yy}}},$$
where $S_{xy} = \sum_{i=1}^{n} x_i y_i - n\bar{x}\bar{y}$ and $S_{xx} = \sum_{i=1}^{n} x_i^2 - n\bar{x}^2$.

• Gaussian response model:


Yi ∼ G(µ(xi ), σ), independently

where µ(xi ) = α + βxi .

• The least squares estimates for α and β:
$$\hat{\alpha} = \bar{y} - \hat{\beta}\bar{x}, \qquad \hat{\beta} = S_{xy}/S_{xx}.$$
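A direct implementation of these two estimates (the function name is illustrative):

```python
def least_squares(xs, ys):
    """Return (alpha-hat, beta-hat) with beta-hat = Sxy/Sxx, alpha-hat = ybar - beta-hat * xbar."""
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    sxx = sum((x - xbar) ** 2 for x in xs)
    sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    beta = sxy / sxx
    alpha = ybar - beta * xbar
    return alpha, beta
```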

• An unbiased estimator for σ 2 :
$$S_e^2 = \frac{1}{n-2}\left(S_{yy} - \tilde{\beta} S_{xy}\right)$$

• The sampling distributions for β̃ and Se2 :
$$\tilde{\beta} \sim G\!\left(\beta,\ \frac{\sigma}{\sqrt{S_{xx}}}\right) \qquad \text{and} \qquad \frac{(n-2)S_e^2}{\sigma^2} \sim \chi^2(n-2)$$
• 100p% confidence interval for β is given by
$$\hat{\beta} \pm a\, s_e/\sqrt{S_{xx}}$$
where P (T ≤ a) = (1 + p)/2 and T ∼ t(n − 2).

• 100p% confidence interval for α is given by
$$\hat{\alpha} \pm a\, s_e \sqrt{\frac{1}{n} + \frac{\bar{x}^2}{S_{xx}}}$$

• The hypothesis test for β = β0 :
$$T = \frac{\tilde{\beta} - \beta_0}{S_e/\sqrt{S_{xx}}} \sim t(n-2)$$
$$p\text{-value} = 2\left[1 - P\!\left(T \le \left|\frac{\hat{\beta} - \beta_0}{s_e/\sqrt{S_{xx}}}\right|\right)\right]$$

• Confidence interval for µ(x):
$$\frac{\tilde{\mu}(x) - \mu(x)}{S_e\sqrt{\frac{1}{n} + \frac{(x-\bar{x})^2}{S_{xx}}}} \sim t(n-2)$$
A 100p% confidence interval for µ(x) is given by:
$$\tilde{\mu}(x) \pm a\, s_e \sqrt{\frac{1}{n} + \frac{(x-\bar{x})^2}{S_{xx}}}$$
where P (−a ≤ T ≤ a) = p and T ∼ t(n − 2).


• Prediction interval for Y :
$$\frac{Y - \tilde{\mu}(x)}{S_e\sqrt{1 + \frac{1}{n} + \frac{(x-\bar{x})^2}{S_{xx}}}} \sim t(n-2)$$
A 100p% prediction interval for Y is given by:
$$\tilde{\mu}(x) \pm a\, s_e \sqrt{1 + \frac{1}{n} + \frac{(x-\bar{x})^2}{S_{xx}}}$$
where P (−a ≤ T ≤ a) = p and T ∼ t(n − 2).

• Residuals: $\hat{r}_i = y_i - \hat{\mu}_i$, where $\hat{\mu}_i = \hat{\alpha} + \hat{\beta}x_i$; standardized residuals: $\hat{r}_i^* = \hat{r}_i/s_e$.

• Two Gaussian populations with common variance:
$$Y_{1i} \sim G(\mu_1, \sigma), \qquad Y_{2i} \sim G(\mu_2, \sigma)$$
Maximum likelihood estimators for µ1 , µ2 and σ:
$$\tilde{\mu}_1 = \frac{1}{n_1}\sum_{i=1}^{n_1} Y_{1i} = \bar{Y}_1, \qquad \tilde{\mu}_2 = \frac{1}{n_2}\sum_{i=1}^{n_2} Y_{2i} = \bar{Y}_2, \qquad \tilde{\sigma}^2 = \frac{1}{n_1+n_2}\sum_{j=1}^{2}\sum_{i=1}^{n_j} (Y_{ji} - \tilde{\mu}_j)^2.$$
An estimator of the variance σ 2 (pooled estimator of variance):
$$S_p^2 = \frac{(n_1-1)S_1^2 + (n_2-1)S_2^2}{n_1+n_2-2}$$
where $S_j^2 = \frac{1}{n_j-1}\sum_{i=1}^{n_j} (Y_{ji} - \bar{Y}_j)^2$, j = 1, 2, and
$$T = \frac{(\bar{Y}_1 - \bar{Y}_2) - (\mu_1 - \mu_2)}{S_p\sqrt{\frac{1}{n_1} + \frac{1}{n_2}}} \sim t(n_1 + n_2 - 2)$$

• A 100p% confidence interval for µ1 − µ2 is
$$\bar{y}_1 - \bar{y}_2 \pm a\, s_p \sqrt{\frac{1}{n_1} + \frac{1}{n_2}}$$
where P (−a ≤ T ≤ a) = p and T ∼ t(n1 + n2 − 2).
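The pooled two-sample statistic can be sketched as follows (the name `pooled_t` is hypothetical; the function returns the signed T, so take |T| when used as the test statistic D):

```python
import math

def pooled_t(sample1, sample2):
    """T = (ybar1 - ybar2) / (Sp * sqrt(1/n1 + 1/n2)) under H0: mu1 = mu2."""
    n1, n2 = len(sample1), len(sample2)
    m1, m2 = sum(sample1) / n1, sum(sample2) / n2
    s1 = sum((y - m1) ** 2 for y in sample1) / (n1 - 1)   # S1^2
    s2 = sum((y - m2) ** 2 for y in sample2) / (n2 - 1)   # S2^2
    sp2 = ((n1 - 1) * s1 + (n2 - 1) * s2) / (n1 + n2 - 2)  # pooled variance
    return (m1 - m2) / math.sqrt(sp2 * (1 / n1 + 1 / n2))
```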

• P-value for testing H0 : µ1 = µ2 with D = |T |:

P − value = P (D ≥ d, H0 )

• Two Gaussian populations with unequal variances:
$$Y_{1i} \sim G(\mu_1, \sigma_1), \qquad Y_{2i} \sim G(\mu_2, \sigma_2)$$
$$T = \frac{(\bar{Y}_1 - \bar{Y}_2) - (\mu_1 - \mu_2)}{\sqrt{\frac{s_1^2}{n_1} + \frac{s_2^2}{n_2}}} \sim G(0, 1) \quad \text{approximately}$$

• A 95% confidence interval for µ1 − µ2 :
$$\bar{y}_1 - \bar{y}_2 \pm 1.96\sqrt{\frac{s_1^2}{n_1} + \frac{s_2^2}{n_2}}.$$

• P-value for testing H0 : µ1 = µ2 with D = |T |:

P − value = P (D ≥ d, H0 )
• Comparing means using paired data:
$$Y_{1i} - Y_{2i} \sim N(\mu_1 - \mu_2, \sigma^2)$$
where σ 2 = Var(Y1i ) + Var(Y2i ) − 2Cov(Y1i , Y2i ).

• Hypothesis test for H0 : µ1 = µ2 in paired data: with Yi = Y1i − Y2i ∼ G(µ1 − µ2 , σ),
$$D = \frac{|\bar{Y} - 0|}{S/\sqrt{n}}, \qquad p\text{-value} = 2[1 - P(T \le d)]$$
where T ∼ t(n − 1), n = n1 = n2 , and S 2 is the variance estimator for the Yi .
(Chapter 7)

• Likelihood ratio test for the multinomial model:
$$f(y_1, \ldots, y_k; \theta_1, \ldots, \theta_k) = \frac{n!}{y_1! \cdots y_k!}\, \theta_1^{y_1} \cdots \theta_k^{y_k},$$
where $y_j = 0, 1, \ldots$ and $\sum_{j=1}^{k} y_j = n$, with $\theta = (\theta_1, \ldots, \theta_k)$.

• Likelihood function for the multinomial model:
$$L(\theta_1, \ldots, \theta_k) = \prod_{j=1}^{k} \theta_j^{y_j}$$

• The maximum likelihood estimates are
$$\hat{\theta}_j = \frac{y_j}{n}, \quad j = 1, \ldots, k$$

• Likelihood ratio test statistic for the multinomial model:
$$\Lambda = -2\log\left[\prod_{j=1}^{k}\left(\frac{E_j}{Y_j}\right)^{Y_j}\right] = 2\sum_{j=1}^{k} Y_j \log\left(\frac{Y_j}{E_j}\right) \sim \chi^2(k-1-p)$$
$$p\text{-value} = P(\Lambda \ge \lambda;\, H_0) \approx P(W \ge \lambda)$$
with W ∼ χ2 (k − 1 − p).
• Goodness of fit test statistic:
$$D = \sum_{j=1}^{k} \frac{(Y_j - E_j)^2}{E_j}$$
$$p\text{-value} = P(D \ge d;\, H_0) \approx P(W \ge d)$$
with W ∼ χ2 (k − 1 − p).
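Both test statistics can be computed side by side; a sketch (the function name is mine, and the likelihood-ratio sum skips cells with y_j = 0, where the term y_j log(y_j/e_j) is taken as 0):

```python
import math

def gof_statistics(observed, expected):
    """Return (lambda, d): likelihood-ratio 2*sum y_j*log(y_j/e_j) and Pearson sum (y_j-e_j)^2/e_j."""
    lam = 2 * sum(y * math.log(y / e) for y, e in zip(observed, expected) if y > 0)
    d = sum((y - e) ** 2 / e for y, e in zip(observed, expected))
    return lam, d
```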
• Two-way (contingency) tables. To test the independence of the A and B classifications:

A\B      B1     B2     ...    Bb     Total
A1       y11    y12    ...    y1b    r1
A2       y21    y22    ...    y2b    r2
...      ...    ...    ...    ...    ...
Aa       ya1    ...    ...    yab    ra
Total    c1     c2     ...    cb     n

with $\sum_{i=1}^{a}\sum_{j=1}^{b} \theta_{ij} = 1$.
Null hypothesis: H0 : θij = αi βj .
Expected counts: eij = ri cj /n. The observed value of the likelihood ratio statistic is
$$\lambda = 2\sum_{i=1}^{a}\sum_{j=1}^{b} y_{ij} \log\left(\frac{y_{ij}}{e_{ij}}\right)$$
$$p\text{-value} = P(\Lambda \ge \lambda;\, H_0) \approx P(W \ge \lambda) \quad \text{where } W \sim \chi^2((a-1)(b-1)).$$


• Testing equality of multinomial parameters for two or more groups:
$$\theta_{i1} + \theta_{i2} + \cdots + \theta_{ib} = 1, \quad i = 1, 2, \ldots, a.$$
Null hypothesis: H0 : θ1 = θ2 = · · · = θa , where θi = (θi1 , θi2 , . . . , θib ).
$$n = n_1 + n_2 + \cdots + n_a, \qquad n_i = y_{i+} = \sum_{j=1}^{b} y_{ij}, \qquad y_{+j} = \sum_{i=1}^{a} y_{ij}$$
$$e_{ij} = n_i\, \frac{y_{+j}}{n} \quad \text{for } i = 1, \ldots, a,\ j = 1, \ldots, b.$$
Distributions: p.f./p.d.f., Mean, Variance, m.g.f.

Discrete (p.f.):

Binomial(n, p), 0 ≤ p ≤ 1, q = 1 − p:
  f(y) = C(n, y) p^y q^(n−y), y = 0, 1, 2, . . . , n
  Mean: np;  Variance: npq;  m.g.f.: (pe^t + q)^n

Bernoulli(p), 0 ≤ p ≤ 1, q = 1 − p:
  f(y) = p^y (1 − p)^(1−y), y = 0, 1
  Mean: p;  Variance: p(1 − p);  m.g.f.: pe^t + q

Negative Binomial(k, p), 0 ≤ p ≤ 1, q = 1 − p:
  f(y) = C(y + k − 1, y) p^k q^y, y = 0, 1, 2, . . .
  Mean: kq/p;  Variance: kq/p²;  m.g.f.: [p/(1 − qe^t)]^k, t < − ln q

Geometric(p), 0 ≤ p ≤ 1, q = 1 − p:
  f(y) = p q^y, y = 0, 1, 2, . . .
  Mean: q/p;  Variance: q/p²;  m.g.f.: p/(1 − qe^t), t < − ln q

Hypergeometric(N, r, n), r ≤ N, n ≤ N:
  f(y) = C(r, y) C(N − r, n − y) / C(N, n), y = 0, 1, 2, . . . , min(r, n)
  Mean: nr/N;  Variance: n (r/N)(1 − r/N)(N − n)/(N − 1);  m.g.f.: intractable

Poisson(µ), µ ≥ 0:
  f(y) = e^(−µ) µ^y / y!, y = 0, 1, . . .
  Mean: µ;  Variance: µ;  m.g.f.: e^(µ(e^t − 1))

Multinomial(n, θ1 , . . . , θk ), θi ≥ 0, Σ θi = 1:
  f(y1 , . . . , yk ) = [n!/(y1 ! y2 ! · · · yk !)] θ1^(y1) θ2^(y2) · · · θk^(yk), yi = 0, 1, . . . ; Σ yi = n
  Mean: (nθ1 , . . . , nθk );  Variance: Var(Yi ) = nθi (1 − θi )

Continuous (p.d.f.):

Uniform(a, b):
  f(y) = 1/(b − a), a ≤ y ≤ b
  Mean: (a + b)/2;  Variance: (b − a)²/12;  m.g.f.: (e^(bt) − e^(at))/[(b − a)t], t ≠ 0

Exponential(θ), θ > 0:
  f(y) = (1/θ) e^(−y/θ), y > 0
  Mean: θ;  Variance: θ²;  m.g.f.: 1/(1 − θt), t < 1/θ

N(µ, σ²) or G(µ, σ), −∞ < µ < ∞, σ > 0:
  f(y) = [1/(σ√(2π))] e^(−(y−µ)²/(2σ²)), −∞ < y < ∞
  Mean: µ;  Variance: σ²;  m.g.f.: e^(µt + σ²t²/2)

Chi-squared(k), k > 0:
  f(y) = [1/(2^(k/2) Γ(k/2))] y^(k/2 − 1) e^(−y/2), y > 0, where Γ(a) = ∫₀^∞ x^(a−1) e^(−x) dx
  Mean: k;  Variance: 2k;  m.g.f.: (1 − 2t)^(−k/2), t < 1/2

Student t(k), k > 0:
  f(y) = c_k [1 + y²/k]^(−(k+1)/2), −∞ < y < ∞, where c_k = Γ((k+1)/2)/[√(kπ) Γ(k/2)]
  Mean: 0 if k > 1;  Variance: k/(k − 2) if k > 2;  m.g.f.: undefined
Probabilities for Standard Normal N(0,1) Distribution

[Figure: standard normal p.d.f.; the shaded area to the left of x is F(x).]

This table gives the values of F(x) for x ≥ 0


x 0.00 0.01 0.02 0.03 0.04 0.05 0.06 0.07 0.08 0.09
0.0 0.50000 0.50399 0.50798 0.51197 0.51595 0.51994 0.52392 0.52790 0.53188 0.53586
0.1 0.53983 0.54380 0.54776 0.55172 0.55567 0.55962 0.56356 0.56750 0.57142 0.57534
0.2 0.57926 0.58317 0.58706 0.59095 0.59484 0.59871 0.60257 0.60642 0.61026 0.61409
0.3 0.61791 0.62172 0.62552 0.62930 0.63307 0.63683 0.64058 0.64431 0.64803 0.65173
0.4 0.65542 0.65910 0.66276 0.66640 0.67003 0.67364 0.67724 0.68082 0.68439 0.68793
0.5 0.69146 0.69497 0.69847 0.70194 0.70540 0.70884 0.71226 0.71566 0.71904 0.72240
0.6 0.72575 0.72907 0.73237 0.73565 0.73891 0.74215 0.74537 0.74857 0.75175 0.75490
0.7 0.75804 0.76115 0.76424 0.76730 0.77035 0.77337 0.77637 0.77935 0.78230 0.78524
0.8 0.78814 0.79103 0.79389 0.79673 0.79955 0.80234 0.80511 0.80785 0.81057 0.81327
0.9 0.81594 0.81859 0.82121 0.82381 0.82639 0.82894 0.83147 0.83398 0.83646 0.83891
1.0 0.84134 0.84375 0.84614 0.84849 0.85083 0.85314 0.85543 0.85769 0.85993 0.86214
1.1 0.86433 0.86650 0.86864 0.87076 0.87286 0.87493 0.87698 0.87900 0.88100 0.88298
1.2 0.88493 0.88686 0.88877 0.89065 0.89251 0.89435 0.89617 0.89796 0.89973 0.90147
1.3 0.90320 0.90490 0.90658 0.90824 0.90988 0.91149 0.91309 0.91466 0.91621 0.91774
1.4 0.91924 0.92073 0.92220 0.92364 0.92507 0.92647 0.92785 0.92922 0.93056 0.93189
1.5 0.93319 0.93448 0.93574 0.93699 0.93822 0.93943 0.94062 0.94179 0.94295 0.94408
1.6 0.94520 0.94630 0.94738 0.94845 0.94950 0.95053 0.95154 0.95254 0.95352 0.95449
1.7 0.95543 0.95637 0.95728 0.95818 0.95907 0.95994 0.96080 0.96164 0.96246 0.96327
1.8 0.96407 0.96485 0.96562 0.96638 0.96712 0.96784 0.96856 0.96926 0.96995 0.97062
1.9 0.97128 0.97193 0.97257 0.97320 0.97381 0.97441 0.97500 0.97558 0.97615 0.97670
2.0 0.97725 0.97778 0.97831 0.97882 0.97932 0.97982 0.98030 0.98077 0.98124 0.98169
2.1 0.98214 0.98257 0.98300 0.98341 0.98382 0.98422 0.98461 0.98500 0.98537 0.98574
2.2 0.98610 0.98645 0.98679 0.98713 0.98745 0.98778 0.98809 0.98840 0.98870 0.98899
2.3 0.98928 0.98956 0.98983 0.99010 0.99036 0.99061 0.99086 0.99111 0.99134 0.99158
2.4 0.99180 0.99202 0.99224 0.99245 0.99266 0.99286 0.99305 0.99324 0.99343 0.99361
2.5 0.99379 0.99396 0.99413 0.99430 0.99446 0.99461 0.99477 0.99492 0.99506 0.99520
2.6 0.99534 0.99547 0.99560 0.99573 0.99585 0.99598 0.99609 0.99621 0.99632 0.99643
2.7 0.99653 0.99664 0.99674 0.99683 0.99693 0.99702 0.99711 0.99720 0.99728 0.99736
2.8 0.99744 0.99752 0.99760 0.99767 0.99774 0.99781 0.99788 0.99795 0.99801 0.99807
2.9 0.99813 0.99819 0.99825 0.99831 0.99836 0.99841 0.99846 0.99851 0.99856 0.99861
3.0 0.99865 0.99869 0.99874 0.99878 0.99882 0.99886 0.99889 0.99893 0.99896 0.99900
3.1 0.99903 0.99906 0.99910 0.99913 0.99916 0.99918 0.99921 0.99924 0.99926 0.99929
3.2 0.99931 0.99934 0.99936 0.99938 0.99940 0.99942 0.99944 0.99946 0.99948 0.99950
3.3 0.99952 0.99953 0.99955 0.99957 0.99958 0.99960 0.99961 0.99962 0.99964 0.99965
3.4 0.99966 0.99968 0.99969 0.99970 0.99971 0.99972 0.99973 0.99974 0.99975 0.99976
3.5 0.99977 0.99978 0.99978 0.99979 0.99980 0.99981 0.99981 0.99982 0.99983 0.99983

This table gives the values of F -1(p) for p ≥ 0.50


p 0.00 0.01 0.02 0.03 0.04 0.05 0.06 0.07 0.08 0.09
0.5 0.0000 0.0251 0.0502 0.0753 0.1004 0.1257 0.1510 0.1764 0.2019 0.2275
0.6 0.2533 0.2793 0.3055 0.3319 0.3585 0.3853 0.4125 0.4399 0.4677 0.4959
0.7 0.5244 0.5534 0.5828 0.6128 0.6433 0.6745 0.7063 0.7388 0.7722 0.8064
0.8 0.8416 0.8779 0.9154 0.9542 0.9945 1.0364 1.0803 1.1264 1.1750 1.2265
0.9 1.2816 1.3408 1.4051 1.4758 1.5548 1.6449 1.7507 1.8808 2.0537 2.3263
CHI-SQUARED DISTRIBUTION QUANTILES

df\p 0.005 0.01 0.025 0.05 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 0.95 0.975 0.99 0.995
1 0.000 0.000 0.001 0.004 0.016 0.064 0.148 0.275 0.455 0.708 1.074 1.642 2.706 3.842 5.024 6.635 7.879
2 0.010 0.020 0.051 0.103 0.211 0.446 0.713 1.022 1.386 1.833 2.408 3.219 4.605 5.992 7.378 9.210 10.597
3 0.072 0.115 0.216 0.352 0.584 1.005 1.424 1.869 2.366 2.946 3.665 4.642 6.251 7.815 9.348 11.345 12.838
4 0.207 0.297 0.484 0.711 1.064 1.649 2.195 2.753 3.357 4.045 4.878 5.989 7.779 9.488 11.143 13.277 14.860
5 0.412 0.554 0.831 1.146 1.610 2.343 3.000 3.656 4.352 5.132 6.064 7.289 9.236 11.070 12.833 15.086 16.750
6 0.676 0.872 1.237 1.635 2.204 3.070 3.828 4.570 5.348 6.211 7.231 8.558 10.645 12.592 14.449 16.812 18.548
7 0.989 1.239 1.690 2.167 2.833 3.822 4.671 5.493 6.346 7.283 8.383 9.803 12.017 14.067 16.013 18.475 20.278
8 1.344 1.647 2.180 2.733 3.490 4.594 5.527 6.423 7.344 8.351 9.525 11.030 13.362 15.507 17.535 20.090 21.955
9 1.735 2.088 2.700 3.325 4.168 5.380 6.393 7.357 8.343 9.414 10.656 12.242 14.684 16.919 19.023 21.666 23.589
10 2.156 2.558 3.247 3.940 4.865 6.179 7.267 8.296 9.342 10.473 11.781 13.442 15.987 18.307 20.483 23.209 25.188
11 2.603 3.054 3.816 4.575 5.578 6.989 8.148 9.237 10.341 11.530 12.899 14.631 17.275 19.675 21.920 24.725 26.757
12 3.074 3.571 4.404 5.226 6.304 7.807 9.034 10.182 11.340 12.584 14.011 15.812 18.549 21.026 23.337 26.217 28.300
13 3.565 4.107 5.009 5.892 7.042 8.634 9.926 11.129 12.340 13.636 15.119 16.985 19.812 22.362 24.736 27.688 29.819
14 4.075 4.660 5.629 6.571 7.790 9.467 10.821 12.078 13.339 14.685 16.222 18.151 21.064 23.685 26.119 29.141 31.319
15 4.601 5.229 6.262 7.261 8.547 10.307 11.721 13.030 14.339 15.733 17.322 19.311 22.307 24.996 27.488 30.578 32.801
16 5.142 5.812 6.908 7.962 9.312 11.152 12.624 13.983 15.338 16.780 18.418 20.465 23.542 26.296 28.845 32.000 34.267
17 5.697 6.408 7.564 8.672 10.085 12.002 13.531 14.937 16.338 17.824 19.511 21.615 24.769 27.587 30.191 33.409 35.718
18 6.265 7.015 8.231 9.391 10.865 12.857 14.440 15.893 17.338 18.868 20.601 22.760 25.989 28.869 31.526 34.805 37.156
19 6.844 7.633 8.907 10.117 11.651 13.716 15.352 16.850 18.338 19.910 21.689 23.900 27.204 30.144 32.852 36.191 38.582
20 7.434 8.260 9.591 10.851 12.443 14.578 16.266 17.809 19.337 20.951 22.775 25.038 28.412 31.410 34.170 37.566 39.997
25 10.520 11.524 13.120 14.611 16.473 18.940 20.867 22.616 24.337 26.143 28.172 30.675 34.382 37.652 40.646 44.314 46.928
30 13.787 14.953 16.791 18.493 20.599 23.364 25.508 27.442 29.336 31.316 33.530 36.250 40.256 43.773 46.979 50.892 53.672
35 17.192 18.509 20.569 22.465 24.797 27.836 30.178 32.282 34.336 36.475 38.859 41.778 46.059 49.802 53.203 57.342 60.275
40 20.707 22.164 24.433 26.509 29.051 32.345 34.872 37.134 39.335 41.622 44.165 47.269 51.805 55.758 59.342 63.691 66.766
45 24.311 25.901 28.366 30.612 33.350 36.884 39.585 41.995 44.335 46.761 49.452 52.729 57.505 61.656 65.410 69.957 73.166
50 27.991 29.707 32.357 34.764 37.689 41.449 44.313 46.864 49.335 51.892 54.723 58.164 63.167 67.505 71.420 76.154 79.490
60 35.534 37.485 40.482 43.188 46.459 50.641 53.809 56.620 59.335 62.135 65.227 68.972 74.397 79.082 83.298 88.379 91.952
70 43.275 45.442 48.758 51.739 55.329 59.898 63.346 66.396 69.334 72.358 75.689 79.715 85.527 90.531 95.023 100.430 104.210
80 51.172 53.540 57.153 60.391 64.278 69.207 72.915 76.188 79.334 82.566 86.120 90.405 96.578 101.880 106.630 112.330 116.320
90 59.196 61.754 65.647 69.126 73.291 78.558 82.511 85.993 89.334 92.761 96.524 101.050 107.570 113.150 118.140 124.120 128.300
100 67.328 70.065 74.222 77.929 82.358 87.945 92.129 95.808 99.334 102.950 106.910 111.670 118.500 124.340 129.560 135.810 140.170
