
Formula Sheet for CPIT 603 (Quantitative Analysis)

PROBABILITY
Probability of any event: 0 ≤ P(event) ≤ 1

Permutations (ordered subsets of r elements from n different elements):
  nPr = n(n - 1)(n - 2) ... (n - r + 1) = n! / (n - r)!

Permutations of similar objects (n1 of one type, n2 of a second type, ..., among n = n1 + n2 + ... + nr elements):
  n! / (n1! n2! n3! ... nr!)

Combinations (subsets of size r from a set of n elements):
  nCr = C(n, r) = n! / [r! (n - r)!]

Addition rule: P(A or B) = P(A) + P(B) - P(A and B)
For mutually exclusive events: P(A or B) = P(A) + P(B)

Independent events:
  P(A and B) = P(A) P(B)
  P(A | B) = P(A)

Dependent events:
  P(A and B) = P(A) P(B given A)
  P(A and B and C) = P(A) P(B | A) P(C given A and B)

Conditional probability: P(A | B) = P(AB) / P(B), so P(AB) = P(A | B) P(B)

Bayes' theorem:
  P(A | B) = P(B | A) P(A) / [P(B | A) P(A) + P(B | A') P(A')]
  A, B = any two events; A' = complement of A

Markov's inequality: if X is a non-negative random variable with mean μ, then for any constant a > 0,
  P(X ≥ a) ≤ μ / a

Chebyshev's inequality: if X is a random variable with finite mean μ and variance σ², then for any constant a > 0,
  P(|X - μ| ≥ a) ≤ σ² / a²
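A minimal sketch of Bayes' theorem as stated above; the event names and every probability in the usage example are hypothetical.

```python
def bayes(p_b_given_a, p_a, p_b_given_not_a):
    """Return P(A|B) = P(B|A)P(A) / [P(B|A)P(A) + P(B|A')P(A')]."""
    numerator = p_b_given_a * p_a
    denominator = numerator + p_b_given_not_a * (1 - p_a)
    return numerator / denominator

# P(disease | positive test) with P(pos | disease) = 0.95,
# P(disease) = 0.01, P(pos | no disease) = 0.05 -- all made-up numbers
posterior = bayes(0.95, 0.01, 0.05)
```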

DECISION ANALYSIS
Criterion of realism:
  Weighted average = α(best in row) + (1 - α)(worst in row)
  For minimization: Weighted average = α(best in row) + (1 - α)(worst in row), where "best" is the lowest payoff

Expected Monetary Value:
  EMV(alternative) = Σ Xi P(Xi)
  Xi = payoff for the alternative in state of nature i
  P(Xi) = probability of achieving payoff Xi (i.e., probability of state of nature i)
  Σ = summation symbol
  EMV(alternative i) = (payoff of first state of nature)(probability of first state of nature)
    + (payoff of second state of nature)(probability of second state of nature)
    + ... + (payoff of last state of nature)(probability of last state of nature)

Expected Value with Perfect Information:
  EVwPI = Σ (best payoff in state of nature i)(probability of state of nature i)
  EVwPI = (best payoff for first state of nature)(probability of first state of nature)
    + (best payoff for second state of nature)(probability of second state of nature)
    + ... + (best payoff for last state of nature)(probability of last state of nature)

Expected Value of Perfect Information:
  EVPI = EVwPI - Best EMV

Expected Value of Sample Information (EVSI):
  EVSI = (EV with SI + cost) - (EV without SI)
  Efficiency of sample information = (EVSI / EVPI) × 100%

Utility assessment:
  Utility of other outcome = (p)(utility of best outcome, which is 1) + (1 - p)(utility of the worst outcome, which is 0)
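A sketch of the EMV and EVPI calculations above; the two-state payoff table and its probabilities are invented for illustration.

```python
def emv(payoffs, probs):
    """EMV(alternative) = sum of payoff_i * P(state i)."""
    return sum(x * p for x, p in zip(payoffs, probs))

probs = [0.5, 0.5]                                # probabilities of the states of nature
table = {"build": [200, -180], "wait": [0, 0]}    # hypothetical payoffs per state

emvs = {alt: emv(pay, probs) for alt, pay in table.items()}
best_emv = max(emvs.values())
# EVwPI: best payoff in each state, weighted by that state's probability
evwpi = sum(max(pay[i] for pay in table.values()) * probs[i]
            for i in range(len(probs)))
evpi = evwpi - best_emv                           # EVPI = EVwPI - best EMV
```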

REGRESSION MODELS
Simple linear regression:
  True model: Y = β0 + β1 X + ε        Estimated model: Ŷ = b0 + b1 X
  Y = dependent variable (response)
  X = independent variable (predictor or explanatory variable)
  β0 = intercept (value of Y when X = 0); b0 = estimate of β0, based on sample results
  β1 = slope of the regression line; b1 = estimate of β1, based on sample results
  ε = random error
  Ŷ = predicted value of Y
  Error = (actual value) - (predicted value): e = Y - Ŷ

  X̄ = ΣX / n = average (mean) of X values
  Ȳ = ΣY / n = average (mean) of Y values

  b1 = Σ(X - X̄)(Y - Ȳ) / Σ(X - X̄)²
  b0 = Ȳ - b1 X̄

Sums of squares:
  Sum of Squares Total: SST = Σ(Y - Ȳ)²
  Sum of Squares Error: SSE = Σe² = Σ(Y - Ŷ)²
  Sum of Squares Regression: SSR = Σ(Ŷ - Ȳ)²
  SST = SSR + SSE

Coefficient of determination: r² = SSR / SST = 1 - SSE / SST
Correlation coefficient: r = ±√(r²)
Mean squared error: s² = MSE = SSE / (n - k - 1)
Standard error of the estimate: s = √MSE
Generic linear model: Y = β0 + β1 X

F statistic:
  MSR = SSR / k        F = MSR / MSE
  k = number of independent variables in the model
  degrees of freedom for the numerator = df1 = k
  degrees of freedom for the denominator = df2 = n - k - 1

Hypothesis test:
  H0: β1 = 0        H1: β1 ≠ 0
  Reject H0 if Fcalculated > Fα,df1,df2, with df1 = k and df2 = n - k - 1
  p-value = P(F > calculated test statistic); reject H0 if p-value < α

Multiple regression:
  Y = β0 + β1X1 + β2X2 + ... + βkXk + ε
  Ŷ = b0 + b1X1 + b2X2 + ... + bkXk
  Y = dependent variable (response variable)
  Xi = ith independent variable (predictor or explanatory variable)
  β0 = intercept (value of Y when all Xi = 0); βi = coefficient of the ith independent variable
  k = number of independent variables; ε = random error
  b0 = sample intercept (an estimate of β0); bi = sample coefficient of the ith variable (an estimate of βi)

Adjusted r² = 1 - [SSE / (n - k - 1)] / [SST / (n - 1)]
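The least-squares formulas for b1, b0, and r² above can be sketched directly; the four data points are hypothetical (they lie exactly on Y = 2X, so r² comes out as 1).

```python
def fit_line(xs, ys):
    """Return (b0, b1, r2) from the least-squares formulas."""
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    b1 = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
          / sum((x - xbar) ** 2 for x in xs))
    b0 = ybar - b1 * xbar
    sst = sum((y - ybar) ** 2 for y in ys)
    sse = sum((y - (b0 + b1 * x)) ** 2 for x, y in zip(xs, ys))
    return b0, b1, 1 - sse / sst   # r2 = 1 - SSE/SST

b0, b1, r2 = fit_line([1, 2, 3, 4], [2, 4, 6, 8])
```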

FORECASTING
Mean Absolute Deviation: MAD = Σ|forecast error| / n
Mean Squared Error: MSE = Σ(error)² / n
Mean Absolute Percent Error: MAPE = [Σ|error / actual| / n] × 100%

Moving average forecast:
  F(t+1) = (sum of demands in previous n periods) / n = (Yt + Y(t-1) + ... + Y(t-n+1)) / n

Weighted moving average:
  F(t+1) = Σ(weight in period i)(actual value in period i) / Σ(weights)
         = (w1 Yt + w2 Y(t-1) + ... + wn Y(t-n+1)) / (w1 + w2 + ... + wn)

Exponential smoothing:
  F(t+1) = Ft + α(Yt - Ft)
  New forecast = last period's forecast + α(last period's actual demand - last period's forecast)

Exponential smoothing with trend:
  F(t+1) = FITt + α(Yt - FITt)
  T(t+1) = Tt + β(F(t+1) - FITt)
  FIT(t+1) = F(t+1) + T(t+1)

Trend projection:
  Ŷ = b0 + b1 X
  Ŷ = predicted value; b0 = intercept; b1 = slope of the line
  X = time period (i.e., X = 1, 2, 3, ..., n)

Multiple-factor model: Ŷ = a + b1X1 + b2X2 + b3X3 + b4X4

Tracking signal = RSFE / MAD = Σ(forecast error) / MAD
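A sketch of simple exponential smoothing, F(t+1) = Ft + α(Yt - Ft); the demand series, α = 0.3, and the seed forecast are all made-up inputs.

```python
def exp_smooth(demand, alpha, f0):
    """Return the forecast series [F1, F2, ...], seeded with F1 = f0."""
    forecasts = [f0]
    for y in demand:
        # new forecast = old forecast + alpha * (actual - old forecast)
        forecasts.append(forecasts[-1] + alpha * (y - forecasts[-1]))
    return forecasts

f = exp_smooth([110, 100, 120], alpha=0.3, f0=100)
# f[1] = 100 + 0.3*(110 - 100) = 103.0
```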

INVENTORY CONTROL MODELS
Average inventory level = Q / 2
Number of orders placed per year = (annual demand) / (number of units in each order) = D / Q
Annual ordering cost = (D/Q) Co = (number of orders placed per year)(ordering cost per order)
Annual holding cost = (Q/2) Ch = (average inventory)(carrying cost per unit per year)

Economic Order Quantity — set annual ordering cost equal to annual holding cost:
  (D/Q) Co = (Q/2) Ch
  EOQ = Q* = √(2DCo / Ch)

Total cost (TC) = ordering cost + holding cost:
  TC = (D/Q) Co + (Q/2) Ch

Cost of storing one unit of inventory for one year: Ch = IC, where C is the unit price or cost of an inventory item and I is the annual inventory holding charge as a percentage of unit price or cost. Then
  Q* = √(2DCo / IC)

ROP without safety stock:
  Reorder point (ROP) = (demand per day) × (lead time for a new order in days) = dL
  Inventory position = inventory on hand + inventory on order

Production Run Model (EOQ without the instantaneous-receipt assumption):
  Maximum inventory level = (total produced during the production run) - (total used during the production run)
    = (daily production rate)(number of days of production) - (daily demand)(number of days of production)
    = pt - dt = p(Q/p) - d(Q/p) = Q(1 - d/p)
  Total produced: Q = pt
  Average inventory = (Q/2)(1 - d/p)
  Annual holding cost = (Q/2)(1 - d/p) Ch
  Annual setup cost = (D/Q) Cs        Annual ordering cost = (D/Q) Co
  D = the annual demand in units; Q = number of pieces per order, or per production run
  Set annual holding cost equal to annual setup cost:
    (Q/2)(1 - d/p) Ch = (D/Q) Cs
    Q* = √( 2DCs / [Ch (1 - d/p)] )

Quantity Discount Model:
  EOQ = √(2DCo / IC)
  If EOQ < minimum quantity for a discount, adjust the order quantity to Q = minimum quantity for the discount.
  Total cost = material cost + ordering cost + holding cost:
    Total cost = DC + (D/Q) Co + (Q/2) Ch
  Holding cost per unit is based on cost, so Ch = IC, where I = holding cost as a percentage of the unit cost (C).

Safety Stock:
  ROP = average demand during lead time + safety stock
  Service level = 1 - probability of a stockout
  Probability of a stockout = 1 - service level

Safety Stock with the Normal Distribution:
  ROP = (average demand during lead time) + Z σdLT
  Z = number of standard deviations for a given service level
  σdLT = standard deviation of demand during the lead time
  Safety stock = Z σdLT

Demand is variable but lead time is constant:
  ROP = d̄L + Z σd √L
  d̄ = average daily demand; σd = standard deviation of daily demand; L = lead time in days

Demand is constant but lead time is variable:
  ROP = d L̄ + Z d σL
  L̄ = average lead time; σL = standard deviation of lead time; d = daily demand

Both demand and lead time are variable:
  ROP = d̄ L̄ + Z √(L̄ σd² + d̄² σL²)

Total Annual Holding Cost with Safety Stock = holding cost of regular inventory + holding cost of safety stock:
  THC = (Q/2) Ch + (SS) Ch

Marginal analysis:
  The expected marginal profit = P(MP)
  The expected marginal loss = (1 - P)(ML)
  The optimal decision rule — stock the additional unit if P(MP) ≥ (1 - P)ML:
    P(MP) ≥ ML - P(ML)
    P(MP) + P(ML) ≥ ML
    P(MP + ML) ≥ ML
    P ≥ ML / (ML + MP)
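A sketch of the EOQ and total-cost formulas above; D = 1000 units, Co = 10, and Ch = 0.5 are made-up inputs chosen so Q* comes out as a round number.

```python
from math import sqrt

def eoq(D, Co, Ch):
    """EOQ = sqrt(2 * D * Co / Ch)."""
    return sqrt(2 * D * Co / Ch)

def total_cost(Q, D, Co, Ch):
    """TC = ordering cost (D/Q)*Co + holding cost (Q/2)*Ch."""
    return (D / Q) * Co + (Q / 2) * Ch

Q_star = eoq(D=1000, Co=10, Ch=0.5)   # sqrt(40000) = 200
# At Q*, annual ordering cost equals annual holding cost (50 each)
```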

PROJECT MANAGEMENT
Expected activity time: t = (a + 4m + b) / 6        Variance = [(b - a) / 6]²
Earliest finish time = earliest start time + expected activity time: EF = ES + t
Earliest start = largest of the earliest finish times of the immediate predecessors: ES = largest EF of immediate predecessors
Latest start time = latest finish time - expected activity time: LS = LF - t
Latest finish time = smallest of the latest start times of the following activities: LF = smallest LS of following activities
Slack = LS - ES, or Slack = LF - EF
Project variance = sum of the variances of the activities on the critical path
Project standard deviation: σT = √(project variance)
Z = (due date - expected date of completion) / σT
Value of work completed = (percentage of work complete) × (total activity budget)
Activity difference = actual cost - value of work completed
Crash cost per time period = (crash cost - normal cost) / (normal time - crash time)
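The PERT three-point formulas above can be sketched as a small helper; the optimistic/most-likely/pessimistic estimates (a, m, b) in the example are hypothetical.

```python
def pert(a, m, b):
    """Return (expected activity time, activity variance) from PERT estimates."""
    t = (a + 4 * m + b) / 6        # t = (a + 4m + b) / 6
    var = ((b - a) / 6) ** 2       # variance = ((b - a) / 6)^2
    return t, var

t, var = pert(a=2, m=4, b=12)      # t = (2 + 16 + 12)/6 = 5.0
```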

WAITING LINES AND QUEUING THEORY MODELS

Single-Channel Model, Poisson Arrivals, Exponential Service Times (M/M/1):
  λ = mean number of arrivals per time period (arrival rate)
  μ = mean number of customers or units served per time period (service rate)
  Average number of customers or units in the system: L = λ / (μ - λ)
  Average time a customer spends in the system: W = 1 / (μ - λ)
  Average number of customers in the queue: Lq = λ² / [μ(μ - λ)]
  Average time a customer spends waiting in the queue: Wq = λ / [μ(μ - λ)]
  Utilization factor for the system (ρ, rho), the probability the service facility is being used: ρ = λ / μ
  Percent idle time, P0, or the probability no one is in the system: P0 = 1 - λ/μ
  Probability that the number of customers in the system is greater than k: Pn>k = (λ/μ)^(k+1)

Multichannel Model, Poisson Arrivals, Exponential Service Times (M/M/m):
  m = number of channels open; λ = average arrival rate; μ = average service rate at each channel
  Probability that there are zero customers in the system, for mμ > λ:
    P0 = 1 / [ Σ(n=0 to m-1) (1/n!)(λ/μ)^n + (1/m!)(λ/μ)^m · mμ/(mμ - λ) ]
  Average number of customers or units in the system:
    L = [ λμ (λ/μ)^m / ((m - 1)!(mμ - λ)²) ] P0 + λ/μ
  Average time a unit spends in the waiting line or being served (in the system):
    W = [ μ (λ/μ)^m / ((m - 1)!(mμ - λ)²) ] P0 + 1/μ = L / λ
  Average number of customers or units in line waiting for service: Lq = L - λ/μ
  Average time a customer spends waiting in the queue: Wq = W - 1/μ = Lq / λ
  Utilization rate: ρ = λ / (mμ)

Finite Population Model (M/M/1 with Finite Source):
  λ = mean arrival rate; μ = mean service rate; N = size of the population
  Probability that the system is empty:
    P0 = 1 / Σ(n=0 to N) [ N! / (N - n)! ] (λ/μ)^n
  Average length of the queue: Lq = N - [ (λ + μ)/λ ](1 - P0)
  Average number of customers (units) in the system: L = Lq + (1 - P0)
  Average waiting time in the queue: Wq = Lq / [ (N - L) λ ]
  Average time in the system: W = Wq + 1/μ
  Probability of n units in the system: Pn = [ N! / (N - n)! ] (λ/μ)^n P0, for n = 0, 1, ..., N

Constant Service Time Model (M/D/1):
  Average length of the queue: Lq = λ² / [2μ(μ - λ)]
  Average waiting time in the queue: Wq = λ / [2μ(μ - λ)]
  Average number of customers in the system: L = Lq + λ/μ
  Average time in the system: W = Wq + 1/μ

Little's Flow Equations:
  L = λW (or W = L/λ)
  Lq = λWq (or Wq = Lq/λ)
  Average time in system = average time in queue + average time receiving service: W = Wq + 1/μ

Cost analysis:
  Total service cost = (number of channels) × (cost per channel) = mCs
  m = number of channels; Cs = service cost (labor cost) of each channel
  Total waiting cost = (total time spent waiting by all arrivals) × (cost of waiting)
    = (number of arrivals) × (average wait per arrival) × Cw = (λW)Cw
  Total waiting cost (based on time in queue) = (λWq)Cw
  Total cost = total service cost + total waiting cost = mCs + λWCw
  Total cost (based on time in queue) = mCs + λWqCw
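A sketch of the M/M/1 formulas above; the rates λ = 2 and μ = 3 are invented (any λ < μ works). The result also illustrates Little's flow equations.

```python
def mm1(lam, mu):
    """Return (rho, L, W, Lq, Wq) for an M/M/1 queue with lam < mu."""
    rho = lam / mu                      # utilization factor
    L = lam / (mu - lam)                # average number in system
    W = 1 / (mu - lam)                  # average time in system
    Lq = lam ** 2 / (mu * (mu - lam))   # average number in queue
    Wq = lam / (mu * (mu - lam))        # average time waiting in queue
    return rho, L, W, Lq, Wq

rho, L, W, Lq, Wq = mm1(2, 3)
# Little's law: L = lam * W and Lq = lam * Wq
```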

MARKOV ANALYSIS
π(i) = vector of state probabilities for period i
  π = (π1, π2, π3, ..., πn)
  n = number of states
  π1, π2, ..., πn = probability of being in state 1, state 2, ..., state n
Pij = conditional probability of being in state j in the future given the current state i
Matrix of transition probabilities:
  P = | P11 P12 ... P1n |
      | P21 P22 ... P2n |
      | ...             |
      | Pm1 Pm2 ... Pmn |
For any period n we can compute the state probabilities for period n + 1:
  π(n + 1) = π(n) P
Equilibrium condition: π = πP
Fundamental matrix: F = (I - B)^(-1)
Inverse of a 2×2 matrix:
  P = | a b |        P^(-1) = |  d/r  -b/r |        r = ad - bc
      | c d |                 | -c/r   a/r |
Partition of the matrix for absorbing states:
  P = | I O |
      | A B |
  I = identity matrix; O = a matrix with all 0s
M = vector of the amounts of money in each of the nonabsorbing states:
  M = (M1, M2, M3, ..., Mn)
  n = number of nonabsorbing states
  M1 = amount in the first state or category; M2 = amount in the second state or category; ...; Mn = amount in the nth state or category
Computing lambda and the consistency index:
  CI = (λ - n) / (n - 1)
Consistency ratio:
  CR = CI / RI
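The iteration π(n+1) = π(n)P converging to the equilibrium π = πP can be sketched without any matrix library; the 2-state transition matrix is hypothetical.

```python
def next_state(pi, P):
    """One step of pi(n+1) = pi(n) P."""
    n = len(pi)
    return [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]

P = [[0.8, 0.2],
     [0.3, 0.7]]          # each row sums to 1
pi = [1.0, 0.0]           # start fully in state 1
for _ in range(200):      # iterate until the probabilities settle
    pi = next_state(pi, P)
# for this matrix the equilibrium is pi = (0.6, 0.4)
```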

STATISTICAL QUALITY CONTROL
x̄-charts:
  Upper control limit: UCL_x̄ = x̿ + z σ_x̄    or    UCL_x̄ = x̿ + A2 R̄
  Lower control limit: LCL_x̄ = x̿ - z σ_x̄    or    LCL_x̄ = x̿ - A2 R̄
  x̿ = mean of the sample means
  z = number of normal standard deviations (2 for 95.5% confidence, 3 for 99.7%)
  σ_x̄ = standard deviation of the sampling distribution of the sample means = σ_x / √n
  R̄ = average range of the samples
  A2 = mean factor

R-charts:
  UCL_R = D4 R̄        LCL_R = D3 R̄
  UCL_R, LCL_R = upper and lower control chart limits for the range
  D4 and D3 = upper-range and lower-range factors
  Range of the sample = Xmax - Xmin

p-charts:
  UCL_p = p̄ + z σ_p        LCL_p = p̄ - z σ_p
  p̄ = mean proportion or fraction defective in the sample
    = (total number of errors) / (total number of records examined)
  z = number of standard deviations
  σ_p = standard deviation of the sampling distribution; σ_p is estimated by σ̂_p
  Estimated standard deviation of a binomial distribution:
    σ̂_p = √[ p̄(1 - p̄) / n ], where n is the size of each sample

c-charts:
  The mean is c̄ and the standard deviation is equal to √c̄.
  To compute the control limits we use c̄ ± 3√c̄ (3 is used for 99.7% and 2 is used for 95.5%):
  UCL_c = c̄ + 3√c̄        LCL_c = c̄ - 3√c̄
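A sketch of the p-chart limits above; the defect proportion p̄ = 0.04 and sample size n = 100 are invented. The lower limit is clamped at zero, since a proportion cannot be negative (an assumption beyond the formula as stated).

```python
from math import sqrt

def p_chart_limits(p_bar, n, z=3):
    """Return (LCL, UCL) for a p-chart with z-sigma limits."""
    sigma_p = sqrt(p_bar * (1 - p_bar) / n)   # estimated binomial std. dev.
    ucl = p_bar + z * sigma_p
    lcl = max(0.0, p_bar - z * sigma_p)       # clamp: proportions are >= 0
    return lcl, ucl

lcl, ucl = p_chart_limits(p_bar=0.04, n=100)
```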

OTHERS

Dynamic programming:
  The input to one stage is also the output from another stage: s(n-1) = output from stage n
  The transformation function: tn = transformation function at stage n
  General formula to move from one stage to another using the transformation function: s(n-1) = tn(sn, dn)
  The total return at any stage: fn = total return at stage n
  Transformation function: s(n-1) = an sn + bn dn + cn
  Return equation: rn = an sn + bn dn + cn

Break-even analysis:
  Break-even point (units) = (fixed cost) / (price/unit - variable cost/unit)
  Probability of breaking even: Z = (break-even point - μ) / σ
  P(loss) = P(demand < break-even point)
  P(profit) = P(demand > break-even point)
  EMV = (mean demand)(price/unit - variable cost/unit) - fixed costs

Opportunity loss:
  Opportunity loss = K(break-even point - X) for X < break-even point; $0 for X ≥ break-even point
  Using the unit normal loss integral, EOL can be computed as
    EOL = KσN(D)
  where
    EOL = expected opportunity loss
    K = loss per unit when sales are below the break-even point
    X = sales in units
    σ = standard deviation of the distribution
    N(D) = value of the unit normal loss integral for a given value of D
    D = |break-even point - μ| / σ

Matrix operations:
  Row times column: [a b c] × [d; e; f] = ad + be + cf
  Column times row: [a; b; c] × [d e] = [ad ae; bd be; cd ce]
  Product of two 2×2 matrices: [a b; c d] × [e f; g h] = [ae + bg, af + bh; ce + dg, cf + dh]
  Determinant of C = [a b; c d]: value = (a)(d) - (c)(b)
  Determinant of [a b c; d e f; g h i]: value = aei + bfg + cdh - gec - hfa - idb
  Cramer's rule: X = (numerical value of numerator determinant) / (numerical value of denominator determinant)

Inverse of a 2×2 matrix:
  Original matrix: [a b; c d]
  Determinant value of the original matrix: ad - cb
  Matrix of cofactors: [d -c; -b a]
  Adjoint of the matrix (transpose of the matrix of cofactors): [d -b; -c a]
  Inverse = adjoint divided by the determinant:
    [a b; c d]^(-1) = [ d/(ad - cb)   -b/(ad - cb) ]
                      [ -c/(ad - cb)   a/(ad - cb) ]
c a
Equation for a line: Y = a + bX, where b is the slope of the line.
Given any two points (X1, Y1) and (X2, Y2):
  b = (change in Y) / (change in X) = ΔY / ΔX = (Y2 - Y1) / (X2 - X1)

For a nonlinear function such as Y = X² - 4X + 6, find the slope using two points and the same equation. For the general quadratic Y = aX² + bX + c:
  Y1 = aX² + bX + c
  Y2 = a(X + ΔX)² + b(X + ΔX) + c
  ΔY = Y2 - Y1 = 2aX(ΔX) + a(ΔX)² + b(ΔX)
  ΔY / ΔX = 2aX + a(ΔX) + b
  As ΔX approaches 0, the slope (derivative) is dY/dX = 2aX + b.

Derivative rules:
  Y = C (a constant)    →  Y' = 0
  Y = X^n               →  Y' = nX^(n-1)
  Y = cX^n              →  Y' = cnX^(n-1)
  Y = 1/X^n             →  Y' = -n/X^(n+1)
  Y = g(x) + h(x)       →  Y' = g'(x) + h'(x)
  Y = g(x) - h(x)       →  Y' = g'(x) - h'(x)
Total cost = (total ordering cost) + (total holding cost) + (total purchase cost):
  TC = (D/Q) Co + (Q/2) Ch + DC
  Q = order quantity
  D = annual demand
  Co = ordering cost per order
  Ch = holding cost per unit per year
  C = purchase (material) cost per unit

Economic order quantity by calculus — set the derivative of TC with respect to Q equal to zero:
  dTC/dQ = -DCo/Q² + Ch/2 = 0        →        Q = √(2DCo / Ch)
The second derivative is positive, so Q* is a minimum:
  d²TC/dQ² = 2DCo/Q³
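The calculus result above can be sanity-checked numerically: TC(Q) should be smallest at Q* = √(2DCo/Ch). The inputs D, Co, Ch, C are the same kind of hypothetical values used earlier.

```python
from math import sqrt

D, Co, Ch, C = 1000, 10, 0.5, 4.0
def tc(Q):
    # TC = (D/Q)Co + (Q/2)Ch + DC
    return (D / Q) * Co + (Q / 2) * Ch + D * C

q_star = sqrt(2 * D * Co / Ch)   # 200.0 for these inputs
# TC at Q* is no larger than at nearby order quantities
assert tc(q_star) <= tc(q_star - 1) and tc(q_star) <= tc(q_star + 1)
```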
DISCRETE DISTRIBUTIONS
General definitions — E(X) is the expected value; Xi = the random variable's possible values; P(Xi) = probability of each of the random variable's possible values:
  E(X) = μ = Σ(i=1 to n) Xi P(Xi)
  Variance = σ² = Σ(i=1 to n) [Xi - E(X)]² P(Xi)
  Standard deviation = σ = √Variance
  Cumulative distribution: F(x) = P(X ≤ x) = Σ f(xi) over all xi ≤ x

Uniform (discrete)
  When to use: equal probability; finite number of possible values.
  For a series of n values: f(x) = 1/n.
  For a range that starts from a and ends with b (a, a+1, a+2, ..., b) and a ≤ b:
    μ = (b + a)/2        Variance = σ² = [(b - a + 1)² - 1] / 12

Binomial (discrete)
  When to use: Bernoulli trials — each trial is independent, the probability of success in a trial is constant, and there are only two possible outcomes. Unknown: number of successes; known: number of trials. X = number of trials that result in a success.
  Probability of r successes in n trials: C(n, r) p^r q^(n-r) = [n! / (r!(n - r)!)] p^r q^(n-r), r = 0, 1, ..., n, 0 ≤ p ≤ 1, n = 1, 2, ...
  Expected value (mean): E(X) = μ = np        Variance = V(X) = σ² = np(1 - p)
  Binomial expansion: (a + b)^n = Σ(r=0 to n) C(n, r) a^r b^(n-r)
  Approximations: if n is large (np > 5, n(1 - p) > 5), approximate the binomial with the normal, using the continuity correction P(X ≤ x) = P(X ≤ x + 0.5) and P(x ≤ X) = P(x - 0.5 ≤ X). If n is large and p is small, approximate with the Poisson, using λ = np.

Geometric (discrete)
  When to use: Bernoulli trials; memoryless. X = number of trials until the first success.
  f(x) = (1 - p)^(x-1) p,  x = 1, 2, ..., 0 ≤ p ≤ 1
  Expected value (mean) = E(X) = μ = 1/p        Variance = V(X) = σ² = (1 - p)/p²

Negative binomial (discrete)
  When to use: unknown: number of trials; known: number of successes. X = number of trials required to obtain r successes.
  f(x) = C(x - 1, r - 1) (1 - p)^(x-r) p^r,  x = r, r+1, r+2, ..., 0 ≤ p ≤ 1
  E(X) = μ = r/p        Variance = σ² = r(1 - p)/p²

Hypergeometric (discrete)
  When to use: trials are not independent (sampling without replacement). X = number of successes in the sample. K objects classed as successes; N - K objects classified as failures; sample size of n objects.
  f(x) = C(K, x) C(N - K, n - x) / C(N, n),  x = max(0, n - N + K) to min(K, n), K ≤ N, n ≤ N
  E(X) = μ = np, where p = K/N
  σ² = np(1 - p)(N - n)/(N - 1)
  V(X) = V(X) of the binomial × (N - n)/(N - 1), where (N - n)/(N - 1) is called the finite population correction factor.
  If n << N or (n/N) < 0.1, the hypergeometric is approximately equal to the binomial. Approximated to the normal if np > 5, n(1 - p) > 5, and (n/N) < 0.1.

Poisson (discrete)
  When to use: Poisson process — the probability of more than one event in a subinterval is zero; the probability of one event in a subinterval is constant and proportional to the length of the subinterval; the event in each subinterval is independent. X = number of events in the interval.
  Conditions: the arrival rate does not change over time; the arrival pattern does not follow a regular pattern; arrivals in disjoint time intervals are independent.
  f(x) = P(X = x) = e^(-λ) λ^x / x!,  x = 0, 1, 2, ..., 0 < λ
  P(X) = probability of exactly X arrivals or occurrences
  Expected value = Variance = λ
  Approximated to the normal if λ > 5: Z = (X - λ)/√λ
  Taylor series: e^λ = Σ(k=0 to ∞) λ^k / k!

CONTINUOUS DISTRIBUTIONS
General definitions:
  For a continuous random variable: P(x1 ≤ X ≤ x2) = P(x1 < X ≤ x2) = P(x1 ≤ X < x2) = P(x1 < X < x2)
  E(X) = μ = ∫ x f(x) dx        σ² = ∫ (x - μ)² f(x) dx   (integrals over -∞ < x < ∞)
  Cumulative: F(x) = P(X ≤ x) = ∫(-∞ to x) f(u) du, for -∞ < x < ∞

Uniform (continuous)
  When to use: equal probability over a range that starts from a and ends with b, a ≤ b.
  f(x) = 1/(b - a), where a ≤ x ≤ b
  μ = (a + b)/2        Variance = V(X) = σ² = (b - a)²/12

Normal (continuous)
  Notation: N(μ, σ²); X is any random variable.
  f(x) = [1/(σ√(2π))] e^(-(x - μ)²/(2σ²)),  -∞ < x < ∞, -∞ < μ < ∞, 0 < σ
  E(X) = μ        V(X) = σ²
  Standard normal means mean μ = 0 and variance σ² = 1: Z = (X - μ)/σ
  Cumulative distribution of a standard normal variable: Φ(z) = P(Z ≤ z)
  P(X ≤ x) = P(Z ≤ (x - μ)/σ) = P(Z ≤ z)
  μ - σ < X < μ + σ covers ≈ 68%;  μ - 2σ < X < μ + 2σ ≈ 95%;  μ - 3σ < X < μ + 3σ ≈ 99.7%
  If n is large (np > 5, n(1 - p) > 5), the binomial is approximated by the normal:
    P(X ≤ x) = P(X ≤ x + 0.5) ≈ P( Z ≤ (x + 0.5 - np)/√(np(1 - p)) )
    P(x ≤ X) = P(x - 0.5 ≤ X) ≈ P( Z ≥ (x - 0.5 - np)/√(np(1 - p)) )
  Adding or subtracting 0.5 is called the continuity correction.
  The normal also approximates the Poisson if λ > 5: Z = (X - λ)/√λ

Exponential (continuous)
  When to use: memoryless — P(X > t1 + t2 | X > t1) = P(X > t2); the distance between successive events of a Poisson process with mean λ > 0; the length until the first count in a Poisson process.
  f(x) = λ e^(-λx), for 0 ≤ x < ∞
  Expected value = μ = 1/λ = average service time        Variance = σ² = 1/λ²
  The probability that an exponentially distributed time X required to serve a customer is less than or equal to time t is given by the formula P(X ≤ t) = 1 - e^(-λt)

Erlang (continuous)
  When to use: the length until r counts in a Poisson process; times between events are independent. r = shape; λ = scale.
  f(x) = λ^r x^(r-1) e^(-λx) / (r - 1)!,  for x > 0 and r = 1, 2, ...
  If r = 1, the Erlang random variable is an exponential random variable.
  For the mean and variance: the exponential's values multiplied by r give the Erlang's (μ = r/λ, σ² = r/λ²).
  Tail probability example: P(X > 0.1) = 1 - F(0.1)

Gamma (continuous)
  Gamma function: Γ(r) = ∫(0 to ∞) x^(r-1) e^(-x) dx, for r > 0
  f(x) = λ^r x^(r-1) e^(-λx) / Γ(r),  for x > 0, λ > 0, and r > 0, with Γ(r) = (r - 1)!, Γ(1) = 0! = 1, Γ(1/2) = √π
  For r an integer (r = 1, 2, ...), the gamma is the Erlang; the Erlang random variable is the time until the rth event in a Poisson process, and the times between events are independent.
  For λ = 1/2 and r = 1/2, 1, 3/2, 2, ..., the gamma is the chi-square.
  Mean: μ = r/λ        Variance: σ² = r/λ²
  E(X) and V(X) = E(X) and V(X) of the exponential distribution multiplied by r.

Weibull (continuous)
  When to use: time until failure of many different physical systems; includes a memory property (the failure rate can depend on age). δ = scale; β = shape.
  β = 1: Weibull is identical to the exponential. β = 2: Weibull is identical to the Rayleigh.
  f(x) = (β/δ)(x/δ)^(β-1) exp[ -(x/δ)^β ],  x > 0
  Cumulative: F(x) = 1 - e^(-(x/δ)^β)
  E(X) = μ = δ Γ(1 + 1/β), where Γ(r) = (r - 1)!
  σ² = δ² Γ(1 + 2/β) - δ² [Γ(1 + 1/β)]²

Lognormal (continuous)
  X = exp(W), where W is normally distributed with mean θ and variance ω²; ln(X) = W, so X is lognormal. Easier to understand than the Weibull; the Weibull can be approximated by a lognormal with suitable θ and ω.
  F(x) = P(X ≤ x) = P(exp(W) ≤ x) = P(W ≤ ln x) = P( Z ≤ (ln x - θ)/ω ), for x > 0;  F(x) = 0, for x ≤ 0
  f(x) = [1/(xω√(2π))] exp[ -(ln x - θ)²/(2ω²) ],  for 0 < x < ∞
  E(X) = e^(θ + ω²/2)        V(X) = e^(2θ + ω²)(e^(ω²) - 1)

Beta (continuous)
  When to use: flexible but bounded over a finite range.
  f(x) = [Γ(α + β) / (Γ(α)Γ(β))] x^(α-1) (1 - x)^(β-1),  for 0 ≤ x ≤ 1, α > 0, β > 0
  E(X) = μ = α/(α + β)        V(X) = σ² = αβ / [(α + β)²(α + β + 1)]
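A sketch of the exponential cumulative formula above, P(X ≤ t) = 1 - e^(-λt); the service rate λ = 0.5 per minute and the time t = 2 minutes are hypothetical.

```python
from math import exp

def exp_cdf(t, lam):
    """P(X <= t) = 1 - e^(-lam * t) for an exponential random variable."""
    return 1 - exp(-lam * t)

p = exp_cdf(t=2.0, lam=0.5)   # probability service finishes within 2 minutes
```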

Power Law (continuous)
  Called a heavy-tailed distribution: f(x) decreases rapidly with x, but not as rapidly as the exponential distribution.
  A random variable described by its minimum value xmin and a scale parameter α > 1 is said to obey the power-law distribution if its density function is given by
    f(x) = [(α - 1)/xmin] (x/xmin)^(-α)
  Normalize the function for a given set of parameters to ensure that ∫ f(x) dx = 1.

Central Limit Theorem

MULTIPLE RANDOM VARIABLES

Two or more discrete random variables:
  Marginals: fX(x) = P(X = x) = Σ(over y) fXY(x, y);  fY(y) = P(Y = y) = Σ(over x) fXY(x, y)
  Conditional: fY|x(y) = fXY(x, y) / fX(x)
  E(Y | x) = Σ(over y) y fY|x(y)        V(Y | x) = Σ(over y) (y - μY|x)² fY|x(y)
  Independence: fY|x(y) = fY(y) and fXY(x, y) = fX(x) fY(y) for all x and y
  Joint probability mass function: fX1X2...Xp(x1, x2, ..., xp) = P(X1 = x1, X2 = x2, ..., Xp = xp), for all points (x1, x2, ..., xp) in the range of X1, X2, ..., Xp
  Joint probability mass function for a subset: fX1X2...Xk(x1, x2, ..., xk) = P(X1 = x1, X2 = x2, ..., Xk = xk), summed over all points in the range of X1, X2, ..., Xp for which X1 = x1, X2 = x2, ..., Xk = xk
  Marginal probability mass function: fXi(xi) = P(Xi = xi) = Σ fX1X2...Xp(x1, x2, ..., xp)
  Mean: E(Xi) = μXi = Σ xi fX1X2...Xp(x1, x2, ..., xp)
  Variance: V(Xi) = σXi² = Σ (xi - μXi)² fX1X2...Xp(x1, x2, ..., xp)

Multinomial probability distribution:
  The random experiment that generates the probability distribution consists of a series of independent trials; however, the results from each trial can be categorized into one of k classes.
  P(X1 = x1, X2 = x2, ..., Xk = xk) = [n! / (x1! x2! ... xk!)] p1^x1 p2^x2 ... pk^xk,  for x1 + x2 + ... + xk = n and p1 + p2 + ... + pk = 1
  E(Xi) = npi        V(Xi) = npi(1 - pi)

Two or more continuous random variables:
  Marginal probability density functions: fX(x) = ∫ fXY(x, y) dy;  fY(y) = ∫ fXY(x, y) dx
  Conditional: fY|x(y) = fXY(x, y) / fX(x), for fX(x) > 0
  E(Y | x) = ∫ y fY|x(y) dy        V(Y | x) = ∫ (y - μY|x)² fY|x(y) dy
  Independence: fY|x(y) = fY(y); fX|y(x) = fX(x); fXY(x, y) = fX(x) fY(y) for all x and y
  Joint probability density function: P[(X1, X2, ..., Xp) ∈ B] = ∫∫...∫(over B) fX1X2...Xp(x1, x2, ..., xp) dx1 dx2 ... dxp
  Joint probability density for a subset: fX1X2...Xk(x1, x2, ..., xk) = ∫∫...∫ fX1X2...Xp(x1, x2, ..., xp) dx(k+1) ... dxp, over all points in the range of X1, X2, ..., Xp for which X1 = x1, X2 = x2, ..., Xk = xk
  Marginal probability density function: fXi(xi) = ∫∫...∫(over R) fX1X2...Xp(x1, x2, ..., xp) dx1 dx2 ... dx(i-1) dx(i+1) ... dxp, where the integral is over all points of X1, X2, ..., Xp for which Xi = xi
  Mean: E(Xi) = μXi = ∫∫...∫ xi fX1X2...Xp(x1, x2, ..., xp) dx1 dx2 ... dxp
  Variance: V(Xi) = σXi² = ∫∫...∫ (xi - μXi)² fX1X2...Xp(x1, x2, ..., xp) dx1 dx2 ... dxp

Covariance and correlation:
  Covariance is a measure of the linear relationship between random variables. If the relationship between the random variables is nonlinear, the covariance might not be sensitive to the relationship.
  Two random variables with nonzero correlation are said to be correlated. Similar to covariance, the correlation is a measure of the linear relationship between random variables.
  Covariance: σXY = E[(X - μX)(Y - μY)] = E(XY) - μX μY
  Correlation: ρXY = cov(X, Y) / √(V(X)V(Y)) = σXY / (σX σY), where -1 ≤ ρXY ≤ 1
  If X and Y are independent random variables, σXY = ρXY = 0.

Bivariate normal:
  fXY(x, y; μX, μY, σX, σY, ρ) = [1 / (2π σX σY √(1 - ρ²))] × exp{ -[1 / (2(1 - ρ²))] [ (x - μX)²/σX² - 2ρ(x - μX)(y - μY)/(σX σY) + (y - μY)²/σY² ] }
  for -∞ < x < ∞ and -∞ < y < ∞, with parameters σX > 0, σY > 0, -∞ < μX < ∞, -∞ < μY < ∞, and -1 < ρ < 1.
  Marginal distribution: if X and Y have a bivariate normal distribution with joint probability density fXY(x, y; μX, μY, σX, σY, ρ), the marginal probability distributions of X and Y are normal with means μX and μY and standard deviations σX and σY, respectively.
  Conditional distribution: if X and Y have a bivariate normal distribution with joint probability density fXY(x, y; μX, μY, σX, σY, ρ), the conditional probability distribution of Y given X = x is normal with mean
    μY|x = μY + ρ (σY/σX)(x - μX)        and variance        σY|x² = σY²(1 - ρ²)
  Correlation: if X and Y have a bivariate normal distribution with joint probability density function fXY(x, y; μX, μY, σX, σY, ρ), the correlation between X and Y is ρ.
  If X and Y have a bivariate normal distribution with ρ = 0, X and Y are independent.

Linear functions of random variables:
  Given random variables X1, X2, ..., Xp and constants c1, c2, ..., cp, Y = c1X1 + c2X2 + ... + cpXp is a linear combination of X1, X2, ..., Xp.
  Mean: E(Y) = c1E(X1) + c2E(X2) + ... + cpE(Xp)
  Variance: V(Y) = c1²V(X1) + c2²V(X2) + ... + cp²V(Xp) + 2 Σ(i<j) ci cj cov(Xi, Xj)
  If X1, X2, ..., Xp are independent: V(Y) = c1²V(X1) + c2²V(X2) + ... + cp²V(Xp)
  Mean and variance of an average: E(X̄) = μ and V(X̄) = σ²/p, with E(Xi) = μ and V(Xi) = σ²
  E(Y) = c1μ1 + c2μ2 + ... + cpμp;  V(Y) = c1²σ1² + c2²σ2² + ... + cp²σp²

General functions of random variables:
  Discrete: for y = h(x) with inverse x = u(y), fY(y) = fX[u(y)]
  Continuous: for y = h(x) with inverse x = u(y), fY(y) = fX[u(y)] |J|, where J = u'(y) is called the Jacobian of the transformation and the absolute value of J is used.
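The linear-combination formulas above (mean always additive; variances additive with squared coefficients when the variables are independent) can be sketched numerically; the coefficients, means, and variances are invented.

```python
# Y = c1*X1 + c2*X2 with independent X1, X2 (hypothetical moments)
cs = [2.0, -1.0]          # coefficients c1, c2
means = [3.0, 1.0]        # E(X1), E(X2)
variances = [4.0, 9.0]    # V(X1), V(X2)

ey = sum(c * m for c, m in zip(cs, means))            # E(Y) = sum ci*E(Xi)
vy = sum(c ** 2 * v for c, v in zip(cs, variances))   # V(Y) = sum ci^2*V(Xi)
```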




