
Session 9 References

Probability and Statistics Course.

Instructor: Dr.Ing.(c) Sergio A. Abreo C.

Escuela de Ingenierías Eléctrica, Electrónica y de
Telecomunicaciones

Universidad Industrial de Santander

March 07, 2018

Connectivity and Signal Processing Research Group.


info@cps.uis.edu.co http://cps.uis.edu.co/index.php

Agenda

1 Session 9
Joint PDFs

2 References

Two discrete random variables


Example: Two dice toss with different colored dice
A red die and a blue die are tossed.
The die that shows the larger number of dots is chosen; if both
show the same number of dots, the red die is chosen.
The numerical outcome of the experiment is defined to be 0 if
the blue die is chosen and 1 if the red die is chosen, together
with the corresponding number of dots.
The random vector is therefore (X, Y), where
Y = number of dots on the chosen die,
X = 0 if the blue die is chosen,
X = 1 if the red die is chosen.
The outcomes of the experiment can be represented by (i, j),
where i = 0 for blue, i = 1 for red, and j is the number of dots
observed.
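As a quick sanity check (a sketch, not part of the slides), the experiment can be simulated directly; the code below assumes the tie-breaking rule stated above, with the red die winning ties.

```python
import random

def toss(rng):
    """One trial of the experiment: returns the random vector (X, Y)."""
    red, blue = rng.randint(1, 6), rng.randint(1, 6)
    if red >= blue:          # the red die wins ties, so it is chosen when red >= blue
        return (1, red)      # X = 1 (red chosen), Y = dots on the red die
    return (0, blue)         # X = 0 (blue chosen), Y = dots on the blue die

rng = random.Random(0)       # seeded for reproducibility
counts = {}
for _ in range(100_000):
    xy = toss(rng)
    counts[xy] = counts.get(xy, 0) + 1
# relative frequencies approximate the joint PMF, e.g. counts[(1, 3)] / 100_000 ≈ 3/36
```

The relative frequencies of each (X, Y) pair converge to the joint probabilities derived on the following slides.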
Two discrete random variables

Example: Two dice toss with different colored dice


What then is pX,Y [1, 3]?
Assume that each outcome in S is equally likely, so each of the
36 outcomes has probability p = 1/36.
The outcome space, with each cell showing the value of (X, Y):

red \ blue    1      2      3      4      5      6
    1       (1,1)  (0,2)  (0,3)  (0,4)  (0,5)  (0,6)
    2       (1,2)  (1,2)  (0,3)  (0,4)  (0,5)  (0,6)
    3       (1,3)  (1,3)  (1,3)  (0,4)  (0,5)  (0,6)
    4       (1,4)  (1,4)  (1,4)  (1,4)  (0,5)  (0,6)
    5       (1,5)  (1,5)  (1,5)  (1,5)  (1,5)  (0,6)
    6       (1,6)  (1,6)  (1,6)  (1,6)  (1,6)  (1,6)

Exactly three outcomes (red = 3, blue = 1, 2, 3) map to (1, 3), so
pX,Y [1, 3] = 3/36 = 1/12.
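The counting argument above can be reproduced exactly by enumerating the 36 outcomes (a sketch in Python, using exact rational arithmetic):

```python
from fractions import Fraction
from itertools import product

pmf = {}
for red, blue in product(range(1, 7), repeat=2):     # 36 equally likely outcomes
    x = 1 if red >= blue else 0                      # the red die wins ties
    y = red if x == 1 else blue                      # dots on the chosen die
    pmf[(x, y)] = pmf.get((x, y), 0) + Fraction(1, 36)

print(pmf[(1, 3)])   # the three outcomes (red=3, blue=1..3) give 3/36 = 1/12
```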
Two discrete random variables

If X and Y are discrete random variables, the joint probability


distribution of X and Y is a description of the set of points
(x, y ) in the range of (X , Y ) along with the probability of
each point.

Joint Probability Distributions


The joint probability mass function of the discrete random
variables X and Y, denoted fXY (x, y), satisfies
1. fXY (x, y) ≥ 0
2. Σx Σy fXY (x, y) = 1
3. fXY (x, y) = P(X = x, Y = y)
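The three properties can be checked numerically on the dice example (a sketch; the joint PMF is rebuilt here by enumeration so the block is self-contained):

```python
from fractions import Fraction
from itertools import product

pmf = {}
for red, blue in product(range(1, 7), repeat=2):   # 36 equally likely outcomes
    x = 1 if red >= blue else 0                    # the red die wins ties
    y = red if x == 1 else blue                    # dots on the chosen die
    pmf[(x, y)] = pmf.get((x, y), 0) + Fraction(1, 36)

assert all(p >= 0 for p in pmf.values())   # property 1: fXY(x, y) >= 0
assert sum(pmf.values()) == 1              # property 2: sums to 1 over all (x, y)
assert pmf[(0, 4)] == Fraction(3, 36)      # property 3: P(X = 0, Y = 4), 3 outcomes
```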
Two discrete random variables

Marginal Probability Distributions


If more than one random variable is defined in a random
experiment, it is important to distinguish between the joint
probability distribution of X and Y and the probability
distribution of each variable individually.
The individual probability distribution of a random variable is
referred to as its marginal probability distribution.
The marginal probability distribution of X can be determined
from the joint probability distribution of X and other random
variables.
Consider first the determination of pX [xi ].
Since {X = xi } does not specify any particular value for Y ,
the event {X = xi } is equivalent to the joint event
{X = xi , Y ∈ SY }.
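Since {X = xi} is equivalent to {X = xi, Y ∈ SY}, the marginal of X is obtained by summing the joint PMF over all values of Y. A minimal sketch on the dice example:

```python
from collections import defaultdict
from fractions import Fraction
from itertools import product

pmf = {}
for red, blue in product(range(1, 7), repeat=2):   # rebuild the joint PMF
    x = 1 if red >= blue else 0                    # the red die wins ties
    y = red if x == 1 else blue                    # dots on the chosen die
    pmf[(x, y)] = pmf.get((x, y), 0) + Fraction(1, 36)

px = defaultdict(Fraction)
for (x, y), p in pmf.items():   # {X = x} = {X = x, Y in S_Y}: sum over y
    px[x] += p

print(px[1], px[0])   # 7/12 (red chosen, 21/36) and 5/12 (blue chosen, 15/36)
```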
Two discrete random variables

Marginal Probability Distributions

Figure: Determination of the marginal PDF value pX [x3 ] from the joint
PDF pX,Y [xi , yj ] by summing along the y direction. Taken from [Kay, 2006].
Two discrete random variables

Example: Two dice toss with different colored dice


Joint outcome table with the marginal probabilities written in the
margins (Σ denotes a row or column sum):

red \ blue    1      2      3      4      5      6      Σ
    1       (1,1)  (0,2)  (0,3)  (0,4)  (0,5)  (0,6)   1/6
    2       (1,2)  (1,2)  (0,3)  (0,4)  (0,5)  (0,6)   1/6
    3       (1,3)  (1,3)  (1,3)  (0,4)  (0,5)  (0,6)   1/6
    4       (1,4)  (1,4)  (1,4)  (1,4)  (0,5)  (0,6)   1/6
    5       (1,5)  (1,5)  (1,5)  (1,5)  (1,5)  (0,6)   1/6
    6       (1,6)  (1,6)  (1,6)  (1,6)  (1,6)  (1,6)   1/6
    Σ        1/6    1/6    1/6    1/6    1/6    1/6
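Each margin entry is just a row or column sum of cell probabilities; a quick check (sketch, assuming the 36 equally likely cells):

```python
from fractions import Fraction

# each of the 36 (red, blue) cells has probability 1/36
row_sum = sum(Fraction(1, 36) for blue in range(1, 7))  # fix a red value, sum over blue
col_sum = sum(Fraction(1, 36) for red in range(1, 7))   # fix a blue value, sum over red
print(row_sum, col_sum)   # 1/6 1/6
```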
Two discrete random variables

Marginal Probability Distributions


pX [xk ] = Σ_{j=1}^{∞} pXY [xk , yj ]
pY [yk ] = Σ_{i=1}^{∞} pXY [xi , yk ]
The terminology “marginal” PDF originates from the process
of summing the probabilities along each column/row and
writing the results in the margin (below/behind the x/y axis).

Joint PDF cannot be determined from the marginal PDFs


Having obtained the marginal PDFs from the joint PDF, we
might suppose we could reverse the process to find the joint
PDF from the marginal PDFs.
However, this is not possible in general. To see why, consider
the previous example.
Two discrete random variables

Example: Two dice toss with different colored dice


red \ blue    1     2     3     4     5     6     Σ
    1         ?     ?     ?     ?     ?     ?    1/6
    2         ?     ?     ?     ?     ?     ?    1/6
    3         ?     ?     ?     ?     ?     ?    1/6
    4         ?     ?     ?     ?     ?     ?    1/6
    5         ?     ?     ?     ?     ?     ?    1/6
    6         ?     ?     ?     ?     ?     ?    1/6
    Σ        1/6   1/6   1/6   1/6   1/6   1/6

In this case we have 36 unknowns but only 12 equations; is it
possible to solve the problem?
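The answer is no, and the non-uniqueness can be made concrete: the two joint tables below (hypothetical, not from the slides) both have uniform 1/6 marginals yet differ, so the marginal equations cannot pin down the 36 unknowns.

```python
from fractions import Fraction

# two candidate joint PMFs over (red, blue) in {1..6} x {1..6}
indep = {(i, j): Fraction(1, 36) for i in range(1, 7) for j in range(1, 7)}
diag = {(i, j): Fraction(1, 6) if i == j else Fraction(0)
        for i in range(1, 7) for j in range(1, 7)}   # all mass on the diagonal

def marginals(joint):
    px = [sum(joint[i, j] for j in range(1, 7)) for i in range(1, 7)]
    py = [sum(joint[i, j] for i in range(1, 7)) for j in range(1, 7)]
    return px, py

assert marginals(indep) == marginals(diag)   # identical marginals (all 1/6)...
assert indep != diag                         # ...but different joint PMFs
```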
Two discrete random variables

Mean and Variance from Joint Distributions


Given a joint probability mass function for random variables X
and Y , E (X ) and V (X ) can be obtained directly from the
joint probability distribution of X and Y .
One approach is to first calculate the marginal probability
distribution of X and then determine E(X) and V(X) by the usual method.
E(X) = μX = Σx x fX (x) = Σx x ( Σ_{j=1}^{∞} pXY [x, yj ] )
     = Σx Σy x pXY [x, y]
     = Σ_R x pXY [x, y]

where R denotes the set of all points in the range of (X, Y).
Mean from Joint Distributions

E(X) = 1[fXY (1,1) + fXY (1,2) + fXY (1,3) + fXY (1,4) + fXY (1,5) + fXY (1,6)]
     + 2[fXY (2,1) + fXY (2,2) + fXY (2,3) + fXY (2,4) + fXY (2,5) + fXY (2,6)]
     + 3[fXY (3,1) + fXY (3,2) + fXY (3,3) + fXY (3,4) + fXY (3,5) + fXY (3,6)]
     + 4[fXY (4,1) + fXY (4,2) + fXY (4,3) + fXY (4,4) + fXY (4,5) + fXY (4,6)]
     + 5[fXY (5,1) + fXY (5,2) + fXY (5,3) + fXY (5,4) + fXY (5,5) + fXY (5,6)]
     + 6[fXY (6,1) + fXY (6,2) + fXY (6,3) + fXY (6,4) + fXY (6,5) + fXY (6,6)]
     = 1 × (1/6) + 2 × (1/6) + 3 × (1/6) + 4 × (1/6) + 5 × (1/6) + 6 × (1/6)
     = (1 + 2 + 3 + 4 + 5 + 6)/6 = 21/6 = 7/2 = 3.5

E(X) = 3.5

By the same computation on the column sums, E(Y) = 3.5.
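The same value falls out of a direct sum over the joint table (a sketch; here the row variable is the red-die value, and every cell carries probability 1/36):

```python
from fractions import Fraction
from itertools import product

# E(X) = sum over all 36 cells of x * fXY(x, y), with fXY = 1/36 per cell
e_x = sum(x * Fraction(1, 36) for x, y in product(range(1, 7), repeat=2))
print(e_x)   # 7/2, i.e. 3.5
```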
Session 9 References

Mean from Joint Distributions

E (X ) = 1[fXY (1, 1) + fXY (1, 2) + fXY (1, 3) + fXY (1, 4) + fXY (1, 5) +
fXY (1, 6)] + 2[fXY (2, 1) + fXY (2, 2) + fXY (2, 3) + fXY (2, 4) +
fXY (2, 5) + fXY (2, 6)] + 3[fXY (3, 1) + fXY (3, 2) + fXY (3, 3) +
fXY (3, 4) + fXY (3, 5) + fXY (3, 6)] + 4[fXY (4, 1) + fXY (4, 2) +
fXY (4, 3) + fXY (4, 4) + fXY (4, 5) + fXY (4, 6)] + 5[fXY (5, 1) +
fXY (5, 2) + fXY (5, 3) + fXY (5, 4) + fXY (5, 5) + fXY (5, 6)] +
6[fXY (6, 1) + fXY (6, 2) + fXY (6, 3) + fXY (6, 4) + fXY (6, 5) +
fXY (6, 6)] = 1 × [1/6] + 2 × [1/6] + 3 × [1/6] + 4 × [1/6] + 5 × [1/6] +
6 × [1/6] = 1/6 × (1 + 2 + 3 + 4 + 5 + 6) = 1/6 × 21 = 7/2 = 3.5

E (X ) = 3.5

E (Y ) =? = 3.5
Session 9 References

Mean from Joint Distributions

E (X ) = 1[fXY (1, 1) + fXY (1, 2) + fXY (1, 3) + fXY (1, 4) + fXY (1, 5) +
fXY (1, 6)] + 2[fXY (2, 1) + fXY (2, 2) + fXY (2, 3) + fXY (2, 4) +
fXY (2, 5) + fXY (2, 6)] + 3[fXY (3, 1) + fXY (3, 2) + fXY (3, 3) +
fXY (3, 4) + fXY (3, 5) + fXY (3, 6)] + 4[fXY (4, 1) + fXY (4, 2) +
fXY (4, 3) + fXY (4, 4) + fXY (4, 5) + fXY (4, 6)] + 5[fXY (5, 1) +
fXY (5, 2) + fXY (5, 3) + fXY (5, 4) + fXY (5, 5) + fXY (5, 6)] +
6[fXY (6, 1) + fXY (6, 2) + fXY (6, 3) + fXY (6, 4) + fXY (6, 5) +
fXY (6, 6)] = 1 × [1/6] + 2 × [1/6] + 3 × [1/6] + 4 × [1/6] + 5 × [1/6] +
6 × [1/6] = 1/6 × (1 + 2 + 3 + 4 + 5 + 6) = 1/6 × 21 = 7/2 = 3.5

E (X ) = 3.5

E (Y ) =? = 3.5
Session 9 References

Mean from Joint Distributions

E (X ) = 1[fXY (1, 1) + fXY (1, 2) + fXY (1, 3) + fXY (1, 4) + fXY (1, 5) +
fXY (1, 6)] + 2[fXY (2, 1) + fXY (2, 2) + fXY (2, 3) + fXY (2, 4) +
fXY (2, 5) + fXY (2, 6)] + 3[fXY (3, 1) + fXY (3, 2) + fXY (3, 3) +
fXY (3, 4) + fXY (3, 5) + fXY (3, 6)] + 4[fXY (4, 1) + fXY (4, 2) +
fXY (4, 3) + fXY (4, 4) + fXY (4, 5) + fXY (4, 6)] + 5[fXY (5, 1) +
fXY (5, 2) + fXY (5, 3) + fXY (5, 4) + fXY (5, 5) + fXY (5, 6)] +
6[fXY (6, 1) + fXY (6, 2) + fXY (6, 3) + fXY (6, 4) + fXY (6, 5) +
fXY (6, 6)] = 1 × [1/6] + 2 × [1/6] + 3 × [1/6] + 4 × [1/6] + 5 × [1/6] +
6 × [1/6] = 1/6 × (1 + 2 + 3 + 4 + 5 + 6) = 1/6 × 21 = 7/2 = 3.5

E (X ) = 3.5

E (Y ) =? = 3.5
Session 9 References

Variance from Joint Distributions

V(X) = σ²X = Σx (x − µX)² fX(x)
= Σx (x − µX)² (Σ_{j=1}^{∞} pXY[xk, yj])
= Σx Σ_{j=1}^{∞} (x − µX)² pXY[xk, yj]
= Σx Σy (x − µX)² pXY[x, y]
= Σ_R (x − µX)² pXY[x, y]
= Σ_{x=1}^{6} (x − 3.5)² × 1/6 = 2.9167
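The same variance can be sketched in code. A minimal illustration (not from the slides), reusing the two-fair-dice joint PMF and following the derivation V(X) = Σ_R (x − µX)² pXY[x, y]:

```python
from fractions import Fraction

# Joint PMF of two fair dice.
f_XY = {(x, y): Fraction(1, 36) for x in range(1, 7) for y in range(1, 7)}

# Mean first, then the variance as a single sum over the whole range R.
mu_X = sum(x * p for (x, y), p in f_XY.items())               # 7/2
V_X = sum((x - mu_X) ** 2 * p for (x, y), p in f_XY.items())

print(V_X, float(V_X))  # 35/12 ≈ 2.9167
```

Note 2.9167 on the slide is the rounded value of the exact fraction 35/12.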
Session 9 References

Two discrete random variables

Conditional Probability
When two random variables are defined in a random
experiment, knowledge of one can change the probabilities
that we associate with the values of the other.
Recall that the definition of conditional probability for events
A and B is P(B|A) = P(A ∩ B)/P(A)
P(Y = 1|X = 3) = P(X = 3, Y = 1)/P(X = 3)
P(Y = 1|X = 3) = fXY(3, 1)/fX(3) = (1/36)/(1/6) = 1/6
P(Y = y|X = x) = fY|x(y)
fY|x(y) = fXY(x, y)/fX(x) for fX(x) > 0
fY|x(y) ≥ 0
Σ_{Rx} fY|x(y) = 1
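The definition fY|x(y) = fXY(x, y)/fX(x) translates directly into code. A minimal sketch (not from the slides), checked on the two-fair-dice joint PMF:

```python
from fractions import Fraction

# Joint PMF of two fair dice.
f_XY = {(x, y): Fraction(1, 36) for x in range(1, 7) for y in range(1, 7)}

def f_X(x):
    # Marginal of X: sum the joint PMF over y.
    return sum(p for (xi, y), p in f_XY.items() if xi == x)

def f_Y_given_x(y, x):
    # Conditional PMF, defined only where f_X(x) > 0.
    return f_XY[(x, y)] / f_X(x)

# P(Y = 1 | X = 3) = (1/36) / (1/6) = 1/6, as on the slide.
print(f_Y_given_x(1, 3))

# The conditional PMF sums to 1 over the range of Y.
assert sum(f_Y_given_x(y, 3) for y in range(1, 7)) == 1
```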
Session 9 References

Conditional Probability

Example: Two dice toss with different colored dice


red \ blue    1      2      3      4      5      6      Σ
1           (1,1)  (0,2)  (0,3)  (0,4)  (0,5)  (0,6)   1/6
2           (1,2)  (1,2)  (0,3)  (0,4)  (0,5)  (0,6)   1/6
3           (1,3)  (1,3)  (1,3)  (0,4)  (0,5)  (0,6)   1/6
4           (1,4)  (1,4)  (1,4)  (1,4)  (0,5)  (0,6)   1/6
5           (1,5)  (1,5)  (1,5)  (1,5)  (1,5)  (0,6)   1/6
6           (1,6)  (1,6)  (1,6)  (1,6)  (1,6)  (1,6)   1/6
Σ            1/6    1/6    1/6    1/6    1/6    1/6
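The outcome table above can be reproduced by enumeration. A minimal sketch (not from the slides): for each equally likely (red, blue) pair, the die with more dots is chosen, with ties going to the red die, giving X (1 = red, 0 = blue) and Y (dots on the chosen die).

```python
from fractions import Fraction
from collections import defaultdict

# Accumulate the joint PMF p_XY[(x, y)] over all 36 equally likely rolls.
p_XY = defaultdict(Fraction)
for red in range(1, 7):
    for blue in range(1, 7):
        if red >= blue:
            x, y = 1, red      # red chosen (ties go to red)
        else:
            x, y = 0, blue     # blue chosen
        p_XY[(x, y)] += Fraction(1, 36)

# P(X = 1): the red die is chosen in 21 of the 36 outcomes.
P_red = sum(p for (x, y), p in p_XY.items() if x == 1)
print(P_red)  # 7/12
```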
Session 9 References

Conditional Probability

Mean and Variance


Let Rx denote the set of all points in the range of (X , Y ) for which
X = x. The conditional mean of Y given X = x, denoted as
E (Y |x) or µY |x , is
E(Y|x) = µY|x = Σ_{Rx} y fY|x(y)

and the conditional variance of Y given X = x, denoted as
V(Y|x) or σ²Y|x, is

V(Y|x) = Σ_{Rx} (y − µY|x)² fY|x(y)
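These two sums can be sketched directly. A minimal illustration (not from the slides): for the two-fair-dice example, fY|x(y) = 1/6 for every y, so E(Y|x) = 3.5 and V(Y|x) = 35/12, independent of x.

```python
from fractions import Fraction

# Conditional PMF f_{Y|x}(y) for the two-fair-dice example: uniform on 1..6.
f_Y_given_x = {y: Fraction(1, 6) for y in range(1, 7)}

mu = sum(y * p for y, p in f_Y_given_x.items())                # E(Y|x)
var = sum((y - mu) ** 2 * p for y, p in f_Y_given_x.items())   # V(Y|x)

print(mu, var)  # 7/2 35/12
```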
Session 9 References

Two discrete random variables

Independence
In some random experiments, knowledge of the values of X
does not change any of the probabilities associated with the
values for Y .
Independence implies that fXY (x, y ) = fX (x)fY (y ) for all x
and y .
If we find one pair of x and y in which the equality fails, X
and Y are not independent.
Y = number of dots on chosen die.
X = 0 blue die chosen,
X = 1 red die chosen.
Session 9 References

Two discrete random variables

Example: Two dice toss with different colored dice


Joint probability and marginal probability
red \ blue    1     2     3     4     5     6     Σ
1           1/36  1/36  1/36  1/36  1/36  1/36  1/6
2           1/36  1/36  1/36  1/36  1/36  1/36  1/6
3           1/36  1/36  1/36  1/36  1/36  1/36  1/6
4           1/36  1/36  1/36  1/36  1/36  1/36  1/6
5           1/36  1/36  1/36  1/36  1/36  1/36  1/6
6           1/36  1/36  1/36  1/36  1/36  1/36  1/6
Σ            1/6   1/6   1/6   1/6   1/6   1/6
Session 9 References

Two discrete random variables

Example: Two dice toss with different colored dice


Conditional probability

red \blue 1 2 3 4 5 6
1 1/6 1/6 1/6 1/6 1/6 1/6
2 1/6 1/6 1/6 1/6 1/6 1/6
3 1/6 1/6 1/6 1/6 1/6 1/6
4 1/6 1/6 1/6 1/6 1/6 1/6
5 1/6 1/6 1/6 1/6 1/6 1/6
6 1/6 1/6 1/6 1/6 1/6 1/6

Are X and Y independent?
Session 9 References

Two discrete random variables

Independence
For discrete random variables X and Y , if any one of the following
properties is true, the others are also true, and X and Y are
independent.
1 fXY (x, y ) = fX (x)fY (y ) for all x and y
2 fY |x (y ) = fY (y ) for all x and y with fX (x) > 0
3 fX |y (x) = fX (x) for all x and y with fY (y ) > 0
4 P(X ∈ A, Y ∈ B) = P(X ∈ A)P(Y ∈ B) for any sets A and
B in the range of X and Y , respectively.
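Property 1 gives a direct computational test. A minimal sketch (not from the slides): compare fXY(x, y) with fX(x)fY(y) over every pair; a single failing pair is enough to conclude dependence.

```python
from fractions import Fraction

def is_independent(f_XY, xs, ys):
    # Marginals from the joint PMF.
    f_X = {x: sum(f_XY.get((x, y), Fraction(0)) for y in ys) for x in xs}
    f_Y = {y: sum(f_XY.get((x, y), Fraction(0)) for x in xs) for y in ys}
    # Property 1 must hold for ALL (x, y).
    return all(f_XY.get((x, y), Fraction(0)) == f_X[x] * f_Y[y]
               for x in xs for y in ys)

# Two fair dice: f_XY(x, y) = 1/36 = (1/6)(1/6) everywhere, so independent.
dice = {(x, y): Fraction(1, 36) for x in range(1, 7) for y in range(1, 7)}
print(is_independent(dice, range(1, 7), range(1, 7)))  # True
```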
Session 9 References

Two discrete random variables

Rectangular Range for (X, Y)!


1 If the set of points in two-dimensional space that receives
positive probability under fXY (x, y ) does not form a rectangle,
X and Y are not independent, because knowledge of X can
restrict the range of values of Y that receive positive
probability.
2 If the set of points in two-dimensional space that receives
positive probability under fXY (x, y ) forms a rectangle,
independence is possible but not demonstrated.
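A small sketch (not from the slides) makes point 1 concrete: when positive probability sits only on the diagonal {(0,0), (1,1)}, the support is not a rectangle, so the variables must be dependent.

```python
from fractions import Fraction

# Support is the diagonal only -- not a rectangle.
f_XY = {(0, 0): Fraction(1, 2), (1, 1): Fraction(1, 2)}

# Marginals of X and Y.
f_X = {x: sum(p for (xi, y), p in f_XY.items() if xi == x) for x in (0, 1)}
f_Y = {y: sum(p for (x, yi), p in f_XY.items() if yi == y) for y in (0, 1)}

# f_XY(0, 1) = 0 but f_X(0) * f_Y(1) = 1/4, so X and Y are dependent.
print(f_XY.get((0, 1), 0), f_X[0] * f_Y[1])
```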
Session 9 References

Two discrete random variables


Class activity

Figure: fXY (x, y ), taken from [Montgomery and Runger, 2010]


Session 9 References

References I

Kay, S. (2006). Intuitive probability and random processes using MATLAB®. Springer Science & Business Media.

Montgomery, D. C. and Runger, G. C. (2010). Applied statistics and probability for engineers. John Wiley & Sons.
