Roman Dunaytsev
Department of Communications Engineering, Tampere University of Technology
dunaytse@cs.tut.fi
Outline
Random variables
Stochastic processes
Markov chains
Transition matrices and probability vectors
Regular matrices
Probability to get from one state to another in a given number of steps
Probability distribution after several steps
Long-term behavior
TLT-2716, Exercise 2
Random Variables
Suppose that to each point of a sample space we assign a real number
We then have a real-valued function defined on the sample space
This function is called a random variable
It is usually denoted by a capital letter such as X
Example: Suppose that a coin is tossed twice, so that the sample space is {TT, TH, HT, HH}
Let X represent the number of heads that can come up
Thus, we have: X(TT) = 0, X(TH) = 1, X(HT) = 1, X(HH) = 2
Example: In an experiment involving the transmission of a message, the following are examples of random variables:
The number of symbols received in error
The number of retransmissions required to get an error-free copy
The time needed to transmit the message
Example: You ask people whether they approve of the present government. The sample space could be: {disapprove, indifferent, approve}
To analyze your results, you could use X = {-1, 0, 1} or X = {1, 2, 3}
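A random variable is just a function on the sample space; the two-coin example above can be written directly in code (a small illustrative sketch):

```python
# A random variable is a function from the sample space to the reals.
# Here X maps each outcome of two coin tosses to the number of heads.
sample_space = ["TT", "TH", "HT", "HH"]

def X(outcome):
    """Number of heads in the outcome."""
    return outcome.count("H")

values = {omega: X(omega) for omega in sample_space}
print(values)  # {'TT': 0, 'TH': 1, 'HT': 1, 'HH': 2}
```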
Roman Dunaytsev (TUT), TLT-2716, Exercise 2, November 05, 2009
Stochastic Processes
A stochastic process is a collection of random variables {S(t), t ∈ T}, where t is a parameter that runs over an index set T
In general, we call t the time parameter
Each S(t) takes values in some set E called the state space
Then S(t) is the state of the process at time (or step) t
E.g., S(t) may be:
The number of incoming emails at time t
The balance of a bank account on day t
The number of heads shown by t flips of a coin
Since any stochastic process is simply a collection of random variables, the name random process is also used for them
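For instance, the running number of heads over t coin flips can be simulated as a discrete-time stochastic process (an illustrative sketch; the seed is arbitrary):

```python
import random

random.seed(42)  # arbitrary seed, only for reproducibility

# S(t) = number of heads after t flips: a discrete-time,
# discrete-space stochastic process.
flips = [random.choice("HT") for _ in range(10)]
S = [0]
for f in flips:
    S.append(S[-1] + (f == "H"))

print(S)  # one sample path of the process S(0), S(1), ..., S(10)
```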
When the index set is an interval of the real line, the stochastic process is said to be a continuous-time stochastic process
I.e., T = {t : t ≥ 0} or T = {t : −∞ < t < ∞}
When the state space is countable, the stochastic process is said to be a discrete-space stochastic process
When the state space is an interval of the real line, the stochastic process is said to be a continuous-space stochastic process
[Figure: sample paths illustrating the combinations of time and state space: income of a self-employed person at day t; income of an employee at time t in the course of a year; air temperature at noon over t days (DT & CS); air temperature at time t (CT & CS)]
Markov Chains
A Markov chain can be pictured as a frog jumping on a set of lily pads
For such a probability vector, p1 + p2 + … + pm = 1
A Markov process at time n is fully defined by pij = Pr{S(n + 1) = j | S(n) = i}
where pij is the conditional probability of being in state j at step n + 1, given that the process was in state i at step n
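These one-step probabilities are all that is needed to simulate a chain: the next state is sampled from the row of transition probabilities for the current state (a minimal sketch with a hypothetical two-state matrix):

```python
import random

random.seed(1)  # arbitrary seed

# Hypothetical two-state chain: P[i][j] = Pr{S(n+1) = j | S(n) = i}.
P = [[0.9, 0.1],
     [0.5, 0.5]]

def step(i):
    """Sample the next state, given the current state i."""
    return random.choices(range(len(P[i])), weights=P[i])[0]

# Simulate a short trajectory starting from state 0.
state, path = 0, [0]
for _ in range(20):
    state = step(state)
    path.append(state)
print(path)
```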
A vector p = (p1, p2, . . . , pm) is called a probability vector if its components are nonnegative and their sum is 1
The entries in a probability vector can represent the probabilities of finding a system in each of the states
A square matrix P is called a transition matrix if each of its rows is a probability vector
Example: Which of the following are matrices of transition probabilities?

A = ( 1/3  0    2/3 )      B = ( 0    1    0   )      C = ( 1/4  3/4 )
    ( 3/4  1/2  1/4 )          ( 1/2  1/6  1/3 )          ( 1/3  1/3 )
    ( 1/3  1/3  1/3 )          ( 1/3  2/3  0   )
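A quick way to answer such questions is to test that every row is a probability vector, i.e. nonnegative entries summing to 1 (a small sketch with two illustrative 2×2 matrices; the second one fails because its second row sums to 2/3):

```python
from fractions import Fraction as F

def is_transition_matrix(M, tol=1e-12):
    """True if M is square and every row is a probability vector."""
    n = len(M)
    if any(len(row) != n for row in M):
        return False
    return all(min(row) >= 0 and abs(sum(row) - 1) <= tol for row in M)

# A valid example and an invalid one (second row sums to 2/3, not 1).
P_good = [[F(1, 4), F(3, 4)], [F(1, 2), F(1, 2)]]
P_bad  = [[F(1, 4), F(3, 4)], [F(1, 3), F(1, 3)]]
print(is_transition_matrix(P_good), is_transition_matrix(P_bad))  # True False
```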
Thus, the transition matrix and the transition diagram are as follows:

P = ( pss  psc )
    ( pcs  pcc )

[Transition diagram: states "sunny" and "cloudy", with transition probabilities 0.2, 0.8, 0.4, and 0.6]
P = ( pww  pws  pwe )
    ( psw  pss  pse )
    ( pew  pes  pee )
[Figure: the general form of a transition matrix for five states S1, …, S5, with entries p11, …, p51, …]
A state Si of a Markov chain is called absorbing if it is impossible to leave it (i.e., pii = 1)
A Markov chain is absorbing if it has at least one absorbing state, and if from every state it is possible to go to an absorbing state (not necessarily in one step)
In an absorbing Markov chain, a state which is not absorbing is called transient
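Finding absorbing states is a one-line check on the diagonal, pii = 1 (a small sketch; the 3-state matrix is a hypothetical example whose last state is absorbing):

```python
def absorbing_states(P):
    """Indices i with P[i][i] == 1, i.e. states that cannot be left."""
    return [i for i in range(len(P)) if P[i][i] == 1]

# Hypothetical chain: states 0 and 1 are transient, state 2 is absorbing.
P = [[0.5, 0.4, 0.1],
     [0.2, 0.5, 0.3],
     [0.0, 0.0, 1.0]]
print(absorbing_states(P))  # [2]
```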
Matrix Multiplication
In order for matrix multiplication to be defined, the dimensions of the matrices must satisfy (n × m)(m × p) = (n × p)
The product C of two matrices A and B, say a 3×2 matrix A = (ai,j) and a 2×3 matrix B = (bi,j), is the 3×3 matrix C = (ci,j) with ci,j = ai,1 b1,j + ai,2 b2,j
That is,
c1,2 = a1,1 b1,2 + a1,2 b2,2
c3,3 = a3,1 b1,3 + a3,2 b2,3
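The same rule written out in code (a plain-Python sketch for small matrices):

```python
def matmul(A, B):
    """Multiply an n x m matrix by an m x p matrix, entry by entry:
    C[i][j] = sum_k A[i][k] * B[k][j]."""
    n, m, p = len(A), len(B), len(B[0])
    assert all(len(row) == m for row in A), "inner dimensions must match"
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

A = [[1, 2], [3, 4], [5, 6]]   # 3 x 2
B = [[7, 8, 9], [10, 11, 12]]  # 2 x 3
C = matmul(A, B)               # 3 x 3
print(C)  # [[27, 30, 33], [61, 68, 75], [95, 106, 117]]
```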
Hence, A is regular
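A transition matrix is regular when some power of it has strictly positive entries everywhere; that check can be sketched as follows (the two matrices are illustrative examples, not the slide's A):

```python
def is_regular(P, max_power=20):
    """True if some power P^k (k <= max_power) has all entries > 0."""
    n = len(P)
    M = [row[:] for row in P]
    for _ in range(max_power):
        if all(x > 0 for row in M for x in row):
            return True
        # M <- M * P
        M = [[sum(M[i][k] * P[k][j] for k in range(n)) for j in range(n)]
             for i in range(n)]
    return False

# This matrix has a zero entry, but its square is strictly positive -> regular.
P = [[0.0, 1.0],
     [0.5, 0.5]]
print(is_regular(P))  # True

# The identity matrix is not regular: every power keeps its zeros.
I = [[1.0, 0.0], [0.0, 1.0]]
print(is_regular(I))  # False
```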
What is the probability that the system changes from state Si to state Sj in exactly n steps?
As a rule, this probability is denoted as pij^(n)
The probability of going from any state Si to another state Sj in a finite Markov chain with the transition matrix P in n steps is given by the element (i, j) of the matrix P^n
With this information, let us form a Markov chain
Let us denote as states the kinds of weather: {rain}, {nice}, and {snow}
From the above information we determine the transition matrix:

    ( prr  prn  prs )   ( 1/2  1/4  1/4 )
P = ( pnr  pnn  pns ) = ( 1/2  0    1/2 )
    ( psr  psn  pss )   ( 1/4  1/4  1/2 )
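The n-step rule can be checked numerically: squaring this matrix gives the two-step probabilities pij^(2) (a sketch using NumPy):

```python
import numpy as np

# Weather chain: states rain (0), nice (1), snow (2).
P = np.array([[1/2, 1/4, 1/4],
              [1/2, 0,   1/2],
              [1/4, 1/4, 1/2]])

# Element (i, j) of P^n is the n-step transition probability p_ij^(n).
P2 = np.linalg.matrix_power(P, 2)
print(P2[0, 1])  # two-step probability of going from rain to nice: 0.1875
```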
Then the probability distribution at step n can be found as p^(n) = p^(0) P^n
That is,
p^(1) = p^(0) P
p^(2) = p^(1) P = p^(0) P^2
. . .
p^(n) = p^(n−1) P = p^(0) P^n
Let the initial probability vector be p^(0) = (1/3, 1/3, 1/3)
Hence, the probability distribution of the states after 3 days is p^(3) = p^(0) P^3:

                  ( 0.406  0.203  0.391 )
(1/3, 1/3, 1/3) × ( 0.406  0.188  0.406 ) = (0.401, 0.198, 0.401)
                  ( 0.391  0.203  0.406 )
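This computation is easy to verify (a sketch; the matrix is the weather chain defined above):

```python
import numpy as np

P = np.array([[1/2, 1/4, 1/4],
              [1/2, 0,   1/2],
              [1/4, 1/4, 1/2]])
p0 = np.array([1/3, 1/3, 1/3])

# Distribution after 3 steps: p^(3) = p^(0) P^3.
p3 = p0 @ np.linalg.matrix_power(P, 3)
print(p3.round(3))  # [0.401 0.198 0.401]
```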
Find the probability distribution after 4 days
We have that p^(4) = p^(0) P^4
Then

             ( 3/8   5/8   )
(5/6, 1/6) × ( 5/16  11/16 ) = (35/96, 61/96)

Thus, the probability of traveling to work by bus is 35/96, and that of driving to work is 61/96
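The same arithmetic with exact fractions (a sketch; the 2×2 matrix P^4 is taken as given on the slide):

```python
from fractions import Fraction as F

p0 = [F(5, 6), F(1, 6)]        # initial distribution (bus, car)
P4 = [[F(3, 8), F(5, 8)],      # the 4-step matrix P^4 from the slide
      [F(5, 16), F(11, 16)]]

# p^(4) = p^(0) P^4, computed exactly.
p4 = [sum(p0[k] * P4[k][j] for k in range(2)) for j in range(2)]
print(p4)  # [Fraction(35, 96), Fraction(61, 96)]
```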
Thus, if a Markov system is regular, then its long-term transition matrix is given by the square matrix whose rows are all the same and equal to the steady-state vector
      ( 0.4  0.2  0.4 )
P^7 = ( 0.4  0.2  0.4 )
      ( 0.4  0.2  0.4 )

Let the initial probability vector p^(0) be either (1/3, 1/3, 1/3) or (1/10, 8/10, 1/10)
Even then, after 1 week the result is the same: p^(7) = p^(0) P^7 = (0.4, 0.2, 0.4)
Hence, in the long run, the starting state does not really matter
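Assuming the rain/nice/snow weather chain from the earlier slides, this convergence can be demonstrated directly (a sketch):

```python
import numpy as np

# Weather chain from the earlier slides: rain, nice, snow.
P = np.array([[1/2, 1/4, 1/4],
              [1/2, 0,   1/2],
              [1/4, 1/4, 1/2]])

P7 = np.linalg.matrix_power(P, 7)
print(P7.round(3))  # every row is close to the steady-state vector (0.4, 0.2, 0.4)

# Two very different initial distributions end up in (almost) the same place.
for p0 in (np.array([1/3, 1/3, 1/3]), np.array([0.1, 0.8, 0.1])):
    print((p0 @ P7).round(3))
```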