Fall 2001
10/3/2001
1. Each arrival of the combined process is an A train with probability λA/(λA + λB) (and a B train with probability λB/(λA + λB)), independently of all other arrivals. Thus the combined Poisson process has an embedded
Bernoulli process. To solve this problem, you need a good grasp of the fundamental properties of
Poisson and Bernoulli processes. If you feel uncomfortable with the answers below, now is a good
time to review Poisson and Bernoulli processes, for example, by reading Chapter 4 of Fundamentals
of Applied Probability Theory by Alvin Drake.
(a)
(i) The times between successive A train arrivals are independent exponential random variables with parameter λA = 3/hour. By the memoryless property, the fact that Bart
arrives at a random time has no relevance. The time he has to wait until the next A
train is still an exponential random variable with parameter λA. So if we call his waiting
time X, then the PDF of X is given by

fX(x) = λA e^(−λA x) = 3e^(−3x), x ≥ 0.
This is in fact a random incidence question. So we can also solve this question by using
formula (2.65) in the textbook. Let Y be the interarrival time of A trains, then
fX(x) = (1 − FY(x)) / E[Y].

Here E[Y] = 1/λA = 1/3, and FY(x) = P{Y ≤ x} = ∫₀ˣ λA e^(−λA y) dy = 1 − e^(−λA x) = 1 − e^(−3x). Hence

fX(x) = (1 − (1 − e^(−3x))) / (1/3) = 3e^(−3x), x ≥ 0.
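As an illustrative aside (not part of the original solution), the random-incidence claim can be checked by simulation: generate a long Poisson stream of A-train arrivals at rate λA = 3/hour, drop an observer at a random time, and measure the wait until the next train. The sample mean should be close to 1/λA = 1/3 hour.

```python
import bisect
import random

random.seed(0)

RATE_A = 3.0         # A-train arrival rate (trains per hour)
T_HORIZON = 2000.0   # length of the simulated window (hours)
N_OBSERVERS = 20000  # random "Bart" arrival times

# Build one long Poisson arrival stream from exponential interarrival times.
arrivals = []
t = 0.0
while t < T_HORIZON:
    t += random.expovariate(RATE_A)
    arrivals.append(t)

# For each random observer time, record the wait until the next arrival.
waits = []
for _ in range(N_OBSERVERS):
    u = random.uniform(0.0, 0.9 * T_HORIZON)  # keep away from the end of the stream
    i = bisect.bisect_right(arrivals, u)
    waits.append(arrivals[i] - u)

mean_wait = sum(waits) / len(waits)
print(mean_wait)  # close to 1/3 hour, as the memoryless property predicts
```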
(ii) An easy way to answer this question is to first translate it into a Bernoulli process that
one is familiar with, for example, a sequence of coin tosses. An A train then becomes a
head and a B train becomes a tail. The problem now reads: What is the probability
that one obtains at least 3 tails before a head is obtained? This probability is the same
as the probability that the next three tosses result in tails (i.e. the next three trains
are B trains). The outcomes of the fourth and subsequent tosses are irrelevant to the event of interest. Hence,

P{at least 3 B trains} = (λB/(λA + λB))³ = (6/9)³ = 8/27.
(iii) In order for exactly 3 B trains to arrive while Bart is waiting, the next 3 trains should
be B trains and the fourth one should be an A train.
P{exactly 3 B trains} = P{next 3 trains are B trains, the 4th train is an A train}
= (λB/(λA + λB))³ · (λA/(λA + λB)) = (2/3)³ · (1/3) = 8/81.
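The two embedded-Bernoulli answers above lend themselves to a quick simulation. The sketch below (illustration only; the rates λA = 3 and λB = 6 come from the problem) counts B trains before the first A train:

```python
import random

random.seed(1)

LAMBDA_A, LAMBDA_B = 3.0, 6.0
P_B = LAMBDA_B / (LAMBDA_A + LAMBDA_B)  # each train is a B train w.p. 2/3

N = 200000
at_least_3 = 0
exactly_3 = 0
for _ in range(N):
    n_b = 0
    while random.random() < P_B:  # count tails (B trains) before the first head
        n_b += 1
    if n_b >= 3:
        at_least_3 += 1
    if n_b == 3:
        exactly_3 += 1

print(at_least_3 / N)  # close to (2/3)^3 = 8/27, about 0.296
print(exactly_3 / N)   # close to (2/3)^3 * (1/3) = 8/81, about 0.099
```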
(b) The combined Poisson process of train arrivals has an arrival rate of λ = λA + λB = 9/hour.
The probability of having exactly 9 arrivals during any hour (t = 1) in this process is obtained
by using the Poisson PMF formula.
PK(9) = (λt)⁹ e^(−λt) / 9! = 9⁹ e^(−9) / 9! ≈ 0.1318.
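This numerical value (and the geometric mean 1/PK(9) used in part (c)) can be reproduced with a few lines of Python, included here only as a check:

```python
import math

lam, t = 9.0, 1.0  # combined rate (trains/hour) and interval length (hours)

# Poisson PMF at k = 9
p9 = (lam * t) ** 9 * math.exp(-lam * t) / math.factorial(9)
print(round(p9, 4))  # 0.1318

# Expected number of hourly trials until the first hour with exactly 9 trains
expected_hours = 1 / p9
print(round(expected_hours, 2))  # 7.59
```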
(c) To answer this question, one needs to consider another Bernoulli process associated with the
combined Poisson process. In this Bernoulli process, each hour is an independent trial and
an hour is a success if exactly 9 trains arrive during that hour. From (b), a success occurs
with probability PK(9) ≈ 0.1318. The question asks what the expected number of trials until
the first success is. The answer is the expected value of a geometric random variable with
parameter PK(9), i.e.

E[number of hours until exactly 9 trains per hour] = 1/PK(9) = 1/0.1318 ≈ 7.59.

(d)
(i) An A train will be delayed if the time, denoted by Z, from the arrival of the A train to
the moment of the arrival of the next B train is less than 30 seconds. By the memoryless
property, this time has an exponential PDF with parameter λB. Thus the probability
that an A train is delayed is obtained by

PD = P{Z ≤ 30 seconds} = ∫₀^(1/120) λB e^(−λB t) dt = ∫₀^(1/120) 6e^(−6t) dt = 1 − e^(−1/20) ≈ 0.0488.
Since probabilities equal long-run frequencies, this is also approximately the fraction of
A trains that are delayed over some reasonably long period, say a month.
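For reference (not in the original solution), the numbers used in part (d) can be verified directly; the exact conditional expectation computed below is the 0.00413-hour figure used later in this part:

```python
import math

lam_b = 6.0      # B-train rate (per hour)
c = 1.0 / 120.0  # 30 seconds expressed in hours

# Probability that an A train is delayed: P{Z <= c}, Z ~ exponential(lam_b)
p_delay = 1.0 - math.exp(-lam_b * c)
print(round(p_delay, 4))  # 1 - e^(-1/20), about 0.0488

# Expected holding time of a delayed A train: E[Z | Z <= c]
# (close to c/2 = 1/240 because c is much smaller than 1/lam_b)
e_hold = 1.0 / lam_b - c * math.exp(-lam_b * c) / (1.0 - math.exp(-lam_b * c))
print(round(e_hold, 5))  # about 0.00413 hours
```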
(ii) A B train passenger who benefits from the delay policy does not have to wait at all for
an A train, so her expected waiting time under the policy is zero. Without the policy,
her mean waiting time would be 1/λA = 1/3 hour, so each benefiting passenger saves 1/3 hour on average.

An A train that is held waits W = Z hours, where Z is now conditioned on being at most 30 seconds = 1/120 hour. Since 1/120 hour is much smaller than 1/λB, the conditional density of W is nearly uniform over [0, 1/120], and

E[W] = E[Z | Z ≤ 1/120] ≈ (1/2)(1/120) hours;

the exact value is 1/λB − (1/120)e^(−1/20)/(1 − e^(−1/20)) ≈ 0.00413 hours. Note that the conditioning
cuts off the right tail of the exponential density curve, fW(w), but what is left is NOT
uniform.
Let us assume that it never happens that two or more A trains are delayed waiting for
the same B train, i.e. we ignore the possibility that two A trains may arrive within
30 seconds of each other. Under this assumption, one A train is held for every B train
receiving the benefits. Therefore the policy will lead to a net global travel time reduction
if
E[total time reduction] > E[total time increase]
(1/3) E[NBA] hours > E[NA] × 0.00413 hours
E[NBA] > 0.012 E[NA],
where NBA is the number of people on a B train who wish to transfer to an A train and
NA is the number of people on an A train being held. In words, the policy is favored if
the average number of people on a B train who wish to transfer is at least 1.2% of the
average number of people on an A train.
This is a condition that one would expect to hold true for most subway transfer stations.
You might want to think about the effect of ignoring the possibility of two A trains
arriving within 30 seconds of each other. How would one assess the reasonableness of
this simplification? How would the analysis change if one did not make this simplifying
assumption?
2. Let X be the distance from the midpoint of the needle to the nearest line, and let Θ be the angle between the needle and the direction perpendicular to the lines. Then X is uniformly distributed over [0, d/2], Θ is uniformly distributed over [0, π/2], and the two are independent. The needle touches a line if and only if (l/2) cos Θ ≥ X.

Let T be the event that the needle touches a line (or lines). If l ≤ d, P(T) = 2l/(πd), as we saw in class. Now suppose l > d. If 0 ≤ Θ ≤ cos⁻¹(d/l), then (l/2) cos Θ ≥ d/2, so the needle touches a line no matter where its midpoint lands. For the sake of notational simplicity, let A represent the event that 0 ≤ Θ ≤ cos⁻¹(d/l). Using
the total probability theorem, P(T) is computed by

P(T) = P(T | A)P(A) + P(T | Aᶜ)P(Aᶜ).

Since Θ ~ U[0, π/2], P(A) = cos⁻¹(d/l) / (π/2) = (2/π) cos⁻¹(d/l), and P(T | A) = 1. Hence

P(T) = (2/π) cos⁻¹(d/l) + P(T | Aᶜ) (1 − (2/π) cos⁻¹(d/l)).

Given Θ = θ with cos⁻¹(d/l) < θ ≤ π/2, the needle touches a line with probability P{X ≤ (l/2) cos θ} = ((l/2) cos θ)/(d/2) = (l/d) cos θ. Hence

P(T | Aᶜ) = ∫ (l/d) cos θ · f_{Θ|Aᶜ}(θ) dθ,  integrated over cos⁻¹(d/l) ≤ θ ≤ π/2,

where f_{Θ|Aᶜ}(θ) = f_Θ(θ) / P(Aᶜ) = (2/π)/P(Aᶜ) on that interval. Therefore

P(T | Aᶜ) P(Aᶜ) = (2/π)(l/d) ∫ cos θ dθ = (2/π)(l/d) [1 − sin(cos⁻¹(d/l))].

Because sin θ = √(1 − cos²θ), we have sin(cos⁻¹(d/l)) = √(1 − (d/l)²). Now we have

P(T) = (2/π) cos⁻¹(d/l) + (2l/(πd)) [1 − √(1 − (d/l)²)].

To summarize,

P(T) = 2l/(πd)   if l ≤ d,
P(T) = (2/π) cos⁻¹(d/l) + (2l/(πd)) [1 − √(1 − (d/l)²)]   if l > d.
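As a sanity check on both branches of this formula (a Python illustration, not part of the original solution), one can simulate the needle directly using the same parametrization as above: X ~ U[0, d/2], Θ ~ U[0, π/2], with a touch whenever X ≤ (l/2) cos Θ.

```python
import math
import random

random.seed(2)

def buffon_exact(l, d):
    """P(T) from the formula above."""
    if l <= d:
        return 2 * l / (math.pi * d)
    return (2 / math.pi) * math.acos(d / l) \
        + (2 * l / (math.pi * d)) * (1 - math.sqrt(1 - (d / l) ** 2))

def buffon_mc(l, d, n=200000):
    """Monte Carlo estimate of P(T)."""
    hits = 0
    for _ in range(n):
        x = random.uniform(0, d / 2)            # midpoint to nearest line
        theta = random.uniform(0, math.pi / 2)  # angle to the perpendicular
        if x <= (l / 2) * math.cos(theta):
            hits += 1
    return hits / n

short_est = buffon_mc(0.5, 1.0)
long_est = buffon_mc(2.0, 1.0)
print(buffon_exact(0.5, 1.0), short_est)  # both near 1/pi, about 0.318
print(buffon_exact(2.0, 1.0), long_est)   # both near 0.837
```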
3. (Kang, 2001)
Let random variable Y denote the interarrival time of buses.
E[Y] = 3(0.4) + 5(0.5) + 12(0.1) = 4.9
E[Y²] = 3²(0.4) + 5²(0.5) + 12²(0.1) = 30.5
(a) Let V be the waiting time. Using Equation (2.66) in the textbook,
E[V] = E[Y²] / (2E[Y]) = 30.5 / (2 × 4.9) ≈ 3.11 minutes
(b) Consider N intervals, where N is very large. We can expect that the number of intervals with
length of 3 minutes is 0.4N . Similarly, 0.5N and 0.1N are the numbers of intervals with length
of 5 minutes and of 12 minutes, respectively. Therefore, the total length (minutes) of the N
intervals is 3(0.4N) + 5(0.5N) + 12(0.1N) = 4.9N. The probability that he arrives during
a 12-minute interval is the ratio of the total length taken up by 12-minute intervals to
4.9N.
P(Mendel arrives during a 12-minute interval) = (12 × 0.1N) / (4.9N) ≈ 0.245
(c) Let W be the length of the interval in which Mendel arrives. We can compute P (V < 1) by
P(V < 1) = P(V < 1 | W = 3)P(W = 3) + P(V < 1 | W = 5)P(W = 5) + P(V < 1 | W = 12)P(W = 12).
Given that Mendel arrives in a 3-minute interval, the probability that he waits less than one
minute is 1/3, because the moment of his arrival is totally random (uniformly distributed) over
the interval. Similarly, the conditional probabilities for the 5-minute and 12-minute intervals are
1/5 and 1/12, respectively. Hence,

P(V < 1) = (1/3) P(W = 3) + (1/5) P(W = 5) + (1/12) P(W = 12)
= (1/3)(3 × 0.4/4.9) + (1/5)(5 × 0.5/4.9) + (1/12)(12 × 0.1/4.9)
= (0.4 + 0.5 + 0.1)/4.9 ≈ 0.204
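All three answers follow from a few lines of arithmetic, sketched here in Python for convenience (not part of the original solution):

```python
# Interarrival-time PMF: 3 min w.p. 0.4, 5 min w.p. 0.5, 12 min w.p. 0.1
pmf = {3: 0.4, 5: 0.5, 12: 0.1}

ey = sum(y * p for y, p in pmf.items())       # E[Y] = 4.9
ey2 = sum(y * y * p for y, p in pmf.items())  # E[Y^2] = 30.5

# (a) random-incidence waiting time E[V] = E[Y^2] / (2 E[Y])
ev = ey2 / (2 * ey)
print(round(ev, 2))  # 3.11

# (b) length-biased probability of landing in a 12-minute interval
p12 = 12 * pmf[12] / ey
print(round(p12, 3))  # 0.245

# (c) P(V < 1), conditioning on the (length-biased) interval length
pv1 = sum((1 / y) * (y * p / ey) for y, p in pmf.items())
print(round(pv1, 3))  # 0.204
```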
Let (X1, Y1) and (X2, Y2) denote the locations of the response
unit and incident, respectively. Let S denote the set of points within the central square and Sᶜ the set of points outside it.
Let A = {(X1, Y1) ∈ S} and B = {(X2, Y2) ∈ S}; Aᶜ and Bᶜ denote the complementary events.

[Figure: a unit square containing a central square of side a; the rectangles bordering the central square are of type R1 and the corner rectangles are of type R2.]
(a) Let us consider the case in which incidents and the response unit are uniformly, independently distributed over the entire square. In this case, the expected travel distance can be
decomposed by conditioning on the regions in which the two locations fall. Since both locations are uniform over a square of side a when A ∩ B occurs, E[D | A ∩ B] = (2/3)a, and we have

E[D] = E[D | A ∩ B] P(A) P(B) + 2 E[D | A ∩ Bᶜ] P(A) P(Bᶜ) + E[D | Aᶜ ∩ Bᶜ] P(Aᶜ) P(Bᶜ)
= (2/3)a (a²)² + 2 E[D | A ∩ Bᶜ] a²(1 − a²) + E[D | Aᶜ ∩ Bᶜ] (1 − a²)²,

where the factor 2 accounts for the symmetric case Aᶜ ∩ B.

(b)
(i) The region outside the central square can be divided into two classes of identically-sized shapes: four rectangles of
type R1 (bordering the central square), each of area a(1 − a)/2, and four rectangles of type R2 (at the corners of the
unit square), each of area ((1 − a)/2)². Hence,

P{(X2, Y2) ∈ R1 | (X2, Y2) ∈ R1 ∪ R2} = P{(X2, Y2) ∈ R1} / P{(X2, Y2) ∈ R1 ∪ R2}
= [4 · a(1 − a)/2] / (1 − a²) = 2a(1 − a) / ((1 − a)(1 + a)) = 2a / (1 + a),

where R1 ∪ R2 denotes the union of all eight rectangles.
(iii) Let Dx, Dy be the travel distances in the x direction and in the y direction, respectively. From
class, we know E[Dx | A ∩ R1] = a/3. If the locations of the response unit and incident are
in the central square and in an R2 corner rectangle, respectively, then the expected x distance is the expected distance from the response unit to the near edge of the central square, a/2, plus the expected distance from that edge to the incident, (1/2)·((1 − a)/2):

E[Dx | A ∩ R2] = a/2 + (1/2)·((1 − a)/2),

and E[Dy | A ∩ R2] is the same by symmetry. Therefore,

E[D | A ∩ R2] = E[Dx | A ∩ R2] + E[Dy | A ∩ R2] = 2 (a/2 + (1/2)·((1 − a)/2)) = 1/2 + a/2.

Similarly, E[D | A ∩ R1] = a/3 + a/2 + (1/2)·((1 − a)/2). Combining these conditional expectations with the decomposition in (a) (and E[D] = 2/3 for the whole square), the expected travel distance when both locations lie outside the central square works out to

E[D | Aᶜ ∩ Bᶜ] = [ (2/3)(1 − a⁵) − ((4/3)a² + a + 1) a²(1 − a) ] / (1 − a²)²
= (−2a⁴ − a³ − a² + 2a + 2) / (3(1 + a)(1 − a²))
= (2a³ + 3a² + 4a + 2) / (3(1 + a)²).
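The closed form E[D | both outside] = (2a³ + 3a² + 4a + 2)/(3(1 + a)²) can be spot-checked by rejection sampling (an illustrative Python sketch, not part of the original solution; right-angle travel distance is assumed, and the barrier perturbation is deliberately ignored here, as in part (b)):

```python
import random

random.seed(3)

def formula(a):
    return (2 * a**3 + 3 * a**2 + 4 * a + 2) / (3 * (1 + a) ** 2)

def mc_outside(a, n=200000):
    """E[|x1-x2| + |y1-y2|] given both points fall outside the central a-by-a square."""
    lo, hi = (1 - a) / 2, (1 + a) / 2
    def inside(x, y):
        return lo < x < hi and lo < y < hi
    total, count = 0.0, 0
    while count < n:
        x1, y1, x2, y2 = (random.random() for _ in range(4))
        if inside(x1, y1) or inside(x2, y2):
            continue  # rejection step: condition on both locations being outside
        total += abs(x1 - x2) + abs(y1 - y2)
        count += 1
    return total / n

a = 0.5
est = mc_outside(a)
print(formula(a), est)  # both near 0.7407
```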
Let W(a) denote E[D | Aᶜ ∩ Bᶜ], the expected travel distance given that both locations are outside the central square. W(0) indicates the expected travel distance when no zero-demand zone exists, which should
be equal to E[D]; indeed, W(0) = 2/3. Also, W(1) = 11/12.
When a = 1, the response unit and incidents are uniformly distributed along the perimeter of the unit square, and
W(1) is the expected travel distance from one point on the perimeter to another point on
the perimeter. Let us compute this quantity in another way. Consider the locations of the
response unit and incident. There are 16 equally likely (edge, edge) cases to consider:

The response unit and the incident are on the same edge of the square (4 cases). The
expected travel distance between the two locations is 1/3.

The response unit and the incident are on adjacent edges of the square (8 cases). The
expected travel distance between the two locations is 1/2 + 1/2 = 1.

The response unit and the incident are on opposite edges of the square (4 cases). The
expected travel distance between the two locations is 1/3 + 1 = 4/3.

Since all cases are equally likely, W(1) = (4/16)(1/3) + (8/16)(1) + (4/16)(4/3) = 11/12.
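This 16-case computation is easy to confirm by simulation (a Python sketch for illustration only): draw two independent uniform points on the perimeter and average their right-angle travel distance.

```python
import random

random.seed(4)

def perimeter_point(u):
    """Map u in [0, 4) to a point on the unit-square perimeter."""
    side, f = int(u) % 4, u - int(u)
    if side == 0:
        return (f, 0.0)        # bottom edge
    if side == 1:
        return (1.0, f)        # right edge
    if side == 2:
        return (1.0 - f, 1.0)  # top edge
    return (0.0, 1.0 - f)      # left edge

N = 200000
total = 0.0
for _ in range(N):
    x1, y1 = perimeter_point(random.uniform(0.0, 4.0))
    x2, y2 = perimeter_point(random.uniform(0.0, 4.0))
    total += abs(x1 - x2) + abs(y1 - y2)  # right-angle travel distance

w1_est = total / N
print(w1_est)  # close to 11/12, about 0.9167
```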
[Figure: the region outside the central zero-demand square is divided into eight rectangles R1, ..., R8; R1 and R5 lie on opposite sides of the central square, as do R3 and R7. Z1 and Z2 mark the x distances used in part (b) below.]
(a) Given that (X1, Y1) and (X2, Y2) both lie outside the central square, the cases where the perturbation term is strictly
positive are:

(X1, Y1) ∈ R1 and (X2, Y2) ∈ R5
(X1, Y1) ∈ R5 and (X2, Y2) ∈ R1
(X1, Y1) ∈ R3 and (X2, Y2) ∈ R7
(X1, Y1) ∈ R7 and (X2, Y2) ∈ R3

Each of these four cases has the same conditional probability; for example,

P((X1, Y1) ∈ R1, (X2, Y2) ∈ R5 | Aᶜ ∩ Bᶜ) = P((X1, Y1) ∈ R1, (X2, Y2) ∈ R5) / P(Aᶜ ∩ Bᶜ)
= (a(1 − a)/2)² / (1 − a²)² = a² / (4(a + 1)²).

Therefore,

P(perturbation > 0 | Aᶜ ∩ Bᶜ) = 4 · a²/(4(a + 1)²) = (a/(a + 1))².
(b) Consider the case where (X1, Y1) ∈ R1 and (X2, Y2) ∈ R5. Note that in this case, there is no
extra travel distance in the y direction. The travel distance in the x direction is given by

Dx^B | R1 ∩ R5 = min(Z1 + Z2, 2a − Z1 − Z2) = { Z1 + Z2, if Z1 + Z2 ≤ a;  2a − Z1 − Z2, otherwise },

where Z1 and Z2 are the x distances from the left edges of R1 and R5 to the response unit
and the incident, respectively (see the figure above). Since Z1 and Z2 are independent and uniformly distributed over [0, a],

E[Dx^B | R1 ∩ R5] = (1/a²) [ ∫₀^a ∫₀^(a−z1) (z1 + z2) dz2 dz1 + ∫₀^a ∫_(a−z1)^a (2a − z1 − z2) dz2 dz1 ]
= (1/3)a + (1/3)a = (2/3)a.
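The value 2a/3 also follows from a one-line simulation (illustrative Python, not part of the original solution): Z1 and Z2 are independent U[0, a], and travel takes whichever way around the barrier is shorter.

```python
import random

random.seed(5)

a = 0.4
N = 200000
total = 0.0
for _ in range(N):
    z1 = random.uniform(0.0, a)
    z2 = random.uniform(0.0, a)
    total += min(z1 + z2, 2 * a - z1 - z2)  # shorter way around the barrier

est = total / N
print(est, 2 * a / 3)  # both near 0.2667
```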
The expected travel distance in the x direction without the square barrier is (1/3)a. Therefore
the extra travel distance, given that the perturbation term is positive, is (2/3)a − (1/3)a = (1/3)a. This gives

W_E(a) = (1/3)a · (a/(a + 1))².

In particular, W_E(1) = (1/3)(1/2)² = 1/12, so that W(1) + W_E(1) = 11/12 + 1/12 = 1.

Let us verify this by directly considering the locations
of the response unit and incident when a = 1 (see Problem 3.13 (c)). There are 16 possible
cases to consider. Here, we focus on the extra travel distance due to the zero-demand zone
barrier, which is additionally required compared to Problem 3.13 (c).
The response unit and the incident are on the same edge of the square (4 cases). The
expected extra travel distance between the two locations is 0.

The response unit and the incident are on adjacent edges of the square (8 cases). The
expected extra travel distance between the two locations is also 0.

The response unit and the incident are on opposite edges of the square (4 cases). Note
that the expected travel distance between the two locations was 1/3 + 1 = 4/3. However, since
no travel is allowed through the zero-demand zone, travel must go around the square, and the expected travel distance becomes 1 + 2/3 = 5/3. The expected extra travel distance is therefore 2/3 − 1/3 = 1/3.

Hence the expected extra travel distance is (4/16)(0) + (8/16)(0) + (4/16)(1/3) = 1/12, which agrees with W_E(1) = 1/12.
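Under the reading that at a = 1 all travel is forced onto the perimeter, the total W(1) + W_E(1) = 11/12 + 1/12 = 1 can be confirmed by treating the perimeter as a cycle of length 4 and averaging the shorter-arc distance between two uniform points (illustrative Python sketch, not part of the original solution):

```python
import random

random.seed(6)

N = 200000
total = 0.0
for _ in range(N):
    s1 = random.uniform(0.0, 4.0)  # arc-length position on the perimeter
    s2 = random.uniform(0.0, 4.0)
    gap = abs(s1 - s2)
    total += min(gap, 4.0 - gap)   # travel the shorter way around

est = total / N
print(est)  # close to 1 = 11/12 + 1/12
```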
Let G(a) = E[D^p], where D = |X1 − X2| and X1, X2 are independently and uniformly distributed over [0, a]. To derive a differential equation for G, consider G(a + Δ) and condition on where X1 and X2 fall. For the case a < X1 ≤ a + Δ, 0 ≤ X2 ≤ a, the conditional expectation of D^p is

∫∫ (x1 − x2)^p fX1(x1) fX2(x2) dx2 dx1,

where fX1(x1) and fX2(x2) are the probability density functions of X1 and X2, respectively. Because
X1 and X2 are uniformly distributed over (a, a + Δ] and [0, a] respectively, fX1(x1) = 1/Δ and fX2(x2) = 1/a. Thus,

E[D^p | a < X1 ≤ a + Δ, 0 ≤ X2 ≤ a]
= (1/Δ)(1/a) ∫_a^(a+Δ) ∫₀^a (x1 − x2)^p dx2 dx1
= (1/Δ)(1/a) (1/(p + 1)) ∫_a^(a+Δ) [x1^(p+1) − (x1 − a)^(p+1)] dx1
= (1/Δ)(1/a) (1/((p + 1)(p + 2))) [x1^(p+2) − (x1 − a)^(p+2)] evaluated from a to a + Δ
= (1/Δ)(1/a) (1/((p + 1)(p + 2))) [(a + Δ)^(p+2) − Δ^(p+2) − a^(p+2)]
= (1/Δ)(1/a) (1/((p + 1)(p + 2))) [(p + 2) a^(p+1) Δ + o(Δ)]
→ a^p/(p + 1)  as Δ → 0.

The case 0 ≤ X1 ≤ a, a < X2 ≤ a + Δ gives a^p/(p + 1) as Δ → 0 as well, by symmetry. If
0 ≤ X1 ≤ a and 0 ≤ X2 ≤ a, then the conditional expectation is simply G(a). Finally, we do not have to compute it for
the case where a < X1 ≤ a + Δ and a < X2 ≤ a + Δ because the associated probability is negligible.
The following table summarizes the cases:

Case                                 Probability of the case        Conditional E[D^p]
0 ≤ X1 ≤ a, 0 ≤ X2 ≤ a              (a/(a + Δ))²                   G(a)
a < X1 ≤ a + Δ, 0 ≤ X2 ≤ a          (Δ/(a + Δ))(a/(a + Δ))         a^p/(p + 1)
0 ≤ X1 ≤ a, a < X2 ≤ a + Δ          (a/(a + Δ))(Δ/(a + Δ))         a^p/(p + 1)
a < X1 ≤ a + Δ, a < X2 ≤ a + Δ      (Δ/(a + Δ))²                   (we do not care)
Combining the cases by the total expectation theorem,

G(a + Δ) = (a/(a + Δ))² G(a) + (Δa/(a + Δ)²) (a^p/(p + 1)) + (Δa/(a + Δ)²) (a^p/(p + 1)) + o(Δ)
= (a/(a + Δ))² G(a) + 2Δ a^(p+1) / ((p + 1)(a + Δ)²) + o(Δ).
Using the expansions

(a/(a + Δ))² = (1 + Δ/a)^(−2) = 1 − 2Δ/a + 3(Δ/a)² − ···,
1/(a + Δ) = (1/a)(1 − Δ/a + ···),

we obtain the following approximations:

(a/(a + Δ))² ≈ 1 − 2Δ/a,
a/(a + Δ)² ≈ (1/a)(1 − 2Δ/a) ≈ 1/a.

Hence

G(a + Δ) ≈ G(a)(1 − 2Δ/a) + 2Δ a^(p−1)/(p + 1),

so that

[G(a + Δ) − G(a)]/Δ ≈ −(2/a) G(a) + 2a^(p−1)/(p + 1).

Letting Δ → 0, we obtain the following differential equation:

G′(a) = −(2/a) G(a) + 2a^(p−1)/(p + 1).
Judicious guesses (or consultation with books on differential equations) lead us to the following
solution:

G(a) = E[D^p] = 2a^p / ((p + 1)(p + 2)).

We can skip the derivation of the differential equation by directly using Equation (3.64) in the
textbook. Once we obtain G(a + Δ), we can plug it into (3.64), which gives the same differential
equation as above.
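A quick Monte Carlo check of the solution (illustration only, not part of the original) estimates E[|X1 − X2|^p] for uniform X1, X2 on [0, a] and compares it with 2a^p/((p + 1)(p + 2)):

```python
import random

random.seed(7)

def g_formula(a, p):
    return 2 * a**p / ((p + 1) * (p + 2))

def g_mc(a, p, n=200000):
    """Monte Carlo estimate of E[|X1 - X2|^p], X1, X2 ~ U[0, a]."""
    total = 0.0
    for _ in range(n):
        total += abs(random.uniform(0, a) - random.uniform(0, a)) ** p
    return total / n

est_1 = g_mc(1.0, 1)  # formula gives 2/(2*3) = 1/3
est_2 = g_mc(2.0, 3)  # formula gives 2*8/(4*5) = 0.8
print(g_formula(1.0, 1), est_1)
print(g_formula(2.0, 3), est_2)
```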