Worcester Polytechnic Institute
Department of Mathematical Sciences
Professor: Stephan Sturm
Grader: Hien Pham
Fall 2025 - B Term
MA 4237
Probabilistic Methods of
Operations Research
Assignment 3
due on Thursday, November 20, 11:59 p.m.
This assignment is completed by Floris V. Huiskers
1. Consider the symmetric random walk on $\mathbb{Z}^2$ given by the transition probabilities
$$P[X_{n+1} = (i+1, j)\,|\,X_n = (i,j)] = P[X_{n+1} = (i-1, j)\,|\,X_n = (i,j)] = P[X_{n+1} = (i, j+1)\,|\,X_n = (i,j)] = P[X_{n+1} = (i, j-1)\,|\,X_n = (i,j)] = \frac14$$
for all states $(i, j) \in \mathbb{Z}^2$. Show that this random walk is recurrent.
Note: This works similarly to the proof for the random walk on $\mathbb{Z}$ discussed in class. Note that on the way you will encounter the binomial identity
$$\binom{2n}{n} = \sum_{j=0}^{n} \binom{n}{j}\binom{n}{n-j},$$
which you will have to prove.
[Diagram: the state $(i, j)$ with transition probability $\frac14$ to each of its four neighbors $(i \pm 1, j)$ and $(i, j \pm 1)$.]
First, we show the following lemma:
Lemma: Given an integer $n$,
$$\binom{2n}{n} = \sum_{j=0}^{n} \binom{n}{j}\binom{n}{n-j}.$$
Proof: Consider 2n as a set of 2n distinct elements, and n as the number of elements we want to pick from the 2n total elements. Then, by definition, there are $\binom{2n}{n}$ ways to pick n elements. Now, we show that $\sum_{j=0}^{n} \binom{n}{j}\binom{n}{n-j}$ counts the same thing. Then, by the principle of double counting, the two must be equal.
Suppose that we split the 2n distinct elements into 2 disjoint sets of size n each. Now, when we pick a total of n elements, we pick j elements from the first set and n − j elements from the second set. Note that choices for different values of j are disjoint, so summing over 0 ≤ j ≤ n counts each n-subset exactly once. Hence, this counts the total number of ways we can pick n elements from 2n.
It remains to determine, given that we pick j elements from the first set, how many ways there are to pick a total of n elements. It follows directly from the definition of a combination, together with the fact that the choice of j elements from the first set and the choice of n − j elements from the second set are independent of each other, that there are $\binom{n}{j}\binom{n}{n-j}$ ways to do this.
Using the previous, we conclude that the total number of ways we can pick n elements is given by $\sum_{j=0}^{n} \binom{n}{j}\binom{n}{n-j}$. Then the result follows.
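The lemma can also be checked numerically. The following sketch (not part of the original submission) verifies the identity with Python's `math.comb` for small n:

```python
# A quick numerical check of the lemma:
# C(2n, n) == sum_{j=0}^{n} C(n, j) * C(n, n - j) for small n.
from math import comb

for n in range(20):
    lhs = comb(2 * n, n)
    rhs = sum(comb(n, j) * comb(n, n - j) for j in range(n + 1))
    assert lhs == rhs, (n, lhs, rhs)
print("identity verified for n = 0, ..., 19")
```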
Now we prove the main result.
Proof: First, we note that this Markov chain is irreducible. Hence, to prove that this Markov chain is recurrent, it suffices to show that any one state in the Markov chain is recurrent. In particular, pick any state in this Markov chain and call it state 0. We will show that $\sum_{n=0}^{\infty} p^n_{00} = \infty$.
Now, first consider the state 0 as the coordinate pair (0, 0) in the integer grid $\mathbb{Z} \times \mathbb{Z}$. Then, every possible one-step transition from an integer coordinate pair to a neighboring coordinate pair is done by increasing or decreasing either the x or the y coordinate by 1. Now, consider the parity of the sum of the x and y coordinates of any coordinate pair. Each one-step transition changes this parity (any two neighbors have different parity). The parity of (0, 0) is even. Hence, however we 'walk' the grid with n steps, the parity changes n times (once with each step). Hence, we conclude that any walk of length n which starts and ends in (0, 0) must have n even. Then, it must be that $p^{2n+1}_{00} = 0$ (for any non-negative integer n), so that $\sum_{n=0}^{\infty} p^n_{00} = \sum_{n=0}^{\infty} p^{2n}_{00}$.
Now, for all non-negative integers n, we consider the number of walks of length 2n which start and end in (0, 0). By a similar argument as before, one realizes that in any such walk, if you take k steps to the left, you must take k steps to the right. The same holds vertically: for j steps up one must take j steps down. Hence, we know that 2n = 2k + 2j with k + j = n, so j = n − k.
Now, we count the number of ways we can walk 2n steps starting from (0, 0) and returning to (0, 0). Given the number of steps k that we move left, we can determine this by first locking down the order in which we move either left/right or up/down, followed by determining the number of ways in which we can move left/right given that order, followed by counting the number of ways we can move up/down (these last two are independent, because we locked down their order in the first step). Since each of these counts is independent, by the fundamental theorem of counting we can multiply them to determine the total number of walks with k steps to the left.
First, given k, we make k moves to the left and k moves to the right. Then, there are $\binom{2n}{2k}$ ways to choose when to move left/right among all the moves, since the times that we move up/down are then forced.
Next, given the 2k spots where we move left or right, there are precisely $\binom{2k}{k}$ ways in which we can choose when to walk left; when we walk right is then forced.
Now, a similar argument as before shows that the number of ways we can walk up and down is given by $\binom{2j}{j}$. However, note that $2j = 2n - 2k$, so this is counted by $\binom{2j}{j} = \binom{2(n-k)}{n-k}$.
Now, we apply the fundamental theorem of counting, and we get that, given the number k of steps we make to the left, there are $\binom{2n}{2k}\binom{2k}{k}\binom{2(n-k)}{n-k}$ ways in which we can start at (0, 0) and return to (0, 0) in 2n steps.
Lastly, note that k runs from 0 (no steps left/right) to n (all steps left/right). Hence, we find that the total number of ways we can start at (0, 0) and return to (0, 0) in 2n steps is given by $\sum_{k=0}^{n} \binom{2n}{2k}\binom{2k}{k}\binom{2(n-k)}{n-k}$.
Now, notice that
$$\binom{2n}{2k}\binom{2k}{k}\binom{2(n-k)}{n-k} = \frac{(2n)!}{(2k)!(2n-2k)!} \cdot \frac{(2k)!}{k!\,k!} \cdot \frac{(2n-2k)!}{(n-k)!(n-k)!} = \frac{(2n)!}{(k!)^2((n-k)!)^2} = \frac{(2n)!}{n!\,n!} \cdot \frac{n!}{k!(n-k)!} \cdot \frac{n!}{(n-k)!\,k!} = \binom{2n}{n}\binom{n}{k}\binom{n}{n-k}.$$
Hence, we find that
$$\sum_{k=0}^{n} \binom{2n}{2k}\binom{2k}{k}\binom{2(n-k)}{n-k} = \sum_{k=0}^{n} \binom{2n}{n}\binom{n}{k}\binom{n}{n-k} = \binom{2n}{n} \sum_{k=0}^{n} \binom{n}{k}\binom{n}{n-k}.$$
Now, using the lemma, we have that $\sum_{k=0}^{n} \binom{n}{k}\binom{n}{n-k} = \binom{2n}{n}$. Hence, the total number of ways we can start at (0, 0) and return to (0, 0) in 2n steps is given by $\binom{2n}{n}^2$.
Now, since the transition probability in each direction is $\frac14$, it follows immediately that
$$p^{2n}_{00} = \binom{2n}{n}^2 \cdot \left(\frac14\right)^{2n} = \frac{((2n)!)^2}{(n!)^4} \cdot \left(\frac14\right)^{2n}.$$
Now, using Stirling's formula $n! \sim \sqrt{2\pi n}\left(\frac{n}{e}\right)^n$, we have that
$$p^{2n}_{00} \sim \frac{\left(\sqrt{4\pi n}\left(\frac{2n}{e}\right)^{2n}\right)^2}{\left(\sqrt{2\pi n}\left(\frac{n}{e}\right)^{n}\right)^4} \cdot \frac{1}{4^{2n}} = \frac{(4\pi n)\left(\frac{2n}{e}\right)^{4n}}{(4\pi^2 n^2)\left(\frac{n}{e}\right)^{4n}} \cdot \frac{1}{4^{2n}}.$$
Now, we can cancel $4\pi n$ and $\left(\frac{n}{e}\right)^{4n}$, so that we get
$$p^{2n}_{00} \sim \frac{1}{\pi n} \cdot 4^{2n} \cdot \frac{1}{4^{2n}} = \frac{1}{\pi n}.$$
Then, we know that $\sum_{n=1}^{\infty} \frac{1}{\pi n} = \infty$, because it is a (scalar-scaled) harmonic series. Now, we know that if $a_n \sim b_n$, then $\sum_{n} a_n = \infty$ if and only if $\sum_{n} b_n = \infty$. We showed that $p^{2n}_{00} \sim \frac{1}{\pi n}$. Then, using the previous, we find that $\sum_{n=0}^{\infty} p^{n}_{00} = \sum_{n=0}^{\infty} p^{2n}_{00} = \infty$.
This completes the proof.
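As a sanity check on the counting argument, the closed form $p^{2n}_{00} = \binom{2n}{n}^2 4^{-2n}$ can be compared against a brute-force enumeration of all walks, and against the asymptotic $\frac{1}{\pi n}$. A small illustrative sketch (not part of the submission):

```python
# Compare the closed-form return probability of the 2D walk with a
# brute-force count over all 4^(2n) walks, for small n.
from math import comb, pi
from itertools import product

STEPS = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def brute_force_return_prob(num_steps):
    """P[a walk of num_steps uniform steps on Z^2 ends at (0, 0)]."""
    count = 0
    for walk in product(STEPS, repeat=num_steps):
        if sum(s[0] for s in walk) == 0 and sum(s[1] for s in walk) == 0:
            count += 1
    return count / 4 ** num_steps

for n in (1, 2, 3, 4):
    formula = comb(2 * n, n) ** 2 / 4 ** (2 * n)   # the closed form above
    assert abs(brute_force_return_prob(2 * n) - formula) < 1e-12
    print(f"n={n}: p_00^(2n)={formula:.5f}, 1/(pi*n)={1 / (pi * n):.5f}")
```

The asymptotic $\frac{1}{\pi n}$ is only approached slowly, which the printed comparison makes visible.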
2. Bob and Alice repeatedly flip a (fair) coin; if it shows heads, Bob gives Alice one dollar, and if it shows tails, Alice gives Bob one dollar. At the beginning, Alice has $a and Bob has $b, with a, b ∈ N. If somebody loses all their money, the game stops. Calculate the invariant distribution for this Markov chain (is it unique?). What
can you say about the long-run proportions of this chain, and how can you interpret the
limiting distribution?
Answer: The corresponding Markov chain X (representing the wealth of Alice) has a
state space S = {0, 1, . . . , a + b}. Next to that, we have that
P[Xn+1 = a + b|Xn = a + b] = 1 and P[Xn+1 = 0|Xn = 0] = 1, and for i ̸= 0, a + b, we
have that P[Xn+1 = i + 1|Xn = i] = 0.5 and P[Xn+1 = i − 1|Xn = i] = 0.5. Hence, we
have the following transition matrix.
$$P = \begin{pmatrix}
1 & 0 & 0 & 0 & \cdots & 0 & 0 & 0 \\
0.5 & 0 & 0.5 & 0 & \cdots & 0 & 0 & 0 \\
0 & 0.5 & 0 & 0.5 & \cdots & 0 & 0 & 0 \\
\vdots & & \ddots & \ddots & \ddots & & & \vdots \\
0 & 0 & 0 & 0 & \cdots & 0.5 & 0 & 0.5 \\
0 & 0 & 0 & 0 & \cdots & 0 & 0 & 1
\end{pmatrix}$$
Note that a, b are arbitrary; hence the matrix contains the dotted lines.
Now, we solve π = πP to find an invariant distribution. In particular, we solve $P^T \pi^T = \pi^T$. First, we find that
$$P^T = \begin{pmatrix}
1 & 0.5 & 0 & 0 & \cdots & 0 & 0 & 0 \\
0 & 0 & 0.5 & 0 & \cdots & 0 & 0 & 0 \\
0 & 0.5 & 0 & 0.5 & \cdots & 0 & 0 & 0 \\
\vdots & & \ddots & \ddots & \ddots & & & \vdots \\
0 & 0 & 0 & 0 & \cdots & 0.5 & 0 & 0 \\
0 & 0 & 0 & 0 & \cdots & 0 & 0.5 & 1
\end{pmatrix}$$
Then it is clear that π = [1, 0, 0, 0, . . . , 0, 0] is a solution to the linear system. Hence, this is an invariant distribution of this Markov chain.
However, this invariant distribution is not unique: for example, π = [0.5, 0, 0, 0, . . . , 0, 0.5] is invariant as well.
In particular, there are three classes: the class of state 0, the class of state a + b, and the class containing all states 0 < i < a + b. It is immediate that state 0 and state a + b are recurrent, because they are absorbing. Next, since 0 and a + b are absorbing, and for every state 0 < i < a + b there exists an n such that $p^n_{i0} > 0$ or $p^n_{i,a+b} > 0$, it follows that all states 0 < i < a + b are transient.
Next, the state space is finite, so by the theorem from class every invariant distribution of this Markov chain is a convex combination of the invariant distributions of the two recurrent classes, i.e., it is given by π = α · π₀ + (1 − α) · π_{a+b}, where π₀ = [1, 0, . . . , 0] and π_{a+b} = [0, . . . , 0, 1], for some α ∈ [0, 1] (which depends on the initial condition).
Next, since we have two absorbing states, as time goes to infinity the long-run proportions are concentrated on 0 and a + b. Now, define h(i) to be the probability that, starting in state i, you end up (get absorbed) in state a + b in the limit. Then, 1 − h(i) is the probability that you end up (get absorbed) in state 0 in the limit. Naturally, h(0) = 0 and h(a + b) = 1. Given a state 0 < i < a + b, with the fair coin you go with probability 0.5 to state i + 1 and with probability 0.5 to state i − 1. Hence, we find that
$$h(i) = \tfrac12 h(i-1) + \tfrac12 h(i+1).$$
Now, we rearrange to find h(i) − h(i − 1) = h(i + 1) − h(i). Define d(i) = h(i) − h(i − 1) and note that the previous shows that d(i) = d(i + 1), and thus d is constant for all i. Hence, we find that h(i) is a linear function with respect to i. Let d(i) = c; then we find the general formula h(i) = ci + h(0) = ci. Now, since h(a + b) = 1, we find 1 = c(a + b), so $c = \frac{1}{a+b}$. Hence, we conclude that $h(i) = \frac{i}{a+b}$.
In particular, since the Markov chain X as defined has starting state a, we know that $h(a) = \frac{a}{a+b}$ is the probability to end up in state a + b, and $1 - h(a) = 1 - \frac{a}{a+b} = \frac{b}{a+b}$ is the probability to end up in state 0.
Hence, the limiting distribution gets absorbed in either 0 or a + b, and if Alice starts with more money (a is greater), her probability of winning it all, $\frac{a}{a+b}$, is larger.
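The absorption probabilities h(i) can also be checked numerically. The sketch below (with hypothetical values a = 3, b = 5, not from the problem statement) solves the equations h(i) = ½h(i−1) + ½h(i+1) by fixed-point iteration and compares against i/(a + b):

```python
# Gambler's ruin: solve h(0) = 0, h(a+b) = 1, h(i) = (h(i-1) + h(i+1)) / 2
# by Gauss-Seidel sweeps and compare with the closed form i / (a + b).
a, b = 3, 5                    # hypothetical illustrative capitals
N = a + b
h = [0.0] * (N + 1)
h[N] = 1.0                     # boundary conditions h(0) = 0, h(a+b) = 1
for _ in range(20000):         # sweep until convergence
    for i in range(1, N):
        h[i] = 0.5 * (h[i - 1] + h[i + 1])
for i in range(N + 1):
    assert abs(h[i] - i / N) < 1e-6      # matches h(i) = i / (a + b)
print(f"P[Alice wins] = h({a}) = {h[a]:.4f}")
```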
3. Calculate the invariant distribution of the Markov chain X given by the diagram. Is it
unique? Does the chain necessarily converge to the invariant distribution?
[Diagram: states 0, 1, 2 with arrows 0 → 1 (probability 1), 1 → 0 and 1 → 1 (probability ½ each), 2 → 1 and 2 → 2 (probability ½ each).]
Answer: First, we find that the transition matrix is
$$P = \begin{pmatrix} 0 & 1 & 0 \\ \frac12 & \frac12 & 0 \\ 0 & \frac12 & \frac12 \end{pmatrix} \qquad \text{and we have that} \qquad P^T = \begin{pmatrix} 0 & \frac12 & 0 \\ 1 & \frac12 & \frac12 \\ 0 & 0 & \frac12 \end{pmatrix}.$$
We solve $P^T \pi^T = \pi^T$. First, we check whether $P^T$ has eigenvalue 1:
$$\det(P^T - \lambda I) = \det\begin{pmatrix} -\lambda & \frac12 & 0 \\ 1 & \frac12 - \lambda & \frac12 \\ 0 & 0 & \frac12 - \lambda \end{pmatrix} = \left(\tfrac12 - \lambda\right) \cdot \det\begin{pmatrix} -\lambda & \frac12 \\ 1 & \frac12 - \lambda \end{pmatrix} = \left(\tfrac12 - \lambda\right)\left(-\lambda\left(\tfrac12 - \lambda\right) - \tfrac12 \cdot 1\right) = \left(\tfrac12 - \lambda\right)\left(\lambda^2 - \tfrac12\lambda - \tfrac12\right) = \left(\tfrac12 - \lambda\right)(\lambda - 1)\left(\lambda + \tfrac12\right).$$
Hence, we find that λ = 1 is an eigenvalue.
Let us find the corresponding eigenvector:
$$P^T - 1 \cdot I = \begin{pmatrix} -1 & \frac12 & 0 \\ 1 & -\frac12 & \frac12 \\ 0 & 0 & -\frac12 \end{pmatrix} \to \begin{pmatrix} -1 & \frac12 & 0 \\ 0 & 0 & \frac12 \\ 0 & 0 & -\frac12 \end{pmatrix} \to \begin{pmatrix} 1 & -\frac12 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{pmatrix}.$$
Hence, $\pi_2$ is a free variable with $\pi_1 = \frac12 \pi_2$ and $\pi_3 = 0$. Let $\pi_2 = \frac23$; then $\pi_1 = \frac12 \cdot \frac23 = \frac13$. Hence, we find that $\pi = \left(\frac13, \frac23, 0\right)$.
First, we note that X is a finite Markov chain with a single recurrent class, {0, 1}: from state 2 the chain can only stay put or move into {0, 1} and never return, so state 2 is transient. Hence, this invariant distribution is unique. Next, it suffices to check the periodicity of a single state of the recurrent class. For state 1 we have $p^1_{11} = \frac12 > 0$. Thus the set $\{n : p^{(n)}_{11} > 0\}$ contains 1, and its greatest common divisor is therefore 1. Hence state 1 is aperiodic, and so is the recurrent class. Then, by the convergence theorem (the transient state 2 leaks its mass into the aperiodic recurrent class), for all initial distributions α we have that α · Pⁿ → π as n → ∞. So, the chain necessarily converges to the invariant distribution.
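The invariant distribution and the convergence claim can be verified numerically; the sketch below (not part of the submission) checks πP = π and iterates α · Pⁿ from an arbitrary start:

```python
# Verify that pi = (1/3, 2/3, 0) is invariant and that alpha * P^n -> pi.
P = [[0.0, 1.0, 0.0],
     [0.5, 0.5, 0.0],
     [0.0, 0.5, 0.5]]

def step(dist):
    """One step of the chain: returns dist * P."""
    return [sum(dist[i] * P[i][j] for i in range(3)) for j in range(3)]

pi = [1 / 3, 2 / 3, 0.0]
assert all(abs(x - y) < 1e-12 for x, y in zip(step(pi), pi))   # pi * P = pi

alpha = [0.0, 0.0, 1.0]        # arbitrary starting distribution (state 2)
for _ in range(200):
    alpha = step(alpha)
assert all(abs(x - y) < 1e-10 for x, y in zip(alpha, pi))      # converged to pi
print("alpha * P^200 =", [round(x, 6) for x in alpha])
```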
4. Classify the Markov chain in Figure 1 and calculate the periodicity of every state.
Figure 1: Diagram of the Markov chain of Problem 4.
Answer: There are clearly three classes: {0}, {1, 3}, and {2, 4, 5}. Since periodicity is a class property, we only have to determine the periodicity of one state in each class.
First, the periodicity of 0 is ∞ (or undefined). Since $p_{i0} = 0$ for all i, we have $p^n_{00} = 0$ for all n. Hence, the set $\{n \ge 1 \,|\, p^n_{00} > 0\} = \emptyset$, so by convention the periodicity of 0 is ∞ (or undefined).
Next, the periodicity of 1 is 2. We have $p^n_{11} = 1$ for n even and $p^n_{11} = 0$ for n odd. Hence, $\{n \ge 1 \,|\, p^n_{11} > 0\} = \{2n \,|\, n \in \mathbb{N}\}$, whose gcd is clearly 2. So, the periodicity of each state in the class {1, 3} is 2.
Lastly, state 2 has a self-loop: indeed $p^1_{22} = \frac12 > 0$, so $1 \in \{n : p^n_{22} > 0\}$ and therefore the gcd is 1. Hence the states in the class {2, 4, 5} have periodicity 1.
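Periodicity computations like these can be mechanized. The helper below is an illustration (not part of the assignment): it approximates the period of a state as the gcd of the return times up to a cutoff, demonstrated on two toy chains rather than the chain of Figure 1:

```python
from math import gcd

def period(P, i, n_max=50):
    """Approximate the period of state i as the gcd of all n <= n_max
    with (P^n)_{ii} > 0; returns 0 if state i is never revisited."""
    size = len(P)
    dist = [1.0 if j == i else 0.0 for j in range(size)]
    g = 0
    for n in range(1, n_max + 1):
        # one more step of the chain: dist <- dist * P
        dist = [sum(dist[k] * P[k][j] for k in range(size)) for j in range(size)]
        if dist[i] > 0:
            g = gcd(g, n)
    return g

# Two toy chains (hypothetical examples, not the chain of Figure 1):
two_cycle = [[0.0, 1.0], [1.0, 0.0]]   # states alternate, so period 2
self_loop = [[0.5, 0.5], [1.0, 0.0]]   # state 0 has a self-loop, so period 1
print(period(two_cycle, 0), period(self_loop, 0))  # prints "2 1"
```

The cutoff `n_max` makes this a heuristic; for the small chains in this assignment it finds the true period.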
5. Let X be a random walk on Z, with probability p ∈ (0, 1) to move to the right and 1 − p
to move to the left. Show that this Markov chain does not admit an invariant
distribution.
Proof: Suppose that there does exist an invariant distribution for the random walk; call it π. Then, we note that by definition of an invariant distribution, we must have
$$\pi_j = \sum_i \pi_i p_{ij} = \pi_{j-1} p_{j-1,j} + \pi_{j+1} p_{j+1,j},$$
and filling in that $p_{j-1,j} = p$ and $p_{j+1,j} = 1-p$, we have that $\pi_j = p\pi_{j-1} + (1-p)\pi_{j+1}$.
Rewriting this gives $(1-p)\pi_{j+1} - \pi_j + p\pi_{j-1} = 0$. Now, since $(1-p) + p = 1$, we have that $(1-p)\pi_{j+1} - p\pi_j - (1-p)\pi_j + p\pi_{j-1} = 0$, which leads to $(1-p)(\pi_{j+1} - \pi_j) - p(\pi_j - \pi_{j-1}) = 0$. Finally, we find that $\pi_{j+1} - \pi_j = \frac{p}{1-p}(\pi_j - \pi_{j-1})$.
Similar to what we found in exercise 2, the consecutive differences $\pi_{j+1} - \pi_j$ and $\pi_j - \pi_{j-1}$ differ by a factor of $\frac{p}{1-p}$. Since this holds for all j, we see that the invariant solution $\pi_j$ is proportional to $\left(\frac{p}{1-p}\right)^j$. In particular, we find that $\pi_j = c \cdot \left(\frac{p}{1-p}\right)^j$ for some c. Note that $\frac{p}{1-p} > 0$, so that c > 0 (as $\pi_j > 0$).
Now, since π is a distribution, it must satisfy the normalization condition:
$$\sum_{j \in \mathbb{Z}} \pi_j = \sum_{j=0}^{\infty} c\left(\frac{p}{1-p}\right)^{j} + \sum_{j=1}^{\infty} c\left(\frac{p}{1-p}\right)^{-j}.$$
However, this is a sum of two geometric series, for which we note that if $p > 1-p$, i.e. $\frac{p}{1-p} > 1$, the first sum diverges, and if $p < 1-p$, i.e. $\frac{p}{1-p} < 1$, the second sum diverges. Lastly, if $p = 1-p$, i.e. $\frac{p}{1-p} = 1$, then both series sum the constant c, and so both diverge. Hence, for all p the proposed invariant solution does not satisfy the normalization condition, which is a contradiction.
Hence, we conclude that a random walk on Z, with probability p ∈ (0, 1) to move to the
right and 1 − p to move to the left, does not have an invariant distribution.
This completes the proof.
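As a numerical illustration of the argument (an illustration, not a proof), the sketch below checks that $\pi_j = c \cdot (p/(1-p))^j$ satisfies the balance equation while its partial sums over symmetric windows keep growing, for several values of p:

```python
# For each p: the geometric profile r^j solves the balance equation,
# but its mass over windows [-N, N] grows without bound.
for p in (0.3, 0.5, 0.7):
    r = p / (1 - p)
    # balance equation pi_j = p*pi_{j-1} + (1-p)*pi_{j+1} holds for pi_j = r^j:
    for j in range(-5, 6):
        assert abs(r ** j - (p * r ** (j - 1) + (1 - p) * r ** (j + 1))) < 1e-9
    # ...but the total mass over growing windows keeps increasing:
    partials = [sum(r ** j for j in range(-N, N + 1)) for N in (10, 20, 40)]
    assert partials[0] < partials[1] < partials[2]
    print(f"p={p}: window sums {[round(s, 1) for s in partials]}")
```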
6. The following blood inventory problem is encountered by a hospital: A rare blood type
(AB, Rh negative) is needed on a regular basis. One pint of it is regularly delivered
every three days (independent of the current inventory). If the blood is running out, an
expensive emergency delivery of the amount needed has to be made. The demand D (in
pints) over a three day period is given by
P[D = 0] = 40%,
P[D = 1] = 30%,
P[D = 2] = 20%,
P[D = 3] = 10%
and is independent from period to period. The delivered blood stays in the inventory for
at most 21 days, after that it has to be discarded. To reduce the amount of blood to be
discarded, always the oldest blood available will be used.
Let Xn be the number of pints in the storage directly after the n-th delivery.
a) Show that Xn is a Markov chain and find its transition matrix. Draw its diagram.
Answer: First, since a pint lasts 21 days and deliveries occur every 3 days, a pint can last for 7 periods before being discarded. In particular, the hospital stocks at most 7 pints, because an 8th pint would have to be from more than 21 days ago. Hence, we have the state space {0, 1, 2, 3, 4, 5, 6, 7}.
Now, Xn is the number of pints in stock after the n-th delivery. Next, define Dn ∼ D to be the number of pints needed in the three-day period after the n-th delivery. Then, the number of pints in stock after the n-th delivery is the number of pints in stock after the (n−1)-th delivery, minus the number of pints needed in the period after the (n−1)-th delivery. Since an emergency delivery provides exactly the amount needed when the blood runs out, this quantity becomes 0 if the difference is negative. Then, by the delivery the number of pints increases by 1. Lastly, the total can never exceed 7. We translate this mathematically:
$$X_n = \min\{\max\{X_{n-1} - D_{n-1},\, 0\} + 1,\, 7\}$$
Since $D_{n-1} \sim D$ is independent of the number of pints (it has a fixed distribution), and Xn depends further only on $X_{n-1}$, we clearly have that $P[X_{n+1} = j \,|\, X_n, X_{n-1}, \ldots, X_0] = P[X_{n+1} = j \,|\, X_n]$, so this is a Markov chain.
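The recursion can be simulated directly. The following sketch (illustrative, using Python's `random` module with a fixed seed) estimates the long-run proportion of time spent in each stock level:

```python
# Simulate X_n = min(max(X_{n-1} - D, 0) + 1, 7) under the stated demand law.
import random

random.seed(0)                      # reproducible illustration
DEMANDS, PROBS = [0, 1, 2, 3], [0.4, 0.3, 0.2, 0.1]

counts = [0] * 8
x = 1                               # stock after the first delivery
for _ in range(200_000):
    d = random.choices(DEMANDS, weights=PROBS)[0]
    x = min(max(x - d, 0) + 1, 7)   # the recursion from the text
    counts[x] += 1
freq = [c / sum(counts) for c in counts]
print("empirical proportions:", [round(f, 2) for f in freq])
```

Note that state 0 is never visited after the first delivery, since the update always adds the delivered pint.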
Now that we have the model, we can find the transition matrix:
If Xn = 0, then Xn+1 = min{max{0 − D, 0} + 1, 7}:
• If D = 0, it follows that Xn+1 = 1, which has probability P[D = 0] = 0.4.
• If D = 1, it follows that Xn+1 = 1, which has probability P[D = 1] = 0.3.
• If D = 2, it follows that Xn+1 = 1, which has probability P[D = 2] = 0.2.
• If D = 3, it follows that Xn+1 = 1, which has probability P[D = 3] = 0.1.
Then, we have that P0i = [0, 1, 0, 0, 0, 0, 0, 0].
We copy this process for all states:
If Xn = 1, then Xn+1 = min{max{1 − D, 0} + 1, 7}:
• If D = 0, it follows that Xn+1 = 2, which has probability P[D = 0] = 0.4.
• If D = 1, it follows that Xn+1 = 1, which has probability P[D = 1] = 0.3.
• If D = 2, it follows that Xn+1 = 1, which has probability P[D = 2] = 0.2.
• If D = 3, it follows that Xn+1 = 1, which has probability P[D = 3] = 0.1.
Then, we have that P1i = [0, 0.6, 0.4, 0, 0, 0, 0, 0].
If Xn = 2, then Xn+1 = min{max{2 − D, 0} + 1, 7}:
• If D = 0, it follows that Xn+1 = 3, which has probability P[D = 0] = 0.4.
• If D = 1, it follows that Xn+1 = 2, which has probability P[D = 1] = 0.3.
• If D = 2, it follows that Xn+1 = 1, which has probability P[D = 2] = 0.2.
• If D = 3, it follows that Xn+1 = 1, which has probability P[D = 3] = 0.1.
Then, we have that P2i = [0, 0.3, 0.3, 0.4, 0, 0, 0, 0].
If Xn = 3, then Xn+1 = min{max{3 − D, 0} + 1, 7}:
• If D = 0, it follows that Xn+1 = 4, which has probability P[D = 0] = 0.4.
• If D = 1, it follows that Xn+1 = 3, which has probability P[D = 1] = 0.3.
• If D = 2, it follows that Xn+1 = 2, which has probability P[D = 2] = 0.2.
• If D = 3, it follows that Xn+1 = 1, which has probability P[D = 3] = 0.1.
Then, we have that P3i = [0, 0.1, 0.2, 0.3, 0.4, 0, 0, 0].
If Xn = 4, then Xn+1 = min{max{4 − D, 0} + 1, 7}:
• If D = 0, it follows that Xn+1 = 5, which has probability P[D = 0] = 0.4.
• If D = 1, it follows that Xn+1 = 4, which has probability P[D = 1] = 0.3.
• If D = 2, it follows that Xn+1 = 3, which has probability P[D = 2] = 0.2.
• If D = 3, it follows that Xn+1 = 2, which has probability P[D = 3] = 0.1.
Then, we have that P4i = [0, 0, 0.1, 0.2, 0.3, 0.4, 0, 0].
If Xn = 5, then Xn+1 = min{max{5 − D, 0} + 1, 7}:
• If D = 0, it follows that Xn+1 = 6, which has probability P[D = 0] = 0.4.
• If D = 1, it follows that Xn+1 = 5, which has probability P[D = 1] = 0.3.
• If D = 2, it follows that Xn+1 = 4, which has probability P[D = 2] = 0.2.
• If D = 3, it follows that Xn+1 = 3, which has probability P[D = 3] = 0.1.
Then, we have that P5i = [0, 0, 0, 0.1, 0.2, 0.3, 0.4, 0].
If Xn = 6, then Xn+1 = min{max{6 − D, 0} + 1, 7}:
• If D = 0, it follows that Xn+1 = 7, which has probability P[D = 0] = 0.4.
• If D = 1, it follows that Xn+1 = 6, which has probability P[D = 1] = 0.3.
• If D = 2, it follows that Xn+1 = 5, which has probability P[D = 2] = 0.2.
• If D = 3, it follows that Xn+1 = 4, which has probability P[D = 3] = 0.1.
Then, we have that P6i = [0, 0, 0, 0, 0.1, 0.2, 0.3, 0.4].
If Xn = 7, then Xn+1 = min{max{7 − D, 0} + 1, 7}:
• If D = 0, it follows that Xn+1 = 7, which has probability P[D = 0] = 0.4.
• If D = 1, it follows that Xn+1 = 7, which has probability P[D = 1] = 0.3.
• If D = 2, it follows that Xn+1 = 6, which has probability P[D = 2] = 0.2.
• If D = 3, it follows that Xn+1 = 5, which has probability P[D = 3] = 0.1.
Then, we have that P7i = [0, 0, 0, 0, 0, 0.1, 0.2, 0.7].
This gives the following transition matrix:
$$P = \begin{pmatrix}
0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0.6 & 0.4 & 0 & 0 & 0 & 0 & 0 \\
0 & 0.3 & 0.3 & 0.4 & 0 & 0 & 0 & 0 \\
0 & 0.1 & 0.2 & 0.3 & 0.4 & 0 & 0 & 0 \\
0 & 0 & 0.1 & 0.2 & 0.3 & 0.4 & 0 & 0 \\
0 & 0 & 0 & 0.1 & 0.2 & 0.3 & 0.4 & 0 \\
0 & 0 & 0 & 0 & 0.1 & 0.2 & 0.3 & 0.4 \\
0 & 0 & 0 & 0 & 0 & 0.1 & 0.2 & 0.7
\end{pmatrix}$$
And we have the following diagram:
Figure 2: Diagram of the blood inventory Markov chain.
b) Find the invariant distribution of the Markov chain.
Answer: The state space is finite and there are two classes: the class {0}, which is transient, and the class containing all other states, which is closed and finite and hence recurrent. Clearly, the states in the class besides 0 have periodicity 1, because they have self-loops. Next, if we ignore state 0, we have an irreducible Markov chain. In particular, because state 0 goes to state 1 with probability 1, starting in state 0 or in state 1 is equivalent. Hence, since we have an irreducible and aperiodic Markov chain on a finite state space, we can use the theorem that (α · Pⁿ) → π as n → ∞. In particular, α = (1, 0, 0, 0, 0, 0, 0, 0) gives the same long-term proportions as α = (0, 1, 0, 0, 0, 0, 0, 0); we consider the second one.
Using the Python code in the file problem 6 assignment 5 FH.py, we find that (α · Pⁿ) → [0, 0.14, 0.14, 0.14, 0.14, 0.14, 0.13, 0.17], which is the invariant distribution.
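The computation can be sketched as follows (an independent reimplementation, not the submitted problem 6 assignment 5 FH.py file): build P from the update rule and iterate α · Pⁿ until it stabilizes:

```python
# Build P row by row from the rule j = min(max(i - d, 0) + 1, 7).
P = [[0.0] * 8 for _ in range(8)]
demand_probs = {0: 0.4, 1: 0.3, 2: 0.2, 3: 0.1}
for i in range(8):
    for d, prob in demand_probs.items():
        P[i][min(max(i - d, 0) + 1, 7)] += prob

# Power iteration: alpha * P^n starting from state 1.
alpha = [0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
for _ in range(500):
    alpha = [sum(alpha[i] * P[i][j] for i in range(8)) for j in range(8)]
print("invariant distribution:", [round(x, 2) for x in alpha])
```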
c) Calculate the probability that in the steady state a pint of blood has to be discarded in the three-day period.
Answer: This is the case where $X_n = 7$ and $D = 0$, since then the oldest pint reaches 21 days and is discarded at the next delivery. That is, $P[X_n = 7, D = 0] = \pi_7 \cdot P[D = 0] = 0.17 \cdot 0.4 = 0.068$. Hence, the probability that blood is discarded is 0.068.
d) Calculate the probability that in the steady state an expensive emergency delivery is
needed.
Answer: An emergency delivery is needed if the demand exceeds the inventory, i.e., $D > X_n$. Hence
$$P[\text{emergency}] = \sum_{i=0}^{7} \pi_i P[D > i] = \pi_0 P[D > 0] + \pi_1 P[D > 1] + \pi_2 P[D > 2] = 0 \cdot 0.6 + 0.14 \cdot 0.3 + 0.14 \cdot 0.1 = 0.042 + 0.014 = 0.056.$$
Hence, the probability that an emergency delivery is needed is 0.056.
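Parts c) and d) amount to arithmetic on the invariant distribution; a short sketch using the rounded values from part b):

```python
pi = [0.0, 0.14, 0.14, 0.14, 0.14, 0.14, 0.13, 0.17]   # rounded pi from b)
p_D = [0.4, 0.3, 0.2, 0.1]                              # P[D = 0], ..., P[D = 3]

discard = pi[7] * p_D[0]                                # full stock, zero demand
emergency = sum(pi[i] * sum(p_D[i + 1:]) for i in range(8))  # P[D > X_n]
print(f"P[discard] = {discard:.3f}, P[emergency] = {emergency:.3f}")
```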
Note: All programming problems can be written in any higher programming language,
mathematical or not (e.g., matlab, R, python, C++, ruby,. . . ). Please comment the programs
extensively and upload them as a comment on the homework submission (in a way that they
can be run easily). Plots and other output can be provided either as part of the main
homework submission or as separate .pdf file in the comments.
8 points per problem