Expected number of steps in a 1D random walk with reflecting edges
Assume there is a row of $k$ tiles. A creature (a monkey in some versions of the problem, an ant or a frog in others) sits on tile $a$. With probability 50% the creature jumps to tile $a-1$ and with probability 50% it jumps to tile $a+1$, unless it is on an edge tile. If it is on an edge tile, it must jump inwards, so it cannot escape the system (i.e. it must jump to tile 2 from tile 1 and to tile $k-1$ from tile $k$). What is the expected number of steps for it to first reach tile $b$? Assume $1 \le a, b \le k$. I feel like Markov chains might be used to get the answer, but I have a very limited understanding of them. A closed form for the answer, together with a derivation I can follow, would be perfect.
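For concreteness, here is a minimal simulation sketch (plain Python; the function name and its defaults are just mine for illustration) that I can use to estimate the expectation empirically, so that any closed form can be checked against it:

    import random

    def estimate_steps(k, a, b, trials=100_000):
        """Monte Carlo estimate of the expected number of steps to first
        reach tile b, starting from tile a, on a row of k tiles."""
        total = 0
        for _ in range(trials):
            pos, steps = a, 0
            while pos != b:
                if pos == 1:
                    pos = 2            # forced jump inwards from the left edge
                elif pos == k:
                    pos = k - 1        # forced jump inwards from the right edge
                else:
                    pos += random.choice((-1, 1))
                steps += 1
            total += steps
        return total / trials

    # example: print(estimate_steps(10, 3, 6))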
expected-value
asked Jan 19 at 5:06 by automaticallyGenerated
edited Feb 15 at 12:25 by Jean Marie
Markov chains are the way!
– Lord Shark the Unknown, Jan 19 at 5:20

@LordSharktheUnknown Can you elaborate in an answer on how to use them to solve this problem? As I said, I have a limited understanding of them.
– automaticallyGenerated, Jan 19 at 5:31

I have taken the liberty of modifying your title and tags (three tags is a good average) so that more readers are directed towards this interesting question and its interesting answer.
– Jean Marie, Feb 15 at 12:27
1 Answer
Amazingly (to me), there happens to be a very simple expression for the expected number of steps to reach $b$ from $a$. For $a < b$, it is
$$
\left(b + a - 2\right)\left(b - a\right).
$$
Although a Markov chain is the obvious way to model the process, and I'm sure it could be used to derive the above result, there turns out to be a less cumbersome way of doing it.
For each $ i $ between $ 1 $ and $ b $ inclusive, let $ e_i $ be the expected number of steps the creature takes to go from $ i $ to $ b $. Obviously, $ e_b = 0 $.
If the creature starts from $ 1 $, then it has to take one step to $ 2 $, from which the expected number of steps to reach $ b $ is $ e_2 $. Thus, the expected number of steps, $ e_1 $, to reach $ b $ from $ 1 $ is $ e_2 + 1 $.
If the creature starts from $ b-1 $, then with probability $ \frac{1}{2} $ it reaches $ b $ on the very next step (i.e. in a single step), and with probability $ \frac{1}{2} $ it jumps to $ b-2 $, from which the expected number of steps to reach $ b $ is $ e_{b-2} $. Thus $ e_{b-1} = \frac{1}{2}\left(e_{b-2} + 1\right) + \frac{1}{2}\cdot 1 = \frac{1}{2}\,e_{b-2} + 1 $.
If the creature starts from any other tile $ i $, with $ 2 \le i \le b-2 $, then with probability $ \frac{1}{2} $ it jumps to $ i-1 $, from which the expected number of steps to reach $ b $ is $ e_{i-1} $, and with probability $ \frac{1}{2} $ it jumps to $ i+1 $, from which the expected number of steps to reach $ b $ is $ e_{i+1} $. Therefore $ e_i = \frac{1}{2}\left(e_{i-1} + 1\right) + \frac{1}{2}\left(e_{i+1} + 1\right) = \frac{1}{2}\,e_{i-1} + \frac{1}{2}\,e_{i+1} + 1 $.
Putting this all together, we have
\begin{eqnarray}
e_1 &=& e_2 + 1\\
e_i &=& \frac{1}{2}\,e_{i-1} + \frac{1}{2}\,e_{i+1} + 1 \quad \mbox{for } i = 2, 3, \dots, b-2 \mbox{, and}\\
e_{b-1} &=& \frac{1}{2}\,e_{b-2} + 1,
\end{eqnarray}
or, equivalently,
\begin{eqnarray}
e_1 - e_2 &=& 1\\
-\frac{1}{2}\,e_{i-1} + e_i - \frac{1}{2}\,e_{i+1} &=& 1 \quad \mbox{for } i = 2, 3, \dots, b-2 \mbox{, and}\\
-\frac{1}{2}\,e_{b-2} + e_{b-1} &=& 1.
\end{eqnarray}
These equations can be written as
$$
M\,e = \mathbb 1,
$$
where $ M $ is the $ (b-1)\times(b-1) $ matrix, and $ \mathbb 1 $ the $ (b-1)\times 1 $ column vector, whose entries are given by
\begin{eqnarray}
M_{1,2} &=& -1\\
M_{i,i} &=& 1 \quad \mbox{for } i = 1, 2, \dots, b-1\\
M_{i,i-1} &=& -\frac{1}{2} \quad \mbox{for } i = 2, 3, \dots, b-1\\
M_{i,i+1} &=& -\frac{1}{2} \quad \mbox{for } i = 2, 3, \dots, b-2\\
M_{i,j} &=& 0 \quad \mbox{for all other } i, j\\
\mathbb 1_i &=& 1 \quad \mbox{for } i = 1, 2, \dots, b-1.
\end{eqnarray}
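As a numerical aside, here is a short sketch (Python with NumPy; the helper build_M is only an illustrative name, not part of the derivation) that constructs $M$ and $\mathbb 1$ for any $b$ and inverts $M$ directly; for $b = 6$ it reproduces the matrices displayed below.

    import numpy as np

    def build_M(b):
        """Build the (b-1) x (b-1) matrix M defined above."""
        n = b - 1
        M = np.eye(n)                  # M_{i,i} = 1
        if n > 1:
            M[0, 1] = -1.0             # M_{1,2} = -1 (reflection at tile 1)
        for i in range(1, n):
            M[i, i - 1] = -0.5         # M_{i,i-1} = -1/2
            if i < n - 1:
                M[i, i + 1] = -0.5     # M_{i,i+1} = -1/2 (absent in the last row)
        return M

    b = 6
    M = build_M(b)
    ones = np.ones(b - 1)              # the all-ones column vector
    print(np.linalg.inv(M))            # matches the displayed inverse
    print(np.linalg.solve(M, ones))    # the vector e of expected step counts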
For $ b=6 $, the matrix $ M $ looks like this:
$$\left(\begin{matrix}
1 & -1 & 0 & 0 & 0\\
-\frac{1}{2} & 1 & -\frac{1}{2} & 0 & 0\\
0 & -\frac{1}{2} & 1 & -\frac{1}{2} & 0\\
0 & 0 & -\frac{1}{2} & 1 & -\frac{1}{2}\\
0 & 0 & 0 & -\frac{1}{2} & 1
\end{matrix}\right),$$
and has the following inverse:
$$
M^{-1} = \left(\begin{matrix}
5 & 8 & 6 & 4 & 2\\
4 & 8 & 6 & 4 & 2\\
3 & 6 & 6 & 4 & 2\\
2 & 4 & 4 & 4 & 2\\
1 & 2 & 2 & 2 & 2
\end{matrix}\right).
$$
From this, we can conjecture that the inverse of the $ (b-1)\times(b-1) $ matrix $ M $ defined above should be the matrix $ L $ whose entries are given by
\begin{eqnarray}
L_{i,1} &=& b-i \quad \mbox{for } i = 1, 2, \dots, b-1\\
L_{1,j} &=& 2\left(b-j\right) \quad \mbox{for } j = 2, 3, \dots, b-1\\
L_{i,j} &=& 2\min\left(b-i,\, b-j\right) \quad \mbox{for } 2 \le i \le b-1 \mbox{ and } 2 \le j \le b-1,
\end{eqnarray}
and on checking the product $ M\,L $, we find that it is indeed the $ (b-1)\times(b-1) $ identity matrix. So, finally, we have
$$
e = M^{-1}\,\mathbb 1 = L\,\mathbb 1,
$$
and $ e_a $, the expected number of steps to get to $ b $ from $ a $, is the sum of the entries in the $ a^{\rm th} $ row of $ L $:
\begin{eqnarray}
e_a &=& \left(b-a\right) + 2\left(a-1\right)\left(b-a\right) + 2\sum_{j=1}^{b-a-1} j\\
&=& \left(b + a - 2\right)\left(b - a\right),
\end{eqnarray}
as stated above.
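As a final sanity check, here is a sketch (again Python with NumPy, with illustrative names only) that solves the first-step equations on the full row of $k$ tiles, so it also covers starting tiles $a > b$: by the left-right symmetry $i \mapsto k+1-i$ of the reflecting walk, the expected number of steps for $a > b$ should be $\left(2k - a - b\right)\left(a - b\right)$, and the solver can be used to confirm both expressions. (Note that for $a < b$ the answer does not involve $k$ at all, because the creature cannot reach the far side of $b$ without first landing on $b$.)

    import numpy as np

    def expected_steps_all(k, b):
        """Solve the first-step equations on the whole k-tile row with target b.
        Returns a dict mapping each starting tile a != b to E[steps to reach b]."""
        states = [i for i in range(1, k + 1) if i != b]   # transient tiles
        idx = {s: r for r, s in enumerate(states)}
        A = np.eye(len(states))
        rhs = np.ones(len(states))
        for s in states:
            if s == 1:
                moves = [(2, 1.0)]             # forced jump inwards from tile 1
            elif s == k:
                moves = [(k - 1, 1.0)]         # forced jump inwards from tile k
            else:
                moves = [(s - 1, 0.5), (s + 1, 0.5)]
            for t, p in moves:
                if t != b:                     # jumps onto b end the walk
                    A[idx[s], idx[t]] -= p
        e = np.linalg.solve(A, rhs)
        return {s: e[idx[s]] for s in states}

    k, b = 12, 5
    e = expected_steps_all(k, b)
    for a, value in e.items():
        closed = (b + a - 2) * (b - a) if a < b else (2 * k - a - b) * (a - b)
        assert abs(value - closed) < 1e-8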
answered Jan 20 at 9:15, last edited Jan 21 at 8:52, by lonza leggiera

Thanks for the clear answer. I think this is just a disguised Markov chain, as your $M$ is the identity matrix minus the transition matrix.
– automaticallyGenerated, Jan 21 at 14:30

Not quite. The identity matrix minus the transition matrix is a singular $ b \times b $ matrix; $ M $ is the submatrix obtained from it by chopping off its last row and column. But you're right that there is indeed a Markov chain lurking in the shadows.
– lonza leggiera, Jan 21 at 23:27