Existence of separators for biased random walk.
Let us consider a biased random walk on $\mathbf{Z}$ whose steps satisfy $\mathbf{P}(\xi = +1) = p > \mathbf{P}(\xi = -1) = q$ (with $p + q = 1$).

A value $k \in \mathbf{Z}$ is a "separator" of the random walk if the random walk, started before $k,$ passes through $k$ exactly once.

By the strong Markov property, for $k < 0,$ $\mathbf{P}_k(0 \text{ is a separator}) = \mathbf{P}_0(\text{the random walk never returns to zero}).$ Denote by $A_k$ the probability that the random walk, started at $k \neq 0,$ never visits zero, and let $A_0$ be the probability of never returning to zero. By the strong law of large numbers, $A_k = 0$ for all negative $k.$ Now, by first-step analysis it is easy to deduce that $A_{k + 1} = q A_k + p A_{k + 2}$ for $k \geq 1,$ together with $A_0 = p A_1$ and $A_1 = p A_2.$ Then we have
$$A_{k + 2} - A_{k + 1} = \dfrac{q}{p}(A_{k + 1} - A_k) \quad (k \geq 1).$$
It is easy to show that $A_{k+2} - A_{k+1} = \left(\frac{q}{p}\right)^{k+1} A_1,$ which, by a telescoping sum, leads to
$$A_k = \dfrac{1}{p} \left[ \sum_{j=0}^{k-1} \left( \dfrac{q}{p} \right)^j \right] A_0.$$
This shows that $A_k \to \dfrac{1}{p} \cdot \dfrac{1}{1 - \frac{q}{p}}\, A_0 = \dfrac{1}{p - q}\, A_0.$

My intuition strongly suggests that $A_k \to 1.$ How can I prove this formally?

Granting this, I can conclude that $A_0 = p - q,$ which is very reasonable. So the probability that zero is a separator, starting at $k < 0,$ is simply $p - q.$

Consider now the event $\mathrm{X}_a = \{a \text{ is a separator}\}.$ The above can be restated as
$$\mathbf{P}_0(\mathrm{X}_a) = p - q$$
for every $a \geq 0.$

Can I use the previous formula to conclude that $\mathbf{P}_0\left(\bigcup\limits_{a = 0}^N \mathrm{X}_a\right) \to 1$ as $N \to \infty$?

Tags: probability, probability-theory, random-walk

asked Jan 16 at 23:38 by Will M.; edited Jan 17 at 18:32
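As a sanity check on the $p - q$ value (a sketch of my own, not part of the original post; the function names, the choice $p = 0.7$, and the $10^4$-step cutoff are arbitrary), one can estimate $A_0 = \mathbf{P}_0(\text{no return to } 0)$ by straightforward Monte Carlo:

```python
import random

def never_returns(p, steps=10_000):
    """Run one walk from 0; report whether it avoids 0 for `steps` steps.
    Since p > q, a return after a long time is exponentially unlikely,
    so this is a reasonable proxy for 'never returns'."""
    pos = 0
    for _ in range(steps):
        pos += 1 if random.random() < p else -1
        if pos == 0:
            return False
    return True

def estimate_A0(p, trials=20_000):
    """Monte Carlo estimate of A_0 = P_0(no return to 0)."""
    return sum(never_returns(p) for _ in range(trials)) / trials

if __name__ == "__main__":
    p = 0.7
    print("estimate:", estimate_A0(p), "vs p - q =", p - (1 - p))
```

With $p = 0.7$ the printed estimate should land near $0.4,$ consistent with $A_0 = p - q.$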
2 Answers
You can show that
$$
(1-A_k)=(1-A_1)^k\qquad\text{for all $k\ge 1$}\tag{$*$}
$$
Why? You know $1-A_k$ is the probability of starting from $k$ and eventually hitting $0$. This is equal to the probability of starting from $k$ and moving eventually to $k-1$, then starting from $k-1$ and moving eventually to $k-2$, then $\dots$ then starting from $1$ and moving to $0$. But moving from $i$ to $i-1$ is the same as moving from $1$ to $0$, so each event in that list of $k$ events has probability $(1-A_1)$.
Plugging $(*)$ with $k=2$ and $k=3$ into
$$
A_2=pA_3+qA_1,
$$
you can solve for $A_1$. You will find that $A_1<1$, so that $(*)$ implies $A_k\to 1$ as $k\to\infty$.

answered Jan 18 at 0:36 by Mike Earnest
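To spell out the algebra hinted at here (my own working, filling in the step the answer leaves to the reader): write $r = 1 - A_1.$ Then $(*)$ gives $1 - A_2 = r^2$ and $1 - A_3 = r^3,$ and the displayed equation becomes
$$1 - r^2 = p(1 - r^3) + q(1 - r) \iff r^2 = p r^3 + q r \iff p r^2 - r + q = 0 \iff (r - 1)(p r - q) = 0,$$
where dividing by $r$ is legitimate because $r = 1 - A_1 \geq q > 0$ (from $1$ the very first step lands on $0$ with probability $q$). The root $r = 1$ would give $A_k = 0$ for every $k$ by $(*)$, making $0$ a recurrent state, which is impossible since $S_n/n \to p - q > 0$ forces $S_n \to \infty$ and hence only finitely many visits to $0.$ So $r = q/p,$ i.e. $A_1 = 1 - \frac{q}{p} = \frac{p-q}{p} < 1,$ and consequently $A_0 = p A_1 = p - q,$ exactly the value guessed in the question.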
I would change "But moving from $i$ to $i-1$ is the same as moving from $1$ to $0$" to "But 'moving from $i$ to $i-1$ eventually' is the same as 'moving from $1$ to $0$ eventually'." Any idea how to tackle the main question?
– Will M., Jan 18 at 22:14

I cannot think of how to use your previous formulas to show $P\left(\bigcup_{a=0}^N X_a\right)\to 1$. Perhaps you can use the principle of inclusion-exclusion / Bonferroni inequalities to get bounds on the probability of this union?
– Mike Earnest, Jan 18 at 22:51
I found a nice solution after a while.

The answer to my first question is yes, and the reason is simpler than I expected. It is well known that for an irreducible, recurrent Markov chain, whatever the initial state $k$ may be and whatever other state $o$ may be, the probability of starting at $k$ and eventually reaching $o$ is one.

Now to my second question. Consider $\alpha_n = \mathbf{P}_0\left( \bigcup\limits_{a=0}^n \mathrm{X}_a\right)$ and notice that $(\alpha_n)$ is increasing and therefore convergent. We now show that $(\alpha_{n^2})$ converges to $1.$ To see this, simply notice that
$$0 \leq 1 - \alpha_{n^2} \leq \mathbf{P}_0\left( \bigcap\limits_{a=0}^n \mathrm{X}_{an}^\complement\right).$$
Now define stopping times $\tau_p,$ the first entrance times to state $p,$ and (measurable) times $\theta_p,$ the first return times to $p.$ Then we split
$$\begin{align*}
\mathbf{P}_0\left( \bigcap\limits_{a=0}^n \mathrm{X}_{an}^\complement\right) &= \mathbf{P}_0\left( \bigcap\limits_{a=0}^n \mathrm{X}_{an}^\complement \cap\{\theta_0 < \tau_n \leq \theta_n < \tau_{2n}\leq \theta_{2n}<\ldots<\tau_{n^2}\leq\theta_{n^2}<\infty\}\right)\\
&\quad+\mathbf{P}_0\left( \bigcap\limits_{a=0}^n \mathrm{X}_{an}^\complement \cap\bigcup_{j=1}^n\{\tau_{jn} < \theta_{(j-1)n}<\infty\}\right)\\
&\leq\mathbf{P}_0\left(\theta_0 < \tau_n \leq \theta_n < \tau_{2n}\leq \theta_{2n}<\ldots<\tau_{n^2}\leq\theta_{n^2}<\infty\right) + \sum_{j=1}^n\mathbf{P}_0(\tau_{jn}<\theta_{(j-1)n}<\infty)\\
&\stackrel{\triangle}{=} \gamma_n +\sum_{j=1}^n \omega_j.
\end{align*}$$
(The intersection over the milestones $0, n, 2n, \ldots, n^2$ says that none of them is a separator; the first term is the scenario where the walk returns to each milestone before first reaching the next one, and the second term covers the remaining, backtracking scenarios.)

To deal with $\gamma_n,$ simply condition on the sigma-field generated by $S_0, \xi_1, \ldots, \xi_{\tau_n}$ to reach the inequality $\gamma_n \leq (1-(p-q))\,\gamma_{n-1},$ and so $\gamma_n \to 0$ (exponentially fast!).

Observe that $\omega_j \leq \mathbf{P}_0(\text{the random walk ever hits } -n),$ which is well known to be $\left(\frac{q}{p}\right)^n,$ and clearly $n\left(\frac{q}{p}\right)^n \to 0$ as well. Q.E.D.

answered Feb 13 at 22:53 by Will M.
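For completeness, the "well known" value of that hitting probability also drops out of the other answer's identity $(1 - A_k) = (1 - A_1)^k$ together with $A_1 = 1 - \frac{q}{p}$ (this one-line derivation is mine, added for the record): by translation invariance and the strong Markov property, reaching $-n$ from $0$ amounts to $n$ consecutive descents of one level, each of which has the same probability as reaching $0$ from $1,$ so
$$\mathbf{P}_0(\text{the walk ever hits } -n) = \big(\mathbf{P}_1(\text{the walk ever hits } 0)\big)^n = (1 - A_1)^n = \left(\frac{q}{p}\right)^n.$$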