How to understand explosion in continuous-time Markov chains?

It is well known that, for example, in a pure-birth process, explosion occurs when
$$ \sum_{n=1}^\infty \lambda_n^{-1} < \infty, $$
where $\lambda_n$ is the birth rate of a new individual when the population size is $n$. For instance, explosion occurs if one takes $\lambda_n = c n^2$.

While I understand how to prove this mathematically, I'm having trouble understanding it intuitively: no matter how fast the population is growing or how much time passes, in practice the population size should be a finite (albeit huge) number that tends to $\infty$, but how could it ever be equal to $\infty$?
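For a quick check of why the criterion bites in that example: the holding time at population size $n$ is exponential with mean $1/\lambda_n = 1/(cn^2)$, so the expected total time for all of the infinitely many births is
$$\sum_{n=1}^\infty \frac{1}{c n^2} = \frac{\pi^2}{6c} < \infty,$$
and a nonnegative random variable with finite mean is almost surely finite.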
markov-process
asked Dec 31 '18 at 19:27 by p-value
– jameselmore (Dec 31 '18 at 19:29): Infinity is not a quantity, merely a convenient expression for something which is unbounded.

– Math1000 (Dec 31 '18 at 20:18): This question may lend some intuition: math.stackexchange.com/questions/2136463/…

– Did (Dec 31 '18 at 20:18): Explosion of the chain $(X_t)$ means that there exists some random time $T$ such that $P(T<\infty)=1$ and $$\lim_{t\to T^-}X_t=\infty.$$ Thus, $X_t(\omega)$ simply does not exist for $t\geqslant T(\omega)$.

– Did (Dec 31 '18 at 20:20): @jameselmore Very poetic, but is this relevant? My feeling is that it might even reveal a wrong conception of the situation...
3 Answers
Intuitively, what's happening is something like the following.
A birth occurs at time $t = 1/2$.
The next birth occurs at time $t = 3/4$.
The next birth occurs at time $t = 7/8$.
The next birth occurs at time $t=15/16$.
And so on.
By time 1, how many births have occurred? So what must the total population be?
In the "explosive" case, this sort of thing happens with probability 1. The actual birth times will vary randomly, but there will always be infinitely many births in finite time.
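For a concrete picture, here is a minimal simulation sketch, assuming birth rates $\lambda_n = n^2$ and a starting population of $1$ (the helper name `birth_times` is just for illustration): the time of the $k$-th birth is a sum of independent exponential holding times, and those partial sums stay bounded, so all the births fit before a finite random time.

```python
# A minimal sketch, assuming birth rates lambda_n = n^2 and starting population 1.
# The holding time at population n is Exp(n^2); the k-th birth time is the sum of
# the first k holding times, and these partial sums converge.
import random

def birth_times(num_births, seed=0):
    rng = random.Random(seed)
    t, times = 0.0, []
    for n in range(1, num_births + 1):
        t += rng.expovariate(n**2)   # holding time while the population is n
        times.append(t)
    return times

times = birth_times(100_000)
print(times[99], times[999], times[99_999])      # barely increases after the first few births
print(sum(1 / n**2 for n in range(1, 100_001)))  # expected limit: pi^2 / 6 ≈ 1.645
```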
answered Jan 1 at 2:46 by Nate Eldredge
There is no problem conceptually with the infinity here, if we just think of the result as some mathematical statement about limits. The problem appears when we really want to think of the state of the process as a "population"; that is, when we're modeling a discrete and necessarily finite value. Then infinity has no meaning, so why would we introduce it?
Why use infinity to model finite, discrete processes
We do this all the time. For example, we use real numbers. Real numbers have infinite precision, and all real-world calculations have limited precision (though often, we're not sure exactly what it is). More tellingly, computer calculations have a very well-defined limited precision. Yet we still use real numbers in the error analysis of computer algorithms.
How does this work? Real numbers are a simplified model that avoids ugly details. Maybe we can prove that, with the assumption that our implementation behaves like the real numbers, we can get arbitrarily small error. If this cannot actually happen, what we learn is that our algorithm can produce smaller and smaller error up until we reach the point where real numbers stop being a good model. The goal of further error analysis is to determine when this happens.
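As a tiny illustration of that point (assuming IEEE-754 double precision, which is what Python floats use): the idealized analysis lets an error term $2^{-k}$ shrink forever, but the concrete computation stops distinguishing it from zero once $k$ passes the machine precision.

```python
# Idealized reals vs. actual floating point: 2**-k can be made arbitrarily small
# on paper, but below machine epsilon (2**-52 for doubles) adding it to 1.0 does nothing.
for k in (10, 30, 52, 53, 60):
    print(k, 1.0 + 2.0**-k > 1.0)   # True, True, True, False, False
```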
The same can happen with a pure-birth process like the one you're considering. Maybe in fact the birth rate $\lambda_n$ is defined by $\lambda_n = n^2$ when $n < N$ (for some large parameter $N$) and $\lambda_n = 0$ otherwise. We can approximate this by the pure-birth process where $\lambda_n = n^2$ for all $n$, and conclude that explosion happens.
What do we learn from this? Well, in our original model explosion was impossible; in the simplified model, explosion happens with probability $1$. The conclusion is that with probability $1$, we eventually reach a state where the simplified model breaks down; in other words, we reach a population of $N$.
So the simplified process that allows infinity allows us to derive finite statements of this form for any "ecological limit" $N$. It is easier to work with, and more general because the conclusion is independent of $N$, and that is the advantage of introducing infinity.
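To spell out the step back from the simplified model, still assuming $\lambda_n = n^2$ with independent holding times $E_n \sim \text{Exp}(n^2)$: the truncated chain reaches the ceiling $N$ at time
$$T_N = \sum_{n=1}^{N-1} E_n \;\le\; T_\infty = \sum_{n=1}^{\infty} E_n,$$
and $T_\infty$, the explosion time of the idealized chain, is almost surely finite (its mean is $\sum_n n^{-2} < \infty$). So explosion in the simplified model is exactly the statement that every ceiling $N$ is reached in finite time.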
Real-life examples and mathematical ones
You can imagine physical instances of a pure-birth process where we learn something from an "infinite approximation" such as this one. Maybe we really are looking at the population of some lifeform, whose growth follows such a process until ecological limitations start to become a factor. Knowing that the process is explosive guarantees that this will eventually happen.
This can also happen in a more mathematical application. Here is one example. It is not an explosive CTMC, because it is a discrete birth process, but the idea is the same.
Suppose we create a random graph on $n$ vertices by choosing $n$ of the $\binom n2$ possible edges uniformly at random. Then the breadth-first search process from a given vertex initially resembles a Galton-Watson branching process where each individual has $\text{Poisson}(2)$ children.
This is a good approximation when the population of the branching process is small. In this case, there are $\approx n$ potential neighbors for each vertex we look at, and each one is actually a neighbor with probability $\approx \frac2n$. But if the breadth-first search reaches a constant fraction of the vertices in the graph, then the approximation breaks down: there are now significantly fewer than $n$ potential new neighbors, and so the average number of children in the branching process becomes less than $2$.
In this case, the branching process is infinite with some positive probability. This does not directly translate to the vertex we started at being in a component of size $\infty$; that's nonsense, since the graph has only $n$ vertices. It does translate to the vertex being in a component with enough vertices that the approximation breaks down.
Some further analysis tells us that there cannot be two large components in this random graph. Therefore the probability that the Galton-Watson branching process is infinite (a seemingly nonsense probability in our actual setting) tells us a useful quantity: the fraction of vertices in the so-called "giant component" of this random graph.
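Here is a sketch of the correspondence those last paragraphs describe, under the stated $\text{Poisson}(2)$ offspring assumption (the cap of $1000$, the trial count, and the helper names are arbitrary choices for illustration): a fixed-point computation gives the survival probability of the branching process, and a crude Monte Carlo estimate of "the exploration tree grows large" lands close to the same value, the giant-component fraction.

```python
# Survival probability of a Poisson(2) Galton-Watson process, two ways.
# 1) Solve the extinction-probability fixed point q = exp(2*(q - 1)).
# 2) Monte Carlo: fraction of trees whose total size reaches a large cap
#    (a stand-in for "the BFS reaches a constant fraction of the graph").
import math, random

def poisson(rng, mean):
    # Knuth's method; fine for small means.
    L, k, p = math.exp(-mean), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def survival_probability(mean=2.0, iters=200):
    q = 0.0
    for _ in range(iters):
        q = math.exp(mean * (q - 1.0))   # iterate the extinction-probability fixed point
    return 1.0 - q

def fraction_growing_large(mean=2.0, trials=2_000, cap=1_000, seed=0):
    rng = random.Random(seed)
    big = 0
    for _ in range(trials):
        pop, total = 1, 1
        while 0 < pop and total < cap:
            pop = sum(poisson(rng, mean) for _ in range(pop))
            total += pop
        if total >= cap:
            big += 1
    return big / trials

print(survival_probability())        # ≈ 0.7968, the giant-component fraction
print(fraction_growing_large())      # close to the same value
```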
answered Jan 1 at 4:05 by Misha Lavrov
The important thing to keep in mind is that the criterion for an explosive CTMC is that the number of transitions (in this case, births) is infinite within a finite time almost surely.
In that sense, we’re not really making any explicit statement about the population size, but rather about the number of transitions in the system—and only with probabilistic certainty at that!
To demonstrate this, here's an example of an explosive CTMC that always keeps a finite "population". Say you're evolving a random Markovian variable $Z(t) = \left(X(t),Y(t)\right)$ where $X(t) \in \{0,1\}$ denotes the presence of an individual in the system and $Y(t) \in \mathbb{Z}^*$ is counting the number of previous transitions.
If we say that:
$$\mathbb{P}\left[Z(t) = (0,y+1)\,|\,Z(0) =(1,y)\right] = \mathbb{P}\left[Z(t) = (1,y+1)\,|\,Z(0) =(0,y)\right] = \lambda y^2 e^{-\lambda y^2 t}$$
and $0$ for all other transitions, this chain is demonstrably explosive and will oscillate between $X = 0$ and $X = 1$ infinitely fast in finite time.
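Here is a minimal simulation sketch of that chain, assuming $\lambda = 1$ and the starting state $X(0)=1$, $Y(0)=1$ so the first rate is positive (the helper name is arbitrary): the "population" $X$ never exceeds $1$, yet the transition count $Y$ runs off to infinity while the elapsed time stays bounded.

```python
# Oscillating explosive chain: X flips between 0 and 1, Y counts transitions,
# and the waiting time before the y-th flip is Exp(lambda * y^2).
# Assumes lambda = 1 and the start (X, Y) = (1, 1) so the first rate is positive.
import random

def simulate(num_transitions, lam=1.0, seed=0):
    rng = random.Random(seed)
    t, x, y = 0.0, 1, 1
    for _ in range(num_transitions):
        t += rng.expovariate(lam * y**2)  # waiting time for the next flip
        x, y = 1 - x, y + 1               # flip X, increment the transition count
    return t, x, y

print(simulate(100))       # elapsed time after 100 flips
print(simulate(100_000))   # barely larger after 100,000 flips (≈ pi^2/6)
```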
answered Dec 31 '18 at 23:04 by aghostinthefigures (edited Jan 1 at 5:17)