When can we interchange the derivative with an expectation?
Let $(X_t)$ be a stochastic process, and define a new stochastic process by $Y_t = \int_0^t f(X_s)\,ds$. Is it true in general that $\frac{d}{dt}\mathbb{E}(Y_t) = \mathbb{E}(f(X_t))$? If not, under what conditions would we be allowed to interchange the derivative operator with the expectation operator?
probability-theory stochastic-processes
@Jonas: no, it is not always true, but if you can interchange the expectation and the integral, then it is true, so you only have to derive the conditions under which such an operation is valid. Regards.
– TheBridge Oct 22 '12 at 20:58

Where could I find information about when such an operation is valid?
– jmbejara Dec 4 '13 at 20:04

A sufficient condition is that $$E\left(\int_0^t f(X_s)\,ds\right)=\int_0^t E(f(X_s))\,ds$$ and for that, some regularity of $(X_t)$ and $f$ and the finiteness of $$\int_0^t E(|f(X_s)|)\,ds$$ suffice. Keyword: Fubini.
– Did Oct 26 '16 at 19:42
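None of the following appears in the thread; it is a hypothetical Monte Carlo sanity check of the Fubini identity from the comment above, for a case where everything is computable: $X$ a standard Brownian motion and the bounded function $f=\cos$, for which $E(\cos X_s)=e^{-s/2}$, so both sides of the identity equal $2(1-e^{-t/2})$. All parameter values are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
t_end, n_steps, n_paths = 1.0, 200, 20_000
dt = t_end / n_steps

# Brownian paths on [0, t_end]: cumulative sums of N(0, dt) increments,
# with X_0 = 0 prepended.
increments = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
paths = np.concatenate(
    [np.zeros((n_paths, 1)), np.cumsum(increments, axis=1)], axis=1
)
f_vals = np.cos(paths)  # f(X_s) sampled on the time grid

def trapezoid(y, dx):
    """Composite trapezoid rule along the last axis."""
    return (y[..., :-1] + y[..., 1:]).sum(axis=-1) * dx / 2.0

lhs = trapezoid(f_vals, dt).mean()          # E( int_0^t f(X_s) ds )
rhs = trapezoid(f_vals.mean(axis=0), dt)    # int_0^t E( f(X_s) ) ds
exact = 2.0 * (1.0 - np.exp(-t_end / 2.0))  # closed form for both sides
```

Note that with a common sample the two estimators coincide up to floating-point error, since both the sample mean and the trapezoid rule are linear; the informative comparison is of each against the closed form, where the discrepancy is Monte Carlo and discretization error only.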
$begingroup$
Let $ (X_t) $ be a stochastic process, and define a new stochastic process by $ Y_t = int_0^t f(X_s) ds $. Is it true in general that $ frac{d} {dt} mathbb{E}(Y_t) = mathbb{E}(f(X_t)) $? If not, under what conditions would we be allowed to interchange the derivative operator with the expectation operator?
probability-theory stochastic-processes
$endgroup$
Let $ (X_t) $ be a stochastic process, and define a new stochastic process by $ Y_t = int_0^t f(X_s) ds $. Is it true in general that $ frac{d} {dt} mathbb{E}(Y_t) = mathbb{E}(f(X_t)) $? If not, under what conditions would we be allowed to interchange the derivative operator with the expectation operator?
probability-theory stochastic-processes
probability-theory stochastic-processes
asked Oct 20 '12 at 23:34 by Jonas
1 Answer
Interchanging a derivative with an expectation or an integral can be done using the dominated convergence theorem. Here is a version of such a result.

Lemma. Let $X\in\mathcal{X}$ be a random variable and $g\colon \mathbb{R}\times \mathcal{X} \to \mathbb{R}$ a function such that $g(t, X)$ is integrable for all $t$ and $g$ is differentiable w.r.t. $t$. Assume that there is a random variable $Z$ such that $|\frac{\partial}{\partial t} g(t, X)| \leq Z$ a.s. for all $t$ and $\mathbb{E}(Z) < \infty$. Then
$$\frac{\partial}{\partial t} \mathbb{E}\bigl(g(t, X)\bigr)
= \mathbb{E}\Bigl(\frac{\partial}{\partial t} g(t, X)\Bigr).$$

Proof. We have
$$\begin{align*}
\frac{\partial}{\partial t} \mathbb{E}\bigl(g(t, X)\bigr)
&= \lim_{h\to 0} \frac1h \Bigl( \mathbb{E}\bigl(g(t+h, X)\bigr) - \mathbb{E}\bigl(g(t, X)\bigr) \Bigr) \\
&= \lim_{h\to 0} \mathbb{E}\Bigl( \frac{g(t+h, X) - g(t, X)}{h} \Bigr) \\
&= \lim_{h\to 0} \mathbb{E}\Bigl( \frac{\partial}{\partial t} g(\tau(h), X) \Bigr),
\end{align*}$$
where $\tau(h) \in (t, t+h)$ exists by the mean value theorem. By assumption we have
$$\Bigl| \frac{\partial}{\partial t} g(\tau(h), X) \Bigr| \leq Z,$$
and thus we can use the dominated convergence theorem to conclude
$$\frac{\partial}{\partial t} \mathbb{E}\bigl(g(t, X)\bigr)
= \mathbb{E}\Bigl( \lim_{h\to 0} \frac{\partial}{\partial t} g(\tau(h), X) \Bigr)
= \mathbb{E}\Bigl( \frac{\partial}{\partial t} g(t, X) \Bigr).$$
This completes the proof.

In your case you would have $g(t, X) = \int_0^t f(X_s)\,ds$, and a sufficient condition to obtain $\frac{d}{dt} \mathbb{E}(Y_t) = \mathbb{E}\bigl(f(X_t)\bigr)$ would be for $f$ to be bounded. If you want to take the derivative only at a single point $t=t^\ast$, boundedness of the derivative is only required in a neighbourhood of $t^\ast$. Variants of the lemma can be derived by using different convergence theorems in place of the dominated convergence theorem, e.g. the Vitali convergence theorem.
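As a hypothetical illustration of the answer's conclusion (not part of the original post), the identity $\frac{d}{dt}\mathbb{E}(Y_t)=\mathbb{E}(f(X_t))$ can be checked numerically for a bounded $f$: take $X$ a standard Brownian motion and $f=\cos$, so that $\mathbb{E}(\cos X_t)=e^{-t/2}$ in closed form. The sketch below estimates $\mathbb{E}(Y_t)$ on a time grid and differentiates it by a central difference; all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
t_end, n_steps, n_paths = 2.0, 200, 20_000
dt = t_end / n_steps

# Brownian paths with X_0 = 0 prepended.
increments = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
paths = np.concatenate(
    [np.zeros((n_paths, 1)), np.cumsum(increments, axis=1)], axis=1
)
f_vals = np.cos(paths)

# E[Y_t] on the grid: cumulative trapezoid rule per path, then average.
per_path = np.cumsum((f_vals[:, :-1] + f_vals[:, 1:]) * dt / 2.0, axis=1)
mean_Y = np.concatenate([[0.0], per_path.mean(axis=0)])

k = n_steps // 2                                     # grid point t = 1.0
t = k * dt
deriv = (mean_Y[k + 1] - mean_Y[k - 1]) / (2 * dt)   # central difference of E[Y_t]
expect_f = f_vals[:, k].mean()                       # Monte Carlo E[f(X_t)]
exact = np.exp(-t / 2.0)                             # closed form E[cos(W_t)]
```

With these (hypothetical) settings, `deriv` and `expect_f` should both land near $e^{-1/2}\approx 0.607$, up to Monte Carlo and discretization error, matching the interchange the lemma justifies for this bounded $f$.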
The uniform boundedness of $f$ seems to be a much too restrictive condition.
– Did Oct 26 '16 at 19:44

@Did yes, it's only a sufficient condition. In the lemma I showed, $Z$ is allowed to depend on $X$, so you can do much better, and if you use the Vitali convergence theorem you get the condition that the $f(X_t)$ are uniformly integrable. Do you know better results than this?
– jochen Oct 26 '16 at 20:25

@Did ah, yes, your Fubini solution is more elegant.
– jochen Oct 27 '16 at 8:03

@jochen except $\int_0^t f(X_s)\,ds$ cannot be written as $g(t,X)$ for some fixed function $g$ and fixed random variable $X$ :-(.
– batman Aug 13 '17 at 15:45

@batman why not? You can have $X \in C\bigl([0,\infty), \mathbb{R}\bigr)$ be the whole random path of the process $X$, and $g$ the function which integrates the path up to time $t$.
– jochen Aug 14 '17 at 19:01
answered Oct 26 '16 at 19:02 by jochen
Thanks for contributing an answer to Mathematics Stack Exchange!