Deriving the central Euler method and intuition
My professor (Dutch) asked us to determine, among other things, the truncation error of the central Euler method. First of all, this is probably not the standard term, since there are very few search results for "central Euler", which made looking things up a hassle.
To determine the truncation error, I thought I had to first know how to derive this central Euler method, given by:
$$
u_{k+1}=u_{k-1} + 2h\cdot f(u_k)
$$
(It is also given that $u_0$ and $u_1$ are known.)
Here $f(x)$ gives the slope at point $x$ (I think).
I tried to derive this using Taylor expansions, but I didn't get close.
I also thought I had to get an intuitive notion of what this means, and I figured out it's this: we add twice the slope at $x_k$ (times the step size) to the $y$-coordinate at $x_{k-1}$ to get the $y$-coordinate at $x_{k+1}$. The factor of two is there because the distance between $x_{k-1}$ and $x_{k+1}$ equals $2h$.
So my question is: How do I derive this method, what's it called, and is the derivation a good step in figuring out the truncation error?
Note: It was given that the truncation error is of order $h^3$.
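The update rule can be sketched in a few lines of Python (a minimal sketch; the scheme is often called the leapfrog method, and the test problem $u'=u$ with exact solution $e^x$ is my own choice, not from the exercise):

```python
import math

def leapfrog(f, u0, u1, h, n):
    """Two-step 'central Euler' (leapfrog) scheme for u' = f(u).

    u0, u1 are the two given starting values; returns [u_0, ..., u_n].
    """
    u = [u0, u1]
    for k in range(1, n):
        u.append(u[k - 1] + 2 * h * f(u[k]))  # u_{k+1} = u_{k-1} + 2h f(u_k)
    return u

# Test problem u' = u, u(0) = 1, exact solution e^x.
h = 0.01
us = leapfrog(lambda u: u, 1.0, math.exp(h), h, 100)
print(us[-1], math.exp(1.0))  # both close to e = 2.71828...
```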
ordinary-differential-equations numerical-methods truncation-error
If this is for a beginner's course in numerical methods then it would make much more sense in my opinion to consider the explicit midpoint method (aka modified Euler method) $u_{k+1} = u_k + h f\left( u_k + \frac{h}{2} f(u_k) \right)$, which is a one-step method. For this method you might use the techniques which you were taught for forward/backward Euler. But the jump from one-step to two-step schemes is not so straightforward to understand simply in an exercise.
– Christoph
Jan 10 at 10:29
edited Jan 9 at 20:10
The Coding Wombat
asked Jan 9 at 18:24
2 Answers
It's a two-step Nyström method. Using the ODE $u' = f(u)$ and $x_k := x_0 + k h$ you write
\begin{equation}
u(x_{k+1}) - u(x_{k-1}) = \int\limits_{x_{k-1}}^{x_{k+1}} u'(x) \, \mathrm{d}x = \int\limits_{x_{k-1}}^{x_{k+1}} f(u(x)) \, \mathrm{d}x.
\end{equation}
Now you use a quadrature formula for the integral, in this case the midpoint rule: $\displaystyle \int\limits_{x_{k-1}}^{x_{k+1}} f(u(x)) \, \mathrm{d}x \simeq 2h f(u(x_k))$.
For the local truncation error at $x_2$ we assume that both previous values $u_0 = u(x_0)$ and $u_1 = u(x_1)$ are exact. The local truncation error is then given by
\begin{equation}
\tau_2 = u(x_2) - u(x_0) - 2 h f(u(x_1)) = u(x_2) - u(x_0) - 2 h u'(x_1)
\end{equation}
(using again the ODE in the last step). Now comes Taylor:
\begin{eqnarray}
u(x_0) = u(x_1 - h) &=& u(x_1) - h u'(x_1) + \frac{1}{2} h^2 u''(x_1) - \frac{1}{6} h^3 u'''(x_1) + \frac{1}{24} h^4 u''''(x_1) + O(h^5),\\
u(x_2) = u(x_1 + h) &=& u(x_1) + h u'(x_1) + \frac{1}{2} h^2 u''(x_1) + \frac{1}{6} h^3 u'''(x_1) + \frac{1}{24} h^4 u''''(x_1) + O(h^5),
\end{eqnarray}
for $h \rightarrow 0$, so that
\begin{equation}
\tau_2 = \frac{1}{3} h^3 u'''(x_1) + O(h^5), \quad h \rightarrow 0.
\end{equation}
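The cancellation of the even-order terms can be checked numerically (a small sketch; the test function $u(x)=e^x$ with $x_1=0$, so $u'(x_1)=u'''(x_1)=1$, is an arbitrary choice):

```python
import math

# tau_2 = u(x_2) - u(x_0) - 2h u'(x_1) should behave like (1/3) h^3 u'''(x_1);
# here u(x) = e^x and x_1 = 0, so tau_2 / h^3 should approach 1/3.
for h in [0.1, 0.05, 0.025]:
    tau = math.exp(h) - math.exp(-h) - 2 * h * 1.0  # u'(0) = 1
    print(h, tau / h**3)  # approaches 1/3
```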
Cool way of writing that difference as the definite integral of a derivative! So does this mean that the approximation error in the midpoint rule for this integral is the same as the truncation error of the "central Euler method"?
– The Coding Wombat
Jan 9 at 20:08
That's right, I'll add something on the local truncation error.
– Christoph
Jan 9 at 20:41
So, my $u_{k+1}$ in the expression in my question is an approximation for the real value corresponding to $x_2$, right? And $u(x_2)$ is the exact value at $x_2$? If that's the case, I understand why we then subtract the approximation from the exact value. I don't understand the Taylor expansion of $u(x_0)$, though: where do the minus signs come from, and how do you expand to a higher $x_i$?
– The Coding Wombat
Jan 9 at 21:02
Yes, I did it for $k=1$ only, you can do it for any $k$ of course. The two expressions above are just Taylor expansions of $u(x_0) = u(x_1 - h)$ (note the minus sign!) and of $u(x_2) = u(x_1 + h)$ around $x_1$.
– Christoph
Jan 9 at 21:05
One more thing: in your formula for $\tau$ you use $u(x_2)$, $u(x_0)$ and $u(x_1)$, but here the first is an exact value for every iteration, right? But the second and third are approximate results from previous iterations, if I'm not mistaken? (So in this case $x_0$ and $x_1$ are exact in this example, because they were given, but I mean for the next and subsequent iterations.)
– The Coding Wombat
Jan 9 at 21:16
Local Truncation Error
You can also go the differentiation route: with $x=x_k$, $x\pm h=x_{k\pm 1}$ and $u_j= u(x_j)$, the method's step can be related to the central difference quotient. With the Taylor expansion of $u$ we get
$$
u(x\pm h)=u(x)\pm hu'(x)+\frac{h^2}{2}u''(x)\pm\frac{h^3}{6}u'''(x)+\dots
\\
\implies
\frac{u(x+h)-u(x-h)}{2h}=u'(x)+\frac{h^2}{6}u'''(x)+O(h^4)
$$
so that
$$
\frac{u(x+h)-u(x-h)}{2h}-f(x,u(x))=O(h^2)
$$
making this linear multi-step method a second order explicit method.
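The $O(h^2)$ accuracy of the central difference quotient can be confirmed numerically (a sketch; $u=\sin$ at $x=1$ is an arbitrary test function, with $u'''=-\cos$):

```python
import math

# (u(x+h) - u(x-h)) / (2h) - u'(x) should behave like (h^2/6) u'''(x);
# for u = sin at x = 1 the scaled error should approach -cos(1)/6.
x = 1.0
for h in [0.1, 0.05, 0.025]:
    dq = (math.sin(x + h) - math.sin(x - h)) / (2 * h)
    err = dq - math.cos(x)
    print(h, err / h**2)  # approaches -cos(1)/6 ≈ -0.0901
```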
Global Error Approximation
The distance $d_k=u_k-u(x_k)$ of the numerical solution $u_k$ from the exact solution $u(x_k)$ evolves as
$$
d_{k+1}=d_{k-1}+2h\bigl(f(u_k)-f(u(x_k))\bigr)-\frac{h^3}{3}u'''(x_k)+\dots
=d_{k-1}+2hf'(u(x_k))\,d_k-\frac{h^3}{3}u'''(x_k)+\dots
$$
where the omitted terms are of higher order assuming that $hd_k^2$ is $O(h^4)$ or smaller.
Linear difference equation for the error terms: To get an easily solved problem, first assume that $h$ is small enough so that $u,f'(u),u'''$ are slowly changing over several steps. Then we can set them constant in the above recursion. The now linear recursion
$$
d_{k+1}=d_{k-1}+2hf'd_k-\frac{h^3}{3}u'''
$$
has for $f'\ne 0$ a solution $d_k=Aq^k+B(-q)^{-k}+C$ where $q>0$ solves
$$
q^2-2hf'q+(hf')^2=1+(hf')^2\implies q=hf'+\sqrt{1+(hf')^2}=e^{hf'+O(h^3)}
$$
The resulting form of the error terms is, up to higher order terms,
$$d_k=Ae^{khf'}+(-1)^kBe^{-khf'}+C.$$
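As a sanity check, the closed form can be compared against the frozen-coefficient recursion directly (a sketch; the values of $h$, $f'$, $u'''$ and the starting errors are arbitrary placeholders):

```python
import math

# d_{k+1} = d_{k-1} + 2 h f' d_k - (h^3/3) u''' with frozen coefficients,
# versus the closed form d_k = A q^k + B (-q)^{-k} + C.
h, fp, u3 = 0.01, -1.0, 1.0
q = h * fp + math.sqrt(1 + (h * fp) ** 2)
C = h**2 * u3 / (6 * fp)              # constant particular solution
d0, d1 = 0.0, 1e-6                    # arbitrary starting errors
# Fit A, B to d_0 and d_1: A + B + C = d0, A q - B/q + C = d1.
A = ((d1 - C) + (d0 - C) / q) / (q + 1 / q)
B = d0 - C - A

d_prev, d = d0, d1
for k in range(1, 50):
    d_prev, d = d, d_prev + 2 * h * fp * d - h**3 / 3 * u3
closed = A * q**50 + B * (-q) ** (-50) + C
print(d, closed)  # agree up to rounding error
```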
Differential equations for the error components: Translating the terms back into general functions, $d_k=a(x_k)+(-1)^kb(x_k)+c(x_k)$, we identify functions and their differential equations as
\begin{align}
a(x)&\simeq Ae^{(x-x_k)f'(u(x_k))}&\implies a'(x)&=f'(u(x))a(x),\\
b(x)&\simeq Be^{-(x-x_k)f'(u(x_k))}&\implies b'(x)&=-f'(u(x))b(x),\\
c(x_{k+1})-c(x_{k-1})&=2hf'c(x_k)-\frac{h^3}{3}u'''&\implies c'(x)&=f'(u(x))c(x)-\frac{h^2}{6}u'''(x)
\end{align}
with $u'''(x)=f''(u(x))[f(u),f(u)]+f'(u)^2f(u)$. The initial value of the trend $c$ is zero, while the oscillating parts $a,b$ account for the error in the first step.
\begin{align}
c(x_0)&=0, \\
a(x_0)+b(x_0)&=0, \\
a(x_1)-b(x_1)&=u_1-u(x_1)-c(x_1)\approx e(x_0)h^{p+1}+\frac{h^3}{6}u'''(x_0)
\end{align}
As a consequence, the trend of the error has order $c(x)=O(h^2)$, while the oscillating parts are of order $a(x),b(x)=O(h^3)$ if the initial step has order $p\ge 2$. If the initial step has only order $p=1$, the oscillating parts reflect this lower order, and their contribution is of the same scale as the error trend curve $c$.
Experimental Error Order Confirmation
That this method is really of order 2, but rather sensitive to the computation of $u_1$, is shown by a graph depicting the scaled error of a numerical solution against an exact solution. That the error graphs converge visibly, the latter paths alternating between an upper and a lower error function, confirms the order; otherwise the scaled error graphs would differ largely in scale.
The equation used is of the form $F(u, u')=F(p,p')$ with here $F(u,u')=u'+10\sin(u)$ and $p(t)=\cos(t)$, so that $$u'(t)=f(t,u)=10(\sin(\cos(t))-\sin(u(t)))-\sin(t).$$ The error for the method applied with step size $h$ is divided by $h^2e^{5t}$, as the equation is slightly stiff and the error grows rapidly.
The initialization is, from top down, by the exact value $u_1=p(t_1)$, by order-3 Heun, by the explicit midpoint method, and lastly by the order-insufficient Euler method. To decrease the influence of the first error, the first step was computed with 5 method steps of step size $h/5$.
See Hairer/Nørsett/Wanner: Solving ODE I: Non-stiff problems, chapter III.9, where Figure 9.2 about this same method has a similar construction.
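The experiment can be approximated in a few lines (a hypothetical reconstruction, not the original code; only the exact initialization $u_1=p(t_1)$ is implemented, and the plot is replaced by comparing final errors for two step sizes):

```python
import math

# u' = 10*(sin(cos t) - sin u) - sin t, exact solution p(t) = cos t.
def f(t, u):
    return 10.0 * (math.sin(math.cos(t)) - math.sin(u)) - math.sin(t)

def leapfrog_error(h, T=1.0):
    """Error at t = T of the two-step method, initialized with exact values."""
    n = round(T / h)
    u_prev, u = math.cos(0.0), math.cos(h)   # u_0, u_1 exact
    for k in range(1, n):
        u_prev, u = u, u_prev + 2 * h * f(k * h, u)
    return abs(u - math.cos(n * h))

e1, e2 = leapfrog_error(0.004), leapfrog_error(0.002)
print(e1, e2)  # the error shrinks as h is halved
```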
I unfortunately don't recognize my expression in this answer. You use $y$ and $x$, and a function $f$ that takes two arguments instead of one?
– The Coding Wombat
Jan 9 at 19:58
Set $x=x_k=x_0+kh$, then $u_k\simeq y(x_k)$. The formula is the usual formula for the local discretization error of a method: you insert an exact solution on both sides of the method and use Taylor expansion to find the difference between them.
– LutzL
Jan 9 at 20:16
So that last line can be rewritten to $u_{k+1}=u_{k-1}+2h\cdot f(u_k) + O(h^2)$? And is what's on the right side exactly equal to the exact value of $u_{k+1}$? If so, to get the complete error, I calculate the difference between this expression and the approximation (i.e. $u_{k-1}+2h\cdot f(u_k)$)?
– The Coding Wombat
Jan 9 at 20:27
Also, where did you find the definition of the central difference? Especially the + third derivative and + O(h^4) part? I feel like I'm not searching the web correctly, since I'm having a hard time finding many aspects of numerical analysis.
– The Coding Wombat
Jan 9 at 20:38
The odd part of the function $g(h)=f(x+h)$ is $\frac12(g(h)-g(-h))$. The even part is $\frac12(g(h)+g(-h))$. The odd part of a power series contains only the odd-powered terms, the even part complementarily the even powers.
– LutzL
Jan 9 at 21:27
edited Jan 9 at 21:10
answered Jan 9 at 19:24
Christoph
$begingroup$
Cool way of writing that difference as the definite integral of a derivative! So does this mean that the approximation error in the midpoint rule for this integral is the same as the truncation error of the "central euler method"?
$endgroup$
– The Coding Wombat
Jan 9 at 20:08
1
$begingroup$
That's right, I'll add something on the local truncation error.
$endgroup$
– Christoph
Jan 9 at 20:41
$begingroup$
So, my $u_{k+1}$ in the expression in my question is an approximation for the real value corresponding to $x_2$, right? And the $u(x_2)$ is the exact value at $x_2$? If that's the case, I understand why we then subract the approximation from the exact value. I don't understand the taylor expansion of $u(x_0)$ tho, where do the minus signs come from, and how do you expand to a higher $x_i$?
$endgroup$
– The Coding Wombat
Jan 9 at 21:02
1
$begingroup$
Yes, I did it for $k=1$ only, you can do it for any $k$ of course. The two expressions above are just Taylor expansions of $u(x_0) = u(x_1 - h)$ (note the minus sign!) and of $u(x_2) = u(x_1 + h)$ around $x_1$.
$endgroup$
– Christoph
Jan 9 at 21:05
$begingroup$
One more thing: In your formula for $tau$ you use $u(x_2)$ and $u(x_0)$ and $u(x_1)$, but here the first is an exact value for every iteration right? But the second and third are approximate results from previous iterations if I'm not mistaken? (So in this case $x_0$ and $x_1$ are exact in this example, because they were given, but I mean for the next and subsequent iterations).
$endgroup$
– The Coding Wombat
Jan 9 at 21:16
|
show 2 more comments
$begingroup$
Cool way of writing that difference as the definite integral of a derivative! So does this mean that the approximation error in the midpoint rule for this integral is the same as the truncation error of the "central euler method"?
$endgroup$
– The Coding Wombat
Jan 9 at 20:08
1
$begingroup$
That's right, I'll add something on the local truncation error.
$endgroup$
– Christoph
Jan 9 at 20:41
$begingroup$
So, my $u_{k+1}$ in the expression in my question is an approximation for the real value corresponding to $x_2$, right? And the $u(x_2)$ is the exact value at $x_2$? If that's the case, I understand why we then subract the approximation from the exact value. I don't understand the taylor expansion of $u(x_0)$ tho, where do the minus signs come from, and how do you expand to a higher $x_i$?
$endgroup$
– The Coding Wombat
Jan 9 at 21:02
1
$begingroup$
Yes, I did it for $k=1$ only, you can do it for any $k$ of course. The two expressions above are just Taylor expansions of $u(x_0) = u(x_1 - h)$ (note the minus sign!) and of $u(x_2) = u(x_1 + h)$ around $x_1$.
$endgroup$
– Christoph
Jan 9 at 21:05
$begingroup$
One more thing: In your formula for $tau$ you use $u(x_2)$ and $u(x_0)$ and $u(x_1)$, but here the first is an exact value for every iteration right? But the second and third are approximate results from previous iterations if I'm not mistaken? (So in this case $x_0$ and $x_1$ are exact in this example, because they were given, but I mean for the next and subsequent iterations).
$endgroup$
– The Coding Wombat
Jan 9 at 21:16
$begingroup$
Cool way of writing that difference as the definite integral of a derivative! So does this mean that the approximation error in the midpoint rule for this integral is the same as the truncation error of the "central euler method"?
$endgroup$
– The Coding Wombat
Jan 9 at 20:08
$begingroup$
Cool way of writing that difference as the definite integral of a derivative! So does this mean that the approximation error in the midpoint rule for this integral is the same as the truncation error of the "central euler method"?
$endgroup$
– The Coding Wombat
Jan 9 at 20:08
1
1
$begingroup$
That's right, I'll add something on the local truncation error.
$endgroup$
– Christoph
Jan 9 at 20:41
$begingroup$
That's right, I'll add something on the local truncation error.
$endgroup$
– Christoph
Jan 9 at 20:41
$begingroup$
So, my $u_{k+1}$ in the expression in my question is an approximation for the real value corresponding to $x_2$, right? And the $u(x_2)$ is the exact value at $x_2$? If that's the case, I understand why we then subract the approximation from the exact value. I don't understand the taylor expansion of $u(x_0)$ tho, where do the minus signs come from, and how do you expand to a higher $x_i$?
$endgroup$
– The Coding Wombat
Jan 9 at 21:02
$begingroup$
So, my $u_{k+1}$ in the expression in my question is an approximation for the real value corresponding to $x_2$, right? And the $u(x_2)$ is the exact value at $x_2$? If that's the case, I understand why we then subract the approximation from the exact value. I don't understand the taylor expansion of $u(x_0)$ tho, where do the minus signs come from, and how do you expand to a higher $x_i$?
$endgroup$
– The Coding Wombat
Jan 9 at 21:02
1
1
$begingroup$
Yes, I did it for $k=1$ only, you can do it for any $k$ of course. The two expressions above are just Taylor expansions of $u(x_0) = u(x_1 - h)$ (note the minus sign!) and of $u(x_2) = u(x_1 + h)$ around $x_1$.
$endgroup$
– Christoph
Jan 9 at 21:05
$begingroup$
Yes, I did it for $k=1$ only, you can do it for any $k$ of course. The two expressions above are just Taylor expansions of $u(x_0) = u(x_1 - h)$ (note the minus sign!) and of $u(x_2) = u(x_1 + h)$ around $x_1$.
$endgroup$
– Christoph
Jan 9 at 21:05
$begingroup$
One more thing: In your formula for $tau$ you use $u(x_2)$ and $u(x_0)$ and $u(x_1)$, but here the first is an exact value for every iteration right? But the second and third are approximate results from previous iterations if I'm not mistaken? (So in this case $x_0$ and $x_1$ are exact in this example, because they were given, but I mean for the next and subsequent iterations).
$endgroup$
– The Coding Wombat
Jan 9 at 21:16
$begingroup$
Local Truncation Error
You can also go the differentiation route: with $x=x_k$, $x\pm h=x_{k\pm 1}$ and $u_j=u(x_j)$, the method step can be related to the central difference quotient. With the Taylor expansion of $u$ we get
$$
u(x\pm h)=u(x)\pm hu'(x)+\frac{h^2}{2}u''(x)\pm\frac{h^3}{6}u'''(x)+\dots
\\~\\
\implies
\frac{u(x+h)-u(x-h)}{2h}=u'(x)+\frac{h^2}{6}u'''(x)+O(h^4)
$$
so that
$$
\frac{u(x+h)-u(x-h)}{2h}-f(x,u(x))=O(h^2),
$$
making this linear multi-step method a second-order explicit method.
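The second-order claim can be checked numerically (a minimal sketch, not part of the original answer; the test function $u(x)=\sin(x)$ and the step sizes are my own choice):

```python
import math

def central_diff(u, x, h):
    # Central difference quotient (u(x+h) - u(x-h)) / (2h)
    return (u(x + h) - u(x - h)) / (2.0 * h)

# Truncation error of the quotient on u(x) = sin(x) at x = 1,
# where the exact derivative is cos(1).
x = 1.0
exact = math.cos(x)
errors = [abs(central_diff(math.sin, x, h) - exact) for h in (0.1, 0.05, 0.025)]

# Second order: halving h should divide the error by roughly 4.
ratios = [errors[i] / errors[i + 1] for i in range(len(errors) - 1)]
print(ratios)
```

The ratios coming out close to 4 are exactly the $O(h^2)$ behaviour of the derivation above.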
Global Error Approximation
The distance $d_k=u_k-u(x_k)$ of the numerical method solution $u_k$ to the exact solution $u(x_k)$ evolves as
$$
d_{k+1}=d_{k-1}+2h\bigl(f(u_k)-f(u(x_k))\bigr)-\frac{h^3}{3}u'''(x_k)+\dots
=d_{k-1}+2hf'(u(x_k))d_k-\frac{h^3}{3}u'''(x_k)+\dots
$$
where the omitted terms are of higher order assuming that $hd_k^2$ is $O(h^4)$ or smaller.
Linear difference equation for the error terms: To get an easily solvable problem, first assume that $h$ is small enough that $u$, $f'(u)$ and $u'''$ change slowly over several steps. Then we can treat them as constant in the above recursion. The now-linear recursion
$$
d_{k+1}=d_{k-1}+2hf'd_k-\frac{h^3}{3}u'''
$$
has for $f'\ne 0$ a solution $d_k=Aq^k+B(-q)^{-k}+C$, where $q>0$ solves
$$
q^2-2hf'q+(hf')^2=1+(hf')^2\implies q=hf'+\sqrt{1+(hf')^2}=e^{hf'+O(h^3)}.
$$
The resulting form of the error terms is, up to higher order terms,
$$d_k=Ae^{khf'}+(-1)^kBe^{-khf'}+C.$$
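One can sanity-check this closed form by plugging it back into the linear recursion (a sketch; the values of $hf'$, the inhomogeneity, and the free constants are arbitrary sample choices of mine, not from the answer):

```python
import math

# Sample constants: hf stands for h*f', s for the inhomogeneity h^3/3 * u'''.
hf = 0.05
s = 1e-3

# Positive root of the characteristic equation q^2 - 2*hf*q - 1 = 0;
# the second root is -1/q, giving the (-q)^{-k} mode.
q = hf + math.sqrt(1.0 + hf * hf)

# Free constants of the homogeneous solution, and the constant
# particular solution C = s / (2*hf').
A, B = 0.3, -0.2
C = s / (2.0 * hf)

def d(k):
    # Claimed closed form d_k = A q^k + B (-q)^{-k} + C
    return A * q**k + B * (-q) ** (-k) + C

# The closed form should satisfy d_{k+1} = d_{k-1} + 2*hf*d_k - s exactly
# (up to floating-point roundoff).
residuals = [abs(d(k + 1) - (d(k - 1) + 2 * hf * d(k) - s)) for k in range(1, 10)]
print(max(residuals))
```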
Differential equations for the error components: Translating the terms back into general functions, $d_k=a(x_k)+(-1)^kb(x_k)+c(x_k)$, we identify the functions and their differential equations as
\begin{align}
a(x)&\simeq Ae^{(x-x_k)f'(u(x_k))}&\implies a'(x)&=f'(u(x))a(x),\\
b(x)&\simeq Be^{-(x-x_k)f'(u(x_k))}&\implies b'(x)&=-f'(u(x))b(x),\\
c(x_{k+1})-c(x_{k-1})&=2hf'c(x_k)-\frac{h^3}{3}u'''&\implies c'(x)&=f'(u(x))c(x)-\frac{h^2}{6}u'''(x)
\end{align}
with $u'''(x)=f''(u(x))[f(u),f(u)]+f'(u)^2f(u)$. The initial values are zero for the trend $c$, and they account for the error in the first step in the oscillating parts $a,b$:
\begin{align}
c(x_0)&=0, \\
a(x_0)+b(x_0)&=0, \\
a(x_1)-b(x_1)&=u_1-u(x_1)-c(x_1)\approx e(x_0)h^{p+1}+\frac{h^3}{6}u'''(x_0)
\end{align}
As a consequence, the trend of the error has order $c(x)=O(h^2)$, while the oscillating parts have order $a(x),b(x)=O(h^3)$, provided the method used for the initial step has order $p\ge 2$. If the initial step only has order $p=1$, the oscillating parts reflect this lower order, and their contribution is of the same scale as the error trend curve $c$.
Experimental Error Order Confirmation
That this method really is of order 2, but rather sensitive to the computation of $u_1$, is shown by the following graph, which depicts the scaled error of a numerical solution against an exact one. That the error graphs converge visibly (in the latter cases alternating between an upper and a lower error function) confirms the order; otherwise the scaled error graphs would differ greatly in scale.
The equation used is of the form $F(u,u')=F(p,p')$ with $F(u,u')=u'+10\sin(u)$ and $p(t)=\cos(t)$, so that $$u'(t)=f(t,u)=10(\sin(\cos(t))-\sin(u(t)))-\sin(t).$$ The error for the method applied with step size $h$ is divided by $h^2e^{5t}$, as the equation is slightly stiff and the error grows rapidly.
The initialization is, from top to bottom: by the exact value $u_1=p(t_1)$, by the order-3 Heun method, by the explicit midpoint method, and lastly by the (order-insufficient) Euler method. To decrease the influence of the error in the first step, it was computed with 5 method steps of step size $h/5$.
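A rough sketch of this experiment (my own minimal reconstruction, using only the exact starting value $u_1=p(t_1)$ and omitting the scaling and plotting; the step sizes and end time are my choice):

```python
import math

def f(t, u):
    # Right-hand side from the answer; the exact solution is p(t) = cos(t).
    return 10.0 * (math.sin(math.cos(t)) - math.sin(u)) - math.sin(t)

def leapfrog_error(h, t_end=0.5):
    # Central two-step method u_{k+1} = u_{k-1} + 2h f(t_k, u_k),
    # started with the exact values u_0 = cos(0), u_1 = cos(h).
    n = round(t_end / h)
    u_prev, u_curr = 1.0, math.cos(h)
    for k in range(1, n):
        t_k = k * h
        u_prev, u_curr = u_curr, u_prev + 2.0 * h * f(t_k, u_curr)
    return abs(u_curr - math.cos(n * h))

# With an order-2 method the error should shrink clearly faster than
# first order when h is halved (roughly by a factor of 4).
e1, e2 = leapfrog_error(0.02), leapfrog_error(0.01)
print(e1, e2)
```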
See Hairer/Nørsett/Wanner: Solving ODE I: Non-stiff problems, chapter III.9, where Figure 9.2 about this same method has a similar construction.
$endgroup$
$begingroup$
I unfortunately don't recognize my expression in this answer. You use $y$ and $x$, and a function $f$ that takes two arguments instead of one?
$endgroup$
– The Coding Wombat
Jan 9 at 19:58
1
$begingroup$
Set $x=x_k=x_0+kh$; then $u_k\simeq y(x_k)$. The formula is the usual formula for the local discretization error of a method: you insert an exact solution into both sides of the method and use Taylor expansion to find the difference between them.
$endgroup$
– LutzL
Jan 9 at 20:16
$begingroup$
So that last line can be rewritten as $u_{k+1}=u_{k-1}+2h\cdot f(u_k) + O(h^2)$? And is what's on the right side exactly equal to the exact value of $u_{k+1}$? If so, to get the complete error, do I calculate the difference between this expression and the approximation (i.e. $u_{k-1}+2h\cdot f(u_k)$)?
$endgroup$
– The Coding Wombat
Jan 9 at 20:27
$begingroup$
Also, where did you find the definition of the central difference? Especially the + third derivative and + O(h^4) part? I feel like I'm not searching the web correctly, since I'm having a hard time finding many aspects of numerical analysis.
$endgroup$
– The Coding Wombat
Jan 9 at 20:38
1
$begingroup$
The odd part of the function $g(h)=f(x+h)$ is $\frac12(g(h)-g(-h))$. The even part is $\frac12(g(h)+g(-h))$. The odd part of a power series contains only the odd-powered terms; the even part, complementarily, the even powers.
$endgroup$
– LutzL
Jan 9 at 21:27
edited Jan 11 at 10:48
answered Jan 9 at 19:51
LutzL
$begingroup$
If this is for a beginner's course in numerical methods, then in my opinion it would make much more sense to consider the explicit midpoint method (aka modified Euler method) $u_{k+1} = u_k + h f\left( u_k + \frac{h}{2} f(u_k) \right)$, which is a one-step method. For this method you might use the techniques you were taught for forward/backward Euler. But the jump from one-step to two-step schemes is not so straightforward to understand in a simple exercise.
$endgroup$
– Christoph
Jan 10 at 10:29
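A minimal sketch of the one-step scheme from this comment (the demo problem $u'=-u$ and the step sizes are my own choice, not from the thread):

```python
import math

def midpoint_step(f, u, h):
    # Explicit midpoint / modified Euler step:
    # u_{k+1} = u_k + h * f(u_k + (h/2) * f(u_k))
    return u + h * f(u + 0.5 * h * f(u))

def solve(f, u0, h, n):
    # Apply n steps of size h starting from u0.
    u = u0
    for _ in range(n):
        u = midpoint_step(f, u, h)
    return u

# Demo on u' = -u with u(0) = 1; the exact solution at t = 1 is e^{-1}.
exact = math.exp(-1.0)
e1 = abs(solve(lambda u: -u, 1.0, 0.1, 10) - exact)
e2 = abs(solve(lambda u: -u, 1.0, 0.05, 20) - exact)
# As a second-order one-step method, halving h divides the error by about 4.
print(e1 / e2)
```

Unlike the two-step central method, this needs no second starting value, which is why it is easier to analyze with the forward/backward Euler techniques the comment mentions.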