How do I implement the Numerov method in a numerical example?
I have been having difficulties implementing the Numerov method for solving second-order differential equations in a numerical example. Please suggest a text that discusses the Numerov method and works through numerical examples.

reference-request numerical-methods

asked Jan 7 at 10:59 by Christian Adefila (edited Jan 7 at 11:01 by José Carlos Santos)
What exactly in en.wikipedia.org/wiki/Numerov%27s_method and its references (esp. Hairer/Nørsett/Wanner) poses a problem? How far did you get with the implementation? And which version are you concerned with, the linear one or the general non-linear one? – LutzL, Jan 7 at 11:27

The first question is part of the difficulty I encountered; it impedes me from implementing anything. I am concerned with the non-linear case. – Christian Adefila, Jan 15 at 9:13
1 Answer
Numerov method
The Numerov method applies to second-order ODEs of the type $y''(x)=f(x,y(x))$. It sets the second-order difference quotient equal to the right-hand side plus the first correction term, itself expressed as a second-order difference quotient,
\begin{align}
\frac{y_{n+1}-2y_n+y_{n-1}}{h^2}
&=y''_n+\frac{h^2}{12}y^{(4)}_n
= y''_n+\frac{1}{12}\left(y''_{n+1}-2y''_n+y''_{n-1}\right)\\[.6em]
&=\frac{1}{12}\Bigl(f(x_{n+1},y_{n+1})+10f(x_n,y_n)+f(x_{n-1},y_{n-1})\Bigr)
\end{align}
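Solving this relation for the terms containing the unknown $y_{n+1}$ gives the implicit step equation (the same relation as above, merely rearranged):
$$
y_{n+1}-\frac{h^2}{12}f(x_{n+1},y_{n+1}) = 2y_n-y_{n-1}+\frac{h^2}{12}\Bigl(10f(x_n,y_n)+f(x_{n-1},y_{n-1})\Bigr)
$$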
I will use a leapfrogged implementation as recommended in Hairer/Nørsett/Wanner: "Solving ODE I",
\begin{align}
y_{n+1}&=y_n+hv_{n+1/2}\\
v_{n+1/2}&=v_{n-1/2}+\frac{h}{12}\Bigl(f(x_{n+1},y_{n+1})+10f(x_n,y_n)+f(x_{n-1},y_{n-1})\Bigr)
\end{align}
Predictor-corrector and first step
As the method is implicit, one needs to solve the implicit equation by some kind of iteration: fixed-point iteration on the system as it stands, or some variant of Newton's method for faster convergence, especially for stiff equations.
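For a scalar non-linear $f$, a Newton corrector for the implicit step equation could look as follows (a sketch only; the implementation below uses fixed-point iteration instead). Writing $c=2y_n-y_{n-1}+\frac{h^2}{12}\bigl(10f(x_n,y_n)+f(x_{n-1},y_{n-1})\bigr)$ for the known part, one iterates
$$
G(u)=u-\frac{h^2}{12}f(x_{n+1},u)-c,\qquad u^{[j+1]}=u^{[j]}-\frac{G(u^{[j]})}{1-\frac{h^2}{12}\,\partial_y f(x_{n+1},u^{[j]})}.
$$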
In the following proof-of-concept implementation, I use the simplest building blocks that satisfy the specifications:
- Use as predictor the Verlet-Størmer step $y_{n+1}^{[0]}=y_n+hv_{n-1/2}+h^2f(x_n,y_n)=2y_n-y_{n-1}+h^2f(x_n,y_n)$.
- Use fixed-point iteration as corrector.
- To compute the first step, use the classical 4th-order Runge–Kutta method (RK4). For optimal accuracy, apply it in sub-steps with a reduced step size.
The predictor has error $O(h^4)$, both relative to the exact value (if $y_{n-1}$ and $y_n$ were values of an exact solution) and relative to the solution of the implicit step equation. After eliminating $v$, the fixed-point corrector has contraction factor $\frac{Lh^2}{12}$, so the error towards the exact solution of the implicit step should be $O(h^6)$ after the first correction and $O(h^8)$ after the second. One correction should already suffice to make the global error $O(h^4)$; however, the iteration error is then of the same magnitude as the discretization error of the method and thus influences the global error accumulation. Three or more correction steps are needed if $L$ is large, so that even higher-order error terms can reach the size of the discretization error.
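The contraction factor is a one-line computation from the implicit step equation: the fixed-point map is $\Phi(u)=c+\frac{h^2}{12}f(x_{n+1},u)$ with $c$ the known part, so for $f$ Lipschitz in $y$ with constant $L$,
$$
|\Phi(u)-\Phi(\tilde u)|=\frac{h^2}{12}\bigl|f(x_{n+1},u)-f(x_{n+1},\tilde u)\bigr|\le \frac{Lh^2}{12}\,|u-\tilde u|.
$$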
Test problem
Use the idea of manufactured solutions, $F[y]=F[p]$, where $F[y]$ is some expression in $y$ and its derivatives, to generate a test problem with known exact solution $p(x)=\sin(x)$. For instance, with $F[y]=y''+4\sin(y)$,
$$
y''+4\sin(y) = F[\sin(x)] = -\sin(x)+4\sin(\sin(x)), \qquad y(0)=\sin(0)=0,\quad y'(0)=\cos(0)=1.
$$
The following graphs depict the (leading) error coefficient, that is, the difference between the numerical and the exact solution divided by $h^4$.

That the graphs for the different step sizes are of the same scale and visually approach some limit confirms that the method is of order $4$. Note that the first graph, for 1 corrector step, has a larger value scale than the next two, as predicted.

One can see some slight improvement from the second to the third corrector step at the larger step sizes. In this example it is not large enough to justify the third corrector step, but for other problems the difference might be more significant.

The last graph shows the influence of the error of the first step. Having a simple 5th-order error is not enough; to reach the optimal error behavior of the whole method, the first-step error coefficient needs to be reduced (by a factor of 16 or 81) by using 2 or 3 RK4 sub-steps. Using 2 sub-steps plus Richardson extrapolation could also be a useful variant (a sketch of this follows the firststep routine below).
Code
method
import numpy as np

def firststep(f, x, y, v, dx, nRK4):
    '''Apply nRK4 classical RK4 steps of width dx/nRK4 to the first-order
    system [y', v'] = [v, f(x,y)]. Returns the state y at x+dx and the
    average slope over [x, x+dx], which approximates v at the midpoint.'''
    y, v = (np.array(c, dtype=float) for c in (y, v))  # work on float copies
    v0h = np.zeros_like(y)   # accumulates the average slope per sub-step
    h = dx/nRK4
    for j in range(nRK4):
        # in principle, the ky stages could be eliminated, at the cost of
        # introducing unusual terms
        k1y = v
        k1v = f(x, y)
        k2y = v + 0.5*h*k1v
        k2v = f(x+0.5*h, y+0.5*h*k1y)
        k3y = v + 0.5*h*k2v
        k3v = f(x+0.5*h, y+0.5*h*k2y)
        k4y = v + h*k3v
        k4v = f(x+h, y+h*k3y)
        dy = (k1y + 2*(k2y+k3y) + k4y)/6
        v0h += dy
        y += h*dy
        v += h*(k1v + 2*(k2v+k3v) + k4v)/6
        x += h
    return y, v0h/nRK4
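The Richardson variant for the first step mentioned above could look like this (a hypothetical sketch, not part of the scheme tested below; it assumes the leading error of a firststep call over $[x,x+dx]$ scales like the 5th power of the sub-step size, so the $16/15$ combination cancels it):

def firststep_richardson(f, x, y, v, dx):
    # hypothetical variant: combine one full-width RK4 pass with two
    # half-width passes; the leading error terms cancel in (16*A2-A1)/15
    y1, v1 = firststep(f, x, y, v, dx, 1)
    y2, v2 = firststep(f, x, y, v, dx, 2)
    return (16*y2 - y1)/15, (16*v2 - v1)/15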
def Numerov_integrate(f, x0, xf, dx, y0, v0, nPECE=1, nRK4=1):
    x = np.arange(x0, xf+0.5*dx, dx)
    y = np.asarray(len(x)*[y0], dtype=float)
    # use v0h for v[k-1/2] and v1h for v[k+1/2], where the current step is
    # from k to k+1
    y[1], v0h = firststep(f, x0, y0, v0, dx, nRK4)
    fkm1, fk = f(x[0], y[0]), f(x[1], y[1])
    for k in range(1, len(x)-1):
        # predictor: a simple Verlet step
        y[k+1] = y[k] + dx*(v0h + dx*fk)
        for j in range(nPECE):
            v1h = v0h + dx/12*(f(x[k+1], y[k+1]) + 10*fk + fkm1)
            y[k+1] = y[k] + dx*v1h
        fkm1, fk = fk, f(x[k+1], y[k+1])
        v0h = v1h
    return x, y
test example
from numpy import sin

def mms_ode(x, y):
    return -sin(x) + 4*(sin(sin(x)) - sin(y))

# test the correctness of the implementation
m = 10
x, y = Numerov_integrate(mms_ode, 0, 4, 0.1/m, [0.0], [1.0], nPECE=2, nRK4=3)
for xk, yk in list(zip(x, y))[::2*m]:
    print("%15.10f: %15.10f, %15.10f | %15.10e"
          % (xk, yk[0], sin(xk), (yk[0]-sin(xk))*(10*m)**4))
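One can also verify the order numerically on top of this interface; a minimal sketch (halving the step size should reduce the maximum error by roughly $2^4=16$, so the ratio $\mathrm{err}/h^4$ should stabilize):

import numpy as np
# the ratio error/h^4 should approach a constant for a 4th-order method
for m in (1, 2, 4, 8):
    x, y = Numerov_integrate(mms_ode, 0, 4, 0.1/m, [0.0], [1.0], nPECE=2, nRK4=3)
    err = np.max(np.abs(y[:, 0] - np.sin(x)))
    print("h = %8.5f  max error = %10.3e  error/h^4 = %8.4f"
          % (0.1/m, err, err/(0.1/m)**4))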
answered Jan 15 at 12:49 by LutzL