Scaling forcing function before numerical integration of linear ODE
Suppose that we have a linear ODE that we want to integrate numerically:
$$
L\{y\} = f(t),
$$
where $L$ is a linear differential operator (together with some initial conditions), using a method of order $\mathcal{O}(h^n)$, where $h$ is the step size of the method.
Is it true that if we scale up the forcing function $f(t)$ by a factor $K$ before integrating, and then rescale the solution $y$ down by $\frac{1}{K}$, we have increased the precision of the method? The argument is that the order of the method stays the same and depends only on the step size, so when we scale back down, the error is reduced.
ordinary-differential-equations numerical-methods soft-question
asked Jan 7 at 20:31
Ambesh
1 Answer
Imagine the function $y_s$ is the actual solution to the problem
$$
L\{y_s\} = f(t). \tag{1a}
$$
If $\hat{y}$ is a numerical approximation, the error is measured as
$$
\epsilon(n) = |y_s(t_n) - \hat{y}_n|. \tag{2a}
$$
Now, since $L$ is linear, if you multiply both sides of (1a) by $K$, then
$$
L\{K y_s\} = K f(t), \tag{1b}
$$
and the approximate solution of this scaled problem is $\hat{y}^{K}$, so the error is
$$
\epsilon^K(n) = |K y_s(t_n) - \hat{y}^{K}_n| = K\left|y_s(t_n) - \frac{\hat{y}^{K}_n}{K}\right| = K\,\epsilon(n). \tag{2b}
$$
(The last equality uses that the numerical scheme is itself linear, so the approximation of the scaled problem is just $\hat{y}^{K}_n = K\hat{y}_n$.)
So the error also scales as $K$. That is, if you multiply by $K$ and then divide the result by $K$, the accuracy does not increase.
answered Jan 7 at 21:07
caverac
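As a concrete illustration of this conclusion, here is a minimal sketch (not part of the original answer): forward Euler applied to the illustrative linear test problem $y' + y = \cos t$, $y(0) = 0$, once with the original forcing and once with the forcing scaled by $K$. After dividing the second result by $K$, the two errors agree up to round-off, so the rescaling buys no accuracy. The `euler` helper and the choice of test problem are assumptions made for this sketch only.

```python
import numpy as np

def euler(f, y0, t0, t1, n):
    """Forward Euler for y' = f(t, y) with n equal steps; returns the value at t1."""
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        y = y + h * f(t, y)
        t += h
    return y

# Test problem: y' + y = cos(t), y(0) = 0.
# Exact solution: y(t) = (cos t + sin t - exp(-t)) / 2.
exact = lambda t: 0.5 * (np.cos(t) + np.sin(t) - np.exp(-t))

T, n, K = 1.0, 100, 1.0e3

# Original problem.
y_hat = euler(lambda t, y: -y + np.cos(t), 0.0, 0.0, T, n)
err = abs(y_hat - exact(T))

# Forcing scaled by K (the zero initial condition scales trivially),
# then the numerical result rescaled by 1/K.
yK_hat = euler(lambda t, y: -y + K * np.cos(t), 0.0, 0.0, T, n)
err_rescaled = abs(yK_hat / K - exact(T))

print(f"error, original problem:       {err:.3e}")
print(f"error, scaled by K then / K:   {err_rescaled:.3e}")
# Both errors agree to round-off, as predicted by (2b): scaling does not help.
```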