Application of Gronwall Inequality to existence of solutions












Consider the $N$-dimensional autonomous system of ODEs
$$\dot{x}= f(x),$$
where $f(x)$ is defined for all $x \in \mathbb{R}^N$ and satisfies $\|f(x)\| \leq \alpha\|x\|$, where $\alpha$ is a positive scalar constant and $\|x\|$ is the usual Euclidean norm (the square root of the sum of the squared components of the vector). Using Gronwall's inequality, show that the solution emerging from any point $x_0\in\mathbb{R}^N$ exists for any finite time.




Here is my proposed solution.



We can first rewrite the initial value problem as an integral equation,



$$x(t) = x_0 + \int_{t_0}^{t} f(x(s))\, ds$$



where the integration constant is chosen such that $x(t_0)=x_0$. WLOG, assume that $t_0=0$. Then,



\begin{equation}
\begin{split}
\|x(t)\| & = \left\|x_0 + \int_{0}^{t} f(x(s))\, ds\right\| \\
& \leq \|x_0\| + \left\|\int_{0}^{t} f(x(s))\, ds\right\| \\
& \leq \|x_0\| + \alpha\int_{0}^{t} \|x(s)\|\, ds
\end{split}
\end{equation}
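For reference, the integral form of Gronwall's inequality used in the next step is the standard one: if $u$ is continuous and nonnegative with
$$u(t) \leq C + \alpha\int_{0}^{t} u(s)\, ds, \qquad t \geq 0,$$
then $u(t) \leq C e^{\alpha t}$. Here we apply it with $u(t)=\|x(t)\|$ and $C=\|x_0\|$.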



Therefore, by the integral form of Gronwall's inequality, we see that



\begin{equation}
\begin{split}
\|x(t)\| & \leq \|x_0\| + \alpha\int_{0}^{t} \|x(s)\|\, ds \\
& \leq \|x_0\|e^{\alpha t}
\end{split}
\end{equation}



So, if we let $M = \|x_0\|$, then $\|x(t)\|\leq M e^{\alpha t}$. In particular, $\|x(t)\| \leq M e^{\alpha T}$ for all $t \in [0,T]$, so the solution is uniformly bounded on $[0,T]$ for every $T>0$.
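As a quick numerical sanity check (an illustration, not part of the proof), we can integrate a hypothetical linear field $f(x)=Ax$ chosen so that $\|Ax\| = \sqrt{1.25}\,\|x\|$, and verify that $\|x(t)\|\leq M e^{\alpha t}$ holds along the computed trajectory. The matrix, step size, and time horizon below are illustrative choices.

```python
import math

def norm(x):
    # Euclidean norm: square root of the sum of squared components
    return math.sqrt(sum(v * v for v in x))

# Hypothetical linear field f(x) = A x; for this A, ||A x|| = sqrt(1.25) * ||x||,
# so the hypothesis ||f(x)|| <= alpha * ||x|| holds with alpha = sqrt(1.25).
A = [[0.5, -1.0],
     [1.0,  0.5]]
alpha = math.sqrt(1.25)

def f(x):
    return [sum(A[i][j] * x[j] for j in range(2)) for i in range(2)]

x = [1.0, 0.0]            # initial condition x_0
M = norm(x)               # M = ||x_0||
h, steps = 1e-3, 5000     # forward Euler on [0, 5]
ok = True
for k in range(1, steps + 1):
    dx = f(x)
    x = [x[i] + h * dx[i] for i in range(2)]
    # check the Gronwall bound ||x(t)|| <= M e^{alpha t} at t = k*h
    if norm(x) > M * math.exp(alpha * k * h) + 1e-9:
        ok = False
        break

print(ok)  # -> True
```

The bound is very loose here (the true growth rate of this field is $0.5$, not $\sqrt{1.25}$), which is exactly what Gronwall's inequality promises: an upper envelope, not a sharp estimate.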



As $T>0$ was arbitrary, the solution is defined for all positive values of $t$.



We can then analyze what happens for negative values of $t$ by reversing time and applying the same argument on $[-T,0]$.
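Concretely, the time reversal can be made explicit: for $t \leq 0$, set $y(\tau) = x(-\tau)$ with $\tau = -t \geq 0$. Then
$$\dot{y}(\tau) = -f(y(\tau)), \qquad \|{-f(y)}\| = \|f(y)\| \leq \alpha\|y\|,$$
so the forward argument applies verbatim to $y$ and yields $\|x(t)\| = \|y(-t)\| \leq \|x_0\|e^{\alpha|t|}$.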



Once again assume that $t_0=0$. Then,



$$x(t) = x_0 - \int_{t}^{0} f(x(s))\, ds, \qquad t<0,$$



Therefore,



\begin{equation}
\begin{split}
\|x(t)\| & = \left\|x_0 - \int_{t}^{0} f(x(s))\, ds\right\| \\
& \leq \|x_0\| + \left\|\int_{t}^{0} f(x(s))\, ds\right\| \\
& \leq \|x_0\| + \alpha\int_{t}^{0} \|x(s)\|\, ds \\
& \leq \|x_0\|e^{\alpha|t|} \\
& = M e^{\alpha|t|}
\end{split}
\end{equation}



So, the solution is uniformly bounded on $[-T,0]$ for every $T>0$.



Combining these two bounds, we see that the solution emerging from any point $x_0\in\mathbb{R}^N$ exists for any finite time.



Is this approach correct? Please let me know if there are any better alternatives.










  • I do not think there are better alternatives, but your approach is not complete. You have just proved that if a solution is defined on $[t_1,t_2]$ then it is bounded there. Now apply the extension theorem: if a right-nonextendible solution $x(\cdot)$ is defined on some $[t_1,T)$ with $T<\infty$, then for any compact $K\subset\mathbb{R}^N$ there is $\tau<T$ such that $x(t)\notin K$ for $t\in(\tau,T)$. And this contradicts your estimates.
    – user539887
    Dec 19 '18 at 19:41












  • Do you mean the Picard–Lindelöf theorem: math.stackexchange.com/questions/2531735/…? I didn't use that theorem because it is for local solutions. The Gronwall inequality is for global solutions. I'm not sure why I need to apply the extension theorem.
    – Axion004
    Dec 19 '18 at 20:20










  • No, I mean just the result stating what I wrote, see, e.g., Corollary 2.16 on p. 53 of Teschl's Ordinary Differential Equations and Dynamical Systems.
    – user539887
    Dec 19 '18 at 20:37










  • I see, I only showed that the solution is bounded. I have to argue by lemma 2.14/corollary 2.15/corollary 2.16 that the solution exists (from reading the proof of theorem 2.17, this follows directly from the compactness of the interval). I don't like the wording in corollary 2.16. I am going to spend some more time reading it and see if I understand.
    – Axion004
    Dec 19 '18 at 22:16


















differential-equations proof-verification integral-inequality






asked Dec 19 '18 at 19:05









Axion004

1 Answer
As explained in the comments, the proposed solution only shows that the solution is bounded on any finite interval $[t_1,t_2]$, where $t_1,t_2\in\mathbb{R}$. We also need to show that the solution can be extended to any interval of finite length.



To do this, consider Lemma $2.14$ on page $52$ of Teschl.



$\textbf{Lemma 2.14:}$ Let $\phi(t)$ be a solution of $(2.10)$ defined on the interval $(t_-,t_+)$. Then there exists an extension to the interval $(t_-,t_+ + \epsilon)$ for some $\epsilon > 0$ if and only if there exists a sequence $t_m\in(t_-,t_+)$ such that



$$\lim_{m\to\infty}(t_m,\phi(t_m))=(t_+,y)\in U.$$



The analogous statement holds for an extension to $(t_- - \epsilon,t_+)$.



As $\|x(t)\|\leq M e^{\alpha|t|}$, the solution stays in a compact ball over any finite time interval. Therefore, by Lemma $2.14$ (and the Bolzano–Weierstrass theorem), we can extend the solution to any interval of finite length.
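In more detail, suppose the maximal interval of existence were $[0,t_+)$ with $t_+<\infty$. The bound gives $\|x(t)\| \leq M e^{\alpha t_+}$ on $[0,t_+)$, so for any sequence $t_m \to t_+$ the points $(t_m, x(t_m))$ lie in the compact set $[0,t_+]\times\overline{B}(0, M e^{\alpha t_+})$. By Bolzano–Weierstrass, a subsequence of $(t_m, x(t_m))$ converges to some $(t_+,y)$, and Lemma $2.14$ then extends the solution past $t_+$, a contradiction. The same reasoning applies for negative times.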



An alternative argument would be using Corollary $2.16$. I don't like the way Corollary $2.16$ is phrased and have decided to directly apply Lemma $2.14$ instead.






        answered Dec 26 '18 at 17:56









        Axion004
