Variance of parameter estimate using recursive least squares


I am learning about recursive least squares estimation using a forgetting factor $\lambda$ as a tool for treating time variations of model parameters and have become stuck on the following problem.



Question



Find an expression for $V\big[\hat{b}\big]$ given
$$y_t = bu_t + e_t, \quad t=1,\dots,N$$



where $e_t$ is white Gaussian noise with variance $\sigma^2_e$ and $u_t$ is a deterministic signal such that



$$\lim_{N\to\infty} \frac{1}{N} \sum_{t=1}^{N} u^2_t$$



is finite. The unknown parameter $b$ is estimated as
$$\hat{b}= \operatorname*{argmin}_b \sum_{t=1}^{N} \lambda^{N-t}(y_t-bu_t)^2,$$ where $0<\lambda \leq 1$.



My attempt at a solution



It can be seen that the argument that minimises the above equation is $\hat{b}= \frac{y_t}{u_t}$. However, when I try to calculate the variance I get



$$V\big[\hat{b}\big]=V\big[\frac{y_t}{u_t}\big].$$ But $u_t$ is a deterministic signal, and I am under the impression that the variance of a deterministic signal is zero, so would this not give me a zero in the denominator?



Any help greatly appreciated.



Edit



After user617446's comment I went back and recalculated $\hat{b}$ as follows:



$$\frac{\partial}{\partial b}\bigg[ \sum_{t=1}^{N} \lambda^{N-t}(y_t-bu_t)^2 \bigg] = 2b\sum_{t=1}^{N}\lambda^{N-t}u_t^2-2\sum_{t=1}^{N}\lambda^{N-t}y_tu_t.$$



Setting this equal to zero and solving gave



$$\hat{b}=\frac{\sum_{t=1}^{N}\lambda^{N-t}y_tu_t}{\sum_{t=1}^{N}\lambda^{N-t}u_t^2}.$$



I believe this to be correct, but I am now stuck once again on how to calculate the variance. Grateful for any and all help.
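As a sanity check on this expression, here is a small numerical sketch of my own (the input signal and all constants below are arbitrary choices, not part of the problem statement) comparing the closed-form weighted estimate with one common form of the scalar RLS recursion with forgetting factor:

    import numpy as np

    rng = np.random.default_rng(0)

    # Arbitrary illustration values (not part of the problem statement)
    N, b_true, sigma_e, lam = 200, 2.5, 0.3, 0.95
    u = np.cos(0.1 * np.arange(1, N + 1)) + 1.5   # deterministic input u_t
    e = rng.normal(0.0, sigma_e, N)               # white Gaussian noise e_t
    y = b_true * u + e

    # Closed-form weighted least-squares estimate from above
    w = lam ** (N - np.arange(1, N + 1))          # weights lambda^(N-t)
    b_hat = np.sum(w * y * u) / np.sum(w * u ** 2)

    # Scalar RLS recursion with forgetting factor; with a diffuse
    # initialisation it should closely match b_hat after N steps
    b_rls, P = 0.0, 1e6
    for t in range(N):
        K = P * u[t] / (lam + u[t] ** 2 * P)      # gain
        b_rls += K * (y[t] - b_rls * u[t])        # estimate update
        P = (1.0 - K * u[t]) * P / lam            # "covariance" update

    print(b_hat, b_rls)                           # both close to b_true = 2.5

Both values land close to the true $b$, which at least suggests the closed form is the right target for the recursion.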










Tags: recursive-algorithms, time-series

asked Jan 16 at 12:09, edited Jan 20 at 3:20 – Eiraus




  • $\hat b$ is not time dependent and hence cannot be considered to be a ratio of $y_t / u_t$. Instead, ask yourself what single value of $b$ will minimize the expression, given $y_t, u_t$. – user617446, Jan 16 at 14:45
  • Thanks for the hint @user617446! I have updated the question with my new calculations for the $b$ estimate but am still stuck on calculating the variance. – Eiraus, Jan 19 at 12:30
  • Suppose you had to find the variance of $b$ in the case $y=bu+e$ for a single $t$. Do you know what to do then? – user617446, Jan 20 at 5:32
  • $Var[b]=Var[\frac{1}{u}(y-e)]$. Is this what you mean? But for a single $t$ would that not just be the variance of a constant, i.e. zero? Thanks for sticking with me @user617446! – Eiraus, Jan 20 at 19:40
  • You are almost there. For a single equation, $\hat{b}=y/u$, but $y=bu+e$ where $b$ is the "correct" value (not the estimate). If we substitute we get $\hat{b}=b+e/u$, and the variance is $\sigma^2/u^2$. PS: I believe that if you read up on maximum likelihood estimation, you could get more insight into these kinds of problems. – user617446, Jan 21 at 5:53

1 Answer

Firstly, the least squares estimate can be found by differentiating the sum with respect to the parameter $b$, so the expression
$$\hat b =\dfrac{\sum\limits_{t=1}^N\lambda^{N-t}y_t u_t}{\sum\limits_{t=1}^N\lambda^{N-t}u_t^2}$$
is correct.



The estimate $\hat b$ should be considered as a random variable whose value depends on the specific white-noise sample. Substituting $y_t = bu_t + e_t$,
$$\hat b =\dfrac{\sum\limits_{t=1}^N\lambda^{N-t}(e_t+u_t b) u_t}{\sum\limits_{t=1}^N\lambda^{N-t}u_t^2}
=b + \dfrac{\sum\limits_{t=1}^N\lambda^{N-t}e_t u_t}{\sum\limits_{t=1}^N\lambda^{N-t}u_t^2}.$$

Since the noise $e_t$ has zero mean, the ratio of sums above has zero expectation (here $M(\cdot)$ denotes the expectation), so the estimate is unbiased:
$$M(\hat b) = b.$$
Then the variance is
$$V(\hat b) = M\big((\hat b-b)^2\big) = M\left(\left(\dfrac{\sum\limits_{t=1}^N\lambda^{N-t}e_t u_t}{\sum\limits_{t=1}^N\lambda^{N-t}u_t^2}\right)^2\right)\\[4pt]
= \dfrac{M\left(\sum\limits_{t=1}^N \lambda^{2(N-t)}u_t^2e_t^2\right)
+2M\left(\sum\limits_{1\leq t_1 < t_2\leq N} \lambda^{2N-t_1-t_2}u_{t_1}u_{t_2}e_{t_1} e_{t_2}\right)}{\left(\sum\limits_{t=1}^N\lambda^{N-t}u_t^2\right)^2}
= \color{brown}{\mathbf{\dfrac{\sum\limits_{t=1}^N \lambda^{2(N-t)}u_t^2}{\left(\sum\limits_{t=1}^N\lambda^{N-t}u_t^2\right)^2}\cdot\sigma_e^2}},$$
where the cross terms vanish because the $e_t$ are independent with zero mean (white noise), and $M(e_t^2)=\sigma_e^2$.
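As a quick Monte Carlo sanity check of the boxed formula (a sketch of my own; the input signal and constants are arbitrary assumptions, not taken from the question):

    import numpy as np

    rng = np.random.default_rng(1)

    # Arbitrary illustration values (not from the question)
    N, b_true, sigma_e, lam = 100, 2.5, 0.3, 0.9
    u = np.cos(0.1 * np.arange(1, N + 1)) + 1.5     # deterministic input u_t
    w = lam ** (N - np.arange(1, N + 1))            # weights lambda^(N-t)

    # Theoretical variance from the boxed formula
    var_theory = sigma_e ** 2 * np.sum(w ** 2 * u ** 2) / np.sum(w * u ** 2) ** 2

    # Empirical variance of b_hat over many independent noise realisations
    trials = 20000
    e = rng.normal(0.0, sigma_e, size=(trials, N))  # one noise sample per row
    b_hat = ((b_true * u + e) * u * w).sum(axis=1) / np.sum(w * u ** 2)

    print(var_theory, b_hat.var())                  # the two should agree closely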






answered Jan 24 at 5:40 – Yuri Negometyanov