Solving a linear mean-square estimation problem the easy way


























I have an exercise which should be quite trivial. However, I got stuck, and I'm not sure whether this is the end result. I assume there has to be a way to get this result much more quickly.



Given are two random variables $y$ and $n$, each with mean 0 and variance 1. Further, we know that $E\{yn\} = 0.5$. We measure $$x = y + n.$$



We look for the linear mean-square estimate of $y$ as a function of $x$.



From the given facts we know $x$ has zero mean too. The variance has to be $$\operatorname{var}(x) = \operatorname{var}(y) + \operatorname{var}(n) + 2\cdot\operatorname{cov}(y,n)$$
because $y$ and $n$ are correlated. Therefore $\operatorname{var}(x)$ becomes $$1 + 1 + 2\cdot 0.5 = 3.$$
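As a quick sanity check (my own sketch, not part of the original post — note it uses Python/NumPy rather than the MATLAB used later), the variance identity can be verified numerically, assuming jointly Gaussian $y$ and $n$ with the stated moments:

```python
import numpy as np

rng = np.random.default_rng(0)
# Draw correlated (n, y) with unit variances and E{yn} = 0.5,
# via a Cholesky factor L of the correlation matrix (L @ L.T == R).
R = np.array([[1.0, 0.5], [0.5, 1.0]])
L = np.linalg.cholesky(R)
samples = rng.standard_normal((1_000_000, 2)) @ L.T
n, y = samples[:, 0], samples[:, 1]

x = y + n
print(np.var(x))  # close to var(y) + var(n) + 2*cov(y, n) = 3
```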



$y$ becomes $y = n - x$, and $\hat{y}$ should have the form $\hat{y} = ax + b$.
For $a$ and $b$ I already have the general formulas:
$$a = \frac{E\{xy\} - m_x m_y}{\sigma_x^2}$$
$$b = \frac{E\{x^2\} m_y - E\{xy\} m_x}{\sigma_x^2}.$$
In my case $y$ should be $n - x$, if I got this right. But solving the problem with those formulas for $a$ and $b$ leaves me with terms like $E\{xn\}$. How can I know what this mean should be? I can't assume they are uncorrelated, can I? Probably I'm running completely in the wrong direction and the solution is much more obvious.



EDIT 1: From Fat32's input I get for $a$:
$$a = \frac{E\{x(n-x)\}}{\sigma_x^2} = \frac{3 - 1.5}{3} = \frac{1}{2}$$
and for $b$: $$b = \frac{E\{x^2\} m_y - E\{x(n-x)\} m_x}{\sigma_x^2} = \frac{3 - 1.5}{3} = \frac{1}{2}.$$
The solution would therefore be $y = -\frac{1}{2}x + \frac{1}{2} = \frac{1}{2}(1-x)$. Not sure if this is true; I have to test it with random samples.



EDIT 2:
I did a test with MATLAB:

N = 10000;          % number of samples in each vector
M = randn(N, 2);

R = [1 0.5; 0.5 1]; % correlation matrix
M = M * chol(R);    % used to generate correlated random variables

n = M(:, 1);
y = M(:, 2);

x = y + n;

y_hat = -0.5*x + 0.5;

mean(y - y_hat)

y_hat is not even close to the real y; it doesn't even have the same mean. I don't get it. I'm definitely making some mistakes here.
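For reference, here is a Python/NumPy translation of the test above (an assumed equivalent, not the original script). It shows why the candidate $\hat{y} = -0.5x + 0.5$ fails: its mean is $0.5$ while $y$ has mean $0$, so the residual mean sits near $-0.5$ rather than $0$:

```python
import numpy as np

rng = np.random.default_rng(1)
# Same setup as the MATLAB script: unit-variance y, n with E{yn} = 0.5.
R = np.array([[1.0, 0.5], [0.5, 1.0]])
M = rng.standard_normal((100_000, 2)) @ np.linalg.cholesky(R).T
n, y = M[:, 0], M[:, 1]
x = y + n

y_hat = -0.5 * x + 0.5       # the candidate estimator from EDIT 1
print(np.mean(y - y_hat))    # near -0.5, not 0: the estimator is biased
```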



EDIT 3:
I found another formula which uses $a$ and $b$. Inserted, the linear least-squares solution becomes $$\hat{y} = \rho_{xy}\frac{\sigma_y}{\sigma_x}(x - m_x) + m_y.$$ When I insert my values I get:
$$\hat{y} = \rho_{xy}\frac{1}{\sqrt{3}}x.$$
$\rho_{xy}$ is $\frac{1.5}{\sqrt{3}}$, and so $\hat{y}$ becomes $0.5x$, as @Fat32 pointed out. The error above was that $b$ is zero, because $m_x$ and $m_y$ are zero.
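A quick Monte Carlo check (again my own Python/NumPy sketch, not from the original post) confirms that $\hat{y} = 0.5x$ matches the empirical least-squares fit and beats the earlier candidate:

```python
import numpy as np

rng = np.random.default_rng(2)
R = np.array([[1.0, 0.5], [0.5, 1.0]])
M = rng.standard_normal((200_000, 2)) @ np.linalg.cholesky(R).T
n, y = M[:, 0], M[:, 1]
x = y + n

# Empirical LMMSE coefficients: a = cov(x, y) / var(x), b = m_y - a * m_x.
a = np.cov(x, y)[0, 1] / np.var(x)
b = y.mean() - a * x.mean()
print(a, b)  # close to 0.5 and 0

mse_good = np.mean((y - 0.5 * x) ** 2)          # MSE of the optimal 0.5*x
mse_bad = np.mean((y - (-0.5 * x + 0.5)) ** 2)  # MSE of the earlier candidate
print(mse_good, mse_bad)  # mse_good is near 0.25 and clearly smaller
```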





























  • Hey, for computing $a$ (and also $b$) you should have $E\{x(n-x)\} = 1.5 - 3 = -1.5$. Why do you take $E\{xn\} = 1$ when my answer says $1.5$?
    – Fat32, Feb 3 at 14:15








  • @Fat32 Sorry, I worked quite sloppily. Too much studying for today.
    – Mr.Sh4nnon, Feb 3 at 14:22






  • You are solving for $a$ and $b$ somehow wrong. I solved it with the same parameters to get $a = 0.5$ and $b = 0$, and it yields the expected result...
    – Fat32, Feb 3 at 15:39






  • I will put the answer...
    – Fat32, Feb 3 at 15:43






  • No, I meant that when trying to prove this in MATLAB with a small sample size, it can happen that another solution yields a smaller error. But if you run the script many times, $0.5x$ will be the least-squares solution.
    – Mr.Sh4nnon, Feb 3 at 16:11
















estimation self-study least-squares






edited Feb 3 at 15:57







Mr.Sh4nnon

















asked Feb 3 at 13:35









Mr.Sh4nnon
809








2 Answers































I want to show you how to get the minimum linear mean-square estimator coefficients $a$ and $b$ for your given problem setup. The procedure is summarized from the book Statistical Digital Signal Processing and Modeling by Monson Hayes.



Given two random variables $X$ and $Y$, we observe $X$ and want to estimate $Y$ using a linear estimator:



$$\hat{Y} = a\cdot X + b$$



which minimizes the mean-square error $$\xi^2 = E\{(Y - \hat{Y})^2\}.$$



The solution is:



$$\boxed{a = \frac{E\{XY\} - m_x m_y}{\sigma_x^2}}$$



$$\boxed{b = \frac{E\{X^2\} m_y - E\{XY\} m_x}{\sigma_x^2}}$$



A further simplification comes from recognizing the correlation coefficient $$\rho_{xy} = \frac{E\{XY\} - m_x m_y}{\sigma_x \sigma_y}.$$



Then the optimal linear estimator of $Y$ can be rewritten as:



$$\boxed{\hat{Y} = \rho_{xy}\frac{\sigma_y}{\sigma_x}(X - m_x) + m_y}$$



Note that the resulting minimum mean-square error is also given by:



$$\xi_o^2 = \sigma_y^2(1 - \rho_{xy}^2)$$



And further note that the orthogonality principle requires, for the optimum estimator, that the error $e = Y - \hat{Y}$ be orthogonal to the data:

$$E\{X\cdot e\} = E\{X(Y - \hat{Y})\} = 0.$$



Now coming to your problem,



We are given the observation $ X = Y + N $ with the following statistics:



$$E\{Y\} = E\{N\} = E\{X\} = 0, \qquad \sigma_y^2 = 1,\ \sigma_n^2 = 1,\ \sigma_x^2 = 3$$



(you can compute $\sigma_x^2 = 3$ from the givens), and we are further given $E\{YN\} = 0.5$. Now we compute $\rho_{xy}$, which is:



$$\rho_{xy} = \frac{E\{XY\} - m_x m_y}{\sigma_x \sigma_y} = \frac{E\{(Y+N)Y\} - 0\cdot 0}{\sqrt{3}\cdot 1} = \frac{1 + 0.5}{\sqrt{3}} = \frac{\sqrt{3}}{2}$$



Then the optimal linear mean-square estimate becomes:



$$\hat{Y} = \frac{\sqrt{3}}{2}\cdot\frac{1}{\sqrt{3}}(X - 0) + 0 = 0.5X$$



From this you can also infer that $a = 0.5$ and $b = 0$.



Note that you could also reach the same result by just computing $a$ and $b$ directly from the formulas:



$$a = \frac{E\{XY\} - m_x m_y}{\sigma_x^2} = \frac{1.5 - 0\cdot 0}{3} = 0.5$$



$$b = \frac{E\{X^2\} m_y - E\{XY\} m_x}{\sigma_x^2} = \frac{3\cdot 0 - 1.5\cdot 0}{3} = 0$$
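These closed-form values, together with the predicted minimum error $\xi_o^2 = \sigma_y^2(1 - \rho_{xy}^2) = 1\cdot(1 - 3/4) = 0.25$ and the orthogonality condition, can be checked numerically. The following Python/NumPy sketch is my own addition, assuming jointly Gaussian variables with the stated moments:

```python
import numpy as np

rng = np.random.default_rng(3)
R = np.array([[1.0, 0.5], [0.5, 1.0]])
M = rng.standard_normal((500_000, 2)) @ np.linalg.cholesky(R).T
N_, Y = M[:, 0], M[:, 1]
X = Y + N_

Y_hat = 0.5 * X  # a = 0.5, b = 0
print(np.mean((Y - Y_hat) ** 2))  # near xi_o^2 = 0.25
print(np.mean(X * (Y - Y_hat)))   # near 0, as the orthogonality principle requires
```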



Pretty simple, right?


























  • That's the book I got it from. Finally got it right. Thank you very much!
    – Mr.Sh4nnon, Feb 3 at 16:07






  • Note that you could also simply have used the direct equations for $a$ and $b$, but magically you messed up :-))
    – Fat32, Feb 3 at 16:08






  • Yeah :D I mentioned the errors in the last edit. The problem when learning advanced stuff for the finals is that you get stuck on simple math because you slept for about 4 hours. The topic itself is possibly the easiest one in the book.
    – Mr.Sh4nnon, Feb 3 at 16:13






  • Yes! Don't forget: a freshly charged average is better than a sleepy Einstein ;-)
    – Fat32, Feb 3 at 16:17








  • Another alternative formulation of the linear estimator $$\boxed{\hat{Y} = \rho_{xy}\frac{\sigma_y}{\sigma_x}(X - m_x) + m_y}$$ is to multiply numerator and denominator by $\sigma_x$ to get $$\boxed{\hat{Y} = \frac{\operatorname{cov}(X,Y)}{\operatorname{var}(X)}(X - m_x) + m_y},$$ which can save some square-rooting, or just the calculation of $\rho_{xy}$ etc., in the question asked. Note that $\operatorname{var}(X)$ is given, while $$\operatorname{cov}(X,Y) = \operatorname{cov}(Y+N, Y) = \operatorname{var}(Y) + \operatorname{cov}(N, Y).$$
    – Dilip Sarwate, Feb 4 at 0:34


































So, in your case, doesn't the relation $x = n + y$ help?



I mean, assuming your derivation for the mean-square estimator is right, then to compute $E\{xn\}$ you would look at $E\{(y+n)n\}$, and using the properties of $y$ and $n$ you would get



$$E\{xn\} = E\{(y+n)n\} = E\{yn\} + E\{n^2\} = 0.5 + 1 = 1.5$$
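The same moment can be estimated from samples. This Python/NumPy sketch is my own addition, mirroring the simulation setup from the question's MATLAB test:

```python
import numpy as np

rng = np.random.default_rng(4)
# Correlated unit-variance n, y with E{yn} = 0.5, via a Cholesky factor.
R = np.array([[1.0, 0.5], [0.5, 1.0]])
M = rng.standard_normal((500_000, 2)) @ np.linalg.cholesky(R).T
n, y = M[:, 0], M[:, 1]
x = y + n

print(np.mean(x * n))  # close to E{yn} + E{n^2} = 0.5 + 1 = 1.5
```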




























  • I'm a little bit confused. Why is $E\{n^2\} = 1$? And the second problem is that $a$ and $b$ are given for a general solution $\hat{y} = ax + b$. Reading again what I wrote, I'm no longer sure whether I have to substitute $x$ with $y+n$ or substitute $y$ with $n-x$ in the formula.
    – Mr.Sh4nnon, Feb 3 at 13:57






  • You said the variance of $n$ is $1$, and it's also zero-mean; so $\operatorname{var}\{n\} = E\{n^2\} - \mu_n^2 = E\{n^2\} = 1$.
    – Fat32, Feb 3 at 14:00








  • As to your latter confusion: according to your statement (which is a little confusing indeed), your measurement (observation) is $x$, and you want to estimate $y$ from $x$; the relation between the two is given by $n$.
    – Fat32, Feb 3 at 14:04








  • And you could also use $E\{xn\} = E\{x(x-y)\} = E\{x^2\} - E\{xy\} = 3 - 1.5 = 1.5$.
    – Fat32, Feb 3 at 14:11












2 Answers
2






active

oldest

votes








2 Answers
2






active

oldest

votes









active

oldest

votes






active

oldest

votes









1












$begingroup$

Now I wanted to show you how to get those minimum linear mean square estimator coefficients $a$ and $b$ for your given problem setup. The procedure is summarised from the book Statistical Digital Signal Processing_MonsonHayes.



Given two random variables $X$ and $Y$, we observe $X$ and want to estimate $Y$ using a linear estimator :



$$ hat{Y} = acdot X + b $$



which minimized the mean square error $$xi^2 = E{ (Y-hat{Y})^2 } $$.



The solution is:



$$boxed{ a = frac{ E{XY} - m_xm_y }{ sigma_x^2} } $$



$$boxed{ b = frac{ E{X^2} m_y - E{X Y} m_x }{ sigma_x^2} } $$



And a better simplification happens by recognizing the correlation coefficient $$rho_{xy} = frac{ E{XY} - m_xm_y }{ sigma_x sigma_y } $$



Then the optimal linear estimator of $Y$ is re-written as:



$$boxed{ hat{Y} = rho_{xy} frac{sigma_y}{sigma_x}(X-m_x) + m_y }$$



Note that the resulting mimimum mean square error is also given by :



$$ xi_o^2 = sigma_y^2(1-rho_{xy}^2) $$



And further note that the orthogonality principle, for the optimum estimator, requires that:



$$ E{Xcdot E} = E{ X (Y - hat{Y}) } = 0 $$.



Now coming to your problem,



We are given the observation $ X = Y + N $ with the following statistics:



$$ E{Y}= E{N} = E{X} = 0 ~~~,~~~ sigma_y^2 = 1, sigma_n^2 = 1, sigma_x^2 = 3$$



(you can compute $sigma_x^2 = 3$ from the givens) and further given $E{YN} = 0.5$. Now we shall compute $rho_{xy}$ which is:



$$rho_{xy} = frac{ E{XY} - m_x m_y }{ sigma_x sigma_y } = frac{ E{(Y+N)Y}}-0cdot 0 }{ sqrt{3} } = frac{ 1 + 1.5}{ sqrt{3} } = frac{ sqrt{3} }{2}$$



Then the optimal linear mse becomes:



$$ hat{Y} = frac{sqrt{3}}{2} frac{1}{ sqrt{3}}(X-0) + 0 = 0.5 X $$



From which you can also infer that $a = 0.5$ and $b=0$.



Note that you could also reach the same result by just computing $a$ and $b$ according to formulas as follows:



$$ a = frac{ E{XY} - m_xm_y }{ sigma_x^2} = frac{ 1.5 - 0 cdot 0 }{3} = 0.5$$



$$ b = frac{ E{X^2} m_y - E{X Y} m_x }{ sigma_x^2} = frac{ 3 cdot 0 - 1.5 cdot 0 }{ 3} = 0$$



pretty simple ?






share|improve this answer











$endgroup$









  • 1




    $begingroup$
    That's the book I got it from. Finally got it right. Thank you very much!
    $endgroup$
    – Mr.Sh4nnon
    Feb 3 at 16:07






  • 1




    $begingroup$
    Note that you could also use very simply the direct equation for $a$ and $b$ but magically you messed up :-))
    $endgroup$
    – Fat32
    Feb 3 at 16:08






  • 1




    $begingroup$
    Yeah :D Mentioned the errors in the last edit. The problem when learning advanced stuff for the finals is that you get stuck on simple math because you slept for about 4 hours. The topic itself is possibly the easiest one from the book.
    $endgroup$
    – Mr.Sh4nnon
    Feb 3 at 16:13






  • 1




    $begingroup$
    yes ! Don't forget: A freshly charged average is better than a sleepy Einstein ;-)
    $endgroup$
    – Fat32
    Feb 3 at 16:17








  • 1




    $begingroup$
    Another alternative formulation of the linear estimator $$boxed{ hat{Y} = rho_{xy} frac{sigma_y}{sigma_x}(X-m_x) + m_y }$$ is to multiply numerator and denominator by $sigma_x$ to get $$boxed{ hat{Y} = frac{operatorname{cov}(X,Y)}{operatorname{var}(X)}(X-m_x) + m_y }$$ which can save some square-rooting or just calculation of $rho_{xy}$ etc. In the question asked. Note that $operatorname{var}(X)$ is given while $$operatorname{cov}(X,Y)=operatorname{cov}(Y+N,Y)=operatorname{var}(Y)+operatorname{cov}(N,Y),$$
    $endgroup$
    – Dilip Sarwate
    Feb 4 at 0:34


















1












$begingroup$

Now I wanted to show you how to get those minimum linear mean square estimator coefficients $a$ and $b$ for your given problem setup. The procedure is summarised from the book Statistical Digital Signal Processing_MonsonHayes.



Given two random variables $X$ and $Y$, we observe $X$ and want to estimate $Y$ using a linear estimator :



$$ hat{Y} = acdot X + b $$



which minimized the mean square error $$xi^2 = E{ (Y-hat{Y})^2 } $$.



The solution is:



$$boxed{ a = frac{ E{XY} - m_xm_y }{ sigma_x^2} } $$



$$boxed{ b = frac{ E{X^2} m_y - E{X Y} m_x }{ sigma_x^2} } $$



And a better simplification happens by recognizing the correlation coefficient $$rho_{xy} = frac{ E{XY} - m_xm_y }{ sigma_x sigma_y } $$



Then the optimal linear estimator of $Y$ is re-written as:



$$boxed{ hat{Y} = rho_{xy} frac{sigma_y}{sigma_x}(X-m_x) + m_y }$$



Note that the resulting mimimum mean square error is also given by :



$$ xi_o^2 = sigma_y^2(1-rho_{xy}^2) $$



And further note that the orthogonality principle, for the optimum estimator, requires that:



$$ E{Xcdot E} = E{ X (Y - hat{Y}) } = 0 $$.



Now coming to your problem,



We are given the observation $ X = Y + N $ with the following statistics:



$$ E{Y}= E{N} = E{X} = 0 ~~~,~~~ sigma_y^2 = 1, sigma_n^2 = 1, sigma_x^2 = 3$$



(you can compute $sigma_x^2 = 3$ from the givens) and further given $E{YN} = 0.5$. Now we shall compute $rho_{xy}$ which is:



$$rho_{xy} = frac{ E{XY} - m_x m_y }{ sigma_x sigma_y } = frac{ E{(Y+N)Y}}-0cdot 0 }{ sqrt{3} } = frac{ 1 + 1.5}{ sqrt{3} } = frac{ sqrt{3} }{2}$$



Then the optimal linear mse becomes:



$$ hat{Y} = frac{sqrt{3}}{2} frac{1}{ sqrt{3}}(X-0) + 0 = 0.5 X $$



From which you can also infer that $a = 0.5$ and $b=0$.



Note that you could also reach the same result by just computing $a$ and $b$ according to formulas as follows:



$$ a = frac{ E{XY} - m_xm_y }{ sigma_x^2} = frac{ 1.5 - 0 cdot 0 }{3} = 0.5$$



$$ b = frac{ E{X^2} m_y - E{X Y} m_x }{ sigma_x^2} = frac{ 3 cdot 0 - 1.5 cdot 0 }{ 3} = 0$$



pretty simple ?






share|improve this answer











$endgroup$









  • 1




    $begingroup$
    That's the book I got it from. Finally got it right. Thank you very much!
    $endgroup$
    – Mr.Sh4nnon
    Feb 3 at 16:07






  • 1




    $begingroup$
    Note that you could also use very simply the direct equation for $a$ and $b$ but magically you messed up :-))
    $endgroup$
    – Fat32
    Feb 3 at 16:08






  • 1




    $begingroup$
    Yeah :D Mentioned the errors in the last edit. The problem when learning advanced stuff for the finals is that you get stuck on simple math because you slept for about 4 hours. The topic itself is possibly the easiest one from the book.
    $endgroup$
    – Mr.Sh4nnon
    Feb 3 at 16:13






  • 1




    $begingroup$
    yes ! Don't forget: A freshly charged average is better than a sleepy Einstein ;-)
    $endgroup$
    – Fat32
    Feb 3 at 16:17








  • 1




    $begingroup$
    Another alternative formulation of the linear estimator $$boxed{ hat{Y} = rho_{xy} frac{sigma_y}{sigma_x}(X-m_x) + m_y }$$ is to multiply numerator and denominator by $sigma_x$ to get $$boxed{ hat{Y} = frac{operatorname{cov}(X,Y)}{operatorname{var}(X)}(X-m_x) + m_y }$$ which can save some square-rooting or just calculation of $rho_{xy}$ etc. In the question asked. Note that $operatorname{var}(X)$ is given while $$operatorname{cov}(X,Y)=operatorname{cov}(Y+N,Y)=operatorname{var}(Y)+operatorname{cov}(N,Y),$$
    $endgroup$
    – Dilip Sarwate
    Feb 4 at 0:34
















1












1








1





$begingroup$

Now I wanted to show you how to get those minimum linear mean square estimator coefficients $a$ and $b$ for your given problem setup. The procedure is summarised from the book Statistical Digital Signal Processing_MonsonHayes.



Given two random variables $X$ and $Y$, we observe $X$ and want to estimate $Y$ using a linear estimator :



$$ hat{Y} = acdot X + b $$



which minimized the mean square error $$xi^2 = E{ (Y-hat{Y})^2 } $$.



The solution is:



$$boxed{ a = frac{ E{XY} - m_xm_y }{ sigma_x^2} } $$



$$boxed{ b = frac{ E{X^2} m_y - E{X Y} m_x }{ sigma_x^2} } $$



And a better simplification happens by recognizing the correlation coefficient $$rho_{xy} = frac{ E{XY} - m_xm_y }{ sigma_x sigma_y } $$



Then the optimal linear estimator of $Y$ is re-written as:



$$boxed{ hat{Y} = rho_{xy} frac{sigma_y}{sigma_x}(X-m_x) + m_y }$$



Note that the resulting mimimum mean square error is also given by :



$$ xi_o^2 = sigma_y^2(1-rho_{xy}^2) $$



And further note that the orthogonality principle, for the optimum estimator, requires that:



$$ E{Xcdot E} = E{ X (Y - hat{Y}) } = 0 $$.



Now coming to your problem,



We are given the observation $ X = Y + N $ with the following statistics:



$$ E{Y}= E{N} = E{X} = 0 ~~~,~~~ sigma_y^2 = 1, sigma_n^2 = 1, sigma_x^2 = 3$$



(you can compute $sigma_x^2 = 3$ from the givens) and further given $E{YN} = 0.5$. Now we shall compute $rho_{xy}$ which is:



$$rho_{xy} = frac{ E{XY} - m_x m_y }{ sigma_x sigma_y } = frac{ E{(Y+N)Y}}-0cdot 0 }{ sqrt{3} } = frac{ 1 + 1.5}{ sqrt{3} } = frac{ sqrt{3} }{2}$$



Then the optimal linear mse becomes:



$$ hat{Y} = frac{sqrt{3}}{2} frac{1}{ sqrt{3}}(X-0) + 0 = 0.5 X $$



From which you can also infer that $a = 0.5$ and $b=0$.



Note that you could also reach the same result by just computing $a$ and $b$ according to formulas as follows:



$$ a = frac{ E{XY} - m_xm_y }{ sigma_x^2} = frac{ 1.5 - 0 cdot 0 }{3} = 0.5$$



$$ b = frac{ E{X^2} m_y - E{X Y} m_x }{ sigma_x^2} = frac{ 3 cdot 0 - 1.5 cdot 0 }{ 3} = 0$$



pretty simple ?






share|improve this answer











$endgroup$



Now I wanted to show you how to get those minimum linear mean square estimator coefficients $a$ and $b$ for your given problem setup. The procedure is summarised from the book Statistical Digital Signal Processing_MonsonHayes.



Given two random variables $X$ and $Y$, we observe $X$ and want to estimate $Y$ using a linear estimator :



$$ \hat{Y} = a\cdot X + b $$



which minimizes the mean square error $$\xi^2 = E\{ (Y-\hat{Y})^2 \}.$$



The solution is:



$$\boxed{ a = \frac{ E\{XY\} - m_x m_y }{ \sigma_x^2 } } $$



$$\boxed{ b = \frac{ E\{X^2\}\, m_y - E\{X Y\}\, m_x }{ \sigma_x^2 } } $$



A cleaner form follows by recognizing the correlation coefficient $$\rho_{xy} = \frac{ E\{XY\} - m_x m_y }{ \sigma_x \sigma_y }. $$



Then the optimal linear estimator of $Y$ is re-written as:



$$\boxed{ \hat{Y} = \rho_{xy}\, \frac{\sigma_y}{\sigma_x}(X-m_x) + m_y }$$



Note that the resulting minimum mean square error is also given by:



$$ \xi_o^2 = \sigma_y^2(1-\rho_{xy}^2) $$



And further note that the orthogonality principle, for the optimum estimator, requires that:



$$ E\{X\cdot E\} = E\{ X\,(Y - \hat{Y}) \} = 0, $$ where $E = Y - \hat{Y}$ denotes the estimation error.
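For completeness, the two boxed coefficients follow directly from this orthogonality condition together with the zero-mean-error condition $E\{Y-\hat{Y}\} = 0$ (a short derivation in the same notation):

$$ E\{Y - aX - b\} = 0 \implies m_y = a\,m_x + b $$

$$ E\{X(Y - aX - b)\} = 0 \implies E\{XY\} = a\,E\{X^2\} + b\,m_x $$

Substituting $b = m_y - a\,m_x$ into the second equation gives $E\{XY\} - m_x m_y = a\,(E\{X^2\} - m_x^2) = a\,\sigma_x^2$, which is exactly the boxed expression for $a$; back-substituting this $a$ yields the boxed $b$.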



Now coming to your problem,



We are given the observation $ X = Y + N $ with the following statistics:



$$ E\{Y\} = E\{N\} = E\{X\} = 0, \qquad \sigma_y^2 = 1,\quad \sigma_n^2 = 1,\quad \sigma_x^2 = 3 $$



(you can compute $\sigma_x^2 = \sigma_y^2 + \sigma_n^2 + 2E\{YN\} = 1 + 1 + 1 = 3$ from the givens), and we are further given $E\{YN\} = 0.5$. Now we compute $\rho_{xy}$:



$$\rho_{xy} = \frac{ E\{XY\} - m_x m_y }{ \sigma_x \sigma_y } = \frac{ E\{(Y+N)Y\} - 0\cdot 0 }{ \sqrt{3}\cdot 1 } = \frac{ 1 + 0.5 }{ \sqrt{3} } = \frac{ \sqrt{3} }{2}$$



Then the optimal linear MMSE estimate becomes:



$$ \hat{Y} = \frac{\sqrt{3}}{2}\,\frac{1}{\sqrt{3}}\,(X-0) + 0 = 0.5\,X $$



From which you can also infer that $a = 0.5$ and $b=0$.



Note that you could also reach the same result by directly computing $a$ and $b$ from the formulas above:



$$ a = \frac{ E\{XY\} - m_x m_y }{ \sigma_x^2 } = \frac{ 1.5 - 0 \cdot 0 }{3} = 0.5$$



$$ b = \frac{ E\{X^2\}\, m_y - E\{X Y\}\, m_x }{ \sigma_x^2 } = \frac{ 3 \cdot 0 - 1.5 \cdot 0 }{ 3} = 0$$



Pretty simple, right?
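You can also confirm this numerically, the same way you tried in your EDIT 2. Below is a minimal Python/NumPy sketch of that Monte Carlo test (same Cholesky construction as your MATLAB code; the use of Python here is my choice, not part of the original question):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200_000  # number of samples

# Draw a zero-mean, unit-variance, jointly Gaussian pair (n, y) with
# E{yn} = 0.5, using the same Cholesky construction as in the question.
R = np.array([[1.0, 0.5],
              [0.5, 1.0]])
M = rng.standard_normal((N, 2)) @ np.linalg.cholesky(R).T
n, y = M[:, 0], M[:, 1]

x = y + n          # observation
y_hat = 0.5 * x    # the optimal linear estimator: a = 0.5, b = 0

print(np.var(x))                   # ~3    (= sigma_x^2)
print(np.mean(x * (y - y_hat)))    # ~0    (error orthogonal to x)
print(np.mean((y - y_hat)**2))     # ~0.25 (= sigma_y^2 (1 - rho^2))
```

With $\hat{y} = 0.5x$ the residual is uncorrelated with $x$ and its mean-square value approaches $\sigma_y^2(1-\rho_{xy}^2) = 1 - 3/4 = 0.25$; the $\hat{y} = -0.5x + 0.5$ tried in EDIT 2 fails both checks.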







edited Feb 3 at 16:14

answered Feb 3 at 16:05

Fat32
  • That's the book I got it from. Finally got it right. Thank you very much! – Mr.Sh4nnon, Feb 3 at 16:07

  • Note that you could also have simply used the direct equations for $a$ and $b$, but magically you messed up :-)) – Fat32, Feb 3 at 16:08

  • Yeah :D I mentioned the errors in the last edit. The problem when learning advanced material for the finals is that you get stuck on simple math because you slept about 4 hours. The topic itself is possibly the easiest one in the book. – Mr.Sh4nnon, Feb 3 at 16:13

  • Yes! Don't forget: a freshly charged average is better than a sleepy Einstein ;-) – Fat32, Feb 3 at 16:17

  • Another alternative formulation of the linear estimator $$\boxed{ \hat{Y} = \rho_{xy} \frac{\sigma_y}{\sigma_x}(X-m_x) + m_y }$$ is to multiply numerator and denominator by $\sigma_x$ to get $$\boxed{ \hat{Y} = \frac{\operatorname{cov}(X,Y)}{\operatorname{var}(X)}(X-m_x) + m_y },$$ which can save some square-rooting, or the calculation of $\rho_{xy}$, in the question asked. Note that $\operatorname{var}(X)$ is given, while $$\operatorname{cov}(X,Y)=\operatorname{cov}(Y+N,Y)=\operatorname{var}(Y)+\operatorname{cov}(N,Y).$$ – Dilip Sarwate, Feb 4 at 0:34



































So in your case, doesn't the relation $x = n + y$ help?

I mean, assuming your derivation of the mean-square estimator is right, then to compute $E\{xn\}$ you would look at $E\{(y+n)n\}$, and using the given properties of $y$ and $n$ you would get

$$E\{xn\} = E\{(y+n)n\} = E\{yn\} + E\{n^2\} = 0.5 + 1 = 1.5.$$
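If you want to double-check this expectation numerically, here is a small Python/NumPy sketch (assuming jointly Gaussian $y$ and $n$ generated with the same Cholesky trick as in the question's MATLAB code):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 200_000  # number of samples

# Zero-mean, unit-variance pair (n, y) with E{yn} = 0.5.
R = np.array([[1.0, 0.5],
              [0.5, 1.0]])
M = rng.standard_normal((N, 2)) @ np.linalg.cholesky(R).T
n, y = M[:, 0], M[:, 1]
x = y + n

print(np.mean(x * n))   # ~1.5, matching E{yn} + E{n^2} = 0.5 + 1
```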













  • I'm a little bit confused. Why is $E\{n^2\} = 1$? And the second problem is that $a$ and $b$ are given for a general solution $y = ax + b$. Reading again what I wrote, I'm no longer sure whether I have to substitute $x$ with $y+n$, or $y$ with $n-x$, in the formula. – Mr.Sh4nnon, Feb 3 at 13:57

  • You said the variance of $n$ is $1$, and it's also zero-mean; so $\operatorname{Var}\{n\} = E\{n^2\} - \mu_n^2 = E\{n^2\} = 1$. – Fat32, Feb 3 at 14:00

  • To your latter confusion: according to your statement (which is a little confusing indeed), your measurement (observation) is $x$, and you want to estimate $y$ from $x$; the relation between the two is given by $n$. – Fat32, Feb 3 at 14:04

  • And you could also use $E\{xn\} = E\{x(x-y)\} = E\{x^2\} - E\{xy\} = 3 - 1.5 = 1.5$. – Fat32, Feb 3 at 14:11
















answered Feb 3 at 13:44

Fat32
