What is the null hypothesis for the individual p-values in multiple regression?












I have a linear regression model for a dependent variable $Y$ based on two independent variables, $X_1$ and $X_2$, so the regression equation has the general form

$Y = A + B_1 \cdot X_1 + B_2 \cdot X_2 + \epsilon$,

where $A$ is the intercept, $\epsilon$ is the error term, and $B_1$ and $B_2$ are the respective coefficients of $X_1$ and $X_2$. I perform a multiple regression with software (statsmodels in Python) and get coefficient estimates $A = a$, $B_1 = b_1$, $B_2 = b_2$. The software also gives me a $p$-value for each coefficient: $p_a$, $p_1$, and $p_2$. My question is: what is the null hypothesis for those individual $p$-values? For example, I know that the null hypothesis used to obtain $p_1$ entails a zero coefficient for $B_1$, but what about the other parameters? In other words, if the null hypothesis is $Y = A + 0 \cdot X_1 + B_2 \cdot X_2 + \epsilon$, what are the values of $A$ and $B_2$ under the null hypothesis from which the $p$-value for $B_1$ is derived?
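For concreteness, here is a minimal sketch (not from the original post) of the statsmodels workflow being described; the variable names and simulated data are placeholders, not the actual data:

    import numpy as np
    import statsmodels.api as sm

    # Placeholder data standing in for the real X1, X2 and Y.
    rng = np.random.default_rng(0)
    x1 = rng.normal(size=100)
    x2 = rng.normal(size=100)
    y = 2.0 + 0.5 * x1 - 1.0 * x2 + rng.normal(size=100)

    # Design matrix with an intercept column (A), then X1 and X2.
    X = sm.add_constant(np.column_stack([x1, x2]))
    fit = sm.OLS(y, X).fit()

    print(fit.params)   # a, b_1, b_2
    print(fit.pvalues)  # p_a, p_1, p_2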










Tags: regression, p-value






edited Dec 31 '18 at 19:47 by Michael M






asked Dec 30 '18 at 19:24 by tmldwn




  • Your model is missing an error term.
    – Andreas Dzemski
    Dec 30 '18 at 19:48














3 Answers

The null hypothesis is
$$
H_0: B_1 = 0 \: \text{and} \: B_2 \in \mathbb{R} \: \text{and} \: A \in \mathbb{R},
$$

which basically means that the null hypothesis does not restrict $B_2$ and $A$.
The alternative hypothesis is
$$
H_1: B_1 \neq 0 \: \text{and} \: B_2 \in \mathbb{R} \: \text{and} \: A \in \mathbb{R}.
$$

In a way, the null hypothesis in the multiple regression model is a composite hypothesis. It is "fortunate" that we can construct a pivotal test statistic that does not depend on the true values of $B_2$ and $A$, so that we do not suffer a penalty from testing a composite null hypothesis.

In other words, there are a lot of different distributions of $(Y, X_1, X_2)$ that are compatible with the null hypothesis $H_0$. However, all of these distributions lead to the same behavior of the test statistic that is used to test $H_0$.

In my answer, I have not addressed the distribution of $\epsilon$ and implicitly assumed that it is an independent, centered normal random variable. If we only assume something like
$$
E[\epsilon \mid X_1, X_2] = 0
$$

then a similar conclusion holds asymptotically (under regularity assumptions).
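Not part of the original answer, but a small simulation sketch of the pivotality point: under the composite null $B_1 = 0$, the $t$-test for $B_1$ rejects at roughly the nominal rate no matter which values of $A$ and $B_2$ we pick. The sample size, replication count, and parameter values below are arbitrary illustrations.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)

    def simulate_p1(a, b2, n=100, reps=2000):
        """Simulated p-values for B1 when the true B1 is 0 (i.e. under H0)."""
        pvals = np.empty(reps)
        for r in range(reps):
            x1 = rng.normal(size=n)
            x2 = rng.normal(size=n)
            y = a + 0.0 * x1 + b2 * x2 + rng.normal(size=n)  # B1 = 0 under H0
            X = sm.add_constant(np.column_stack([x1, x2]))
            pvals[r] = sm.OLS(y, X).fit().pvalues[1]  # p-value for the X1 coefficient
        return pvals

    # Two very different points inside the composite null behave the same way.
    for a, b2 in [(0.0, 0.0), (10.0, -3.0)]:
        p = simulate_p1(a, b2)
        print(f"A={a}, B2={b2}: rejection rate at the 5% level = {(p < 0.05).mean():.3f}")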






answered Dec 30 '18 at 19:47 by Andreas Dzemski (edited Dec 30 '18 at 21:26)

  • But as I understand it, doesn't the null hypothesis have to be a probability distribution? If I have specific values for the coefficients, I can generate a probability distribution by adding noise (epsilon) to the regression equation. But if I don't have specific values for coefficients, how would I generate the null probability distribution?
    – tmldwn
    Dec 30 '18 at 20:35












  • A composite null hypothesis is a whole set of possible probability measures.
    – Andreas Dzemski
    Dec 30 '18 at 20:52










  • I have edited my answer to emphasize this point.
    – Andreas Dzemski
    Dec 30 '18 at 21:28






  • @tmldwn: Here, the marginal distribution of the t-statistic does indeed not depend on where we are in the null. If you find this hard to understand then I suggest you go carefully through the derivation of the distribution of the t-statistic. Note that the t-statistic depends on the LS estimator. In a way this automatically adjusts the test statistic correctly for the "true" hypothesis in the null space (we don't have to take a stand on what A, B2 are because we don't need them to compute the test statistic).
    – Andreas Dzemski
    Dec 31 '18 at 11:02






  • This answer is completely wrong. As explained in this document, there is an ANOVA for the whole regression, but a t-test for each coefficient: reliawiki.org/index.php/…
    – Josh
    Dec 31 '18 at 15:47





















You can make the same assumptions for the other variables as for $X_1$. The ANOVA table of the regression gives specific information about the significance of each variable, as well as the overall significance. As far as regression analysis is concerned, accepting the null hypothesis implies that the coefficient of the variable is zero, at a given level of significance.

If you want a more intuitive view of the issue, you can read more about hypothesis testing.
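Not part of the original answer, but a short sketch of how the per-variable ANOVA table and the coefficient tests mentioned above can be printed with statsmodels; the data frame and column names are placeholders.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf
    from statsmodels.stats.anova import anova_lm

    # Placeholder data; the column names Y, X1, X2 are illustrative only.
    rng = np.random.default_rng(1)
    df = pd.DataFrame({"X1": rng.normal(size=200), "X2": rng.normal(size=200)})
    df["Y"] = 1.0 + 0.5 * df["X1"] - 0.3 * df["X2"] + rng.normal(size=200)

    fit = smf.ols("Y ~ X1 + X2", data=df).fit()
    print(anova_lm(fit, typ=2))  # per-variable F-tests (the ANOVA table)
    print(fit.summary())         # t-tests and p-values for each coefficient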






answered Dec 30 '18 at 19:52 by Logicseeker

The $p$-values are the result of a series of $t$-tests. The null hypothesis is that $B_j = 0$, while the alternative hypothesis (again, for each coefficient) is $B_j \ne 0$.

(see here for more details: http://reliawiki.org/index.php/Multiple_Linear_Regression_Analysis#Test_on_Individual_Regression_Coefficients_.28t__Test.29)
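Not part of the original answer, but a rough sketch of the $t$-test behind each reported $p$-value: the statistic is the estimate divided by its standard error, and it is compared to a $t$ distribution with $n - k$ residual degrees of freedom. The numbers below are placeholders.

    from scipy import stats

    # Placeholder values: estimate, standard error, sample size, and number of
    # estimated parameters (intercept plus two slopes).
    b1, se_b1 = 0.42, 0.17
    n, k = 100, 3

    t_stat = (b1 - 0.0) / se_b1                      # tests H0: B_1 = 0
    p_value = 2 * stats.t.sf(abs(t_stat), n - k)     # two-sided p-value
    print(t_stat, p_value)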






answered Dec 31 '18 at 15:49 by Josh (edited Dec 31 '18 at 17:07 by StatsStudent)
