I refer here to a simple linear regression whose true representation is given by the equation $y_i = x_i'\beta + u_i$, where, as usual, $x_i$ is a $K\times1$ vector of explanatory variables, $\beta$ is a $K\times1$ vector of parameters, and $u_i$ is the error term of the $i$th observation; by construction of the true representation, $u_i$ is considered to be independent of $x_i$.

I have trouble understanding why the conditional expectation $E[u_i \mid x_i] = 0$ is considered stronger (or more natural) than the unconditional expectation $E[u_i] = 0$. Almost every textbook naturally uses the conditional version instead of the unconditional one. The reason for my difficulties is the following argument:

Firstly, it is clear to me that by the law of iterated expectations $E\big[E[u_i \mid x_i]\big] = E[u_i] = 0$. Hence the conditional expectation implies the unconditional one.

But don't we expect the error term $u_i$ to be independent of $x_i$ by construction? If that is the case, would not $E[u_i] = 0$ imply $E[u_i \mid x_i] = 0$? Or, in other words, if we assume the error term to be independent of the explanatory variables, does the distinction between the unconditional and the conditional expectation really matter here?
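Written out, the two directions I have in mind are the following (a sketch; the second line is exactly the step in question):

$$E[u_i] = E\big[\,E[u_i \mid x_i]\,\big] = 0 \qquad \text{(law of iterated expectations; always holds)}$$

$$E[u_i \mid x_i] = E[u_i] = 0 \qquad \text{(holds if $u_i$ and $x_i$ are independent)}$$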
I think there is a similar argument for unconditional vs. conditional homoskedasticity, where the conditional variance $Var(u_i \mid x_i) = \sigma^2$ is preferred to the unconditional variance $Var(u_i) = \sigma^2$. Their difference should not matter if the error term is independent of the explanatory variable, should it?

Given a linear regression, I cannot find an example, or perhaps a graphical representation, or even a good story, of how an error term can be unconditionally mean-zero but conditionally biased. If I have a theoretical scatter plot of an error term with mean zero, how am I supposed to slice out the $X$-dependent part and say, "look here, the error is indeed biased", when by definition the error term exists only because of the regression setup in the first place?

Maybe I have difficulties understanding the conditional expectation fully. What does this conditional expectation really mean, and how does it improve my understanding of the underlying regression in contrast to the unconditional one? If $X$ is given, or in other words, if I have knowledge of $X$, how can this change my expectation of $u_i$ when the two are considered independent?

Sorry if this seems confusing and probably a stupid question - I have been pondering these concepts for a while and cannot find an illustrative answer that I am happy with.
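For concreteness, here is a minimal simulation sketch of exactly such an error term (a toy construction of my own, in which $u_i$ is deliberately *not* independent of $x_i$): with $x_i \sim N(0,1)$ and $u_i = x_i^2 - 1$, the error is unconditionally mean-zero, since $E[u_i] = E[x_i^2] - 1 = 0$, yet conditionally biased, since $E[u_i \mid x_i] = x_i^2 - 1 \neq 0$. Binning by $x$ "slices out" the $X$-dependent part:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Toy construction (not independent!): x ~ N(0,1), u = x^2 - 1.
# Unconditionally, E[u] = E[x^2] - 1 = 0.
# Conditionally,  E[u | x] = x^2 - 1, nonzero for almost every x.
x = rng.standard_normal(n)
u = x**2 - 1

print(f"unconditional mean of u: {u.mean():+.4f}")  # close to 0

# "Slice out the X-dependent part": condition on x by binning.
edges = np.linspace(-3.0, 3.0, 7)
for lo, hi in zip(edges[:-1], edges[1:]):
    in_bin = (x >= lo) & (x < hi)
    print(f"E[u | {lo:+.1f} <= x < {hi:+.1f}]  ~  {u[in_bin].mean():+.4f}")
```

The unconditional mean comes out near zero, while the binned conditional means are clearly negative near $x = 0$ and strongly positive in the outer bins.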
Perhaps we should add another condition here: the type of explanatory variable in the model, deterministic or stochastic?

In a model with a deterministic, or non-stochastic, explanatory variable, the assumptions $E(u) = 0$ and $Var(u) = \sigma^2$ are enough. Greene (2011): "The assumption of nonstochastic regressors at this point would be a mathematical convenience. With it, we could use the results of elementary statistics to obtain our results by treating the vector $x_i$ simply as a known constant in the probability distribution of $y_i$." Additionally, the unconditional expectation suffices here because, when the error term and the explanatory variable are independent, $E(u \mid x) = E(u)$.

In a model with a stochastic explanatory variable, by contrast, the conditional moments $E(u \mid x)$ and $Var(u \mid x)$ are needed. If in the stochastic-regressor case we assume that the regressors and the error term are independently distributed, the OLS estimators are still unbiased, but they are no longer efficient (Gujarati and Porter, 2008); see the small Monte Carlo sketch at the end of this answer.

"Values taken by the regressor $X$ may be considered fixed in repeated samples (the case of fixed regressor) or they may be sampled along with the dependent variable $Y$ (the case of stochastic regressor). In the latter case, it is assumed that the $X$ variable(s) and the error term are independent, that is, $cov(X_i, u_i) = 0$." (Gujarati and Porter, 2008)

"Realistically, we have to allow the data on $x_i$ to be random the same as $y_i$, so an alternative formulation is to assume that $x_i$ is a random vector and our formal assumption concerns the nature of the random process that produces $x_i$. If $x_i$ is taken to be a random vector, then Assumptions 1 through 4 (the Classical Linear Regression Assumptions) become a statement about the joint distribution of $y_i$ and $x_i$." (Greene, 2011)

Gujarati, D., Porter, D., 2008. Basic Econometrics. McGraw-Hill Education.
Greene, W., 2011. Econometric Analysis, 7th ed. Prentice Hall.

If anyone knows how to use the TeX codes, please let me know how.
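A small Monte Carlo sketch of the stochastic-regressor case (my own toy illustration; the data-generating process $y_i = 1 + 2x_i + u_i$ is chosen arbitrarily): the regressors are redrawn in every replication along with $y$, the errors are drawn independently of them, and the OLS slope nevertheless averages out to the true value, in line with the unbiasedness claim quoted above.

```python
import numpy as np

rng = np.random.default_rng(1)
beta0, beta1 = 1.0, 2.0     # true parameters of the toy DGP
n, reps = 200, 5_000

slopes = np.empty(reps)
for r in range(reps):
    # Stochastic regressors: x is redrawn along with y in every sample,
    # and the error is drawn independently of x.
    x = rng.standard_normal(n)
    u = rng.standard_normal(n)
    y = beta0 + beta1 * x + u
    # OLS slope estimate: sample cov(x, y) / sample var(x)
    slopes[r] = np.cov(x, y)[0, 1] / x.var(ddof=1)

# Unbiasedness: the average estimate is close to the true slope of 2.
print(f"mean OLS slope over {reps} replications: {slopes.mean():.4f}")
```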
You've correctly noted that the condition $E[u \mid x] = 0$ - called "mean independence" of $u$ and $x$ - is weaker than full-on independence of $u$ and $x$.
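A toy sketch of that gap (my own construction, assuming $x \sim N(0,1)$ and $\varepsilon \sim N(0,1)$ drawn independently of $x$): let $u = x\varepsilon$. Then $E[u \mid x] = x\,E[\varepsilon] = 0$, so $u$ is mean-independent of $x$ (and a fortiori $E[u] = 0$), yet $Var(u \mid x) = x^2$ depends on $x$, so $u$ and $x$ are far from independent - which is also why conditional homoskedasticity is a separate, additional assumption.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1_000_000

# Toy construction: u = x * eps, with eps drawn independently of x.
# Mean independence holds:  E[u | x] = x * E[eps] = 0.
# Full independence fails:  Var(u | x) = x^2 varies with x.
x = rng.standard_normal(n)
eps = rng.standard_normal(n)
u = x * eps

edges = np.linspace(-3.0, 3.0, 7)
for lo, hi in zip(edges[:-1], edges[1:]):
    in_bin = (x >= lo) & (x < hi)
    print(f"{lo:+.1f} <= x < {hi:+.1f}:  E[u|x] ~ {u[in_bin].mean():+.4f},  "
          f"Var(u|x) ~ {u[in_bin].var():.4f}")
```

The binned conditional means all sit near zero, while the binned conditional variances grow with $|x|$, exactly the pattern mean independence permits but full independence rules out.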