What the hell's wrong with math tex codes in this article? All I see are red lines!!! -- 138.25.80.124 01:03, 8 August 2006 (UTC)
This article was nominated for deletion on 19 February 2006. The result of the discussion was keep.
It's hard to know where to begin saying what's wrong with this truly horrible article...
There is no context-setting at all above. Nothing to tell the reader what subject this is on, unless the reader already knows.
The Gauss-Markov theorem states that projecting orthogonally onto a certain subspace is in a certain sense optimal if certain assumptions hold. That is explained in the article titled Gauss-Markov theorem. What, then, is different about the purpose of this article?
"an explicit form for the function which lies the most closely to the dependent variable ." What does that mean?? This is one of the vaguest bits of writing I've seen in a while.
The above is completely nonsensical. It purports to define some particular space F, but it does not. It does not say what ω, , Γ, or S is, but those need to be defined before being referred to in this way. And what possible relevance to the topic does this stipulation of F have?
What is η?? It has not been defined. A subspace of F? F has also not been defined. What is ? Not defined. Conventionally in this topic would be column vectors in Rk and the response variable Y would also be in Rk. But that jars with the idea that are in some space F of random variables, stated above.
How do we know that? And what does it mean? And what is X? Conventionally X would be a "design matrix", and in most accounts, X is not random, so it is trivially independent of any random variable. (And it wouldn't hurt to spell "indepedent" correctly.)
What does that have to do with independence of X and anything else, and what does this weird notation η(X;θ) mean? I have a PhD in statistics, and I can't make any sense of this.
I know a context within which that would make sense, but I don't see its relevance here. The sort of projection in Hilbert space usually contemplated when this sort of thing is asserted is really not relevant to this topic.
This is just idiotic nonsense.
User:Deimos, for $50 per hour I'll sit down with you and parse the above if you're able to do it. I will require your patience. You're writing crap.
Some of the above might make some sense, but it is very vaguely written, to say the least. One concrete thing I can suggest: Please don't write
when you mean
Sigh..... Let's see .... I could ask why we should choose anything here.
OK, looking through this carefully has convinced me that this article is 100% worthless. Michael Hardy 23:38, 5 February 2006 (UTC)
After the last round of edits, it is still completely unclear what is to be proved in this article, and highly implausible that it proves anything. Michael Hardy 00:51, 9 February 2006 (UTC)
OK, I'm back for the moment. The article contains this sentence:
What does that mean? Does it mean that the least-squares estimator actually is that particular matrix product? If so, the proof should not involve probability, but only linear algebra. Does it mean that the least-squares estimator is the one that satisfies some list of criteria? If so which criteria? The Gauss-Markov assumptions? If it's the Gauss-Markov assumptions, then this would be a proof of the Gauss-Markov theorem. But I certainly don't think that's what it is. In the present state of the article, the reader can only guess what the writer intended to prove! Michael Hardy 03:19, 9 February 2006 (UTC)
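On the first reading of the sentence, the claim really is pure linear algebra: the estimator minimizing the sum of squares is the matrix product from the normal equations, with no probability involved. A minimal numpy sketch of that reading (the data here are made up purely for illustration):

```python
import numpy as np

# Hypothetical toy data: n observations, k regressors (purely illustrative).
rng = np.random.default_rng(0)
n, k = 50, 3
X = rng.normal(size=(n, k))   # design matrix
Y = rng.normal(size=n)        # response vector

# Closed-form least-squares estimator from the normal equations:
#   beta_hat = (X^T X)^{-1} X^T Y
beta_hat = np.linalg.solve(X.T @ X, X.T @ Y)

# Check that beta_hat minimizes ||Y - X b||^2: any perturbation of it
# only increases the sum of squares. No probabilistic assumption is used.
def sse(b):
    return np.sum((Y - X @ b) ** 2)

for _ in range(100):
    b_other = beta_hat + rng.normal(scale=0.1, size=k)
    assert sse(beta_hat) <= sse(b_other)
```

The probabilistic content (unbiasedness, minimum variance among linear unbiased estimators) only enters once the Gauss-Markov assumptions are imposed, which is exactly the ambiguity being pointed out.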
I'm going to dissect this slowly. The following is just the first step. The article says:
In simpler terms, what this says is the following:
The first version is badly written because
OK, this is just one small point; the article has many similar problems, not the least of which is that its purpose is still not clear. I'll be back. Michael Hardy 00:25, 20 February 2006 (UTC)
OK, this makes sense: I'll correct the article. Except for the "having expected value" part. The way I present it, you can always write . What the Gauss-Markov assumptions add is that there exists an optimal parameter for which has an expectation of 0 and that its components are independent. The advantage is that you do not have to suppose that the 's are constants. In the case of randomized designs, this is important. Deimos 28 12:10, 20 February 2006 (UTC)
I have now added to the introduction that I wish to give a motivation behind the criterion optimized in least squares (seeing a regression as a projection on a linear space of random variables) and derive the expression of this estimator. One can differentiate the sum of squares and obtain the same result, but I think that the geometrical way of seeing the problem makes it easier to understand why we use the sum of squares (because of Pythagoras' theorem, i.e. , where ). To see the regression problem in this way requires the Gauss-Markov hypothesis (otherwise we cannot show that E(.|X) is an orthogonal projection). Regards, Deimos 28 08:56, 9 February 2006 (UTC)
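For what it's worth, the finite-dimensional version of that geometric picture is easy to verify numerically: the fitted values are the orthogonal projection of Y onto the column space of X, the residuals are orthogonal to that subspace, and the Pythagorean decomposition of the sum of squares follows. A sketch (the data are invented for illustration; this is the Euclidean analogue, not the conditional-expectation projection discussed above):

```python
import numpy as np

# Illustrative data only.
rng = np.random.default_rng(1)
n, k = 40, 2
X = rng.normal(size=(n, k))
Y = rng.normal(size=n)

# Projection matrix onto the column space of X:  P = X (X^T X)^{-1} X^T
P = X @ np.linalg.solve(X.T @ X, X.T)
Y_hat = P @ Y        # regression fit = orthogonal projection of Y
resid = Y - Y_hat

# Residuals are orthogonal to every column of X ...
assert np.allclose(X.T @ resid, 0)
# ... which gives the Pythagorean decomposition ||Y||^2 = ||Y_hat||^2 + ||resid||^2.
assert np.isclose(Y @ Y, Y_hat @ Y_hat + resid @ resid)
```

This is only the deterministic geometry; showing that E(.|X) itself is an orthogonal projection in a space of random variables requires the second-moment assumptions, as stated above.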