This article is rated Start-class on Wikipedia's content assessment scale. It is of interest to the following WikiProjects:

This article is substantially duplicated by a piece in an external publication. Please do not flag this article as a copyright violation of the following source:
I'm moving some questions from the article page to the talk page.
These can be addressed in the article text at some point. Wile E. Heresiarch 15:15, 13 Oct 2004 (UTC)
The clever graphic showing polynomial regression is inappropriate, since polynomial regression is a special case of linear regression, not nonlinear regression. A model is linear if the unknowns are a linear function of the knowns. In this case the Xs and Y are the knowns and the betas are the unknowns, so having powers of X in the predictors makes no difference. It is still linear. Blaise 10:19, 26 March 2006 (UTC)
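To make Blaise's point concrete: fitting a quadratic is an ordinary linear least-squares problem, because the betas enter the model linearly even though x enters nonlinearly. A minimal sketch with made-up data (the coefficient values are purely illustrative):

```python
import numpy as np

# Synthetic data from a quadratic trend plus noise (illustrative values).
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 2.0 + 1.5 * x - 0.3 * x**2 + rng.normal(scale=0.5, size=x.size)

# Design matrix with columns 1, x, x^2: the model is nonlinear in x
# but linear in the unknown coefficients, so ordinary least squares applies.
X = np.column_stack([np.ones_like(x), x, x**2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
# beta should be close to the true coefficients [2.0, 1.5, -0.3]
```

No iterative optimizer is needed anywhere; that is exactly what makes polynomial regression "linear" in the statistical sense.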
The use of Monte Carlo deserves consideration. However, as it seems to me, the material just added has several problems. In essence, this is kind of jumping the gun on full development of the article.
1. There is not yet, and needs to be, a section on inference for nonlinear regression. Statistical inference is what distinguishes nonlinear regression from curve fitting. Most procedures from linear regression have analogues in nonlinear regression. The mention of Monte Carlo would belong in that section.
2. I think you are talking about the parametric bootstrap. Why not mention the nonparametric bootstrap (ordinary resampling)?
3. You have put in an unusual procedure when as yet there is no mention of the standard errors available in standard nonlinear regression software.
4. The title is not correct. The material suggested is about evaluating error, not about parameter estimation. I don't think Monte Carlo is used for parameter estimation.
5. The use of Monte Carlo simulation as described could be considered for many statistical models. The material is not specific to nonlinear regression.
6. The use of Monte Carlo to evaluate sampling error is an important procedure in practice. I would say we do need to review other articles to see that this is covered. For example, since such materials may be used by government statisticians, it is a service to the public to provide some material.
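For reference, the parametric bootstrap mentioned in point 2 can be sketched in a few lines: fit once, then repeatedly resimulate data from the fitted model and refit. The model, data, and parameter values below are purely illustrative:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical exponential-decay model; names and values are illustrative.
def model(x, a, b):
    return a * np.exp(-b * x)

rng = np.random.default_rng(1)
x = np.linspace(0, 4, 40)
y = model(x, 3.0, 1.2) + rng.normal(scale=0.1, size=x.size)

# Fit once to the observed data.
popt, _ = curve_fit(model, x, y, p0=[1.0, 1.0])

# Parametric bootstrap: simulate new data from the fitted model and refit,
# giving an empirical sampling distribution for the parameter estimates.
sigma = np.std(y - model(x, *popt), ddof=2)
boot = []
for _ in range(200):
    y_sim = model(x, *popt) + rng.normal(scale=sigma, size=x.size)
    p_sim, _ = curve_fit(model, x, y_sim, p0=popt)
    boot.append(p_sim)
se = np.std(boot, axis=0)  # bootstrap standard errors for a and b
```

The nonparametric version would instead resample residuals (or cases) from the observed data rather than simulate from the fitted model.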
If these points are not addressed, I will most likely take a shot at them at some point. Dfarrar 21:34, 20 March 2007 (UTC)
This article has been subject to a major revision which brings it into line with regression analysis and linear regression. The section on Monte Carlo has been removed, as it is wholly inappropriate; it has been replaced by a section on parameter statistics. Petergans ( talk) 16:48, 23 February 2008 (UTC)
The proper term for moving a function to a domain where it is linear is a linear transformation. Linearization almost always refers to approximating a function as linear for some bounded range. This is presented in the note:
"Linearization" as used here is not to be confused with the local linearization involved in standard algorithms such as the Gauss-Newton algorithm. Similarly, the methodology of generalized linear models does not involve linearization for parameter estimation.
I'm removing the note as it is no longer needed after the correction. I agree the note was very relevant and important when the term linearization was used.
Someone had suggested that shifting a problem into a linear domain is unnecessary and not recommended. I would ask the author of that section to provide some basis for that assertion beyond referring to the linear transformation section, which indicates that it's fair as long as proper consideration is given to errors. Certain problems, where datasets are very large or time intervals are very short, such as in a feedback control system, can only be solved practically by linear regression. Proper weighting of data points can compensate for the transform and yield theoretically optimal results. —Preceding unsigned comment added by 198.123.51.205 ( talk) 22:58, 13 June 2008 (UTC)
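The weighting point can be made concrete. Fitting y = a·exp(b·x) by taking logs gives a model that is linear in (log a, b), and weighting each point by roughly y² compensates for the error distortion introduced by the transform. A sketch with made-up data (all values illustrative):

```python
import numpy as np

# Illustrative data: multiplicative noise around y = 2.5 * exp(1.3 * x).
rng = np.random.default_rng(2)
x = np.linspace(0, 2, 30)
y = 2.5 * np.exp(1.3 * x) * (1 + rng.normal(scale=0.02, size=x.size))

# Log transform: ln y = ln a + b x, an ordinary linear model in (ln a, b).
X = np.column_stack([np.ones_like(x), x])
z = np.log(y)

# Weighted least squares with weights ~ y^2 approximately undoes the
# distortion of the error structure caused by taking logs.
W = np.diag(y**2)
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ z)
a_hat, b_hat = np.exp(beta[0]), beta[1]
```

With roughly constant relative error, as here, this weighted transformed fit is close to the direct nonlinear fit; with constant absolute error the two can differ substantially, which is the caveat the transformation section makes.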
== Is this a known problem in multiple non-linear regression? ==
(Nonlinear regression is used in many applications in textile research.)
I'm referring to linearly adding non-linear effects. For example:
f(x1,x2) = k + a log(x1) + b log(x2)
Now, suppose explanatory variables x1 and x2 are such that they have identical effects, with all else being equal. Suppose also that the quantities are such that they can be added (e.g. concentration of a greenhouse gas.) Then you have that a = b, so:
f(x1,x2) = k + a log(x1 + x2)
But then, for the two expressions to agree, we would need:
log(x1 + x2) = log(x1) + log(x2)
Which is absurd. So it seems to me that non-linear effects can't really be added linearly. I was just wondering if this is a known problem of multiple non-linear regression analysis. Joseph449008 ( talk) 14:18, 31 December 2009 (UTC)
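The algebra here is easy to check numerically: log(x1) + log(x2) equals log(x1·x2), not log(x1 + x2), so the two functional forms genuinely disagree. For example:

```python
import math

x1, x2 = 3.0, 5.0
lhs = math.log(x1) + math.log(x2)            # = log(x1 * x2) = log(15)
assert math.isclose(lhs, math.log(x1 * x2))  # sum of logs is log of product
assert not math.isclose(lhs, math.log(x1 + x2))  # log(8) != log(15)
```

So a model additive in log(x1) and log(x2) encodes effects that multiply in the original quantities, which is why it cannot also be a function of x1 + x2.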
Despite being a biochemist, I have a problem with the Michaelis–Menten model being used as an example. The Michaelis–Menten equation is a specific case of rectangular hyperbola and has its own jargon which is quite different from the standard mathematical jargon: this article is about a mathematical topic and not enzyme kinetics. Secondly, the example image given has a perfect fit which is a very poor example of regression. -- Squidonius ( talk) 00:31, 2 October 2011 (UTC)
:: Also, that looks a lot like polynomial linear regression. In no way can anyone guess it is non-linear regression from a simple viewing of the image. NK ( talk) 16:39, 18 July 2018 (UTC)
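For anyone following along, the Michaelis–Menten curve v = Vmax·[S]/(Km + [S]) fits with standard nonlinear least squares, and noisy data avoids the "perfect fit" problem criticized above. A sketch with illustrative substrate concentrations and parameter values:

```python
import numpy as np
from scipy.optimize import curve_fit

# Michaelis-Menten rate law; parameter values below are illustrative.
def michaelis_menten(s, vmax, km):
    return vmax * s / (km + s)

rng = np.random.default_rng(3)
s = np.array([0.1, 0.2, 0.5, 1.0, 2.0, 5.0, 10.0])
v = michaelis_menten(s, 2.0, 0.5) + rng.normal(scale=0.05, size=s.size)

# Nonlinear least squares: estimates carry sampling error, so the fitted
# curve does not pass exactly through the data points.
popt, pcov = curve_fit(michaelis_menten, s, v, p0=[1.0, 1.0])
vmax_hat, km_hat = popt
```

An image built from data like this, with visible residuals, would illustrate regression far better than a curve drawn exactly through the points.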
There is a problem with the last equation under the part "Regression Statistics". My web browser cannot convert it to PNG. I tried to fix it myself, but I failed. — Preceding unsigned comment added by Gustafullman ( talk • contribs) 10:49, 7 October 2011 (UTC)
"... observational data are modeled by a function which [sic.] is a nonlinear combination of the model parameters and depends on one or more independent variables." What this actually means, in plain English, is that the model of the observational data is a function of its own parameters and depends on some variables; i.e. the phrase is meaningless. If we cannot write an introductory sentence that means anything then it would be less confusing if we were to remove it altogether. At the moment, all it is actually saying is: "In statistics, nonlinear regression is a form of regression analysis in which observational data are modeled by a function"; maybe we ought to leave it at that and forget about the ensuing word salad? (by 90.217.127.222)
Please forgive me, but I do not like this at all. Do you think somebody reading this is really going to get it and say "ah ha, got it" and go solve his problem with C#, Visual Basic or Excel? I really do not think so. And what's with that "Regression statistics" section? It's a first-order Taylor series expansion. Why doesn't it say that? — Preceding unsigned comment added by 64.207.224.40 ( talk) 17:15, 14 December 2017 (UTC)
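On the Taylor-expansion point: linearizing the model in its parameters at the least-squares solution gives the usual covariance approximation (JᵀJ)⁻¹·s², where J is the Jacobian with respect to the parameters. A sketch with a hypothetical exponential model (all values illustrative):

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative nonlinear model: y = a * exp(-b * x).
def model(x, a, b):
    return a * np.exp(-b * x)

rng = np.random.default_rng(4)
x = np.linspace(0, 4, 40)
y = model(x, 3.0, 1.2) + rng.normal(scale=0.1, size=x.size)
popt, pcov = curve_fit(model, x, y, p0=[1.0, 1.0])

# First-order Taylor expansion in the parameters: build the Jacobian at
# the fit and form the covariance approximation (J^T J)^-1 * s^2.
a, b = popt
J = np.column_stack([np.exp(-b * x), -a * x * np.exp(-b * x)])
resid = y - model(x, *popt)
s2 = resid @ resid / (x.size - 2)
cov_taylor = np.linalg.inv(J.T @ J) * s2  # agrees with pcov from curve_fit
```

Spelling this out in the article, as the comment suggests, would make the "Regression statistics" section much less mysterious.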