This article is rated Start-class on Wikipedia's content assessment scale.
Regularization has two articles, termed "regularization (mathematics)" and "regularization (machine learning)".
There is not much difference between the two concepts. Presently "mathematics" has a short description of inverse problems, while "machine learning" leans toward a "statistical" description. I propose that the article should be called "regularization (mathematics)"; "machine learning" is a narrow field. Other names or possible redirects would be
— fnielsen ( talk) 13:08, 16 April 2008 (UTC)
The origin of 'regularization' is inverse problem theory. Regularization in machine learning is the application of inverse problem theory in machine learning. Why is the first example not a general one instead of some machine learning problem? — Preceding unsigned comment added by 46.76.184.231 ( talk) 15:24, 19 October 2018 (UTC)
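For reference, a fully general (non-machine-learning) example of the sort this comment asks for is the classical linear inverse problem with Tikhonov regularization; the symbols below are illustrative and not taken from the article:

```latex
% Linear inverse problem: recover x from noisy data b = A x + noise, where A may
% be ill-conditioned or singular. Tikhonov regularization solves
%   min_x  ||A x - b||_2^2 + lambda ||x||_2^2,   lambda > 0,
% whose closed-form minimizer is
\[
  x_\lambda = \arg\min_{x}\; \| A x - b \|_2^2 + \lambda \| x \|_2^2
  = \bigl(A^{\top} A + \lambda I\bigr)^{-1} A^{\top} b .
\]
```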
I'm no statistics expert, but I sometimes dabble in machine learning. I was wondering what the following means:
This strikes me as a bit odd, because one of the main things I use cross-validation for is to find the right value of a regularization parameter. Can anyone with more experience in statistics provide a clue here? Qwertyus ( talk) 13:03, 23 February 2012 (UTC)
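For what it's worth, that workflow (choosing the regularization strength by cross-validation) can be sketched in a few lines. This assumes scikit-learn; the synthetic data and candidate penalties are purely illustrative:

```python
# Minimal sketch: pick the ridge penalty by cross-validation.
from sklearn.datasets import make_regression
from sklearn.linear_model import RidgeCV

# Synthetic regression data (illustrative only).
X, y = make_regression(n_samples=200, n_features=20, noise=10.0, random_state=0)

# RidgeCV fits the model for each candidate penalty (called alpha here) and
# keeps the one with the best cross-validated score.
candidate_alphas = [1e-3, 1e-2, 1e-1, 1.0, 10.0, 100.0]
model = RidgeCV(alphas=candidate_alphas, cv=5).fit(X, y)

print("selected regularization strength:", model.alpha_)
```

Tuning the penalty this way is standard practice.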
The use of hats seems to be inconsistent and never clearly explained. In the case of Tikhonov-regularized least squares it even seems to be unnecessary. — Preceding unsigned comment added by Tomas.krehlik ( talk • contribs) 10:48, 2 February 2016 (UTC)
The article never explains what variables X and Y (input and output) mean. — Preceding unsigned comment added by 169.234.127.96 ( talk) 19:39, 17 September 2014 (UTC)
This article is littered with symbols that are not explained. If you don't know the meaning of a symbol you are flummoxed; if you do know the meaning, then you don't need to read the article (except perhaps as an aide-memoire). My personal flummox is the parallel lines and 2s, as in $\|\cdot\|_2^2$: what does it mean? I guess it's a norm (magnitude of a vector, matrix or function) and the top 2 means the components / kernel are squared, but what about the lower 2? I went to the article on Tikhonov regularization, but that wasn't much help, as it describes having a matrix Q in place of the lower, scalar, 2. I get that we would probably not explain every elementary symbol (although we might), but IMHO this norm notation is sufficiently obscure as to require an explanation. Aredgers ( talk) 13:33, 23 November 2017 (UTC)
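For concreteness, the notation in question is standard: the lower index names which norm is meant and the upper 2 squares it. A sketch with a generic vector w (the symbol is illustrative):

```latex
% Subscript 2 = Euclidean (l2) norm; superscript 2 = the norm squared.
\[
  \| w \|_2 = \Bigl(\sum_{i=1}^{n} w_i^{2}\Bigr)^{1/2},
  \qquad
  \| w \|_2^{2} = \sum_{i=1}^{n} w_i^{2}.
\]
% In generalized Tikhonov regularization the scalar subscript is replaced by a
% positive-definite matrix Q, giving the weighted norm
%   ||w||_Q^2 = w^T Q w,
% which reduces to the plain l2 case when Q is the identity.
```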
This article needs improvement. For example: "It (sic, Tikhonov regularization) is also known as ridge regression." Not quite: ridge regression is a special case of Tikhonov regularization, with other minor differences; see https://stats.stackexchange.com/q/234280/99274 Also, machine learning regularization is a subset of regularization for statistics and not the obverse. Also, regularization (mathematics) can mean something else entirely; for example, the regularized lower incomplete gamma function is $P(a,x)=\gamma(a,x)/\Gamma(a)$, so called because dividing by $\Gamma(a)$ normalizes it. CarlWesolowski ( talk) 15:30, 13 April 2020 (UTC)
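To make the "special case" point concrete, one common way of writing the two problems side by side (symbols illustrative, not the article's):

```latex
% General Tikhonov regularization with an arbitrary regularization matrix Gamma,
% and ridge regression as the special case Gamma = sqrt(lambda) * I:
\[
  \hat{w}_{\mathrm{Tikhonov}} = \arg\min_{w}\, \| Xw - y \|_2^2 + \| \Gamma w \|_2^2,
  \qquad
  \hat{w}_{\mathrm{ridge}} = \arg\min_{w}\, \| Xw - y \|_2^2 + \lambda \| w \|_2^2 .
\]
```

In practice ridge regression also typically standardizes the predictors first, which is one of the "minor differences" mentioned above.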
Regularization is a term broadly used in many sub-fields of pure and applied mathematics, data science, etc. In its current form, and probably because of the previous merge, it is strongly biased toward machine learning. The very first section starts with "Empirical learning of classifiers (...)", which is incredibly narrow. — Preceding unsigned comment added by Pthibault ( talk • contribs) 09:48, 30 November 2020 (UTC)
Agreed. A way to write this section that is both more neutral and more general is to talk about a generic interpolation problem: given a collection of samples, find an interpolating function. This problem dates back hundreds of years, long before the dawn of machine learning, and is central to many sciences beyond computer science. (I fear, however, that if we try to edit the article in this way, we will be bludgeoned by the current horde of machine learners...) — Preceding unsigned comment added by 74.109.244.65 ( talk • contribs) 18 May 2021 (UTC)
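One classical instance of that older framing is penalized curve fitting in the style of smoothing splines, which long predates machine-learning terminology; the functional below is a standard textbook form, not taken from the article:

```latex
% Given samples (x_i, y_i), trade off fidelity to the data against smoothness of f:
% lambda = 0 gives exact interpolation; larger lambda gives smoother fits.
\[
  \hat{f} = \arg\min_{f}\; \sum_{i=1}^{n} \bigl(y_i - f(x_i)\bigr)^2
  + \lambda \int \bigl(f''(t)\bigr)^{2}\, dt .
\]
```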
Is there any information on the etymology of the term? I always assumed that the term goes back to Tikhonov regularization, which takes a singular or near-singular matrix and makes it "more regular", i.e. increases the absolute value of its determinant. — Preceding unsigned comment added by 2001:16B8:2E8A:8800:B45F:5236:B1C9:CEB7 ( talk) 07:40, 24 February 2022 (UTC)
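As a side note, that reading of "more regular" can be made precise, at least for the normal-equations matrix; this is a sketch, not sourced from the article:

```latex
% If A^T A has eigenvalues sigma_i^2 >= 0 (possibly zero, i.e. A^T A singular),
% then A^T A + lambda I has eigenvalues sigma_i^2 + lambda > 0 for any lambda > 0:
\[
  \det\bigl(A^{\top}A + \lambda I\bigr) = \prod_{i} \bigl(\sigma_i^{2} + \lambda\bigr) > 0 ,
\]
% so the regularized matrix is nonsingular and better conditioned, consistent
% with reading "regularize" as "make the matrix regular (invertible)".
```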
Note 1, which gives a paper by Kratsios and Hyndman as a reference for the claim that regularization is used in finance, includes the statement "Term structure models can be regularized to remove arbitrage opportunities [sic?]." This is a meaningful and correct statement in the context of the original study (although certainly confusing to the general public). It also doesn't seem to appear, at least verbatim, in that paper or in other papers by Kratsios, although it is certainly a sentence that could appear in this part of the mathematical finance literature. (Arbitrage opportunities are an unwanted property of solutions to mathematical finance problems; regularization can be used to restrict solutions to those that don't have arbitrage opportunities.) I would delete it, but I hesitate because I don't know the intention of the original author of this text ...
I'd like to propose that we try to update this article by making use of an AI. MorgsTheBot ( talk) 15:13, 1 April 2023 (UTC)