This is the talk page for discussing improvements to the Convex optimization article. This is not a forum for general discussion of the article's subject.
Article policies
Find sources: Google (books · news · scholar · free images · WP refs) · FENS · JSTOR · TWL
This article is rated C-class on Wikipedia's content assessment scale. It is of interest to the following WikiProjects:
This article may be too technical for most readers to understand.
The layout and presentation could be much better. I know it's a stub, but what little there is could at least be readable. 163.1.159.21 16:48, 14 May 2005 (UTC)
I see "convex optimization" applied to nonlinear functions with multiple minima. In that context are people really talking just about some convex portion of the domain around a local minimum? Similarly, is it still convex optimization if you are looking for the minimum of a Y-shaped valley where the minimum is at the bottom of the Y? What if it wasn't Y-shaped but more V-shaped so that you could draw an ellipse over either valley in which the function is convex? —Ben FrantzDale 19:50, 31 January 2007 (UTC)
There is no graph on the "banana function" page, so I can't tell. Here is a convex function, Image:Convex-function-graph-1.png, and here is a function which is not convex, Image:Quasiconvex function.png. A greater help would be to read the convex function article rather than this one. Oleg Alexandrov ( talk) 02:15, 7 February 2007 (UTC)
The theory of convex optimization pertains to convex minimization, not maximization (apart from a maximum boundary principle). Maximizing (even quadratic QP) convex functions (on convex subsets) is NP-hard. Shouldn't the title of the article be clarified to "Convex Minimization"? Kiefer.Wolfowitz ( talk) 14:32, 7 June 2009 (UTC) I updated the discussion to specify "convex minimization" where appropriate. Kiefer.Wolfowitz ( talk) 15:49, 7 June 2009 (UTC)
Would somebody improve the illustration of the "problem hierarchy", please?
Convex quadratic programming (QP) (with linear constraints) is more general than linear programming.
I would not object to somebody changing the QP with Q constraints to simply QP (with linear constraints). Thanks. Kiefer.Wolfowitz ( talk) 19:47, 7 January 2010 (UTC)
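The containment mentioned above can be seen directly from the standard forms: setting the quadratic term of a convex QP to zero recovers an LP. (The notation below is my own illustrative sketch, not taken from the article.)

```latex
% Convex QP standard form; taking Q = 0 reduces it to the LP beneath.
\min_{x}\ \tfrac{1}{2}\, x^{T} Q x + c^{T} x
\quad \text{subject to} \quad A x \le b, \qquad Q \succeq 0
\\[4pt]
Q = 0:\qquad \min_{x}\ c^{T} x \quad \text{subject to} \quad A x \le b
```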
The first sentence is grammatically wrong: "is focuses" -> "focuses" —Preceding unsigned comment added by 139.179.161.17 ( talk) 12:36, 28 February 2010 (UTC)
Does anyone know where I can find some information about "abstract convex programming"? By this I mean minimising a convex function over a convex feasible set. That is, there are no explicit constraints, all you know is that the feasible set is convex. —Preceding unsigned comment added by 203.167.251.186 ( talk) 04:48, 19 July 2010 (UTC)
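As a toy illustration of that abstract setting (minimizing a convex function over a convex set that is accessible only through, say, a projection oracle), here is a minimal projected-gradient sketch. The function, the set, and the step size are my own assumptions for illustration, not from any reference on "abstract convex programming":

```python
# Sketch: projected gradient descent for minimizing a convex function
# over a convex feasible set known only via a projection oracle.

def project_to_ball(x, radius=1.0):
    """Projection oracle: project a point onto the Euclidean ball (a convex set)."""
    norm = sum(xi * xi for xi in x) ** 0.5
    if norm <= radius:
        return list(x)
    return [xi * radius / norm for xi in x]

def projected_gradient_descent(grad, project, x0, step=0.1, iters=500):
    """Alternate a gradient step on the objective with a projection onto the set."""
    x = list(x0)
    for _ in range(iters):
        g = grad(x)
        x = [xi - step * gi for xi, gi in zip(x, g)]
        x = project(x)
    return x

# Example: minimize f(x) = (x0 - 2)^2 + (x1 - 2)^2 over the unit ball.
# The unconstrained minimizer (2, 2) lies outside the ball, so the
# constrained solution is its radial projection, (1/sqrt(2), 1/sqrt(2)).
grad = lambda x: [2 * (x[0] - 2), 2 * (x[1] - 2)]
x_star = projected_gradient_descent(grad, project_to_ball, [0.0, 0.0])
```

Convexity of both the objective and the set is what makes this local scheme converge to the global constrained minimizer.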
In the introductory section it is mentioned that
With recent improvements in computing and in optimization theory, convex minimization is nearly as straightforward as linear programming.
While that is certainly true in theory (existence of a rich duality theory and polynomial-time algorithms), I am not so sure to what extent this is true in practice. For linear programs, simplex and interior-point methods can solve problems with input size (nonzero entries in the constraint matrix) on the order of millions on a desktop computer. But even for a well-studied and highly structured class of convex optimization problems, namely semidefinite optimization, current solvers do not fare very well even for problems with input size on the order of a few thousand nonzeroes. See the benchmarks at http://plato.asu.edu/bench.html.
Admittedly this is not a fair comparison, given that the codes for LPs are much more mature. However, semidefinite optimization solvers (as well as general convex optimization software) face an apparently intrinsic problem of numerical stability.
If anybody else can confirm that I am not completely outdated about this, could you please add these remarks to the page, or let me know so that I can write the remarks myself? Thanks.
-- Marcelkcs ( talk) 15:20, 4 June 2011 (UTC)
This article would benefit from a section on applications of convex optimization, which might include concrete examples. Akshayka ( talk) 20:11, 14 January 2019 (UTC)
I would like to raise concerns about the recent edits to the article. Specifically, I find the claim that "the field of convex optimization also considers the far more difficult problem of maximizing convex functions" questionable. According to Boyd/Vandenberghe, which is considered a standard reference, a convex optimization problem has three additional requirements as compared to a general optimization problem, namely 1) the objective function must be convex (in the case of minimization), 2) the inequality constraint functions must be convex, and 3) the equality constraint functions must be affine (Section 4.2.1, pp. 136-). In a concave maximization problem, requirements 2) and 3) are fulfilled and the concave objective is maximized, which is equivalent to minimizing its negative (a convex function). Since they are equivalent, both these problems are referred to as convex optimization problems (p. 137). A problem that does not fulfil these requirements, for example the problem of minimizing a convex function subject to an equality constraint that is not affine, is referred to as a nonlinear optimization problem and is treated with global optimization methods. The term "convex minimization" seems to focus only on requirement 1) above; therefore, a reference would be needed to show that this is standard terminology. Isheden ( talk) 19:36, 26 October 2011 (UTC)
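For readers following this thread, the standard form that the three requirements above describe can be written out as follows (a sketch of the usual Boyd/Vandenberghe-style formulation, from memory):

```latex
\begin{align*}
\text{minimize}   \quad & f_0(x) \\
\text{subject to} \quad & f_i(x) \le 0, \quad i = 1, \dots, m \\
                        & a_i^{T} x = b_i, \quad i = 1, \dots, p
\end{align*}
% with f_0, \dots, f_m convex and the equality constraints affine.
```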
I beg to differ. In Rockafellar (1997), Section 28 (p. 273), the term "ordinary convex program" is defined in a very similar way to Boyd's definition of a "convex optimization problem". Rockafellar even suggests a more rigorous definition on the same page as a tuple of functions satisfying the conditions given. Indicator functions and penalty representation are discussed in Boyd as well, see 11.2 for the use of the indicator function to make inequality constraints implicit in the objective, and 5.4.4 for the price interpretation of violating an inequality constraint. Perhaps what you mean is that the Lagrangian (see 5.1.1), which includes the constraints, is a convex function, and that "convex minimization" is about finding the minimum of this function (called the Lagrange dual function, see 5.1.2)? I think that what you call the theory of convex maximization would fit better in the convex analysis article, since this is a basic property of a convex function. The section "Generalized convexity" as it is formulated now would fit better in the article convex function. And you're right that local methods are often used when finding the global optimum is difficult, but the point of convex optimization is that it is in a certain sense the broadest class of nonlinear optimization problems for which any local optimum is the global optimum. Isheden ( talk) 21:51, 26 October 2011 (UTC)
I think different interpretations of the word "convex" might cause confusion here. In the term convex optimization, "convex" refers to the optimization problem, not to the objective function. A convex optimization problem or convex program may be either a minimization or a maximization problem. As cited from the article nonlinear optimization: "If the objective function is concave (maximization problem), or convex (minimization problem) and the constraint set is convex, then the program is called convex and general methods from convex optimization can be used in most cases." For this reason, I am against moving the article to convex minimization. Moreover, "convex optimization" and "convex optimization problem" are much more commonly used than "convex minimization" and "convex minimization problem", respectively, as determined by searches on Google Books. I think the article name should be based on the more common use in the literature. I noted also that the term "convex programming" is even more commonly used. On the other hand, I agree that the article should discuss the importance of convex optimization for non-convex optimization problems. I also think it is a good idea to mention that even mildly non-convex problems are often NP-hard. Isheden ( talk) 10:46, 27 October 2011 (UTC)
What's the question here? Convex optimization is certainly a subject, as borne out by Isheden's numbers. How long has "convex minimization" been in Wikipedia? Could that account by itself for the 6 hits it got, vs the 156 hits for "convex optimization?"
Usually optimization minimizes something, such as your cost. It can always be turned into maximizing something, such as what you have left after subtracting your cost.
I'm having difficulty understanding these arguments for an article on something other than convex optimization. -- Vaughan Pratt ( talk) 07:45, 2 November 2011 (UTC)
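The cost/reward equivalence mentioned above is easy to illustrate with a toy sketch (the payoff function and grid here are invented purely for illustration):

```python
# Illustration: maximizing a payoff is the same problem as minimizing
# its negation; the maximizer and the minimizer of the negative coincide.

def payoff(x):
    return -(x - 3.0) ** 2 + 10.0   # concave, peaks at x = 3

xs = [i / 10.0 for i in range(0, 61)]          # grid on [0, 6]
argmax = max(xs, key=payoff)                   # maximize the payoff
argmin_of_neg = min(xs, key=lambda x: -payoff(x))  # minimize its negation
# Both recover the same point, x = 3.0.
```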
I trust the "factual accuracy" bs is over. Kiefer. Wolfowitz 12:04, 3 February 2012 (UTC)
I'm confused. The theory section states:
Technically you can have a set of one, but the middle line above loosely implies there can be multiple global minima, which is contradicted by the final statement. Can whatever is meant be better expressed please? Aarghdvaark ( talk) 06:58, 30 August 2012 (UTC)
The page is confusing because it seems to be using "minimum" to sometimes refer to the "minimizer". The minimum is a scalar value in the range of the objective function. The minimizers are the points in the domain at which the minima are reached. The global minimum for any optimization problem, or function, or set, is (assuming it exists) unique by definition. Also, discussing the set of all local minima, while perhaps OK (though is it much use to talk of its convexity?), is not as interesting as the set of minimizers at which those local minima are reached. — Preceding unsigned comment added by 129.81.103.111 ( talk) 17:50, 17 July 2015 (UTC)
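A small sketch of the distinction raised above (the function is my own toy example): a convex function whose global minimum, a single value, is attained on a whole interval of minimizers:

```python
# The minimum is one value in the range; the minimizers form a set in
# the domain. For a convex function that set is convex (here, an interval).

def f(x):
    return max(0.0, abs(x) - 1.0) ** 2   # convex; identically zero on [-1, 1]

xs = [i / 100.0 for i in range(-300, 301)]    # grid on [-3, 3]
minimum = min(f(x) for x in xs)               # the unique minimum value: 0.0
minimizers = [x for x in xs if f(x) == minimum]
# minimizers spans the (convex) interval [-1, 1].
```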
The applications section is quite poorly written. Almost all of its citations point to a single set of slides of Boyd et al., but those examples prove only that some problems can be modeled as convex optimization problems, not that convex optimization is actually used in practice to solve any of these problems. I've updated the phrasing appropriately in that section, but I would love it if someone could provide more concrete examples of convex optimization used in real world problems (not just in slides or academic papers). — Preceding unsigned comment added by J2kun ( talk • contribs) 19:35, 21 January 2024 (UTC)