Talk:Karush–Kuhn–Tucker conditions

Latest comment: 2 years ago by Redav in topic Grammar and layout

Error in the main KKT theorem

The KKT theorem seems to refer to a different formulation of the problem than the one actually given: the theorem makes no reference to   or  , and the Lagrangian in the theorem is   whereas in the problem statement it is  . Also, the term "optimal vector" is not defined; if it means "globally optimal", then the theorem is wrong, since the saddle point condition gives at most local optimality (and no convexity or the like is assumed here). In short, this central element of the page seems to be a mixture of different formulations. UffeHThygesen (talk) 09:47, 9 December 2020 (UTC)

KKT in complex spaces

Could anyone please write on the validity of the KKT conditions for real-valued functions of complex vector variables? The current revision of the article covers the real case only. (2andrewknyazev (talk) 23:48, 22 July 2010 (UTC)).

This would be a nonstandard topic, better suited to the research literature than to an encyclopedia. (A complex vector space of complex dimension D can be viewed as a real vector space of dimension 2D.) Kiefer.Wolfowitz (talk) 11:08, 25 July 2010 (UTC)
Is there a special reason for the complex case being "nonstandard"? Why, e.g., would minimizing a real quadratic form over a complex space be any less "standard" than over a real space? If it is true that the KKT conditions hold without any changes for complex vector variables, that would be an encyclopedia-level statement. If there is a special reason why they do not hold, that would also be an encyclopedia-level statement, I guess. (2andrewknyazev (talk) 16:53, 26 July 2010 (UTC))
WP is not a forum for original research. An article on KKT conditions should respect WP guidelines, particularly wrt the weight of topics in standard, leading reliable references (e.g. textbooks and research monographs, or invited review articles).
In a constructive spirit, let me suggest some references. For the complex field, you may look at the literature on Hilbert's 17th problem (sums-of-squares representations, e.g. J-B Lasserre), plurisubharmonic functions, etc. No reliable standard reference comes to mind for complex vector spaces: it may be worthwhile to look at Karlin's book from the 1950s, e.g. Sobyczyk (sic.). Kiefer.Wolfowitz (talk) 21:52, 29 July 2010 (UTC)

Complex optimization problems can easily be transformed to real optimization problems mechanically, so I do not think they have any special properties. Before the transformation you need to define an order on the complex space in order to transform it to a real one. If you mean a real-valued function over a complex vector space, it is even simpler: just analyze a vector space with 2n dimensions (real and imaginary parts). --91.213.255.7 (talk) 03:44, 10 April 2012 (UTC)
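The split into real and imaginary parts mentioned above can be sketched in a few lines of Python; the objective below, f(z) = |z − (1+2j)|², with obvious minimizer z* = 1+2j, is my own illustrative choice, not anything from the article:

```python
import numpy as np

# Real-valued objective of one complex variable: f(z) = |z - (1+2j)|^2.
# (Illustrative function; its minimizer is obviously z* = 1 + 2j.)
def f(z):
    return abs(z - (1 + 2j)) ** 2

# Reparameterize over R^2: x = (Re z, Im z).  f becomes an ordinary real
# function of two real variables, and standard real methods apply.
def f_real(x):
    return f(x[0] + 1j * x[1])

def grad(x, h=1e-6):
    # Central finite-difference gradient of the real reparameterization.
    g = np.zeros(2)
    for i in range(2):
        e = np.zeros(2)
        e[i] = h
        g[i] = (f_real(x + e) - f_real(x - e)) / (2 * h)
    return g

# Plain gradient descent on the 2-dimensional real problem.
x = np.zeros(2)
for _ in range(200):
    x = x - 0.1 * grad(x)

z_star = x[0] + 1j * x[1]  # recovers approximately 1 + 2j
```

No complex-specific theory is needed here: all the optimality machinery is applied entirely on the real 2n-dimensional side.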

a minus or a plus?

In other sources, there is a minus (instead of a plus) sign in front of the multiplier (or it is to the right of the equality sign). Is the English Wikipedia article wrong? See for instance the Swedish article. /Anton —Preceding unsigned comment added by 83.249.20.170 (talk) 20:00, 2 September 2009 (UTC)

I have edited http://sv.wikipedia.org/wiki/Karush-Kuhn-Tucker-villkor to make the sign correct. (2andrewknyazev (talk) 14:12, 29 July 2010 (UTC)).

typo

The condition seems to be g_i(x) <= 0 and g and f are concave. I think at least g should be convex; see e.g. http://www2.imm.dtu.dk/courses/02711/lecture3.pdf

There seems to be a typo here: should the complementarity condition be $\sum_i \mu_i g_i(x^\ast) = 0$, not $\mu_i g_i(x^\ast) = 0$ for each $i$? Tveldhui 21:05, 17 March 2006 (UTC)

These are equivalent statements.
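A one-line argument for the equivalence, using only the sign constraints already present in the KKT formulation:

```latex
\mu_i \ge 0 \ \text{and}\ g_i(x^\ast) \le 0
\;\Longrightarrow\;
\mu_i\, g_i(x^\ast) \le 0 \ \text{for each } i,
\quad\text{hence}\quad
\sum_i \mu_i\, g_i(x^\ast) = 0
\;\Longleftrightarrow\;
\mu_i\, g_i(x^\ast) = 0 \ \text{for all } i.
```

A sum of nonpositive terms vanishes exactly when every term does.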

The introduction says "Allowing inequality constraints, the KKT approach to nonlinear programming generalizes the method of Lagrange multipliers, which had traditionally required linear constraints." Shouldn't "linear" (the second-to-last word) be replaced with "equality"? AFAIK, KKT and Lagrange multipliers both handle nonlinear programming quite naturally, and the first three words of the sentence also seem to support the idea that the shortcoming of Lagrange multipliers is that they require "equality constraints." 18.214.1.205 (talk) 02:46, 31 March 2010 (UTC)

Thanks for noticing the problem with linear versus equality constraints, which I fixed. Kiefer.Wolfowitz (talk) 07:59, 31 March 2010 (UTC)

One Condition missing

h(x) = 0 193.143.32.39 09:48, 3 January 2007 (UTC)

Isn't that implied by the stipulation that x* be a feasible point? -- Jitse Niesen (talk) 12:44, 4 January 2007 (UTC)


Regularity section

Does the $\lambda$ under "regularity" mean the $\mu$ and $\nu$ in the section before that? In this case this should be corrected, because the $\lambda$ for the dual multipliers doesn't appear anywhere else in the article. —Preceding unsigned comment added by 84.75.149.134 (talk) 22:17, 30 January 2008 (UTC)

Suggestions for article improvement

  • The article needs historical background.
  • It needs to show how the Lagrange method can be a special case of KKT, when one uses two inequality constraints as the equivalent of one equality constraint.
  • From an economics student's point of view, the article should show more results related to the problem of cost minimization subject to constraints. Also, it would be a good idea to explain the different steps involved in applying the method to particular problems. For example, minimization problems imply a different set up of conditions than maximization problems. A reliable reference on this is Chiang and Wainwright's textbook of mathematical economics.
  • The article needs a graphical illustration of an example of functions defined over a set of inequality constraints.--Forich (talk) 03:13, 6 May 2008 (UTC)
How about finding a max or min value of f(x) on an interval [a, b], or perhaps a two-dimensional simplex? The inequalities would be constraints on the domain of x. You could have similar constraints on the range (say, f has to be less than or equal to another function g). That should yield an interesting problem graphically. -- KarlHallowell (talk) 13:04, 14 May 2013 (UTC)
  • The article refers to active constraints without defining what it means by "active". I suppose this refers to conditions with non-zero mu and lambda? Janez Demsar (talk) 19:41, 31 August 2009 (UTC)
It also doesn't define what is meant by "Primal feasibility", "Dual feasibility", and "Complementary slackness". -- KarlHallowell (talk) 13:04, 14 May 2013 (UTC)
  • I think the article should reference (and compare/contrast with) the well-known Fritz John conditions as well (see Bertsekas' Fritz John paper for some background). Also, more elaboration on the Slater condition would be great, since this is one of the most used conditions. Lavaka (talk) 16:47, 16 September 2009 (UTC)
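The interval suggestion above is easy to work through by hand; here is a minimal numerical check in Python (the function f(x) = (x − 3)² on [0, 2] and all of the numbers are my own illustrative choices, not taken from the article):

```python
# Minimize f(x) = (x - 3)**2 subject to 0 <= x <= 2, written in KKT form
# with g1(x) = -x <= 0 and g2(x) = x - 2 <= 0.
def fprime(x):          # f'(x) = 2(x - 3)
    return 2.0 * (x - 3.0)

g1prime, g2prime = -1.0, 1.0   # constant gradients of g1 and g2

x_star = 2.0            # candidate minimizer: the right endpoint
mu1, mu2 = 0.0, 2.0     # multipliers: g1 inactive (mu1 = 0), g2 active

# Stationarity: f'(x*) + mu1*g1'(x*) + mu2*g2'(x*) = 0
stationarity = fprime(x_star) + mu1 * g1prime + mu2 * g2prime

# Complementary slackness: mu_i * g_i(x*) = 0 for each i
cs1 = mu1 * (-x_star)
cs2 = mu2 * (x_star - 2.0)
# stationarity, cs1 and cs2 all evaluate to 0.0, and mu1, mu2 >= 0,
# so every KKT condition holds at x* = 2.
```

The active constraint at the right endpoint carries the whole multiplier; plotting f over [0, 2] would make this the graphical example requested above.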

Sufficient conditions

Someone correct me if I'm wrong, but shouldn't the sufficient conditions involve the Hessian? I think it's something like: the Hessian is positive definite with respect to feasible search directions? It appears the sufficient conditions are just copy/pasted necessary conditions... Alec.N (talk) 08:11, 27 October 2008 (UTC)

The only difference between the necessary and the sufficient conditions is that in the sufficient case f and g are convex and h is affine.

You are correct. It's just a particular case. I've clarified this section a bit. (2andrewknyazev (talk) 18:00, 28 July 2010 (UTC)).

math not displayed

I've tried different encodings, but all I see at the end of the article (regularity conditions) are little squares rather than math (evidently, with different encoding you get different gibberish). E.g. "It can be shown that LICQ⇒MFCQ" etc. I went to the source, hoping that it was some simple omission, but this wasn't the case.

Bottom line, the math needs to be fixed, it seems. —Preceding unsigned comment added by 83.83.234.195 (talk) 09:53, 9 February 2009 (UTC)

The Hessian matrix determines whether the region is convex or not: H(x) > 0 means that it is strictly convex, and H(x) ≥ 0 that the region is convex. Strict convexity ensures that the solution found is a global minimum and thus optimal; otherwise it might simply be a saddle point. —Preceding unsigned comment added by 86.163.157.227 (talk) 11:18, 18 April 2009 (UTC)
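For a quadratic f(x) = ½ xᵀQx the Hessian is the constant matrix Q, so the convexity test described above reduces to an eigenvalue check; a short sketch in Python, where Q is an arbitrary illustrative matrix of my own choosing:

```python
import numpy as np

# For f(x) = 0.5 * x^T Q x the Hessian H(x) = Q is constant, so convexity
# can be read off from Q's eigenvalues.  (Q here is an arbitrary example.)
Q = np.array([[2.0, 0.5],
              [0.5, 1.0]])

eigvals = np.linalg.eigvalsh(Q)  # eigvalsh: eigenvalues of a symmetric matrix

strictly_convex = bool(np.all(eigvals > 0))   # H > 0, positive definite
convex = bool(np.all(eigvals >= 0))           # H >= 0, positive semidefinite
# Both flags are True for this Q (eigenvalues are about 0.79 and 2.21).
```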

First Order Necessary Conditions (FONC)

In the book Applied Optimization (2006) by Ross Baldick, the author makes almost no mention of the KKT conditions, and instead uses terms such as First Order Necessary Conditions (FONC) and Second Order Sufficient Conditions (SOSC). Are these terms used regularly in optimization? Do they warrant a mention in this article? ---201.127.248.133 (talk) 01:34, 10 February 2010 (UTC)

Now added (2andrewknyazev (talk) 18:02, 28 July 2010 (UTC)).


typo

In the discussion of the value function, shouldn't the \lambda really be \mu? 18.78.0.83 (talk) 21:10, 28 June 2010 (UTC)

Maximization case

In view of the confusion regarding the signs in the Lagrangian, a short discussion of the maximization case would also be helpful. Isheden (talk) 08:15, 14 December 2011 (UTC)

Example

The current example seems to be confusing. Can someone please update it with more details? — Preceding unsigned comment added by 128.97.206.107 (talk) 20:33, 22 January 2013 (UTC)

How about some examples? The Lagrange multipliers article has multiple examples of using them to solve optimization problems. Similar simple examples could be created for this article. Does anybody have some external resources, books or articles, with sensible examples? --91.213.255.7 (talk) 03:45, 10 April 2012 (UTC)

The German and, even more so, the Portuguese Wikipedia look to have interesting examples. — Preceding unsigned comment added by 91.213.255.7 (talk) 03:49, 10 April 2012 (UTC)
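In the meantime, here is the kind of small worked example being asked for, checked numerically in Python; the problem (minimize x² + y² subject to x + y ≥ 1) is a common textbook-style choice of my own, not something taken from the article:

```python
# Minimize f(x, y) = x**2 + y**2 subject to x + y >= 1,
# i.e. g(x, y) = 1 - x - y <= 0.  The analytic KKT point is
# x* = y* = 1/2 with multiplier mu = 1.
x, y, mu = 0.5, 0.5, 1.0

grad_f = (2.0 * x, 2.0 * y)   # gradient of the objective at (x*, y*)
grad_g = (-1.0, -1.0)         # gradient of the constraint

# Stationarity: grad f + mu * grad g = 0, componentwise.
station = (grad_f[0] + mu * grad_g[0], grad_f[1] + mu * grad_g[1])

g_val = 1.0 - x - y   # primal feasibility: g <= 0 (here g = 0, active)
slack = mu * g_val    # complementary slackness: mu * g = 0
# station == (0.0, 0.0), g_val == 0.0, slack == 0.0 and mu >= 0:
# every KKT condition is satisfied at (1/2, 1/2).
```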


Mistake in the example?

I am not sure, but the following KKT condition in the economic example doesn't sound correct to me:  

I think it should be a "=".

The source, which I have now added, uses the inequality version along with a condition that either the inequality holds with equality or the choice variable equals zero. Loraof (talk) 17:51, 10 April 2017 (UTC)

Second order necessary conditions?

I do not know what they are called, but the second order conditions should be discussed more. Second order necessary conditions:

Hessian matrix of the objective function is positive semidefinite


Second order sufficient conditions:

Hessian matrix of the objective function is positive definite


Sources (there are a large number, but here is one: http://www.stanford.edu/class/msande211/KKTgeometry.ppt) — Preceding unsigned comment added by 150.135.222.234 (talk) 21:30, 8 March 2013 (UTC)

matrix notation

What does the comma mean in this operation?

 

Can someone who understands it explain it better in the article?

Done. Loraof (talk) 03:01, 10 April 2017 (UTC)

Problem in the Mangasarian-Fromovitz constraint qualification conditions?

If I want to minimize   under the constraints   and  , there is only one feasible point,  . The gradients of the constraints   and   are positive-linearly independent (the Mangasarian-Fromovitz condition, as stated in the table), and yet the KKT conditions are not necessary to certify the optimality of  . They actually do not hold in   because there is no linear combination of   and   that is equal to  . Daniel.porumbel (talk) 13:14, 16 July 2017 (UTC)

In your example, the gradients of the constraints you provide are linearly dependent. Zfeinst (talk) 18:12, 13 July 2017 (UTC)
Yes, but I understand from the table that the Mangasarian-Fromovitz condition (positive-linear independence) would be enough to derive the necessity of the KKT conditions; see the table in the section "Regularity conditions (or constraint qualifications)". In my example, the Mangasarian-Fromovitz constraint qualification is satisfied, yet the KKT conditions do not hold. Daniel.porumbel (talk) 13:14, 16 July 2017 (UTC)
Positive linear INdependence is the statement of the Mangasarian-Fromovitz condition. Your example has positive linear dependence since  , which means the condition does not hold. Zfeinst (talk) 23:15, 16 July 2017 (UTC)
Thanks for the reply, but sorry, I understand from the article that I do not have positive linear dependence for   and  . I repeat the given dependence definition: ( ) is positive-linearly dependent if there exist   not all zero such that  . If   and  , what are the positive values of   and   so that  ? Daniel.porumbel (talk) 13:51, 18 July 2017 (UTC)
See the updated definition in the text. Hopefully this answers your question. Zfeinst (talk) 15:15, 18 July 2017 (UTC)
Yes, perfect, thanks. Daniel.porumbel (talk) 21:41, 18 July 2017 (UTC)
In fact, one can find an analogous counterexample even for this second definition of positive linear dependence. Minimize   under the constraints   and  . There is only one feasible point:  . The gradients of the constraints at this point are   and  . They are positive-linearly independent according to the new definition. The Mangasarian-Fromovitz condition should imply the necessity of the KKT conditions, but they do not hold in  . I repeat the new positive dependence definition: ( ) is positive-linearly dependent if there exist   and an element   such that  . With this definition,   and   are positive-linearly independent. So either I missed something, or there is something missing in the Mangasarian-Fromovitz conditions as written in the article. Daniel.porumbel (talk) 10:23, 19 July 2017 (UTC)
You are yet again correct. Bertsekas' "Nonlinear Programming" (2nd edition), pages 329-330, defines MFCQ as a necessary condition thus: let   be a local minimum of the problem   where   are continuously differentiable functions. If the gradients   are linearly independent and there exists a vector   such that   and   for all equality constraints and all active inequality constraints, then the KKT conditions are necessary.
Since your example has only equality constraints, this requires linear independence of the gradients (plus an additional condition), which is not satisfied. The text in the article should be updated to reflect this actual definition. Zfeinst (talk) 12:13, 19 July 2017 (UTC)
Thanks for this quick response. Indeed, even if I replace   by  , these new Mangasarian-Fromovitz conditions are not satisfied. The article should be updated. Daniel.porumbel (talk) 13:27, 19 July 2017 (UTC)
Actually, I think I managed to figure out what was on the mind of the first person who wrote the Mangasarian-Fromovitz condition in the table. Let us first simplify and consider the case with no equality constraints, and generalize afterwards. The existence of some   such that   for all active inequality constraints is equivalent to the fact that there exist no   not all zero such that  . This is a consequence of the hyperplane separation theorem. This formulation is equivalent to saying that the  's are positive-linearly independent, using the initial (not accurate, as far as I can see) definition of positive-linear dependence. If there are some equality constraints  , we need to project the gradients   onto the null space (kernel) of  . For these projected gradients  , there must be no   not all zero such that  . Anyway, the article should be updated. Daniel.porumbel (talk) 14:07, 20 July 2017 (UTC)

I was the one who wrote the first definition of MFCQ. The current definition of positive-linear dependence was wrong. I've removed this definition. — Preceding unsigned comment added by 143.107.45.1 (talk) 13:56, 5 April 2018 (UTC)

Problem with second-order sufficient conditions?

I have the feeling that the sufficient conditions should involve a test of the positivity of the Hessian in more than just one direction. That is, until now, the text states a test

 

with   being orthogonal to the gradients of the constraint functions. But shouldn't   stem from a whole cone of admissible directions (in the case of inequality constraints)? It's hard to find references, but here is something in French (see section 5.3.4): https://fr.wikipedia.org/wiki/Conditions_d%27optimalité#Problèmes_d'optimisation_avec_contraintes_d'égalité_et_d'inégalité

2003:D5:9F20:9500:5C01:2FA:636C:3372 (talk) 12:32, 19 November 2021 (UTC)
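For reference, this matches the standard statement: in references such as Nocedal and Wright's "Numerical Optimization", the second-order sufficient condition tests the Hessian of the Lagrangian over the whole critical cone, not a single direction. Roughly, with notation adapted (μ the inequality multipliers, λ the equality multipliers):

```latex
w^{\top}\,\nabla^{2}_{xx} L(x^{\ast},\mu^{\ast},\lambda^{\ast})\,w > 0
\quad \text{for all } w \in C(x^{\ast})\setminus\{0\},
\qquad\text{where}\qquad
C(x^{\ast}) = \Bigl\{\, w \;:\;
\nabla h_{j}(x^{\ast})^{\top} w = 0 \ \forall j,\;\;
\nabla g_{i}(x^{\ast})^{\top} w = 0 \ \text{if } \mu_{i}^{\ast} > 0,\;\;
\nabla g_{i}(x^{\ast})^{\top} w \le 0 \ \text{if } g_{i}(x^{\ast}) = 0,\ \mu_{i}^{\ast} = 0
\,\Bigr\}.
```

Restricting w to directions orthogonal to all active constraint gradients, as the article's current text does, tests only a subspace of this cone, consistent with the concern raised above.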

Grammar and layout

The current text includes:

"The solution   found in the above section is a constrained local minimum if for the Lagrangian,

 

then,

 "

My strong impression is that, grammatically, the word "then" is superfluous, and it confuses (me). Also, the text reads as if the first formula is merely an apposition to, or an explanation of, "the Lagrangian".

Now, I understand that two formulas separated by a mere comma may make for an ugly layout. Therefore, I propose:

"The solution   found in the above section is a constrained local minimum for the Lagrangian,

 

if

 "

Any motivated objection(s)? Redav (talk) 22:32, 28 January 2022 (UTC)