Generalized Tikhonov - correction


It should be   instead of just  .

General Form


Why not mention the general form, which incorporates an arbitrary smoothing operator (e.g. a differentiation operator) instead of just the identity matrix as a special case? Ref.: Hansen, P. C.: Rank-Deficient and Discrete Ill-Posed Problems
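For reference, a minimal sketch of the general form along the lines of Hansen's book (notation assumed here, not taken from the article): with a regularization matrix L in place of the identity, the problem reads

    \min_x \left\{ \|Ax - b\|_2^2 + \lambda^2 \|Lx\|_2^2 \right\},
    \qquad x_\lambda = \left(A^\top A + \lambda^2 L^\top L\right)^{-1} A^\top b,

where L is typically a discrete approximation of a first- or second-derivative operator, and the closed form assumes the null spaces of A and L intersect only in the zero vector; L = I recovers the standard (ridge) case.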

No connection to Bias-Variance


I am not convinced that there is a link to the bias-variance dilemma, and the referenced sentence does not support this. Bias-variance is about model selection; regularisation does not address model selection but well-posedness. --mcyp (talk) 02:00, 29 June 2021 (UTC)
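For concreteness, a sketch of the standard bias and variance expressions for the ridge estimator, assuming the usual linear model y = X\beta + \varepsilon with \operatorname{E}[\varepsilon] = 0 and \operatorname{Var}(\varepsilon) = \sigma^2 I (these are textbook results; whether they amount to a "link" is the question raised above):

    \hat\beta_\lambda = (X^\top X + \lambda I)^{-1} X^\top y,
    \qquad \operatorname{E}[\hat\beta_\lambda] - \beta = -\lambda (X^\top X + \lambda I)^{-1} \beta,
    \qquad \operatorname{Var}(\hat\beta_\lambda) = \sigma^2 (X^\top X + \lambda I)^{-1} X^\top X \, (X^\top X + \lambda I)^{-1}.

So \lambda > 0 introduces bias while shrinking the variance relative to OLS; that is the sense in which the bias-variance tradeoff is usually invoked for ridge regression.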

Weird reformulation of definition


So the ridge estimator is defined as the ordinary least squares one, with the difference that one adds a multiple of the identity matrix to the moment matrix. There is one additional parameter, namely lambda. Now, in the "reformulation", where lambda is understood as a Lagrange multiplier, there is an additional parameter c which is nowhere chosen. This does not make sense: this minimum over beta is a function of lambda and c. I think what you mean is just the norm of beta, that is, c=0. --Limes12 (talk) 15:08, 8 August 2020 (UTC)
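A sketch of the usual correspondence between the two formulations (a standard Lagrangian argument, assuming the constrained problem below; not quoted from the article):

    \min_\beta \|y - X\beta\|_2^2 \quad \text{subject to} \quad \|\beta\|_2^2 \le c,
    \qquad \mathcal{L}(\beta, \lambda) = \|y - X\beta\|_2^2 + \lambda \left(\|\beta\|_2^2 - c\right).

For fixed \lambda \ge 0 the term -\lambda c is constant in \beta, so minimizing over \beta gives \hat\beta_\lambda = (X^\top X + \lambda I)^{-1} X^\top y regardless of c. Hence c is not a free second parameter: when the constraint binds, complementary slackness forces \|\hat\beta_\lambda\|_2^2 = c, so each c determines a \lambda and vice versa.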

Proposed merge of Ridge regression into Tikhonov regularization

The following discussion is closed. Please do not modify it. Subsequent comments should be made in a new section. A summary of the conclusions reached follows.
To merge in the other direction, given that these are heavily overlapping, but that Ridge regression is the more commonly used name. Klbrain (talk) 12:06, 17 October 2022 (UTC)

Looks like an IP added some extra content that could be integrated here, though merging will probably require expertise. ~ Ase1estecharge-paritytime 10:16, 10 March 2021 (UTC)

The content here doesn't seem to add much beyond what is already on the Tikhonov page. Everything in the "Mathematical details" section is covered in the Tikhonov Regularisation section (after History). The only thing that strikes me as information that should be merged/added to the Tikhonov page is the variance of the estimator, and I think that is something that should be expanded on and linked to the bias/variance tradeoff more generally. Alan O'Callaghan (talk) 11:44, 3 May 2021 (UTC)

  • Support - I think this is a good idea, although I think it will be hard to write the lead with respect to prominence in reliable sources. The article currently has a useful footnote: "In statistics, the method is known as ridge regression, in machine learning it is known as weight decay, and with multiple independent discoveries, it is also variously known as the Tikhonov–Miller method, the Phillips–Twomey method, the constrained linear inversion method, L2 regularization, and the method of linear regularization". In fact, even this footnote buries the lead: this method is much better known as ridge regression and L2 regularization. Despite being a stub, the Ridge Regression article gets more pageviews than this one! This needs attention from someone with knowledge of the relevant science history, since reliable sources will typically talk about "L2 regularization" or "Ridge estimators/regression", and so who, if anyone, should be reflected in the title (Tikhonov vs Miller vs etc.) is an open question to me. Suriname0 (talk) 18:31, 30 October 2021 (UTC)
    @Suriname0 and Alan O'Callaghan: I think there is an argument to merge the articles and rename the resulting article as Ridge regression. At least for me, ridge regression is the name I recognize. ChromeGames923 (talk · contribs) 03:24, 8 November 2021 (UTC)
The discussion above is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.