Robust statistics in practice

I think this article could be improved greatly if it provided some information about how robust statistics can be used in practice. I think the real goal of this field is to use it to improve estimates. For example, show how you would use robust statistics to improve the linear regression of a sequence of measurements with a single outlier (as the problem is demonstrated in the regression analysis article). This article should demonstrate a solution!
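For illustration, a minimal sketch of the kind of demonstration being requested, assuming statsmodels is available; the synthetic data, the single gross outlier, and the choice of a Huber M-estimator are illustrative, not taken from the article:

```python
# Compare classical least squares with a robust (Huber M-estimator) fit
# on a synthetic line y = 1 + 2x contaminated by one gross outlier.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 20)
y = 1 + 2 * x + rng.normal(scale=0.5, size=x.size)
y[10] += 50  # a single gross outlier

X = sm.add_constant(x)
ols = sm.OLS(y, X).fit()                               # classical least squares
rlm = sm.RLM(y, X, M=sm.robust.norms.HuberT()).fit()   # Huber M-estimator

print("OLS intercept/slope:   ", ols.params)   # pulled toward the outlier
print("Robust intercept/slope:", rlm.params)   # close to the true (1, 2)
```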

Breakdown estimator

The article has a discrepancy regarding the breakdown point: it states it is 1/N in one section, and 0 in another. Which is it? — Preceding unsigned comment added by Skecr8r (talk · contribs) 09:41, 29 May 2020 (UTC)

I have clarified the finite-sample versus asymptotic breakdown point. Nick Mulgan (talk) 05:45, 25 September 2021 (UTC)
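For reference, a minimal sketch of the distinction being discussed, using the sample mean as the standard example (the data here are purely illustrative): corrupting a single observation can move the mean arbitrarily far, so its finite-sample breakdown point is 1/n, which tends to 0 asymptotically.

```python
# One corrupted observation out of n can drive the sample mean arbitrarily far,
# so the mean's finite-sample breakdown point is 1/n; as n grows, 1/n -> 0,
# which is the asymptotic breakdown point.
import numpy as np

n = 20
data = np.arange(1, n + 1, dtype=float)

for bad in (1e3, 1e6, 1e9):
    corrupted = data.copy()
    corrupted[0] = bad         # replace one of the n observations
    print(np.mean(corrupted))  # grows without bound as the corruption grows

print(1 / n)                   # finite-sample breakdown point of the mean
```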

Book review

This part doesn't seem wikipediaish:

Good books on robust statistics include those by Huber (1981), Hampel et al (1986) and Rousseeuw and Leroy (1987). A modern treatment is given by Maronna et al (2006). Huber's book is quite theoretical, whereas the book by Rousseeuw and Leroy is very practical (although the sections discussing software are rather out of date, the bulk of the book is still very relevant). Hampel et al (1986) and Maronna et al (2006) fall somewhere in the middle ground. All four of these are recommended reading, though Maronna et al is the most up to date. —Preceding unsigned comment added by 192.38.121.18 (talk) 09:06, 15 April 2009 (UTC)

"classical statistical methods"

"classical statistical methods" is not defined. does this refer to parametric statistics? if so, a link in the summary would be appropriate. —Preceding unsigned comment added by Landroni (talkcontribs) 21:22, 21 May 2009 (UTC)Reply

Info-Gap decision theory

It should be pointed out that info-gap's robustness model is a simple instance of Wald's famous Maximin model. The text should be modified accordingly. Sniedo (talk) 10:12, 30 June 2009 (UTC)

Empirical influence function

The section is just absurdly technical for a general reference work. —Preceding unsigned comment added by 150.203.23.163 (talk) 05:42, 15 June 2010 (UTC)

I can only agree, so I've added a {{technical|section=…}} tag to it. --Qwfp (talk) 09:15, 15 June 2010 (UTC)
Agreed. Measure theory is not necessary here. Nick Mulgan (talk) 06:06, 25 September 2021 (UTC)
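To make the same point concretely: a finite-sample stand-in for the influence function (the sensitivity curve) needs no measure theory at all, only re-evaluating the estimator. A minimal sketch; the sample and the choice of mean versus median are illustrative.

```python
# Sensitivity curve: how much does adding one observation at position x
# move the estimate? (A finite-sample analogue of the influence function.)
import numpy as np

def sensitivity_curve(estimator, sample, x):
    n = len(sample) + 1
    return n * (estimator(np.append(sample, x)) - estimator(sample))

sample = np.array([2.0, 3.0, 5.0, 6.0, 9.0])
for x in (-1000, 0, 10, 1000):
    print(x,
          sensitivity_curve(np.mean, sample, x),    # grows linearly in x: not robust
          sensitivity_curve(np.median, sample, x))  # stays bounded: robust
```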

Can't discern meaning

The introduction has the sentence "Unfortunately, when there are outliers in the data, classical methods often have very poor performance, like standard Kalman filters, which are not robust to them". I can't work out what the emboldened part means, in particular what "them" is. Could someone knowledgeable either rewrite that sentence or (probably better) put it into two sentences, please? :-) 78.147.61.105 (talk) 16:56, 10 April 2012 (UTC)

I think the "them" was the outliers. I have rewritten the lead and intro sections to try to clarify things, including the "classical methods" complained of above. Melcombe (talk) 22:24, 16 April 2012 (UTC)Reply

M-estimators

In my opinion, most of this section, beyond the robustness context, could usefully be combined with the M-estimator page.

Nick Mulgan (talk) 04:56, 25 September 2021 (UTC)

RANSAC

My best practical tool in robust statistics is the RANSAC algorithm. It is used heavily in computer vision, where the data or measurements are mostly of two types: (a) conforming to the model, and (b) not conforming to the model (outliers or noise). The data conforming to the model are often affected only by smaller Gaussian (normally distributed) errors. I see this all the time in my computer vision applications, where the interpretation of some of the phenomena in the image stream often needs to be split into the two types mentioned. The main drawback is that it is a slow algorithm, so use it only on vital data (in real-time applications). — Preceding unsigned comment added by 2001:4643:E6E3:0:5530:EF4D:105D:7523 (talk) 09:30, 1 May 2018 (UTC)

One of the hardest things about robust stats is the way usage has developed independently in many fields. Nick Mulgan (talk) 01:30, 26 September 2021 (UTC)
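For readers landing here, a minimal sketch of the consensus idea described in the comment above, applied to fitting a line; the inlier threshold, iteration count and synthetic data are illustrative choices, not from any particular implementation.

```python
# Minimal RANSAC sketch for fitting a line y = a*x + b to data with gross outliers.
import numpy as np

def ransac_line(x, y, n_iters=200, inlier_tol=1.0, seed=None):
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(x.size, dtype=bool)
    for _ in range(n_iters):
        i, j = rng.choice(x.size, size=2, replace=False)  # minimal sample: 2 points
        if x[i] == x[j]:
            continue
        a = (y[j] - y[i]) / (x[j] - x[i])                 # candidate slope
        b = y[i] - a * x[i]                               # candidate intercept
        inliers = np.abs(y - (a * x + b)) < inlier_tol    # consensus set
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # refit on the largest consensus set with ordinary least squares
    return np.polyfit(x[best_inliers], y[best_inliers], 1), best_inliers

rng = np.random.default_rng(1)
x = np.linspace(0, 10, 50)
y = 2 * x + 1 + rng.normal(scale=0.3, size=x.size)
y[::10] += rng.uniform(20, 40, size=y[::10].size)         # gross outliers
coeffs, inliers = ransac_line(x, y)
print(coeffs)  # close to (2, 1) despite the outliers
```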

Change slightly?

I am confused by "The median is a robust measure of central tendency. Taking the same dataset {2,3,5,6,9}, if we add another datapoint with value -1000 or +1000 then the median will change slightly". I think it would not change at all? — Preceding unsigned comment added by 60.168.149.14 (talk) 08:45, 25 February 2019 (UTC)

I'd guess what was meant was this: the median of the data set {2, 3, 5, 6, 9} is 5, while the median of the data set {2, 3, 5, 6, 9, 1000} is 5.5. In contrast, the means of those data sets are respectively 5 and 170.83. Pogogreg101 (talk) 16:20, 18 November 2023 (UTC)
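A quick numerical check of those figures, using the data set quoted above:

```python
# The median moves from 5 to 5.5 when 1000 is appended,
# while the mean jumps from 5 to about 170.83.
import numpy as np

base = np.array([2, 3, 5, 6, 9])
with_outlier = np.append(base, 1000)

print(np.median(base), np.median(with_outlier))  # 5.0  5.5
print(np.mean(base), np.mean(with_outlier))      # 5.0  170.8333...
```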

Change speed-of-light example to something else

The use of speed-of-light data as an example is rather unfortunate these days. The speed of light was declared an exact constant in 1983, so proper interpretation of such measurements requires substantial physical finesse. See https://en.wikipedia.org/wiki/Speed_of_light#Measurement

Definition of appears to be incorrect?

In the section "M-estimators", the function   is defined to equal  , but I'm pretty sure it is supposed to be  . Sources with this version include the article on M-estimators and Serfling (2009) Section 7.1.1. Also, otherwise it doesn't make sense that in the subsection Influence function of an M-estimator that the integral of the Jacobian of   is  . Pogogreg101 (talk) 16:36, 18 November 2023 (UTC)Reply