
I edited the page for math content (although I'm no expert on latex, either), added a section on inference and a numerical example. I don't know if the numerical example is helpful. EconProf86 22:04, 31 July 2007 (UTC)

Regression articles discussion July 2009


A discussion of content overlap of some regression-related articles has been started at Talk:Linear least squares#Merger proposal, but it isn't really just a question of merging and no actual merge proposal has been made. Melcombe (talk) 11:45, 14 July 2009 (UTC)

Linear regression assumptions


Perhaps there should be a straightforward section on assumptions, and how to check them, like this (a code sketch of these checks appears after this comment):

   * 1) There is a linear relationship between the independent and dependent variable(s).
    How to check: Make an XY scatter plot, then look for data grouping along a line, instead of along a curve.
   * 2) The data are homoskedastic, meaning that the errors do not tend to get bigger (or smaller), as a trend, as the independent variables change.
    How to check: Make a residual plot and see if it is symmetric, or make an XY scatter plot and see whether the points tend to spread as they progress toward the left or toward the right. If the scatter plot points look like they get farther apart as they go from left to right (or vice versa), then the data are not homoskedastic.
   * 3) The data are normally distributed, meaning the following three conditions hold:
   * a) Unimodal:
    How to check: Make a histogram of the data, then look for only one major peak, instead of many.
   * b) Symmetric, or unskewed, data distribution:
    How to check skewness: Make that same histogram, then compare the left and right tails - do they look to be the same size? Or is the graph 'leaning' one way or another?
   * c) Excess kurtosis is approximately zero:
    How to check: Make that same histogram, then compare its peakedness to a normal distribution. Is it 'peakier', or less 'peaky'? Are the data points more clustered than a normal distribution?

Briancady413 (talk)
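
For reference, a minimal sketch of how these visual checks might be scripted, using Python with numpy and matplotlib; the data arrays here are made up for illustration:

 import numpy as np
 import matplotlib.pyplot as plt
 
 # made-up sample data; substitute the data being checked
 x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
 y = np.array([2.1, 3.9, 6.2, 8.1, 9.8, 12.2])
 
 # 1) linearity: points should group along a line, not a curve
 plt.scatter(x, y); plt.title("XY scatter (linearity)"); plt.show()
 
 # 2) homoskedasticity: residuals should not fan out left to right
 b, a = np.polyfit(x, y, 1)  # slope, intercept
 res = y - (a + b * x)
 plt.scatter(x, res); plt.axhline(0); plt.title("Residuals (homoskedasticity)"); plt.show()
 
 # 3) normality: one peak, symmetric tails, normal-like peakedness
 plt.hist(res, bins="auto"); plt.title("Histogram of residuals (normality)"); plt.show()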

1) is false. That's not what 'linear' means. The linearity refers to a linear relationship between y and the PARAMETERS, i.e. the alpha and the betas. A quadratic relationship between y and x is still (a little paradoxically) a 'linear regression'. In mathematics an equation is said to be linear or nonlinear if it's linear or nonlinear in the UNKNOWNS, and in the regression setting it's the alpha and betas that are unknown.

Blaise (talk) 14:00, 13 September 2013 (UTC)

No, Briancady413 is correct about #1. While "linear" may mean what Blaise is saying in some contexts, in simple linear regression, the "linearity assumption" is referring to Y being linearly related to X. However, note that a plot of residuals vs. X values or residuals vs. Y-hat (predicted) values is often suggested for checking that assumption. (But in SLR, using Y vs. X is fine.) Likewise, the residuals are usually used to check all the rest of the assumptions, as well. - dcljr (talk) 07:58, 27 October 2015 (UTC)
Broken link

The link Correlation#Sample_correlation in the first section is broken (there's no such anchor in the page).

Where should this point to? Correlation#Pearson.27s_product-moment_coefficient ?

Nuno J. Silva (talk) 18:21, 21 June 2010 (UTC)

carets


Can anyone more mathematically literate tell me if there's a reason the α and β subscripts of standard error s have carets on top of them in the section "normality assumption", but not in the numerical example? - Kyle at 6:35pm CST, 4 April 2011 —Preceding unsigned comment added by 160.94.178.67 (talk) 23:38, 5 April 2011 (UTC)

Belated thanks for pointing this out. I've put the carets into the numerical example. Loraof (talk) 15:04, 27 July 2016 (UTC)

Numerical Example


There is an unfortunate basic error in the data. If plotted, an odd cadence in the x-positions can be noticed. This is because the original data were in inches, and the heights have been converted to metres, with rounding to the nearest centimetre. This is wrong, and has visible effects. Also, the line fit parameters change.

 Slope       Const.
 61.2722     -39.062  wrongly rounding to the nearest centimetre.
 61.6746     -39.7468 conversion without rounding.

And naturally, all the confidence intervals, etc. change as well. The fix is simple enough. Replace the original x by round(x/0.0254)*0.0254. I had reported this problem in 2008, but in the multiple reorganisations of the linear regression article, this was lost. There is not much point in discussing fancy calculations if the data are corrupt.
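
In code, the proposed fix might look like this (a sketch, assuming numpy and the article's 15-point height/weight data):

 import numpy as np
 
 # heights (m) as printed in the article, rounded to the nearest cm
 x = np.array([1.47, 1.50, 1.52, 1.55, 1.57, 1.60, 1.63, 1.65,
               1.68, 1.70, 1.73, 1.75, 1.78, 1.80, 1.83])
 # weights (kg)
 y = np.array([52.21, 53.12, 54.48, 55.84, 57.20, 58.57, 59.93, 61.29,
               63.11, 64.47, 66.28, 68.10, 69.92, 72.19, 74.46])
 
 # snap each height back to an exact multiple of one inch (0.0254 m)
 x_fixed = np.round(x / 0.0254) * 0.0254
 
 print(np.polyfit(x, y, 1))        # about [61.2722, -39.062]  (rounded data)
 print(np.polyfit(x_fixed, y, 1))  # about [61.6746, -39.7468] (un-rounded data)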

Later, it occurred to me to consider whether the weights might have been given in pounds. The results were odd in another way. Using the US conversion factor 1 kg = 2.20462234 lb, the weights are

115.1033 117.1095 120.1078 123.1061 126.1044 129.1247 132.123 135.1213 139.1337 142.132 146.1224 150.1348 154.1472 159.1517 164.1562
114.862  116.864  119.856  122.848  125.84   128.854  131.846 134.838  138.842  141.834 145.816  149.82   153.824  158.818  163.812

The second row is for the approximate conversion 1 kg = 2.2 lb. I am puzzled by the fractional parts. NickyMcLean (talk) 22:43, 5 September 2011 (UTC)

Total least squares link

The link goes to Deming regression instead, not the total least squares page. LegendCJS (talk) 16:48, 28 September 2011 (UTC)

By using calculus


It is not obvious to me how to solve that equation to derive the forms below it in the "Fitting the regression line" section. The text says "By using either calculus, the geometry of inner product spaces or simply expanding to get a quadratic in α and β, it can be shown that the values of α and β that minimize the objective function Q are..."

How do I use these methods to show that? I'd like to see this expanded. Spell out the steps in the derivation more explicitly. Perhaps I could figure out how to do it using the "geometry of inner product spaces" method if I read the linked article. To solve with calculus, I would differentiate by α and β and find where the derivative was equal to zero and the second derivative was positive. I... forgot how to do this with two variables, and I especially don't know how to do this with that summation.

Ideally, I'd like to see a link to an article that describes these methods (like the "geometry of inner product spaces" method), and also at least some of the intermediate steps in the derivation. Gnebulon (talk) 02:57, 9 November 2011 (UTC)
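
(A symbolic-algebra sketch, not from the article: sympy can carry out exactly this calculus, differentiating Q with respect to α and β and solving where both partial derivatives are zero.)

 import sympy as sp
 
 n = 3  # a small symbolic sample; the pattern is the same for any n
 xs = sp.symbols("x0:3")
 ys = sp.symbols("y0:3")
 alpha, beta = sp.symbols("alpha beta")
 
 Q = sum((ys[i] - alpha - beta * xs[i]) ** 2 for i in range(n))
 
 # both partial derivatives set to zero give the normal equations
 sol = sp.solve([sp.diff(Q, alpha), sp.diff(Q, beta)], [alpha, beta])
 print(sp.simplify(sol[beta]))  # should match (n*sum(x*y) - sum(x)*sum(y)) / (n*sum(x**2) - sum(x)**2)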

It's a matter of ploughing ahead. The general idea begins with minimising

<math>E = \sum_{i=1}^N \bigl(y_i - f(x_i)\bigr)^2,</math> so the plan is to minimise E by choice of <math>a, b</math> in <math>f(x) = a + bx.</math>

The first step is to expand the contents of the summation, thus <math>(y_i - a - bx_i)^2.</math> There are endless variations on this, with <math>\alpha, \beta</math>, a,b, (or vice-versa) and m,c.

Then further expansion, <math>E = \sum y_i^2 - 2a\sum y_i - 2b\sum x_iy_i + Na^2 + 2ab\sum x_i + b^2\sum x_i^2.</math>

Now apply the rules of the calculus, with a separate differentiation for each of the parameters. As usual, the extremum is to be found where the slope has the value zero. I prefer to avoid the horizontal bar in dy/dx as it is not a normal sort of fraction, so you shouldn't cancel the ds for example. But that's just me.

Anyway, <math>\partial E/\partial a = -2\sum y_i + 2Na + 2b\sum x_i = 0.</math>

The twos can be factored out, so <math>\sum y_i = Na + b\sum x_i.</math>

Remembering that <math>\sum y_i = N\bar y</math> and <math>\sum x_i = N\bar x,</math> a rearrangement gives

<math>N\bar y = Na + bN\bar x,</math> or equivalently, <math>a = \bar y - b\bar x.</math>

Which is to say that the line (of whatever slope) goes through the average point <math>(\bar x, \bar y)</math> because <math>\bar y = a + b\bar x.</math>

Notice that the second differential is constant, <math>\partial^2 E/\partial a^2 = 2N</math>: a positive number. So this extremum is a minimum.

Minimising with respect to <math>a</math> is the first half. The second is to minimise with respect to <math>b,</math> and now it becomes clear why collecting the terms for E would have been a waste of effort.

<math>\partial E/\partial b = -2\sum x_iy_i + 2a\sum x_i + 2b\sum x_i^2 = 0.</math>

As before, the twos can be factored out, so <math>\sum x_iy_i = a\sum x_i + b\sum x_i^2.</math>

The second differential for this is <math>2\sum x_i^2,</math> which must be positive, and so this extremum is also a minimum.

Remembering that <math>\sum x_i = N\bar x,</math>

<math>\sum x_iy_i = aN\bar x + b\sum x_i^2.</math>

Remembering that <math>a = \bar y - b\bar x</math> and substituting for <math>a,</math>

<math>\sum x_iy_i = (\bar y - b\bar x)N\bar x + b\sum x_i^2.</math>

Multiplying out and re-arranging,

<math>\sum x_iy_i - N\bar x\bar y = b\left(\sum x_i^2 - N\bar x^2\right),</math>
<math>b = \frac{\sum x_iy_i - N\bar x\bar y}{\sum x_i^2 - N\bar x^2}.</math>

Multiplying top and bottom by N renders this less typographically formidable. Other variations are possible via the use of <math>\sum x_i = N\bar x</math> and <math>\sum y_i = N\bar y</math> as appropriate.

<math>b = \frac{N\sum x_iy_i - \sum x_i\sum y_i}{N\sum x_i^2 - \left(\sum x_i\right)^2}.</math>

NickyMcLean (talk) 04:54, 22 September 2012 (UTC)
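
A quick numeric sanity check of the final formula against numpy's own fit (a sketch; the data are made up):

 import numpy as np
 
 rng = np.random.default_rng(0)
 x = rng.normal(size=50)
 y = 2.5 * x + 1.0 + rng.normal(size=50)
 N = len(x)
 
 # slope and intercept from the derivation above
 b = (N * np.sum(x * y) - np.sum(x) * np.sum(y)) / (N * np.sum(x ** 2) - np.sum(x) ** 2)
 a = np.mean(y) - b * np.mean(x)
 
 b_ref, a_ref = np.polyfit(x, y, 1)
 assert np.allclose([b, a], [b_ref, a_ref])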

Fitting the regression line


In ==Fitting the regression line==, shouldn't the expression immediately before the one containing Cov & Var have 1/n as the multiplier before the - x.y and before the - x^2 terms? This is the form of the expression often used in computing to generate a straight-line fit to a set of "bumpy" data. As expressed here it does not work, and moreover does not follow mathematically from the preceding expression! However, with the 1/n terms in place it appears to produce the correct result. Is 1/n unique to these two terms ONLY, and therefore does NOT cancel? But then I am a Physicist and not a Mathematician, so I may have missed something. Chris B. 6:45pm PST on 24th. Mar. 2013. — Preceding unsigned comment added by 72.67.31.241 (talk) 00:48, 25 March 2013 (UTC)

Have a look at the section above, "By using calculus", which steps through a derivation and also mentions multiplying top and bottom by N. Incidentally, I also studied Physics. NickyMcLean (talk) 02:24, 25 March 2013 (UTC)
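
The cancellation is easy to confirm numerically (a sketch; numpy assumed): with the 1/n factors written consistently in both numerator and denominator, they divide out of the slope.

 import numpy as np
 
 x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
 y = np.array([2.0, 4.1, 5.9, 8.2, 9.8])
 n = len(x)
 xbar, ybar = x.mean(), y.mean()
 
 with_1_over_n = (np.sum(x * y) / n - xbar * ybar) / (np.sum(x ** 2) / n - xbar ** 2)
 scaled_by_n   = (np.sum(x * y) - n * xbar * ybar) / (np.sum(x ** 2) - n * xbar ** 2)
 assert np.isclose(with_1_over_n, scaled_by_n)  # identical: 1/n cancels top and bottom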

Notation can lead to mistakes


The expression <math>\hat\beta = \frac{\operatorname{Cov}(x,y)}{\operatorname{Var}(x)}</math> can lead to mistakes if you use the sample variance instead of the variance. Since every spreadsheet gives you the sample variance, it is likely that people will use this formula incorrectly (as one of my students just did in one assignment). It would be better to stress that Var is not the sample variance. — Preceding unsigned comment added by 25pietro (talkcontribs) 07:11, 13 June 2014 (UTC)

Having just chased this very question around for far too long, I have to concur, and have edited the page accordingly.
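
A sketch of the pitfall in numpy terms (spreadsheet VAR/COVAR functions behave analogously): the ratio gives the right slope only when covariance and variance use the same convention.

 import numpy as np
 
 x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
 y = np.array([2.0, 4.1, 5.9, 8.2, 9.8])
 slope = np.polyfit(x, y, 1)[0]
 
 # consistent conventions: both population (ddof=0) or both sample (ddof=1)
 ok1 = np.cov(x, y, ddof=0)[0, 1] / np.var(x, ddof=0)
 ok2 = np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)
 
 # mixed conventions: off by a factor of n/(n-1)
 bad = np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=0)
 
 print(np.isclose(ok1, slope), np.isclose(ok2, slope))  # True True
 print(np.isclose(bad, slope))                          # False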

beta hat


I was confused by the formula for <math>\hat\beta</math>; I wonder if the second one should have more parentheses around the sums, like <math>\hat\beta = \frac{\sum x_iy_i - \frac{1}{n}\left(\sum x_i\right)\left(\sum y_i\right)}{\sum x_i^2 - \frac{1}{n}\left(\sum x_i\right)^2}</math> (as product has precedence over summation) - but I am no mathematician nor a native English speaker, so perhaps I do not know the conventions. Can someone look at it and possibly fix it? I know it is logical to at least assume the parentheses, but this is introductory and should be as precise as possible. Drabek (talk) 19:08, 21 March 2014 (UTC)

why is R^2 = r^2 ?


It would be nice to include a section proving why R^2 = r^2 in this case of Simple_linear_regression.

Wikipedia page on R^2 only says:

Similarly, in linear least squares regression with an estimated intercept term, R2 equals the square of the Pearson correlation coefficient between the observed and modeled (predicted) data values of the dependent variable.

i.e., R^2 = r(Y, Yhat)^2, which is proved in the Wikipedia page on Pearson_product-moment_correlation_coefficient (Section 5)

But is this the same as saying R^2 = r(Y, X)^2 ?? — Preceding unsigned comment added by 121.244.182.67 (talk) 11:21, 29 December 2014 (UTC)

Yes. Pearson's correlation doesn't change if you replace X by a linear function of X, or Y by a linear function of Y, or both. Because Yhat in simple linear regression is a linear function of X (Yhat = b0 + b1 X), it follows that corr(Y, Yhat) = corr(Y, X). [Note, in case anyone is wondering, that b0 and b1 are considered constants here—even though they are calculated from the X and Y values in our sample, and thus change from sample to sample—because they're fixed values for any given sample (in other words, b0 and b1 don't vary with respect to X and Y, they are completely determined by X and Y). But perhaps no one was wondering about that…] - dcljr (talk) 07:41, 27 October 2015 (UTC)
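
Both identities are easy to verify numerically (a sketch; numpy assumed, data made up):

 import numpy as np
 
 rng = np.random.default_rng(1)
 x = rng.normal(size=100)
 y = 3.0 * x + rng.normal(size=100)
 
 b, a = np.polyfit(x, y, 1)
 yhat = a + b * x
 
 r2_xy   = np.corrcoef(x, y)[0, 1] ** 2     # r(X, Y)^2
 r2_yhat = np.corrcoef(y, yhat)[0, 1] ** 2  # r(Y, Yhat)^2
 R2 = 1 - np.sum((y - yhat) ** 2) / np.sum((y - y.mean()) ** 2)
 
 assert np.allclose([r2_xy, r2_yhat], R2)   # all three agree in SLR with intercept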

Scriptstyle


An anon user has been trying to change the symbols <math>\hat\alpha</math> and <math>\hat\beta</math> — and certain others in the article — to a smaller "scriptstyle" size: <math>\scriptstyle\hat\alpha</math> and <math>\scriptstyle\hat\beta</math>. I have been reverting them to the default size provided by the <math> element. I see no mathematical justification for using a smaller size. In the absence of any mathematical justification, the default size should be used. Other opinions? - dcljr (talk) 04:57, 1 January 2016 (UTC)

Assuming the anon user comes here to discuss this, as I have requested they do, I suggest first giving us some examples of other math articles that use the "scriptstyle" for regular symbols like these. Apparently it is common enough that they were under the impression that it was "the default". I know of no such examples. Perhaps I haven't been paying attention? - dcljr (talk) 05:28, 1 January 2016 (UTC)
I have encountered (and removed) these \scriptstyle commands in several articles. I guessed that this was an error due to some automatic conversion from some math editor into latex. It may also be caused by the fact that sometimes the PNG rendering of inline latex was much larger than the current text. In any case this use of \scriptstyle is erroneous, as latex is designed for good math rendering, and errors of rendering programs must be corrected in the rendering programs, not by patches in latex that may work well in some rendering modes, and may be awful in others. D.Lazard (talk) 10:47, 1 January 2016 (UTC)
Well, since the anon user apparently doesn't feel the need to justify their changes, I'll do it for them: the only reason given was that "the inline math is much larger than the text around it" and that using scriptstyle made it "look absolutely perfect" in their browser. Never mind that everyone uses different font settings (not to mention different rendering options, as alluded to by D.Lazard above)… Given this fact, my position is that the default size (meaning no special style options) should be used unless there is some specific commonly-accepted justification to use a special style in certain cases (e.g., using "textstyle" in inline summation or integration formulas, as described at Wikipedia:Manual of Style/Mathematics#Using LaTeX markup). - dcljr (talk) 04:41, 2 January 2016 (UTC)

Title Change


The introductory section of the article states that simple linear regression is a form of linear regression where (a) there is only one explanatory variable and (b) the OLS method is used. This is also the definition that is used in the rest of the article.

However, most other sources seem to only use (a) for the definition of simple linear regression. All of the top Google results for "simple linear regression -wikipedia", like Penn State, CMU, Columbia, this random website, this other website, and others use this definition. This was also orally confirmed to me by a statistics professor.

Accordingly, I think the article should either be renamed (to Simple linear regression using Ordinary Least Squares?) or clarified to make it clear that condition (b) is not part of the standard definition of simple linear regression. --MattF (talk) 17:00, 17 October 2016 (UTC)

I clarified the introduction. It would probably be relevant to include a section about non-OLS simple linear regression in the article as well. --MattF (talk) 02:53, 18 October 2016 (UTC)
I think your change may be a little hasty; it's good to discuss significant changes on the Talk page before altering the article, but if you do so it's probably wise to allow more than 10 hours for people to respond. As I think this may be of interest to editors who aren't necessarily watching this page, I've posted notifications of this discussion at the Talk pages of the two WikiProjects given at the top of this Talk page: see Wikipedia talk:WikiProject Statistics#Does 'simple linear regression' imply OLS? and Wikipedia talk:WikiProject Mathematics#Does 'simple linear regression' imply OLS?
I agree that SLR means (a) only. That's how it's used at Theil–Sen estimator, for instance. We have a separate ordinary least squares article already, so the one under the present name should be an overview of different techniques for (a), not focusing only on OLS. —David Eppstein (talk) 16:00, 18 October 2016 (UTC)
I'm sorry if the change was too quick; I decided to make it because I had a fair number of supporting sources and because I felt the article was potentially misleading to readers. I will wait for more feedback before making any other modifications, and revert if necessary. --MattF (talk) 03:09, 19 October 2016 (UTC)

Unintelligible if not already understood


Like many maths articles, this one will only make sense to someone who already understands it.

The leap from Q(a, b) has already been discussed above. But how about the more basic one of

f = a^ + b^ * x

which actually seems to boil down to simply f = y.

The goal of maths articles should not be to be as concise and obtuse as possible, but rather to be intelligible to the intelligent but non-expert reader. Tuntable (talk) 00:19, 20 September 2018 (UTC)

Broken mathematical symbols in section headings?


The use of Beta_hat and other terms in the section headings is problematic – currently it reads "1.1 Intuitive explanation ?'"`UNIQ--postMath-00000020-QINU`"'?" and so forth.

I haven't modified it yet, because I'm not sure whether there is a way to use HTML text to put those symbols in, which would be easiest. Otherwise, the sections just need to be retitled. — Preceding unsigned comment added by Ziddletwix (talkcontribs) 17:33, 13 November 2021 (UTC)

I retitled the sections for now, after wrestling with the latex for a while. It seems to work in the body of the text but breaks only in the table of contents. --Fuzzywolf82

Needs reorganized?


The information in this article isn't bad or wrong, but it does seem to lack a coherent organization. Going to put that on my ToDo list. — Preceding unsigned comment added by Fuzzywolf82 (talkcontribs) 05:57, 15 November 2021 (UTC)

Formula in numerical example has no antecedent


The formula in the numerical example has no antecedent. I recognize the formula based on sums, but it should appear earlier in the article; the example should be reusing equations that have already been presented.

Arghman (talk) 03:07, 8 February 2022 (UTC)

Proposed merge of Variance of the mean and predicted responses into Simple linear regression


Variance is a fundamental statistic of any estimator; it'd fit well just before section Simple linear regression#Confidence intervals. fgnievinski (talk) 04:13, 30 November 2023 (UTC)

    Merger complete. Klbrain (talk) 08:11, 9 September 2024 (UTC)