Polynomial least squares

Limited aspects of the general subject of polynomial least squares are also addressed under many other titles: polynomial regression, curve fitting, linear regression, least squares, ordinary least squares, simple linear regression, linear least squares, approximation theory and method of moments. Polynomial least squares has applications in radar trackers, estimation theory, signal processing, statistics, and econometrics.

There are two fundamental applications of polynomial least squares:

(1) To approximate a complicated function or set of observations with a simple low degree polynomial. This is commonly used in statistics and econometrics to fit a scatter plot (often called a scattergram) with a straight line in the form of a first degree polynomial. [1] [2] [3] This application is addressed under the aforementioned titles.

(2) To estimate, from observations or measurements, an assumed underlying deterministic polynomial that is corrupted with statistically described additive errors (generally called noise in engineering). This is commonly used in target tracking in the form of the Kalman filter, which is effectively a recursive implementation of polynomial least squares. [4] [5] [6] [7] Estimating an assumed underlying deterministic polynomial can be used in econometrics as well. [8] Processing noisy measurements is uniquely addressed here.

The term "estimate" is derived from statistical estimation theory and is perhaps better suited when assuming that a polynomial is corrupted with statistical measurement or observation errors. The term "approximate" is perhaps better suited when no statistical measurement or observation errors are assumed, such as when conventionally fitting a scatter plot or complicated function.

In effect, both applications produce average curves as generalizations of the common average of a set of numbers, which is equivalent to polynomial least squares with a zero degree polynomial. [1] [2] [9]
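
As a minimal illustration of this equivalence (a Python sketch with hypothetical numbers, not taken from the article), fitting a zero degree polynomial by least squares returns the ordinary sample average:

```python
import numpy as np

# Least squares fit of a degree-0 polynomial (a constant) to example data;
# the single fitted coefficient equals the ordinary sample average.
z = np.array([2.0, 3.5, 1.0, 4.5, 3.0])          # hypothetical observations
c = np.polyfit(np.arange(len(z)), z, deg=0)[0]   # zero degree least squares fit
print(c, z.mean())                               # both are 2.8
```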

Polynomial least squares estimate of a deterministic first degree polynomial corrupted with observation errors

Assume the deterministic function y with unknown coefficients 𝜶 and 𝜷 as follows:

y(t) = \alpha + \beta t

which is corrupted with an additive stochastic process ε(t), described as an error (noise in tracking) written as

z(t) = y(t) + \varepsilon(t) = \alpha + \beta t + \varepsilon(t)

Given samples z_n = z(t_n), where the subscript n is the sample index, the problem is to apply polynomial least squares to estimate y(t), and to determine its variance along with its expected value.
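
The sketches in the remainder of this article use the following Python setup; the values of 𝜶, 𝜷, the noise level, and the sample times are assumptions chosen only for illustration:

```python
import numpy as np

# Hypothetical first degree polynomial y(t) = alpha + beta*t corrupted with
# zero mean, uncorrelated errors eps_n; all numeric values are illustrative.
rng = np.random.default_rng(0)
alpha_true, beta_true, sigma_eps = 1.0, 0.5, 0.2   # unknown to the estimator
N = 25
t = np.linspace(0.0, 10.0, N)                      # sample times t_n
eps = rng.normal(0.0, sigma_eps, N)                # zero mean errors (need not be Gaussian)
z = alpha_true + beta_true * t + eps               # observed samples z_n
```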

Assumptions and definitions

(1) The error ε(t) is modeled as a zero mean stochastic process, samples ε_n of which are random variables that are uncorrelated and assumed to have identical probability distributions (specifically same mean and variance), but not necessarily Gaussian, treated as inputs to polynomial least squares. Stochastic processes and random variables are described only by probability distributions. [1] [2] [9]

(2) Polynomial least squares is modeled as a linear signal processing "system" which processes statistical inputs deterministically, the output being the linearly processed empirically determined statistical estimate, variance, and expected value. [6] [7] [8]

(3) Polynomial least squares processing produces deterministic moments (analogous to mechanical moments), which may be considered moments of sample statistics, but not statistical moments. [8]

Polynomial least squares and the orthogonality principle

Approximating a function z(t) with a polynomial

\hat z(t) = \sum_{j=0}^{J-1} \hat a_j\, t^{j}

where hat (^) denotes the estimate and (J − 1) is the polynomial degree, can be performed by applying the orthogonality principle. The sum e of the squared errors can be written as

e = \sum_{n=1}^{N} \left[ z_n - \hat z(t_n) \right]^{2}

According to the orthogonality principle, [4] [5] [6] [7] [8] [9] [10] [11] e is minimized when the error (z − ẑ) is orthogonal to the estimate ẑ, that is

\sum_{n=1}^{N} \left[ z_n - \hat z(t_n) \right] \hat z(t_n) = 0

This can be described as the orthogonal projection of the data z_n onto a solution in the form of the polynomial ẑ(t). [4] [6] [7] For N > J − 1, orthogonal projection yields the standard overdetermined system of equations (often called the normal equations) used to compute the coefficients of the polynomial approximation. [1] [10] [11] The minimum e is then

e_{\min} = \sum_{n=1}^{N} \left[ z_n - \hat z(t_n) \right] z_n

The advantage of using orthogonal projection is that e_min can be determined for use in the polynomial least squares processed statistical variance of the estimate. [8] [9] [11]
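
As a sketch of the resulting normal equations (using the t and z arrays from the setup above, and a Vandermonde matrix as the design matrix), the coefficients and the minimum e can be computed as follows:

```python
import numpy as np

# Normal equations for a degree (J-1) polynomial fit; here J = 2 (first degree).
J = 2
T = np.vander(t, J, increasing=True)        # N x J design matrix, column j holds t_n**j
coeffs = np.linalg.solve(T.T @ T, T.T @ z)  # (T^T T) a = T^T z, the normal equations
z_hat = T @ coeffs                          # fitted values at the sample times
e_min = np.sum((z - z_hat) * z)             # minimum e from the orthogonality principle
assert np.isclose(e_min, np.sum((z - z_hat) ** 2))  # equals the residual sum of squares
```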

The empirically determined polynomial least squares output of a first degree polynomial corrupted with observation errors

To fully determine the output of polynomial least squares, a weighting function describing the processing must first be structured and then the statistical moments can be computed.

The weighting function describing the linear polynomial least squares "system"

Given estimates of the coefficients 𝜶 and 𝜷 from polynomial least squares, the weighting function w(t_n, 𝜏) can be formulated to estimate the unknown y as follows: [8]

\hat y(\tau) = \hat\alpha + \hat\beta\,\tau = \sum_{n=1}^{N} w(t_n, \tau)\, z_n

where N is the number of samples, z_n are random variables as samples of the stochastic z(t) (the noisy signal), and the first degree polynomial data weights are

w(t_n, \tau) = \frac{1}{N} + \frac{(t_n - \bar t)\,(\tau - \bar t)}{\sum_{n=1}^{N} (t_n - \bar t)^{2}}

which represent the linear polynomial least squares "system" and describe its processing. [8] The Greek letter 𝜏 is the independent variable t when estimating the dependent variable y after data fitting has been performed. (The letter 𝜏 is used to avoid confusion with t before and during sampling and polynomial least squares processing.) The overbar ( ¯ ) defines the deterministic centroid of t_n as processed by polynomial least squares [8] – i.e., it defines the deterministic first order moment, which may be considered a sample average, but does not here approximate a first order statistical moment:

\bar t = \frac{1}{N}\sum_{n=1}^{N} t_n
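
A short sketch of these weights (again using the t and z arrays from the assumed setup; the evaluation point 𝜏 is arbitrary) confirms that summing w(t_n, 𝜏) z_n reproduces the directly fitted α̂ + β̂𝜏:

```python
import numpy as np

# First degree polynomial least squares weights w(t_n, tau).
def weights(t, tau):
    t_bar = t.mean()                                  # deterministic centroid of t_n
    return 1.0 / len(t) + (t - t_bar) * (tau - t_bar) / np.sum((t - t_bar) ** 2)

tau = 4.2                                             # arbitrary evaluation point
y_hat = np.sum(weights(t, tau) * z)                   # estimate from the weights
beta_hat, alpha_hat = np.polyfit(t, z, 1)             # same estimate from a direct fit
assert np.isclose(y_hat, alpha_hat + beta_hat * tau)
```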

Empirically determined statistical moments

Applying the weighting function w(t_n, 𝜏) yields

\hat y(\tau) = \sum_{n=1}^{N} w(t_n, \tau)\, z_n = \hat\alpha + \hat\beta\,\tau

where

\hat\alpha = \bar z - \hat\beta\,\bar t, \qquad \bar z = \frac{1}{N}\sum_{n=1}^{N} z_n

and

\hat\beta = \frac{\sum_{n=1}^{N} (t_n - \bar t)\, z_n}{\sum_{n=1}^{N} (t_n - \bar t)^{2}}

As linear functions of the random variables z_n, both coefficient estimates α̂ and β̂ are random variables. [8] In the absence of the errors ε_n, α̂ = 𝜶 and β̂ = 𝜷, as they should be to meet that boundary condition.
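
A quick check of this boundary condition under the assumed setup: refitting the noise-free samples recovers 𝜶 and 𝜷 exactly.

```python
import numpy as np

# Boundary condition: with the errors eps_n removed, the estimates are exact.
z_clean = alpha_true + beta_true * t                     # noise-free samples
beta_hat_clean, alpha_hat_clean = np.polyfit(t, z_clean, 1)
assert np.isclose(alpha_hat_clean, alpha_true) and np.isclose(beta_hat_clean, beta_true)
```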

Because the statistical expectation operator E[•] is a linear function and the sampled stochastic process errors ε_n are zero mean, the expected value of the estimate ŷ(𝜏) is the first order statistical moment as follows: [1] [2] [3] [8]

E[\hat y(\tau)] = \sum_{n=1}^{N} w(t_n, \tau)\, E[z_n] = \alpha + \beta\,\tau = y(\tau)

The statistical variance in ŷ(𝜏) is given by the second order statistical central moment as follows: [1] [2] [3] [8]

\sigma^{2}_{\hat y(\tau)} = E\!\left[\left(\hat y(\tau) - E[\hat y(\tau)]\right)^{2}\right] = E\!\left[\left(\sum_{n=1}^{N} w(t_n, \tau)\,\varepsilon_n\right)^{2}\right] = \sigma^{2}_{\varepsilon} \sum_{n=1}^{N} w^{2}(t_n, \tau)

because

\sum_{i=1}^{N}\sum_{n=1}^{N} w(t_i, \tau)\, w(t_n, \tau)\, E[\varepsilon_i\, \varepsilon_n] = \sigma^{2}_{\varepsilon} \sum_{n=1}^{N} w^{2}(t_n, \tau)

where σ²_ε is the statistical variance of the random variables ε_n; i.e., E[ε_i ε_n] = σ²_ε for i = n and (because the ε_n are uncorrelated) E[ε_i ε_n] = 0 for i ≠ n. [8]

Carrying out the multiplications and summations in σ²_ŷ(𝜏) yields

\sigma^{2}_{\hat y(\tau)} = \frac{\sigma^{2}_{\varepsilon}}{N}\left[\,1 + \frac{N\,(\tau - \bar t)^{2}}{\sum_{n=1}^{N} (t_n - \bar t)^{2}}\,\right] [8]
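
The two forms of this variance (the weighted sum of squares and the carried-out expression) can be checked numerically; σ²_ε is treated as known here, and the weights function comes from the earlier sketch:

```python
import numpy as np

# Estimate variance at tau from the weights and from the closed-form expression.
t_bar = t.mean()
S_tt = np.sum((t - t_bar) ** 2)
var_from_weights = sigma_eps ** 2 * np.sum(weights(t, tau) ** 2)
var_closed_form = sigma_eps ** 2 / N * (1.0 + N * (tau - t_bar) ** 2 / S_tt)
assert np.isclose(var_from_weights, var_closed_form)
```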

Measuring or approximating the statistical variance of the random errors

In a hardware system, such as a tracking radar, the measurement noise variance σ²_ε can be determined from measurements when there is no target return – i.e., by just taking measurements of the noise alone.

However, if polynomial least squares is used when the variance σ²_ε is not measurable (such as in econometrics or statistics), it can be estimated with observations in e_min from orthogonal projection as follows:

\hat\sigma^{2}_{\varepsilon} \approx \frac{e_{\min}}{N} [8]

As a result, to the first order approximation from the estimates α̂ and β̂ as functions of the sampled z_n and t_n,

\hat\sigma^{2}_{\varepsilon} \approx \frac{1}{N} \sum_{n=1}^{N} \left[ z_n - (\hat\alpha + \hat\beta\, t_n) \right]^{2}

which goes to zero in the absence of the errors ε_n, as it should to meet that boundary condition. [8]
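
A sketch of this first order approximation, using the noisy fit from the assumed setup (note that it divides by N rather than the unbiased N − 2):

```python
import numpy as np

# Approximate the unknown error variance from the residuals of the noisy fit.
beta_hat, alpha_hat = np.polyfit(t, z, 1)
residuals = z - (alpha_hat + beta_hat * t)
sigma_eps_hat_sq = np.mean(residuals ** 2)   # first order approximation of sigma_eps**2
print(sigma_eps_hat_sq, sigma_eps ** 2)      # estimate vs. the value used to generate z
```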

As a result, the samples z_n (the noisy signal) are considered to be the input to the linear polynomial least squares "system", which transforms the samples into the empirically determined statistical estimate ŷ(𝜏), its expected value E[ŷ(𝜏)], and its variance σ²_ŷ(𝜏). [8]

Properties of polynomial least squares modeled as a linear "system"

(1) The empirical statistical variance σ²_ŷ(𝜏) is a function of 𝜏, N and σ²_ε. Setting the derivative of σ²_ŷ(𝜏) with respect to 𝜏 equal to zero shows the minimum to occur at 𝜏 = t̄; i.e., at the centroid (sample average) of the samples t_n. The minimum statistical variance thus becomes σ²_ε / N. This is equivalent to the statistical variance from polynomial least squares of a zero degree polynomial – i.e., of the centroid (sample average) of z_n. [1] [2] [8] [9] (A numerical sketch illustrating this property and property (3) follows this list.)

(2) The empirical statistical variance σ²_ŷ(𝜏) is a function of the quadratic (𝜏 − t̄)². Moreover, the further 𝜏 deviates from t̄ (even within the data window), the larger is the variance σ²_ŷ(𝜏) due to the random variable errors ε_n. The independent variable 𝜏 can take any value on the t axis; it is not limited to the data window and can extend beyond it – and likely will at times, depending on the application. If 𝜏 is within the data window, estimation is described as interpolation; if it is outside the data window, estimation is described as extrapolation. It is both intuitive and well known that the further the extrapolation, the larger the error. [8]

(3) The empirical statistical variance σ²_ŷ(𝜏) due to the random variable errors ε_n is inversely proportional to N: as N increases, the statistical variance decreases. This is well known and is what filtering out the errors ε_n is all about. [1] [2] [8] [12] The underlying purpose of polynomial least squares is to filter out the errors to improve estimation accuracy by reducing the empirical statistical estimation variance. In reality, only two data points are required to estimate 𝜶 and 𝜷; however, the more data points with zero mean statistical errors that are included, the smaller is the empirical statistical estimation variance established by the N samples.

(4) There is an additional issue to be considered when the noise variance is not measurable: independent of the polynomial least squares estimation, any new observation would be described by the variance σ²_ε. [8] [9]

Thus, the polynomial least squares statistical estimation variance σ²_ŷ(𝜏) and the statistical variance σ²_ε of any new sample of z would both contribute to the uncertainty of any future observation. Both variances are clearly determined by polynomial least squares in advance.

(5) This concept also applies to higher degree polynomials. However, the weighting function w(t_n, 𝜏) is obviously more complicated. In addition, the estimation variances increase exponentially as polynomial degrees increase linearly (i.e., in unit steps), but there are ways of dealing with this as described in [6] [7].
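
The following sketch illustrates properties (1) and (3) under the same illustrative assumptions used above: the variance is smallest at 𝜏 = t̄, where it equals σ²_ε / N, and it shrinks as N grows.

```python
import numpy as np

# Estimate variance as a function of tau and N, from the carried-out expression.
def estimate_variance(t_grid, tau, sigma_eps):
    t_bar = t_grid.mean()
    S_tt = np.sum((t_grid - t_bar) ** 2)
    return sigma_eps ** 2 / len(t_grid) * (1.0 + len(t_grid) * (tau - t_bar) ** 2 / S_tt)

for n_samples in (10, 100, 1000):
    t_grid = np.linspace(0.0, 10.0, n_samples)
    v_min = estimate_variance(t_grid, t_grid.mean(), 0.2)        # at the centroid
    v_off = estimate_variance(t_grid, t_grid.mean() + 5.0, 0.2)  # extrapolated point
    print(n_samples, v_min, 0.2 ** 2 / n_samples, v_off)  # v_min equals sigma_eps**2 / N
```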

The synergy of integrating polynomial least squares with statistical estimation theory

Modeling polynomial least squares as a linear signal processing "system" creates the synergy of integrating polynomial least squares with statistical estimation theory to deterministically process corrupted samples of an assumed polynomial. In the absence of the error ε, statistical estimation theory is irrelevant and polynomial least squares reverts to the conventional approximation of complicated functions and scatter plots.

References

  1. Gujarati, D. N., Basic Econometrics, Fourth Edition.
  2. Hansen, B. E., Econometrics, University of Wisconsin Department of Economics, revision of January 16, 2015.
  3. Copeland, T. E. & Weston, J. F., Financial Theory and Corporate Policy, 3rd Edition, Addison-Wesley, New York, 1988.
  4. Kalman, R. E., A New Approach to Linear Filtering and Prediction Problems, Journal of Basic Engineering, Vol. 82D, March 1960.
  5. Sorenson, H. W., Least-squares estimation: Gauss to Kalman, IEEE Spectrum, July 1970.
  6. Bell, J. W., A Simple Kalman Filter Alternative: The Multi-Fractional Order Estimator, IET-RSN, Vol. 7, Issue 8, October 2013.
  7. Bell, J. W., A Simple Kalman Filter Alternative: The Multi-Fractional Order Estimator, IET-RSN, Vol. 7, Issue 8, October 2013.
  8. [3]
  9. Papoulis, A., Probability, Random Variables, and Stochastic Processes, McGraw-Hill, New York, 1965.
  10. Wylie, C. R., Jr., Advanced Engineering Mathematics, McGraw-Hill, New York, 1960.
  11. Scheid, F., Numerical Analysis, Schaum's Outline Series, McGraw-Hill, New York, 1968.
  12. [4]