Estimation theory

Estimation theory is a branch of statistics that deals with estimating the values of parameters based on measured empirical data that has a random component. The parameters describe an underlying physical setting in such a way that their value affects the distribution of the measured data. An estimator attempts to approximate the unknown parameters using the measurements. In estimation theory, two approaches are generally considered:[1]

  • The probabilistic approach (described in this article) assumes that the measured data is random, with a probability distribution dependent on the parameters of interest.
  • The set-membership approach assumes that the measured data vector belongs to a set which depends on the parameter vector.

Examples

For example, it is desired to estimate the proportion of a population of voters who will vote for a particular candidate. That proportion is the parameter sought; the estimate is based on a small random sample of voters. Alternatively, it is desired to estimate the probability of a voter voting for a particular candidate, based on some demographic features, such as age.

Or, for example, in radar the aim is to find the range of objects (airplanes, boats, etc.) by analyzing the two-way transit timing of received echoes of transmitted pulses. Since the reflected pulses are unavoidably embedded in electrical noise, their measured values are randomly distributed, so that the transit time must be estimated.

As another example, in electrical communication theory, the measurements which contain information regarding the parameters of interest are often associated with a noisy signal.

Basics

For a given model, several statistical "ingredients" are needed so the estimator can be implemented. The first is a statistical sample – a set of data points taken from a random vector (RV) of size $N$. Put into a vector,

$$\mathbf{x} = \begin{bmatrix} x[0] \\ x[1] \\ \vdots \\ x[N-1] \end{bmatrix}.$$

Secondly, there are $M$ parameters

$$\boldsymbol{\theta} = \begin{bmatrix} \theta_1 \\ \theta_2 \\ \vdots \\ \theta_M \end{bmatrix},$$

whose values are to be estimated. Third, the continuous probability density function (pdf) or its discrete counterpart, the probability mass function (pmf), of the underlying distribution that generated the data must be stated conditional on the values of the parameters:

$$p(\mathbf{x} \mid \boldsymbol{\theta}).$$

It is also possible for the parameters themselves to have a probability distribution (e.g., Bayesian statistics). It is then necessary to define the Bayesian probability

$$\pi(\boldsymbol{\theta}).$$

After the model is formed, the goal is to estimate the parameters, with the estimates commonly denoted $\hat{\boldsymbol{\theta}}$, where the "hat" indicates the estimate.
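As a rough illustration, these ingredients can be set up in a few lines of Python (a minimal sketch; the Gaussian model, the parameter value, and the variable names are assumptions chosen for demonstration, not part of the article):

```python
import numpy as np
from scipy.stats import norm

# Toy estimation problem: the data are Gaussian with unknown mean (the
# parameter to estimate) and known standard deviation.
rng = np.random.default_rng(0)

N = 100            # sample size
theta_true = 3.0   # unknown parameter (known only to this simulation)
sigma = 1.0        # assumed known

# First ingredient: the measured data vector x of size N.
x = rng.normal(loc=theta_true, scale=sigma, size=N)

# Third ingredient: the (log) pdf of the data conditional on the
# parameter, ln p(x | theta).
def log_likelihood(theta):
    return np.sum(norm.logpdf(x, loc=theta, scale=sigma))

# An estimate, written with a "hat": here, the sample mean.
theta_hat = x.mean()
print(theta_hat, log_likelihood(theta_hat))
```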

One common estimator is the minimum mean squared error (MMSE) estimator, which utilizes the error between the estimated parameters and the actual value of the parameters

$$\mathbf{e} = \hat{\boldsymbol{\theta}} - \boldsymbol{\theta}$$

as the basis for optimality. The MMSE estimator minimizes the expected value of this squared error term.
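The expected squared error of a given estimator can be checked numerically; the following sketch approximates it by Monte Carlo simulation for the sample mean under an assumed Gaussian model (all numerical values are illustrative):

```python
import numpy as np

# Monte Carlo approximation of the mean squared error
# E[(theta_hat - theta)^2] for the sample-mean estimator.
rng = np.random.default_rng(1)
theta_true, sigma, N, trials = 3.0, 1.0, 25, 100_000

# Each row is one simulated data set; each sample mean is one estimate.
estimates = rng.normal(theta_true, sigma, size=(trials, N)).mean(axis=1)
mse = np.mean((estimates - theta_true) ** 2)
print(mse)  # close to sigma**2 / N = 0.04 for this model
```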

Estimators

Commonly used estimators (estimation methods) and topics related to them include:

  • Maximum likelihood estimators
  • Bayes estimators
  • Method of moments estimators
  • Cramér–Rao bound
  • Least squares
  • Minimum mean squared error (MMSE)
  • Maximum a posteriori (MAP)
  • Minimum variance unbiased estimator (MVUE)
  • Best linear unbiased estimator (BLUE)
  • Kalman filter
  • Wiener filter
  • Particle filter
  • Markov chain Monte Carlo (MCMC)

Examples

Unknown constant in additive white Gaussian noise

Consider a received discrete signal, $x[n]$, of $N$ independent samples that consists of an unknown constant $A$ with additive white Gaussian noise (AWGN) $w[n]$ with zero mean and known variance $\sigma^2$ (i.e., $w[n] \sim \mathcal{N}(0, \sigma^2)$). Since the variance is known, the only unknown parameter is $A$.

The model for the signal is then

$$x[n] = A + w[n], \qquad n = 0, 1, \dots, N-1.$$

Two possible (of many) estimators for the parameter $A$ are:

  • $\hat{A}_1 = x[0]$
  • $\hat{A}_2 = \frac{1}{N} \sum_{n=0}^{N-1} x[n]$, which is the sample mean

Both of these estimators have a mean of $A$, which can be shown through taking the expected value of each estimator:

$$\mathrm{E}\left[\hat{A}_1\right] = \mathrm{E}\left[x[0]\right] = A$$

and

$$\mathrm{E}\left[\hat{A}_2\right] = \mathrm{E}\left[\frac{1}{N} \sum_{n=0}^{N-1} x[n]\right] = \frac{1}{N} \sum_{n=0}^{N-1} \mathrm{E}\left[x[n]\right] = \frac{1}{N} (N A) = A.$$

At this point, these two estimators would appear to perform the same. However, the difference between them becomes apparent when comparing the variances:

$$\mathrm{var}\left(\hat{A}_1\right) = \mathrm{var}\left(x[0]\right) = \sigma^2$$

and

$$\mathrm{var}\left(\hat{A}_2\right) = \mathrm{var}\left(\frac{1}{N} \sum_{n=0}^{N-1} x[n]\right) \overset{\text{independence}}{=} \frac{1}{N^2} \sum_{n=0}^{N-1} \mathrm{var}\left(x[n]\right) = \frac{1}{N^2} \left(N \sigma^2\right) = \frac{\sigma^2}{N}.$$

It would seem that the sample mean is a better estimator since its variance is lower for every N > 1.
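This comparison can be reproduced by simulation. The sketch below (with illustrative, assumed values for $A$, $\sigma$, and $N$) draws many realizations of the signal and compares the empirical means and variances of the two estimators:

```python
import numpy as np

# Compare A1 = x[0] and A2 = mean(x) for x[n] = A + w[n],
# with w[n] ~ N(0, sigma^2) i.i.d.
rng = np.random.default_rng(2)
A, sigma, N, trials = 5.0, 2.0, 10, 200_000

x = A + rng.normal(0.0, sigma, size=(trials, N))
A1 = x[:, 0]          # first sample only
A2 = x.mean(axis=1)   # sample mean

print(A1.mean(), A2.mean())  # both approximately A (unbiased)
print(A1.var(), A2.var())    # approximately sigma^2 and sigma^2 / N
```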

Maximum likelihood

Continuing the example using the maximum likelihood estimator, the probability density function (pdf) of the noise for one sample $w[n]$ is

$$p(w[n]) = \frac{1}{\sigma \sqrt{2\pi}} \exp\left(-\frac{1}{2\sigma^2} w[n]^2\right)$$

and the probability of $x[n]$ becomes ($x[n]$ can be thought of as $\mathcal{N}(A, \sigma^2)$)

$$p(x[n]; A) = \frac{1}{\sigma \sqrt{2\pi}} \exp\left(-\frac{1}{2\sigma^2} (x[n] - A)^2\right).$$

By independence, the probability of $\mathbf{x}$ becomes

$$p(\mathbf{x}; A) = \prod_{n=0}^{N-1} p(x[n]; A) = \frac{1}{\left(\sigma \sqrt{2\pi}\right)^N} \exp\left(-\frac{1}{2\sigma^2} \sum_{n=0}^{N-1} (x[n] - A)^2\right).$$

Taking the natural logarithm of the pdf

$$\ln p(\mathbf{x}; A) = -N \ln\left(\sigma \sqrt{2\pi}\right) - \frac{1}{2\sigma^2} \sum_{n=0}^{N-1} (x[n] - A)^2,$$

the maximum likelihood estimator is

$$\hat{A} = \arg\max_A \ln p(\mathbf{x}; A).$$

Taking the first derivative of the log-likelihood function

$$\frac{\partial}{\partial A} \ln p(\mathbf{x}; A) = \frac{1}{\sigma^2} \sum_{n=0}^{N-1} (x[n] - A) = \frac{1}{\sigma^2} \left(\sum_{n=0}^{N-1} x[n] - N A\right)$$

and setting it to zero:

$$0 = \frac{1}{\sigma^2} \left(\sum_{n=0}^{N-1} x[n] - N A\right) = \sum_{n=0}^{N-1} x[n] - N A.$$

This results in the maximum likelihood estimator

$$\hat{A} = \frac{1}{N} \sum_{n=0}^{N-1} x[n],$$

which is simply the sample mean. From this example, it was found that the sample mean is the maximum likelihood estimator for $N$ samples of a fixed, unknown parameter corrupted by AWGN.
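The result can be verified numerically by evaluating the log-likelihood over a grid of candidate values of $A$ and confirming that it peaks at the sample mean (a sketch with assumed simulation parameters):

```python
import numpy as np

# Check that ln p(x; A) is maximized at the sample mean.
rng = np.random.default_rng(3)
A_true, sigma, N = 5.0, 2.0, 50
x = A_true + rng.normal(0.0, sigma, size=N)

def log_likelihood(A):
    return (-N * np.log(sigma * np.sqrt(2.0 * np.pi))
            - np.sum((x - A) ** 2) / (2.0 * sigma ** 2))

grid = np.linspace(0.0, 10.0, 100_001)
A_mle = grid[np.argmax([log_likelihood(A) for A in grid])]

print(A_mle, x.mean())  # agree to within the grid resolution
```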

Cramér–Rao lower bound

To find the Cramér–Rao lower bound (CRLB) of the sample mean estimator, it is first necessary to find the Fisher information number

$$\mathcal{I}(A) = \mathrm{E}\left[\left(\frac{\partial}{\partial A} \ln p(\mathbf{x}; A)\right)^2\right] = -\mathrm{E}\left[\frac{\partial^2}{\partial A^2} \ln p(\mathbf{x}; A)\right]$$

and copying from above

$$\frac{\partial}{\partial A} \ln p(\mathbf{x}; A) = \frac{1}{\sigma^2} \left(\sum_{n=0}^{N-1} x[n] - N A\right).$$

Taking the second derivative

$$\frac{\partial^2}{\partial A^2} \ln p(\mathbf{x}; A) = \frac{1}{\sigma^2} (-N) = \frac{-N}{\sigma^2},$$

finding the negative expected value is trivial since it is now a deterministic constant:

$$-\mathrm{E}\left[\frac{\partial^2}{\partial A^2} \ln p(\mathbf{x}; A)\right] = \frac{N}{\sigma^2}.$$

Finally, putting the Fisher information into

$$\mathrm{var}\left(\hat{A}\right) \geq \frac{1}{\mathcal{I}}$$

results in

$$\mathrm{var}\left(\hat{A}\right) \geq \frac{\sigma^2}{N}.$$

Comparing this to the variance of the sample mean (determined previously) shows that the sample mean is equal to the Cramér–Rao lower bound for all values of $N$ and $\sigma^2$. In other words, the sample mean is the (necessarily unique) efficient estimator, and thus also the minimum variance unbiased estimator (MVUE), in addition to being the maximum likelihood estimator.
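A quick simulation (again with assumed, illustrative parameters) shows the empirical variance of the sample mean matching the bound $\sigma^2/N$:

```python
import numpy as np

# Empirical variance of the sample mean versus the CRLB sigma^2 / N.
rng = np.random.default_rng(4)
A, sigma, N, trials = 5.0, 2.0, 10, 500_000

A_hat = (A + rng.normal(0.0, sigma, size=(trials, N))).mean(axis=1)
print(A_hat.var())   # empirical variance of the estimator
print(sigma**2 / N)  # the Cramér–Rao lower bound; the two agree
```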

Maximum of a uniform distribution

One of the simplest non-trivial examples of estimation is the estimation of the maximum of a uniform distribution. It is used as a hands-on classroom exercise and to illustrate basic principles of estimation theory. Further, in the case of estimation based on a single sample, it demonstrates philosophical issues and possible misunderstandings in the use of maximum likelihood estimators and likelihood functions.

Given a discrete uniform distribution $1, 2, \dots, N$ with unknown maximum, the UMVU estimator for the maximum is given by

$$\hat{N} = \frac{k+1}{k} m - 1 = m + \frac{m}{k} - 1,$$

where $m$ is the sample maximum and $k$ is the sample size, sampling without replacement.[2][3] This problem is commonly known as the German tank problem, due to application of maximum estimation to estimates of German tank production during World War II.

The formula may be understood intuitively as:

"The sample maximum plus the average gap between observations in the sample",

the gap being added to compensate for the negative bias of the sample maximum as an estimator for the population maximum.[note 1]

This has a variance of[2]

$$\frac{1}{k} \frac{(N - k)(N + 1)}{(k + 2)} \approx \frac{N^2}{k^2} \quad \text{for small samples } k \ll N,$$

so a standard deviation of approximately $N/k$, the (population) average size of a gap between samples; compare $\frac{m}{k}$ above. This can be seen as a very simple case of maximum spacing estimation.

The sample maximum is the maximum likelihood estimator for the population maximum, but, as discussed above, it is biased.
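The bias of the sample maximum, and the effect of the UMVU gap correction, can both be seen in a short simulation (the population size and sample size below are assumed values for illustration):

```python
import numpy as np

# German tank problem: sample k serial numbers without replacement from
# 1..N_true, then compare the sample maximum m with m + m/k - 1.
rng = np.random.default_rng(5)
N_true, k, trials = 1000, 15, 20_000

m = np.array([rng.choice(N_true, size=k, replace=False).max() + 1
              for _ in range(trials)])
umvu = m + m / k - 1

print(m.mean())     # well below N_true: the sample maximum is biased
print(umvu.mean())  # approximately N_true: the gap correction removes the bias
```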

Applications

Numerous fields require the use of estimation theory. Some of these fields include:

  • Interpretation of scientific experiments
  • Signal processing
  • Clinical trials
  • Opinion polls
  • Quality control
  • Telecommunications
  • Project management
  • Software engineering
  • Control theory (in particular adaptive control)
  • Network intrusion detection systems
  • Orbit determination

Measured data are likely to be subject to noise or uncertainty, and it is through statistical probability that optimal solutions are sought, to extract as much information from the data as possible.

Notes

  1. ^ The sample maximum is never more than the population maximum, but can be less, hence it is a biased estimator: it will tend to underestimate the population maximum.

References

Citations

  1. ^ Walter, E.; Pronzato, L. (1997). Identification of Parametric Models from Experimental Data. London, England: Springer-Verlag.
  2. ^ a b Johnson, Roger (1994), "Estimating the Size of a Population", Teaching Statistics, 16 (2 (Summer)): 50–52, doi:10.1111/j.1467-9639.1994.tb00688.x
  3. ^ Johnson, Roger (2006), "Estimating the Size of a Population", Getting the Best from Teaching Statistics, archived from the original (PDF) on November 20, 2008
