# Consistent estimator

A consistent estimator is one whose estimation error approaches zero as the sample size tends to infinity.

From the definition of an unbiased estimator, we know that estimates generally carry some error. In many cases, that error decreases as the sample gets bigger.

Sometimes, however, because of the characteristics of the estimator used, the error grows as the sample size increases. Such an estimator would not be desirable. A priori, we do not know how the error behaves: whether it tends to zero, to some fixed value, or to infinity as the sample size grows.
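To see both behaviours, here is a minimal simulation sketch in Python. The normal population, its parameters, and the two toy estimators are illustrative assumptions: the sample mean's error shrinks as the sample grows, while an estimator that simply returns the first observation keeps a roughly constant error no matter how large the sample is.

```python
import random

random.seed(42)
MU, SIGMA = 5.0, 2.0   # assumed true mean and std of the population
TRIALS = 2000          # number of simulated samples per sample size

def avg_abs_error(estimator, n):
    """Average |estimate - MU| over TRIALS simulated samples of size n."""
    total = 0.0
    for _ in range(TRIALS):
        sample = [random.gauss(MU, SIGMA) for _ in range(n)]
        total += abs(estimator(sample) - MU)
    return total / TRIALS

def sample_mean(s):
    return sum(s) / len(s)   # error shrinks with n

def first_value(s):
    return s[0]              # unbiased, but error does not shrink with n

for n in (10, 100, 1000):
    print(n, round(avg_abs_error(sample_mean, n), 3),
             round(avg_abs_error(first_value, n), 3))
```

Both estimators are unbiased here, yet only the sample mean is consistent: averaging more data cancels out random fluctuations, while the first observation ignores the extra data entirely.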

That said, we need to define the concept of consistency. To do so, we must distinguish two types of consistency: on the one hand, simple consistency; on the other, consistency in mean square.

Put simply, they are two mathematical tools that let us determine the value towards which our estimator converges.

## Simple consistency

An estimator fulfills the property of simple consistency if the following equation holds:

$$\lim_{n \to \infty} P\left(\left|\hat{\theta} - \theta\right| > \varepsilon\right) = 0$$

From left to right, the equation reads as follows: the limit, as the sample size tends to infinity, of the probability that the absolute difference between the value of the estimator and the value of the parameter is greater than the error equals zero.

It is understood that the error, denoted by epsilon, must be greater than zero.

Intuitively, the formula indicates that when the sample size becomes very large, the probability of making an error greater than epsilon tends to zero. Stated the other way round, the probability that the estimator lands within epsilon of the true value is, probabilistically speaking, practically 100%.
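This can be checked empirically. Here is a minimal sketch assuming a standard normal population, the sample mean as the estimator, and an arbitrary tolerance of ε = 0.2: the simulated probability that the mean misses the true value by more than ε collapses towards zero as the sample size grows.

```python
import random

random.seed(1)
MU, SIGMA, EPS = 0.0, 1.0, 0.2   # assumed population and tolerance
TRIALS = 2000                    # simulated samples per sample size

def prob_error_exceeds_eps(n):
    """Fraction of simulated samples whose mean misses MU by more than EPS."""
    misses = 0
    for _ in range(TRIALS):
        mean = sum(random.gauss(MU, SIGMA) for _ in range(n)) / n
        if abs(mean - MU) > EPS:
            misses += 1
    return misses / TRIALS

for n in (10, 100, 1000):
    print(n, prob_error_exceeds_eps(n))
```

For small samples the mean frequently strays outside the tolerance band, but for large samples virtually every simulated estimate lands within ε of the true value, which is exactly what the limit in the definition expresses.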

## Consistency in mean square

Another tool that can be used to check that an estimator is consistent is the mean squared error. This mathematical tool is even more powerful than the previous one, because the condition it imposes is stronger.

In the previous section, the requirement was that, probabilistically speaking, the chance of making an error larger than epsilon be zero, or very close to zero, in the limit.

Now, what we demand is defined by the following mathematical equality:

$$\lim_{n \to \infty} E\left[\left(\hat{\theta} - \theta\right)^2\right] = 0$$

That is, when the sample size is large, the mathematical expectation of the squared errors tends to zero. The only way for this value to be zero is for the error itself to always be zero. Why? Because the squared estimation error (estimator minus true value of the parameter, all squared) is always non-negative; it equals zero only when the error is zero.

Of course, if the limit returns something like 0.0001, we can treat it as essentially zero: with finite samples it is almost impossible for the mean squared error to come out exactly zero.

Statistically speaking, we say that an estimator is consistent in mean square if the expectation of its squared error, taken across different samples, tends to zero (or very close to it) as the sample size grows.
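For the sample mean of a population with variance σ², the mean squared error is exactly σ²/n, which visibly tends to zero as n grows. Here is a minimal simulation sketch confirming this (the normal population and its parameters are assumptions chosen for illustration):

```python
import random

random.seed(7)
MU, SIGMA = 0.0, 1.0   # assumed population mean and std
TRIALS = 3000          # simulated samples per sample size

def empirical_mse(n):
    """Average squared error of the sample mean over TRIALS samples of size n."""
    total = 0.0
    for _ in range(TRIALS):
        mean = sum(random.gauss(MU, SIGMA) for _ in range(n)) / n
        total += (mean - MU) ** 2
    return total / TRIALS

for n in (10, 100, 1000):
    print(n, round(empirical_mse(n), 5), "theory:", SIGMA ** 2 / n)
```

The empirical values track the theoretical σ²/n closely and shrink towards zero, so the sample mean is consistent in mean square, and therefore also simply consistent.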