Maximizing the quantity $\sum_{i=1}^{n} \ln f(x_i) - \alpha\,R(f)$, where $R(f)$ is the roughness penalty and $\alpha$ the smoothing parameter, with the constraint $\int f(x)\,\mathrm{d}x = 1$ can be treated as an unconstrained maximization of the strictly concave function (Silverman 1986):
$$
l_\alpha(f) = \sum_{i=1}^{n} \ln f(x_i) \;-\; \alpha\,R(f) \;-\; n\int f(x)\,\mathrm{d}x . \eqno{(\mathrm{B1})}
$$
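One way to see why the extra term enforces the normalization, assuming for illustration a scale-invariant roughness penalty with $R(cf) = R(f)$ for every constant $c > 0$ (true, e.g., of penalties built on $\log f$, such as $\int [(\log f)''']^2\,\mathrm{d}x$), is to write $f = c\,g$ and differentiate with respect to $c$:
$$
\frac{\partial}{\partial c}\left[\sum_{i=1}^{n}\ln\!\big(c\,g(x_i)\big) - \alpha\,R(g) - n\,c\!\int\! g(x)\,\mathrm{d}x\right] = \frac{n}{c} - n\!\int\! g(x)\,\mathrm{d}x = 0 ,
$$
so at the maximum $c\int g(x)\,\mathrm{d}x = \int f(x)\,\mathrm{d}x = 1$: the normalization emerges automatically rather than having to be imposed as a constraint.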
It is possible to avoid some of the numerical and mathematical
difficulties of the MPL estimators by replacing the integrals of this
equation with approximations on a finite interval [a,b]
(Scott
et al. 1980). One can then either set $f(a) = f(b) = 0$, provided the interval is somewhat larger than the range of the observations, or mirror the data about the interval end points; both options are sketched below.
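As a concrete illustration (a minimal NumPy sketch; the variable names and the 20% padding are arbitrary choices, not taken from the text), the two boundary treatments look like:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=200)                 # toy sample

# Option 1: choose [a, b] somewhat wider than the data range, so that
# imposing f(a) = f(b) = 0 discards no appreciable probability mass.
pad = 0.2 * (x.max() - x.min())
a, b = x.min() - pad, x.max() + pad
grid = np.linspace(a, b, 201)

# Option 2: mirror the data about the interval end points instead,
# so that the estimate need not vanish at the boundaries.
x_mirrored = np.concatenate([2.0 * a - x, x, 2.0 * b - x])
```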
A discrete representation of (B1) on a uniform grid of $m$ evenly spaced points $t_j$ with corresponding values denoted by $f_j = f(t_j)$ ($j = 1, \dots, m$) is:
$$
l_\alpha(f) \simeq \sum_{i=1}^{n} \ln f(x_i) \;-\; \alpha\,R(f) \;-\; n\sum_{j=1}^{m} w_j f_j ,
$$
with $\Delta = (b-a)/(m-1)$ and $w_j = \Delta$ for each $j$ except for $w_1 = w_m = \Delta/2$ (trapezoidal rule); the penalty $R(f)$ is evaluated with a finite-difference approximation of the same kind. In the first term, $f(x_i)$ is a linear interpolation between the two grid points that bracket $x_i$. Starting with a uniform guess function, one can maximize this
expression by varying the values of the parameters $f_j$; a minimal numerical sketch of this step is given below.
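The sketch below (hypothetical function name; SciPy assumed) implements one version of this maximization. Two choices are made purely for illustration and are not prescribed by the text: the roughness penalty is a squared-second-difference approximation of $\int (f'')^2\,\mathrm{d}x$, and positivity of $f$ is enforced through the parametrization $f_j = \exp(u_j)$.

```python
import numpy as np
from scipy.optimize import minimize

def discrete_mpl(x, a, b, m=101, alpha=1.0):
    """Maximize a discrete penalized log-likelihood on a uniform grid.

    Illustrative sketch: squared-second-difference roughness penalty,
    positivity enforced via f_j = exp(u_j).
    """
    x = np.asarray(x)
    n = x.size
    grid = np.linspace(a, b, m)
    delta = (b - a) / (m - 1)
    w = np.full(m, delta)                # trapezoidal-rule weights ...
    w[0] = w[-1] = delta / 2.0           # ... halved at the end points

    def neg_l_alpha(u):
        f = np.exp(u)
        fx = np.interp(x, grid, f)       # linear interpolation for f(x_i)
        rough = np.sum(np.diff(f, 2) ** 2) / delta**3  # ~ integral (f'')^2
        return -(np.log(fx).sum() - alpha * rough - n * np.dot(w, f))

    res = minimize(neg_l_alpha, np.zeros(m), method="L-BFGS-B")
    return grid, np.exp(res.x)
```

For a toy sample, `grid, f_hat = discrete_mpl(rng.normal(size=100), a=-4.0, b=4.0, alpha=10.0)` returns the grid and the estimated density values $f_j$.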
As in the case of the adaptive kernel, we can choose an optimal value of the
smoothing parameter with a data-based algorithm. For instance, the unbiased cross-validation estimate of $\alpha$ is the value that minimizes the function:
$$
\int \hat{f}_\alpha^{\,2}(x)\,\mathrm{d}x \;-\; \frac{2}{n}\sum_{i=1}^{n} \hat{f}_\alpha^{\,-i}(x_i) ,
$$
where $\hat{f}_\alpha^{\,-i}$ is an estimate of $f$ constructed by leaving out the single datum $x_i$.
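A direct, if brute-force, way to evaluate this criterion is to recompute the estimate once per left-out datum. The sketch below reuses the hypothetical `discrete_mpl()` estimator from the previous example:

```python
import numpy as np

def ucv_score(x, alpha, a, b, m=101):
    """Unbiased cross-validation score for a given alpha (sketch)."""
    x = np.asarray(x)
    grid, f_hat = discrete_mpl(x, a, b, m=m, alpha=alpha)
    term1 = np.trapz(f_hat**2, grid)     # integral of f_hat^2 over [a, b]
    term2 = 0.0
    for i in range(x.size):
        # leave-one-out estimate, then evaluate it at the omitted datum
        grid_i, f_i = discrete_mpl(np.delete(x, i), a, b, m=m, alpha=alpha)
        term2 += np.interp(x[i], grid_i, f_i)
    return term1 - 2.0 * term2 / x.size

# Choose alpha by minimizing the score over a coarse logarithmic grid:
# alphas = np.logspace(-2, 2, 9)
# best_alpha = min(alphas, key=lambda al: ucv_score(x, al, a, b))
```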