When using a Lagrange polynomial Pn(x) of degree n for
interpolating a continuous function f(x) we get
\begin{equation}
f(x) = P_n(x) + E_n(x).
\end{equation}
The function En(x) is the Lagrange remainder, given by
\[
E_n(x) = \frac{f^{(n+1)}(\xi)}{(n+1)!}\, Q(x),
\]
where Q(x) is the polynomial of degree n+1
\[
Q(x) = (x - x_0)(x - x_1) \cdots (x - x_n).
\]
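For concreteness, these quantities can be sketched numerically. The following Python fragment is our own illustration (the choice f = cos and the equally spaced abscissas are arbitrary); it builds Pn(x) and Q(x) for a given set of abscissas:
\begin{verbatim}
import numpy as np
from scipy.interpolate import lagrange

# Illustrative sketch: the Lagrange interpolant P_n and the node polynomial
# Q(x) = prod_i (x - x_i) that enters the remainder E_n.
def interpolant_and_Q(f, nodes):
    P = lagrange(nodes, f(nodes))      # P_n(x), degree n = len(nodes) - 1
    Q = np.poly1d(nodes, r=True)       # Q(x) with the abscissas as its roots
    return P, Q

f = np.cos                             # example function (arbitrary choice)
nodes = np.linspace(-1.0, 1.0, 6)      # n = 5, equally spaced abscissas
P, Q = interpolant_and_Q(f, nodes)

x = np.linspace(-1.0, 1.0, 1001)
print("max |f - P_n| =", np.max(np.abs(f(x) - P(x))))
print("max |Q|       =", np.max(np.abs(Q(x))))
\end{verbatim}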
It is well known that the maximum absolute value of Q(x) is minimized
when the abscissas are chosen to be the zeros of
the Chebyshev polynomial Tn+1(x) of degree n+1. Nevertheless, this
polynomial may not be the best approximation to the function f(x).
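This classical result is easy to verify numerically. The following check (again our own illustration) compares the maximum of |Q(x)| on [-1,1] for equally spaced abscissas and for the zeros of Tn+1:
\begin{verbatim}
import numpy as np

n = 5                                      # degree of the interpolant
k = np.arange(n + 1)
cheb_zeros = np.cos((2 * k + 1) * np.pi / (2 * (n + 1)))   # zeros of T_{n+1}
equi_nodes = np.linspace(-1.0, 1.0, n + 1)

x = np.linspace(-1.0, 1.0, 2001)
for name, nodes in [("equally spaced", equi_nodes),
                    ("Chebyshev zeros", cheb_zeros)]:
    Q = np.poly1d(nodes, r=True)           # Q(x) = prod (x - x_i)
    print(f"{name:15s}  max |Q| = {np.max(np.abs(Q(x))):.4e}")
# The Chebyshev abscissas give max |Q| = 2^(-n) = 1/32 here, the smallest
# value attainable by any choice of n+1 abscissas in [-1,1].
\end{verbatim}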
The best approximation is obtained when the value of |En| in
Eq. (1) is made as small as possible over the entire
interval. Therefore, we will say that the polynomial Pn*(x) is the
best approximation of degree n to the function f(x) when
\[
\max_{x \in [-1,1]} |P_n^*(x) - f(x)| \le \max_{x \in [-1,1]} |P_n(x) - f(x)|
\]
for every polynomial Pn(x) of degree n.
An important property characterizes the best approximation: at least at
n+2 critical points of the interval [-1,1] the error
function E(x)=Pn*(x)-f(x) assumes extreme values that are equal in
magnitude and alternating in sign (see, for instance,
Vallée-Poussin 1989). From this property, if a set of
critical points is known, Pn*(x) is easily determined
from the n+2 linear equations
\[
P_n^*(x_k) - f(x_k) = (-1)^k E, \qquad k = 1, \ldots, n+2,
\]
in the unknowns E and the n+1 coefficients of Pn*(x).
Consequently the problem of computing the best approximation to
a function f(x) can be reduced to determining the set
of n+2 points where
\[
|E(x_k)| = \max_{x \in [-1,1]} |E(x)|, \qquad k = 1, \ldots, n+2.
\]
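Given a reference set, the linear system above is easily solved. The following Python sketch is our own illustration (the reference abscissas and the function exp are chosen only as an example); it returns the n+1 coefficients cj and the levelled error E:
\begin{verbatim}
import numpy as np

def solve_reference_system(f_vals, ref):
    """Solve P_n(x_k) - f(x_k) = (-1)^k E on the n+2 reference abscissas
    ref for the n+1 polynomial coefficients c_j and the levelled error E."""
    m = len(ref)                                   # m = n + 2
    A = np.vander(ref, m - 1, increasing=True)     # columns x_k**j, j = 0..n
    A = np.column_stack([A, -(-1.0) ** np.arange(m)])
    sol = np.linalg.solve(A, f_vals)               # unknowns (c_0, ..., c_n, E)
    return sol[:-1], sol[-1]

# Small illustration with assumed reference points (the extrema of T_3):
f = np.exp
ref = np.cos(np.pi * np.arange(4) / 3)[::-1]       # 4 points for a degree-2 fit
c, E = solve_reference_system(f(ref), ref)
print("coefficients:", c, "  levelled error E:", E)
\end{verbatim}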
The critical points are not known a priori, but starting from a set of reference points close enough to the critical ones we can proceed by iterations to compute the polynomial coefficients. Murnaghan & Wrench (1959) assume the convergence of the iterations and find the best approximation by changing the complete set of abscissas at each step. Without the need for this assumption, Stiefel (1958) demonstrated that, when proceeding by interchange in the transition from one set of reference points to the next, the iteration terminates in a finite number of steps.
Basically, the procedure consists of the following steps:
1. choose a set of n+2 reference points;
2. solve the linear system above for the coefficients of the polynomial and the levelled error E;
3. locate the point where the error |Pn(x)-f(x)| attains its maximum and, if this maximum exceeds |E|, exchange that point into the reference set;
4. repeat from step 2 until the extreme errors are equal in magnitude and alternating in sign.
The procedure is different when approximating a function given by a
finite set of m points f(xi), i = 1, ..., m. Since we want to
compute a polynomial of degree n which fits the data, we are faced with an
overdetermined system: m points and n+1 coefficients (n+1 < m). The direct
approach to the problem is to solve the inconsistent system of m linear
equations Pn(xi) = f(xi), where the solution cj (the coefficients of
the polynomial) minimizes the largest residual. Then, the
computation of the best approximation consists in determining the
set of coefficients cj which minimizes
\begin{equation}
\max_{1 \le i \le m} |P_n(x_i) - f(x_i)|.
\end{equation}
The problem can be attacked in two ways. The first begins with an initial estimate of the coefficients and uses an algorithm that tries to diminish the error function |Pn(x)-f(x)| at each step. The other consists in finding the set of reference points xk that maximize |Pn(xk)-f(xk)|. Stiefel (1960) demonstrated that both methods are equivalent to the simplex method of linear programming.
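To illustrate this equivalence, the discrete minimax problem can be posed directly as a linear programme and handed to a generic LP solver. The sketch below is our own illustration (it is not the algorithm used in this work); it minimizes an auxiliary variable t subject to |Pn(xi) - f(xi)| <= t:
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

def minimax_fit_lp(x, y, n):
    """Best degree-n polynomial fit to (x, y) in the minimax sense, written
    as the LP:  minimise t  subject to  |P_n(x_i) - y_i| <= t,  i = 1..m."""
    V = np.vander(x, n + 1, increasing=True)       # V[i, j] = x_i**j
    m = len(x)
    cost = np.zeros(n + 2)                         # unknowns (c_0, ..., c_n, t)
    cost[-1] = 1.0                                 # objective: minimise t
    A_ub = np.block([[ V, -np.ones((m, 1))],       #  V c - t <= y
                     [-V, -np.ones((m, 1))]])      # -V c - t <= -y
    b_ub = np.concatenate([y, -y])
    res = linprog(cost, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * (n + 2))
    return res.x[:-1], res.x[-1]                   # coefficients and max error

x = np.linspace(-1.0, 1.0, 41)
c, err = minimax_fit_lp(x, np.exp(x), n=4)
print("max |P_4 - f| over the data:", err)
\end{verbatim}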
In our work we construct the approximating polynomials by using the
Stiefel exchange-method (Stiefel 1958). A set of n+2 points is
selected as reference points from the set of m. The starting reference
points are those whose abscissas are closest to the zeros of the derivative
with respect to x of the Chebyshev polynomial Tn+1, plus both
ends of the interval. The exchange-method is an iterative one that, by changing
the reference points, computes the coefficients cj that minimize
Eq. (2) for i = 1 to m. Uniform
approximation is ensured when Remez's rippling
occurs (Remez 1957), i.e. the errors at the reference points are equal in absolute
value but opposite in sign. When this occurs, the polynomial
obtained is optimal.
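The following Python sketch is our own simplified illustration of this single-exchange iteration (it is not the Barrodale or Schmitt code); it starts from the data abscissas closest to the extrema of Tn+1 and stops when Remez's rippling is reached:
\begin{verbatim}
import numpy as np

def exchange_fit(x, y, n, tol=1e-12, max_iter=100):
    # Illustrative single-exchange iteration for the discrete minimax fit of a
    # degree-n polynomial to the data (x, y); written for this text only.
    order = np.argsort(x)
    x, y = np.asarray(x, float)[order], np.asarray(y, float)[order]
    # starting reference set: data abscissas closest to the n+2 extrema of
    # T_{n+1} (zeros of its derivative plus both end points)
    extrema = np.cos(np.pi * np.arange(n + 2) / (n + 1))[::-1]
    ref = np.unique([np.argmin(np.abs(x - e)) for e in extrema])
    for _ in range(max_iter):
        # solve P_n(x_k) - y_k = (-1)^k E on the current reference set
        A = np.column_stack([np.vander(x[ref], n + 1, increasing=True),
                             -(-1.0) ** np.arange(n + 2)])
        sol = np.linalg.solve(A, y[ref])
        c, E = sol[:-1], sol[-1]
        err = np.vander(x, n + 1, increasing=True) @ c - y
        worst = int(np.argmax(np.abs(err)))
        if abs(err[worst]) <= abs(E) + tol:
            return c, abs(err[worst])             # Remez rippling reached
        # exchange: bring the worst point into the reference set while keeping
        # the signs of the reference errors alternating
        s = np.sign(err[worst])
        if x[worst] < x[ref[0]]:
            ref = (np.r_[worst, ref[1:]] if s == np.sign(err[ref[0]])
                   else np.r_[worst, ref[:-1]])
        elif x[worst] > x[ref[-1]]:
            ref = (np.r_[ref[:-1], worst] if s == np.sign(err[ref[-1]])
                   else np.r_[ref[1:], worst])
        else:
            k = np.searchsorted(x[ref], x[worst])  # ref[k-1] < worst < ref[k]
            ref[k - 1 if s == np.sign(err[ref[k - 1]]) else k] = worst
        ref = np.sort(ref)
    return c, abs(err[worst])

x = np.linspace(-1.0, 1.0, 101)
c, dev = exchange_fit(x, np.sin(3.0 * x), n=6)
print("uniform deviation over the data:", dev)
\end{verbatim}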
The Stiefel exchange-method has been implemented by Barrodale (1975) and Schmitt (1971). Deprit & Picard (1979) used the Barrodale algorithm with encouraging results. In our work we first used the Barrodale algorithm, but Schmitt's algorithm gave better results.