For clarity, let us now assume that is a one-to-one map.
The method presented in Sect. 3 then yields a solution
of Eq. (38),
and thereby the solution of the problem:
.
Unfortunately,
as already mentioned,
this method does not provide any information
on the robustness of the reconstruction process.
The most natural way of obtaining this information
is to find
, directly, as the solution of the normal Eq. (39):
where
In this section, we present the corresponding developments.
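For orientation, a normal equation of this kind has the generic form
\[
A^{*}\!A\,\phi \;=\; A^{*}\psi ,
\qquad\text{i.e.}\qquad
B\phi \;=\; A^{*}\psi
\quad\text{with}\quad B := A^{*}A ,
\]
where, in notation that is ours and not necessarily that of the paper, $A$ denotes the (one-to-one) operator of the problem, $\psi$ the data and $\phi$ the unknown object; $B$ is then self-adjoint, and it is positive definite precisely when $A$ is one-to-one.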
To conduct our analysis,
the eigenvalues of B
are ordered so as to form a
nondecreasing sequence (cf. Eq. (7)):
As is assumed to be a one-to-one map,
is strictly positive.
In the general case where the
generating E do not form an orthogonal set,
the reader must keep in mind that
the action of
can be performed with the aid of Algorithm 2.
The problem is solved by using the conjugate-gradients method
(cf. Sect. 2.3 of Lannes et al. 1987b).
Starting from any in E ,
the iterates
converge to
in at most m iterations,
getting closer to
at each iteration.
In this algorithm,
is the "search direction" in
,
whereas
is the corresponding
"exact line-search parameter";
is the residue of the normal Eq. (39)
for
:
As ,
we have
, hence:
Denoting by an estimate of
,
we therefore have:
Let us introduce
an acceptable error threshold
for
.
Clearly,
the iterative process can be interrupted as soon as
is less than
;
therefore plays the role of
a convergence estimator.
The estimate of
is refined throughout the iterative process
as indicated in Sect. 4.2.
The corresponding algorithm can then be summarized
as follows.
ALGORITHM 3:

Step 0: Set (for example)  and n = 0;
        choose a natural starting point  in E;
        compute  ;
        set  .

Step 1: Compute  ,  ,  ,  ,  ;
        if  , termination.
        Compute  ,  .
        Increment n and loop to step 1.
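For illustration only (this is our own sketch, not the authors' code; the function and variable names are assumptions), the following Python fragment implements the conjugate-gradients iteration for a normal equation of the form $B\phi = b$ (with, in our notation, $b = A^{*}\psi$), a plain residue-based stopping test standing in for the convergence estimator and the threshold introduced above:

import numpy as np

def conjugate_gradients(apply_B, b, phi0, eps=1e-6):
    # Solve B phi = b by conjugate gradients, B symmetric positive definite.
    # apply_B(x) must return B x; eps plays the role of the error threshold.
    phi = phi0.copy()
    r = b - apply_B(phi)                 # residue of the normal equation
    d = r.copy()                         # initial search direction
    norm_b = np.linalg.norm(b)
    for n in range(b.size):              # at most m iterations in exact arithmetic
        Bd = apply_B(d)
        omega = (r @ r) / (d @ Bd)       # exact line-search parameter
        phi = phi + omega * d            # new iterate
        r_new = r - omega * Bd           # new residue
        if np.linalg.norm(r_new) <= eps * norm_b:   # simplified termination test
            return phi, n + 1
        beta = (r_new @ r_new) / (r @ r) # conjugation coefficient
        d = r_new + beta * d             # new conjugate direction
        r = r_new
    return phi, b.size

# Example of use with a random one-to-one map A (illustration only):
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 20))
psi = rng.standard_normal(40)
phi, n_iter = conjugate_gradients(lambda x: A.T @ (A @ x), A.T @ psi, np.zeros(20))
print(np.linalg.norm(A.T @ (A @ phi) - A.T @ psi))   # residual norm, should be small

In the context of this section, the action of B would be performed with the aid of Algorithm 2, and the termination test would rely on the convergence estimator refined throughout the iterations as indicated in Sect. 4.2 rather than on the plain residue norm.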
In the conjugate-gradients method,
the n -dimensional subspace of E
generated by
the conjugate
directions
,
is referred to as the Krylov space of order n .
According to a well-known property
(cf. properties 2 and 3.1 of Lannes et al. 1987b),
minimizes
on
.
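In standard notation (assumed here), this Krylov space can be written
\[
\mathcal{K}_{n} \;=\; \mathrm{span}\{d_{0}, d_{1}, \ldots, d_{n-1}\}
\;=\; \mathrm{span}\{r_{0},\, B r_{0},\, \ldots,\, B^{\,n-1} r_{0}\},
\]
where $r_{0}$ is the initial residue and the $d_{k}$ are the successive conjugate directions of the algorithm.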
Provided that n is sufficiently large, the least-squares solutions
in E and are very close to one another.
At the end of the reconstruction process,
is
therefore the effective object representation space.
The dimension of this space,
as well as the robustness of the reconstruction process,
depends upon the distribution of the eigenvalues of B,
and, more precisely,
on the relative weights of
the projections of
onto the corresponding eigenspaces.
We are thus led
to consider the operator
where is the projection (operator) onto
.
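In generic notation (ours), this operator is presumably the compression of B to the Krylov space,
\[
B_{n} \;:=\; P_{n}\, B \,\big|_{\mathcal{K}_{n}},
\]
$P_{n}$ denoting the orthogonal projection of E onto $\mathcal{K}_{n}$.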
The residues
form an orthogonal basis for
(see Appendix 4 of Lannes et al. 1987b).
As established in
Appendix 2 of Lannes et al. (1996),
the matrix of
expressed in this basis is tridiagonal
(this matrix is of course symmetric).
Its diagonal and upper-diagonal elements
are respectively given by the
relationships
and
The eigenvalues of can therefore be calculated very easily
with the aid of the QR algorithm
(cf. Sect. 11.3 of Press et al. 1992).
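For illustration (the numerical values below are placeholders, not quantities from the paper), the eigenvalues of such a symmetric tridiagonal matrix can be obtained with any standard routine; here, for instance, SciPy's tridiagonal eigensolver is used in place of the QR code of Press et al. (1992):

import numpy as np
from scipy.linalg import eigvalsh_tridiagonal

alpha = np.array([2.0, 2.1, 1.9, 2.05])     # diagonal elements (placeholder values)
beta = np.array([0.5, 0.3, 0.1])            # off-diagonal elements (placeholder values)
mu_hat = eigvalsh_tridiagonal(alpha, beta)  # eigenvalues, returned in nondecreasing order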
Let us order these eigenvalues
as those of B (see Eq. (41)):
By referring to the eigenvalue analysis based on
the notion of "minmax numbers"
(cf. Appendix 5 of Lannes et al. 1987a),
it is easy to show that
Figure 6: Image reconstruction via the regularized version of CLEAN; a) traditional clean map for  ; b) improved clean map provided by the regularized version of CLEAN (  ). These images have to be compared with the image to be reconstructed (Fig. 5c). As shown in Fig. 7b, the matching pursuit process of WIPE can still refine image (b).
Provided that the projections of
onto the eigenspaces corresponding to
and
are different from zero,
a condition which is always numerically satisfied in practice,
and
respectively tend to
as n tends to m
(see Fig. 3 of Lannes et al. 1996).
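This behaviour is the one expected from the standard minimax (Courant-Fischer) characterization of eigenvalues, presumably the one underlying the "minmax numbers" referred to above; in generic notation (the symbols are ours) it reads
\[
\mu^{(k)} \;=\; \max_{\dim V = k}\;\min_{x \in V,\; x \neq 0}\;
\frac{\langle Bx,\,x\rangle}{\langle x,\,x\rangle},
\]
$\mu^{(k)}$ denoting the $k$-th largest eigenvalue of B and V ranging over the $k$-dimensional subspaces of E. Applied to the compression of B onto the Krylov space, it shows in particular that all the eigenvalues of that compression lie between the smallest and the largest eigenvalues of B.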
In our reconstruction processes, the eigenvalues of
are computed at each iteration.
(The cost for this is negligible compared to that
of the action of B .)
As soon as
is less than
say
,
are very good approximations to and
,
respectively.
In most cases,
the termination test of the basic algorithm is then satisfied
(see Fig. 3 of
Lannes et al. 1996).
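As a concluding illustration (our own sketch, with assumed names), the extreme eigenvalue estimates can be monitored along the iteration as follows; the identity used to build the tridiagonal entries from the conjugate-gradients coefficients is the standard Lanczos-CG relation, which may differ in form from the relationships referred to earlier, and the signs of the off-diagonal entries do not affect the eigenvalues:

import numpy as np
from scipy.linalg import eigvalsh_tridiagonal

def cg_with_spectral_estimates(apply_B, b, phi0, eps=1e-6):
    # Conjugate gradients on B phi = b, returning at each iteration the
    # extreme eigenvalues of the tridiagonal matrix associated with the
    # Krylov space (estimates of the smallest and largest eigenvalues of B).
    phi = phi0.copy()
    r = b - apply_B(phi)
    d = r.copy()
    diag, off, extremes = [], [], []
    prev_omega = prev_beta = None
    for n in range(b.size):
        Bd = apply_B(d)
        omega = (r @ r) / (d @ Bd)
        phi = phi + omega * d
        r_new = r - omega * Bd
        beta = (r_new @ r_new) / (r @ r)
        # Tridiagonal entries from the CG coefficients:
        #   diag_n = 1/omega_n + beta_{n-1}/omega_{n-1}  (second term absent for n = 0)
        #   off_{n-1} = sqrt(beta_{n-1}) / omega_{n-1}
        if n == 0:
            diag.append(1.0 / omega)
        else:
            diag.append(1.0 / omega + prev_beta / prev_omega)
            off.append(np.sqrt(prev_beta) / prev_omega)
        ritz = eigvalsh_tridiagonal(np.array(diag), np.array(off)) if off else np.array(diag)
        extremes.append((ritz[0], ritz[-1]))   # current estimates of the extreme eigenvalues
        prev_omega, prev_beta = omega, beta
        r, d = r_new, r_new + beta * d
        if np.linalg.norm(r) <= eps * np.linalg.norm(b):
            break
    return phi, extremes

The last pair in extremes then provides the running estimates of the smallest and largest eigenvalues of B; their ratio approximates, at negligible extra cost, the quantity governing the conditioning of the normal equation.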