Up: CLEAN and WIPE

3. Optimization without control of robustness

Let us consider any subset of indices, say that generated by an aborted matching-pursuit process; this subset has m elements. Let us now consider the problem of minimizing q on the space E generated by the corresponding vectors, k spanning this subset.

By definition, E is the range of the operator:

In the case where the space in question is equipped with its standard scalar product, the adjoint of S is explicitly defined by the relationship:
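In finite dimension the synthesis operator and its adjoint can be made concrete. The sketch below is illustrative and uses our own names (a matrix `G` whose columns stand for the selected vectors): `S` maps a coefficient vector to the corresponding linear combination, and, for the standard scalar product, its adjoint returns the scalar products against each column.

```python
import numpy as np

# Illustrative sketch (names G, S, S_adjoint are ours, not the paper's):
# the selected vectors are the columns of G, so S is synthesis and its
# adjoint, for the standard scalar product, is analysis.
rng = np.random.default_rng(0)
N, m = 12, 4                      # ambient dimension and number of vectors
G = rng.standard_normal((N, m))   # columns: the selected vectors

def S(alpha):
    """Synthesis operator: coefficients -> linear combination of columns."""
    return G @ alpha

def S_adjoint(psi):
    """Adjoint of S for the standard scalar product: scalar products."""
    return G.T @ psi

# Check the defining adjoint identity <S alpha, psi> = <alpha, S* psi>:
alpha = rng.standard_normal(m)
psi = rng.standard_normal(N)
assert np.isclose(S(alpha) @ psi, alpha @ S_adjoint(psi))
```

The identity holds because both sides equal the same bilinear form in `alpha` and `psi`.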

Indeed, for any , we have from Eq. (36):

In what follows, S is not necessarily a one-to-one map onto E: the vectors in question are not necessarily linearly independent.

Now let a vector minimize the quantity in question. Then, the corresponding vector minimizes q on E. From Eq. (2), the vectors in question are such that

These vectors are therefore the solutions of the normal equation

(the least-squares solutions of the equation in question).
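The least-squares characterization can be checked numerically. In the sketch below (our own names: `A` is a stand-in for the imaging operator, `G` holds the selected vectors, `B` the composed operator), any least-squares solution of the composed system satisfies the normal equation:

```python
import numpy as np

# Hedged illustration: a least-squares solution of B alpha = psi
# satisfies the normal equation B* B alpha = B* psi.
rng = np.random.default_rng(1)
N, m = 10, 3
A = rng.standard_normal((N, N))   # stand-in for the operator A
G = rng.standard_normal((N, m))   # columns: the selected vectors
psi = rng.standard_normal(N)      # data

B = A @ G                         # the composed operator (A acting after S)
alpha, *_ = np.linalg.lstsq(B, psi, rcond=None)

# The normal equation holds for the least-squares solution:
assert np.allclose(B.T @ B @ alpha, B.T @ psi)
```

Equivalently, the residue `psi - B @ alpha` is orthogonal to the range of `B`.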

In most cases encountered in image reconstruction, the conjugate-gradients method is the best-suited technique for solving Eq. (38). The version of this method presented below provides the corresponding solution.

ALGORITHM 1:

Step 0:
  set (for example) and n = 0;
  choose a natural starting point in E;
  compute the initial residues and, for all k, the corresponding coefficients;
  set, for all k, the initial descent directions.

Step 1:
  compute, for all k, the quantities of the current iteration;
  update the iterate and the residues;
  if the stopping criterion is satisfied, terminate;
  otherwise compute, for all k, the new descent directions;
  increment n and loop to step 1.

Throughout this algorithm, the former quantity is the residue of the equation at the current iterate; likewise, the latter is its value at the same iterate:

The iteration in results from the identity:

The sequence of vectors converges towards a solution of the problem with all the remarkable properties of the method (see Lannes et al. 1987b). In practice, E is chosen so that S is a one-to-one map. The uniqueness of the solution can easily be verified by modifying the starting point of the algorithm. The stopping criterion is based on the fact that the final residue must be practically orthogonal to all the vectors in question (Eq. (37)); the cosine of the angle between these vectors is readily obtained. Here, as the space is endowed with its standard scalar product, this algorithm cannot provide the condition number of the operator. (The transposition of what is presented in Sect. 4.2 would give the "generalized condition number" of AS.) We therefore recommend using algorithm 1 only when this condition number is approximately known.
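Algorithm 1 is, in essence, the conjugate-gradients method applied to the normal equation, often called CGLS. The following is a minimal sketch under that reading, with our own variable names (`B` plays the role of the composed operator); the stopping test mirrors the requirement that the final residue be practically orthogonal to all the selected vectors.

```python
import numpy as np

# Hedged sketch of conjugate gradients on the normal equation (CGLS);
# variable names are ours, not the paper's.
def cgls(B, psi, tol=1e-10, max_iter=200):
    """Minimize ||psi - B alpha||^2 without forming B*B explicitly."""
    alpha = np.zeros(B.shape[1])     # natural starting point
    r = psi - B @ alpha              # residue of B alpha = psi
    s = B.T @ r                      # residue of the normal equation
    p = s.copy()                     # initial descent direction
    gamma = s @ s
    for _ in range(max_iter):
        q = B @ p
        w = gamma / (q @ q)          # optimal step along p
        alpha += w * p
        r -= w * q
        s = B.T @ r
        gamma_new = s @ s
        if np.sqrt(gamma_new) < tol: # residue ~ orthogonal to the vectors
            break
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return alpha

rng = np.random.default_rng(2)
B = rng.standard_normal((20, 5))
psi = rng.standard_normal(20)
alpha = cgls(B, psi)
ref, *_ = np.linalg.lstsq(B, psi, rcond=None)
assert np.allclose(alpha, ref)
```

In exact arithmetic the iterates reach the least-squares solution in at most as many steps as there are unknowns.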

REMARK 3. Let us consider the special case where A is the identity operator (the two spaces then coincide). The problem is then to find the projection of the data vector onto E. Note that the corresponding condition number is then equal to unity. In this case, algorithm 1 reduces to

ALGORITHM 2:

Step 0:
  set (for example) and n = 0;
  set the initial iterate and residue;
  compute, for all k, the corresponding coefficients;
  set, for all k, the initial descent directions.

Step 1:
  compute, for all k, the quantities of the current iteration;
  update the iterate and the residue;
  if the stopping criterion is satisfied, terminate;
  otherwise compute, for all k, the new descent directions;
  increment n and loop to step 1.

This algorithm converges towards the projection of the data vector onto E with all the properties of the conjugate-gradients method.
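The special case A = I can be illustrated directly: minimizing the quadratic misfit over E yields the orthogonal projection onto E, whose residue is orthogonal to every selected vector. The sketch below uses our own names (`G` holds the selected vectors) and a library solver in place of the iterative algorithm:

```python
import numpy as np

# Illustrative special case A = I: the minimizer of ||psi - G alpha||^2
# yields the orthogonal projection of psi onto E = span of the columns of G.
rng = np.random.default_rng(3)
N, m = 8, 3
G = rng.standard_normal((N, m))
psi = rng.standard_normal(N)

alpha, *_ = np.linalg.lstsq(G, psi, rcond=None)
proj = G @ alpha

# The projection residue is orthogonal to every selected vector:
assert np.allclose(G.T @ (psi - proj), 0)

# Projecting the projection changes nothing (idempotence):
beta, *_ = np.linalg.lstsq(G, proj, rcond=None)
assert np.allclose(G @ beta, proj)
```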

In principle, the projection operation can also be performed by using the matching-pursuit iteration (25). In this case, on setting the corresponding parameter equal to its optimal value, this iteration reduces to

where

The residues are then obtained via iteration (27):

and likewise for (cf. Eq. (28)):

At each iteration, it is then natural to choose k so as to maximize the modulus of the corresponding coefficient. In the general case where the vectors in question do not form an orthogonal set, the conjugate-gradients algorithm is of course preferable.
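This matching-pursuit route to the projection can be sketched as follows. The code is a hedged illustration with our own names: at each step it selects the (normalized) vector most correlated with the current residue and updates along it with the optimal step; for non-orthogonal vectors it reaches the projection only asymptotically, which is why many iterations are allowed.

```python
import numpy as np

# Hedged sketch of the matching-pursuit projection iteration
# (names G, psi, n_iter are illustrative).
def matching_pursuit_projection(G, psi, n_iter=2000):
    Gn = G / np.linalg.norm(G, axis=0)  # normalize so the optimal step is <r, g_k>
    phi = np.zeros_like(psi)            # current approximation in E
    r = psi.copy()                      # current residue
    for _ in range(n_iter):
        c = Gn.T @ r
        k = np.argmax(np.abs(c))        # vector most correlated with the residue
        phi += c[k] * Gn[:, k]
        r -= c[k] * Gn[:, k]
    return phi

rng = np.random.default_rng(4)
N, m = 8, 3
G = rng.standard_normal((N, m))
psi = rng.standard_normal(N)

alpha, *_ = np.linalg.lstsq(G, psi, rcond=None)
proj = G @ alpha                        # exact projection onto span(G)
mp = matching_pursuit_projection(G, psi)

# The iteration converges towards the same projection, but only
# asymptotically when the vectors are not orthogonal:
assert np.allclose(mp, proj, atol=1e-5)
```

The in-span part of the residue decays geometrically, while the out-of-span part is untouched, so `phi` tends to the projection.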


Copyright by the European Southern Observatory (ESO)