One can question the validity of standard deconvolution in AO. Standard deconvolution algorithms (Lucy, MEM, CLEAN) assume an exact psf. Although this is not the case for AO images, these algorithms can still work. Charter (1992) argues that the correct probabilistic approach is to integrate over the uncertainties in the psf, but notes that choosing the psf at its modal value is a good approximation if the psf probability distribution is sharply peaked. This should be the case when the calibration is done carefully. However, it is quite difficult to estimate the error level in the deconvolved image, and consequently which features may come from psf miscalibration. For example, the strength of the MEMSYS-4 algorithm (Copyright © MEDC 1990, Gull 1989) compared to the previous ones is that it provides a stopping criterion and a final error map. However, this map does not take into account the psf uncertainties, and the error could be underestimated (Charter 1992).
Since AO images deconvolved with standard algorithms are limited by the uncertainties in the psf knowledge rather than by the signal-to-noise ratio in the AO image, it seems interesting to use blind deconvolution algorithms. Looking for a solution to Eq. (2) with an unknown object and a psf lying in a hyperspace is challenging. The main problem is to condition the problem well enough to obtain a unique solution; indeed, these algorithms are not satisfactory without further constraints on the psf. Christou & Drummond (1995) have proposed model-fitting blind deconvolution, which assumes a priori knowledge of the object and/or the psf. For HST images of crowded star clusters with some isolated point sources in the isoplanatic field (on the edges, for example), White (1994) enforces the presence of point sources in the restored object image to constrain the blind deconvolution process. We now address the general case of an arbitrary object. Christou (1995) has proposed to work on several AO images with different psfs of the same object and has obtained some promising results. Reducing the dimension of the hyperspace is certainly a solution: a circular-symmetry assumption for the psf yields interesting results (Thiébaut & Conan 1995). But it is also possible to constrain the psf with the calibration psf. I have therefore defined a loose constraint: minimizing the distance of the psf from the calibration psf. In a blind deconvolution algorithm (see e.g. Thiébaut & Conan 1995), this constraint appears as an additional term in the error metric. This term is given by:
$$\epsilon_{\rm cal} = \sum_{\vec{x}} \left[ h(\vec{x}) - h_{\rm c}(\vec{x}) \right]^2 \eqno(3)$$
where $h$ is the current psf estimate and $h_{\rm c}$ is the calibration psf.
Because of this loose constraint, the blind deconvolution algorithm could be renamed near-sighted deconvolution. The overall error metric can be redefined as follows:
$$\epsilon = \epsilon_{\rm bd} + \mu \, \epsilon_{\rm cal} \eqno(4)$$
where $\mu$ is called a hyper-parameter and $\epsilon_{\rm bd}$ is the usual error metric in blind deconvolution. $\mu = 0$ corresponds to blind deconvolution and $\mu \rightarrow \infty$ corresponds to standard deconvolution.
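As a minimal numerical sketch of this combined metric (assuming, for illustration only, a simple least-squares data term and FFT-based convolution; this is not the actual code of Thiébaut & Conan 1995):

import numpy as np

def nearsighted_metric(obj, psf, image, psf_cal, mu):
    # Near-sighted error metric: blind-deconvolution data term plus
    # mu times the squared distance of the psf from the calibration psf.
    # The psf is assumed centred at pixel (0, 0) for the FFT convolution.
    model = np.real(np.fft.ifft2(np.fft.fft2(obj) * np.fft.fft2(psf)))
    e_bd = np.sum((image - model) ** 2)   # data-fidelity term (Eq. 2 residual)
    e_cal = np.sum((psf - psf_cal) ** 2)  # calibration penalty (Eq. 3)
    return e_bd + mu * e_cal              # combined metric (Eq. 4)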
I propose to control the blind deconvolution process by decreasing $\mu$ from a large value to zero. For example, the first step could be to start with a large value of $\mu$, taking as initial guesses the AO image and a wide Gaussian for the deconvolved object image and the psf, respectively. At each step, one decreases $\mu$ by a factor of 10 and takes the previous step's solutions as the initial guesses. One stops the process when there is no longer a significant gain in the error $\epsilon$ or when the synthetic psf obviously diverges.
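A sketch of this control loop could look as follows; the inner optimiser minimize_metric, the starting value mu_start, the tolerance and the step count are placeholders, not values from this work:

import numpy as np

def nearsighted_deconvolution(image, minimize_metric,
                              mu_start=1e4, tol=1e-3, max_steps=12):
    # minimize_metric(obj, psf, mu) is assumed to return the
    # (obj, psf, error) minimising the combined metric at fixed mu.
    ny, nx = image.shape
    y, x = np.indices((ny, nx))
    r2 = (y - ny // 2) ** 2 + (x - nx // 2) ** 2
    psf = np.exp(-r2 / (2.0 * (0.25 * nx) ** 2))  # wide Gaussian guess
    psf /= psf.sum()
    obj = image.copy()                            # AO image as object guess
    mu, prev_err = mu_start, np.inf
    for _ in range(max_steps):
        obj, psf, err = minimize_metric(obj, psf, mu)
        if prev_err - err < tol * prev_err:       # no significant gain:
            break                                 # stop (psf divergence would
        prev_err = err                            # also be a stop signal)
        mu /= 10.0                                # loosen the psf constraint
    return obj, psf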
This new constraint has been easily incorporated in the error metric
of the deconvolution code written by Thiébaut & Conan (1995).
Data from observations of a 0.126 arcsec binary (Tessier et al. 1994; Brandner et al. 1995), taken in J with a Sr of about 3%, have been used to test this method. The only available calibration psf was taken 45 minutes after the binary observations, so we are not sure about its reliability. This is a difficult test because the overlapping psfs and the complexity of the psf shape make it tricky to identify the binary.
Figure 15 shows that this process helps to resolve the binary better. For low $\mu$, as the psf constraint is loosened, some noise appears in the halo of the synthetic psf. This means that the problem starts to become ill-conditioned again, probably because the psf halo size and dynamic range are too large, leaving too many degrees of freedom. It is better to stop the process at that point. Incidentally, in order to overcome this problem, one might use the wavelet transform for a multiresolution analysis (see e.g. Starck & Murtagh 1994).
Using the multiscale decomposition for the psfs, Eq. (3) can be expanded as:
$$\epsilon_{\rm cal} = \sum_i \alpha_i \sum_{\vec{x}} \left[ W_i\{h\}(\vec{x}) - W_i\{h_{\rm c}\}(\vec{x}) \right]^2 \eqno(5)$$
where $W_i\{f\}$ denotes the wavelet plane for $f$ at scale $i$ and $\alpha_i$ is the weight at scale $i$. An appropriate set of $\alpha_i$, possibly combined with thresholding, may help to solve the previous problem.
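As a sketch of such a multiscale penalty (using a crude à trous-style decomposition built from successive Gaussian smoothings; the weights alpha are placeholders and this is only one possible implementation of Eq. (5)):

import numpy as np
from scipy.ndimage import gaussian_filter

def wavelet_planes(f, nscales):
    # Crude a trous-style decomposition: each plane is the difference
    # between two successive Gaussian smoothings; the last entry is
    # the remaining low-frequency residual.
    planes, smooth = [], f
    for i in range(nscales):
        smoother = gaussian_filter(smooth, sigma=2.0 ** i)
        planes.append(smooth - smoother)
        smooth = smoother
    planes.append(smooth)
    return planes

def multiscale_psf_penalty(psf, psf_cal, alpha):
    # Weighted sum over scales of the squared distances between the
    # wavelet planes of the current psf and of the calibration psf.
    w_h = wavelet_planes(psf, len(alpha) - 1)
    w_c = wavelet_planes(psf_cal, len(alpha) - 1)
    return sum(a * np.sum((p - q) ** 2)
               for a, p, q in zip(alpha, w_h, w_c))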
Figure 15: Results of blind deconvolution controlled with the calibration psf for decreasing $\mu$. See text.
This binary was revealed by HST observations in the visible (Bernacca et al. 1993). The separation was measured to be 0.126 arcsec; the position angle (PA) was also determined.
Table 3 lists the binary parameters extracted from the AO deconvolved images in J; it shows a 20% discrepancy in the photometry between the near-sighted deconvolution and the standard deconvolution methods. We have to await additional observations in J for confirmation, but the near-sighted deconvolution process might have retrieved the relevant information.
All these results are still preliminary, but the near-sighted technique certainly deserves further investigation.