So far, we have applied photometric methods to original images (i.e. without post-processing). But even though these carry a lot of information, the full exploitation of adaptive optics images often requires the use of deconvolution methods. It is therefore also important to study how accurate photometry on deconvolved images can be. The method in this case is to use aperture photometry on the restored image with a very small aperture. A PSF fitting algorithm would be awkward to use, as the stellar images in a deconvolution result are very sharp: basically each point-like object has its flux contained in a single pixel (Cohen 1991).
Several studies of the photometric accuracy of deconvolved images have been published, especially in the case of the HST before the correction of its spherical aberration. Cohen (1991) studied the photometric accuracy in a crowded field after deconvolution using the maximum entropy method. She showed that the algorithm systematically recovered faint sources as too faint, and that it could therefore not be used when accurate photometry was needed. She also introduced a standard procedure consisting of deconvolving the original image to generate a list of stars, then applying a PSF fitting algorithm to the original image using this list. Linde & Spännare (1993) concentrated on the Lucy-Richardson method. They showed that in a crowded field the brightness of most faint stars and some bright stars was systematically overestimated, whether the original image or a deconvolved one was used. This was due to the confusion problem, where close stars were measured together as a single object. They also showed that the Lucy-Richardson method could decompose many such blends and restore a linear behaviour at slightly fainter levels of brightness. Other, more recent papers on this subject include Lindler et al. (1994) and Busko (1994), who compare the results of several deconvolution techniques.
Several authors have studied the performance of different deconvolution methods on adaptive optics images, but concentrating on the appearance of the reconstructed image rather than on photometry. Christou & Bonaccini (1996) applied several linear and non-linear methods, including blind and short-sighted deconvolution, to observations of the double star T Tauri (separation about 0.7 arcsec). When trying to determine the difference in magnitude between the two components, they obtained values varying from 1.46 to 1.85 depending on the method used, a very large range of 0.4 magnitudes or so. Tessier (1997) also applied different methods to a binary star with a separation of about 0.13 arcsec. His estimate of the difference in magnitude varied from 0.74 to 0.96, that is to say a range of about 0.2 magnitudes. From these results, we can easily see that the problem of photometry on deconvolved images is far from settled.
To estimate the precision of photometry on deconvolved images, we considered the detection of a faint companion next to a bright star, a case easier to analyse than star clusters. We started with the case where the PSF is accurately known (for example when an isolated star is available in the field of view). Again, we created artificial images of stellar couples with different separations and magnitudes. We first applied DAOPHOT as a reference, then deconvolved the image using the same PSF as for its creation. Finally, we performed photometry on the deconvolved image using aperture photometry with a very small aperture. We used the two main deconvolution algorithms currently popular in the adaptive optics community: the maximum entropy method and the Lucy-Richardson algorithm, as provided by the IRAF/STSDAS RESTORE package.
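The procedure can be sketched as follows, assuming a basic Richardson-Lucy iteration and a simple circular aperture sum. This is an illustration only, not the IRAF/STSDAS RESTORE implementation; the function names, the flat starting estimate and the aperture radius are our choices.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, n_iter=30):
    """Basic Richardson-Lucy multiplicative update (illustrative sketch)."""
    psf = psf / psf.sum()                    # normalise so flux is preserved
    psf_mirror = psf[::-1, ::-1]             # flipped PSF for the correction step
    estimate = np.full_like(image, image.mean())  # flat start with the right total flux
    for _ in range(n_iter):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = image / np.maximum(blurred, 1e-12)   # guard against division by ~0
        estimate = estimate * fftconvolve(ratio, psf_mirror, mode="same")
    return estimate

def small_aperture_flux(image, center, radius=2.5):
    """Aperture photometry with a very small circular aperture at `center`."""
    yy, xx = np.indices(image.shape)
    mask = (yy - center[0]) ** 2 + (xx - center[1]) ** 2 <= radius ** 2
    return image[mask].sum()
```

Because the restored stars are nearly point-like, a radius of only a few pixels is enough to capture essentially all of the recovered flux.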
Figure 11: Error in the magnitude of a faint companion as a function of the separation. Two differences in magnitude between the main star and its companion are considered: 2.5 and 5. Results are given for DAOPHOT on the original image (DAO) and for aperture photometry on the image deconvolved by the maximum entropy method (MEM) and the Lucy-Richardson algorithm (LUCY). The PSF used was an image of HD5980 and no noise was added to the original frame.
Figure 11 presents the resulting error in the estimated magnitude as a function of the separation. Two differences in magnitude between the main star and its companion were considered: 2.5 and 5. In each case, three values are given: the result from DAOPHOT applied to the original image, and the results given by aperture photometry on the images deconvolved by the maximum entropy and Lucy-Richardson methods. The PSF used was an image of HD5980 in the K band. The star had been observed with an integration time of 200 seconds, a Strehl ratio of 0.32 and a signal-to-noise ratio of 7000. We wanted to estimate the errors due specifically to the shape of the adaptive optics PSF, and therefore assumed there was no noise in the original image.
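For reference, the quantity plotted is the photometric error expressed in magnitudes, obtained from the recovered and true fluxes of the companion. A minimal sketch (the function name is ours):

```python
import numpy as np

def magnitude_error(measured_flux, true_flux):
    """Photometric error in magnitudes: dm = -2.5 log10(F_measured / F_true).
    Positive values mean the source was measured too faint."""
    return -2.5 * np.log10(measured_flux / true_flux)
```

With this convention, a difference of 5 magnitudes between the two stars corresponds to a flux ratio of 100.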
Figure 11 shows that the two deconvolution methods yield larger errors than PSF fitting, as could be expected from previous studies. For a large difference in magnitude between the main star and its companion, the errors after deconvolution are more than 10 times the errors with DAOPHOT, the maximum entropy method performing a little worse than the Lucy-Richardson algorithm. When the difference in magnitude is smaller, the error after deconvolution is only a few times larger than with DAOPHOT at large separations, but increases drastically as the separation decreases, the maximum entropy method still performing a little worse. We checked these results by computing the error for several differences in magnitude (from 1 to 5) at two given separations (0.28 and 0.85 arcsec). The previous behaviour was confirmed, with the error for photometry after deconvolution increasing from a few times the DAOPHOT error for small differences in magnitude to about 10 times for large differences.
As we already noted, the PSF usually has to be derived from the image of a calibration star observed at a different time from the science object. The variations of the PSF with time then induce errors in the deconvolution process and the subsequent photometric measurements. In order to study this problem, we applied the same procedure as before, but using different PSFs for the creation of the binary star image and for its deconvolution. We used the objects that provided the best result with DAOPHOT: HD5980 and its calibration star SAO255763. The integration time for the image of HD5980 was 200 seconds, the signal-to-noise ratio 7000 and the Strehl ratio 0.32. For SAO255763, these parameters were respectively 120 seconds, 45000 and 0.35. The delay between the images of the two objects was 20 minutes. We chose two separations, 0.28 and 0.85 arcsec, and studied the error in the magnitude of the companion as a function of the difference in magnitude between the two stars. As before, we introduced no noise in the original image.
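A synthetic binary of this kind can be built by adding a shifted, scaled copy of one PSF image to itself, and then deconvolving with the other PSF. A minimal sketch under that assumption (`make_binary` and its parameters are our own naming, not code from this study):

```python
import numpy as np
from scipy.ndimage import shift

def make_binary(psf, flux_ratio, sep_pix):
    """Synthetic binary star image: the PSF plus a shifted, scaled copy.

    flux_ratio -- companion/primary flux, e.g. 10**(-0.4 * dmag)
    sep_pix    -- separation along the x axis, in pixels
    """
    companion = shift(psf, (0.0, sep_pix), order=3, mode="constant")
    return psf + flux_ratio * companion
```

The image would then be deconvolved with the PSF of the calibration star rather than the one used to build it, reproducing the "wrong PSF" situation.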
Figure 12: Error in the magnitude of a faint companion when using a wrong PSF. The results given by DAOPHOT on the original image (DAO) and by aperture photometry applied to the images deconvolved by the maximum entropy method (MEM) and the Lucy-Richardson algorithm (LUCY) are shown. Two separations were considered: 0.28 and 0.85 arcsec (noted 0.3 and 0.8 in the figure).
The results are presented in Fig. 12. Errors obtained with DAOPHOT applied to the original image (but with a wrong PSF) and with aperture photometry on the images deconvolved by the maximum entropy method and the Lucy-Richardson algorithm are shown. As could be expected, the overall level of error is larger than before. For example, for a difference in magnitude of 2.5 and a separation of 0.8 arcsec, DAOPHOT and the Lucy-Richardson method yield errors of about 0.02 and 0.04 in this case, against about 0.002 and 0.004 in the previous case. At differences in magnitude of less than 4 or so, the expected behaviour is observed, with DAOPHOT performing better than the deconvolution methods. But a surprising feature appears in Fig. 12. Given the fast increase of the error with the difference in magnitude for DAOPHOT, and its slow increase for the deconvolution methods, there is a domain where aperture photometry on deconvolved images produces smaller errors than DAOPHOT applied to the original images. This occurs only when a badly determined PSF is used. The threshold for this behaviour is a difference in magnitude of about 3.8, a value that varies only slightly with the separation.
The presence of noise in the images, whatever its origin, is obviously an important factor in the quality of deconvolution. To illustrate the behaviour of photometry on deconvolved images in the presence of significant noise, we carried out a study similar to the previous one, but added different levels of noise to the images to be deconvolved. This noise was Gaussian and uniform over the whole field of view. We studied only a single case: two stars with a separation of 0.85 arcsec and a difference in magnitude of 5 (the unusual case where deconvolution previously yielded better results than DAOPHOT). As before, the deconvolution used a wrong PSF. The PSFs used were the same as in the previous paragraph, the only difference being the addition of noise.
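Uniform Gaussian noise of this kind, scaled so that the companion's peak intensity has a chosen signal-to-noise ratio, might be added as follows (a sketch; the function name and the seeding are our choices):

```python
import numpy as np

def add_noise_for_peak_snr(image, companion_peak, snr, seed=None):
    """Add spatially uniform Gaussian noise so that `companion_peak`
    (the companion's peak intensity) has the requested signal-to-noise
    ratio, i.e. sigma = companion_peak / snr."""
    rng = np.random.default_rng(seed)
    sigma = companion_peak / snr
    return image + rng.normal(0.0, sigma, size=image.shape)
```

Sweeping `snr` over a range of values then reproduces the kind of noise study described above.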
Figure 13: Error in the magnitude of a faint companion as a function of the signal-to-noise ratio of its peak intensity. A wrong PSF was used and noise added to the images. The results given by DAOPHOT on the original image (DAOPHOT) and by aperture photometry applied to the images deconvolved by the maximum entropy method (MEM) and the Lucy-Richardson algorithm (LUCY) are shown. A separation of 0.85 arcsec and a difference of 5 magnitudes was considered.
Figure 13 presents the result of this study. The error in the magnitude of the faint companion is given as a function of the signal-to-noise ratio of its peak intensity. Three methods are compared: DAOPHOT on the original image, and aperture photometry on images deconvolved by the maximum entropy method and the Lucy-Richardson algorithm. The error obtained with DAOPHOT is basically constant provided the signal-to-noise ratio is higher than 3. The errors using deconvolved images depend much more strongly on the noise. They are stable for a signal-to-noise ratio higher than 20 or 50 depending on the method, but increase drastically below these thresholds, reaching more than 1 magnitude for a signal-to-noise ratio lower than 5 or so. Note that the unusual case where deconvolved images yield better results than DAOPHOT on original images is therefore limited to good signal-to-noise ratios.
Another important issue is the presence of artifacts in the deconvolved images, mainly due to the presence in the PSF of diffraction spikes, lumps in the diffraction rings and features created by residual aberrations. Of course the structure of the artifacts in the image depends on the adaptive optics system used, and the results presented in this section should therefore only be considered an illustration of the problem.
We deconvolved an image of the single star HR2019 with another image of the same star taken after a 3-minute delay. The observation was carried out in a narrow band at 2.08 μm. The two images of HR2019 were obtained with an integration time of 150 seconds, their signal-to-noise ratios were about 7500 and their Strehl ratios 0.26 or so. We used the Lucy-Richardson method and allowed 30 iterations. Figure 14 presents the result of the deconvolution.
Figure 14: Example of a deconvolution using a wrong PSF. An image of the single star HR2019 was deconvolved by an image of the same star taken 3 minutes later. The exposure time is 150 seconds and the image scale 0.035 arcsec per pixel. 30 iterations were used for the deconvolution. A logarithmic scale is used for the representation. In this case, artifacts due to the first diffraction ring are between 4 and 8 magnitudes fainter than the central peak. Artifacts due to the second ring are 6 magnitudes fainter. The others further out are between 6 and 8 magnitudes fainter.
In this particular case, artifacts due to the first diffraction ring are clearly visible around the centre. They lie at a distance of about 0.15 arcsec and are between 4 and 8 magnitudes fainter than the central peak. Artifacts due to the second ring are at 0.3 arcsec or so and 6 magnitudes fainter. The other artifacts are between 6 and 8 magnitudes fainter than the central peak, or weaker still. Performing the deconvolution with different numbers of iterations showed that the artifacts appeared very early, after only a few iterations.
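One way to inventory such artifacts in a deconvolved frame is to list the local maxima lying within some magnitude range of the central peak. A rough sketch assuming simple local-maximum detection (the `find_artifacts` helper is hypothetical, not part of any package used here):

```python
import numpy as np
from scipy.ndimage import maximum_filter

def find_artifacts(image, max_dmag=8.0, size=5):
    """List local maxima brighter than `max_dmag` magnitudes below the
    brightest pixel. Returns (row, col, delta_mag) tuples sorted by
    brightness; the central peak itself appears with delta_mag = 0."""
    peak = image.max()
    is_max = image == maximum_filter(image, size=size)     # local maxima
    bright = image > peak * 10 ** (-0.4 * max_dmag)        # magnitude cut
    ys, xs = np.nonzero(is_max & bright)
    dmag = 2.5 * np.log10(peak / image[ys, xs])
    return sorted(zip(ys.tolist(), xs.tolist(), dmag.tolist()),
                  key=lambda t: t[2])
```

On real deconvolved frames the detection window and threshold would need tuning, and genuine companions must of course be distinguished from artifacts by other means, such as repeating the deconvolution with different calibration stars.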
We also performed the same procedure on different couples of calibration stars. This showed that the result of a deconvolution was completely unpredictable. Sometimes only one or two artifacts due to the first ring appeared. Often there were about 10 artifacts, as before. And in some cases several tens of artifacts showed up. No known parameter, such as the exposure time or the delay between the images, seemed to control the number of artifacts. The clear conclusion of these tests is that the result of a deconvolution of adaptive optics data should not be trusted at faint levels, say for a difference of more than 3 magnitudes relative to the main star near the first diffraction ring, and more than 5 magnitudes further out. Our experience suggests that repeating the procedure with several calibration stars helps, as some artifacts may disappear, but it is usually not enough to remove the problem completely, because the main artifacts tend always to be present.