There are currently two different methods of performing photometry with arrays: aperture photometry, for uncrowded fields, and point spread function fitting, for crowded fields. Aperture photometry is the equivalent of classical diaphragm photometry applied to an array frame: a given aperture is chosen and the light of the object studied is integrated over this aperture. The sky background is estimated from a measurement in an annulus around the object of interest. This annulus must be close enough to the star for its background to match that in the aperture, but not so close that the object itself contributes significantly to the annulus. When dealing with crowded fields, aperture photometry is no longer applicable, as several sources can contribute together to the intensity in an aperture. In this case, another method has to be used: point spread function fitting. This consists of fitting a model of the stellar images to the data via a least-squares algorithm, allowing several stars to be studied at the same time.
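As an illustration, the aperture-plus-annulus measurement can be sketched in a few lines of Python (the function name and the median sky estimator below are our own illustrative choices, not those of any particular reduction package):

```python
import numpy as np

def aperture_photometry(image, x0, y0, r_ap, r_in, r_out):
    """Sum the flux in a circular aperture of radius r_ap centred on
    (x0, y0) and subtract a sky level estimated as the median of the
    pixels in the annulus r_in <= r <= r_out. Illustrative sketch only."""
    yy, xx = np.indices(image.shape)
    r = np.hypot(xx - x0, yy - y0)          # distance of each pixel to the star
    aperture = r <= r_ap                    # pixels inside the aperture
    annulus = (r >= r_in) & (r <= r_out)    # pixels in the sky annulus
    sky = np.median(image[annulus])         # per-pixel background estimate
    return image[aperture].sum() - sky * aperture.sum()
```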
Both photometric methods rely on some strong assumptions. Basically, the point spread function has to be known very accurately, and it should be smooth, stable with time and constant over the whole field of view. This is approximately the case in normal ground-based observations. The PSF is then seeing-limited, most of the energy is contained within a circle of diameter 1 arcsec or so, and the profile is fairly smooth and constant with time and position. But these requirements are no longer met when an adaptive optics system is used. As we will soon see, the global shape of the PSF then varies considerably with time and also with the position of the object in the field of view. Furthermore, the PSF is surrounded by a halo which is affected by strong fluctuations as well. Finally, the PSF is not smooth but shows irregularities due to residual aberrations. All these factors mean that performing photometry with adaptive optics is much less straightforward than with a seeing-limited system, although the technique can still bring considerable improvement.
The first drawback of adaptive optics as far as photometry is concerned is the variability of the PSF with time. These variations have two causes. First, the shape of the PSF and its Strehl ratio depend strongly on the seeing, which is known to vary rapidly with time; variations of the seeing therefore imply strong fluctuations of the PSF. Secondly, some PSF variations are due to wavefront sensor noise and to fluctuations in the high-order spatial modes of the wavefront that are not corrected by the adaptive optics system. These fluctuations would be present even if the seeing were constant. Tessier (1995) argued that the first cause, seeing fluctuations, is probably the predominant source of PSF variations, although this has not yet been proved decisively. Whatever the reason, the PSF cannot be considered stable with time, which poses a major problem for both photometric methods.
The usual procedure in aperture photometry is to observe both the objects of interest and photometric standards for calibration purposes. The science objects and these standards are unlikely to be very close in the sky, which means that they have to be observed at different times. The time variations described above therefore mean that the PSF changes slightly between the images of the targets and those of the standards. In the case of point spread function fitting, the problem is more obvious, as the method requires an accurate knowledge of the point spread function, even when only relative photometry is performed. But in most applications of adaptive optics in crowded fields, such as searches for faint companions around a bright object or observations of stellar clusters, no independent PSF can be found in the science frames, either because only the bright object itself is available or because the field is too crowded to yield an isolated star. This means again that a calibration star has to be observed at a different time and therefore in slightly different conditions, hence the problem. Note that a possible way to overcome the problem of global PSF variations could be to reconstruct the PSF using control loop data, as proposed by Veran et al. (1997).
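To make explicit where this knowledge of the PSF enters, the basic least-squares fit can be sketched for a single star (a deliberately minimal, hypothetical example: real crowded-field codes fit many stars plus a background simultaneously):

```python
import numpy as np
from scipy.ndimage import shift
from scipy.optimize import least_squares

def fit_star(image, psf, guess):
    """Fit the amplitude and sub-pixel position of one star, given a
    PSF model sampled on the same pixel grid. Any error in the PSF
    model propagates directly into the fitted amplitude."""
    def residuals(p):
        amp, dx, dy = p
        model = amp * shift(psf, (dy, dx), order=3)  # scaled, shifted PSF
        return (image - model).ravel()
    return least_squares(residuals, guess).x  # best-fit (amp, dx, dy)
```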
So far we have only considered global variations of the PSF linked to Strehl ratio variations. Another problem arises when the purpose of an observation is to detect faint structures around a star. In the common case where the adaptive optics correction is not perfect (i.e. the partial correction regime), the PSF is in fact the sum of two components: a central core, which approximately corresponds to the diffraction-limited Airy profile, and a large halo surrounding this core, due to the uncorrected or partially corrected high-order modes. The presence of this halo is a major drawback for the detection of faint structures around a star. First, given its origin, the halo strongly fluctuates with time. This means that from one frame of an object to another, the faint structures in the PSF are different, a tendency which is enhanced by the unavoidable presence of noise in the image and in the wavefront sensor measurements. Secondly, even if this halo were stable with time, it would still depend on parameters such as the brightness of the object, its shape, its colour or its position in the sky. Changes in the halo can therefore appear in the usual case where several stars have to be compared. This second type of variation can be reduced by a careful choice of the calibration stars and by applying the procedure presented for example in Tessier (1995): calibration star near the astronomical object, with the same flux, long integration time, short delay between observations. But the first type will always be present. Longer integration times can reduce the variations but never completely eliminate them. This is a serious problem when trying to detect structures such as a companion or an extended source around a star, but it obviously also affects photometric measurements of such faint structures. We can add to this the fact that, for any natural guide star system, a bright star is needed if the science object is too faint to be used itself. This bright star will be surrounded by a halo whose fluctuations might dominate the faint science object and prevent any accurate photometric measurement.
A third problem is the presence of residual features in the PSF: spikes due to the secondary mirror supports, lumps in the diffraction ring due to some imperfectly corrected mode (for instance coma) and faint artifacts all around the core due to fixed residual aberrations in the adaptive optics system. All these features are affected by global variations and halo fluctuations with time, and by photon noise. They therefore vary with time in a way that cannot be predicted, and introduce a new source of difficulties in the detection of faint objects and in their photometry.
The last major problem of adaptive optics is angular anisoplanatism. The phase deformations induced by the atmosphere in two different directions are not the same. As a consequence, the PSF varies with the position in the field of view. This is a serious problem for aperture photometry in moderately crowded fields and for point spread function fitting, as both methods rely on a constant PSF shape. In the case of aperture photometry, the fraction of light contained in a given aperture will depend on the position of the object in the field. For point spread function fitting, slicing the field of view into subimages can help, but it requires the determination of a PSF for each area and also introduces calibration problems between the different subimages.
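The position dependence of the aperture losses can be quantified with an encircled-energy measure; the following helper (our own illustrative naming, assuming a background-free PSF image) computes the flux fraction captured by an aperture, which can then be compared between on-axis and off-axis PSFs:

```python
import numpy as np

def encircled_fraction(psf, r_ap):
    """Fraction of the PSF's total flux falling inside an aperture of
    radius r_ap centred on its peak. Comparing this value for PSFs at
    different field positions quantifies the anisoplanatic losses."""
    cy, cx = np.unravel_index(np.argmax(psf), psf.shape)
    yy, xx = np.indices(psf.shape)
    inside = np.hypot(xx - cx, yy - cy) <= r_ap
    return psf[inside].sum() / psf.sum()
```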
So far we have only considered performing photometry on direct images. Thanks to the very sharp core of the PSF, adaptive optics images can directly yield very good results. Nevertheless, their full exploitation requires the use of efficient deconvolution methods such as the maximum entropy method (Gull & Daniell 1978) or the Lucy-Richardson algorithm (Lucy 1974; Richardson 1972). The use of these methods introduces a new kind of error. Deconvolution methods usually yield images that are in qualitative agreement with the true images. But this is not enough, because a quantitative agreement is required to perform photometry. We therefore have to check whether deconvolved images can be used to obtain quantitative data or whether the deconvolution algorithms introduce large errors in the brightness of the different objects. Such a problem is not new and can be compared to that faced by users of the HST before the correction of spherical aberration (see e.g. White & Burrows 1990). The problem is even more serious with adaptive optics because the PSF is usually not known precisely. Thus, another source of error has to be taken into account: the influence of a badly determined PSF on the photometric accuracy of deconvolved images.
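For reference, the core Lucy-Richardson iteration is short enough to sketch; this is the textbook form (Richardson 1972; Lucy 1974), without the stopping rules, regularisation or noise handling a production implementation would need:

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, n_iter=50):
    """Textbook Richardson-Lucy deconvolution: iteratively multiply the
    current estimate by the back-projected ratio of data to model."""
    psf = psf / psf.sum()
    psf_mirror = psf[::-1, ::-1]                   # flipped PSF for back-projection
    estimate = np.full(image.shape, image.mean())  # flat first guess
    for _ in range(n_iter):
        model = fftconvolve(estimate, psf, mode='same')
        ratio = image / np.maximum(model, 1e-12)   # guard against division by zero
        estimate *= fftconvolve(ratio, psf_mirror, mode='same')
    return estimate
```

How much error this iteration itself introduces in the brightness of the different objects, particularly when the PSF passed to it is wrong, is precisely the question raised above.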
To conclude this section, we can briefly summarise the problems of photometry with adaptive optics: strong variations of the PSF with time; fluctuations of the halo surrounding the PSF core; residual features in the PSF due to fixed aberrations; angular anisoplanatism across the field of view; and the additional errors introduced by deconvolution, especially when the PSF is poorly known.
This study is devoted to the errors directly linked to the use of adaptive optics. We will not take into account other sources such as the precision of the flat fields or colour transformations. In most cases, we will also assume that no noise (photon noise or sky background) is present in the images.
Two kinds of data have been used in this work: actual images and simulated point spread functions. The actual images were obtained with the COME-ON-PLUS/ADONIS system installed at the 3.6-metre telescope of the European Southern Observatory in La Silla (Beuzit et al. 1994). Images acquired during different runs in 1993, 1995 and 1996 were used. These included observations of Betelgeuse (Esslinger 1997) and of several massive stars (see e.g. Heydari-Malayeri et al. 1997a,b). The observations of Betelgeuse were carried out with a narrow-band filter at 2.08 µm. The others were performed in the H and K broad bands, and a few in the J band. The science objects were bright enough to be used as their own wavefront correction reference.
To illustrate the theoretical benefits of using adaptive optics and to assess the consequences of angular anisoplanatism, we used simulated point spread functions constructed with the method described in Wilson & Jenkins (1995) and kindly provided by the authors. The atmospheric model assumed two thin turbulent layers, one at an altitude of 1 kilometre and another at 5 kilometres. The lower layer had twice the strength of the upper one, which is believed to be a reasonable model for the atmosphere above La Palma, another good astronomical site. Note that these simulations assumed no central obscuration of the telescope and a correction of only 20 modes (a limitation imposed by computation time).