

2 A brief discussion of the problem

The usual steps to analyze array data to construct a calibrated image are:

1. calibrating the data:
   (a) removal of the cosmic ray glitches by comparing successive readouts,
   (b) subtraction of the signal due to the dark current,
   (c) flat-fielding, by dividing the data by a flat-field taken from the calibration library,
   (d) conversion of camera units into physical fluxes (Jy);
2. running a standard source detection algorithm, which provides an estimate of the background and of the noise level, and fitting the Point Spread Function (PSF) to pixels whose flux exceeds n times the noise standard deviation (rms). A code sketch of these steps follows the next paragraph.
As all ISOCAM surveys were carried out in raster mode, we will not consider staring and CVF ISOCAM data in this paper (see Starck et al. 1999 for a general review of ISOCAM data calibration).
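To make these steps concrete, here is a minimal sketch in Python/NumPy. All names, and in particular the ADU-to-Jy conversion factor, are hypothetical illustrations; this is not the actual ISOCAM pipeline:

```python
import numpy as np

def calibrate_cube(cube, dark, flat, adu_to_jy, n_sigma=3.0):
    """Sketch of the calibration steps: deglitch, dark-subtract,
    flat-field, and convert to physical flux units.

    cube      : (n_readouts, ny, nx) raw readouts in ADU
    dark      : (ny, nx) dark-current map
    flat      : (ny, nx) library flat-field
    adu_to_jy : ADU-to-Jansky conversion factor (hypothetical)
    """
    # (a) crude deglitching: replace readouts that deviate strongly
    # from the temporal median of each pixel
    med = np.median(cube, axis=0)
    rms = 1.4826 * np.median(np.abs(cube - med), axis=0)  # robust sigma
    clean = np.where(np.abs(cube - med) > n_sigma * rms, med, cube)

    # (b) dark-current subtraction
    clean = clean - dark

    # (c) flat-fielding
    clean = clean / flat

    # (d) conversion from camera units (ADU) to Jy
    return clean * adu_to_jy
```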

The simple calibration described above is successful when applied to bright objects (down to a few percent of the background level), but it is inefficient for faint source detection (below 1% of the background) with ISOCAM. To first order, this can be improved by modeling the flat-field instead of using a library flat-field. The position of the ISOCAM lens varies slightly between settings, and the optical flat-field varies as a function of the lens position by 2 to 20% from the center to the border of the array. In the case of empty fields (and more generally when most of the map covers an empty field), a simple median of the data cube gives a very good flat-field, which allows us to reach a detection level of a few percent of the background level (Starck et al. 1999).
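A minimal sketch of such a self-calibrated flat-field, assuming a mostly empty field so that the temporal median of each pixel is dominated by the flat background (the function name is ours):

```python
import numpy as np

def self_flat(cube):
    """Estimate a flat-field from the data cube itself: for a mostly
    empty field, the per-pixel median over time traces the (flat)
    background, so the normalized median is a good flat-field."""
    flat = np.median(cube, axis=0)    # per-pixel temporal median
    return flat / np.median(flat)     # normalize to unit median level
```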

At second order, however, one encounters the main difficulty of ISOCAM faint source detection: the combination of cosmic ray impacts (glitches) and the transient behavior of the detectors. For glitches producing a single fast increase and decrease of the signal, simple median filtering yields fairly good deglitching. The ISOCAM glitch rate is one per second, and each glitch affects eight pixels on average (Claret et al. 1999). However, 5 to 20% of the total number of readouts, depending on the integration time and the strength of the selection criterion, are affected by memory effects, which can produce false detections. Consequently, the main limitation here is not the detection limit of the instrument, which is quite low, but the number of false detections, which increases with the sensitivity.
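For the median-filtering deglitching mentioned above, a sketch along these lines (using scipy.ndimage.median_filter; the window width and threshold are illustrative choices, not the values used for ISOCAM) could look like:

```python
import numpy as np
from scipy.ndimage import median_filter

def median_deglitch(pixel_ts, width=5, n_sigma=4.0):
    """Remove single-readout glitches from one pixel's time series
    by comparing it to a running temporal median."""
    smooth = median_filter(pixel_ts, size=width, mode='nearest')
    resid = pixel_ts - smooth
    sigma = 1.4826 * np.median(np.abs(resid))   # robust rms estimate
    mask = np.abs(resid) > n_sigma * sigma      # flagged readouts
    out = pixel_ts.copy()
    out[mask] = smooth[mask]                    # replace glitched samples
    return out, mask, resid
```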
Three types of glitches can be isolated, those creating:

1. a strong, short positive feature (lasting one readout only),
2. a positive tail (fader, lasting a few readouts),
3. a negative tail (dipper, lasting several tens of readouts).
Figure 1 plots the signal in camera units (ADU, for Analog to Digital Units) measured by a single pixel as a function of the number of readouts, i.e. of time, and shows these three types of glitches: (a) three sharp type "1" glitches, (b) a "fader" at readout 80 lasting 20 readouts, (c) a "dipper" at readout 230 lasting 150 readouts.
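As an illustration of the three categories, a toy classifier of flagged intervals by sign and duration, fed with the mask and residual returned by the deglitching sketch above, might read as follows (the length thresholds are purely illustrative; this is not the method developed in this paper):

```python
import numpy as np

def classify_glitches(mask, resid):
    """Group consecutive flagged readouts into events and label them
    by sign and duration (illustrative thresholds only)."""
    events, i, n = [], 0, len(mask)
    while i < n:
        if mask[i]:
            j = i
            while j < n and mask[j]:
                j += 1                        # extent of the event
            length = j - i
            sign = np.sign(resid[i:j].sum())
            if length <= 1:
                kind = 'type 1 (short spike)'
            elif sign > 0:
                kind = 'fader (positive tail)'
            else:
                kind = 'dipper (negative tail)'
            events.append((i, length, kind))
            i = j
        else:
            i += 1
    return events
```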

The first two pixels are taken from a four by four raster observation of the Lockman Hole, with a pixel field of view of 6 arcseconds, an individual integration time of 2.1 s, the LW3 filter (15 $\mu$m), a gain of 2, and 56 readouts for the first raster position and 27 readouts for the others (observation number: 03000102).

The last pixel is from another observation with the same parameters, except that the number of readouts per raster position is 22 instead of 27 (observation number: 02600404).

Finally, the signal measured by a single pixel as a function of time is the combination of memory effects, cosmic ray impacts, and real sources: memory effects begin with the first readouts, since the detector undergoes a flux variation when moving from an offset position to the target position (stabilization), and they also appear after long-lasting glitches and after the detection of real sources. One clearly needs to separate all these components of the signal in each pixel before building a final raster map, and to keep the information on the associated noise before applying a source detection algorithm.

In Sect. 3, we will show that the concept of pattern recognition using a multi-resolution algorithm leads to an efficient calibration procedure, free of the major problems described above. Simulations and real data analysis will be presented in Sect. 4.
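As background for that discussion, here is a minimal 1-D "à trous" wavelet decomposition of a pixel time series, the kind of multi-resolution transform on which such pattern recognition can be built. It assumes the standard B3-spline kernel and simplified boundary handling; it is a sketch, not the authors' implementation:

```python
import numpy as np

def atrous_1d(signal, n_scales=4):
    """Redundant "a trous" wavelet decomposition of a 1-D signal:
    returns wavelet coefficients at each scale plus a smooth residual,
    whose sum reconstructs the input exactly."""
    h = np.array([1, 4, 6, 4, 1], dtype=float) / 16.0   # B3-spline kernel
    c = np.asarray(signal, dtype=float)
    scales = []
    for j in range(n_scales):
        # dilate the kernel by inserting 2**j - 1 zeros between taps
        step = 2 ** j
        kernel = np.zeros((len(h) - 1) * step + 1)
        kernel[::step] = h
        c_next = np.convolve(c, kernel, mode='same')
        scales.append(c - c_next)      # wavelet coefficients at scale j
        c = c_next
    scales.append(c)                   # smooth residual
    return scales
```

Because the transform is redundant (summing all returned arrays recovers the input), features of different durations, such as short glitches, faders, dippers, real sources, and the slowly varying baseline, separate naturally into different scales.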


