
5 Simulations


Simulations are essential to tune the method in order to detect the faintest sources without excessive false detections. They allow one to compute the following quantities:

1. sensitivity limit: the flux of the faintest source detected;
2. photometric accuracy: the response of the detectors is generally not stabilized when pointing towards faint sources, because the stabilization is very slow; hence the uncertainty on the photometry depends on the quality of the transient-correction algorithm (Abergel et al. 1996);
3. completeness limit: the faintest flux at which all sources, or at least a known fraction of the sources, are detected;
4. rate of false detections: the number of false detections due to glitches mimicking sources.
We describe the details of the simulations below; the sketch that follows illustrates how the last two quantities can be estimated.
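As an illustration of how the completeness and the false-detection rate can be measured, here is a minimal sketch in Python (numpy), assuming that for each simulated observation we have the list of injected source positions and the list of detected positions in pixel coordinates. The function name match_catalogs and the matching radius are illustrative choices, not taken from the paper.

import numpy as np

def match_catalogs(injected, detected, radius=2.0):
    """Match detected positions to injected ones within `radius` pixels.

    Returns the completeness (fraction of injected sources recovered)
    and the number of false detections (detections with no injected
    counterpart, e.g. glitches mimicking sources).
    """
    injected = np.atleast_2d(np.asarray(injected, dtype=float))
    recovered = np.zeros(len(injected), dtype=bool)
    false_det = 0
    for d in np.atleast_2d(np.asarray(detected, dtype=float)):
        dist = np.hypot(*(injected - d).T)   # distance to every injected source
        i = np.argmin(dist)
        if dist[i] <= radius:
            recovered[i] = True              # detection has an injected counterpart
        else:
            false_det += 1                   # counterpart-free detection
    return recovered.mean(), false_det

Averaged over many realizations at each flux level, the recovered fraction traces the completeness curve, while the counterpart-free detections estimate the false-detection rate.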

5.1 Noise and cosmic-ray simulation

If there are no sources, the temporal signal of a pixel contains only instrumental noise and cosmic-ray effects. Because the physical effects of cosmic rays on the detector are not understood accurately enough to be modeled, we are compelled to use real ISOCAM data. A first possibility is to add simulated sources to a real observation: this allows us to test and calibrate the photometry, and to verify the completeness at a given flux by comparing the number of new detections with the number of simulated sources introduced. This method was applied in the study of the ISOCAM image of the Hubble Deep Field (Aussel et al. 1999). However, this technique allows us neither to measure the false detection rate nor to test the possibility of recovering the $\log N - \log S$ relation if it is sensitive to confusion.

Thus we needed a long observation devoid of real sources, where the only sources are the simulated ones intentionally added to the dataset. This very long pointed ("staring") observation has been cut up to create a false "raster" observation. If a pixel of the staring observation does contain a real source, its flux remains above the background level throughout the observation; this has no effect on the source detection algorithm, since the low-frequency signal is subtracted. The "staring" observation has to be at least as long as the raster to be simulated. Moreover, the cosmic-ray rate must be compatible with that of the observation to be simulated, and the individual integration time must be the same, since the behavior of the detectors depends on it. Finally, the high-frequency photon noise levels of the two observations must be compatible. Such an observation was acquired for us during the calibration of ISOCAM toward the end of the ISO mission, with both the LW2 and LW3 filters.
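The bookkeeping of the cut can be sketched as follows, assuming the source-free staring data come as a cube of shape (n_readouts, ny, nx); the function name and parameters are illustrative, not the actual pipeline of the paper.

import numpy as np

def staring_to_raster(cube, n_pointings, readouts_per_pointing):
    """Cut a long, source-free staring observation into a fake raster.

    cube : ndarray of shape (n_readouts, ny, nx) holding the staring
           data, with its real noise and real cosmic-ray glitches.
    Returns a list of `n_pointings` sub-cubes of
    `readouts_per_pointing` consecutive readouts each, reproducing the
    temporal structure of a raster observation.
    """
    needed = n_pointings * readouts_per_pointing
    if cube.shape[0] < needed:
        # the staring observation must be at least as long as the raster
        raise ValueError("staring observation shorter than the simulated raster")
    return [cube[k * readouts_per_pointing:(k + 1) * readouts_per_pointing]
            for k in range(n_pointings)]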
  
[Figure 11: ISO-HDF mosaic. Left: full simulated ISO-HDF image, built from the staring observation plus simulated sources following a distribution without evolution (Franceschini 1997), i.e. the sum of Fig. 8 (right) and Fig. 7 (right). Right: real image of the ISO-HDF. We find more sources in the real image than in the simulated one, indicating strong evolution.]

5.2 Source simulation

On a detector, a source produces a step function: the signal of a pixel increases while the source is observed and afterwards decreases back down to the background level. The presence of transients modifies this behavior. We use the inverse transient model of Abergel et al. (1996) to simulate the sources seen by ISOCAM. To take the PSF into account, i.e. the distribution of the flux of a point source among the nearby pixels, we adopt the model of Okumura (1997). In this way we are able to simulate a source once its position on the detector is known. An observation is therefore composed of a set of sources whose positions follow a uniform probability density function.
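The following toy sketch shows the idea for a single pixel crossed by a source. The first-order transient used here (an instantaneous fraction r of any flux change, plus a slowly relaxing part with time constant tau) is a simplified stand-in for the Abergel et al. (1996) inverse transient model, and the PSF distribution over neighboring pixels (Okumura 1997) is omitted; all names and parameter values are illustrative.

import numpy as np

def pixel_timeline(n_readouts, on_slice, flux, background, r=0.6, tau=20.0):
    """Toy time line of one pixel: an ideal step of height `flux`
    while the source is in the pixel (readouts `on_slice`), filtered
    by a first-order transient: a fraction r of any flux change is
    seen immediately, the rest relaxes with time constant tau
    (in readouts)."""
    target = np.full(n_readouts, background, dtype=float)
    target[on_slice] += flux                 # ideal step function
    alpha = 1.0 - np.exp(-1.0 / tau)         # per-readout relaxation factor
    slow = background                        # detector stabilized on the background
    signal = np.empty(n_readouts)
    for t, x in enumerate(target):
        slow += alpha * (x - slow)           # lagging component
        signal[t] = r * x + (1 - r) * slow   # instantaneous + lagging parts
    return signal

# e.g. a source entering the pixel at readout 40 and leaving at readout 80:
# timeline = pixel_timeline(120, slice(40, 80), flux=5.0, background=100.0)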

To study the completeness limit and the photometric accuracy, we generate lists of sources of uniform flux. Several lists must be generated, to ensure that peculiar source positions (e.g. a pixel affected by numerous glitches) do not bias the result. Fake observation data are then created for several flux levels; the generation of such lists is sketched below.
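A minimal sketch of this list generation, assuming a detector of nx by ny pixels; the function name and the seeding scheme are illustrative.

import numpy as np

def make_source_lists(n_lists, n_sources, flux, nx, ny, seed=0):
    """Generate `n_lists` independent lists of fake sources, all at the
    same flux, with positions uniformly distributed over the detector,
    so that peculiar positions (e.g. glitch-prone pixels) average out
    over the realizations."""
    rng = np.random.default_rng(seed)
    return [[(rng.uniform(0, nx), rng.uniform(0, ny), flux)
             for _ in range(n_sources)]
            for _ in range(n_lists)]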

To test the validity of the number counts ($\log N - \log S$ or ${\rm d}N/{\rm d}S$) obtained from one observation, we analyze simulated observations containing random lists of fake sources whose fluxes follow the theoretical $\log N - \log S$; a way of drawing such fluxes is sketched below. In Sect. 6, we apply this method to the case of the Hubble Deep Field North.
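For illustration, fluxes following a power-law differential count ${\rm d}N/{\rm d}S \propto S^{-\alpha}$ between $S_{\rm min}$ and $S_{\rm max}$ can be drawn by inverting the cumulative distribution. The power-law form and the slope are assumptions of this sketch, not the Franceschini (1997) counts used in Sect. 6.

import numpy as np

def sample_powerlaw_fluxes(n, s_min, s_max, alpha, seed=0):
    """Draw n fluxes with dN/dS proportional to S**(-alpha), alpha != 1,
    between s_min and s_max, by inverse-transform sampling."""
    rng = np.random.default_rng(seed)
    u = rng.uniform(size=n)
    a = 1.0 - alpha                          # exponent of the integral counts
    return (s_min**a + u * (s_max**a - s_min**a)) ** (1.0 / a)

# e.g. 200 fluxes between 50 and 1000 microJy with Euclidean slope 2.5:
# fluxes = sample_powerlaw_fluxes(200, 50.0, 1000.0, 2.5)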

5.3 Specificity of a simulation

Ideally, to reach the ultimate limit of the instrument, a new set of simulations should be produced and analyzed for each observation, since the results depend on the parameters of the observation as well as on the background level and the glitch rate. However, typical cases can be analyzed and used as templates for other observations. We have performed detailed simulations (one hundred per observation) for two template cases: the "ISO-HDF", which corresponds to what we call ultra-deep observations with very large redundancy (see below), and the "Deep Survey" (Elbaz et al. 1998), for shallower observations with less redundancy and lower spatial resolution.

