

6 The case of the Hubble Deep Field North

The Hubble Deep Field North was observed by ISO in July 1996 (Serjeant et al. 1997) and was analyzed by several independent groups (Goldschmidt et al. 1997; Désert et al. 1999; Aussel et al. 1999).

This work completes a previous paper on the ISO-HDF North (Aussel et al. 1999) with new simulations on an ideal dataset at 15 $\mu$m (filter LW3, $12-18~\mu$m, see Figs. 9, 10 and 11). In order to check whether the conditions were similar for both observations (the staring observation and the real mosaic on the HDF field), we first analyzed the staring observation to determine the percentage of readouts affected by glitches of type 2 and 3 (faders and dippers). In the case of the ISO-HDF (see Table 1), the fraction of data affected by faders and dippers is about 20%, close to the fraction of pixels lost because of glitches of type 1. For comparison, in Deep Survey-like observations, where the redundancy per sky position is much lower (about 3 to 6 instead of 64), one cannot set such strong criteria for the correction of glitches with memory effects, and the typical fraction of corrected pixels is of the order of 5% (the fraction of lost pixels, however, remains identical). Finally, we also checked that the mean Gaussian plus readout noise is comparable in both observations, which is indeed the case when considering the same integration time, as shown in Table 1.
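The fractions quoted in Table 1 amount to simple bookkeeping over the deglitching masks. The sketch below (Python) assumes hypothetical boolean mask cubes, one flag per readout and pixel, produced by the deglitching step; the mask names and the synthetic data are only placeholders for illustration.

\begin{verbatim}
import numpy as np

# Minimal sketch, assuming hypothetical boolean mask cubes of shape
# (n_readouts, 32, 32) that flag readouts affected by each glitch family.
# The synthetic masks below merely stand in for the real deglitching output.

def affected_fraction(mask):
    """Fraction of individual readouts flagged in a boolean mask cube."""
    return mask.sum() / mask.size

rng = np.random.default_rng(0)
fader_mask  = rng.random((2000, 32, 32)) < 0.05   # placeholder masks
dipper_mask = rng.random((2000, 32, 32)) < 0.14

print(f"faders : {100 * affected_fraction(fader_mask):.1f}%")
print(f"dippers: {100 * affected_fraction(dipper_mask):.1f}%")
\end{verbatim}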


  
Table 1: Comparison of the noise level and rate of cosmic ray glitches during the ISO-HDF observation and during a staring observation used for building a simulation of the same ISO-HDF mosaic in the LW3 band ($12-18~\mu$m)

\begin{tabular}{lcc}
\hline
Observation & ISO-HDF & Staring \\
\hline
... & ... & ... \\
Faders & 5.2\% & 5.5\% \\
Dippers & 14.1\% & 13.4\% \\
\hline
\end{tabular}

In order to quantify the effect of incompleteness and photometric uncertainty on the number counts, we built several simulated ISO-HDF images including simulated sources whose flux distribution followed that proposed by Franceschini et al. (1997). We stress that this choice does not drive the output number counts; on the contrary, it allows us to check whether, after applying PRETI, the slope of the output number counts remains close to the input one. The main uncertainty here comes from the accuracy of the source photometry, which redistributes the sources among the flux bins.

We use a lower limit of $S_{\rm l}=0.1~\mu$Jy, well below the sensitivity of our observations, and an upper limit of $S_{\rm u}=1$ mJy for the fluxes of the fake sources. We then simulate several fake mosaics with fluxes distributed as described above. Fluxes in Jy are converted into ISOCAM units (ADU/gain/second) following the standard conversion table of the ISO cookbook (ISO-Team 1994): 1 ADU/g/s = 1.96 mJy in LW3.
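The flux assignment can be sketched as follows. Since the Franceschini et al. (1997) model is not reproduced here, a single power-law differential count is used as an assumed stand-in, sampled by inverse transform between $S_{\rm l}$ and $S_{\rm u}$ and converted to ISOCAM units with the quoted 1 ADU/g/s = 1.96 mJy factor for LW3.

\begin{verbatim}
import numpy as np

# Minimal sketch of the flux assignment for the fake sources.  The real
# distribution follows Franceschini et al. (1997); a single power law
# dN/dS ~ S**(-alpha) is used here only as an assumed stand-in.

S_L = 0.1e-6                     # lower flux limit, Jy (0.1 microJy)
S_U = 1.0e-3                     # upper flux limit, Jy (1 mJy)
JY_TO_ADU_G_S = 1.0 / 1.96e-3    # 1 ADU/gain/s = 1.96 mJy in LW3

def draw_fluxes(n, alpha=2.0, rng=None):
    """Draw n fluxes in Jy from dN/dS ~ S**(-alpha) (alpha != 1)."""
    rng = rng or np.random.default_rng()
    a = 1.0 - alpha
    u = rng.random(n)
    return (S_L**a + u * (S_U**a - S_L**a)) ** (1.0 / a)

fluxes_jy  = draw_fluxes(1000, rng=np.random.default_rng(1))
fluxes_adu = fluxes_jy * JY_TO_ADU_G_S   # ISOCAM units (ADU/gain/second)
\end{verbatim}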

We project each source on the detector for each pointing of the camera, taking into account the field distortion and the point spread function (using the model of Okumura 1997). This gives a cube of images, i.e. one $32\times 32$ pixel image per pointing of the satellite, which we then multiply by the flat-field computed from the original data without simulated sources. We next add to this cube the mean background level of the staring observation in order to model the transient behavior of the sources, which depends on the total flux level seen by each detector pixel, using the model of Abergel (1996). Finally, we subtract the mean background level from this cube of simulated sources and add it to the real dataset of the staring observation, which contains the real background with noise and cosmic rays.
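The cube construction can be summarized schematically as below. The callables standing in for the distortion/PSF projection (Okumura 1997) and the transient model (Abergel 1996) are placeholders, not real library routines; only the order of operations is meant to mirror the text.

\begin{verbatim}
import numpy as np

# Schematic sketch of the simulated-cube construction; project_sources and
# transient_response are placeholder callables supplied by the user.

def build_simulated_cube(sources, pointings, flat, background, real_cube,
                         project_sources, transient_response):
    """Return the real staring cube with fake sources injected."""
    cube = np.zeros((len(pointings), 32, 32))

    # 1. Project every fake source on the 32x32 detector for each pointing,
    #    including field distortion and the point spread function.
    for k, pointing in enumerate(pointings):
        cube[k] = project_sources(sources, pointing)

    # 2. Multiply by the flat-field computed from the source-free data.
    cube *= flat

    # 3. Add the mean background of the staring observation, so that the
    #    transient response sees the correct total flux level per pixel.
    cube += background
    cube = transient_response(cube)

    # 4. Subtract the mean background again and add the result to the real
    #    staring data, which already contain background, noise and glitches.
    cube -= background
    return real_cube + cube
\end{verbatim}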

Figure 12 shows the fraction of false detections as a function of flux limit using detection thresholds of $7 \tau_{\rm w}$ and $5 \tau_{\rm w}$, where $\tau_{\rm w}$ is the noise level in wavelet space. For each simulation, we built a main and a supplementary list of sources, as in Aussel et al. (1999), and found that in both cases the rate of false detections down to the completeness limit is only 2%.
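Tabulating the false-detection fraction of Fig. 12 amounts to counting, above each flux limit, the extracted sources that have no injected counterpart within a matching radius. The sketch below assumes simple (x, y) and flux arrays for the detected and injected sources, and the matching radius is an illustrative value, not the one used in the analysis.

\begin{verbatim}
import numpy as np

# Minimal sketch: a detection brighter than flux_limit with no injected
# source within r_match pixels is counted as false.  Array formats and the
# matching radius are assumptions for illustration.

def false_detection_fraction(det_xy, det_flux, true_xy,
                             flux_limit, r_match=3.0):
    """Fraction of detections above flux_limit without a counterpart."""
    xy = det_xy[det_flux >= flux_limit]
    if len(xy) == 0:
        return 0.0
    # Distance of each detection to its nearest injected source (pixels).
    d = np.sqrt(((xy[:, None, :] - true_xy[None, :, :]) ** 2).sum(axis=-1))
    nearest = d.min(axis=1)
    return float(np.mean(nearest > r_match))

# Example: scan a range of flux limits, as in Fig. 12.
# fractions = [false_detection_fraction(det_xy, det_flux, true_xy, s)
#              for s in np.logspace(np.log10(20e-6), np.log10(500e-6), 20)]
\end{verbatim}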

  
Figure 12: Fraction of false detections as a function of the flux limit of the sample, for a detection threshold of 7 $\tau_{\rm w}$ (lower curve) and 5 $\tau_{\rm w}$ (upper curve).

Hence, in the main list of sources extracted from the ISO-HDF (21 sources), which was built using the $7 \tau_{\rm w}$ threshold, the completeness limit is 200 $\mu$Jy while the sensitivity limit is 50 $\mu$Jy, with a false detection rate close to 2%. In the supplementary list (46 sources in total, including the previous list), which goes down to $5 \tau_{\rm w}$, the completeness limit is 100 $\mu$Jy with about the same false detection rate of 2%, which we could not measure previously. Hence we can now merge these two lists into a single list of 46 sources, whose completeness limit is 100 $\mu$Jy instead of 200 $\mu$Jy and which statistically contains only one false source (2% of 46 sources) at the 95% confidence level. At fainter fluxes, the number of false detections increases very rapidly in the supplementary list: for a limit of 20 $\mu$Jy it varies between 5 and 30% depending on the simulation. For a list of one hundred sources, this implies about 10 false detections between 20 and 80 $\mu$Jy.
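For reference, the expected number of spurious sources in the merged list follows directly from the measured false-detection rate,

\begin{displaymath}
\langle N_{\rm false}\rangle \simeq f_{\rm false} \times N_{\rm sources}
= 0.02 \times 46 \approx 0.9 ,
\end{displaymath}

i.e. about one false source in the 46-source list.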


