3 Data reduction

3.1 Overview of the EIS pipeline

An integral part of the EIS project has been the development of an automated pipeline to handle and reduce the large volume of data generated by EIS. The pipeline consists of different modules built from preexisting software: 1) standard IRAF tools for the initial processing of each input image and the preparation of superflats; 2) the Leiden Data Analysis Center (LDAC) software, developed for the DENIS project (Epchtein et al. 1996), to perform photometric and astrometric calibrations; 3) the SExtractor object detection and classification code (Bertin & Arnouts 1996); 4) the "drizzle'' image coaddition software (Fruchter & Hook 1997; Hook & Fruchter 1997), originally developed for HST, to create coadded output images from the many overlapping input frames.

A major aim of the EIS software is to handle the generic problem posed by building up a mosaic of overlapping images with varying characteristics, and by extracting information from the resulting inhomogeneous coadded frames. This has required significant changes to the preexisting software. To illustrate the power of the tools being developed, a brief description of the basic idea behind the pipeline is useful. For each input frame two auxiliary maps are produced: a weight map, which contains information about the noise properties of the frame, and a flag map, which marks the pixels that should be masked, such as bad pixels and likely cosmic ray hits. After background subtraction and astrometric and relative photometric calibration, each input frame is mapped onto a flux-preserving conic equal-area projection grid, chosen to minimize distortion in the area and shape of objects across the relatively large EIS patch. The flux of each pixel of the input frame is redistributed in the superimage and coadded according to the weights and flags of the input frames contributing to the same region of the coadded image; the coaddition is therefore carried out on a pixel-by-pixel basis. In this process the identity of the individual input frames is lost, so an associated context (domain) map is created to trace them back, providing the required cross-reference between an object and the input frames that have contributed to its final flux. Another important output is the combined weight map, which provides the information required by the object detection algorithm to adapt the source extraction threshold to the noise properties of the context being analyzed. SExtractor has been extensively modified for EIS and its new version incorporates this adaptive thresholding (new SExtractor documentation and software are available at "http://www.eso.org/eis''). For a survey such as EIS, carried out in visitor mode under varying seeing conditions, this cross-reference is essential, as it may not be possible to easily characterize the PSF in the final coadded image, a problem which in turn affects the galaxy/star classification algorithm.
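To make the adaptive-thresholding idea concrete, the following minimal sketch (function and parameter names are invented for this illustration and are not part of the actual SExtractor implementation) scales a per-pixel detection threshold by the combined weight map, raising the threshold wherever fewer, or noisier, input frames contributed:

\begin{verbatim}
import numpy as np

def detection_threshold(weight_map, sigma_ref, nsigma=1.5):
    """Per-pixel detection threshold from the combined weight map.

    A pixel with relative weight w has an effective noise of
    sigma_ref / sqrt(w), so the extraction threshold is raised where
    the coadded data are shallower.  Zero-weight pixels are excluded
    (infinite threshold)."""
    w = np.asarray(weight_map, dtype=float)
    thresh = np.full_like(w, np.inf)
    ok = w > 0
    thresh[ok] = nsigma * sigma_ref / np.sqrt(w[ok])
    return thresh
\end{verbatim}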

In the present paper the main focus is on the processing of, and the object catalogs extracted from, the individual frames that make up the mosaic and together provide full contiguous coverage of patch A with well-defined characteristics.

3.2 Retrieving raw data

EIS utilizes the observational and the technical capabilities of the refurbished NTT and of the ESO Data Flow System (DFS), from the preparation of the observations to the final archiving of the data. DFS represents an important tool for large observational programs. The ESO Archive is interfaced with the EIS pipeline at both ends: it supplies the observed raw data and collects the output catalogs and reduced images.

In the course of delivering the data to EIS, the raw data are also archived in the ESO NTT Archive. The headers of the data delivered to EIS had to be adjusted in a variety of ways to meet the requirements of the EIS pipeline. Since standard ESO FITS-headers contain a wealth of instrumental and observational parameters in special ESO keywords, translation of some of these keywords to user-defined keywords is a standard tool the Archive offers. Header translation was executed automatically right after the transfer of each file. Moreover, the files were renamed and sorted into subdirectories to reflect the nature of the frames (calibration, science and test frames). Filenames were constructed to ensure both uniqueness and a meaningful description of the observed EIS tile, filter and exposure. Most of the important parameters of all transferred files are ingested into several database tables, briefly described below, which serve as a pool of information used during the execution of the pipeline. The database also provides all the information necessary to characterize the observations.

3.3 Frame processing

The EMMI frames have been read in a two-port readout mode. The assumed gain difference between the two amplifiers is about 10% (2.4 e$^-$/ADU and 2.16 e$^-$/ADU), with readout noises of 5.49 e$^-$ and 5.81 e$^-$, as reported by the NTT team. Slight variations of these values have been detected from run to run. Currently, standard IRAF tools are used to remove the instrumental signature from each frame. To handle the dual-port readout, pre-scan corrections are applied to each half-frame using the xccdred package of IRAF, subtracting a fitted pre-scan value for each row.
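A minimal sketch of the row-by-row pre-scan subtraction (the IRAF xccdred task fits a smooth function to the pre-scan values; here a simple per-row median is used, and the pre-scan column range is illustrative):

\begin{verbatim}
import numpy as np

def prescan_correct(half_frame, prescan_cols=slice(0, 20)):
    """Subtract a per-row bias level, estimated from the pre-scan
    columns, from one read-out port of a dual-port frame."""
    data = np.asarray(half_frame, dtype=float)
    bias_level = np.median(data[:, prescan_cols], axis=1)  # one value per row
    return data - bias_level[:, None]
\end{verbatim}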

A master bias for each run is created by median-combining all bias frames, typically $\gtrsim 50$ per run, using the 3-$\sigma$ clipping option. The effective area used in this calculation runs from Col. 800 to 2000 (along the x-axis) and from row 100 to 1900 (along the y-axis), which avoids a bad column visible in the upper part of the chip and vignetted regions at the top and bottom of the image. The same procedure is adopted for the dome flats; about 10 dome flats with counts of about 15 000 ADU were used for each filter. Skyflats were obtained at the beginning and end of each night using an appropriate calibration template to account for the variation of sky brightness in each band, yielding about five skyflats per night with counts of about 40 000 ADU. These bright sky exposures were monitored automatically to reject frames that could have been saturated or that had low S/N. The skyflats were combined using a median filter on a run basis and then median-filtered within a $15\times 15$ pixel box.
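The combination step can be sketched as follows (a simplified stand-in for the IRAF tasks actually used; the clipping and smoothing parameters follow the values quoted above):

\begin{verbatim}
import numpy as np
from scipy.ndimage import median_filter

def combine_flats(stack, nsigma=3.0, box=15):
    """Median-combine a stack of frames with nsigma clipping about the
    pixel-wise median, then smooth with a box x box running median,
    mimicking the sky-flat procedure described in the text."""
    stack = np.asarray(stack, dtype=float)
    med = np.median(stack, axis=0)
    sig = np.std(stack, axis=0)
    clipped = np.where(np.abs(stack - med) < nsigma * sig, stack, np.nan)
    combined = np.nanmedian(clipped, axis=0)   # per-pixel median of survivors
    return median_filter(combined, size=box)   # 15 x 15 pixel smoothing
\end{verbatim}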

For each science frame a pre-scan correction is made and the frames are trimmed, leaving a usable chip area from Col. 21 to 2066 (x-axis) and from row 1 to 2007 (y-axis). Note that this procedure does not completely remove the vignetted region of the frame or a coating defect visible on the chip; as will be seen later, a suitable mask is defined to handle these regions. After trimming, the combined bias is subtracted, the frames are flat-fielded using the dome flats, and an illumination correction is applied using the combined skyflats.

After each survey frame has been corrected for these instrumental effects, a quick inspection is carried out by eye. A single EIS keyword is set in the header to flag those frames which contain satellite tracks, bright star(s), background gradients due to nearby bright objects, or those useless for scientific purposes (e.g., those affected by motions of the EMMI rotator or by glitches in the tracking leading to double or even triple images). Flagged frames are then rejected during the creation of the supersky flat, which is obtained from the combination of all suitable science frames using a median filter and 3$\sigma$ clipping. The superflat is created on a run basis and is smoothed using a running box $15\times 15$ pixels in size.

During visual inspection masks are also created and saved for use by the pipeline (Sect. 3.7) to mark regions affected by bright stars just outside the frame, some satellite tracks or cosmic rays in the form of long streaks in the frame. This can be done by taking advantage of features recently implemented in the SkyCat display tool (Brighton 1998, see also "http://www.eso.org/eis'').

Finally, the superflat is applied to both the survey and standard star frames taken during the night. The low-resolution background images (minibacks, see Sect. 5.1.1) generated by SExtractor prior to source extraction have been used to estimate the homogeneity of the resulting science images. In general, the flatness of the images is $\lesssim 0.2\%$, except for two runs for which larger values (up to $\sim$1.4%) are found. A major contribution to the background residual is probably due to variations in the relative gain of the two readout ports, which will be investigated further.

Note that the flat-fielding of EIS images is done in surface brightness, not in flux. Variations of the pixel scale over the field may therefore cause a drift of the magnitudes, especially at the edges of the frames. However, the variation of pixel scale due to distortions has been estimated from the astrometric solution (Sect. 3.8) to be $\lesssim 0.5\%$. This translates into a photometric drift of $\lesssim 0.01$ mag over the field, which has not been corrected for in the present release.
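As a consistency check on these numbers (an illustrative calculation, not from the source): a linear pixel-scale variation of 0.5% changes the solid angle subtended by a pixel by $(1+0.005)^2 - 1 \approx 1\%$, so the corresponding magnitude drift is $\Delta m \simeq 2.5\,\log_{10}\bigl[(1+0.005)^{2}\bigr] \approx 0.011$ mag, in line with the quoted $\lesssim 0.01$ mag.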

3.4 Processing standard stars

Frames for standard stars are also processed automatically through a parallel branch of the pipeline fine-tuned to process standard star fields. Reference catalogs for all the Landolt fields are available and are used to pair objects and identify the standard stars. Aperture photometry, using Landolt apertures, is carried out and extinction coefficients and zero-points are computed and stored in the calibration database together with other observational parameters. Plots are also produced to ease the task of identifying photometric nights. The automatic process has been checked against reductions carried out manually with IRAF tasks, yielding consistent results.
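A minimal sketch of such a calibration fit (assuming the simplest photometric model, without color terms, which may differ from the actual pipeline solution): the zero-point $zp$ and extinction coefficient $k$ in $m_{\rm std} = m_{\rm inst} + zp - k\,X$ are obtained by least squares from a night's standard-star measurements:

\begin{verbatim}
import numpy as np

def fit_extinction(m_inst, m_std, airmass):
    """Fit m_std = m_inst + zp - k * X over a night's standards."""
    m_inst = np.asarray(m_inst, dtype=float)
    m_std = np.asarray(m_std, dtype=float)
    X = np.asarray(airmass, dtype=float)
    A = np.column_stack([np.ones_like(X), -X])   # columns for zp and k
    zp, k = np.linalg.lstsq(A, m_std - m_inst, rcond=None)[0]
    return zp, k
\end{verbatim}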

3.5 Survey monitoring and quality control

An integral part of the survey pipeline is the automatic production of reports for monitoring the data and the data reduction, and for diagnosing the different tasks of the EIS pipeline. These reports comprise several plots that are interfaced with the Web for easy access by the EIS team. They can also be retrieved at a later time from information available in the EIS database (Sect. 3.6).

From the raw data retrieved from the archive a set of plots is produced which provides information on: 1) the time-sequence of observations (which has helped optimize the use of the DFS, and monitor the efficiency of the observations); 2) the performance of the pointing model; 3) the continuity of the coverage of the patch (by monitoring the overlap between tiles); and 4) the observed tiles and repeated observations.

  
Figure 6: Limiting isophote distribution for the patch A frames actually accepted for the survey. Vertical lines refer to the 25, 50 and 75 percentiles of the distribution.

After processing the data another set of plots is produced, showing the seeing as measured on the images, the number counts at a fixed limiting magnitude, the limiting magnitude, defined as the $5\sigma$ detection threshold for a point source, and the $1\sigma$ limiting isophote (mag/arcsec$^2$). As an illustration, Fig. 6 shows the distribution of the limiting isophote for the accepted tiles of patch A, and Figs. 7 and 8 show the two-dimensional distributions of the seeing and limiting isophote. These plots are useful to guide the selection of regions for different types of analysis. Diagnostic plots are also produced after the astrometric calibration of the frames, and before and after the calculation of the relative and absolute photometric calibrations.

  
Figure 7: Two-dimensional distribution of the seeing as measured on the I-band images for patch A for all the accepted even frames. Contours refer to the 25, 50 and 75 percentiles of the distribution.

  
Figure 8: Two-dimensional distribution of the computed $1\sigma$ limiting isophote for the accepted even frames for patch A. Contours refer to the 25, 50 and 75 percentiles of the distribution.

Based on this information the observing plans for subsequent runs are reviewed. In the case of patch A, originally observed under poor conditions, an attempt was made to improve the quality of the data as discussed earlier (see Fig. 2).

The image quality of the survey frames has also been monitored by computing the size and anisotropy of the PSF for suitably chosen stars covering the survey frames. While this could in principle be done directly from the catalogs produced by the pipeline, the second-order moments of the brightness distribution produced by the current version of SExtractor are sensitive to noise, which affects the measurement of object shapes. Therefore, to monitor the shape of the PSF the software developed by Kaiser et al. (1995) (hereafter KSB) has been used. It computes the shape of objects using appropriate weights in the calculation of the second-order moments. A comparison of the two algorithms shows that, while they lead to comparable results, the KSB software is more robust than SExtractor (Fig. 9). It is, however, considerably slower, and for the time being it is run in parallel to the main pipeline, for diagnostic purposes only.

  
Figure 9: Comparison of SExtractor and KSB shapes for stellar objects: the left panel shows the anisotropy structure as measured by SExtractor, the right panel that measured by the KSB algorithm for the same stars.

The typical structure of the anisotropy of the EMMI PSF is shown in the upper left panel of Fig. 10, which displays the polarization vector for stellar images in a frame with a seeing of about 1 arcsec. As can be seen, the anisotropy shows a complex structure and has a mean amplitude of $\sim$6% (lower left panel). However, the variation of the anisotropy is well represented by a second-order polynomial, and applying this correction leads to the small random residuals (rms $\sim$2%) shown in the right panels of Fig. 10. Tests have also shown that the number of suitable stars in the images of patch A is on average $\sim$40, sufficient to allow this correction to be computed even for typical survey frames.
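The polynomial correction can be sketched as follows (an illustrative reimplementation, not the KSB code itself): each component of the polarization vector is fitted with a second-order polynomial in the frame coordinates, and the fit is subtracted, leaving the random residuals:

\begin{verbatim}
import numpy as np

def correct_anisotropy(x, y, e1, e2):
    """Fit each polarization component with a second-order polynomial
    in (x, y) and return the residuals (corrected ellipticities)."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    # design matrix: 1, x, y, x^2, x*y, y^2
    A = np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])
    corrected = []
    for e in (np.asarray(e1, dtype=float), np.asarray(e2, dtype=float)):
        coeff, *_ = np.linalg.lstsq(A, e, rcond=None)
        corrected.append(e - A @ coeff)
    return corrected
\end{verbatim}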

  
Figure 10: A typical pattern of the PSF anisotropy (upper left) and the components of the polarization vector (lower left) for an EMMI I-band frame (1 arcsec seeing). Also shown are the spatial distribution (upper right) and the components of the polarization vector (lower right) after the polynomial correction described in the text.

More importantly, the EMMI images show a systematic increase in the size of the PSF along the y-direction of the CCD, while none is seen in the x-direction (Fig. 11). The difference between the size in the lower part and the upper part of the images is typically $10\%$ which, in principle, can affect the star/galaxy classification algorithm. This effect was caused by the misalignment between the primary and secondary NTT mirrors. Recently, the mirrors have been realigned and a great improvement of the image quality is expected for the observations of patch D.

  
Figure 11: Variation of the size of the PSF along the x-axis (left panel) and y-axis (right panel) of the CCD on EMMI. Note the increase in size near the top edge of the CCD. The quantity $r_g$ is the size of the stellar images as determined by the KSB algorithm. The image from which this result was derived has a seeing of $\sim$1 arcsec.

Since there were reports of problems in the optics of EMMI before its refurbishment (Erben 1996), EMMI data from 1996 (Villumsen et al. 1998) and test data taken in April 1997 have also been analyzed for comparison. The PSF anisotropy map derived from the 1996 images showed erratic behavior even for consecutive exposures taken 10 minutes apart. By contrast, EIS images using the refurbished EMMI have proven to be quite stable. In fact, under similar observing conditions the PSF anisotropy shows no strong time-dependent variations, as can be seen in Fig. 12, which displays nine consecutive frames of an EIS OB. This stability implies that the strong optical anisotropy can usually be corrected for.

  
Figure 12: Nine consecutive EIS images showing the stability of the PSF anisotropy with time.

The continuous monitoring of the EMMI PSF over an extended period of time will provide valuable information in order to better understand all the potential sources that may contribute to the anisotropy such as the telescope tracking and pointing as well as environmental effects. Even if this exercise proves to be of limited use for EIS, the implementation of these tools may be of great value for future surveys.

3.6 EIS database

A survey project like EIS collects a large number of science and calibration frames under varying conditions and produces a wealth of intermediate calibration parameters, catalogs and images. This multi-step process needs accurate monitoring as well as traceback facilities to control the progress and steer the survey as a whole. EIS uses a relational database consisting of several tables, which have been implemented in the course of the ongoing survey.

There are tables dedicated to storing parameters related to the observations, such as: 1) FITS keywords for all images delivered by the ESO Archive to the EIS data reduction group; 2) additional keywords for different types of images, which include survey frames (tiles), photometric standards, astrometric reference frames, bias, flat-fields and darks; 3) extinction data observed routinely by the Swiss telescope and delivered by the Geneva Observatory.

Another set of tables is used to control and store the results of the photometric calibration process, including: 1) additional parameters of the frames of photometric standards; 2) information and results from different reduction runs of the photometric calibration data on a frame-by-frame basis; 3) data for photometric and spectrophotometric standards, combining results from the literature and from the reduction of the calibration frames on a star-by-star basis.

Finally, there are tables that: 1) store basic information about the nights in which EIS observations have been carried out; 2) control and monitor the processing of the survey data; 3) store information about the coadded sections and the catalogs produced by the pipeline.

All these tables are related by common keys, and the underlying commercial database engine (Sybase) provides a powerful SQL dialect to manage and retrieve the stored data. Even though the EIS-DB is far from complete, its implementation is important, as one can take full advantage of the available DB engine to retrieve key information about the survey, to control the processing, and to provide a variety of statistics that can be used to fully characterize it. A full description of the EIS database will be presented elsewhere (Deul et al. 1998).

3.7 Weighting and flagging

 

3.7.1 Rationale

Because of the small number of EIS frames entering the coaddition at a given position in the sky (typically two frames), defects and other undesired features cannot be rejected through robust combination such as $\sigma$-clipping or by taking the mode of a histogram. In addition, two frames covering the same area in the sky can be observed at different epochs with very different seeing. Therefore, artifacts need to be identified on the individual images themselves and discarded from the coaddition. This can be achieved by creating maps of bad pixels.

This approach has been expanded to a more general one, which is the handling, throughout the pipeline, of a set of weight- and flag-maps associated with each science frame. Weight-maps carry the information on "how useful'' a pixel is, while flag-maps tell why that is so. Using this approach, the image processing task does not need to interpret the flags and decide whether a given feature matters or not, which improves the modularity of the pipeline.

3.7.2 Implementation

The creation of weight- and flag-maps is left to the Weight Watcher program (see "http://www.eso.org/eis''). Several images enter into the creation of the weight- and flag-map associated with each science frame. A gain-map -- which is essentially a flatfield in which the differences in electronic gain between the two read-out ports have been compensated -- provides a basic weight-map. It is multiplied by a hand-made binary mask in which regions with very strong vignetting (gain drop larger than 70%) and a $\approx 30''\times 30''$ CCD coating defect are set to zero and flagged. Bad columns stand out clearly in bias frames; the affected pixels are detected with a simple thresholding, marked accordingly in the flag-map, and again set to zero in the weight-map. A thresholding also suffices to identify saturated pixels on science frames. These various steps are shown in Fig. 13.
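The construction can be sketched as follows (flag-bit values, threshold parameters and function names are illustrative; the actual Weight Watcher conventions may differ):

\begin{verbatim}
import numpy as np

BAD_COLUMN, SATURATED, MASKED = 1, 2, 4   # illustrative flag bits

def make_weight_and_flag(gain_map, hand_mask, bias, science,
                         bad_thresh=50.0, sat_level=65000.0):
    """Combine the gain map, the hand-made binary mask, bad columns
    detected in the bias frame and saturated science pixels into a
    weight-map (zero where unusable) and a bit-coded flag-map."""
    bias = np.asarray(bias, dtype=float)
    science = np.asarray(science, dtype=float)
    weight = np.asarray(gain_map, dtype=float) * np.asarray(hand_mask, dtype=float)
    flags = np.zeros(weight.shape, dtype=np.int16)
    flags[np.asarray(hand_mask) == 0] |= MASKED
    bad = np.abs(bias - np.median(bias)) > bad_thresh   # simple thresholding
    flags[bad] |= BAD_COLUMN
    sat = science >= sat_level
    flags[sat] |= SATURATED
    weight[bad | sat] = 0.0
    return weight, flags
\end{verbatim}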

  
Figure 13: Two components of the weight maps: the hand-drawn mask (left panel), which excludes strongly vignetted regions, and the gain map (right panel) obtained from the flatfield.

3.7.3 Identification of electronic artifacts

Glitches cannot be identified as easily. These include cosmic ray impacts, "bad'' pixels and, occasionally, non-saturated features induced by the intense saturation caused by very bright stars in the field of view. Instead of designing a fine-tuned, classical algorithm for the job, a new technique has been applied, based on neural networks: a kind of "artificial retina''. The details of this "retina'' are described elsewhere (Bertin 1999); it suffices to say here that it acts as a non-linear filter whose characteristics are set through machine-learning on a set of examples. The learning is conducted on pairs of images: one is the input image, the other is a "model'', which is what one would like the input image to look like after filtering.

  
Figure 14: Left: part of an EIS image with very good seeing ($FWHM = 0.54''$). Right: retina-filtered image of the same field, highlighting the cosmic-ray impacts. Both images are displayed with a negative scale.

For the detection of glitches, a set of typical EIS images was used as input, reflecting different seeing and S/N conditions (with stronger emphasis on good-seeing images). Dark exposures containing almost nothing but read-out noise, cosmic-ray impacts and bad pixels were compiled, and a selection of more typical features induced by saturation was added to them. These "artifact images'' were then used as the model images, and were also added to the simulations to produce the input images. The first EIS images used as input already contained unidentified cosmic rays, which introduced ambiguity into the learning. The process was therefore iterated (3 iterations), using images in which the remaining obvious features were identified with a retina from a previous learning step and discarded. An example of a retina-filtered image is shown in Fig. 14.

A crude estimate (through visual inspection) of the success rate in the identification of pixels affected by glitches is $\sim$95%. The remaining 5% originate primarily from the tails of cosmic-ray impacts which are difficult to discriminate from underlying objects. The spurious detections induced by these residuals are, however, easily filtered out because of the large fraction of flagged pixels they contain (see Sect. 5.2).

3.7.4 Other artifacts

Unfortunately, there are other unwanted features that cannot easily be identified automatically: optical ghosts and satellite/asteroid trails. At this stage of the pipeline, obvious defects of this kind were identified through the systematic visual inspection of all science images (Sect. 3.3). The approximate limits of the features are stored as a polygonal description, which allows Weight Watcher to flag the related pixels and set them to zero in the weight-map.

3.8 Astrometric calibration

 

To derive accurate world coordinates for the objects extracted from EIS frames, both the pairing of extracted objects in overlap regions and the pairing of extracted objects with a reference catalog (USNO-A1) are used.

The pairing of extracted objects with the reference catalog is obtained by assuming that the image header information correctly describes the pointing center (to within 10% of the size of the image) and the pixel scale (to within 10%). Using a pattern recognition technique that allows for an unknown linear transformation, the pattern of extracted objects is matched with the pattern of reference stars, yielding corrections to the pointing center and pixel scale. These corrections are then applied and the pairing between extracted objects in the overlap regions is performed.

The unit of measurement is a set of ten consecutive, overlapping frames within which the telescope operates in a mechanically coherent manner (offset pointing within a set, versus preset pointing between sets of ten frames); this set forms the basis for the astrometric solution. Independent pointing offsets are determined for each frame, while the focal scale and distortion parameters are treated as constant or smoothly variable as a function of frame number within the set. The plate model is defined as the mapping between pixel space and normal coordinates (gnomonic projection about the field center). This mapping is a polynomial description whose parameters are allowed to vary smoothly (Chebyshev polynomial) with frame number within the set. The mapping between (x, y) pixel space and ($\zeta$, $\eta$) thus allows for flexure and other mechanical deformation of the telescope while pointing.

Using the pairing information (the association of sources observed in several frames and the association of extracted sources with the reference catalog), a least squares solution is derived in which the distances between the members of the pairs are simultaneously minimized. Weighting is done in accordance with the positional accuracy of the input data: the source extraction rms is 0.03 arcsec, while the reference catalog has an rms of order 0.3 arcsec. The astrometric least squares solution is computed iteratively. Because associations have to be derived before any astrometric calibration has been done, erroneous pairings may occur; a kappa-sigma clipping technique discards the large distance excursions (probable erroneous pairings) between iterations.
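The iterative scheme can be sketched generically as a weighted least squares solution with kappa-sigma clipping between iterations (a simplified stand-in for the actual astrometric solver; the design matrix A is assumed to encode the plate-model parameters for all pairs):

\begin{verbatim}
import numpy as np

def clipped_lstsq(A, b, weights, kappa=3.0, max_iter=10):
    """Weighted least squares with kappa-sigma clipping: pairs whose
    residual exceeds kappa times the rms are discarded and the
    solution recomputed until no further rejections occur."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    weights = np.asarray(weights, dtype=float)
    keep = np.ones(len(b), dtype=bool)
    for _ in range(max_iter):
        w = np.sqrt(weights[keep])
        x, *_ = np.linalg.lstsq(A[keep] * w[:, None], b[keep] * w, rcond=None)
        resid = b - A @ x
        rms = np.sqrt(np.mean(resid[keep] ** 2))
        new_keep = np.abs(resid) < kappa * rms
        if np.array_equal(new_keep, keep):
            break
        keep = new_keep
    return x, keep
\end{verbatim}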

From the astrometric solution it is found that the distortion of the pixel-scale is $\lesssim 0.5\%$ and the accuracy of the relative astrometry is $\sim$0.03 arcsec.

3.9 Photometric calibration

 

3.9.1 Method

Deriving a coherent photometric system for observations done in both photometric and non-photometric conditions is a challenge handled in EIS through a stepwise procedure.

The first step is to derive the relative photometry among frames in an overlapping set. In contrast to what is done for the astrometric calibration, all overlaps among frames in a contiguous sky area are used. Using the overlap pairing information, an estimate of the total extinction difference between frames can be computed. This can be expressed as a relative zero-point which, by definition, includes the effects of airmass and extinction. To limit the magnitude-difference calculation to reliable measurements, input pairs are selected based on the signal-to-noise ratio, the maximum allowed magnitude difference between members of a pair, and limiting magnitudes for the brightest and faintest usable pairs. Because a set of frames has multiple overlaps, the number of data points (frame-to-frame magnitude differences) exceeds the number of frames, so the relative zero-point of each frame can be derived simultaneously in a least squares sense. Weighting is applied based on the number of extracted objects and their fluxes. The solution is computed iteratively: the frame-based zero-points estimated in the previous iteration are applied to the magnitudes, new sets of pairs are selected, and extraneous pairs are rejected. The process stops when no new rejections are made between iterations. The internal accuracy of the derived photometric solution is $\lesssim 0.005$ mag.
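A minimal sketch of the over-determined zero-point solution (function and variable names are invented; the actual LDAC implementation differs in detail): each overlap contributes one equation $zp_i - zp_j \simeq \Delta m_{ij}$, and a constraint pins the global offset:

\begin{verbatim}
import numpy as np

def relative_zeropoints(pairs, dmag, weights, nframes):
    """Solve zp_i - zp_j = dmag_ij in the weighted least squares
    sense over all frame overlaps; the mean zero-point is pinned
    to zero to remove the global degeneracy."""
    npairs = len(pairs)
    A = np.zeros((npairs + 1, nframes))
    b = np.zeros(npairs + 1)
    w = np.sqrt(np.asarray(weights, dtype=float))
    for row, ((i, j), d, wi) in enumerate(zip(pairs, dmag, w)):
        A[row, i], A[row, j], b[row] = wi, -wi, wi * d
    A[-1, :] = 1.0                      # constraint: sum of zero-points = 0
    zp, *_ = np.linalg.lstsq(A, b, rcond=None)
    return zp
\end{verbatim}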

The second step involves correcting possible systematic photometric errors and deriving an absolute zero-point. Systematic errors are introduced by incorrect flat-fielding (stray light, pixel-scale variations, gain changes between read-out ports) or variations of the image quality. The latter has, however, been minimized by adopting an appropriate photometric estimator (Sect. 5.1.2). The correction for these systematic errors can be made using external information, such as pointed measurements from other telescopes and/or absolute zero-points from EIS measurements of Landolt standards, which are used to anchor a global photometric solution. As long as these observations cover the patch uniformly, systematic zero-point errors can be corrected by a weighted least-squares fit of a low-order polynomial to the difference between the relative zero-points derived in the previous step and the external pointed measurements. This general procedure will be adopted in the final version of the EIS data.

3.9.2 Calibration of patch A

In this preliminary release, the zero-point calibration of patch A was taken to be a simple offset, derived from a weighted average of the zero-points of the available anchors. For patch A, the available anchor candidates are: the 2.2 m data, the 0.9 m data and the EIS tiles taken during photometric nights. Given the fringing problems detected in the 2.2 m data, the latter have for the time being been discarded. For patch A, five nights were observed under photometric conditions, representing about one hundred tiles covering a wide range in declination (Fig. 5).

There are some indications of a small zero-point gradient in right ascension ($\sim$0.02 mag/deg), in agreement with the uncertainties of the flat-fields ($\sim$0.002 mag). By contrast, the behavior along the north-south direction is much less well determined, as it relies on calibrations generally carried out on different nights. With the current calibrations the systematic trend is estimated to be $\lesssim 0.2$ mag peak-to-peak, the amplitude of the measured zero-point differences, but is more likely to be of order 0.02 mag peak-to-peak, the extrapolation of the flat-field uncertainties.

The DENIS strip could have been a perfect data set to constrain the homogeneity of the zero-point in declination. However, careful examination of the DENIS standards revealed a significant variation in the zero-points for the night when this strip was observed, preventing its use to constrain possible gradients in declination of the EIS data. Note that another strip crossing the surveyed area has also been observed and an attempt will be made to retrieve it from the DENIS consortium before the final release of the EIS data.

3.10 Coaddition

Coaddition serves three main purposes: it increases the depth of the final images; it allows the suppression of artifacts which are present on some images (such as cosmic-ray hits); and it allows the creation of a "super-image" of arbitrary size to facilitate studies which would be affected by the presence of the edges of the individual frames.

A general coaddition program has been developed from the "drizzle'' method originally created to handle HST images (Fruchter & Hook 1997; Hook & Fruchter 1997). The inputs for each coaddition operation are the survey frame and a weight map which, as mentioned above, gives zero weight to bad pixels. The corner positions of each input pixel are transformed, using the astrometric solution generated by the pipeline, first into equatorial sky positions and then into pixel positions in the output super-image. The conic equal-area sky projection is used for the super-image, as it minimizes the distortion of object shapes and areas. To ease manipulation and display, the super-images are normally stored as sets of contiguous $4096\times 4096$ pixel sections.

To compute the super-image pixel values, the corners of each input pixel are projected onto the output pixel grid of the super-image to determine the overlap area. The data value from the input image is then averaged with the current values in the output super-image, using a weight derived by combining the weight of the input pixel, the weight of the current output pixel and the overlap area. The method reconstructs a map of the surface intensity of the sky as well as an output weight map which gives a measure of the statistical significance of each pixel value. A third output image, the "context map'', encodes which of the inputs were coadded at a given pixel of the output, so that properties of the super-image at that point, such as the PSF size, can be reconstructed during subsequent analysis. The values in the context map provide pointers to a list which is dynamically generated and updated during coaddition: a table of the unique image identifiers for a specified context, as well as the number of input images and the number of pixels having a given context value.
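The accumulation step can be sketched per input pixel as follows (a much-simplified illustration of the drizzle bookkeeping; the footprint, i.e. the list of output pixels overlapped by the projected input pixel together with the overlap areas, is assumed to have been computed beforehand from the transformed pixel corners):

\begin{verbatim}
import numpy as np

def drizzle_pixel(out_img, out_wht, flux, weight, footprint):
    """Deposit one input pixel onto the output grid.

    footprint: iterable of (iy, ix, area) giving the overlap of the
    projected input pixel with output pixels.  Surface intensity is
    accumulated as a weighted running mean; the output weight map
    records the total statistical weight of each output pixel."""
    for iy, ix, area in footprint:
        w_new = weight * area
        w_old = out_wht[iy, ix]
        total = w_old + w_new
        if total > 0:
            out_img[iy, ix] = (out_img[iy, ix] * w_old + flux * w_new) / total
            out_wht[iy, ix] = total
\end{verbatim}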

