2. Method used

A new method is proposed for determining the faint end of the luminosity function: LUMINOUS (LUminosity function Modelling of Image Noise to Observe Undetectable Stars). The basic principle is that if two images look sufficiently "similar", their luminosity functions are also similar.

A simulated image is created, using information derived from the observed image. This includes the noise characteristics and sensitivity of the detector, analog-to-digital converter (ADC) effects, the PSF, the diffuse sky brightness, the detected stars, and the location of cosmic ray effects (CREs) and defective pixels. By iteratively modifying the number of stars in certain magnitude intervals, the simulated image is made to look more like the observed image. Any additional stars needed to improve the agreement are placed at random positions.

To determine the similarity between the images, the histograms of pixel intensities in the images are used. The histogram has the advantage of being independent of the locations of the stars in the image; a pixel-to-pixel comparison of the simulated and real images would be sensitive to the locations of the (undetectable) faint stars.
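
As an illustration of this principle, the following sketch (in Python, with hypothetical helper names) shows what such an iterative loop could look like: the per-magnitude-bin star counts are perturbed, a new simulated image is generated with the extra stars placed at random, and the change is kept only if the histogram-based similarity improves. The perturbation and acceptance scheme shown here is an assumption made for illustration, not necessarily the scheme used by LUMINOUS.

    import numpy as np

    def luminous_iteration(observed, mag_bins, n_stars, simulate_image,
                           similarity, n_iter=200, rng=None):
        # observed       : 2-D array, the observed (raw) image
        # mag_bins       : magnitude intervals of the trial luminosity function
        # n_stars        : initial number of extra (undetected) stars per bin
        # simulate_image : callable(n_stars, rng) -> simulated raw image,
        #                  placing the extra stars at random (hypothetical)
        # similarity     : callable(observed, simulated) -> goodness of fit,
        #                  smaller is better (e.g. a histogram chi-square)
        rng = rng or np.random.default_rng()
        n_stars = np.asarray(n_stars, dtype=int).copy()
        best = similarity(observed, simulate_image(n_stars, rng))

        for _ in range(n_iter):
            trial = n_stars.copy()
            k = rng.integers(len(mag_bins))                    # pick one magnitude interval
            trial[k] = max(0, trial[k] + rng.integers(-5, 6))  # perturb its star count
            fit = similarity(observed, simulate_image(trial, rng))
            if fit < best:                                     # keep the change only if the
                n_stars, best = trial, fit                     # simulated image matches better
        return n_stars, best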

2.1. Measure of similarity

The histogram of the simulated image is compared with that of the original image. The goodness of fit of the two histograms is expressed with the $\chi^2_\nu$ parameter, defined as


\[
\chi^2_\nu = \frac{1}{N} \sum_{i=i_0}^{i_0+N} \frac{(R_i - S_i)^2}{R_i + S_i} ,
\]
where $i_0$ is the first significant intensity bin in the histograms, $N+1$ is the number of histogram bins over which $\chi^2_\nu$ is determined, and $R_i$ and $S_i$ are the counts in the $i$-th bin of the two histograms.

This $\chi^2_\nu$ measure covers the interesting range of intensities, where the contribution from bright stars is small but the effect of the undetectable and faint stars is largest. Statistically equivalent histograms should have a $\chi^2_\nu$ of unity.
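
A minimal sketch of this comparison in Python, assuming the reduced chi-square form given above (with the sum $R_i + S_i$ of the two bin counts as the variance estimate, an assumption of this sketch), could look as follows:

    import numpy as np

    def chi2_nu(obs_hist, sim_hist, i0, n_bins):
        # obs_hist, sim_hist : bin counts R_i and S_i of the observed and
        #                      simulated histograms (identical binning)
        # i0                 : index of the first significant intensity bin
        # n_bins             : N + 1, the number of bins compared
        R = np.asarray(obs_hist[i0:i0 + n_bins], dtype=float)
        S = np.asarray(sim_hist[i0:i0 + n_bins], dtype=float)
        ok = (R + S) > 0                                  # skip empty bins
        # statistically equivalent histograms give a value close to unity
        return np.sum((R[ok] - S[ok]) ** 2 / (R[ok] + S[ok])) / (n_bins - 1)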

2.2. What is being modelled?

Since the aim of LUMINOUS is to detect stars that are of the same intensity as fluctuations in the noise of the image, this noise should be modelled as accurately as possible. In the original raw image, all intensities are in the form of integer values. Subsequent image processing will change these intensities to real values, more closely representing the true intensity of the light falling on the CCD pixels. This processing includes e.g. bias subtraction and flat fielding.

With real values it is not obvious how to define a unique histogram, and the properties of the noise are changed in a complex way. In the raw image the noise is easily defined as the quadratic sum of the read-out noise and the Poisson (shot) noise. As an example, consider a part of the image with a low value for the flat field. The noise in this part will be amplified during flat fielding of the image, and if the flat field has a small systematic error, this will affect the intensities and give them a larger uncertainty. If, on the other hand, the true intensity of the image is modelled and then multiplied by the flat field, the systematic error will still affect the intensities. The Poisson noise in the intensities will add to the effect of this error, but since the value of the flat field is low and the error small, the Poisson noise will dominate and can easily be calculated.
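
The following toy numbers (illustrative only) show the difference: in the raw frame the per-pixel noise is simply the quadratic sum of the read-out and Poisson noise, whereas flat fielding rescales signal and noise together and so amplifies the noise where the flat field is low.

    import numpy as np

    ron  = 5.0      # read-out noise [e-] (illustrative value)
    n_e  = 400.0    # electrons collected in the pixel (illustrative value)
    flat = 0.5      # low local flat-field response, as in the example above

    sigma_raw = np.sqrt(ron**2 + n_e)     # noise in the raw frame
    signal_ff = n_e / flat                # flat fielding rescales the signal ...
    sigma_ff  = sigma_raw / flat          # ... and amplifies the noise with it

    print(f"raw frame    : {n_e:6.0f} +/- {sigma_raw:4.1f} e-")
    print(f"flat-fielded : {signal_ff:6.0f} +/- {sigma_ff:4.1f} e-")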

For these reasons the raw image is modelled: instead of correcting for the effects of the detector, these effects are modelled. The method chosen here is to model the physical processes taking place. An LF dictates a certain distribution of stars projected on the detector. Spatial sensitivity variations of the detector are modelled through the flat field. The photons are converted to electrons, which are read out through electronics with a certain gain factor, linearity effects, dynamic range, read-out noise, etc. The raw data are the final product. The following steps are taken when converting an input LF to a simulated histogram:

1. add the dark image;
2. add the known stars (from photometry);
3. add the unknown stars (from the input LF);
4. apply flat-field effects;
5. add Poisson noise;
6. add read-out noise;
7. convert from electrons to analog-to-digital units (ADU);
8. add the bias level;
9. add the bias image;
10. apply ADC nonlinearity effects;
11. multiply with the CRE mask;
12. create the histogram.
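
A minimal sketch of this step list, assuming simple placeholder inputs (the PSF routine, the drawing of star positions and fluxes from the LF, the ADC nonlinearity and the exact dynamic range are all simplified here), could look as follows:

    import numpy as np

    def simulate_histogram(dark, bias_image, flat, known_stars, lf_stars, psf,
                           gain, read_noise, bias_level, cre_mask, bins, rng):
        # All arguments are illustrative placeholders: 'known_stars' and
        # 'lf_stars' are (x, y, flux_in_electrons) tuples, 'psf' adds a star
        # to an electron image, 'cre_mask' flags CREs and defective pixels.
        electrons = dark.astype(float).copy()            # 1. add dark image
        for x, y, flux in known_stars:                   # 2. known stars (photometry)
            psf(electrons, x, y, flux)
        for x, y, flux in lf_stars:                      # 3. unknown stars (input LF)
            psf(electrons, x, y, flux)
        electrons *= flat                                # 4. flat-field effects
        electrons = rng.poisson(electrons).astype(float) # 5. Poisson noise
        electrons += rng.normal(0.0, read_noise,
                                electrons.shape)         # 6. read-out noise
        adu = electrons / gain                           # 7. electrons -> ADU
        adu += bias_level                                # 8. bias level
        adu += bias_image                                # 9. bias image
        adu = np.clip(np.round(adu), 0, 65535)           # 10. digitisation; a real ADC
                                                         #     nonlinearity curve would go here
        adu *= cre_mask                                  # 11. CRE mask
        hist, _ = np.histogram(adu, bins=bins)           # 12. create histogram
        return hist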

An alternative would be to carefully map the noise characteristics of the individual pixels of the image during processing. This should give the same results in the end, but the resulting effects of the noise on the histogram would complicate matters and make the comparison of two images through their histograms impractical.

2.3. Assumptions

The following assumptions have been made for the method described here:

Not all of these assumptions are critical. If any of the assumptions is invalid, the corresponding effect should be included when creating the simulated image.

2.4. Required information

A number of properties of the image should be known before LUMINOUS can be applied. These include:

Much of this information is needed in standard image processing and is usually known for the instrument used. However, the accuracy with which some of the effects are known is not always sufficient for LUMINOUS. For classical photometry the read-out noise is usually used only to determine the signal-to-noise ratio, and a value of low accuracy suffices. For LUMINOUS it should be checked whether the given value is correct and stable, since the statistical properties of the read-out noise are very important.

Except for the effects discussed, the CCD is considered perfect. This means the absence of effects such as geometric distortion, deferred charge, charge-transfer problems or trailing of the images, video noise, sub-pixel-scale sensitivity variations, residual-image effects, etc.

