3 Comparison

3.1 Quality assessment

In order to compare the different compression methods, we can use several characteristics, with constraints on these characteristics that depend on the type of application (see Table 2).


  
Table 2: List of criteria for comparison of compression methods for various types of astronomical image-based applications
[Table 2 body not recovered from the source; only the final row is legible: progressive vision & yes & yes & no & no & no.]

The progressive vision aspect is very useful in the context of quick views (for example on the Web) and catalogue overlays, where the user can decide when the quality of a displayed image is sufficient. In contrast, this feature is not required for more quantitative tasks.

The requirement of speed of display (transfer + processing time) is usually critical for applications related to progressive vision.

The estimation of the quality of a compression method and rate, compared to others, is based on how well the relevant information is restored, which always depends on the type of application. For good-quality quick views of a given area, catalogue and database overlays, and cross-correlation of sources at various wavelengths, the required quality will be essentially qualitative: good geometry of the objects, no visual artifacts, good contrast, etc.

For cross-identification processes, and for any situation requiring recalibration to improve astrometry and photometry, reprocessing of object detections where some were obviously missed, star/galaxy discrimination, or separation of distinct objects falsely merged, the quality estimation must be a quantitative process. The loss of information can be measured by the evolution of "relevant parameters" as a function of compression rate and method.

The quality criteria retained for estimating the merits and performance of a compression method fall under the following headings:

1. Visual aspect.
2. Signal-to-noise ratio.
3. Detection of real and faint objects.
4. Object morphology.
5. Astrometry.
6. Photometry.

Very few truly quantitative studies have been carried out in astronomy up to now to determine which compression method should be used. Two such studies were carried out in the framework of the ALADIN project. The first, in 1993-94, evaluated JPEG, FITSPRESS, and HCOMPRESS (Carlsohn et al. 1993; Dubaj 1994); the second, in 1996-1997 (Starck et al. 1997a; Starck et al. 1997b), compared JPEG and PMT (see Sects. 3.3 and 3.4).

3.2 Visual quality

A quick overview of each method was obtained by running all compression algorithms on two images. The first was a 256 $\times$ 256 image of the Coma cluster from an STScI POSS-I digitized plate, and the second was a 1024 $\times$ 1024 image extracted from the ESO 7992V plate digitized by CAI-MAMA (described in more detail in the next section). The visual quality was estimated from the visual aspect of the decompressed image, and from the quality of the residual (original image - decompressed image). The conclusions of this comparison are as follows. An interesting feature of the wavelet method is that the compression ratio is a user parameter. For PMT and MathMorph, the compression ratio is determined from noise modelling. For the other methods, a user parameter allows the compression ratio, and consequently the image quality, to be changed, but only iteration can lead to a given compression ratio or to a given quality.
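The residual-based part of this assessment lends itself to a short script. Below is a minimal sketch, assuming the original and decompressed frames are available as FITS files readable with astropy; the file names and the simple SNR definition are illustrative assumptions, not the exact measures used in the studies cited in this section.

import numpy as np
from astropy.io import fits  # assumed available; any FITS reader would do

def residual_and_snr(original_path, decompressed_path):
    """Compute the residual image and a simple SNR-style figure of merit.

    SNR is defined here as 10*log10(signal variance / residual variance),
    an illustrative choice, not necessarily the metric used in the cited studies.
    """
    original = fits.getdata(original_path).astype(np.float64)
    restored = fits.getdata(decompressed_path).astype(np.float64)
    residual = original - restored          # target of the visual inspection
    snr_db = 10.0 * np.log10(original.var() / residual.var())
    return residual, snr_db

# Example (hypothetical file names):
# residual, snr = residual_and_snr("coma_original.fits", "coma_jpeg40.fits")
# print(f"SNR after compression: {snr:.2f} dB")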

3.3 First ALADIN project study

We conducted two quantitative studies at CDS, within the scope of the ALADIN project, focusing on a small number of methods. We examined the effects of compression on a Schmidt photographic plate in the region of M 5 (numbered ESO 7992v), scanned with the CAI-MAMA facility. The digitized image is a mosaic of $28 \times 28$ subimages, each of $1024 \times 1024$ pixels. The sampling is 0.666 arcsec per pixel. This region was chosen because of the availability of a catalogue (Ojha et al. 1994) obtained from the same plate digitization, in which positions and blue magnitudes had been estimated for $20\,000$ stars or galaxies of magnitude 10-19. The position of each object was determined by Ojha et al. by marginal Gaussian fitting to the intensity distribution. The magnitude was determined using 120 photometric standards, which allowed the magnitude-integrated density calibration curve to be specified.
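As an illustration of this kind of marginal Gaussian fit (not the actual routine used by Ojha et al. or by MIDAS), the following minimal sketch fits a 1-D Gaussian to each marginal profile of a small image stamp; the stamp-based interface and the starting values are assumptions.

import numpy as np
from scipy.optimize import curve_fit  # assumed available

def _gaussian(x, amplitude, centre, sigma, background):
    return background + amplitude * np.exp(-0.5 * ((x - centre) / sigma) ** 2)

def marginal_gaussian_position(stamp):
    """Estimate an object's (x, y) position in a small 2-D stamp by fitting
    a 1-D Gaussian to each marginal profile (sum over rows, sum over columns)."""
    position = []
    for axis in (0, 1):                    # axis 0 -> x marginal, axis 1 -> y marginal
        profile = stamp.sum(axis=axis).astype(float)
        coords = np.arange(profile.size)
        p0 = [profile.max() - profile.min(), profile.argmax(), 2.0, profile.min()]
        params, _ = curve_fit(_gaussian, coords, profile, p0=p0)
        position.append(params[1])         # fitted centre along this axis
    return tuple(position)                 # (x, y) in stamp pixel coordinates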

To carry out our tests in a reasonable time and to avoid plate boundary effects, we analyzed 25 adjacent subimages located at the centre of the photographic plate. We stress that these test images are real and not simulated. They are representative of the images distributed by the CDS's reference image service, ALADIN. The central region used for the astrometry and photometry measurements contains about 2000 objects whose magnitude distribution (from 14 for the brightest objects to 19 for the faintest) illustrates that of the global population of the catalogue (Dubaj 1994).

Detection experiments (Carlsohn et al. 1993) have been performed to study the effect of compression on the preservation of faint objects. These were carried out on a test region, where 16 sources of estimated magnitude close to 21 were identified. Results are described in Table 3.

  
Figure 1: Comparison of the ability of the different packages to recover the position of objects according to object magnitude: the astrometrical error increases with magnitude. We recorded the limiting magnitude above which the position error exceeds the catalogue precision of 0.1 pixel.


  
Table 3: Detection of faint objects in a digitized patch of a Schmidt plate image, using the MIDAS detection routines SEARCH/INVENTORY on original and compressed/decompressed images at different compression rates: 4:1, 10:1, 20:1 and 40:1. With comparable detection parameters, and depending on compression method, faint objects can be lost or spurious objects corresponding to local maxima can be found. Visual inspection is necessary to confirm real detected objects

[Table 3 body not recovered from the source; only fragments of the final rows remain: "... 6 & 0 & 6 & 38" and "40 & 5 & 11 & 0 & 11 & 69".]

Detection errors (losses and false detections) clearly increase with the compression rate. FITSPRESS loses the most objects, while HCOMPRESS creates the most false objects. JPEG preserves faint objects slightly better, but only below a compression rate of 40:1. In Carlsohn et al. (1993), we also compared the three methods with respect to the signal-to-noise ratio and the positional and brightness errors of known objects. We concluded that JPEG is better than HCOMPRESS at low signal-to-noise ratios, and relatively similar at higher levels. Concerning the signal-to-noise ratio, the astrometry, and the photometry, JPEG and HCOMPRESS produce images of equivalent quality, but FITSPRESS is again worse than the other two methods.
The first astrometrical tests were undertaken by Dubaj and are summarized in Fig. 1.
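Returning to the detection comparison of Table 3, counts of lost and spurious objects can be obtained by cross-matching the detection list of the decompressed image against that of the original image. The following minimal sketch assumes each list is an (N, 2) array of pixel positions; the 3-pixel matching radius is an illustrative choice, and the actual SEARCH/INVENTORY-based procedure is not reproduced here.

import numpy as np
from scipy.spatial import cKDTree  # assumed available for fast neighbour queries

def detection_changes(orig_xy, decomp_xy, radius=3.0):
    """Count objects lost and spuriously created after compression.

    orig_xy, decomp_xy: (N, 2) arrays of detected positions in pixels.
    radius: matching radius in pixels (illustrative value).
    """
    tree_orig = cKDTree(orig_xy)
    tree_dec = cKDTree(decomp_xy)

    # Lost: original detections with no counterpart in the decompressed image.
    d_lost, _ = tree_dec.query(orig_xy, k=1)
    lost = int(np.sum(d_lost > radius))

    # Spurious: decompressed-image detections with no counterpart in the original.
    d_spur, _ = tree_orig.query(decomp_xy, k=1)
    spurious = int(np.sum(d_spur > radius))
    return lost, spurious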

Star/galaxy discrimination was assessed by measuring the mean density per pixel, and considering the deviation of this quantity relative to its mean value for stars with the same integrated density as the object. Sufficiently low values are taken to indicate galaxies. Applying this criterion to a subsample of around 1000 objects known a priori to be stars or galaxies led to a contamination rate of 18% on the original image and 21% to 25% on the compressed/uncompressed images (compression factor 40, for the three methods). This shows at least that morphological studies can be made on compressed/uncompressed images without substantial degradation.
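As an illustration of this discrimination criterion, the sketch below flags as galaxies the objects whose mean density per pixel falls well below the stellar value at the same integrated density. The binning of the stellar reference and the threshold are assumptions made for illustration, not the parameters actually used.

import numpy as np

def classify_star_galaxy(mean_density, integrated_density,
                         star_integrated, star_mean_density,
                         n_bins=20, threshold=-2.0):
    """Flag objects as galaxies when their mean density per pixel lies well
    below the stellar locus at the same integrated density.

    The threshold is expressed in units of the scatter of the stellar locus
    (illustrative value).
    """
    # Stellar reference: mean and scatter of the mean density per pixel,
    # in bins of integrated density built from objects known to be stars.
    edges = np.quantile(star_integrated, np.linspace(0.0, 1.0, n_bins + 1))
    star_bin = np.clip(np.digitize(star_integrated, edges) - 1, 0, n_bins - 1)
    locus_mean = np.array([star_mean_density[star_bin == b].mean() for b in range(n_bins)])
    locus_std = np.array([star_mean_density[star_bin == b].std() for b in range(n_bins)])

    # Deviation of each object from the stellar locus in its bin.
    obj_bin = np.clip(np.digitize(integrated_density, edges) - 1, 0, n_bins - 1)
    deviation = (mean_density - locus_mean[obj_bin]) / locus_std[obj_bin]
    return deviation < threshold   # True -> classified as a galaxy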

The general conclusion of this first study was that none of these methods could provide good visual quality above compression rates of 40:1 and that the standard JPEG method was ultimately not so bad, even if block artifacts appear. The available software (i.e., HCOMPRESS and FITSPRESS) developed in astronomy was not convincing in the framework of the ALADIN project. When the PMT method was proposed (Starck et al. 1996; Starck et al. 1998), a second study was carried out in order to compare JPEG and PMT. In the meantime, MathMorph was implemented and underwent the same tests before its integration into the MR/1 package.

  
Figure 2: Left: original image, a subimage extracted from the $1024 \times 1024$ patch, itself extracted from the central region of ESO7992v. Right: JPEG-compressed image at a 40:1 compression rate.

3.4 Second ALADIN project study

For the two compression methods studied here (JPEG and PMT), each implying loss of information, we have to look for a good compromise between compression rate and visual quality. In the case of JPEG, various studies (Carlsohn et al. 1993; Dubaj 1994) confirm that beyond a compression rate of 40:1 this method, when used on 12 bit/pixel images, gives rise to "blocky" artifacts. For PMT, as described in this article, the reconstruction artifacts appear at higher compression rates, beyond a rate of 260:1 in the particular case of our images. Figure 2 allows the visual quality of the two methods to be compared for test image 325; a subimage of the original image is shown on the left of Fig. 2.
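Because JPEG is driven by a quality factor rather than by a target ratio, reaching a given compression rate requires iterating on that factor, as noted in Sect. 3.2. A minimal sketch of such an iteration, using the Pillow library and a simple bisection (both of which are illustrative choices, not the tools used in the studies above), is given below.

import io
from PIL import Image  # Pillow, assumed available

def jpeg_quality_for_ratio(image, target_ratio, raw_bytes_per_pixel=2):
    """Search for the JPEG quality factor that approaches a target compression ratio.

    image: 8-bit PIL image (the 12-bit plate data would first be rescaled to one byte).
    target_ratio: desired ratio of raw size to compressed size (e.g. 40).
    """
    raw_size = image.width * image.height * raw_bytes_per_pixel
    low, high, best = 1, 95, 1
    while low <= high:
        quality = (low + high) // 2
        buf = io.BytesIO()
        image.save(buf, format="JPEG", quality=quality)
        ratio = raw_size / buf.tell()
        if ratio >= target_ratio:
            best = quality          # ratio still high enough; try a higher quality
            low = quality + 1
        else:
            high = quality - 1      # file too large; lower the quality factor
    return best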
  
Figure 3: Left: MathMorph-compressed image of the same patch, at a 203:1 compression rate. Right: PMT-compressed image at a 260:1 compression rate.

To estimate the influence of compression algorithms on astrometrical precision of the objects, we studied the error in the position of the object in the original image compared to the position in the compressed/uncompressed image. This was done for each object in the catalogue. To determine the position of the objects, we used the MIDAS (ESO 1995) software routines based on fitting of two marginal Gaussian profiles as used originally by Ojha et al. (1994) for creating the catalogue. Knowing the catalogue magnitude of the objects, we can represent the mean positional error as a function of the object magnitude. This was done for magnitude intervals of 0.25 for the 2000 objects of the dataset used. Figure 4 allows the performances of JPEG and PMT to be compared. We note that for the two methods, the error is below the systematic error of the catalogue, in particular in the interval from object magnitudes 13 to 19 where we have sufficient objects to warrant asserting a significant result. Outside that interval, our dataset does not contain enough objects to establish a mean error in the astrometry.
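The per-magnitude averaging behind Fig. 4 can be sketched as follows, assuming matched lists of positions measured on the original and on the compressed/uncompressed image, together with the catalogue magnitudes. The array names are illustrative; the 0.25 magnitude bin width and the 13-19 magnitude range follow the text.

import numpy as np

def mean_astrometric_error(xy_orig, xy_decomp, magnitudes,
                           mag_min=13.0, mag_max=19.0, bin_width=0.25):
    """Mean positional error (in pixels) per magnitude bin.

    xy_orig, xy_decomp: (N, 2) positions of the same objects measured on the
    original and on the compressed/uncompressed image.
    magnitudes: catalogue magnitude of each object.
    """
    errors = np.hypot(*(xy_orig - xy_decomp).T)   # Euclidean distance per object
    edges = np.arange(mag_min, mag_max + bin_width, bin_width)
    which = np.digitize(magnitudes, edges) - 1
    mean_err = [errors[which == b].mean() if np.any(which == b) else np.nan
                for b in range(len(edges) - 1)]
    return edges[:-1] + bin_width / 2.0, np.array(mean_err)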

  
Figure 4: Mean error in astrometry, by interval of 0.25 magnitude, for images compressed 40 times by JPEG, 260 times by PMT, and 210 times by MathMorph.

Conservation of photometric properties is also a fundamental criterion for comparing compression algorithms. We compared the integrated density of the objects in the 25 original images with the corresponding integrated densities from the images compressed/uncompressed with PMT and with JPEG. This study was carried out in three stages.

  
Figure 5: Comparison of the calibration error, by 0.0625 magnitude intervals, between the uncompressed image using JPEG, the original image, and the reference catalogue.

Figure 6: Comparison of the calibration error, by 0.0625 magnitude intervals, between the uncompressed image using PMT, the original image, and the reference catalogue.

Figure 7: Comparison of the calibration error, by 0.0625 magnitude intervals, measured on the uncompressed image using MathMorph, the original image, and the reference catalogue.

  
Figure 8: Comparison of the overall time for compression, transmission and decompression for distribution of astronomical images via the Web: the network rate is assumed to be 10 Kbits/second, and the image size is 2 MBytes ($1024 \times 1024 \times 2$ bytes). The codecs that best preserve visual quality are shown at the bottom of the graph.

With each detected object, we associate its nearest neighbour in the catalogue according to the following rule, and we assign the corresponding catalogue magnitude, $M_{\rm c}$, to the detected object: a detected object, (x, y), is associated with the closest catalogue object $(x_{\rm c},\,y_{\rm c})$, provided their distance is less than or equal to 3 pixels. This finally provides two object lists: $(x_{\rm o},\,y_{\rm o},\,d_{\rm o},\,Mc_{\rm o})$ for the original image, and $(x_{\rm r},\,y_{\rm r},\,d_{\rm r},\,Mc_{\rm r})$ for the reconstructed image. In a similar manner, the association curves between magnitude and the logarithm of the integrated density, $Mc_{\rm r} = f(d_{\rm r})$, are studied for the JPEG- and PMT-reconstructed images. To verify that the photometric values remain stable despite compression, we seek to calibrate the reconstructed images and to find a dispersion around the average position which stays close to the dispersion obtained on the calibration curve of the original image. In fact, for the various compression methods, a systematic lowering of the integrated densities is noted (Dubaj 1994), which results in the average calibration function (fitted by an order 3 polynomial) being slightly translated relative to the calibration function of the original image. To estimate the behaviour of the dispersion of the calibration curves for the two compression methods, we proceeded as follows.

We thus measure the photometric stability of the objects following compression, relative to their representation in the original image. The corresponding error curves are shown in Figs. 5, 6 and 7. The JPEG curve shows a slight increase for magnitudes above 18, and a smoothing effect for brighter objects between 14 and 16. For PMT, an increase in dispersion is noticed for high magnitudes, which corresponds to the problem of the detection of faint objects. Lowering the detection threshold from 4$\sigma$ to 3$\sigma$ does not change this. We note that below magnitude 14 the intervals contain too few objects to allow interpretation of the behaviour of very bright objects. Even if PMT brings about greater degradation in the photometry of objects, especially when the objects are faint, the errors stay close to those of the catalogue, and as such are entirely acceptable. Of course, we recall also that the compression rate used with PMT is 260:1, compared to 40:1 for JPEG.
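The catalogue matching and calibration-curve comparison described above can be sketched as follows, for detection lists containing positions and integrated densities together with catalogue positions and magnitudes. The 3-pixel matching radius and the order 3 polynomial follow the text; the array layout and the rms dispersion estimate are illustrative assumptions.

import numpy as np
from scipy.spatial import cKDTree  # assumed available

def match_to_catalogue(xy_det, density_det, xy_cat, mag_cat, radius=3.0):
    """Associate each detection with its nearest catalogue object within 3 pixels.

    Returns the matched (log integrated density, catalogue magnitude) pairs.
    """
    dist, idx = cKDTree(xy_cat).query(xy_det, k=1)
    ok = dist <= radius
    return np.log10(density_det[ok]), mag_cat[idx[ok]]

def calibration_dispersion(log_density, magnitude, degree=3):
    """Fit the magnitude vs. log(integrated density) calibration curve and
    return the polynomial coefficients and the rms dispersion around the fit."""
    coeffs = np.polyfit(log_density, magnitude, degree)
    residuals = magnitude - np.polyval(coeffs, log_density)
    return coeffs, residuals.std()

# The same two steps, applied to the original and to the reconstructed image,
# give the two calibration curves whose offset and dispersion are compared.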

3.5 Computation time

Table 4 presents the computation time required for compression and decompression on a specific platform (Sun Ultra-Enterprise, 250 MHz, 1 processor). For the JPEG, wavelet, and fractal methods, the time to convert our integer-2 FITS format to a one-byte image is not taken into account. Depending on the application, the constraints are not the same, and this table can help in choosing a method for a given project. The last column indicates whether software already exists for progressive image transmission.


  
Table 4: Compression of a 1024 $\times$ 1024 integer-2 image. Platform: Sun Ultra-Enterprise, 250 MHz, 1 processor

[Table 4 body not recovered from the source; only the final row is legible: PMT & 7.8 & 3.1 & N & 270 & Y (in Java).]

Thinking from a Web server point of view, we would like to compare the performance of the different packages under two scenarios.

