In order to compare the different compression
methods, we can use several characteristics, with constraints on them
depending on the type of application
(see Table 2).
Table 2:
List of criteria for comparison of compression methods for various types of
astronomical image-based applications
The progressive vision aspect is very useful in the context of quick
views (for example on the Web) and catalogue overlays, where the user can
decide when the quality of a displayed image is sufficient. On the contrary,
this feature is not required for more quantitative tasks.
The requirement of speed of display (transfer + processing
time) is usually critical for applications related to progressive
vision.
The estimation of the quality of a compression method and rate, compared to
others, is based on how well the relevant information is restored,
which always depends on the type of application. For good-quality quick
views of a given area, catalogue and database overlays, and cross-correlation
of sources at various wavelengths, the required quality will be essentially
qualitative: good object geometry, no visual artifacts, good contrast,
etc.
For cross-identification processes, and for any situation requiring
recalibration to improve astrometry and photometry, reprocessing of object
detections where some were obviously missed, star/galaxy discrimination, or
separation of distinct objects falsely merged, the quality estimation must be
quantitative. The loss of information can be measured by how "relevant
parameters" evolve with compression rate and method.
Quality criteria which can be retained for estimating the merits and performance of a compression
method fall under these headings:
1. Visual aspect.
2. Signal-to-noise ratio.
3. Detection of real and faint objects.
4. Object morphology.
5. Astrometry.
6. Photometry.
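The first two criteria can be evaluated directly from the original and decompressed images. As a minimal illustration (assuming numpy and astropy are available; the file names are placeholders, not the data of the studies discussed below), the residual image and a global signal-to-noise figure can be computed as follows:

    import numpy as np
    from astropy.io import fits

    # Load the original image and its compressed/decompressed counterpart
    # (placeholder file names).
    original = fits.getdata("original.fits").astype(float)
    restored = fits.getdata("decompressed.fits").astype(float)

    # Criterion 1: the residual image, whose visual aspect reveals artifacts.
    residual = original - restored

    # Criterion 2: a global signal-to-noise figure for the reconstruction, in dB.
    rms_error = np.sqrt(np.mean(residual ** 2))
    snr_db = 10.0 * np.log10(np.sum(original ** 2) / np.sum(residual ** 2))
    print(f"RMS error: {rms_error:.2f}  SNR: {snr_db:.1f} dB")

The remaining criteria require object detection and comparison with a reference catalogue, as done in the studies described below.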
Very few genuinely quantitative studies have been carried out in astronomy
up to now to determine which compression method should be used.
Two studies were carried out in the framework of the ALADIN project. One
was in 1993-94, when JPEG, FITSPRESS, and HCOMPRESS were evaluated
(Carlsohn et al. 1993;
Dubaj 1994)
and another in 1996-1997
(Starck et al. 1997a;
Starck et al. 1997b),
when JPEG and PMT were compared (see Sects. 3.3 and 3.4).
A quick overview of each method was obtained by running
all compression algorithms on two images. The first was a 256 × 256
image
of the Coma cluster from an STScI POSS-I digitized
plate, and the second was a 1024 × 1024 image extracted
from the ESO
7992V plate digitized by CAI-MAMA (described in more detail in the next
section). The visual
quality was estimated from the visual aspect of the decompressed image, and
from the quality of the residual (original image - decompressed image).
Conclusions from this study are:
- FITSPRESS leads to cross-like artifacts in the residual
image, a loss of faint
objects and a decrease in objects' brightness.
- JPEG cannot be used at compression ratios higher than 40. Above this,
artifacts become significant, and astrometry and photometry degrade severely.
- The fractal method cannot be used for astronomical data compression.
There are boxy artifacts, but the main problem is that object
fluxes are modified
after decompression, and the residual contains a lot of information (stars
or galaxies can be easily identified on the residual map).
- MathMorph leads to good compression ratios,
but the background estimation is delicate. For the Coma cluster, the result
was relatively poor, owing to the difficulty of estimating the background. More
sophisticated algorithms can certainly be used for this task. Another
drawback of this method is the poor recovery of object contours,
which also leads to a loss of flux.
- HCOMPRESS produces artifacts. Iterative reconstruction allows them to
be suppressed, but in this case the reconstruction takes time. However this
approach should be considered when the archived data are already
compressed with
HCOMPRESS (e.g. HST archive).
- The wavelet method produces very good results for the Coma cluster
(compression ratio of 40). For the second image, where a compression
ratio of more than 200 is
obtained with the PMT or by mathematical morphology, artifacts appear if we try
to achieve the same high compression. This method can be used, but not for
very high compression ratios.
- The pyramidal median transform produces good quality results for both
images. The compression ratio, as with
the mathematical morphology method,
depends on the content of the image: the fewer the object pixels in the
image, the higher the compression ratio.
An interesting feature of the wavelet method is that
the compression ratio is a user parameter. For PMT and MathMorph, the compression
ratio is determined from noise modelling. For the other methods, a user parameter
allows the compression ratio, and consequently the image quality, to be changed,
but only by iterating can a given compression ratio, or a given quality,
be reached.
We conducted two quantitative studies at CDS, within the scope of the
ALADIN project, focusing on a small number of methods.
The effects of compression for
a Schmidt photographic plate in the region of M 5 (numbered ESO 7992v), scanned
with the CAI-MAMA facility, were examined.
The digitized image is a mosaic of
subimages,
each of
pixels. Sampling is 0.666 arcsec per pixel.
This region was chosen because of the availability of a
catalogue (Ojha et al. 1994)
obtained from the same plate digitization, where positions and blue magnitudes
had been estimated for
stars or galaxies of magnitude 10-19.
The position of each object was ascertained by Ojha et al. by marginal
Gaussian fitting
to the intensity distribution. Magnitude was determined using 120
photometric standards, which allowed the magnitude-integrated density
calibration curve to be specified.
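As an illustration of centroiding by marginal Gaussian fitting, the following sketch (assuming numpy and scipy; the stamp extraction, windowing and background handling are assumptions, not the original catalogue code) fits a one-dimensional Gaussian to each marginal profile of a small stamp centred on an object and returns the fitted centre:

    import numpy as np
    from scipy.optimize import curve_fit

    def gaussian(x, amplitude, mean, sigma, background):
        # 1-D Gaussian plus a constant background level.
        return background + amplitude * np.exp(-0.5 * ((x - mean) / sigma) ** 2)

    def marginal_centroid(stamp):
        """Object position (x, y) from Gaussian fits to the two marginal profiles."""
        centroids = []
        for axis in (0, 1):                        # axis 0 -> x marginal, axis 1 -> y marginal
            marginal = stamp.sum(axis=axis).astype(float)
            coords = np.arange(marginal.size)
            guess = [marginal.max() - marginal.min(),
                     float(marginal.argmax()), 2.0, float(marginal.min())]
            params, _ = curve_fit(gaussian, coords, marginal, p0=guess)
            centroids.append(params[1])            # fitted mean = centroid along this axis
        return tuple(centroids)                    # (x, y) within the stamp, in pixels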
To carry out our tests in a
reasonable time and to avoid plate boundary effects, we analyzed 25 adjacent
subimages, located at the centre of the photographic plate.
We stress that these test images are real and not simulated. They are
representative
of the images distributed by the CDS's reference image service, ALADIN.
The central region used for
the astrometry and photometry measurements
contains about 2000 objects whose magnitude distribution (from 14 for the
brightest objects to 19 for the faintest) reflects that of the
global population of the catalogue
(Dubaj 1994).
Detection experiments
(Carlsohn et al. 1993) have been performed
to study the effect of compression on the preservation of faint objects.
These were carried out on a test region, where
16 sources of estimated magnitude
close to 21 were identified.
Results are described in
Table 3.
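A rough stand-in for such a detection comparison (the MIDAS SEARCH/INVENTORY routines themselves are not reproduced here; the thresholding below, with numpy, scipy and astropy assumed, is only illustrative) counts local maxima above a fixed significance level in the original and in the decompressed image:

    import numpy as np
    from astropy.io import fits
    from scipy import ndimage

    def detect(image, nsigma=4.0):
        """Pixel positions (y, x) of local maxima more than nsigma above the background."""
        background = np.median(image)
        noise = np.std(image)                                 # crude noise estimate
        peaks = image == ndimage.maximum_filter(image, size=5)
        peaks &= image > background + nsigma * noise
        return np.argwhere(peaks)

    original = fits.getdata("original.fits").astype(float)    # placeholder file names
    restored = fits.getdata("decompressed.fits").astype(float)
    print(len(detect(original)), "detections in the original,",
          len(detect(restored)), "in the decompressed image")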
Figure 1:
Comparison of the ability of the different packages to recover
the position of objects according to object magnitude:
astrometrical error increases with magnitude. We recorded the limit of
magnitude above which the position
error exceeds the catalogue precision: 0.1 pixel
Table 3:
Detection of faint objects in a digitized patch of a Schmidt plate
image, using the MIDAS detection routines SEARCH/INVENTORY on original and
compressed/decompressed images at
different compression rates: 4:1,
10:1, 20:1 and 40:1. With comparable detection parameters, and depending on
compression method, faint objects can be lost or spurious objects corresponding
to local maxima can be found. Visual inspection is necessary to confirm
real detected objects.
Detection errors (losses and false detections) clearly increase with the
compression rate. FITSPRESS loses the most objects, and HCOMPRESS creates the most
false objects. JPEG preserves faint objects slightly better, but only below a
compression rate of 40:1.
In Carlsohn et al. (1993),
we also compared the three methods with respect to
the signal-to-noise ratio, and the positional and brightness errors of known objects.
We concluded that JPEG is
better than HCOMPRESS at low signal-to-noise ratios, and relatively
similar at higher levels. Concerning the signal-to-noise ratio,
the astrometry,
and the photometry, JPEG and HCOMPRESS produce images of equivalent quality, but
FITSPRESS is again worse than the other two methods.
The first astrometrical tests were undertaken by Dubaj and are
summarized in Fig. 1.
Star/galaxy discrimination was assessed by
measuring the mean density per pixel, and considering the deviation of this
quantity relative to its mean value for stars with the same integrated
density as the
object. Sufficiently low values are taken to indicate
galaxies. Applying this
criterion to a subsample of around 1000 objects known a priori
to be stars or galaxies led to a contamination rate of 18% on the original
image and 21% to 25% on compressed/decompressed images (compression
factor 40, for
the three methods). This shows at least that morphological studies can be
made on compressed/decompressed images without substantial degradation.
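A compact sketch of this criterion (numpy assumed; the star-locus fit and the decision threshold are illustrative assumptions, not the values used in the study) is:

    import numpy as np

    def flag_galaxies(integrated_density, npix, is_known_star, threshold=-0.2):
        """Boolean array, True where the object is classified as a galaxy."""
        mean_density = integrated_density / npix                  # mean density per pixel
        # Star locus: expected mean density as a function of integrated density,
        # fitted (degree-2 polynomial, an assumption) on objects known to be stars.
        locus = np.polyfit(integrated_density[is_known_star],
                           mean_density[is_known_star], deg=2)
        deviation = mean_density - np.polyval(locus, integrated_density)
        return deviation < threshold                              # sufficiently low -> galaxy

The contamination rate is then simply the fraction of the a priori known stars and galaxies that such a criterion misclassifies.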
The general conclusion of this first
study was that none of these methods could provide good visual
quality above compression rates of 40:1 and that
the standard JPEG method was ultimately not so bad, even if block
artifacts appear. The available software (i.e., HCOMPRESS and FITSPRESS)
developed in astronomy was not convincing in the framework of the ALADIN
project. When the PMT method was
proposed (Starck et al. 1996;
Starck et al. 1998), a second
study was carried out in order to compare JPEG and PMT. In the
meantime, MathMorph was implemented and
underwent the same tests before its integration into the MR/1 package.
Figure 2:
Left: Original image, a subimage extracted from a patch taken in turn from
the central region of ESO 7992v.
Right: JPEG-compressed image at a 40:1 compression rate
For the two compression methods studied here (JPEG and PMT),
each implying loss of
information, we have to look for a good compromise between compression rate
and visual quality. In the case of JPEG, various studies
(Carlsohn et al. 1993;
Dubaj 1994)
confirm that beyond a compression rate
of 40:1 this compression method, when
used on 12 bit/pixel images, gives rise to "blocky" artifacts.
For PMT, as described in this article,
the reconstruction artifacts appear at higher compression rates,
beyond a rate of 260 in the particular case of our images.
Figure 2 allows the visual quality of the
two methods to be compared, for test image 325. A subimage of
the original image is shown in Fig. 2.
Figure 3:
Left: MathMorph-compressed image of the same patch, at a 203:1 compression rate.
Right: PMT-compressed image at a 260:1 compression rate
To estimate the influence of the compression algorithms on the astrometrical
precision of the objects, we studied, for each object in the catalogue, the
error between its position in the original image and its position in the
compressed/decompressed image. To determine the positions of the objects, we
used the MIDAS
(ESO 1995) software
routines based on the fitting of two marginal
Gaussian profiles,
as used originally by
Ojha et al. (1994) to
create the catalogue. Knowing the catalogue magnitude of the objects, we can
represent the mean positional error as a function of object
magnitude. This was done in magnitude intervals of 0.25 for the 2000
objects of the dataset used. Figure 4 allows the
performances of JPEG and PMT to be
compared. We note that for both methods the error stays below the
systematic error of
the catalogue, in particular in the interval from magnitude 13 to 19,
where we have enough objects for the result to be significant.
Outside that interval, our dataset does not contain enough objects to
establish a mean astrometric error.
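The binned statistic behind Fig. 4 can be sketched as follows (numpy assumed; the array names stand for the matched object lists and are placeholders):

    import numpy as np

    def mean_error_by_magnitude(mag, dx, dy, width=0.25):
        """Mean positional error (pixels) per magnitude interval of the given width."""
        error = np.hypot(dx, dy)                     # positional offset of each matched object
        bins = np.arange(mag.min(), mag.max() + width, width)
        index = np.digitize(mag, bins)
        centres, means = [], []
        for i in range(1, len(bins)):                # loop over the magnitude intervals
            in_bin = error[index == i]
            if in_bin.size:                          # skip intervals with no objects
                centres.append(bins[i - 1] + width / 2)
                means.append(in_bin.mean())
        return np.array(centres), np.array(means)

    # mag: catalogue magnitudes of the matched objects; dx, dy: position differences
    # between the original and the compressed/decompressed image (placeholder arrays).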
Figure 4:
Mean error in astrometry, by intervals of 0.25 magnitude, for
images compressed 40 times by JPEG, 260 times by PMT, and 210 times by MathMorph
Conservation of photometric properties is also a fundamental criterion for
comparison of compression algorithms.
We compared the integrated density of the objects in the 25 original images
with the corresponding integrated densities from the images
compressed/uncompressed with PMT and with JPEG. This study was
carried out in three
stages:
- Detection of objects in the original image and in the reconstructed
image, and calculation of the integrated densities. This stage
of the processing therefore gives a list of
objects, each characterized by (x, y, d), with (x, y) the coordinates
of the barycentre and d the logarithm of the integrated density.
A similar list describes the objects detected under the same conditions in
the reconstructed image.
- Magnitude calibration of the original image and of the reconstructed
image.
- Calculation of the error in the logarithm of the integrated
density, by magnitude interval.
Figure 5:
Comparison of the calibration error, by 0.0625 magnitude intervals,
between the image compressed/decompressed with JPEG, the original image, and the
reference catalogue
Figure 6:
Comparison of the calibration error, by 0.0625 magnitude intervals, between
the image compressed/decompressed with PMT, the original image, and the
reference catalogue
Figure 7:
Comparison of the calibration error, by 0.0625 magnitude intervals,
measured on the image compressed/decompressed with MathMorph, the original image, and the
reference catalogue
Figure 8:
Comparison of the overall time for compression, transmission and
decompression for distribution of astronomical images using the Web: the
network rate is assumed to be 10 Kbits/second, and the image size is
2 MBytes. The codecs that best preserve visual quality are shown at the
bottom of the graph
With each detected object, we associate its nearest neighbour in the catalogue
according to the following rule,
and we assign the corresponding catalogue
magnitude, Mc, to the detected object: a detected object at position (x, y)
is associated with the closest catalogue object,
provided their distance is less than or equal to 3 pixels.
This finally provides two object lists of the form (x, y, d, Mc): one for the
original image, and one for the reconstructed image.
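A possible implementation of this association rule (scipy assumed; the array names are placeholders for the detected and catalogue object lists) uses a k-d tree for the nearest-neighbour search:

    import numpy as np
    from scipy.spatial import cKDTree

    def associate(detected_xy, catalogue_xy, catalogue_mag, max_distance=3.0):
        """For each detection, the magnitude of the nearest catalogue object, or NaN."""
        tree = cKDTree(catalogue_xy)                     # catalogue positions, shape (M, 2)
        distances, indices = tree.query(detected_xy)     # nearest neighbour of each detection
        return np.where(distances <= max_distance,
                        catalogue_mag[indices], np.nan)

    # detected_xy: (N, 2) barycentres of detected objects; catalogue_xy, catalogue_mag:
    # positions and magnitudes from the reference catalogue (placeholder arrays).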
In a similar manner, the association curves between magnitude and logarithm
of the integrated density, Mc = f(d), are studied for the
JPEG- and PMT-reconstructed images.
To verify the stability of the photometric values despite the compression,
we want to calibrate the reconstructed images and to obtain calibration
curves whose dispersion around the mean position stays close to the
dispersion obtained on the calibration curve of the original image.
In fact, for the various compression methods, a systematic lowering of the
integrated densities is noted
(Dubaj 1994),
which results in the
average calibration function (fitted by a third-order polynomial) being slightly
shifted relative to the calibration function of the original image.
To estimate the behaviour of the dispersion of the calibration curve for both
compression methods, we proceeded as follows:
- Approximation of the calibration function Mc = f(d) by polynomial
regression (degree 3).
- Calculation of the mean calibration error by magnitude interval,
for the set of objects detected on the 25 subimages, i.e. about 2000 objects
in all.
Thus we measure the photometric stability of the objects following
compression, relative to their representation in the original image.
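These two steps can be sketched as follows (numpy assumed; d and mc are placeholder arrays holding the logarithm of the integrated density and the associated catalogue magnitude of the matched objects):

    import numpy as np

    def calibration_error(d, mc, width=0.0625):
        """Mean |Mc - f(d)| per magnitude interval, for a degree-3 fit of Mc = f(d)."""
        coefficients = np.polyfit(d, mc, deg=3)             # step 1: calibration function
        residual = np.abs(mc - np.polyval(coefficients, d))
        bins = np.arange(mc.min(), mc.max() + width, width)
        index = np.digitize(mc, bins)
        return [(bins[i - 1], residual[index == i].mean())  # step 2: mean error per interval
                for i in range(1, len(bins))
                if np.any(index == i)]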
The corresponding error curves are shown in Figs. 5, 6 and 7.
The JPEG curve shows a slight increase for magnitudes above
18, and a smoothing
effect for brighter objects between 14 and 16.
For PMT, an increase in dispersion is noticed at high magnitudes, which
corresponds to the problem of the detection of faint objects. Lowering the
detection threshold from 4σ to 3σ does not
change this. We note that the number of objects in the intervals below
magnitude 14 is too small
to allow interpretation of the behaviour of very bright objects.
Even if PMT brings about greater degradation in the photometry of objects,
especially when the objects are faint, the errors stay close to those of
the catalogue, and as such are entirely acceptable. Of course, we also recall
that the compression rate used with PMT
is 260:1, compared to 40:1 for JPEG.
Table 4 presents the computation time required for compression
and decompression on a specific platform (Sun Ultra-Enterprise, 250 MHz and
1 processor).
With the JPEG, wavelet, and fractal methods, the time to convert our
integer-2 FITS
format to a one-byte image is not taken into account. Depending on
the application,
the constraints differ, and this table can help in choosing a method
for a given project. The last column indicates whether software already exists for
progressive image transmission.
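For reference, a minimal sketch of such a conversion (numpy and astropy assumed; the file name and clipping cuts are arbitrary placeholders, not those used for the timings in Table 4) is:

    import numpy as np
    from astropy.io import fits

    # Linear byte-scaling of a 16-bit (integer-2) FITS image to one byte per pixel,
    # as required before JPEG compression of such data.
    data = fits.getdata("subimage.fits").astype(float)
    low, high = np.percentile(data, [1.0, 99.5])            # clipping cuts (assumed)
    scaled = np.clip((data - low) / (high - low), 0.0, 1.0)
    byte_image = (255 * scaled).astype(np.uint8)            # one-byte image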
Table 4:
Compression of a 1024 × 1024 integer-2 image.
Platform: Sun Ultra-Enterprise, 250 MHz, 1 processor
From a Web server point of view, we would like to compare
the performances of the different packages
under two scenarios:
- Archive the original and compressed images and distribute both on demand.
- Compress the data before transferring them and let the
end-user decompress them at the client side.
The second scenario has been studied and is illustrated in Fig. 8.
Considering a network rate of 10 Kbits/second and an image of 2 Mbytes, we
measured the time necessary to compress, transmit and decompress the image.
Methods are ordered from top to bottom according to increasing
visual quality of
the decompressed image. If we consider 20 seconds to be the maximum delay
the end-user can wait for an image to be delivered, only HCOMPRESS
and PMT succeed,
with fewer artifacts for PMT.
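The quantity plotted in Fig. 8 can be reproduced with simple arithmetic (the compression and decompression times below are illustrative placeholders; the measured values are those of Table 4):

    def delivery_time(image_bytes, ratio, t_compress, t_decompress,
                      rate_bits_per_s=10_000):
        """Total time to compress, transmit at the given network rate, and decompress."""
        transfer = image_bytes / ratio * 8 / rate_bits_per_s   # seconds on the network
        return t_compress + transfer + t_decompress

    # Example: a 2-Mbyte image compressed 260:1, assuming 5 s to compress and
    # 2 s to decompress (placeholder timings).
    print(delivery_time(2_000_000, 260, 5.0, 2.0))             # about 13 seconds

With the timings of Table 4 and the compression ratios quoted above, this simple estimate gives the total delivery times compared in Fig. 8.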