A possible solution to the bandwidth problem is to reduce the amount of information carried by the sampled signal, i.e. its entropy. Independently of how this is done, the resulting compression strategy will be lossy: the reconstructed (uncompressed) signal will be corrupted with respect to the original one, degrading the experimental performance in some respect. In this sense, any lossy compression may be seen as a kind of signal rebinning with a coarser resolution (quantization step).
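As an illustrative sketch (not from the paper): coarsening the quantization step lowers the empirical entropy of a noise-like signal, which is exactly what makes the lossy scheme gain bandwidth. The mid-tread quantizer and the step values below are assumptions chosen for illustration.

```python
import math
import random
from collections import Counter

random.seed(0)

def quantize(x, q):
    """Mid-tread quantizer: snap x to the nearest multiple of the step q."""
    return q * math.floor(x / q + 0.5)

def entropy_bits(samples):
    """Empirical Shannon entropy, in bits per sample, of a discrete sequence."""
    counts = Counter(samples)
    n = len(samples)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A Gaussian noise-like stream with rms sigma = 1 (arbitrary units).
signal = [random.gauss(0.0, 1.0) for _ in range(100_000)]

# Quadrupling the quantization step removes ~2 bits of entropy per sample.
h_fine = entropy_bits([quantize(x, 0.25) for x in signal])
h_coarse = entropy_bits([quantize(x, 1.0) for x in signal])
```

Each doubling of the step removes about one bit per sample, so a coarser quantization directly lowers the rate needed after lossless entropy coding, at the price of a larger reconstruction error.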
There are at least six aspects of PLANCK-LFI operations which may be affected by a coarser quantization:
Owing to the non-linear nature of the quantization process, all of them are hard to evaluate analytically, and for this reason a dedicated simulation effort is in progress within the PLANCK-LFI collaboration (White & Seyfert 1999; Maris et al. 2000). However, a heuristic analytical evaluation of point (1) is feasible.
Quantization is equivalent to convolving the normal distribution of the input signal with the quantization operator
$$\tilde{x} = q \,\mathrm{sign}(x) \left\lfloor \frac{|x|}{q} + \frac{1}{2} \right\rfloor ,$$
where $q$ is the quantization step. If the quantization error
$$e = x - \tilde{x}$$
is uniformly distributed, its expectation is $\langle e \rangle = 0$ and its rms is $q/\sqrt{12} \simeq 0.29\,q$ (Kollár 1994). Quantization over a large number of samples
may be regarded as an extra source of noise which will enhance the variance per sample. If the quantization error is statistically independent of the quantized input signal, it may be added in quadrature to the white-noise variance $\sigma^2$, so that the total variance per sample becomes
$$\sigma_t^2 = \sigma^2 + \frac{q^2}{12} .$$
So, for example, for $q = \sigma/2$ the expected quantization rms is $q/\sqrt{12} \simeq 0.14\,\sigma$.
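The two statements above — the error statistics and the quadrature sum — can be checked with a quick Monte Carlo sketch, assuming Gaussian white noise of rms sigma and the quantizer $q\,\mathrm{sign}(x)\lfloor |x|/q + 1/2 \rfloor$; the values sigma = 1 and q = sigma/2 are illustrative choices, not the instrument's parameters.

```python
import math
import random

random.seed(1)

sigma, q, n = 1.0, 0.5, 300_000

def quantize(x, q):
    """q * sign(x) * floor(|x|/q + 1/2): round x to the nearest multiple of q."""
    return q * math.copysign(math.floor(abs(x) / q + 0.5), x)

xs = [random.gauss(0.0, sigma) for _ in range(n)]
errs = [x - quantize(x, q) for x in xs]

# Quantization-error statistics: expectation ~ 0, rms ~ q / sqrt(12).
err_mean = sum(errs) / n
err_rms = math.sqrt(sum(e * e for e in errs) / n)

# Total variance per sample of the quantized stream: ~ sigma^2 + q^2 / 12.
var_total = sum(y * y for y in (quantize(x, q) for x in xs)) / n
```

With q = sigma/2 the quantization adds only q^2/12, about 2% of the variance, i.e. roughly a 1% increase of the rms per sample.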
From error propagation, the relative error on the $C_\ell$ is (Maino 1999):
$$\frac{\delta C_\ell}{C_\ell} \simeq \frac{q^2}{12\,\sigma^2} . \qquad (11)$$
Copyright The European Southern Observatory (ESO)