We have seen in the previous section that the FS model allows the response curves to be adjusted with an accuracy better than 1% for more than 90% of the pixels, for uniform illumination over the whole useful dynamic range.
The two parameters of the model ($\beta$ and $\lambda$) have been adjusted pixel by pixel. No significant temporal variations have been found during the ISO mission.
We now present correction methods based on this model.
Basically, two methods are possible to correct the data for transient effects: readout by readout and block by block. One readout corresponds to the output value measured at the end of one integration. One block (typically a few to several hundred readouts) corresponds to a constant configuration of the whole system (parameters of the camera, pointing of the satellite). Taking this block structure into account may help to improve the stability of the correction and the signal-to-noise ratio.
At the present time and in this paper, we only describe methods which correct the data readout by readout. Obviously, we need to assume that the input flux is constant during each integration. This assumption does not hold during a slew of the satellite between two successive sky positions, or when a change of one instrument parameter (lens or filter wheel position) is commanded. Generally, only one or two successive readouts are affected, and they can simply be flagged, as in the sketch below.
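As a hedged illustration (the helper name, the per-readout configuration labels and the number of masked readouts are assumptions of this sketch, not part of the ISOCAM pipeline), such readouts could be flagged and excluded from the readout-by-readout correction:

```python
import numpy as np

def flag_unstable_readouts(config_id, n_mask=2):
    """Flag the first n_mask readouts after any configuration change
    (new sky position, lens or filter wheel move), where the
    constant-flux assumption does not hold.

    config_id : one integer label per readout, constant as long as the
                whole system (camera parameters, pointing) is unchanged.
    Returns a boolean mask, True where the readout should be discarded.
    """
    config_id = np.asarray(config_id)
    first_of_block = np.flatnonzero(np.diff(config_id)) + 1
    bad = np.zeros(config_id.size, dtype=bool)
    for i in first_of_block:
        bad[i:i + n_mask] = True
    return bad
```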
The correction method is based on the FS model
described in Sect. 3.
We use the same notations as in Eq. (6).
We consider a set of $N$ successive readouts $\{J_n\}_{n=0,\dots,N-1}$, in units of ADU/G/s. Each term $J_n$ can be identified with $J_n(t)$ in Eq. (6), if we take $t$ equal to the time of the end of integration number $n$, which starts at time $t_n$ (see Fig. 5).
The transient correction consists of computing, for $n$ going from 0 to $N-1$, the stabilized signal $J_\infty(n)$ from the measured responses $J_n(t)$.
Practically, we need to invert Eq. (6) in order to compute $J_\infty(n)$ from $J_n(t)$, $\beta$ and $\lambda$. Equation (6) is non-linear because of the exponential term, and there is no analytical solution.
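For concreteness, the sketch below uses a Fouks-Schubert-type step response with the two parameters $\beta$ and $\lambda$: an instantaneous jump of a fraction $\beta$ of the step followed by a stabilization governed by $\lambda$. It is only an assumed illustration of the structure of Eq. (6), not the exact expression given in Sect. 3.

```python
import numpy as np

def fs_response(J_inf, J_start, t_int, beta=0.55, lam=600.0):
    """Assumed FS-type response at the end of one integration.

    J_inf   : stabilized (input) flux, in ADU/G/s
    J_start : signal level at the start of the integration, in ADU/G/s
    t_int   : integration time, in s
    beta    : instantaneous fraction of the step (dimensionless)
    lam     : transient parameter lambda, in ADU/G
    """
    # Instantaneous jump of a fraction beta of the step, then a slow
    # stabilization towards J_inf on a time scale of order lam / J_inf.
    expo = np.exp(-J_inf * t_int / lam)
    return beta * J_inf + (1.0 - beta) * J_inf * J_start / (
        J_start + (J_inf - J_start) * expo)
```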
We have tested three different methods to find the solution: (i) the Müller root-finding method, (ii) an iterative approach, and (iii) a second-order expansion of the exponential around 0.
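As an illustration of the first method, here is a minimal sketch of Müller's root finding applied to the readout-by-readout inversion, using the hedged `fs_response` function above; the starting points, tolerance and iteration limit are arbitrary choices of this sketch.

```python
import cmath

def muller(f, x0, x1, x2, tol=1e-6, max_iter=50):
    """Muller's method: find a root of f from three starting guesses."""
    f0, f1, f2 = f(x0), f(x1), f(x2)
    for _ in range(max_iter):
        h1, h2 = x1 - x0, x2 - x1
        d1, d2 = (f1 - f0) / h1, (f2 - f1) / h2
        a = (d2 - d1) / (h1 + h2)
        b = a * h2 + d2
        disc = cmath.sqrt(b * b - 4.0 * a * f2)
        denom = b + disc if abs(b + disc) > abs(b - disc) else b - disc
        dx = (-2.0 * f2 / denom).real      # keep the real root of the parabola
        x0, x1, x2 = x1, x2, x2 + dx
        f0, f1, f2 = f1, f2, f(x2)
        if abs(dx) < tol:
            return x2
    return x2                              # may not have converged

def correct_readout(J_meas, J_start, t_int, beta=0.55, lam=600.0):
    """Estimate the stabilized flux J_inf that reproduces the measured
    readout J_meas (ADU/G/s), by inverting the assumed fs_response model."""
    g = lambda J_inf: fs_response(J_inf, J_start, t_int, beta, lam) - J_meas
    # Crude starting points around the measured value; very low or negative
    # readouts would need special care (cf. the divergences discussed below).
    return muller(g, 0.5 * J_meas, J_meas, 2.0 * J_meas)
```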
In order to study the inversion quality and its robustness to noise, we have tested these three correction methods on simulated data for different fluxes, different flux steps and different signal-to-noise ratios.
Practically, we choose an input flux history $J_\infty(n)$, for $n$ going from 0 to $N-1$. The detector response ($J_n(t)$, for $n$ going from 0 to $N-1$) is computed, readout by readout, using Eq. (6). Then we add Gaussian noise with a standard deviation $\sigma$ (in ADU/G/s).
All the values $J_n(t)$ were positive before the noise was added, but not necessarily afterwards. For $n$ going from 0 to $N-1$, we compute readout by readout the corrected response (the estimate of $J_\infty(n)$) using the three methods described above. Finally, we compare the corrected responses with the input flux history $J_\infty(n)$.
Practically, we have used the same kind of signal as in Fig. 4, with typically 600 readouts, an upward step at readout 200, a downward step at readout 400, an integration time of 2.1 s for each readout, $\beta = 0.55$ and $\lambda = 600$ ADU/G. The low levels are equal to 0.1, 0.5, 1, 2, 5 and 10 ADU/G/s, while the high levels are 5, 25, 50, 100, 250, 500 or 1000 ADU/G/s.
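A possible numerical experiment along these lines, using the hedged `fs_response` and `correct_readout` sketches above (the particular low/high pair, the noise level and the random seed are arbitrary choices here), could look like:

```python
import numpy as np

rng = np.random.default_rng(0)

t_int = 2.1                          # integration time (s)
n_read = 600
low, high = 5.0, 100.0               # one of the low/high level pairs (ADU/G/s)

# Input flux history: upward step at readout 200, downward step at readout 400.
J_true = np.full(n_read, low)
J_true[200:400] = high

# Forward model, readout by readout: the level at the start of an integration
# is the response at the end of the previous one (stabilized start assumed).
J_meas = np.empty(n_read)
level = J_true[0]
for n in range(n_read):
    J_meas[n] = fs_response(J_true[n], level, t_int)
    level = J_meas[n]

J_meas += rng.normal(0.0, 0.5, n_read)   # Gaussian noise, sigma in ADU/G/s

# Readout-by-readout correction, then comparison with the input history.
J_corr = np.empty(n_read)
level = J_meas[0]
for n in range(n_read):
    J_corr[n] = correct_readout(J_meas[n], level, t_int)
    level = J_meas[n]
print("rms error (ADU/G/s):", np.sqrt(np.mean((J_corr - J_true) ** 2)))
```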
The Müller method always gives the most accurate result, even for large flux steps and strong noise. However, it is also the slowest of the three. Divergences sometimes occur for levels below 1 ADU/G/s.
The iterative approach is very sensitive to the amplitude of the noise. On the simulations, divergence could occur for any step ratio when the noise level is too high. On real data, divergences occur quite often. We therefore conclude that this method has to be discarded.
The method based on a second-order expansion of the exponential around 0 must be used only for limited fluxes, since the argument of the exponential must be significantly smaller than 1. This method has the advantage that at low flux it never diverges, even with high-amplitude noise, and it is extremely fast.
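To see why the method is restricted to low fluxes, one can check the expansion directly; the sketch below assumes that the argument of the exponential scales as $J_\infty t_{\rm int}/\lambda$, which is the only dimensionless combination of the quantities quoted above.

```python
import numpy as np

t_int, lam = 2.1, 600.0                         # s, ADU/G
for J_inf in (0.1, 1.0, 10.0, 100.0, 1000.0):   # ADU/G/s
    x = J_inf * t_int / lam                     # argument of the exponential
    exact = np.exp(-x)
    approx = 1.0 - x + 0.5 * x**2               # second-order expansion around 0
    print(f"J_inf = {J_inf:7.1f}  x = {x:5.3f}  rel. error = {abs(approx - exact) / exact:.1e}")
```

For fluxes of a few ADU/G/s the argument stays far below 1 and the expansion is excellent, while for the highest levels considered here it is of order unity or larger and the expansion fails.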
Practically, we use this last method at small fluxes and the Müller method at high fluxes.
In any case, the computing time is typically ten times lower
than with the old IAS method (Abergel et al. 1999)
for an accuracy typically ten times better.
On simulations, the accuracy is high for each readout; on real data it remains good for each readout provided there are no problems due to glitches, spatial gradients or inaccurate dark levels.
Figures 9 and 10 present two results to illustrate the quality of the correction on real data for uniform illumination. For observations with the Circular Variable Filter (CVF), after correction the data can be used over the whole spectral range, which was generally not the case with previous methods.