

3 The OS-EM method for LBT imaging

In the case of CT, the modification of EM provided by OS-EM consists first in grouping the projections into ordered subsets and then in applying the standard EM algorithm to each subset; one iteration of OS-EM is a single pass through all the specified subsets. The application of OS-EM to LBT is obtained by replacing the projections with the images corresponding to the different orientations of the baseline.

As we pointed out in the Introduction, we assume that we have one image for each orientation angle and a small number of orientations. In such a case a quite natural choice is to take subsets consisting of a single image each, ordered, for instance, by increasing values of the parallactic angle, corresponding to increasing values of the index j. Then the OS-EM method is as follows:
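With $\vec{h}^{(0)}=\vec{f}^{(k)}$, the pass through the p images can be sketched by the standard OS-EM recursion (our reconstruction, written in the notation $\vec{h}^{(j)}$, $\vec{A}_j$ used below; products and quotients are taken pixel by pixel):

\begin{displaymath}\vec{h}^{(j)}=\vec{h}^{(j-1)}\,\vec{A}_j^{T}\,
\frac{\vec{g}_j}{\vec{A}_j\vec{h}^{(j-1)}},\qquad j=1,\ldots,p,
\end{displaymath}

with the new estimate given by $\vec{f}^{(k+1)}=\vec{h}^{(p)}$.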

Again each iteration provides a nonnegative estimate of the unknown object which, under the normalization $\hat K_j(0,0)=1$, satisfies the condition

\begin{displaymath}
\sum_{m,n=0}^{N-1}f^{(k)}(m,n)=\sum_{m,n=0}^{N-1}g_p(m,n).
\end{displaymath} (10)

This condition does not coincide with condition (8), but it is essentially equivalent to it when the numbers of counts of the various images differ only by noise fluctuations. If large discrepancies are observed, the images can be preprocessed so that each has a number of counts equal to the arithmetic mean appearing in Eq. (8). In this way it may be possible to avoid oscillations of the restored image within one OS-EM iteration. Other methods for reducing these oscillations are proposed by Huang (1999).
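The preprocessing step just described can be sketched in a few lines of NumPy (the function name is ours, not from the paper; each image is simply rescaled so that its total number of counts equals the arithmetic mean over the p images):

```python
import numpy as np

def equalize_counts(images):
    """Rescale each image so that its total number of counts equals
    the arithmetic mean of the counts of all p images (hypothetical
    helper illustrating the preprocessing described in the text)."""
    totals = np.array([img.sum() for img in images])
    mean_counts = totals.mean()          # arithmetic mean of the p count totals
    return [img * (mean_counts / t) for img, t in zip(images, totals)]

# Example: two small "images" whose counts differ
g1 = np.array([[1.0, 2.0], [3.0, 4.0]])   # 10 counts
g2 = np.array([[2.0, 2.0], [4.0, 4.0]])   # 12 counts
g1n, g2n = equalize_counts([g1, g2])
print(g1n.sum(), g2n.sum())               # both equal the mean, 11.0
```

After this rescaling all images enter the OS-EM cycle with the same total flux, so condition (10) yields the same normalization regardless of which image closes the cycle.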

As concerns the behaviour of the likelihood function $l(\vec{f}^{(k)})$, Hudson & Larkin (1994) observed, in numerical simulations, that it is in general an increasing function of k, so that OS-EM should also converge to a maximum point of $l(\vec{f})$, at least when the images are normalized in the way indicated above.

However, the gain which can be obtained by means of OS-EM becomes evident only if one takes into account the semiconvergence property mentioned at the end of the previous section. Both EM and OS-EM have this property (for OS-EM, see Hudson & Larkin 1994, where the RMS error is given as a function of the chi-square), but OS-EM reaches the best approximation of the unknown object after a number of iterations much smaller than that required by EM. As remarked by Hudson & Larkin (1994), in the case of nonoverlapping subsets the gain is by a factor of the order of the number of subsets. Therefore in our case the gain should be by a factor p, the number of images corresponding to different orientations of the baseline, and this result is confirmed by the numerical simulations presented in the next section.

In order to appreciate the gain in computational time for the restoration of LBT images, it is important to remark that the computational cost of one LR/EM iteration and one OS-EM iteration is approximately the same. As follows from Eq. (9), one OS-EM iteration consists of a cycle through all the LBT images and therefore requires the computation of the FFTs of the p arrays $\vec{h}^{(j-1)}$ in order to compute the denominators $\vec{A}_j\vec{h}^{(j-1)}$. On the other hand, one LR/EM iteration requires the computation of the FFT of the single array $\vec{f}^{(k)}$ for computing the denominators $\vec{A}_j\vec{f}^{(k)}$. It follows that one LR/EM iteration requires the computation of 3p + 1 FFTs, while one OS-EM iteration requires the computation of 4p FFTs. Therefore, if we assume that the computation time is dominated by the number of FFTs, the time required by one OS-EM iteration is about 25% greater than that required by one LR/EM iteration. For example, our IDL code, running on a 200 MHz Pentium, requires 0.31 s per LR/EM iteration and 0.38 s per OS-EM iteration in the case of four $64\times 64$ images.
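As a concrete sketch of the cycle just described, the following minimal Python/NumPy implementation performs one OS-EM iteration with FFT-based convolution (the paper's own code was in IDL; the function name and the small positivity guard in the denominator are ours, and periodic boundary conditions are assumed):

```python
import numpy as np

def osem_iteration(f, images, psfs):
    """One OS-EM iteration: a single pass through the p images.

    f      : current estimate f^(k), nonnegative N x N array
    images : the p LBT images g_1, ..., g_p
    psfs   : the p PSFs K_1, ..., K_p (centered arrays, unit total flux,
             so that the transfer function satisfies K_hat(0,0) = 1)
    """
    h = f.copy()                                   # h^(0) = f^(k)
    for g, K in zip(images, psfs):
        Khat = np.fft.fft2(np.fft.ifftshift(K))    # transfer function of A_j
        # denominator A_j h^(j-1): two FFTs
        Ah = np.real(np.fft.ifft2(Khat * np.fft.fft2(h)))
        ratio = g / np.maximum(Ah, 1e-12)          # guard against division by 0
        # back-projection A_j^T (g_j / A_j h^(j-1)): two more FFTs
        back = np.real(np.fft.ifft2(np.conj(Khat) * np.fft.fft2(ratio)))
        h = h * back                               # multiplicative update
    return h                                       # f^(k+1) = h^(p)
```

Each pass through the loop costs four FFTs (two forward, two inverse), which gives the 4p count discussed in the text; a sanity check is that with an ideal delta-function PSF a single pass simply returns the image itself.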



Copyright The European Southern Observatory (ESO)