This paper elaborates ideas presented by Kus et al. (1992). It deconvolves interferometric data without explicit use of discrete Fourier transforms (DFTs) between image space and visibility space (frequency space), because such transforms require an equidistant rectangular grid on both sides, and the grids must be large enough to avoid aliasing effects. An observational grid of complex visibilities (CVs), in contrast, is generally neither equidistant nor rectangular nor sufficiently large.
Usually, the observed CVs in the u,v plane are interpolated onto a rectangular equidistant grid, and zero values are assumed for the CVs wherever no observations exist. The real part of the DFT of these interpolated and extended CVs is called the dirty map (DM). The image, or Clean Map (CM), is found from the dirty one by deconvolution with the dirty beam (DB). Several deconvolution methods and algorithms are available; all the usual ones work with DFTs, i.e., they perform the convolution by DFT. Two flaws of this standard procedure are easily recognized. The first is introduced by the CV interpolation: interpolation onto an equidistant grid, i.e. a constant CV density in the u,v plane, means that the measured CVs are weighted by the reciprocal of the CV density. Second, the use of DFTs between image space and visibility space implies periodicity of the maps and thus creates aliasing effects. Equivalently, CM, DB, and DM are assumed to be connected by a Fourier (circular) convolution instead of a straight convolution.
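The standard procedure described above can be sketched as follows. This is an illustrative minimal implementation, not the pipeline of any particular package: visibilities are averaged into the nearest cell of a regular grid (zero where nothing is observed) and the grid is Fourier transformed. The per-cell averaging makes the reciprocal-density weighting explicit, and the FFT imposes the periodicity mentioned in the text. The grid size `n`, cell size `du`, and normalization are assumptions chosen for the sketch.

```python
import numpy as np

def gridded_dirty_map(u, v, vis, n=64, du=1.0):
    """Sketch of the standard procedure: grid ungridded visibilities
    onto an n x n regular grid (zero where unobserved), then FFT.
    Averaging per cell weights each sample by the reciprocal of the
    local sampling density, as noted in the text."""
    grid = np.zeros((n, n), dtype=complex)
    count = np.zeros((n, n))
    iu = np.round(u / du).astype(int) % n
    iv = np.round(v / du).astype(int) % n
    for k in range(len(vis)):
        grid[iv[k], iu[k]] += vis[k]
        count[iv[k], iu[k]] += 1
    grid[count > 0] /= count[count > 0]     # cell averaging
    # Hermitian symmetry of a real sky is ignored here for brevity;
    # the real part is taken as the dirty map.
    return np.fft.ifft2(grid).real * n * n
```

Because the FFT treats the map as periodic, sources near the map edge alias into the opposite edge, which is exactly the Fourier-versus-straight-convolution distinction drawn above.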
We have devised a method that avoids these flaws. The Minimum Information Method MIM (Pfleiderer 1985) performs a least-squares fit (LSF) of the observed data. To this end, we make use of the so-called direct Fourier transform, which is well known in imaging (Sramek & Schwab 1986) but not in deconvolution.
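The direct Fourier transform evaluates the map at arbitrary image coordinates straight from the measured, ungridded visibilities, so no interpolation is needed and no periodicity is implied. A minimal sketch (sign and normalization conventions are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def direct_dirty_map(u, v, vis, ls, ms):
    """Direct (ungridded) Fourier transform of measured visibilities:
    DM(l, m) = mean_k Re[ V_k exp(2*pi*i*(u_k*l + v_k*m)) ],
    evaluated at arbitrary image coordinates (ls, ms)."""
    dm = np.zeros((len(ms), len(ls)))
    for j, m in enumerate(ms):
        for i, l in enumerate(ls):
            phase = 2.0 * np.pi * (u * l + v * m)
            # Re[V * exp(i*phase)] expanded into real arithmetic
            dm[j, i] = np.mean(vis.real * np.cos(phase)
                               - vis.imag * np.sin(phase))
    return dm
```

The cost is O(N_vis x N_pix) rather than the O(N log N) of the FFT, which is the usual reason the direct transform is avoided despite its freedom from gridding and aliasing artefacts.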
The deconvolution equations are considered as a linear system of equations connecting the maps DM, DB, and CM, which is to be solved only approximately, within the error tolerances. Overfitting is avoided by a Diophantine solution of the error-normalized equations. The method of solution is similar, but not identical, to CLEAN. Smoothing is accomplished by a local smoothing constraint (Pfleiderer 1989), which changes the DB into a beam of the Prussian-helmet type; the linear connection between CM and DM is, however, kept. The procedure is comparable to, but not identical with, the "smoothness-stabilized CLEAN" (Cornwell 1983). Our formalism is given in Sect. 2, the deconvolution algorithm in Sect. 3, the detection and deconvolution of steep features in Sect. 4, some tests in Sect. 5, and our deconvolution of old VLBI data of 3C 309.1 in Sect. 6.
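To fix ideas, the approximate solution of the linear system DM = DB * CM within an error tolerance can be sketched with a simple CLEAN-like iteration. This is an illustrative stand-in, not the authors' exact scheme: it uses a straight (non-circular) convolution and stops once the residual lies within the tolerance, rather than fitting it exactly. The gain and tolerance values are assumptions for the sketch.

```python
import numpy as np

def clean_like(dirty, beam, gain=0.1, tol=1e-3, max_iter=500):
    """CLEAN-style sketch: repeatedly subtract a scaled, shifted
    dirty beam at the residual peak until the residual is within
    the error tolerance `tol` (no overfitting below the errors)."""
    res = dirty.copy()
    model = np.zeros_like(dirty)
    n = dirty.shape[0]
    c = beam.shape[0] // 2              # beam assumed centred, peak 1
    for _ in range(max_iter):
        j, i = np.unravel_index(np.argmax(np.abs(res)), res.shape)
        peak = res[j, i]
        if abs(peak) < tol:
            break
        model[j, i] += gain * peak
        # subtract the shifted beam as a straight convolution term,
        # clipped at the map edges (no wrap-around periodicity)
        for dj in range(-c, c + 1):
            for di in range(-c, c + 1):
                jj, ii = j + dj, i + di
                if 0 <= jj < n and 0 <= ii < n:
                    res[jj, ii] -= gain * peak * beam[c + dj, c + di]
    return model, res
```

Stopping at the error tolerance, instead of driving the residual to zero, is the point of contact with the approximate solution described above; the local smoothing constraint and the Prussian-helmet beam are not reproduced in this sketch.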