
6. Discussion

At first glance it is clear that the PLANCK baseline concept will be able to extract CMBR anisotropies in the presence of contaminating foreground sources, with an accuracy and a sky coverage that will allow the detection of very small temperature fluctuations in the CMBR. The recovery of the anisotropies in the presence of the two-temperature dust does not cause any severe problems, apart from a small systematic error in the recovered parameters. This error is most likely due to the fact that the temperature of the coldest dust component in the fitting function was chosen 1 K higher than the corresponding temperature in the simulated observations. The large systematic error in the LFI result for a perfect power-law dust (in the case where the dust is fitted) is most likely caused by this modelling having too few data points to constrain the fit.
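The effect of a mismatched cold-component temperature can be illustrated with a minimal sketch of a two-temperature dust spectrum, modelled as the sum of two modified black bodies. All temperatures, amplitudes, bands, and the emissivity index below are illustrative assumptions, not the values used in the simulations:

```python
import numpy as np

H = 6.626e-34    # Planck constant [J s]
K_B = 1.381e-23  # Boltzmann constant [J/K]
C = 2.998e8      # speed of light [m/s]

def planck_bnu(nu_hz, temp_k):
    """Planck function B_nu(T) in W m^-2 Hz^-1 sr^-1."""
    x = H * nu_hz / (K_B * temp_k)
    return 2.0 * H * nu_hz**3 / C**2 / np.expm1(x)

def two_temp_dust(nu_ghz, a_warm, t_warm, a_cold, t_cold, beta=2.0):
    """Two-component modified black-body spectrum (arbitrary units)."""
    nu = nu_ghz * 1e9
    return nu**beta * (a_warm * planck_bnu(nu, t_warm)
                       + a_cold * planck_bnu(nu, t_cold))

nu = np.array([100.0, 217.0, 353.0, 545.0, 857.0])  # HFI-like bands [GHz]
true = two_temp_dust(nu, 1.0, 20.0, 50.0, 5.0)      # assumed "true" sky
model = two_temp_dust(nu, 1.0, 20.0, 50.0, 6.0)     # cold component 1 K too warm

# The mismatch is frequency dependent, so it cannot be absorbed by a simple
# rescaling of the fitted amplitudes: the residual leaks into the other
# parameters as a small systematic error.
residual = (model - true) / true
```

Since the fractional residual varies from band to band, no single amplitude adjustment can cancel it, which is why a fixed 1 K offset in the assumed cold-dust temperature shows up as a systematic bias in the fit.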

The separate analysis of the LFI and HFI concepts shows that the HFI part of the mission is a very important element in its success. This is undoubtedly because the HFI includes the most sensitive bands, placed around the peak of the CMBR black-body spectrum. These two bands are also included in the revised PLANCK baseline, indicating that they are crucial to the recovery of the CMBR anisotropies.

Including IR/FIR point sources and the Sunyaev-Zel'dovich effect in the calculations might degrade the performance of the high frequency part of the experiment and strengthen the need for lower frequency bands in order to achieve a satisfactory anisotropy extraction. This is an important point for further investigation. Furthermore, the low frequency bands are indispensable for constraining the synchrotron radiation and especially the free-free emission, by making the first all-sky maps of these galactic emission components at these frequencies with a high level of sensitivity and fairly high angular resolution. Even though the low frequency bands may not seem to be of great assistance in the single-pixel extraction process shown above, the production of such maps would undoubtedly have considerable impact on the actual data processing of PLANCK observations, not to mention the astrophysical information about our Galaxy embedded in such maps. It is important to keep in mind that the results listed above are based on the assumption that the low frequency radiation components can be fitted using power laws. This might not be the case, however, and the only way to ensure that we can describe and remove the foregrounds, no matter what the sky presents to us at these frequencies, is to have complete coverage of the entire spectral range of interest.
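The power-law assumption for the low frequency components amounts to a linear fit in log-log space, as in the following sketch. The band set, spectral index, and noise level are illustrative stand-ins, not the instrument's actual characteristics:

```python
import numpy as np

rng = np.random.default_rng(0)
nu = np.array([30.0, 44.0, 70.0, 100.0])  # low-frequency bands [GHz], assumed
beta_true = -2.9                          # assumed synchrotron-like index
amp_true = 100.0                          # amplitude at 30 GHz (arbitrary units)
signal = amp_true * (nu / 30.0) ** beta_true

# Add 1% multiplicative noise, then fit log T = log A + beta * log(nu/30 GHz).
obs = signal * (1.0 + 0.01 * rng.standard_normal(nu.size))
beta_fit, log_amp_fit = np.polyfit(np.log(nu / 30.0), np.log(obs), 1)

# With only a handful of bands the fit succeeds while the power-law
# assumption holds, but a curved spectrum would silently be forced into a
# straight line -- the risk discussed in the text.
```

The fit recovers the index well under these assumptions; the caveat is that nothing in the procedure can flag a spectrum that is not actually a power law, which is the argument for broad spectral coverage.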

The revised PLANCK configuration, which is largely the same concept as the baseline but with three bands omitted, performs almost as well as the PLANCK baseline for the reason stated above, namely that the two most sensitive bands are placed at the peak of the CMBR spectrum. The larger gaps between the frequency bands could, however, prove problematic for obtaining an accurate description of the known foreground sources and for ensuring that no unknown source of confusion lurks at the frequencies not yet studied with high sensitivity and resolution.

The PLANCK configuration based on bolometers alone also performs almost as well as the PLANCK baseline. From a scientific point of view a purely bolometer-based mission is thus clearly feasible. It is, however, important to keep in mind that a mission based only on detectors which require cryogenic cooling is more sensitive to technical problems than a mission with both radiometers and bolometers. A combined mission could most likely still operate the radiometers and obtain scientific data if the cryogenic system should fail.

With respect to the MAP concept we find that, in the case where the dust is fitted, the two-temperature dust model results in a performance comparable to the LFI concept, whereas the power-law dust results in a quite poorly constrained fit. The reason for this is most likely that MAP only covers the 22-90 GHz range, where dust emission gives only a small contribution to the total intensity. Attempting to fit a badly constrained foreground introduces large errors, as was also concluded by Brandt et al. (1994). With this in mind, the calculations where the dust is not fitted show that MAP does much better when this emission component is ignored. For the power-law dust model the results obtained are comparable to those obtained using LFI, whereas the two-temperature dust scenario performs somewhat better. Under these conditions MAP can easily deliver the intended sensitivities mentioned in Sect. 2.

The LFI concept does not seem to benefit substantially from the modelling where the dust component is not fitted. The 125 GHz band apparently stretches too far into the frequency region where the dust contribution is no longer negligible.

Since the modelling done in this study is based on the basic ideas of Brandt et al. (1994), as mentioned earlier, it is natural to compare the results of the two studies. First of all, it is important to note that Brandt et al. (1994) only carry out their calculations for the very limited ranges of foreground parameters found in a small patch of the sky. These parameter ranges are indicated in Fig. 2.

For the space-based experiments they examine two different frequency intervals, which can be compared to the LFI and HFI concepts: the range from 30 to 120 GHz and the range from 150 to 435 GHz. As in this study, they perform 100 model calculations and fits using the same instrument and foreground characteristics in order to produce a statistical sample, and they state their results in terms of the standard deviation of the difference between the input parameters and the fitted parameters.
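The statistical procedure just described can be sketched as a simple Monte Carlo: repeat the same simulated observation with fresh noise, fit each realization, and quote the standard deviation of (fitted - input). The band set, noise level, and single power-law foreground below are illustrative assumptions, not the actual simulation inputs:

```python
import numpy as np

rng = np.random.default_rng(1)
nu = np.array([30.0, 44.0, 70.0, 100.0, 143.0])  # hypothetical bands [GHz]
dt_cmb_in = 30.0                                 # input CMBR signal [uK]
fg_amp_in = 200.0                                # foreground amplitude [uK]
noise_uk = 5.0                                   # per-band noise [uK]

# Design matrix: a flat CMBR column and a nu^-2.7 power-law column
# (fixed spectral index, so the fit is linear in the two amplitudes).
design = np.column_stack([np.ones_like(nu), (nu / 30.0) ** -2.7])
truth = design @ np.array([dt_cmb_in, fg_amp_in])

errors = []
for _ in range(100):                             # 100 noise realizations
    obs = truth + noise_uk * rng.standard_normal(nu.size)
    fit, *_ = np.linalg.lstsq(design, obs, rcond=None)
    errors.append(fit[0] - dt_cmb_in)            # recovered - input CMBR

sigma_dt = np.std(errors)  # the standard deviation quoted in such studies
```

Note that the scatter in the recovered CMBR amplitude exceeds the raw per-band noise divided by the square root of the number of bands, because part of the noise is absorbed into the correlated foreground amplitude; this is the penalty for fitting foregrounds discussed throughout the section.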

For both frequency ranges stated above they find a standard deviation of the recovered CMBR signal of the order of a few µK, which compares well with the results given above for the HFI concept, whereas the LFI results are a factor of 10 larger. This can, however, be understood by looking at the noise levels assumed. In the low frequency region they operate with a noise level which is about a factor of 10 smaller than the values given for the LFI concept, while for the high frequency range the assumed noise level is almost identical to the level calculated for the HFI. Generally, the results obtained in this study are thus in agreement with the work of Brandt et al. (1994).

The main result of this work is that even extreme and poorly known foregrounds do not prevent the extraction of CMBR anisotropies with an accuracy of a few µK. It can be very useful to search for patches with low foreground contamination, but it is not imperative for the success of a CMBR anisotropy experiment like PLANCK, at least not on angular scales of about 30'.

Brandt et al. (1994) do not address the case of a mission covering the entire frequency range relevant for CMBR anisotropy searches, but from the above considerations it seems clear that a broad frequency coverage will allow a better extraction of the CMBR anisotropies with the intended accuracy.

Tegmark & Efstathiou (1996) find that a pixel-by-pixel subtraction with PLANCK data yields an accuracy in the extraction of anisotropies of a few µK at a resolution of 30', in good agreement with the results achieved in this study, whereas their optimal filtering technique yields results which are about 100 times better.

Since most of the foreground parameters are rather uncertain, and since they are definitely not independent of the position on the sky as discussed earlier, the stated gain in accuracy of a factor of about 100 is not directly achievable. In the reduction and analysis of data obtained with a mission like PLANCK, the very detailed all-sky maps, which also depict the foregrounds, could be utilized by first making a single-pixel spectral analysis as demonstrated above. One could imagine a scheme where, for each pixel, the CMBR signal is first extracted and subtracted from the data; then the dust parameters are extracted and this component is subtracted; and finally the synchrotron radiation and free-free emission parameters are extracted. Such an analysis would produce maps of the various foregrounds and of the CMBR with an accuracy at the µK level. These maps could then be used as the spectral and spatial templates needed to perform the optimal Wiener filtering technique to improve the accuracy of the determination of the CMBR anisotropies, thus combining the strengths of both methods. In the best case, the result of this combined analysis will be the mapping of the CMBR anisotropies down to the 10' scale with an accuracy of a fraction of a µK. Such measurements will enable us to determine most cosmological parameters to an accuracy of the order of one percent (see e.g. Jungman et al. 1996) and thus unambiguously discriminate between competing cosmological scenarios.
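The per-pixel separation underlying such a scheme can be sketched as follows. The spectral shapes below are illustrative placeholders (a flat CMBR spectrum and two fixed power laws), not the full parameterised emission laws fitted in this study:

```python
import numpy as np

nu = np.array([30.0, 70.0, 143.0, 353.0, 545.0])  # assumed bands [GHz]
shapes = {
    "cmb":   np.ones_like(nu),        # flat in thermodynamic units
    "dust":  (nu / 545.0) ** 3.5,     # steeply rising placeholder
    "synch": (nu / 30.0) ** -2.9,     # steeply falling placeholder
}

def separate_pixel(obs, order=("cmb", "dust", "synch")):
    """Fit all component amplitudes in one pixel, then split the observed
    spectrum into per-component contributions: CMBR first, dust second,
    synchrotron/free-free last, as in the scheme outlined in the text."""
    design = np.column_stack([shapes[n] for n in order])
    amps, *_ = np.linalg.lstsq(design, obs, rcond=None)
    components = {n: a * shapes[n] for n, a in zip(order, amps)}
    residual = obs - design @ amps
    return dict(zip(order, amps)), components, residual

true_amps = {"cmb": 30.0, "dust": 400.0, "synch": 150.0}
obs = sum(a * shapes[n] for n, a in true_amps.items())
amps, components, residual = separate_pixel(obs)
# In this noiseless toy case the amplitudes are recovered exactly; with real
# data the resulting component maps would then serve as the spectral and
# spatial templates for the Wiener filtering step.
```

Repeating this over all pixels yields the component maps described above; the design choice of fitting all amplitudes jointly (rather than truly one at a time) avoids the bias that a strict subtract-as-you-go ordering would introduce when the spectral shapes overlap.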



Copyright by the European Southern Observatory (ESO)