
5. Performance of the coronograph


5.1. Efficiency of the coronograph on a single object

The efficiency of the coronograph can be measured by comparing images of the same object with and without the occulting mask (Fig. 4). As exposure times are much shorter in the second case, the profiles are normalized to a one-second exposure. The coronograph parameters are a 0.8 arcsec occulting mask and a Lyot stop blocking 10% of the light in the following pupil plane.

Figure 4: Top: normalized profiles of a star observed without the occulting mask (full line) and with the mask (dashed line), with the horizontal axis scaled in arcsec. Bottom: the ratio of these two profiles, on a logarithmic scale

The most spectacular effect is the gain in dynamic range. The rejection rate, i.e. the ratio of the brightest pixels of the two images, is larger than 100. As a direct consequence, it becomes possible to integrate longer in order to get a good signal-to-noise ratio (S/N) on the wings of these profiles, which in turn allows fainter emissions to be detected. We stress that the reduction procedure described above assumes that no part of the detector is saturated and that every pixel operates in its linear regime.
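For concreteness, the rejection rate can be computed directly from the two exposure-normalized frames. The following is a minimal sketch, not a description of the authors' actual code: the function and variable names are hypothetical, and it assumes the two images are available as 2-D numpy arrays together with their exposure times.

    import numpy as np

    def rejection_rate(img_no_mask, t_no_mask, img_mask, t_mask):
        """Ratio of the brightest pixels of the unocculted and occulted
        images, after normalizing both frames to a one-second exposure
        (Sect. 5.1).  Assumes no saturated pixels and a linear detector."""
        peak_no_mask = img_no_mask.max() / t_no_mask  # counts per second
        peak_mask = img_mask.max() / t_mask           # counts per second
        return peak_no_mask / peak_mask

A value larger than 100, as reported above, means the occulted peak is suppressed by at least two orders of magnitude.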

A second effect of the coronograph is the lowering, by a factor of 2, of the outer profile of the occulted star, outside the area covered by the occulting mask. This efficiency of the Lyot stop comes from the fact that the occulting mask rejects most of the light of the occulted object onto the border of the next pupil. As the delicate step of the reduction procedure is the removal of these profile wings (Sect. 4.3), this gain, specific to coronographic techniques, is valuable.

5.2. Efficiency of the whole procedure

As already seen in Sect. 4, the remaining light from the central object is removed by comparison with another star. We estimate the noise of the whole procedure by applying it to two point-like sources, or to the part of the resulting image where no emission is expected. The deviation from zero indicates the faintest detectable emission around a given occulted object. Figure 5 presents the radial dependence of this limit. With such observational parameters, the final correction by the reference object leads to a detection limit about 20 times fainter than the diffracted light of the central object at 2 arcsec. In terms of dynamic range, comparing the peak intensity of the non-occulted star with the residuals after the whole procedure, a gain of 100000 is obtained beyond 2 arcsec, and of 40000 at 1.5 arcsec.
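The radial detection-limit curve can be obtained by measuring the rms of the residual image in concentric annuli around the star position. Below is one possible sketch of that measurement; the function name and the binning in whole pixels are our own choices under that assumption.

    import numpy as np

    def radial_rms(residual, xc, yc, bin_width=1.0):
        """Rms of the residual light in annuli of width `bin_width`
        (pixels) centred on (xc, yc); cf. curve c of Fig. 5."""
        ny, nx = residual.shape
        y, x = np.indices((ny, nx))
        r = np.hypot(x - xc, y - yc)
        edges = np.arange(0.0, r.max() + bin_width, bin_width)
        radii, rms = [], []
        for lo, hi in zip(edges[:-1], edges[1:]):
            values = residual[(r >= lo) & (r < hi)]
            if values.size:                      # skip empty annuli
                radii.append(0.5 * (lo + hi))
                rms.append(np.sqrt(np.mean(values ** 2)))
        return np.array(radii), np.array(rms)

Dividing this curve by the exposure-normalized peak of the unocculted star gives the dynamic-range gain quoted above (100000 beyond 2 arcsec).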

Figure 5: Profile, on a logarithmic scale, of the rms residual light after the whole reduction procedure, as a function of the distance from the star (curve c). It indicates the detection limit in the present case which, at 2 arcsec, is 10^5 times fainter than the peak intensity of the star observed without the mask at the same angular resolution (curve a). The difference between the detection limit and the profile of the occulted star (curve b) shows the efficiency of the reference-profile subtraction. Curve d is the rms variation of the individual images with respect to the average image in a cube (for both the reference and the object); it illustrates the cumulative effects of the photon noise and the high-order turbulence modes. The discrepancy between the observed limit (curve c) and the level expected from the variations within a single cube (curve d) is due to small differences between the two observations. Beyond 2 arcsec, the detection limit is set by the precision on the sky and is related to the total exposure time

5.3. Effect of observational parameters on the performance

The derived detection limit obviously depends on the particular instrumental parameters (mask sizes, filters) and observational parameters (brightness of the star, airmass, weather conditions and their variability, exposure times, ...). These limits should therefore be considered as illustrative rather than as upper limits to what can be achieved. One may extrapolate the corresponding limits for observations under different conditions, or use them to match the instrumental possibilities to specific scientific requirements.

In a data cube, the uncertainty of the measurement is derived from the rms variations of the individual images with respect to the average image (Fig. 5d). The precision of the average image increases as the square root of the number of images. In a cube of dark- and flat-field-corrected data, the uncertainty of the signal has various origins, depending on the distance from the star. Beyond 2 arcsec from the star, in the low-flux area, the detector read-out noise dominates; it is then consistent with the uncertainty of the dark signal and the sky emission (for typical exposure times of seconds or less). Closer to the star, as the flux gets larger, the S/N improves but the absolute uncertainty per pixel also increases and becomes larger than the read-out noise. The major sources of uncertainty here are the photon noise and the irregularities on the wings of the PSF due to the imperfectly corrected high-order and fast modes of the turbulence. As a consequence, over the complete field of view, the precision of the measurement of the residual light may be directly improved by increasing the total exposure time.
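As an illustration of the square-root scaling, the per-pixel uncertainty of the averaged image can be estimated from the scatter within the cube. This sketch assumes the cube is stored as a numpy array of shape (n_images, ny, nx); the names are hypothetical.

    import numpy as np

    def cube_average_and_noise(cube):
        """Average image of a dark- and flat-field-corrected cube, and
        the rms deviation of the individual images from that average
        (cf. Fig. 5d).  The uncertainty of the average improves as
        sqrt(n_images), i.e. with the total exposure time."""
        n_images = cube.shape[0]
        avg = cube.mean(axis=0)
        rms_single = cube.std(axis=0, ddof=1)    # scatter of one image
        rms_of_avg = rms_single / np.sqrt(n_images)
        return avg, rms_of_avg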

The precision of the whole procedure (Fig. 5b), i.e. after subtraction of the reference signal, cannot be better than that of the target and the reference. This level is reached in the read-out-noise-dominated regions, i.e. beyond 2 arcsec. This means that, close to the star, the critical step of the procedure is the subtraction. As already mentioned, its imperfection may come from a globally different behaviour of the correction for the two stars, because of small differences in brightness, sky position and observation time, and from the numerical subtraction itself (determination of the multiplicative factor and offset). To investigate the very close environment of stars, one needs a long total exposure time and a close similarity between the reference and target stars. They should also be angularly very close and observed less than one hour apart, or even less when the turbulence changes rapidly. The observing strategy should then optimize the compromise between a long total exposure time and short delays between the various exposures, depending on the distance and magnitude of the emissions under investigation.
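The multiplicative factor and offset of the numerical subtraction can, for instance, be determined by a linear least-squares fit of the reference wings to the target wings. The sketch below is one possible implementation under that assumption, not the authors' actual method; `fit_region` (a boolean mask selecting pixels dominated by the stellar wings) is our own device.

    import numpy as np

    def subtract_reference(target, reference, fit_region):
        """Remove the residual stellar profile by fitting
        target ~ a * reference + b over `fit_region`, then
        subtracting the scaled, offset reference image."""
        a, b = np.polyfit(reference[fit_region], target[fit_region], 1)
        return target - (a * reference + b)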

Other important parameters are the sizes of the occulting mask and the Lyot stop. Small occulting masks are needed to investigate the very close vicinity of the star, even though this leads to lower rejection capabilities of the occulting and Lyot masks (see for instance Sect. 2; Malbet et al. 1996) and to shorter exposure times. One comment on the final angular resolution: it is critical in the whole reduction procedure because it allows the use of small masks, whether or not the observer is interested in achieving high angular resolution. If the highest possible angular resolution is not the observational priority, it is always possible, a posteriori, to degrade the resolution; the detection limit, in terms of flux per resolution element, correspondingly improves.
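Degrading the resolution a posteriori can be as simple as block-averaging the final image; the source does not specify the method, so the following is only an illustrative sketch with hypothetical names. Averaging factor x factor pixels reduces roughly independent pixel noise by about a factor of `factor`, which is the corresponding gain in detection limit per resolution element.

    import numpy as np

    def degrade_resolution(image, factor):
        """Block-average `factor` x `factor` pixels.  For pixel-to-pixel
        independent noise, the rms per (larger) resolution element drops
        by ~factor, improving the detection limit in flux per resolution
        element."""
        ny = image.shape[0] - image.shape[0] % factor  # trim to multiples
        nx = image.shape[1] - image.shape[1] % factor  # of the bin size
        blocks = image[:ny, :nx].reshape(ny // factor, factor,
                                         nx // factor, factor)
        return blocks.mean(axis=(1, 3))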

