The Backus-Gilbert scheme produces a one-dimensional family of solutions,
parametrised by the smoothing parameter λ. We can represent this as
a solution curve on a graph of the standard deviation of the recovered
value against the chosen measure of the width of the resolution
function: the resolution is inversely related to the width, so such a
curve illustrates the trade-off between accuracy and bias in the recovered
solution.
In general, one usually considers intermediate values of λ, around
the turning point between the two extremes. Precisely which value one
chooses depends on the relative importance of stability and resolution in
the particular problem.
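To make the trade-off concrete, the following sketch traces such a curve for a toy problem. It is a minimal illustration, not the paper's computation: the Gaussian kernels stand in for the eclipse kernels of Eq. (21), the spread functional is the standard second-moment Backus-Gilbert one rather than necessarily that of Eq. (9), and the grid, sample positions and noise level are arbitrary.

```python
import numpy as np

# Toy sketch of the Backus-Gilbert trade-off curve.  The Gaussian "kernels" below
# are hypothetical stand-ins for the eclipse kernels of Eq. (21); the spread
# functional is the standard second-moment one, which may differ from Eq. (9).
r = np.linspace(0.0, 1.0, 400)
dr = r[1] - r[0]
s = np.linspace(0.1, 1.0, 25)                                 # sample positions s_i
K = np.exp(-0.5 * ((r[None, :] - s[:, None]) / 0.08) ** 2)    # K_i(r), shape (n, nr)

sigma_noise = 1e-3
S = sigma_noise ** 2 * np.eye(len(s))         # noise covariance (white noise assumed)

r0 = 1.0                                      # target radius: the limb
R = K.sum(axis=1) * dr                        # R_i  = integral of K_i(r) dr
G = (K * (r - r0) ** 2) @ K.T * dr            # G_ij = integral of (r-r0)^2 K_i K_j dr

def bg_solution(lam):
    """Weights q(r0) minimising spread + lam * variance, with a unit-area kernel."""
    u = np.linalg.solve(G + lam * S, R)
    q = u / (R @ u)                           # enforce integral of K_hat(r; r0) = 1
    K_hat = q @ K                             # averaging kernel K_hat(r; r0)
    width = 12.0 * ((r - r0) ** 2 * K_hat ** 2).sum() * dr   # boxcar-equivalent width
    std = np.sqrt(q @ S @ q)                  # standard deviation of the estimate q.f
    return q, width, std

# Trace the trade-off curve: small lambda gives a narrow kernel but a noisy
# estimate; large lambda gives a stable but blurred one.
for lam in np.logspace(-8, 2, 11):
    _, w, sd = bg_solution(lam)
    print(f"lambda = {lam:8.1e}   kernel width = {w:.4f}   std dev = {sd:.2e}")
```

Scanning λ from small to large values traces the curve from the high-resolution, high-variance end to the stable but blurred end.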
In the present case, we are interested in measuring the polarization at the limb, and it is clear from Fig. 2 that the resolution we need can only be obtained at the cost of a high standard deviation in the recovered value. If we try to increase the accuracy, we end up no longer measuring the limb polarization proper, but rather a "blurred" value of the polarization, smoothed by convolution with the averaging kernel. If the polarization is a maximum at the limb, the bias introduced by this smoothing will reduce the estimated polarization: the more sharply the polarization falls away from the limb, the greater this biasing will be.
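The bias argument can be made explicit. In the usual Backus-Gilbert notation, which we adopt here (Eq. (7) is of this form up to notation), the noise-free estimate at the limb is a weighted average of P over the averaging kernel:

\[
\hat P(1) \;=\; \sum_i q_i f_i \;=\; \int_0^1 \hat K(r;1)\,P(r)\,\mathrm{d}r,
\qquad
\int_0^1 \hat K(r;1)\,\mathrm{d}r \;=\; 1 .
\]

If \(\hat K(r;1)\) is essentially non-negative and P(r) has its maximum at r=1, this average necessarily falls below P(1), and the deficit grows both with the width of \(\hat K\) and with the steepness with which P falls away from the limb.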
Looking at the low-λ (i.e., no-noise)
limit, we find that the
averaging kernel is sharply peaked
at r=1, so that in principle we can extract a well-resolved
limb polarization. In fact, the quality of this peak degrades
substantially for r<1: the method as presented cannot
reasonably resolve polarizations on the disk. If we wished to recover
these, we might use our knowledge of the kernels and include in the
sum in Eq. (7) only those kernels which are non-zero in [0,r].
Such a procedure would give no improvement for r=1.
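One way to read this suggestion — keeping only those kernels whose support is numerically confined to [0,r] — can be sketched as follows, continuing the toy setup above; the tolerance and the criterion itself are our own assumptions, not the paper's prescription.

```python
def kernels_confined_to(K, r, r0, tol=1e-3):
    """Indices of kernels that are numerically negligible for r > r0.

    Hypothetical criterion: a kernel counts as confined to [0, r0] if it never
    exceeds `tol` times its own maximum beyond r0.  For a target radius r0 < 1
    the Backus-Gilbert weights would then be built from K[idx], R[idx],
    G[np.ix_(idx, idx)] and S[np.ix_(idx, idx)] alone.
    """
    beyond = K[:, r > r0]
    keep = (beyond <= tol * K.max(axis=1, keepdims=True)).all(axis=1)
    return np.where(keep)[0]
```

For r0=1 every kernel is kept and the procedure reduces to the original one, consistent with the remark above.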
Figure 3: The shape of the averaging kernel.
As we increase λ, the peak broadens, but the variance of the
recovered value decreases.
Figure 4: The width of the averaging kernel against the standard deviation of the recovered value, for several values of n.
This diagram in a sense completes the Backus-Gilbert analysis of the
inverse problem, as represented by the kernel in Eq. (21), and we are
now in a position to move on to invert real or simulated data. Before we
can do that, however, we must decide what value of λ to use. To
make that decision, we must consider the level and approximate functional
form of the polarization P(r), and use this to set the scale for the
resolution and standard deviation we need to achieve. In turn, this fixes
the number of data points n we require in our data and the value of the
parameter λ we must choose in our inversion. Despite the fact that
we are invoking a particular model at this point in our analysis, we
emphasise that this introduces no practical model dependence. We are
using an approximate model purely to help us understand what counts as
"sufficiently stable" or "sufficiently well resolved"; once this
understanding is gained, the numbers we recover remain model-independent
measurements, in contrast to the results of any parameter-fitting method.
Firstly, Chandrasekhar suggests that the limb polarization is
of the order of P(1)=0.1; we therefore need a standard deviation in the
recovered value which is at least as small as this. Secondly, if we are not to have an overly biased
result, our resolution function must be narrow compared with
the width of the underlying function P(r). The resolution
we need is therefore of the order of the width over which P(r) falls away
from the limb, measured with the width functional used in Eq. (9).
Comparison with Fig. 4 indicates the values of λ and n which
should therefore give us a satisfactory recovery of P(1).
The maximum accuracy achievable for a given resolution (or vice
versa) can be read off from the graph.
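In symbols (our notation for the estimate and the width functional), the two requirements are

\[
\sigma\!\left[\hat P(1)\right] \;\lesssim\; P(1) \;\simeq\; 0.1,
\qquad
w\!\left[\hat K(\,\cdot\,;1)\right] \;\lesssim\; \Delta r_P ,
\]

where \(w\) is the width functional of Eq. (9) and \(\Delta r_P\) is the characteristic scale over which P(r) falls away from the limb; the pair \((\lambda, n)\) is then chosen so that the corresponding point on the trade-off curve of Fig. 4 satisfies both conditions.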
Figure 5: The recovered values of P(1) for three realisations of noisy simulated data with the same parameters. The horizontal line is the correct value of P(1).
Figure 5 shows the recovery of the limb polarization from three sets of simulated noisy data, as a function of the smoothing parameter. It is clear that the solution becomes more stable as greater smoothing is imposed. However, there comes a point where further smoothing does nothing to improve the solution, but merely degrades the resolution of the recovery.
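The qualitative behaviour in Fig. 5 can be mimicked with the toy setup from the earlier sketch (reusing K, r, dr, s, sigma_noise and bg_solution); the limb-peaked profile below is a hypothetical stand-in for P(r), not the paper's model.

```python
# Hypothetical limb-peaked profile and the corresponding noiseless data
# f_i = integral of K_i(r) P(r) dr on the toy grid.
P_true = 0.1 * np.exp(-(1.0 - r) / 0.05)
f_clean = K @ P_true * dr

rng = np.random.default_rng(1)
for lam in (1e-7, 1e-4, 1e-1, 1e2):
    q, width, _ = bg_solution(lam)
    recovered = [float(q @ (f_clean + rng.normal(0.0, sigma_noise, size=len(s))))
                 for _ in range(3)]           # three independent noisy realisations
    print(f"lambda = {lam:7.1e}   kernel width = {width:.3f}   "
          f"P_hat(1) = {[round(p, 3) for p in recovered]}")
```

The expected pattern is the one described above: for small λ the realisations scatter widely about the true value, while for large λ they agree closely with one another but the blurring bias grows.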
The graphs presented by Kemp et al. (1983) have 25 data
points in the eclipse phase. The same paper states that, in the Algol
primary star, 50% of the polarized flux comes from an annulus on the
limb of width less than 0.005 of the stellar radius.
Our Backus-Gilbert analysis of a spherically symmetric model system
with the Algol orbital and luminosity parameters indicates that this
limb polarization profile cannot be resolved in the Algol system. For
the eclipse coverage reported in Kemp et al. (1983) the
minimum width of the averaging kernel at the limb
is 0.0199 in the same units, even in the zero-noise limit.
This indicates that the limb polarization cannot be truly resolved with this data if the polarization profile is really as sharp as the stellar models cited by Kemp et al. predict. Thus, while the data does indicate the detection of limb polarization, it cannot reasonably be used (as was attempted by Wilson & Liou 1993) to make a quantitative estimate of the polarization profile.
The precise dependence of the resolution on the number of data points
is beyond the scope of the present paper, but we note here that
increasing the number of points from 25 to as many as 1000 does not
reduce the kernel width sufficiently (from 0.019 to
0.0144). For practical purposes, then, the
inadequate resolution is intrinsic to the Algol system and will not be
alleviated by improved observations.
Even if we assume, however, that we cannot change the number or quality of our data points (telescope time and equipment are limited, after all), we might still have the freedom to adjust the positions s_i (i.e., the times) at which we make our measurements.
In Eq. (7), we are averaging the data points f_i with a weight
vector q_i. Where q_i is relatively large, the corresponding data point
f_i contributes strongly to the average, so the recovered value is smeared
over the range of s_i for which the weights are significant, and with it
the features of the kernels K(r;s_i). If we can identify prominent
features in the vector q_i, and cluster our
measurements round the s_i they correspond to, we should be able to
decrease the spread of the averaging kernel and so increase the
resolution of the recovery.
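A crude way to locate such features in the toy setup above is sketched below; the 50% threshold is arbitrary, and the heuristic itself is our own reading of the argument.

```python
# The sample positions s_i carrying the largest weights dominate the average q.f;
# a plausible (hypothetical) heuristic is to place extra measurements near them.
q, width, _ = bg_solution(1e-4)                  # any moderate smoothing parameter
prominent = np.abs(q) > 0.5 * np.abs(q).max()    # arbitrary 50% threshold
print("cluster additional samples near s =", np.round(s[prominent], 3))
```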
Our simulations show that this rather informal argument is
valid for our case. The line in Fig. 4 captioned
"n=60 (uneven)" has the same number of data points as the
line "n=60", but with the values s_i chosen so that the
"data rate" in a band around the beginning of the eclipse is double that outside
the band. This does not significantly improve the variance of
the result (we are not gathering any more information than
before), but it does noticeably improve its resolution.
This improvement in our procedure corresponds to concentrating our measurements around the beginning of the eclipse. We might have guessed that this would be a reasonable strategy to adopt, but the Backus-Gilbert method has justified our guess, and would substitute for a lack of intuition in a more obscure situation.