
# 3 Mathematical development

## 3.1 Signal model

The signal that we propose to analyse is the full-disk measurement from GOLF [Gabriel et al. 1995]. It is regularly sampled and does not present any gaps. This signal is known [Lazrek et al. 1997] to contain the l=0 to 3 modes and, marginally, l=4 and l=5. Higher-degree modes are averaged out and remain well below the noise level, in particular in the lower frequency range.

The signal is down-sampled to 80 s after being filtered above 12 mHz to avoid aliasing. A small bandwidth around an l=1 mode contains only a known number of modes. The wings of the modes surrounding the bandwidth studied can be regarded as additional white noise.

It can easily be demonstrated that, provided the sampling frequency is large compared with the mode frequency and linewidth, the samples from the continuous model (1) verify a second-order autoregressive equation:

 x_n(k) = -a_1(k) x_{n-1}(k) - a_2(k) x_{n-2}(k) + e_n(k), (2)

where e_n(k) is a Gaussian white noise excitation. The index k labels the mode. This can be proved using the filter representation of the process and converting the analogue filter to a discrete filter using the backward approximation. Various statistical justifications of this model can be found, for example, in [Priestley 1994].
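As a numerical illustration (our own sketch, not part of the original analysis; the frequency, damping and sampling values below are arbitrary placeholders, not GOLF values), a mode obeying recursion (2) can be simulated directly, with the AR(2) coefficients obtained by placing a conjugate pole pair at radius exp(-gamma*dt) and angle 2*pi*nu*dt:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative values: mode frequency nu, damping gamma, sampling step dt
nu, gamma, dt = 3.0e-3, 1.0e-6, 80.0        # Hz, s^-1, s
r, omega = np.exp(-gamma * dt), 2 * np.pi * nu * dt

# AR(2) coefficients of the discretised mode: poles at r*exp(+/- i*omega)
a1, a2 = -2 * r * np.cos(omega), r ** 2

# Stability: both roots of z^2 + a1 z + a2 lie inside the unit circle
assert np.all(np.abs(np.roots([1.0, a1, a2])) < 1.0)

# x_n = -a1 x_{n-1} - a2 x_{n-2} + e_n, with Gaussian white excitation e_n
N = 20000
e = rng.standard_normal(N)
x = np.zeros(N)
for n in range(2, N):
    x[n] = -a1 * x[n - 1] - a2 * x[n - 2] + e[n]
```

The lightly damped pole pair strongly amplifies the white excitation around the resonant frequency, which is the Lorentzian mode profile seen in the power spectrum.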

It is worth noting that the width of the analysed frequency bands corresponds to an innovation rate of 2 days. The hypothesis of temporal independence of the excitation signal e_n(k) is justified by the statistical analysis of the mode power already mentioned in Sect. 2, and in particular by the absence of correlation found by Foglizzo et al.

Finally, we assume that the measured signal y_n is the sum of L modes and a Gaussian white noise b_n with variance sigma^2:

 y_n = sum_{k=1..L} x_n(k) + b_n. (3)

We will assume herein that sigma^2 is known. The set of N measurements will be referred to as Y.

In the model (2,3) the correlation between the different mode excitations is introduced by the covariance of the input terms. Let e_n = [e_n(1), ..., e_n(L)]^T be the vector of excitations and Q its covariance: Q = E[e_n e_n^T]. The correlation coefficient between the excitations of modes i and j is defined by c_ij = Q_ij / sqrt(Q_ii Q_jj).
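To make the role of Q concrete, the following sketch (with a hypothetical 2x2 covariance matrix, following the definitions above) draws excitation vectors with a prescribed covariance via a Cholesky factor and recovers the correlation coefficient c_12 = Q_12 / sqrt(Q_11 Q_22):

```python
import numpy as np

rng = np.random.default_rng(1)
L = 2  # two modes, illustrative

# Target excitation covariance Q (positive definite) for the vector e_n
Q = np.array([[1.0, 0.5],
              [0.5, 2.0]])

# Draw N excitation vectors with covariance Q using the Cholesky factor
N = 200000
C = np.linalg.cholesky(Q)
e = rng.standard_normal((N, L)) @ C.T

Q_hat = e.T @ e / N                                        # empirical covariance
corr = Q_hat[0, 1] / np.sqrt(Q_hat[0, 0] * Q_hat[1, 1])    # estimate of c_12
```

Here `corr` converges to 0.5 / sqrt(2) ~ 0.35 as N grows.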

The problem consists in the estimation of the coefficients of the autoregressive processes, denoted a = [a_1(1), a_2(1), ..., a_1(L), a_2(L)]^T, and of the covariance matrix Q.

## 3.2 Statement of the problem

An immediate solution is to resort to maximum likelihood estimation. The process y_n is Gaussian, and the vector Y has covariance matrix Gamma(a,Q) = E[Y Y^T]. Consequently, the estimated parameters are the global maximisers of the log-likelihood function or equivalently, after dropping the constant terms, of the function:

 J(a,Q) = -log det Gamma(a,Q) - Y^T Gamma(a,Q)^{-1} Y. (4)

Unfortunately, the maximisation of (4) by an optimisation algorithm is not tractable because:

• the dimension of Gamma leads to a large computational cost for the evaluation of (4), and the non-linear dependence between the unknown parameters and Gamma leads to a cost function that is highly non-linear, see Appendix A;
• the maximisation must be performed under the constraints that the autoregressive models are stable and that the matrix Q is positive semi-definite. Whereas the first constraint can be removed by a proper reparametrisation of a, no simple expression of the second constraint is available.
A suboptimal solution is to minimise a distance between an order-M covariance matrix Gamma_M(a,Q) and its estimate computed from the y_n. This solution significantly reduces the computational cost. If the cost function is defined as: (5)
it can be shown that its dependence on the elements of Q is quadratic, see Appendix A. However, the problems related to the non-linear dependence on a and to the constraint on Q remain.

A major difficulty in this estimation problem comes from the coupling between the two sets of parameters, a and Q. To address this, we propose a two-stage approach that first estimates a and then uses this result for the estimation of Q. Besides the different nature of a and Q, this solution can be justified by the strong prior knowledge available on the shape of the modes, which can be incorporated in the first stage. Another argument is given by an analogy with the pure autoregressive case, where the power of the excitation is generally computed from the autoregressive parameters and the data [Stoica & Moses 1997].

## 3.3 Estimation of the dynamical parameters

To this end, we will derive a new statistical representation of y_n, i.e. a signal with the same autocorrelation sequence as y_n. Consider the signal z_n obtained by filtering y_n by the product of the mode polynomials:

 z_n = H(z) y_n,  with  H(z) = prod_{k=1..L} (1 + a_1(k) z^-1 + a_2(k) z^-2). (6)

Hypothesis (2,3) implies that z_n is the sum of each mode excitation (and of the noise b_n) filtered by factors of H(z), and can be expressed:

 z_n = sum_{k=1..L} H_k(z) e_n(k) + H(z) b_n (7)
     = sum_{k=1..L} sum_{j=0..2(L-1)} h_j(k) e_{n-j}(k) + sum_{j=0..2L} h_j b_{n-j}, (8)

where the h_j are the coefficients of H(z), with:

 H_k(z) = prod_{j<>k} (1 + a_1(j) z^-1 + a_2(j) z^-2) = sum_{j=0..2(L-1)} h_j(k) z^-j. (9)

Equation (8) implies r_z(l) = 0 for |l| > 2L. Then z_n has a moving average representation of order 2L [Stoica & Moses 1997, p. 102]:

 z_n = sum_{j=0..2L} c_j eps_{n-j}, (10)

where eps_n is a white noise. Consequently, y_n has an autoregressive moving average representation where the coefficients of the autoregressive part equal the coefficients of H(z):

 H(z) y_n = sum_{j=0..2L} c_j eps_{n-j}. (11)
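The cut-off of the autocorrelation of z_n beyond lag 2L can be checked numerically. The sketch below (our own illustration, with arbitrary AR coefficients not fitted to solar data) builds y_n from L = 2 simulated modes plus white noise, filters it by H(z), and estimates the normalised autocorrelation of z_n:

```python
import numpy as np

rng = np.random.default_rng(2)
N, sigma = 200000, 0.02

# Two illustrative stable AR(2) modes, given as (a1, a2) pairs
modes = [(-1.5, 0.9), (-0.2, 0.8)]

y = sigma * rng.standard_normal(N)         # white observation noise b_n
for a1, a2 in modes:
    e = rng.standard_normal(N)
    x = np.zeros(N)
    for n in range(2, N):
        x[n] = -a1 * x[n - 1] - a2 * x[n - 2] + e[n]
    y += x

# H(z): product of the per-mode polynomials, i.e. an FIR filter of order 2L
h = np.array([1.0])
for a1, a2 in modes:
    h = np.convolve(h, [1.0, a1, a2])
z = np.convolve(y, h, mode="valid")        # z_n = H(z) y_n

# Normalised autocorrelation of z at lags 0..7: near zero beyond lag 2L = 4
m = len(z) - 8
acf = np.array([np.dot(z[:m], z[lag:lag + m]) for lag in range(8)])
acf = acf / acf[0]
```

Lags 1 to 4 show genuine correlation, while lags above 2L = 4 are zero up to estimation noise, as the moving average representation (10) predicts.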
This result suggests the following algorithm for the estimation of the parameter vector a:
1. Apply to y_n an ARMA estimation algorithm in order to estimate its autoregressive part, H(z).
2. Compute the filters 1 + a_1(k) z^-1 + a_2(k) z^-2 by combining the L pairs of conjugate roots of H(z).

We propose to estimate H(z) using the subspace parameter estimation algorithm described in [Stoica & Moses 1997, p. 115]. This method is based on the singular value decomposition of the estimated block-Hankel matrix of covariances, whose element (i,j) is the estimated autocorrelation of y_n at lag |i+j-1|. Let S be the diagonal matrix made from the 2L principal singular values of this matrix and U the matrix containing the associated 2L left singular vectors. Let U_1 and U_2 be the matrices made from the first and the last M-1 rows of U, respectively. The roots of H(z) are then given by the eigenvalues of the matrix Phi, the least-squares solution of the system U_1 Phi = U_2.
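A minimal numerical sketch of this subspace step (our own illustration, with a single mode, M = 10 and arbitrary AR coefficients; the text follows the estimator of Stoica & Moses) recovers the roots of H(z) from the Hankel matrix of sample covariances. Note that lag 0, which is biased by the white observation noise, is not used:

```python
import numpy as np

rng = np.random.default_rng(3)
N, M, L = 200000, 10, 1

# One illustrative AR(2) mode plus white observation noise
a1, a2 = -1.5, 0.9
e = rng.standard_normal(N)
x = np.zeros(N)
for n in range(2, N):
    x[n] = -a1 * x[n - 1] - a2 * x[n - 2] + e[n]
y = x + 0.1 * rng.standard_normal(N)

# Sample autocovariances at lags 1 .. 2M-1
r = np.array([np.mean(y[:N - k] * y[k:]) for k in range(1, 2 * M)])

# M x M Hankel matrix of covariances, element (i,j) = r(i+j-1), 1-indexed
Rh = np.array([[r[i + j] for j in range(M)] for i in range(M)])

# 2L principal left singular vectors, then the shift-invariance step:
# least-squares solution of U_1 Phi = U_2, whose eigenvalues are the roots
U = np.linalg.svd(Rh)[0][:, :2 * L]
Phi = np.linalg.lstsq(U[:-1], U[1:], rcond=None)[0]
poles = np.linalg.eigvals(Phi)             # estimates of the roots of H(z)
```

The estimated conjugate pair converges to the true poles of the mode polynomial z^2 + a1 z + a2; the SVD truncation to 2L singular values is what rejects the estimation noise in the Hankel matrix.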

## 3.4 Estimation of the excitation covariance

The second step of the method consists in replacing a by its estimated value and estimating the covariance Q of the excitation vector e_n. The derivation of an efficient algorithm for this requires a signal model that explicitly involves this vector process. A solution is to use a state space representation of y_n.

It can easily be verified that, up to a shift in the excitation, each mode can be represented by the following state space model:

 s_{n+1}(k) = F(k) s_n(k) + G(k) e_n(k),  x_n(k) = H(k) s_n(k), (12)

with:

 s_n(k) = [x_{n-1}(k), x_n(k)]^T,  F(k) = [[0, 1], [-a_2(k), -a_1(k)]],  G(k) = [0, 1]^T,  H(k) = [0, 1].

Consequently, a state space representation of y_n is obtained by the concatenation of the previous models:

 s_{n+1} = F s_n + G e_n,  y_n = H s_n + b_n, (13)

where F is the block diagonal matrix with the matrices F(k) on the diagonal, H is the row vector obtained by concatenation of the H(k), and G is the 2L x L matrix having as non-zero elements G(2k,k) = 1, k = 1, ..., L. The unobserved set of states will be referred to as S.
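The concatenated matrices can be built mechanically. The sketch below assumes a companion form for each 2x2 block F(k), with the excitation entering, and the observation reading, the second state of each block (an assumed convention, chosen so that the eigenvalues of F are the roots of the mode polynomials):

```python
import numpy as np

def state_space(ar_pairs):
    """Concatenated state-space matrices (F, G, H) for a sum of AR(2) modes.

    Each mode contributes a companion block F(k) = [[0, 1], [-a2, -a1]];
    the excitation drives the second state of each block and the
    observation row vector H reads that same state.
    """
    L = len(ar_pairs)
    F = np.zeros((2 * L, 2 * L))
    G = np.zeros((2 * L, L))
    H = np.zeros(2 * L)
    for k, (a1, a2) in enumerate(ar_pairs):
        F[2 * k, 2 * k + 1] = 1.0
        F[2 * k + 1, 2 * k] = -a2
        F[2 * k + 1, 2 * k + 1] = -a1
        G[2 * k + 1, k] = 1.0
        H[2 * k + 1] = 1.0
    return F, G, H

F, G, H = state_space([(-1.5, 0.9), (-0.2, 0.8)])   # two illustrative modes
```

Since F is block diagonal, its eigenvalues are the union of the per-mode pole pairs, which provides a quick consistency check on the construction.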

It is worth noting that the model (13), contrary to the model (2,3), allows a simple expression of the power spectral density (PSD) [Anderson & Moore 1979, p. 75]:

 S_y(f) = H (e^{2i pi f} I - F)^{-1} G Q G^T (e^{-2i pi f} I - F^T)^{-1} H^T + sigma^2. (14)
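Reading (14) as the usual state-space spectrum, one can check numerically (a sketch under that assumption, for a single mode with unit excitation power and no observation noise) that it reproduces the AR(2) transfer-function spectrum 1/|1 + a_1 e^{-2i pi f} + a_2 e^{-4i pi f}|^2:

```python
import numpy as np

a1, a2 = -1.5, 0.9                      # illustrative AR(2) coefficients
F = np.array([[0.0, 1.0], [-a2, -a1]])  # companion-form state matrix
G = np.array([[0.0], [1.0]])
H = np.array([[0.0, 1.0]])

def psd_state_space(f):
    """PSD of H s_n with s_{n+1} = F s_n + G e_n, var(e_n) = 1, sigma^2 = 0."""
    z = np.exp(2j * np.pi * f)
    T = H @ np.linalg.inv(z * np.eye(2) - F) @ G   # transfer from e_n to x_n
    return float(np.abs(T[0, 0]) ** 2)

def psd_ar2(f):
    """Direct AR(2) spectrum 1 / |1 + a1 z^-1 + a2 z^-2|^2 on the unit circle."""
    z = np.exp(2j * np.pi * f)
    return float(1.0 / np.abs(1.0 + a1 / z + a2 / z ** 2) ** 2)
```

The two expressions agree at every frequency; the shift mentioned above (12) only multiplies the transfer function by z and leaves the modulus on the unit circle unchanged.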

As suggested in Sect. 3.1, we intend to estimate Q as the global maximiser of the likelihood of Y. The dependence on a will be dropped in the following. The observation equation of model (13) shows that there exists a many-to-one mapping between S and Y. This suggests maximising the likelihood by an iterative technique: the expectation-maximisation (EM) algorithm. This method maximises the likelihood of Y, the "incomplete data'', while making essential use of (S, Y), the "complete data''. The reference paper on EM is Dempster et al. (1977); Moon (1996) has presented a tutorial on the EM algorithm including extensive references.

This algorithm was recently applied by Deng & Shen (1997) to the estimation of all the parameters of a linear dynamic system. We derive here the method for the case where the only unknown is Q.

The E step is the computation of the conditional expectation of the complete-data log-likelihood, given Y and the current estimate of Q, denoted Q_i. For the M step, Q_{i+1} is defined as the value of Q that maximises this conditional expectation.

In our case, the log-likelihood of the complete data (S, Y) is calculated using the Markov property of (12) [Anderson & Moore 1979]. Dropping all the terms that are not functions of Q, we have:

 log p(S, Y; Q) = -(N/2) log det Q - (1/2) sum_{n=1..N} e_n^T Q^{-1} e_n + const.

Taking the conditional expectation of this equality and using the matrix differentiation rules for log det Q and tr(Q^{-1} A) gives the following recursion for Q_{i+1}:

 Q_{i+1} = (1/N) sum_{n=1..N} E[e_n e_n^T | Y, Q_i]. (15)

Expansion of the inner term shows that each iteration requires the computation of the terms E[s_k s_k^T | Y, Q_i] and E[s_k s_{k-1}^T | Y, Q_i]. If s_{k/N} is the conditional expectation of the state s_k with respect to the observations and Q_i, P_{k/N} its variance and P_{k,k-1/N} the cross-covariance between s_k and s_{k-1}, we have:

 E[s_k s_k^T | Y, Q_i] = s_{k/N} s_{k/N}^T + P_{k/N},
 E[s_k s_{k-1}^T | Y, Q_i] = s_{k/N} s_{k-1/N}^T + P_{k,k-1/N}.

These quantities can then be computed, as in Deng & Shen (1997), using a fixed-interval smoothing algorithm [Anderson & Moore 1979], whose equations are summarised in Appendix B.
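As a sketch of the M step (our own illustration; the function name, matrix layout and the recovery e_n = G^T (s_n - F s_{n-1}) are assumptions consistent with the state-space form above), the update (15) can be written as an average of smoothed second-order moments. Feeding in exact simulated states with the smoother covariances set to zero recovers the true Q:

```python
import numpy as np

rng = np.random.default_rng(4)

# Illustrative two-mode system in the concatenated state-space form (13)
F = np.zeros((4, 4))
for k, (a1, a2) in enumerate([(-1.5, 0.9), (-0.2, 0.8)]):
    F[2 * k, 2 * k + 1] = 1.0
    F[2 * k + 1, 2 * k], F[2 * k + 1, 2 * k + 1] = -a2, -a1
G = np.zeros((4, 2))
G[1, 0] = G[3, 1] = 1.0

def q_update(F, G, s_hat, P, P_cross):
    """One EM update of Q: average over n of E[e_n e_n^T | Y, Q_i].

    s_hat[n] is the smoothed state, P[n] its covariance, P_cross[n] the
    cross covariance with time n-1; the excitation is recovered through
    e_n = G^T (s_n - F s_{n-1}), valid here because G^T G = I.
    """
    N = len(s_hat)
    Q = np.zeros((G.shape[1], G.shape[1]))
    for n in range(1, N):
        Ess = np.outer(s_hat[n], s_hat[n]) + P[n]
        Epp = np.outer(s_hat[n - 1], s_hat[n - 1]) + P[n - 1]
        Esp = np.outer(s_hat[n], s_hat[n - 1]) + P_cross[n]
        inner = Ess - Esp @ F.T - F @ Esp.T + F @ Epp @ F.T
        Q += G.T @ inner @ G
    return Q / (N - 1)

# Sanity check with exact states (all smoother covariances set to zero)
Q_true = np.array([[1.0, 0.5], [0.5, 2.0]])
C = np.linalg.cholesky(Q_true)
N = 40000
s = np.zeros((N, 4))
for n in range(1, N):
    s[n] = F @ s[n - 1] + G @ (C @ rng.standard_normal(2))
zeros = np.zeros((N, 4, 4))
Q_hat = q_update(F, G, s, zeros, zeros)
```

With the true smoother outputs, the covariance terms P and P_cross are non-zero and the same function implements the full update (15); dropping them gives the approximate scheme discussed in Sect. 3.5.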

## 3.5 Remarks on the algorithm

• The development of Sect. 3.3 can easily be extended to incorporate the iterative estimation of sigma^2. This solution has not been investigated here because of the high signal-to-noise ratio observed in the analysed frequency band. In our case, sigma^2 has been estimated by the average of the periodogram bins surrounding the modes.
• The structure of (15) guarantees that Q_{i+1} is positive semi-definite.
• In Sage & Melsa (1971), it is shown that the maximum a posteriori estimator of Q is: (16) (17)
An iterative implementation of this estimator can be obtained by estimating the smoothed state vector at iteration k using Q_k and computing Q_{k+1} using Eq. (16). It is worth noting that this algorithm corresponds to the EM algorithm where the update of Q is computed assuming that the matrices P_{k/N} and P_{k,k-1/N} equal zero. A major drawback of this solution is that it does not guarantee an increase of the likelihood at each iteration.
• A large part of the previous development can easily be extended to the case where the modes are represented by autoregressive models of order higher than two, allowing the relaxation of the hypothesis of temporal independence of the noise input. The major difficulty that arises is to group the roots of H(z) corresponding to the different modes.

Copyright The European Southern Observatory (ESO)