Up: Spectral analysis of stellar networks


5 The neural network based estimator system

The process for periodicity analysis can be divided into the following steps:

- Preprocessing

We first interpolate the data with a simple linear fitting and then calculate and subtract the average pattern, to obtain a zero-mean process (Karhunen & Joutsensalo 1994, 1995).
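A minimal sketch of this preprocessing step, assuming the irregularly sampled data are resampled onto a uniform grid by linear interpolation; the helper name `preprocess` and the grid size are illustrative, not from the original text:

```python
import numpy as np

def preprocess(t, x, n_samples=256):
    """Resample (t, x) onto a uniform grid by linear interpolation,
    then subtract the mean to obtain a zero-mean process."""
    t_uniform = np.linspace(t.min(), t.max(), n_samples)
    x_uniform = np.interp(t_uniform, t, x)   # simple linear fitting
    return t_uniform, x_uniform - x_uniform.mean()
```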

- Neural computing

The fundamental learning parameters are:

1) the initial weight matrix;

2) the number of neurons, that is, the number of principal eigenvectors we need, equal to twice the number of signal periodicities (for real-valued signals);

3) $\epsilon $, i.e. the threshold parameter for convergence;

4) $\alpha $, the nonlinear learning function parameter;

5) $\mu $, the learning rate.

We initialise the weight matrix ${\bf W}$ by assigning small random values, as is classically done. Alternatively, we can use the first patterns of the signal as the columns of the matrix: experimental results show that the latter technique speeds up the convergence of our neural estimator (n.e.). However, it cannot be used with anomalously shaped signals, such as stratigraphic geological signals.
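Both initialisation strategies can be sketched as follows; the helper name `init_weights`, the 0.01 scale of the random values, and the sliding-window construction of the "first patterns" are assumptions of this illustration:

```python
import numpy as np

def init_weights(signal, L, M, use_patterns=True, rng=None):
    """Initialise the L x M weight matrix W.

    use_patterns=True : columns are the first M patterns of the signal
                        (tends to speed up convergence on well-behaved
                        signals);
    use_patterns=False: small random values (safe default, e.g. for
                        anomalously shaped signals).
    """
    rng = np.random.default_rng(rng)
    if use_patterns:
        W = np.column_stack([signal[i:i + L] for i in range(M)])
    else:
        W = 0.01 * rng.standard_normal((L, M))
    return W
```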

Experimental results show that $\alpha $ can be fixed to 1, 5, 10, or 20, although for symmetric networks a smaller value of $\alpha $ is preferable for convergence reasons. Moreover, the learning rate $\mu $ can be decreased during the learning phase, but in our experiments we fixed it between 0.0001 and 0.05.

We use a simple criterion to decide whether the neural network has reached convergence: we calculate the distance between the weight matrix at step k+1, ${\bf W}_{k+1}$, and the matrix at the previous step, ${\bf W}_k$; if this distance is less than a fixed error threshold $\epsilon $, we stop the learning process.
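A minimal sketch of this stopping test, assuming the Frobenius norm as the matrix distance (the text does not specify which norm is used):

```python
import numpy as np

def converged(W_new, W_old, eps):
    """Stop learning when the distance between successive weight
    matrices falls below the threshold epsilon."""
    return np.linalg.norm(W_new - W_old) < eps
```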

We finally obtain the following general algorithm, in which STEP 4 is one of the neural learning algorithms presented in Sect. 3:
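The general loop can be sketched as follows. Since the learning rules of Sect. 3 are not reproduced in this section, the weight update (STEP 4) uses Oja's subspace rule purely as a stand-in illustration; the function name, the epoch structure, and the parameter values are assumptions of this sketch:

```python
import numpy as np

def train(patterns, W0, mu=0.005, eps=1e-5, max_epochs=200):
    """Generic learning loop.  STEP 4 (the weight update) uses Oja's
    subspace rule, y = W^T x,  W <- W + mu (x y^T - W y y^T), as a
    stand-in for the algorithms of Sect. 3."""
    W = W0.copy()
    for _ in range(max_epochs):
        W_prev = W.copy()
        for x in patterns:                      # present one pattern
            y = W.T @ x                         # neuron outputs
            W = W + mu * (np.outer(x, y) - W @ np.outer(y, y))  # STEP 4
        if np.linalg.norm(W - W_prev) < eps:    # convergence test
            break
    return W
```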

- Frequency estimator

We exploit the MUltiple SIgnal Classification (MUSIC) frequency estimator. It takes as input the columns of the weight matrix after learning. The estimated signal frequencies are obtained as the peak locations of the following function (Kay 1988; Marple 1987):

\begin{eqnarray}
P_{\rm MUSIC}(f) = \log\left(\frac{1}{1-\sum_{i=1}^{M} \vert\vec{e}_f^{\rm H}\vec{w}(i)\vert^2}\right)
\end{eqnarray} (11)
where $\vec{w}(i)$ is the i-th weight vector after learning, and $\vec{e}_f$ is the pure sinusoidal vector: $\vec{e}_f=[1,e^{j2\pi f},~\ldots,e^{j2\pi f(L-1)}]^{\rm H} $.

When f is the frequency of the i-th sinusoidal component, $f=f_i$, we have $\vec{e}_f = \vec{e}_i $ and $P_{\rm MUSIC} \to \infty$. In practice, the function exhibits a sharp peak at, or very near, each component frequency; the frequency estimates correspond to the highest peaks.
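As an illustration, Eq. (11) can be evaluated numerically on a grid of trial frequencies. The function name `p_music`, the clipping of the near-singular denominator, and the normalisation of $\vec{e}_f$ (so that the projection sum stays $\le 1$ when the columns of W are orthonormal) are assumptions of this sketch, not the authors' code:

```python
import numpy as np

def p_music(freqs, W):
    """Evaluate the MUSIC function of Eq. (11) on a frequency grid.

    W : L x M matrix whose columns w(i) are the weight vectors after
        learning (assumed orthonormal).
    """
    L, M = W.shape
    n = np.arange(L)
    P = np.empty(len(freqs))
    for k, f in enumerate(freqs):
        e = np.exp(1j * 2 * np.pi * f * n) / np.sqrt(L)  # pure sinusoid
        proj = sum(abs(e.conj() @ W[:, i]) ** 2 for i in range(M))
        P[k] = np.log(1.0 / max(1.0 - proj, 1e-12))  # clip singularity
    return P
```

The estimated frequencies are then read off as the locations of the highest peaks of `P` over the grid.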



Copyright The European Southern Observatory (ESO)