The process for periodicity analysis can be divided into the following steps:

**- Preprocessing**

We first interpolate the data with a simple linear fit and then calculate and subtract the average pattern to obtain a zero-mean process (Karhunen & Joutsensalo 1994; Karhunen & Joutsensalo 1995).
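As an illustration, this preprocessing can be sketched as follows (a minimal Python sketch; the pattern length *N*, the grid size and the function name are our choices, not fixed by the text):

```python
import numpy as np

def make_zero_mean_patterns(t, x, N, n_grid=512):
    """Resample an unevenly sampled series onto a uniform grid by
    linear interpolation, build length-N patterns with a sliding
    window, and subtract the average pattern so the process has
    zero mean."""
    tg = np.linspace(t.min(), t.max(), n_grid)
    xg = np.interp(tg, t, x)                        # simple linear fit
    patterns = np.array([xg[i:i + N] for i in range(n_grid - N + 1)])
    return patterns - patterns.mean(axis=0)         # zero-mean patterns
```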

**- Neural computing**

The fundamental learning parameters are:

**1)** the initial weight matrix;

**2)** the number of neurons, i.e. the number of principal eigenvectors
that we need, which equals twice the number of signal periodicities (for real
signals);

**3)** $\epsilon$, i.e. the threshold parameter for convergence;

**4)** $\alpha$, the nonlinear learning function parameter;

**5)** $\mu$, that is the learning rate.

We initialise the weight matrix by assigning the classical small random values. Alternatively, we can use the first patterns of the signal as the columns of the matrix: experimental results show that the latter technique speeds up the convergence of our neural estimator (n.e.). However, it cannot be used with anomalously shaped signals, such as stratigraphic geological signals.

Experimental results show that $\alpha$ can be fixed to 1, 5, 10 or 20, even if for symmetric networks a smaller value of $\alpha$ is preferable for convergence reasons. Moreover, the learning rate $\mu$ can be decreased during the learning phase, but we fixed it between 0.05 and 0.0001 in our experiments.

We use a simple criterion to decide whether the neural network has reached
convergence: we calculate the distance $d = \|\mathbf{W}(k+1) - \mathbf{W}(k)\|$ between the weight matrix at step *k*+1, $\mathbf{W}(k+1)$, and the matrix at the previous step, $\mathbf{W}(k)$, and
if this distance is less than a fixed error threshold ($\epsilon$) we stop
the learning process.
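This stopping rule can be sketched as follows (the Frobenius norm is one reasonable choice of matrix distance; the text does not fix the norm):

```python
import numpy as np

def converged(W_new, W_old, eps=1e-5):
    """Stop learning when the distance between the weight matrices of
    two successive steps falls below the threshold eps."""
    return np.linalg.norm(W_new - W_old) < eps
```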

We finally have the following general algorithm in which STEP 4 is one of the neural learning algorithms seen above in Sect. 3:

**STEP 1** Initialise the weight vectors with small random values, or with orthonormalised signal patterns. Initialise the learning threshold $\epsilon$ and the learning rate $\mu$. Reset the pattern counter: *k* = 0.

**STEP 2** Input the *k*-th pattern $\mathbf{x}(k) = [x_1(k), \ldots, x_N(k)]^{\rm T}$, where *N* is the number of input components.

**STEP 3** Calculate the output of each neuron, $y_i(k) = \mathbf{w}_i^{\rm T}(k)\,\mathbf{x}(k)$.

**STEP 4** Modify the weights $\mathbf{w}_i(k)$ according to the chosen learning algorithm.

**STEP 5** Calculate $d = \|\mathbf{W}(k+1) - \mathbf{W}(k)\|$. (10)

**STEP 6** Convergence test: if $d < \epsilon$ then go to **STEP 8**.

**STEP 7** *k* = *k* + 1. Go to **STEP 2**.

**STEP 8** End.
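The loop above can be sketched as follows. STEP 4 is only a placeholder here: purely as an illustration we plug in a hierarchic nonlinear PCA rule with $g(y) = \tanh(\alpha y)$, in the spirit of Karhunen & Joutsensalo; the parameter values are arbitrary, not the paper's.

```python
import numpy as np

def train(patterns, M, mu=0.01, alpha=5.0, eps=1e-6, max_epochs=100):
    """General learning loop (STEPs 1-8).  STEP 4 stands for one of
    the Sect. 3 learning rules; here a hierarchic nonlinear PCA rule
    with g(y) = tanh(alpha * y) is used for illustration only."""
    N = patterns.shape[1]
    rng = np.random.default_rng(0)
    W = 0.1 * rng.standard_normal((N, M))          # STEP 1: small random weights
    for _ in range(max_epochs):
        W_old = W.copy()
        for x in patterns:                         # STEP 2: input k-th pattern
            y = W.T @ x                            # STEP 3: neuron outputs
            g = np.tanh(alpha * y)
            for i in range(M):                     # STEP 4: weight update
                W[:, i] += mu * g[i] * (x - W[:, :i + 1] @ g[:i + 1])
        d = np.linalg.norm(W - W_old)              # STEP 5: distance, Eq. (10)
        if d < eps:                                # STEP 6: convergence test
            break                                  # STEP 8: end
    return W
```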

**- Frequency estimator**

We exploit the *MUltiple SIgnal Classification* (MUSIC) frequency estimator. It
takes as input the columns of the weight matrix after learning. The estimated
signal frequencies are obtained as the peak locations of the following
function (Kay 1988; Marple 1987):

$$P_{\rm MUSIC}(f) = \frac{1}{N - \sum_{i=1}^{M} \left| \mathbf{e}^{\rm H}(f)\, \mathbf{w}_i \right|^2} \qquad (11)$$

where $\mathbf{e}(f) = [1, {\rm e}^{{\rm j}2\pi f}, \ldots, {\rm e}^{{\rm j}2\pi f(N-1)}]^{\rm T}$ and $\mathbf{w}_i$, $i = 1, \ldots, M$, are the weight matrix columns.

When *f* is the frequency of the *i*-th sinusoidal component, *f* = *f*_{i}, we
have $\sum_{i=1}^{M} |\mathbf{e}^{\rm H}(f)\,\mathbf{w}_i|^2 \to N$ and $P_{\rm MUSIC}(f) \to \infty$. In practice we have
a finite peak near, or in correspondence of, the component frequency. Estimates are
related to the highest peaks.
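A minimal sketch of this estimator, assuming the learned weight columns (approximately) span the signal subspace; the QR orthonormalisation and the steering-vector definition are our assumptions, not stated in the text:

```python
import numpy as np

def music_spectrum(W, freqs):
    """MUSIC pseudo-spectrum from the learned weight matrix W (N x M).
    Peaks of P(f) over the scanned frequencies locate the signal
    frequencies."""
    freqs = np.asarray(freqs, dtype=float)
    N = W.shape[0]
    # Orthonormalise the columns so the subspace projection is valid.
    Q, _ = np.linalg.qr(W)
    P = np.empty_like(freqs)
    for k, f in enumerate(freqs):
        e = np.exp(2j * np.pi * f * np.arange(N))   # steering vector e(f)
        proj = np.abs(e.conj() @ Q) ** 2            # |e^H(f) w_i|^2
        P[k] = 1.0 / (N - proj.sum())
    return P
```

The estimated frequencies are then read off as the locations of the highest peaks of the returned array.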

Copyright The European Southern Observatory (ESO)