In this section we show the performance of the neural-based estimator system on artificial and real data. The artificial data are noisy sinusoidal mixtures generated following the literature (Kay 1988; Marple 1987); they are used to select the neural models for the subsequent phases and to compare the neural estimator (n.e.) with the periodograms (P), using Monte Carlo methods to generate the samples. The real data, instead, come from astrophysics: the real signals are light curves of Cepheids and a light curve in the Johnson system.
In Sects. 7.3 and 7.4, we use an extension of MUSIC that directly includes unevenly sampled data, without the interpolation step of the previous algorithm, in the following way:

P(\omega) = \frac{1}{L - \sum_{i=1}^{p} \left| \mathbf{w}_i^{T} \mathbf{s}(\omega) \right|^{2}}    (23)

where p is the frequency number, \mathbf{w}_i is the ith weight vector of the PCA neural network after the learning, and \mathbf{s}(\omega) is the sinusoidal vector \mathbf{s}(\omega) = (e^{j\omega t_1}, \ldots, e^{j\omega t_L})^{T}, where t_1, \ldots, t_L are the first L components of the temporal coordinates of the uneven signal.
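As a sketch of how Eq. (23) can be evaluated on a frequency grid, the following function (our own illustration, not the authors' code; the names `music_spectrum_uneven`, `weights`, `times` are ours) builds the sinusoidal vector on the uneven time stamps and projects it on the learned PCA weight vectors:

```python
import numpy as np

def music_spectrum_uneven(weights, times, freqs):
    """Sketch of the modified MUSIC estimator of Eq. (23): the sinusoidal
    vector s(omega) is built on the uneven time stamps t_1..t_L and
    projected on the p learned PCA weight vectors w_i."""
    W = np.asarray(weights)          # shape (p, L): learned weight vectors
    t = np.asarray(times)            # first L uneven temporal coordinates
    L = t.size
    P = np.empty(len(freqs))
    for k, f in enumerate(freqs):
        s = np.exp(2j * np.pi * f * t)        # sinusoidal vector s(omega)
        proj = np.abs(W @ s.conj()) ** 2      # |w_i^T s(omega)|^2, each i
        P[k] = 1.0 / (L - proj.sum())         # peaks at signal frequencies
    return P
```

When the weight vectors span the signal subspace, the denominator approaches zero near the true frequencies, producing sharp peaks in P.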
Furthermore, to optimise the performance of the PCA neural networks, we stop the learning process as soon as the stopping criterion is satisfied, thus avoiding overfitting problems.
In this section we use synthetic data to select the neural networks used in the next experiments. In this case, the uneven sampling is obtained by randomly deleting a fixed number of points from the synthetic sinusoid mixtures: this is a widely used technique in the specialised literature (Horne & Baliunas 1986).
The experiments are organised as follows. First, we use synthetic unevenly sampled signals to compare the different neural algorithms of the neural estimator (n.e.) with Scargle's P.
For this type of experiment, we perform a statistical test using five synthetic signals. Each is the sum of five sinusoids with frequencies randomly chosen in [0, 0.5] and randomly chosen phases (Kay 1988; Karhunen & Joutsensalo 1994; Marple 1987), plus white random noise of fixed variance. We take 200 samples of each signal and randomly discard half of them (100 points), obtaining an uneven sampling (Horne & Baliunas 1986). In this way we have several degrees of randomness: frequencies, phases, noise, deleted points.
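The test-signal recipe above can be sketched as follows (a minimal illustration; the function name and the [0, 2π) phase range are our assumptions, since the phase interval is not stated here):

```python
import numpy as np

def make_uneven_mixture(n_samples=200, n_delete=100, n_freqs=5,
                        noise_var=0.5, seed=0):
    """Sketch of the synthetic test signals described above: a sum of
    n_freqs sinusoids with random frequencies in [0, 0.5] and random
    phases, plus white Gaussian noise of fixed variance; n_delete points
    are then discarded at random to obtain the uneven sampling."""
    rng = np.random.default_rng(seed)
    freqs = rng.uniform(0.0, 0.5, n_freqs)
    phases = rng.uniform(0.0, 2 * np.pi, n_freqs)   # assumed phase range
    t = np.arange(n_samples, dtype=float)
    x = sum(np.sin(2 * np.pi * f * t + ph) for f, ph in zip(freqs, phases))
    x += rng.normal(0.0, np.sqrt(noise_var), n_samples)
    keep = np.sort(rng.choice(n_samples, n_samples - n_delete, replace=False))
    return t[keep], x[keep], freqs
```

Each call with a different seed yields one realisation of the randomness in frequencies, phases, noise, and deleted points.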
After this, we interpolate the signal and evaluate the P and the n.e. system with the following neural algorithms: the robust algorithm of Eq. (4) in the hierarchical and symmetric cases, and the nonlinear algorithm of Eq. (8) in the hierarchical and symmetric cases. Each of these is used with two nonlinear learning functions, so we have eight different neural algorithms to evaluate.
We chose these algorithms after several experiments involving all the neural algorithms presented in Sect. 3 with several learning functions, in which we verified that the algorithms and learning functions cited above behaved as well as or better than the others. We therefore restricted the range of algorithms to better show the most relevant features of the test.
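To make the robust hierarchical case concrete, the following is a sketch of one update step in the spirit of the robust PCA rules of Karhunen & Joutsensalo (1994); it is our paraphrase, not the exact Eq. (4), and the function name and learning rate are ours:

```python
import numpy as np

def robust_pca_step(W, x, lr=0.01, g=np.tanh):
    """One update of a robust hierarchical PCA learning rule: each weight
    vector w_i moves along g(y_i) * e_i, where y_i = w_i . x and e_i is
    the residual of x after removing the components already explained by
    w_1..w_i (the hierarchy); g is a nonlinear learning function."""
    W = W.copy()
    for i in range(W.shape[0]):
        y = W[:i + 1] @ x                 # outputs y_1..y_i
        e = x - W[:i + 1].T @ y           # hierarchical residual
        W[i] += lr * g(y[i]) * e
    return W
```

In the symmetric variant the residual would instead be computed over all weight vectors at once, which is what makes the parameter tuning of that case more delicate.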
We evaluated the average differences between target and estimated frequencies. This was repeated for the five signals, and for each algorithm we then averaged the single results over the five signals. The smaller these averages, the greater the accuracy.
We also calculated the average number of epochs and the average CPU time for convergence, and compared these with the behaviour of P.
Shared signal parameters are: number of frequencies = 5, noise variance = 0.5, number of sampled points = 200, number of deleted points = 100.
Signal 1: frequencies =
Signal 2: frequencies =
Signal 3: frequencies =
Signal 4: frequencies =
Signal 5: frequencies =
Neural parameters: ; ; ; number of points in each pattern N=110 (these are used for almost all the neural algorithms; however, a few of them require a slight variation of some parameters to achieve convergence).
Scargle parameters: , p_{0}=0.01.
Results are shown in Table 1:
A few words are needed on the differences in behaviour among the neural algorithms elicited by the experiments. Nonlinear algorithms are more complex than robust ones: they converge relatively more slowly and have a higher probability of being caught in local minima, so their estimates are sometimes unreliable. We therefore restrict our choice to robust models. Moreover, symmetric models require more effort than hierarchical ones in finding the right parameters to achieve convergence, while their performance is comparable.
From Table 1 we can see that the best neural algorithm for our aim is n. 5 in Table 1 (Eq. (4) in the symmetric case). However, this algorithm requires much more effort in finding the right parameters for convergence than algorithm n. 2 from the same table (Eq. (4) in the hierarchical case), whose performance is comparable. For this reason, in the following experiments the neural algorithm presented is algorithm n. 2.
As an example, Figs. 11-13 show the estimation results of the n.e. algorithm and of P on signal n. 1.
We now present the results of whitening preprocessing on one synthetic signal (Figs. 14-16), comparing this technique with the standard n.e.
Signal frequencies =
Neural network estimates with whitening: 0.1, 0.15, 0.2, 0.25, 0.3.
Neural network estimates without whitening: .
From this and other experiments we found that using whitening in our n.e. gave worse and more time-consuming results than the standard n.e. (i.e. without whitening the signal). For these reasons whitening is not a suitable technique to improve our n.e.
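For completeness, the whitening preprocessing tested above can be sketched as follows (a standard PCA whitening transform in our own notation; the function name is ours): the centred patterns are rotated onto the eigenvectors of their covariance matrix and each component is rescaled to unit variance.

```python
import numpy as np

def whiten(X):
    """Whitening sketch: rotate the centred data matrix X (one pattern
    per row) onto the eigenvectors of its covariance and rescale each
    component to unit variance, so the output covariance is the identity."""
    Xc = X - X.mean(axis=0)
    C = np.cov(Xc, rowvar=False)
    evals, evecs = np.linalg.eigh(C)
    return Xc @ evecs / np.sqrt(evals)
```

Since whitening equalises the variance of all components, it also amplifies the noise directions, which is consistent with the degraded n.e. results reported above.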
Here we present a set of synthetic signals generated by randomly varying the noise variance, the phase and the deleted points with Monte Carlo methods. The signal is a sinusoid with frequency 0.1 Hz and a random phase, plus Gaussian random noise R(t), composed of 100 points. We follow Horne & Baliunas (1986) for the choice of the signals.
We generated two different series of samples depending on the number of deleted points: the first with 50 deleted points, the second with 80. We made 100 experiments for each variance value. The results are shown in Tables 2 and 3 and compared with Lomb's P, which works better than Scargle's P on unevenly spaced data since it introduces confidence intervals that are useful for identifying the accepted peaks.
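One Monte Carlo trial of this kind can be sketched with the standard Lomb normalised periodogram formula (our own implementation for illustration, not the authors' code; trial-signal values follow the recipe above):

```python
import numpy as np

def lomb_periodogram(t, x, freqs):
    """Standard Lomb normalised periodogram for unevenly sampled data:
    for each trial frequency, fit phase-shifted cosine/sine components
    at the time offset tau that decouples them."""
    x = x - x.mean()
    var = x.var(ddof=1)
    P = np.empty(len(freqs))
    for k, f in enumerate(freqs):
        w = 2 * np.pi * f
        tau = np.arctan2(np.sum(np.sin(2 * w * t)),
                         np.sum(np.cos(2 * w * t))) / (2 * w)
        c = np.cos(w * (t - tau))
        s = np.sin(w * (t - tau))
        P[k] = ((x @ c) ** 2 / (c @ c) + (x @ s) ** 2 / (s @ s)) / (2 * var)
    return P

# One trial: 0.1 Hz sinusoid, random phase, Gaussian noise, 50 of 100
# points randomly deleted (seed and grid chosen for illustration).
rng = np.random.default_rng(0)
t = np.sort(rng.choice(100, 50, replace=False)).astype(float)
x = np.sin(2 * np.pi * 0.1 * t + rng.uniform(0, 2 * np.pi))
x += rng.normal(0.0, 0.5, t.size)
freqs = np.linspace(0.02, 0.45, 200)
f_hat = freqs[np.argmax(lomb_periodogram(t, x, freqs))]
```

The location of the highest peak gives the frequency estimate that is compared against the n.e. in Tables 2 and 3.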
The results show that the two techniques achieve comparable performance.


The first real signal is related to the Cepheid SU Cygni (Fernie 1979). The sequence was obtained with the UBVRI photometric technique, with the sampling made from June to December 1977. The light curve is composed of 21 samples in the V band, with a period of , as shown in Fig. 17. In this case, the parameters of the n.e. are: N=10, p=2, , . The estimated frequency interval is . The estimated frequency is 0.26 (1/JD), in agreement with Lomb's P, but without showing any spurious peak (see Figs. 18 and 19).
The second real signal is related to the Cepheid U Aql (Moffet & Barnes 1980). The sequence was obtained with the BVRI photometric technique, with the sampling made from April 1977 to December 1979. The light curve is composed of 39 samples in the V band, with a period of , as shown in Fig. 20. In this case, the parameters of the n.e. are: N=20, p=2, , . The estimated frequency interval is . The estimated frequency is 0.1425 (1/JD), in agreement with Lomb's P, but without showing any spurious peak (see Figs. 21 and 22).
The third real signal is related to the Cepheid X Cygni (Moffet & Barnes 1980). The sequence was obtained with the BVRI photometric technique, with the sampling made from April 1977 to December 1979. The light curve is composed of 120 samples in the V band, with a period of , as shown in Fig. 23. In this case, the parameters of the n.e. are: N=70, p=2, , . The estimated frequency interval is . The estimated frequency is 0.061 (1/JD), in agreement with Lomb's P, but without showing any spurious peak (see Figs. 24 and 25).
The fourth real signal is related to the Cepheid T Mon (Moffet & Barnes 1980). The sequence was obtained with the BVRI photometric technique, with the sampling made from April 1977 to December 1979. The light curve is composed of 24 samples in the V band, with a period of , as shown in Fig. 26. In this case, the parameters of the n.e. are: N=10, p=2, , . The estimated frequency interval is . The estimated frequency is 0.037 (1/JD) (see Fig. 28).
Lomb's P does not work in this case because there are many peaks, at least two of which exceed the threshold of the most accurate confidence interval (see Fig. 27).
The fifth real signal used for the test phase is a light curve in the Johnson system (Binnendijk 1960) for the eclipsing binary U Peg (see Figs. 29 and 30). This system was observed photoelectrically at the effective wavelengths 5300 Å and 4420 Å with the 28-inch reflecting telescope of the Flower and Cook Observatory during October and November 1958.
We made several experiments with the n.e., and we found a dependence of the frequency estimate on the number of elements per input pattern. The optimal experimental parameters for the n.e. are: N=300, ; . The period found by the n.e. is expressed in JD and is not in agreement with the results cited in the literature (Binnendijk 1960; Rigterink 1972; Zhai et al. 1984; Lu 1985; Zhai et al. 1988): the fundamental frequency is 5.4 1/JD (see Fig. 31) instead of 2.7 1/JD, i.e. twice the observed one. Lomb's P shows some high peaks as in the previous experiments, and its estimated frequency is likewise twice the observed one (see Fig. 32).
Copyright The European Southern Observatory (ESO)