
6 Discussion

 

6.1 Comparison with other methods

As mentioned in the introduction, very little work has been done on efficient search methods for the problem of determining orbital parameters in the general case. An exception is the work by Salo & Laurikainen (1993), in which an error-minimization technique was used to fit the orbital parameters of NGC 7753/7752. However, only three variables were treated as fitting parameters in their minimization, while the other variables, e.g. the masses, were kept fixed.

A common and straightforward method for finding orbital parameters is to carry out a full search of (part of) the parameter space. The drawback of such a method is that the number of simulations needed to achieve acceptable resolution over the parameter space grows very fast in an N-dimensional space with $N \geq 5$. A simple example should suffice as illustration: in our Run 1, a very close match was obtained after 100 generations consisting of 500 simulations each, i.e. after 50000 simulations. A full parameter search with the same budget would only allow $(50\,000/4)^{1/5} \approx 6.59 \approx 7$ values of each of the parameters $m_{1}, m_{2}, \Delta z, \Delta v_{x}$, and $\Delta v_{y}$. The factor 4 in the denominator comes from the fact that, for each set $\{m_{1}, m_{2}, \Delta z, \Delta v_{x}, \Delta v_{y}\}$, four combinations of the spins must be tested. Even if the spins were assumed to be known, only $50\,000^{1/5} \approx 8.7 \approx 9$ values of each parameter could be tested. With the range $[-50.0, 50.0]$ for $\Delta z$, as used in Run 1, that would give a resolution of only 100/(9-1) = 12.5, and an equally poor resolution for the other parameters. Note also that, for Run 1, the GA found an acceptable solution already after 20 generations, or 10000 simulations. With 10000 simulations available, a full search would allow only $10\,000^{1/5} \approx 6.3 \approx 6$ values of each parameter to be tested, even if the spins were known.
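The arithmetic above is easily checked. The sketch below is not part of the original work; it simply reproduces the values-per-parameter and grid-spacing estimates for a given simulation budget, assuming a full grid search over five free parameters and, where applicable, four spin combinations.

\begin{verbatim}
# Sketch (not from the paper): how many values per parameter a full grid
# search can afford for a given simulation budget, and the resulting grid
# spacing for a parameter with a 100-unit range such as Delta z in Run 1.

def grid_search_budget(budget, n_params=5, spin_combinations=1,
                       param_range=100.0):
    """Values testable per parameter and the corresponding grid spacing."""
    values = (budget / spin_combinations) ** (1.0 / n_params)
    n = round(values)                       # usable number of grid points
    spacing = param_range / (n - 1) if n > 1 else float("inf")
    return values, n, spacing

print(grid_search_budget(50_000, spin_combinations=4))  # ~6.59 -> 7 values
print(grid_search_budget(50_000))   # ~8.7 -> 9 values, spacing 12.5
print(grid_search_budget(10_000))   # ~6.3 -> 6 values
\end{verbatim}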

Clearly, a complete, unbiased search of the whole parameter space is not an option, at least when the search space has five or more dimensions. Thus, in order to carry out a full search, either the number of dimensions must be reduced by fixing the values of some parameters, or, if all parameters are retained, constraints must be imposed on the values to be searched. While this is certainly possible in some cases, such constraints may not always be justified, and once they are imposed it may become impossible to find the correct orbit. A strong advantage of the GA method is that only very loose constraints need to be imposed, even when the number of parameters is large.

6.2 Comparison with random search

Even though mutations (which play a subordinate role) are random, selection is not, and a GA is fundamentally different from a random search. In order to illustrate this, a calculation was carried out in which the values of the parameters $m_{1}, m_{2}, \Delta z, \Delta v_{x}, \Delta v_{y}, s_{1}$, and $s_{2}$ were generated at random. A total of 10000 random sets of parameters were generated, corresponding to 20 generations of 500 simulations each for the GA. In Fig. 5, the results of the first 20 generations of Run 1 are compared with the results of the random run. The GA took the lead after only 6 generations, and the difference in performance is evident from the figure.
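For illustration, a random-search baseline of this kind is easy to sketch. The code below is not the program used in the paper: the ranges assumed for the masses and velocity offsets are placeholders (only the $[-50.0, 50.0]$ range for $\Delta z$ is taken from Run 1), and the fitness function, which in the paper requires running a simulation and comparing the resulting mass grid with the observational one, is left as an argument.

\begin{verbatim}
# Sketch of a random-search baseline (not the paper's code).  The mass and
# velocity ranges below are illustrative placeholders; only the Delta z
# range is the one quoted for Run 1.

import random

RANGES = {
    "m1": (0.1, 10.0),      # assumed range (placeholder)
    "m2": (0.1, 10.0),      # assumed range (placeholder)
    "dz": (-50.0, 50.0),    # range used for Delta z in Run 1
    "dvx": (-1.0, 1.0),     # assumed range (placeholder)
    "dvy": (-1.0, 1.0),     # assumed range (placeholder)
}
SPINS = (-1, +1)            # sense of rotation of each disc

def random_parameters(rng=random):
    p = {k: rng.uniform(lo, hi) for k, (lo, hi) in RANGES.items()}
    p["s1"] = rng.choice(SPINS)
    p["s2"] = rng.choice(SPINS)
    return p

def random_search(fitness, n_trials=10_000):
    """Draw n_trials parameter sets at random and keep the fittest one."""
    best, best_fitness = None, float("-inf")
    for _ in range(n_trials):
        p = random_parameters()
        f = fitness(p)          # fitness = run simulation, compare grids
        if f > best_fitness:
            best, best_fitness = p, f
    return best, best_fitness
\end{verbatim}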
  
Figure 5: Comparison of the performance of a GA and a random search. The dashed line corresponds to the random search. The vertical axis shows fitness values and the horizontal axis the number of individuals evaluated.

6.3 Sensitivity to noise

 Whereas the artificial data sets used so far were noise-free, real data sets are invariably noisy. Furthermore, even if the noise levels are low, in real data there will always be other deviations from the idealized situation used in the GA simulations. For example, the mass-to-light ratio need not be constant.
  
Table 5: Results from the two runs with noisy input data. The upper row shows the orbital parameters of the best simulation in generation 100 for the run with 10% noise added, the middle row shows the same parameters for the run with 30% noise added, and the bottom row shows the orbital parameters corresponding to the observational data. Note that an exact match is impossible in this case, because of the noise added to the data

[Table 5 is truncated in the source; only the column headings (Gen., $a$, $e$, ...) and fragments of the final row are preserved.]

Therefore, in order to be useful, the GA method must be able to function even in the presence of noise. I have tested the noise sensitivity by first adding $10\%$ noise to the data set used in Run 1. The noise was added by changing the values of the masses in the grid of observational data according to
\begin{displaymath}
m_{i,j} \rightarrow m_{i,j}\,(1 + \alpha_{i,j}),
\end{displaymath} (6)
where $m_{i,j}$ denotes the mass in cell $(i,j)$, and $\alpha_{i,j}$ are random numbers between $-\alpha_{\rm max}$ and $\alpha_{\rm max}$, where $\alpha_{\rm max}$ is the noise level. Thus, for the run with $10\%$ noise, $\alpha_{\rm max}$ was equal to 0.1. The results are shown in the upper row of Table 5, from which it is evident that $10\%$ noise does not stop the GA from finding the correct solution. The results from a run with $30\%$ noise are shown in the middle row of Table 5. Even in this case, the GA manages to find a solution fairly close to the correct one.
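A minimal implementation of the noise model in Eq. (6) might look as follows; the grid shape in the example and the choice of NumPy random generator are incidental and not taken from the paper.

\begin{verbatim}
# Sketch of the noise model of Eq. (6): every cell of the observational
# mass grid is multiplied by (1 + alpha), with alpha drawn uniformly from
# [-alpha_max, alpha_max].  alpha_max = 0.1 for the 10% run, 0.3 for 30%.

import numpy as np

def add_noise(mass_grid, alpha_max, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    alpha = rng.uniform(-alpha_max, alpha_max, size=mass_grid.shape)
    return mass_grid * (1.0 + alpha)

# Example: 10% noise on a toy 4x4 grid of unit masses.
noisy_grid = add_noise(np.ones((4, 4)), alpha_max=0.1)
\end{verbatim}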
