Summary

Study of the impact of elitism in a genetic algorithm that optimizes a neural network. It also uses the average of multiple fitness evaluations to handle a probabilistic factor in the model.

Notes

  • Similarly to the work of Shimo, Roque, Tinós, Tejada, & Morato (2010), this work tries to model the behavior of rats without fitting the model to data from real rats; real-rat data are used only for validation.
  • This specific study seeks to understand the uncertainty inherent in the evolutionary algorithm, especially in the evaluation of the fitness function. That is why it averages multiple applications of the function.
  • Recurrent multilayer perceptron – an Elman network – with ten inputs (six sensors and four recurrent signals from the hidden layer), four neurons in the hidden layer, and four neurons in the output layer. The genetic algorithm optimizes the weights of the network (see the network sketch after this list).
  • The maze is divided into 21 positions: five in each of the four arms and one in the center. Each individual is evaluated for 5 minutes, equivalent to 300 time steps (one per second).
  • The fitness function is the same one proposed in Costa, Roque, Morato, & Tinós (2012), which models the conflict between fear and anxiety.
  • Punishment factor is probabilistic, depending on where the virtual rat is located.
  • Each virtual rat is left free in the maze for 5 minutes. At the end, its fitness is computed 30 times, and the average of these values becomes the individual’s fitness. This is done for every individual in the population (see the averaging sketch after this list).
  • Strategies:
    1. Individuals are selected by elitism – the two best individuals pass to the next generation – and by tournament – the better of two randomly chosen individuals is selected with probability 0.75. Tournaments are repeated until the population is completed. One-point crossover (rate 0.6) and mutation with uniform distribution (rate 0.05). 500–1500 generations, depending on the experiment. After the conclusion of the genetic algorithm, the mean fitness of the best individual of the 30 executions is calculated (see the GA sketch after this list).
    2. Similar to #1, but each of the best individuals is evaluated over n samples of the fitness function. The best individual of each execution is not re-evaluated.
    3. Similar to #1, but without elitism, only tournament selection.
    4. Similar to #2, but without elitism, only tournament selection.
  • The parameters for all the simulations are the following: γ(p_t) = 3, β = 5, α_o = 0.015, α_e = 0.012, and α_c = 0.011.
  • Strategy 1 gives the best results. The strategies without elitism – #3 and #4 – are the worst.
  • Increasing the number of samples improves fitness.
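
A minimal sketch of the network described above, in Python: the layer sizes follow the notes (six sensors plus four recurrent signals, four hidden neurons, four outputs), while the tanh activation, the absence of biases, and the weight layout are assumptions not stated in the summary.

```python
import numpy as np

N_SENSORS, N_HIDDEN, N_OUTPUTS = 6, 4, 4  # sizes stated in the notes

def elman_step(W_in, W_out, sensors, prev_hidden):
    """One time step of the Elman-style network.

    W_in  : (4, 10) weights from the 10 inputs (6 sensors + 4 recurrent
            hidden signals) to the hidden layer.
    W_out : (4, 4) weights from the hidden layer to the 4 outputs.
    Together the matrices form the chromosome the genetic algorithm
    optimizes (biases omitted -- an assumption, not stated in the notes).
    """
    x = np.concatenate([sensors, prev_hidden])   # 10 inputs
    hidden = np.tanh(W_in @ x)                   # 4 hidden neurons
    output = np.tanh(W_out @ hidden)             # 4 output neurons
    return output, hidden
```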
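
Because the punishment factor is probabilistic, a single fitness evaluation is noisy; per the notes, each individual's fitness is the average of 30 evaluations of one 5-minute (300-step) trial. A sketch of that averaging, where `simulate_trial` and `fitness_once` are hypothetical placeholders for the maze simulation and the fitness function of Costa, Roque, Morato, & Tinós (2012):

```python
def averaged_fitness(individual, simulate_trial, fitness_once,
                     n_steps=300, n_samples=30):
    """Average n_samples noisy fitness evaluations of one 5-minute trial.

    simulate_trial(individual, n_steps) -> trajectory of visited maze positions
    fitness_once(trajectory)            -> one noisy fitness value (the
                                           punishment factor is probabilistic)
    Both callables are hypothetical stand-ins for the simulation and the
    fitness function described in the paper.
    """
    trajectory = simulate_trial(individual, n_steps)
    samples = [fitness_once(trajectory) for _ in range(n_samples)]
    return sum(samples) / n_samples
```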
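
And a sketch of one generation of strategy #1, using the operators and rates listed above (elitism keeps the two best individuals, binary tournament with probability 0.75, one-point crossover with rate 0.6, per-gene mutation with uniform distribution at rate 0.05). The encoding of each individual as a flat list of weights and the mutation range are assumptions.

```python
import random

P_TOURNAMENT, P_CROSSOVER, P_MUTATION, N_ELITE = 0.75, 0.6, 0.05, 2

def tournament(population, fitnesses):
    """Pick two random individuals; the better one wins with probability 0.75."""
    i, j = random.sample(range(len(population)), 2)
    best, worst = (i, j) if fitnesses[i] >= fitnesses[j] else (j, i)
    return population[best] if random.random() < P_TOURNAMENT else population[worst]

def one_point_crossover(a, b):
    """With rate 0.6, swap the gene tails of two parents at a random cut point."""
    if random.random() < P_CROSSOVER and len(a) > 1:
        cut = random.randrange(1, len(a))
        return a[:cut] + b[cut:], b[:cut] + a[cut:]
    return a[:], b[:]

def mutate(genes, low=-1.0, high=1.0):
    """With rate 0.05 per gene, redraw the weight uniformly (range assumed)."""
    return [random.uniform(low, high) if random.random() < P_MUTATION else g
            for g in genes]

def next_generation(population, fitnesses):
    """Elitism (two best kept) plus tournament selection, crossover, and mutation."""
    ranked = sorted(range(len(population)), key=lambda k: fitnesses[k], reverse=True)
    new_pop = [population[k][:] for k in ranked[:N_ELITE]]
    while len(new_pop) < len(population):
        parent_a = tournament(population, fitnesses)
        parent_b = tournament(population, fitnesses)
        child_a, child_b = one_point_crossover(parent_a, parent_b)
        new_pop.extend([mutate(child_a), mutate(child_b)])
    return new_pop[:len(population)]
```

Strategies #3 and #4 correspond to dropping the N_ELITE copies and filling the entire new population by tournament alone.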

Thoughts

  • It is important to average the results of multiple simulations when a significant part of the model has a probabilistic factor.
  • In this specific genetic algorithm, elitism is necessary to get good results.

References

Costa, A. A., Roque, A. C., Morato, S., & Tinós, R. (2012). A Model Based on Genetic Algorithm for Investigation of the Behavior of Rats in the Elevated Plus-Maze. In Intelligent Data Engineering and Automated Learning - IDEAL 2012 (Vol. 7435, pp. 151–158). Berlin, Heidelberg: Springer Berlin Heidelberg. http://doi.org/10.1007/978-3-642-32639-4_19

Shimo, H. K., Roque, A. C., Tinós, R., Tejada, J., & Morato, S. (2010). Use of Evolutionary Robots as an Auxiliary Tool for Developing Behavioral Models of Rats in an Elevated Plus-Maze. In 2010 Eleventh Brazilian Symposium on Neural Networks (SBRN 2010) (pp. 217–222). São Paulo, SP: IEEE. http://doi.org/10.1109/SBRN.2010.45