Neuroevolution of Robot Behavior

The aim of the project is to evolve artificial neural networks as controllers for agents (simulated robots) that complete various tasks. The neural networks are evolved by HyperNEAT (Stanley). The robots are simulated in the ViVAE environment. So far, the robots have learned to stay on and drive along roads instead of the high-friction grass surface; other experiments deal with obstacle avoidance. We also work on HyperGP, a variant of HyperNEAT in which NEAT is replaced by Genetic Programming.
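To illustrate the core idea (a sketch only, not the project's actual code): in HyperNEAT, connection weights are not stored directly in the genome. An evolved network, the CPPN, is queried with the substrate coordinates of two neurons and returns the weight of the connection between them, so the same individual can generate controllers for sensor arrays of different resolutions. The Python sketch below uses a fixed stand-in function in place of the evolved CPPN; all names and the substrate layout are illustrative assumptions.

    import math

    def cppn(x1, y1, x2, y2):
        # Stand-in for an evolved CPPN: any smooth function of the coordinates.
        return math.sin(x1 - x2) * math.exp(-abs(y1 - y2))

    def build_weight_matrix(n_inputs, n_outputs, threshold=0.2):
        # Lay input neurons on the line y = -1 and output neurons on y = +1,
        # then query the CPPN for every input/output pair.
        inputs = [(-1.0 + 2.0 * i / max(n_inputs - 1, 1), -1.0) for i in range(n_inputs)]
        outputs = [(-1.0 + 2.0 * j / max(n_outputs - 1, 1), 1.0) for j in range(n_outputs)]
        weights = []
        for (xo, yo) in outputs:
            row = []
            for (xi, yi) in inputs:
                w = cppn(xi, yi, xo, yo)
                row.append(w if abs(w) > threshold else 0.0)  # prune weak connections
            weights.append(row)
        return weights

    # The same CPPN yields controllers for a coarse and a fine sensor array.
    coarse = build_weight_matrix(n_inputs=9, n_outputs=2)
    fine = build_weight_matrix(n_inputs=36, n_outputs=2)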

The ViVAE simulator sources are publicly available on GitHub.

See video

Publications

  • Drchal, J. and Kapraľ, O. and Koutník, J. and Šnorek, M.: Combining Multiple Inputs in HyperNEAT Mobile Agent Controller. vol. 2, pp. 775-783, Springer, Berlin, 2009. ISSN 0302-9743. BibTex, PDF

    In this paper we present neuro-evolution of neural network controllers for mobile agents in a simulated environment. The controller is obtained through evolution of hypercube-encoded weights of recurrent neural networks (HyperNEAT). The simulated agent’s goal is to find a target in the shortest possible time. The generated neural network processes three different inputs: surface quality, obstacles and distance to the target. The emergent agent behavior features driving on roads, obstacle avoidance and an efficient target search.

  • Drchal, J. and Koutník, J. and Šnorek, M.: HyperNEAT Controlled Robots Learn How to Drive on Roads in Simulated Environment. In: 2009 IEEE Congress on Evolutionary Computation, p. 6, Research Publishing Services, Singapore, 2009. ISBN 978-1-4244-2959-2. BibTex, PDF

    In this paper we describe the simulation of autonomous robots controlled by recurrent neural networks, which are evolved through indirect encoding using the HyperNEAT algorithm. The robots utilize a 180-degree-wide sensor array. Thanks to the scalability of the neural network generated by HyperNEAT, the sensor array can have various resolutions. This would allow a camera to be used as the input of a neural network controller in a real robot. The robots were simulated in a software simulation environment. In the experiments the robots were trained to drive with maximum average speed; such a fitness forces them to learn how to drive on roads and avoid collisions. The evolved neural networks show excellent scalability. Scaling the sensory input degrades the robots' performance, which should be regained by re-training the robot with the different sensory input resolution.

  • Buk, Z. and Koutník, J. and Šnorek, M.: NEAT in HyperNEAT Substituted with Genetic Programming. vol. 5495, pp. 243-252, Springer, Kuopio, Finland, 2009. BibTex, PDF

    In this paper we present an application of genetic programming (GP) [1] to the evolution of an indirect encoding of neural network weights. We compare the original HyperNEAT algorithm with our implementation, in which the underlying NEAT is replaced by genetic programming; the resulting algorithm is named HyperGP. The evolved neural networks were used as controllers of autonomous mobile agents (robots) in simulation. The agents were trained to drive with maximum average speed, which forces them to learn how to drive on roads and avoid collisions. Genetic programming, lacking the NEAT complexification property, shows better exploration ability and tends to generate more complex solutions in fewer generations. On the other hand, basic genetic programming generates quite complex functions for weight generation. Both approaches generate neural controllers with similar abilities.
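The HyperGP idea from the last paper can be sketched in the same spirit (a hypothetical illustration, not the authors' implementation): the weight-generating CPPN is replaced by a genetic-programming expression tree over the substrate coordinates. The function set, tree representation and names below are assumptions made only for this example.

    import math
    import random

    FUNCTIONS = {
        'add': (2, lambda a, b: a + b),
        'mul': (2, lambda a, b: a * b),
        'sin': (1, math.sin),
        'neg': (1, lambda a: -a),
    }
    TERMINALS = ['x1', 'y1', 'x2', 'y2']

    def random_tree(depth=3):
        # Grow a random expression tree over the coordinate terminals.
        if depth == 0 or random.random() < 0.3:
            return random.choice(TERMINALS)
        name = random.choice(list(FUNCTIONS))
        arity = FUNCTIONS[name][0]
        return (name,) + tuple(random_tree(depth - 1) for _ in range(arity))

    def evaluate(tree, coords):
        # One pair of substrate coordinates in, one connection weight out.
        if isinstance(tree, str):
            return coords[tree]
        fn = FUNCTIONS[tree[0]][1]
        return fn(*(evaluate(sub, coords) for sub in tree[1:]))

    # A single GP individual plays the role of the weight-generating function;
    # crossover and mutation on such trees would take the place of NEAT.
    individual = random_tree()
    weight = evaluate(individual, {'x1': -0.5, 'y1': -1.0, 'x2': 0.5, 'y2': 1.0})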