Tuning and learning with evolutionary methods: anticipatory representations and dynamic neural fields

Jean-Charles Quinton

Pascal Institute / Polytech Clermont-Ferrand

Three complementary ways of using evolutionary algorithms when dealing with complex systems are briefly introduced in this abstract. This approach will be demonstrated on neuro-inspired computational models used for sensorimotor control. Since these models are highly non-linear and interact with real-world environments, analyzing them and fully designing them by hand is not an option.

Adopting a mesoscopic level of modeling, dynamic neural fields exhibit robust attentional and action selection capabilities under the right conditions [Fix et al. 2010]. Coupling several such fields or interfacing them with other mechanisms grants additional capabilities such as working memory or planning [Schöner 2008]. Their dynamic attractors, bifurcation points, and thus their overall behavior are however determined by tightly intertwined parameters. Evolutionary algorithms can therefore be used to tune these parameters, with a fitness function that directly reflects the performance of the whole system on a specific task [Igel et al. 2001, Quinton 2010].
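To make this tuning procedure concrete, the sketch below evolves the lateral interaction parameters of a 1D Amari-type field so that it builds a bump of activity at a target stimulus while suppressing a distractor. It is only a minimal illustration under simplifying assumptions: the field equation, the four kernel parameters, the fitness function and the plain (mu+lambda) selection loop are choices made for the example, not the actual setups of [Igel et al. 2001] or [Quinton 2010].

```python
import numpy as np

rng = np.random.default_rng(0)

def dnf_simulate(params, stimulus, n=64, steps=150, dt=0.1, tau=1.0, h=-0.5):
    """Relax a 1D Amari-type neural field towards its attractor for a given stimulus."""
    a_exc, s_exc, a_inh, s_inh = params
    x = np.linspace(0.0, 1.0, n)
    d = np.abs(x[:, None] - x[None, :])                    # pairwise distances
    w = a_exc * np.exp(-d**2 / (2 * s_exc**2)) \
        - a_inh * np.exp(-d**2 / (2 * s_inh**2))           # difference-of-Gaussians kernel
    u = np.zeros(n)
    for _ in range(steps):
        f = 1.0 / (1.0 + np.exp(-u))                       # sigmoid firing rate
        u += dt / tau * (-u + w @ f / n + stimulus + h)    # discretized field equation
    return u

def fitness(params, n=64, target=0.3, distractor=0.7):
    """Reward a field that forms a bump at the target while suppressing the distractor."""
    x = np.linspace(0.0, 1.0, n)
    stimulus = np.exp(-(x - target)**2 / 0.005) + 0.8 * np.exp(-(x - distractor)**2 / 0.005)
    u = dnf_simulate(params, stimulus, n)
    t = int(target * (n - 1))
    return u[t] - np.max(np.delete(u, np.arange(t - 3, t + 4)))

# plain (mu + lambda) evolution strategy over the four kernel parameters
mu, lam, sigma = 5, 20, 0.1
pop = rng.uniform(0.05, 1.5, size=(mu, 4))                 # a_exc, s_exc, a_inh, s_inh
for generation in range(50):
    offspring = np.repeat(pop, lam // mu, axis=0) + sigma * rng.normal(size=(lam, 4))
    candidates = np.clip(np.vstack([pop, offspring]), 0.01, 3.0)
    scores = np.array([fitness(ind) for ind in candidates])
    pop = candidates[np.argsort(scores)[-mu:]]             # keep the mu best parameter sets
print("best parameters:", pop[-1], "fitness:", fitness(pop[-1]))
```

Any black-box optimizer (genetic algorithm, evolution strategy such as CMA-ES) could replace the simple selection loop; the key point is that candidate parameter sets are scored by running the whole field on the task rather than by analyzing its dynamics.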

Evolutionary algorithms can also serve as a constructivist and unsupervised learning method, in line with the neural Darwinism approach [Edelman 1987]. Using a predictive form of representation and extending the work on immune-inspired algorithms from [Buisson 2004], task-independent predictors can be acquired through mutation and selection. Feature selection and spatiotemporal multi-resolution come for free, although the method does not compare well with statistical approaches (BCM, STDP, LWPR...) in terms of scalability and efficiency. By combining this selectionist learning with the fields introduced above into Predictive Neural Fields (PNF), the complex interplay between top-down projections and bottom-up processing can be tackled [Quinton and Girau 2011].
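As an illustration of this selectionist acquisition of predictors, the sketch below maintains a pool of sensorimotor predictors and repeatedly replaces the worst half with mutated copies of the best ones, using prediction error on fresh experience as the selection pressure. The linear form of the predictors, the toy environment and the dimensions are assumptions made for the example; they do not reproduce the representation used in [Quinton and Girau 2011].

```python
import numpy as np

rng = np.random.default_rng(1)
N_S, N_A = 4, 2          # sensory and motor dimensions (arbitrary for the sketch)

def new_predictor():
    """A predictor is here a linear map from (sensation, action) to the next sensation."""
    return rng.normal(scale=0.1, size=(N_S, N_S + N_A))

def mutate(W, sigma=0.05):
    return W + sigma * rng.normal(size=W.shape)

def prediction_error(W, episodes):
    """Mean squared anticipation error over a batch of experienced transitions."""
    return np.mean([np.sum((W @ np.concatenate([s, a]) - s_next) ** 2)
                    for s, a, s_next in episodes])

# toy environment: the "world" applies fixed linear dynamics plus sensory noise
TRUE = rng.normal(scale=0.3, size=(N_S, N_S + N_A))
def sample_episodes(n=50):
    episodes = []
    for _ in range(n):
        s, a = rng.normal(size=N_S), rng.normal(size=N_A)
        s_next = TRUE @ np.concatenate([s, a]) + 0.01 * rng.normal(size=N_S)
        episodes.append((s, a, s_next))
    return episodes

# selectionist learning: survivors are the predictors that best anticipate
# fresh experience; the rest of the pool is refilled with their mutated copies
pool = [new_predictor() for _ in range(30)]
for generation in range(200):
    episodes = sample_episodes()
    errors = [prediction_error(W, episodes) for W in pool]
    survivors = [pool[i] for i in np.argsort(errors)[:15]]   # low error = better anticipation
    pool = survivors + [mutate(W) for W in survivors]
print("best prediction error:", min(prediction_error(W, sample_episodes()) for W in pool))
```

In the PNF setting, the predictions of such units would additionally be re-injected into the field as top-down input; the sketch only shows the mutation/selection loop in isolation.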

Finally, whatever the expressive power of emerging models in computational neuroscience may be, research on embodiment shows that learning sensorimotor controllers can be greatly facilitated by exploiting the right body dynamics [Pfeifer and Bongard 2006]. Evolution may provide the adequate body structure, reflexes and physiological dynamics that scaffold the development of the agent. The combined power of evolutionary methods for constraining the body dynamics, adjusting parameters and learning an explicit anticipatory model of the world will be demonstrated on a toy experiment.

References:

  • Fix, J., Rougier, N., & Alexandre, F. (2010). A dynamic neural field approach to the covert and overt deployment of spatial attention. Cognitive Computation, 3(1), 279-293.
  • Schöner, G. (2008). Dynamical systems approaches to cognition. In J. P. Spencer, M. S. Thomas, & J. L. McClelland (Eds.), Toward a Unified Theory of Development: Connectionism and Dynamic Systems Theory Re-Considered. New York: Oxford University Press.
  • Igel, C., Erlhagen, W., & Jancke, D. (2001). Optimization of dynamic fields. Neurocomputing, 36, 225-233.
  • Quinton, J.-C. (2010). Exploring and optimizing dynamic neural fields parameters using genetic algorithms. In Proceedings of the IEEE World Congress on Computational Intelligence (IJCNN 2010), Barcelona, Spain.
  • Edelman, G. (1987). Neural Darwinism: The Theory of Neuronal Group Selection. New York: Basic Books.
  • Buisson, J.-C. (2004). A rhythm recognition computer program to advocate interactivist perception. Cognitive Science, 28(1), 75-87.
  • Quinton, J.-C., & Girau, B. (2011). Predictive neural fields for improved tracking and attentional properties. In IEEE International Joint Conference on Neural Networks (IJCNN 2011), San José, USA.
  • Pfeifer, R., & Bongard, J. C. (2006). How the Body Shapes the Way We Think: A New View of Intelligence. Bradford Books / The MIT Press.
