Learning to Plan through Reinforcement Learning in Spiking Neural Networks
Invited Talk at the Frankfurt Institute for Advanced Studies (FIAS), Germany

Abstract: Movement planning is a fundamental skill involved in many human motor control tasks. While the hippocampus plays a central role in planning, the functional principles underlying it remain largely unexplored. In this talk, I present a computational model of planning derived from first principles within the probabilistic inference framework. Optimal learning rules are inferred, and links to the widely used machine learning techniques of expectation maximization and policy search are established. As a computational model of hippocampal sweeps, the network exhibits dynamics qualitatively similar to the transient firing patterns observed during planning and foraging in the hippocampus of awake, behaving rats. In robotic tasks, the model handles non-Gaussian hard constraints, simulates dozens of movement plans in parallel, and learns forward and inverse kinematic models simultaneously through interaction with the environment.
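To make the planning-as-inference framing concrete, the following is a minimal sketch of the general idea (in the spirit of Toussaint-style message passing), not of the speaker's spiking-network model: planning is cast as posterior inference over actions, conditioned on a binary "optimality" variable whose likelihood is exponential in reward. The chain-world environment, horizon, and all names and parameters below are illustrative assumptions.

```python
import numpy as np

# Illustrative toy setup (assumed, not from the talk): a 1-D chain of states
# with a single rewarded goal state at the right end.
n_states, horizon = 8, 12
actions = (-1, +1)                  # step left / step right
goal = n_states - 1

def reward(s):
    return 1.0 if s == goal else 0.0

# Transition model P[a][s, s']: deterministic steps, clipped at the chain ends.
P = np.zeros((len(actions), n_states, n_states))
for ai, a in enumerate(actions):
    for s in range(n_states):
        P[ai, s, min(max(s + a, 0), n_states - 1)] = 1.0

# Binary "optimality" variable O with p(O=1 | s) proportional to exp(reward(s));
# planning becomes inference over actions conditioned on O=1 at every step.
like = np.exp([reward(s) for s in range(n_states)])

# Backward messages beta[t, s] = p(future optimality | state s at time t),
# computed under a uniform action prior.
beta = np.ones((horizon + 1, n_states))
beta[horizon] = like
for t in range(horizon - 1, -1, -1):
    msg = np.stack([P[ai] @ beta[t + 1] for ai in range(len(actions))])
    beta[t] = like * msg.mean(axis=0)

# Posterior policy: p(a | s, t, O=1) proportional to the expected message
# from the successor state under each action.
def posterior_policy(s, t):
    scores = np.array([P[ai, s] @ beta[t + 1] for ai in range(len(actions))])
    return scores / scores.sum()

print(posterior_policy(s=0, t=0))   # favors stepping right, toward the goal
```

In this simplified setting, the backward pass plays the role of an E-step-like inference over trajectories; alternating such inference with model or policy updates is, roughly, what underlies the links to expectation maximization and policy search mentioned in the abstract.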