NIPS 2013 Workshop
Planning with Information Constraints
for Control, Reinforcement Learning, Computational Neuroscience, Robotics and Games



How do you make decisions when there are far more possibilities than you can analyze? How do you decide under such information constraints?

Planning and decision-making with information constraints is at the heart of adaptive control, reinforcement learning, robotic path planning, experimental design, active learning, computational neuroscience and games. In most real-world problems, perfect planning is either impossible (computational intractability, lack of information, diminished control) or sometimes even undesirable (distrust, risk sensitivity, uncertainty about the cooperation of others).

Recent developments have shown that a single method, based on the free energy functional borrowed from thermodynamics, provides a principled way of designing systems with information constraints that parallels Bayesian inference. This method is known in the literature under various labels, such as:

  • KL-control
  • linearly-solvable stochastic control
  • information-theoretic bounded rationality
  • relative entropy policy search

It is proving very general and powerful as a foundation for a novel class of probabilistic planning problems.
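To make the free-energy idea concrete, here is a minimal sketch (an illustration, not drawn from the workshop material) of the information-theoretic bounded-rationality formulation: the decision maker maximizes the free energy F(p) = E_p[U] - (1/beta) * KL(p || p0), trading expected utility against the information cost of deviating from a prior policy p0. The optimizer has the closed form p*(a) proportional to p0(a) * exp(beta * U(a)); the function name and example numbers below are hypothetical.

```python
import numpy as np

def bounded_rational_policy(utilities, prior, beta):
    """Free-energy-optimal policy: p*(a) ∝ p0(a) * exp(beta * U(a)).

    beta (inverse temperature) sets the information constraint:
    beta -> 0 recovers the prior, beta -> inf recovers argmax U.
    """
    weights = np.asarray(prior, dtype=float) * np.exp(beta * np.asarray(utilities, dtype=float))
    return weights / weights.sum()

utilities = np.array([1.0, 0.5, 0.0])  # utility of each action (hypothetical)
prior = np.ones(3) / 3                 # uniform prior policy

low = bounded_rational_policy(utilities, prior, beta=0.01)    # tight constraint: stays near the prior
high = bounded_rational_policy(utilities, prior, beta=100.0)  # loose constraint: concentrates on the best action
```

The single parameter beta interpolates smoothly between the prior (fully constrained) and deterministic utility maximization (unconstrained), which is the sense in which this family of methods parallels Bayesian inference: the prior plays the role of a belief and the exponentiated utility plays the role of a likelihood.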


The workshop has two goals:

1) Give a comprehensive introduction to planning with information constraints, targeted at a wide audience with a machine learning background. Invited speakers will give an overview of the theoretical results and talk about their experience in applications to control, reinforcement learning, computational neuroscience and robotics.

2) Bring together the leading researchers in the field to discuss, compare, and unify their approaches while interacting with the audience. Recent advances will be presented in short talks and a poster session based on contributed material. Furthermore, ample space will be given to stating open questions and sketching future directions.