Contact

Tim Genewein

 


Position: Doctoral student  Department: Alumni Braun

 

Research Group: Sensorimotor Learning and Decision Making

Biological organisms as well as artificial agents are faced with the problem of coping with a persistently varying environment. The process of learning in such an environment is twofold:

  • adaptation to the current parameters of the environment
  • extraction of invariants across a variety of environmental parameters

 

The latter learning process is referred to as structural learning, which requires the formation of abstract representations, potentially on multiple levels of abstraction. From an information-theoretic point of view, abstractions are compressed representations that retain relevant information and discard irrelevant information – e.g. the abstract concept of a chair, where particular features like color, material or weight are considered irrelevant. The corresponding information-theoretic framework for lossy compression is called rate-distortion theory.

 

Using the rate-distortion formalism, we recently showed how a limit in information processing capacity leads to the emergence of abstractions in artificial agents that maximize their expected utility or reward [3]. Importantly, the granularity of the abstraction depends directly on the cost of information processing.
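For a single decision, the variational principle behind this result trades expected utility against an information cost measured as the divergence from a prior policy; its optimum takes the Boltzmann-like form p*(a) ∝ p0(a) exp(β U(a)), where β reflects the available processing capacity. A minimal numerical sketch (the action set, utilities and β values below are hypothetical, chosen only for illustration):

```python
import math

def bounded_rational_policy(utilities, prior, beta):
    """Optimal single-decision policy p(a) ∝ prior(a) * exp(beta * U(a)).

    beta quantifies information processing capacity: beta -> 0 keeps
    the prior unchanged (maximal abstraction, no processing), while a
    large beta concentrates on the utility-maximizing action.
    """
    weights = [p * math.exp(beta * u) for p, u in zip(prior, utilities)]
    z = sum(weights)
    return [w / z for w in weights]

U = [1.0, 0.9, 0.0]          # hypothetical utilities of three actions
prior = [1 / 3] * 3          # uniform prior policy

cheap = bounded_rational_policy(U, prior, beta=50.0)   # ample capacity
costly = bounded_rational_policy(U, prior, beta=0.1)   # scarce capacity
# cheap concentrates on the best action; costly stays close to the prior
```

Lowering β makes the policy progressively blind to the utility differences, which is precisely the sense in which the granularity of the abstraction follows the cost of information processing.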

 

Currently, we are working on extending this idea to model a hierarchy of several layers of abstraction. In current models of hierarchies of abstractions, such as hierarchical Bayesian models, connectionist models and deep networks, information processing capabilities are implicitly constrained by the model architecture but are not explicitly considered in their analysis. We argue that abstractions arise as a consequence of information processing limits and that their emergence is heavily guided by information processing cost. We thus propose to explore an information-theoretic point of view on the formation of abstract representations.

Introduction


A central question in the computational study of biological motor control is the investigation of principles and mechanisms that underlie the nervous system’s ability to learn new sensorimotor transformations [1]. This comprises not only the ability to learn new mappings from sensory stimuli to motor commands, but also the ability to predict the sensory consequences of one’s own actions. Such sensorimotor transformations are thought to be represented in the nervous system as internal models. A forward internal model captures the causal relationship between actions and their sensory consequences, while the phrase inverse internal model denotes the mapping from a desired sensory state to the appropriate motor command. While forward models can easily be acquired by self-supervised learning, the acquisition of an inverse model or control law is computationally much more difficult. Studies have shown that humans make use of internal models [2].
From a theoretical point of view, there are two levels of analysis with regard to the problem of learning how to act (the adaptive control problem): structural adaptive control and parametric adaptive control. In parametric adaptive control, knowledge of the structure of the control task is presupposed, and only the unknown parameters of a particular task environment of that class have to be estimated. Conversely, when the structure of the adaptive control problem is unknown, the relevant control variables have to be extracted first, along with their interrelations, dimensionality, range of potential values, time scales, stochasticity, etc. The problem of structural motor learning for a class of similar motor tasks is related to the problem of transfer learning or inductive learning investigated in machine learning.

 

Project A: How do humans select among different structures?

This project investigates how the motor system selects between different structures or models when each model can be associated with a range of task-specific parameters. We design a sensorimotor task that requires subjects to compensate for visuomotor shifts in a three-dimensional virtual reality setup, where one dimension is mapped to a model variable and the other dimension to a parameter variable. By introducing probe trials that are neutral in the parameter dimension, we can directly test for model selection. Our experimental design lends itself to the general study of hierarchical Bayesian inference in a sensorimotor context, which requires integrating subjects’ prior beliefs over different structures. Thus, we can test whether subjects prefer “simple” structures when presented with ambiguous evidence, in line with Occam’s razor (i.e. Bayesian model comparison based on Bayes factors).
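The model comparison at stake can be illustrated with marginal likelihoods. The sketch below is not the actual experimental design: the Gaussian noise model, the candidate shift values and the observed shift are hypothetical, chosen only to show the automatic Occam's razor in Bayes factors, where a flexible structure that spreads its prior over many parameter values is penalized relative to a simple one when the evidence is ambiguous.

```python
import math

def gauss(x, mu, sigma=1.0):
    # Gaussian likelihood of observation x under mean mu
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def marginal_likelihood(x, param_prior):
    # P(x | M) = sum over mu of P(x | mu) * P(mu | M)
    return sum(p * gauss(x, mu) for mu, p in param_prior.items())

# Simple structure: no visuomotor shift.
# Flexible structure: shift uniform over five candidate values (hypothetical grid).
simple = {0.0: 1.0}
flexible = {mu: 0.2 for mu in (-2.0, -1.0, 0.0, 1.0, 2.0)}

x = 0.3  # an ambiguous observation near zero
bayes_factor = marginal_likelihood(x, simple) / marginal_likelihood(x, flexible)
# bayes_factor > 1: the evidence favors the simpler structure
```

Both structures can explain the observation, but the flexible one pays for its extra degrees of freedom, so the Bayes factor favors the simple structure, exactly the preference the probe trials are designed to detect in subjects.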

 

Project B: Information theoretic foundations of structure learning.

Learning structure is the problem of separating invariant structural information from the variations of specific instances of that structure - in other words: separating structure from noise. Consider the abstract concept of a chair: large parts of the sensory information induced by a specific chair, such as color, material or weight, are treated as variations or noise, whereas other parts of the sensory information are crucial. The abstract chair can thus be viewed as a (lossy) compression of the sensory information.

 

The information-theoretic framework for analyzing lossy compression is rate-distortion theory, where the problem lies in transmitting information over a channel with limited capacity, making it necessary to discard the least relevant information. Essentially, this is the same problem as separating (relevant) structure from (irrelevant) noise. As we show in [3], the mathematical formalism of rate-distortion theory can also be applied to an agent with limited computational capacity that maximizes its utility, leading to the emergence of abstractions. Importantly, these abstractions are induced by the limited information processing capability, and the granularity of the abstractions is directly influenced by the cost of information processing.

 

The work in [3] and its close relation to the thermodynamic framework for decision-making with information processing costs lead to the following conclusions:

  • A limit in information processing capacity is equivalent to having a certain cost of information processing.
  • Abstractions (or a certain kind of robustness) arise when trading off maximum expected utility against information processing cost.
  • Abstractions emerge as a consequence of limited information processing capacity.
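The trade-off in the second point can be sketched as a Blahut-Arimoto-style fixed-point iteration: alternate between the optimal conditional policy p(a|s) ∝ p(a) exp(β U(s,a)) and its marginal p(a) = Σ_s p(s) p(a|s). The utilities and β values below are hypothetical; the sketch only illustrates that at low β the marginal collapses onto a single state-independent (abstract) action, while at high β the conditionals become state-specific.

```python
import math

def rate_distortion_policy(U, p_s, beta, iters=200):
    """Alternate p(a|s) ∝ p(a) exp(beta * U[s][a]) and p(a) = Σ_s p(s) p(a|s)."""
    n_s, n_a = len(U), len(U[0])
    p_a = [1.0 / n_a] * n_a                 # initial marginal over actions
    for _ in range(iters):
        cond = []
        for s in range(n_s):
            w = [p_a[a] * math.exp(beta * U[s][a]) for a in range(n_a)]
            z = sum(w)
            cond.append([wi / z for wi in w])
        # marginal over actions, averaged over states
        p_a = [sum(p_s[s] * cond[s][a] for s in range(n_s)) for a in range(n_a)]
    return cond, p_a

# Hypothetical utilities: actions 0 and 1 are state-specific optima,
# action 2 is a decent state-independent compromise.
U = [[1.0, 0.0, 0.8],
     [0.0, 1.0, 0.8]]
p_s = [0.5, 0.5]

cond_lo, marg_lo = rate_distortion_policy(U, p_s, beta=1.0)   # costly processing
cond_hi, marg_hi = rate_distortion_policy(U, p_s, beta=20.0)  # cheap processing
# marg_lo concentrates on the abstract action 2; cond_hi is state-specific
```

The marginal p(a) plays the role of the abstraction: when processing is costly it is cheaper to reuse one state-independent action than to pay for distinguishing the states.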

 

We plan to extend our work to modelling several layers of abstraction. Commonly used models of hierarchies of abstractions, such as hierarchical Bayesian models, connectionist models and deep networks, implicitly set an information processing limit through their architecture, but this limit is usually not considered in the analysis. Our previous results suggest that information processing cost plays a crucial role in the emergence of abstractions; further exploration of the information-theoretic point of view on the formation of abstractions therefore seems promising. Potentially, this could also add a novel theoretical perspective on existing models of hierarchies of abstractions.

 

References

[1] D. M. Wolpert and Z. Ghahramani, "Computational principles of movement neuroscience", Nat Neurosci, 2000

[2] D. M. Wolpert, Z. Ghahramani, and M. I. Jordan, "An internal model for sensorimotor integration", Science, 1995

[3] T. Genewein and D. A. Braun, "Abstraction in decision-makers with limited information processing capabilities", NIPS 2013 Workshop on Planning with Information Constraints

Education

  • 2012 - Jan.: Joined the Sensorimotor Learning and Decision Making group as a PhD candidate.
  • 2012 - Mar.: MSc Telematics, Graz University of Technology, Austria
    • Thesis: Structure Learning for Robotic Motor Control
  • 2009 - Oct.: BSc Telematics, Graz University of Technology, Austria

 

Work Experience

  • 2010 to 2011: Software developer (part-time) at NTE Systems, Graz
  • 2009 to 2010: Software developer (part-time) at IVM Engineering, Graz


Articles (6):

Genewein T and Braun DA (June 2016) Bio-inspired feedback-circuit implementation of discrete, free energy optimizing, winner-take-all computations Biological Cybernetics 110(2) 135-150.
Genewein T, Leibfried F, Grau-Moya J and Braun DA (October 2015) Bounded rationality, abstraction and hierarchical decision-making: an information-theoretic optimality principle Frontiers in Robotics and AI 2(27) 1-24.
Genewein T, Hez E, Razzaghpanah Z and Braun DA (August 2015) Structure Learning in Bayesian Sensorimotor Integration PLoS Computational Biology 11(8) 1-27.
Genewein T and Braun D (May 2014) Occam's Razor in sensorimotor learning Proceedings of the Royal Society of London B 281(1783) 1-7.
Peng Z, Genewein T and Braun DA (March 2014) Assessing randomness and complexity in human motion trajectories through analysis of symbolic sequences Frontiers in Human Neuroscience 8(168) 1-13.
Genewein T and Braun DA (October 2012) A sensorimotor paradigm for Bayesian model selection Frontiers in Human Neuroscience 6(291) 1-16.

Conference Proceedings (4):

Peng Z, Genewein T, Leibfried F and Braun DA (September 25, 2017) An Information-Theoretic On-Line Update Principle for Perception-Action Coupling, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2017).
Grau-Moya J, Leibfried F, Genewein T and Braun DA (September 2016) Planning with Information-Processing Constraints and Model Uncertainty in Markov Decision Processes In: Machine Learning and Knowledge Discovery in Databases, European Conference on Machine Learning and Principles and Practice of Knowledge Discovery (ECML PKDD 2016), Springer, Cham, Switzerland, 475-491, Series: Lecture Notes in Computer Science; 9852.
Genewein T and Braun DA (December 2013) Abstraction in Decision-Makers with Limited Information Processing Capabilities, NIPS 2013 Workshop Planning with Information Constraints for Control, Reinforcement Learning, Computational Neuroscience, Robotics and Games, 1-9.
Ortega PA, Grau-Moya J, Genewein T, Balduzzi D and Braun DA (April 2013) A Nonparametric Conjugate Prior Distribution for the Maximizing Argument of a Noisy Function In: Advances in Neural Information Processing Systems 25, Twenty-Sixth Annual Conference on Neural Information Processing Systems (NIPS 2012), Curran, Red Hook, NY, USA, 3014-3022.

Posters (5):

Peng Z, Genewein T and Braun D (October 2013): Towards assessing randomness and complexity in human motion trajectories, 14th Conference of Junior Neuroscientists of Tübingen (NeNa 2013): Do the Results Justify the Methods, Schramberg, Germany.
Peng Z, Genewein T and Braun DA (September 2013): Assessing randomness in human motion trajectories, Bernstein Conference 2013, Tübingen, Germany.
Genewein T and Braun DA (September 2013): Occam's Razor in sensorimotor learning, Bernstein Conference 2013, Tübingen, Germany.
Genewein T and Braun DA (June 28, 2013): Bayesian Occam’s Razor for structure selection in human motor learning, RSS 2013 Workshop on Hierarchical and Structured Learning for Robotics, Berlin, Germany.
Genewein T (September 2012): A Sensorimotor Paradigm for Bayesian Model Selection, Tübingen International Summerschool 2012 (TISS 2012), Heiligkreuztal, Germany.

Talks (4):

Genewein T (May 16, 2016) Invited Lecture: Information-theoretic bounded rationality in perception-action systems, ICRA 2016 Workshop on Task-Driven Perceptual Representations: Sensing, Planning and Control under Resource Constraints, Stockholm, Sweden.
Genewein T (February 29, 2016) Invited Lecture: Hierarchical decision-making in perception-action systems, Cognitive Systems and Machine Learning Group: Bosch Research, Renningen, Germany.
Genewein T and Braun DA (December 16, 2014) Abstract Talk: An information-theoretic optimality principle for the formation of abstractions, Seventh International Workshop on Guided Self-Organization (GSO 2014), Freiburg, Germany.
Genewein T (January 17, 2013) Invited Lecture: Bayesian model selection in sensorimotor tasks, University of Cambridge: Computational and Biological Learning Lab, Cambridge, UK.

Last updated: Monday, 22.05.2017