VR-HYPERSPACE

VR-HYPERSPACE is a project funded under the Aeronautics and Air Transport (AAT) work programme of the EU's 7th Framework Programme and runs from October 2011 to September 2014. It comprises 9 partners from 6 European countries. VR-HYPERSPACE will carry out fundamental research and development leading to a paradigm shift in passenger comfort. The project adopts radical approaches, using virtual and mixed reality technologies and the latest findings in neuroscience and psychology to change the perception of space and, more profoundly, even the very perception of oneself.

More about VR-HYPERSPACE ...
 
myCopter - Enabling Technologies for Personal Aerial Transportation Systems

Considering the prevailing congestion problems with ground-based transportation and the anticipated growth of traffic in the coming decades, a major challenge is to find solutions that combine the best of ground-based and air-based transportation. The optimal solution would be to create a personal air transport system (PATS) that can overcome the environmental and financial costs associated with all of our current methods of transport.

The myCopter project aims to pave the way for personal aerial vehicles (PAVs) to be used by the general public. The project consortium consists of experts who can make the technological advances necessary for a viable PATS, and a partner to assess the impact of the envisioned PATS on society. To this end, test models of handling dynamics for potential PAVs will be designed and implemented on unmanned aerial vehicles, motion simulators, and a manned helicopter. In addition, an investigation into the human capability of flying a PAV will be conducted, resulting in a user-centred design of a suitable human-machine interface (HMI). Furthermore, the project will introduce new automation technologies for obstacle avoidance, path planning and formation flying, which also have excellent potential for other aerospace applications. This project is a unique integration of the technological advances and social investigations that are necessary to move public transportation into the third dimension.

More about myCopter ...

Finished third-party funded projects

SUPRA - Simulation of Upset Recovery in Aviation

SUPRA – Simulation of UPset Recovery in Aviation – is a collaborative research project funded by the 7th Framework Programme of the European Union under the 2nd Transport and Aeronautics Call. Within SUPRA, nine established research organisations and SMEs from six different countries (Netherlands, Germany, Austria, United Kingdom, Spain, and Russia) collectively aim to enhance flight simulators beyond their current capabilities to allow for effective upset-recovery training.

More about SUPRA ...
 
 
TANGO - Emotional interaction grounded in realistic context

The goal of the TANGO project is to take familiar ideas about affective communication one radical step further by developing a framework to represent and model the essentially interactive nature of social communication, based on non-verbal communication with facial and bodily expression.

TANGO will investigate interactions in real-life contexts, showing agents in daily situations such as navigation and affective communication. A central goal of the project is the development of a mathematical theory of emotional communicative behaviour. Theoretical developments and investigations of the neurofunctional basis of affective interactions will be combined with advanced methods from computer vision and computer graphics. Emotional interactions can thus be studied quantitatively in detail and transferred into technical systems that simulate believable emotional interactive behaviour. Based on the obtained experimental results and mathematical analysis, a new generation of technical devices establishing emotional communication between humans and machines will be developed.

More about TANGO...
 
 
WABS - Wahrnehmungsbasierte Bewegungssimulation (Perception-Based Motion Simulation)

The perception-based motion simulation project, funded by the Federal Ministry of Education and Research (Bundesministerium für Bildung und Forschung), aims to bring the impression of simulated motion as close as possible to reality by implementing psychophysical laws of perception in the control framework of the simulators. Human motion perception models are experimentally tested in driving and flying scenarios using our CyberMotion Simulator in order to enable a new generation of highly effective motion simulators.
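
As a rough illustration of the idea, here is a minimal motion-cueing sketch in Python. It is not the project's actual controller: the 3 deg/s rotation-rate threshold, the first-order filter, and all parameter values are assumptions chosen for the example. High-frequency acceleration is rendered by platform translation, while sustained acceleration is mimicked by tilting the cabin into gravity at a rate kept below an assumed vestibular perception threshold:

import numpy as np

G = 9.81                              # gravitational acceleration [m/s^2]
TILT_RATE_LIMIT = np.deg2rad(3.0)     # assumed rotation-rate perception threshold

def motion_cueing(accel, dt=0.01, scale=0.5, tau=2.0):
    """Split scaled vehicle acceleration into platform translation + cabin tilt."""
    hp = np.zeros_like(accel)         # high-pass part -> platform translation
    tilt = np.zeros_like(accel)       # low-frequency part -> cabin tilt angle [rad]
    state, theta = 0.0, 0.0
    alpha = tau / (tau + dt)          # discrete first-order high-pass coefficient
    scaled = accel * scale
    prev = scaled[0]
    for i, a in enumerate(scaled):
        state = alpha * (state + a - prev)   # first-order high-pass filter
        prev = a
        hp[i] = state
        # Tilt coordination: use gravity for the sustained component, but never
        # rotate faster than the (assumed) perceptual threshold.
        theta_target = np.arcsin(np.clip((a - state) / G, -1.0, 1.0))
        step = np.clip(theta_target - theta, -TILT_RATE_LIMIT * dt, TILT_RATE_LIMIT * dt)
        theta += step
        tilt[i] = theta
    return hp, tilt

# Example: a step in forward acceleration at t = 1 s, held for 4 s.
t = np.arange(0.0, 5.0, 0.01)
accel = np.where(t < 1.0, 0.0, 2.0)
platform_accel, cabin_tilt = motion_cueing(accel)
# platform_accel renders the onset transient; cabin_tilt slowly takes over the
# sustained part, and because the tilt rate stays sub-threshold the rotation
# itself should go unnoticed while the pressure of gravity is still felt.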

More information about WABS...
 
BW-FIT
Information at your fingertips - interactive visualization for gigapixel displays.
 
THE - The Hand Embodied
THE Hand Embodied refers to the “hand” as both a cognitive entity – standing for the sense of active touch – and as the physical embodiment of that sense: the organ, comprising actuators and sensors, that ultimately realizes the link between perception and action.

The general idea is to study how the embodied characteristics of the human hand and its sensors, the sensorimotor transformations, and the very constraints they impose affect and determine the learning and control strategies we use for such fundamental cognitive functions as exploring, grasping and manipulating. The ultimate goal of the project is to learn from human data and hypothesis-driven simulations how to devise improved system architectures for the “hand” as a cognitive organ, and eventually how to better design and control robot hands and haptic interfaces. The project hinges on the conceptual structure and the geometry of such enabling constraints, or synergies: correlations in redundant hand mobility (motor synergies), correlations in redundant cutaneous and kinaesthetic receptor readings (multi-cue integration), and overall sensorimotor system synergies.
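
To make the notion of motor synergies concrete, here is a minimal sketch in Python with made-up data, not the project's actual analysis pipeline. Synergies of this kind are commonly extracted as the principal components of recorded hand joint angles, i.e. the few correlated patterns that explain most of the variance in a redundant posture space:

import numpy as np

def extract_synergies(joint_angles, n_synergies=3):
    """joint_angles: (n_samples, n_joints) array of recorded hand postures."""
    mean = joint_angles.mean(axis=0)
    centered = joint_angles - mean
    # SVD of the centered data yields the principal components directly.
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    explained = (s ** 2) / np.sum(s ** 2)
    synergies = vt[:n_synergies]      # each row: one joint-coupling pattern
    return synergies, explained[:n_synergies], mean

# Hypothetical example: 500 grasp postures of a 20-degree-of-freedom hand.
rng = np.random.default_rng(0)
postures = rng.standard_normal((500, 20)) @ rng.standard_normal((20, 20))
syn, var, mean = extract_synergies(postures)
print(f"first 3 synergies explain {var.sum():.0%} of postural variance")
# Any posture can then be approximated as mean + weights @ syn, which is the
# low-dimensional control idea behind synergy-based hand models.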

More information about THE...
 
POETICON - The Poetics of Everyday Life
Reproducing an act with sensorimotor means and using fine natural language for communicating the intentionality behind the act is what Aristotle called "Poetics". POETICON is a research and development project that explores the "poetics of everyday life", i.e. the synthesis of sensorimotor representations and natural language in everyday human interaction. This relates to an old problem in Artificial Intelligence, namely how meaning emerges, which is approached here in a new way.

POETICON follows an empirical approach for discovering the "languages" of sensorimotor representations and the correspondences with natural language; guided by experiments in psychology and neuroscience, it employs cutting-edge equipment and established cognitive protocols for collecting face and body movement measurements, visual object information and associated linguistic descriptions from interacting human subjects, with a two-fold objective:
(i) The creation of the PRAXICON, an extensible computational resource that associates symbolic representations (words/concepts) with corresponding sensorimotor representations and is enriched with information on patterns among these representations for forming conceptual structures.
(ii) The exploration of the association between symbolic and sensorimotor representations through cognitive and neurophysiological experiments and through experimentation with a humanoid, which serve as driving forces and implementation tools, respectively, for the development of the PRAXICON.
POETICON views a cognitive system as a set of different languages (the spoken, the motor, the vision language, and so on) and provides a set of tools for parsing, generating and translating among them. Through interdisciplinary research, it contributes to the exploration of what integration in human cognition is and how it can be reproduced by intelligent agents. This is an ambitious first step towards revealing and conquering the "poetics of everyday life".

More about POETICON ...
 
BACS - Bayesian Approach to Cognitive Systems
Despite very extensive research efforts, contemporary robots and other cognitive artifacts are not yet ready to operate autonomously in complex real-world environments. One of the major reasons for this failure in creating cognitive situated systems is the difficulty of handling incomplete knowledge and uncertainty. In this project we will investigate and apply Bayesian models and approaches in order to develop artificial cognitive systems that can carry out complex tasks in real-world environments.

We will take inspiration from the brains of mammals, including humans, and apply our findings to the development of cognitive systems. The Bayesian approach will be used to model different levels of brain function, from neural functions up to complex behaviors. This will enable us to show that neural functions and higher-level cognitive activities can be coherently modeled within the Bayesian framework. The Bayesian models will be validated and adapted as necessary according to neurophysiological data from rats and humans and through psychophysical experiments on humans. The Bayesian approach will also be used to develop four artificial cognitive systems concerned with (i) autonomous navigation, (ii) multi-modal perception and reconstruction of the environment, (iii) semantic facial motion tracking, and (iv) human body motion recognition and behavior analysis. The research shall result in a consistent Bayesian framework offering enhanced tools for probabilistic reasoning in complex real-world situations. Its performance will be demonstrated through applications to driver assistance systems and 3D mapping, both very complex real-world tasks.
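
As a generic illustration of this kind of probabilistic reasoning (a minimal sketch in Python; the one-dimensional world and all probability values are invented for the example, and this is not code from the project), a discrete Bayes filter keeps a full probability distribution over a robot's position rather than a single estimate, updating it with noisy motion and sensor information:

import numpy as np

def predict(belief, p_move=0.8):
    """Motion update: the robot tries to move one cell right but may fail."""
    moved = np.roll(belief, 1)
    moved[0] = 0.0                          # no wrap-around at the boundary
    new = p_move * moved + (1 - p_move) * belief
    return new / new.sum()

def correct(belief, measurement, world, p_hit=0.9):
    """Sensor update: weight each cell by how well it explains the reading."""
    likelihood = np.where(world == measurement, p_hit, 1 - p_hit)
    posterior = likelihood * belief         # Bayes' rule, up to normalisation
    return posterior / posterior.sum()

world = np.array([0, 1, 0, 0, 1])           # cell features the sensor can detect
belief = np.full(5, 0.2)                    # start maximally uncertain
for z in [1, 0, 0]:                         # a sequence of noisy sensor readings
    belief = predict(belief)
    belief = correct(belief, z, world)
print(belief)                               # probability mass concentrates on the
                                            # cells consistent with the readings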
 
Wayfinding
Wayfinding is a European Project investigating the neurocognitive basis of spatial memory and orientation in humans. The project started in April 2005 and brings together six laboratories working in psychology, physiology, biology, neuroscience and medicine. The aim is to retrace the evolutionary history of spatial orientation and memory, and to show how individuals adapt their navigational strategies according to the situation. A deeper understanding of how humans make sense of space will provide invaluable information for environmental planning and design, and will lead to improved solutions for people with impaired spatial abilities.
 
CyberWalk
CyberWalk is a European project which aims at developing treadmills that let people walk in any direction; the goal is to enable omnidirectional walking in virtual worlds. Partners from Germany, Italy, and Switzerland work jointly on this project. Despite recent improvements in Virtual Reality technology, it is at present still impossible to physically walk through virtual environments. In this project our goal is to significantly advance the scientific and technological state of the art by enabling quasi-natural, unconstrained, and omnidirectional walking in virtual worlds. To achieve this visionary goal we follow a holistic approach that unites science, technology and applications.

CyberWalk will develop a completely novel concept for a high-fidelity omnidirectional treadmill, named CyberCarpet. This treadmill is envisioned as a 2D platform consisting of many small balls that should allow unrestricted omnidirectional walking, permitting the user to execute quick or slow movements, or even step over and cross his or her legs. At the end of the project it is foreseen that we will have an easy-to-use device constructed to fit individual needs, independent of gender or age. Its widespread use will be facilitated by the fact that users can quickly get ready to use it: the device will consist of a planar basis, and the visual tracking that supports the control operates marker-less (i.e. no special costumes have to be put on and no markers need to be attached to the body first). One only has to put on a head-mounted display, through which the virtual environment is shown. The motion-control concept behind this treadmill will focus on diminishing the forces exerted on the walking user by minimizing the overall accelerations.

To place the developments on a solid human-centred footing, CyberWalk will continuously push research in the field of cognitive/behavioural science and will determine the necessary psychophysical design guidelines and appropriate evaluation procedures. The CyberWalk project will showcase its developments via a physical walk-through (most probably through the virtual reconstruction of the ancient city of Sagalassos). However, it seems clear that the CyberWalk approach will also prove relevant to many other application areas, such as medical treatment and rehabilitation (e.g. Parkinson’s disease, phobia, etc.), entertainment, sports (e.g. training facilities, fitness centres), behavioural science, education (museums), training (maintenance teams, security guards, etc.), and architecture (exploring large virtual construction sites).
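
As a toy illustration of that control concept (a minimal sketch in Python; the PD law, the 0.3 m/s^2 comfort limit and all gains are assumptions for the example, not the actual CyberCarpet controller), the platform surface can recenter the walker with a velocity command whose acceleration is saturated so that the forces felt by the user stay small:

import numpy as np

ACCEL_LIMIT = 0.3   # assumed comfort limit on platform acceleration [m/s^2]

def platform_velocity(user_pos, user_vel, v_prev, dt=0.01, kp=0.5, kd=1.0):
    """PD recentering of the walker toward the platform origin.

    user_pos/user_vel: tracked position [m] and velocity [m/s] of the walker
    relative to the platform centre; v_prev: previous velocity command [m/s].
    """
    v_desired = kp * user_pos + kd * user_vel            # counteract drift from centre
    accel = (v_desired - v_prev) / dt                    # requested acceleration
    accel = np.clip(accel, -ACCEL_LIMIT, ACCEL_LIMIT)    # keep forces on the user small
    return v_prev + accel * dt

# Example: the walker has drifted 0.4 m forward and walks at 1 m/s.
v = 0.0
for _ in range(100):                                     # one second of control
    v = platform_velocity(0.4, 1.0, v)
print(f"surface speed after 1 s: {v:.2f} m/s")
# The saturation is the point: the surface catches up gradually instead of
# yanking the walker back, which is what "minimizing overall accelerations" buys.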
 
More about CyberWalk...
 
JAST
The main objective of JAST is to build jointly-acting autonomous systems that communicate and work intelligently on mutual tasks. By combining a basic, gender-differentiated understanding of the cognition, neurobiology and dialogue strategy of joint action, JAST specifically aims at:
• Building two autonomous agents, each endowed with a vision system, a gripper and a speech recognition/production system, that in cooperative configurations can carry out complex construction tasks
• Implementing perceptual modules for object recognition and recognition of gestures and actions of the partner (human or robot)
• Implementing control schemes that generate motor behaviour on the basis of internal forward models for the co-ordinated action of multiple cognitive systems
• Implementing verbal and non-verbal communication structures
• Developing autonomous systems with goal-directed and self-organizing learning processes
• Implementing an error monitoring system capable of reacting intelligently to self- and other-generated errors.

Because the JAST consortium combines leading scientists from disciplines that normally do not even share a vocabulary, JAST will initiate a completely new way of thinking about human perception, decision making, and behaviour. To realize these subgoals, JAST will bring together top European scientists from various disciplines to lay the foundations for and develop jointly-acting, adaptive, autonomous agents that, supported by dialogue, can perform perceptuo-motor tasks intelligently. The consortium forms a homogeneous group in that the partners will share a prototypical research paradigm and variations of a common construction scenario.
 
PRA
The scientific endeavour of the network will be shared across three large-scale projects. Our objectives are:
• to determine how chromatic and achromatic cues are used at different spatial scales to recognise naturalistic stimuli (faces, objects and scenes), and how low-level vision optimises the extraction of these cues in different recognition tasks;
• to detail how visual information is used for action, in particular with respect to motion cues. Our experiments will challenge the predominant view that perception and action are two independent cerebral systems and suggest an alternative framework;
• to determine the nature of the interaction between recognition and action. The study of this interaction has been neglected in the past, and we anticipate major breakthroughs for models of recognition and action by exploring issues such as action imitation.
These three large-scale projects are sub-divided into working groups that will focus on more specific questions. Each working group will be responsible for a couple of milestones.
 
POEMS
The overall objective of POEMS is to define the perceptually relevant rendering parameters for enhanced spatial presence experience. POEMS will develop reliable and valid multi-level measurement methods (subjective–introspective, psychophysical, physiological and behavioral) for spatial presence and ego-motion perception. These methods will be used to establish "optimal" auditory, visual, and vibrational rendering parameters as well as cross-modal, synergistic interactions important for perceiving spatial presence and ego-motion. Using this knowledge, the relation between rendering parameters, cross-modal effects, and spatial presence will be mathematically modeled. The long-term aim is to enable developers and designers of VR systems to optimize their simulations both technically and perceptually, thus allowing for lean, elegant, and low-cost VR simulations with a high sense of spatial presence and ego-motion. This is a prerequisite for the usability of VR for, e.g., training purposes.
 
TOUCH HapSys
To achieve the project's goals, two main threads will be followed. On the one side, the consortium will explore and develop new technologies that will be used to significantly improve haptic displays; on the other side, the psychophysical basis of human haptic perception will be investigated. One goal is to exploit haptic illusions to overcome fundamental technological limitations. Four demonstrators covering typical application scenarios with a critical technological challenge will be developed: haptic interaction with biological tissues, haptic texture rendering and recognition, the simulation of rigid objects with clearly defined sharp edges, and multi-modal volumetric exploration systems.
 
COMIC
COMIC is an IST project under the EU's 5th Framework Programme, focusing on new methods of work and e-commerce. The main aim of COMIC is to define generic cognitive models for multimodal interaction and to evaluate these in a number of demonstrators.
 
CogVis
The objective of this project is to provide the methods and techniques that enable the construction of vision systems able to perform task-oriented categorization and recognition of objects and events in the context of an embodied agent. This functionality will enable the construction of mobile agents that can interpret the actions of humans and interact with the environment for tasks such as the fetching and delivery of objects in a realistic domestic setting.
 


Last updated: Tuesday, 04.11.2014