Since 1997, we have employed a large-screen, half-cylindrical virtual reality projection system to study human perception. Studies in a variety of areas have been carried out, including spatial cognition and the perceptual control of action. In 2005, we made a number of fundamental improvements to the virtual reality system. Perhaps the most noticeable change is the altered screen size and geometry: the screen was extended horizontally (from 180 to 230 degrees), and a floor screen and projector were added. The projection screen curves smoothly from the wall projection to the floor projection, resulting in an overall screen geometry that can be described as a quarter-sphere. Vertically, the screen subtends 125 degrees (25 degrees of visual angle upwards and 100 degrees downwards from the normal observation position).
In 2011, the image generation and projection setup was significantly updated. The existing four JVC SX21 D-ILA projectors (1400x1050) and curved mirrors were replaced with six EYEVIS LED DLP projectors (1920x1200), thereby simplifying the projection setup and increasing the overall resolution. To compensate for the visual distortions caused by the curved projection screen, and to achieve soft-edge blending for seamless overlap areas, we have developed a flexible warping solution using the new warp and blend features of the NVIDIA Quadro chipsets. This gives us the flexibility of a hardware-based warping solution together with the accuracy of a software-based one. The necessary calibration data for the image warping and blending stages is generated by a new camera-based projector auto-calibration system (DOMEPROJECTION.COM). Image generation is handled by a new high-end render cluster consisting of six client image-generation PCs and one master PC. To avoid tearing artifacts in the multi-projector setup, the rendering computers use frame-synchronized graphics cards to synchronize the projected images.
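The soft-edge blending step can be illustrated with a minimal sketch of the underlying principle: each projector's intensity is ramped down across the overlap region so that the combined brightness stays constant. This is an illustration only, not the actual NVIDIA Quadro warp-and-blend implementation; the ramp shape and the gamma value of 2.2 are assumptions.

```python
def blend_weight(x, overlap_start, overlap_end, gamma=2.2):
    """Intensity attenuation for the right edge of the left projector.

    Returns 1.0 before the overlap zone, 0.0 after it, and a falloff in
    between. Raising the ramp to 1/gamma makes the falloff approximately
    linear in displayed (gamma-corrected) brightness; gamma is assumed.
    """
    if x <= overlap_start:
        return 1.0
    if x >= overlap_end:
        return 0.0
    t = (x - overlap_start) / (overlap_end - overlap_start)
    return (1.0 - t) ** (1.0 / gamma)
```

The adjacent projector applies the mirrored ramp; with gamma = 1.0 the two weights sum exactly to one everywhere in the overlap.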
In addition to improving the visual aspects of the system, we increased the quality, number, and types of input devices. Participants in experiments can, for example, interact with the virtual environment via actuated Wittenstein helicopter controls, joysticks, a space mouse, steering wheels, a go-kart, or a virtual bicycle (VRBike). Furthermore, a Razer Hydra 6-DOF controller can be used for wand navigation and small-volume tracking. Some of the input devices provide force feedback. With the VRBike, for example, one can actively pedal and steer through the virtual environment, and the virtual inertia and incline are reflected in the pedals' resistance.
For the stereo projection setup, NVIDIA 3D Vision Pro active shutter glasses are used. These glasses use RF synchronization and can therefore be operated in conjunction with the infrared-based optical tracking system. The glasses have been fitted with markers for the optical tracking system and can thus be used for head tracking.
Several commercial 3D scanning systems are used for capturing shape and color information of faces. The data is used for producing stimuli for psychophysical experiments and for building models to support machine learning and computer vision algorithms:
Cyberware Head Scanner
This scanner (Cyberware, Inc., USA) uses laser triangulation to record 3D data and a line sensor to capture color information, both producing 512 x 512 data points (typical depth resolution 0.1 mm) and covering 360° of the head in a cylindrical projection within 20 s. It was used extensively to build the MPI Head Database (http://faces.kyb.tuebingen.mpg.de/).
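Since the scanner samples the head in a cylindrical projection, each measurement (angle, height, measured radius) maps directly to a 3D point. A minimal sketch of that conversion follows; the axis conventions and units are assumptions, not the scanner's actual data format.

```python
import math

def cylindrical_to_cartesian(theta_deg, height, radius):
    """Convert one cylindrical-projection sample to Cartesian coordinates.

    theta_deg: rotation angle around the head's vertical axis (degrees).
    height:    position along that axis.
    radius:    measured distance from the axis (the range value).
    Axis conventions here (y up, theta measured from +x) are assumptions.
    """
    theta = math.radians(theta_deg)
    x = radius * math.cos(theta)
    z = radius * math.sin(theta)
    y = height
    return (x, y, z)
```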
ABW Structured Light Scanner
This is a customized version of an industrial scanning system (ABW GmbH, Germany), modified for use as a dedicated face scanner. It consists of two LCD line projectors, three video cameras, and three DSLR cameras. Using structured light and a calibrated camera/projector setup, 3D data can be computed from the video images by triangulation. Covering a face from ear to ear, the system produces up to 900,000 3D points and 18 megapixels of color information. One recording takes about 2 s, making this scanner much better suited to recording facial expressions. It was used extensively for building a facial expression model and for collecting static FACS data.
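For a rectified camera/projector pair, the triangulation step reduces to the classic disparity relation z = f * B / d. The sketch below shows this simplified 2D case; the real system triangulates fully calibrated rays in 3D, and the parameter names here are assumptions.

```python
def depth_from_stripe(x_cam, x_proj, focal_px, baseline_m):
    """Rectified structured-light triangulation (simplified 2D sketch).

    x_cam:      horizontal pixel where the stripe is seen by the camera.
    x_proj:     horizontal coordinate of the same stripe in the projector,
                identified by decoding the structured-light code.
    focal_px:   focal length in pixels; baseline_m: camera-projector
                baseline in meters. Returns depth z = f * B / d.
    """
    disparity = x_cam - x_proj
    if disparity == 0:
        raise ValueError("zero disparity: rays are parallel")
    return focal_px * baseline_m / disparity
```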
ABW Dynamic Scanner
Using the same principle of structured light, this system uses a high-speed stripe pattern projector, two high-speed video cameras and a color camera synchronized to strobe illumination. It can currently perform 40 3D measurements/s (with color information), producing detailed face scans over time. Ten seconds of recording time produce 2 GB of raw data.
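A quick back-of-the-envelope check of the quoted figures (assuming decimal gigabytes) shows that ten seconds of recording corresponds to 400 scans of roughly 5 MB of raw data each:

```python
scans_per_second = 40      # 3D measurements per second, as quoted
duration_s = 10            # ten seconds of recording
total_bytes = 2e9          # 2 GB of raw data (decimal GB assumed)

n_scans = scans_per_second * duration_s   # 400 scans in ten seconds
bytes_per_scan = total_bytes / n_scans    # about 5 MB of raw data per scan
```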
3dMD Speckle Pattern Scanner
This turn-key system (3dMD Ltd, UK) is mainly designed for medical purposes. Using four video cameras in combination with infrared speckle pattern flashes and two color cameras in sync with photography flashes, it can capture a face from ear to ear in 2 ms, making it highly suitable for recording infants and children. This system is used in collaboration with the University Clinic for Dentistry and Oral Medicine for studying infant growth.
Passive 4D Stereo Scanner
With three synchronized HD machine vision cameras (two grayscale, one color), this system (Dimensional Imaging, UK) is capable of reconstructing high-quality dense stereo data of moving faces at a rate of 25 frames/s. Since it is a passive system, high-quality studio lighting can be used to illuminate the subject, which results in better color data and greater subject comfort than with the ABW Dynamic Scanner. While care must be taken with camera focus, exposure, and calibration, the system imposes fewer limitations on rigid head motion.
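At the core of passive dense stereo is a correspondence search: for each pixel in one rectified image, the best-matching position along the same scanline of the other image is found, and the offset (disparity) encodes depth. The naive sum-of-absolute-differences search below sketches that step; the actual Dimensional Imaging pipeline is proprietary, and the window size and cost function here are assumptions.

```python
def best_disparity(left_row, right_row, x, window, max_disp):
    """Naive SAD block match along one rectified scanline (sketch).

    Compares a (2*window+1)-pixel block around left_row[x] against
    blocks shifted by d in right_row and returns the lowest-cost shift.
    """
    best_cost, best_d = float("inf"), 0
    for d in range(max_disp + 1):
        if x - d - window < 0:
            break  # shifted block would leave the right image
        cost = sum(abs(left_row[x + i] - right_row[x - d + i])
                   for i in range(-window, window + 1))
        if cost < best_cost:
            best_cost, best_d = cost, d
    return best_d
```

In a full pipeline this search runs per pixel (with subpixel refinement and regularization); triangulating each disparity then yields the dense depth map.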