A brain-machine interface allows humans to interact with their environment without the aid of muscle power. For example, paralyzed patients can spell out words and form sentences using just their thoughts. However, this requires computationally demanding analysis of the complex, multidimensional signals that the brain generates during this process. The analysis is carried out with “support vector machines”, learning algorithms that automatically extract regularities from large amounts of data, which are then used for control purposes. The “Empirical Inference” department develops new learning methods that can detect structure in experimental data. These include, for example, algorithms for pattern recognition, regression, density estimation, novelty detection and feature selection. The focus is on developing so-called kernel methods.
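To give a flavour of how a kernel method extracts regularities from labelled data, here is a minimal sketch in Python of a kernel perceptron, a simpler relative of the support vector machine. The data and parameters are invented for illustration; this is not the department's actual algorithm or pipeline.

```python
import math

def rbf_kernel(x, z, gamma=1.0):
    # Gaussian (RBF) kernel: similarity decays with squared distance
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, z)))

def train_kernel_perceptron(X, y, kernel, epochs=10):
    # Dual perceptron: the decision function is a kernel expansion
    #   f(x) = sum_i alpha_i * y_i * k(x_i, x)
    alpha = [0] * len(X)
    for _ in range(epochs):
        for i, (xi, yi) in enumerate(zip(X, y)):
            s = sum(a * yj * kernel(xj, xi)
                    for a, yj, xj in zip(alpha, y, X))
            if yi * s <= 0:        # misclassified: strengthen this example
                alpha[i] += 1
    return alpha

def predict(x, X, y, alpha, kernel):
    s = sum(a * yj * kernel(xj, x) for a, yj, xj in zip(alpha, y, X))
    return 1 if s >= 0 else -1

# Toy two-class data: an inner cluster near the origin vs. outer points.
# Such classes are not linearly separable in the input space, but the
# RBF kernel makes them separable in its feature space.
X = [(0.1, 0.0), (0.0, 0.2), (-0.1, 0.1),
     (2.0, 2.0), (-2.0, 1.8), (1.9, -2.1)]
y = [1, 1, 1, -1, -1, -1]
alpha = train_kernel_perceptron(X, y, rbf_kernel)
print(predict((0.05, 0.05), X, y, alpha, rbf_kernel))  # inner cluster: +1
```

A support vector machine refines the same idea by choosing the kernel expansion that maximizes the margin between the classes, rather than stopping at the first consistent one.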
The basic problem of learning is “generalization”: the extracted laws are supposed not only to explain existing observations (the “training data”) correctly, but also to apply to new observations. This problem of induction touches on fundamental issues not only in statistics but in the empirical sciences in general. These issues include the representation of data and existing knowledge, and the complexity or capacity of explanations or models. If a given quantity of empirical observations can be adequately accounted for by a model of low complexity (where this term is formalized appropriately), then statistical learning theory guarantees that it is highly probable that future observations will be consistent with the model.
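The trade-off between model complexity and generalization can be seen in a small numerical experiment (a constructed illustration, not taken from the source). Two models are fitted to noisy observations of a line: a low-capacity straight-line fit, and a high-capacity nearest-neighbour model that simply memorizes the training data. The memorizing model explains the training data perfectly yet predicts held-out observations worse:

```python
def fit_line(xs, ys):
    # Ordinary least squares for y = a*x + b (a low-capacity model)
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

def predict_nn(x, xs, ys):
    # 1-nearest-neighbour "model": memorizes the training set (high capacity)
    return min(zip(xs, ys), key=lambda p: abs(p[0] - x))[1]

def mse(pred, xs, ys):
    return sum((pred(x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

# Noisy observations of an underlying law y = 2x + 1 (fixed noise,
# so the experiment is reproducible)
noise = [0.4, -0.3, 0.2, -0.5, 0.1, 0.3, -0.2, 0.5, -0.1, -0.4]
data = [(x, 2 * x + 1 + e) for x, e in enumerate(noise)]
train, test = data[::2], data[1::2]      # hold out every second point
tx, ty = zip(*train)

a, b = fit_line(tx, ty)
line_test = mse(lambda x: a * x + b, *zip(*test))
nn_train = mse(lambda x: predict_nn(x, tx, ty), tx, ty)
nn_test = mse(lambda x: predict_nn(x, tx, ty), *zip(*test))

print(f"memorizing model, training error: {nn_train:.2f}")   # zero
print(f"memorizing model, test error:     {nn_test:.2f}")
print(f"simple line model, test error:    {line_test:.2f}")  # much lower
```

Zero training error is therefore no guarantee of generalization; the low-complexity model that merely fits the training data adequately predicts the unseen points better, which is the intuition that statistical learning theory makes precise.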
Processing data from brain-machine interfaces is, however, only one problem among many that can be solved with learning theory. Modern science is full of problems that remain unsolved because traditional analysis methods cannot get to grips with multidimensional data.