Robin Ince

Alumni of the Department Physiology of Cognitive Processes

Main Focus

Generally, my interests lie in the area of data analysis: using mathematical, statistical and computational techniques to learn as much as possible from a given set of data. Neuroscience, an exciting frontier of modern science, is an ideal area in which to explore these interests, and offers the opportunity to bring new understanding to the fundamental problem of how our brains work. Experimental advances in modern neuroscience provide a rich source of data sets, which are often large (or worse, small!), noisy and difficult to collect. This makes it a rewarding challenge to support the work of my experimental colleagues by extracting the maximum value from the data they obtain.

Specifically, my work to date has primarily consisted of applying tools from information theory to the problem of neural coding; that is, how neurons represent information about the outside world. Information theory has a number of advantages here: it is a measure of dependence between variables that is sensitive to both linear and non-linear effects, it is non-parametric, placing no assumptions on the underlying system under study, and it provides quantitative results on a scale that can be meaningfully compared between different systems. It can be used to evaluate the timing precision of spikes, to compare different candidate codes (for example, spike rate vs. temporal spike pattern; pooled population spike count vs. labelled-line population code), and to quantify the effects of different types of interaction between variables in a system with multivariate input or output (for example, interactions between spiking neurons). I am also interested in applying a wider variety of techniques to these large-scale problems of neuroscience data analysis in general, and to the issue of neural coding in particular.
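To make the kind of quantity involved concrete, here is a minimal sketch (in plain NumPy, with an invented toy stimulus and response and illustrative names such as plugin_mutual_information; it is not the analysis code described here) of estimating the mutual information between a discrete stimulus and a binned spike-count response with the naive plug-in estimator:

    import numpy as np

    def plugin_mutual_information(x, y):
        """Naive plug-in estimate of I(X;Y) in bits from paired discrete samples."""
        x = np.asarray(x)
        y = np.asarray(y)
        xs, x_idx = np.unique(x, return_inverse=True)
        ys, y_idx = np.unique(y, return_inverse=True)
        joint = np.zeros((xs.size, ys.size))
        np.add.at(joint, (x_idx, y_idx), 1)          # joint histogram of (x, y)
        p_xy = joint / joint.sum()
        p_x = p_xy.sum(axis=1, keepdims=True)
        p_y = p_xy.sum(axis=0, keepdims=True)
        nz = p_xy > 0
        return float(np.sum(p_xy[nz] * np.log2(p_xy[nz] / (p_x @ p_y)[nz])))

    # Toy example: spike counts whose distribution depends on a discrete stimulus.
    rng = np.random.default_rng(0)
    stimulus = rng.integers(0, 4, size=2000)          # 4 stimulus conditions
    rates = np.array([1.0, 2.0, 4.0, 8.0])            # assumed firing rates for the toy "rate code"
    spike_count = rng.poisson(rates[stimulus])
    print(plugin_mutual_information(stimulus, spike_count))

On real data the raw plug-in value would still need bias correction and significance testing, which is exactly the technical work described below.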

Technically, I am interested in practical techniques for improving the calculation or estimation of information theoretic quantities, both in terms of statistical properties, for example by correcting for the bias caused by limited sampling, and in terms of computational performance. The latter is important because many analyses require permutation- or bootstrap-style controls, which involve repeating the information calculation many times on shuffled or modelled data and can be a significant bottleneck.
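To illustrate why these controls dominate the computational cost, the sketch below implements a simple permutation-style control (reusing the illustrative plugin_mutual_information function above; the function and parameter names are assumptions, not the actual analysis code): the labels are shuffled many times to build a null distribution, so the information estimate is recomputed hundreds or thousands of times per test.

    import numpy as np

    def permutation_pvalue(x, y, info_fn, n_perm=1000, seed=0):
        """Permutation control: shuffle x to break any dependence with y,
        recompute the information each time, and compare to the observed value."""
        rng = np.random.default_rng(seed)
        observed = info_fn(x, y)
        null = np.empty(n_perm)
        for i in range(n_perm):
            null[i] = info_fn(rng.permutation(x), y)   # shuffled data -> null distribution
        # Fraction of shuffles with information at least as large as observed.
        return observed, (1 + np.sum(null >= observed)) / (n_perm + 1)

    # obs, p = permutation_pvalue(stimulus, spike_count, plugin_mutual_information)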

For more information, please see my home page.

Information theoretic analysis tools

A major difficulty when estimating information theoretic quantities from experimental data is the problem of bias, a systematic error caused by limited sampling. While many techniques have been developed to correct for this effect, implementing them can be an involved task. I have developed an open source Python library which implements a range of bias corrections. Having these tools freely available is important to encourage wider use of the techniques, and to make it easy to try a range of corrections or measures on one's data (or on simulated data with similar statistical properties). Additionally, the library implements the information breakdown technique, which quantifies the effect of different types of interaction between the multivariate outputs (or inputs) of a system, as well as a tool for computing maximum entropy solutions subject to marginal equality constraints, which can be used to obtain further detail on the effects of interactions of different orders.
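As a minimal sketch of what a simple bias correction looks like, the following applies the classical first-order (Miller-Madow-type) correction to each entropy term of I(X;Y) = H(X) + H(Y) - H(X,Y). This is a textbook illustration only, not the library's implementation, and more sophisticated corrections exist.

    import numpy as np

    def miller_madow_entropy(counts):
        """Plug-in entropy (bits) plus the first-order Miller-Madow bias correction."""
        counts = np.asarray(counts, dtype=float)
        n = counts.sum()
        p = counts[counts > 0] / n
        h_plugin = -np.sum(p * np.log2(p))
        m = np.count_nonzero(counts)                   # number of occupied bins
        return h_plugin + (m - 1) / (2 * n * np.log(2))

    def corrected_mutual_information(x, y):
        """I(X;Y) = H(X) + H(Y) - H(X,Y), with each entropy bias-corrected."""
        x = np.asarray(x); y = np.asarray(y)
        _, xi = np.unique(x, return_inverse=True)
        _, yi = np.unique(y, return_inverse=True)
        joint = np.zeros((xi.max() + 1, yi.max() + 1))
        np.add.at(joint, (xi, yi), 1)
        return (miller_madow_entropy(joint.sum(axis=1))
                + miller_madow_entropy(joint.sum(axis=0))
                - miller_madow_entropy(joint.ravel()))

Because the joint entropy is the most biased term, the net effect is to reduce the upward bias of the plug-in mutual information.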

I am currently working on the statistical interpretation of mutual information as a test of independence, characterising the distribution under the null hypothesis and investigating the effects of temporal dependence in signals. I am also looking at applications of conditional mutual information in neural data analysis, for example when dealing with correlated stimuli, or to provide a form of robust group inference.
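A useful anchor here is the classical asymptotic result that, for independent discrete variables and i.i.d. samples, 2 N ln(2) times the plug-in mutual information (in bits) is the G statistic, which is approximately chi-squared distributed with (|X|-1)(|Y|-1) degrees of freedom. The sketch below is an illustration of that textbook approximation, not of the characterisation described above; note that temporal dependence between samples violates the i.i.d. assumption behind it.

    import numpy as np
    from scipy.stats import chi2

    def mi_independence_test(i_bits, n_samples, n_x, n_y):
        """Asymptotic p-value for plug-in MI (bits) under the null of independence,
        using G = 2*N*ln(2)*I ~ chi-squared with (|X|-1)(|Y|-1) degrees of freedom."""
        g = 2.0 * n_samples * np.log(2.0) * i_bits
        dof = (n_x - 1) * (n_y - 1)
        return chi2.sf(g, dof)

    # e.g. p = mi_independence_test(plugin_mutual_information(stimulus, spike_count),
    #                               len(stimulus), 4, spike_count.max() + 1)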

Information theoretic analysis of fMRI data

There are two fundamental approaches to the statistical detection of activated regions in fMRI experiments: using external stimulus information to find regions where activity correlates with stimulus changes (e.g. the General Linear Model [GLM]), or using purely data-driven methods which attempt to extract commonly activated areas from the data without any external information (e.g. Independent Component Analysis [ICA]). I am developing information theoretic techniques for both of these approaches.

In the first case, one can view mutual information between a stimulus condition and the BOLD response of a voxel as the effect size for a statistical test of independence. If there is a significant deviation from independence, then the response is modulated by the stimulus and hence that region can be considered activated during the task. In some ways this is more general than traditional statistical comparisons such as the t-test; for example, if the mean of the response did not change but the variance did, mutual information would be sensitive to such an effect. Statistical significance can be obtained directly from the information value, which allows direct comparison with results from other methods, such as the GLM. The advantage is that no assumptions are made about the response (for example normality), the mechanism of activation (no requirement for a haemodynamic response model to be specified) or the linearity of the effect. A disadvantage is that if responses to different conditions overlap in time, it is difficult to tease out the effect of the different conditions (the beauty of the GLM approach is that it allows such a separation). I am currently working on how best to apply these techniques to different block or event-related designs, how to account for temporal dependence in the statistical inference, and how the information approach can be extended to group analysis, as well as on producing high-performance code to implement the analysis.
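A toy illustration of the sensitivity argument (purely simulated data with arbitrary parameters, not an fMRI analysis, reusing the plugin_mutual_information sketch above): a response whose variance, but not mean, depends on the condition gives an essentially null t-test yet a clearly non-zero mutual information.

    import numpy as np
    from scipy.stats import ttest_ind

    rng = np.random.default_rng(1)
    condition = rng.integers(0, 2, size=5000)
    # Simulated voxel response: same mean in both conditions, larger variance in condition 1.
    response = rng.normal(0.0, np.where(condition == 1, 2.0, 1.0))

    t, p = ttest_ind(response[condition == 0], response[condition == 1])
    print(f"t-test: t = {t:.2f}, p = {p:.2f}")         # no mean difference to detect

    # Discretise the response into equipopulated bins, then estimate I(condition; response).
    bins = np.quantile(response, np.linspace(0, 1, 9)[1:-1])
    binned = np.digitize(response, bins)
    print(f"MI = {plugin_mutual_information(condition, binned):.3f} bits")  # clearly above zero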

To address the second case, where stimulus information is not available, I am investigating graph-theoretic clustering methods together with information-based dependency measures between individual voxels to obtain, in a purely data-driven way, areas that are activated together.
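A minimal sketch of this idea, in which every concrete choice (the binning, the information threshold, connected components as the clustering step, and the networkx dependency) is an illustrative assumption rather than the actual method: voxels become nodes, pairs with sufficient estimated mutual information become edges, and connected components are read off as candidate co-activated areas.

    import numpy as np
    import networkx as nx

    def coactivation_clusters(voxel_timecourses, threshold_bits=0.1, n_bins=8):
        """voxel_timecourses: array of shape (n_voxels, n_timepoints).
        Returns sets of voxel indices linked by pairwise (binned, plug-in) MI above threshold."""
        n_voxels, _ = voxel_timecourses.shape
        # Discretise each voxel's time course into equipopulated bins.
        binned = np.array([np.digitize(v, np.quantile(v, np.linspace(0, 1, n_bins + 1)[1:-1]))
                           for v in voxel_timecourses])
        graph = nx.Graph()
        graph.add_nodes_from(range(n_voxels))
        for i in range(n_voxels):
            for j in range(i + 1, n_voxels):
                if plugin_mutual_information(binned[i], binned[j]) > threshold_bits:
                    graph.add_edge(i, j)               # information-based dependency edge
        return list(nx.connected_components(graph))

In practice the threshold would have to account for estimator bias and multiple comparisons, for example via permutation controls like the one sketched earlier, and a community-detection algorithm could replace the connected-components step.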
