This file was created by the TYPO3 extension sevenpack, version 0.7.14
Timezone: CEST
Creation date: 2017-07-28
Creation time: 00:34:19
Number of references: 99
article
SchuttHMW2016
Painfree and accurate Bayesian estimation of psychometric functions for (potentially) overdispersed data
Vision Research
2016
5
122
105–123
The psychometric function describes how an experimental variable, such as stimulus strength, influences the behaviour of an observer. Estimation of psychometric functions from experimental data plays a central role in fields such as psychophysics, experimental psychology and the behavioural neurosciences. Experimental data may exhibit substantial overdispersion, which may result from non-stationarity in the behaviour of observers. Here we extend the standard binomial model which is typically used for psychometric function estimation to a beta-binomial model. We show that the use of the beta-binomial model makes it possible to determine accurate credible intervals even in data which exhibit substantial overdispersion. This goes beyond classical measures for overdispersion (goodness-of-fit), which can detect overdispersion but provide no method to do correct inference for overdispersed data. We use Bayesian inference methods for estimating the posterior distribution of the parameters of the psychometric function. Unlike previous Bayesian psychometric inference methods, our software implementation, psignifit 4, performs numerical integration of the posterior within automatically determined bounds. This avoids the use of Markov chain Monte Carlo (MCMC) methods, which typically require expert knowledge. Extensive numerical tests show the validity of the approach, and we discuss implications of overdispersion for experimental design. A comprehensive MATLAB toolbox implementing the method is freely available; a Python implementation providing the basic capabilities is also available.
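The overdispersion argument in this abstract can be illustrated with a small sketch (an illustration only, not the psignifit 4 implementation; the trial count and beta parameters below are invented): at the same mean proportion correct, a beta-binomial observer model has strictly larger variance than a binomial one, which is what widens the credible intervals.

```python
# Hedged sketch: binomial vs. beta-binomial variance at the same mean.

def binomial_var(n, p):
    # Variance of k successes out of n trials with fixed success probability p.
    return n * p * (1 - p)

def beta_binomial_var(n, a, b):
    # Variance when p itself varies from block to block as Beta(a, b):
    # Var[k] = n*a*b*(a+b+n) / ((a+b)^2 * (a+b+1))
    return n * a * b * (a + b + n) / ((a + b) ** 2 * (a + b + 1))

n = 40                     # trials per block (invented)
a, b = 6.0, 2.0            # Beta(6, 2) has mean 0.75, matching p below
p = a / (a + b)            # 0.75

print(binomial_var(n, p))          # 7.5
print(beta_binomial_var(n, a, b))  # 40.0 -- overdispersed
```

The extra variance comes entirely from the block-to-block variation of p; as a and b grow with a/(a+b) fixed, the beta-binomial variance shrinks back toward the binomial value.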
http://www.kyb.tuebingen.mpg.de
Research Group Macke
http://www.sciencedirect.com/science/article/pii/S0042698916000390
10.1016/j.visres.2016.02.002
H H Schütt
S Harmeling
J H Macke
F A Wichmann
article
PanzeriMGk2015
Neural population coding: combining insights from microscopic and mass signals
Trends in Cognitive Sciences
2015
3
19
3
162–172
Behavior relies on the distributed and coordinated activity of neural populations. Population activity can be measured using multi-neuron recordings and neuroimaging. Neural recordings reveal how the heterogeneity, sparseness, timing, and correlation of population activity shape information processing in local networks, whereas neuroimaging shows how long-range coupling and brain states impact on local activity and perception. To obtain an integrated perspective on neural information processing we need to combine knowledge from both levels of investigation. We review recent progress on how neural recordings, neuroimaging, and computational approaches begin to elucidate how interactions between local neural population activity and large-scale dynamics shape the structure and coding capacity of local information representations, make them state-dependent, and control distributed populations that collectively shape behavior.
http://www.kyb.tuebingen.mpg.de
Department Logothetis
Research Group Macke
Research Group Kayser
http://www.sciencedirect.com/science/article/pii/S1364661315000030
10.1016/j.tics.2015.01.002
S Panzeri
J H Macke
J Gross
C Kayser
article
KuffnerZNHSWLFMHCSEGHvMMSTVSL2014
Crowdsourced analysis of clinical trial data to predict amyotrophic lateral sclerosis progression
Nature Biotechnology
2015
1
33
1
51–57
Amyotrophic lateral sclerosis (ALS) is a fatal neurodegenerative disease with substantial heterogeneity in its clinical presentation. This makes diagnosis and effective treatment difficult, so better tools for estimating disease progression are needed. Here, we report results from the DREAM-Phil Bowen ALS Prediction Prize4Life challenge. In this crowdsourcing competition, competitors developed algorithms for the prediction of disease progression of 1,822 ALS patients from standardized, anonymized phase 2/3 clinical trials. The two best algorithms outperformed a method designed by the challenge organizers as well as predictions by ALS clinicians. We estimate that using both winning algorithms in future trial designs could reduce the required number of patients by at least 20%. The DREAM-Phil Bowen ALS Prediction Prize4Life challenge also identified several potential nonstandard predictors of disease progression, including uric acid, creatinine and, surprisingly, blood pressure, shedding light on ALS pathobiology. This analysis reveals the potential of a crowdsourcing competition that uses clinical trial data for accelerating ALS research and development.
http://www.kyb.tuebingen.mpg.de
Research Group Macke
Department Schölkopf
http://www.nature.com/nbt/journal/v33/n1/pdf/nbt.3051.pdf
10.1038/nbt.3051
R Küffner
N Zach
R Norel
J Hawe
D Schoenfeld
L Wang
G Li
L Fang
L Mackey
O Hardiman
M Cudkowicz
A Sherman
G Ertaylan
M Grosse-Wentrup
T Hothorn
J van Ligtenberg
J H Macke
T Meyer
B Schölkopf
L Tran
R Vaughan
G Stolovitzky
M L Leitner
article
FrundWM2014
Quantifying the effect of intertrial dependence on perceptual decisions
Journal of Vision
2014
6
14
7:9
1–16
In the perceptual sciences, experimenters study the causal mechanisms of perceptual systems by probing observers with carefully constructed stimuli. It has long been known, however, that perceptual decisions are not only determined by the stimulus, but also by internal factors. Internal factors could lead to a statistical influence of previous stimuli and responses on the current trial, resulting in serial dependencies, which complicate the causal inference between stimulus and response. However, the majority of studies do not take serial dependencies into account, and it has been unclear how strongly they influence perceptual decisions. We hypothesize that one reason for this neglect is that there has been no reliable tool to quantify them and to correct for their effects. Here we develop a statistical method to detect, estimate, and correct for serial dependencies in behavioral data. We show that even trained psychophysical observers suffer from strong history dependence. A substantial fraction of the decision variance on difficult stimuli was independent of the stimulus but dependent on experimental history. We discuss the strong dependence of perceptual decisions on internal factors and its implications for correct data interpretation.
http://www.kyb.tuebingen.mpg.de
Research Group Macke
http://www.journalofvision.org/content/14/7/9.full.pdf+html
10.1167/14.7.9
I Fründ
F A Wichmann
J Macke
article
WatanabeBMML2013
Temporal Jitter of the BOLD Signal Reveals a Reliable Initial Dip and Improved Spatial Resolution
Current Biology
2013
11
23
21
2146–2150
fMRI, one of the most important noninvasive brain imaging methods, relies on the blood oxygen level-dependent (BOLD) signal, whose precise underpinnings are still not fully understood [1]. It is a widespread assumption that the components of the hemodynamic response function (HRF) are fixed relative to each other in time, leading most studies as well as analysis tools to focus on trial-averaged responses, thus using or estimating a condition- or location-specific “canonical HRF” [2, 3 and 4]. In the current study, we examined the nature of the variability of the BOLD response and asked in particular whether the positive BOLD peak is subject to trial-to-trial temporal jitter. Our results show that the positive peak of the stimulus-evoked BOLD signal exhibits a trial-to-trial temporal jitter on the order of seconds. Moreover, the trial-to-trial variability can be exploited to uncover the initial dip in the majority of voxels by pooling trial responses with large peak latencies. Initial dips exposed by this procedure possess higher spatial resolution compared to the positive BOLD signal in the human visual cortex. These findings allow for the reliable observation of fMRI signals that are physiologically closer to neural activity, leading to improvements in both temporal and spatial resolution.
http://www.kyb.tuebingen.mpg.de
Department Logothetis
Research Group Macke
http://www.sciencedirect.com/science/article/pii/S0960982213011160
10.1016/j.cub.2013.08.057
M Watanabe
A Bartels
J H Macke
Y Murayama
N K Logothetis
article
MackeML2013_2
Estimation bias in maximum entropy models
Entropy
2013
8
15
8
3109–3219
Maximum entropy models have become popular statistical models in neuroscience and other areas in biology, and can be useful tools for obtaining estimates of mutual information in biological systems. However, maximum entropy models fit to small data sets can be subject to sampling bias; i.e., the true entropy of the data can be severely underestimated. Here, we study the sampling properties of estimates of the entropy obtained from maximum entropy models. We focus on pairwise binary models, which are used extensively to model neural population activity. We show that if the data is well described by a pairwise model, the bias is equal to the number of parameters divided by twice the number of observations. If, however, the higher-order correlations in the data deviate from those predicted by the model, the bias can be larger. Using a phenomenological model of neural population recordings, we find that this additional bias is highest for small firing probabilities, strong correlations and large population sizes; for the parameters we tested, it was a factor of about four higher. We derive guidelines for how long a neurophysiological experiment needs to be in order to ensure that the bias is less than a specified criterion. Finally, we show how a modified plug-in estimate of the entropy can be used for bias correction.
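The bias formula quoted in this abstract (number of parameters divided by twice the number of observations) is easy to evaluate for a pairwise binary model; the population and sample sizes below are invented for illustration.

```python
def n_pairwise_params(n_neurons):
    # A pairwise binary maximum entropy model has one bias term per neuron
    # and one coupling per pair of neurons.
    return n_neurons + n_neurons * (n_neurons - 1) // 2

def entropy_bias(n_neurons, n_observations):
    # Bias stated in the abstract, valid when the data is well described
    # by the pairwise model: parameters / (2 * observations).
    return n_pairwise_params(n_neurons) / (2 * n_observations)

print(n_pairwise_params(10))   # 55
print(entropy_bias(10, 1000))  # 0.0275
```

Because the parameter count grows quadratically with population size, the number of observations needed to keep the bias below a fixed criterion grows quadratically as well.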
http://www.kyb.tuebingen.mpg.de
Research Group Macke
http://www.mdpi.com/10994300/15/8/3109/pdf
10.3390/e15083109
J H Macke
I Murray
P E Latham
article
HaefnerGMB2013
Inferring decoding strategies from choice probabilities in the presence of correlated variability
Nature Neuroscience
2013
2
16
2
235–242
The activity of cortical neurons in sensory areas covaries with perceptual decisions, a relationship that is often quantified by choice probabilities. Although choice probabilities have been measured extensively, their interpretation has remained fraught with difficulty. We derive the mathematical relationship between choice probabilities, readout weights and correlated variability in the standard neural decision-making model. Our solution allowed us to prove and generalize earlier observations on the basis of numerical simulations and to derive new predictions. Notably, our results indicate how the readout weight profile, or decoding strategy, can be inferred from experimentally measurable quantities. Furthermore, we developed a test to decide whether the decoding weights of individual neurons are optimal for the task, even without knowing the underlying correlations. We confirmed the practicality of our approach using simulated data from a realistic population model. Thus, our findings provide a theoretical foundation for a growing body of experimental results on choice probabilities and correlations.
http://www.kyb.tuebingen.mpg.de
Research Group Macke
Research Group Bethge
Department Schölkopf
http://www.nature.com/neuro/journal/v16/n2/pdf/nn.3309.pdf
10.1038/nn.3309
R M Haefner
S Gerwinn
J H Macke
M Bethge
article
SchwartzMATB2012
Low Error Discrimination using a Correlated Population Code
Journal of Neurophysiology
2012
8
108
4
1069–1088
We explored the manner in which spatial information is encoded by retinal ganglion cell populations. We flashed a set of 36 shape stimuli onto the tiger salamander retina and used different decoding algorithms to read out information from a population of 162 ganglion cells. We compared the discrimination performance of linear decoders, which ignore correlation induced by common stimulation, against nonlinear decoders, which can accurately model these correlations. Similar to previous studies, decoders that ignored correlation suffered only a modest drop in discrimination performance for groups of up to ∼30 cells. However, for more realistic groups of 100+ cells, we found order-of-magnitude differences in the error rate. We also compared decoders that used only the presence of a single spike from each cell against more complex decoders that included information from multiple spike counts and multiple time bins. More complex decoders substantially outperformed simpler decoders, showing the importance of spike timing information. Particularly effective was the first spike latency representation, which allowed zero discrimination errors for the majority of shape stimuli. Furthermore, the performance of nonlinear decoders showed even greater enhancement compared to linear decoders for these complex representations. Finally, decoders that approximated the correlation structure in the population by matching all pairwise correlations with a maximum entropy model fit to all 162 neurons were quite successful, especially for the spike latency representation. Together, these results suggest a picture in which linear decoders allow a coarse categorization of shape stimuli, while nonlinear decoders, which take advantage of both correlation and spike timing, are needed to achieve high-fidelity discrimination.
http://www.kyb.tuebingen.mpg.de
Research Group Macke
http://jn.physiology.org/content/early/2012/04/20/jn.00564.2011.full.pdf+html
10.1152/jn.00564.2011
G Schwartz
J Macke
D Amodei
H Tang
M J Berry
article
BuesingMS2012
Learning stable, regularised latent models of neural population dynamics
Network
2012
3
23
1–2
24–47
Ongoing advances in experimental technique are making commonplace simultaneous recordings of the activity of tens to hundreds of cortical neurons at high temporal resolution. Latent population models, including Gaussian-process factor analysis and hidden linear dynamical system (LDS) models, have proven effective at capturing the statistical structure of such data sets. They can be estimated efficiently, yield useful visualisations of population activity, and are also integral building blocks of decoding algorithms for brain-machine interfaces (BMI). One practical challenge, particularly to LDS models, is that when parameters are learned using realistic volumes of data, the resulting models often fail to reflect the true temporal continuity of the dynamics; indeed, they may describe a biologically implausible, unstable population dynamic, that is, they may predict neural activity that grows without bound. We propose a method for learning LDS models based on expectation maximisation that constrains parameters to yield stable systems and at the same time promotes capture of temporal structure by appropriate regularisation. We show that when only little training data is available, our method yields LDS parameter estimates which provide a substantially better statistical description of the data than alternatives, whilst guaranteeing stable dynamics. We demonstrate our methods using both synthetic data and extracellular multielectrode recordings from motor cortex.
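The stability property discussed in this abstract can be checked directly: an LDS x_{t+1} = A x_t + noise is stable when the spectral radius of A is below one. The sketch below is a generic check, not the authors' constrained EM procedure, and the example matrix is invented.

```python
import math

# Stability check for a 2x2 linear dynamical system x_{t+1} = A x_t:
# the dynamics are stable iff the spectral radius of A is < 1.
def spectral_radius_2x2(a, b, c, d):
    # Eigenvalue moduli of [[a, b], [c, d]] from the characteristic polynomial.
    tr, det = a + d, a * d - b * c
    disc = tr * tr - 4 * det
    if disc >= 0:
        r = math.sqrt(disc)
        return max(abs((tr + r) / 2), abs((tr - r) / 2))
    return math.sqrt(det)  # complex conjugate pair: |lambda| = sqrt(det)

# Invented example: a damped rotation with spectral radius 0.9 (stable).
rho, theta = 0.9, 0.3
A_stable = (rho * math.cos(theta), -rho * math.sin(theta),
            rho * math.sin(theta),  rho * math.cos(theta))
print(spectral_radius_2x2(*A_stable))  # 0.9 (up to rounding)
```

A matrix with spectral radius above one, e.g. a uniform expansion by 1.1, would predict activity that grows without bound, which is the failure mode the abstract describes.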
http://www.kyb.tuebingen.mpg.de
http://informahealthcare.com/doi/abs/10.3109/0954898X.2012.677095
10.3109/0954898X.2012.677095
L Buesing
J H Macke
M Sahani
article
MackeBb2011
Statistical analysis of multicell recordings: linking population coding models to experimental data
Frontiers in Computational Neuroscience
2011
7
5
35
1–2
Modern recording techniques such as multielectrode arrays and two-photon imaging methods are capable of simultaneously monitoring the activity of large neuronal ensembles at single-cell resolution. These methods finally give us the means to address some of the most crucial questions in systems neuroscience: what are the dynamics of neural population activity? How do populations of neurons perform computations? What is the functional organization of neural ensembles?
While the wealth of new experimental data generated by these techniques provides exciting opportunities to test ideas about how neural ensembles operate, it also presents major challenges: multicell recordings necessarily yield data which is high-dimensional in nature. Understanding this kind of data requires powerful statistical techniques for capturing the structure of the neural population responses, as well as their relationship with external stimuli or behavioral observations. Furthermore, linking recorded neural population activity to the predictions of theoretical models of population coding has turned out not to be straightforward.
These challenges motivated us to organize a workshop at the 2009 Computational Neuroscience Meeting in Berlin to discuss these issues. In order to collect some of the recent progress in this field, and to foster discussion on the most important directions and most pressing questions, we issued a call for papers for this Research Topic. We asked authors to address the following four questions:
1. What classes of statistical methods are most useful for modeling population activity?
2. What are the main limitations of current approaches, and what can be done to overcome them?
3. How can statistical methods be used to empirically test existing models of (probabilistic) population coding?
4. What role can statistical methods play in formulating novel hypotheses about the principles of information processing in neural populations?
A total of 15 papers addressing questions related to these themes are now collected in this Research Topic. Three of these articles have resulted in “Focused reviews” in Frontiers in Neuroscience (Crumiller et al., 2011; Rosenbaum et al., 2011; Tchumatchenko et al., 2011), illustrating the great interest in the topic. Many of the articles are devoted to a better understanding of how correlations arise in neural circuits, and how they can be detected, modeled, and interpreted. For example, by modeling how pairwise correlations are transformed by spiking nonlinearities in simple neural circuits, Tchumatchenko et al. (2010) show that pairwise correlation coefficients have to be interpreted with care, since their magnitude can depend strongly on the temporal statistics of their input correlations. In a similar spirit, Rosenbaum et al. (2010) study how correlations can arise and accumulate in feedforward circuits as a result of pooling of correlated inputs.
Lyamzin et al. (2010) and Krumin et al. (2010) present methods for simulating correlated population activity and extend previous work to more general settings. The method of Lyamzin et al. (2010) allows one to generate synthetic spike trains which match commonly reported statistical properties, such as time-varying firing rates as well as signal and noise correlations. The Hawkes framework presented by Krumin et al. (2010) allows one to fit models of recurrent population activity to the correlation structure of experimental data. Louis et al. (2010) present a novel method for generating surrogate spike trains which can be useful when trying to assess the significance and timescale of correlations in neural spike trains. Finally, Pipa and Munk (2011) study spike synchronization in prefrontal cortex during working memory.
A number of studies are also devoted to advancing our methodological toolkit for analyzing various aspects of population activity (Gerwinn et al., 2010; Machens, 2010; Staude et al., 2010; Yu et al., 2010). For example, Gerwinn et al. (2010) explain how full probabilistic inference can be performed in the popular model class of generalized linear models (GLMs), and study the effect of using prior distributions on the parameters of the stimulus and coupling filters. Staude et al. (2010) extend a method for detecting higher-order correlations between neurons via population spike counts to non-stationary settings. Yu et al. (2010) describe a new technique for estimating the information rate of a population of neurons using frequency-domain methods. Machens (2010) introduces a novel extension of principal component analysis for separating the variability of a neural response into different sources.
Focusing less on the spike responses of neural populations but on aggregate signals of population activity, Boatman-Reich et al. (2010) and Hoerzer et al. (2010) describe methods for a quantitative analysis of field potential recordings. While Boatman-Reich et al. (2010) discuss a number of existing techniques in a unified framework and highlight the potential pitfalls associated with such approaches, Hoerzer et al. (2010) demonstrate how multivariate autoregressive models and the concept of Granger causality can be used to infer local functional connectivity in area V4 of behaving macaques.
A final group of studies is devoted to understanding experimental data in light of computational models (Galán et al., 2010; Pandarinath et al., 2010; Shteingart et al., 2010). Pandarinath et al. (2010) present a novel mechanism that may explain how neural networks in the retina switch from one state to another by a change in gap junction coupling, and conjecture that this mechanism might also be found in other neural circuits. Galán et al. (2010) present a model of how hypoxia may change the network structure in the respiratory networks in the brainstem, and analyze neural correlations in multielectrode recordings in light of this model. Finally, Shteingart et al. (2010) show that the spontaneous activation sequences they find in cultured networks cannot be explained by Zipf’s law, but rather require a wrestling model.
The papers of this Research Topic thus span a wide range of topics in the statistical modeling of multicell recordings. Together with other recent advances, they provide us with a useful toolkit to tackle the challenges presented by the vast amount of data collected with modern recording techniques. The impact of novel statistical methods on the field and their potential to generate scientific progress, however, depends critically on how readily they can be adopted and applied by laboratories and researchers working with experimental data. An important step toward this goal is to also publish computer code along with the articles (Barnes, 2010) as a successful implementation of advanced methods also relies on many details which are hard to communicate in the article itself. In this way it becomes much more likely that other researchers can actually use the methods, and unnecessary reimplementations can be avoided. Some of the papers in this Research Topic already follow this goal (Gerwinn et al., 2010; Louis et al., 2010; Lyamzin et al., 2010). We hope that this practice becomes more and more common in the future and encourage authors and editors of Research Topics to make as much code available as possible, ideally in a format that can be easily integrated with existing software sharing initiatives (Herz et al., 2008; Goldberg et al., 2009).
http://www.kyb.tuebingen.mpg.de
Research Group Bethge
http://www.frontiersin.org/Computational_Neuroscience/10.3389/fncom.2011.00035/full
10.3389/fncom.2011.00035
J Macke
P Berens
M Bethge
article
MackeOb2011
Common Input Explains Higher-Order Correlations and Entropy in a Simple Model of Neural Population Activity
Physical Review Letters
2011
5
106
20
1–4
Simultaneously recorded neurons exhibit correlations whose underlying causes are not known. Here, we use a population of threshold neurons receiving correlated inputs to model neural population recordings. We show analytically that small changes in second-order correlations can lead to large changes in higher-order redundancies, and that the resulting interactions have a strong impact on the entropy, sparsity, and statistical heat capacity of the population. Our findings for this simple model may explain some surprising effects recently observed in neural population recordings.
http://www.kyb.tuebingen.mpg.de
Research Group Bethge
http://prl.aps.org/pdf/PRL/v106/i20/e208102
10.1103/PhysRevLett.106.208102
208102
J H Macke
M Opper
M Bethge
article
6516
Gaussian process methods for estimating cortical maps
NeuroImage
2011
5
56
2
570–581
A striking feature of cortical organization is that the encoding of many stimulus features, for example orientation or direction selectivity, is arranged into topographic maps. Functional imaging methods such as optical imaging of intrinsic signals, voltage sensitive dye imaging or functional magnetic resonance imaging are important tools for studying the structure of cortical maps. As functional imaging measurements are usually noisy, statistical processing of the data is necessary to extract maps from the imaging data. Here we present a probabilistic model of functional imaging data based on Gaussian processes. In comparison to conventional approaches, our model yields superior estimates of cortical maps from smaller amounts of data. In addition, we obtain quantitative uncertainty estimates, i.e., error bars on properties of the estimated map. We use our probabilistic model to study the coding properties of the map and the role of noise correlations by decoding the stimulus from single trials of an imaging experiment.
http://www.kyb.tuebingen.mpg.de
Research Group Bethge
http://www.sciencedirect.com/science?_ob=MImg&_imagekey=B6WNP5032NNX13&_cdi=6968&_user=29041&_pii=S1053811910007007&_origin=&_coverDate=05%2F15%2F2011&_sk=999439997&view=c&wchp=dGLbVlzzSkWl&md5=17cff103ca4f9e756eee9e6711fca3e4&ie=/sdarticle.pdf
Biologische Kybernetik
Max-Planck-Gesellschaft
en
10.1016/j.neuroimage.2010.04.272
J H Macke
S Gerwinn
L W White
M Kaschube
M Bethge
article
7040
Reconstructing stimuli from the spike times of leaky integrate-and-fire neurons
Frontiers in Neuroscience
2011
2
5
1
1–16
Reconstructing stimuli from the spike trains of neurons is an important approach for understanding the neural code. One of the difficulties associated with this task is that signals which are varying continuously in time are encoded into sequences of discrete events or spikes. An important problem is to determine how much information about the continuously varying stimulus can be extracted from the time points at which spikes were observed, especially if these time points are subject to some sort of randomness. For the special case of spike trains generated by leaky integrate-and-fire neurons, noise can be introduced by allowing variations in the threshold every time a spike is released. A simple decoding algorithm previously derived for the noiseless case can be extended to the stochastic case, but turns out to be biased. Here, we review a solution to this problem, by presenting a simple yet efficient algorithm which greatly reduces the bias, and therefore leads to better decoding performance in the stochastic case.
http://www.kyb.tuebingen.mpg.de
Research Group Bethge
http://www.frontiersin.org/Neuroscience/10.3389/fnins.2011.00001/abstract
Biologische Kybernetik
Max-Planck-Gesellschaft
en
10.3389/fnins.2011.00001
19
S Gerwinn
J H Macke
M Bethge
article
LyamzinML2010
Modeling population spike trains with specified time-varying spike rates, trial-to-trial variability, and pairwise signal and noise correlations
Frontiers in Computational Neuroscience
2010
10
4
144
1–11
As multielectrode and imaging technology begin to provide us with simultaneous recordings of large neuronal populations, new methods for modeling such data must also be developed. Here, we present a model for the type of data commonly recorded in early sensory pathways: responses to repeated trials of a sensory stimulus in which each neuron has its own time-varying spike rate (as described by its PSTH) and the dependencies between cells are characterized by both signal and noise correlations. This model is an extension of previous attempts to model population spike trains designed to control only the total correlation between cells. In our model, the response of each cell is represented as a binary vector given by the dichotomized sum of a deterministic “signal” that is repeated on each trial and a Gaussian random “noise” that is different on each trial. This model allows the simulation of population spike trains with PSTHs, trial-to-trial variability, and pairwise correlations that match those measured experimentally. Furthermore, the model also allows the noise correlations in the spike trains to be manipulated independently of the signal correlations and single-cell properties. To demonstrate the utility of the model, we use it to simulate and manipulate experimental responses from the mammalian auditory and visual systems. We also present a general form of the model in which both the signal and noise are Gaussian random processes, allowing the mean spike rate, trial-to-trial variability, and pairwise signal and noise correlations to be specified independently. Together, these methods for modeling spike trains comprise a potentially powerful set of tools for both theorists and experimentalists studying population responses in sensory systems.
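The dichotomized-Gaussian construction described in this abstract can be sketched in a few lines (an illustration with invented parameters, not the authors' toolbox): each cell's binary response is obtained by thresholding a shared signal plus correlated Gaussian noise.

```python
import math
import random

def simulate_trial(signal, noise_corr, n_cells, rng):
    # One trial of binary responses: cell i fires iff signal + noise_i > 0,
    # where the noise is built from a shared and a private Gaussian term so
    # that any pair of cells has noise correlation `noise_corr`.
    shared = rng.gauss(0.0, 1.0)
    spikes = []
    for _ in range(n_cells):
        private = rng.gauss(0.0, 1.0)
        noise = math.sqrt(noise_corr) * shared + math.sqrt(1.0 - noise_corr) * private
        spikes.append(1 if signal + noise > 0 else 0)
    return spikes

rng = random.Random(0)
trial = simulate_trial(signal=0.5, noise_corr=0.3, n_cells=5, rng=rng)
print(trial)  # a list of five 0/1 responses
```

In this toy version, repeating a deterministic `signal` across trials induces signal correlations, while `noise_corr` sets the trial-to-trial noise correlation independently of the single-cell firing rates, which is the separation of knobs the abstract emphasizes.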
http://www.kyb.tuebingen.mpg.de
Research Group Bethge
http://www.frontiersin.org/Computational_Neuroscience/10.3389/fncom.2010.00144/abstract
10.3389/fncom.2010.00144
D R Lyamzin
J H Macke
N A Lesica
article
6515
Estimating predictive stimulus features from psychophysical data: The decision image technique applied to human faces
Journal of Vision
2010
5
10
5:22
1–24
One major challenge in the sensory sciences is to identify the stimulus features on which sensory systems base their computations, and which are predictive of a behavioral decision: they are a prerequisite for computational models of perception. We describe a technique (decision images) for extracting predictive stimulus features using logistic regression. A decision image not only defines a region of interest within a stimulus but is a quantitative template which defines a direction in stimulus space. Decision images thus enable the development of predictive models, as well as the generation of optimized stimuli for subsequent psychophysical investigations. Here we describe our method and apply it to data from a human face classification experiment. We show that decision images are able to predict human responses not only in terms of overall percent correct but also in terms of the probabilities with which individual faces are (mis)classified by individual observers. We show that the most predictive dimension for gender categorization is neither aligned with the axis defined by the two class means, nor with the first principal component of all faces, two hypotheses frequently entertained in the literature. Our method can be applied to a wide range of binary classification tasks in vision or other psychophysical contexts.
http://www.kyb.tuebingen.mpg.de
Research Group Bethge
http://www.journalofvision.org/content/10/5/22.full.pdf+html
Biologische Kybernetik
Max-Planck-Gesellschaft
en
10.1167/10.5.22
J H Macke
F A Wichmann
article
6502
Bayesian inference for generalized linear models for spiking neurons
Frontiers in Computational Neuroscience
2010
4
4
12
1–17
Generalized Linear Models (GLMs) are commonly used statistical methods for modelling the relationship between neural population activity and presented stimuli. When the dimension of the parameter space is large, strong regularization has to be used in order to fit GLMs to datasets of realistic size without overfitting. By imposing properly chosen priors over parameters, Bayesian inference provides an effective and principled approach for achieving regularization. Here we show how the posterior distribution over model parameters of GLMs can be approximated by a Gaussian using the Expectation Propagation algorithm. In this way, we obtain an estimate of the posterior mean and posterior covariance, allowing us to calculate Bayesian confidence intervals that characterize the uncertainty about the optimal solution. From the posterior we also obtain a different point estimate, namely the posterior mean as opposed to the commonly used maximum a posteriori estimate. We systematically compare the different inference techniques on simulated as well as on multielectrode recordings of retinal ganglion cells, and explore the effects of the chosen prior and the performance measure used. We find that good performance can be achieved by choosing a Laplace prior together with the posterior mean estimate.
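The central idea (a Gaussian approximation to a GLM posterior that yields Bayesian error bars on the weights) can be illustrated with a much simpler stand-in: the paper uses Expectation Propagation and a Laplace prior, whereas this sketch uses the Laplace approximation (Gaussian centred at the MAP, covariance from the inverse Hessian) with a Gaussian prior, on synthetic data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic logistic GLM: 5 weights, 500 observations (numbers are invented).
n, d = 500, 5
w_true = np.array([1.5, -1.0, 0.5, 0.0, 0.0])
X = rng.normal(size=(n, d))
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-X @ w_true))).astype(float)

prior_prec = 1.0        # precision of the N(0, 1) Gaussian prior on each weight
w = np.zeros(d)
for _ in range(50):     # Newton iterations to the MAP estimate
    mu = 1.0 / (1.0 + np.exp(-X @ w))
    grad = X.T @ (y - mu) - prior_prec * w
    H = -(X.T * (mu * (1 - mu))) @ X - prior_prec * np.eye(d)
    w -= np.linalg.solve(H, grad)

cov = np.linalg.inv(-H)          # Laplace posterior covariance
stderr = np.sqrt(np.diag(cov))   # Bayesian error bars on each weight
```

The posterior covariance is what makes confidence statements about individual weights possible, which is the point the abstract emphasizes.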
http://www.kyb.tuebingen.mpg.de
Research Group Bethge
http://frontiersin.org/neuroscience/computationalneuroscience/paper/10.3389/fncom.2010.00012/pdf/
Biologische Kybernetik
MaxPlanckGesellschaft
en
10.3389/fncom.2010.00012
sgerwinnSGerwinn
jakobJMacke
mbethgeMBethge
article
6102
Bayesian population decoding of spiking neurons
Frontiers in Computational Neuroscience
2009
10
3
21
1–14
The timing of action potentials in spiking neurons depends on the temporal dynamics of their inputs and contains information about temporal fluctuations in the stimulus. Leaky integrate-and-fire neurons constitute a popular class of encoding models, in which spike times depend directly on the temporal structure of the inputs. However, optimal decoding rules for these models have only been studied explicitly in the noiseless case. Here, we study decoding rules for probabilistic inference of a continuous stimulus from the spike times of a population of leaky integrate-and-fire neurons with threshold noise. We derive three algorithms for approximating the posterior distribution over stimuli as a function of the observed spike trains. In addition to a reconstruction of the stimulus we thus obtain an estimate of the uncertainty as well. Furthermore, we derive a 'spike-by-spike' online decoding scheme that recursively updates the posterior with the arrival of each new spike. We use these decoding rules to reconstruct time-varying stimuli represented by a Gaussian process from spike trains of single neurons as well as neural populations.
http://www.kyb.tuebingen.mpg.de
Research Group Bethge
http://www.frontiersin.org/computationalneuroscience/paper/10.3389/neuro.10/021.2009/pdf/
Biologische Kybernetik
MaxPlanckGesellschaft
en
10.3389/neuro.10.021.2009
sgerwinnSGerwinn
jakobJHMacke
mbethgeMBethge
article
5157
Generating Spike Trains with Specified Correlation Coefficients
Neural Computation
2009
2
21
2
397–423
Spike trains recorded from populations of neurons can exhibit substantial pairwise correlations between neurons and rich temporal structure. Thus, for the realistic simulation and analysis of neural systems, it is essential to have efficient methods for generating artificial spike trains with specified correlation structure. Here we show how correlated binary spike trains can be simulated by means of a latent multivariate Gaussian model. Sampling from the model is computationally very efficient and, in particular, feasible even for large populations of neurons. The entropy of the model is close to the theoretical maximum for a wide range of parameters. In addition, this framework naturally extends to correlations over time and offers an elegant way to model correlated neural spike counts with arbitrary marginal distributions.
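The latent-Gaussian sampling scheme summarized above can be sketched in a few lines: draw correlated multivariate normal samples and threshold them to obtain binary spikes (thresholds, latent correlation and population size below are invented; this is a toy version, not the paper's toolbox):

```python
import numpy as np

rng = np.random.default_rng(2)

# Latent mean sets each neuron's firing probability (threshold at 0 gives
# rate 0.5); the latent covariance induces pairwise spike correlations.
n_neurons, n_samples = 3, 200_000
latent_corr = 0.5
mean = np.zeros(n_neurons)
cov = np.full((n_neurons, n_neurons), latent_corr)
np.fill_diagonal(cov, 1.0)

z = rng.multivariate_normal(mean, cov, size=n_samples)
spikes = (z > 0).astype(int)               # binarize the latent draws

rates = spikes.mean(axis=0)                # ~0.5 per neuron
emp_corr = np.corrcoef(spikes.T)[0, 1]     # positive, smaller than latent_corr
```

Note that thresholding shrinks the correlation: for zero thresholds, a latent correlation of 0.5 yields a binary correlation of (2/π)·arcsin(0.5) ≈ 0.33, which is why matching a target spike correlation requires solving for the latent correlation numerically.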
http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/macke2009_5157[0].pdf
http://www.kyb.tuebingen.mpg.de
Department Schölkopf
Research Group Bethge
http://www.mitpressjournals.org/doi/pdf/10.1162/neco.2008.02-08-713
Biologische Kybernetik
MaxPlanckGesellschaft
en
10.1162/neco.2008.02-08-713
jakobJHMacke
berensPBerens
aeckerASEcker
atoliasASTolias
mbethgeMBethge
article
4877
Comparison of Pattern Recognition Methods in Classifying High-resolution BOLD Signals Obtained at High Magnetic Field in Monkeys
Magnetic Resonance Imaging
2008
9
26
7
1007–1014
Pattern recognition methods have shown that functional magnetic resonance imaging (fMRI) data can reveal significant information about brain activity. For example, in the debate of how object categories are represented in the brain, multivariate analysis has been used to provide evidence of a distributed encoding scheme [Science 293:5539 (2001) 2425–2430]. Many follow-up studies have employed different methods to analyze human fMRI data with varying degrees of success [Nature Reviews 7:7 (2006) 523–534]. In this study, we compare four popular pattern recognition methods: correlation analysis, support-vector machines (SVM), linear discriminant analysis (LDA) and Gaussian naïve Bayes (GNB), using data collected at high field (7 Tesla) with higher resolution than usual fMRI studies. We investigate prediction performance on single trials and for averages across varying numbers of stimulus presentations. The performance of the various algorithms depends on the nature of the brain activity being categorized: for several tasks, many of the methods work well, whereas for others, no method performs above chance level. An important factor in overall classification performance is careful preprocessing of the data, including dimensionality reduction, voxel selection and outlier elimination.
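Two of the four classifiers named in the abstract (a correlation / nearest-class-mean rule and Gaussian naive Bayes) are simple enough to sketch on synthetic two-class "voxel" patterns; dimensions, class separation and noise level below are invented, not taken from the study:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic two-class data: a shared base pattern plus a class difference.
n_per_class, d = 200, 50
base = rng.normal(size=d)
delta = rng.normal(0.0, 0.5, size=d)
X = np.vstack([rng.normal(base, 1.0, (n_per_class, d)),
               rng.normal(base + delta, 1.0, (n_per_class, d))])
y = np.repeat([0, 1], n_per_class)

idx = rng.permutation(len(y))          # train/test split
tr, te = idx[:300], idx[300:]

means = np.array([X[tr][y[tr] == c].mean(axis=0) for c in (0, 1)])

def corr_predict(x):                   # nearest class mean by correlation
    r = [np.corrcoef(x, m)[0, 1] for m in means]
    return int(r[1] > r[0])

var = np.array([X[tr][y[tr] == c].var(axis=0) + 1e-6 for c in (0, 1)])

def gnb_predict(x):                    # Gaussian naive Bayes log-likelihood
    ll = [-0.5 * np.sum((x - means[c]) ** 2 / var[c] + np.log(var[c]))
          for c in (0, 1)]
    return int(ll[1] > ll[0])

acc_corr = np.mean([corr_predict(x) == c for x, c in zip(X[te], y[te])])
acc_gnb = np.mean([gnb_predict(x) == c for x, c in zip(X[te], y[te])])
```

On well-separated synthetic data both rules perform far above chance; the study's point is that on real fMRI data the ranking depends on the task and on preprocessing.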
http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/sdarticle_4877[0].pdf
Department Schölkopf
Department Logothetis
Research Group Bethge
http://www.sciencedirect.com/science?_ob=MImg&amp;_imagekey=B6T9D4T5BWJY57&amp;_cdi=5112&amp;_user=29041&amp;_orig=browse&amp;_coverDate=09%2F30%2F2008&amp;_sk=999739992&amp;view=c&amp;wchp=dGLbVzzzSkWb&amp;md5=25e9
Biologische Kybernetik
MaxPlanckGesellschaft
en
http://dx.doi.org/10.1016/j.mri.2008.02.016
shihpiSPKu
arthurAGretton
jakobJMacke
nikosNKLogothetis
article
4667
Contour-propagation Algorithms for Semi-automated Reconstruction of Neural Processes
Journal of Neuroscience Methods
2008
1
167
2
349–357
A new technique, Serial Block Face Scanning Electron Microscopy (SBFSEM), allows for automatic sectioning and imaging of biological tissue with a scanning electron microscope. Image stacks generated with this technology have a resolution sufficient to distinguish different cellular compartments, including synaptic structures, which should make it possible to obtain detailed anatomical knowledge of complete neuronal circuits. Such an image stack contains several thousands of images and is recorded with a minimal voxel size of 10–20 nm in the x- and y- and 30 nm in the z-direction. Consequently, a tissue block of 1 mm³ (the approximate volume of the Calliphora vicina brain) will produce several hundred terabytes of data. Therefore, highly automated 3D reconstruction algorithms are needed. As a first step in this direction we have developed semi-automated segmentation algorithms for a precise contour tracing of cell membranes. These algorithms were embedded into an easy-to-operate user interface, which allows direct 3D observation of the extracted objects during the segmentation of image stacks. Compared to purely manual tracing, processing time is greatly accelerated.
http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/Macke_Maack_07_JNeuMeth_Segmentation_[0].pdf
Department Schölkopf
Research Group Bethge
http://www.sciencedirect.com/science?_ob=MImg&amp;amp;_imagekey=B6T044PCXG9T11&amp;amp;_cdi=4852&amp;amp;_user=29041&amp;amp;_orig=browse&amp;amp;_coverDate=08%2F10%2F2007&amp;amp;_sk=999999999&amp;amp;view=c&amp;amp;wch
Biologische Kybernetik
MaxPlanckGesellschaft
en
10.1016/j.jneumeth.2007.07.021
jakobJHMacke
NMaack
RGupta
WDenk
bsBSchölkopf
ABorst
inproceedings
ParkBM2015
Unlocking neural population nonstationarity using a hierarchical dynamics model
2016
145–153
Neural population activity often exhibits rich variability. This variability is thought to arise from single-neuron stochasticity, neural dynamics on short timescales, as well as from modulations of neural firing properties on long timescales, often referred to as nonstationarity. To better understand the nature of covariability in neural circuits and their impact on cortical information processing, we introduce a hierarchical dynamics model that is able to capture inter-trial modulations in firing rates, as well as neural population dynamics. We derive an algorithm for Bayesian Laplace propagation for fast posterior inference, and demonstrate that our model provides a better account of the structure of neural firing than existing stationary dynamics models, when applied to neural population recordings from primary visual cortex.
http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/2015/NIPS2015Park.pdf
http://www.kyb.tuebingen.mpg.de
Research Group Macke
http://papers.nips.cc/paper/5790unlockingneuralpopulationnonstationaritiesusinghierarchicaldynamicsmodels
Cortes, C., N.D. Lawrence, D.D. Lee, M. Sugiyama, R. Garnett
Curran
Red Hook, NY, USA
Advances in Neural Information Processing Systems 28
Montréal, Canada
Twenty-Ninth Annual Conference on Neural Information Processing Systems (NIPS 2015)
mparkMPark
GBohner
jakobJMacke
inproceedings
PutzkyFBM2014
A Bayesian model for identifying hierarchically organised states in neural population activity
2015
3095–3103
Neural population activity in cortical circuits is not solely driven by external inputs, but is also modulated by endogenous states. These cortical states vary on multiple timescales and also across areas and layers of the neocortex. To understand information processing in cortical circuits, we need to understand the statistical structure of internal states and their interaction with sensory inputs. Here, we present a statistical model for extracting hierarchically organized neural population states from multichannel recordings of neural spiking activity. We model population states using a hidden Markov decision tree with state-dependent tuning parameters and a generalized linear observation model. Using variational Bayesian inference, we estimate the posterior distribution over parameters from population recordings of neural spike trains. On simulated data, we show that we can identify the underlying sequence of population states over time and reconstruct the ground truth parameters. Using extracellular population recordings from visual cortex, we find that a model with two levels of population states outperforms a generalized linear model which does not include state-dependence, as well as models which include only a binary state. Finally, modelling of state-dependence via our model also improves the accuracy with which sensory stimuli can be decoded from the population response.
http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/2014/NIPS2014PutzkyPaper.pdf
http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/2014/NIPS2014PutzkySuppl.pdf
http://www.kyb.tuebingen.mpg.de
Research Group Macke
Department Logothetis
http://papers.nips.cc/paper/5338abayesianmodelforidentifyinghierarchicallyorganisedstatesinneuralpopulationactivity
Ghahramani, Z. , M. Welling, C. Cortes, N. D. Lawrence, K. Q. Weinberger
Curran
Red Hook, NY, USA
Advances in Neural Information Processing Systems 27
Montréal, Quebec, Canada
Twenty-Eighth Annual Conference on Neural Information Processing Systems (NIPS 2014)
9781510800410
pputzkyPPutzky
ffranzenFFranzen
gbassettoGBassetto
jakobJHMacke
inproceedings
ArcherKPM2014
Low-dimensional models of neural population activity in sensory cortical circuits
2015
343–351
Neural responses in visual cortex are influenced by visual stimuli and by ongoing spiking activity in local circuits. An important challenge in computational neuroscience is to develop models that can account for both of these features in large multi-neuron recordings and to reveal how stimulus representations interact with and depend on cortical dynamics. Here we introduce a statistical model of neural population activity that integrates a nonlinear receptive field model with a latent dynamical model of ongoing cortical activity. This model captures the temporal dynamics, effective network connectivity in large population recordings, and correlations due to shared stimulus drive as well as common noise. Moreover, because the nonlinear stimulus inputs are mixed by the ongoing dynamics, the model can account for a relatively large number of idiosyncratic receptive field shapes with a small number of nonlinear inputs to a low-dimensional latent dynamical model. We introduce a fast estimation method using online expectation maximization with Laplace approximations. Inference scales linearly in both population size and recording duration. We apply this model to multichannel recordings from primary visual cortex and show that it accounts for a large number of individual neural receptive fields using a small number of nonlinear inputs and a low-dimensional dynamical model.
http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/2014/NIPS2014Archer.pdf
http://www.kyb.tuebingen.mpg.de
Research Group Macke
http://papers.nips.cc/paper/5263lowdimensionalmodelsofneuralpopulationactivityinsensorycorticalcircuits
Ghahramani, Z. , M. Welling, C. Cortes, N. D. Lawrence, K. Q. Weinberger
Curran
Red Hook, NY, USA
Advances in Neural Information Processing Systems 27
Montréal, Quebec, Canada
Twenty-Eighth Annual Conference on Neural Information Processing Systems (NIPS 2014)
9781510800410
earcherEWArcher
UKoster
JWPillow
jakobJHMacke
inproceedings
TuragaBPDPHM2013
Inferring neural population dynamics from multiple partial recordings of the same neural circuit
2014
539–547
Simultaneous recordings of the activity of large neural populations are extremely valuable as they can be used to infer the dynamics and interactions of neurons in a local circuit, shedding light on the computations performed. It is now possible to measure the activity of hundreds of neurons using 2-photon calcium imaging. However, many computations are thought to involve circuits consisting of thousands of neurons, such as cortical barrels in rodent somatosensory cortex. Here we contribute a statistical method for "stitching" together sequentially imaged sets of neurons into one model by phrasing the problem as fitting a latent dynamical system with missing observations. This method allows us to substantially expand the population sizes for which population dynamics can be characterized, beyond the number of simultaneously imaged neurons. In particular, we demonstrate using recordings in mouse somatosensory cortex that this method makes it possible to predict noise correlations between non-simultaneously recorded neuron pairs.
http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/2013/NIPS2013Turaga.pdf
http://www.kyb.tuebingen.mpg.de
Research Group Macke
http://nips.cc/Conferences/2013/
Burges, C.J.C. , L. Bottou, M. Welling, Z. Ghahramani, K.Q. Weinberger
Curran
Red Hook, NY, USA
Advances in Neural Information Processing Systems 26
Stateline, NV, USA
Twenty-Seventh Annual Conference on Neural Information Processing Systems (NIPS 2013)
9781632660244
SCTuraga
LBuesing
AMPacker
HDalgleish
NPettit
MHausser
jakobJHMacke
inproceedings
BusingMS2013
Spectral learning of linear dynamics from generalised-linear observations with application to neural population data
2013
4
1691–1699
Latent linear dynamical systems with generalised-linear observation models arise in a variety of applications, for example when modelling the spiking activity of populations of neurons. Here, we show how spectral learning methods for linear systems with Gaussian observations (usually called subspace identification in this context) can be extended to estimate the parameters of dynamical system models observed through non-Gaussian noise models. We use this approach to obtain estimates of parameters for a dynamical model of neural population data, where the observed spike counts are Poisson-distributed with log-rates determined by the latent dynamical process, possibly driven by external inputs. We show that the extended system identification algorithm is consistent and accurately recovers the correct parameters on large simulated data sets with much smaller computational cost than approximate expectation-maximisation (EM) due to the non-iterative nature of subspace identification. Even on smaller data sets, it provides an effective initialization for EM, leading to more robust performance and faster convergence. These benefits are shown to extend to real neural data.
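One ingredient of such extensions, shown here as a self-contained sketch rather than the paper's algorithm, is converting observed Poisson moments into moments of the underlying Gaussian: if counts y are Poisson with log-rate z ~ N(m, s), then E[y] = exp(m + s/2) and E[y(y−1)] = exp(2m + 2s), so the latent moments follow in closed form (values below are invented):

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulate Poisson counts with a log-normal rate, then recover the latent
# Gaussian mean and variance from the observed moments:
#   s = log(E[y(y-1)] / E[y]^2),   m = log(E[y]) - s/2.
m_true, s_true = 0.5, 0.3
z = rng.normal(m_true, np.sqrt(s_true), size=1_000_000)
y = rng.poisson(np.exp(z))

m1 = y.mean()                     # empirical E[y]
m2f = (y * (y - 1)).mean()        # empirical second factorial moment
s_hat = np.log(m2f / m1 ** 2)     # recovered latent variance
m_hat = np.log(m1) - s_hat / 2    # recovered latent mean
```

Applied to time-lagged covariances of spike counts, the same transformation lets Gaussian subspace-identification machinery operate on the inferred latent moments.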
http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/2013/NIPS2012Buesing.pdf
http://www.kyb.tuebingen.mpg.de
http://books.nips.cc/nips25.html
Bartlett, P. , F.C.N. Pereira, L. Bottou, C.J.C. Burges, K.Q. Weinberger
Curran
Red Hook, NY, USA
Advances in Neural Information Processing Systems 25
Lake Tahoe, NV, USA
Twenty-Sixth Annual Conference on Neural Information Processing Systems (NIPS 2012)
9781627480031
LBuesing
jakobJHMacke
MSahani
inproceedings
MackeBCYSS2012
Empirical models of spiking in neural populations
2012
1
1350–1358
Neurons in the neocortex code and compute as part of a locally interconnected population. Large-scale multielectrode recording makes it possible to access these population processes empirically by fitting statistical models to unaveraged data. What statistical structure best describes the concurrent spiking of cells within a local network? We argue that in the cortex, where firing exhibits extensive correlations in both time and space and where a typical sample of neurons still reflects only a very small fraction of the local population, the most appropriate model captures shared variability by a low-dimensional latent process evolving with smooth dynamics, rather than by putative direct coupling. We test this claim by comparing a latent dynamical model with realistic spiking observations to coupled generalised linear spike-response models (GLMs) using cortical recordings. We find that the latent dynamical approach outperforms the GLM in terms of goodness-of-fit, and reproduces the temporal correlations in the data more accurately. We also compare models whose observation models are derived from either Gaussian or point-process models, finding that the non-Gaussian model provides slightly better goodness-of-fit and more realistic population spike counts.
http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/2012/NIPS2011Macke.pdf
http://www.kyb.tuebingen.mpg.de
http://nips.cc/Conferences/2011/
Shawe-Taylor, J., R.S. Zemel, P. Bartlett, F. Pereira, K.Q. Weinberger
Curran
Red Hook, NY, USA
Advances in Neural Information Processing Systems 24
Granada, Spain
Twenty-Fifth Annual Conference on Neural Information Processing Systems (NIPS 2011)
9781618395993
jakobJHMacke
LBüsing
JPCunningham
BMYu
KVShenoy
MSahani
inproceedings
MackeML2012
How biased are maximum entropy models?
2012
1
2034–2042
Maximum entropy models have become popular statistical models in neuroscience and other areas in biology, and can be useful tools for obtaining estimates of mutual information in biological systems. However, maximum entropy models fit to small data sets can be subject to sampling bias; i.e. the true entropy of the data can be severely underestimated. Here we study the sampling properties of estimates of the entropy obtained from maximum entropy models. We show that if the data is generated by a distribution that lies in the model class, the bias is equal to the number of parameters divided by twice the number of observations. However, in practice, the true distribution is usually outside the model class, and we show here that this misspecification can lead to much larger bias. We provide a perturbative approximation of the maximally expected bias when the true model is outside the model class, and we illustrate our results using numerical simulations of an Ising model; i.e. the second-order maximum entropy distribution on binary data.
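The well-specified case of the bias rule stated above (bias = number of parameters / twice the number of observations) is easy to verify numerically with a toy model; sizes below are arbitrary choices, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(5)

# The first-order maximum-entropy model on d independent fair bits has
# k = d parameters, so fitting it to N samples should underestimate the
# true entropy by about d/(2N) nats on average.
d, N, n_repeats = 10, 50, 4000
H_true = d * np.log(2.0)

def fitted_entropy(samples):
    # Entropy of the independent (first-order max-ent) model at the fitted rates.
    p = samples.mean(axis=0).clip(1e-9, 1 - 1e-9)
    return -np.sum(p * np.log(p) + (1 - p) * np.log(1 - p))

bias = np.mean([H_true - fitted_entropy(rng.random((N, d)) < 0.5)
                for _ in range(n_repeats)])
predicted = d / (2.0 * N)   # = 0.1 nats
```

The simulated bias matches the k/(2N) prediction closely; the paper's point is that misspecification can make the true bias much larger than this baseline.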
http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/2012/NIPS2011Macke2.pdf
http://www.kyb.tuebingen.mpg.de
http://nips.cc/Conferences/2011/
Shawe-Taylor, J., R.S. Zemel, P. Bartlett, F. Pereira, K.Q. Weinberger
Curran
Red Hook, NY, USA
Advances in Neural Information Processing Systems 24
Granada, Spain
Twenty-Fifth Annual Conference on Neural Information Processing Systems (NIPS 2011)
9781618395993
jakobJHMacke
IMurray
PLatham
inproceedings
6121
Bayesian estimation of orientation preference maps
2010
4
1195–1203
Imaging techniques such as optical imaging of intrinsic signals, 2-photon calcium imaging and voltage sensitive dye imaging can be used to measure the functional organization of visual cortex across different spatial and temporal scales. Here, we present Bayesian methods based on Gaussian processes for extracting topographic maps from functional imaging data. In particular, we focus on the estimation of orientation preference maps (OPMs) from intrinsic signal imaging data. We model the underlying map as a bivariate Gaussian process, with a prior covariance function that reflects known properties of OPMs, and a noise covariance adjusted to the data. The posterior mean can be interpreted as an optimally smoothed estimate of the map, and can be used for model-based interpolations of the map from sparse measurements. By sampling from the posterior distribution, we can get error bars on statistical properties such as preferred orientations, pinwheel locations or pinwheel counts. Finally, the use of an explicit probabilistic model facilitates interpretation of parameters and quantitative model comparisons. We demonstrate our model both on simulated data and on intrinsic signal imaging data from ferret visual cortex.
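The role of the Gaussian-process posterior mean as an "optimally smoothed estimate" can be shown in a minimal 1-D analogue (kernel, lengthscale, noise level and the test function are all arbitrary choices for illustration, not the paper's model):

```python
import numpy as np

rng = np.random.default_rng(6)

# GP regression: posterior mean smooths noisy measurements of a hidden
# function, and the posterior covariance yields error bars.
def sq_exp(a, b, ell=0.2):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

x = np.linspace(0.0, 1.0, 40)
f_true = np.sin(2 * np.pi * x)                 # hidden "map"
y = f_true + 0.1 * rng.normal(size=x.size)     # noisy measurements

noise_var = 0.1 ** 2
K = sq_exp(x, x)
Ky = K + noise_var * np.eye(x.size)
post_mean = K @ np.linalg.solve(Ky, y)         # smoothed estimate at x
post_cov = K - K @ np.linalg.solve(Ky, K)      # posterior uncertainty
```

Sampling from the Gaussian with this mean and covariance is what gives error bars on derived quantities, analogous to the pinwheel statistics in the abstract.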
http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/NIPS2009Macke_6121[0].pdf
http://www.kyb.tuebingen.mpg.de
Research Group Bethge
http://nips.cc/Conferences/2009/
Bengio, Y. , D. Schuurmans, J. Lafferty, C. Williams, A. Culotta
Curran
Red Hook, NY, USA
Advances in Neural Information Processing Systems 22
Biologische Kybernetik
MaxPlanckGesellschaft
Vancouver, BC, Canada
23rd Annual Conference on Neural Information Processing Systems (NIPS 2009)
en
9781615679119
jakobJHMacke
sgerwinnSGerwinn
MKaschube
LEWhite
mbethgeMBethge
inproceedings
4728
Bayesian Inference for Spiking Neuron Models with a Sparsity Prior
2008
9
529–536
Generalized linear models are the most commonly used tools to describe the stimulus selectivity of sensory neurons. Here we present a Bayesian treatment of such models. Using the expectation propagation algorithm, we are able to approximate the full posterior distribution over all weights. In addition, we use a Laplacian prior to favor sparse solutions. Therefore, stimulus features that do not critically influence neural activity will be assigned zero weights and thus be effectively excluded by the model. This feature selection mechanism facilitates both the interpretation of the neuron model as well as its predictive abilities. The posterior distribution can be used to obtain confidence intervals which makes it possible to assess the statistical significance of the solution. In neural data analysis, the available amount of experimental measurements is often limited whereas the parameter space is large. In such a situation, both regularization by a sparsity prior and uncertainty estimates for the model parameters are essential.
We apply our method to multielectrode recordings of retinal ganglion cells and use our uncertainty estimate to test the statistical significance of functional couplings between neurons. Furthermore, we used the sparsity of the Laplace prior to select those filters from a spike-triggered covariance analysis that are most informative about the neural response.
http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/BayesLNP_4728[0].pdf
http://www.kyb.tuebingen.mpg.de
Department Schölkopf
Research Group Bethge
http://nips.cc/Conferences/2007/
Platt, J. C., D. Koller, Y. Singer, S. Roweis
Curran
Red Hook, NY, USA
Advances in neural information processing systems 20
Biologische Kybernetik
MaxPlanckGesellschaft
Vancouver, BC, Canada
Twenty-First Annual Conference on Neural Information Processing Systems (NIPS 2007)
en
9781605603520
sgerwinnSGerwinn
jakobJMacke
seegerMSeeger
mbethgeMBethge
inproceedings
4738
Receptive Fields without Spike-Triggering
2008
9
969–976
Stimulus selectivity of sensory neurons is often characterized by estimating their receptive field properties such as orientation selectivity. Receptive fields are usually derived from the mean (or covariance) of the spike-triggered stimulus ensemble. This approach treats each spike as an independent message but does not take into account that information might be conveyed through patterns of neural activity that are distributed across space or time. Can we find a concise description for the processing of a whole population of neurons analogous to the receptive field for single neurons? Here, we present a generalization of the linear receptive field which is not bound to be triggered on individual spikes but can be meaningfully linked to distributed response patterns. More precisely, we seek to identify those stimulus features and the corresponding patterns of neural activity that are most reliably coupled. We use an extension of reverse-correlation methods based on canonical correlation analysis. The resulting population receptive fields span the subspace of stimuli that is most informative about the population response. We evaluate our approach using both neuronal models and multielectrode recordings from rabbit retinal ganglion cells. We show how the model can be extended to capture nonlinear stimulus-response relationships using kernel canonical correlation analysis, which makes it possible to test different coding mechanisms. Our technique can also be used to calculate receptive fields from multidimensional neural measurements such as those obtained from dynamic imaging methods.
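The canonical-correlation step described above can be sketched with plain linear algebra on synthetic data (one shared latent variable drives a stimulus direction and a response pattern; all sizes are invented, and this is a generic CCA sketch rather than the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic stimulus/response data coupled through one shared latent variable.
n, d_stim, d_resp = 5000, 8, 6
shared = rng.normal(size=n)
stim = np.outer(shared, rng.normal(size=d_stim)) + rng.normal(size=(n, d_stim))
resp = np.outer(shared, rng.normal(size=d_resp)) + rng.normal(size=(n, d_resp))

def orthonormal_basis(X):
    # Left singular vectors of the centred data act as a whitened basis.
    U, _, _ = np.linalg.svd(X - X.mean(axis=0), full_matrices=False)
    return U

# Canonical correlations = singular values of the product of whitened bases;
# the leading pair of directions is the most reliably coupled one.
cc = np.linalg.svd(orthonormal_basis(stim).T @ orthonormal_basis(resp),
                   compute_uv=False)
```

With a single shared latent variable, only the first canonical correlation is large; the remaining ones reflect sampling noise, which is how the "most reliably coupled" stimulus-response pair is identified.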
http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/NIPS2007Macke_4738[0].pdf
http://www.kyb.tuebingen.mpg.de
Department Schölkopf
Research Group Bethge
http://nips.cc/Conferences/2007/
Platt, J. C., D. Koller, Y. Singer, S. Roweis
Curran
Red Hook, NY, USA
Advances in neural information processing systems 20
Biologische Kybernetik
MaxPlanckGesellschaft
Vancouver, BC, Canada
Twenty-First Annual Conference on Neural Information Processing Systems (NIPS 2007)
en
9781605603520
jakobJHMacke
gzeckGZeck
mbethgeMBethge
inproceedings
4266
Inducing Metric Violations in Human Similarity Judgements
2007
9
777–784
Attempting to model human categorization and similarity judgements is both a very interesting but also an exceedingly difficult challenge. Some of the difficulty arises because of conflicting evidence as to whether human categorization and similarity judgements should or should not be modelled as operating on a mental representation that is essentially metric. Intuitively, this has a strong appeal as it would allow (dis)similarity to be represented geometrically as distance in some internal space. Here we show how a single stimulus, carefully constructed in a psychophysical experiment, introduces l2 violations in what used to be an internal similarity space that could be adequately modelled as Euclidean. We term this one influential data point a conflictual judgement. We present an algorithm of how to analyse such data and how to identify the crucial point. Thus there may not be a strict dichotomy between either a metric or a non-metric internal space but rather degrees to which potentially large subsets of stimuli are represented metrically, with a small subset causing a global violation of metricity.
http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/NIPS2006Laub_4266[0].pdf
http://www.kyb.tuebingen.mpg.de
Department Schölkopf
Research Group Bethge
http://nips.cc/Conferences/2006/
Schölkopf, B. , J. Platt, T. Hofmann
MIT Press
Cambridge, MA, USA
Advances in Neural Information Processing Systems 19
Biologische Kybernetik
MaxPlanckGesellschaft
Vancouver, BC, Canada
Twentieth Annual Conference on Neural Information Processing Systems (NIPS 2006)
en
0262195682
JLaub
jakobJHMacke
klausKRMüller
felixFAWichmann
inproceedings
4305
Unsupervised learning of a steerable basis for invariant image representations
2007
2
1–12
There are two aspects to unsupervised learning of invariant representations of images: First, we can reduce the dimensionality of the representation by finding an optimal trade-off between temporal stability and informativeness. We show that the answer to this optimization problem is generally not unique so that there is still considerable freedom in choosing a suitable basis. Which of the many optimal representations should be selected? Here, we focus on this second aspect, and seek to find representations that are invariant under geometrical transformations occurring in sequences of natural images. We utilize ideas of steerability and Lie groups, which have been developed in the context of filter design. In particular, we show how an antisymmetric version of canonical correlation analysis can be used to learn a full-rank image basis which is steerable with respect to rotations. We provide a geometric interpretation of this algorithm by showing that it finds the two-dimensional eigensubspaces of the average bivector. For data which exhibits a variety of transformations, we develop a bivector clustering algorithm, which we use to learn a basis of generalized quadrature pairs (i.e. complex cells) from sequences of natural images.
http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/SPIE2007Bethge_4305[0].pdf
http://www.kyb.tuebingen.mpg.de
Department Schölkopf
Research Group Bethge
http://www.ece.northwestern.edu/~pappas/hvei/past/6806.html
Rogowitz, B. E.
SPIE
Bellingham, WA, USA
Proceedings of the SPIE ; 6492
Human Vision and Electronic Imaging XII
Biologische Kybernetik
MaxPlanckGesellschaft
San Jose, CA, USA
SPIE Human Vision and Electronic Imaging Conference 2007
en
9780819466051
10.1117/12.711119
mbethgeMBethge
sgerwinnSGerwinn
jakobJHMacke
inbook
Macke2014
Electrophysiology Analysis, Bayesian
2015
1078–1082
Bayesian analysis of electrophysiological data refers to the statistical processing of data obtained in electrophysiological experiments (i.e., recordings of action potentials or voltage measurements with electrodes or imaging devices) using methods from Bayesian statistics. Bayesian statistics is a framework for describing and modelling empirical data using the mathematical language of probability to model uncertainty. It provides a principled and flexible framework for combining empirical observations with prior knowledge and for quantifying uncertainty. These features are especially useful for analysis questions in which the dataset sizes are small in comparison to the complexity of the model, which is often the case in neurophysiological data analysis.
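The combination of prior knowledge with data described above can be shown in the smallest possible example: a conjugate Beta-binomial update of a neuron's response probability (the prior pseudo-counts and the spike counts below are invented for illustration):

```python
# Beta prior on a response probability p, updated with a binomial spike count.
a_prior, b_prior = 2.0, 2.0      # Beta(2, 2): weak prior belief that p is near 0.5
spikes, trials = 14, 20          # hypothetical observations

a_post = a_prior + spikes                 # conjugate update: add successes ...
b_post = b_prior + (trials - spikes)      # ... and failures to the pseudo-counts

post_mean = a_post / (a_post + b_post)    # point estimate shrunk toward the prior
post_var = (a_post * b_post) / ((a_post + b_post) ** 2 * (a_post + b_post + 1))
```

The posterior mean (16/24 ≈ 0.67) lies between the raw rate (0.7) and the prior mean (0.5), and the posterior variance quantifies the remaining uncertainty, which is the core feature emphasized in the entry.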
http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/2014/Bayesian_Neurophysiology.pdf
http://www.kyb.tuebingen.mpg.de
http://www.kyb.tuebingen.mpg.de
Research Group Macke
http://link.springer.com/content/pdf/10.1007%2F9781461473206_4481.pdf
Jaeger, D., R. Jung
Springer
New York, NY, USA
Encyclopedia of Computational Neuroscience
9781461466741
10.1007/9781461473206_4481
jakobJHMacke
inbook
MackeBS2015
Estimating State and Parameters in State Space Models of Spike Trains
2015
137–159
Neural computations at all scales of evolutionary and behavioural complexity are carried out by recurrently connected networks of neurons that communicate with each other, with neurons elsewhere in the brain, and with muscles through the firing of action potentials or “spikes.” To understand how nervous tissue computes, it is therefore necessary to understand how the spiking of neurons is shaped both by inputs to the network and by the recurrent action of existing network activity. Whereas most historical spike data were collected one neuron at a time, new techniques including silicon multi-electrode array recording and scanning 2-photon, light-sheet or light-field fluorescence calcium imaging increasingly make it possible to record spikes from dozens, hundreds and potentially thousands of individual neurons simultaneously. These new data offer unprecedented empirical access to network computation, promising breakthroughs both in our understanding of neural coding and computation (Stevenson & Kording 2011), and our ability to build prosthetic neural interfaces (Santhanam et al. 2006). Fulfillment of this promise will require powerful methods for data modeling and analysis, able to capture the structure of statistical dependence of network activity across neurons and time.
Probabilistic latent state space models (SSMs) are particularly well-suited to this task. Neural activity often appears stochastic, in that repeated trials under the same controlled experimental conditions can evoke quite different patterns of firing. Some part of this variation may reflect differences in the way the computation unfolds on each trial. Another part might reflect noisy creation and transmission of neural signals. Yet more may come from chaotic amplification of small perturbations. As computational signals are thought to be distributed across the population (in a “population code”), variation in the computation may be distinguished by its common impact on different neurons and the systematic evolution of these common effects in time.
An SSM is able to capture such structured variation through the evolution of its latent state trajectory. This latent state provides a summary description of all factors modulating neural activity that are not observed directly. These factors could include processes such as arousal, attention, cortical state (Harris & Thiele 2011) or behavioural states of the animal (Niell & Stryker 2010; Maimon 2011).
http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/2015/Macke_Busing_Sahani_DRAFT.pdf
http://www.kyb.tuebingen.mpg.de
http://www.kyb.tuebingen.mpg.de
Research Group Macke
http://ebooks.cambridge.org/chapter.jsf?bid=CBO9781139941433&cid=CBO9781139941433A054
Chen, Z.
Cambridge University Press
Cambridge, UK
Advanced State Space Methods for Neural and Clinical Data
9781107079199
10.1017/CBO9781139941433.007
jakobJHMacke
LBuesing
MSahani
techreport
ParkM2014
Hierarchical models for neural population dynamics in the presence of nonstationarity
2015
1
Neural population activity often exhibits rich variability and temporal structure. This variability is thought to arise from single-neuron stochasticity, neural dynamics on short timescales, as well as from modulations of neural firing properties on long timescales, often referred to as “nonstationarity”. To better understand the nature of covariability in neural circuits and their impact on cortical information processing, we need statistical models that are able to capture multiple sources of variability on different timescales. Here, we introduce a hierarchical statistical model of neural population activity which models both neural population dynamics as well as inter-trial modulations in firing rates. In addition, we extend the model to allow us to capture nonstationarities in the population dynamics itself (i.e., correlations across neurons). We develop variational inference methods for learning model parameters, and demonstrate that the method can recover nonstationarities in both average firing rates and correlation structure. Applied to neural population recordings from anesthetized macaque primary visual cortex, our models provide a better account of the structure of neural firing than stationary dynamics models.
http://www.kyb.tuebingen.mpg.de
http://www.kyb.tuebingen.mpg.de
http://www.kyb.tuebingen.mpg.de
Research Group Macke
http://arxiv.org/pdf/1410.3111v1

submitted
MPark
jakobJHMacke
techreport
5865
The effect of pairwise neural correlations on global population statistics
2009
3
183
Simultaneously recorded neurons often exhibit correlations in their spiking activity. These correlations shape the statistical structure of the population activity, and can lead to substantial redundancy across neurons. Here, we study the effect of pairwise correlations on the population spike count statistics and redundancy in populations of threshold neurons in which response correlations arise from correlated Gaussian inputs. We investigate the scaling of the redundancy as the population size is increased, and compare the asymptotic redundancy in our models to the corresponding maximum and minimum entropy models.
http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/MPIKTR183_[0].pdf
http://www.kyb.tuebingen.mpg.de
http://www.kyb.tuebingen.mpg.de
Research Group Bethge
Biologische Kybernetik
Max-Planck-Gesellschaft
Max Planck Institute for Biological Cybernetics, Tübingen, Germany
en
jakobJHMacke
MOpper
mbethgeMBethge
poster
SpeiserTAM2017
Amortized inference for fast spike prediction from calcium imaging data
2017
2
25
207–208
Calcium imaging allows neuronal activity measurements from large populations of spatially identified neurons in vivo. However, spike inference algorithms are needed to infer spike times from fluorescence measurements of calcium concentration. Bayesian model inversion can be used to infer spikes, using carefully designed generative models that describe how spiking activity in a neuron influences measured fluorescence. Model inversion typically requires either computationally expensive MCMC sampling methods, or faster but approximate maximum-a-posteriori estimation. We present a method for efficiently inverting generative models for spike inference. Our method is several orders of magnitude faster than existing approaches, allowing for generative-model-based spike inference in real time for large-scale population neural imaging, and can be applied to a wide range of linear and nonlinear generative models. We use recent advances in black-box variational inference (BBVI, Ranganath 2014) and ‘amortize’ inference by learning a deep-network-based recognition model for fast model inversion (Mnih 2016). At training time, we simultaneously optimize the parameters of the generative model as well as the weights of a deep neural network which predicts the posterior approximation. At test time, performing inference for a given trace amounts to a fast single forward pass through the network at constant computational cost, and without the need for iterative optimization or MCMC sampling. On simple synthetic datasets, we show that our method is just as accurate as existing methods. However, the BBVI approach works with a wide range of generative models in a black-box manner as long as they are differentiable. In particular, we show that using a nonlinear generative model is better suited to describe GCaMP6 data (Chen 2013), leading to improved performance on real data. The framework can also easily be extended to combine supervised and unsupervised objectives, enabling semi-supervised learning of spike inference.
http://www.kyb.tuebingen.mpg.de
http://www.kyb.tuebingen.mpg.de
http://www.kyb.tuebingen.mpg.de
http://www.cosyne.org/c/index.php?title=Cosyne2017_posters_3
Salt Lake City, UT, USA
Computational and Systems Neuroscience Meeting (COSYNE 2017)
ASpeiser
STuraga
earcherEArcher
jakobJMacke
poster
LueckmannMN2017
Can serial dependencies in choices and neural activity explain choice probabilities?
2017
2
24
153
The activity of sensory neurons covaries with choice during perceptual decisions, commonly quantified as “choice probability”. Moreover, choices are influenced by a subject’s previous choice (serial dependencies) and neuronal activity often shows temporal correlations on long (seconds) timescales. Here, we ask whether these findings are linked, specifically: How are choice probabilities in sensory neurons influenced by serial dependencies in choices and neuronal activity? Do serial dependencies in choices and neural activity reflect the same underlying process? Using generalized linear models (GLMs), we analyze simultaneous measurements of behavior and V2 neural activity in macaques performing a visual discrimination task. We observe that past decisions are substantially more predictive of the current choice than the current spike count. Moreover, spiking activity exhibits strong correlations from trial to trial. We dissect temporal correlations by systematically varying the order of predictors in the GLM, and find that these correlations reflect two largely separate processes: There is neither a direct effect of the previous-trial spike count on choice, nor a direct effect of preceding choices on the spike count. Additionally, variability in spike counts can largely be explained by slow fluctuations across multiple trials (using a Gaussian Process latent modulator within the GLM). Is choice probability explained by history effects, i.e. how big is the residual choice probability after correcting for temporal correlations? We compute semi-partial correlations between choices and neural activity, which constitute a lower bound on the residual choice probability. We find that removing history effects by using semi-partial correlations does not systematically change the magnitude of choice probabilities. We therefore conclude that, despite the substantial serial dependencies in choices and neural activity, these do not explain the observed choice probability. Rather, the serial dependencies in choices and spiking activity reflect two parallel processes which are correlated by instantaneous covariations between choices and activity.
http://www.kyb.tuebingen.mpg.de
http://www.kyb.tuebingen.mpg.de
http://www.kyb.tuebingen.mpg.de
http://www.cosyne.org/c/index.php?title=Cosyne2017_posters_2
Salt Lake City, UT, USA
Computational and Systems Neuroscience Meeting (COSYNE 2017)
JMLueckmann
jakobJMacke
HNienborg
poster
GoncalvesLBNM2017
Flexible Bayesian inference for mechanistic models of neural dynamics
2017
2
24
113
One of the central goals of computational neuroscience is to understand the dynamics of single neurons and neural ensembles. However, linking mechanistic models of neural dynamics to empirical observations of neural activity has been challenging. Statistical inference is only possible for a few models of neural dynamics (e.g. GLMs), and no generally applicable, effective statistical inference algorithms are available: As a consequence, comparisons between models and data are either qualitative or rely on manual parameter tweaking, parameter fitting using heuristics, or brute-force search. Furthermore, parameter-fitting approaches typically return a single best-fitting estimate, but do not characterize the entire space of models that would be consistent with data. We overcome this limitation by presenting a general method for Bayesian inference on mechanistic models of neural dynamics. Our approach can be applied in a ‘black box’ manner to a wide range of neural models without requiring model-specific modifications. In particular, it extends to models without explicit likelihoods (e.g. most spiking networks). We achieve this goal by building on recent advances in likelihood-free Bayesian inference (Papamakarios and Murray 2016, Moreno et al. 2016): the key idea is to simulate multiple datasets from different parameters, and then to train a probabilistic neural network which approximates the mapping from data to posterior distribution. We illustrate this approach using Hodgkin-Huxley models of single neurons and models of spiking networks: On simulated data, estimated posterior distributions recover ground-truth parameters, and reveal the manifold of parameters for which the model exhibits the same behaviour. On in-vitro recordings of membrane voltages, we recover multivariate posteriors over biophysical parameters, and voltage traces accurately match empirical data.
Our approach will enable neuroscientists to perform Bayesian inference on complex neural dynamics models without having to design model-specific algorithms, closing the gap between biophysical and statistical approaches to neural dynamics.
http://www.kyb.tuebingen.mpg.de
http://www.kyb.tuebingen.mpg.de
http://www.kyb.tuebingen.mpg.de
http://www.cosyne.org/c/index.php?title=Cosyne2017_posters_2
Salt Lake City, UT, USA
Computational and Systems Neuroscience Meeting (COSYNE 2017)
PGoncalves
JMLueckmann
gbassettoGBassetto
MNonnenmacher
jakobJMacke
poster
BassettoM2017
Using Bayesian inference to estimate receptive fields from a small number of spikes
2017
2
23
63–64
A crucial step towards understanding how the external world is represented by sensory neurons is the characterization of neural receptive fields. Advances in experimental methods give increasing opportunity to study sensory processing in behaving animals, but also necessitate the ability to estimate receptive fields from very small spike counts. For visual neurons, the stimulus space can be very high dimensional, raising challenges for data analysis: How can one accurately estimate neural receptive fields using only a few spikes, and obtain quantitative uncertainty estimates about tuning properties (such as location and preferred orientation)? For many sensory areas, there are canonical parametric models of receptive field shapes (e.g., Gabor functions for primary visual cortex) which can be used to constrain receptive fields – we will use such parametric models for receptive field estimation in low-data regimes using full Bayesian inference. We will focus on modelling simple cells in primary visual cortex, but our approach will be applicable more generally. We model the spike generation process using a generalized linear model (GLM), with a receptive field parameterized as a time-modulated Gabor. Use of the parametric model dramatically reduces the number of parameters, and allows us to directly estimate the posterior distribution over interpretable model parameters. We develop an efficient Markov Chain Monte Carlo procedure which is adapted to receptive field estimation from movie data, by exploiting the spatiotemporal separability of receptive fields. We show that the method successfully detects the presence or absence of a receptive field in simulated data even when the total number of spikes is low, and can correctly recover ground-truth parameters. When applied to electrophysiological recordings, it returns estimates of model parameters which are consistent across different subsets of the data. In comparison with nonparametric methods based on Gaussian Processes, we find that it leads to better spike-prediction performance.
http://www.kyb.tuebingen.mpg.de
http://www.kyb.tuebingen.mpg.de
http://www.kyb.tuebingen.mpg.de
http://www.cosyne.org/c/index.php?title=Cosyne2017_posters_1
Salt Lake City, UT, USA
Computational and Systems Neuroscience Meeting (COSYNE 2017)
gbassettoGBassetto
jakobJMacke
poster
BassettoM2016
Full Bayesian inference for model-based receptive field estimation, with application to primary visual cortex
2016
9
21
117–118
A central question in sensory neuroscience is to understand how sensory information is represented in neural activity. A crucial step towards the solution of this problem is the characterization of the neuron’s receptive field (RF), which provides a quantitative description of those features of a rich sensory stimulus that modulate the firing rate of the neuron.
For visual neurons, the stimulus space can be very high dimensional, and RFs have to be estimated from neurophysiological recordings of limited size. The scarcity of data makes it paramount to have statistical methods which incorporate prior knowledge into the estimation process (Park & Pillow 2011), as well as to provide quantitative estimates of uncertainty about the inferred RFs (Stevenson et al 2011). For many sensory areas, there are canonical parametric models of RF shapes – e.g., Gabor functions for RFs in primary visual cortex (V1) (Jones & Palmer 1987). Bayesian methods provide a quantitative way of evaluating these models on empirical data by estimating the uncertainty of the inferred model parameters.
We present a technique for full Bayesian inference of the parameters of parametric RF models, focusing on Gabor shapes for V1. We model the spike generation process by means of a generalized linear model (GLM, Paninski 2004), whose linear filter (i.e., RF) is parameterized as a time-modulated Gabor function. Use of this model dramatically reduces the number of parameters required to describe the RF, and allows us to directly estimate the posterior distribution over interpretable model parameters (e.g. location, orientation, etc.). The resulting model is nonlinear in the parameters. We present an efficient Markov Chain Monte Carlo procedure for inferring the full posterior distribution over model parameters.
We show that the method successfully detects the presence or absence of a RF in simulated data – even when the total number of spikes is very low – and can correctly recover ground-truth parameters. When applied to electrophysiological recordings, it returns estimates of model parameters which are consistent across different subsets of the data. Our current implementation is focused on the response of simple cells in V1, but the approach can readily be extended to other sensory areas or nonlinear models of complex cells.
http://www.kyb.tuebingen.mpg.de
http://www.kyb.tuebingen.mpg.de
http://www.kyb.tuebingen.mpg.de
Research Group Macke
https://abstracts.gnode.org/conference/BC16/abstracts#/uuid/553b312adc8048a687c23dddc19644d5
Berlin, Germany
Bernstein Conference 2016
10.12751/nncn.bc2016.0109
gbassettoGBassetto
jakobJMacke
poster
NonnenmacherBSTM2016
Stitching neural activity in space and time: theory and practice
2016
2
27
223–224
Simultaneous recordings of the activity of large neural populations are extremely valuable as they can be used to infer the dynamics and interactions of neurons in a local circuit, shedding light on the computations performed. It is now possible to measure the activity of hundreds of neurons using in-vivo 2-photon calcium imaging. However, this experimental technique imposes a trade-off between the number of neurons which can be simultaneously recorded, and the temporal resolution at which the activity of those neurons can be sampled. Previous work (Turaga et al 2012, Bishop & Yu 2014) has shown that statistical models can be used to ameliorate this trade-off, by ‘stitching’ neural activity from subpopulations of neurons which have been imaged sequentially with overlap, rather than simultaneously. This makes it possible to estimate correlations even between non-simultaneously recorded neurons. In this work, we make two contributions: First, we show how taking into account correlations in the dynamics of neural activity gives rise to more general conditions under which stitching can be achieved, extending the work of (Bishop & Yu 2014). Second, we extend this framework to stitch activity both in space and time, i.e. from multiple subpopulations which might be imaged at different temporal rates. We use low-dimensional linear latent dynamical systems (LDS) to model neural population activity, and present scalable algorithms to estimate the parameters of a globally accurate LDS model from incomplete measurements. Using simulated data, we show that this approach can provide more accurate estimates of neural correlations than conventional approaches, and gives insights into the underlying neural dynamics.
http://www.kyb.tuebingen.mpg.de
http://www.kyb.tuebingen.mpg.de
http://www.kyb.tuebingen.mpg.de
Research Group Macke
http://www.cosyne.org/c/index.php?title=Cosyne_16
Salt Lake City, UT, USA
Computational and Systems Neuroscience Meeting (COSYNE 2016)
mnonnenmacherMNonnenmacher
LBuesing
ASpeiser
STuraga
jakobJHMacke
poster
NonnenmacherBBBM2015_2
Correlations and signatures of criticality in neural population models
2015
10
20
45
543.23
http://www.kyb.tuebingen.mpg.de
http://www.kyb.tuebingen.mpg.de
http://www.kyb.tuebingen.mpg.de
Research Group Macke
Research Group Bethge
http://www.sfn.org/am2015/
Chicago, IL, USA
45th Annual Meeting of the Society for Neuroscience (Neuroscience 2015)
mnonnenmacherMNonnenmacher
CBehrens
berensPBerens
mbethgeMBethge
jakobJHMacke
poster
CzubaykoBNOMK2015
Anatomical basis of spiking correlation in upper layers of somatosensory cortex
2015
10
18
45
240.25
In neuronal populations of the sensory cortex, stimulus responses are shaped by the cortical architecture on anatomical scales from tens of microns to millimeters. In particular, in L2/3 rodent vibrissal cortex we previously observed that whisker deflection evokes pairwise correlations that decrease both with distance between neurons and distance to the center of the whisker-associated column (Kerr, de Kock, Greenberg, Bruno, Sakmann, and Helmchen (2007). J. Neurosci. 27: 13316–28). One possible explanation for this finding is that these correlations arise from anatomically structured common inputs. L4 spiny stellate (SS) cells send vertical axon fibers to L2/3 that are confined within the borders of the whisker-associated column, and neuronal pairs closer together could exhibit greater dendritic overlap. Therefore, for pairs closer to the column center more of this overlap will intersect with SS projections. We tested this hypothesis using 2-photon targeted patching of L2/3 pyramidal pairs in anaesthetized rats to record sub- and suprathreshold stimulus responses, followed by anatomic reconstruction of the neurons and barrel field. We found a positive and statistically significant association between correlated AP firing and dendritic overlap inside the whisker-associated column. This effect was strongest for suprathreshold activity evoked shortly after whisker deflection (~20 ms), and decayed rapidly thereafter. It was also robust with respect to the voxel size, determined by the L4 axon reconstructions, used to quantify dendritic overlap. No relationship was detectable for offset responses or spontaneous activity. These results support the notion that the spatially structured correlations observed for short-latency stimulus-evoked spiking arise from anatomically structured feedforward projections.
http://www.kyb.tuebingen.mpg.de
http://www.kyb.tuebingen.mpg.de
http://www.kyb.tuebingen.mpg.de
Research Group Oberländer
Research Group Macke
http://www.sfn.org/am2015/
Chicago, IL, USA
45th Annual Meeting of the Society for Neuroscience (Neuroscience 2015)
czubaykoUCzubayko
gbassettoGBassetto
rnarayananRTNarayanan
moberlaenderMOberlaender
jakobJHMacke
jkerrJNDKerr
poster
RullaNMWSK2015
Twophoton imaging of neuronal populations in the primary visual cortex representation of the overhead visual field
2015
10
18
45
232.10
Rodents have a large binocular field of view that extends from the snout to over the animal’s head. Recent experiments have shown that rodents have a strong, innate, evasive behavior evoked exclusively by stimuli presented above them. However, little is known about the functional properties of cortical neurons that represent the overhead visual field. Here we describe a method for allowing direct optical recording from populations of neurons representing the overhead visual field. Firstly, the conventional microscope objective has been replaced with a periscope coupled to a miniature objective to facilitate placement of a stimulus monitor above the rat’s head. Secondly, we developed a method for presentation of visual stimuli on the OLED display of a tablet running the Android OS, and a camera-based method for calibrating the position of the stimulus display in relation to the animal’s head. Using this setup, we recorded in rats the activity of neurons in the representation of the overhead visual field of the primary visual cortex in response to a range of stimuli. Neurons were labeled with the calcium indicator OGB-1 with counterstaining of astrocytes using sulforhodamine 101. Stimuli were either an expanding or contracting looming dot, or a moving dot that moved at constant speed along multiple trajectories to cover all positions within the display. In both stimulus types, differing sets of foreground/background luminance were used. Preliminary results show that 19% of the neurons responded with clear and reproducible transients to the looming dot stimulus, and 30% were responsive to moving dot stimuli. The response profiles of neurons to different stimulus types and parameters were further analyzed in detail and compared between cortical areas and receptive field properties established for this cortical region.
http://www.kyb.tuebingen.mpg.de
http://www.kyb.tuebingen.mpg.de
http://www.kyb.tuebingen.mpg.de
http://www.sfn.org/am2015/
Chicago, IL, USA
45th Annual Meeting of the Society for Neuroscience (Neuroscience 2015)
rullaSRulla
benedictBNg
jakobJMacke
dhwDWallace
jsawJSawinski
jkerrJKerr
poster
BassettoSEM2015
A statistical characterization of neural population responses in V1
2015
9
16
146–147
Population activity in primary visual cortex exhibits substantial variability that is correlated on multiple time scales and across neurons [1]. A quantitative account of how visual information is encoded in populations of neurons in primary visual cortex therefore requires an accurate characterization of this variability. Our goal is to provide a statistical model for capturing the structure of this variability and its dependence on external stimuli, with particular focus on temporal correlations both on short (within-trial) and long (across-trial) timescales [2]. We address this question using neural population recordings from primary visual cortex in response to drifting gratings [3], using the framework of generalized linear models (GLMs). To model stimulus-driven responses, we take a nonparametric approach and employ Gaussian-process priors to model the smoothness of response profiles across time and different stimulus orientations, and low-rank constraints to facilitate inference from limited data. We find that the parameters which control the prior smoothness are consistent across neurons within each recording session, but differ markedly across recordings. For most neurons, the time-varying response across all stimulus orientations can be well captured using a low-rank decomposition with k = 4 dimensions. To capture slow modulations in firing rates, we include covariates in the GLM which are constrained to vary smoothly across trials, and find that including these terms leads to significant improvements in goodness-of-fit. Finally, we use latent dynamical systems [3] with point-process observation models [4] to capture variations and covariations in firing rates on fast timescales. While we focus our analysis on modelling neural population responses in V1, our approach provides a general formalism for obtaining an accurate quantitative model of response variability in neural populations.
http://www.kyb.tuebingen.mpg.de
http://www.kyb.tuebingen.mpg.de
http://www.kyb.tuebingen.mpg.de
Research Group Macke
Department Logothetis
http://www.nncn.de/de/bernsteinconference/2015/program
Heidelberg, Germany
Bernstein Conference 2015
10.12751/nncn.bc2015.0139
gbassettoGBassetto
fsandhaegerFSandhaeger
aeckerAEcker
jakobJHMacke
poster
SchuttHMW2015
Psignifit 4: Painfree Bayesian Inference for Psychometric Functions
Journal of Vision
2015
9
15
12
474
Psychometric functions are frequently used in vision science to model task performance. These sigmoid functions can be fit to data using likelihood maximization, but this ignores the reliability or variance of the point estimates. In contrast, Bayesian methods automatically calculate this reliability. However, using Bayesian methods in practice usually requires expert knowledge, user interaction and computation time. Also, most methods, including Bayesian ones, are vulnerable to nonstationary observers (whose performance is not constant). For such observers, all methods that assume a stationary binomial observer are overconfident in their estimates. We present Psignifit 4, a new method for fitting psychometric functions, which provides an efficient Bayesian analysis based on numerical integration, requires little user interaction and runs in seconds on a common office computer. Additionally, it fits a beta-binomial model, increasing the stability against nonstationarity, and contains standard settings, including a heuristic to set the prior based on the interval of stimulus levels in the experimental data. Of course, all properties of the analysis can be adjusted. To test our method, we ran it on extensive simulated datasets. First we tested the numerical accuracy of our method with different settings and found settings which calculate a good estimate fast and reliably. Testing the statistical properties, we find that our method calculates correct or slightly conservative confidence intervals in all tested conditions, including different sampling schemes, beta-binomial observers, other nonstationary observers and adaptive methods. When enough data was collected to overcome the small sample bias caused by the prior, the point estimates are also essentially unbiased.
In summary, we present a user-friendly, fast, correct and comprehensively tested Bayesian method to fit psychometric functions, which handles nonstationary observers well and is freely available as a MATLAB implementation online.
http://www.kyb.tuebingen.mpg.de
http://www.kyb.tuebingen.mpg.de
http://www.kyb.tuebingen.mpg.de
Research Group Macke
http://jov.arvojournals.org/article.aspx?articleid=2433582
St. Pete Beach, FL, USA
15th Annual Meeting of the Vision Sciences Society (VSS 2015)
10.1167/15.12.474
HSchütt
harmelingSHarmeling
jakobJMacke
felixFWichmann
poster
NonnenmacherBPBM2015
Correlations and signatures of criticality in neural population models
2015
3
7
207–208
Large-scale recording methods make it possible to measure the statistics of neural population activity, and thereby to gain insights into the principles that govern the collective activity of neural ensembles. One hypothesis that has emerged from this approach is that neural populations are poised at a ‘thermodynamic critical point’, and that this has important functional consequences (Tkacik et al 2014). Support for this hypothesis has come from studies that computed the specific heat, a measure of global population statistics, for groups of neurons subsampled from population recordings. These studies have found two effects which, in physical systems, indicate a critical point: First, the specific heat diverges with population size N. Second, when manipulating population statistics by introducing a ‘temperature’ in analogy to statistical mechanics, the maximum heat moves towards unit temperature for large populations. What mechanisms can explain these observations? We show that both effects arise in a simple simulation of retinal population activity. They robustly appear across a range of parameters, including biologically implausible ones, and can be understood analytically in simple models. The specific heat grows with N whenever the (average) correlation is independent of N, which is always true when uniformly subsampling a large, correlated population. For weakly correlated populations, the rate of divergence of the specific heat is proportional to the correlation strength. Thus, if retinal population codes were optimized to maximize specific heat, then this would predict that they seek to increase correlations. This is incongruent with theories of efficient coding that make the opposite prediction. We find criticality in a simple and parsimonious model of retinal processing, without the need for fine-tuning or adaptation. This suggests that signatures of criticality might not require an optimized coding strategy, but rather arise as a consequence of subsampling a stimulus-driven neural population (Aitchison et al 2014).
http://www.kyb.tuebingen.mpg.de
Research Group Macke
Research Group Bethge
http://www.cosyne.org/c/index.php?title=Cosyne2015_Program
Salt Lake City, UT, USA
Computational and Systems Neuroscience Meeting (COSYNE 2015)
M. Nonnenmacher
C. Behrens
P. Berens
M. Bethge
J. Macke
poster
NienborgM2014_2
Using sequential dependencies in neural activity and behavior to dissect choice-related activity in V2
2014
11
17
44
435.08
During perceptual decisions the activity of sensory neurons covaries with choice. Previous findings suggest that this partially reflects “bottom-up” and “top-down” effects. However, the quantitative contributions of these effects are unclear. To address this question, we take advantage of the observation that past choices influence current behavior (sequential dependencies). Here, we use data from two macaque monkeys performing a disparity discrimination task during simultaneous extracellular recordings of disparity-selective V2 neurons. We quantify the sequential dependencies using generalized linear models to predict choices or spiking activity of the V2 neurons. We find that past choices predict current choices substantially better than the spike counts on the current trial, i.e. have a higher “choice probability”. In addition, we observe that past choices have a significant predictive effect on the activity of sensory neurons on the current trial. This effect is not due to sequential dependencies of choices and neural activity alone, but also reflects a direct influence of past choices on the spike count on the current trial. We then use these sequential dependencies to dissect the neuronal covariation with choice: we decomposed the choice covariation of neural spike counts into components which can be explained by behavior or neural activity on previous trials. We find that about 30% of the observed covariation is already explained by the animals’ previous choice, suggesting a “top-down” contribution of at least 30%. Additionally, our results exemplify how variability frequently regarded as noise reflects the systematic effect of ignored neural and behavioral covariates, and that interpretation of covariations between neural activity and observed behavior should take the temporal context within the experiment into account.
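The style of GLM analysis described here can be illustrated on simulated data (all numbers and variable names below are hypothetical, not from the study): a logistic GLM predicting the current binary choice from the previous choice and the current-trial spike count recovers the dominance of the sequential dependency over the neural predictor.

```python
# Illustrative sketch, not the authors' analysis code: fit a logistic GLM
# predicting the current choice from (i) the previous choice and (ii) the
# current-trial spike count, on simulated data where the sequential
# dependency dominates and spikes carry only a weak choice signature.
import numpy as np

rng = np.random.default_rng(0)
n_trials = 5000

choices = np.zeros(n_trials)
spikes = np.zeros(n_trials)
for t in range(n_trials):
    prev = choices[t - 1] if t > 0 else 0.0
    p = 1.0 / (1.0 + np.exp(-1.5 * (2 * prev - 1)))  # past choice dominates
    choices[t] = rng.random() < p
    spikes[t] = rng.poisson(5 + choices[t])          # weak neural signature

X = np.column_stack([
    np.ones(n_trials - 1),
    2 * choices[:-1] - 1,                                 # previous choice, +-1
    (spikes[1:] - spikes[1:].mean()) / spikes[1:].std(),  # standardized count
])
y = choices[1:]

# Logistic regression by plain (averaged) gradient ascent on the log-likelihood.
w = np.zeros(3)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w += 0.5 * X.T @ (y - p) / len(y)

print(w.round(2))  # previous-choice weight dominates the spike-count weight
```

The fitted weight on the previous choice exceeds the weight on the spike count, the simulated analogue of past choices having a higher "choice probability" than current-trial spikes.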
http://www.kyb.tuebingen.mpg.de
Research Group Macke
http://www.sfn.org/annualmeeting/neuroscience2014
Washington, DC, USA
44th Annual Meeting of the Society for Neuroscience (Neuroscience 2014)
H. Nienborg
J. H. Macke
poster
ArcherPM2014_3
Low Dimensional Dynamical Models of Neural Populations with Common Input
2014
10
15
22
Modern experimental technologies enable simultaneous recording of large neural populations. These high-dimensional data present a challenge for analysis. Recent work has focused on extracting low-dimensional dynamical trajectories that may underlie such responses. Such methods enable visualization and may also provide insight into neural computations. Previous work focuses on modeling a population’s dynamics without conditioning on external stimuli. Our proposed technique integrates linear dimensionality reduction with a latent dynamical system model of neural activity. Under our model, the population response is governed by a low-dimensional dynamical system with quadratic input. In this framework the number of parameters grows linearly with population size (given fixed latent dimensionality). Hence it is computationally fast for large populations, unlike fully-connected models. Our method captures both noise correlations and low-dimensional stimulus selectivity through the simultaneous modeling of dynamics and stimulus dependence. This approach is particularly well-suited for studying the population activity of sensory cortices, where neurons often have substantial receptive field overlap.
http://www.kyb.tuebingen.mpg.de
Research Group Macke
http://www.neuroschooltuebingennena.de/fileadmin/user_upload/Dokumente/neuroscience/Abstractbook_NeNa2014_final.pdf
Schramberg, Germany
15th Conference of Junior Neuroscientists of Tübingen (NeNa 2014)
E. Archer
J. Pillow
J. Macke
poster
ArcherPM2014_2
Low-dimensional dynamical neural population models with shared stimulus drive
2014
9
3
72–73
Modern experimental technologies enable simultaneous recording of large neural populations. These high-dimensional data present a challenge for analysis. Recent work has focused on extracting low-dimensional dynamical trajectories that may underlie such responses. These methods enable visualization and may also provide insight into neural computations. However, previous work focused on modeling a population’s dynamics without conditioning on external stimuli.
We propose a new technique that integrates linear dimensionality reduction (analogous to the STA and STC) with a latent dynamical system model of neural activity. Under our model, the spike response of a neural population is governed by a low-dimensional dynamical system with quadratic input. In this framework, the number of parameters grows linearly with population size (given fixed latent dimensionality). Hence, it is computationally fast for large populations, unlike fully-connected models.
Our method captures both noise correlations and low-dimensional stimulus selectivity through the simultaneous modeling of dynamics and stimulus dependence. This approach is particularly well-suited for studying the population activity of sensory cortices, where neurons often have substantial receptive field overlap.
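A minimal generative sketch of this model class (our illustration under simplifying assumptions, not the authors' implementation): a K-dimensional latent linear dynamical system receives a quadratic stimulus input and drives Poisson spiking in N neurons, and the parameter count is linear in N for fixed K.

```python
# Illustrative sketch (assumed parameter values, not the authors' code) of the
# model class described in the abstract: a K-dimensional latent linear
# dynamical system with quadratic stimulus input driving Poisson spiking.
import numpy as np

rng = np.random.default_rng(1)
N, K, T = 50, 3, 200          # neurons, latent dimension, time bins

A = 0.9 * np.eye(K)                      # latent dynamics
B = rng.normal(size=(K, 2))              # loads quadratic stimulus features
C = rng.normal(scale=0.3, size=(N, K))   # observation (loading) matrix
d = np.full(N, -1.0)                     # baseline log-rates

s = rng.normal(size=T)                   # scalar stimulus
z = np.zeros((T, K))
for t in range(1, T):
    feat = np.array([s[t], s[t] ** 2])   # quadratic stimulus input
    z[t] = A @ z[t - 1] + B @ feat + 0.1 * rng.normal(size=K)

rates = np.exp(z @ C.T + d)              # exponential link
spikes = rng.poisson(rates)              # (T, N) population raster

# Parameter count: A and B are O(K^2); C and d are O(N*K) -- linear in N for
# fixed K, unlike an all-to-all coupled model with O(N^2) parameters.
n_params = A.size + B.size + C.size + d.size
print(n_params)  # 9 + 6 + 150 + 50 = 215
```

Doubling N here only doubles the size of C and d, which is the scaling argument made in the abstract.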
http://www.kyb.tuebingen.mpg.de
Research Group Macke
http://abstracts.gnode.org/abstracts/b1c1ba29cec74c65a3d4e6194b1bb0aa
Göttingen, Germany
Bernstein Conference 2014
10.12751/nncn.bc2014.0076
E. Archer
J. W. Pillow
J. H. Macke
poster
NienborgM2014
Using sequential dependencies in neural activity and behavior to dissect choice-related activity in V2
2014
9
3
73–74
During perceptual decisions the activity of sensory neurons covaries with choice. Previous findings suggest that this partially reflects “bottom-up” and “top-down” effects. However, the quantitative contributions of these effects are unclear. To address this question, we take advantage of the observation that past choices influence current behavior (sequential dependencies).
Here, we use data from two macaque monkeys performing a disparity discrimination task during simultaneous extracellular recordings of disparity-selective V2 neurons. We quantify the sequential dependencies using generalized linear models to predict choices or spiking activity of the V2 neurons. We find that past choices predict current choices substantially better than the spike counts on the current trial, i.e. have a higher “choice probability”. In addition, we observe that past choices have a significant predictive effect on the activity of sensory neurons on the current trial. This effect is not due to sequential dependencies of choices and neural activity alone, but also reflects a direct influence of past choices on the spike count on the current trial.
We then use these sequential dependencies to dissect the neuronal covariation with choice: we decomposed the choice covariation of neural spike counts into components which can be explained by behavior or neural activity on previous trials. We find that about 30% of the observed covariation is already explained by the animals’ previous choice, suggesting a “top-down” contribution of at least 30%. Additionally, our results exemplify how variability frequently regarded as noise reflects the systematic effect of ignored neural and behavioral covariates, and that interpretation of covariations between neural activity and observed behavior should take the temporal context within the experiment into account.
http://www.kyb.tuebingen.mpg.de
Research Group Macke
http://abstracts.gnode.org/abstracts/cf7e8932c4534b8c8df1aa7414cbaca1
Göttingen, Germany
Bernstein Conference 2014
10.12751/nncn.bc2014.0077
H. Nienborg
J. H. Macke
poster
SchuttHMW2014
Painfree Bayesian inference for psychometric functions
Perception
2014
8
43
ECVP Abstract Supplement
162
To estimate psychophysical performance, psychometric functions are usually modeled as sigmoidal functions whose parameters are estimated by likelihood maximization. While this approach gives a point estimate, it ignores its reliability (its variance). This is in contrast to Bayesian methods, which in principle can determine the posterior of the parameters and thus the reliability of the estimates. However, using Bayesian methods in practice usually requires extensive expert knowledge, user interaction and computation time. Also, many methods, including Bayesian ones, are vulnerable to nonstationary observers (whose performance is not constant). Our work provides an efficient Bayesian analysis, which runs within seconds on a common office computer, requires little user interaction and improves robustness against nonstationarity. A MATLAB implementation of our method, called psignifit 4, is freely available online. We additionally provide methods to combine posteriors to test the difference between psychometric functions (such as between conditions), obtain posterior distributions for the average of a group, and perform other comparisons of practical interest. Our method uses numerical integration, allowing robust estimation of a beta-binomial model that is stable against nonstationarities. Comprehensive simulations to test the numerical and statistical correctness and robustness of our method are in progress, and initial results look very promising.
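The role of the beta-binomial model can be illustrated with a standard-library-only sketch (ours, not psignifit code): its probability mass function, computed stably via log-gamma functions, has a larger variance than a binomial with the same mean, which is what lets the model absorb overdispersion from nonstationary observers.

```python
# Minimal sketch of the beta-binomial idea (our illustration, not psignifit 4
# code): a binomial whose success probability is itself Beta-distributed has
# extra variance ("overdispersion") relative to a plain binomial.
from math import lgamma, exp

def log_beta(a, b):
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def betabinom_pmf(k, n, a, b):
    """Beta-binomial pmf: C(n,k) * B(k+a, n-k+b) / B(a, b)."""
    log_choose = lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
    return exp(log_choose + log_beta(k + a, n - k + b) - log_beta(a, b))

n = 40                         # trials per block (illustrative)
a = b = 2.0                    # Beta(2, 2): mean 0.5, substantial spread
mean = sum(k * betabinom_pmf(k, n, a, b) for k in range(n + 1))
var = sum((k - mean) ** 2 * betabinom_pmf(k, n, a, b) for k in range(n + 1))
print(mean, var)               # mean 20; variance 88 exceeds binomial 10
```

With p = 0.5 a binomial gives variance n·p·(1−p) = 10, while the beta-binomial gives n·p·(1−p)·(a+b+n)/(a+b+1) = 88 here; a credible interval built on the binomial would therefore be far too narrow for such data.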
http://www.kyb.tuebingen.mpg.de
Research Group Macke
Department Schölkopf
http://pec.sagepub.com/content/43/1_suppl.toc
Beograd, Serbia
37th European Conference on Visual Perception (ECVP 2014)
10.1177/03010066140430S101
H. Schütt
S. Harmeling
J. Macke
F. Wichmann
poster
SchuttHMW2014_2
Painfree Bayesian inference for psychometric functions
2014
8
38–39
To estimate psychophysical performance, psychometric functions are usually modeled as sigmoidal functions whose parameters are estimated by likelihood maximization. While this approach gives a point estimate, it ignores its reliability (its variance). This is in contrast to Bayesian methods, which in principle can determine the posterior of the parameters and thus the reliability of the estimates. However, using Bayesian methods in practice usually requires extensive expert knowledge, user interaction and computation time. Also, many methods, including Bayesian ones, are vulnerable to nonstationary observers (whose performance is not constant). Our work provides an efficient Bayesian analysis, which runs within seconds on a common office computer, requires little user interaction and improves robustness against nonstationarity. A MATLAB implementation of our method, called psignifit 4, is freely available online. We additionally provide methods to combine posteriors to test the difference between psychometric functions (such as between conditions), obtain posterior distributions for the average of a group, and perform other comparisons of practical interest.
Our method uses numerical integration, allowing robust estimation of a beta-binomial model that is stable against nonstationarities. Comprehensive simulations to test the numerical and statistical correctness and robustness of our method are in progress, and initial results look very promising.
http://www.kyb.tuebingen.mpg.de
http://unituebingen.de//uni/sii/empg2014/Program.htm
Tübingen, Germany
2014 European Mathematical Psychology Group Meeting (EMPG)
H. Schütt
S. Harmeling
J. Macke
F. Wichmann
poster
ArcherPM2014
Lowdimensional models of neural population recordings with complex stimulus selectivity
2014
3
2014
162
Modern experimental technologies such as multi-electrode arrays and 2-photon population calcium imaging make it possible to record the responses of large neural populations (up to 100s of neurons) simultaneously. These high-dimensional data pose a significant challenge for analysis. Recent work has focused on extracting low-dimensional dynamical trajectories that may underlie such responses. These methods enable visualization of high-dimensional neural activity, and may also provide insight into the function of underlying circuitry. Previous work, however, has primarily focused on models of a population’s intrinsic dynamics, without taking into account any external stimulus drive. We propose a new technique that integrates linear dimensionality reduction of stimulus-response functions (analogous to spike-triggered average and covariance analysis) with a latent dynamical system (LDS) model of neural activity. Under our model, the population response is governed by a low-dimensional dynamical system with nonlinear (quadratic) stimulus-dependent input. Parameters of the model can be learned by combining standard expectation maximization for linear dynamical system models with a recently proposed algorithm for learning quadratic feature selectivity. Unlike models with all-to-all connectivity, this framework scales well to large populations since, given fixed latent dimensionality, the number of parameters grows linearly with population size. Simultaneous modeling of dynamics and stimulus dependence allows our method to model correlations in response variability while also uncovering low-dimensional stimulus selectivity that is shared across a population. Because stimulus selectivity and noise correlations both arise from coupling to the underlying dynamical system, it is particularly well-suited for studying the neural population activity of sensory cortices, where stimulus inputs received by different neurons are likely to be mediated by local circuitry, giving rise to both shared dynamics and substantial receptive field overlap.
http://www.kyb.tuebingen.mpg.de
Research Group Macke
http://www.cosyne.org/c/index.php?title=Cosyne_14
Salt Lake City, UT, USA
Computational and Systems Neuroscience Meeting (COSYNE 2014)
E. Archer
J. W. Pillow
J. Macke
poster
TuragaBPDPHM2014
Predicting noise correlations for non-simultaneously measured neuron pairs
2014
3
2014
84
Simultaneous recordings of the activity of large neural populations are extremely valuable, as they can be used to infer the dynamics and interactions of neurons in a local circuit, shedding light on the computations performed. It is now possible to measure the activity of hundreds of neurons using in-vivo 2-photon calcium imaging. However, many computations are thought to involve circuits consisting of thousands of neurons, such as cortical barrels in rodent somatosensory cortex. Here we contribute a statistical method for “stitching” together sequentially imaged sets of neurons into one model by phrasing the problem as fitting a latent dynamical system with missing observations. This method allows us to substantially expand the population sizes for which population dynamics can be characterized, beyond the number of simultaneously imaged neurons. We describe conditions for successful stitching and use recordings from mouse somatosensory cortex to demonstrate that this method enables accurate predictions of noise correlations between non-simultaneously recorded neuron pairs.
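The prediction step behind "stitching" can be sketched as follows (our illustration of the latent-variable logic, not the authors' code): if shared variability is attributed to a common latent state z with covariance S, then a fitted loading matrix C predicts the covariance of a pair (i, j) that was never recorded together as C[i] @ S @ C[j].

```python
# Hedged sketch of the stitching logic (illustrative, not the authors' code):
# two sets of neurons imaged at different times share a latent state, so the
# loading matrix predicts covariances even for never-co-recorded pairs.
import numpy as np

rng = np.random.default_rng(2)
K, N, T = 2, 6, 100000
C = rng.normal(size=(N, K))            # loadings onto the shared latent state

# Ground truth: latent state with identity covariance plus private noise.
# Imagine neurons 0-2 imaged in session 1 and neurons 3-5 in session 2, so
# pairs across the split are never observed simultaneously.
z = rng.normal(size=(T, K))            # latent covariance S = I here
y = z @ C.T + 0.5 * rng.normal(size=(T, N))

i, j = 0, 5                            # a never-co-recorded pair
predicted = C[i] @ C[j]                # C[i] @ S @ C[j] with S = I
empirical = np.cov(y[:, i], y[:, j])[0, 1]
print(predicted, empirical)            # close for large T
```

In the real setting C and S are estimated by EM with the unobserved sessions treated as missing data; the point of the sketch is only that the latent model ties the two sessions together enough to predict cross-session covariances.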
http://www.kyb.tuebingen.mpg.de
Research Group Macke
http://www.cosyne.org/c/index.php?title=Cosyne_14
Salt Lake City, UT, USA
Computational and Systems Neuroscience Meeting (COSYNE 2014)
S. C. Turaga
L. Buesing
A. Packer
H. Dalgleish
N. Pettit
M. Hauser
J. Macke
poster
TuragaBPHM2013
Inferring interactions between cell types from multiple calcium imaging snapshots of the same neural circuit
2013
11
13
43
743.27
Understanding the functional connectivity between different cortical cell types and the resulting population dynamics is a challenging and important problem. Progress with in-vivo 2-photon population calcium imaging has made it possible to densely sample neural activity in superficial layers of a local patch of cortex. In principle, such data can be used to infer the functional (statistical) connectivity between different classes of cortical neurons by fitting models such as generalized linear models or latent dynamical systems (LDS). However, this approach faces three major challenges which we address: 1) only small populations of neurons (~200) can currently be simultaneously imaged at any given time; 2) the cell types of individual neurons are often unknown; and 3) it is unclear how to pool data across different animals to derive an average model.
First, while it is not possible to simultaneously image all neurons in a cortical column, it is currently possible to image the activity of ~200 neurons at a time and to repeat this procedure at multiple cortical depths (down to layer 3). We present a computational method ("Stitching LDS") which allows us to "stitch" such non-simultaneously imaged populations of neurons into one large virtual population spanning different depths of cortex. Importantly, and surprisingly, this approach allows us to predict couplings and noise correlations even for pairs of neurons that were never imaged simultaneously.
Second, we automatically cluster neurons based on similarities in their functional connectivity (“Clustering LDS”). Under the assumption that such functionally defined clusters can correspond to cell types, this enables us to infer both the cell types and their functional connectivity.
Third, while connection profiles of individual cells in one class can be variable, we expect the ‘average’ influence of one cell class on another to be fairly consistent across animals. We show how our approach can be used to pool measurements across different animals in a principled manner (“Pooling LDS”). The result is a highly accurate average model of the interactions between different cell classes.
We demonstrate the utility of our computational tools by applying them to model the superficial layers of barrel cortex based on in-vivo 2-photon imaging data in awake mice.
http://www.kyb.tuebingen.mpg.de
Research Group Macke
http://www.sfn.org/annualmeeting/neuroscience2013
San Diego, CA, USA
43rd Annual Meeting of the Society for Neuroscience (Neuroscience 2013)
S. C. Turaga
L. Buesing
M. Packer
M. Hausser
J. H. Macke
poster
MackeML2013
How biased are maximum entropy models of neural population activity?
2013
3
III-89
http://www.kyb.tuebingen.mpg.de
Research Group Macke
http://www.cosyne.org/c/index.php?title=Cosyne_13
Salt Lake City, UT, USA
Computational and Systems Neuroscience Meeting (COSYNE 2013)
J. Macke
I. Murray
P. Latham
poster
BuesingMB2013
Robust estimation for neural statespace models
2013
3
II-89
http://www.kyb.tuebingen.mpg.de
Research Group Macke
http://www.cosyne.org/c/index.php?title=Cosyne_13
Salt Lake City, UT, USA
Computational and Systems Neuroscience Meeting (COSYNE 2013)
L. Buesing
J. Macke
M. Sahani
poster
MackeBCYSS2012_2
Empirical models of spiking in neural populations
2012
5
http://www.kyb.tuebingen.mpg.de
Ashburn, VA, USA
Janelia Farm Conference 2012: Machine Learning, Statistical Inference, and Neuroscience
J. H. Macke
L. Büsing
J. P. Cunningham
B. M. Yu
K. V. Shenoy
M. Sahani
poster
HaefnerGMB2011
Relationship between decoding strategy, choice probabilities and neural correlations in a perceptual decision-making task
2011
11
41
17.09
When monkeys make a perceptual decision about ambiguous visual stimuli, individual sensory neurons in MT and other areas have been shown to covary with the decision. This observation suggests that the response variability in those very neurons causes the animal to choose one option over the other. However, the fact that sensory neurons are correlated has greatly complicated attempts to link those covariances (and the associated choice probabilities) to a direct involvement of any particular neuron in a decision-making task.
Here we report on an analytical treatment of choice probabilities in a population of correlated sensory neurons read out by a linear decoder. We present a closed-form solution that links choice probabilities, noise correlations and decoding weights for the case of fixed integration time. This allowed us to analytically prove and generalize a prior numerical finding that choice probabilities are due only to the difference between the correlations within and between decision pools (Nienborg & Cumming 2010), and to derive simplified expressions for a range of interesting cases. We investigated the implications for plausible correlation structures such as pool-based and limited-range correlations.
We found that the relationship between choice probabilities and decoding weights is in general non-monotonic and highly sensitive to the underlying correlation structure. In fact, given empirical measures of the interneuronal correlations and CPs, our formulas allow us to infer the individual neuronal decoding weights. We confirmed the feasibility of this approach using synthetic data. We then applied our analytical results to a published dataset of empirical noise correlations and choice probabilities (Cohen & Newsome 2008 and 2009) recorded during a classic motion discrimination task (Britten et al 1992). We found that the data are compatible with an optimal readout scheme in which the responses of neurons with the correct direction preference are summed and those with perpendicular preference, but positively correlated noise, are subtracted. While the correlation data of Cohen & Newsome (being based on individual extracellular electrode recordings) do not give access to the full covariance structure of a neural population, our analytical formulas will make it possible to accurately infer individual readout weights from simultaneous population recordings.
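The setting can be reproduced in a toy Monte Carlo simulation (our illustration; the poster's contribution is the closed-form solution, which is not reproduced here): simulate a correlated Gaussian population, form the decision as the sign of a weighted sum, and measure one neuron's choice probability as the ROC area between its choice-conditioned response distributions.

```python
# Toy Monte Carlo sketch (our illustration, not the authors' derivation):
# choice probability of one neuron under a linear pooling readout of a
# population with uniform noise correlations, on zero-signal trials.
import numpy as np

rng = np.random.default_rng(3)
N, T = 20, 20000
rho = 0.2
cov = rho * np.ones((N, N)) + (1 - rho) * np.eye(N)    # uniform correlations
w = np.ones(N) / N                                     # pooling readout

r = rng.multivariate_normal(np.zeros(N), cov, size=T)  # zero-signal trials
choice = (r @ w) > 0                                   # linear decoder

def roc_area(pos, neg):
    """P(draw from pos > draw from neg) via the Mann-Whitney U statistic."""
    both = np.concatenate([pos, neg])
    ranks = both.argsort().argsort() + 1.0
    u = ranks[: len(pos)].sum() - len(pos) * (len(pos) + 1) / 2
    return u / (len(pos) * len(neg))

cp = roc_area(r[choice, 0], r[~choice, 0])
print(round(cp, 2))   # clearly above 0.5 for a positively weighted neuron
```

Re-running with different correlation structures (e.g. stronger within-pool than between-pool correlations) changes the CPs even for fixed weights, the sensitivity the abstract describes.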
http://www.kyb.tuebingen.mpg.de
Research Group Bethge
http://www.sfn.org/am2011/
Washington, DC, USA
41st Annual Meeting of the Society for Neuroscience (Neuroscience 2011)
R. M. Haefner
S. Gerwinn
J. H. Macke
M. Bethge
poster
MackeBCYSS2011
Modelling lowdimensional dynamics in recorded spiking populations
2011
2
I-34
Neural population activity reflects not only variations in stimulus drive (captured by many neural encoding models) but also the rich computational dynamics of recurrent neural circuitry. Identifying this dynamical structure, and relating it to external stimuli and behavioural events, is a crucial step towards understanding neural computation. One data-driven approach is to fit hidden low-dimensional dynamical systems models to the high-dimensional spiking observations collected by microelectrode arrays (Yu et al, 2006, 2009). This approach yields low-dimensional representations of population activity, allowing analysis and visualization of population dynamics with single-trial resolution. Here, we compare two models using latent linear dynamics, with the dependence of spiking observations on the dynamical state being either linear with Gaussian observations (GaussLDS), or generalised linear with Poisson observations and an exponential nonlinearity (PoissonLDS) (Kulkarni & Paninski, 2007). Both models were fit by Expectation-Maximisation to multi-electrode recordings from premotor cortex in behaving monkeys during the delay period of a delayed reach task. We evaluated the accuracy of different approximations for the E-step necessary for PoissonLDS using elliptical slice sampling. We quantified model performance using a cross-prediction approach (Yu et al). Although only the Poisson noise model takes the discrete nature of spiking into account, we found no consistent improvement of the Poisson model over GaussLDS: PoissonLDS was generally more accurate for low dimensions, but slightly underperformed GaussLDS in higher dimensions (cf. Lawhern et al. 2010). We also examined the ability of such models to capture conventional population metrics such as pairwise correlations and the distribution of synchronous spike counts.
We found that both models were able to reproduce these quantities with very low dynamical dimension, although the non-positivity of the Gaussian model introduced a bias. Thus, despite its verisimilitude, the Poisson observation model does not always yield more accurate predictions in real data.
http://www.kyb.tuebingen.mpg.de
Research Group Bethge
http://www.cosyne.org/c/index.php?title=Cosyne_11_posters
Salt Lake City, UT, USA
Computational and Systems Neuroscience Meeting (COSYNE 2011)
J. H. Macke
L. Büsing
J. P. Cunningham
B. M. Yu
K. V. Shenoy
M. Sahani
poster
MackeOB2011_2
The effect of common input on higher-order correlations and entropy in neural populations
2011
2
III-68
Finding models for capturing the statistical structure of multi-neuron firing patterns is a major challenge in sensory neuroscience. Recently, Maximum Entropy (MaxEnt) models have become popular tools for studying neural population recordings [4, 3]. These studies have found that small populations in retinal, but not in local cortical circuits, are well described by models based on pairwise correlations. It has also been found that entropy in small populations grows sublinearly [4], that sparsity in the population code is related to correlations [3], and it has been conjectured that neural populations might be at a ‘critical point’. While there have been many empirical studies using MaxEnt models, there has arguably been a lack of analytical studies that might explain the diversity of their findings. In particular, theoretical models would be of great importance for investigating their implications for large populations. Here, we study these questions in a simple, tractable population model of neurons receiving Gaussian inputs [1, 2]. Although the Gaussian input has maximal entropy, the spiking nonlinearities yield nontrivial higher-order correlations (‘hocs’). We find that the magnitude of hocs is strongly modulated by pairwise correlations, in a manner which is consistent with neural recordings. In addition, we show that the entropy in this model grows sublinearly for small, but linearly for large populations. We characterize how the magnitude of hocs grows with population size. Finally, we find that the hocs in this model lead to a diverging specific heat, and therefore, that any such model appears to be at a critical point. We conclude that common input might provide a mechanistic explanation for a wide range of recent empirical observations. [1] SI Amari, H Nakahara, S Wu, Y Sakai. Neural Comput, 2003. [2] JH Macke, M Opper, M Bethge. ArXiv, 2010. [3] IE Ohiorhenuan, et al. Nature, 2010. [4] E Schneidman, MJ Berry, R Segev, W Bialek. Nature, 2006.
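The common-input mechanism can be sketched with a small dichotomized-Gaussian simulation (our illustration of the model class of [1, 2], with made-up parameter values): thresholding a shared Gaussian input produces binary spike patterns with both pairwise and higher-order correlations, e.g. an excess of fully silent states over an independent model.

```python
# Illustrative sketch (assumed parameters, not the authors' code): binary
# spikes from thresholded Gaussian inputs with a shared component. The input
# is Gaussian (maximum entropy), but the threshold nonlinearity induces
# pairwise and higher-order correlations in the spikes.
import numpy as np

rng = np.random.default_rng(4)
N, T = 10, 50000
rho_in = 0.3                                  # shared-input strength

common = rng.normal(size=(T, 1))              # one input shared by all neurons
private = rng.normal(size=(T, N))             # independent private inputs
inputs = np.sqrt(rho_in) * common + np.sqrt(1 - rho_in) * private
spikes = (inputs > 1.0).astype(float)         # threshold nonlinearity

# Pairwise correlation induced purely by the shared input:
c = np.corrcoef(spikes.T)
pairwise = c[np.triu_indices(N, 1)].mean()

# Higher-order signature: the all-silent state occurs more often than an
# independent model with the same firing rate p would predict.
p = spikes.mean()
p_silent = (spikes.sum(axis=1) == 0).mean()
print(pairwise, p_silent, (1 - p) ** N)
```

Because the excess silence (and excess synchrony) cannot be produced by independent units at the same rates, this is the kind of higher-order structure the abstract attributes to common input.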
http://www.kyb.tuebingen.mpg.de
Research Group Bethge
http://www.cosyne.org/c/index.php?title=Cosyne_11_posters3
Salt Lake City, UT, USA
Computational and Systems Neuroscience Meeting (COSYNE 2011)
J. H. Macke
M. Opper
M. Bethge
poster
7074
Estimating cortical maps with Gaussian process models
2010
11
40
483.18
A striking feature of cortical organization is that the encoding of many stimulus features, such as orientation preference, is arranged into topographic maps. The structure of these maps has been extensively studied using functional imaging methods, for example optical imaging of intrinsic signals, voltage-sensitive dye imaging or functional magnetic resonance imaging. As functional imaging measurements are usually noisy, statistical processing of the data is necessary to extract maps from the imaging data. We here present a probabilistic model of functional imaging data based on Gaussian processes. In comparison to conventional approaches, our model yields superior estimates of cortical maps from smaller amounts of data. In addition, we obtain quantitative uncertainty estimates, i.e. error bars on properties of the estimated map. We use our probabilistic model to study the coding properties of the map and the role of noise correlations by decoding the stimulus from single trials of an imaging experiment. In addition, we show how our method can be used to reconstruct maps from sparse measurements, for example multi-electrode recordings. We demonstrate our model both on simulated data and on intrinsic signal imaging data from ferret visual cortex.
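The model class can be illustrated with a minimal Gaussian-process regression sketch (ours, not the map-estimation code; 1-D instead of a 2-D cortical surface, and with an assumed squared-exponential kernel): the posterior mean reconstructs the "map" from noisy measurements, and the posterior variance provides the error bars mentioned in the abstract.

```python
# Minimal GP-regression sketch (our illustration of the model class, not the
# authors' implementation): posterior mean and variance of a smooth 1-D "map"
# observed through noisy measurements, squared-exponential kernel.
import numpy as np

def sqexp(x1, x2, length=0.5, amp=1.0):
    return amp * np.exp(-0.5 * (x1[:, None] - x2[None, :]) ** 2 / length ** 2)

rng = np.random.default_rng(5)
xs = rng.uniform(0, 4, size=30)               # measurement locations
f = np.sin(2 * xs)                            # underlying "map" (assumed)
y = f + 0.3 * rng.normal(size=30)             # noisy imaging measurements

xq = np.linspace(0, 4, 100)                   # locations to reconstruct
K = sqexp(xs, xs) + 0.3 ** 2 * np.eye(30)     # kernel + observation noise
Ks = sqexp(xq, xs)
post_mean = Ks @ np.linalg.solve(K, y)        # GP posterior mean
post_var = np.diag(sqexp(xq, xq) - Ks @ np.linalg.solve(K, Ks.T))

# Error bars shrink near the data and stay near the prior far from it,
# which is the quantitative uncertainty estimate described in the abstract.
print(post_var.min(), post_var.max())
```

Reconstruction from sparse measurements (e.g. electrode penetrations) corresponds to simply using fewer, scattered xs locations; the posterior variance then flags the poorly constrained regions of the map.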
http://www.kyb.tuebingen.mpg.de
Research Group Bethge
http://www.sfn.org/am2010/index.aspx?pagename=abstracts_main
Biologische Kybernetik
Max-Planck-Gesellschaft
San Diego, CA, USA
40th Annual Meeting of the Society for Neuroscience (Neuroscience 2010)
en
J. H. Macke
S. Gerwinn
L. E. White
M. Kaschube
M. Bethge
poster
HafnerGMB2009
Neuronal decision-making with realistic spiking models
Frontiers in Computational Neuroscience
2009
10
1
2009
Conference Abstract: Bernstein Conference on Computational Neuroscience
132–133
The neuronal processes underlying perceptual decisionmaking have been the focus of numerous studies over the past two decades. In the current standard model [1][2][3] the output of noisy sensory neurons is pooled and integrated by decision neurons. Once the activity of the decision neurons reaches a threshold, the corresponding choice is made. This bottomup model was recently challenged based on the empirical finding that the time courses of psychophysical kernel (PK) and choice probability (CP) qualitatively differ from each other [4]. It was concluded that the decisionrelated activity in sensory neurons, at least in part, reflects the decision through a topdown signal, rather than contribute to the decision causally. However, the prediction of the standard bottomup model about the relationship between the time courses of PKs and CPs crucially depends on the underlying noise model. Our study explores the impact of the time course and correlation structure of neuronal noise on PK and CP for several decision models. For the case of nonleaky integration over the entire stimulus duration, we derive analytical expressions for Gaussian additive noise with arbitrary correlation structure. For comparison, we also investigate biophysically generated responses with a Fano factor that increases with the counting window [5], and alternative decision models (leaky, integration to bound) using numerical simulations.
In the case of nonleaky integration over the entire stimulus duration we find that the amplitude of the PK only depends on the overall level of noise, but not its temporal changes. Consequently the PK remains constant regardless of the temporal evolution or correlation structure in the noise. In conjunction with the observed decrease in the amplitude of the PK (e.g. [4]) this supports the conclusion that decreasing PKs are evidence for an integration to a bound model [1][3]. However, we find that the temporal evolution of the CP depends strongly on both the time course of the noise variance and the temporal correlations within the pool of sensory neurons. For instance, a noise variance that increases over time also leads to an increasing CP. The bottomup account that appears to agree best with the data in [4] combines an increasing variance of the correlated noise (the noise that cannot be eliminated by averaging over many neurons) with an integrationtobound decision model. This leads to a decreasing PK, as well as a CP that first increases slowly before leveling off and persisting until the end. We do not find qualitatively different results when using biophysically generated or Poisson distributed responses instead of additive Gaussian noise.
In summary, we advance the analytical framework for a quantitative comparison of choice probabilities and psychophysical kernels, and find that recent data that were taken as evidence of a top-down component in choice probabilities may alternatively be accounted for by a bottom-up model when allowing for time-varying correlated noise.
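The two qualitative claims above (constant PK, noise-dependent CP) can be illustrated with a minimal simulation. This is a sketch under illustrative assumptions, not the study's actual analysis: a single pooled sensory signal receives additive Gaussian noise whose standard deviation grows over time (`noise_sd` is an arbitrary schedule), the choice is the sign of the non-leaky integral, the PK is the choice-triggered stimulus difference, and the per-frame CP is the ROC area between choice-conditioned responses.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_frames = 20000, 10

# Stimulus fluctuations: zero-mean Gaussian noise in each time frame.
stim = rng.normal(0.0, 1.0, size=(n_trials, n_frames))
# Sensory noise whose variance grows over time (one illustrative choice).
noise_sd = np.linspace(0.5, 1.5, n_frames)
sensory = stim + rng.normal(0.0, 1.0, size=(n_trials, n_frames)) * noise_sd

# Non-leaky integration over the full stimulus: choice = sign of summed evidence.
choice = (sensory.sum(axis=1) > 0).astype(int)

# Psychophysical kernel: choice-triggered difference of stimulus fluctuations.
pk = stim[choice == 1].mean(axis=0) - stim[choice == 0].mean(axis=0)

def cp(frame):
    """Choice probability for one frame: ROC area (Mann-Whitney) between
    the choice-conditioned sensory response distributions."""
    r1, r0 = sensory[choice == 1, frame], sensory[choice == 0, frame]
    ranks = np.argsort(np.argsort(np.concatenate([r1, r0]))) + 1
    u1 = ranks[:len(r1)].sum() - len(r1) * (len(r1) + 1) / 2
    return u1 / (len(r1) * len(r0))

cps = np.array([cp(t) for t in range(n_frames)])
```

In this toy model `pk` is roughly flat across frames while `cps` rises over time, mirroring the analytical result for non-leaky integration.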
http://www.kyb.tuebingen.mpg.de
Research Group Bethge
http://www.frontiersin.org/10.3389/conf.neuro.10.2009.14.004/event_abstract?sname=Bernstein_Conference_on_Computational_Neuroscience
Frankfurt a.M., Germany
Bernstein Conference on Computational Neuroscience (BCCN 2009)
10.3389/conf.neuro.10.2009.14.004
R. Häfner
S. Gerwinn
J. H. Macke
M. Bethge
poster
6242
Estimating Critical Stimulus Features from Psychophysical Data: The Decision-Image Technique Applied to Human Faces
Journal of Vision
2009
8
9
8
31
One of the main challenges in the sensory sciences is to identify the stimulus features on which sensory systems base their computations: they are a prerequisite for computational models of perception. We describe a technique, decision-images, for extracting critical stimulus features based on logistic regression. Rather than embedding the stimuli in noise, as is done in classification-image analysis, we want to infer the important features directly from physically heterogeneous stimuli. A decision-image not only defines the critical region-of-interest within a stimulus but is a quantitative template which defines a direction in stimulus space. Decision-images thus enable the development of predictive models, as well as the generation of optimized stimuli for subsequent psychophysical investigations. Here we describe our method and apply it to data from a human face discrimination experiment. We show that decision-images are able to predict human responses not only in terms of overall percent correct but are able to predict, for individual observers, the probabilities with which individual faces are (mis)classified. We then test the predictions of the models using optimized stimuli. Finally, we discuss possible generalizations of the approach and its relationships with other models.
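The core of the decision-image idea — fitting logistic regression to heterogeneous stimuli and reading the weight vector as a template — can be sketched as follows. Everything here is synthetic and illustrative (the `template`, stimulus dimensions, and learning rate are assumptions, not the paper's data or fitting procedure):

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials, n_pixels = 2000, 64  # e.g. flattened 8x8 stimulus patches

# Hypothetical ground truth: the observer weights only a small critical region.
template = np.zeros(n_pixels)
template[:8] = 1.0

X = rng.normal(size=(n_trials, n_pixels))        # heterogeneous stimuli
p = 1.0 / (1.0 + np.exp(-X @ template))          # response probabilities
y = rng.random(n_trials) < p                     # simulated observer choices

# Fit logistic regression by gradient ascent on the log-likelihood;
# the fitted weight vector w is the "decision-image".
w = np.zeros(n_pixels)
for _ in range(2000):
    grad = X.T @ (y - 1.0 / (1.0 + np.exp(-X @ w))) / n_trials
    w += 1.0 * grad
```

The fitted `w` defines both the critical region (large-magnitude weights) and a direction in stimulus space along which optimized stimuli could be generated.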
http://www.kyb.tuebingen.mpg.de
Department Schölkopf
Research Group Bethge
http://www.journalofvision.org/9/8/31/
Biologische Kybernetik
MaxPlanckGesellschaft
Naples, FL, USA
9th Annual Meeting of the Vision Sciences Society (VSS 2009)
en
10.1167/9.8.31
J. H. Macke
F. A. Wichmann
poster
5845
Bayesian estimation of orientation preference maps
Frontiers in Systems Neuroscience
2009
3
2009
Conference Abstracts: Computational and Systems Neuroscience
Neurons in the early visual cortex of mammals exhibit a striking organization with respect to their functional properties. A prominent example is the layout of orientation preferences in primary visual cortex, the orientation preference map (OPM). Functional imaging techniques, such as optical imaging of intrinsic signals, have been used extensively for the measurement of OPMs. As the signal-to-noise ratio in individual pixels is often low, the signals are usually spatially smoothed with a fixed linear filter to obtain an estimate of the functional map.
Here, we consider the estimation of the map from noisy measurements as a Bayesian inference problem. By combining prior knowledge about the structure of OPMs with experimental measurements, we want to obtain better estimates of the OPM with smaller trial numbers. In addition, the use of an explicit, probabilistic model for the data provides a principled framework for setting parameters and smoothing.
We model the underlying map as a bivariate Gaussian process (GP, a.k.a. Gaussian random field) with a prior covariance function that reflects known properties of OPMs. The posterior mean of the map can be interpreted as an optimally smoothed map. Hyperparameters of the model can be chosen by optimizing the marginal likelihood. In addition, the GP also returns a predicted map for any location, and can therefore be used to extend the map to pixels at which no data, or only unreliable data, was obtained.
We also obtain a posterior distribution over maps, from which we can estimate the posterior uncertainty of statistical properties of the maps, such as the pinwheel density. Finally, our probabilistic model of both the signal and the noise can be used for decoding, and for estimating the informational content of the map.
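The "posterior mean as optimally smoothed map" idea can be sketched in one dimension for a single map component (the real OPM is bivariate and two-dimensional; kernel, length scale, and noise level below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)

# 1-D toy version: a smooth "map" observed with per-pixel imaging noise.
x = np.linspace(0, 1, 100)
true_map = np.sin(2 * np.pi * 2 * x)
y = true_map + rng.normal(0, 0.5, size=x.size)

# Squared-exponential prior covariance encoding expected map smoothness.
ell, sigma_n = 0.05, 0.5
K = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / ell ** 2)

# GP posterior mean: optimal linear smoothing given the prior and noise model.
post_mean = K @ np.linalg.solve(K + sigma_n ** 2 * np.eye(x.size), y)
```

Unlike a fixed linear filter, the effective amount of smoothing here follows from the prior covariance and the noise variance, both of which can be set by maximizing the marginal likelihood.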
http://www.kyb.tuebingen.mpg.de
Research Group Bethge
http://www.cosyne.org/c/index.php?title=Cosyne_09
Biologische Kybernetik
MaxPlanckGesellschaft
Salt Lake City, UT, USA
Computational and Systems Neuroscience Meeting (COSYNE 2009)
en
10.3389/conf.neuro.06.2009.03.310
J. Macke
S. Gerwinn
L. White
M. Kaschube
M. Bethge
poster
5843
Bayesian Population Decoding of Spiking Neurons
Frontiers in Systems Neuroscience
2009
3
2009
Conference Abstracts: Computational and Systems Neuroscience
http://www.kyb.tuebingen.mpg.de
Research Group Bethge
http://www.cosyne.org/c/index.php?title=Cosyne_09
Biologische Kybernetik
MaxPlanckGesellschaft
Salt Lake City, UT, USA
Computational and Systems Neuroscience Meeting (COSYNE 2009)
en
10.3389/conf.neuro.06.2009.03.026
S. Gerwinn
J. Macke
M. Bethge
poster
5844
Sensory input statistics and network mechanisms in primate primary visual cortex
Frontiers in Systems Neuroscience
2009
3
2009
Conference Abstracts: Computational and Systems Neuroscience
Understanding the structure of multineuronal firing patterns in ensembles of cortical neurons is a major challenge for systems neuroscience. The dependence of network properties on the statistics of the sensory input can provide important insights into the computations performed by neural ensembles. Here, we study the functional properties of neural populations in the primary visual cortex of awake, behaving macaques by varying visual input statistics in a controlled way. Using arrays of chronically implanted tetrodes, we record simultaneously from up to thirty well-isolated neurons while presenting sets of images with three different correlation structures: spatially uncorrelated white noise (whn), images matching the second-order correlations of natural images (phs), and natural images including higher-order correlations (nat).
We find that groups of six nearby cortical neurons show little redundancy in their firing patterns (represented as binary vectors, 10 ms bins) but rather act almost independently (mean multi-information 0.85 bits/s, range 0.16–1.90 bits/s, mean fraction of marginal entropy 0.34%, N=46). Although network correlations are weak, they are statistically significant. While relatively few groups showed significant redundancies under stimulation with white noise (67.4 ± 3.2%; mean fraction of groups ± S.E.M.), many more did so in the other two conditions (phs: 95.7 ± 0.6%; nat: 89.1 ± 1.4%). Additional higher-order correlations in natural images compared to phase-scrambled images did not increase but rather decreased the redundancy in the cortical representation: network correlations are significantly higher in phs than in nat, as is the number of significantly correlated groups.
Multi-information measures the reduction in entropy due to any form of correlation. Using second-order maximum entropy modeling, we find that a large fraction of the multi-information is accounted for by pairwise correlations (whn: 75.0 ± 3.3%; phs: 82.8 ± 2.1%; nat: 80.8 ± 2.4%; groups with significant redundancy). Importantly, stimulation with natural images containing higher-order correlations only led to a slight increase in the fraction of redundancy due to higher-order correlations in the cortical representation (mean difference 2.26%, p=0.054, sign test).
While our results suggest that population activity in V1 may be modeled well using pairwise correlations only, they leave roughly 20–25% of the multi-information unexplained. Therefore, choosing a particular form of higher-order interactions may improve model quality. Thus, in addition to the independent model, we evaluated the quality of three different models: (a) the second-order maximum entropy model, which minimizes higher-order correlations, (b) a model which assumes that correlations are a product of common inputs (Dichotomized Gaussian), and (c) a mixture model in which correlations are induced by a discrete number of latent states. We find that an independent model is sufficient for the white noise condition but neither for phs nor nat. In contrast, all of the correlation models (a–c) perform similarly well for the conditions with correlated stimuli.
Our results suggest that under natural stimulation redundancies in cortical neurons are relatively weak. Higher-order correlations in natural images do not increase but rather decrease the redundancies in the cortical representation.
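The quantity at the heart of this analysis, multi-information I = Σᵢ H(Xᵢ) − H(X₁,…,Xₙ), can be estimated directly from pattern frequencies for a small group of binary neurons. A minimal sketch on synthetic data (the common-input model, group size, and rates below are illustrative assumptions, not the recorded data):

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(3)

def entropy(counts):
    """Plug-in entropy (bits) from a Counter of observed symbols."""
    p = np.array(list(counts.values()), dtype=float)
    p /= p.sum()
    return -(p * np.log2(p)).sum()

# Toy population: 6 binary neurons with weak common-input correlations.
n_samples, n_neurons = 100000, 6
common = rng.normal(size=(n_samples, 1))
z = 0.3 * common + rng.normal(size=(n_samples, n_neurons))
spikes = (z > 1.0).astype(int)

# Multi-information: reduction in entropy due to any form of correlation.
h_marg = sum(entropy(Counter(spikes[:, i])) for i in range(n_neurons))
h_joint = entropy(Counter(map(tuple, spikes)))
multi_info = h_marg - h_joint
```

Note that the plug-in estimator is biased for small sample sizes; with 6 neurons the 64-pattern joint distribution is well sampled here, but larger groups require bias correction.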
http://www.kyb.tuebingen.mpg.de
Research Group Bethge
http://www.cosyne.org/c/index.php?title=Cosyne_09
Biologische Kybernetik
MaxPlanckGesellschaft
Salt Lake City, UT, USA
Computational and Systems Neuroscience Meeting (COSYNE 2009)
en
10.3389/conf.neuro.06.2009.03.298
P. Berens
J. H. Macke
A. S. Ecker
R. J. Cotton
M. Bethge
A. S. Tolias
poster
MackeOB2008_2
How pairwise correlations affect the redundancy in large populations of neurons
Frontiers in Computational Neuroscience
2008
10
2008
Conference Abstract: Bernstein Symposium 2008
Simultaneously recorded neurons often exhibit correlations in their spiking activity. These correlations shape the statistical structure of the population activity, and can lead to substantial redundancy across neurons. Knowing the amount of redundancy in neural responses is critical for our understanding of the neural code. Here, we study the effect of pairwise correlations on the statistical structure of population activity. We model correlated activity as arising from common Gaussian inputs into simple threshold neurons. In population models with exchangeable correlation structure, one can analytically calculate the distribution of synchronous events across the whole population, and the joint entropy (and thus the redundancy) of the neural responses. We investigate the scaling of the redundancy as the population size is increased, and characterize its phase transitions for increasing correlation strengths. We compare the asymptotic redundancy in our models to the corresponding maximum and minimum entropy models. Although this model must exhibit more redundancy than the maximum entropy model, we find that its joint entropy increases linearly with population size.
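The common-Gaussian-input construction described above can be sketched for the exchangeable case: every neuron thresholds a latent Gaussian that mixes one shared and one private component, and the resulting population shows heavier-tailed distributions of synchronous events than an independent population. The correlation strength, threshold, and population size below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)

def sample_dg(n_samples, n_neurons, rho, thresh=1.0):
    """Exchangeable thresholded-Gaussian (Dichotomized Gaussian) model:
    each neuron fires when its latent input (shared + private part,
    pairwise latent correlation rho) exceeds a common threshold."""
    shared = rng.normal(size=(n_samples, 1))
    private = rng.normal(size=(n_samples, n_neurons))
    latent = np.sqrt(rho) * shared + np.sqrt(1 - rho) * private
    return (latent > thresh).astype(int)

spikes = sample_dg(50000, 100, rho=0.1)

# Distribution of synchronous events: population spike counts have a much
# larger variance than a rate-matched independent (binomial) population.
counts = spikes.sum(axis=1)
indep = rng.binomial(100, spikes.mean(), size=50000)
```

Because the whole population shares a single scalar latent variable, sampling costs are linear in population size, which is what makes the analytical treatment of the exchangeable case tractable.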
http://www.kyb.tuebingen.mpg.de
Research Group Bethge
http://www.frontiersin.org/10.3389/conf.neuro.10.2008.01.086/event_abstract?sname=Bernstein_Symposium_2008
München, Germany
Bernstein Symposium 2008
10.3389/conf.neuro.10.2008.01.086
J. Macke
M. Opper
M. Bethge
poster
MackeBEOTB2008
Modeling populations of spiking neurons with the Dichotomized Gaussian distribution
2008
7
http://www.kyb.tuebingen.mpg.de
Research Group Bethge
Department Logothetis
http://www.theswartzfoundation.org/summermeeting2008.asp
Princeton, NJ, USA
Annual Meeting 2008 of SloanSwartz Centers for Theoretical Neurobiology
J. H. Macke
P. Berens
A. S. Ecker
M. Opper
A. S. Tolias
M. Bethge
poster
5857
Analysis of Pattern Recognition Methods in Classifying BOLD Signals in Monkeys at 7 Tesla
2008
6
67
Pattern recognition methods have shown that fMRI data can reveal significant information about brain activity. For example, in the debate over how object categories are represented in the brain, multivariate analysis has been used to provide evidence of distributed encoding schemes. Many follow-up studies have employed different methods to analyze human fMRI data with varying degrees of success. In this study we compare four popular pattern recognition methods: correlation analysis, support-vector machines (SVM), linear discriminant analysis and Gaussian naïve Bayes (GNB), using data collected at high field (7T) with higher resolution than usual fMRI studies. We investigate prediction performance on single trials and for averages across varying numbers of stimulus presentations. The performance of the various algorithms depends on the nature of the brain activity being categorized: for several tasks, many of the methods work well, whereas for others, no method performs above chance level. An important factor in overall classification performance is careful preprocessing of the data, including dimensionality reduction, voxel selection, and outlier elimination.
http://www.kyb.tuebingen.mpg.de
Department Schölkopf
Department Logothetis
Research Group Bethge
http://www.areadne.org/2008/home.html
Biologische Kybernetik
MaxPlanckGesellschaft
Santorini, Greece
AREADNE 2008: Research in Encoding and Decoding of Neural Ensembles
en
S.-P. Ku
A. Gretton
J. Macke
A. T. Tolias
N. K. Logothetis
poster
5101
Flexible Models for Population Spike Trains
2008
6
48
In order to understand how neural systems perform computations and process sensory information, we need to understand the structure of firing patterns in large populations of neurons. Spike trains recorded from populations of neurons can exhibit substantial pairwise correlations between neurons and rich temporal structure. Thus, efficient methods for generating artificial spike trains with specified correlation structure are essential for the realistic simulation and analysis of neural systems. Here we show how correlated binary spike trains can be modeled by means of a latent multivariate Gaussian model. Sampling from our model is computationally very efficient, and in particular feasible even for large populations of neurons. We show empirically that the spike trains generated with this method have entropy close to the theoretical maximum. They are therefore consistent with specified pairwise correlations without exhibiting systematic higher-order correlations. We compare our model to alternative approaches and discuss its limitations and advantages. In addition, we demonstrate its use for modeling temporal correlations in a neuron recorded in macaque primary visual cortex.
Neural activity is often summarized by discarding the exact timing of spikes and only counting the total number of spikes that a neuron (or population) fires in a given time window. In modeling studies, these spike counts have often been assumed to be Poisson distributed and neurons to be independent. However, correlations between spike counts have been reported in various visual areas. We show how both temporal and inter-neuron correlations shape the structure of spike counts, and how our model can be used to generate spike counts with arbitrary marginal distributions and correlation structure. We demonstrate its capabilities by modeling a population of simultaneously recorded neurons from the primary visual cortex of a macaque, and we show how a model with correlations accounts for the data far better than a model that assumes independence.
http://www.kyb.tuebingen.mpg.de
Department Schölkopf
Research Group Bethge
http://www.areadne.org/2008/home.html
Biologische Kybernetik
MaxPlanckGesellschaft
Santorini, Greece
AREADNE 2008: Research in Encoding and Decoding of Neural Ensembles
en
M. Bethge
J. H. Macke
P. Berens
A. S. Ecker
A. S. Tolias
poster
5100
Pairwise Correlations and Multineuronal Firing Patterns in the Primary Visual Cortex of the Awake, Behaving Macaque
2008
6
46
Understanding the structure of multi-neuronal firing patterns has been a central quest and major challenge for systems neuroscience. In particular, how do pairwise interactions between neurons shape the firing patterns of neuronal ensembles in the cortex? To study this question, we recorded simultaneously from multiple single neurons in the primary visual cortex of an awake, behaving macaque using an array of chronically implanted tetrodes [1]. High-contrast flashed and moving bars were used for stimulation, while the monkey was required to maintain fixation. In a similar vein to recent studies of in vitro preparations [2,3,5], we applied maximum entropy analysis for the first time to the binary spiking patterns of populations of cortical neurons recorded in vivo from the awake macaque. We employed the Dichotomized Gaussian distribution, which can be seen as a close approximation to the pairwise maximum-entropy model for binary data [4]. Surprisingly, we find that even pairs of neurons with nearby receptive fields (receptive field center distance < 0.15°) have only weak correlations between their binary responses computed in bins of 10 ms (median absolute correlation coefficient: 0.014, 0.010–0.019, 95% confidence intervals, N=95 pairs; positive correlations: 0.015, N=59; negative correlations: 0.013, N=36). Accordingly, the distribution of spiking patterns of groups of 10 neurons is described well by a model that assumes independence between individual neurons (Jensen–Shannon divergence: 1.06×10⁻² independent model, 0.96×10⁻² approximate second-order maximum-entropy model [4]; H/H1=0.992). These results suggest that the distribution of firing patterns of small cortical networks in the awake animal is predominantly determined by the mean activity of the participating cells, not by their interactions.
Meaningful computations, however, are performed by neuronal populations much larger than 10 neurons. Therefore, we investigated how weak pairwise correlations affect the firing patterns of artificial populations [4] of up to 1000 cells with the same correlation structure as experimentally measured. We find that in neuronal ensembles of this size, firing patterns with many active or silent neurons occur considerably more often than expected from a fully independent population (e.g. 130 or more out of 1000 neurons are active simultaneously roughly every 300 ms in the correlated model, and only once every 34 seconds in the independent model). These results suggest that the firing patterns of cortical networks comparable in size to several minicolumns exhibit a rich structure, even if most pairs appear relatively independent when studying small subgroups thereof.
http://www.kyb.tuebingen.mpg.de
Department Schölkopf
Research Group Bethge
http://www.areadne.org/2008/home.html
Biologische Kybernetik
MaxPlanckGesellschaft
Santorini, Greece
AREADNE 2008: Research in Encoding and Decoding of Neural Ensembles
en
P. Berens
A. S. Ecker
M. Subramaniyan
J. H. Macke
P. Hauck
M. Bethge
A. S. Tolias
poster
MackeSB2008
The role of stimulus correlations for population decoding in the retina
2008
6
73
http://www.kyb.tuebingen.mpg.de
Research Group Bethge
http://www.areadne.org/2008/home.html
Santorini, Greece
AREADNE 2008: Research in Encoding and Decoding of Neural Ensembles
J. H. Macke
G. Schwartz
M. Berry
poster
4952
The role of stimulus correlations for population decoding in the retina
2008
3
5
172
Given the responses of a large number of retinal ganglion cells, one should be able to construct a decoding algorithm to discriminate different visual stimuli. Despite the inherent noise in the response of the ganglion cell population, everyday visual experience is highly deterministic. We have designed an experiment to study the nature of the population code of the retina in the "low error" regime.
We presented 36 different black and white shapes, each with the same number of black pixels, to the retina of a tiger salamander while recording retinal ganglion cell responses using a multielectrode array.
Each shape was presented over 100 trials for 0.5 s each and trials were randomly interleaved. Spike trains were recorded from 162 ganglion cells in 13 experiments. We removed noise correlations by shuffling trials, as we wanted to focus on the role of correlations induced by the stimulus (signal correlations).
We designed decoding algorithms for this population response in order to detect each target shape against the distracter set of the 35 other shapes. Binary response vectors were constructed using a 100 ms bin following the presentation of each shape. First, we used a simple decoder that assumes that all neurons are independent; this decoder is a linear classifier. A second decoder, which takes into account correlations between neurons, was constructed by fitting Ising models [1] to the population response, using up to 162 neurons for each model.
We also constructed the statistically optimal decoder based on a mixture model, which accounts for signal correlations.
Using populations of many neurons, the optimal and Ising decoders performed considerably better than the "independent" decoder. For certain shapes, the optimal decoder had 100 times fewer false positives than the independent decoder at a 99% hit rate, and, in the median across shapes, the performance enhancement was 8-fold. While the decoder using an Ising model fit to the pairwise correlations did not achieve optimality, it was up to 50 times more accurate than the independent decoder, and 3 times more accurate in the median across shapes.
Some shape discriminations were performed at zero error out of 3500 trials using the optimal and Ising decoders on only a subset of the recorded cells, while none reached this "low error" level using the independent decoder even with all 162 cells (see figure).
We find that discrimination with very low error using large populations requires a decoder that models signal correlations. Linear classifiers were unable to reach the “low error” regime. The Ising model of the population response is successfully applied to groups of up to 162 cells and offers a biologically feasible mechanism by which downstream neurons could account for correlations in their inputs.
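The "independent decoder is a linear classifier" observation can be made concrete: for conditionally independent binary responses, the log-likelihood ratio between two stimulus hypotheses factorizes over neurons and reduces to a weighted sum of the response vector. The sketch below uses made-up per-neuron firing rates, not the recorded data:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical per-neuron firing probabilities for target vs. distracter.
n_cells = 50
p_target = rng.uniform(0.1, 0.9, n_cells)
p_distr = rng.uniform(0.1, 0.9, n_cells)

def log_likelihood_ratio(r, p1, p0):
    """Independent decoder: log P(r|target)/P(r|distracter) factorizes over
    neurons, so the decision rule is linear in the binary response vector r."""
    w = np.log(p1 / p0) - np.log((1 - p1) / (1 - p0))
    b = np.log((1 - p1) / (1 - p0)).sum()
    return r @ w + b

# Simulated independent responses under each hypothesis.
r_t = (rng.random((5000, n_cells)) < p_target).astype(float)
r_d = (rng.random((5000, n_cells)) < p_distr).astype(float)
hit_rate = (log_likelihood_ratio(r_t, p_target, p_distr) > 0).mean()
fa_rate = (log_likelihood_ratio(r_d, p_target, p_distr) > 0).mean()
```

Because the weights `w` are fixed per neuron, any decoder of this form is linear; capturing signal correlations (as the Ising and mixture decoders do) requires terms coupling pairs of neurons, which is exactly what linear classifiers cannot express.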
http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/COSYNE2008Schwartz_4952[0].pdf
Research Group Bethge
http://cosyne.org/c/index.php?title=Cosyne_08
Biologische Kybernetik
MaxPlanckGesellschaft
Salt Lake City, UT, USA
Computational and Systems Neuroscience Meeting (COSYNE 2008)
en
G. Schwartz
J. Macke
M. Berry
poster
4347
3D Reconstruction of Neural Circuits from Serial EM Images
Neuroforum
2007
4
13
Supplement
1195
The neural processing of visual motion is of essential importance for course control. A basic model suggesting a possible mechanism of how such a computation could be implemented in the fly visual system is the so-called "correlation-type motion detector" proposed by Reichardt and Hassenstein in the 1950s. The basic requirement for reconstructing the neural circuit underlying this computation is the availability of electron-microscopic 3D data sets of whole ensembles of neurons constituting the fly visual ganglia. We apply a new technique, "Serial Block Face Scanning Electron Microscopy" (SBFSEM), that allows for automatic sectioning and imaging of biological tissue with a scanning electron microscope [Denk, Horstmann (2004) Serial block-face scanning electron microscopy to reconstruct three-dimensional tissue nanostructure. PLoS Biology 2: 1900–1909]. Image stacks generated with this technology have a resolution sufficient to distinguish different cellular compartments, especially synaptic structures. Consequently, detailed anatomical knowledge of complete neuronal circuits can be obtained. Such an image stack contains several thousand images and is recorded with a minimal voxel size of 25 nm in x and y and 30 nm in the z direction. Consequently, a tissue block of 1 mm³ (the volume of the Calliphora vicina brain) produces several hundred terabytes of data. Therefore, new concepts for managing large data sets and for automated 3D reconstruction algorithms need to be developed. We developed automated image segmentation and 3D reconstruction software which allows precise contour tracing of cell membranes and simultaneously displays the resulting 3D structure. In detail, the software contains two standalone packages, Neuron2D and Neuron3D, both offering an easy-to-operate graphical user interface.
Neuron2D provides the following image processing functions:
• Image Viewer: displays image stacks in single or movie mode and optionally calculates the intensity distribution of each image.
• Image Preprocessing: filters image stacks. Implemented filters are a 2D Gaussian and a non-linear diffusion filter. The filtering step enhances the contrast between contour lines and image background, leading to an improved signal-to-noise ratio which further improves the detection of membrane structures.
• Image Segmentation: the implemented algorithm extracts contour lines from the preceding image and automatically traces the contour lines in the following images (z direction), taking into account the previous image segmentation. In addition, manual interaction is possible.
To visualize the 3D structure of neuronal circuits, the additional software Neuron3D was developed. The reconstruction of neuronal surfaces from the contour lines obtained in Neuron2D is implemented as a graph-theoretic approach. The reconstructed anatomical data can further serve as a basis for computational models of neuronal circuits in the fly visual system.
http://www.kyb.tuebingen.mpg.de
Department Schölkopf
Research Group Bethge
http://nwg.glia.mdcberlin.de/media/pdf/conference/ProceedingsGoettingen2007.pdf
Biologische Kybernetik
MaxPlanckGesellschaft
Göttingen, Germany
7th Meeting of the German Neuroscience Society, 31st Göttingen Neurobiology Conference
en
N. Maack
C. Kapfer
J. H. Macke
B. Schölkopf
W. Denk
A. Borst
poster
4345
Identifying temporal population codes in the retina using canonical correlation analysis
Neuroforum
2007
4
13
Supplement
359
Right from the first synapse in the retina, the visual information gets distributed across several parallel channels with different temporal filtering properties (Wässle, 2004). Yet, the prevalent system identification tool for characterizing neural responses, the spike-triggered average, only allows one to investigate the individual neural responses independently of each other. Here, we present a novel data analysis tool for the identification of temporal population codes based on canonical correlation analysis (Hotelling, 1936). Canonical correlation analysis allows one to find 'population receptive fields' (PRFs) which are maximally correlated with the temporal response of the entire neural population. The method is a convex optimization technique which essentially solves an eigenvalue problem and is not prone to local minima.
We apply the method to simultaneous recordings from rabbit retinal ganglion cells in a whole-mount preparation (Zeck et al., 2005). The cells respond to a 16-by-16-pixel m-sequence stimulus presented at a frame rate of 1/(20 ms). The response of 27 ganglion cells is correlated with each input frame in an interval between zero and 200 ms relative to the stimulus. The 200 ms response period is binned into 14 equal-sized bins. As shown in the figure, we obtain six predictive population receptive fields (left column), each of which gives rise to a different population response (right column). The x-axis of the color-coded images used to describe the population response kernels (right column) corresponds to the index of the 27 different neurons, while the y-axis indicates time relative to the stimulus from 0 (top) to 200 ms (bottom). The six population receptive fields not only provide a more concise description of the population response but can also be estimated much more reliably than the receptive fields of individual neurons.
In conclusion, we suggest characterizing retinal ganglion cell responses in terms of population receptive fields, rather than discussing stimulus–neuron and neuron–neuron dependencies separately.
http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/TS242C_4345[0].pdf
Department Schölkopf
Research Group Bethge
http://nwg.glia.mdcberlin.de/media/pdf/conference/ProceedingsGoettingen2007.pdf
Biologische Kybernetik
MaxPlanckGesellschaft
Göttingen, Germany
7th Meeting of the German Neuroscience Society, 31st Göttingen Neurobiology Conference
en
M. Bethge
J. H. Macke
S. Gerwinn
G. Zeck
poster
4265
Implicit Wiener Series for Estimating Nonlinear Receptive Fields
Neuroforum
2007
4
13
Supplement
1199
The representation of the nonlinear response properties of a neuron by a Wiener series expansion has enjoyed a certain popularity in the past, but its application has been limited to rather low-dimensional and weakly nonlinear systems due to the exponential growth of the number of terms that have to be estimated. A recently developed estimation method [1] utilizes the kernel techniques widely used in the machine learning community to implicitly represent the Wiener series as an element of an abstract dot-product space. In contrast to the classical estimation methods for the Wiener series, the estimation complexity of the implicit representation is linear in the input dimensionality and independent of the degree of nonlinearity.
From the neural system identification point of view, the proposed estimation method has several advantages:
1. Due to the linear dependence of the estimation complexity on input dimensionality, system identification can also be done for systems acting on high-dimensional inputs such as images or video sequences.
2. Compared to classical cross-correlation techniques (such as spike-triggered average or covariance estimates), similar accuracies can be achieved with a considerably smaller amount of data.
3. The new technique does not need white noise as input, but works for arbitrary classes of input signals such as, e.g., natural image patches.
4. Regularisation concepts from machine learning can be used to identify systems with noise-contaminated output signals.
We present an application of the implicit Wiener series to find the low-dimensional stimulus subspace which accounts for most of the neuron's activity. We approximate the second-order term of a full Wiener series model with a set of parallel cascades consisting of a linear receptive field and a static nonlinearity. This type of approximation is known as a reduced-set technique in machine learning. We compare our results on simulated and physiological datasets to existing identification techniques in terms of prediction performance and accuracy of the obtained subspaces.
http://www.kyb.tuebingen.mpg.de
Department Schölkopf
Research Group Bethge
http://nwg.glia.mdcberlin.de/media/pdf/conference/ProceedingsGoettingen2007.pdf
Biologische Kybernetik
MaxPlanckGesellschaft
Göttingen, Germany
7th Meeting of the German Neuroscience Society, 31st Göttingen Neurobiology Conference
en
M. O. Franz
J. H. Macke
A. Saleem
S. R. Schultz
poster
4668
Estimating Population Receptive Fields in Space and Time
2007
2
44
Right from the first synapse in the retina, visual information gets distributed
across several parallel channels with different temporal filtering properties.
Yet, commonly used system identification tools for characterizing
neural responses, such as the spike-triggered average, only allow one to
investigate the individual neural responses independently of each other.
Conversely, many population coding models of neurons and correlations
between neurons concentrate on the encoding of a single-variate stimulus.
We seek to identify the features of the visual stimulus that are encoded in
the temporal response of an ensemble of neurons, and the corresponding
spike patterns that indicate the presence of these features.
We present a novel data analysis tool for the identification of such temporal
population codes based on canonical correlation analysis (Hotelling,
1936). The “population receptive fields” (PRFs) are defined to be those
dimensions of the stimulus space that are maximally correlated with the
temporal response of the entire neural population, irrespective of whether
the stimulus features are encoded by the responses of single neurons or by
patterns of spikes across neurons or time. These dimensions are identified
by canonical correlation analysis, a convex optimization technique which essentially solves an eigenvalue
problem and is not prone to local minima.
Each receptive field can be represented by the weighted sum of a small number of functions that are separable
in space-time. Therefore, non-separable receptive fields can be estimated more efficiently than with spike-triggered
techniques, which makes our method advantageous even for the estimation of single-cell receptive
fields.
The method is demonstrated by applying it to data from multi-electrode recordings from rabbit retinal ganglion
cells in a whole-mount preparation (Zeck et al., 2005). The figure displays the first 6 PRFs of a population
of 27 cells from one such experiment. The recovered stimulus features look qualitatively different
from the receptive fields of single retinal ganglion cells. In addition, we show how the model can be extended
to capture nonlinear stimulus-response relationships and to test different coding mechanisms by the
use of kernel canonical correlation analysis. In conclusion, we suggest characterizing responses of ensembles
of neurons in terms of PRFs, rather than discussing stimulus-neuron and neuron-neuron dependencies
separately.
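The core computation of the abstract — finding the stimulus dimensions maximally correlated with the population response via canonical correlation analysis — can be sketched as follows. This is a simplified illustration on simulated data, not the authors' code; the whitening-plus-SVD formulation used here is one standard way to solve the CCA eigenvalue problem:

```python
import numpy as np

rng = np.random.default_rng(1)
T, ds, dr = 500, 10, 6          # time bins, stimulus dim, number of neurons

# Simulated data: population response is a noisy linear map of the stimulus
S = rng.standard_normal((T, ds))
W = rng.standard_normal((ds, dr)) * 0.5
R = S @ W + 0.1 * rng.standard_normal((T, dr))

def cca(X, Y, k):
    """Top-k canonical directions via SVD of the whitened cross-covariance."""
    X = X - X.mean(0); Y = Y - Y.mean(0)
    Cxx = X.T @ X / len(X); Cyy = Y.T @ Y / len(Y); Cxy = X.T @ Y / len(X)
    def isqrt(C):
        # inverse matrix square root (assumes full-rank covariance)
        w, U = np.linalg.eigh(C)
        return U @ np.diag(1.0 / np.sqrt(w)) @ U.T
    Wx, Wy = isqrt(Cxx), isqrt(Cyy)
    U, s, Vt = np.linalg.svd(Wx @ Cxy @ Wy)
    # Stimulus-side directions ("population receptive fields") + correlations
    return Wx @ U[:, :k], s[:k]

prfs, corrs = cca(S, R, k=3)
```

Because the problem reduces to an SVD (equivalently, an eigenvalue problem), the solution is global — there are no local minima, as the abstract notes.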
http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/Cosyne2007I37_[0].pdf
http://www.kyb.tuebingen.mpg.de
http://www.kyb.tuebingen.mpg.de
Department Schölkopf
Research Group Bethge
http://www.cosyne.org/wiki/Cosyne_07_Program
Biologische Kybernetik
Max-Planck-Gesellschaft
Salt Lake City, UT, USA
Computational and Systems Neuroscience Meeting (COSYNE 2007)
en
jakobJHMacke
gzeckGZeck
mbethgeMBethge
poster
4358
Nonlinear Receptive Field Analysis: Making Kernel Methods Interpretable
2007
2
16
Identification of stimulus-response functions is a central problem in systems neuroscience and related areas.
Prominent examples are the estimation of receptive fields and classification images [1]. In most cases, the
relationship between a high-dimensional input and the system output is modeled by a linear (first-order) or
quadratic (second-order) model. Models with third- or higher-order dependencies are seldom used, since
both parameter estimation and model interpretation can become very difficult.
Recently, Wu and Gallant [3] proposed the use of kernel methods, which have become a standard tool in
machine learning during the past decade [2]. Kernel methods can capture relationships of any order, while
solving the parameter estimation problem efficiently. In short, the stimuli are mapped into a high-dimensional
feature space, where a standard linear method, such as linear regression or Fisher discriminant, is applied.
The kernel function allows for doing this implicitly, with all computations carried out in stimulus space.
As a consequence, the resulting model is nonlinear, but many desirable properties of linear methods are
retained. For example, the estimation problem has no local minima, which is in contrast to other nonlinear
approaches, such as neural networks [4].
Unfortunately, although kernel methods excel at modeling complex functions, the question of how to interpret
the resulting models remains. In particular, it is not clear how receptive fields should be defined in
this context, or how they can be visualized. To remedy this, we propose the following definition: noting
that the model is linear in feature space, we define a nonlinear receptive field as a stimulus whose image in
feature space maximizes the dot product with the learned model. This can be seen as a generalization of the
receptive field of a linear filter: if the feature map is the identity, the kernel method becomes linear, and our
receptive field definition coincides with that of a linear filter. If it is nonlinear, we numerically invert the
feature space mapping to recover the receptive field in stimulus space.
Experimental results show that receptive fields of simulated visual neurons, using natural stimuli, are correctly
identified. Moreover, we use this technique to compute nonlinear receptive fields of the human fixation
mechanism during free-viewing of natural images.
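One way to make the receptive-field definition above concrete: fit a kernel model (here kernel ridge regression with a Gaussian kernel, as a stand-in for the methods the abstract discusses), then recover a "nonlinear receptive field" by gradient ascent on the learned function under a norm constraint — numerically inverting the feature map. A hypothetical sketch on simulated data:

```python
import numpy as np

rng = np.random.default_rng(2)
d, n = 8, 300
w_true = rng.standard_normal(d)
X = rng.standard_normal((n, d))
y = np.tanh(X @ w_true)            # simulated neuron, nonlinear in one direction

# Kernel ridge regression with a Gaussian (RBF) kernel
gamma, lam = 0.05, 1e-3
def K(A, B):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)
alpha = np.linalg.solve(K(X, X) + lam * np.eye(n), y)

# "Nonlinear receptive field": the unit-norm stimulus maximizing the learned
# model, found by projected gradient ascent in stimulus space
x = rng.standard_normal(d)
x /= np.linalg.norm(x)
for _ in range(200):
    k_x = K(x[None, :], X)[0]
    grad = -2 * gamma * ((x[None, :] - X) * (alpha * k_x)[:, None]).sum(0)
    x += 0.5 * grad
    x /= np.linalg.norm(x)          # stay on the unit sphere

# Alignment with the direction the simulated neuron is actually tuned to
cos = abs(x @ w_true) / np.linalg.norm(w_true)
```

If the feature map were the identity, this procedure would return the linear filter itself, matching the generalization argument in the abstract.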
http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/Cosyne2007I9_4358[0].pdf
http://www.kyb.tuebingen.mpg.de
http://www.kyb.tuebingen.mpg.de
Department Schölkopf
Research Group Bethge
http://www.cosyne.org/wiki/Cosyne_07_Program
Biologische Kybernetik
Max-Planck-Gesellschaft
Salt Lake City, UT, USA
Computational and Systems Neuroscience Meeting (COSYNE 2007)
en
kienzleWKienzle
jakobJHMacke
felixFAWichmann
bsBSchölkopf
mofMOFranz
conference
Macke2017
Correlations and signatures of criticality in neural population models
2017
1
23
Large-scale recording methods make it possible to measure the statistics of neural population activity and to gain insights into the principles that govern the collective activity of neural ensembles. One hypothesis that has emerged from this approach is that neural populations are poised at a thermodynamic critical point. Support for this notion has come from a recent series of studies which identified signatures of criticality in the statistics of neural activity recorded from populations of retinal ganglion cells, and hypothesized that the retina might be optimised to be operating at this critical point.
What mechanisms can explain these observations? Do they require the neural system to be fine-tuned to be poised at the critical point, or do they robustly emerge in generic circuits? We here show that these effects arise in a simple, canonical model of retinal population activity. They robustly appear across a range of parameters, and can be understood analytically in a simple model. These observations pose the question of whether signatures of criticality are indicative of an optimised coding strategy, or whether alternative theories are more promising candidates for understanding sensory coding.
http://www.kyb.tuebingen.mpg.de
http://www.kyb.tuebingen.mpg.de
http://www.kyb.tuebingen.mpg.de
Research Group Macke
Invited Lecture
http://www.bccntuebingen.de/events/doublefeatureeventbernsteincrckickoffsymposium.html
Tübingen, Germany
Double Feature Workshop: Bernstein Symposium & KickOff Symposium
jakobJMacke
conference
Macke2015_2
Correlations and signatures of criticality in neural population models
2015
12
11
Large-scale recording methods make it possible to measure the statistics of neural population activity, and thereby to gain insights into the principles that govern the collective activity of neural ensembles. One hypothesis that has emerged from this approach is that neural populations are poised at a ‘thermodynamic critical point’, and that this has important functional consequences
(Tkacik et al., 2014). Support for this hypothesis has come from studies that computed the specific heat, a measure of global population statistics, for groups of neurons subsampled from population recordings. These studies have found two effects which, in physical systems, indicate a critical point: First, the specific heat diverges with population size N. Second, when manipulating population
statistics by introducing a 'temperature' in analogy to statistical mechanics, the maximum heat moves towards unit
temperature for large populations.
What mechanisms can explain these observations? We show that both effects arise in a simple simulation of retinal population activity. They robustly appear across a range of parameters including biologically implausible ones, and can be understood analytically in simple models. The specific heat grows with N whenever the (average) correlation is independent of N, which is always true when uniformly subsampling a large, correlated population. For weakly
correlated populations, the rate of divergence of the specific heat is proportional to the correlation strength. Thus, if retinal population codes were optimized to
maximize specific heat, then this would predict that they seek to increase correlations. This is incongruent with theories of efficient coding, which make the opposite prediction. We find criticality in a simple and parsimonious model of retinal processing, without the need for fine-tuning or adaptation. This suggests that signatures of criticality might not require an optimized coding strategy, but rather arise as a consequence of subsampling a stimulus-driven neural population (Aitchison et al., 2014).
http://www.kyb.tuebingen.mpg.de
http://www.kyb.tuebingen.mpg.de
http://www.kyb.tuebingen.mpg.de
Research Group Macke
Invited Lecture
https://netadis.wordpress.com/nipsworkshop2015/
Montréal, Canada
NIPS 2015 Workshop: Modelling and Inference for Dynamics on Complex Interaction Networks: Joining Up Machine Learning and Statistical Physics
jakobJMacke
conference
Macke2015_4
Correlations and signatures of criticality in neural population models
2015
10
29
Large-scale recording methods make it possible to measure the statistics of neural population activity and to gain insights into the principles that govern the collective activity of neural ensembles. One hypothesis that has emerged from this approach is that neural populations are poised at a thermodynamic critical point. Support for this notion has come from a recent series of studies which identified signatures of criticality (such as a divergence of the specific heat with population size) in the statistics of neural activity recorded from populations of retinal ganglion cells, and hypothesized that the retina might be optimised to be operating at this critical point.
What mechanisms can explain these observations? Do they require the neural system to be fine-tuned to be poised at the critical point, or do they robustly emerge in generic circuits? How are signatures of criticality related to the structure of correlations within the neural population? We here show that these effects arise in a simple simulation of retinal population activity. They robustly appear across a range of parameters including biologically implausible ones, and can be understood analytically in a simple model. The specific heat diverges linearly with population size n whenever the (average) correlation is independent of n; in particular, this is generally true when subsampling a large, correlated population. These observations pose the question of whether signatures of criticality are indicative of an optimised coding strategy, or whether they arise as a byproduct of subsampling a neural population with correlations.
http://www.kyb.tuebingen.mpg.de
http://www.kyb.tuebingen.mpg.de
http://www.kyb.tuebingen.mpg.de
Research Group Macke
Invited Lecture
http://www.ieclnc.ens.fr/groupforneuraltheory/events/gntiecnewideasintheoretical/pastevents/
Paris, France
Institut d'Etudes de la Cognition (IEC) at the Ecole Normale Supérieure: Group for Neural Theory
jakobJMacke
conference
NonnenmacherBBBM2015
Correlations and signatures of criticality in neural population models
2015
9
16
2728
Large-scale recording methods make it possible to measure the statistics of neural population activity, and thereby to gain insights into the principles that govern the
collective activity of neural ensembles. One hypothesis that has emerged from this approach is that neural populations are poised at a thermodynamic critical point [1], and that this may have important functional consequences. Support for this hypothesis has come from studies [2,3] that identified signatures of criticality (such as a divergence of the specific heat with population size) in the statistics of neural activity recorded from populations of retinal ganglion cells. What mechanisms can explain these observations? Do they require the neural system to be fine-tuned to be poised at the critical point, or do they robustly emerge in generic circuits [4,5,6]?
We show that indicators for thermodynamic criticality arise in a simple simulation of retinal population activity, without the need for fine-tuning or adaptation. Using simple statistical models [7], we demonstrate that the peak specific heat grows with population size whenever the (average) correlation is independent of the number of
neurons. The latter is always true when uniformly subsampling a large, correlated population. For weakly correlated populations, the rate of divergence of the specific heat is proportional to the correlation strength. This predicts that neural populations would be strongly correlated if they were optimized to maximize specific heat, which is in contrast to theories of efficient coding that make the opposite prediction. Our findings suggest that indicators for thermodynamic criticality might not require an optimized coding strategy, but rather arise as a consequence of subsampling a stimulus-driven neural population.
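The claimed growth of the specific heat under subsampling of a correlated population can be illustrated in an exchangeable toy model, taking the specific heat to be the variance of pattern log-probabilities per neuron. The sketch below compares an independent population (binomial spike count) to a common-input, correlated one (beta-binomial count) with the same mean rate; the particular model choices are illustrative assumptions, not the authors' simulation:

```python
import numpy as np
from scipy.stats import binom, betabinom
from scipy.special import gammaln

def log_n_choose_k(N, k):
    return gammaln(N + 1) - gammaln(k + 1) - gammaln(N - k + 1)

def specific_heat(N, count_pmf):
    """Var[log p(pattern)] / N for an exchangeable population, where every
    pattern with k spikes has probability P(K = k) / C(N, k)."""
    k = np.arange(N + 1)
    pk = count_pmf(k, N)
    logp = np.log(pk) - log_n_choose_k(N, k)   # log-probability of one pattern
    mean = np.sum(pk * logp)
    return np.sum(pk * (logp - mean) ** 2) / N

Ns = [20, 40, 80, 160]
# Independent neurons, rate 0.1: binomial spike-count distribution
c_ind = [specific_heat(N, lambda k, N: binom.pmf(k, N, 0.1)) for N in Ns]
# Common-input (correlated) population, same mean rate: beta-binomial count
c_cor = [specific_heat(N, lambda k, N: betabinom.pmf(k, N, 2.0, 18.0)) for N in Ns]
```

For the independent population the per-neuron specific heat is constant in N, while for the correlated (overdispersed) population it grows roughly linearly with N — the divergence discussed above, obtained here without any fine-tuning.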
http://www.kyb.tuebingen.mpg.de
http://www.kyb.tuebingen.mpg.de
http://www.kyb.tuebingen.mpg.de
Research Group Macke
Research Group Bethge
Invited Lecture
http://www.nncn.de/de/bernsteinconference/2015/program
Heidelberg, Germany
Bernstein Conference 2015
10.12751/nncn.bc2015.0013
mnonnenmacherMNonnenmacher
CBehrens
berensPBerens
mbethgeMBethge
jakobJMacke
conference
Macke2016
Estimating state and parameters in Gaussian state-space models with point-process observations
2015
9
14
http://www.kyb.tuebingen.mpg.de
http://www.kyb.tuebingen.mpg.de
http://www.kyb.tuebingen.mpg.de
Invited Lecture
http://www.nncn.de/de/bernsteinconference/2015/satelliteworkshops/estimatingparametersandunobservedstatevariablesfromneuraldata
Heidelberg, Germany
Bernstein Conference 2015 Satellite Workshop "Estimating parameters and unobserved state variables from neural data"
jakobJMacke
conference
Macke2015_3
Correlations, criticality and common input
2015
4
2
Large-scale recording methods make it possible to measure the statistics of neural population activity, and to describe their joint statistics by fitting statistical models to population spike train data. What can the
statistical structure of neural population data tell us about the underlying mechanisms, as well as about the
principles that govern the collective activity and coding properties of neural ensembles? One intriguing hypothesis that has emerged from this approach is that the statistics of neural populations resemble those of physical systems
which are poised at a thermodynamic critical point. Support for this hypothesis has come from studies that computed the 'specific heat' (a measure of global population statistics which is effectively the normalized variance of log-probabilities of spike patterns). These studies have found two effects which, in physical systems, indicate a critical point: First, the specific heat diverges
with population size N. Second, when manipulating population statistics by introducing a 'temperature' in
analogy to statistical mechanics, the maximum heat moves towards unit temperature for large populations. What mechanisms can explain these observations? Do they require the neural system to be fine-tuned to be poised at the critical point, or do they robustly emerge in generic circuits? How are signatures of criticality related to the structure of correlations within the neural population? In this talk, I will address these questions, give some answers, and pose more questions.
http://www.kyb.tuebingen.mpg.de
http://www.kyb.tuebingen.mpg.de
http://www.kyb.tuebingen.mpg.de
Research Group Macke
Invited Lecture
http://lcn1.epfl.ch/files/content/sites/lcn/files/2015%20Seminars/SCNS%2002%2004%2015%20%20Macke.pdf
Zürich, Switzerland
University of Zurich: Swiss Computational Neuroscience Seminars
jakobJMacke
conference
Macke2015
Dissecting choice-probabilities in V2 neurons using serial dependence
2015
3
9
http://www.kyb.tuebingen.mpg.de
http://www.kyb.tuebingen.mpg.de
http://www.kyb.tuebingen.mpg.de
Research Group Macke
Invited Lecture
http://www.cosyne.org/c/index.php?title=Cosyne2015_Program
Snowbird, UT, USA
COSYNE 2015 Workshops
jakobJMacke
conference
Macke2014_2
Statistical methods for characterizing cortical population activity
2014
5
http://www.kyb.tuebingen.mpg.de
http://www.kyb.tuebingen.mpg.de
http://www.kyb.tuebingen.mpg.de
Research Group Macke
Invited Lecture
http://2014.occamos.de/videos.html
Osnabrück, Germany
Osnabrück Computational Cognition Alliance Meeting on "The Brain as a Probabilistic Inference Engine" (OCCAM 2014)
jakobJHMacke
conference
Macke2013_3
Inferring neural population dynamics from multiple partial measurements of the same circuit
2013
10
25
http://www.kyb.tuebingen.mpg.de
http://www.kyb.tuebingen.mpg.de
http://www.kyb.tuebingen.mpg.de
Research Group Macke
Invited Lecture
http://itp.unifrankfurt.de/~gros/Seminar/groupSeminar.html
Frankfurt a.M., Germany
Group Seminar C. Gros "Complex and Cognitive Systems": Max-Planck-Institute for Brain Research
jakobJMacke
conference
Macke2013_2
Characterizing the dynamics of large neural populations
2013
9
9
http://www.kyb.tuebingen.mpg.de
http://www.kyb.tuebingen.mpg.de
http://www.kyb.tuebingen.mpg.de
Research Group Macke
Invited Lecture
https://www.helmholtzmuenchen.de/icb/institute/icbseminar/pastseminars/index.html
München, Germany
ICB Institute of Computational Biology: Helmholtz Zentrum
jakobJMacke
conference
Macke2013
B8: Statistical Modelling of Psychophysical Data
Perception
2013
8
25
42
ECVP Abstract Supplement
4
In this tutorial, we will discuss some statistical techniques that one can use in order to obtain a more accurate statistical model of the relationship between experimental variables and psychophysical performance. We will use models which include the effect of additional, non-stimulus determinants of behaviour, and which therefore give us additional flexibility in analysing psychophysical data. For example, these models will allow us to estimate the effect of experimental history on the responses of an observer, and to automatically correct for errors which can be attributed to such history effects. By reanalysing a large dataset of low-level psychophysical data, we will show that the resulting models have vastly superior statistical goodness of fit, give more accurate estimates of psychophysical functions and allow us to detect and capture interesting temporal structure in psychophysical data. In summary, the approach presented in this tutorial not only yields more accurate models of the data, but also has the potential to reveal unexpected structure in the kind of data that every visual scientist has in abundance: classical psychophysical data with binary responses.
http://www.kyb.tuebingen.mpg.de
http://www.kyb.tuebingen.mpg.de
http://www.kyb.tuebingen.mpg.de
Research Group Macke
Invited Lecture
http://pec.sagepub.com/content/42/1_suppl.toc
Bremen, Germany
36th European Conference on Visual Perception (ECVP 2013)
10.1177/03010066130420S101
jakobJMacke
conference
GerwinnMB2010
Toolbox for inference in generalized linear models of spiking neurons
Frontiers in Computational Neuroscience
2010
10
2010
Conference Abstract: Bernstein Conference on Computational Neuroscience
Generalized linear models are increasingly used for analyzing neural data, and to characterize the stimulus dependence and functional connectivity of both single neurons and neural populations. One possibility to extend the computational complexity of these models is to expand the stimulus, and possibly the representation of the spiking history, into high-dimensional feature spaces.
When the dimension of the parameter space is large, strong regularization has to be used in order to fit GLMs to datasets of realistic size without overfitting. By imposing properly chosen priors over parameters, Bayesian inference provides an effective and principled approach for achieving regularization.
In this work, we present a MATLAB toolbox which provides efficient inference methods for parameter fitting. This includes standard maximum a posteriori estimation for Gaussian and Laplacian priors, which are also sometimes referred to as L2- and L1-regularization, respectively. Furthermore, it implements approximate inference techniques for both prior distributions based on the expectation propagation algorithm [1].
In order to model refractory properties and functional couplings between neurons, the spiking history within a population is often represented as responses to a set of predefined basis functions. Most of the basis function sets used so far are non-orthogonal. Commonly, priors are specified without taking the properties of the basis functions into account (uncorrelated Gaussian, independent Laplace). However, if basis functions overlap, the coefficients are correlated. As an example application of this toolbox, we analyze the effect of independent prior distributions when the set of basis functions is non-orthogonal and compare the performance to the orthogonal setting.
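The kind of MAP estimation such a toolbox provides can be sketched in a few lines for the Gaussian-prior (L2-regularized) case: for a Poisson GLM with exponential nonlinearity, the log-posterior is concave, so Newton's method converges to the unique optimum. A minimal Python illustration (the toolbox itself is MATLAB; all names and settings here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
n, d = 400, 12
X = rng.standard_normal((n, d))          # design matrix (stimulus features)
w_true = rng.standard_normal(d) * 0.3
y = rng.poisson(np.exp(X @ w_true))      # spike counts from a Poisson GLM

# MAP estimation under a Gaussian prior w ~ N(0, I / lam): this is
# L2-regularized Poisson regression, fit here by Newton's method.
lam = 1.0
w = np.zeros(d)
for _ in range(50):
    rate = np.exp(X @ w)
    grad = X.T @ (y - rate) - lam * w                  # d log-posterior / dw
    hess = -X.T @ (rate[:, None] * X) - lam * np.eye(d)
    w = w - np.linalg.solve(hess, grad)                # Newton step
```

Swapping the Gaussian penalty `lam * w` for a Laplacian one would give the L1-regularized variant the abstract mentions, though that case needs a solver that handles the non-smooth penalty.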
http://www.kyb.tuebingen.mpg.de
http://www.kyb.tuebingen.mpg.de
http://www.kyb.tuebingen.mpg.de
Research Group Bethge
Abstract Talk
http://www.frontiersin.org/10.3389/conf.fncom.2010.51.00091/event_abstract?sname=Bernstein_Conference_on_Computational_Neuroscience_1
Berlin, Germany
Bernstein Conference on Computational Neuroscience (BCCN 2010)
10.3389/conf.fncom.2010.51.00091
sgerwinnSGerwinn
jakobJHMacke
mbethgeMBethge
conference
HafnerGMB2010
Implications of correlated neuronal noise in decision making circuits for physiology and behavior
Frontiers in Neuroscience
2010
2
Conference Abstract: Computational and Systems Neuroscience 2010
Understanding how the activity of sensory neurons contributes to perceptual decision making is one of the major questions in neuroscience. In the current standard model, the output of opposing pools of noisy, correlated sensory neurons is integrated by downstream neurons whose activity elicits a decision-dependent behavior [1][2]. The predictions of the standard model for empirical measurements like choice probability (CP), psychophysical kernel (PK) and reaction time distribution crucially depend on the spatial and temporal correlations within the pools of sensory neurons. This dependency has so far only been investigated numerically and for time-invariant correlations and variances. However, it has recently been shown that the noise variance undergoes significant changes over the course of the stimulus presentation [3]. The same is true for interneuronal correlations, which have been shown to change with task and attentional state [4][5]. In the first part of our work we compute analytically the time course of CPs and PKs in the presence of arbitrary noise correlations and variances for the case of non-leaky integration and Gaussian noise. This allows general insights and is especially needed in the light of the experimental transition from single-cell to multi-cell recordings. We then simulate the implications of realistic noise in several variants of the standard model (leaky and non-leaky integration, integration over the entire stimulus presentation or until a bound, with and without urgency signal) and compare them to physiological data. We find that in the case of non-leaky integration over the entire stimulus duration, the PK only depends on the overall level of noise variance, not its time course. That means that the PK remains constant regardless of the temporal changes in the noise.
This finding supports an earlier conclusion that an observed decreasing PK suggests that the brain is not integrating over the entire stimulus duration but only until it has accumulated sufficient evidence, even in the case of no urgency [6]. The time course of the CP, on the other hand, strongly depends on the time course of the noise variances and on the temporal and interneuronal correlations. If noise variance or interneuronal correlation increases, CPs increase as well. This dissociation of PK and CP allows an alternative solution to the puzzle recently posed by [7] in a bottom-up framework, by combining integration to a bound with an increase in noise variance/correlation. In addition, we derive how the distribution of reaction times depends on noise variance and correlation, further constraining the model using empirical observations.
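The link between interneuronal correlations and choice probability can be illustrated with a minimal simulation of the standard model: two correlated sensory pools, non-leaky integration of the pooled difference, and CP computed as the area under the ROC curve for a single neuron's trial-averaged rate. All parameter values below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)
trials, T, npool = 2000, 50, 20
rho = 0.2                                  # interneuronal noise correlation

# Equicorrelated Gaussian noise within each sensory pool
C = rho * np.ones((npool, npool)) + (1 - rho) * np.eye(npool)
L = np.linalg.cholesky(C)

def pool_activity():
    # (trials, T, npool) noise with correlation rho across neurons
    return rng.standard_normal((trials, T, npool)) @ L.T

a, b = pool_activity(), pool_activity()    # two opposing pools, zero signal
evidence = a.mean(-1) - b.mean(-1)         # momentary pooled evidence
decision = evidence.sum(1) > 0             # non-leaky integration, sign readout

# Choice probability of one neuron in pool a: area under the ROC curve
# comparing its trial-averaged rate on "preferred" vs "null" choices
r = a[:, :, 0].mean(1)
cp = (r[decision][:, None] > r[~decision][None, :]).mean()
```

Even with zero signal, the shared noise couples each neuron to the pooled decision variable, pushing the CP above 0.5; raising `rho` raises the CP, as the abstract describes.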
http://www.kyb.tuebingen.mpg.de
http://www.kyb.tuebingen.mpg.de
http://www.kyb.tuebingen.mpg.de
Research Group Bethge
Department Schölkopf
Abstract Talk
http://www.frontiersin.org/10.3389/conf.fnins.2010.03.00023/event_abstract
Salt Lake City, UT, USA
Computational and Systems Neuroscience Meeting (COSYNE 2010)
10.3389/conf.fnins.2010.03.00023
rhaefnerRHaefner
sgerwinnSGerwinn
jakobJMacke
mbethgeMBethge
conference
Macke2009
Modelling correlated populations: Redundancies, spike counts and the effect of common input
2009
7
http://www.kyb.tuebingen.mpg.de
http://www.kyb.tuebingen.mpg.de
http://www.kyb.tuebingen.mpg.de
Research Group Bethge
Invited Lecture
http://www.cin.unituebingen.de/newsevents/browseallevents/detail/view/338/page/3/conferencecomputationalneurosciencemeeting2009.html
Tübingen, Germany
Computational Neuroscience Meeting 2009
jakobJMacke
conference
MackeOB2008
How pairwise correlations affect the redundancy in large populations of neurons
Frontiers in Computational Neuroscience
2008
10
2008
Conference Abstract: Bernstein Symposium 2008
http://www.kyb.tuebingen.mpg.de
http://www.kyb.tuebingen.mpg.de
http://www.kyb.tuebingen.mpg.de
Research Group Bethge
Abstract Talk
http://www.frontiersin.org/community/AbstractDetails.aspx?ABS_DOI=10.3389/conf.neuro.10.2008.01.086&eid=108&sname=Bernstein_Symposium_2008
München, Germany
Bernstein Symposium 2008
10.3389/conf.neuro.10.2008.01.086
jakobJHMacke
MOpper
mbethgeMBethge
conference
KuGML2008
Pattern recognition methods in classifying fMRI data
2008
10
11
Pattern recognition methods have shown that fMRI data can reveal significant information about brain activity. For example, in the debate about how object categories are represented in the brain, multivariate analysis has been used to provide evidence of a distributed encoding scheme. Many follow-up studies have employed different methods to analyze human fMRI data with varying degrees of success. In this presentation I would like to discuss and compare four popular pattern recognition methods: correlation analysis,
support-vector machines (SVM), linear discriminant analysis and Gaussian naive Bayes (GNB), using data collected at high field (7T) with higher resolution than usual fMRI
studies. We investigate prediction performance on single trials and for averages across varying numbers of stimulus presentations. The performance of the various algorithms
depends on the nature of the brain activity being categorized: for several tasks, many of the methods work well, whereas for others, no method performs above chance level. An important factor in overall classification performance is careful preprocessing of the data,
including dimensionality reduction, voxel selection, and outlier elimination.
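Two of the four methods compared — the correlation classifier and Gaussian naive Bayes — can be sketched on synthetic "voxel pattern" data. This is a toy illustration of the classifiers themselves, not the study's data or preprocessing pipeline:

```python
import numpy as np

rng = np.random.default_rng(5)
n_per, d = 40, 50                          # trials per class, voxels
mu = np.zeros((2, d))
mu[0, :10] = 1.0; mu[1, 10:20] = 1.0       # class-specific activation patterns

def sample(n_per):
    X = np.concatenate([mu[c] + rng.standard_normal((n_per, d)) for c in (0, 1)])
    y = np.repeat([0, 1], n_per)
    return X, y

Xtr, ytr = sample(n_per)
Xte, yte = sample(n_per)

def correlation_classifier(Xtr, ytr, Xte):
    # Assign each test pattern to the class template it correlates with most
    means = np.stack([Xtr[ytr == c].mean(0) for c in (0, 1)])
    corr = np.array([[np.corrcoef(x, m)[0, 1] for m in means] for x in Xte])
    return corr.argmax(1)

def gaussian_naive_bayes(Xtr, ytr, Xte):
    # Per-class Gaussian with diagonal covariance (equal class priors)
    scores = []
    for c in (0, 1):
        m = Xtr[ytr == c].mean(0)
        v = Xtr[ytr == c].var(0) + 1e-6
        scores.append(-0.5 * (np.log(v) + (Xte - m) ** 2 / v).sum(1))
    return np.stack(scores).argmax(0)

acc_corr = (correlation_classifier(Xtr, ytr, Xte) == yte).mean()
acc_gnb = (gaussian_naive_bayes(Xtr, ytr, Xte) == yte).mean()
```

On data with a clear class signal both methods perform well above chance; the interesting differences in practice come from the preprocessing steps the abstract emphasizes.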
http://www.kyb.tuebingen.mpg.de
http://www.kyb.tuebingen.mpg.de
http://www.kyb.tuebingen.mpg.de
Department Logothetis
Department Schölkopf
Research Group Bethge
Abstract Talk
Ellwangen, Germany
9th Conference of the Junior Neuroscientists of Tübingen (NeNa 2008)
shipiSPKu
arthurAGretton
jakobJMacke
nikosNKLogothetis
conference
5408
Estimating receptive fields without spike-triggering
2007
11
37
768.1
The prevalent means of characterizing stimulus selectivity in sensory neurons is to estimate their receptive field properties such as orientation selectivity. Receptive fields are usually derived from the mean (or covariance) of the spike-triggered stimulus ensemble.
This approach treats each spike as an independent message but ignores the possibility that information might be conveyed through patterns of neural activity that are distributed across space or time.
In the retina for example, visual stimuli are analyzed by several parallel channels with different spatiotemporal filtering properties. How can we define the receptive field of a whole population of neurons, not just a single neuron?
Imaging methods (such as voltage-sensitive dye imaging) yield measurements of neural activity that do not contain spiking events at all. How can receptive fields be derived from this kind of data?
Even for single neurons, there is evidence that multiple features of the neural response, for example spike patterns or latencies, can carry information. How can these features be taken into account in the estimation process?
Here, we address the question of how receptive fields can be calculated from such distributed representations. We seek to identify those stimulus features and the corresponding patterns of neural activity that are most reliably coupled, as measured by the mutual information between the two signals. As an efficient implementation of this strategy, we use an extension of reverse-correlation methods based on canonical correlation analysis [1]. We evaluate our approach using both simulated data and multi-electrode recordings from rabbit retinal ganglion cells [2]. In addition, we show how the model can be extended to capture nonlinear stimulus-response relationships and to test different coding mechanisms using kernel canonical correlation analysis [3].
http://www.kyb.tuebingen.mpg.de
http://www.kyb.tuebingen.mpg.de
http://www.kyb.tuebingen.mpg.de
Department Schölkopf
Research Group Bethge
Abstract Talk
http://www.sfn.org/am2007/
Biologische Kybernetik
Max-Planck-Gesellschaft
San Diego, CA, USA
37th Annual Meeting of the Society for Neuroscience (Neuroscience 2007)
en
jakobJHMacke
gzeckGZeck
mbethgeMBethge
conference
Macke2006
Decision-Images: A tool for identifying critical stimulus features
2006
11
7
10
Identifying the critical stimulus features that drive the decisions of observers during a visual task is an important prerequisite for computational models of visual cognition. We describe a technique for estimating high-dimensional decision-images, and apply the method to a psychophysical gender discrimination task. The use of regularization makes it possible to map out decision-images using a relatively small number of stimuli.
Statistical analysis of the result shows a remarkable fit to the datasets collected: remarkable, as gender discrimination is a rather high-level visual task, and thus believed to be complex, whereas our model is conceptually rather simple. We demonstrate that the decision-images are sensitive to subtle changes in lighting, texture, and pose, and to individual differences in gender discrimination exhibited by our subjects.
We show how decision-images can be used to create new stimuli, and how the approach can be generalized to nonlinear and multi-scale decision-images. In addition, connections to reverse correlation techniques for receptive field estimation are described.
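The role of regularization in estimating a high-dimensional decision-image from relatively few trials can be sketched with L2-regularized logistic regression on simulated binary decisions. This is an illustrative stand-in, not the abstract's actual estimator or data:

```python
import numpy as np

rng = np.random.default_rng(6)
d, n = 256, 500                    # 16x16 "images", relatively few trials
w_true = np.zeros(d)
w_true[:32] = 1.0                  # true (localized) decision image

X = rng.standard_normal((n, d))    # noisy stimuli
p = 1.0 / (1.0 + np.exp(-X @ w_true))
y = (rng.random(n) < p).astype(float)   # simulated binary decisions

# L2-regularized logistic regression: the regularizer is what keeps the
# high-dimensional estimate stable despite the small number of trials
lam = 5.0
w = np.zeros(d)
for _ in range(300):
    p_hat = 1.0 / (1.0 + np.exp(-X @ w))
    grad = X.T @ (y - p_hat) - lam * w   # gradient of penalized log-likelihood
    w += 0.002 * grad                    # plain gradient ascent

# Alignment of the estimated decision image with the true one
cos = (w @ w_true) / (np.linalg.norm(w) * np.linalg.norm(w_true))
```

With n roughly twice the stimulus dimension, the regularized estimate already points strongly in the direction of the true decision image, which is the practical point the abstract makes about regularization.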
http://www.kyb.tuebingen.mpg.de
http://www.kyb.tuebingen.mpg.de
http://www.kyb.tuebingen.mpg.de
Department Schölkopf
Research Group Bethge
Abstract Talk
Oberjoch, Germany
7th Conference of the Junior Neuroscientists of Tübingen (NeNa 2006)
jakobJMacke