Pathways to the 2023 IHP thematic program Random Processes in the Brain/Seminars

Priscilla Greenwood

Building a stochastic neural circuit of cortical-pulvinar interaction
Seminar video recording.
Warmup interview
  • Speaker: Priscilla Greenwood, University of British Columbia
  • Date: Tuesday, December 6, 2022
  • Abstract: Phase coherence between oscillating areas of cortex is associated with attention and information transmission. A growing literature is devoted to how this might work. We use a rate model of noise-driven quasi-cycle oscillators to build a neural circuit representing cortical-pulvinar interactions. Modelling in terms of rates or densities rather than firing allows computation of a result about optimal phase relationships. This is joint work with Lawrence Ward of the UBC Psychology Department.
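
For readers unfamiliar with quasi-cycles, the following minimal sketch (not the speakers' circuit; all parameters are illustrative) shows the basic ingredient: a two-dimensional linear rate system whose deterministic dynamics spiral into rest, but which sustains an ongoing oscillation with a well-defined phase when driven by noise.

```python
# Minimal quasi-cycle sketch (illustrative, not the speakers' model): a 2-D
# linear rate system with a stable focus decays to rest on its own, but
# sustained noise keeps it oscillating, and its phase can be read off directly.
import numpy as np

rng = np.random.default_rng(0)
dt, T = 1e-4, 5.0
n = int(T / dt)

lam, omega = -1.5, 2 * np.pi * 10.0          # decay rate and ~10 Hz rotation (assumed)
A = np.array([[lam, -omega], [omega, lam]])  # stable focus
sigma = 0.5                                  # noise intensity (assumed)

x = np.zeros((n, 2))
for k in range(n - 1):
    x[k + 1] = x[k] + A @ x[k] * dt + sigma * np.sqrt(dt) * rng.standard_normal(2)

phase = np.arctan2(x[:, 1], x[:, 0])         # instantaneous phase of the quasi-cycle
mean_freq = np.mean(np.diff(np.unwrap(phase))) / dt / (2 * np.pi)
print(f"activity std per component: {x.std(axis=0)}")
print(f"mean phase velocity: {mean_freq:.1f} Hz")  # ~10 Hz despite no limit cycle
```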

Gilles Laurent

Exploring the space of neural systems dynamics
Seminar video recording.
Warmup interview
  • Speaker: Gilles Laurent, Max Planck Institute for Brain Research
  • Date: Tuesday, November 22, 2022
  • Abstract: I will illustrate the diversity of dynamical properties that neural systems can express, in a search for a mechanistic and functional understanding of “the brain”, using examples from a variety of systems and animal species.

Wojciech Szpankowski

Structural and Temporal Information
Seminar video recording.
Warmup interview
  • Speaker: Wojciech Szpankowski, Purdue University
  • Date: Tuesday, November 8, 2022
  • Abstract: Shannon's information theory has served as a bedrock for advances in communication and storage systems over the past five decades. However, this theory does not handle well higher-order structures (e.g., graphs, geometric structures), temporal aspects (e.g., real-time considerations), or semantics. We argue that these are essential aspects of data and information that underlie a broad class of current and emerging data science applications. In this talk, we present some recent results on structural and temporal information. We first show how to extract temporal information about dynamic networks (the arrival order of nodes) from their structure alone (unlabeled graphs). We then proceed to establish fundamental limits on the information content of some data structures, and present asymptotically optimal lossless compression algorithms achieving these limits for various graph models.
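
As a back-of-the-envelope illustration of the structure-versus-labels distinction (using the classical Erdős-Rényi setting rather than anything specific to the talk): a labeled G(n, p) graph carries C(n,2)·h(p) bits, of which roughly log2(n!) bits are spent on the vertex labeling, so subtracting the label cost gives a first-order estimate of the entropy of the structure alone.

```python
# Back-of-the-envelope sketch (classical Erdos-Renyi case, illustrative sizes):
# a labeled G(n, p) graph has entropy C(n,2) * h(p) bits; about log2(n!) of
# those bits encode the vertex labeling, so removing them gives a first-order
# estimate of the structural (unlabeled) entropy.
import math

def binary_entropy(p: float) -> float:
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

n, p = 1000, 0.1
labeled_bits = math.comb(n, 2) * binary_entropy(p)  # entropy of the labeled graph
label_bits = math.lgamma(n + 1) / math.log(2)       # log2(n!) via lgamma
print(f"labeled graph entropy ~ {labeled_bits:,.0f} bits")
print(f"structural entropy    ~ {labeled_bits - label_bits:,.0f} bits")
```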

Olivier Faugeras

Mathematical Neuroscience
Seminar video recording.
Warmup interview
  • Speaker: Olivier Faugeras, Inria Sophia Antipolis
  • Date: Tuesday, October 25, 2022
  • Abstract: Why is it important to ground neuroscience in mathematics? What kind of mathematics is relevant in this scientific area, where biology, perception, action and cognition are closely intermingled? What kind of relationships should be entertained with experimentalists and computationalists? In this lecture I will try to answer these questions through examples drawn from the analysis of the activity of large populations of neurons by mathematical methods from probability, statistics, and geometry.

Tilo Schwalger

Current challenges for mesoscopic neural population dynamics and metastability
Seminar video recording.
  • Speaker: Tilo Schwalger, Institut für Mathematik, Technische Universität Berlin
  • Date: Tuesday, October 11, 2022
  • Abstract: Mesoscopic neuronal population dynamics deals with emergent neural activity and computations at a coarse-grained spatial scale at which fluctuations due to a finite number of neurons should not be neglected. A prime example is metastable dynamics in cortical and hippocampal circuits, in which fluctuations likely play a critical role. In this lecture, I will discuss recent advances and current challenges for mean-field descriptions of computations and metastable dynamics at the mesoscopic scale. Firstly, I will discuss fundamental differences between external noise and intrinsic "finite-size noise" in population models, and their distinct impact on metastable dynamics. Is it possible to infer the type of metastability and noise from mesoscopic population data? Secondly, I will address the question of how to treat single-neuron dynamics (e.g. refractory mechanisms, adaptation) and synaptic dynamics (e.g. short-term depression) at the level of mesoscopic populations. Is it possible to derive (low-dimensional) bottom-up mesoscopic models that link back to the microscopic properties of spiking neural networks? And thirdly, I will address the fundamental problem of heterogeneity in biological neural networks. An important source of heterogeneity is non-homogeneous network structure. The synaptic connectivity of any neural network that performs computations is structured, e.g. as a result of learning. How can mesoscopic mean-field theories, which so far assumed homogeneous (unstructured) connectivity, be generalized to heterogeneous, structured connectivity?
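
The distinction between external and finite-size noise can be illustrated in a few lines. The sketch below (my own toy, not the speaker's model; all parameters are assumed) draws the spike count of a population of N neurons from a Poisson law around the deterministic rate, so the fluctuations of the empirical population rate shrink like 1/sqrt(N); an external-noise model would instead add a fixed-variance term independent of N.

```python
# Toy mesoscopic population (illustrative): with N neurons firing at rate f(h),
# the spike count per time step is random even though f is deterministic, and
# the empirical population rate fluctuates with std ~ 1/sqrt(N). External noise
# would have a fixed variance regardless of N.
import numpy as np

rng = np.random.default_rng(1)
dt, steps, tau = 1e-3, 20000, 0.02

def f(h):
    return 50.0 / (1.0 + np.exp(-h))    # population transfer function (assumed)

for N in (100, 1000, 10000):
    h, rates = 0.0, []
    for _ in range(steps):
        n_spikes = rng.poisson(N * f(h) * dt)      # intrinsic finite-size noise
        r = n_spikes / (N * dt)                    # empirical population rate
        h += dt / tau * (-h + 0.04 * (r - 25.0))   # simple rate feedback (assumed)
        rates.append(r)
    print(f"N={N:6d}: std of population rate = {np.std(rates):7.2f} Hz")
```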

Thibaud Taillefumier

Replica-mean field limits of metastable dynamics
Seminar video recording.
  • Speaker: Thibaud Taillefumier, University of Texas at Austin
  • Date: Tuesday, September 27, 2022
  • Abstract: In this talk, we propose to decipher the activity of neural networks via a “multiply and conquer” approach. This approach considers limit networks made of infinitely many replicas with the same basic neural structure. The key point is that these so-called replica-mean-field networks are in fact simplified, tractable versions of neural networks that retain important features of the finite network structure of interest. The finite size of neuronal populations and synaptic interactions is a core determinant of neural dynamics, being responsible for non-zero correlation in the spiking activity, but also for finite transition rates between metastable neural states. Theoretically, we develop our replica framework by expanding on ideas from the theory of communication networks rather than from statistical physics to establish Poissonian mean-field limits for spiking networks. Computationally, we leverage this replica approach to characterize the stationary spiking activity emerging in the replica mean-field limit via reduction to tractable functional equations. We conclude by discussing perspectives about how to predict transition rates in metastable networks from the characterization of their replica mean-field limit.

Related publications:

  • Yu, Luyan, and Thibaud Taillefumier. Metastable spiking networks in the replica-mean-field limit. PLoS Computational Biology 18, no. 6 (2022): e1010215.
  • Baccelli, François, Michel Davydov, and Thibaud Taillefumier. Replica-mean-field limits of fragmentation-interaction-aggregation processes. Journal of Applied Probability 59, no. 1 (2022): 38-59.
  • Baccelli, François, and Thibaud Taillefumier. The pair-replica-mean-field limit for intensity-based neural networks. SIAM Journal on Applied Dynamical Systems 20, no. 1 (2021): 165-207.
  • Baccelli, François, and Thibaud Taillefumier. Replica-mean-field limits for intensity-based neural networks. SIAM Journal on Applied Dynamical Systems 18, no. 4 (2019): 1756-1797.
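
The replica construction in the abstract above is easy to sketch in code. The toy below (my own simplification with assumed parameters, not the models of the papers) runs M copies of a two-neuron intensity-based network in which every spike is delivered to the partner neuron in an independently chosen replica; M = 1 recovers the original recurrent network, and as M grows each neuron is driven by many independent copies, so the within-replica spike-count correlation decays toward the Poissonian mean-field picture.

```python
# Replica toy (illustrative): M copies of a 2-neuron intensity-based network.
# Each spike of neuron i in some replica is routed to neuron 1-i in a replica
# chosen uniformly at random. For M=1 this is the original recurrent network;
# for large M the drive becomes Poisson-like and correlations vanish.
import numpy as np

rng = np.random.default_rng(2)
dt, steps = 1e-3, 50000
mu, w = 5.0, 40.0             # baseline rate (Hz) and synaptic jump (assumed)
decay = np.exp(-dt / 0.01)    # 10 ms relaxation of the synaptic drive

for M in (1, 2, 10, 100):
    g = np.zeros((M, 2))                      # synaptic drive per replica/neuron
    counts = np.zeros((M, 2, steps // 1000))  # spike counts in 1 s windows
    for t in range(steps):
        g *= decay
        spikes = rng.random((M, 2)) < (mu + g) * dt
        for r, i in zip(*np.nonzero(spikes)):
            g[rng.integers(M), 1 - i] += w    # route to a random replica
            counts[r, i, t // 1000] += 1
    c = np.corrcoef(counts[0, 0], counts[0, 1])[0, 1]
    print(f"M={M:4d}: within-replica count correlation = {c:+.3f}")
```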

Markus Diesmann

Single-neuron model in cortical context
Seminar video recording; slide deck.
  • Speaker: Markus Diesmann, Jülich Research Centre
  • Date: Tuesday, May 31, 2022
  • Abstract: In preparing the 2023 IHP thematic program "Random Processes in the Brain", the question came up of how relevant the choice of single-neuron model is for cortical dynamics and function. Given the plethora of single-neuron models available, insight into their differential effects at the network level would give theoreticians guidance on which model to choose for which research question. The purpose of this talk is to outline a small project approaching this question, which could be carried out in the framework of the thematic program as a collaboration of several labs. The talk first presents a well-studied full-density network model of the cortical microcircuit as a suitable reference network. The proposal is to replace the original single-neuron model with alternative common single-neuron models and to quantify the impact at the network level. For this purpose the presentation reviews a range of common single-neuron models as candidates, and a set of measures such as the firing rate, irregularity, and the power spectrum. It seems achievable that all relevant neuron models can be formulated in the domain-specific language NESTML and that the data analysis can be carried out in the Elephant framework, such that a reproducible digital workflow for the project can be constructed. A minimal scope for the investigation covers a static network in a stationary state. However, there are indications in the literature that the conventional constraints on network activity are weak. Furthermore, hypotheses on the function of the cortical microcircuit depend on the transient interaction between cortical layers, synaptic plasticity, and a separation of dendritic and somatic compartments. Therefore, we need to carefully debate how the scope of an initial exploration can usefully be restricted.
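
As an indication of the measures involved, the sketch below computes the three quantities named in the abstract (firing rate, irregularity as the coefficient of variation of inter-spike intervals, and the power spectrum) with plain NumPy on a surrogate Poisson spike train; in the project itself the trains would come from the simulated microcircuit and the analysis would live in the Elephant framework, which provides equivalent routines.

```python
# Sketch of the network-level measures mentioned above, computed on a
# surrogate Poisson spike train (illustrative rate and duration).
import numpy as np

rng = np.random.default_rng(3)
T, rate = 100.0, 8.0                           # duration (s) and rate (Hz), assumed
spikes = np.sort(rng.uniform(0, T, rng.poisson(rate * T)))

firing_rate = len(spikes) / T                  # mean firing rate (Hz)
isi = np.diff(spikes)                          # inter-spike intervals
cv = isi.std() / isi.mean()                    # irregularity; ~1 for a Poisson train

dt = 1e-3                                      # bin the train to get a count signal
counts = np.histogram(spikes, bins=np.arange(0.0, T + dt, dt))[0]
power = np.abs(np.fft.rfft(counts - counts.mean())) ** 2 / len(counts)
freqs = np.fft.rfftfreq(len(counts), dt)       # raw periodogram of the spike train

print(f"rate = {firing_rate:.2f} Hz, CV = {cv:.2f}, "
      f"mean power below 100 Hz = {power[freqs < 100].mean():.3f}")
```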

Peter F Liddle

Disorganization of mental activity in psychosis
Seminar video recording.
  • Speaker: Peter F Liddle, Institute of Mental Health, University of Nottingham
  • Date: Tuesday, April 26, 2022
  • Abstract: Many patients with psychotic illnesses, including schizophrenia, suffer persisting disability despite treatment of delusions and hallucinations with antipsychotic medication. There is substantial evidence that disorganization of mental activity makes a major contribution to persisting disability, by disrupting thought, emotion and behaviour. Evidence suggests that this disorganization involves impaired recruitment of the relevant brain systems required to make sense of sensory input and achieve our goals: there is diminished engagement of relevant brain circuits, together with a failure to suppress task-irrelevant brain activity. We propose that disorganization of mental activity reflects imprecision of the predictive coding that shapes perception and action. The brain generates internal models of the world that are successively updated in light of sensory information. What we perceive is determined by adjusting predictions to minimise the discrepancy between prediction and sensory input. Motor actions are controlled by a forward model of the state of brain and body as the intended action is executed; action is continuously adjusted to minimise the discrepancy between prediction and sensory input. Disorganization is associated with both imprecise timing and imprecise content of predictions. We need models that incorporate the interactions between excitatory and inhibitory neurons in local circuits, with parameters representing long-range communication between brain regions, to help us understand the pathophysiological mechanism responsible for imprecise predictive coding in psychotic illness.
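
The predictive-coding scheme sketched in the abstract has a standard minimal form. The toy below (my own illustration with made-up numbers, not the speaker's model) updates a belief toward each noisy observation with a gain set by the relative precision of predictions and input, so that lowering the precision of predictions yields a noisier, less stable percept.

```python
# Toy precision-weighted predictive coding (illustrative, not the speaker's
# model): a belief about a constant hidden cause is nudged toward each noisy
# observation with a gain given by the relative precision (inverse variance)
# of sensory input vs. prediction. Imprecise predictions make the resulting
# percept track the sensory noise.
import numpy as np

rng = np.random.default_rng(4)

def mean_tracking_error(prior_precision, sensory_precision=1.0, n=200):
    truth, belief, sq_errs = 1.0, 0.0, []
    for _ in range(n):
        obs = truth + rng.normal(0.0, sensory_precision ** -0.5)
        gain = sensory_precision / (sensory_precision + prior_precision)
        belief += gain * (obs - belief)        # precision-weighted prediction error
        sq_errs.append((belief - truth) ** 2)
    return float(np.mean(sq_errs))

for pp in (10.0, 1.0, 0.1):                    # precise -> imprecise predictions
    print(f"prior precision {pp:5.1f}: "
          f"mean squared error = {mean_tracking_error(pp):.3f}")
```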

Massimiliano Tamborrino

Structure-preserving Approximate Bayesian Computation (ABC) for stochastic neuronal models
Seminar video recording.
  • Speaker: Massimiliano Tamborrino, Department of Statistics at University of Warwick
  • Date: Tuesday, March 29, 2022
  • Abstract: Over the last decade, ABC has become one of the major tools for parameter inference in complex mathematical models. The method is based on the idea of deriving an approximate posterior density that targets the true (unavailable) posterior: massive simulations from the model, run under different parameters, replace the intractable likelihood, and those parameters whose simulations are good matches to the observed data are retained. When applying ABC to stochastic models, the derivation of effective summary statistics and proper distances is particularly challenging, since simulations from the model under the same parameter configuration result in different outputs. Moreover, since exact simulation from complex stochastic models is rarely possible, reliable numerical methods need to be applied. In this talk, we show how to use the underlying structural properties of the model to construct specific ABC summaries that are less sensitive to the intrinsic stochasticity of the model, and the importance of adopting reliable property-preserving numerical (splitting) schemes for the synthetic data generation. Indeed, the commonly used Euler-Maruyama scheme may drastically fail even with very small step sizes. The proposed approach is illustrated first on the stochastic FitzHugh-Nagumo model, and then on the broad class of partially observed Hamiltonian stochastic differential equations, in particular on the stochastic Jansen-Rit neural mass model, both with simulated and with real electroencephalography (EEG) data, for both one neural population and a network of neural populations (ongoing work).
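
To make the numerical point concrete: the following one-dimensional caricature (a double-well SDE, much milder than the stiff stochastic FitzHugh-Nagumo systems of the talk; step size and noise level are made up) shows Euler-Maruyama exploding at a coarse step, while a Lie-Trotter splitting that solves the linear and cubic sub-flows exactly and adds the noise as a separate substep remains bounded.

```python
# Caricature of splitting vs. Euler-Maruyama on dX = (X - X^3) dt + sigma dW.
# The splitting composes the exact flows X -> X e^{dt} (linear part) and
# X -> X / sqrt(1 + 2 dt X^2) (cubic part) with an additive noise substep,
# and cannot blow up; Euler-Maruyama at this step size eventually diverges.
import numpy as np

rng = np.random.default_rng(5)
dt, steps, sigma = 0.5, 2000, 1.0          # deliberately coarse step (assumed)

def euler_maruyama(x, noise):
    return x + dt * (x - x**3) + noise

def splitting(x, noise):
    x = x * np.exp(dt)                     # exact flow of dX = X dt
    x = x / np.sqrt(1 + 2 * dt * x**2)     # exact flow of dX = -X^3 dt
    return x + noise                       # noise added as a separate substep

for name, step in (("Euler-Maruyama", euler_maruyama), ("splitting", splitting)):
    x, blew_up = 0.0, False
    for _ in range(steps):
        x = step(x, sigma * np.sqrt(dt) * rng.normal())
        if not np.isfinite(x) or abs(x) > 1e6:
            blew_up = True
            break
    status = "diverged" if blew_up else f"stayed bounded, final x = {x:+.2f}"
    print(f"{name:>15s}: {status}")
```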

Christophe Pouzat

Simulation-based inference for neural network structure
  • Speaker: Christophe Pouzat, Université de Strasbourg and NeuroMat
  • Date: Tuesday, March 8, 2022
  • Abstract: The central issue we would like to discuss with you is network inference, from both a structural and a dynamical viewpoint. What we mean by "network" here corresponds to a cortical column, not a whole brain. We now have, for many brain regions, a lot of anatomical/structural data. We would like to use these data when we try to infer the network underlying the observed neuronal activity (e.g., in the form of spike trains) recorded by the experimentalists. We also have many different reduced dynamical models for the neurons: some deterministic, like the Hodgkin-Huxley model and its reduced versions (e.g., Morris-Lecar, FitzHugh-Nagumo), variants of integrate-and-fire models (e.g., the exponential IF), and the Izhikevich model; some stochastic, like stochastic integrate-and-fire models, adaptive threshold models, the Hawkes process, or the Galves-Löcherbach model. These reduced models allow us to simulate reasonably large-scale networks (like cortical columns). These large-scale simulations require the specification of many parameters for the dynamical model, as well as for the random graph models we can propose from the known anatomical data. Several colleagues have by now used a combination of a dynamical model and a network model with fixed parameters to generate data under the "null hypothesis" (e.g., no functional coupling between the observed neurons), leading to an empirical distribution of their statistic of interest. We think that we should now go one step further, and that's what we would like to discuss with you. Namely, we would like to consider a simulation-based approach to the network inference problem, as is now done in many fields under various names (e.g., "Approximate Bayesian Computation" and "Simulation-Based Inference"). In our view, successfully applying these methods to our problem will require gathering experts from many fields: quantitative neuroanatomy, random graphs, large-scale numerical simulation of the various dynamical models, statistics, and probability.
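
As a concrete illustration of the null-distribution step described above (a toy in the spirit of the Galves-Löcherbach model, with invented parameters, not a calibrated cortical column): simulate the uncoupled network repeatedly, build the empirical distribution of a synchrony statistic, and ask whether the "observed" coupled data exceed it.

```python
# Toy null-hypothesis simulation (illustrative): discrete-time stochastic
# neurons whose firing probability grows with an accumulated input that is
# reset after each spike, in the spirit of the Galves-Löcherbach model. We
# compare a synchrony statistic on "observed" (coupled) data against its
# empirical null distribution from uncoupled simulations.
import numpy as np

rng = np.random.default_rng(6)
n_neurons, steps = 20, 3000

def simulate(w):
    """Spike raster for coupling weight w (w = 0 is the null model)."""
    u = np.zeros(n_neurons)                    # membrane summaries
    raster = np.zeros((steps, n_neurons), dtype=bool)
    for t in range(steps):
        p = 1.0 - np.exp(-np.exp(u - 3.0))     # firing probability, increasing in u
        spk = rng.random(n_neurons) < p
        u = np.where(spk, 0.0, 0.9 * u + w * spk.sum())  # reset, leak, shared input
        raster[t] = spk
    return raster

def synchrony(raster):
    counts = raster.sum(axis=1)                # population spike count per bin
    return counts.var() / counts.mean()        # Fano factor; ~1 if independent

observed = synchrony(simulate(w=0.05))         # stand-in for the recorded data
null = np.array([synchrony(simulate(w=0.0)) for _ in range(100)])
print(f"observed synchrony = {observed:.2f}, "
      f"null 95% quantile = {np.quantile(null, 0.95):.2f}")
```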