Research

Exploiting synergies between Machine Learning and neuroscience to demystify brain functions

Cognitive and computational neuroscience has flourished in recent years with the arrival of innovations that let us record and track thousands of neurons from multiple brain regions over long periods. However, the conceptual insights into brain function that we can extract from such data are only as deep as the analyses we can perform and the models we can understand. In parallel, artificial intelligence (AI) and the study of in silico neural networks have undergone a revolution of their own. As with biological networks, our understanding of why and how modern Machine Learning (ML) algorithms work lags behind the technological achievements. These parallel developments in neuroscience and ML open exciting research directions for understanding how computation emerges from circuit dynamics and connectivity.

We aim to exploit synergies between modern ML and neuroscience along several principal directions:

  1. Study the underlying principles of learning and inference in neural networks, with an emphasis on biologically plausible implementations.

  2. Use artificial neural networks as surrogate brain circuits, i.e., as a hypothesis-generating tool.

  3. Develop algorithms and theories targeted at specific computational neuroscience needs; we are particularly interested in analyzing large-scale neural data from complex behavioral experiments and in designing optical stimulation experiments.

  4. Highlight the differences between modern ML frameworks and their neuronal counterparts to understand the deficiencies and benefits of each.

Dynamics and computation of cortical circuits

Cortical circuits are large, complex networks of interconnected neurons of various types, and they exhibit a rich dynamical repertoire. Modern artificial neural networks (ANNs) often try to mimic these biological structures in the hope of reproducing some of their computational power. Because the connectivity is highly recurrent, the self-generated dynamics of the system are fundamental. To understand what computations recurrent circuits can perform, we first need to understand their dynamical properties. Using tools from statistical physics, we describe the typical behavior of these networks through a small number of order parameters. These parameters are emergent properties of the large network and let us abstract away the complex microscopic behavior. Reducing the high-dimensional dynamics to a few relevant measures allows us to characterize the phase space of the activity and to study the computational properties of each phase. Of particular interest are transition points between two dynamical phases: a system poised at such a critical point exhibits unique behavior that can be shown to benefit computation.
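
As a concrete illustration of this picture, the sketch below simulates the classic random recurrent rate network of Sompolinsky, Crisanti, and Sommers, for which mean-field theory predicts a transition from a silent fixed point to chaos at coupling gain g = 1. The late-time activity variance serves here as a simple empirical order parameter; the network size, integration scheme, and this particular diagnostic are illustrative choices, not a prescription.

```python
# Minimal sketch: self-generated dynamics in a random recurrent rate network,
# dx/dt = -x + J @ tanh(x) with J_ij ~ N(0, g^2 / N). Mean-field theory
# predicts a quiescent-to-chaotic transition at gain g = 1; we track the
# late-time variance of the rates as a simple empirical order parameter.
import numpy as np

rng = np.random.default_rng(0)

def simulate(g, N=500, T=200.0, dt=0.1):
    """Euler-integrate the rate network; return the final activity variance."""
    J = rng.normal(0.0, g / np.sqrt(N), size=(N, N))  # random coupling, std g/sqrt(N)
    x = rng.normal(0.0, 0.5, size=N)                  # random initial condition
    for _ in range(int(T / dt)):
        x += dt * (-x + J @ np.tanh(x))
    return np.var(np.tanh(x))  # ~0 below the transition, >0 in the chaotic phase

for g in [0.5, 0.9, 1.1, 1.5, 2.0]:
    print(f"g = {g:.1f}  activity variance = {simulate(g):.4f}")
```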

By studying in silico ANNs, we test these theories and demonstrate the emergence of computation. Furthermore, modern experimental techniques allow us to test these theories in vivo: by combining calcium imaging with optogenetic stimulation, we can probe whether cortical circuits are tuned to critical points in the dynamical phase space.
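
One conventional first-pass diagnostic for such tuning is the branching parameter m, estimated by regressing the population activity at time t+1 on the activity at time t; a value of m close to 1 is a signature of proximity to a critical point. The toy sketch below demonstrates the estimator on synthetic spike counts, not on our experimental pipeline; note that this naive regression is known to be biased when only a fraction of the circuit is sampled.

```python
# Toy sketch of a standard criticality diagnostic: estimate the branching
# parameter m from population activity A(t) via the regression
# A(t+1) ~ m * A(t) + drive. In practice A(t) would be binned event counts
# from calcium imaging; here we use a synthetic branching process instead.
import numpy as np

rng = np.random.default_rng(1)

def branching_estimate(A):
    """Least-squares slope of A(t+1) against A(t) -- the conventional estimator."""
    return np.polyfit(A[:-1], A[1:], 1)[0]

# Surrogate data: a driven branching process with a known branching parameter.
m_true, drive, T = 0.98, 2.0, 20_000
A = np.zeros(T)
for t in range(1, T):
    A[t] = rng.poisson(m_true * A[t - 1] + drive)

print(f"true m = {m_true}, estimated m = {branching_estimate(A):.3f}")
```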

Computation across different brain regions and different neural architectures

Cognitive and behavioral processes require coordinated computation across many functionally and anatomically distinct brain areas. Recent technological developments enable us to simultaneously record the activity of many neurons across distant brain regions in behaving animals. However, classical neuroscience models consider isolated brain regions and are fit to neural data recorded in a single location. To understand and model brain-wide distributed computations, we urgently need new theories of neuronal dynamics in composite networks to accompany the new experimental paradigms. One way to model brain-wide activity is to train brain-inspired artificial networks on behavioral or neural data. Artificial networks provide an excellent framework for generating novel hypotheses about brain function. However, beyond some structural similarities, these in silico models differ significantly from their biological counterparts: they rely on unrealistic neural dynamics and biologically implausible learning schemes. To treat artificial neural networks as surrogate brain circuits, we again need a rigorous theory of their dynamics and computation in order to extract the biologically relevant elements.
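
To make the surrogate-circuit idea concrete, here is a minimal sketch of fitting a small recurrent network to reproduce population trajectories. The synthetic target data, network sizes, and training details are placeholder assumptions for illustration, not our actual pipeline; it requires PyTorch.

```python
# Hedged sketch of the surrogate-circuit approach: fit a small recurrent
# network to reproduce recorded population trajectories, then interrogate its
# connectivity and dynamics as a hypothesis generator. The "recorded" data
# below are synthetic placeholders.
import torch

torch.manual_seed(0)
N_REC, N_NEURONS, T = 64, 20, 100

# Placeholder target: smooth random trajectories standing in for
# trial-averaged firing rates, shape (time, neurons).
target = torch.cumsum(0.1 * torch.randn(T, N_NEURONS), dim=0)

class SurrogateRNN(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.J = torch.nn.Parameter(torch.randn(N_REC, N_REC) / N_REC**0.5)
        self.readout = torch.nn.Linear(N_REC, N_NEURONS)
        self.x0 = torch.nn.Parameter(torch.zeros(N_REC))

    def forward(self, dt=0.1):
        x, rates = self.x0, []
        for _ in range(T):
            x = x + dt * (-x + torch.tanh(x) @ self.J.T)  # rate dynamics
            rates.append(self.readout(torch.tanh(x)))     # map to recorded units
        return torch.stack(rates)

model = SurrogateRNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for epoch in range(500):
    opt.zero_grad()
    loss = torch.mean((model() - target) ** 2)
    loss.backward()
    opt.step()
print(f"final fit error: {loss.item():.4f}")
```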

We study the computation and learning dynamics of large composite networks with two or more interacting subnetworks. Subnetworks are highly connected local circuits with distinct structural or functional properties. We use theoretical and computational frameworks to understand how the interaction between the subnetworks shapes neural dynamics and how it affects computation and learning in these systems.
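
A minimal version of such a composite model is sketched below: two random subnetworks coupled through a block-structured connectivity matrix, with the cross-coupling strength swept to show how interaction alone can push the joint system across a dynamical transition. The sizes and gains are illustrative assumptions, not a specific published model.

```python
# Minimal sketch of a composite network: two recurrent subnetworks coupled
# through a block-structured connectivity matrix. Diagonal blocks carry local
# recurrence (gain g_local); off-diagonal blocks carry inter-area coupling
# (gain g_cross). We sweep g_cross and monitor each area's activity variance.
import numpy as np

rng = np.random.default_rng(2)

def coupled_subnetworks(g_local, g_cross, N=500, T=200.0, dt=0.1):
    """Two subnetworks of N units each; return per-area activity variances."""
    def block(g):
        return rng.normal(0.0, g / np.sqrt(N), size=(N, N))
    J = np.block([[block(g_local), block(g_cross)],
                  [block(g_cross), block(g_local)]])
    x = rng.normal(0.0, 0.5, size=2 * N)
    for _ in range(int(T / dt)):
        x += dt * (-x + J @ np.tanh(x))
    r = np.tanh(x)
    return np.var(r[:N]), np.var(r[N:])

for g_cross in [0.0, 0.5, 1.0]:
    v1, v2 = coupled_subnetworks(g_local=0.8, g_cross=g_cross)
    print(f"cross-coupling {g_cross:.1f}: area-1 var {v1:.4f}, area-2 var {v2:.4f}")
```

In this toy setting each area is individually quiescent (g_local < 1), yet the combined network becomes chaotic once the effective gain, roughly sqrt(g_local^2 + g_cross^2) for Gaussian blocks, exceeds one, so the interaction alone changes the dynamical phase of both areas.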


Page under construction