Research

Structure shapes dynamics, and dynamics implement computations.

How can a large network of 'simple' units perform complex computations? Statistical physics gives us a robust set of tools for reducing the complexity of large systems to a comprehensible set of variables. With these abstractions, we can study the collective dynamics of a network and propose computations that it can perform. Ultimately, the goal is twofold: to study the brain so we can build better algorithms, and to develop algorithms that help us study the brain.

Dynamics and computations in recurrent neural networks

Cortical circuits are large, complex networks of interconnected neurons of various types, and they exhibit a rich dynamical repertoire. Modern artificial neural networks (ANNs) often try to mimic these biological structures in the hope of reproducing some of their computational power. Because the connectivity is highly recurrent, the self-generated dynamics of the system are fundamental. To understand what computations recurrent circuits can perform, we first need to understand their dynamical properties. Using statistical physics, we describe the typical behavior of these networks through a small number of order parameters. These parameters are emergent properties of the large network and help us abstract away the complex microscopic behavior. Reducing the high-dimensional dynamics to a few relevant measures lets us characterize the phase space of the activity and study its computational properties. Of particular interest are transition points between two dynamical phases. A system poised at such a critical point exhibits unique behavior that can be shown to be beneficial for computation.
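
As a toy illustration of such an order parameter (a minimal sketch, not taken from any specific paper), the following simulates a standard random rate network and estimates q, the population- and time-averaged squared activity; q switches from zero to a finite value as the coupling gain g crosses the transition to chaos near g ≈ 1.

    import numpy as np

    def simulate_rate_network(N=1000, g=1.5, T=200.0, dt=0.1, seed=0):
        """Euler integration of a random rate network, dx/dt = -x + J phi(x), phi = tanh."""
        rng = np.random.default_rng(seed)
        J = rng.normal(0.0, g / np.sqrt(N), size=(N, N))  # random Gaussian couplings with gain g
        x = rng.normal(0.0, 1.0, size=N)                  # random initial condition
        steps = int(T / dt)
        traj = np.empty((steps, N))
        for t in range(steps):
            x += dt * (-x + J @ np.tanh(x))
            traj[t] = x
        return traj

    # Order parameter q: population- and time-averaged squared activity after a transient.
    for g in (0.5, 1.0, 1.5, 2.0):
        traj = simulate_rate_network(g=g)
        q = np.mean(traj[1000:] ** 2)   # discard the first 100 time units
        print(f"g = {g:.1f}  ->  q = {q:.3f}")

Below the transition the activity decays to the quiescent fixed point (q ≈ 0); above it, the network sustains chaotic fluctuations and q grows with g.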

Through the study of ANNs in-silico, we test these theories and show the emergence of computation. Furthermore, modern experimental techniques help us test these theories in-vivo. By combining calcium imaging with optogenetic stimulation, we can test whether cortical circuits are tuned to critical points in the dynamical phase space.

Dynamics and representations in structured deep networks

Many artificial neural networks are constructed in a hierarchical fashion with directional signal propagation (in space and time). These layer-wise structures are also abundant in the brain; for example, we find feedforward processing in the hippocampus, the cerebellum, and various sensory pathways. It is evident that deep neural networks have strong representational power, can be trained, and can generalize and extract meaningful patterns from training data. However, how they do so, and how to interpret their internal representations, is still not well understood and remains a very active field of study.

Using methods from statistical field theory, we study the dynamics of the signal as it propagates through the layers of a deep network. By finding the appropriate order parameters, we can ask which regimes are relevant for deep learning. For simple toy problems, we can analytically derive performance measures for the network and determine the optimal architecture.
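
One standard order parameter of this kind is the variance q_l of the pre-activations at layer l: in a wide network with i.i.d. Gaussian weights it obeys a simple layer-to-layer mean-field recursion. A minimal sketch, assuming tanh units and illustrative weight and bias scales sigma_w and sigma_b (not parameters from the papers listed below):

    import numpy as np

    def variance_map(q, sigma_w=1.5, sigma_b=0.1, n_samples=100_000, seed=0):
        """One layer of the recursion q_{l+1} = sigma_w^2 E[tanh(sqrt(q_l) z)^2] + sigma_b^2, z ~ N(0, 1)."""
        rng = np.random.default_rng(seed)
        z = rng.standard_normal(n_samples)
        return sigma_w**2 * np.mean(np.tanh(np.sqrt(q) * z) ** 2) + sigma_b**2

    # Iterate the map layer by layer and watch it converge to a fixed point q*.
    q = 1.0
    for layer in range(20):
        q = variance_map(q)
        print(f"layer {layer + 1:2d}:  q = {q:.4f}")

How quickly q (and the analogous correlation between two different inputs) converges to its fixed point is one way to distinguish the different signal-propagation regimes of a deep network.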

  • Optimal Architectures in a Solvable Model of Deep Networks, NeurIPS, 2016
  • Dynamics of deep networks with random representations, in prep.
  • Dynamical mean-field theory for deep networks with simple structure – path integral approach. in prep.

Statistical mechanics of high-dimensional inference

Many computational problems can be represented as graphical networks, where each node of the graph represents a statistical variable or an observable. When the problems are very high dimensional and the models have a thermodynamic number of nodes, we can use the tools of statistical physics to derive algorithms that minimize a cost function or find the appropriate posteriors over the statistical variables. One such example is tensor decomposition.

Large, high-dimensional datasets collected across multiple modalities can often be organized as a higher-order tensor. Low-rank tensor decomposition then arises as a powerful and widely used tool to discover simple low-dimensional structures underlying such data. However, we currently lack a theoretical understanding of the algorithmic behavior of low-rank tensor decomposition. We derive Bayesian approximate message passing (AMP) algorithms for recovering arbitrarily shaped low-rank tensors buried within noise, and we employ dynamic mean-field theory to precisely characterize their performance. Our theory reveals the existence of phase transitions between easy, hard, and impossible inference regimes and displays an excellent match with simulations. Moreover, it reveals several qualitative surprises compared to the behavior of symmetric, cubic tensor decomposition. Finally, we compare our AMP algorithm to the most commonly used algorithm, alternating least squares (ALS), and demonstrate that AMP significantly outperforms ALS in the presence of noise.
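
For orientation, here is a minimal, purely illustrative rank-1 ALS sketch on a planted signal plus Gaussian noise; the AMP algorithm itself is more involved, and all sizes, noise levels, and variable names below are arbitrary.

    import numpy as np

    def rank1_als(T, n_iter=50, seed=0):
        """Alternating least squares for a rank-1 fit T ~ outer(u, v, w)."""
        rng = np.random.default_rng(seed)
        I, J, K = T.shape
        u, v, w = rng.standard_normal(I), rng.standard_normal(J), rng.standard_normal(K)
        v /= np.linalg.norm(v)
        w /= np.linalg.norm(w)
        for _ in range(n_iter):
            u = np.einsum('ijk,j,k->i', T, v, w); u /= np.linalg.norm(u)
            v = np.einsum('ijk,i,k->j', T, u, w); v /= np.linalg.norm(v)
            w = np.einsum('ijk,i,j->k', T, u, v)   # keep the overall scale in w
        return u, v, w

    # Planted rank-1 signal plus Gaussian noise.
    rng = np.random.default_rng(1)
    u0, v0, w0 = rng.standard_normal(30), rng.standard_normal(40), rng.standard_normal(50)
    T = np.einsum('i,j,k->ijk', u0, v0, w0) + 0.5 * rng.standard_normal((30, 40, 50))
    u, v, w = rank1_als(T)
    overlap = abs(u @ u0) / np.linalg.norm(u0)   # u has unit norm
    print(f"recovery overlap for the first mode: {overlap:.3f}")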

Cortex-cerebellum interactions and learning

Classical theories posit that cerebellar granule cells massively outnumber their inputs in order to produce decorrelated, diverse, and generic expansions that facilitate arbitrary pattern separation by downstream Purkinje cells. The largest input to the cerebellum comes from the neocortex via the pons. Here we use simultaneous two-photon Ca2+ imaging of premotor layer 5 pyramidal cells and granule cells during a motor planning task to examine information transfer from neocortex to cerebellum. Surprisingly, in expert mice, granule cell responses were highly similar to cortical output, with causal pontine contributions to the high layer 5 – granule cell correlations. Ensemble activity of granule cells was both dense and redundant, with little evidence of expansion relative to layer 5. By contrast, early in learning, granule cell representations were more diverse and less correlated with cortical outputs. Response redundancy increased with task performance and produced much higher-fidelity encoding of cortical activity and of behavior. These data suggest a major extension of prevailing theories: rather than performing a generic expansion, cortico-cerebellar dynamics can adapt to learned tasks to provide more extensive and reliable encoding at the cost of less response diversity and input transformation.

Coding and dynamics of spiking neurons

Real neurons in the brain communicate with spikes, not with continuous analog signals. There are several theories of how to achieve efficient coding with spiking neurons. It is unclear whether this biological solution for neuronal communication is optimized for specific computations, or whether it is a constraint that the computation must overcome. In particular, spike synchronization and transmission delays play an essential role in the dynamics of these networks.

Some of the open questions we are attempting to answer include how theories of coding in spiking networks compare with other established dynamical theories of cortical networks, and what the computational benefits of using spikes may be (if any).
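
As a minimal, purely illustrative sketch of the class of models involved (all parameters are arbitrary and not tied to any specific result), the following simulates a randomly coupled network of leaky integrate-and-fire neurons with a fixed synaptic transmission delay:

    import numpy as np

    def simulate_lif_with_delays(N=200, g=5.0, delay_ms=2.0, T_ms=500.0, dt=0.1, seed=0):
        """Leaky integrate-and-fire network with random couplings and a fixed transmission delay."""
        rng = np.random.default_rng(seed)
        tau, v_th, v_reset, drive = 20.0, 1.0, 0.0, 1.2   # membrane constants (ms) and constant input
        J = g / np.sqrt(N) * rng.normal(size=(N, N))      # random synaptic couplings
        delay_steps = int(delay_ms / dt)
        steps = int(T_ms / dt)
        v = rng.uniform(0.0, v_th, size=N)
        buffer = np.zeros((delay_steps + 1, N))           # ring buffer of past spikes
        spike_counts = np.zeros(steps)
        for t in range(steps):
            delayed = buffer[t % (delay_steps + 1)]       # spikes emitted delay_ms ago arrive now
            v += dt / tau * (-v + drive) + J @ delayed
            fired = v >= v_th
            v[fired] = v_reset
            buffer[(t + delay_steps) % (delay_steps + 1)] = fired   # deliver these spikes later
            spike_counts[t] = fired.sum()
        return spike_counts

    counts = simulate_lif_with_delays()
    rate_hz = counts.sum() / 200 / (500.0 / 1000.0)   # spikes per neuron per second
    print(f"mean firing rate ~ {rate_hz:.1f} Hz")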

  • We presented preliminary results at CoSyne 2019.

Some older biophysics projects


The physics behind honeycomb construction

How do hornets and honeybees build their hives? These structures exhibit remarkable symmetry and regularity that require great precision. Furthermore, some species construct their hives in total darkness! It was conjectured a century ago that hornets pour wax from their bodies, and that it hardens into its most stable structure -- a hexagonal lattice. But the most stable structure would be a puddle on the floor! We show that by exploiting the acoustic modes of the structure, these insects can achieve perfect symmetry, tuning the structure of each cell to the echoes of a perturbed ultrasonic wave, much as a piano tuner uses a tuning fork.

Molecular dynamics of protein-substrate binding

The entry of a substrate into the active site is the first event in any enzymatic reaction. However, due to the short time interval between the encounter and the formation of the stable complex, the detailed steps cannot be observed experimentally. Molecular dynamics techniques allow an in-silico simulation of this biophysical process, enabling 'observations' and 'measurements' that are impossible in-vitro. In this project we studied the encounter between a palmitate molecule and the toad liver fatty acid binding protein, ending with the formation of a stable complex that resembles, in structure, those of other proteins in this family. Solving a Poisson-Boltzmann equation that couples the electric field to the distribution of molecules in the system gives insight into the forces acting on the system and leading to the formation of the tight complex.