Coordination of distributed cortical networks producing complex movements
The generation of seemingly simple skilled behaviors, such as reaching to grasp an object, requires rapid and precise coordination of many body parts (e.g., shoulder, elbow, wrist, digits). Because the representation of different body parts is spatially distributed across sensorimotor cortex (SMC), the coordination required for skilled behavior implies corresponding coordination across the SMC populations generating the movement. Despite many experiments in SMC, we have limited understanding of the functional organization and coordination of the neuronal populations that produce skilled behaviors. Recent studies have emphasized the importance of local neural population dynamics; however, examinations of larger-scale network dynamics are rare. Furthermore, linking dynamics to principles of representation has proven challenging. Compared to our understanding of motor networks, our knowledge of sensory processing is more developed. The concept of sparsity has proven very useful for understanding computations in sensory systems, and has recently been applied to spatial patterns of hippocampal field potential recordings; however, the role of sparseness in motor regions is not understood. We conjecture that primary motor regions transform sparse, independent representations of afferent messages from premotor areas into dense, coordinated efferent signals for driving motor actuators. Extending our recent findings in humans and rodents, we aim to understand brain function across multiple spatiotemporal scales, examine how distributed representations are dynamically coordinated, and test hypotheses on the role of sparsity in cortical computations. We have developed a novel three-degree-of-freedom robot that, together with video tracking, enables 3D reach tasks in rats.
Video: a rat reaching to grasp the robot handle.
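To make the sparse-versus-dense conjecture quantitative, one needs a concrete sparseness measure. The sketch below uses the standard Treves–Rolls population sparseness index on invented firing-rate vectors (the patterns and their labels are illustrative assumptions, not our data):

```python
import numpy as np

def treves_rolls_sparseness(rates):
    """Treves-Rolls sparseness of a nonnegative firing-rate vector:
    0 for perfectly dense (uniform) activity, approaching 1 when
    only a few units carry all the activity."""
    r = np.asarray(rates, dtype=float)
    n = r.size
    a = (r.mean() ** 2) / np.mean(r ** 2)  # activity ratio in (0, 1]
    return (1 - a) / (1 - 1 / n)

# Hypothetical illustration: a sparse 'premotor-like' pattern vs. a
# dense 'M1-like' pattern with the same total activity.
sparse_pattern = np.array([9.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0])
dense_pattern = np.full(8, 1.25)
print(treves_rolls_sparseness(sparse_pattern))  # close to 1
print(treves_rolls_sparseness(dense_pattern))   # exactly 0
```

The same index can be applied per time bin to ask whether premotor populations really sit at the sparse end of this axis and M1 at the dense end.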
Distributed representations of ethological auditory objects
In many mammals, including humans, neural representations of complex sounds are distributed across large areas of cortex, including across primary auditory cortex (A1). The ability to discriminate between classes of sounds is critical to the survival of many species; for example, a rat's ability to distinguish between different avian sounds (e.g., hawk vs. crow) can be a matter of life or death. While the functional organization of A1 has long been investigated with simple stimuli, current receptive field models explain relatively little (~15%) of the response variance. This indicates that we do not understand which auditory features drive A1 neural responses. Concomitantly, most studies focus on the responses of relatively few neurons confined to a small number of cortical columns. While informative, such studies provide only limited insight into the distributed representation of ethologically relevant complex sounds. Indeed, we lack the broad stimulus sets required to even begin addressing these issues. Thus, to advance our understanding of complex-sound representations, we need a rich stimulus set together with spatially extended recording approaches that measure distributed representations of these sounds. We have collaborated in the design of novel µECoG devices that record neural activity spatially localized to ±200 µm (approximately the diameter of a cortical column) while covering the entire A1. Our lab has pioneered concurrent laminar polytrode and µECoG recordings for simultaneous interrogation of local and distributed neural activity. We have constructed, and continue to annotate, a large database of natural sounds for playback during A1 recordings. Our recent analyses indicate that the structure of co-fluctuations among signals can provide insight into the nature of distributed representation.
Together, these observations and tools motivate and enable us to test the hypothesis that A1 representations are optimized to classify complex, ethologically relevant sounds.
Video: neural responses to auditory stimuli, with each electrode color-coded by its preferred frequency (played at 1/4 speed).
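The "~15% of response variance" figure refers to the fraction of variance a fitted receptive-field model explains. As a minimal sketch of how that number is computed, the code below fits a linear receptive field by ridge regression to synthetic data in which most response variability is, by construction, unexplained by the linear model (the stimulus, receptive field, and noise level are all invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a spectrogram stimulus: T time bins x F freq bands.
T, F = 2000, 16
stim = rng.standard_normal((T, F))

# Ground-truth linear receptive field plus strong 'unexplained' variability,
# mimicking the regime where linear models capture only a small fraction.
true_rf = rng.standard_normal(F)
response = stim @ true_rf + 10.0 * rng.standard_normal(T)

# Ridge-regression estimate of the receptive field (closed form).
lam = 1.0
rf_hat = np.linalg.solve(stim.T @ stim + lam * np.eye(F), stim.T @ response)

# Fraction of response variance explained (R^2) by the linear model.
pred = stim @ rf_hat
r2 = 1 - np.var(response - pred) / np.var(response)
print(f"variance explained: {r2:.2f}")
```

With the noise level chosen here, R² lands well below 1, echoing the regime where a linear model simply cannot capture the features that actually drive the responses.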
Bridging spatiotemporal scales in the brain with multi-modal data and biophysical simulations
Biological systems are organized and function across a large range of spatial and temporal scales. One of the most complicated multi-scale biological systems is the brain, in which myriad neurons are organized into microcircuits (‘columns’) that perform specific computations while simultaneously being integrated into larger networks. Investigating the activity of individual neurons and small neuronal populations has yielded exquisite insight into microcircuit mechanisms of local computation, while macroscale measurements (e.g., fMRI) have revealed global processing across entire brain areas. Additionally, different frequency bands of the brain's continuous electrical potential reflect different biological processes. For example, high-gamma (Hγ: 80-150 Hz) activity directly reflects multi-unit neuronal spiking, while lower-frequency components (e.g., β: 15-30 Hz) are thought to reflect synapto-dendritic currents. Thus, low frequencies may reflect the inputs to neuronal populations, while high-gamma activity mainly reflects their spiking outputs. How local neural processing is organized and coordinated in the context of broader neural networks, and how this is reflected in spiking activity and different frequency bands, is poorly understood. We combine µECoG with laminar polytrodes, optogenetic manipulations, and large-scale biophysically detailed simulations to bridge spatiotemporal scales in the nervous system.
Video: membrane potential in a large-scale biophysical simulation, slowed relative to real time (movie courtesy of Burlen Loring).
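Extracting the band-limited activity described above is a standard filtering operation. The sketch below, on a synthetic field potential (the signal composition and sampling rate are illustrative assumptions), pulls out β and high-gamma amplitude envelopes with a zero-phase Butterworth filter and the Hilbert transform:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def band_amplitude(signal, fs, low, high, order=4):
    """Band-limited analytic amplitude via a zero-phase Butterworth filter."""
    b, a = butter(order, [low, high], btype="bandpass", fs=fs)
    filtered = filtfilt(b, a, signal)
    return np.abs(hilbert(filtered))

# Synthetic 'field potential': a sustained beta rhythm plus a
# high-gamma burst that switches on halfway through the trace.
fs = 1000.0
t = np.arange(0, 2.0, 1 / fs)
lfp = np.sin(2 * np.pi * 20 * t)                      # beta (15-30 Hz)
lfp += (t > 1.0) * 0.5 * np.sin(2 * np.pi * 110 * t)  # high-gamma (80-150 Hz)

beta_amp = band_amplitude(lfp, fs, 15, 30)
hg_amp = band_amplitude(lfp, fs, 80, 150)
print(f"beta amplitude (mean): {beta_amp[200:1800].mean():.2f}")
print(f"high-gamma before vs. after burst onset: "
      f"{hg_amp[200:900].mean():.3f} vs. {hg_amp[1100:1800].mean():.3f}")
```

The high-gamma envelope rises only after the burst begins while the β envelope stays flat, which is exactly the kind of dissociation used to separate putative input- and output-related activity in recordings.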
Next-generation devices towards 10k channel neurophysiology
Understanding how brains process information, and how dysfunction disrupts that processing, requires measuring patterns of neural activity with cellular and millisecond resolution. These patterns are by nature distributed: many brain regions contribute to any given act of sensation, perception, cognition, or behavior. Further, each region contains many neurons, and measuring activity from neural ensembles provides information that measurements of individual neurons cannot. However, current neural recording methods do not enable the acquisition of such data. While much effort has gone into the design of probes with hundreds of sensors, much less has been devoted to the electronics required to get large-scale data off the heads of animals with minimal wiring. At left is an example ~2000-channel neural interface chip (E-Chip, Peter Denes, LBNL). We collaborate on the development of next-generation neurophysiology systems that will enable simultaneous recordings from tens of thousands of channels, at millisecond resolution, distributed across multiple brain regions.
‘Governing equations’ of neural dynamics from noisy data
Brain function is an emergent property of the coordinated activity of multiple neuronal types that are widely distributed across brain regions. That is, the interaction of many components over time produces something fundamentally different from what could be predicted from the parts alone. Current statistical-machine learning tools provide insight into the properties of individual neurons and can characterize the properties of entire populations. However, understanding how neurons with different morphological, electrophysiological, and transcriptomic properties differentially contribute to population dynamics remains a major hurdle. Such an understanding is required to give biological interpretation to dynamical-systems models of populations, and to treat neurological disorders through targeted pharmacological interventions and optogenetic control. Addressing this challenge will require novel methods for extracting nonlinear ‘governing equations’ from high-dimensional, noisy time-series data with unobserved influences. A central area of research and development in our group focuses on methods for this long-term, grand-challenge problem.
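One established route to ‘governing equations’ from data is sparse regression over a library of candidate terms, as in SINDy-style model discovery. The toy sketch below (the dynamical system, noise level, and term library are illustrative choices, not our method) recovers the equations of a simple oscillator from noisy derivative observations:

```python
import numpy as np

def stlsq(theta, dxdt, threshold=0.1, iters=10):
    """Sequentially thresholded least squares, the core loop of
    sparse-regression model discovery (e.g. SINDy): fit, zero out
    small coefficients, refit using only the surviving terms."""
    xi = np.linalg.lstsq(theta, dxdt, rcond=None)[0]
    for _ in range(iters):
        small = np.abs(xi) < threshold
        xi[small] = 0.0
        for k in range(dxdt.shape[1]):
            keep = ~small[:, k]
            if keep.any():
                xi[keep, k] = np.linalg.lstsq(theta[:, keep], dxdt[:, k],
                                              rcond=None)[0]
    return xi

rng = np.random.default_rng(1)
t = np.linspace(0, 10, 2000)
x, y = np.cos(t), -np.sin(t)  # trajectory of the system x' = y, y' = -x
# Noisy derivative observations (analytic derivatives + measurement noise).
dxdt = np.column_stack([y, -x]) + 0.01 * rng.standard_normal((t.size, 2))

# Candidate library of terms the governing equations might contain.
theta = np.column_stack([np.ones_like(x), x, y, x * y, x**2])
xi = stlsq(theta, dxdt)
print(xi.round(2))  # nonzero only for x' = y and y' = -x
```

The hard open problems named above, nonlinearity at scale, heavy noise, and unobserved influences, are precisely where this simple recipe breaks down and where new methods are needed.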
Statistical-machine learning for enhanced inference and discovery in scientific data
Neuroscience researchers often implicitly or explicitly interpret the output of their data-analysis tools as reflecting the true state of nature. Therefore, neuroscience data analysis requires statistical machine learning algorithms that are simultaneously interpretable and predictive. By interpretable, we mean that inferred models yield insight into the physical processes that generated the data; by predictive, we mean that the model predicts the data with high accuracy. However, methods that achieve both are lacking. For example, while deep learning approaches (e.g., LSTMs) can achieve remarkable predictive accuracy on extremely complicated data sets, extracting physically interpretable insights from the learned models remains a central challenge. Slightly more formally, algorithms for data-driven discovery should be selective (only features that influence the response variable are selected), accurate (the estimated parameters are as close to their “real” values as possible), predictive (they allow prediction of the response variable), stable (they return the same values on multiple runs), and scalable (they return an answer in a reasonable amount of time on large data sets). We develop statistical machine learning methods focused on achieving all of these qualities, to enhance the extraction of understanding from complex scientific data.
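The ‘selective’ and ‘stable’ criteria can be operationalized by asking which features survive refitting across resampled data. The toy sketch below (synthetic data; the threshold and resample count are illustrative, and this is a simplified stand-in for stability-selection-style procedures, not our algorithm) keeps only features selected in every bootstrap resample:

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 200, 10
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[[0, 3]] = [2.0, -1.5]  # only two features truly matter
y = X @ beta_true + rng.standard_normal(n)

# Refit on bootstrap resamples and keep only features whose coefficients
# are consistently large -- an intersection over resamples, in the spirit
# of stability selection.
n_boot, thresh = 50, 0.25
counts = np.zeros(p)
for _ in range(n_boot):
    idx = rng.integers(0, n, n)
    coef = np.linalg.lstsq(X[idx], y[idx], rcond=None)[0]
    counts += np.abs(coef) > thresh
selected = np.flatnonzero(counts == n_boot)  # selected in every resample
print(selected)
```

Requiring agreement across all resamples makes the selection both stable (reruns give the same support) and selective (spurious features that exceed the threshold by chance in one resample rarely do so in all of them).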
Deep networks to learn scientifically meaningful representations
As scientific datasets grow in size and complexity, our models and methods must also grow to take advantage of this additional information. In neuroscience in particular, datasets for many different measurement techniques are growing in terms of the collection duration per subject or animal, the spatial and/or temporal sampling resolution, the number of functional or structural units of the brain captured, and in the complexity of the stimulus used or behavior performed during recording. This growth in dataset size and complexity is mirrored across many scientific domains. We aim to bring scientific interpretability to the representations learned by deep networks.
Datasets of this scale present an interesting scientific opportunity: deriving insight into the structure of natural systems by creating models that adapt themselves to the latent structure of large amounts of data, an approach often called data-driven hypothesis testing. Deep networks have been shown to consume huge amounts of data without saturating model performance, but it has not been shown generally that the latent representations or mappings they learn contain information or structure that is scientifically useful. In neuroscience, deep networks are an alternative and complementary approach to traditional neural data analysis methods. Traditional methods are often linear but interpretable, exemplifying a common tension in data analysis: the performance vs. interpretability tradeoff. These traditional methods often have lower predictive performance, which is incrementally improved through model tweaks and additions. Deep networks, in contrast, are state-of-the-art models for many applied machine learning tasks: they scale well to large datasets and can learn complex, nonlinear, high-dimensional mappings with high performance. But the representations learned by deep networks are often difficult to interpret. We are developing basic methods to enhance the interpretability of deep network representations, and we apply them to diverse scientific datasets.