Cortical coordination giving rise to behavior and perception
The generation of simple skilled behaviors, such as reaching to grasp objects, requires the rapid and precise coordination of many body parts (e.g., shoulder, elbow, wrist, digits). Because the representation of different body parts is spatially distributed across sensorimotor cortex (SMC), the coordination required for skilled behavior implies correspondingly coordinated activity across SMC during movement generation. Despite many experiments in SMC, we have limited understanding of the functional organization and coordination of the neuronal populations that produce skilled behaviors. Recent studies have emphasized the importance of local neural population dynamics; however, examination of larger-scale network dynamics is rare. Furthermore, linking dynamics to principles of representation has proven challenging. Compared to our understanding of motor networks, our knowledge of sensory processing is more developed. The concept of sparsity has proven very useful for understanding computations in sensory systems, and has recently been applied to spatial patterns of hippocampal field potential recordings; however, the role of sparsity in motor regions is not understood. We conjecture that primary motor regions transform sparse, independent representations of afferent messages from premotor areas into dense, coordinated efferent signals for driving motor actuators. Extending our recent findings in humans and rodents, we will investigate brain function across multiple spatiotemporal scales, examine how distributed representations are dynamically coordinated, and test hypotheses on the role of sparsity in cortical computations.
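Since the role of sparsity is central to this conjecture, it helps to fix a concrete measure. Below is a minimal numpy sketch of the Treves–Rolls population sparseness (in the normalized form where 1 is maximally sparse and 0 is uniformly dense); the "premotor-like" and "motor-like" patterns are synthetic stand-ins, not real data.

```python
import numpy as np

def treves_rolls_sparseness(rates):
    """Normalized Treves-Rolls population sparseness: 1 for a one-hot
    (maximally sparse) pattern, 0 for a uniform (dense) pattern."""
    r = np.asarray(rates, dtype=float)
    n = r.size
    ratio = r.mean() ** 2 / np.mean(r ** 2)
    return (1.0 - ratio) / (1.0 - 1.0 / n)

rng = np.random.default_rng(0)

# Hypothetical stand-ins: a sparse "premotor-like" pattern (few active units)
# and a dense "motor-output-like" pattern (most units active).
sparse_pattern = np.zeros(100)
sparse_pattern[rng.choice(100, size=5, replace=False)] = rng.uniform(1, 2, 5)
dense_pattern = rng.uniform(0.5, 1.5, 100)

s_sparse = treves_rolls_sparseness(sparse_pattern)
s_dense = treves_rolls_sparseness(dense_pattern)
print(f"sparse pattern: {s_sparse:.2f}, dense pattern: {s_dense:.2f}")
```

Under the conjecture above, premotor population patterns would score near 1 and motor-output patterns near 0 on such a measure.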
Predictive and Interpretable Statistical Machine Learning
Neuroscience researchers often implicitly or explicitly interpret the output of their data analysis tools as reflecting the true state of nature. Neuroscience data analysis therefore requires statistical machine learning algorithms that are simultaneously interpretable and predictive. By interpretable, we mean that inferred models yield insight into the physical processes that generated the data; by predictive, we mean that the model predicts the data with high accuracy. However, methods that achieve both are lacking. For example, while ‘deep learning’ approaches (e.g., LSTMs) can achieve remarkable predictive accuracy on extremely complicated data sets, extracting physically interpretable insights from the learned model remains a central challenge. Slightly more formally, algorithms for data-driven discovery should be selective (only features that influence the response variable are selected), accurate (the estimated parameters in the model are as close to the “real” values as possible), predictive (allow prediction of the response variable), stable (return the same values on multiple runs), and scalable (able to return an answer in a reasonable amount of time on large data sets). We have recently developed the Union of Intersections (UoI) method, which is simultaneously interpretable and predictive. UoI is a flexible, modular, and scalable framework that enhances both the identification of features (model selection) and the estimation of their contributions (model estimation), resulting in improved data prediction and interpretation in linear and non-linear regression/classification, as well as in parts-based matrix decompositions.
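The two-stage logic behind UoI (intersect supports across bootstraps for selection; union/bag the best estimates for prediction) can be illustrated with a toy numpy sketch. This is not the released UoI implementation: the simple ISTA lasso solver, the bootstrap counts, the regularization grid, and the synthetic data are all simplified stand-ins.

```python
import numpy as np

def lasso_ista(X, y, lam, n_iter=300):
    """Minimal ISTA solver for the lasso (illustrative, not optimized)."""
    n, p = X.shape
    L = np.linalg.norm(X, 2) ** 2 / n        # Lipschitz constant of the gradient
    beta = np.zeros(p)
    for _ in range(n_iter):
        grad = X.T @ (X @ beta - y) / n
        z = beta - grad / L
        beta = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return beta

def uoi_lasso_sketch(X, y, lambdas=(0.05, 0.1, 0.2), n_boot=5, seed=0):
    """Toy UoI: intersected supports for selection, bagged OLS for estimation."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    # Selection: per regularization strength, keep only features whose lasso
    # coefficient is nonzero in EVERY bootstrap sample (intersection).
    supports = []
    for lam in lambdas:
        sel = np.ones(p, dtype=bool)
        for _ in range(n_boot):
            idx = rng.integers(0, n, n)
            sel &= np.abs(lasso_ista(X[idx], y[idx], lam)) > 1e-3
        supports.append(sel)
    # Estimation: on each bootstrap, fit unpenalized OLS on every candidate
    # support, keep the best on out-of-bag samples, then average (union).
    estimates = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)
        oob = np.setdiff1d(np.arange(n), idx)
        best, best_err = np.zeros(p), np.inf
        for sel in supports:
            if not sel.any():
                continue
            b = np.zeros(p)
            b[sel] = np.linalg.lstsq(X[idx][:, sel], y[idx], rcond=None)[0]
            err = np.mean((y[oob] - X[oob] @ b) ** 2)
            if err < best_err:
                best, best_err = b, err
        estimates.append(best)
    return np.mean(estimates, axis=0)

# Synthetic sparse regression problem (all values hypothetical).
rng = np.random.default_rng(1)
n, p = 300, 8
beta_true = np.zeros(p)
beta_true[[0, 3, 5]] = [3.0, -2.0, 1.5]
X = rng.standard_normal((n, p))
y = X @ beta_true + 0.1 * rng.standard_normal(n)
beta_hat = uoi_lasso_sketch(X, y)
print(np.round(beta_hat, 2))
```

The intersection step drives selectivity (false features must survive every bootstrap), while the unpenalized bagged estimation step removes the lasso's shrinkage bias; in the real framework, these selection and estimation modules are swappable, which is what makes UoI applicable to classification and matrix decomposition as well.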
Deep Neural Networks for Large-Scale Scientific Data Analysis
As scientific datasets grow in size and complexity, our models and methods must also grow to take advantage of this additional information. In neuroscience in particular, datasets for many different measurement techniques are growing in terms of the collection duration per subject or animal, the spatial and/or temporal sampling resolution, the number of functional or structural units of the brain captured, and in the complexity of the stimulus used or behavior performed during recording. This growth in dataset size and complexity is mirrored across many scientific domains. We aim to bring scientific interpretability to the representations learned by deep networks.
Datasets of this scale present an interesting scientific opportunity: deriving insight into the structure of natural systems by creating models that adapt to the latent structure of large amounts of data, often called data-driven hypothesis testing. Deep networks have been shown to consume huge amounts of data without saturating model performance, but it has not been shown generally that the latent representations or mappings deep networks learn from data contain information or structure that is scientifically useful. In neuroscience, deep networks are an alternative and complementary approach to traditional neural data analysis methods. Traditional methods are often linear and interpretable but have lower predictive performance, which is typically improved only incrementally through model tweaks and additions; this reflects a common tension in data analysis techniques: the performance vs. interpretability tradeoff. Deep networks, by contrast, are state-of-the-art models in many applied machine learning tasks; they scale well to large datasets and can learn complex, nonlinear, high-dimensional mappings with high performance. However, the representations learned by deep networks are often difficult to interpret. We will determine whether supervised and unsupervised deep networks learn latent representations that are scientifically meaningful, and apply deep networks to scientific datasets at Lawrence Berkeley National Lab (LBNL).
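The performance vs. interpretability tradeoff can be seen in a toy comparison: an interpretable slope-plus-intercept linear fit against a tiny multilayer perceptron, both trained on the same nonlinear data. Everything here (the sine ground truth, the 1-16-1 architecture, the training schedule) is a hypothetical illustration, not one of the analysis methods discussed above.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, size=(400, 1))
y = np.sin(x) + 0.05 * rng.standard_normal((400, 1))   # nonlinear ground truth

# Interpretable model: slope + intercept, readable but unable to express sin().
A = np.hstack([x, np.ones_like(x)])
w = np.linalg.lstsq(A, y, rcond=None)[0]
lin_mse = np.mean((A @ w - y) ** 2)

# Tiny MLP (1-16-1, tanh) trained by full-batch gradient descent: far more
# expressive, but its 49 weights carry no direct physical meaning.
W1 = 0.5 * rng.standard_normal((1, 16)); b1 = np.zeros(16)
W2 = 0.5 * rng.standard_normal((16, 1)); b2 = np.zeros(1)
lr = 0.1
for _ in range(5000):
    h = np.tanh(x @ W1 + b1)                 # hidden activations
    err = (h @ W2 + b2) - y                  # prediction residual
    gW2 = h.T @ err / len(x); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h ** 2)         # backprop through tanh
    gW1 = x.T @ dh / len(x); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1
mlp_mse = np.mean(((np.tanh(x @ W1 + b1) @ W2 + b2) - y) ** 2)

print(f"linear MSE: {lin_mse:.3f}, MLP MSE: {mlp_mse:.3f}")
```

The linear coefficient in `w` can be read directly as an effect size; no such reading exists for `W1`/`W2`, which is exactly the interpretability cost the text describes.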
Biological systems are organized and function across a large range of spatial and temporal scales. One of the most complicated multi-scale biological systems is the brain, in which myriad neurons are organized into microcircuits (‘columns’) that perform specific computations but are simultaneously integrated into larger networks. Investigating the activity of individual neurons and small neuronal populations has yielded exquisite insight into microcircuit mechanisms of local computations, while macroscale measurements (e.g., fMRI) have revealed global processing across entire brain areas. Additionally, different frequency bands within the electrophysiological field potential (FP) reflect different biological processes. For example, high-gamma (Hγ: 80-150 Hz) activity directly reflects multi-unit neuronal spiking, while lower frequency components (e.g., β: 15-30 Hz) are thought to reflect synapto-dendritic currents. Thus, low frequencies might reflect inputs to neuronal populations, while high-gamma activity mainly reflects their spiking output. How local neural processing is organized and coordinated in the context of broader neural networks, and how this is reflected in spiking activity and different frequency bands, is poorly understood.
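The band decomposition just described can be made concrete with a simple spectral computation. Below is a minimal numpy sketch on a synthetic field potential; the specific frequencies, amplitudes, and sampling rate are hypothetical stand-ins for a real recording.

```python
import numpy as np

def band_power(signal, fs, lo, hi):
    """Mean power of `signal` within the [lo, hi] Hz band, via the FFT."""
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    band = (freqs >= lo) & (freqs <= hi)
    return spectrum[band].sum() / signal.size

# Hypothetical field potential: a 20 Hz "beta" rhythm plus a weaker
# 110 Hz "high-gamma" component and broadband noise.
fs = 1000.0
t = np.arange(0, 2.0, 1.0 / fs)
rng = np.random.default_rng(0)
fp = (np.sin(2 * np.pi * 20 * t)
      + 0.4 * np.sin(2 * np.pi * 110 * t)
      + 0.1 * rng.standard_normal(t.size))

beta_power = band_power(fp, fs, 15, 30)    # synapto-dendritic-range band
hg_power = band_power(fp, fs, 80, 150)     # spiking-related band
print(f"beta power: {beta_power:.1f}, high-gamma power: {hg_power:.1f}")
```

Separating FP power into these bands is what allows putative inputs (low frequencies) and outputs (high-gamma) of a neuronal population to be tracked from the same electrode.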
Development of neural recording and manipulation tools
Understanding the nervous system requires the integration of neural recording and manipulation tools. We are developing new large-scale network recording capabilities and circuit manipulation tools to link neural activity to behavior. This requires a system to:
1. Record from many thousands of channels across multiple spatial scales.
2. Provide simultaneous large-area coverage and high density in three dimensions.
3. Record stable neural activity on time-scales from milliseconds to hours.
4. Scale to 10s-100s of thousands of channels.
5. Selectively manipulate neural activity at multiple locations.
6. Simultaneously record and manipulate neural activity.
7. Analyze neural data in real-time for closed-loop control of neural circuits.
8. Be compatible with recordings in awake, behaving animals.
9. Minimize the number of wires to and from the animal.
Additionally, to better understand the human brain, this system should:
10. Record and analyze signals similar to those used in humans.
11. Record and manipulate the neural activity that generates these signals.
Our diverse and unique team brings together expertise in integrated electronics, silicon fabrication, large-scale neural recording, data acquisition, and real-time computing. Our goal is to develop an integrated, flexible system for massive-channel-count, 3-D electrophysiology and optical manipulation of brain networks, with real-time neural analysis, that is scalable to 10s-100s of thousands of channels.