• Tools and Resources
  • Customer Services
  • Cognitive Neuroscience
  • Computational Neuroscience
  • Development
  • Disorders of the Nervous System
  • Invertebrate Neuroscience
  • Molecular and Cellular Systems
  • Motor Systems
  • Neuroendocrine and Autonomic Systems
  • Sensory Systems
  • Share This Facebook LinkedIn Twitter

Article contents

Deep neural networks in computational neuroscience.

  • Tim C. Kietzmann , Tim C. Kietzmann MRC Cognition and Brain Science Unit, University of Cambridge
  • Patrick McClure Patrick McClure MRC Cognition and Brain Science Unit, University of Cambridge
  •  and  Nikolaus Kriegeskorte Nikolaus Kriegeskorte MRC Cognition and Brain Science Unit, University of Cambridge; Department of Psychology, Columbia University
  • https://doi.org/10.1093/acrefore/9780190264086.013.46
  • Published online: 25 January 2019

The goal of computational neuroscience is to find mechanistic explanations of how the nervous system processes information to give rise to cognitive function and behavior. At the heart of the field are its models, that is, mathematical and computational descriptions of the system being studied, which map sensory stimuli to neural responses and/or neural to behavioral responses. These models range from simple to complex. Recently, deep neural networks (DNNs) have come to dominate several domains of artificial intelligence (AI). As the term “neural network” suggests, these models are inspired by biological brains. However, current DNNs neglect many details of biological neural networks. These simplifications contribute to their computational efficiency, enabling them to perform complex feats of intelligence, ranging from perceptual (e.g., visual object and auditory speech recognition) to cognitive tasks (e.g., machine translation), and on to motor control (e.g., playing computer games or controlling a robot arm). In addition to their ability to model complex intelligent behaviors, DNNs excel at predicting neural responses to novel sensory stimuli with accuracies well beyond any other currently available model type. DNNs can have millions of parameters, which are required to capture the domain knowledge needed for successful task performance. Contrary to the intuition that this renders them into impenetrable black boxes, the computational properties of the network units are the result of four directly manipulable elements: input statistics , network structure , functional objective , and learning algorithm . With full access to the activity and connectivity of all units, advanced visualization techniques, and analytic tools to map network representations to neural data, DNNs represent a powerful framework for building task-performing models and will drive substantial insights in computational neuroscience.

  • deep neural networks
  • deep learning
  • convolutional neural networks
  • objective functions
  • levels of abstraction
  • modeling the brain
  • input statistics
  • biological detail

Explaining Brain Information Processing Requires Complex, Task-Performing Models

The goal of computational neuroscience is to find mechanistic explanations for how the nervous system processes information to support cognitive function as well as adaptive behavior. Computational models, that is, mathematical and computational descriptions of component systems, aim to capture the mapping of sensory input to neural responses and furthermore to explain representational transformations, neuronal dynamics, and the way the brain controls behavior. The overarching challenge is therefore to define models that explain neural measurements as well as complex adaptive behavior. Historically, computational neuroscientists have had successes with shallow, linear–nonlinear “tuning” models used to predict lower-level sensory processing. Yet the brain is a deep recurrent neural network that exploits multistage nonlinear transformations and complex dynamics. It therefore seems inevitable that computational neuroscience will come to rely increasingly on complex models, likely from the family of deep recurrent neural networks. The need for multiple stages of nonlinear computation has long been appreciated in the domain of vision, by both experimentalists (Hubel & Wiesel, 1959 ) and theorists (Fukushima, 1980 ; LeCun & Bengio, 1995 ; Riesenhuber & Poggio, 1999 ; Wallis & Rolls, 1997 ).

The traditional focus on shallow models was motivated both by the desire for simple explanations and by the difficulty of fitting complex models. Hand-crafted features, which laid the basis of modern computational neuroscience (Jones & Palmer, 1987 ), do not carry us beyond restricted lower-level tuning functions. As an alternative approach, researchers started directly using neural data to fit model parameters (Dumoulin & Wandell, 2008 ; Wu, David, & Gallant, 2006 ). This approach was shown to be particularly successful for early visual processes (Cadena et al., 2017 ; Gao & Ganguli, 2015 ; Maheswaranathan et al., 2018 ). Despite its elegance, importance, and success, this approach is ultimately limited by the number of neural observations that can be collected from a given system. Even with neural measurement technology advancing rapidly (multi-site array recordings, two-photon imaging, and neuropixels, to name just a few), the amount of recordable data may not provide enough constraints to fit realistically complex, that is, parameter-rich, models. For instance, while researchers can now record separately from hundreds of individual neurons, and the number of stimuli used may approach 10,000, the numbers of parameters in typically used deep neural networks (DNNs) are many orders of magnitude larger. For instance, the influential object recognition network “AlexNet” has 60 million parameters (Krizhevsky, Sutskever, & Hinton, 2012 ), and a more recent object recognition network, VGG-16, has 138 million parameters (Simonyan & Zisserman, 2015 ). This high number is required to encode substantial domain knowledge, which is required for intelligent behavior. Transferring this information into the model through the bottleneck of neural measurements alone is likely too inefficient for understanding and performing real-world tasks.

In search for a solution to this conundrum, the key insight was the idea that rather than fitting parameters based on neural observations, models could instead be trained to perform relevant behavior in the real world. This approach brings machine learning to bear on models for computational neuroscience, enabling researchers to constrain the model parameters via task training. In the domain of vision, for instance, category-labeled sets of training images can easily be assembled using web-based technologies, and the amount of available data can therefore be expanded more easily than for measurements of neural activity. Of course, different models trained to perform a relevant task (such as object recognition, if one tried to understand computations in the primate ventral stream) might differ in their ability to explain neural data. Testing which model architectures, input statistics, and learning objectives yield the best predictions of neural activity in novel experimental conditions (e.g., a set of images that has not been used in fitting the parameters) is thus a powerful technique to learn about the computational mechanisms that might underlie the neural responses. The combined use of task training and neural data thereby enables us to build complex models with extensive knowledge about the world in order to explain how biological brains implement cognitive function.

Brain-Inspired Neural Network Models Are Revolutionizing Artificial Intelligence and Exhibit Rich Potential for Computational Neuroscience

Neural network models have become a central class of models in machine learning (Figure 1 ). Driven to optimize task performance, researchers developed and improved model architectures, hardware, and training schemes that eventually led to today’s high-performance DNNs. These models have revolutionized several domains of AI (LeCun, Bengio, & Hinton, 2015 ). Starting with the seminal work by Krizhevsky et al. ( 2012 ), who won the ImageNet competition in visual object recognition by a large margin, deep neural networks now dominate computer vision (He, Zhang, Ren, & Sun, 2016 ; Simonyan & Zisserman, 2015 ; Szegedy et al., 2015 ) and drove reinforcement learning (Lange & Riedmiller, 2010 ; Mnih et al., 2015 ), speech recognition (Sak, Senior, & Beaufays, 2014 ), machine translation (Sutskever, Vinyals, & Le, 2014 ; Wu et al., 2016 ), and many other domains to unprecedented performance levels. In terms of visual processing, deep convolutional, feed-forward networks (CNNs) now achieve human-level classification performance (VanRullen, 2017 ).

Figure 1. Convolutional neural network structure. (A) An example of a feed-forward convolutional neural network (CNN) with several convolutional layers followed by a fully connected layer. Bottom-up receptive fields for selected neurons are illustrated with blue boxes. (B) The bottom-up (blue), lateral (green), and top-down (red) receptive fields for two example neurons in different layers of a recurrent convolutional neural network (RCNN) are shown.

Although originally inspired by biology, current DNNs implement only the most essential features of biological neural networks. They are composed of simple units that typically compute a linear combination of their inputs and pass the result through a static nonlinearity (e.g., setting negative values to zero). Similar to the ventral stream in the brain, convolutional neural networks process images through a sequence of visuotopic representations: each unit “sees” a restricted local region of the map in the previous layer (its receptive field), and similar feature detectors exist across spatial locations (although this is only approximately true in the primate brain). Along the hierarchy, CNNs and brains furthermore perform a deep cascade of nonlinear computations, resulting in receptive fields that increase in size, invariance, and complexity. Beyond these similarities, DNNs typically do not include many biological details. For instance, they often do not include lateral or top-down connections, and compute continuous outputs (real numbers that could be interpreted as firing rates) rather than spikes. The list of features of biological neural networks not captured by these models is endless.

Yet despite large differences and many biological features missing, deep convolutional neural networks predict functional signatures of primate visual processing across multiple hierarchical levels at unprecedented accuracy. Trained to recognize objects, they develop V1-like receptive fields in early layers, and are predictive of single cell recordings in macaque inferotemporal cortex (IT) (Cadieu et al., 2014 ; Khaligh-Razavi & Kriegeskorte, 2014 ; for reviews see Kriegeskorte, 2015 ; Yamins et al., 2014 ; Yamins & DiCarlo, 2016 ; Figure 2A ). In particular, the explanatory power of DNNs is on a par with the performance of linear prediction based on an independent set of IT neurons and exceeds linear predictions based directly on the category labels on which the networks were trained (Yamins et al., 2014 ). DNNs explain about 50% of the variance of windowed spike counts in IT across individual images (Yamins et al., 2014 ), a performance level comparable to that achieved with Gabor models in V1 (Olshausen & Field, 2005 ). DNNs thereby constitute the only model class in computational neuroscience that is capable of predicting responses to novel images in IT with reasonable accuracy. DNN modeling has also been shown to improve predictions of intermediate representations in area V4 over alternative models (Yamins & DiCarlo, 2016 ). This indicates that, in order to solve the task of object classification, the trained network passes information through a similar sequence of intermediate representations as does the primate brain.

computational neuroscience research topics

Figure 2. Testing the internal representations of DNNs against neural data. (A) An example of neuron-level encoding with a convolutional neural network (adapted from Yamins & DiCarlo, 2016 ). (B) A CNN-based encoding model applied to human fMRI data (adapted from Güçlü & van Gerven, 2015 ). (C) Comparing the representational geometry of a trained CNN to human and monkey brain activation patterns using representation-level similarity analysis (adapted from Khaligh-Razavi & Kriegeskorte, 2014 ).

In human neuroscience too, DNNs have proven capable of predicting representations across multiple levels of processing. Whereas lower network layers better predict lower-level visual representations, subsequent, higher layers better predict activity in higher, more anterior cortical areas, as measured with functional magnetic resonance imaging (Eickenberg, Gramfort, & Thirion, 2016 ; Güçlü & van Gerven, 2015 ; Khaligh-Razavi & Kriegeskorte, 2014 ; Figure 2B – C ). In line with results from macaque IT, DNNs were furthermore able to explain within-category neural similarities, despite being trained on a categorization task that aims at abstracting away from differences across category exemplars (Khaligh-Razavi & Kriegeskorte, 2014 ). At a lower spatial, but higher temporal, resolution, DNNs have also been shown to be predictive of visually evoked magnetoencephalography (MEG) data (Cichy, Khosla, Pantazis, & Oliva, 2016 ; Cichy, Khosla, Pantazis, Torralba, & Oliva, 2016 ; Seeliger et al., 2018 ). On the behavioral level, deep networks exhibit similar behavior to humans (Hong, Yamins, Majaj, & DiCarlo, 2016 ; Kheradpisheh, Ghodrati, Ganjtabesh, & Masquelier, 2016a , 2016b ; Kubilius, Bracci, & Op de Beeck, 2016 ; Majaj, Hong, Solomon, & DiCarlo, 2015 ) and are currently the best-performing model in explaining human eye movements in free viewing paradigms (Kümmerer, Theis, & Bethge, 2014 ). Despite these advances, however, current DNNs still exhibit substantial differences in how they process and recognize visual stimuli (Linsley, Eberhardt, Sharma, Gupta, & Serre, 2017 ; Rajalingham et al., 2018 ; Ullman, Assif, Fetaya, & Harari, 2016 ), how they generalize to atypical category instances (Saleh, Elgammal, & Feldman, 2016 ), and how they perform under image manipulations, including reduced contrast and additive noise (Geirhos et al., 2017 ). Yet the overall success clearly illustrates the power of DNN models for computational neuroscience.

How Can Deep Neural Networks Be Tested With Brain and Behavioral Data?

DNNs are often trained to optimize external task objectives rather than being derived from neural data. However, even human-level performance does not imply that the underlying computations employ the same mechanisms (Ritter, Barrett, Santoro, & Botvinick, 2017 ). Testing models with neural measurements is therefore crucial to assess how well network-internal representations match cortical responses. Fortunately, computational neuroscience has a rich toolbox at its disposal that allows researchers to probe even highly complex models, including DNNs (Diedrichsen & Kriegeskorte, 2017 ).

One such tool is the class of encoding models, which use external, fixed feature spaces in order to model neural responses across a large variety of experimental conditions (e.g., different stimuli, Figure 2A – B ). The underlying idea is that if the model and the brain compute similar features, then linear combinations of the model features should enable successful prediction of the neural responses for independent experimental data (Naselaris, Kay, Nishimoto, & Gallant, 2011 ). For visual representations, the model feature space can be derived from simple filters, such as Gabor wavelets (Kay, Naselaris, Prenger, & Gallant, 2008 ), from human labeling of the stimuli (Huth, Nishimoto, Vu, & Gallant, 2012 ; Mitchell et al., 2008 ; Naselaris, Prenger, Kay, Oliver, & Gallant, 2009 ), or from responses in different layers of a DNN (Agrawal, Stansbury, Malik, & Gallant, 2014 ; Güçlü & van Gerven, 2015 ).

Probing the system on the level of multivariate response patterns, representational similarity analysis (RSA) (Kriegeskorte & Kievit, 2013 ; Kriegeskorte, Mur, & Bandettini, 2008 ; Nili et al., 2014 ) provides another approach to comparing internal representations in DNNs and the brain (Figure 2C ). RSA is based around the concept of a representational dissimilarity matrix (RDM), which stores the dissimilarities of a system’s responses (neural or model) to all pairs of experimental conditions. RDMs can therefore be interpreted as describing representational geometries: conditions that elicit similar responses are close together in response space, whereas conditions that lead to differential responses will have larger distances. A model representation is considered similar to a brain representation to the degree that it emphasizes the same distinctions among the stimuli, that is, the model and brain are considered similar if they elicit similar RDMs. Comparisons on the level of RDMs sidestep the problem of defining a correspondence mapping between the units of the model and the channels of brain-activity measurement. This approach can be applied to voxels in functional magnetic resonance imaging (fMRI) (Carlin, Calder, Kriegeskorte, Nili, & Rowe, 2011 ; Guntupalli, Wheeler, & Gobbini, 2016 ; Khaligh-Razavi & Kriegeskorte, 2014 ; Kietzmann, Swisher, König, & Tong, 2012 ), single-cell recordings (Kriegeskorte et al., 2008 ; Leibo, Liao, Freiwald, Anselmi, & Poggio, 2017 ; Tsao, Moeller, & Freiwald, 2008 ), magneto- and electroencephalography (M/EEG) data (Cichy, Pantazis, & Oliva, 2014 ; Kietzmann, Gert, Tong, & König, 2017 ), and behavioral measurements including perceptual judgments (Mur et al., 2013 ).

Although the internal features in a model and the brain may be similar, the distribution of features may not parallel the neural selectivity observed in neuroimaging data. This can either be due to methodological limitations of the neuroimaging technique, or because respective brain area exhibits a bias for certain features that is not captured in the model. To account for such deviations, mixed RSA provides a technique to recombine model features to best explain the empirical data (Khaligh-Razavi, Henriksson, Kay, & Kriegeskorte, 2017 ). The increase in explanatory power due to this reweighting thereby directly speaks to the question of in how far the original, non-reweighted feature space contained the correct feature distribution, relative to the brain measurements.

On the behavioral level, recognition performance (Cadieu et al., 2014 ; Hong et al., 2016 ; Majaj et al., 2015 ; Rajalingham et al., 2018 ), perceptual confusions, and illusions provide valuable clues as to how representations in brains and DNNs may differ. For instance, it can be highly informative to understand the detailed patterns of errors (Walther, Caddigan, Fei-Fei, & Beck, 2009 ) and reaction times across stimuli, which may reveal subtle functional differences between systems that exhibit the same overall level of task performance. Visual metamers (Freeman & Simoncelli, 2011 ; Wallis, Bethge, & Wichmann, 2016 ) provide a powerful tool to test for similarities in internal representations across systems. Given an original image, a modified version is created that nevertheless leads to an unaltered model response (for instance, the activation profile of a DNN layer). For example, if a model was insensitive to a selected band of spatial frequencies, then modifications in this particular range will remain unnoticed by the model. If the human brain processed the stimuli via the same mechanism as the model, it should similarly be insensitive to such changes. The two images are therefore indistinguishable (“metameric”) to the model and the brain. Conversely, an adversarial example is a minimal modification of an image that elicits a different category label from a DNN (Goodfellow, Shlens, & Szegedy, 2015 ; Nguyen, Yosinski, & Clune, 2015 ). For convolutional feed-forward networks, minimal changes to an image (say of a bus), which are imperceptible to humans, lead the model to classify the image incorrectly (say as an ostrich). Adversarial examples can be generated using the backpropagation algorithm down to the level of the image, to find the gradients in image space that change the classification output. This method requires omniscient access to the system, making it impossible to perform a fair comparison with biological brains, which might likewise be confused by stimuli designed to exploit the idiosyncratic aspects (Elsayed et al., 2018 ; Kriegeskorte, 2015 ). The more general lesson for computational neuroscience is that metamers and adversarial examples provide methods for designing stimuli for which different representations disagree maximally. This can optimize the power to adjudicate between alternative models experimentally.

Ranging across levels of description and modalities of brain-activity measurement, from responses in single neurons, to array recordings, fMRI and MEG data, and behavior, the methods described here enable computational neuroscientists to investigate the similarities and differences between models and neural responses. This essential element is required to be able to find an answer to the question of which biological detail and set of computational objectives is needed to align the internal representations of brains and DNNs, while exhibiting successful task performance.

Drawing Insights From Deep Neural Network Models

Deep learning has transformed machine learning and only recently found its way back into computational neuroscience. Despite their high performance in terms of predicting held-out neural data, DNNs have been met with skepticism regarding their explanatory value as models of brain information processing (e.g., Kay, 2017 ). One of the arguments commonly put forward is that DNNs merely exchange one impenetrably complex system for another (the “black box” argument). That is, while DNNs may be able to predict neural data, researchers now face the problem of understanding what exactly the network is doing.

The black box argument is best appreciated in historical context. Shallow models are easier to understand and supported by stronger mathematical results. For example, the weight template of a linear–nonlinear model can be directly visualized and understood in relation to the concept of an optimal linear filter. Simple models can furthermore enable researchers to understand the role of each individual parameter. A model with fewer parameters is therefore considered more parsimonious as a theoretical account. It is certainly true that simpler models should be preferred over models with excessive degrees of freedom. Many seminal explanations in neuroscience have been derived from simple models. This argument only applies, however, if the two models provide similar predictive power. Models should be as simple as possible, but no simpler. Because the brain is a complex system with billions of parameters (presumably containing the domain knowledge required for adaptive behavior) and complex dynamics (which implement perceptual inference, cognition, and motor control), computational neuroscience will eventually need complex models. The challenge for the field is therefore to find ways to draw insight from them. One way is to consider their constraints at a higher level of abstraction. The computational properties of DNNs can be understood as the result of four manipulable elements: the network architecture , the input statistics , the functional objective , and the learning algorithm .

Insights Generated at a Higher Level of Abstraction: Experiments With Network Architecture, Input Statistics, Functional Objective, and the Learning Algorithm

A worthwhile thought experiment for neuroscientists is to consider what cortical representations would develop if the world were different. Governed by different input statistics, a different distribution of category occurrences, or different temporal dependency structure, the brain and its internal representations may develop quite differently. Knowledge of how it would differ can provide us with principal insights into the objectives that it tries to solve. Deep learning allows computational neuroscientists to make this thought experiment a simulated reality (Mehrer, Kietzmann, & Kriegeskorte, 2017 ). Investigations of which aspects of the simulated world are crucial to render the learned representations more similar to the brain thereby serve an essential function.

In addition to changes in input statistics, the network architecture can be subject to experimentation. Current DNNs derive their power from bold simplifications. Although complex in terms of their parameter count, they are simple in terms of their component mechanisms. Starting from this abstract level, biological details can be integrated in order to see which ones prove to be required, and which ones do not, for predicting a given neural phenomenon. For instance, it can be asked whether neural responses in a given paradigm are best explained by a feed-forward or a recurrent network architecture. Biological brains draw from a rich set of dynamical primitives. It will therefore be interesting to see to what extent incorporating more biologically inspired mechanisms can enhance the power of DNNs and their ability to explain neural activity and animal behavior.

Given input statistics and architecture, the missing determinants that transform the randomly initialized model into a trained DNN are the objective function and the learning algorithm. The idea of normative approaches is that neural representations in the brain can be understood as being optimized with regard to one or many overall objectives. These define what the brain should compute, in order to provide the basis for successful behavior. While experimentally difficult to investigate, deep learning trained on different objectives allows researchers to ask the directly related inverse question: what functions need to be optimized such that the resulting internal representations best predict neural data? Various objectives have been suggested in both the neuroscience and machine learning community. Feed-forward convolutional DNNs are often trained with the objective to minimize classification error (Krizhevsky et al., 2012 ; Simonyan & Zisserman, 2015 ; Yamins & DiCarlo, 2016 ). This focus on classification performance has proven quite successful, leading researchers to observe an intriguing correlation: classification performance is positively related to the ability to predict neural data (Khaligh-Razavi & Kriegeskorte, 2014 ; Yamins et al., 2014 ). That is, the better the network performed on a given image set, the better it could predict neural data, even though the latter was never part of the training objective. Despite its success, the objective to minimize classification error in a DNN for visual object recognition requires millions of labeled training images. Although the finished product, the trained DNN, provides the best current predictive model of ventral stream responses, the process by which the model is obtained therefore is not biologically plausible.

To address this issue, additional objective functions from the unsupervised domain have been suggested, allowing the brain (and DNNs) to create error signals without external feedback. One influential suggestion is that neurons in the brain aim at an efficient sparse code, while faithfully representing the external information (Olshausen & Field, 1996 ; Simoncelli & Olshausen, 2001 ). Similarly, compression-based objectives aim to represent the input with as few neural dimensions as possible. Autoencoders are one model class following this coding principle (Hinton & Salakhutdinov, 2006 ). Exploiting information from the temporal domain, the temporal stability or slowness objective is based on the insight that latent variables that vary slowly over time are useful for adaptive behavior. Neurons should therefore detect the underlying, slowly changing signals, while disregarding fast changes likely due to noise. This potentially simplifies readout from downstream neurons (Berkes & Wiskott, 2005 ; Földiák, 1991 ; Kayser, Körding, & König, 2003 ; Kayser, Einhäuser, Dümmer, König, & Körding, 2001 ; Körding, Kayser, Einhäuser, & König, 2004 ; Rolls, 2012 ; Wiskott & Sejnowski, 2002 ). Stability can be optimized across layers in hierarchical systems, if each subsequent layer tries to find an optimally stable solution from the activation profiles in previous layer. This approach was shown to lead to invariant codes for object identity (Franzius, Wilbert, & Wiskott, 2008 ) and viewpoint-invariant place selectivity (Franzius, Sprekeler, & Wiskott, 2007 ; Wyss, König, & Verschure, 2006 ). Experimental evidence in favor of the temporal stability objective in the brain has been provided by electrophysiological and behavioral studies (Li & DiCarlo, 2008 , 2010 ; Wallis & Bülthoff, 2001 ).

Many implementations of classification, sparseness, and stability objectives ignore the action repertoire of the agent. Yet different cognitive systems living in the same world may exhibit different neural representations because the requirements to optimally support action may differ. Deep networks optimizing the predictability of the sensory consequence (Weiller, Märtin, Dähne, Engel, & König, 2010 ) or the cost of a given action (Mnih et al., 2015 ) have started incorporating the corresponding information. More generally, it should be noted that there are likely multiple objectives that the brain optimizes across space and time (Marblestone, Wayne, & Kording, 2016 ), and neural response patterns may encode multiple types of information simultaneously, enabling selective read-out by downstream units (DiCarlo & Cox, 2007 ).

In summary, one way to draw theoretical insights from DNN models is to explore what architectures, input statistics, objective functions, and learning algorithms yield the best predictions for neural activity and behavior. This approach does not elucidate the role of individual units or connections in the brain. However, it can reveal which features of biological structure likely support selected functional aspects, and what objectives the biological system might be optimized for, either via evolutionary pressure or during the development of the individual.

Looking Into the Black Box: Receptive Field Visualization and “In Silico” Electrophysiology

In addition to contextualizing DNNs on a more abstract level, we can also open the “black box” and look inside. Unlike a biological brain, a DNN model is entirely accessible to scrutiny and manipulation, enabling, for example, high-throughput “in silico” electrophysiology. The latter can be used to gain an intuition for the selectivity of individual units. For instance, large and diverse image sets can be searched for the stimuli that lead to maximal unit activation (Figure 3 ). Building on this approach, the technique of network dissection has emerged, which provides a more quantitative view on unit selectivity (Zhou, Bau, Oliva, & Torralba, 2017 ). It uses a large data set of segmented and labeled stimuli to first find images and image regions that maximally drive network units. Based on the ground-truth labels for these images, it is then derived whether the unit’s selectivity is semantically consistent across samples. If so, an interpretable label, ranging from color selectivity to different textures, object parts, objects, and whole scenes, is assigned to the unit. This characterization can be applied to all units of a network layer, providing powerful summary statistics.

computational neuroscience research topics

Figure 3. Visualizing the preferred features of internal neurons. (A) Activations in a random subset of feature maps across layers for strongly driving ImageNet images projected down to pixel space (adapted from Zeiler & Fergus, 2014 ). (B) Feature visualization based on image optimization for two example units. Starting from pure noise, images were altered to maximally excite or inhibit the respective network unit. Maximally and minimally driving example stimuli are shown next to the optimization results (adapted from Olah et al., 2017 ).

Another method for understanding a unit’s preferences is via feature visualization, a rapidly expanding set of diverse techniques that directly speak to the desire for human interpretability beyond example images. One of many ways to visualize what image features drive a given unit deep in a neural network is to approximately undo the operations performed by a convolutional DNN in the context of a given image (Zeiler & Fergus, 2014 ). This results in visualizations such as those shown in Figure 3A . A related technique is feature visualization by optimization (see Olah, Mordvintsev, & Schubert ( 2017 ) for a review), which is based on the idea to use backpropagation (Rumelhart, Hinton, & Williams, 1986 ), potentially including a natural image prior, to calculate the change in the input needed to drive or inhibit the activation of any unit in a DNN (Simonyan & Zisserman, 2015 ; Yosinski, Clune, Nguyen, Fuchs, & Lipson, 2015 ). As one option, the optimization can be started from an image that already strongly drives the unit, computing a gradient in image space that enhances the unit’s activity even further. The gradient-adjusted image shows how small changes to the pixels affect the activity of the unit. For example, if the image that is strongly driving the unit shows a person next to a car, the corresponding gradient image might reveal that it is really the face of the person driving the unit’s response. In that case, the gradient image would deviate from zero only in the region of the face, and adding it to the original image would accentuate the facial features. Relatedly, optimization can be started from an arbitrary image, with the goal of enhancing the activity of a single or all units in a given layer (as iteratively performed in Google’s DeepDream). Another option is to start from pure noise images, and to again use backpropagation to iteratively optimize the input to strongly drive a particular unit. This approach yields complex psychedelic-looking patterns containing features and forms, which the network has learned through its task training (Figure 3B ). Similar to the previous approach that characterizes a unit by finding maximally driving stimuli, gradient images are best derived from many different test images to get a sense of the orientation of its tuning surface around multiple reference points (test images). Relatedly, it is important to note that the tuning function of a unit deep in a network cannot be characterized by a single visual template. If it could, there would be no need for multiple stages of nonlinear transformation. However, the techniques described in this section can provide first intuitions about unit selectivities across different layers or time points.

DNNs can provide computational neuroscientists with a powerful tool, and are far from a black box. Insights can be generated by looking at the parameters of DNN models at a more abstract level, for instance, by observing the effects on predictive performance resulting from changes to the network architecture, input statistics, objective function, and learning algorithm. Furthermore, in silico electrophysiology enables researchers to measure and manipulate every single neuron, in order to visualize and characterize its selectivity and role in the overall system.

What Neurobiological Details Matter to Brain Computation?

A second concern about DNNs is that they abstract away too much from biological reality to be of use as models for neuroscience. Whereas the black box argument states that DNNs are too complex, the biological realism argument states that they are too simple. Both arguments have merit. It is conceivable that a model is simultaneously too simple (in some ways) and too complex (in other ways). However, this raises a fundamental question: which features of the biological structure should be modeled and which omitted to explain brain function (Tank, 1989 )?

Abstraction is the essence of modeling and is the driving force of understanding. If the goal of computational neuroscience is to understand brain computation , then we should seek the simplest models that can explain task performance and predict neural data. The elements of the model should map onto the brain at some level of description. However, what biological elements must be modeled is an empirical question. DNNs are important not because they capture many biological features, but because they provide a minimal functioning starting point for exploring what biological details matter to brain computation. If, for instance, spiking models outperformed rate-coding models at explaining neural activity and task performance (for example, in tasks requiring probabilistic inference [Buesing, Bill, Nessler, & Maass, 2011 ]), then this would be strong evidence in favor of spiking models. Large-scale models will furthermore enable an exploration of the level of detail required in systems implementing the whole perception–action cycle (Eliasmith et al., 2012 ; Eliasmith & Trujillo, 2014 ).

Convolutional DNNs such as AlexNet (Krizhevsky et al., 2012 ) and VGG (Simonyan & Zisserman, 2015 ) were built to optimize performance rather than biological plausibility. However, these models draw from a history of neuroscientific insight and share many qualitative features with the primate ventral stream. The defining property of convolutional DNNs is the use of convolutional layers. These have two main characteristics: (1) local connections that define receptive fields and (2) parameter sharing between neurons across the visual field. Whereas spatially restricted receptive fields are a prevalent biological phenomenon, parameter sharing is biologically implausible. However, biological visual systems learn qualitatively similar sets of basis features in different parts of a retinotopic map, and similar results have been observed in models optimizing a sparseness objective (Güçlü & van Gerven, 2014 ; Olshausen & Field, 1996 ). Moving toward greater biological plausibility with DNNs, locally connected layers that have receptive fields without parameter sharing have been suggested (Uetz & Behnke, 2009 ). Researchers have already started exploring this type of DNN, which was shown to be very successful in face recognition (Sun, Wang, & Tang, 2015 ; Taigman, Ranzato, Aviv, & Park, 2014 ). One reason for this is that locally connected layers work best in cases where similar features are frequently present in the same visual arrangement, such as faces. In the brain, retinotopic organization principles have been proposed for higher-level visual areas (Levy, Hasson, Avidan, Hendler, & Malach, 2001 ), and similar organization mechanisms may have led to faciotopy, the spatially stereotypical activation for facial features across the cortical surface in face-selective regions (Henriksson, Mur, & Kriegeskorte, 2015 ).

Beyond the Feed-Forward Sweep: Recurrent DNNs

Another aspect in which convolutional AlexNet and VGG deviate from biology is the focus on feed-forward processing. Feed-forward DNNs compute static functions and are therefore limited to modeling the feed-forward sweep of signal flow through a biological system. Yet recurrent connections are a key computational feature in the brain, and represent a major research frontier in neuroscience. In the visual system, too, recurrence is a ubiquitous phenomenon. Recurrence is likely the source of representational transitions from global to local information (Matsumoto, Okada, Sugase-Miyamoto, Yamane, & Kawano, 2005 ; Sugase, Yamane, Ueno, & Kawano, 1999 ). The timing of signatures of facial identity (Barragan-Jason, Besson, Ceccaldi, & Barbeau, 2013 ; Freiwald & Tsao, 2010 ) and social cues, such as direct eye contact (Kietzmann et al., 2017 ), too, point towards a reliance on recurrent computations. Finally, recurrent connections likely play a vital role in early category learning (Kietzmann, Ehinger, Porada, Engel, & König, 2016 ), in dealing with occlusion (Wyatte, Curran, & O’Reilly, 2012 ; Wyatte, Jilk, & O’Reilly, 2014 ), and in object-based attention (Roelfsema, Lamme, & Spekreijse, 1998 ).

Whereas the first generation of DNNs focused on feed-forward, the general class of DNNs can implement recurrence (Oord, Kalchbrenner, & Kavukcuoglu, 2016 ). By using lateral recurrent connections, DNNs can implement visual attention mechanisms (Li, Yang, Liu, Wen, & Xu, 2017 ; Mnih, Heess, Graves, & Kavukcuoglu, 2014 ), and lateral recurrent connections can also be added to convolutional DNNs (Liang & Hu, 2015 ; Spoerer, McClure, & Kriegeskorte, 2017 ). These increase the effective receptive field size of each unit, and allow for long-range activity propagation (Pavel et al., 2017 ). Lateral connections can make decisive contributions to network computation. For instance, in modeling the responses of retinal ganglion cells, the introduction of lateral recurrent connections to feed-forward CNNs leads to the emergence of contrast adaptation in the model (McIntosh, Maheswaranathan, Nayebi, Ganguli, & Baccus, 2017 ). In addition to local feed-forward and lateral recurrent connections, the brain also uses local feedback, as well as long-range feed-forward and feedback connections. While missing from the convolutional DNNs previously used to predict neural data, DNNs with these different connection types have been implemented (He et al., 2016 ; Liao & Poggio, 2016 ; Spoerer et al., 2017 ; Srivastava, Greff, & Schmidhuber, 2015 ). Moreover, long short-term memory (LSTM) units (Hochreiter & Schmidhuber, 1997 ) are a popular form of recurrent connectivity used in DNNs. These units use differentiable read and write gates to learn how to use and store information in an artificial memory “cell.” Recently, a biologically plausible implementation of LSTM units has been proposed using cortical microcircuits (Costa, Assael, Shillingford, de Freitas, & Vogels, 2017 ).

The field of recurrent convolutional DNNs is still in its infancy, and the effects of lateral and top-down connections on the representational dynamics in these networks, as well as their predictive power for neural data, are yet to be fully explored. Recurrent architectures are an exciting tool for computational neuroscience and will likely allow for key insights into the recurrent computational dynamics of the brain, from sensory processing to flexible cognitive tasks (Song, Yang, & Wang, 2016 , 2017 ).

Optimizing for External Objectives: Backpropagation and Biological Plausibility

Apart from architectural considerations, backpropagation, the most successful learning algorithm for DNNs, has classically been considered neurobiologically implausible. Rather than as a model of biological learning, backpropagation may be viewed as an efficient way to arrive at reasonable parameter estimates, which are then subject to further tests. That is, even if backpropagation is considered a mere technical solution, the trained model may still be a good model of the neural system. However, if the brain does optimize cost functions during development and learning (which can be diverse, and supervised, unsupervised, or reinforcement-based), then it will have to use a form of optimization mechanism, an instance of which are stochastic gradient descent techniques. There is growing literature on neurobiologically plausible forms of error-driven learning, that is, ways in which the brain could adjust its internal parameters to optimize such objective functions (Lee, Zhang, Fischer, & Bengio, 2015 ; Lillicrap et al., 2016 ; O’Reilly, 1996 ; Xie & Seung, 2003 ). These methods have been shown to allow deep neural networks to learn simple vision tasks (Guerguiev, Lillicrap, & Richards, 2017 ). The brain may not be performing the exact algorithm of backpropagation, but it may have a mechanism for modifying synaptic weights in order to optimize one or many objective functions (Marblestone et al., 2016 ).

Stochasticity, Oscillations, and Spikes

Another aspect in which DNNs deviate from biological realism is that DNNs are typically deterministic, while biological networks are stochastic. While much of this stochasticity is commonly thought to be noise, it has been hypothesized that this variability could code for uncertainty (Fiser, Berkes, Orbán, & Lengyel, 2010 ; Hoyer, Hyvarinen, Patrik, Aapo, & Hyv, 2003 ; Orban, Berkes, Fiser, & Lengyel, 2016 ). In line with this, DNNs that include stochastic sampling during training and test can yield higher performance, and are better able to estimate their own uncertainty (McClure & Kriegeskorte, 2016 ). Furthermore, currently available recurrent convolutional DNNs often only run for a few time steps, and the roles of dynamical features found in biological networks, such as oscillations, are only beginning to be tested (Finger & König, 2013 ; Reichert & Serre, 2013 ; Siegel, Donner, & Engel, 2012 ). Another abstraction is the omission of spiking dynamics. However, DNNs with spiking neurons can be implemented (Hunsberger & Eliasmith, 2016 ; Tavanaei & Maida, 2016 ) and represent an exciting frontier of deep learning research. These considerations show that it would be hasty to judge the merits of DNNs based on the level of abstraction chosen in the first generation.

Deep Learning: A Powerful Framework to Advance Computational Neuroscience

Deep neural networks have revolutionized machine learning and AI, and have recently found their way back into computational neuroscience. DNNs reach human-level performance in certain tasks, and early experiments indicate that they are capable of capturing characteristics of cortical function that cannot be captured with shallow linear–nonlinear models. With this, DNNs offer an intriguing new framework that enables computational neuroscientists to address fundamental questions about brain computation in the developing and adult brain.

Figure 4. Cartoon overview of different models in computational neuroscience. Given computational constraints, models need to make simplifying assumptions. These can be regarding the biological detail, or the behavioral relevance of the model output. The explanatory merit of a model is not dependent on the exact replication of biological detail, but on its ability to provide insights into the inner workings of the brain at a given level of abstraction.

Computational neuroscience comprises a wide range of models, defined at various levels of biological and behavioral detail (Figure 4 ). For instance, many conductance-based models contain large numbers of parameters to explain a single or a few neurons at a great level of detail but are typically not geared towards behavior. DNNs, at the other end of the spectrum, use their high number of parameters not to account for effects on the molecular level, but to achieve behavioral relevance, while accounting for overall neural selectivity. Explanatory merit is not only gained by biological realism (because this would render human brains the perfect explanation for themselves), nor does it directly follow from simplistic models that cannot account for complex animal behavior. The space of models is continuous, and neuroscientific insight works across multiple levels of explanation, following top-down and bottom-up approaches (Craver, 2009 ). The usage of DNNs in computational neuroscience is still in its infancy, and the integration of biological detail will require close collaboration between modelers, experimental neuroscientists, and anatomists.

DNNs will not replace shallow models, but rather enhance researchers’ investigative repertoire. With computers approaching the brain in computational power, we are entering a truly exciting phase of computational neuroscience.

Further Reading

Kriegeskorte ( 2015 )—introduction of deep learning as a general framework to understand brain information processing.

Yamins & DiCarlo ( 2016 )—perspective on goal-driven deep learning to understand sensory processing.

Marblestone, Wayne, & Kording ( 2016 )—review with a focus on cost functions in the brain and DNNs.

Lindsay ( 2018 )—overview of how DNNs can be used as models of visual processing.

LeCun, Bengio, & Hinton ( 2015 )—high-level overview of deep learning.

Goodfellow, Bengio, & Courville ( 2016 )—introductory book on deep learning.

  • Agrawal, P. , Stansbury, D. , Malik, J. , & Gallant, J. (2014). Pixels to voxels: Modeling visual representation in the human brain . ArXiv Preprint , 1–15.
  • Barragan-Jason, G. , Besson, G. , Ceccaldi, M. , & Barbeau, E. J. (2013). Fast and famous: Looking for the fastest speed at which a face can be recognized . Frontiers in Psychology , 4 (March), 100.
  • Berkes, P. , & Wiskott, L. (2005). Slow feature analysis yields a rich repertoire of complex cell properties . Journal of Vision , 5 , 579–602.
  • Buesing, L. , Bill, J. , Nessler, B. , & Maass, W. (2011). Neural dynamics as sampling: A model for stochastic computation in recurrent networks of spiking neurons . PLoS Computational Biology , 7 (11).
  • Cadena, S. A. , Denfield, G. H. , Walker, E. Y. , Gatys, L. A. , Tolias, A. S. , Bethge, M. , & Ecker, A. S. (2017). Deep convolutional models improve predictions of macaque V1 responses to natural images . BioRxiv , 1–16.
  • Cadieu, C. F. , Hong, H. , Yamins, D. L. K. , Pinto, N. , Ardila, D. , Solomon, E. A. , . . . DiCarlo, J. J. (2014). Deep neural networks rival the representation of primate IT cortex for core visual object recognition . PLoS Computational Biology , 10 (12), 1–18.
  • Carlin, J. D. , Calder, A. J. , Kriegeskorte, N. , Nili, H. , & Rowe, J. B. (2011). A head view-invariant representation of gaze direction in anterior superior temporal sulcus . Current Biology , 21 (21), 1–5.
  • Cichy, R. M. , Khosla, A. , Pantazis, D. , & Oliva, A. (2016). Dynamics of scene representations in the human brain revealed by magnetoencephalography and deep neural networks . NeuroImage , 153 , 1–13.
  • Cichy, R. M. , Khosla, A. , Pantazis, D. , Torralba, A. , & Oliva, A. (2016). Deep neural networks predict hierarchical spatio-temporal cortical dynamics of human visual object recognition . ArXiv Preprint , arXiv:1601.02970, 1–15.
  • Cichy, R. M. , Pantazis, D. , & Oliva, A. (2014). Resolving human object recognition in space and time . Nature Neuroscience , 17 , 455–462.
  • Costa, R. P. , Assael, Y. M. , Shillingford, B. , de Freitas, N. , & Vogels, T. P. (2017). Cortical microcircuits as gated-recurrent neural networks . Advances in Neural Information Processing Systems , 30 , 272–283.
  • Craver, C. (2009). Explaining the brain: Mechanisms and the mosaic unity of neuroscience 2007 . New York: Oxford University Press.
  • DiCarlo, J. J. , & Cox, D. D. (2007). Untangling invariant object recognition . Trends in Cognitive Sciences , 11 (8), 333–341.
  • Diedrichsen, J. , & Kriegeskorte, N. (2017). Representational models: A common framework for understanding encoding, pattern-component, and representational-similarity analysis . PLoS Computational Biology , 13 (4), 1–33.
  • Dumoulin, S. O. , & Wandell, B. A. (2008). Population receptive field estimates in human visual cortex . NeuroImage , 39 (2), 647–660.
  • Eickenberg, M. , Gramfort, A. , & Thirion, B. (2016). Seeing it all: Convolutional network layers map the function of the human visual system . NeuroImage , 152 , 184–194.
  • Eliasmith, C. , Stewart, T. C. , Choo, X. , Bekolay, T. , DeWolf, T. , Tang, C. , & Rasmussen, D. (2012). A large-scale model of the functioning brain . Science , 338 (6111), 1202–1205.
  • Eliasmith, C. , & Trujillo, O. (2014). The use and abuse of large-scale brain models . Current Opinion in Neurobiology , 25 , 1–6.
  • Elsayed, G. F. , Shankar, S. , Cheung, B. , Papernot, N. , Kurakin, A. , Goodfellow, I. , & Sohl-Dickstein, J. (2018). Adversarial examples that fool both human and computer vision . ArXiv Preprint , 1–19.
  • Finger, H. , & König, P. (2013). Phase synchrony facilitates binding and segmentation of natural images in a coupled neural oscillator network . Frontiers in Computational Neuroscience , 7 (January), 195.
  • Fiser, J. , Berkes, P. , Orbán, G. , & Lengyel, M. (2010). Statistically optimal perception and learning: from behavior to neural representations . Trends in Cognitive Sciences , 14 (3), 119–130.
  • Földiák, P. (1991). Learning invariance from transformation sequences . Neural Computation , 3 , 194–200.
  • Franzius, M. , Sprekeler, H. , & Wiskott, L. (2007). Slowness and sparseness lead to place, head-direction, and spatial-view cells . PLoS Computational Biology , 3 , 1605–1622.
  • Franzius, M. , Wilbert, N. , & Wiskott, L. (2008). Invariant object recognition with slow feature analysis . In Artificial Neural Networks–ICANN 2008 (pp. 961–970). Berlin and Heidelberg: Springer.
  • Freeman, J. , & Simoncelli, E. P. (2011). Metamers of the ventral stream . Nature Neuroscience , 14 (9), 1195–1201.
  • Freiwald, W. A. , & Tsao, D. Y. (2010). Functional compartmentalization and viewpoint generalization within the macaque face-processing system . Science , 330 (6005), 845–851.
  • Seeliger, K. , Fritsche, M. , Güçlü, U. , Schoenmakers, S. , Schoffelen, J.-M. , Bosch, S. E. , & van Gerven, M. A. J. (2018). Convolutional neural network-based encoding and decoding of visual object recognition in space and time . NeuroImage , 180 (A), 253–266.
  • Fukushima, K. (1980). Neocognitron: A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position . Biological Cybernetics , 46 , 193–202.
  • Gao, P. , & Ganguli, S. (2015). On simplicity and complexity in the brave new world of large-scale neuroscience . Current Opinion in Neurobiology , 15 , 148–155.
  • Geirhos, R. , Janssen, D. H. J. , Schütt, H. H. , Rauber, J. , Bethge, M. , & Wichmann, F. A. (2017). Comparing deep neural networks against humans: Object recognition when the signal gets weaker . ArXiv Preprint , 1–31.
  • Goodfellow, I. , Bengio, Y. , & Courville, A. (2016). Deep learning . Cambridge, MA: MIT Press
  • Goodfellow, I. J. , Shlens, J. , & Szegedy, C. (2015). Explaining and harnessing adversarial examples . ArXiv Preprint , arXiv:1607.02533, 1–11.
  • Güçlü, U. , & van Gerven, M. A. J. (2014). Unsupervised feature learning improves prediction of human brain activity in response to natural images . PLoS Computational Biology , 10 (8).
  • Güçlü, U. , & van Gerven, M. A. J. (2015). Deep neural networks reveal a gradient in the complexity of neural representations across the ventral stream . Journal of Neuroscience , 35 (27), 10005–10014.
  • Guerguiev, J. , Lillicrap, T. P. , & Richards, B. A. (2017). Towards deep learning with segregated dendrites . ELife , 6 , 1–37.
  • Guntupalli, J. , Wheeler, K. , & Gobbini, M. (2016). Disentangling the representation of identity from head view. Cerebral Cortex , 27 (1), 1–25.
  • He, K. , Zhang, X. , Ren, S. , & Sun, J. (2016). Deep residual learning for image recognition . In Computer Vision and Pattern Recognition (CVPR) (pp. 770–778). New York, NY: IEEE Publishing
  • Henriksson, L. , Mur, M. , & Kriegeskorte, N. (2015). Faciotopy—A face-feature map with face-like topology in the human occipital face area . Cortex , 72 , 156–167.
  • Hinton, G. E. , & Salakhutdinov, R. R. (2006). Reducing the dimensionality of data with neural networks . Science , 313 (5786), 504–507.
  • Hochreiter, S. , & Schmidhuber, J. (1997). Long short-term memory . Neural Computation , 9 (8), 1735–1780.
  • Hong, H. , Yamins, D. L. , Majaj, N. J. , & DiCarlo, J. J. (2016). Explicit information for category-orthogonal object properties increases along the ventral stream . Nature Neuroscience , 19 (4), 613–622.
  • Hoyer, P. O. P. , Hyvarinen, A. , Patrik, O. H. , Aapo, H. , & Hyv, A. (2003). Interpreting neural response variability as Monte Carlo sampling of the posterior. Advances in Neural Information Processing Systems , 13 , 293–300.
  • Hubel, D. , & Wiesel, T. (1959). Receptive fields of single neurones in the cat’s striate cortex . Journal of Physiology , 148 , 574–591.
  • Hunsberger, E. , & Eliasmith, C. (2016). Training spiking deep networks for neuromorphic hardware. ArXiv Preprint , arXiv:1611.05141, 1–10.
  • Huth, A. G. , Nishimoto, S. , Vu, A. T. , & Gallant, J. L. (2012). A continuous semantic space describes the representation of thousands of object and action categories across the human brain . Neuron , 76 (6), 1210–1224.
  • Jones, J. P. , & Palmer, L. A. (1987). An evaluation of the two-dimensional Gabor filter model of simple receptive fields in cat striate cortex . Journal of Neurophysiology , 58 (6), 1233–1258.
  • Kay, K. N. (2017). Principles for models of neural information processing. NeuroImage , 180 (A), 101–109.
  • Kay, K. N. , Naselaris, T. , Prenger, R. J. , & Gallant, J. L. (2008). Identifying natural images from human brain activity . Nature , 452 (7185), 352–355.
  • Kayser, C. , Einhäuser, W. , Dümmer, O. , König, P. , & Körding, K. (2001). Extracting slow subspaces from natural videos leads to complex cells . Artificial Neural Networks—ICANN , 1075–1080.
  • Kayser, C. , Körding, K. P. , & König, P. (2003). Learning the nonlinearity of neurons from natural visual stimuli . Neural Computation , 15 (8), 1751–1759.
  • Khaligh-Razavi, S.-M. , Henriksson, L. , Kay, K. , & Kriegeskorte, N. (2017). Fixed versus mixed RSA : Explaining visual representations by fixed and mixed feature sets from shallow and deep computational models . Journal of Mathematical Psychology , 76 , 184–197.
  • Khaligh-Razavi, S.-M. , & Kriegeskorte, N. (2014). Deep supervised, but not unsupervised, models may explain IT cortical representation . PLoS Computational Biology , 10 (11), 1–29.
  • Kheradpisheh, S. R. , Ghodrati, M. , Ganjtabesh, M. , & Masquelier, T. (2016a). Deep networks resemble human feed-forward vision in invariant object recognition . Scientific Reports , 6 (32672), 1–24.
  • Kheradpisheh, S. R. , Ghodrati, M. , Ganjtabesh, M. , & Masquelier, T. (2016b). Humans and deep networks largely agree on which kinds of variation make object recognition harder . Frontiers in Computational Neuroscience , 10 (August), 1–15.
  • Kietzmann, T. C. , Ehinger, B. V. , Porada, D. , Engel, A. K. , & König, P. (2016). Extensive training leads to temporal and spatial shifts of cortical activity underlying visual category selectivity . NeuroImage , 134 , 22–34.
  • Kietzmann, T. C. , Gert, A. , Tong, F. , & König, P. (2017). Representational dynamics of facial viewpoint encoding . Journal of Cognitive Neuroscience , 4 , 637–651.
  • Kietzmann, T. C. , Swisher, J. D. , König, P. , & Tong, F. (2012). Prevalence of selectivity for mirror-symmetric views of faces in the ventral and dorsal visual pathways . Journal of Neuroscience , 32 (34), 11763–11772.
  • Körding, K. P. , Kayser, C. , Einhäuser, W. , & König, P. (2004). How are complex cell properties adapted to the statistics of natural stimuli ? Journal of Neurophysiology , 91 (1), 206–212.
  • Kriegeskorte, N. (2015). Deep neural networks: a new framework for modelling biological vision and brain information processing. Annual Reviews of Vision Science , 1 , 417–446.
  • Kriegeskorte, N. , & Kievit, R. A. (2013). Representational geometry: Integrating cognition, computation, and the brain . Trends in Cognitive Sciences , 17 (8), 401–412.
  • Kriegeskorte, N. , Mur, M. , & Bandettini, P. (2008). Representational similarity analysis—connecting the branches of systems neuroscience . Frontiers in Systems Neuroscience , 2 (November), 4.
  • Krizhevsky, A. , Sutskever, I. , & Hinton, G. E. (2012). ImageNet classification with deep convolutional neural networks. Advances in Neural Information Processing Systems , 25 , 1–9.
  • Kubilius, J. , Bracci, S. , & Op de Beeck, H. P. (2016). Deep neural networks as a computational model for human shape sensitivity . PLoS Computational Biology , 12 (4), e1004896.
  • Kümmerer, M. , Theis, L. , & Bethge, M. (2014). Deep Gaze I: Boosting saliency prediction with feature maps trained on ImageNet . ArXiv Preprint , 1–11.
  • Lange, S. , & Riedmiller, M. (2010). Deep auto-encoder neural networks in reinforcement learning . International Joint Conference on Neural Networks (IJCNN), 1–8, IEEE, New York
  • LeCun, Y. , & Bengio, Y. (1995). Convolutional networks for images, speech, and time-series . In The handbook of brain theory and neural networks (pp. 255–258).
  • LeCun, Y. , Bengio, Y. , & Hinton, G. (2015). Deep learning . Nature , 521 (7553), 436–444.
  • Lee, D. H. , Zhang, S. , Fischer, A. , & Bengio, Y. (2015). Difference target propagation . Joint European conference on machine learning and knowledge discovery in databases (pp. 498–515). New York: Springer.
  • Leibo, J. Z. , Liao, Q. , Freiwald, W. , Anselmi, F. , & Poggio, T. (2017). View-tolerant face recognition and Hebbian learning imply mirror-symmetric neural tuning to head orientation . Current Biology , 27 , 62–67.
  • Levy, I. , Hasson, U. , Avidan, G. , Hendler, T. , & Malach, R. (2001). Center–periphery organization of human object areas . Nature Neuroscience , 4 (5), 533–539.
  • Li, N. , & DiCarlo, J. J. (2008). Unsupervised natural experience rapidly alters invariant object representation in visual cortex . Science , 321 (5895), 1502–1507.
  • Li, N. , & DiCarlo, J. J. (2010). Unsupervised natural visual experience rapidly reshapes size-invariant object representation in inferior temporal cortex . Neuron , 67 (6), 1062–1075.
  • Li, Z. , Yang, Y. , Liu, X. , Wen, S. , & Xu, W. (2017). Dynamic computational time for visual attention . In ICCV (pp. 1–11). New York, NY: IEEE Publishing.
  • Liang, M. , & Hu, X. (2015). Recurrent convolutional neural network for object recognition . In Computer Vision and Pattern Recognition (CVPR) (pp. 3367–3375). New York, NY: IEEE Publishing.
  • Liao, Q. , & Poggio, T. (2016). Bridging the gaps between residual learning, recurrent neural networks and visual cortex . ArXiv Preprint , 1–16.
  • Lillicrap, T. P. , Cownden, D. , Tweed, D. B. , Akerman, C. J. , Bell, C. , Bodznick, D. , . . . Bengio, Y. (2016). Random synaptic feedback weights support error backpropagation for deep learning . Nature Communications , 7 , 1–10.
  • Lindsay, G. (2018). Deep convolutional neural networks as models of the visual system: Q&A . Neurdiness—Thinking about Brains .
  • Linsley, D. , Eberhardt, S. , Sharma, T. , Gupta, P. , & Serre, T. (2017). What are the visual features underlying human versus machine vision? International Conference on Computer Vision , (ICCV) (pp. 1–9). New York: IEEE Publishing.
  • McClure, P. , & Kriegeskorte, N. (2016). Robustly representing uncertainty in deep neural networks through sampling . ArXiv Preprint , v7, 1–14.
  • McIntosh, L. T. , Maheswaranathan, N. , Nayebi, A. , Ganguli, S. , & Baccus, S. A. (2017). Deep learning models of the retinal response to natural scenes. Advances in Neural Information Processing Systems , 30 , 1–9.
  • Maheswaranathan, N. , Mcintosh, L. , Kastner, D. B. , Melander, J. , Brezovec, L. , Nayebi, A. , . . . Baccus, S. A. (2018). Deep learning models reveal internal structure and diverse computations in the retina under natural scenes . BioRxiv .
  • Majaj, N. J. , Hong, H. , Solomon, E. A. , & DiCarlo, J. J. (2015). Simple learned weighted sums of inferior temporal neuronal firing rates accurately predict human core object recognition performance . Journal of Neuroscience , 35 (39), 13402–13418.
  • Marblestone, A. H. , Wayne, G. , & Kording, K. P. (2016). Towards an integration of deep learning and neuroscience . Frontiers in Computational Neuroscience , 10 , 1–41.
  • Matsumoto, N. , Okada, M. , Sugase-Miyamoto, Y. , Yamane, S. , & Kawano, K. (2005). Population dynamics of face-responsive neurons in the inferior temporal cortex . Cerebral Cortex , 15 (8), 1103–1112.
  • Mehrer, J. , Kietzmann, T. C. , & Kriegeskorte, N. (2017). Deep neural networks trained on ecologically relevant categories better explain human IT. In Cognitive Computational Neuroscience Meeting (Vol. 1, pp. 1–2).
  • Mitchell, T. M. , Shinkareva, S. V , Carlson, A. , Chang, K.-M. , Malave, V. L. , Mason, R. A. , & Just, M. A. (2008). Predicting human brain activity associated with the meanings of nouns . Science , 320 (5880), 1191–1195.
  • Mnih, V. , Heess, N. , Graves, A. , & Kavukcuoglu, K. (2014). Recurrent models of visual attention . Advances in Neural Information Processing Systems , 27 , 1–9.
  • Mnih, V. , Kavukcuoglu, K. , Silver, D. , Rusu, A. A , Veness, J. , Bellemare, M. G. , . . . Hassabis, D. (2015). Human-level control through deep reinforcement learning . Nature , 518 (7540), 529–533.
  • Mur, M. , Meys, M. , Bodurka, J. , Goebel, R. , Bandettini, P. A. , & Kriegeskorte, N. (2013). Human object-similarity judgments reflect and transcend the primate-IT object representation . Frontiers in Psychology , 4 (March), 1–22.
  • Naselaris, T. , Kay, K. N. , Nishimoto, S. , & Gallant, J. L. (2011). Encoding and decoding in fMRI . NeuroImage , 56 (2), 400–410.
  • Naselaris, T. , Prenger, R. J. , Kay, K. N. , Oliver, M. , & Gallant, J. L. (2009). Bayesian reconstruction of natural images from human brain activity. Neuron , 63 , 902–915.
  • Nguyen, A. , Yosinski, J. , & Clune, J. (2015). Deep neural networks are easily fooled . Computer Vision and Pattern Recognition (pp. 427–436). New York, NY: IEEE Publishing.
  • Nili, H. , Wingfield, C. , Walther, A. , Su, L. , Marslen-Wilson, W. , & Kriegeskorte, N. (2014). A toolbox for representational similarity analysis . PLoS Computational Biology , 10 (4).
  • Olah, C. , Mordvintsev, A. , & Schubert, L. (2017). Feature Visualization . Distill .
  • Olshausen, B. , & Field, D. J. (1996). Emergence of simple-cell receptive field properties by learning a sparse code for natural images . Nature , 381 (13), 607–609.
  • Olshausen, B. , & Field, D. J. (2005). How close are we to understanding v1 ? Neural Computation , 17 (8), 1665–1699.
  • Oord, A. van den , Kalchbrenner, N. , & Kavukcuoglu, K. (2016). Pixel recurrent neural networks . Arxiv Preprint , 1–11.
  • Orban, G. , Berkes, P. , Fiser, J. , & Lengyel, M. (2016). Neural variability and sampling-based probabilistic representations in the visual cortex . Neuron , 92 (2), 530–543.
  • O’Reilly, R. C. (1996). Biologically plausible error-driven learning using local activation differences: The generalized recirculation algorithm . Neural Computation , 8 (5), 895–938.
  • Pavel, M. S. , Schulz, H. , Behnke, S. , Serban Pavel, M. , Schulz, H. , & Behnke, S. (2017). Object class segmentation of RGB-D video using recurrent convolutional neural networks . Neural Networks , 88 , 105–113.
  • Rajalingham, R. , Issa, E. B. , Bashivan, P. , Kar, K. , Schmidt, K. , & DiCarlo, J. J. (2018). Large-scale, high-resolution comparison of the core visual object recognition behavior of humans, monkeys, and state-of-the-art deep artificial neural networks . BioRxiv , 1–41.
  • Reichert, D. P. , & Serre, T. (2013). Neuronal synchrony in complex-valued deep networks . International Conference on Learning Representations .
  • Riesenhuber, M. , & Poggio, T. (1999). Hierarchical models of object recognition in cortex . Nature Neuroscience , 2 (11), 1019–1025.
  • Ritter, S. , Barrett, D. G. T. , Santoro, A. , & Botvinick, M. M. (2017). Cognitive psychology for deep neural networks: A shape bias case study . ArXiv Preprint , arXiv:1706.08606.
  • Roelfsema, P. R. , Lamme, V. A. , & Spekreijse, H. (1998). Object-based attention in the primary visual cortex of the macaque monkey . Nature , 395 (6700), 376–381.
  • Rolls, E. T. (2012). Invariant visual object and face recognition: Neural and computational bases, and a model, VisNet . Frontiers in Computational Neuroscience , 6 (June), 35.
  • Rumelhart, D. E. , Hinton, G. E. , & Williams, R. J. (1986). Learning representations by back-propagating errors . Nature , 323 , 533–536
  • Sak, H. , Senior, A. , & Beaufays, F. (2014). Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. ArXiv Preprint , arXiv:1402.1128, 1–5.
  • Saleh, B. , Elgammal, A. , & Feldman, J. (2016). The role of typicality in object classification: Improving The generalization capacity of convolutional neural networks . ArXiv Preprint , 1–8.
  • Siegel, M. , Donner, T. , & Engel, A. (2012). Spectral fingerprints of large-scale neuronal interactions . Nature Reviews Neuroscience , 13 (February), 20–25.
  • Simoncelli, E. P. , & Olshausen, B. A. (2001). Natural image statistics and neural representation. Annual Review of Neuroscience , 24 , 1193–1216.
  • Simonyan, K. , & Zisserman, A. (2015). Very deep convolutional networks for large-scale image recognition . ArXiv Preprint , arXiv:1409.15506, 1–14.
  • Song, H. F. , Yang, G. R. , & Wang, X. J. (2016). Training excitatory-inhibitory recurrent neural networks for cognitive tasks: A simple and flexible framework . PLoS Computational Biology , 12 (2), 1–30.
  • Song, H. F. , Yang, G. R. , & Wang, X. J. (2017). Reward-based training of recurrent neural networks for cognitive and value-based tasks . ELife , 6 , 1–24.
  • Spoerer, C. J. , McClure, P. , & Kriegeskorte, N. (2017). Recurrent convolutional neural networks: a better model of biological object recognition under occlusion. Frontiers in Psychology , 8 (1551), 1–14.
  • Srivastava, R. K. , Greff, K. , & Schmidhuber, J. (2015). Highway networks . ArXiv Preprint , 1–6.
  • Sugase, Y. , Yamane, S. , Ueno, S. , & Kawano, K. (1999). Global and fine information coded by single neurons in the temporal visual cortex . Nature , 400 (6747), 869–873.
  • Sun, Y. , Wang, X. , & Tang, X. (2015). Deeply learned face representations are sparse, selective, and robust . In Computer Vision and Pattern Recognition (CVPR) (pp. 2892–2900). New York: IEEE Publishing.
  • Sutskever, I. , Vinyals, O. , & Le, Q. V. (2014). Sequence to sequence learning with neural networks . Advances in Neural Information Processing Systems , 27 , 1–9.
  • Szegedy, C. , Liu, W. , Jia, Y. , Sermanet, P. , Reed, S. , Anguelov, D. , . . . Rabinovich, A. (2015). Going deeper with convolutions . Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition , 7–12 June , 1–9.
  • Taigman, Y. , Ranzato, M. A. , Aviv, T. , & Park, M. (2014). DeepFace: Closing the gap to human-level performance in face verification . In Computer Vision and Pattern Recognition (CVPR) (pp. 1–8). New York: IEEE Publishing.
  • Tank, D. (1989). What details of neural circuits matter? Seminars in the Neurosciences , 1 , 67–79.
  • Tavanaei, A. , & Maida, A. S. (2016). Bio-inspired spiking convolutional neural network using layer-wise sparse coding and STDP learning . ArXiv Preprint , arXiv:1611.03000v2, 1–20.
  • Tsao, D. Y. , Moeller, S. , & Freiwald, W. A. (2008). Comparing face patch systems in macaques and humans . Proceedings of the National Academy of Sciences , 105 (49), 19514.
  • Uetz, R. , & Behnke, S. (2009). Locally-connected hierarchical neural networks for gpu-accelerated object recognition. Advances in Neural Information Processing Systems - Workshop on Large-Scale Machine Learning: Parallelism and Massive Datasets , 22 , 10–13.
  • Ullman, S. , Assif, L. , Fetaya, E. , & Harari, D. (2016). Atoms of recognition in human and computer vision . Proceedings of the National Academy of Sciences , 113 (10), 2744–2749.
  • VanRullen, R. (2017). Perception science in the age of deep neural networks . Frontiers in Psychology , 8 (February), 142.
  • Wallis, G. , & Bülthoff, H. H. (2001). Effects of temporal association on recognition memory . Proceedings of the National Academy of Sciences of the United States of America , 98 (8), 4800–4804.
  • Wallis, G. , & Rolls, E. (1997). Invariant face and object recognition in the visual system . Progress in Neurobiology , 51 , 167–194.
  • Wallis, T. S. A. , Bethge, M. , & Wichmann, F. A. (2016). Testing models of peripheral encoding using metamerism in an oddity paradigm . Journal of Vision , 16 (2), 1–30.
  • Walther, D. B. , Caddigan, E. , Fei-Fei, L. , & Beck, D. M. (2009). Natural scene categories revealed in distributed patterns of activity in the human brain . Journal of Neuroscience , 29 (34), 10573–10581.
  • Weiller, D. , Märtin, R. , Dähne, S. , Engel, A. K. , & König, P. (2010). Involving motor capabilities in the formation of sensory space representations . PloS One , 5 (4), e10377.
  • Wiskott, L. , & Sejnowski, T. J. T. J. (2002). Slow feature analysis: Unsupervised learning of invariances . Neural Computation , 14 (4), 715–770.
  • Wu, M. C.-K. , David, S. V. , & Gallant, J. L. (2006). Complete functional characterization of sensory neurons by system identification . Annual Review of Neuroscience , 29 (1), 477–505.
  • Wu, Y. , Schuster, M. , Chen, Z. , Le, Q. V. , Norouzi, M. , Macherey, W. , . . . Dean, J. (2016). Google’s Neural Machine Translation system: Bridging the gap between human and machine translation . ArXiv Preprint , 1–23.
  • Wyatte, D. , Curran, T. , & O’Reilly, R. (2012). The limits of feedforward vision: Recurrent processing promotes robust object recognition when objects are degraded . Journal of Cognitive Neuroscience , 24 (11), 2248–2261.
  • Wyatte, D. , Jilk, D. J. , & O’Reilly, R. C. (2014). Early recurrent feedback facilitates visual object recognition under challenging conditions . Frontiers in Psychology , 5 (July).
  • Wyss, R. , König, P. , & Verschure, P. F. M. J. (2006). A model of the ventral visual system based on temporal stability and local memory . PLoS Biology , 4 (5), 836–843.
  • Xie, X. , & Seung, H. S. (2003). Equivalence of backpropagation and contrastive Hebbian learning in a layered network . Neural Computation , 15 (2), 441–454.
  • Yamins, D. L. , & DiCarlo, J. J. (2016). Using goal-driven deep learning models to understand sensory cortex . Nature Neuroscience , 19 (3), 356–365.
  • Yamins, D. L. , Hong, H. , Cadieu, C. , Solomon, E. , Seibert, D. , & DiCarlo, J. (2014). Performance-optimized hierarchical models predict neural responses in higher visual cortex. Proceedings of the National Academy of Sciences of the United States of America , 111 (23), 8619–8624.
  • Yosinski, J. , Clune, J. , Nguyen, A. , Fuchs, T. , & Lipson, H. (2015). Understanding neural networks through deep visualization . International Conference on Machine Learning—Deep Learning Workshop 2015 .
  • Zeiler, M. D. , & Fergus, R. (2014). Visualizing and understanding convolutional networks . In European Conference on Computer Vision (pp. 818–833). Cham: Springer.
  • Zhou, B. , Bau, D. , Oliva, A. , & Torralba, A. (2017). Interpreting deep visual representations via network dissection . In Computer Vision and Pattern Recognition (CVPR) (pp. 1–9). New York, NY: IEEE Publishing.

Related Articles

  • General Principles for Sensory Coding
  • Learning and Memory
  • Electrophysiology and Behavior of Cnidarian Nervous Systems
  • Single Neuron Computational Modeling
  • Normalization Principles in Computational Neuroscience
  • What Is a Neuronal Ensemble?

Printed from Oxford Research Encyclopedias, Neuroscience. Under the terms of the licence agreement, an individual user may print out a single article for personal use (for details see Privacy Policy and Legal Notice).

date: 15 May 2024

  • Cookie Policy
  • Privacy Policy
  • Legal Notice
  • Accessibility
  • [66.249.64.20|81.177.180.204]
  • 81.177.180.204

Character limit 500 /500

Current topics in Computational Cognitive Neuroscience

Affiliations.

  • 1 Vision and Computational Cognition Group, Max Planck Institute for Human Cognitive and Brain Sciences, 04103, Leipzig, Germany. Electronic address: [email protected].
  • 2 Max Planck Research Group NeuroCode, Max Planck Institute for Human Development, 14195, Berlin, Germany; Max Planck UCL Centre for Computational Psychiatry and Ageing Research, 14195, Berlin, Germany. Electronic address: [email protected].
  • PMID: 32898518
  • DOI: 10.1016/j.neuropsychologia.2020.107621

Publication types

  • Introductory Journal Article
  • Research Support, Non-U.S. Gov't
  • Cognitive Neuroscience*
  • Neurosciences*

Open Access is an initiative that aims to make scientific research freely available to all. To date our community has made over 100 million downloads. It’s based on principles of collaboration, unobstructed discovery, and, most importantly, scientific progression. As PhD students, we found it difficult to access the research we needed, so we decided to create a new Open Access publisher that levels the playing field for scientists across the world. How? By making research easy to access, and puts the academic needs of the researchers before the business interests of publishers.

We are a community of more than 103,000 authors and editors from 3,291 institutions spanning 160 countries, including Nobel Prize winners and some of the world’s most-cited researchers. Publishing on IntechOpen allows authors to earn citations and find new collaborators, meaning more people see your work not only from your own field of study, but from other related fields too.

Brief introduction to this section that descibes Open Access especially from an IntechOpen perspective

Want to get in touch? Contact our London head office or media team here

Our team is growing all the time, so we’re always on the lookout for smart people who want to help us reshape the world of scientific publishing.

IntechOpen Book Series > Artificial Intelligence > Computational Neuroscience

Research Topic

Computational neuroscience.

PART OF IntechOpen Book Series: Artificial Intelligence

ISSN 2633-1403

Research Topic on Computational Neuroscience - always open for submissions, with an Annual Volume published each calendar year with a dedicated ISSN and ISBN.

Led by Topic Editor Dr. Magnus Johnsson and an international Editorial Board, all submissions are peer-reviewed and published immediately after acceptance.

Computational neuroscience focuses on biologically realistic abstractions and models validated and solved through computational simulations to understand principles for the development, structure, physiology, and ability of the nervous system. This topic is dedicated to biologically plausible descriptions and computational models - at various abstraction levels - of neurons and neural systems. This includes, but is not limited to: single-neuron modeling, sensory processing, motor control, memory, and synaptic plasticity, attention, identification, categorization, discrimination, learning, development, axonal patterning, guidance, neural architecture, behaviors, and dynamics of networks, cognition and the neuroscientific basis of consciousness. Particularly interesting are models of various types of more compound functions and abilities, various and more general fundamental principles (e.g., regarding architecture, organization, learning, development, etc.) found at various spatial and temporal levels.

Single-Neuron Modeling Sensory Processing Motor Control Memory and Synaptic Pasticity Attention Identification Categorization Discrimination Learning Development Axonal Patterning and Guidance Neural Architecture Behaviours and Dynamics of Networks Cognition and the Neuroscientific Basis of Consciousness

Topic Editor

An image of Magnus Johnsson

Magnus Johnsson

Malmö University, Sweden

Part of book series

Artificial Intelligence

BOOK SERIES DOI 10.5772/intechopen.79920

Doab logo

Publish your work in Computational Neuroscience

computational neuroscience research topics

Open for Submissions

Contribute your work to the Topic Computational Neuroscience , always open for submissions. Your work will be published as part of the Annual Volume 2022, with a dedicated ISSN and ISBN.

Ready to publish your work? Let’s get started...

Your work should

3 months to publication

Always Open for Submissions

+3.3 M Unique Visitors per Month

ISSN + ISBN

Publishing process

Chapter Processing Charge

£1400 Chapter - Book Series Topic (Annual Volume)

It’s due only after the Editor accepts your work for publication

services included

Open Access Funding

We’ve prepared a guide for Open Access Funding.

You can read the guide here.

Submit your work in Computational Neuroscience

Publishing in IntechOpen Book Series

Work at your own pace with no cutoff date for submitting your work

Peer-review

Robust peer-review ensures your work is suitable for publication and of interest to the scientific community

Online First

Early online publication after acceptance assures research is made available to the scientific community without delay

Open Access

Your work will be permanently available online, free to download, share and read

A Word from Our Authors and Editors

An image ofManuel Gonzalez Ronquillo

The opportunity to work with a prestigious publisher allows for the possibility to collaborate with more research groups interested in animal nutrition, leading to the development of new feeding strategies and food valuation while being more sustainable with the environment, allowing more readers to learn about the subject.

Universidad Autónoma del Estado de México Mexico

An image ofCatrin Sian Rutland

I work with IntechOpen for a number of reasons: their professionalism, their mission in support of Open Access publishing, and the quality of their peer-reviewed publications, but also because they believe in equality.

University of Nottingham United Kingdom

An image ofBerend Olivier

It was great publishing with IntechOpen, the process was straightforward and I had support all along.

Utrecht University Netherlands

Contact our Editorial Office

If you have any questions about publishing in IntechOpen Book Series or submitting your research, our team will be happy to help. Please contact us at [email protected]

[email protected]

+44 20 8089 5702

Keep up with Open Science

- A curated selection of newly published works - Book Series and IntechOpen news and announcements

Memberships And Partnerships

STM

computational neuroscience research topics

Computational neuroscience is the study of brain function in terms of the nervous system’s information processing capabilities, such as those exhibited by neurons as they interact in circuits, ensembles and systems via electrical and chemical signals. Computational neuroscience models allow for generating hypotheses about learning and memory, cognition and arousal among other brain functions.

U.S. flag

An official website of the United States government

The .gov means it’s official. Federal government websites often end in .gov or .mil. Before sharing sensitive information, make sure you’re on a federal government site.

The site is secure. The https:// ensures that you are connecting to the official website and that any information you provide is encrypted and transmitted securely.

  • Publications
  • Account settings

Preview improvements coming to the PMC website in October 2024. Learn More or Try it out now .

  • Advanced Search
  • Journal List
  • J Undergrad Neurosci Educ
  • v.19(2); Spring 2021

Teaching Computation in Neuroscience: Notes on the 2019 Society for Neuroscience Professional Development Workshop on Teaching

William grisham.

1 Department of Psychology, UCLA, Los Angeles, CA, 90095-1563

Mathew Abrams

2 International Neuroinformatics Coordinating Facility, Karolinska Institutet. Nobels väg 15A, Stockholm. Sweden SE-171 77

Walt E. Babiec

3 Neuroscience Interdepartmental Program/Physiology, UCLA, Los Angeles, CA, 90095-1761

Adrienne L. Fairhall

4 Department of Physiology and Biophysics and Computational Neuroscience Center, University of Washington, Seattle WA 98195

Robert E. Kass

5 Department of Statistics & Data Science, Machine Learning Department, and Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA 15213

Pascal Wallisch

6 Department of Psychology, New York University, New York, NY 10003

Richard Olivo

7 Department of Biological Sciences, Smith College, Northampton, MA 01063

The 2019 Society for Neuroscience Professional Development Workshop on Teaching reviewed current tools, approaches, and examples for teaching computation in neuroscience. Robert Kass described the statistical foundations that students need to properly analyze data. Pascal Wallisch compared MATLAB and Python as programming languages for teaching students. Adrienne Fairhall discussed computational methods, training opportunities, and curricular considerations. Walt Babiec provided a view from the trenches on practical aspects of teaching computational neuroscience. Mathew Abrams concluded the session with an overview of resources for teaching and learning computational modeling in neuroscience.

If the human brain were so simple That we could understand it, We would be so simple That we couldn’t. - George Edgin Pugh, The Biological Origin of Human Values , 1977

The task of understanding brains is a central aim of neuroscience. As educators, we need ways of conceptualizing the brain so that we can explain it and its function to students. Models can fit this need if they reflect important aspects of reality. A plastic model of a human brain reflects reality and can explain neuroanatomy, but it is static. Real brains, by contrast, are complex, dynamic, and interactive -- often in a nonlinear fashion across time. Thus, if we are to capture this reality, we need effective models, and the only models that could reasonably fulfill this role are computational ones. In addition, stunning advances in recording, molecular, and anatomical techniques provide us with data sets of ever-increasing complexity, pushing the need for tools and concepts to extract meaning from these data. The BRAIN Initiative’s BRAIN 2025 report ( Bargmann et al., 2014 ) put theory, modeling and data analysis at the core of expected future advances in neuroscience, and the 2019 BRAIN review ( du Lac et al, 2019 ) underscored the ongoing pressing need for training in these areas.

Teaching computational neuroscience endows students with valuable skills as they enter the workforce ( Grisham et al., 2016 ). Indeed, the National Science Foundation (NSF) and American Association for the Advancement of Science Vision and Change ( AAAS, 2011 ) document urges educators in biological science to augment students’ quantitative reasoning by using modeling and simulation to describe living systems. Statistical methods of analyzing data are best learned in pursuit of scientific questions, and such experience and skills in data science have never been in greater demand. At one time this curriculum seemed beyond the reach of most undergraduates ( Grisham, 2014 ), but a reconsideration of the zeitgeist forces us to conclude that now is the time to develop such courses. The Professional Development Workshop on Teaching at the 2019 annual meeting of the Society for Neuroscience gathered together experts to discuss options for teaching computation in neuroscience, with the goal of helping faculty plan or revise courses in this area, particularly for undergraduates.

ROB KASS: STATISTICAL BACKGROUND AND STATISTICAL MODELS IN COMPUTATIONAL NEUROSCIENCE: WHAT IS COMPUTATIONAL NEUROSCIENCE?

Computational neuroscience emerged from converging ideas that would now be associated with computer science, mathematics, neuroscience, psychology, and statistics. It remains helpful for students, even within the briefest of introductions, to appreciate the very constructive interplay among multiple disciplines in attempting to understand the brain. With support of an NIH Blueprint training grant, for the past three years the initial pages from Kass et al. (2018) have served as an introductory reading in many contexts, including an undergraduate bootcamp in computational neuroscience for students from across the country.

There are two distinct ways that statistics enter computational neuroscience: first, through stochastic models of neural phenomena, and second, through data analysis. Students should have a feeling for both of these roles of statistics. It would be possible to design an undergraduate curriculum on the basics of computational neuroscience that brings in these two roles of statistics. An introductory course in this curriculum could serve both computational and non-computational students. For many years, Robert Kass has taught such an introductory course in computational neuroscience to graduate students from a range of programs — from biology to engineering — at the Center for the Neural Basis of Cognition (a joint effort of Carnegie Mellon and the University of Pittsburgh). Two projects are currently underway to provide educators with new resources: a textbook on computational neuroscience, and a collection of 10-minute videos on selected topics. It will likely be several years until the textbook is available, but the videos should be public by 2021 (and the collection is designed to grow with contributions from instructors and researchers around the world).

The constraints of most undergraduate curricula, however, are often limited by scope and faculty expertise, and it isn’t clear how many institutions will soon be able to accommodate a semester-long course on computational neuroscience for undergraduates. Furthermore, a designer of such a course faces a choice: either accept some superficiality and teach to diverse backgrounds, or require multiple prerequisites in math and statistics as well as programming comfort with high-level languages such as Python, MATLAB, or R. Many neuroscience instructors might like to incorporate some computational topics into a general neuroscience course, but even at this level, background lectures are essential, and material from Kass et al. (2014) will be helpful because it is aimed at an undergraduate neuroscience audience.

Ten essential topics in computational neuroscience, including four on background material, would be:

  • Random variables and important probability distributions.
  • Random vectors, least-squares linear regression, and the underlying linear algebra.
  • Bayes’ Theorem and the optimality of Bayes classifiers; the Law of Large Numbers and the Central Limit Theorem; and statistical estimation.
  • The exponential function solutions to a first-order differential equation.
  • Random walk models of integrate-and-fire functions of neurons; effects of noise: balanced excitation and inhibition.
  • Electrical circuit model of a neuron. Passive synaptic dynamics and phenomenological models of spiking and integrate-and-fire dynamics.
  • The Hodgkin-Huxley model of action potential generation.
  • Population vectors.
  • Information theory in human discrimination. A nice reading is Miller (1956) .
  • Cognition and optimality. An overview is given in Chapter 1 of Anderson (2007) .

PASCAL WALLISCH: TEACHING A PROGRAMMING LANGUAGE: MATLAB OR PYTHON?

Computational neuroscience analyses of data can be broken down into three levels At the top, data sets usually are multivariate, so the strategic goal is often to reduce dimensions, which is the task of an algorithm. Algorithms , the second level, are tactics to achieve a strategic goal, and there are many algorithms to choose from. After choosing one, there is an implementation stage where coding takes place, the third level. Students usually focus on the implementation level, which is one level down from the algorithmic level. Nonetheless, as educators, we should urge students to focus on the algorithmic level and ask questions such as, “Does the algorithm fit the problem?” and “Do the data conform to assumptions of the algorithmic tactic?” Answers to these questions should determine the choice of the algorithmic tactic—and hence the programming package.

Programs are lifeboats to keep students from drowning in the tsunami of data. The two lifeboats currently receiving attention are MATLAB & Python. MATLAB stands for “matrix laboratory” and was actually created to teach FORTRAN. Python’s name comes from Monty Python, not from the snake. Both are high level programming languages.

Although social media discussions about the two languages are sometimes quite vehement, there really is no need to be dogmatic in choosing between them. Both are tools that allow one to achieve some goal from an initial state. A tool removes a problem standing in the way of achieving a goal. So, what’s the best tool? That depends where you start, what you want to do, and what the problem is. The tool should fit the problem, and it is okay to use more than one tool.

There are various considerations for choosing between MATLAB or Python. One is processing speed, which isn’t very different between the two because both are fairly fast. Another is cognitive ease, which is a measure of the difficulty of writing and understanding code. The two are fairly comparable — both MATLAB and Python are high level programs with thousands of functions available. Another consideration is backward compatibility — MathWorks makes sure that backward compatibility exists for prior code, and although Python is more leading edge, its developers don’t seem terribly concerned with backward compatibility. One of the biggest conceptual differences between the two is indexing values: Python starts at 0, MATLAB starts at 1. Also, the data type is a matrix in MATLAB, whereas Python is general purpose. Which you choose to use will depend on the task you want to accomplish.

Style is also different between the two packages; blank space is meaningful in Python but not in MATLAB. Despite arguments on social media sites, Python is not really more elegant than MATLAB. MATLAB has a more straightforward syntax, and Python is often more verbose. Both packages are continually evolving to improve their capabilities and ease of use. Python is now easier to install than it was before Anaconda was developed. New MATLAB capabilities of string handling datatypes are now expanded. Webscraping is easier in Python, but sound-handling is easier in MATLAB. Again, the choice depends on the problem.

Although Python is currently the most popular language among programmers, many more publications have used MATLAB rather than Python for data analyses; Python and its variants make up less than 1% of publications.

The most compelling issue in making a choice is where your students are in their programming abilities. Inexperienced students in NYU classes do better with MATLAB, and MATLAB provides excellent support with actual people available to help 24/7. With Python, there is no one to call, and Stack Overflow is the help desk. The problem is that Stack Overflow is a wiki that may or may not have the right information.

Finally, there are cost considerations—MATLAB is easier to learn, but requires a license whereas Python is free. Dr. Wallisch’s book with Eric Nylan (2017), Neural Data Science: A Primer with MATLAB and Python , teaches both in Rosetta Stone fashion by providing programs in both languages along with English explanations.

So to answer the question, “Which one should an educator pick?” Whatever works for you and your students.

ADRIENNE FAIRHALL: TEACHING COMPUTATIONAL NEUROSCIENCE

Computational neuroscience as a field is the union of multiple approaches to understanding neural function: theory, modeling, and data analysis ( Figure 1 ). Theory is the big picture: the algorithm or framework or principle or solution space that underlies a specific dynamic. Examples include reinforcement learning (algorithm), Hopfield networks (framework), efficient coding (principle) and attractor dynamics (solution space). Modeling describes the attempt to write down and solve equations that reproduce aspects of experimental data, however coarse-grained. A classic example is the Hodgkin-Huxley system of equations for action potential generation in an axon, or at the opposite extreme, the Blue Brain project, which aims to simulate at biophysical levels of detail the activity of an entire cortical column ( Einevoll et al., 2019 ). The goal of data analysis is to characterize a system via observations ( Aljadeff et al., 2016 ; Kass et al., 2014 ), ideally revealing properties or dynamics that can be mapped onto models and, ultimately, test theories.

An external file that holds a picture, illustration, etc.
Object name is june-19-185f1.jpg

Schema of conceptual areas within computational neuroscience and their roles in understanding biological computation. (Adapted from Fairhall, 2014 ).

Teaching computational neuroscience is, therefore, challenging and multifaceted. Full understanding of a neural system should involve all three components ( Fairhall, 2014 ), so students should gain some facility with all of these aspects. Each aspect involves distinct disciplines of mathematics, engineering, computer science, physics, and statistics.

Students have different needs or expectations for their training. All emerging systems neuroscientists should gain sufficient mathematical background and coding skill so they can manipulate data. While some will want to attain proficiency in order to understand and use ideas and methods in experimental research, others want to specialize in theory. However, to be able to develop novel conceptual theories or devise new data analysis methods, it is certainly helpful to have a deep grasp of at least one quantitative field — applied mathematics, statistics, physics, or computer science. Thus, designing a training program in computational neuroscience at both undergraduate and graduate levels needs to handle diverse preparation and expectations. Further, a program needs to provide a breadth of understanding and a grounding in basic neuroscience with the opportunity for depth of training in a specific field. Given that students enter with a wide range of backgrounds, students and teachers can help to bridge some of the inevitable gaps.

Core Coursework

An ideal minimal coursework sequence will likely need to bridge undergraduate and graduate classes. Given the increasing sophistication of analysis required for large data sets, undergraduate students thinking of entering systems neuroscience need to maintain a reasonably high level of core mathematics, whether they plan to become theorists or experimentalists. Providing early information and encouragement to new undergraduates to maintain mathematical training can have high impact. For example, early exposure in the freshman or sophomore year to neuroscience research talks can highlight the deep intersections of neuroscience and mathematical topics, and encourage students to maintain a high level of quantitative undergraduate coursework.

A sequence such as in Table 1 is recommended. Learning to code is obviously vital. This learning could be done in a computer science class but can also be learned alongside mathematics or data analysis methods or in the context of an “integrative” computational neuroscience class.

Topics for course work

Bridging classes

Graduate school is not too late to learn computational methods. Many institutions are now offering graduate classes in mathematical/quantitative methods for neuroscience. The University of Washington teaches a class called “Quantitative Methods in Neuroscience,” offered both for computational neuroscience undergraduates and as a core class for all neuroscience graduate students. The course consists of five modules: linear algebra, differential equations, Fourier transforms, stochastic processes, and principal component analysis. Each topic is studied for two weeks: one week of lectures and one week of interactive exercises buttressed by a classic neuroscience paper that students present that employs the method in the exercises. Bringing together relatively mathematically sophisticated undergraduates and potentially mathematically naïve graduate students allows active two-way exchange in class exercises and presentations. A key tool for this class is a set of MATLAB tutorials, which intersperse liberally commented code with pedagogical text and quiz prompts. In these tutorials students:

  • Walk through basic commands with annotations
  • Perform computations and parameter variations
  • Describe outputs
  • Interpret results
  • Are prompted to write new code
  • Ponder/answer embedded open-ended questions
  • Can repurpose code in a novel way in a project.

The outcome for this class is that all students gain a working ability to use MATLAB, and they gain exposure to mathematical ideas in sufficient depth to inspire them to do additional classes or reading where desired. It helps students new to biology to see how mathematics can play an important role in framing and solving biology questions. The compressed format also allows pointing out the relationships between the topics, which are very often obscured when these subjects are learned in isolation — in particular, the key role of linear systems.

Integrative Course Design

In view of the multilevel interactions outlined in Figure 1 , it would be ideal when possible to incorporate all three into teaching. For students with diverse preparation, it can be especially important to motivate methods and analysis with a framing of the question being addressed. One can begin with the question posed by the biological system; discuss and explain the big-picture framework and the mathematical underpinnings; consider concrete models; and teach methods to validate or explore models based on data. Incorporating a project for final assessment is an opportunity to consolidate learning of the process of interdisciplinary science.

Be Hands-On

In any computational neuroscience class, students should be coding up models and playing with data. A key teaching decision is: Python or MATLAB? Both have pros and cons, as described in detail by Dr. Wallisch (see above). MATLAB has a lower barrier for entry for newcomers to coding and can be preferable for undergraduates or introductory computational graduate classes that include students who will follow an experimental track. For students specializing in computational fields, Python is a good long-term investment.

Bridging Gaps

Graduate students can pick up missing math as electives or by auditing; postdocs can also audit classes. Online classes between undergraduate and graduate school can be a great way to supplement missing math classes from the “core” list; there are many high-quality options including Khan Academy for basics like linear algebra.

Summer Schools

Summer schools are an excellent accelerated option to gain rapid experience in computational neuroscience. A large number are offered around the US and internationally, helpfully collected by Tom Burns at https://tfburns.github.io/compneuro-summer-schools/ and https://docs.google.com/spreadsheets/d/1b05MPR7bkxnwKjzY-6KHd_aDaV_68qwMhuFVu5IGG9g/edit#gid=276255682 . These schools are often suited for more advanced students (senior graduate students and postdocs) but many specifically aim to cater to a wide range of backgrounds and to give a rapid leg up to students who want an intensive learning experience. There are of course many other benefits to summer schools:

  • Students form relationships with international peers which can continue throughout their careers
  • Many courses incorporate a project that is an excellent learning opportunity and a chance to work directly with a professor or teaching assistants
  • There are opportunities to interact extensively with well-known professors nationally and internationally to build career visibility.

Two highly recommended courses are the Methods in Computational Neuroscience course at the Marine Biological Laboratory and the Summer Workshop for the Dynamic Brain, co-run by the Allen Institute for Brain Science and the University of Washington. Both of these courses mix lectures on systems in neuroscience with mathematics and statistical methods and show how models are developed in application to the systems under discussion. The recent success of Neuromatch Academy ( Juavinett, 2020 ), which offered in depth training to almost 2000 students worldwide online during the 2020 pandemic, is surely going to remain an important model for the future.

Online Classes

Particularly in the wake of COVID-19, online classes provide an important component of computational neuroscience teaching. An online course on the Coursera platform, Computational Neuroscience, has served many students as an introduction to the field ( https://www.coursera.org/learn/computationalneuroscience ). Coursera also facilitates interaction between students.

WALT BABIEC: GEOMETRY OF THE NERVOUS SYSTEM: A COURSE IN DYNAMICAL SYSTEMS ANALYSIS & MODELING OF NEURAL FUNCTION

Living things change with time. But how do we understand and make sense of that change? Are there no similarities in how organisms change with time? Is there a vocabulary that we can use to describe and differentiate that change? There is, and a UCLA course – Dynamical Systems Modeling of Physiological Systems – provides students in Neuroscience, Physiological Science, and Life Sciences with a rigorous, quantitative framework for describing dynamic behavior in living systems.

The course takes a dynamical systems approach to describing or modeling living systems. The state of any system, whether it be a gene network, a cell, a whole organism, or an ecosystem, can be described by a set of time-varying state variables such as protein concentration, animal population, or genotype prevalence. Differential equations or iterated maps are used to define how those state variables are changing at any instant. They provide the road map to understanding how a system changes with time, whether certain changes with time are even possible, and what might happen to those forms of change if properties of the system change.

Before describing how to teach the dynamical systems approach to students whose first love is not mathematics, it’s important to understand the big ideas, the purpose for teaching them this material in the first place. First, it is important to modernize students’ view of the meaning of homeostasis. Rather than Cannon’s (1929) more rigid notion of keeping things standing still at a certain physiological set point, the course helps students recognize and understand that most homeostatic processes, such as regulation of Purkinje neuron firing rate, hypothalamic control of body temperature, or the sleep-wake and circadian cycles, are controlled oscillations rather than static equilibria. While this may sound a little bit undefined in language, our dynamical systems viewpoint allows us to generate precise definitions that can be tested. If the change in temperature, that is, T’ = 0, then we have true homeostasis. If, however, T’ ≠ 0, but the trajectory of T repeats with time, we have a stable limit-cycle oscillation. The latter is what we see throughout physiological systems and nature as a whole.

This brings us to the next major point. Real physiological systems are nonlinear, rely upon feedback, and operate with time delays. While we often eschew nonlinearities in engineered systems, they are essential in physiological systems. Simple but incredibly important decisions (for example, whether a neuron fires an action potential or remains quiescent) are nonlinear by their very nature and cannot be linearized without losing that behavior. Furthermore, oscillatory behavior requires negative feedback and time delays in order to operate properly. Again, a dynamical systems approach allows observing and analyzing how the strength of the feedback and the length of the time delays affect the behavior of the physiological system being studied.

Finally, a dynamical systems viewpoint allows us to understand and observe how complex physiological behavior emerges from self-organized activity. For example, every neuroscience student is taught that a neuron has a firing threshold. If the membrane potential stays below threshold, the neuron doesn’t fire. If the membrane potential exceeds threshold, the neuron fires an action potential. But where is threshold stored? What protein or nucleotide sequence encodes threshold? After studying the Hodgkin- Huxley equations and Fitzhugh’s simplified formulation of them, the students understand that threshold is an emergent property of the nonlinear interaction of the passive membrane properties of the neuron with voltage-activated Na+ and K+ channels. Crossing threshold represents a Hopf bifurcation from equilibrium behavior at rest, to oscillatory behavior during action potential firing. Action potentials, therefore, are an emergent property of solubilized ions, lipids, and proteins whose dynamic interactions can be described effectively and efficiently with a system of four ordinary differential equations, the Hodgkin-Huxley equations.

Most of the students are not biomathematics or quantitative biology majors. Also, they are not experienced modelers. So, we take an approach where they learn the basics of how to analyze and model dynamical systems without lengthy derivations or proofs. In fact, the students never even solve a system of differential equations analytically, because, as they learn, most non-linear differential equations lack analytic solutions. The approach is laid out in a wonderful textbook by Garfinkel et al. Modeling Life (2017 — video lectures are available for free at https://modelinginbiology.github.io/videos/ ). This book not only makes dynamical systems, including chaotic ones, easily understandable for Life Sciences students, but also the examples in the book are centered around physiological, ecological, and epidemiological systems rather than the usual assortment of physical systems (e.g., mass-spring, pendulum, and planetary) that form the basis of most books on the topic of dynamical systems.

The UCLA course takes place during a single ten-week quarter. In class, the course lays out the underpinnings of the dynamical systems approach. Students are given ample opportunities to work with this approach in weekly simulation laboratories that are overseen by teaching assistants who are highly skilled in the analysis and interpretation of dynamical systems. Rather than forcing students into learning a lot of coding skills while also learning about dynamical systems, the class employs an inexpensive and simple to use but powerful modeling program called Berkeley Madonna (2021) that is available for both Windows and MacOS devices. Students are assessed using a variety of methods, including 1) in-class exams to assess their understanding of dynamical systems thinking, 2) simulation laboratory exercises to assess their development as implementers, 3) interpretation of models, 4) and finally a modeling project where students develop models in areas of their own interest that put together all of what they learn.

When the course is complete, the students are able to analyze the behavior of systems of ordinary nonlinear differential equations for equilibria and, more generally, attractors, as well as qualitative changes in the dynamic behavior of these systems with changes in parameter values (bifurcations). They are able to develop differential equation models of biological systems, including the identification of relevant state variables and their feedforward and feedback interactions. In addition, they know how to use computer simulations to calculate and visualize the behavior of particular solutions to differential equation models of biological systems. Most importantly, they become proficient enough in these skills to develop and simulate de novo differential equations models from basic system descriptions.

Dynamical systems are the language of nature. Understanding how to describe natural systems in terms of dynamical systems and then analyze the behavior that emerges from them empowers students to go from describing what may be happening based on linear thinking about first principles, to what is actually happening and what can (and, just as importantly, cannot) emerge from the behavior of these systems. Solving 21st-century problems in biology requires contemporary quantitative thinking. Only then can we truly appreciate the importance and beauty of the geometry of nature.

MATHEW ABRAMS: TRAININGSPACE: RESOURCES FOR COMPUTATIONAL NEUROSCIENCE & NEUROEDUCATION WITHOUT BORDERS

The International Neuroinformatics Coordinating Facility (INCF) now includes TrainingSpace ( https://training.incf.org/ ), an online hub to make neuroscience educational materials more accessible to the global neuroscience community. TrainingSpace was developed in collaboration with INCF, HBP, SfN, FENS, IBRO, IEEE, BD2K, and the iNeuro Initiative. As a hub, TrainingSpace provides users with access to:

  • Multimedia educational content from courses, conference lectures, and laboratory exercises from some of the world’s leading neuroscience institutes and societies.
  • Four study tracks (Neuroinformatics, Computational Neuroscience, Neuroscience, and Brain Medicine) to facilitate self-guided study.
  • Tutorials/demonstrations of resources (tools, software, and services) available for neuroscience research.
  • Neurostars.org , a Q&A forum.
  • KnowledgeSpace, a data discoverability portal/encyclopedia for neuroscience that provides users with access to over 1,600,000 files of publicly available data and models as well as links to literature references and scientific abstracts.

In addition to the subject themes of the four study tracks (neuroinformatics, computational neuroscience, brain medicine, and neuroscience), TrainingSpace also includes lectures, courses, and tutorials in computer science, data science, ethics, career development, and open science. All content objects in TrainingSpace include a general description, learning objectives/topics covered, difficulty level, and links to required software/tools and prerequisites courses/lectures. Many lectures also include downloadable lecture notes and slides and links to Jupyter notebooks, code repositories, and sample datasets. TrainingSpace also provides access to tutorials on open science resources that instructors could incorporate into their courses (instructors are free to include all multimedia content found in TrainingSpace into their courses).

To facilitate self-guided learning in TrainingSpace, INCF is pursuing its integration with Neurostars.org , a question and answer forum for neuroscience researchers, infrastructure providers and software developers. Neurostars provides access to experts from around the world for students and teachers. Sample datasets and models are available in KnowledgeSpace ( https://knowledge-space.org/ ), which was developed jointly by INCF, the Human Brain Project, and the Neuroscience Information Framework (NIF). KnowledgeSpace is a repository of global neuroscience web resources, including experimental, clinical, and translational neuroscience databases, knowledge bases, atlases, and genetic/genomic resources, all of which have been integrated into TrainingSpace. KnowledgeSpace also serves as an encyclopedia for neuroscience that combines general descriptions found in Wikipedia with more detailed content from InterLex, a dynamic lexicon of neuroscience concepts supported by NIF. KnowledgeSpace then integrates the content from those two sources with the latest neuroscience citations found in PubMed and data found in some of the world’s leading neuroscience repositories.

Speakers in this workshop offered a variety of viewpoints. One that was widely endorsed was providing hands-on instruction, including the use of resources that allow learning with actual data sets, as described by Dr. Abrams. There were differences in opinion about how steeped in mathematical training students need to be; both Dr. Fairhall and Dr. Kass suggested the need for a fairly rigorous background, while Dr. Babiec described a course in which differential equations are not necessary. As for whether Python or MATLAB is better for implementing algorithms, both approaches are valuable depending on your students’ background and which pedagogical objectives you want to achieve.

The future will no doubt provide even more complex computational models that strive to integrate levels from the molecular to the behavioral. The organizers believe that this workshop will help faculty to teach computational aspects of neuroscience at both the undergraduate and graduate levels.

Videos of the workshop can be viewed at the Society for Neuroscience’s Neuronline website: https://neuronline.sfn.org/career-paths/teachingcomputation-in-neuroscience . Viewing is unlimited for SfN members, and currently includes up to five free articles for others.

  • Aljadeff Y, Lansdell B, Fairhall A, Kleinfeld D. spike train analysis, deconstructed. Neuron. 2016; 91 (2):221–259. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • American Association for the Advancement of Science. Vision and change in undergraduate biology education: a call to action. Washington, DC: AAAS; 2011. Available at https://visionandchange.org/finalreport/ [ Google Scholar ]
  • Anderson JR. how can the mind occur in the physical universe? Oxford, UK: Oxford Press; 2007. Available at. [ CrossRef ] [ Google Scholar ]
  • Bargmann C, et al. BRAIN 2025 Brain Research through Advancing Innovative Neurotechnologies (BRAIN) working group report to the advisory committee to the director, NIH. Bethesda, MD: National Institutes of Health; 2014. Available at https://braininitiative.nih.gov/strategic-planning/brain-2025-report . [ Google Scholar ]
  • Berkeley Madonna, Inc. Berkeley Madonna. Berkely, CA: University of California, Berkeley; 2021. Available at https://berkeley-madonna.myshopify.com/ [ Google Scholar ]
  • Cannon WB. Organization for physiological homeostasis. Physiol Rev. 1929; 9 :399–431. [ Google Scholar ]
  • Du Lac C, et al. Report of the NIH Director BRAIN Initiative Working Group 2.0. Bethesda, MD: National Institutes of Health; 2019. The BRAIN Initiative 2.0: From cells to circuits, towards cures. Available at https://braininitiative.nih.gov/strategic-planning/acd-working-groups/brain-initiative-20-cells-circuits-toward-cures . [ Google Scholar ]
  • Einevoll GT, et al. The Scientific Case for Brain Simulations. Neuron. 2019; 102 (4):735–744. doi: 10.1016/J.Neuron.2019.03.027. [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Fairhall A. The receptive field is dead. long live the receptive field? Curr Opin Neurobiol. 2014; 25 :Ix–Xii. doi: 10.1016/J.Conb.2014.02.001. [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Garfinkel A, Shevtsov J, Guo Y. Modeling life: the mathematics of biological systems. New York, NY: Springer; 2017. [ Google Scholar ]
  • Grisham W. Book Review: MATLAB For Neuroscientists: an introduction to scientific computing in MATLAB (Second Edition) J Undergrad Neurosci Educ. 2014; 13 (1):R3–R4. [ Google Scholar ]
  • Grisham W, Lom B, Lanyon L, Ramos RL. Proposed training to meet challenges of large-scale data in neuroscience. Front Neuroinform. 2016; 10 :28. doi: 10.3389/Fninf.2016.00028. [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Juavinett A. The Self-Organized Movement to Create an Inclusive Computational Neuroscience School. Simons Foundation Blog. 2020. Sep 17, Available at https://www.simonsfoundation.org/2020/09/17/the-self-organized-movement-to-create-an-inclusive-computational-neuroscience-school/
  • Kass RE, Eden U, Brown E. Analysis of neural data. New York, NY: Springer; 2014. [ Google Scholar ]
  • Kass RE, et al. Ten simple rules for effective statistical practice. PLOS Comput Biol. 2016 doi: 10.1371/journal.pcbi.1004961. Available at. [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Kass RE, et al. Computational neuroscience: mathematical and statistical perspectives. Annu Rev Stat Appl. 2018; 5 :183–214. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Miller G. The magical number seven, plus or minus two: some limits on our capacity for processing information. Psychological Review. 1956; 101 (2):343–352. [ PubMed ] [ Google Scholar ]
  • Nylen EL, Wallisch P. Neural data science: a primer with MATLAB and Python. Boston, MA: Academic Press; 2017. [ Google Scholar ]
  • Pugh GE. The biological origin of human values. New York, NY: Basic Books; 1977. [ Google Scholar ]
  • Wallach P, Lusignan ME, Benayoun MD, Baker TI, Dickey AS, Hatsopoulos NG. MATLAB for neuroscientists: an introduction to scientific computing in MATLAB. Boston, MA: Academic Press; 2014. [ Google Scholar ]

computational neuroscience research topics

Mathematical Methods in Computational Neuroscience

8 July - 26 July 2024, Fred Kavli Knowledge Center, Eresfjord, Norway

cool dark_edited.jpg

Computational Neuroscience and Inference from data are disciplines that extensively use tools from Mathematics and Physics to understand the behavior of model neuronal networks and analyze data from real experiments. Due to its interdisciplinary nature and the complexity of the neuronal networks, the list of techniques that are borrowed from Physics and Mathematics is an extensive one. Although using tools from standard curriculum of Physics, Mathematics and Engineering is common, more advanced research requires methods and techniques that are not usually covered in any single discipline. 

To fill in this gap, this summer school covers some of the most important methods used in computational neuroscience research through both main lectures and scientific seminars (5-6 main lectures per topic and  1-2 seminars by each invited seminar speaker).

Organizers: Yasser Roudi, Ines Samengo, Nicolai Waniek, Ivan Davidovich and Benjamin Dunn

Tutors: Bautista Arenaza

Lectures  (this is not a final list)

Information theory and inference 

Statistical mechanics of neural networks

Dynamics of neural networks

Dimensionality Reduction

computational neuroscience research topics

Invited lecturers and seminars speakers (this is not a final list and might change)

Sara Solla, Northwestern University, USA

Predrag Cvitanović, Georgia Tech, USA Juan Gallego, Imperial College London, UK

Inés Samengo, Balseiro Institute, Argentina Yasser Roudi, King's College London, UK

Peter Dayan, Tübingen, Germany

Nicolai Waniek, NTNU, Norway

Li Zhaoping, MPI, Germany Soledad Gonzalo Cogno, NTNU, Norway

Iván Davidovich, NTNU, Norway

Benjamin Dunn, NTNU, Norway

Nina  Miolane, UCSB, USA

Matteo Marsili, ICTP, Italy

Federico Stella, Donders Institute, the Netherlands

Some speakers will join in-person and others remotely. Due to administrative issues, we are currently unable to broadcast the talks openly via Zoom .

G0170421_1627810330173 (2).JPG

YouTube channel for recorded videos:

https://www.youtube.com/channel/UCnhu3TheurYpmF1qgk770Jg

GOPR0562_1628344491885.JPG

Applications for 2024 are now closed

The deadline for applications was March 31st at 11:59PM CET. The results of the selection process will be communicated by email around mid April.

The summer school is aimed at PhD students, but Master's students and postdocs (as well as those transitioning between any of these) are also welcome to apply.

T here are no registration fees for the school. Accommodation and food (except for alcohol) will be covered by the school for all students selected to participate. Participating s tudents must attend the school in person for its whole duration. Students should expect to be assigned a shared bedroom.

Ground transportation between Molde and the Fred Kavli Knowledge Center will be provided by the school on a pre-determined schedule on both arrival and departure days (July 8th and 26th, respectively).

Non-NRSN students: Students th at don't belong to NRSN will need to find their own funding to cover their travel expenses from their place of residence to Molde (and back).

NRSN students:  NRSN can cover your travel expenses. Please check http s://www.ntnu.edu/nrsn/grants .

computational neuroscience research topics

About the Fred Kavli Knowledge Center

The Fred Kavli Knowledge Center is located in the family farm where Fred Kavli grew up. It is surrounded by the scenic area of Eresfjord and is a gathering place for programs that stimulate curiosity, innovation, and big ideas.

For further information see the website of the Fred Kavli Knowledge Center

  • Skip to main content
  • Skip to navigation
  • Skip to search
  • Increase font size
  • Decrease font size
  • Sharpen color
  • Invert color

Weizmann Institute of Science

Search form

You are here.

back to research topics

Theoretical and Computational Neuroscience

The brain is acting through the interaction of billions of neurons and myriads of action potentials that are criss-crossing within and between brain areas. To make sense of this complexity, one must use mathematical tools and sophisticated analysis methods in order to extract the important information and create reduced models of brain function. Together, faculty members and students at the Weizmann Institute, coming from diverse quantitative backgrounds such as physics, engineering, mathematics and computer science, are breaking new cutting-edge avenues in computational and theoretical neuroscience. We are using mathematical tools taken from Statistical Physics, Dynamicsl Systems, Machine Learning and Information Theory -- to name just a few -- in order to create new models and theories of brain function. Both analytical approaches and simulations are used heavily. By intense collaborations with experimental laboratories, these new theories and computational tools are put to the test, and then refined further. Our aim is to unravel the basic principles of brain operation and the underlying neural codes.

Related Groups

  • Yarden Cohen
  • Michal Ramot
  • Takashi Kawashima
  • Michail Tsodyks
  • Elad Schneidman

Subscribe to RSS - Theoretical and Computational Neuroscience

  • Search Menu
  • Advance articles
  • Editor's Choice
  • Focus Issue Archive
  • Open Access Articles
  • JAMIA Journal Club
  • Author Guidelines
  • Submission Site
  • Open Access
  • Call for Papers
  • About Journal of the American Medical Informatics Association
  • About the American Medical Informatics Association
  • Journals Career Network
  • Editorial Board
  • Advertising and Corporate Services
  • Self-Archiving Policy
  • Dispatch Dates
  • For Reviewers
  • Journals on Oxford Academic
  • Books on Oxford Academic

Article Contents

Introduction, materials and methods, limitations, author contributions, conflicts of interest, data availability, automating literature screening and curation with applications to computational neuroscience.

ORCID logo

  • Article contents
  • Figures & tables
  • Supplementary Data

Ziqing Ji, Siyan Guo, Yujie Qiao, Robert A McDougal, Automating literature screening and curation with applications to computational neuroscience, Journal of the American Medical Informatics Association , 2024;, ocae097, https://doi.org/10.1093/jamia/ocae097

  • Permissions Icon Permissions

ModelDB ( https://modeldb.science ) is a discovery platform for computational neuroscience, containing over 1850 published model codes with standardized metadata. These codes were mainly supplied from unsolicited model author submissions, but this approach is inherently limited. For example, we estimate we have captured only around one-third of NEURON models, the most common type of models in ModelDB. To more completely characterize the state of computational neuroscience modeling work, we aim to identify works containing results derived from computational neuroscience approaches and their standardized associated metadata (eg, cell types, research topics).

Known computational neuroscience work from ModelDB and identified neuroscience work queried from PubMed were included in our study. After pre-screening with SPECTER2 (a free document embedding method), GPT-3.5, and GPT-4 were used to identify likely computational neuroscience work and relevant metadata.

SPECTER2, GPT-4, and GPT-3.5 demonstrated varied but high abilities in identification of computational neuroscience work. GPT-4 achieved 96.9% accuracy and GPT-3.5 improved from 54.2% to 85.5% through instruction-tuning and Chain of Thought. GPT-4 also showed high potential in identifying relevant metadata annotations.

Accuracy in identification and extraction might further be improved by dealing with ambiguity of what are computational elements, including more information from papers (eg, Methods section), improving prompts, etc.

Natural language processing and large language model techniques can be added to ModelDB to facilitate further model discovery, and will contribute to a more standardized and comprehensive framework for establishing domain-specific resources.

Over the years, numerous informatics resources have been developed to aggregate human knowledge, from generalist resources like Wikidata 1 to domain-specific scientific resources like GenBank 2 for nucleotide sequences and NeuroMorpho.Org 3 for neuron morphologies. Researchers are often obligated to submit nucleotide sequences to GenBank, but for many other types of scientific products, researchers have no such obligations, sometimes leading to sharing either not happening or not happening in a consistent manner (eg, with standardized metadata). The broader scientific community, however, benefits most when scientific products are widely available, as this allows researchers to readily build on prior work, especially when the scientific products are available in a standardized form. 4–8

Domain-specific knowledge-bases attempt to address this need but face at least 3 major challenges: (i) identifying relevant publications; (ii) identifying relevant metadata; (iii) obtaining additional needed details not present in the corresponding publication. Monitoring the literature is non-trivial; PubMed lists over 1.7 million publications in 2022. Even once the literature is filtered to a specific field, the scientific product and relevant metadata desired by knowledge-bases is often not explicitly mentioned in the title or abstract, so determining relevance of a given paper may require carefully reading the full text. Traditionally, these challenges are addressed by human curators, but this requires both significant domain knowledge and time. Established repositories with community support may receive community contributions. Individual researchers know their work best and are, for that reason, in the best position to identify relevant scientific products and metadata. However, contributing researchers are unlikely to be experts on the ontologies used by a specific repository. To streamline operations, some knowledge-bases have turned to using rule-based approaches (eg, 9 ) or Natural Language Processing (NLP) techniques such as using a custom BERT-based model (eg, 10 ) to partly automate metadata curation, but these approaches often do not generalize well between knowledge-bases.

We describe a generalizable, cost-effective approach for identifying papers containing or using a given type of scientific product and assigning associated metadata. Our approach uses document-embeddings of core pre-identified papers to systematically pre-screen publications. We use a large-language model (LLM) to confirm inclusion criteria and identify the presence-or-absence of categories of metadata in the abstract. For identified categories, the LLM is used to identify relevant metadata annotations from a list of terms specified by the knowledge-base. We demonstrate the utility of this approach by applying it to identify papers and metadata for inclusion in ModelDB, a discovery tool for computational neuroscience research. 11

Corpus acquisition

Our corpus includes 1564 abstracts (actually: titles and abstracts, but referred to as “abstracts” for convenience) for computational neuroscience models from ModelDB as well as generic neuroscience abstracts from 2022 extracted from PubMed with MeSH terms under either C10 (“Nervous System Diseases”) or A08 (“Nervous System”). Our final corpus included a total of 105 502 neuroscience related abstracts from 2022.

PCA general separability

We applied Principal Component Analysis (PCA) to SPECTER2 embeddings 12 of a set of 3264 abstracts containing 1564 known computational models from ModelDB and 1700 generic neuroscience works (identified from PubMed with C10 or A08 MeSH terms). 12 SPECTER2 is a BERT-based 13 document-embedding method that transforms paper titles and abstracts into a 768-dimensional vector; this transformation was trained using citation information, which allowed it to learn to group related papers into nearby vectors. As with any vector-based data, PCA can be used to reduce the dimensionality of SPECTER2 embeddings while preserving relationships between the embeddings and most of the variability within the embeddings. Using PCA gives us a denser, lower-dimensional dataset that can then be analyzed with tools like k -nearest neighbors (KNN) which are most powerful in small dimensions.

KNN-based approach with SPECTER2 embeddings

We explored the vector space of SPECTER2 embeddings of the 1564 known computational works and 1700 generic neuroscience works by looking at the KNN to known computational work from ModelDB. Specifically, we calculated 2 values: the fraction of models whose k th nearest model is within distance d in the set of models, and the fraction of generic neuroscience works whose k th nearest model is within distance d . The maximum value in difference would correspond to an “optimal” distance value for each k . Therefore, if an unknown abstract’s k th nearest model is within the corresponding “optimal” distance, this abstract was deemed likely to include computational work.

With our test dataset, we recorded the distance to the k th nearest model for each abstract in the test dataset for different k values (we considered k  = 5, 10, 50, and 100). The abstracts were then sorted by this distance. Based on the result, we explored 2 hypotheses: Firstly, the ones with shorter distance values may have a higher possibility of being computational. Secondly, as k increases, more non-computational work may occur within the set of abstracts with smaller associated distances. To assess these hypotheses, for each k , the first 100 test abstracts with the smallest distances were annotated by 2 authors with a background of health informatics (S.G., Z.J.) to determine a gold standard for whether or not they include computational work.

Prompt engineering with GPT-3.5 and GPT-4

Works identified as likely to use computational neuroscience from the SPECTER2 embeddings were further screened by GPT-3.5 and GPT-4. Prompts were written taking into consideration the distinction between computational neuroscience and models in other categories (ie, statistics models and biophysics models), as shown below:

You are an expert in computational neuroscience, reviewing papers for possible inclusion in a repository of computational neuroscience. This database includes papers that use computational models written in any programming language for any tool, but they all must have a mechanistic component for getting insight into the function of individual neurons, networks of neurons, or of the nervous system in health or disease. Suppose that a paper has a title and abstract as indicated below. Respond with “yes” if the paper likely uses computational neuroscience approaches (eg, simulation with a mechanistic model), and “no” otherwise. In particular, respond “yes” for a paper that uses both computational neuroscience and other approaches. Respond “no” for a paper that uses machine learning to make predictions about the nervous system but does not include a mechanistic model. Respond “no” for purely experimental papers. Provide no other output.

   Title: {title}

   Abstract: {abstract}

GPT-4 was queried for each title and abstract in our corpus using this prompt. We varied the temperature (randomness) used by GPT-3.5 and GPT-4 and used voting to assess how this parameter affected computational versus non-computational paper classification. We later used an expanded prompt for GPT-3.5 stepping through the classification in a version of Chain-of-Thought (CoT) 14 reasoning.

Result evaluation through annotation agreement

SPECTER2, GPT-3.5, and GPT-4 performances were evaluated by comparing to ground truth, established through inter-annotator agreement (utilizing Cohen’s Kappa Coefficient). Annotations were performed systematically, with initial agreement of conditions to screen computational or non-computational characteristics to eliminate human bias error as follows:

The annotators go through the abstract and title of the individual paper to examine if any keywords or concepts related to computational neuroscience are present;

When computational keywords are present, the annotator take into considerations of the context of the paper to determine if such keywords and concepts are relevant, or if they are only mentioned for comparison purposes;

To reinforce the annotators’ decision, annotators also consider other sections of the paper as additional support, especially for the method section, where annotators are able to distinguish methods that may seem computational on the surface level but should not be considered, such as statistical models, metaanalysis, reviews, and etc.;

After completing the entire process, annotators are asked to verify each other’s output results, in case of any misalignment of details that may affect the annotating performance;

Ground truth is established under the consideration of independent outputs from each annotator to evaluate the model performance.

Metadata identification

In addition to academic field classification, we assessed the positive predictive value (PPV) of metadata identification as follows: GPT-4 identified metadata for 115 PubMed papers derived from k  = 5 group with the smallest distances, using queries containing terminology sources for paper concepts, regions of interest, ion channels, cell type, receptors, and transmitters derived from ModelDB’s terminologies. These prompts stressed that answers should only come from the supplied terminology lists, which GPT-4 mostly respected. (Attempts to use GPT-3.5 led to high numbers of off-list suggestions, and were not pursued further.)

To identify the accuracy of metadata identification from GPT-4, 2 annotators with health informatics (AG) and biostatistics (YQ) backgrounds performed manual validation of the GPT-4 identifications, neglecting the lack of precision of certain keywords resulting from the selection from ModelDB’s existing terminology.

This validation process assigned 1 of 3 rankings for each metadata tag: “Correct,” “Incorrect,” and “Borderline.” While “Correct” and “Incorrect” explicitly determines the performance of the model, “Borderline” catches keywords present in the text, but not directly relevant to the paper, as well as terms that are relevant to concepts in ModelDB’s terminology, but not entirely correct given the granularity of concept. For instance, in “Dopamine depletion can be predicted by the aperiodic component of subthalamic local field potentials,” the cell type of “Dopaminergic substantia nigra neuron” is not explicitly mentioned in the abstract, but referenced to introduce into the focus of the paper. 15

For this study, we restricted our attention to neuroscience papers published in 2022 as this is after the training cutoff date for GPT-3.5 and GPT-4 and has a defined end-date to allow completeness.

Identification of candidate papers

As described previously, we included computational neuroscience models from ModelDB, and generic neuroscience papers from PubMed published in 2022. PubMed lists 1 771 881 papers published between January 1, 2022 and December 31, 2022. As 84.5% of models from ModelDB with MeSH terms contained entries in the C10 or A08 subtrees, we used the presence of these subtrees as a proxy for an article about neuroscience. Using this criteria, we found 105 202 neuroscience-related papers in PubMed from 2022.

PCA results

Applying PCA to the high-dimensional SPECTER2 embeddings of our 2 datasets—1700 generic neuroscience abstracts and 1564 computational neuroscience abstracts—revealed a notable separability in SPECTER2 embeddings between the 2 datasets ( Figure 1 ). Thus SPECTER2 embeddings can capture relevant information pertaining to computational aspects of scientific abstracts within the broader neuroscience corpus.

PCA analysis of the SPECTER2 embeddings exhibits clear separation between modeling (blue) and non-modeling (orange) along the 0th principal component.

PCA analysis of the SPECTER2 embeddings exhibits clear separation between modeling (blue) and non-modeling (orange) along the 0th principal component.

SPECTER2 vector space exploration

Based on the SPECTER2 embeddings of 1564 known models from ModelDB and 1700 generic neuroscience models, we implemented a KNN-based approach in the vector space containing all the embeddings to examine patterns in their distribution and to determine how likely a neuroscience abstract is computational or not.

Given our hypothesis that computational modeling papers embed near other computational modeling work, we examined the differences between percentages of models ( Figure 2A ) and non-models ( Figure 2B ) with k th nearest known model within a certain distance. We used this difference ( Figure 2C ) to determine a cutoff threshold for when a paper should be considered likely to involve computational neuroscience work (ie, if the k th nearest model is within the threshold). From this, we generated a possible range in which this threshold distance lies. For example, for the 5 nearest neighbors ( k  = 5), it is likely that a neuroscience work is computational when its fifth nearest neighbor (among the set of known models) is within the distance of approximately 0.08. As the value of k increases, the threshold distance increases as well, taking more neighbors into consideration, which can create bias in results and produce false positives or false negatives.

SPECTER2 analysis. (A) The calculated percentages of known models that have their kth nearest neighbor (a known model) within a certain distance. (B) The percentages of generic neuroscience abstracts that have their kth nearest model within a certain distance. (C) The difference between A and B. (D) The number of computational work found going down the list from SPECTER2 results (k = 5), compared with the case when identification is fully accurate, as shown by the dotted line. The first 20 papers by similarity to ModelDB papers were all computational neuroscience papers, with the odds of containing computational neuroscience broadly decreasing as the similarity decreased.

SPECTER2 analysis. (A) The calculated percentages of known models that have their k th nearest neighbor (a known model) within a certain distance. (B) The percentages of generic neuroscience abstracts that have their k th nearest model within a certain distance. (C) The difference between A and B. (D) The number of computational work found going down the list from SPECTER2 results ( k  = 5), compared with the case when identification is fully accurate, as shown by the dotted line. The first 20 papers by similarity to ModelDB papers were all computational neuroscience papers, with the odds of containing computational neuroscience broadly decreasing as the similarity decreased.

With these potential thresholds, we explored the vector space of our test dataset, containing 105 502 neuroscience-related abstracts published in 2022. For different values of k , high probability computational abstracts were identified and sorted by the distance to their k th computational neighbor. Two annotators manually annotated the top 100 abstracts for k  = 5, 20, and 50 to determine accuracy levels. Of these, smaller values of k generated more accurate predictions of the use of computational techniques. As shown in Figure 2D , the top 20 abstracts identified by SPECTER2 using k  = 5 are all computational models. Further down the list, more non-computational works start to emerge.

GPT-3.5 and GPT-4 results

We queried both GPT-3.5 and GPT-4 to find out if they would identify each paper—given the title and abstract—as using computational neuroscience or not. This check was performed using the prompt described in methods without examples for the top 100 abstracts from SPECTER2 analysis results with k  = 5, 20, and 50. These prompts were initially run with temperature = 0 for near-deterministic results. Cohen Kappa’s agreement index between the 2 annotators, as well as agreement with GPT-3.5 and GPT-4 is shown in Figure 3 . GPT-3.5 tends to have much lower agreement scores with human annotators, compared to GPT-4. GPT-3.5 often misclassified non-computational work as computational, leading to false positives.

Cohen Kappa’s agreement (an agreement measure that accounts for chance events) comparing scores the identification of a paper as including computational neuroscience work or not among 2 human annotators (S1 and S2), GPT-3.5, and GPT-4. The k = 5, 20, and 50 matrices denote 3 overlapping corpuses of papers to consider, based on SPECTER embeddings as described in the text. Agreement is broadly comparable in all 3 sets, with k = 5 showing the highest overall values, so k = 5 was used for the remainder of the analyses.

Cohen Kappa’s agreement (an agreement measure that accounts for chance events) comparing scores the identification of a paper as including computational neuroscience work or not among 2 human annotators (S1 and S2), GPT-3.5, and GPT-4. The k  = 5, 20, and 50 matrices denote 3 overlapping corpuses of papers to consider, based on SPECTER embeddings as described in the text. Agreement is broadly comparable in all 3 sets, with k  = 5 showing the highest overall values, so k  = 5 was used for the remainder of the analyses.

Improving GPT-3.5 results

We employed 3 strategies to improve the performance of the relatively low-cost GPT-3.5 model. By using annotator S1’s results as baseline, we calculated F1 scores (the harmonic mean of precision and recall; an F1 score near 1 requires both low false positives and false negatives) to evaluate improvement in GPT-3.5’s performance from these different strategies.

1. Instruction fine-tuning: allowing outputs of uncertainty

As GPT-3.5 results returned a large number of false positives, we tweaked the prompt to allow GPT-3.5 to output unsure when it is not certain about the computational characteristics of an abstract. All instances classified as “unsure” are actually non-computational, according to GPT-4’s outputs. Integrating “unsure” instances into the negative set increased GPT-3.5’s F1 score to 0.617.

2. Role of temperature

Using the fine-tuned prompt from the previous step, we further explored the role of temperature by using temperature (randomness) values 0, 0.5, 1, 0.5, and 2. The best performing temperatures as measured by F1 scores were 0.5 and 1, as measured by a majority vote of 3 calls to the API ( Figure 4 ).

(A) Role of temperature (randomness). GPT-3.5 was queried 3 times for whether or not an abstract implied use of a computational model. After combining the answers by voting, moderate but low temperature (randomness) achieved the highest F1-1 scores, with high temperatures showing poor performance. (B) GPT-3.5 F1-1 scores before and after Chain-of-Thought (CoT), compared with GPT-4.

(A) Role of temperature (randomness). GPT-3.5 was queried 3 times for whether or not an abstract implied use of a computational model. After combining the answers by voting, moderate but low temperature (randomness) achieved the highest F1-1 scores, with high temperatures showing poor performance. (B) GPT-3.5 F1-1 scores before and after Chain-of-Thought (CoT), compared with GPT-4.

3. Chain-of-thought prompting

Following, 14 which showed that incorporating reasoning steps (known as CoT) in LLM prompts can improve their performance, we added reasoning steps of how a human would approach classification tasks of computational work. Instead of using the few-shot examples as demonstrated by the CoT prompting, we took a zero-shot approach with specifying a general reasoning path. Our prompt is shown below.

You are an expert in computational neuroscience, reviewing papers for possible inclusion in a repository of computational neuroscience. This database includes papers that use computational models written in any programming language for any tool, but they all must have a mechanistic component for getting insight into the function individual neurons, networks of neurons, or of the nervous system in health or disease.

Suppose that a paper has title and abstract as indicated below. Perform the following steps, numbering each answer as follows.

Identify evidence that this paper addresses a problem related to neuroscience or neurology.

Identify specific evidence in the abstract that directly suggests the paper uses computational neuroscience approaches (eg, simulation with a mechanistic model). Do not speculate beyond the methods explicitly mentioned in the abstract.

Identify evidence that this paper uses machine learning or any other computational methods.

Provide a one-word final assessment of either “yes,” “no,” or “unsure” (include the quotes but provide no other output) as follows: Respond with “yes” if the paper likely uses computational neuroscience approaches (eg, simulation with a mechanistic model), and “no” otherwise. Respond “unsure” if it is unclear if the paper uses a computational model or not. In particular, respond “yes” for a paper that uses both computational neuroscience and other approaches. Respond “no” for a paper that uses machine learning to make predictions about the nervous system but does not include a mechanistic model. Respond “no” for purely experimental papers. Provide no other output.

   Title: “{title}”

   Abstract: “{abstract}”

GPT-3.5’s F1 score, even though still lower than GPT-4, showed a marked increase to 85.5% after CoT was applied ( Figure 4 ).

We used repeated queries to GPT-4 to identify relevant computational neuroscience related metadata annotations for each remaining abstract. Specifically, we performed separate queries for each abstract for ModelDB’s long-standing 6 broad categories of metadata: “brain regions/organisms,” “cell types,” “ion channels,” “receptors,” “transmitters,” and “model concepts.” Each category may have hundreds of possible values and models may have multiple relevant metadata tags for a given category; for example, a model that examines the concept “Aging/Alzheimer’s” may also study the concept “synaptic plasticity.” Each category was predicted via 2 methods, the GPT-4 approach and for comparison an older rule-based approach. 9 Both models broadly had comparable PPVs, with the exception that the rule-based approach had a lower false positive rate for receptors. Our new GPT-4 approach, however, identified 85% more total metadata tags than the rule-based approach. In exploratory studies, GPT-3.5 was relatively prone to suggest new terms or reword terms and so we did not apply it here.

Figure 5 illustrates the distribution of metadata tags assigned by the Rule-based method and GPT-4 query to 115 abstracts, totaling 636 tags from the rule-based model, and 1154 tags from GPT-4 across various categories. Notably, the category “model concepts” received the highest number of tags, amounting to 403 and 753, respectively. Since paper titles and abstracts do not encompass all details about a model, our evaluation focused solely on the relevance of the predicted metadata tags, rather than the comprehensiveness of their model descriptions.

(A) Comparison of the performance of GPT-4 (top) vs ModelDB’s legacy rule-based predictor (bottom) on an analysis of 115 selected neuroscience abstracts. Each color/column represents a different broad category of metadata. Both raw counts (y-axis) and percentages are shown. Bars C, B, and I denote “correct,” “borderline” (see text), and “incorrect,” respectively. In all categories except receptors (where the rule-based approach had a very low error rate), positive predictive value was broadly comparable between the 2 methods, however, GPT-4 showed greater recall, predicting 85% more metadata tags in total. (B) Metadata identification accuracy across categories via GPT-4. Results are evaluated and accuracy scores are calculated by comparing with human annotations. Total legend represents the total number of tags in each category for metadata.

(A) Comparison of the performance of GPT-4 (top) vs ModelDB’s legacy rule-based predictor (bottom) on an analysis of 115 selected neuroscience abstracts. Each color/column represents a different broad category of metadata. Both raw counts ( y -axis) and percentages are shown. Bars C, B, and I denote “correct,” “borderline” (see text), and “incorrect,” respectively. In all categories except receptors (where the rule-based approach had a very low error rate), positive predictive value was broadly comparable between the 2 methods, however, GPT-4 showed greater recall, predicting 85% more metadata tags in total. (B) Metadata identification accuracy across categories via GPT-4. Results are evaluated and accuracy scores are calculated by comparing with human annotations. Total legend represents the total number of tags in each category for metadata.

Upon manual review, we found “model regions” obtained the highest PPV at 96.6% from the rule-based model, and the “model concepts” achieved the highest PPV at 97.2% via GPT-4, while “cell types” recorded the lowest at 73.7% and 70.4%, respectively. These findings indicate that our approach, which leverages GPT-4 for metadata identification, shows comparable PPV to the rules-based approach, 9 with more total matches and without needing to develop an explicit set of rules.

Despite ongoing and previous efforts (eg, NIFSTD 16 ; EBRAINS Knowledge Graph 17 ; INCF 18 ) there remains a lack of widely used robust and consistent ontologies in neuroscience. 19 , 20 MeSH terms used to index publications in PubMed are focused on medically relevant terms, missing the cellular-level details essential for characterizing neuroscience. Therefore, establishing an automated process for identifying and annotating neuroscience work and building efficient knowledge repositories will enhance the ability to find and reuse computational models. In the future, integrating NLP techniques and LLMs may play an essential role in determining a publication’s computational aspects and metadata extraction.

Using only ModelDB data and why KNN-based approach?

We initially used entries from ModelDB 11 to generate SPECTER2 embeddings to establish an initial vector space of computational neuroscience work. However, ModelDB is not necessarily representative of computational neuroscience as a whole. 21 An alternative approach for creating this initial set is to collect abstracts from journals that are deemed as most relevant, such as Journal of Computational Neuroscience, etc. However, only a small fraction of ModelDB models is in these journals as computational models are often paired with experimental studies. We used KNN to measure similarity to embeddings of known model papers as our generic neuroscience set was randomly selected and may thus contain some computational work, so it could not serve as a true set of negative examples. Furthermore, KNN has the advantage of not requiring us to make assumptions about the shape or contiguity of the region of computational neuroscience work in the embedding space unlike many other algorithms (eg, unlike Support Vector Machines).

SPECTER2 embeddings are only from title and abstract

We used SPECTER2 12 document-level embeddings to determine whether a publication counts as computational neuroscience work. A unique feature of SPECTER2 is that it was trained on citations but does not need them to produce an embedding. In our case, if the publications that cite or are cited by a publication are computational, it is more likely that this publication is also computational. Citation links provide more information on the publication’s content and explicitly using them would generate embeddings with higher representativeness. Another challenge is that the abstract does not necessarily focus on key model details (eg, model 87284 22 uses CA1 pyramidal cells, which are not mentioned in the abstract). While full text and citations contain additional information that could be used in future studies to detect computational neuroscience models, we do not use them here to keep the model simple, because of the increased tokens (and hence cost) required for considering more text, and because of licensing restrictions (for full text). Furthermore, covering too much information in embeddings could lead to false positives in the results. 9

What counts as computational neuroscience work?

Computational neuroscience 23 is not entirely well defined, which poses a challenge for both manual and automated paper screening. Neuroscience itself is an interdisciplinary field that often makes heavy use of computation, creating a fuzzy boundary between analysis of experimental data and computational neuroscience. Therefore, we defined a scope—based on existing ModelDB entries—when prompting responses from GPT-4 and GPT-3.5, and the 2 annotators used the same scope during the manual annotation process.

When evaluating metadata predictions, annotators mainly considered the relevance of the provided tags to the papers themselves, while acknowledging the circumstances where certain tags could be mentioned, but to only serve as a reference or contrast. For example, “Gamma oscillation” was identified from the rule-based model as 1 of the tags for “model concepts” in 15 ; however, gamma oscillation was only mentioned as the context of the article and not as a main theme of the paper. In this scenario, annotators would evaluate the tag “Gamma oscillation” as “borderline.” In computational neuroscience, the authors know with certainty if their models include specific, eg, ion channels or do not; when generalizing to other fields, the use of hedging (see eg, 24 ) around uncertainties may affect prediction accuracy.

Using one-shot/few-shot learning

As shown in the analysis of GPT-3.5 results, subtle changes in prompts may yield large differences in the results. In this study, prompts given to GPT-3.5 and GPT-4 did not include concrete examples and expected results. Research on Few-Shot Learning (FSL) shows that models can learn to generalize information from only a limited number of examples and perform with high accuracy on a new task. 25 For example, in our study, providing a few examples of abstracts with their associated metadata might result in better results from GPT-3.5 and GPT-4. Future endeavors could potentially devote to experimenting with One-Shot Learning (OSL) or FSL, and measure the improvement in results.

Fine-tuning LLMs for metadata extraction

Achieving state-of-the-art results in extracting metadata information in computational neuroscience might require further fine-tuning a LLM. Fine-tuning a LLM on a specific task can markedly improve performance, 26 but to employ this in the curation context will require the development of a large high-quality dataset of examples.

Recognizing the increasing importance of repository curation and how this could benefit future relevant work in neuroscience, we explored the feasibility of leveraging NLP techniques and LLMs to enhance identification and metadata extraction of computational neuroscience work.

SPECTER2 embeddings yield promising outcomes, demonstrating high effectiveness for screening large numbers of neuroscience papers for the use of computational models without incurring API charges. GPT-4 shows relatively high accuracy in identifying metadata, but requires continued exploration in metrics to increase performance. Automating the process of collecting computational work and systematically storing information related to computational models paves the way for future research to reuse, update, or improve existing models.

Continued efforts should focus on increasing the accuracy of LLMs in gaining more accurate and contextually aware results. Attempts at using alternative metrics to improve accuracy of LLMs or establishing a new dataset comprising manually labeled computational information and metadata to fine-tune current LLMs can potentially help reach state-of-the-art results. Overall, this study marks a significant step forward in exploring the potential of NLP techniques and LLMs in automating the identification and curation of computational neuroscience work.

All authors contributed to the design of the study; Robert A. McDougal, Ziqing Ji, and Siyan Guo implemented the code; and all authors contributed to analysis of the results, writing, and editing the manuscript, and approval of the final version.

This research received no specific grant from any funding agency in the public, commercial, or not-for-profit sectors.

The authors have no competing interests to declare.

The source code and data underlying this study is available in a GitHub repository at https://github.com/mcdougallab/gpt-curation .

Vrandečić D , Krötzsch M.   Wikidata: a free collaborative knowledgebase . Commun ACM . 2014 ; 57 ( 10 ): 78 - 85 .

Google Scholar

Benson DA , Cavanaugh M , Clark K , et al.    GenBank . Nucleic Acids Res . 2013 ; 41(Database issue) : D36 - D42 .

Ascoli GA , Donohue DE , Halavi M.   NeuroMorpho.Org: a central resource for neuronal morphologies . J Neurosci . 2007 ; 27 ( 35 ): 9247 - 9251 .

Ascoli GA.   Sharing neuron data: carrots, sticks, and digital records . PLoS Biol . 2015 ; 13 ( 10 ): e1002275 .

McDougal RA , Bulanova AS , Lytton WW.   Reproducibility in computational neuroscience models and simulations . IEEE Trans Biomed Eng . 2016 ; 63 ( 10 ): 2021 - 2035 .

Crook SM , Davison AP , McDougal RA , et al.    Editorial: reproducibility and rigour in computational neuroscience . Front Neuroinform . 2020 ; 14 : 23 .

Abrams MB , Bjaalie JG , Das S , et al.    A standards organization for open and FAIR neuroscience: the international neuroinformatics coordinating facility . Neuroinformatics . 2022 ; 20 ( 1 ): 25 - 36 .

Poline J-B , Kennedy DN , Sommer FT , et al.    Is neuroscience FAIR? a call for collaborative standardisation of neuroscience data . Neuroinformatics . 2022 ; 20 ( 2 ): 507 - 512 .

McDougal RA , Dalal I , Morse TM , et al.    Automated metadata suggestion during repository submission . Neuroinformatics . 2019 ; 17 ( 3 ): 361 - 371 .

Bijari K , Zoubi Y , Ascoli GA.   Assisted neuroscience knowledge extraction via machine learning applied to neural reconstruction metadata on NeuroMorpho.Org . Brain Inform . 2022 ; 9 ( 1 ): 26 .

McDougal RA , Morse TM , Carnevale T , et al.    Twenty years of ModelDB and beyond: building essential modeling tools for the future of neuroscience . J Comput Neurosci . 2017 ; 42 ( 1 ): 1 - 10 .

Singh A , D’Arcy M , Cohan A , et al.   SciRepEval: a multi-format benchmark for scientific document representations. In: Bouamor H , Pino J , Bali K , eds. Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing . Association for Computational Linguistics ; 2023 : 5548 – 5566 .

Google Preview

Devlin J , Chang M-W , Lee K , et al.  BERT: pre-training of deep bidirectional transformers for language understanding. arXiv [cs.CL]. http://arxiv.org/abs/1810.04805 , 2018, preprint: not peer reviewed.

Wei J , Wang X , Schuurmans D , et al.  Chain-of-thought prompting elicits reasoning in large language models. arXiv [cs.CL]:24824–24837. https://proceedings.neurips.cc/paper_files/paper/2022/file/9d5609613524ecf4f15af0f7b31abca4-Paper-Conference.pdf , 2022, preprint: not peer reviewed.

Kim J , Lee J , Kim E , et al.    Dopamine depletion can be predicted by the aperiodic component of subthalamic local field potentials . Neurobiol Dis . 2022 ; 168 : 105692 .

Bug WJ , Ascoli GA , Grethe JS , et al.    The NIFSTD and BIRNLex vocabularies: building comprehensive ontologies for neuroscience . Neuroinformatics . 2008 ; 6 ( 3 ): 175 - 194 .

Appukuttan S , Bologna LL , Schürmann F , et al.    EBRAINS live papers—interactive resource sheets for computational studies in neuroscience . Neuroinformatics . 2023 ; 21 ( 1 ): 101 - 113 .

Abrams MB , Bjaalie JG , Das S , et al.    Correction to: a standards organization for open and FAIR neuroscience: the international neuroinformatics coordinating facility . Neuroinformatics . 2022 ; 20 ( 1 ): 37 - 38 .

Hamilton DJ , Wheeler DW , White CM , et al.    Name-calling in the hippocampus (and beyond): coming to terms with neuron types and properties . Brain Inform . 2017 ; 4 ( 1 ): 1 - 12 .

Shepherd GM , Marenco L , Hines ML , et al.    Neuron names: a gene- and property-based name format, with special reference to cortical neurons . Front Neuroanat . 2019 ; 13 : 25 .

Tikidji-Hamburyan RA , Narayana V , Bozkus Z , et al.    Software for brain network simulations: a comparative study . Front Neuroinform . 2017 ; 11 : 46 .

Morse TM , Carnevale NT , Mutalik PG , et al.    Abnormal excitability of oblique dendrites implicated in early Alzheimer’s: a computational study . Front Neural Circuits . 2010 ; 4 : 16 .

Sejnowski TJ , Koch C , Churchland PS.   Computational neuroscience . Science . 1988 ; 241 ( 4871 ): 1299 - 1306 .

Kilicoglu H , Bergler S.  A high-precision approach to detecting hedges and their scopes. In: Farkas R , Vincze V , Szarvas G , et al. , eds. Proceedings of the Fourteenth Conference on Computational Natural Language Learning—Shared Task . Association for Computational Linguistics ; 2010 : 70 - 77 .

Wang Y , Yao Q , Kwok JT , et al.    Generalizing from a few examples: a survey on few-shot learning . ACM Comput Surv . 2020 ; 53 ( 3 ): 1 - 34 .

Howard J , Ruder S. Universal language model fine-tuning for text classification. arXiv [cs.CL]. https://doi.org/10.48550/ARXIV.1801.06146 , 2018 , preprint: not peer reviewed.

Email alerts

Citing articles via.

  • Recommend to your Library

Affiliations

  • Online ISSN 1527-974X
  • Copyright © 2024 American Medical Informatics Association
  • About Oxford Academic
  • Publish journals with us
  • University press partners
  • What we publish
  • New features  
  • Open access
  • Institutional account management
  • Rights and permissions
  • Get help with access
  • Accessibility
  • Advertising
  • Media enquiries
  • Oxford University Press
  • Oxford Languages
  • University of Oxford

Oxford University Press is a department of the University of Oxford. It furthers the University's objective of excellence in research, scholarship, and education by publishing worldwide

  • Copyright © 2024 Oxford University Press
  • Cookie settings
  • Cookie policy
  • Privacy policy
  • Legal notice

This Feature Is Available To Subscribers Only

Sign In or Create an Account

This PDF is available to Subscribers Only

For full access to this pdf, sign in to an existing account, or purchase an annual subscription.

Thank you for visiting nature.com. You are using a browser version with limited support for CSS. To obtain the best experience, we recommend you use a more up to date browser (or turn off compatibility mode in Internet Explorer). In the meantime, to ensure continued support, we are displaying the site without styles and JavaScript.

  • View all journals
  • Explore content
  • About the journal
  • Publish with us
  • Sign up for alerts

Computational neuroscience articles within Nature Methods

Research Highlight | 12 April 2024

Modeling locomotion from environment to neurons

Brief Communication 11 April 2024 | Open Access

brainlife.io: a decentralized and open-source cloud platform to support neuroscience research

brainlife.io is a one-stop cloud platform for data management, visualization and analysis in human neuroscience. It is web-based and provides access to a variety of tools in a reproducible and reliable manner.

  • Soichi Hayashi
  • , Bradley A. Caron
  •  &  Franco Pestilli

Article 08 April 2024 | Open Access

Spike sorting with Kilosort4

Kilosort4 is a spike-sorting algorithm with improved performance compared to previous versions, owing to the use of a graph-based clustering approach. The tool extracts the activity of individual neurons from electrophysiological recordings acquired with, for example, Neuropixels electrodes.

  • Marius Pachitariu
  • , Shashwat Sridhar
  •  &  Carsen Stringer

Research Briefing | 01 April 2024

Building an automated three-dimensional flight agent for neural network reconstruction

RoboEM, an artificial intelligence (AI)-based flight agent, automatically steers through three-dimensional electron microscopy (3D-EM) images of brain tissue to follow neurites. RoboEM substantially improves state-of-the-art automated reconstructions, eliminating manual proofreading needs in complex connectomic analysis problems and paving the way for high-throughput, cost-effective, large-scale mapping of neuronal networks — connectomes.

Article 21 March 2024 | Open Access

RoboEM: automated 3D flight tracing for synaptic-resolution connectomics

RoboEM enables automated proofreading of electron microscopy datasets using a strategy akin to that of self-steering cars. This decreases the need for manual proofreading of segmented datasets and facilitates connectomic analyses.

  • Martin Schmidt
  • , Alessandro Motta
  •  &  Moritz Helmstaedter

Research Highlight | 11 January 2024

Predicting neural activity from facial expressions

Facemap tracks keypoints on the mouse face and feeds the information into a deep neural network to predict neural activity.

Research Briefing | 21 December 2023

Ultra-long-working-distance multiphoton objective unlocks new possibilities for imaging

In 1858, the first standard for microscope objectives was established to encourage interchangeable components. Over the following 150 years, standards have evolved to constrain the size of objectives, which limits the parameters of working distance, field of view and resolution. A new design breaks out of this conventional envelope, offering an ultra-long working distance in air and enabling new neuroscience experiments.

Article | 05 December 2023

Automated neuron tracking inside moving and deforming C. elegans using deep learning and targeted augmentation

Targettrack is a deep-learning-based pipeline for automatic tracking of neurons within freely moving C. elegans . Using targeted augmentation, the pipeline has a reduced need for manually annotated training data.

  • Core Francisco Park
  • , Mahsa Barzegar-Keshteli
  •  &  Sahand Jamal Rahi

Article 20 November 2023 | Open Access

Multi-layered maps of neuropil with segmentation-guided contrastive learning

SegCLR automatically annotates segmented electron microscopy datasets of the brain with information such as cellular subcompartments and cell types, using a self-supervised contrastive learning approach.

  • Sven Dorkenwald
  • , Peter H. Li
  •  &  Viren Jain

Resource 02 October 2023 | Open Access

Waxholm Space atlas of the rat brain: a 3D atlas supporting data analysis and integration

An updated version of the Waxholm Space atlas of the rat brain includes more detailed annotations of several brain regions, including the cortex, striatopallidal region, midbrain and thalamus, expanding the previous version with 112 new and 57 revised structures.

  • Heidi Kleven
  • , Ingvild E. Bjerke
  •  &  Trygve B. Leergaard

Article | 07 September 2023

FIOLA: an accelerated pipeline for fluorescence imaging online analysis

FIOLA is a pipeline for processing calcium or voltage imaging data. Its advantages include the fast speed and online processing.

  • Changjia Cai
  • , Cynthia Dong
  •  &  Andrea Giovannucci

Resource | 17 April 2023

BigNeuron: a resource to benchmark and predict performance of algorithms for automated tracing of neurons in light microscopy datasets

This resource describes a collection of neurons from a variety of light microscopy-based datasets, which can serve as a gold standard for testing automated tracing algorithms, as shown by comparison of the performance of 35 algorithms.

  • Linus Manubens-Gil
  •  &  Hanchuan Peng

Article | 27 March 2023

High-speed low-light in vivo two-photon voltage imaging of large neuronal populations

A suite of tools including positive-going voltage indicators, a high-speed two-photon microscope, and denoising software enables prolonged imaging of electrical activity in neurons with limited toxicity.

  • Jelena Platisa
  •  &  Jerry L. Chen

Brief Communication | 02 March 2023

A modular architecture for organizing, processing and sharing neurophysiology data

A modular architecture for managing and sharing electrophysiology, behavior, colony management and other data has been built to support individual laboratories or large consortia.

  • Luigi Acerbi
  • , Valeria Aguillon-Rodriguez
  •  &  Miles J. Wells

Research Briefing | 30 December 2022

Digital brain atlases reveal postnatal development to 2 years of age in human infants

During the first two years of postnatal development, the human brain undergoes rapid, pronounced changes in size, shape and content. Using high-resolution MRI, we constructed month-to-month atlases of infants 2 weeks to 2 years old, capturing key spatiotemporal traits of early brain development in terms of cortical geometries and tissue properties.

Resource 30 December 2022 | Open Access

Multifaceted atlases of the human brain in its infancy

This Resource presents surface and volume atlases of human brain development during early infancy, at monthly intervals.

  • Sahar Ahmad
  •  &  Pew-Thian Yap

Article 30 December 2022 | Open Access

Local shape descriptors for neuron segmentation

During segmentation of neurons in electron microscopy datasets, auxiliary learning via the prediction of local shape descriptors increases efficiency, which is important for the processing of datasets of ever-increasing size.

  • Arlo Sheridan
  • , Tri M. Nguyen
  •  &  Jan Funke

Brief Communication 01 December 2022 | Open Access

TemplateFlow: FAIR-sharing of multi-scale, multi-species brain models

TemplateFlow is a repository for human and other brain templates and atlases, which operates under the FAIR principles.

  • Rastko Ciric
  • , William H. Thompson
  •  &  Oscar Esteban

Brief Communication | 28 November 2022

A large-scale neural network training framework for generalized estimation of single-trial population dynamics

AutoLFADS models neural population activity via a deep learning-based approach with automated hyperparameter optimization.

  • Mohammad Reza Keshtkaran
  • , Andrew R. Sedler
  •  &  Chethan Pandarinath

News & Views | 24 October 2022

Mapping of the zebrafish brain takes shape

The generation of a whole larval zebrafish brain electron microscopy volume in tandem with automated tools lays the groundwork for producing the first vertebrate brain connectome.

  • Paul Brooks
  • , Andrew Champion
  •  &  Marta Costa

Article 17 October 2022 | Open Access

Estimation of skeletal kinematics in freely moving rodents

Pose estimation in combination with an anatomically constrained model allows inferring skeletal kinematics in rodents.

  • Arne Monsees
  • , Kay-Michael Voit
  •  &  Jason N. D. Kerr

News & Views | 06 October 2022

The data science future of neuroscience theory

An approach for integrating the wealth of heterogeneous brain data — from gene expression and neurotransmitter receptor density to structure and function — allows neuroscientists to easily place their data within the broader neuroscientific context.

  • Bradley Voytek

Article 06 October 2022 | Open Access

neuromaps: structural and functional interpretation of brain maps

neuromaps is a toolbox for accessing, transforming and comparing human neuroimaging data.

  • Ross D. Markello
  • , Justine Y. Hansen
  •  &  Bratislav Misic

Research Highlight | 06 September 2022

Neuroscience data analysis in the cloud

The NeuroCAAS platform simplifies data analysis in the neuroscience space for users and enhances reproducibility.

This Month | 04 August 2022

When labs welcome under-represented groups

To diversify science, some labs open summer doors wide to reach out to under-represented groups.

  • Vivien Marx

Brief Communication | 10 June 2022

ASLPrep: a platform for processing of arterial spin labeled MRI and quantification of regional brain perfusion

ASLPrep is a software suite for reproducible processing of arterial spin labeled magnetic resonance imaging data.

  • Azeez Adebimpe
  • , Maxwell Bertolero
  •  &  Theodore D. Satterthwaite

Research Briefing | 11 May 2022

NeuroMechFly: an integrative simulation testbed for studying Drosophila behavioral control

Neuromechanical simulations enable the study of how interactions between organisms and their physical surroundings give rise to behavior. NeuroMechFly is an open-source neuromechanical model of adult Drosophila , with data-driven morphological biorealism that enables a synergistic cross-talk between computational and experimental neuroscience.

Article | 11 May 2022

NeuroMechFly, a neuromechanical model of adult Drosophila melanogaster

NeuroMechFly enables simulations of adult Drosophila melanogaster . The platform combines a biomechanical representation of the fly body, models of the muscles, a neural controller and a physics-based simulation of the environment.

  • Victor Lobato-Rios
  • , Shravan Tata Ramalingasetty
  •  &  Pavan Ramdya

News & Views | 12 April 2022

Tracking together: estimating social poses

Two new toolkits that leverage deep-learning approaches can track the positions of multiple animals and estimate poses in different experimental paradigms.

  •  &  Gordon J. Berman

This Month | 12 April 2022

Mackenzie Weygandt Mathis

Building a sustainable open source toolbox to track social behavior and how to get in the zone.

Article 12 April 2022 | Open Access

Multi-animal pose estimation, identification and tracking with DeepLabCut

DeepLabCut is extended to enable multi-animal pose estimation, animal identification and tracking, thereby enabling the analysis of social behaviors.

  • Jessy Lauer
  •  &  Alexander Mathis

Article 04 April 2022 | Open Access

SLEAP: A deep learning system for multi-animal pose tracking

SLEAP is a versatile deep learning-based multi-animal pose-tracking tool designed to work on videos of diverse animals, including during social behavior.

  • Talmo D. Pereira
  • , Nathaniel Tabris
  •  &  Mala Murthy

Article | 28 March 2022

Detecting and correcting false transients in calcium imaging

SEUDO is a tool for detecting and correcting errors introduced by automated processing of calcium imaging data.

  • Jeffrey L. Gauthier
  • , Sue Ann Koay
  •  &  Adam S. Charles

This Month | 05 August 2021

Pavan Ramdya

A neuroscientist who jams, plays and builds a way to capture animal movement.

Article | 05 August 2021

LiftPose3D, a deep learning-based approach for transforming two-dimensional to three-dimensional poses in laboratory animals

LiftPose3D infers three-dimensional poses from two-dimensional data or from limited three-dimensional data. The approach is illustrated for videos of behaving Drosophila , mice, rats and macaques.

  • Adam Gosztolai
  • , Semih Günel

Correspondence | 12 July 2021

CloudReg: automatic terabyte-scale cross-modal brain volume registration

  • Vikram Chandrashekhar
  • , Daniel J. Tward
  •  &  Joshua T. Vogelstein

Correspondence | 30 June 2021

The ENIGMA Toolbox: multiscale neural contextualization of multisite neuroimaging datasets

  • Sara Larivière
  • , Casey Paquola
  •  &  Boris C. Bernhardt

Article | 19 April 2021

Geometric deep learning enables 3D kinematic profiling across species and environments

DANNCE enables robust 3D tracking of animals’ limbs and other features in naturalistic environments by making use of a deep learning approach that incorporates geometric reasoning. DANNCE is demonstrated on behavioral sequences from rodents, marmosets, and chickadees.

  • Timothy W. Dunn
  • , Jesse D. Marshall
  •  &  Bence P. Ölveczky

This Month | 01 April 2021

Tiago Ferreira

How computational neuroanatomy, vintage gear and fado fit together.

Brief Communication | 01 April 2021

SNT: a unifying toolbox for quantification of neuronal anatomy

SNT is a toolbox neuronal morphometry and connectomics and provides various analysis, visualization, quantification and modeling tools.

  • Cameron Arshadi
  • , Ulrik Günther
  •  &  Tiago A. Ferreira

Correspondence | 09 March 2021

Chunkflow: hybrid cloud processing of large 3D images by convolutional nets

  • Jingpeng Wu
  • , William M. Silversmith
  •  &  H. Sebastian Seung

Comment | 04 January 2021

Quantum computing at the frontiers of biological sciences

Computing plays a critical role in the biological sciences but faces increasing challenges of scale and complexity. Quantum computing, a computational paradigm exploiting the unique properties of quantum mechanical analogs of classical bits, seeks to address many of these challenges. We discuss the potential for quantum computing to aid in the merging of insights across different areas of biological sciences.

  • Prashant S. Emani
  • , Jonathan Warrell
  •  &  Aram W. Harrow

Article | 07 September 2020

A temporal decomposition method for identifying venous effects in task-based fMRI

Temporal decomposition through manifold fitting (TDM) is an analysis technique that decomposes blood oxygenation level dependent (BOLD) responses in task-based fMRI into different components that likely correspond to microvasculature- and macrovasculature-driven signals.

  • Kendrick Kay
  • , Keith W. Jamison
  •  &  Kamil Uğurbil

In Brief | 02 July 2020

Benchmarked spike sorting

Article | 02 March 2020

A virtual reality system to analyze neural activity and behavior in adult zebrafish

Complex behaviors and the underlying neural activity in adult zebrafish can be accessed through a virtual reality system in combination with two-photon microscopy.

  • Kuo-Hua Huang
  • , Peter Rupprecht
  •  &  Rainer W. Friedrich

Comment | 20 December 2018

Imaging whole nervous systems: insights into behavior from worms to fish

The development of systems combining rapid volumetric imaging with three-dimensional tracking has enabled the measurement of brain-wide dynamics in freely behaving animals such as worms, flies, and fish. These advances provide an exciting opportunity to understand the organization of neural circuits in the context of voluntary and natural behaviors. In this Comment, we highlight recent progress in this burgeoning area of research.

  • John A. Calarco
  •  &  Aravinthan D. T. Samuel

Article | 20 December 2018

Fast animal pose estimation using deep neural networks

LEAP is a deep-learning-based approach for the analysis of animal pose. LEAP’s graphical user interface facilitates training of the deep network. The authors illustrate the method by analyzing Drosophila and mouse behavior.

  • , Diego E. Aldarondo
  •  &  Joshua W. Shaevitz

Article | 10 December 2018

fMRIPrep: a robust preprocessing pipeline for functional MRI

fMRIPrep is a robust and easy-to-use pipeline for preprocessing of diverse fMRI data. The transparent workflow dispenses of manual intervention, thereby ensuring the reproducibility of the results.

  • Oscar Esteban
  • , Christopher J. Markiewicz
  •  &  Krzysztof J. Gorgolewski

Article | 30 November 2018

Brain-wide circuit interrogation at the cellular level guided by online analysis of neuronal function

Imaging of neuronal activity across the whole zebrafish brain in combination with online analysis allows for manipulating neuronal activity according to function. This approach is used to ablate or activate neurons in fictively swimming zebrafish larvae.

  • Nikita Vladimirov
  • , Chen Wang
  •  &  Misha B. Ahrens

Correspondence | 30 October 2018

A community-developed open-source computational ecosystem for big neuro data

  • Joshua T. Vogelstein
  • , Eric Perlman
  •  &  Randal Burns

Advertisement

Browse broader subjects

  • Computational biology and bioinformatics
  • Neuroscience

Browse narrower subjects

  • Biophysical models
  • Dynamical systems
  • Learning algorithms
  • Network models
  • Neural decoding
  • Neural encoding

Quick links

  • Explore articles by subject
  • Guide to authors
  • Editorial policies

computational neuroscience research topics

MIT Technology Review

  • Newsletters

Google helped make an exquisitely detailed map of a tiny piece of the human brain

A small brain sample was sliced into 5,000 pieces, and machine learning helped stitch it back together.

  • Cassandra Willyard archive page

""

A team led by scientists from Harvard and Google has created a 3D, nanoscale-resolution map of a single cubic millimeter of the human brain. Although the map covers just a fraction of the organ—a whole brain is a million times larger—that piece contains roughly 57,000 cells, about 230 millimeters of blood vessels, and nearly 150 million synapses. It is currently the highest-resolution picture of the human brain ever created.

To make a map this finely detailed, the team had to cut the tissue sample into 5,000 slices and scan them with a high-speed electron microscope. Then they used a machine-learning model to help electronically stitch the slices back together and label the features. The raw data set alone took up 1.4 petabytes. “It’s probably the most computer-intensive work in all of neuroscience,” says Michael Hawrylycz, a computational neuroscientist at the Allen Institute for Brain Science, who was not involved in the research. “There is a Herculean amount of work involved.”

Many other brain atlases exist, but most provide much lower-resolution data. At the nanoscale, researchers can trace the brain’s wiring one neuron at a time to the synapses, the places where they connect. “To really understand how the human brain works, how it processes information, how it stores memories, we will ultimately need a map that’s at that resolution,” says Viren Jain, a senior research scientist at Google and coauthor on the paper, published in Science on May 9 . The data set itself and a preprint version of this paper were released in 2021 .

Brain atlases come in many forms. Some reveal how the cells are organized. Others cover gene expression. This one focuses on connections between cells, a field called “connectomics.” The outermost layer of the brain contains roughly 16 billion neurons that link up with each other to form trillions of connections. A single neuron might receive information from hundreds or even thousands of other neurons and send information to a similar number. That makes tracing these connections an exceedingly complex task, even in just a small piece of the brain..  

To create this map, the team faced a number of hurdles. The first problem was finding a sample of brain tissue. The brain deteriorates quickly after death, so cadaver tissue doesn’t work. Instead, the team used a piece of tissue removed from a woman with epilepsy during brain surgery that was meant to help control her seizures.

Once the researchers had the sample, they had to carefully preserve it in resin so that it could be cut into slices, each about a thousandth the thickness of a human hair. Then they imaged the sections using a high-speed electron microscope designed specifically for this project. 

Next came the computational challenge. “You have all of these wires traversing everywhere in three dimensions, making all kinds of different connections,” Jain says. The team at Google used a machine-learning model to stitch the slices back together, align each one with the next, color-code the wiring, and find the connections. This is harder than it might seem. “If you make a single mistake, then all of the connections attached to that wire are now incorrect,” Jain says. 

“The ability to get this deep a reconstruction of any human brain sample is an important advance,” says Seth Ament, a neuroscientist at the University of Maryland. The map is “the closest to the  ground truth that we can get right now.” But he also cautions that it’s a single brain specimen taken from a single individual. 

The map, which is freely available at a web platform called Neuroglancer , is meant to be a resource other researchers can use to make their own discoveries. “Now anybody who’s interested in studying the human cortex in this level of detail can go into the data themselves. They can proofread certain structures to make sure everything is correct, and then publish their own findings,” Jain says. (The preprint has already been cited at least 136 times .) 

The team has already identified some surprises. For example, some of the long tendrils that carry signals from one neuron to the next formed “whorls,” spots where they twirled around themselves. Axons typically form a single synapse to transmit information to the next cell. The team identified single axons that formed repeated connections—in some cases, 50 separate synapses. Why that might be isn’t yet clear, but the strong bonds could help facilitate very quick or strong reactions to certain stimuli, Jain says. “It’s a very simple finding about the organization of the human cortex,” he says. But “we didn’t know this before because we didn’t have maps at this resolution.”

The data set was full of surprises, says Jeff Lichtman, a neuroscientist at Harvard University who helped lead the research. “There were just so many things in it that were incompatible with what you would read in a textbook.” The researchers may not have explanations for what they’re seeing, but they have plenty of new questions: “That’s the way science moves forward.” 

Biotechnology and health

How scientists traced a mysterious covid case back to six toilets.

When wastewater surveillance turns into a hunt for a single infected individual, the ethics get tricky.

An AI-driven “factory of drugs” claims to have hit a big milestone

Insilico is part of a wave of companies betting on AI as the "next amazing revolution" in biology

  • Antonio Regalado archive page

The quest to legitimize longevity medicine

Longevity clinics offer a mix of services that largely cater to the wealthy. Now there’s a push to establish their work as a credible medical field.

  • Jessica Hamzelou archive page

There is a new most expensive drug in the world. Price tag: $4.25 million

But will the latest gene therapy suffer the curse of the costliest drug?

Stay connected

Get the latest updates from mit technology review.

Discover special offers, top stories, upcoming events, and more.

Thank you for submitting your email!

It looks like something went wrong.

We’re having trouble saving your preferences. Try refreshing this page and updating them one more time. If you continue to get this message, reach out to us at [email protected] with a list of newsletters you’d like to receive.

ScienceDaily

What makes a memory? It may be related to how hard your brain had to work

The human brain filters through a flood of experiences to create specific memories. Why do some of the experiences in this deluge of sensory information become "memorable," while most are discarded by the brain?

A computational model and behavioral study developed by Yale scientists suggests a new clue to this age-old question, they report in the journal Nature Human Behavior .

"The mind prioritizes remembering things that it is not able to explain very well," said Ilker Yildirim, an assistant professor of psychology in Yale's Faculty of Arts and Sciences and senior author of the paper. "If a scene is predictable, and not surprising, it might be ignored."

For example, a person may be briefly confused by the presence of a fire hydrant in a remote natural environment, making the image difficult to interpret, and therefore more memorable. "Our study explored the question of which visual information is memorable by pairing a computational model of scene complexity with a behavioral study," said Yildirim.

For the study, which was led by Yildirim and John Lafferty, the John C. Malone Professor of Statistics and Data Science at Yale, the researchers developed a computational model that addressed two steps in memory formation -- the compression of visual signals and their reconstruction.

Based on this model, they designed a series of experiments in which people were asked if they remembered specific images from a sequence of natural images shown in rapid succession. The Yale team found that the harder it was for the computational model to reconstruct an image, the more likely the image would be remembered by the participants.

"We used an AI model to try to shed light on perception of scenes by people -- this understanding could help in the development of more efficient memory systems for AI in the future," said Lafferty, who is also the director of the Center for Neurocomputation and Machine Intelligence at the Wu Tsai Institute at Yale.

Former Yale graduate students Qi Lin (Psychology) and Zifan Lin (Statistics and Data Science) are co-first authors of the paper.

  • Intelligence
  • Neuroscience
  • Social Psychology
  • Child Development
  • Hallucination
  • Sensory system
  • Psycholinguistics
  • Neurobiology
  • Parkinson's disease

Story Source:

Materials provided by Yale University . Original written by Bill Hathaway. Note: Content may be edited for style and length.

Journal Reference :

  • Qi Lin, Zifan Li, John Lafferty, Ilker Yildirim. Images with harder-to-reconstruct visual representations leave stronger memory traces . Nature Human Behaviour , 2024; DOI: 10.1038/s41562-024-01870-3

Cite This Page :

Explore More

  • Nature's 3D Printer: Bristle Worms
  • Giant ' Cotton Candy' Planet
  • A Young Whale's Journey
  • No Inner Voice Linked to Poorer Verbal Memory
  • Bird Flu A(H5N1) Transmitted from Cow to Human
  • Universe's Oldest Stars in Our Galactic Backyard
  • Polygenic Embryo Screening for IVF: Opinions
  • VR With Cinematoghraphics More Engaging
  • 2023 Was the Hottest Summer in 2000 Years
  • Fastest Rate of CO2 Rise Over Last 50,000 Years

Trending Topics

Strange & offbeat.

Skip to content

Search form

Undergraduate student research round-up: summer across the college of sciences.

NSF REUs, a new community college initiative, conferences and workshops offer ample opportunities for students — current, prospective, and visiting — to hone their research skills in the College of Sciences.

As the mercury climbed across Atlanta this summer, student research heated up across the College of Sciences, thanks to special summer programs for undergraduates from around the globe that help undergraduates get a head start on research experience for STEM careers in academia, industry, and beyond.

This year’s initiatives included National Science Foundation Research Experiences for Undergraduates (NSF REU) programs, a new initiative to engage Georgia community college students, summer workshops in computational chemistry and quantitative biosciences, and more.

Through the workshops, students learned to navigate new methods of research that involve data analysis and computational aspects of disciplines like chemistry and biology — as well as communicate connections across concepts like group theory, topology, combinatorics, and number theory.

Meanwhile, the NSF REU programs across the College’s six Schools of Biological Sciences , Chemistry and Biochemistry , Earth and Atmospheric Sciences , Physics , Psychology , and Mathematics , as well as the Undergraduate Neuroscience Program , allowed early-year students to get their first taste of in-depth research with unique expertise and equipment available at Georgia Tech. 

Other students took advantage of special fellowships to attend summer conferences in their chosen disciplines, where they networked with fellow young scientists and mathematicians while soaking up knowledge from peers and mentors. 

Here’s a roundup of some of the 2022 summer undergraduate student research programs and events led by the College of Sciences at Georgia Tech:

The Summer Theoretical and Computational Chemistry (STACC) Workshop 

Undergraduates eager to try calculations in areas such as quantum dynamics, electronic structure theory, and classical molecular dynamics — and who want to know more about new data science and machine learning tools — got their chance during this two-week early summer computational chemistry workshop.

“Theoretical and computational studies provide a necessary complement to experimental investigations because they are able to obtain the atomistic level of detail that is near impossible to probe with experiment,” said Joshua Kretchmer , assistant professor in the School of Chemistry and Biochemistry. 

“It is becoming more and more routine to use these techniques, even outside of pure theory research groups, as computers have become more powerful and more easy-to-use software is being developed to perform these calculations,” Kretchmer said. “It is thus important for students to be exposed to these techniques early on in their undergraduate education so they have a basic understanding of how and when the slew of different computational techniques are best utilized.”

2022 was the first year for the STACC Workshop, and Kretchmer added that the students “seem to be engaged and excited by the material, both in terms of learning the technical skills necessary to utilize high-performance computers and the unique aspects that can be learned about chemical systems from computer simulations.”

Those thoughts were echoed by University of South Florida student Nicholas Giunto. “After simulating and calculating these various processes, I realized how theoretical chemistry can do so much more than just simulate these scenarios. This technique of chemistry can be used in many other fields of science as well,” Giunto said. “This workshop has broadened my perspective of chemistry, and taught me a whole new field of science that is innovative and prudent.”

For more information, check out the STACC website here . 

Summer College Research Internship 

Thanks to a grant from the Betsy Middleton and John Clark Sutherland Dean’s Chair , community college students in Georgia were paired up with a Georgia Tech College of Sciences lab — at no cost to the students — for the inaugural Summer College Research Internship (SCRI) .

The idea for SCRI grew from Shania Khatri’s experiences conducting research for the first time. Khatri, a fourth-year Biological Sciences major scheduled to graduate in December 2022, began research in high school through a program at a local university that placed students, especially those historically underrepresented in STEM, in labs to complete their own summer research projects. 

“I felt firsthand how important mentorship was in building confidence in STEM, promoting belonging, and ultimately influencing my decision to pursue higher education and research,” Khatri said. “Research shows that students who complete high school and undergraduate programs are more likely to pursue STEM majors and consider doctoral degrees, underscoring that mentorship early in careers can improve achievement and retention of these students.”

SCRI students helped design experiments, collected and analyzed data, and presented the results of their work. They worked closely with their Ph.D. student mentors, learning from them as well as the broader community of their host labs. They also heard weekly lectures from College of Science faculty as they learned about the broader research environment at Georgia Tech. 

“The accepted students have strong scholastic potential, and we hope that we can excite them about the research happening at Georgia Tech and potentially recruit them to join our programs, either as transfer students or future graduate students,” said William Ratcliff , associate professor in the School of Biological Sciences and co-director of the Interdisciplinary Ph.D. in Quantitative Biosciences Program . Ratcliff also co-leads the SCRI with Todd Streelman , professor and chair of the School of Biological Sciences at Tech.

Three students from two-year community college programs in Georgia were chosen for the inaugural SCRI, Ratcliff said. With diverse interests, all three researched in labs within the Center for Microbial Dynamics and Infection (CMDI) . 

“While this was not part of our review criteria, two of the three students are members of groups that are underrepresented in science according to National Institutes of Health criteria, so this is a great opportunity to broaden participation in academic research,” Ratcliff added.

“When discussing diversity in STEM and retention of underrepresented minorities, community college students should be at the forefront of the discussion,” Khatri said. “It is my hope that through this program the students will gain confidence in their own abilities, and learn skills of science communication, data analysis, critical thinking, collaborative work, and problem solving that will aid them in any career path.”

More information on the Summer College Research Internship is available here . 

Child Lab Day

Child Lab Day is the capstone assignment for students in the School of Psychology course PSYC 2103 Human Development . Christopher Stanzione , senior lecturer and associate chair for undergraduate studies for the School, said his students conducted cognitive, language, and conceptual assessments in June on children ranging in age from four months to nine years old. 

“This is a great applied experience for the Georgia Tech students,” Stanzione said. “All semester we study these concepts, but to see development in action is special. They’ll likely see the gradual change between concepts by administering the assessments to kids of different ages.”

The first Child Lab Day was in 2019. This summer, students majoring in psychology, biomedical engineering, computer science, biology, neuroscience, and economics took part in this second one. “They loved it,” Stanzione said.

National Science Foundation Research Experiences for Undergraduates (NSF REUs)

For the first time, this year all six schools across the College of Sciences — plus the Neuroscience program at Tech — led Research Experiences for Undergraduates, a National Science Foundation initiative. 

Each student was associated with a specific research project, and worked closely with school faculty and other researchers. Students were given stipends and, in many cases, assistance with housing and travel to help cover the experience.

“Since most of the undergraduate participants are recruited from institutions that do not have extensive research infrastructure, the immersive research experience available to them in these programs can be transformational,” said David Collard , professor and senior associate dean in the College, who previously led the REU program in the School of Chemistry and Biochemistry for more than a decade. 

“A measure of success of the REU programs in the College of Sciences is that many of the undergraduate participants subsequently go on to complete their Ph.D., some at Georgia Tech, and others elsewhere,” Collard added.

The following are the details for each College of Sciences school’s REU program. Learn more about future Summer Research Programs for Undergraduates here .

School of Earth and Atmospheric Sciences REU:

Georgia Tech Broadening Participation in Atmospheric Science, Oceanography, and Geosciences

Working under the supervision of a School of Earth and Atmospheric Sciences (EAS) faculty member, participants focused on a single research project, but also gained a broad perspective on research in Earth and atmospheric sciences by participating in the dynamic research environment. This interdisciplinary REU program had projects ranging from planetary science to meteorology to oceanography. In addition to full time research, undergraduate researchers participated in a number of professional development activities, seminars with faculty and research scientists, presentation and research poster symposiums, and social activities with other summer REU students.

Schools of Biological Sciences, Chemistry and Biochemistry, Civil and Environmental Engineering, Chemical and Biomolecular Engineering REU:

Aquatic Chemical Ecology (ACE) at Georgia Tech

The Aquatic Chemical Ecology REU gave students the opportunity to perform research with faculty from five Georgia Tech schools. 

Students participated in research with one or more faculty members, learned about careers in science and engineering, and saw how scientists blend knowledge and skills from physics, chemistry, and biology to investigate some of the most challenging problems in environmental sciences. 

This was the first REU experience for Jenn Newlon, a rising senior at the University of North Carolina Wilmington . In fact, “I’d actually never heard of an REU before I came here,” she said. “It’s been a really good experience. I never really saw this side of research in my institution. While I did get to do undergraduate research, it was more of, ‘do this in a lab, this is what happens.’ I had to present my findings every week to my PI (principal investigator), who gave really good feedback. And all the people in my lab were really kind and helpful.”

Schools of Psychology, Biological Sciences REU:

Neuroscience Research Experience for Undergraduates

The first week of the inaugural Neuroscience/Psychology REU was a Neuroscience Bootcamp, where students engaged in hands-on activities to learn about brain anatomy, functional magnetic resonance imaging (fMRI), encephalography, and other techniques.  Then the student researchers spent time working on projects in the laboratories of mentors in either the School of Psychology, School of Biological Sciences, or with researchers at Georgia State University. They also attended professional development and social activities with other REU students.

“There is tremendous interest in neuroscience, and we have seen an incredible expansion of technology in our ability to record from the human nervous system,” said Lewis Wheaton , associate professor in the School of Biological Sciences and co-director of the Neuroscience/Psychology REU. 

“At the same time, many students do not have access to these technologies at their academic institutions because of expense,” Wheaton said. “We feel that it is vital to ensure that students who do not have access to these technologies at their universities get exposure to the tools and approaches to understand the human brain. I am excited to further focus on providing opportunities for women and underrepresented minorities to engage in this research.”

A unique feature of the Neuroscience REU program is that it allows some students to come back for a two-year experience, “which can really provide a great opportunity to enhance their research, and put these students in a stronger position to advance their careers,” Wheaton added.

“It is also great that we can show them the research and educational environment at Georgia Tech and in the broader Atlanta area,” said Eric Schumacher, professor in the School of Psychology and co-director of the Neuroscience/Psychology REU. “This is an opportune time to showcase our two schools and the Institute, given that both schools are working with the College and Institute to offer a cross-disciplinary Neuroscience Ph.D. program soon.” 

That was the impression that Alexa Toliver came away with. The fourth year student at Arizona State University is majoring in neurobiology, “but I always wanted to do neuroscience research,” she said during the recent REUs poster session at the Ford Environmental Science and Technology Building. “It was a little new, but it was a great opportunity and I never felt uncomfortable with any of the topics. This was the only neuroscience REU that I could find, and I applied to it and I got it, so I was excited.”

School of Physics REU:

Georgia Tech Broadening Participation in Physics

Working under the supervision of a physics faculty member, participants focused on a single research project but also gained a broad perspective on research in physics by participating in the dynamic research environment. 

Available projects for the REU spanned the field of physics ranging from quantum materials, quantum simulation/sensing, astrophysics, physics of living systems, and non-linear dynamics. 

In addition to full time research, undergraduate researchers participated in a number of professional development seminars, research horizon lunches, and social activities with other summer REU students.

Brendan D’Aquino, a rising senior at Northeastern University in Boston, had planned to use his computer science background to get an industry job after graduation. Then he attended the 2022 School of Physics REU. 

“After doing an internship last year at a software company that does physics, I kind of realized I wanted to make the switch,” D’Aquino said. “So I applied to the program. I got to work here. And I thought it was super cool. So this was my first time doing research. I kind of had grad school in the back of my mind for a while. But 10 weeks here kind of makes me more sure that I want to get into that in the future.”

School of Mathematics REU :

The School of Mathematics has a rich tradition of offering summer undergraduate research programs. The projects have been mentored by faculty and postdocs covering a range of topics, such as graph coloring, random matrices, contact homology, knots, bounded operators, harmonic analysis, and toric varieties. 

Previous Math REU students have published many papers, won a number of awards, and have been very successful in their graduate school applications.

“The main purpose of our REU is to give students research experience which should help them decide if they want to do math research for a living, and in particular, go to a math grad school,” said Igor Belegradek , professor and director of Teaching Effectiveness in the School of Mathematics. Belegradek also coordinates the Math REU. “Also, if there is a publication or poster at a conference, their grad school application will definitely become more competitive.”

Sometimes that application is sent to Georgia Tech. “We did have a few students who were accepted to our grad school after attending an REU with us,” Belegradek said. “It definitely helps put Georgia Tech Mathematics on the map. This summer we have 22 REU students, and only two of them are from Georgia Tech.”

Mathematics topics for the 2022 REU included aspects of graph coloring, Legendrian contact homology, Eigenvectors from eigenvalues and Gaussian random matrices, and applications of Donaldson's Diagonalization theorem.

Read more about the 2021 Mathematics REUs here .

In July, the School of Mathematics also hosted its biennial Topology Students Workshop , organized by Professor Dan Margalit since 2012. 

Events included a public lecture on campus, “Juggling Numbers, Algebra, and Topology”, accessible for curious people of all ages and backgrounds.

“One goal of mathematics is to describe the patterns in the world, from weather to population growth to disease transmission,” event organizers said. The workshop used mathematics to describe juggling patterns, count the different kinds of patterns, and create new patterns, “making surprising connections to group theory, topology, combinatorics, and number theory.”

The 36th Annual Symposium of the Protein Society 

From microproteins, protein condensates, synthetic biology and biosensors, to the latest developments in machine learning and imaging technologies, to addressing health disparities, the Protein Society Symposium, held in San Francisco in early July, provided a state-of-the-art view of the most exciting areas of research in biology and medicine.

Four students of Raquel Lieberman ’s School of Chemistry and Biochemistry lab attended, thanks to Protein Society travel fellowships: 

Lydia Kenney, fourth-year undergraduate and Beckman Scholar in the Lieberman lab. Kenney was also selected to give an oral presentation in a dedicated session to undergraduates

Minh Thu (Alice) Ma, fourth-year Ph.D.student

Emily Saccuzzo, fourth-year Ph.D. student

Gwendell Thomas, first-year Ph.D. student

Kenney and Ma won Best Poster awards at the symposium, and Saccuzzo won an honorable mention.

“The conference was amazing! We saw so many great speakers and presentations about protein science, and it was a great way to meet scientists from all over the world,” Kenney said. “I’m so grateful for this experience, especially as I begin to apply to graduate school and think about my future career in science. It was a great experience, and one that has truly deepened my appreciation for science and research.”

“To have each of these superstars selected for travel fellowships puts them in an elite cohort of trainees at this 500-plus person meeting,” Lieberman said. “I am so excited for them to present their thesis research and to get feedback from colleagues in our field from all over the world. I’m sure new ideas, collaborations, and other opportunities will emerge from this experience. It’s just the boost they and I need after a challenging couple of years as experimental biochemists.”

Related Media

Students conduct poster sessions during 2022's Summer Research Experience for Undergraduates (REU) in the Ford Environmental Science and Technology building. (Photo Renay San Miguel)

Students conduct poster sessions during 2022's Summer Research Experience for Undergraduates (REU) in the Ford Environmental Science and Technology building. (Photo Renay San Miguel)

Brendan D'Aquino, rising senior at Northeastern University, explains his research during the summer 2022 School of Physics REU. (Photo Renay San Miguel)

Brendan D'Aquino, rising senior at Northeastern University, explains his research during the summer 2022 School of Physics REU. (Photo Renay San Miguel)

Alexa Toliver, fourth-year student at Arizona State University, explains her neuroscience research during the summer 2022 Research Experience for Undergraduates. (Photo Renay San Miguel)

Alexa Toliver, fourth-year student at Arizona State University, explains her neuroscience research during the summer 2022 Research Experience for Undergraduates. (Photo Renay San Miguel)

KeAndre Williams (right), a School of Economics major, conducts a test during Child Lab Day June 14. (Photo Christopher Stanzione)

KeAndre Williams (right), a School of Economics major, conducts a test during Child Lab Day June 14. (Photo Christopher Stanzione)

Children ages four months to nine years old took part in assessment tests conducted by School of Psychology students during Child Lab Day at Georgia Tech. (Photo Christopher Stanzione)

Children ages four months to nine years old took part in assessment tests conducted by School of Psychology students during Child Lab Day at Georgia Tech. (Photo Christopher Stanzione)

Students in the School of Psychology's Human Development class conduct assessment tests during Child Lab Day. (Photo Christopher Stanzione)

Students in the School of Psychology's Human Development class conduct assessment tests during Child Lab Day. (Photo Christopher Stanzione)

Shania Khatri

Shania Khatri

Lydia Kenney (left) and Mihn Thu (Alice) Ma show off their best poster awards won at the Protein Society Symposium in July. (Photo courtesy Raquel Lieberman)

Lydia Kenney (left) and Mihn Thu (Alice) Ma show off their best poster awards won at the Protein Society Symposium in July. (Photo courtesy Raquel Lieberman)

Lydia Kenney

Lydia Kenney

Minh Thu (Alice) Ma

Minh Thu (Alice) Ma

Emily Saccuzzo

Emily Saccuzzo

Gwendell Thomas

Gwendell Thomas

For More Information Contact

Writer: Renay San Miguel Communications Officer II/Science Writer College of Sciences 404-894-5209

Editor: Jess Hunt-Ralston

Related Links

  • How I Spent My Summer 2021: NSF REUs Welcome Undergraduate Researchers
  • College of Sciences Summer Research Programs for Undergraduates
  • 2021 and Beyond: Research Opportunities for Undergraduate Students
  • From REU to Ph.D. at Georgia Tech

Georgia Tech Resources

  • Offices & Departments
  • News Center
  • Campus Calendar
  • Special Events
  • Institute Communications
  • Visitor Resources
  • Campus Visits
  • Directions to Campus
  • Visitor Parking Information
  • GTvisitor Wireless Network Information
  • Georgia Tech Global Learning Center
  • Georgia Tech Hotel & Conference Center
  • Barnes & Noble at Georgia Tech
  • Ferst Center for the Arts
  • Robert C. Williams Paper Museum
  • College of Sciences
  • School of Biological Sciences
  • School of Chemistry & Biochemistry
  • School of Earth & Atmospheric Sciences
  • School of Mathematics
  • School of Physics
  • School of Psychology

Privacy and Data

  • EU General Data Protection Regulation Privacy Notice

Reach the College

Map of College of Sciences | Georgia Institute of Technology | Atlanta, GA

Georgia Tech College of Sciences Office of the Dean Administration Building (Tech Tower), Second Floor, Room 202 225 North Ave NW Atlanta, GA 30332-0365

Georgia Institute of Technology North Avenue, Atlanta, GA 30332 404.894.2000

  • Emergency Information
  • Legal & Privacy Information
  • Human Trafficking Notice
  • Title IX/Sexual Misconduct
  • Hazing Public Disclosures
  • Accessibility
  • Accountability
  • Accreditation

© Georgia Institute of Technology

  • Frontiers in Computational Neuroscience
  • Research Topics

Women In Computational Neuroscience

Total Downloads

Total Views and Downloads

About this Research Topic

At present, less than 30% of researchers worldwide are women. Long-standing biases and gender stereotypes are discouraging girls and women from pursuing a career in science, technology, engineering and mathematics (STEM) research. Science and gender equality are, however, essential to ensure sustainable ...

Keywords : Women in, Computational Neuroscience, STEM, #CollectionSeries

Important Note : All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.

Topic Editors

Topic coordinators, recent articles, submission deadlines, participating journals.

Manuscripts can be submitted to this Research Topic via the following journals:

total views

  • Demographics

No records found

total views article views downloads topic views

Top countries

Top referring sites, about frontiers research topics.

With their unique mixes of varied contributions from Original Research to Review Articles, Research Topics unify the most influential researchers, the latest key findings and historical advances in a hot research area! Find out more on how to host your own Frontiers Research Topic or contribute to one as an author.

IMAGES

  1. PPT

    computational neuroscience research topics

  2. 150 Best Neuroscience Research Topics and Ideas for Students

    computational neuroscience research topics

  3. What is Computational Neuroscience?

    computational neuroscience research topics

  4. Neuroscience Research Topics : 100+ Cool Ideas

    computational neuroscience research topics

  5. 120 Neuroscience Research Topics: Explore the World

    computational neuroscience research topics

  6. 100 Best Neuroscience Topics for 2023

    computational neuroscience research topics

VIDEO

  1. CCN 2023 Livestream Session 1, Thursday 24th August, 3:30pm BST

  2. Day 1

  3. Day 2

  4. Human Brain Project Summit 2023

  5. Computing the Universe

  6. How to Explore the Field of Computational Neuroscience and Brain Modeling

COMMENTS

  1. Computational neuroscience

    Computational neuroscience is the field of study in which mathematical tools and theories are used to investigate brain function. It can also incorporate diverse approaches from electrical ...

  2. Frontiers in Computational Neuroscience

    Deep Learning and Neuroimage Processing in Understanding Neurological Diseases. Ricardo José Ferrari. Ali Abdollahzadeh. Joana Carvalho. 365 views. Part of the world's most cited neuroscience series, this journal promotes theoretical modeling of brain function, building key communication between theoretical and experimental neuroscience.

  3. Computational neuroscience

    Sleep restores an optimal computational regime in cortical networks. Xu et al. show that waking progressively disrupts neural dynamics criticality in the visual cortex and that sleep restores it ...

  4. Cognitive computational neuroscience

    The authors review recent work at the intersection of cognitive science, computational neuroscience and artificial intelligence that develops and tests computational models mimicking neural and ...

  5. Computational Neuroscience: Mathematical and Statistical Perspectives

    Textbooks of neuroscience use varying organizational rubrics, but major topics include the molecular physiology of neurons, sensory systems, the motor system, and systems that support higher-order functions associated with complex and flexible behavior (Kandel et al. 2013; Swanson 2012). Attempts at understanding computational properties of the ...

  6. Hot Topics in Computational Neuroscience

    Keywords: Hot Topics, Computational Neuroscience, #CollectionSeries . Important Note: All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements.Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.

  7. Horizons in Computational Neuroscience 2022

    Manuscript Extension Submission Deadline 28 February 2023. Guidelines. We are delighted to present the inaugural "Horizons in Computational Neuroscience" article collection. This collection showcases high-impact, authoritative and reader-friendly review articles covering the most topical research at the forefront of Computational Neuroscience.

  8. Deep Neural Networks in Computational Neuroscience

    Brain-Inspired Neural Network Models Are Revolutionizing Artificial Intelligence and Exhibit Rich Potential for Computational Neuroscience. Neural network models have become a central class of models in machine learning (Figure 1).Driven to optimize task performance, researchers developed and improved model architectures, hardware, and training schemes that eventually led to today's high ...

  9. 28132 PDFs

    Computational Neuroscience - Science topic. Computational neuroscience is the study of brain function in terms of the information processing properties of the structures that make up the nervous ...

  10. Editorial: Advances in Computational Neuroscience

    The 28th Annual Computational Neuroscience Meeting CNS * 2019 took place from 13 to 17 July 2019 in the city of Barcelona. The conference encompassed a wide diversity of Research Topics and welcomed participants from around the world, with keynotes on "Brain networks, adolescence, and schizophrenia" by Professor Ed Bullmore, "Neural circuits for mental simulation" by Professor Kenji ...

  11. Current topics in Computational Cognitive Neuroscience

    1 Vision and Computational Cognition Group, Max Planck Institute for Human Cognitive and Brain Sciences, 04103, Leipzig, Germany. Electronic address: [email protected]. 2 Max Planck Research Group NeuroCode, Max Planck Institute for Human Development, 14195, Berlin, Germany; Max Planck UCL Centre for Computational Psychiatry and Ageing Research ...

  12. Computational Neuroscience

    ISSN 2633-1403. Research Topic on Computational Neuroscience - always open for submissions, with an Annual Volume published each calendar year with a dedicated ISSN and ISBN. Led by Topic Editor Dr. Magnus Johnsson and an international Editorial Board, all submissions are peer-reviewed and published immediately after acceptance.

  13. Computational neuroscience

    Major topics. Research in computational neuroscience can be roughly categorized into several lines of inquiry. Most computational neuroscientists collaborate closely with experimentalists in analyzing novel data and synthesizing new models of biological phenomena. ... and sensitivity of these currents is an important topic of computational ...

  14. Computational Neuroscience

    Computational neuroscience is the study of brain function in terms of the nervous system's information processing capabilities, such as those exhibited by neurons as they interact in circuits, ensembles and systems via electrical and chemical signals.

  15. Computational neuroscience

    A high-performance speech neuroprosthesis. A speech-to-text brain-computer interface that records spiking activity from intracortical microelectrode arrays enabled an individual who cannot speak ...

  16. Teaching Computation in Neuroscience: Notes on the 2019 Society for

    Ten essential topics in computational neuroscience, including four on background material, would be: Random variables and important probability distributions. ... For example, early exposure in the freshman or sophomore year to neuroscience research talks can highlight the deep intersections of neuroscience and mathematical topics, and ...

  17. Insights in Computational Neuroscience

    The goal of this special edition Research Topic is to shed light on the progress made in the past decade in the Computational Neuroscience field and on its future challenges to provide a thorough overview of the status of the art of the field. This article collection will inspire, inform and provide direction and guidance to researchers in the ...

  18. Mathematical Methods in Computational Neuroscience

    To fill in this gap, this summer school covers some of the most important methods used in computational neuroscience research through both main lectures and scientific seminars (5-6 main lectures per topic and 1-2 seminars by each invited seminar speaker). Organizers: Yasser Roudi, Ines Samengo, Nicolai Waniek, Ivan Davidovich and Benjamin Dunn

  19. Theoretical and Computational Neuroscience

    Theoretical and Computational Neuroscience. The brain is acting through the interaction of billions of neurons and myriads of action potentials that are criss-crossing within and between brain areas. To make sense of this complexity, one must use mathematical tools and sophisticated analysis methods in order to extract the important information ...

  20. Automating literature screening and curation with applications to

    To more completely characterize the state of computational neuroscience modeling work, we aim to identify works containing results derived from computational neuroscience approaches and their standardized associated metadata (eg, cell types, research topics). Materials and Methods.

  21. Computational neuroscience

    Quantum computing, a computational paradigm exploiting the unique properties of quantum mechanical analogs of classical bits, seeks to address many of these challenges. We discuss the potential ...

  22. A Neuro-AI Framework for Visual Neuroscience

    A Neuro-AI Framework for Visual Neuroscience. May15. May 15, 2024 | 12 - 1pm. Hybrid. Hosted by: UCLA Stein Eye Institute Seminar. Zoom Link. I aim to develop computational models that mimic the transformation between visual stimuli and the activations of retinal, thalamic, and cortical neurons. These models can be used as "digital twins ...

  23. Google helped make an exquisitely detailed map of a tiny piece of the

    A team led by scientists from Harvard and Google has created a 3D, nanoscale-resolution map of a single cubic millimeter of the human brain. Although the map covers just a fraction of the organ ...

  24. Computational Neuroscience and the Experimental World

    The proposed research topic aims to bridge the gap between computational neuroscience models and experimental data. By comparing theoretical models with perception and cognition experiments, we can refine our understanding of how the brain processes information. These comparisons can advance our knowledge by testing theoretical assumptions ...

  25. What makes a memory? It may be related to how hard your ...

    A computational model and behavioral study developed by Yale scientists suggests a new clue to this age-old question, they report in the journal Nature Human Behavior. "The mind prioritizes ...

  26. Undergraduate Student Research Round-up: Summer Across the College of

    For the first time, this year all six schools across the College of Sciences — plus the Neuroscience program at Tech — led Research Experiences for Undergraduates, a National Science Foundation initiative. Each student was associated with a specific research project, and worked closely with school faculty and other researchers.

  27. Advances in Computational Neuroscience

    About this Research Topic. Submission closed. Computational Neuroscience combines mathematical analyses and computer simulations with experimental neuroscience, to develop a principled understanding of the workings of nervous systems and apply it in a wide range of technologies. The Organization for Computational Neuroscience (OCNS) promotes ...

  28. Autism research is becoming more diverse but not yet more global

    brain imaging craft and careers neural circuits connectivity systems neuroscience science and society open neuroscience computational neuroscience See all topics Melbourne meeting: Researchers at INSAR 2024 have much to learn from Indigenous and low-resource communities about their needs and creative solutions to meet those needs.

  29. University dedicates Omenn-Darling Bioengineering Institute at Venture

    The main areas of focus of the Omenn-Darling Bioengineering Institute are cellular engineering, device engineering and computational bioengineering, as well as the ethical and public policy implications of such new ideas and technologies. An important aspect of the institute's work is to bolster innovation and entrepreneurship by collaborating with the region's biotech and pharmaceutical ...

  30. Women In Computational Neuroscience

    Keywords: Women in, Computational Neuroscience, STEM, #CollectionSeries . Important Note: All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements.Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.