Abstracts

Abstracts of invited talks. The presenters are underlined. The abstracts are indexed by the last name of the presenters.

 

Image Quality Transfer

Daniel Alexander, University College London, UK

I will talk about Image Quality Transfer, deep learning strategies we have been developing to enable it, and various emerging applications of the idea. Image Quality Transfer (Alexander et al NIMG 2017; Tanno et al MICCAI 2017) aims to propagate information from high quality images, e.g. from a uniquely powerful scanner and/or a long or expensive acquisition protocol, to more widely available data, e.g. from a standard hospital scanner. The technique works by patch regression. Early implementations using random-forest regression (Alexander NIMG 2017) show compelling results in diffusion MRI applications; for example, they enable tractography to recover the four pathways from the hand area to the thalamus in data sets with 2.5mm isotropic resolution, a task that previously required 1.25mm isotropic resolution (Sotiropoulos NIMG 2013). More recent work (Tanno et al MICCAI 2017) shows substantial benefits of using CNNs for this task and develops a new approach to quantify the uncertainty in the mapping and disentangle its component sources. My talk will cover these approaches, more recent development of memory-efficient architectures that can exploit larger training data sets, as well as a range of other emerging applications of the idea.
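
A minimal sketch of the patch-regression idea (illustrative only, not the published implementation; the patch size, sampling scheme, and use of scikit-learn's random forest are assumptions made here for clarity):

```python
# Illustrative patch-regression "quality transfer": learn a mapping from
# low-quality (LQ) patches to the corresponding high-quality (HQ) patches.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def sample_patch_pairs(lq, hq, patch=5, n_samples=5000, seed=0):
    """Sample matching LQ/HQ patches centred on the same voxels."""
    rng = np.random.default_rng(seed)
    r = patch // 2
    centres = [rng.integers(r, dim - r, n_samples) for dim in lq.shape]
    X = np.stack([lq[x-r:x+r+1, y-r:y+r+1, z-r:z+r+1].ravel()
                  for x, y, z in zip(*centres)])
    Y = np.stack([hq[x-r:x+r+1, y-r:y+r+1, z-r:z+r+1].ravel()
                  for x, y, z in zip(*centres)])
    return X, Y

# Stand-in volumes; in practice these would be co-registered LQ/HQ acquisitions.
lq = np.random.rand(32, 32, 32)
hq = lq + 0.1 * np.random.rand(32, 32, 32)
X, Y = sample_patch_pairs(lq, hq)
model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, Y)
hq_patch_estimates = model.predict(X[:10])   # reconstructed HQ patches
```

In the actual method the output patch is at higher spatial (or angular) resolution than the input patch; the same-size patches above are a simplification to keep the sketch short.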

 

Reshaping the paradigms and models: dynamic connectivity, inter-subject correlation, multi-echo based event-detection, and naturalistic stimuli

Peter Bandettini, Emily Finn, Javier Gonzalez-Castillo, David Jangraw, National Institute of Mental Health, Bethesda, USA (plenary talk)

This presentation highlights the prominent themes of our ongoing processing-method development research. These include the use of novel and mostly naturalistic paradigms, dynamic connectivity assessment, inter-subject correlation, multi-echo EPI, and automated event detection. In all of these, the general idea has been to step back from potentially over-simplified and overly restrictive models so that a wider net is cast to detect more subtle and perhaps meaningful information in the fMRI time series. Specifically, the studies that I will mention include: inter-subject correlation with naturalistic stimuli, functional connectivity-based task decoding, functional connectivity-based reading performance prediction, and multi-echo enhanced automated event detection.

Inter-subject Correlation with Naturalistic Stimuli: Inter-subject correlation (ISC) is a technique for detecting synchrony of brain activity across subjects as they engage in a naturalistic task (e.g., story listening or movie viewing). By calculating the temporal correlation of the same voxel in two different subjects’ brains, ISC can identify regions that are responding reliably to a complex, naturalistic stimulus across multiple individuals. This is a powerful technique, as it does not require any a priori assumptions about the task structure, nor even a fixed hemodynamic response. Traditionally, naturalistic tasks combined with ISC analysis have been used to study patterns of brain activity that are shared across the population, but we are now leveraging these techniques to study individual differences, and how these relate to behavioral phenotypes. In one recent experiment, participants listened to a narrative describing a deliberately ambiguous social scenario, designed such that some individuals would find it highly suspicious, while others less so. Using a novel formulation of ISC, we identified several brain areas that were differentially synchronized during listening between participants with high and low trait-level paranoia, including theory-of-mind regions. Follow-up event-related analysis indicated that while posterior superior temporal cortex responded reliably to mentalizing events in all participants, anterior temporal and medial prefrontal cortex responded to such events only in high-paranoia individuals. Analyzing participants’ speech as they freely recalled the narrative revealed semantic and syntactic features that also scaled with paranoia. These results indicate that trait paranoia acts as an intrinsic ‘prime’ that modulates neural and behavioral responses to the same stimulus across individuals. We are now extending these ISC-based approaches to a large data set of children watching an emotionally evocative animated video, and finding that a child’s mean ISC in certain parts of association cortex is related to his or her score on the Social Responsiveness Scale, a measure of social function/autistic tendencies. Ultimately this technique may allow us to use a naturalistic stimulus as a brain “stress test” to predict present and future phenotypes in novel individuals.
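
As a concrete illustration, the basic ISC computation can be written in a few lines (a generic, leave-one-out formulation; the array layout and averaging scheme are illustrative assumptions, not the specific formulation used in the studies above):

```python
# Inter-subject correlation (ISC): for each voxel, correlate one subject's time
# series with the average time series of the remaining subjects.
import numpy as np

def isc(data):
    """data: (n_subjects, n_timepoints, n_voxels) array of fMRI time series."""
    n_subj = data.shape[0]
    z = (data - data.mean(axis=1, keepdims=True)) / data.std(axis=1, keepdims=True)
    out = np.empty((n_subj, data.shape[2]))
    for s in range(n_subj):
        others = z[np.arange(n_subj) != s].mean(axis=0)   # leave-one-out average
        out[s] = (z[s] * others).mean(axis=0) / others.std(axis=0)
    return out                                            # per-subject, per-voxel ISC

example = np.random.randn(10, 200, 50)    # 10 subjects, 200 timepoints, 50 voxels
print(isc(example).shape)                 # (10, 50)
```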

Functional Connectivity-Based Task Decoding: Whole-brain connectivity patterns continuously evolve over time – either spontaneously during rest or while being driven by specific tasks. Within a single scan, well-established networks—such as the DMN—have been shown to change their configuration as subjects perform different cognitive processes. Using a sliding-window-based dynamic connectivity assessment method, we have been able to decode ongoing tasks based purely on the temporal correlation structure – completely independent of magnitude changes. One way to achieve this decoding is with simple unsupervised clustering techniques—such as k-means—that take as input windowed (e.g., tens of seconds long) patterns of whole-brain functional connectivity and produce as output clusters of scan segments that correspond to similar ongoing cognitive processes. Once those segments are known, additional analyses can be used to decode the nature of the processes taking place during each segment.
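
The windowed-clustering procedure described above can be sketched as follows (window length, step size, number of clusters, and the use of scikit-learn are illustrative choices, not the published settings):

```python
# Sliding-window functional connectivity followed by k-means clustering of the
# windowed connectivity patterns; each cluster is a candidate cognitive "state".
import numpy as np
from sklearn.cluster import KMeans

def windowed_fc(ts, win=45, step=5):
    """ts: (n_timepoints, n_regions). Returns one vectorized upper-triangular
    correlation matrix per window."""
    n_t, n_r = ts.shape
    iu = np.triu_indices(n_r, k=1)
    feats = [np.corrcoef(ts[s:s + win].T)[iu] for s in range(0, n_t - win + 1, step)]
    return np.array(feats)

ts = np.random.randn(600, 30)             # stand-in for a parcellated fMRI time series
features = windowed_fc(ts)
labels = KMeans(n_clusters=4, n_init=10).fit_predict(features)
print(labels)                             # cluster label (state) for each window
```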

Functional Connectivity-Based Reading Performance Prediction: Reading recall is a crucial skill. A reliable link between a person’s neural signals and their reading recall ability could lead to a better understanding of reading difficulties and better inform the design and selection of future interventions. Previous studies have identified neural regions with activity or functional connectivity (FC) that correlate weakly with reading ability. However, these methods are typically limited to a single region or isolated connection and often rely on simplified reading and comprehension tasks. To push the field towards a better understanding of real-life reading, we made two methodological changes. First, we gave participants a naturalistic reading task in which they freely viewed multi-line pages of text and then answered recall questions on the reading. Second, we employed a method called connectome-based predictive modeling to find a whole-brain functional connectivity network whose FC strength during the reading task correlated strongly with performance on the reading recall questions. The network identified with this method predicted reading recall better than canonical networks of language, memory, and arousal. These results highlight the importance of naturalistic tasks and individual differences in revealing new and practically relevant elements of brain function.
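
Connectome-based predictive modeling follows a simple recipe: correlate every edge's strength with behaviour across subjects, keep the edges passing a significance threshold, summarize them per subject, and fit a linear model. The sketch below is a simplified, non-cross-validated illustration of that recipe (the threshold, positive-edge-only selection, and use of SciPy are assumptions):

```python
# Simplified connectome-based predictive modeling (CPM): select edges whose FC
# correlates with behaviour, sum them per subject, and fit a linear model.
# Real applications use cross-validation; this sketch omits it for brevity.
import numpy as np
from scipy import stats

def cpm_fit(edges, behaviour, p_thresh=0.05):
    """edges: (n_subjects, n_edges) FC values; behaviour: (n_subjects,) scores."""
    rs, ps = zip(*(stats.pearsonr(edges[:, j], behaviour)
                   for j in range(edges.shape[1])))
    mask = (np.array(ps) < p_thresh) & (np.array(rs) > 0)   # positively related edges
    summary = edges[:, mask].sum(axis=1)                     # one number per subject
    slope, intercept = np.polyfit(summary, behaviour, 1)
    return mask, slope, intercept

edges = np.random.randn(80, 500)          # 80 subjects, 500 connectivity edges
behaviour = np.random.randn(80)           # e.g., reading-recall scores
mask, slope, intercept = cpm_fit(edges, behaviour)
print(mask.sum(), slope, intercept)
```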

Multi-echo Enhanced Automated Event-Detection: Multi-echo fMRI refers to the concurrent acquisition of several BOLD-weighted time series per voxel, each at a different echo time (TE). Multi-echo data can be used in different ways to improve the signal-to-noise ratio of the data. In its simplest form, averaging the different time series reduces thermal noise. Methods such as multi-echo ICA use the differences in the echo-time dependence profiles of BOLD and non-BOLD signal to automatically detect and remove nuisance signals present in the data. Removal of non-BOLD signal from the time series improves the sensitivity of a class of processing methods known as Paradigm Free Mapping, in which individual events are automatically identified from the time series with no prior knowledge of their timing. I will present this approach, known as ME-SPFM (Multi-Echo Sparse Paradigm Free Mapping), aimed at detecting BOLD events, assuming only that they follow a canonical hemodynamic response and that the activation-induced fractional signal change is linearly dependent on echo time. I will also demonstrate how this multi-echo based deconvolution method outperforms its single-echo counterpart in terms of sensitivity and specificity.
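
The echo-time dependence exploited here is typically expressed through the mono-exponential decay model; the equations below are the standard textbook form, stated only to fix ideas (the notation is generic, not necessarily that of ME-SPFM):

```latex
% Mono-exponential multi-echo signal model and its linearized fractional change:
S(TE) = S_0 \, e^{-TE \cdot R_2^*},
\qquad
\frac{\Delta S(TE)}{S(TE)} \approx \frac{\Delta S_0}{S_0} - TE \cdot \Delta R_2^*.
```

A BOLD-like event (driven by a change in R2* with negligible change in S0) therefore produces a fractional signal change that scales linearly with TE, which is the assumption the deconvolution exploits, whereas many artifactual signals do not show this TE dependence.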

 

Big data for precision medicine: charting resting-state functional connectivity & connectopies

Christian F. Beckmann, Radboud University, Netherlands

Large clinical and population cohort neuroimaging resources are increasingly coming online, forming a new field of imaging epidemiology. These offer a unified perspective that links brain connectional organization to behaviour and cognition. Currently, however, the full potential of these resources for understanding brain connectivity is not being realized. This is due to a lack of suitable analysis tools that explore relationships between and integrate across modalities, are sensitive to subtle changes in individual connectivity profiles and provide a means to move beyond simple case-control analysis towards understanding inter-individual differences in connectivity. In this talk I will outline novel approaches for charting the organisation of functional connectivity and introduce a ‘normative modelling’ strategy for utilising big cohort data for generating individualised predictions with application in clinical neuroimaging studies.

 

Exact combinatorial inference for brain networks

Moo K. Chung, Zhan Luo, Hyekyoung Lee, Yuan Wang, Andrew L. Alexander, Richard J. Davidson, H. Hill Goldsmith, University of Wisconsin-Madison, USA

The permutation test is known as the only exact test procedure in statistics. In practice, however, it is often not exact but only approximate, since only a small fraction of all possible permutations is generated. Even for a small sample size, it often requires generating tens of thousands of permutations, which can be a serious computational bottleneck. In this talk, we propose a novel combinatorial inference procedure that enumerates all possible permutations combinatorially and avoids this computational bottleneck. The performance of the proposed method is extensively validated against the standard permutation test. The method is applied to DTI data from 111 twin pairs and rs-fMRI data from 208 twin pairs to effectively determine the genetic contribution to both functional and structural brain networks.
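
For context, the standard (approximate) two-sample permutation test that the combinatorial procedure is compared against looks roughly like this; the test statistic and the 10,000-permutation budget are illustrative assumptions:

```python
# Standard Monte Carlo permutation test for a difference in group means.
# Only a random subset of the possible relabelings is evaluated, which is the
# computational bottleneck (and the source of approximation) discussed above.
import numpy as np

def permutation_test(x, y, n_perm=10_000, seed=0):
    rng = np.random.default_rng(seed)
    observed = x.mean() - y.mean()
    pooled = np.concatenate([x, y])
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        stat = pooled[:len(x)].mean() - pooled[len(x):].mean()
        count += abs(stat) >= abs(observed)
    return count / n_perm       # two-sided Monte Carlo p-value

x, y = np.random.randn(20) + 0.5, np.random.randn(20)
print(permutation_test(x, y))
```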

 

EEG spectral analysis using a nested dependent Dirichlet process

Mark Fiecas, University of Warwick, UK

In this project, we discuss a novel approach for conducting spectral analysis of resting-state EEG (RS-EEG) data collected from the Minnesota Twin Family Study. Typically, spectral analysis methods treat the time series from each subject separately, and independent spectral densities are fit to each time series. In certain scenarios, such as our EEG data collected on twins, it is reasonable to assume that time series may have similar underlying characteristics, and borrowing information across subjects can significantly improve estimation. However, there are currently very few methods that share information across subjects when estimating spectral densities. In this talk, we develop a Bayesian nonparametric modeling approach for estimating EEG spectra. In our methodology, we use Bernstein polynomials to estimate the subject-specific spectrum, which we allow to vary with covariates using a dependent Dirichlet process (DP) prior. In order to estimate the spectra for the entire sample, we nest this model within a nested DP. Thus, the top-level DP clusters subjects with similar spectral densities and the bottom-level dependent DP fits a functional curve to the subjects within each cluster. We illustrate our methodology by conducting spectral analysis of resting-state EEG data collected from the Minnesota Twin Family Study (MTFS). The MTFS collected resting-state EEG and behavioral information from 379 monozygotic and 199 dizygotic twin pairs.
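
For orientation, a Bernstein-polynomial prior for a (normalized) spectral density is commonly written as a finite mixture of Beta densities; the form below is only the generic template (the covariate-dependent and nested DP extensions described in the talk build on top of it, and the exact parameterization used there may differ):

```latex
% Generic Bernstein-polynomial representation of a normalized spectral density,
% with the frequency rescaled to omega in (0,1):
f(\omega) = \sum_{j=1}^{k} w_{j,k} \, \beta\!\left(\omega;\, j,\, k - j + 1\right),
\qquad
w_{j,k} = G\!\left(\left(\tfrac{j-1}{k}, \tfrac{j}{k}\right]\right),
```

where beta(.; a, b) is the Beta density and the weights are obtained by binning a random distribution G, to which a (here dependent and nested) Dirichlet process prior is assigned.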

Deep network neuroscience

Shi Gu, James Gee, University of Electronic Science and Technology of China, China

Network neuroscience addresses the challenges of understanding the principles and mechanisms underlying complex brain function and cognition by mapping, recording, analyzing and modeling the elements and interactions of neurobiological systems via modern network science. Conventional approaches build a top-down framework that has the advantage of supporting interpretable results. However, the simplistic structure of these schemes may also limit our ability to model the brain. Here, we frame several basic modeling steps using state-of-the-art deep neural network methods that address certain limitations of current techniques in a way that may improve feature discovery performance. We start by defining functional networks, not through the typical identification of brain regions followed by establishing connectivity between those regions, but by using spatiotemporal modes as the fundamental elements of the brain’s dynamic networks, where the spatial component is modeled with 3D convolutional neural networks and the temporal component with recurrent neural networks. Then, borrowing the idea of image style transfer with generative adversarial networks, we propose a similar setup to infer the connection between brain structure and dynamics, enabling one to evaluate the extent to which the dynamics are determined by structure and the extent to which the structure can be inferred from observed dynamics. Finally, instead of simple correlation, deep spectrum and graph convolutional approaches are explored for uncovering the nonlinear relationships between different spatiotemporal modes.
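
A minimal, illustrative sketch of that general architecture (a small 3D convolutional encoder applied to each volume, followed by a recurrent network over time) is given below; the layer sizes, the choice of a GRU, and the PyTorch framing are assumptions for illustration, not the authors' model:

```python
# Illustrative spatiotemporal model: a 3D CNN encodes each fMRI volume, and a GRU
# models the resulting sequence of spatial codes over time.
import torch
import torch.nn as nn

class SpatioTemporalNet(nn.Module):
    def __init__(self, hidden=64, n_out=8):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),                    # -> (batch*time, 16, 1, 1, 1)
        )
        self.rnn = nn.GRU(input_size=16, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_out)

    def forward(self, x):                               # x: (batch, time, X, Y, Z)
        b, t = x.shape[:2]
        frames = x.reshape(b * t, 1, *x.shape[2:])      # encode each volume separately
        codes = self.encoder(frames).reshape(b, t, 16)
        out, _ = self.rnn(codes)
        return self.head(out[:, -1])                    # prediction from final state

model = SpatioTemporalNet()
y = model(torch.randn(2, 10, 16, 16, 16))               # 2 scans, 10 timepoints each
print(y.shape)                                          # torch.Size([2, 8])
```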

 

A time-varying AR, bivariate DLM of functional near-infrared spectroscopy data

Timothy D. Johnson, Department of Biostatistics, University of Michigan, USA

Functional near-infrared spectroscopy (fNIRS) is a relatively new neuroimaging technique. It is a low-cost, portable, and non-invasive method to measure brain activity via the blood oxygen level dependent signal. Similar to fMRI, it measures changes in the level of blood oxygen in the brain. Its time resolution is much finer than fMRI, however its spatial resolution is much coarser – similar to EEG or MEG. fNIRS is finding widespread use in young children who cannot remain still in the MRI magnet, and it can be used in situations where fMRI is contraindicated – such as with patients who have cochlear implants. Furthermore, fNIRS measures the concentration of both oxygenated and deoxygenated hemoglobin, both of which are of scientific interest. In this talk, I propose a fully Bayesian time-varying autoregressive model to analyze fNIRS data within the multivariate DLM framework. The hemodynamic response function is modeled with the canonical HRF and the low-frequency drift with a variable B-spline model (both the locations and the number of knots are allowed to vary). Both the model error and the autoregressive processes vary with time. Via simulation studies, I show that this model naturally handles motion artifacts and has good statistical properties. The model is then applied to an fNIRS data set.
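
For readers less familiar with the DLM framework, the generic model is the usual pair of observation and state equations; the bivariate, time-varying-AR model in the talk is an elaboration of this template (the notation below is the generic one, not the talk's exact specification):

```latex
% Generic dynamic linear model (DLM): observation equation and state evolution.
y_t = F_t^{\top} \theta_t + v_t, \qquad v_t \sim N(0, V_t), \\
\theta_t = G_t \theta_{t-1} + w_t, \qquad w_t \sim N(0, W_t),
```

where here y_t would be the bivariate (oxygenated and deoxygenated hemoglobin) measurement, F_t would collect the HRF-convolved design and B-spline drift terms, and both the autoregressive structure and the error variances are allowed to change with t.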

 

A Comparison of Brain Connectivity and Cognitive Impairment between Depression and Non-depression

Hakmook Kang, Kim Albert, Brian Boyd, Justin Blaber, Bennett Landman, Warren Taylor, Vanderbilt University, USA

In this work, we introduce a Bayesian double-fusion technique for enhancing the estimation of resting-state functional connectivity (FC) between brain regions, based on functional magnetic resonance imaging (fMRI) data, by using structural connectivity (SC) based on diffusion tensor imaging (DTI) data. The two concurrently acquired imaging modalities are used simultaneously for FC estimation, which allows us to precisely investigate the relationship between FC and SC, or alterations in white matter microstructural integrity. The method is applied to multi-subject data (n = 47) with depression (n = 21) and without depression (n = 26) to examine how SC differences are related to differences in function (i.e., FC) and in turn related to cognitive task performance in depression. In particular, we focus on five regions of interest: 1) posterior cingulate cortex/precuneus, 2) dorsal anterior cingulate, 3) thalamus, 4) amygdala, and 5) medial orbitofrontal cortex.

 

Deep learning approaches to functional MRI analysis

Jong-Hwan Lee, Department of Brain and Cognitive Engineering, Korea University, Korea

Since the technical breakthroughs of about a decade ago, deep learning approaches based on deep neural networks (DNNs) have come to dominate various machine learning applications such as computer vision, speech recognition, and natural language processing. More recently, deep learning approaches have also shown their efficacy in neuroimaging data analysis. In our work, DNNs have been applied mainly to functional MRI data analysis, and the sparsity of the DNN weight parameters has been systematically controlled to circumvent the curse of dimensionality that arises when training DNNs with millions of weight parameters on only hundreds or thousands of samples. As exemplary works, I will introduce the classification of schizophrenic patients using whole-brain functional connectivity patterns and the classification of sensory motor tasks using whole-brain neuronal activation patterns. Our ongoing work will also be introduced if time allows.
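
One common way to control weight sparsity during training is to add an explicit L1 penalty on the weights to the task loss; the sketch below illustrates that generic idea only (it is not the specific sparsity-control scheme of the work described, and the layer sizes and data are arbitrary stand-ins):

```python
# Generic weight-sparsity control via an L1 penalty added to the task loss.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(5000, 100), nn.ReLU(), nn.Linear(100, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()
l1_lambda = 1e-4

X = torch.randn(64, 5000)              # e.g., whole-brain FC features for 64 samples
y = torch.randint(0, 2, (64,))         # e.g., patient vs. control labels

for epoch in range(10):
    optimizer.zero_grad()
    loss = criterion(model(X), y)
    l1 = sum(p.abs().sum() for p in model.parameters())   # encourages sparse weights
    (loss + l1_lambda * l1).backward()
    optimizer.step()
```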

 

Phase angle spatial embedding (PhASE): a kernel method for studying the topology of the human functional connectome

Alex D. Leow, University of Illinois-Chicago, USA

Modern resting-state functional magnetic resonance imaging (rs-fMRI) provides a wealth of information about the inherent functional connectivity of the human brain. However, understanding the nonlinear topology of rs-fMRI and the role of negative correlations remains a challenge. Here, I will discuss a class of novel graph embedding techniques to study the nonlinear topology (the intrinsic geometry) of the functional connectome. These techniques are closely related to a class of maximum mean discrepancy (MMD) kernel methods in a reproducing kernel Hilbert space (RKHS). One can then extract topological and modular connectome features of resting-state connectivity using MMD maximization as well as the minimum spanning trees (MSTs) induced by these graph embeddings. To illustrate, some use cases using public datasets will be discussed.
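
For reference, the (biased) empirical estimate of squared MMD between two samples under an RBF kernel, which is the basic RKHS quantity referred to above, can be computed as follows (a generic sketch; the PhASE-specific embedding and kernel choices are not reproduced here):

```python
# Biased empirical estimate of squared maximum mean discrepancy (MMD^2) between
# two samples, using an RBF kernel.
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def mmd2(X, Y, sigma=1.0):
    return (rbf_kernel(X, X, sigma).mean()
            + rbf_kernel(Y, Y, sigma).mean()
            - 2 * rbf_kernel(X, Y, sigma).mean())

X = np.random.randn(100, 10)           # e.g., embedded nodes of one connectome
Y = np.random.randn(100, 10) + 0.3     # and of another
print(mmd2(X, Y))
```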

 

Learning-based Quantification of Baby Brain Development

Gang Li, University of North Carolina-Chapel Hill, USA

The increasing availability of infant brain MRI data, such as the data collected in the Baby Connectome Project (BCP), affords unprecedented opportunities for precise charting of dynamic early brain developmental trajectories and for understanding normative and aberrant brain growth. However, most existing neuroimaging analysis tools, which were mainly developed for adult brains, are not suitable for infant brains, due to the extremely low tissue contrast and the regionally heterogeneous, dynamic changes in imaging appearance, brain size, shape and folding in infant brains. In this talk, I will introduce a set of machine learning based neuroimaging computational methods that we have pioneered for quantitatively characterizing baby brain development, including skull stripping, tissue segmentation, cortical topological correction, surface parcellation, and missing data estimation and prediction. I will also show neuroscience applications of these methods in advancing our understanding of the baby brain.

 

Principal Directions of Mediation – a new approach towards multivariate mediation analysis

Martin Lindquist, Johns Hopkins University, USA

In recent years, there have been a number of exciting developments in the statistical area of functional data analysis (FDA). FDA deals with the analysis and theory of data that can be represented as functions, such as curves, surfaces or images. Many of the methods that have been developed are ideally suited for the analysis of neuroimaging data, which consist of images and/or curves. In this talk we discuss how methods from FDA can be used to uncover exciting new results, not readily apparent using standard analysis techniques, in wide-ranging areas of neuroimaging research.

 

Challenges and opportunities in population neuroimaging

Thomas Nichols, University of Oxford, UK

Brain imaging studies have traditionally struggled to break into 3-digit sample sizes: e.g., a recent Functional Magnetic Resonance Imaging (fMRI) meta-analysis of emotion found a median sample size of n=13. However, we now have a growing collection of studies with sample sizes in the 4-, 5- and even 6-digit range. Many of these ‘population neuroimaging’ studies are epidemiological in nature, trying to characterize typical variation in the population to help predict health outcomes across the life span. I will discuss some of the challenges these studies present, in terms of massive computational burden, but also some of the opportunities. Recent work (Eklund et al, 2016) has demonstrated the importance of real-data empirical evaluations, over and above Monte Carlo simulations. I’ll discuss the opportunities for using data from UK Biobank to evaluate new statistical methods with realistic signal and noise. As a specific example, I’ll present work on selective inference, obtaining unbiased estimates of effect size from fMRI studies. Our method extends existing methods by accounting for bias from both thresholding and the use of peaks. We demonstrate the accuracy of our method in simulations and with UK Biobank, using 5,000 subjects to define “truth” and an additional 5,000 subjects split into many small samples for evaluation. We find our method is an improvement over other available methods (split-halves sampling) and demonstrates the important role massive neuroimaging databases will have in the years to come. Joint work with Sam Davenport.

 

Challenges in the Analysis of High Dimensional Brain Signals

Hernando Ombao, King Abdullah University of Science and Technology (KAUST), Saudi Arabia

Advances in imaging technology have given neuroscientists unprecedented access to examine various facets of how the brain “works”. Brain activity is complex. A full understanding of brain activity requires careful study of its multi-scale spatial-temporal organization (from neurons to regions of interest; and from transient events to long-term temporal dynamics). It is also multi-faceted and cannot be fully characterized by a single data modality. To fully appreciate brain processes, one must integrate various data that probe both the anatomical structure and specific functionality, such as electrical, metabolic and hemodynamic activity.

There are many challenges to analyzing brain data. First, brain data is massive – these are recordings across many locations and over long recording times. Second, it has a complex structure with non-stationary properties that evolve over space and time. Third, brain data is often dominated by noise. Thus, this environment has provided big opportunities for data scientists to develop new tools and models for addressing current research questions in the neuroscience community. This talk will highlight these challenges and the different research expertise at KAUST covering the neurosciences and the data sciences (computational science, statistical learning and modeling). We will also present some of the work developed by members of the KAUST Biostatistics Group and the UC Irvine Space-Time Modeling Group for visualizing and characterizing the dynamics of brain connectivity.

 

Why and how standard analyses often fail in neuroimaging, and what we can do about it

Jean-Baptiste Poline, McGill University, Canada (plenary talk)

In this talk, I will first briefly review the evidence – or lack thereof – of a reproducibility crisis in neuroimaging, with a specific focus on the reproducibility of results obtained from traditional statistical methods that use the null hypothesis statistical testing (NHST) framework. I will exemplify how these techniques may fail to provide the community with solid scientific results. Second, I will investigate the causes of this potential lack of reproducibility, teasing apart the technical aspects from those related to the research culture. Last, I will show some research avenues and initiatives to curb the issue through biostatistical or neuroinformatics methods, and what we can expect for the future.

 

A set-based mixed effect model for gene-environment interaction and its application to neuroimaging phenotypes

Anqi Qiu, Department of Biomedical Engineering and Clinical Imaging Research Centre, National University of Singapore; Institute for Clinical Sciences, Agency for Science, Technology and Research, Singapore

Imaging genetics is an emerging field for the investigation of neural mechanisms linked to genetic variation. Although imaging genetics has recently shown great promise in understanding biological mechanisms for brain development and psychiatric disorders, studying the link between genetic variants and neuroimaging phenotypes remains statistically challenging due to the high dimensionality of both genetic and neuroimaging data. This becomes even more challenging when studying G × E effects on neuroimaging phenotypes. In this talk, I introduce a set-based mixed effect model for gene-environment interaction (MixGE) on neuroimaging phenotypes, such as structural volumes and tensor-based morphometry (TBM). This model incorporates both fixed and random effects of G × E to investigate homogeneous and heterogeneous contributions of multiple genetic variants and their interaction with environmental risks to phenotypes. We discuss the construction of score statistics for the terms associated with fixed and random effects of G × E to avoid direct parameter estimation in the MixGE model, which would greatly increase computational cost. We also describe how the score statistics can be combined into a single significance value to increase statistical power. We evaluated MixGE using simulated and real Alzheimer’s Disease Neuroimaging Initiative (ADNI) data, and showed statistical power superior to other burden and variance component methods. We then demonstrated the use of MixGE for exploring the voxelwise effect of G × E on TBM, made feasible by the computational efficiency of MixGE. Through this, we discovered a potential interaction effect of gene ABCA7 and cardiovascular risk on local volume change of the right superior parietal cortex, which warrants further investigation.
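
To fix ideas, one plausible way to write such a set-based G × E mixed model is sketched below; this is an illustrative specification consistent with the description above, not necessarily the exact MixGE formulation:

```latex
% Illustrative set-based G x E mixed model (notation mine):
y_i = x_i^{\top}\alpha + \sum_{j} g_{ij}\,\beta_j + e_i \sum_{j} g_{ij}\,\gamma_j + \varepsilon_i,
\qquad
\gamma_j = \gamma_0 + \delta_j, \quad \delta_j \sim N(0, \tau^2),
```

so that gamma_0 captures a homogeneous (fixed) G × E effect shared across the variants in the set, tau^2 captures heterogeneous (random) variant-specific G × E effects, and score statistics for gamma_0 = 0 and tau^2 = 0 can be formed without estimating the variant-level parameters directly.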

 

Do not test for activation in fMRI but estimate the regions of activation

Armin Schwartzman, University of California – San Diego, USA

Null hypothesis testing lies at the foundation of human brain mapping as the core method for fMRI inference. However, recent studies have shown that under optimal conditions the null hypothesis is never true, and brain activity related to a task can be found everywhere in the brain. Rather than testing for significance, we propose to directly estimate the spatial extent of interesting brain activity, defined as excursion sets of the percentage BOLD signal change above a pre-defined threshold. The uncertainty in the estimates is then captured by a nested pair of spatial confidence regions (CRs) called inner and outer sets. These spatial CRs are defined in such a way that the true excursion sets include the inner set and are included in the outer set with a given confidence. Asymptotic coverage probabilities may be determined using the Gaussian kinematic formula or via a multiplier bootstrap. The method is illustrated in task fMRI data from the Human Connectome Project.
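
In symbols (notation mine, consistent with the description above), the target excursion set and the coverage statement for the nested confidence regions are:

```latex
% Excursion set of the %BOLD effect mu(v) above threshold c, and nested CRs:
A_c = \{\, v : \mu(v) \ge c \,\},
\qquad
P\bigl(\hat{A}^{-}_c \subseteq A_c \subseteq \hat{A}^{+}_c\bigr) \ge 1 - \alpha,
```

so that the inner set contains only locations confidently above the threshold, while the outer set excludes only locations confidently below it.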

 

Deep convolutional framelets: a general deep learning framework for inverse problems in neuroimaging

Jong Chul Ye, Department of Bio and Brain Engineering, Korea Advanced Institute of Science and Technology, Korea

Neuroimaging involves several inverse problems, from image reconstruction to brain decoding. Recently, deep learning approaches with various network architectures have achieved significant performance improvements over existing methods on these inverse problems. However, it is still unclear why these deep learning architectures work for specific inverse problems. Moreover, in contrast to the usual evolution of signal processing theory around classical theories, the links between deep learning and classical signal processing approaches such as wavelets, non-local processing, compressed sensing, etc., are not yet well understood. To address these issues, here we show that the long-sought missing link is the convolution framelet, a representation of a signal obtained by convolving local and non-local bases. Convolution framelets were originally developed to generalize the theory of low-rank Hankel matrix approaches for inverse problems. This work extends the idea to show that a generic inverse problem under low-dimensional manifold constraints can be solved equivalently by using a deep neural network architecture that meets the so-called frame condition. Using numerical experiments with various inverse problems in neuroimaging, we demonstrate that our deep convolutional framelets network shows consistent improvement over existing deep architectures. This discovery suggests that the success of deep learning comes not from the magical power of a black box, but rather from the power of a novel signal representation that combines a non-local basis with a data-driven local basis, which is indeed a natural extension of classical signal processing theory.

 

Spatial and temporal dynamics of resting-state functional connectivity to improve single-subject prediction of diagnosis

Andrew Zalesky, The University of Melbourne, Australia

Correlation in functional MRI activity between spatially separated brain regions can fluctuate dynamically when an individual is at rest. These dynamics are typically characterized temporally by measuring fluctuations in functional connectivity between brain regions that remain fixed in space over time. I will present recent work in which dynamics in functional connectivity are characterized in both time and space. Spatial dynamics enable network regions (nodes) to vary in size (contract/expand) over time according to the functional connectivity profile of their constituent voxels. I will show how these spatial dynamics can substantially improve the ability to distinguish schizophrenia patients from healthy comparison individuals at the single-subject level. Machine classifiers trained on functional connectivity dynamics mapped over both space and time predicted diagnostic status with accuracy exceeding 91%, whereas using spatial or temporal dynamics alone yielded lower classification accuracies. Static measures of functional connectivity yielded the lowest accuracy (79.5%).

 

Co-regularized regression for the integration of brain imaging and genomics data

Yu-Ping Wang, Department of Biomedical Engineering, Global Biostatistics and Data Sciences, Computer Science and Neurosciences, Tulane University, USA

Estimating the potential links between neurological and genetic variability has been a challenge in brain imaging genomics. In this work, we propose a combination of two widely used statistical models: sparse regression and canonical correlation analysis (CCA). While the former seeks multivariate linear relationships between a given phenotype and associated observations, the latter aims to extract co-expression patterns between imaging and genomics data. We propose to incorporate both CCA and regression models within a unified formulation. The underlying motivation is to extract discriminative variables that are also co-expressed across modalities. We first show that the simplest formulation of such a model can be expressed as a special case of collaborative learning methods. We explore the parameter space and provide some guidelines regarding parameter selection. The model is first tested on a simple toy dataset and a more advanced simulated imaging genomics dataset. Finally, we validate the proposed formulation using single nucleotide polymorphism (SNP) data and functional magnetic resonance imaging (fMRI) data from a population of adolescents (n = 362 subjects, age 16.9 ± 1.9 years, from the Philadelphia Neurodevelopmental Cohort) for the study of learning ability. Furthermore, we carry out a significance analysis of the resulting features, which allows us to carefully extract brain regions and genes linked to learning and cognitive ability.
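
Schematically, a unified objective of the kind described can be written as below; this is only a sketch to convey the structure (the exact penalties and coupling term used in the work may differ):

```latex
% Schematic co-regularized sparse regression / CCA objective (illustrative):
\min_{w_1, w_2}\;
\|y - X w_1\|_2^2 + \|y - Z w_2\|_2^2
\;-\; \lambda\, \mathrm{corr}(X w_1,\, Z w_2)
\;+\; \mu_1 \|w_1\|_1 + \mu_2 \|w_2\|_1,
```

where X and Z hold the imaging and genomic data for the same subjects, the first two terms are the sparse-regression components predicting the phenotype y, the correlation term plays the role of the CCA coupling between modalities, and the l1 penalties select discriminative, co-expressed variables.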