
Research

[Image: brain with highlighted regions involved in speech and auditory perception]

Research Areas

The SoundBrain Lab pursues several areas of research, all centered on understanding the neurobiology of speech processing and/or speech learning. Below you'll find a list of our currently funded areas of research. For a list of our publications, please click here.

Funded Research

Individual variability in auditory learning characterized using multi-scale and multi-modal physiology and neuromodulation

MPI: Bharath Chandrasekaran, PhD
National Science Foundation Award # 2319493

It is critical for people around the world to be able to learn new skills and information throughout their lives, yet people often differ in how proficiently they do so. Prior work has attempted to explain individual differences in learning using static "traits" that are thought to change very little over time, such as working memory span, IQ, and musical ability. However, recent work suggests that a major source of variability is the constantly changing "states" of the brain during learning. Using information gleaned from a series of studies in both human and animal models, this project seeks to develop a non-invasive device that integrates a measure of attention (pupil dilation) with a means of modulating it (vagal nerve stimulation) in the service of problem solving, in this case second-language learning. Success in this endeavor will enable the development of novel neurotechnologies and training regimens that make challenging tasks like second-language acquisition accessible to a wide array of underserved and overlooked communities in education. This research is in collaboration with researchers at the University of California San Francisco, Baylor College of Medicine, and the University of New Hampshire.
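As a rough illustration of the closed-loop concept (a sketch, not the project's actual device), the code below triggers stimulation whenever pupil dilation, z-scored against a rolling baseline, crosses a threshold. The `read_pupil_diameter` and `trigger_vns_pulse` interfaces are hypothetical placeholders for real eye-tracker and stimulator APIs.

```python
import time
from collections import deque

# Hypothetical hardware interfaces: the integrated device described above is
# still in development, so these names are placeholders, not a real API.
def read_pupil_diameter() -> float:
    """Return the current pupil diameter (mm) from an eye tracker."""
    raise NotImplementedError("replace with your eye-tracker's API")

def trigger_vns_pulse() -> None:
    """Deliver one transcutaneous vagal nerve stimulation pulse train."""
    raise NotImplementedError("replace with your stimulator's API")

def closed_loop(baseline_n: int = 300, z_threshold: float = 1.5,
                refractory_s: float = 5.0, sample_hz: float = 60.0) -> None:
    """Trigger stimulation when pupil dilation, z-scored against a rolling
    baseline, crosses a threshold; a refractory period prevents repeated
    pulses from a single sustained dilation."""
    history: deque = deque(maxlen=baseline_n)
    last_pulse = float("-inf")
    while True:
        d = read_pupil_diameter()
        history.append(d)
        if len(history) == baseline_n:
            mean = sum(history) / baseline_n
            std = (sum((x - mean) ** 2 for x in history) / baseline_n) ** 0.5
            z = (d - mean) / (std or 1e-9)
            now = time.monotonic()
            if z > z_threshold and now - last_pulse > refractory_s:
                trigger_vns_pulse()
                last_pulse = now
        time.sleep(1.0 / sample_hz)
```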

Cortical contributions to frequency-following response generation and modulation

MPI: Bharath Chandrasekaran, PhD
National Institute on Deafness and Other Communication Disorders R01DC013315

Frequency-following responses (FFRs) are scalp-recorded electrophysiological 'neurophonic' potentials that reflect phase-locked activity from neural ensembles across the auditory pathway. FFRs provide a neural snapshot of the integrity of supra-threshold speech processing; they can be measured non-invasively with a minimal electrophysiological setup that already exists in audiology clinics, show high test-retest reliability, and require minimal subject preparation. An evolving perspective is that the FFR should be considered an integrated response from both subcortical and cortical neural ensembles. There is a critical need to understand cortical contributions to the FFR in order to realize its full translational potential as a biomarker for many clinical conditions. Drawing on a highly complementary, cross-disciplinary team of PIs, this proposal builds on key scientific insights gained in the first funding period with the explicit goal of accelerating preclinical-to-clinical translation. Using a cross-species (human, macaque, guinea pig), cross-level (cells to meso-scale), neurocomputational approach, the project systematically deconstructs the role of the cortex in the generation and modulation of the FFR. This research is in collaboration with researchers at the University of Pittsburgh and Children's Hospital of Pittsburgh.
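As a generic, textbook-style illustration of how an FFR is typically derived (not this team's analysis pipeline), the sketch below averages stimulus-locked EEG epochs, so phase-locked activity survives while non-phase-locked noise cancels, and then indexes spectral energy near the stimulus fundamental frequency.

```python
import numpy as np

def compute_ffr(eeg: np.ndarray, onsets: np.ndarray, fs: float,
                epoch_s: float = 0.25) -> np.ndarray:
    """Average stimulus-locked EEG epochs to estimate the FFR.

    eeg    : 1-D single-channel scalp recording (e.g., Cz)
    onsets : sample indices of each stimulus presentation
    fs     : sampling rate in Hz
    Phase-locked activity survives averaging; unlocked noise cancels.
    """
    n = int(epoch_s * fs)
    epochs = np.stack([eeg[o:o + n] for o in onsets if o + n <= len(eeg)])
    return epochs.mean(axis=0)

def f0_band_amplitude(ffr: np.ndarray, fs: float, f0: float,
                      bw: float = 10.0) -> float:
    """Mean spectral amplitude in a band around the stimulus F0, a common
    index of how faithfully the response follows stimulus periodicity."""
    spec = np.abs(np.fft.rfft(ffr)) / len(ffr)
    freqs = np.fft.rfftfreq(len(ffr), 1 / fs)
    band = (freqs >= f0 - bw) & (freqs <= f0 + bw)
    return float(spec[band].mean())
```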

 

Neural Systems in Auditory and Speech Categorization

PI: Bharath Chandrasekaran, PhD
National Institute on Deafness and Other Communication Disorders R01DC015504

Using complementary multi-modal neuroimaging methods (functional magnetic resonance imaging, fMRI, and electrocorticography, ECoG) in conjunction with rigorous behavioral approaches, we examine the role of multiple cortico-striatal and sensory cortical networks in the acquisition and automatization of novel non-speech and speech categories in the mature adult brain. We test the scientific premise of a dual-learning systems (DLS) model by probing neural function with fMRI or ECoG during feedback-dependent category learning. In contrast to popular single-learning system (SLS) approaches, DLS posits that two neurally dissociable cortico-striatal systems are critical to speech learning: an explicit, sound-to-rule cortico-striatal system that maps sounds onto rules, and an implicit, sound-to-reward cortico-striatal system that associates sounds with actions leading to immediate reward. Per DLS, the two systems jointly contribute to the emerging expertise of the learner. Via closed loops, the highly plastic cortico-striatal systems 'train' key, less labile temporal-lobe networks to categorize information by validated rules or rewards. Once categories are learned to the point of automaticity, cortico-striatal networks are no longer required to mediate behavior; instead, abstract categorical information within the temporal cortex drives highly accurate speech categorization. We use fMRI to examine the relative dominance of the two cortico-striatal networks in learning multidimensional non-speech category structures that are experimenter-constrained to rely either on rules (rule-based, RB) or on implicit integration of multidimensional cues (information-integration, II). This research is in collaboration with researchers at the University of California, San Francisco.
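A minimal sketch of how such experimenter-constrained category structures are often generated (the two stimulus dimensions here are arbitrary units, whereas actual stimuli in this line of work are multidimensional sounds): an RB structure is separable along a single dimension, and rotating it by 45 degrees yields an II structure whose optimal boundary requires integrating both dimensions.

```python
import numpy as np

rng = np.random.default_rng(0)

def rule_based(n: int = 200):
    """RB structure: the two categories differ along one dimension, so an
    easily verbalized rule (e.g., 'dimension 1 > criterion') is optimal."""
    a = rng.normal([-1.0, 0.0], 0.5, size=(n, 2))
    b = rng.normal([+1.0, 0.0], 0.5, size=(n, 2))
    return a, b

def information_integration(n: int = 200):
    """II structure: rotating the RB structure 45 degrees makes the optimal
    boundary diagonal, so both dimensions must be integrated
    pre-decisionally and no one-dimensional rule achieves high accuracy."""
    theta = np.pi / 4
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    a, b = rule_based(n)
    return a @ rot.T, b @ rot.T
```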

Investigating human non-lemniscal inferior colliculus contributions to auditory learning with 7T MRI

PI: Kevin Sitek, PhD
National Institute on Deafness and Other Communication Disorders K01DC019421

The human inferior colliculus (IC) plays a critical role in auditory processing. However, the anatomy and function of the lemniscal (primary) and non-lemniscal subdivisions of the IC in living humans are poorly understood due to the technical challenges of in vivo magnetic resonance imaging (MRI) of small midbrain structures deep within the brain. In particular, despite predominant top-down and bottom-up theories of auditory learning, the neural systems underlying human speech category learning remain unknown. Recent advances in MRI acquisition open the door to focused investigations of the anatomy and functional processing of the human auditory midbrain. In this project, we use ultra-high-field 7T MRI to quantify anatomical midbrain tissue contrast in a substructure-dependent manner. We also map the structural connections from each IC subdivision throughout the auditory system. Quantifying the specific anatomical MRI contrasts and connectivity patterns in the living human midbrain will enable future clinical applications for investigating hearing disorders such as sensorineural hearing loss and tinnitus.
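As a schematic of the substructure-dependent analysis (a sketch under assumed inputs, not the project's code), the snippet below computes the mean value of a quantitative MRI map within each IC subdivision mask using nibabel; the file paths and subdivision names are hypothetical, and all images are assumed to be co-registered in the same space.

```python
import nibabel as nib
import numpy as np

def mean_contrast_per_subdivision(qmap_path: str,
                                  mask_paths: dict) -> dict:
    """Mean quantitative MRI value (e.g., R1) within each IC subdivision.

    qmap_path  : NIfTI path to a quantitative contrast map
    mask_paths : mapping of subdivision name -> binary mask NIfTI path,
                 e.g. {"central": "ic_central.nii.gz",
                       "dorsal": "ic_dorsal.nii.gz"}  # hypothetical files
    """
    qmap = nib.load(qmap_path).get_fdata()
    means = {}
    for name, path in mask_paths.items():
        mask = nib.load(path).get_fdata() > 0.5  # binarize the mask
        means[name] = float(qmap[mask].mean())
    return means
```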

 
[Image: diagram for estimating multivariate temporal response functions]
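The diagram above refers to multivariate temporal response functions (mTRFs), which map time-lagged stimulus features to neural responses. As a rough sketch of the standard estimation approach (ridge regression on a lagged design matrix; not the lab's implementation):

```python
import numpy as np

def mtrf_ridge(stim: np.ndarray, eeg: np.ndarray, fs: float,
               tmin: float = -0.1, tmax: float = 0.4, lam: float = 1.0):
    """Estimate a multivariate TRF by ridge regression.

    stim : (T, F) stimulus feature time series (e.g., envelope, phonemes)
    eeg  : (T, C) neural response channels
    Returns weights of shape (n_lags, F, C) mapping lagged features to EEG.
    """
    lags = np.arange(int(tmin * fs), int(tmax * fs) + 1)
    T, F = stim.shape
    X = np.zeros((T, len(lags) * F))
    for i, lag in enumerate(lags):
        shifted = np.roll(stim, lag, axis=0)  # X[t] = stim[t - lag]
        if lag > 0:
            shifted[:lag] = 0   # zero out samples that wrapped around
        elif lag < 0:
            shifted[lag:] = 0
        X[:, i * F:(i + 1) * F] = shifted
    # Ridge solution: (X'X + lam * I)^(-1) X'y
    XtX = X.T @ X + lam * np.eye(X.shape[1])
    w = np.linalg.solve(XtX, X.T @ eeg)
    return w.reshape(len(lags), F, eeg.shape[1])
```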

Open Science Initiative

The SoundBrain Lab is committed to an open science framework. As part of this initiative, we are compiling our code to make it publicly available on GitHub. Can't find what you're looking for? Email us!