
Seminars in Hearing Research at Purdue

 

Abstracts

Talks in 2023-2024

Nelson 1215 [Thursdays 12:00-1:00pm]



August 24, 2023

Alexander Francis, Professor, SLHS

Updated results from a study relating the effects of hearing loss and listening conditions on postural sway to fall risk in older adults

Hearing loss is associated with increased fall risk in older adults, but multiple mechanisms have been proposed to account for this. Hearing loss may reduce spatial awareness and/or increase cognitive load, both of which may increase fall risk but may be ameliorated by hearing aid use. We are currently finishing a study comparing fall incidence in daily life with postural sway measured under various listening conditions in older adults with and without hearing aids. An earlier version of this talk reported results from the first 17 participants (7 with hearing aids, 10 without). We have now completed enrollment and initial testing of 46 participants, each of whom stood with feet together and eyes closed, listening to noise-vocoded speech (4, 8, 16 channels) and spatially distributed environmental sounds (silence, 1, 3 sources) for 1 minute per condition while postural sway was recorded in the lab. Participants subsequently reported daily near-falls and falls for 4 months (this phase will be completed in December). Initial results from the first 17 participants suggested that hearing aid users were more stable (or more rigid) than non-users, and this pattern appears similar in the larger sample. Additional analyses will be presented as time permits.
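
As an illustration of the kind of summary measures that can be derived from such recordings, the sketch below computes simple postural sway metrics (RMS displacement, path length, and mean sway velocity) from one trial. The signal names, sampling rate, and synthetic data are hypothetical and are not taken from the study.

```python
# Minimal sketch (not the study's pipeline): summarize a postural sway trial.
import numpy as np

def sway_metrics(ap, ml, fs=100.0):
    """ap, ml: anterior-posterior and medio-lateral sway traces (same length)."""
    ap = ap - np.mean(ap)                                # center so metrics reflect sway about the mean position
    ml = ml - np.mean(ml)
    rms = np.sqrt(np.mean(ap**2 + ml**2))                # RMS radial displacement
    path = np.sum(np.hypot(np.diff(ap), np.diff(ml)))    # total sway path length
    velocity = path * fs / (len(ap) - 1)                 # mean sway velocity (units/s)
    return {"rms": rms, "path_length": path, "mean_velocity": velocity}

# Example with synthetic data for one 60-s condition sampled at 100 Hz:
rng = np.random.default_rng(0)
t = np.arange(0, 60, 1 / 100)
ap = np.cumsum(rng.normal(0, 0.01, t.size))              # random-walk-like sway
ml = np.cumsum(rng.normal(0, 0.01, t.size))
print(sway_metrics(ap, ml))
```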


August 31, 2023

Jeff Lucas, Professor, Biological Sciences

The duck project. Huh? What duck project? Glad you asked...

Next to chickens, Pekin ducks are one of the most important sources of poultry-derived meat. Duck rearing obviously requires keeping relatively large numbers of ducks in relatively small areas. This arrangement can result in various levels of physiological stress, which has impacts on animal welfare and also on productivity. Our project on this topic has three phases. The first is to characterize the vocal system of Pekin ducks, something that surprisingly hasn’t been done before. The second is to use the duck vocal system as a preliminary source of information about stress. This includes determining whether there is environment-dependent call syntax and whether the spectral properties of calls vary with the environment. ‘Environment’ includes both social environment and several types of ‘enrichment’. The third is to see if we can identify vocal elements that might be broadcast to the ducks to mitigate stress.


September 7, 2023

Dalton Aaker, Undergraduate, Biomedical Engineering

Decoding fNIRS Neural Responses: A Machine Learning Approach

The aim of this project is to explore a machine learning model that accurately identifies positive auditory-evoked neural responses while controlling for factors that introduce noise to the neural signal, and to observe how decoding is affected by these sources of interference. Human neuroimaging data collected via functional near-infrared spectroscopy (fNIRS) from a single subject twice daily for five consecutive days were analyzed. The data followed a block-design paradigm with two conditions: meaningful auditory speech and silence serving as a baseline control. Hemoglobin concentration data were collected using a continuous-wave fNIRS system (NIRx NIRSport2) with specific source-detector pairs optimized for brain regions associated with sound perception and language comprehension. Standard fNIRS data cleaning and preprocessing practices were applied, and Python's scikit-learn library was used for decoding and prediction on the extracted datasets. Estimators were trained on hemoglobin concentrations and the applied stimuli, with stratified k-fold cross-validation. Some estimators required training on both systemic physiological and fNIRS datasets, using a feature-union technique to join the relevant features. Preliminary analysis revealed that the model achieved the strongest predictive ability using only the oxygenated hemoglobin signal. At low subject counts, the best decoding accuracies were achieved using a combination of Galvanic Skin Response (GSR) and oxygenated hemoglobin signals. In general, physiological data did not consistently improve decoding accuracy, with the exception of GSR data. This study provides insights applicable to machine learning, neuroscience, and optical engineering; how best to combine cofactors to maximize the predictive performance of machine learning models remains a key area of ongoing research.
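
For readers unfamiliar with the decoding approach described above, the following is a minimal scikit-learn sketch that joins two feature sets with a FeatureUnion and evaluates a classifier with stratified k-fold cross-validation. The feature names, shapes, classifier choice, and synthetic data are hypothetical, not the project's actual code.

```python
# Minimal sketch: classify speech vs. silence blocks from HbO features,
# optionally joined with GSR features via a FeatureUnion. Hypothetical data.
import numpy as np
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.preprocessing import StandardScaler, FunctionTransformer
from sklearn.svm import SVC
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)
n_epochs = 80
hbo = rng.normal(size=(n_epochs, 20))   # stand-in for epoch-averaged HbO per channel
gsr = rng.normal(size=(n_epochs, 4))    # stand-in for galvanic skin response features
X = np.hstack([hbo, gsr])
y = rng.integers(0, 2, n_epochs)        # 0 = silence, 1 = speech (random labels here)

# FeatureUnion joins the two feature sets; each branch simply selects its columns.
select_hbo = FunctionTransformer(lambda x: x[:, :20])
select_gsr = FunctionTransformer(lambda x: x[:, 20:])
features = FeatureUnion([("hbo", select_hbo), ("gsr", select_gsr)])

clf = Pipeline([("features", features),
                ("scale", StandardScaler()),
                ("svm", SVC(kernel="linear"))])
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
print(cross_val_score(clf, X, y, cv=cv).mean())  # chance (~0.5) for this random example
```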


September 14, 2023

Edward Bartlett, Professor, Biology & Biomedical Engineering

Spatially-specific, closed-loop, infrared thalamocortical deep brain stimulation

Deep brain stimulation (DBS) is a powerful clinical tool for the treatment of circuit-based neurological disorders such as Parkinson’s disease and obsessive-compulsive disorder. Electrical DBS is, however, limited by the spread of stimulus currents into tissue unrelated to treatment, potentially causing adverse patient side effects. In this work, we utilize infrared neural stimulation (INS), an optical neuromodulation technique that uses near- to mid-infrared light to drive graded excitatory and inhibitory responses in nerves and neurons, to facilitate an optical, spatially constrained DBS paradigm. INS has been shown to provide spatially constrained responses in the cochlea and in cortical neurons. Unlike other optical techniques, INS does not require genetic modification of neural targets. In this study, we show that INS produces graded, biophysically relevant single-unit responses with robust information transfer in thalamocortical circuits. Importantly, we show that cortical spread of activation from thalamic INS produces more spatially constrained response profiles than conventional electrical stimulation. Owing to this observed spatial precision, we used deep reinforcement learning to close the loop on thalamocortical INS, creating real-time representations of stimulus-response dynamics while driving cortical neurons to precise firing patterns. Our data suggest that INS can serve as a spatially precise stimulation modality for both open- and closed-loop DBS.
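
The study closed the loop with deep reinforcement learning; as a much simpler stand-in, the sketch below shows the general closed-loop structure using a basic proportional controller that adjusts stimulus energy toward a target firing rate. All names, values, and the toy neuron model are hypothetical.

```python
# Illustrative sketch only: a generic closed-loop stimulation update. The study
# used deep reinforcement learning; this proportional controller just shows the
# measure -> update -> stimulate structure of a closed loop.
import numpy as np

def update_pulse_energy(energy, measured_rate, target_rate, gain=0.01,
                        e_min=0.0, e_max=1.0):
    """Adjust stimulus energy in proportion to the firing-rate error."""
    error = target_rate - measured_rate
    return float(np.clip(energy + gain * error, e_min, e_max))

energy = 0.2
for trial in range(5):
    # Toy "neuron": firing rate grows with stimulus energy plus noise.
    measured = 40 + 100 * energy + np.random.default_rng(trial).normal(0, 2)
    energy = update_pulse_energy(energy, measured, target_rate=80.0)
    print(f"trial {trial}: rate {measured:.1f} Hz -> next energy {energy:.3f}")
```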


September 21, 2023

Ananthanarayan Krishnan, Professor, SLHS

Tonal language experience shapes functional asymmetries in pitch processing at the brainstem and cortical levels

Temporal attributes of pitch processing at cortical and subcortical levels shaped by language experience are differentially weighted and well-coordinated. The question is whether language experience-induced functional modulation of hemispheric preference is complemented by selective brainstem ear asymmetry for pitch processing. Brainstem frequency-following and cortical pitch responses were recorded concurrently from Mandarin and English participants. A Mandarin syllable (in both speech and non-speech contexts) with a rising pitch contour (Mandarin Tone 2) was presented monaurally to each ear. At the cortical level, left-ear stimulation in the Chinese group revealed an experience-dependent enhanced response for pitch processing in the right hemisphere (RH), consistent with a functional account. The English group revealed a contralateral hemisphere preference consistent with a structural account. At the brainstem level, the Chinese participants showed a functional leftward ear asymmetry, whereas the English participants were consistent with a structural account. Taken together, the cortical RH preference for processing pitch-relevant information in the Chinese group may reflect, at least in part, experience-dependent enhanced input from the right inferior colliculus (IC). Thus, the RH preference seen at the cortical level may already be emerging at the subcortical level. Overall, language experience appears to modulate both cortical hemispheric preference and brainstem ear asymmetry in a complementary manner to optimize processing of temporal attributes of pitch.


September 28, 2023

Varsha Mysore Athreya, PhD Student, SLHS

Age Effects on Auditory Temporal Processing and Relationship to Speech Perception in Noise

Individuals with normal audiometric sensitivity have variable speech perception in noise capabilities, which can worsen with age. Temporal processing plays a vital role in speech perception, especially in adverse listening conditions. Auditory decline due to aging manifests both as peripheral pathology and central auditory system changes, leading to altered temporal processing. To understand the relative contributions of these changes, we measure within-(frequency)-channel and cross-channel temporal processing in normal-hearing individuals across a wide age range. Robust perception of within-channel temporal cues requires precise coding both at peripheral and central levels of the auditory pathways. However, cross-channel processing is supported by central mechanisms. In this talk, I will present our results from a battery of behavioral and electrophysiological measures of within- and cross-channel temporal processing. Age effects on cross-channel temporal-coherence processing appear to be larger than within-channel alterations. Furthermore, our metrics of individual cross-channel temporal-coherence processing are stronger predictors of speech-in-noise outcomes, especially when tasks emphasize streaming and selective attention. Taken together, our results underscore the importance of central auditory changes in aging as a key contributor to age-related perceptual deficits.


October 5, 2023

Afagh Farhadi, Postdoctoral Fellow, SLHS

Modeling the Medial Olivocochlear Efferent in the Descending Auditory Pathway with a Dynamic Gain Control Feedback system

Computational modeling is a powerful tool in hearing research, as it helps to test hypotheses regarding the underlying mechanisms involved in different auditory scenes, including speech in noise. The medial olivocochlear (MOC) efferent system is suggested to play a crucial role in enhancing auditory processing in noisy backgrounds. The MOC system is a part of the descending auditory system that includes pathways that ultimately project to the outer hair cells (OHCs) in the cochlea, which are responsible for cochlear amplification. Auditory models have mostly focused on the ascending pathway of the auditory system, which is responsible for transmitting sensory information from the cochlea to higher areas. However, the descending pathway, or efferent system, has received relatively little attention. The subcortical auditory model proposed in this study incorporated a feedback projection from MOC neurons that dynamically adjusted cochlear gain based on inputs received by the MOC. The two primary inputs to the MOC examined in the model were the projections from wide-dynamic-range cells in the cochlear nucleus and the fluctuation-driven information from IC cells in the midbrain. The model parameters were optimized using neural recordings from IC cells in awake rabbits responding to AM noise. The model with efferents and the optimized set of parameters successfully simulated the trend observed in recorded neural responses, whereas the model without efferents did not. The optimized parameters also matched the physiological evidence for the dynamics of the MOC efferent system. The proposed model with efferents was tested using several psychoacoustical detection experiments and was found to predict human listener thresholds better than the model without efferents. These results demonstrate the significance of the efferent system in analyzing the mechanisms underlying various psychoacoustic phenomena, including not only simultaneous masking but also forward masking and auditory enhancement. Finally, using the proposed model, the effect of MOC efferent activity on the neural coding of speech-like signals was explored. Simulation results demonstrated that the efferent system enhanced neural fluctuation patterns for vowel-like sounds, which can potentially improve speech perception.
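
As a rough illustration of dynamic gain control of the kind described (not the published model), the sketch below low-pass filters two MOC input signals and uses the result to reduce a cochlear gain term over time. Parameter values, units, and signal shapes are assumptions.

```python
# Minimal sketch (assumptions: first-order MOC dynamics, arbitrary units):
# cochlear gain is reduced dynamically by a low-pass-filtered "MOC drive" that
# sums a wide-dynamic-range input and a fluctuation-driven IC input.
import numpy as np

def moc_gain(wdr_drive, ic_drive, fs=10000.0, tau=0.2, g_max=1.0, strength=0.5):
    """Return a gain trajectory in (0, g_max] given two MOC input time series."""
    drive = wdr_drive + ic_drive
    alpha = 1.0 / (tau * fs)               # first-order low-pass update step
    moc = np.zeros_like(drive)
    for n in range(1, len(drive)):         # slow efferent dynamics (time constant tau)
        moc[n] = moc[n - 1] + alpha * (drive[n] - moc[n - 1])
    return g_max / (1.0 + strength * moc)  # more MOC activity -> less cochlear gain

t = np.arange(0, 1.0, 1 / 10000)
wdr = (t > 0.3).astype(float)                       # toy noise-onset drive
ic = 0.2 * np.abs(np.sin(2 * np.pi * 10 * t))       # toy fluctuation-driven drive
gain = moc_gain(wdr, ic)
print(gain[0], gain[-1])                            # gain drops after the noise turns on
```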


October 12, 2023

Kelsey Anbuhl, Postdoctoral Fellow, Center for Neural Science, NYU

A top-down, cingulate-to-auditory cortex projection facilitates effortful listening

We often exert greater cognitive resources (i.e., listening effort) to understand speech under challenging acoustic conditions. This mechanism can be overwhelmed in those with hearing loss, resulting in cognitive fatigue in adults and potentially impeding language acquisition in children. However, the neural mechanisms that support listening effort are uncertain. Evidence from human studies suggests that the cingulate cortex is engaged under difficult listening conditions and may exert top-down modulation of the auditory cortex. In this talk, I will provide anatomical evidence in the gerbil for a strong, descending projection from the cingulate cortex to dorsal and primary auditory cortex. I will also discuss an auditory effort task used to vary the difficulty of listening conditions, where trials were clustered into ‘Easy’ or ‘Hard’ blocks based on sound duration (long vs. short duration, respectively). Then, I will demonstrate that the cingulate-to-auditory cortex projection facilitates performance during hard listening blocks by using chemogenetic (i.e., DREADDs) and pharmacological (i.e., muscimol) approaches. Taken together, these results suggest that this top-down, cingulate-to-auditory cortex pathway is a plausible circuit that may be undermined by hearing loss.


October 19, 2023

Aditi Gargeshwari, Doctoral Candidate, Auditory Electrophysiology Lab (Krishnan), SLHS

Influence of Visual Speech Cues on Auditory Subcortical and Cortical Responses in Normal and Impaired Ears

The human brain can integrate information from different sensory modalities to produce a unified representation of the external world. Temporally congruent auditory and visual stimulation facilitates multisensory perception and integration, allowing us to better process and understand the information presented to us, especially in the presence of background noise and for individuals with hearing difficulties. The influences of visual speech cues on behavioral speech perception have been widely studied. Functional magnetic resonance imaging and electrophysiological studies have mostly evaluated auditory cortical responses. Little is known about the influence of visual speech production cues on auditory brainstem phase-locked ensemble responses in normal-hearing individuals and individuals with sensorineural hearing loss. To address this knowledge gap, we evaluated the effects of congruent and incongruent audiovisual (AV) speech on the scalp-recorded brainstem frequency-following response (FFR) and the cortical acoustic change complex (ACC). For the FFR, phase locking to both the envelope periodicity (FFRENV) and the temporal fine structure (FFRTFS) was evaluated. I will present our initial results showing that visual speech cues do influence evoked responses at both cortical and brainstem levels, albeit differently. These results appear to suggest that neural activity relevant to audio-visual integration may already be emerging at the brainstem level. Thus, these measures have the potential to be developed as clinical metrics for evaluation of audio-visual integration, and as measures to evaluate hearing-aid outcomes and prognosis.

 


October 26, 2023

Meredith Christine Ziliak, Doctoral Candidate, PULSe/BIO (Bartlett Lab)

The Effects of Small Arms Fire-Like Noise on Hearing Loss and Thalamocortical Processing

Noise-induced hearing loss (NIHL) is a prevalent issue among military personnel, primarily attributed to auditory stressors in their environment, notably firearm noise, which consists of repeated, high-intensity bursts of sound. While previous research has investigated the impact of firearm noise on the peripheral auditory system, its influence on the central auditory system (CAS) is still unknown. Therefore, our study aims to identify how firearm-like sounds affect cochlear and CAS structures and processes, specifically within the thalamocortical region. Our hypothesis posits that firearm-induced NIHL will elevate auditory thresholds and lead to an overall reduction in middle latency response (MLR) component amplitudes, indicating compromised functionality within the inferior colliculus and medial geniculate body. We exposed subjects to a simulated small arms fire (SAF) noise and subsequently employed a modified click train to assess the effects of noise exposure on thalamocortical processing over a 56-day period. Our preliminary findings reveal a reduction in all MLR component amplitudes and a loss of peak distinctiveness. Intriguingly, we observed a non-monotonic progression of thalamocortical dysfunction, characterized by phases of initial damage, temporary recovery, and chronic degradation. Future analyses will enable us to compare the progression of thalamocortical processing in response to both NIHL and metabolic-induced hearing loss as we examine the effects of SAF exposure relative to D-galactose exposure.

 



November 2, 2023

Leroy Medrano, AuD student, SLHS (summer T35 project)

Reliability of audiovisual temporal window measurements

The temporal binding window (TBW) is the period of time within which two stimuli presented at different times are perceived as one event. It provides a way to measure how the brain integrates different stimuli and to see whether individuals can distinguish the temporal delay between a pair of stimuli. Visual and auditory stimuli have commonly been used in prior multisensory integration studies, and the results from these studies have been used to compare individuals with typical sensory and cognitive function to individuals who have impairments in these domains. Such comparisons rest on differences that are not supportable without reliability measurements. Therefore, we aim to measure the consistency of temporal binding windows across different visits using audio-visual stimuli. Temporal binding window measures were collected in 10 participants (ages 19–58) with normal hearing and no history of vestibular disease. Each participant was tested twice at separate visits (at baseline, and on average 7 days later) using a fixed psychophysical procedure that consisted of subjective responses of either “flash” or “beep” on each trial. Preliminary results using both Generalized Linear Model (GLM) and Bayesian parameters suggest that there may not be consistency across visits at the group level when using linear regression analysis. Further research and analysis will include a larger sample size to account for variance, analysis of Karolinska Sleepiness Scale (KSS) data to see whether subjective sleepiness can influence performance, and perhaps an increased number of trials per time interval. Overall, more work is needed to investigate the reliability of audio-visual behavioral data.
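
For context, the sketch below shows one common way to estimate a binding-window width with a logistic GLM: fit response probability as a function of audio-visual delay and read off the range of delays above a criterion. The data, criterion, and model form are hypothetical and are not the study's analysis.

```python
# Minimal sketch (hypothetical data and criterion): fit a logistic GLM of
# "perceived as one event" probability vs. audio-visual delay, then estimate a
# window width as the delay range above a 0.5 criterion.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
soa = np.repeat(np.array([-300, -200, -100, 0, 100, 200, 300]), 20)  # ms, visual-lead negative
p_true = np.exp(-np.abs(soa) / 150.0)               # toy "bound as one event" probability
resp = (rng.random(soa.size) < p_true).astype(int)  # simulated binary responses

# A quadratic term lets the logistic GLM capture the peaked shape around 0 ms.
X = np.column_stack([soa, soa**2])
glm = make_pipeline(StandardScaler(), LogisticRegression()).fit(X, resp)

grid = np.linspace(-400, 400, 801)
p_hat = glm.predict_proba(np.column_stack([grid, grid**2]))[:, 1]
above = grid[p_hat >= 0.5]
tbw = above.max() - above.min() if above.size else np.nan
print(f"Estimated window width at the 0.5 criterion: {tbw:.0f} ms")
```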

 



November 9, 2023

Samantha Hauser and Andrew Sivaprakasam

Audiology Research Diagnostics Core (ARDC) & Data Repository: Big Data Analysis for Hearing Research

Many labs on Purdue’s campus, whether they study hearing directly or not, run (or wish they could run) basic auditory diagnostics to ensure their research participants have appropriate hearing profiles for the study. This data, however, is often not used beyond establishing a subject's inclusion in a study and each lab’s test battery may lack standardization or may be incomplete due to resource limitations. To address these problems, the Audiology Research Diagnostics Core (ARDC) was designed to optimize the collection of high-quality audiologic data and to reduce the barrier-to-entry for hearing evaluations (e.g., in non-auditory labs). Not only does the ARDC function as a core facility to facilitate comprehensive hearing evaluations, but it drives the Audiology Research Diagnostics Core Repository (ARDCR). The ARDCR will include data collected in the ARDC as well as across other hearing labs on campus to accumulate large scale datasets for future analyses. We will demonstrate how the ARDCR can also be used by Purdue researchers to recruit participants with specified profiles of hearing, along with the current structure and formatting of this data. As the ARDC is in its early stages, we will continue a discussion about how labs can best engage with the ARDC and solicit feedback regarding ARDC logistics. Visit https://engineering.purdue.edu/TPAN/hearing/ardc for more information.

 



November 16, 2023

Andrew Sivaprakasam, MD/PhD Trainee, Purdue BME / Indiana University School of Medicine

Physiological and Perceptual Investigations of Place and Time Coding of Pitch Across Species: Preliminary Findings and Experimental Framework

An elusive empirical neural explanation for pitch perception has inspired several cochlear place- and time-dependent hypotheses. Most experts debate the importance of tonotopy versus temporal coding, but it is still unclear how disruptions of these mechanisms are influenced by sensorineural hearing loss (SNHL). SNHL is complex, but as a diagnosis it groups variable degrees of inner and outer hair cell (IHC/OHC) and cochlear synapse (CS) damage, which likely result in different patterns of pitch perception. With careful stimulus design and electrophysiological data harmonization between animal and human subjects, we are developing a framework to more directly link alterations in neural coding to pitch perceptual outcomes. I will present progress related to my NRSA F30 project (Title: Place and Time Processing of Pitch in the Context of Cochlear Dysfunction), which was funded thanks to the fruitful discussions and feedback sparked by this seminar two years ago. Key progress so far includes data collection in several chinchillas and humans with normal hearing status, involving iteratively refined electrophysiological (EFR, ACC) and behavioral (F0DL) correlates of place and time coding.

 



November 30, 2023

Cole Trent, PhD Student, SLHS

Discrimination of intact vs elliptical sentences in CI-simulated speech

Cochlear implant (CI) speech-recognition outcomes are influenced by various combinations of bottom-up and top-down factors. As such, there is substantial individual variability in CI outcomes. The purpose of this project is to assess elliptical speech as a tool for evaluating bottom-up perceptual acuity in CI users. Noise-vocoded sentence stimuli were presented to 10 normal-hearing (NH) listeners. Stimuli consisted of IEEE sentences that either maintained correct consonant place-of-articulation cues (Intact Speech) or contained consonant “ellipses” that ambiguated spectral cues to consonant place (Elliptical Speech). The “ellipses” replaced each key-word consonant with a new consonant maintaining the same manner and voicing of the original, but with an altered place feature. A concurrent neuroimaging measurement was conducted to estimate cortical activity in bilateral auditory cortex and dorsolateral prefrontal cortex via functional near-infrared spectroscopy (fNIRS). This project tested our hypotheses that: 1) NH participants will successfully detect ellipses in more favorable listening conditions but will be unable to detect ellipses under more degraded listening conditions, and 2) listeners will demonstrate differences in cortical activity in response to Intact vs Elliptical speech depending on the severity of the signal degradation. Preliminary results demonstrate that the detection of ellipses in speech is impacted by the spectral resolution of the signal, and this result is also reflected in listeners’ cortical activity. These results have important implications for CI users whereby elliptical speech is a potential tool for isolating the quality of the bottom-up input from top-down processing of speech.
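
For readers unfamiliar with CI simulation, the sketch below shows a generic n-channel noise vocoder: each analysis band's envelope modulates band-limited noise. Band edges, filter orders, and the test signal are hypothetical and are not the study's stimulus parameters.

```python
# Minimal sketch of a generic n-channel noise vocoder of the kind used for CI
# simulation (illustrative band edges and filters; not the study's stimuli).
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(x, fs, n_channels=8, f_lo=100.0, f_hi=7000.0):
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)   # log-spaced analysis bands
    rng = np.random.default_rng(0)
    noise = rng.normal(size=x.size)
    out = np.zeros_like(x, dtype=float)
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, x)
        env = np.abs(hilbert(band))                     # band envelope
        carrier = sosfiltfilt(sos, noise)               # band-limited noise carrier
        out += env * carrier                            # envelope-modulated noise band
    return out / (np.max(np.abs(out)) + 1e-12)

fs = 16000
t = np.arange(0, 1.0, 1 / fs)
speechlike = np.sin(2 * np.pi * 150 * t) * (1 + 0.5 * np.sin(2 * np.pi * 4 * t))
vocoded = noise_vocode(speechlike, fs, n_channels=8)
```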

 



December 7, 2023

Alexander V. Galazyuk, Professor of Anatomy and Neurobiology, Northeast Ohio Medical University

Residual inhibition: from the mechanism to tinnitus treatment.

Neurons in various sensory systems show some level of spontaneous firing in the absence of sensory stimuli. In the auditory system, spontaneous firing has been shown at all levels of the auditory pathway, from spiral ganglion neurons in the cochlea to neurons of the auditory cortex. This internal "noise" is normal for the system, and it does not interfere with our ability to perceive silence or analyze sounds. However, this internal noise can be elevated under pathological conditions. After cochlear insult, the input to the central auditory system becomes markedly reduced. To compensate for this loss, central gain enhancement (or neural amplification) increases neuronal sensitivity, which gives rise to hyperactivity. Such hyperactivity has been hypothesized to cause phantom sound perception, or tinnitus. This implies that suppression of this hyperactivity should reduce or eliminate tinnitus. Research from our laboratory has been devoted to identifying the mechanism underlying residual inhibition of tinnitus, a brief suppression of tinnitus following a sound stimulus. We found that during this suppression, the spontaneous firing of auditory neurons is greatly reduced, and that this reduction is mediated by metabotropic glutamate receptors (mGluRs). The key mechanisms that govern neural suppression in animals closely resemble clinical psychoacoustic findings of residual inhibition observed in tinnitus patients. We also demonstrated that drugs targeting mGluRs suppress spontaneous activity in auditory neurons and reduce or eliminate behavioral signs of tinnitus in mice for several hours. Thus, these drugs are therapeutically relevant for tinnitus suppression in humans.

 



January 16, 2024

David A. Eddins, Ph.D., CCC-A, Professor, School of Communication Sciences & Disorders, and Director, Communication Technologies Research Center, University of Central Florida

A Novel Treatment for Hyperacusis: Basic Science, Technology, and Counseling

Patients with moderate to severe hyperacusis, an abnormal sensitivity to the loudness of everyday sounds, present a unique clinical challenge. Hyperacusis frequently is debilitating and life-altering, with patients engaged in self-treatment that involves limiting potential exposure to offending sounds through a combination of situational avoidance and use of hearing protection devices. They often present for assessment wearing earplugs, earmuffs, or both. Unfortunately, chronic use of earplugs exacerbates the condition. The adverse impact of this partial sound deprivation in mature adults induces hyper-sensitivity to high-level sounds, as measured using psychoacoustic, electroacoustic, and electrophysiological methods. The clinical challenge, therefore, is to eliminate the use of counter-productive hearing protection devices and introduce effective treatment. In this seminar, a novel treatment that overcomes these challenges using a combination of technology, precision audiology, and proven counseling methods will be presented along with the underlying scientific principles. Results from initial evaluation of the treatment indicate remarkable success based on objective and subjective outcomes.

 



February 15, 2024

Lisa Goffman, PhD, CCC-SLP, *SLHS Distinguished Alumni* Senior Scientist and Endowed Chair, Boys Town National Research Hospital

A developmentally grounded account of sequential pattern learning deficits in developmental language disorder.

Infants are able to learn sound patterns that obligate local sequential dependencies that are no longer readily accessible to adults (Gerken et al., 2019). However, adults can learn similar patterns that do not require attention to sequential dependencies. Interestingly, the sound pattern learned so readily by infants is the second most frequent morphophonological pattern across human languages: thus, infants’ ability to learn this pattern may underlie the observation that acquiring a language early in life is important for the development of mature performance. This surprising developmental trajectory, in which infants and young children show learning skills that adults do not, also raises intriguing questions about learning in a group of children characterized by their language difficulties—those with developmental language disorder (DLD). Children with DLD show deficits in the organization of sequential patterns in both language production and non-linguistic motor skill. Thus, DLD does not appear to be a disorder of speech, language, or motor skill, but rather an impaired ability to detect and deploy sequential dependencies over multiple domains. We propose that sequential dependency learning is required for frequently encountered phonological and morphosyntactic patterns in natural language as well as other domain general abilities, and deficits in children with DLD interfere with learning of these sequential dependencies. However, patterns that do not rely on sequential dependencies are learnable by children and adults with DLD.

 



February 22, 2024

Samantha Hauser, AuD, CCC-A, PhD Candidate

Towards Precision Diagnostics for Complex Sensorineural Hearing Loss.

Although sensorineural hearing loss (SNHL) is a single audiometrically defined clinical category, animal models and human temporal-bone studies show that individuals have highly complex SNHL pathologies, with variable combinations of inner and outer hair cell (IHC/OHC) injury, reduced endocochlear potential (EP), and cochlear synaptopathy. An umbrella diagnosis of SNHL belies the diversity of possible underlying cochlear pathophysiology profiles and the associated divergent listening outcomes among patients with similar audiograms. Current clinical tools and guidelines do not adequately capture this diversity in hearing, hindering individualized audiological management and treatments. To address this problem, we are testing a battery of physiological biomarkers rather than a single metric of hearing. Our coordinated battery in multiple preclinical chinchilla models of SNHL and in humans with SNHL allows us to investigate not only isolated pathologies but also the potential interactions between the many sources of cochlear dysfunction in complex SNHL. In addition to physiological assessment, human subjects complete behavioral measures of speech perception that place varying degrees of demand on peripheral and central factors to assess functional differences in hearing. Efforts to increase the precision of diagnostic tools in audiology, initial findings from this study, and challenges related to cross-species integration will be presented.

 



February 29, 2024

Edward Bartlett, Associate Dean for Undergraduate Affairs, Professor of BIO and BME

Neurometric amplitude modulation detection in the inferior colliculus of Young and Aged rats

Amplitude modulation is an important acoustic cue for sound discrimination, and humans and animals are able to detect small modulation depths behaviorally. In the inferior colliculus (IC), both firing rate and phase-locking may be used to detect amplitude modulation. How the neural representations that support modulation detection change with age is poorly understood, including the extent to which age-related changes may be attributed to the inherited properties of ascending inputs to IC neurons. Here, simultaneous measures of local field potentials (LFPs) and single-unit responses were made from the inferior colliculus of Young and Aged rats using both noise and tone carriers in response to sinusoidally amplitude-modulated sounds of varying depths. We found that Young units had higher firing rates than Aged units for noise carriers, whereas Aged units had higher phase-locking (vector strength), especially for tone carriers. Sustained LFPs were larger in Young animals for modulation frequencies of 8-16 Hz and comparable at higher modulation frequencies. Onset LFP amplitudes were much larger in Young animals and were correlated with the evoked firing rates. Unit neurometric thresholds based on synchrony or firing-rate measures did not differ significantly across age and were comparable to behavioral thresholds in previous studies, whereas LFP thresholds were lower than behavioral thresholds.
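
As background, the vector strength referred to above is a standard measure of phase locking; a minimal sketch of its computation on synthetic spike times is shown below (not the study's analysis code).

```python
# Minimal sketch (standard definition, synthetic spikes): vector strength of
# spike times relative to the amplitude-modulation period.
import numpy as np

def vector_strength(spike_times, mod_freq):
    """Vector strength (0-1) of spike_times (s) at modulation frequency mod_freq (Hz)."""
    phases = 2 * np.pi * mod_freq * np.asarray(spike_times)
    return np.hypot(np.mean(np.cos(phases)), np.mean(np.sin(phases)))

rng = np.random.default_rng(0)
fm = 16.0                                        # modulation frequency, Hz
# Synthetic spikes clustered near one modulation phase -> vector strength near 1.
cycles = np.arange(0, 1.0, 1 / fm)
spikes = cycles + rng.normal(0.2 / fm, 0.02 / fm, cycles.size)
print(f"VS = {vector_strength(spikes, fm):.2f}")
```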

 



March 7, 2024

Pasquale Bottalico, Associate Professor, University of Illinois Urbana-Champaign

Classroom acoustics for enhancing students' understanding when a teacher suffers from a dysphonic voice.

The purpose of this project is to assess the acoustical conditions in which optimal intelligibility and low listening difficulty can be achieved in real classrooms for elementary students, taking into consideration the effects of dysphonic voice and typical classroom noise. Speech intelligibility tests were performed in six elementary classrooms with 80 normal-hearing students aged 7–11 years. The speech material was produced by a female actor using a normal voice quality and simulating a dysphonic voice. The stimuli were played by a Head and Torso Simulator. Child babble noise and classrooms with different reverberation times were used to obtain a Speech Transmission Index (STI) range from 0.2 to 0.7, corresponding to the categories bad, poor, fair, and good. The results showed a statistically significant decrease in intelligibility when the speaker was dysphonic at STI values higher than 0.33. The rating of listening difficulty showed a significantly greater difficulty in perceiving the dysphonic voice. In addition, younger children showed poorer performance and greater listening difficulty compared with older children when listening to the normal voice quality. Both groups were equally impacted when the voice was dysphonic. The results suggest that better acoustic conditions are needed for children to reach a good level of intelligibility and to reduce listening difficulty if the teacher is suffering from voice problems. This was true for children regardless of grade level, highlighting the importance of ensuring more favorable acoustic conditions for children throughout all elementary schools.

 



March 21, 2024

Romina Andrea Najarro Flores, PhD Candidate, BIO

Preliminary results on the auditory processing of dialect-specific properties of Carolina chickadee songs.

Birds can change elements of their songs across populations and form dialects, which are cultural communication conventions. Song dialects have primarily been studied from the perspective of the sender; geographical variation of the auditory system that may facilitate dialect formation and persistence has been largely overlooked. We tested whether the geographical distribution of song properties correlates with song-related auditory processing. We predicted that auditory processing of each population would match their corresponding population song properties. Our study system is a series of populations of Carolina chickadees that are distributed across fragmented forests in central Indiana and have quite variable song dialects. The dialects differ from each other and from the common tonal “fee-bee fee-bay” song. More importantly, they differ in the presence and properties of complex shifts in frequency and amplitude modulation. We exposed Carolina chickadees from different populations to tones, AM and FM stimuli and their natural song elements and recorded their Auditory Evoked Potentials. Preliminary data will be presented.

 



March 28, 2024

Meredith Christine Ziliak, PhD Candidate, BIO

An Analysis of Subcortical and Thalamocortical Processes Exposed to Small Arms Fire-Like Noise.

The auditory brainstem response (ABR) and middle latency response (MLR) are auditory evoked potentials (AEPs) useful as diagnostic tools in the clinic and laboratory. Both AEPs consist of waveform components that indicate neuronal activation of structures and processes along the central auditory system (CAS), with ABRs corresponding to subcortical regions and MLRs to subcortical and thalamocortical regions. Typically, ABRs and MLRs are analyzed in isolation, as they assess functionally and geographically distinct neuronal populations and require different filters and analysis procedures. However, when ABRs and MLRs are not used in tandem, critical information is lost regarding the propagation of information from one CAS region to another, specifically from the subcortical to the thalamocortical level. To better understand how damage affects auditory processing throughout the ascending pathway, it is paramount to analyze the CAS in a more holistic manner. Therefore, we have investigated the ABRs and MLRs of subjects exposed to a broadband small arms fire-like (SAF) noise that induced persistent shifts in thresholds and distortion product otoacoustic emissions (DPOAEs) consistent with noise-induced hearing loss (NIHL). Having previously found indicators of hidden hearing loss through correlations between ABR thresholds and waveform components, we hypothesize that waveform characteristics of the ABRs and MLRs will demonstrate idiosyncrasies indicative of altered transmission or of differing diagnostic sensitivity between ABR and MLR measurements. In this presentation, we will discuss comparisons between ABR and MLR characteristics, as well as new methods of analysis for MLR waveform components.
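
To illustrate the point about the two responses requiring different filters, the sketch below band-pass filters the same synthetic evoked recording into typical ABR and MLR frequency ranges. The filter bands, sampling rate, and data are assumptions, not the lab's settings.

```python
# Minimal sketch (typical filter bands, synthetic data): extracting ABR- and
# MLR-range activity from the same evoked recording with different band-pass
# filters, since the two AEPs occupy different frequency ranges.
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 20000.0
t = np.arange(0, 0.1, 1 / fs)                       # 100-ms epoch
rng = np.random.default_rng(0)
aep = rng.normal(0, 0.1, t.size)                    # stand-in for an averaged evoked response

abr_sos = butter(2, [300, 3000], btype="bandpass", fs=fs, output="sos")
mlr_sos = butter(2, [10, 300], btype="bandpass", fs=fs, output="sos")
abr = sosfiltfilt(abr_sos, aep)                     # fast brainstem components (waves I-V range)
mlr = sosfiltfilt(mlr_sos, aep)                     # slower thalamocortical components (Na, Pa, Nb, Pb range)
print(np.ptp(abr), np.ptp(mlr))                     # peak-to-peak amplitude of each filtered trace
```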

 



April 4, 2024

Ananthanarayan Krishnan, PhD, Professor, SLHS

The spatiotemporal organization of the pitch processing network is shaped by tonal language experience.

There is compelling evidence that tonal language experience shapes neural representations of pitch in the auditory brainstem and cortex. However, crucial questions remain about the spatiotemporal organization of this distributed pitch network. (1) How does language experience change the spatiotemporal dynamics and coordination of pitch processing at different stages/time windows (when) along the processing hierarchy? (2) Does language experience selectively reconfigure the brain regions, their connectivity strengths, and hemispheric preference at the different stages (where) based on functional demands? (3) Does language experience preferentially drive temporal and/or spectral cue-based mechanisms of pitch encoding (how)? Our long-term objective is to advance knowledge of how language experience shapes the spatiotemporal (when, where, and how) reorganization of the brain and its underlying pitch circuitry. Using a cross-linguistic (Mandarin vs. English) design, we propose a novel multimodal neuroimaging approach that integrates EEG-derived pitch-specific responses with fMRI and functional connectivity measures to assess the brain processes subserving language-dependent tuning of pitch representations at different spatial scales and temporal windows of processing. The long-term goal is to advance our understanding of how language experience reconfigures the distributed processing network to optimally represent behaviorally relevant acoustic and linguistic features of dynamic pitch. Our central hypothesis is that tonal language experience reconfigures the spatiotemporal dynamics of the brain’s pitch networks according to functional language demands, which, in turn, enables optimal neural encoding of linguistically relevant features of the acoustic signal. To this end, we utilize a functionally motivated theoretical framework that views pitch processing as a distributed hierarchical process involving coordination between multiple stages/time windows of processing in cortical and subcortical regions, utilizing local, bottom-up, and top-down processes. This presentation will present preliminary evidence addressing Aim 1, which characterizes brainstem and cortical EEG-derived pitch-specific measures, fMRI activation patterns, and EEG-derived functional connectivity measures reflecting early sensory-level pitch processing in response to speech and non-speech sounds with native and non-native pitch contours. We aim to show that experience-dependent effects: (i) at early sensory-level processing in the brainstem and auditory cortex, target neural encoding of only the linguistically relevant dynamic temporal attributes of native pitch contours, with a left-ear advantage (brainstem) and a right-hemispheric (RH) preference (auditory cortex), irrespective of domain; and (ii) will selectively recruit the language network bilaterally (including the pSTG-aSTG axis, temporal pole, and IFG) for native pitch contours, irrespective of domain, but in Mandarin speakers only. In addition, we expect EEG-derived functional connectivity measures to show more organized connectivity between early sensory and downstream language-related components only in the Mandarin group. These early results are promising and suggest that the spatiotemporal dynamics of pitch processing are selectively influenced to enhance neural representations of pitch subserving both auditory and linguistic function.

 



April 11, 2024

Homeira Islam Kafi, PhD Dissertation Defense, SLHS.

Multiple Pathways to Suprathreshold Speech in Noise Deficit in Human Listeners.

Threshold audiometry, which measures the audibility of sounds in quiet, is currently the foundation of clinical hearing evaluation and patient management. Yet, despite using clinically prescribed, state-of-the-art hearing aids that can restore audibility in quiet, patients with sensorineural hearing loss (SNHL) experience difficulty understanding speech in noisy backgrounds (e.g., cocktail-party-like situations). This is likely because the amplification provided by modern hearing aids, while restoring audibility in quiet, cannot compensate for the degradation in neural coding of speech in noise resulting from a range of non-linear changes in cochlear function that occur due to hearing damage. Furthermore, in addition to robust neural coding, the efficacy of cognitive processes such as selective attention also influences speech understanding outcomes. While much is known about how audibility affects speech understanding outcomes, little is known about suprathreshold deficits in SNHL. Unfortunately, direct measurements of the physiological changes in human inner ears are not possible due to ethical constraints. Here, I use noninvasive tools to characterize the effects of two less-familiar forms of SNHL: cochlear synaptopathy (CS) and distorted tonotopy (DT). Results from our experiments showed that age-related CS degrades envelope coding even in the absence of audiometric hearing loss and that these effects can be quantified using non-invasive electroencephalography (EEG)-based envelope-following response (EFR) metrics. To date, DT has only been studied in laboratory-controlled animal models. Here, I combined psychophysical tuning curves, EFRs, and speech-in-noise measurements to characterize the effects of DT. Our results suggest that low-frequency noise produces a strong masking effect on the coding of speech by the high-frequency portions of the cochlea in individuals with SNHL and that an index of DT (the tip-to-tail ratio) obtained from psychophysical tuning curves can account for a significant portion of the large individual variability in listening outcomes among hearing-aid users, over and beyond audibility. Lastly, I propose a machine-learning framework to study the effect of attentional control on speech-in-noise outcomes. Specifically, I introduce a machine-learning model that assesses how attentional control influences speech-in-noise understanding, using EEG to link prestimulus neural activity with listening performance. This design allows for examining the influence of top-down executive function on listening outcomes separately from the peripheral effects of SNHL.
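
For readers unfamiliar with the tip-to-tail ratio mentioned above, the sketch below computes it from a hypothetical psychophysical tuning curve as the difference between the masker level at a low-frequency tail point and at the tuning-curve tip. All values and the tail-frequency choice are illustrative.

```python
# Minimal sketch (hypothetical tuning-curve values): the tip-to-tail ratio (TTR)
# used as an index of distorted tonotopy.
import numpy as np

def tip_to_tail_ratio(masker_freqs_hz, masker_levels_db, tail_freq_hz=500.0):
    """TTR in dB from a psychophysical tuning curve (masker level vs. frequency)."""
    freqs = np.asarray(masker_freqs_hz, dtype=float)
    levels = np.asarray(masker_levels_db, dtype=float)
    tip_level = levels.min()                                  # lowest masker level defines the tip
    tail_level = levels[np.argmin(np.abs(freqs - tail_freq_hz))]
    return tail_level - tip_level

freqs = [500, 1000, 2000, 3000, 4000, 5000]   # masker frequencies (Hz), probe near 4 kHz
levels = [85, 70, 55, 40, 35, 60]             # toy masker levels at threshold (dB SPL)
print(f"TTR = {tip_to_tail_ratio(freqs, levels):.1f} dB")
```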

 



April 18, 2024

Alexander L. Francis, Professor, SLHS

Does hearing impairment affect gait under realistic conditions?

Hearing loss is associated with increased risk of falling, but little is known about how hearing impairment might affect mobility. Here we report preliminary results from an ongoing experimental study investigating dynamic properties of gait while walking with and without impaired hearing. Young- and middle-aged adult participants with self-reported normal hearing completed four walking tasks (two indoors, two outdoors), each conducted with and without a simulated hearing loss. In the impaired condition, participants wore an insert earplug in one ear combined with binaural, circumaural noise-damping headphones. We quantified gait parameters using data from inertial measurement units (IMUs) affixed to participants’ ankles and waist, allowing participants to walk farther than in typical gait assessments. Parameters included standard gait metrics such as limb acceleration as well as a novel spatiotemporal index quantifying variability in step patterns. Analysis will focus on how temporal-spatial gait parameters and variability differ between indoor and outdoor walking with and without hearing impairment. This work is part of a collaboration with Jeff Haddad and Satyajit Ambike in Health and Kinesiology, and Sarah Burgin and Troy Wesson at IUSM.
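
As an example of the kind of variability measure that can be derived from IMU data (not the novel index described above), the sketch below computes stride-time variability from hypothetical heel-strike event times.

```python
# Minimal sketch (hypothetical event times): stride-time variability from
# heel-strike events, a common IMU-derived gait variability metric.
import numpy as np

def stride_time_cv(heel_strike_times_s):
    """Coefficient of variation (%) of stride times from one foot's heel strikes."""
    strides = np.diff(np.asarray(heel_strike_times_s, dtype=float))
    return 100.0 * strides.std(ddof=1) / strides.mean()

rng = np.random.default_rng(0)
events = np.cumsum(rng.normal(1.1, 0.03, 60))   # ~1.1-s strides with small jitter
print(f"Stride-time CV = {stride_time_cv(events):.1f}%")
```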

 


April 25, 2024

Elle O'Brien, PhD, University of Michigan

How will generative AI change scientific software?

Across biomedical research areas, scientists make and use code to facilitate data collection, generate data through computational simulations, investigate patterns in data through descriptive statistics and visualizations, and run statistical analyses to connect data to hypotheses. This talk will give an overview of an ongoing research program about the present and future of scientific software with an emphasis on applications to biomedical research. First, we will review findings from a recently-published study of how scientists adopt new software tools for data-intensive research, which sheds light on the pressures currently facing scientists who wish to expand their data analysis toolkit. Second, we will discuss new research that will look at which researchers are using generative AI tools for programming, how researchers reason about the validity of generated code, and how scientific practices and teamwork may be changing as a result.