
Cross-Modal Re-Organization in Adults with Early Stage Hearing Loss

  • Julia Campbell,

    Affiliation University of Colorado at Boulder, Department of Speech, Language and Hearing Sciences, Boulder, Colorado, United States of America

  • Anu Sharma

    anu.sharma@colorado.edu

    Affiliations University of Colorado at Boulder, Department of Speech, Language and Hearing Sciences, Boulder, Colorado, United States of America, University of Colorado at Boulder, Institute of Cognitive Science, Boulder, Colorado, United States of America

Abstract

Cortical cross-modal re-organization, or recruitment of auditory cortical areas for visual processing, has been well-documented in deafness. However, the degree of sensory deprivation necessary to induce such cortical plasticity remains unclear. We recorded visual evoked potentials (VEP) using high-density electroencephalography in nine persons with adult-onset mild-moderate hearing loss and eight normal hearing control subjects. Behavioral auditory performance was quantified using a clinical measure of speech perception-in-noise. Relative to normal hearing controls, adults with hearing loss showed significantly larger P1, N1, and P2 VEP amplitudes, decreased N1 latency, and a novel positive component (P2’) following the P2 VEP. Current source density reconstruction of VEPs revealed a shift toward ventral stream processing including activation of auditory temporal cortex in hearing-impaired adults. The hearing loss group showed worse than normal speech perception performance in noise, which was strongly correlated with a decrease in the N1 VEP latency. Overall, our findings provide the first evidence that visual cross-modal re-organization not only begins in the early stages of hearing impairment, but may also be an important factor in determining behavioral outcomes for listeners with hearing loss, a finding which demands further investigation.

Introduction

A basic tenet of neuroplasticity is that central pathways will re-organize following long-term sensory deprivation. There is ample evidence from animal and human studies of cross-modal re-organization of the cortex that occurs in both blindness [1], [2], [3], [4], [5], [6], [7], and congenital deafness [8], [9], [10], [11], [12], [13], [14], [15], [16], [17], [18], [19], [20], [21], [22], [23]. For example, congenitally deaf white cats show enhanced motion processing and localization in the visual periphery, and recruit higher-order auditory cortex for improved performance in these tasks [19], [24]. Similarly, congenitally and post-lingually deaf humans (with and without cochlear implants) demonstrate activation of auditory cortical areas during processing of visual motion and complex visual pattern changes, which is not seen for normal hearing control subjects [15], [17], [25], [26], [27]. Although cross-modal recruitment serves to enhance behavioral performance for the recruiting modality [19], [28], it has been linked to a decrease in performance of the recruited modality. In deaf adults fitted with cochlear implants, cross-modal recruitment (measured by event-related potentials) has been correlated with decreased performance on speech perception tasks [25], [26], [29]. All of the studies mentioned above have been conducted on individuals in the most advanced stage of hearing loss (i.e., profound deafness). However, most post-lingually deafened adults show a gradual decline in hearing, which typically progresses through the mild, moderate, severe and profound stages of hearing loss [30], [31]. Thus, the degree of sensory deprivation necessary to induce cross-modal cortical plasticity remains unclear. Given the potential impact on clinical outcomes, it would be useful to determine whether cross-modal cortical changes begin during early stages of hearing decline or whether these changes are limited to the near-total sensory deprivation that accompanies deafness. In this study, we examined visual evoked potentials (VEP) using high-density electroencephalography and auditory behavioral outcome using a clinical test of speech perception in noise in persons with adult-onset mild-moderate hearing loss and normal hearing control subjects.

Materials and Methods

Participants and Ethics Statement

Seventeen adults between the ages of 37 and 68 years participated in this study. The study was approved by the University of Colorado at Boulder Institutional Review Board, and all participants provided written consent. Subjects were recruited via advertisements in the community, and hearing was tested for all subjects using standard clinical audiometric procedures prior to speech-in-noise and EEG measurements. Eight of the subjects (mean age ± standard deviation: 50.5 ± 6.2 years; range: 37.4–57 years) showed clinically normal hearing thresholds (i.e., below 25 dB Hearing Level) for frequencies ranging from 250 Hz to 8000 Hz. Nine of the subjects demonstrated hearing loss (mean age ± standard deviation: 56.9 ± 8.9 years; range: 38.4–68.2 years). On average, this group showed normal hearing from 250 Hz through 1000 Hz and a mild-to-moderate sensorineural hearing loss bilaterally from 2000 Hz to 8000 Hz. Average audiograms for the two groups are shown in Figure 1. None of the participants with hearing loss were receiving clinical intervention at the time of enrollment, although many suspected a possible hearing loss prior to diagnosis. Subjects who were diagnosed with hearing loss through the study received counseling from a state-licensed clinical audiologist (first author) and referrals to audiology clinics for possible consideration of amplification. EEG testing sessions took place on separate days for those diagnosed with hearing loss, unless otherwise requested. The normal hearing (NH) and hearing loss (HL) groups did not differ in age (t(15) = −1.69, p > 0.05). All participants reported no issues with visual acuity and no neurological impairment.

Figure 1. Mean audiometric subject thresholds.

Auditory thresholds are shown for right and left ears for the standard audiometric frequencies from 250 Hz to 8000 Hz. Thresholds for the normal hearing (NH) group are depicted in blue; the hearing loss (HL) group in red. The positive-going blue bars illustrate the standard deviation for the average threshold at the designated frequency for the NH group, and negative-going red bars illustrate the standard deviation for the HL group. The solid black line illustrates the criterion for normal hearing, at 25 dB HL.

https://doi.org/10.1371/journal.pone.0090594.g001

Auditory Behavioral Testing: Test of Speech Perception-in-noise

Speech perception in background noise was measured using the QuickSIN™ test [32], a clinical assessment of auditory acuity in background noise. Participants faced a speaker at 0° azimuth and were instructed to repeat two recorded sentence lists (six sentences each) presented at 65 dB Hearing Level (HL). Background noise was varied to determine the signal-to-noise ratio (SNR) required by the participant to accurately repeat 50% of the sentences. The SNR values began at 25 dB and decreased in 5 dB steps to 0 dB. The SNR scores from the two lists were averaged for each participant. Overall, the lower the SNR score, the better the performance on the test.
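
The scoring sheets themselves are not reproduced here, but the threshold logic can be sketched in a few lines of Python. The sketch below is illustrative only: the per-list scores are hypothetical, and the linear interpolation used to estimate the 50% point is a generic stand-in rather than the published QuickSIN scoring formula.

```python
import numpy as np

# Illustrative sketch of deriving an SNR-50 threshold from QuickSIN-style
# data: six sentences per list, one per SNR, from 25 dB down to 0 dB in
# 5 dB steps. The interpolation below is a generic threshold estimate,
# not the published QuickSIN scoring rule.
SNRS_DB = np.array([25, 20, 15, 10, 5, 0])

def snr50(proportion_correct):
    """Linearly interpolate the SNR at which 50% of speech is repeated."""
    # np.interp needs increasing x, so reverse both arrays.
    return float(np.interp(0.5, proportion_correct[::-1], SNRS_DB[::-1]))

# Hypothetical fractions of key words repeated correctly for two lists.
list_a = np.array([1.0, 1.0, 0.8, 0.6, 0.4, 0.0])
list_b = np.array([1.0, 0.9, 0.8, 0.5, 0.2, 0.0])

# Final score: the average threshold across the two lists (lower = better).
score = np.mean([snr50(list_a), snr50(list_b)])
print(f"SNR-50 ~ {score:.1f} dB")
```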

EEG Procedures

Visual stimuli.

Participants were shown a high-contrast sinusoidal concentric grating that morphs into a radially modulated grating, or circle-star pattern [25], [33], [34], on a 26-inch flat-screen LCD television at a viewing distance of approximately 42 inches. The circle and star figures were each presented 150 times. The star figure was presented on the screen for 600 ms, immediately followed by the circle figure, also lasting 600 ms. This presentation method provided the percept of apparent motion and shape change to the viewer. A total of 300 stimulus presentations (sweeps) were delivered, for a testing time of three minutes. The VEP was time-locked to the onset of each individual star and circle presentation. Participants were instructed to fixate a black dot at the center of the star/circle and not to shift their gaze during the three minutes.
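
The E-Prime presentation scripts are not part of the paper; purely as a point of reference, the timing logic above can be sketched in Python using PsychoPy as a stand-in. The image files, window settings, and trigger placement below are assumptions, not the study's actual materials.

```python
from psychopy import visual, core

# Sketch of the stimulus timing only; the study itself used E-Prime 2.0
# with Net Station. Image assets are placeholders.
win = visual.Window(fullscr=True, color='grey')
star = visual.ImageStim(win, image='star_grating.png')      # placeholder
circle = visual.ImageStim(win, image='circle_grating.png')  # placeholder

def show(stim, duration_s=0.6):
    """Draw one grating for 600 ms; the VEP is time-locked to its onset."""
    stim.draw()
    win.flip()           # stimulus onset -- an EEG trigger would go here
    core.wait(duration_s)

# 150 star/circle alternations = 300 sweeps = ~3 minutes of testing.
for _ in range(150):
    show(star)
    show(circle)

win.close()
```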

EEG Recording and Analyses

Participants were fitted with a 128-channel EEG electrode recording net (Electrical Geodesics, Inc.) and seated in a comfortable reclining chair in an electro-magnetically shielded sound booth. All stimuli were presented via E-Prime® 2.0, stimulus software compatible with Net Station 4 (Electrical Geodesics, Inc.). The sampling rate for the EEG recordings was 1 kHz, with a band-pass filter set at 0.1–200 Hz.

Data were band-pass filtered offline at 1–30 Hz and segmented according to the EEG activity surrounding the stimulus presentation (epochs), with 100 ms pre-stimulus and 495 ms post-stimulus time. EEG recordings were corrected to the pre-stimulus baseline, and eye-blink artifact recorded at designated eye channels was removed if greater than ±100 µV, unless adjusted for individual subjects. Bad channels were removed from the recording and replaced with interpolated data from the remaining channels via a spline interpolation algorithm. Remaining data were averaged and re-referenced using a common average reference. Individual waveform averages were averaged together for each of the two groups (i.e., the normal hearing and hearing loss groups) to compute a grand-averaged waveform. Amplitudes and latencies for individual participants were recorded for all three obligatory visual evoked potential (VEP) peaks (i.e., P1, N1 and P2). The P1 component was identified as the first positive-going peak, occurring approximately within a latency window of 90 to 130 ms; the N1 component as the first negative-going peak, occurring approximately between 135 and 200 ms; and the P2 component as the second positive-going peak, occurring approximately between 200 and 300 ms. If a peak component occurred outside of the described latency ranges, it was still marked and included according to the order of appearance (e.g., a first large positive component at 80 ms was marked as P1). P1 amplitudes were defined as the onset-to-peak value, N1 amplitudes as the peak of the N1 component to the peak of the P2 component, and P2 amplitudes as the peak of the P2 component to the offset value. Latencies were taken at the highest amplitude of the peak.
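
These steps map onto standard EEG software. The sketch below re-expresses them in Python with MNE-Python standing in for Net Station; the file name, event code, and exact parameter choices are assumptions, and Net Station's internal algorithms may differ in detail.

```python
import mne

# Sketch of the offline pipeline, with MNE-Python as a stand-in for
# Net Station 4. File name and event code are placeholders.
raw = mne.io.read_raw_egi('subject01.raw', preload=True)
raw.filter(l_freq=1.0, h_freq=30.0)          # 1-30 Hz band-pass

events = mne.find_events(raw)
epochs = mne.Epochs(
    raw, events, event_id={'stim_onset': 1},
    tmin=-0.1, tmax=0.495,                   # 100 ms pre, 495 ms post
    baseline=(None, 0),                      # pre-stimulus baseline
    reject=dict(eeg=100e-6),                 # drop epochs whose EEG
    preload=True,                            # peak-to-peak exceeds 100 uV
)
epochs.interpolate_bads()                    # spline-interpolate bad channels
epochs.set_eeg_reference('average')          # common average reference

evoked = epochs.average()

# Peak picking within the approximate VEP latency windows.
for name, tmin, tmax, mode in [('P1', 0.090, 0.130, 'pos'),
                               ('N1', 0.135, 0.200, 'neg'),
                               ('P2', 0.200, 0.300, 'pos')]:
    ch, lat, amp = evoked.get_peak(tmin=tmin, tmax=tmax, mode=mode,
                                   return_amplitude=True)
    print(f"{name}: {lat * 1000:.0f} ms, {amp * 1e6:.2f} uV at {ch}")
```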

First, we created a two-dimensional voltage map using Net Station 4 (Electrical Geodesics, Inc.), which allowed us to examine regions of interest (ROI) around the occipital midline [25], [29], focusing on the greatest group differences for visual stimuli. Using planned comparisons with the Bonferroni correction, electrodes within the ROI were then chosen for statistical analysis according to the largest mean group differences for the amplitude and latency of each VEP component.

Source Localization Analysis (Current Density Reconstructions)

EEG data for individual participants were exported from Net Station and imported into EEGLAB [35] using MatLab® (The MathWorks®, Inc., 2010). The data were corrected to the baseline of a 100 ms pre-stimulus interval, and sweeps greater than ±100 µV were rejected as artifacts. The data were down-sampled to 250 Hz to reduce processing time, altering the post-stimulus time to 492 ms. The first step in creating source models was to prune the concatenated EEG sweeps, or trials, for each subject through independent component analysis (ICA) [36], [37]. This statistical procedure allows for observation of the spatially fixed and temporally independent components that underlie the evoked potential [38], and is useful for precise source modeling in EEG, including for deeper generators [37], [39], [40], [41]. EEGLAB was chosen specifically for preliminary source localization analysis in order to utilize the ICA algorithm that provides for optimal cortical source localization, and to perform ICA on concatenated EEG sweeps [36], [37], [42]. Once the independent components that accounted for the greatest percentage of variance in the evoked potential were identified in the designated timeframe for a peak component of interest (e.g., P1, N1, P2), the remaining independent components were regarded as artifact/noise and discarded. The pruned potential waveforms for each subject were then grand-averaged for each group (NH and HL) and exported into CURRY® Scan 7 Neuroimaging Suite (Compumedics Neuroscan™) for source modeling. In CURRY®, an additional ICA was run on the VEP mean global field power (MGFP) (incorporating all 128 channels of EEG data), with only components showing a signal-to-noise ratio (SNR) of 2.0 or greater accepted. The third VEP component in the HL group (P2’) was pruned and averaged as an individual component, as it was present only in a subset of this group.
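
A minimal sketch of the ICA pruning step is shown below, with MNE-Python's infomax ICA standing in for EEGLAB's runica. The file name, the variance criterion, and the component indices marked for exclusion are hypothetical.

```python
import mne
from mne.preprocessing import ICA

# Sketch of the ICA pruning step; MNE-Python's infomax ICA stands in for
# EEGLAB's runica. File name and thresholds are assumptions.
epochs = mne.read_epochs('subject01-epo.fif')
epochs.resample(250)                      # down-sample to 250 Hz

ica = ICA(n_components=0.99, method='infomax', random_state=0)
ica.fit(epochs)                           # decompose the concatenated sweeps

# Inspect how much of the signal the retained components explain; in the
# study, components dominating the evoked response in the window of the
# peak of interest were kept and the rest treated as artifact/noise.
var = ica.get_explained_variance_ratio(epochs)
print('explained variance (EEG):', var['eeg'])

ica.exclude = [5, 11, 14]                 # hypothetical noise components
epochs_pruned = ica.apply(epochs.copy())
evoked_pruned = epochs_pruned.average()   # pruned waveform for grand average
```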

Peak components of the VEP MGFP waveforms were selected separately for current density reconstruction (CDR) via sLORETA, with no a priori restrictions placed on the model. The head model was standardized using the boundary element method (BEM) [43]. sLORETA, or standardized low-resolution brain electromagnetic tomography, is a statistical method that estimates the CDR [44], [45]. The CDR is represented as a graded color-scale image superimposed on an average MRI of 100 individuals. Sagittal MRI slices were selected to illustrate the greatest differences in cortical activation between the groups. Montreal Neurological Institute (MNI) coordinates (in millimeters) give the three-dimensional physical location of each slice.
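
CURRY's sLORETA implementation is proprietary, but a comparable distributed estimate can be sketched with MNE-Python's minimum-norm tools, which also offer an sLORETA solver and a BEM head model. The fsaverage template anatomy, the ad-hoc noise covariance, and the regularization value below are assumptions, not the settings used in the study.

```python
import os.path as op
import mne
from mne.datasets import fetch_fsaverage
from mne.minimum_norm import make_inverse_operator, apply_inverse

# Sketch of an sLORETA current density reconstruction with MNE-Python
# standing in for CURRY Scan 7. File names are placeholders.
evoked = mne.read_evokeds('group_HL-ave.fif', condition=0)
evoked.set_eeg_reference('average', projection=True)  # needed for EEG inverse

fs_dir = fetch_fsaverage()                 # template anatomy + BEM surfaces
src = op.join(fs_dir, 'bem', 'fsaverage-ico-5-src.fif')
bem = op.join(fs_dir, 'bem', 'fsaverage-5120-5120-5120-bem-sol.fif')

fwd = mne.make_forward_solution(evoked.info, trans='fsaverage',
                                src=src, bem=bem, eeg=True, meg=False)
noise_cov = mne.make_ad_hoc_cov(evoked.info)   # placeholder covariance

inv = make_inverse_operator(evoked.info, fwd, noise_cov)
stc = apply_inverse(evoked, inv, lambda2=1.0 / 9.0, method='sLORETA')

# Inspect the distributed estimate at the approximate N1 peak (~150 ms).
stc_n1 = stc.copy().crop(0.15, 0.15)
print(stc_n1.data.shape)                   # (n_sources, 1) current density
```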

Results

Visual Evoked Potentials

Waveforms for the two groups across the whole head (128 channels) are shown in Figure 2. Three obligatory cortical VEP components elicited in response to the visual stimulus were analyzed: the P1 (occurring at approximately 100 ms), the N1 (occurring at approximately 150 ms), and the P2 (occurring at approximately 230 ms). We compared amplitudes and latencies for these components between the NH and HL groups at electrodes in the occipital ROI. A one-way ANOVA was computed to test for group differences, and planned comparisons of means using the Bonferroni correction were completed to describe significant differences between electrodes (see Figure 2).

Figure 2. Occipital Region of Interest (ROI) cortical visual evoked potentials (VEPs).

A. Peak component P1, N1, and P2 amplitudes are significantly larger for the adult hearing loss (HL) group (red) in comparison to the adult normal hearing (NH) group (blue). Mean group differences are illustrated in corresponding mean bar graphs for each component. One asterisk indicates significance at p < 0.05; two asterisks indicate significance at p < 0.01. B. The N1 component latency is significantly decreased in the HL group as compared to the NH group, also illustrated in the mean bar graph. C. A third positive peak component, denoted P2’, was observed in a subset of the HL group.

https://doi.org/10.1371/journal.pone.0090594.g002

P1 amplitude was larger for the HL group (F(1, 285) = 6.265, p < 0.05). N1 amplitude was larger for the HL group (F(1, 285) = 9.865, p < 0.01). N1 latency was decreased for the HL group (F(1, 285) = 7.684, p < 0.01). Finally, P2 amplitudes were increased for the HL group (F(1, 285) = 8.983, p < 0.01). Overall, this trend of decreased latencies and increased amplitudes of obligatory VEP components in HL listeners is consistent with previous results showing evidence of cross-modal recruitment in deaf subjects [13], [25], [26], [46].
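
For reference, each of these comparisons reduces to a one-way ANOVA on per-electrode values with a Bonferroni-adjusted criterion. A minimal sketch follows, using synthetic amplitude arrays sized to match the reported degrees of freedom; the values themselves are placeholders, not study data.

```python
import numpy as np
from scipy import stats

# Sketch of the per-component group comparison: one-way ANOVA on VEP
# amplitudes with a Bonferroni-adjusted alpha for the planned
# comparisons. Amplitude arrays are synthetic placeholders; their sizes
# (144 + 143 = 287 observations) reproduce the reported F(1, 285).
rng = np.random.default_rng(0)
nh_amp = rng.normal(3.0, 1.0, size=144)   # NH amplitudes (uV), hypothetical
hl_amp = rng.normal(3.8, 1.0, size=143)   # HL amplitudes (uV), hypothetical

f_val, p_val = stats.f_oneway(nh_amp, hl_amp)

n_comparisons = 4                          # e.g., three amplitudes + N1 latency
alpha = 0.05 / n_comparisons               # Bonferroni correction
print(f"F = {f_val:.3f}, p = {p_val:.4f}, significant: {p_val < alpha}")
```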

An unexpected finding was the visual identification of a positive component following the P2 (occurring between approximately 295 and 395 ms) in the HL group (see Figure 2C). We labeled this component P2’. While a possibly similar component is visible in the data of Doucet and colleagues [25], it was not analyzed or discussed in that study. Figure 2 shows the evoked potential waveforms for both groups at the described electrodes, with mean bar graphs illustrating significant differences.

Cortical Source Localization

Cortical source localization, or current density reconstruction (CDR), was performed in order to visualize anatomical regions of possible cross-modal re-organization in the HL group. The sLORETA algorithm provided by CURRY® Scan 7 Neuroimaging Suite was applied to the three VEP peak components (Figure 3). The activations were superimposed on an average MRI (sagittal slice view) and the MNI co-ordinates are shown beneath each slice. The scale of the F distribution, indicating the strength of the activations, is also shown.

Figure 3. Current source density reconstructions for the NH and HL groups.

A. Cortical activation at the P1, N1 and P2 VEP peak components, shown on sagittal magnetic resonance imaging (MRI) slices. The scale of the F distribution is shown in the upper right corner, ranging from red to yellow (where yellow reflects greater strength of activation). The Montreal Neurological Institute (MNI) coordinates are listed beneath each MRI slice. B. A table describing the activated cortical regions for the VEP components for the NH and HL groups, listed in approximate order of highest level of activation.

https://doi.org/10.1371/journal.pone.0090594.g003

As expected, for the NH group, the visual stimuli elicited all three VEP components and activated visual processing regions, including multiple cerebellar areas, which have been shown to respond to visual motion [47], [48] (Figure 3). Higher-order visual cortical regions such as Brodmann areas 18 and 19, and the fusiform region, were also activated. These findings are consistent with previous studies that used stimuli generally similar to ours in NH subjects [34], [47], [48], [49]. The P1 component showed similar cortical and cerebellar activation for both groups. However, for the N1 and P2 components, the HL group showed greater activation along the ventral visual stream in temporal areas traditionally associated with auditory processing, including the superior temporal gyrus (STG), middle temporal gyrus (MTG), and inferior temporal gyrus (ITG). This result is consistent with previous reports of cross-modal activation of temporal areas in deaf subjects [15], [17], [27]. Figure 3 shows the current density reconstructions for the NH and HL groups, together with a table describing the activated regions corresponding to each of the peak components. Interestingly, the P2’ component (seen only in the HL group) showed activation of both cerebellar/occipital regions and temporal areas. This response pattern suggests that an additional processing step may take place within the ventral visual stream in listeners with hearing loss.

Behavioral Performance

Speech perception-in-noise acuity was measured for both groups using the QuickSIN™ clinical test [32]. The results of the QuickSIN™ are reported as a signal-to-noise ratio (SNR) threshold; therefore, a lower score reflects better performance. Mean scores for the NH and HL groups are shown in Figure 4A. A Mann-Whitney U test revealed a significant difference between the two groups (U = 10.5, Z = −2.46, p < 0.05). This difference in auditory performance in background noise between normal hearing listeners and listeners with mild-to-moderate hearing loss is consistent with Killion et al. [32] and Wilson et al. [50].
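
The group comparison itself is a standard non-parametric test. A minimal sketch follows, with synthetic QuickSIN scores in place of the study data.

```python
from scipy import stats

# Sketch of the group comparison on QuickSIN SNR scores. A Mann-Whitney
# U test is used because the scores are not normally distributed; the
# values below are synthetic placeholders, not the study data.
nh_scores = [0.5, 1.0, 1.5, 0.0, 2.0, 1.0, 0.5, 1.5]          # n = 8
hl_scores = [2.5, 4.0, 3.5, 5.0, 2.0, 6.5, 3.0, 4.5, 5.5]     # n = 9

u_stat, p_val = stats.mannwhitneyu(nh_scores, hl_scores,
                                   alternative='two-sided')
print(f"U = {u_stat}, p = {p_val:.3f}")   # lower scores = better performance
```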

Figure 4. Mean scores on the QuickSIN test for the NH and HL groups (A).

Error bars are shown as vertical black lines. The asterisk reflects a significant difference at p < 0.05. B. QuickSIN scores are shown on the vertical axis and N1 VEP component latencies on the horizontal axis. Values are shown as closed circles for the NH group and open circles for the HL group. The Spearman’s rho value (−0.7) and significance level (p = 0.001) are indicated in the upper right hand corner.

https://doi.org/10.1371/journal.pone.0090594.g004

The N1 VEP component has been suggested as a marker of cross-modal re-organization in deafness [13], [26], [46], [51]. Therefore, we correlated the latency of the N1 with QuickSIN™ scores. Because hearing loss consists of a gradual increase in auditory thresholds from 0 dB HL, we included the N1 latency values and QuickSIN scores of all 17 participants in the correlation analysis. Mean QuickSIN™ scores and N1 VEP latencies were tested for normal distribution, and a Spearman’s rank-order correlation was computed due to the non-normal distribution of the data. As seen in Figure 4B, a negative correlation was observed between N1 latency and QuickSIN scores (r = −0.701, p = 0.001). That is, a shorter N1 latency was associated with higher scores (i.e., worse performance) on the QuickSIN™ test. Overall, our results linking differences in speech-in-noise perception between the NH and HL groups to visual evoked potentials are consistent with previous studies in deaf subjects showing cross-modal re-organization in subjects with poor speech perception [25], [26]. N1 latency also showed a significant negative correlation with the pure-tone threshold average (PTA) at 500, 1000, and 2000 Hz, a clinically relevant indicator of audiometric function (right ear, r = −0.446, p < 0.05; left ear, r = −0.540, p < 0.05). That is, as the degree of hearing loss increased, there was a corresponding decrease in N1 latency.
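
The correlation analysis follows the same pattern: check normality, then fall back to a rank-order statistic. A sketch with synthetic latency and score values (not the study data) is shown below.

```python
import numpy as np
from scipy import stats

# Sketch of the correlation analysis: test for normality, then compute a
# Spearman rank-order correlation between N1 latency and QuickSIN score
# across all 17 participants. Values below are synthetic placeholders.
rng = np.random.default_rng(1)
n1_latency_ms = rng.uniform(140, 200, size=17)
quicksin_db = 20 - 0.09 * n1_latency_ms + rng.normal(0, 1, size=17)

# Normality checks motivate the non-parametric correlation.
_, p_lat = stats.shapiro(n1_latency_ms)
_, p_sin = stats.shapiro(quicksin_db)
print(f"normality p-values: latency {p_lat:.3f}, score {p_sin:.3f}")

rho, p_val = stats.spearmanr(n1_latency_ms, quicksin_db)
print(f"Spearman rho = {rho:.3f}, p = {p_val:.4f}")
```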

Discussion

We sought to examine whether cross-modal recruitment is evident in early stages of hearing decline or whether cross-modal plasticity is limited to the near-total sensory deprivation that accompanies profound deafness. We recorded high-density EEG in response to a visual stimulus in adults with normal hearing in the low frequencies and a mild-to-moderate hearing loss in the high frequencies. A group of age-matched normal hearing adults served as the control group. All participants were administered the QuickSIN, a test of speech-in-noise perception which is used to document clinical outcomes in patients with hearing loss.

Relative to normal hearing controls, adults with mild-to-moderate hearing loss showed: (i) increased amplitudes of the P1, N1 and P2 VEP components, (ii) presence of an additional positive VEP component (P2’) occurring after the P2, (iii) decreased latency of the N1 VEP, (iv) cortical re-organization as evidenced by increased activation of auditory temporal areas elicited by visual stimulation, (v) poorer speech perception scores in noise, (vi) a significant negative correlation between the degree of hearing loss and N1 VEP latency, and (vii) a strong negative correlation between speech-in-noise perception and N1 VEP latency. Overall, this pattern of results in our listeners with mild hearing loss is consistent with previous findings in deaf subjects suggesting visual cross-modal recruitment in deafness [13], [25], [26], [51].

Consistent with our findings, significantly increased amplitudes of the N1 and P2 VEPs are well-documented in deafness [13], [46], [51]. More recently, visual changes in form and motion have been shown to elicit larger N1 and P2 responses, respectively, that correlate with poor speech perception in deaf adults with cochlear implants [25], [26]. Interestingly, a smaller-than-normal amplitude of the visual P1 component has recently been reported in cochlear-implanted adults [29]. However, Sandmann and colleagues used a stimulus consisting of four separate checkerboard reversal patterns at varying luminance ratios, a more complex pattern of stimulation than the one used in this study. The checkerboard pattern is therefore likely to tap into a different stage of visual processing than is evident in this study [29]. Typically, decreased latencies and increased amplitudes of evoked potential components are considered reflective of faster processing [52], [53], suggesting that HL subjects recruit additional cortical areas to subserve processing speed and/or efficiency. To this end, we identified the P2’ VEP component (following the P2) only in the HL group, possibly indicating a new or additional generator facilitating visual processing in the HL group.

Current source density reconstructions (CDR) were compared between NH and HL listeners. As expected, for the visual stimulus, NH listeners showed cerebellar/occipital activation for the P1, N1 and P2 VEP components (Figure 3). Responsive regions included Brodmann areas 18 and 19, which comprise higher-order visual cortex. Visual stimuli comparable in both shape and appearance of motion to the one used in the present study have been shown to activate similar cortical regions in VEP and fMRI imaging studies [34], [47], [54].

The HL group showed occipital/cerebellar activation comparable to the NH group, as was evident in the CDR for the P1 response. However, higher-order processing as reflected by the CDR for the N1 and P2 components showed clear evidence of cortical re-organization. Cortical activation for these components showed an emphasis in ventral stream processing within temporal cortex, including temporal gyri (ITG, MTG and STG), which are typically associated with auditory cortical processing [55], [56]. The P2’ component, identified in only the HL group, showed underlying activation of both cerebellar/occipital areas and temporal areas, suggestive of a possible new generator in temporal cortex subserving visual processing.

In a recent study, Campbell and Sharma [57] examined cortical responses to auditory stimulation in adults with mild-moderate hearing loss. The authors reported a change in cortical resource allocation, including decreased temporal activation in STG and increased frontal activation, in response to passive auditory stimulation in mild-moderate hearing loss. Taken together with the present results, this suggests that decreased temporal activation to sound in mild-moderate hearing loss may be coincident with the increased visual activation of temporal areas observed in this study.

Overall, the CDR findings are strongly suggestive of cortical re-organization facilitating cross-modal recruitment for visual processing in adults with mild-moderate hearing loss. The shift of activation to temporal cortex represents activation of the ventral visual stream, which is typically responsive to visual object form or shape changes and is located within temporal cortex in proximity to auditory areas. The ventral stream has been implicated in the processing of facial and mouth movements [58], [59]. Thus, our results may be suggestive of compensatory plasticity as HL listeners begin to rely on facial information as a strategy to compensate for their hearing impairment [60], [61], [62], [63], [64]. Because the ventral stream is largely responsible for processing object and face information, a heavier processing load may be imposed on this stream when listeners with hearing loss begin to pay more attention to lip and facial cues. Indeed, visual attention is a modulatory influence for compensatory plasticity, and congruent visual input has been shown to enhance auditory speech perception performance in cochlear-implanted adults [18], [65]. A recent study by Strelnikov and colleagues suggests that increased intra-modal compensatory activity in occipital cortex predicts better outcomes for post-lingually deaf adults after cochlear implantation, presumably due to the synergy of the visual system in deciphering auditory information, ultimately increasing the capacity for auditory discrimination when sound is re-introduced via implantation [65]. However, similar to the present study, increased cross-modal re-organization in the superior temporal sulcus (STS) appears to predict poor outcomes for both pre- and post-lingually deaf implant users [65], [66], [67].

Hearing loss is most consistently associated with poor outcomes in recognizing speech in background noise, a skill essential for everyday listening [30], [68], [69], [70]. Consistent with previous research in hearing-impaired listeners, our results show that listeners with even mild-to-moderate hearing loss demonstrate a significant deficit when listening to speech in background noise [71], [72]. A strong negative correlation was observed between speech perception-in-noise performance on the QuickSIN test and N1 VEP latency: a shorter N1 latency was associated with higher scores (i.e., worse performance) on the QuickSIN™ test. While hearing loss is a well-known contributor to decreased speech-in-noise performance [73], [74], our results suggest that cross-modal plasticity may also be an important factor contributing to the decreased auditory performance in background noise of listeners with hearing loss. If cross-modal plasticity implies a greater reliance on lip-reading, it may serve as facilitatory compensation in noisy situations where congruent visual input enhances auditory processing [65]. A decrease in N1 latency was also correlated with higher audiometric thresholds, suggesting a possible increase in cross-modal recruitment as hearing loss worsens. Future studies should systematically describe the extent of cross-modal recruitment as a function of hearing loss ranging from mild to profound, and investigate the possible contribution of cross-modal plasticity to speech perception performance.

Overall, the VEP and behavioral results that we describe are strongly indicative of visual cross-modal re-organization in adults with mild-moderate hearing loss. This is a new finding, as previous reports of cross-modal plasticity have been confined to adults with congenital or pre-lingual deafness and/or cochlear-implanted adults [13], [25], [27], [51], [65], [75], [76]. The mechanisms of cross-modal plasticity in both deafness and moderate hearing loss have been explored in animal studies. Recent studies suggest that only those cortical areas involved in the sharing of multi-modal information are recruited, and that these areas still maintain the functional specificity of the original sensory modality [19], [24]. That is, higher-order, multi-modal areas are more susceptible to recruitment when a shared modality is no longer receiving appropriate input. Furthermore, this cross-modal plasticity has been found to take place as a result of moderate hearing loss, not just profound sensory deprivation [77]. In humans, audition and vision share object recognition functions in the ventral stream [78], [79], and are thus primed for compensatory plasticity in hearing loss. When the listening environment becomes challenging, as in background noise, greater attention to visual objects, in the form of processing of faces and lips, may facilitate auditory object recognition. Along these same lines, activation of the ventral stream in adults who have experienced late-onset blindness has been correlated with poor performance in an auditory spatial task [80]. Similarly, resting-state studies of pre-lingually deaf children and post-lingually deaf children and adults with cochlear implants showed ventral activation in patients who had poor speech perception outcomes [66], [67]. Thus, it appears that compensatory activation of the cortical auditory-visual ventral stream, in either modality, may be associated with poorer auditory performance.

Summary and Conclusion

Our study provides new evidence of cross-modal cortical re-organization in adult-onset mild-moderate hearing loss. Increased amplitudes of P1, N1 and P2 VEP components, decreased N1 latency, a novel P2’ component and current source density reconstructions reflecting a ventral shift in activation were observed for adults with mild to moderate hearing loss relative to normal hearing controls. Furthermore, we observed a strong negative correlation between cross-modal re-organization (as reflected by decreased N1 latency) and speech perception in noise. Future studies are needed to outline the detailed trajectory of cross-modal changes as hearing declines from a mild hearing loss to deafness. Prospective longitudinal studies will provide important information concerning the timeline of cross-modal re-organization according to severity of hearing loss, including a quantification of the degree or severity of re-organization. In addition, such studies may indicate the effect of clinical interventions, such as amplification or cochlear implantation, in reversing cross-modal re-organization.

Acknowledgments

We would like to acknowledge Teresa Mitchell, Ph.D, for her comments on a draft of the manuscript.

Author Contributions

Conceived and designed the experiments: JC AS. Performed the experiments: JC. Analyzed the data: JC AS. Contributed reagents/materials/analysis tools: JC AS. Wrote the paper: JC AS.

References

  1. Kujala T, Alho K, Naatanen R (2000) Cross-modal reorganization of human cortical functions. Trends Neurosci 23(3): 115–20.
  2. Roder B, Stock O, Bien S, Neville H, Rosler F (2002) Speech processing activates visual cortex in congenitally blind humans. Eur J Neurosci 16(5): 930–6.
  3. Voss P, Gougoux FDR, Lassonde M, Zatorre RJ, Lepore F (2006) A positron emission tomography study during auditory localization by late-onset blind individuals. NeuroReport 17(4): 383–8.
  4. Collignon O, Lassonde M, Lepore F, Bastien D, Veraart C (2007) Functional cerebral reorganization for auditory spatial processing and auditory substitution of vision in early blind subjects. Cereb Cortex 17(2): 457–65.
  5. Collignon O, Vandewalle G, Voss P, Albouy G, Charbonneau G, et al. (2011) Functional specialization for auditory–spatial processing in the occipital cortex of congenitally blind humans. Proc Natl Acad Sci U S A 108(11): 4435–4440.
  6. Watkins KE, Shakespeare TJ, O’Donoghue MC, Alexander I, Ragge N, et al. (2013) Early auditory processing in area V5/MT+ of the congenitally blind brain. J Neurosci 33(46): 18242–18246.
  7. Kupers R, Ptito M (2013) Compensatory plasticity and cross-modal reorganization following early visual deprivation. Neurosci Biobehav Rev S0149-7634(13)00191-7.
  8. Sharma A, Dorman MF, Kral A (2005) The influence of a sensitive period on central auditory development in children with unilateral and bilateral cochlear implants. Hear Res 203(1–2): 134–143.
  9. Sharma A, Gilley PM, Dorman MF, Baldwin R (2007) Deprivation-induced cortical reorganization in children with cochlear implants. Int J Audiol 46(9): 494–499.
  10. Sharma A, Nash AA, Dorman M (2009) Cortical development, plasticity and re-organization in children with cochlear implants. J Commun Disord 42(4): 272–279.
  11. Sharma A, Dorman M (2011) Central auditory system development and plasticity after cochlear implantation. In: Zeng F-G, et al., editors. Auditory Prostheses: New Horizons. Springer Handbook of Auditory Research 39. New York: Springer. pp. 233–255.
  12. Sharma A, Mitchell T (2013) The impact of deafness on the human central auditory and visual systems. In: Kral A, et al., editors. Deafness. Springer Handbook of Auditory Research 47. New York: Springer. pp. 189–215.
  13. Neville HJ, Lawson D (1987) Attention to central and peripheral visual space in a movement detection task: an event-related potential and behavioral study. II. Congenitally deaf adults. Brain Res 405(2): 268–83.
  14. Bavelier D, Tomann A, Hutton C, Mitchell T, Corina D, et al. (2000) Visual attention to the periphery is enhanced in congenitally deaf individuals. J Neurosci 20(17): RC93.
  15. Finney EM, Fine I, Dobkins KR (2001) Visual stimuli activate auditory cortex in the deaf. Nat Neurosci 4(12): 1171–3.
  16. Neville HJ, Bavelier D (2001) Effects of auditory and visual deprivation on human brain development. Clin Neurosci Res 1(4): 248–57.
  17. Finney EM, Clementz BA, Hickok G, Dobkins KR (2003) Visual stimuli activate auditory cortex in deaf subjects: evidence from MEG. NeuroReport 14(11): 1425–7.
  18. Bavelier D, Hirshorn EA (2010) I see where you’re hearing: how cross-modal plasticity may exploit homologous brain structures. Nat Neurosci 13(11): 1309–11.
  19. Lomber SG, Meredith MA, Kral A (2010) Cross-modal plasticity in specific auditory cortices underlies visual compensations in the deaf. Nat Neurosci 13(11): 1421–7.
  20. Kral A, Sharma A (2012) Developmental neuroplasticity after cochlear implantation. Trends Neurosci 35(2): 111–22.
  21. Meredith MA, Allman BL (2012) Early hearing-impairment results in crossmodal reorganization of ferret core auditory cortex. Neural Plast 2012: 601591.
  22. Gilley PM, Sharma A, Dorman MF (2008) Cortical reorganization in children with cochlear implants. Brain Res 1239: 56–65.
  23. Kral A (2013) Auditory critical periods: a review from system’s perspective. Neuroscience 247: 117–133.
  24. Meredith MA, Kryklywy J, McMillan AJ, Malhotra S, Lum-Tai R, et al. (2011) Crossmodal reorganization in the early deaf switches sensory, but not behavioral roles of auditory cortex. Proc Natl Acad Sci U S A 108(21): 8856–61.
  25. Doucet ME, Bergeron F, Lassonde M, Ferron P, Lepore F (2006) Cross-modal reorganization and speech perception in cochlear implant users. Brain 129(Pt 12): 3376–83.
  26. Buckley KA, Tobey EA (2011) Cross-modal plasticity and speech perception in pre- and postlingually deaf cochlear implant users. Ear Hear 32(1): 2–15.
  27. Vachon P, Voss P, Lassonde M, Leroux J-M, Mensour B, et al. (2013) Reorganization of the auditory, visual and multimodal areas in early deaf individuals. Neuroscience 245: 50–60.
  28. Bosworth RG, Dobkins KR (2002) The effects of spatial attention on motion processing in deaf signers, hearing signers, and hearing nonsigners. Brain Cogn 49(1): 152–69.
  29. Sandmann P, Dillier N, Eichele T, Meyer M, Kegel A, et al. (2012) Visual activation of auditory cortex reflects maladaptive plasticity in cochlear implant users. Brain 135(2): 555–68.
  30. Lazard DS, Vincent C, Venail F, Van de Heyning P, Truy E, et al. (2012) Pre-, per- and postoperative factors affecting performance of postlinguistically deaf adults using cochlear implants: a new conceptual model over time. PLoS ONE 7(11): e48739.
  31. Lazard DS, Innes-Brown H, Barone P (2014) Adaptation of the communicative brain to post-lingual deafness. Evidence from functional imaging. Hear Res 307: 136–143.
  32. Killion MC, Niquette PA, Gudmundsen GI, Revit LJ, Banerjee S (2004) Development of a quick speech-in-noise test for measuring signal-to-noise ratio loss in normal-hearing and hearing-impaired listeners. J Acoust Soc Am 116(4 Pt 1): 2395–405.
  33. Doucet ME, Gosselin F, Lassonde M, Guillemot JP, Lepore F (2005) Development of visual evoked potentials to radially modulated concentric patterns. NeuroReport 16(16): 1753–6.
  34. Bertrand J-A, Lassonde M, Robert M, Nguyen DK, Bertone A, et al. (2012) An intracranial event-related potential study on transformational apparent motion. Does its neural processing differ from real motion? Exp Brain Res 216(1): 145–53.
  35. Delorme A, Makeig S (2004) EEGLAB: an open source toolbox for analysis of single-trial EEG dynamics including independent component analysis. J Neurosci Methods 134(1): 9–21.
  36. Debener S, Ullsperger M, Siegel M, Engel AK (2006) Single-trial EEG–fMRI reveals the dynamics of cognitive function. Trends Cogn Sci 10(12): 558–63.
  37. Debener S, Hine J, Bleeck S, Eyles J (2008) Source localization of auditory evoked potentials after cochlear implantation. Psychophysiology 45(1): 20–4.
  38. Makeig S, Jung TP, Bell AJ (1997) Blind separation of auditory event-related brain responses into independent components. Proc Natl Acad Sci U S A 94(20): 10979–10984.
  39. Makeig S, Delorme A, Westerfield M, Jung TP, Townsend J, et al. (2004) Electroencephalographic brain dynamics following manually responded visual targets. PLoS Biol 2(6): e176.
  40. Hine J, Debener S (2007) Late auditory evoked potentials asymmetry revisited. Clin Neurophysiol 118(6): 1274–85.
  41. Joos K, Vanneste S, De Ridder D (2012) Disentangling depression and distress networks in the tinnitus brain. PLoS ONE 7(7): e40544.
  42. Delorme A, Palmer J, Onton J, Oostenveld R, Makeig S (2012) Independent EEG sources are dipolar. PLoS ONE 7(2): e30135.
  43. Fuchs M, Kastner J, Wagner M, Hawes S, Ebersole JS (2002) A standardized boundary element method volume conductor model. Clin Neurophysiol 113(5): 702–12.
  44. Pascual-Marqui RD (2002) Standardized low-resolution brain electromagnetic tomography (sLORETA): technical details. Methods Find Exp Clin Pharmacol 24(Suppl D): 5–12.
  45. Grech R, Cassar T, Muscat J (2008) Review on solving the inverse problem in EEG source analysis. J Neuroeng Rehabil 5: 25.
  46. Armstrong BA, Neville HJ, Hillyard SA, Mitchell TV (2002) Auditory deprivation affects processing of motion, but not color. Brain Res Cogn Brain Res 14(3): 422–34.
  47. Dupont P, Sary G, Peuskens H, Orban GA (2003) Cerebral regions processing first- and higher order motion in an opposed-direction discrimination task. Eur J Neurosci 17(7): 1509–17.
  48. Kellermann T, Regenbogen C, De Vos M, Mößnang C, Finkelmeyer A, et al. (2012) Effective connectivity of the human cerebellum during visual attention. J Neurosci 32(33): 11453–60.
  49. Wilkinson F, James TW, Wilson HR, Gati JS, Menon RS, et al. (2000) An fMRI study of the selective activation of human extrastriate form vision areas by radial and concentric gratings. Curr Biol 10(22): 1455–8.
  50. Wilson RH, McArdle RA, Smith SL (2007) An evaluation of the BKB-SIN, HINT, QuickSIN, and WIN materials on listeners with normal hearing and listeners with hearing loss. J Speech Lang Hear Res 50(4): 844–56.
  51. Neville HJ, Schmidt A, Kutas M (1983) Altered visual-evoked potentials in congenitally deaf adults. Brain Res 266(1): 127–32.
  52. Tong Y, Melara RD, Rao A (2009) P2 enhancement from auditory discrimination training is associated with improved reaction times. Brain Res 1297: 80–8.
  53. George EM, Coch D (2011) Music training and working memory: an ERP study. Neuropsychologia 49(5): 1083–1094.
  54. Allison T, Puce A, Spencer DD, McCarthy G (1999) Electrophysiological studies of human face perception. I: Potentials generated in occipitotemporal cortex by face and non-face stimuli. Cereb Cortex 9(5): 415–30.
  55. Andics A, McQueen JM, Petersson KM, Gál V, Rudas G, et al. (2010) Neural mechanisms for voice recognition. NeuroImage 52(4): 1528–40.
  56. Pasley BN, David SV, Mesgarani N, Flinker A (2012) Reconstructing speech from human auditory cortex. PLoS Biol 10(1): e1001251.
  57. Campbell J, Sharma A (2013) Compensatory changes in cortical resource allocation in adults with hearing loss. Front Syst Neurosci 7: 71.
  58. Puce A, Allison T, McCarthy G (1999) Electrophysiological studies of human face perception. III: Effects of top-down processing on face-specific potentials. Cereb Cortex 9(5): 445–458.
  59. Nasr S, Tootell RBH (2012) Role of fusiform and anterior temporal cortical areas in facial recognition. NeuroImage 63(3): 1743–53.
  60. McCullough S, Emmorey K, Sereno M (2005) Neural organization for recognition of grammatical and emotional facial expressions in deaf ASL signers and hearing nonsigners. Brain Res Cogn Brain Res 22(2): 193–203.
  61. Sadato N, Okada T, Honda M, Matsuki K, Yoshida M, et al. (2005) Cross modal integration and plastic changes revealed by lip movement, random-dot motion and sign languages in the hearing and deaf. Cereb Cortex 15(8): 1113–22.
  62. Woodhouse L, Hickson L (2009) Review of visual speech perception by hearing and hearing-impaired people: clinical implications. Int J Lang Commun Disord 44(3): 253–70.
  63. Letourneau SM, Mitchell TV (2011) Gaze patterns during identity and emotion judgments in hearing adults and deaf users of American Sign Language. Perception 40: 563–575.
  64. Rouger J, Lagleyre S, Démonet J-F, Fraysse B, Deguine O, et al. (2012) Evolution of crossmodal reorganization of the voice area in cochlear-implanted deaf patients. Hum Brain Mapp 33(8): 1929–40.
  65. Strelnikov K, Rouger J, Demonet JF, Lagleyre S, Fraysse B, et al. (2013) Visual activity predicts auditory recovery from deafness after adult cochlear implantation. Brain 136(12): 3682–3695.
  66. Giraud AL, Lee HJ (2007) Predicting cochlear implant outcome from brain organisation in the deaf. Restor Neurol Neurosci 25(3–4): 381–90.
  67. Lee HJ, Giraud AL, Kang E, Oh SH, Kang H, et al. (2007) Cortical activity at rest predicts cochlear implantation outcome. Cereb Cortex 17(4): 909–17.
  68. Souza PE, Boike KT, Witherell K, Tremblay K (2007) Prediction of speech recognition from audibility in older listeners with hearing loss: effects of age, amplification, and background noise. J Am Acad Audiol 18: 54–65.
  69. Anderson S, Kraus N (2010) Sensory-cognitive interaction in the neural encoding of speech in noise: a review. J Am Acad Audiol 21(9): 575–85.
  70. Gifford RH, Revit LJ (2010) Speech perception for adult cochlear implant recipients in a realistic background noise: effectiveness of preprocessing strategies and external options for improving speech recognition in noise. J Am Acad Audiol 21(7): 441–451; quiz 487–488.
  71. Dubno JR (1984) Effects of age and mild hearing loss on speech recognition in noise. J Acoust Soc Am 76(1): 87–96.
  72. Vermiglio AJ, Soli SD, Freed DJ, Fisher LM (2012) The relationship between high frequency pure-tone hearing loss, hearing in noise test (HINT) thresholds, and the articulation index. J Am Acad Audiol 23(10): 779–88.
  73. Hornsby BW, Johnson EE, Picou E (2011) Effects of degree and configuration of hearing loss on the contribution of high- and low-frequency speech information to bilateral speech understanding. Ear Hear 32(5): 543–55.
  74. Moore BC (1996) Perceptual consequences of cochlear hearing loss and their implications for the design of hearing aids. Ear Hear 17(2): 133–61.
  75. Lazard DS, Lee HJ, Gaebler M, Kell CA, Truy E, et al. (2010) Phonological processing in post-lingual deafness and cochlear implant outcome. NeuroImage 49(4): 3443–51.
  76. Lazard DS, Lee HJ, Truy E, Giraud AL (2013) Bilateral reorganization of posterior temporal cortices in post-lingual deafness and its relation to cochlear implant outcome. Hum Brain Mapp 34(5): 1208–1219.
  77. Meredith MA, Keniston LP, Allman BL (2012) Multisensory dysfunction accompanies crossmodal plasticity following adult hearing impairment. Neuroscience 214: 136–48.
  78. Rauschecker JP (2012) Ventral and dorsal streams in the evolution of speech and language. Front Evol Neurosci 4: 7.
  79. Schirmer A, Fox PM, Grandjean D (2012) On the spatial organization of sound processing in the human temporal lobe: a meta-analysis. NeuroImage 63(1): 137–47.
  80. Voss P, Gougoux F, Zatorre RJ, Lassonde M, Lepore F (2008) Differential occipital responses in early- and late-blind individuals during a sound-source discrimination task. NeuroImage 40: 746–758.