ISSN: 2455-1759
Archives of Otolaryngology and Rhinology
Research Article       Open Access      Peer-Reviewed

Long-latency auditory evoked potentials: Normalization of protocol applied to normal adults

Dayane Domeneghini Didoné, Sheila Jacques Oppitz, Maiara Santos Gonçalves* and Michele Vargas Garcia

Federal University of Santa Maria, Department of Morphology - Building 19, Av. Roraima, 1000, University City - Camobi Neighborhood, Santa Maria-RS, CEP: 97105-900, Brazil
*Corresponding author: Maiara Santos Gonçalves, Federal University of Santa Maria, Department of Morphology - Building 19, Av. Roraima, 1000, University City - Camobi Neighborhood, Santa Maria-RS, CEP: 97105-900, Brazil, E-mail: maiarasg@yahoo.com.br
Received: 17 June, 2019 | Accepted: 23 July, 2019 | Published: 25 July, 2019

Cite this as

Didoné DD, Oppitz SJ, Gonçalves MS, Garcia MV (2019) Long-latency auditory evoked potentials: Normalization of protocol applied to normal adults. Arch Otolaryngol Rhinol 5(3): 069-073. DOI: 10.17352/2455-1759.000101

Introduction

The combination of electrophysiological and behavioral methods has given the clinical audiologist a strong ally in the diagnosis of central auditory alterations.

Auditory evoked potentials are a series of electrical changes that occur from the inner ear to the cerebral cortex in response to sound stimulation [1]. Each occurs at a characteristic time (ms) and amplitude (μV), which allows them to be classified as short-, middle- and long-latency potentials [2].

Specifically, long-latency auditory evoked potentials (LLAEP) reflect the neuroelectric activity of the auditory pathway in the thalamus and auditory cortex, expressing discrimination, integration, and attention skills [3]. These potentials are divided into exogenous and endogenous components.

Exogenous potentials (P1, N1, P2, N2) are influenced by the physical characteristics of the stimulus (intensity, frequency, and duration), are independent of the individual's attention to the acoustic stimuli, and appear with latencies of approximately 50 to 200 ms after stimulation. The presence of these components indicates that the stimulus was encoded in the auditory cortex, whereas their absence suggests that it was not. The endogenous potential (P3) is elicited by internal events related to the individual's cognitive function [4] and is expected to appear at a latency of approximately 300 ms post-stimulus in adults [5].

P3 mainly reflects the activity of the thalamus and cortex, structures involved in discrimination, integration, and attention, so it is used to detect changes in information processing, immediate memory, and decision making. Graphically, it is characterized by a wave of large amplitude, generated by the expectation of perceiving a rare stimulus interspersed among frequent stimuli. This presentation scheme is called the oddball paradigm, in which the individual must discriminate two different stimuli, one presented frequently and the other, called the rare stimulus, introduced at random [6].

There are two main markers of this potential: latency, the time elapsed from stimulus onset to the occurrence of P3, and amplitude, which reflects the size of the neural activation in response to the stimulus; hence the importance of measuring both.

As for latency to toneburst stimuli, Martin, Tremblay and Korczak [5] classify the exogenous components as a 'P1-N1-P2' complex in which, in normal-hearing adults, the negative peak appears approximately 100 ms after the stimulus and is therefore called N100; P2 is named analogously for its latency. P1 has an approximate latency of 50 ms from stimulus onset. Kraus and McGee [7] state that the latency of the N1 component lies between 80 and 250 ms, P2 around 200 ms, N2 between 200 and 400 ms, and P3 between 250 and 350 ms in adults. McPherson [4], in turn, places N1 between 80 and 150 ms, P2 between 145 and 180 ms, and P3 between 220 and 380 ms. For speech stimuli, some authors point out that the values considered normative may differ from those for toneburst [8,9].
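For reference, the latency windows cited above can be summarized in a small data structure. This is a purely illustrative sketch of the published ranges (Kraus and McGee [7]; McPherson [4]); the structure and helper function are not part of the original studies or of the present analysis.

```python
# Illustrative summary of the latency ranges (in ms) cited above.
# Kraus and McGee [7] report P2 only as "about 200 ms", so it is omitted here.
LATENCY_RANGES_MS = {
    "Kraus_McGee": {"N1": (80, 250), "N2": (200, 400), "P3": (250, 350)},
    "McPherson":   {"N1": (80, 150), "P2": (145, 180), "P3": (220, 380)},
}

def within_cited_range(source: str, component: str, latency_ms: float) -> bool:
    """Check whether a measured latency falls inside a cited normative range."""
    low, high = LATENCY_RANGES_MS[source][component]
    return low <= latency_ms <= high

# Example: an N1 latency of 100.6 ms lies within McPherson's 80-150 ms window.
print(within_cited_range("McPherson", "N1", 100.6))  # True
```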

As for amplitude values, the range of normality found in the literature is still very broad, from 1.7 to 20 μV. For Ruth and Lambert [10], McPherson [4], and Kraus and McGee [11], the normal range for P3 amplitude lies between 1.7 μV and 19.0 μV.

Based on the above, and considering the variety of normality protocols, the objective of this study was to establish normative values for cortical potentials based on the results obtained in normal-hearing adults with different stimuli, using Intelligent Hearing Systems equipment. In addition, the relative ease of recognition of the verbal and non-verbal stimuli used to elicit P3 was verified.

Methodology

This is a cross-sectional, individual, observational, and contemporary study. It was approved by the Research Ethics Committee (CEP) of the Federal University of Santa Maria (UFSM) under the protocol 25933514.1.0000.5346.

Sampling was performed by simple randomization from June to September 2013. Adult subjects without otological complaints were selected from the Audiology and Electrophysiology Outpatient Clinic of a university hospital in Rio Grande do Sul. All were informed about the purpose of the research and agreed to participate by signing an Informed Consent Form.

The inclusion criteria were normal hearing according to the criteria defined by Lloyd and Kaplan [12], and a type A tympanogram with acoustic reflexes present, according to Jerger [13].

Individuals with a history of auditory, neurological, or language risk were excluded from the sample; this information was collected by means of a preliminary anamnesis.

All participants included in the sample underwent the same examination protocol, which consisted of audiological evaluation and LLAEP assessment with speech and toneburst stimuli.

Visual inspection of the external auditory meatus was performed with a Welch-Allyn clinical otoscope to rule out any alterations that could influence audiometric thresholds. Next, pure-tone audiometry was performed in an acoustically treated booth with a Madsen Iteras II audiometer.

The tympanometric curve and acoustic reflexes, tested at frequencies from 500 Hz to 4000 Hz bilaterally in the contralateral mode, were obtained with an Interacoustics AT235 middle-ear analyzer.

The long-latency auditory evoked potentials were recorded with two-channel Intelligent Hearing Systems equipment. After cleansing the skin with abrasive paste, electrodes were placed at A1 (left mastoid), A2 (right mastoid), and Cz (vertex), with the ground electrode (Fpz) on the forehead. Electrode impedance was kept at or below 3 kΩ.
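Purely as an illustration, the montage and impedance criterion described above can be expressed as a simple configuration check. The channel labels and the 3 kΩ limit come from the text; the dictionary layout and the helper function are assumptions made for this sketch and do not represent the acquisition software.

```python
# Minimal sketch of the recording setup described above (illustrative only).
MONTAGE = {
    "active": "Cz",              # vertex
    "references": ["A1", "A2"],  # left and right mastoids
    "ground": "Fpz",             # forehead
}
MAX_IMPEDANCE_KOHM = 3.0         # impedance criterion stated in the text

def impedances_ok(measured_kohm: dict) -> bool:
    """Return True if every electrode impedance is at or below 3 kOhm."""
    return all(value <= MAX_IMPEDANCE_KOHM for value in measured_kohm.values())

# Hypothetical pre-acquisition check with made-up impedance readings:
print(impedances_ok({"Cz": 2.1, "A1": 2.8, "A2": 2.5, "Fpz": 1.9}))  # True
```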

For this examination, the patient was instructed to pay attention to the different stimuli (rare stimuli) that appeared randomly within a series of equal stimuli (frequent stimuli). Rare stimuli accounted for 20% of presentations and frequent stimuli for 80%.

Non-verbal (toneburst) stimuli were used at frequencies of 1000 Hz (frequent stimulus) and 4000 Hz (rare stimulus), and verbal stimuli consisted of the syllable /ba/ (frequent) and /ga/, /da/ and /di/ (rare), all presented binaurally at an intensity of 75 dB HL. For each type of stimulus (verbal and non-verbal), 300 stimuli (approximately 240 frequent and 60 rare) were used to obtain the potentials. The tracings were not replicated, since replication could accustom the patient to recognizing the rare stimulus, besides causing fatigue and compromising attention.
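A minimal sketch of how such an oddball sequence could be generated is shown below. The trial count and the 20%/80% proportions come from the protocol above; the function itself is illustrative and does not reproduce the Intelligent Hearing Systems presentation software.

```python
import random

def oddball_sequence(n_trials: int = 300, rare_proportion: float = 0.20,
                     frequent: str = "/ba/", rare: str = "/ga/",
                     seed: int = 1) -> list:
    """Build a randomized oddball sequence: ~80% frequent, ~20% rare stimuli."""
    rng = random.Random(seed)
    n_rare = round(n_trials * rare_proportion)          # 60 rare stimuli
    sequence = [rare] * n_rare + [frequent] * (n_trials - n_rare)
    rng.shuffle(sequence)                               # random order of presentation
    return sequence

seq = oddball_sequence()
print(seq.count("/ba/"), seq.count("/ga/"))  # 240 frequent, 60 rare
```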

Testing started with the pair /ba/ and /ga/, followed by /ba/ and /di/, /ba/ and /da/, and finally the toneburst; all verbal stimuli were presented to the patient beforehand so that he or she could become familiar with them.

From the tracing corresponding to the frequent stimulus, the latencies of the waves P1, N1, P2 and N2 were evaluated; from the tracing corresponding to the rare stimulus, the latency and amplitude values of the P3 wave were evaluated.
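A minimal sketch of how latencies and amplitudes could be read from averaged tracings is given below. In the study the peaks were identified on the recorded tracings themselves; the search windows and the synthetic waveforms here are assumptions for illustration only, loosely based on the ranges cited in the introduction.

```python
import numpy as np

def peak_in_window(time_ms, waveform_uv, window_ms, positive=True):
    """Return (latency_ms, amplitude_uv) of the extreme value inside a window."""
    mask = (time_ms >= window_ms[0]) & (time_ms <= window_ms[1])
    segment = waveform_uv[mask]
    idx = np.argmax(segment) if positive else np.argmin(segment)
    return float(time_ms[mask][idx]), float(segment[idx])

# Synthetic stand-ins for the averaged tracings (1 ms resolution, 0-500 ms):
t = np.arange(0, 500, 1.0)
frequent_avg = np.sin(t / 40.0)   # hypothetical frequent-stimulus average
rare_avg = np.sin(t / 60.0)       # hypothetical rare-stimulus average

p2_lat, _ = peak_in_window(t, frequent_avg, (145, 220), positive=True)   # positive P2 peak
n2_lat, _ = peak_in_window(t, frequent_avg, (200, 400), positive=False)  # negative N2 trough
p3_lat, p3_amp = peak_in_window(t, rare_avg, (250, 380), positive=True)  # P3 latency and amplitude
print(p2_lat, n2_lat, p3_lat, p3_amp)
```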

After the examination, each individual was asked to compare the rare verbal stimuli /di/, /ga/ and /da/ and indicate the most easily recognizable one. The participant was then asked to compare this choice with the rare non-verbal toneburst stimulus (4000 Hz), again indicating which was easier to recognize.

As study variables, the latencies of the components P1, N1, P2, N2 and P3 and the amplitude of P3 for the four types of stimuli were analyzed.

Results

During the study period, 30 individuals were evaluated, 15 (50%) male and 15 (50%) female. The mean age was 23.3 (± 3.5) years.

From the latency values of the components P1, N1, P2, N2 and P3 for the four different stimuli, estimates of the mean and standard deviation were obtained, as shown in Table 1.

The results of the amplitude values of P3 are shown in Table 2.

Table 3 presents the information regarding the percentage of presence and absence of potentials for each type of stimulus.

Discussion

The long-latency potential evaluates the cognitive processes of hearing; that is, it reveals the functional use the individual makes of the stimulus and allows inferences about the auditory abilities of memory, attention, and discrimination. It is currently used, together with behavioral assessments, to detect auditory processing disorders.

Compared with the international literature, research involving LLAEP in Brazil is recent, which makes it necessary to adopt reference criteria.

The standardization of cortical potentials is fundamental so that this evaluation can be used in clinical practice, helping, along with the behavioral evaluations, in the diagnosis of auditory processing alterations.

In our study, the mean P1 latency across right and left ears for the toneburst stimulus was 63.13 ms. These results partially agree with Albrecht et al. [14], who found values of 56 ms and 43 ms for the left and right hemispheres, respectively.

For N1 with toneburst, the mean latency between ears was 100.6 ms, also in agreement with Albrecht et al. [14]. Kraus and McGee [7] propose N1 values ranging from 80 to 250 ms, and McPherson [4] from 80 to 150 ms. Considering the minimum and maximum values found in this study, our results agree with the variability proposed by McPherson [4].

In this study, the average P2 latency across right and left ears with toneburst was 173.5 ms, agreeing with César and Munhoz [15], who found values close to 182 ms. Our results also agree with Kraus and McGee [7], who report a latency close to 200 ms. Considering minimum and maximum values, our results further agree with McPherson [4], who describes a latency range of 145 to 180 ms.

For N2, the mean latency for toneburst was 217.45 ms. These results agree with Colafêmina et al. [16], who found a value of 231.2 ms for this component.

The mean latency for P300 with toneburst found in this study was 298.7 ms. These results agree with the study by Crippa, Aita and Ferreira [17], in which the mean P3 latency was 299.4 ms in the right ear and 296.9 ms in the left ear. They disagree, however, with the study by Duarte et al. [18], where the mean for P300 was 341 ms. Some authors, such as Hall [19], point out that variations in P300 can be explained by factors such as different equipment, the state of the patient, and age, among others.

In relation to P3 amplitude, the literature reports variations from 1.7 to 20 μV (Reis and Iório [20]), and many researchers do not regard amplitude as an important parameter in P3 interpretation [21]. Because this is a normative study, amplitude was nonetheless analyzed here. For the toneburst, the average between ears was 5.95 μV. These results agree with the study by Pinzan-Faria [22], where the average for ears with normal thresholds was 6.47 μV. They disagree, however, with the study by Silva, Pinto and Matas [23], in which the mean amplitude for individuals without central alterations was 10.6 μV. Picton [24] notes that P3 amplitude is variable owing to differing levels of attention during the examination.

Speech stimuli have been used increasingly in clinical practice because they help assess the cortical regions involved in speech-signal processing. As the present research is a descriptive, normative study, the speech and toneburst variables were not statistically compared. However, the mean latencies of the exogenous components and of P3 were higher for speech stimuli than for toneburst, except for the P1 component, whose mean for most speech stimuli was lower than for pure tones. These results agree with Massa et al. [25] and Alvarenga et al. [26], who emphasize that P300 latency increases when the "targets" for discrimination are more "difficult" than the standard, i.e., latency is sensitive to the processing demand of the task.

In terms of amplitude, some authors report that amplitude decreases as the difficulty of the task increases [25,27,28]. Considering the greater complexity of speech stimuli at the level of cortical processing, this assertion is not supported by the present research, since amplitude was higher for the speech stimuli. However, when the subjects' own judgment of stimulus difficulty is considered, our findings agree with those authors, since most individuals reported that the speech stimuli were easier than the toneburst, and the mean amplitude was higher for the syllabic contrasts. The task-difficulty variable was not analyzed in the present article, but it is part of a parallel study.

Regarding the occurrence of potentials, in this study the N1-P2 complex was visualized in all patients and for all types of stimuli. These results corroborate the literature, which states that this complex can already be visualized from the age of 16, being dependent on the maturational process [29]. The occurrence of P1 and N2 was higher for speech stimuli than for toneburst. Novak et al. [30] report that N2 is related to the process of identification of and attention to the rare stimulus and is correlated with the difficulty level of the task. This justifies the greater occurrence of N2 for speech stimuli, since, as reported above, most individuals found these contrasts easier to identify. P1 is considered the most variable potential in adults because it is the most difficult to visualize. Albrecht, Suchodoletz and Uwer [14] report that P1 is not as reliable for interpreting results in adults. We believe that the higher occurrence of this potential for speech stimuli may be related to the greater cortical stimulation produced by speech contrasts.

This research characterized latency and amplitude values for cortical potentials elicited with different speech and toneburst stimuli. This standardization can be used, along with other research, as a reference for clinical use.

Thus, in general, we found that for speech stimuli the overall mean P1 latency was 62.95 ms, whereas for toneburst it was 63.15 ms. For N1, the mean was 106.03 ms for speech and 100.6 ms for pure tones. The mean P2 was 178.51 ms for speech and 173.5 ms for toneburst. For N2, the mean latency was 247.28 ms for speech and 217.45 ms for toneburst. For P3, the mean was 324.18 ms for speech and 298.7 ms for toneburst. The amplitude was 6.75 μV for speech stimuli and 5.95 μV for toneburst. These values can be used as a reference in other studies.

Conclusion

It was possible to characterize latency and amplitude values for cortical potentials with different speech and toneburst stimuli.

  1. Figueiredo MS, Castro Junior NP (2003) Brainstem auditory evoked potentials (ABR). In: Figueiredo MS. Essential knowledge to understand well otoacoustic emissions and Bera. São José dos Campos: Pulso 85-97.
  2. Sousa LCS, Piza MRT, Alvarenga KF, Cóser PL (2008) Electrophysiology of hearing and otoacoustic emissions. Principles and clinical applications. São Paulo: Tecmed 95-107.
  3. Soares AJC, Sanches SGG, Neves-Lobo LF, Carmalm RMM, Matas CG, et al. (2011) Long-latency auditory evoked potentials and central auditory processing in children with reading and writing disorders: preliminary data. Intl. Arch. Otorhinolaryngol. 15: 486-491. Link: https://bit.ly/2XZD2zT
  4. McPherson DL (1996) Late potentials of auditory system (evoked potentials). San Diego: Singular Publishing Group.
  5. Martin BA, Tremblay KI, Korczak P (2008) Speech evoked potentials: from the laboratory to the clinic. Ear Hear 29: 285-313. Link: https://bit.ly/2y9xGTf
  6. Schochat E (2003) Medidas eletrofisiológicas da audição. C- Respostas de Longa Latência. In: Carvallo RMM. Fonoaudiologia – Informação para a formação, Procedimentos em Audiologia. Rio de Janeiro. Guanabara Koogan 71-85.
  7. Kraus N, Mcgee T (1994) Potenciais auditivos evocados de longa latência. In: Katz J, editor. Handbook of Clinical Audiology. Baltimore. Williams and Wilkins 406-423.
  8. Kraus N, Nicol T (2003) Aggregate neural responses to speech sounds in the central auditory system. Speech Communication 41: 35-47. Link: https://bit.ly/30UL7Tv
  9. Korczak PA, Kurtzberg D, Stapells DR (2005) Effects of sensori-neural hearing loss and personal hearing aids on cortical event-related potential and behavioral measures of speech-sound processing. Ear Hear 26: 165–85. Link: https://bit.ly/2ObEz19
  10. Ruth RA, Lambert PR (1991) Auditory evoked potentials. Otolaryngol Clin North Am 24: 349-370. Link: https://bit.ly/2M9mxKz
  11. Kraus N, Mcgee T (1999) Long-latency auditory potentials. In: Katz, J. Treatise on clinical audiology. São Paulo. Manole 403-420.
  12. Lloyd LL, Kaplan H (1978) Audiometric interpretation: a manual of basic audiometry. Baltimore: University Park Press 16-17.
  13. Jerger J (1970) Clinical experience with impedance audiometry. Arch Otolaryngol 92: 311-324. Link: https://bit.ly/2JMo3Rg
  14. Albrecht R, Suchodoletz WV, Uwer R (2000) The development of auditory evoked dipole source activity from childhood to adulthood. Clinical Neurophysiology 111: 2268-2276. Link: https://bit.ly/2GrGxEu
  15. César CPHAR, Munhoz MSL (1997) Evaluation of long-latency-related event potentials in healthy young and adult subjects. Acta AWHO 16: 114-122.
  16. Colafêmina JF, Fellipe ACN, Junqueira CAO, Frizzo AC (2000) Long-latency Auditory Evoked Potentials (P300) in Healthy Young Adults: A Regulatory Study. Rev. Bras. Otolaryngology 66: 618-625.
  17. Crippa BL, Aita ADC, Ferreira MIDC (2011) Standardization of electrophysiological responses to P300 in normal-hearing adults. Distúrb Comun 23: 325-333.
  18. Duarte Jl, Alvarenga KF, Banhara MR, Melo ADP, Sás RM, et al. (2009) Long-latency auditory evoked potential-P300 in normal subjects: value of simultaneous recording in Fz and Cz. Brazilian Journal of Otorhinolaryngology 75: 231-236. Link: https://bit.ly/2GrGJnc
  19. Hall J (2006) New handbook of auditory evoked responses. Boston: Allyn and Bacon. Link: https://bit.ly/2M6GIbQ
  20. Reis ACMB, Iório MCM (2007) P300 in subjects with hearing loss. Pro-phono: Journal of scientific update 19: 113-122.
  21. Schochat E, Scheuer CI, Andrade ER (2002) ABR and auditory P300 findings in children with ADHD. Arch. Neuro-Psiquiatr 60: 742-747. Link: https://bit.ly/2KdhXZh
  22. Pinzan-Faria VM (2005) Study of the long-latency auditory evoked potential P300 in unilateral hearing loss. Masters dissertation. Sao Paulo. 73 f. Federal University of São Paulo (UNIFESP). Paulista Medical School. Link: https://bit.ly/2OhNGxF
  23. Silva, Pinto and Matas (2007).
  24. Picton TW (1992) The P300 wave of the human event-related potential. Clin. Neurophysiol 9: 456-479. Link: https://bit.ly/2XZwKjP
  25. Massa CG, Rabelo CM, Matas CG, Schochat E, Samelli AG (2011) P300 with verbal and nonverbal stimuli in normal hearing adults. Braz J Otorhinolaryngol 77: 686-690. Link: https://bit.ly/2SBHGOx
  26. Alvarenga KF, Vicente LC, Lopes RCF, Silva RA, Banhara MR, et al. (2013) Influence of speech contrasts on cortical auditory evoked potentials. Braz J Otorhinolaryngol 79: 3. Link: https://bit.ly/2SEHFcW
  27. Caps JW, Harkrider AW, Hedrick MS (2005) Neurophysiological indices of speech and nonspeech stimulus processing. J Speech Lang Hear Res 48: 1147-1164. Link: https://bit.ly/2Gs9pfE
  28. Geal-Dor M, Kamenir Y, Babkoff H (2005) Event related potentials (ERPs) and behavioral responses: comparison of tonal stimuli to speech stimuli in phonological and semantic tasks. J Basic Clin Physiol Pharmacol 16: 139-155. Link: https://bit.ly/2M9ve7C
  29. Sussman E, Steinschneider M, Gumenyuk V, Grushko J, Lawson K (2008) The maturation of human evoked brain potentials to sounds presented at different stimulus rates. Hear Res 236: 61-79. Link: https://bit.ly/2Yf6gpF
  30. Novak GP, Ritter W, Vaughan HG, Wiznitzer ML (1990) Differentiation of negative event-related potentials in an auditory discrimination task. Electroencephalogr Clin Neurophysiol 75: 255-275. Link: https://bit.ly/2YsmZtI
© 2019 Didoné DD, et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
 
