Cochlear Implants: Fundamentals and Application - Part 4


FIGURE 5.14. Population responses. A series of electrically evoked brainstem responses (EABRs) produced by a bipolar +1 stimulating electrode close to the inner wall of the scala tympani of the cat and recorded differentially with subcutaneous scalp needle electrodes. The amplitude of the waves is plotted for increases in current level from 0.2 to 1.6 mA. (Reprinted from Shepherd et al. 1993, with permission from Elsevier.)

... high frequencies into the excitatory area of the unit produced a roughly sinusoidal distribution of discharge rates. A small number of units in the DCN, however, produced asymmetrical responses (Erulkar et al 1968; Moller 1971). This was due to the asymmetry of the inhibitory side bands. At higher modulation rates (50–300 Hz) the bandwidth of the unit's response area becomes narrower. Therefore, at the first auditory nucleus there are cells that demonstrate some selectivity in their response to modulated sounds. This becomes much more marked at the higher levels of the auditory system, particularly in the AC, where there are units that are very specific in their sensitivities to the direction and depth of modulation (Bogdanski and Galambos 1960; Suga 1963, 1965; Evans and Whitfield 1964). Phillips and Hall (1987) discovered units responsive not only to amplitude-modulated (AM) tones but also to the rate of change and the base sound intensity level. The response could be explained by the inhibitory side bands of the unit: as the sound level increased, it excited the neighboring area, with a suppression of the response. Whitfield and Evans (1965) also showed that certain cortical cells responded to frequency-modulated (FM) tones and not to pure tones, and that the response was directionally selective.

Furthermore, in the AC the tuning curves were found by Oonishi and Katsuki (1965) to vary in shape from flat through irregular and multipeaked to sharp. This would indicate multifrequency input and more complex processing. However, the tuning curves were found by Calford, Webster et al (1983) to be similar in shape and bandwidth to those at lower levels.

Speech

Much of the research on the coding of speech as a whole has been carried out on the AN, and many studies have used synthetic speech. Far less is known about how natural speech is processed and transformed by the central auditory nuclei, and how the critical temporal and spectral features that identify a speech sound segment (phoneme) are extracted.

The responses of AN fibers to the synthesized vowels /A/, /e/, /I/, /u/ were examined by Sachs and Young (1979) to see how the responses to these spectrally more complex vowels compared with the responses to two-tone stimuli. The study showed that there were formant frequency peaks in the normalized discharge rate. At high intensities the peaks disappeared due to rate saturation and two-tone suppression. This raised the question of how place coding alone could convey speech information at high intensities, and suggested that temporal coding was also involved.
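The effect of rate saturation on the place representation can be illustrated with a toy simulation. The sketch below is only a caricature of the Sachs and Young result, not their analysis: the fiber bank, the sigmoidal rate-level function, and the three-formant vowel spectrum are all invented for illustration.

```python
import numpy as np

def vowel_level_db(freqs, formants=(500.0, 1500.0, 2500.0), peak_db=30.0):
    """Toy vowel spectrum: Gaussian formant peaks in dB above fiber threshold.
    Formant frequencies and bandwidths are invented, not measured values."""
    level = np.zeros_like(freqs)
    for f in formants:
        level += peak_db * np.exp(-0.5 * ((freqs - f) / (0.12 * f)) ** 2)
    return level

def discharge_rate(level_db, sat_rate=200.0, dyn_range_db=30.0):
    """Sigmoidal rate-level function saturating over ~30 dB (invented)."""
    x = level_db / dyn_range_db
    return sat_rate / (1.0 + np.exp(-6.0 * (x - 0.5)))

cfs = np.logspace(np.log10(200.0), np.log10(4000.0), 60)  # fiber CFs, Hz

for gain_db in (0.0, 40.0):  # low versus high presentation level
    rates = discharge_rate(vowel_level_db(cfs) + gain_db)
    # At the low level the rate profile peaks at the formants; at the high
    # level nearly all fibers saturate and the formant peaks flatten out.
    print(f"gain {gain_db:4.0f} dB: rate range across CFs = "
          f"{rates.max() - rates.min():6.1f} spikes/s")
```

In this toy, the rate range across fibers collapses from roughly 180 spikes/s at the low level to about 1 spike/s at the high level: the formant peaks in the rate-place profile vanish, which is the puzzle that pointed to temporal coding.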
A study was undertaken by Delgutte (1984) and Delgutte and Kiang (1984) to help determine whether the formant pattern and fundamental (voicing) frequency could be represented in the fine time patterns of the discharges of the AN fibers. Results of the analysis of period histograms showed that the intervals between action potentials were almost always harmonics of the vowel fundamental frequency: either the fundamental frequency itself, one of the formants, or the fiber's characteristic frequency. The relative contribution of these frequencies depended on a fiber's characteristic frequency relative to the formant frequencies. It was found that (1) if the characteristic frequency was below the first formant, the largest response components were the harmonics of the fundamental frequency closest to the unit's characteristic frequency; (2) in the region around the characteristic frequency of the first formant, this formant and its harmonics were the largest components; (3) an intermediate region between the first and second formants had prominent components at both the fiber's characteristic frequency and the fundamental frequency; (4) in a region centered around the second formant, the harmonics closest to the second formant were dominant; and (5) in a high-frequency area there were multiple intervals at both the formant and fundamental frequencies. These results suggested that in addition to place coding, the temporal coding of speech information is fundamentally important, and is likely to be so in noise. They also indicated that for electrical stimulation and speech-processing strategies for implant patients, information on the fundamental frequency should be presented across the electrodes used for place coding of frequency.
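The interval analysis can be caricatured in a few lines. In the sketch below the spike generator is a toy, not Delgutte's method: spikes are drawn from an inhomogeneous Poisson process locked to a formant-like carrier that is amplitude modulated at the voicing rate, and the frequencies and rates are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

fs = 40_000.0              # sampling rate, Hz
f0, f1 = 125.0, 500.0      # voicing fundamental and a formant, Hz (invented)
t = np.arange(0.0, 4.0, 1.0 / fs)

# Toy fine time pattern: formant-frequency carrier, half-wave rectified,
# amplitude modulated at the voicing rate.
drive = ((1.0 + np.cos(2 * np.pi * f0 * t))
         * np.maximum(np.cos(2 * np.pi * f1 * t), 0.0))

# Inhomogeneous Poisson spike train driven by the waveform (peak rate invented).
rate = 400.0 * drive / drive.max()            # spikes/s
spike_times = t[rng.random(t.size) < rate / fs]

isis_ms = np.diff(spike_times) * 1000.0
counts, edges = np.histogram(isis_ms, bins=np.arange(0.0, 12.0, 0.5))
for k in np.argsort(counts)[-4:][::-1]:       # four busiest interval bins
    print(f"intervals near {edges[k]:4.1f} ms: {counts[k]} occurrences")
```

The busiest interval bins fall near multiples of the formant period (2 ms) and near the fundamental period (8 ms), echoing the finding that interspike intervals follow both the formants and the fundamental.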
Much less is known about the coding of the complex temporal and spectral features of consonants. There is a need to examine the ability of the VCN to extract consonant features from naturally spoken speech. Research by Clarey and Clark (2001) and Clarey et al (2001) has shown that the chopper cell in the VCN codes the voice onset time (VOT) of syllables with great accuracy. The VOT, as discussed in Chapter 7, is the time from the release of the closure of the vocal tract to produce a plosive such as /b/ or /g/ to the onset of voicing. The intracellular recordings of Clarey et al show hyperpolarization during the period of the VOT, which could result from inhibitory side bands that sharpen the discharge peak at the onset of the burst and thus the salience of the VOT. The octopus cells in the posteroventral cochlear nucleus (PVCN) are also finely tuned to provide phase information; they are sensitive to phase and could be coding voicing.

The studies on the AC by Evans and Whitfield (1964) laid the foundation for studies on species vocalizations. A strong response to a natural call does not mean the unit has extracted this feature, as the call may evoke strong responses regardless. One way to overcome this difficulty is to present the stimulus backward. In a study by Wang et al (1995), it was found that natural vocalizations of marmoset monkeys produced stronger responses in the primary AC than did equally complex sounds such as a time-reversed call. The population of cells also had a clearer representation of the spectral shape of the call.

Sound Localization

The direction of a sound is coded primarily through interaural differences in intensity or in the time of arrival of the signal (phase). The spectral differences introduced by the pinna for sounds from various locations are also important, especially if a person has hearing in only one ear. See Chapter 6 for more details.

The coding takes place in cells that have binaural inputs, so that an interaction can occur as a result of interaural intensity or timing differences. Cells that code the information have a predominantly inhibitory (I) input from either the contralateral (IE cell) or the ipsilateral ear (EI cell), and an excitatory (E) input from the other ear. The convention is to refer to the input from the contralateral ear first. Coding may also occur through excitatory inputs from both the contralateral and ipsilateral ears (EE cells) (Goldberg and Brown 1969).

Interaural Intensity Differences

The binaural coding of interaural intensity differences (IIDs) by units in the SOC was demonstrated by Goldberg and Brown (1969). EI and IE units were relatively insensitive to the overall binaural intensity, but sensitive to IIDs (Hall 1965). The sensitivity of EI cells to IIDs was seen in the lateral superior olive (LSO) (Tsuchitani and Boudreau 1967, 1969; Boudreau and Tsuchitani 1968, 1970; Caird and Klinke 1983). EI units in the ICC were found to respond over a range of IIDs (Hind et al 1963; Rose et al 1963; Geisler et al 1969; Semple and Aitkin 1979). A curve was fitted to normalized IID functions from 43 EI cells deep in the SC of the cat (Fig. 5.15) (Wise and Irvine 1985). This shows changes in response to increases in intensity from the contralateral side, thus coding for sound localization in the contralateral azimuth. The explanation is that as these were EI cells with a strong stimulus from the ipsilateral inhibitory ear, there would be no response when stimulating this side, but a graded response occurred to variations in the intensity from the excitatory contralateral input.

FIGURE 5.15. A curve fitted to normalized interaural intensity functions from 43 EI cells deep in the SC of the cat. The maximal cell response is plotted against interaural intensity difference (contralateral dB re ipsilateral) for contralateral and ipsilateral azimuths. When there is a strong ipsilateral input the cell does not fire. When the strength of the excitatory input from the contralateral side increases, the cell fires with a graded response to each position in the contralateral field. (Wise and Irvine 1985, reprinted with permission of the American Physiological Society.)

In the SOC and IC there was a smaller proportion of IE cells to process information from the ipsilateral side of the body. In addition to the EI and IE cells that were responsive to IIDs, Goldberg and Brown (1969) found that EE cells were generally not responsive to IIDs, but were sensitive to changes in the overall intensity. They had sharper rate/intensity functions and a wider dynamic range. The studies referred to above show there are units in both the SOC and IC that code information from either half of the field midway between each ear. Studies have also shown that units in the LL, SC, MGB, and AC also code IIDs.
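The graded EI behavior in Figure 5.15 can be caricatured by a unit driven by the difference between contralateral excitation and ipsilateral inhibition, passed through a saturating nonlinearity. This is a minimal sketch with invented gains, not a fit to the Wise and Irvine data.

```python
import numpy as np

def ei_response(contra_db, ipsi_db):
    """EI unit: contralateral excitation minus ipsilateral inhibition,
    passed through a saturating nonlinearity (gains invented)."""
    iid = contra_db - ipsi_db                  # IID, contralateral re ipsilateral
    return 100.0 / (1.0 + np.exp(-iid / 8.0))  # percent of maximum response

# Strong ipsilateral (inhibitory) drive silences the unit; increasing
# contralateral intensity gives a graded response over the contralateral field.
for iid in range(-30, 40, 10):
    print(f"IID {iid:+3d} dB -> {ei_response(float(iid), 0.0):5.1f}% of max")
```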
Interaural Time Differences

The processing of interaural time differences (ITDs) involves disparity in the arrival of transients as well as in the phase of ongoing pure tones. There is evidence that these are processed by two different mechanisms.

FIGURE 5.16. A delay line in which phase differences between the two ears are converted to a place of excitation. This is the basis of the model of Jeffress (1948). The model is relevant to bilateral cochlear implants and to bimodal speech processing, with hearing in one ear and electrical stimulation of the other.

The cells in the SOC sensitive to transients are those that are excited by one ear and inhibited by the other (i.e., EI or IE cells). As with intensity, the cell is excited maximally when the excitatory ear leads, and suppressed maximally when the inhibitory ear leads (Galambos et al 1959; Moushegian et al 1964a,b; Hall 1965). Some cells (EI/IE) were sensitive to both ITDs and IIDs, and with these one could be traded against the other; that is, shortening the time of arrival at one ear could be counterbalanced by a reduction in the intensity (time/intensity trading). Caird and Klinke (1983) found that some IE cells in the LSO had similar IID and ITD functions, the range for IID being 30 dB and the ITD range being 2 ms, which was greater than the 300 to 400 μs relevant for sound localization. Kuwada and Yin (1983) report that most cells in the IC that were sensitive to interaural phase were insensitive to interaural transients, and this further supported the view that the coding of interaural transients and phase occurs through different mechanisms. But only a small proportion responded differentially as a function of the direction of the ITD variation, and together with the data from Yin and Kuwada (1983) this suggests that the coding of the direction of sound movement occurs at a higher level, presumably in the SC and AC.

The coding of interaural phase difference was reported in the IC by Rose et al (1966), as it was previously for the medial superior olive (MSO) by Moushegian et al (1964a,b). Rose et al (1966) found that when sine waves were presented to each ear there was an optimal phase difference that gave a maximal response [the characteristic delay (CD)]. This was consistent with the coincidence detection model of Jeffress (1948), illustrated in Figure 5.16. The model postulates that there are units with different delay lines from each ear. When the delay line is such that a certain phase difference between the ears provides maximal excitation, phase difference is coded on a place basis. Furthermore, a study of MSO and ventral nucleus of the trapezoid body (VNTB) units in the dog by Goldberg and Brown (1969) showed that the maximum discharge occurred for a binaural phase delay that corresponded to the difference between the monaural phase locking for each ear. These data were also consistent with the coincidence detector model for interaural phase differences proposed by Jeffress (1948). According to these studies and the coincidence detection model, the timing and site of origin of the inputs from each ear are important for binaural processing of temporal information with electrical stimulation.
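The Jeffress scheme is simple to state computationally: each unit in an array applies a different internal delay to one ear's input and counts coincidences with the other ear's input, so the identity of the most active unit reports the ITD as a place code. In the minimal sketch below, multiplication of phase-locked waveforms stands in for spike coincidence, and the tone frequency and delay-line values are illustrative.

```python
import numpy as np

fs = 100_000.0                   # sampling rate, Hz
t = np.arange(0.0, 0.1, 1.0 / fs)
freq = 500.0                     # tone frequency, Hz (illustrative)
itd = 200e-6                     # sound leads at the left ear by 200 us

def delayed_left(delay):
    """Phase-locked left-ear input after the unit's internal delay."""
    return np.sin(2 * np.pi * freq * (t - delay))

right = np.sin(2 * np.pi * freq * (t - itd))   # right ear lags by the ITD

best_delay, best_out = None, -np.inf
for delay in np.arange(-400e-6, 401e-6, 50e-6):   # the internal delay line
    out = np.mean(delayed_left(delay) * right)    # coincidence-count proxy
    if out > best_out:
        best_delay, best_out = delay, out

print(f"most active unit: internal delay {best_delay * 1e6:.0f} us "
      f"(equal to the {itd * 1e6:.0f} us ITD, i.e., a place code)")
```

Because each unit is labeled by its internal delay, downstream centers need only read which unit is most active.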
Evidence for the role of the MSO in coding ITDs comes from a study of a patient who had bilateral cochlear implants and poor interaural temporal discrimination. The patient died 13 years after the first implant, and sectioning of the brainstem showed that the cell density and volume of the MSO were significantly less than those of the MSO of a person of the same age with a single cochlear implant (Yukawa et al 2001) (see Chapter 3).

In addition, Yin and Kuwada (1983) found low-frequency units in the IC with periodic interaural delay functions, responding to stimuli with small differences in frequency to produce beating. Most of these cells were insensitive to onset or transient disparity. The majority of the cells were excitatory (EE), but there were a variety of other types. Furthermore, only a small proportion of the binaural phase-sensitive cells exhibited monaural phase locking (Kuwada et al 1984), and this suggested earlier processing in the SOC. The majority of cells responded when the contralateral ear was leading, and thus coded the localization of sound in the contralateral half of space.

When low-pass noise was presented to low-frequency units in the IC and the interaural delay was varied, the response curves had a periodicity that followed the cell's characteristic or best frequency (Geisler et al 1969). However, when uncorrelated noise with the same spectral composition was presented, there was no evidence of sensitivity to the delays (Yin et al 1986). This indicates that phase sensitivity depends on the fine structure of the signal, and that cross-correlation of the signals at the two ears could explain the coding. It also serves to emphasize the importance of reproducing the fine temporospatial patterns of response with electrical stimulation to localize sound and understand speech in the presence of background noise. This is especially relevant to the design of bilateral cochlear implants and bimodal speech processing.
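This is exactly what a cross-correlation account predicts: identical noise at the two ears produces a correlation peak at the interaural delay, whereas independent noise with the same spectrum produces none. A toy demonstration (the sampling rate, duration, and delay are invented):

```python
import numpy as np

rng = np.random.default_rng(1)
fs, dur = 48_000, 1.0                 # sampling rate (Hz) and duration (s)
itd = 300e-6                          # true interaural delay, s
n = int(fs * dur)
shift = int(round(itd * fs))          # delay in samples (about 14)

noise = rng.standard_normal(n)
left = noise
right_same = np.roll(noise, shift)    # same noise, delayed: correlated ears
right_diff = rng.standard_normal(n)   # independent noise: uncorrelated ears

def itd_estimate(l, r, max_lag=40):
    """Lag of the cross-correlation peak, in seconds."""
    lags = np.arange(-max_lag, max_lag + 1)
    xc = [np.dot(l, np.roll(r, -k)) for k in lags]
    return lags[int(np.argmax(xc))] / fs

# The correlated pair peaks at the imposed ~300 us delay; the uncorrelated
# pair has no consistent peak, as with the units studied by Yin et al (1986).
print(f"correlated noise:   peak at {itd_estimate(left, right_same) * 1e6:5.0f} us")
print(f"uncorrelated noise: peak at {itd_estimate(left, right_diff) * 1e6:5.0f} us")
```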
Higher Level Processing

The physiological studies on the SOC and IC for the coding of sound localization referred to above showed basic mechanisms for the processing of IIDs and ITDs. They did not reveal a specific response for a particular location or demonstrate how a moving sound could be detected. Evidence for this was seen in the barn owl by Knudsen and Konishi (1978). Recordings were made from the mesencephalicus lateralis dorsalis (MLD), the avian homologue of the IC, and these showed cells that responded to sounds arising from restricted areas in space.

In mammals a map of auditory space was found in the deep and intermediate layers of the SC in the guinea pig (Palmer and King 1982; King and Palmer 1983) and the cat (Middlebrooks and Knudsen 1984). This map of auditory space resembled that of visual space in the superficial layers of the SC. In a study on monkeys a similar organization was found. For some cells the position of the auditory receptive field was affected by the eye position; thus a discrepancy between the position of a sound and the visual axis was mapped onto the SC.

The responses of many single units in the primary cortex of monkeys and cats were influenced by the interaural differences in time (ITD) and intensity (IID) (Brugge et al 1969; Brugge and Merzenich 1973). The units' responses may be facilitatory (EE) or suppressive (EI or IE). Behavioral studies after bilateral ablation of the primary AC in the cat, monkey, and other experimental animals have helped establish the importance of the AC in sound localization. The physiological studies referred to above have demonstrated how sound is converted through bottom-up processing into appropriate information for final coding by the AC. Studies by Neff (1968) and Strominger (1969) demonstrated that after bilateral ablation of the primary AC the animal's ability to localize sound was grossly impaired or reduced to chance level. Sound localization and lateralization were affected, as were the detection of temporal patterns and order, of changes in the duration of sounds, and of changes in the spectra of complex sounds (Neff et al 1975).

Coding and Perception

Pitch and Loudness

Frequency and intensity coding correlate predominantly with pitch and loudness perception, respectively, and these sensations underlie the perception of speech and environmental sounds, although the relation is not well understood. Nevertheless, an adequate representation of frequency and intensity coding using electrical stimulation is important for cochlear implant processing of speech and other sounds. The time/period (rate) code for frequency results in temporal pitch, and the place code in place (spectral) pitch. With sound it is difficult to determine the relative importance of the time/period and place codes in the perception of pitch, as the underlying frequency codes operate together when sound stimulates the cochlea. With electrical stimulation of auditory nerves the two codes can be reproduced separately to study their relative importance.

Although frequency and intensity coding correlate predominantly with the perception of pitch and loudness, respectively, frequency coding may have an effect on loudness, and loudness coding on pitch. For example, increasing intensity increases the loudness, and there may be a small change in pitch. For frequencies below 2000 Hz there can be up to a 5% decrease in pitch with an increase in intensity, and a 5% increase in pitch for frequencies above 4000 Hz (Moore 1997).

Sound Localization

The responses of units in the auditory pathways to IIDs and ITDs are consistent with the findings from psychophysical studies, and form the basis for analyzing the effects of electrical stimulation of the cochlear nerve for the restoration of hearing in deafness with bilateral cochlear implants or bimodal speech processing (an implant in one ear and a hearing aid in the other). For example, the perception of IIDs was better preserved than that of ITDs for electrical stimulation with bilateral cochlear implants (see Chapter 6). Understanding the coding of binaural excitation is increasingly important with the introduction of bilateral cochlear implants and bimodal speech processing, especially to improve hearing of signals in noise. The data indicate the importance of stimuli being presented from the same site in each ear. They also suggest that coding strategies need to emphasize the interaction of both IIDs and transient ITDs if the phase ITDs cannot be readily transmitted.

Neural Plasticity

Learning to use the perceptual information provided by the cochlear implant, which only partially reproduces the coding of sound, depends in part on the plasticity of the responses in the central auditory nervous system, especially in children born with little hearing. In the experimental animal there are two types of plasticity. The first is the development of neural connections within a critical period after birth (developmental plasticity). The second results from a change in the central representation of neurons in the mature animal after neural connectivity has been established (postdevelopmental plasticity).

Developmental Plasticity

Evidence of a critical period for changes in the central auditory system in response to surgical destruction of the cochlea was demonstrated in the ferret, where a marked loss of neurons in the CN occurred after ablation of the cochlea 5 days after birth (Moore 1990b).
However, ablation of the cochlea 24 days postpartum (i.e., a week before the onset of hearing) had little effect. This was discussed in more detail in Chapter 3.

An example of developmental plasticity is the increase in the number of projections from the CN to the ipsilateral IC when the cochlea on the opposite side was destroyed in the neonatal gerbil (Nordeen et al 1983). In this case there was a critical period that extended to 40 to 90 postnatal days, but the effect did not occur in the adult animal. A similar phenomenon was demonstrated in the ferret (Moore and Kowalchuk 1988), where the critical period for the neural remodeling extended to postnatal days 40 to 90, that is, well beyond the onset of hearing. There was a substantial increase in the expression of the growth-associated protein GAP-43, an indicator of synaptic remodeling (Illing et al 1997). Evidence that the changes were not due to ablation of the cochlea per se was provided by Born and Rubel (1988), who found that they still occurred when there was a lack of neural activity in the auditory nerve due to a neural blocker. This was supported by another study in which ferrets were unilaterally deafened with a conductive loss without damage to the cochlea, and again the same changes were seen (Moore et al 1989).

The neural remodeling changes described by Nordeen et al (1983) and Moore and Kowalchuk (1988) were accompanied by lower response thresholds, greater peak discharge rates, and shorter minimum response latencies (Kitzes 1984; Kitzes and Semple 1985). In addition, the marked developmental changes in the neural pathways were not seen after bilateral ablation of the cochleae (Moore 1990a), indicating that with unilateral loss there was an upregulation of connectivity on the active side. Furthermore, downregulation with loss of stimulation was seen by Hardie et al (1998), who found the density of synapses on central neurons was halved in animals with bilateral experimentally induced deafness, but not if the deafness was unilateral. Evidence was also found that hearing loss involved changes in the type of transmitters at the synapses (Redd et al 2000). Furthermore, the biological basis for these changes could be the effect of the anti-apoptotic gene bcl-2 (Mostafapour et al 2000).

The fact that an increase in connections occurred with loss of hearing in one ear rather than in both indicates that the innervation of the cells in the IC was due to a competitive interaction between the afferent projections from each ear during development. This suggests that if a person had a congenital hearing loss in one ear and then became deaf in the other, it would be preferable to insert a cochlear implant in the more recently deafened ear. However, an early patient in Melbourne had the congenitally deaf ear implanted, and her speech perception results with the University of Melbourne's F0/F2 strategy were above average. This indicates that there were sufficient connections for transmitting information through electrical stimulation, that higher processing above the level of the IC was of great importance, or that the results from the experimental animal do not apply to the human. The experimental animal findings could also indicate that implanting one ear in a child during the developmental stage could later limit the ability to use information from two ears, should that be shown to be of benefit. This question is unresolved.
The above results, however, do support the clinical findings that psychophysical and speech results are better if implantation is undertaken at an early age (Dowell et al 1986, 1995; Clark, Blamey et al 1987).

Evidence of plasticity is also seen in the cortex. Cats visually deprived from birth are superior to sighted cats in auditory localization tasks (Korte and Rauschecker 1993; Rauschecker and Korte 1993); this is discussed further in a review (Kral et al 2001b). The physiological basis for this effect was sharpened tuning in the anterior ectosylvian area (a higher-order region of the cortex), and the auditory area expanded into areas normally receiving only visual stimuli. However, only a few units responded to auditory stimuli in the primary visual cortex (Yaka et al 1999). Furthermore, the higher auditory cortex (for example, AII in cats) has greater plasticity than the primary auditory cortex (Diamond and Weinberger 1984; Weinberger et al 1984), and in congenital auditory deprivation it may be recruited for the processing of other sensory modalities. This is supported by the observation that deaf subjects perform better in visual tests than do hearing subjects (Neville and Lawson 1987a,b; Levänen et al 1998; Marschark 1998; Parasnis 1998). It has also been observed with implantation in prelinguistically deaf patients that if the amount of activation in higher-order auditory centers by visual stimuli, in particular sign language, is increased, as determined by positron emission tomography (PET), then speech perception will be poor (Lee et al 2001), and the older the person the poorer the speech results (Dowell et al 1985; Blamey et al 1992).

The mechanisms underlying the above changes are assumed to be long-term potentiation (Bliss and Lomo 1973) and long-term depression (Ito 1986). The different susceptibilities to long-term potentiation and depression are based on changes in glutamate receptors. Inhibition seems to be related to the sensitive periods in development. In the auditory cortex the γ-aminobutyric acid receptor cell count increases at the end of the sensitive period (Gao et al 1999), and is responsible for its termination. Nerve growth factors and brain-derived neurotrophic factors are crucial for cortical development and influence the duration of sensitive periods in cats and rats (Galuske et al 1999; Pizzorusso et al 1999; Sermasi et al 1999). They participate in stimulus-dependent postnatal development; their production depends on activity, and they affect synaptic plasticity and dendritic growth (Boulanger and Poo 1999; Caleo et al 1999). Further understanding of the plasticity of the central nervous system is revealed through experimental animal studies using electrical stimulation, and these are also directly relevant to cochlear implantation (see Plasticity, below).

Postdevelopmental Plasticity

Postdevelopmental plasticity was demonstrated in the mature guinea pig when an area of the cochlea was destroyed and the corresponding area of the brain, in particular the cortex, was shown to have increased representation from the neighboring frequency regions (Robertson et al 1989). This postdevelopmental plasticity was probably due to the loss of inhibition that normally suppresses the input from neighboring frequency areas.
It was shown in the cat that there was reorganization of the topographical map in the primary AC contralateral to the lesioned side, but the cortical field was normal for ipsilateral excitation from the unlesioned cochlea (Rajan et al 1993). This reorganization could also have been due to an increase in dendrite length in spine-free neurons: McMullen et al (1988) found the dendrite length increased by 27% in the contralateral cortex compared to littermate controls. In addition, it was found by Snyder et al (2000a) that changes occur at the level of the IC soon after a lesion of the spiral ganglion. IC units previously tuned to the frequency corresponding to the site of the lesion became less sensitive to that frequency, but tuned to the frequencies at the edge of the lesion. Furthermore, behavioral training can modify the tonotopic organization of the primary AC in the primate. Recanzone et al (1993) report an increase in cortical representation for frequencies where there was improved discrimination. These data underpin the clinical findings that speech perception results improve in postlinguistically deaf adults up to at least 2 years postoperatively.

[...]

FIGURE 5.25. Left: Intracellular traces from the globular bushy cells in the cochlear nucleus show responses to a 1000-Hz acoustic stimulus. Right: Superimposed traces on an expanded time base are seen. AP, action potential; EPSP, excitatory postsynaptic potential. (Reprinted with permission from Paolini et al 2000.)
FIGURE 5.26. Intracellular traces from the globular bushy cells in the cochlear nucleus show responses to intracochlear electrical stimulation. Left: Superimposed traces indicate graded excitatory postsynaptic potentials ...

... Clopton and Glass (1984) on unit responses in the CN of the guinea pig to sinusoidal electrical stimuli indicated that if complex stimuli consisting of two and five sinusoids were used, ...

FIGURE 5.21. Extracellular interspike histograms from globular bushy cells in the anteroventral cochlear nucleus for acoustic stimulation at 416 and 834 Hz and electrical stimulation at 400 and 800 pulses/s. (Clark 1998b,c. Reprinted with permission of Elsevier Science.)

... the units responded primarily to the more intense peaks. This indicated that the amplitude ... similar to that employed by Laird (1979) and Kiang et al (1979). In addition, the band-pass signal was half-wave rectified to simulate the hair cell's directional response, and then low-pass filtered, allowing for a 1-ms refractory period. For hearing, the model first used the basilar membrane model of Au et al (1995), in turn derived from that of Neely and Kim (1986), and a computational model of the inner ... determined. The advantage of a response due to stochastic resonance, however, may lie only close to threshold, as determined by Hohn and Burkitt (2001a) using an integrate-and-fire model. There are biological safety concerns when stimulating at high rates (see Chapter 4).

Integrate-and-Fire Model

A difficulty with the point process model is that it determines only the average firing statistics in a population of ... EPSPs and IPSPs, and is thus more realistic physiologically. It enables the calculation of the probability density function of the membrane potential reaching threshold, and the probability of the output spikes. The integrate-and-fire model has been used to examine the relationship between the input and output of a nerve cell when the inputs have a firing rate that has a Poisson distribution (Burkitt and ...

... AN is seen in the normal ear but not in the deafened ear (Kiang et al 1979; Liberman and Dodds 1984). Studies by Ehren- ...

FIGURE 5.24. The effect of spread of excitation of the basilar membrane on the processing of phase ... Left: Limited spread. Right: Wide spread. (Reprinted with permission from Paolini et al 2000.)

... at rates of 100 and 200 pulses/s the DLs for rate of stimulation were also considerably poorer than for sounds of the same frequency. The results showed that for electrical stimuli of 100, 200, and 400 pulses/s, the DLs varied from 50% and above. These DLs were greater than those obtained by Shower and Biddulph (1931) for acoustic stimulation in humans at 125, 250, and 500 Hz, where the DLs were 3%, 1%, and 1%, respectively.
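A minimal leaky integrate-and-fire sketch of the kind of model discussed above may make the idea concrete. It is far simpler than the formulations cited above (Hohn and Burkitt; Burkitt and colleagues), and every constant in it is invented: Poisson-timed EPSPs are integrated by a leaky membrane, and an output spike with reset and a 1-ms refractory period occurs at threshold.

```python
import numpy as np

rng = np.random.default_rng(0)

dt, dur = 1e-4, 2.0               # time step and duration, s
tau_m = 5e-3                      # membrane time constant, s (invented)
v_rest, v_thresh = -70.0, -55.0   # resting and threshold potentials, mV
epsp_mv = 1.2                     # EPSP size per input spike, mV (invented)
t_refr = 1e-3                     # absolute refractory period, s
in_rate = 2500.0                  # summed Poisson input rate, spikes/s (invented)

v, t_ok, out_spikes = v_rest, 0.0, []
for i in range(int(dur / dt)):
    t = i * dt
    v += (v_rest - v) * dt / tau_m        # leak back toward rest
    if rng.random() < in_rate * dt:       # a Poisson-timed EPSP arrives
        v += epsp_mv
    if v >= v_thresh and t >= t_ok:       # threshold crossing -> output spike
        out_spikes.append(t)
        v, t_ok = v_rest, t + t_refr      # reset, then refractory period

print(f"{in_rate:.0f} spikes/s of Poisson input -> "
      f"~{len(out_spikes) / dur:.0f} output spikes/s")
```

With the input rate chosen so that the mean membrane potential hovers near threshold, the output spikes are fluctuation-driven, which is the near-threshold regime in which a benefit from stochastic resonance would be expected.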
