Cochlear Implants: Fundamentals and Application - Part 9

11. Rehabilitation and Habilitation

FIGURE 11.3. Place pitch ranking versus word scores for the Bamford-Kowal-Bench (BKB) open-set sentences, for electrical stimulation alone, in 16 children using a cochlear implant. Pitch ranking is classified as present (n=12) or absent (n=4). (Busby and Clark 2000a,b; Clark 2002.) Reprinted with permission from Clark G. M., 2002, Learning to hear and the cochlear implant. Textbook of Perceptual Learning, M. Fahle and T. Poggio, eds. Cambridge, Mass.: MIT Press: 147–160.

As discriminating place of electrode stimulation is a different perceptual task from ranking pitch, this was also correlated with duration of deafness. The ability of children to rank pitch tonotopically (i.e., according to place of stimulation), rather than simply discriminate electrode place, was compared with their speech perception scores, as shown in Figure 11.3. The poorest results were found in those not able to order pitch ("Absent"). In addition, those children with the longest duration of deafness had the lowest scores on the Bamford-Kowal-Bench (BKB) word-in-sentence test (Bench and Bamford 1979). Furthermore, it can be seen (Fig. 11.3) that not all children who could rank pitch ("Present") had good speech perception results. For 75% of the 16 children in the study, a tonotopic ordering of pitch percepts was found ("Present"). However, only 58% of these children with good ability to rank pitch had satisfactory speech perception of 30% or more. This suggested that the effect of developmental plasticity on the neural connectivity required for place discrimination was not the only factor in learning speech. At least one other factor was required for speech perception, most probably language, as discussed below and in Chapter 7.
In another, unselected group of children from the University of Melbourne's Cochlear Implant Clinic, the data showed that speech perception was significantly better the younger the child at the time of implant surgery (Fig. 11.4). The scores were obtained 2 years or longer after implantation.

Cochlear Implants—Postdevelopmental Plasticity

An important question for cochlear implantation is, Would a patient who had adjusted to a certain speech-processing strategy get further benefits from an alternative strategy?

FIGURE 11.4. Speech perception (PBK phoneme score, %) versus age at operation for 74 unselected congenitally deaf children presenting to the University of Melbourne's Cochlear Implant Clinic. PBK, phonetically balanced (kindergarten) monosyllables.

At a more basic level, would the patterns of excitation in the auditory cortex and the neural connectivity that were required become so established that other patterns could not be processed? The effects of postdevelopmental plasticity were studied in older children by comparing speech perception after changing them from the Multipeak to the SPEAK strategy. The Multipeak strategy selects two formant frequencies [first (F1) and second (F2)] and the outputs from up to three high-frequency band-pass filters, and stimulates at a rate proportional to the voicing frequency. In contrast, the SPEAK strategy selects six or more spectral maxima and stimulates at a constant rate, with amplitude variations conveying voicing information. As discussed in Chapter 7, although it has been shown that the SPEAK strategy represents the place speech feature in particular better than does the Multipeak strategy, neither the neural connectivity required to process the feature nor the contribution of the feature to speech perception is well understood.
Appropriate neural connectivity may need to be established for the frequency transitions that underlie the place features. An improved strategy may either use these connections or establish others. Studies in the Cooperative Research Center (CRC) for Cochlear Implant Speech and Hearing Research (Fig. 11.5) (Dowell and Cowan 1997) revealed a trend for improved scores from 6 to 18 months after changing strategies for six out of seven children when tested with the pediatric Speech Intelligibility Test (SIT) (Jerger et al 1980) sentences in quiet and especially in noise. At 18 months the results for SPEAK were significantly better than for the Multipeak strategy. The period of learning required for effective use of the new strategy may be due to postdevelopmental neural plastic changes in lower-level processing for additional speech features, or higher-level changes in the patterns representing the phonemes.

FIGURE 11.5. Speech perception scores for SIT sentences in noise (+15 dB SNR) for seven children using the Multipeak and SPEAK speech-processing strategies after 6, 12, and 18 months' experience. *Scores with SPEAK at 18 months significantly higher than with Multipeak (p < 0.05). (Dowell and Cowan 1997.) Reprinted with permission from Clark G. M., 2002, Learning to hear and the cochlear implant. Textbook of perceptual learning, Fahle M. and Poggio T., eds. Cambridge, Mass.: MIT Press: 147–160.

The need for time to learn is illustrated in Figure 11.5, which indicates an improvement from 6 to 12 or 18 months' use of the SPEAK strategy (Dowell and Cowan 1997).
The results thus suggest that although children have learned to associate certain spectral and temporal patterns of cortical stimulation with words, they can readjust to the new strategy, presumably due to perceptual learning.

Further evidence for postdevelopmental plasticity has been seen in a pilot study in an adult cochlear implant patient where the perceptual vowel spaces were mapped at different intervals after implantation. With the normal two-formant vowel space there is a limited range or grouping of frequencies required for the perception of each vowel. With electrical stimulation at first, as shown in Figure 11.6, there was a wider range of electrodes contributing to the perception of each vowel, and a greater variability in the results. However, after the patient learned to use the implant, the range of electrodes contributing to the perception of the vowels became more restricted, and the vowel spaces came to more closely resemble those for normal hearing.

The plasticity described for the Nucleus speech-processing strategy was also seen (Dorman and Loizou 1997) for vowel recognition in seven of eight patients who were converted from the Ineraid device (a four-fixed-filter strategy providing analog stimulation at a rate depending on the speech wave amplitude variations) (Eddington 1980) to the continuous interleaved sampler (CIS) strategy (a six-fixed-filter strategy providing pulsatile stimulation at a constant rate of approximately 800 pulses/s) (Wilson et al 1991). The scores were similar immediately after surgery, but improved after a month.

FIGURE 11.6. The center of the two-formant vowel spaces (first formant versus second formant, Hz) for the vowels /O/, /Å/, /ø/, /A/, /u/, /E/ (as in "head," "who'd," "cord," "hot," "hut," "cart"), and the shift in the electrodes representing these vowels from two to three weeks postoperatively (Blamey and Dooley, personal communication; Clark 2002). Reprinted with permission from Clark G. M., 2002, Learning to hear and the cochlear implant. Textbook of perceptual learning, Fahle M. and Poggio T., eds. Cambridge, Mass.: MIT Press: 147–160.

This indicated that reprogramming strategies with altered frequency-to-electrode allocation and variation in the presentation of temporal information could be made. This suggests that the reprogramming is carried out at a higher level than for speech features.

Plasticity—Cross-Modality in Humans

There have been a number of examples of children demonstrating that they can effectively use a cochlear implant to communicate by auditory means, as well as use sign language of the deaf when required. These children usually learn auditory communication first. The need to develop the central neural connections for auditory processing of speech at an early stage has been well attested to by the better results the younger the child at operation. This is supported by studies with the positron emission tomography (PET) scanner. Parving et al (1995) showed that only two of five deaf patients with cochlear implants had an increased blood flow in the contralateral hemisphere, and this correlated with their speech understanding. Kubo (2002) found the auditory association area was activated by sign language but not by speech in a congenitally deaf cochlear implant user. In contrast, in short-term cochlear implant users there was competing information processing, and in a group of long-term users the auditory input was dominant. Cross-modality plasticity of auditory and visual inputs was found. This research indicated the need to undertake cochlear implantation first, to provide audition before learning sign language.
Analytic Versus Synthetic Training

The learning that takes place with speech-processing strategies could depend on developmental or postdevelopmental plasticity. It is also important to know how to train the implantee to facilitate learning. The two main approaches to training are termed analytic and synthetic (McCarthy and Alpiner 1982). Analytic training involves breaking speech down into its individual components (single words, phonemes) and training discrimination at this level. Typically, very little contextual information is available. It is assumed that this will improve speech discrimination in everyday communication. The synthetic or global approach provides communication strategies to help the hearing-impaired person understand the overall message. People are encouraged to make use of contextual cues, constructive questions, guessing, and so on, to determine what is said. The importance of key words is stressed, with little emphasis being given to the less meaningful words within an utterance. Exercises typically consist of sentence material or connected discourse. The synthetic approach to training, or a combination of the two approaches, has been favored.

Much of the research on the relative merits of analytic versus synthetic training for speech perception has been on subjects using speech reading (Sanders 1982). The results have been inconclusive (Walden et al 1977, 1981; Lesner et al 1987). Fewer studies have investigated the value of auditory training of more relevance to cochlear implantation. Rubinstein and Boothroyd (1987) trained hearing-impaired adults in the recognition of consonants, using either synthetic training alone or a combination of synthetic and analytic exercises. They found an increase in speech recognition scores on sentence tests following training for both groups, with no significant differences.
In a study by Alcantara et al (1988), seven normal-hearing subjects received training using an eight-channel electrotactile device (transmitting fundamental frequency, second formant, and amplitude information via electrodes positioned on the subjects' fingers) (Blamey and Clark 1985). The study compared the benefits of a synthetic approach to training with a combined approach using both analytic and synthetic training. Each subject received 3 months' training using one approach followed by 3 months' training with the other approach (the order was alternated between the subjects). Training sessions were for 1 hour, three times per week. Therefore, each subject received approximately 35 hours of experience with each approach. Each subject's performance was assessed three times during the program: prior to the commencement of training, following completion of the first 3-month program, and following completion of the second 3-month program. A variety of materials were used to assess and compare the benefits of training. The results suggested that both approaches to training were beneficial, with improvements in scores. However, the benefits depended on the test materials. The inclusion of analytic training resulted in improved scores for analytic tests. Synthetic-only training resulted in greater improvements in scores for some synthetic tests, perhaps because there was more synthetic training in the synthetic-only program than in the combined approach. These results suggest that the type of assessment materials used is crucial in determining the benefits of training. The more similar the assessment material is to the training material, the greater the possibility that the subject has learned the best way to do the test. Assuming that synthetic materials more closely represent typical communicative situations, the authors concluded that synthetic training should be included in a training program.
Mapping and Fitting Procedures in Adults and Children

Before commencing training, it is essential to optimize the speech signal presented via electrical stimulation. At the first postoperative test session (typically 10 to 14 days after the operation), the clinician selects the right strength of magnet for the transmission coil that will retain it in place over the implant. Occasionally, this first test session needs to be delayed until swelling over the implant has reduced sufficiently for the transmission coil to be retained. The patient's speech processor is connected to a personal computer via an interface unit so the stimulus parameters can be controlled, as discussed in Chapter 8. Parameters such as the currents for threshold (T) and maximum comfortable (MC) levels, as well as the stimulation mode (bipolar, BP+1, etc., versus monopolar), pulse width, duration of the stimulus, and pulse rate can be varied.

Physiological and Psychophysical Principles

Prior to (re)habilitation the outputs of the filters in the speech processor need to be mapped to appropriate electrodes with current levels that lie within the dynamic range for each electrode, that is, from T to MC levels. The electrical representation of the acoustic signals should remain within the operating range so that it has an appropriate loudness, that is, it is neither too soft nor too loud. The stimulus parameter responsible for neural excitation is electrical charge, and this can be controlled by varying either the pulse amplitude or width. The relationship between current level and loudness has been investigated by Eddington et al (1978) and Zeng and Shannon (1992), and was discussed in Chapter 6. Loudness depends on the number of neurons excited as well as other parameters such as rate, pulse interval, number of pulses, and duration. A linear relation was observed between loudness in decibels and current amplitude by Eddington et al.
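The idea of compressing the acoustic signal into each electrode's electrical dynamic range can be sketched in a few lines of Python. This is purely illustrative, not the algorithm of any clinical fitting system: the function name, the input dB range, the "clinical units" for current, and the simple linear interpolation are all assumptions.

```python
# Hypothetical sketch: compress an acoustic input level into an
# electrode's dynamic range, bounded by the threshold (T) and
# maximum comfortable (MC) current levels. Current is expressed in
# arbitrary "clinical units"; the dB window and the linear mapping
# are illustrative assumptions, not a device specification.

def map_to_dynamic_range(acoustic_db, t_level, mc_level,
                         floor_db=25.0, ceiling_db=65.0):
    """Map an input level (dB SPL) into [T, MC] current units.

    Levels at or below floor_db map to T; levels at or above
    ceiling_db map to MC, so no sound exceeds the comfortable range.
    """
    if acoustic_db <= floor_db:
        return t_level
    if acoustic_db >= ceiling_db:
        return mc_level
    fraction = (acoustic_db - floor_db) / (ceiling_db - floor_db)
    return t_level + fraction * (mc_level - t_level)

# Quiet sounds sit at T, loud sounds at MC, speech in between.
print(map_to_dynamic_range(20.0, t_level=100, mc_level=180))  # 100
print(map_to_dynamic_range(45.0, t_level=100, mc_level=180))  # 140.0
print(map_to_dynamic_range(80.0, t_level=100, mc_level=180))  # 180
```

The clamping at T and MC mirrors the requirement stated above: the electrical representation must stay within the operating range so that it is neither too soft nor too loud.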
With sound, Stevens (1975) showed that as loudness is a power function of intensity, loudness plotted against intensity on logarithmic axes gives a straight line. If there are regions in the cochlea with reduced numbers of spiral ganglion cells, a larger current than elsewhere will be required to operate within the dynamic range of each electrode (Kawano et al 1995, 1998). A larger current may also be required to stimulate an appropriate number of neurons if the array is more distant from the ganglion cells, or if pathology results in spreading the current away from the auditory neurons (Cohen et al 1998, 2001a,b). This may be resolved by changing the mode of stimulation to vary the current pathways. With the earlier speech-processing systems, bipolar (BP) and common ground (CG) stimulation were used to localize the current to separate groups of nerve fibers for place coding of frequency. Bipolar stimulation occurs when the current flows between two electrodes on the array. A normal stimulus mode with the Nucleus array is bipolar+1 (BP+1), where the current flows from an electrode, across one, to the next electrode. This is necessary for an adequate threshold and dynamic range with some electrode geometries and cochlear pathologies. The separation of the two electrodes in the bipolar mode can be further increased with more inactive intervening electrodes (BP+n) to achieve lower T and MC loudness levels. It was shown by Tong and Clark (1985) that increasing the extent of the stimulus in this way did not impair subjects' abilities to distinguish pairs of electrodes according to their degree of separation. CG stimulation occurs when current spreads from the active electrode to all other electrodes connected together electronically as a ground.
An advantage of CG stimulation is that thresholds are more consistent than with bipolar stimulation, and in children there will be fewer unpleasant variations in loudness. This is not such an issue with monopolar stimulation, which is now used more routinely. With CG stimulation, there was a marked reversal of pitch and timbre in the middle of the array in three of nine patients, and a tendency for the T and MC levels to be higher in this part of the cochlea (Busby et al 1994). The deviation from the tonotopic organization of the cochlea was assumed to be due to the effect of a loss of neurons, and pathology in the cochlea.

The lowest thresholds were obtained with monopolar (MP) stimulation. With this mode of stimulation the current passes from the active electrode to a distant ground outside the cochlea (the grounding electrode is placed under the temporalis muscle). It was thought that monopolar stimulation would not allow adequate localization of current for the place coding of speech frequencies; however, as discussed in Chapter 6, studies by Busby et al (1994) showed that MP stimuli could also be localized to groups of nerve fibers.

One difficulty in mapping the current from each filter into the dynamic range for each electrode is that it can lead to unacceptable and inappropriate variations in loudness. This is due to failure to take loudness summation into consideration. Loudness summation may result when more than one electrode is activated per stimulus cycle. Only partial summation was shown by Tong and Clark (1986) to occur for bipolar stimulation with the Nucleus banded array for spatial separations up to 3 mm, and this was considered due to the spread of current and refractory effects of nerve fibers. As one pulse led the other by 0.8 ms, it was not due to an interaction of the stimulating electrical fields. This partial summation over short segments of the cochlea was assumed to be due to the critical band: acoustically, the loudness of a band of noise of fixed intensity remains constant until the bandwidth of the noise exceeds the critical band, when the loudness increases with width. The bandwidth remains constant if the intensity is increased up to 80 dB. As discussed in Chapter 6, the loudness of a sound in sones will sum completely if the frequencies are separated by more than one critical bandwidth. If not, as discussed above, it will depend on summing first the energy of the sounds, and then determining the relation between loudness and the change in intensity. The critical band is equivalent to about a 0.89- to 1-mm length of the basilar membrane, and thus current stimulating on more than one electrode outside that region could produce increased loudness. This has been described by McKay et al (2001) for cochlear implant patients where, for example, the loudness of eight electrodes each at threshold has to be reduced by 50 current steps for the combined stimulus to be at threshold.

Producing a MAP

The T and MC levels for the electrical currents on each electrode are written onto a programmable chip in the speech processor where they are stored; this is referred to as a MAP. The details in the MAP are incorporated into whatever speech-processing strategy is being used. The frequency boundaries for the electrode to be stimulated are also set to determine the pitch range of the electrodes. Additional information can be obtained by conducting psychophysical tests on the discrimination of electrode current level and pulse rate. However, these tasks require training and are relatively time-consuming, and therefore are not routinely carried out with patients. These individual details vary from patient to patient, and in each patient over time, especially in the first few weeks postoperatively.
The variations in the T and MC levels are due to pathological changes at the electrode–tissue interface. These changes increase both the impedance at the electrode–tissue interface and the current spread. With a constant-current stimulator, a change in impedance should allow the T and MC levels to remain constant (see Chapters 4 and 8). In contrast, the development of a fibrous tissue electrode sheath and new bone formation alters the spread of current and moves the electrode away from the spiral ganglion cells, and thus raises T and MC levels. The current levels between the T and MC levels cover the dynamic range.

The frequency-to-electrode conversion depends on the strategy, the percepts obtained, and whether there is linear place pitch scaling for the electrodes. With formant-based and spectral maxima strategies, the 100-Hz bandwidths are arranged linearly for the seven most apical channels (corresponding to the first formant frequencies, 300–1000 Hz), and then the bandwidths increase logarithmically for frequencies greater than 1000 Hz, for the 13 (or more) basal stimulation channels (corresponding to the second formant frequencies). Although there is normally a log/linear relationship between frequency and site of stimulation along the basilar membrane, the above arrangement was found to give better speech perception when used with the Nucleus F0/F1/F2 and subsequent strategies. The frequency boundaries can be altered should there be a significant reduction in the number of channels available. The frequency boundaries for each electrode, and the minimum and maximum current levels (in arbitrary units), for the advanced combination encoder (ACE) as well as the SPEAK or CIS strategies are programmed into a MAP. The modes of operation of the SPEAK, CIS, and ACE strategies were described in Chapters 7 and 8.
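The frequency allocation just described, 100-Hz linear bands for the seven most apical channels from 300 to 1000 Hz followed by logarithmically widening bands, can be sketched as follows. The total of 20 channels and the 8000-Hz upper boundary are assumptions chosen for illustration; actual filter tables vary by device and MAP.

```python
# Illustrative sketch of a frequency-to-electrode boundary table:
# seven 100-Hz-wide linear bands (300-1000 Hz) for the apical
# channels, then logarithmically spaced bands for the basal channels.
# The channel count (20) and top frequency (8000 Hz) are assumptions.

def channel_boundaries(n_channels=20, low=300.0, linear_top=1000.0,
                       high=8000.0, n_linear=7):
    """Return n_channels + 1 band-edge frequencies in Hz."""
    # Linear section: 300, 400, ..., 1000 Hz (n_linear + 1 edges).
    bounds = [low + 100.0 * i for i in range(n_linear + 1)]
    # Logarithmic section: equal frequency ratios up to `high`.
    n_log = n_channels - n_linear
    ratio = (high / linear_top) ** (1.0 / n_log)
    for i in range(1, n_log + 1):
        bounds.append(linear_top * ratio ** i)
    return bounds

edges = channel_boundaries()
print(edges[:8])   # linear edges: 300.0 ... 1000.0
print(round(edges[-1]))  # 8000
```

Each adjacent pair of edges defines one channel's band, mapped to one electrode from apex (low frequencies) to base (high frequencies).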
The MAP, stored in a memory chip in the speech processor, can easily be reprogrammed should the hearing become too soft, loud, harsh, echoey, muffled, and so on. Typically, the MAP is changed regularly during the first few weeks or months following the operation. The patient's ability to judge comfortably loud levels, and to balance loudness across electrodes, generally improves with experience, and therefore the MAP can be refined. Also, there are some changes within the cochlea during the postoperative period (for example, fibrous tissue growth), as explained above, that alter the current levels required. For the majority of implantees, a new MAP needs to be programmed every 12 months or so, to take into account any minor changes in the levels.

At the first test session, the current level on a particular electrode (using a burst of pulses at 200 to 500 pulses/s with a duration of 500 ms) is increased until a hearing sensation is reported. It is wise to begin with the most apical electrode, as the likelihood of stimulating nonauditory neurons (the facial nerve and the tympanic branch of the glossopharyngeal nerve) is then very remote. The T levels can be obtained, as with audiometry, by averaging a number of responses to an ascending and/or descending presentation of stimuli. When ascending from no signal to a percept, the threshold will be higher than when descending in amplitude. A more stable T level can be obtained by averaging the results of the two procedures. The major difference from audiometry is that the T level should be the lowest stimulus level where a response always occurs (i.e., 100% threshold rather than 50%). It is not so useful to provide a signal that can be heard only 50% of the time.
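The averaging of ascending and descending runs can be illustrated with a minimal sketch; the helper name and the example current values (in arbitrary clinical units) are hypothetical, and a clinical protocol would add the 100%-response check described above.

```python
# Hypothetical sketch of T-level estimation: average the current
# levels at which the percept first appeared (ascending runs) and
# last disappeared (descending runs). Ascending runs tend to give
# higher values, so combining both directions reduces the bias.

def estimate_t_level(ascending_runs, descending_runs):
    """Return the mean of all run results, in arbitrary current units."""
    runs = list(ascending_runs) + list(descending_runs)
    return sum(runs) / len(runs)

# Three ascending and three descending runs on one electrode:
print(estimate_t_level([112, 110, 114], [106, 108, 107]))  # 109.5
```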
The T level depends on the number of residual nerve fibers excited, which in turn depends on the area of the electrical field, as well as the distance of the electrode from the nerve fibers and the nature of the intervening tissue. The same applies to the MC level of hearing. The MC level is the highest stimulus intensity that can be used without causing discomfort. The level is lower for an initial rather than a continuous presentation, as adaptation occurs in the latter case. As speech is a dynamic signal, often with short bursts to individual electrodes, the lower or more conservative value should be adopted to ensure there are no unpleasant side effects. Setting the MC level correctly is especially important when the greater part of the speech signal is mapped to the top 20% of the dynamic range.

If the T and MC levels are high for bipolar stimulation, they can be brought more into the current output range of the receiver-stimulator by stimulating a greater area of the cochlea (i.e., a greater number of neurons). This is achieved with current passing between more widely separated electrodes, as discussed above (i.e., BP+n). A study by Busby et al (1994) showed that the T and MC current levels were highest for bipolar and lowest for monopolar stimuli. For common ground stimulation there was a trend for T and MC levels to be highest in the middle of the array. This could be due to the spread of the return current in both directions. With monopolar stimulation T and MC levels increased from the apical to basal ends, due to the fact that the more basal region is larger, with the electrode further from the ganglion cells, and there is often more fibrous tissue and bone near the round window affecting the spread of current. There was no consistent pattern for bipolar stimulation. Occasionally, a group of electrodes shows markedly elevated levels.
In this case, electrode discrimination needs to be investigated, as there may be poorer neural survival in that portion of the cochlea.

While measuring the T and MC levels for each electrode, it is useful to gain an impression of the pitch and timbre of the hearing sensations elicited. The pitch and timbre are most commonly reported as being dull for the more apical electrodes and sharp for the more basal electrodes. Once the levels have been measured for each electrode, they should be stimulated one at a time, at a particular level (for example, at the MC level), from one end of the electrode array to the other. This enables a check to be made that the pitch of the hearing sensations elicited corresponds to the tonotopicity of the cochlea. In the study by Busby et al (1994) on nine postlinguistically deaf patients, the general pattern of pitch estimations across electrodes was consistent with the tonotopic organization of the cochlea for both monopolar and bipolar stimulation. There was, however, a marked reversal of pitch ordering for electrodes in the middle of the array with common ground stimulation for three of the nine patients, as discussed above. Ordering of pitch can also provide an indication of the distance to which the electrode array has been inserted into the cochlea, if the listener is asked to report when the sensations become sharp. The second reason for sweeping through the electrodes is to determine whether the hearing sensations are equally loud. If the loudness is not balanced, some speech sounds can appear very soft or drop out altogether. With an imbalance in loudness, voices may seem too harsh or too echoey.
Balancing the loudness of the electrodes is not easy for the subject, particularly at first, because pitch and loudness are related; sharper or higher-pitched sounds generally sound louder than duller or lower-pitched sounds, and therefore lower comfort levels may be indicated by the listener for the sharp-sounding electrodes than for the dull-sounding electrodes. If the speech processor were programmed with these levels, the listener would report voices sounding muffled and unclear, necessitating an increase in the levels of the more basal electrodes. The T and MC levels are set after the loudness percepts are comparable across electrodes at the above intensities.

The dynamic range for each electrode is the difference in current level between the T and MC levels. Large dynamic ranges are preferable (with more current level steps), as this allows better amplitude resolution. Acoustic stimuli detected by the speech processor's microphone are presented to the implantee at levels within the dynamic range. Provided he or she has judged the MC levels appropriately, no incoming sound should produce an uncomfortably loud hearing sensation.

It is also necessary to evaluate the loudness growth function for increases in intensity at each electrode, as this may vary and lead to unpleasant or nonoptimal speech perception if it is not taken into consideration. The shape of the function can be roughly assessed by sweeping across electrodes at an intensity halfway between the T and MC levels. If an electrode sounds softer at this level, for example, this may be due to the shape of the loudness growth curve. It has been demonstrated by Zeng and Shannon (1994, 1995) that the loudness function of sinusoidal stimuli is best described as a power function for stimuli less than 300 pulses/s and an exponential function above this rate, as illustrated in Figure 11.7. The importance of accurately balancing loudness across electrodes was demonstrated in the study by Dawson et al (1997).
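The rate-dependent change in the form of the loudness growth function reported by Zeng and Shannon can be caricatured in a few lines. The exponents and coefficients below are placeholders chosen only to show the switch from a power function to an exponential at 300 pulses/s; they are not fitted psychophysical values.

```python
import math

# Illustrative sketch of loudness growth: a power function of current
# for stimulation rates below 300 pulses/s, an exponential function
# above. Parameters k, p, and a are arbitrary placeholders, not the
# values measured by Zeng and Shannon.

def loudness_growth(current, rate_pps, k=1.0, p=2.0, a=0.02):
    """Return loudness (arbitrary units) for a current and pulse rate."""
    if rate_pps < 300:
        return k * current ** p          # power-law regime
    return k * math.exp(a * current)     # exponential regime

# Under the placeholder power law (p = 2), doubling the current
# quadruples the loudness:
print(loudness_growth(200.0, 250) / loudness_growth(100.0, 250))  # 4.0
```

Because the shape differs between electrodes and rates, an intensity halfway between T and MC in current terms need not be halfway in loudness, which is why the sweep described above is useful.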
The degree of loudness imbalance in mapping the MC levels was examined in 10 adult patients. Four of them had [...] device (Fryauf-Bertschy et al 1992, 1997; Gantz et al 1994; Miyamoto et al 1996; Osberger et al 1996). Furthermore, Miyamoto et al (1996) found a continued improvement in word recognition beyond 5 years, and this highlights the need for long-term follow-up (Kirk 2000).

Analytic

Studies on the recognition of vowels and consonants are analytic exercises and should aim... (Paul 1998), academic achievement (Goldgar and Osberger 1986), and career development. Development of language with the 3M single-channel implant was discussed by Kirk and Hill-Brown (1985). Initial reports on receptive language development with the Nucleus 22 (F0/F1/F2 and Multipeak) systems on small groups of children were made by Busby et al (1989), Dowell et al (1991), Geers and Moog (1991), Hasenstab... aid and 24 implant users. The BKB speech perception results for audition alone (A), and audition and speech reading (AV) (Fig. 11.11), as well as the PPVT (Dunn and Dunn 1981, 1997) and the clinical evaluation of language fundamentals (CELF) (Wiig et al 1992; Semel et al 1995) (Fig. 11.12), were recorded. The PPVT is suitable for children from 2 years and up, and...
(Geers and Moog 1988; Boothroyd et al 1991). The study by Geers and Moog (1991) used control groups of matched children using the multiple-channel cochlear implant (Nucleus-22), conventional aids, and a two-channel vibrotactile aid (Tactaid II), and found language learning was faster for the multiple-channel implant. It thus appeared that vocabulary acquisition for profoundly deaf children with multiple-channel... et al 1981; Boothroyd et al 1991), grammar (Power and Quigley 1973), and pragmatics (Kretschmer and Kretschmer 1994). It was essential to know to what extent improvements in speech perception and production, reported above and in Chapter 12, for the Nucleus 22 cochlear prosthesis led to better receptive and expressive language. Fluent auditory-oral language has far-reaching...
Speech production in implanted children depends on general and specific factors as well as training and experience. General factors are the same as those that lead to good speech perception (see Chapter 9). In particular, age at implantation correlates negatively with speech, as shown for example by Tye-Murray et al (1995), Nikolopoulos et al (1999), and Barker et al (2000). They reported that with the Nucleus... compliance, and hence that a programming change was required. The NRT software for the Nucleus system was produced by Dillier and others at the University of Zurich, in collaboration with Cochlear Limited, in 1995. Validation of the NRT measurement technique (Abbas et al 1999) and a three-stage field trial confirmed that clear, stable, and repeatable responses were obtained in over 93% of subjects (Dillier 1998;... (1982), Ling (1976, 1984), Ling and Ling (1978), Mecklenburg et al (1987), Sims, Walter et al (1982), and Ross and Giolas (1978). Before and after surgery the child is given parental and team support. When the device is switched on, the first task is to program the speech processor correctly. The T and MC levels are set for each electrode, and the loudness levels balanced. In establishing T and MC levels, take... with 10 Nucleus Spectra-22 and SPrint subjects (McDermott et al 2002). Syllabic compression with fast attack and slower release times had been examined to improve speech understanding with hearing aids by reducing the intensity differences between consonants and vowels (Braida et al 1979; Walker and Dillon 1982; Busby et al 1988; Dillon 1996). The study showed a... (Dyar 1994) and the Speech Intelligibility Rating (SIR) (Parker and Irlam 1994). The child's ability to imitate speech can be tested using Phonetic Level Evaluation (Ling 1976) or Voice Analysis (Ling 1976). Elicited speech is from visual prompts where the child is asked to name a picture (Test of Articulation Competence) (Fisher and Logemann 1971) or verbally repeat a written sentence (McGarr 1983). ... perception results if they have the implant before approximately 2 to 4 years of age (Dowell et al 1997; Fryauf-Bertschy et al 1997; Miyamoto et al 1997). There is a critical period for the development...
