Saturday, June 7, 2008

Influences of Electromagnetic Articulography Sensors on Speech Produced by Healthy Adults and Individuals With Aphasia and Apraxia


William F Katz, Sneha V Bharadwaj, Monica P Stettler. Journal of Speech, Language, and Hearing Research. Rockville: Jun 2006. Vol. 49, Iss. 3; pg. 645, 15 pgs
Copyright American Speech-Language-Hearing Association Jun 2006
[Headnote]
Purpose: This study examined whether the intraoral transducers used in electromagnetic articulography (EMA) interfere with speech and whether there is an added risk of interference when EMA systems are used to study individuals with aphasia and apraxia.
Method: Ten adult talkers (5 individuals with aphasia/apraxia, 5 controls) produced 12 American English vowels in /hVd/ words, the fricative-vowel (FV) words (/si/, /su/, /∫i/, /∫u/), and the sentence She had your dark suit in greasy wash water all year, in EMA sensors-on and sensors-off conditions. Segmental durations, vowel formant frequencies, and fricative spectral moments were measured to address possible acoustic effects of sensor placement. A perceptual experiment examined whether FV words produced in the sensors-on condition were less identifiable than those produced in the sensors-off condition.
Results: EMA sensors caused no consistent acoustic effects across all talkers, although significant within-subject effects were noted for a small subset of the talkers. The perceptual results revealed some instances of sensor-related intelligibility loss for FV words produced by individuals with aphasia and apraxia.
Conclusions: The findings support previous suggestions that acoustic screening procedures be used to protect articulatory experiments from those individuals who may show consistent effects of having devices placed on intraoral structures. The findings further suggest that studies of fricatives produced by individuals with aphasia and apraxia may require additional safeguards to ensure that results are not adversely affected by intraoral sensor interference.
KEY WORDS: speech production, electromagnetic articulography, fricative spectral moments, aphasia, apraxia of speech


Speech production is studied using techniques that provide anatomical images or movies of articulation (e.g., cineradiography, videofluoroscopy) as well as techniques that derive individual fleshpoint data during speech movement (e.g., X-ray microbeam, Selspot, and electromagnetic articulography [EMA]). A potential complication of fleshpoint tracking systems is that the sensors used to record speech movement may themselves alter participants' speech. For instance, intraoral sensors might obstruct the speech airway, resulting in sound patterns not normally observed in speech. It is also possible that data recorded during EMA or X-ray microbeam studies may to some extent reflect participants' compensation for the presence of intraoral sensors in the vocal tract. Indirect evidence concerning these issues was provided by Perkell and Nelson (1985), who compared formant frequencies of the vowels /i/ and /u/ recorded in the Tokyo X-ray microbeam system with population means obtained in previous acoustic studies that did not involve intraoral sensors (e.g., Hillenbrand, Getty, Clark, & Wheeler, 1995; Peterson & Barney, 1952). The results suggested that X-ray microbeam pellets cause little detectable articulatory interference.
A direct test of potential articulatory interference by a fleshpoint tracking device (the University of Wisconsin X-ray microbeam system) was conducted by Weismer and Bunton (1999). The researchers examined 21 adult talkers who produced the sentence She had your dark suit in greasy wash water all year, with and without an array of X-ray microbeam pellets in place during articulation. This array included four pellets placed on the midsagittal lingual surface. The results indicated no overall differences that were consistent for all speakers. However, approximately 20% of the talkers showed acoustically detectable changes as a result of the pellets placed on the tongue during the X-ray microbeam procedure. For example, pellets-on conditions for vowel production resulted in higher F1 values for some female talkers (suggesting greater mouth opening) and lower F2 values for some male and female talkers (suggesting a more retracted tongue position) than in pellets-off conditions. These occasional acoustic differences resulting from pellet placement were not detectable in perceptual experiments designed to simulate informal listening conditions. The authors concluded that acoustic screening procedures may be important to shield articulatory kinematic experiments from individuals who show consistent effects of having devices placed on intraoral structures.
One factor that may have contributed to the differences between the findings of Perkell and Nelson (1985) and Weismer and Bunton (1999) is that the former study examined isolated vowels, while the latter examined vowels produced in a sentential context. Speech produced in citation form may differ in a number of articulatory factors from that produced in a more natural sentential context (e.g., Lindblom, 1990). For example, sounds that occur in stressed or accented syllables (hyperspeech) appear to reflect reduced coarticulation or overlap between adjacent sounds (de Jong, 1995; de Jong, Beckman, & Edwards, 1993) and greater velocity, magnitude, and duration (Beckman & Cohen, 2000; Beckman & Edwards, 1994). It is therefore possible that speech produced in more natural contexts (hypospeech) might show heightened susceptibility to articulatory interference effects, perhaps as the result of less conscious monitoring or compensation by the speaker. It is important to consider these communication contexts when examining the extent to which talkers do or do not show compensation for a given vocal tract perturbation.
An important clinical concern is that the use of fleshpoint tracking systems has not been limited to the study of speech produced by healthy adults. Rather, methods such as EMA are being increasingly applied to study (and treat) individuals with disorders such as aphasia and apraxia of speech (AOS; Katz, Bharadwaj, & Carstens, 1999; Katz, Bharadwaj, Gabbert, & Stettler, 2002; Katz, Carter, & Levitt, 2003), dysarthria (Goozée, Murdoch, Theodoros, & Stokes, 2000; Murdoch, Goozée, & Cahill, 2001; Schultz, Sulc, Leon, & Gilligan, 2000), stuttering (Peters, Hulstijn, & Van Lieshout, 2000), and developmental AOS (Nijland, Maassen, Hulstijn, & Peters, 2004). If sensor-related interference poses added problems for clinical populations, this could potentially complicate the interpretation of kinematic assessment and treatment studies. Thus, one of the main goals of this study was to replicate the findings of Weismer and Bunton (1999) with individuals having speech difficulties resulting from AOS and aphasia.
To examine these issues, adult talkers (individuals with aphasia/apraxia and healthy controls) were recorded producing speech under EMA sensors-on and sensors-off conditions. Speech samples included repeated monosyllabic /hVd/ words and the sentence She had your dark suit in greasy wash water all year. A number of temporal and spectral acoustic parameters were measured, and a perceptual experiment (with healthy adult listeners) was conducted to determine whether EMA sensors affected the intelligibility of fricative-vowel (FV) words produced by individuals with aphasia/apraxia and healthy control talkers.
Method
Participants
Participants were 10 monolingual American English-speaking adults (5 individuals with aphasia/apraxia, 5 healthy controls) from the Dallas, TX, area. There were 2 female talkers (control participant C3 and participant A2 in the aphasia/apraxia group) and 8 male talkers. Participants had no prior phonetic training or experience in EMA experimentation. Individuals in the control group reported no history of neurological or articulation disorders. Four individuals with aphasia/apraxia had been diagnosed with Broca's aphasia, and 1 had been diagnosed with anomic aphasia (see Table 1). All had AOS and an etiology of left-hemisphere cerebrovascular accident (CVA). Individuals with aphasia/apraxia were diagnosed based on clinical examination and performance on the Boston Diagnostic Aphasia Exam (Goodglass, Kaplan, & Barresi, 2001) and the Apraxia Battery for Adults, Version 2 (ABA-2; Dabul, 2000). Apraxic severity levels, based on the overall scores of the ABA-2 Impairment Profile section, ranged from mild to moderate. The age range for the aphasic/apraxic group was 38-67 years (M = 59;6 [years; months]), and that for the control group was 25-59 years (M = 55;0).


Speech Sample, Sensor Array
Testing took place in a sound-treated room at the University of Texas at Dallas, Callier Center for Communication Disorders. Speech samples included vowels in /hVd/ contexts, FV words, and the sentence She had your dark suit in greasy wash water all year. The /hVd/ and FV words were elicited in the carrier phrase, I said __ again. Seven repetitions were elicited for each sensor condition (on/off), yielding a total of 168 /hVd/ words, 56 FV words, and 14 sentences per talker. The /hVd/ words, FV words, and sentences were produced in separate blocks, with the order of stimulus type and sensor conditions (on/off) counterbalanced between talkers. Within each block, stimuli were produced in random order. Talkers repeated each item following a spoken and orthographic model (written on a 4 in. × 6 in. index card) presented by one of the experimenters (WK, a male, native speaker of American English). Speech was elicited at a comfortable speaking rate in a session lasting approximately 45 min.


The sentence She had your dark suit in greasy wash water all year was taken from the DARPA/TIMIT corpus (Garofolo, 1988). This sentence had been examined in a previous study of X-ray microbeam pellet interference (Weismer & Bunton, 1999). By including this sentence, we could compare microbeam pellet and EMA sensor effects between studies. From this sentence, segmental durations were measured, and formant frequencies were estimated for the vowels /i/, /æ/, /u/, and /a/ (taken from the words she, had, suit, and wash).
For the sensors-on condition, participants spoke with two miniature receiver coils (approximately 2 × 2 × 3 mm) attached to the lingual surface. These sensors (Model SM220) are used in commercially available EMA systems manufactured by Carstens Medizinelektronik GmbH. EMA sensors were placed (a) midline on the tongue body and (b) on the tongue tip approximately 1 cm posterior to the apex (see Figure 1). Although it is possible that greater sensor interference could occur with the placement of 3 to 4 lingual sensors, the use of two sensors was motivated by the fact that sensors placed on the superior, anterior lingual surface are involved in a variety of articulatory gestures, including palatal contact (potentially influencing sibilant production). Placement followed a standardized template system originally designed for pellet placement in the X-ray microbeam system (Westbury, 1994). Sensors were affixed to the tongue using a biocompatible adhesive, with the wires led out the corners of the participant's mouth (Katz, Machetanz, Orth, & Schoenle, 1990).1


[Photograph]
Figure 1. Example of electromagnetic articulography sensors attached to the lingual surface (midline on the tongue body and approximately 1 cm posterior to the apex).


As is common in EMA testing, before further recording, participants were given approximately 5 min to become accustomed to the presence of the EMA sensors, or longer if the investigators had not yet determined that there was no significant change in speech production attributable to the lingual EMA sensors. During this desensitization period, participants were engaged in informal conversation with the investigators.
Data Collection
Acoustic data were recorded with an Audio-Technica AT831b microphone placed 8 in. from the lips. Recordings were made with a portable DAT recorder, Teac model DA-P20. The digital waveforms were later transferred to computer disk at a rate of 48 kHz and 16-bit resolution using a DAT-Link+ digital audio interface, then down-sampled to 22 kHz for subsequent analysis.
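For readers reconstructing this preprocessing step, the sketch below shows one common way to downsample 48 kHz recordings to 22 kHz. The libraries (soundfile, scipy) and file names are illustrative assumptions, not the tools used in the original study.

```python
# Downsample a 48 kHz, 16-bit recording to 22 kHz for acoustic analysis.
# A minimal sketch; the file names and libraries are illustrative assumptions.
import soundfile as sf
from scipy.signal import resample_poly

audio, fs = sf.read("talker_A2_hVd.wav")     # hypothetical file, fs = 48000
# Polyphase resampling with an integer up/down ratio: 48000 * 11 / 24 = 22000.
audio_22k = resample_poly(audio, up=11, down=24)
sf.write("talker_A2_hVd_22k.wav", audio_22k, 22000)
```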
Acoustic Measures
From the seven productions elicited for each /hVd/ and FV word target, the first five phonemically correct productions were selected for analysis. Five productions were selected for each target because most of the individuals with aphasia/apraxia were able to produce this many correct utterances within seven attempts. Phonemically correct utterances were determined by independent transcription conducted by two of the authors (William F. Katz and Monica P. Stettler). As expected, there was no data loss for the control talkers, while individuals with aphasia/apraxia showed characteristic problems with particular speech sounds. Talker A1 had particular difficulty producing FV words, and these items were removed from further analysis. In all, 344 items were included in the FV acoustic analyses.
For individuals with aphasia/apraxia, it was more difficult to produce the sentence She had your dark suit in greasy wash water all year than to repeat single words in a carrier phrase. Accordingly, there were many cases of substitutions, omissions, and distortions in their sentential materials. Nonetheless, it was possible to select five sentences produced by each talker for duration measurement purposes and the first five phonemically correct instances of the vowels /i/, /æ/, /u/, and /a/ for formant frequency analysis.


The first three formant frequencies (F1-F3) were estimated at vowel midpoint for the vowels /i/, /æ/, /u/, and /a/. The four corner vowels were selected because they delimit the acoustic (and, by inference, articulatory) working space for vowels. Vowel formant frequencies (F1-F3) were estimated using an automated formant-tracking procedure developed by Nearey, Hillenbrand, and Assmann (2002). In this procedure, several different linear predictive coding (LPC) models varying in the number of coefficients are applied, given some assumptions about the number of expected formants in a given frequency range. The best model is then selected based on formant continuity, formant ranges, and formant bandwidth, along with a measure of the correlation between the spectrum of the original and a synthesized version. Final formant frequency values were estimated as the median of five successive measurements spaced 5 ms apart, spanning vowel midpoint.
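The sketch below illustrates the general logic of the midpoint measurement (five analysis frames spaced 5 ms apart, reduced to a median) using a plain LPC-root formant estimate via librosa. It is an assumed stand-in for illustration, not the Nearey, Hillenbrand, and Assmann (2002) multi-model tracker used in the study.

```python
# Sketch: estimate F1-F3 at vowel midpoint as the median of five frames
# spaced 5 ms apart. Uses a simple LPC-root method as a stand-in estimator.
import numpy as np
import librosa

def lpc_formants(frame, fs, order=12, n_formants=3):
    """Formant candidates from one windowed frame via LPC polynomial roots."""
    a = librosa.lpc(frame, order=order)                 # LPC coefficients
    roots = [r for r in np.roots(a) if np.imag(r) > 0]  # upper-half-plane roots
    freqs = sorted(np.angle(roots) * fs / (2 * np.pi))  # root angles -> Hz
    freqs = [f for f in freqs if f > 90]                # drop near-DC candidates
    return (freqs + [np.nan] * n_formants)[:n_formants]

def midpoint_formants(signal, fs, vowel_start_s, vowel_end_s, win_ms=25):
    """Median F1-F3 over five frames spanning the vowel midpoint."""
    mid_s = (vowel_start_s + vowel_end_s) / 2.0
    win = int(win_ms / 1000 * fs)
    estimates = []
    for offset_ms in (-10, -5, 0, 5, 10):               # five frames, 5 ms apart
        start = int((mid_s + offset_ms / 1000) * fs) - win // 2
        frame = signal[start:start + win] * np.hamming(win)
        estimates.append(lpc_formants(frame, fs))
    return np.nanmedian(np.array(estimates), axis=0)    # median F1-F3
```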
Fricative centroids were measured at fricative midpoint using TF32 software (Milenkovic, 2001). Spectral moments analysis treats the Fourier power spectrum as a random probability distribution from which four measures may be derived: centroid (spectral mean), variance (energy spread around the spectral peak), skewness (tilt or symmetry of the spectrum), and kurtosis (peakedness of the spectrum; Forrest, Weismer, Milenkovic, & Dougall, 1988; Tjaden & Turner, 1997). Although Weismer and Bunton (1999) examined all four spectral moments in their study of the effects of X-ray microbeam pellets on speech, only the first spectral moment (centroid) showed any evidence of differing as a function of pellet placement during speech. Based on these findings, as well as the data from other studies highlighting the importance of the centroid in determining fricative quality (e.g., Jongman, Wayland, & Wong, 2000; Nittrouer, Studdert-Kennedy, & McGowan, 1989; Tabain, 1998), we focused on fricative centroids as a measure of possible interference effects of EMA sensors during fricative production.
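The following sketch shows how the four spectral moments can be computed by treating a fricative frame's power spectrum as a probability distribution (after Forrest et al., 1988). The window choice is an assumption, and the code is not the TF32 implementation used in the study; it would be applied to a frame centered at fricative midpoint, as in the centroid measurements described above.

```python
# Compute the four spectral moments of a fricative frame by treating the
# Fourier power spectrum as a probability distribution.
import numpy as np

def spectral_moments(frame, fs):
    spectrum = np.abs(np.fft.rfft(frame * np.hamming(len(frame)))) ** 2
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / fs)
    p = spectrum / spectrum.sum()                        # normalize to a distribution
    centroid = np.sum(freqs * p)                         # M1: spectral mean
    variance = np.sum(((freqs - centroid) ** 2) * p)     # M2: spread
    skewness = np.sum(((freqs - centroid) ** 3) * p) / variance ** 1.5  # M3: tilt
    kurtosis = np.sum(((freqs - centroid) ** 4) * p) / variance ** 2    # M4: peakedness
    return centroid, variance, skewness, kurtosis
```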
Perceptual Measures
Ten native speakers of American English with a background in speech-language pathology volunteered as listeners. Listeners ranged from 23 to 53 years of age (M = 28 years). All listeners had taken a course in phonetics and reported no speech, language, or hearing problems.
Stimuli consisted of the syllables /si/, /su/, /∫i/, and /∫u/, produced by individuals with aphasia and apraxia and by healthy control talkers under sensors-on and sensors-off conditions. There were 200 productions by the control talkers (5 participants × 2 fricatives × 2 vowels × 2 sensor conditions × 5 repetitions) and 144 productions by the 4 individuals with aphasia/apraxia. As noted previously, the FV productions of Talker A1 and the /∫i/ productions of Talker A3 were eliminated due to high error rates. All stimuli were adjusted to the same peak amplitude, resulting in levels between 65 and 72 dB SPL(A).
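A minimal sketch of the peak-amplitude normalization described above is given below; the file handling and target peak level are illustrative assumptions, not the procedure actually used to prepare the stimuli.

```python
# Peak-normalization sketch: scale each stimulus so all tokens share the same
# peak amplitude before presentation. Target level and files are hypothetical.
import numpy as np
import soundfile as sf

def normalize_peak(in_path, out_path, target_peak=0.9):
    audio, fs = sf.read(in_path)
    audio = audio * (target_peak / np.max(np.abs(audio)))
    sf.write(out_path, audio, fs)
```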
The FV word identification task was conducted in a sound-treated room at the University of Texas at Dallas, Callier Center for Communication Disorders. Listeners were instructed that they would hear the words /si/, /su/, /∫i/, and /∫u/ produced by adult talkers (including individuals with aphasia/apraxia and healthy controls) under conditions of having EMA sensors on or off the tongue during speech. Productions by individuals with aphasia/apraxia and by healthy controls were presented in randomized (mixed) order. The listener's task was to identify each word by clicking one of four response panels (labeled with IPA symbols and the words see, she, Sue, and shoe) on a computer screen. Before the experiment, listeners first completed a practice set in which they were given 16 stimuli presented through headphones. The practice session was designed to familiarize participants with the range of variation in the quality of fricatives to be identified in the main experiment and with the task itself. The materials for this practice session included productions by individuals with aphasia/apraxia and healthy control talkers other than those used in the main experiment. In the main experiment, listeners identified a total of 344 words. The experiment was self-paced, and listeners were allowed to replay each stimulus any number of times (by pressing a replay button) before giving their answer. Listeners completed the experiment in one session lasting approximately 40 min.
Results
Segment Durations
Figure 2 shows mean vowel durations (and standard errors) for phonemically correct /hVd/ words produced by the two talker groups in sensors-on and sensors-off conditions. A mixed-design, repeated measures analysis of variance (ANOVA) was conducted with group (aphasic/apraxic, control) as the between-subject variable and vowel (the 12 /hVd/ vowels) and sensor condition (on/off) as within-subject variables. Results indicated a significant main effect for vowel, F(11, 96) = 9.09, p < .0001, and a significant Vowel × Group interaction, F(11, 96) = 1.99, p = .0376. These effects reflect two main patterns: (a) vowel-specific differences among the 12 vowels investigated (e.g., tense vowels longer than lax vowels) and (b) greater vowel-specific durational differences in aphasic/apraxic as opposed to control talker group productions. Figure 2 also indicates that productions by individuals with aphasia/apraxia were generally longer than those of normal control talkers, although this group difference did not reach significance, F(1, 96) = 1.09, p = .299, ns. Critically, there was no significant main effect for sensor condition, and no other significant two-way or three-way interactions. Although one must be careful when interpreting negative findings, the fact that vowel and group factors revealed significant effects (whereas sensor condition did not) suggests EMA sensors do not affect the duration of vowels in /hVd/ contexts produced by healthy adults or individuals with aphasia/apraxia.
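As a rough illustration of this kind of mixed-design analysis, the sketch below sets up a simplified version with the pingouin library. Because pingouin's mixed_anova accepts only a single within-subject factor, it tests only the sensor factor here and does not reproduce the full Group × Vowel × Sensor design; the data file and column names are hypothetical.

```python
# Sketch of a mixed-design ANOVA on vowel durations with pingouin.
# Simplified to one between-subject factor (group) and one within-subject
# factor (sensor condition); the DataFrame columns are hypothetical.
import pandas as pd
import pingouin as pg

durations = pd.read_csv("vowel_durations.csv")   # columns: talker, group, sensor, duration
aov = pg.mixed_anova(data=durations, dv="duration",
                     within="sensor", subject="talker", between="group")
print(aov[["Source", "F", "p-unc"]])
```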


Figure 2. Mean vowel durations and standard errors for /hVd/ productions by healthy adults and individuals with aphasia/apraxia, shown by speaking condition (open bars = sensors off, shaded bars = sensors on).


Figure 3 contains sensors-on and sensors-off duration data for sentences produced by the two talker groups. As expected, productions by individuals with aphasia/apraxia had overall longer durations than those of the control talkers. However, there was little systematic difference for either talker group as a function of sensor condition.


Figure 3. Mean segment durations and standard errors for productions by healthy adults and individuals with aphasia/apraxia, shown by speaking condition (open bars = sensors off, shaded bars = sensors on).


Post hoc analyses of the three-way (Group × Segment × Sensor Condition) interaction focused on the effects of sensors (off vs. on) for phonemes produced by each of the two talker groups. In these analyses, the differences of least squares means were computed (t tests), with significance set at p < .01 to correct for multiple comparisons. Results indicated no significant sensors-off versus sensors-on differences for segments produced by the healthy control talkers. For productions by individuals with aphasia/apraxia, three segments showed significant sensor effects (/jrr/, /d/, and /ar/). However, the direction of these effects was not consistent: /pj/ and /ar/ durations were shorter in the sensors-on condition, while /d/ durations were shorter in the sensors-off condition.
In summary, the data revealed expected group and segment differences, while sensor condition had little systematic effect. We interpret these data as showing little difference between individuals with aphasia/ apraxia and healthy adults with respect to possible durational interference from EMA sensors.
Vowel Formant Frequencies and Trajectories
The average formant frequencies (F1-F3) of the vowel portions of the words /hid/, /hæd/, /hud/, and /had/ are summarized by group, vowel, and condition in Table 2. Following Weismer and Bunton (1999), between-condition differences of 75, 150, and 200 Hz (for F1, F2, and F3, respectively) were operationally defined as minimal criteria for intraoral sensor interference. These values are based on considerations of typical measurement error for F1-F3 formant values (Lindblom, 1962; Monson & Engebretson, 1983) and on difference limens data for formant frequencies (Kewley-Port & Watson, 1994).
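The sketch below shows one way to apply these operational criteria to paired sensors-on/sensors-off formant means; the data structure and the example values are hypothetical, chosen only to illustrate the threshold logic.

```python
# Flag sensors-on vs. sensors-off formant differences that meet the
# operational interference criteria of 75, 150, and 200 Hz for F1-F3
# (after Weismer & Bunton, 1999). Input dictionaries are hypothetical.
CRITERIA_HZ = {"F1": 75, "F2": 150, "F3": 200}

def flag_interference(formants_off, formants_on):
    """Each argument maps (talker, vowel, formant) -> mean frequency in Hz."""
    flagged = []
    for key, off_value in formants_off.items():
        talker, vowel, formant = key
        diff = abs(formants_on[key] - off_value)
        if diff >= CRITERIA_HZ[formant]:
            flagged.append((talker, vowel, formant, round(diff, 1)))
    return flagged

# Hypothetical example: an F2 difference of 180 Hz exceeds the 150 Hz criterion.
off = {("A4", "u", "F2"): 1020.0}
on = {("A4", "u", "F2"): 840.0}
print(flag_interference(off, on))   # [('A4', 'u', 'F2', 180.0)]
```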


Table 2. Mean formant frequency values for healthy adult (control) talkers and individuals with aphasia/apraxia across speaking conditions (/hVd/ productions).


Of the 120 sensors-on/sensors-off comparisons (10 talkers × 4 vowels × 3 formants) shown in Table 2, 4 (3%) reached criteria: F1 of /a/ produced by Talker A2, F3 of /u/ produced by Talker A4, and F1 and F2 of /u/ by Talker A4. These cases are boldface in Table 2. Keeping in mind the caveat that formant frequency patterns are at best a first approximation of causality (Borden, Harris, & Raphael, 2003), one can nevertheless consider tube perturbation theory (e.g., Stevens, 2000; Stevens & House, 1955) to speculate about some possible articulatory explanations for these patterns of formant frequency change. For Talker A2, lowered F1 for /a/ suggested a higher overall tongue position in the sensors-on condition. For Talker A4, sensors-on productions showed higher F3 for /u/, suggesting an increased point of constriction between the teeth and alveolar ridge, or at the pharynx. Talker A4's /u/ productions also showed increased F1 in the sensors-on condition (implying a lowered tongue position) and a decreased F2 (suggesting a more retracted tongue position).
The few observed cases of potential sensor interference occurred for individuals with aphasia/apraxia, suggesting that these individuals may show greater intraoral interference effects than healthy control talkers. However, the vowel formant frequencies produced by individuals with aphasia/apraxia were more variable than those of the control talkers (as reflected by 49% greater standard deviations), raising the possibility that these sensor-dependent differences were a by-product of increased variability per se (and not due to increased susceptibility to intraoral interference).
To better understand the effects of sensors during vowel production, we examined vowel formant frequency trajectories. These data addressed the question of whether EMA sensors affect vowel spectral change over time, a property claimed to be a form of dynamic vowel specification (e.g., Strange, 1989). Vowel formant frequency trajectories for the /hVd/ utterances were estimated using an LPC-based, pitch synchronous tracking algorithm (TF32, Milenkovic, 2001). Overlapping plots of these trajectories were made for the four cases of sensor-related formant frequency differences. For three of these cases, there were no apparent differences in trajectory shape or duration as a function of sensor condition. In contrast, Participant A4's /u/ F2 values showed a qualitative difference in formant frequency transitions, with the off condition being relatively steady state and the sensors-on data showing a more curved pattern. These patterns are shown in Figure 4, which is an overlapping plot of the F2 trajectories of 10 /u/ productions by A4. The five productions made in the sensors-off conditions are plotted in crosses, and the five produced in the sensors-on conditions are plotted in circles.


Figure 4. Overlapping F2 formant trajectories for /u/ produced by Talker A4 (an individual with aphasia/apraxia). Productions made in the sensors-off condition are plotted with crosses, and those made in the sensors-on condition are plotted with circles.


The vowels /i/, /æ/, /u/, and /a/ were measured in the words she, had, suit, and wash, taken from the sentence She had your dark suit in greasy wash water all year. The average formant frequencies (F1-F3) are summarized by group, vowel, and condition in Table 3. As in the case of the /hVd/ data, the same between-condition formant frequency differences were used as minimal criteria for intraoral sensor interference (Weismer & Bunton, 1999).
As shown in Table 3, seven between-condition comparisons (5.8% of the data) reached criteria. There was no obvious pattern for these sensor-related differences to favor a specific vowel, talker group, or formant. Also, the direction of sensor-related effects did not suggest any one articulatory pattern for these talkers. For example, Control Talker C3 produced lower F1 values for /a/ in the sensors-on condition, suggesting a higher tongue position for this low vowel (possibly a case of undershoot). In contrast, Talker A4 produced lower F2 values for /u/ in the sensors-on condition, suggesting either lingual overshoot for this back/high vowel, or perhaps compensatory lip rounding.
As with talkers' /hVd/ data, vowel formant frequency trajectories were inspected to determine whether the presence of sensors produced any qualitative difference in trajectory shape. The data revealed no special cases of formant trajectory difference due to sensor placement.
Fricative Spectra
Table 4 shows centroid values for healthy control talkers and for individuals with aphasia/apraxia, listed separately for sensors-off and sensors-on conditions. As mentioned previously, the FV productions of Talker A1 and the /∫i/ productions of Talker A3 were not included in this analysis due to these participants' difficulties producing these sounds. Using a minimum difference of at least 1 kHz between conditions as significant (Weismer & Bunton, 1999), two of the complete set of 35 sensors-on versus sensors-off comparisons reached significance. These cases are boldface in Table 4. Both cases were for productions by Talker A2, who showed higher centroid values in the sensors-on condition for /∫i/ and /∫u/.


Table 3. Mean formant frequencies for control talkers and individuals with aphasia/apraxia across speaking conditions (sentential productions).


To further examine possible interference effects of EMA sensors on fricative spectra, histograms were plotted for each talker's /s/ and /∫/ productions, with data plotted separately for the sensors-off and sensors-on talking conditions. Previous studies have shown that repeated productions of /s/ and /∫/ by healthy adult talkers have clearly distinguishable centroid values, while productions by individuals with aphasia/apraxia are more variable and overlapped (Haley et al., 2000; Harmes et al., 1984; Ryalls, 1986). Similar patterns were noted in the present data: All 5 healthy control talkers produced bimodal centroid patterns separated by approximately 3 kHz, in both sensors-off and sensors-on conditions. Of the 4 individuals with aphasia/apraxia included in this analysis, 3 had greater-than-normal spectral overlap in both the sensors-off and sensors-on conditions, with no increased spectral overlap as the result of sensors being present. However, Talker A2 produced clearly distinguishable /s/ and /∫/ centroids in the sensors-off condition (resembling those of the normal talkers) and a highly overlapped pattern in the sensors-on condition. Thus, these data reinforce the minimal distance findings (Table 4) in suggesting that Talker A2 showed acoustic evidence of EMA sensor interference during fricative production.
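A screening sketch along these lines is given below: it applies the 1 kHz between-condition criterion to a talker's mean centroids and computes a simple measure of /s/ versus /∫/ centroid separation within a condition. The centroid values are hypothetical examples, not the study's data.

```python
# Screening sketch for fricative centroid interference: apply the 1 kHz
# between-condition criterion and check how cleanly a talker's /s/ and /sh/
# centroid distributions separate in a given sensor condition.
import numpy as np

def condition_difference_exceeds(centroids_off, centroids_on, criterion_khz=1.0):
    """True if mean centroid shifts by >= 1 kHz between sensor conditions."""
    return abs(np.mean(centroids_on) - np.mean(centroids_off)) >= criterion_khz

def category_separation(s_centroids, sh_centroids):
    """Gap (kHz) between lowest /s/ and highest /sh/ centroid; <= 0 means overlap."""
    return min(s_centroids) - max(sh_centroids)

# Hypothetical centroids (kHz) for one talker in the sensors-on condition.
s_on = np.array([7.9, 8.2, 8.0, 7.7, 8.1])
sh_on = np.array([5.6, 6.9, 7.8, 6.2, 7.5])
print(category_separation(s_on, sh_on))   # small or negative -> overlapping categories
```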
Identification Scores
Listeners did well on the FV word identification task, with mean performance ranging from 90% to 93% correct across the individual listeners. Figure 5 shows near-ceiling performance for words produced by healthy control talkers (98.8%) and lower accuracy for words produced by individuals with aphasia/apraxia (82.7%). Figure 5 also indicates that intelligibility varied as a function of word and sensor condition for productions by individuals with aphasia/apraxia.
The data were analyzed with a three-way (Talker Group × Word × Sensor Condition) repeated measures ANOVA. The results indicated significant main effects for group, F(1, 9) = 526.1, p < .0001, and sensor condition, F(1, 9) = 26.46, p < .0006, with a significant Group × Sensor Condition interaction, F(1, 9) = 22.1, p = .0011. These findings reflect lower identification scores for productions by the aphasic/apraxic group than for the healthy control group and higher values for the sensors-off (92.9%) than sensors-on (88.7%) conditions. Critically, there were no significant sensor-related intelligibility differences for productions by healthy control talkers, while individuals with aphasia/apraxia produced speech that was more intelligible in sensors-off (86.9%) than sensors-on (78.7%) conditions. There was also a significant Word × Sensor Condition interaction, F(3, 27) = 11.78, p < .0001, and a Group × Word × Sensor Condition interaction, F(3, 27) = 7.7, p < .0007. Post hoc analyses (Scheffé, p < .01) investigating the three-way interaction indicated that /∫i/ productions by individuals with aphasia/apraxia were significantly less intelligible in the sensors-on than sensors-off conditions (marked with an asterisk in Figure 5). Individual talker data were examined to determine whether decreased intelligibility for /∫i/ produced under the sensors-on condition held for all members of the aphasic/apraxic group. The results showed that this pattern obtained for 3 of the 4 talkers with aphasia/apraxia who were included in this analysis.


Table 4. Mean centroid values (kHz) for fricatives across speaking conditions.
Figure 5. Identification of fricative-vowel stimuli produced by healthy adult (control) talkers and talkers with aphasia/apraxia, under two speaking conditions (sensors off and on). Error bars show standard errors.


The perceptual data were compared with the fricative centroid measurements of the FV stimuli (described in Table 4). For Talker A2, correspondences between acoustic and perceptual findings fell in an expected direction. Fricatives produced by this talker had significantly higher centroid values for both /∫i/ and /∫u/ in the sensors-on compared with the sensors-off condition (/∫i/: sensors-off, 4.675 kHz, sensors-on, 5.845 kHz; /∫u/: sensors-off, 5.765 kHz, sensors-on, 6.985 kHz). These higher centroid values for /∫/ should presumably have shifted listener judgments toward /s/, thereby lowering correct identification. Indeed, this pattern obtained, with lower ratings for A2's /∫V/ productions in the sensors-on condition (84%) than sensors-off condition (99%). However, for the other three individuals with aphasia/apraxia (A3, A4, and A5), the correspondence between acoustic and perceptual data was less robust. For these talkers, there were no cases of sensor-related centroid differences greater than 1 kHz, yet a token-by-token analysis of the perceptual data revealed instances of substantial sensor-related intelligibility differences.
In summary, the acoustic and perceptual data were sensitive to talker group differences, and both data sources suggested that productions by healthy control talkers show minimal interference from EMA sensors. However, the acoustic and perceptual data showed less agreement with respect to individual talker and stimulus details for productions by individuals with aphasia/apraxia.
Discussion
Point-parameterized estimates of vocal tract motion are increasingly reported in the literature, both for healthy adults and for talkers with speech and language deficits. The results of these investigations are used to address models of speech production, as well as clinical issues such as the assessment and remediation of speech and language disorders. EMA systems play a growing role in this research. Although one study has examined the effects of X-ray microbeam pellets on speech produced by healthy adult talkers, the effects of EMA sensors on speech have not yet been investigated. It is also not known whether the risk of sensor interference is increased in talkers with speech disorders subsequent to brain damage. To address these questions, the current study examined a number of acoustic speech parameters (including segmental duration, vowel formant frequencies and trajectories, and fricative centroid values) in productions by individuals with aphasia/apraxia and age-matched healthy adult talkers under EMA sensors-off and sensors-on conditions. For most of these measures, citation form and sentential utterances were compared to determine whether subtle sensor-related differences could be detected across speech modes. A perceptual study using healthy adult listeners was conducted to obtain identification accuracy for FV words produced by individuals with aphasia/apraxia and by healthy adult talkers.


Considering next the possibility of spectral interference from EMA sensors, analysis of /hVd/ productions revealed vowel formant frequency values for productions by 2 individuals with aphasia/apraxia (A2, A4) that exceeded operationally defined thresholds for intraoral sensor interference. However, only one vowel was affected for each talker (/a/ for A2, /u/ for A4), suggesting minimal interference even for these talkers. When sensor-related formant frequency differences for /i/, /æ/, /u/, and /a/ were examined in words taken from the sentence She had your dark suit in greasy wash water all year, a small number of cases (5.8%) reached criteria, with no obvious tendency for these sensor-related differences to favor a specific vowel, talker group, or formant. Inspection of vowel formant frequency trajectories revealed only one apparent case of sensor-related difference (F2 trajectories for /u/ produced by A4). Taken together, the data suggest little EMA sensor interference affecting either vowel steady-state measures or vowel dynamic qualities. These acoustic findings support both informal evaluations by researchers (e.g., Schönle et al., 1987) and participants' self-reports indicating that EMA sensor interference during vowel production is minimal.
Potential spectral interference for consonants was assessed by measuring fricative centroid values for the words /si/, /su/, /∫i/, and /∫u/ produced under sensors-on and sensors-off conditions. The results indicated that Talker A2 showed significantly higher centroids in the sensors-on condition for /∫i/ and /∫u/. Histograms of centroid values for repeated productions of this talker's fricatives indicated a distinct, bimodal pattern in the sensors-off condition and greatly increased overlap for the sensors-on condition. Taken together, these data suggest Talker A2 had particular difficulty producing fricatives under EMA sensors-on conditions. The direction of the /∫V/ shift for this talker (higher centroids in the sensors-on condition) was similar to that observed by Weismer and Bunton (1999) for sentential productions by normal participants in the X-ray microbeam system. These authors suggested three possible explanations for such a shift: (a) a vocal tract constriction somewhat more forward; (b) greater overall effort in utterance production, with higher flows through the fricative constriction and consequently greater energy in the higher frequencies of the source spectrum (Shadle, 1990); and (c) sensors acting like obstacles in the path of the flow, increasing the high-frequency energy in the turbulent source and thus contributing to the first spectral moment differences. Another possibility may be a saturation effect difference, consisting of lower tongue tip contact with the alveolar ridge for /s/, but not /∫/ (Perkell et al., 2004). Conceivably, the EMA sensor could have interfered with tongue tip contact patterns, resulting in a more /s/-like quality for /∫/ attempts.
An identification experiment examined whether EMA sensors affected the intelligibility of participants' /si/, /su/, /∫i/, /∫u/ productions. The results revealed an interaction between talker group and sensor condition (on/off). Productions by healthy adult talkers were identified almost perfectly, with no apparent effects of sensor interference. These data support previous findings from perceptual rating tasks that showed healthy adult talkers produce no discernible evidence of speech being made with or without X-ray microbeam pellets attached (Weismer & Bunton, 1999). Nevertheless, because the data for productions by healthy control talkers were essentially at ceiling, it is possible that subtle effects of sensor interference might emerge if the task were made more difficult for the listeners. Future studies might explore this issue further by presenting stimuli under more demanding conditions, such as in the presence of noise masking.
In the current study, FV productions by individuals with aphasia/apraxia were identified with lower accuracy than those of healthy controls, a finding consistent with clinical descriptions of imprecise fricative production in aphasia and apraxia (e.g., Haley et al., 2000). There was also evidence consistent with an interpretation of sensor-related interference: Productions by individuals with aphasia/apraxia were less intelligible in sensors-on versus sensors-off conditions, a pattern that was significant for the word /∫i/. On closer inspection, the significant results for /∫i/ appear to have resulted from unusually high intelligibility for sensors-off productions, rather than from lowered intelligibility for sensors-on productions. Why the /∫i/ productions of individuals with aphasia/apraxia were so intelligible is not entirely clear. Nevertheless, despite this one unusual pattern, the perceptual data generally suggest that individuals with aphasia/apraxia have greater-than-normal difficulty producing sibilant fricatives under EMA sensor conditions.
Because EMA sensors pose the same type of physical obstruction to the oral cavity in healthy control talkers and in individuals with aphasia/apraxia, it seems reasonable to assume that any additional difficulties noted in the productions of individuals with aphasia/apraxia may be due to deficits in the ability to compensate for the presence of EMA sensors during speech. If it is further assumed that the ability to adapt to the presence of EMA sensors is functionally related to the compensatory ability needed to overcome the presence of other types of intraoral obstructions (e.g., a bite block), the present data support previous claims that individuals with aphasia/apraxia have intact compensatory articulation abilities during vowel production (Baum, Kim, & Katz, 1997).
However, the fricative findings give some indication of possible compensatory difficulties in the speech of individuals with aphasia/apraxia. These talkers, considered as a group, showed greater perceptual effects from EMA sensors than healthy normal controls. Inspection of individual talker data revealed that decreased intelligibility in the sensors-on conditions occurred for 3 of the 4 talkers with aphasia/apraxia. The most consistent case of sensor-related effects was Talker A2, whose /∫V/ productions also showed increased centroid overlap in the sensors-on condition. Cumulatively, these data provide tentative evidence that compensatory problems may underlie the difficulty that some individuals with aphasia/apraxia experience while producing fricatives under EMA conditions.
Baum and McFarland (1997) noted that healthy adults producing the fricative /s/ under artificial palate conditions show marked improvement after as little as 15 min of intense practice with the palate in place. Although the perturbations involved in the current study are arguably different than those resulting from an artificial palate, it is conceivable that practice speaking with EMA tongue tip sensors attached was sufficient to allow substantial adaptation for fricatives produced by the healthy control talkers but not for the talkers with aphasia/ apraxia. Additional experimentation that includes testing after practice would help address this issue.
Whereas the acoustic and perceptual data for productions by healthy control talkers were quite congruent, the perceptual data for speech produced by individuals with aphasia/apraxia did not always correspond with the patterns one would expect based on the fricative centroid values. A mismatch between listeners' perceptions and fricative spectral attributes has been noted in previous studies of incorrect /∫/ productions by individuals with aphasia/apraxia (Wambaugh, Doyle, West, & Kalinyak, 1995). Whereas the present perceptual data appeared sensitive to talker group, word, and sensor differences, there are a number of possible reasons why measured centroid values did not predict listeners' results. One possibility is that a combination of spectral moments could provide improved predictive power, as suggested by previous studies of fricatives produced by normal healthy speakers (Forrest et al., 1988; Jongman et al., 2000). Another possibility is that predictive power could be improved by considering profiles of successive spectral moment portraits over time, such as suggested in the FORMOFFA model for the analysis of normal and disordered speech (Buder, Kent, Kent, Milenkovic, & Workinger, 1996).
By including both citation form and sentential speech samples, the present study tested the hypothesis that speech produced in sentential contexts would reveal greater EMA sensor interference than citation form contexts. Relatively little support was found for this hypothesis. Although both segment durations and vowel formant frequencies were slightly more affected in the sentential stimuli than in single-word productions, these effects were noted primarily for productions by individuals with aphasia/apraxia, and the effects were not uniform across individual talkers or stimuli. Overall, the results suggest that citation form and sentential utterances show little difference with respect to their effectiveness in eliciting acoustic evidence of EMA sensor interference.
In conclusion, there are two important methodological implications of the present findings. First, the data support the observation made by Weismer and Bunton (1999) that perceptual indices will not provide adequate screening criteria to protect kinematic experiments from normal healthy individuals with consistent sensor interference effects. Weismer and Bunton noted that listeners were unable to reliably determine whether stimuli were produced with X-ray microbeam pellets on or off. In the present data, listeners showed strong ceiling effects and no influence of EMA sensors when identifying fricatives produced by healthy control talkers. Taken together, these two experiments examining different fleshpoint tracking technologies suggest that acoustic screening techniques be used to identify those individuals who may show consistent effects of having sensors placed in the oral cavity. As noted by Weismer and Bunton, this protocol could involve recording speech sounds in sensors-off and sensors-on conditions, followed by acoustic analyses. The present results suggest it will be especially important to examine sibilant production.
Second, the current findings suggest that intervention studies involving consecutive EMA measurement of speech produced by individuals with aphasia/apraxia must be designed to ensure that any observed progress does not merely reflect participants adapting to the presence of the sensors over time. This potential confound can be circumvented by taking appropriate safeguards in experimental design, such as probing for stimulus generalization outside of the training set (Katz et al., 2002, 2003). At present, this concern would appear limited to studies of sibilant production by individuals with aphasia/apraxia. Additional studies are needed to determine the exact articulatory explanations for these interference effects and whether such problems extend to other classes of sounds or productions by individuals with different types of speech disorders.
Acknowledgments
Portions of the results were presented in 2001 at the 39th Meeting of the Academy of Aphasia (Boulder, CO). This research was supported by Callier Excellence Award 19-02. We would like to thank June Levitt, Nicole Rush, and Michiko Yoshida for assistance with acoustic analysis.
[Footnote]
1 In some laboratories (e.g., University of Munich Institute of Phonetics and Speech Communication), EMA sensors are attached in such a way that the wire is first oriented toward the back of the mouth, reducing the risk of wires going over the tongue tip.


[Reference]
References
Baum, S. R., Kim, J. A., & Katz, W. F. (1997). Compensation for jaw fixation by aphasic patients. Brain and Language, 15, 354-376.
Baum, S. R., & McFarland, D. H. (1997). The development of speech adaptation to an artificial palate. Journal of the Acoustical Society of America, 102, 2353-2359.
Beckman, M. E., & Cohen, K. B. (2000). Modeling the articulatory dynamics of two levels of stress contrast. In M. Horne (Ed.), Prosody: Theory and experiment (pp. 169-200). Dordrecht, The Netherlands: Kluwer.
Beckman, M. E., & Edwards, J. (1994). Articulatory evidence for differentiating stress categories. In P. A. Keating (Ed.), Papers in laboratory phonology III: Phonological structure and phonetic form (pp. 7-33). Cambridge, England: Cambridge University Press.
Borden, G., Harris, K., & Raphael, L. (2003). Speech science primer: Physiology, acoustics, and perception of speech. Baltimore: Lippincott Williams & Wilkins.
Buder, E. H., Kent, R. D., Kent, J. F., Milenkovic, P., & Workinger, M. S. (1996). FORMOFFA: An automated formant, moment, fundamental frequency, amplitude analysis of normal and disordered speech. Clinical Linguistics and Phonetics, 10, 31-54.
Crystal, T., & House, A. (1988a). The duration of American English consonants: An overview. Journal of Phonetics, 16, 285-294.
Crystal, T., & House, A. (1988b). The duration of American English vowels: An overview. Journal of Phonetics, 16, 263-284.
Crystal, T., & House, A. (1988c). Segmental durations in connected-speech signals: Current results. Journal of the Acoustical Society of America, 83, 1553-1573.
Dabul, B. (2000). Apraxia Battery for Adults (ABA-2). Tigard, OR: C.C. Publications.
de Jong, K. (1995). The supraglottal articulation of prominence in English: Linguistic stress as localized hyperarticulation. Journal of the Acoustical Society of America, 97, 491-504.
de Jong, K., Beckman, M. E., & Edwards, J. (1993). The interplay between prosodic structure and coarticulation. Language and Speech, 36, 197-212.
Engwall, O. (2000). Dynamical aspects of coarticulation in Swedish fricatives: A combined EMA & EPG study. Quarterly Progress and Status Report from the Department of Speech, Music, and Hearing at the Royal Institute of Technology [KTH], Stockholm, Sweden, 4, 49-73.
Forrest, K., Weismer, G., Milenkovic, P., & Dougall, R. (1988). Statistical analysis of word-initial voiceless obstruents: Preliminary data. Journal of the Acoustical Society of America, 84, 115-123.
Garofolo, J. S. (1988). Getting started with the DARPA TlMIT CDROM: An acoustic phonetic continuous speech database. Gaithersburg, MD: National Institute of Standards and Technology.
Goodglass, H., Kaplan, E., & Barresi, B. (2001). The assessment of aphasia and related disorders (3rd ed.). Philadelphia: Lea & Febiger.
Goozée, J. V., Murdoch, B. E., Theodoros, D. G., & Stokes, P. D. (2000). Kinematic analysis of tongue movements following traumatic brain injury using electromagnetic articulography. Brain Injury, 14, 153-174.
Haley, K. L., Ohde, R. N., & Wertz, R. T. (2000). Precision of fricative production in aphasia and apraxia of speech: A perceptual and acoustic study. Aphasiology, 14, 619-634.
Hardcastle, W. J. (1987). Electropalatographic study of articulation disorders in verbal dyspraxia. In J. Ryalls (Ed.), Phonetic approaches to speech production in aphasia (pp. 113-136). Boston: College-Hill.
Harmes, S., Daniloff, R., Hoffman, P., Lewis, J., Kramer, M., & Absher, R. (1984). Temporal and articulatory control of fricative articulation by speakers with Broca's aphasia. Journal of Phonetics, 12, 367-385.
Hillenbrand, J. M., Getty, L. A., Clark, M. J., & Wheeler, K. (1995). Acoustic characteristics of American English vowels. Journal of the Acoustical Society of America, 97, 3099-3111.
Jongman, A., Wayland, R., & Wong, S. (2000). Acoustic characteristics of English fricatives. Journal of the Acoustical Society of America, 108, 1252-1263.
Katz, W., & Bharadwaj, S. (2001). Coarticulation in fricative-vowel syllables produced by children and adults: A preliminary report. Clinical Linguistics and Phonetics, 15, 139-144.
Katz, W., Bharadwaj, S., & Carstens, B. (1999). Electromagnetic articulography treatment for an adult with Broca's aphasia and apraxia of speech. Journal of Speech, Language, and Hearing Research, 42, 1355-1366.
Katz, W., Bharadwaj, S., Gabbert, G., & Stettler, M. (2002). Visual augmented knowledge of performance: Treating place-of-articulation errors in apraxia of speech using EMA. Brain and Language, 83, 187-189.
Katz, W., Carter, G., & Levitt, J. (2003). Biofeedback treatment of buccofacial apraxia using EMA. Brain and Language, 87, 175-176.
Katz, W., Machetanz, J., Orth, U., & Schoenle, P. (1990). A kinematic analysis of anticipatory coarticulation in the speech of anterior aphasic subjects using electromagnetic articulography. Brain and Language, 38, 555-575.
Kewley-Port, D., & Watson, C. S. (1994). Formant frequency discrimination for isolated English vowels. Journal of the Acoustical Society of America, 95, 485-496.
Klich, R., Ireland, J., & Weidner, W. (1979). Articulatory and phonological aspects of consonant substitutions in apraxia of speech. Cortex, 15, 451-470.
Lindblom, B. (1962). Accuracy and limitations of sonographic measurements. Proceedings of the 4th International Congress of Phonetic Sciences. The Hague, The Netherlands: Mouton.
Lindblom, B. (1990). Exploring phonetic variation: A sketch of the H-and-H theory. In W. J. Hardcastle & A. Marchal (Eds.), Speech production and speech modeling (pp. 403-439). Dordrecht, The Netherlands: Kluwer Academic.
Mertus, J. (2002). BLISS [Software analysis package]. Providence, RI: Author.
Milenkovic, P. (2001). Time-frequency analyzer (TF32) [Software analysis package]. Madison: University of Wisconsin.
Monson, R., & Engebretson, A. M. (1983). The accuracy of formant frequency measurements: A comparison of spectrographic analysis and linear prediction. Journal of Speech and Hearing Research, 26, 89-97.
Murdoch, B., Goozée, J. V., & Cahill, L. (2001). Dynamic assessment of tongue function in children with dysarthria associated with acquired brain injury using electromagnetic articulography. Brain Impairment, 2, 63.
Nearey, T. M., Hillenbrand, J. M., & Assmann, P. F. (2002). Evaluation of a strategy for automatic formant tracking. Journal of the Acoustical Society of America, 112, 2323.
Nijland, L., Maassen, B., Hulstijn, W., & Peters, H. F. M. (2004). Speech motor coordination in Dutch-speaking children with DAS studied with EMMA. Journal of Multilingual Communication Disorders, 2, 50-60.
Nittrouer, S., Studdert-Kennedy, M., & McGowan, R. S. (1989). The emergence of phonetic segments: Evidence from the spectral structure of fricative vowel syllables spoken by children and adults. Journal of Speech and Hearing Research, 32, 120-132.
Odell, K., McNeil, M. R., Rosenbek, J. C., & Hunter, L. (1990). Perceptual characteristics of consonant production by apraxic speakers. Journal of Speech and Hearing Disorders, 55, 349-359.
Perkell, J. S., Matthies, M. L., Tiede, M., Lane, H., Zandipour, M., Marrone, N., et al. (2004). The distinctness of speakers' /s/-/∫/ contrast is related to their auditory discrimination and use of an articulatory saturation effect. Journal of Speech, Language, and Hearing Research, 47, 1259-1269.
Perkell, J. S., & Nelson, W. L. (1985). Variability in production of the vowels /i/ and /u/. Journal of the Acoustical Society of America, 77, 1889-1895.
Peters, H. F. M., Hulstijn, W., & Van Lieshout, P. H. H. M. (2000). Recent developments in speech motor research into stuttering. Folia Phoniatrica et Logopaedica, 52, 103-119.
Peterson, G. E., & Barney, H. L. (1952). Control methods used in a study of the vowels. Journal of the Acoustical Society of America, 24, 175-184.
Ryalls, J. (1986). An acoustic study of vowel perception in aphasia. Brain and Language, 29, 48-67.
Schönle, P., Grabe, K., Wenig, P., Hohne, J., Schrader, J., & Conrad, B. (1987). Electromagnetic articulography: Use of alternating magnetic fields for tracing movements of multiple points inside and outside the vocal tract. Brain and Language, 20, 90-114.
Schultz, G. M., Sulc, S., Leon, S., & Gilligan, G. (2000). Speech motor learning in Parkinson's disease: Preliminary results. Journal of Medical Speech-Language Pathology, 8, 243-247.
Shadle, C. H. (1990). Articulatory-acoustic relationships in fricative consonants. In W. J. Hardcastle & A. Marchal (Eds.), Speech production and speech modeling (pp. 189-209). Dordrecht, The Netherlands: Kluwer Academic.
Stevens, K. (2000). Acoustic phonetics. Cambridge, MA: MIT Press.
Stevens, K., & House, A. (1955). Development of a quantitative description of vowel articulation. Journal of the Acoustical Society of America, 27, 484-493.
Strange, W. (1989). Evolving theories of vowel perception. Journal of the Acoustical Society of America, 85, 2081-2087.
Tabain, M. (1998). Non-sibilant fricatives in English: Spectral information above 10 kHz. Phonetica, 55, 107-130.
Tabain, M. (2003). Effects of prosodic boundary on /aC/ sequences: Articulatory results. Journal of the Acoustical Society of America, 113, 2834-2849.
Tjaden, K., & Turner, G. S. (1997). Spectral properties of fricatives in amyotrophic lateral sclerosis. Journal of Speech, Language, and Hearing Research, 40, 1358-1372.
Umeda, N. (1975). Vowel duration in American English. Journal of the Acoustical Society of America, 58, 434-445.
Umeda, N. (1977). Consonant duration in American English. Journal of the Acoustical Society of America, 61, 846-858.
Wambaugh, J. L., Doyle, P. J., West, J. E., & Kalinyak, M. M. (1995). Spectral analysis of sound errors in persons with apraxia of speech and aphasia. American Journal of Speech-Language-Pathology, 4, 186-192.
Weismer, G., & Bunton, K. (1999). Influences of pellet markers on speech production behavior: Acoustical and perceptual measures. Journal of the Acoustical Society of America, 105, 2882-2891.
Westbury, J. (1994). X-ray Microbeam speech production user's handbook (Version I). Madison: University of Wisconsin-Madison.


[Author Affiliation]
William F. Katz
Sneha V. Bharadwaj
Monica P. Stettler
University of Texas at Dallas


[Author Affiliation]
Received July 9, 2005
Accepted October 30, 2005
DOI: 10.1044/1092-4388(2006/047)
Contact author: William F. Katz, Callier Center for Communication Disorders, University of Texas at Dallas, 1966 Inwood Road, Dallas, Texas 75235.
E-mail: wkatz@utdallas.edu


Thursday, June 5, 2008

Preface

Origin of this book
May 2004, Berkeley: conference on “Methods in Phonology”
Conference in honor of John Ohala
Focus of this book
Foundational experimental methods
Methods to test phonological hypotheses on speakers’ and hearers’ knowledge of their native sound system
the acquisition of the sound system
the laws that govern the sound system
Methods are not static
Recent change in “methods in Phonology”
The rise of new experimental techniques
Increased use of experimental methods in Phonology
Factors responsible for this change
Increasingly diverse questions
Structure of grammar
Representation of sound patterns
Phonetic and phonological constraints
Categorization
New perspectives
Development of the techniques
Availability of corpora
Phonological unification in recognition and application
Embedding experiments within other scientific fields
To unify established knowledge with accounts of language and speech
Modeling in Phonology and relevant techniques
The ability to model relevant behaviors and patterns
The increasing importance of modeling tools
Phonological findings and the theoretical implications drawn from them

Combining the Questions

01. How are language and its parts represented in the mind of the speaker, and how is this representation accessed and used? How should the experiments proceed? How can we account for the variation in the phonetic shape of these elements as a function of context and speaking style?
02. What points should we emphasize during an experiment? How, physically and physiologically, does speech work – the phonetic mechanisms of speech production and perception, including the structures and units it is built on?
03. How and why does pronunciation change over time, thus giving rise to different dialects and languages, and different forms of the same word or morpheme in different contexts? (I think this is related to sociolinguistics.) How can we account for common patterns in diverse languages, such as segment inventories and phonotactics?
04. How can we ameliorate communication disorders? (I think this is related to neurolinguistics.)
05. How can the functions of speech be enhanced and amplified, for example, to give permanency to ephemeral speech, to permit communication over great distances, and to permit communication with machines using speech? The equipment used for each of these purposes.
06. How did language and speech arise or evolve in our species? Why is the vocal apparatus different as a function of the age and sex of the speaker? What is the relation, if any, between human speech and non-human communication? How should the experimental results be analyzed and explained?

Questions in textbook

01. How are language and its parts, including words and morphemes, represented in the mind of the speaker, and how is this representation accessed and used? How can we account for the variation in the phonetic shape of these elements as a function of context and speaking style?
02. How, physically and physiologically, does speech work – the phonetic mechanisms of speech production and perception, including the structures and units it is built on?
03. How and why does pronunciation change over time, thus giving rise to different dialects and languages, and different forms of the same word or morpheme in different contexts? How can we account for common patterns in diverse languages, such as segment inventories and phonotactics?
04. How can we ameliorate communication disorders?
05. How can the functions of speech be enhanced and amplified, for example, to give permanency to ephemeral speech, to permit communication over great distances, and to permit communication with machines using speech?
06. How is speech acquired as a first language and as a subsequent language?
07. How is sound associated with meaning?
08. How did language and speech arise or evolve in our species? Why is the vocal apparatus different as a function of the age and sex of the speaker? What is the relation, if any, between human speech and non-human communication?

Wednesday, June 4, 2008

Looking at Morphology Through Amino Acid Synthesis



A passage written by Edward Sapir in 1921:
No language in the world has a perfect grammar; every grammar has gaps...
These grammatical gaps can be illustrated with the table of gaps in English word formation shown at right.
In this table of 12 × 6 = 72 cells, only 33 are filled, so the unused, redundant portion exceeds fifty percent.
The same kind of redundancy appears in biology. Each amino acid is specified by a group of three nucleotides, and there are four nucleotides to choose from, so there are 4³ = 64 possible combinations. In practice, however, the human body uses only 20 of those 64 arrangements to produce the standard amino acids, so here too the redundancy exceeds fifty percent.
Amino acids are assembled from nucleotides through transcription and translation. Four nucleotide bases are used in this synthesis: A, U, G, and C. To synthesize an amino acid, three of these four bases are taken at a time; following the base-pairing rules, a complementary RNA copy is made from a DNA template, and that mRNA then serves as the template on which the ribosome assembles the polypeptide chain (that is, the chain of amino acids) in order. As noted above, this gives 64 possible combinations in total, yet nature actually uses only about half of them, which parallels the way morphemes combine into words in morphology and the proportion of those combinations that are actually used.
Note: One possible reason why amino acid synthesis does not use every permutation is that repulsive forces arising from the interactions between atoms and molecules keep some combinations from taking shape. Another is that, of the proteins the body needs (comprising essential and non-essential amino acids), some can be synthesized by the body itself while others can be obtained from food in sufficient quantity; on grounds of economy, there is no need to produce surplus amino acids that would only burden the body.
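To make the two redundancy figures above concrete, the following is a minimal Python sketch of my own (not part of the original post) that computes the proportion of unused codon combinations and the proportion of empty cells in the 12 × 6 word-formation table; with the numbers given in the post, both proportions come out above fifty percent.

    from itertools import product

    # Codon space: 4 RNA bases (A, U, G, C) taken 3 at a time -> 4**3 = 64 triplets
    bases = "AUGC"
    codons = list(product(bases, repeat=3))
    standard_amino_acids = 20          # standard amino acids actually encoded

    # Sapir-style word-formation table: 12 x 6 = 72 cells, 33 of them filled
    paradigm_cells = 12 * 6
    filled_cells = 33

    codon_redundancy = 1 - standard_amino_acids / len(codons)
    paradigm_redundancy = 1 - filled_cells / paradigm_cells

    print(f"codons: {len(codons)}, redundancy: {codon_redundancy:.0%}")             # about 69%
    print(f"table cells: {paradigm_cells}, redundancy: {paradigm_redundancy:.0%}")  # about 54%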