Wednesday, November 19, 2008

Chap. 03 p.117-119


It is also interesting to note that children do not begin to develop speech until their brains have attained a certain degree of electrophysiological maturity, defined in terms of an increase with age in the frequency of the dominant rhythm. Only when this rhythm is about 7 cps or faster (at about age two years) are they ready for speech development.
(g) Neurological Correlates; Pacing of Speech During Thalamic Stimulation. Deep electrical stimulation in the basal ganglia and thalamus is frequently performed in the course of surgical treatment of thalamic pain or certain extrapyramidal motor disorders. Guiot, Hertzog, Rondot, and Molina (1961) have reported that electrical stimulation in a particular place in the thalamus (the ventrolateral nucleus near its contact with the internal capsule) frequently interferes with the rate of speaking. Both slowing to the point of total arrest and acceleration of speech have been observed. The latter is the more interesting for our discussion. It is a behavioral derangement which may occur in complete isolation, that is, without any other observable motor manifestation or abnormal subjective experience. The patient is conscious and cooperative during part of the operation. He is encouraged to maintain spontaneous conversation and, failing this, is asked to count slowly at a rate of about one digit per second. If acceleration occurs with electrical stimulation, it may be sudden and immediate, or it may be a gradual speeding up, the words at the end being generated so rapidly as to become unintelligible. It is significant that under conditions of evoked acceleration the shortest observed intervals between digits are about 170 msec.
Acceleration, uncontrollable by the patient, is occasionally associated with parkinsonism and goes under the name of tachyphemia.

(2) Final Comments on Speech Rhythmicity (Cultural, Individual and Biological Variations)
We have proposed that a rhythm exists in speech which serves as an organizing principle and perhaps a timing device for articulation. The basic time unit has a duration of one-sixth of a second. If this rhythm is due to physiological factors rather than cultural ones, it should then be present in all languages of the world. But what about the rhythm of Swedish, Chinese, or Navaho, which sound so different to our English-trained ears? What about American Southern dialects, which seem more deliberate than the dialect of Brooklyn, New York, and the British dialects, which seem faster than American ones? These judgments are based on criteria such as intonation patterns and content of communications, which have little in common with the potential underlying metric of speech movements. The rise and fall times in intonation patterns (in non-tonal languages) are much slower than the phenomenon discussed here, usually extending over two seconds and more. With proper analysis, they may well reveal themselves to be multiples of the much faster basic units discussed above. On the other hand, the pitch-phonemes (also known as tonemes) are likely to fall within the same metric as other phonemes. Nor does our ability to speak slowly or fast have any bearing on the “six-per-second hypothesis,” because it should be possible to make different use of the time units available. There is most likely more than one way of distributing a train of syllables over the rhythmic vehicles.
On the other hand, physiological factors would allow for individual differences, because organisms vary one from the other. Moreover, the underlying rhythm may be expected to vary within an individual in accordance with physiological states and rates of metabolism. Such within-subject variations would, of course, be subtle, and detection would require statistical analysis of the periodic phenomena involved.
The statistics necessary to prove or reject our hypothesis are quite simple. At present the only obstacle is the necessity of making observations and measurements of hundreds of thousands of events. Suppose we programmed an electronic computer to search the electrical analogue of a speech signal for that point in time at which any voiceless stop is released, and then measured the time lapse between all such successive points. From these data we can make histograms (bar charts) showing the frequency distribution of all measurements. Since our hypothesis assumes that the variable syllable-duration-time is not continuous and that there are time quanta, the frequency distribution should be multimodal; and since the basic time unit is predicted to be 160 ± 20 msec, the distance between the peaks should be equal to this unit or to multiples of it.
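Stated in present-day terms, the proposed test is easy to sketch. The fragment below is a minimal illustration in Python, not a reconstruction of any actual program: it assumes the release points have already been detected and stored, one time value per line, in a hypothetical file release_times.txt, and it simply builds the histogram and marks the predicted peak positions.

    import numpy as np
    import matplotlib.pyplot as plt

    # Hypothetical input: detected release times (in seconds) of voiceless
    # stops across a large corpus; the detection step is assumed done elsewhere.
    release_times = np.loadtxt("release_times.txt")

    # Intervals between successive release points, in milliseconds.
    intervals_ms = np.diff(np.sort(release_times)) * 1000.0

    # Histogram of the intervals. The hypothesis predicts a multimodal
    # distribution with peaks at multiples of the basic unit, 160 +/- 20 msec.
    counts, edges = np.histogram(intervals_ms, bins=np.arange(0, 1000, 10))
    plt.bar(edges[:-1], counts, width=10, align="edge")
    for k in range(1, 6):
        plt.axvline(160 * k, linestyle="--")  # predicted peak locations
    plt.xlabel("interval between successive stop releases (msec)")
    plt.ylabel("frequency")
    plt.show()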
In a previous section of this chapter we have demonstrated certain formal properties of the ordering of speech events. In the discussion of rhythm we have added some temporal dimensions to those events. The rhythm is seen as the timing mechanism which should make the ordering phenomenon physically possible. The rhythm is the grid, so to speak, into whose slots events may be intercalated.
It has long been known that the universally observed rhythmicity of the vertebrate brain (Bremer, 1944; Holst, 1937) or central nervous tissue in general (Adrian, 1937; Wall, 1959) is the underlying motor for a vast variety of rhythmic movements found among vertebrates. If our hypothesis is correct, the motor mechanics of speech (and probably even syntax) are no exception to this generalization, and in this respect, then, speech is no different from many other types of animal behavior. In man, however, the rhythmic motor subserves a highly specialized activity, namely speech.

Chap. 01 p.18-20


The situation for primates and man in particular is not completely clear. Although regeneration is also amyotypic and coordination is either permanently disarranged or at least always remains poor, some central nervous system mechanisms seem to have developed in those forms that enable the individual to make some secondary, partial readjustment. Perhaps this new learning is based on more complex cortical activities – possibly those that are experienced by man as will – but these speculations still lack empirical evidence.
The picture would not be complete without at least a superficial reference to the sensory disarrangement brought about by extracorporeal distortions, such as vision through distorting lenses or prisms. Man, and a variety of lower forms, can learn quickly to make a number of adaptive corrections for these distortions (Kohler, 1951). However, the adjustment is not complete. In adjusting motor coordination to distorted visual input, it is essential that the individual go through a period of motor adaptation, and there is cogent evidence that this is required for a physiological reintegration between afferent and efferent impulses and not simply to provide the subject with “knowledge” of the spatial configurations (Held and Hein, 1958; Smith and Smith, 1962). Furthermore, man’s cognitive adjustment to a visually distorted environment is never complete. Subjects who wear image-inverting goggles soon come to perceive the world right-side-up (though at the beginning it was seen upside down). But even after many weeks of relative adjustment, they experience paradoxical sights such as smoke from a pipe falling downward instead of rising upward or snowflakes going up instead of coming down.
The over-all conclusions that must be drawn from the disarrangement experiments are, first, that motor coordination (and certain behavior patterns dependent upon it) is driven by a rigid, unalterable cycle of neurophysiological events inherent in a species’ central nervous system; second, that larval, fetal, or embryonic tissues lack specialization; this enables these tissues to influence one another in such a way as to continue to play their originally assigned role despite certain arbitrary peripheral rearrangements. Because of this adaptability, species-specific motor coordination reappears again and again regardless of experimentally switched connections. Third, as tissues become more specialized, both in ontogeny and in phylogeny, the adaptability and mutual tissue influence disappear. Therefore, in higher vertebrates peripheral disarrangements cause permanent discoordination. Finally, with the advance of phylogenetic history, ancillary neurophysiological mechanisms appear which modify and at times obscure the central and inherent theme, the cyclic driving force at the root of simple motor coordination. More complex storage devices (memories) and inhibitory mechanisms are examples.
With the emergence of more specialized brains, the nature of behavior-specificity changes. Although it would be an inexcusable over-simplification to say that behavior, in general, becomes more or less specific with phylogenetic advance, there is perhaps some truth in the following generalizations. In the lower forms, there seems to be a greater latitude in what constitutes an effective stimulus, but there is a very narrow range of possible responses. Pattern perception, for instance, is poorly developed, so that an extremely large array of stimulus configurations may serve to elicit a certain behavior sequence, and thus there is little specificity in stimulability. However, the motor responses are all highly predictable and are based on relatively simple neuromuscular correlates; thus there is a high degree of response specificity. With advancing phylogeny, the reverse seems to become true. More complex pattern perception is correlated with greater stimulus specificity and a wider range of possible motor responses, that is, less response specificity. However, both of these trends in decreasing and increasing specificity are actually related to greater and greater behavioral and ecological specialization. Taxonomists will be quick to point out countless exceptions to these rules. Evolution is not so simple and can never be brought to conform to a few formulas. The statement here is merely to the effect that such trends exist and that, generally speaking, specificity both in stimulation and in responsiveness changes throughout the history of animal life.
In the vast majority of vertebrates, functional readjustment to anatomical rearrangement appears to be totally impossible. Even if the animal once “knew how” to pounce on prey, peripheral-central disarrangement will permanently incapacitate the animal from pursuing the necessities for its livelihood. If the primate order should indeed be proven to be an exception to this rule – and there is little evidence of this so far – then we would have to deal with this phenomenon as an extreme specialization, whose details and consequences are yet to be investigated. There is much less modifiability for those coordination patterns which constitute species-specific behavior than is usually realized, and we must keep in mind that most behavioral traits have species-specific aspects.
This statement is not contradicted by the great variety of arbitrary behavior that is produced by training. Pressing a bar in a cage, pecking at a red spot, jumping into the air at the signal of a buzzer (in short, the infinity of arbitrary tricks an animal can be made to perform) do not imply that we could train individuals of one species (for example, common house cats) to adopt the identical motor behavior patterns of another, such as that of a dog. Although there is perfect homology of muscles, we cannot train a cat to wag its tail with a dog’s characteristic motor coordination. Nor can one induce a cat to vocalize on the same occasions a dog vocalizes instinctively, for instance, when someone walks through the backyard. Just as an individual of one species cannot transcend the limits to behavior set by its evolutionary inheritance, so it cannot make adjustments for certain organic aberrations, particularly those just discussed. The nearly infinite possibility of training and retraining is a sign of the great freedom enjoyed by most mammals in combining and recombining individual traits, including sensory and motor aspects. The traits themselves come from a limited repertoire, are not modifiable, and are invariably species-specific in their precise motor coordination and general execution.
In Goethe’s words, addressing a developing being:
Nach dem Gesetz, wonach du angetreten,
So mußt du seyn, dir kannst du nicht entfliehen,
So sagten schon Sibyllen, so Propheten;
Und keine Zeit und keine Macht zerstückelt
Geprägte Form, die lebend sich entwickelt.*

[According to the law by which you entered life, so must you be; you cannot escape yourself; so sibyls and prophets said long ago; and no time and no power can break into pieces a stamped form that develops as it lives.]

Wednesday, October 8, 2008

APPENDIX B

The history of the biological basis of language*
OTTO MARX

Language has been thought of as the expression of man’s reason, as the result of onomatopoeia, as an invention for the purpose of communication, as basic to the formation of society, or simply as a gift of God. Each of these definitions of language has been used in the construction of a multitude of language theories [1]. We shall not be concerned with the development of these theories, but limit ourselves to a discussion of the recurrent emergence of the thoughts on the biological basis of language.
The idea that language is one of man’s inherent characteristics, like vision or hearing, is found in some myths on the creation of man [2]. In these myths, language is given to man in conjunction with his senses, so that apparently it was considered one of them, and not part of man’s cultural or social functions (which are also described as given or taught by the gods). By no means can these assertions of a divine origin be considered antithetical to a natural origin of language; on the contrary, everything natural to man was God’s gift to him.
Between the realm of mythology and science stands the experiment of the Egyptian King Psammetichos in the seventh century B.C., related by Herodotus (fifth century B.C.). Psammetichos supposedly tried to have two children raised by shepherds who never spoke to them in order to see what language would develop [3]. This experiment is relevant to our discussion in so far as its design implies the belief that children left to themselves will develop language. Psammetichos thought he would be able to demonstrate which language was the oldest, but apparently did not doubt that even untutored children would speak.
Language first became the subject of discussion by the pre-Socratic philosophers in the latter part of the sixth century B.C. The setting up of antitheses, typical of Greek philosophy, was also applied to the problems which language posed. But discussions of language were limited to a mere consideration of naming and were purely secondary outgrowths of the philosopher’s search for general truths. In order to understand the statements on language made by the Greek philosophers, it is essential to give an idea of the context in which they were made and briefly describe the evolution of the meaning of the two ever recurring terms nomos and physis in which language was to be discussed. Nomos was later replaced by thesis and was often wrongly translated as convention, while physis has been incorrectly equated with nature.
For Herakleitos (ca. 500 B.C.), nomos was the order regulating the life of society and the individual, but he did not see it as a product of society [4]. The nomos of society was valid, but not absolute. Similarly, names were valid as they reflected some aspect of the object they named. (Apparently, he did not consider them physis, as had been thought) [5]. Physis would have implied that names are an adequate expression of reality or of the true nature of things, an idea to which Herakleitos did not subscribe.
Parmenides (fifth century B.C.) thought that originally names had been given to things on the basis of “wrong thinking,” and that the continued use of the original names perpetuated the errors of men’s earlier thinking about the objects around them. To him, and to Anaxagoras and Empedokles, names and concepts were synonymous. Their concern with conventional names and their condemnation of them as nomos was related to their critical view of conventional thought. To these philosophers, nomos and conventional thought had acquired the connotation of incorrectness and inadequacy as opposed to the truth and real nature, or physis, which they were seeking [5].
Pindar (522-433 B.C.) considered all of man’s true abilities innate. They cannot be acquired by learning but can only be furthered by training [6]. For him the rules of society which are nomos were God-given and, therefore, contained absolute truth. Nomos and physis were not purely antithetical, as they were for Parmenides and his school. It is also well to keep in mind that nomos and physis had not been antithetical in Greek ethnography. Nomos referred to all peculiarities of a people due to custom and not attributable to the influences of climate, country, or food. So Herodotus had ascribed the elongated heads of a tribe, due to their binding of the infant’s skull, to nomos, but he believed that this would become hereditary (physis). In medicine of the fifth century B.C., physis came to mean normal [7].
Although we find the nomos-physis antithesis in all Greek philosophy and science, the exact meaning of the terms would have to be determined in each case, before we might claim that one of the philosophers made certain pronouncements about language. We have attempted to indicate that none of the pre-Socratic philosophers were concerned with language as such, nor with questions of its origin or development, and in no case could their statements be said to establish language as cultural or natural to man.
In classical philosophy, the relationship of the name to its object continued to be the focal point in discussions on language: naming and language were synonymous. Did the object determine in some way the name by which it was called, just as its shape determined the image we saw of it? In his dialogue, Cratylos, Plato (427-347 B.C.) attempted a solution of this problem. If the name was determined by the nature of the object to which it referred, then language was physis, that is, it could be said to reflect the true nature of things; but if it were nomos, then the name could not serve as a source of real knowledge. As Steinthal [8] pointed out, language was taken as given, and the philosophical discussion had not originated from questions about the nature of man or language. Plato’s answer could, therefore, have only indirect implications for questions about language origin which were to arise much later. He overcame the antithesis by demonstrating that the name does not represent the object but that it stands for the idea which we have of the object. Furthermore, he declared that the name or the word is only a sound symbol which in itself does not reveal the truth of the idea it represents. Words gain their meaning from other modes of communication like imitative body movements or noises. The latter are similar to painting in that they are representative but not purely symbolic as is language. The only reference to the origin of language in Cratylos is Socrates’ statement that speaking of a divine origin of words is but a contrivance to avoid a scientific examination of the source of names [10].
Aristotle’s (384-322 B.C.) interest in language was both philosophical and scientific. In his book on animals the ten paragraphs devoted to language follow immediately after a discussion of the senses. His differentiation of sound, voice, and language is based on his physical concept of sound production. In his opinion, voice was produced in the trachea, and language resulted from the modulation of the voice by tongue and lip movements. Language proper is only found in man. Children babble and stammer because they have not yet gained control over their tongues. Among the animals only the song of birds is similar, as birds of the same species may have a different call, “kak kak” in one vicinity and “tri tri” in another, and as the song of a bird will differ from that of its parents’ if it grows up without them. Language, like the song of the nightingale, is perfected by training.
Aristotle had based his differentiation of man’s language (logos) from the language of animals (phonē) on biology, for he thought that man’s language was produced mainly by movement of the tongue and the sounds of animals by the impact of air on the walls of the trachea. He did not think that human language could have been derived from the sounds, noises, or expression of emotions seen in animals and children. “A sound is not yet a word; it only becomes a word when it is used by man as a sign.” “The articulated signs (of human language) are not like the expression of emotions of children or animals. Animal noises cannot be combined to form syllables, nor can they be reduced to syllables like human speech” [12]. He rejected an onomatopoeic origin of language and established the primacy of its symbolic function. Because he recognized that the meaning of spoken language was based on agreement, it has been claimed that he thought language to be of cultural origin. In terms of the old antithesis of physis versus nomos, Aristotle saw both principles operative in language. Physis meant to him the law of nature without the virtue of justice which it had contained for Plato, and nomos was replaced by thesis and had come to mean man-made. Language, as such, he considered physis, and the meaning of words he attributed to thesis [13].
The question of the origin of language had not been raised in Greek philosophy until Epicurus (341-271 B.C.) asked: “What makes language possible? How does man form words so that he is understood?” [14]. He concluded that neither God nor reason, but Nature was the source of language. To him, language was a biological function like vision and hearing. A different opinion was held by Zeno (333-262 B.C.), the founder of the Stoa, to whom language was an expression of man’s mind and derived from his reason. He believed that names had been given without conscious reflection or purpose [15].
Although Epicurus had been the first to contemplate the origin of language, Chrysippos (died about 200 B.C.), a Stoic, was the first to consider language in terms broader than names. Before him the ambiguity of some names had been noted, but no satisfactory explanation had been found. Chrysippos proclaimed that all names were ambiguous and lost their ambiguity by being placed in context. Thereby he drew attention to the importance of the grouping of words, but his belief that language did not follow logic kept his inquiry from proceeding any further [16].

CHAPTER Nine


Toward a biological theory of language development (General summary)


We have discussed language from many different aspects, have drawn various conclusions and offered a variety of explanations. If we now stand back and survey the entire panorama, will this synopsis suggest an integrated theory? I believe it will.

I. FIVE GENERAL PREMISES

The language theory to be proposed here is based upon the following five empirically verifiable, general biological premises.

(i) Cognitive function is species-specific. Taxonomies suggest themselves for virtually all aspects of life. Formally, these taxonomies are always type-token hierarchies, and on every level of the hierarchy we may discern differences among tokens and, at the same time, commonalities that assign the tokens logically to a type. The commonalities are not necessarily more and more abstract theoretical concepts but are suggested by physiological and structural invariances. An anatomical example of such an invariance is cell-constituency; it is common to all organisms. In the realm of sensory perception there are physiological properties that result in commonalities for entire classes of animals, so that every species has very similar pure stimulus thresholds. When we compare behavior across species, we also find certain invariances, for instance, the general effects of reward and punishment. But in each of these examples there are also species differences. Cells combine into a species-specific form; sensations combine to produce species-specific pattern-recognition; and behavioral parameters enter into the elaboration of species-specific action patterns.
Let us focus on the species-specificities of behavior. There are certain cerebral functions that mediate between sensory input and motor output which we shall call generically cognitive function. The neurophysiology of cognitive function is largely unknown, but its behavioral correlates are the propensity for problem solving, the formation of learning sets, the tendency to generalize in certain directions, or the facility for memorizing some but not other conditions. The interaction or integrated patterns of all of these different potentialities produces the cognitive specificities that have induced von Uexkuell, the forerunner of modern ethology, to propose that every species has its own world-view. The phenomenological implications of his formulation may sound old-fashioned today, but students of animal behavior cannot ignore the fact that the differences in cognitive processes (1) are empirically demonstrable and (2) are the correlates of species-specific behavior.

(ii) Specific properties of cognitive function are replicated in every member of the species. Although there are individual differences among all creatures, the members of one species resemble each other very closely. In every individual a highly invariable type of both form and function is replicated. Individual differences of most characteristics tend to have a normal (Gaussian) frequency distribution, and the differences within species are smaller than between species. (We are disregarding special taxonomic problems in species identification.)
The application of these notions to (i) makes it clear that the cognitive processes and potentialities characteristic of a species are also replicated in every individual. Notice that we must distinguish between what an individual actually does and what he is capable of doing. The intraspecific similarity holds for the latter, not the former, and the similarity in capacity becomes striking only if we concentrate on the general type and manner of activity and disregard such variables as how fast or how accurately a given performance is carried out.

(iii) Cognitive processes and capacities are differentiated spontaneously with maturation. This statement must not be confused with the question of how much the environment contributes to development. It is obvious that all development requires an appropriate substrate and availability of certain forms of energy. However, in most cases environments are not specific to just one form of life and development. A forest pond may be an appropriate environment for hundreds of different forms of life. It may support the fertilized egg of a frog or a minnow, and each of the eggs will respond to just those types and forms of energy that are appropriate to it. The frog’s egg will develop into a frog and the minnow’s egg into a minnow. The pond just makes the building stones available, but the organismic architecture unfolds through conditions that are created within the maturing individual.
Cognition is regarded as the behavioral manifestation of physiological processes. Form and function are not arbitrarily superimposed upon the embryo from the outside but gradually develop through a process of differentiation. The basic plan is based on information contained in the developing tissues. Some functions need an extra-organismic stimulus for the initiation of operation, something that triggers the cocked mechanisms; the onset of air-breathing in mammals is an example. These extra-organismic stimuli do not shape the ensuing function. A species’ peculiar mode of processing visual input, as evidenced in pattern recognition, may develop only in individuals who have had a minimum of exposure to properly illuminated objects in the environment during their formative years. But the environment clearly does not shape the mode of input processing, because the same environment might have been the background to the visual development of a vast number of other types of pattern-recognition.

(iv) At birth, man is relatively immature; certain aspects of his behavior and cognitive function emerge only during infancy. Man’s postnatal state of maturity (brain and behavior) is less advanced than that of other primates. This is a statement of fact and not a return to the fetalization and neoteny theories of old (details in Chapter Four).

(v) Certain social phenomena among animals come about by spontaneous adaptation of the behavior of the growing individual to the behavior of other individuals around him. An adequate environment does not merely include nutritive and physical conditions; many animals require specific social conditions for proper development. The survival of the species frequently depends on the development of mechanisms for social cohesion or social cooperation. The development of typical social behavior in a growing individual requires, for many species, exposure to specific stimuli such as the presence of certain action patterns in the mother, a sexual partner, a group leader, etc. Sometimes mere exposure to the social behavior of other individuals is a sufficient stimulus. For some species the correct stimulation must occur during a narrow formative period in infancy; failing this, further development may become seriously and irreversibly distorted. In all types of developing social behavior, the growing individual begins to engage in behavior as if by resonance; he is maturationally ready but will not begin to perform unless properly stimulated. If exposed to the stimuli, he becomes socially “excited” as a resonator may become excited when exposed to a given range of sound frequencies. Some social behavior consists of intricate patterns, the development of which is the result of subtle adjustments to and interactions with similar behavior patterns (for example, the songs of certain bird species). An impoverished social input may entail permanently impoverished behavior patterns.
Even though the development of social behavior may require an environmental trigger for proper development and function, the triggering stimulus must not be mistaken for the cause that shapes the behavior. Prerequisite social triggering mechanisms do not shape the social behavior in the way Emily Post may shape the manners of a debutante.

II. A CONCISE STATEMENT OF THE THEORY
(1) Language is the manifestation of species-specific cognitive propensities. It is the consequence of the biological peculiarities that make a human type of cognition possible. The dependence of language upon human cognition is merely one instance of the general phenomenon characterized by premise (i) above. There is evidence (Chapters Seven and Eight) that cognitive function is a more basic and primary process than language, and that the dependence-relationship of language upon cognition is incomparably stronger than vice versa.
(2) The cognitive function underlying language consists of an adaptation of a ubiquitous process (among vertebrates) of categorization and extraction of similarities. The perception and production of language may be reduced on all levels to categorization processes, including the subsuming of narrow categories under more comprehensive ones and the subdivision of comprehensive categories into more specific ones. The extraction of similarities does not operate only upon physical stimuli but also upon categories of underlying structural schemata. Words label categorization processes (Chapters Seven and Eight).
(3) Certain specializations in peripheral anatomy and physiology account for some of the universal features of natural languages, but the description of these human peculiarities does not constitute an explanation for the phylogenetic development of language. During the evolutionary history of the species, form, function, and behavior have interacted adaptively, but none of these aspects may be regarded as the “cause” of the other. Today, mastery of language by an individual may be accomplished despite severe peripheral anomalies, indicating that cerebral function is now the determining factor for language behavior as we know it in contemporary man. This, however, does not necessarily reflect the evolutionary sequence of developmental events.

Saturday, June 7, 2008

Influences of Electromagnetic Articulography Sensors on Speech Produced by Healthy Adults and Individuals With Aphasia and Apraxia




William F. Katz, Sneha V. Bharadwaj, Monica P. Stettler. Journal of Speech, Language, and Hearing Research. Rockville: Jun 2006. Vol. 49, Iss. 3; pg. 645, 15 pgs
Copyright American Speech-Language-Hearing Association Jun 2006
Abstract
Purpose: This study examined whether the intraoral transducers used in electromagnetic articulography (EMA) interfere with speech and whether there is an added risk of interference when EMA systems are used to study individuals with aphasia and apraxia.
Method: Ten adult talkers (5 individuals with aphasia/apraxia, 5 controls) produced 12 American English vowels in /hVd/ words, the fricative-vowel (FV) words (/si/, /su/, /∫i/, /∫u/), and the sentence She had your dark suit in greasy wash water all year, in EMA sensors-on and sensors-off conditions. Segmental durations, vowel formant frequencies, and fricative spectral moments were measured to address possible acoustic effects of sensor placement. A perceptual experiment examined whether FV words produced in the sensors-on condition were less identifiable than those produced in the sensors-off condition.
Results: EMA sensors caused no consistent acoustic effects across all talkers, although significant within-subject effects were noted for a small subset of the talkers. The perceptual results revealed some instances of sensor-related intelligibility loss for FV words produced by individuals with aphasia and apraxia.
Conclusions: The findings support previous suggestions that acoustic screening procedures be used to protect articulatory experiments from those individuals who may show consistent effects of having devices placed on intraoral structures. The findings further suggest that studies of fricatives produced by individuals with aphasia and apraxia may require additional safeguards to ensure that results are not adversely affected by intraoral sensor interference.
KEY WORDS: speech production, electromagnetic articulography, fricative spectral moments, aphasia, apraxia of speech


Speech production is studied using techniques that provide anatomical images or movies of articulation (e.g., cineradiography, videofluoroscopy) as well as techniques that derive individual fleshpoint data during speech movement (e.g., X-ray microbeam, Selspot, and electromagnetic articulography [EMA]). A potential complication of fleshpoint tracking systems is that the sensors used to record speech movement may themselves alter participants' speech. For instance, intraoral sensors might obstruct the speech airway, resulting in sound patterns not normally observed in speech. It is also possible that data recorded during EMA or X-ray microbeam studies may to some extent reflect participants' compensation for the presence of intraoral sensors in the vocal tract. Indirect evidence concerning these issues was provided by Perkell and Nelson (1985), who compared formant frequencies of the vowels /i/ and /u/ recorded in the Tokyo X-ray microbeam system with population means obtained in previous acoustic studies that did not involve intraoral sensors (e.g., Hillenbrand, Getty, Clark, & Wheeler, 1995; Peterson & Barney, 1952). The results suggested that X-ray microbeam pellets cause little detectable articulatory interference.
A direct test of potential articulatory interference by a fleshpoint tracking device (the University of Wisconsin X-ray microbeam system) was conducted by Weismer and Bunton (1999). The researchers examined 21 adult talkers who produced the sentence She had your dark suit in greasy wash water all year, with and without an array of X-ray microbeam pellets in place during articulation. This array included four pellets placed on the midsagittal lingual surface. The results indicated no overall differences that were consistent for all speakers. However, approximately 20% of the talkers showed acoustically detectable changes as a result of the pellets placed on the tongue during the X-ray microbeam procedure. For example, pellets-on conditions for vowel production resulted in higher F1 values for some female talkers (suggesting greater mouth opening) and lower F2 values for some male and female talkers (suggesting a more retracted tongue position) than in pellets-off conditions. These occasional acoustic differences resulting from pellet placement were not detectable in perceptual experiments designed to simulate informal listening conditions. The authors concluded that acoustic screening procedures may be important to shield articulatory kinematic experiments from individuals who show consistent effects of having devices placed on intraoral structures.
One factor that may have contributed to the differences between the findings of Perkell and Nelson (1985) and Weismer and Bunton (1999) is that the former study examined isolated vowels, while the latter examined vowels produced in a sentential context. Speech produced in citation form may differ in a number of articulatory factors from that produced in a more natural sentential context (e.g., Lindblom, 1990). For example, sounds that occur in stressed or accented syllables (hyperspeech) appear to reflect reduced coarticulation or overlap between adjacent sounds (de Jong, 1995; de Jong, Beckman, & Edwards, 1993) and greater velocity, magnitude, and duration (Beckman & Cohen, 2000; Beckman & Edwards, 1994). It is therefore possible that speech produced in more natural contexts (hypospeech) might show heightened susceptibility to articulatory interference effects, perhaps as the result of less conscious monitoring or compensation by the speaker. It is important to consider these communication contexts when examining the extent to which talkers do or do not show compensation for a given vocal tract perturbation.
An important clinical concern is that the use of fleshpoint tracking systems has not been limited to the study of speech produced by healthy adults. Rather, methods such as EMA are being increasingly applied to study (and treat) individuals with disorders such as aphasia and apraxia of speech (AOS; Katz, Bharadwaj, & Carstens, 1999; Katz, Bharadwaj, Gabbert, & Stettler, 2002; Katz, Carter, & Levitt, 2003), dysarthria (Goozée, Murdoch, Theodoros, & Stokes, 2000; Murdoch, Goozée, & Cahill, 2001; Schultz, Sulc, Léon, & Gilligan, 2000), stuttering (Peters, Hulstijn, & Van Lieshout, 2000), and developmental AOS (Nijland, Maassen, Hulstijn, & Peters, 2004). If sensor-related interference poses added problems for clinical populations, this could potentially complicate the interpretation of kinematic assessment and treatment studies. Thus, one of the main goals of this study was to replicate the findings of Weismer and Bunton (1999) with individuals having speech difficulties resulting from AOS and aphasia.
To examine these issues, adult talkers (individuals with aphasia/apraxia and healthy controls) were recorded producing speech under EMA sensors-on and sensors-off conditions. Speech samples included repeated monosyllabic /hVd/ words and the sentence She had your dark suit in greasy wash water all year. A number of temporal and spectral acoustic parameters were measured, and a perceptual experiment (with healthy adult listeners) was conducted to determine whether EMA sensors affected the intelligibility of fricative-vowel (FV) words produced by individuals with aphasia/apraxia and healthy control talkers.
Method
Participants
Participants were 10 monolingual American English-speaking adults (5 individuals with aphasia/apraxia, 5 healthy controls) from the Dallas, TX, area. There were 2 female talkers (control participant C3 and participant A2 in the aphasia/apraxia group) and 8 male talkers. Participants had no prior phonetic training or experience in EMA experimentation. Individuals in the control group reported no history of neurological or articulation disorders. Four individuals with aphasia/apraxia had been diagnosed with Broca's aphasia, and 1 had been diagnosed with anomic aphasia (see Table 1). All had AOS and an etiology of left-hemisphere cerebrovascular accident (CVA). Individuals with aphasia/apraxia were diagnosed based on clinical examination and performance on the Boston Diagnostic Aphasia Exam (Goodglass, Kaplan, & Barresi, 2001) and the Apraxia Battery for Adults, Version 2 (ABA-2; Dabul, 2000). Apraxic severity levels, based on the overall scores of the ABA-2 Impairment Profile section, ranged from mild to moderate. The age range for the aphasic/apraxic group was 38-67 years (M = 59;6 [years;months]), and that for the control group was 25-59 years (M = 55;0).


Speech Sample, Sensor Array
Testing took place in a sound-treated room at the University of Texas at Dallas Callier Center for Communication Disorders. Speech samples included vowels in /hVd/ contexts, FV words, and the sentence She had your dark suit in greasy wash water all year. The /hVd/ and FV words were elicited in the carrier phrase, I said __ again. Seven repetitions were elicited for each sensor condition (on/off), yielding a total of 168 /hVd/ words, 56 FV words, and 14 sentences per talker. The /hVd/ words, FV words, and sentences were produced in separate blocks, with the order of stimulus type and sensor conditions (on/off) counterbalanced between talkers. Within each block, stimuli were produced in random order. Talkers repeated each item following a spoken and orthographic model (written on a 4 in. × 6 in. index card) presented by one of the experimenters (WK, a male native speaker of American English). Speech was elicited at a comfortable speaking rate in a session lasting approximately 45 min.


The sentence She had your dark suit in greasy wash water all year was taken from the DARPA/TIMIT corpus (Garofolo, 1988). This sentence had been examined in a previous study of X-ray microbeam pellet interference (Weismer & Bunton, 1999). By including this sentence, we could compare microbeam pellet and EMA sensor effects between studies. From this sentence, segmental durations were measured, and formant frequencies were estimated for the vowels /i/, /æ/, /u/, and /a/ (taken from the words she, had, suit, and wash).
For the sensors-on condition, participants spoke with two miniature receiver coils (approximately 2 × 2 × 3 mm) attached to the lingual surface. These sensors (Model SM220) are used in commercially available EMA systems manufactured by Carstens Medizinelektronik GmbH. EMA sensors were placed (a) midline on the tongue body and (b) on the tongue tip approximately 1 cm posterior to the apex (see Figure 1). Although it is possible that greater sensor interference could occur with the placement of 3 to 4 lingual sensors, the use of two sensors was motivated by the fact that sensors placed on the superior, anterior lingual surface are involved in a variety of articulatory gestures, including palatal contact (potentially influencing sibilant production). Placement followed a standardized template system originally designed for pellet placement in the X-ray microbeam system (Westbury, 1994). Sensors were affixed to the tongue using a biocompatible adhesive, with the wires led out the corners of the participant's mouth (Katz, Machetanz, Orth, & Schoenle, 1990).1


[Photograph]
Figure 1. Example of electromagnetic articulography sensors attached to the lingual surface (midline on the tongue body and approximately 1 cm posterior to the apex).


As is common with EMA testing, before further recording, participants were given approximately 5 min to get used to the presence of EMA sensors, or until the investigators determined that there was no significant change in speech production attributable to the lingual EMA sensors. During this desensitization period, participants were engaged in informal conversation with the investigators.
Data Collection
Acoustic data were recorded with an Audio-Technica AT831b microphone placed 8 in. from the lips. Recordings were made with a portable DAT recorder, Teac model DA-P20. The digital waveforms were later transferred to computer disk at a rate of 48 kHz and 16-bit resolution using a DAT-Link+ digital audio interface, then down-sampled to 22 kHz for subsequent analysis.
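The resampling step described above is simple to reproduce. The following Python sketch is our illustration, not the authors' tool chain: the file name is hypothetical, a mono signal is assumed, and scipy is used because 48 kHz to 22 kHz is the exact rational ratio 11/24.

    import numpy as np
    from scipy.io import wavfile
    from scipy.signal import resample_poly

    # Hypothetical file; per the paper, DAT recordings were transferred
    # to disk at 48 kHz / 16-bit before down-sampling to 22 kHz.
    rate, x = wavfile.read("talker_sensors_on.wav")  # expects rate == 48000

    # resample_poly applies an anti-aliasing low-pass filter internally
    # before the 11/24 rate change.
    x_22k = resample_poly(x.astype(np.float64), up=11, down=24)
    new_rate = rate * 11 // 24  # 22000 Hz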
Acoustic Measures
From the seven productions elicited for each /hVd/ and FV word target, the first five phonemically correct productions were selected for analysis. Five productions were selected for each target because most of the individuals with aphasia/apraxia were able to produce this many correct utterances within seven attempts. Phonemically correct utterances were determined by independent transcription conducted by two of the authors (William F. Katz and Monica P. Stettler). As expected, there was no data loss for the control talkers, while individuals with aphasia/apraxia showed characteristic problems with particular speech sounds. Talker A1 had particular difficulty producing FV words, and these items were removed from further analysis. In all, 344 items were included in the FV acoustic analyses.
For individuals with aphasia/apraxia, it was more difficult to produce the sentence She had your dark suit in greasy wash water all year than to repeat single words in a carrier phrase. Accordingly, there were many cases of substitutions, omissions, and distortions in their sentential materials. Nonetheless, it was possible to select five sentences produced by each talker for duration measurement purposes and the first five phonemically correct instances of the vowels /i/, /æ/, /u/, and /a/ for formant frequency analysis.


The first three formant frequencies (F1-F3) were estimated at vowel midpoint for the vowels /i/, /æ/, /u/, and /a/. The four corner vowels were selected because they delimit the acoustic (and, by inference, articulatory) working space for vowels. Vowel formant frequencies (F1-F3) were estimated using an automated formant-tracking procedure developed by Nearey, Hillenbrand, and Assmann (2002). In this procedure, several different linear predictive coding (LPC) models varying in the number of coefficients are applied, given some assumptions about the number of expected formants in a given frequency range. The best model is then selected based on formant continuity, formant ranges, and formant bandwidth, along with a measure of the correlation between the spectrum of the original and a synthesized version. Final formant frequency values were estimated as the median of five successive measurements spaced 5 ms apart, spanning vowel midpoint.
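The Nearey, Hillenbrand, and Assmann (2002) tracker, with its competing LPC models and continuity-based selection, is not reproduced here. The Python sketch below illustrates only the general logic: LPC poles screened by frequency and bandwidth, then the median of five measurements spaced 5 ms apart around vowel midpoint. The function names are ours, and the LPC order, window length, and screening thresholds are illustrative assumptions, not the authors' settings.

    import numpy as np
    from scipy.linalg import solve_toeplitz

    def lpc_coeffs(frame, order):
        """Autocorrelation-method LPC coefficients (Yule-Walker equations)."""
        frame = frame * np.hamming(len(frame))
        r = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
        a = solve_toeplitz((r[:order], r[:order]), r[1:order + 1])
        return np.concatenate(([1.0], -a))  # polynomial A(z) = 1 - sum a_k z^-k

    def formants_at(x, fs, t, order=12, frame_ms=25):
        """F1-F3 candidates (Hz) from LPC poles of a frame centered at t (s)."""
        n = int(fs * frame_ms / 1000)
        i = int(t * fs)
        roots = np.roots(lpc_coeffs(x[i - n // 2:i + n // 2], order))
        roots = roots[np.imag(roots) > 0]          # one root per conjugate pair
        freqs = np.angle(roots) * fs / (2 * np.pi)
        bws = -fs / np.pi * np.log(np.abs(roots))  # pole bandwidth estimate
        f = np.sort(freqs[(freqs > 90) & (bws < 400)])[:3]  # crude screening
        return np.pad(f, (0, 3 - len(f)), constant_values=np.nan)

    def midpoint_formants(x, fs, midpoint):
        """Median of five measurements spaced 5 ms apart, spanning midpoint."""
        times = midpoint + np.arange(-2, 3) * 0.005
        return np.nanmedian([formants_at(x, fs, t) for t in times], axis=0)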
Fricative centroids were measured at fricative midpoint using TF32 software (Milenkovic, 2001). Spectral moments analysis treats the Fourier power spectrum as a random probability distribution from which four measures may be derived: centroid (spectral mean), variance (energy spread around the spectral peak), skewness (tilt or symmetry of the spectrum), and kurtosis (peakedness of the spectrum; Forrest, Weismer, Milenkovic, & Dougall, 1988; Tjaden & Turner, 1997). Although Weismer and Bunton (1999) examined all four spectral moments in their study of the effects of X-ray microbeam pellets on speech, only the first spectral moment (centroid) showed any evidence of differing as a function of pellet placement during speech. Based on these findings, as well as the data from other studies highlighting the importance of the centroid in determining fricative quality (e.g., Jongman, Wayland, & Wong, 2000; Nittrouer, Studdert-Kennedy, & McGowan, 1989; Tabain, 1998), we focused on fricative centroids as a measure of possible interference effects of EMA sensors during fricative production.
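The moments computation itself is compact. The following Python sketch, which is not the TF32 implementation, applies the probability-distribution treatment described above to a single frame taken at fricative midpoint; the Hamming window and the excess-kurtosis convention are our assumptions.

    import numpy as np

    def spectral_moments(frame, fs):
        """First four spectral moments of a windowed frame, treating the
        Fourier power spectrum as a probability distribution over frequency."""
        power = np.abs(np.fft.rfft(frame * np.hamming(len(frame)))) ** 2
        freqs = np.fft.rfftfreq(len(frame), d=1.0 / fs)
        p = power / power.sum()                    # normalize to unit mass
        centroid = np.sum(freqs * p)               # M1: spectral mean (Hz)
        variance = np.sum((freqs - centroid) ** 2 * p)   # M2: spread (Hz^2)
        sd = np.sqrt(variance)
        skewness = np.sum(((freqs - centroid) / sd) ** 3 * p)        # M3: tilt
        kurtosis = np.sum(((freqs - centroid) / sd) ** 4 * p) - 3.0  # M4: peak
        return centroid, variance, skewness, kurtosis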
Perceptual Measures
Ten native speakers of American English with a background in speech-language pathology volunteered as listeners. Listeners ranged from 23 to 53 years of age (M = 28 years). All listeners had taken a course in phonetics and reported no speech, language, or hearing problems.
Stimuli consisted of the syllables /si/, /su/, /∫i/, and /∫u/, produced by individuals with aphasia and apraxia and by healthy control talkers under sensors-on and sensors-off conditions. There were 200 productions by the control talkers (5 participants × 2 fricatives × 2 vowels × 2 sensor conditions × 5 repetitions) and 144 productions by the 4 individuals with aphasia/apraxia. As noted previously, the FV productions of Talker A1 and the /∫i/ productions of Talker A3 were eliminated due to high error rates. All stimuli were adjusted to the same peak amplitude, resulting in levels between 65 and 72 dB SPL(A).
The FV word identification task was conducted in a sound-treated room at the University of Texas at Dallas, Callier Center for Communication Disorders. Listeners were instructed that they would hear the words /si/, /su/, /∫i/, and /∫u/ produced by adult talkers (including individuals with aphasia/apraxia and healthy controls) under conditions of having EMA sensors on or off the tongue during speech. Productions by individuals with aphasia/apraxia and by healthy controls were presented in randomized (mixed) order. The listener's task was to identify each word by clicking one of four response panels (labeled with IPA symbols and the words see, she, Sue, and shoe) on a computer screen. Before the experiment, listeners first completed a practice set in which they were given 16 stimuli presented through headphones. The practice session was designed to familiarize the participant with the range of variations in the quality of fricatives to be identified in the main experiment and to familiarize them with the task. The materials for this practice session included productions by individuals with aphasia/apraxia and healthy control talkers other than those used in the main experiment. In the main experiment, listeners identified a total of 344 words. The experiment was self-paced, and listeners were allowed to replay each stimulus any number of times, by pressing a replay button, before giving their answer. Listeners completed the experiment in one session lasting approximately 40 min.
Results
Segment Durations
Figure 2 shows mean vowel durations (and standard errors) for phonemically correct /hVd/ words produced by the two talker groups in sensors-on and sensors-off conditions. A mixed-design, repeated measures analysis of variance (ANOVA) was conducted with group (aphasic/apraxic, control) as the between-subject variable and vowel (12 levels) and sensor condition (on/off) as within-subject variables. Results indicated a significant main effect for vowel, F(11, 96) = 9.09, p < .0001, and a significant Vowel × Group interaction, F(11, 96) = 1.99, p = .0376. These effects reflect two main patterns: (a) vowel-specific differences among the 12 vowels investigated (e.g., tense vowels longer than lax vowels) and (b) greater vowel-specific durational differences in aphasic/apraxic as opposed to control talker group productions. Figure 2 also indicates that productions by individuals with aphasia/apraxia were generally longer than those of normal control talkers, although this group difference did not reach significance, F(1, 96) = 1.09, p = .299, ns. Critically, there was no significant main effect for sensor condition, and no other significant two-way or three-way interactions. Although one must be careful when interpreting negative findings, the fact that vowel and group factors revealed significant effects (whereas sensor condition did not) suggests EMA sensors do not affect the duration of vowels in /hVd/ contexts produced by healthy adults or individuals with aphasia/apraxia.


Figure 2. Mean vowel durations and standard errors for /hVd/ productions by healthy adults and individuals with aphasia/apraxia, shown by speaking condition (open bars = sensors off, shaded bars = sensors on).


Figure 3 contains sensors-on and sensors-off duration data for sentences produced by the two talker groups. As expected, productions by individuals with aphasia/apraxia had overall longer durations than those of the control talkers. However, there was little systematic difference for either talker group as a function of sensor condition.


Figure 3. Mean segment durations and standard errors for productions by healthy adults and individuals with aphasia/apraxia, shown by speaking condition (open bars = sensors off, shaded bars = sensors on).


Post hoc analyses of the three-way (Group × Segment × Sensor Condition) interaction focused on the effects of sensors (off vs. on) for phonemes produced by each of the two talker groups. In these analyses, the differences of least squares means were computed (t tests), with significance set at p < .01 to correct for multiple comparisons. Results indicated no significant sensors-off versus sensors-on differences for segments produced by the healthy control talkers. For productions by individuals with aphasia/apraxia, three segments showed significant sensor effects (/jrr/, /d/, and /ar/). However, the direction of these effects was not consistent: /pj/ and /ar/ durations were shorter in the sensors-on condition, while /d/ durations were shorter in the sensors-off condition.
In summary, the data revealed expected group and segment differences, while sensor condition had little systematic effect. We interpret these data as showing little difference between individuals with aphasia/ apraxia and healthy adults with respect to possible durational interference from EMA sensors.
Vowel Formant Frequencies and Trajectories
The average formant frequencies (F1-F3) of the vowel portions of the words /hid/, /hæd/, /hud/, and /had/ are summarized by group, vowel, and condition in Table 2. Following Weismer and Bunton (1999), between-condition differences of 75, 150, and 200 Hz (for F1, F2, and F3, respectively) were operationally defined as minimal criteria for intraoral sensor interference. These values are based on considerations of typical measurement error for F1-F3 formant values (Lindblom, 1962; Monsen & Engebretson, 1983) and on difference limens data for formant frequencies (Kewley-Port & Watson, 1994).
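The operational definition reduces to a simple thresholding rule, made explicit in the Python sketch below; the numbers in the example are made up, and the reading of "reached criteria" as a difference equal to or exceeding the threshold is our assumption.

    # Minimal criteria for intraoral sensor interference, after Weismer
    # and Bunton (1999): 75, 150, and 200 Hz for F1, F2, and F3.
    CRITERIA_HZ = {"F1": 75, "F2": 150, "F3": 200}

    def flag_interference(f_off, f_on):
        """Flag formants whose between-condition difference reaches criterion.

        f_off, f_on: dicts mapping 'F1'/'F2'/'F3' to a talker's mean
        frequency (Hz) for one vowel, sensors off and on respectively.
        """
        return {name: abs(f_on[name] - f_off[name]) >= limit
                for name, limit in CRITERIA_HZ.items()}

    # Example with made-up values: only the F2 difference (180 Hz) reaches
    # its criterion of 150 Hz.
    print(flag_interference({"F1": 310, "F2": 870, "F3": 2250},
                            {"F1": 350, "F2": 1050, "F3": 2330}))
    # {'F1': False, 'F2': True, 'F3': False}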


Table 2. Mean formant frequency values for healthy adult (control) talkers and individuals with aphasia/apraxia across speaking conditions (/hVd/ productions).


Of the 120 sensors-on/sensors-off comparisons (10 talkers × 4 vowels × 3 formants) shown in Table 2, 4 (3%) reached criteria: F1 of /u/ produced by Talker A2, F3 of /a/ produced by Talker A4, and F1 and F2 of /u/ by Talker A4. These cases are boldface in Table 2. Keeping in mind the caveat that formant frequency patterns are at best a first approximation of causality (Borden, Harris, & Raphael, 2003), one can nevertheless consider tube perturbation theory (e.g., Stevens, 2000; Stevens & House, 1955) to speculate about some possible articulatory explanations for these patterns of formant frequency change. For Talker A2, lowered F1 for /u/ suggested higher overall tongue position in the sensors-on condition. For Talker A4, sensors-on productions showed higher F3, suggesting an increased point of constriction between the teeth and alveolar ridge, or at the pharynx. Talker A4's /u/ productions showed increased F1 in the sensors-on condition (implying a lowered tongue position) and a decreased F2 (suggesting a more retracted tongue position).
The few observed cases of potential sensor interference occurred for individuals with aphasia/apraxia, suggesting that these individuals may show greater intraoral interference effects than healthy control talkers. However, the vowel formant frequencies produced by individuals with aphasia/apraxia were more variable than those of the control talkers (as reflected by 49% greater standard deviations), raising the possibility that these sensor-dependent differences were a by-product of increased variability per se (and not due to increased susceptibility to intraoral interference).
To better understand the effects of sensors during vowel production, we examined vowel formant frequency trajectories. These data addressed the question of whether EMA sensors affect vowel spectral change over time, a property claimed to be a form of dynamic vowel specification (e.g., Strange, 1989). Vowel formant frequency trajectories for the /hVd/ utterances were estimated using an LPC-based, pitch-synchronous tracking algorithm (TF32; Milenkovic, 2001). Overlapping plots of these trajectories were made for the four cases of sensor-related formant frequency differences. For three of these cases, there were no apparent differences in trajectory shape or duration as a function of sensor condition. In contrast, Talker A4's /u/ F2 values showed a qualitative difference in formant frequency transitions, with the sensors-off condition being relatively steady state and the sensors-on data showing a more curved pattern. These patterns are shown in Figure 4, an overlapping plot of the F2 trajectories of 10 /u/ productions by A4. The five productions made in the sensors-off condition are plotted with crosses, and the five produced in the sensors-on condition are plotted with circles.
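The published trajectories were extracted with TF32; as a hedged stand-in, the sketch below extracts an F2 trajectory with Praat's Burg LPC tracker via the parselmouth Python library. The file name and analysis settings are assumptions, not the study's parameters.

```python
# Sketch: F2 trajectory for one vowel token using Praat's Burg formant
# tracker (via parselmouth), as a stand-in for the TF32 analysis.
import numpy as np
import parselmouth

def f2_trajectory(wav_path, time_step=0.005, max_formant_hz=5500.0):
    """Return (time, F2) pairs across the token; NaN where tracking fails."""
    sound = parselmouth.Sound(wav_path)
    formant = sound.to_formant_burg(time_step=time_step,
                                    maximum_formant=max_formant_hz)
    times = np.arange(time_step, sound.duration, time_step)
    return [(t, formant.get_value_at_time(2, t)) for t in times]

# Overlaying several tokens (e.g., five sensors-off and five sensors-on
# productions) on one axis reproduces the kind of plot shown in Figure 4.
for t, f2 in f2_trajectory("hud_token1.wav"):   # hypothetical file name
    print(f"{t:.3f} s  F2 = {f2:.0f} Hz")
```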


Figure 4. Overlapping F2 formant trajectories for /u/ produced by Talker A4 (an individual with aphasia/apraxia). Productions made in the sensors-off condition are plotted with crosses, and those made in the sensors-on condition are plotted with circles.


The vowels /i/, /æ/, /u/, and /ɑ/ were measured in the words she, had, suit, and wash, taken from the sentence She had your dark suit in greasy wash water all year. The average formant frequencies (F1-F3) are summarized by group, vowel, and condition in Table 3. As in the case of the /hVd/ data, the same between-condition formant frequency differences (75, 150, and 200 Hz for F1, F2, and F3) were operationally defined as minimal criteria for intraoral sensor interference (Weismer & Bunton, 1999).
As shown in Table 3, seven between-condition comparisons (5.8% of the data) reached criteria. There was no obvious pattern for these sensor-related differences to favor a specific vowel, talker group, or formant. Also, the direction of sensor-related effects did not suggest any single articulatory pattern for these talkers. For example, Control Talker C3 produced lower F1 values for /ɑ/ in the sensors-on condition, suggesting a higher tongue position for this low vowel (possibly a case of undershoot). In contrast, Talker A4 produced lower F2 values for /u/ in the sensors-on condition, suggesting either lingual overshoot for this back/high vowel or perhaps compensatory lip rounding.
As with the talkers' /hVd/ data, vowel formant frequency trajectories were inspected to determine whether the presence of sensors produced any qualitative difference in trajectory shape. The data revealed no cases of formant trajectory differences attributable to sensor placement.
Fricative Spectra
Table 4 shows centroid values for healthy control talkers and for individuals with aphasia/apraxia, listed separately for sensors-off and sensors-on conditions. As mentioned previously, the FV productions of Talker A1 and the /∫i/ productions of Talker A3 were not included in this analysis due to these participants' difficulties producing these sounds. Using a minimum difference of at least 1 kHz between conditions as significant (Weismer & Bunton, 1999), two of the complete set of 35 sensors-on versus sensors-off comparisons reached significance. These cases are boldface in Table 4. Both cases were for productions by Talker A2, who showed higher centroid values in the sensors-on condition for /∫i/ and /∫u/.
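As background for readers unfamiliar with the measure, here is a minimal sketch of a centroid (first spectral moment) computation over a windowed fricative segment. Preprocessing details of the published analysis (band limits, preemphasis, segmentation) may differ.

```python
# Centroid = amplitude-weighted mean frequency of the power spectrum.
import numpy as np

def spectral_centroid_hz(samples, fs):
    """First spectral moment of a windowed segment, in Hz."""
    windowed = samples * np.hamming(len(samples))
    power = np.abs(np.fft.rfft(windowed)) ** 2
    freqs = np.fft.rfftfreq(len(windowed), d=1.0 / fs)
    return float(np.sum(freqs * power) / np.sum(power))

# Sanity check: a pure 5-kHz tone should yield a centroid near 5000 Hz.
fs = 22050
t = np.arange(int(0.05 * fs)) / fs
print(spectral_centroid_hz(np.sin(2 * np.pi * 5000.0 * t), fs))
```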


Table 3. Mean formant frequencies for control talkers and individuals with aphasia/apraxia across speaking conditions (sentential productions).


To further examine possible interference effects of EMA sensors on fricative spectra, histograms were plotted for each talker's /s/ and /∫/ productions, with data plotted separately for the sensors-off and sensors-on talking conditions. Previous studies have shown that repeated productions of /s/ and /∫/ by healthy adult talkers have clearly distinguishable centroid values, while productions by individuals with aphasia/apraxia are more variable and overlapped (Haley et al., 2000; Harmes et al., 1984; Ryalls, 1986). Similar patterns were noted in the present data: All 5 healthy control talkers produced bimodal centroid patterns separated by approximately 3 kHz, in both sensors-off and sensors-on conditions. Of the 4 individuals with aphasia/apraxia included in this analysis, 3 had greater-than-normal spectral overlap in both the sensors-off and sensors-on conditions, with no increased spectral overlap as the result of sensors being present. However, Talker A2 produced clearly distinguishable /s/ and /∫/ centroids in the sensors-off condition (resembling those of the normal talkers) and a highly overlapped pattern in the sensors-on condition. Thus, these data reinforce the minimal distance findings (Table 4) in suggesting that Talker A2 showed acoustic evidence of EMA sensor interference during fricative production.
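A hedged sketch of the histogram comparison just described, using synthetic centroid values in place of the real tokens: two modes separated by roughly 3 kHz indicate well-distinguished /s/ and /ʃ/ productions, while heavy overlap indicates poor spectral separation.

```python
# Plot centroid histograms for repeated /s/ and /sh/ tokens in one condition
# and inspect the separation of the two modes. Data arrays are placeholders.
import numpy as np
import matplotlib.pyplot as plt

def plot_centroid_histograms(s_centroids_khz, sh_centroids_khz, title):
    bins = np.arange(2.0, 9.0, 0.25)                       # kHz
    plt.hist(s_centroids_khz, bins=bins, alpha=0.6, label="/s/")
    plt.hist(sh_centroids_khz, bins=bins, alpha=0.6, label="/ʃ/")
    plt.xlabel("Centroid (kHz)"); plt.ylabel("Count")
    plt.title(title); plt.legend(); plt.show()

rng = np.random.default_rng(2)
plot_centroid_histograms(rng.normal(7.5, 0.4, 20),         # /s/ tokens
                         rng.normal(4.5, 0.4, 20),         # /ʃ/ tokens
                         "Sensors off (synthetic example)")
```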
Identification Scores
Listeners did well on the FV word identification task, with mean performance ranging from 90% to 93% correct across the individual listeners. Figure 5 shows near-ceiling performance for words produced by healthy control talkers (98.8%) and lower accuracy for words produced by individuals with aphasia/apraxia (82.7%). Figure 5 also indicates that intelligibility varied as a function of word and sensor condition for productions by individuals with aphasia/apraxia.
The data were analyzed with a three-way (Talker Group × Word × Sensor Condition) repeated measures ANOVA. The results indicated significant main effects for group, F(1, 9) = 526.1, p < .0001, and sensor condition, F(1, 9) = 26.46, p < .0006, with a significant Group × Sensor Condition interaction, F(1, 9) = 22.1, p = .0011. These findings reflect lower identification scores for productions by the aphasic/apraxic group than for the healthy control group and higher values for the sensors-off (92.9%) than sensors-on (88.7%) conditions. Critically, there were no significant sensor-related intelligibility differences for productions by healthy control talkers, whereas individuals with aphasia/apraxia produced speech that was more intelligible in the sensors-off (86.9%) than the sensors-on (78.7%) condition. There was also a significant Word × Sensor Condition interaction, F(3, 27) = 11.78, p < .0001, and a Group × Word × Sensor Condition interaction, F(3, 27) = 7.7, p < .0007. Post hoc analyses (Scheffé, p < .01) investigating the three-way interaction indicated that /∫i/ productions by individuals with aphasia/apraxia were significantly less intelligible in the sensors-on than the sensors-off condition (marked with an asterisk in Figure 5). Individual talker data were examined to determine whether decreased intelligibility for /∫i/ produced under the sensors-on condition held for all members of the aphasic/apraxic group. The results showed that this pattern obtained for 3 of the 4 talkers with aphasia/apraxia who were included in this analysis.
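A sketch of a three-way repeated measures ANOVA of this form, using statsmodels' AnovaRM (all three factors treated as within-listener, matching the df = 9 for 10 listeners). The long-format layout, column names, and synthetic scores are assumptions; the original analysis may have used different software.

```python
# Hedged sketch: three-way repeated measures ANOVA on per-listener
# identification scores (Group x Word x Sensor Condition).
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Synthetic long-format data: 10 listeners x 2 groups x 4 words x 2 conditions.
rng = np.random.default_rng(3)
rows = [(listener, group, word, sensor,
         rng.normal(99 if group == "control" else 83, 3))
        for listener in range(10)
        for group in ("control", "aphasic")
        for word in ("si", "su", "shi", "shu")
        for sensor in ("off", "on")]
df = pd.DataFrame(rows, columns=["listener", "group", "word", "sensor", "score"])

aov = AnovaRM(df, depvar="score", subject="listener",
              within=["group", "word", "sensor"]).fit()
print(aov)
```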


Table 4. Mean centroid values (kHz) for fricatives across speaking conditions.
Figure 5. Identification of fricative-vowel stimuli produced by healthy adult (control) talkers and talkers with aphasia/apraxia, under two speaking conditions (sensors off and on). Error bars show standard errors.


The perceptual data were compared with the fricative centroid measurements of the FV stimuli (described in Table 4). For Talker A2, correspondences between acoustic and perceptual findings fell in the expected direction. Fricatives produced by this talker had significantly higher centroid values for both /∫i/ and /∫u/ in the sensors-on compared with the sensors-off condition (/∫i/: sensors-off, 4.675 kHz, sensors-on, 5.845 kHz; /∫u/: sensors-off, 5.765 kHz, sensors-on, 6.985 kHz). These higher centroid values for /∫/ should presumably have shifted listener judgments toward /s/, thereby lowering correct identification. Indeed, this pattern obtained, with lower identification accuracy for A2's /∫V/ productions in the sensors-on condition (84%) than in the sensors-off condition (99%). However, for the other three individuals with aphasia/apraxia (A3, A4, and A5), the correspondence between acoustic and perceptual data was less robust. For these talkers, there were no cases of sensor-related centroid differences greater than 1 kHz, yet a token-by-token analysis of the perceptual data revealed instances of substantial sensor-related intelligibility differences.
In summary, the acoustic and perceptual data were sensitive to talker group differences, and both data sources suggested that productions by healthy control talkers show minimal interference from EMA sensors. However, the acoustic and perceptual data showed less agreement with respect to individual talker and stimulus details for productions by individuals with aphasia/apraxia.
Discussion
Point-parameterized estimates of vocal tract motion are increasingly reported in the literature, both for healthy adults and for talkers with speech and language deficits. The results of these investigations are used to address models of speech production, as well as clinical issues such as the assessment and remediation of speech and language disorders. EMA systems play a growing role in this research. Although one study has examined the effects of X-ray microbeam pellets on speech produced by healthy adult talkers, the effects of EMA sensors on speech have not yet been investigated. It is also not known whether the risk of sensor interference is increased in talkers with speech disorders subsequent to brain damage. To address these questions, the current study examined a number of acoustic speech parameters (including segmental duration, vowel formant frequencies and trajectories, and fricative centroid values) in productions by individuals with aphasia/apraxia and age-matched healthy adult talkers under EMA sensors-off and sensors-on conditions. For most of these measures, citation form and sentential utterances were compared to determine whether subtle sensor-related differences could be detected across speech modes. A perceptual study using healthy adult listeners was conducted to obtain identification accuracy for FV words produced by individuals with aphasia/apraxia and by healthy adult talkers.


Considering next the possibility of spectral interference from EMA sensors, analysis of /hVd/ productions revealed vowel formant frequency values for productions by 2 individuals with aphasia/apraxia (A2, A4) that exceeded operationally defined thresholds for intraoral sensor interference. However, only one vowel was affected for each talker (/ɑ/ for A2, /u/ for A4), suggesting minimal interference even for these talkers. When sensor-related formant frequency differences for /i/, /æ/, /u/, and /ɑ/ were examined in words taken from the sentence She had your dark suit in greasy wash water all year, a small number of cases (5.8%) reached criteria, with no obvious tendency for these sensor-related differences to favor a specific vowel, talker group, or formant. Inspection of vowel formant frequency trajectories revealed only one apparent case of sensor-related difference (F2 trajectories for /u/ produced by A4). Taken together, the data suggest little EMA sensor interference affecting either vowel steady-state measures or vowel dynamic qualities. These acoustic findings support both informal evaluations by researchers (e.g., Schönle et al., 1987) and participants' self-reports indicating that EMA sensor interference during vowel production is minimal.
Potential spectral interference for consonants was assessed by measuring fricative centroid values for the words /si/, /su/, /∫i/, and /∫u/ produced under sensors-on and sensors-off conditions. The results indicated that Talker A2 showed significantly higher centroids in the sensors-on condition for /∫i/ and /∫u/. Histograms of centroid values for repeated productions of this talker's fricatives indicated a distinct, bimodal pattern in the sensors-off condition and greatly increased overlap in the sensors-on condition. Taken together, these data suggest Talker A2 had particular difficulty producing fricatives under EMA sensors-on conditions. The direction of the /∫V/ shift for this talker (higher centroids in the sensors-on condition) was similar to that observed by Weismer and Bunton (1999) for sentential productions by normal participants in the X-ray microbeam system. These authors suggested three possible explanations for such a shift: (a) a vocal tract constriction somewhat more forward; (b) greater overall effort in utterance production, with higher flows through the fricative constriction and consequently greater energy in the higher frequencies of the source spectrum (Shadle, 1990); and (c) sensors acting like obstacles in the path of the flow, increasing the high-frequency energy in the turbulent source and thus contributing to the first spectral moment differences. Another possibility may be a saturation effect difference, consisting of lower tongue tip contact with the alveolar ridge for /s/, but not /∫/ (Perkell et al., 2004). Conceivably, the EMA sensor could have interfered with tongue tip contact patterns, resulting in a more /s/-like quality for /∫/ attempts.
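As a rough back-of-the-envelope illustration of explanation (a), not a calculation from the paper: treating the cavity in front of the fricative constriction as a quarter-wavelength resonator, its lowest resonance is

```latex
% Quarter-wave approximation for the front-cavity resonance (illustrative):
F_{\text{front}} \approx \frac{c}{4L}, \qquad c \approx 35{,}400\ \text{cm/s}
% L = 2.0 cm  =>  F ≈ 35400 / 8.0 ≈ 4.4 kHz
% L = 1.5 cm  =>  F ≈ 35400 / 6.0 ≈ 5.9 kHz
```

where L is the front-cavity length. Moving the constriction forward (shortening L from 2.0 cm to 1.5 cm in this toy example) raises the resonance from about 4.4 kHz to 5.9 kHz, a shift on the same order as the roughly 1.2-kHz centroid increases observed for Talker A2.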
An identification experiment examined whether EMA sensors affected the intelligibility of participants' /si/, /su/, /∫i/, and /∫u/ productions. The results revealed an interaction between talker group and sensor condition (on/off). Productions by healthy adult talkers were identified almost perfectly, with no apparent effects of sensor interference. These data support previous findings from perceptual rating tasks showing that listeners could not discern whether healthy adult talkers were speaking with or without X-ray microbeam pellets attached (Weismer & Bunton, 1999). Nevertheless, because the data for productions by healthy control talkers were essentially at ceiling, it is possible that subtle effects of sensor interference might emerge if the task were made more difficult for the listeners. Future studies might explore this issue by presenting stimuli under more demanding conditions, such as in the presence of noise masking.
In the current study, FV productions by individuals with aphasia/apraxia were identified with lower accuracy than those of healthy controls, a finding consistent with clinical descriptions of imprecise fricative production in aphasia and apraxia (e.g., Haley et al., 2000). There was also evidence consistent with an interpretation of sensor-related interference: Productions by individuals with aphasia/apraxia were less intelligible in sensors-on than sensors-off conditions, a pattern that was significant for the word /∫i/. On closer inspection, the significant result for /∫i/ appears to have arisen from unusually high intelligibility for sensors-off productions rather than from lowered intelligibility for sensors-on productions. Why the sensors-off /∫i/ productions of individuals with aphasia/apraxia were so intelligible is not entirely clear. Nevertheless, despite this one unusual pattern, the perceptual data generally suggest that individuals with aphasia/apraxia have greater-than-normal difficulty producing sibilant fricatives under EMA sensor conditions.
Because EMA sensors pose the same type of physical obstruction to the oral cavity in healthy control talkers and in individuals with aphasia/apraxia, it seems reasonable to assume that any additional difficulties noted in the productions of individuals with aphasia/apraxia may be due to deficits in the ability to compensate for the presence of EMA sensors during speech. If it is further assumed that the ability to adapt to the presence of EMA sensors is functionally related to the compensatory ability needed to overcome the presence of other types of intraoral obstructions (e.g., a bite block), the present data support previous claims that individuals with aphasia/apraxia have intact compensatory articulation abilities during vowel production (Baum, Kim, & Katz, 1997).
However, the fricative findings give some indication of possible compensatory difficulties in the speech of individuals with aphasia/apraxia. These talkers, considered as a group, showed greater perceptual effects from EMA sensors than healthy normal controls. Inspection of individual talker data revealed that decreased intelligibility in the sensors-on conditions occurred for 3 of the 4 talkers with aphasia/apraxia. The most consistent case of sensor-related effects was Talker A2, whose /∫V/ productions also showed increased centroid overlap in the sensors-on condition. Cumulatively, these data provide tentative evidence that compensatory problems may underlie the difficulty that some individuals with aphasia/apraxia experience while producing fricatives under EMA conditions.
Baum and McFarland (1997) noted that healthy adults producing the fricative /s/ under artificial palate conditions show marked improvement after as little as 15 min of intense practice with the palate in place. Although the perturbations involved in the current study are arguably different from those resulting from an artificial palate, it is conceivable that practice speaking with EMA tongue tip sensors attached was sufficient to allow substantial adaptation for fricatives produced by the healthy control talkers but not for the talkers with aphasia/apraxia. Additional experimentation that includes testing after practice would help address this issue.
Whereas the acoustic and perceptual data for productions by healthy control talkers were quite congruent, the perceptual data for speech produced by individuals with aphasia/apraxia did not always correspond with the patterns one would expect based on the fricative centroid values. A mismatch between listeners' perceptions and fricative spectral attributes has been noted in previous studies of incorrect /∫/ productions by individuals with aphasia/apraxia (Wambaugh, Doyle, West, & Kalinyak, 1995). Whereas the present perceptual data appeared sensitive to talker group, word, and sensor differences, there are a number of possible reasons why measured centroid values did not predict listeners' results. One possibility is that a combination of spectral moments could provide improved predictive power, as suggested by previous studies of fricatives produced by normal healthy speakers (Forrest et al., 1988; Jongman et al., 2000). Another possibility is that predictive power could be improved by considering profiles of successive spectral moment portraits over time, such as suggested in the FORMOFFA model for the analysis of normal and disordered speech (Buder, Kent, Kent, Milenkovic, & Workinger, 1996).
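To make the first suggestion concrete, here is a hedged sketch of computing the four spectral moments of Forrest et al. (1988) in successive windows, yielding a moment profile over time. The window parameters are illustrative and are not those of FORMOFFA.

```python
# First four spectral moments (centroid, variance, skewness, excess kurtosis)
# computed per analysis window, giving a moment profile across a fricative.
import numpy as np

def spectral_moments(frame, fs):
    power = np.abs(np.fft.rfft(frame * np.hamming(len(frame)))) ** 2
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / fs)
    p = power / power.sum()                          # treat as a distribution
    m1 = np.sum(freqs * p)                            # centroid (Hz)
    m2 = np.sum((freqs - m1) ** 2 * p)                # variance
    m3 = np.sum((freqs - m1) ** 3 * p) / m2 ** 1.5    # skewness
    m4 = np.sum((freqs - m1) ** 4 * p) / m2 ** 2 - 3  # excess kurtosis
    return m1, m2, m3, m4

def moment_profile(samples, fs, win_s=0.020, hop_s=0.010):
    win, hop = int(win_s * fs), int(hop_s * fs)
    return [spectral_moments(samples[i:i + win], fs)
            for i in range(0, len(samples) - win, hop)]
```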
By including both citation form and sentential speech samples, the present study tested the hypothesis that speech produced in sentential contexts would reveal greater EMA sensor interference than citation form contexts. Relatively little support was found for this hypothesis. Although both segment durations and vowel formant frequencies were slightly more affected in the sentential stimuli than in single-word productions, these effects were noted primarily for productions by individuals with aphasia/apraxia, and the effects were not uniform across individual talkers or stimuli. Overall, the results suggest that citation form and sentential utterances show little difference with respect to their effectiveness in eliciting acoustic evidence of EMA sensor interference.
In conclusion, there are two important methodological implications of the present findings. First, the data support the observation made by Weismer and Bunton (1999) that perceptual indices will not provide adequate screening criteria for protecting kinematic experiments from healthy individuals who show consistent sensor interference effects. Weismer and Bunton noted that listeners were unable to reliably determine whether stimuli were produced with X-ray microbeam pellets on or off. In the present data, listeners showed strong ceiling effects and no influence of EMA sensors when identifying fricatives produced by healthy control talkers. Taken together, these two experiments, which examined different fleshpoint tracking technologies, suggest that acoustic screening techniques be used to identify those individuals who may show consistent effects of having sensors placed in the oral cavity. As noted by Weismer and Bunton, this protocol could involve recording speech sounds in sensors-off and sensors-on conditions, followed by acoustic analyses. The present results suggest it will be especially important to examine sibilant production.
Second, the current findings suggest that intervention studies involving consecutive EMA measurements of speech produced by individuals with aphasia/apraxia must be designed to ensure that any observed progress does not merely reflect participants adapting to the presence of the sensors over time. This potential confound can be circumvented by taking appropriate safeguards in experimental design, such as probing for stimulus generalization outside of the training set (Katz et al., 2002, 2003). At present, this concern would appear limited to studies of sibilant production by individuals with aphasia/apraxia. Additional studies are needed to determine the exact articulatory explanations for these interference effects and whether such problems extend to other classes of sounds or to productions by individuals with different types of speech disorders.
Acknowledgments
Portions of the results were presented in 2001 at the 39th Meeting of the Academy of Aphasia (Boulder, CO). This research was supported by Callier Excellence Award 19-02. We would like to thank June Levitt, Nicole Rush, and Michiko Yoshida for assistance with acoustic analysis.
[Footnote]
1 In some laboratories (e.g., University of Munich Institute of Phonetics and Speech Communication), EMA sensors are attached in such a way that the wire is first oriented toward the back of the mouth, reducing the risk of wires going over the tongue tip.


[Reference]
References
Baum, S. R., Kim, J. A., & Katz, W. F. (1997). Compensation for jaw fixation by aphasic patients. Brain and Language, 15, 354-376.
Baum, S. R., & McFarland, D. H. (1997). The development of speech adaptation to an artificial palate. Journal of the Acoustical Society of America, 102, 2353-2359.
Beckman, M. E., & Cohen, K. B. (2000). Modeling the articulatory dynamics of two levels of stress contrast. In M. Horne (Ed.), Prosody: Theory and experiment (pp. 169-200). Dordrecht, The Netherlands: Kluwer.
Beckman, M. E., & Edwards, J. (1994). Articulatory evidence for differentiating stress categories. In P. A. Keating (Ed.), Papers in laboratory phonology III: Phonological structure and phonetic form (pp. 7-33). Cambridge, England: Cambridge University Press.
Borden, G., Harris, K., & Raphael, L. (2003). Speech science primer: Physiology, acoustics, and perception of speech. Baltimore: Lippincott Williams & Wilkins.
Buder, E. H., Kent, R. D., Kent, J. F., Milenkovic, P., & Workinger, M. S. (1996). FORMOFFA: An automated formant, moment, fundamental frequency, amplitude analysis of normal and disordered speech. Clinical Linguistics and Phonetics, 10, 31-54.
Crystal, T., & House, A. (1988a). The duration of American English consonants: An overview. Journal of Phonetics, 16, 285-294.
Crystal, T., & House, A. (1988b). The duration of American English vowels: An overview. Journal of Phonetics, 16, 263-284.
Crystal, T., & House, A. (1988c). Segmental durations in connected-speech signals: Current results. Journal of the Acoustical Society of America, 83, 1553-1573.
Dabul, B. (2000). Apraxia Battery for Adults (ABA-2). Tigard, OR: C.C. Publications.
de Jong, K. (1995). The supraglottal articulation of prominence in English: Linguistic stress as localized hyperarticulation. Journal of the Acoustical Society of America, 97, 491-504.
de Jong, K., Beckman, M. E., & Edwards, J. (1993). The interplay between prosodic structure and coarticulation. Language and Speech, 36, 197-212.
Engwall, O. (2000). Dynamical aspects of coarticulation in Swedish fricatives: A combined EMA & EPG study. Quarterly Progress and Status Report From the Department of Speech, Music, & Hearing at the Royal Institute of Technology [KTH], Stockholm, Sweden, 4, 49-73.
Forrest, K., Weismer, G., Milenkovic, P., & Dougall, R. (1988). Statistical analysis of word-initial voiceless obstruents: Preliminary data. Journal of the Acoustical Society of America, 84, 115-123.
Garofolo, J. S. (1988). Getting started with the DARPA TIMIT CD-ROM: An acoustic phonetic continuous speech database. Gaithersburg, MD: National Institute of Standards and Technology.
Goodglass, H., Kaplan, E., & Barresi, B. (2001). The assessment of aphasia and related disorders (3rd ed.). Philadelphia: Lea & Febiger.
Goozee, J. V., Murdoch, B. E., Theodoros, D. G., & Stokes, P. D. (2000). Kinematic analysis of tongue movements following traumatic brain injury using electromagnetic articulography. Brain Injury, 14, 153-174.
Haley, K. L., Ohde, R. N., & Wertz, R. T. (2000). Precision of fricative production in aphasia and apraxia of speech: A perceptual and acoustic study. Aphasiology, 14, 619-634.
Hardcastle, W. J. (1987). Electropalatographic study of articulation disorders in verbal dyspraxia. In J. Ryalls (Ed.), Phonetic approaches to speech production in aphasia (pp. 113-136). Boston: College-Hill.
Harmes, S., Daniloff, R., Hoffman, P., Lewis, J., Kramer, M., & Absher, R. (1984). Temporal and articulatory control of fricative articulation by speakers with Broca's aphasia. Journal of Phonetics, 12, 367-385.
Hillenbrand, J. M., Getty, L. A., Clark, M. J., & Wheeler, K. (1995). Acoustic characteristics of American English vowels. Journal of the Acoustical Society of America, 97, 3099-3111.
Jongman, A., Wayland, R., & Wong, S. (2000). Acoustic characteristics of English fricatives. Journal of the Acoustical Society of America, 108, 1252-1263.
Katz, W., & Bharadwaj, S. (2001). Coarticulation in fricative-vowel syllables produced by children and adults: A preliminary report. Clinical Linguistics and Phonetics, 15, 139-144.
Katz, W., Bharadwaj, S., & Carstens, B. (1999). Electromagnetic articulography treatment for an adult with Broca's aphasia and apraxia of speech. Journal of Speech, Language, and Hearing Research, 42, 1355-1366.
Katz, W., Bharadwaj, S., Gabbert, G., & Stettler, M. (2002). Visual augmented knowledge of performance: Treating place-of-articulation errors in apraxia of speech using EMA. Brain and Language, 83, 187-189.
Katz, W., Carter, G., & Levitt, J. (2003). Biofeedback treatment of buccofacial apraxia using EMA. Brain and Language, 87, 175-176.
Katz, W., Machetanz, J., Orth, U., & Schoenle, P. (1990). A kinematic analysis of anticipatory coarticulation in the speech of anterior aphasic subjects using electromagnetic articulography. Brain and Language, 38, 555-575.
Kewley-Port, D., & Watson, C. S. (1994). Formant frequency discrimination for isolated English vowels. Journal of the Acoustical Society of America, 95, 485-496.
Klich, R., Ireland, J., & Weidner, W. (1979). Articulatory and phonological aspects of consonant substitutions in apraxia of speech. Cortex, 15, 451-470.
Lindblom, B. (1962). Accuracy and limitations of sonographic measurements. Proceedings of the 4th International Congress of Phonetic Sciences. The Hague, The Netherlands: Mouton.
Lindblom, B. (1990). Exploring phonetic variation: A sketch of the H-and-H theory. In W. J. Hardcastle & A. Marchal (Eds.), Speech production and speech modeling (pp. 403-439). Dordrecht, The Netherlands: Kluwer Academic.
Mertus, J. (2002). BLISS [Software analysis package]. Providence, RI: Author.
Milenkovic, P. (2001). Time-frequency analyzer (TF32) [Software analysis package]. Madison: University of Wisconsin.
Monson, R., & Engebretson, A. M. (1983). The accuracy of formant frequency measurements: A comparison of spectrographic analysis and linear prediction. Journal of Speech and Hearing Research, 26, 89-97.
Murdoch, B., Goozée, J. V., & Cahill, L. (2001). Dynamic assessment of tongue function in children with dysarthria associated with acquired brain injury using electromagnetic articulography. Brain Impairment, 2, 63.
Nearey, T. M., Hillenbrand, J. M., & Assmann, P. F. (2002). Evaluation of a strategy for automatic formant tracking. Journal of the Acoustical Society of America, 112, 2323.
Nijland, L., Maassen, B., Hulstijn, W., & Peters, H. F. M. (2004). Speech motor coordination in Dutch-speaking children with DAS studied with EMMA. Journal of Multilingual Communication Disorders, 2, 50-60.
Nittrouer, S., Studdert-Kennedy, M., & McGowan, R. S. (1989). The emergence of phonetic segments: Evidence from the spectral structure of fricative vowel syllables spoken by children and adults. Journal of Speech and Hearing Research, 32, 120-132.
Odell, K., McNeil, M. R., Rosenbek, J. C., & Hunter, L. (1990). Perceptual characteristics of consonant production by apraxic speakers. Journal of Speech and Hearing Disorders, 55, 349-359.
Perkell, J. S., Matthies, M. L., Tiede, M., Lane, H., Zandipour, M., Marrone, N., et al. (2004). The distinctness of speakers' /s/-/∫/ contrast is related to their auditory discrimination and use of an articulatory saturation effect. Journal of Speech, Language, and Hearing Research, 47, 1259-1269.
Perkell, J. S., & Nelson, W. L. (1985). Variability in production of the vowels /i/ and /u/. Journal of the Acoustical Society of America, 77, 1889-1895.
Peters, H. F. M., Hulstijn, W., & Van Lieshout, P. H. H. M. (2000). Recent developments in speech motor research into stuttering. Folia Phoniatrica et Logopaedica, 52, 103-119.
Peterson, G. E., & Barney, H. L. (1952). Control methods used in a study of the vowels. Journal of the Acoustical Society of America, 24, 175-184.
Ryalls, J. (1986). An acoustic study of vowel perception in aphasia. Brain and Language, 29, 48-67.
Schönle, P., Grabe, K., Wenig, P., Hohne, J., Schrader, J., & Conrad, B. (1987). Electromagnetic articulography: Use of alternating magnetic fields for tracing movements of multiple points inside and outside the vocal tract. Brain and Language, 20, 90-114.
Schultz, G. M., Sulc, S., Leon, S., & Gilligan, G. (2000). Speech motor learning in Parkinson's disease: Preliminary results. Journal of Medical Speech-Language Pathology, 8, 243-247.
Shadle, C. H. (1990). Articulatory-acoustic relationships in fricative consonants. In W. J. Hardcastle & A. Marchal (Eds.), Speech production and speech modeling (pp. 189-209). Dordrecht, The Netherlands: Kluwer Academic.
Stevens, K. (2000). Acoustic phonetics. Cambridge, MA: MIT Press.
Stevens, K., & House, A. (1955). Development of a quantitative description of vowel articulation. Journal of the Acoustical Society of America, 27, 484-493.
Strange, W. (1989). Evolving theories of vowel perception. Journal of the Acoustical Society of America, 85, 2081-2087.
Tabain, M. (1998). Non-sibilant fricatives in English: Spectral information above 10 kHz. Phonetica, 55, 107-130.
Tabain, M. (2003). Effects of prosodic boundary on /aC/ sequences: Articulatory results. Journal of the Acoustical Society of America, 113, 2834-2849.
Tjaden, K., & Turner, G. S. (1997). Spectral properties of fricatives in amyotrophic lateral sclerosis. Journal of Speech, Language, and Hearing Research, 40, 1358-1372.
Umeda, N. (1975). Vowel duration in American English. Journal of the Acoustical Society of America, 58, 434-445.
Umeda, N. (1977). Consonant duration in American English. Journal of the Acoustical Society of America, 61, 846-858.
Wambaugh, J. L., Doyle, P. J., West, J. E., & Kalinyak, M. M. (1995). Spectral analysis of sound errors in persons with apraxia of speech and aphasia. American Journal of Speech-Language-Pathology, 4, 186-192.
Weismer, G., & Bunton, K. (1999). Influences of pellet markers on speech production behavior: Acoustical and perceptual measures. Journal of the Acoustical Society of America, 105, 2882-2891.
Westbury, J. (1994). X-ray Microbeam speech production user's handbook (Version 1). Madison: University of Wisconsin-Madison.


[Author Affiliation]
William F. Katz
Sneha V. Bharadwaj
Monica P. Stettler
University of Texas at Dallas


Received July 9, 2005
Accepted October 30, 2005
DOI: 10.1044/1092-4388(2006/047)
Contact author: William F. Katz, Callier Center for Communication Disorders, University of Texas at Dallas, 1966 Inwood Road, Dallas, Texas 75235.
E-mail: wkatz@utdallas.edu


Thursday, June 5, 2008

Preface

Origin of this book
May 2004 Berkeley, conference on “Methods in Phonology”
Conference in honor of John Ohala
Focus of this book
Foundational experimental methods
Methods to test phonological hypotheses about speakers' and hearers' knowledge of their native sound system
the acquisition of the sound system
the laws that govern the sound system
Methods are not static
Recent change in “methods in Phonology”
The rise of new experimental techniques
Increased use of experimental methods in Phonology
Factors responsible for this change
Increasingly diverse questions
Structure of grammar
Representation of sound patterns
Phonetic and phonological constraints
Categorization
New perspectives
Development of the techniques
Availability of corpora
Phonological unification in recognition and application
Embedding experiments within other scientific fields
To unify established knowledge with accounts of language and speech
Modeling in Phonology and relevant techniques
The ability to model relevant behaviors and patterns
The increasing importance of modeling tools
Phonological findings & theoretical implications therefrom