Wednesday, March 18, 2009

Georg Wilhelm Friedrich Hegel


Georg Wilhelm Friedrich Hegel (August 27, 1770 – November 14, 1831) was a German philosopher, born in Stuttgart, the capital of Württemberg in what is today southwestern Germany. At eighteen he entered the University of Tübingen (a Protestant seminary in Württemberg), where he became friends with Hölderlin and Schelling and was deeply drawn to the works of Spinoza, Kant, and Rousseau, as well as to the French Revolution. Many regard Hegel's thought as the summit of the nineteenth-century German idealist movement, and it exerted a profound influence on later currents such as existentialism and Marx's historical materialism. Moreover, because Hegel's political thought combines the essentials of both liberalism and conservatism, his philosophy offered a new way forward for those who felt that liberalism was in crisis, having seen its inability to acknowledge individual needs and to embody basic human values. Hegel studied at the Tübingen seminary in the same years as the epic poet Hölderlin and the objective idealist Schelling. After closely observing the whole course of the French Revolution, the three worked together on a critique of the idealist philosophy of Kant and his successor Fichte.
Hegel's first and most important work is the Phenomenology of Spirit. The works published in his lifetime also include the Encyclopedia of the Philosophical Sciences, the Science of Logic, and the Elements of the Philosophy of Right. His other works on the philosophy of history, the philosophy of religion, aesthetics, and the history of philosophy were compiled after his death from the notes his students took at his lectures.
Hegel's writings are renowned for their breadth and depth. He built a vast system for understanding the history of philosophy and the world we live in itself, a world that for Hegel is usually seen as "a historical procession in which each successive movement emerges as the solution to the contradictions of the movement that preceded it." For example, he held that the French Revolution introduced genuine freedom into Western society for the first time in human history. But precisely because it was an absolute first, it was also absolutely radical: once the Revolution had annihilated its opposite, the surge of violence it had unleashed could not contain itself, and the Revolution, with nowhere left to go, ended by devouring itself, hard-won freedom destroying itself in the brutality of the Terror. Yet history advances by learning from its own mistakes: only through this experience, and only after it, could there arise a constitutional government of free citizens capable both of exercising the responsibilities of rational government and of realizing the revolutionary ideals of freedom and equality.
In the preface to the Philosophy of History, Hegel explains: "Philosophy shows that consciousness exists over and above its infinitely many concepts; that is, consciousness exists in free, infinitely many forms, of which the opposed form of abstract introspection is only one reflection. Consciousness is free, self-subsistent, and individual, and belongs to spirit alone."
So "consciousness" as a single concept is composed of two parts, each with infinitely many "forms": one part is a matter of principle, the other the concrete reflection of each historical event. Hence he also says: "Consciousness in the ordinary sense has two aspects, one the conception of things as a whole, the other the abstract conception of their concrete reflection." And further: "Every person's self-consciousness differs, and so do their reactions to things, deviating from consciousness as principle; but for a normal person this deviation has limits, limits determined by his normal condition and by the degree of his reverence for God. To grasp the degree of such a conception belongs to metaphysics." So, although Hegel's language is hard to follow, what he is setting out is this: metaphysics must study the mechanism by which thesis and antithesis are connected in each event, and must therefore compare the instances in each historical event with their prototypes, to understand what they share and where they differ. Hegel held an organized, teleological view of human society, and his writing, rich and difficult, can be bewildering for modern readers. What is more, his ideas run counter to existentialist philosophy and to the notion of individual rights now current among intellectuals. His doctrine remained controversial long after his death, yet his influence on the world of ideas was so broad that almost every school has either endorsed or criticized his teaching. Historians divide the Hegelian school into two camps. The Hegelian Right was represented by his followers at Humboldt University in Berlin, who upheld evangelical religious orthodoxy and the political conservatism of the post-Napoleonic era. The Hegelian Left, sometimes called the "Young Hegelians," took up the revolutionary elements of Hegel's teaching, advocating atheism in religion and liberal democracy in politics; they included Feuerbach and the young Marx and Engels. In the 1830s and 1840s these young followers of Hegel met and argued regularly at Hippel's in Berlin, and the atmosphere there shaped thinkers who would influence the next 150 years, forming the basic ideas of atheist humanism, communism, anarchism, and egoism. Yet hardly any of the Hegelian Left claimed to be Hegel's followers, and several openly attacked his philosophy; still, this historical division is retained in academic philosophy today, and the Left Hegelians' criticism of Hegel opened up an entirely new field, the literature on Hegel and Hegelian theory. Today, for the convenience of students, Hegel's dialectic is divided into three stages: "thesis" (for example, the Revolution in French history), "antithesis" (the Terror that followed it), and "synthesis" (the constitutional state of free citizens). This division was not proposed by Hegel himself; it first appears in Fichte's analogical description of the relation between the individual and the whole. Hegel scholars did not recognize that this triadic formula obscures the real argument of Hegel's theory. Although Hegel once said, "Two basic elements must be considered: first, that the free will is the absolute and final end; second, the means of realizing it, that is, the subjective side of knowledge and consciousness, including life, movement, and activity" (thesis and antithesis), he never used the term "synthesis," speaking instead of "the whole": "Thus we come to know the whole of morality and the condition in which freedom is realized, and the subsequent subjective integration of these two elements."
Hegel applied this dialectical system to explain philosophy, science, art, politics, religion, and history, but modern critics point out that he often trimmed historical fact to fit his dialectical scheme. Karl Popper, in The Open Society and Its Enemies, charged that Hegel's system dressed up the rule of Friedrich Wilhelm III and that Hegel regarded the Prussia of the 1830s as the ideal society. Herbert Marcuse, in Reason and Revolution: Hegel and the Rise of Social Theory, criticized Hegel as an apologist for state power who paved the way for the rise of twentieth-century totalitarianism. In fact Hegel did not defend these forms of power as such; he merely held that whatever exists is rational, and that because these powers existed, they too were rational. Arthur Schopenhauer scorned Hegel's interpretation of history, dismissing his work as obscurantist "pseudo-philosophy," a judgment many philosophers of the British school shared. In the twentieth century Hegel's philosophy underwent a revival, for several reasons: Hegel was recognized as a source of Marxist philosophy, his view of history came back to life, and the importance of his dialectic won wide acknowledgment. The most important work that brought Hegel's theory back into the Marxist canon was György Lukács's History and Class Consciousness, which set off a wave of renewed interest in and reassessment of Hegel's writings, a revival that also drew attention to Hegel's early works. Modern American philosophers, too, show Hegel's influence clearly.

Vygotsky's Social Constructivism

1. What is constructivism?
In the traditional Western intellectual world, how knowledge is acquired has always been a central concern of philosophy. There are two main opposing theories of how people come to know: instructionalism and constructivism (constructivism or constructionism).
The former rests on a specious assumption, namely that acquired knowledge is simply a representation of the real world; in other words, that knowledge exists independently of the knower. Philosophers who hold this view treat knowledge as objective fact that accurately mirrors the real world (note: research methods developed from this outlook for investigating the natural world are known in academia as positivism), and they picture the learner facing new knowledge as a blank sheet of paper that acquires knowledge only when the outside world colors it in. Starting from this line of philosophical thinking, many educationists have taken education to be an activity in which an instructor who possesses the correct facts authoritatively instructs the blank learner and builds up knowledge on the learner's behalf. The learner's role is thus that of someone waiting to be given knowledge from outside in order to become its possessor. In the West, scientism has been dominant since the Enlightenment, and school education has accordingly emphasized having learners accept knowledge from authoritative sources, with little concern for how learners themselves experience that process of reception. Under such conditions, the instructionalist outlook naturally became a convenient justification for teaching as a top-down, giver-to-receiver transaction between instructor and learner. Our own primary and secondary school system was largely modeled on Europe, America, and Japan, and its traditional teacher-centered, textbook-centered teaching bears the clear marks of instructionalist thinking.
Constructivism, by contrast, abandons the traditional assumption that knowledge and the knower are independent of each other. It stresses instead that knowledge arises from the interplay between the learner's experience of things in the real world and the events themselves (note: research methods developed from this outlook are known in academia as relativism), because this interplay is what determines whether the learner can successfully accommodate new information. Constructivists treat learning as the result of mental construction, that is, the process by which learners fit newly encountered information into the knowledge they already have. Learning therefore works best when learners have the chance to make sense of new material for themselves. In other words, constructivists hold that even when learning something new, learners are never blank sheets; they come to new ideas carrying ideas they already hold. Through learning activities, new and old knowledge are joined; when they cannot be reconciled, the learner experiences cognitive conflict. At this point the teacher's role becomes crucial: as a more experienced learner, the teacher helps students in cognitive conflict to compare and analyze the new and the old so as to resolve the conflict and genuinely grasp the meaning of the new information. Constructivism thus places great weight on the learner's active role in the learning process.
2. How has constructivism been taken up in education?
As noted above, constructivism stresses the student's active role in learning. How, then, can it be applied to teaching in practice? We may as well look at a brief history of how technology, the production of knowledge, and the aims of teaching have evolved.
Constructivism's renewed prominence in Western education is a development of the second half of the twentieth century. It returned with new content for these main reasons: advances in technology, the explosion in the amount of information, and the interaction of the two. For these reasons, in the last quarter of the twentieth century the goal of formal schooling gradually shifted from the traditional "mastery of information content" to "understanding information and being able to apply it in real situations." Taking science education as an example, we may also say that once the conviction of "Science for all" became a consensus in the field, the influence of ethnicity, culture, gender, and other backgrounds on science education began to receive growing attention.
In this tide of conceptual renewal, constructivist theory thoroughly changed our ideas about how science is taught and learned. Evidence from three directions has been especially influential: people's views of science, the history of science, and cognitive psychology. Traditional models of teaching and learning prized mastery of the content of a given domain, the storage of information, and the ability to recall and apply it; these were their chief aims. But in modern society, with its huge volume of information, rapid change, and proliferation of cross-disciplinary knowledge, such a passive approach can no longer meet citizens' needs, and constructivist ideas have accordingly risen with the tide. On the other hand, for educationists constructivism is a conceptual import (its original ideas come from philosophy), so some contemporary scholars hold that constructivism is not really a theory of how to teach but a theory of knowledge and learning. More precisely, it is less a single theory than a set of ideas. It has by now become a way of thinking about how teaching and learning ought to interact, drawing on neurophysiology, psychology, philosophy, sociology, linguistics, anthropology, education, and related fields for its understanding of human beings.
To judge whether a teaching activity is moving toward constructivist ideals, the quality of the interaction between instructor and learner may serve as a reference point: through that interaction, does the learner really come to understand the new knowledge encountered, or merely memorize it? We can almost think of constructivism as a spectrum, on the analogy of a chemical spectrum. Some give learners complete autonomy in their learning (note: proponents of this position are often called radical constructivists), because this camp almost entirely rejects the positivist view of knowledge as absolutely objective truth. Others take a middle position between relativism and positivism and so give learners a large measure of autonomy. Still others retain rather more of the positivist view and so give learners somewhat less room for autonomous learning (note: proponents of this position are often called moderate constructivists). But as long as the instructor does not usurp the learner's place and do the learning on the learner's behalf, all of these count as helping learners construct knowledge in a broadly constructivist direction. Conversely, if a teaching activity consists only of one-way transmission of knowledge from instructor to learner, it is merely knowledge-pouring within the instructionalist frame, not constructivist teaching, because you can never be sure that, in such a process, the learner ever had the chance to genuinely understand the new knowledge and turn it into part of permanent memory.
As teachers will no doubt anticipate, a constructivism that gathers up so many strands cannot have a standard paradigm, let alone a standard teaching model derived from one. Constructivists hold that knowledge construction is an iterative process: the components of new knowledge are products of knowledge constructed earlier. The interplay of mental structure and knowledge content is therefore woven through constructivist discussion and often cannot be teased apart. Although the many schools that call themselves constructivist differ over the origin of knowledge, they share the same assumption about the substance of learning, namely how the learner's mental structures develop. Their common concern is: how do students' minds think about new knowledge, and can it be integrated with what they already know?
The two poles among constructivist schools are Piaget's cognitive constructivism (also called genetic epistemology) and Vygotsky's social constructivism. Below we take these two principal schools, cognitive constructivism and social constructivism, as examples for further discussion.
3. Cognitive constructivism
To understand what cognitive constructivism is about, one must begin with Jean Piaget (1896-1980). Piaget was a pioneer in studying the qualitative changes that occur across the stages of children's cognitive development. He was at once biologist, philosopher, and psychologist, and his thinking was deeply influenced by logical positivism. Below we start from his background and then review the important influence his research had on twentieth-century educational thought.
On the biological side, Piaget was a child prodigy: at ten he published his first scientific paper, on an albino sparrow. His biology, however, was that of the nineteenth century, namely the view that the stages of an individual's growth recapitulate the stages of the species' development.
Piaget taught philosophy for many years and accepted Kant's claim that a priori knowledge exists. He was also a logician and had joined a group of mathematical philosophers in the movement that created logical positivism. Linking Kant's claim with logic, he took logical ability to be an example of a priori knowledge. Curious how such knowledge arises in the child's mind, he proposed the hypothesis that the structure of thought has the same form as the logic of thought.
Piaget's greatest influence on later generations is probably in psychology. He argued that the growing child is the best source of information for studying the origins of human knowledge. The study of the origin of knowledge is called epistemology in philosophy, so the field Piaget founded came to be known as genetic epistemology. He was once employed to analyze the results of the Binet-Simon intelligence tests. His approach was unusual: because the items children got wrong were almost the same, he became more interested in the wrong answers than in the correct ones, and to get to the bottom of them he interviewed children using scientific problems. He would typically give children some materials and ask them to solve a problem, for example how to balance the masses of two objects, while explaining what they were doing as they went. This method, which he pioneered, later came to be called the clinical interview. In this way, from studying the logic of children's thought, he derived his theory of the structure of thought, the famous theory of stages of cognitive development.
Piaget's stage theory holds that children's ideas about the natural world pass through a series of qualitatively distinct stages. Many later constructivists, however, cared less about the stage theory than about the mechanism of knowledge development he proposed. Piaget held that thought develops by growing from one state of cognitive equilibrium to another. For the learner, if your experiences match what you expected, those experiences are meaningful; you need only add them to the store of information in your head. But if your new experiences are not what you expected, you cannot simply discard them. In that situation you have three options: ignore them (a very common move), alter them in your mind so that they fit the information already there (an even more common move), or change your way of thinking so that the new experiences fit (a very uncommon move). He held that the only route to cognitive progress is through changes in the schemata of a series of cognitive stages. When new information differs from prior experience and cannot be made to fit, it cannot be assimilated into the existing schemata and therefore cannot be understood. The only way to remove the discrepancy is for the schemata themselves to be revised; this is the process of conceptual change.
4. Social constructivism
At about the time Piaget was coming into his own academically, on the other side of the world a revolutionary struggle was giving rise to a new conception of the origin of knowledge, derived chiefly from the political economist Karl Marx. Lev Vygotsky (1896-1934) was among the principal figures produced by the Russian Revolution. Vygotsky came from an intellectual family and was a Marxist. Marxism stresses the individual's power to prevail over environmental constraints and the individual's potential to rise in society through persistence and sustained effort. The mainstream scientific thought of the day, and the Darwinian view of biology, therefore struck the Marxists as too class-bound and were rejected.
Vygotsky was dissatisfied with both the data and the methodology of the psychology of his day and wanted to build a new science, a Marxist theory of knowledge. To some extent, Piaget and Vygotsky held similar views. But Vygotsky rejected the biological account of knowledge and emphasized the role of culture and society. For him the chief engine of intellectual development is culture, and its mechanism is socializing interaction between adults and children. Just as Piaget sought a new source of data on the growth of knowledge by studying individuals, Vygotsky found support for his ideas in the history of culture. He regarded language as a tool of cultural development and transmission, which must therefore grow in step with culture. Seen this way, Vygotsky's theory takes a socio-historical orientation. Although Vygotsky accepted Piaget's account of developmental stages, he did not himself elaborate much on that part. The distinctiveness of his theory lies rather in his insistence that the higher mental processes are in their very nature social, cultural, and historical.
Surveying the Vygotskian critique of Piagetian cognitive constructivism, two points recur. First, Piagetian cognitive constructivism overemphasizes logical thinking; the stress on logical reasoning suits some science educators but does not please social constructivists. Their second criticism is that it ignores the situation in which knowledge construction takes place. Because it neglects context, social constructivists find the Piagetian view fruitless and meaningless.

What most distinguishes social constructivists from the Piagetian school is that they trace their paradigm back to the dialectic. Plato originally treated dialectic as a form of critical dialogue with the human soul. Kant refined the notion in his Critique of Pure Reason. Finally, Hegel formalized it into its present form. In the Hegelian dialectic a theory develops through three stages: thesis, antithesis, and synthesis. That is, a concept first passes through the dialectic of thesis and antithesis, and the resulting conflict of concepts is resolved by a higher-order concept, the synthesis. This in turn becomes the thesis of the next round of thesis, antithesis, and synthesis.
Because they stress the importance of context to the production of knowledge, social constructivists have long asked what role language plays in constructivism. Dialogue, the most social form of language, is at the center of the process of social construction. So those who see language as mediating the process of knowledge construction are counted as social constructivists. They believe that whether rich or poor, majority or minority, male or female, no one can learn without the opportunity to talk. Any constructivist notion that would deprive some students of learning is social injustice. Moreover, social constructivists put less weight on individual knowledge construction and propose instead collaborative meaning making. Within this framework the role of the science teacher changes drastically: the teacher's responsibility is no longer to direct students toward what is already printed in handbooks, textbooks, or other authoritative sources, but to help them find grounds among the many ways of constructing knowledge.
5. An eclectic view
As noted above, the disputes among constructivist schools stem from differing views on the origin of knowledge, and constructivist inquiry into the origin of knowledge in fact attends to the interwoven workings of two dimensions, mental structure and knowledge content. So, besides the two main schools described above, it is worth briefly introducing the eclectic position some scholars take on constructivism.
On the side of mental structure, Baker and Piburn (1997) borrow Piaget's notion of the schema to stand for the worldview involved in knowledge construction. In their view, knowledge arises from schemata, so whether knowledge is correct depends on whether it is reliable and faithful with respect to the schema in question. They call any particular piece of knowledge consistent with a relevant schema a conceptual framework; knowledge derived from different schemata, alternative frameworks; the conceptual frameworks of experts in their own fields, scientific conceptions; and the conceptions students hold before instruction, preconceptions.
On the side of knowledge content, regarding the contested subject/object question, they hold that human experience is transformed by a set of "filters" standing between observer and observed, a point long since established by Gestalt psychology. The filters used in constructing knowledge, they argue, are each person's own schemata. Baker and Piburn note that everyone has one set of personal idiosyncrasies, the familiar schemata, which they call psychological and background characteristics. They further posit a second set of schemata, the norms and values defined by the culture in which we live and work. Baker and Piburn treat movement between schemata as a kind of cognitive conflict, and they call the conflict that arises when experience and expectation fail to match an anomaly.
Their working definition of constructivism is therefore as follows:
§ Constructivism is a theoretical position based on the claim that knowledge is constructed by individuals and by culture.
§ Experience is mediated by schemata, and schemata are made up of an individual's psychological and background characteristics together with the norms and values of the culture.
§ A conceptual framework is the cumulative result of applying a schema to experience; it is neither right nor wrong. It is simply one alternative framework.
§ The process of knowledge construction involves movement from one schema to another, including periods of conflict triggered by anomalies.
§ Adopting a new schema requires reorganizing existing knowledge into a new conceptual framework.
§ In principle, this process of knowledge construction never ends, so absolute knowledge is impossible; all knowledge is tied to context and changes with it.
6. Constructivism and classroom teaching
The teaching methods advocated by constructivists today are generally grounded in scientific research on the brain and on learning. Constructivists stress that the central actor in any learning activity is the learner, not the instructor; only by interacting with what is being learned, and so grasping its features, can the learner truly understand it. Constructivist theory therefore not only accepts but encourages learner autonomy and initiative. Teaching in the constructivist spirit is really an open space. Below we use a simple example from biology as a paper demonstration, so that teachers can get a feel for how they might try, in a constructivist direction, to help their students construct knowledge on their own.
Creationism versus evolution
Although more than a hundred and forty years have passed since Darwin published the theory of evolution as an account of the origin of life, many people still consider evolution at best a theory rather than scientific fact, and some even reject evolution in favor of creationism. In that situation, how can a teacher help students genuinely understand the content of evolutionary theory when teaching this unit? This is a good occasion to use the dialectic. First, a debate needs a topic, and here the topic is obvious: creationism versus evolution. The teacher can offer students some working hypotheses from each side for dialectical examination, for instance the two below:
§ Fossil evidence: on the evolutionary view, the rock record documents at least a billion years of evolution over a very long span. Creationists hold that those fossils were formed in the catastrophic flood of Genesis (note: this view is almost identical to the catastrophe theory advocated by Werner).
§ Structural similarity: for example, the similarity between the structure of the human hand and the fin of aquatic mammals leads evolutionists to infer common descent. Creationists hold that the same structures instead suggest the result of the act of a single creator.
Topics like these give students material for thinking through thesis and antithesis, with the teacher helping them integrate the ideas.
Finally, from the foregoing discussion we distill the following principles for putting constructivist ideas into practice, offered for interested teachers' reference:
1. Put "learning" first and "teaching" second
2. Treat learners as individuals with wills and purposes
3. See learning as a process
4. Recognize the importance of experience in learning
5. Cultivate learners' natural curiosity
6. Take learners' mental states seriously
7. Base decisions on principles of cognitive psychology
8. Consider how learners learn
9. Emphasize the situation in which learners find themselves
10. Take learners' beliefs and attitudes into account
11. Encourage and accept learner autonomy and initiative
12. Encourage learners to ask questions
13. When assessing learning, weigh both learners' performance and their understanding of the knowledge
14. Encourage dialogue between learners and their peers and teachers
15. Encourage cooperative learning
16. Use real-life settings to give learners a sense of being there
17. Give learners opportunities to construct new knowledge and gain new understanding from authentic experience
References
1. Baker, D. R. & Piburn, M. D. (1997) Constructing Science in Middle and Secondary School Classrooms. Allyn and Bacon.
2. Duffy, M. T. & Jonassen, D. H. (1992) Constructivism: New Implications for Instructional Technology. In Duffy, M. T. & Jonassen, D. H. (Eds) Constructivism and the Technology of Instruction: A Conversation. Hillsdale, New Jersey: Lawrence Erlbaum.
3. Gregory, R. L. (1987) (Ed.) The Oxford Companion to The Mind. Oxford: OUP.
4. Inhelder, B. & Piaget, J. (1958) The Growth of Logical Thinking from Childhood to Adolescence. Translated by Parsons, A. & Milgram, S. London: RKP.
5. Klemke, E. D., Hollinger, R. & Kline, A. D. (1988) (Eds) Philosophy of Science. New York: Prometheus Books.
6. Piattelli-Palmarini, M. (1980) (Ed.) Language and Learning: The Debate between Jean Piaget and Noam Chomsky. London: RKP.
7. Vygotsky, L. (1986) Thought and Language. Translation newly revised & edited by Alex Kozulin. Cambridge, Massachusetts: The MIT Press.
Source: http://pei.cjjh.tc.edu.tw/sci-edu/edu_3_15.htm

Wednesday, November 19, 2008

Chap. 03 p.117-119


It is also interesting to note that children do not begin to develop speech until their brains have attained a certain degree of electrophysiological maturity, defined in terms of an increase with age in the frequency of the dominant rhythm. Only when this rhythm is about 7 cps or faster (at about age two years) are they ready for speech development.
(g) Neurological Correlates; Pacing of Speech During Thalamic Stimulation. Deep electrical stimulation in the basal ganglia and thalamus is frequently performed in the course of surgical treatment of thalamic pain or certain extrapyramidal motor disorders. Guiot, Hertzog, Rondot, and Molina (1961) have reported that electrical stimulation in a particular place in the thalamus (the ventrolateral nucleus near its contact with the internal capsule) frequently interferes with the rate of speaking. Both slowing to the point of total arrest and acceleration of speech have been observed. The latter is the more interesting for our discussion. It is a behavioral derangement which may occur in complete isolation, that is, without any other observable motor manifestation or abnormal subjective experience. The patient is conscious and cooperative during part of the operation. He is encouraged to maintain spontaneous conversation and, failing this, is asked to count slowly at a rate of about one digit per second. If acceleration occurs with electrical stimulation, it may be sudden and immediate, or it may be a quick speeding up, the words at the end being generated so rapidly as to become unintelligible. It is significant that under conditions of evoked acceleration the shortest observed intervals between digits are about 170 msec.
Acceleration, uncontrollable by the patient, is occasionally associated with parkinsonism and goes under the name of tachyphemia.

(2) Final Comments on Speech Rhythmicity (Cultural, Individual and Biological Variations)
We have proposed that a rhythm exists in speech which serves as an organizing principle and perhaps a timing device for articulation. The basic time unit has a duration of one-sixth of a second. If this rhythm is due to physiological factors rather than cultural ones, it should then be present in all languages of the world. But what about the rhythm of Swedish, Chinese, or Navaho, which sound so different to our English-trained ears? What about American Southern dialects which seem more deliberate than the dialect of Brooklyn, New York; and the British dialects which seem faster than American ones? These judgments are based on criteria such as intonation patterns and content of communications, which have little in common with the potential underlying metric of speech movements. The rise and fall times in intonation patterns (non-tonal languages) are much slower than the phenomenon discussed here, usually extending over two seconds and more. With proper analysis, they may well reveal themselves to be multiples of the much faster basic units discussed above. On the other hand, the pitch-phonemes (also known as tonemes) are likely to fall within the same metric as other phonemes. Nor does our ability to speak slowly or fast have any bearing on the "six-per-second hypothesis," because it should be possible to make different use of the time units available. There most likely is more than one way of distributing a train of syllables over the rhythmic vehicles.
On the other hand, physiological factors would allow for individual differences because organisms vary one from the other. Moreover, the underlying rhythm may be expected to vary within an individual in accordance with physiological states and rates of metabolism. Such within-subject variations would, of course, be subtle, and detection would require statistical analysis of the periodic phenomena involved.
The statistic necessary to prove or reject our hypothesis is quite simple. At present the only obstacle is the necessity of making observations and measurements of hundreds of thousands of events. Suppose we programmed an electronic computer to search the electrical analogue of a speech signal for that point in time at which any voiceless stop is released, and then measured the time lapse between all such successive points. From these data we can make histograms (bar-charts) showing the frequency distribution of all measurements. Since our hypothesis assumes that the variable syllable-duration-time is not continuous and that there are time quanta, the frequency distribution should be multi-modal; and since the basic time unit is predicted to be 160 ± 20 msec, the distance between the peaks should be equal to or multiples of this unit.
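The analysis proposed in this paragraph can be made concrete with a few lines of code. The following is a minimal sketch in Python, assuming the release times of voiceless stops (in seconds) have already been detected by some means and stored in a text file; the file name, the 10-ms bin width, and the plotting choices are illustrative assumptions, not part of the original proposal.

```python
# Sketch of the proposed histogram analysis: inter-release intervals and their
# frequency distribution, with the predicted modes at multiples of ~160 ms.
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical input: one stop-release time (seconds) per line.
release_times = np.loadtxt("release_times.txt")
release_times.sort()

# Time lapse between all successive release points, in milliseconds.
intervals_ms = np.diff(release_times) * 1000.0

# Histogram with 10-ms bins; a multi-modal shape with peaks spaced at
# multiples of roughly 160 ms would support the "six-per-second" hypothesis.
bins = np.arange(0.0, intervals_ms.max() + 10.0, 10.0)
counts, edges = np.histogram(intervals_ms, bins=bins)

plt.bar(edges[:-1], counts, width=10.0, align="edge")
for k in range(1, 6):
    plt.axvline(160 * k, linestyle="--")  # expected peak locations under the hypothesis
plt.xlabel("interval between successive stop releases (ms)")
plt.ylabel("count")
plt.show()
```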
In a previous section of this chapter we have demonstrated certain formal properties of the ordering of speech events. In the discussion of rhythm we have added some temporal dimensions to those events. The rhythm is seen as the timing mechanism which should make the ordering phenomenon physically possible. The rhythm is the grid, so to speak, into whose slots events may be intercalated.
It has long been known that the universally observed rhythmicity of the vertebrate brain (Bremer, 1944; Holst, 1937) or central nervous tissue in general (Adrian, 1937; Wall, 1959) is the underlying motor for a vast variety of rhythmic movements found among vertebrates. If our hypothesis is correct, the motor mechanics of speech (and probably even syntax) is no exception to this generalization, and in this respect, then, speech is no different from many other types of animal behavior. In man, however, the rhythmic motor subserves a highly specialized activity, namely speech.

Chap. 01 p.18-20


The situation for primates and man in particular is not completely clear. Although regeneration is also amyotypic and coordination is either permanently disarranged or at least always remains poor, some central nervous system mechanisms seem to have developed in those forms that enable the individual to make some secondary, partial readjustment. Perhaps this new learning is based on more complex cortical activities – possibly those that are experienced by man as will – but these speculations still lack empirical evidence.
The picture would not be complete without at least a superficial reference to the sensory disarrangement brought about by extracorporeal distortions, such as vision through wearing distorting lenses or prisms. Man, and a variety of lower forms, can learn quickly to make a number of adaptive corrections for these distortions (Kohler, 1951). However, the adjustment is not complete. In adjusting motor coordination to distorted visual input, it is essential that the individual goes through a period of motor adaptation, and there is cogent evidence that this is required for a physiological reintegration between afferent and efferent impulses and not simply to provide the subject with "knowledge" of the spatial configurations (Held and Hein, 1958; Smith and Smith, 1962). Furthermore, man's cognitive adjustment to a visually distorted environment is never complete. Subjects who wear image-inverting goggles soon come to perceive the world right-side-up (though at the beginning it was seen upside down). But even after many weeks of relative adjustment, they experience paradoxical sights such as smoke from a pipe falling downward instead of rising upward or snowflakes going up instead of coming down.
The over-all conclusions that must be drawn from the disarrangement experiments are, first, that motor coordination (and certain behavior patterns dependent upon it) is driven by a rigid, unalterable cycle of neurophysiological events inherent in a species' central nervous system; second, that larval, fetal, or embryonic tissues lack specialization; this enables these tissues to influence one another in such a way as to continue to play their originally assigned role despite certain arbitrary peripheral rearrangements. Because of this adaptability, species-specific motor coordination reappears again and again regardless of experimentally switched connections. Third, as tissues become more specialized – both in ontogeny and in phylogeny – the adaptability and mutual tissue influence disappear. Therefore, in higher vertebrates peripheral disarrangements cause permanent discoordination. Finally, with advance of phylogenetic history, ancillary neurophysiological mechanisms appear which modify and at times obscure the central and inherent theme – the cyclic driving force at the root of simple motor coordination. More complex storage devices (memories) and inhibitory mechanisms are examples.
With the emergence of more specialized brains, the nature of behavior-specificity changes. Although it would be an inexcusable over-simplification to say that behavior, in general, becomes more or less specific with phylogenetic advance, there is perhaps some truth in the following generalizations. In the lower forms, there seems to be a greater latitude in what constitutes an effective stimulus, but there is a very narrow range of possible responses. Pattern perception, for instance, is poorly developed so that an extremely large array of stimulus configurations may serve to elicit a certain behavior sequence, and thus there is little specificity in stimulability. However, the motor responses are all highly predictable and are based on relatively simple neuromuscular correlates; thus there is a high degree of response specificity. With advancing phylogeny, the reverse seems to become true. More complex pattern perception is correlated with greater stimulus specificity and has a wider range of possible motor responses, that is, less response specificity. However, both of these trends in decreasing and increasing specificity are actually related to greater and greater behavioral and ecological specialization. Taxonomists will be quick to point out countless exceptions to these rules. Evolution is not so simple and can never be brought to conform to a few formulas. The statement here is merely to the effect that such trends exist and that, generally speaking, specificity both in stimulation and in responsiveness changes throughout the history of animal life.
In the vast majority of vertebrates, functional readjustment to anatomical rearrangement appears to be totally impossible. Even if the animal once "knew how" to pounce on prey, peripheral-central disarrangement will permanently incapacitate the animal from pursuing the necessities for its livelihood. If the primate order should indeed be proven to be an exception to this rule – and there is little evidence of this so far – then we would have to deal with this phenomenon as an extreme specialization, whose details and consequences are yet to be investigated. There is much less modifiability for those coordination patterns which constitute species-specific behavior than is usually realized, and we must keep in mind that most behavioral traits have species-specific aspects.
This statement is not contradicted by the great variety of arbitrary behavior that is produced by training. Pressing a bar in a cage, pecking at a red spot, jumping into the air at the signal of a buzzer (in short, the infinity of arbitrary tricks an animal can be made to perform) do not imply that we could train individuals of one species (for example, common house cats) to adopt the identical motor behavior patterns of another, such as that of a dog. Although there is perfect homology of muscles, we cannot train a cat to wag its tail with a dog's characteristic motor coordination. Nor can one induce a cat to vocalize on the same occasions a dog vocalizes instinctively, for instance, when someone walks through the backyard. Just as an individual of one species cannot transcend the limits to behavior set by its evolutionary inheritance, so it cannot make adjustments for certain organic aberrations, particularly those just discussed. The nearly infinite possibility of training and retraining is a sign of the great freedom enjoyed by most mammals in combining and recombining individual traits, including sensory and motor aspects. The traits themselves come from a limited repertoire, are not modifiable, and are invariably species-specific in their precise motor coordination and general execution.
In Goethe’s words, addressing a developing being:
Nach dem Gesetz, wonach du angetreten,
So mußt du seyn, dir kannst du nicht entfliehen,
So sagten schon Sibyllen, so Propheten;
Und keine Zeit und keine Macht zerstückelt
Geprägte Form, die lebend sich entwickelt.*
[Roughly: According to the law by which you began, so must you be; you cannot escape yourself; so said sibyls and prophets of old; and no time and no power can break to pieces the minted form that develops as it lives.]

Wednesday, October 8, 2008

APPENDIX B

The history of the biological basis of language*
OTTO MARX

Language has been thought of as being the expression of man’s reason, the result of onomatopoeia, invented as a means of communication, considered basic to the formation of society, or simply a gift of God. Each of these definitions of language has been used in the construction of a multitude of language theories [1]. We shall not be concerned with the development of these theories, but limit ourselves to a discussion of the recurrent emergence of the thoughts on the biological basis of language.
The idea that language is one of man's inherent characteristics, like vision or hearing, is found in some myths on the creation of man [2]. In these myths, language is given to man in conjunction with his senses, so that apparently it was considered one of them, and not part of man's cultural or social functions (which are also described as given or taught by the gods). By no means can these assertions of a divine origin be considered antithetical to a natural origin of language; on the contrary, everything natural to man was God's gift to him.
Between the realm of mythology and science stands the experiment of the Egyptian King Psammetichos of the seventh century B.C., related by Herodotus (fifth century B.C.). Psammetichos supposedly tried to have two children raised by shepherds who never spoke to them in order to see what language would develop [3]. This experiment is relevant to our discussion in so far as its design implies the belief that children left to themselves will develop language. Psammetichos thought he would be able to demonstrate which language was the oldest, but apparently did not doubt that even untutored children would speak.
Language first became the subject of discussion by the pre-Socratic philosophers in the latter part of the sixth century B.C. The setting up of antitheses, typical for Greek philosophy, was also applied to the problems which language posed. But discussions of language were limited to a mere consideration of naming and were purely secondary outgrowths of the philosopher's search for general truths. In order to understand the statements on language made by the Greek philosophers, it is essential to give an idea of the context in which they were made and briefly describe the evolution of the meaning of the two ever recurring terms nomos and physis in which language was to be discussed. Nomos was later replaced by thesis and was often wrongly translated as convention, while physis has been incorrectly equated with nature.
For Herakleitos (ca. 500 B.C.), nomos was the order regulating the life of society and the individual, but he did not see it as a product of society [4]. The nomos of society was valid, but not absolute. Similarly, names were valid as they reflected some aspect of the object they named. (Apparently, he did not consider them physis, as had been thought) [5]. Physis would have implied that names are an adequate expression of reality or of the true nature of things, an idea to which Herakleitos did not subscribe.
Parmenides (fifth century B.C.) thought that originally names had been given to things on the basis of "wrong thinking," and that the continued use of the original names perpetuated the errors of men's earlier thinking about the objects around them. To him, and to Anaxagoras and Empedokles, names and concepts were synonymous. Their concern with conventional names and their condemnation of them as nomos was related to their critical view of conventional thought. To these philosophers, nomos and conventional thought had acquired the connotation of incorrectness and inadequacy as opposed to the truth and real nature, or physis, which they were seeking [5].
Pindar (522-433 B.C.) considered all of man's true abilities innate. They cannot be acquired by learning but can only be furthered by training [6]. For him the rules of society, which are nomos, were God-given and, therefore, contained absolute truth. Nomos and physis were not purely antithetical as they were for Parmenides and his school. It is also well to keep in mind that nomos and physis had not been antithetical in Greek ethnography. Nomos referred to all peculiarities of a people due to custom and not attributable to the influences of climate, country, or food. So Herodotus had ascribed the elongated heads of a tribe, due to their binding of the infant's skull, to nomos, but he believed that this would become hereditary (physis). In medicine of the fifth century B.C., physis came to mean normal [7].
Although we find the nomos-physis antithesis in all Greek philosophy and science, the exact meaning of the terms would have to be determined in each case, before we might claim that one of the philosophers made certain pronouncements about language. We have attempted to indicate that none of the pre-Socratic philosophers were concerned with language as such, nor with questions of its origin or development, and in no case could their statements be said to establish language as cultural or natural to man.
In classical philosophy, the relationship of the name to its object continued to be the focal point in discussions on language: naming and language were synonymous. Did the object determine in some way the name by which it was called, just as its shape determined the image we saw of it? In his dialogue Cratylos, Plato (427-347 B.C.) attempted a solution of this problem. If the name was determined by the nature of the object to which it referred, then language was physis, that is, it could be said to reflect the true nature of things; but if it were nomos, then the name could not serve as a source of real knowledge. As Steinthal [8] pointed out, language was taken as given, and the philosophical discussion had not originated from questions about the nature of man or language. Plato's answer could, therefore, have only indirect implications for questions about language origin which were to arise much later. He overcame the antithesis by demonstrating that the name does not represent the object but that it stands for the idea which we have of the object. Furthermore, he declared that the name or the word is only a sound symbol which in itself does not reveal the truth of the idea it represents. Words gain their meaning from other modes of communication like imitative body movements or noises. The latter are similar to painting in that they are representative but not purely symbolic as is language. The only reference to the origin of language in Cratylos is Socrates' statement that speaking of a divine origin of words is but a contrivance to avoid a scientific examination of the source of names [10].
Aristotle’s (384-322 B.C.) interest in language was both philosophical and scientific. In his book on animals the ten paragraphs devoted to language follow immediately after a discussion of the senses. His differentiation of sound, voice, and language is based on his physical concept of sound production. In his opinion, voice was produced in the trachea and language resulted from the modulation of the voice by tongue and lip movements. Language proper is only found in man. Children babble and stammer because they have not yet gained control over their tongues. Among the animals only the song of birds is similar call, “kak kak” in one vicinity and “tri tri” in another and as the song of a bird will differ from that of its parents’ if it grows up without them. Language, like the song of the nightingale, is perfected by training.
Aristotle had based his differentiation of man's language (logos) from the language of animals (phonē) biologically, for he thought that man's language was produced mainly by movement of the tongue and the sounds of animals by the impact of air on the walls of the trachea. He did not think that human language could have been derived from sounds, noises, or the expression of emotions seen in animals and children. "A sound is not yet a word; it only becomes a word when it is used by man as a sign." "The articulated signs (of human language) are not like the expression of emotions of children or animals. Animal noises cannot be combined to form syllables, nor can they be reduced to syllables like human speech" [12]. He rejected an onomatopoeic origin of language and established the primacy of its symbolic function. Because he recognized that the meaning of spoken language was based on agreement, it has been claimed that he thought language to be of cultural origin. In terms of the old antithesis of physis versus nomos, Aristotle saw both principles operative in language. Physis meant to him the law of nature without the virtue of justice which it had contained for Plato, and nomos was replaced by thesis and had come to mean man-made. Language, as such, he considered physis, and the meaning of words he attributed to thesis [13].
The question of the origin of language had not been raised in Greek philosophy until Epicurus (341-271 B.C.) asked: "What makes language possible? How does man form words so that he is understood?" [14]. He concluded that neither God nor reason, but Nature was the source of language. To him, language was a biological function like vision and hearing. A different opinion was held by Zeno (333-262 B.C.), the founder of the Stoa, to whom language was an expression of man's mind and derived from his reason. He believed that names had been given without conscious reflection or purpose [15].
Although Epicurus had been the first to contemplate the origin of language, Chrysippos (died about 200 B.C.), a Stoic, was the first to consider language in terms broader than names. Before him the ambiguity of some names had been noted but no satisfactory explanation had been found. Chrysippos proclaimed that all names were ambiguous and lost their ambiguity by being placed in context. Thereby he drew attention to the importance of the grouping of words, but his belief that language did not follow logic kept his inquiry from proceeding any further [16].

CHAPTER Nine


Toward a biological theory of language development (General summary)


We have discussed language from many different aspects, have drawn various conclusions and offered a variety of explanations. If we now stand back and survey the entire panorama, will this synopsis suggest an integrated theory? I believe it will.

I. FIVE GENERAL PREMISES

The language theory to be proposed here is based upon the following five empirically verifiable, general biological premises.

(i) Cognitive function is species-specific. Taxonomies suggest themselves for virtually all aspects of life. Formally, these taxonomies are always type-token hierarchies, and on every level of the hierarchy we may discern differences among tokens and, at the same time, there are commonalities that assign the tokens logically to a type. The commonalities are not necessarily more and more abstract theoretical concepts but are suggested by physiological and structural invariances. An anatomical example of such an invariance is cell constituency; it is common to all organisms. In the realm of sensory perception there are physiological properties that result in commonalities for entire classes of animals, so that every species has very similar pure stimulus thresholds. When we compare behavior across species, we also find certain invariances, for instance, the general effects of reward and punishment. But in each of these examples there are also species differences. Cells combine into a species-specific form; sensations combine to produce species-specific pattern-recognition; and behavioral parameters enter into the elaboration of species-specific action patterns.
Let us focus on the species-specificities of behavior. There are certain cerebral functions that mediate between sensory input and motor output which we shall call generically cognitive function. The neurophysiology of cognitive function is largely unknown, but its behavioral correlates are the propensity for problem solving, the formation of learning sets, the tendency to generalize in certain directions, or the facility for memorizing some but not other conditions. The interaction or integrated patterns of all of these different potentialities produces the cognitive specificities that have induced von Uexkuell, the forerunner of modern ethology, to propose that every species has its own world-view. The phenomenological implications of his formulation may sound old-fashioned today, but students of animal behavior cannot ignore the fact that the differences in cognitive processes (1) are empirically demonstrable and (2) are the correlates of species-specific behavior.

(ii) Specific properties of cognitive function are replicated in every member of the species. Although there are individual differences among all creatures, the members of one species resemble each other very closely. In every individual a highly invariable type of both form and function is replicated. Individual differences of most characteristics tend to have a normal (Gaussian) frequency distribution, and the differences within species are smaller than between species. (We are disregarding special taxonomic problems in species identification.)
The application of these notions to (i) makes it clear that the cognitive processes and potentialities that are characteristic of a species are also replicated in every individual. Notice that we must distinguish between what an individual actually does and what he is capable of doing. The intraspecific similarity holds for the latter, not the former, and the similarity in capacity becomes striking only if we concentrate on the general type and manner of activity and disregard such variables as how fast or how accurately a given performance is carried out.

(iii) Cognitive processes and capacities are differentiated spontaneously with maturation. This statement must not be confused with the question of how much the environment contributes to development. It is obvious that all development requires an appropriate substrate and availability of certain forms of energy. However, in most cases environments are not specific to just one form of life and development. A forest pond may be an appropriate environment for hundreds of different forms of life. It may support the fertilized egg of a frog or a minnow, and each of the eggs will respond to just those types and forms of energy that are appropriate to it. The frog’s egg will develop into a frog and the minnow’s egg into a minnow. The pond just makes the building stones available, but the organismic architecture unfolds through conditions that are created within the maturing individual.
Cognition is regarded as the behavioral manifestation of physiological processes. Form and function are not arbitrarily superimposed upon the embryo from the outside but gradually develop through a process of differentiation. The basic plan is based on information contained in the developing tissues. Some functions need an extra-organismic stimulus for the initiation of operation, something that triggers the cocked mechanisms; the onset of air-breathing in mammals is an example. These extra-organismic stimuli do not shape the ensuing function. A species' peculiar mode of processing visual input, as evidenced in pattern recognition, may develop only in individuals who have had a minimum of exposure to properly illuminated objects in the environment during their formative years. But the environment clearly does not shape the mode of input processing, because the environment might have been the background to the visual development of a vast number of other types of pattern-recognition.

(iv) At birth, man is relatively immature; certain aspects of his behavior and cognitive function emerge only during infancy. Man's postnatal state of maturity (brain and behavior) is less advanced than that of other primates. This is a statement of fact and not a return to the fetalization and neoteny theories of old (details in Chapter Four).

(v) Certain social phenomena among animals come about by spontaneous adaptation of the behavior of the growing individual to the behavior of other individuals around him. Adequate environment does not merely include nutritive and physical conditions; many animals require specific social conditions for proper development. The survival of the species frequently depends on the development of mechanisms for social cohesion or social cooperation. The development of typical social behavior in a growing individual requires, for many species, exposure to specific stimuli such as the presence of certain action patterns in the mother, a sexual partner, a group leader, etc. Sometimes mere exposure to social behavior of other individuals is a sufficient stimulus. For some species the correct stimulation must occur during a narrow formative period in infancy; failing this, further development may become seriously and irreversibly distorted. In all types of developing social behavior, the growing individual begins to engage in behavior as if by resonance; he is maturationally ready but will not begin to perform unless properly stimulated. If exposed to the stimuli, he becomes socially "excited" as a resonator may become excited when exposed to a given range of sound frequencies. Some social behavior consists of intricate patterns, the development of which is the result of subtle adjustments to and interactions with similar behavior patterns (for example, the songs of certain bird species). An impoverished social input may entail permanently impoverished behavior patterns.
Even though the development of social behavior may require an environmental trigger for proper development and function, the triggering stimulus must not be mistaken for the cause that shapes the behavior. Prerequisite social triggering mechanisms do not shape the social behavior in the way Emily Post may shape the manners of a debutante.

II. A CONCISE STATEMENT OF THE THEORY
(1) Language is the manifestation of species-specific cognitive propensities. It is the consequence of the biological peculiarities that make a human type of cognition possible. The dependence of language upon human cognition is merely one instance of the general phenomenon characterized by premise (i) above. There is evidence (Chapters Seven and Eight) that cognitive function is a more basic and primary process than language, and that the dependence-relationship of language upon cognition is incomparably stronger than vice versa.
(2) The cognitive function underlying language consists of an adaptation of a ubiquitous process (among vertebrates) of categorization and extraction of similarities. The perception and production of language may be reduced on all levels to categorization processes, including the subsuming of narrow categories under more comprehensive ones and the subdivision of comprehensive categories into more specific ones. The extraction of similarities does not only operate upon physical stimuli but also upon categories of underlying structural schemata. Words label categorization processes (Chapters Seven and Eight).
(3) Certain specializations in peripheral anatomy and physiology account for some of the universal features of natural languages, but the description of these human peculiarities does not constitute an explanation for the phylogenetic development of language. During the evolutionary history of the species, form, function, and behavior have interacted adaptively, but none of these aspects may be regarded as the "cause" of the other. Today, mastery of language by an individual may be accomplished despite severe peripheral anomalies, indicating that cerebral function is now the determining factor for language behavior as we know it in contemporary man. This, however, does not necessarily reflect the evolutionary sequence of developmental events.

Saturday, June 7, 2008

Influences of Electromagnetic Articulography Sensors on Speech Produced by Healthy Adults and Individuals With Aphasia and Apraxia




William F Katz, Sneha V Bharadwaj, Monica P Stettler. Journal of Speech, Language, and Hearing Research. Rockville: Jun 2006. Vol. 49, Iss. 3; pg. 645, 15 pgs
Copyright American Speech-Language-Hearing Association Jun 2006
Purpose: This study examined whether the intraoral transducers used in electromagnetic articulography (EMA) interfere with speech and whether there is an added risk of interference when EMA systems are used to study individuals with aphasia and apraxia.
Method: Ten adult talkers (5 individuals with aphasia/apraxia, 5 controls) produced 12 American English vowels in /hVd/ words, the fricative-vowel (FV) words (/si/, /su/, /∫i/, /∫u/), and the sentence She had your dark suit in greasy wash water all year, in EMA sensors-on and sensors-off conditions. Segmental durations, vowel formant frequencies, and fricative spectral moments were measured to address possible acoustic effects of sensor placement. A perceptual experiment examined whether FV words produced in the sensors-on condition were less identifiable than those produced in the sensors-off condition.
Results: EMA sensors caused no consistent acoustic effects across all talkers, although significant within-subject effects were noted for a small subset of the talkers. The perceptual results revealed some instances of sensor-related intelligibility loss for FV words produced by individuals with aphasia and apraxia.
Conclusions: The findings support previous suggestions that acoustic screening procedures be used to protect articulatory experiments from those individuals who may show consistent effects of having devices placed on intraoral structures. The findings further suggest that studies of fricatives produced by individuals with aphasia and apraxia may require additional safeguards to ensure that results are not adversely affected by intraoral sensor interference.
KEY WORDS: speech production, electromagnetic articulography, fricative spectral moments, aphasia, apraxia of speech


Speech production is studied using techniques that provide anatomical images or movies of articulation (e.g., cineradiography, videofluoroscopy) as well as techniques that derive individual fleshpoint data during speech movement (e.g., X-ray microbeam, Selspot, and electromagnetic articulography [EMA]). A potential complication of fleshpoint tracking systems is that the sensors used to record speech movement may themselves alter participants' speech. For instance, intraoral sensors might obstruct the speech airway, resulting in sound patterns not normally observed in speech. It is also possible that data recorded during EMA or X-ray microbeam studies may to some extent reflect participants' compensation for the presence of intraoral sensors in the vocal tract. Indirect evidence concerning these issues was provided by Perkell and Nelson (1985), who compared formant frequencies of the vowels /i/ and /u/ recorded in the Tokyo X-ray microbeam system with population means obtained in previous acoustic studies that did not involve intraoral sensors (e.g., Hillenbrand, Getty, Clark, & Wheeler, 1995; Peterson & Barney, 1952). The results suggested that X-ray microbeam pellets cause little detectable articulatory interference.
A direct test of potential articulatory interference by a fleshpoint tracking device (the University of Wisconsin X-ray microbeam system) was conducted by Weismer and Bunton (1999). The researchers examined 21 adult talkers who produced the sentence She had your dark suit in greasy wash water all year, with and without an array of X-ray microbeam pellets in place during articulation. This array included four pellets placed on the midsagittal lingual surface. The results indicated no overall differences that were consistent for all speakers. However, approximately 20% of the talkers showed acoustically detectable changes as a result of the pellets placed on the tongue during the X-ray microbeam procedure. For example, pellets-on conditions for vowel production resulted in higher F1 values for some female talkers (suggesting greater mouth opening) and lower F2 values for some male and female talkers (suggesting a more retracted tongue position) than in pellets-off conditions. These occasional acoustic differences resulting from pellet placement were not detectable in perceptual experiments designed to simulate informal listening conditions. The authors concluded that acoustic screening procedures may be important to shield articulatory kinematic experiments from individuals who show consistent effects of having devices placed on intraoral structures.
One factor that may have contributed to the differences between the findings of Perkell and Nelson (1985) and Weismer and Bunton (1999) is that the former study examined isolated vowels, while the latter examined vowels produced in a sentential context. Speech produced in citation form may differ in a number of articulatory factors from that produced in a more natural sentential context (e.g., Lindblom, 1990). For example, sounds that occur in stressed or accented syllables (hyperspeech) appear to reflect reduced coarticulation or overlap between adjacent sounds (de Jong, 1995; de Jong, Beckman, & Edwards, 1993) and greater velocity, magnitude, and duration (Beckman & Cohen, 2000; Beckman & Edwards, 1994). It is therefore possible that speech produced in more natural contexts (hypospeech) might show heightened susceptibility to articulatory interference effects, perhaps as the result of less conscious monitoring or compensation by the speaker. It is important to consider these communication contexts when examining the extent to which talkers do or do not show compensation for a given vocal tract perturbation.
An important clinical concern is that the use of fleshpoint tracking systems has not been limited to the study of speech produced by healthy adults. Rather, methods such as EMA are being increasingly applied to study (and treat) individuals with disorders such as aphasia and apraxia of speech (AOS; Katz, Bharadwaj, & Carstens, 1999; Katz, Bharadwaj, Gabbert, & Stettler, 2002; Katz, Carter, & Levitt, 2003), dysarthria (Goozée, Murdoch, Theodoros, & Stokes, 2000; Murdoch, Goozée, & Cahill, 2001; Schultz, Sulc, Léon, & Gilligan, 2000), stuttering (Peters, Hulstijn, & Van Lieshout, 2000), and developmental AOS (Nijland, Maasen, Hulstijn, & Peters, 2004). If sensor-related interference poses added problems for clinical populations, this could potentially complicate the interpretation of kinematic assessment and treatment studies. Thus, one of the main goals of this study was to replicate the findings of Weismer and Bunton (1999) with individuals having speech difficulties resulting from AOS and aphasia.
To examine these issues, adult talkers (individuals with aphasia/apraxia and healthy controls) were recorded producing speech under EMA sensors-on and sensors-off conditions. Speech samples included repeated monosyllabic /hVd/ words and the sentence She had your dark suit in greasy wash water all year. A number of temporal and spectral acoustic parameters were measured, and a perceptual experiment (with healthy adult listeners) was conducted to determine whether EMA sensors affected the intelligibility of fricative-vowel (FV) words produced by individuals with aphasia/apraxia and healthy control talkers.
Method
Participants
Participants were 10 monolingual American English-speaking adults (5 individuals with aphasia/apraxia, 5 healthy controls) from the Dallas, TX, area. There were 2 female talkers (control participant C3 and participant A2 in the aphasia/apraxia group) and 8 male talkers. Participants had no prior phonetic training or experience in EMA experimentation. Individuals in the control group reported no history of neurological or articulation disorders. Four individuals with aphasia/apraxia had been diagnosed with Broca's aphasia, and 1 had been diagnosed with anomic aphasia (see Table 1). All had AOS and an etiology of left-hemisphere cerebrovascular accident (CVA). Individuals with aphasia/apraxia were diagnosed based on clinical examination and performance on the Boston Diagnostic Aphasia Exam (Goodglass, Kaplan, & Barresi, 2001) and the Apraxia Battery for Adults, Version 2 (ABA-2; Dabul, 2000). Apraxic severity levels, based on the overall scores of the ABA-2 Impairment Profile section, ranged from mild to moderate. The age range for the aphasic/apraxic group was 38-67 years (M = 59;6 [years;months]), and that for the control group was 25-59 years (M = 55;0).


Speech Sample, Sensor Array
Testing took place in a sound-treated room at the UTD Callier Center for Communication Disorders. Speech samples included vowels in /hVd/ contexts, FV words, and the sentence She had your dark suit in greasy wash water all year. The /hVd/ and FV words were elicited in the carrier phrase, I said __ again. Seven repetitions were elicited for each sensor condition (on/off), yielding a total of 168 /hVd/ words, 56 FV words, and 14 sentences per talker. The /hVd/ words, FV words, and sentences were produced in separate blocks, with the order of stimulus type and sensor conditions (on/off) counterbalanced between talkers. Within each block, stimuli were produced in random order. Talkers repeated each item following a spoken and orthographic model (written on a 4 in. × 6 in. index card) presented by one of the experimenters (WK, a male, native speaker of American English). Speech was elicited at a comfortable speaking rate in a session lasting approximately 45 min.


The sentence She had your dark suit in greasy wash water all year was taken from the DARPA/TIMIT corpus (Garofolo, 1988). This sentence had been examined in a previous study of X-ray microbeam pellet interference (Weismer & Bunton, 1999). By including this sentence, we could compare microbeam pellet and EMA sensor effects between studies. From this sentence, segmental durations were measured, and formant frequencies were estimated for the vowels /i/, /æ/, /u/, and /a/ (taken from the words she, had, suit, and wash).
For the sensors-on condition, participants spoke with two miniature receiver coils (approximately 2 × 2 × 3 mm) attached to the lingual surface. These sensors (Model SM220) are used in commercially available EMA systems manufactured by Carstens Medizinelektronik GmbH. EMA sensors were placed (a) midline on the tongue body and (b) on the tongue tip approximately 1 cm posterior to the apex (see Figure 1). Although it is possible that greater sensor interference could occur with the placement of 3 to 4 lingual sensors, the use of two sensors was motivated by the fact that sensors placed on the superior, anterior lingual surface are involved in a variety of articulatory gestures, including palatal contact (potentially influencing sibilant production). Placement followed a standardized template system originally designed for pellet placement in the X-ray microbeam system (Westbury, 1994). Sensors were affixed to the tongue using a biocompatible adhesive, with the wires led out the corners of the participant's mouth (Katz, Machetanz, Orth, & Schoenle, 1990).1


Figure 1. Example of electromagnetic articulography sensors attached to the lingual surface (midline on the tongue body and approximately 1 cm posterior to the apex).


As is common with EMA testing, before further recording, participants were given approximately 5 min to get used to the presence of EMA sensors, or until the investigators determined that there was no significant change in speech production attributable to the lingual EMA sensors. During this desensitization period, participants were engaged in informal conversation with the investigators.
Data Collection
Acoustic data were recorded with an Audio-Technica AT831b microphone placed 8 in. from the lips. Recordings were made with a portable DAT recorder, Teac model DA-P20. The digital waveforms were later transferred to computer disk at a rate of 48 kHz and 16-bit resolution using a DAT-Link+ digital audio interface, then down-sampled to 22 kHz for subsequent analysis.
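As an illustration of the rate-conversion step only (not the original laboratory procedure, and with a hypothetical file name), the following Python sketch down-samples a 48-kHz, 16-bit recording to 22 kHz using a polyphase resampler, which performs the necessary anti-alias filtering internally.

# Minimal sketch of the down-sampling step described above.
import numpy as np
from scipy.io import wavfile
from scipy.signal import resample_poly

fs_in, x = wavfile.read("talker_A2_hVd_block.wav")    # hypothetical file name
assert fs_in == 48000
x = x.astype(np.float64) / 32768.0                     # 16-bit PCM to float
y = resample_poly(x, up=11, down=24)                   # 48000 * 11/24 = 22000 Hz
wavfile.write("talker_A2_hVd_block_22k.wav", 22000, (y * 32767).astype(np.int16))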
Acoustic Measures
From the seven productions elicited for each /hVd/ and FV word target, the first five phonemically correct productions were selected for analysis. Five productions were selected for each target because most of the individuals with aphasia/apraxia were able to produce this many correct utterances within seven attempts. Phonemically correct utterances were determined by independent transcription conducted by two of the authors (William F. Katz and Monica P. Stettler). As expected, there was no data loss for the control talkers, while individuals with aphasia/apraxia showed characteristic problems with particular speech sounds. Talker A1 had particular difficulty producing FV words, and these items were removed from further analysis. In all, 344 items were included in the FV acoustic analyses.
For individuals with aphasia/apraxia, it was more difficult to produce the sentence She had your dark suit in greasy wash water all year than to repeat single words in a carrier phrase. Accordingly, there were many cases of substitutions, omissions, and distortions in their sentential materials. Nonetheless, it was possible to select five sentences produced by each talker for duration measurement purposes and the first five phonemically correct instances of the vowels /i/, /æ/, /a/, and /u/ for formant frequency analysis.


The first three formant frequencies (F1-F3) were estimated at vowel midpoint for the vowels /i/, /æ/, /u/, and /a/. The four corner vowels were selected because they delimit the acoustic (and, by inference, articulatory) working space for vowels. Vowel formant frequencies (F1-F3) were estimated using an automated formant-tracking procedure developed by Nearey, Hillenbrand, and Assmann (2002). In this procedure, several different linear predictive coding (LPC) models varying in the number of coefficients are applied, given some assumptions about the number of expected formants in a given frequency range. The best model is then selected based on formant continuity, formant ranges, and formant bandwidth, along with a measure of the correlation between the spectrum of the original and a synthesized version. Final formant frequency values were estimated as the median of five successive measurements spaced 5 ms apart, spanning vowel midpoint.
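The Nearey, Hillenbrand, and Assmann (2002) tracker is not reproduced here; the following simplified Python sketch shows only the general logic of a single-order LPC formant estimate at vowel midpoint, with the final value taken as the median over five frames spaced 5 ms apart. The window length, LPC order, and frequency limits are illustrative assumptions, not values reported in the article.

# Simplified LPC formant estimation (not the multi-order model comparison used
# by the Nearey et al. procedure). Formant candidates are the angles of the
# LPC polynomial roots in the upper half plane.
import numpy as np
import librosa

def lpc_formants(frame, sr, order=12, n_formants=3):
    a = librosa.lpc(frame * np.hamming(len(frame)), order=order)
    roots = [r for r in np.roots(a) if np.imag(r) > 0]
    freqs = sorted(np.angle(roots) * sr / (2 * np.pi))
    freqs = [f for f in freqs if 90 < f < sr / 2 - 50]     # drop implausible values
    freqs = (freqs + [np.nan] * n_formants)[:n_formants]   # pad if too few found
    return freqs

def midpoint_formants(y, sr, v_on, v_off, win=0.025, step=0.005):
    """Median F1-F3 over five frames, 5 ms apart, centered on vowel midpoint."""
    mid = (v_on + v_off) / 2
    tracks = []
    for k in (-2, -1, 0, 1, 2):
        i = int((mid + k * step - win / 2) * sr)
        tracks.append(lpc_formants(y[i:i + int(win * sr)], sr))
    return np.nanmedian(np.array(tracks, dtype=float), axis=0)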
Fricative centroids were measured at fricative midpoint using TF32 software (Milenkovic, 2001). Spectral moments analysis treats the Fourier power spectrum as a random probability distribution from which four measures may be derived: centroid (spectral mean), variance (energy spread around the spectral peak), skewness (tilt or symmetry of the spectrum), and kurtosis (peakedness of the spectrum; Forrest, Weismer, Milenkovic, & Dougall, 1988; Tjaden & Turner, 1997). Although Weismer and Bunton (1999) examined all four spectral moments in their study of the effects of X-ray microbeam pellets on speech, only the first spectral moment (centroid) showed any evidence of differing as a function of pellet placement during speech. Based on these findings, as well as the data from other studies highlighting the importance of the centroid in determining fricative quality (e.g., Jongman, Wayland, & Wong, 2000; Nittrouer, Studdert-Kennedy, & McGowan, 1989; Tabain, 1998), we focused on fricative centroids as a measure of possible interference effects of EMA sensors during fricative production.
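For reference, the following Python sketch computes the four spectral moments from a single fricative frame by treating the normalized power spectrum as a probability distribution. The windowing choice is an illustrative assumption, not a setting taken from the article or from TF32.

# Spectral moments of one fricative frame; centroid is the first moment.
import numpy as np

def spectral_moments(frame, sr):
    spec = np.abs(np.fft.rfft(frame * np.hamming(len(frame)))) ** 2
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)
    p = spec / spec.sum()                           # normalize to a probability mass
    centroid = np.sum(freqs * p)                    # spectral mean (Hz)
    variance = np.sum(((freqs - centroid) ** 2) * p)
    skew = np.sum(((freqs - centroid) ** 3) * p) / variance ** 1.5
    kurt = np.sum(((freqs - centroid) ** 4) * p) / variance ** 2 - 3   # excess kurtosis
    return centroid, variance, skew, kurt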
Perceptual Measures
Ten native speakers of American English with a background in speech-language pathology volunteered as listeners. Listeners ranged from 23 to 53 years of age (M = 28 years). All listeners had taken a course in phonetics and reported no speech, language, or hearing problems.
Stimuli consisted of the syllables /si/, /su/, /∫i/, and /∫u/, produced by individuals with aphasia and apraxia and by healthy control talkers under sensors-on and sensors-off conditions. There were 200 productions by the control talkers (5 participants × 2 fricatives × 2 vowels × 2 sensor conditions × 5 repetitions) and 144 productions by the 4 individuals with aphasia/apraxia. As noted previously, the FV productions of Talker A1 were eliminated; the /∫i/ productions of Talker A3 were also excluded due to high error rates. All stimuli were adjusted to the same peak amplitude, resulting in levels between 65 and 72 dB SPL(A).
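A minimal sketch of this peak-amplitude equalization step is given below; the target peak value is arbitrary (the article reports only the resulting presentation levels of 65-72 dB SPL(A)).

# Scale each stimulus so that its peak sample reaches a common target value.
import numpy as np

def peak_normalize(x, peak=0.9):
    return x * (peak / np.max(np.abs(x)))

# Example: normalized = [peak_normalize(token) for token in fv_tokens]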
The FV word identification task was conducted in a sound-treated room at the University of Texas at Dallas, Callier Center for Communication Disorders. Listeners were instructed that they would hear the words /si/, /su/, /∫i/, and /∫u/ produced by adult talkers (including individuals with aphasia/apraxia and healthy controls) under conditions of having EMA sensors on or off the tongue during speech. Productions by individuals with aphasia/apraxia and by healthy controls were presented in randomized (mixed) order. The listener's task was to identify each word by clicking one of four response panels (labeled with IPA symbols and the words see, she, Sue, and shoe) on a computer screen. Before the main experiment, listeners completed a practice set of 16 stimuli presented through headphones. The practice session was designed to familiarize listeners with the task and with the range of variation in fricative quality to be identified in the main experiment. The materials for this practice session included productions by individuals with aphasia/apraxia and by healthy control talkers other than those used in the main experiment. In the main experiment, listeners identified a total of 344 words. The experiment was self-paced, and listeners were allowed to replay stimuli any number of times (by pressing a replay button) before giving their answer. Listeners completed the experiment in one session lasting approximately 40 min.
Results
Segment Durations
Figure 2 shows mean vowel durations (and standard errors) for phonemically correct /hVd/ words produced by the two talker groups in sensors-on and sensors-off conditions. A mixed-design, repeated measures analysis of variance (ANOVA) was conducted with group (aphasic/apraxic, control) as the between-subject variable and vowel (12 levels) and sensor condition (on/off) as within-subject variables. Results indicated a significant main effect for vowel, F(11, 96) = 9.09, p < .0001, and a significant Vowel × Group interaction, F(11, 96) = 1.99, p = .0376. These effects reflect two main patterns: (a) vowel-specific differences among the 12 vowels investigated (e.g., tense vowels longer than lax vowels) and (b) greater vowel-specific durational differences in aphasic/apraxic than in control talker productions. Figure 2 also indicates that productions by individuals with aphasia/apraxia were generally longer than those of the control talkers, although this group difference did not reach significance, F(1, 96) = 1.09, p = .299. Critically, there was no significant main effect for sensor condition and no other significant two-way or three-way interactions. Although one must be careful when interpreting negative findings, the fact that the vowel and group factors revealed significant effects (whereas sensor condition did not) suggests that EMA sensors do not affect the duration of vowels in /hVd/ contexts produced by healthy adults or individuals with aphasia/apraxia.
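As an illustration of this type of analysis (not the authors' original code), the following Python sketch runs a mixed ANOVA on hypothetical per-talker duration data using the pingouin package. Because pingouin's mixed_anova accepts a single within-subject factor, the sketch tests only the sensor factor after averaging over vowels and repetitions; the column names are assumptions.

# Simplified mixed ANOVA: group (between) x sensor condition (within).
import pandas as pd
import pingouin as pg

df = pd.read_csv("hvd_durations.csv")   # columns: talker, group, vowel, sensor, duration_ms
cell_means = (df.groupby(["talker", "group", "sensor"], as_index=False)
                ["duration_ms"].mean())
aov = pg.mixed_anova(data=cell_means, dv="duration_ms",
                     within="sensor", subject="talker", between="group")
print(aov.round(4))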


Figure 2. Mean vowel durations and standard errors for /hVd/ productions by healthy adults and individuals with aphasia/apraxia, shown by speaking condition (open bars = sensors off, shaded bars = sensors on).


Figure 3 contains sensors-on and sensors-off duration data for sentences produced by the two talker groups. As expected, productions by individuals with aphasia/apraxia had overall longer durations than those of the control talkers. However, there was little systematic difference for either talker group as a function of sensor condition.


Figure 3. Mean segment durations and standard errors for productions by healthy adults and individuals with aphasia/apraxia, shown by speaking condition (open bars = sensors off, shaded bars = sensors on).


Post hoc analyses of the three-way (Group × Segment × Sensor Condition) interaction focused on the effects of sensors (off vs. on) for phonemes produced by each of the two talker groups. In these analyses, the differences of least squares means were computed (t tests), with significance set at p < .01 to correct for multiple comparisons. Results indicated no significant sensors-off versus sensors-on differences for segments produced by the healthy control talkers. For productions by individuals with aphasia/apraxia, three segments showed significant sensor effects. However, the direction of these effects was not consistent: two of these segments had shorter durations in the sensors-on condition, whereas /d/ durations were shorter in the sensors-off condition.
In summary, the data revealed expected group and segment differences, while sensor condition had little systematic effect. We interpret these data as showing little difference between individuals with aphasia/apraxia and healthy adults with respect to possible durational interference from EMA sensors.
Vowel Formant Frequencies and Trajectories
The average formant frequencies (F1-F3) of the vowel portions of the words /hid/, /hæd/, /had/, and /hud/ are summarized by group, vowel, and condition in Table 2. Following Weismer and Bunton (1999), between-condition differences of 75, 150, and 200 Hz (for F1, F2, and F3, respectively) were operationally defined as minimal criteria for intraoral sensor interference. These values are based on considerations of typical measurement error for F1-F3 formant values (Lindblom, 1962; Monson & Engebretson, 1983) and on difference limen data for formant frequencies (Kewley-Port & Watson, 1994).
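The following Python sketch (with hypothetical column names) shows how these per-formant criteria could be applied to tabled sensors-on versus sensors-off formant means to flag potential interference cases:

# Flag talker-by-vowel comparisons whose between-condition formant difference
# exceeds 75 Hz (F1), 150 Hz (F2), or 200 Hz (F3).
import pandas as pd

THRESH = {"F1": 75.0, "F2": 150.0, "F3": 200.0}

def flag_interference(means):
    """`means` has columns: talker, vowel, sensor ('on'/'off'), F1, F2, F3."""
    wide = means.pivot_table(index=["talker", "vowel"], columns="sensor",
                             values=["F1", "F2", "F3"])
    flags = {f: (wide[f]["on"] - wide[f]["off"]).abs() > thr
             for f, thr in THRESH.items()}
    return pd.DataFrame(flags)      # True marks a comparison meeting the criterion

flagged = flag_interference(pd.read_csv("hvd_formant_means.csv"))
print(flagged[flagged.any(axis=1)])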


Table 2. Mean formant frequency values for healthy adult (control) talkers and individuals with aphasia/apraxia across speaking conditions (/hVd/ productions).


Of the 120 sensors-on/sensors-off comparisons (10 talkers × 4 vowels × 3 formants) shown in Table 2, 4 (3%) reached criteria: F1 of /a/ produced by Talker A2, F3 of /u/ produced by Talker A4, and F1 and F2 of /u/ produced by Talker A4. These cases are shown in boldface in Table 2. Keeping in mind the caveat that formant frequency patterns are at best a first approximation of causality (Borden, Harris, & Raphael, 2003), one can nevertheless consider tube perturbation theory (e.g., Stevens, 2000; Stevens & House, 1955) to speculate about possible articulatory explanations for these patterns of formant frequency change. For Talker A2, lowered F1 for /a/ suggested a higher overall tongue position in the sensors-on condition. For Talker A4, sensors-on productions showed higher F3 for /u/, suggesting an increased constriction between the teeth and alveolar ridge, or at the pharynx. Talker A4's /u/ productions also showed increased F1 in the sensors-on condition (implying a lowered tongue position) and decreased F2 (suggesting a more retracted tongue position).
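The articulatory reasoning above rests on standard tube-model acoustics rather than on formulas given in the article. As a reference point, the resonances of a uniform vocal tract closed at the glottis and open at the lips are given below; perturbation theory then predicts that constricting the tube near a volume-velocity maximum of a given resonance lowers that formant (hence the inference that a lowered F1 reflects a raised tongue body), whereas constricting near a pressure maximum raises it.

% Neutral-tube reference resonances (standard acoustic phonetics, e.g., Stevens, 2000):
% with c ~ 35,000 cm/s and L ~ 17.5 cm, F1 ~ 500 Hz, F2 ~ 1500 Hz, F3 ~ 2500 Hz.
F_n \;=\; \frac{(2n-1)\,c}{4L}, \qquad n = 1, 2, 3, \ldots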
The few observed cases of potential sensor interference occurred for individuals with aphasia/apraxia, suggesting that these individuals may show greater intraoral interference effects than healthy control talkers. However, the vowel formant frequencies produced by individuals with aphasia/apraxia were more variable than those of the control talkers (as reflected by 49% greater standard deviations), raising the possibility that these sensor-dependent differences were a by-product of increased variability per se (and not due to increased susceptibility to intraoral interference).
To better understand the effects of sensors during vowel production, we examined vowel formant frequency trajectories. These data addressed the question of whether EMA sensors affect vowel spectral change over time, a property claimed to be a form of dynamic vowel specification (e.g., Strange, 1989). Vowel formant frequency trajectories for the /hVd/ utterances were estimated using an LPC-based, pitch-synchronous tracking algorithm (TF32; Milenkovic, 2001). Overlapping plots of these trajectories were made for the four cases of sensor-related formant frequency differences. For three of these cases, there were no apparent differences in trajectory shape or duration as a function of sensor condition. In contrast, Participant A4's /u/ F2 values showed a qualitative difference in formant frequency transitions, with the sensors-off condition being relatively steady state and the sensors-on data showing a more curved pattern. These patterns are shown in Figure 4, which is an overlapping plot of the F2 trajectories of 10 /u/ productions by A4. The five productions made in the sensors-off condition are plotted with crosses, and the five produced in the sensors-on condition are plotted with circles.


Figure 4. Overlapping F2 formant trajectories for /u/ produced by Talker A4 (an individual with aphasia/apraxia). Productions made in the sensors-off condition are plotted with crosses, and those made in the sensors-on condition are plotted with circles.


The vowels /i/, /æ/, /u/, and /a/ were measured in the words she, had, suit, and wash, taken from the sentence She had your dark suit in greasy wash water all year. The average formant frequencies (F1-F3) are summarized by group, vowel, and condition in Table 3. As with the /hVd/ data, between-condition formant frequency differences of 75, 150, and 200 Hz (for F1, F2, and F3, respectively) were used as the minimal criteria for intraoral sensor interference (Weismer & Bunton, 1999).
As shown in Table 3, seven between-condition comparisons (5.8% of the data) reached criteria. There was no obvious pattern for these sensor-related differences to favor a specific vowel, talker group, or formant. Also, the direction of sensor-related effects did not suggest any one articulatory pattern for these talkers. For example, Control Talker C3 produced lower F1 values for /a/ in the sensors-on condition, suggesting a higher tongue position for this low vowel (possibly a case of undershoot). In contrast, Talker A4 produced lower F2 values for /u/ in the sensors-on condition, suggesting either lingual overshoot for this back/high vowel, or perhaps compensatory lip rounding.
As with talkers' /hVd/ data, vowel formant frequency trajectories were inspected to determine whether the presence of sensors produced any qualitative difference in trajectory shape. The data revealed no special cases of formant trajectory difference due to sensor placement.
Fricative Spectra
Table 4 shows centroid values for healthy control talkers and for individuals with aphasia/apraxia, listed separately for sensors-off and sensors-on conditions. As mentioned previously, the FV productions of Talker A1 and the /∫i/ productions of Talker A3 were not included in this analysis due to these participants' difficulties producing these sounds. Using a between-condition difference of at least 1 kHz as the criterion for a significant difference (Weismer & Bunton, 1999), 2 of the complete set of 35 sensors-on versus sensors-off comparisons reached significance. These cases are shown in boldface in Table 4. Both cases were productions by Talker A2, who showed higher centroid values in the sensors-on condition for /∫i/ and /∫u/.


Table 3. Mean formant frequencies for control talkers and individuals with aphasia/apraxia across speaking conditions (sentential productions).


To further examine possible interference effects of EMA sensors on fricative spectra, histograms were plotted for each talker's /s/ and /∫/ productions, with data plotted separately for the sensors-off and sensors-on talking conditions. Previous studies have shown that repeated productions of /s/ and /∫/ by healthy adult talkers have clearly distinguishable centroid values, while productions by individuals with aphasia/apraxia are more variable and overlapped (Haley et al., 2000; Harmes et al., 1984; Ryalls, 1986). Similar patterns were noted in the present data: All 5 healthy control talkers produced bimodal centroid patterns separated by approximately 3 kHz, in both sensors-off and sensors-on conditions. Of the 4 individuals with aphasia/apraxia included in this analysis, 3 had greater-than-normal spectral overlap in both the sensors-off and sensors-on conditions, with no increased spectral overlap as the result of sensors being present. However, Talker A2 produced clearly distinguishable /s/ and /∫/ centroids in the sensors-off condition (resembling those of the normal talkers) and a highly overlapped pattern in the sensors-on condition. Thus, these data reinforce the minimal distance findings (Table 4) in suggesting that Talker A2 showed acoustic evidence of EMA sensor interference during fricative production.
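A plotting sketch of the kind of histogram comparison described above is given below (Python/matplotlib); the file and column names are hypothetical, and the bin count is an arbitrary choice.

# Compare /s/ and /sh/ centroid distributions for one talker, by sensor condition.
import pandas as pd
import matplotlib.pyplot as plt

cent = pd.read_csv("fricative_centroids.csv")   # talker, fricative, sensor, centroid_khz
talker = "A2"
fig, axes = plt.subplots(1, 2, sharex=True, sharey=True, figsize=(8, 3))
for ax, cond in zip(axes, ["off", "on"]):
    sub = cent[(cent.talker == talker) & (cent.sensor == cond)]
    for fric in ["s", "sh"]:
        ax.hist(sub[sub.fricative == fric]["centroid_khz"], bins=12,
                alpha=0.6, label=f"/{fric}/")
    ax.set_title(f"sensors {cond}")
    ax.set_xlabel("centroid (kHz)")
axes[0].legend()
plt.tight_layout()
plt.show()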
Identification Scores
Listeners did well on the FV word identification task, with mean performance ranging from 90% to 93% correct across the individual listeners. Figure 5 shows near-ceiling performance for words produced by healthy control talkers (98.8%) and lower accuracy for words produced by individuals with aphasia/apraxia (82.7%). Figure 5 also indicates that intelligibility varied as a function of word and sensor condition for productions by individuals with aphasia/apraxia.
The data were analyzed with a three-way (Talker Group × Word × Sensor Condition) repeated measures ANOVA. The results indicated significant main effects for group, F(1, 9) = 526.1, p < .0001, and sensor condition, F(1, 9) = 26.46, p < .0006, with a significant Group × Sensor Condition interaction, F(1, 9) = 22.1, p = .0011. These findings reflect lower identification scores for productions by the aphasic/apraxic group than for the healthy control group and higher scores for the sensors-off (92.9%) than the sensors-on (88.7%) condition. Critically, there were no significant sensor-related intelligibility differences for productions by healthy control talkers, while individuals with aphasia/apraxia produced speech that was more intelligible in the sensors-off (86.9%) than the sensors-on (78.7%) condition. There was also a significant Word × Sensor Condition interaction, F(3, 27) = 11.78, p < .0001, and a Group × Word × Sensor Condition interaction, F(3, 27) = 7.7, p < .0007. Post hoc analyses (Scheffé, p < .01) investigating the three-way interaction indicated that /∫i/ productions by individuals with aphasia/apraxia were significantly less intelligible in the sensors-on than the sensors-off condition (marked with an asterisk in Figure 5). Individual talker data were examined to determine whether decreased intelligibility for /∫i/ produced under the sensors-on condition held for all members of the aphasic/apraxic group. The results showed that this pattern obtained for 3 of the 4 talkers with aphasia/apraxia who were included in this analysis.


Table 4. Mean centroid values (kHz) for fricatives across speaking conditions.
Figure 5. Identification of fricative-vowel stimuli produced by healthy adult (control) talkers and talkers with aphasia/apraxia, under two speaking conditions (sensors off and on). Error bars show standard errors.


The perceptual data were compared with the fricative centroid measurements of the FV stimuli (described in Table 4). For Talker A2, correspondences between acoustic and perceptual findings fell in an expected direction. Fricatives produced by this talker had significantly higher centroid values for both /∫i/ and /∫u/ in the sensors-on compared with the sensors-off condition (/∫i/: sensors-off, 4.675 kHz, sensors-on, 5.845 kHz; /∫u/: sensors-off, 5.765 kHz, sensors-on, 6.985 kHz). These higher centroid values for /∫/ should presumably have shifted listener judgments toward /s/, thereby lowering correct identification. Indeed, this pattern obtained, with lower identification scores for A2's /∫V/ productions in the sensors-on condition (84%) than in the sensors-off condition (99%). However, for the other three individuals with aphasia/apraxia (A3, A4, and A5), the correspondence between acoustic and perceptual data was less robust. For these talkers, there were no cases of sensor-related centroid differences greater than 1 kHz, yet a token-by-token analysis of the perceptual data revealed instances of substantial sensor-related intelligibility differences.
In summary, the acoustic and perceptual data were sensitive to talker group differences, and both data sources suggested that productions by healthy control talkers show minimal interference from EMA sensors. However, the acoustic and perceptual data showed less agreement with respect to individual talker and stimulus details for productions by individuals with aphasia/apraxia.
Discussion
Point-parameterized estimates of vocal tract motion are increasingly reported in the literature, both for healthy adults and for talkers with speech and language deficits. The results of these investigations are used to address models of speech production, as well as clinical issues such as the assessment and remediation of speech and language disorders. EMA systems play a growing role in this research. Although one study has examined the effects of X-ray microbeam pellets on speech produced by healthy adult talkers, the effects of EMA sensors on speech have not yet been investigated. It is also not known whether the risk of sensor interference is increased in talkers with speech disorders subsequent to brain damage. To address these questions, the current study examined a number of acoustic speech parameters (including segmental duration, vowel formant frequencies and trajectories, and fricative centroid values) in productions by individuals with aphasia/apraxia and age-matched healthy adult talkers under EMA sensors-off and sensors-on conditions. For most of these measures, citation form and sentential utterances were compared to determine whether subtle sensor-related differences could be detected across speech modes. A perceptual study using healthy adult listeners was conducted to obtain identification accuracy for FV words produced by individuals with aphasia/apraxia and by healthy adult talkers.


Considering next the possibility of spectral interference from EMA sensors, analysis of /hVd/ productions revealed vowel formant frequency values for productions by 2 individuals with aphasia/apraxia (A2, A4) that exceeded operationally defined thresholds for intraoral sensor interference. However, only one vowel was affected for each talker (/a/ for A2, /u/ for A4), suggesting minimal interference even for these talkers. When sensor-related formant frequency differences for /i/, /æ/, /u/, and /a/ were examined in words taken from the sentence She had your dark suit in greasy wash water all year, a small number of cases (5.8%) reached criteria, with no obvious tendency for these sensor-related differences to favor a specific vowel, talker group, or formant. Inspection of vowel formant frequency trajectories revealed only one apparent case of sensor-related difference (F2 trajectories for /u/ produced by A4). Taken together, the data suggest little EMA sensor interference affecting either vowel steady-state measures or vowel dynamic qualities. These acoustic findings support both informal evaluations by researchers (e.g., Schönle et al., 1987) and participants' self-reports indicating that EMA sensor interference during vowel production is minimal.
Potential spectral interference for consonants was assessed by measuring fricative centroid values for the words /si/, /su/, /∫i/, and /∫u/ produced under sensors-on and sensors-off conditions. The results indicated that Talker A2 showed significantly higher centroids in the sensors-on condition for /∫i/ and /∫u/. Histograms of centroid values for repeated productions of this talker's fricatives indicated a distinct, bimodal pattern in the sensors-off condition and greatly increased overlap for the sensors-on condition. Taken together, these data suggest Talker A2 had particular difficulty producing fricatives under EMA sensors-on conditions. The direction of the /∫V/ shift for this talker (higher centroids in the sensors-on condition) was similar to that observed by Weismer and Bunton (1999) for sentential productions by normal participants in the X-ray microbeam system. These authors suggested three possible explanations for such a shift: (a) a vocal tract constriction somewhat more forward; (b) greater overall effort in utterance production, with higher flows through the fricative constriction and consequently greater energy in the higher frequencies of the source spectrum (Shadle, 1990); and (c) sensors acting like obstacles in the path of the flow, increasing the high-frequency energy in the turbulent source and thus contributing to the first spectral moment differences. Another possibility may be a saturation effect difference, consisting of lower tongue tip contact with the alveolar ridge for /s/, but not /∫/ (Perkell et al., 2004). Conceivably, the EMA sensor could have interfered with tongue tip contact patterns, resulting in a more /s/-like quality for /∫/ attempts.
An identification experiment examined whether EMA sensors affected the intelligibility of participants' /si/, /su/, /∫i/, and /∫u/ productions. The results revealed an interaction between talker group and sensor condition (on/off). Productions by healthy adult talkers were identified almost perfectly, with no apparent effects of sensor interference. These data are consistent with previous perceptual rating results showing that listeners could not reliably discern whether speech by healthy adult talkers was produced with or without X-ray microbeam pellets attached (Weismer & Bunton, 1999). Nevertheless, because the data for productions by healthy control talkers were essentially at ceiling, it is possible that subtle effects of sensor interference might emerge if the task were made more difficult for the listeners. Future studies might explore this issue further by presenting stimuli under more demanding conditions, such as in the presence of noise masking.
In the current study, FV productions by individuals with aphasia/apraxia were identified with lower accuracy than those of healthy controls, a finding consistent with clinical descriptions of imprecise fricative production in aphasia and apraxia (e.g., Haley et al., 2000). There was also evidence consistent with an interpretation of sensor-related interference: Productions by individuals with aphasia/apraxia were less intelligible in sensors-on versus sensors-off conditions, a pattern that was significant for the word /∫i/. On closer inspection, the significant results for /∫i/ appear to have resulted from unusually high intelligibility for sensors-off productions, rather than from lowered intelligibility for sensors-on productions. Why the /∫i/ productions of individuals with aphasia/apraxia were so intelligible in the sensors-off condition is not entirely clear. Nevertheless, despite this one unusual pattern, the perceptual data generally suggest that individuals with aphasia/apraxia have greater-than-normal difficulty producing sibilant fricatives under EMA sensor conditions.
Because EMA sensors pose the same type of physical obstruction to the oral cavity in healthy control talkers and in individuals with aphasia/apraxia, it seems reasonable to assume that any additional difficulties noted in the productions of individuals with aphasia/apraxia may be due to deficits in the ability to compensate for the presence of EMA sensors during speech. If it is further assumed that the ability to adapt to the presence of EMA sensors is functionally related to the compensatory ability needed to overcome the presence of other types of intraoral obstructions (e.g., a bite block), the present data support previous claims that individuals with aphasia/apraxia have intact compensatory articulation abilities during vowel production (Baum, Kim, & Katz, 1997).
However, the fricative findings give some indication of possible compensatory difficulties in the speech of individuals with aphasia/apraxia. These talkers, considered as a group, showed greater perceptual effects from EMA sensors than healthy normal controls. Inspection of individual talker data revealed that decreased intelligibility in the sensors-on conditions occurred for 3 of the 4 talkers with aphasia/apraxia. The most consistent case of sensor-related effects was Talker A2, whose /∫V/ productions also showed increased centroid overlap in the sensors-on condition. Cumulatively, these data provide tentative evidence that compensatory problems may underlie the difficulty that some individuals with aphasia/apraxia experience while producing fricatives under EMA conditions.
Baum and McFarland (1997) noted that healthy adults producing the fricative /s/ under artificial palate conditions show marked improvement after as little as 15 min of intense practice with the palate in place. Although the perturbations involved in the current study arguably differ from those resulting from an artificial palate, it is conceivable that practice speaking with EMA tongue tip sensors attached was sufficient to allow substantial adaptation for fricatives produced by the healthy control talkers but not for the talkers with aphasia/apraxia. Additional experimentation that includes testing after practice would help address this issue.
Whereas the acoustic and perceptual data for productions by healthy control talkers were quite congruent, the perceptual data for speech produced by individuals with aphasia/apraxia did not always correspond with the patterns one would expect based on the fricative centroid values. A mismatch between listeners' perceptions and fricative spectral attributes has been noted in previous studies of incorrect /∫/ productions by individuals with aphasia/apraxia (Wambaugh, Doyle, West, & Kalinyak, 1995). Whereas the present perceptual data appeared sensitive to talker group, word, and sensor differences, there are a number of possible reasons why measured centroid values did not predict listeners' results. One possibility is that a combination of spectral moments could provide improved predictive power, as suggested by previous studies of fricatives produced by normal healthy speakers (Forrest et al., 1988; Jongman et al., 2000). Another possibility is that predictive power could be improved by considering profiles of successive spectral moment portraits over time, as suggested in the FORMOFFA model for the analysis of normal and disordered speech (Buder, Kent, Kent, Milenkovic, & Workinger, 1996).
By including both citation form and sentential speech samples, the present study tested the hypothesis that speech produced in sentential contexts would reveal greater EMA sensor interference than citation form contexts. Relatively little support was found for this hypothesis. Although both segment durations and vowel formant frequencies were slightly more affected in the sentential stimuli than in single-word productions, these effects were noted primarily for productions by individuals with aphasia/apraxia, and the effects were not uniform across individual talkers or stimuli. Overall, the results suggest that citation form and sentential utterances show little difference with respect to their effectiveness in eliciting acoustic evidence of EMA sensor interference.
In conclusion, there are two important methodological implications of the present findings. First, the data support the observation made by Weismer and Bunton (1999) that perceptual indices will not provide adequate screening criteria for protecting kinematic experiments from data contributed by talkers who show consistent sensor interference effects. Weismer and Bunton noted that listeners were unable to reliably determine whether stimuli were produced with X-ray microbeam pellets on or off. In the present data, listeners showed strong ceiling effects and no influence of EMA sensors when identifying fricatives produced by healthy control talkers. Taken together, these two experiments examining different flesh-point tracking technologies suggest that acoustic screening techniques be used to identify those individuals who may show consistent effects of having sensors placed in the oral cavity. As noted by Weismer and Bunton, this protocol could involve recording speech sounds in sensors-off and sensors-on conditions, followed by acoustic analyses. The present results suggest it will be especially important to examine sibilant production.
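A minimal sketch of how the sibilant portion of such a screening protocol might be implemented, using the 1-kHz centroid criterion from Weismer and Bunton (1999), is given below; the function and the data shown are hypothetical.

# Flag a talker whose mean between-condition centroid difference exceeds 1 kHz.
import numpy as np

def screen_sibilants(centroids_off, centroids_on, threshold_hz=1000.0):
    """Each argument maps a fricative label ('s', 'sh') to a list of centroid values (Hz)."""
    flagged = {}
    for fric in centroids_off:
        diff = abs(np.mean(centroids_on[fric]) - np.mean(centroids_off[fric]))
        flagged[fric] = diff > threshold_hz
    return flagged

# Example: a talker whose /sh/ centroid rises by roughly 1.2 kHz with sensors on.
print(screen_sibilants({"s": [6400, 6600], "sh": [4700, 4650]},
                       {"s": [6500, 6550], "sh": [5900, 5850]}))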
Second, the current findings suggest that intervention studies involving repeated EMA measurements of speech produced by individuals with aphasia/apraxia must be designed to ensure that any observed progress does not merely reflect participants' adaptation to the presence of the sensors over time. This potential confound can be circumvented by taking appropriate safeguards in experimental design, such as probing for stimulus generalization outside of the training set (Katz et al., 2002, 2003). At present, this concern would appear limited to studies of sibilant production by individuals with aphasia/apraxia. Additional studies are needed to determine the exact articulatory explanations for these interference effects and whether such problems extend to other classes of sounds or to productions by individuals with different types of speech disorders.
Acknowledgments
Portions of the results were presented in 2001 at the 39th Meeting of the Academy of Aphasia (Boulder, CO). This research was supported by Callier Excellence Award 19-02. We would like to thank June Levitt, Nicole Rush, and Michiko Yoshida for assistance with acoustic analysis.
[Footnote]
1 In some laboratories (e.g., University of Munich Institute of Phonetics and Speech Communication), EMA sensors are attached in such a way that the wire is first oriented toward the back of the mouth, reducing the risk of wires going over the tongue tip.


References
Baum, S. R., Kim, J. A., & Katz, W. F. (1997). Compensation for jaw fixation by aphasic patients. Brain and Language, 15, 354-376.
Baum, S. R., & McFarland, D. H. (1997). The development of speech adaptation to an artificial palate. Journal of the Acoustical Society of America, 102, 2353-2359.
Beckman, M. E., & Cohen, K. B. (2000). Modeling the articulatory dynamics of two levels of stress contrast. In M. Horne (Ed.), Prosody: Theory and experiment (pp. 169-200). Dordrecht, The Netherlands: Kluwer.
Beckman, M. E., & Edwards, J. (1994). Articulatory evidence for differentiating stress categories. In P. A. Keating (Ed.), Papers in laboratory phonology III: Phonological structure and phonetic form (pp. 7-33). Cambridge, England: Cambridge University Press.
Borden, G., Harris, K., & Raphael, L. (2003). Speech science primer: Physiology, acoustics, and perception of speech. Baltimore: Lippincott Williams & Wilkins.
Buder, E. H., Kent, R. D., Kent, J. F., Milenkovic, P., & Workinger, M. S. (1996). FORMOFFA: An automated formant, moment, fundamental frequency, amplitude analysis of normal and disordered speech. Clinical Linguistics and Phonetics, 10, 31-54.
Crystal, T., & House, A. (1988a). The duration of American English consonants: An overview. Journal of Phonetics, 16, 285-294.
Crystal, T., & House, A. (1988b). The duration of American English vowels: An overview. Journal of Phonetics, 16, 263-284.
Crystal, T., & House, A. (1988c). Segmental durations in connected-speech signals: Current results. Journal of the Acoustical Society of America, 83, 1553-1573.
Dabul, B. (2000). Apraxia Battery for Adults (ABA-2). Tigard, OR: C.C. Publications.
de Jong, K. (1995). The supraglottal articulation of prominence in English: Linguistic stress as localized hyperarticulation. Journal of the Acoustical Society of America, 97, 491-504.
de Jong, K., Beckman, M. E., & Edwards, J. (1993). The interplay between prosodic structure and coarticulation. Language and Speech, 36, 197-212.
Engwall, O. (2000). Dynamical aspects of coarticulation in Swedish fricatives: A combined EMA & EPG study. Quarterly Progress and Status Report From the Department of Speech, Music, & Hearing at the Royal Institute of Technology (KTH), Stockholm, Sweden, 4, 49-73.
Forrest, K., Weismer, G., Milenkovic, P., & Dougall, R. (1988). Statistical analysis of word-initial voiceless obstruents: Preliminary data. Journal of the Acoustical Society of America, 84, 115-123.
Garofolo, J. S. (1988). Getting started with the DARPA TIMIT CD-ROM: An acoustic phonetic continuous speech database. Gaithersburg, MD: National Institute of Standards and Technology.
Goodglass, H., Kaplan, E., & Barresi, B. (2001). The assessment of aphasia and related disorders (3rd ed.). Philadelphia: Lea & Febiger.
Goozee, J. V., Murdoch, B. E., Theodoros, D. G., & Stokes, P. D. (2000). Kinematic analysis of tongue movements following traumatic brain injury using electromagnetic articulography. Brain Injury, 14, 153-174.
Haley, K. L., Ohde, R. N., & Wertz, R. T. (2000). Precision of fricative production in aphasia and apraxia of speech: A perceptual and acoustic study. Aphasiology, 14, 619-634.
Hardcastle, W. J. (1987). Electropalatographic study of articulation disorders in verbal dyspraxia. In J. Ryalls (Ed.), Phonetic approaches to speech production in aphasia (pp. 113-136). Boston: College-Hill.
Harmes, S., Daniloff, R., Hoffman, P., Lewis, J., Kramer, M., & Absher, R. (1984). Temporal and articulatory control of fricative articulation by speakers with Broca's aphasia. Journal of Phonetics, 12, 367-385.
Hillenbrand, J. M., Getty, L. A., Clark, M. J., & Wheeler, K. (1995). Acoustic characteristics of American English vowels. Journal of the Acoustical Society of America, 97, 3099-3111.
Jongman, A., Wayland, R., & Wong, S. (2000). Acoustic characteristics of English fricatives. Journal of the Acoustical Society of America, 108, 1252-1263.
Katz, W., & Bharadwaj, S. (2001). Coarticulation in fricative-vowel syllables produced by children and adults: A preliminary report. Clinical Linguistics and Phonetics, 15, 139-144.
Katz, W., Bharadwaj, S., & Carstens, B. (1999). Electromagnetic articulography treatment for an adult with Broca's aphasia and apraxia of speech. Journal of Speech, Language, and Hearing Research, 42, 1355-1366.
Katz, W., Bharadwaj, S., Gabbert, G., & Stettler, M. (2002). Visual augmented knowledge of performance: Treating place-of-articulation errors in apraxia of speech using EMA. Brain and Language, 83, 187-189.
Katz, W., Carter, G., & Levitt, J. (2003). Biofeedback treatment of buccofacial apraxia using EMA. Brain and Language, 87, 175-176.
Katz, W., Machetanz, J., Orth, U., & Schoenle, P. (1990). A kinematic analysis of anticipatory coarticulation in the speech of anterior aphasic subjects using electromagnetic articulography. Brain and Language, 38, 555-575.
Kewley-Port, D., & Watson, C. S. (1994). Formant frequency discrimination for isolated English vowels. Journal of the Acoustical Society of America, 95, 485-496.
Klich, R., Ireland, J., & Weidner, W. ( 1979). Articulatory and phonological aspects of consonant substitutions in apraxia of speech. Cortex, 15, 451-470.
Lindblom, B. (1962). Accuracy and limitations of sonographic measurements. Proceedings of the 4th International Congress of Phonetic Sciences. The Hague, The Netherlands: Mouton.
Lindblom, B. (1990). Exploring phonetic variation: A sketch of the H-and-H theory. In W. J. Hardcastle & A. Marchal (Eds.), Speech production and speech modeling (pp. 403-439). Dordrecht, The Netherlands: Kluwer Academic.
Mertus, J. (2002). BLISS [Software analysis package]. Providence, RI: Author.
Milenkovic, P. (2001). Time-frequency analyzer (TF32) [Software analysis package]. Madison: University of Wisconsin.
Monson, R., & Engebretson, A. M. (1983). The accuracy of formant frequency measurements: A comparison of spectrographic analysis and linear prediction. Journal of Speech and Hearing Research, 26, 89-97.
Murdoch, B., Goozée, J. V., & Cahill, L. (2001). Dynamic assessment of tongue function in children with dysarthria associated with acquired brain injury using electromagnetic articulography. Brain Impairment, 2, 63.
Nearey, T. M., Hillenbrand, J. M., & Assmann, P. F. (2002). Evaluation of a strategy for automatic formant tracking. Journal of the Acoustical Society of America, 112, 2323.
Nijland, L., Maassen, B., Hulstijn, W., & Peters, H. F. M. (2004). Speech motor coordination in Dutch-speaking children with DAS studied with EMMA. Journal of Multilingual Communication Disorders, 2, 50-60.
Nittrouer, S., Studdert-Kennedy, M., & McGowan, R. S. (1989). The emergence of phonetic segments: Evidence from the spectral structure of fricative vowel syllables spoken by children and adults. Journal of Speech and Hearing Research, 32, 120-132.
Odell, K., McNeil, M. R., Rosenbek, J. C., & Hunter, L. (1990). Perceptual characteristics of consonant production by apraxic speakers. Journal of Speech and Hearing Disorders, 55, 349-359.
Perkell, J. S., Matthies, M. L., Tiede, M., Lane, H., Zandipour, M., Marrone, N., et al. (2004). The distinctness of speakers' /s/-/∫/ contrast is related to their auditory discrimination and use of an articulatory saturation effect. Journal of Speech, Language, and Hearing Research, 47, 1259-1269.
Perkell, J. S., & Nelson, W. L. (1985). Variability in production of the vowels /i/ and /u/. Journal of the Acoustical Society of America, 77, 1889-1895.
Peters, H. F. M., Hulstijn, W., & Van Lieshout, P. H. H. M. (2000). Recent developments in speech motor research into stuttering. Folia Phoniatrica et Logopaedica, 52, 103-119.
Peterson, G. E., & Barney, H. L. (1952). Control methods used in a study of the vowels. Journal of the Acoustical Society of America, 24, 175-184.
Ryalls, J. (1986). An acoustic study of vowel perception in aphasia. Brain and Language, 29, 48-67.
Schönle, P., Gräbe, K., Wenig, P., Höhne, J., Schrader, J., & Conrad, B. (1987). Electromagnetic articulography: Use of alternating magnetic fields for tracing movements of multiple points inside and outside the vocal tract. Brain and Language, 20, 90-114.
Schultz, G. M., SuIc, S., Leon, S., & Gilligan, G. (2000). Speech motor learning in Parkinson's disease: Preliminary results. Journal of Medical Speech-Language Pathology, 8, 243-247.
Shadle, C. H. (1990). Articulatory-acoustic relationships in fricative consonants. In W. J. Hardcastle & A. Marchal (Eds.), Speech production and speech modeling (pp. 189-209). Dordrecht, The Netherlands: Kluwer Academic.
Stevens, K. (2000). Acoustic phonetics. Cambridge, MA: MIT Press.
Stevens, K., & House, A. (1955). Development of a quantitative description of vowel articulation. Journal of the Acoustical Society of America, 27, 484-493.
Strange, W. (1989). Evolving theories of vowel perception. Journal of the Acoustical Society of America, 85, 2081-2087.
Tabain, M. (1998). Non-sibilant fricatives in English: Spectral information above 10 kHz. Phonetica, 55, 107-130.
Tabain, M. (2003). Effects of prosodic boundary on /aC/ sequences: Articulatory results. Journal of the Acoustical Society of America, 113, 2834-2849.
Tjaden, K., & Turner, G. S. (1997). Spectral properties of fricatives in amyotrophic lateral sclerosis. Journal of Speech, Language, and Hearing Research, 40, 1358-1372.
Umeda, N. (1975). Vowel duration in American English. Journal of the Acoustical Society of America, 58, 434-445.
Umeda, N. ( 1977). Consonant duration in American English. Journal of the Acoustical Society of America, 61, 846-858.
Wambaugh, J. L., Doyle, P. J., West, J. E., & Kalinyak, M. M. (1995). Spectral analysis of sound errors in persons with apraxia of speech and aphasia. American Journal of Speech-Language Pathology, 4, 186-192.
Weismer, G., & Bunton, K. (1999). Influences of pellet markers on speech production behavior: Acoustical and perceptual measures. Journal of the Acoustical Society of America, 105, 2882-2891.
Westbury, J. (1994). X-ray Microbeam speech production user's handbook (Version I). Madison: University of Wisconsin-Madison.


[Author Affiliation]
William F. Katz
Sneha V. Bharadwaj
Monica P. Stettler
University of Texas at Dallas


Received July 9, 2005
Accepted October 30, 2005
DOI: 10.1044/1092-4388(2006/047)
Contact author: William F. Katz, Callier Center for Communication Disorders, University of Texas at Dallas, 1966 Inwood Road, Dallas, Texas 75235.
E-mail: wkatz@utdallas.edu
