Understanding the Figurative Language of Tropes in Natural Language Processing Using a Brain-based Organization for Ontologies
by
Christine M. Keuper
A dissertation submitted in partial fulfillment of the requirements
for the degree of Doctor of Philosophy
Graduate School of Computer and Information Sciences
Nova Southeastern University
2007
Copyright 2007 by Keuper, Christine M. All rights reserved.
This microform edition is protected against unauthorized copying under Title 17, United States Code.
UMI Microform, by ProQuest Information and Learning Company
300 North Zeeb Road, P.O. Box 1346
Ann Arbor, MI 48106-1346
We hereby certify that this dissertation, submitted by Christine M. Keuper, conforms to acceptable standards and is fully adequate in scope and quality to fulfill the dissertation requirements for the degree of Doctor of Philosophy.

_______________________________
Chairperson of Dissertation Committee
_
Dissertation Committee Member
_
Dissertation Committee Member
Approved:
_
Dean, Graduate School of Computer and Information Sciences
Graduate School of Computer and Information Sciences
Nova Southeastern University
2007
Understanding the Figurative Language of Tropes in Natural Language Processing Using a Brain-based Organization for Ontologies

by Christine M. Keuper
2007
Look, love, what envious streaks
Do lace the severing clouds in yonder east;
Night's candles are burnt out, and jocund day
Stands tiptoe on the misty mountain tops.
“Romeo and Juliet,” Shakespeare
Language communication is the successful interpretation of the speaker's communicative intent. When Shakespeare writes, we see the intent in Romeo's words, but it is lost again when we attempt to express it using a computer model for language; a model with an ability to handle tropes (metaphor, metonymy, synecdoche, and irony) is needed. The goal of this model is to correctly interpret the nouns that occur within these tropes.

Early computer language models did not work well when they encountered tropes, yet the brain handles them easily. These early models concentrated on the language functions of the left temporal lobes of the brain; perhaps the models worked poorly because they had limited themselves to modeling only the parts of the brain that handle propositional language. The designs of these models were also influenced by the assumption that the human brain understood language using a grammar-based Language Acquisition Device. In examining human language acquisition, however, grammar does not even appear until the third year.
In addition to the common taxonomic and mereologic structures that occur in most language models, the current model also recreates the brain's thematic, perceptual, and functional categorizations. Words no longer occur at a single location: words defined by their perceptual features, whether nouns or adjectives, occur within perceptual categorizations, and those defined by functional features, whether nouns or verbs, occur within functional categorizations. Tenor-vehicle connections then expand these perceptual and functional categories with metaphor. Words occurring within thematic categories are used to understand metonymy, and words occurring in the taxonomic and mereologic structures are used to understand synecdoche.

Classifiers, such as the Japanese hon, indicate membership in a category. Marked perceptual and functional classifiers in ASL, Japanese, and Swahili made it easier to identify the occurrences of unmarked perceptual and functional categories in English. Likewise, the mythos-based categories in Dyirbal, French, and German made the remnants of mythos-based categories still occurring in English understandable.
a college diploma the same year I did as well. He is also missed.
To my daughter Francie, who was still an infant when this journey began, the day I went across town to the Polytechnic University in San Luis Obispo, California, and became part of a very small group of women who wanted to study engineering amongst the thousands of men there. She taught me about child language acquisition, she was a joyful part of my life and gave me a reason to get up every morning, and she was emotional support to me many years later when we were both in graduate school at the same time. To my older brother Robert, who followed me to the university, also to study engineering, but who died after developing a fatal cancer. He always believed in me. To my youngest brother Phillip, who I raised from infancy, who is also no longer here. To my younger sister Karen, who was always a safety net for me.
To my professors at Cal Poly: To Dr. Peter Litchfield, who taught me experimental psychology methodology. To Dr. Barbara Cook, who taught me cultural anthropology. To Dr. Robert Lint, my linguistics professor, for the wonderful sense of déjà vu that occurred when I walked into my first compiler design class, and for all of the questions he asked, some of which I am still trying to answer here many years after his death. To Dr. Jay Bayne, my advisor, and Dr. Emile Attala, my thesis advisor, for encouraging my love of computer science, and all of the fantastical directions I wanted to go with the computer.
To Lisa Krasna, who let me adopt and raise her deaf, autistic son, Jeremy. To Jeremy, who taught me what I didn't know, and who has become a great joy in my life in his adulthood. To Dr. Edward Ritvo, for his medical research that opened the doors for Jeremy, and for introducing me to Bill Christopher. To Bill Christopher, who also has an adopted, autistic son, Ned, and who introduced me to Dr. Art Schawlow and his wife Aurelia, who had an autistic son, Artie. To Art and Aurelia, who encouraged me to continue the development of the methodology I used to teach Jeremy language, and who also both encouraged me to continue my studies in computer science. They are both missed. To Alan Alda, who encouraged me to continue development of the sign language dictionary I was working on, and who encouraged me to return to graduate school.
To Dr. Graham Chalmers, my friend and advisor of many years. To Mark Lucas and Scott Simon, who were always there with new language features for the development environment. To Dr. John Bonvillian, whose emails helped me refine my thoughts and theories about language models. To Dr. Bill Stokoe, who spent years encouraging me via email to continue with my linguistic and computer science studies, and who I finally met in person shortly before his death.
To Dr. Jerry Keuper who, after hearing I was interested in computational linguistics, sent me a copy of his book on Chinese idiom as well as a few chapters of a book he was writing on Spanish idiom, and who also called me on my first day as a new doctoral student at Nova to encourage me. Drs. Stokoe and Keuper are both missed as well.
To my daughter Meagan, who is now away at college studying industrial design, for her love and support, and for as a young child being proud to tell her friends that her mother was studying for a “doctorette.”
And finally, last but certainly not least, to my professors at Nova Southeastern University, all of whom supplied me with a quality education. To Dr. Rollie Guild, who started working with me when I was a new student at Nova, directing my early research. To Dr. Lee Leitner, who continued after Dr. Guild's death, helping me take a vague idea and start to turn it into a dissertation. To my dissertation committee, Dr. Michael Laszlo, Dr. James Cannady, and Dr. Amon Seagull, for their unending patience, and for the excellent direction and feedback they provided me while working on this dissertation.
Table of Contents

Abstract iii
List of Tables x
List of Figures xii
Chapters

1 Introduction 1
Issues 4
Can Something "Not in the Real World" be Represented in a Classic Taxonomy? 4
Can There be More than One Conceptual System? 5
Can an Interlingua Represent Concepts Independent of Language? 7
Limitations 9
The Autonomy Hypothesis and the Lexical Independence Hypothesis 9
Delimitations 10
Pre- and Post-editing to Resolve Ambiguity 10
Pragmatic Ambiguity 10
2 Relevance, Significance, and Brief Review of the Literature 12
Introduction 12
Early Attempts at Machine Translation of Natural Language 13
Is There a Language Acquisition Device? 15
The Development of Tropes 16
Language Stimuli 18
Perceptual Conceptualization and Lexicalization 20
Basic-level Perceptual Categorization, Prototypes, and Radial Structures 25
Perceptual Categorization in Navaho, Japanese, and ASL 29
Morphology and Categorization 31
Action Verbs 34
Thematic Categorization 36
Thematic Roles 38
Arbitrary “one criterion” Categorization and Ad-hoc Categorization 39
Metaphoric Categorization 40
Orientational Metaphor 42
Ontological Metaphor 43
Mythos-based Categorization in Dyirbal 44
Mereologic Categorization 48
Part-whole Hierarchies Across Languages 50
Taxonomic Categorization 52
Contrastive Ambiguity and Taxonomic Categorization 52
Taxonomic Categorization in German 53
Category Markedness and Taxonomic Ambiguity 55
Grammar 56
Grammatical Ambiguity 58
Metonymy 60
Synecdoche 61
Irony 62
Context Switching 64
Where’s the Syntax? 65
Summary 68
3 Methodology 69
The Proposed Model 70
Paradigm and Syntagm 71
Paradigmatic Language 74
Time Metaphor and Orientational Metaphor 74
Perceptual Metaphor 76
Ontological Metaphor 80
Tenor-vehicle Metaphor 81
Contrastive Ambiguity 84
Paradigmatic Selection 87
Mereologic and Taxonomic Ambiguity 90
Mereologic and Taxonomic Synecdoche 91
Syntagmatic Language 92
Chunking, Idiom, and Irony 92
Thematic Categorization 93
Functional Categorization 94
Grammatical Inflection 97
Complementary Ambiguity 97
Grammatical Inflection in Idiom 98
Syntactic Ambiguity 98
Thematic-and Function-based Metonymy 99
Format for Presenting Results 99
Evaluation of the Results 100
Summary 103
4 Results 104
Data Analysis 104
Input 104
Internal Representation 106
Brain Structure Modules 107
Language Processing 109
The Right Anterior Temporal Lobe Module 111
Parsing 111
Idioms and Collocations 112
Agglutinative and Derivational Languages 113
Irony 114
Echolalia 114
The Right Frontal Lobe Module 116
Switching Languages 117
The Left Anterior Temporal Lobe Module 118
Grammatical Inflection and Function Words 118
The Right Posterior Temporal Lobe Module 120
Perceptual Categorization and Perceptual Classifiers 120
Time and Orientational Metaphor 132
Ontological Metaphor 134
Thematic Categorization and Metonymy 136
The Left Motor Cortex Module 137
Functional Categorization and Contrastive Ambiguity 138
Functional Categorization and Complementary Ambiguity 139
Verb-Noun Pairs and Subject-Verb-Object Groupings 140
Functional Ambiguity 145
Retention of S-V-O in Broca’s Aphasia 146
Verb Loss in ALS 147
The Left Posterior Temporal Lobe Module 147
Hierarchical Categorization and Hierarchical Ambiguity 147
Mereology-based Interlingua 149
Mereology-based Synecdoche 150
Taxonomy-based Synecdoche 152
Wernicke’s Aphasia 153
The Right Motor Cortex Module 155
Functional Categorization and Tenor-vehicle Metaphor 155
Asperger’s Syndrome 156
The Left Frontal Lobe Module 157
The Impact of “Not Implemented” 157
Syntactic Ambiguity 158
Broca’s Aphasia 159
Findings 159
Comparison to Language Acquisition, Aphasiology, and Autism Models 160
Comparison to Learning Models 160
Comparison to Propositional Models 160
Comparison to Grammatical Models 161
Comparison to Statistical Models 162
Comparison to Interlingual Models 163
Comparison to Cruse’s Examples of Taxonomic Ambiguity 163
Comparison to Pustejovsky’s Contrastive & Complementary Ambiguity 163
Comparison to Examples of Functional Ambiguity 164
Comparison to Examples in Fillmore’s Case Theory 165
Comparison to Jackendoff’s Examples of Thematic-based Metonymy 167
Comparison to Chandler’s Examples of Synecdoche 168
Comparison to Examples of Tenor-vehicle Metaphor 169
Comparison to Narayanan’s Examples of Metaphor 169
Comparison to Lakoff’s Examples of Classifiers and Categorization 172
Comparison to Lakoff's Examples of Ontological Metaphor 173
Summary of the Results 174
5 Conclusions, Implications, Recommendations, and Summary 177
Conclusions 177
Evaluation of Error 178
Limitations of the Findings 181
Implications 182
Recommendations 184
Summary 186
The Implementation 189
Computer Models of Mental Processes 190
Appendices

A Some History of Computers and Natural Language 192
Rule-based Direct Translations 192
Transfer Approaches 192
Interlingua Approaches 193
Corpus-based Systems—Statistical Methods and Example-based Translation 195
Knowledge-based Systems 195
B Autism 197
The Triad of Impairment 197
The Rates of Autism in Neurocutaneous Disorders 198
C Classical Conditioning 200
D The Midbrain 202
E A Mereologic Structure from the MeSH 204
F Mereologic Groups for Animals 207
G Strong Verbs 209
H Three Possible Selections From Fabeln 211
I The Sentences 212
Sentences by Language 213
ASL 213
Dani 213
Danish 213
Dyirbal 214
English 214
French 250
German 250
Hausa 252
Hawaiian 253
Hebrew 253
Inuit/Yupik 253
Irish 253
Italian 253
Japanese 254
Spanish 254
Swahili 255
Tarahumara 258
Examples by Order of Occurrence 259
Examples by Topic 299
Parsing 299
Ambiguity 300
Metaphor 307
Metonymy 309
Synecdoche 310
Classifiers 310
Switching S-V-O/S-O-V/V-S-O 312
Meaning and Grammar 313
J The Rapid Application Development Prototyping Environment 315
Reference List 317
List of Tables
Table 1 The Development of Tropes 17
Table 2 Morphology and Categorization in Swahili 32
Table 3 Bantu Roots 33
Table 4 Thematic Roles 39
Table 5 Literal Meaning and Metaphor 44
Table 6 Dyirbal Classification (in English) 45
Table 7 Dixon’s Dyirbal Classification System 45
Table 8 Part/whole and Class Inclusion Hierarchies 50
Table 9 Part/whole Relationships Across Languages 51
Table 10 Part/whole Relationships Applied to Colors 52
Table 11 Das Gemüse Superordinate Category 53
Table 12 Das Tier Superordinate Category 54
Table 13 Superordinate Neuter Classification of Animals 54
Table 14 Neuter Classification of Non-indigenous Animals 54
Table 15 Category Markedness 55
Table 16 Grammatical Inflection in Nouns, Adjectives and Verbs 58
Table 17 ASL vs English 59
Table 18 The Ambiguity of fly 60
Table 19 Metonymy 60
Table 20 Synecdoche 61
Table 21 The Paradigmatic and Syntagmatic Aspects of Language 72
Table 22 Time Metaphor 75
Table 23 Orientational Metaphor 75
Table 24 Perceptual Metaphor 79
Table 25 Chinese Phoneme ma 84
Table 26 Banke, Baunke and Banque Merge into Bank 84
Table 27 The Meanings of Baunke, Banke and Banque 86
Table 28 Sample Languages by Language Family 105
Table 29 Sample S-O-V, S-V-O, & V-S-O Languages, with Number of Speakers 106
Table 30 Functionality Implemented in the Modules 108
Table 31 Language Disorders 108
Table 32 Perceptual Classifiers in ASL, English, Japanese and Swahili 120
Table 33 -dege 121
Table 34 -tabu 122
Table 35 Perceptual Descriptors 130
Table 36 Thematic Definitions 154
Table 37 Disambiguating bill 156
Table 38 Choice of Prototypical Examples 175
Table 39 Complementary Ambiguity 180
Table 41 Transfer Systems 193
Table 42 Direct Translation vs Interlingua With 10,000 Entries 194
Table 43 Interlingua Systems 194
Table 44 Corpus-based Systems 195
Table 45 Knowledge-based Systems 196
Table 46 Reported Cases of Autism 1943-1994 198
Table 47 Reported Cases of Autism 1997 and 2004 198
Table 48 Neurotransmitters Important for Language 203
List of Figures
Figure 1 La Peau Rouge vs Le Peau-rouge 8
Figure 2 Metaphorical Deep Structure 10
Figure 3 Pragmatic Ambiguity 11
Figure 4 Language Stimuli 19
Figure 5 The Cerebellum 19
Figure 6 Tonal Rhythm and Lexicalization 21
Figure 7 Perceptual Categorization 26
Figure 8 Japanese hon 30
Figure 9 Action Verbs 35
Figure 10 Thematic Categorization 37
Figure 11 Arbitrary and Ad Hoc Categorization 40
Figure 12 Metaphoric Categorization 41
Figure 13 Creating Metaphor 42
Figure 14 The Back of an Object 43
Figure 15 A Dyirbal Classification System 46
Figure 16 Mereologic and Taxonomic Categorization 49
Figure 17 Pustejovsky’s Category and Genus of bank 53
Figure 18 Grammatical Inflection 57
Figure 19 Metonymy 61
Figure 20 Synecdoche 62
Figure 21 Irony 63
Figure 22 Context Switching 64
Figure 23 Cultural Ontologies 65
Figure 24 Complex planning 66
Figure 25 Classic Language Analysis 69
Figure 26 Nirenburg’s Ontology 70
Figure 27 Proposed Language Analysis 70
Figure 28 Paradigmatic and Syntagmatic Language 75
Figure 29 The Meanings of pike and pikestaff 76
Figure 30 The Meanings of pike 76
Figure 31 A Perceptual pike Ontology 77
Figure 32 The Meaning of pikestaff 77
Figure 33 A Perceptual pikestaff Ontology 78
Figure 34 pikestaff vs pike Ontologies 80
Figure 35 Metaphor Evoking the Underlying Mythos 81
Figure 36 The Metaphoric Meaning of pike 82
Figure 37 The time is-a pikestaff Metaphor 82
Figure 38 A snow blankets the ground Metaphor 83
Figure 40 Paradigmatic Selection 87
Figure 41 Paradigmatic Selection of Cooking Terms 89
Figure 42 A Ceramics Oven and a Dutch Oven 90
Figure 43 Taxonomic and Mereologic Ontology 90
Figure 44 Examples of Taxonomic Ambiguity 91
Figure 45 Examples of Synecdoche 91
Figure 46 A Taxonomic computer Ontology 91
Figure 47 A Mereologic credit card Ontology 92
Figure 48 Cuéntaselo a tu tía 93
Figure 49 A Thematic bedroom Ontology 94
Figure 50 A Thematic pastrami Ontology 94
Figure 51 An eat Functional Ontology 94
Figure 52 The essen and fressen Functional Ontologies 94
Figure 53 Classic husband Ontology 95
Figure 54 Functional husband Ontology 96
Figure 55 The to husband Ontology 96
Figure 56 The English Word bill 96
Figure 57 Inconsistent Grammatical Inflection 97
Figure 58 Grammatical Inflection 98
Figure 59 Animal Crackers 98
Figure 60 Time Flies 99
Figure 61 Examples of Metonymy 99
Figure 62 The Structures of the Ontologies 100
Figure 63 Evaluating Compilers 101
Figure 64 Language Models Represent Small Portions of Language 102
Figure 65 The Internal Representation Window 106
Figure 66 The Brain Structures Window 107
Figure 67 The RATL module 111
Figure 68 The Sign Language Input Screen 112
Figure 69 You see red 112
Figure 70 We drove to New York 113
Figure 71 Rechtsschutzversicherungsgesellschaften ist das größte Wort im deutschen Wörterbuch 113
Figure 72 That’s fine with me 114
Figure 73 Deactivating the RATL module 115
Figure 74 lion 115
Figure 75 It’s plain as a pikestaff 115
Figure 76 The lion is big 116
Figure 77 La chair crue a une texture croquante et une saveur poivree 117
Figure 78 The RFL module 117
Figure 79 Le piquant du gout peut etre retire en retirant la peau rouge 118
Figure 80 Oumpah Pah le Peau Rouge is a comic book 118
Figure 81 Bill saw the bank of clouds 119
Figure 82 Bill saw the bank of the river 119
Figure 83 Kitambaa kidogo kitatosha 121
Figure 84 Kitabu 122
Figure 85 Kijitabu 122
Figure 86 Empitsu wa gohon desu 122
Figure 87 Biiru o nihon kudasai 123
Figure 88 long, straight road hierarchy 123
Figure 89 long, thin fish hierarchy 124
Figure 90 pikestaff: long, rigid, thin, straight 124
Figure 91 The boy caught a pike 125
Figure 92 spike: short, rigid, thin, straight, pointed 126
Figure 93 high, narrow heel hierarchy 126
Figure 94 Father drove the car 127
Figure 95 The fireman fell to the ground 128
Figure 96 ASL Signs 128
Figure 97 Defining Characteristics of the Signs 129
Figure 98 ASL Classifiers 129
Figure 99 Father put the pencil on the table 130
Figure 100 floor, ground Classifier 131
Figure 101 Fireman fall down on the floor/ground 131
Figure 102 ASL Brain Structures 131
Figure 103 Saa mbili asubuhi 132
Figure 104 Saa kumi na moja jioni 133
Figure 105 Rokuji 133
Figure 106 Rokujikan 133
Figure 107 Ga cokali can baya da kwarya 134
Figure 108 Mahali pa hatari 134
Figure 109 Bill is blue 135
Figure 110 RPTL module 135
Figure 111 The pastrami wants the check 136
Figure 112 The White House vetoed the bill 137
Figure 113 White House Brain Structure 137
Figure 114 Pike catches boy 138
Figure 115 Bill wants the bill 139
Figure 116 The boy went through the door 140
Figure 117 The ghost went through the door 140
Figure 118 Bill baked the cake 141
Figure 119 The clay pot was baked in the oven 141
Figure 120 Bill baked a cake in the dutch oven 142
Figure 121 The house is an oven 142
Figure 122 Bill took a cake of soap 143
Figure 123 The car is caked with mud 143
Figure 124 The makeup is caked 144
Figure 125 The husband baked the cake 144
Figure 127 Bill husbanded his resources 145
Figure 128 The boy runs the course 146
Figure 129 The water runs its course 146
Figure 130 Bill washed the dishes 148
Figure 131 Bill washed the dishes Brain Structure 148
Figure 132 The lion ate the dog 149
Figure 133 Wir essen und fressen und tanzen und trinken 150
Figure 134 The recycling center takes plastic 151
Figure 135 The store takes plastic 151
Figure 136 UCLA won the game 152
Figure 137 Bill used the Hoover 152
Figure 138 Brain Structure Wernicke’s Aphasia 153
Figure 139 Taxonomic grandmother 153
Figure 140 Broken LPTL 154
Figure 141 Thematic grandmother 154
Figure 142 The words cut Bill 155
Figure 143 The words cut Bill Brain Structure 156
Figure 144 The words cut Bill—Brain Structure Without RMC 157
Figure 145 The words cut Bill—Internal Representation Without RMC 157
Figure 146 Bill will diet and exercise if his doctor approves 158
Figure 147 Fruit flies like a banana 159
Figure 148 Mereologic and Taxonomic Categorization 165
Figure 149 Grandmother baked for an hour 165
Figure 150 Grandmother and the cake baked for an hour 166
Figure 151 Banggun ganibarragu budin bangun gujarra 167
Figure 152 The ham wants the bill 168
Figure 153 The policy backs the track 171
Figure 154 Policy back on track 171
Figure 155 Narayanan’s Metaphoric Mappings 171
Figure 156 Noun Mappings 172
Figure 157 Ja antwortet der Löwe 174
Figure 158 The ham wants the bill 179
Figure 159 Translation Into an Interlingua 193
Figure 160 Language in Autism 197
Figure 161 The Reticular Formation in the Midbrain 202
Chapter 1
Introduction
“The strategy I’m adopting here is to build more lamps.”
Ray Jackendoff
Science fiction often offers us a glimpse into what the human mind considers doable, sometimes centuries before the details of developing the necessary technology are worked out.1 One of the common characters in science fiction has been the robot capable of carrying on a conversation with us, the computer capable of processing and using natural language: Asimov's robots, HAL in 2001, the Universal Translator on the Enterprise. Even though we struggle with how to accomplish this goal, we still believe in our hearts that computers will eventually have the ability to process natural language.

The brain is currently our only working model2 for language understanding. When Broca published his first brain study in 1865, he postulated that language was based in the left anterior temporal lobe. In 1874 Wernicke discovered that the left posterior temporal lobe was also involved in language; Broca's area was the seat of grammar and Wernicke's area the source of the lexicon. Language models of that time followed this lead and assumed
1 As long as science has been around, futuristic thinkers have recorded "what might be." That Leonardo da Vinci could conceive of ideas 400-500 years before the necessary science and engineering were available makes it less surprising that writers also spoke of future technology (though the term "science fiction" was only coined in 1954): Cyrano de Bergerac's "Voyage to the Moon" (1649) was 320 years before our voyage to the moon, Margaret Cavendish's "Description of a New World, Called the Blazing World" in 1666 was 340 years ago, and of course Mary Shelley's Frankenstein dates back to 1818, and will be 200 years old in a dozen years.
2 A model in the sense of "a thing that serves as a pattern" rather than as "a set of plans."
that a grammar combined with a lexicon could describe language completely. In the context-free world of computer languages this model might have worked, but when it was applied to explaining natural languages the results were disappointing.
The nature and structure of language is not well understood. Attempts to understand language through defining a grammar go back over 2000 years, but these attempts have only been partially successful (Manning & Schütze, 1999). As Edward Sapir (1971) said, "All grammars leak." Even then, the grammaticality of a sentence, the judgment of whether it is structurally well-formed, does not guarantee that the sentence carries any meaning3 (Huck & Goldsmith, 1995); language communication is not the exchange of symbolic expressions; it is the successful interpretation of the speaker's communicative intent (Green, 1996).

The field of Generative Semantics, which attempted to define a path from external language form to meaning, grew out of this concern. To the Generative Semanticists, semantics initially meant logic, and early attempts to create semantic language models used first order predicate calculus. However, logic-based language models failed miserably in dealing with the extensive occurrence of figurative language (Lakoff, 1987), and the formal semantic theories did not address the use of words in novel contexts (Pustejovsky, 1998). The field of objectivism also wanted to avoid the use of figurative language to grasp the world, but it was not possible:
3 In 1957, Chomsky created a sample sentence, colorless green ideas sleep furiously, whose grammar is correct but which has no meaning.
Trang 19An attempt to avoid figurative language became closely allied to the realist ideology of objectivism Language and reality, thought and language, and form and content are regarded by realists as separate,
or at least as separable Realists favor the use of the 'clearest', most 'transparent' language for the accurate and truthful description of 'facts' However, language isn't 'glass' (as the metaphorical refer- ences to clarity and transparency suggest), and it is unavoidably implicated in the construction of the world as we know it Banishing metaphor is an impossible task since it is central to language (Chan- dler, 2001).
Figurative language, or tropes, are often treated as though they are anomalous language forms, but they are instead an integral part of language (Kittay, 1987). Because the figurative use of language is so ubiquitous, Jonathan Culler referred to tropes as "a system, indeed the system, by which the mind comes to grasp the world conceptually in language" (Chandler, 2001).
Giambattista Vico (1668-1744) was the first to identify the four basic tropes as metaphor, metonymy, synecdoche, and irony (Chandler, 2001). Metaphor is connecting a thing to something else that has a similarity to it but is unrelated:4 leeway has a literal meaning of the lateral drift of a vessel due to the force of the wind; freedom becomes connected to lateral drift through metaphor, and the meaning of leeway is expanded to include room for freedom. Metonymy is the use of the name of a thing for something else that has a thematic association5 with it: the White House said means the President said. Synecdoche is the use of the name of a thing for something else that has a taxonomic or mereologic6 association with it: Did you see the new wheels I got means did you see the
4 Metaphor is often used as an all-inclusive term for figurative expressions, and likewise metonymy is often used to include both metonymy and synecdoche. Each of these terms will only be used here in their most restrictive sense.
5 A thematic relationship is between things that occur in the same place and at the same time, the connection between the dog, his leash, and his food dish.
6 A mereologic association is a part/whole relationship.
new car I got. Irony is the use of an opposite for the intended meaning: This is a fine situation you have gotten us into means this is a bad situation you have gotten us into.
The work being described is a computer-based model that deals with the extensive occurrence of tropes in natural language. The model is limited in its scope to the formation of structures to represent nouns, and the new ontologies that can better represent the figurative use of those nouns. These structures are based on how the brain acquires language at a functional level,7 how the basis of that language is conceptual and metaphoric, and how the creation of perceptual, thematic, and metaphoric categorization occurs at an early stage of language development, continues throughout language acquisition, and is integral to language use.
Issues
Before creating an underlying structure to represent the nouns, there are several concerns that need to be addressed:
• Can something "not in the real world" be represented in a classic taxonomy?
• Can there be more than one conceptual system?
• Can an interlingua8 represent concepts independent of language?
Each of these issues will be discussed in turn.
Can Something “Not in the Real World” be Represented in a Classic Taxonomy?
The meanings of words have to be reflected in the structures created to represent them (Pustejovsky, 1998). If these structures are limited to classic taxonomies, then only objects
7 Brain descriptions are at a functional level for each area of the brain rather than at a lower, cellular level, with the exception of the cerebellum, which is broken down in enough detail to show the difference in paths between conditioned and unconditioned stimuli.
8 An interlingua is a layer below the surface level of a language, used to represent concepts for translation into another language See Appendix A for more details on interlingua.
that are part of objective reality are categorizable. Langacker (1990) states that meaning does not reside in objective reality, nor is it represented in terms of truth conditions, but is within the realm of cognitive processing. So, the interpretations of some "things" are based on perceptions rather than being objective and "in the real world."
An extreme example of how ontologically relevant entities depend on our perceptive and cognitive structures is the notion of constellation: is a constellation a genuine thing, different from a collection of stars? This is not so clear at first sight. But, if we distinguish between stars and their specific arrangements, we are able to understand how constellations may be considered as cognitive things dependent on states of mind (Gangemi et al., 2001).
When languages such as Dyirbal (an aboriginal language of Australia) are examined, the underlying cultural belief that some things possess a spirit will become crucial to understanding why the Dyirbal language is structured as it is, and this will have to be reflected in the structures created to represent concepts in Dyirbal (there is additional discussion of mythos-based categorization in Dyirbal on pg. 44).
Can There be More than One Conceptual System?
Early attempts to create semantic-based translation models used first order predicate calculus. Lakoff (1987) states that predicate calculus "assumes an a priori view of categorization, namely, the classical theory that categories are sets defined by common properties of objects. Such an assumption makes it impossible to ask, as an empirical question, whether the classical view of categorization is correct." Classical categorization has become the background assumption, the unquestioned truth, at the basis of all other disciplines (Lakoff, 1987).
Classical categorization, a view which has existed from the time of Aristotle, is based on categories as abstract containers.9 Objects were either inside the “container” or outside, and all the objects inside shared the properties of the category equally (Lakoff, 1987). Since the properties defining the category are shared by all of the members in classical theory, if classical categorization was complete, then no member of a category would have any kind of special status (Lakoff, 1987). Without this special status, languages not using a classical categorization, such as Dyirbal, appear unintelligible and untranslatable.

Two important claims are made when trying to understand and translate conceptual systems that may not use classical categorization. The first claim is, if two languages have radically different conceptual systems, then understanding and translation between them is impossible. All conceptual systems vary as to the “fineness of grain” of the concepts they contain; the differences have to be fundamental, such as how space and time are dealt with (Lakoff, 1987). Minor domain-of-experience vocabulary is not a problem, as with the claimed large number of words for snow in Eskimo;10 all languages have this phenomenon (Lakoff, 1987), with seafaring terms, or technology, or even linguistics.
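The contrast between classical, all-or-nothing categorization and graded membership with respect to a prototype can be sketched in code. This is a toy illustration, not part of the dissertation's model; the category name, feature sets, and examples are invented:

```python
# Classical categorization: a category is a set of necessary-and-sufficient
# properties; an object is either in or out, and all members have equal status.
def classical_member(obj_props, category_props):
    return category_props <= obj_props  # subset test: all-or-nothing

# Graded (prototype) categorization: membership is a matter of degree,
# measured here as overlap with a prototype's properties, so members closest
# to the prototype have special (central) status.
def graded_membership(obj_props, prototype_props):
    if not prototype_props:
        return 0.0
    return len(obj_props & prototype_props) / len(prototype_props)

BIRD = {"lays_eggs", "has_feathers", "flies", "sings"}
robin = {"lays_eggs", "has_feathers", "flies", "sings"}
penguin = {"lays_eggs", "has_feathers", "swims"}

# Classically, the penguin is simply "not a bird" under this definition;
# under graded membership it is a bird, just a less central one.
```

Under the classical test no member can be a "better" bird than another, which is exactly the missing special status discussed above.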
The second claim is, if two languages have different conceptual systems, then learning
the other language is not possible, and if people can learn radically different languages,
_
9 A category seen as a “container” is itself a metaphor (Lakoff, 1987).
10 This is a story that has grown with retelling, starting at “4 words for snow in Eskimo” (Boas, 1911) and growing to “100 words for snow” (New York Times, 1984), with other sources going as high as 200 and 400. English has snow, sleet, slush, blizzard, pack, powder, etc. (and facetiously, water for melted snow, rain for melted snow falling from the sky, steam for gaseous snow, and flood for fast-moving melted snow coming out of the mountains in the spring). The actual four Inuit/Yupik words are aput (snow on the ground), qana (falling snow), piqsirpoq (drifting snow), and qimuqsuq (snow drift) (Pullum, 1991).
then those languages could not have different conceptual systems (Lakoff, 1987). Since people have a general conceptualizing capability as well as an ability to express concepts metalinguistically, understanding different conceptual systems is clearly possible (Lakoff, 1987). Learning and translating between different conceptual systems should also be possible, as a single language can have multiple, and even incompatible, conceptual models (Lakoff, 1987). Because we deal with incompatible models every day, they move out of our awareness. When we get up in the morning, the sun rises (part of the folk model), even though a few hours later in science class, it is the earth that is rotating rather than the sun moving (the scientific model). A botanical model gives us a definition of fruit as the seed-bearing part of a plant used for food. Using this definition, tomatoes, eggplant, avocados, cucumbers, and zucchini are fruits. In everyday life, however, the folk classification of these foods is according to their use and flavor, and they are put into the category with vegetables.
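Coexisting, incompatible conceptual models can be represented by indexing a concept's category by model rather than assuming a single taxonomy. This is a hypothetical sketch; the model names and lexicon entries are invented for illustration and are not the dissertation's data structures:

```python
# A concept carries different categories under different conceptual models;
# choosing a category requires knowing which model the context invokes.
CATEGORIES = {
    "tomato":   {"botanical": "fruit", "folk": "vegetable"},
    "cucumber": {"botanical": "fruit", "folk": "vegetable"},
    "apple":    {"botanical": "fruit", "folk": "fruit"},
}

def categorize(concept, model):
    # Fall back to the folk model, the default in everyday speech,
    # when the requested model has no entry for this concept.
    return CATEGORIES[concept].get(model, CATEGORIES[concept]["folk"])
```

The point of the fallback is that everyday language defaults to the folk model unless the context (a botany lecture, a science class) explicitly invokes another.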
Can an Interlingua Represent Concepts Independent of Language?
An interlingua might be used for concepts such as colors, since the color spectrum can be broken down into subdivisions smaller than exist for the naming of colors in any of the languages. However, the interlingua could not be used by itself for doing a translation. Since the interlingua would only include things that are part of the physical world, the metaphoric use of color would be lost, along with any specific cultural information about how the color spectrum is divided.
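One minimal way to sketch such a color interlingua is to map each language's color terms onto shared wavelength subdivisions. The bands and term boundaries below are rough, invented values for illustration, not measured data:

```python
# An interlingua for color: fine-grained wavelength bands (nm) that each
# language's color terms map onto.  Boundaries here are illustrative only.
INTERLINGUA_BANDS = list(range(380, 750, 10))  # 10 nm subdivisions

ENGLISH = {"blue": (450, 495), "green": (495, 570)}
WELSH   = {"glas": (450, 570)}  # 'glas' traditionally spans blue into green

def terms_for(lexicon, wavelength_nm):
    # Return every term in the lexicon whose range covers this wavelength.
    return [t for t, (lo, hi) in lexicon.items() if lo <= wavelength_nm < hi]

# The same interlingua band gets different names in different languages,
# but the metaphoric uses of a color ("feeling blue") are not captured at all.
```

The sketch also shows the limitation noted above: nothing in the wavelength mapping can carry the metaphoric or cultural load of a color term.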
What we see as a single concept in English may not be so in another language, as
with the different American Sign Language (ASL) signs for run when referring to a
person running vs. an animal running, and with the German essen (to eat when applied to a person) and fressen (to eat when applied to an animal).
Problems would also occur in the French language, as compounds can have different categorization information from individual words: peau rouge, “skin red,” can be masculine even though peau is feminine (Figure 1), and cordon bleu, “ribbon blue,” can be feminine even though cordon is masculine (Abeillé, Clément, & Toussenel, 2003).

Figure 1. La Peau Rouge vs. Le Peau-rouge

But what has happened is not just a grammatical variation; a switching of categories has occurred. A person who is le peau-rouge would need more than a sunburn to be considered a member of that football team. Cordon bleu is not just a ribbon that is blue; when categorized, it is an award and not a sewing notion. When gender information is seen as part of categorization rather than as simply grammar, it can operate as a signal that an object might have changed categories.
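Treating grammatical gender as part of a word's category record, rather than as pure grammar, can be sketched as follows. This is a toy lexicon with invented entries and field names, not the dissertation's actual representation:

```python
# Grammatical gender stored as part of a word's category record.  When a
# compound's gender differs from its head noun's, that mismatch can signal
# that the compound has switched categories.
LEXICON = {
    "peau":        {"gender": "f", "category": "body_part"},
    "peau-rouge":  {"gender": "m", "category": "person"},
    "cordon":      {"gender": "m", "category": "sewing_notion"},
    "cordon bleu": {"gender": "f", "category": "award"},
}

def gender_signals_category_shift(compound, head):
    # A gender mismatch between compound and head noun flags a possible
    # category change worth checking in the lexicon.
    return LEXICON[compound]["gender"] != LEXICON[head]["gender"]
```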
An additional problem occurs with adjectives, where the color in red apple is different than that intended with red hair, and the amount of money referred to in expensive car
Limitations
The Autonomy Hypothesis and the Lexical Independence Hypothesis
The Autonomy Hypothesis states that syntactic analysis must be done without reference to semantics (Chomsky, 1957), and the Lexical Independence Hypothesis that “the meanings of words are independent of any grammatical constructions that the words occur in” (Lakoff, 1987). Both of these hypotheses are false; in natural language the grammar is not free of the meaning, and the meaning is not free of the grammar (Lakoff, 1987; Pustejovsky, 1998). Because this natural language computer model does not deal with most aspects of grammar, some meanings will be lost; because some aspects of grammar are irrevocably tied to meaning, they become part of the semantic structure. What is more significant, however, is that the proposed model is based on the deep structure11 of language being metaphorical instead of grammatical (Figure 2). While some meaning will be lost, the most important functions of language should not be lost if a grammar component is not implemented. This will be discussed in more detail in later chapters.
Figure 2. Metaphorical Deep Structure. In the Proposed Model, metaphor forms the deep structure and grammar the surface structure; in the Chomskian Model, grammar forms the deep structure and metaphor the surface structure.
Delimitations
Pre- and Post-editing to Resolve Ambiguity
It is common for natural language systems to use some type of pre-editing, post-editing, or interactive intervention when problems or ambiguities arise. Pre-editing can involve the identification of proper nouns, the marking of grammatical categories, flagging or substituting unknown words, indicating embedded clauses, or even reformulating the input text into a “controlled language” (Hutchins, 1992). Post-editing usually involves identifying unresolved ambiguity, correcting the output, or making corrections such as with inconsistent gender. An interactive approach is one that resolves syntactic and semantic ambiguities during the translation process (Hutchins, 1992). Any pre- or post-editing or interactive intervention in the current system will be identified.

Pragmatic Ambiguity
Pragmatic ambiguity is created when there is insufficient information available about the context in which an utterance occurs (Figure 3). The pragmatic ambiguity of a
He shot a few bucks.12
She ate the hamburger with relish.
Figure 3 Pragmatic Ambiguity
sentence in isolation can be resolved by the computer by inquiring about the missing context interactively, in order to determine the larger context,13 or by doing what most people do — apply stereotypes. This means that he shot a few bucks and she shot a few bucks would be interpreted differently; the stereotype would be used to disambiguate the sentence.14 In she ate the hamburger with relish, the pragmatic ambiguity would either have to be left unresolved or the disambiguation would have to be purely a matter of statistical probability, that relish is associated with hamburger more often than it is with eat. This could be implemented using statistics in the language model. This does not guarantee that the correct conclusion has been reached, as is the problem with the pragmatic ambiguity that occurs as part of communication even with native users of a language.15

_
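The statistical disambiguation described above can be sketched as a simple co-occurrence lookup. The counts below are invented; a real system would estimate them from a corpus:

```python
# Resolving "she ate the hamburger with relish" by co-occurrence counts:
# attach "relish" to whichever candidate word it appears with more often.
COOCCURRENCE = {
    ("hamburger", "relish"): 57,   # relish as a condiment on hamburgers
    ("eat", "relish"): 12,         # relish as enjoyment of the eating
}

def attach(ambiguous_word, candidates):
    # Pick the candidate with the highest co-occurrence count.  Note that a
    # statistical winner is not a guarantee of the correct reading.
    return max(candidates,
               key=lambda c: COOCCURRENCE.get((c, ambiguous_word), 0))
```

As the text notes, this yields only the more probable reading, not a guaranteed interpretation of the speaker's intent.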
12 This is a rewording of an example used by Pustejovsky (1998).
13 This context can include the general topic of conversation, the background of the speakers, the languages, and the cultures.
14 This is not a statement of what should be, but of what is.
15 Humor has always made use of disambiguation strategies by manipulating the tone of voice and timing of pauses in order to lead the audience to interpret an utterance incorrectly A famous female comedienne would say, “I like to shop ” followed by a pause that was long enough for the audience to conclude that they had interpreted the sentence correctly, then she would continue, “lift and ”
Chapter 2 Relevance, Significance, and Brief Review of the Literature
Discovery consists of looking at the same thing as everyone else and thinking something different.
Albert Szent-Györgyi
Introduction
Creating a language model immediately brings out proponents of opposing camps; some very great names throughout history have had radically different opinions on the nature of language, how it is acquired, and what the brain’s involvement is in this process. Since the computer model developed in this research to translate the figurative language of tropes is based on how the brain acquires language, relevant literature from neurology, linguistics, and child language acquisition is included as needed to supply the rationale for the resulting choices that will be made in the acceptance, rejection, or modification of components from previous designs of computer models for natural language.
The first attempts to create a computer model for natural language date back to the late 1940s, but assumptions made about the nature of language led to failure in these initial attempts. Two very significant missing pieces have been the crucial role of categorization and the equally crucial role of metaphor in language acquisition by the brain. Including these missing pieces will allow the computer model to better handle the metaphorical language of tropes.
Early Attempts at Machine Translation of Natural Language
Research into the machine translation of natural language can trace its origins back to 1947, to correspondence between Warren Weaver of the Rockefeller Foundation and Norbert Wiener (Hutchins, 1998). Based on the successes of wartime code-breaking and the advances in information theory, Weaver wrote a 1949 memorandum suggesting various proposals for research. Universities in the U.S. responded, and by 1954 the first public demonstration of a computer used for translation occurred (Hutchins, 1998). In a collaboration between Georgetown University and IBM, an IBM 701 machine, programmed with six grammar rules and a vocabulary of 250 words, translated several sentences from Russian to English. Leon Dostert, the head of the project, believed at the time that a specialized computer for language translation was three to five years off (Plumb, 1954). When this had not materialized by 1966, the U.S. Government’s Automatic Language Processing Advisory Committee (ALPAC) concluded that “there is no immediate or predictable prospect of useful machine translation,” as machine translation was twice as expensive as human translation, slower, and less accurate (Hutchins, 1998). Yehoshua Bar Hillel, who held the first full-time post in machine translation at Massachusetts Institute of Technology (MIT) in 1951, and who organized the first conference on machine translation (Hutchins, 1995), made a pronouncement that might seem obvious now: that machines must be able to process meaning in order to translate language (Nirenburg, 1997). While Bar Hillel felt that meaning should be based on logic, he felt that designing a logical system for translation was not an obtainable goal (Nirenburg, 1997):
Trang 30Expert human translators use their background knowledge, mostly subconsciously, in order to resolve syntactical and semantical ambi- guities which machines will have either to leave unresolved or re- solve by some “mechanical” rule which will every so often result in
a wrong translation (Bar Hillel, 1964).
Let us be satisfied with a machine output which will every so often be neither unique nor smooth, which every so often will present the post- editor with a multiplicity of renderings among which he will have to take his choice, or with a text which, if it is unique, will not be gram- matical Let the machine provide the post-editor with all pos- sible help, present him with as many possible renderings as he can digest without becoming confused by the embarrass de richesse (Bar Hillel, 1964).
This was, in fact, what many machine translation projects were forced to do
Bar Hillel collaborated with philosopher and semanticist Rudolf Carnap,16 and linguist Noam Chomsky (Nirenburg, 1997; Wikipedia, 2005). Chomsky had joined the staff of MIT in 1955, a few years after Bar Hillel, and was involved with the machine translation project for two years (Nirenburg, 1997; Wikipedia, 2005). One of the fateful conclusions Chomsky had reached was that the meaning of a sentence was dependent to a significant degree on its grammatical analysis (Chomsky, 1957; Huck and Goldsmith, 1995); Chomsky also came to believe that the brain had a Language Acquisition Device (LAD), that the brain already possessed an innate grammar, and that this grammar represented the deep structure17 of language (Huck & Goldsmith, 1995). The work of Broca, Wernicke, Carnap and Chomsky had set the stage for the direction machine translation was going to go for the next several decades.18
_
16 Carnap was a leading figure in logical positivism (the logical analysis of scientific knowledge). He also felt that language consists of a system of formation rules that may not at any point make reference to semantics.
17 Chomsky divided language into a surface structure and a deep structure.
18 Appendix A gives more history of computers and natural language.
Is There a Language Acquisition Device?
Chomsky’s belief in a Language Acquisition Device (LAD) was formed from two significant hypotheses of his: that there is a Universal Grammar innate to the brain, and that syntactic analysis must be done without reference to semantics. These two hypotheses are called, respectively, the innateness hypothesis and the autonomy hypothesis. Unfortunately, research in neurology and language acquisition was not supporting Chomsky’s belief in a grammatical LAD (Aboitiz & García, 1997). If grammar was truly innate, and the foundation on which the rest of language is built, it should be the first aspect of language to show itself; instead, grammar turned out to be one of the last aspects of language to be learned and used, not occurring until the third year of language development. As Claparède states in the preface to Piaget’s The Language and Thought of the Child, in looking at thought and language in the child, we have incorrectly applied the “mold and pattern of the adult mind” (Piaget, 1926).
Not all linguists agreed with Chomsky; there were others, such as George Lakoff, whose beliefs were in sharp contrast (Huck & Goldsmith, 1995). Lakoff, who was originator of the term generative semantics, received his undergraduate degrees in Mathematics and English Literature from MIT in 1962, and a Ph.D. in Linguistics from Indiana University in 1965. At Berkeley in 1975, Lakoff organized a Linguistics Institute funded by an NSF grant: Rosch gave her first lecture on basic level categories; Talmy gave his first lecture on spatial relations; Fillmore gave his first lecture on frame semantics; and Kay and MacDaniel presented their work on the neurobiology of color categorization. Twenty years after Chomsky had set the stage for the syntactic direction in machine translation, Lakoff was setting the stage for a semantic underpinning for language representation.

Rather than viewing grammar as being innate, Lakoff sees the ability to conceptualize and form cognitive models as being that innate, biological basis on which language is built (Lakoff, 1987). These cognitive models, how the child thinks about things, are used in forming the categories that tropes are based on.
The Development of Tropes
Vico hypothesized a historical sequence for the development of the four tropes: from metaphor to metonymy to synecdoche to irony, and Hayden White compared the development of the tropes to Piaget’s stages of cognitive development (White, 1978; Chandler, 2001). Piaget himself was interested in connections between historical systems and cognitive development, as was Carl Jung (Gelernter, 1994); the idea of ontogeny recapitulates phylogeny may have eventually fallen by the wayside in biology, but it continued as a useful model for understanding of linguistic development (Gelernter, 1994). Even though Chandler (2001) saw the comparison of the tropes to Piaget’s levels of development as a “speculative analogy,” combining Vico’s sequence of tropes and White’s idea of comparing tropes to Piaget’s stages of cognitive development does in fact give an accurate order and timing of these tropes during language development (Table 1).

As shown in column 3 of Table 1, language acquisition first goes through a perceptual conceptualization and lexicalization step, then a thematic categorization, a metaphoric categorization, a mereologic categorization, and then finally a taxonomic categorization. It is only after these initial stages, and three years into the acquisition of language, that grammar begins to become part of the language equation. The remaining three tropes then complete the basic acquisition of language. These tropes are so pervasive in language they can be used to identify the level of language acquisition more accurately than a simple measure of vocabulary size19 (Vygotsky, 1934; Lantolf, 2005).
Piagetian Stage        Age      Child Starts Acquiring          Primary Brain Lobe
Sensorimotor           0 - 2    Language Stimuli                Left cerebellum
                                Perceptual Conceptualization    Motor cortex and temporal lobes
                                Lexicalization                  Right anterior temporal lobe
                                Thematic Categorization         Right posterior temporal lobe
                                Metaphoric Categorization       Right posterior temporal lobe
Pre-operational        2 - 6    Mereologic Categorization       Left posterior temporal lobe
                                Taxonomic Categorization        Left posterior temporal lobe
                                Grammar                         Left anterior temporal lobe
                                Metonymy                        Right posterior temporal lobe
Concrete Operations    6 - 12   Synecdoche                      Left posterior temporal lobe
Formal Operations      12 - 18  Irony                           Right anterior temporal lobe

Table 1. The Development of Tropes
What research in neurology is showing (column 4 of Table 1) is that initial language acquisition primarily occurs in the motor cortex, the right anterior temporal lobe and the right posterior temporal lobe during the first couple of years, followed by the left posterior temporal lobe and the left anterior temporal lobe in subsequent years.
_
19 When the last trope, irony, is acquired, normal language acquisition has been successful.
When language learning does not occur in the child, as happens with autism,20 then we have additional information about how the brain learns and processes language.
The following sections will trace the brain’s acquisition of language during those initial years, and supply the rationale for the design of the proposed natural language model. The model is intended to represent the foundations of language that must be acquired before a grammar/lexicon model can be applied. In doing so, it will supply the necessary non-grammatical foundations for the interpretation of tropes as well.

The descriptions of the order of linguistic acquisition in the various parts of the brain have been simplified. The areas of the brain not being described at each level are not sitting dormant while one area acquires language; however, the descriptions do show where the greatest involvement is at that point of language development.21
Language Stimuli
From birth to age six months the language mechanism in the brain primarily involves the cerebellum, the motor cortex, and the right anterior temporal lobe. Since the left cerebellum handles input for the right cerebrum, the language input for the right anterior temporal lobe comes from the left cerebellum (Figure 4).
_
20 Autism is a neurological disorder that affects language and communication. Half of all autistic children do not acquire any language. Irony forms the dividing line between the highest level of language development that can occur in autism and normal language acquisition. Autistic children who acquire language never make it as far as this last milestone. See Appendix B for more details.
21 WADA tests showed that 95-98% of right-handed people are left brain dominant for the grammatical aspects of speech, as well as 69% of left-handers. 18% of the left-handers are right brain dominant for these aspects of speech, with the remaining 13% having a bilateral dominance. (The WADA test uses sodium amytal, injected into either the right or left carotid artery, to put one hemisphere of the brain to sleep. The language in the other hemisphere can then be assessed in isolation.) (Caplan, 1998)
Figure 4. Language Stimuli (diagram labels: anterior temporal lobe, motor cortex, language stimuli)
The cerebellum is involved in the preprocessing of sensory data, the integration of visual, auditory, vestibular and somatosensory input, and the acquisition and maintenance of classical conditioning.22 Input to the cerebellar cortex is via mossy fibres and climbing fibres (Figure 5). Mossy fibres (via the parallel fibres) connect the pontine nuclei to the Purkinje cells of the cerebellum and provide a graded response, with many mossy fibres needed to cause one of the cerebellum’s Purkinje cells to fire. The pontine nuclei carry the conditioned stimuli (CS) information. Climbing fibres, originating at the inferior olive, provide all-or-nothing firing, with one fibre firing a single Purkinje cell. The inferior olive carries the unconditioned stimuli (US) information (Figure 5).

Figure 5. The Cerebellum (diagram labels: inferior olive, climbing fibre, mossy fibres, pontine nuclei, Purkinje cell, parallel fibres; input to the cerebellum: conditioned stimuli (CS) and unconditioned stimuli (US); output from the cerebellum)

_
22 Classical conditioning is explained in Appendix C.
Output from the cerebellum is controlled by the Purkinje cells, which inhibit the firing of the deep nuclei in the midbrain.23 In autism, there is a 41% loss of Purkinje cells and the inferior olive is smaller than normal (Courchesne, 1988).
Lesions in the cerebellum are known to disrupt classical conditioning (Schmajuk, 1997). Eyeblink response is considered a hallmark of cerebellar function; it is a classically conditioned response, and when there is damage to the cerebellum it is abnormal. Because it signals damage, the presence of abnormal eyeblink response is considered evidence of cerebellar damage in autism (Belmonte & Carper, 1998). This cerebellar damage in autism prevents classically conditioned language learning from occurring normally.
Perceptual Conceptualization and Lexicalization
In the first three months of life infants are already capable of recognizing familiar voices (PSHC, 2004) and of attending closely to the sound of an unfamiliar voice (Bowen, 2004). At three to six months the infant enjoys music and rhythm (Bowen, 2004)
_
23 The midbrain is explained in Appendix D.
and is capable of responding to changes in tone of voice (PSHC, 2004). These are functions of the right anterior temporal lobe (Figure 6).
Figure 6. Tonal Rhythm and Lexicalization (diagram labels: tonal rhythm/lexicalization, language stimuli, functional definitions)
Basic-level sensory-motor conceptualization also develops, using the general shape and motor interaction to form the mental image of an object (Lakoff, 1987). The properties that define the object for the child are not inherent to the object but are in the interactions the child has with the object (Lakoff, 1987); so for the young child, the object we assign with a label of chair can be defined as something that is sat upon24 instead of solely in terms of having four legs, a seat and a back. When initial definitions are formed this way, the motor cortex is triggered (Figure 4). Basic-level perceptual categories will be discussed in more depth in that section, and the motor cortex will be discussed in more depth in the action verbs section.
_
24 This also means that a chair that is never sat upon, perhaps because it is broken, might not be categorized as a chair by the young child. Likewise, a front stoop that is sat upon might become categorized as a chair by the child.
At five to six months the infant starts babbling, imitating the tonal aspects of language with inflection, a rising and falling of pitch and rhythm which makes it sound like true speech (PSHC, 2004).

The tonal processing of utterances occurs before the processing of individual words (Piaget, 1926). As a result, the child will respond to phrases such as “pat-a-cake” and “wave bye-bye” (Bowen, 2004). Language at this point is being treated idiomatically by the child, and the lack of syntax is not allowing for any substitutions. For instance, the child’s understanding of the phrase bye-bye daddy does not mean the child is capable of making a substitution and understanding the phrase bye-bye mommy. Gardner (1983) makes an important point: “Just because a child’s output of language starts as individual words, and progresses to phrases, does not mean that the input of language is processing the same way.25 After all, the child has no concept of words when he starts, and language output is almost a year into the language learning process.”
The brain’s analysis of an utterance as possibly consisting of smaller units of meaning is called lexicalization. The process of lexicalization is accomplished by the brain through two means: sentence subtraction and ostensive definition. Sentence subtraction occurs when two almost identical utterances are compared by the brain, and the piece that differs becomes a separate semantic unit (Piaget, 1926). These semantic units are still not always as small as a word. This sentence subtraction will also eventually lead to grammar.
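Sentence subtraction over already-tokenized utterances can be sketched as stripping the shared prefix and suffix and returning the differing pieces as candidate semantic units. This is a simplification: the child's actual input is unsegmented sound, not word tokens, so the token split here stands in for whatever chunking the brain has already achieved:

```python
# Sentence subtraction: compare two nearly identical utterances and split
# off the piece that differs as a candidate semantic unit.
def sentence_subtraction(utt_a, utt_b):
    a, b = utt_a.split(), utt_b.split()
    # Strip the common prefix.
    i = 0
    while i < min(len(a), len(b)) and a[i] == b[i]:
        i += 1
    # Strip the common suffix (without re-consuming the prefix).
    j = 0
    while j < min(len(a), len(b)) - i and a[len(a) - 1 - j] == b[len(b) - 1 - j]:
        j += 1
    # The differing middles become separate candidate semantic units;
    # note they need not be single words.
    return (" ".join(a[i:len(a) - j]), " ".join(b[i:len(b) - j]))
```

The second test below shows the point made in the text: the extracted unit ("animal crackers") is a semantic chunk, not necessarily a single word.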
_
25 In this pre-word/pre-grammar stage, the goal of the brain is not to parse the sentence into individual words, but to identify the patterns that are occurring:
<Bill> <wants> <animal crackers>
<fruit flies> <like> <a banana>
The sentence may even remain unparsed, functioning as an idiom that cannot be divided without losing meaning: <time flies like an arrow>
Ostensive definition occurs when an object is pointed to and an utterance consisting of only a label is supplied (Markman, 1989). When young children hear these labels they are predisposed to assume that the label refers to the whole object rather than to its properties (Markman, 1989). In autism, pointing is not understood (Frith, 1989) and ostensive definitions may not occur. Because ostensive definition aids in sentence subtraction, the separation of utterances into words may also not occur.26
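The whole-object assumption in ostensive definition can be sketched as a labeling step that binds the label to the object as a whole rather than to any one of its properties. This is a toy representation; the property names and label are invented for the example:

```python
# Ostensive definition with the whole-object assumption: a pointed-at object
# plus a single-label utterance yields a lexicon entry for the whole object,
# never for an individual property like "furry".
def ostensive_define(lexicon, label, pointed_at_object):
    # pointed_at_object is a dict of the object's properties; the label is
    # attached to the object as a whole, not to any one property.
    lexicon[label] = pointed_at_object
    return lexicon

lex = ostensive_define({}, "dog", {"furry": True, "barks": True})
```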
From approximately six months old to a year, and if the tonal recognition and lexicalization have been working correctly, the child starts to respond to his name and to look at common objects when the names for these objects are spoken (PSHC, 2004). Between one and two years the spoken vocabulary increases to approximately 300 words (PSHC, 2004), and during the second year the vocabulary increases to almost 1000 words (floridaspeech, 2002). There is no grammar present yet, as there needs to be a critical mass of words in the vocabulary before the language mechanism starts involving the grammar areas of the brain.

If there is early damage to the right anterior temporal lobe, lexicalization will not develop correctly. In a French study by Lalande, lexicalization problems were seen in illiterate adults, where they ran together words that are separate in the written language, and also divided up words that are single words in the written language (Piaget, 1926).
_
26 … of the tone of voice, the autistic child does not “get” the ostensive definition and may not acquire language.
Pragmatic problems in conversational turn-taking can also occur with damage to the right anterior temporal lobe, as turn-taking requires attention to pauses in conversation, and interpreting the meaning of those pauses. Speech becomes excessive and rambling as a result.
In autism, if language does develop, lexicalization often does not occur at all, and the entire sentence will remain undivided, with no attention paid to the ostensive definitions or to pauses indicating separate semantic chunks. Echolalia results, where the autistic child uses entire sentences verbatim.27 Some autistic children have used chunks even larger than the sentence, sometimes paragraphs, or even short stories, repeating them verbatim in their attempt to communicate (Heffner, 2000).
In the adult, the right anterior temporal lobe specializes for the prosodic elements of speech: pitch, rhythm (duration), and stress (frequency, intensity, and timing) (Hooper, 2003). Prosody performs a chunking function with speech, distinguishing compound words from noun phrases (redcoat vs. red coat, backward vs. back ward, greenhouse vs. green house) (Morgan, 2003), distinguishing some nouns and verbs (REcord vs. reCORD),28 and declarative sentences from interrogative sentences (Hooper, 2003).
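The compound-vs-phrase chunking can be sketched with a toy stress notation, where capitals mark the element carrying primary stress. The notation is invented for the example; real prosodic analysis works on pitch, duration, and intensity rather than on spelling:

```python
# Prosodic chunking sketch: English compounds typically carry primary stress
# on the first element ("GREENhouse"), while adjective-noun phrases stress
# the second ("green HOUSE").  Capitals here stand in for measured stress.
def classify_by_stress(utterance):
    words = utterance.split()
    if words[0].isupper():
        return "compound"  # e.g. "GREEN house" -> greenhouse; "REcord" (noun)
    return "phrase"        # e.g. "green HOUSE" -> a house that is green
```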
When the right anterior temporal lobe develops normally but then is damaged in an adult, aprosodia, the inability to detect or use affect in speech, occurs; the emotional content of speech is lost. Speech has a flat affect and a monotonous intonation; stress on words is indicated with amplitude changes rather than with pitch and duration changes.
_
27 Seventy-five percent of the autistic children who do acquire language are echolalic.
28 These words are not ambiguous when spoken, so the chunking function is not performing any disambiguation. Examples requiring disambiguation, such as the box is brown vs. box the clothing, are disambiguated by another area of the brain.