As Hockett (1963:3) points out, statistical universals are no less important than unrestricted universals. One fundamental assumption of language universals research has to be that the actually occurring human languages are representative qualitatively and quantitatively of what is possible in human language. On the other hand, it is dangerous in many cases to assume that because a particular property has not been observed in an actually occurring human language, then it is in principle impossible. Many of the rarer properties that we now know about are apparently found in geographically restricted areas, rather than as isolated occurrences spread randomly across the world. An excellent example is the restriction of object-initial languages (in our present knowledge) to Amazonia. Let us suppose that these languages had never been discovered (perhaps the tribes which spoke them might have died out in the last century); what then might have been our conclusions about the possible basic word orders of human languages? It would have been tempting to suggest as an unrestricted universal that no languages have object-initial basic word order. Indeed, before the object-initial languages were discovered, many linguists did posit such an unrestricted universal. But the known existence of the other orders SOV, SVO, VSO and VOS should have made us wary: these orders tell us that in principle (a) languages can operate with differing orders of constituents, (b) the position of the verb is not fixed, (c) subjects can appear both before and after objects. These principles of course also admit the possibility of OSV and OVS orders. In such cases, we should have done better to make the statistical claim. The general point seems to be that if it is possible to describe the observed properties of actually-occurring human languages in terms of a set of principles which also permit non-observed properties, we should not base unrestricted universals on the simple fact that these properties have not been observed. Rather, we should say that the probability of a language possessing them is low. Many unrestricted universals might better be reframed as statistical ones without their significance being thereby diminished: ultimately it must be hoped that the preponderance of one property over another can be shown not to be an accident of world-history, but correlated in a significant number of cases with such factors as the nature of the human cognitive system, the nature of language as a communicative system, or the principles which govern linguistic change. The same criticisms which apply to unrestricted universals can also be levelled against the third kind of universal proposed in Greenberg, Osgood and Jenkins's schema. These take the form:

(i′) For all x, if x is a language, then if x has property P, x has property Q

Such a statement is called a universal implication by Greenberg, Osgood and Jenkins, and an absolute implicational universal by Comrie (1981:19). It allows for the existence of three classes of language: (a) languages which have both P and Q, (b) languages which have neither P nor Q, and (c) languages which have Q but not P. It would be falsified only by the discovery of a language which had P but not Q. Such universals have played a major role in recent language universals research.
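The logic of such a universal implication lends itself to a mechanical check. The following minimal Python sketch is illustrative only: the three-language sample and its property assignments simply restate facts mentioned in this chapter, not a real survey. It sorts languages into the three permitted classes and flags any language that would falsify "if P then Q".

```python
def test_implication(sample, p, q):
    """Classify languages for the universal 'if a language has P, it has Q'.

    `sample` maps language names to sets of properties; `p` and `q` are
    property labels. The universal permits three classes of language
    (P and Q, neither, Q without P) and is falsified by any language
    that has P but lacks Q.
    """
    classes = {"P and Q": [], "neither P nor Q": [], "Q but not P": []}
    counterexamples = []
    for lang, props in sample.items():
        has_p, has_q = p in props, q in props
        if has_p and has_q:
            classes["P and Q"].append(lang)
        elif has_q and not has_p:
            classes["Q but not P"].append(lang)
        elif not has_p and not has_q:
            classes["neither P nor Q"].append(lang)
        else:  # has P but not Q: the falsifying case
            counterexamples.append(lang)
    return classes, counterexamples


# Toy sample; the property assignments echo statements made in the text.
sample = {
    "Welsh":    {"VSO", "prepositions"},
    "Japanese": {"SOV", "postpositions"},
    "English":  {"SVO", "prepositions"},
}
classes, falsifiers = test_implication(sample, "VSO", "prepositions")
print(classes)
print("falsified" if falsifiers else "not falsified", falsifiers)
```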
As a phonological example of a universal implication, we can cite Ferguson's (1963:46) claim that in a given language the number of nasal vowel phonemes is never greater than the number of non-nasal vowel phonemes. In the form (i′), this would read: for all x, if x is a language, then if x has n nasal vowel phonemes, x has m non-nasal vowel phonemes (where m ≥ n). An example of a nasal vowel phoneme would be the segment /ã/ in the French word dent /dã/ 'tooth'. Two recent samples have not disconfirmed this universal. Crothers's (1978) survey of vowel systems, based on the Stanford Phonology Archive, worked with a sample of 209 languages of which 50 (24%) had nasal vowel systems. Of these 50, 22 had the same number of non-nasal vowels as nasal vowels (m=n) and 28 had more non-nasal vowels than nasal vowels (m>n). Ruhlen's (1978) sample of approximately 700 languages contained 155 (22%) with nasal vowel systems, of which 83 had the same number of non-nasal vowels as nasal vowels and 72 had more non-nasal vowels than nasal vowels. No languages in either sample had more nasal vowels than non-nasal vowels (n > m). A grammatical example of a claimed absolute implicational universal is Greenberg's (1963:88) word order universal: languages with dominant VSO order are always prepositional. Prepositions are words like English in: they precede the noun phrases which they govern, as in in Tokyo. Postpositions, on the other hand, follow the noun phrase they govern, as in Japanese Tokyo ni 'in Tokyo'. In Greenberg's 30-language sample there are 6 languages with dominant VSO order (Berber, Hebrew, Maori, Masai, Welsh and Zapotec), and all of these have prepositions and not postpositions. On the other hand the 13 SVO languages divide into 10 with prepositions, as in English, and 3 with postpositions (Finnish, Guarani and Songhai), while the 11 SOV languages are exclusively postpositional. In fact, however, Greenberg learnt of a possible exception to his universal after the Dobbs Ferry conference and just in time for it to be included in an additional note to his paper. The language in question was the Uto-Aztecan language Papago, which was thought to be VSO and postpositional. The status of Papago both as a postpositional language and as a VSO language has since been questioned (see Comrie 1981:28 and the reference in Payne, D. 1986:462), but was included as such in the major survey of word order universals by Hawkins (1983) which used a sample of 336 languages. In this sample there were a total of 52 VSO and VOS languages, which Hawkins groups together as V-1 (Verb First). Papago is the only one claimed to have postpositions: the remaining 51 are all prepositional. Is it possible to maintain that there are any genuine universal implications? As Smith (1981) points out, one has the strong impression that exceptions to them will not be a great surprise. Given that a language can in principle use dominant word order VSO, and given that a language can in principle use postpositions, the combination of the two in a single language might in principle be expected to occur. In fact, regardless of the status of Papago, the combination of V-1 and postpositions has recently been argued to occur in a number of Amazonian languages, namely Yagua, Caquinte, Amuesha, Taushiro and Guajajara (Payne, D. 1986; Derbyshire 1987). What is significant is the preponderance of V-1 and prepositional languages over V-1 and postpositional languages.
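Ferguson's claim can likewise be checked mechanically for any single inventory. The sketch below assumes a toy, French-like vowel inventory rather than data from Crothers's or Ruhlen's samples; the word-order implication is taken up again immediately below.

```python
def ferguson_ok(oral_vowels, nasal_vowels):
    """Ferguson (1963): no language has more nasal than oral vowel phonemes.

    Returns True when m >= n, where m is the number of oral (non-nasal)
    vowel phonemes and n the number of nasal vowel phonemes.
    """
    return len(oral_vowels) >= len(nasal_vowels)


# Schematic French-like inventory (not a full phonological analysis).
oral = {"i", "e", "ɛ", "a", "ɑ", "o", "ɔ", "u", "y", "ø", "œ", "ə"}
nasal = {"ɛ̃", "ɑ̃", "ɔ̃", "œ̃"}
print(ferguson_ok(oral, nasal))   # True: m = 12 >= n = 4
```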
We need therefore to reformulate this particular universal implication, and probably many others, in statistical terms. This leads us to the fourth instantiation of schema (i), called a statistical correlation by Greenberg, Osgood and Jenkins, and an implicational tendency by Comrie. Such universals take the form (i″):

(i″) For all x, if x is a language, then if x has property P, the probability that it has property Q is greater than the probability that it has property R

In our example, the property P is the property of being dominantly V-1, the property Q is the property of using prepositions, and the property R is the property of using postpositions. The four instantiations of schema (i) given above are the main framework within which research into Greenbergian universals takes place. Greenberg, Osgood and Jenkins do however also suggest two more types of synchronic universal, as well as the general form which must be taken by universal statements of linguistic change (diachronic universals). The first of the two types of synchronic universal is called a restricted equivalence. It takes the form (i″′):

(i″′) For all x, if x is a language, then if x has property P, it has property Q, and vice versa

Such a statement is easily seen to be equivalent to two statements of type (i′). The example given is that if a language has a lateral click, it always has a dental click, and vice versa. Since clicks are known only in a very restricted set of languages in southern Africa, this statement has limited import. The difficulty in finding genuine cases of restricted equivalence is probably insurmountable: even in the case of the statement about clicks we ought to be wary, since there is no obvious reason why a language should not have one type of click without the other. On the other hand, there might be grounds for postulating a statistical version of (i″′), which would be equivalent to two statements of type (i″), if we could find two properties which mutually implicated each other to a significant extent. Note that we could not use the properties V-1 and prepositional, since although the majority of V-1 languages are prepositional, the majority of prepositional languages are not V-1 (they are SVO). But the property of having dominant SOV order and the property of using postpositions rather than prepositions do seem to provide an example. Out of 174 SOV languages in Hawkins's (1983) sample, 162 are postpositional and only 12 are prepositional. Out of the 188 postpositional languages in the same sample, 162 have dominant order SOV and 25 have other orders. Following Comrie's lead in calling non-absolute universals tendencies, we might call such a universal a mutual implicational tendency. The logical type of such a universal is clear, however: it is simply a combination of two implicational tendencies (which incidentally need not involve the same numerical probabilities). The second extra type of synchronic universal is what Greenberg, Osgood and Jenkins call a universal frequency distribution. What they seem to have in mind are universals in which it is possible to make a measurement of a certain property across all languages (for example, the degree of redundancy in the information theory sense) and get a result which shows a statistical distribution around a mean. The statement of the statistical properties of the distribution (its mean, standard deviations etc.) would then be a valid universal fact.
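A minimal calculation over the figures just quoted from Hawkins's (1983) sample makes both directions of the mutual implicational tendency explicit; the code simply restates those counts, and the probability notation is an informal shorthand.

```python
# Figures quoted in the text from Hawkins's (1983) sample of 336 languages.
sov_total, sov_postpositional = 174, 162
postpositional_total, postpositional_sov = 188, 162

# Implicational tendency SOV -> postpositions.
print(f"P(postpositional | SOV) ≈ {sov_postpositional / sov_total:.2f}")              # ≈ 0.93
# The reverse tendency, postpositions -> SOV.
print(f"P(SOV | postpositional) ≈ {postpositional_sov / postpositional_total:.2f}")   # ≈ 0.86
```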
Comrie (1981:22), having avoided the use of the term statistical for universals of types (i′) and (i″), is able to call such a universal a statistical universal. Finally, Greenberg, Osgood and Jenkins's general formula for diachronic universals is given as (ii):

(ii) For all x and all y where x is an earlier and y a later stage of the same language, then…

An example of such a universal would be Ferguson's (1963:46) claim that nasal vowel phonemes, except in cases of borrowing and analogy, always result from the loss of a nasal consonant phoneme. This can be illustrated by the development of the nasal vowel phoneme /ɛ̃/ in French (Harris 1987:216): Latin fin-em (end-acc.s) developed first into [fin] with the loss of the accusative singular ending, then the vowel was allophonically nasalised to give [fĩn] and lowered to give [fɛ̃n]. Subsequently the loss of the nasal consonant, giving modern French [fɛ̃] (spelt fin), led to the creation of a nasal vowel phoneme. One feature of diachronic universals stressed by Greenberg, Osgood and Jenkins is that, apart from generalities like 'all languages change', they are invariably probabilistic. No-one can say with certainty that a particular property of an earlier stage x of a language will definitely change into another property at a later stage y, or even say retrospectively that a particular property at a later stage y must have arisen from another property at an earlier stage x. Although the majority of nasal vowel phonemes do indeed seem to have arisen through the kind of mechanism illustrated above (Ruhlen 1978:230), one cannot predict that a sequence of oral vowel and nasal consonant will invariably be converted into a nasal vowel phoneme within a given time span: the Old English dative masculine pronoun him has survived in modern English unchanged, and not resulted in a form like /hĩ/. Nor can one say that all existing nasal vowel phonemes have arisen from a sequence of oral vowel and nasal consonant in a given language: other mechanisms include borrowing from one language into another via loan-words, as in French loans like Restaurant /restorã/ 'restaurant' in German, the emergence of nasalisation in the environment of glottalic sounds (Ruhlen 1978:231–2), and spreading as an areal feature. For instance, nasalised vowels are a characteristic areal feature of northern India, found in a wide range of languages from the Indo-Aryan family (except Sinhalese and Romani, which are outside the area, and some dialects of Marathi), in the isolate Burushaski, and in many languages from the Tibeto-Burman, Dardic and Munda families. Interestingly, some neighbouring Iranian dialects belong to the area, including eastern dialects of Pashto and Balochi, as does the northern Dravidian language, Kurukh, although the majority of both Iranian and Dravidian languages lack nasal vowels (Edel′man 1968:77, Masica 1976:88).

1.4 Explanation of Greenbergian universals

Given that Greenbergian universals are valid statements about the nature of the set of possible human languages, how can their validity be explained? The problem is an acute one, since an explanation based on the behaviour or knowledge of individual speakers of a language appears at first sight to be excluded. How can an individual speaker of a particular language conceivably have any knowledge of the distribution of basic word order patterns in the world's languages, or the distribution of oral and nasal vowels?
A child faced with learning an OVS language with nasal vowels, like the Amazonian language Apalai (Koehn and Koehn 1986), learns it just as naturally as a child learns an SVO language without nasal vowels, like English. Why then are there many more languages which resemble English with respect to these features than languages which resemble Apalai? Of particular interest in answering this kind of question is the relationship between diachronic and synchronic universals. Since diachronic universals are inevitably probabilistic in nature, nothing can be predicted with absolute certainty about the presence or absence of a given property in any individual language on the basis of a diachronic universal. On the other hand, individual synchronic universals, in particular the statistical ones ('tendencies' in Comrie's terminology), may be at least partially accounted for in terms of the probabilities of language change. Greenberg (1966) states the idea that Ferguson's (1963) synchronic universal concerning the relationship between the number of nasal and non-nasal vowel phonemes in a language, viz. that there are never more nasal than non-nasal vowel phonemes, is a straightforward consequence of Ferguson's (1963) diachronic universal concerning the development of nasal vowel phonemes from oral vowel and nasal consonant sequences: if there are five oral vowels in a language, then a maximum of five vowels are available for nasalisation in the environment of a nasal consonant. Of course, this cannot be the whole explanation: once the language has developed a symmetric system with five oral and five nasal vowels, what is to prevent a subsequent merger of one or more of the oral vowel phonemes leading to a state of affairs in which there are more nasal vowels than oral vowels? The fact that this development is conceivable ought to make us wary of thinking about the synchronic universal as an absolute one. However, the apparent rarity of the development can be formulated as another diachronic universal: in languages with both oral and nasal vowel systems, merger is at least as probable in the nasal vowel system as in the oral vowel system. In French, for example, the nasal vowel phoneme /œ̃/ is in the process of being absorbed by /ɛ̃/, whereas the oral vowel phoneme /œ/ (or /œ=ø/ for those speakers who do not distinguish between /œ/ and /ø/) is not being absorbed by /ε/ (Harris 1987:217). Such a diachronic account of the synchronic universal seems preferable to the notion that the class of nasal vowels is in some sense 'unnatural' or 'marked' with respect to the class of oral vowels, as is implied in classical generative phonology (Chomsky and Halle 1968:402–19). Indeed, to the extent that the notion of 'unnaturalness' or 'markedness' is merely a restatement of the synchronic universal governing the distribution of nasal and oral vowel phonemes across the world's languages, it suffers from the same problems of explanation. Of course, accounting for the synchronic universal in terms of the diachronic universal simply throws the problem of explanation one stage back; but explanation of linguistic change can eventually be based on the behaviour of individual speakers. In this particular example the development of nasal vowels from a sequence of oral vowel and nasal consonant eventually results from the anticipatory articulation of the nasality inherent in the nasal consonant, for which a psycholinguistic explanation seems plausible.
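The counting argument behind this diachronic explanation can be put as a toy simulation: if nasal vowel phonemes arise only by nasalising an oral vowel before a nasal consonant and then dropping that consonant, the nasal inventory can never outgrow the oral one. The word forms and the sound change below are schematic assumptions, not a reconstruction of any particular language.

```python
def nasalise_before_nasal_consonant(words, oral_vowels):
    """Derive a nasal vowel inventory by nasalising V before a nasal consonant
    and then dropping that consonant (the V + N > nasal vowel change above)."""
    nasal_vowels = set()
    new_words = []
    for word in words:
        out = []
        for i, seg in enumerate(word):
            nxt = word[i + 1] if i + 1 < len(word) else None
            if seg in oral_vowels and nxt in {"n", "m"}:
                out.append(seg + "\u0303")            # add nasalisation (combining tilde)
                nasal_vowels.add(seg + "\u0303")
            elif seg in {"n", "m"} and i > 0 and word[i - 1] in oral_vowels:
                continue                               # the nasal consonant is lost
            else:
                out.append(seg)
        new_words.append("".join(out))
    return new_words, nasal_vowels


oral = {"i", "e", "a", "o", "u"}                      # five oral vowel phonemes
words = [list("fin"), list("bon"), list("kam")]       # invented forms for illustration
derived, nasals = nasalise_before_nasal_consonant(words, oral)
print(derived, nasals)
# At most five nasal vowels can ever arise this way: each must come from one of
# the five oral vowels, so n <= m for as long as no oral vowel is lost by merger.
```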
The tendency for nasal vowels to merge more than oral vowels may be explicable if it can be demonstrated that nasal vowels possess a lesser degree of perceptual differentiation than the corresponding oral vowels. Much work remains to be done in this area, but ultimately we might hope that the explanation for the synchronic universal would reduce to factors which are involved in the pressure on individual speakers for linguistic change. These factors are essentially psycholinguistic in nature, relating to processes of production and perception. Similar issues arise in the attempt to explain the non-absolute grammatical universals. Why, for instance, is there a correlation between basic word order and the use of prepositions or postpositions? Attempts to explain this phenomenon in purely synchronic terms essentially rely on the idea of natural serialisation introduced by Vennemann (1973): it is claimed that the relationship between a verb and its object is similar to the relationship between an adposition (preposition or postposition) and its object, and that therefore languages will naturally express this relationship linearly in the same order. A language with Verb-Object order will tend to have Preposition-Object order, and a language with Object-Verb order will tend to have Object-Postposition order. The similarity can be stated in semantic terms, based on notions like operator and operand or function and argument (Keenan 1979), or in syntactic terms, based on the notion of government or case-assignment (Haider 1986). In languages with overt case systems, for example, the range of cases which can be assigned to objects by verbs is essentially the same as the range of cases governed by prepositions. However, we might be suspicious of such explanations on the grounds that a natural serialisation principle, like a markedness principle in phonology, does not seem to be something that an individual speaker of a language, or a child learning a language, in principle needs to know or indeed can know. For example, there seems to be no evidence that a child has any more difficulty in learning a language which fails to conform to the natural serialisation principle than in learning one which does. In addition, even if the explanation is accepted, there remains the problem of explaining just why the principles of linguistic change act in such a way that the majority of languages conform to the principle (Mallinson and Blake 1981:393). An alternative explanation of the correlation between basic word order and adposition type is therefore the diachronic one, likewise first proposed by Vennemann (1973), that verbs are a major historical source for adpositions. This can be seen for example in the development of the English preposition regarding from the verb regard: in a sentence like he made a speech regarding the new proposal, the form regarding seems to act as a high-style replacement for the preposition about. It cannot be treated synchronically as a participial form of the verb regard, since sentences like *his speech regarded the new proposal are unacceptable. If a language has verbs preceding objects, therefore, an automatic consequence of the development of adpositions from verbs will be that these adpositions will be prepositions. Of course, this cannot be the whole story, since prepositions can arise historically from other sources than verbs. Nevertheless, the diachronic explanation seems promising.
There are grounds for thinking that many of the functional explanations for grammatical universals are also best thought of in this diachronic sense. A functional explanation for a grammatical universal essentially aims to demonstrate that a system which observes that universal increases the ease with which the semantic content of an utterance can be recovered from its syntactic structure. Why should languages develop in such a way as to conform, in the majority, to a particular functional principle, unless it is the functional principle itself which motivates the change? As an illustration of this point, let us consider one of the best known functional explanations in syntax. This is Andersen's (1976) and Comrie's (1978, 1981) explanation for the distribution of case marking in simple intransitive and transitive sentences. Reverting to the use of S (=Subject) as a mnemonic for the single argument in intransitive sentences, and A (=Agent) and O (=Object) for the two arguments in a typical transitive sentence with an active verb, we have the following two basic sentence patterns (abstracting from considerations of word order):

(46) S V (intransitive)
(47) A O V (transitive)

In the intransitive construction, there is only a single argument, S, so this argument does not need to be distinguished in any way from the others. However, the two arguments A and O in the transitive sentence do need to be distinguished, otherwise ambiguity will result. Case marking is one way of achieving this, hence we would expect the most frequent case marking systems to be those in which A and O are assigned distinct cases: the case marking of S can then be identified either with A, giving the nominative-accusative system, or with O, giving the ergative-absolutive system. Examples of these two systems are given in (48)–(51), from Russian and Kurdish (Kurmanji dialect) respectively:

(48) on pad-ët
     he(nom.) fall-3s
     'He is falling'

(49) on sestr-u ljub-it
     he(nom.) sister-acc. love-3s
     'He loves his sister'

(50) ew ket
     he(abs.) fell
     'He fell'

(51) jin-ê ew dît
     woman(obl.) he(abs.) saw
     'The woman saw him'

We can indeed formulate a Greenbergian universal to the effect that if a language has a case-marking system, the probability that it has distinct cases for A and O is greater than that it has the same case for A and O. This is not an absolute universal, since the system in which A and O have the same case, as opposed to S, is attested in the Iranian Pamir language Roshani (Payne, J. 1980). Roshani exhibits this system (the double-oblique system) in past tenses only:

(52) az-um pa Xaraγ sut
     I(abs.)-1s to Xorog went
     'I went to Xorog'

(53) mu tā wunt
     I(obl.) you(obl.) saw
     'I saw you'

(In the present tense, the system is nominative-accusative, in that the absolutive case is used for S and A, and the oblique case for O:

(54) čāy sā-t
     who(abs.) go-3s
     'Who is going?'

(55) az tā wun-um
     I(abs.) you(obl.) see-1s
     'I see you')

The historical origin of the double-oblique system can be easily reconstructed: the transitive past with its characteristic double-oblique form as shown in (53) was originally ergative, with an oblique A and an absolutive O. The absolutive case of O in the past tenses at this stage contrasted with the oblique case of O in the present tense: this dysfunctionality of the system was resolved by the development of the use of the oblique case for O in past tenses, thereby however creating another dysfunctionality: the double-oblique construction.
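The patterns just illustrated differ only in which of S, A and O share a case, so a small classifier can name the system from that information alone. The case labels below are illustrative stand-ins for the Russian, Kurmanji and Roshani facts cited above, not a full description of those languages.

```python
def alignment(s_case, a_case, o_case):
    """Classify a case-marking pattern from the cases assigned to S, A and O."""
    if a_case == s_case != o_case:
        return "nominative-accusative"   # S and A share a case, O is distinct
    if o_case == s_case != a_case:
        return "ergative-absolutive"     # S and O share a case, A is distinct
    if a_case == o_case != s_case:
        return "double-oblique"          # A and O share a case, S is distinct
    if s_case == a_case == o_case:
        return "neutral"                 # no case distinction at all
    return "tripartite"                  # all three distinct


print(alignment("nom", "nom", "acc"))    # Russian-style: nominative-accusative
print(alignment("abs", "obl", "abs"))    # Kurmanji past-tense style: ergative-absolutive
print(alignment("abs", "obl", "obl"))    # Roshani past-tense style: double-oblique
```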
At this stage, we might expect the functional principle to come into force as a pressure on individual speakers of Roshani to find a way of again differentiating A and O in transitive past sentences. Indeed, this seems to be happening: younger speakers of Roshani use forms like (56), in which A is absolutive, or (57), in which O is additionally marked by the preposition az (literally 'from'):

(56) az-um tā wunt
     I(abs.)-1s you(obl.) saw
     'I saw you'

(57) mu az taw wunt
     I(obl.) from you(obl.) saw
     'I saw you'

In this case, the functional principle in question is clearly seen as a force behind the historical change. It remains to be seen whether other functional principles can be considered in this way, but the line of enquiry is a promising one. The diachronic dimension in explanation is fully discussed by Bybee (1988).

1.5 Chomskyan universals

The Chomskyan view of language universals differs in important respects from the Greenbergian view. At the heart of the difference lies Chomsky's notion that the goal of linguistic theory is to characterise I-language, which is language viewed as the internalised knowledge incorporated in the brain of a particular speaker, rather than E-language, which is language viewed as a shared social phenomenon external to the mind. The important questions to which Chomsky attempts to provide an answer are (Chomsky 1988:3):

(a) What is the system of knowledge? What is in the mind/brain of the speaker of a language?
(b) How does this system of knowledge arise in the mind/brain?

The answer to question (a) is logically prior: it consists firstly in the construction of a grammatical description which is the theory of a particular language, and secondly in the construction of a theory of universal grammar (UG), whose role is to determine which principles of the grammatical description of the particular language are language universals, i.e. invariant and fixed principles of the language faculty of mankind. The construction of UG contributes to the solution of question (b), inasmuch as the principles of UG can be considered as innate and not part of what must be discovered by the language learner. (See Chapter 4, above.) As will be evident from the above, Chomskyan universals are universal principles of grammar which are incorporated in the grammar of a particular language. The explanation for them is that they are innate. As we have seen, Greenbergian probabilistic universals cannot sensibly be incorporated in the grammars of particular languages, since they are statements about how languages tend to be rather than how they must be. The explanation for them may be reducible to principles of linguistic change, but in any event, innateness does not seem to be involved in their explanation, since all languages seem to be learned with equal facility. Clearly only absolute Greenbergian universals are candidates for incorporation into Chomsky's UG. The development of ideas about UG within the Chomskyan framework can be divided into two phases. In the early phase, it was thought that the principles of UG could be incorporated as such in the grammars of individual languages.
As Katz (1966:109) puts it: 'each linguistic description has a common part consisting of the set of linguistic universals and a variable part consisting of the generalisations that hold only for the given language.' Such a view leads immediately to the 'Chomskyan syllogism' (Haider 1986):

(A) The principles of UG hold for any natural language
(B) Language x is a natural language
Hence: The principles of UG hold for x
Hence: A detailed analysis of x will lead to the principles of UG

In fact, as demonstrated by Keenan (1976b), this view of the principles of UG, and the research strategy based on it, is untenable. It is untenable because any particular language x greatly under-realises what is universally possible: the constraints on the forms of its structures are generally much stronger than the constraints that are universally valid. As a simple illustration of this point, Keenan considers the notion of 'promotion rule': many languages, including English, have rules whose effect is to form complex structures from simpler ones by assigning the properties of one NP to another. The English passive, no matter how it is formally defined, has the effect of assigning subject properties, such as initial position in the sentence, to an underlying object: from John gave the book to Mary we can derive The book was given to Mary by John. It turns out, however, that many languages have no promotion rules of this kind: examples are Hausa, Urhobo and Arosi. If the principles of UG were based on these languages, we would be motivated to exclude promotion rules from the set of possible rules permitted by UG. Since Chomsky (1981), the theory of UG has been modified to include principles which are 'parametrised', i.e. principles which include variables which may have different values in different languages. Different settings of these values then account for the observed diversity of languages. Although there are strong arguments to the contrary (see especially Bowerman 1988), it is often argued that this conception of UG simplifies the problem of accounting for the acquisition of language, since the task of the language learner can be thought of in part as establishing the values of the parameters, and this can be done on the basis of relatively simple sentences. A change in the value of even one parameter can have radical consequences as it works its way through the whole system of grammar. As a simple example of a parameter we can cite the 'head parameter', which fixes the order of heads and complements. UG permits basically four lexical categories: V (verb), N (noun), A (adjective) and P (preposition). These four lexical categories occur as the 'head' in the corresponding phrasal categories: VP (verb phrase), NP (noun phrase), AP (adjective phrase) and PP (prepositional phrase). Letting X and Y be variables for any of the lexical categories V, N, A or P, the general structure of a phrase can be expressed in the formula (58):

(58) XP = X − YP

This is understood to mean that a phrase of a certain category, say VP, will consist of a V which is its head and a complement which can be a phrase of any category, say NP. The English VP in (59) is an instantiation of these choices:

(59) [VP [V speak] [NP English]]

The principle in (58) is an invariant principle of UG, but several parameter values have to be fixed before it can yield actual phrases in a particular language.
In particular, one set of parameters fixes the choices of X and Y, and another fixes the order of the head and complement in each case. In English, the general rule is that the head precedes the complement, while in Japanese it follows. It is an interesting question whether principle (58) does account for all the variety of phrasal types across the world's languages. There are at least three prima facie objections. Languages with VSO and OSV orders are potential candidates for counter-examples, since V is not adjacent to O. In the case of VSO languages, Chomsky argues for an abstract analysis in which the underlying structure is SVO and the verb is moved to the front of the clause. Non-configurational languages like Warlpiri are another potential objection, since in these languages it can be argued that V and O do not form a phrase: here Chomsky again has to postulate an abstract structure. Finally, languages with VP-nominative structures like Toba Batak seem difficult to fit into the schema, since the subject would not normally be thought of in the Chomskyan framework as a complement of the verb. Here however the principle (58) might be maintained if the structural definition of subject as a sister of VP were abandoned, with considerable consequences for many other principles of grammar. One of the consequences of the adoption of the principles and parameters model of UG is that the Chomskyan syllogism now fails. It is impossible to deduce the principles of UG by detailed study of a single language. Another consequence, as pointed out by Keenan (1982), is that it becomes possible to state Greenbergian absolute implicational universals as constraints on the choice of parameters. For instance, Keenan's view of passivisation in UG is that it is a rule which derives n-place predicates from (n+1)-place predicates, a process often described as a reduction of valency (see Chapter 3, above). In English, the one-place predicate is seen (which is intransitive and takes a single obligatory subject NP) is derived from the two-place predicate see (which is transitive and requires both object and subject NPs). In English we cannot form a zero-place predicate from a one-place predicate: there are no passives of intransitive verbs. But such passives do exist in languages like German: from the verb tanzen 'dance' it is possible to form a passive es wird getanzt (it is danced, i.e. there is dancing) with dummy es, which is not a subject. Keenan's preliminary formulation of the Passive in UG is therefore as follows:

(60) a. Rule: Pn → {Pass, Pn+1}, for all n ≥ 0
     b. Parameter Conditions: (i) if nL is not zero, then 1 ∈ nL

English just has one instantiation of the rule, with n=1, i.e. P1 → {Pass, P2}. The P1 is seen can be formed by adding passive morphology and the auxiliary be to the P2 see. German has two instantiations of the rule: we can not only form the P1 wird gesehen from the P2 sehen, but also the P0 wird getanzt from the P1 tanzen. The Greenbergian implicational universal that if a language forms passives from intransitive verbs, it will also form passives from transitive verbs follows from the parameter condition that if a language forms passives at all, it forms passives from transitive verbs. We can perhaps see here a move towards a successful synthesis of work within the Chomskyan and Greenbergian paradigms.
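Rule (60) and its parameter condition can be mimicked in a few lines. The sketch below represents a language simply as the set of n-values for which it instantiates the rule; this representation is an assumption made for illustration, not Keenan's own formalisation.

```python
def satisfies_parameter_condition(n_values):
    """Keenan's condition on (60): if a language instantiates the passive rule
    at all (n_values non-empty), then it passivises transitive predicates (1 in n_values)."""
    return (not n_values) or (1 in n_values)


english = {1}        # only P1 <- {Pass, P2}: passives of transitive verbs
german = {0, 1}      # also P0 <- {Pass, P1}: impersonal passives of intransitives
hypothetical = {0}   # passives of intransitives only: ruled out by the condition

for name, n_values in [("English", english), ("German", german), ("hypothetical", hypothetical)]:
    print(name, satisfies_parameter_condition(n_values))
# The Greenbergian implication 'passives of intransitives -> passives of transitives'
# follows: any set containing 0 must also contain 1 to satisfy the condition.
```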
However, a word of caution is in order: we have argued that many implicational universals may simply be tendencies, and if they are, it is inappropriate to include them within a characterisation of innate knowledge. Regardless of the problem of explanation, however, generalisations like (60), based on a principles and parameters approach within an adequate sample of languages, seem a promising way forward.

1.6 Hierarchies

One of the most successful notions to emerge from language universals research is the notion of 'hierarchy'. Linguistic categories can be ordered hierarchically according to which rules apply to them. Hierarchies therefore follow from the statement of implicational universals and tendencies. One example is the Keenan-Comrie hierarchy of grammatical relations known as the Accessibility Hierarchy (Keenan and Comrie 1977, Comrie 1981). Essentially, the hierarchy is as follows:

subject > direct object > non-direct object > possessor

The hierarchy plays a role in numerous grammatical processes, but was originally proposed as a statement of the different accessibility of these noun phrase positions to relativisation. English provides essentially no evidence for the existence of the hierarchy, since the method of forming relative clauses in English with the relative pronouns who and which (the wh-strategy) permits all four of the positions to be relativised:

(61) the man [who bought a book for the girl]
(62) the book [which the man bought for the girl]
(63) the girl [for whom the man bought a book]
(64) the girl [whose book was a success]

In (61), for example, the head noun man plays the role of subject within the relative clause, and in (64) the head noun girl plays the role of possessor. As predicted by the hierarchy, the two intermediate positions of direct object in (62) and non-direct object in (63) are also relativisable. However, there are languages like Malagasy which permit only the subject position to be relativised. Keenan (1985:157) provides some examples.

(65) Manasa ny lamba ny vehivavy
     wash the clothes the woman
     'The woman is washing the clothes'

(66) ny vehivavy [(izay) manasa ny lamba]
     the woman that wash the clothes
     'The woman that is washing the clothes'

(67) *ny lamba [(izay) manasa ny vehivavy]
     the clothes that wash the woman
     'The clothes that the woman is washing'

Sentence (65) illustrates the basic word order VOS in Malagasy. While the relative clause construction in (66) is acceptable, where the head noun vehivavy plays the subject role in the relative clause, the relative clause in (67) is not permitted. Neither are relative clauses based on the oblique object or possessor positions. In order to express the meaning in (67), Malagasy is forced to promote the direct object, by passivisation, into the subject position, where it can be relativised:

(68) Ny lamba [(izay) sasan'ny vehivavy]
     the clothes that washed by the woman
     'The clothes that are washed by the woman'

The hierarchy also states that there are languages in which the subject and direct object positions are relativisable, but not the oblique object and possessor positions: Bantu languages like Luganda seem to fall into this category. And we can also expect languages in which subject, direct object and oblique objects are relativisable, but not possessors: an example is the Fering dialect of North Frisian. As we begin to expect of generalisations based on implicational statements, there are counter-examples to the hierarchy as presented.
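Before turning to those counter-examples, the basic prediction, namely that the relativisable positions of a language form an unbroken top segment of the hierarchy, can be stated as a simple check; the per-language position sets below merely restate the examples given in the text.

```python
HIERARCHY = ["subject", "direct object", "non-direct object", "possessor"]


def is_initial_segment(relativisable):
    """True if the relativisable positions form an unbroken top segment of the hierarchy."""
    seen_gap = False
    for position in HIERARCHY:
        if position in relativisable and seen_gap:
            return False          # a relativisable position below a non-relativisable one
        if position not in relativisable:
            seen_gap = True
    return True


languages = {
    "English":  {"subject", "direct object", "non-direct object", "possessor"},
    "Malagasy": {"subject"},
    "Luganda":  {"subject", "direct object"},
    "Fering":   {"subject", "direct object", "non-direct object"},
}
for name, positions in languages.items():
    print(name, is_initial_segment(positions))   # all True, as the hierarchy predicts
```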
Ergativity presents an initial problem, forcing us to distinguish between intransitive subjects (Ss) and transitive subjects (As): the syntactically ergative Dyirbal, for example, permits relativisation on Ss and Os, but not As. Interestingly, it has a process (called the 'anti-passive') which has the effect of converting As into Ss, just as the passive in Malagasy converts Os into Ss. They can then be relativised. Other problems are presented by West Indonesian languages like Malay, which permit relativisation of subjects and possessors, but not of direct objects or most non-direct objects (Comrie 1981:150). Keenan and Comrie (1977) attempt to preserve the hierarchy as an absolute universal by distinguishing between different strategies for forming relative clauses within the same language (for example, the expression of the role of the head noun within the relative clause by the use of case-marked pronouns like English who/whom/whose, as opposed to the use of forms which lack case, like English that). Each strategy must then operate on contiguous elements of the hierarchy, and one strategy must operate on at least some subjects. Significantly, however, even with this hedging, there still remain recalcitrant counterexamples like Tongan, which has a [+case] strategy for (some) subjects, non-direct objects and possessors, but a [−case] strategy for direct objects (Comrie 1981:151). A second hierarchy which seems to have quite a pervasive role in language is the animacy hierarchy of Silverstein (1976). This has the form:

1st & 2nd person non-singular pronouns > 1st & 2nd person singular pronouns > 3rd person pronouns > proper nouns > human common nouns > animate common nouns > inanimate common nouns

This hierarchy was originally proposed as a statement of the distribution of case-marking systems in languages which show 'split' ergativity, i.e. where some nominals work according to an ergative-absolutive system, but others work according to a nominative-accusative system. We have already seen one example of this in the case-marking of nominals in Dyirbal, where all 1st and 2nd person pronouns and proper nouns are nominative-accusative, and all common nouns (and determiners) are ergative-absolutive. There are no 3rd person pronouns distinct from the determiners. The general principle is that ergative marking extends from the right of the hierarchy, and accusative marking from the left. Dixon (1980:290) gives a plausible functional reason for this: things which are high on the animacy hierarchy are typically instigators of actions and therefore more likely to be As than are things which are low on the hierarchy. It therefore makes sense that things which are low on the hierarchy should have a special marking (the ergative) when untypically they occur as As. The reverse argument applies for the accusative marking of things which are high on the hierarchy. The animacy hierarchy has since been refined and extended in various ways. The relative ordering of the persons is thoroughly reviewed by Plank (1985). Lazard (1984) incorporates into the hierarchy such notions as definiteness versus indefiniteness and genericity versus non-genericity in a wide-ranging account of the ways Os can differ: in Persian, for example, all definite Os are marked by the postposition -rā, but some indefinite Os may or may not take -rā, according to whether they are human or not. Lazard's combined scale resembles Table 8.
Table 8 1st & 2nd 3rd person pronouns Definite Indefinite Mass Generic person Proper names Human Non-human pronouns Ultimately, we seem to see a scale running from maximal individualisation on the one hand to maximal generalisation on the other hand. Such notions also play a fundamental role in Seiler’s (1986) attempt to relate a wide body of linguistic phenomena involving nominals to a single scale. 2. LANGUAGE TYPES 2.1 Introduction The aim of linguistic typology is to categorise actually-occurring languages according to their properties. It is essentially an application of work in language universals research to the question of how similar particular languages are to each other, or how different. There are essentially two ways in which languages can be categorised. The first is to partition the set of actually-occurring languages into subsets which share a particular property P. Such a partitioning is usually called a ‘classificatory’ typology, and the individual subsets are called ‘classificatory’ types. We can then say of any particular language x which possesses the relevant property P that it ‘belongs to’ the (classificatory) type T. Which property we choose as the basis of a classificatory typology is completely open, and depends on the purpose for which we wish to use the typology. There is of course little point in choosing a property which is genuinely universal (like the use of vowels), since then every language would belong to the same type with respect to that property. But any other property might be of interest for some purpose: we could for example classify languages into those which use clicks (‘click languages’) and those which do not, or those which use distinctive tone (‘tone languages’) and those which do not. Such classifications are typically used by one linguist describing to another the salient feature(s) of a particular language. Many linguists have felt, however, that there should be more significance to the notion of ‘language type’ than simple classification. A first move that is often made is to suggest that the property which is chosen as the basis of classification should AN ENCYCLOPAEDIA OF LANGUAGE 173 be a property on which other properties depend, i.e. a property which is the antecedent of a Greenbergian universal. We could for example choose basic word orders (in terms of the elements, S, O and V), which serve as the antecedents for such further properties as prepositional versus postpositional. If yet more properties could be found which were dependent on the basic word order, we might form a ‘general’ or ‘holistic’ typology which classified languages not on the basis of a single property, but on the basis of whole systems of properties. Unfortunately, such general or holistic typologies seem to be illusory (for discussion see Vennemann 1981 and Ramat 1986). One reason is that not enough properties seem to depend on each other, but more seriously, even those implications which do hold invariably turn out to be tendencies rather than absolute universals. A possible solution to this problem is the notion of ‘ideal’ (or ‘consistent’) type: this is an abstraction based on the most frequently observed co-occurrences, or deduced a priori from abstract principles. We then have a second way of classifying languages, namely, in relation to an abstraction which may or may not be represented in actually-occurring languages. 
We can say such things as: language x belongs to the (ideal) type T except for property P, or, in numerical terms, language x belongs to the (ideal) type T to the extent e. Ideal types therefore provide a convenient way for linguists to talk about particular languages in global terms: they have no other status than this. It is important to distinguish between classificatory and ideal types when making statements to the effect that a particular language belongs to a particular type. Japanese might be said to be an 'SOV language' in both senses: it has basic word order SOV and a number of related properties like the use of postpositions. But Persian is an 'SOV language' only in the classificatory sense that it has SOV basic word order: it differs from the ideal type in many respects, including the use of prepositions. In the sections which follow, we shall concentrate on examples of ideal types from phonology, morphology and syntax respectively.

2.2 Phonological types

The most intriguing ideal types of phonology are Gil's (1986) 'iambic' and 'trochaic' types. Iambic metres, which are based on the principle weak-strong, tend to contain more syllables than trochaic metres, which are based on the principle strong-weak. Iambic metres are more suited to be spoken, while trochaic metres are more suited to be sung. Starting from these metrical notions, Gil establishes the two ideal types: (a) iambic languages have more syllables than trochaic languages, (b) iambic languages have simpler syllable structures than trochaic languages, (c) iambic languages are stress-timed while trochaic languages are syllable-timed, (d) iambic languages have more obstruent segments than sonorant segments in their phonemic inventories, while trochaic languages have more sonorant segments than obstruent segments, (e) iambic languages have more level intonation contours, and trochaic languages have more variable intonation contours, (f) iambic languages are tonal while trochaic languages are non-tonal. English is closer to the trochaic ideal, with a very complex syllable template of up to three segments before the syllable peak (as in strengths /s-t-r-e-ŋ-θ-s/) and up to four segments after the syllable peak (as in sixths /s-i-k-s-θ-s/). It is of course not tonal, but does possess a rich variety of intonation contours and a relatively low consonant/vowel ratio of 2.08: the number of consonants in the phonemic inventory is 27 and the number of vowels 13. By contrast, Turkish is closer to the iambic ideal, with a very simple syllable structure template (C)V(C)(C), no tone, and a higher consonant/vowel ratio of 3 (24 consonants and 8 vowels). Gil even has statistical evidence that word order may be related: SVO languages like English are more likely to be trochaic, and SOV languages like Turkish are more likely to be iambic.

2.3 Morphological types

Morphological typologies attempt to characterise languages according to: (i) the extent to which linguistic concepts are expressed by morphological (i.e. word-internal) modification, rather than by the use of separate words, and (ii) the morphological techniques employed. The foundations of morphological typology were laid primarily at the beginning of the nineteenth century, although, as Frans Plank pointed out in a recent lecture to the Linguistics Association of Great Britain, eighteenth-century precursors like Beauzée are known.
In these early typologies, however, the two factors mentioned above are typically conflated into a simple [...]

[Table of word-order types (columns: Type, Basic Order, Pr/Po, NG/GN, NA/AN, Languages in sample); only fragments of its rows, including types 22–24 (SOV, postpositional), survive in this preview.]

Counter to Hawkins, we do not believe that the zeroes in this table are significant: they represent rare combinations rather than absolute...

... Greenbergian and Chomskyan traditions is found in Coopmans (1983) and Comrie (1984). Keenan's important works on universals are collected in Keenan (1987). Valuable collections on particular topics are: Li (1976) on subjects and topics, Plank (1979) on ergativity, Plank (1984) on objects, and Hawkins (1988) on explanation.

PART B THE LARGER PROVINCE OF LANGUAGE

10 LANGUAGE AND MIND: PSYCHOLINGUISTICS
JEAN AITCHISON

... when a woman cutting bread said: 'Yes, you can take the bread out', meaning dog. Such examples suggest that the mind readily and subconsciously activates large numbers of words, the majority of which will not in the end be selected. This has led to the suggestion that in selecting words, suppression of unwanted extra ones may be an important factor: aphasics and language...

... alongside an open class word. It turned out that such a grammar seriously undercharacterised the speech of numerous children. In particular, it failed to deal satisfactorily with utterances such as Mummy banana, which could have several possible meanings: 'Mummy's eating a banana', 'Mummy slipped on a banana skin', 'That's mummy's banana', 'Please mummy give me a banana'. This type of observation showed that child language...

... existing samples of languages are unfortunately not free of bias and not sufficiently large. However, as an indication of the kind of results which might emerge, we give in Figure 10 a plot of the two indices analyticity (Aux/W) and lexicality for the sample of twenty-six languages studied by Kasevič and Jaxontov. Some clustering seems to be observable, with languages like Turkish, Tagalog, Arabic and Vietnamese...

... study of language and mind aims to model the workings of the mind in relation to language, but, unlike the study of language and the brain (see Chapter 11 below), does not attempt to relate its findings to physical reality. A person working on language and mind is trying to produce a map of the mind which works in somewhat the same way as a plan of the London Underground. The latter provides an elegant...

... identification of sounds and words. Parsing involves the assignment of structure to the various words, and the analysis of the functional relationships between them. Interpretation covers the recognition of semantic relationships, and the linking up of the utterance with the real world. This threefold division corresponds roughly to the linguistic levels of phonetics/phonology, syntax, and...

... (b) agglutinating languages, (c) flectional languages, (d) incorporating languages. An isolating language like Chinese would present its roots in isolation, without any grammatical modification. An agglutinating language like Basque (the term derives from Latin gluten 'glue') would glue any number of invariant endings, each with its own meaning, on to an invariant root, while a flectional language like Greek...
Crothers, J. (1978) 'Typology and Universals of Vowel Systems', in Greenberg 1978, vol. 2: 93–152.
Derbyshire, D.C. (1987) 'Areal Characteristics of Amazonian Languages', International Journal of American Linguistics, 53: 311–26.
Derbyshire, D.C. and Pullum, G.K. (eds) (1986) Handbook of Amazonian Languages: Volume 1, Mouton de Gruyter, Berlin.
Dixon, R.M.W. (1979) 'Ergativity', Language, 55: 59–138.
Dixon, R.M.W. (1980) The Languages of Australia, ...

... Indeed, languages in some of the unattested types are now coming to light. For example, Payne, D. (1986) cites the Amazonian language Yagua as belonging to Type 8, and the northern or Jewish dialect of the Iranian language Tati seems, according to the description of Grjunberg and Davydova (1982), to belong to Type 18. Rather, the significance of the table lies in the strong tendencies which can be observed ...

... verb and preposition, and the languages which did not. Schlegel further divided the inflexional languages in type (c) into: (c1) analytic languages, (c2) synthetic languages. An analytic language ...