further parentheses. This also applies to the structure of (24)(b), in which an infinitive clause-like element (=to+a verb phrase) has been successively embedded as a second ‘elaborator’ of a verb alongside its direct object, with the infinitival proclitic to acting as a marker of the embedding. Such differences between the embedded and non-embedded forms of the structure are akin to a transformational relationship, in that an indicative verb form corresponds to an infinitive (or a subjunctive in some languages), cf. also the Latin accusative-and-infinitive construction, in which the embedded subject has the accusative corresponding to the normal nominative. In an embedding, one element is downgraded and used as a constituent (or constituent of a constituent) of a higher element, to which it is in principle equal, formulaically: X0[=A+X1] or X0[=A+B[=C+X1]]. In co-ordination two similar elements are added together as equals in a combination which could have been represented by one of them alone, formulaically: X0[=X1 … & Xn], where n > 1. This normally means that each of the co-ordinated items is of the same class as the other(s) and of the whole. For instance, in the examples of (25)(a), (b) and (c) both the co-ordinated elements and the whole structure are (semantically related) nouns, noun phrases and verb phrases respectively: (25) (a) my mother and father, those cups and saucers; (b) my mother and my headmaster, John’s new cups and my German coffee; (c) I’ve dropped a cup and broken it. (d) [[[plaice and chips] and [strawberries and cream]] and [[goulash and rice] and [apple-pie and custard]]]. In co-ordinations, then, a compound element paradoxically consists of a series of elements equivalent to itself (just as a compound word is superficially often a sequence of potential words). This has the consequence that co-ordination within co-ordination is possible, as in (25)(d).
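The co-ordination formula can be given concrete shape; here is a minimal sketch in Python (the tuple representation and the function name coord are our own invention, not part of any formalism discussed here):

```python
# A toy model of the co-ordination formula X0[=X1 ... & Xn], n > 1:
# every conjunct must be of the same class as the whole, and a
# co-ordination may itself serve as a conjunct, as in (25)(d).

def coord(cls, *conjuncts):
    """Build a co-ordination of class `cls` from two or more equal conjuncts."""
    assert len(conjuncts) > 1, "co-ordination needs at least two conjuncts"
    assert all(c[0] == cls for c in conjuncts), "conjuncts must match the whole"
    return (cls, list(conjuncts))

def NP(word):
    """A one-word noun phrase, represented as (class, words)."""
    return ('NP', [word])

# (25)(d): co-ordination within co-ordination, nested three deep.
meal = coord('NP',
             coord('NP',
                   coord('NP', NP('plaice'), NP('chips')),
                   coord('NP', NP('strawberries'), NP('cream'))),
             coord('NP',
                   coord('NP', NP('goulash'), NP('rice')),
                   coord('NP', NP('apple-pie'), NP('custard'))))

assert meal[0] == 'NP'   # the compound element has the class of its parts
```

The assertions inside coord enforce exactly the paradox noted above: the whole is of the same class as each of its parts, so the construction can recur inside itself without limit.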
Both embedding and co-ordination involve combining constituents of the same size and class. We have already discussed the question of class, but how many different size-units are there? Clearly words are combined into phrases, but phrases of different size and class occur within each other without the need for any downgrading of the kind associated with embedding. For instance, in: (26) …[might [live [in [a [very poor] area]]]] we might distinguish an adjective phrase inside a noun phrase inside a preposition phrase inside a verb phrase inside a predicate phrase. The term ‘clause’ is used to indicate an embedded or co-ordinated sentence like the inner elements of (27) (a) or (b) respectively: (27) (a) [[Whoever arrives last] washes up]. (b) [[John arrived last] and [he washed up]]. But we should beware of the idea that a sentence can be exhaustively divided into clauses. In (27)(a) the subordinate clause Whoever arrives last is a sentence embedded inside another sentence, not alongside another clause. Similarly we should be clear that the co-ordinate ‘clauses’ of the compound sentence (27)(b) are nothing more than co-ordinated sentences, just as a compound noun phrase like that of (25)(b) consists simply of co-ordinate noun phrases. In the hierarchy of different size-units in syntax (sometimes referred to as ‘rank’ in ‘systemic-functional grammar’, cf. Halliday 1985:25–6) we only need to have words, different levels of phrases and sentences; ‘clauses’ are just embedded or co-ordinated sentences.

56 LANGUAGE AS FORM AND PATTERN

In describing grammatical patterns, so far we have seen that the two main factors are the extent of each construction and the classes of its member constituents. Given the various complications involved, including transformations, are these factors enough to explain all the subtleties of grammatical patterning?
Or is it also necessary to take account of the relations of the constituents to each other and their functions within the whole construction— in short, of functional relations? Chomsky (1965:68–74) asserts that this information is redundant. Let us consider the evidence. Looking at examples like those of (28)(a), (b) and (c), Bloomfield and his followers distinguished three main types of construction: (28) (a) netting, wire (that type of thing); netting and wire, (b) thick wire, (c) with wire. In (28)(a) two nouns netting and wire occur, possibly linked by a conjunction, and either one of them could stand in place of the whole construction, which is a nominal element; in (28)(b) only wire, the noun, could replace the whole construction. Both constructions have (at least) one central element or ‘head’, and are therefore described as ‘endocentric’; but whereas (28)(a) is co-ordinative, (28)(b) is subordinative, with the adjective thick acting as an optional modifier. In (28)(c), on the other hand, we have a combination made up of a preposition and a noun, but together they make an element of a further category, either adverbial (as in (mend it) with wire) or adnominal (=quasi-adjectival) (as in (puppets) with wire); this is therefore termed an ‘exocentric’ construction, consisting of a basic element and a relational element. But are these construction types and functional labels predictable on the basis of the classes involved? Is it not precisely the function of a preposition to convert a noun(phrase) into an adverbial or adnominal, and of an adjective to act as optional modifier of a noun? This is true; but then what about wire netting? In this phrase, which is not a compound noun but a regular syntactic pattern (cf. gold watch, cotton shirt, etc.), two nouns occur side by side but not as coordinates—rather with the first as ‘modifier’ and the second as ‘head’. 
Let us take a further example of the need for functional relations: (29) (a) (Mary) consulted/saw/interviewed an expert. (b) (Mary) became/was/sounded an expert. In each case the verb phrase (which is also the predicate phrase) consists of a verb followed by a noun phrase, but the function of the noun phrase differs: in (29)(a) it is an object (and accepts subject position in a corresponding passive sentence), while in (29)(b) it is a predicative (complement) and has a similar function to that of an adjective phrase (cf. very expert). There are two ways in which we might make good this lack of a functional-relational specification: we might replace our constituent structures with a different model, or we might try supplementing them in some way. The more radical policy is to abandon constituent structure altogether, and this is done in the various versions of dependency grammar (cf. Hays 1964, Korhonen 1977). Dependency grammar takes as its basis the relations between lexical elements, and the dependency involved is not so much one of a unilateral requirement for occurrence (as in a subordinative endocentric construction) as a semantic dependency for interpretation. For instance, in the predicate phrase (Students…) of (30), the word generous depends on mothers, which depends on on, which depends on depend. Only the first of these relations involves optionality, and in the case of mothers and on, it is difficult to see the latter as the dominating element. But, it is argued (with less than total conviction), in each case the ‘dependent’ relies on the ‘governor’ for its semantic interpretation.

AN ENCYCLOPAEDIA OF LANGUAGE 57

Closely related to dependency grammar is valency grammar, which (following Tesnière 1959) emphasises that certain ‘governors’, especially verbs, have the power to require a particular number and particular types of ‘dependent’ (i.e. subject, object, adverbial, etc.), cf. for instance the different needs of the verbs in Figure 8 above.
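The dependency chain in (30) can be pictured as a word-to-governor mapping; a minimal sketch in Python (the dictionary representation is ours, not a standard dependency formalism):

```python
# Dependency relations in (30): each word points to its governor;
# the verb 'depend', governing everything else, is the root.
governor = {
    'generous': 'mothers',
    'mothers': 'on',
    'on': 'depend',
    'depend': None,            # root of the predicate phrase
}

def chain(word):
    """Follow a word up through its governors to the root."""
    path = [word]
    while governor[path[-1]] is not None:
        path.append(governor[path[-1]])
    return path

# generous depends on mothers, which depends on on, which depends on depend
assert chain('generous') == ['generous', 'mothers', 'on', 'depend']
```

Note that the mapping records only who depends on whom; it says nothing about the different kinds of dependency (optionality versus semantic reliance) that the text distinguishes.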
But dependency and valency grammar, if interpreted too narrowly, are in danger of failing to give sufficient attention to the structure of the superficial form of the sentence, and a functionally-supplemented constituency grammar might be preferable. Candidates in this field include the rather programmatic Relational Grammar (cf. Johnson 1977, Perlmutter 1983: chapters 1–3) and Functional Grammar (cf. Dik 1978), in which functional notions like subject and object are basic but occur at different levels of description to allow for the different applications of the notions to cases like: (31) (a) Someone’s broken a window, have they? (b) A window’s been broken (by someone), has it? (c) There’s been a window broken (by someone), has there? In (31)(a) someone is clearly the subject and has the semantic role of agent, but it retains the role of agent and is in some sense still the underlying subject in (b), where superficially a window is the subject; and in (c) even the empty word there shows some sign of being at least a surface subject (by being echoed in the final tag question). Bresnan’s lexical-functional grammar, on the other hand (cf. Bresnan 1982: chapter 4), has attempted to link active and passive forms lexically by giving each transitive verb a double syntactic potential. In his ‘case grammar’ Fillmore (1968, 1977) tried to make a direct link between surface subjects, etc. and semantic roles like agent. The allied movement of ‘generative semantics’ (associated with the names of G. Lakoff, J.D. McCawley, P.M. Postal and J.R. Ross) aimed at a full integration of syntax and semantics (on which see Chapter 4). These projects now seem to have been abandoned; but we should note that recent work in Montague grammar/semantics has similar aims but works on a logical basis of truth conditions, ‘possible worlds’ and abstract mathematical models (cf. Dowty et al. 1981).
An integration of syntax and semantics is also called for by the proponents of Generalized Phrase Structure Grammar (cf. Gazdar et al. 1985). Chomsky has always maintained that syntax is autonomous of semantics, although in his recent work grammatical deep structures have given way to semantic rules (cf. Chapter 4). Whatever the theory to be adopted, syntax and semantics need to be brought together, because it is insufficient to establish grammatical patterns without being able to describe their meanings. The difficulty is that, whereas in syntax we try to work with discrete structures, in semantics we are faced with a multidimensional continuum of partly overlapping subtle distinctions. Consider, for a moment, the meanings of (32)(a) and (b) with their reflexive and reciprocal pronouns (which have been one of Chomsky’s recurring themes in recent years, cf. Chomsky 1981): (32) (a) They liked themselves/each other. (b) They said they liked themselves/each other. Both versions of (32)(a) involve a kind of reflexiveness: assuming two people A and B, the each other version clearly has the meaning ‘A liked B, and B liked A’, while at first sight the themselves version means ‘A liked A, and B liked B’; yet, on reflection, we realise that the version with themselves can also mean ‘A liked A and B, and B liked A and B’. With (32)(b) the situation is more complex: in the themselves version did A, for instance, say that he liked B, or that he liked A and B, or that B liked A (and B), and did B say the same or something different? (We can leave aside here the question of whether the liking is present or past.) Needless to say, if more than two people are involved, the possibilities become even more complex, and the question naturally arises: how much such semantic detail can a grammar cope with? There is a further question to be considered about the limits of a grammar in another direction: what are its upper limits in terms of the size of its units? 
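The combinatorics gestured at here can be made explicit; a small Python sketch (the set-of-pairs encoding of a ‘reading’ is our own simplification, covering only who-likes-whom):

```python
from itertools import product

# Readings of 'They liked themselves/each other' for two people, encoded as
# sets of (liker, liked) pairs.
people = ['A', 'B']

# 'each other': A liked B, and B liked A -- a single reading.
each_other = {(x, y) for x in people for y in people if x != y}

# 'themselves': each person likes either just themselves or the whole group.
themselves_readings = [
    {(p, q) for p, liked in zip(people, choice) for q in liked}
    for choice in product(*[[[p], people] for p in people])
]

assert each_other == {('A', 'B'), ('B', 'A')}
assert len(themselves_readings) == 4   # strict, two mixed, fully mutual
```

Even on this crude encoding the themselves version is four-ways ambiguous for two people, and the count grows rapidly with group size, which is the point of the question posed above.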
The sentence was traditionally regarded as the upper limit of grammatical analysis, and this was re-affirmed by Bloomfield (1935:170). But in recent years the developing fields of text-linguistics, discourse analysis and pragmatics (see Chapters 6, 7 and 8) have all given attention to the links between sentences, and some of these links are undoubtedly grammatical. ‘Preforms’, like pronouns (both ‘pro-noun phrases’ like she, it, and the pronoun in the narrower sense, one of a big one) and the pro-verb do, often rely on anaphoric reference to previous sentences for their interpretation. Equally the selection between sentence-types such as active vs. passive, cleft vs. non-cleft, is made on the basis of the wider text. Furthermore, a choice often available to the speaker is between articulating two sentences separately and combining them through embedding or co-ordination.

6. FORMALISATION IN GRAMMAR

At the beginning of this chapter it was suggested that full explicitness, possibly even generativity, was a desirable quality for a grammar. Various attempts have been made to achieve this in the history of modern linguistics. One of the first was Jespersen’s Analytic syntax (1969 [1937]), which, although it presents mere ‘formulas’, does have a double system of description to refer both to functions (S(ubject), P(redicate), etc.) and to ‘ranks’ (1=primary, etc.) of modification, as well as a system of brackets for representing subordination and embedding; but the system is not really fully explicit and only works through examples and intuition. Harris’s (1951) system was much more rigorous. Starting from a set of word classes (N, V, A, etc.) he attempted to relate these to word-sequence classes (N1, N2, etc.) through a series of equations, some of which were ‘repeatable’ (i.e. recursively applicable), others not. This came very close to the explicitness claimed for generative grammar by Chomsky, Harris’s pupil.
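Equations of this sort, read in one direction as instructions for rewriting, can be followed through mechanically; a minimal Python sketch (the toy rules and three-word lexicon are invented for illustration):

```python
import random

# Toy rewrite rules in the later Chomskyan format (S -> NP+VP, VP -> V+NP,
# NP -> Art+N), with an invented lexicon so derivations can bottom out
# in actual words.
rules = {
    'S': [['NP', 'VP']],
    'VP': [['V', 'NP']],
    'NP': [['Art', 'N']],
    'Art': [['the'], ['a']],
    'N': [['poem'], ['house']],
    'V': [['writes'], ['wires']],
}

def generate(symbol='S'):
    """Expand a symbol by applying rewrite rules until only words remain."""
    if symbol not in rules:
        return [symbol]                      # a terminal: an actual word
    expansion = random.choice(rules[symbol])
    return [word for part in expansion for word in generate(part)]

sentence = generate()    # always Art N V Art N, e.g. 'the poem wires a house'
assert len(sentence) == 5
```

Following the rules through in this way both specifies (‘generates’) a grammatical sequence and, implicitly, assigns it a structure, which is precisely the double task described in the next paragraph.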
In later work (1952,1957) Harris suggested transformations as a way of stating relations between different sentences and of accounting for similarities of lexical collocational restrictions between different structures (e.g. write the poem/*house, wire the house/*poem compared with The *house/poem is written, etc.); these were also presented in the form of equations, which can, of course, be read in either direction. Chomsky’s rewrite rules, first presented in 1955–7, were, however, unidirectional (e.g. S→NP+VP, VP→V+NP, etc.) and were fundamentally different in that they were intended to specify (=‘generate’) sentences and assign structural descriptions automatically in one fell swoop. From the beginning he argued that both context-free and context-sensitive rules were necessary; he also claimed that transformational rewrite rules were required not only to relate different sentences, but also to relate ‘deep’ and ‘surface’ forms of the same sentence. With the development of transformational grammar, it became apparent that the overall rewriting potential of the model was so powerful that restrictions came to be suggested. The variant of generative grammar that has gone furthest in this direction is GPSG (Gazdar et al. 1985), which has abandoned context-sensitive rules and transformational rules, and redesigned context-free rules so that the constituency of constructions (‘Immediate Dominance’) and the sequence of constituents (‘Linear Precedence’) are stated separately; this gets around the problem of discontinuous constructions. Furthermore metarules are introduced to allow new rules to be based on existing rules, thus taking care of some transformational relations. Although this theory has some attractive features, it is apparently too concerned with the form grammar should take rather than with making it accurately reflect the structure of a language. The same criticism can be made of Montague grammar (Dowty et al. 
1981), which seems more concerned with the niceties of mathematical logic than with the analysis of the language actually used by speakers. There is no reason to suppose that natural language as a social or psychological reality comes close to either a computer program (often the inspiration of work in GPSG) or the formulae of mathematical logic. Nevertheless Chomsky made explicit rule-formulation fashionable, and even some already established grammatical theories suddenly found that (rather like Molière’s Monsieur Jourdain) they had been practising generative grammar for years without realising it, for instance tagmemics (Cook 1969:144, 158f) and systemic grammar (Hudson 1974). One of the simplest and earliest mathematical modes of representation for grammar, one which was implicitly generative, actually came from a logician. The Pole Ajdukiewicz (1935; following Leśniewski, see Lyons 1968:227–31) developed a ‘categorial grammar’, which, rather in the manner of Harris, related word categories and construction categories to the basic units ‘sentence’ and ‘noun’ through a series of equations involving fractions: for instance, a verb is something that when combined with, or ‘multiplied by’, a noun (phrase) gives a sentence, and therefore must be a sentence ‘divided by’ a noun (phrase). A verb is thus an element that converts nouns to sentences, and an adjective is an element that can be added to nominal elements without changing their category. There is no clear place for the articles in Ajdukiewicz’s scheme, but then Polish has none! ‘Categorial grammar’ shares certain features with dependency and valency grammar. Tesnière, for instance, defines prepositions as convertors (‘translatifs’) of noun elements into adverbials or adjectivals. On the other hand, in dependency grammar the verb is not seen merely as a convertor but as the principal element in the sentence, which achieves sentence status with the aid of its dependent nominals and adverbials.
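Ajdukiewicz’s ‘fractions’ lend themselves to direct modelling: categories cancel like numerators and denominators. A minimal sketch in Python (notation simplified; the directionality of later categorial grammars is ignored):

```python
# Categorial grammar in Ajdukiewicz's spirit: basic categories 's' (sentence)
# and 'n' (nominal); a 'fraction' x/y combines with a y to give an x.
def combine(functor, argument):
    """Cancel a fraction against its argument, if the denominator matches."""
    numerator, slash, denominator = functor.partition('/')
    return numerator if slash and denominator == argument else None

verb = 's/n'        # a sentence 'divided by' a noun (phrase)
adjective = 'n/n'   # leaves the category of a nominal unchanged

assert combine(verb, 'n') == 's'        # noun phrase 'multiplied by' verb = sentence
assert combine(adjective, 'n') == 'n'   # adjective + nominal is still nominal
assert combine(verb, 's') is None       # wrong denominator: no cancellation
```

The adjective’s category n/n captures exactly the observation in the text: it can be added to a nominal element indefinitely without changing the category of the result.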
A formalised system of dependency grammar must therefore make provision for verbs (at least) that ‘govern’ but also require certain ‘complements’. Hays (1964) proposes a formalism for achieving this with rules of the form Va(N1, *) for intransitive verbs and Vb(N1, *, N2) for transitive ones, with the asterisk indicating the linear position of the ‘governor’ relative to its ‘dependents’. But, as we have already seen, there are different kinds of relationship subsumed under ‘dependency’, and any formalism, however attractive, is likely to obscure this. We need to ask ourselves why such a degree of formalism is required. Chomsky himself denied that his formalism was intended as a model for linguistic performance, either for speaking, or (still less) for understanding; he proposed it, rather, as a model for linguistic competence. But is the grammar of a language really like that? Is there a clearly defined list of sentences which count as grammatical in the language in question? For example, does the grammar of English allow sentences with phrases like ?the too heavy suitcases (cited above) or sentences like those of (33)? (33) (a) John wasn’t enjoying starting driving. (b) Who did the students say the professor claimed he wanted to write a poem in honour of? Equally, in view of the complex subtleties of structures like English prepositional verbs or indirect object constructions, can we be sure that one mode of analysis is ever going to give us a perfect description? If the answer to either of these questions is ‘No’, and language is not well-defined in the fullest sense, we are entitled to ask whether a closed system of fully-formalised rules can ever capture the natural elasticity of language.
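Hays’s rule format can be made concrete; a small Python sketch (the checking function and the part-of-speech tags are our own, purely for illustration):

```python
# Hays-style valency rules: the asterisk marks the governor's own linear
# position among its dependents, so Va(N1, *) is an intransitive verb
# preceded by its subject, and Vb(N1, *, N2) a transitive verb between
# subject and object.
rules = {
    'Va': ['N', '*'],
    'Vb': ['N', '*', 'N'],
}

def satisfies(verb_class, tagged_words):
    """Check that a (word, tag) sequence realises the verb class's pattern."""
    pattern = rules[verb_class]
    if len(tagged_words) != len(pattern):
        return False
    return all(tag == ('V' if slot == '*' else slot)
               for slot, (word, tag) in zip(pattern, tagged_words))

assert satisfies('Vb', [('Mary', 'N'), ('consulted', 'V'), ('expert', 'N')])
assert not satisfies('Va', [('Mary', 'N'), ('consulted', 'V'), ('expert', 'N')])
```

As the surrounding text warns, a checker of this kind records only the number, type and position of dependents; it cannot distinguish the different kinds of relationship that ‘dependency’ subsumes.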
Certainly, though, we can accept the view expressed by Mephistopheles (in Goethe’s Faust Part I), roughly:

With words one can have a splendid fight,
With words devise a system right,

or, as the original has it:

Mit Worten läßt sich trefflich streiten,
Mit Worten ein System bereiten.

REFERENCES

Ajdukiewicz, K. (1935) ‘Die syntaktische Konnexität’, Studia Philosophica (Warszawa), I: 1–28.
Bloch, B. and Trager, G.L. (1947) Outline of linguistic analysis, Linguistic Society of America, Baltimore, Md.
Bloomfield, L. (1935) Language, British edition (American edition: 1933), Allen & Unwin, London.
Bresnan, J. (ed.) (1982) The mental representation of grammatical relations, MIT Press, Cambridge, Mass.
Chomsky, N. (1964) ‘Current issues in linguistic theory’, in Fodor and Katz (1964):50–118.
Chomsky, N. (1965) Aspects of the theory of syntax, MIT Press, Cambridge, Mass.
Chomsky, N. (1972 [1968]) Language and mind, enlarged edition, Harcourt Brace, New York.
Chomsky, N. (1981) Lectures on government and binding, Foris, Dordrecht.
Chomsky, N. and Halle, M. (1968) The sound pattern of English, Harper & Row, New York.
Cole, P. and Sadock, J.M. (eds) (1977) Syntax and semantics, volume 8: grammatical relations, Academic Press, New York.
Cook, W.A. (1969) Introduction to tagmemic analysis, Holt Rinehart, New York.
Cruse, D.A. (1986) Lexical semantics, Cambridge University Press, Cambridge.
Dell, F. (1980) Generative phonology, Cambridge University Press, Cambridge, and Hermann, Paris.
Dik, S.C. (1978) Functional grammar, North Holland, Amsterdam.
Dinneen, F.P. (1967) An introduction to general linguistics, Holt Rinehart, New York.
Dowty, D.R., Wall, R.E. and Peters, S. (1981) Introduction to Montague semantics, Reidel, Dordrecht.
Fillmore, C.J. (1968) ‘The case for case’, in E. Bach and R.T. Harms (eds) Universals in linguistic theory, Holt Rinehart, New York: 1–88.
Fillmore, C.J. (1977) ‘The case for case re-opened’, in Cole and Sadock (1977):59–81.
Fodor, J.A. and Katz, J.J.
(1964) The structure of language: readings in the philosophy of language, Prentice-Hall, Englewood Cliffs, N.J.
Gazdar, G., Klein, E., Pullum, G. and Sag, I. (1985) Generalized phrase structure grammar, Blackwell, Oxford.
Halliday, M.A.K. (1985) An introduction to functional grammar, Edward Arnold, London.
Harris, Z.S. (1951) Methods in structural linguistics, University of Chicago Press, Chicago (reprinted as Structural linguistics, 1955).
Harris, Z.S. (1952) ‘Discourse analysis’, Language, 28:1–30. (Reprinted in Fodor and Katz (1964):355–83.)
Harris, Z.S. (1957) ‘Co-occurrence and transformation in linguistic structure’, Language, 33:283–340. (Reprinted in Fodor and Katz (1964):155–210.)
Hays, D.G. (1964) ‘Dependency theory: a formalism and some observations’, Language, 40:511–25. (Reprinted in F.W. Householder, Syntactic theory I: structuralist, Penguin, Harmondsworth: 223–40.)
Hudson, R.A. (1974) ‘Systemic generative grammar’, Linguistics, 139:5–42.
Jespersen, O. (1969) Analytic syntax, Holt Rinehart, New York. (First published 1937, Allen & Unwin, London.)
Johnson, D.E. (1977) ‘On relational constraints on grammars’, in Cole and Sadock (1977):151–78.
Korhonen, J. (1977) Studien zu Dependenz, Valenz und Satzmodell, Teil I, Peter Lang, Berne.
Kratochvil, P. (1968) The Chinese language today, Hutchinson, London.
Lyons, J. (1968) Introduction to theoretical linguistics, Cambridge University Press, Cambridge.
Matthews, P.H. (1970) ‘Recent developments in morphology’, in J. Lyons (ed.) New horizons in linguistics, Penguin, Harmondsworth: 96–114.
Perlmutter, D.M. (ed.) (1983) Studies in relational grammar 1, University of Chicago Press, Chicago.
Radford, A. (1981) Transformational syntax: a student’s guide to Chomsky’s Extended Standard Theory, Cambridge University Press, Cambridge.
Robins, R.H. (1967) A short history of linguistics, Longman, London.
Tesnière, L. (1959) Eléments de syntaxe structurale, Klincksieck, Paris.
T’ung, P.C.
and Pollard, D.E. (1982) Colloquial Chinese, Routledge & Kegan Paul, London.
Wells, R.S. (1947) ‘Immediate Constituents’, Language, 23:81–117. (Reprinted in M. Joos (ed.) (1957) Readings in linguistics I, University of Chicago, Chicago: 186–207.)

FURTHER READING

Allerton, D.J. (1979) Essentials of grammatical theory, Routledge & Kegan Paul, London.
Bauer, L. (1983) English word-formation, Cambridge University Press, Cambridge.
Brown, E.K. and Miller, J.E. (1982) Syntax: generative grammar, Hutchinson, London.
Huddleston, R. (1984) Introduction to the grammar of English, Cambridge University Press, Cambridge.
Matthews, P.H. (1974) Morphology: an introduction to the theory of word structure, Cambridge University Press, Cambridge.
Matthews, P.H. (1981) Syntax, Cambridge University Press, Cambridge.
Sampson, G.R. (1980) Schools of Linguistics, Hutchinson, London.

4 LANGUAGE AS A MENTAL FACULTY: CHOMSKY’S PROGRESS
P.H. MATTHEWS

Noam Chomsky is at once a brilliant grammarian and an important philosopher of language. As a grammarian, he has had greater influence on our conception of English syntax, both of the nature of syntax and the nature of particular constructions, than any other scholar now living, and continues to display a remarkable ability to discover new problems and new generalisations that his predecessors had entirely failed to notice. As a philosopher of language, he is responsible above all for the belief that linguistics is, in his terms, a branch of cognitive psychology, and that human beings have a genetically inherited faculty of language which is independent of other faculties of the mind. If these contributions were separate, they might well be thought to merit two chapters in an encyclopaedia of this kind. But they are intimately related.
Chomsky’s philosophy of mind rests directly on a philosophy of grammar, in which the term ‘grammar’ was used, in the 1960s, to refer not simply to a linguist’s description of a language, but to the basic knowledge of linguistic structures that every speaker of a language has acquired in infancy. The central issues of linguistic theory are then posed as follows. First, we must ask what grammars are like: what form does a speaker’s basic knowledge of a language take? Second, we have to ask how speakers do in fact acquire this knowledge. Chomsky’s answer to the second question largely reflects his answer to the first, and both are central to his view of mind in general. The term ‘philosophy of grammar’ will recall the title of a famous work by Otto Jespersen (1924), a scholar with whose interests Chomsky has himself expressed sympathy (1975,1986:21f). The aim of this chapter is to examine the development of his own philosophy of grammar, from its beginning in the 1950s to the form in which we find it now, thirty years after the work which first made his reputation. I have referred, in the singular, to Chomsky’s ‘philosophy’ of grammar. Like that of any other major scholar, his work forms a historical unit. One can see the roots of things he says now in things that he said at the very outset of his career in the early 1950s. But one might also speak, in the plural, of Chomsky’s ‘philosophies’. His thought has never been static, and within this unity there have been many important shifts of emphasis, many innovations and much reshaping of old theory into new. On some central issues, notably on the place of semantics in grammar, his views have changed not once but twice. For a historian of linguistic theory it is fascinating to trace the continuities and discontinuities in Chomsky’s ideas. But for a student of current theory it is the best and possibly the only way to understand him. 
He is not a systematiser, and attempts to impose a system on him are liable to be betrayed by the next book that he writes. For those who are maddened by such things, he can be maddeningly inconsistent. At present, as always, his theories are in transition. To appreciate why they are going where they are one must have a thorough critical appreciation of their background. I have also referred to Chomsky in particular, and not, in general, to a Chomskyan school. For it is doubtful whether any permanent school can be identified. Since the early 1960s Chomsky has, at any time, had crowds of followers. Many pupils have clung to his coat tails and, after publishing a thesis which was proclaimed to be important, have done little or nothing thereafter. Others have been scholars of independent intellect whose work has then diverged so much from Chomsky’s own that no community of interest has remained. The greatest number have been teachers; by the early 1970s there were classroom versions of what Chomsky and others were supposed to have established in the 1960s which, as the decade wore on, were increasingly enshrined in textbooks. But both teachers and textbooks were left stranded when it was clear that he had taken a fresh turn. In the 1980s there is a new wave of followers, and little dialogue between them and the best of the old. We will refer to some of these people as we go along. But in Chomskyan linguistics the only element of continuity is Chomsky himself. His career may be divided into four periods. Externally it is one: he moved as a young man from the University of Pennsylvania, via Harvard, to the Massachusetts Institute of Technology, and has stayed there since. But the first stage of his intellectual history begins in the early 1950s and is centred on his first book, Syntactic Structures (1957). In this period he was still strongly influenced by older theorists in the United States, retaining many of their biases while, in other ways, reacting against them. 
The second period begins towards the middle 1960s. It was brief, but immensely productive: a space of three years saw two monographs on grammar (1965a, 1966a), a rash excursion into the history of linguistics (1966b), an important set of general lectures (1968), not to mention a joint work on phonology (Chomsky and Halle 1968). For many commentators this is Chomsky’s classic period, the period of what he himself has called the ‘standard’ theory of transformational grammar. But by the end of the 1960s we can already distinguish the beginnings of a period of groping and reorientation, which was to last through most of the 1970s. This is marked most clearly by a series of technical papers (collected in Chomsky 1972a and 1977a) and a further set of general lectures (1976). By the end of the decade the reorientation was complete, and we may therefore distinguish a fourth phase whose latest manifesto (1986) opens, in part, a new perspective. I will take these periods in turn. But this is not a chronicle, and I will not hesitate to refer both backwards and forwards where connections can be drawn. 1. ‘SYNTACTIC STRUCTURES’ One remark of Chomsky’s that seemed provocative or puzzling at the end of the 1970s was his assertion that the notion of a grammar is more central than that of a language (1980:126ff). Since then he has changed his terms: what was formerly a ‘grammar’, and had been called so for the previous twenty years, is renamed a ‘language’ or ‘I-language’ (1986:21ff.). But, in ordinary parlance, a grammar is not a language. It is merely one of the means by which a language, as the primary object of study, is described. Nor would Chomsky have disagreed with this at the beginning. To understand why both his views and his terms have shifted, we have to go back to his first book, and in particular to ideas that he inherited from his teachers. 
In the view that was dominant in America when he entered the subject, the first or only task of linguistics was to study the formal patterning of units. For example, there is a unit hat which is identified by the smaller units /h/, /a/ and /t/, in that order. Ignore its meaning; in this conception of linguistics it is not relevant. There is also a unit coat and, still ignoring meaning, these can generally be substituted one for the other: I wear a hat/coat, Some hats/coats were on sale, and so on. In the key term of this school, hat and coat have similar DISTRIBUTIONS. We can therefore class them together, and can then go on to class together larger units such as a hat or a coat, these coats or that scarf, still for no other reason than that, in such sentences as A hat would look nice or These coats would look nice, they can all be said—meaning once more apart—in the same context. The description of a language is complete when the distribution for all units has been stated in terms of classes which are ideally general. This approach was developed most consistently by Zellig Harris, in a book (1951) with whose typescript Chomsky himself helped (preface, v). Chomsky said later that this was how he learned linguistics (reference in Newmeyer 1980:33). His own work shows this very clearly. Critics of Harris and others had asked how a language could be described without appeal to meaning; but in Chomsky’s view the implication that it could be done ‘with appeal to meaning’ was ‘totally unsupported’ (1957:93). He saw ‘little evidence that “intuition about meaning” is at all useful in the actual investigation of linguistic form’ (94). His own investigation of syntax was ‘completely formal and non-semantic’ (93), and linguistic theory in general, for him as for Harris, was a theory of distributional relations. For Harris, a language was simply the collection of utterances whose formal structure one set out to describe. 
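The distributional procedure is mechanical enough to simulate; a tiny Python sketch (the four-sentence ‘corpus’ is invented, and only immediate left and right neighbours are counted as context):

```python
from collections import defaultdict

# Distributional classification in the spirit described above: class together
# units that share contexts, ignoring meaning entirely.
corpus = [
    'i wear a hat', 'i wear a coat',
    'some hats were on sale', 'some coats were on sale',
]

contexts = defaultdict(set)
for sentence in corpus:
    words = sentence.split()
    for i, w in enumerate(words):
        left = words[i - 1] if i > 0 else '#'              # '#' = boundary
        right = words[i + 1] if i < len(words) - 1 else '#'
        contexts[w].add((left, right))

# hat and coat occur in the same context: same DISTRIBUTION, same class
assert contexts['hat'] == contexts['coat'] == {('a', '#')}
assert contexts['hats'] == contexts['coats'] == {('some', 'were')}
```

At no point does the procedure consult what hat or coat means; shared contexts alone license the classification, which is exactly the methodological stance described in this paragraph.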
Similarly, for Chomsky, it was ‘a set …of sentences’ (1957:13). In describing a language one must then do two things. Firstly, one must define the membership of this set. For example, the set ‘English’ has among its members I wear a coat, That scarf would look nice, and so on. In Chomsky’s terms, these are GRAMMATICAL SEQUENCES of elements, whereas *Coat Wear I a or *Would nice look that scarf are sequences that are UNGRAMMATICAL. Secondly, one has to indicate the structure that each sentence has. For example, in I wear a coat the pronoun I, classed by Chomsky as a Noun Phrase, is followed by a Verb and a further Noun Phrase, which in turn consists of an Article plus a Noun. According to Chomsky, a grammar was a ‘device’ which performed both tasks. It contained a series of rules for the distribution of smaller and larger units. Thus, by one rule, a Noun Phrase can consist of an Article followed by a Noun. Unless there are other rules to the contrary, this excludes the possibility that successive Articles and Nouns might not form a Noun Phrase, or that, within such a phrase, the Noun might come first and the Article after it. In this conception, a language is the primary object and a grammar is a set of statements about it. One standard way of putting this was to say that grammars were theories of languages. But let us now ask what it means to ‘know a language’. As Chomsky saw it, speakers of English know what sequences of elements are grammatical sentences in English. But that is because they know the rules by which sentences are formed; to use a term which Chomsky popularised in the 1960s, it is because they have INTERNALISED (1965a:8) the grammar of English. ‘Knowing a grammar’ is thus the primary concept, and ‘knowing a language’, in the technical and rather unnatural definition of a language with which he began, is at best derivative. It took several years for the implications of this shift to sink in. 
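The paragraph above describes a grammar as a ‘device’ containing rules such as ‘a Noun Phrase can consist of an Article followed by a Noun’. A minimal sketch (the rule inventory and vocabulary are illustrative, not taken from Chomsky’s own formalism) shows how such rules generate the grammatical sequences of a tiny language while ungrammatical ones simply never appear:

```python
# Phrase structure rules as a generating 'device'. The category names
# (S, NP, VP, Art, N, V) follow the examples discussed in the text;
# the toy vocabulary is an assumption for illustration.
import itertools

RULES = {
    "S":   [["NP", "VP"]],
    "NP":  [["I"], ["Art", "N"]],
    "VP":  [["V", "NP"]],
    "Art": [["a"]],
    "N":   [["coat"], ["hat"]],
    "V":   [["wear"]],
}

def generate(symbol):
    """Yield every terminal word sequence derivable from `symbol`."""
    if symbol not in RULES:          # terminal: an actual word
        yield [symbol]
        return
    for expansion in RULES[symbol]:
        # expand each symbol of the rule, then combine the results
        for parts in itertools.product(*(list(generate(s)) for s in expansion)):
            yield [word for part in parts for word in part]

sentences = {" ".join(words) for words in generate("S")}
print("I wear a coat" in sentences)   # a grammatical sequence
print("coat wear I a" in sentences)   # ungrammatical: never generated
```

Following the rules through produces every grammatical sequence and none that are ungrammatical, which is exactly the sense in which such rules were said to ‘generate’ a language.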
But once it had, it was obvious that this definition of a language made sense only when linguistics was restricted to the study of distributional relations. For these may indeed be seen as relations in a set of sentences. To ‘study language’ in a real sense is to study something else; and that might very appropriately be called an INTERNALISED LANGUAGE or ‘I-LANGUAGE’. In the rest of this section we will look further at Chomsky’s thought as we find it in his first phase. As we have seen, he followed Harris in excluding meaning from the analysis of a language. The reason he gave was that there is no one-to-one relation between meaning and form. Forms can differ phonemically but mean the same; equally the same form can have different meanings. Not all morphemes have an independent meaning, and some forms that are not morphemes do. There is no coincidence between syntactic constructions such as Verb plus Object and constructional meanings such as Action-Goal (1957:94ff.). Therefore a grammar had to treat forms on their own. If a grammar was a theory of a particular language, a linguistic theory was in turn a general theory about grammars. But what can we expect of such a theory? The answer, in part, was that it had to specify the forms that grammars might take. They consisted of rules: thus, in Chomsky’s formulation at that time, of phrase structure rules followed by transformational rules followed by morphophonemic rules. These rules were seen as generating the sentences of a language, in that, by following them through, it would be possible to produce any grammatical sequence of elements and none that were ungrammatical. Such rules had to be precise and had to conform to a format which the theory of grammar laid down. They also had to be as restrictive as possible. The main thrust of Chomsky’s work in the 1950s was to demonstrate that some forms of grammar were too restrictive. With a finite state grammar (1957: Ch.
3) one could not generate the sentences of English. With a phrase structure grammar one might be able to generate them, but one could not describe their structure satisfactorily. With a transformational grammar one could do both. But one did not want to form a grammar which would also allow one to generate sets of sentences which were quite unlike any human language. Part of Chomsky’s insight was to see this as a problem of mathematical formalisation. A grammar was a type of mathematical system. If the sets of sentences that can be generated by one type of system (A) include all those that can be generated by another type of system (B) but not vice versa, A is more POWERFUL than B. What was needed was a theory that had just the power—no more, no less—that was needed. But a linguistic theory also had to provide what Chomsky called an EVALUATION MEASURE. Suppose that we have two grammars, both in the form that the theory prescribes and both generating the same language. But one may be simpler and, in that respect, better. According to Chomsky, the theory itself should then discriminate between them. Given a precise account of the forms of rule that it permits, including a detailed specification of the notation in which they are to be written, it should, in addition, prescribe a way of measuring the relative simplicity of alternative grammars for the same set of sentences. Now since these grammars are different they will in part describe the language differently. They might establish different units: for example, in A hat would look nice, one grammar might relate would to a Complement look nice while the other might say that nice was the Complement of a single Verb would look. If not, they would establish different classes. For example, one might class both I and a hat as Noun Phrases, while the other might deal with Pronouns separately. The evaluation measure will therefore tell us which analysis of the language a given theory favours. 
This account of the aims of linguistic theory was new and brilliant. But, in retrospect, it seems clear that there were problems. Grammars, as we have seen, were theories of languages and, like many other theories, they were based on limited evidence. They therefore made predictions: in Chomsky’s words, which echo those of Harris (1951:13) or Hockett (1948), any grammar ‘will project the finite and somewhat accidental corpus of observed utterances to a set (presumably infinite) of grammatical utterances’ (1957:15). It was then true to the extent that its predictions of what was and what was not grammatical were borne out. But then we have on top of that another theory which will take two grammars that are in this sense equally true, and literally calculate that one is, in some other sense, better. Does ‘better’ just mean ‘simpler’? That is what Chomsky seemed to be saying, and still more his associate Morris Halle (1961). But simplicity is not itself a simple notion: how then could we decide what sort of simplicity should be measured? Or does ‘better’ again mean ‘truer’? Then in what respect truer and why should these levels of truth be separated? These questions were answered, as we will see, in Chomsky’s next phase (see section 2). For the moment, however, a more obvious problem was whether the study of forms and meanings could sensibly be divorced. For although Harris and others had sought to base their analyses on purely distributional evidence, they did not, of course, maintain that meanings could not be investigated. Likewise, for Chomsky, ‘the fact that correspondences between formal and semantic features exist…cannot be ignored’ (1957:102). All that was claimed was that the formal features had to be investigated first, that descriptive linguistics (Harris 1951:5) was concerned with them alone, and that any study of meaning had to come later. 
In Harris’s terms, the meaning of utterances was, ‘in the last analysis’, their ‘correlation…with the social situation in which they occur’ (1951:187). This had its roots in Leonard Bloomfield’s theory (1933: Ch. 9). For Chomsky, ‘the real question that should be asked’ was: ‘How are the…devices available in a given language put to work in the actual use of this language?’ (1957:93). A language could therefore be studied like an ‘instrument or tool’ (103). On the one hand, we can describe its formal devices without reference to their use. In the same way, to develop the analogy, one could in principle describe a knife—handle, blade, sharp edge and all—without knowing, or while pretending that one did not know, that it was used for cutting. However, these devices have a purpose. So, given this account of the handle, edge and so on, one could then go on to incorporate it in a wider form of description which would also explain what they are for. In the same way, we can envisage a ‘more general theory of language’ (102) of which a linguistic theory, in the sense already described, is only one part. The other part would be a separate ‘theory of the use of language’. In this light, both a grammar and a linguistic theory can be evaluated on two levels. Considered on its own, grammar A may be simpler than grammar B. This notion of simplicity may be given a precise sense, as we have seen, by an evaluation measure. In a looser sense, theory A may also permit a simpler form of grammar than theory B. Thus, in his first book, Chomsky argued that a theory which included transformational rules allowed a simpler grammar of English than one which included phrase structure and morphophonemic rules alone (1957: Chs. 5 and 7). But if we then go on to study meaning, simplicity is only one criterion. For we can also require of a grammar that it should ‘provide explanations’ (1957:85) for semantic facts.
The form /əneym/ has two meanings (‘a name’ and ‘an aim’); this is explained, as Chomsky saw it, by a formal grammar in which it is divided into two different sequences of morphemes. In a passage that became famous, he argued that the shooting of the hunters could be used either of hunters shooting or of hunters being shot. That could be explained by a grammar in which, for reasons of pure simplicity, it is derived by different transformations (88f). A theory which allows transformations is therefore better for another reason. Not only does it give a simpler description of the knife; but, if we may continue the analogy, a description which is simpler in terms of the proposed evaluation measure will also explain why the knife is held as it is and used to cut things. What then was the real argument for transformations? For most of Chomsky’s followers, it was precisely that they threw light on distinctions and similarities of meaning. On the one hand, forms which are ambiguous would have analyses to match, thus the shooting of the hunters and many other stock examples. On the other hand, a transformation could relate forms that were partly or wholly synonymous. For example, Actives were said to be synonyms or paraphrases of the corresponding Passives. Moreover, given that a linguistic theory allowed transformations, how did one decide in particular cases whether such a rule should be established? The sophisticated answer was that this should be decided by the evaluation measure; and, since the linguistic theory of which the measure was part could itself be seen as part of a more general theory which would also include a theory of use, it should ideally be so devised that a grammar whose formal descriptions contributed to the explanation of meanings would be simpler than one which did not. But in practice most of those who applied the model took what in an earlier phase of distributional linguistics might well have been disparaged as a ‘short cut’. 
If there were semantic reasons for establishing a transformation they established it. The grammar might in an intuitive sense be simplified or it might not; but the appeal to meaning was overriding. Now Chomsky’s followers are not Chomsky himself, and by the end of the 1960s this had led to a remodelling of grammar under the name of generative semantics (see the beginning of section 3) which he rejected. But neither he nor anyone else made any serious attempt to justify a syntactic evaluation measure. A proposal was developed in morphophonemics or, as it was misnamed, generative phonology. But in that field meanings were irrelevant and, even then, it did not in the end work. In syntax, despite the great place that it had in Chomsky’s initial programme, the evaluation measure was still-born. For, by relating theories of form to subsequent theories of meaning, he had ensured that it would be transcended.

2. THE ‘CLASSIC’ CHOMSKY

In his account of the battle of El Alamein, Liddell Hart (1970:315) comments on Montgomery’s ‘adaptability and versatility’ in devising a fresh plan when his initial thrust had failed. It was ‘a better tribute to his generalship’ than his own habit of talking as if everything had gone as he intended. One might say much the same about Chomsky, both in his next phase and in the long re-adjustment which followed. From his own accounts, one might suppose that his thought has been consistent from the beginning. But in this way his true genius has often been disguised from his own troops. Of the changes that mark Chomsky’s general thinking in the middle 1960s, the most straightforward, on the face of it, was his extension of the concept of a grammar to include a SEMANTIC COMPONENT. Its syntactic component, as before, said nothing about meanings. Syntactic rules continued to indicate which sequences of morphemes could and could not represent grammatical sentences. But each sentence was now interpreted semantically.
A generative grammar, as Chomsky put it in a series of lectures delivered in the summer of 1964, became ‘a system of rules that relate [phonetic] signals to semantic interpretations of these signals’ (1966a:12). The objects that it generated were sentences in the old sense. But they now had meanings attached. More precisely, therefore, they were pairings of a phonetic representation of a sentence and its SEMANTIC REPRESENTATION. How does this relate to the earlier division between a theory of form and a theory of use? One might say simply that the term ‘linguistic theory’ had been redefined: whereas it was previously part of a ‘more general theory of language’ (Chomsky 1957:102), it now was that theory. But then there is a problem as to what was meant by ‘use’. In 1957 Chomsky had talked of the ‘actual use’ of the language; this could be taken to mean that semantic theory was concerned with the use made of a particular utterance, by a particular speaker, at a particular time, in a particular set of circumstances. But a generative grammar is a system of rules; particular uses vary indefinitely; therefore, if a grammar was to assign semantic interpretations to sentences, these had to be something else. In Chomsky’s formulation, they were ‘intrinsic meanings’ of sentences (1968=1972b:71). In this context he no longer spoke of ‘uses’. But, if we go back to the analogy of the knife, we might say that its intrinsic use is for cutting. I may then use it, on a particular occasion, to slice this particular cabbage which is in my kitchen. On another occasion I may use it in a non-intrinsic way, say as a makeshift screwdriver. In the same sense there was [...]
