Radford 1988, Chapters 2 and 3
We have seen that the interpretation of noun phrases of various types
(reflexive pronouns, ordinary personal pronouns, and even full noun
phrases) is constrained by their structural relations with other noun
phrases. The question that we will address over the next few weeks is:
What are the origins of these structural relations, and of phrase structure
itself? This web page will discuss two approaches to this question that
have been taken in the history of generative grammar. The first, based on
so-called phrase structure rules, characterized the field from its
beginnings until roughly the early 1980s. It had a number of conceptual
shortcomings, however, which over time led to its rejection and to the
adoption of the second approach, according to which syntactic structure is
projected from the lexicon.
Phrase structure rules
From the 1960s through the early 1980s, phrase structure was thought to
be generated by phrase structure rules of the sort illustrated in (1).
|(1)||a.||VP ---> V NP NP|
|b.||VP ---> V NP PP|
|c.||VP ---> V NP|
|d.||VP ---> V|
From a formal mathematical point of view, such rules are part of a context-free grammar, and they therefore have, by definition, certain formal properties. Specifically, the lefthand side of a phrase structure rule must consist of a single symbol, whereas the righthand side of a phrase structure rule may consist of one or more symbols.
|(2)||a.||The symbol on the lefthand side of a rule corresponds to a node.|
|b.||The symbol(s) on the righthand side of the rule correspond to daughters of the node in (a). The linear order of the daughters is the same as that of the symbols in the rule.|
With the help of (2), the rules in (1) can be translated into the trees in (3).
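To make the translation in (2) concrete, here is a minimal sketch in Python. The Node class and the rule table are illustrative assumptions, not part of any standard formalism:

```python
# A sketch, not part of the text: a Node class and a rule table used to
# show how (2) turns phrase structure rules into local trees.

class Node:
    def __init__(self, label, daughters=None):
        self.label = label                # the symbol on the lefthand side
        self.daughters = daughters or []  # left to right, as in the rule

# The rules in (1): one symbol on the left, one or more on the right.
RULES = {
    "VP": [["V", "NP", "NP"],   # (1a)
           ["V", "NP", "PP"],   # (1b)
           ["V", "NP"],         # (1c)
           ["V"]],              # (1d)
}

def expand(symbol, choice):
    """Per (2): the lefthand symbol becomes a node, and the righthand
    symbols become its daughters, in the same linear order."""
    return Node(symbol, [Node(d) for d in RULES[symbol][choice]])

tree = expand("VP", choice=1)    # apply rule (1b)
assert tree.label == "VP"
assert [d.label for d in tree.daughters] == ["V", "NP", "PP"]
```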
The phrase structure rules in (1) contain only nonterminal symbols (= syntactic categories; roughly speaking, parts of speech). In order to generate phrases and sentences consisting of actual words, there must also be rules available whose righthand side contains terminal symbols, like those in (4). In contrast to nonterminals, terminal symbols cannot appear on the lefthand side of a phrase structure rule.
|(4)||a.||V ---> tell|
|b.||V ---> put|
|c.||V ---> devour|
|d.||V ---> waited|
Because we can think of the terminal symbols (the lexical items) as being inserted into structures like those in (3), rules like (4) are known as lexical insertion rules. Lexical insertion rules are like ordinary phrase structure rules in that the lefthand side in both must be a single nonterminal symbol. However, the righthand side of a lexical insertion rule is constrained to be a single terminal symbol.
In conjunction with phrase structure rules like those in (1), such lexical insertion rules make it possible to build structures for sentences like those in (6).
|(6)||a.||I will tell you the answer.|
|b.||They put the book on the shelf.|
|c.||The lion will devour the wildebeest.|
Although combining phrase structure rules and lexical insertion rules correctly allows us to build structures for the sentences in (6), a serious problem with this approach was noted early on: it also incorrectly generates structures for the sentences in (7).
|(7)||a.||*||I will put you the answer.|
|b.||*||They waited the book on the shelf.|
In traditional terms, the difficulty is that the system of phrase structure rules and lexical insertion rules fails to distinguish among various subcategories of verbs, such as intransitive, transitive and ditransitive.
Intransitive, transitive, and ditransitive verbs take zero, one, and two complements, respectively. The term 'complement' is defined in the section Complements versus adjuncts. Transformational grammarians therefore proposed to incorporate this sort of information into each verb's lexical entry. The idea is that each lexical item has an entry in our mental grammar, comparable to a conventional dictionary entry (though much more detailed). Included in the lexical entry are the lexical item's pronunciation, its meaning, its syntactic category and, crucially for present purposes, the syntactic environments that it can occur in. When represented as in (8), these environments are known as subcategorization frames. The blank line represents the position of the verb, and the remaining syntactic categories represent the verb's environment. As (8a,e) show, verbs may be associated with more than one subcategorization frame.
|(8)||a.||tell||___ NP NP, ___ NP PP|
|b.||put||___ NP PP|
|e.||eat||___, ___ NP|
Now here is the way that subcategorization frames are used. At the point of lexical insertion, a verb's subcategorization frame is checked against the syntactic environment that the verb is being inserted into. If the environment matches the frame, lexical insertion goes forward, but if not, lexical insertion fails. Thus, the sentences in (6) are generated, but the ones in (7) are not.
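The checking step just described can be sketched in Python as follows; the frame table mirrors (8), and the function name can_insert is an illustrative assumption:

```python
# A sketch, not part of the text: subcategorization frames as in (8),
# checked against the verb's syntactic environment at lexical insertion.

FRAMES = {
    "tell":   [["NP", "NP"], ["NP", "PP"]],  # (8a)
    "put":    [["NP", "PP"]],                # (8b)
    "eat":    [[], ["NP"]],                  # (8e)
    "waited": [[]],                          # intransitive
}

def can_insert(verb, environment):
    """Lexical insertion succeeds iff the verb's environment (the list of
    sister categories following the blank) matches one of its frames."""
    return environment in FRAMES[verb]

# (6b) They put the book on the shelf: the frame ___ NP PP matches.
assert can_insert("put", ["NP", "PP"])
# (7a) *I will put you the answer: put has no frame ___ NP NP.
assert not can_insert("put", ["NP", "NP"])
# (7b) *They waited the book on the shelf: waited takes no complements.
assert not can_insert("waited", ["NP", "PP"])
```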
Although this system works, it has two conceptual shortcomings. First, the architecture of a grammar that is based on phrase structure rules and subcategorization frames requires the information in each subcategorization frame to duplicate information in some phrase structure rule. The history of generative grammar has been driven by the assumption that such redundancy indicates a failure of insight, and that more insight will be achieved by finding a way to eliminate the redundancy.
Second, nothing in the mathematical constraints on phrase structure rules prohibits "crazy rules" like those in (9).
|(9)||a.||NP ---> V Adj|
|b.||VP ---> Adj|
But structures corresponding to such rules are simply not found in the world's languages. Rather, a phrase of a particular type, say a verb phrase, contains a lexical item of that same type, in this case, a verb. That is, phrases have a core, which in traditional grammar is called the head, a term which has been adopted by generative grammarians.
In the second approach, syntactic structure is projected from the lexicon: each lexical item is associated with one or more small pieces of tree structure, or treelets. A first important piece of information that needs to be represented in a treelet is a lexical item's syntactic category. For the moment, we will focus on the four so-called lexical categories: N(oun), V(erb), A(djective) and P(reposition). We can easily represent a lexical item's syntactic category by representing it as the lexical item's mother. This is shown in (10).
A second piece of information that needs to be represented is the fact that lexical items serve as heads of phrases. Again, we can represent this by having an appropriate type of phrase dominate the lexical item. We can actually think of the representation as being generated in two steps: first, a phrasal node of as yet indeterminate type is added to the treelets in (10), and then the lower syntactic category's type percolates, to use a widespread metaphor, to determine the type of the entire phrase.
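The two-step process just described (adding a phrasal node of indeterminate type and then percolating the category) can be sketched as follows; the dictionary encoding of trees and the '?P' placeholder label are illustrative assumptions:

```python
# A sketch, not part of the text: trees are dicts, and "?P" stands for
# the phrasal node of as yet indeterminate type.

def project(category, word):
    # Step 1: add a phrasal node of indeterminate type above the treelet.
    phrase = {"label": "?P",
              "ds": [{"label": category, "ds": [word]}]}
    # Step 2: the lower category's type percolates to label the phrase.
    phrase["label"] = category + "P"    # V -> VP, N -> NP, A -> AP, P -> PP
    return phrase

vp = project("V", "devour")
assert vp["label"] == "VP"
assert vp["ds"][0]["label"] == "V"
```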
We also need to represent the distinction among the various subcategories of verbs, the fact that prepositions take objects, and so on. We can do this by providing appropriate slots in the treelets. Some examples of treelets for prepositions and ditransitive, transitive and intransitive verbs are shown in (13).
Notice that all the slots in the treelets, apart from the lexical item anchoring each one, are empty. The reason is that each treelet is intended to provide exactly the information that is characteristic of the lexical item anchoring it: no less, but also no more. For instance, a verb like devour requires an object, but not a particular one; many different phrases will fill the bill, just as long as they are noun phrases.
A final issue concerns verbs such as eat, which can be used either with or without a following object. In the phrase structure approach to generating syntactic structure, this fact is represented by associating multiple subcategorization frames with a single lexical item; see (8e). In our current approach, we will simply associate a single lexical item with more than one treelet, as in (14).
As we will see in a moment, the structures in (12)-(14) are actually oversimplified, but for now, they adequately illustrate how the redundancy inherent in a system of phrase structure rules, lexical insertion rules and subcategorization frames can be eliminated in a system based on treelets. To see this clearly, compare the VP trees in (13) to those in (5).
Note: For expository convenience, we will focus in what follows on heads that are verbs. We will extend our discussion to other types of heads in Weeks 3 and 4.
There are a number of conceivable ways that we could go about adding a slot for the subject. One idea is simply to add a daughter to the VP nodes in (13b-d) in the position preceding the V node. This would yield structures as in (15).
But now notice that in such 'flat' structures, the subject and any objects of the verb c-command each other. However, we know from the distribution of reflexive pronouns that subjects asymmetrically c-command objects (recall Exercise 2 of Assignment 1). That is, subjects c-command objects, but objects don't c-command subjects. Since the representations in (15) do not represent this fact, they must be rejected.
What we need to do is to add a node, standardly called V' (read as 'V bar'), that groups the verb together with any objects, but excludes the subject. The desired structures are shown in (16).
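The difference between the flat structures in (15) and the layered structures in (16) can be checked mechanically. The following Python sketch uses the standard definition of c-command (A c-commands B iff some sister of A dominates B); the dictionary encoding of trees is an illustrative assumption:

```python
# A sketch, not part of the text: trees are dicts with a "label" and an
# optional "ds" (daughters) list.

def walk(node):
    yield node
    for d in node.get("ds", []):
        yield from walk(d)

def dominates(a, b):
    return a is b or any(dominates(d, b) for d in a.get("ds", []))

def sisters(root, node):
    for mother in walk(root):
        ds = mother.get("ds", [])
        if any(d is node for d in ds):
            return [d for d in ds if d is not node]
    return []

def c_commands(root, a, b):
    """A c-commands B iff some sister of A dominates B."""
    return any(dominates(s, b) for s in sisters(root, a))

subj = {"label": "NP-subj"}
v = {"label": "V"}
obj = {"label": "NP-obj"}

flat = {"label": "VP", "ds": [subj, v, obj]}          # as in (15)
layered = {"label": "VP",                             # as in (16)
           "ds": [subj, {"label": "V'", "ds": [v, obj]}]}

# Flat structure: subject and object c-command each other (symmetric).
assert c_commands(flat, subj, obj) and c_commands(flat, obj, subj)
# Layered structure: only the subject c-commands the object (asymmetric).
assert c_commands(layered, subj, obj) and not c_commands(layered, obj, subj)
```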
Notice that we have no direct evidence yet for the existence of a V' in the treelet for waited. However, we will assume its existence on conceptual grounds: including it makes the treelet in (16c) analogous to those in (16a,b).
An empirical argument for including V' in the treelets of intransitive verbs is explored in Exercise 2 of Assignment 2.
Notice further that it is V' and not the node labelled VP that corresponds to the 'verb phrase' of traditional grammar. Although the terminology of traditional grammar and generative grammar coincides in many cases, this is not always the case. Such a divergence of technical from colloquial usage is nothing out of the ordinary; it occurs in the development of any science.
We now introduce some useful terminology to discuss hierarchically structured treelets as in (16). We will say that the lexical item projects the syntactic structure in the treelet. The lexical item's syntactic category, the head of the projected structure (here, V), is also referred to as a lexical projection. V' is the intermediate projection, and VP is the phrasal projection or maximal projection. These three projections are also sometimes said to have distinct bar levels, as summarized in (17).
The sister of the intermediate projection is called the specifier; it is the slot for subjects. In English, the specifier of VP is the intermediate projection's left sister, but in a VOS language like Malagasy, it is the right sister. Following traditional terminology, the sister of the head is called the complement. There is only one specifier slot. However, there can be more than one complement slot, as we will discuss in more detail later on. Finally, it should be mentioned that subjects and complements are often referred to as a head's arguments.
|(18)||They eat [NP five apples] [NP every day] .|
As mentioned earlier, certain verbs, among them eat, are associated with more than one treelet. The grammaticality of (18) therefore suggests that eat can be associated with the ditransitive treelet in (19) (in addition to the intransitive and transitive treelets in (14)).
Plausible as this suggestion may seem at first glance, there turns out to be strong evidence against it. This evidence is based on a syntactic phenomenon known as do so substitution, which is illustrated in (20) and (21). As is evident, do so can substitute either for a complete V', as in (20b), or for part of one, as in (21b).
|(20)||a.||They eat five apples every day, ...|
|b.||and we do so, too.|
|(21)||a.||They eat five apples every day, ...|
|b.||... and we do so every week.|
We now introduce an assumption that has been standard in syntactic theory since before the advent of generative grammar, namely, that do so substitution is possible iff the sequence being replaced is a syntactic constituent (= unit of syntactic structure). In trees, constituents will be represented as nodes that exhaustively dominate the sequence of words in question.
|(22)||exhaustively dominate: A node exhaustively dominates a sequence of symbols iff it dominates all and only the symbols in question, neither more nor less.|
For instance, A dominates the sequence B C in (23a-c), but exhaustively dominates it only in (23a). A doesn't exhaustively dominate B C in (23b,c) because it dominates too much material. A also fails to exhaustively dominate B C in (23d) because it dominates too little material. (Domination is thus a necessary condition for exhaustive domination, though not a sufficient one, as (23b,c) show.)
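The definition in (22) can be sketched in Python as follows; the dictionary encoding of trees, with terminals as plain strings, is an illustrative assumption:

```python
# A sketch of (22), not part of the text: trees are dicts with a "label"
# and a "ds" (daughters) list; terminal symbols are plain strings.

def yield_of(node):
    """Return the sequence of terminals a node dominates, left to right."""
    if isinstance(node, str):
        return [node]
    words = []
    for d in node.get("ds", []):
        words += yield_of(d)
    return words

def exhaustively_dominates(node, sequence):
    """A node exhaustively dominates a sequence iff it dominates all and
    only the symbols in the sequence, neither more nor less."""
    return yield_of(node) == list(sequence)

# (23a)-style configuration: A immediately dominates B and C.
a = {"label": "A", "ds": ["B", "C"]}
assert exhaustively_dominates(a, ["B", "C"])

# A node dominating extra material, as in (23b,c), fails the test.
a_big = {"label": "A", "ds": ["B", "C", "D"]}
assert not exhaustively_dominates(a_big, ["B", "C"])
```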
The question arises whether the representation in (19) is consistent with the do so facts in (20) and (21). The answer is that (19) is consistent with do so substitution in (20b), since the sequence substituted for is exhaustively dominated by V'. But (19) is not consistent with the grammaticality of do so substitution in (21b), since (19) contains no node that exhaustively dominates the sequence eat five apples. Since there is no reason to assume that the verb phrases in (20) and (21) have distinct structures, the treelet in (19) must be rejected.
But what is the correct syntactic structure for the verb phrase eat five apples every day? The entire sequence eat five apples every day must be a constituent (= exhaustively dominated by some node), as in (19), but the subsequence eat five apples must be a constituent as well. The tree in (24) has the proper shape. (Note: this is not yet the actual treelet for eat; we return to this issue directly.)
Notice now that in (24), the NP five apples is a sister of the head V, whereas the NP every day and the head V are more distantly related. Given the way that we have defined the notion of complement, only the direct object is a complement. But what relation does every day bear to the head V, being neither a complement nor a specifier? Again adopting a term from traditional grammar, we will refer to constituents related to a head as the NP every day is to V in (24) as modifiers or adjuncts.
It is important to realize that complements, adjuncts and specifiers all stand in a distinct structural relation to their head. Although both complements and adjuncts are daughters of intermediate projections, they differ crucially in that complements are sisters of heads, whereas adjuncts are sisters of the next projection level up, intermediate projections. In being sisters of intermediate projections, adjuncts resemble specifiers. But once again, the two relations are distinct because adjuncts are daughters of intermediate projections, whereas specifiers are daughters of maximal projections. These structural distinctions are summarized in (25).
|(25)||Linguistic relation to head||Sister of ...||Daughter of ...|
|Complement||Head||Intermediate projection|
|Adjunct||Intermediate projection||Intermediate projection|
|Specifier||Intermediate projection||Maximal projection|
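The table in (25) amounts to a small decision procedure, sketched here with V-projections for concreteness; the bar-level labels and the function name relation are illustrative assumptions:

```python
# A sketch of (25) as a decision procedure, not part of the text.

def relation(sister_label, mother_label):
    """Classify a constituent by what it is a sister of and a daughter of."""
    if sister_label == "V" and mother_label == "V'":
        return "complement"   # sister of the head, daughter of V'
    if sister_label == "V'" and mother_label == "V'":
        return "adjunct"      # sister of V', daughter of V'
    if sister_label == "V'" and mother_label == "VP":
        return "specifier"    # sister of V', daughter of VP
    return "unknown"

assert relation("V", "V'") == "complement"
assert relation("V'", "V'") == "adjunct"
assert relation("V'", "VP") == "specifier"
```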
Both traditional and generative grammar maintain that the relation between heads and their arguments is closer than that between heads and adjuncts. The idea is that adjuncts are not required by a particular lexical item; rather, they are optional specifications. Now we have stated repeatedly that treelets are intended to provide exactly the information that is characteristic of the lexical item anchoring them: no less, but also no more. In the approach to phrase structure that we adopt here, therefore, treelets will include slots for arguments, but not for adjuncts. This means that the treelet for eat that is used to generate the syntactic structure for a sentence like (18) has only a single V', as in (26), not two, as in (24).
Now it is clear that the syntactic structure of a sentence like They eat five apples can be derived by building subtrees for they and five apples and then substituting them in the appropriate slots. This process of substitution is illustrated schematically for a complement node in (27). The empty complement node in the containing syntactic structure in (27a) is the target of substitution and is replaced with the tree for the complement in (27b). The result of the substitution is shown in (27c).
Of course, specifier slots are equally able to serve as targets of substitution.
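The substitution process in (27) can be sketched as follows; the dictionary encoding of trees, with an empty slot represented as a node that has a category label but no daughters, is an illustrative assumption (the specifier slot is omitted for brevity):

```python
# A sketch of substitution as in (27), not part of the text.

def substitute(tree, slot_label, subtree):
    """Replace the first empty node labeled slot_label with subtree."""
    ds = tree.get("ds", [])
    for i, d in enumerate(ds):
        if not isinstance(d, dict):
            continue                    # skip terminal (word) daughters
        if d.get("label") == slot_label and not d.get("ds"):
            ds[i] = subtree             # the slot is the target of substitution
            return True
        if substitute(d, slot_label, subtree):
            return True
    return False

# Treelet for devour with an empty NP complement slot, as in (27a).
vp = {"label": "VP", "ds": [{"label": "V'", "ds": [
        {"label": "V", "ds": ["devour"]},
        {"label": "NP"}]}]}
np = {"label": "NP", "ds": ["the", "wildebeest"]}   # the complement, (27b)

assert substitute(vp, "NP", np)                     # (27c): slot filled
assert vp["ds"][0]["ds"][1]["ds"] == ["the", "wildebeest"]
```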
But given the absence of adjunct slots, how can adjuncts enter into syntactic structure? The answer is that adjuncts are integrated into treelets via a special two-step process called adjunction. We illustrate the adjunction process with the treelet for eat from (26), repeated as (28a), and the tree for the adjunct phrase every day, schematically given in (28b). The target of adjunction is the intermediate projection in the treelet for eat.
The first step in adjunction is to make a copy of the target of adjunction right above the original node, as in (29a). The second step is to attach the tree for the adjunct phrase as a daughter of the newly created node, as in (29b).
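The two steps can be sketched as follows, using a simple dictionary encoding of trees (an illustrative assumption, not part of the text):

```python
# A sketch of the two adjunction steps, not part of the text: trees are
# dicts with a "label" and a "ds" (daughters) list.

def adjoin(mother, target_index, adjunct):
    """Adjoin `adjunct` to the daughter of `mother` at `target_index`."""
    target = mother["ds"][target_index]
    # Step 1: make a copy of the target right above the original node.
    higher_copy = {"label": target["label"], "ds": [target]}
    mother["ds"][target_index] = higher_copy
    # Step 2: attach the adjunct as a daughter of the newly created node
    # (the higher copy, not the lower one).
    higher_copy["ds"].append(adjunct)
    return higher_copy

# Treelet for eat as in (26), with a single V', plus the adjunct phrase.
vbar = {"label": "V'", "ds": [{"label": "V", "ds": ["eat"]},
                              {"label": "NP"}]}
vp = {"label": "VP", "ds": [{"label": "NP"}, vbar]}
every_day = {"label": "NP", "ds": ["every", "day"]}   # as in (28b)

new_vbar = adjoin(vp, 1, every_day)
# The result has two V' levels, with the adjunct a sister of the lower V'
# and a daughter of the higher one, exactly as in (24).
assert vp["ds"][1] is new_vbar
assert new_vbar["ds"][0] is vbar and new_vbar["ds"][1] is every_day
```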
Note: It is important to attach the modifier as a daughter of the higher copy. If the modifier were attached as a daughter of the lower copy, the resulting tree would erroneously give the impression that the attached constituent was a complement; see (25) if this is unclear.
Note that the result of adjunction in (29b) is identical to the tree in (24). This is exactly the desired result.