## 4 Introducing the X' schema of phrase structure

As was mentioned in Chapter 1, we can represent the individual vocabulary items of a language as small pieces of syntactic structure, or elementary trees. The idea is to generate phrases and sentences by composing (and possibly otherwise manipulating) these elementary trees in mathematically well-defined ways.

In this view, vocabulary items are comparable to the atoms of physical matter. Atoms do not combine into molecules just because they happen to be next to each other; rather, their combinatorial possibilities are governed by their internal structure (for instance, the number of electrons on an atom's outermost shell and the relative number of protons and electrons).

Accordingly, in the first part of this chapter, we consider the internal structure of elementary trees. As in the last chapter, we begin by focusing on how verbs combine with their arguments to form larger phrases. For the time being, we will treat noun phrases and prepositional phrases as unanalyzed chunks, postponing discussion of their internal structure until Chapter 5. We then generalize the approach developed for verbs and their arguments to the point where we can build simple sentences as well as complex sentences containing subordinate clauses. In order to derive sentences, we will find it necessary to introduce a formal operation called movement, which allows us to represent the fact that constituents can have more than one function in a sentence.

In the second part of the chapter, we turn to the representation of the modification relation (familiar from Chapter 3). As we will show, it is not possible to combine modifiers with elementary trees by the substitution operation introduced in Chapter 1. Besides substitution and movement, we therefore introduce a third and final formal operation called adjunction.

#### Transitive elementary trees

We begin our investigation of the internal structure of elementary trees by considering how a transitive verb like ate combines with its two arguments in a sentence like (1).

 (1) The children ate the pizza.

From the possibility of pronoun substitution, as in (2), we know that the two arguments are constituents (specifically, noun phrases).

 (2) They ate it.

In principle, the verb could combine with its two noun phrase arguments in either order, or with both at once. The three possibilities are represented by the schematic structures in (3) (we address the question of which syntactic category to assign to the nodes labeled by question marks in a moment).

 (3) a. b. c.

However, as we already know from the discussion in Chapter 2, only the representation in (3a) is consistent with the do so substitution facts in (4).

 (4) The children ate the pizza; the children did so.

In other words, transitive verbs combine first with their object. The resulting constituent in turn combines with the subject.

What is the syntactic category of the constituents that result from these two combinations? In principle, the result of combining a verb with a noun phrase might be a phrase with either verbal or nominal properties. But clearly, a phrase like ate the pizza doesn't have the distribution of a noun phrase. For instance, it can't function as the object of a preposition (even a semantically vacuous one like of). Nor does it pattern like a noun phrase in other respects. For instance, as we have just seen, the appropriate pro-form for it is not a pronoun, but a form of do so, just as would be the case if the predicate of the sentence were an intransitive verb. In other words, for the purposes of do so substitution, the combination of a verb and its object is equivalent to an intransitive verb (say, intransitive eat); cf. (4) with (5).

 (5) The children ate; the children did so.

However, it won't do to simply assign the syntactic category V to the verb-object combination, on a par with the verb that it contains, since that would leave unexplained the contrast between (4) and (6) with respect to do so substitution.

 (6) The children ate the pizza; *the children did so the pizza.

Notice furthermore that the syntactic category of the verb-object constituent is distinct from the syntactic category of the constituent that includes the subject. This is evident from the contrast in (7), which would be unexpected if both constituents belonged to the same syntactic category.

 (7) a. We saw the children eat the pizza. b. * We saw eat the pizza.

In order to represent the facts in (4)-(7), the following notation has been developed. Verbs are said to project three bar levels, conventionally numbered from zero to two. The lowest bar level, V0, is a syntactic category for vocabulary items; it is often indicated simply by V without a superscript. The next bar level is V' (read as 'V-bar'), the syntactic category of a transitive verb and its object. The highest bar level is V" (read as 'V double bar'), which is the result of combining a V' with a subject. For a transitive verb, each bar level corresponds to the number of arguments with which the verb has combined.

Somewhat confusingly for the novice, the verb's second projection, V", is more often than not labeled VP. In early work in generative grammar, the label VP was intended as a mnemonic abbreviation for the verb phrase of traditional grammar and did indeed correspond to that category. In current phrase structure theory, however, the label that corresponds to the traditional verb phrase is V', whereas VP includes a verb's subject, which the traditional verb phrase does not. The idea is that the highest bar level projected by a verb contains all of its arguments. For clarity, we will avoid using the term 'verb phrase' if possible, but if we do use it, we mean the traditional verb phrase that excludes the subject (V', not VP). Conversely, when we say VP, we always mean the projection that contains all of the verb's arguments, not the verb phrase of traditional grammar.

The fully labeled structure for (1), with the standard labels for the three verbal projections, is given in (8).

 (8)

Given (8), we can 'un-substitute' the two arguments. This yields the elementary tree for ate in (9).

 (9)

#### The X' schema

As we show later on in this chapter and in Chapter 5, the basic form of the elementary tree in (9) can be extended to other syntactic categories. In other words, (9) is an instantiation of a general phrase structure template, shown in (10) and known as the X' schema (read: X-bar schema) of phrase structure. X, Y, and Z are variables over syntactic categories.

 (10)

A number of standard terms are used in connection with the X' schema. X (= X0) is the lexical projection of the vocabulary item that it dominates, X' the intermediate projection, and XP (= X") the maximal projection (sometimes also called phrasal projection). The correspondence between projections and the bar levels introduced earlier is summarized in (11).

 The terms 'intermediate' and 'phrasal' are somewhat misleading, since they suggest that the syntactic status of intermediate projections is somehow intermediate between lexical and phrasal constituents. This is not the case. Intermediate projections are full-fledged phrases, and 'intermediate' simply refers to the position of the projection in the tree structure.

 (11)

 | Label | Projection | Bar level |
 |-------|------------|-----------|
 | X (= X0) | Lexical | 0 |
 | X' | Intermediate | 1 |
 | XP (= X") | Maximal (phrasal) | 2 |

The lexical projection X is known as the head of the structure in (10) (the term is sometimes also used to refer to the vocabulary item dominated by the lexical projection). The three projections of the head form what we will call the spine of the elementary tree. Following traditional terminology, the sister of the head - YP in (10) - is called its complement. As we discuss in the next subsection, elementary trees need not include a complement position.

 Note the spelling of complement with e (not i). The idea is that complements complete the meaning of the head.

Paralleling the complement, the sister of the intermediate projection - ZP in (10) - is called the head's specifier.

 The term "specifier" suggests that constituents in that position somehow specify the remainder of the tree, and this might lead you to confuse specifiers with modifiers (discussed later on in this chapter). At one point, constituents in specifier position were thought to have this function, and that is how the name arose. We no longer believe this, but unfortunately, the name has stuck (what we might call terminological inertia). In order to minimize this confusion, syntacticians often use the abbreviation "Spec" (pronounced "speck").

Each elementary tree has at most one specifier, and elementary trees can lack a specifier altogether, as we will see later on in this chapter. The specifier and complement positions of a head are its (syntactic) argument positions.

In summary, an elementary tree consists of a spine and from zero to two argument positions.
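To make the schema concrete, the structure in (10) can be sketched as a small data structure. The following Python fragment is our own illustration (the class and function names are not from the text); it builds the spine VP - V' - V of a transitive verb's elementary tree, with open specifier and complement slots labeled ZP and YP as in (10):

```python
class Node:
    """A node in a phrase structure tree."""
    def __init__(self, label, children=None):
        self.label = label              # e.g. "VP", "V'", "V"
        self.children = children or []  # daughters, left to right

def transitive_elementary_tree(verb):
    """Sketch of the X' schema in (10) for a transitive verb:
    [XP ZP [X' X YP]], instantiated with X = V."""
    head = Node("V", [Node(verb)])   # lexical projection, bar level 0
    comp = Node("YP")                # open complement slot, sister of the head
    xbar = Node("V'", [head, comp])  # intermediate projection, bar level 1
    spec = Node("ZP")                # open specifier slot, sister of V'
    return Node("VP", [spec, xbar])  # maximal projection, bar level 2

tree = transitive_elementary_tree("ate")
```

The spine is the path VP - V' - V; the substitution operation of Chapter 1 would fill the open ZP and YP slots with noun phrases.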

 The terms 'specifier,' 'complement,' and 'argument' can be used to refer to the structural positions just defined or to the constituents that substitute into those positions. (This is analogous to the way we can use a nontechnical term like 'bowl' to refer either to the container itself (Hand me the bowl) or to its contents (I'd like a bowl of soup).) If it is necessary to avoid confusion between the two senses, we can distinguish between 'specifier position' and 'constituent in specifier position' (and analogously for 'complement' and 'argument').

An important question that arises in connection with the X' schema in (10) is how to represent predicates with more than two semantic arguments (say, rent or give). The most obvious approach is to allow elementary trees with more than two complements. Plausible as this approach may seem, however, it is now widely assumed that syntactic structure is at most binary-branching. In cases where we have evidence from linguistic judgments concerning the issue, we repeatedly find that binary-branching structures correctly represent our judgments, whereas ones with more branches don't. It is on this empirical basis that we are led to hypothesize that binary-branchingness is a formal universal.

If a predicate has more than two semantic arguments, there are two ways in which the additional arguments can be integrated into syntactic structure. In some cases (as with rent), the supernumerary arguments are integrated into syntactic structure by adjunction, an operation distinct from substitution that we introduce later in this chapter. This case involves a syntax-semantics mismatch, since a semantic argument ends up occupying a position that is not a syntactic argument position. In other cases (as with give), the apparently atomic predicate is decomposed semantically and syntactically into more than one head, thus yielding a total of more than two argument positions. This second case is discussed in detail in Chapter 7.

#### Intransitive elementary trees

So far, we have discussed the internal structure of the elementary trees required for transitive verbs (and transitive categories more generally). In this section, we address the internal structure of the elementary trees required for intransitive verbs - for instance, intransitive eat. The two structures in (12) come to mind as possibilities.

 (12) a. b.

The trees differ in the presence of an intermediate projection, and (12b) might at first glance seem preferable because it is simpler (in the sense of containing fewer nodes). However, (12b) violates the X' schema, and adopting it would complicate the grammar as a whole, which consists not just of the elementary trees, but of the rules and definitions stated over them. For instance, adopting (12a) allows us to summarize the facts concerning do so substitution illustrated in (4)-(6) by means of the succinct generalization in (13).

 (13) Do so substitutes for instances of V'.

Given (12b), (13) would need to be reformulated as the more cumbersome disjunctive statement in (14).

 (14) Do so substitutes for instances of V' or of V without a complement.

A second, similar reason to prefer (12a) is that it permits the succinct definition of the notion of specifier in (15a) rather than the disjunctive statement in (15b).

 (15) a. Specifiers are sisters of intermediate projections. b. Specifiers are sisters of intermediate projections or of lexical projections without a complement.

#### Deriving simple sentences

We are almost at the point of being able to construct X'-compliant representations of complete sentences, but before we can, we need to address the syntactic representation of tense. The following discussion relies on the notion of do support and on the status of modals and auxiliary do as members of the syntactic category I(nflection); see Modals and auxiliary verbs in English for more details.

In a sentence like (16), the verb waited contains the bound morpheme -ed, which expresses past tense.

 (16) He waited.

If tense morphemes were invariably expressed on the verb in this way, then complete structures for full sentences could be derived by substituting appropriate structures into the argument positions of the verb's elementary tree. However, this is not a general solution, because tense is not always expressed as a bound morpheme on the verb. For instance, in (17), the future tense counterpart of (16), the future tense is expressed by a free morpheme, the modal will.

 (17) He will wait.

Even more strikingly, the past tense in English, though ordinarily expressed as a bound morpheme on the verb, must be expressed by a free morpheme in do support contexts, as shown in (18).

 (18) a. Emphasis: He did wait. b. Negation: He didn't wait. c. Question: Did he wait?

The morphologically variable expression of tense as a free or a bound morpheme raises two related syntactic questions. First, what is the representation of sentences like (17) and (18a), where tense is expressed as a free morpheme? (We postpone discussion of negated sentences and questions until later chapters; see Chapters 6 and 11.) Second, and more generally, how can we represent all sentences in a syntactically uniform way, regardless of how tense is expressed morphologically?

There are two reasons for wanting a uniform representation. First, from a semantic point of view, past and future tense are parallel functions, taking situations (denoted by VPs) as input and returning as output situations that are located in time, either before or after the time of speaking. Second, and very generally, when similar objects are represented in a uniform way, they are easier for the mind to manipulate and computationally more tractable than when they are not so represented. Mathematicians, logicians, computer scientists, and others are therefore fond of finding normal forms or canonical forms for the abstract objects they deal with. An example from daily life is that we impose a normal form on the set of letters in the alphabet (namely, the conventional order in the alphabet song). That way, looking up words in the dictionary is much quicker than it would be if the words were not sorted with reference to the alphabet's normal form. Notice, by the way, that the X' schema of phrase structure under discussion in this chapter is a normal form, and our earlier preference for (12a) over (12b) can be framed as a preference for a representation that is in normal form over one that isn't.

Returning to the problem at hand, we begin by answering the first question in several steps. First, it is clear that (17) and (18a) share a common predicate-argument structure (predicate used here in the sense of Fregean predicate). That is, both of these sentences denote a situation in which someone is waiting, with the sentences differing only as to which point in time the situation holds. We can capture this commonality by taking the elementary tree for the verb wait in (19a) and substituting an argument constituent in the specifier position, yielding (19b).

 (19) a. b.

Second, in accordance with the general approach to syntactic structure that we have been developing, modals and auxiliaries, like all vocabulary items, project elementary trees. The elementary trees for will and auxiliary did are shown in (20).

 (20) a. b.

Substituting the predicate-argument structure in (19b) into the complement position of the elementary trees in (20) yields the structures in (21).

 (21) a. b.

The structures in (21) neatly reflect the semantic relation between tense and situations. The element in I corresponds to the tense function, the complement of I (= VP) corresponds to the function's input (the situation), and the maximal projection of I (= IP) corresponds to the function's output (the situation located in time). There remains a problem, however: the I element and the subject of the sentence are in the wrong order in (21). This problem can be solved by introducing a movement operation that transforms the structures in (21) into those in (22).

 (22) a. b.

 (23) a. Susie drafted the letter.
         (Susie = Agent, subject; the letter = Theme, object)
      b. The letter was drafted (by Susie).
         (the letter = Theme, subject; Susie = Agent, prepositional phrase)

In order to clearly express a phrase's multiple functions, we do not simply move the phrase from one position to another. Instead, movement leaves a trace in the phrase's original position, and the two positions share an index. We will use lowercase letters (i, j, k, ...) as movement indices. A constituent and its traces of movement are called a chain. The elements of a chain are its links. Higher links in a chain are often referred to as the antecedents of lower ones. Finally, the highest and lowest links in a chain are sometimes referred to as the chain's head and tail, respectively.

 Don't confuse this sense of the term 'head' with the sense introduced earlier in connection with X' structures. The head of an X' structure is the structure's lexical projection (or sometimes the vocabulary item dominated by it). The head of a movement chain is the highest link in the chain (the constituent's X' status is irrelevant). Which sense is meant is generally clear from the context. Is the head of the movement chains in (22) a head in the X' sense? No. The reason is that it is possible to replace it by what is clearly a phrase (say, by the student in the red sweater). But the head of a chain can be a head in the X' sense, as we will see in Chapter 6 in connection with verb movement. Finally, we should point out the existence of a special type of chain - what mathematicians would call a degenerate case. It is perfectly possible for a chain to consist of a single constituent. This is the case when a constituent hasn't moved. The chain then contains a single link, which is simultaneously the chain's head and its tail.
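The chain terminology can be summarized in a toy data structure. The following Python sketch is our own illustration (the class and its names are not part of the theory); it records a chain's links from head to tail and accommodates the degenerate one-link case:

```python
class Chain:
    """A movement chain: its links, listed from highest to lowest."""
    def __init__(self, links):
        assert links, "a chain contains at least one link"
        self.links = links

    @property
    def head(self):
        return self.links[0]   # highest link

    @property
    def tail(self):
        return self.links[-1]  # lowest link

    def antecedents_of(self, link):
        """All links higher than the given link."""
        return self.links[:self.links.index(link)]

subject = Chain(["he_i", "t_i"])  # the subject chain in (22), simplified
unmoved = Chain(["the pizza"])    # degenerate chain: head and tail coincide
```

For the degenerate chain, head and tail are the same single link, just as described above.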

We are now in a position to answer the second question posed earlier - namely, how can sentences be represented in a syntactically uniform way regardless of the morphological expression of tense? A simple answer to this question is possible if we assume that English has tense elements that are structurally analogous to auxiliary do, but not pronounced, as shown in (24); we will use square brackets as a convention to indicate such silent elements.

 (24) a. b.

Elementary trees as in (24) make it possible to derive structures for sentences in which tense is expressed as a bound morpheme on the verb along the same lines as for sentences containing a modal or auxiliary do. In other words, they make it possible to impose IP as the normal form for all sentences. In (25), we illustrate the derivation of He waited.

 (25) a. Select elementary tree for verb
      b. Substitute argument
      c. Substitute predicate-argument structure in (25b) into elementary tree for tense
      d. Move subject from Spec(VP) to Spec(IP)
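The steps in (25) can be simulated with two toy operations over trees encoded as nested lists [label, child, ...]. This Python fragment is our own sketch (the encoding and function names are assumptions, not the book's formalism); it derives He waited with a silent past tense element in I:

```python
def substitute(tree, slot, phrase):
    """Fill open substitution sites: frontier nodes labeled `slot`."""
    if isinstance(tree, str):
        return tree
    label, *kids = tree
    if label == slot and not kids:   # an open substitution site
        return phrase
    return [label] + [substitute(k, slot, phrase) for k in kids]

def move_subject(ip, index="i"):
    """Move the subject from Spec(VP) to Spec(IP), leaving a coindexed trace."""
    ibar = ip[1]                     # ip = ["IP", ["I'", I, VP]]
    vp = ibar[2]
    subject, vbar = vp[1], vp[2]
    moved = [subject[0] + "_" + index] + subject[1:]
    new_vp = [vp[0], ["NP", "t_" + index], vbar]
    return ["IP", moved, [ibar[0], ibar[1], new_vp]]

vp = ["VP", ["NP"], ["V'", ["V", "wait"]]]  # (25a) elementary tree for the verb
vp = substitute(vp, "NP", ["NP", "he"])     # (25b) substitute the subject
ip = substitute(["IP", ["I'", ["I", "[PAST]"], ["VP"]]], "VP", vp)  # (25c)
s = move_subject(ip)                        # (25d) move the subject
```

The result s is [IP [NP_i he] [I' [I [PAST]] [VP [NP t_i] [V' [V wait]]]]], the structure corresponding to (25d).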

#### Deriving complex sentences

This section is devoted to the derivation of sentences that contain complement clauses (also known as clausal complements). Some examples are given in (26); the complement clauses are in italics.

 (26) a. We will ask if she left. b. They believe that he came.

Although sentences with complement clauses can become unboundedly long (recall the instances of recursion in Chapter 1), deriving structures for them proceeds straightforwardly along the lines already laid out. If and that are both complementizers, so called because they have the effect of turning independent sentences into the complements of a matrix verb, and they project the elementary trees in (27).

 (27) a. b.

Given elementary trees like (27), we can derive the italicized complement clause in (26a) as in (28).

 (28) a. Elementary tree for verb of complement clause
      b. Substitute argument
      c. Substitute (28b) in elementary tree for tense (24b)
      d. Move subject in complement clause
      e. Substitute (28d) in elementary tree for complementizer (27a)

The structure in (28e) in turn allows us to derive the entire matrix clause, as in (29).

 (29) a. Elementary tree for matrix clause verb
      b. Substitute arguments, including clausal complement (28e)
      c. Substitute (29b) in elementary tree for modal (20a)
      d. Move subject in matrix clause

The structure in (29d) illustrates recursion, in the sense defined in (30).

 (30) a. A structure is recursive iff it contains at least one recursive node.
      b. A node is recursive iff it dominates a node distinct from it, but with the same label.

 The recursive nodes in (29d) are the higher IP, I', VP, and V' nodes (and no others). Note that a recursive node need not be the root node of a tree, and that it can be any projection level (XP, X', or X). The lower IP, I', VP, and V' nodes are not recursive nodes, since they don't dominate another instance of the same category. For a node to be recursive, it is not enough that the tree contains a second instance of the category somewhere. The first node has to dominate (though not necessarily immediately dominate) the second one. For instance, none of the NounPhr nodes in (29d) is recursive.
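The definitions in (30) can be checked mechanically. In the following Python sketch (our own illustration; trees are nested lists [label, child, ...], and the example trees are simplified), a node is recursive iff its own label recurs among the nodes it dominates:

```python
def dominated_labels(tree):
    """Labels of all nodes that `tree` (properly) dominates."""
    labels = []
    if not isinstance(tree, str):
        for child in tree[1:]:
            if not isinstance(child, str):
                labels.append(child[0])
                labels.extend(dominated_labels(child))
    return labels

def is_recursive_node(tree):
    """(30b): a node is recursive iff it dominates a distinct node
    with the same label."""
    return not isinstance(tree, str) and tree[0] in dominated_labels(tree)

def is_recursive_structure(tree):
    """(30a): a structure is recursive iff it contains a recursive node."""
    if isinstance(tree, str):
        return False
    return is_recursive_node(tree) or any(
        is_recursive_structure(c) for c in tree[1:])

# Simplified fragment of a clausal complement structure:
# the matrix VP dominates the embedded VP (IP layer omitted for brevity).
embedded_vp = ["VP", ["V'", ["V", "left"]]]
matrix_vp = ["VP", ["V'", ["V", "ask"],
                    ["CP", ["C'", ["C", "if"], embedded_vp]]]]
```

Here is_recursive_node(matrix_vp) holds, while the embedded VP, which dominates no further VP, is not a recursive node - matching the observation that only the higher nodes in (29d) are recursive.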

#### Modification is different

The elementary trees introduced so far allow us to represent two of the three basic linguistic relations discussed in Chapter 3: namely, argumenthood and predication. As we have seen, semantic arguments of a verb can be represented by substituting syntactic arguments at one of the two argument positions in the verb's elementary tree: either the complement position or the specifier position. VPs and IPs can be treated as arguments (specifically, as complements) of I and C, respectively. And finally, although predication is not reducible to argumenthood (recall from Chapter 3 that expletive subjects are required independently of a verb's semantic requirements), subjects occupy specifier positions regardless of whether they are semantic arguments or not. In other words, predication does not require a special structural relationship uniquely associated with it.

An important remaining question is how to represent the modification relation using the X' schema developed so far. In principle, modification might resemble predication in not requiring a structural relation of its own. As it turns out, however, neither of the two head-argument relations (head-specifier, head-complement) adequately represents the relation between a head and its modifier. As we have seen, when a verb combines with a complement, the category of the resulting constituent (V') is distinct from that of the verb (V) (recall the contrast between (4) and (6)), and when the verb and the complement in turn combine with the specifier, the category of the resulting constituent (VP) is distinct yet again (recall the contrast in (7)). By contrast, modifying a verb-complement combination like ate the pizza in (31) does not change the syntactic category of the resulting constituent, which remains V' (the modifier is in italics).

 (31) a. The children ate the pizza. b. The children ate the pizza with gusto.

This is evident from the do so substitution facts in (32), where either the unmodified or the modified verb-complement combination can be replaced by a form of do so.

 (32) a. The children ate the pizza with gusto; the children did so with gusto. b. The children ate the pizza with gusto; the children did so.

The same pattern holds for intransitive verbs that combine with a modifier.

 (33) a. The children ate with gusto; the children did so with gusto. b. The children ate with gusto; the children did so.

The do so substitution facts in (32) and (33) motivate the syntactic structure for (31b) that is given in (34) (for clarity, we focus on the internal structure of the VP, omitting the projection of the silent past tense element and subject movement).

 (34)

The structural relation of the modifier with gusto to the spine of the V projection is known as the adjunct relation, and the modifier itself is said to be an adjunct. Modifiers are always represented as adjuncts in syntactic structure. As a result, 'modifier' and 'adjunct' tend to be used somewhat interchangeably. In this book, however, we will distinguish between the two terms as follows. We will use 'modifier' when we want to highlight a phrase's semantic function of qualifying or restricting the constituent being modified. For instance, as we mentioned in Chapter 3, a verb like laugh denotes the set of entities that laugh. Combining the verb with a modifier like uproariously yields the expression laugh uproariously, which denotes a subset of the set denoted by laugh. We will use the term 'adjunct' when focusing on a constituent's structural position in a tree. As we will see later on in this chapter, it is possible for semantic arguments to be represented as syntactic adjuncts. This does not change the semantic argument into a modifier, however!

#### The need for an adjunction operation

The structure in (34) raises the question of what elementary tree for transitive ate is involved in its derivation. 'Un-substituting' both arguments and the modifier, as we did earlier for trees containing only arguments, yields the structure in (35).

 (35) ← not a possible elementary tree!

Is the structure in (35) a satisfactory elementary tree? Clearly, allowing it means that our grammar now contains two elementary trees for transitive ate. At first glance, this doesn't seem like a serious problem, since we already allow the two elementary trees for transitive and intransitive ate in (36).

 (36) a. b.

But (35) differs in one important respect from the structures in (36): it is a recursive structure. This has an extremely undesirable consequence: namely, that if we were to derive structures like (34) by means of elementary trees like those in (35), there would be no principled way to avoid an unbounded number of such elementary trees. For instance, the derivations of the sentences in (37), with their increasing number of modifiers, would each require a distinct elementary tree for drink, and each additional modifier would require an additional elementary tree.

 (37) a. We would drink lemonade. b. We would drink lemonade in summer. c. We would drink lemonade in summer on the porch. d. We would drink lemonade in summer on the porch with friends.

For the moment, we will use the adjunction operation to integrate modifiers into syntactic structures. As we will see in Chapter 6, the adjunction operation is also used for other purposes. Whatever its linguistic purpose, however, it is always the same formal (= graph-theoretical) operation: namely, a two-step process that targets a particular node. When the purpose of adjunction is to integrate a modifier into a larger structure, as it is here, the target of adjunction is an intermediate projection, indicated in red in (38a). The first step in carrying out adjunction is to make a clone of the target of adjunction that immediately dominates the original node, as in (38b). The second step is to attach the tree for the modifier as a daughter of this higher clone, as in (38c).

 (38) a. Select target of adjunction
      b. Clone target of adjunction
      c. Attach modifier as daughter of higher clone
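The two-step operation in (38) translates directly into code. The following Python sketch is our own illustration (trees as nested lists [label, child, ...]; right-adjunction only): step one clones the target so that the clone immediately dominates the original, and step two attaches the modifier as the clone's daughter:

```python
def adjoin(tree, target, modifier):
    """Adjoin `modifier` at the first node labeled `target`."""
    new_tree, _ = _adjoin(tree, target, modifier)
    return new_tree

def _adjoin(tree, target, modifier):
    if isinstance(tree, str):
        return tree, False
    if tree[0] == target:
        # Step 1: clone the target so it immediately dominates the original.
        # Step 2: attach the modifier as a daughter of the clone.
        return [target, tree, modifier], True
    kids, changed = [], False
    for child in tree[1:]:
        if not changed:
            child, changed = _adjoin(child, target, modifier)
        kids.append(child)
    return [tree[0]] + kids, changed

vp = ["VP", ["NP", "the children"],
      ["V'", ["V", "ate"], ["NP", "the pizza"]]]
result = adjoin(vp, "V'", ["PP", "with gusto"])
```

The result corresponds to the structure in (34): a higher V' immediately dominating the original V' (ate the pizza) and the adjoined PP (with gusto).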

Deriving the rest of the structure for the entire sentence proceeds as outlined earlier, as shown in (39).

 (39) a. Substitute arguments
      b. Substitute (39a) in elementary tree for tense
      c. Move subject

For expository reasons, we illustrate the derivation of the sentence with adjunction preceding substitution and movement. However, the order of adjunction with respect to the other operations is irrelevant.

We conclude this section with a general point concerning intermediate projections and adjunction to them. Given that words (or syntactic atoms of some sort) combine with one another to form phrases, any theory of syntax must assume heads and phrases. But distinguishing between two types of phrases (intermediate projections vs. maximal projections) seems inelegant, and attempts have therefore been made to eliminate intermediate projections, along with the possibility of adjunction to them. For instance, given our current assumptions, sentences like (40) force us to allow adjunction to intermediate projections.

 (40) a. [IP They [I' never [I' will agree to that. ] ] ] b. God let [VP there [V' suddenly [V' be light. ] ] ]

However, if the IP and the small clause VP in such sentences were 'split up' into two separate projections, it would be possible to eliminate the intermediate projections and to adjoin the modifiers to maximal projections instead. This is illustrated in (41), where IP has been split into Agr(eement)P and T(ense)P, and the small clause VP has been split into Pred(ication)P and a lower VP.

 (41) a. [AgrP They [TP never [TP will agree to that. ] ] ] b. God let [PredP there [VP suddenly [VP be light. ] ] ]

A useful way to frame the issue is as a trade-off between two options. The first option buys a relatively small set of familiar syntactic categories at the cost of assuming intermediate projections. The second option buys an intuitively appealing two-level phrase structure scheme at the cost of a proliferating and increasingly abstract set of syntactic categories. As befits a science, current syntactic theory generally prefers the second option: generality at the cost of abstraction. In this introductory textbook, however, we will make the opposite choice and continue to assume the classic X' schema in (10) with its three bar levels.

Given this choice, we know from (40) that adjunction must be able to target intermediate projections. As we will see in Chapter 6, adjunction must also be able to target heads. Assuming three bar levels, can adjunction also target maximal projections? The simplest answer (taking 'simple' to mean 'maximally general') is 'yes'. For expository simplicity, however, we will restrict adjunction in this textbook to heads and intermediate projections.

#### A typology of syntactic dependents

Each of the three types of syntactic dependents that we have been discussing - complements, specifiers, and adjuncts - stands in a unique structural relation to the head and to the spine of the head's projection. Complements and adjuncts are both daughters of intermediate projections, but they differ in that complements are sisters of heads, whereas adjuncts are sisters of the next higher projection level. As sisters of intermediate projections, adjuncts resemble specifiers. But again, the two relations are distinct because adjuncts are daughters of intermediate projections, whereas specifiers are daughters of maximal projections. These structural relations and distinctions are summarized in (42), which also includes the formal operations that fill or create the positions in question.

 (42)

 | Relation to head | Sister of ... | Daughter of ... | Formal operation |
 |------------------|---------------|-----------------|------------------|
 | Complement | Head | Intermediate projection | Substitution |
 | Adjunct | Intermediate projection | Intermediate projection | Adjunction |
 | Specifier | Intermediate projection | Maximal projection | Substitution |
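The table in (42) amounts to a small decision procedure. As a toy Python sketch (our own; the function and its encoding are assumptions), a dependent's type follows from the bar levels of its sister and its mother on the spine, with 0, 1, and 2 standing for head, intermediate, and maximal projection:

```python
def dependent_type(sister_bar, mother_bar):
    """Classify a dependent by the structural relations in (42).
    sister_bar / mother_bar: bar level (0, 1, or 2) of the dependent's
    sister and mother node on the head's spine."""
    if sister_bar == 0 and mother_bar == 1:
        return "complement"  # sister of head, daughter of X'
    if sister_bar == 1 and mother_bar == 1:
        return "adjunct"     # sister and daughter of X'
    if sister_bar == 1 and mother_bar == 2:
        return "specifier"   # sister of X', daughter of XP
    return "not a licit dependent position"
```

Note that complements and specifiers are filled by substitution, while the adjunct configuration is created by the adjunction operation itself.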

#### More on the distinction between complements and adjuncts

Given the table in (42), it is easy to tell whether a constituent is represented in a particular tree structure as a complement or as an adjunct. However, it is not always self-evident whether a phrase is a complement or an adjunct as a matter of linguistic fact.

 Remember that tree structures are models of linguistic facts. Just because it is possible to build a tree that represents a certain phrase as a complement doesn't mean that the phrase actually is a complement. In other words, trees can "lie".

The most reliable way to determine the relation of a particular phrase to a verb is to use do so substitution. If a phrase need not be included as part of the sequence being replaced by do so, then it is an adjunct. If it must be included, then it is a complement. Using this test, we find that phrases specifying cause or rationale, time, location, or manner are generally adjuncts, even if they are noun phrases. Some examples, including the results of do so substitution, are given in (43); the adjuncts are in italics.

(43)

a. Rationale: They waited *for no good reason*, but we did so *for a very good one*.
b. Duration: They waited *(for) a day*, but we did so *(for) a month*.
c. Location: They waited *in the parking lot*, but we did so *across the street*.
d. Manner: They waited *patiently*, but we did so *impatiently*.

In the examples we have seen in this book so far, semantic arguments are expressed as syntactic arguments (or not at all), and modifiers are expressed as adjuncts. It is possible, however, for semantic arguments to be expressed in the syntax as adjuncts (this is the mismatch case mentioned earlier in connection with binary-branchingness). For example, as we mentioned in Chapter 3, rent, from a semantic point of view, is a five-place predicate, with arguments denoting landlord, tenant, rental property, amount of money, and lease term. Some of these semantic arguments are expressed as syntactic arguments. For instance, in (44), the phrase denoting the rental property is a complement, as is evident from the results of do so substitution.

(44)

a. Dennis rented the apartment to Lois.
b. * ... and David did so the studio to Rob.

On the other hand, do so substitution shows that the phrase denoting the lease term is an adjunct, even though lease terms are semantic arguments of rent on a par with rental properties.

(45)

a. Dennis rented Lois the apartment for two months.
b. ✓ ... and David did so for a whole year.

It is tempting to assume that complementhood and obligatoriness are two sides of the same coin, as summarized in (46).

(46)

| | Wishful thinking | Actual situation |
|---|---|---|
| a. If a syntactic dependent is obligatory, then it is a complement. | TRUE | TRUE |
| b. If a syntactic dependent is a complement, then it is obligatory. | TRUE | FALSE |

But as the rightmost column indicates, the biconditional relationship doesn't hold. It is true that obligatory syntactic dependents are complements. For instance, the contrast in (47) is evidence that the noun phrase following devour is a complement, a conclusion that is borne out by do so substitution in (48).

(47) Every time I see him, ...

a. * ... he's devouring.
b. ... he's devouring a six-inch steak.

(48)

a. * He devoured a hamburger and french fries, and I did so six samosas.
b. He devoured a hamburger and french fries, and I did so, too.

But not all complements are obligatory. The grammaticality of both (49a) and (49b) shows that the phrase French fries in (49b) is optional. But the ungrammaticality of (49c) shows that it is nevertheless a complement.

(49)

a. He ate, and I did so, too.
b. He ate French fries, and I did so, too.
c. * He ate French fries, and I did so three samosas.

Although (46b) is false, (46a) does have the consequence in (50) (derived by the modus tollens rule of propositional logic).

 (50) If a syntactic dependent is not a complement, it is not obligatory.
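The modus tollens step can be verified mechanically. The following truth-table sketch (ours, purely illustrative; the text does not use code) checks that (46a) and its contrapositive (50) agree in every case:

```python
from itertools import product

def implies(p, q):
    """Material implication: p -> q is false only when p is true and q is false."""
    return (not p) or q

# (46a): obligatory -> complement
# (50):  not complement -> not obligatory   (the contrapositive)
for obligatory, complement in product([True, False], repeat=2):
    premise = implies(obligatory, complement)                  # (46a)
    contrapositive = implies(not complement, not obligatory)   # (50)
    assert premise == contrapositive

print("(46a) and (50) agree in all four cases")
```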

The two valid generalizations in (46a) and (50) can be summarized succinctly as in (51).

(51)

a. Obligatory syntactic dependents are complements.
b. Adjuncts are optional.

#### Notes

1. Why is V' read as V-bar when it contains not a bar, but a prime symbol? The reason is that when the idea of bar levels was introduced in the 1970s, the various levels were distinguished by horizontal bars over a syntactic category. The lowest level had no bars, the first level one, and the second two. But back in the days of typewriters, such overbars were cumbersome to type (you typed the symbol, rolled up the platen a bit, backspaced, typed an overbar, repeated those steps for each additional overbar, and then rolled the platen back down the right amount). Overbars are also expensive to typeset, and even today, they aren't part of the standard character sets for HTML documents such as this one. Therefore, it was and continues to be convenient to substitute prime symbols for overbars. However, linguists have failed to update their terminology (terminological inertia again!), and so the old term 'bar' is still with us.

2. The semantics of tense we are assuming here is oversimplified, but sufficient for our purposes.

3. The representations in (21) look like appropriate representations for the questions Will he wait? and Did he wait? But notice that they can't be, since they contain unfilled substitution nodes. Moreover, as we will see in Chapter 11, there is reason to postulate a projection above IP in the representations of questions.

4. In the syntactic literature, alphabetical subscripts are used as movement indices but also to specify reference and coreference (see Reference and related notions). For clarity, this textbook uses lowercase letters as movement indices exclusively. If we need referential indices, we will use the natural numbers.

5. Sentences like (i) appear to require adjunction to maximal projections (IP in the present case).

 (i) Tomorrow, we will eat pizza.

It has been argued, however, that the clause-initial phrase occupies the specifier position of the projection of a (silent) head higher than I. Analogous reasoning would extend to examples like (ii), where the silent head is even higher.

 (ii) Tomorrow, what will we eat?

7. A similarly tempting (but false) biconditional relationship was discussed in Chapter 2.

#### Exercise 4.1

What is the X' status of Fregean and of Aristotelian predicates? You should be able to answer in one or two brief sentences.

#### Exercise 4.2

The trees in (1) fail to correctly account for certain grammaticality judgments. What are the judgments?

(1) a. [tree diagram] b. [tree diagram]

#### Exercise 4.3

A. Are the italicized phrases in (1) syntactic arguments or adjuncts? Explain. There is no need for extensive discussion beyond the syntactic evidence (the do so substitution facts) on which you base your conclusions.

(1)

a. They waited for us.
b. This program costs twenty dollars.
c. We drove to Denver.
d. We worded the letter carefully.
e. They are behaving very inconsiderately.
f. This volcano might erupt any minute.

B. Using the grammar tool in x-bar ch4, build structures for the sentences in (1). Needless to say, the structures you build should be consistent with the evidence you gave in Part A.

#### Exercise 4.4

A. Using the grammar tool in x-bar ch4, build structures for the sentences in (1). Motivate your attachment of the lowest argument or adjunct in each sentence (in other words, give the judgments that lead you to attach the relevant phrase the way you do).

(1)

a. They demolished the house.
b. Mona Lisa called the other neighbor.
c. Mona Lisa called the other day.
d. You will recall that her smile amazed everyone.
e. Most people doubt that Mona Lisa lives in Kansas.
f. My friend wondered if Mona Lisa would come to his party.

B. Indicate all recursive nodes in the structures that you build for (1). You can do this by using the grammar tool's highlighting feature (see the "Instructions" menu).

#### Exercise 4.5

A. (1) is structurally ambiguous. Paraphrase the two relevant interpretations. (Focus on the structural ambiguity, ignoring the referential ambiguity of they.)

 (1) They claimed that they paid on the 15th.

B. Using the grammar tool in x-bar ch4, build a structure for each of the interpretations, indicating which structure goes with which interpretation.

C. Indicate all recursive nodes in the structures that you build for (1). You can do this by using the grammar tool's highlighting feature (see the "Instructions" menu).

#### Exercise 4.6

A. Make up a sentence with two adjuncts. Provide syntactic evidence that the adjuncts are adjuncts rather than syntactic arguments. Then build the structure for the sentence using the grammar tool in x-bar ch4. Finally, switch the linear order of the adjuncts, and build the structure for the resulting word order variant of your original sentence.

B. Make up a simple sentence in which one of the semantic arguments of the verb is expressed in the syntax as an adjunct. Provide evidence that the adjunct is one. Finally, build the structure for your sentence using the grammar tool in x-bar ch4.

#### Problem 4.1

A. Is it possible for an adjunct to precede a complement? How about immediately precede?

What if you are allowed to "swivel" at X' in the X' schema, so that YP precedes X? (The resulting head-final structures are discussed in more detail in Chapter 5.) Paste the following structures into a tree generator to see what is meant.

(1) a. Head-initial (= (10)):

```
[XP [YP spec^] [X' [X head] [ZP comp^]]]
```

b. Head-final:

```
[XP [YP spec^] [X' [ZP comp^] [X head]]]
```
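As an aside, labeled bracket notation of this kind is easy to process mechanically. The following sketch (ours, not part of the exercise; the `parse` helper and the tuple representation are assumptions) turns a bracketed string into nested (label, children) tuples:

```python
import re

def parse(s):
    """Parse labeled bracket notation like "[XP [YP spec^] ...]" into
    (label, children) tuples; bare words become leaf strings."""
    # Tokens are "[", "]", or any run of non-bracket, non-space characters.
    tokens = re.findall(r"\[|\]|[^\[\]\s]+", s)
    pos = 0

    def node():
        nonlocal pos
        pos += 1                      # consume "["
        label = tokens[pos]           # first token after "[" is the label
        pos += 1
        children = []
        while tokens[pos] != "]":
            if tokens[pos] == "[":
                children.append(node())
            else:
                children.append(tokens[pos])
                pos += 1
        pos += 1                      # consume "]"
        return (label, children)

    return node()

tree = parse("[XP [YP spec^] [X' [X head] [ZP comp^]]]")
print(tree)
```

Running `parse` on the head-final variant differs only in the order of the children under X', which is exactly the "swivel" the problem asks about.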

B. Is it possible for two adjuncts to be sisters? Explain.

C. In the chapter, we define adjunction as an operation that clones a target node and attaches a phrase as the daughter of the higher clone. Imagine an operation that clones the target node and attaches a phrase as the daughter of the lower clone. Would such an operation be useful? Explain.