
Speaker Series

The Linguistics Speaker Series is a series of invited talks, organized each semester by the grad students in the Penn Department of Linguistics. We invite students and professors from various subfields and various universities to speak about their current research. All talks are open to the greater Penn linguistics community.

For Spring 2013, the speaker series is organized by Sabriya Fisher. You can see a list of past speakers here.

Schedule

Talks are held on Thursday afternoons in the IRCS Large Conference Room unless otherwise indicated. IRCS is at 3401 Walnut Street, 4th floor, suite 400A. Make two lefts out of the elevators, and the Large Conference Room is the very first door on the left within IRCS, room 470 (Directions and Map).

Announcements of talks are posted below, in the department, and to the Penguists mailing list.

3:30-4:30 Talk
4:30-5:00 Question and Answer Period

Speakers for Spring 2013:

  • January 24 - Colin Phillips, University of Maryland
  • February 28 - Jim Wood, Yale University
  • March 14 - Suzanne Evans-Wagner, University of Michigan
  • March 28 - Larry Hyman, University of California, Berkeley
  • April 4 - Veneeta Dayal, Rutgers University
  • April 11 - Ricardo Bermudez-Otero, University of Manchester

January 24 - Colin Phillips, University of Maryland
Generating expectations and meanings in comprehension and production

We often have expectations about utterances before they are uttered. How we do this, in language production and comprehension alike, has implications for practical concerns and for theoretical questions about language architecture. The ability to generate reliable expectations may be a key enabler of robust language understanding in noisy environments. Understanding the (non-)parallels between the generative mechanisms engaged in comprehension and production is essential for any attempt to close the gap between grammatical 'knowledge' and language use systems. In this talk I explore how we generate expectations about word-level and sentence-level meanings. One set of studies uses behavioral interference paradigms to examine the time-course of verb generation when Japanese speakers plan their utterances. Two other series of studies focus on electrophysiological evidence for the generation of verb expectations in Chinese, Spanish, and English. Evidence for advance generation of verb meanings is found in comprehension and production alike. But we find that different types of linguistic information drive expectations on different time scales. In verb-final clauses, verb expectations are initially driven only by lexical associations, and effects of compositional interpretations are observed only after a delay. Similar mechanisms operate in production and comprehension, but they yield different outputs, depending on the information available to the language user in a specific task.

February 28 - Jim Wood, Yale University
The syntactic underpinnings of passive voice

It is common to say that sentences like John was attacked (by a dog) are derived by “passivizing” an active sentence like A dog attacked John. A number of things seem to happen when a verb is passivized in a canonical passive: an object is “promoted” to subject, the subject is “demoted” to an optional by-phrase or implicit argument, the main verb becomes a participle, and a copula is inserted. However, each of these things is demonstrably independent of the passive. Given this, it seems highly unlikely that “passive” is a primitive notion in grammar. In this talk, I propose that we can make sense of the above properties if we properly divide the workload between syntax and semantics. (i) In the syntax, there can be a “needy” Voice head that wants to introduce a subject, but other, higher heads can select for this needy Voice head, preventing it from getting what it wants. (ii) In the semantics, the (needy) Voice head may introduce an agent role, but the agent role will only survive under two circumstances: either the Voice head got its subject in the syntax (in which case that subject will bear the agent role), or else there has to be some higher head which is capable of existentially closing over the agent role. Dividing things up in this way not only explains a number of facts about canonical passives, but also explains the morphosyntactic properties of other “passive-like” constructions, including -able adjectives and the Icelandic “modal passive -st” construction. The claim is that languages have several ways of dealing with the syntactic properties of the Voice head, and several ways of dealing with the semantically-introduced agent role. What we call “passive” is a particular combination of these, but not a primitive feature of the syntax or the semantics.

March 14 - Suzanne Evans-Wagner, University of Michigan
What might individual differences tell us about language change across the lifespan?

A growing number of panel and trend studies demonstrate that individuals can continue to participate in community language change in progress, even after early adulthood. Yet not all members of a community exhibit this behavior. So what, if anything, do the lifetime participators have in common? While high exposure to the change via social networks must play a role, in this talk I discuss the possibility that lifetime participators also share cognitive characteristics that make them especially receptive to community linguistic variability. I report on two replications of Labov et al.'s (2011) newsreader job suitability experiment, in which participants' social competence, attention to patterns, and other relevant characteristics were measured using clinical diagnostics of Autism Spectrum Disorder. The results suggest that some of these measures do correlate with individual differences in sensitivity to sociolinguistic variation.

March 28 - Larry Hyman, University of California, Berkeley
Postlexical Construction Tonology

Although it is common for “replacive” tonal patterns to be assigned by word-level morphological constructions, it is almost unknown for such overriding schemas to be assigned by specific phrase-level syntactic constructions. I begin by demonstrating that Kalabari, a head-final Ijoid language of Nigeria, does exactly this: Whenever a noun is non-initial within the noun phrase, it loses its tones and receives different templatic “melodies” depending on the constructional word class of the preceding specifier/modifier (Harry & Hyman 2012). I first document the Kalabari facts, including a similar interaction between a direct object and the following verb, consider related cases from Tommo So (Dogon, Mali; Heath & McPherson [in press], McPherson 2012), Chimwiini (Bantu, Somalia; Kisseberth 2009), Urarina (Isolate, Peru; Olawsky 2006), and Yagaria (TNG, New Guinea; Ford 1993), and argue that although they apply to entire syntactic phrases, such tonal assignments have all of the properties of morphological rules. The question is whether the above effects of one word or word class on another constitute further evidence that there are some things that only tone can do (Hyman 2011), or whether we can relate these cases to better-known constructional effects on the segmental make-up of words.

April 4 - Veneeta Dayal, Rutgers University
An Aspect Based Account of Number Neutrality in Pseudo-Incorporation

A defining characteristic of pseudo-incorporation structures is that nominals that appear to be singular in form nevertheless have properties associated with plurals. For example, they can be complements of collective predicates like 'collect', as in the possibility of both 'stamp-collect' and 'stamps-collect' in a language like Hungarian. In spite of this, I argue that the interpretation of singular forms is not plural or number neutral. One argument for this is that they are not acceptable as complements of all collective predicates: Hungarian allows 'examples-compare' but not 'example-compare'. The appearance of neutrality can be traced to aspectual operators and/or lexical properties of certain collective verbs.

April 11 - Ricardo Bermudez-Otero, University of Manchester
The stem-level syndrome

This talk seeks to describe and explain the peculiar properties of phonological processes applying in cyclic domains smaller than the word, i.e. in sublexical domains. In stratal-cyclic phonological frameworks such as Lexical Phonology and Stratal Optimality Theory, such processes are assigned to the highest stratum in the grammar: the stem level. To characterize the stem-level syndrome, rule-based Lexical Phonology imposed special conditions on rule application at the stem level: stem-level rules were claimed to exhibit stratum-internal cyclic reapplication, to be structure-preserving, and to undergo blocking in nonderived environments. English stress assignment provides a classic example of cyclic reapplication within the same stratum: in a word like [WL [SL [SL imàgin-] átion-] less-ness], foot creation applies twice in the two inner stem-level cycles, and does not apply in the outer word-level domain.

The Lexical Phonology approach to the stem-level syndrome made a number of incorrect predictions. Notably, English has a large set of phonological processes whose domain excludes word-level suffixes, but which nonetheless do not show cyclic reapplication, structure preservation, or blocking in nonderived environments. The GOAT split in the London vernacular is a particularly salient example. In contrast, the stem-level syndrome has altogether dropped off the agenda in current optimality-theoretic frameworks relying on output-output correspondence. Such frameworks do not recognize the notion of cyclic domain, and so cannot sort phonological processes into classes according to domain size. This stance is also unsatisfactory insofar as it fails to account for striking generalizations: in particular, cyclic reapplication within the same stratum is only found at the stem level.

This talk outlines an alternative approach to the stem-level syndrome, where the special properties of stem-level phonological processes emerge from more fundamental grammatical mechanisms. I propose that the distinguishing trait of stem-level linguistic expressions is that they are stored nonanalytically, i.e. as whole output forms generated by the stem-level morphology and phonology. In contrast, word-level and phrase-level constructs are either not stored, or stored analytically (i.e. decomposed as strings of stem-level pieces). Given independently motivated assumptions about morphological blocking, together with the input-output faithfulness technology of Optimality Theory, these postulates about lexical storage make accurate predictions about the behaviour of stem-level phonological processes. Notably, cyclic reapplication turns out to be closely correlated with neutralization (Chung's Generalization).

Last Modified: 06 Apr 2013
Department of Linguistics
619 Williams Hall (campus map)
University of Pennsylvania
Philadelphia, PA 19104-6305
Telephone: (215) 898-6046
Fax: (215) 573-2091
For more information, contact Amy Forsyth at