Spencer Caplan will be defending his dissertation "The Immediacy of Linguistic Computation" on Friday October 8th, 2021 at 1:30pm EDT. The defense is public, and all are welcome to attend.
Title: The Immediacy of Linguistic Computation
Supervisors: Charles Yang, John Trueswell
Committee: Mitch Marcus
Date: October 8th
Time: 1:30pm EDT
Location: 3401 Walnut Street, Room 401B (the Active Learning Classroom)
This dissertation investigates the wide-ranging implications of a simple fact: language unfolds over time. Whether as cognitive symbols in our minds or as their physical realization in the world, if linguistic computations are not made over transient and shifting information as it occurs, they cannot be made at all. This dissertation explores the interaction between the computations, mechanisms, and representations of language acquisition and language processing, with a central theme being the study of the temporal restrictions inherent to information processing that I term the Immediacy of Linguistic Computation. This program motivates the study of intermediate representations recruited during online processing and acquisition, rather than simply an input/output mapping. While ultimately extracted from linguistic input, such intermediate representations may differ significantly from the underlying distributional signal. I demonstrate that, due to the immediacy of linguistic computation, such intermediate representations are necessary, discoverable, and offer an explanatory connection between competence (linguistic representation) and performance (psycholinguistic behavior). The dissertation comprises four case studies. First, I present experimental evidence from a perceptual learning paradigm that the intermediate representation of speech consists of probabilistic activation over discrete linguistic categories but includes no direct information about the original acoustic-phonetic signal. Second, I present a computational model of word learning grounded in category formation. Instead of retaining experiential statistics over words and all their potential meanings, my model constructs hypotheses for word meanings as they occur. Uses of the same word are evaluated (and revised) with respect to the learner's intermediate representation rather than to their complete distribution of experience.
In the third case study, I probe predictions about the time-course, content, and structure of these intermediate representations of meaning via a new eye-tracking paradigm. Finally, the fourth case study uses large-scale corpus data to explore syntactic choices during language production. I demonstrate how a mechanistic account of production can give rise to highly "efficient" outcomes even without explicit optimization. Taken together, these case studies represent a rich analysis of the immediacy of linguistic computation and its system-wide impact on the mental representations and cognitive algorithms of language.