Event

Yiran Chen will be defending their dissertation proposal on Friday, March 25 at 2:30 PM. The defense will take place in person in the Linguistics department library and on Zoom.

The proposal document can be found here, and an abstract can be found below.

------------------------------------

Title: Regularization and probabilistic learning in the acquisition of linguistic variation
Supervisor: Kathryn Schuler
Committee: Anna Papafragou (Chair), Meredith Tamminga, Charles Yang

Abstract:

Sociolinguistic research has accumulated abundant evidence that many correspondences in language are not deterministic but probabilistic. Children need to learn these probabilistic correspondences since they are an integral part of linguistic knowledge. Indeed, emerging evidence from developmental sociolinguistics suggests that children match probabilistic sociolinguistic variation from a very young age (Labov 1989; Roberts 1997, 2002; Smith et al. 2007, 2013; Miller 2013). However, an important finding in language acquisition, supported by naturalistic and experimental data, is that children have a strong tendency to regularize probabilistic patterns in their language input (Singleton & Newport 2004; Hudson Kam & Newport 2005). Given that both behaviors are robustly attested and serve important functions in language acquisition, this dissertation explores what factors lead children to regularize or to reproduce the probabilistic variation in their language input.

To address this question, this dissertation first integrates findings about how learners approach probabilistic linguistic input from separate disciplines that pose different research questions and employ different methodologies (Chapter 2). Based on this literature review, the presence of conditioning factors, amount of exposure, input reliability, and shared variability were identified as key modulating factors. The following two chapters focus on the latter two factors, input reliability (Chapter 3) and shared variability (Chapter 4), testing each independently for its role in modulating learners' regularization versus veridical matching of probabilistic variation. In completed experiments employing the artificial language learning paradigm, both factors were found to independently modulate adult learners' learning of probabilistic variation: adults regularized more when input reliability was perceived as low, and learned a probabilistic constraint more accurately when the constraint was shared across multiple input speakers. Given both the similarities and differences between adult and child learners facing probabilistic variation, further studies employing the same paradigms with children are proposed to test whether these factors indeed contribute to children's acquisition of meaningful linguistic variation.