
Dissertation Proposal Defense: Budnick

May 15, 2020 at 1:00 p.m. | This event will be held virtually due to coronavirus restrictions.


Ryan Budnick will be defending his dissertation proposal, tentatively entitled "Limited Learners in Language Acquisition," on Friday, May 15th, at 1:00 p.m.

------------------------

Title: Limited Learners in Language Acquisition

Supervisor: Charles Yang

Proposal Committee: Gareth Roberts (chair), Kathryn Schuler & Mitch Marcus

Time: 1:00 p.m. - 2:30 p.m.

------------------------


Abstract:

Language learners are limited: they bring limited cognitive resources to bear on limited data. These limitations dominated early formal modelling of language acquisition, but new approaches to computational modelling have driven researchers to afford more powerful computational abilities to language learners. In this dissertation proposal, I formulate new abstract limitations, motivated by psycholinguistic evidence and theoretical computer science, that provide new insights into old data and suggest future directions for modelling work across three cases.

First, I examine the acquisition of lexical stress systems. Based on developmental evidence, I argue that learners acquire much of their prosodic knowledge prior to robust word segmentation, so regular stress systems are learnable only if they are parseable. Simple restrictions on the learner's memory and revision capacities are then sufficient to generate the non-rhythmic stress typology.

Second, I turn to the learning of word meanings under referential ambiguity. I formalize a measure of attention and demonstrate that the models which reflect learners' actual behavior are those with restricted attention. This suggests a domain in which to search for models of broader learning problems.

Third, I consider parametric learning models whose states are characterized as probability distributions over the grammar space. Researchers have recently argued that the failure of Yang's (2002) naïve learner to converge on certain learning problems is evidence that learners actually have additional computational power. I construct a simple modification of the naïve model that is sufficient to learn in these difficult cases and generalizes to probabilistic OT learning.
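For readers unfamiliar with the third case: Yang's (2002) naïve learner is a variational learner that updates a probability distribution over grammars with a linear reward-penalty scheme. The sketch below is a toy two-grammar illustration of that general scheme, not code from the proposal; the grammars, data, and parsing function are invented for the example.

```python
import random

def variational_learn(data, can_parse, n_grammars, gamma=0.02, seed=1):
    """Minimal sketch of a linear reward-penalty variational learner in
    the style of Yang (2002). The learner maintains a probability
    distribution over candidate grammars; for each input sentence it
    samples a grammar, rewards that grammar if it parses the sentence,
    and penalizes it otherwise."""
    rng = random.Random(seed)
    p = [1.0 / n_grammars] * n_grammars  # start from a uniform distribution
    for s in data:
        g = rng.choices(range(n_grammars), weights=p)[0]
        if can_parse(g, s):
            # reward: p_g <- p_g + gamma * (1 - p_g); others scaled down
            p = [(1 - gamma) * q for q in p]
            p[g] += gamma
        else:
            # penalize: p_g <- (1 - gamma) * p_g; freed mass is shared
            # equally among the competing grammars
            share = gamma / (n_grammars - 1)
            p = [share + (1 - gamma) * q for q in p]
            p[g] -= share
    return p

# Toy learning problem (hypothetical): grammar 0 parses every sentence,
# grammar 1 parses only sentences divisible by 3, so grammar 1 is
# penalized on the remaining inputs and loses probability mass.
parses = lambda g, s: g == 0 or s % 3 == 0
final = variational_learn(range(5000), parses, n_grammars=2)
```

On this toy problem the distribution drifts toward the grammar compatible with all the data; the difficult cases the proposal addresses are precisely those where such drift fails to converge on the target grammar.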