Synergies in Language Acquisition by Mark Johnson (Macquarie University, New South Wales, Australia)
Each human language contains an unbounded number of different sentences. How can something so large and complex possibly be learnt? Over the past decade and a half we've learned how to define probability distributions over grammars and the linguistic structures they generate, making it possible to define statistical models that learn regularities of complex linguistic structures. Bayesian approaches are particularly attractive because they can exploit "prior" (e.g., innate) knowledge as well as learn statistical generalizations from the input.
This talk compares two different Bayesian models of language acquisition. A staged learner learns the components of language independently of each other, while a joint learner learns them simultaneously. A joint learner can take advantage of synergistic dependencies between linguistic components to bootstrap acquisition in ways that a staged learner cannot. We use Bayesian models to show that there are dependencies between word reference, syllable structure and the lexicon that a learner could take advantage of to synergistically improve language acquisition.
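The staged-versus-joint contrast can be illustrated with a deliberately tiny sketch, not drawn from the talk's actual models: a hypothetical corpus of (word, referent) pairs, where a staged learner fits each component's distribution independently while a joint learner also fits the dependency between them. On the same data, the maximum-likelihood joint fit can never score worse than the independence (staged) fit, which is the sense in which dependencies between components are there to be exploited.

```python
from collections import Counter
import math

# Hypothetical toy corpus of (word, referent) observations;
# the items are illustrative only, not data from the talk.
data = [("dog", "DOG"), ("dog", "DOG"), ("dog", "CAT"),
        ("cat", "CAT"), ("cat", "CAT"), ("ball", "BALL")]

words = [w for w, _ in data]
n = len(data)

# Staged learner: fit P(word) and P(referent) independently,
# ignoring any dependency between the two components.
pw = {w: c / n for w, c in Counter(words).items()}
pr = {r: c / n for r, c in Counter(r for _, r in data).items()}
staged_ll = sum(math.log(pw[w] * pr[r]) for w, r in data)

# Joint learner: fit P(word) and P(referent | word), so the
# word component informs the reference component.
pair_counts = Counter(data)
word_counts = Counter(words)
joint_ll = sum(math.log(pw[w] * pair_counts[(w, r)] / word_counts[w])
               for w, r in data)

print(joint_ll > staged_ll)  # True: the joint fit exploits the dependency
```

The same asymmetry is what the talk's joint learner exploits during acquisition: information learned about one component (e.g., the lexicon) sharpens inferences about another (e.g., word reference), which a staged learner cannot do.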