Entropy Reduction and Asian Languages by John Hale (Cornell University, NY, USA)
This talk presents a particular conceptualization of human language understanding as information processing. From this viewpoint, understanding a sentence word by word is a kind of incomplete-perception problem in which comprehenders become more certain, over time, about the linguistic structure of the utterance they are trying to understand. The Entropy Reduction hypothesis holds that the magnitude of these certainty increases reflects psychological effort. This claim revives the application of information theory to psycholinguistics, which had languished since the 1950s. In contrast to that earlier work, however, modern applications of information theory to language understanding use generative grammars to specify the relevant structures and their probabilities. This representation makes it possible to apply standard techniques from computational linguistics to work out weighted "expectations" about as-yet-unheard words. The talk exemplifies the general theory with examples from Chinese, Japanese, and Korean. The prenominal character of relative clauses in these languages is an important test case for any general cognitive theory of sentence processing.
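The core quantity can be illustrated with a minimal sketch. Below, a toy distribution over complete sentences stands in for the structures and probabilities a generative grammar would supply; the sentences and their probabilities are invented for illustration only. After each word, the comprehender's uncertainty is the Shannon entropy of the distribution over structures consistent with the heard prefix, and the per-word effort measure is the (non-negative) drop in that entropy.

```python
import math

# Invented toy distribution over complete sentences, standing in for a
# probabilistic grammar's distribution over linguistic structures.
SENTENCES = {
    ("the", "dog", "barked"): 0.4,
    ("the", "dog", "ran"):    0.3,
    ("the", "cat", "ran"):    0.2,
    ("a", "cat", "ran"):      0.1,
}

def entropy(dist):
    """Shannon entropy, in bits, of a probability distribution."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def conditional(prefix):
    """Renormalized distribution over sentences consistent with a prefix."""
    consistent = {s: p for s, p in SENTENCES.items()
                  if s[:len(prefix)] == prefix}
    total = sum(consistent.values())
    return {s: p / total for s, p in consistent.items()}

def entropy_reductions(sentence):
    """Per-word entropy reductions: max(0, H(prefix) - H(prefix + word)).

    Entropy is clamped at zero because, under the hypothesis, only
    certainty *increases* (entropy drops) count as effortful.
    """
    reductions = []
    for i in range(len(sentence)):
        h_before = entropy(conditional(sentence[:i]))
        h_after = entropy(conditional(sentence[:i + 1]))
        reductions.append(max(0.0, h_before - h_after))
    return reductions

print(entropy_reductions(("the", "dog", "barked")))
```

In this toy case every word disambiguates, so the per-word reductions are all positive and sum to the initial entropy of the whole distribution; in a realistic grammar, a word can also leave entropy unchanged or (before clamping) raise it.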