QT2

Do linguistic models influence neurolinguistic models and if so, how?

Organizers: Mireille Besson & Cheryl Frenck-Mestre

Participants: Mireille Besson, Philippe Blache, Cheryl Frenck-Mestre, James German, Mariapaola d’Império, Noel N’Guyen (waiting for agreement), Serge Pinto

Summary

The main goal is to closely examine several models of the anatomo-functional architecture of language in the brain (Friederici, Hagoort, Poeppel, Price) to determine if and how they are influenced by linguistic theories (Bybee, Chomsky, Fillmore, Goldberg, Jackendoff, Kay…). This implies reviewing, understanding and synthesizing the different models and their implications. The ultimate goal is to reach a better understanding of how to bridge the gap between linguistics and neurolinguistics. The varied expertise within the BLRI/ILCB is very well suited to addressing these general issues.

Our first goal at the Porquerolles meeting will be to discuss the evolution of some of the main linguistic and neurolinguistic models.

  1. The modularity of language processing

What is modularity, and how has the concept evolved?

From Fodor 1983 (“The Modularity of Mind”) to Fodor 2000 (“The Mind Doesn’t Work That Way”), taking Pinker 1997 (“How the Mind Works”) into account.

The basic assumption underlying modularity, as defined by Fodor (1983), is that cognitive mental processes are computational. Cognitive processes are specific logico-algebraic computations on mental representations that are structured syntactically (i.e., they obey a set of rules that define the relationships between the different elements). In his first essay, Fodor (1983) considers that “Roughly, modular cognitive systems are domain specific, innately specified, hardwired, autonomous, and not assembled.” (p. 36). Modular systems are domain-specific computational mechanisms. In his second essay, Fodor (2000) clearly insists that “informational encapsulation is at the heart of modularity” (p. 107), because this is how modules can be functionally specified. Informational encapsulation (impenetrability) can be directly tested using cognitive neuroscience approaches.

Interestingly for our understanding of the evolution of the concept of modularity, Fodor (2000) considers that only local systems can be modular. Global systems are not modular because they are not informationally encapsulated (a global system does not compute its function independently of any other information that could possibly be relevant to its implementation). The fact that global systems are context-dependent and use all the information available within the entire system to compute their function is taken as a major argument against what Fodor calls the “massive modularity” of mind (Pinker, 1997). The problem, of course, is then how to define local and global systems.

In linguistics

Linguistic information relies on different domains: phonetics, prosody, phonology, morphology, syntax, semantics, and pragmatics. Classical linguistic theories describe these domains separately, often treating them as separate modules. In the corresponding processing mechanism, each module processes its own specific information; the general architecture describes the flow of information between the different modules. In many (if not most) such theories, syntax is at the center of the architecture (Chomsky, 1995), and the mechanism is mainly sequential: when processing a sentence, each module provides a complete description of its corresponding domain and passes it to the next one (e.g., for written material, morphology -> syntax -> semantics). In this conception of language processing, the interpretation (i.e., the meaning) is built incrementally, word by word, following a compositional procedure (the meaning of a sentence being a function of the meaning of its components). More recent work has proposed a more parallel view of language processing, relying on the interaction of the different modules (Jackendoff, 1997; 2007).
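
As a minimal illustration, the sequential, compositional view can be caricatured as a pipeline in which each module returns a complete description of its own domain and hands it to the next. The sketch below is a toy Python example with an invented three-word lexicon and invented module boundaries, not the implementation of any specific theory:

    # Strictly sequential flow: each module produces a complete analysis
    # of its domain and passes it on (morphology -> syntax -> semantics).

    def morphology(words):
        # Toy analysis: tag each word with an invented lexical category.
        lexicon = {"the": "Det", "dog": "N", "sleeps": "V"}
        return [(w, lexicon.get(w, "?")) for w in words]

    def syntax(tagged):
        # Toy structure: group Det+N into an NP and the verb into a VP.
        (d, _), (n, _), (v, _) = tagged
        return ("S", ("NP", d, n), ("VP", v))

    def semantics(tree):
        # Compositional meaning: the meaning of S is a function of the
        # meanings of its parts (here, a crude predicate-argument form).
        _, (_, det, noun), (_, verb) = tree
        return f"{verb}({det} {noun})"

    print(semantics(syntax(morphology("the dog sleeps".split()))))
    # -> sleeps(the dog)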

Many points to be discussed, among them: Who is right, Fodor (1983) or Fodor (2000)? Even the most basic assumption underlying modularity, that cognitive mental processes are computational, is called into question by Fodor (2000). If not computational, then what (abductive, ostensive…)? Domain-specific computational mechanisms? Domain specificity of brain structures (see the example of Broca’s area(s) below)?

  2. Evolution of the modular view of language processing toward a dynamic network view

Modular view: each component of the language system is processed in one dedicated brain region. “The processing of phonemes is performed in the left middle portion of the STG; the processing of auditorily presented words is located in a region anterior to Heschl’s gyrus in the left STG” (Friederici, 2012, TICS, p. 263).

Dynamic network view: “The functionality of a region is co-determined by the network of regions in which it is embedded at particular moments in time.”; “Core regions of language processing interact with other networks (e.g. the attentional networks and the ToM network) to establish full functionality of language and communication”; “The mapping between neurons and cognition relies less on what individual nodes can do and more on the topology of their connectivity.” (Hagoort, 2014, Curr. Op. in Neurobiology, p. 184). Or: “An attempt is made to embed the human data in the framework of distributed coding in recurrent networks exhibiting high-dimensional dynamics.” (Friederici & Singer, 2015, TICS, p. 330)

Both: “Here we argue that language, like other cognitive functions, depends on distributed computations in specialized cortical areas forming large-scale dynamic networks” (Friederici & Singer, 2015).

As an example: Broca’s area

Initially: Broca’s area = language production; Wernicke’s area = language comprehension; arcuate fasciculus = information exchange between the two.

Evolution: not only two language areas (Vigneau et al., 2006, NeuroImage); not only one dorsal route but two ventral systems and two dorsal systems (Friederici, 2012, TICS); “Broca is dead!”: no consistent anatomical definition of “Broca’s areas” (Tremblay & Dick, 2016, Brain & Lang).

Domain specific: Broca’s area subserves a uniquely human capacity for syntax; “The basic operation of ‘Merge’ appears to be localized in a very confined region; namely, in the pars opercularis (BA 44) in Broca’s area, suggesting a small-scale network” (Friederici & Singer, 2015, TICS, p. 330; Zaccarella & Friederici, 2015, Frontiers in Psychology); music-selective responses in the planum polare (Norman-Haignere et al., 2015, Neuron).

Domain general: Broca’s area is involved in selecting among competing sources of information and in cognitive control; “Here we argue that even the highly complex cognitive function of language is based on computational principles similar to those of other cognitive and executive functions.” (Friederici & Singer, 2015, TICS, p. 330)

Both: Frontal regions (Broca’s area and adjacent cortex) are crucial for unification operations; these operations generate larger structures from the building blocks that are retrieved from memory (MUC model, Hagoort, 2005, TICS).

In linguistics

Contemporary linguistics considers it necessary to propose a more inclusive view of language processing, making it possible to reconcile competence and performance, data and grammars, by representing processing in terms of the interaction of different sources of information, including context, gestures, etc. In this perspective, language has to be understood and described in its natural environment (e.g., during an interaction). In this approach, there is not necessarily an underlying syntactic or semantic structure; meaning is built from the fusion of partial sources of information, or even accessed directly.

Several steps opening the way towards this integrative view must be underlined:

Lexicalization & unification: more and more semantic and syntactic information is attached to lexical entries (Gazdar et al., 1985; Pustejovsky, 1991). In some cases, lexical entries even contain entire partial structures (Joshi & Schabes, 1997; Abeillé et al., 1989). This move has been accompanied by the introduction of a logical operation: unification (Kay, 1979). These new concepts have progressively set aside the mechanism of derivation, replacing it with operations on feature structures instead of atomic categories. This organization of linguistic information gives more importance to the lexicon: information is stored in the lexicon rather than in rules.
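
A minimal sketch of unification over feature structures, in the spirit of Kay’s functional unification (the categories and features below are invented for illustration): two structures unify if their values are compatible, and the result merges the information from both.

    def unify(fs1, fs2):
        """Return the merged feature structure, or None on conflict."""
        result = dict(fs1)
        for feat, val in fs2.items():
            if feat not in result:
                result[feat] = val
            elif isinstance(result[feat], dict) and isinstance(val, dict):
                sub = unify(result[feat], val)  # recurse on embedded structures
                if sub is None:
                    return None
                result[feat] = sub
            elif result[feat] != val:
                return None  # atomic values clash: unification fails
        return result

    # The noun brings agreement features from the lexicon; the verb
    # constrains its subject. Compatible structures merge; clashes fail.
    noun = {"cat": "N", "agr": {"num": "sg", "pers": 3}}
    subj_of_verb = {"cat": "N", "agr": {"num": "sg"}}
    print(unify(noun, subj_of_verb))            # merged structure
    print(unify(noun, {"agr": {"num": "pl"}}))  # None (number clash)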

Constraints: the idea of representing linguistic information in terms of constraints has emerged progressively in the different theoretical paradigms. In constraint-based theories, such as HPSG (Pollard & Sag, 1994; Sag et al., 2003), derivation is replaced by constraint satisfaction: instead of rules, all linguistic information is encoded in lexical entries and abstract schemas. Some general principles control feature propagation and unification. One of the interests of this approach is the direct encoding of the syntax/semantics interface, both structures being built at the same time thanks to unification and structure sharing. In the generative paradigm, Optimality Theory (Prince & Smolensky, 2004) also brings constraint satisfaction to the center of the processing mechanism. One important innovation, worth underlining in this discussion, is the introduction of constraint violation: the optimal structure, chosen among a set of candidates, is the one that violates the set of constraints the least. Another theory, Property Grammars (Blache, 2016), integrates these different aspects: all linguistic information is represented by means of constraints, the only processing device is constraint satisfaction, and all constraints can be violated. This approach integrates performance and competence by providing a means to represent general linguistic properties as well as to describe any type of sentence or message (even non-canonical or ill-formed ones).
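
A minimal sketch of violable-constraint selection in the spirit of Optimality Theory (the constraints, ranking, input and candidates below are all invented for illustration): each candidate receives a violation profile, and the winner is the candidate whose profile is best under the ranking.

    INPUT = "pat"

    def no_coda(cand):
        # One violation per syllable ending in a consonant ("." = boundary).
        return sum(1 for syl in cand.split(".") if syl[-1] not in "aeiou")

    def faithfulness(cand):
        # Toy faithfulness: one violation per segment added or deleted.
        return abs(len(cand.replace(".", "")) - len(INPUT))

    RANKING = [no_coda, faithfulness]  # NoCoda outranks Faithfulness here

    def optimal(candidates):
        # Lexicographic comparison of violation profiles: a higher-ranked
        # constraint strictly dominates all lower-ranked ones.
        return min(candidates, key=lambda c: tuple(con(c) for con in RANKING))

    print(optimal(["pat", "pa.ta", "pa"]))  # -> "pa.ta" (ties with "pa";
                                            #    min keeps the first)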

Constructions: the idea of representing linguistic phenomena through the interaction of different sources of information is the basis of Construction Grammars (Fillmore, 1988; Goldberg, 2003; Sag, 2012). A construction is a form/meaning pair (e.g. idiom, interrogative, ditransitive, etc.). Different properties (morphological, prosodic, syntactic, semantic, etc.) constitute the form that makes it possible to identify the construction. For example, certain interrogative particles and specific prosodic contours make it possible to identify an interrogative construction. When a construction is recognized, the associated meaning can be accessed directly. For example, an idiom can be recognized from a set of forms in a certain order (the first words of the idiom). From this point, its meaning can be accessed without any need to build a syntactic or semantic structure.
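
A minimal sketch of this direct access (the stored form/meaning pairs below are invented for illustration): when a stored form is recognized from surface cues, its meaning is retrieved as a whole, with no syntactic or semantic structure being built.

    # A toy "constructicon": each entry pairs a surface form with a meaning.
    CONSTRUCTICON = [
        (("kick", "the", "bucket"), "DIE(x)"),
        (("spill", "the", "beans"), "REVEAL-SECRET(x)"),
    ]

    def recognize(words):
        """Scan for a stored form; on a match, return its meaning directly."""
        for form, meaning in CONSTRUCTICON:
            n = len(form)
            for i in range(len(words) - n + 1):
                if tuple(words[i:i + n]) == form:
                    return meaning
        return None  # no construction matched: fall back to composition

    print(recognize("he will kick the bucket soon".split()))  # -> DIE(x)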

The evolution: one way to describe this evolution of linguistic theories is to consider the processing mechanism. For generative approaches, the grammar is a set of rules, which have to be applied step by step through derivation. In computational terms, this amounts to a procedural representation: the grammar is considered as a device generating a language. By contrast, for recent non-derivational theories, the grammar is a set of descriptions, and the processing mechanism consists in verifying their validity. In computational terms, this is a declarative representation of the problem. This shift from procedural to declarative perspectives calls into question a strictly incremental and compositional conception of language processing. It leaves open the possibility of different ways of accessing meaning: direct (constructions) or compositional.
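
A minimal sketch of this contrast (the grammar, properties and categories below are invented for illustration): the procedural version generates a sequence by rewriting step by step, while the declarative version, in the spirit of Property Grammars, only checks that a given sequence satisfies a set of properties.

    # Procedural: rewriting rules applied step by step (derivation).
    RULES = {"S": ["NP", "VP"], "NP": ["Det", "N"], "VP": ["V"]}

    def derive(symbol):
        if symbol not in RULES:
            return [symbol]  # terminal category
        out = []
        for child in RULES[symbol]:
            out += derive(child)  # one derivation step per rule application
        return out

    # Declarative: properties that any good sequence must satisfy.
    def linearity(cats):  # Det precedes N when both are present
        return ("Det" not in cats or "N" not in cats
                or cats.index("Det") < cats.index("N"))

    def obligation(cats):  # a verb is required
        return "V" in cats

    def satisfies(cats):
        return all(p(cats) for p in (linearity, obligation))  # no derivation

    print(derive("S"))                   # -> ['Det', 'N', 'V']
    print(satisfies(["Det", "N", "V"]))  # -> True
    print(satisfies(["N", "Det", "V"]))  # -> False (linearity violated)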

Many points to be discussed, among them: Are these different views compatible? What are the implications of these different views for ongoing research? Synchronization of oscillatory activity as a tool for studying dynamic networks?…

References

Abeillé A., Bishop K., Cote S., Joshi A. & Schabes Y. (1989) “Lexicalized TAGs, parsing and lexicon”, in Proceedings of the DARPA Speech and Natural Language Workshop.

Blache P. (2016) “Representing syntax by means of properties: a formal framework for descriptive approaches”, in Journal of Language Modelling, 4(2)

Bybee J. (2010) Language, Usage and Cognition, Cambridge University Press

Chomsky N. (1995) The Minimalist Program, MIT Press

Fillmore C. (1988) “The Mechanisms of Construction Grammar”, in Proceedings of the Fourteenth Annual Meeting of the Berkeley Linguistics Society.

Gazdar G., Klein E., Pullum G. K. & Sag I. A. (1985) Generalized Phrase Structure Grammar, Blackwell Publishing, Oxford

Goldberg A. E. (2003) “Constructions: a new theoretical approach to language”, in Trends in Cognitive Sciences, 7(5)

Jackendoff R. (1997) The Architecture of the Language Faculty, MIT Press

Jackendoff R. (2007) “A Parallel Architecture perspective on language processing”, in Brain Research, 1146

Joshi A. & Schabes Y. (1997) “Tree-Adjoining Grammars”, in G. Rozenberg & A. Salomaa (eds), Handbook of Formal Languages, vol. 3: Beyond Words, Springer

Kay M. (1979) “Functional Grammar”, in Annual Meeting of the Berkeley Linguistics Society, vol. 5

Pollard C. & Sag I. (1994) Head-Driven Phrase Structure Grammar, CSLI Publications

Prince A. & Smolensky P. (2004) Optimality Theory: Constraint Interaction in Generative Grammar, Blackwell Publishers

Pustejovsky J. (1991) “The Generative Lexicon”, in Computational Linguistics, 17(4)

Sag I., Wasow T. & Bender E. (2003) Syntactic Theory: A Formal Introduction (Second Edition), CSLI Publications

Sag I. (2012) “Sign-Based Construction Grammar: An Informal Synopsis”, in Sign-Based Construction Grammar, Hans C. Boas & Ivan A. Sag (eds), CSLI