Feedback on Language, Music and Cognition in Cologne

The international workshop Language, Music and Cognition took place from September 27th to 29th, 2012, hosted by the University of Cologne, Germany. The three-day workshop consisted of three thematic sessions (one per day), whose topics intersected in discussions throughout the workshop: day 1 – syntax and semantics, day 2 – language and music, and day 3 – prosody, sign language, and gesture.

Topics of the presentations:

Syntax and semantics:

  • Petra Schumacher (University of Mainz / University of Cologne):
    Adjusting meaning in real-time
  • Steven Frisson (University of Birmingham):
    Online semantic interpretation during reading
  • Andrea E. Martin (University of Edinburgh):
    Cue-based retrieval interference during sentence comprehension: ERP evidence from ellipsis
  • Ina Bornkessel-Schlesewsky / Matthias Schlesewsky (University of Marburg, University of Mainz):
    Dorsal and ventral streams in language: Puzzles and possible solutions
  • Mante S. Nieuwland (University of Edinburgh):
    ERP evidence for animacy processing asymmetries during Spanish sentence comprehension
  • Jutta Mueller (Max-Planck-Institut, Leipzig):
    First steps towards language: Auditory artificial grammar learning across development
  • Gert Westermann (Lancaster University):
    Experience-dependent brain development as a key to understanding the systematicity of linguistic representations

Language and music:

  • Barbara Tillmann (Lyon Neuroscience Research Center CRNL):
    Music and language structure processing: What is shared?
  • Evelina Fedorenko (Massachusetts Institute of Technology):
    Syntactic processing in language and music: Existence of overlapping circuits does not imply lack of specialized ones
  • Yun Nan (Beijing Normal University):
    Cross-domain pitch processing in music and Mandarin: perceptual and post-perceptual basis
  • Kazuo Okanoya (Riken Lab / University of Tokyo):
    Segmentation in Language and Music: Statistical and Emotional Cues
  • Julie Chobert (Laboratoire de Neurosciences Cognitives, Marseille):
    Influence of musical training on the preattentive processing of syllables in normal-reading children and children with dyslexia
  • Daniela Sammler (Max-Planck-Institut, Leipzig):
    Neuroanatomical overlap of syntax in music and language
  • Thomas Bever (University of Arizona, Tucson):
    There are at least two “normal” neurological organizations for language and music

Prosody, sign language, and gesture:

  • Richard Wiese (University of Marburg):
    Formal representations of rhythm in speech and music
  • Mara E. Breen (Mt. Holyoke College, Massachusetts):
    Empirical investigations of the role of implicit prosody in sentence processing
  • Martha Tyrone (Haskins Lab, Long Island University – Brooklyn):
    Prosody and Limb Movement in American Sign Language
  • Markus Steinbach (University of Göttingen):
    When gestures become signs – The integration of gestures into sign languages
  • Ulrike Domahs (University of Marburg):
    Language specific processing of word prosody

In addition to these presentations, there was a poster session on the second day.

Almost all presentations focused on empirical studies, with comparatively little theoretical discussion. The first and second sessions (and in part the third), however, shared fruitful theoretical debates about "syntax" – what is it? Even in linguistic research, where syntax plays a central role, it appears difficult to reach a general consensus on this concept. A broader discussion of the concept is therefore needed for the future development of comparative research on language and music. Moreover, work on musical 'syntax' deals mainly with harmonic syntax – what about rhythm?

This workshop showed the importance of cross-disciplinary discussion – of taking a wide view and considering several perspectives.

For details, please visit the workshop homepage!