We investigate the neurocognitive underpinnings of co-speech gesture perception. To this end, we employed the well-established cognitive categories of the influential American logician and semiotician Charles Sanders Peirce.
Complex multimodal bodies of data, such as combined speech and gesture data, represent a new frontier in humanities research, demanding novel approaches to the mathematical modeling, mining, and quantification of these data.
This approach combines qualitative and quantitative methods to analyze audio, video, and motion-capture data in correlation with prepositional/adverbial phrases.
This study explores how coverbal gestures may function as physically articulated entry points to frames and mental maps.
Gestural re-/presentations of sound experience were analyzed in terms of spatial and temporal features.
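A minimal sketch of how such spatial and temporal features could be computed from a captured gesture trajectory is given below; the array layout, sampling rate, and feature set are illustrative assumptions, not details taken from the study.

```python
import numpy as np

def gesture_features(positions: np.ndarray, fps: float = 120.0) -> dict:
    """Compute basic spatial and temporal features of one gesture stroke.

    positions: (n_frames, 3) array of x/y/z marker coordinates in metres
               (hypothetical layout; real capture formats vary).
    fps:       assumed sampling rate of the motion-capture system.
    """
    deltas = np.diff(positions, axis=0)            # frame-to-frame displacement
    step_lengths = np.linalg.norm(deltas, axis=1)  # per-frame distance travelled

    duration = len(positions) / fps                # temporal extent in seconds
    velocities = step_lengths * fps                # instantaneous speed (m/s)

    return {
        "duration_s": duration,
        "path_length_m": step_lengths.sum(),       # total distance travelled
        "peak_velocity_mps": velocities.max(),
        # spatial extent: side lengths of the axis-aligned bounding box
        "bbox_extent_m": positions.max(axis=0) - positions.min(axis=0),
    }

# Example with synthetic data: a short upward-arcing movement
t = np.linspace(0, 1, 120)
trajectory = np.stack([0.2 * t, 0.3 * np.sin(np.pi * t), 0.1 * t], axis=1)
print(gesture_features(trajectory))
```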
To investigate the neurocognitive underpinnings of communication, an fMRI experiment was conducted with a movie sequence (Lola Rennt) as a semi-naturalistic stimulus.
A methodology employing motion-capture technology was developed to investigate the forms and functions of head gestures.
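Assuming the motion-capture system exports per-frame head rotation angles, one crude way to separate head-gesture forms such as nods and shakes is to compare pitch and yaw amplitude; the function and thresholds below are hypothetical and not the methodology developed in the study.

```python
import numpy as np

def classify_head_gesture(pitch_deg: np.ndarray, yaw_deg: np.ndarray,
                          min_amplitude: float = 5.0) -> str:
    """Crude form classification for one head-gesture segment.

    pitch_deg, yaw_deg: per-frame head rotation angles in degrees
    (assumed export format; the 5-degree threshold is illustrative).
    A nod is dominated by pitch oscillation, a shake by yaw oscillation.
    """
    pitch_range = pitch_deg.max() - pitch_deg.min()
    yaw_range = yaw_deg.max() - yaw_deg.min()

    if max(pitch_range, yaw_range) < min_amplitude:
        return "no clear head gesture"
    return "nod (pitch-dominant)" if pitch_range > yaw_range else "shake (yaw-dominant)"

# Synthetic nod: pitch oscillates while yaw stays flat
t = np.linspace(0, 1, 100)
print(classify_head_gesture(10 * np.sin(4 * np.pi * t), np.zeros_like(t)))
```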
Based on data from semi-structured interviews, a new method to map and visualize speakers' three-dimensional gesture spaces was developed.
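As one plausible way to map such a gesture space, assuming per-frame wrist positions in speaker-centred coordinates are available, the sketch below plots the cloud of hand locations and summarizes its spatial extent with a convex hull; the data and approach are illustrative only, not the method developed in the study.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.spatial import ConvexHull

# Hypothetical right-wrist positions (metres, speaker-centred coordinates)
rng = np.random.default_rng(0)
wrist = rng.normal(loc=[0.25, 0.0, 1.1], scale=[0.15, 0.12, 0.10], size=(500, 3))

hull = ConvexHull(wrist)  # volume of the region the hand actually used
print(f"gesture-space volume: {hull.volume:.4f} m^3")

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.scatter(wrist[:, 0], wrist[:, 1], wrist[:, 2], s=4, alpha=0.4)
ax.set_xlabel("x (m)"); ax.set_ylabel("y (m)"); ax.set_zlabel("z (m)")
ax.set_title("Speaker's gesture space (hypothetical data)")
plt.show()
```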
This study examined gestures in descriptions of paintings from memory, with respect to their meanings and messages.
Architecture students pursued their own artistic projects, using the MoCap system, for example, to create three-dimensional sketches through gestures.
This study followed up on work by S. Harrison in factory settings, which indicated an influence of environmental conditions (e.g., noise) on gesture production.