It is well known that listeners in probably all languages give verbal and non-verbal signals, called backchannels,
to their interlocutors. However, it is not well understood what drives listeners to backchannel. To what degree are
backchannels an indicator of listener attention? Are they semantically motivated, performed once the message has been parsed and
comprehended? Or are they automatic responses triggered by overt cues from the speaker (such as eye contact, gestures, or
prosodic information), requiring minimal comprehension?
An important first step in answering these questions is identifying which overt speaker cues trigger backchannels, and to what
degree. This preliminary study examines storytelling data from conversational dyads. We find that the speaker cue most likely to
‘successfully’ trigger a backchannel is making eye contact. Interestingly, however, other cues are more likely to trigger different
kinds of backchannels: gestural cues trigger more head nods, while prosodic cues trigger more verbal backchannels.