A Taxonomy of Social Cues for Conversational Agents

When using the taxonomy, please cite as: Feine, J., Gnewuch, U., Morana, S., & Maedche, A. (2019). A Taxonomy of Social Cues for Conversational Agents. International Journal of Human-Computer Studies.

Social cue: Head movement
Communication system: Visual
Cue category: Kinesics
Cue description
The CA moves its head.
Cue example
Head nodding, head turning.
Cue impact
Head nods support successful turn-taking (Cassell et al. 1999; Becker et al. 2005) and increase the behavioral realism of an animated agent, which leads to greater social presence (von der Pütten et al. 2009) and more spoken words by the user (von der Pütten et al. 2010). Head movement also makes the agent appear more natural (Mersiol et al. 2002), human-like (McBreen et al. 2001), and helpful (Nunamaker et al. 2011), and, in combination with laughter, warmer (Ding et al. 2014). It is further useful as part of relational behavior strategies: it helps establish a long-term working alliance (Bickmore et al. 2005), increases the intention to use the agent (Lisetti et al. 2013), elicits more social behavior from the user (Appel et al. 2012), and increases effort and performance on a task by building rapport (Kraemer et al. 2016).
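As a minimal illustration of how a CA might produce such turn-taking head nods, the sketch below implements a simple rule-based backchannel trigger: the agent nods when a pause in the user's speech is long enough to suggest a clause boundary. This is not taken from any of the cited systems; all names, the pause threshold, and the event format are illustrative assumptions.

```python
# Illustrative sketch (not from the cited papers): a rule-based trigger
# that emits head-nod backchannels at pauses in the user's speech.
# SpeechEvent, plan_backchannels, and the 0.4 s threshold are assumptions.

from dataclasses import dataclass


@dataclass
class SpeechEvent:
    """A segment of user speech with the trailing silence, in seconds."""
    text: str
    trailing_silence: float


def plan_backchannels(events, pause_threshold=0.4):
    """Return a ('head_nod', text) cue for each pause long enough to
    suggest a clause boundary; mid-phrase segments get no feedback."""
    cues = []
    for event in events:
        if event.trailing_silence >= pause_threshold:
            cues.append(("head_nod", event.text))
    return cues


if __name__ == "__main__":
    dialogue = [
        SpeechEvent("I'd like to book a flight", 0.6),  # pause -> nod
        SpeechEvent("to", 0.1),                         # mid-phrase -> no nod
        SpeechEvent("Berlin next Monday", 0.8),         # pause -> nod
    ]
    for cue, after in plan_backchannels(dialogue):
        print(f"{cue} after: {after!r}")
```

In a deployed agent, the cue tuples would be handed to an animation layer (e.g., a behavior-realization component such as the ones described in the reference list) rather than printed.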
Reference List
1. Appel, J., von der Pütten, A., Krämer, N. C., & Gratch, J. (2012). Does humanity matter? Analyzing the importance of social cues and perceived agency of a computer system for the emergence of social reactions during human-computer interaction. Advances in Human-Computer Interaction (2012:2), pp. 1-10.
2. Becker, C., Prendinger, H., Ishizuka, M., & Wachsmuth, I. (2005). Evaluating affective feedback of the 3D agent Max in a competitive cards game. Springer.
3. Bickmore, T. W., & Picard, R. W. (2005). Establishing and maintaining long-term human-computer relationships. ACM Transactions on Computer-Human Interaction (12:2), pp. 293-327, from http://doi.acm.org/10.1145/1067860.1067867.
4. Cassell, J., Bickmore, T., Billinghurst, M., Campbell, L., Chang, K., Vilhjálmsson, H., & Yan, H. (1999). Embodiment in conversational interfaces: Rea. In CHI '99, Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 520-527). New York, NY, USA: ACM.
5. Kraemer, N. C., Karacora, B., Lucas, G., Dehghani, M., Ruether, G., & Gratch, J. (2016). Closing the gender gap in STEM with friendly male instructors? On the effects of rapport behavior and gender of a virtual agent in an instructional interaction. Computers & Education (99), pp. 1-13.
6. Lisetti, C., Amini, R., Yasavur, U., & Rishe, N. (2013). I can help you change! An empathic virtual agent delivers behavior change health interventions. ACM Transactions on Management Information Systems (4:4), pp. 19:1-19:28, from http://doi.acm.org/10.1145/2544103.
7. McBreen, H. M., & Jack, M. A. (2001). Evaluating humanoid synthetic agents in e-retail applications. IEEE Transactions on Systems, Man, and Cybernetics - Part A: Systems and Humans (31:5), pp. 394-405.
8. Ding, Y., Prepin, K., Huang, J., Pelachaud, C., & Artières, T. (2014). Laughter animation synthesis. In AAMAS '14, Proceedings of the 2014 International Conference on Autonomous Agents and Multi-agent Systems (pp. 773-780). Richland, SC: International Foundation for Autonomous Agents and Multiagent Systems.
9. Mersiol, M., Chateau, N., & Maffiolo, V. (2002). Talking heads: Which matching between faces and synthetic voices? In Proceedings of the Fourth IEEE International Conference on Multimodal Interfaces.
10. Nunamaker, J. F., Jr., Derrick, D. C., Elkins, A. C., Burgoon, J. K., & Patton, M. W. (2011). Embodied conversational agent-based kiosk for automated interviewing. Journal of Management Information Systems (28:1), pp. 17-48.
11. von der Pütten, A. M., Krämer, N. C., Gratch, J., & Kang, S.-H. (2010). "It doesn't matter what you are!" Explaining social effects of agents and avatars. Computers in Human Behavior (26:6), pp. 1641-1650.
12. von der Pütten, A. M., Krämer, N., & Gratch, J. (2009). Who's there? Can a virtual agent really elicit social presence? In Proceedings of the 12th Annual International Workshop on Presence.
13. Cassell, J. (2000). Embodied conversational interface agents. Communications of the ACM (43:4), pp. 70-78.
14. Cassell, J., & Thorisson, K. R. (1999). The power of a nod and a glance: Envelope vs. emotional feedback in animated conversational agents. Applied Artificial Intelligence (13:4-5), pp. 519-538.
15. de Rosis, F., Pelachaud, C., Poggi, I., Carofiglio, V., & de Carolis, B. (2003). From Greta's mind to her face: Modelling the dynamics of affective states in a conversational embodied agent. International Journal of Human-Computer Studies (59:1-2), pp. 81-118.
16. Bickmore, T., & Cassell, J. (2005). Social dialogue with embodied conversational agents. In J. C. J. Kuppevelt, N. O. Bernsen, & L. Dybkjær (Eds.), Advances in Natural Multimodal Dialogue Systems (pp. 23-54). Dordrecht: Springer.
17. Thiebaux, M., Marsella, S., Marshall, A. N., & Kallmann, M. (2008). SmartBody: Behavior realization for embodied conversational agents. International Foundation for Autonomous Agents and Multiagent Systems.
18. Kopp, S., & Wachsmuth, I. (2004). Synthesizing multimodal utterances for conversational agents. Computer Animation and Virtual Worlds (15:1), pp. 39-52.
19. Ryokai, K., Vaucelle, C., & Cassell, J. (2003). Virtual peers as partners in storytelling and literacy learning. Journal of Computer Assisted Learning (19:2), pp. 195-208.
20. de Carolis, B., Pelachaud, C., Poggi, I., & Steedman, M. (2004). APML, a markup language for believable behavior generation. In H. Prendinger & M. Ishizuka (Eds.), Life-Like Characters: Tools, Affective Functions, and Applications (pp. 65-85). Berlin, Heidelberg: Springer.
21. Bailenson, J. N., & Yee, N. (2005). Digital chameleons: Automatic assimilation of nonverbal gestures in immersive virtual environments. Psychological Science (16:10), pp. 814-819.
22. Pelachaud, C. (2005). Multimodal expressive embodied conversational agents. In Proceedings of the 13th Annual ACM International Conference on Multimedia (pp. 683-689). New York, NY, USA: ACM.
23. Bevacqua, E., Pammi, S., Hyniewska, S. J., Schröder, M., & Pelachaud, C. (2010). Multimodal backchannels for embodied conversational agents. In J. Allbeck, N. Badler, T. Bickmore, C. Pelachaud, & A. Safonova (Eds.), Intelligent Virtual Agents (pp. 194-200). Berlin, Heidelberg: Springer.