A Taxonomy of Social Cues for Conversational Agents

When using the taxonomy, please cite as: Feine, J., Gnewuch, U., Morana, S., & Maedche, A. (2019). A Taxonomy of Social Cues for Conversational Agents. International Journal of Human-Computer Studies.

Social cue: Facial expression
Communication system: Visual
Cue category: Kinesics
Cue Description
A facial expression consists of one or more motions or movements of the agent's face.
Cue example
Smiling, looking angry, raising an eyebrow.
Cue impact
Agents can express their emotions using facial expressions (Becker et al. 2004; Becker et al. 2005; Guo et al. 2016), and agents with facial expressions are preferred over non-realistic, static agents (McBreen et al. 2001). Facial expressions also affect an agent's credibility and outweigh other body cues in doing so (Cowell & Stanney 2005). Mirroring the user's facial expression can increase the acceptance, trust, and likeability of the agent (Lisetti et al. 2013), and an agent's expressed facial emotions are perceived differently depending on the user's age (Beer et al. 2015). Furthermore, facial expressions can indicate turn-taking attempts (Cassell & Bickmore 2000), and positive or negative facial expressions create impressions of anxiety similar to those in the real world (Pertaub et al. 2001; Gebhard et al. 2014). Reactive laughing leads to an emotional contagion effect (Niewiadomski et al. 2013) and increases social presence (Pecune et al. 2015), and laughing agents are perceived more positively than merely smiling ones (Ding et al. 2014). Smiling agents are perceived as more likeable (Nunamaker et al. 2011; Cafaro et al. 2016), lead users to smile longer when interacting with the agent (Kramer et al. 2013), and influence task performance (Kraemer et al. 2016); smiling is also useful as part of relational behavior strategies to maintain a long-term working alliance (Bickmore et al. 2005; Bickmore et al. 2010). However, facial expressions can also distract the user, leading to decreased user involvement (Hess et al. 2005), and people respond differently to emotional facial expressions depending on the agent's gender (Hayashi 2016). Finally, specific facial expressions can be used for flirtation, which increases users' enjoyment and their interest in continuing the interaction or even engaging in a conversation (Bee et al. 2009).
Reference List
1. Becker, C., Kopp, S., & Wachsmuth, I. (2004). Simulating the emotion dynamics of a multimodal conversational agent. In E. André, L. Dybkjær, W. Minker, & P. Heisterkamp (Eds.), Affective Dialogue Systems (Lecture Notes in Computer Science, pp. 154-165). Berlin, Heidelberg: Springer.
2. Becker, C., Prendinger, H., Ishizuka, M., & Wachsmuth, I. (2005). Evaluating affective feedback of the 3D agent Max in a competitive cards game. In Affective Computing and Intelligent Interaction. Berlin, Heidelberg: Springer.
3. Beer, J. M., Smarr, C.-A., Fisk, A. D., & Rogers, W. A. (2015). Younger and older users' recognition of virtual agent facial expressions. INTERNATIONAL JOURNAL OF HUMAN-COMPUTER STUDIES (75), pp. 1-20.
4. Bickmore, T. W., & Picard, R. W. (2005). Establishing and Maintaining Long-term Human-computer Relationships. ACM TRANSACTIONS ON COMPUTER-HUMAN INTERACTION (12:2), pp. 293-327, from http://doi.acm.org/10.1145/1067860.1067867.
5. Cassell, J., Bickmore, T., Billinghurst, M., Campbell, L., Chang, K., Vilhjálmsson, H., & Yan, H. (1999). Embodiment in Conversational Interfaces: Rea. In CHI '99: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 520-527). New York, NY, USA: ACM.
6. Bickmore, T. W., Fernando, R., Ring, L., & Schulman, D. (2010). Empathic Touch by Relational Agents. IEEE TRANSACTIONS ON AFFECTIVE COMPUTING (1:1), pp. 60–71.
7. Cafaro, A., Vilhjalmsson, H. H., & Bickmore, T. (2016). First Impressions in Human-Agent Virtual Encounters. ACM TRANSACTIONS ON COMPUTER-HUMAN INTERACTION (23:4).
8. Cassell, J., & Bickmore, T. (2000). External manifestations of trustworthiness in the interface. Communications of the ACM (43:12), pp. 50-56.
9. Cowell, A. J., & Stanney, K. M. (2005). Manipulation of non-verbal interaction style and demographic embodiment to increase anthropomorphic computer character credibility. INTERNATIONAL JOURNAL OF HUMAN-COMPUTER STUDIES (62:2), pp. 281-306.
10. Ding, Y., Prepin, K., Huang, J., Pelachaud, C., & Artières, T. (2014). Laughter Animation Synthesis. In AAMAS '14: Proceedings of the 2014 International Conference on Autonomous Agents and Multi-agent Systems (pp. 773-780). Richland, SC: International Foundation for Autonomous Agents and Multiagent Systems.
11. Gebhard, P., Baur, T., Damian, I., Mehlmann, G., Wagner, J., & André, E. (2014). Exploring Interaction Strategies for Virtual Characters to Induce Stress in Simulated Job Interviews. In AAMAS '14: Proceedings of the 2014 International Conference on Autonomous Agents and Multi-agent Systems (pp. 661-668). Richland, SC: International Foundation for Autonomous Agents and Multiagent Systems.
12. Guo, Y. R., Goh, D. H.-L., Muhamad, H. B. H., Ong, B. K., & Lei, Z. (2016). Experimental Evaluation of Affective Embodied Agents in an Information Literacy Game. In JCDL '16: Proceedings of the 16th ACM/IEEE-CS Joint Conference on Digital Libraries (pp. 119-128). New York, NY, USA: ACM.
13. Hayashi, Y. (2016). Lexical Network Analysis on an Online Explanation Task: Effects of Affect and Embodiment of a Pedagogical Agent. IEICE TRANSACTIONS ON INFORMATION AND SYSTEMS (E99D:6), pp. 1455-1461.
14. Hess, T. J., Fuller, M. A., & Mathew, J. (2005). Involvement and Decision-Making Performance with a Decision Aid: The Influence of Social Multimedia, Gender, and Playfulness. JOURNAL OF MANAGEMENT INFORMATION SYSTEMS (22:3), pp. 15-54.
15. Bee, N., André, E., & Tober, S. (2009). Breaking the Ice in Human-Agent Communication: Eye-Gaze Based Initiation of Contact with an Embodied Conversational Agent. In Z. Ruttkay, M. Kipp, A. Nijholt, & H. H. Vilhjálmsson (Eds.), Intelligent Virtual Agents (pp. 229-242). Berlin, Heidelberg: Springer.
16. Kraemer, N. C., Karacora, B., Lucas, G., Dehghani, M., Ruether, G., & Gratch, J. (2016). Closing the gender gap in STEM with friendly male instructors? On the effects of rapport behavior and gender of a virtual agent in an instructional interaction. COMPUTERS & EDUCATION (99), pp. 1-13.
17. Kramer, N., Kopp, S., Becker-Asano, C., & Sommer, N. (2013). Smile and the world will smile with you: The effects of a virtual agent's smile on users' evaluation and behavior. INTERNATIONAL JOURNAL OF HUMAN-COMPUTER STUDIES (71:3), pp. 335-349.
18. Lisetti, C., Amini, R., Yasavur, U., & Rishe, N. (2013). I Can Help You Change! An Empathic Virtual Agent Delivers Behavior Change Health Interventions. ACM Trans. Manage. Inf. Syst. (4:4), pp. 19:1-19:28, from http://doi.acm.org/10.1145/2544103.
19. McBreen, H. M., & Jack, M. A. (2001). Evaluating humanoid synthetic agents in e-retail applications. IEEE TRANSACTIONS ON SYSTEMS MAN AND CYBERNETICS PART A-SYSTEMS AND HUMANS (31:5), pp. 394-405.
20. Niewiadomski, R., Hofmann, J., Urbain, J., Platt, T., Wagner, J., Piot, B., et al. (2013). Laugh-aware Virtual Agent and Its Impact on User Amusement. In AAMAS '13: Proceedings of the 2013 International Conference on Autonomous Agents and Multi-agent Systems (pp. 619-626). Richland, SC: International Foundation for Autonomous Agents and Multiagent Systems.
21. Nunamaker, J. F., Jr., Derrick, D. C., Elkins, A. C., Burgoon, J. K., & Patton, M. W. (2011). Embodied Conversational Agent-Based Kiosk for Automated Interviewing. JOURNAL OF MANAGEMENT INFORMATION SYSTEMS (28:1), pp. 17-48.
22. Mersiol, M., Chateau, N., & Maffiolo, V. (2002). Talking heads: Which matching between faces and synthetic voices? In Proceedings of the Fourth IEEE International Conference on Multimodal Interfaces. IEEE.
23. Pertaub, D.-P., Slater, M., & Barker, C. (2001). An experiment on fear of public speaking in virtual reality. Studies in Health Technology and Informatics (81).
24. Pecune, F., Mancini, M., Biancardi, B., Varni, G., Ding, Y., Pelachaud, C., et al. (2015). Laughing with a Virtual Agent. In AAMAS '15: Proceedings of the 2015 International Conference on Autonomous Agents and Multiagent Systems (pp. 1817-1818). Richland, SC: International Foundation for Autonomous Agents and Multiagent Systems.
25. Bonito, J. A., Burgoon, J. K., & Bengtsson, B. (1999). The Role of Expectations in Human-computer Interaction. In GROUP '99: Proceedings of the International ACM SIGGROUP Conference on Supporting Group Work (pp. 229-238). New York, NY, USA: ACM.
26. Cassell, J. (2000). Embodied conversational interface agents. Communications of the ACM (43:4), pp. 70-78.
27. Cassell, J., & Thorisson, K. R. (1999). The power of a nod and a glance: Envelope vs. emotional feedback in animated conversational agents. Applied Artificial Intelligence (13:4-5), pp. 519-538.
28. de Rosis, F., Pelachaud, C., Poggi, I., Carofiglio, V., & de Carolis, B. (2003). From Greta's mind to her face: Modelling the dynamics of affective states in a conversational embodied agent. International Journal of Human-Computer Studies (59:1-2), pp. 81-118.
29. Bickmore, T., & Cassell, J. (2005). Social Dialogue with Embodied Conversational Agents. In J. C. J. van Kuppevelt, N. O. Bernsen, & L. Dybkjær (Eds.), Advances in Natural Multimodal Dialogue Systems (pp. 23-54). Dordrecht: Springer.
30. Kopp, S., & Wachsmuth, I. (2004). Synthesizing multimodal utterances for conversational agents. Computer Animation and Virtual Worlds (15:1), pp. 39-52.
31. Cassell, J. (2001). Embodied Conversational Agents: Representation and Intelligence in User Interfaces. AI Magazine (22:4), pp. 67-83.
32. Ryokai, K., Vaucelle, C., & Cassell, J. (2003). Virtual peers as partners in storytelling and literacy learning. Journal of Computer Assisted Learning (19:2), pp. 195-208.
33. de Carolis, B., Pelachaud, C., Poggi, I., & Steedman, M. (2004). APML, a Markup Language for Believable Behavior Generation. In H. Prendinger & M. Ishizuka (Eds.), Life-Like Characters: Tools, Affective Functions, and Applications (pp. 65-85). Berlin, Heidelberg: Springer.
34. Pelachaud, C. (2005). Multimodal Expressive Embodied Conversational Agents. In Proceedings of the 13th Annual ACM International Conference on Multimedia (pp. 683-689). New York, NY, USA: ACM.
35. Bevacqua, E., Pammi, S., Hyniewska, S. J., Schröder, M., & Pelachaud, C. (2010). Multimodal Backchannels for Embodied Conversational Agents. In J. Allbeck, N. Badler, T. Bickmore, C. Pelachaud, & A. Safonova (Eds.), Intelligent Virtual Agents (pp. 194-200). Berlin, Heidelberg: Springer.