Overview of the Intelligent Virtual Agents Conference (IVA 2011)
SLTC Newsletter, October 2011
IVA 2011 is a research conference on Intelligent Virtual Agents. Intelligent Virtual Agents (IVAs) are animated embodied characters with interactive human-like capabilities such as speech, gestures, facial expressions, and head and eye movements. Virtual agents have the capability both to perceive and to exhibit human-like behaviours. Virtual characters enhance user interaction with a dialogue system by adding a visual modality and creating a persona for the system. They are used in interactive systems as tutors, museum guides, advisers, signers of sign language, and virtual improvisational artists.
Topics of the conference
The Intelligent Virtual Agents conference is an annual event that started in 1999. The conference brings together researchers from the diverse fields of artificial intelligence, psychology, computer science, speech, and linguistics. This year the conference gathered over 100 participants and featured presentation sessions on the following topics:
- Social and Dramatic Interaction
- Guides and Relational Agents
- Nonverbal Behavior
- Adaptation and Coordination
- Listening and Feedback
- Frameworks and Tools
- Cooperation and Copresence
In an invited talk, Dr. Jens Edlund described the motivation and approaches for "creating systems that cause a user to communicate in the same way as with a human". This theme was reflected in many of the presentations throughout the conference that described research on various behavioural aspects of virtual agents. In his talk, Jens contrasted the evolution of the communication capabilities of interactive systems with human communication learning. Children learn non-verbal interaction, such as prosody and gestures, before they actually learn words. Communicative systems, on the other hand, "learned to speak" before they acquired non-verbal behaviour capabilities. Research on dialogue started with a focus on verbal communication; current research on virtual agents focuses on adding non-verbal behaviour capabilities to spoken systems. Presentations at IVA 2011 included studies of virtual agents perceiving a user's mood and personality, using prosody for more realistic and context-dependent realization of speech and gesture, and establishing rapport with a user. These non-verbal capabilities complement the verbal capabilities of interactive systems, bringing human-system communication one step closer to human-human communication.
How virtual agents are used
Virtual agents are increasingly used in applications. Agents presented at IVA 2011 included the museum guides Ada and Grace (D. Traum et al.). The characters answer visitors' questions and chat with each other to present museum information in an entertaining and engaging manner. Tinker (T. Bickmore et al.) was another virtual museum guide presented at IVA. Tinker has been deployed at the Museum of Science in Boston for over three years and has had over 125,000 users interact with it. Museum guide characters are designed to engage users, enticing them to communicate with the system longer and to learn from this communication. Installations in public places such as museums give researchers access to a large pool of subjects while bringing the technology to end users and making an impact.
Other examples of applications include improvisational theatre agents (A. Brisson et al., B. Magerko et al.), listener agents that provide natural non-verbal feedback to a speaker or storyteller (Wang et al., I. De Kok and D. Heylen), and virtual interviewers (Astrid M. von der Pütten et al.).
How to embody your dialogue system
IVA 2011 featured demonstrations and presentations on an emerging standard for creating virtual humans, the Behaviour Markup Language (BML). BML is an XML-based language for specifying the behaviour of a virtual agent. A BML realizer is a software module that steers the behaviour of a virtual human. Several BML realizers (virtual agents) have recently been developed and made publicly available:
- Elckerlyc by University of Twente
- EMBR by DFKI
- Greta by CNRS
- SmartBody by the University of Southern California's Institute for Creative Technologies and Information Sciences Institute
- Cadia by Reykjavik University
These realizers can be integrated with an existing dialogue system or used to create animations of communication between virtual agents. Connecting a virtual agent to an existing application involves sending BML to the virtual human and receiving feedback about its performance (see links for integration instructions).
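As a rough illustration of the "send BML to the virtual human" step, the Python sketch below composes a minimal BML block asking a realizer to speak an utterance and align a beat gesture with it. The element and attribute names follow the draft BML specification, but the exact schema (namespaces, sync-point syntax, required attributes) differs between realizers, so this is illustrative rather than a drop-in message for any particular system.

```python
import xml.etree.ElementTree as ET

def build_bml(utterance, gesture_lexeme="BEAT"):
    """Compose a minimal BML block requesting speech plus a gesture.

    Element/attribute names follow the draft BML spec; real realizers
    (Elckerlyc, SmartBody, Greta, ...) may require a namespace and
    additional attributes.
    """
    bml = ET.Element("bml", id="bml1")
    speech = ET.SubElement(bml, "speech", id="s1")
    ET.SubElement(speech, "text").text = utterance
    # Synchronize the gesture stroke with the start of the speech
    # using a BML-style sync reference ("s1:start").
    ET.SubElement(bml, "gesture", id="g1",
                  lexeme=gesture_lexeme, stroke="s1:start")
    return ET.tostring(bml, encoding="unicode")

print(build_bml("Welcome to the museum!"))
```

In a running system, the returned string would be sent to the realizer over whatever transport it supports, and the realizer would send back progress feedback as it performs the behaviours.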
IVAs for evaluating speech systems
As speech is one of the most important capabilities of a virtual agent, the research areas of IVAs and speech are closely related. A fully functional virtual agent requires speech and natural language processing (NLP) capabilities in order to hold a conversation with a user. IVAs, on the other hand, can be used for evaluating speech and NLP applications.
Evaluation of natural language applications is a challenging task which often requires human involvement. Recently, researchers have started using task-based evaluation of systems in virtual environments. For example, the GIVE challenge has been used in the past two years for the evaluation of referring expression generation. In a challenge evaluation, system capabilities are judged based on user performance and task completion. The GIVE challenge is set up as a game and involves players following automatically generated instructions, with different natural language generation (NLG) algorithms producing different instructions. A system's performance is judged based on users' task completion, response time, and satisfaction with the system.
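To make the task-based evaluation idea concrete, the sketch below combines the three measures named above (task completion, response time, user satisfaction) into a single score for comparing systems. The normalisation and equal weighting are hypothetical choices for illustration, not the official GIVE scoring scheme.

```python
def evaluate_run(completed, response_time_s, satisfaction):
    """Combine task-based measures into one score in [0, 1].

    completed:       whether the user finished the task (bool)
    response_time_s: time taken, in seconds (lower is better)
    satisfaction:    user rating on a 1-5 Likert scale

    The normalisation and equal weighting are hypothetical,
    chosen only to illustrate task-based scoring.
    """
    time_score = 1.0 / (1.0 + response_time_s / 60.0)  # decays with time
    sat_score = satisfaction / 5.0                     # scale to [0, 1]
    completion_score = 1.0 if completed else 0.0
    return (completion_score + time_score + sat_score) / 3.0

# Compare two hypothetical NLG algorithms on the same task.
score_a = evaluate_run(True, 90.0, 4)    # finished, fairly satisfied
score_b = evaluate_run(False, 120.0, 2)  # gave up, unsatisfied
print(score_a > score_b)
```

Averaging per-user scores over many players would then rank the competing NLG algorithms, which is essentially what a challenge-style evaluation does at scale.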
Introducing virtual agents into evaluation challenges could be the next step for the evaluation of language systems and could support task-based evaluation of dialogue systems or their components. A dialogue system may be evaluated as a user interacts with an agent or observes interaction between virtual agents in a game-like environment. Introducing agents would create more interesting evaluation scenarios for analysing different aspects of a dialogue system, including NLG, dialogue management, and language understanding.
Thanks to Dennis Reidsma for providing links for BML description and BML realizers.
- S. Kopp, B. Krenn, S. C. Marsella, A. N. Marshall, C. Pelachaud, H. Pirker, K. R. Thórisson, and H. H. Vilhjálmsson. Towards a Common Framework for Multimodal Generation: The Behavior Markup Language. In Proceedings of the 6th International Conference on Intelligent Virtual Agents, 2006.
If you have comments, corrections, or additions to this article, please contact the author: Svetlana Stoyanchev, s.stoyanchev [at] gmail [dot] com