Human-Centered Artificial Intelligence
With the rise of AI-based models that recognize gestures, voice commands, biological signals, and more, we have become accustomed to technologies such as voice-based virtual assistants (e.g., Amazon Echo) or gesture/face/object recognition software for interacting with our devices, for example to unlock a smartphone or change the music being played. While such modalities may be effective for certain tasks, how could they be used in other contexts? And could multiple modalities be combined into multimodal interfaces to unlock new forms of interaction?
Formerly known as "Multimodal User Interfaces" and "Future User Interfaces", this course is given by the Human-IST Institute and provides students with an overview of AI-based natural input modalities (e.g., voice, gestures, facial expressions) and how they can be coupled with multimedia output modalities such as video, audio, information visualisations, and 3D graphics. Students will learn basic techniques for designing, implementing and evaluating such interfaces, along with theoretical knowledge on multimodality, the representation and visualisation of information, and cognitive ergonomics.
Additionally, over the course of the semester, students (working individually or in groups, depending on the number of students) will design and develop their own multimodal interface, to be presented at mid-term and at the end of the semester.
- Teacher(s): Prof. Denis Lalanne
- Assistant(s): Yong-Joon Thoo, Maximiliano Jeanneret Medina
Evaluation Criteria
- Written exam on the content seen during the courses (example questions are provided throughout the semester)
- Individual or group project:
  - Final report
  - Presentation(s)
Learning Objectives
By the end of the course, a student should:
- Be able to design, program and evaluate a human-centered interface.
- Know the properties of multimodal interfaces.
- Master the different levels of multimodal fusion.
- Be familiar with software architectures suited to multimodal interfaces and understand the associated synchronisation problems.
- Have theoretical and practical knowledge of: gesture recognition, speech recognition, information visualisation and tangible interfaces.
- Understand and know how to apply the various methods to evaluate multimodal interfaces.