Nadine: A Human-like sociable and emotional Robot that remembers facts and people
By Nadia Magnenat Thalmann, NTU, Singapore, and MIRALab, University of Geneva, Switzerland
Nadine is a humanoid robot with soft skin and flowing brunette hair. She can talk with anybody, reacting with emotions, a friendly smile and a handshake. She looks you in the eye when talking, and the next time you meet her, she is sure to remember you and your previous conversation. The research behind Nadine is highly interdisciplinary. Over the past four years, we have been fostering cross-disciplinary research in social robotics technologies – involving engineering, computer science, linguistics, psychology and other fields – to transform a virtual human into a physical being that is able to observe and interact with humans. This talk presents the core technology of the Nadine robot and demonstrates some of the interactions we can have with her.
Nadia Magnenat Thalmann is currently Professor and Director of the Institute for Media Innovation, Nanyang Technological University, Singapore. She is also the Founder and Director of MIRALab, an interdisciplinary lab in human–computer animation at the University of Geneva, Switzerland. Her overall research area is the modelling and simulation of virtual humans. She is now working on social robots, mixed realities and medical simulation. Throughout her career, she has received many artistic and scientific awards, among them the 2012 Humboldt Research Award, as well as two Doctor Honoris Causa degrees (from the University of Hanover in Germany and the University of Ottawa in Canada). She is Editor-in-Chief of the journal The Visual Computer (Springer-Verlag) and a Member of the Swiss Academy of Engineering Sciences. More information can be found on Wikipedia.
Rapidly building character animation and simulation
By Ari Shapiro, University of Southern California, Institute of Creative Technologies, USA
The 3D character will continue to be an integral part of 3D games, movies, and other simulated environments. Recently, the tools and technologies for producing characters have improved and become more affordable. In this talk, I describe my experiences in rapidly building 3D characters using sensing technology, behavior algorithms and modeling algorithms.
Ari Shapiro has nearly two decades of professional experience in computing as an engineer, consultant, manager and scientist. He currently works as a research scientist at the USC Institute for Creative Technologies (ICT), where his focus is on synthesizing realistic animation for virtual characters. At ICT, he heads the team for the SmartBody application, which serves as an animation system for synchronizing speech, facial animation, body motion and gesturing in many of ICT's real-time virtual human systems. For several years, he worked on character animation tools and algorithms in the research and development departments of visual effects and video game companies such as Industrial Light and Magic, LucasArts and Rhythm and Hues Studios. He has worked on many feature-length films and holds film credits in The Incredible Hulk and Alvin and the Chipmunks 2, as well as video game credits in the Star Wars: The Force Unleashed series. More information can be found here.
Real Virtuality: High-fidelity multisensory virtual experiences
by Alan Chalmers, University of Warwick, United Kingdom
Virtual experiences offer the possibility of simulating potentially complex, dangerous or threatening real-world experiences in a safe, repeatable and controlled manner. They provide a powerful and fully customizable tool for delivering a personalised experience, and allow attributes of human behaviour in such environments to be examined. To accurately simulate reality, however, virtual experiences need to run in real time, be based on physics, and deliver multiple senses (visuals, audio, smell, touch, temperature, taste, etc.) in a natural manner, including any cross-modal effects (the influence of one sense on another). This talk describes Real Virtuality, a framework for computing and delivering high-fidelity multisensory virtual experiences, and shows how it can provide new insights in diverse applications such as automotive design and virtual archaeology.
Alan Chalmers is a Professor of Visualisation at WMG, University of Warwick, UK and a Royal Society Industrial Fellow. He holds an MSc with distinction from Rhodes University (1985) and a PhD from the University of Bristol (1991). Chalmers is Honorary President of Afrigraph and a former Vice President of ACM SIGGRAPH. He has published over 230 papers in journals and international conferences on HDR, high-fidelity virtual environments, multisensory perception, parallel processing and virtual archaeology, and has successfully supervised 37 PhD students. From 2011 to 2015, Chalmers chaired EU COST Action "IC1005 HDR", which coordinated HDR research across Europe to facilitate its widespread uptake. He is also a UK representative on IST/37, considering the incorporation of HDR into an MPEG standard. More information can be found here.