Extended Reality Environments and Real-time Computer Graphics Systems

This research activity of the Laboratory focuses on the field of eXtended Reality (XR) environments, including Augmented Reality (AR), Virtual Reality (VR), and Mixed Reality (MR) environments. It involves developing conceptual reference models for the implementation of XR experiences, innovative approaches that support realistic and high-quality interaction with virtual characters, and tangible case studies that demonstrate how the proposed conceptual models are realized. Additionally, this research direction explores high-fidelity computer graphics systems for Human-Computer Interaction. Activities in this area focus on dynamic scene rendering methods using GPU-accelerated graphics, MR character simulation frameworks, and algebraic frameworks for computer graphics and virtual character simulation.


Indicative Outcomes


Novel geometric cutting, tearing and drilling in deformable 3D meshes in VR (2021): In this work, we demonstrate the merits of multivector usage with a novel, integrated rigged character simulation framework based on Conformal Geometric Algebra (CGA). In such a framework, one may, for the first time, perform real-time cuts and tears as well as drill holes on a rigged 3D model. These operations can be performed before and/or after model animation, while maintaining deformation topology. Moreover, our framework permits on-the-fly generation of intermediate keyframes based on user input, in addition to the frames provided in the model data.
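The on-the-fly intermediate keyframes mentioned above can be sketched under simplifying assumptions. The snippet below is an illustrative reconstruction, not the framework's CGA-based implementation: it assumes per-joint keyframes stored as a unit quaternion plus a translation, and blends them with spherical linear interpolation (slerp). All function names are hypothetical.

```python
import numpy as np

def slerp(q0, q1, t):
    """Spherical linear interpolation between two unit quaternions (w, x, y, z)."""
    q0, q1 = q0 / np.linalg.norm(q0), q1 / np.linalg.norm(q1)
    dot = np.dot(q0, q1)
    if dot < 0.0:            # take the shorter arc on the 4D sphere
        q1, dot = -q1, -dot
    if dot > 0.9995:         # nearly parallel: fall back to normalized lerp
        q = q0 + t * (q1 - q0)
        return q / np.linalg.norm(q)
    theta = np.arccos(dot)
    return (np.sin((1 - t) * theta) * q0 + np.sin(t * theta) * q1) / np.sin(theta)

def intermediate_keyframe(key_a, key_b, t):
    """Blend two joint keyframes (rotation + translation) at parameter t in [0, 1]."""
    return {"rot": slerp(key_a["rot"], key_b["rot"], t),
            "pos": (1 - t) * key_a["pos"] + t * key_b["pos"]}
```

A user-driven `t` (e.g. derived from a drag gesture) would then yield a pose between any two stored frames without those frames ever existing in the model data.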

The Invisible Museum: A User-Centric Platform for Creating Virtual 3D Exhibitions with VR Support (2021): A user-centric platform that allows users to create interactive and immersive virtual 3D/VR exhibitions using a unified collaborative authoring environment. The platform itself was designed following a Human-Centered Design approach, with the active participation of museum curators and end-users. Content representation adheres to domain standards such as the CIDOC Conceptual Reference Model (CIDOC-CRM) of the International Council of Museums and the Europeana Data Model, and exploits state-of-the-art deep learning technologies to assist the curators by generating ontology bindings for textual data. The platform enables the formulation and semantic representation of narratives that guide storytelling experiences and bind the presented artifacts with their socio-historic context. Main contributions are pertinent to the fields of (a) user-designed dynamic virtual exhibitions, (b) personalized suggestions and exhibition tours, (c) visualization in web-based 3D/VR technologies, and (d) immersive navigation and interaction.

A Technological Framework for Rapid Prototyping of X-reality Applications for Interactive 3D Spaces (2021): An integrated technological framework for rapid prototyping of X-reality applications for interactive 3D spaces, featuring real-time person and object tracking, touch input support and spatial sound output. The framework comprises the interactive 3D space and an API for developers.

Real-Time Adaptation of Context-Aware Intelligent User Interfaces, for Enhanced Situational Awareness (2021): A novel computational approach for the dynamic adaptation of User Interfaces (UIs) is proposed, which aims at enhancing the Situational Awareness (SA) of users by leveraging the current context and providing the most useful information in an optimal and efficient manner. By combining Ontology modeling and reasoning with Combinatorial Optimization, the system decides what information to present, when to present it, where to visualize it on the display, and how, taking into consideration contextual factors as well as placement constraints.
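The "what to present" decision can be illustrated as a small combinatorial-optimization sketch. The items, utility scores and screen-area costs below are entirely hypothetical, and the exhaustive 0/1-knapsack search merely stands in for whatever solver and constraint model the actual system uses:

```python
from itertools import combinations

# Hypothetical candidate items: (name, utility given current context, screen-area cost)
items = [("speed", 9, 2), ("fuel", 6, 1), ("map", 8, 4),
         ("alerts", 10, 3), ("weather", 3, 2)]
CAPACITY = 6  # available display-area units

def best_layout(items, capacity):
    """Pick the subset of items with maximal total utility whose total
    area cost fits the display: a brute-force 0/1 knapsack."""
    best, best_utility = (), 0
    for r in range(len(items) + 1):
        for subset in combinations(items, r):
            cost = sum(c for _, _, c in subset)
            utility = sum(u for _, u, _ in subset)
            if cost <= capacity and utility > best_utility:
                best, best_utility = subset, utility
    return best, best_utility
```

With these toy numbers, `best_layout(items, CAPACITY)` selects speed, fuel and alerts; in a context-aware system the utility column would be re-derived from ontology reasoning each time the context changes, and the optimization re-run.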

An X-reality framework unifying the virtual and real world towards realistic virtual museums (2020): The framework provides the synthesis of augmented, virtual and mixed reality technologies to create unified X-Reality experiences in realistic virtual museums, engaging visitors in an interactive and seamless fusion of the physical and virtual worlds, featuring virtual agents that exhibit naturalistic behavior. Museum visitors are able to interact with the virtual agents as they would with real-world counterparts. The framework not only provides refined experiences for museum visitors, but also achieves high-quality entertainment combined with more effective knowledge acquisition.

Intuitive Second-Screen Experiences in VR Isolation (2020): An approach for second-screen experiences in VR that orchestrates the interplay between a handheld device and a VR headset. It allows users to interface with the touchscreen display despite the opaque, view-obstructing visor. This is achieved through a “mobile second-screen interreality system” (MSIS): a virtual 3D smart-device representation inside the VR world, tightly coupled with its real-world counterpart, allowing users to manipulate, interface with, and look at the actual device screen as if the head-mounted display were not present. Advanced hand and finger tracking is used to represent the movements of the user's virtual hands and fingers on the screen, further enhancing virtual presence and cementing the links established between the human brain and the virtual reality space.

New line of research where computational geometric algebra is used to solve core computer graphics problems in real-time virtual character simulation and HCI (2020): In this work, we present a novel, integrated rigged character simulation framework in Conformal Geometric Algebra (CGA) that supports, for the first time, real-time cuts and tears, before and/or after the animation, while maintaining deformation topology. The purpose of using CGA is to lift several restrictions posed by current state-of-the-art character animation and deformation methods. Previous implementations required weighted matrices to perform deformations, whereas in the current state of the art, dual quaternions handle both rotations and translations but cannot handle dilations. CGA is a suitable extension of dual-quaternion algebra that amends these two major shortcomings: the need to constantly transmute between matrices and dual quaternions, as well as the inability to properly dilate a model during animation.
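The dual-quaternion limitation described above can be sketched in plain code. A unit dual quaternion packs a rotation and a translation into one algebraic object, but has no slot for a uniform dilation — the gap that CGA versors fill. This is an illustrative numpy reconstruction, not the paper's CGA implementation; helper names are hypothetical.

```python
import numpy as np

def q_mul(a, b):
    """Hamilton product of quaternions stored as (w, x, y, z)."""
    w1, x1, y1, z1 = a; w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2])

def q_conj(q):
    return np.array([q[0], -q[1], -q[2], -q[3]])

def dq_from_rt(q_r, t):
    """Unit dual quaternion (real part, dual part) encoding rotation q_r
    followed by translation t: dual part = 0.5 * (0, t) * q_r."""
    return q_r, 0.5 * q_mul(np.array([0.0, *t]), q_r)

def dq_apply(q_r, q_d, p):
    """Transform point p. A *unit* dual quaternion can only encode
    rotation + translation; there is no factor for uniform dilation,
    which is the shortcoming CGA amends."""
    rotated = q_mul(q_mul(q_r, np.array([0.0, *p])), q_conj(q_r))[1:]
    t = 2.0 * q_mul(q_d, q_conj(q_r))[1:]   # recover translation from dual part
    return rotated + t
```

Blending such pairs per-vertex gives standard dual-quaternion skinning; scaling a limb during animation would force a detour through matrices, which is exactly the transmutation the CGA framework avoids.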

Application of geometric algebra to computer graphics in general and real-time virtual characters in particular (2018): In this work, we propose a novel VR software system aiming to disrupt the healthcare training industry with the first psychomotor Virtual Reality (VR) surgical training solution. Our system generates a fail-safe, realistic environment in which surgeons can master and extend their skills in an affordable and portable solution. We deliver an educational tool for orthopedic surgeries that enhances the learning procedure with gamification elements, advanced interactability and cooperative features in an immersive VR operating theater.

Virtual 3D Environment for Cultural Heritage Artifacts (2017): The Virtual 3D Environment for Cultural Heritage Artifacts is a virtual reality application that allows users to grasp and manipulate three-dimensional items located in a virtual cultural heritage environment. As users approach the virtual environment, the lighting of the room adjusts according to their locations. Users have access to archaeological exhibits that can be viewed, rotated and magnified in the virtual environment. In addition, the system allows users to modify the lighting of the selected 3D object to highlight its details (two draggable spotlights can be enabled/disabled), to auto-rotate the exhibit along the Y axis, and to reset the exhibit to its initial position. User interaction in the virtual reality environment is achieved through physical gestures. Users’ hands are displayed in the virtual world as three-dimensional virtual hands. Users can view and manage their position and direction, as real-world mapping is applied. In addition, users can use their left hand as a menu when the palm is facing the user's eyes. This menu refers to the last item selected and can be pinned at any time using the pin button at its top-right side. It contains an indicative title and image, along with a short textual description. Furthermore, actions on the corresponding exhibit can be applied through the options at the bottom of the menu.

AmiSim (2017): AmiSim is a framework that simulates existing physical spaces as a virtual reality tour, enabling users to navigate the virtual world in order to extract information. The system offers an immersive user experience, giving users the sense that they are there, although their physical presence is not necessary. The system employs a VR head-mounted display (Oculus Rift) for rendering virtual worlds stereoscopically. In addition, a special lightweight camera capable of full hand-articulation tracking (Leap Motion) is included. Alternatively, motion controllers (Oculus Touch) can be used as an additional means of interaction, providing a more precise albeit more obtrusive medium. Finally, a Kinect One sensor is employed as a complementary camera for embedding users’ figures in the virtual environment via user tracking and background subtraction. ICS-FORTH’s Ambient Intelligence (AmI) Facility has been used as a case study for the creation of a virtual reality tour with AmiSim. During the virtual tour, the system allows users to extract information through digital interactive installations, integrated in the virtual environment in accordance with their physical location, while creating a realistic and immersive user experience.

CocinAR (2017): CocinAR is an Augmented Reality (AR) system that has been developed to help teach pre-schoolers (including cognitively impaired children) how to prepare simple meals. It includes a variety of exercises and mini-games aiming to instruct children on: (i) which meals are appropriate for breakfast, lunch, and dinner, (ii) how to cook simple meals (e.g. bread with butter and honey, lettuce salad, pasta with tomato sauce), and (iii) fundamental rules of safety and hygiene that should be applied during the food preparation process. The system supports multimodal input, utilizing tangible objects on a table-top surface, and multimedia output available in textual, auditory and pictorial form. Profiling functionality is supported, allowing the system to adapt to the needs and preferences of each individual user, while an extensive analytics framework allows the trainers to monitor the progress of their students. CocinAR consists of a computer, a high-resolution projector, a simple wooden table, an infrared camera and a high-resolution camera. With the aim of supporting an immersive user experience, the system is designed to “camouflage” itself so that none of the equipment used is visible to the users, leaving visible only the plain wooden table.

Global illumination real-time rendering of static 3D geometrical scenes in parallel with complete VR frameworks for virtual human simulation and their related simulation technologies (2017): In this work, we use conformal geometric algebra (CGA) as the mathematical background for global illumination (GI) in real time under distant image-based lighting (IBL), for diffuse surfaces with self-shadowing, by efficiently rotating the environment light using CGA entities. Our work is based on spherical harmonics (SH), which are used for approximating natural, area-light illumination as irradiance maps. Our main novelty is that we extend the precomputed radiance transfer (PRT) algorithm by representing SH for the first time with CGA.
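The SH side of this pipeline can be sketched independently of CGA. Below is a minimal Monte Carlo projection of an environment radiance function onto the first nine real SH basis functions (bands 0–2, the standard choice for diffuse irradiance maps). Function names are illustrative, and the actual system would of course sample a captured environment map rather than an analytic function.

```python
import numpy as np

def sh_basis(d):
    """The nine real spherical harmonics of bands 0-2 for unit direction d."""
    x, y, z = d
    return np.array([
        0.282095,                         # Y_0^0
        0.488603 * y,                     # Y_1^-1
        0.488603 * z,                     # Y_1^0
        0.488603 * x,                     # Y_1^1
        1.092548 * x * y,                 # Y_2^-2
        1.092548 * y * z,                 # Y_2^-1
        0.315392 * (3 * z * z - 1),       # Y_2^0
        1.092548 * x * z,                 # Y_2^1
        0.546274 * (x * x - y * y)])      # Y_2^2

def project_env(radiance, n=5000, seed=0):
    """Monte Carlo projection of a radiance function over the sphere onto SH:
    c_i = integral of radiance(d) * Y_i(d) over all directions d."""
    rng = np.random.default_rng(seed)
    v = rng.normal(size=(n, 3))
    v /= np.linalg.norm(v, axis=1, keepdims=True)  # uniform directions on the sphere
    coeffs = np.zeros(9)
    for d in v:
        coeffs += radiance(d) * sh_basis(d)
    return coeffs * (4 * np.pi / n)  # sphere area divided by sample count
```

Once the environment is reduced to these nine coefficients, rotating the light amounts to rotating the coefficient vector — the step the work above performs with CGA entities instead of the usual SH rotation matrices.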

Real-time rendering of life-size AR/VR of fully integrated and simulated virtual humans, working on high fidelity presence (including dynamic scene precomputed light transport) and interaction (including animation and gaze model) in Mixed Reality (2017): Our main goal in this work is to compare different software methodologies for creating VR environments and present a complete novel methodology for authoring life-sized AR virtual characters and life-sized AR crowd simulation using only modern mobile devices. One important aspect of these environments that we focus on is creating realistic and interactive virtual characters via procedurally generated body and facial animations which are illuminated with real environment light. Virtual characters' transformations are handled efficiently using a single mathematical framework, the 3D Euclidean geometric algebra (GA), and the conformal geometric algebra (CGA) which is able to handle translations, rotations, and dilations. Using such a single algebraic framework, we avoid conversions between different mathematical representations; as a result, we achieve more efficient performance.
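The benefit of one algebraic framework handling rotations, translations and dilations without matrix conversions can be illustrated with a plain similarity transform, written as a (scale, quaternion, translation) triple and composed directly. This is a stand-in sketch for the GA/CGA versors actually used; all names are hypothetical.

```python
import numpy as np

def q_mul(a, b):
    """Hamilton product of quaternions stored as (w, x, y, z)."""
    w1, x1, y1, z1 = a; w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2])

def q_conj(q):
    return np.array([q[0], -q[1], -q[2], -q[3]])

def apply_sim(s, q, t, p):
    """Apply the similarity x -> s * R(q) x + t to point p."""
    rotated = q_mul(q_mul(q, np.array([0.0, *p])), q_conj(q))[1:]
    return s * rotated + np.asarray(t)

def compose(outer, inner):
    """Compose two similarities (apply `inner` first) with no matrices:
    scales multiply, rotations multiply, and the outer transform is
    pushed through the inner translation."""
    s1, q1, t1 = inner; s2, q2, t2 = outer
    return (s2 * s1, q_mul(q2, q1), apply_sim(s2, q2, t2, t1))
```

In CGA the three components collapse into a single versor applied by one sandwich product, which is what lets the character pipeline above stay in one representation from animation through rendering.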

Presence and Gamification algorithms for medical and surgical training systems as well as novel digital heritage applications, including novel neurofeedback and BCI methods in VR for enhanced learning, education and training (2017): Building on augmented cognition theory and technology, our novel contribution in this work enables the acceleration and enhancement of certain brain functions related to task performance. We integrated, in an open-source framework, the latest immersive virtual reality (VR) head-mounted displays with the Emotiv EPOC EEG headset, forming an open neuro- and biofeedback system for cognitive state detection and augmentation.

Big Data Visualization (2016): A 3D big-data visualization application for data centers, based on the real layout of the computers. The application displays information such as temperature, CPU load and energy consumption, and visualizes potential problems and sectors that need intervention. Users navigate and interact through gestures and virtual menus. The application is available for high-resolution screens as well as for VR head-worn devices such as the Oculus Rift.

Augmented Reality Mirror (2016): An Augmented Reality (AR) mirror which provides motion-based interaction to users and suggests various outfits. The system can be easily installed inside or at the window of a retail shop, enabling users to stand in front of it and see themselves wearing clothes that the system suggests, while being able to naturally interact with the system remotely, using gestures, in order to like or dislike the recommended outfit. Users can also choose to post photos of themselves wearing the proposed clothes on their social media accounts, as well as to buy the clothes either directly from the store or online.

Interactive 3D virtual assistant (2014): A virtual 3D assistant which can be incorporated into a variety of devices, such as smartphones, tablets, televisions or projectors. The interaction of the user with the agent is achieved through gestures, speech recognition and smartphone events. The agent can offer: a) user training in the interaction techniques of a system, b) a guided tour of the system, c) structured presentation of multimedia items, and d) instant help in real time. The assistance content is dynamic, and every application provides its content in real time, making the system adaptable to different environments and systems. https://doi.org/10.1007/978-3-319-20804-6_23