The focus of this activity is research in the fields of Augmented, Virtual, and Mixed Reality environments, including reference conceptual models for implementing XR experiences that transcend the boundaries between real and virtual interaction, novel approaches supporting realistic high-quality human-to-virtual-character interactions, and tangible case studies illustrating how the proposed conceptual models are materialized.
The Invisible Museum: A User-Centric Platform for Creating Virtual 3D Exhibitions with VR Support (2021): A user-centric platform that allows users to create interactive and immersive virtual 3D/VR exhibitions through a unified collaborative authoring environment. The platform itself was designed following a Human-Centered Design approach, with the active participation of museum curators and end-users. Content representation adheres to domain standards such as the CIDOC Conceptual Reference Model (CIDOC-CRM) of the International Council of Museums and the Europeana Data Model, and exploits state-of-the-art deep learning technologies to assist curators by generating ontology bindings for textual data. The platform enables the formulation and semantic representation of narratives that guide storytelling experiences and bind the presented artifacts to their socio-historic context. Main contributions are pertinent to the fields of (a) user-designed dynamic virtual exhibitions, (b) personalized suggestions and exhibition tours, (c) visualization in web-based 3D/VR technologies, and (d) immersive navigation and interaction.
https://doi.org/10.3390/electronics10030363
Keywords: Virtual Reality, Virtual museum, 3D/VR exhibitions, Natural language processing
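The ontology-binding step can be pictured as mapping flat, curator-entered metadata onto CIDOC-CRM-style triples. The sketch below is illustrative only: the artifact data and the field-to-property table are hypothetical, and the actual platform derives such bindings with deep learning models rather than a fixed lookup.

```python
# Hypothetical artifact metadata, as a curator might enter it.
artifact = {
    "title": "Minoan clay vessel",
    "creation_place": "Knossos",
    "period": "Bronze Age",
}

# Illustrative mapping from metadata fields to CIDOC-CRM properties.
# In the real platform these bindings are generated by a learned
# model, not a hand-written table.
FIELD_TO_PROPERTY = {
    "creation_place": "P7_took_place_at",
    "period": "P10_falls_within",
}

def to_triples(artifact):
    """Turn flat metadata into (subject, predicate, object) triples."""
    subject = artifact["title"]
    return [(subject, prop, artifact[field])
            for field, prop in FIELD_TO_PROPERTY.items()
            if field in artifact]

triples = to_triples(artifact)
```

Once artifacts are represented as triples, narratives can be expressed as queries and links over the same graph, which is what ties exhibits to their socio-historic context.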
A Technological Framework for Rapid Prototyping of X-reality Applications for Interactive 3D Spaces (2021): An integrated technological framework for rapid prototyping of X-reality applications for interactive 3D spaces, featuring real-time person and object tracking, touch input support and spatial sound output. The framework comprises the interactive 3D space and an API for developers.
https://doi.org/10.1007/978-3-030-74009-2_13
Keywords: Extended reality, Large display interfaces, Multi-display environments
Real-Time Adaptation of Context-Aware Intelligent User Interfaces, for Enhanced Situational Awareness (2021): A novel computational approach for the dynamic adaptation of User Interfaces (UIs) is proposed, which aims to enhance the Situational Awareness (SA) of users by leveraging the current context and providing the most useful information in an optimal and efficient manner. By combining ontology modeling and reasoning with combinatorial optimization, the system decides what information to present, when to present it, where on the display to visualize it, and how, taking into consideration contextual factors as well as placement constraints.
https://doi.org/10.1109/ACCESS.2022.3152743
Keywords: Intelligent User Interfaces, Situational Awareness, Augmented reality, Context-awareness, UI optimization
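The "what to present" decision can be sketched as a small combinatorial search: pick the subset of candidate information items that maximizes context-dependent utility without exceeding a display budget. The items, utility scores and area budget below are made up for the sketch; the published system also reasons over when and how to present, which this toy omits.

```python
from itertools import combinations

# Hypothetical candidate items: (name, utility in current context, screen area).
items = [
    ("speed", 0.9, 2), ("fuel", 0.6, 1),
    ("map", 0.8, 4), ("alerts", 0.7, 2),
]
MAX_AREA = 5  # placement constraint: total display area available

def best_subset(items, max_area):
    """Exhaustively pick the item set with maximal total utility
    whose combined screen area fits the display budget."""
    best, best_utility = (), 0.0
    for r in range(1, len(items) + 1):
        for combo in combinations(items, r):
            area = sum(i[2] for i in combo)
            utility = sum(i[1] for i in combo)
            if area <= max_area and utility > best_utility:
                best, best_utility = combo, utility
    return [i[0] for i in best], best_utility

chosen, score = best_utility = best_subset(items, MAX_AREA)
```

Exhaustive search is fine for a handful of items; with many candidates one would switch to an ILP or knapsack-style solver, while the ontology reasoning supplies the utility scores from context.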
An X-reality framework unifying the virtual and real world towards realistic virtual museums (2020): The framework synthesizes augmented, virtual and mixed reality technologies to create unified X-Reality experiences in realistic virtual museums, engaging visitors in an interactive and seamless fusion of the physical and virtual worlds, featuring virtual agents that exhibit naturalistic behavior. Museum visitors are able to interact with the virtual agents as they would with real-world counterparts. The framework not only provides refined experiences for museum visitors, but also achieves high-quality entertainment combined with more effective knowledge acquisition.
https://doi.org/10.3390/app11010338
https://doi.org/10.1109/ISMAR-Adjunct51615.2020.00069
Keywords: X-Reality, Virtual museum
Intuitive Second-Screen Experiences in VR Isolation (2020): An approach for second-screen experiences in VR, orchestrating the interplay between a handheld device and a VR headset. It allows users to interface with the device's touchscreen display despite the opaque, view-obstructing visor. This is achieved through a “mobile second-screen interreality system” (MSIS): a virtual 3D representation of the smart device inside the VR world, tightly coupled with its real-world counterpart, which allows users to manipulate, interface with, and look at the actual device screen as if the head-mounted display were not present. Advanced hand and finger tracking represents the user's virtual hands' and fingers' movements on the screen, further enhancing virtual presence and cementing the links established between the human brain and the virtual reality space.
https://doi.org/10.1007/978-3-030-60703-6_17
Keywords: Virtual Reality, Second-screen Interaction
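The tight coupling between the real handheld and its virtual twin can be sketched as a per-frame synchronization of pose and screen content. All names below are hypothetical; a real MSIS-style pipeline additionally involves device tracking, calibration and screen streaming, which this toy elides.

```python
# Minimal sketch (hypothetical names): each frame, copy the tracked
# pose and current screen content of the real handheld onto its
# virtual twin, so the user can operate the device through the headset.
class VirtualDevice:
    """The 3D stand-in for the handheld inside the VR scene."""
    def __init__(self):
        self.position = (0.0, 0.0, 0.0)
        self.rotation = (0.0, 0.0, 0.0)   # Euler angles, degrees
        self.screen_texture = None        # mirrored screen capture

def sync_twin(virtual, tracked_pose, screen_frame):
    """Couple the virtual twin to the real device for this frame."""
    virtual.position, virtual.rotation = tracked_pose
    virtual.screen_texture = screen_frame
    return virtual

# One frame: tracker reports the device pose; the screen is mirrored.
dev = sync_twin(VirtualDevice(), ((0.1, 1.2, 0.3), (0, 90, 0)), "frame_0")
```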
Virtual 3D Environment for Cultural Heritage Artifacts (2017): The Virtual 3D Environment for Cultural Heritage Artifacts is a virtual reality application that allows users to grasp and manipulate three-dimensional items located in a virtual cultural heritage environment. As users approach the virtual environment, the lighting of the room adjusts according to their locations. Users have access to archaeological exhibits that can be viewed, rotated and magnified in the virtual environment. In addition, the system allows users to modify the lighting of the selected 3D object to highlight its details (two draggable spotlights can be enabled/disabled), to auto-rotate the exhibit along the Y axis, and to reset the exhibit to its initial position. User interaction in the virtual reality environment is achieved through physical gestures. Users’ hands are displayed in the virtual world as three-dimensional virtual hands. Users can view and manage their position and direction, as real-world mapping is applied. In addition, users can use their left hand as a menu when the palm is facing the users’ eyes. This menu refers to the last item selected and can be pinned at any time using the pin button at its top right side. It contains an indicative title and image, along with a short textual description. Furthermore, actions on the corresponding exhibit can be applied through the options at the bottom of the menu.
https://doi.org/10.1007/978-3-319-92285-0_25
Keywords: Cultural Heritage, Virtual Reality
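The palm-facing-the-eyes cue that summons the hand menu can be approximated with a dot-product test between the palm normal and the direction from the palm to the eyes. Positions, normal and threshold below are illustrative, not taken from the paper.

```python
import math

def normalize(v):
    """Scale a 3D vector to unit length."""
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def palm_faces_eyes(palm_pos, palm_normal, eye_pos, threshold=0.7):
    """Return True when the palm normal points roughly toward the
    user's eyes -- the cue used here to pop up the hand menu.
    (Function names and the 0.7 cosine threshold are illustrative.)"""
    to_eyes = normalize(tuple(e - p for e, p in zip(eye_pos, palm_pos)))
    cosine = sum(a * b for a, b in zip(normalize(palm_normal), to_eyes))
    return cosine > threshold

# Palm held up in front of the face, normal pointing back at the eyes:
facing = palm_faces_eyes((0, 1.4, 0.4), (0, 0, -1), (0, 1.6, 0))
```

A hand-tracking sensor supplies the palm position and normal each frame, so the same test simply runs continuously to show or hide the menu.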
AmiSim (2017): A framework that simulates existing physical spaces as virtual reality tours, enabling users to navigate the virtual world in order to extract information. The system offers an immersive user experience, giving users the sense of being there although their physical presence is not necessary. It employs a head-mounted VR headset (Oculus Rift) for rendering virtual worlds stereoscopically. In addition, a special lightweight camera capable of full hand-articulation tracking (LeapMotion) is included. Alternatively, motion controllers (Oculus Touch) can be used as an additional means of interaction, providing a more precise albeit more obtrusive medium. Finally, a Kinect One sensor is employed as a complementary camera for embedding users’ figures in the virtual environment via user tracking and background subtraction. ICS-FORTH’s Ambient Intelligence (AmI) Facility has been used as a case study for the creation of a virtual reality tour with AmiSim. During the virtual tour, the system allows users to extract information through digital interactive installations, integrated in the virtual environment in accordance with their physical location, while creating a realistic and immersive user experience.
Keywords: Virtual Reality, Physical Space Simulation
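The depth-based background subtraction that a Kinect-style sensor enables can be sketched as thresholding a depth map: pixels nearer than a cut-off are kept as the user's figure, everything else is treated as background. The depth values and the 2 m threshold below are made up for the sketch.

```python
# Toy depth frame in millimetres (a real Kinect frame is 512x424).
depth_mm = [
    [4000, 4000, 1200, 4000],
    [4000, 1100, 1150, 4000],
    [4000, 4000, 4000, 4000],
]
THRESHOLD_MM = 2000  # illustrative: anything nearer than 2 m is "the user"

def user_mask(depth, threshold):
    """1 where a pixel belongs to the user's figure, 0 for background."""
    return [[1 if d < threshold else 0 for d in row] for row in depth]

mask = user_mask(depth_mm, THRESHOLD_MM)
```

The mask then selects which colour-camera pixels to composite into the virtual scene; production systems refine it with body tracking and edge smoothing rather than a single global threshold.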
CocinAR (2017): CocinAR is an Augmented Reality (AR) system that has been developed to help teach pre-schoolers (including cognitively impaired children) how to prepare simple meals. It includes a variety of exercises and mini games, aiming to instruct children: (i) which meals are appropriate for breakfast, lunch, and dinner, (ii) how to cook simple meals (e.g. bread with butter and honey, lettuce salad, pasta with tomato sauce, etc.), and (iii) fundamental rules of safety and hygiene that should be applied during the food preparation process. The system supports multimodal input, utilizing tangible objects on a table-top surface, and multimedia output available in textual, auditory and pictorial form. Profiling functionality is supported, allowing the system to adapt to the needs and preferences of each individual user, while an extensive analytics framework allows trainers to monitor the progress of their students. CocinAR consists of a computer, a high-resolution projector, a simple wooden table, an infrared camera and a high-resolution camera. With the aim to support an immersive user experience, the system is designed to “camouflage” itself in a way that none of the equipment used is visible to the users, leaving visible only the plain wooden table.
https://doi.org/10.1007/978-3-319-76111-4_24
Keywords: Augmented Reality, Serious games, Users with disabilities
Big Data Visualization (2016): A 3D big data visualization application for data centers, based on the real layout of the computers. The application displays information such as temperature, CPU load and energy consumption, and visualizes potential problems and sectors which need intervention. Users navigate and interact through gestures and virtual menus. The application is available for high-resolution screens but also for VR head-worn devices such as the Oculus Rift.
https://doi.org/10.1007/978-3-319-58521-5_20
https://doi.org/10.1007/978-3-319-39862-4_6
Keywords: Virtual Reality, Big Data
Augmented Reality Mirror (2016): An Augmented Reality (AR) mirror that provides motion-based interaction to users and suggests various outfits. The system can be easily installed inside or at the window of a retail shop, enabling users to stand in front of it and see themselves wearing clothes that the system suggests, while naturally interacting with the system remotely, using gestures, to like or dislike the recommended outfit. Users can also choose to post photos of themselves wearing the proposed clothes on their social media accounts, as well as to buy the clothes either directly from the store or online.
https://doi.org/10.1007/978-3-319-40542-1_77
Keywords: Augmented Reality, Retail
Interactive 3D virtual assistant (2014): A virtual 3D assistant that can be incorporated into a variety of devices, such as smartphones, tablets, televisions or projectors. The user interacts with the agent through gestures, speech recognition and smartphone events. The agent can offer: (a) user training in the interaction techniques of a system, (b) a guided tour of the system, (c) structured presentation of multimedia items, and (d) instant help in real time. The assistance content is dynamic and every application provides its content in real time, making the system adaptable to different environments and systems.
https://doi.org/10.1007/978-3-319-20804-6_23