
Autonomous agents to accelerate extended reality testing

Extended reality (XR) systems are used in a wide range of sectors, yet they need to be tested before they can reach the commercial market. We spoke to Rui Filipe Fernandes Prada about the work of the iv4XR project in developing autonomous testing agents that promise to bring significant benefits to developers.

An extended reality (XR) system typically involves a representation of a virtual world, and many users quickly find themselves fully absorbed in that scenario, whether they’re in a flight simulation or playing a computer game. These systems are used in a wide variety of settings, from museums to combat training, but they first need to be tested before they can be applied, which is currently done in quite a labour-intensive way. “Currently a lot of human labour is required to test these XR systems, in the sense that you need users to test them,” says Rui Prada, coordinator of the iv4XR project. These testers are asked to try to perform specific tasks integral to the functioning of the system. “For example, the users might try to navigate from point A to point B in the virtual world, to interact with all the objects within it, or to combine objects,” explains Prada. “They are also asked to explore the system without much guidance. They might be given more open tasks – simply see what you can do, try to finish a certain level in the game, or find all the hidden objects. So in this case users have a bit more freedom.”
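Tests of the first, scripted kind can be pictured as a simple agent loop: move to a point, interact with an object, record what happened. The sketch below is purely illustrative – every class, method, and object name here is hypothetical, invented for this example, and is not part of the iv4XR framework.

```python
# Illustrative sketch (hypothetical names, not the iv4XR API): an agent that
# performs a scripted XR test task – navigate from point A to point B and
# interact with an object – the way a human tester would be asked to.

from dataclasses import dataclass

@dataclass
class Position:
    x: float
    y: float

class TestAgent:
    """A minimal agent that executes a scripted task and records the outcome."""

    def __init__(self, start: Position):
        self.position = start
        self.log: list[str] = []

    def navigate_to(self, target: Position) -> bool:
        # A real agent would path-find through the virtual world and could
        # fail; here we simply move and log the step.
        self.position = target
        self.log.append(f"reached ({target.x}, {target.y})")
        return True

    def interact_with(self, object_id: str) -> bool:
        self.log.append(f"interacted with {object_id}")
        return True

def run_scripted_test(agent: TestAgent) -> bool:
    """Scripted task: go from point A to point B, then use the door switch."""
    ok = agent.navigate_to(Position(10.0, 4.0))
    ok = ok and agent.interact_with("door_switch")
    return ok

agent = TestAgent(start=Position(0.0, 0.0))
print("PASS" if run_scripted_test(agent) else "FAIL")
```

As the article goes on to explain, a fixed script like this breaks as soon as the virtual world changes – which is exactly why the project argues for intelligence in the agent rather than in the script.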

iv4XR project

The aim in the iv4XR project is to automate two kinds of tests, using techniques from artificial intelligence. In one type of test there is a specific task that can be scripted, so it might be expected that it should be relatively simple to automate; however, Prada says this is a technically challenging task. “In these XR applications you need intelligence to adapt as things change. You cannot have a simple script which performs well in XR – you need an intelligent agent to adapt,” he explains.

The idea in the project is to essentially build models of individual users – autonomous testing agents – which can then test the system. “We are testing the technical specifications of the systems, while we are also looking at the users. Will a person with a specific profile enjoy the game? Will they be able to perform the task, given the skills and knowledge that they have?” outlines Prada. “Will the user be able to perform the tasks that they want to perform on the system? How does it feel for a user when they perform a task, will they be happy with the result?”

This may depend to a degree on whether a user is using the system for their own personal enjoyment or for training to develop their professional skills, and also whether they feel they are progressing. As part of the project, Prada plans to model the knowledge and skills of users, and to investigate how the structure of a system affects their ability to learn. “If we configure the levels of a game in a certain way, do users progress more quickly?” he asks.

A user who fails to master the system as quickly as they may have expected may become discouraged, while another might quickly understand how it works then move on; by using multiple testing agents on a single system, researchers aim to build a fuller picture. “We try different profiles and we see how they use the system. That’s part of user experience testing,” continues Prada. “These autonomous testing agents have emotions, and their emotional state changes depending on how they interact with the system. This is designed to mimic the emotional process of an individual person.”

The amount of information that an individual user is able to absorb is another important consideration in the project, with Prada and his colleagues working to develop a model of cognitive load. If the cognitive load is too high then it will be difficult for users to learn, yet it’s also important that it’s not too low. “If the cognitive load is too low it means that it’s not engaging, it’s not really pushing the users,” points out Prada. The ideal scenario is to strike the right balance, engaging users without overloading them, which will enhance their enjoyment and help users develop their skills.

In cases where an XR system is going to be used for training, the use of testing agents will also provide invaluable pointers. “You can explore what kind of actions or tasks the agent will perform more easily. You can then see if the tasks that these agents are performing are aligned with the training goals,” explains Prada. A testing agent may also use the system in ways that were not necessarily anticipated in development, exploring possibilities that may not have occurred to the average user. In a situation where a testing agent is being used to perform an open task, it may find interesting avenues of investigation. “The goal may be to explore a 3-D environment – an agent may try to go into places in many different ways, they can do things that are not expected. They can also do very repetitive things that people are not willing to do,” says Prada.

These agents can be used at all stages of a project, but Prada believes they are particularly useful during development, giving companies an insight into user experience at an earlier point, while the agent is also designed to adapt to changes in the system. “Our goal is that if you set up the agent and then change things, you can still use the same agent,” he continues. “This is because the agent is able to deal with the changes that have been made in the XR system.”
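The kind of user-model agent described above – a profile with a skill level, an emotional state, and a cognitive load that shift as it interacts with the system – might be sketched as follows. This is a minimal illustration of the idea only, not the project’s actual model; all names, numbers, and thresholds are invented for the example.

```python
# Illustrative sketch (hypothetical, not the project's actual model): a user
# profile whose emotional state and cognitive load change as it attempts
# tasks, mimicking the appraisal process of an individual person.

class UserModelAgent:
    def __init__(self, skill: float):
        self.skill = skill          # 0..1, how practised this profile is
        self.valence = 0.0          # emotional state; negative = frustrated
        self.cognitive_load = 0.0   # 0..1, how much the user must take in

    def attempt_task(self, difficulty: float) -> bool:
        """Try a task; success and failure shift the emotional state."""
        success = self.skill >= difficulty
        # Appraisal: success is pleasing, failure is discouraging.
        self.valence += 0.2 if success else -0.3
        # Load grows with the gap between task difficulty and skill.
        self.cognitive_load = min(1.0, max(0.0, difficulty - self.skill + 0.5))
        return success

    def engagement_report(self) -> str:
        """Flag when load leaves the 'engaging but not overloading' band."""
        if self.cognitive_load > 0.8:
            return "overloaded: users may fail to learn"
        if self.cognitive_load < 0.2:
            return "under-challenged: not engaging"
        return "balanced: engaging without overloading"

# Running several profiles against the same task, as in user experience
# testing with multiple agents, builds a fuller picture.
for name, skill in [("novice", 0.2), ("expert", 0.9)]:
    profile = UserModelAgent(skill=skill)
    profile.attempt_task(difficulty=0.7)
    print(name, round(profile.valence, 2), profile.engagement_report())
```

Running the same system through several such profiles is what lets the testers ask both technical questions (did the task succeed?) and experience questions (was this profile engaged or overloaded?).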

Commercial benefits

The ability to use these autonomous testing agents could help speed up development, potentially leading to significant commercial benefits for companies keen to improve efficiency. Autonomous agents can generate results quickly without the hassle of setting up a user study, and they are always ready to work unsociable hours. “You can set up the agents to run through the night for example and get the results the next day, instead of waiting for a week,” points out Prada. These agents are not intended as a complete replacement for human testing, but Prada believes there are circumstances in which they can perform better than a human, and he is looking into the wider commercial potential of this research. “We are investigating market opportunities for these autonomous testing agents and trying to assess their potential in different areas, like game development and simulations for protecting critical infrastructures, such as nuclear power plants,” he continues. “We want to show that these technologies can be useful in automating testing.”

A number of partners in the iv4XR consortium are using the testing agents, and the hope is to validate the project’s approach. This would then represent a strong basis to continue the development of the technology once the project reaches the end of its funding term. “We’re looking to gather more concrete data and are planning to publish a report on the benefits that this technology can bring to new markets,” says Prada.

iv4XR – Intelligent Verification/Validation for Extended Reality Based Systems

Project Objectives

Extended reality systems are used in a variety of sectors, from cultural heritage to flight training, for both entertainment and education purposes. These systems need to be tested before they can be applied, which is currently done by human testers in quite a labour-intensive way. The aim in the iv4XR project is to develop autonomous testing agents, essentially models of individual users, to explore virtual worlds. This could help companies develop more engaging systems and also speed up development.

Project Funding

This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 856716.

Project Partners


Contact Details

Project Coordinator
Rui Filipe Fernandes Prada
Senior Researcher | Associate Professor
INESC-ID | Instituto Superior Técnico, Universidade de Lisboa
Edifício IST-Taguspark, Av. Prof. Aníbal Cavaco Silva, 2744-016 Porto Salvo, Portugal
T: +351 21 423 3292
E: rui.prada@tecnico.ulisboa.pt
E: rui.prada@gaips.inesc-id.pt
W: https://iv4xr-project.eu

Rui Prada

Rui Prada is a Senior Researcher at INESC-ID, in the AI for People and Society research group. He conducts research on social intelligent agents, affective computing, human-agent interaction, computer games, applied gaming, and game AI.

Above: Screenshot from MAEVE, the Thales use case (nuclear power plant simulator)

Screenshot from the Space Engineers game, by Keen Software House.



EU Research