
The application of VR technology in mining engineering

The Department of Mining Engineering’s Virtual Reality Centre was established in 2015 with the financial assistance of Kumba Iron Ore. The purpose of this world-class addition to the University’s facilities was to enhance education, training and research in operational risk across industries through an innovative approach to information optimisation and visualisation, incorporating immersive technologies such as augmented reality (AR) and virtual reality (VR).

With the establishment of the AEL Intelligent Blasting Chair for Innovative Rock-breaking Technology in 2018, the Virtual Reality Centre could be utilised to simulate three-dimensional blasting techniques and to visualise new research. This would establish the University as a centre of excellence for emerging rock-breaking technologies.

The AEL Intelligent Blasting Chair is a joint initiative between the Department of Mining Engineering and the Department of Electrical, Electronic and Computer Engineering. It draws on the Department of Mining Engineering’s AR and VR expertise and facilities to strengthen AEL’s market and technology leadership position, and in the process supports groundbreaking projects to resolve pressing issues in the mining industry.

Under the leadership of Prof William Spiteri, an extraordinary professor in the Department of Mining Engineering, three projects have been launched under the Chair, all of which have achieved significant milestones over the past two years.

THE DEVELOPMENT OF A FLYROCK MEASUREMENT TECHNIQUE

This project in the Department of Mining Engineering entails the development of a quantitative measuring technique to physically capture and study the in-flight motion of flyrock, both to improve on predictive models and to better understand the causative factors. It was initiated in response to a request from a Glencore coal mine to assist with the evaluation of its mathematical model and empirical standards for predicting the safety radii that protect equipment and personnel from flyrock.

The insight obtained from this project highlighted how little work had previously been done internationally to understand this phenomenon. The present study started with a more extensive literature review, which showed that, despite the numerous theoretical predictive models developed over the past ten years, no definitive measuring technique exists to properly test these theories. The next step was to identify the most appropriate technology to measure the flight path of flyrock in the aftermath of a blast.

Photogrammetry was selected as the technique most likely to succeed. Normally, photogrammetry involves taking several photographs of a static object from different angles using a single camera; the photographic data is then manipulated to yield a 3D image of the object. For flyrock monitoring, the process had to be reversed: several cameras were used to capture a moving object, and the data was then manipulated to depict the trajectory of the flyrock in 3D. This technique was developed and perfected by employing a clay pigeon sling in a controlled and demarcated space. Finally, the multiple-camera system was deployed in a quarry, where photographs of flyrock were successfully captured by all the cameras.
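The geometric core of such a reversed, multi-camera setup is triangulation: if the cameras are synchronised and calibrated, a fragment’s 3D position at a given instant can be recovered from its 2D image coordinates in two or more views. The sketch below illustrates the standard direct linear transform (DLT) triangulation in Python with NumPy; the camera matrices and pixel coordinates are hypothetical placeholders and do not come from the study itself.

```python
import numpy as np

def triangulate_point(projections, pixels):
    """Triangulate one 3D point from two or more calibrated views (DLT).

    projections : list of 3x4 camera projection matrices P = K [R | t]
    pixels      : list of (u, v) image coordinates of the same fragment,
                  captured at the same instant by each camera
    """
    rows = []
    for P, (u, v) in zip(projections, pixels):
        # Each view contributes two linear constraints on the homogeneous point X
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    A = np.vstack(rows)
    # Solve A X = 0 in the least-squares sense via SVD
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]  # de-homogenise to (x, y, z)

# Hypothetical example: two identical cameras 20 m apart viewing the blast area
K = np.array([[1500.0, 0.0, 960.0],
              [0.0, 1500.0, 540.0],
              [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-20.0], [0.0], [0.0]])])

point = triangulate_point([P1, P2], [(1210.0, 540.0), (210.0, 540.0)])
print(point)  # approximately [5, 0, 30] m in camera-1 coordinates
```

Repeating this for every synchronised frame yields a time-stamped 3D track for each fragment.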

The study is currently concentrating on converting the quarry photographic data into point cloud form, from which the trajectory of the flyrock can be calculated. Subsequent work will focus on extracting positional and physical data of the flyrock fragments from the photographic data using existing photogrammetric and stereo-mapping software. Once the positional data can be obtained within an acceptable margin of error (±1 cm), the research can shift towards the analysis and interpretation of the data based on ballistic principles.
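Once such a time-stamped track is available, the ballistic analysis can be framed as fitting a projectile model to the observed positions. The sketch below fits a simple drag-free model by least squares to synthetic data; the sampling rate, noise level and neglect of air resistance are illustrative assumptions, not the study’s actual method.

```python
import numpy as np

G = np.array([0.0, 0.0, -9.81])  # gravity (m/s^2), z is up

def fit_launch_state(times, positions):
    """Least-squares fit of p(t) = p0 + v0*t + 0.5*g*t^2 to observed positions.

    times     : (N,) sample times in seconds
    positions : (N, 3) triangulated fragment positions in metres
    Returns the estimated launch position p0 and launch velocity v0.
    """
    t = np.asarray(times)
    # Remove the known gravity term, then solve a linear model for p0 and v0
    y = np.asarray(positions) - 0.5 * np.outer(t**2, G)
    A = np.column_stack([np.ones_like(t), t])       # columns: [1, t]
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)  # rows: [p0, v0]
    return coeffs[0], coeffs[1]

# Hypothetical track of one fragment sampled at 10 Hz
t = np.arange(0.0, 1.0, 0.1)
true_p0, true_v0 = np.array([0.0, 0.0, 1.0]), np.array([12.0, 3.0, 25.0])
obs = true_p0 + np.outer(t, true_v0) + 0.5 * np.outer(t**2, G)
obs += np.random.normal(scale=0.01, size=obs.shape)  # ±1 cm measurement noise

p0, v0 = fit_launch_state(t, obs)
print("estimated origin:", p0)
print("estimated launch velocity:", v0)
```

Solving the fitted trajectory for the time at which it returns to ground level then gives the predicted landing point.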

The ultimate goal is to determine two coordinates for each flyrock fragment: its final landing position and its point of origin. This output will enable mines to build historical databases of flyrock at their operations. It will also enable researchers to quantitatively investigate the effect of various blasting parameters on the risk of flyrock, and will allow the data to be visualised for training and educational purposes.

THE APPLICATION OF VR TECHNOLOGY TO ENHANCE LEARNING

The VR training project in the Department of Mining Engineering identified the need to digitise the training currently conducted for the Intellishot® electronic detonator product, using a flipped classroom approach.

This entailed the development of three elements. The first is a theory component comprising six e-learning courses, which trainees work through in their own time and at their own pace, with additional support provided where necessary. This is followed by face-to-face training to fill any knowledge gaps. The trainees are then introduced to the VR programme and work through a facilitated “perfect blast”, supported by voice-overs and facilitator explanations. Once they have completed the perfect blast, they enter a trouble-shooting phase: a second, unfacilitated round in which two errors are displayed at random and the trainees apply what they have learned about resolving them. Finally, they complete a VR assessment in which they are required to conduct a live blast.
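The unfacilitated second round hinges on injecting a small number of random faults into an otherwise correct blast sequence and checking that the trainee resolves them. The sketch below is a minimal Python illustration of that session logic; the fault names and pass criterion are invented for illustration and do not reflect the actual Intellishot® training content or software.

```python
import random

# Hypothetical fault catalogue; the real training content is product-specific.
FAULT_CATALOGUE = [
    "detonator_not_logged",
    "duplicate_delay_assigned",
    "open_circuit_on_harness",
    "wrong_firing_sequence",
]

class TrainingSession:
    """Second-round (unfacilitated) session: inject two random errors."""

    def __init__(self, n_errors=2, seed=None):
        rng = random.Random(seed)
        self.injected = rng.sample(FAULT_CATALOGUE, n_errors)
        self.resolved = set()

    def report_fault(self, fault):
        """Trainee identifies and corrects a fault observed in the VR scene."""
        if fault in self.injected:
            self.resolved.add(fault)
            return True
        return False  # reported fault was not one of the injected errors

    def complete(self):
        """Session passes only when every injected error has been resolved."""
        return set(self.injected) == self.resolved

session = TrainingSession(seed=42)
for fault in session.injected:  # in VR the trainee must find these unaided
    session.report_fault(fault)
print("ready for live-blast assessment:", session.complete())
```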

THE CONVERSION OF VISUAL DATA INTO 3D IMAGES

This project in the Department of Electrical, Electronic and Computer Engineering is developing techniques to convert visual data, such as video footage obtained by a drone flying over an open-pit mine, into three-dimensional (3D) VR and AR images. The knowledge base for converting point cloud data into 3D VR and AR images had not been available within the Department prior to the commencement of this project.
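One common route from a photogrammetric point cloud to a renderable 3D surface is to estimate normals and run a surface-reconstruction algorithm, for example with the open-source Open3D library as sketched below. The file names and parameter values are placeholders; the article does not specify which toolchain the project uses.

```python
import open3d as o3d

# Load a photogrammetric point cloud (placeholder file name)
pcd = o3d.io.read_point_cloud("quarry_scan.ply")

# Surface reconstruction needs consistently oriented normals
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=1.0, max_nn=30))
pcd.orient_normals_consistent_tangent_plane(30)

# Poisson reconstruction produces a watertight triangle mesh;
# 'depth' trades detail against memory and run time
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=10)

# Simplify the mesh so a game engine can render it interactively
mesh = mesh.simplify_quadric_decimation(target_number_of_triangles=500_000)
o3d.io.write_triangle_mesh("quarry_mesh.obj", mesh)
```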

The initial phase of the project has been completed, and various demonstrations have been held to progressively show the levels achieved. To improve the quality of VR visualisation, an approach using mixed medium- and high-resolution meshes has been developed: a medium-resolution mesh is used for general navigation, and the system switches to a high-resolution mesh when the viewer approaches objects or features in the environment.
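This switching behaviour is essentially a level-of-detail (LOD) decision driven by the viewer’s distance to each object. The sketch below shows the selection logic in Python with hypothetical distance thresholds and hysteresis; the actual in-engine implementation is not described in the article.

```python
from dataclasses import dataclass

@dataclass
class MeshLOD:
    """A scene object with a medium- and a high-resolution mesh variant."""
    name: str
    medium_mesh: str   # asset used for general navigation
    high_mesh: str     # asset used when the viewer is close by
    active: str = "medium"

# Hypothetical thresholds (metres); hysteresis avoids flicker at the boundary
SWITCH_IN = 15.0    # switch to high resolution when closer than this
SWITCH_OUT = 20.0   # switch back to medium resolution when further than this

def select_lod(obj: MeshLOD, distance_to_viewer: float) -> str:
    """Return which mesh variant should be rendered for this frame."""
    if obj.active == "medium" and distance_to_viewer < SWITCH_IN:
        obj.active = "high"
    elif obj.active == "high" and distance_to_viewer > SWITCH_OUT:
        obj.active = "medium"
    return obj.high_mesh if obj.active == "high" else obj.medium_mesh

pit_wall = MeshLOD("pit_wall", "pit_wall_med.obj", "pit_wall_high.obj")
for d in (50.0, 18.0, 12.0, 22.0):
    print(d, select_lod(pit_wall, d))
```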

The integration of large 3D models extracted from point clouds and other data into the Unity and Unreal engines must overcome the memory and object count limitations of these engines. Although the initial partitioning (segmentation) results were disappointing, an improved segmentation algorithm has been developed that is optimised for generating the segments used for VR visualisation. However, the algorithm is currently serial in nature and therefore requires long run times for large point clouds.
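Keeping within the engines’ memory and per-object limits generally means splitting the point cloud into spatial tiles that can be meshed and loaded as separate objects. The sketch below shows a minimal grid-based partitioning with NumPy; the tile size is an arbitrary placeholder, and the project’s improved segmentation algorithm itself is not published in the article.

```python
import numpy as np

def partition_point_cloud(points, tile_size=50.0):
    """Split an (N, 3) point cloud into square ground tiles of `tile_size` metres.

    Returns a dict mapping (i, j) tile indices to the points that fall inside,
    so each tile can be meshed and imported into the engine as a separate object.
    """
    ij = np.floor(points[:, :2] / tile_size).astype(int)  # tile index per point
    tiles = {}
    for key in np.unique(ij, axis=0):
        mask = np.all(ij == key, axis=1)
        tiles[tuple(key)] = points[mask]
    return tiles

# Hypothetical one-million-point cloud covering a 500 m x 500 m pit
rng = np.random.default_rng(0)
cloud = rng.uniform([0, 0, -60], [500, 500, 0], size=(1_000_000, 3))

tiles = partition_point_cloud(cloud, tile_size=50.0)
print(len(tiles), "tiles; largest holds",
      max(len(p) for p in tiles.values()), "points")
```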
