
High-performance ray tracing for room acoustics

Last August EPCC’s James Perry, Kostas Kavoussanakis and I started work on the Auralisation of Acoustics in Architecture (A3) project. One of its goals was to explore the use of ray-tracing techniques to model the sound qualities of a room.

Such a tool could help optimise the acoustics of an existing or future concert hall, improving the audience’s listening experience. It could also help recreate the sound characteristics of ruined historical spaces.


Ray tracing in acoustics bears many similarities to ray tracing in graphics, primarily in that the brunt of the work involves finding the nearest triangle (in a surface mesh) that intersects a given ray, and this is generally accomplished using acceleration structures such as bounding-volume hierarchies (BVH) or voxel grids. In graphics the goal is to generate an image, whereas in architectural acoustics the goal is to generate an impulse response, which allows one to analyse and recreate the acoustics of a room such as a concert hall.
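
At the heart of both kinds of tracer is the same geometric kernel. The sketch below (Python with NumPy, not the project's actual C++ code) uses the standard Möller–Trumbore ray-triangle test together with a brute-force nearest-hit scan; in a real tracer an acceleration structure such as a BVH or voxel grid prunes most of these per-triangle tests.

```python
import numpy as np

def ray_triangle(orig, d, v0, v1, v2, eps=1e-9):
    """Moller-Trumbore ray/triangle intersection.
    Returns the hit distance t along the ray, or None for a miss."""
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(d, e2)
    det = np.dot(e1, p)
    if abs(det) < eps:                 # ray parallel to the triangle plane
        return None
    inv_det = 1.0 / det
    s = orig - v0
    u = np.dot(s, p) * inv_det
    if u < 0.0 or u > 1.0:
        return None
    q = np.cross(s, e1)
    v = np.dot(d, q) * inv_det
    if v < 0.0 or u + v > 1.0:
        return None
    t = np.dot(e2, q) * inv_det
    return t if t > eps else None      # hits behind the origin don't count

def nearest_hit(orig, d, triangles):
    """Nearest-triangle query by linear scan; a BVH or voxel grid
    replaces this O(N) scan in a practical tracer."""
    hits = (ray_triangle(orig, d, *tri) for tri in triangles)
    return min((t for t in hits if t is not None), default=None)

# Two triangles straddling the +z axis, at distances 2 and 5.
mesh = [tuple(np.array(v, float) for v in tri) for tri in [
    [(-1, -1, 2), (1, -1, 2), (0, 1, 2)],
    [(-1, -1, 5), (1, -1, 5), (0, 1, 5)],
]]
t_near = nearest_hit(np.zeros(3), np.array([0.0, 0.0, 1.0]), mesh)
```

The nearest hit here is the triangle at distance 2; the farther triangle is correctly ignored.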

While ray tracing is strictly a high-frequency approximation (unlike wave-based methods for acoustics simulation, which are valid for all frequencies), it is useful because its computational costs can be significantly less than those of brute-force wave-based methods [1]. Even so, ray tracing for room acoustics is by no means trivial. Using only the strict principles of classic geometric acoustics [2], the computational costs of ray tracing in room acoustics scale primarily with the reverberation time of the room (the time it takes a sound to decay by a factor of 1000 in amplitude). In a typical concert hall this might mean firing billions of rays, each of which may bounce through the room hundreds of times.
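
The numbers work out roughly as follows. A factor-of-1000 amplitude decay is 60 dB, i.e. the standard RT60 reverberation time, and the diffuse-field mean free path between reflections is 4V/S. The hall dimensions below are illustrative assumptions, not figures from the project:

```python
import math

# A factor-of-1000 drop in amplitude is 60 dB -- the standard RT60.
decay_db = 20 * math.log10(1000)       # = 60.0

# Illustrative (assumed) concert-hall figures:
c = 343.0          # speed of sound in air, m/s
V = 20000.0        # hall volume, m^3
S = 8000.0         # total interior surface area, m^2
T60 = 2.0          # reverberation time, s

mean_free_path = 4 * V / S             # diffuse-field average: 10 m per leg
bounces = c * T60 / mean_free_path     # ~69 reflections over the full decay
```

Longer reverberation times, or rooms with a shorter mean free path, push this estimate into the hundreds of reflections per ray.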

Because this team previously worked together on the five-year ERC-funded NESS project [3], the collaboration on this project has been extremely productive: I wrote prototype codes in Matlab, James Perry produced the initial ports to C++, and both of us carried out further rounds of testing and optimisation. Through many optimisations, including efficient parallelisation over multiple CPUs and advanced vectorisation, we were able to ray trace a concert hall in a matter of minutes, as opposed to the days required by the Matlab prototypes.
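
As one illustration of the kind of vectorisation involved (a NumPy sketch under assumed data layouts, not the project's C++ code), the standard Möller–Trumbore ray-triangle test can be applied to a whole batch of triangles in array operations rather than one triangle at a time:

```python
import numpy as np

def nearest_hit_batch(orig, d, v0, v1, v2, eps=1e-9):
    """Moller-Trumbore test of one ray against N triangles at once.
    v0, v1, v2 are (N, 3) vertex arrays; returns the nearest hit
    distance along ray (orig, d), or None if nothing is hit."""
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(d, e2)                        # (N, 3)
    det = np.einsum('ij,ij->i', e1, p)         # per-triangle determinant
    with np.errstate(divide='ignore', invalid='ignore'):
        inv_det = 1.0 / det                    # inf for parallel rays, masked below
        s = orig - v0
        u = np.einsum('ij,ij->i', s, p) * inv_det
        q = np.cross(s, e1)
        v = (q @ d) * inv_det
        t = np.einsum('ij,ij->i', e2, q) * inv_det
    ok = (np.abs(det) > eps) & (u >= 0) & (v >= 0) & (u + v <= 1) & (t > eps)
    return float(t[ok].min()) if ok.any() else None

# Same two-triangle scene, stored as three (N, 3) vertex arrays.
tris = np.array([[(-1, -1, 2), (1, -1, 2), (0, 1, 2)],
                 [(-1, -1, 5), (1, -1, 5), (0, 1, 5)]], float)
t_near = nearest_hit_batch(np.zeros(3), np.array([0.0, 0.0, 1.0]),
                           tris[:, 0], tris[:, 1], tris[:, 2])
```

The per-triangle branches of the scalar algorithm become boolean masks, which is the same transformation a SIMD-vectorised C++ inner loop applies.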

Our aim is to get this new ray-tracing tool (and the wave-based tools developed as part of the parallel ERC-funded Wave-based Room Acoustic Modelling project) into the hands of practitioners, to study how auralisation can be utilised in the early phases of architectural design.

References

1. B. Hamilton, “Simulating the acoustics of 3D rooms.” https://www.epcc.ed.ac.uk/blog/2014/11/24/simulating-acoustics-3d-rooms. Accessed: 2018-05-04.

2. L. Cremer and H. A. Müller, Principles and Applications of Room Acoustics, vol. 1. Chapman & Hall, 1982.

3. K. Kavoussanakis, “NESS (Next Generation Sound Synthesis project) bows out.” https://www.epcc.ed.ac.uk/blog/2017/04/12/ness-bows-out. Accessed: 2018-05-04.

The nine-month A3 project was funded by a grant from the CAHSS Challenge Investment Fund.

Brian Hamilton, Acoustics and Audio Group, University of Edinburgh. brian.hamilton@ed.ac.uk

Ray tracing simulation of sound propagation paths in a large concert hall.
