
Research Grants in Action

An Interview with Associate Professor Frank Gaillard

A/Prof Frank Gaillard

What is your current role/s?

I am currently a consultant neuroradiologist and Director of Research in the Radiology Department at the Royal Melbourne Hospital, and a clinical associate professor in the Faculty of Medicine, Dentistry and Health Sciences at the University of Melbourne. Additionally, I am the founder and Editor-in-Chief of Radiopaedia.org.

What are your research interests?

My research interests are varied, primarily investigating how novel computer visualisations can be used to make routine imaging tasks faster and more accurate. I have a small grant-supported lab that has been working on this since 2015, with the bulk of the work so far on multiple sclerosis.

My additional research interests are in tumour radiomics and perceptual learning as a way of training radiologists (in collaboration with the University of Melbourne psychology department).

Are there any big unanswered research questions in this area?

In recent years, an enormous amount of interest and effort has been expended on artificial intelligence/machine learning applications. Although many of these are fascinating, in many instances they provide an AI solution to a problem that doesn’t really exist. In contrast, there has been little innovation in how traditional imaging is displayed since the move from printed film to PACS.

In fact, other than scrolling through stacks and performing multiplanar reformats, we largely present imaging in a way very reminiscent of printed film. We hang studies side by side to compare them and we present them in near identical ways regardless of the specific clinical question. Furthermore, most PACS viewers present imaging in a similar way regardless of context; the view shown to diagnostic radiologists is very similar to that shown on the ward, or in the outpatient setting.

There is no a priori reason to believe that this is optimal, and we believe there is a lot of ‘low-hanging fruit’: improvements in how routine imaging is presented can have a significant impact on outcomes. We have demonstrated this multiple times in the context of MRI for the follow-up of multiple sclerosis: see www.github.com/mh-cad

In 2019, you received RANZCR Research Grant funding for your project “MRI brain 4D time-lapse viewer.” Can you tell us a bit more about this project?

Brain tumours, along with many other conditions, are followed up over extended periods of time in an effort to visualise gradual growth and to assess for alterations in the rate of change. This can be challenging, especially as the interval between follow-up studies is variable and patient positioning can vary subtly. Moreover, these features are even harder for a patient to appreciate in the outpatient setting, and the ability to fully understand the nature and progression of their condition can play a vital role in empowering them to make decisions.

To this end, we have created an application which presents a longitudinal view of all available scans, allowing radiologists, clinicians and patients to navigate not only through three spatial dimensions (as is usual) but also through time, in a manner reminiscent of time-lapse photography.

See Figure 1: 4D Viewer Interface
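For readers curious how such a viewer might be wired up, a minimal sketch of the underlying 4D data structure is shown below. It assumes a set of already co-registered volumes of identical dimensions loaded with the nibabel library; the file names and slice indices are purely illustrative, and this is not the project’s actual implementation.

```python
# Minimal sketch: stacking co-registered 3D MRI volumes into a 4D (x, y, z, time) array.
# File names are hypothetical; this is illustrative only, not the project's code.
import numpy as np
import nibabel as nib

scan_paths = ["mri_2017.nii.gz", "mri_2018.nii.gz", "mri_2019.nii.gz"]  # hypothetical files
volumes = [nib.load(path).get_fdata() for path in scan_paths]   # each volume: (x, y, z)
series_4d = np.stack(volumes, axis=-1)                          # shape: (x, y, z, t)

def get_slice(series, z_index, t_index):
    """Return a single axial slice at a chosen spatial position and time point."""
    return series[:, :, z_index, t_index]

# Scrolling in space fixes the time point and varies z; 'time-lapse' playback
# fixes the slice and steps through t, one frame per prior study.
timelapse_frames = [get_slice(series_4d, z_index=90, t_index=t)
                    for t in range(series_4d.shape[-1])]
```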

Additionally, we are working on implementing an MRI equivalent of the ultrasound M-mode we are all familiar with: showing the change in signal intensity along a line drawn at a particular location.

See Figure 2: M mode
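As a rough illustration of the M-mode idea, the sketch below samples signal intensity along a drawn line in the same slice of every co-registered time point and stacks the profiles into a position-versus-time image. The function names and coordinates are hypothetical, and this is a simplification rather than the team’s implementation.

```python
# Hedged sketch of an MRI 'M-mode': intensity along a drawn line, stacked over time.
# Assumes the 4D series from the previous sketch; names and coordinates are illustrative.
import numpy as np
from scipy.ndimage import map_coordinates

def line_profile(slice_2d, start, end, n_samples=200):
    """Sample intensities along a straight line between two (row, col) points."""
    rows = np.linspace(start[0], end[0], n_samples)
    cols = np.linspace(start[1], end[1], n_samples)
    return map_coordinates(slice_2d, [rows, cols], order=1)  # linear interpolation

def m_mode(series_4d, z_index, start, end):
    """Build a (position-along-line x time) image from the same line at every time point."""
    profiles = [line_profile(series_4d[:, :, z_index, t], start, end)
                for t in range(series_4d.shape[-1])]
    return np.stack(profiles, axis=1)  # rows: position along the line, columns: time
```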

Our initial aim was to use this visualisation for selected patients in the neuro-oncology multidisciplinary meeting at the Royal Melbourne Hospital. This meeting typically covers six patients in a 30-minute time slot, each patient often having over a dozen prior MRI scans, making it challenging to present their imaging over time. Formal feedback obtained from this meeting confirmed that the visualisation was felt to be intuitive and ‘better’ or ‘much better’ than the traditional way of showing cases.

Additionally, we are exploring novel ways of presenting this data, both by interpolating the non-imaged time points to create a smoother and more granular animation of change over time, and by finding ways to present these findings in the Augmented Reality (AR) and Virtual Reality (VR) setting. To this end, we have an industry-funded grant to create a 4D viewer for multiple sclerosis scans and to present a proof of concept in AR/VR.
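To give a concrete sense of the interpolation idea, a very simple voxel-wise linear interpolation between the two scans bracketing a requested date is sketched below; it assumes co-registered volumes and known acquisition days, and it is not necessarily the interpolation scheme the project will use.

```python
# Simplified sketch: linearly interpolating non-imaged time points between scans.
# Assumes co-registered volumes and acquisition times in days; illustrative only.
import numpy as np

def interpolate_volume(series_4d, acquisition_days, query_day):
    """Voxel-wise linear interpolation of the 4D series at an arbitrary day."""
    days = np.asarray(acquisition_days, dtype=float)
    t = float(np.clip(query_day, days[0], days[-1]))
    i = min(int(np.searchsorted(days, t, side="right")) - 1, len(days) - 2)
    w = (t - days[i]) / (days[i + 1] - days[i])
    return (1 - w) * series_4d[..., i] + w * series_4d[..., i + 1]

# e.g. render one frame per month between the first and last scan:
# frames = [interpolate_volume(series_4d, [0, 180, 400], day) for day in range(0, 401, 30)]
```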

What have been the key learnings from this project? Why are the results so important?

The heterogeneity of scans, even from a single institution, is challenging. Normalising position and signal intensity, such that the parts of the images that are not changing appear stable while those demonstrating pathological change remain visible, is a core and non-trivial part of this project and continues to be a relative weakness. The other challenge is to perform the necessary steps automatically on dozens of studies quickly enough that the results are potentially available at the time of reporting. This requires dedicated hardware and an optimised imaging pipeline.
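As one illustration of the intensity side of that normalisation problem, the sketch below z-scores each volume using statistics from within a brain mask so that unchanged tissue keeps a stable appearance across the time-lapse. This is a common, simple approach shown purely for illustration; it is not necessarily the method used in the project, and spatial registration is assumed to have been done already.

```python
# Hedged sketch of intensity normalisation only (spatial registration assumed done).
# Z-scoring within a brain mask is one common approach; illustrative, not the project's method.
import numpy as np

def normalise_intensity(volume, brain_mask):
    """Z-score voxel intensities using statistics from within the brain mask."""
    brain_voxels = volume[brain_mask]
    return (volume - brain_voxels.mean()) / brain_voxels.std()

def normalise_series(series_4d, brain_mask):
    """Normalise every time point the same way so that unchanged tissue
    appears stable while pathological change remains visible."""
    return np.stack([normalise_intensity(series_4d[..., t], brain_mask)
                     for t in range(series_4d.shape[-1])], axis=-1)
```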

Figure 1: 4D Viewer Interface

Figure 2: M mode

The most important lesson we are learning in each of our visualisation projects is that there is a great deal of potential in optimising how routine imaging is presented, and that these changes can have a significant impact on the efficiency and accuracy of clinical interpretation.

The research has been the foundation around which a number of other projects are taking shape. Where to next for this project and your research team?

Taking these lessons forward, we are planning to further improve the platform and assess it in the outpatient setting. We believe patients will be interested in seeing how their conditions change over time in an intuitive way, and that this in turn will improve how engaged they are in their management.


Time-lapse video: https://youtu.be/bH1q7WGb35Y
