
President's Message
by RANZCR
Embracing the World of Artificial Intelligence
The issues associated with artificial intelligence (AI) have been a topic for debate in the medical profession for many years now and the field of radiology has attracted more than its fair share of attention. The discussion was originally framed as an exotic sci-fi scenario but soon, like a runaway robot in a B-grade movie, it became a worrying potential threat to our livelihoods. Then it was discussed as an opportunity we should seize, and finally as an inevitability we had to adjust to. In a sense, it’s all of these at once and, in addition, a reminder of how hard it is to predict and plan for the future.
RANZCR entered the AI debate substantively in early 2018, when our previous President outlined his wish to dismiss the hype and take steps for ‘all stakeholders to actively work together’ to ‘lay the ground rules’ for how AI should be applied. This was followed in 2019 by RANZCR’s release of a set of ethical principles to guide the development of standards related to AI and machine learning, which emphasised vital principles such as patient safety and privacy, as well as the need to delineate responsibility for AI-assisted decision making and to preserve the doctor-patient relationship. Then in 2020 we established an advisory committee to advise our faculty councils on the implications of AI tools for clinical radiology and radiation oncology. Finally, in 2022, the College promulgated a position statement on the regulation of AI in medicine. This argued that current regulatory mechanisms were no longer fit for purpose in a world in which AI technology would be widely employed to assist medical practitioners. The document highlighted a range of potential misuses and the need to address system failures in the technology. AI has also been a recurring topic at our recent annual scientific meetings—for example, its use to improve the sensitivity and accuracy of mammograms in detecting breast cancer.
Our efforts to date are all good spadework, but the AI world is moving rapidly and it is hard for any one organisation to stay ahead of the approaching avalanche. This has been especially true since the recent and unprecedented publicity given to the generative AI chatbot known as ChatGPT and a slew of competitors and variants based on so-called large language models. These chatbots are known for their ability to realistically imitate human speech and writing. Unsurprisingly, misuses of the technology are already appearing in local medical settings. Only weeks ago, the CEO of a major teaching hospital in WA felt compelled to ban staff from using ChatGPT and similar tools to write medical notes on patients prior to upload into hospital record systems. The CEO argued correctly that ‘there is no assurance of patient confidentiality’ in using such technology and that we do not ‘fully understand the security risks’ in doing so.
Hundreds of AI applications have now gained approval from the US FDA, and most of them are intended for use in radiology. Many are being implemented in hospitals and practices in the absence of robust validation of their clinical performance. Indeed, a 2021 Dutch study of 100 commercially available AI software products in radiology found that 64 lacked peer-reviewed evidence of their efficacy and only 18 had demonstrated clinical value. Large teaching hospitals and academic centres may have the in-house resources to assess an AI application prior to its use, but the same cannot be said of private radiology practices and smaller healthcare services. These organisations run a high risk of implementing AI tools that do not perform as intended, resulting in workflow inefficiencies and even risks to patient safety.
Uncertainty in AI is endemic at present. In a recent survey of its members on their experiences with AI tools in clinical radiology, the European Society of Radiology found that only 23 per cent had noticed a significant reduction in their workload; 70 per cent found no such effect. Our College believes the time has come to help our members make informed choices in the AI world, not just from the viewpoint of ethical standards, but also on the practical aspects of implementing AI in the workplace. RANZCR has spearheaded a multi-society collaboration to develop a paper providing guidance on AI implementation. This would comprise a set of guidelines on the selection, evaluation, implementation and monitoring of AI tools.
At the European Congress of Radiology (ECR) in Vienna earlier this year, RANZCR was able to organise a meeting of the leaders of the American College of Radiology (ACR), the European Society of Radiology (ESR), the Radiological Society of North America (RSNA) and the Canadian Association of Radiologists (CAR), and made a persuasive case for why we need a multi-society approach. President of the ACR Dr Howard Fleishon and President of the ESR Dr Adrian Brady were pivotal in getting everyone on board. The help of our AI Committee Chair (and President-Elect), Professor John Slavotinek, in this effort should also be acknowledged. As I write this article, a RANZCR–ACR–ESR–RSNA–CAR multi-society paper is being developed for release early next year.
Potentially, we are on the cusp of the biggest upheaval in our history due to the exponential growth of AI. As politicians and regulators worldwide are discovering, somewhat belatedly, we need to grab hold of the controls.