
The Marginal Gains of Outsourcing Thinking in Ophthalmology to AI

Dear Readers,

Ah, I tried to get OpenAI’s ChatGPT to write this column for me, but I hated what it produced. It’s not me. It’s not my voice. And it’s not a voice that I like to read, either. It’s the same when I ask it to write on a topic like corneal cross-linking: it produces something that’s perfectly coherent (albeit quite stuck-up and wordy). And when you ask it to reference what it wrote, it does so! Amazing! It’s also a disaster. The references it produces have recognizable author names, appropriate journals and dates, and perfectly sensible titles.

Unfortunately, when you check them against what’s in PubMed, they also turn out to be complete works of fiction. Further, the confidently and coherently written copy isn’t quite right. If you know a topic well, you can see the mistakes and correct them ... but if you don’t, obviously you can’t. And if someone is lazy enough to get AI to write copy for them, they are probably also too lazy to spend the quality time that’s absolutely needed to fact-check it. This is a big problem.

If content like this gets published (which is especially dangerous in medicine), then the inaccuracies propagate and get incorporated into the training data of the next model. Misinformation gets reinforced rather than refuted.

I don’t know whether AI will have to think concepts through from first principles, or refer back to known-correct datasets, in order to make what it says accurate. But there are already enough falsehoods floating about in the ether on everything that’s newsworthy; we don’t need AI adding to them across everything else. Perhaps it won’t turn out all that bad. Instead of curing cancer and solving climate change, I can imagine some of the best minds in the world working on fixing these issues. I hope.

So am I on an AI downer? Not at all. When its scope is limited to, let’s say, a super-nomogram that predicts refractive surgery outcomes, or a model of the effect an excimer laser ablation pattern has on the shape and biomechanics of a cornea, it’s all good. While I do wonder whether, after crunching all of the numbers, the extra quarter-diopter improvement in refraction over the current (incredibly good) state of the art will be appreciated by most cataract or refractive surgery patients, there’s certainly lower-hanging fruit with greater potential impact waiting to be picked.

Combine AI with OCT aqueous outflow imaging to find the best location to place a MIGS stent. Optimize care by comprehensively understanding and modeling the risk of a patient’s glaucoma progressing on a given medication, and recommend a timely switch or surgical intervention.

However, I don’t see AI here as being the kind of paradigm shift that GPT-3 and deep generative neural networks appear to be for text copy and image generation, which look like they’re about to put many of the people who work in the creative industries out of a job. I do see AI speeding up the process of marginal improvement across the board, be it the optics of multifocal IOLs or the deployment of phaco energy patterns in different parts of the lens, based on pre-surgical imaging or values from a sensor in a phaco tip. That’s because, unlike the creatives, doctors are harder to replace: too few have been trained over the years, and the demand for their services has never been higher.

I believe the biggest role AI has to play in ophthalmology over the next decade or two is to optimize for efficiency: be it an individualized anti-VEGF regimen for patients with AMD, based on retinal OCT scans, that maximizes treatment effect and minimizes the number of clinic visits they need to retain their vision, or arranging the order of patients on a surgical list based on predicted operating times to maximize patient throughput in the OR.

We’ll all be working with AI sooner than we think, and it’s going to be interesting to see how enthusiastic (or not) we will be about taking instructions from it.

So what did ChatGPT write when I asked it to finish this article with the instruction, “Please write a 100-word conclusion to this article”?

In conclusion, the author expresses concern about the potential inaccuracies of AI-generated content, particularly in the field of medicine. They argue that if someone is too lazy to fact-check AI-generated content, it can lead to inaccuracies being propagated and reinforced. However, the author also acknowledges the potential benefits of AI in fields such as ophthalmology, where it can be used to optimize care and efficiency. They predict that AI will play a significant role in the field over the next decade and express curiosity about how doctors will respond to taking instructions from it.

Awful, isn’t it?

Cheers,

Mark Hillen, PhD
Director of Communications
ELZA Institute, Zurich, Switzerland
Editor-At-Large | CAKE