
FDA & HEALTH CARE

The U.S. Food and Drug Administration (FDA) is already a regulatory juggernaut, but with the incredible uptick in AI technology, it now faces the impending regulatory challenges that come with AI in medicine. According to a recent report from Research and Markets, the overall market for AI in health care is expected to reach $7.98 billion by 2022, at a compound annual growth rate of 52.68% between 2017 and 2022.

AI, and in particular machine learning, powers more and more medical device software in today's health care industry. And machine learning is, by its nature, always changing, learning and improving, which results in rapid product changes. For the FDA and its regulators, who determine when new products may enter the market, these sorts of on-the-fly changes to an algorithm would normally be a workload nightmare. However, a new digital health unit created by the FDA aims to speed up that regulatory process in order to keep pace with technology. The new group contains over a dozen engineers — with specialties like software development, AI, cybersecurity, cloud computing and more — to help ready the agency to regulate the future of health care.

Over the past year, the FDA has issued several documents intended to describe its current advice and guidance on the future of digital health care. These guides help developers know what the FDA does and does not regulate as a "medical device." That is an important distinction: many popular health and wellness apps utilize AI but do not require as much FDA attention, since they don't pose a high risk to the public. Other products, however, such as devices that use machine learning algorithms to help diagnose illnesses like cervical cancer or to predict heart attacks, require close regulatory scrutiny from FDA reviewers.

"When you start adding analytical AI for any image analysis — think of detecting cancer or some serious disease — at that point, people need to know when that detection means something and is real," said Bakul Patel, the FDA's associate director for digital health, in a recent interview with IEEE Spectrum magazine.


AI AND AUTONOMOUS WEAPONS SYSTEMS

Elon Musk's recent dire warnings about AI and the need to regulate it hit home with many individuals who are afraid of what the technology will mean for the weapons and defense industries. In what sounds like the beginning of a science fiction film, Musk and some of the world's leading robotics and AI experts have recently called upon the United Nations to ban the development and use of what they call "killer robots." Musk and Mustafa Suleyman, an AI expert at Alphabet (Google's parent company), are leading a group of more than 100 specialists from 26 countries calling for the ban on autonomous weapons. In a recent vote, the United Nations decided to begin formal discussions regarding weapons such as drones, tanks and automated machine guns.

The group sent an open letter to the UN calling on it to prevent the arms race that they say is already under way for these killer robots. They believe that without intervention, autonomous weapons will bring about a "third revolution in warfare," similar to those that followed the inventions of gunpowder and nuclear arms. "Once developed, lethal autonomous weapons will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend," the group's letter states. "These can be weapons of terror, weapons that despots and terrorists use against innocent populations and weapons hacked to behave in undesirable ways."

As an indication of the autonomous weapons already in place, a report released earlier this year cites Izumi Nakamitsu, the head of the UN's disarmament affairs office, as saying that technology is advancing rapidly but that regulation has not kept pace. In the same report, she points out that some of the world's military hot spots already have intelligent machines in place, such as "guard robots" in the demilitarized zone between South and North Korea. According to reporting by the Washington Post, for example, the South Korean military is using a surveillance tool called the SGR-A1, which can detect, track and fire upon intruders. The robot was implemented to reduce the strain on the thousands of human guards who man the heavily fortified, 160-mile border. While it does not yet operate autonomously, it has the capability to do so, according to Nakamitsu's report.

The Pentagon has tested groups of miniature drones — raising the possibility that the military could send swarms of them into enemy territory equipped to gather intelligence, block radar or, aided by AI-based facial recognition technology, carry out assassinations. From the United States to Russia to the United Kingdom to China, many governments are already very interested in putting rapid advances in AI to military use. The question is whether the United Nations and other leading AI regulatory bodies will stop it.

GENERAL AI REGULATION

In a late 2016 interview with WIRED magazine, former President Barack Obama summarized our government's regulatory approach to AI succinctly and with a good deal of accurate foresight. "The way I've been thinking about the regulatory structure as AI emerges is that, early in a technology, a thousand flowers should bloom," said President Obama. "And the government should add a relatively light touch, investing heavily in research and making sure there's a conversation between basic research and applied research. As technologies emerge and mature, then figuring out how they get incorporated into existing regulatory structures becomes a tougher problem, and the government needs to be involved a little bit more. Not always to force the new technology into the square peg that exists, but to make sure the regulations reflect a broad base set of values. Otherwise, we may find that it's disadvantaging certain people or certain groups."

That is an accurate description of how the U.S. government has approached the issue, whether through President Obama's direction or by happenstance. "It's not clear to me that special regulations need to be created for AI," said Andrew Olney, associate professor at the Institute for Intelligent Systems at the University of Memphis. "Rather we need to extend existing frameworks, and this will require better public understanding of AI."

Regulatory responses will only become more complicated as new AI technologies emerge. For example, "reinforcement learning" is a major focus for AI researchers and regulators today. Unlike other approaches to building AI models, reinforcement learning lets a model learn from its own experience: the model tries out actions in a complex scenario, receives a score for each one, and is driven to find the course of action that earns the highest score. How will our legal system deal with AI devices and models that use this method?
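To make that idea concrete, here is a minimal sketch of one common reinforcement learning technique, tabular Q-learning, written in Python. Every detail is a hypothetical toy chosen for illustration (a five-cell corridor with a single reward at the goal); it is not drawn from any system discussed in this article.

import random

# A toy "environment": a five-cell corridor (cells 0..4). The agent starts
# at cell 0 and earns a reward of +1 only when it reaches cell 4.
N_STATES = 5
ACTIONS = [-1, +1]                      # move left or move right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1   # learning rate, discount, exploration

# Q[state][action] estimates the long-run score of taking that action in
# that state; these estimates are the "grading system" the agent learns.
Q = [[0.0, 0.0] for _ in range(N_STATES)]

for episode in range(500):
    state = 0
    while state != N_STATES - 1:
        # Mostly exploit the best-known action; occasionally explore
        # (and break ties randomly while everything still looks equal).
        if random.random() < EPSILON or Q[state][0] == Q[state][1]:
            a = random.randrange(2)
        else:
            a = 0 if Q[state][0] > Q[state][1] else 1
        next_state = min(max(state + ACTIONS[a], 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        # Nudge the estimate toward the reward received plus the
        # discounted value of the best action available afterwards.
        Q[state][a] += ALPHA * (reward + GAMMA * max(Q[next_state]) - Q[state][a])
        state = next_state

# After training, the learned policy in every non-goal cell is "move right".
print(["right" if q[1] > q[0] else "left" for q in Q[:-1]])

Note that the desired behavior appears nowhere in the code above; it emerges from the scores the agent accumulates during training. That is precisely what makes such systems difficult for a regulator to audit before deployment.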

