
The Data Scientist Magazine - Issue 5

THE PATH TO RESPONSIBLE AI

JULIA STOYANOVICH is Institute Associate Professor of Computer Science and Engineering, Associate Professor of Data Science, and Director of the Center for Responsible AI at New York University. She engages in academic research, education, and technology policy, and speaks about the benefits and harms of AI to practitioners and members of the public. Julia’s research interests include AI ethics and legal compliance, and data management and AI systems. She has co-authored over 100 academic publications, and has written for the New York Times, the Wall Street Journal and Le Monde. Julia holds a Ph.D. in Computer Science from Columbia University. She is a recipient of the NSF CAREER Award and a Senior Member of the ACM.


Could you give us your definition of what responsible AI is?

That’s a great question to ask and a very difficult question to answer. At the NYU Tandon Center for Responsible AI, our goal is to make responsible AI synonymous with AI in a not too distant future. I use “Responsible AI” to refer to the socially sustainable design, development, and use of technology. We want to build and use AI systems in ways that make things better for all - or at least for most - of us, not only for the select few. Our hope is that these systems will be able to cure diseases, distribute resources more equitably and more justly in society, and also - of course - make money for people. We also want these systems to make it so that economic opportunity is distributed in ways that are equitable.

One component of responsible AI is AI ethics. Ethics is the study of what is morally good and bad, and morally right and wrong. AI ethics usually refers to the embedding of moral values and principles into AI systems. Much of this conversation centres around the unintended consequences of AI: the mistakes that an AI system may make, and the mistakes that a human may make when following a recommendation or suggestion from an AI. This conversation also often concerns bias in AI. Further, we want to think about arbitrariness in decisions as a kind of mistake that an AI system may make.

For a positive example of AI use, let’s take medical imaging. We are starting to use cutting-edge AI tools in clinical practice to improve diagnosis and prognosis capabilities. In a recent collaboration, researchers from NYU and Facebook AI developed a technology called Fast MRI. This is a way to generate semi-synthetic magnetic resonance imaging (MRI) scans. These scans use a lot less real data as compared to a traditional MRI, and so can be done much faster. We start with a quick MRI scan of an individual, and then we fill in the gaps with the help of AI. It has been shown that these semi-synthetic MRI scans are diagnostically interchangeable with traditional scans. MRI machines are in short supply in many locations, and so this makes MRI scans more accessible, and allows more people to be diagnosed with diseases. It also can make a huge difference for somebody who is claustrophobic, and does not want to stay inside an MRI machine for longer than is absolutely necessary.

Importantly, here, what we have is an environment in which machines cooperate productively with well-trained and professionally responsible clinicians. They understand what it means for a person to have a disease. They also understand that the responsibility for the diagnosis, and for any mistakes they may make, even if the diagnosis is AI-assisted, still rests with them. This is because clinicians have been trained in medical ethics. And so, this gives us an environment in which AI is being used responsibly. It’s helping us solve a problem, an actual problem - increasing access to MRI technology. We can check if the technology works - we can validate the quality of the semi-synthetic MRI scan. And we have a responsible human decision maker in the mix.

I like to contrast this with some other examples where the use of AI is irresponsible. Here, there are lots of things that can go wrong.

For example, some uses of AI create a self-fulfilling prophecy rather than addressing an actual need. In some uses, we are asking machines to make predictions that are morally questionable, like predicting whether somebody will commit a crime in the future based on how others like them have behaved in the past. Sometimes AI is deployed in an environment where it interacts with people who have not been taught how to interact with these machines, and then these people just take the suggestions on faith, and they cannot meaningfully take responsibility for any mistakes.

How can global regulations help ensure AI is responsible?

AI is being used in a variety of sectors, with a variety of impacts. From health, to economic opportunity, to people surviving or dying on a battlefield. Because of this variety, I think that it’s going to be tough to come up with globally accepted ways to regulate AI use. In part, this is because we don’t really agree on a universal set of ethics or moral norms or values. But this is not to say that we shouldn’t try. I think that there are some high-level insights that we all share and some high-level goals. Most importantly, it’s that we should keep our humanity in our interactions with AI. We should make sure that it’s people who are deciding what the future will look like - and not machines.

Is this a problem that can be solved through regulation?

Regulation is a very valuable tool in our responsible AI toolkit. But it’s not the only thing we will rely on. Government oversight, internal oversight within AI vendor companies and within organisations that buy and use AI, as well as awareness on the part of the people being impacted by these systems, are all very important for controlling them.

Let’s take the medical domain, where the use of AI presents challenges even though this is already a tightly regulated space in many countries. There’s a negative example that was surfaced by Obermeyer and co-authors in 2019 (1). In many hospitals throughout the United States, predictive analytics are used to estimate how likely somebody is to be very ill. Researchers showed that these predictors exhibit racial bias: at a given “risk score”, African-American patients are actually considerably sicker than White patients. This happens because of the way that the predictive problem has been set up: the algorithm predicts how ill someone is based on healthcare costs - on how much money has been spent on healthcare for comparable patients to date. We have a biased healthcare system in the US, where people from lower income communities have less access to medical care. These are very often people who are African-American or Hispanic. Therefore, healthcare spending is going to be lower for them than for somebody from a more affluent social group, but who is comparably as ill. By using a biased proxy like healthcare cost, we end up propelling the past into the future, further exacerbating disparities in access to healthcare.
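To make this proxy-label mechanism concrete, here is a minimal, hypothetical Python simulation - not the model or the data from the Obermeyer study - in which two groups have identical true health needs, but one group receives less spending at the same level of illness; ranking by cost then understates the sicker group’s need.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical population: true illness ("health need") is distributed
# identically in both groups.
group = rng.integers(0, 2, n)               # 0 = more access to care, 1 = less access
illness = rng.normal(50, 10, n)

# Healthcare cost is a biased proxy: at the same level of illness, less
# money is spent on the group with less access to care.
spend_rate = np.where(group == 0, 1.0, 0.6)
cost = illness * spend_rate + rng.normal(0, 3, n)

# A predictor trained to estimate cost would track cost; to keep the
# sketch minimal, we use cost itself as the "risk score".
risk_score = cost

# Compare true illness for the two groups within the same risk-score band.
band = (risk_score > 45) & (risk_score < 50)
for g in (0, 1):
    mean_illness = illness[band & (group == g)].mean()
    print(f"group {g}: mean illness at this risk score = {mean_illness:.1f}")
# The group with less access is considerably sicker at the same score,
# which is the pattern described above.
```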

So, in this domain - and in many others - we need to be very careful about how we use data, how we collect it, what it encodes, and what are some of the harms that the irresponsible use of data may bring to this domain.

What is causing these biases? Is it limitations of the data, and what role do data models play in AI?

Data may or may not represent the world faithfully. This certainly contributes very strongly to the bias in predictions. But it’s not the only reason. I like to think about bias in the data by invoking the metaphor that data is a mirror reflection of the world. Even if we reflect the world perfectly correctly in the data, it’s still a reflection of the world such as it is today, and not of a world that could or should be.

The world being reflected may be gender biased, or racially biased, or have some other distortions built in. Then the data will reflect this, and it will legitimise it, because there is a perception that data is “correct” and that it’s “objective.” In reality, data is almost never correct or objective.

And so, we also need to think about whether, given the current state of the world and given the current state of our data collection and data processing capabilities, we in fact can do things better than simply replaying the past into the future with the help of these data-based predictions.

Do you differentiate between different types of AI or would it all fall under the same umbrella?

So here, as an academic, I choose to take an extreme point of view. Of course, in the real world things may be a bit more nuanced. I actually think that it doesn’t matter what sort of technology lives inside that technical “black box.” It could be a very complex model or it could be a very simple one.

I have spent the bulk of my career studying these very simple gadgets called score-based rankers. You start with a dataset of items, let’s say these are people applying for jobs, and you compute a score for each item using a formula that you know upfront - some combination of standardised test scores, for example, and some number of years of experience. Then you sort everybody on that score. Even in that case, by taking the top 10% of the people from that ranked list to then invite them for in-person job interviews, you’re introducing a lot of opacity. You, as a decision-maker, are not going to immediately understand what the impact of the scoring formula is on whom you choose to invite for job interviews, and whom you forgo.
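To make the setup concrete, here is a minimal sketch of a score-based ranker of the kind described above; the column names, weights, and data are invented for illustration and are not taken from any real screening system.

```python
import pandas as pd

# Hypothetical applicant data; columns and weights are illustrative only.
applicants = pd.DataFrame({
    "name": ["A", "B", "C", "D", "E"],
    "test_score": [88, 92, 75, 95, 70],      # standardised test, out of 100
    "years_experience": [4, 2, 8, 1, 6],
})

# A scoring formula known upfront: a weighted sum of normalised attributes.
applicants["score"] = (
    0.7 * applicants["test_score"] / 100
    + 0.3 * applicants["years_experience"] / 10
)

# Sort everybody on the score and invite the top k for interviews.
ranked = applicants.sort_values("score", ascending=False)
k = 2
print(ranked.head(k)[["name", "score"]])
```

Even in this fully transparent setup, the combined effect of the weights, the normalisation, and the cutoff k on who is invited and who is left out is not obvious at a glance.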

(1) “Dissecting racial bias in an algorithm used to manage the health of populations”, Obermeyer et al., Science, 2019, www.science.org/doi/10.1126/science.aax2342

As another example, let’s say that we’re talking about college admissions. Let’s say, half of the score is made up of the high school grade point average of the student, and half of the score is based on their standardised test performance, like the SAT. If this is a very selective college, then applicants self-select, and only those with the very top SAT scores will apply.

Although the SAT score component has an equal weight in your formula, it’s going to have far less importance, because everybody’s tied on that component of the score. This shows you that even seemingly simple models can have side effects - or direct effects - that are hard for people to predict.
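A short sketch can illustrate this effect: when applicants self-select so that one component of the score barely varies, that component stops influencing the ranking even though its nominal weight is unchanged. The numbers below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000

# Hypothetical applicant pool for a very selective college: GPA varies,
# but self-selection compresses SAT scores near the top of the scale.
gpa = rng.uniform(3.0, 4.0, n)        # out of 4.0
sat = rng.uniform(1550, 1600, n)      # out of 1600 - nearly everyone is tied

# Equal nominal weights on the two normalised components.
score = 0.5 * (gpa / 4.0) + 0.5 * (sat / 1600)

# Correlation with the final score is a rough measure of effective importance.
print("corr(score, GPA):", round(np.corrcoef(score, gpa)[0, 1], 3))
print("corr(score, SAT):", round(np.corrcoef(score, sat)[0, 1], 3))
# GPA dominates the ranking; the SAT, though weighted equally, barely matters.
```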

So rather than worrying about what lives inside that black box - whether it’s a generative AI model, a simple, rule-based AI, or a scoring formula - we should worry about the impacts that these devices have.

To think about the impacts of AI, we have to ask: what is the domain in which we use it? Can we tell what the AI does, rather than how it works? We have scientific methods at our disposal to help us deal with and unpack how black boxes work. We can feed a black box some inputs and see what happens at the output. Are there any changes in the output, for example, if I change nothing in the input except an applicant’s gender or ethnicity? If the output changes, then we can suspect that there is something going on that we should be looking into more closely.
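As a sketch of this kind of input-output probe, the snippet below flips a single attribute and compares the outputs. The predict function is a hypothetical stand-in for whatever black box is being audited, with a bias deliberately planted so the probe has something to detect.

```python
import copy

def predict(applicant: dict) -> float:
    """Toy stand-in for the black box being audited. In practice this would
    be an API call or a model whose internals we cannot inspect."""
    score = 0.6 * applicant["test_score"] / 100 + 0.4 * applicant["years_experience"] / 10
    # Deliberately planted dependence on a protected attribute, so the
    # probe below has something to find.
    if applicant["gender"] == "female":
        score -= 0.05
    return score

def counterfactual_probe(applicant: dict, attribute: str, alternative) -> float:
    """Change nothing except one attribute and report how the output moves."""
    flipped = copy.deepcopy(applicant)
    flipped[attribute] = alternative
    return predict(flipped) - predict(applicant)

applicant = {"test_score": 85, "years_experience": 5, "gender": "female"}
delta = counterfactual_probe(applicant, "gender", "male")
print(f"output change when only gender is flipped: {delta:+.3f}")
# A non-zero change does not prove discrimination by itself, but it is a
# signal that something should be looked into more closely.
```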

To summarise, I wouldn’t worry about whether we are dealing with a very complex machine or a seemingly simple one. I would worry more about what these machines do, whether they work, and how we measure their performance. And I would worry about the consequences of a mistake, and about whether and how we can correct these mistakes.

Did you see an increased interest in regulation and responsible AI with the rise of generative AI?

Yes, absolutely. It’s a blessing and a curse that there’s now this hype around generative AI.

The blessing is, of course, that almost everybody is paying attention. Worldwide, we have politicians speaking about the need to control the adverse impacts or the risks of harm that the use of generative AI can bring. Together with that, everybody’s just paying attention to AI more generally, and to how we might oversee, regulate, and bring more responsibility into our deployment of these systems. It’s a good thing in that sense.

But of course, hype is also very tiring, and it’s also harmful in that we are paying a lot of attention to things that may or may not matter immediately. We shouldn’t forget that we already are using AI tools in very impactful domains, and have been using these for decades. These are not, for the most part, fancy tools like large language models. They are much simpler tools like rule-based systems, score-based rankers, or linear regression models. These are being used in hiring, employment, credit and lending, and in determining who has access to housing. We shouldn’t forget that even if an AI tool is simpler, there can still be - and there have been - tremendous documented harms from the use of these tools. We should definitely regulate in a way that looks at who is impacted, and what the impacts are, rather than by regulating a particular kind of technology that sits inside the box.

What’s happening currently regarding regulations? How do you think we should go about creating regulations?

I think that we just need to try to regulate this space. We shouldn’t wait until we come to a moment where we’re absolutely sure that this is the perfect way to put a regulation into place. That will never happen. It’s very hard to reach a consensus. So I think that we should try. We should talk less and do more. I’m really glad that the European Union has been leading the way in this, starting with the GDPR (the General Data Protection Regulation). That has been extremely impactful. In the United States there is still no analogue to this, and this is really problematic. I’m also really glad that the AI Act in the European Union is moving forward. Again, in the US we have been hearing lots of people speak about this. But we are yet to see regulation at the federal level in the United States. We are lagging behind. In the US, of course, we have a system that is decentralised, at least to some extent. And so there is also a lot of opportunity in the United States to regulate at the level of cities and states.

Is there any evidence of a link between the strictness of regulations and the suppression of technological innovation?

I’ve not actually done any research specifically to look at the impact of regulation on innovation. It’s hard to do this research, really, because we don’t have examples of two places that are comparable in every way, except that one has a stronger regulatory regime than the other. But personally, I don’t believe that regulation stifles innovation in any way.

To me, “responsible AI” is, first and foremost, AI that is socially sustainable. To reach social sustainability, we need to make it so that when we deploy a tool, it doesn’t break society further. Because then you have to recover from the ill impacts of that. So to me, first deploying something and then seeing how it plays out is not at all a sustainable way to operate a society. It also only advantages a very select few. The people who are releasing the technology stand to benefit from it financially now. But in the long run, this is going to hurt us and it is already hurting us. So I personally see no alternative here. Considering the success that this technology has had, we do need to think about regulation at the same time as we think about large-scale adoption of things like large language models.

What’s your opinion on the release of generative AI, such as ChatGPT, to a mass audience? Was this too early in terms of the maturity of the technology?

I definitely think that it’s too risky. I think it’s extremely irresponsible to have unleashed this technology without giving us any meaningful way to control how the data travels and where the data goes. We also haven’t been given any meaningful way to understand where this technology can be safely used. There are tremendous issues around labour and environmental sustainability that go along with the release of the technology. I think that the harm to individuals and to society, and the risks of further harm due to data protection violations, bias and anthropomorphisation of these tools far outweigh the benefits.

But then the question is: benefits to whom? For the companies that release this technology, financial benefits are what matter. We need regulation so that it’s not just the select few who benefit. I don’t currently do any research work that involves generative AI because I just don’t think that we should be feeding into this hype and giving away our data. Those who produce these technologies need to spend resources - including time and money - on figuring out how to control them before they can go into even broader use.

What are the bigger risks of AI?

One of them is that decisions will be made with the help of these tools by people who do not question whether the predictions of the tools are correct in any sense. So, many of the decisions being made will be arbitrary, and this is even beyond bias. How our data is used, and whether we’re comfortable with our data being used in this way, is also problematic. One of the angles on this - in addition to the conversation about benefits and harms - is that people have rights. We have rights to privacy. We have rights to agency, to being in charge, both of our own data and existence, and also of the world in which our society functions. At the high level, it’s really just that we’re insisting on using a technology that we don’t yet really know how to control.

To be more concrete, we need to think about - in each specific domain - who benefits, who is harmed, and who can mitigate the harm. It’s the same story with every technology that we’ve been experiencing throughout human history. The Industrial Revolution also left out some and benefited some others. And we need to make sure that we are acting and using technology in ways that are more equitable this time around.

How can practising data science leaders and data scientists make sure that they develop AI systems responsibly?

In my very simple worldview, there are essentially four conditions that you need to meet to use AI responsibly.

Firstly, are you using AI to meet some clear need for improvement? Are you just using it because your competitors are doing the same, or is there some actual problem that you can clearly articulate, and that you want AI to help you solve?

Secondly, can you actually check whether the AI is going to meet the requirements of that need for improvement? Can you validate that the predictions of the AI are good and correct? If you can’t validate this, then again, it’s not the right setup.

Thirdly, can the problem that we have set out to solve actually be solved given current capabilities in terms of hardware, data, and software? If that is not the case - for example, if the data that would allow you to predict the kind of thing you want to predict doesn’t exist - then it’s hopeless. AI is not magic.

Finally, AI very rarely operates autonomously. Usually it operates in collaboration with a human. So, do you actually have decision-makers who are well-equipped to work with your AI, and who can challenge it when it needs to be challenged? Here again, take the example of a clinician working with AI to diagnose a disease. They need to understand that it’s up to them to make the decision.

Outside of these four conditions, there are, of course, others, like legal compliance. Are you going to be legally compliant in your data collection and AI use? But, the main four components are absolutely crucial. Is there a problem to solve? Can we solve that problem? Can we check that we solved it? And can we use this AI, this solution, safely together with humans?

Generally, to me, responsible AI is about human agency. It’s about people at every level taking responsibility for what we do professionally and for how we’re impacted personally.

How can global regulations help ensure that all AI is responsible?

This is, again, a very difficult question. I don’t know whether we are prepared to regulate the use of AI globally. We have been trying to do this in a number of very concrete domains.

For example, take lethal autonomous weapons. These weapons decide who or what is a legitimate target, and who or what is a civilian - person or infrastructure - and so should not be targeted. Even in this domain, AI has been very difficult to regulate globally.

The United Nations has been playing a tremendous role in pushing for regulation in this domain. But it has been very difficult to come to a global worldwide agreement about how we can control these technologies.

There is a balance that needs to be struck between the rate of technological development and the rate at which we develop ethical frameworks. Is that balance being struck, and do you think we will be able to keep up with technological advances in the future?

I am an engineer - I’m not a philosopher or somebody whose job it is to predict the future. Engineers predict the future by making it. I think more engineers are going to understand that it’s our responsibility to make sure that we build systems that we are proud of and that we can stand behind. We should take control and participate in making decisions about what we think we should be building and using.

When we talk about responsible AI, that term itself is a bit misleading. Responsible AI doesn’t mean that the AI is responsible. It’s the people who are responsible for the development and use of AI.

One of the things that’s particularly dangerous about the current AI hype is that there are some very vocal people saying that AI is about to take over and that it has a mind of its own. They argue that whatever harms us socially is the AI’s responsibility. This is a really problematic narrative, because it absolves those who stand to benefit from AI, financially and otherwise, from any responsibility for the mistakes. We cannot allow that to pass.

I think that this is really a point in history where we’re witnessing people fuelling this AI hype for personal benefit, so that they absolve themselves of the responsibility and yet reap all the benefits. Generally, to me, responsible AI is about human agency. It’s about people at every level taking responsibility for what we do professionally and for how we’re impacted personally. We all need to step up and say that We the People are in control here. The agency is ours and the responsibility is ours.

This is, again, one area in which generative AI presents us with challenges, because a lot of the impetus for these tools is for their output to seem indistinguishable from what a human would have produced. This anthropomorphisation of AI is very problematic, because it takes us away from the goal of staying in control and towards somehow giving up agency to machines. We should resist this as much as possible.

What do you say when people counter that AGIs can start writing their own code now and could potentially start self-improving at some point in the future?

I don’t believe that’s the case. Furthermore, we should decide whether we are okay with this. If generative AI writing code is something that we think can be used to automate more mundane tasks - for example, software testing - then certainly we can allow this particular use.

But whenever we ask an AI to do something, we need to be able to measure whether whatever it has done is correct, good, and adheres to the requirements that we have set out. If we can’t do that, then we cannot take an AI’s word on faith that it worked.

One example is the use of AI in hiring and employment. There are several tools that have been developed that claim to construct a personality profile of a job applicant based on their resume. But, is there any way to validate this? If I made such a prediction myself, could I actually check if I was correct? If the answer is no, then we should not be using machines to make these predictions. This is because AI tools are engineering artefacts. If we can’t tell that they work, then they don’t work.

Do you have a perspective on OpenAI’s superalignment initiative?

I don’t have a perspective on their superalignment initiative specifically, and I’m not a fan of the term “alignment” in general. Usually the message there is that somehow we’re able to just automate moral and value-based reasoning in machines, and I don’t believe that is possible, nor should it be the goal.

I don’t think that we can automate ethics or responsibility. I don’t think alignment in the way that it’s being discussed right now is a productive way forward. This is because it borders on the conversation about algorithmic morality, where essentially it’s just the simplest, least nuanced version of utilitarianism that we end up trying to embed.

For example, we only look at how many people die and how many people are safer. We add these numbers up, we subtract some, and then based on that, we decide whether or not it’s safe to deploy self-driving cars, for example. I think that the use of AI is way too complex and context-dependent for us to pretend that we can automate ethics and responsibility and morality. So, I think that’s a dead end.

So, in your view, is making AI systems responsible primarily the duty of engineers?

For technologists like myself, I think the main task is to figure out where technology can be helpful and where it has its limits. Technology cannot solve all of society’s problems. There’s no way for you to de-bias a dataset and then proclaim that now you are hiring with no bias, or lending with no bias. This is hubris. We need people to make decisions and take responsibility for decisions throughout. There’s no way that we can align technology to our values, push a button, and then say that the world is fair and just.

On November 1st 2023, Julia was invited to speak at the AI Insight Forum at the US Senate. Her full statement can be found at this link: r-ai.co/AIImpactForum
