Studying Artificial Intelligence At New York University

LINDA WERTHEIMER, HOST:

Artificial intelligence is increasingly a part of our daily lives. It helps run our search engines and populate our Facebook feeds. It lets us interact with Siri and Alexa. But the powerful algorithms and predictive systems that make up AI are playing a deeper role in society, with some serious implications. Professor Kate Crawford is co-founder of New York University's new AI Now Institute. She says you're seeing AI at work in health care, education, even criminal justice.

(SOUNDBITE OF ARCHIVED RECORDING)

KATE CRAWFORD: Criminal justice has some very early-stage algorithmic systems that are already being brought to bear. So if we take, for example, courtroom settings, we have predictive risk algorithms that are being used to assist judges in deciding whether somebody is at high risk or low risk for re-offense. But as we saw in a recent investigation by ProPublica, these systems are already producing twice the false positive rate for black defendants as for white defendants.

WERTHEIMER: Can you give us a sense of what creates biases in algorithms? And maybe you could give us an example.

CRAWFORD: Often, a common cause is the data that's used to train a system. So if we take, for example, a predictive policing system, it would be trained on a lot of data about crimes and arrests. And that makes sense. I mean, that's an important data set. But we can also look at the history of who tends to be arrested and who tends to be approached by the police. And we can see that there is a racial history there, and an issue for low-income communities, which is different from high-income communities. So those are the types of biases that concern us the most.

WERTHEIMER: As civilians in this area, we have no idea how many algorithms are sorting us out as we move through our lives. Is there something, as consumers of information, we should be aware of, should watch out for?

CRAWFORD: I think there's a lot we can do. For example, we're starting to see a lot of early AI systems being applied in hiring and HR. And I think many people, you know, when you're taking a job interview, for example, you don't necessarily think to ask, oh, how am I being assessed here? But these kinds of questions, I think, will become increasingly important as we start to see algorithmic systems take a greater role in decision making.

WERTHEIMER: I was thinking about law firms, which, for many years, promoted men. I wonder if that's the kind of thing you mean that would come out of the machine instead of out of the partners of the law firm.

CRAWFORD: Yeah, precisely. And let me give you an example. There's a new system that many companies are using right now, which is called HireVue. And what it does is it essentially records a person while they're doing a job interview and then looks at that footage to assess things like - what sorts of gestures do they use? What words do they use? How frequently do they pause? Very, very intimate analyses which they then use to match to their most successful employees.

Now, that might make sense. You might say, yes, we want to replicate our most successful employees. But the downside is that this can become an ingrained form of bias - that you're really just, you know, replicating people who already look like and sound like the people at the top of your company. So I think there's a tendency - it's often called automation bias - to assume that an algorithmic system must be coming up with a more objective or a better result than a human. But that is not necessarily the case.

WERTHEIMER: Now, you worked at Microsoft for a long time. So you...

CRAWFORD: I still do. Yeah, absolutely.

WERTHEIMER: You have considerable insight into the industry that is creating these algorithms. Do you think that in the rush to invent new products, they are overlooking questions like the ones you were just raising? Or is this happening deliberately?

CRAWFORD: It's certainly not deliberate. But the thing that's really motivated me and my colleagues in setting up the AI Now Institute is that these are seen very much as technical problems. But in actual fact, what these systems are showing us is that data sets are always functions of human history. They reflect who we are. So in many ways, we have to think about these problems as social problems first, not technical ones. And so for us, that means bringing a lot more disciplines into the room. That means sociology. It means law. It means anthropology, philosophy and history.

We can actually learn a lot from these much longer-term studies of how humans interact and how social change occurs. Unfortunately, at the moment, a lot of those decisions are being made by people whose training is purely technical. And in the same way that we wouldn't expect a judge to tune a technical system, we shouldn't be expecting an engineer to understand the intricacies of the criminal justice system.

WERTHEIMER: Professor Kate Crawford is co-founder of NYU's just-opened AI Now Institute. Thank you very much.

CRAWFORD: It's a pleasure, Linda.

Transcript provided by NPR, Copyright NPR.