Is The Concern Artificial Intelligence — Or Autonomy?

There's a provocative interview with the philosopher Daniel Dennett in Living on Earth.

The topic is Dennett's latest book — From Bacteria to Bach and Back: The Evolution of Minds — and his idea that Charles Darwin and Alan Turing can be credited, in a way, with the same discovery: that you don't need comprehension to achieve competence.

Darwin showed how you can get the appearance of purpose and design out of blind processes of natural selection. And Turing, one of the pioneers in the field of computation, offered evidence that any problem precise enough to be computed at all can be computed by a mechanical device — that is, a device without an iota of insight or understanding.

But the part of the interview that particularly grabbed my attention comes at the end. Living on Earth host Steve Curwood raises the by-now hoary worry that, as AI advances, machines will come to lord over us. This is a staple of science fiction, and it has recently become the focus of considerable attention among opinion-makers in discussions of the so-called "singularity." Dennett acknowledges that the risk of takeover is a real one. But he says we've misunderstood it: The risk is not that machines will become autonomous and come to rule over us — the risk is, rather, that we will come to depend too much on machines.

The big problem AI faces is not really the intelligence part. It's the autonomy part. At the end of the day, even the smartest computers are tools, our tools — and their intentions are our intentions. Or, to the extent that we can speak of their intentions at all — for example, the intention of a self-driving car to avoid an obstacle — we have in mind something the machine was designed to do.

Even the most primitive organism, in contrast, at least seems to have a kind of autonomy. It really has its own interests. Light. Food. Survival. Life.

The danger of our growing dependence on technologies is not really that we are losing our natural autonomy in quite this sense. Our needs are still our needs. But it is a loss of autonomy, nonetheless. Even auto mechanics these days rely on diagnostic computers, and, in the era of self-driving cars, will any of us still know how to drive? Think what would happen if we lost electricity, or if the grid were really and truly hacked. We'd be thrown back into the 19th century, as Dennett says. But in many ways, things would be worse. We'd be thrown back — but without the knowledge and know-how that made it possible for our ancestors to thrive in the olden days.

I don't think this fear is unrealistic. But we need to put it in context. The truth is, we've been technological since our dawn as a species. We first find ourselves in the archaeological record precisely where we see a great explosion of tools, technologies, art-making and also linguistic practices. In a sense, to be human is to be cyborgian — that is, a technologically extended version of our merely biological selves. This suggests that at any time in our development, a large-scale breakdown in the technological infrastructure would spell not exactly our doom, but our radical reorganization.

Perhaps what makes our current predicament unprecedented is the fact that we are so densely networked. When the Library of Alexandria burned down, books and, indeed, knowledge were lost. But in a world where libraries are replaced by their online versions, it isn't inconceivable that every library could be, simply, deleted.

What happens to us then?

Copyright 2021 NPR. To see more, visit https://www.npr.org.

Alva Noë is a contributor to the NPR blog 13.7: Cosmos and Culture. He is a writer and a philosopher who works on the nature of mind and human experience.