An Intelligible Conversation About Artificial Intelligence

Yejin Choi leading a research seminar in September at the Paul G. Allen School of Computer Science & Engineering at the University of Washington. John D. and Catherine T. MacArthur Foundation

We have shared dozens of links to stories about advances in technology, especially those with applications to conservation, but none has touched on artificial intelligence. During my daily scanning for articles to share in 2022, I have noticed this topic appearing more and more as a source of fear, to the point where I stopped reading such pieces. Today's article is different:

An A.I. Pioneer on What We Should Really Fear

Artificial intelligence stirs our highest ambitions and deepest fears like few other technologies. It’s as if every gleaming and Promethean promise of machines able to perform tasks at speeds and with skills of which we can only dream carries with it a countervailing nightmare of human displacement and obsolescence. But despite recent A.I. breakthroughs in previously human-dominated realms of language and visual art — the prose compositions of the GPT-3 language model and visual creations of the DALL-E 2 system have drawn intense interest — our gravest concerns should probably be tempered. At least that’s according to the computer scientist Yejin Choi, a 2022 recipient of the prestigious MacArthur “genius” grant who has been doing groundbreaking research on developing common sense and ethical reasoning in A.I. “There is a bit of hype around A.I. potential, as well as A.I. fear,” admits Choi, who is 45. Which isn’t to say the story of humans and A.I. will be without its surprises. “It has the feeling of adventure,” Choi says about her work. “You’re exploring this unknown territory. You see something unexpected, and then you feel like, I want to find out what else is out there!”

What are the biggest misconceptions people still have about A.I.? They make hasty generalizations. “Oh, GPT-3 can write this wonderful blog article. Maybe GPT-4 will be a New York Times Magazine editor.” [Laughs.] I don’t think it can replace anybody there because it doesn’t have a true understanding of the political backdrop and so cannot really write something relevant for readers. Then there are the concerns about A.I. sentience. There are always people who believe in something that doesn’t make sense. People believe in tarot cards. People believe in conspiracy theories. So of course there will be people who believe in A.I. being sentient.

I know this is maybe the most clichéd possible question to ask you, but I’m going to ask it anyway: Will humans ever create sentient artificial intelligence? I might change my mind, but currently I am skeptical. I can see that some people might have that impression, but when you work so close to A.I., you see a lot of limitations. That’s the problem. From a distance, it looks like, oh, my God! Up close, I see all the flaws. Whenever there’s a lot of patterns, a lot of data, A.I. is very good at processing that — certain things like the game of Go or chess. But humans have this tendency to believe that if A.I. can do something smart like translation or chess, then it must be really good at all the easy stuff too. The truth is, what’s easy for machines can be hard for humans and vice versa. You’d be surprised how A.I. struggles with basic common sense. It’s crazy.

Can you explain what “common sense” means in the context of teaching it to A.I.? A way of describing it is that common sense is the dark matter of intelligence. Normal matter is what we see, what we can interact with. We thought for a long time that that’s what was there in the physical world — and just that. It turns out that’s only 5 percent of the universe. Ninety-five percent is dark matter and dark energy, but it’s invisible and not directly measurable. We know it exists, because if it doesn’t, then the normal matter doesn’t make sense. So we know it’s there, and we know there’s a lot of it. We’re coming to that realization with common sense. It’s the unspoken, implicit knowledge that you and I have. It’s so obvious that we often don’t talk about it. For example, how many eyes does a horse have? Two. We don’t talk about it, but everyone knows it. We don’t know the exact fraction of knowledge that you and I have that we didn’t talk about — but still know — but my speculation is that there’s a lot. Let me give you another example: You and I know birds can fly, and we know penguins generally cannot. So A.I. researchers thought, we can code this up: Birds usually fly, except for penguins. But in fact, exceptions are the challenge for common-sense rules. Newborn baby birds cannot fly, birds covered in oil cannot fly, birds who are injured cannot fly, birds in a cage cannot fly. The point being, exceptions are not exceptional, and you and I can think of them even though nobody told us. It’s a fascinating capability, and it’s not so easy for A.I.
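Choi's point about exception-riddled rules can be sketched in code. The function below is a hypothetical illustration (not from her research): it encodes "birds usually fly" with the hand-written exceptions she lists, and then shows how any unanticipated case slips straight through the default.

```python
# Hypothetical sketch of the rule-based approach described above:
# encode "birds usually fly" plus hand-written exceptions.

def can_fly(bird):
    """Naive common-sense rule: birds fly, except the listed cases."""
    if bird.get("species") == "penguin":
        return False
    if bird.get("is_newborn"):
        return False
    if bird.get("covered_in_oil"):
        return False
    if bird.get("injured"):
        return False
    return True  # default: birds fly

# The anticipated cases work as intended...
print(can_fly({"species": "sparrow"}))   # True
print(can_fly({"species": "penguin"}))   # False

# ...but a case nobody thought to encode gets the default answer,
# even though a person would immediately know better.
print(can_fly({"species": "sparrow", "in_cage": True}))  # True, wrongly
```

The open-ended supply of exceptions is exactly the problem: each new rule patches one case while leaving infinitely many others, which is why hand-coding common sense this way does not scale.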

Read the whole conversation here.
