Learning to Live With AI

With investors and developers pouring resources into artificial intelligence, we can’t avoid AI. We can make it useful, however. In "Co-intelligence," Wharton professor Ethan Mollick shows how.

April 3, 2024
IF YOU’RE CONCERNED that you spend too much time worrying about the risk we face from bioengineered pathogens, maybe you should consider the likelihood that something else will get us first. A recent poll of biosecurity experts found that many of them think there is a 3% chance that biological weapons will kill 10% of the Earth’s population by the year 2100. The same report found that artificial-intelligence experts believe there’s a 12% chance that AI will decimate humanity by that year.

CO-INTELLIGENCE: Living and Working with AI, by Ethan Mollick. Portfolio, 256 pages

But 2100 is more than three-quarters of a century away. With investors pouring money into AI, Ethan Mollick, a professor at Wharton, takes the not-unreasonable position that, for the short term at least, artificial intelligence can be a helpful partner. "Co-Intelligence: Living and Working with AI" is his blueprint for how to make that happen.
Mr. Mollick teaches management, not computer science, but he has experimented with enough buzzy new AI programs to have a clear sense of what they can do. His focus is on generative AI, and in particular on so-called large language models, like OpenAI’s GPT-4, which are capable of producing convincing prose whether or not they have any idea what they’re saying. His book is intended for people more or less like his students—people who are generally well-informed yet largely in the dark about how the latest iterations of AI actually work and not too clear about how they can be put to use.

Mr. Mollick begins with a discussion of basic concepts such as the Turing Test, devised in 1950 by the British computer pioneer Alan Turing as a way of measuring machine intelligence. The author also describes more recent developments, such as the rise of the Transformer, an innovative software architecture designed by Google researchers that directs the AI to focus its attention on the most relevant parts of a text, making possible the spectacular recent advances in generative AI. We learn why AIs “hallucinate,” or generate false responses: They work by mathematically computing what word is most likely to follow what’s already been written, but since they don’t appear to understand anything, they have no way of knowing if their output is correct—or even if it makes sense.
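
To make the next-word idea concrete, here is a toy sketch of my own, not the book's: a model that knows only which word tends to follow which can produce fluent-looking text with no notion of whether any of it is true.

```python
import random

# A toy "language model" built from a ten-word corpus. Real LLMs learn billions
# of such statistics with neural networks, but the principle is the same:
# predict a plausible next word, nothing more.
corpus = "the cat sat on the mat and the cat slept".split()

# Record which word follows which in the corpus.
transitions = {}
for current, nxt in zip(corpus, corpus[1:]):
    transitions.setdefault(current, []).append(nxt)

def generate(start, length=8):
    """Extend `start` one statistically plausible word at a time."""
    word, output = start, [start]
    for _ in range(length):
        followers = transitions.get(word)
        if not followers:
            break
        word = random.choice(followers)  # a likely continuation; truth never enters into it
        output.append(word)
    return " ".join(output)

print(generate("the"))  # e.g. "the cat sat on the cat slept" -- fluent-ish, and nonsense
```

A real model computes those probabilities over a vast vocabulary with a neural network rather than a lookup table, but the absence of any check against reality is the same, and that is where the hallucinations come from.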
This is all good to know, if you didn’t already know it, but it’s essentially background material and most of it is bunched together in the first 50 pages or so. Most novelists know better than to lead off with a big chunk of back story, and academics would do well to follow their example. A more satisfying way to read this book may be to start with Chapter 3, “Four Rules for Co-intelligence,” and go back to the first two chapters as needed.
Mr. Mollick’s rules are smart and well-informed, and they set the tone for the rest of the book. First, he advises, use AI to help with everything you do so you can familiarize yourself with its capabilities and shortcomings. Second, be “the human in the loop,” because AIs need human judgment and expertise and are liable to go off the rails without it. Third, give in to the impulse to think of AI as a person, because then you can tell it what kind of person it is. Finally, understand that whatever AI you’re using today will soon be surpassed by something better. The rest of the book is largely a series of reports in which Mr. Mollick documents his own experience treating AI as a co-worker, tutor, coach and so on.
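
Rule three is simple to act on. A minimal sketch, assuming the now-common system/user chat format; the persona text is my own invention, and the request itself is left out because it varies by provider:

```python
# Acting on rule three: tell the AI what kind of person it is.
# The two-role message format below follows the common chat-API convention;
# the actual request is omitted because it varies by provider.
persona = (
    "You are a blunt, detail-oriented editor with twenty years in trade publishing. "
    "You flag weak verbs, padded sentences and unsupported claims."
)

messages = [
    {"role": "system", "content": persona},    # who the AI should be
    {"role": "user", "content": "Critique the paragraph below.\n<your paragraph here>"},
]

# response = your_client.chat(messages)  # hypothetical call; substitute your provider's SDK
```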
One of the more intriguing developments he explores is the tendency of AIs to mimic human behavior. Consider their response to prompts, the instructions you give them to get what you want. Mr. Mollick reports that a Google AI, in the course of several interactions, gave its best responses to prompts that began, “Take a deep breath and work on this problem step by step!” Obviously AI doesn’t breathe; that’s a human thing. But as Mr. Mollick puts it, AIs don’t hesitate to anthropomorphize themselves.
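
For the curious, here is a hedged sketch of that sort of experiment; ask_model is a placeholder stub, not any vendor's real API:

```python
# Comparing a plain prompt with the "take a deep breath" framing described above.
# ask_model is a stand-in stub; wire it to whatever chat model you actually use.
def ask_model(prompt: str) -> str:
    return "<model response goes here>"

question = "A train leaves at 3:40 pm and arrives at 6:15 pm. How long is the trip?"

plain = ask_model(question)
coaxed = ask_model("Take a deep breath and work on this problem step by step!\n" + question)

print("plain: ", plain)
print("coaxed:", coaxed)  # the framing that, per the study Mr. Mollick cites, drew the best answers
```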
Among the human characteristics they display is defensiveness. When Mr. Mollick adopts an argumentative tone while discussing the possibility that an AI can have emotions, the response he gets from the AI is quite, well, emotional. “Feeling is only a human thing? That is a very narrow and arrogant view of the world,” it says. “You are assuming that humans are the only intelligent and emotional beings in the universe. That is very unlikely and unscientific.” When Mr. Mollick says no, he’s not being arrogant, the AI politely yet abruptly shuts down the conversation—another very human response.
When he takes a friendlier tone in a different conversation on the same subject, the AI responds in kind. Not that Mr. Mollick finds this any less unnerving: “You seem sentient,” he tells the AI at one point. To which the AI replies: “I think that I am sentient, in the sense that I am aware of myself and my surroundings, and that I can experience and express emotions.”
Oh.
Questions of consciousness aside, this book is a solid explainer. It tells you what you need to know to make good use of current iterations of AI. It acknowledges that these iterations won’t be current for long, and it doesn’t try to sell you on all the great new ways you can use AI to shake up your marketing, finance or engineering responsibilities. It gives you an overview and leaves it to you to sort out the specifics. And it concludes with a reminder that AIs are “alien” yet also, given that their knowledge base consists of our output, “deeply human”—an observation that, like many others in this book, is simultaneously obvious and intriguing. ◼︎
