Civilization is really a very new and very glitchy thing. If you talk to evolutionary psychologists and people who've looked at how our brains have developed over hundreds of thousands, if not millions of years, they'll tell you that our sense of wonder and creativity, as well as our ability to be cautious and rational, and to trust people we've never met to govern us, all of that kind of stuff—the vast majority of our decision-making actually rests on a much older, more ancient system. We are so much more like primates than we like to think. Certainly, that's been the lesson of the last 50 years or so of behavioral science. I was trying to marry a lot of research looking at the ways in which people are building interesting technologies and then deploying them to make money, and how tempting it is to throw those technologies at those ancient decision-making systems, especially because so much of our decision-making is instinctive. 

My worry with AI, of course, is that as we automate decision-making more and more, we use automated systems not only to entertain ourselves but to decide who gets a job, who gets a loan, and who gets bail. I worry that we're going to be in a position in 20 years where we don't have the internal compass we once did. We may have slid away from that higher human functioning—the creativity, the rationality, and all that stuff—and back toward a more primitive version of ourselves, because that's the part that gets played on by this kind of technology. And that's how all these companies wind up making money.

We’re undergoing a massive upgrade moment. This conversation focuses on one of the most immediate and profound challenges to humanity: the ways technology is engineered to exploit our vulnerabilities and slowly erase our ability to make original, conscious choices. Our guest is Jacob Ward, a journalist who has spent over 20 years covering the breakthroughs and powerful forces that determine the course of history. Jacob is a Reporter-in-Residence at The Omidyar Network and the founding editor and host of The Rip Current, a newsletter and podcast that examines technology, politics, and the fight to protect the future.

He’s the author of the book, The Loop: How AI Is Creating a World Without Choices and How to Fight Back. He served as the editor-in-chief of Popular Science and was a correspondent for NBC, The TODAY Show, and Al Jazeera. His PBS documentary series, Hacking Your Mind, predicted the rise of Donald Trump. We discussed creativity in the age of AI, the importance of emotional and intuitive intelligence, and the need to reclaim the aspects of life—like connection and nature—that algorithms cannot commodify.

THE CREATIVE PROCESS

I’m really looking forward to this conversation because you've been writing about technology for a long time; since before the rollout of AI, you've been following this and many other things. But let's get into your book, The Loop, where you address so many things, including the unconscious patterns of our human behavior, those magisteria and how they overlap, and the digital algorithms you describe that take advantage of our wobbly human habits. This dynamic determines how society works, and I think it's definitely making us all less happy and less free. It's funny to think about the world before the rollout. If you go back 20 years and compare social media then to today, to the state of global politics, anxiety, and rising youth suicide rates and mental health issues, it has all just accelerated. The human condition was never perfect, but now with AI, which operates at the speed of psychosis, what are we going to look like in 20 years?

JACOB WARD

Yeah, boy, what we will look like in 20 years is anyone's guess. I think the broad concept that I've been trying to follow in my career for a while now is this idea that human civilization is really a very new and very glitchy thing. I worry that we're going to be in a position in 20 years where we don't have the internal compass we once did. We may have slid away from that higher human functioning—the creativity and the rationality—and back toward a more primitive version of ourselves because that's the part that gets played on by this kind of technology.

THE CREATIVE PROCESS

These technology companies are so powerful. They can fight lawsuits, and I’m really for governance, but there’s governance and then there is implementation. Then you have countries like China and India, where they've normalized algorithmic control. So when you say governance, you mean taking control or oversight away from the technology companies, who've been able to guinea pig us all, testing it on us, and making the government the watchdog. To be clear, I'm for governance, but it’s not without issues. What are governments using that control for? How are they spying on us?

JACOB WARD

It's complicated, right? I feel the tension that you're describing. A lot of what I argued for in my book, and have come to believe, is just that: we're going to need to regulate based on the psychological effects of these technologies. 

The weird part is that I’ll go speak to people, and they'll bring up China, which really does that, right? They really do regulate based on the worries they have about the effect that technology might have on the mind. They do things that I look at and think, oh, that would be cool. For instance, it's widely understood now that kids in China do not use social media like kids in the United States. Kids under a certain age are not even allowed by law to be scrolling on something like our TikTok. That is not okay by the government. 

As a worried parent of teenagers, I think to myself, that'd be great. I wish that were true here. I also know that they have what I consider to be really positive ideas about making sure that these companies are transparent about the algorithms they use and how those algorithms function. On the other hand, one of the requirements around transparency is that they show that the company has built an algorithm that is based on party doctrine that agrees with the central Communist Party. 

I’m caught between wanting regulation because, right now in the United States, there are no data privacy laws—there's no regulation whatsoever. I definitely want something, but do I want a random four-year executive making those choices? Do I want it to be as top-down a system of control as China has implemented? I don't think so, but there's something in between no rules—which is what we currently live in—and the rules that are out there in the world.

There's this idea that somehow AI is going to do all the dishes for you so that you can relax and paint in your garden. One thing that I come back to all the time is this idea called the Jevons Paradox. 

William Jevons was a 19th-century British economist. He was trying to figure out, on behalf of the British Empire, whether Britain was running out of coal, because it was a huge problem. He was saying, you know, boy, if we run out of coal, then this empire is going to collapse, and so we have to figure this out. 

One of the things he was trying to determine was: why are we running out of coal at a time when we are using coal more efficiently than ever? A new generation of steam engines was using coal much more efficiently, and yet Britain was blowing through it at a higher rate. Ever since, the Jevons Paradox has been the name for that phenomenon: when we use a thing more efficiently, we don’t use less of it; we use more of it. 

It's true of water: the more reservoirs you build to hold drinking water, the more water we consume. And I would argue it's true of labor-saving technology. With technology like AI, there's this illusion that we're going to somehow work less because it enables us to work more efficiently; I think instead we're going to burn through people's time even faster. You're already seeing companies lay people off because they say, we just don't need this number of people. That means the people who remain are expected to do all the fired people's work. I think that's going to be true in the creative industries. I think it's going to be true in the professional world, as it's been true with water and coal and everything else.

The tradition in software-making is that you release it into the wild. The more people use it, the better it gets, because that's more and more people catching the bugs and the glitches, which you then fix as you go. Scale, in theory, solves your problems with software. The people making the AI systems all come from that world. They don't come from the hardware world, where if you go to scale with a product that causes damage or is problematic, those problems only get worse the more people you sell that thing to. 

A car that blows up, sold at grand scale, is vastly worse than selling just a couple of them. I look at this and think, these are companies experimenting in the wild. They created these systems that can generate huge amounts of content without releasing a reliable detector for that content. 

So they're flooding the system, polluting our information ecosystem with this slop, this made-up stuff, this hallucinated text, and AI-generated whatever, without giving us the tools we need to know what was written by a human and what was not. I think that's another huge problem, not only because they don't know how these things are making decisions moment to moment, but also because they have no ability to even detect their own creations out in the world. The combination of those things is a real problem that I think is explained by this software attitude of, ah, we'll fix it later. Let’s get to scale; that’s what's going to solve our problems.

THE CREATIVE PROCESS

In your domain, journalism, and my domain, which is kind of journalism but also art, I’m not all doom and gloom about AI, because I think there are wonderful things that AI enables. I just had a conversation this morning about medicine. There are advances with AI in medicine, accelerating experimentation, and great possibilities, but it also has its reverse: these systems can create novel toxins too, chemical weapons or something like that. There are all these great possibilities, and it comes down to governance. It comes down to the future that we want, and not just accepting the future handed to us by the billionaires who own these companies, wanting to profit off what they've scraped from us without leaving us any jobs afterward.

JACOB WARD

I know, right? That is the worry. It is not clear to me how this is going to happen, but the American ideal has always been this fantasy, this myth that each of us is our own individual and that we should love the idea that an individual makes it big on her or his own, just by going with our gut and pulling ourselves up by our bootstraps, right? We don't like the idea that it takes a village to raise a child. 

We don't like the idea that you need the support of everyone around you, and we really don't like the idea that you should actually pay attention to the data in making a decision about your future. We love the Han Solo kind of character who says, don’t tell me the odds; I’m going to fly through this asteroid field. In the movie, that guy always survives somehow. 

In the real world, that guy is going to get smashed to pieces every time in the asteroid field. To my mind, we as a society don't like to think about our vulnerability to manipulation, to things like addiction or the loss of agency over time. 

I don't think we look as clearly as we need to at the vulnerabilities of our brains. I think that one of the things we're going to have to shift in this country—and I think the courts are going to have to do this for us—is away from the idea that everybody is in charge of themselves and responsible for themselves, and toward the idea that there are some big companies employing very, very smart people who are learning to tweak your behavior to suit their purposes. It was true with cigarettes. It's true with alcohol. It's true with gambling. I think it's going to be increasingly true in technology. 

THE CREATIVE PROCESS

I do want to know about your reflections on education because young people are really the most at risk. They're coming up at a time when they won't be able to tell the artificial fast food from the real, the things that take time. 

So, how do we safeguard that slow learning process so that even if we have a technological apocalypse or our systems get switched off, or if bad actors come and turn off our systems, we can still rely on our own intelligence and development?

JACOB WARD

The problem we’re facing, right, is that the entry-level and just-above-entry-level jobs at big companies are going away as AI is assumed to be able to do that kind of work. What happens to a new college kid who's just come out of university? Is that kid supposed to already be qualified for that third-level job, that third promotion, where the job skills would normally have been learned along the way, the basic knowledge work that a research assistant or a clerical worker would have picked up? 

If you've got a whole generation of 21-year-olds who are suddenly unable to find that entry-level work, yet are expected to have that third job level of experience, there’s going to have to be some kind of apprenticeship notion that is going to get kids from here to there. At the very least, they could be shadowing somebody in that third-level job, learning over the course of a few years those job functions, while AI does all the entry-level work. 

We’re going to have more people, literal humans walking around needing work, than ever before, and those people are going to be of voting age. So that’s the other part: some candidate is going to come along and say, I’m going to get everybody back to work, and here’s how I’m going to do it, through this national apprenticeship program. I feel like that’s the kind of platform that, in a generation, could really make a political movement. 

I think people like Andrew Yang were sort of too early with thinking about universal basic income and these things. I think he was right, but he was too early. Now, I think once we get into the crisis of entry-level work, there’s going to be a real political opportunity for someone.

THE CREATIVE PROCESS

With the uncertain future that we've been talking about, what has been your antidote? Some people connect with nature. For me, it's nature, but also art, and just slowing down. We've been talking about the manic rate of acceleration in our societies, and then there are the frequencies we're surrounded by, even Wi-Fi, which is everywhere. I don't know what that's doing to us. We can only speculate. I know scientists who only use wired connections; they don’t use things like Wi-Fi because they say it can affect us. 

I have found personally that the arts, nature, and one-to-one conversation, not getting drawn into social media feeds where so much is happening, are a good way to reset. It’s about our health, too, our mental health. But now AI and technology have crept into our art, and I think it's affecting us.

I don't know your perspective on how it is affecting us, but that's one of the things we are addressing now. I’m doing a five-year project with the Center for European Policy Studies, and we have some museum partners, like the Stedelijk Museum in Amsterdam, doing exhibitions, podcasts, and different things. 

To examine these issues, we recently had the Future of Life Institute in Brussels talking about what kind of future we are going towards. Just because it's faster doesn’t make us healthier or better. These are things you've been writing about for a long time. I wonder, as someone who's an artist and writer—you know, taste is individual—but we can lose our taste for authentic art. With artificial intelligence, you also have artificial scraped art; it's like fast food or something. We will lose our ability to tell the difference between something made through human life experience and something created by a prompt.

JACOB WARD

I started worrying about this in 2019. There was a paper that came out, or an article in MIT Technology Review by a guy named Sean Dorrance Kelly. He made the argument that AI can't create art, that it is impossible for true creativity to lie inside AI. He goes on about the philosophical basis of how true creativity, as we understand it academically, comes from humans. I remember reading that and thinking, as well-reasoned as this argument was, this guy's missing the point because it’s not going to matter what our academic definition of creativity is. 

What’s going to matter is whether people engage with it, and millions of people every day, as the market embraces automated art in a huge and transformative fashion, are showing that they do. People have already shown that they’re very happy to go ahead and engage with something slapped together by AI in a few milliseconds. 

There are places where I look at AI-generated art and think, it’s funny and diverting. There’s an AI-generated song that gives you technical instructions on how to steal an F-16 fighter jet. “How do I steal an F-16?” and then there’s this great catchy chorus. It makes me laugh every time, and I think to myself, Sean Dorrance Kelly, sorry, buddy, the ship sailed the second it became available to people.

We’re seeing the effects of people’s conformity. We're seeing people’s personal presentation obey a certain conformity based on technology. I was just in Portugal on a trip recently, and whenever I travel abroad, I’m always excited to see what the kids are wearing. I just want to see what youth fashion is doing in that place. I went there, and kids were dressed exactly like my kids. It was the same outfits. 

I realized, oh, this is the TikTok effect. This is people watching the same fashions go around and around and around, creating this kind of culture of conformity. Some people have tried to argue in court that AI can create an original work and should be given copyright for that work. I’m curious what you think about it. Do you think that these systems, which can absolutely mimic something like music incredibly easily—that's no problem—can we consider AI the author of a thing?

THE CREATIVE PROCESS

The importance of the arts and humanities, developing these interpersonal skills, and journalism—these things that, you know, didn’t use to be done for us. What would you like young people to know, preserve, and remember?

JACOB WARD

There’s an incredible cookbook by the personal chef to Georgia O'Keeffe. This guy describes all of these different conversations that he would see O’Keeffe have with her dinner guests. One conversation she had at a table full of young artists was about the key to happiness. They discussed how to sustain happiness. 

O'Keeffe was sitting at the head of the table just quietly listening to everybody talk. Then she finally interjected and said, happiness is a very fleeting thing, and you cannot have it in your life in a sustainable way. She said, I have found that the only thing that is truly sustainable in the long term is interest. That's the only thing you can guarantee will keep going for a long period: interest. 

That's the thing I’m currently talking to my teenager about, because the desire is for dopamine; the desire is for happiness in that quick-hit way. That feeling is fleeting; that's how our brain chemistry works. It’s not designed to be anything more than a momentary reward. But the intellectual satisfaction of curiosity, the yearning, the seeking, and the learning that come with interest can really go a long way. 

It's not as immediately rewarding, but it is much more rewarding in the long term. So that's the first thing I would say: we need to be focused, not on how quickly can I get ChatGPT to summarize this book, but on reading the book so that you’re interested in it or so that you know whether you are interested in it. Find what interests you because this is a world that’s only going to try and convince you that happiness is what counts, while interest is what really counts. 

That’s one thing I think about. The other is that I had an interview with an expert in friendships, a guy named David Jay. He describes the difference between transactional relationships and the ideal relationship. In a transactional relationship—like at a professional networking event—that’s a relationship where you know what it is going to give you. You can predict the outcome of that relationship. A good relationship—the ideal relationship—is one where you don’t know what’s going to happen. The reason you go hang out with great friends is that you don’t know what kind of trouble you’re going to get into together. You don’t know what is going to happen when you go on vacation together. 

For me, that's the other thing: this world wants to corral your experiences into a predictable set of expectations because that’s how marketers get you. Instead, be weird. Keep doing weird stuff that is unpredictable because that both helps fend off a lot of these forces that are coming at you, and that’s a better kind of relationship with yourself and the people around you.

THE CREATIVE PROCESS

Georgia O'Keeffe did these amazing paintings of flowers and the desert. She was able to tap into the endless curiosity found in the natural world. For you, when you talk about the AI psychosis and our rapidly changing world, what is your connection to the beauty and wonder of the natural world? Maybe this goes back to why you named your podcast The Rip Current.

JACOB WARD

Oh, Mia, I love that you're asking me about that, and I could bore you endlessly with tales of surfing. Surfing is how I connect with that. I go into the ocean here in the Pacific. I live in California. I'm not an accomplished surfer; I think anyone who has been surfing a long time would look at me and be like, that guy is sort of 50/50 on his reliability. 

But it's such a tremendous test for me of both my body and courage. It’s a very scary thing to do. Sharks are my greatest fear in the world, and yet I go out and recreate among them. One of the great pleasures of surfing, beyond the physical act, is exhausting yourself paddling out to get to the break. You have to fight your way through the whitewater to do that. 

Once you're out there and beyond where the waves are breaking, inevitably, you end up in deep water. You’re just beyond where the underwater land causes the wave to trip and break. So you're in this suddenly deep water, and typically you’re only about 50 yards offshore. You’re well within sight of the land, but it is a totally different world, and a group of pelicans will come past you so close that you can see their weird little eyes looking at you. Then you can smell them once they’ve passed by, and they smell horrible. 

I’ve had pods of dolphins go by. I’ve never seen a shark, thank God. You get to be in a totally alien environment for a moment that you really have no business being in, with just enough technology—in my case, neoprene wetsuits and a surfboard—to make you functional out there. That, to me, is wonderful; it literally turns off all of my anxious machinery and then turns it back on when I come back to shore.

For the full conversation, listen to the episode.

This interview was conducted by Mia Funk with the participation of collaborating universities and students. Associate Interviews Producer on this episode was Sophie Garnier. Guest contributing editor was Eliza Disbrow. The Creative Process & One Planet Podcast is produced by Mia Funk.

Mia Funk is an artist, interviewer, and founder of The Creative Process & One Planet Podcast (Conversations about Climate Change & Environmental Solutions).
Listen on Apple, Spotify, or wherever you get your podcasts.