Nick Bostrom is a Swedish-born philosopher with a background in theoretical physics, computational neuroscience, logic, and artificial intelligence, as well as philosophy. He is the most-cited professional philosopher in the world under the age of 50.

He is a Professor at Oxford University, where he is the founding director of the Future of Humanity Institute. He is the author of some 200 publications, including Anthropic Bias, Global Catastrophic Risks, Human Enhancement, and Superintelligence: Paths, Dangers, Strategies, a New York Times bestseller that helped spark a global conversation about the future of AI. He has also published a series of influential papers, including ones that introduced the simulation argument and the concept of existential risk.

Bostrom’s academic work has been translated into more than 30 languages. He is a repeat main-stage TED speaker, has twice been named to Foreign Policy’s Top 100 Global Thinkers list, and was included in Prospect’s World Thinkers list as the youngest person in the top 15. As a graduate student he dabbled in stand-up comedy on the London circuit, but he has since reconnected with the heavy gloom of his Swedish roots.

THE CREATIVE PROCESS - ONE PLANET PODCAST

You and your colleagues have said that you expect the singularity in roughly 2040 or 2045. What kind of world do you think we will have then, if we were to time travel there? Or maybe there are a few possible futures you see for 2040?

NICK BOSTROM

Rather than having some particular date in mind, we really need to think in terms of probability distributions smeared out over a large interval. I do think, though, that there is a real possibility that within the lifetime of many people who are here today, we will see the arrival of transformative AI: machine intelligence systems that not only can automate specific tasks but can replicate the full generality of human thinking. So that everything we humans can do with our brains, machines will be able to do, and in fact do faster and more efficiently. What the consequences of that are is very much an open question and, I think, depends in part on the extent to which we manage to get our act together before these developments. That means, on the one hand, working out the technical issues in AI alignment – figuring out exactly the methods by which you could ensure that such very powerful cognitive engines will be aligned with our values, will actually do what we intend for them to do, as opposed to something else. And then, of course, there are also the political challenges of ensuring that such a powerful technology will be used for positive ends. So depending on how well we perform on those two challenges, the outcome, I think, could be extremely good or extremely bad. And I think all of those possibilities are still in the cards.

*

There are many problems, challenges, and issues in the world. In founding the Future of Humanity Institute, we were trying to focus on the small subset that really could fundamentally transform the human condition – not just repaint it in a different color or add a little decoration on the surface, but things that could threaten the very survival of Earth-originating intelligent life, or that could in some profound way transform human nature. So that was one filter – and that of course excludes most questions and challenges that we're facing – and then within that subset, we were trying to focus on ones that were neglected by academia.

*

On the one hand, if AI actually worked out in the ideal way, then it could be an extremely powerful tool for developing solutions to climate change and many other environmental problems that we have – for example, by developing more efficient clean energy technologies. There are efforts underway now to try to get fusion reactors to work using AI tools to guide the containment of the plasma. With the recent work on AlphaFold, DeepMind, which is a subsidiary of Alphabet, is developing AI tools that can be used for molecular modeling, and you could imagine various uses of that for developing better solar panels or other kinds of remedial technologies to clean up or reduce pollution. So certainly the potential benefits of AI for the environment are manifold and will increase over time.

*

In the other direction, you pointed to maybe the critical issue here, which is the governance aspect. I think one of the core sources of many of the greatest threats to human civilization on the planet is the difficulty we have in effectively tackling these global governance challenges. Global warming, I think, at its core is really a problem of the global commons: we all share the same atmosphere and, ultimately, the same global climate. The environment can absorb a certain amount of carbon dioxide without damage, but if we put out too much, then we together face the negative consequences.

*

If all jobs could be done more cheaply and better by AI, then what would we do? It would be a world without work, and I think that initially sounds kind of frightening. How would we earn an income? What would we do all day long? But I think it's also a big opportunity to rethink what it means to be human and what gives meaning to our lives. Since the rise of our species we have been forced to work; we have had to earn our bread by the sweat of our brows.

We have kind of defined our identity and dignity around work. A lot of people take pride in being a breadwinner, in making a contribution to society by putting in effort and achieving some useful aims. But in this hypothetical future where that's not needed anymore, we would have to find some other basis for our human worth – not what we can do to produce instrumentally useful outcomes, but maybe rather what we can be and experience, adding value to the world by actually living happy and fulfilling lives. And so leisure culture, cultivating enjoyment of life, all the good things: happy conversation, appreciation for art, for natural beauty.

All of these things that are now seen as kind of gratuitous extras, little frills around existence, maybe we would have to build into the center. That would have profound consequences for how we educate people, the kinds of culture we encourage, and the habits and characters we celebrate. That will require a big transition, but I think ultimately it is also an enormous opportunity to make the human experience much better than it currently is.

*

And so I think while it might be interesting as a thought experiment to consider what the world would be like if it were roughly as it is now, except we didn't have to work – and you can then consider how we might play around with the education system and public culture, and so forth – what we really face is an even more profound change into a condition where human nature becomes plastic in the sense of malleable, and we then have to think more from the ground up: What is it that ultimately brings value to the world? If you could be literally any kind of being you chose to be, what kind of being would you want to be? What constraints, limitations, and flaws would you want to retain because they are part of what makes you, you? And what aspects would you want to improve? If you have a bad knee, you probably would want to fix the knee. If you're nearsighted and you could just snap your fingers and have perfect eyesight, that seems pretty attractive. But if you keep going in that direction, eventually it's not clear that you're human anymore. You become some sort of idealized ethereal being, and maybe that's a desirable ultimate destiny for humanity, but I'm not sure we would want to rush there immediately. Maybe we would want to take a slower path to that destination.

This interview was conducted by Mia Funk and Sydney Field with the participation of collaborating universities and students. Associate Interviews Producer on this podcast was Sydney Field. Digital Media Coordinators are Jacob A. Preisler and Megan Hegenbarth.

Mia Funk is an artist, interviewer and founder of The Creative Process & One Planet Podcast (Conversations about Climate Change & Environmental Solutions).