Algorithms are deciding whether you are eligible for a loan, a job, an apartment or insurance. They determine what you see online, who reads your social media posts and who connects with you on dating apps. They may even decide whether you get arrested or go to jail. Your very life hangs in the balance of prophecies.

If you are unlucky, a prediction will be what kills you. Forecasts can determine your place on a waiting list for an organ transplant or whether you get medical care in an emergency. Policymaking hinges on predictions. War and peace and whether someone lives or dies are decided based on forecasts about the strength of an adversary, the impact of a mission or the identity of a person. And yet no one has asked your permission to make those guesses. No governmental agency is supervising them. No one is informing you of the prophecies that shape your fate. Prophecies are the grounds on which fights over the future take place.

In this episode of the Speaking Out of Place podcast, Professor David Palumbo-Liu talks with Carissa Véliz, an associate professor at the University of Oxford, about her new book, Prophecy: Prediction, Power, and the Fight for the Future—from Ancient Oracles to AI. Linking this work to her previous book, Privacy is Power: Why and How You Should Take Back Control of Your Data, Véliz writes: “Surveillance and prediction are digital technology’s original sins.”

In our wide-ranging discussion, we talk about how massive and intrusive invasions of privacy at all levels of society, together with false claims to be able to predict the future, erode democracy, corrode ethics, and undermine people’s ability to think for themselves. Instead, we are conditioned to trust an unregulated band of “effective altruists” who claim to know better than we do what kinds of lives we should prefer and what choices we should make. Véliz argues instead that we should embrace the uncertain in order to build resilience, to prepare for contingency without being determined by what we cannot see, and to foster curiosity and imagination.

Speaking Out of Place is produced in collaboration with The Creative Process and is made with support from Stanford University.

Carissa Véliz is an Associate Professor in Philosophy at the Institute for Ethics in AI, and a Fellow at Hertford College at the University of Oxford. She is the recipient of the 2021 Herbert A. Simon Award for Outstanding Research in Computing and Philosophy. She is a member of UNESCO’s Women 4 Ethical AI. She advises companies and policymakers around the world on privacy and the ethics of AI. She is a board member of the Proton Foundation, along with Sir Tim Berners-Lee and Proton’s CEO Andy Yen. She is the author of the highly acclaimed Privacy Is Power (an Economist book of the year, 2020) and the editor of the Oxford Handbook of Digital Ethics. Her new book Prophecy was described as “The most important book you will read for years” by Roger McNamee, the tech investor and best-selling author.

DAVID PALUMBO-LIU

Thank you so much for being on the show. It is such an amazing book and there is so much to talk about. I am not sure we can fit it into the time that you have generously allotted us.

CARISSA VÉLIZ

Thank you, and thank you for having me. And it is so special to have this conversation because you pour your soul into a book you spend years on.

And I really do think of myself as mostly a writer. And you never know how it is going to come across. And it is very special to be able to be here now.

DAVID PALUMBO-LIU

So Carissa, please read for us. This is something I always love to have my guests do. Because it is important to hear the voice behind the words. Do you have a passage you would like to share with us?

CARISSA VÉLIZ

I would love to share the start of the book.

What if I told you that a prophecy led you to this book? It is meant for you. It is in your cards to read it as much as it was in mine to write it. Books have ways of choosing their readers. Maybe an algorithmic prediction identified you as someone who will enjoy this book and showed you an advertisement for it, or perhaps your local bookseller or librarian knows you well enough to foresee that these pages are for you. Or you might be one of the lucky ones who has a friend with a talent for suggesting just the right title.

Regardless of how this book found you, a series of forecasts lies behind its journey. The predictions of the author, agent, editors, publicists, marketers, algorithms and podcasters have weighed in on its life. Predictions are what connected you and me. Our minds meet through prophecy.

Your life story is not a lot different from that of this book. Countless predictions have directed your path, opening and closing doors for you, pushing you to some places while blocking the way to others, leading you to where you are now. As you read these sentences, artificial intelligence is making forecasts about you.

Algorithms are deciding whether you are eligible for a loan, a job, an apartment or insurance. They determine what you see online, who reads your social media posts and who connects with you on dating apps. They may even decide whether you get arrested or go to jail. Your very life hangs in the balance of prophecies.

If you are unlucky, a prediction will be what kills you. Forecasts can determine your place on a waiting list for an organ transplant or whether you get medical care in an emergency. Policymaking hinges on predictions.

War and peace and whether someone lives or dies are decided based on forecasts about the strength of an adversary, the impact of a mission or the identity of a person. And yet no one has asked your permission to make those guesses. No governmental agency is supervising them. No one is informing you of the prophecies that shape your fate.

Prophecies are the grounds on which fights over the future take place. Our expectations bend the social world toward our predictions. When someone forecasts that the world will be a certain way, they are commanding that others obey their wishes and bring that world about.

Even though we have been using predictions for thousands of years to make some of the most important decisions of our lives, we have dedicated remarkably little thought to the deeper questions about prophecy. What exactly are predictions? What are their effects? Who has the authority to make them and when is it appropriate to use them?

DAVID PALUMBO-LIU

Perfect. Thank you. One of the things I like about the book in particular is that you write so well and you come up with these lines that are seared in my memory, and there is one that comments on the connection between your previous book and this one.

And the line that I am going to read to you is, "Surveillance and prediction are digital technology's original sins." Like that should be the t-shirt. So could you unpack that for us? I think that is a wonderful bridge between these two books.

CARISSA VÉLIZ

So one of the privileges of a job like ours is that you get to spend a lot of time thinking about what we might be missing as a society, what we might not be seeing.

Even though AI is very much front and center in the conversation right now, one of the things that has worried me is how the challenges of AI or its most ethically problematic aspects get discussed as if they were all on the same footing—bias and the future of jobs and existential risk. And it seems to me like that is missing a causal story, a more insightful diagnosis of how did we get here, and more importantly, how do we get out of here?

And if we understand or analyze the problems in a more causal way, of what came first and what causes what, you realize that there are two main problems from which all the other problems stem. The first is mass surveillance, which not only funds much of the internet but is also how AI works.

The kind of AI that we are using, which is machine learning, works on troves of data, in fact on all the data held on the internet, and much of that data is personal data. And even when data is not personal, personal data can be inferred from it.

And then the second problem is how this whole mass surveillance machine is built in the service of prediction. It would not be worth anything if it were not used for prediction. And it is used for prediction based not on truth or science or well-being or democracy, but on profit.

And profit does not always align well with those other values that I hope we think are more important than profit alone. One symptom that we do think they are more important is that we do not allow certain kinds of profit models. For example, theft. It is very profitable, but not okay.

DAVID PALUMBO-LIU

Yeah. It is interesting because I did not know that you thought of yourself as a writer, which makes total sense. And so I see now the kind of construction of the book is different, because when we get to the end, you say, now I am going to tell you, this is actually all about ethics.

This is sort of a hidden core. And I thought I could see that. I could see the plot developing in a way. And so I also had a hidden core that I saw, which you have already mentioned. So it is in your mind too, and that is democracy. I think these two things go so well together.

So let me ask you to go first. Talk about how ethics really is at the core of this book, in whatever way you want to discuss it. Because it is attached to it in many ways, but whatever way you want to pull out as being most important to you right now.

CARISSA VÉLIZ

So in a way, ethics is at the very base. And when I applied for the grant that allowed me to write this book, the grant was called the Ethics of Prediction.

But ethics has a bad reputation, even more so these days; I think it has always had a pretty bad reputation. And it is interesting and puzzling, because people seem to think of ethics as something boring, even though I have not seen any discussions more heated than ones about moral issues.

People also seem to think that ethics is just about opinion and that all opinions are equally valid, even though we clearly do not think that in the law, or in the way we behave, or in the way society functions. Society would not be able to function were it not for ethics.

And another idea that is very important for me is that I am not sure we realize that the alternative to ethical AI is not neutral AI; it is unethical AI. Ethics happens by design, and if we do not think about it, it is going to come out wrong.

I also care a lot about ethics because it is a foundation for democracy. You could imagine a country with perfect laws, whatever that might look like for you, but nobody follows the law unless it is for ethical reasons.

At base, when it comes down to it, everything ends up as a conversation about ethics, about what is a good life, what do we owe other people and how should we live? And so it is a very philosophical question that is always going to be relevant, and it is always going to come back to us time and again.

But I wanted to show all these other more practical applications and more immediate concerns that have to do with equality of opportunity, with democracy, with freedom, with how we are treated, and then only at the end reveal actually what we are talking about is ethics, so as to not put off people who might not be particularly positively inclined towards ethics.

DAVID PALUMBO-LIU

No, you are right. It is interesting that, exactly as you say, ethics makes people's eyes glaze over. This is church or something like that. But it is so attached to things like, as you said, democracy.

Because if you do not have a sense of ethics, then you will accept all sorts of adulterated notions of democracy. You will let democracy slide back until it is like the United States today, right? You have just a phantom version of democracy and people play along with it for whatever reason.

And also, ethics importantly should be personal. It should be something you take into your life, and when it becomes only abstract, you do not see it, as you say, in the everyday operations of the decisions you make.

So could you tell us a little bit about how the idea of artificial intelligence helps us accept a version of the world in which our ethics are basically given over to other people? We trust them because of their brilliance mostly reflected in their wealth. Again, the whole way of interpreting signs is weird here. How does this erode our sense of who we are?

CARISSA VÉLIZ

I love the way you are asking that because it shows that talking about ethics is not optional. Because even when you defer to a tech executive because you respect how wealthy they are, that is a question about values, what you are valuing and what your ethics are.

So it is not that if we do not want to talk about ethics, it goes away. It is just implicit, and what I want is to say, no, we should have the conversation, because it is happening anyway, and we might as well make it explicit and think about it carefully.

I also like this connection between the political and the personal, and one of the things I have learned from philosophers who have reflected about democracy after the Second World War is the connection between personal values and democracy.

About how citizens are the ones who hold up democracy, who hold the line, citizens as students, as professors, as journalists, as business people, as whoever you are. You have a role holding the line.

Again, it is about values and what kind of society we want to build, and I am suggesting that it is not a coincidence that democracy is struggling so badly at the same time as the rise of digital tech.

That did not happen by accident. It is related to digital tech depending on surveillance, increasing control over the population and having a lot of wealth and a lot of power. And many of these individuals have more wealth than all the agencies trying to regulate them together.

DAVID PALUMBO-LIU

Yeah. There is a connection. Yeah.

CARISSA VÉLIZ

Yeah. And about the kinds of tricks, the sleights of hand, that are used to hide the power plays happening right in front of our eyes.

DAVID PALUMBO-LIU

Could you give us an example of that?

CARISSA VÉLIZ

Yeah, so more and more when I read the newspaper, I realize that a significant proportion of what I am reading is in future tense, and it is not about what is happening today. It is a picture of what the world might look like tomorrow.

And often it is citing very wealthy individuals who are selling us this idea about the future, and often it is something like the following: In two years' time, AI will be able to do everything.

Every problem that we have with AI today is going to go away, and AI is going to get better at everything, and we are going to use AI for absolutely everything, whether it is dating or jobs or chores. There are going to be robots everywhere. Every year, self-driving cars are going to arrive fully next year, and so on and so forth.

And what I want people to realize is that predictions are not facts. They are not descriptions of the world. More often than not, they are power plays in disguise.

So what a tech executive is saying when he is saying, tomorrow we are going to use AI for everything, is I own a tech company. I make AI, I make a lot of money from AI. I want to instill in you the fear of missing out. So go out there in the world and buy AI and fulfill my vision of the future.

And what I would like citizens to do instead is to say, ah, that is a prediction. It is not that this guy is running numbers in his head. It is not that he is actually setting out a hypothesis or a quest for knowledge.

This is the future he wants to build because it is in his financial interest. Is it the future that I want to inhabit? And if it is not, to not take that prediction as a fact but rather as an invitation for defiance.

DAVID PALUMBO-LIU

Okay, so you anticipated two words that I was going to bring up that you feature in your book. We will get to that in half a second, but the two words were bullshit and infinity, and I think you just mentioned both.

But before we get there, let us talk about how predictions are not facts. And maybe you could unpack that a bit. And also in your unpacking, tell our listeners what is the closest we can get to prediction.

In other words, where does probability fit in what things exist in the world that seem to you to be reasonable approximations of prediction without sliding into that fantasy and closer to facts? So get there first and then we will talk about bullshit and infinity. Okay.

CARISSA VÉLIZ

So one of the joys of writing this book is that first it is a post-tenure book, and second...

DAVID PALUMBO-LIU

They are the best.

CARISSA VÉLIZ

Yeah. And I think there comes an age in life in which you just think, what am I doing here? And I am going to say what I think we should be really saying, I am going to write the book I really want to write without any concern for anything else.

And so it is quite a quirky book that dabbles in probability and philosophy. So let us get a little bit into the philosophy and then a little bit into the probability.

The philosophy is that I take J.L. Austin's theory of speech acts to analyze what exactly a prediction is. A prediction sounds like a description of the world: if I tell you that tomorrow the world will look like X, it sounds like a description. What I argue is that it is rather a speech act.

So Austin had this idea that some sentences do not describe the world but rather do something. His book was titled How to Do Things with Words. He argues that when you order your child to clean up their room, that is doing something: you are making them do something. When a priest says, I pronounce you husband and wife, he is marrying a couple.

And I argue that when somebody makes a prediction, often what it actually is, is an order. So it is closer to an order than a description. So when a tech executive says, we should be using AI for this or that task, what he is saying is use AI for this.

And when we give credence to that prediction, what we are doing in fact is obeying. We are not understanding a fact and reacting in accordance, but we are obeying an order and making that prediction a self-fulfilling prophecy.

But you are right. There are times when it does not sound like that is what is happening. When I look at my weather app and it says there is a 60% chance of rain, I will take my umbrella. So what is happening there? I think a couple of things are happening.

One is that it is a prediction about the world. It is not about people, and in particular, it is not about an individual. It is not about a social reality. So when I say it is going to rain, my prediction has no effect on the clouds. The clouds could not care less about what I think, and they are not going to react differently.

But the social world does not work that way. If I tell my student, I think you are terrible and you are going to fail that exam, I bet I am going to have an effect on them. Same as if I tell them, I think you are brilliant and you are going to do well.

And in the same way, when a financial agency says that a country is going to go into bankruptcy, and suddenly investors are fleeing, it has an effect on that social world. And so that is the first thing that is a red flag about whether a prediction is more of a power play than a description of the world.

And the second thing is that it is probabilistic. So if I say there is a 60% chance of rain, and then I keep a record of every time my app says there is a 60% chance of rain, and 60% of those times it actually rains, that gives me a reason to trust that prediction.

And in a way, the prediction being probabilistic makes it not a prediction in the sense that it does not tell you what is going to happen for sure; it tells you what is likely to happen. Now there is a caveat, and it is that in the age of algorithms and AI, we often make probabilistic predictions about people.

And the problem with that is not only that they have an effect on people but that we treat them as binary. For example, I might have enough data that suggests that there is a 70% chance that Jack smokes, and Jack might not be a smoker, but if that data gets sold in the marketplace, Jack gets treated as a smoker.

So even though the probability is 70%, because these systems are not built for justice or for truth, they are built for profit, it is more profitable to assume that he is a smoker and adjust his insurance fee or not give him a job or whatever the consequence might be.
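The calibration check and the binarization problem Véliz describes can be sketched in a few lines. This is a minimal illustration with invented records; none of the numbers or names come from the book.

```python
# Invented forecast records: (stated probability of rain, whether it rained).
records = [
    (0.6, True), (0.6, False), (0.6, True), (0.6, True), (0.6, False),
    (0.6, True), (0.6, False), (0.6, True), (0.6, True), (0.6, False),
]

# Calibration check: among days forecast at 60%, how often did it rain?
sixty = [rained for prob, rained in records if prob == 0.6]
observed = sum(sixty) / len(sixty)
print(f"forecast 60%, observed {observed:.0%}")  # 6 of 10 rained, so 60%

# The danger Véliz points to: a probability about a person gets
# rounded up to a binary fact when the data is sold on.
p_smoker = 0.7                       # "70% chance that Jack smokes"
treated_as_smoker = p_smoker >= 0.5  # the marketplace treats him as a smoker
print(treated_as_smoker)             # True, even if Jack has never smoked
```

The weather forecaster can be held to account because its track record is checkable against the clouds; the inference about Jack acts on Jack himself the moment it is treated as true.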

DAVID PALUMBO-LIU

That is such a beautiful explanation because you hit all of the marks that I was wondering about and just a few things. One is that in terms of running the numbers as it were, you point this out so well in your book. It is people who make the decisions to run the numbers, right? In other words, it is not this machine doing it for itself.

So if a company wants X outcome and they do not get it from the available data, they could just keep running the data forever and ever. And even if you are skeptical about it, they say, just wait, just give us time because we know we are right. And then it falls back on this notion of why do we trust these particular people?

And I was thinking, as I mentioned before, because they are wealthy, there is an assumption that they are successful, so obviously they are smart. And with the example you gave from speech act theory: living in California, I think about Sir Francis Drake sailing up the coast of California and saying, I claim this for my queen.

So again, colonialism, power. This is something else that plays a big part in your book. It is people with power that we give undue credence to because they happen to be wealthy or powerful, but we do not understand the paths they took to get there, which are, in my estimation and probably yours, largely unethical, right? They step over people. Yeah. So talk a little bit about bullshit and infinity, because these things go hand in glove in terms of AI.

CARISSA VÉLIZ

Yes. So bullshit is a philosophical term, I promise.

DAVID PALUMBO-LIU

Yes, of course. Nothing less. You are an Oxonian, come on.

CARISSA VÉLIZ

Yeah, exactly. We are very proper here. And Harry Frankfurt came up with this concept to convey the idea that there is a way of using language that is different from telling the truth and different from lying.

And when you bullshit someone you do not care about whether it is true or false, you are just saying something that will have an effect either to confuse or to get you what you want. But truth is not a consideration.

And Harry Frankfurt says that bullshit is much more dangerous for democracy than lies, because a liar, first of all, needs to be clear on what the truth is so they can lie. And they also care about the truth. So the truth-teller and the liar are essentially on opposite sides of the court, but they are playing the same game.

The bullshitter does not care about the rules of the game, and that makes it very hard to grapple with. Sometimes they will tell the truth, but it is just by chance.

It is not that they care about the truth, and I argue that AI is the ultimate bullshitter because it does not care about the truth. It is not an embodied being with values or anything like that. And it was not designed to track truth. It was designed to satisfy the desires of human beings.

So the way AI was trained, like a large language model, was that it had some input and it created an output, and there are different ways to interpret exactly what this output is. But one way to interpret it is that it analyzed all the data it had, which included conversations in forums, all kinds of internet texts, books, and it outputs what a human being would plausibly, statistically, say in that case.

And then it presents options to a human being who is training it, and the human being chooses the answer they prefer. One of the problems is that human beings often do not prefer the truth. We prefer the good story, the answer that rhymes, the answer that is intuitively true but maybe false.

The answer that is simple, the answer that is funny, and that is the way to build the perfect bullshitter because they have so much data on how to satisfy this human desire for hearing something pleasant to the ear or that attracts our attention. Often it is something that outrages us and that keeps us engaged.

There is this disconnection between large language models and the empirical reality. So when I report that I really like coconut water, what I am thinking about is the experience of having coconut water and how it makes me feel.

When a large language model says, oh yeah, coconut water is fantastic, of course it has no idea what it is saying. It has never tasted coconut water. It cannot taste coconut water. It is reporting on the text of others to describe whether it is sweet or not.

It is designed to be an impersonator because it very much sounds like a human being and hijacks our emotional response. And so now we are getting these examples of chatbot delusion, or sometimes some people call it chatbot psychosis, because part of what satisfies our desires is to be validated.

So chatbots have a tendency to tell you that you are brilliant, that everything you say is incredibly original and smart, and we lose touch with reality easily. And part of the importance of talking with other people is that they confront us. And it can be annoying and it can be irritating, and it can be frustrating not to agree with someone, but it helps you gain perspective.

DAVID PALUMBO-LIU

Yeah. Now I was thinking that when you talk about narrative and you talked about being a writer in the book, you often mention that bullshit gets packaged in these very plausible narratives, but you raise the really important question of how we become conditioned to think of certain narratives as being natural.

And then also when you talk about the absence of an interlocutor, I am thinking about how that is what universities are for. That is what public spaces are for. But when we begin to distrust people and trust machines because they are objective, that is when we get into deep shit. Talk about infinity. You talked about bullshit. Now put it together with infinity.

CARISSA VÉLIZ

So the idea of including infinity in the book comes from effective altruists wanting to use it in a certain way. So effective altruists are a group of philosophers who, I am sorry to say, started in Oxford.

DAVID PALUMBO-LIU

But we have our American version too. So come on, give us Yanks a chance.

CARISSA VÉLIZ

That is true, that is true. Thank you. We can share the burden. The main idea is one that is very attractive to really anyone, but especially young people, of saying we should be altruistic and we should do so effectively, and that sounds like a great idea.

Who would not be on board with that? But then the trouble starts with some of the implications and some of the practices, and they come from a long tradition of utilitarianism, which is an ethical theory that roughly argues that when you want to do the right thing, what you should do is maximize utility.

Utility sometimes gets cashed out as happiness or well-being or whatever unit of utility you decide is the right one. When you are confronted with a dilemma, you calculate the consequences of doing A versus not doing A, or doing B, and then whatever maximizes utility is the thing that you should do.

A whole host of questions arise after this first step, one of which is: yes, of course, but if you do A, that is going to cause B and C and D and E, so when do you stop the calculation? Because it is infinite. One of the tools in the effective altruist's toolkit is to stop very far into the future, potentially infinitely far into the future, and one of the implications of this is a view called longtermism. The view is plausible at first.

And the plausible presentation is we are blindsided by thinking about the short term, when really we should think more long term. And I think this is an important point and sometimes it is very valid, but the way effective altruists are thinking about the future is thousands of years down the line.

And so their argument is, look, there are however many billion people right now on the planet, but if you take into account all the future of humanity, that is trillions. And so those trillions clearly outweigh billions. And so we should favor their interests over our own.

And then there are all kinds of very questionable assumptions that they make, something like, people are going to be able to upload their brains into the cloud and all kinds of things. And so they end up defending very problematic views. For example, we should minimize the risk of existential threats.

Which is the idea that AI could potentially end humanity or damage it to the point at which very few human beings remain. Because even if you just minimize the risk by a billionth of a percentage, because there are trillions of people on the line, that will have more of an effect than if you were to save all of humanity right now or improve all of humanity's present.

And of course, there are so many assumptions in that. One of the arguments of the book is that we suck at prediction, and not only because we are bad at it, although we are bad at it, but also because the future is unpredictable and that is part of the most positive thing that life has to offer.

DAVID PALUMBO-LIU

Yeah, exactly.

CARISSA VÉLIZ

That we can influence the future. That we can change it. And when you assume these bold things like how many people are going to live and what is it going to be like, and whether they are going to upload their brains, essentially it can justify anything. Because in the vast ocean of infinity, any act is meaningless.

So you could say, okay, I am going to murder someone, but you cannot add bad to infinity. So in history, infinite bad things are going to happen. So if I murder one person, it is just a blip in the ocean of infinity. And in the same way, you cannot possibly improve the world because you cannot do better than infinite good.

And in infinity there is infinite good and infinite bad. So in a way, introducing infinity into the picture makes everything meaningless on the one hand, and on the other it lets you defend anything, literally anything. And so it is a sleight of hand that is extremely dangerous, because it does not show what the implications are, and it can sound very plausible at first glance.

DAVID PALUMBO-LIU

One quick thing and another not-quick thing. The quick thing is that when you were speaking, I was thinking about something that my colleague at Stanford, Condoleezza Rice, said. She said, it is a war crime now, but who knows what the future will bring.

CARISSA VÉLIZ

Exactly.

DAVID PALUMBO-LIU

It is...

CARISSA VÉLIZ

Having a vision that is not human.

DAVID PALUMBO-LIU

Exactly.

CARISSA VÉLIZ

And ethics is about being a human being and a good human being.

DAVID PALUMBO-LIU

Yeah. The other thing is, when you mention the existential threat of AI, this idea that we have to fix it: it is so self-centered.

It is strange to say this about machines, but that AI environment is, how should I put it, so self-referential, right? It only thinks of itself, and it can only point to bettering itself rather than any other thing in the world that it could better. Because the only instrument that will allow us to make the world better is AI itself.

So you have to improve us, or else we might kill you all. And as you point out in the book, what about world hunger? What about peace? So how does this fixation divert us? Yet again, Sam Altman is in the headline of the Stanford Daily; he is omnipresent, if not in reality, then in people's minds. What kinds of things are we diverted from thinking about when we buy into the formula?

CARISSA VÉLIZ

One of the most important things is how democracy works and how we keep it alive and healthy. And if we lose democracy, we are perfectly capable of killing each other without the help of AI. Exactly.

AI can enhance our power and that is concerning, but we are very far away from a moment in which we can actually blame AI for it. I am much more scared of other human beings, and we often lose sight of the mechanisms that are going on here. What is going on with AI reminds me of some of the mechanisms you can see in an abusive relationship, in which somebody abuses someone and then consoles them for it, and then abuses them again, and then consoles them again.

So often we get tech companies, AI companies more precisely, bringing into the picture huge problems for democracy, for well-being, for young people, and then offering AI as a solution to the very problems they are creating.

DAVID PALUMBO-LIU

Yeah. Yeah.

CARISSA VÉLIZ

So we lose perspective of how this is our shit and we should be running it, because if we do not run it, they are going to run it and look how that is going.

DAVID PALUMBO-LIU

Yeah, and I was thinking about the many stories you tell about how people surrender to this: not only are they not thinking for themselves, they are not acting for themselves. Their behaviors change, not only the way their minds work but the way they act in the world.

You talk about curiosity in a really interesting way, and to me it is very much attached to the idea of learning. Right? I love that Ted Lasso episode, by the way. It is one of my favorites, and listeners can google that. But what else is it killing besides our curiosity?

And one thing I was going to add as an illustration about doing things: my prior guest on the show was Helen Whybrow. She wrote a book called The Salt Stones, and it is all about being a shepherd. It is a beautiful story because her mother and father were farmers, and then they moved to the US from the UK, and now she is a farmer raising sheep, among other things.

She is also a writer. But she was talking about a moment in her mother's life when she was developing dementia and was no longer able to put things together, but she was still able to do tasks on the farm. She had built doing things into her daily way of being in the world.

So what happens when we let machines take over our doing, and not just our mental but also our physical work, for example?

CARISSA VÉLIZ

A lot of things happen. One thing that happens is that we lose skills, so we forget how to run the place because we rely on AI. And AI is not a good thing to rely on, first because it is designed by companies that have proven themselves to be quite untrustworthy, and second because it is a very glitchy technology. If you compare an electronic book to a paper book, the paper book is a lot more reliable. You know what its failure modes are, so do not put it next to the fire. But other than that, you do not need to charge it, it cannot be hacked, it does not change its content and it does not need electricity. It is very robust: you can throw it out the window and it is going to survive. You might hurt someone, but the book will be fine.

And AI is very brittle, just like most digital technology. There are many reasons for that, but one of them is that we have thousands of years of experience with the analog. We evolved with the analog, so we are very good at manipulating it, at understanding it, at getting nourished by it. And when we lose that contact, we forget important skills. One example is how pilots today are much less practiced in emergency situations because they rely so much on autopilot, so when the automation fails, we are in much less experienced hands to deal with an emergency. A few months ago there was a blackout in Spain, and it made evident how important it is to keep investing in the analog world. The only people who had dinner at a restaurant that night were the ones carrying cash. We forget that connection to the world of things. So there are skills about how to run the world and how to manipulate the world.

There are thinking skills because one of the things we are externalizing is even our thoughts, and that is dangerous because writing is not only about the product. It is a way of thinking, a way of thinking better and of being able to develop more complex thoughts that you cannot hold in your head. I worry that young people are not having that opportunity to develop critical thinking skills that will allow them to deal with very complex problems. And when we externalize our thinking, we are losing so much of what we value in universities and in democracy because democracy is partly about how to run things and how to think about things.

So one of the funniest but also most depressing stories I have read in the past few years is how Peter Kyle, the UK minister responsible for digital affairs, turned out to be using ChatGPT to ask it about policies regarding AI and privacy. And it will not surprise you that ChatGPT told him that the GDPR, the European privacy regulation, was very cumbersome and a barrier to innovation, and all that kind of narrative. So one way to think about it is that democracy is a conversation, and when we externalize that language to a chatbot, we are essentially standing up from the table of democracy and losing our place.

DAVID PALUMBO-LIU

Wow. I teach a composition course at Stanford. I have taught it for 25 years because writing is important in exactly the way you have said. I used to have this exercise where students from the previous year's class would come in to visit the current class, and I would ask, what have you learned in the year? How is your writing? And it got to the point where every single person who came back, when asked what they had learned from Professor Palumbo-Liu's class, said, sometimes with an eyeball roll, that writing is thinking and thinking is writing.

And that is really my pitch, right? It is not just the skill of writing; in repetition, in doing rough drafts and revisions, you are rethinking, you are polishing your thoughts, you are talking with yourself. I also build in a lot of teamwork, getting feedback that way. So it is a way of understanding who you are and how you can best present your ideas. And then I also say I teach this course because it is empowerment. It is a way for you to be in the world and make your arguments, and to make yourself, whatever version of yourself happens to exist at that moment, present and formidable, as you said, though not present in an antagonistic way.

CARISSA VÉLIZ

It is like strengthening a muscle. Exactly. And it is interesting how we value going to the gym or going for a run, and we never complain about it being inconvenient. Nobody ever says, yeah, but exercising is so inconvenient. Of course it is inconvenient; that is beside the point. Yet in other areas of life we have come to this cultural agreement that convenience is so important that it trumps almost everything else, forgetting that anything that matters in life is inconvenient. It is very inconvenient having friends. They always have problems.

DAVID PALUMBO-LIU
Yeah.
CARISSA VÉLIZ

They hold different opinions from yours. They, I do not know, partner up with the wrong person. It is very inconvenient to have a family. It is very inconvenient to go to vote, to exercise, to eat well, to do a PhD. So I think we should revalue, I do not know quite how to put it, difficulty, overcoming difficulty, and take pleasure in gaining mastery, because it is a form of...

DAVID PALUMBO-LIU
Empowerment.
CARISSA VÉLIZ
Exactly.
DAVID PALUMBO-LIU

Exactly. And it helps you feel that you have the capacity to be resilient when you are facing the unexpected and you can absorb it or you can deal with it, or you can learn how to deal with it through the help of friends and stuff like that, because life is always going to be challenging. Life would not be life without a challenge, and life would be a horrible life without a challenge. What kind of world do people want? All glassy, smooth and wonderful, but you would not appreciate the sun if you did not have the rain that you and I have today. I am assuming you have it in Oxford, and it is not like we are inventing this.

Philosophers have said this constantly. But again, this diminution of the world, and this is something else that Helen Whybrow and I talked about. This might be a sidetrack, but she was saying that people do not understand that sheep are amazing because they do not just live in the world: they go across landscapes, they create fertilizer, and they till the soil just by moving through it. We are always so fixated on the end result, which has to conform to something we want. I wanted to talk about the personal aspect of this book, which is really wonderful. You talk a lot about your father and your grandfather, and you dedicate the book to your mother. So could you talk a little about the story you tell about your father and your grandfather and how it informed who you are?

CARISSA VÉLIZ

Yeah, so again, I wrote the book that I really wanted to write, and so I allowed myself some freedom that I had not before.

DAVID PALUMBO-LIU
I am glad you did.
CARISSA VÉLIZ

Thank you. Because partly I appreciate it more and more when an author reveals where they are coming from. Authors do not hold a view from nowhere; I think we are all shaped by our experiences, by the family stories we inherit. And one of the stories that runs in my family is that when my grandfather was a child, he had an accident at school. Another boy shut a metal door on his finger, and to try to stop the bleeding and the pain, he wrapped his t-shirt around his finger. When he was coming home, his mom, my great-grandmother, saw him across the field, and because he was covered in blood, she thought something really bad had happened. Shortly thereafter she had a heart attack, and within a day or two she died.

And of course, in a way, it is unfair to say, well, she should not have made that prediction, that she predicted something terrible had happened instead of waiting to ask what happened. It is unfair because it was such a bodily shock that it might have been impossible to stop. But in another way, it is a metaphor for how to live life. How many times have we done this? You receive bad news and your first reaction comes with shock: you imagine the worst possible thing. Then, as you get used to the news, you start seeing ways out. You start realizing that it might not be the end of the world, that actually it has a silver lining you had not considered. Sometimes it even becomes good news, though not always. I think as you gain experience and grow older, you get better at absorbing that kind of shock, at not overreacting too quickly, at asking questions and pausing and saying, I am not going to allow my mind to gallop into the future.

And I am very lucky with my mom. She is a very wise woman, and one of the things she has always told me when I run into trouble is that life is an adventure. Again, it ties back to curiosity, and curiosity is partly not reading ahead, not going to the end of the book to figure out how it ends. Because part of the pleasure and the value in a good book is not knowing what is going to happen and reading through it and finding out in due course, and who wants a good story to end? If you jump to the end of the book, you miss out on the adventure that life is.

DAVID PALUMBO-LIU

You talk about prediction and the way that it curtails our sense of how we might act in the world. And I teach a course on solidarity and we did a segment on the International Brigades.

CARISSA VÉLIZ

Oh wow.

DAVID PALUMBO-LIU

And just amazing. A whole other episode you and I could talk about.

CARISSA VÉLIZ

Yes.

DAVID PALUMBO-LIU

But one of the things that struck me as I was teaching this course and reading more and more about it was that the fatality rate of the International Brigades was 70%. It was 70%, and these were people coming from all around the world. I read about people from India, from China, from Japan, who were also fighting fascists but came to defend the Republic, and about the number of people who survived and then reenlisted to fight in the Second World War, because their commitment to fighting fascism was so deep. If they had thought in terms of predictions and probability, they never would have gone. And who knows what would have happened in the Second World War? Maybe they did not comprise a significant number of the people who fought in it, but that spirit, which I think you and I are both so concerned about losing, right? The thought that I would do it even if it is unlikely to succeed, because it is the right thing to do, and I know it because that is who I am.

And me being the age that I am, I told this to my wife just the other day. I said, if I could just die thinking I did the best I could. It was not perfect, but what else would I have done? And that is what is so troubling about AI: it will tell you you are not being smart, you are not being efficient, your utility is not going to be maximized because you have made the wrong choice. Could you talk a little bit about that?

CARISSA VÉLIZ

It is a combination of what AI is going to tell us and what its bosses are going to tell us. What do the tech executives tell us? They tell us AI is inevitable, this is what it looks like, it cannot look any other way, and fighting it or suggesting another way forward is just futile, so do not even waste your time. And of course that is very comfortable for them, because then people do not even imagine defying their vision of the world. And this thinking probabilistically, or in a utilitarian way of, okay, what are the chances of success, and if they are not large enough then I will not even bother, goes so against the way we work with the things we value most, and family is one of them.

So think about the person you love most in the world, whoever that might be: your spouse, your child, your sibling. If something were to happen to them, you would not weigh the chances of rescuing them; you would do anything to rescue them. In a way, being a utilitarian makes it so easy for the powerful to build a world of their liking, because they only have to make it unlikely for you to succeed to persuade you not to even try.

DAVID PALUMBO-LIU

Yeah, exactly. It would be a waste of time, quote unquote.

CARISSA VÉLIZ

Yeah. And when I think about my own life, it was not in the cards for me to end up an Oxford professor. If I had thought probabilistically, I would not even have tried. So I want to encourage young people to be principled, to act out of principle. Ask: is it the right thing, and is it important enough? Do not think about the chances, because we are not good at calculating the chances in the first place. And there are things that are so important that even if you were to lose, you want to be in that fight. You want to be on the right side of history.

DAVID PALUMBO-LIU

Yeah. Wow. That would be the perfect note to leave on. Except I want to get to the epilogue because the epilogue condenses it, and I must say this, you are a young person, but you are so wise. You really are, you are wise beyond your years. So yes, thank your parents and your grandparents and anybody you have had contact with as your mentors there, because the book is filled with wisdom that is teaching me so much. So tell us about your recommendations in the epilogue, which I think are wonderful. Do not rush to the end of the book, guys. Read the whole book, but tell us what the epilogue is please.

CARISSA VÉLIZ

I wanted to write a book that, on the one hand, is very philosophical and very theoretical and, on the other hand, is very personal: about what it is like to be a professor at Oxford and the worst and best of academia. I wanted it to be funny and practical too. Even though I am a philosopher, you just have to trust me that I am quite a practically minded person, and I wanted the ideas to land on something people could use in their everyday life, whether they are ordinary citizens, business people, or policymakers.

And so it is a kind of reminder of what is important. One of the first things is to try to identify predictions as such, and that is not always obvious. Sure, if somebody says that in 15 years the world will look like this, it is obviously a prediction. But often predictions are veiled in much more subtle language or in subtle technology, like large language models. Even when large language models speak in the present tense, the way they function is actually through prediction. And then, once you have identified that something is a prediction, be critical. Ask yourself: who is making this prediction? Are they using data? What kind of data? Who collected that data, and why? Does it have blind spots? Does it have biases? Who does this prediction really benefit? Is it a future that I want for myself, for my community, for my family, for my country?

And if not, what am I going to do about it? Am I going to just fold and believe this prediction even though I think it would be a terrible future to walk towards? Or am I going to take a prediction and say, no, I am going to defy that. That is for profit. I know why he is making this prediction. But I want a different future for myself and for my community. I invite people to try to focus on preparation instead of prediction. One of my favorite examples comes from Margaret Heffernan, and she makes a case that we do not build airplanes based on prediction. So when an airplane takes off, there is not a team of engineers and data scientists trying to predict whether a goose is going to impact the engine. We build airplanes to be able to sustain that impact because we know that sooner or later it is going to happen. And that way we do not need to predict when it is going to happen. We just prepare.

And so often in life, whether it is buying insurance for yourself or building a robust company, you will be much better served by preparation than by prediction. And then more generally, trying to predict ethically. So I argue that we should not make predictions about individuals because way too often they end up being self-fulfilling prophecies that are closer to a sentence or a verdict, and you should not have that kind of dominance over another individual. So if we have to predict, let us make predictions about the weather or about population-level statistics and cultivate the right values. Live ethically, have principles, have curiosity, and I think right now we would do very well in cultivating bravery. And I have this story about how when I was reading Aristotle when I was an undergraduate, the whole part about standing firm in battle seemed like so irrelevant for me. I was young enough and naive enough to think that war was a thing of the past, something in the history books, and not to think metaphorically about the mere existence as a kind of struggle and how important it is to stand firm in battle to hold the line, to face life with bravery and integrity.

DAVID PALUMBO-LIU

Wow. Thank you so much for being on the show. It is really an amazing book and in many more ways than what we have been able to get to in this short time. But I assume you are going to write another book, which means you are going to be back on the show. But for right now, I just want to thank you. It has been a pleasure and we have learned so much. Thank you so much, Carissa.

CARISSA VÉLIZ

Thank you so much, David. It has been my pleasure.

Speaking Out of Place is produced in collaboration with The Creative Process and is made with support from Stanford University.

Speaking Out of Place, which carries on the spirit of Palumbo-Liu’s book of the same title, argues against the notion that we are voiceless and powerless, and that we need politicians and pundits and experts to speak for us.

Judith Butler on Speaking Out of Place:

“In this work we see how every critical analysis of homelessness, displacement, internment, violence, and exploitation is countered by emergent and intensifying social movements that move beyond national borders to the ideal of a planetary alliance. As an activist and a scholar, Palumbo-Liu shows us what vigilance means in these times.  This book takes us through the wretched landscape of our world to the ideals of social transformation, calling for a place, the planet, where collective passions can bring about a true and radical democracy.”

David Palumbo-Liu is the Louise Hewlett Nixon Professor and Professor of Comparative Literature at Stanford University. He has written widely on issues of literary criticism and theory, culture and society, race, ethnicity and indigeneity, human rights, and environmental justice. His books include The Deliverance of Others: Reading Literature in a Global Age, and Speaking Out of Place: Getting Our Political Voices Back. His writing has appeared in The Washington Post, The Guardian, The Nation, Al Jazeera, Jacobin, Truthout, and other venues.
Bluesky @palumboliu.bsky.social
Apple Podcasts · Spotify · Website · Instagram