Max Bennett is the cofounder and CEO of Alby, a start-up that helps companies integrate large language models into their websites to create guided shopping and search experiences. Previously, Bennett was the cofounder and chief product officer of Bluecore, one of the fastest-growing companies in the U.S., providing AI technologies to some of the largest companies in the world. Bluecore has been featured in the annual Inc. 500 list of fastest-growing companies, as well as Glassdoor's 50 best places to work in the U.S., and was recently valued at over $1 billion. Bennett holds several patents for AI technologies and has published numerous scientific papers in peer-reviewed journals on the topics of evolutionary neuroscience and the neocortex. He has been featured on the Forbes 30 Under 30 list as well as Built In NYC's 30 Tech Leaders Under 30. He is the author of A Brief History of Intelligence: Evolution, AI, and the Five Breakthroughs That Made Our Brains.

THE CREATIVE PROCESS · ONE PLANET PODCAST

One of the core questions around AI is whether it can have consciousness. But first, how would you define yourself as having consciousness?

BENNETT

I think anyone who's practiced any form of meditation realizes that it's often when we turn our minds off and think less that we feel most aware and present. And I think that is a great introspective case study in the decoupling between conscious awareness and thinking. So I think it is highly possible that we will have very intelligent machines that far surpass us in quote-unquote intelligence, in their ability to reason and problem-solve, but that could very easily not be sentient or conscious at all. Similarly, I think this also applies to other animals. I think folks who argue that animals are not conscious or sentient due to their inability to solve a variety of intellectual tasks may also be wrong. But when it comes to consciousness, we have pretty much no idea how consciousness emerges from matter. There are some hot, relatively speculative ideas, but we have no real scientific grounding on which to say: this is how consciousness emerges.

THE CREATIVE PROCESS · ONE PLANET PODCAST

I just interviewed Howard Gardner, who, as you know, pioneered the theory of multiple intelligences. Often when we think about human intelligence, we are thinking about the brain, where it's mostly nested, or for an animal, maybe it's a mind-body thing. And then we have our own mind-body problem. How do you apply that distinction to artificial intelligence or machine learning, which has no limbic system?

BENNETT

So, modern neuroscientists are questioning whether there really is one consistent limbic system. But usually when we're talking about the limbic system, we're thinking about things like emotion, volition, and goals. And those types of things, I would argue, we already have in reinforcement learning algorithms, at least at a primitive level, because the way we get them to achieve goals, like playing a game of Go and winning, is we give them a reward signal, or a reward function. And then we let them self-play and teach themselves by maximizing that reward. But that doesn't mean they're self-aware; it doesn't mean they're experiencing anything at all. There's a fascinating set of questions in the AI community around what's called the reward hypothesis, which asks how much of intelligent behavior can be understood through the lens of just trying to optimize a reward signal. We are more than just optimizers of reward signals. We do things to try and reinforce our own identities. We do things to try and understand ourselves. These are attributes that are hard to explain from a simple reward signal, but they do make sense under other conceptions of intelligence, like Karl Friston's active inference, where we build a model of ourselves and try to reinforce that model.
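To make the reward-signal idea concrete, here is a minimal sketch of the kind of loop Bennett is describing: an agent with no goals of its own, only a scalar reward it learns to maximize through trial and error. The toy corridor environment, the Q-table, and all the constants below are illustrative assumptions, not anything from the interview or from a real Go-playing system.

```python
# Minimal sketch: an agent given only a reward signal, learning by trial and error.
# The "walk to the right end of a corridor" environment is a hypothetical toy example.
import random

N_STATES = 5            # corridor cells 0..4; reaching cell 4 yields reward 1
ACTIONS = [-1, +1]      # step left or step right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    nxt = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

for episode in range(500):
    s, done = 0, False
    while not done:
        # epsilon-greedy: mostly exploit current estimates, occasionally explore
        if random.random() < 0.1:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: q[(s, x)])
        nxt, r, done = step(s, a)
        best_next = max(q[(nxt, x)] for x in ACTIONS)
        # Q-learning update: nudge the estimate toward reward + discounted future value
        q[(s, a)] += 0.1 * (r + 0.9 * best_next - q[(s, a)])
        s = nxt

print({k: round(v, 2) for k, v in q.items()})  # learned preference for moving right
```

The point of the sketch is Bennett's distinction: the agent reliably pursues its "goal," yet nothing in the loop implies awareness or experience of any kind.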

THE CREATIVE PROCESS · ONE PLANET PODCAST

As of August 19th in Europe, the digital platforms are now subject to the Digital Services Act, designed to get rid of non-transparent practices and take illegal content off social media, search engines, and other major websites. That's in addition to President Biden's new executive order to set up guardrails: new safety assessments, and attention to equity, civil rights, and AI's impact on the labor market. So with all this governance in place, can tech companies be counted on to do the right thing for humanity? And what role can we play in designing the future we want to live in?

BENNETT

One of the crowning achievements of humanity is self-delusion. We like to convince ourselves that the thing that's best for us is also the best for everyone else. So it doesn't mean that people are inherently being bad, but whenever someone comes along and says you should regulate thing ABC, and it just so happens that if you do ABC it will enrich that individual and their company, we should be somewhat skeptical and make sure that is in fact the best way to regulate it.

So in terms of the regulations themselves, I think a lot of them are really good ideas. I think Yann LeCun at Meta has some of my favorite philosophies on this. Where I do think we should not be regulating is research. And I think we should absolutely be supporting open source. I do think it's much more reasonable to regulate products. So this is a very important distinction. Regulating research is effectively telling scientists they're not allowed to look into certain forms of AI, that they're not allowed to test certain forms of AI. I think that is a mistake, or at least, if we do regulate research, we should have a higher burden of proof for restraining it.

THE CREATIVE PROCESS · ONE PLANET PODCAST

I'm curious about what you were saying earlier about AI being able to encapsulate the human condition. Take a process like art, poetry, or writing a haiku, which has a particular set of rules. If ChatGPT can create a haiku that follows those rules, would you consider that haiku a true haiku? And by extension, can AI produce art and encapsulate the human experience?

BENNETT

When I read some speculative fiction that moves me, for example, part of the reason it moves me is because it came from a human mind who experienced something they were trying to share with a fellow human mind. And it moves me because I can tell in the writing that another human felt something and wanted to share what they experienced: the pain or the joy they felt, or the story they built in their heads. And what I think is interesting about the stories created by ChatGPT is that none of them have been any good yet. To me, the only distinction I can draw is that when humans do it, there is a message imbued in the art, because I am experiencing something as a feeling, thinking human, and I am channeling that into the thing I'm creating as a little message, a gift of human solidarity, to others who might experience it themselves. And I think that is what's missing from AI. It might not be that these systems aren't quote-unquote creative, but that their creativeness is almost vacuous, because it lacks that message from another thinking, sentient being that's trying to communicate with us. And that's why, when we look at it, we sense something is missing that's hard to articulate. So it raises an interesting question: do you need to feel and suffer a little bit, to go through the trials and triumphs of the human experience, to create art that is compelling and meaningful to a fellow human?

THE CREATIVE PROCESS · ONE PLANET PODCAST

In the aftermath of OpenAI's firing and rehiring of its co-founder and CEO Sam Altman, there have been revelations about what sparked the internal disruption and questions about what to do with a significant generative AI breakthrough. There is a prediction that we could get this superintelligence maybe within this decade or sooner. What are your thoughts on that, and what kind of safeguards should we have in place?

BENNETT

So I think it is definitely a real possibility that in the next 10 years, we will have AI systems that can do a large swath of human cognitive work. I think something we're all going to realize, which I think ChatGPT is starting to reveal to us, is that this is not going to happen with one feature release. It's not going to be that OpenAI releases GPT-5 and now it solves every single human task better than any human. It's going to be a methodical dismantling of tasks that only humans can do, and it's going to slowly subsume those. So, for example, GPT-4 can already do a bunch of things as well as a human. GPT-5 is going to subsume other tasks, and eventually, over the course of perhaps a decade or two decades or three, we're going to wake up one day and realize, wow, a huge swath of tasks that used to be uniquely human can now be done by AI. But I think it's highly unlikely it's going to happen in one release.

So I think a more grounded question for people to ask, when we think about what is happening at OpenAI, is: is this Q* algorithm everyone is talking about going to be a meaningful step up, adding new sets of tasks that these algorithms can now do that previously only humans could? And I think it's definitely possible. I think when we look at what Q* is doing, it's not that innovative of an underlying algorithm; people have been doing these types of search algorithms for a while. But it's possible that it works by giving GPT effectively the ability to think. Earlier we were talking about how the problem with this autoregressive mechanism of just predicting the next word is that it prevents the model from pausing, thinking about possibilities, evaluating the outcomes, and then choosing one.

And in very simple terms, what Q* does is enable these language models to do exactly that. So what is going to happen if they launch this Q* thing, if that's actually what the breakthrough was, is that when you ask GPT a question, it's going to pause, it's going to search through possible outcomes, it's going to evaluate the results of those, and then it's going to render you an answer. And the open research question, which we won't know the answer to until they release it, is how much that improves its performance on reasoning-related tasks. And the reason they're excited is because, internally at least, it seems they got it to do basic math, which is something that GPT-4 is terrible at; GPT-4 doesn't do math very well.
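As an illustration only: the details of Q* are not public, so the sketch below just shows the generic "pause, search, evaluate, choose" loop Bennett describes, in the style of best-of-N or tree-of-thought selection. The propose and score functions are hypothetical stand-ins for a language model and a learned value or reward model.

```python
# Sketch of "pause, search through possibilities, evaluate, then answer."
# Nothing here reflects OpenAI's actual (unpublished) Q* work; propose() and score()
# are placeholders for a generator model and an answer-quality evaluator.
from typing import Callable, List

def search_then_answer(question: str,
                       propose: Callable[[str, int], List[str]],
                       score: Callable[[str, str], float],
                       n_candidates: int = 8) -> str:
    """Instead of emitting the first autoregressive continuation, generate several
    candidate answers, evaluate each one, and return the highest-scoring candidate."""
    candidates = propose(question, n_candidates)             # "think about possibilities"
    scored = [(score(question, c), c) for c in candidates]   # "evaluate the outcomes"
    return max(scored)[1]                                     # "choose one"

# Toy stand-ins so the sketch runs end to end.
def toy_propose(question: str, n: int) -> List[str]:
    return [f"candidate answer {i} to: {question}" for i in range(n)]

def toy_score(question: str, answer: str) -> float:
    return float(answer.startswith("candidate answer 3"))  # pretend candidate 3 is best

print(search_then_answer("What is 12 * 7?", toy_propose, toy_score))
```

The design question Bennett raises maps onto the score function: how good the evaluator is determines how much this extra search actually improves reasoning over plain next-word prediction.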

So that could be very fascinating. Does that mean it's going to be human-like intelligence? No, because human intelligence has a bunch of other things that these systems would still not have. For example, breakthrough number four (in my book), which evolved in early primates, is the ability to mentalize.

THE CREATIVE PROCESS · ONE PLANET PODCAST

Putting some of the computational benefits of AI to use mitigating climate change or managing resources for a growing population would be wonderful. With all this writing about human intelligence, you've also been reflecting on some of the great minds and how they make their breakthroughs. One other thing we're considering now is the possibility of neural wetware. What are your reflections on great minds, and whether it would be possible to achieve that through neural wetware?

BENNETT

I do think there's a very real possibility that we will find that, in order to have super-intelligent systems that are energy efficient, we need wetware. I mean, the difference in the energy cost of running ChatGPT versus a human brain is astronomical. A human brain runs on about the energy of a light bulb, which is a crazy thing to realize: the thing in our heads that creates all of the amazing intelligence we have, all of the common sense, sentience itself, is that energy efficient. And ChatGPT, which captures only a small fraction of that, consumes far more energy.

THE CREATIVE PROCESS · ONE PLANET PODCAST

What are your reflections on the importance of the environmental humanities and telling stories about our planet?

BENNETT

We are on a speck of dust in the middle of nothingness. It is the only haven we are privy to that can support life and intelligence. I forget who came up with this analogy, but I love it: if we think about Earth not as a planet but as a spaceship tunneling through the void, wouldn't we care a lot about the state of that spaceship? This is where we live. Would we let it start rusting in places and let things start breaking down? Would we let it run out of fuel? Would we let key parts of the spaceship's functioning simply cease to work, with the hope that, oh, we'll have generations in the future who will fix the spaceship?

We have what might be called the technophiles who are fascinated with accelerating technology and expanding human consciousness into machines and seeing how far into the universe we can spread and going on these grand adventures. And some think what's beautiful about the human condition doesn't require expansion. It doesn't require more. We already have everything beautiful about sentience. We should preserve it and just share it with other animals and maintain some form of symbiosis and balance. And no matter which of these two sides of the spectrum you fall on, we must keep Earth healthy.

THE CREATIVE PROCESS · ONE PLANET PODCAST

You've spoken about what moves you in the arts and the importance of telling stories. So even as we expand these technologies, what for you is the importance of having the humanities be part of the design process, or involved in the formation and governance of these technologies?

BENNETT

Technology and engineering and all of these things can tell us how to do things and how to achieve an outcome, but they do not give us a why. They do not tell us to what end we are achieving it. And I think especially now, as AI and other forms of technology are accelerating at an almost scary pace, it behooves us to be very clear about why we're doing what we're doing and what future we're trying to achieve. And that is fundamentally a question that is only answered in the humanities. It is a philosophical question, a moral question, a political question. And I think folks who only focus on the interesting, curious intellectual challenges of the engineering of technology are at risk of creating a world we don't in fact like, because we're not thinking about why we're doing what we're doing or what the goal is.

And so I think many forms of the humanities help us understand why. I'm personally a big fan of speculative fiction. I think science fiction helps shed light on the consequences of various things and helps us explore futures. I think that's one wonderful form of reasoning about why we may or may not want something.

Political theory and philosophy are incredible sources of reasoning about the consequences of various choices and help us challenge our intuitions about what is right and wrong. I think art helps us get in touch with an intuitive sense of human solidarity. When we look at art created by someone who is going through something painful or scary or special, and we can connect with them, I think that helps us share our common humanity. So I think the humanities are an essential part of any future, because they help us define why we're doing things and what end goal we're trying to achieve.

This interview was conducted by Mia Funk and Callie Cho with the participation of collaborating universities and students. Associate Interviews Producers on this episode were Katie Foster and Callie Cho. The Creative Process is produced by Mia Funk. Additional production support by Sophie Garnier.
Mia Funk is an artist, interviewer and founder of The Creative Process & One Planet Podcast (Conversations about Climate Change & Environmental Solutions).