Highlights - SUSAN SCHNEIDER - Author of Artificial You: AI and the Future of Your Mind, Fmr. Distinguished Scholar, US Library of Congress

Founding Director · Center for the Future Mind · Florida Atlantic University
Author of Artificial You: AI and the Future of Your Mind
Fmr. NASA Chair · Fmr. Distinguished Scholar at US Library of Congress

So it's hard to tell exactly what the dangers are, but that's certainly one thing that we need to track: beings that are vastly intellectually superior to other beings may not respect the weaker beings, given our own past. It's really hard to tell exactly what will happen. The first concern I have is with surveillance capitalism in this country: the constant surveillance of us, because the US is a surveillance capitalist economy, and it's the same elsewhere in the world, right? With Facebook and all these social media companies, things have just been going deeply wrong. And so it leads me to worry about how the future is going to play out. These tech companies aren't going to be doing the right thing for humanity. And this gets to my second worry, which is: how's all this going to work for humans exactly? It's not clear where humans will even be needed in the future.

Highlights - LINDSEY ANDERSON BEER - Writer, Director - Pet Sematary: Bloodlines - Sleepy Hollow

Writer · Director · Executive Producer
Pet Sematary: Bloodlines · Sleepy Hollow · Bambi · Lord of the Flies

For me, I don't start a project unless I have a really clear understanding of who the main characters are and why this is a journey that's necessary for them to take. And why are these both the best and the worst people to be in this series? That's the question I ask myself all the time because you need to know: What are their strengths? What are their weaknesses? What are the dramatic tension points going to be where these specific people can really succeed or really fail in this scenario? I love people who are passionate, and Quentin Tarantino is just so passionate. And I've never been in a writers' room or even really in any kind of development experience where a director was just so passionate and so full of energetic ideas. And that was really inspiring. Somebody who just completely knows their own point of view and gets excited by their own ideas is just fun to watch.

Highlights - Alberto Savoia - Google’s 1st Engineering Director - Author of “The Right It”

Google’s 1st Engineering Director · Innovation Agitator Emeritus
Author of The Right It: Why So Many Ideas Fail and How to Make Sure Yours Succeed

As much as I would love to take the credit, Google Ads was a big team, and I was fortunate to be brought in as a director who managed the team. I think the reason it was so successful is that innovations and new ideas compound. They build one upon the other. So the reason ads was so successful for Google is because search was so successful for Google. When you have search and you have billions of people coming in every day, maybe every hour, and searching all kinds of things, you have this treasure trove of data. If you have a billion searches per day, how many experiments can you run? And so Google is very famous for doing a lot of A/B experiments. That's how we collect the data. So what actually enabled Google to be so successful and to grow is this mental attitude, which is the same one that Amazon and some of these really successful technology companies have, of doing a lot of experiments on small samples and continually refining based on that data. If you're dealing with a lot of people, you can do those experiments, and that's why these companies are successful. The sad thing is what happens with companies that do not operate in that way, that do not try to operate on data and run all of those experiments: those are the ones that are left behind. Innovation is experimentation.
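
To make the scale argument concrete, here is a minimal, illustrative sketch of the kind of readout a simple A/B experiment produces: a two-proportion z-test comparing click-through rates between a control and a variant. The traffic numbers, variant names, and helper function are invented for illustration; this is not Google's actual experimentation tooling.

```python
# Illustrative only: a minimal two-proportion z-test, the kind of statistic a
# simple A/B experiment reports. Numbers and names below are hypothetical.
import math

def ab_test(clicks_a, views_a, clicks_b, views_b):
    """Z-statistic and two-sided p-value for the difference in
    click-through rate (CTR) between variant A and variant B."""
    p_a = clicks_a / views_a
    p_b = clicks_b / views_b
    # Pooled CTR under the null hypothesis that the variants perform equally.
    p_pool = (clicks_a + clicks_b) / (views_a + views_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / views_a + 1 / views_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Even a small slice of a billion daily searches is enough traffic to detect
# a 0.1-point CTR difference with high confidence within a single day.
z, p = ab_test(clicks_a=10_000, views_a=500_000,   # control: 2.0% CTR
               clicks_b=10_500, views_b=500_000)   # variant: 2.1% CTR
print(f"z = {z:.2f}, p = {p:.4f}")   # z ≈ 3.5, p < 0.001
```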

Highlights - Nick Bostrom - Founding Director, Future of Humanity Institute, Oxford

Founding Director of Future of Humanity Institute, University of Oxford
Philosopher, Author of Superintelligence: Paths, Dangers, Strategies

I do think though that there is a real possibility that within the lifetime of many people who are here today, we will see the arrival of transformative AI, machine intelligence systems that not only can automate specific tasks but can replicate the full generality of human thinking. So that everything that we humans can do with our brains, machines will be able to do, and in fact do faster and more efficiently. What the consequences of that are is very much an open question and, I think, depends in part on the extent to which we manage to get our act together before these developments. In terms of, on the one hand, working out our technical issues in AI alignment, figuring out exactly the methods by which you could ensure that such very powerful cognitive engines will be aligned to our values, will actually do what we intend for them to do, as opposed to something else. And then, of course, also the political challenges of ensuring that such a powerful technology will be used for positive ends. So depending on how well we perform on those two challenges, the outcome, I think, could be extremely good or extremely bad. And I think all of those possibilities are still in the cards.

Highlights - Nicholas Christakis - Author of “Blueprint”

Author of Blueprint: The Evolutionary Origins of a Good Society
Director of the Human Nature Lab at Yale · Co-director of the Yale Institute for Network Science

So these kinds of problems in what I call hybrid systems of humans and machines are a key focus of my lab right now. Margaret Traeger, who's now at Notre Dame, did a wonderful project in which we made these groups of three humans and a humanoid robot work together to solve a problem.

We manipulated the humanity of the robot. For example, sometimes we had the robot tell stupid dad jokes, like corny jokes. Or we had the robot break the ice by saying, "You know, robots can make mistakes, too." This kind of stuff. And what we found was that the human interactions could be changed by the simple programming of the robot.

Nicholas A. Christakis - Author of “Blueprint” - Director of Human Nature Lab, Yale

Author of Blueprint: The Evolutionary Origins of a Good Society
Director of the Human Nature Lab at Yale · Co-director of the Yale Institute for Network Science

We're not attempting to invent super smart AI to replace human cognition. We are inventing dumb AI to supplement human interaction. Are there simple forms of artificial intelligence, simple programming of bots, such that when they are added to groups of humans, they help the humans to help themselves, because those humans are smart or otherwise positively inclined? Can we get groups of people to work better together, for instance, to confront climate change, or to reduce racism online, or to foster innovation within firms?

Can we have simple forms of AI that are added into our midst that make us work better together? And the work we're doing in that part of my lab shows that that's abundantly the case. We've published a stream of papers showing that we can do that.

BEN PRING

Director of Cognizant’s Center for the Future of Work
Author of Monster: A Tough Love Letter On Taming the Machines that Rule Our Jobs, Lives

They’re single-purpose engines doing one thing in extraordinary ways, and they’ve been encouraged in that by the ecosystem around them, by the funding that’s being pumped into them by people whose only motivation is simply to make more money. And you can see the results of that in the world as this technology has grown from a little acorn to now being the biggest sequoia in the forest. And it’s shading every other tree, it’s taking all the light, it’s taking all the energy from the forest, and it’s distorting so much in the world.