Highlights - Alain Robert - Famous Rock and Urban Climber - "The French Spider-Man"

Famous Rock & Urban Climber · "The French Spider-Man"
Known for Free Solo Climbing the World's Tallest Skyscrapers Using No Climbing Equipment

First of all, yes, I need to know what I will be climbing, whether it's on rocks or whether it's on buildings. And then there is physical preparation. And regarding the mindset, it's more something that became a bit automatic over the years because I have been free soloing for almost 50 years. So it is pretty much my whole life. So that means that for me, being mentally ready, it's kind of simple. It's almost always the same mental process, meaning, I can be afraid before an ascent, but I know myself actually very well. And I know that once I am starting to climb, I feel fine. I put my fear aside, and I'm just climbing.

Highlights - Nick Bostrom - Founding Director, Future of Humanity Institute, Oxford

Founding Director of Future of Humanity Institute, University of Oxford
Philosopher, Author of Superintelligence: Paths, Dangers, Strategies

I do think, though, that there is a real possibility that within the lifetime of many people who are here today, we will see the arrival of transformative AI: machine intelligence systems that not only can automate specific tasks but can replicate the full generality of human thinking. So that everything that we humans can do with our brains, machines will be able to do, and in fact do faster and more efficiently. What the consequences of that are is very much an open question and, I think, depends in part on the extent to which we manage to get our act together before these developments. In terms of, on the one hand, working out our technical issues in AI alignment, figuring out exactly the methods by which you could ensure that such very powerful cognitive engines will be aligned to our values, will actually do what we intend for them to do, as opposed to something else. And then, of course, also the political challenges of ensuring that such a powerful technology will be used for positive ends. So depending on how well we perform on those two challenges, the outcome, I think, could be extremely good or extremely bad. And I think all of those possibilities are still in the cards.