The Evolutionary Brain - DR. FERNANDO GARCÍA-MORENO on Creativity & Survival

I think creative thinking is rooted in different parts of the brain. I believe that creativity is mostly a cultural expression of how our brains react to the world. It is our culture and our lives that make our brains creative in different manners. Even though you and I have very similar brains containing exactly the same cell types, shaped by 300 million years of evolution, and we share a lot of features, we express our ideas through creative thinking differently. In my opinion, this is cultural evolution: an expression of how our brains have evolved throughout our lives, how we learn, and what experiences we have over time, what we read, the movies we see, and the people we talk to.

We are working in the lab to understand this moment in development, which is called the phylotypic period. This is something that has been known for over a hundred years: when you look at many vertebrate embryos at this early embryonic time point, all the embryos look very, very similar. We are extrapolating these ideas to the brain. We have seen that at this time point, the phylotypic period, the brains of all these species are very simple but very closely related. At this early time point we share the same features with a fish, with a gecko, or with any mammalian species. We have the same brain, with the same genes active and the same cell types involved in it.

Highlights - Nick Bostrom - Founding Director, Future of Humanity Institute, Oxford

Founding Director of the Future of Humanity Institute, University of Oxford
Philosopher, Author of Superintelligence: Paths, Dangers, Strategies

I do think, though, that there is a real possibility that within the lifetime of many people who are here today, we will see the arrival of transformative AI: machine intelligence systems that not only can automate specific tasks but can replicate the full generality of human thinking, so that everything we humans can do with our brains, machines will be able to do, and in fact do faster and more efficiently. What the consequences of that are is very much an open question and, I think, depends in part on the extent to which we manage to get our act together before these developments: on the one hand, working out the technical issues in AI alignment, figuring out exactly the methods by which you could ensure that such very powerful cognitive engines will be aligned with our values and will actually do what we intend for them to do, as opposed to something else; and then, of course, also the political challenge of ensuring that such a powerful technology will be used for positive ends. So depending on how well we perform on those two challenges, the outcome, I think, could be extremely good or extremely bad. And I think all of those possibilities are still in the cards.
