The Evolutionary Brain - DR. FERNANDO GARCÍA-MORENO on Creativity & Survival

I think creative thinking is rooted in different parts of the brain. I believe that creativity is mostly a cultural expression of how our brains react to the world. It is our culture and our lives that make our brains creative in different manners. You and I have very similar brains containing exactly the same cell types, the product of some 300 million years of evolution. We share a lot of features, yet we express our ideas through creative thinking differently. In my opinion, this is cultural evolution: an expression of how our brains have evolved throughout our lives, how we learn, and what experiences we have over time: what we read, the movies we see, and the people we talk to.

We are working in the lab to understand a moment in development called the phylotypic period. This is something that has been known for over a hundred years: at this early embryonic time point, the embryos of many vertebrate species look very, very similar. We are extrapolating these ideas to the brain. We have seen that at this time point, the phylotypic period, the brains of all these species are very simple but very closely related. At this early time point, we share the same features with a fish, with a gecko, or with any mammalian species. We have the same brain, with the same genes active and the same cell types involved.

What can AI teach us about human cognition & creativity? - Highlights - RAPHAËL MILLIÈRE

Asst. Professor in Philosophy of AI · Macquarie University
I'd like to focus more on the immediate harms that the kinds of AI technologies we have today might pose. With language models, the kind of technology that powers ChatGPT and other chatbots, there are harms that might result from regular use of these systems, and then there are harms that might result from malicious use. Regular use would be how you and I might use ChatGPT and other chatbots to do ordinary things. There is a concern that these systems might reproduce and amplify, for example, racist or sexist biases, or spread misinformation. These systems are known to, as researchers put it, “hallucinate” in some cases, making up facts or false citations. And then there are the harms from malicious use, which might result from bad actors using these systems for nefarious purposes. That would include disinformation on a mass scale. You could imagine a bad actor using language models to automate the creation of fake news and propaganda to try to manipulate voters, for example. And this takes us into the medium-term future, because we're not quite there yet, but another concern would be language models providing dangerous, potentially illegal information that is not readily available on the internet for anyone to access. As they get better over time, there is a concern that in the wrong hands these systems might become quite powerful weapons, at least indirectly, and so people have been trying to mitigate these potential harms.
