The End of the World with Josh Clark / Artificial Intelligence

Summary

An artificial intelligence capable of improving itself runs the risk of growing more intelligent than any human and slipping outside of our control. Josh explains why a superintelligent AI that we haven’t planned for would be extremely bad for humankind. (Original score by Point Lobo.)

Researchers have been working seriously on creating human-level intelligence in machines since at least the 1940s, and starting around 2006 that wild dream became truly feasible. Around that year, machine learning took a huge leap forward with the resurgence of artificial neural networks, algorithms that don’t just follow instructions but learn on their own from data. The rise of neural nets marks a big and sudden move down a dangerous path: machines that can learn on their own may also learn to improve themselves. And when a machine can improve itself, it can rewrite its code, make improvements to its structure, and get better at getting better. At some point, a self-improving machine will surpass the level of human intelligence and become superintelligent. At that point, it will be capable of taking over everything from our cellular networks to the global internet infrastructure. And it’s about here that the existential risk artificial intelligence poses to humanity comes in. We have no reason to believe that a machine we create will be friendly toward us, or even consider us at all. A superintelligent machine in control of the world we’d built, with no capacity to empathize with humans, could lead directly to our extinction in all manner of creative ways, from repurposing our atoms into new materials for its expanding network to plunging us into a resource conflict we would surely lose. There are some people working to head off catastrophe-by-AI, but with each new algorithm we release that is capable of improving itself, another potential existential threat is set loose.

Interviewees: Nick Bostrom, Oxford University philosopher and founder of the Future of Humanity Institute; David Pearce, philosopher and co-founder of the World Transhumanist Association (Humanity+); Sebastian Farquhar, Oxford University philosopher.

Subtitle
An artificial intelligence capable of improving itself runs the risk of growing more intelligent than any human and slipping outside of our control. Josh explains why a superintelligent AI that we haven’t planned for would be extremely bad for humankind.
Duration
2676 seconds (44:36)
Publishing date
2018-11-16 05:01
Contributors
iHeartRadio (author)
Enclosures
https://www.podtrac.com/pts/redirect.mp3/traffic.megaphone.fm/HSW5428252738.mp3
audio/mpeg