In this episode we pick up our discussion of Existential Risks. After a brief recap of the last episode, we jump into a discussion of seven human-caused existential risks, which philosopher Toby Ord argues are roughly 1,000 times more likely to cause human extinction over the next century than any naturally occurring risk.

You can listen in your browser or right-click here (and Save as) to download.

This episode is also available on Spotify, Apple Podcasts, and Google Podcasts.

Timestamps:

  • 0:00 - Recap from Part 1
  • 05:59 - Nuclear weapons
  • 16:06 - Climate change
  • 23:51 - The ethical cost of climate change
  • 28:32 - On energy transitions
  • 33:31 - Ecological or environmental collapse
  • 42:13 - Bioengineered pandemic
  • 45:16 - Nanotechnology and self-replicators
  • 49:55 - Artificial intelligence
  • 59:32 - Global collapse or stagnation
  • 1:09:14 - Summing up the risks, outro

Food for Thought:

I'd love to read your responses to this question in the comments below.

  1. Do you agree with Toby Ord's assessment that we have a 1-in-6 chance of going extinct over the next century as a result of one of these human-caused existential risks? Why or why not?

Recap:

  • We start with a brief recap of the last episode of the podcast, where we discussed seven naturally occurring existential risks. From there, we jump into a detailed discussion of seven human-caused existential risks:
  1. The proliferation of nuclear weapons. We examine the risk that nuclear weapons pose to humanity, looking at the different contexts that could become 'flashpoints' for nuclear war. By far the biggest existential threat from nuclear weapons is the possibility of a long-duration nuclear winter.
  2. Human-caused climate change. The main existential threat here is the possibility of a runaway greenhouse effect that renders the surface of the Earth completely uninhabitable. That's highly unlikely to happen, so what we're left with is climate change as a destabilizer and an ethical dilemma: those who are least responsible for climate change are those who will suffer its worst effects.
  3. Ecological or environmental collapse. We discuss the importance of a well-functioning biosphere and the possibility of a global mass extinction event that destroys our ability to grow food. This is also unlikely, and it's not too late for us to change our trajectory and begin repairing the Earth's biosphere.
  4. A bioengineered global pandemic. Engineering a pathogen capable of wiping out humanity may be extremely difficult, but it might ultimately amount to nothing more than an engineering problem: it is at least theoretically possible to create a pathogen that spreads rapidly and undetected and is nearly 100% lethal.
  5. Development of self-replicating nanobots. This is the 'grey goo' scenario, where a swarm of self-replicating nano-scale robots is released and quickly consumes all available matter on Earth, turning it into copies of itself. As with the bioengineered pandemic, this is theoretically possible and may really just come down to an engineering problem.
  6. Emergence of an artificial superintelligence. The emergence of an AI that's smarter than human beings would be either the greatest thing we ever invented or the worst. We discuss the perils of releasing an AI that isn't aligned with human wellbeing and instead finds more value in eradicating us to pursue its own goals (such as maximizing its processing power by turning all available matter in the universe into computer processors).
  7. The collapse or stagnation of global order. This is the risk that we go out with a whimper rather than a bang. It could take the form of a complete civilizational collapse (like the fall of the Roman Empire) that takes us centuries to recover from, if we ever do. Future generations might instead build virtual reality systems that are better than 'real life' and choose to live entirely within them, rendering us effectively extinct as a species. Or a global government could emerge, become totalitarian, and stifle humanity's long-term potential.
  • We end by tallying up the total existential risk load according to Toby Ord: he gives us a 1-in-6 chance of going extinct within the next century as a result of any of the above, weighted particularly towards the emergence of a superintelligent AI, a bioengineered pandemic, or some other unforeseen technology (a short sketch of how individual risk estimates combine into a single figure like this follows below). [1]
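
As a rough illustration of how individual risk estimates roll up into a single headline figure like Ord's 1-in-6, here is a minimal Python sketch. The per-risk numbers below are hypothetical placeholders, not Ord's published estimates; the point is only that, if the risks are treated as independent, they combine as one minus the product of the survival probabilities rather than by simple addition.

```python
# Illustrative only: combine hypothetical per-risk extinction probabilities
# for the next century into one overall figure, assuming independence.
# These numbers are placeholders, NOT Toby Ord's published estimates.
from math import prod

risks = {
    "unaligned AI": 1 / 10,
    "engineered pandemic": 1 / 30,
    "nuclear war": 1 / 1000,
    "climate change": 1 / 1000,
    "environmental collapse": 1 / 1000,
    "nanotechnology": 1 / 1000,
    "collapse or stagnation": 1 / 50,
}

# P(extinction) = 1 - P(surviving every single risk)
p_extinction = 1 - prod(1 - p for p in risks.values())

print(f"Combined risk: {p_extinction:.3f} (about 1 in {round(1 / p_extinction)})")
```

A real estimate like Ord's also has to grapple with interactions between risks (for example, a nuclear exchange making a later collapse more likely), which the simple independence assumption above ignores.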

Works Cited:

  1. Toby Ord, The Precipice: Existential Risk and the Future of Humanity. Hachette Books, 2020.
  2. Vaclav Smil, Global Catastrophes and Trends: The Next Fifty Years. The MIT Press, 2008.
  3. Nick Bostrom and Milan M. Cirkovic, Global Catastrophic Risks. Oxford University Press, 2008.
  4. Nick Bostrom, Superintelligence: Paths, Dangers, Strategies. Oxford University Press, 2016.