Saturday, February 23, 2008

Existential Risks & The Lifeboat Foundation

I don’t recall how I wound up there, but I recently dropped by the Lifeboat Foundation’s website. They are a very interesting group, dedicated to preventing or acting against any possible threat to our future as a species. Basically, they want to prevent anything capable of wiping out humanity or permanently countering our progress.

The full post explores a report explaining the basic premise of the Lifeboat Foundation and offers some of my own thoughts.

On their site, the Lifeboat Foundation provides this thought-provoking report on possible existential risks and what can be done to avoid them. As the report points out, we as a species do best by learning through trial and error, but an existential risk is one where that approach is disastrous: one error and you’re done. Considering my thinking last summer on planetary defense, I was quite glad to come across this website and report. While I focused on alien invasion, they are dealing with some (currently) more likely scenarios, with repeated mention of nanotech disasters (or other risks leading to nanotech disasters).

However, I greatly appreciate that they included “technological arrest” as a possible risk. For instance, if a world government arose and used technological risks as justification for authoritarian controls suppressing technological development, it would actually make it harder for us to prevent other forms of existential disaster. This isn’t the only trade-off in the list of risks they discuss. One option is to make certain forms of terrorist risk less likely, not just by hunting down existing terrorists but by making terrorism more and more taboo and unnecessary (so that fewer people consider acts of terrorism a viable or needed means of expressing political dissent). Once upon a time in the West (and still in some parts of the world), violence against other people was exceptionally common in the form of dueling, blood feuds, etc. Nowadays, these are considered barbaric by the Western mainstream, but such violence is still excused in some cases where a totalitarian (or perceived totalitarian) government does not allow for non-violent dissent.

With social evolution to eliminate both the causes and excuses for terrorism, we could render terrorist-style violence less likely*. But reducing our species’ capacity for violent acts could leave us vulnerable to hostile aliens by reducing our ability to effectively counter their actions. While the current probability of the latter is low compared to the more likely terrorism, this could result in a situation where minimizing one risk increases the severity and probability of another further down the line. These are exactly the sorts of questions we need to ask in assessing countermeasures.

The report also discusses a quite interesting phenomenon: story-bias. This is where we focus on the possibilities outlined in our fictional entertainments even though those scenarios are selected for their narrative quality rather than their severity or probability. Our fictions are overflowing with disaster scenarios, and while they may help raise public awareness, they may also provide a false sense of confidence or a false understanding of the problems. My thinking is that the sheer diversity of fictional scenarios may act as a countermeasure here, since different stories present similar developments with different levels of concern.

A useful example is to compare how wormholes are treated in the “Stargate” universe versus the “Farscape” one. Both introduce the concept of wormholes and provide some basic rules for how they function, but the specific nature of both the physics and the socio-political situation in each universe contributes to a vastly different interpretation of the dangers. “Stargate” treats wormholes as phenomena capable of serving as useful tools for intergalactic civilizations while not shying away from their potential as weapons. “Farscape” presents them as phenomena with such horrible potential for abuse as weapons that one powerful civilization restricts dissemination of the knowledge and will even kidnap or destroy those who learn about them. Viewers of both shows are provided with a larger, balanced understanding of both the good and bad possibilities, and of which specifics could change their nature from one to the other**. When it comes to public policy, such variety in fictional universes could create a large audience accustomed to comparing and contrasting differing scenarios, and to understanding how a specific, relatively small detail can change the threat assessment of a potential existential risk.

When discussing mitigating techniques, the report stated something quite refreshing for me:

“Creating a broad-based consensus among the world’s nation states is time-consuming, difficult and in many instances impossible. We must therefore recognize the possibility that cases may arise in which a powerful nation or coalition of states needs to act unilaterally for its own and the common interest. Such unilateral action may infringe on the sovereignty of other nations and may need to be done preemptively.”

In light of the existential threat posed by terrorism and recent history, I can only state that I very much agree with this assessment and the discussion that follows this quote. Being liked by everyone is great, but ultimately, preventing deaths and disasters is more important. If we can accomplish things without unilateral actions, then great. But we shouldn’t be too wedded to a particular form of solution.

Of course, they follow this discussion with a pro-superintelligence stance that I find a little strange. The argument is that even if a superintelligence could pose a threat, it would still be a good idea to develop one because it could help us mitigate other threats. The advantages of this trade-off are not as obvious to me as they are to the author of the report. The Appendix also discusses possible “whimper” scenarios with a strange focus on the potentially counter-evolutionary nature of “hobbyists”. While those more familiar with the topic might understand it better, I found the author’s focus on this possibility, and some of his assumptions, unclear and not as self-evident as presented.

Thinking through these types of problems also holds additional benefits. It’s quite likely that research into existential risks can help make our civilization and species more resistant to smaller risks like hurricanes, epidemics, and political instability. The Lifeboat Foundation is definitely worth watching for future developments (check out their project list). All in all, I am quite glad someone is dedicating serious thought to these sorts of problems.

One final endnote: I cannot believe there is such a thing as a Voluntary Human Extinction Movement! They were referenced in the Lifeboat report. The idea of pursuing a goal as pessimistic and self-hating as species-suicide is utterly, and thankfully, beyond me. The only good outcome I can think of is that this movement could result in people with a weak will-to-live de-selecting themselves from having an evolutionary effect (no kids = no passing on genes that promote susceptibility to this kind of thinking). Still, as wacko as they sound and as much emphasis as they place on their peaceful intentions, I don’t believe this movement is remotely benign. Moral busybodies of any flavor are usually incapable of limiting themselves to their own decisions; at some point, they want control over others in order to make everyone’s decisions “for the greater good”. Imagine a world where a die-hard proponent of this philosophy had access to technologies capable of wiping out humanity, or was in a position of power that gave them access to nuclear codes or something similar. (shudder)

*Of course, it only takes one. We would not be able to eliminate the threat of terrorism on any scale, merely reduce its probability and perhaps its severity (with countermeasures of some kind).

** For a military sci-fi exploration of wormholes in both the “Stargate” and “Farscape” universes, I recommend “The Lost Warrior” series by Neil Gartner. It’s a work of crossover fanfiction assuming both universes are real. Just as a warning, it’s not a happy tale.
