Thursday, October 23, 2008

Tail Events

The financial crisis has gotten many people interested in the possibility of really bad, really low-probability systemic risk. This is not a new idea; people like Nassim Taleb and Benoit Mandelbrot have argued for some time that financial returns follow non-normal, fat-tailed distributions, a property which is difficult to establish from existing data.

Of course, volatility in equities has been driven by systemic risk in the financial sector. Brunnermeier, whose work I've talked a lot about, goes to some length explaining how links between financial firms generate strong cycles and network effects. And of course connected networks are going to display the usual non-linear, chaotic symptoms. This is not really new either; Friedman based his advocacy for strong government intervention in the financial sector in times of crisis on exactly such network arguments (of course, the argument also supported the claim that governments are 'bad,' which never really made sense; either the government is a malevolent actor and ought to be eradicated, or else it is an imperfect one and should be made better).

My point is just that none of this is exclusive to stocks or finance generally. There are plenty of tail risks out there founded on network connections with enormous consequences. People have a really hard time grasping this cognitively--I know I do--and the urge to compartmentalize and reduce really prevents people from understanding or planning for these events. Nor does historical data help that much, especially since historical data is biased against existential risks: any history we are around to read is one in which the existential risk did not materialize.

I still haven't fully internalized that last point, but it's really important for thinking about the biggest such "unknown unknown"--nuclear war. We have been really close to nuclear war on several occasions. Eisenhower's deputies tried to "normalize" the use of nuclear weapons, and of course there was Cuba (where tactical nukes were deployed by the Soviets with standing orders to retaliate against a US military strike, while standing orders for the US military at one point dictated an invasion). Other interesting events include Petrov, Arkhipov, at least one recorded case of a near Soviet missile launch, and so forth. The possibility of bad signaling, faulty equipment, false alarms, and so on is disturbingly high. Nixon tried the "I'm an unpredictable actor" approach, but from the Soviet point of view (not the American) there were actually rational reasons for him to act scary. This is also all very systemic, as many bad things are likely to happen in unison. MAD works well enough in the long run, but the tail risks are very bad. India-Pakistan is of course the same situation without the response time. What is particularly scary there is that at least some of the Pakistani military leadership believe they could genuinely come out ahead in a nuclear exchange.

The really bad thing about tail risks in this scenario is that they work like a martingale betting strategy: you win small, period after period, until the inevitable ruin--and here the ruin is existential. The per-period survival probabilities multiply across time, so even if in every period there is only a really low probability of a really bad event, the chance of avoiding it forever shrinks toward zero; the bad event will happen eventually. So most civilizations collapse eventually. But of course, before globalization, collapses had only local effects. Eventually, some collapse will take all of us down.
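
To make the arithmetic concrete, here is a minimal Python sketch; the one-percent-per-period probability is a made-up illustration, not an estimate:

    # Survival probabilities multiply across periods, so even a small
    # per-period risk becomes near-certain catastrophe over enough time.
    p = 0.01  # assumed per-period probability of a catastrophic event (illustrative)
    for periods in (10, 50, 100, 500, 1000):
        survive = (1 - p) ** periods
        print(f"{periods:>5} periods: P(no catastrophe yet) = {survive:.4f}")

With these numbers, the chance of getting through 500 periods unscathed is already under one percent.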

What such a record of near-misses should tell a Bayesian who believes in many-worlds is that "we" are the survivors of quantum suicide. The nature of quantum suicide is that if you risk your life, the universe "splits" into some branches in which you die and others in which you live, and of course you are sentient and alive only in certain universes. So, our status as alive, sentient beings is powerful information about the state of all worlds. That is, we live in a reasonably representative universe which has so far escaped the tail risk of nuclear holocaust, and we are quickly heading towards the "miraculous" universe which escapes in the nick of time every time--at least, we are most likely to be conscious of that state of affairs. I like to think of this as the Golan Trevize universe.

Another way of saying this is that you should not be "surprised" that you live in a populous nation, at a populous time, and so forth, because you are likely a "representative" person in terms of where your sentience ends up. There are other statistical likelihoods you can deduce from this position. For one, it is unlikely that mankind will last much longer in its present form or at its present population. After all, if we continue dodging tail risks, then presumably we will continue expanding as fast as we can, as organisms do. The sum total of humans who will have ever lived will then be really large, and the probability of being born around now really small (of course, there are still plenty of people around now, and maybe we are exactly a low-probability tail event). So our best probability estimates suggest that either the human population drops pretty rapidly from now on (in which case we find ourselves near the mean of the historical population distribution, where we expect to be) or else we are a type of consciousness that reaches its maximum around now. So our best estimates suggest a singularity or total extinction soonish.
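
Here is a rough self-sampling (Doomsday-style) sketch in Python; the sixty-billion figure for humans born so far is a commonly cited ballpark, and the uniform-birth-rank assumption is the whole argument, so treat the output as illustration rather than forecast:

    # If your birth rank is a uniform draw over everyone who will ever live,
    # then with confidence c the total number of humans ever born is at most
    # (born so far) / (1 - c).
    born_so_far = 60e9  # ballpark estimate of humans born to date
    for c in (0.50, 0.95):
        total_upper = born_so_far / (1 - c)
        still_to_come = total_upper - born_so_far
        print(f"{c:.0%} confidence: at most ~{total_upper:.1e} humans ever "
              f"(~{still_to_come:.1e} still to come)")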

I suppose it makes sense that I'm "reflecting" about all of this. How does age enter into this--should we expect to live about as long as we already have? That doesn't make a lot of sense, so either this is all wrong or it's more proof of the "nowness" of now.

But a naive estimate of the total number of "intelligences" available across all universes has to be pretty large, right? Even a low-ball estimate for extraterrestrial intelligence balloons pretty quickly, and the rare Earth hypothesis does not seem particularly likely given the ease with which life spreads. After all, if life can thrive here, where can't it develop? So something seems wrong. Intelligence ought to be spread all over the universe, and intelligence should beget more and more intelligence, through robotic means if nothing else. So, given that "you" must, by definition, be a being with experience and sentience, why are you this particular sentient being, given that sentience ought to be everywhere? Perhaps--again--we are a tail event. Or we are a type of sentience only found among certain intelligences. But the probability of being a given type of intelligence is a probability like any other, and we ought to expect to be a representative intelligence. So--shouldn't we be that much smarter? Is this really the mean intelligence level out there? That's kind of sad--we've only really been thinking for a few centuries. Or maybe we are some of the "ants" of the intelligence world, and just really numerous, while the truly high-intelligence folks hog a certain few states?
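
A Drake-style back-of-the-envelope shows how quickly even stingy numbers balloon; every factor below is a made-up illustration, not an estimate:

    # Drake-equation-style count; all factors are illustrative guesses.
    stars_per_galaxy   = 2e11   # rough figure for the Milky Way
    frac_with_planets  = 0.2
    habitable_per_star = 0.1
    frac_life          = 0.01   # life actually arises
    frac_intelligence  = 0.001  # intelligence actually arises
    galaxies           = 1e11   # rough count for the observable universe

    per_galaxy = (stars_per_galaxy * frac_with_planets * habitable_per_star
                  * frac_life * frac_intelligence)
    print(f"~{per_galaxy:,.0f} intelligent species per galaxy")
    print(f"~{per_galaxy * galaxies:.1e} across the observable universe")

Even with those fractions, you end up with tens of thousands of intelligences per galaxy and something like 10^15 overall, which is what makes the silence puzzling.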

So I suspect that the tail events I've been discussing both make it extremely unlikely for intelligence to form--so we are very historically contingent--and also very quickly kill off (or else singularitize and make scarce) more advanced intelligences. Looking through Earth's history, there is no real fitness advantage to high g. Plenty of animals have succeeded with many strategies, but even bonobos are nearly extinct. Like a laser, which depends on a few photons escaping and then being amplified, I suspect that it's really, really hard to evolve intelligence; but once it comes up, it grows quickly. As it grows, however, it rapidly develops for itself contagious, fat-tailed risk and wipes itself out across most of the universes it occupies. Of course, I'm really positing a joint hypothesis of extinction and singularity. Not that this could be "tested," except in a sufficiently advanced simulation.

But of course any advanced intelligence "should" develop such a simulation, and the total possible number of intelligence states is going to be far greater in the simulations than in the "original"--whatever that means. So we should expect to be in a simulation. And what is the distribution of the owners? Perhaps they are malevolent or nice, but most likely they are simply apathetic. Much the same way that "junk" DNA rapidly overtakes the rest, the efficiently copying and haphazard simulators ought to be the ones generating the most states of affairs. (How well does the multiverse hold up in a simulation? Hopefully they are nice enough to keep the seed number for future reference or recurrence. In some meta-probabilistic sense, don't we have to be recurrent? After all, recurrent events happen infinitely often, while non-recurrent ones happen only finitely often, so we simply have to be recurrent for the same reason that you will never draw exactly the number one from a continuous distribution on zero to two.) If you want a clear policy takeaway: we could be more concerned about maintaining the robustness of human survival to inevitable extreme events. Asteroids and all of that.
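
The measure-zero point at the end can be seen with a toy Python check (floating-point draws stand in for the continuum here, so this is only suggestive):

    import random
    # Drawing from a (near-)continuous uniform on [0, 2], the chance of
    # hitting any single pre-specified point, such as exactly 1, is zero.
    hits = sum(1 for _ in range(10**6) if random.uniform(0, 2) == 1.0)
    print(f"exact hits of 1.0 in a million draws: {hits}")  # almost surely 0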

Let me check in the morning if this makes any sense.
