Mark Kleiman responded to my post about smallpox vaccination, where I raised the question of our uncertainty about the probability of a smallpox attack from Iraq. The key point was that it’s really hard to put a confident probability on that event: we can work out what we should do if we knew the risk, but we don’t know it, which makes the decision harder. (I also said that makes the Administration’s position look a little odd.)

Mark responded by outlining the concept of a “critical value” for the risk of an attack: it’s the value

that would make the expected costs [of vaccinating vs not vaccinating] equal … If the actual probability [of an attack] is higher than the critical value, vaccination will have the lower expected cost and is therefore the preferred option; else, not…

So now, instead of moping around, saying “What do you think the probability of a smallpox attack is?” “I dunno, Marny, what do you think the probability of a smallpox attack is?” we have something to concentrate on: is the probability as high as 1%, or not?
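For concreteness, here is roughly how that critical value falls out of the expected-cost comparison. This is a minimal sketch, not anything from Mark’s post beyond the structure of the argument; the cost figures and the function name are made up purely for illustration, and the only number taken from his post is the 1% threshold it happens to reproduce.

```python
# A sketch of the "critical value" calculation. The cost figures below are
# hypothetical placeholders, not anyone's estimates; only the structure of
# the expected-cost comparison comes from Mark's post.

def critical_value(cost_of_vaccinating, cost_of_unvaccinated_attack,
                   cost_of_vaccinated_attack=0.0):
    """Probability of attack at which the two policies have equal expected cost.

    Expected cost of vaccinating now:   C_vacc + p * C_attack_if_vaccinated
    Expected cost of not vaccinating:   p * C_attack_if_unvaccinated
    Setting these equal and solving for p gives the critical value.
    """
    return cost_of_vaccinating / (cost_of_unvaccinated_attack - cost_of_vaccinated_attack)

# Hypothetical numbers, in arbitrary "cost units":
p_star = critical_value(cost_of_vaccinating=1.0,
                        cost_of_unvaccinated_attack=100.0)
print(f"critical value: {p_star:.1%}")  # 1.0% with these made-up figures

# Decision rule: if you think the true probability of an attack is above
# p_star, vaccinate; if below, don't. The whole argument then turns on
# whether you can credibly say which side of p_star the true probability
# falls on, which is exactly where the uncertainty bites.
```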

This is fair enough. But it doesn’t make the basic problem go away; it just focuses it more. As Mark says himself, “Thinking that the risk of a major smallpox attack from somewhere within, say, the next five years, is as low as 1% requires much more precise knowledge of hostile capacities and intentions than I think we actually have.” Which is to say that we can’t calculate the risk properly. We’re uncertain about it. Again, I’d suggest that it’s not just “paucity of data” that’s the problem here (though it is in part), but also the fact that a smallpox attack is not the same kind of event as a measles outbreak. It’s something launched by your enemy, not caused by natural processes. I still agree with Mark that the current policy seems weird (the administration says that Saddam is indeed likely to attack us with smallpox, yet that we don’t all need to be vaccinated, which doesn’t seem defensible in the absence of other data), but the basic problem of uncertainty remains.

(My general point about the uncertainty wasn’t that we should just wring our hands; it was that we should look to what decision makers within organizations actually do when put in this situation. It’s the difference between policy analysis and the sociology of organizations, I guess.)

Interestingly, in a more recent post about the “precautionary principle”, Mark argues as follows:

Ever since I heard of it, I’ve been impatient with the proposed “precautionary principle” … By what cockamamie reasoning could the fact that a risk is unknown be taken to imply that it is unacceptably large? I mean, really! But Sasha Volokh proposes a different and more plausible way to think about it than the arguments I’ve seen in the past: that we should consider the variance, as well as the expected value, in choosing risks…

The conceptually hairy issue arises where we have no good basis for calculating the risk: in the case of fundamentally new technologies. And that is the situation for which the precautionary principle was designed. Sasha is clearly right that in its most radical form the precautionary principle is self-defeating, since inaction also carries unknown risks. But it looks from the argument above as if any proposal where a plausible story can be told of truly catastrophic risk (i.e., risks equivalent to substantial fractions of total national or world wealth) ought to be forbidden until the probability attached to the risk can be plausibly quantified. This is much stronger than Sasha’s proposed “slight bias” against risk; it’s most of the way to the precautionary principle itself, as long as the worst conceivable case is bad enough. [Emphasis added.]
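One way to make Sasha’s “consider the variance, as well as the expected value” suggestion concrete is a simple mean-variance scoring rule. The sketch below is my own illustration, not anything Mark or Sasha wrote; the risk-aversion weight and the toy outcome distributions are invented, and the function name is hypothetical.

```python
# A sketch of the "variance matters too" idea: score each option by its
# expected cost plus a penalty proportional to the variance of its outcomes.
# The risk_aversion weight and the outcome numbers are invented for illustration.

def risk_adjusted_cost(outcome_costs, probabilities, risk_aversion=0.5):
    """Expected cost plus a variance penalty (a simple mean-variance criterion)."""
    expected = sum(p * c for p, c in zip(probabilities, outcome_costs))
    variance = sum(p * (c - expected) ** 2 for p, c in zip(probabilities, outcome_costs))
    return expected + risk_aversion * variance

# "Proceed" with the new technology: almost always a modest benefit
# (negative cost), with a tiny chance of catastrophe.
proceed = risk_adjusted_cost(outcome_costs=[-10.0, 10_000.0],
                             probabilities=[0.999, 0.001])

# "Hold off": a certain, moderate opportunity cost.
hold_off = risk_adjusted_cost(outcome_costs=[5.0], probabilities=[1.0])

print(f"proceed: {proceed:,.1f}   hold off: {hold_off:,.1f}")
# With these made-up numbers, "proceed" has the lower expected cost, but the
# variance penalty on its catastrophic tail swamps everything else; this is
# the sense in which a bad enough worst case can kill a project even at what
# would otherwise be a negligible probability.
```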

Surely there’s a connection to the smallpox question here. Why wouldn’t this argument apply if we substitute the smallpox threat for a new technology? An important difference is that the unknown benefits and hazards are wrapped up together in the technology case but separated into different actions in the smallpox case. But if we can tell a “plausible story” of “truly catastrophic risk” (e.g., a smallpox attack), then this “ought to be forbidden” (i.e., in this case, guarded against by vaccination). So “risk aversion means that a sufficiently bad ‘worst case’ ought to be enough to kill a project, even with what would otherwise be a negligible probability attached to it.” Because he’s thinking about technology, Mark says, “That’s not the answer I wanted, but it seems to be the one I just got.” Doesn’t the same chain of reasoning give you the answer he does want in the case of smallpox, even though in general he thinks the precautionary principle should be called “the Animal Crackers principle”?

I’ve probably made a mistake somewhere (I’m tired). But on the face of it, the crosstalk between Mark’s two posts suggests a problem with being so averse to the precautionary principle and so in favor of universal smallpox vaccination at the same time.