Mark Kleiman has been following the smallpox vaccination story pretty closely, and argues that the administration has wimped out by not getting everybody vaccinated. He points to a recent RAND study in the NEJM which calculates the expected number of deaths under several scenarios of vaccination policy and subsequent attack. Mark makes some effective criticisms of the study’s assumptions and then comments:

The estimated death toll from mass national vaccination is about 500: about 3 per million vaccinated. Are we really going to put the whole country at risk of a devastating attack in order to avoid 500 deaths? Now primum non nocere—first, do no harm—is a valuable maxim of medicine. But if doing no harm were all we expected of our health care system, I know where we could save a trillion and a half dollars a year. At some point, health care needs to do some good, even at some risk.

The key question is: how likely is a smallpox attack, either on the United States or elsewhere? Do we really have any idea? If we could attach a probability to that question, the policy problem would solve itself, because we could use that number to calculate the expected benefits (in terms of deaths averted) of various vaccination strategies. I don't think we can estimate the risk of an attack with much confidence. Instead of being able to rationally bet on some outcome, we are simply uncertain about what the facts are. The distinction matters. If you know the risks, that is, the probabilities of the various outcomes, then you can make an informed decision. But if you are merely uncertain, then, even though you know there is a problem to be addressed, you lack the most important pieces of information about it. The RAND study (and the cost-benefit approach in general) can tell us what we should do given certain probabilities, but not what those probabilities actually are. If the probability of a smallpox attack were in fact zero, for instance, then Mark's comment would amount to "Why don't we spend a lot of money to kill 500 people?" Conversely, if the risk is much higher than we think, the administration is being very irresponsible by not vaccinating everyone.
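To see how completely the answer hinges on that unknown probability, here is a minimal back-of-the-envelope sketch. Only the ~500 vaccination deaths come from the RAND study; the attack-size figures are hypothetical placeholders I've made up for illustration, which is exactly the problem.

```python
# Expected-deaths comparison for two policies: mass vaccination vs. none.
# VACCINATION_DEATHS is the RAND estimate; the two attack-size numbers
# below are purely hypothetical stand-ins for quantities nobody knows.

VACCINATION_DEATHS = 500                    # RAND estimate (~3 per million)
DEATHS_IF_ATTACK_UNVACCINATED = 100_000     # hypothetical
DEATHS_IF_ATTACK_VACCINATED = 1_000         # hypothetical

def expected_deaths(p_attack: float, vaccinate: bool) -> float:
    """Expected death toll given an attack probability and a policy."""
    if vaccinate:
        return VACCINATION_DEATHS + p_attack * DEATHS_IF_ATTACK_VACCINATED
    return p_attack * DEATHS_IF_ATTACK_UNVACCINATED

# The optimal policy flips at the break-even probability:
p_star = VACCINATION_DEATHS / (DEATHS_IF_ATTACK_UNVACCINATED
                               - DEATHS_IF_ATTACK_VACCINATED)
print(f"Vaccinate only if P(attack) > {p_star:.4f}")  # ~0.005 with these numbers
```

The arithmetic is trivial; the catch is that p_star is only useful once you plug in attack probabilities and casualty figures that nobody actually has.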

Now, I don’t think the probability of an attack is zero. But neither do I think it will be easy to figure out what that risk actually is. Worse still, the probability of a smallpox outbreak is not like the probability of a natural event such as a measles or whooping-cough epidemic. An attack can be deliberately instigated (and, more optimistically, perhaps deterred as well). Nevertheless, we still have to do something. What sort of decisions will we tend to make under these conditions?

It’s happened before in somewhat similar cases. When HIV appeared in the blood supply, blood banks and plasma companies reacted to the problem in different ways, depending on their structural position vis-à-vis their suppliers and the form of their exchange relationship with them. (You can read a paper of mine about this, if you’re interested.) The blood banks made the wrong calculation. They thought that a few isolated cases of people dying from odd, opportunistic diseases could safely be ignored in a huge supply system. They confidently said that the “risk” of infection was “less than one in a million.” But of course they couldn’t really calculate the risks, because they didn’t have the right data. Worse, they misread the data they did have: some epidemiologists looked at the same few cases and (correctly, in retrospect) saw the first points on a fast-growing infection curve. In this case, a slow and conservative response led to disaster.

By contrast, the Swine Flu vaccination program of the 1970s ended up killing more people than the disease it was supposed to protect against. (There was no Swine Flu epidemic after all.) Diana Dutton’s Worse than the Disease analyzes this case. When HIV came along, the Swine Flu experience was in the minds of many decisionmakers as a cautionary example of a quick and aggressive response to a threat that wasn’t really there.

In both cases, organizations made decisions using the language of risk assessment, but what they were really doing was vaguely weighing uncertainties. The choices they made said more about their structural position and political obligations than about real risks. This seems to be a chronic problem for organizations when there’s not much real data to go on.

Mark hopes that the Administration’s decision to vaccinate health workers but not the general public

means that we have good intelligence showing that Iraq doesn’t have the capacity to mount a smallpox attack. If we’re about to be at war with Iraq, and if we’re not morally certain that Iraq has no such capacity, it’s time for some actual homeland defense.

I’d tend to agree with this logic. I think it’s very unlikely that Iraq would launch a smallpox attack on the U.S., given the possible consequences for them. (Of course, this is just a guess, which is precisely the problem, and presumably the government has much better data on this than I do.) It seems to me, however, that the issue is not so much one of cost-benefit risk assessment as a game-theoretic question of deterrence. If the U.S. wants to invade, and it knows that smallpox is a weapon in Iraq’s arsenal, then it should take steps to deter the use of that weapon. It could do that by vaccinating everyone, which would make smallpox nearly worthless as a weapon. If the U.S. knows that Iraq can’t mount such an attack, then deterrence isn’t necessary and neither is universal vaccination. But at the moment the government is saying both that Iraq has these weapons and that mass vaccination isn’t needed. Is this message the result of a careful weighing of risks based on real data, or a reflection of the political position the administration finds itself in?
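The deterrence point can be put in the same back-of-the-envelope terms. In this toy sketch, every payoff number is hypothetical; the only claim it illustrates is the structural one, that mass vaccination lowers the damage a smallpox attack can inflict, which lowers its value to the attacker and so can deter its use entirely.

```python
# Toy deterrence model (all payoffs hypothetical, in arbitrary units).
# An attacker launches only if the damage inflicted outweighs the
# expected cost of retaliation; vaccination shrinks the damage term.

def attack_payoff(damage_inflicted: float, retaliation_cost: float) -> float:
    """Attacker's net payoff from launching a smallpox attack."""
    return damage_inflicted - retaliation_cost

RETALIATION_COST = 50.0      # hypothetical
DAMAGE_UNVACCINATED = 80.0   # hypothetical
DAMAGE_VACCINATED = 5.0      # hypothetical

for policy, damage in [("no vaccination", DAMAGE_UNVACCINATED),
                       ("mass vaccination", DAMAGE_VACCINATED)]:
    deterred = attack_payoff(damage, RETALIATION_COST) <= 0
    print(f"{policy}: attack {'deterred' if deterred else 'worth launching'}")
```

Again, the structure is simple and the numbers are the hard part: whether deterrence holds depends on payoffs that only the intelligence community is in any position to estimate.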