Thu Jan 23, 2003

A Lott of Old Rosh

Disclaimer: I’m a practising social science researcher who does a fair amount of quantitative analysis, but I’m not an expert on sampling methods or data weighting. Proceed with that in mind.

Kevin Drum has been following the increasingly bizarre John Lott affair. If you know the score, skip to the next paragraph. To get your bearings, read these posts by Mark Kleiman, Tim Lambert, James Lindgren, Julian Sanchez and John Quiggin. Kevin has a question about the survey Lott claims to have conducted in January 1997 on defensive gun use (DGU). Read his post to get the context.

Three points about all of this:

First, it does seem possible, in principle, to get the results Lott is claiming. You’d need the right kind of respondents (in demographic terms) giving the right kind of answers. In other words, if a youngish white male from California had fired a gun, he would have been upweighted in Lott’s survey (or at least not downweighted eight-fold) and the “shot at attacker” percentage would have been higher. As it is, if a certain kind of person is overrepresented in the sample (Lott’s black Vermonter [search for ‘Vermont’]) and does the right thing (e.g., fires the gun), then that case can be downweighted and the numbers can come out right. It seems a little unlikely.
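To make the weighting point concrete, here is a minimal Python sketch with invented numbers (nothing below comes from Lott’s survey): ten imaginary DGU respondents, two of whom fired, with those two downweighted roughly eight-fold.

```python
# Hypothetical illustration only -- none of these numbers are from Lott's
# survey. Ten imaginary DGU respondents, two of whom fired a gun.
# Downweighting those two by roughly eight-fold drags the
# "fired at attacker" percentage down sharply.

# (did_fire, post-stratification weight) -- all invented
dgu_cases = [(True, 0.125), (True, 0.125)] + [(False, 1.0)] * 8

def pct_fired(cases, weighted=True):
    """Share of DGU cases in which the gun was fired, optionally weighted."""
    total = fired = 0.0
    for did_fire, weight in cases:
        w = weight if weighted else 1.0
        total += w
        if did_fire:
            fired += w
    return 100.0 * fired / total

print(f"unweighted: {pct_fired(dgu_cases, weighted=False):.1f}%")  # 20.0%
print(f"weighted:   {pct_fired(dgu_cases):.1f}%")                  # about 3%
```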

Second, it’s clear that Lott is leveraging a tiny number of observations to make his claim. After all, the core issue is how gun use breaks down within the DGU category. But Lott’s survey design is mainly good for telling you how common DGU incidents are in the population. In my view, he should have oversampled people who have experienced DGUs, to get some statistical power over that subpopulation. You could sample, say, 1,000 non-DGU respondents and collect no further data about them, but keep phoning until you found, say, 100 DGU respondents. That way you have enough cases to see with some confidence how that subpopulation breaks down in terms of gun use. Lott instead weights his tiny number of ‘DGU positive’ cases to leverage them into a national estimate. But there aren’t enough cases to generalize about kinds of DGU incidents. I don’t believe the estimates.
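A rough sketch of why the number of DGU cases is what matters, again in Python and with an invented “fired” share: the margin of error on a within-DGU proportion depends on the number of DGU cases, not on the overall sample size.

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a sample proportion
    (normal approximation; shaky for very small n, which is the point)."""
    return z * math.sqrt(p * (1 - p) / n)

p_fired = 0.10  # invented share of DGU incidents in which a shot was fired
for n_dgu in (25, 100, 400):
    moe = margin_of_error(p_fired, n_dgu)
    print(f"{n_dgu:4d} DGU cases: {p_fired:.0%} +/- {moe:.1%}")
# With only a couple of dozen DGU cases, the estimate is 10% +/- about 12
# points; with a few hundred, the interval narrows to a few points.
```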

Third, and more important, all of this is really a very small point in the larger context of the survey’s apparent non-existence. I picked the brains of a few people who know more about sampling methods than I do. In each case, I had trouble getting to the weights issue because they were laughing so much at the background information. Lott says he has no dataset, no paper records of any kind, no memory of the precise wording of questions in the survey instrument, and no recollection of the names of the students involved in the data collection. He did not apply for any funding, paid for the survey out of his own pocket, and did not collect the data via a phonebank. Instead, “one of the students had a program to randomly sample the telephone numbers by state. My guess is that it was part of the [marketing] CD [he obtained from an unknown source and no longer has], but on that point I can’t be sure.” Lott claims that he had two students on the job, working from their own phones, and they “had also gotten others that they knew from other campuses from places such as I think the University of Illinois at Chicago circle (but I am not sure that I remember this accurately).” Did they all get copies of the CD and its “program”? Did Lott do anything to oversee the data collection and coding? How was it all collated? Phone surveys have low response rates. Getting 2,424 respondents would have meant the RAs making at least twice that number of calls. That’s a lot of long-distance calls to be making from your dorm room.
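A back-of-the-envelope version of that last point. The completion rates below are assumptions for the sake of illustration, not figures from Lott or anyone else; the 2,424 is the reported number of respondents.

```python
# Back-of-the-envelope call volume for a cold phone survey.
# The completion rates are illustrative assumptions only.
completed_interviews = 2424
for completion_rate in (0.50, 0.33, 0.25):
    calls = completed_interviews / completion_rate
    print(f"at a {completion_rate:.0%} completion rate: about {calls:,.0f} dials")
# Even a generous 50% completion rate implies roughly 4,850 dials;
# a more plausible rate pushes it toward 7,000-10,000.
```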

In other words, the whole thing sounds ridiculous. In any event, it seems trivially obvious to me that you shouldn’t make claims on the basis of data you don’t have. This is especially true when your claims are inconsistent with all of the other available data on this issue.

One point came up which I haven’t seen mentioned before: if Lott did the survey while at the University of Chicago, why didn’t he go through their Institutional Review Board? Federal law says you can’t conduct research involving human subjects without first obtaining IRB approval. Does the Chicago IRB have any record of Lott going through Human Subjects review? Has he given any reason why he didn’t?

Finally, as Mark Kleiman has pointed out, the gun control debate doesn’t stand or fall on this alleged survey and its supposed findings. But it’s clear, I think, that Lott’s credibility does. For a new comprehensive analysis of Lott’s main thesis, see the recent Stanford Law Review article by Ayres and Donohue, “Shooting Down the More Guns, Less Crime Hypothesis” (text [pdf], figures [pdf]). Thanks to Iain Coleman for the link.