The two-sided quality of the connection between departments and specialties invites us to find ways of visualizing them both at the same time. But the large number of departments and specialties makes it tricky to generate interpretable pictures. There is a large family of methods designed to map multidimensional data onto just a couple of dimensions. Here I'll take one of the more straightforward ways of doing this and apply it to the 2006 data.
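The post doesn't name the specific mapping method here, so purely as an illustration, here is a minimal sketch of one straightforward option: projecting a (wholly invented) department-by-specialty ratings matrix down to two dimensions with principal components, computed via the singular value decomposition in NumPy. The matrix values and dimensions are hypothetical, not PGR data.

```python
import numpy as np

# Hypothetical ratings matrix: rows are departments, columns are
# specialty areas. All values are invented for illustration only.
ratings = np.array([
    [4.5, 3.0, 4.0],
    [2.0, 4.5, 3.5],
    [3.0, 2.5, 4.5],
    [4.0, 4.0, 2.0],
])

# Center each column, then use the SVD to project every department onto
# the two directions of greatest variation (i.e., a 2-D PCA picture).
centered = ratings - ratings.mean(axis=0)
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
coords = U[:, :2] * s[:2]   # one (x, y) point per department

print(coords.shape)  # one row of plotting coordinates per department
```

Each row of `coords` is then a point that can be plotted, with departments that have similar specialty profiles landing near one another.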
One of the nice features of the PGR data is the duality in the relationship between departments and specialties. Departmental identities are defined in part by the kind of specialized work that gets done in them. The identity of areas is associated with particular departments and schools (with a large or small ‘s’). The PGR data lets us see some of this association, and of course also make the link between this relationship and overall status.
I want to get to the department-level stuff today instead of just looking at the raters, but I promised yesterday that I'd say something about the relationship between the field position of raters and their voting patterns. As with specialty areas, where you stand might depend on where you sit. If we slice raters into groups based on the PGR rating of their employer, we can calculate overall PGR scores based just on the votes from within each group, as we did with the specialty areas.
Yesterday we saw that raters come mostly from the top half of PGR-ranked schools, with a good chunk of them from very highly ranked schools. We also saw that specialty areas are not equally represented in the rater pool. (Specialty areas are not equally represented within departments, either, because not all subfields have equal status—more on that later.) Are voting patterns in the 2006 data connected to the social location of raters?
As with the current report, the 2006 rankings listed the names and affiliations of those who participated, along with the survey instrument and a bit of information about the response patterns of raters. Based on this information, we can say a little about where the raters come from. For example, in 2006 about sixty-five percent of raters were based in the U.S., eighteen percent in the UK, eight percent in Canada, five percent in Australia or New Zealand, and the small remainder elsewhere.
I come in peace. As Brian mentioned last week, I'm going to be guesting on his blog for the next few days. For those of you who don't know me—which I imagine is most of you—I am a sociologist; I teach at Duke University, both in my home department and at the Kenan Institute for Ethics; and for the past nine years or so I've been a blogger at Crooked Timber. Initially, I was tempted to treat this gig the way people tend to treat philosophers they meet in bars—viz., aggressively tell you all what my philosophy is, perhaps make a truly original joke that comes with fries, or maybe sketch out my own interpretation of two-dimensionalism.