About nine months ago, my colleague Steve Vaisey told me he was interested in organizing a session at the American Sociological Association Meetings about the idea of “nuance” in sociological theory, and in particular about how there seemed to be a lot of demand for the stuff. He asked me if I’d be interested in submitting a paper called something like “Against Nuance”. I replied that if you were going to do something like that, you should just go ahead and call it “Fuck Nuance” and be done with it.
The other day, Jonathan Marshall posted a nice graphic showing population age profiles of electoral constituencies in New Zealand, ordered by their tendency to vote left or right. He put the data on GitHub, and on a long transatlantic flight yesterday I ended up messing around with it a bit.
Almost the only bit of Demography I know is the old saw that women get sicker but men die quicker. So I thought I’d take a look at differences in the sex composition of age cohorts by constituency.
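The actual analysis was done on Jonathan Marshall's data, but the core calculation is simple enough to sketch. Here is a minimal Python version of a cohort sex-ratio computation, using invented counts for a single hypothetical constituency (the real cohort labels and figures come from the GitHub data):

```python
# Sex ratio (women per 100 men) by age cohort for one constituency.
# The cohort labels and counts below are invented for illustration.
cohorts = {
    "18-29": {"female": 5200, "male": 5100},
    "30-44": {"female": 6100, "male": 5800},
    "45-64": {"female": 7400, "male": 7000},
    "65+":   {"female": 5900, "male": 4600},  # "men die quicker"
}

def sex_ratio(counts):
    """Women per 100 men in a cohort."""
    return 100 * counts["female"] / counts["male"]

ratios = {cohort: round(sex_ratio(c), 1) for cohort, c in cohorts.items()}
```

With numbers like these, the ratio climbs steadily with age, which is the demographic regularity the old saw is pointing at.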
In an effort to not lose all of my lucrative Consulting Thinkfluanalyst income to the snowman, I redrew my LOESS and LTS decompositions of Apple’s quarterly sales data by product. They now extend to Q2 2015. First, here’s a plot of the trends showing the individual sales figures with a LOESS smoother fitted to them.
Figure 1. Quarterly sales data for Apple Macs, iPhones, and iPads.

Here's the Mac by itself, which continues to grow healthily (unlike the rest of the PC industry), just on a smaller scale than other Apple products.
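The plots themselves were made in R, but the idea behind a LOESS smoother is easy to show. Here is a pure-Python sketch of the core step, a locally weighted linear fit with tricube weights, run over an invented quarterly series (not Apple's actual figures); a real analysis would use R's loess() or a statistics library rather than this toy:

```python
def loess_point(x0, xs, ys, frac=0.5):
    """Locally weighted linear fit at x0 (the core of a LOESS smoother).

    Uses the nearest `frac` share of points, weighted by the tricube
    kernel, and returns the fitted value at x0.
    """
    n = len(xs)
    k = max(2, int(frac * n))
    # keep the k points nearest to x0
    nearest = sorted(range(n), key=lambda i: abs(xs[i] - x0))[:k]
    d_max = max(abs(xs[i] - x0) for i in nearest) or 1.0
    w = {i: (1 - (abs(xs[i] - x0) / d_max) ** 3) ** 3 for i in nearest}
    # weighted least squares for y = a + b*x, evaluated at x0
    sw = sum(w.values())
    mx = sum(w[i] * xs[i] for i in nearest) / sw
    my = sum(w[i] * ys[i] for i in nearest) / sw
    sxx = sum(w[i] * (xs[i] - mx) ** 2 for i in nearest)
    sxy = sum(w[i] * (xs[i] - mx) * (ys[i] - my) for i in nearest)
    b = sxy / sxx if sxx else 0.0
    return my + b * (x0 - mx)

# Invented quarterly sales (millions of units): upward trend plus noise.
quarters = list(range(12))
sales = [10, 12, 11, 16, 14, 17, 15, 22, 19, 23, 21, 30]
smooth = [loess_point(q, quarters, sales) for q in quarters]
```

Evaluating the fit at each quarter in turn traces out the smooth trend line that sits on top of the noisy seasonal points in the figure.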
The other day at Daily Nous, Justin asked about so-called “Sleeping Beauty” papers in Philosophy:
"Sleeping Beauty" papers "lie dormant for years before experiencing a sudden spike in citations as they are discovered and recognized as important." A recent article in Nature discussed scientific papers that have slumbered for decades … Are there sleeping beauty papers in philosophy? (I mean, of course, besides that paper of yours from a few years back that no one has cited…yet.)
Choropleth maps of the United States are everywhere these days, showing various distributions geographically. They're visually appealing and can be very effective, but then again not always. They're vulnerable to a few problems. In the U.S. case, the fact that states and counties vary widely in size and population means they can be a bit misleading. And they make it easy to present a geographical distribution in a way that insinuates an explanation.
This morning, Social Science Twitter is consumed by the discovery of fraud in a very widely-circulated political science paper published last year in Science magazine. “When contact changes minds: An experiment on transmission of support for gay equality”, by Michael LaCour and Donald Green, reported very strong and persistent changes in people’s opinion about same-sex marriage when voters were canvassed by a gay person. The paper appeared to have a strong experimental design and, importantly, really good follow-up data.
The hosts at Accidental Tech Podcast have been thinking about how to broaden their base of listeners to include more women. Good for them. They’re getting plenty of advice (and a certain amount of flak), which I won’t add to. But in general when doing this kind of thing it can be helpful to look back on what your past practice has been. For example, it can be useful to audit one’s own habits of linking and engagement.
This UK Election data is really too much fun to play around with. Here’s a (probably final) collection of pictures. First, a map of the turnout (that is, the percentage of the electorate who actually voted) by constituency, with London highlighted for a bit more detail.
Constituencies by Turnout.

There's a strong suggestion here that Labour areas have lower turnout. Here's a scatterplot of all seats showing the winning candidate's share of the electorate plotted against turnout.
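The two quantities on that scatterplot are straightforward to compute from the raw counts. A minimal Python sketch, using invented constituency names and vote totals (the real figures were scraped from the BBC's results pages):

```python
# Turnout and the winner's share of the electorate for a few invented
# constituencies. The names and counts here are made up for illustration.
seats = [
    # (name, electorate, valid votes cast, winning candidate's votes)
    ("Safeborough",  70000, 45500, 30000),
    ("Marginalton",  68000, 47600, 18000),
    ("Lowpoll West", 72000, 39600, 22000),
]

results = {}
for name, electorate, cast, winner in seats:
    turnout = round(100 * cast / electorate, 1)         # % of electorate voting
    winner_share = round(100 * winner / electorate, 1)  # winner's % of electorate
    results[name] = (turnout, winner_share)
```

Note that the y-axis is the winner's share of the whole electorate, not of votes cast, which is why low-turnout seats pull their winners down the plot even when they win comfortably.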
I'm still playing around with the UK Election data I mapped yesterday, which ended up at the Monkey Cage blog over at the Washington Post. On Twitter, Vaughan Roderick posted a nice comparison showing the proximity of many Labour seats to coalfields.
That got me thinking about how much the landscape of England is embedded in its political life. In particular, what do the names of places tell you about their political leanings?
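One crude way into that question is to tally common English place-name endings against the party that won each seat. Here is a Python sketch of the idea on a tiny invented sample; the real exercise would run over the full scraped list of constituency names and winners:

```python
from collections import Counter

# A toy sample, invented for illustration. The actual analysis would
# use every constituency name and winning party from the election data.
constituencies = [
    ("Sheffield Central", "Labour"),
    ("Wakefield", "Labour"),
    ("Mansfield", "Labour"),
    ("North East Hampshire", "Conservative"),
    ("Somerton and Frome", "Liberal Democrat"),
    ("Chesterfield", "Labour"),
    ("Maidenhead", "Conservative"),
]

def name_elements(name):
    """Crude toponymic features: common English place-name endings."""
    return [suf for suf in ("field", "ton", "head", "shire")
            if any(word.lower().endswith(suf) for word in name.split())]

by_party = Counter()
for name, party in constituencies:
    for element in name_elements(name):
        by_party[(party, element)] += 1
```

Even this toy tally hints at the pattern: names ending in "-field" (old industrial towns, often near those coalfields) cluster with Labour, while "-shire" names skew Conservative.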
The United Kingdom's election results are being digested by the chattering classes. So, yesterday afternoon I thought I'd see if I could grab the election data to make some pictures. Because the ever-civilized BBC has election web pages with a sane HTML structure, this proved a lot more straightforward than I feared. (Thanks also in no small part to statistician Hadley Wickham's rvest scraping library, alongside many other tools he has contributed to the community of social scientists who use R to do data analysis.)
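The scraping itself was done with rvest in R, but the pattern is general: fetch a page with a sane table structure and pull the rows out. Here is a stdlib-Python sketch of the same idea, run on an inline HTML fragment standing in for a results page (the fragment, constituency names, and numbers are all invented):

```python
from html.parser import HTMLParser

class TableScraper(HTMLParser):
    """Collects the text of <td> cells, one list per <tr> row --
    roughly what rvest's html_table() does for a simple table."""
    def __init__(self):
        super().__init__()
        self.rows, self._row, self._in_td = [], None, False

    def handle_starttag(self, tag, attrs):
        if tag == "tr":
            self._row = []
        elif tag == "td":
            self._in_td = True

    def handle_endtag(self, tag):
        if tag == "tr" and self._row:
            self.rows.append(self._row)
            self._row = None
        elif tag == "td":
            self._in_td = False

    def handle_data(self, data):
        if self._in_td and self._row is not None:
            self._row.append(data.strip())

# A stand-in fragment; the real data came from the BBC's results pages.
html = """
<table>
  <tr><td>Hackney North</td><td>Lab</td><td>62.5</td></tr>
  <tr><td>Witney</td><td>Con</td><td>73.3</td></tr>
</table>
"""
scraper = TableScraper()
scraper.feed(html)
```

When the markup is as sane as the BBC's, this kind of row extraction is all the parsing you need before handing the data off for plotting.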