I taught my Data Visualization seminar in Philadelphia this past Friday and Saturday. It covers most of the content of my book, including a unit on making maps. The examples in the book are from the United States. But what about other places? Two of the participants were from Canada, and so here’s an example that walks through the process of grabbing a shapefile and converting it to a simple-features object for use in R.
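The core of that process can be sketched in a few lines, assuming you have the sf package installed and a shapefile on disk (the path `canada.shp` here is a placeholder — substitute whatever you download, e.g. one of Statistics Canada's cartographic boundary files):

```r
library(sf)       # simple features for R
library(ggplot2)

# st_read() reads the .shp file (and its sidecar files) into an
# sf object: a data frame with a list-column of geometries.
canada <- st_read("canada.shp")

# Because an sf object is just a data frame, it plugs straight
# into ggplot2 via geom_sf():
ggplot(canada) +
  geom_sf() +
  coord_sf()
```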
As part of the run-up to the release of Data Visualization (out in about ten days! Currently 30% off on Amazon!), I’ve been playing with graphing different kinds of data. One great source of rich time-series data is mortality.org, which hosts a collection of standardized demographic data for a large number of countries. Mortality rates are often interesting to look at as a heatmap, as we get data for a whole series of ages across many years.
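The basic shape of such a plot is a tiled heatmap, which in ggplot2 is `geom_tile()`. A minimal sketch with fake data standing in for the mortality.org rates (the real series are heavily skewed, so a log-scaled fill is usually a good idea):

```r
library(ggplot2)

# Fake tidy mortality data: one row per year-age combination.
# Real rates from mortality.org would replace this.
dat <- expand.grid(year = 1950:2015, age = 0:100)
dat$rate <- exp(-8 + 0.09 * dat$age)

# Year on the x-axis, age on the y, rate as the fill color.
ggplot(dat, aes(x = year, y = age, fill = rate)) +
  geom_tile() +
  scale_fill_viridis_c(trans = "log10") +
  labs(x = "Year", y = "Age", fill = "Mortality rate")
```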
Here’s a feature of dplyr that occasionally bites me (most recently while making these graphs). It’s about to change, mostly for the better, but it is also likely to bite me again in the future. If you want to follow along, there’s a GitHub repo with the necessary code and data.
Say we have a data frame or tibble and we want to get a frequency table or set of counts out of it.
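A quick sketch of the two usual routes, with a toy tibble standing in for the data in the post. The second route is where the gotcha lives: `summarize()` peels off only the innermost level of grouping, so a subsequent `mutate()` computes within the remaining groups, not over the whole table:

```r
library(dplyr)

df <- tibble(
  sex  = c("F", "F", "M", "M", "M"),
  vote = c("Yes", "No", "Yes", "Yes", "No")
)

# The one-liner: a frequency table of counts per combination.
df %>%
  count(sex, vote)

# The longhand version. After summarize(), the result is still
# grouped by sex, so prop is the proportion *within* sex —
# not the overall proportion, which is easy to miss.
df %>%
  group_by(sex, vote) %>%
  summarize(n = n()) %>%
  mutate(prop = n / sum(n))
```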
Since the U.S. midterm elections I’ve been playing around with some Congressional Quarterly data about the composition of the House and Senate since 1945. Unfortunately I’m not allowed to share the data, but here are two or three things I had to do with it that you might find useful.
The data comes as a set of CSV files, one for each congressional session. You download the data by repeatedly querying CQ’s main database by year.
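Reading a directory of session-by-session CSV files into one data frame is a common first step. A sketch with readr and purrr, assuming the files sit in a `data/` directory (the layout is hypothetical — adapt the path and pattern to your own download):

```r
library(readr)
library(purrr)

# All CSV files in data/, with full paths.
filenames <- dir("data", pattern = "\\.csv$", full.names = TRUE)

# Read each file and row-bind the results, keeping the source
# filename as an identifier column so we know which session
# each row came from.
all_sessions <- filenames %>%
  set_names() %>%
  map_dfr(read_csv, .id = "source")
```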
Earlier this year my colleague Steve Vaisey was converting code in some course notes from Stata to R. He asked me a question about tidily converting from long to wide format when you have multiple value columns. This is a little more awkward than it should be, and I’ve run into the issue several times since then. I’m writing down the answer (or, an answer) here so that I can find it again myself.
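These days the cleanest route is tidyr’s `pivot_wider()`, which (unlike the older `spread()`) accepts several value columns at once. A sketch with made-up data:

```r
library(tidyr)
library(dplyr)

# Long data with two value columns per country-year combination.
df <- tribble(
  ~country, ~year, ~mean, ~sd,
  "A",      1990,   5.1,  0.8,
  "A",      2000,   6.0,  0.7,
  "B",      1990,   4.2,  1.1,
  "B",      2000,   4.9,  0.9
)

# Spreading on year with two value columns produces combined
# names: mean_1990, mean_2000, sd_1990, sd_2000.
df %>%
  pivot_wider(names_from = year, values_from = c(mean, sd))
```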
To close out what has become demography week, I combined the US monthly birth data with data for England and Wales (from the same ONS source as before), so that I could look at the trends together. The monthly England and Wales data I have to hand runs from 1938 to 1991. I thought combining the monthly tiled heatmap and the LOESS decomposition would work well as a poster, so I made one.
Amateur demography week continues around here. Today we are looking at the population of England and Wales since 1961, courtesy of some data from the UK Office for National Statistics. We have data on population counts by age (in nice, detailed, yearly increments) broken down by sex. We’re going to tidy the data, make a pyramid for a year, and then make an animated gif that shows the changing age distribution of the population over more than fifty years.
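The pyramid itself rests on one trick: negate one sex’s counts so the bars extend in opposite directions, then flip the axes. A sketch with fake counts standing in for the ONS data:

```r
library(dplyr)
library(ggplot2)

# Fake tidy counts: one row per age-sex combination for one year.
pop <- expand.grid(age = 0:90, sex = c("Male", "Female"))
pop$count <- round(500000 * exp(-pop$age / 40))

pop %>%
  mutate(count = ifelse(sex == "Male", -count, count)) %>%
  ggplot(aes(x = age, y = count, fill = sex)) +
  geom_col(width = 1) +
  coord_flip() +
  scale_y_continuous(labels = function(x) scales::comma(abs(x)))
```

For the animated version, gganimate can step the same plot through the years with `transition_time(year)`, given data for every year rather than just one.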
Yesterday I came across Aaron Penne’s collection of very nice data visualizations, one of which was of monthly births in the United States since 1933. He made a tiled heatmap of the data, taking care when calculating the average rate to correct for the varying number of days in different months. Aaron works in Python, so I took the opportunity to play around with the data and redo the plots in R.
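The day-count correction is straightforward in R, since `lubridate::days_in_month()` knows each month’s length, leap years included. A sketch with made-up totals:

```r
library(lubridate)

# Made-up monthly birth totals, one row per month.
births <- data.frame(
  date  = as.Date(c("2000-01-01", "2000-02-01", "2000-04-01")),
  total = c(330000, 317000, 312000)
)

# Average daily births: divide the monthly total by the number
# of days in that particular month (29 for Feb 2000, a leap year).
births$avg_daily <- births$total / days_in_month(births$date)
```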
On Twitter the other day, Philip Cohen put up some data on changes in Bachelor’s degrees awarded between 1995 and 2015. The data come from the National Center for Education Statistics. It seemed like a good candidate for drawing as a figure, so I had a go at it:
Changes in the number of Bachelor’s degrees awarded over the past twenty years.
Afterwards, I was messing around with the data and wanted to draw some time-series plots for the various subject areas the NCES tracks.
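For a series per subject area, small multiples via `facet_wrap()` are the natural starting point. A sketch assuming tidy data with one row per field-year combination (the fields and counts here are made up, not the NCES figures):

```r
library(ggplot2)

nces <- expand.grid(
  field = c("Sociology", "History", "Computer Science"),
  year  = 1995:2015
)
nces$degrees <- 20000 + 500 * (nces$year - 1995)

# One panel per subject area, degrees over time.
ggplot(nces, aes(x = year, y = degrees)) +
  geom_line() +
  facet_wrap(~ field) +
  labs(x = "Year", y = "Bachelor's degrees awarded")
```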
Data Visualization: A Practical Introduction will be published later this year by Princeton University Press. You can read a near-complete draft of the book at socviz.co. If you would like to receive one (1) email when the book is available for pre-order, please fill out this very short form. The goal of the book is to introduce readers to the principles and practice of data visualization in a humane, reproducible, and up-to-date way.