Note: This is the text of my contribution to a panel at the SASE meetings at UC Berkeley last Sunday. My role was to tee up the discussion. The other panelists were Maciej Cegłowski, Stuart Russell, and AnnaLee Saxenian. My remarks draw on work that Marion Fourcade and I have been doing on information technology and markets, but she should not be held responsible for anything here, especially the bits about 18th century French intellectuals. Maciej has also published his remarks — they’re excellent, and you should read them.

In its original sense, the term “moral economy” refers to some kind of informal but forceful collective control over the market. It’s the original wisdom of crowds. It puts justice over efficiency, fairness over freedom, and community expectations over individual opportunity. Its most prominent exponents, E.P. Thompson (1971) and James Scott (1976), had in mind, respectively, 18th century English crowds angry about the price of bread, and norms of reciprocity amongst rice-farming peasants in 20th century Southeast Asia. Both settings are quite far removed from the moralized, technologically enabled but passive-aggressive struggle that unfolded in the Uber I took on Friday evening, on my way here from the airport. The 101 was backed up all the way from the Bridge to Portola. My driver got agitated. “I know a shortcut”, he said, and exited onto some surface streets south of the Mission. “But Google Maps says everything is completely jammed”, I replied. “You should just stay on the highway.” “If I cut over to Folsom, it’ll be faster,” he said. “No, just do what the Google Maps Voice says, for God’s sake. It knows better than you! Don’t make me give you a bad rating!” I didn’t say that last part out loud, of course, because I am a conflict-averse person. In my mind’s eye, the ghost of E.P. Thompson looked at me in a disgusted sort of way.

The Ideologues of Progress

While the original idea of a moral economy doesn’t fit so well with the world of information technology, a slightly expanded one does quite easily. New technologies are constantly counting and classifying your actions in an effort to make you a better person. Their promoters and investors constantly moralize about their products, too. They do it so much that the goal of software “making the world a better place” is a stock joke, as in the show Silicon Valley. This kind of moral economy is not about justice or fairness. Instead, it evangelizes social progress through technological disruption. This vision has deep historical roots that are uncomfortably entwined with the origins of the social sciences.

The precursors of modern social science were “ideologues of progress”, in Krishan Kumar’s (1986) phrase. They had vivid ideas about what the future would look like; they insisted on the connection between social change and moral progress; and they had strong views about the role of science in this process. We see this first in the Scottish Enlightenment, and then in France with the philosophes of the eighteenth and early nineteenth centuries—people like Turgot, and Condorcet, and especially Henri Saint-Simon. They coined terms like “individualism”, “industrialism”, “socialism”, “the organization of labor”, and “stages of development”, or used them in their modern sense for the first time. Their successor and disciple Auguste Comte gave us the word “sociology”, as you probably know, but also the word “altruism”, as he fleshed out his positivist religion of scientific humanism.1

They saw themselves as having discovered the stages society passed through, which in their view made their ideas scientific rather than political. They almost all thought that authority in the society of the future would be grounded in scientific knowledge. They had fabulous plans for the role of scientists, including social scientists. They would constitute the supreme source of authority within the state. Saint-Simon’s version, or one of them, was called the Council of Newton. Comte’s was going to be called the Positive Occidental Committee, a permanent council of his new Religion of Humanity. He sensibly said it was to meet “usually in Paris”, and would consist of “eight Frenchmen, seven Englishmen, six Germans, five Italians, and four Spaniards.” He designed a flag for it and everything.2

These ideas may sound a little nutty. But in some ways Comte had the right idea—he just backed the wrong social-scientific horse, and picked the wrong city. His and his predecessors’ ideas about social progress, scientific knowledge, and practical administration were hugely influential right across the political spectrum. They promised the elimination of politics and its replacement with rational administration by men of knowledge and expertise. The Saint-Simonian vision became what Hayek called “the religion of the engineers”, full of faith in the power of rational expertise. That religion is very much still with us. It’s in our institutions and our culture everywhere from the Federal Reserve Board on down to comment threads on Hacker News. And it’s there in the need that tech firms feel to say they’re going to use the data they have collected to Make the World a Better Place.

Contemporary social theorists no longer expect to be priests of a new society. These days they are mostly outside the bubble, spending their time coining terms to describe it. Meanwhile, in recent years, the technology sector has massively accelerated the demand for the collection and analysis of data while also gradually diminishing the role of specifically social-scientific expertise in its evaluation. A few people are lucky enough to get access to private treasure-houses of data at places like Facebook or Uber. But mostly, these firms are managing and analyzing their data for themselves. The ideology of progress has been cut loose from social science and grafted on to big data and its handmaiden, data science. More recently, those terms are starting to be displaced by the idea of “Artificial Intelligence”. I think this has happened because of the so-called Internet of Things. Now that everyone has a powerful little networked computer in their pocket, it’s time to put one in everything else, too. So computers aren’t just being used to track you and sell ads, but also to do other things, from vacuuming your floor to ordering your groceries to driving you around.

Does the Thing Really Work?

In his book The Sneetches (1961), Dr. Seuss discusses the disruptive entrepreneur Sylvester McMonkey McBean, a pioneer in the development of smart devices that satisfy the needs of socially connected groups with strong community values:

“Just pay me your money and hop right aboard!”
So they clambered inside. Then the big machine roared.
And it klonked. And it bonked. And it jerked. And it berked.
And it bopped them about. But the thing really worked!

McBean’s device was a pernicious technology of social classification. But I think it’s important to keep in mind that, as Seuss points out, the thing really worked. It really did put stars on the bellies of the Sneetches who had none upon thars, and they loved it. If it hadn’t really worked it would have been pernicious as well, just in a different way.

Consider two basic experiences of our new world of smart devices and internet-enabled things. The first is the nice one. I associate it with the lives of people who live in Apple advertisements. It’s the feeling of something “just working”, that sense that a computer or device knows what you want it to do, or has anticipated a need that you have and acted on it in a pleasing way. It is a feeling of magic and delight, or at least a sense of ease and convenience. Two decades ago I got that feeling from being able to fetch photographs of Mars from a computer in Pasadena, even though I was in a flat in Ireland. A decade ago I got that feeling from watching my new phone produce my approximate location on a map, even though I already knew where I was. Last year I got it from stepping out of San Francisco Airport, touching my phone a few times, and having a car appear to take me where I wanted to go. In five years’ time, in ten years’ time, I expect something else will play that role, too.

The second basic experience is the bad one. I associate it with a parade of malfunctioning, misconceived, or badly designed software and smart devices.3 A hand-dryer that requires you to watch an ad before it will work. A flask that knows when it is empty and does … something. Most recently I’ve experienced it with allegedly smart devices that pretend they can talk with and understand you, but which are really just verbal command lines operating on the narrowest of gauges. If you stray from the expected path at all, the illusion of both interactivity and smartness is destroyed. That happens because indexical pronouns and contextual meanings are appallingly difficult problems to solve programmatically. Until they figure out how to interpret what you really want, the people who program the computer would much prefer you to behave like a robot with a very limited set of needs.
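To see what “verbal command line” means in practice, here is a minimal sketch in Python. Everything in it (the command phrases, the handlers, the canned apology) is invented for illustration, not taken from any real assistant; the point is only that such a device matches a few fixed patterns and gives up the moment you stray from them.

```python
import re

# A minimal sketch of a "smart" assistant as a verbal command line.
# The phrases and responses are hypothetical; real systems are fancier,
# but the basic pattern-matching shape is similar.
COMMANDS = [
    (re.compile(r"turn (on|off) the (\w+)"),
     lambda m: f"Turning {m.group(1)} the {m.group(2)}."),
    (re.compile(r"set a timer for (\d+) minutes?"),
     lambda m: f"Timer set for {m.group(1)} minutes."),
]

def respond(utterance: str) -> str:
    # Normalize lightly, then try each expected pattern in turn.
    text = utterance.lower().strip().rstrip(".!?,")
    for pattern, handler in COMMANDS:
        match = pattern.fullmatch(text)
        if match:
            return handler(match)
    # Anything off the expected path destroys the illusion.
    return "Sorry, I didn't understand that."

print(respond("Turn on the lights"))               # Turning on the lights.
print(respond("Set a timer for 10 minutes"))       # Timer set for 10 minutes.
print(respond("Turn the lights back on, please"))  # Sorry, I didn't understand that.
```

The third request means the same thing as the first, but because it strays from the script, the parser simply gives up. That is the experience of being asked to behave like a robot with a very limited set of needs.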

Social theorists consistently underestimate the value of technology’s delightful aspects. When it works, people really love it. They pay money for it. Theorists tend to react by assuming there must be something wrong with people who have that feeling. They want to say your Fitbit or Apple Watch is exercising a subtle form of control over you by encouraging you not just to meet your step count for the day but also encouraging you to value the act of meeting your step count for the day, and most perniciously by arranging things so that you experience your valuation of the act of meeting your step count for the day as a satisfying personal choice, rather than an instrument of neoliberal governmentality.

Conversely, though, the same theorists also consistently overestimate how often software and hardware actually works properly. They don’t make this mistake in their own lives, because they have to deal with malfunctioning printers, insurance company websites, iTunes, and Learning Management Software. But too often in our theories, “algorithms” rule, forming a massive system of social reproduction. Social-scientific critiques of information technology are like mirror images of the moralizing hype that comes with the technology. Like a mirror, they reverse left and right, so that cheerful hype becomes a harsh critique of the all-consuming power of technology. But—also like a mirror—they do not reverse up and down. The technology is still assumed to work, even though it probably doesn’t, most of the time.

It matters which technologies are going to work, and which ones are just going to be billion-dollar cargo cults. I am not confident in my ability to pick. The volume of engineering resources presently being directed at these problems is astonishing, and the massive diffusion of cheap, connected computers is unprecedented. Social scientists should hesitate before simply asserting that implementation problems will not be solved in something like the manner that the main players are driving towards. But we may also be overly tempted to believe that these new technologies really are working as advertised. This may be because, even though our temperament is critical, Comte’s ideology of progress is still there in the marrow of our field. We expect society and its technologies to work very systematically, even when we do not like the results.

Either way, these technologies continue to hoover up vast quantities of data for collection, maybe for analysis, destined eventually to be shared, breached by hackers, or otherwise abused in some way. If, like McBean’s machine, the thing really works, then we have one set of implications for our future—a future where individual tastes and potentials are accurately and predictably sifted from gigantic datasets in an ongoing flow of profitable mutual co-ordination and anticipation. If it doesn’t really work, another future presents itself—one where technologies are more like (in Maciej Cegłowski’s phrase) “money laundering for bias”, or ritualized applications of nonsensical or procrustean methods. We may face some version of Oscar Wilde’s dilemma, where the only thing worse than the moral economy of technology working as advertised is the moral economy of it not working as advertised.

I have no straightforward answer to this problem, which may be why I am not giving a TED talk. Mostly I worry about the things that will get sold not to consumers, but to institutions like schools, law enforcement, healthcare and so on—things that will probably be broken or fraudulent in some basic sense, that will cost a lot of money, and that will have very real consequences for the people who have to use them or be judged by them. And I think back to my Uber ride from San Francisco to Berkeley the other day. I used my fancy phone to order up a car with a minimum of fuss. My harried driver was upset that the traffic was going to prevent him from getting another fare as soon as he wanted, and he probably needed the money. I found myself annoyed that he was trying to be smarter than Google—as if that were possible!—and then felt bad about my annoyance. Meanwhile outside the car, the roads were in terrible shape, there were far too many people trying to cross the bridge, and everyone was sitting in the traffic, looking at their phones.


  1. Kumar’s Prophecy and Progress (1986) has an excellent discussion of these themes, which I rely on here and in the following two paragraphs. ↩︎

  2. See Comte’s A General View of Positivism (1865) for plenty more in this vein. ↩︎

  3. See @internetofshit on Twitter for an ongoing parade of examples. ↩︎