Market Discipline and Organizational Responsibility
Here are two stories that bump into one another in an interesting way. I came across both of them on the plane back from DC last week. (Catching up on reading is the only good thing about long, late-night flights.) The first, “Enron A Year On,” comes from the November 28th edition of the Economist. The article surveys the political and legislative responses to the wave of corporate scandals and argues that the new laws, especially those covering auditing, accounting standards and corporate governance, need to be implemented and enforced properly rather than just left sitting on the books. All well and good: with scandal on this scale, the right kind of laws need to be put in place and enforced in a sensible and consistent way. But, being the Economist, it can’t give up on the idea that the market somehow fixed itself:
There is, however, a second answer to the question of whether investors are now any safer. It is that they may be, but thanks to a more powerful force than any rules or regulations: the self-correcting discipline of the marketplace. Bosses will always be greedy, auditors will always be fallible, boards will always miss things; but the post-Enron climate has made these mistakes less likely, at least for now. Auditors are being more thorough, and audit fees are rising. Public outrage over bosses’ remuneration has led to change…
Enronitis showed that there is no substitute for constant scrutiny and questioning. In the end, if investors are ready to suspend disbelief when confronted by companies with inflated numbers and implausible business plans, no amount of regulation can save them. For prudent investors, the price of the marketplace has to be eternal vigilance.
It’s not clear what mechanism is supposed to be at work here. All that’s really said is that imprudent investors will lose money, but imprudence can only be assessed post hoc. (We know you must have been imprudent because you lost money.) The “constant scrutiny and questioning” and “eternal vigilance” that the Economist counsels for individual investors are out of step with what’s supposed to be the main virtue of markets, viz., that they are far better information processors than any individual.
The article seems to want to treat Enron and WorldCom as bubbles. In a bubble, prices (for tulip bulbs, say, or Florida real estate) rise absurdly and, at best, everyone hopes to make a killing before the whole thing collapses. If you mortgage your home for a tulip bulb, maybe more fool you. Prudent investors don’t get caught in bubbles. (Then again, they don’t make killings either: this is the post hoc problem again.) But although they were surrounded by hype and excitement, Enron and WorldCom were not really bubbles. They were frauds. The information investors needed to make rational decisions was being manipulated or falsified by interested parties. In a bubble, investors look at what other investors are prepared to pay and drive up the price in a self-fulfilling cycle. The market runs away with itself, as my mother would say. In a fraud, investors look to what are supposed to be reliable signals of underlying value (company accounts, the recommendations of experts) and get screwed. It’s hard to be prudent when the information you rely on to make decisions is being doctored or falsified without your knowledge. And it’s not as if investors can audit Enron themselves; that’s precisely where the magic of the market is supposed to come in! But the discipline of the market is not much use without institutions that ensure the information that gets distilled into prices is reliable.
It may be more difficult to build such institutions than you think. Consider the results reported in the second article, a piece by James Surowiecki in the Dec. 9th New Yorker. (It’s not available online.) He describes a small experiment designed to simulate the relationship between investors and stock analysts who have a conflict of interest.
“…[George] Loewenstein and his colleagues Don Moore and Daylian Cain devised an experiment. One group of people (estimators) were asked to look at several jars of coins from a distance and estimate the value of the coins in each jar. The more accurate their estimate, the more they were paid. Another group of people (advisers) were allowed to get closer to the jars and give the estimators advice. The advisers, however, were paid according to how high the estimators’ guesses were. So the advisers had an incentive to give misleading advice. Not surprisingly, when the estimators listened to the advisers their guesses were higher. The remarkable thing was that even when the estimators were told that the advisers had a conflict of interest, they didn’t care. They continued to guess higher, as though the advice were honest and unbiased. Full disclosure didn’t make them any more skeptical.”
It gets more interesting:
Once the conflict of interest was disclosed, the advisers’ advice got worse. “It’s as if people said ‘You know the score, so now anything goes,’” Loewenstein says. Full disclosure, by itself, may have the perverse effect of making analysts and auditors more biased, not less.
This sort of evidence makes the Economist’s confidence in market discipline seem a little complacent. But it also shows that well-functioning institutions are not easy to build. The key problem is getting people to be responsible, and responsibility means being willing to take ownership of a problem. In Loewenstein’s experiment, the disclosure rule had the effect of detaching the problem of honesty and bias from anybody in particular. Carol Heimer, a sociologist at Northwestern University, is doing some fascinating work in this area. Her book (with Lisa Staffen) on neonatal intensive care units analyzes responsible care as a practical achievement dependent on features of the organizational environment as well as the motives and character of parents and medical teams. In more recent work, Heimer extends her analysis to other cases. Here’s a parallel to Loewenstein’s experiment from a different context:
In a hospital system with a rule requiring review of all cases of patients dying within thirty days of surgery, one surgeon evaded review by “keeping corpses warm” until just after the thirtieth day postsurgery (Devers 1980)… The rule was presumably intended to uncover incompetent surgery, and most likely was intended to have its effect primarily at the collective level: not to increase the number of patients who survived just past thirty days, clearly, but to allow the hospital to evaluate the competence of surgeons, to detect cases of botched surgery, and to learn what factors affected survival rates. But a rigid rule relying on a single indicator is easy to work around, and a physician faced with scrutiny of his failures has some substantial incentive to evade review.
Heimer calls this the problem of “floors becoming ceilings”: checks designed to signal low performance are treated instead (to switch metaphors) as thresholds or triggers that must not be set off. Rules are counterproductive when they lead people to act in this or similar ways. Mere compliance does not ensure responsibility. The problem of interest, then, is to discover what sort of rules work best, and, prior to that, to figure out what a good rule is in principle. Neither task is easy. Heimer makes some generalizations:
Rules are especially likely to be unproductive when formulated by distant external bodies who are obligated to achieve only narrow goals rather than to consider the overall welfare of the system, when the rules are highly visible ceremonial responses that will be judged by groups who are only episodically attentive to the conditions the rules are intended to address, when the rules are designed around extreme circumstances but applied to less extreme ones, when rules are based on records that were intended for another purpose or can easily be distorted by interested parties, and when rules to discourage wrongdoing are conflated with rules to encourage high-quality performance.
The core point is that “responsibility is crucially about moral competence, and rule systems that aim for responsibility rather than just accountability must encourage social arrangements that foster and reinforce high standards and a sense of obligation to a larger group.”
It’s important to see that responsibility and moral competence are not just abstract boy scout virtues but also, in large part, organizational accomplishments. The right rules, procedures and informal practices are what distinguish a good surgical unit from a bad one, a safe nuclear power station from a dangerous one, or an above-board business from a corrupt one.
In each of these cases, it’s worth asking what it takes to make a good set of rules and procedures collapse. In the business case, the obvious culprit is the growing opportunity to make enormous amounts of money. Even here, the institutional system is crucial: these opportunities often derive from changes in the rules rather than from genuinely new markets, and they are often only viable because the perpetrators know they are highly unlikely to be punished to any great degree. So there’s no reason to be responsible. But that’s a topic for another post.