Frank Ackerman, July 2019
I am honored to be invited to talk about a lifetime in economics. My work in the field follows a perverse arc of progress: from youthful optimism about a complete understanding of economics, to a more mature pessimism about how unpredictably bad the worst cases can turn out to be. Was studying economics still a good idea, despite the worsening implications that turn up as one digs deeper into it? How did the personal satisfaction of exploring the mathematics of the field relate to the unfair and uncomfortable reality of the U.S. and global economy?
My father was a physicist, and some of his career interests rubbed off on me. I arrived in college thinking that choosing a career meant deciding between majoring in math or physics. Both these choices seemed remote from the political and social turmoil of the day. At the same time, the dynamics of individual behavior – what is often called neoclassical economics today – relied on an intriguing analogy.
The decisions made by economic actors, exchanging goods and services in the marketplace, look comparable to the movement of physical particles. Collision of particles leads to exchanges of energy, until all have the same equilibrium level of energy. Similarly, interaction of firms and individuals in the marketplace leads to exchanges of goods and services, until all have an equilibrium level of profit for firms, or utility (satisfaction of personal desires) for individuals.
Here, it seemed, was a problem that required mathematical analysis to understand society. But why does this interpretation, based on the approach to equilibrium in both cases, look so much more successful for physical particles than for economic actors? I only gradually realized that intensive use of mathematics was the problem, as much as the solution, in economics. The resulting simplification is in some cases the key to understanding, and in others a misleading clue about the nature of the problem. The assumption that we are starting from a point close to equilibrium implies that several fundamental problems can be ignored.
In this talk, I will look at three stages in the critique of economics, which correspond to three stages in my career:
- My work at Dollars & Sense, and the focus on inequality
- Work at Tellus Institute and at GDAE, and the role of cost-benefit analysis in limiting the scope of economic calculation
- Most recently, work on climate change and uncertainty, at Synapse Energy Economics – and in my latest book, Worst-Case Economics
The story takes off in the early 1970s. It was a time of popular movement and protest. I was part of a network of radical economists, who saw the need for analysis and guidance for the (we hoped) rising grassroots movements for change. Our broader group launched the Union for Radical Political Economics (URPE), a national organization; a Boston-based subgroup created Dollars & Sense, a popular, non-technical magazine about what is wrong with the U.S. economy today.
In more ways than one, we were on our own. We had to create both a critique of the U.S. economy, and the magazine that presented that critique to the world. (Long before the internet and social media, any new voice required creation of a new outlet to spread its views.) The practical challenges were enormous. Growing a subscriber base meant succumbing to the use of junk mail, a quarter of a million pieces in one year. With our Marxist values, we tried to create an egalitarian, collective approach to our work: everyone sweeps the floor, everyone answers the mail, everyone writes about the global economy. In the short run, this required more work. In the long run, it created a cohesive staff committed to producing high level economic journalism.
Our work could be described as exposing the extent of inequality in the American economy. An idealized version of economic theory suggests that inequality can be ignored, that all economic actors will approach equilibrium without interference or interaction with others. In fact, interactions that worsen inequality are common; more powerful economic actors can easily impose additional costs on others. The result is that the rich get richer, while the rest of us are, at best, stuck in neutral, neither advancing nor retreating.
This is no one’s idea of the shape of a good society. But the myth that GDP growth favors everyone was gaining prominence as the idealism of the 1960s and 1970s faded. Dollars & Sense critiqued the Reaganomics trend toward trickle-down economics, arguing that unequal ownership of crucial wealth-producing assets means that tomorrow’s inequalities will resemble those of today.
Despite the practical struggle to create our own media outlet, the challenges of inventing new voices to take on an ascendant neoliberalism were ones we enjoyed. It was, in the end, the ultimate Marxist-Lennonist enterprise: you could say that we were dreamers, but we weren’t the only ones. If only Dollars & Sense had been more financially secure, I could easily have spent the rest of my career advocating an equalization of incomes and opportunities, building our magazine and our critique.
I spent much of my career at two long-term jobs, first at Tellus Institute (originally Energy Systems Research Group, ESRG) and then at Tufts University’s Global Development and Environment Institute (GDAE). Both involved the role of economics in shaping public policy, and specifically the application of cost-benefit analysis.
Cost-benefit analysis, the standard approach to the economics of public policy, requires prices for everything that matters. The value of not killing someone, or of not destroying an irreplaceable ecosystem, or of anything else that matters, must be expressed in monetary terms and incorporated into cost-benefit calculations. All too often this “pricing the priceless” leads to absurd results, endorsing a narrowly stingy approach on the theory that some price tag is better than none.
In the quantitative apparatus of economics, everything that matters requires a monetary measure – including the harms done by externalities such as pollution. Cost-benefit analysis, the bottom-line decision process, begins with valuation of externalities. This often means using extensive survey research methods to invent and attach price tags throughout the economy. Lisa Heinzerling and I addressed the biases of cost-benefit analysis in Priceless: On Knowing the Price of Everything and the Value of Nothing. The problem is that the push to create numerical values repeatedly leads to nonsensical results, often building in extreme undervaluation of externalities.
Consider the cost of preventable deaths, perhaps in workplace accidents. How much is it worth spending to prevent such deaths? One response, which many people would endorse, says that there is no such amount, since human lives are not for sale. The cost-benefit framework, however, demands finite monetary measures for all costs and all benefits. As a result, every harm must have a price attached. If it is “worth” spending exactly $10 million per avoided fatality, then cost-benefit analysis endorses an outcome where one more death could have been avoided for $11 million – but was not.
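The decision rule embedded in this framework can be sketched in a few lines of code. The $10 million figure and the `approve` function below are purely illustrative, not drawn from any actual regulation:

```python
# A minimal sketch of the cost-benefit decision rule described above:
# each avoided death is assigned a fixed monetary value, and any safety
# measure costing more than that value per life saved is rejected.
# The $10 million figure is illustrative, not an official number.

VALUE_PER_AVOIDED_DEATH = 10_000_000

def approve(cost, lives_saved):
    """Approve a measure only if its monetized benefits cover its costs."""
    return lives_saved * VALUE_PER_AVOIDED_DEATH >= cost

print(approve(9_000_000, 1))   # True: saving a life for $9 million passes
print(approve(11_000_000, 1))  # False: the same life, at $11 million, does not
```

The paradox in the text is exactly this boundary: the framework treats the second case, the death that was not prevented, as the correct outcome.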
The biases of cost-benefit analysis are not restricted to accidental loss of life. The same issues of nonsensical results arise in valuing illness and disability, or in pricing the preservation of unique ecosystems. How many migraine headaches are worth a year of life? How much is the existence of a beautiful coastline, or the Grand Canyon, worth to society as a whole?
One response to these questions rejects the quantitative imperative of cost-benefit analysis, calling for a broader, multi-dimensional understanding of economic value. I associate this approach with work done at GDAE, on the value of unpaid or underpaid, caring labor, or the pursuit of sustainable development goals in developing countries. Neva Goodwin, Julie Nelson, Jonathan Harris, Brian Roach, and their colleagues have elaborated this approach, rejecting the false precision of cost-benefit analysis in favor of a deeper, more thoughtful picture of the many sides of economic activity.
My own work on recycling, in the 1990s, similarly elaborates on non-monetary values that are addressed by public policy. A desire for frugality, an attempt to avoid visible signs of waste, pursuit of a sustainable way of living in a material world – these are all valuable goals, but impossible to quantify in terms that fit into cost-benefit analysis. The broader “GDAE critique”, like my work on waste and recycling, offers a valid, multi-dimensional expansion of economic value.
Yet the maddening narrowness of conventional economic theory that comes from reducing everything to a single set of numbers gives it the appearance of analytical power. While most of my work at Tellus and GDAE was aimed at applied policy analysis, my first formal attempt to analyze this quantitative approach was in the article “Still Dead After All These Years.” It identified the limited usefulness of general equilibrium theory. It took years to get it published, in the sixth journal where I submitted it; along the way it received rejections both for being too obviously true, and for being too obviously false.
Catastrophic risk in climate and finance
The last episode of this intellectual odyssey examines the puzzling frequency of extreme events, the topic of my latest book, Worst-Case Economics. My move to Synapse Energy Economics provided a supportive environment to write about climate risk and uncertainty, topics of increasing importance in a world of dangerous extremes.
Cost-benefit analysis intersects with climate crisis in the calculation of the “social cost of carbon” (SCC), the present value of the current and future monetary damages caused by a ton of CO2 emissions. How much is it worth to reduce greenhouse gas emissions and limit climate-related damages? Once again, cost-benefit analysis offers arguments to avoid spending too much, as well as too little, on this worthy goal.
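The mechanics of an SCC-style calculation are simple to sketch, even though the real debates concern the inputs. A toy version, with made-up damage figures, shows how heavily the answer depends on the discount rate:

```python
# A toy illustration of an SCC-style present-value calculation.
# The damage stream and discount rates are made-up numbers, chosen only
# to show the mechanics; real SCC models project damages from climate
# simulations over centuries.

def present_value(annual_damages, discount_rate):
    """Discount a stream of future damages back to today."""
    return sum(d / (1 + discount_rate) ** t
               for t, d in enumerate(annual_damages))

damages = [2.0] * 100  # hypothetical: $2 of damages per year for a century

# A higher discount rate shrinks the weight placed on future damages,
# and with it the estimated social cost of carbon.
print(present_value(damages, 0.03) < present_value(damages, 0.01))  # True
```

This is why the choice of discount rate, a seemingly technical parameter, has been central to the policy fight over the SCC.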
Assessing the uncertain cost of climate change requires dealing with probability, a subject that requires a short excursion into the underlying mathematics.
Suppose, to begin with, that fluctuations in markets or physical processes arise solely from small, independent, randomly distributed events. (This turns out to be empirically untrue, although it is widely used as a default explanation due to its simplicity.) Under this assumption, the outcomes of random variation can be described by the bell curve, or normal distribution. This would imply that dangerous extremes are vanishingly rare, uncommon enough to be safely ignored. At least, ignoring these events appeared plausible, before the worsening waves of recent hurricanes and droughts.
If data are normally distributed, an 8-standard-deviation event is literally trillions of times less likely than a 1-standard-deviation event. For events occurring on a daily time frame, the odds are that there would have been no 8-standard-deviation events in all the eons since the Big Bang. In fact, extreme events are far more common than that. Daily changes in the S&P 500 index, one measure of financial extremes, have averaged more than one 8-standard-deviation event per decade. Although data are much more limited, climate extremes seem to follow similar patterns.
Rather than trillions of times less likely, 8-standard-deviation events in finance or climate are only hundreds of times less likely than 1-standard-deviation events – in other words, much too common to ignore. People routinely buy fire insurance or life insurance to protect against private losses of this magnitude. The same should be true for public policy addressing collective extremes such as catastrophic climate risk.
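The probability arithmetic above can be checked directly from the normal distribution. The sketch below uses only the standard library; note that the “hundreds of times” figure for real markets comes from the data, not from this calculation:

```python
# How much rarer is an 8-standard-deviation move than a 1-standard-deviation
# move, if the bell curve were the right model?
import math

def two_sided_tail(sigmas):
    """P(|Z| > sigmas) for a standard normal variable Z."""
    return math.erfc(sigmas / math.sqrt(2))

# Under the bell curve, an 8-standard-deviation move is hundreds of
# trillions of times less likely than a 1-standard-deviation move...
ratio = two_sided_tail(1) / two_sided_tail(8)
print(f"{ratio:.1e}")  # on the order of 1e14

# ...yet the S&P 500 has delivered 8-standard-deviation daily moves
# more than once per decade, as noted above.
```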
In physical processes, there are multiple explanations for “fat tails” or “black swans”, metaphors that have been used to describe unexpectedly frequent extremes. The structure of uncertainty, the background of random fluctuation, makes extreme events dangerously common. Why, then, do we accept the bell-curve simplification? Because it makes the mathematics of economic crisis look tractable – even when it is wrong.
Economics embodies a relevant but troubled application of mathematical tools to contemporary problems. The success of these approaches in physical sciences contrasts with the failure of the same techniques in social sciences – a puzzle I have struggled to understand throughout my career.
What accounts for the appeal of this analogy? The theory of particles colliding and exchanging energy until they reach equilibrium was one of the great insights of late nineteenth-century physics. The formalization of neoclassical economics, in the same time period, assumed the same mechanism for economic actors and exchanges. Its mathematical elegance provides a powerful but flawed parallel, assigning an automatic movement toward equilibrium to both climate and market forces.
As crisis and disequilibrium become ever more important, catastrophic events can be ruled out only under narrowly defined, ideal conditions. The bell curve (normal distribution) provides one default picture of the likelihood of extremes, implying that they are extraordinarily rare. Other, more realistic understandings of the world lead to more complex, riskier possibilities, in which extreme results and worst-case outcomes cannot be ruled out.
Was it worth the intellectual effort to work as an economist? The problems I encountered in economics remain very much with us. We are still stymied by complacence and understatement of inequality, the use of overly mathematical cost-benefit models and low carbon cost estimates to justify bad policy, and miscalculation of current and future risks. The next generation of economists and policy analysts will need to explore the same problems more deeply, identifying what is right and wrong in the current mathematical formulations of economics. I have worked to expose the flawed assumptions and misguided priorities behind rationales for bad policies. I am convinced that an approach to economics that is grounded in good values, sound logic and empirical evidence provides an opportunity to understand social problems and hopefully to mitigate them.
 Ackerman and Heinzerling (2004), Priceless: On Knowing the Price of Everything and the Value of Nothing (New York: The New Press).
 Ackerman (2017), Worst-Case Economics, Chapter 12.
 Ackerman (1997), Why Do We Recycle? (Washington DC: Island Press).
 Ackerman (2002), “Still Dead After All These Years: Interpreting the Failure of General Equilibrium Theory”, Journal of Economic Methodology.
 Martin Weitzman’s “Dismal Theorem” argues persuasively that the uncertainty in risk reduction is so great that the SCC is literally infinite. See Weitzman (2009), “On modeling and interpreting the economics of catastrophic climate change”, Review of Economics and Statistics.
 This refers to the central limit theorem of statistics: if all fluctuations result from independent, identically distributed random events, then for large numbers of events, the resulting outcomes are approximately normally distributed.
 Ackerman (2017), Chapter 7 and elsewhere.
 For an argument that disequilibrium structures can persist over time, see Prigogine (1996), The End of Certainty: Time, Chaos, and the New Laws of Nature (New York: The Free Press). Prigogine received a Nobel Prize in chemistry, in part for his work on the persistence of disequilibrium.