Tag Archives: economics

Artificial Intelligence and the Utility Monster: It’s the Economy, Stupid

In his 2014 book Superintelligence: Paths, Dangers, Strategies, Nick Bostrom discussed whether we could prevent a superintelligent artificial intelligence (AI) system from posing an existential risk to humanity.  In 2014 he also gave a presentation for Talks at Google. In that presentation, an audience member (at 49 min 35 sec) posed the idea that a superintelligent computer could become a utility monster.  The utility monster is an idea of philosopher Robert Nozick, and it relates to the philosophical concept of utilitarianism.

In utilitarianism, only the total happiness, or utility, of the group matters. The distribution of utility within the group does not matter. Now consider the idea of marginal utility, which is how much additional utility comes from consuming the next increment of resources.  Because the superintelligent AI system might be much smarter than all of humanity, it could have a higher marginal utility than that of humans.  The machine could conclude that total utility is maximized by its consuming one hundred percent of natural resources, because in doing so it maximizes overall utility simply by maximizing its own utility.
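To make the logic concrete, here is a toy sketch of total-utility maximization with a utility monster. The utility functions and numbers are entirely hypothetical (nothing from Bostrom or Nozick): humans have diminishing marginal utility, while the monster’s marginal utility never falls below every human’s, so a greedy total-utility maximizer hands it everything.

```python
def allocate(units, marginal_utils):
    """Greedily give each unit of resource to whoever gains the most utility from it."""
    counts = {name: 0 for name in marginal_utils}
    for _ in range(units):
        # Pick the agent with the highest marginal utility for its next unit.
        best = max(counts, key=lambda n: marginal_utils[n](counts[n]))
        counts[best] += 1
    return counts

# Humans: diminishing marginal utility (each extra unit is worth less).
human_mu = lambda k: 1.0 / (k + 1)
# Monster: constant marginal utility, always above any human's first unit.
monster_mu = lambda k: 2.0

result = allocate(100, {"human": human_mu, "monster": monster_mu})
print(result)  # the monster captures every unit: {'human': 0, 'monster': 100}
```

The distribution is maximally unequal, yet by the summed-utility criterion it is the “best” outcome, which is exactly Nozick’s objection.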

Bostrom then discussed the paper clip maximizer as a classic AI thought experiment. What if the superintelligent AI system only tries to maximize the number of paper clips (the paper clip is an arbitrary placeholder)? The AI system would likely determine that keeping humans alive is detrimental to the goal of maximizing the number of paper clips in the world. Humans need resources to survive, and these resources could be used to make more paper clips.  It is not that the AI machine dislikes or specifically tries to harm humanity. It is just that the superintelligent AI system is indifferent to our existence.

Now think about “the economy” and gross domestic product (GDP), the metric usually used to measure the size, or throughput, of the economy. GDP is roughly treated as utility in economics. GDP is now a substitute for paper clips. Could we tell the difference between a world that is run by a superintelligent GDP maximizer and the world that we live in right now?  That is to say, if certain politicians, business owners and executives, and economists push for rules that maximize GDP without regard for how money is distributed, then is “the economy” simply a mechanism to maximize GDP?

Philip Mirowski points out that one of Friedrich Hayek’s ideas was that the economy is smarter than any one person or group of persons. Government officials, for example, can’t know enough to make good economic decisions. Mirowski discusses Hayek’s idea in his book The Road from Mont Pelerin, which explores the history of the “neoliberal thought collective,” and notes that Hayek saw the economy as the ultimate information processor.  Thus, markets are able to aggregate data in the most effective way to produce the “correct” signal, say the price, to direct people on what to make and what to buy.

Need better decisions? Make another market! There is little to no need for people to think.

In an extreme world with markets for everything, each of us becomes an automaton responding to price signals to maximize collective utility, or GDP, that might have very little to do with our personal well-being.

How could we know if we have allowed the economy to simply become a GDP-maximizing utility monster? Perhaps GDP would keep going up, but if it didn’t, perhaps we’d start adding to GDP activities that have existed for centuries but had previously not been counted due to illegality or other reasons. Prostitution and sales of previously illegal drugs are examples.

Perhaps if all we wanted to do was increase GDP, we’d cut corporate taxes to spur investment in capital versus spending on education, which is for people. Perhaps human life expectancy would go down, and drug sales would be up (the utility monster is indifferent to people). Perhaps we’d see increases in wealth or income inequality. Perhaps people would contract with “transportation network companies” to drive around, wait for algorithmic signals on where to drive to pick up a person or thing, and then deliver that person or thing as directed.

Most macroeconomic analyses are based upon the concept of maximizing utility, which is usually interpreted as the value of what “we” consume over all time into the future.  Many interesting (and, to many, troubling) trends are occurring in the U.S. regarding health, distribution of income, and the ability of people to separate concepts of fact and truth. Thus, we should consider whether the superintelligent AI future some fear might already be in action, but at perhaps a slower and more subtle pace than some predict will happen after “the singularity,” when AI becomes more capable than humans.
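A standard way to formalize that objective (a textbook formulation, not one spelled out in the post) is discounted utility of consumption:

```latex
\max_{\{c_t\}} \; \sum_{t=0}^{\infty} \beta^t \, u(c_t), \qquad 0 < \beta < 1
```

where c_t is consumption in period t, u(·) is a utility function, and β is the discount factor that weights future periods less than the present. The sum runs over every future period, which is what “consume over all time into the future” means in practice.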

The recent populist political movements in the U.S. and other countries could in fact be a rejection of the “algorithm of GDP maximization” associated with our current economic system.

Learn about utilitarianism.  Learn to go beyond GDP here, here, and here.

Macro and Climate Economics: It’s Time to Talk about the “Elephant in the Room”

This blog was written for the Cynthia and George Mitchell Foundation, and originally appeared here: http://www.cgmf.org/blog-entry/213/.

This is the first of a two-part series. Part 2 is: “The most important and misleading assumption in the world.”

If we want to maximize our ability to achieve future energy, climate, and economic goals, we must start to use improved economic modeling concepts.  There is a very real tradeoff between the rate at which we address climate change and the amount of economic growth we experience during the transition to a low-carbon economy.

If we ignore this tradeoff, as do most of the economic models, then we risk politicians and citizens revolting against the energy transition midway through.

On September 3, 2016, President Obama and Chinese President Xi Jinping each joined the Paris Climate Change Agreement, committing the U.S. and China to greenhouse gas (GHG) emissions limits for their respective countries. This is an important signal to the world that the presidents of the two largest economies and GHG emitters are cooperating on a truly global environmental matter, and it provides two leaps toward obtaining enough global commitments to set the Paris Agreement in motion.

The economic outcomes from models used to inform policymakers like Presidents Obama and Xi, however, are so fundamentally flawed that they are delusional.

The projections for climate and economy interactions during a transition to a low-carbon economy are performed using Integrated Assessment Models (IAMs), which link earth systems models to human activities via economic models. Several of these IAMs inform the Intergovernmental Panel on Climate Change (IPCC), and the IPCC reports in turn inform policymakers.

The earth systems part of the IAMs project changes to climate from increased concentration of greenhouse gases in the atmosphere, land use changes, and other biophysical factors.  The economic part of the IAMs characterizes human responses to the climate and the changes in energy technologies that are needed to limit global GHG emissions.

For example, the latest IPCC report, the Fifth Assessment Report (AR5), projects a range of baseline (i.e., no GHG mitigation) scenarios in which the world economy is between 300 and 800 percent larger in the year 2100 as compared to 2010.

The AR5 report goes on to indicate the modeled decline in economic growth under various levels of GHG mitigation. That is to say, the economic modeling assumes there are additional investments, beyond business as usual, needed to reduce GHG emissions.  Because these investments are in addition to those made in the baseline scenario, they cost more money and the economy will grow less.

The report indicates that if countries invest enough to reduce GHG emissions over time to stay below a policy target of a 2°C temperature increase by 2100 (e.g., CO2-eq concentrations < 450 ppm), then the decline in the size of the economy is typically less than 5 percent, and possibly up to 11 percent.  This economic result coincides with a GHG emissions trajectory that essentially reaches zero net GHG emissions worldwide by 2100.

Think about that result: Zero net emissions by 2100 and, instead of the economy being 300 to 800 percent larger without mitigation, it is “only” 280 to 750 percent larger with full mitigation.  Apparently we’ll be much richer in the future whether or not we mitigate GHG emissions, and there is no reported possibility of a smaller economy.
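The arithmetic behind those figures is worth making explicit. This is a back-of-the-envelope restatement of the percentages quoted above, not numbers pulled from any AR5 table:

```python
# "300 to 800 percent larger" means the 2100 economy is 4x to 9x its 2010 size.
base_low, base_high = 4.0, 9.0
mitigation_cost = 0.05  # economy ~5 percent smaller under full mitigation

mitigated_low = base_low * (1 - mitigation_cost)    # ~3.8x,  i.e., ~280% larger
mitigated_high = base_high * (1 - mitigation_cost)  # ~8.55x, i.e., ~755% larger
print(mitigated_low, mitigated_high)
```

A few percent shaved off an economy assumed to be 4x to 9x larger still leaves an economy 4x to 9x larger, which is why the models can never report a smaller one.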

This type of result is delusional, and doesn’t pass the smell test.

Humans have not lived with zero net annual GHG emissions since before the start of agriculture.  The results from the models also indicate the economy always grows no matter the level of climate mitigation or economic damages from increased temperatures.

The reason that models appear to output that economic growth always occurs is because they actually input that growth always occurs.  Economic growth is an assumption put into the models.

This assumption in macroeconomic models is the so-called elephant in the room that, unfortunately, almost no one talks about or seeks to improve. 

The models do answer one (not very useful) question: “If the economy grows this much, what types of energy investments can I make?”  Instead, the models should answer a much more relevant question: “If I make these energy investments, what happens to the economy?”

The energy economic models, including those used by United States government agencies, effectively assume the economy always returns to some “trend” of the past several decades—the trend of growth, the trend of employment, the trend of technological innovation.  They extrapolate the past economy into a future low-carbon economy in a way that is guesswork at best, and a belief system at worst.

We have experience in witnessing disasters of extrapolation.

The space shuttle Challenger exploded because managers were pressured to launch during cold temperatures outside the tested range of the O-rings sealing the solid rocket boosters.  The conditions for launch were outside of the test statistics for the O-rings.

The firm Long-Term Capital Management (LTCM), run in part by Nobel Prize-winning economists, collapsed due to economic conditions that were thought to be practically impossible.  The conditions of the economy ventured outside of the test statistics of the LTCM models.

The Great Recession surprised former Federal Reserve chairman Alan Greenspan, known as “the Wizard.”  He later testified to Congress that there was a “flaw in the model that I perceived is the critical functioning structure that defines how the world works, so to speak.”

Greenspan extrapolated nearly thirty years of economic growth and debt accumulation as being indefinitely possible. The conditions of the economy ventured outside of the statistics with which Greenspan was familiar.

The state of our world and economy today continues to reside outside of the historical statistical realm. Quite simply, we need macroeconomic approaches that can think beyond historical data and statistics.

How do we fix the flaw in macroeconomic models used for assessment of climate change?  Part two of this two-part series will explain that there is research pointing to methods for improved modeling of what is termed “total factor productivity,” and, in effect, economic growth as a function of the energy system many seek to transform.