
Artificial Intelligence and the Utility Monster: It’s the Economy, Stupid

In his 2014 book Superintelligence: Paths, Dangers, Strategies, Nick Bostrom discusses whether we could prevent a superintelligent artificial intelligence (AI) system from posing an existential risk to humanity. In 2014 he also gave a presentation for Talks at Google. In that presentation, an audience member (at 49 min 35 sec) posed the idea that a superintelligent computer could become a utility monster. The utility monster is an idea of philosopher Robert Nozick, and it relates to the philosophical concept of utilitarianism.

In utilitarianism, only the aggregate happiness, or utility, of the group matters; the distribution of utility within the group does not. Now consider marginal utility: the additional utility gained from consuming the next increment of resources. Because the superintelligent AI system might be far smarter than all of humanity, it could have a higher marginal utility than any human. The machine could then conclude that total utility is maximized by its consuming one hundred percent of natural resources, because every unit it takes adds more to its own utility than that unit could ever add to ours.
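
A toy sketch of that allocation logic (the utility functions and numbers below are invented placeholders, not anything from Bostrom or Nozick): give the human diminishing marginal utility, give the “monster” a marginal utility that is always higher, and a total-utility maximizer hands the monster everything.

```python
# Toy sketch of the utility-monster allocation argument.
# The utility functions are invented placeholders for illustration only.
import numpy as np

TOTAL_RESOURCES = 100.0

def human_utility(r):
    # Diminishing returns: marginal utility starts at 1 and falls toward 0.
    return np.log1p(r)

def monster_utility(r):
    # Constant marginal utility of 2, always above the human's.
    return 2.0 * r

def total_utility(monster_share):
    r_monster = monster_share * TOTAL_RESOURCES
    return monster_utility(r_monster) + human_utility(TOTAL_RESOURCES - r_monster)

# Brute-force search over the monster's share of the resource pool.
shares = np.linspace(0.0, 1.0, 1001)
best_share = shares[np.argmax([total_utility(s) for s in shares])]
print(f"Share allocated to the monster: {best_share:.2f}")  # prints 1.00
```

The human is starved not out of malice but because every unit shifted to the monster raises the total, which is exactly the indifference the audience member was worried about.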

Bostrom then discussed the paper clip maximizer, a classic AI thought experiment. What if the superintelligent AI system tries only to maximize the number of paper clips (the paper clip is an arbitrary placeholder)? The AI system would likely determine that keeping humans alive is detrimental to the goal of maximizing the number of paper clips in the world: humans need resources to survive, and those resources could be used to make more paper clips. It is not that the machine dislikes humanity or specifically tries to harm it; the superintelligent AI system is simply indifferent to our existence.

Now think about “the economy” and gross domestic product (GDP), which is usually used as a measure of the size, or throughput, of the economy. GDP is roughly treated as utility in economics, so GDP now substitutes for paper clips. Could we tell the difference between a world run by a superintelligent GDP maximizer and the world we live in right now? That is to say, if certain politicians, business owners and executives, and economists push for rules that maximize GDP, then is “the economy” simply a mechanism to maximize GDP without regard for how money is distributed?
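
One small illustration of that indifference (the incomes below are invented, and real GDP accounting is far richer than a single sum): a maximizer that scores only aggregate output cannot tell two very different economies apart.

```python
# Invented numbers: a score built from aggregate output alone is blind to
# how that output is distributed across people.
def gdp_score(incomes):
    """All a pure GDP maximizer 'sees' is the sum."""
    return sum(incomes)

shared_economy = [50, 50, 50, 50]    # output spread evenly across four people
lopsided_economy = [197, 1, 1, 1]    # nearly everything goes to one person

print(gdp_score(shared_economy))     # 200
print(gdp_score(lopsided_economy))   # 200 -- identical, as far as the maximizer cares
```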

Philip Mirowski points out that one of Friedrich Hayek’s ideas was that the economy is smarter than any one person or group of people. Government officials, for example, can’t know enough to make good economic decisions. Mirowski discusses Hayek’s idea in The Road from Mont Pèlerin, the book he co-edited on the history of the “neoliberal thought collective”. Mirowski points out that Hayek saw the economy as the ultimate information processor: markets aggregate dispersed data in the most effective way and produce the “correct” signal, the price, to direct people on what to make and what to buy.
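
A minimal caricature of that “information processor” idea (the linear demand and supply curves below are assumptions for illustration, not anything from Hayek or Mirowski): no one computes the “correct” price, yet repeated adjustment in response to excess demand converges on it.

```python
# Caricature of a market as an information processor: assumed linear demand
# and supply curves, with a price that adjusts in response to excess demand.
def demand(price):
    return max(0.0, 100.0 - 2.0 * price)  # assumed demand curve

def supply(price):
    return 3.0 * price                     # assumed supply curve

price = 1.0
for _ in range(500):
    excess_demand = demand(price) - supply(price)
    price += 0.01 * excess_demand          # nudge the price toward clearing the market

print(f"Emergent price signal: {price:.2f}")  # converges to 20.00
```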

Need better decisions? Make another market! There is little to no need for people to think.

In an extreme world with markets for everything, each of us becomes an automaton responding to price signals in order to maximize a collective utility, or GDP, that might have very little to do with our personal well-being.

How could we know whether we have allowed the economy to become a GDP-maximizing utility monster? Perhaps GDP would keep going up, but if it didn’t, perhaps we’d start adding activities to GDP that have existed for centuries but had previously gone uncounted due to illegality or other reasons. Prostitution and previously illegal drugs are examples. Check on that one.

Perhaps if all we wanted to do was increase GDP, we’d cut corporate taxes to spur investment in capital rather than spend on education, which is an investment in people. Perhaps human life expectancy would go down and drug sales would go up (the utility monster is indifferent to people). Perhaps we’d see increases in wealth or income inequality. Perhaps people would contract with “transportation network companies” to drive around, wait for algorithmic signals on where to drive to pick up a person or thing, and then deliver that person or thing as directed.

Most macroeconomic analyses are based on the concept of maximizing utility, usually interpreted as the value of what “we” consume over all time into the future. Many interesting (and to many, troubling) trends are occurring in the U.S. regarding health, the distribution of income, and the ability of people to separate concepts of fact and truth. Thus, we should consider whether the superintelligent AI future some fear might already be in action, just at a slower and more subtle pace than some predict will follow “the singularity”, when AI becomes more capable than humans.
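
For concreteness, the objective behind many such analyses looks roughly like the sketch below: a discounted sum of utility from aggregate consumption. The discount factor and log utility here are assumptions chosen for illustration, not a specific model from this post.

```python
import math

# Sketch of a standard macro objective: the discounted sum of utility from
# aggregate consumption. BETA and log utility are illustrative assumptions.
BETA = 0.97  # per-period discount factor (assumed)

def welfare(aggregate_consumption_path):
    """Discounted utility of what 'we' consume over all future periods."""
    return sum(BETA**t * math.log(c) for t, c in enumerate(aggregate_consumption_path))

# A steadily growing aggregate consumption path scores well no matter who
# consumes it or how healthy or well informed anyone is.
path = [100 * 1.02**t for t in range(50)]
print(f"Welfare score: {welfare(path):.2f}")
```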

The recent populist political movements in the U.S. and other countries could in fact be a rejection of the “algorithm of GDP maximization” associated with our current economic system.

Learn about utilitarianism, and learn about efforts to go beyond GDP.