
Explanation of the Jevons Paradox (or “backfire” effect) using the HARMONEY model

December 6, 2020

In this blog I use my HARMONEY (“Human And Resources with MONEY”) economic growth model (also see this free early version) to demonstrate the dynamics of the Jevons Paradox: that an increase in end-use efficiency leads to an increase in total resource extraction, rate of resource depletion, and final level of depletion.

Here I summarize some important features and assumptions of the HARMONEY model to give context for why it exhibits the behavior summarized below:

  • Natural resources: There is only one natural resource, and it is modeled as something akin to a forest where the resource can grow back (at some rate) after it is depleted. By this assumption, the economy can also continuously extract resources at the same rate the resources regenerate, as a sort of steady-state economy.
    • As resources are depleted, it takes more resources to extract the next unit of resources. This allows the model to capture the feedbacks of going after harder-to-reach resources after accessing the easiest resources first.  It also allows me to calculate net energy return ratios, the so-called “energy return on energy invested,” but what I will call (in later figures) the “net external power ratio”.
  • Population: Population is endogenous, such that population growth and decline depend on the level of per capita resource consumption. If there are not enough natural resources left for households to consume (per person), then population can decline and level off.
  • There are 2 industrial sectors:
    • The “goods” sector uses labor and capital (e.g., machines) to make new capital.
    • The “extraction” sector uses labor and capital to extract resources.
  • Capital requires resource consumption for
    • Its operation (e.g., it needs fuel to operate)
    • Its creation (e.g., capital is made out of natural resources)
  • Prices: I keep the assumption from the main results in my HARMONEY paper (King (2020)) which is that prices are calculated by assuming a constant markup on the full cost of producing outputs (full cost includes wages, intermediate costs, depreciation, and interest payments).

Here I’ll focus on changing the parameter that governs how many resources must be consumed to operate capital as it produces a unit of output.  Equation (1) describes the quantity of natural resource consumption required during the operation of capital, where K is the amount of physical capital, CU is the capacity utilization (a number between 0 and 1 indicating the fraction of the time the capital operates), and η is like an efficiency term, though its units differ from a dimensionless efficiency.  The symbol η represents “resources consumed per unit of capital” and its units are [resources/(time·capital)].

 

natural resource consumption to operate capital = η·K·CU     (1)

 

Equation (1) holds for both goods sector operation (e.g., fuel to operate machines that make more machines) and extraction sector operation.
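As a minimal sketch, Equation (1) can be applied per sector and summed. The function below mirrors the η·K·CU relation; all numeric values are invented for illustration and are not HARMONEY's calibrated parameters.

```python
# Sketch of Equation (1): resources consumed to operate capital.
# All numeric values here are illustrative, not HARMONEY's calibration.

def operating_resource_use(eta, capital, capacity_utilization):
    """Resource consumption rate to operate a capital stock: eta * K * CU."""
    assert 0.0 <= capacity_utilization <= 1.0
    return eta * capital * capacity_utilization

# The same relation holds for both sectors, so total operational
# consumption is the sum over the goods and extraction sectors.
goods = operating_resource_use(eta=0.16, capital=100.0, capacity_utilization=0.8)
extraction = operating_resource_use(eta=0.16, capital=40.0, capacity_utilization=0.9)
total = goods + extraction  # goods ≈ 12.8, extraction ≈ 5.76
```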

 

Demonstrating Jevons Paradox (or the backfire effect)

I will show a series of outputs from the HARMONEY model. There are two simulation results shown on each figure. The solid line represents the simulation where each η remains at a constant value of 0.16 throughout the simulation.  The dashed line represents the simulation where each η decreases, starting at year = 50, from its maximum value of 0.16 to a minimum value of 0.0533 as a function of how fast new capital is created (e.g., rate of investment).  Importantly, a decreasing η represents the SAME effect as increasing thermodynamic efficiency of an electric motor, steam turbine, combustion engine, etc.   Figure 1 shows the constant η for the first simulation and the decreasing η for the second simulation.

Figure 1. The amount of resources consumption, or η, to operate a unit of capital in (left) the extraction sector and (right) the goods sector.  Solid line = constant η (constant efficiency).  Dashed line = decreasing η (increasing efficiency).
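The blog states that η falls from 0.16 to a floor of 0.0533 as a function of how fast new capital is created, but the exact functional form is not reproduced here. This sketch uses an exponential approach to the floor as an assumed stand-in; the decline rate k is invented.

```python
import math

# eta falls from 0.16 toward a floor of 0.0533 as investment accumulates.
# The exponential form and the rate constant k are assumptions for
# illustration, not the functional form used in the HARMONEY paper.
ETA_MAX, ETA_MIN = 0.16, 0.0533

def eta(cumulative_investment, k=0.01):
    """Assumed decline of eta with cumulative investment (k is invented)."""
    return ETA_MIN + (ETA_MAX - ETA_MIN) * math.exp(-k * cumulative_investment)
```

Before any new investment, eta(0.0) returns the maximum value of 0.16; with large cumulative investment the value approaches the 0.0533 floor.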

 

Figure 2 shows the amount of resources that resides in the environment, yet to be extracted.  In the constant efficiency scenario, the resource can get extracted to a level of near 65 units of resources remaining (65% of its maximum possible level) at its most depleted state.  In the increasing efficiency case, the resource is depleted to near half of its maximum level at about 50 units of resources remaining at its most depleted state.

 

 
Figure 2. The amount of natural resources available, or remaining, in the environment for the economy and population to consume decreases when machines become more efficient.  Solid line = constant η (constant efficiency).  Dashed line = decreasing η (increasing efficiency).

 

Figure 3 shows that the resource extraction rate increases in the dashed-line, increasing-efficiency scenario.   (NOTE: By the definition of the resource as a forest, the maximum resource extraction rate occurs when the resource is depleted to half of its maximum level. I have chosen the parameters so that extraction does not go (much) past the 50% level for the purposes of this blog, as this most accurately represents our primary use of fossil fuels.)
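The forest-like resource behaves logistically, so regeneration, and hence the maximum sustainable extraction rate, peaks when the stock sits at half its maximum. A small sketch with illustrative (not model-calibrated) values of the regrowth rate r and carrying capacity y_max:

```python
# Logistic regrowth of the forest-like resource stock y. Regeneration,
# and thus steady-state extraction, is largest at y = y_max / 2.
# The values of r and y_max are illustrative only.

def regeneration_rate(y, r=0.05, y_max=100.0):
    """Logistic regrowth rate: r * y * (1 - y / y_max)."""
    return r * y * (1.0 - y / y_max)

levels = [10.0, 30.0, 50.0, 70.0, 90.0]
rates = [regeneration_rate(y) for y in levels]
assert max(rates) == regeneration_rate(50.0)  # peak regrowth at half the maximum stock
```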

 
Figure 3. The rate of resource extraction increases when machines become more efficient.  Solid line = constant η (constant efficiency).  Dashed line = decreasing η (increasing efficiency).

 

Figure 4 shows a higher human population for the scenario in which there is increased machine efficiency.  If a model assumes constant or exogenous population growth, then it probably cannot exhibit the backfire effect (or Jevons Paradox).  The reason that population increases with higher efficiency is that after resources are consumed to (i) operate machines and (ii) make more machines, the higher efficiency leaves more resources for human consumption. Death rates therefore remain low for a longer period of time, which in turn allows population to increase for longer.  Eventually, population levels off even in the increased efficiency scenario, but in a world that also has more machines (more capital).

 
Figure 4. The human population increases when machines become more efficient.  Solid line = constant η (constant efficiency).  Dashed line = decreasing η (increasing efficiency).

 

Figure 5 shows the amount of total capital that is higher for the scenario in which there is increased machine efficiency.  The reason that the capital stock increases with higher efficiency is that after resources are consumed to operate machines, the higher efficiency allows for more resources to be left for the creation of more machines.

 
Figure 5. The total capital in the economy (both for the extraction and goods sectors) increases when capital operates with higher resources efficiency.   Solid line = constant η (constant efficiency).  Dashed line = decreasing η (increasing efficiency).

 

With increased capital, population, and resource extraction (the major “factor inputs” to economic production) as efficiency increases, there is also increased net output of the economy, as shown in Figure 6.

 
Figure 6. The net output, or GDP, of the model economy is about two times larger after the increase in capital resources consumption efficiency.    Solid line = constant η (constant efficiency).  Dashed line = decreasing η (increasing efficiency).

 

Clearly increased efficiency allows for (i) higher resource depletion, (ii) a higher extraction rate of resources, (iii) an increased population, and (iv) more capital accumulation.  For the latter two stocks, population and capital, higher operational resource efficiency frees resources to accumulate both more people and more capital that do more work.  This is consistent with the idea that more useful work, which is all energy inputs (technically exergy) times their full conversion efficiencies, goes hand in hand with more GDP.  Thus, choices to increase efficiency in the real economy have so far (globally to date) translated into more useful work, resource extraction, and net output (or GDP).

Figure 7 shows the real price of natural resources and goods (or machines).   With increasing efficiency (or decreasing η in the model), the price for a unit of natural resources increases relative to a constant efficiency world, and this is precisely the trend we’ve experienced for world oil prices that increased after the 1970s, when oil efficiency efforts started.  To date, oil prices have yet to return to the low prices experienced for the 90 years previous to 1974.  The price of goods (or machines) decreases with the increasing efficiency in their operation.

 

Figure 7. (a) The price of natural resources increases as η, the amount of resources to operate a unit of capital, decreases.  (b) The price of goods (or machines) decreases as η, the amount of resources to operate a unit of capital, decreases.   Solid line = constant η (constant efficiency).  Dashed line = decreasing η (increasing efficiency).

 

For the net energy geeks out there, Figure 8 shows the “net external power ratio”, or NEPR, which is the same concept that many people refer to as EROI = “energy return on (energy) invested”.  (See my previous publication for reasons why I prefer to use NEPR as a more specific term.)  Equation (2) expresses the idea behind the mathematics.

 

NEPR = (resource flows available for use outside of the extraction sector) /  (resource inputs to operate extraction capital + resource inputs to create new extraction capital)    (2)
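A minimal numerical sketch of Equation (2); the flow names mirror the equation, and the numbers are invented for illustration, not model output.

```python
# NEPR per Equation (2): the resource flow delivered outside the
# extraction sector, per unit of resources the extraction sector uses
# (to operate its capital plus to create new extraction capital).

def nepr(gross_extraction, operate_capital_inputs, create_capital_inputs):
    """Net external power ratio for one period's resource flows."""
    invested = operate_capital_inputs + create_capital_inputs
    return (gross_extraction - invested) / invested

# Of 100 units extracted, 8 operate and 2 build extraction capital:
# 90 units reach the rest of the economy per 10 invested, so NEPR = 9.
ratio = nepr(100.0, 8.0, 2.0)
```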

 

When machines consume fewer resources for their operation, there is a period in which NEPR increases before reaching a maximum, after which it again declines.  This increase, in tandem with increasing efficiency, represents an increased net resource flow available as “net output,” which in the economic sense is the output available for consumption and investment.

 

 
Figure 8. The net external power ratio (NEPR), often referred to as EROI (= energy return on (energy) invested), increases in response to an increase in capital operating efficiency since a higher fraction of total resource flows can temporarily go to net output.   Solid line = constant η (constant efficiency).  Dashed line = decreasing η (increasing efficiency).

 

Figure 9 shows that the wage share, or percentage of GDP that is paid as wages, increases during the time that efficiency is increasing, from about year = 50 to year = 110.  Increasing consumption of resources and GDP can thus translate into a higher distribution to wages (or at least a wage share that declines at a slower rate).

 
Figure 9. The share of GDP that is paid to wages (or workers).  Solid line = constant η (constant efficiency).  Dashed line = decreasing η (increasing efficiency).

 

Figure 10 shows that the debt ratio is temporarily lower while efficiency increases. Not shown (because the simulation stops) is that the debt ratio in the efficiency case also eventually starts to decrease, because resource constraints push investment below profits such that companies pay back debt.  See the paper for more details.  To understand the current economic situation in the U.S. and most OECD countries, it is critical to understand the context of high levels of private debt of companies and consumers (debt of consumers is not part of the HARMONEY 1.0 model).  Increasing efficiency can increase output faster than debt (hence lowering the debt ratio), but only for a while. Ultimately, the second law of thermodynamics limits the efficiency of energy conversions.

 

 
Figure 10. The debt ratio (debt/GDP) of the economy increases more slowly while efficiency increases (e.g., decreasing η).    Solid line = constant η (constant efficiency).  Dashed line = decreasing η (increasing efficiency).

 

Summary

The HARMONEY model shows many trends that are indicative of the real world economy.  Thus far, globally, we keep making end-use devices more efficient, and we keep consuming energy at higher rates. When η is decreased, or efficiency is increased, in the HARMONEY model, resource extraction increases, population increases, capital increases, and net output increases.  Increasing efficiency is a tactic to increase consumption and output, not a tactic to reduce overall resource consumption.  The Jevons Paradox is only a paradox to those who are not thinking about the dynamics of the economy and how it responds to changes in efficiency over time.

How Wages are linked to Energy Consumption: Data and Theory

Introduction

How do economic analyses account for the roles and impacts of both the cost and quantity of natural resource consumption?

This question has been debated perhaps as long as there has been the profession of economics.  Before the use of fossil fuels, early “classical” economists knew that most products of interest, such as food and building materials, came from the land as it harnesses the energy from the sun. Thus, land as a natural resource was front-and-center to economic thinking.

With industrialization and the use of fossil fuels (that provide energy independent of current sunlight) economic analyses became less focused on the role of natural resources as an input into economic production such that in the 1900s most mainstream (i.e., Neoclassical) growth models do not directly account for energy and natural resources.  Many researchers, including myself, think we must explicitly consider the use of natural resources if we are to understand economic growth and the distribution of the stocks (e.g., debt) and flows (e.g., wages, profits) of money within the economy.

I have recently published a paper on my economic growth model that consistently and simultaneously accounts for both the use of natural resources, such as energy, and debt. 

The paper sheds new light on some of the most important contemporary economic trends in the United States and other economies of the OECD.  In particular, the model provides the foundation to directly link changes in the rate of energy consumption to increases in wage inequality and debt that began during the 1970s.

This publication is in a 2020 volume of the journal Ecological Economics as “An Integrated Biophysical and Economic Modeling Framework for Long-Term Sustainability Analysis: the HARMONEY Model”.   The name of the model, “HARMONEY,” is an acronym for “Human And Resources with MONEY.”

 

Model Results Reflect Trends in U.S. Data

Figures 1 and 2 show comparisons of model results to U.S. data.  For these comparisons the qualitative similarities in the general sequence of long-trends and structural change are important, not the relation of magnitudes of variables or specific model times to specific years in the U.S. data.

Figure 1 shows the wage share and per capita energy consumption of the U.S. The wage share is the percentage of GDP allocated to hourly or salaried workers. Notice how both the wage share and per capita energy consumption have a different trend before versus after the early 1970s. Before 1973, wage share remained constant at about 50% of GDP, and energy consumption per person increased at 3%/yr. After 1973, wage share declined at about 1.5-2% per decade as energy consumption per person declined slightly or remained relatively constant.

 
Figure 1. (a) The wage share (left axis) from the HARMONEY model shows the same turning point as in the U.S. data: the long-term trend shifts from a constant value to a declining value when per capita resource consumption reaches its peak.  (b) Data for the U.S. wage share (left axis) and per capita energy consumption (right axis) both change their long-term trends in the 1970s.

The model results show practically the exact same trends as in the U.S. data.  When initially formulating the model, I had no immediate goal to mimic this type of relationship. I did want a model that had several important elements, but I didn’t anticipate my first results would so clearly relate to real world data. In the HARMONEY model, the wage share emerges because of how its systems-oriented structure relates the elements to one another, as described further below.

The HARMONEY model also provides insight into debt accumulation. Figure 2 shows private U.S. debt in terms of the debt ratio (debt divided by GDP) for corporations and financial institutions. These two categories are equivalent to the concept of debt included in the model. It was the accumulation of U.S. private debt (and household debt in mortgages) and the associated interest payments that triggered the 2008 Financial Crisis. The crisis was not triggered by government debt.

The new insight from this research is that it shows how increasing debt ratios can arise from a slowdown in resource consumption rates.  In essence a debt crisis cannot be analyzed independently of the longer-term context of natural resource consumption.

Figure 2. Both the (a) U.S. data and (b) HARMONEY model show a slow rise in private debt ratio before a more rapid increase. The transition occurs soon after the peak in per capita energy consumption for the U.S. and peak in resource extraction per person for the model. U.S. data are from the U.S. Federal Reserve Z.1 Financial Accounts of the United States, Table L.208 (Debt, listed as liabilities by sector). Model results are from the scenario labeled as “Renewable-High(b)” in the paper.

Note how private debt ratio increases much more rapidly after the 1970s than before, and the increase in financial sector debt drives the overall trend for the U.S. This same breakpoint occurs in the HARMONEY model and for the same reasons. In both the U.S. data and the model, when per capita resource consumption was rapid, the debt ratio increased but at a much slower rate than after per capita consumption stagnated.  Note that “mainstream” neoclassical economic theory does not account for the concept of debt, and it assumes the quantity of money has no fundamental role in long-term trends. Steve Keen’s research provided a simple way to include debt into economic growth modeling. In his 2011 book Debunking Economics, Keen states the problem clearly:

“This [lack of consideration of debt], along with the unnecessary insistence on equilibrium modeling, is the key weakness in neoclassical economics: if you omit so crucial a variable as debt from your analysis of a market economy, there is precious little else you will get right.” –– Steve Keen (2011)

This lack of consideration of debt is the fundamental reason why mainstream economists could not foresee or anticipate the 2008 Financial Crisis. Their theory tells them not to model debt, the direct cause of the crisis itself!

 

More Details on the Results

For those who want more details explaining these model results, keep reading.  Also, at the bottom of this blog I provide links to videos where I describe the model structure and results.

The wage share decline is driven by two quantities: the accounting for depreciation of an increasing quantity of capital and the interest payments on a rising debt ratio. The pattern occurs if you assume, as observed in the U.S. data, that companies keep investing more money than their profits. Since the 1920s, U.S. corporations have typically invested 1.5 to 2.5 times more each year than they make in profits. Thus, in the face of constant or more slowly increasing total energy consumption, the economy accumulates capital that either operates less or requires less energy to operate (e.g., efficient equipment, computers).
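A toy illustration of that mechanism (assumed dynamics, not the paper's equations): firms borrow the gap between investment and profits, so persistent over-investment against slowly growing output drives the debt ratio up. All numbers are invented.

```python
# Toy debt-ratio dynamics: firms invest 1.5x profits (as in the U.S. data),
# borrow the shortfall, and GDP grows slowly. These equations are an
# illustrative sketch, not HARMONEY's actual debt accounting.
debt, gdp = 50.0, 100.0          # start at a debt ratio of 0.5
for year in range(10):
    profits = 0.10 * gdp
    investment = 1.5 * profits   # invest more than profits
    debt += investment - profits # borrowing covers the shortfall
    gdp *= 1.02                  # slow output growth
debt_ratio = debt / gdp          # has risen well above the initial 0.5
```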

Think about the patterns in Figures 1 and 2 the following way. We can assume four major distributions from GDP (or ‘value added’) in national economic accounting: government (as taxes), private profits including interest (or rent) payments to capital owners, depreciation (on capital), and wages (to workers).

In a capitalist system based on maintaining private sector profits, if both the debt ratio and the amount of capital per person increase, then increasing shares of GDP go to two categories: depreciation and interest payments. To minimize interest payments at high debt, you must lower the interest rate, and that is why central bank interest rates have remained at historic lows, sometimes even negative, since 2008.  Assuming a constant share of GDP to government taxes, when there is a restriction in the growth rate of GDP and energy consumption, the prioritization of profits, taxation, and depreciation means that the workers’ share is the only portion available to take the hit.
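The accounting argument above can be sketched arithmetically: with the GDP shares of taxes and profits held fixed, any rise in the depreciation and interest shares must come out of the wage share. The share values below are invented for illustration.

```python
# Wage share as the residual of the four other distributions from GDP.
# All share values are illustrative, not U.S. data.

def wage_share(tax, profit, depreciation, interest):
    """Fraction of GDP left for wages after the other fixed claims."""
    return 1.0 - (tax + profit + depreciation + interest)

early = wage_share(tax=0.15, profit=0.20, depreciation=0.10, interest=0.05)
late = wage_share(tax=0.15, profit=0.20, depreciation=0.15, interest=0.10)
assert late < early  # the wage share absorbs the rising claims
```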

A short trip down memory lane provides the context for why I’ve performed this research.

In 1972, the book The Limits to Growth brought an idea into mainstream discussion: physical growth on a finite planet cannot continue.  There were both detractors and proponents of the conceptual and mathematical models used in the book. When the authors updated the modeling in their 1992 Beyond the Limits, William Nordhaus (Nobel Laureate, awarded in 2018) again critiqued the approach as he’d done in 1973, in his paper Lethal Model 2: The Limits to Growth Revisited.  Whether any “limits to growth” exist is contested in the economic literature, but there is little doubt in the ecological literature.  Many, including Ugo Bardi in his The Limits to Growth Revisited, state that the critiques of Nordhaus were ignorant of the mathematical and computational methods used in The Limits to Growth models.  However, in a commentary within Nordhaus’ 1992 critique, Martin Weitzman effectively summarized the differences in worldviews between an ecological approach to economics and the mainstream view:

“There may be some value in trying to understand a little better why the advocates of the limits-to-growth view see things so differently and what, if anything, might narrow the differences.

I think that there are two major differences in empirical world views between mainstream economists and anti-growth conservationists. The average ecologist sees everywhere that carrying capacity is a genuine limit to growth. Every empirical study, formal or informal, confirms this truth. And every meaningful theoretical model has this structure built in. Whether it is algae, anchovies, or arctic foxes, a limit to growth always appears. To be sure, carrying capacity is a long-term concept. There may be temporary population upswings or even population explosions, but they always swing down or crash in the end because of finite limits represented by carrying capacity. And Homo sapiens is just another species, one that actually is genetically much closer to its closest sister species, chimpanzees, than most animals are to their closest sister species.

Needless to say, the average contemporary economist does not readily see any long-term carrying capacity constraints for human beings. The historical record is full of past hurdles to growth that were overcome by substitution and technological progress. The numbers on contemporary growth, and the evidence before one’s eyes, do not seem to be sending signals that we are running out of substitution possibilities or out of inventions that enhance productivity.” — Martin Weitzman (1992)

Per Weitzman, I have been interested in “narrowing the differences” between economic and ecological worldviews by coherently including them in the same framework.  It was with that goal in mind that I created the model summarized in this article.  The model is based on a similar concept as that in The Limits to Growth in that it has an allocation of resources and capital between the “resource extraction” and “other” parts of the economy.  But to better communicate with economists it also includes economic factors such as debt and wages. Without this type of combination we can’t understand if and how energy and resource consumption play a role in the trends of debt ratios and wage inequality that now dominate contemporary social, economic, and political discussion.

It is easier to propagate the meme of your model if you give it a memorable name, so I called my model HARMONEY for “Human And Resources with MONEY”. The HARMONEY model is a combination of two other existing models. The first is a simple model of an agrarian society that harvests a forest-like resource to feed itself. The second is a model of a simple economy with fluctuating business cycles, tracking capital, wages, and employment, while also considering the real world tendency of businesses to invest more than their profits by borrowing money from a bank. This borrowing is what “creates money” as debt within the model, just like commercial banks create money when they provide a loan to a business.

From the standpoint of natural resource use, HARMONEY has three key features that are consistent with real-world physical activities and that drive the patterns in Figures 1 and 2. First, natural resources are required to operate capital. This is the same as saying you need fuel to run your car, and a factory needs electricity to operate manufacturing machinery and computers.  Second, natural resources are required to make new capital. This is the same as saying that all of the objects around you now (coffee mugs, computers, buildings, etc.) are made of natural resources. Third, natural resources are required to sustain human livelihood. This is the same as saying that, at a very basic level we need food to survive, and at a higher level more resource consumption leads to more longevity. Thus, whatever the flow of natural resources, those resources must be allocated between the three aforementioned uses.

These three features for modeling the use of natural resources, combined with the concept of private debt as loans from banks, give us tremendous insight into contemporary economic discussions.

 

Links for Further Exploration

For further learning you can access the article directly and watch videos of me presenting the model background and results (video 2018, video 2019) via my website: http://careyking.com/publications/ and http://careyking.com/presentations/.

Artificial Intelligence and the Utility Monster: It’s the Economy Stupid

In his 2014 book Superintelligence: Paths, Dangers, and Strategies, Nick Bostrom discussed issues related to whether we could prevent a superintelligent artificial intelligence (AI) computer system from posing an existential risk to humanity.  In 2014 he also presented for Talks at Google. In that presentation, an audience member (at 49 min 35 sec) posed the idea that a superintelligent computer could become a utility monster.  The utility monster is an idea of philosopher Robert Nozick, and it relates to the philosophical concept of utilitarianism.

In utilitarianism, only the maximum happiness, or utility, of the group is what matters. The distribution of utility within the group does not matter. Consider the idea of marginal utility which is how much utility comes from consuming the next increment of resources.  Because the superintelligent AI system might be much smarter than all of humanity, it could have a higher marginal utility than that of humans.  The machine could conclude that total utility was maximized by its consuming one-hundred percent of natural resources because in doing so, it could maximize overall utility simply by maximizing its own utility.

Bostrom then discussed the paper clip maximizer as a classic AI thought experiment. What if the superintelligent AI system only tries to maximize the number of paper clips (the paper clip is an arbitrary placeholder)? The AI system would likely determine that keeping humans alive is detrimental to the goal of maximizing the number of paper clips in the world. Humans need resources to survive, and these resources could be used to make more paper clips.  It is not that the AI machine dislikes or specifically tries to harm humanity. It is just that the superintelligent AI system is indifferent to our existence.

Now think about “the economy” and the metric of gross domestic product (GDP), which is usually used as a metric of the size, or throughput, of the economy. GDP is roughly treated as utility in economics. GDP is now a substitute for paper clips. Could we tell the difference between a world that is run by a superintelligent GDP maximizer and the world that we live in right now?  That is to say, if certain politicians, business owners and executives, and economists are pushing for rules that maximize GDP, then is “the economy” simply a mechanism to maximize GDP without regard for how money is distributed?

Philip Mirowski points out that one of Friedrich Hayek’s ideas was that the economy was smarter than any one person or group of persons. Government officials, for example, can’t know enough to make good economic decisions. Mirowski discusses Hayek’s idea in his book The Road from Mont Pelerin which explores the history of the “neoliberal thought collective”.  Mirowski points out that Hayek saw the economy as the ultimate information processor.  Thus, markets are able to aggregate data in the most effective way to produce the “correct” signal, say the price, to direct people on what to make and what to buy.

Need better decisions? Make another market! There is little to no need for people to think.

In an extreme world with markets for everything, each of us becomes an automaton responding to price signals to maximize collective utility, or GDP, that might have very little to do with our personal well-being.

How could we know if we have allowed the economy to simply become a GDP maximizing utility monster? Perhaps GDP would keep going up, but if it didn’t, perhaps we’d start adding activities to GDP that have existed for centuries, but had previously not been counted due to illegality or other reasons. Prostitution and the legalization of previously illegal drugs are examples. Check on that one.

Perhaps if all we wanted to do was increase GDP, we’d cut corporate taxes to spur investment in capital versus spending on education, which is for people. Perhaps human life expectancy would go down, and drug sales would be up (the utility monster is indifferent to people). Perhaps we’d see increases in wealth or income inequality. Perhaps people would contract with “transportation network companies” to drive around, wait for algorithmic signals on where to drive to pick up a person or thing, and then deliver that person or thing as directed.

Most macroeconomic analyses are based upon the concept of maximizing utility, which is usually interpreted as the value of what “we” consume over all time into the future.  Many interesting (troubling to many) trends are occurring in the U.S. regarding health, distribution of income, and the ability of people to separate concepts of fact and truth. Thus, we should consider whether the superintelligent AI future some fear might already be in action, but perhaps at a slower and more subtle pace than some pontificate might happen after “the singularity,” when AI becomes more capable than humans.

The recent populist political movements in the U.S. and other countries could in fact be a rejection of the “algorithm of GDP maximization” associated with our current economic system.

Learn about utilitarianism.  Learn to go beyond GDP here, here, and here.

A Lack of Systematic Thinking Keeps America from Staying Great

The following is an Energy Institute commentary piece coauthored with Dr. Josh Rhodes also of the University of Texas at Austin Energy Institute, January 2018.

Our economic system operates within intellectual, social, and physical constraints. Each of these constraints can feed back to affect the others. To produce more goods and services we have to 1) know how to produce them, 2) make them desirable, acceptable and affordable, and 3) have the required natural resources.  The finite size of the Earth increasingly affects socioeconomic outcomes across the globe, including within the developed economies.

Ecologists, anthropologists, and systems scientists have anticipated this since the 1970s.  However, the physical constraints on societal and economic organization and equality are largely unappreciated and misunderstood.

Click here to read the full commentary

Pipeline, Standing Rock conflict is all about power

The following is the text of an opinion editorial I wrote that was placed in many major Texas newspapers on December 8, 2016. I also include comments received from readers via e-mail, including names only when persons specifically gave permission to do so.

Links to versions in the Austin American-Statesman, Houston Chronicle, and Dallas Morning News

The recent decision by President Barack Obama’s administration, via the Army Corps of Engineers, to ask for a more in-depth environmental impact statement regarding a final section of the Dakota Access oil pipeline represents a clash of power.  The simple story is one of environmental and health concerns, but in reality the full story is much more. It is a continuation of the populist fervor building up in the United States.  It is a continuation of the pursuit of infinite growth. It is a story of physical power, political power, and economic power.

The pipeline is designed to transport 570,000 barrels per day of U.S. light sweet crude oil from the Bakken and Three Forks production region of North Dakota to Patoka, Ill.  That is 40 gigawatts of power, or the output of 20 nuclear power plants. It is a power level equal to more than half of the peak electric load in Texas on the hottest summer day, an amount of power that is not trivial.
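The 40-gigawatt figure is easy to verify with a back-of-envelope calculation (the ~6.1 GJ per barrel energy content of crude oil is my assumed round number, not a figure from the op-ed):

```python
# Back-of-envelope check of the pipeline's power flow.
# Assumed value: ~6.1 GJ of chemical energy per barrel of crude oil.
BARRELS_PER_DAY = 570_000
GJ_PER_BARREL = 6.1          # approximate energy content of a barrel of crude
SECONDS_PER_DAY = 86_400

energy_per_day_J = BARRELS_PER_DAY * GJ_PER_BARREL * 1e9
power_W = energy_per_day_J / SECONDS_PER_DAY
power_GW = power_W / 1e9

print(f"{power_GW:.0f} GW")  # roughly 40 GW
```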

This amount of physical power flow does not go unnoticed by those who lack economic and political power. In the early days of the fossil fuel age, a small group of people could restrict the flow of coal, and thus significant physical power.  Those who can restrict or control the flow of physical power can command economic power, and those in control of economic power can command political power.  The Dakota Access pipeline is no different.

In short, it is all about power.

Thus, by challenging the physical flow of power, the Standing Rock tribe challenged the current economic and political power.  After months of protest, they saw local law enforcement treat them as the first African Americans integrated into southern universities were treated: with tear gas, rubber bullets, and water cannons. These Native Americans, and those joining them, were on a slow path to defeat with orders to vacate the protest camp. They simply did not represent enough political or economic power.

However, the power struggle turned in their favor as soon as a new political power arrived in the form of a group of 2,000 military veterans.  Firing tear gas and water cannons at Native Americans is bad for business. Doing the same to military veterans is a public relations nightmare for business and politicians.

Obama’s decision on Dakota Access is an easy one to make as the outgoing President, and at the onset of winter in North Dakota.

As president-elect Donald Trump discusses approving the Dakota Access pipeline route, attempting to reverse the decision of his predecessor as quickly as possible, it will test the populist credentials he sold to the American public.  More physical power (e.g., oil flow) does translate to a larger economy. The oil in the ground is of no use if it cannot flow to the pump.  But alas, there is also less use in gasoline flowing to the pump if fewer and fewer people can afford to use it. More power flowing to fewer pockets is not what Trump claims to promote.

The Keystone XL oil pipeline debate centered on carbon and climate concerns and on where our physical power originates. The voters in Wisconsin, Ohio, and Michigan that helped put Trump in the White House were not thinking about climate change.  These Americans felt left behind by increased global competition.  They lost economic power and control over their lives. Trump told them he would give both back to them; whether that actually happens remains to be seen.

The Dakota Access pipeline concerns the same story. It’s about the power of people to be in control of decisions that affect their lives.  The Native Americans, protestors, and veterans in North Dakota showed up as a test of the power of the local people against broader business interests. They won this battle, but if history is any indication, they likely will not win the war to stop or reroute the pipeline. Obama bought them some time.  Only time will tell just exactly what Trump will buy for them, and thus which citizens of America he is helping to be great again. Trump needs to let us know if he thinks there is equal power for ensuring a right-of-way versus the right to get in the way.

Carey King is a research scientist and the assistant director of the Energy Institute at The University of Texas at Austin.

Energy Giant Shell Says Oil Demand Could Peak in Just Five Years – Commentary

This blog refers to the recent Bloomberg article below on “peak oil demand.” In addition, Marketplace on November 16 made the incorrect case that “peak oil demand,” driven by increases in efficiency, is somehow different from “peak oil” in general. It isn’t: peak oil (demand, if you will) is related to budget constraints that ultimately stem from resource constraints.

(http://www.bloomberg.com/news/articles/2016-11-02/europe-s-biggest-oil-company-thinks-demand-may-peak-in-5-years)

I’ll comment on two quotes from the Bloomberg article.

FIRST: From the article is the following quote:

“We’ve long been of the opinion that demand will peak before supply,” Chief Financial Officer Simon Henry said on a conference call on Tuesday. “And that peak may be somewhere between 5 and 15 years hence, and it will be driven by efficiency and substitution, more than offsetting the new demand for transport.”

I’ve commented on this before with regard to people discussing peak oil.  Over the long term, there is no difference between peak supply and peak demand.  In order for Shell and others to extract more oil, consumers need to want, and be able, to purchase the refined products from that oil.  Peak supply is defined by peak demand as much as peak demand is defined by peak supply.   Ideally both supply and demand follow each other.  But if they diverge to an “extreme,” then there are lower profits and layoffs in the industry (supply > demand), or a recession can occur if demand exceeds supply to a large enough extent (e.g., due to a rise in oil prices without time to substitute).

Thus, peak oil demand (though we can’t actually measure demand directly due to lack of data) IS a response to oil supply constraints (and a finite Earth more generally) in the long term.  If not, what else is responsible for the lack of purchasing power of consumers?  If U.S. workers were getting paid higher incomes AND deciding not to purchase more oil AND working normal 40-hour work weeks, then we would have a reason to think about whether demand was being tempered by choice. Until then, the most logical conclusion is that consumer budgets are constrained, and these constrained budgets are not independent of resource constraints and the increased difficulty for companies to make profits (thus lower margins, and lower wages to save even those low margins).

 

SECOND: On the topic of biofuels and hydrogen replacing oil

Shell will be in business for “many decades to come” because it is focusing more on natural gas and expanding its new-energy businesses including biofuels and hydrogen, Henry said.

“Even if oil demand declines, its replacements will be in products that we are very well placed to supply one way or the other, so we need to be the energy major of the 2050s,” Henry said. “That underpins our strategic thinking. It’s part of the switch to gas, it’s part of what we do in biofuels, both now and in the future.”

While Simon Henry is not directly quoted here discussing hydrogen as a substitute for oil, he does discuss biofuels.  It is an absurd assertion that there can simultaneously be a peak in demand for liquid fuels from petroleum while, for some reason, consumers would still be able to afford to substitute biofuels (which have much lower net energy and are restricted by land use; even algae is limited due to low net energy) or hydrogen (which is not a primary fuel and is difficult to store).

I do believe the world will continue to electrify (via renewable electricity) to reduce demand on hydrocarbons, which can be used for physical material feedstocks as well as fuels. But please, let’s not tell people … still … that biofuels and hydrogen can substitute for oil anytime in the next several decades.  We would need to invest much, much more into hydrocarbon-efficient end-use devices (e.g., electric vehicles) before we could afford even a fraction of current developed-world lifestyles while powering any decent percentage of our economy on biofuels and hydrogen.

Always remember: it is “energy price” × “energy consumption” = “energy expenditures,” compared to incomes and GDP, that determines whether or not energy is expensive.  To afford higher prices (e.g., $/BBL) you need to become more efficient with each BBL, and that efficiency does not come at zero investment cost.
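That identity can be sketched in a few lines (all numbers below are hypothetical, chosen only to illustrate the tradeoff between price and efficiency):

```python
# Sketch of the affordability identity in the text:
# expenditures = price x consumption, judged relative to income/GDP.
# All numbers below are hypothetical, for illustration only.
def energy_expenditure_share(price_per_bbl, barrels_per_year, gdp):
    """Fraction of GDP spent on this energy source."""
    expenditures = price_per_bbl * barrels_per_year
    return expenditures / gdp

# Keeping the same expenditure share at a doubled price requires halving
# consumption, i.e., becoming twice as efficient per barrel.
share_a = energy_expenditure_share(50, 7e9, 2.0e13)    # cheap oil, high use
share_b = energy_expenditure_share(100, 3.5e9, 2.0e13) # dear oil, half the use
print(share_a, share_b)  # both 0.0175
```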

 

Relations Between Energy and Structure of the U.S. Economy Over Time

If you care to understand how the “energy part” of our economy feeds back and shapes the “non-energy part” of the economy, then this blog’s for you!

Essentially every energy analyst and energy economist should understand the results of this paper.

The findings of this paper have important implications for economic modeling in that the paper helps explain how fundamental shifts in resources costs relate to economic structure and economic growth.

This is a summary of my publication in Biophysical Economics and Resource Quality:

King, Carey W.  Information Theory to Assess Relations Between Energy and Structure of the U.S. Economy Over Time. Biophysical Economics and Resource Quality, 2016, 1 (2), 10. doi: 10.1007/s41247-016-0011-y.  View the paper free online here: link, or download a pre-print draft: link.

 

One of the major driving influences of the research behind this paper comes from a mixture of ideas from Charlie Hall and Joseph Tainter.   Hall (an ecologist by degree) is seen by many as “Dr. EROI,” where EROI = ‘energy return on energy invested’ is the most common ‘net energy’ term: a calculation of how much energy you get for each unit of energy you use to extract energy.  Tainter (an anthropologist) appreciates the concept of net energy and has applied it qualitatively to describe how more net energy and gross energy are required to enable the structure of ‘complex’ societies (mostly, but not entirely, in a pre-industrial context):

“Energy gain has implications beyond mere accounting. It fundamentally influences the structure and organization of living systems, including human societies.”

(Tainter et al, 2003: article link)

WHY DO WE CARE?    Complexity … redundancy versus efficiency … equality versus hierarchy …

The reason why we care to understand the state of the economy, or other complex systems, in terms of efficiency and redundancy (or resilience) is that more efficient systems (that produce more output for increasingly fewer inputs) are also brittle.  If conditions change, they are less able to adapt.  The same conditions that allowed them to greatly increase output with fewer inputs also force them to greatly decrease output when those fewer inputs are no longer available (e.g., oil imports are embargoed).

 

In this paper, I put Tainter and other ideas relating to tradeoffs of “efficiency” and “redundancy” to the test.  I did so using the concepts of ecologist Robert Ulanowicz, who has for a large part of his career worked on calculating the ‘structure’ of ecosystems using an information theory approach.  I was immediately convinced that Ulanowicz’s framework could be applied to economic data to test Tainter’s concept and also test if there indeed was any relationship we could see between net energy and the economy.

Thus, my paper describes the changing structure of the United States’ (U.S.) domestic economy by applying Ulanowicz’s information theory-based metrics (with some added twists I felt necessary to be more precise) to the U.S. input-output (I-O) tables (e.g., economic transactions) from 1947 to 2012.

The results of this paper (summarized in Figure 1) show that increasing gross power consumption, as well as less spending by the food and energy sectors, correlates with an increased distribution of money among economic sectors, and vice versa.

In short, the ideas of Hall and Tainter appear to be true: the U.S. economic structure does change significantly depending upon (1) the rate at which we consume energy (e.g., power as energy/year) and (2) the relative cost of energy (and food)!!

I will now explain in more detail how to understand the results in Figure 1 (see Appendix at the end of this blog to understand how the calculations work).   In Figure 1, the “Net Power Ratio” (NPR) is a metric of “energy and food gain” that is larger when energy and food costs are lower.  Its definition is: NPR = (Gross Domestic Product) / (Expenditures of Energy and Food sectors).
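As a minimal sketch of that definition (the dollar figures below are hypothetical, not from the paper):

```python
def net_power_ratio(gdp, energy_food_expenditures):
    """NPR = GDP / (expenditures of the energy and food sectors).
    A larger NPR means energy and food are cheaper relative to total output."""
    return gdp / energy_food_expenditures

# Hypothetical numbers: a $20T economy spending $1.5T on energy and food.
print(net_power_ratio(20e12, 1.5e12))  # ≈ 13.3
```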

 


Figure 1. After 2002, when energy, food, and water sector costs increased after reaching their low point, the direction of structural change of the U.S. economy reversed trends indicating that money became increasingly concentrated in fewer types of transactions.

 

The information theory metrics indicate two time periods at which major structural shifts occurred:  The first was between 1967 and 1972, and the second was around the turn of the 21st Century when food and energy expenditures no longer continued to decrease after 2002.

 

Structural Shift 1 (1967 or 1972):

The change in trend around 1967 (it could possibly be dated to 1972) is that equality shifted from increasing to decreasing.

From 1945 until 1967/1972, both equality and redundancy were increasing.  The U.S. was increasing its power consumption (e.g., energy/yr) at about 4%/yr. That is to say, the U.S. economy was booming after World War II, gobbling up more and more energy every year at a high rate because energy (e.g., oil) was abundant and getting cheaper by the day.

Increasing equality means that over time each sector of the economy was coming closer to the condition that each sector had approximately the same total sales in a year.  That is to say, the sales of the “construction” sector were becoming more equal to the sales of the “aircraft and parts” and “amusement” sectors.  Some sectors would sell less over time (e.g., “farming”) and some sectors would sell more (e.g., “metals manufacturing”).  This makes sense because some “new” sectors have practically no transactions in 1954 whereas they are more integrated into the economy in 1967 (e.g, “aircraft and parts” and “amusement”).

Increasing redundancy means that over time each sectoral transaction (e.g., “farms” to “metals manufacturing” or “oil and gas” to “machinery and equipment manufacturing”) was becoming more equal.  This again makes sense because some “new” sectors have practically no transactions in 1954 whereas they are more integrated into the economy in 1967.

The structural shift in the U.S. economy can be explained by a few things that came to a head in the late 1960s and early 1970s:

  1. U.S. had little global competition for resources in immediate aftermath of World War II (e.g., Europe and Japan were devastated and needed time to recover).
  2. Oil:  Peak U.S. crude oil production in 1970 enabled the Arab oil embargo of 1973, and OPEC’s increase in posted oil price in 1974, to raise oil prices to such a degree as to cause a significant decrease in global oil consumption.
  3. Efficiency and environmental controls enacted for the first time:   The Clean Air Act (1970) and Clean Water Act (1972) were substantially increased in scope and enforcement.  The environmental and energy changes encouraged significant investment in utilities (e.g., wastewater treatment) and resource extraction along with a focus on consumer energy efficiency for the first time since industrialization.

 

Structural Shift 2 (2002):

One of the theories of ecologists (e.g., Howard T. Odum and Robert Ulanowicz) is that systems must have some “structural reserves” in existence to be able to respond to resource constraints or other disturbances that might occur in the future.

The major change that occurred in 2002 was that energy and food no longer continued to get cheaper.  If you’ve followed my work, you know this already (see my blog from 2012)!!!  As I’ve stated in even more detail in my papers in 2015 (Part 2 and Part 3 of 3-part series in Energies) this is the defining macro trend of the Industrial Era (no, really …).

In response to this second structural shift in energy and food costs, it is clear that the U.S. economy did trade structural reserves for efficiency.  Efficiency is the opposite of redundancy in Figure 1: the efficiency of the economy decreased until 2002, after which it started increasing.  The U.S. economy traded structural redundancy and equality for structural efficiency (i.e., increasing metrics of efficiency and hierarchy) after food and energy expenditures increased post-2002.

So after 2002, the U.S. economy (and, by my inference, the world economy overall) had to shift money into FEWER sectors and FEWER types of transactions.

  1. China had entered the World Trade Organization in 2001 and started to become the world’s manufacturer.  This decreased monetary flows to domestic manufacturing.
  2. Financial deregulation in the 1990s, including the Gramm–Leach–Bliley Act of 1999 which repealed the Glass-Steagall Act, increased the monetary flows to the financial sectors.
  3. These two effects (China and financial deregulation) led to increased demand and speculation for energy and commodities, so that monetary flows increased to the oil and gas sector (remember the record oil price of $147/BBL in July 2008, just as the Great Recession was taking hold) to meet U.S. and global demand.

 

APPENDIX: How to interpret the results in Figure 1 via the methods of the paper.

  • Consider the U.S. economy as many ‘sectors’ buying and selling products with each other (see Figure 2)
          –  Example sectors are “oil and gas” and “farming,” etc.

  • Each sector also produces some “net output” (a column “to the right,” not shown in Figure 2)
          –  This net output from each sector sums to GDP
          –  It largely represents what you and I buy as consumers


Figure 2.  The economy’s transactions are often viewed via the “input-output” table, where each entry represents how much a given sector (on the column) purchases from a given sector (on the row).

 

  • A highly redundant economy (or system or network of flows) interacts with many of the possible partners in many ways and relatively equally (Figure 3 – LEFT).  This might not be the best for growth, but you have “backups” in case things go wrong with one of your partners.
  • A highly efficient economy (or system or network of flows) interacts with fewer possible partners such that there are fewer sectors or people to deal with to get things done (Figure 3 – RIGHT).  This efficiency can, and typically does, lead to increased potential for growth.


Figure 3.  What the distribution of monetary flows (from one sector to another) looks like for an economy (or generically a network of flows) that is fully, or 100%, redundant (LEFT image) and one that is fully, or 100%, efficient (RIGHT image).  The numbers don’t have to be all “1”, they could be any number that is the same.

 

 

  • A highly equal economy (or system or network of flows) interacts with many of the possible partners in an equal manner such that every sector or actor sells and buys the same total amount of goods and services (Figure 4 – LEFT).   Note that both of the images in Figure 3 are also 100% equal. This is not a practical expectation because we can easily identify structural relationships among economic sectors that will prevent equality (e.g., the “oil and gas extraction” sector sells most of its products to the “refined oil products” sector).
  • A highly hierarchical economy (or system or network of flows) has a small number of sectors that dominate the transactions and monetary flows (Figure 4 – RIGHT).  In the extreme case of Figure 4 – RIGHT, there is only one type of transaction that occurs (Sector 1 purchasing from Sector 4), and the economy is effectively no longer the same system (e.g., Sectors 2 and 3 effectively don’t exist since they have no transactions).


Figure 4.  What the distribution of monetary flows (from one sector to another) looks like for an economy (or generically a network of flows) that is fully, or 100%, equal (LEFT image) and one that is fully, or 100%, hierarchical (RIGHT image).  The numbers don’t have to be all “1”, they could be any number that is the same.
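To make these extremes concrete, here is a simplified sketch of information-theory metrics in the spirit of Ulanowicz’s framework, applied to the matrix types in Figures 3 and 4. This is my own stripped-down version (mutual information as “efficiency,” its complement as “redundancy”), not the paper’s exact formulation with its added twists:

```python
import numpy as np

def flow_structure_metrics(T):
    """Ulanowicz-style efficiency/redundancy of a flow matrix T
    (entry T[i, j] = flow from sector i to sector j).
    Returns (efficiency, redundancy), each in [0, 1]."""
    p = T / T.sum()                       # joint flow probabilities
    p_out = p.sum(axis=1, keepdims=True)  # row (seller) totals
    p_in = p.sum(axis=0, keepdims=True)   # column (buyer) totals
    nz = p > 0
    # Development capacity: Shannon entropy of the flow distribution
    H = -(p[nz] * np.log2(p[nz])).sum()
    # Ascendency term: mutual information between sellers and buyers
    I = (p[nz] * np.log2(p[nz] / (p_out * p_in)[nz])).sum()
    return I / H, 1 - I / H

redundant = np.ones((4, 4))  # every sector trades equally with every sector
efficient = np.eye(4)        # each sector trades with exactly one partner

print(flow_structure_metrics(redundant))  # (0.0, 1.0)
print(flow_structure_metrics(efficient))  # (1.0, 0.0)
```

A fully redundant matrix scores (0, 1) and a fully efficient one scores (1, 0); real input-output tables fall in between, which is what Figure 1 tracks over time.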

 

 

 

 

The Most Important and Misleading Assumption in the World

Note: This is the second of a two-part series by the author. Part one: “Macro and climate economics: It’s time to talk about the ‘elephant in the room’.”

Part one of this blog post explained how macroeconomic models are flawed in a fundamental way.

These models are coupled to models of the Earth’s natural systems as Integrated Assessment Models (IAMs) that are used to inform climate change policy. Most IAM results presented in the Intergovernmental Panel on Climate Change (IPCC) reports show climate mitigation costs as trivial compared to gains in economic growth.

The referred-to “elephant in the room” (from part one of this series) is the fact that economic growth is usually simply assumed to occur.  No matter the quantity or rate of investment in the energy system or the level of climate damages, the results indicate that the economy will always grow. This defies intuition and raises the question: If the costs of climate mitigation really are so small, then why is there so much disagreement over a low-carbon transition?

One way to explain the problem is via a term called “total factor productivity,” or TFP.  TFP is the Achilles’ heel of macroeconomics, and the reason no one talks about the aforementioned elephant, exposed heel and all, in the macroeconomics classroom.

Essentially economic output, or GDP, is usually modeled as being dependent upon the amount of labor in the workforce, the amount of capital (e.g., factories, machines, computers, buildings), and TFP.

TFP can be understood as all of the reasons why the economy grows that are not already characterized by the quantity of labor and capital.  In statistical terms it’s called a “residual,” or the amount unexplained by an assumed underlying equation of economic growth.

TFP is often projected to continue (based upon historical rates) at around 1.5 percent annually.  Because labor and capital change relatively slowly (aside from events such as wars, a quick rise in sea level, or other similar “events”), this TFP projection effectively builds a large amount of growth into the future.
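A small sketch shows how powerful this assumption is. Using the standard textbook Cobb-Douglas form Y = A·K^α·L^(1−α) (the capital and labor numbers below are hypothetical placeholders), hold labor and capital fixed and let only TFP compound at 1.5 percent per year:

```python
# Sketch of how an exogenous TFP trend drives projected growth.
# Cobb-Douglas: Y = A * K^alpha * L^(1 - alpha); the 1.5%/yr TFP trend
# is the assumption discussed in the text, K and L are placeholders.
def output(tfp, capital, labor, alpha=0.3):
    return tfp * capital**alpha * labor**(1 - alpha)

A0, K, L = 1.0, 100.0, 50.0
y_now = output(A0, K, L)
# TFP compounds at 1.5%/yr for 80 years while K and L stay fixed:
y_2100 = output(A0 * 1.015**80, K, L)
print(f"economy is {y_2100 / y_now:.2f}x larger from assumed TFP alone")
```

Roughly a tripling of output is baked in before any energy investment, climate damage, or change in labor and capital enters the model.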

Further, the assumption of a historical annual rate of increase in TFP is inherently independent of energy-related factors (see IPCC report “Climate Change 2014: Mitigation of Climate Change”). Thus, the normal IAM assumption is inadequate because it presents the case to policy makers that even dramatic increases in energy investment for a low-carbon energy transition don’t affect TFP and hence economic growth.

This is a problem since it makes the transition appear trivial. It’s incorrect, however, to assume TFP will continue into the future just as it had in the past because the past was a time of increasing carbonization of the economy.  It is too much of an extrapolation to assume TFP will be the same during decarbonization.

But there is a solution.

A significant body of research indicates that accounting for both energy and its conversion efficiency to physical work (e.g., engines and motors) and other energy services (e.g., light) can explain the vast majority of TFP. That is to say, instead of assuming an increase in TFP into the future that is independent of the modeled energy technology investments, we could assume a series of low-carbon energy technology investments and estimate the effect on TFP, thus economic growth, from the bottom up.

TFP is effectively composed of the effects of machines and energy substituting for human labor. A human pushing a button on an electrified machine is more “productive” than that human turning a crank by hand on that same machine.

Part of the reason why TFP, and its cousin labor productivity (= economic output / hour of labor), have been decreasing in the last decade is declining energy consumption and slower improvements in efficiency.  There is still a lot of low-hanging fruit; however, we have already picked the ripe fruit that fell to the ground.  And it still takes effort to pick even the low-hanging fruit. There is no free (fruit) lunch.

Aside from a need to develop more accurate macroeconomic models that explicitly account for the role of energy, there is a larger concern in regard to sustainability. The modeling improvements discussed in this post relate to the economic and environmental (e.g., climate, energy) pillars of sustainability.

Existing models, however, also inhibit discussion of equity, the third pillar. If we convince ourselves that we will always grow in the future, no matter what, then we can more easily convince ourselves that we can defer the question of sharing until the future, until after we’ve figured out growth for now.

This is exactly why the exogenous TFP assumption is socially dangerous.

The models simply assume economic growth occurs.  Then, since everyone is convinced that the world is going to have more wealth to share in the future no matter what, we can avoid discussions about sharing and preserving what we have now.  We can deflect the conversation to “growth” instead of the “equitable” part of sustainability.  “Help us grow the economy first, and then we can fix the other issues.”

That said, we know a number of things for certain.

The Earth is finite, and we know we cannot have infinite growth on a finite planet.   Thus we need physical and economic models that also reflect this reality. Unfortunately, we’re using economic models that ignore this reality. Why should we make policy using economic models that don’t reflect what should be obvious to a third-grader?

We can do better, and we must do better if we want realistic economic assessments of a low-carbon energy transition. If we don’t want realistic assessments, then we can continue the status quo, which is to explain the future economy by projecting a factor (i.e., TFP) defined as what cannot be explained by an insufficient theory.

Macro and Climate Economics: It’s Time to Talk about the “Elephant in the Room”

This blog was written for the Cynthia and George Mitchell Foundation, and originally appeared here: http://www.cgmf.org/blog-entry/213/.

This is the first of a two-part series. Part 2 is: “The most important and misleading assumption in the world.”

If we want to maximize our ability to achieve future energy, climate, and economic goals, we must start to use improved economic modeling concepts.  There is a very real tradeoff between the rate at which we address climate change and the amount of economic growth we experience during the transition to a low-carbon economy.

If we ignore this tradeoff, as do most of the economic models, then we risk politicians and citizens revolting against the energy transition midway through.

On September 3, 2016, President Obama and Chinese President Xi Jinping each joined the Paris Climate Change Agreement to support U.S. and Chinese efforts to limit greenhouse gas (GHG) emissions for their respective countries. This is an important signal to the world that the presidents of the two largest economies and GHG emitters are cooperating on a truly global environmental matter, and it provides two leaps toward obtaining enough global commitments to set the Paris Agreement in motion.

The economic outcomes from models used to inform policymakers like Presidents Obama and Xi, however, are so fundamentally flawed that they are delusional.

The projections for climate and economy interactions during a transition to low-carbon economy are performed using Integrated Assessment Models (IAMs) that link earth systems models to human activities via economic models. Several of these IAMs inform the Intergovernmental Panel on Climate Change (IPCC), and the IPCC reports in turn inform policy makers.

The earth systems part of the IAMs project changes to climate from increased concentration of greenhouse gases in the atmosphere, land use changes, and other biophysical factors.  The economic part of the IAMs characterizes human responses to the climate and the changes in energy technologies that are needed to limit global GHG emissions.

For example, the latest IPCC report, the Fifth Assessment Report (AR5), projects a range of baseline (e.g., no GHG mitigation) scenarios in which the world economy is between 300 and 800 percent larger in the year 2100 as compared to 2010.

The AR5 report goes on to indicate the modeled decline in economic growth under various levels of GHG mitigation. That is to say, the economic modeling assumes there are additional investments, beyond business as usual, needed to reduce GHG emissions.  Because these investments are in addition to those made in the baseline scenario, they cost more money and the economy will grow less.

The report indicates that if countries invest enough to reduce GHG emissions over time to stay below a policy target of a 2°C temperature increase by 2100 (e.g., CO2-eq concentrations < 450 ppm), then the decline in the size of the economy is typically less than 5 percent, or possibly up to 11 percent.  This economic result coincides with a GHG emissions trajectory that essentially reaches zero net GHG emissions worldwide by 2100.

Think about that result: Zero net emissions by 2100 and, instead of the economy being 300 to 800 percent larger without mitigation, it is “only” 280 to 750 percent larger with full mitigation.  Apparently we’ll be much richer in the future no matter if we mitigate GHG emissions or not, and there is no reported possibility of a smaller economy.
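Restating the AR5 numbers above as arithmetic makes the point plain (the multipliers below simply encode “300 to 800 percent larger” and a ~5 percent mitigation cost):

```python
# The AR5 result restated: mitigation trims a few percent off a baseline
# economy that is already assumed to be several-fold larger by 2100.
baseline_multipliers = (4.0, 9.0)  # "300 to 800 percent larger" than 2010
mitigation_cost = 0.05             # "typically less than 5 percent"

low = baseline_multipliers[0] * (1 - mitigation_cost)
high = baseline_multipliers[1] * (1 - mitigation_cost)
print(f"{(low - 1) * 100:.0f}% to {(high - 1) * 100:.0f}% larger")  # 280% to 755%
```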

This type of result is delusional, and doesn’t pass the smell test.

Humans have not lived with zero net annual GHG emissions since before the start of agriculture.  The results from the models also indicate the economy always grows no matter the level of climate mitigation or economic damages from increased temperatures.

The reason that models appear to output that economic growth always occurs is because they actually input that growth always occurs.  Economic growth is an assumption put into the models.

This assumption in macroeconomic models is the so-called elephant in the room that, unfortunately, almost no one talks about or seeks to improve. 

The models do answer one (not very useful) question: “If the economy grows this much, what types of energy investments can I make?”  Instead, the models should answer a much more relevant question: “If I make these energy investments, what happens to the economy?”

The energy economic models, including those used by United States government agencies, effectively assume the economy always returns to some “trend” of the past several decades—the trend of growth, the trend of employment, the trend of technological innovation.  They extrapolate the past economy into a future low-carbon economy in a way that is guesswork at best, and a belief system at worst.

We have experience in witnessing disasters of extrapolation.

The space shuttle Challenger exploded because the launch was pressured to occur during cold temperatures that were outside of the tested range of the sealing O-rings of the solid rocket boosters.  The conditions for launch were outside of the test statistics for the O-rings.

The firm Long Term Capital Management (LTCM), run by Nobel Prize economists, declared bankruptcy due to economic conditions that were thought to be practically impossible to occur.  The conditions of the economy ventured outside of the test statistics of the LTCM models.

The Great Recession surprised former Federal Reserve chairman Alan Greenspan, known as “the Wizard.”  He later testified to Congress that there was a “flaw in the model that I perceived is the critical functioning structure that defines how the world works, so to speak.”

Greenspan extrapolated nearly thirty years of economic growth and debt accumulation as being indefinitely possible. The conditions of the economy ventured outside of the statistics with which Greenspan was familiar.

The state of our world and economy today continues to reside outside of the historical statistical realm. Quite simply, we need macroeconomic approaches that can think beyond historical data and statistics.

How do we fix the flaw in macroeconomic models used for assessment of climate change?  Part two of this two-part series will explain that there is research pointing to methods for improved modeling of what is termed “total factor productivity,” and, in effect, economic growth as a function of the energy system many seek to transform.