How Economists Have
Misjudged Global Warming
Robert U. Ayres
[Reprinted from World Watch Magazine,
September/October, 2001]
Robert U. Ayres was
Professor of Engineering and Public Policy at Carnegie-Mellon
University from 1979 to 1992, then moved to the European business
school INSEAD, in France, where he is now Emeritus Professor of
Environment and Management. He is a brother of WORLD WATCH editor Ed
Ayres.
A deep chasm has opened between scientists and economists over the
issue of global warming. To some degree the chasm has always been
there, because economists have never been able to achieve the
predictive rigor of the hard sciences. But the rift was increased
dramatically by the Bush administration's new energy policy, as
presented in its Report of the National Energy Policy Development
Group authored by Dick Cheney, Colin Powell, Paul O'Neill, Gale
Norton, and others in April 2001.
The Cheney report was quickly put together and based on a virtually
unquestioned assumption that the only way to keep the U.S. economy
healthy is to greatly increase its supply and consumption of coal,
oil, and natural gas. It simply side-steps the findings of the
monumental Climate Change 2001: Third Assessment Report, by
the Intergovernmental Panel on Climate Change (IPCC), which warns that
we may be courting climatic catastrophe unless our burning of fossil
fuels -- and the resulting production of carbon dioxide emissions -- is
sharply reduced. The IPCC Report is based on five years of intensive
investigation by the leading climate scientists of more than a hundred
countries. Curiously, the Bush team is on record as unwilling to trust
the "uncertain science" of global warming. Yet it
unhesitatingly puts its faith in the vastly more uncertain science -- if
that is the word -- of long-term economic forecasting.
When the Bush energy policy was announced, environmentalists -- and
others who had expressed concerns about whether human industries are
on a sustainable course -- were deeply distressed. The Bush position
seemed so utterly at odds with what the scientists have been
saying-and saying with increasing urgency. But even more than
distressed, they were perplexed. Why would such a globally resounding
voice as that of the IPCC be shrugged off? Some critics averred that
it was Bush's and Cheney's oil and coal industry connections, and
their need to reward the industry contributors who had given heavily to
their election campaigns, that accounted for the anti-IPCC stance.
Some said it was Bush's fear that voters would be angered by any
short-term increases in gasoline prices, as would likely result from
any serious cuts in carbon dioxide emissions. And both factors may
well have carried some weight.
Recall, however, that the U.S. Senate rejected U.S. participation in
anything resembling the Kyoto agreement long before Bush and Cheney
came into office. Moreover, the National Energy Policy (NEP) report
was never really offered to the public as a rebuttal to the IPCC
report, because it never addressed most of the scientists' concerns.
Whereas some 1,500 climate scientists had spent millions of research
hours (and thousands of supercomputer hours) tracking the role of
industrially produced CO2 in warming the planet, the NEP report
allocated just five paragraphs to the subject, and did not even
include the terms "carbon dioxide" or "greenhouse gases"
in its glossary. Rather than being a scientific rebuttal, the
government report was put forward as an alternative to science.
DISCOUNTING

In practice, the task of quantifying and comparing present costs and future
benefits (or the converse) is often virtually impossible to
accomplish with any confidence. I am particularly aware of this
difficulty because I was a small fly on the wallpaper at the
scene of some of the early efforts to apply benefit-cost
analysis to real-world issues. In those days (the late 1960s) a
few environmental economists were concerned about excessive U.S.
government investment in building dams on small rivers.
At that time, the U.S. Army Corps of Engineers had become a
dam-building agency. The Corps had become heavily involved in
this activity during the construction of the giant Tennessee
Valley Authority (TVA) and Grand Coulee projects in the 1930s.
When those jobs were completed, the Army engineers needed new
sources of employment. In their presentations to the U.S.
Congress, they justified their proposals for more dams on the
basis of optimistic estimates of future recreational and other
benefits (i.e., the number of future visitors and how much they
would spend), which they discounted at interest rates of less than 3
percent -- the lowest rate then paid on extant long-term government
bonds issued back in the 1930s. By the same assumption, the apparent
monetary costs of construction (bond interest payments) were
understated.
Environmental economists at Resources for the Future, Inc. (RFF)
tried to develop a methodology for yielding more realistic
assessments.
One of the rules of thumb that came out of that experience was
that, to avoid foolish capital investments, future benefits
should be substantially discounted-preferably by at least 8
percent. Now, ironically, it is the high discount rates attached
by economists to the future benefits of avoiding greenhouse
warming that make those benefits seem hard to justify in
benefit-cost terms. In retrospect, what makes this bit of
economic history particularly ironic is that in those days,
environmental damages-such as the destruction of wetlands or
disruption of fish spawning patterns-were not even counted among
the costs. The avoidance of those costs, of course, would make
the uncounted future benefits even larger.
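The sensitivity the box describes can be sketched in a few lines. The $1 million benefit and the 50-year horizon below are illustrative assumptions of mine, not figures from the article; only the 3 percent and 8 percent rates come from the text above:

```python
# Present value of a future benefit, discounted at a given annual rate.
def present_value(benefit, rate, years):
    return benefit / (1 + rate) ** years

benefit = 1_000_000  # hypothetical benefit received 50 years from now
years = 50

pv_low = present_value(benefit, 0.03, years)   # the Corps' sub-3-percent rate
pv_high = present_value(benefit, 0.08, years)  # RFF's preferred 8-percent floor

print(f"at 3%: ${pv_low:,.0f}")   # roughly $228,000
print(f"at 8%: ${pv_high:,.0f}")  # roughly $21,000
```

At 8 percent the same distant benefit shrinks by a further factor of ten, which is exactly why high discount rates make the future benefits of avoiding greenhouse warming look negligible in benefit-cost terms.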
To be accepted so easily by the entire Republican leadership, and a
few Democrats as well, the NEP report had to be not just a politically persuasive
argument, but a declaration based on a fundamental belief system-an
economic ideology founded on certain assumptions not even subject to
challenge.
In a recent piece for the New York Review of Books, the
environmental author Bill McKibben writes that these two documents "offer
competing blueprints for the twenty-first century," and that "it
would not be hyperbole to say they outline the first great choice of
the new millennium, a choice that may well affect the planet
throughout the thousand years to come." If that is so, then the
arena in which the battle for the planet's future will be played out
is not primarily in the evaluation of the IPCC's atmospheric science
at all, but in the evaluation of those rarely questioned assumptions
on which the conservative economic ideology is founded.
If McKibben is right, environmentalists who continue to argue
defensively about the soundness of IPCC climate models, or even about
the moral failures of a policy that ignores future generations, are
barking up the wrong trees. True, someone who argues on such grounds
may eventually be vindicated, if catastrophic damage to the U.S.
coasts or crops, or the drowning of a Pacific island nation, proves
the Bush-Cheney policies to have been tragically misconceived. But
such vindication would come too late to be of much help, much less
consolation. To change the policy before the damage is done, it is
necessary to challenge-and expose for the fallacies they are-the
hidden assumptions that lie behind these otherwise incomprehensible
positions.
To be more specific, the administration's position on the Kyoto
climate treaty, as we have heard from countless government
spokespersons and TV talking heads, is that any major government
intervention to reduce CO2 and other greenhouse gas emissions would
"harm the U.S. economy." In effect, it is argued that the costs of
any government-inspired actions aimed at reducing greenhouse emissions
will greatly exceed the discounted present value of the future
benefits. The term "discounted present value" is an
economist's jargon for the idea that costs are greater if paid now
than if paid later (see box above). That's because if we spend the
money now we can't be earning interest on it later; and besides,
society will presumably be richer later so it will be easier for our
descendants to pay than it is for us. By the same argument, benefits
to be received in the distant future are worth less to us now than
they will be worth to our (richer) children who get to enjoy them.
The assertion that measures to reduce emissions will be very costly
causes many business people to react negatively, in part, because it
seems-at first-so obvious. Moreover, this assertion is almost never
challenged by anyone with business or academic credentials. It is
accepted as revealed truth by the most presumably objective and
knowledgeable of the economically savvy news media, such as The
Economist. One might easily say, "Of course it will be
costly." After all, we are implicitly talking about fundamentally
restructuring the energy supply and distribution system of the world,
not just building more of the same things, as the Bush team wants to
do. (But the Bush program of building lots more coal-fired and nuclear
power plants would be costly too.)
In reality, however, assessing costs is not that simple. To begin
with, costs (think of them as investments) are not very meaningful
unless paired with their associated profits or benefits. It cost a lot
to launch the auto industry at the end of the 19th century. But that
launch also created jobs, generated revenues for all kinds of old and
new businesses, brought astronomical profits to (some) investors, and
provided new services to consumers-on a scale that the manufacturers
of horse-drawn carriages could hardly have imagined. Unfortunately,
the "it-will-cost" argument often gets hung up on the highly
political question of who will pay and who will enjoy the future
benefits-or in this case, the avoided damages. Will the benefits be
enjoyed by those who must immediately pay higher taxes or higher fuel
prices? In other words, what can we offer in the near term to satisfy
skeptical investors who would otherwise prefer to stay with
business-as-usual and simply hope that the predictions of future
climate disaster are wrong?
Most scientists and engineers will probably agree with most
economists that rapid introduction of a new and unproven technology
will almost always raise costs in the short run. Why? Because it means
skipping stages in the normal evolutionary development process, making
decisions before the facts are all in, using off-the-shelf components
whether suitable or not, paying premium prices for custom designs,
selecting contractors on the basis of existing production capacity
rather than long-term potential, and so forth.
True, there are sometimes situations in which a more eco-efficient
technology is already available-one that is both cheaper to use and
less polluting than what is in general use. This would not happen in a
perfect competitive market where all actors had perfect information,
of course. But in the real world it does happen sometimes. These
opportunities are what the energy researcher Amory Lovins calls "free
lunches," and we would be well advised to seek them out and
partake of as many as possible. But few who know the subject in depth
believe that eating free lunches can avert short-run cost increases to
energy consumers altogether. Part of the reason for this is that the
only sure way to encourage people to use less energy rather than more
is to raise the price. It can make sense to subsidize one form of
energy while taxing another. But overall, the price paid by consumers
of fossil fuels will have to go up if the output of CO2 is to go down.
In this sense, there is no free lunch.
The real question is not whether short-run cost increases will
result from accelerated introduction of renewables and substitution of
capital investment (e.g., in heat pumps, better insulation, and better
windows) for energy consumption. They will. The question is whether
these short-term increases can be compensated not only by immediate
environmental benefits and later cost savings (as scale economies kick
in), but also by other long-term benefits. I mean new products,
services, jobs, and profits resulting from the introduction of
completely new spin-off technologies and new applications of these
technologies. That was what happened after Thomas Edison's
introduction of his system for electric lighting. It is the sort of
thing we must hope for -- and actively seek -- now.
The issues are rarely presented to senior governmental
decision-makers in such terms. More often the cost issue is presented
in isolation, and the possible spin-off benefits are ignored precisely
because they are hypothetical and hard to quantify.
Well-meaning Attempts at Cost-Benefit
Analysis
In the 1980s, NASA scientist James Hansen first asserted to the media
that "greenhouse warming" had already begun, due to
widespread burning of coal, oil, and natural gas. He was denounced,
even by some fellow scientists, as an alarmist. Nonetheless,
scientists all over the world began mobilizing to study such
indicators as the surface and upper-atmospheric temperatures of the
planet, the melting of polar ice, and the carbon dioxide content of
glaciers, the oceans, and the atmosphere, and in the 1990s global
warming became a volatile political issue. At first, many economists
tried to assess this enormous phenomenon the way RFF economists had
once tried to evaluate dam-building projects. But from the outset, the
endeavor has proved vexing.
Logically, to assess the economic impacts of all these effects in a
cost-benefit framework means examining three broad categories of
projections: (1) the future costs of any measures taken to mitigate
the warming; (2) the future costs of any damage likely to be done if
the warming is not mitigated; and (3) the future spinoff benefits of
undertaking such mitigation. To make useful projections, though, it is
first necessary to make some assumption about the extent of the damage
the warming will bring-and therein lies a major difficulty.
To begin with the evaluation of potential damages, public perceptions
of what warming could mean have varied wildly. People living in cold
climates, including many Russians and Scandinavians, were initially
inclined to see warming as being quite beneficial, insofar as it might
bring longer growing seasons and balmier weather. Vineyards might grow
in Scotland, and beach resorts might boom in the Baltic. Those living
in South Asia, on the other hand, worried about droughts, floods,
changes in the monsoon pattern, pests, outbreaks of new diseases, and
resurgence of old ones like malaria. Warming on the global level has
hundreds of different kinds of potential local impacts. Trying to
attach reliable monetary costs or benefits to each of them proves to
be a greater challenge than that of the legendary blind men trying to
describe an elephant.
To illustrate just one problem, consider the future impact of warming
on the Mississippi River Delta. It will likely be hit, sooner or
later, by massive flooding, whether from the north or from the
encroaching Gulf of Mexico. This isn't like trying to assess the
impact of a dam, where the altered level of the river water is known
precisely. For the world at large, the IPCC's
Third Assessment Report forecasts a significant future
increase in sea level due to ice-melt and thermal expansion. But the
extent of the sea level rise (and therefore of the Gulf of Mexico's
rise) will depend on the amount of warming, for which the IPCC's
projected range is disconcertingly wide-from a moderate 2.7 degrees F.
to a calamitous 11 degrees F. over the next century. Moreover, that
range has changed since the Second Assessment in 1995, and could
change again.
Then there are the uncertainties about how much a given level of
atmospheric warming contributes to increased storm severity (such as
the strength of a hurricane blowing in from the Caribbean), or how
much sea-level rise contributes to the severity of storm surges (such
as the likely height and reach of a wave rolling in from the Gulf).
But even if that can be agreed on, there's the question of assessing
what New Orleans is worth as a city. When houses or streets are
destroyed, or livestock drown, or cotton fields disappear into the
mud, it's not so difficult to assess the loss. Insurance companies do
it all the time. If we know what it costs to rebuild them, we know
what it's worth to save them. But how do we assess the worth of a
human life lost or saved? Beyond that, how do we assess the worth of a
community or culture?
Considering just the value of an individual life is hard enough.
Remember, it's not quite the same question as asking how much life
insurance a person has been willing to buy. If the value is set by
society as a whole, do we value everyone the same? Do we value young
children as much as economically productive adults? Is the value of
the life of an employer of thousands of workers the same as the value
of the life of an unemployed (or unemployable) person? Is the value of
a criminal the same as that of a philanthropist or scientist or
artist? And there are some other questions of this ilk that I won't
raise now because they would likely enrage you, but that would have to
be asked-and answered-in order to assess the value of a life saved.
Thinking about this issue has long been giving economists headaches.
Most will now concede privately, if not publicly, that it's virtually
impossible to quantify the benefits of avoiding really major
catastrophes, such as those brought on by climate change. The famous
precautionary principle is, as much as anything, an admission that
it's better to be safe than sorry when we can't really know the
magnitude of the risks.
Calculating the direct costs of avoiding climate change isn't
much easier. One common simplification is to focus on what it would
cost to substantially reduce carbon dioxide emissions, based on the
argument that carbon dioxide accounts for at least half of the
problem. This suggests that abatement policy should be focused on
burning less coal and oil. That could be achieved, at least to some
degree, by conservation and substitution of other energy sources for
coal-burning electric power plants. Reviving the largely moribund
nuclear power industry has been proposed as an option by some (and by
the Bush NEP report), but strongly opposed by much of the public,
including me. In any case, allowing for safety and disposal costs,
nuclear power today is more costly than today's coal-burning power
plants. Moreover, to plan, design, and build a nuclear plant from
scratch takes ten years, on average. The suggestion that new nuclear
plants could relieve California's "energy shortage" any time
soon is either dishonest or naive. Other alternative power sources
such as solar photovoltaic and wind, less well-developed than nuclear
power, would cost even more to introduce on a large scale in a very
short time. To put "large scale" in its proper context, bear
in mind that the present contribution by alternative energy sources,
while growing fast, is still only a very tiny fraction of total
demand. Costs should decline as the scale of output rises, but nobody
can say with confidence how much. It is difficult to describe a
package of technologically proven and economically viable alternatives
to fossil fuels for which accurate future cost calculations are
feasible.
And, while world leaders have battled to a deadlock over a Kyoto
treaty that would reduce CO2 emissions by a mere 5 percent, remember
that the scientists of the IPCC agree that CO2 would have to be
reduced by 50 to 60 percent to stabilize climate -- so the assessment
of costs would have to focus on that much larger shift in the
technological mix. In short, there are so many "ifs" in the
assessment of what it will cost to phase out the bulk of our
fossil-fuel dependence that here, too, economists have been at a loss.
A Too-Clever Simplification
It is at times of great frustration that attractive simplifications
tend to pop up. Demagogues depend on them. Simplification removes the
burden of thinking too hard, and replaces the discomfort of facing
ethical ambiguities with the security of faith. In effect, that is
what has happened in the arena of climate change economics.
The Yale University economist William Nordhaus is not a demagogue, by
any means. Back in the 1980s, he made a serious effort to add up the
potential costs of climate warming, on a sector-by-sector basis. In
this effort he focused entirely on losses of output that could be
directly attributed to warming, with no attention to indirect (and
incalculable) consequences such as disease, migration away from
coastal or estuarine areas prone to flooding (from storms and sea
level rise), the impact of landless refugees on social order and
social services in other areas, and so on. Naturally, disagreements
arose. Other economists made slightly different estimates based on
different assumptions. But all of the studies assumed that the costs
of climate warming could be measured in terms of decreased economic
output.
And what of the costs of amelioration? This is where Nordhaus came up
with a theory that-as it has been subsequently seized upon by others,
including the authors of the Bush Energy Policy Report-sweeps aside
all the vexations of trying to do a detailed cost-benefit analysis of
global warming. His theory has two parts. First, he assumes (with the
vast majority of his fellow economists) that past rates of economic
growth were the result of "optimal" choices, on the grounds
that firms with perfect information acting rationally in a free
competitive market would tend to make optimal choices. Next, he
extrapolates these optimal past growth rates far into the future. He
doesn't actually say so, but one can safely infer that such a
long-range projection is reasonable if, and only if, economic growth
would continue automatically, indefinitely, and independent of
governmental intervention. This sounds like a theory written in an
ivory tower, far from everyday experience. But since most economists
go along with it, let's take it seriously and see where it takes us.
This hypothetical future growth trajectory is regarded as a "baseline."
But of course, assuming we humans have free will, the future can
presumably be altered by policy changes taken now. Nordhaus now says
(in effect) that the only possible impact of government interventions
to reduce greenhouse gas (GHG) emissions will necessarily be to reduce
economic growth from the optimum trajectory. Why? Because the
government interventions must reduce the "option space" (the
range of possible choices) available to entrepreneurs. For instance,
they might be prevented by regulations from burning the cheapest
fuels.
Nordhaus thus assumes that, in the event of any new regulation to
reduce GHG emissions, the set of all possible choices open to each
firm in the economy will be smaller in the future than it is now. If
one accepts that assumption-as many economists do-then it follows
logically that the tighter the regulatory constraints become, the
slower the rate of economic growth will be. The cost of GHG reduction
can then be equated to the cumulative difference in future GDP under
unconstrained (high growth) and constrained (lower growth) cases.
To convert this clever theory into a numerical estimate, a
quantitative relationship between GHG reduction and abatement costs
must also be assumed. This is another area where real-world
complications intrude. But here Nordhaus once again manages a nimble
side-step. In a nutshell, he argues that, even if the reduction from
the assumed "optimal" growth rate caused by government
intervention is very small, over a period of decades it would amount
to a very large sum. If it were a reduction of merely 0.1 percent per
year, for example, the cost over the next century could come to
several trillions of dollars. Comparing those "trillions" to
the mere "billions" that purportedly would be the total
benefit gained by sharply reducing CO2 emissions constitutes the
essence of the argument against U.S. ratification of the Kyoto
agreement.
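The arithmetic behind those "trillions" is easy to reproduce. In the sketch below, the $10 trillion starting GDP, the 3 percent baseline growth rate, and the reading of "0.1 percent per year" as an annual shortfall in output are all illustrative assumptions of mine, not Nordhaus's published figures:

```python
# Cumulative cost over a century if regulation shaves 0.1 percent off
# each year's output, while baseline GDP compounds at 3 percent per year.
def cumulative_loss(gdp0, growth, shortfall, years):
    return sum(shortfall * gdp0 * (1 + growth) ** t
               for t in range(1, years + 1))

gdp0 = 10e12  # hypothetical starting GDP: $10 trillion

loss = cumulative_loss(gdp0, growth=0.03, shortfall=0.001, years=100)
print(f"cumulative loss: ${loss / 1e12:.1f} trillion")  # roughly $6 trillion
```

A shortfall of a tenth of one percent, trivial in any single year, compounds into trillions of dollars over a century -- which is the whole rhetorical force of the argument.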
As it happens, William Nordhaus provided the principal economic input
to the U.S. negotiations in Kyoto-and to the country's eventual
refusal to cooperate with the Kyoto process altogether.
Where Scientists Don't Buy It
So, it is this highly simplified theory of constrained entrepreneurial
options that explains the "economic damage" refrain we
have heard so often since environmentalists -- backed by the IPCC
scientists -- first began warning of the need to reduce greenhouse gas
emissions. But it is also here that economists and non-economists tend
to part company. Most neoclassical economists accept Nordhaus's main
theoretical assumptions, and (therefore) his generic conclusions.
However, many other people, including environmentalists, many
engineers, and most "hard" scientists -- those who study the
real economy as it is embodied in the real physical world -- tend to
disagree. Although I have worked in both worlds, I count myself among
those who disagree, and I think I can pinpoint why. There are three
basic problems with the Nordhaus model:
The Myth of Optimal Growth: The
assumption that past growth has been optimal ("because rational
profit-seeking firms with perfect information operating in a perfectly
competitive economy would tend to make optimal choices") is
dubious. Indeed many would say that it is patently absurd. The economy
of the past has never been a "free market" that is
unconstrained by government regulation. While firms certainly seek to
maximize profits, they do not enjoy anything even close to perfect
information. If perfect information were freely available, there would
be no demand for consulting firms, financial advisors, economists, or
any of a host of professions that thrive by selling information, or
access to it. Markets are not perfectly competitive, either; barriers
to entry can be very high. The most common strategy for a large,
established firm is to create barriers for its competitors, as
companies such as Microsoft have done. Moreover, if business managers' choices
were always optimal, there would be no market "bubbles" and
no market "crashes." And finally, there is abundant evidence
that government interventions don't always reduce economic growth-and
that in some cases they are needed to kick-start it.
Taking Technical Progress For Granted:
Nordhaus assumes that technological progress, and therefore economic
growth, will occur smoothly and steadily, automatically and
independent of economic conditions (absent government intervention of
any kind). Technological progress is thus "exogenous" to the
economy. Yet, if you study the history of technology, you will find
abundant evidence that economic and/or other crises are often critical
to innovation. Major innovations have been triggered by wars or
threats of war.[1] Others have been triggered by scarcity or the prospect of
scarcity.[2] Still others have been responses to powerful new "needs"
that were created by previous innovations.[3] A few have been stimulated
by the sudden availability (or discovery) of a new resource.[4] In fact,
it is more difficult to think of modern innovations that were prompted
merely by accident or curiosity.[5]
In short, there is no a priori reason, based on history, to
expect that a government intervention to restrict the use of
carbon-based fuels or to encourage the use of non-carbon based fuels
would inevitably inhibit economic growth. Many important technologies
have actually been kicked off by the government, via the military.
Some have later flourished in the civilian world. Electronic
computers, radar, jet engines and nuclear power are just a few. The
Internet began as ARPAnet, a military-sponsored project to provide
rapid data links between a number of universities. In France, the
highly successful Airbus Consortium (which is apparently poised to end
Boeing's dominance of the passenger air transport business) and the
highly successful TGV high-speed railway system (the world's best) are
both results of direct government intervention. Based on the real
history of industrial development, there is at least as much reason to
suppose that innovation and growth will be stimulated as inhibited by
government.
The Fallacy of a Static Model:
Now I come to the third flaw in the Nordhaus model. I have left it for
last because, from a theoretical perspective, this one is fatal. Let's
revisit, for a moment, that standard neoclassical notion that
entrepreneurs have perfect information, or at least all the
information needed to make optimal choices. In that ideal economy,
unlike the real world, all firms competing in the market know all the
possibilities for technological choice at all times. They choose the
best among a fixed range of possible choices, based on their own mix
of capital assets and skills. It seems to follow that if the range of
choices is constrained by government action, some of the best choices
will be excluded and, ipso facto, growth will suffer. The flaw
in this reasoning is its failure to recognize that in the real world,
the range of technological possibilities is not fixed. There are new
technological possibilities being introduced constantly, but by no
means randomly. In principle, the choice of R&D projects should be
optimal too-meaning that R&D money should flow preferentially to
the projects showing the best return on R&D money spent in the
past. In reality, though, R&D choices made by government are
largely political. Money flows preferentially to the sectors with the
biggest firms and the noisiest lobbies. In the United States, it is
nuclear power that has received (and is still receiving) the most
government R&D money, followed by fossil fuel technologies such as
"clean coal." And it's in those industries that the Bush
administration plans to put most of its research dollars. Yet if those
dollars were really spent on the projects that have provided the
greatest performance gains per dollar spent, they'd be spent on
conservation technologies, wind turbines, and photovoltaic power.
If the choices available to entrepreneurs are not fixed once and for
all, then there is no way they could possibly make optimal choices for
the indefinite future, since they do not know now (or ever) what
possibilities will be generated by scientific progress in the future.
It follows that the entire Nordhaus theory of decreased option space
has no basis. The Emperor has no clothes. In short, the intellectual
argument underlying the Bush Administration's opposition to the Kyoto
Protocol is completely fallacious.
How Could They Have Been So Wrong?
The senior economists who advise governments and teach the next
generation of economists are all professors at major universities such
as Harvard, MIT, Yale, Chicago, Stanford, and Princeton. How, you
might ask, could a group of such obviously intelligent and educated
people come to embrace an economic model with such counter-factual
assumptions and counter-intuitive implications? The answer, I suspect,
has to do with the way in which neoclassical theorists are trained to
think about economic systems. Neoclassical economics is the creed now
taught in most universities and textbooks. (Nordhaus is now the
co-author, with Paul Samuelson, of the most widely read economics
textbook of the past half century.) Its students are taught, from
their first days in "Economics 101," to think in
terms of abstract entities-"firms"-exchanging abstract goods
and services in a perfectly competitive marketplace.
These abstract entities buy capital services and labor (from abstract
capitalists and workers), and the goods and services they produce are
essentially immaterial. I use that word literally. The producer firms
purchase "raw materials" (also immaterial), and there are no
troublesome physical wastes or pollutants. In short, real materials
and energy are not involved in the neoclassical economic paradigm. The
neoclassical system is a kind of perpetual motion machine. It produces
and consumes-and grows -- without constraints or limits, except
insofar as consumer preferences (for present vs. future satisfaction,
for example) enter the picture.
Why so many unrealistic simplifications? The reason is that the real
world is far too complex for us to model with any confidence, and
economic theorists-like physicists-are searching for general laws that
can be applied to a wide range of situations. Starting from the very
simple assumptions mentioned above, it is possible to create models
that are mathematically tractable (though hardly trivial) and about
which theorems can be proved. Then, hopefully, the results can be
generalized step by step to more and more realistic cases. It is not a
foolish program of research, if the starting assumptions are
reasonable, or at least not unreasonable.
For many traditional kinds of economic analysis the neoclassical
assumptions are not unreasonable. But for purposes of discussing and
assessing policies for long-term environmental sustainability, they
are not reasonable. The standard assumption that firms are abstract
entities producing immaterial goods and services is an acceptable
simplification for many purposes. But it is not appropriate for
analyzing a problem arising directly and essentially from the material
nature of the goods and services. Greenhouse gases are physical and
material in nature. They are also pollutants, and pollutants-by
definition-arise from so-called market failures, which means that the
standard neoclassical assumption of perfect markets and perfect
competition has no place for them. Perhaps that helps to explain why
the Bush administration acts -- despite its recent lip service to
climate change -- as though those gases don't really exist.
Furthermore, the assumption that the economy is a kind of perpetual
motion machine capable of growth-in-equilibrium forever cannot be
accepted for purposes of assessing climate change policies. There are
two reasons. The first is that economic growth comes from
technological innovation, as the histories of the industrial and
communications revolutions amply demonstrate. It involves "creative
destruction" through which new materials, new machines, new
doctrines and new techniques displace old ones. This is not an
equilibrium phenomenon. The second reason is that new technologies do
not appear out of the blue-like "manna from heaven"-for no
reason. They are almost always developed by deliberate activity to
create and disseminate knowledge. And more often than not, this
creative activity is prompted by some sort of disequilibrium or
scarcity.
With that in mind, let's return to the starting point for this
discussion, about calculating the "cost" of intervening to
avoid climate warming. Recall that the conventional approach to making
this calculation is to assume some business-as-usual growth
trajectory, based on extrapolations from recent productivity growth
rates, and then to assume slightly lower growth rates corresponding to
the case where regulations have been introduced. The cost of
regulation is taken to be the difference (or a function of the
difference) between the two growth rates.
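The arithmetic behind this conventional calculation is simple enough to sketch. The following is a minimal illustration of the method described above; the growth rates, starting output, and horizon are entirely hypothetical numbers, not figures from any actual forecast:

```python
# Conventional "cost of regulation" arithmetic, as described in the
# text: compound a business-as-usual growth rate and a slightly lower
# regulated rate, and take the difference. All numbers are illustrative.

def gnp_trajectory(gnp_now, annual_growth, years):
    """Compound a fixed annual growth rate forward, year by year."""
    return [gnp_now * (1 + annual_growth) ** t for t in range(years + 1)]

horizon = 50                                       # decades-long window
baseline = gnp_trajectory(10.0, 0.030, horizon)    # business as usual: 3.0%/yr
regulated = gnp_trajectory(10.0, 0.028, horizon)   # with regulation: 2.8%/yr

# The "cost" is taken to be the shortfall in final-year output.
cost = baseline[-1] - regulated[-1]
```

The point of the passage that follows is precisely that this procedure assumes the extrapolated rates themselves are knowable decades ahead.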
But, can it be so easy to calculate the future growth of an economic
system many decades into the future? Doesn't it depend on rates of
population growth, female participation in the labor force,
urbanization, education, lifetime working hours, environmental
constraints, capital investment, scientific discoveries... and so on
and on? Surely, no-one would pretend that these factors have not
changed significantly in the past century and are not changing
still-some of them quite rapidly. Then how can it be reasonable to
extrapolate past growth rates into the future? Or is it that this
approach-the Nordhaus notion of an economy that continues
automatically, indefinitely, and independent of government action-is
essentially unreasonable?
The Mystique of Technical Progress
It is helpful, here, to review a little history of economic thought.
In the 18th century when the study of economics began to distinguish
itself from "natural philosophy," there were two competing
theories. The French physiocrats, as they were called, attributed
wealth to agricultural surplus (a gift of nature) and thence to land.
The English theorists, as exemplified by John Locke, by contrast
attributed wealth to human labor applied to the land. Adam Smith-and
Karl Marx after him-emphasized the role of labor as the primary
creator of wealth. Marx distinguished between current labor inputs and
"capital" created by past labor. By the end of the 19th
century, economists had pretty much settled on capital and labor as
the two "factors of production" that should account for
economic output.
By the mid 20th century, it was possible to reconstruct historical
time series for GNP, labor, and capital investment as far back as the
1870s. Using straightforward statistical methods, it was possible to
ascertain how much of GNP growth since the Civil War could be
explained by increasing labor supply and how much could be attributed
to increasing capital stock. The answer was a great surprise. It
turned out that labor and capital together could only account for a
small fraction of the growth that had actually occurred. The
unexplained residual was named "technical progress."
Technical progress, so called, seems to add 1.5 percent or so to the
U.S. GNP year in, year out, like clockwork.
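The decomposition that produced this "residual" can be stated compactly. Assuming a standard production function with a capital share of roughly 0.3, as in typical U.S. estimates, the growth rate of output splits into contributions from capital, labor, and the unexplained remainder:

```latex
g_Y = \alpha\, g_K + (1 - \alpha)\, g_L + g_A
```

Here $g_Y$, $g_K$, and $g_L$ are the growth rates of output, capital, and labor, $\alpha$ is the capital share, and $g_A$, the Solow residual or "technical progress," is whatever is left over once capital and labor have been accounted for.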
Most economists are still using versions of a theory of growth
developed nearly half a century ago by Robert Solow, who was awarded a
Nobel Prize for his accomplishment. The Solow model, in its simple
form, depends only on three variables. The first two are total labor
inputs and total capital stocks. (Capital services are assumed to be
proportional to the stock). The third variable is the above-mentioned
technical progress, which Solow introduces as an exogenous multiplier
of a "production function" that depends on the other two
variables. The multiplier is usually expressed as an exponential
function of time which increases at a constant rate based on past
history.
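The simple model just described can be sketched in a few lines. The Cobb-Douglas functional form and all parameter values here are illustrative assumptions for exposition, not Solow's own estimates:

```python
import math

# Simple Solow form: output Y(t) = A(t) * F(K, L), where the
# "technical progress" multiplier A(t) = A0 * e^(g*t) grows at a
# constant exogenous rate g, and F is a Cobb-Douglas function of
# capital stock K and labor input L.
def solow_output(K, L, t, A0=1.0, g=0.015, alpha=0.3):
    # g = 0.015 mirrors the roughly 1.5 percent/year residual noted above
    return A0 * math.exp(g * t) * K ** alpha * L ** (1 - alpha)

# With capital and labor held constant, output still grows at rate g:
y0 = solow_output(100.0, 100.0, t=0)
y10 = solow_output(100.0, 100.0, t=10)
growth_factor = y10 / y0   # e^(0.15), about 1.16 over the decade
```

Note that the multiplier enters from outside the model: nothing inside it explains where the 1.5 percent comes from, which is exactly the complaint developed below.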
The first economist to focus on technological progress as an economic
phenomenon was an Austrian, Joseph Schumpeter, who moved to the United
States and taught at Harvard. It was Schumpeter who pointed out (in
his PhD thesis in 1911 or so) that innovation is the driver of growth,
and that innovators have economic incentives to achieve temporary
monopolies that allow "supernormal" profits. This is still
the incentive for Intel and Microsoft. It is what drives drug
companies to develop new drugs, and McDonald's to develop new
hamburger combinations. It was Schumpeter, by the way, who coined the
phrase "creative destruction" I used a few paragraphs back.
Unfortunately, there has never been any real quantitative theory to
explain Schumpeterian technical progress, although some economists
made careers trying to explain it in terms of other variables. This
activity was called "growth accounting." So, perhaps it is
with some justice that technical progress has been called "manna
from heaven." In practice, most economic growth theorists since
Solow have ignored the problem, like the mad aunt in the attic that no
one mentions, and simply assumed that such progress will continue into
the future as before.
The Solow growth model is not intellectually satisfying, of course.
And the standard model has other drawbacks, as well. For instance, it
holds that the role of capital investment as a driver of growth
necessarily declines over time. This is because the capital stock
eventually becomes so large that annual investments are needed simply
to compensate for annual depreciation. When this point of saturation
is reached, further growth per capita can only result from technical
progress. This feature of the model also implies that countries with a
small capital stock will grow faster than countries with a large
capital stock. Thus, the model predicts gradual convergence between
poor and rich countries.
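The convergence prediction follows from diminishing returns to capital, which is easy to illustrate. Assuming the same Cobb-Douglas form used in standard expositions (the numbers below are purely illustrative), the marginal product of capital is far higher where capital is scarce:

```python
# Marginal product of capital dY/dK for Y = K^alpha * L^(1-alpha).
# With alpha < 1, each extra unit of capital adds less output the
# larger the existing stock, so capital-poor economies should grow
# faster per unit invested -- the model's convergence prediction.
def marginal_product_of_capital(K, L=100.0, alpha=0.3):
    return alpha * K ** (alpha - 1) * L ** (1 - alpha)

mpk_poor = marginal_product_of_capital(K=10.0)    # small capital stock
mpk_rich = marginal_product_of_capital(K=1000.0)  # large capital stock
# mpk_poor is roughly 25 times mpk_rich in this example
```

It is this clean theoretical prediction that, as the next paragraph notes, the World Bank growth data largely fail to confirm.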
Growth data has been accumulated by the World Bank and other agencies
for many decades. However, neither the expected saturation nor the
convergence phenomenon is clearly indicated by the data. So far, there
is no convincing evidence of capital saturation at all. There are
examples of convergence, but there appear to be more examples of
divergence. Nor is there any explanation of why technical progress is
faster in some countries than in others. In short, the standard
economic model is-and has been for some time-in urgent need of repair,
if not major revision.
Growth as a Positive Feedback Process
One other feature of the standard Solow model is very troublesome in
the long run context. Quite simply, the model treats natural resource
consumption in general, and energy consumption in particular, only as
a consequence of economic activity (as the American economy grows,
we'll "consume" more energy), not as a causal factor. Yet
the causal relationship surely runs both ways. The engine of economic
growth is a positive feedback cycle, in which declining costs of
producing goods and services stimulate increased demand for those
goods and services. They also stimulate the creation of newer goods
and services. (Increased demand for cars stimulated demand for paving
machines, traffic lights, and personal-injury lawyers-and eventually
for Gulf War weapons, traffic reporters, drive-in fast-food
restaurants, and advertising copywriters.) Increased demand for the
expanding array of products and services triggers increased investment
and increased scale of production. Investment may be strictly in
bricks and mortar or it may also include research and development (R&D).
Economies of scale, along with process improvements resulting from R&D
together with "learning by doing," then yield further cost
savings-which lead to further price reductions, and so on around the
loop.
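This loop can be captured in a toy model: unit cost falls by a fixed fraction with each doubling of cumulative output (the classic experience curve), while lower prices raise demand, which in turn accumulates more experience. Every parameter below is an illustrative assumption, not an empirical estimate:

```python
import math

c0 = 100.0       # initial unit cost
learning = 0.20  # 20% cost reduction per doubling of cumulative output
b = -math.log2(1.0 - learning)   # experience-curve exponent, ~0.32
elasticity = 1.2                 # demand rises as price falls

cumulative = 1.0                 # cumulative output to date
cost = c0
costs = [cost]
for year in range(20):
    demand = 10.0 * cost ** -elasticity   # cheaper -> more demanded
    cumulative += demand                  # experience accumulates
    cost = c0 * cumulative ** -b          # scale and learning -> cheaper
    costs.append(cost)
# costs falls year after year: the self-reinforcing loop of the text
```

Because cumulative output can only grow, cost can only fall, and falling cost feeds demand: the loop runs in one direction until some physical or economic limit interrupts it.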
This feedback cycle works for all kinds of goods and services, of
course, but it is particularly powerful in the domain of energy
conversion and mechanical power. The Industrial Revolution began with
a shortage of charcoal due to deforestation (much of it for timber to
build navy ships) in the 16th and 17th centuries. Coal was discovered,
and mining began. But soon the mine shafts went below the water table.
It was necessary to pump the water out of flooded mine shafts. Horses
on a treadmill were the usual source of pumping power until Thomas
Newcomen built a steam engine (based on some borrowed ideas) and
commercialized it for application in the mines. Coal-fired steam
engines, even very crude ones, could pump water more cheaply than
horses on a treadmill. The price of coal dropped, demand rose, and in
the late 18th and early 19th centuries new applications of coal-such
as gas for lighting in cities-rapidly emerged. At the same time,
growing demand for steam engines triggered important
design innovations (especially James Watt's improvements) that sharply
increased their efficiency and reduced their cost. Steam engines
powered boring machines (to make cannons but also to drill out the
cylinders of more powerful steam engines!), drove river boats
upstream, carried coal cars from mines to ports, drove mechanical
looms and eventually drove railways and ocean-going ships.
After a long trial-and-error learning process, coal (and later, coke)
also replaced charcoal in iron-smelting and brought about the
widespread availability of cast iron, then wrought iron, and finally
Bessemer steel. The major market for cheap steel was to build the
railway network and its associated infrastructure, such as bridges.
Then came ships and high-rise buildings. The first industrial
revolution was certainly powered by coal and steam, and the
fingerprints of feedback are pervasively visible.
The energy-power feedback cycle did not stop in the mid 19th century.
But technical progress opened up new avenues for further industrial
development-and for still more feedback loops. The rapid growth of the
steel industry after 1870 required vast quantities of coke. Coke is
made from coal, but the pyrolysis process yields a gaseous waste known
as coke oven gas. At first this material was just flared off. But its
abundant availability led inventive minds to wonder whether such gas
might somehow be used as a fuel. Soon, a new kind of engine was
invented, which could burn fuel inside the cylinders and harness the
hot expanding combustion products to drive the pistons. This "internal
combustion" engine had several fathers, but it was fully
commercialized (in 1876) by Nikolaus Otto in Cologne, Germany, near where
the coking industry was centered.
Otto's compact engines were designed to be stationary, for use in
small factories and shops. However one of his engineers, Gottlieb
Daimler, took the idea further. With Wilhelm Maybach, he designed a
miniature version of Otto's engine, with two cylinders. Thanks to
Maybach's invention, the carburetor, it was capable of burning a
liquid fuel. Karl Benz, a carriage builder working independently, put
similar engines into the first practical horseless carriages.
Daimler's celebrated Mercedes model was later named for the daughter
of Emil Jellinek, an early customer. The rest is well-known history.
Whence came the liquid fuel burned by Daimler and Maybach's engines? It,
too, was a virtually costless waste product of the distilleries that
made "illuminating oil" (kerosene) from "rock oil"
(crude petroleum) to replace whale oil in household lamps. Whale oil
had been getting scarce for several decades, as demand outstripped the
supply of sperm whales. Rockefeller's Standard Oil monopoly was based
on "illuminating oil"-a big business in the late 19th
century. Gasoline, a light fraction too volatile for household use,
was either discarded or used wastefully for dry-cleaning or other
minor applications. But in 1890 or so, Daimler, Benz, and the hundreds
of competitors that soon appeared put this refinery waste product to
work in their new horseless carriages. Twenty years or so later, it
was gasoline that was the primary product of petroleum refineries, and
soon after that it was necessary to find ways of "cracking"
heavier fractions and recombining lighter fractions to increase the
gasoline supply. Again, the fingerprints of feedback are everywhere to
be seen.
Another major innovation of the late 19th century was the use of
steam engines, especially the new steam turbines, to generate electric
power. Like petroleum, this technology was first used for lighting.
The demand for brighter and cheaper light provided an enormous impetus
to the nascent electrical industry. But no sooner did technical
progress in electrical engineering (largely thanks to Thomas Edison)
make it possible to generate electric power cheaply on a large scale,
than a host of completely new applications emerged. These included
electric motors (soon applied to streetcars, elevators, washing
machines, refrigerators and all sorts of factory machines), and
electric furnaces capable of reaching extraordinarily high
temperatures and making totally new materials such as silicon carbide
(carborundum) for cutting tools, calcium carbide to make acetylene gas
for lighting and for welding, tungsten filaments for incandescent
lights, and stainless steel.
Today, electric power is rapidly replacing all other sources of
mechanical power except in transportation and off-road machines.
It is electric power that makes possible the systematic use of the
electromagnetic spectrum for communications (telegraph, telephone,
radio, radar), entertainment (TV), and electronic data processing. It
is the merger of electronic data processing with telecommunications
that may provide the motive force for the next burst of economic
growth. Nevertheless, most of the electric power in the world is still
produced by steam turbines powered by burning coal or other fossil
fuels. This is the dependency that must be broken-and broken soon-if
the climate warming process is not to get out of control.
Breaking the Cycle
To summarize, there is every reason to conclude that technical
progress up to now has been largely driven by the energy-power
feedback cycle. The advent of microelectronics-based information
technology in recent decades has introduced another significant-but
not yet independent-driver of technical progress. Biotechnology is
likely to be increasingly potent in the coming decades. But declining
energy prices, and increasing demand for fuel and power, continue to
play an important role in the economic growth machine.
In the context of long-term economic forecasting, this is a vital
point. It means that future economic growth along the present
trajectory must mean large increases in energy and natural resources
consumption. The ratio of energy to GNP may now be declining slowly
(for the most industrialized countries), but the consumption of energy
and materials per capita is still increasing. To those who are trying
to envision a sustainable economy, "dematerialization" is a
compelling notion. But it is not yet a reality. Economic growth and
materials/energy consumption are still very tightly linked. If "dematerialization"
were really enforced, today or tomorrow, economic growth would
instantly stop and go into reverse. In that respect-in the short
run-the intuitions of the Kyoto skeptics are correct. Economic growth
over the next few years is still very much dependent on continued
increases in energy consumption accompanied by declining prices.
In the long term, however, the skeptics are probably quite wrong. One
reason is that the existing energy-power feedback system is itself
running out of steam-so to speak. There are not many possibilities for
increasing fossil energy conversion efficiency, or developing
marvelous new materials from petrochemicals, or exploiting further
economies of scale. On the contrary, oil will have to be drilled
deeper, further and further offshore, or in the arctic, or recovered
from shale, all of which will increase costs. Carbon dioxide from
electric power plants will have to be captured and sequestered
underground or deep in the ocean. Again, costs will increase.
The "business as usual" trajectory (allowing for marginal
improvements but continuing to depend on fossil fuels) is really a
dead end because it excludes any truly potent new technological
directions. By "potent," I mean technologies with
applications beyond their original narrow application-as the
technology for creating electric light made possible so many other new
and previously unimaginable products and services, for example. In
fact, the longer the existing system is functioning and keeping costs
and prices low, the harder it is for radical new approaches to break
into the charmed circle.
It is no accident that major breakthroughs in the past have often
followed a crisis of some sort. People started to dig coal when
charcoal became too costly. People started searching for "rock
oil" and learning how to drill for it and refine it because the
sperm whales were becoming scarce. The Germans developed synthetic
ammonia technology to free themselves from dependence on
British-controlled Chilean nitrates. In the Second World War, they
developed coal gasification and liquefaction technologies to replace
petroleum supplies from the Middle East and Russia. The Americans developed
synthetic rubber in World War II because the natural rubber
plantations of Indochina had been captured by the Japanese.
There isn't the slightest doubt that new energy technologies from
renewable sources can be developed, and that this will happen faster
if access to cheap coal, oil, and gas is restricted. Wind power and
solar photovoltaic power are two of the most promising possibilities.
Solar hydrogen is the most exciting long-term possibility. None of
these receive any significant public or private sector funding.
On the demand side, conservation can replace a much larger fraction
of current energy consumption than the conservative establishment is
willing to concede. In the replacement of fossil-fuels, however,
positive incentives will be needed to overcome the natural advantages
of invested capital and knowledge gained through experience. Yet only
a tiny fraction of the world's energy R&D spending goes into the
new and promising technologies. Most of it still goes into the
discredited nuclear sinkhole, or into marginal technologies to help
the existing energy establishment.
In the short run, radical new technologies will cost more than
established ones. But in the long run they have far more potential to
cut costs-and to do so without wrecking the environment. But the
redirection of investment is not going to happen spontaneously or
painlessly. If the industrial world is to achieve a sustainable
relationship with the environment, major institutional changes will be
needed, and steps in a new direction will have to be taken soon.
[Part II, in the next issue, will discuss the economic basis for a new
direction in the global energy economy]