FAQ on "Halving warming with idealized solar geoengineering moderates key climate hazards"

By David Keith | March 11, 2019

This post provides some color commentary as an FAQ about “Halving warming with idealized solar geoengineering moderates key climate hazards”, published 11 March 2019 in Nature Climate Change. Feel free to send me questions and I may add to the FAQ. See also Harvard press release and video.

This feels like the most important solar geoengineering (SG) study I have been lucky to be a part of. From my perspective, it’s more important and should get more attention than progress on our stratospheric experiment.

We use a high-resolution state-of-the-art model to go after a central policy-relevant question: what regions would be made worse off if solar geoengineering were combined with emissions cuts to limit climate risks? We find that no region is made worse off in any of the major climate impact indicators we examined. (It’s easy to cherry-pick regions to make SG look great or terrible—we used standard regions from the IPCC SREX report.)

My hope is that the paper will dispel some of the common-but-false assumptions that solar geoengineering necessarily entails massive risks, that its impacts are highly unequal, and that it works for temperature but messes up precipitation. And I hope it demonstrates that further research needs to be done.

How does this matter for climate policy? 

There is strong evidence from multiple climate models that if solar geoengineering were implemented with reasonably uniform global coverage (e.g. uniform aerosols in the stratosphere) and if it’s used in combination with strong emissions cuts—as a complement, not a substitute—then it may offer major reductions in the climate risks that matter most to humans and ecosystems without making any region significantly worse off.

The possibility that solar geoengineering could enable deep reductions in climate risks is a strong argument for a serious global, open access research program aimed at better understanding the risks and efficacy of solar geoengineering. For more on what such a program might look like, here is my case for a responsible research program in the NAS Issues in Science & Technology, as well as an important paper from Douglas MacMartin and Ben Kravitz in PNAS.

Does this mean people who worry about the risks of solar geoengineering are wrong? Does this argue for deployment? 

Not at all. I have worried about this technology’s risks since the early 1990s. At this point, research is still dominated by a small group of scientists, which means there is a real danger of groupthink. We may simply be wrong.

What this paper illustrates is that it’s too early to leap to conclusions in either direction. This is true both for those who are convinced solar geoengineering will work, and for those who are convinced that solar geoengineering will cause droughts, or will harm the poor while benefiting the rich.

This paper, along with much previous work by many authors, shows that solar geoengineering could have large and equally distributed benefits, but it doesn’t prove it. It is an idealized model. There are still huge uncertainties.

It’s clear that if misused, e.g., by deployment in only one hemisphere, solar geoengineering could have huge impacts. We need technically sophisticated efforts to quantify the risks of plausible deployments of uniform solar geoengineering used as a moderate supplement to emissions cuts. Until that work is done it’s too early to leap to conclusions.

Who’s behind this paper? Why does it matter? 

This paper started from a discussion with Gabe Vecchi (now Princeton, then GFDL) following a talk I gave at Princeton in 2016. Gabe decided to study solar geoengineering using GFDL’s new 25-km-resolution tropical cyclone permitting model. This is important because this model does a substantially better job simulating current precipitation extremes than typical models that have been used before on solar geoengineering. It’s also important because it is the first time that GFDL, the oldest and one of the best climate modeling centers, got involved in solar geoengineering research.

Gabe brought in Larry Horowitz (GFDL), one of the model’s developers, and Jie He (now at Georgia Tech). I meanwhile encouraged Peter Irvine, a postdoctoral fellow in my group, to take the lead in analyzing the data and writing the paper.

Gabe was collaborating with hurricane expert Kerry Emanuel (MIT), and as we began to look carefully at the hurricane responses, Gabe did not have confidence in the ocean-basin-by-basin regional response, so we invited Kerry to join the paper.

This new collaboration is relevant because solar geoengineering publications have been too dominated by a small group, and this brings significant new collaborators with deep climate science expertise to this important topic.

What about precipitation?  

This paper highlights a common misunderstanding about solar geoengineering: that a world with solar geoengineering would inevitably have less water availability. If all warming from rising CO₂ were offset by solar geoengineering, there would be less rain overall than in the current climate. This has led to concerns about droughts and monsoons. However, global warming increases rainfall, so something that reverses this could reduce flood risk. When solar geoengineering is used with emissions cuts to halve warming, global-mean rainfall is more or less restored to its original level. Moreover, while it seems reasonable to assume that less rain means that things are drier, what matters more for ecosystems and farmers is water availability: rainfall minus evaporation. Solar geoengineering reduces rainfall, but it also reduces evaporation by reducing temperatures. So a decrease in rainfall may be associated with an increase in water availability.

One of the ways this paper takes a step beyond current literature is by focusing on a larger set of variables that (we think) are more relevant to assessing real world climate impacts. Rather than just looking at temperature and precipitation, we looked at: annual average temperature, extreme temperature, extreme precipitation, precipitation minus evaporation as a proxy for water availability, and intensity of tropical cyclones. Note: while we do not highlight them in the paper, we also find that the simulation moderates changes in precipitation. More on this and some data on the seasonal response can be found in the supporting material.

Why did the paper adjust the solar constant rather than attempting a realistic simulation of stratospheric aerosols? 

Here’s the crucial paragraph in the paper:

We analyse the distribution of climate changes resulting from reducing the solar constant to offset roughly half the radiative forcing from doubling CO₂. A spatially uniform reflective stratospheric aerosol layer, which could be achieved by adjusting aerosol injection using feedback, would produce a similar radiative forcing to a solar constant reduction. Even with a uniform distribution, stratospheric sulphate solar geoengineering will differ from a solar constant reduction in that sulphates heat the lower stratosphere, perturb the ozone layer, and increase the ratio of diffuse to direct light. Each of these effects can be reduced by choices of alternate non-sulphate aerosol, though their side-effects are less well understood because there is no direct natural analogue. We nevertheless choose solar constant reduction as a benchmark because, given the diverse implementations of aerosols in models, solar modification allows more direct tests of inter-model differences in climate response to solar geoengineering.

Let me nerd out: In separate work, the group at NCAR, our group, and others have done work suggesting that it is possible to adjust the injection of aerosols to achieve roughly uniform radiative forcing. No group has yet simulated this in a way that reasonably approximates how feedback from limb-sounding and in situ measurements would be used in a stratospheric analysis/forecast system to adjust injection to achieve a specified optical depth profile. Moreover, no existing model can do a good job of simulating this, because models with Eulerian grid boxes instantaneously mix aerosol or precursor emissions into the grid box, whereas material dispersed from an aircraft would form a linear plume. Local concentrations in the plume will be far higher than simulated in a Eulerian model, which will produce different SO₂ oxidation rates and different rates of particle formation. Several research groups are now beginning to work together to address these problems.
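To make the feedback idea concrete, here is a toy sketch of a controller that nudges an injection rate toward a target optical depth. Everything in it (the linear "response" function, the gain, and the target) is a placeholder assumption for illustration; it is not any group's actual control scheme or aerosol model.

```python
# Toy illustration of feedback-adjusted injection (all numbers are placeholders).

target_aod = 0.02        # desired aerosol optical depth (assumed)
injection_rate = 0.0     # Tg of SO2 per year, starting from zero
gain = 200.0             # controller gain: Tg/yr of adjustment per unit AOD error (assumed)

def observed_aod(rate):
    """Placeholder stand-in for limb-sounding / in situ retrievals:
    pretend optical depth responds linearly to the injection rate."""
    return 1.5e-3 * rate

for year in range(10):
    error = target_aod - observed_aod(injection_rate)
    injection_rate += gain * error   # adjust injection based on the measured shortfall
    print(f"year {year}: inject {injection_rate:5.1f} Tg/yr, AOD {observed_aod(injection_rate):.4f}")
```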

Stepping back from technical complexity, this paper suggests what might be possible with a well-designed aerosol injection method. It also underscores one of the many reasons why research is needed—to better understand what such a method might look like, and what its risks and limitations might be.

Original post on Harvard.edu

Two new papers examine how turbine-atmosphere interactions shape wind-power’s environmental impacts

Today Lee Miller and I published a pair of papers on the interaction between wind turbines and the atmosphere. “Observation-based solar and wind power capacity factors and power densities” in Environmental Research Letters, and “Climatic impacts of wind power” in Joule. (Many thanks to the journals for arranging simultaneous publication.) Don’t miss Lee’s video abstracts for Joule and ERL.

From my perspective, there are two big takeaways. First, there are now two independent lines of high-quality data suggesting that models with atmosphere-turbine interactions are getting something important correct. Second, wind power has a somewhat larger environmental footprint than many had assumed; specifically, the land footprint of wind is at least 10 times higher than that of solar.

What does this mean for public policy? In my opinion, it means more empirical research to answer specific questions about wind’s impacts. A wise reporter chided me that scientists always want more research while pushing me towards a policy relevant conclusion. For me, the strongest high-level conclusion is that, as policymakers push towards decarbonization, it’s worth pushing a bit harder on solar and a bit less hard on wind.

Context matters: the big problem is that policymakers should be doing much more to cut carbon emissions, most importantly by technology-neutral policies that penalize the use of the atmosphere as a cost-free disposal site for carbon pollution. Some thoughtful environmental activists who are fighting day-to-day against fossil fuel interests to accelerate the deployment of low-carbon power will ask, Why publish the stuff that hands ammunition to the other side? My answer is simply that no large-scale energy technology is without social and environmental impacts. And, as renewable energy grows out of its cradle into the energy mainstream, those whose goal is environmental protection must welcome careful analysis of its full environmental impacts, particularly when that analysis can inform energy choices to reduce future impacts.

Why the timescale comparison? 
Much reporting will focus narrowly on the timescale comparison in the Joule paper. Reporters seem drawn to a simple, over-the-top sound bite along the lines of, "Wind is worse than fossil fuels this century." Such a claim is total nonsense.

Why then, did we make the wind versus fossil comparison in the Joule paper? Simply reporting that we get a specific climate change for a specific large deployment scenario isn’t very helpful because it doesn’t provide a relevant comparison. We need to find a way to compare the relative environmental footprints of low-carbon energy sources like solar and wind. Policymakers need a rough metric of how much these climate impacts matter on a per-unit-energy basis. A single wind farm has negligible impact on global climate over the next century. Yet, if that single wind farm provides an infinitesimal global benefit in the form of reduced emissions and climate change, and an infinitesimal climate impact in the form of non-local, hemispheric-scale climate change caused by atmosphere-turbine interactions, it’s relevant to compare these two infinitesimal effects in order to produce a crude estimate of the ratio of benefits to harms. Because the carbon benefits grow cumulatively with time while the turbine-atmosphere interactions are instantaneous, this ratio is not dimensionless, but instead has units of time.
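A minimal sketch of the arithmetic behind that timescale, using round placeholder numbers rather than values from the Joule paper (the scenario size, grid intensity, warming coefficient, and turbine-induced warming below are all assumptions for illustration):

```python
# Sketch of the benefit/harm crossover time described above. All inputs are
# illustrative placeholder values, not results from the paper.

wind_power = 0.5e12          # W of average wind generation in the scenario (assumed)
grid_intensity = 0.5e-3      # tCO2 avoided per kWh of displaced generation (assumed)
warming_per_tco2 = 0.45e-12  # deg C of eventual warming per tCO2 emitted (assumed)
turbine_warming = 0.1        # deg C of immediate climate change from turbine drag (assumed)

kwh_per_year = wind_power * 8760 / 1000                            # kWh generated each year
benefit_rate = kwh_per_year * grid_intensity * warming_per_tco2    # avoided warming per year (deg C/yr)

# Avoided warming accumulates over time; the turbine-induced change is roughly
# immediate, so the comparison reduces to a crossover time with units of years.
crossover_years = turbine_warming / benefit_rate
print(f"avoided warming grows ~{benefit_rate*1000:.1f} mK/yr; crossover after ~{crossover_years:.0f} years")
```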

Because both the benefits and harms are very roughly linear, the timescale metric is relevant to wind or solar power at any scale. It doesn’t depend on the specific half-terawatt scenario studied here.

What do these timescales mean to me? They’re very rough order-of-magnitude guides to the relevance of the climate changes caused by low-carbon power sources. If the timescale is on the order of decades or less, then I think it’s fair to completely ignore the climate impacts in practical policymaking. This is the case for solar power. If the timescale was thousands of years, then I think the climate impact poses a serious problem. For wind, our analysis suggests the timescale is, to a very rough order, a hundred years—scientist speak for more-than-decades and less-than-millennia. Given that, I think it’s fair to conclude that wind power’s climate impacts are non-negligible.

Statements like "wind is worse than fossil fuels this century" are nonsense both because they’re overly precise about the timescale, which is in fact contingent on a bunch of open-ended assumptions as we describe in the paper, and because the climate change from atmosphere-turbine interactions and from CO₂ are quite different. There may be significant benefits to the climate change from wind turbine drag. For example, all of the global models that have examined large-scale wind deployment scenarios (including Mark Jacobson’s, though he did not show climate results in that paper) show cooling over the Arctic. Years ago, Danny Kirk-Davidoff and I wrote a nerdy paper in the Journal of the Atmospheric Sciences to try to understand the reasons for this cooling. If correct, this is an added climate benefit of wind power.

Q: What’s new? A: Observational support for the models. 
For me, the importance of these papers is not the timescale, nor the turbine-induced climate change, both of which have been shown before. But, in the main, the environmental science community ignored those results. In part, I suspect, many people concluded that the results were simply not robust because they were not backed up by observational evidence.

The ERL paper is the first observational estimate of the average power density of large-scale wind power. Power density matters because it determines how much land is required to supply a given amount of energy. Our results use newly released data on the location of all US wind turbines. We find an average power density of 0.5 W/m², consistent with physically based models and inconsistent with wind resource estimates that ignore interactions between wind turbines and the atmosphere. See this graphical summary. (A lower power density means that supplying a given amount of wind power takes more land than previously assumed: roughly 3 times more than an important estimate by the US DOE, and more than 10 times more than an important study used by the IPCC.)
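To see what a 0.5 W/m² power density implies, here is a one-line calculation; the demand figure is a round illustrative assumption, not a number from the paper:

```python
# Land area implied by a 0.5 W/m2 wind power density (demand figure is an assumption).

wind_power_density = 0.5     # W/m2, the observation-based estimate discussed above
average_demand = 0.5e12      # W, a round placeholder for US average electricity demand

area_km2 = average_demand / wind_power_density / 1e6    # m2 -> km2
print(f"~{area_km2:,.0f} km2 of wind-farm area")        # ~1,000,000 km2
```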


As we catalogue in the Joule paper, warming has now been observed at no fewer than 28 operational wind farms in at least 10 separate studies. Most of the studies are based on changes in satellite-observed skin temperature before and after wind farm installation. For me, the major result of the Joule paper was that our model roughly matches the diurnal and seasonal cycle of this warming, providing strong confirmation that we are capturing an important mechanism that causes wind power-induced climate change.

As I see it, the novelty of these two papers is the link between models and observations. My naïve hope is that readers will not over-interpret the specific results in the Joule paper, which are highly configuration-dependent, but will instead take away the importance of observational confirmation of previously theoretical results: one cannot simply ignore these effects, and wind power’s land footprint and climate impacts need to receive more serious consideration in strategic decisions about decarbonizing our energy system.

Original post on Harvard.edu

Why I am proud to commercialize direct air capture while I oppose any commercial work on solar geoengineering

By David Keith | June 4, 2018

My academic work is focused on solar geoengineering. I am also founder and part-time employee of Carbon Engineering, a Canadian company commercializing technology that captures carbon dioxide directly from the atmosphere.

It’s easy to confuse the two efforts, in part because of sloppy use of “geoengineering” to encompass a range of unrelated ideas, from planting trees to massive space mirrors between Earth and the Sun. Words matter. Critics sometimes exploit this confusion, implying that our work on solar geoengineering aims at profit, or takes funding from oil and gas companies to serve industry interests.

Pointed critiques aside, there is room for misunderstanding. A few months back, a cousin of mine who does environmental art asked if Carbon Engineering was commercializing my academic research on geoengineering. The answer is an emphatic “No.” I oppose commercial work on the core technologies of solar geoengineering, yet I am very proud of the work Carbon Engineering is doing to commercialize carbon-neutral transportation fuels made from atmospheric CO₂ and renewable power.

The remainder of this essay provides some personal reflections on the difference between direct air capture and solar geoengineering, differences that shape my views about the very different roles commercial interests play in their development. I also reflect on the conflicts of interest that arise from me working on both topics. I assume that you, my reader, have more familiarity with solar geoengineering than with Carbon Engineering’s work, so I start my explanation there.

Carbon Engineering

Carbon Engineering is a privately held company developing technology for direct air capture (DAC) of CO₂ from the atmosphere. It was founded in 2009 in Calgary, AB, and is now based in Squamish, BC. As of June 2018, we have just under 40 employees and have raised a cumulative total of US$30 million, including both investments and government support.

Our research began as an academic effort to understand the cost of DAC by doing a bottom-up engineering cost analysis of a DAC system constructed using off-the-shelf technologies. Our work was motivated, in part, by what I suspected were over-optimistic claims that DAC might be very cheap. As we dug deeper, the effort gradually shifted from assessment to problem-solving. We began to innovate until we reached a point where it seemed the most effective way to enable this environmental technology was to create a company, so we could focus on practical research to de-risk the innovation and drive it towards commercialization.

Carbon Engineering’s primary business model is to use DAC to make carbon-neutral hydrocarbon fuels from carbon-free energy. Cheap solar power plus electrolysis can be used to make hydrogen at a price that gets more competitive every year. Carbon-neutral hydrogen and CO₂ from DAC are combined using gas-to-liquids technologies to make transportation fuels such as aviation kerosene, diesel, and gasoline. We call this our AIR TO FUELS™ process. These fuels would be compatible with existing infrastructure but have no connection to oil and gas, and have near-zero lifecycle carbon emissions. They provide a way to use intermittent renewable power from sunny and windy locations to power transportation around the world. On a large scale, Carbon Engineering aims to make synthetic fuels at $1 per liter. Fuel from early plants would be more expensive, but in the long run costs will come down below $1 per liter as the cost of solar and other low-carbon power declines along with the cost of electrolysis. When derived from oil, the same fuels now have production costs of about $0.60 per liter.

Our fuels are unlikely to beat oil in a head-to-head fight unless oil is penalized for its climate impact. Carbon Engineering has a strong business case today because policies that penalize CO₂ emissions and reward ultra-low-carbon transportation fuels are already in place and evolving rapidly. Examples include the various biofuel standards, California’s Low Carbon Fuel Standard (LCFS), and European vehicle fleet emission standards.

While synthetic fuels are our primary business case, Carbon Engineering is also exploring the use of atmospheric CO₂ to make high-value products, as well as markets that reward permanent removal of CO₂ from the atmosphere through a combination of DAC and carbon sequestration technologies.

Conflicts of Interest

Ignoring larger questions about carbon removal and solar geoengineering, what about the conflict between my roles at Harvard and Carbon Engineering? Harvard allows faculty to spend up to 20% of their time on outside work, and I spend that working for Carbon Engineering. My view is that universities including Harvard are too willing to accept a professor’s involvement in companies that are tightly tied to their academic research. This problem seems most acute in biomedical research, but it also applies in cleantech. I try and keep the division sharp. I ended all my academic work on DAC soon after forming Carbon Engineering. I have no research grants on DAC and no students or research staff working on it, or any similar technology. I do a limited amount of collaboration with other researchers interested in DAC, but in doing so, I make it clear that for DAC related work my primary responsibility is to Carbon Engineering.

As I see it, I have a clear conflict of interest if I do academic or advisory work on DAC (or related areas such as low carbon fuel policies) using my status as a professor or as an “expert” in energy and climate policy without indicating my vested interest in a company that has direct benefits from low carbon fuel policies.

That said, the concerns about my conflicts of interest have focused on the conflict between Carbon Engineering and my academic work on solar geoengineering.

Much of the concern about solar geoengineering is rooted in the fear that its development will sap efforts to cut emissions. This is often called geoengineering’s “moral hazard.” I think it’s somewhat more useful to think of it as a political risk. Put simply, I expect that work on solar geoengineering, including my own work, will be actively exploited by those who oppose emissions cuts, most obviously fossil fuel companies and fossil-rich nations. Such groups will likely exaggerate the effectiveness of solar geoengineering and minimize its risks in order to weaken efforts to cut emissions. Whatever it’s called, concern about political misuse of solar geoengineering research is real and serious. To my knowledge, I was the first to call it out as a “moral hazard” in a review article in 2000.

If solar geoengineering weakens climate policies, then it threatens cleantech companies like Carbon Engineering. This is not a minor issue. The only way that Carbon Engineering succeeds is with strong carbon policy. When raising funds for Carbon Engineering, one of the biggest concerns we hear from potential investors is that government policies penalizing high-carbon fuels (such as California’s Low Carbon Fuel Standard) might not be politically stable if governments waver on environmental policies.

To sum up: there would be a conflict of interest if my advocacy of solar geoengineering research benefited the interests of my company. But this cannot be the case. My advocacy of solar geoengineering research is contrary to the interests of Carbon Engineering for two reasons. First, because of the potential for solar geoengineering to weaken mitigation policies, i.e., the “moral hazard.” And second, because my involvement in solar geoengineering increases the chance that Carbon Engineering will be seen as a “geoengineering” company, with all the ethical and regulatory concerns that this entails.

Conversely, there would be a conflict of interest if my work at Carbon Engineering made solar geoengineering more credible. This seems implausible. Many climate policy advocates see a trade-off between solar geoengineering and carbon removal. They argue that mitigation alone cannot meet the ambitious climate targets agreed to at Paris, and that either carbon removal or solar geoengineering may be needed to keep the world from warming more than two degrees. As most see carbon removal as much less risky and politically problematic than solar geoengineering, it follows that if work at Carbon Engineering and its competitors helps make carbon removal more plausible, it weakens this case for solar geoengineering.

Divergent Roles for Private Capital

As explained in an earlier blog, I oppose commercial work on core solar geoengineering technologies. My essential concern is that commercial development cannot produce the level of transparency and trust the world needs to make sensible decisions about deployment. A company would have an interest in overselling, an interest in concealing risks. Solar geoengineering is not cleantech. It’s not a better battery or wind turbine. It’s a set of technologies that might allow humanity to alter the entire climate. As much as possible, it needs to be owned and controlled by transparent democratic institutions. It requires global governance.

It might be argued that, in forgoing commercial development of solar geoengineering, we lose the chance to harness commercial innovation to reduce costs. But cost is already so low that it’s more of a bug than a feature. Low cost may make it too tempting. Low cost enables unilateral action.

Why commercial work on DAC but not on solar geoengineering? It’s true that, as a company, Carbon Engineering’s development process is less transparent than academic research. But transparency in the development process is not needed if the final product can be easily validated. When Carbon Engineering succeeds and large-scale air capture plants are built, it will be very easy for outside entities such as governments, third parties, or citizen groups to monitor the net flows of energy and materials in and out of the plant, as well as various industrial byproducts or emissions. The potential environmental risks of a Carbon Engineering plant are well covered by existing regulations governing similar industries such as power plants, paper mills, and chemical plants.

This difference is linked to the fundamental difference between solar geoengineering and a carbon removal technology like DAC. Solar geoengineering is large-scale climate modification which inherently has global consequences that are difficult to quantify even after deployment. DAC results in emissions reductions (carbon-neutral synthetic fuels) or net CO removal (sequestration), with local impacts that can be measured with reasonable accuracy.

For public policy, the essential distinction between solar geoengineering and DAC rests on their very different distributions of risks, benefits, and costs: solar geoengineering entails uncertain global risks and benefits with negligible direct costs, while DAC and similar carbon removal technologies provide a global benefit in exchange for local risks and significant costs. Their very different governance challenges arise directly from this asymmetry.

Clean energy technologies like wind or nuclear power also offer the global benefit of reduced emissions in exchange for local costs and environmental risks. DAC is, as I see it, more like an energy technology than a form of geoengineering. And, the use of DAC to make carbon-neutral hydrocarbon fuels is an energy technology that competes directly with batteries and biofuels to provide low-carbon transportation. Finally, unlike solar geoengineering, there is a large public benefit to driving down the cost of DAC. That’s why I am very proud to be part of Carbon Engineering, but strongly oppose any commercial work on solar geoengineering.

Calling both “geoengineering” is misleading. Words matter.

Original post on Harvard.edu

Why we chose not to patent solar geoengineering technologies

By David Keith and John Dykema | May 3, 2018

We broadly oppose commercial development of solar geoengineering. In our view, a central objective of solar geoengineering research is to develop credible assessments of its risks and efficacy. Credibility depends, in part, on confidence that the risks of solar geoengineering are not concealed and that its effectiveness is not exaggerated. Such credibility can, in our view, best be generated by a transparent, multipolar research effort. Here, “transparent” means open access to the full research process, including raw data, dead ends, and experimental failures; and “multipolar” means the research is conducted by a diversity of independent entities, including groups that focus on finding the ways that it will fail.

Such transparency cannot reasonably be achieved in a commercial setting that depends on the ability to protect and monetize intellectual property. We therefore disapprove of patenting technologies that are core to the deployment or monitoring of solar geoengineering. This is not an injunction against any commercial involvement. Any research, or eventual deployment, will—of course—depend on a web of firms supplying components and services. Our concern is with the core technologies specific to solar geoengineering.

Our recent publication, Production of Sulfates Onboard an Aircraft: Implications for the Cost and Feasibility of Stratospheric Solar Geoengineering, serves as a useful example to discuss our concerns with patenting.

Unlike most of our work, this paper describes a possible improvement to technologies for solar geoengineering. It provides a chemical engineering analysis of a system to convert sulfur to SO₂ or SO₃ that could be used aboard aircraft to produce sulfate aerosols in the stratosphere. Such a system could reduce the cost and environmental impact per unit of sulfur delivered, and it could facilitate the use of SO₃ to make accumulation-mode H₂SO₄ particles that allow for better control of the distribution of particle sizes.

In any case, this is an example of a technology that would very likely have been patentable. However, because we oppose patenting we elected not to patent this technology. In so doing we follow the practice that we have for all solar geoengineering related research. We have never filed a patent related to solar geoengineering and have worked to find ways to block or discourage others from doing so. In 2012 for example, one of us (Keith) participated in organizing a workshop run by Granger Morgan at Carnegie Mellon University. It explored options for promoting transparency, leading to a suggestion that:

“In order to lessen the incentive for private commercial interests to influence the direction of the pursuit of SRM, it would be desirable to restrict the assertion of such private intellectual property rights to technical fields other than SRM. Federal agencies already have statutory authority to take prescribed action to restrict or partially restrict the patent rights of awardees.”

More recently, Harvard’s Solar Geoengineering Research Program was established with a policy discouraging patenting.

Our publication of the above finding contributes to transparency because it partially blocks anyone else from patenting something similar. Here’s a very rough summary of the relevant patent law: Europe, and many other jurisdictions, have a so-called first-to-file policy with no grace period after public disclosure. This means that anyone can, in principle, file a patent whether they were the inventor or not, so long as they are the first person to file. However, once the work is publicly disclosed, it is unpatentable. The U.S. has a recently revised first-inventor-to-file system that includes a limited one-year grace period. The grace period means that, under certain restrictions, the original authors can file a patent within one year of publication. No one else can, since the publication is prior art and other inventors are not authors of it. Note that this is a mere sketch of the issues—patent law is absurdly complicated.

In publishing this work we made it unpatentable by anyone (including us) in Europe and similar jurisdictions, and we complicated its patenting in the US. In practice, it is likely still possible for someone to patent something that treads on some of the same ground, but they would be constrained by our public disclosure of the original idea as prior art.

Returning to the big questions. We have mixed feelings about this paper. We are interested in improving knowledge of the risks of solar geoengineering and in finding ways to reduce those risks. We have generally avoided looking for ways to reduce its cost. For this publication, we chose to make an exception. To start, because low cost is potentially problematic, we thought a method that could meaningfully reduce cost without introducing significant technical complexity would be an important finding to publish. Furthermore, although this method could in principle reduce cost, its practical realization would require significant engineering and capital expenditure well beyond what is represented by our publication. Additionally, because sulfate aerosols are the most generic and well-studied potential method of solar geoengineering, and because the idea had already been discussed (it was mentioned obliquely in the footnotes of a prior paper), we thought it was worthwhile to present it in written form. On balance, we felt it was better to describe the process in a technical publication. Moreover, we judged that if solar geoengineering ever moved towards deployment, a well-funded engineering effort would far surpass our effort in inventing innovative ways to reduce costs.

It might be argued that in forgoing commercial development of solar geoengineering we lose the chance to harness commercial innovation to reduce costs for core solar geoengineering technologies. But, cost is already so low that it’s more of a bug than a feature. Low costs enable unilateral action. Solar geoengineering is not a consumer product. It’s a set of technologies that might allow humanity to alter the climate over decades to centuries. As much as possible, it needs to be owned and controlled by transparent democratic institutions. It requires global governance.

Original post on Harvard.edu

Climate Impacts of Biking vs. Driving

By Daniel Thorpe with help from David Keith | June 20, 2016

Paleo-diet cyclists warm the planet as much as Prius drivers — but under the usual (but crazy) assumption that nothing matters beyond 100 years in the future

See our free course on edX that debates issues like this and teaches you how to do the kinds of research and calculations in this blog post.

NOTE #1: this is a back-of-the-envelope estimate of the marginal impact of biking or driving a kilometer, looking only at the fuel for each (food and gasoline). Our goal is only to stimulate quantitative thinking about what drives carbon emissions (e.g., transportation vs. diet). It’s not an evaluation of whether biking or driving is best overall, and it’s not peer-reviewed research. As many enthusiastic readers have pointed out, bikes provide exercise, impose less danger on others, take much less energy to manufacture, etc. Please keep riding your bike; David and I do so daily 🙂

NOTE #2: The original version of this post on 26 May 2016 had an unreasonably high estimate of the caloric expenditure of biking, 50 kcal/km, and an underestimate of Paleo diet intensity, 3.8 g CO₂e/kcal. After looking carefully into several references I amended the post to use 25 kcal/km for cycling and 5.4 g CO₂e/kcal for a Paleo diet [i]; the impact of vegans and average-diet cyclists is now much lower, and the impact of Paleo cyclists is a bit lower too. Thanks to all the readers who brought the errors to my attention!

Is it better for the climate to bike or drive? Obviously it’s cleaner to bike. Right? Not so fast; biking can have a bigger impact than you think, depending on your diet. Long story short, if you eat enough meat, the extra calories burned by biking can lead to emissions similar to those from driving a car with good fuel economy [ii].

First let’s start with the energy needed to travel a kilometer by bike or car. Biking takes around 25 kcal/km [iii] above basal metabolism, which is equivalent to .11 MJ/km. A typical car in the US gets 25 mpg, or 9.5L/100 km, which is equivalent to 3.3 MJ/km. The Toyota Prius takes only 5 L/100km, or 1.7 MJ/km. So a typical car takes 30x more energy per kilometer than biking, and a Prius takes 15x more. This is what we expect given how much heavier cars are than bikes.
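For readers who want to check the arithmetic, here is a short sketch of these conversions; the gasoline energy content of roughly 34 MJ/L is an assumption I am adding here, not a number from the post:

```python
# Back-of-the-envelope energy per kilometer for biking and driving.

KCAL_TO_MJ = 4.184e-3            # 1 kcal = 4184 J
GASOLINE_MJ_PER_LITER = 34.2     # approximate energy content of gasoline (assumed)

bike = 25 * KCAL_TO_MJ                           # ~0.11 MJ/km above basal metabolism
typical_car = 9.5 / 100 * GASOLINE_MJ_PER_LITER  # 25 mpg ~ 9.5 L/100 km -> ~3.3 MJ/km
prius = 5.0 / 100 * GASOLINE_MJ_PER_LITER        # 5 L/100 km -> ~1.7 MJ/km

print(f"bike {bike:.2f}, typical car {typical_car:.1f}, Prius {prius:.1f} MJ/km")
print(f"typical car / bike = {typical_car / bike:.0f}x")   # ~30x
```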

But not all energy use has the same impact on climate. There’s a range of greenhouse gases that warm the climate at different rates and stay in the atmosphere for different lengths of time. When an activity leads to emission of several different greenhouse gases, we often combine them all into one metric, “CO₂ equivalents,” by multiplying all the gases other than CO₂ by their “Global Warming Potential,” which reflects how much more or less they affect the climate than CO₂. This doesn’t matter a lot for estimating the impact of cars, where 90+% of the emissions are CO₂, but it does matter for the agriculture powering a bike ride, where there are substantial emissions of N₂O and CH₄, which have GWPs of roughly 300 and 30, respectively, meaning we usually count 1 gram of CH₄ emissions as equivalent to ~30 grams of CO₂ emissions. Determining the exact value of these equivalencies is a tricky exercise that involves value judgements, something that we’ll return to later.
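In code, the CO₂-equivalent combination is just a weighted sum; the function below is a generic illustration using the round GWP values mentioned above:

```python
# GWP-weighted CO2-equivalents (100-year GWPs of ~30 for CH4 and ~300 for N2O,
# the round values used in this post).

def co2_equivalent(co2_g, ch4_g=0.0, n2o_g=0.0, gwp_ch4=30, gwp_n2o=300):
    """Combine emissions of several gases into grams of CO2-equivalent."""
    return co2_g + gwp_ch4 * ch4_g + gwp_n2o * n2o_g

print(co2_equivalent(co2_g=100, ch4_g=1.0, n2o_g=0.1))   # 100 + 30 + 30 = 160 g CO2e
```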

So let’s make estimates of the climate impacts of biking and driving, in CO₂ equivalents (CO₂e). If we look at a typical car in the US, taking 9.5L/100km, we can use the lifecycle emissions from gasoline, ~3.2 kg CO2e/liter, to estimate 300 gCO₂e per kilometer of driving. A Prius emits half as much, 150 gCO₂e/km. We can do a similar analysis for biking. An “average American” eats 2600 kcal/day and their diet leads to about 2.6 gCO2e/kcal [iv]. Given that .11 MJ/km requirement for biking, this gives us an impact of 65 gCO₂e/km. This is a little under half the impact of the Prius! Before writing this post, I guessed driving to have ~10x more marginal impact than riding my bike.

What about a meat-heavy diet, the Paleo diet? I looked at Paleo meal plans and academic lifecycle GHG estimates for the foods in those meal plans, and estimated the average emissions of a Paleo diet to be 5.4 gCO₂e/kcal [v]. This gives us 135 gCO₂e/km, very close to the Prius. What about a vegan? Vegan diets have much lower emissions, around 1.6 gCO₂e/kcal [vi], for 40 gCO₂e/km. This means that a biking vegan has less than a third the impact of an individual driving a Prius, and 1/7th the impact of an individual driving an average car.

Sharing rides in cars matters too. Two paleo aficionados are friendlier to the climate if they carpool together in a Prius rather than biking somewhere together. The distance-weighted average occupancy for US car travel is 1.6, so this is no minor effect (N.B. average occupancy for commuters is just 1.1, which seems awfully low; I hope someone figures out how to make carpooling more common, maybe with something like Uber Commute). If we adjust the emissions for car travel down by a factor of 1.6, the intensity of average cars is ~190 gCO₂e/km, on the order of one Paleo cyclist! A Prius has an occupancy-adjusted intensity of just 100 gCO₂e/km, lower than a Paleo cyclist! Check out Table 1 for a summary of these calculations.

Mode of Transport | Energy Consumption (MJ/passenger-km) | Climate Impact (gCO₂e/passenger-km)
Biking, vegan diet | 0.11 | 40
Biking, avg US diet | 0.11 | 65
Prius, double occupancy | 0.85 | 75
Biking, paleo diet | 0.11 | 135
Prius, single occupancy | 1.7 | 150
Typical (25 mpg) US car, single occupancy | 3.3 | 300

Table 1: Rough estimates of energy use and climate impact of different kinds of transportation.
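The estimates in Table 1 follow from the inputs above; here is a sketch that roughly reproduces them (values differ slightly from the table because the post rounds some intermediate numbers):

```python
# Rough reproduction of the per-passenger-km emission estimates in Table 1.

GASOLINE_KGCO2E_PER_LITER = 3.2   # lifecycle emissions of gasoline
BIKE_KCAL_PER_KM = 25             # net calories above basal metabolism

diets = {"vegan": 1.6, "avg US": 2.6, "paleo": 5.4}   # gCO2e per kcal
cars = {"Prius": 5.0, "typical US car": 9.5}          # liters per 100 km

for diet, intensity in diets.items():
    print(f"Biking, {diet} diet: {BIKE_KCAL_PER_KM * intensity:.0f} gCO2e/passenger-km")

for car, liters_per_100km in cars.items():
    vehicle_g_per_km = liters_per_100km / 100 * GASOLINE_KGCO2E_PER_LITER * 1000
    for occupancy in (1, 1.6, 2):
        print(f"{car}, occupancy {occupancy}: {vehicle_g_per_km / occupancy:.0f} gCO2e/passenger-km")
```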

Land Use

We’ve seen that the climate impacts of a bike ride can be surprisingly similar to those of a car trip, depending on the car and your diet. But there are environmental considerations other than climate change, like land use. How much land do you think is required to fuel a car trip (in the form of oil extraction) relative to the land needed to fuel a bike ride (in the form of agriculture)? Unlike the greenhouse gas example, this doesn’t depend on your car or your diet; the bike ride almost certainly requires more land.

Estimates of land use for fossil fuel extraction vary widely, but in general they are at least 3,000 liters of oil per year for every square meter of land occupied for oil extraction, and some estimates go as high as 300,000 liters per m^2-yr for conventional oil production. These figures are equivalent to about 120 to 12,000 GJ of energy per m^2-yr, or 10 to 1000 W/m^2 [vii]. Food production per unit of land is much lower than this range. Cereal grains are at the upper end of calories per unit land out of the various types of food, but we only produce around 7500 kg of grains per hectare-year, according to the World Bank. Using the calorie density of grains (~3.6 kcal/g), that’s only 120 GJ/hectare-yr, or .4 W/m^2, at least 25 times less than the power density of fossil fuel extraction! Similar estimates for other types of food are substantially lower – fruits and vegetables are around .25 and .1 W/m^2, respectively, and chicken and beef are around .04 W/m^2 and .02 W/m^2 when accounting for the land to house the animals and grow their food [viii]. Any real diet, then, will have an average no higher than .4 W/m^2 (grain-only diet), and likely closer to .1 W/m^2, going lower with more animal product consumption.
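The food power densities above follow from yield and calorie density; here is a minimal sketch of the conversion, using the World Bank grain figure quoted above:

```python
# Converting a food yield into a power density (W per m2 of land).

SECONDS_PER_YEAR = 3.156e7
JOULES_PER_KCAL = 4184

def food_power_density(kg_per_m2_year, kcal_per_kg):
    """Food energy produced per unit of land area, in W/m2."""
    return kg_per_m2_year * kcal_per_kg * JOULES_PER_KCAL / SECONDS_PER_YEAR

# 7500 kg of grain per hectare-year = 0.75 kg/m2-yr at ~3600 kcal/kg
print(f"{food_power_density(0.75, 3600):.2f} W/m2")   # ~0.36 W/m2, i.e. roughly 0.4
```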

So even though a car ride takes 15-30 times more energy, its fuel source uses at least 25-100 times less land per unit energy, giving driving a lower land footprint than biking, even when comparing a biking vegan to a standard American car.

Of course, there are differences in how fossil fuel extraction and agriculture affect the land they occupy. Images of the tar sands may seem a lot worse than what we think of when we think of farms, but the most land-efficient farms may not really be more attractive (Figure 1). Less land-efficient farms with pasture-roaming animals look gentler on the land, but by taking up more land they also have a harsher impact on large species that they displace (trees, deer, wolves, bears…). There’s no clear-cut answer to which is preferable, but it is clear that fossil fuel extraction uses little land per unit of energy extracted, and that powering our lives with alternative fuels (especially fuels derived from agriculture, like biofuels) will almost surely entail an increase in human appropriation of land.

Figure 1: Do fossil fuels or agriculture have a harsher impact on the land they occupy? Tar sands image from this source, cattle image from this source.

Discussion and Conclusion

There are two important qualifications about the calculations above (besides the fact that the uncertainties are large). The first is that we found biking to have a surprisingly similar impact to driving on a per kilometer basis. But of course, cars enable you to travel much faster and much farther than bikes, so someone with a bike and no car almost surely has a much lower impact by virtue of covering a lot less distance. When I owned a car in rural Virginia I drove 20,000 km/yr, and now that I only own a bike in urban Cambridge, Massachusetts I bike about 1,500 km/yr. And there are lots of other impacts we neglect, like the energy to manufacture cars, or air pollution, or the danger car driving imposes on society.

The second qualification is something I mentioned earlier, the trickiness of equating greenhouse gases. We used the “Global Warming Potential” which adds up the radiative forcing for gases over some time horizon and compares to the sum for CO₂ over that same horizon (we used the standard 100 year horizon). But this completely ignores the radiative forcing after that time horizon; this is important because CO₂ stays in the atmosphere for millennia, while the main other gases we counted, N₂O and CH₄, have lifetimes around 100 and 10 years, respectively. So our equivalence method captured almost all of the climate impacts of N₂O and CH₄ but ignored hundreds of years of CO₂’s influence after this century. There are reasons to think we should care more about short-term warming, since we’ll have an easier time adapting to slower changes farther in the future, but it seems odd to completely neglect everything more than 100 years away. This is a long-contested topic (e.g. see Shoemaker 2013), involving value judgements about the present and distant future, with no clear right answer; keep this in mind when you read calculations of CO₂e that seem very cut-and-dry.

But these qualifications aside, we’ve seen that agricultural impacts on the environment really matter. We didn’t come to quite as strong a conclusion as Michael Pollan once did, but we came pretty close: biking has a surprisingly similar marginal impact to driving on a per-kilometer basis, and depending on your diet it can cause similar greenhouse gas emissions and more land use. This points to some of the important lessons from our upcoming online course: that there’s no free lunch when it comes to issues of energy and environment, and that it’s really useful to be able to make quantitative estimates of environmental impacts. Our analysis certainly doesn’t prove that you shouldn’t do more biking instead of driving, but it does help us see more clearly the environmental impacts of making the switch.

SIGN UP FOR THE HARVARDX COURSE: ES137

References

Berners-Lee et al. (2012). The relative greenhouse gas impacts of realistic dietary choices. Energy Policy.

Environmental Working Group (2011). Meat Eater’s Guide to Climate Change and Health. http://static.ewg.org/reports/2011/meateaters/pdf/methodology_ewg_meat_e…

Fthenakis (2009). Land use and electricity generation: A life-cycle analysis. Renewable and Sustainable Energy Reviews, 13, 1465-1474.

Gerbens-Leenes (2002). A method to determine land use requirements relating to food consumption patterns. Agriculture, Ecosystems, and Environment.

Geus et al. (2006). Determining the intensity and energy expenditure during commuter cycling. British Journal of Sports Medicine.

Scarborough (2014). Dietary GHG emissions of meat eaters, fish eaters, vegetarians, and vegans in the UK. Climatic Change, 125(2), 179-192. http://link.springer.com/article/10.1007/s10584-014-1169-1

Shoemaker (2013). What Role for Short-Lived Climate Pollutants in Mitigation Policy? Science, 342, 1323-1324.

Smil (2015). Power Density: A Key to Understanding Energy Sources and Uses. MIT Press, Cambridge, MA.

Swain et al. (1987). Influence of body size on oxygen consumption during bicycling. Journal of Applied Physiology.

Vieux et al. (2012). Greenhouse gas emissions of self-selected individual diets in France. Ecological Economics.

Weber (2008). Food Miles and the Relative Climate Impacts of Food Choices in the United States.

Wilson (2013). The carbon foodprint of 5 diets compared. ShrinkThatFootprint.com, accessed May 15, 2016. http://shrinkthatfootprint.com/food-carbon-footprint-diet

Appendix A

My brief notes on how I arrived at the bicycling calorie expenditure and the carbon intensity of diets are below (details of the Paleo diet estimate are in Appendix B).

  • kcal/km for biking
    • My original estimate of 50 was definitely too high
    • Hard to get a definitive answer, I’m going to go with 25, see details below
    • Most measures are for total calories burned while riding, need to be careful to subtract out calories for “basal” metabolism (calories burned for normal bodily functions, which a person would have burned anyway even sitting still) to get the additional or “net” kcal burned due to biking
      • I assumed 2600 kcal/day as the basal rate, and already subtracted from numbers below
      • Doing this means we don’t need to look at the calories burned by a person driving
    • Popular online tools like bicycling.com, etc seem to suggest a bit over 25 kcal/km on net for 75 kg individual (US avg for adults) biking 12-15 mph
      • Hard to say what’s the right speed to use, but commuters seem to be around 12-13 mph (see below) and recreational cyclists can be substantially higher
    • The only academic studies I found during a short search measured oxygen consumption during a ride. One I converted to kcal burned by multiplying by 4.76 kcal/liter of oxygen, the other did the conversion themselves. This method will probably be an underestimate b/c it misses anaerobic expenditure and excess post exercise oxygen consumption
      • Swain 1987 studied “experienced” cyclists with racing-style bikes on level ground, so probably a substantial underestimate for our purposes; found ~17 kcal/km on net for riders around 75 kg and 12.5 mph
      • Geus 2006 used a similar method and found ~22 kcal/km on net for commuters weighing ~75 kg and going ~12.5 mph on their actual daily commutes and on their actual bikes
    • I decided to go with 25 kcal/km on net since the academic studies’ methods likely underestimate a bit, but I’m not thrilled with the available data; I think ~18-30 kcal/km is the largest justifiable range, depending on speed, type of bike, terrain, and how much oxygen consumption methods underestimate
  • gCO2e/kcal for diets
    • I originally estimated 2.6 g CO2e/kcal for avg american and 1.6 g for vegan based on two sources (one for total emissions due to diet and one for calories); I made my own estimate for paleo diet based on paleo meal plans and LCA data on the foods therein
    • I divided estimate of total emissions due to diet by total calories consumed, but the estimate of emissions included food waste whereas my estimate of calories consumed did not; thus I overestimated gCO2e/kcal using my sources, by a factor of 3700 kcal [food supply] / 2600 kcal [food consumed]
    • However, after looking more carefully at more rigorous academic sources I think if anything my original estimate might have been a bit low
      • Using emissions and calorie information from Scarborough 2014 we get ~ 2.8 g CO2e/kcal for average person in UK and 1.5 g CO2e/kcal for vegans
      • Using Vieux 2012 (and adjusting for food waste which they ignore, with factor of 3700/2600) we get 2.7 g CO2e/kcal for average person in France
      • Berners-lee 2012 gives ~2.1 g CO2e/kcal for average UK’er and 1.5 g for vegans
      • Scarborough and Vieux ignore post-sale factors (transport of food to home, refrigeration, cooking…); Vieux ignores waste but I adjusted; they all ignore land use change; Berners-lee seems to ignore cooking at first glance
    • Thus I feel pretty comfortable leaving my original estimates for average Americans and vegans alone; my original numbers are close to the averages from those studies above which are probably underestimates
    • My original paleo estimate didn’t account for food waste, so I adjusted upwards (see Appendix B)

Appendix B

Daily food intake of estimated paleo diet adapted from http://paleoleap.com/paleo-meal-plan/. Meat consumption assumed to be ⅓ beef, rest from chicken, fish, and pork. Vegetables assumed to include some high-calorie vegetables like butternut squash. GHG intensities of food from Wilson (2013), Weber (2008), EWG (2011). The total diet related impact is higher than the “direct impact” calculated here due to food waste (for every kcal consumed there’s a bit of food waste); the USDA estimates average US food intake to be about 2600 kcal/day with 1100 kcal/day of additional food waste, so we estimate the total dietary impact of a paleo diet to be 3.8 gCO2e/kcal [direct impact] * 3700/2600 = 5.4 g CO2e/kcal of food consumed. This seems high relative to other diets, but it does involve dramatically more meat and egg consumption than other diets. We use .5 kg/day here which might even be low given that many Paleo meal plans call for meat or eggs at almost all meals and that the US average is already .25 kg/day; .5 kg/day is also much higher than the “high meat consumption” diet from Scarborough 2014, which included all diets over .1 kg/day and had an emissions intensity of 3.6 g CO2e/kcal.

Food | Servings | Weight (kg) | Caloric Intensity (kcal/kg) | Total Calories (kcal) | GHG intensity (gCO₂e/kcal) | GHG Impact (kgCO₂e)
Meat | 3 | 0.5 | 2500 | 1250 | 6 | 7.5
Vegetables | 5 | 0.9 | 250 | 220 | 3 | 0.66
Oils | 1 | 0.08 | 890 | 750 | 1 | 0.75
Nuts | 2 | 0.35 | 6000 | 200 | 2.5 | 0.5
Fruit | 1 | 0.2 | 900 | 180 | 4 | 0.7
Totals | | | | 2600 | | 10
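A quick sketch of the arithmetic that turns the table’s totals into the dietary intensity used in the main post, including the food-waste adjustment described above:

```python
# Appendix B arithmetic: direct dietary intensity plus the food-waste adjustment.

direct_kg_co2e_per_day = 10.0    # total GHG impact from the table above
kcal_consumed_per_day = 2600     # total calories from the table above

direct_intensity = direct_kg_co2e_per_day * 1000 / kcal_consumed_per_day  # ~3.8 gCO2e/kcal
waste_factor = 3700 / 2600                                                # food supplied / food eaten
print(f"{direct_intensity:.1f} -> {direct_intensity * waste_factor:.1f} gCO2e/kcal")
```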

Endnotes

[i] It’s tricky to make a good estimate of these numbers; see Appendix A for my terse notes on how I got to my estimates. For the sake of this simple blog post I think I’m now satisfied with my estimates, but there’s room for disagreement. Let me know if you see anything egregiously wrong, or if you’ve got a substantially better, more thoroughly researched set of estimates for me to plug in.

[ii] Please note that this is only an analysis of the extra calories burned by the bike ride vs the gasoline burned by the car; it doesn’t include analysis of the energy used to make cars, or air pollution, or any of many other factors.

[iii] See Appendix A.

[iv] See Appendix A.

[v] See Appendix B.

[vi] See Appendix A.

[vii] For example, see Fthenakis (2009) and Smil (2015).

[viii] See Gerbens-Leenes (2002); for beef they estimate 21 m²-yr/kg, or about 0.05 kg/m²-yr; using 2500 kcal/kg, that’s about 550 kJ/m²-yr, or 0.02 W/m². Their estimates for vegetables and fruits (0.3 m²-yr/kg and 0.5 m²-yr/kg) can similarly be converted to about 0.1 and 0.25 W/m², respectively. Their estimate for grains (1.3 m²-yr/kg) converts to 0.37 W/m², very close to our initial estimate using World Bank data.

Original post on Harvard.edu

LED Salad and Jevons’ Paradox

By David Keith with huge help from Daniel Thorpe | May 12, 2016

How a physics breakthrough that enables huge energy savings may bite back when it meets the locavore 

N.B. The impatient may skip to the policy section, “Jevons’ salad,” below.

Photo credit: Kirsten Anderson & David Keith

In January my wife and I bought a small automated hydroponic garden to grow salad greens in our apartment. We love the fresh lettuce, the indoor plants, and even the bright light, which helps evening work in the home office.

Being an energy nerd I got to thinking: what’s the energy and carbon cost of this lettuce compared to store-bought? And, can I illustrate the lessons we aim to teach in our upcoming online course, Energy Within Environmental Constraints?

And also, what lessons can I draw about the links between energy efficiency and demand? The garden is enabled by super-efficient LED lighting, but is that efficiency reducing or increasing energy use?

First, a dive into the specifics, then on to Jevons’ ‘paradox’.

We bought an Aerogarden for $280: a complete system with 45W of LED lights in a specific mix of colors that helps drive plant growth, plus a computer-controlled irrigation pump.

The grow lights are on 16 hours a day and the yield is about 20 g of lettuce (a small serving) every three days. What’s the energy and carbon intensity?

  • Input: 45 W × 16 hr/day × 3 days = 2.2 kWh of electricity
  • Output: 20 g of lettuce
  • Energy Intensity: 110 kWh/kg (electricity)

This is the input of electrical energy, but we don’t mine electricity. So we need to compute the amount of primary energy needed to generate the electricity, which is approximately 3 times as much [i]. This brings our total to 330 kWh = 1.2 GJ = 1,200 MJ of primary energy required to produce 1 kg of lettuce. That’s a big number. The energy content of wood is about 15-20 MJ/kg.
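Here is the same calculation written out as a short sketch (the 3.6 MJ/kWh conversion and the 3x primary-energy multiplier are the values used above):

```python
# Countertop-lettuce energy intensity, reproducing the numbers above.

power_w = 45              # LED lights plus pump
hours_per_day = 16
days_per_harvest = 3
lettuce_kg = 0.020        # ~20 g per harvest

electricity_kwh = power_w * hours_per_day * days_per_harvest / 1000   # ~2.2 kWh
intensity_kwh_per_kg = electricity_kwh / lettuce_kg                   # ~110 kWh/kg
primary_mj_per_kg = intensity_kwh_per_kg * 3 * 3.6                    # x3 primary energy, 3.6 MJ per kWh

print(f"{electricity_kwh:.1f} kWh per harvest, {intensity_kwh_per_kg:.0f} kWh/kg, "
      f"{primary_mj_per_kg:.0f} MJ/kg primary")
```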

It’s no easy task to estimate the lifecycle energy inputs into traditionally-grown lettuce, including energy to plant seeds, water and fertilize, harvest, and transport over large distances, but a variety of academic estimates have converged on a range of 4 to 14 MJ of primary energy per kilogram of lettuce, two orders of magnitude lower than our supremely local greens [ii]. Lettuce grown in heated greenhouses at northern latitudes has much higher lifecycle energy cost than field lettuce, around 200 MJ/kg, but still five times smaller than my home lettuce. Trade-offs between eating ‘local’ and energy and environmental impact are widespread—the energy cost of transporting food is (typically) small compared to the energy used in agriculture.

How about the greenhouse gas emissions? Conventionally grown lettuce involves fertilizers and land disturbance, leading to emissions of potent nitrous oxide and methane, so perhaps our lettuce will compare more favorably. We can use the U.S. average greenhouse gas intensity of electricity generation, around 0.5 kg CO₂e/kWh [iii] to estimate our lettuce’s impact:

  • Carbon intensity: 110 kWh/kg × 0.5 kg CO₂e/kWh = 55 kg CO₂e/kg lettuce

Estimating the lifecycle greenhouse gas impact of conventional lettuce once again is not trivial, but the literature points to a range of 1-3 kg CO₂e/kg lettuce [iv], more than an order of magnitude less than our home-grown lettuce.
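The carbon number is the same kind of one-liner; a sketch, treating every kWh as US-average grid electricity (a marginal-emissions factor would give a somewhat different answer):

    elec_intensity = 110       # kWh of electricity per kg of lettuce (from above)
    grid_co2e = 0.5            # kg CO2e per kWh, US average (endnote [iii])
    home_lettuce = elec_intensity * grid_co2e     # ~55 kg CO2e per kg of lettuce
    field_lettuce_high = 3                        # top of the literature range (endnote [iv])
    print(home_lettuce, home_lettuce / field_lettuce_high)   # 55.0, ~18x the high end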

Is our lettuce cheaper than store-bought?  Let’s compute the amortized cost of my lettuce using simple levelized cost calculations, which are a centerpiece of Energy Within Environmental Constraints.

  • Variable cost (electricity) = 110 kWh/kg × $0.2/kWh = $22/kg lettuce, ignoring other variable costs like water, seeds, and nutrients.
  • Fixed cost = CAPEX × CCF/output = $280 × 0.15/yr / (0.02 kg × 100 harvests/yr) ≈ $20/kg, taking a capital charge factor (CCF) of 15%/yr.
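A sketch of the same levelized-cost arithmetic, using the 15%/yr CCF and ~100 harvests/yr from the bullets above (water, seeds, and nutrients still ignored):

    # Levelized cost of countertop lettuce = variable cost + amortized fixed cost
    elec_intensity = 110          # kWh/kg
    power_price = 0.20            # $/kWh
    variable_cost = elec_intensity * power_price               # ~$22/kg

    capex = 280.0                 # $ for the Aerogarden
    ccf = 0.15                    # capital charge factor, per year
    annual_output_kg = 0.020 * 100                              # ~2 kg of lettuce per year
    fixed_cost = capex * ccf / annual_output_kg                 # ~$21/kg

    print(f"levelized cost ~ ${variable_cost + fixed_cost:.0f}/kg")   # ~$43/kg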

The levelized cost of my lettuce then is at least $40/kg (counting electricity and capital only), much higher than store-bought lettuce at around $5-8/kg (the electricity input for my lettuce is higher by itself). Here’s a summary of the comparisons so far:

                                          Lettuce @ home    Field grown
Energy intensity (MJ/kg)                            1000             10
Climate forcing intensity (kg CO₂e/kg)                55            1-3
Cost ($/kg)                                           40            5-8

Another fun comparison is to compute the mass of fossil fuels needed to grow an equal mass of lettuce. It takes around 23 kg of fossil fuels for us to produce 1 kg of lettuce, plus some contributions from nuclear, hydro, wind and solar [v]. For every 20g serving of lettuce we harvest from our countertop, someone dug up and burned roughly 500g of the rotted remains of primeval swamp goo, turned the heat into electric power, and transmitted that power to us across the greatest engineering achievement of the 20th century, the modern electric grid.
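The fossil-fuel mass in that comparison comes straight from the generation mix and heating values in endnote [v]; a sketch:

    primary_gj_per_kg = 1.2                         # GJ of primary energy per kg of lettuce
    coal_share, gas_share = 0.38, 0.26              # shares of US generation (endnote [v])
    coal_gj_per_kg, gas_gj_per_kg = 0.027, 0.052    # heating values, GJ per kg of fuel

    fuel_kg = (primary_gj_per_kg * coal_share / coal_gj_per_kg
               + primary_gj_per_kg * gas_share / gas_gj_per_kg)   # ~23 kg of coal+gas
    print(fuel_kg, fuel_kg * 0.020 * 1000)   # ~23 kg per kg of lettuce, ~460 g per serving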

Finally, just for fun, we can think about the overall energy efficiency. Lettuce is a food, and while we eat lettuce more for nutrients or pleasure than for calories, it’s still relevant to compare the energy used to make the lettuce with the energy content of the lettuce to derive an overall dimensionless energy efficiency. (Physicists and economists agree on the glory of dimensionless numbers.) The energy efficiency of the electricity-to-calories conversion can be evaluated as follows:

  • Lettuce is about 0.13 kcal/g, so a 20 g serving holds roughly 2.6 kcal ≈ 3 Wh; dividing by the 2.2 kWh of electricity gives about 0.15%;
  • If we look at the efficiency of primary-energy-to-calories, then it’s a factor of 3 worse, about 0.05%.
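The unit juggling (kcal to kWh) is where it’s easy to slip; a quick check:

    serving_g = 20
    kcal_per_g = 0.13
    food_kwh = serving_g * kcal_per_g * 4184 / 3.6e6    # ~0.003 kWh of food energy

    electricity_kwh = 2.2
    print(food_kwh / electricity_kwh)        # ~0.0014, i.e. the ~0.15% quoted above
    print(food_kwh / (3 * electricity_kwh))  # ~0.0005 on a primary-energy basis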

To be clear, this isn’t just a problem of lettuce. Converting primary energy into food takes energy. If I could only figure a way to plug myself in!

Jevons’ salad — implications for energy policy

White LEDs are amazingly efficient. When you compare luminaires (nerdy word for a complete electric light unit) with similarly acceptable color distribution, the luminous efficacy of commercial white LEDs can be 100 lm/W (lumens per watt), which is a factor of 7 better than typical incandescent bulbs.

When the Nobel committee awarded the Nobel Prize in physics to Shuji Nakamura for invention of the white LED, they suggested that the invention might dramatically decrease energy use for lighting: “With 20 percent of the world’s electricity used for lighting, it’s been calculated that optimal use of LED lighting could reduce this to 4 percent.”

N.B., the Nakamura breakthrough was a method of making gallium nitride (GaN) blue LEDs and solid-state lasers, with Blu-ray disks as an early application. Most “white” LEDs use a blue LED source to stimulate a phosphorescent material that makes “white light”. Yet another important energy technology for which the big initial innovation was not driven by concerns about energy.

Michael Shellenberger and Ted Nordhaus of the Breakthrough Institute responded with a piece in the New York Times suggesting that the actual demand reduction might be much smaller than expected, saying “it would be a mistake to assume that LEDs will significantly reduce overall energy consumption.” They argue that, since the 1800s, as more efficient lighting technologies have been invented, demand “would rise for these new technologies and increase as new ways were found to use them [emphasis added]. This led to more overall energy consumption.”

This harkens back to the paradox of William Jevons, who in 1865 wrote about the possible exhaustion of English coal resources. Some scholars of the time argued that the ever-increasing efficiency of coal-burning technologies (e.g., Watt’s engine) would drive demand down, and prevent the country from ever running out. Jevons showed that efficiency and total coal use had increased together over time in England, implying that increasing efficiency may not be a good way to reduce energy demand, and may even increase it. Since then the topic has been hotly contested. Some point to the historical correlations and argue that energy efficiency may not help us keep demand down; others point to the growing, robust body of economics literature on “the rebound effect” (discussed shortly), which convincingly shows that when the efficiency of a technology improves, people tend to use it more, but not enough to erase all of the energy savings.

Gernot Wagner, then at EDF (now working with me, but that’s another story), worked with a group of environmental economists and energy experts who responded in a discussion on the New York Times’ website, saying: “But what about the claim that this efficiency improvement will only lead to more energy use? This claim is simply not justified.”

What can we learn from my experience with LEDs at home? Can it illuminate the debate about Jevons’ Paradox?

First, I used LEDs to replace many of the incandescent bulbs in our apartment.

  • I replaced 7 × 60W incandescent bulbs, which run for about 1 hr/day each for a total of 400 Wh/day; 7 × 8W LED bulbs for 1 hr/day is just 60 Wh/day.
  • Savings: ~350 Wh/day, or about 130 kWh/year, which is about $25/year at a $0.20/kWh power price.

Little of this saving is lost to rebound (definitions below). We don’t pay attention to the direct cost of leaving the lights on, but I do pay more attention to turning off our remaining incandescent bulbs than the LEDs, so there is some behavioral rebound.

But then I bought the LED garden and it uses 45W × 16 hrs = 720 Wh/day.
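Putting the retrofit savings and the new LED-enabled demand side by side (using the household numbers above):

    # Daily household lighting energy: retrofit savings vs the new LED garden
    incandescent_wh = 7 * 60 * 1        # 7 bulbs x 60 W x ~1 hr/day
    led_wh = 7 * 8 * 1                  # same sockets with 8 W LEDs
    retrofit_savings_wh = incandescent_wh - led_wh    # ~360 Wh/day saved

    aerogarden_wh = 45 * 16             # 45 W of grow lights x 16 hr/day
    print(retrofit_savings_wh, aerogarden_wh)   # ~360 Wh/day saved vs 720 Wh/day of new demand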

Of course, maybe the LED garden still represents energy savings. After all, if I had bought an old-fashioned incandescent Aerogarden it would have used 7 times as much power! So LEDs saved energy after all.

No. Incandescent bulbs are so inefficient, and they emit so much of their energy in the near-infrared, which plants don’t use, that if one used them with sufficient brightness to make plants grow as fast as in this garden, they would overheat the plants and kill them. Think Aero-‘desiccator’ or Aero-‘oven’, not Aerogarden.

The Aerogarden and its competitors are a new class of product, a product that would not have existed without new high-efficiency lighting technologies. Thus, a technology that raised the technical efficiency of an energy conversion process also opened up a new source of energy demand.

It’s not just my little indoor garden. There’s rapid growth in commercial farming using LED lighting to produce specialty vegetables for high-value markets. And this isn’t just a few little startups: Philips is developing special LEDs for indoor farming. GE is doing the same, and they have partnered with a company in Japan with multiple lettuce-growing factories. The largest in Japan grows 10,000 lettuce heads/day. One white paper from this industry estimates that the market can grow to $15bn per year.

What’s driving this market? I don’t know, but I suspect it’s demand from affluent consumers, possibly led by high-end restaurants, combined with a desire to eat local produce. If so, then environmentally conscious locavores have created a monstrous blowback.

Now, let’s return to the feud over rebound. The environmental economists who pooh-pooh the idea that new high-efficiency technologies can increase energy demand focus on studies of three effects:

  1. The direct rebound effect — which occurs when an increase in efficiency lowers operating costs and, in turn, causes an increase in consumption. E.g., more efficient car -> lower cost per mile -> drive more
  2. The indirect rebound effect — due to the energy impacts of spending the money saved by the efficiency improvement. E.g., more efficient car -> more money at end of month -> more spending on books (low energy impact) or cheap vacations (high impact).
  3. The macro economic rebound, which itself comes in two forms:
    1. the macroeconomic price effect; e.g., more efficient cars -> lower gasoline consumption -> lower gasoline prices -> increased consumption of gasoline elsewhere in the economy;
    2. the macroeconomic growth effect; e.g. more efficient cars -> same technology invented to make cars more efficient is used elsewhere in the economy -> increased productivity leads to economic growth.

The environmental economists are very likely right that the first three rebound effects (direct, indirect, and the macroeconomic price effect) are typically large enough to claw back a modest share of the energy savings from an efficiency improvement, but not large enough to erase them. Studies put the sum of these three rebound effects at, say, 10-40% in developed countries (see this recent review of the issue by Gernot and colleagues). This is important, and makes the energy reductions from energy efficiency improvements significantly less than one might expect, but it’s not ‘backfire,’ which would require rebound >100%.
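As a toy illustration of what those percentages mean (rebound here is just a fraction applied to the naive engineering savings; any value over 1.0 would be ‘backfire’):

    def realized_savings(engineering_savings_kwh, rebound):
        """Energy actually saved after behavioral and market responses."""
        return engineering_savings_kwh * (1.0 - rebound)

    naive = 130   # kWh/yr, the bulb-swap savings from above
    for rebound in (0.10, 0.40, 1.10):
        print(rebound, realized_savings(naive, rebound))   # 117, 78, and -13 kWh/yr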

But notice that my LED garden — or its commercial cousins — is not covered by any of the economists’ rebound effects except (arguably) the last one, the macroeconomic growth effect.

New high-efficiency technologies, which may reduce demand (albeit with some rebound) in one class of applications (replacing incandescent lights in my living room with LED lights), end up creating new categories of products that would not otherwise have existed (the Aerogarden).  The compact size of LEDs, their low heat production, and low variable cost all can open up new uses not imagined before, from machine vision to always-on displays and signage, to more extensive outdoor and street lighting, and more.

This is the root of the concern that Shellenberger and Nordhaus were raising. The specific arguments and numbers given by Wagner et al. are accurate, but they don’t address the case that worries me here.

It’s important to note that efficiency improvements, and the unintended consequences we explored here, enable huge gains in human welfare. They don’t just enable my family’s expensive lettuce – they are also bringing solar-powered lighting to the world’s poorest, giving a new generation a chance to study at night for a better life. (N.B., a colleague in Calgary was an early pioneer with Light Up The World, and now there are amazing commercial ventures like http://www.m-kopa.com/).

It’s sometimes said that efficiency is a good thing “even with” rebound, but if your goal is to increase human welfare then rebound itself is a good thing – if rebound is happening, people are consuming more of the things they like, and the economy is growing.

But if your goal is to use targeted efforts to increase energy efficiency as a means to cut absolute energy use and carbon emissions, then this larger induced-technology “rebound” should give you pause.

I share Shellenberger and Nordhaus’s skepticism that improvements in energy efficiency will automatically lead to predictable reductions in energy demand. This matters for policy.  It implies that if we want to manage carbon emissions, it’s risky to rely on demand reduction from energy efficiency as part of this effort, because demand can rise in unpredictable ways and efficiency improvements can be part of the rise.  It implies that the safest way to manage carbon emissions is to decarbonize the energy supply, so we guarantee a stable climate no matter what happens with energy demand, innovation, and the continued flourishing of new ways to use energy to improve human welfare.

N.B. Our edX course is deliberately opinionated — but when my views depart from the mainstream we work to bring in other voices. Here’s a clip of energy efficiency guru Amory Lovins responding to my question about efficiency and demand.

Addendum

I mentioned before that Gernot Wagner has since left EDF and joined me at Harvard, largely in an effort to work on solar geoengineering. But that doesn’t mean we can’t have some fun with rebound, too. A few words in response to the above by Gernot:

First off, yes, the Nobel committee goofed by citing that calculation. It ignored the rebound effect. Plain and simple. Shellenberger and Nordhaus were right to point that out.

That’s a broader phenomenon: physicists and engineers — present company excluded — tend to ignore these important behavioral effects. The same goes for some environmentalists with a clear agenda of talking up their favored approach. (EDF excluded on that one, which has long argued for the most economically sensible solution and (mostly) gets it right on rebound, too. Full disclosure: I still work for EDF on a consulting basis.)

That said, my first entry into this long-standing debate was a piece in Nature arguing that “The rebound effect is overplayed.” I stand by these words, chiefly for the same reason David mentions above: Don’t focus on rebound. Trying to minimize it is the wrong target. Maximize welfare instead. David gets jollies out of his Aerogarden, as he should. I’ve seen and admired it. It’s a cool gadget.

David is also right, of course, that his Aerogarden fits squarely into the final rebound category: the macroeconomic growth effect. That’s also the category about which we know the least, largely because economists have a poor handle on why growth happens. That’s why we economists insist on a causal link. No causal link, no rebound. The Aerogarden might be closer to having such a direct link than many other examples.

Either way, back to the main conclusion: Don’t focus on rebound. Focus on welfare.

In the end, all of this is what makes Energy Within Environmental Constraints such an important course. It combines the fundamental physics and engineering with economics and policy.

SIGN UP FOR THE HARVARDX COURSE: ES137

References

Bin, Shui, and Hadi Dowlatabadi. “Consumer lifestyle approach to US energy use and the related CO2 emissions.” Energy Policy 33, no. 2 (2005): 197-208.

Brander, M., A. Sood, C. Wylie, A. Haughton, and J. Lovell. “Electricity-specific emission factors for grid electricity. Ecometrica.” Edinburgh, United Kingdom (2011).

Carlsson-Kanyama, Annika, Marianne Pipping Ekström, and Helena Shanahan. “Food and life cycle energy inputs: consequences of diet and ways to increase efficiency.” Ecological Economics 44, no. 2 (2003): 293-307.

Carlsson-Kanyama, Annika, and Mireille Faist. Energy use in the food sector: a data survey. Stockholm, Sweden: Swedish Environmental Protection Agency, 2000.

Gillingham, Kenneth, David Rapson, and Gernot Wagner. “The rebound effect and energy efficiency policy.” Review of Environmental Economics and Policy (2016).

Pelletier, Nathan, Eric Audsley, Sonja Brodt, Tara Garnett, Patrik Henriksson, Alissa Kendall, Klaas Jan Kramer, David Murphy, Thomas Nemecek, and Max Troell. “Energy intensity of agriculture and food systems.” (2011).

Stoessel, Franziska, Ronnie Juraske, Stephan Pfister, and Stefanie Hellweg. “Life cycle inventory and carbon and water FoodPrint of fruits and vegetables: Application to a Swiss retailer.” Environmental Science & Technology 46, no. 6 (2012): 3253-3262.

Weber, Christopher L., and H. Scott Matthews. “Food-miles and the relative climate impacts of food choices in the United States.” Environmental Science & Technology 42, no. 10 (2008): 3508-3513.

WWF, How Low Can We Go?: An Assessment of Greenhouse Emissions from the UK Food System and the Scope for Reduction by 2050 (2010).

Endnotes

[i] LLNL says that, in 2015, 4*10^10 GJ of primary energy flowed into the electricity sector, and 3.7*10^9 MWh of electricity flowed out (for almost exactly 33% efficiency)

[ii] Carlsson-Kanyama (2003) finds 5 MJ/kg for cabbage -> 1.4 kWh/kg. Bin (2005) finds 4.4 kWh/kg for vegetables [see table 2, EIO-LCA coefficients]. Carlsson-Kanyama (2000) finds 1 kWh/kg for lettuce for open field, 45 kWh/kg for greenhouse, including production, storage, transportation. Pelletier (2011) finds 1.4 kWh/kg for vegetables.

[iii] U.S. EPA says 2.05*10^12 kg of CO2e from electricity generation in U.S. in 2014, EIA says 4.1*10^12 kWh generated in 2014, for 0.5 kg CO2e/kWh.

[iv] Stoessel (2012) finds 3 kg CO2e/kg for lettuce in LCA in Switzerland, growing in heated greenhouse increases by factor of 5 to 10. Weber (2008) finds ~2 kg CO2e/kg for fruits and vegetables in LCA in US [see their fig 2], supplement table SI-3 indicates half of impact from N2O, 5% from CH4, rest from CO2. WWF (2010) finds ~1 kg CO2e/kg for lettuce produced in the UK, but that only goes up to regional distribution center and doesn’t count distribution to grocery stores, energy for consumers to go buy it, etc. Central estimate -> ~2 kg CO2e/kg for open field lettuce, 10-20kg CO2e/kg lettuce for heated greenhouse.

[v] 330 kWh = 1.2 GJ of primary energy for 1 kg of lettuce; LLNL says 38% of US electricity comes from coal and 26% from gas; we assume the following energy densities: 0.027 GJ/kg for coal, 0.052 GJ/kg for gas. Mass of coal needed = 1.2 GJ × 0.38 / 0.027 GJ/kg = 17 kg; mass of natural gas needed = 1.2 GJ × 0.26 / 0.052 GJ/kg = 6 kg. So ~23 kg of coal and gas are needed for 1 kg of lettuce; for every 20 g serving, ~460 g of fossil fuels (~500 g).

Original post on Harvard.edu

Cheap Solar Power

By David Keith | April 19, 2016

Background

Over the last few years solar PV has got cheap. Cheap enough to start impacting some commodity energy markets today. Cheap enough that with continued progress, but no breakthroughs, it might alter the global outlook for energy supply within a decade.

I have long been skeptical of solar hype. In 2008 we did an expert judgment exercise suggesting only even odds of getting to module prices of 0.3 $/W in 2030. In 2011 we did some analysis showing how the power-law learning curve for modules appeared to be flattening. That analysis was done at the end of a decade that saw big increases in installed capacity, with little corresponding change in module prices. The solar market was driven by incentives, like tax credits and feed-in tariffs, that drove rooftop solar systems which are (arguably) little more than green bling for the wealthy. I worried that deployment incentives (a global total amounting to many hundreds of billions of dollars over the past decade) would simply lock in the current technologies and do little to drive the breakthroughs that were needed to get solar cheap enough to compete for commodity power.

I was wrong.

Current Costs

Facts have changed. Just a few years ago the cost for industrial systems was twice what it is today. A host of little innovations have driven costs down. Module prices are now around 0.5 $/W. The unsubsidized electricity cost from industrial-scale solar PV in the most favorable locations is now well below 40 $/MWhr and could very easily be below 20 $/MWhr by 2020. Compared to other new sources of supply, this would be the cheapest electricity on the planet. Let’s look at how that cost is calculated.

The current state of play is captured in three facts:

  • The capital cost of industrial (>50 MW) solar PV installations with North-South axis trackers is now about 1,500 $/kW, and contracts for some industrial systems without trackers are getting down to 1,000 $/kW.
  • Capacity factors of industrial systems with trackers are reaching just over 30% at the best sites in the US.
  • Real world efficiency for commercial PV systems now exceeds 20%.

Let’s now proceed on the assumption that these facts are correct. What does this mean for electricity supply cost?

Assume that the average Capital Charge Factor (CCF) is 6%/year, a low but not infeasible value, as the risk premium for these facilities has decreased dramatically. (CCF is the total annualized cost of capital, across both debt and equity, divided by the capital cost.) At 1,500 $/kW, a 6%/year CCF, and a 30% capacity factor, the electricity cost is 34 $/MWhr:

1500 $/kW × 0.06/yr / (8760 hr/yr × 0.3) = 0.034 $/kWh = 34 $/MWhr

Note that this low cost of capital would only make sense for a project that was selling into a low risk market.

Now suppose costs for big systems (>100 MW) get to 1,000 $/kW by 2020 and you install them in the world’s best locations, using a North-South oriented single-axis tracker to reach a capacity factor of 34%. These trackers used to add a lot of CAPEX, but disciplined manufacturing and scale have driven the cost down to about 100 $/kW. (Here is info on the SunPower C1 tracker.)

Under these assumptions power cost is 20 $/MWhr. Two cents per kWhr!

1000 $/kW × 0.06/yr / (8760 hr/yr × 0.34) = 20 $/MWhr (or the same cost at 750 $/kW and a 26% capacity factor)

That’s 5.5 $/GJ for electricity.  (20 $/MWhr and 3.6 GJ/MWhr -> 5.5 $/GJ)
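Here is the same capital-only levelized cost formula as a small Python function (O&M and financing details beyond the single CCF are ignored, just as in the numbers above):

    def lcoe_usd_per_mwh(capex_usd_per_kw, ccf_per_yr, capacity_factor):
        """Annualized capital cost divided by annual energy output per kW of capacity."""
        annual_kwh_per_kw = 8760 * capacity_factor
        return capex_usd_per_kw * ccf_per_yr / annual_kwh_per_kw * 1000

    print(lcoe_usd_per_mwh(1500, 0.06, 0.30))   # ~34 $/MWhr: today's best industrial systems
    print(lcoe_usd_per_mwh(1000, 0.06, 0.34))   # ~20 $/MWhr: the 2020 scenario
    print(lcoe_usd_per_mwh(750, 0.06, 0.26))    # ~20 $/MWhr: the alternative combination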

Even 40 $/MWhr is very cheap power. The 2013 median price of sales to industrial customers in the US was about 60 $/MWhr.

That’s the good news. But cheap solar does not deal with the problem of solar power’s intermittency. It does not mean rooftop solar in New England makes sense. It does not magically decarbonize the world. In the long run we need low-carbon dispatchable power in the world’s demand centers. This will require some combination of gas for peaking, storage, and long distance transmission. Lots of the world’s demand is in places where insolation is at least 40% less than in the best locations, which include parts of Mexico, Southern California, the Middle East, and Australia.

But it does mean that one can now build systems in the world’s sunny locations and get very cheap power.

Implications

What does this mean?

Implication #1: In sunny places, solar will reshape commodity power markets.

Examples

  • Power prices will have a mid-day low. This is already happening in California, where it’s called the “duck curve.” It will soon be the norm in other high-sun demand centers, and the changing power price structure will shake utilities and industrial customers.
  • Wind suddenly looks less interesting. The capacity factors, global build rate, and costs for wind power have been nearly flat for five years.
  • Nuclear and CCS will have a harder time competing. For example, there are nuclear builds in the Middle East (e.g., the UAE building Korean reactors), but it will be hard for them to compete against cheap solar with gas backup.
  • Gas for load following and low-capex peaking looks ever more important.

Implication #2: There will be opportunities to bring electrical demand to where power is cheap.

One option is to look for products that have very high energy cost and are easily transportable, and to build solar farms and production together at high-insolation sites.

Four options are aluminum, ammonia, desalination, and transportation fuels. The first two are each about 1% of global primary energy demand. Niches yes, but not small. Desalination is growing fast and it’s much cheaper to store water than electricity.

If (a) most of the energy demand is from processes that can handle a diurnal cycle, and if (b) the amortized CAPEX is low compared to the energy cost, then one can deal with variability by simply cycling the production facility on and off.

For transportation fuels, if cheap solar means hydrogen prices under 10 $/GJ in sunny places, then carbon-neutral synthetic fuels look promising. It takes about 2 t-CO₂ and 40 GJ of H2 to make 1000 liters of gasoline using a process like Exxon Methanol-to-Gasoline. If we can get CO₂ from the air at 125 $/t-CO₂ then the idea of making fuels at prices of order 1 $/L looks plausible over the next few decades.
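A sketch of the feedstock arithmetic behind that last sentence, counting only the CO₂ and hydrogen inputs (conversion capital and process costs would push the total toward the 1 $/L figure):

    # Feedstock cost of air-to-fuels gasoline, per litre
    co2_t_per_1000L = 2.0        # tonnes of CO2 per 1000 L of gasoline
    h2_gj_per_1000L = 40.0       # GJ of hydrogen per 1000 L
    co2_price = 125.0            # $/t-CO2, assumed air-capture cost
    h2_price = 10.0              # $/GJ, assumed cheap-solar hydrogen

    feedstock_per_litre = (co2_t_per_1000L * co2_price
                           + h2_gj_per_1000L * h2_price) / 1000
    print(f"~${feedstock_per_litre:.2f}/L before conversion costs")   # ~$0.65/L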

The Upshot

Cheap solar is limited by intermittency and by the fact that many of the locations with the highest energy consumption don’t have good solar resources (e.g., the NE US, northern Europe, coastal China).

In the near term, a surprising amount of intermittency can be managed cost-effectively with gas turbine backup, and this works even as electricity-sector carbon emissions are pushed down to a third of today’s values. Looking further ahead, long-distance electric transmission can move solar power from good sites to demand centers and can reduce the impact of intermittency by averaging supply and demand across larger areas.

Looking even further ahead, if we want a stable climate humanity must bring net carbon emissions to zero. And, if we hope for a prosperous world with ample energy that can raise standards of living for the poor, then energy demand will more than double, growing to beyond 30 Terawatts. Climate is not the only problem: energy systems have other social and environmental costs, and the land footprint of energy is a good proxy for environmental impacts on water, landscapes, and the natural world. My view is that only two forms of energy—solar and nuclear power—can plausibly supply tens of TW without a huge environmental impact. But that’s a topic for future posts. For now, let’s celebrate the last decade’s progress towards cheap solar.

Original post on Harvard.edu