Compromising with Our Planet: We Need to Make Progress in Climate Negotiations

Nature, Science and Politics – Not the Same Thing

There is an old saying, “Nature does not negotiate.” It is based on the fact that there are fundamental truths, such as that pure water freezes at 0°C at normal atmospheric pressure. Even if you desperately believe that water should freeze at 5°C, and argue that point eloquently, with sound logic and seemingly solid (but flawed) science, you still won’t get Nature to budge. Even if you convince the overwhelming majority of water scientists that you are right, you won’t get Nature to budge. Water will continue to freeze at 0°C. Nature is not politics.

Science is one of the ways in which humans seek to understand the fundamental truths governing the behavior of Nature. In my view, it is one of the best ways we have. Science does not give us ‘truth’ about the natural world, because Nature does not have a spokesperson who is there to tell us the facts. Science is a process whereby we use our abilities to observe, to measure, to infer and deduce, all in an orderly approach that builds a model (a concept, an equation, a specific statement or description) of the rules governing the behavior of Nature. Science should have nothing to do with our beliefs and everything to do with observable information. Over time, better observations, measurements and experiments permit us to refine our models so that they come ever closer to the ‘truth’, accurate descriptions of how Nature behaves.

‘Ever closer to’ emphasizes an essential difference between science and some other ways of discerning the rules governing Nature – science is a process that leads to continual refinement and improvement of our understanding, and therefore of the accuracy of our models of reality. A scientist never knows absolutely what Nature’s rules are, but with sufficient time and investigation, his or her knowledge becomes very close to absolute, making it possible to state ‘water freezes at 0°C’ with virtually complete confidence of being correct. When scientific knowledge is less complete, as in fields that are very new or little attended to, there will be differences among particular studies and among scientists. In such cases, additional investigation is needed to resolve those differences; relying on authority (Dr. Smith is world-famous and he said…) or seeking a compromise between apparently dissenting views is not an appropriate way to reach an agreed conclusion. Science is not religion, not mysticism, and certainly not politics.

Politics is different. Politicians are expected to discuss issues from multiple viewpoints, and then reach a compromise. This ideal, which seems so rare in several modern legislatures, is believed to yield the best outcome, meaning the outcome that will come closest to fulfilling the needs and wishes of each party. This is the same approach used in the marketplace and in business negotiations. It is when questions about Nature get muddled up in politics that we get into real difficulties – because Nature does not compromise; it does not even negotiate. Nature behaves the way it does, because that is the way it behaves.

Scientists and politicians work and think differently.
Cartoon © John Ditchburn

The Special Nature of Climate Negotiations

Our growing power to disturb Nature has brought Nature and politics together like never before. Consider the Intergovernmental Panel on Climate Change. The IPCC process is inherently political because it is, in the end, a process whereby the great majority of countries around the world will reach a negotiated agreement concerning anthropogenic climate change. Or at least, the hope is that, in the end, such an agreement, a compromise, will be reached. But the IPCC process also deals with the science of climate change, and that science is our attempt to understand, or decipher, the immutable rules that govern the behavior of our planet, and thereby explain what happens when we substantially increase the concentration of greenhouse gases in the atmosphere and heat the planet up.

Our scientists do not have total understanding of how our planet works, but the understanding is growing – they are grasping at the immutable rules that govern Nature’s behavior. The IPCC process seeks to build a global consensus agreement (an agreed model) among scientists about what the rules appear to be that govern the myriad components of the very complicated response of the planet to our additions of greenhouse gases to the atmosphere. The scientists have a broad range of expertise, because the myriad components are just that – myriad: there are processes in the atmosphere, in the oceans, between the atmosphere and the landscape, in the Arctic, the Antarctic and the Tropics, biological, chemical, and physical processes affecting all parts of the biosphere, and of the planet itself. The scientists have differing skill levels, different capabilities, different training and experience, and these differences as well as the fact that they are human mean that they do not all agree on the significance of every finding, every analysis, every model projection. But, so far as I know, they approach the task before them as scientists: always attempting to decipher the rules by which nature functions. They seek to understand the rules, not find a compromise between nature’s rules and their own wishes or the wishes of communities or nations to which they belong.

The results of the scientists’ deliberations are conveyed to the politicians in the series of IPCC Assessment Reports, to provide a sound, agreed understanding of Nature and of our impacts. The underlying assumption of the IPCC process is that, when negotiating international treaties governing human activities that affect climate, the availability of sound, up-to-date scientific understanding should help achieve the best possible results. But in this transfer from scientists to politicians we move from an arena where Nature operates under certain imperfectly known rules to an arena where nations negotiate, each seeking the best outcome from its own national perspective. Suddenly the growing, but not yet perfect, certainty of the scientists is replaced by the flexibility of negotiation among competing viewpoints.

As a scientist, I find it difficult to comprehend the mental gymnastics that permit a belief that a compromise view is not only a satisfactory outcome to a political process, but usually a superior outcome to any other. When the political process involves an issue centering on the behavior of Nature, I find the assumption that the compromise view is the one to aim for mysterious in the extreme. Let me illustrate with a simple but hypothetical example. Imagine a world consisting of a number of small low-lying countries that lack fossil fuel resources, and a number of larger, wealthier, more mountainous countries that do have abundant fossil fuel reserves. Imagine that economies of all countries are powered by fossil fuels and the CO2 emitted is causing the climate to warm and sea level to rise. The scientist in me says that sea level will continue to rise unless we reduce CO2 concentrations in the atmosphere. We can do this by replacing our fossil fuels with alternatives, or by introducing technology that prevents the emitted CO2 from reaching the atmosphere or removes it once there. Now imagine that there is no technology available to sequester CO2 that is not enormously expensive. Given these circumstances, I do not see a negotiated compromise yielding a satisfactory outcome.

Eliminating the use of fossil fuels substantially impacts the economies of the mountainous countries. Putting effective CO2 sequestration in place impacts the economies of all countries, and especially those of the mountainous ones, which gain no benefit from whatever share of the cost they cover. Continuing to use fossil fuels leaves the low-lying countries disappearing beneath the seas, or forced to embark on extensive dike-building programs to survive. Any compromise between large countries not troubled by sea level rise and small nations risking total submersion – some reduction in use and some slowing in the rate of submersion – would be harmful to both groups, and likely more harmful to the smaller (and weaker) low-lying nations.

For me, the correct solution is to agree to stop emissions of CO2 and equitably share the cost of that action. This is a solution that recognizes the inability of Nature to compromise, and therefore seeks to remove the human activities that have caused the problem with sea level. But the political process among nations of differing wealth, power, and investment in the fossil fuel industries seems to me very unlikely to reach that solution, because it is precisely the wealthy and powerful nations that would have to absorb the economic loss.

It is perhaps not surprising that the climate conferences have made so little progress. All parties argue to preserve their own vested interests and it’s the powerful countries that have the biggest stake in fossil fuel and other current technologies. Image © Diplo

The real world we live in is not all that different from the one I just imagined. The ramifications of anthropogenic climate change are more far-reaching than just an effect on sea level, and nations are not easily divided into the rich and mountainous versus the weak and low-lying. But the differences among nations in the extent of the impact on current economic and way-of-life norms are considerable, and the politicians will approach the negotiations in the way they always do – seeking a compromise that preserves their own benefits to the maximum extent possible. Inevitably, the stronger countries will get more of what they want and the weaker countries will lose. Little wonder that the IPCC has been in existence since 1988 and we still do not have a substantive climate change agreement in place.

Making the Negotiations Work

On July 12th I commented on an article by social scientists Marco Grasso and Timmons Roberts in Nature Climate Change. They sought to resolve the current lack of progress on climate change by redefining the terms of the negotiation, specifically by seeking agreement first among the largest CO2 emitters, the 13 members of the Major Economies Forum, which account for over 80% of cumulative global CO2 emissions. Once these major emitters reached agreement, that agreement would be taken to the larger group. Not being a social scientist, I do not know whether this two-step process will help us move forward, but it seems logical and worth a try.

I am far less comfortable with an article by two other social scientists, Oliver Geden and Silke Beck, which appeared in the September issue of Nature Climate Change. Geden is at the German Institute for International and Security Affairs, Berlin, and Beck is at the Helmholtz Centre for Environmental Research, Leipzig. Their article is titled “Renegotiating the global climate stabilization target.” That target is the familiar 2°C maximum increase in average global temperature (or an atmospheric concentration of 450 ppm CO2) that IPCC members have agreed should not be breached. As Geden and Beck note in their introduction, that target was formulated through a dialogue between climate scientists and policy advisors, and was formally adopted by policymakers at the 2010 UN climate change conference in Cancun. They note that the concept of a temperature target had been suggested as early as the mid-1990s; certainly, the idea of 2°C as a reasonable limit on temperature rise (one that left the world with manageable climate change) was clear by the time of the 4th Assessment Report in 2007. It is also clear, as Geden and Beck note, that the chance of achieving agreement on policy in time to remain within this limit has grown dimmer with every year that has passed without substantive progress.

Geden and Beck argue that the IPCC process must not move forward with the 2°C limit in place if the world is going to fail to remain within it. In their words,

“For national governments that take climate policy seriously, it is unthinkable to continue pursuing political goals that are patently unachievable. This will make it necessary to modify the 2 °C target in some way.”

They leave no doubt that by ‘in some way’ they mean ‘upwards’. As a scientist I find this reasoning staggering, although I also see it as logical given the negotiation/compromise nature of the political process that has to take place. I also see in it a repeat of the process the UN has recently gone through with many of its Millennium Development Goals.

A message for the IPCC and for Geden and Beck: it’s OK to fail; suck it up and try harder. Changing the negotiation rules so that failure won’t happen is not exactly grappling with the issue of climate change. Image © Quotepedia

The MDGs were put in place by 2001, after nearly a decade of discussion and negotiation; they were intended to be achieved by 2015 (an interesting history is here). Some of the targets specified for the MDGs have already been reached, or will be by 2015, but a number of others will not be met. Beginning as early as 2010, discussion about the effectiveness of the MDGs focused on the failure to set achievable targets and on items of good news from many countries: some goals were being met in some countries, so let’s cheer those accomplishments, and some goals were simply too difficult (at least in some countries). Apparently it is preferable to pay attention to the items of good news, and to find reasons to explain away the bad news, instead of stating forthrightly ‘we believed these targets were achievable by 2015, but they have not all been met; we failed’.

Beginning about the same time, discussions explored what could replace the MDGs after 2015. Retaining the goals that had not been met was considered less palatable than coming up with a somewhat similar group of new Sustainable Development Goals, or SDGs. Those SDGs are now being rolled out with much fanfare. My point here is not to condemn the effort to provide a list of clear, quantitative goals to be met within an agreed timeframe, or to suggest the MDGs were a wasted effort. They were not. My point is that the negotiation process took a long time, with lots of little retreats, to come up with an initial set of MDGs that, at the time, were believed reachable; and when it became clear that many would not be reached, excuses were invented and discussion shifted to the generation of a whole new set of goals. Failure seems to be a word that is avoided at all costs.

Back to the 2°C target. Now that it is becoming apparent that the world is in grave danger of not meeting it, Geden and Beck argue that it is time to soften the target, because it would be ‘unthinkable’ to have a target which was not reached. It is worth pointing out here that even as early as 2007, when the 2°C target was first being floated, there were many environmental and climate scientists who warned that 2°C could well be too high to avoid profound climate change and the environmental and human stresses that would cause. Those working on coral reefs were quick to recommend 1.5°C, or a CO2 concentration in the atmosphere of 350 ppm, an amount of CO2 last seen in the late 1980s. A number of other environmental scientists have expressed support for this safer level. Indeed, as knowledge of the ramifying effects of warming, and of the environmental responses to them, has grown, support for a more conservative target than 2°C has grown among the scientific community. The report from Working Group II that forms part of the IPCC 5th Assessment being rolled out this year shows this growing conservatism in its many references to warming ‘between 1° and 2°C’; however, it also introduces the concept of managing risk and illustrates how mitigation decisions can increase or decrease environmental resilience, which is key to ultimate success. The report from Working Group I sticks with the 2°C limit for the most part but does discuss targets lower than this. So does the report from Working Group III.

I suggest that the scientists participating in the IPCC process during the period prior to 2010 never held that the 2°C limit was an absolute threshold below which all would be well and above which there would be climate disaster. Nothing in the 4th Assessment documents suggests this. It was the policy experts interested in creating conditions favoring an effective international negotiation process who turned it into a firm target. From the perspective of the scientists, a firm target was preferable to no target at all in a negotiation process that did not appear to be taken very seriously. While scientists have become increasingly concerned over the possibility of sudden environmental changes due to so-called tipping points, they recognize that specifying the future temperature at which a tipping point will occur is either fortune telling or educated guesswork, but certainly not science. A target of 2°C was a reasonable one, and therefore one the science community could work with.

For the 5th Assessment, the discussion of risk management by Working Group II seems to me an attempt by the science community to suggest to the policy negotiators that any specific temperature target is really shorthand for a negotiated consensus on how much risk we are prepared to tolerate that the actions taken will fail to keep the future climate manageable. The Synthesis of their report includes a figure that makes clear how the choices among alternative actions made over time can lead to a broad range of end results, from a very good world with high environmental resilience to a much poorer world with little environmental resilience. I also accept what Geden and Beck imply: that political negotiators will find it far easier to negotiate with reference to a specific temperature target than to use something as complex as a risk management strategy to achieve the best possible outcome for the planet. In their words, “even a sensible approach, such as establishing a three tiered risk-management framework … has little chance of being seriously considered before 2016.”

While the 2014 IPCC WG II report, supporting the 5th Assessment, speaks in terms of temperature targets, it also discusses risk management. Figure SPM9 shows how a series of choices over time between more difficult and less difficult actions can lead to a broad range of futures that differ in the resilience of the planetary environment. Working to achieve a challenging target should lead to better futures. Figure © IPCC

When the scientists, with their improved models of how the planet’s climate system works, continue to warn about unacceptable climate impacts if CO2 concentrations rise much above 450 ppm and temperatures more than 2°C, and even talk about targets more conservative than this, I find it ‘unthinkable’ that Geden and Beck would advocate raising the target above 2°C simply to ease the negotiation process. They are putting the convenience of political negotiators ahead of the improved understanding of the rules under which Nature operates.

The insistence by policy experts and by politicians that a climate treaty be negotiated among nations as if it were a treaty on trade or security misses one crucial point. Climate negotiations are not the same as trade negotiations. The issue centers on how much, if at all, we should restrict the ability of humanity to affect the climate of the planet. Nature, which is governed by immutable natural laws, is not at the negotiating table, and nobody else is there to speak on Nature’s behalf. We have only the scientists’ improving, but not yet perfect, understanding of Nature’s laws to guide us. That understanding should not, and cannot, be treated as just one more starting position to be modified by compromise in reaching decisions. In other words, the negotiation around climate is not comparable to negotiations around trade or security, because all negotiating parties are more or less on one side of the struggle and no party represents the opposing view.

All nations would prefer to retain their freedom to manage their activities in ways that best serve their immediate economic or other needs. Few if any will be altruistic enough to voluntarily step back, curtailing practices seen as globally damaging but locally and immediately economically beneficial. The only force moving parties towards a compromise that involves any repression of national freedom to act according to immediate self-interest is an appeal to be team players on a global scale. People in leadership positions, political or economic, seldom put good sportsmanship ahead of economic self-interest, and shaming so far has not been very effective. Changing the already agreed 2°C target to something warmer, simply to ease these negotiations, seems to me a very bad step.

I suggest that the IPCC science community, having worked to build consensus on how human activities modify the planet, and having used this consensus view to build projections of likely and less likely futures under different sets of political decisions, must go one step further. The scientists must articulate, in clear, understandable language, the likely planetary consequences of every set of proposed actions that the policy and political participants come up with. Rather than the present linear process, in which IPCC scientists deliver detailed Assessment reports that then guide climate negotiations, there needs to be a parallel process in which negotiators use the latest Assessment data to guide their negotiations while teams of scientists take the alternative negotiating positions, generate projections of the likely future consequences of each, and report their results back to the negotiators. The participating scientists would retain their scientific credibility because they would not be advocating for or against any particular proposal; they would be reporting their best assessments of the future consequences of adopting each proposal under consideration.

The IPCC Assessment process already goes a long way towards achieving this. The 5th Assessment includes abundant projections of the future consequences of adopting different broad policies. What is needed now is a more effective roll-out of these projections, so that people understand the scale and timeframe of likely changes, and a mechanism for real-time interaction between an apolitical science team that runs projections and an international political negotiation group that develops and then compromises among proposals for action. As for the 2°C target: let’s keep it as a fixed goalpost while we struggle for a collective solution to this existential problem.

Why the IPCC Process Needs Some Successes Now

That we need to get moving on efforts to lower CO2 emissions is shown by another new paper, this one by Steven Davis of the University of California, Irvine and Robert Socolow of Princeton University, published in Environmental Research Letters on 26th August. It is open access, so available to everyone. What Davis and Socolow have done is to calculate the extent of committed releases of CO2 globally from the operation of electricity generation plants. By ‘committed’ releases, they mean the CO2 that will be emitted by each generator during its remaining lifetime. Committed releases for a single generator are highest the day it goes into operation and then decrease as it operates, reaching zero on the day it is decommissioned. Davis and Socolow argue that by calculating the global committed releases every year for the electricity generation industry, they provide a new way of documenting the extent of this sector’s contribution to climate change. In a world in which the amount of electricity generated from fossil fuels was really going down, global committed emissions would fall year by year.
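To make that accounting concrete, here is a minimal Python sketch of how committed emissions might be tallied for a fleet of generators. The plant names, annual emission rates and remaining lifetimes are invented purely for illustration (they are not taken from Davis and Socolow’s dataset), and the constant-emission-rate assumption is a deliberate simplification.

```python
# Sketch of "committed emissions" accounting for a hypothetical fleet of
# generators. All plants, rates and lifetimes are invented for illustration.

plants = [
    # (name, annual emissions in Mt CO2, years of operating life remaining)
    ("coal_plant_A", 10.0, 35),
    ("coal_plant_B",  8.0, 12),
    ("gas_plant_C",   3.5, 25),
]

def committed_emissions_mt(annual_mt: float, years_remaining: float) -> float:
    """CO2 a generator will emit over its remaining lifetime, assuming a
    constant annual emission rate (a deliberate simplification)."""
    return annual_mt * years_remaining

for name, rate, life in plants:
    print(f"{name}: {committed_emissions_mt(rate, life):7.1f} Mt CO2 committed")

fleet_total = sum(committed_emissions_mt(rate, life) for _, rate, life in plants)
print(f"Fleet total: {fleet_total:.1f} Mt CO2")
# A plant's commitment is largest on the day it opens and falls to zero at
# decommissioning; the fleet-wide total falls only if retirements and
# declining use outpace new construction.
```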

The results they present, from 1950 to 2012, do not suggest anything like that rosy picture. Global committed emissions of CO2 have risen continuously since 1950. Initially that was due to construction of new generating capacity in the US and Europe, but in recent years it has been caused by the rapid increases in capacity in China, India and some other developing countries. The overall pattern through the years has been driven primarily by the emissions from coal-fired power plants. Even today, coal remains the primary fuel used for electricity generation. All told, the committed emissions due to electricity generation were 307 gigatonnes of CO2 in 2012, and this committed amount was increasing at about 6.5 gigatonnes of CO2 per year. Not only have we not been reducing our use of fossil fuels for electricity generation, we continue to increase this use, and predominantly we use coal.

Parts A and B of Figure 5 of Davis and Socolow, showing the continuing accumulation of committed CO2 emissions to 2012. In A, the commitments are partitioned among nations; in B, among fuels. Figure © Envir. Res. Lett.

A September 7th article by Stephen Leahy drew my attention to one particularly concerning consequence of Davis and Socolow’s work. The 2014 IPCC Working Group I report included a number of simulations to determine the cumulative CO2 emissions that would be allowable for various target temperature increases. In other words, if we decide on a specific temperature target, how much more CO2 can we emit and still stay within it? If we do decide to work to limit the temperature increase to 2°C, the allowable future anthropogenic emissions amount to about 990 gigatonnes of CO2 between now and 2100. More than this, and we overshoot the 2°C target. Electricity generation is currently responsible for about 40% of total CO2 emissions, and 307 gigatonnes is about 30% of 990 gigatonnes. That means either we are going to have to drastically curtail all our other sources of CO2 emissions, or we are going to have to stop building new fossil fuel power plants within the next couple of years, even to replace ones that are coming to the end of their lives.

Let me say that a different way. The set of power generation plants now in operation on this planet will, in the remainder of their lives before decommissioning, emit virtually all the CO2 that our budget allows if we want to keep to a target of no more than a 2°C temperature rise and want to continue our other activities, chiefly to do with land use, that emit CO2. There is essentially no wiggle room left. We have so greatly increased the rate at which we emit CO2 that we already have in place enough capacity to emit all the CO2 we will be permitted to emit if we want to keep the global temperature reasonable. Surely that fact is one that might help spur on the negotiations on climate?
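As a rough check on this arithmetic, the short Python sketch below compares the committed emissions of the existing plant fleet with the remaining carbon budget. The 307 Gt, 990 Gt and roughly 40% figures are the ones quoted above; the pro-rata split of the budget is my own simplifying assumption, not a calculation from Davis and Socolow or the IPCC.

```python
# Back-of-envelope comparison of committed power-plant emissions with the
# remaining carbon budget for a roughly 2 degree C limit (figures from the text).

committed_gt = 307.0       # Gt CO2 committed by generators existing in 2012
budget_gt = 990.0          # Gt CO2 allowable to 2100 for a ~2 degree C limit
electricity_share = 0.40   # electricity's rough share of current CO2 emissions

print(f"Committed share of total budget: {committed_gt / budget_gt:.0%}")
# -> about 31% of the whole budget is already spoken for by existing plants.

# If electricity were allotted a pro-rata 40% of the budget (my simplifying
# assumption), existing plants would already claim most of that allotment:
electricity_budget_gt = budget_gt * electricity_share
print(f"Electricity's pro-rata budget:   {electricity_budget_gt:.0f} Gt CO2")
print(f"Fraction already committed:      {committed_gt / electricity_budget_gt:.0%}")
# -> nearly 80%, before a single additional plant is built.
```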

A message the climate negotiators need to hear. Far better than the message that the target is being eased to make the task easier.

Coal, Climate, and Tales from Four Countries

Our Use of Coal

Wikipedia says it is the official state mineral of Kentucky, and the official state rock of Utah. It also gets placed in the Christmas stockings of children who have been badly behaved during the year. And it is responsible for a lot of the carbon pollution we have inflicted on the world. Today, I am exploring some issues around coal.

To start with some facts and figures, I turned to the International Energy Agency. The first diagram is from their 2013 Key World Energy Statistics. It shows the global total primary energy supply (TPES) broken down by fuel type in 1973 and 2011. The quantity of each fuel is expressed in millions of tonnes of oil equivalent (Mtoe), the amount of oil that would produce the same amount of energy as the fuel in question; coal is prominent. It was 24.6% of the total in 1973 and 28.8% of a much larger total in 2011.

The shares of each fuel in the total primary energy supply (TPES). These are the fuels we extract from nature to drive our global economy. Figure © IEA

The majority of coal gets used in production of electricity and other forms of energy, rather than being burned to produce energy directly for end-users. Thus, its share of the world total of final energy consumption, as shown in the next diagram, is quite a bit smaller: 13.7% of a total of 4,674 Mtoe in 1973, and 10.1% of 8,918 Mtoe in 2011.

The shares of each fuel, including electricity and synthesized oils, which we use to power our global economy. Figure © IEA

Nevertheless, as shown in the third diagram, whether it is used in the production of other fuels, in other industrial processes, or to produce energy itself, the burning of this coal yields impressive amounts of CO2 pollution: 35.0% of the 15,628 Mt of CO2 emitted in 1973, and 44.0% of the 31,342 Mt of CO2 emitted in 2011. It’s the most polluting, per unit of energy released, of all the fossil fuels, and for that reason, a fuel that we should be consigning to history as quickly as possible.

The shares of CO2 emissions due to the use of each of our primary fossil fuels, whether the fuel was burned in producing refined fuels or electricity, or directly to produce motive power. The totals of CO2 emissions shown are those due to energy use. We also emit CO2 through our patterns of land use, cement manufacture and other activities. CO2 emissions due to fossil fuel use are 74% of all anthropogenic releases of this gas. Figure © IEA

Unfortunately, the planet has stored an enormous quantity of coal, and, unlike oil, which tends to be available in far fewer places, coal is widely distributed across all continents. Coal was the first fossil fuel we harvested; in many ways it proved the easiest to mine, to transport and to use. In Our Dying Planet, I talked about the many environmental and health problems associated with the extraction and use of coal. While it started our industrial revolution, it was definitely a mixed blessing for humanity, and it continues to be one. Here, I will focus on recent trends and events in several parts of the world.

As recently as 2006, while advocating greater speed in developing and implementing technology for carbon capture and storage, as an essential tool in the fight to rein in CO2 emissions, the IEA was projecting a continuing and nearly linear increase in the global rate of coal consumption from about 3000 Mtoe in 2005 to about 4500 Mtoe in 2030, a 50% increase in 25 years. The U.S. Energy Information Administration was similarly bullish, with nearly identical numbers (although they measured in Quadrillion BTUs, just to make life difficult). They reported use of 3089 Mtoe coal in 2005, and predicted 5099 Mtoe in 2030. For both agencies, the growth in use expected in China was a major factor underlying the calculations. In 2006, IEA was reporting Chinese coal-fired electricity generating capacity of 302 Gigawatts in 2004, and an expected increase to 661 Gigawatts by 2020.

China: So Much Coal, So Much Use, So Much Smog. Whoops.

So let’s start with China. This country has vast reserves of coal. In 2012, China was responsible for producing 45.3% of the 7.8 billion tonnes of coal mined worldwide, and imported a further 278 million tonnes from other countries. China is a rapidly modernizing country without a lot of oil or gas. It has had to make full use of its available coal, and had been building coal-fired power plants at a rate of almost one a week as recently as 2010. There have been consequences.

A typical Chinese open-pit coal mine, this one in Inner Mongolia. Photo from Wikimedia Commons

The Pannan power plant, one of China’s many coal-fired generation stations.
Photo © Simon Lim/Greenpeace

China has the unenviable reputation of having the most dangerous coal-mining industry in the world. On August 20th, USA Today reported that authorities were working to save trapped miners in two separate accidents that had already recorded fatalities. There were over 1000 fatalities in mining accidents in China in 2013 (a less lethal year than average). The pollution of land and water that always lurks around coal mining operations is undoubtedly also serious, but not necessarily closely monitored. But it is the air pollution in the big Chinese cities that has captured world and Chinese attention in recent months. This pollution has multiple causes, but the role of fine particulates from the mining, transport, and use of coal is undeniable. And China has begun to respond.

In September 2013, National Geographic reported that China was taking a number of steps to rein in its coal industry, including cancelling a number of planned coal-fired power plants in the vicinity of its major cities. Early this month, Beijing announced plans to phase out all use of coal for heating by 2020, and now there are indications that use of coal in China might actually be falling. Greenpeace reported on August 19th that the growth of imports had almost ceased during the first half of this year and that domestic production had dropped 1.8%. Part of this plateauing could be due to the sluggish (for China) economy, but an apparent increase in coal stockpiles suggests that the reduction in use is greater than an economic slowdown alone would cause. In April 2014, Jeff Rubin, writing in the Globe & Mail, picked up on the Chinese slowdown in coal use, predicting that it would have an impact on world trade in coal. If the Beijing smog has got the attention of the Chinese leadership, and there is real movement on the coal front, that can only be good news for the global climate and the prospects for progress at next year’s climate talks.

Tiananmen Square, Beijing, on a less than ideal day in January, 2013. Photo © Agence France Presse

Ontario: A Good-news Energy Story from Canada

The slow move away from coal has been going on for a long time in North America, and it has sped up with the enhanced availability of natural gas due to fracking. As a result, the US EIA projected increased exports by both Canada and the US from 2011 through 2040, as greater proportions of the coal being mined are shipped overseas. In Canada, the transition has been led by provincial governments, most notably Ontario, which announced in April that the Thunder Bay Generating Station, Ontario’s last coal-fired power plant, had burned its last coal and was being closed. Over a span of 10 years, Ontario went from depending on coal for 25% of its electricity needs to using no coal at all, relying instead on nuclear, hydro, and renewables – chiefly wind and solar – plus some natural gas. Nuclear and hydro are the backbone, together providing 78% of Ontario’s power requirements (about 155 terawatt hours in 2013). The nuclear infrastructure includes the 6,300 megawatt Bruce Power facility on the shore of Lake Huron – the world’s largest operating nuclear site – and a further 6,600 megawatts in two other plants, while the hydro generation capacity is primarily at Niagara Falls and on the Lower Mattagami River in the north.

Bruce Nuclear Power Plant, Ontario. Photo © Toronto Star

The 10 megawatt Newboro1 solar farm under construction in eastern Ontario.
Photo © David Chan/Globe&Mail

Ontario undertook this transition because its government articulated a clear policy favoring a cleaner mix of fuels. (Needless to say, the Harper government has provided no spurs or carrots to encourage such progressive action by Canadian provinces!) The path taken included an emphasis on conservation. This was encouraged by pricing electricity differentially according to hour-by-hour demand, by the use of smart metering for residential consumers, and by comparable inducements to industry. As a result, the Ontario government claims to have ‘saved’ about 8.6 terawatt hours of demand through increases in efficiency of use. Use of a feed-in tariff (FIT) program, specifying fixed prices for electricity sold into the grid from solar or wind generation systems, spurred development of a robust renewables sector that is now responsible for over 5% of total demand. The remainder is supplied by gas-fired generation stations.

These renewables-favoring policies have resulted in significant investment in wind farms and in solar – both on-roof residential systems and large solar farms – development not seen to a similar extent in other parts of Canada. Along the way, the Ontario government made one stupid decision, giving in to NIMBYism just before a critical election, which resulted in significant added costs to relocate two planned gas-fired plants, and electricity prices for consumers have spiked upwards (largely due to prior mismanagement, but consumers notice only the added cost). Nevertheless, Ontario’s story is one of the few good-news stories on the climate front in Canada – one that has made a significant dent in Canada’s overall CO2 emissions. But not in Canada’s mining of coal, which is primarily for export.

USA: New EPA Regulations Start to Turn Off the Coal Button

Exporting coal to China, India and other nations is now a major activity in Canada, the US, and Australia. Exporting coal requires coal export terminals, with all the attendant mess in terms of coal dust, fine particulates and low-level chronic pollution. Some of the jurisdictions asked to host new terminals are saying ‘no’. Last week the Oregon Department of State Lands rejected an application by Ambre Energy to transport coal by rail from Montana and Wyoming to Boardman, then barge it down the Columbia River to Port Westward for transfer to bulk carriers bound for markets in Asia. The project would have moved 8.8 million tonnes of coal per year, and was the smallest of three new projects proposed for the Pacific Northwest as a way of finding access to new Asian markets for US-produced coal. This week Port Metro Vancouver approved the expansion of the Fraser Surrey Docks to accommodate shipping of 4 million tonnes of US coal brought in by rail and then shipped to Asia.

The push to develop new coal terminals arises from pressures within North America to reduce the use of coal. In the US that pressure has come partly from the ample supply of inexpensive natural gas that has resulted from fracking – gas is a less polluting fuel, and retrofits to generation stations are relatively straightforward – and partly from the growing realization that coal’s time in the US economy has passed. In June, the Environmental Protection Agency, which had previously announced tough new regulations governing new coal-fired plants, introduced new requirements for CO2 emissions from existing coal-fired plants. Given that some 39% of US electricity generation is based on coal, regulations to cut CO2 emissions from these plants can have a major impact on the US contribution to global CO2 emissions. But equally, the importance of coal-fired plants means that the new regulation is going to be fought. Early this month, the LA Times reported that 12 states were suing the EPA over the proposed regulations. However, John Deutch, writing in the Wall Street Journal on 18th August, termed the new regulations “an unexpectedly thoughtful and well-supported plan setting specific goals for reducing emissions chosen from a menu of measures such as increased efficiency, emissions trading and fuel switching, mainly from coal to natural gas for electricity generation”. If successful, this will be a significant move to curtail CO2 emissions from the use of coal. And it will curtail domestic coal use.

Australia: Economy vs Environment – Coal vs Coral, but Maybe the Coral will Win

Australia is another of the major coal producers, the 5th largest on the planet according to the IEA, producing some 421 million tonnes of coal per year, of which 302 million tonnes are exported to Asia. The rush to increase export capacity in order to keep up with the presumed growing demand from China has led to an unfortunate conflict between support of one of the country’s major industries and conservation of its most important, and iconic, natural heritage treasure, the Great Barrier Reef. The major coal production regions are in Queensland, and the major export terminals lie along the Queensland coast, immediately inshore from the Great Barrier Reef.

I vividly remember the sleepy town of Gladstone, Queensland in the early 1970s – a small country town on the Queensland coast that had even then been transformed by the construction of a giant alumina plant that processed raw bauxite into alumina for export, using the readily available Queensland coal in the process. Close to the alumina plant, along the large estuary that was Gladstone harbor, was a coal terminal that shipped coal out to Asia and points beyond. It was a quaint Queensland town that had already been morphed into something unreal by all this heavy industry, and it showed it. On the main street there were gift shops catering to tourists headed to the reef, with postcards of the alumina plant at night – as if this was a scenic wonder tourists might want to see. Maybe some of them did! In the early 1970s, it was a quaint little town with a small cancer growing on its southeast flank.

I saw Gladstone last in 2012, after many years away, and because I took the helicopter out to the Heron Island Research Station, I got to see it from the air. The cancer had grown. There are now two alumina plants, and the coal terminal had clearly metastasized. There is something incredibly sad about a region of mangroves, deltaic channels, and mud flats that has been taken over by industry.

Somewhere in this scene there is a quaint Queensland country town named Gladstone, but it’s sure hard to find. Photo © Rachel Fountain/ABC

Gladstone is just one of several coal terminals on the Queensland coast, and recently plans have been advanced to upgrade and expand several of them, and to build completely new ones. It’s all part of the rush to build shipping capacity to get the coal to China before anyone else does – a plan remarkably similar to the one that drives the urgent need to build pipelines from Alberta to every corner of Canada in order to export the tar sands glurp as quickly as possible. In both cases the industries are acting as if they know they are dealing in a time-sensitive product, one that is going to spoil very soon. And in both cases they are dealing in products that will spoil, not like rotten bananas, but in value, once the world suddenly stops using them.

The Australian port expansion plans are extensive, and I discussed them last month. Ove Hoegh-Guldberg of the University of Queensland sent me a couple of articles discussing this issue. I also found two recent documents concerning management of the Great Barrier Reef. The first, Great Barrier Reef Outlook Report 2014, was produced by the Great Barrier Reef Marine Park Authority (GBRMPA) as mandated both by the Great Barrier Reef Marine Park Act and by UNESCO regulations governing management of a World Heritage Area. It replaces the 2009 Outlook Report; it assesses the state of the Great Barrier Reef, the factors influencing that state and trends in it, and provides a long-term outlook for reef condition. The second, Great Barrier Reef Region Strategic Assessment: Strategic assessment report, was produced by GBRMPA as one part of a national and Queensland strategic assessment of the Great Barrier Reef region, including the Great Barrier Reef and its catchment area along the coastal plain of Queensland. A comparable assessment is being done by the State of Queensland for the terrestrial portion of the region. Both documents deal with the impacts of port development on the Great Barrier Reef.

Ports within the Great Barrier Reef region and the total throughput (imports + exports) per year in millions of tonnes. Most of the tonnage is coal exports. There are major expansion plans, or work in progress, at Abbot Point, Hay Point, Gladstone, and Townsville, however proposals for three new coal terminals in the northern part of the region have now been shelved. Figure © Ports Australia.

Production of coal in Queensland has doubled since 1990 to about 200 million tonnes per year, and exports are expected to reach between 300 and 450 million tonnes per year by 2025. That requires significant port expansion, and expansion requires significant dredging and disposal of spoil. Plans for Gladstone include the dredging of 12 million m³ of sediments that would be disposed of offshore, but within the Great Barrier Reef region. Similar quantities of dredging are involved at other ports, and there is also regular maintenance dredging to keep shipping lanes deep enough for ever larger vessels.

However, opposition to the dredging is growing, and three banks, HSBC, Deutsche Bank, and RBS, have recently announced that they are cancelling planned investments in coal port expansion in Australia. Perhaps more importantly, the lowering of demand in China is causing some adjustments to plans for mine development and port expansion in Australia. The Dudgeon Point development at Hay Point, which would have required dredging of about 15 million m³, was cancelled on June 20th because of “weakened demand”. It has long been recognized that if humanity comes to its senses regarding climate change before we burn up all known reserves of fossil fuels, then at some stage perfectly good resources will get left in the ground. It now seems possible that we are starting to see this shift, and after coal, the tar sands will be next. On July 28th, Jeff Rubin drew attention to the parallels between Stephen Harper’s love of the tar sands and Tony Abbott’s love of Australia’s coal industry, and suggested that the Chinese slowdown could be the game-changer that slows or cancels the expansion in coal production that Prime Minister Abbott so clearly wants to occur.

Tailings Ponds, Troublesome Externalities, Tepid Regulations and Tipping Points: What the Mount Polley Mine Tells Us about Canada’s Environmental Crisis

Tailings Ponds

It happened early on Monday 4th August. The tailings pond dam at the Mount Polley Mine in central British Columbia gave way. Ten million cubic meters of contaminated water and 4.5 million cubic meters of metals-laden sands and silts washed out into Hazeltine Creek, the outlet for Polley Lake. Hazeltine Creek flows southeast into nearby Quesnel Lake and then through Caribou Creek and on to the Fraser River. The contaminated sands and silts, together with debris scoured out of Hazeltine Creek, plugged Polley Lake, which rose 1.5 meters behind the unstable, and likely temporary, plug. Most of the sands and debris ended up in Quesnel Lake, where another large debris field is evident. Timber and other debris ripped from the banks of Hazeltine Creek floated about in Quesnel Lake as giant islands, and for a time there was concern that one of these would ram into and demolish the only road bridge connecting the small town of Likely to the outside world.

Breached dam of the Mount Polley mine tailings pond and the path of destruction down to Hazeltine Creek and Polley Lake. Image © CBC

The tailings pond was about 4 km x 4 km in area, a relatively big one. (If the 10 + 4.5 million cubic meters of water and sediments released were giant sugar cubes or Lego blocks, they would form a rectangular stack 100 meters by 100 meters at its base and soaring nearly 1.5 km high, more than two and a half times the height of Toronto’s CN Tower and more than 1.5 times the height of the Burj Khalifa.) Tailings ponds are the preferred method for handling mining wastes and the wastes from the use of coal, and they do fail from time to time. In Our Dying Planet I wrote of the failure of a containment pond for fly ash generated at the Kingston Fossil Plant, a giant coal-powered electricity generating station near Knoxville, Tennessee. That 4 million cubic meter spill was less than a third the size of the failure at Mount Polley. Of course, the tailings lakes being used in Alberta’s tar sands operations dwarf even Mount Polley; the Pembina Institute reports that 173 km² of Alberta, an area half again as large as the city of Vancouver, is already covered by tailings lakes, that 25 thousand cubic meters of tailings waste is being produced each day, and that the expected total of contained tailings from tar sands operations will reach 1.3 billion cubic meters by 2060.
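For anyone who wants to check the sugar-cube comparison, here is the arithmetic as a small Python sketch. The 100 m x 100 m base is the one assumed in the text, and the landmark heights (roughly 553 m for the CN Tower and 828 m for the Burj Khalifa) are the commonly quoted figures, included only for scale.

```python
# Arithmetic behind the "stack of blocks" comparison for the Mount Polley spill.
# Landmark heights are the commonly quoted figures, included for scale only.

spill_m3 = (10.0 + 4.5) * 1e6   # water plus sands and silts, in cubic meters
base_side_m = 100.0              # side of the square base assumed in the text

stack_height_m = spill_m3 / (base_side_m ** 2)
print(f"Stack height: {stack_height_m:.0f} m (~{stack_height_m / 1000:.2f} km)")
# -> 1450 m, i.e. nearly 1.5 km

cn_tower_m = 553.0       # Toronto's CN Tower
burj_khalifa_m = 828.0   # Burj Khalifa, Dubai
print(f"CN Towers stacked:     {stack_height_m / cn_tower_m:.1f}")     # ~2.6
print(f"Burj Khalifas stacked: {stack_height_m / burj_khalifa_m:.1f}")  # ~1.8
```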

Fortunately, it now appears that the water from the Mount Polley tailings pond was relatively free of contaminants, although the sediments are likely to include nickel, arsenic, lead and copper. Water samples from Polley and Quesnel Lakes appear safe to drink, although a drinking water ban remains in effect. The contaminant loads in the sediments, however, may be a bigger problem, and cleaning them up will take some time. Water is now being pumped from Polley Lake into Hazeltine Creek to minimize the risk of an uncontrolled rupture of the plug of contaminated sediments and further damage downstream. Some water continues to flow from the tailings pond, although the mine staff are working hard to put a temporary dam in place. The mine is in a remote part of the province, so most of the damage done so far has not affected infrastructure or communities.

Troublesome Externalities

Still, this failure must cause us to think about the many risks that the mammoth scale of resource extraction industries now imposes on the environment and on people. Back when mines were small holes that went deep underground, back when dump trucks stood only three times as tall as a man and carried modest loads, back when tar sands were worthless because we did not have the technology or the capital investment needed to mine them, our impacts on the environment were similarly modest. But we are not back there anymore. Our much greater ability to change the landscape has led to ever grander projects, yet we continue to think of environmental damage as something we can repair after the fact if necessary, rather than as something we should absolutely minimize in the first place. We think this way because environmental degradation or damage is just one of those negative externalities that do not enter into the economic assessment of the feasibility of a project.

Sure, we have progressed from the old days when rape of the environment, with no effort to minimize damage or undertake any remediation, was the usual approach in all resource extraction industries. Now there is lip-service to the idea that minimizing environmental damage, and repairing any damage caused, should be the norm for all extractive industries – fisheries, forestry, and all types of mining are managed with regulations that are intended to achieve sustainability (in the case of fisheries and forestry) and to avoid or remediate any collateral environmental damage. Industry spends money to comply with the regulations, and some responsible corporations even go beyond the requirements of the regulations in order to more fully avoid environmental damage. But most corporations do their best to minimize these expenses, because there is no payback to investors for successfully putting the environment back together when the extractive operation ends. If they do the bare minimum and walk away leaving something less than perfect behind, their investors are not unhappy.

Tepid Regulations

In situations where a major resource extraction operation, or a set of smaller nearby operations, comes to dominate the economic activity in a region, the economic power of the industry is such that it can influence government to minimize regulations, turn a blind eye, or in other ways let the industry get by with less effort to minimize or remediate environmental damage. This is so because governments, the ultimate regulators, also tend to see the environment as an externality – they understand the economic (and tax revenue) benefits of a profitable industry that employs people far better than they understand the intrinsic but often intangible, and certainly non-taxable, benefits of a healthy, sustainably managed environment. Environment comes second, if it comes into consideration at all.

This certainly seems to be the case in Alberta’s tar sands district, where the enthusiasm of both provincial and national governments for an expansion of the industry seems to have led to minimal resource rents (Alberta), overly favorable taxation policies for the industry (national), weak regulations (Alberta), lack of enforcement (both), and perennial delays in putting additional or replacement tougher regulations in place (both). This overly compliant attitude by both governments has surely been encouraged by the Canadian Association of Petroleum Producers (CAPP), and individual operators, all of which have ample funds to spend in persuading government regulators to go easy, to go slow, and to give them yet more time to comply. This is all perfectly legal, but it is what happens when the foxes (the mining operators) are helping the farmers (the governments) look after the hen house (the environment).

Tailings lakes in Alberta’s tar sands district. Photo © Jeff McIntosh/Associated Press

In the tar sands district, enormous quantities of water are being permanently contaminated through use in bitumen extraction. They are stored indefinitely in tailings lakes, because there is no current technology to clean them up at reasonable cost. This water is effectively withdrawn from the hydrological cycle, with no evident thought for the availability of water in that relatively arid region, or for the impacts of those withdrawals on the environment. The Pembina Institute reports that about 11 thousand cubic meters of contaminated water leach from tar sands tailings lakes into waterways and groundwater every day. There is solid evidence of pollution damaging to fish and wildlife downstream. The lakes do not appear to be becoming clean by themselves, and a tailings lake breach remains a continuing possibility (and these tailings are dirty). What future problems are we building for ourselves in northern Alberta?

Tipping Points

One of the over-used phrases when discussing the environmental crisis is ‘tipping point.’ Recognition that environmental responses to stressors are rarely linear and sometimes rapid leads naturally to a concern that, as we continue to stress the environment, we must anticipate some unexpected, rapid, and perhaps consequential responses. The most frightening ‘tipping point’ in my view is what James Hansen calls ‘runaway climate change’. This is a situation that could arise if we push global temperature high enough to trigger several plausible positive feedback mechanisms that would then essentially take over and continually escalate the rate of warming, no matter how much we reduce our own contributions of greenhouse gases. Potential positive feedback mechanisms include the melting of Arctic sea ice, the thawing of permafrost, and the melting of methane clathrates. Melting of the sea ice leaves the Arctic Ocean darker (less reflective) and capable of absorbing (as heat) more of the sunlight that impinges on it; as more ice melts, the rate of warming increases. This is already happening. Thawing of permafrost has the potential to release large quantities of trapped methane (a potent greenhouse gas) formed from decomposition of organic material in the frozen soils. This is starting to happen. Melting of methane clathrates will release more methane, and will happen when the oceans warm sufficiently; there are vast quantities of methane clathrates in the deep ocean and in subsea sediments. Remember the difficulties BP had in capping the Deepwater Horizon blowout: one early attempt, using a large steel cone positioned over the well to funnel the emerging oil into a pipe to the surface, failed dismally when methane in the emerging stream of oil combined with seawater to form clathrates, which attached to the cone’s inner surfaces, plugged its narrow opening to the pipe, and caused the cone to be pushed away by the emerging oil.

Putting aside ‘runaway climate change’ as simply too awful to contemplate for very long, there remain a number of other potential ‘tipping points’ as nature responds to our meddling. Losses of insect pollinators, due to our careless use of pesticides and other chemicals, are already having impacts on crop yields, and a tipping point in which widespread crop failure suddenly results is not far-fetched. Unexpected links between climate and insect-borne pathogens might lead to a sudden expansion of serious diseases like malaria into new regions. In much the same way, warming in British Columbia has already permitted a massive northward expansion by the mountain pine beetle, and early incursions across the Rockies into northern Alberta, resulting in large and growing losses to the BC forestry industry. Coinciding warming and shifts in the location, timing and extent of rainfall have already led to severe droughts and floods in many parts of the world, but a tip into long-term change in regional climate, with severe consequences for food production or human health, will likely occur and, because of the natural variability of weather, will not be recognized until some years after it happens. Has this already happened in the US Southwest or in western Australia?

Hidden complexity when several factors act together to stress a population

Most insidious of all are the tipping points that occur when seemingly unrelated impacts combine to lessen the chance that an ecological system can sustain itself. Some of these are surely already playing out in the oceans, but we are seeing only glimpses of what is going on. As an example of the potential complexity, consider the production of any fishery species. Producing young is a vital task for every population if it is to persist, yet heavily fished populations are already stressed: they are less abundant than before, and made up of younger individuals. Each individual may have, at best, only one opportunity to reproduce, whereas its ancestors lived long enough to reproduce in multiple years. Indeed, in the Pacific bluefin tuna, a heavily overfished species, the average individual is now caught before it even reaches maturity, yet its ancestors, prior to overfishing, lived 15 years and reached maturity at five years of age. Overfishing thus both reduces the number of animals available to spawn and reduces the number of chances each individual gets to reproduce.
While this may not seem very important, limiting the number of chances to reproduce limits the total number of offspring each parent can produce, and for organisms like fish, which suffer very high (close to 100%) and very variable mortality in the first few days or weeks after hatching, that reduction can greatly limit an individual’s chance of ever becoming a successful parent at all. (It’s a bit like throwing a pair of dice: if you have only one or two chances, the likelihood of throwing a pair of sixes is small, but if you get to throw them 15 or 20 times, the odds of at least one pair of sixes are much more in your favor. Think of a fish successfully reproducing as being like you throwing a pair of sixes.)
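The dice analogy can be made precise with a few lines of arithmetic; this is pure probability, with no fisheries data involved.

```python
# Probability of at least one double-six in n throws of a pair of dice.
# A single throw succeeds with probability 1/36 (~2.8%).
def p_success(n_throws):
    return 1 - (35 / 36) ** n_throws

for n in (1, 2, 5, 15, 20):
    print(f"{n:>2} throws: {p_success(n):5.1%} chance of at least one double-six")
# 1 throw ~2.8%, 5 throws ~13%, 15 throws ~35%, 20 throws ~43%
```

Cutting a fish’s spawning opportunities from 15 or 20 down to one or two is, in this analogy, the difference between a better-than-one-in-three chance of success and a few percent.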

The typically high and variable mortality of newly hatched and older juvenile fishes is due to a number of factors, including the availability of food and how fast they can grow. If there is plenty of food, they grow faster, become better able to avoid being eaten themselves, and may reach adulthood. If food is scarce, they grow slowly, if at all, and may starve if they are not consumed first. Either way, far fewer of the young fish get to grow up. Warmer water may increase growth rates directly, but only if food is available. Given this kind of production system, an adult fish may well need more than one or two chances to reproduce to have any realistic prospect of producing young that reach adulthood themselves. This is so despite the fact that fish typically shed thousands of eggs at a time; of those thousands, nearly 100% usually die before reaching adulthood.

Now consider the prey species consumed by the young hatchlings. These tiny planktonic species must be available at the time and place the young fish are present, but their own growth, survival and reproductive success are affected by the environment too. To the direct effects of warming or ocean acidification on the survival and growth of these small prey species, add the indirect effects of melting sea ice and glaciers, which alter ocean circulation and transport populations of plankton along different paths than they travelled before. Together, these factors make the prey more abundant, less abundant, or even absent from a particular location in a particular year. Suddenly, juvenile fish are produced at a time and place that does not coincide with abundant prey, and the result is a reproductive failure for that population in that year.

Have this happen in just two or three years in a row and that population will decline far more rapidly than would be expected from overfishing alone (and also far more rapidly than it would if climate change were occurring but the population were not being heavily fished). Mostly we find out about failed reproduction only after the fact, when the new individuals would have grown large enough to start being caught; only there aren’t any new individuals. Only rarely do we know the full reasons why the failure occurred.
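A toy bookkeeping exercise shows how quickly a run of failed year-classes compounds with fishing. Every number below is invented purely for illustration; this is not a stock assessment or anyone’s published model.

```python
# Toy projection of a fished stock with and without consecutive recruitment
# failures. Parameters are invented; 'survival' lumps fishing plus natural
# mortality, and recruitment is set so the fished stock just holds its own.

def project(years=10, survival=0.6, recruit_rate=0.4, failed_years=()):
    stock = 1.0                                    # start at 100% of current size
    for year in range(years):
        recruits = 0.0 if year in failed_years else recruit_rate * stock
        stock = stock * survival + recruits
    return stock

print("fishing only:                    ", round(project(), 2))                        # ~1.0
print("fishing + 3 failed year-classes: ", round(project(failed_years=(2, 3, 4)), 2))  # ~0.22
```

In this sketch, three missed year-classes cut the hypothetical stock to roughly a fifth of its size even though the fishing pressure never changed; that is the compounding effect the text describes.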


Figure 5 from MacKenzie et al. (2007), showing the complex network of effects of climate change on a) reproductive success and b) growth in three fishery species: cod, sprat and herring. Climate, by affecting salinity (S), oxygen content (O2) and water temperature (T), has both direct effects (solid arrows) and indirect effects (dashed arrows) on each species, and the pattern of effects on reproduction differs from that on growth. Figure © Global Change Biology

A number of recent reproductive failures of fishery species have been attributed to an absence of the usual prey species (or a shift in their distribution away from where the hatchlings are, which amounts to the same thing). In 2007, Brian MacKenzie and three colleagues, all from Danish research institutions, published a review in the journal Global Change Biology of the impacts of climate change on Baltic Sea fisheries. They showed complex direct and indirect effects of climate-induced changes in the environment (temperature, salinity, oxygen content) on cod, herring and sprat, and on the copepod prey species used by their juveniles. That same year, in Fisheries Oceanography, Thomas Brunel and Jean Boucher of IFREMER, France, published an analysis of long-term trends in reproductive success of fishery species across the northeast Atlantic. They found that ten populations of cod, plaice, whiting and herring showed substantial declines over the period 1970-2000. These declines were strongly correlated with warming ocean waters; however, the details varied greatly among populations and species. In some, overfishing had a major impact on population size before climate effects on reproductive success began to add to the decline; in others, the effects on reproduction were paramount. The overall message from such studies is that the success of a fishery population, and hence its availability for capture, depends on a number of direct and indirect impacts that include complex effects of climate change as well as impacts from fishing.

Never mind the continued production of a fishery species; consider the plight of coral reefs around the world. If ever there was a good example of a tipping point, this is it. Reefs have been impacted by humans for many years through overfishing, pollution, and physical destruction, whether deliberate (blasting out channels, harvesting limestone for building) or inadvertent (inappropriate coastal development that lowers water quality, increases turbidity, or simply buries living reefs in fine silt), and those insults have been accumulating for decades. Now, with the added pressures of warming and ocean acidification on top of continued overfishing, pollution and physical destruction, reefs in many locations are deteriorating rapidly: a classic tipping point brought about by the synergistic action of multiple stressors, each of which might have been far less harmful had it acted alone.


A coral reef in healthy condition – the way they are supposed to look. Photo © R. Steneck

Tipping Points in the Tar Sands?

Still, let’s struggle back onto dry land, because there is a reason I brought up tipping points. I believe our relatively new ability to embark on giant projects that cause major changes to the environment, even without unplanned events like tailings pond failures, is setting us up for a variety of tipping points in the near future. The tar sands operations in Alberta certainly constitute such a giant project.

There has been a tendency in the long-running debate over the tar sands to deal with issues one at a time, mirroring the practice of government in evaluating and approving new tar sands projects one at a time. Given the scale of the industry, and its potential scale if it expands as proponents and governments want, it makes sense to look at all the issues together: instead of a series of separate potential environmental problems, there is a set of interacting, possibly synergistic problems that are likely to cause unacceptable environmental change. Let’s build a list:


  1. While the products refined from tar sands bitumen yield essentially the same CO2 emissions when burned as those obtained from conventional oil, extracting and refining the bitumen takes about three times as much energy as producing conventional oil, and therefore generates about three times the production-stage CO2 emissions.
  2. Much of the energy used to produce the bitumen and upgrade it (make it liquid) for transport comes from burning natural gas, the least polluting of the fossil fuels – good fuel is burned to produce environmentally poor fuel, sort of like using gold to produce lead.
  3. The current scale and planned expansion of the tar sands operations are such that executing these plans makes it impossible for Canada to reduce its national CO2 emissions, no matter what adjustments are made in other parts of the economy. Current estimates suggest Canadian emissions will still be higher than 1990 levels in 2020. This puts Canada in an untenable position with respect to other nations, one that will complicate international relations in ways that are hard to predict.
  4. Tar sands production is currently constrained by a lack of capacity to transport the bitumen to refineries and markets. New pipelines and other transport capacity will boost refinery activity and production of refined products; availability of fuels will increase, prices will fall, and the North American addiction to gasoline and similar fuels will lead to increased use. We will be using more of the environmentally dirtiest oil at the very time nations should be weaning themselves off these products. A recent estimate in Nature Climate Change by Peter Erickson and Michael Lazarus of the Seattle-based Stockholm Environment Institute suggests that, because of this effect on price and use, approval of the Keystone XL pipeline would unlock about four times as much CO2 pollution as previous estimates indicated. Yes, it would be worse to transition towards fuels derived from coal, but not much worse.
  5. Tar sands operations have already radically altered 280 km² of the boreal forest, wetland and lake system of northern Alberta. If they expand as planned, some 140,000 km² of natural habitat could be disturbed. To date, reclamation of terrestrial habitats has been negligible, and techniques for reclaiming the wetlands do not exist. The United Nations Environment Program has included Alberta’s tar sands region among the 100 hotspots of environmental degradation worldwide, and Environment Canada has referred to the “staggering challenges for forest conservation and reclamation”.
  6. The extensive wetlands are an important sink for CO2, storing between 2000 and 6000 tonnes of CO2 per hectare, yet the approved reclamation plans call for most of these wetlands to become upland forests. A 2011 study by Rebecca Rooney and colleagues from the University of Alberta, published in the Proceedings of the National Academy of Sciences (USA), reported that the net loss of wetlands from mining ventures approved up to that time is equivalent to the release of 41.8 to 173.4 million tonnes of stored CO2, plus the permanent loss of the capacity to store 21 to 26 thousand tonnes of CO2 per year into the future.
  7. The Athabasca River, nearly 1500 km long and one of the longest undammed rivers in North America, winds from its headwaters in the Columbia Icefield in Jasper National Park, through the tar sands district, and on to its delta at Lake Athabasca in Wood Buffalo National Park. Its water flows north from Lake Athabasca via the Slave River to Great Slave Lake and thence, via the majestic Mackenzie River, to the Arctic coast, a journey of about 4000 km. We all know what is happening to glaciers and the rivers they feed. The tar sands industry takes all its water from the Athabasca River or from groundwater sources in its basin. Current permits allow the taking of 349 million m³ of water per year, an amount comparable to the use of a city of 2 million people (see the back-of-envelope sketch after this list). Unlike in most mining operations, that water is not used, cleaned and returned to the river; it is used, dirtied, and stored forever in those giant tailings lakes. Flow in the river already reaches critically low levels for fish populations during the winter months, and it is clear the river cannot supply the ever larger amounts of water a ramping up of tar sands production would demand. What will happen then? The governments have not yet introduced any regulations to limit operators’ access to water when river flow is low, and existing permits explicitly allow continued withdrawals so long as there is water available to use.
  8. The tailings lakes are already leaching contaminated water into the Athabasca system. These waters are acutely toxic to animal life, containing naphthenic acids, for example, at concentrations 100 times greater than in natural waters. And nobody knows how, if, or when they will be cleaned up.
  9. Air pollution is also a problem in Alberta’s tar sands paradise. Nitrogen oxides, sulfur dioxide, volatile organic compounds and particulate matter are all emitted in large quantities from tar sands operations. Modelling of emissions from six operations (three in production and three more approved and under development) shows that collectively they will exceed provincial, national and international air quality standards. Releases of benzene are also on the rise, both from the burning of fossil fuels in operations and through evaporation from the tailings lakes.
  10. And then there are the pipelines, and the trains carrying explosive tar sands crude. Each new pipeline snakes across the countryside like a lengthening scar, pipelines do rupture from time to time, and some, such as Northern Gateway, would spill into previously pristine coastal waters if they did. To get the tar sands product to flow at all, it has to be diluted with natural gas condensate or similar fluids. Last year’s Lac-Mégantic derailment and catastrophic fire showed what can happen when trainloads of crude pass through our towns.

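A couple of the numbers in that list can be sanity-checked with simple arithmetic. The sketch below does so for items 6 and 7; the per-capita water figure (about 475 litres per person per day, a plausible total municipal figure) is my own assumption, not a number from the cited sources.

```python
# Back-of-envelope checks on items 6 and 7 above.

# Item 7: is 349 million m3/year really comparable to a city of 2 million people?
per_capita_m3_per_day = 0.475            # assumed ~475 L/person/day (municipal total)
city_use = 2_000_000 * per_capita_m3_per_day * 365
print(f"a city of 2 million uses roughly {city_use / 1e6:.0f} million m3/year")   # ~347

# Item 6: at 2000-6000 t CO2 stored per hectare, how much wetland does a
# release of 41.8-173.4 million tonnes of CO2 correspond to?
low_ha = 41.8e6 / 2000
high_ha = 173.4e6 / 6000
print(f"implied wetland loss: about {low_ha / 1000:.0f}-{high_ha / 1000:.0f} thousand hectares")
```

Both checks land close to the figures reported (roughly 347 million m³ a year, and on the order of 21 to 29 thousand hectares of wetland), which is reassuring, though no substitute for the original analyses.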
Put these ten issues together and the conclusion is clear: tar sands mining as practiced in Alberta is setting us up for any number of tipping points down the road. Meanwhile, our Harper government is spending a fortune of our money advertising the merits of the Keystone XL pipeline and Canada’s ‘ethical oil’ on the walls of the Washington DC subway.


This used to be a productive, carbon sequestering environment of boreal forests and wetlands.
Photo © David Dodge, Pembina Institute.

Meanwhile, back at the Mount Polley Mine, the ban on use of water from Polley and Quesnel Lakes and Hazeltine and Cariboo Creeks remains in effect, and Imperial Metals is still working to put temporary repairs in place at the tailings pond. A report in the Vancouver Sun suggests that Imperial Metals’ property and business-interruption insurance ($15 million) may fall far short of the expected clean-up costs, estimated to be in the vicinity of $200 million. If that were not enough, production at the mine, now suspended, generates 83% of Imperial Metals’ revenue, and a group of investors in the company has already filed a class-action lawsuit. Needless to say, if the clean-up costs more than the deep pockets of Imperial Metals can provide, bankruptcy will not magically repair the damaged environment. That troublesome externality will remain, a permanent scar to join all the other scars in places where industry made mistakes it could not repair.

How long will it take before we know the full extent of the damage? Let’s just observe that on 28th July 2014, more than four years after the Deepwater Horizon blowout, Charles Fisher of Penn State University and 11 colleagues published a report in the Proceedings of the National Academy of Sciences (USA) detailing their discovery of five previously unknown deep-water coral communities, two of which had been damaged by toxic substances from the blowout or by the dispersants used to control it. One of these sites lies 22 km southwest of the well and 1900 meters deep, greatly expanding the footprint of that disaster, in both distance and depth, beyond previous estimates. The damage from the Deepwater Horizon blowout continues today.

Humans are very powerful, and too often that power damages the fabric of the environment that ultimately sustains us. If there are tipping points in our future, and there probably are, we have put them there because we insist on seeing environmental damage as simply a troubling externality. It’s an externality that we need to internalize as quickly as we can.
