How can we expect the average person, or even a politician, to understand what is happening to coral reefs?


Coral reefs should have alerted us by now

By now, everybody knows that coral reefs around the world are being seriously degraded by the effects of warmer water caused by climate change.  Occurrences of coral bleaching are far more frequent than they used to be (nobody had witnessed such an event prior to 1982), and their cumulative impacts have been a primary cause of the substantial loss of coral cover worldwide.  Estimates of loss over the past 30 years or so, based on sound scientific data, approximate 50% for both the Great Barrier Reef and the Caribbean, while less extensive data from other regions confirm these results are not unique.  This is a far more rapid rate of loss than that of rainforest, or forested land overall, and is clearly not a rate that is compatible with the continued presence of coral reefs.  Scientists are united in attributing major portions of this loss of coral to the effects, direct or indirect, of climate change.
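The arithmetic behind that rate of loss is worth pausing on.  Here is a back-of-envelope sketch: the 50%-over-roughly-30-years figure comes from the estimates above, but the assumption of a constant, compounding annual rate is mine, made purely for illustration (real losses are episodic, arriving in pulses with each bleaching event).

```python
# Back-of-envelope: what does losing 50% of coral cover in ~30 years imply
# if the decline compounded at a constant annual rate?  (The constant rate
# is an illustrative assumption, not a claim from the cited studies.)

def remaining_cover(years, half_life=30.0):
    """Fraction of coral cover remaining after `years`, given a 30-year half-life."""
    return 0.5 ** (years / half_life)

annual_loss = 1 - remaining_cover(1)   # fractional loss per year
print(f"Implied annual loss: {annual_loss:.1%}")                 # ~2.3%
print(f"Cover left after 60 years: {remaining_cover(60):.0%}")   # 25%
print(f"Cover left after 90 years: {remaining_cover(90):.1%}")   # 12.5%
```

The point is not precision but compounding: a seemingly modest loss of about 2% of remaining cover per year halves a reef every 30 years, which is why such a rate is incompatible with the continued presence of coral reefs.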

The profound extent of the damage being caused to coral reefs, as revealed in these contrasting photos from the Line Islands, should be a wake-up call to all about the need to address climate change.  Photos © Scripps Institution of Oceanography

Those of us who have spent our careers doing research on coral reefs have long wondered why what is currently happening to them has not mobilized deep concern across the world to do something promptly to reduce the risk of further human-caused climate change.  Indeed, I vividly remember being at a large international conference in 2000 where the effects of the strong 1997–98 El Niño were a hot topic.  Conversations kept coming back around to the hopeful expectation that the world’s first circumtropical mass bleaching episode would be the very strong wake-up call to the world that would begin the effort to rein in emissions of greenhouse gases.  To us, the link was obvious, and the consequences of ignoring climate change were going to be devastating in many ways, far beyond our coral reefs.  Reefs were just the canary, doomed to suffer first and thereby warn the world.  But it did not turn out as we expected.

Ever since, reef scientists and managers have been struggling to articulate the story of coral reef decline in ways that will more effectively capture the attention of the public and lead to strengthened policy on climate around the world.  We have provided detailed case studies of bleaching events around the world.  We’ve explained the links between rising temperatures, bleaching, coral mortality, and reef degradation.  We have used powerful models to project likely futures.  We have helped document the enormous value of coral reefs, economically, esthetically and biologically.  And we have advocated for action, locally and globally, that would help sustain coral reef systems.  All seemingly to no avail.

I’ve come to believe that the failure of most people to get what I’ll call the coral reef message is due to several factors, only some of which are under the control of the scientists and managers.  We can be blamed for part, but not all, of this failure.  The media also share part of the blame.  And the audience – that everyperson out on the street – shares the rest.  Let’s consider each in turn.

The scientists

Coral reef scientists, like other scientists everywhere, seldom find ‘the art of story-telling’ among the courses required during their graduate careers.  Somehow, we assume telling stories is easy, and we all know how to do it.  Fact is, we don’t, and peer review by journals, or at conferences, seldom addresses this gap.  Some of us even believe that telling stories is somehow not what a scientist should be doing; it smacks too much of entertainment.  Our colleagues put up with this deficiency and drive themselves to listen to our 15-minute conference presentations or read (or at least skim) our journal articles, even when the talks and the articles are mind-numbingly boring.  Real people are simply not that interested.  Randy Olson, long ago a coral reef scientist, has drawn my attention to the following cartoon that makes this point with respect to seminars or conference papers, but it applies to every written communication as well.  Properly designed, each has a structure that begins with some background (Randy calls this ‘and’), identifies the problem being addressed (‘but’), and draws a conclusion (‘therefore’).  A lengthier talk or technical article will have a more complicated structure – likely an overall ‘and, but, therefore’, with a series of two or more subsidiary ‘and, but, therefore’ sections within it.  Each such sequence of elements builds a story arc that generates, maintains and finally rewards interest by the listener or reader.

Cartoon © www.animateyourscience

While some of us are increasingly attending to story-telling in our conference presentations, very few of us bring this to our technical articles.  Too often, our articles are a succession of ‘and’ with no discernible story arc, just a long list of mind-numbing details.  Then too, the sequence of such details is often incompatible with story-telling.  The journals seem to go out of their way to impose a structure on articles that bears no relationship at all to story-telling: in some journals, Materials and Methods, or ‘what was done’, becomes ‘supplementary material’ stored separately in an archive, so the main text jumps directly from introduction to results; in others, Results and Conclusions come ahead of the Introduction.  Naturally, scientists now read such articles by glancing at the opening paragraph, scanning the figures, taking a quick look at the final paragraph, and then maybe reading more carefully.  This reading is not done for enjoyment, and the articles are seldom enjoyable.  Mostly, they are not memorable either.

Nor does peer review improve the quality of writing; the focus instead is on scientific accuracy and rigor – definitely important aspects of a technical article.  When we add deficient copy editing, a general tolerance of jargon whose meaning is limited to those from the same subculture as the author, and the fact that few English majors end up as scientists, it should not be surprising that technical articles are seldom models of effective story-telling and are sometimes barely literate.  Yet it is the stories that make a piece of science memorable, and articles that are not remembered don’t get cited and might as well not have been written.

Science students are seldom taught how to communicate science effectively.
© Nature Education.

Well, OK, you say.  Technical articles are intended to convey information within the science community.  They were never meant to tell stories.  I disagree, but perhaps more important is the fact that we scientists also apply our story-less style to pieces of writing that are intended to reach a wider audience.  There are gloriously talented exceptions among us, but for the majority, our articles for the popular press come out as a long string of details: and, and, and, and, and.

Back when I began my own career, the process of publication took a year or more, with manuscripts and revisions being mailed back and forth across the globe before type was finally set and a journal issue printed.  I think we did a slightly better job of story-telling; I know we spent more care on each manuscript (and published far fewer).  Now, in a world of instant communication, peopled by far more scientists, under far more pressure to succeed, the production of poorly written articles does not generate much attention for the author.  Yet it is attention that is essential to maintain the stream of funding needed to do science, and scientists have learned to compensate for their unmemorable technical articles by using social media and press releases to try to generate buzz each time a new article appears.

Many universities have now established publicity units that help with this buzz-making task; at others, the scientist has to go it alone, again with no formal instruction on how to do so.  But how do you generate buzz?  By telling effective and enticing stories.  Since we don’t know how to do that, we adopt an easy two-step trick to create enticing copy about a new piece of research: hype the story as new, different, the first report, a major breakthrough; and make sure the story contradicts prior studies or the current consensus on the topic.

Now, don’t get me wrong.  There are breakthroughs in science, there are major discoveries, and there are discoveries that completely redefine our understanding of some topic.  They deserve to be highlighted, shouted from the hills.  But every single article coming out of a scientist’s research lab?  No!  We all occasionally do confirmatory work, or routine baseline work needed to prepare for the breakthrough.  Papers reporting such work do not deserve a Hollywood treatment with searchlights in the sky and a stirring theme by John Williams.  Unfortunately, in our efforts to generate buzz about everything we do, we are creating a dull background drone.  We are also misleading the media and the audience.

The media

The advent of social media has been tough on the traditional media, or more specifically the professional media.  The number of career journalists has almost certainly shrunk, the demands on their time are greater, and the degree to which they can afford to specialize in complex topics such as science has decreased.  The pressure to publish quickly has led to verbatim, line-for-line reporting of the press releases created by those hyping scientists who are not very good at telling stories.  When journalists do have the time to write their own words, they latch onto the most sensational claims by the scientists and create the semblance of journalistic balance by citing one source from each side of any apparent controversy (journalists too are competing to be heard).

I’m not sure whether the science community deserves the greater part of the blame for the sorry state of science reporting because of our willingness to claim inflated importance for our work and to stress how it contradicts prior understanding, or whether the journalists are primarily to blame for taking the bait we feed them hook, line and sinker.  Between the two of us, we have made a mess of the reporting of scientific stories.

This mess is made worse when the topic, as is the case for climate change, impinges on the perceived vested interests of powerful individuals and corporations.  To protect their interests, these economically powerful entities have joined the communication effort with their own, often well-crafted, press releases and stories all designed to raise doubt concerning the scientific consensus or the degree of certainty of current scientific conclusions.  Many journalists, ill-equipped to discern scientifically shaky claims, incorporate into their own stories the material fed to them by these professional deniers.

My little survey

Last month I undertook a quick survey of recent media reports concerning coral reefs.  This was not a scientific survey, but a quick skim using Google, hunting out interesting articles much as would anyone attempting to keep up with what the media were saying.  I gave preference to well established print media, and reputedly ‘authoritative’ news sources including some web-only ones.  Here’s what I found.

On 28th April, Nature published the latest in the series of papers by Terry Hughes, James Cook University, and colleagues arising from their study of the 2016 bleaching on the Great Barrier Reef.  This one included 16 authors, mostly from Australia but including three from NOAA’s Coral Reef Watch program in the US.  This is an important and authoritative account that focuses on the pattern of coral mortality in terms of extent of heating and of coral taxonomy, and on the longer-term consequences in terms of ecosystem structure and function.  They show that some heat-sensitive taxa died from the direct impacts of warming, that others died some time after loss of their symbionts due to the physiological impairment that resulted, and that still others died still later due to secondary mortality factors such as disease that were facilitated by the deteriorated condition of the corals following bleaching.  They paint a bleak future in which the Great Barrier Reef will substantially reorganize itself (in terms of species composition and relative abundance, and of ecological process) in an altered, warmer world, and conclude: “The large-scale loss of functionally diverse corals is a harbinger of further radical shifts in the condition and dynamics of all ecosystems, reinforcing the need for risk assessment of ecosystem collapse, especially if global action on climate change fails to limit warming to 1.5–2 °C above the pre-industrial base-line”.  (One has to read that sentence carefully to realize they have moved from talking about reefs to talking about all ecosystems on the planet.  They left it till a single closing sentence!)

This pair of images shows the congruity between extent of coral mortality during the 2016 bleaching event (left image) and the heat exposure (as degree-heating weeks) immediately prior (right image).  Figure © T. Hughes & Nature.

This article had first appeared online at the Nature site on 18th April, and Hughes had provided a press release.  Science Daily put the press release up on its site the same day.  The Atlantic printed an article by Robinson Meyer that depended heavily on the press release.  It is scientifically accurate and captures the main points of the Nature article, but the science is so wrapped in poetic metaphors that I think many less-informed readers would come away confused.  It begins “Once upon a time, there was a city so dazzling and kaleidoscopic, so braided and water-rimmed, that it was often compared to a single living body. It clustered around a glimmering emerald spine, which astronauts could glimpse from orbit. It hid warm nooks and crannies, each a nursery for new life. It opened into radiant, iris-colored avenues, which tourists crossed oceans to see. The city was, the experts declared, the planet’s largest living structure.”  A good thing that paragraph follows the simple title, “Since 2016, Half of All Coral in the Great Barrier Reef Has Died”; otherwise who could guess what Meyer meant!  (One of the challenges in conveying science is to make the story interesting without losing the reader in the process, and I’m not sure such dense metaphors help.)  Meyer skips easily from “a kind of invisible wildfire” which “mercilessly ravished the city” to discussion of topics as esoteric as “degree-heating weeks” (his italics), and back again, but does manage to avoid distorting the science.

Peter Hannam, science reporter at the Sydney Morning Herald, also made use of the press release in reporting, also on the 18th April.  He picked up on Hughes’ casual reference, when interviewed, to the reef being ‘cooked’ and this became the first word of his title:  ‘Cooked’: Study finds Great Barrier Reef transformed by mass bleaching.

On 19th April, things began to go downhill.  Graham Lloyd, environment editor at The Australian – flagship of Rupert Murdoch’s News Corp – drew upon the press release but put his story under the title “Not all scientists agree on cause of Great Barrier Reef damage”.  He quoted Jochen Kaempf, Flinders University, as saying “the claimed link between the 2016 heatwave and global warming has no scientific basis”.  This quote was out of context and concerned the detail of whether the anomalously warm sea surface temperatures in the reef region in 2016 were a direct consequence of climate change or not (in much the same way, scientists can rarely be certain that a specific storm, or a particular run of warm weather, was a direct consequence of climate change rather than an example of the high variability that characterizes all weather).  Kaempf supports the idea that climate change is degrading coral reefs and was one of 154 scientists who signed an open letter to the Australian prime minister in August 2016 protesting that government’s failure to tackle greenhouse gas emissions seriously.  The Cairns Post picked up on Lloyd’s creativity by generating an article the following day under the heading “Link between Great Barrier Reef bleaching and global warming “has no scientific basis”: researcher”.

Also on 19th April, Mikhail Matz, University of Texas, Austin, with three colleagues in Texas and Australia, published an article in Plos Genetics concerning whether or not corals had the ability to adapt rapidly to warming.  They had used information on genetic variation in the common and widespread Great Barrier Reef coral, Acropora millepora, and biophysical models of coral propagation along the length of the reef, to explore whether putative genes providing tolerance to the warmer, more tropical waters of the northern GBR might be transmitted southward as climate warmed over the next 50 to 100 years.  Their modeling results suggested that less tropical populations of this species of coral did have the capacity to evolve greater warmth tolerance in that way.  The media, being the media, promptly turned their attention in this new direction.  Pacific Standard, the California-based newsmagazine, published an article on 19th April with the optimistic title, “Corals Can Withstand Another Century of Climate Change”.  It drew on the Plos Genetics article, while avoiding mention of the fact that Matz and colleagues had looked at a single widespread species in a modeling exercise which showed that species might succeed given continuation of current warming rates.  The reporting is correct, and yet, by conflating ‘coral’ and ‘coral reef’, it suggests that reef degradation is no longer a problem.  For the reader who reads carefully, the article concludes with a final quote from Matz, “The only thing which actually will solve the problem is to stop climate change”, but many readers do not read carefully.

And so it goes.  Mikhail Matz was one of several coauthors of an article published in Proceedings of the National Academy of Sciences on 25th April.  That article, headed by Phillip Cleves of Stanford University, essentially demonstrated that Acropora millepora, like other organisms, could be manipulated genetically using the CRISPR technology.  To do so, they had to collect newly fertilized coral eggs during the brief 1-2 day spawning window.  This very preliminary study was reported in The Independent under the heading “First genetically engineered coral created to help save reefs from climate change”, but that is far from what Cleves’s team had done.  They had achieved changes in a couple of genes that have nothing to do with tolerance to warming.  They held out the hope that in future, scientists would be able to identify genes responsible for heat tolerance or bleaching, and then use CRISPR to deliberately manipulate them.

Meanwhile, on 27th April, the Huffington Post website reported “The Dangerous Belief That Extreme Technology Will Fix Climate Change”.  That report was prompted by a small conference held at Harvard University earlier in the year.  It includes information on the likely cost (a few billion dollars) of solar geoengineering involving the delivery of sulphur dioxide to the stratosphere over a 10-year period and expresses concern that at such a relatively modest cost, there is little to stop rogue nations or individuals undertaking such action without sufficient preliminary risk analysis.

The risk is real.  We belong to a culture that has convinced itself, through numerous past successes, that technological fixes exist for all problems and it’s only a matter of time before we will find the fix for climate change.  The search for such a fix removes the urgency to undertake serious emissions reduction.  The Huffington Post article is timely.  It also reports that Harvard University already has an interdisciplinary Solar Geoengineering Research Program.  That program functions at present to encourage discussion and evaluation of such possibilities, but it could easily morph into a program to undertake such activities on our behalf.

The New York Times reported on 9th May that “Australia Pledges Millions of Dollars in Bid to Rescue Great Barrier Reef”.  This concerned a pre-election announcement by the Australian government that has been broadly criticized by the science community, given that government’s refusal to tackle climate change in any meaningful way.  This controversy was covered in the NYT piece, but a quick skim of the first, largely laudatory, half of the article would lead a reader to believe the Australian government was attending competently to the Great Barrier Reef’s problem.  (One irony not mentioned in the article – the funds are going not to the strong Australian reef science community, nor to the management agency responsible for the reef, but to the Great Barrier Reef Foundation, a body without the capacity to do research or management, one led by people with ties to the oil industry.)

Also on 9th May, a headline advised, “Marine protected areas help coral reefs survive climate change”, based on another new technical article, this time in Science Advances, by Bob Steneck, University of Maine, Pete Mumby, University of Queensland, and three others.  Their article reported on their detailed survey of protected and unprotected sites across the eastern Caribbean.  Mumby and Steneck selected only MPAs that were well-managed and actually succeeded in reducing fishing effort (there are many MPAs that have no measurable effect on fishing pressure at all).  These effective MPAs were compared to comparable reef sites open to fishing in a survey that included abundance and size distribution of larger groupers and snappers, and of parrotfishes, abundance of benthic turf algae, and abundance of young coral recruits.  The results showed that effective fisheries management via an MPA affected all of these, but that the extent or power of the effect was reduced at each step from the harvestable fish, to the algae, to the recruiting corals.  The idea that protecting parrotfishes and other herbivores on reefs will reduce algal populations, thereby permitting more effective settlement and growth of recruited corals is logical, but evidence supporting it has been weak even in the Caribbean where it may be most relevant.  This article establishes that the chain of hypothesized processes does work as expected and MPAs, properly managed, could enhance coral recruitment in places where otherwise algal growth will outcompete the corals.  It’s an important article.  But it certainly does not claim that MPAs help reefs confronted by climate change; that claim belongs to the headline alone.

Subsequent headlines I found were “Great Barrier Reef’s five near-death experiences revealed in new paper” (Sydney Morning Herald, 28th May), “How Justin Trudeau and Jerry Brown Can Help Save the Great Barrier Reef” (The New Yorker, 30th May), “World’s largest coral reef farm set for Fujairah” (Gulf Today, 1st June), and “Coral decline in Great Barrier Reef ‘unprecedented’” (The Guardian, 5th June).  These referred respectively to a new study of the geological history of the Great Barrier Reef over the past 30,000 years; the fact that political leaders of Australia, Canada, and California are doing little to reduce production of fossil fuels – the one essential action to assist coral reefs; a routine announcement of a new business enterprise in the UAE to farm corals commercially (the report was not clear on the uses to which the farmed coral would be put); and the release of the annual report from the long-term reef monitoring project run by the Australian Institute of Marine Science, a 30-year record of coral decline on the Great Barrier Reef.  None of these directly relate to the 2016 bleaching or to the effects of climate change on coral reefs, but if one just scans headlines they suggest, respectively, that the GBR has nearly died five times, that the likes of Justin Trudeau and Jerry Brown can save it, that farming of corals is under way in the Middle East, so all is now well, and finally that the deterioration of the GBR is unprecedented.

Given my survey, I’m not surprised at all that the average everyperson is a bit confused about what is happening to coral reefs.  The sequence of headlines bounces us back and forth from despair to optimism, journalists have been seduced by the hype in the scientists’ press releases, errors have been made in interpreting the science, and in some cases (The Australian, Cairns Post) there has been a deliberate effort to mislead.  I think we should be able to depend upon the media to do a better job than this.

The audience

And then there are the readers, the everypersons who, in talking to one another, create public opinion; who vote; who support (or not) government actions on climate change.  They also must bear part of the blame for the failure of communication, although in their case the blame is tempered.  They may be shirking the responsibility to be well informed and contribute effectively as members of their societies, but it is the education available in those societies and the cultural norms that have left many of them less able than they might be to evaluate the news provided by the media.

In advanced western societies today, one can generally say the following about the people:  The ability to evaluate news critically is weaker than it should be.  The sense that understanding the issues of the day is an important part of citizenship is poorly defined.  The capacity to discriminate fact from hypothesis or to spot the logical fallacies in an argument is more limited than it should be.  The distinction between belief and fact is poorly recognized.  And the idea that there are fundamental truths and absolute impossibilities is increasingly being questioned.

Add to these problems the fact that people are increasingly completing their formal educations with little retained ability to deal with quantitative data or to recognize the difference between linear and exponential patterns of change.  It’s not surprising that comprehending the scientific complexity inherent in any environmental science story becomes very difficult, even for the person trying hard to comprehend.  Add in also the special facts underlying any coral reef story:

  • Coral reefs are biogenic rocky masses that are dynamically balanced between rates of calcification by corals and some other reef organisms and rates of reef erosion due to wave action, storms, and action of numerous bioeroding species that drill into, dissolve, or bite off chunks of reef rock while consuming the algae that live in its surface layers.
  • Corals and coral reefs are entirely different entities despite the fact that bleaching is a response by corals that has direct consequences for reef degradation.
  • Corals are the major calcifying organisms on coral reefs, but they depend on an intimate symbiosis with minute photosynthesizing dinoflagellates that live within the coral’s tissues. Physiological stress, such as that caused by warmer than usual water, breaks down this symbiosis, and without their dinoflagellates the corals are compromised and may die.
  • Coral cover is a standard measurement used to quantify the abundance of living coral on a reef, and loss of coral cover is a measure of the extent of coral death caused by (e.g.) a bleaching event.
  • ‘Death’ of a reef is a colloquialism referring to severe reef degradation because a reef is not a single organism capable of dying, but a collection of many organisms each of which may die. When many corals on a reef die it is common to speak of the reef as now ‘dead’ – it has lost substantial coral cover, but it will ‘recover’ if recruitment of new juvenile corals and growth of any corals that did not die substantially restore its level of coral cover.
  • There are many factors that can degrade coral reefs by reducing their coral cover: excessive warming, severe storms, outbreaks of the crown-of-thorns starfish, numerous coral diseases, siltation, coastal pollution, and sea level change are some of them. These factors can act together or separately, and their severity can differ among locations and times.

Such ideas (this list is incomplete) are part of the unspoken fundamental knowledge possessed by any reef scientist or manager, and by many other people, but individuals lacking this knowledge will find media accounts of what is currently happening to coral reefs difficult to interpret.

Image © PhD Comics.

Nor should we expect the everyperson to know the details of coral reef ecology, yet articles in the media overflow with such details, presented in ways that do not help the reader get the gist of what is happening.  Poorly equipped to understand the scientific details, buffeted by sensational headlines, whipsawed back and forth between despair and optimism, is it any wonder that for most readers the prevailing coral reef message is that “reefs are being harmed, scientists are making discoveries, there is concern, but there is also reason for optimism”?  That is not a particularly interesting story, and certainly not one that will keep the everyperson’s attention.  If you do not depend directly on a coral reef, it’s just another just-so nature story.

We can all do better

I don’t pretend that the discoveries arising from my half hour with Google are definitive, but I think they are representative of what would be found by an interested everyperson attempting to understand the coral reef crisis.  I recognize that there exist many gifted scientists able to communicate effectively with the wider public, and journalists able to read the technical literature critically and create factual yet interesting stories for the wider public.  I know there are members of the public who genuinely want to understand their world.  But I also know that the state of communication of the coral reef story can be improved substantially.  If anything, there are too many stories in the media that delve deeply into the nitty gritty of particular scientific studies, and too few that provide the needed overview and a wider perspective.

There are two parts of the coral reef message that deserve wide promulgation.  The first concerns our current understanding of the immense value of coral reefs biologically, economically and esthetically.  It deserves more than the reporting of some facts and figures about numbers of dependent people, contribution to GDP, and some vague waffle about solace for the soul.  I think the case can be made that we have an obligation to humanity, and a moral obligation to the planet as well, to act to minimize our unintended negative impacts on coral reef systems.  The second concerns the canary connection between the effects of climate change on coral reefs and the concern of many environmental scientists that human activities have begun to shift the planet beyond the planetary boundaries that define a ‘Holocene-like’ environment.  For me, this connection is ultimately what makes the ‘coral reef message’ deeply troubling, because a non-Holocene world is likely to prove a very difficult place for our civilization to continue to prosper.  We have it within our power to address the size of our footprint on this planet, yet we are changing our behavior far too slowly.  A concerted effort to convey both these parts of the coral reef story to the everypersons, using effective story-telling techniques, could be far more effective in raising awareness and concern about the current decline of coral reefs, and in building understanding of the perils we are currently creating for ourselves around the world.  The societal changes needed are unlikely to occur without this.

Most of us remain blissfully unaware that we have left the Holocene for parts unknown.  The coral reef story gives us a preview of how things may turn out.
© David Pope/Canberra Times


Our Human Condition – Trapped by the Familiar. It’s why Governmental or Economic Decisions are So Often Wrong when Environment is Involved.


The current kerfuffle over the expansion of the Kinder-Morgan pipeline that ships tar sands crude from Alberta to an export terminal on the coast of British Columbia is a sorry story.  It demonstrates that most leaders seem incapable of looking outside the box, never mind acting outside it.  Anthropocene times require out of the box thinking and action.  Canada, like the rest of the world, sits within the Anthropocene yet carries on as if we still inhabit the tranquil world of the Holocene.

Lots of Kinder-Morgan pipe waiting to go into the ground.  Photo © Chris Helgren/Reuters

The Anthropocene – Not your Grandfather’s Holocene

As anyone who has read this blog regularly knows, I believe humanity is in the midst of a potentially existential environmental crisis – like many things environmental, it is a slowly moving crisis by human time scales, but also an inexorable one.  And it is a crisis almost entirely of our own doing.  Climate change holds center stage just now, but this crisis includes a number of aspects beyond climate change.  All of them need addressing, and the need to address them quickly grows steadily more urgent.  It’s a lot to demand of a naked ape whose entire history of civilized progress, from the earliest agriculture to our first tentative ventures out into space, has taken place in the benign paradise we named the Holocene.

Ah, the Holocene.  Those were the days, when one could deliberate, and re-deliberate, year after year, confident that the problem being deliberated about, while still present, was not going to get substantially worse.  Hell, sometimes, if one deliberated long enough, the problem went away all on its own.  Sea level has been essentially static for the last 8000 years.  Alpine glaciers reliably stored water for slow release into the headwaters of most of the major rivers of the planet.  Monsoons came predictably enough that a monsoon failure was a super big deal unlikely to happen several years in a row.  Local fisheries collapsed from being overfished, but there were always new fisheries around the next headland waiting to be used, and collapsed fisheries sometimes recovered.  There were good years and bad ones, sometimes times of real hardship, but the world was a dependable place that provided, by and large, dependable weather, adequate food, the other resources we needed, opportunities to prosper.

Over time we have removed much of the forests, stripped most of the fishes out of the oceans, caused the extinction of large numbers of organisms, turned major tracts of land into monocultures of fertilizer-dependent crops, and scoured vast areas of seabed clean by dragging and re-dragging fishing gear across them.  We have redistributed organisms across the planet, sometimes to our own considerable inconvenience, and through imperfect health policies, have encouraged the evolution of pathogens immune to most of the remedies we can throw at them.

As well, we have made a mess from time to time, littering the world with left-over and waste items, some organic, many not.  For many years, the solution to the problem of littering was simply to move away, or move the litter away – ‘the solution to pollution is dilution’ was a mantra that came into use 4000 years ago, when people in India and Crete independently discovered how to use water to flush human wastes through drain systems and away.  It proved efficacious in most circumstances until our cities became so large that we needed mechanical and microbial ways of hastening decomposition before diluting in a river or the ocean.  Now, with a population exceeding 7 billion, and with an out-of-control chemical industry inventing novel compounds at a pace that defies efforts to screen them for safety before release into the environment, this mantra frequently seems insufficient.

Today, our tendency to pollute is poisoning the soil, deoxygenating the coastal ocean, and altering the composition of the atmosphere in ways that are modifying our climate.  Some places are worse off than others, but there is now nowhere on this planet where evidence of us, in the form of our messes, does not exist.

We are Trapped in a Box

In the Anthropocene, all these problems become progressively worse as our population grows and our mean standards of living increase.  And our pollution of the atmosphere is causing climate to change at a pace that has seldom if ever been seen before, and certainly not in the couple of million years since our species first appeared on the planet.  Now that we have photographs of our planet from space that tell us, wordlessly, that it is finite and alone – the only place in the reachable universe that can support our species – and now that we have a reasonably good understanding of the more obvious environmental and ecological consequences of most of the bad things we are doing, one might expect the ‘wise man’ (Homo sapiens), or at least the leaders of groups of such wise men, to recognize this environmental crisis as important, a problem to solve, even an existential problem requiring urgent solution.

Except Homo sapiens is clearly mis-named.  Because we do not readily think outside the box.  Our societies, whether relying on democratic, feudal, socialist or other governance, have histories rooted in the Holocene, when the planet provided a dependable environment in which to plan and execute our noble enterprises.  Most of us, once we become powerful enough in our own eyes to presume we can control nature, have treated nature as the set of places, creatures and things available for us to use.  Sort of like a giant supermarket, full of items to pick up and carry out, with no cash registers in sight.  (This supermarket even has places where we can dump our left-over, unwanted, or otherwise useless things, free of charge.  Quite a wonderful place really!)

Because the natural world has been dependable in providing us with goods and services, and because we have built economic systems which ignore the costs of using nature as we do, governance (both political and business) has evolved, under all political systems, to maximize short-term and personal gains, minimize short-term and personal costs, and assume the environment will somehow take care of itself no matter what we do.  This approach to governance leads to a political process that seeks to find consensus (also known as reaching a compromise) among competing entities within society, in which the short-term and personal interests of all sides are met to some degree – the so-called win-win solution.  Possible wins or losses by nature are irrelevant to the process.  Governance has also developed procedures which can be guaranteed to take a long time, simply because the longer you spend deliberating the more likely it is that one side will walk away, making the solution politically painless for those who remain.  In societies in which governance is achieved through the election of individuals to offices of limited tenure, the tendency to think short-term is amplified.  Few leaders think long-term, well beyond the end of their own mandate in office, especially if the long-term benefit will become apparent only at the end, while costs will be more immediate.  In other words, because of the ways in which we have traditionally treated the natural world, and because of the nature of decision making in our societies, we have learned to ignore environmental damage whenever preventing that damage cuts into the personal and short-term benefits of leading actors in the society.

In the Anthropocene, the risks to our societies of continuing to make decisions in this way are likely to rise to the point when they can no longer be tolerated.  Unfortunately, with our gift for short-term thinking, and the slow pace at which environmental problems usually develop, we are slow to learn, and will likely have to experience a number of ‘intolerable’ environmental crises before we mend our ways.

Kinder-Morgan Trans-Mountain Pipeline Twinning Project

All of which brings me back to Kinder-Morgan.  It was only a few years ago that the twinning of the existing Trans-Mountain pipeline owned by the giant US company, Kinder-Morgan, was one of about four new pipeline proposals for shipping tar sands bitumen off to refineries or ports.  All were deemed ‘essential’ to the future well-being of the Alberta economy, and to prosperity across Canada, because projections were for tar sands production to grow substantially.  In a 2008 report on production, the Canadian Association of Petroleum Producers (CAPP) projected 2020 tar sands production to be 3.5 million barrels per day (MMBD), up from 1.2 MMBD in 2007.  Earlier predictions had been for 3 MMBD by 2015.  The rosy predictions conjured images of the Alberta landscape covered by row upon row of barrels waiting to be shipped out; clearly, we needed lots of new pipelines to keep up with production.

CAPP forecast for tar sands production to 2030 as of June 2017.  Image © CBC News.

As things have turned out, future production estimates have been scaled back.  2015 came in under 2.4 MMBD, and CAPP is now predicting 3.1 MMBD by 2020 (and 3.67 MMBD by 2030).  The talk about three-fold increases seems to have dried up, although CAPP is still talking growth.  (Interestingly, the National Energy Board is more optimistic than CAPP, the industry spokesman!)

As well, industry spokespeople argued for the need for pipelines to ‘tidewater’, meaning anywhere except the US Gulf Coast, as a way of combating the price differential between Alberta crude and other North American or world sources.  By getting product to the east or west coast of Canada, the argument went, Alberta suppliers would be able to get their oil to market at a price nearly on par with the Brent crude benchmark.  In addition, following the 2013 rail accident, explosion and fire in downtown Lac Mégantic, which claimed 47 lives, a new argument for more pipelines appeared: pipelines are safer than trains.  (Most people forget the Lac Mégantic accident had nothing to do with tar sands oil.)

Given all these arguments, anybody who questioned the necessity of all these new pipelines was dismissed as a head-in-the-clouds greenie who simply does not understand economics.  And yet, several knowledgeable people have questioned the necessity (and I have blogged about it here and here).  Among the doubters is David Hughes, an earth scientist who spent 32 years with the Geological Survey of Canada, and an authority on global and North American energy and sustainability issues (see an early comment here).  In May 2017, he authored a detailed assessment of Kinder-Morgan in a Canadian Centre for Policy Alternatives (CCPA) and Parkland Institute report, Will the Trans Mountain Pipeline and Tidewater Access Boost Prices and Save Canada’s Oil Industry?  This report is well worth a read.  In it, Hughes tackles each of the arguments in favor of new pipelines and finds them wanting.

National Energy Board’s anticipated growth in tar sands production, as constrained by the Alberta GHG emissions cap.  The reference, or average expected, case (black line) becomes constrained by the cap in 2025, the ‘high price’ trend in 2023.  Capping of production necessarily reduces future need for pipeline capacity.  Figure © CCPA/Parkland, based on NEB data.

Production estimates for the tar sands have been consistently optimistic, and Kinder-Morgan used estimates even higher than those used by the National Energy Board when articulating the need for the twinned pipeline.  The NEB predictions are higher than those from CAPP.  In 2016, NEB projected tar sands production would most likely reach 4.25 MMBD in 2030 and 4.8 MMBD by 2040, with the possibility of it reaching 5.3 MMBD if prices were unexpectedly high; Kinder-Morgan projected 5 MMBD by 2038.  Furthermore, the emissions cap introduced by Alberta, if obeyed, will prevent that degree of growth (assuming present-day emissions per barrel of product), and both the reference case and the high price possibility are constrained to 3.2 MMBD.  The emissions cap is hit in 2025 for the reference case.  With almost two million fewer barrels per day to move than Kinder-Morgan was predicting, the need for additional pipeline capacity no longer exists.
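The arithmetic linking an emissions cap to a production ceiling is simple enough to sketch.  The 100 Mt CO2e/yr figure below is Alberta’s legislated oil-sands emissions limit; the per-barrel emissions intensity is an illustrative assumption, back-calculated so that the result reproduces the ~3.2 MMBD ceiling cited above (it is not a number taken from the NEB or the CCPA/Parkland report).

```python
# Back-of-envelope sketch: how an annual emissions cap constrains production,
# assuming present-day emissions per barrel stay constant.

EMISSIONS_CAP_MT = 100.0        # Mt CO2e per year (Alberta's oil-sands cap)
INTENSITY_KG_PER_BBL = 85.6     # kg CO2e per barrel (assumed, illustrative)

def production_ceiling_mmbd(cap_mt: float, intensity_kg_per_bbl: float) -> float:
    """Maximum production (million barrels per day) consistent with the cap."""
    kg_per_year = cap_mt * 1e9                     # Mt -> kg
    bbl_per_year = kg_per_year / intensity_kg_per_bbl
    return bbl_per_year / 365 / 1e6                # barrels/yr -> MMBD

print(round(production_ceiling_mmbd(EMISSIONS_CAP_MT, INTENSITY_KG_PER_BBL), 1))
# -> 3.2
```

The point of the sketch is the shape of the constraint: unless emissions intensity per barrel falls, the cap puts a hard ceiling on production well below the 5 MMBD Kinder-Morgan assumed.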

Hughes also demolishes the supposed price differential caused by the ‘landlocked’ status of Canada’s bitumen.  He shows convincingly, by graphing the historical trend in prices, that the differential that existed between West Texas Intermediate and Brent crude prices in 2012 and 2013 (when the Kinder-Morgan expansion was proposed) no longer exists and had not existed in the years prior to 2011.  Fact is, the tidewater price at Houston is not significantly different from that at Vancouver or Halifax.

The differential in oil price that bringing tar sands product to ‘tidewater’ is supposed to correct – except there has only been a differential for a few years around 2012-13.
© CCPA/Parkland.

As for the argument that additional pipeline capacity was needed to reduce the risk of shipping oil by train: when the math is done, as Hughes reports, taking account of the effects of the Alberta emissions cap on tar sands growth, there is surplus pipeline capacity even now.  New pipelines are not needed for this reason either.

The interesting thing about Hughes’ argument, and he has made it a number of times, is that it makes no mention of the need to reduce emissions and other forms of environmental damage, other than factoring in the effects of Alberta’s very weak cap on emissions.  His arguments are based on the same economic and business cases that are used by proponents for every new pipeline being proposed.  And similar arguments have been made by others.  The business case is without merit.

If we add in a serious desire to improve the environment and reduce the risk of climate change, the need for any additional pipeline capacity evaporates completely.  Canada’s current commitment under Paris (which we are not yet meeting) is woefully inadequate if climate change is to be kept under 2°C, and the existing Alberta cap on tar sands emissions barely constrains expansion.  Canada is going to have to do substantially more to reach its weak climate goals, and very much more to meet real goals for emissions reduction.  We can choose not to do this because we cannot afford the dent to our economy that winding down the tar sands would cause.  But that would ensure we are recognized permanently as a climate slacker, and would no doubt bring negative repercussions if we dug in our heels.

I’m not going to belabor the environmental argument here; I’ve done so several times already and there are plenty of other sources of this information.  We are not going to be able to reduce emissions sufficiently to do our share to keep climate change below 2°C if we also permit tar sands production to grow to the limit set by the Alberta cap.  This is not politics or economics, it is science.  The two goals are incompatible in this universe.  But, of course, our politicians are trying to treat this incompatibility like any other political problem.  And therein lies abject failure.

PM Justin Trudeau has staked his future on living up to Canada’s climate commitments (it would be a first for this country), and on sustaining Alberta’s fossil fuel economy.  He doesn’t seem to wonder if all those talented people employed in the tar sands might be able to do something useful that does not involve massive increases to our emissions.  Maybe he should read the recent Globe & Mail op-ed by Jeffrey D. Sachs, a professor at Columbia University and director of the Center for Sustainable Development and the UN Sustainable Development Solutions Network.  In his 13th April article he articulated a vision of Canada undertaking the infrastructure development to more fully integrate its electricity grid, both across Canada and between Canada and the US.  We could then export emissions-free electricity, produced chiefly from our ample hydropower and other non-fossil sources, including nuclear, into the US energy supply, aiding their decarbonization while continuing our own progress in that field.  To what he wrote I would add a massive expansion of wind and perhaps solar farms in Alberta to further this effort.

Premiers Notley of Alberta and Horgan of British Columbia are also trapped by not looking outside the box.  Notley introduced a tepid cap on tar sands production that will not likely kick in until 2025, but now faces stiff opposition on her right because she has gone too far down a climate change path.  Trudeau promised her a pipeline if she’d do the right thing on climate and now it looks like the pipeline may not happen.  In British Columbia, John Horgan leads a minority government that opposes increased oil shipments out of their ports, or pipelines across their iconic landscape; he is propped up by the Green Party who are even more belligerently opposed to pipelines.  The conventional political compromise ain’t going to happen.

Meanwhile, preventing the pols from thinking carefully and deeply (yes, they do, sometimes) is a cacophony from the business sector arguing that if Kinder-Morgan is not built the world as we know it will come to an end (a rough translation of their perspective on the hit to Canada’s reputation as a place to do business).  While I understand that countries must provide a dependable environment for investment, I seriously doubt that the failure to build a pipeline in British Columbia will stop economic sectors other than those engaged in fossil fuel digging, processing and shipping, from continuing to see Canada as a nation of laws and reliable governance.  Would Google really have second thoughts about investing in Canada if Kinder-Morgan goes down?  Really?  Might not some economic sectors take renewed interest in Canada as a land which values its environment sufficiently to leave the tar sands in the ground and build opportunity in other ways?  And Kinder-Morgan’s signs of cold feet are surely an indication that the business argument for the pipeline is not quite as wonderful for all concerned as they claimed when making the application to build.

So, what is the outcome from the meeting in Ottawa between Trudeau, Notley and Horgan last weekend?  A typically political solution.  They could not find a consensus, but Trudeau and Notley agreed they could use national and Alberta tax monies to seduce Kinder-Morgan to go ahead, even though the regulatory battles will continue.  This is not an intelligent plan.

Meanwhile Canadian government websites continue to put as shiny a lipstick as possible on Canada’s appallingly weak progress on the climate front, and Trudeau’s hard-working Minister of Environment, Catherine McKenna, has until now been assuring Canadians that we are ‘on track’ to achieve our climate goals, while avoiding niggling issues like the 66 Mt CO2 emissions gap that still exists between Canada’s 2030 target and the realistic 2030 projection of emissions.  It’s a gap for which Ms. McKenna has only waffle words.  Her words in a mid-March interview show that she fully understands what is happening; once more Canada will put false tar sands economics ahead of environment and fail to fulfill its UN commitments made in those heady days in Paris.  All because everyone is staying carefully inside the box – a box where big investments in dirty things you can dig up and sell are more important than an environment worth living in.

Let’s Talk about the Oceans

Just to ram that last point home, last week’s copy of Nature included two research articles, an overview, and an editorial all talking about the erratic behavior of the AMOC.  AMOC is not some woolly-coated, brown-eyed, cuddly creature that lives in the Himalayas; AMOC is the Atlantic Meridional Overturning Circulation, and while many people have never heard of it, they should be paying attention.  The AMOC is slowing down.

Diagram of the AMOC prepared by Levke Caesar for the press release accompanying the Nature article.  Surface currents are in red, deep currents in blue.  The cool area in the North Atlantic marks the subpolar gyre, the region where surface water sinks to deeper layers; variation in sea surface temperature in this region appears to be a useful proxy for the strength of the AMOC.  Image © L. Caesar/PIK.

The AMOC transfers vast quantities of surface water, first carried to the North Atlantic on the Gulf Stream, to the ocean depths and back towards the tropics.  It happens because the warm, salty Caribbean water cools as it moves north until, aided by its saltiness, it becomes dense enough to drop below the North Atlantic water.  This vast waterfall within the ocean drives the Gulf Stream and the major ocean circulation system that ensures that oxygen gets carried down to the depths, and that heat is transferred from the tropics to the temperate zones.  Back in the 1950s marine scientists began to work out the giant oceanic circulatory currents, and their role in determining climate.  Greenland ice core data suggested these current systems sometimes changed radically and suddenly, triggering abrupt climate changes.  The AMOC was relatively strong and stable during the Holocene, but in 2005 a report in Nature described an apparently weakened AMOC, raising the possibility that it might be becoming unstable.  Work since then has revealed that its strength fluctuates in complex ways.  The two research articles in last week’s Nature confirm that the AMOC is now about 15% weaker, especially in winter and spring, which slows the flow of surface water to the depths.
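The sinking mechanism described above – cooling making salty water dense enough to drop below its surroundings – can be sketched with a simplified linear equation of state for seawater.  The coefficients and the water properties used here are rough illustrative assumptions, not values from the Nature articles.

```python
# Simplified linear equation of state for seawater:
#   rho ≈ rho0 * (1 - alpha*(T - T0) + beta*(S - S0))
# Coefficients are rough mid-latitude values, for illustration only.

RHO0 = 1027.0          # kg/m^3, reference density
T0, S0 = 10.0, 35.0    # reference temperature (°C) and salinity (psu)
ALPHA = 2.0e-4         # thermal expansion coefficient (1/°C), assumed
BETA = 7.6e-4          # haline contraction coefficient (1/psu), assumed

def density(temp_c: float, salinity_psu: float) -> float:
    """Approximate seawater density (kg/m^3) from temperature and salinity."""
    return RHO0 * (1 - ALPHA * (temp_c - T0) + BETA * (salinity_psu - S0))

# Warm, salty Gulf Stream water after cooling on its way north...
cooled_salty = density(5.0, 36.5)
# ...versus colder but fresher ambient North Atlantic surface water.
ambient_fresher = density(4.0, 34.9)

# The cooled salty water ends up denser, so it sinks – the 'waterfall'.
print(cooled_salty > ambient_fresher)
```

In this toy model, lowering S at fixed T reduces density and weakens the tendency to sink – which is why the fresh meltwater from Greenland, mentioned below, is the suspected brake on the AMOC.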

The two articles, one by Levke Caesar, Potsdam Institute for Climate Impact Research, Germany, and 4 colleagues from Europe and the USA, and one by David Thornalley, University College London and Woods Hole Oceanographic Institution, and 11 colleagues from Europe and North America, use radically different approaches to confirm this marked slow-down.  The teams’ results differ in the estimated date of on-set, with Thornalley’s team suggesting this pattern began around 1850 and Caesar’s group pinning the change to the mid-20th century.  The discrepancy is as much a comment on the difficulty of doing global-scale oceanographic research as on the different approaches taken.  Thornalley used paleoclimate data over the past 1600 years, while Caesar used high-resolution global climate models and data on sea surface temperature to reveal patterns of change in surface temperatures in the North Atlantic since the late 1800s.  Both teams attribute the slow-down to human releases of greenhouse gases and resulting climate change.  The take-home message for me is that this is one more glimpse of the seriousness of the climate crisis.  The slow-down of AMOC, likely triggered by the copious new cold fresh water being introduced to the North Atlantic as Greenland’s glaciers melt, could have sudden and serious consequences for the climate of Europe or for the Northern Hemisphere.  We don’t know how serious, nor how soon, nor how rapid such climatic changes might be, but they could affect the lives of hundreds of millions of people.

And yet, content in our boxes, not looking out, we continue to make decisions about pipelines as if it was routine politics as usual.  Sometime in a distant future, people are going to look at the money wasted in building unnecessary infrastructure to prop up a fossil fuel industry coming to its natural end, the other vast sums of money wasted protecting cities from rising seas, from catastrophic floods, or from unending droughts, and compare them to the dollars that could have been spent productively building the new, low-emissions, decarbonized economy of the 21st century.  And they will ask, “How stupid were they?  Why could they not recognize what had to be done?  Why did they not work to prevent this difficult, dangerous world in which we now live?”  Guess I’m in my optimistic phase today – thinking there will be people in the future with time to think about such things.

If you don’t think outside the box, you’ll never figure out how to move Canadians towards effective climate policy.  Cartoon © Brian Gable/Globe & Mail.


Catching Cod, or Not… Fisheries Management and the Anthropocene


The people of Newfoundland are a patient lot, but their patience has been further tested this month.  And that test provides me a teaching moment:  Our negative impacts on the world can be halted, but we assume too often that when they are halted, the world will recover, promptly and smoothly, to its former condition.  That is seldom the case.

Cod fishing has been the lifeblood for Newfoundland ever since Europeans began using drying racks on its shores to prepare the fish for shipment home.  Photo © JL Rotman/Corbis.

The good people of Newfoundland, when they are not busy starring in musicals about their well-recognized generosity, mostly wait patiently for the fishing to improve.  They’ve been waiting since 1992.  That’s a long time, and the wait is far from over.  The Department of Fisheries and Oceans has just released a report showing that the northern stock of Atlantic cod, Gadus morhua, listed as Vulnerable on IUCN’s Red List, has declined in numbers for the second year in a row, and remains within what DFO refers to as the critical zone – a population that is so reduced that any fishing mortality at all should be avoided.

The northern stock is the population of cod that occupies the waters off the southern third of the Labrador coast and the eastern coast of Newfoundland, extending out beyond Canada’s 200 nautical mile limit, and encompassing all of the Grand Banks.  This is an immense area that used to provide annual catches in the order of 200,000 to 400,000 tonnes of cod (nearly 800,000 tonnes in 1968 and 1969) until the stock collapsed in the early 1990s and commercial fishing was suspended in 1992.

Annual fishery landings of northern cod, 1958 to 2017 (left) and 1995 to 2017 (right).  2J, 3K, and 3L are the northern, central and southern sectors (NAFO Divisions) of the western North Atlantic occupied by this population of cod (they stretch from the central Labrador coast south to the southern boundary of the Grand Banks).  It is clear from the right-hand graph that the catch has been substantially larger in 2016 and 2017 than in recent years (although still far below the catches prior to stock collapse in 1992).  Graph © Fisheries and Oceans Canada.

DFO analyses reveal that both the natural mortality rate and the mortality rate due to fishing have increased.  Natural mortality (the instantaneous rate at which fish die during the year from causes other than fishing) nearly doubled from 0.39 in 2016 to 0.74 in 2017.  That is equivalent to a change from about 68% of animals alive at 1/1/2016 surviving through 31/12/2016 to just 48% of animals alive at 1/1/2017 surviving through 31/12/2017.  Fishing mortality (the corresponding rate of death due to fishing) increased from 0.014 in 2015, to 0.021 in 2016 and 0.025 in 2017.  Likely reasons for the increase in natural mortality include a falling abundance of capelin and shrimp, major food sources for cod, and temperature changes due to climate change.  (Temperature changes can directly impact the fish by altering metabolic requirements for food, and can affect them indirectly through impacts on food species.)  The increases in fishing mortality are evident in data on fish landings – these increased from 10,000 tonnes in 2016 to 13,000 tonnes in 2017, but would likely have had only a minor impact on the population compared to that of the change in natural mortality.
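Those survival percentages follow from the standard fisheries relationship between an instantaneous mortality rate Z and annual survival, S = e^(-Z).  A minimal sketch, using the DFO natural-mortality estimates quoted above:

```python
import math

def annual_survival(z: float) -> float:
    """Fraction of fish alive on 1 January still alive on 31 December,
    given an instantaneous annual mortality rate z (S = exp(-z))."""
    return math.exp(-z)

# DFO natural mortality estimates for northern cod:
print(round(annual_survival(0.39), 2))   # 2016: ~0.68
print(round(annual_survival(0.74), 2))   # 2017: ~0.48
```

Note that an instantaneous rate is not a simple proportion dying: a rate of 0.74 corresponds to roughly half the fish dying in a year, not three quarters, because the rate applies continuously to a shrinking population.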

DFO reports that spawning stock biomass (the estimated abundance, as biomass, of spawning-age individuals) has declined from 423 kilotonnes in 2017 to 315 kilotonnes in 2018.  That is a substantial drop, and DFO expects the stock numbers to decline further in 2019, based on the lack of spawning-age individuals.  A plot of the estimated number of two-year-old fish since 1983 shows the sorry state of this population since the early 1990s.  Looks like those Newfoundlanders will be waiting a few decades more.

Graph showing decline in status of northern cod (as numbers of two-year-old fish) from 1983 to 2017.  The pronounced crash in 1989-1992 and the slow ‘recovery’ since are both clear.
© DFO Canada.

The Atlantic cod is a moderate-sized fish.  It can reach two meters in length, although it is now seldom seen larger than 90 cm.  Along the northern Newfoundland and Labrador coasts it reaches breeding age at between 5 and 7 years old, at a length of about 35-50 cm, but further south it matures at the same size at just 2 to 3 years of age.  When they spawn, newly mature females produce 300,000 to 500,000 eggs, but a female can produce several million eggs once >75 cm in length.  The eggs are pelagic and hatch into pelagic larvae.  After two and a half months, larvae settle to the bottom and commence a juvenile life in which they are relatively sedentary in waters 10 to 150 m deep, where they make use of sponges and seaweed for cover.  After 1 to 4 years of juvenile life they become more active and wide-ranging mid-water predators.

The Atlantic cod is not just a convenient fishery species.  It sustained the most important fishery in the Atlantic Ocean from the late 15th century until 1992 – 500 years.  That fishery began with ships journeying across from Europe for a season of fishing (or until the hold was full of fish in brine).  The fishery switched to making use of the islands and coastline of Newfoundland as places for summer camps where fish were dried before being packed in salt for shipment back to Europe (less shipping weight per tonne of edible product).  The colonization of Newfoundland, the Maritimes, and the New England states followed – a direct consequence of the cod fishery.  This was a fishery which became a major part of a trade cycle that also moved West Indian sugar and African slaves.  The economic value of the cod fishery is hard to overstate.  But it collapsed in 1992, and 25 years later it seems very unlikely to recover any time soon.

Why Did It Happen?

So, why did the cod stock collapse?  The first thing to recognize is that this is not an isolated occurrence.  Fisheries collapse all the time.  Well-managed fisheries (like the cod fishery) and poorly managed fisheries collapse.  Collapses can be gradual or sudden and can be devastating to the communities that depend upon them – just talk to Newfoundlanders about cod.  The collapse of the northern cod fishery is a part of a wide-spread, multi-species collapse affecting most trawl fisheries off the eastern coast of Canada and the north-eastern USA in recent decades.

It is widely thought that fisheries for long-lived, slow-growing, larger species (such as cod) are more prone to collapse.  These, the argument goes, have less capacity to rapidly recover numbers if overfished because it takes so long from hatching to reproductive maturity that the fishery is seriously depleted before anyone is aware of the problem.  However, recent studies by Malin Pinsky of Rutgers University have confirmed that short-lived, fast-growing, and smaller species are also prone to collapse.  In fact, in his first paper on this topic published in 2011 as he was completing PhD studies at Stanford University, Pinsky examined data on a globally distributed list of nearly 600 fishery stocks for which landings data or formal population assessment data existed.  Of these, about a quarter had collapsed at some point in their fishing history.  He looked at a number of species attributes: longevity, age at maturity, size, growth rate, trophic level, fecundity, egg size.  For none of these was there a significant trend in likelihood of collapse.  The only factor that did show a significant trend was fishing mortality – the proportion of the population each year that succumbs to fishing.

One interesting difference existed between the stocks for which formal assessment data were available and those for which only landings data (records of the catch each year, and perhaps of fishing effort as well) were available.  In the landings data set, there was a slight (though statistically nonsignificant) trend towards collapses being more likely in longer-lived, later-maturing or slower-growing species.  The availability of assessment data indicates that a stock is being managed using fishery science procedures, but many fish stocks are not managed this way, and those less scientifically managed stocks predominate in the landings data set.  More scientifically rigorous management should mean that the catch rate is set so that fishing mortality is closely matched to the capacity of that species to replenish the fish taken, and this should lessen the risk of collapse.  Less effectively managed stocks are more likely to exceed appropriate levels of fishing pressure if they are slow-growing or late-maturing than if they are faster growing or earlier maturing.

In a follow-up paper published in 2015, Pinsky and his colleague David Byler of Princeton University reached a surprising conclusion.  They evaluated the 150+ fishery stocks worldwide for which long-term assessment data were available, using newer analytical methods that could incorporate climate variability along with fishery and population data.  They found, contrary to expectations, that species with what they call ‘fast’ life cycles (shorter lived, faster growing, earlier reproducing) were more likely than others to collapse under excessive fishing pressure.  This is likely because changes in climate or other environmental variables have a greater impact on such species – short-lived, rapidly reproducing species track environmental change more closely than species which buffer environmental fluctuations more effectively and show a dampened population response to good times or bad.  Paradoxically, good management is more effective on the ‘slow’ species than on the ‘fast’ ones.  Their analysis also showed that chronic (i.e. prolonged) overfishing depletes a fishery, while more severe overfishing (i.e. fishing at levels well above sustainable ones) is what precipitates outright collapse.

We Set Fisheries Up for Collapse

So why DID the northern cod stock crash, and why does it remain depressed a quarter century later?  This was a ‘well-managed’ fishery, in the sense that DFO had relatively good information about the structure of the population, gained through regular stock assessment, and was issuing advice based on those assessments.  What stock assessments cannot evaluate, however, is the likelihood of environmental changes that may affect the capacity of the population to replenish itself (by altering food regimes or the survivorship of larval stages, for example).  Further, there is always some slippage between fishery management advice and the fishery regulations introduced and enforced.  The cod stock was being intensively fished during the late 1980s – at, or even slightly above, a sustainable level.  Changes in water temperature altered the availability of food for larval and/or juvenile cod.  The cod were failing to replenish themselves, but fishing pressure remained high.  Recommendations to reduce the allowable catch were not followed at first because of the effect that would have on the industry.  Only when the pressure to do something became high enough to force a politically difficult decision was the fishing moratorium put in place (forcing an abrupt loss of livelihood for the fishing community), but by then it was already too late to avoid the marked collapse.  Landings fell from 219,000 tonnes in 1990, to 154,000 tonnes in 1991 and 52,000 tonnes in 1992.  In 1995, three years after the moratorium went into effect, fewer than 1000 tonnes were landed.

Since 1992, there has been ongoing pressure from the fishing community to open up the fishery again.  Catches have been allowed to creep up, and 13,000 tonnes were landed in 2017.  While this continual, low-level fishing pressure may seem insignificant, it still diminishes the ability of the stock to rebuild its numbers.  In addition, the environment in which the cod reside has continued to change; some other species have become more common and use food that would otherwise be available for cod, or prey upon cod themselves, and prey species may be less available for other environmental reasons as well.

The story of the northern cod fishery is typical of collapses.  Pinsky and Byler refer to ‘the dynamics of coupled social-ecological systems’, by which they mean the societal features that delay responses to a decline in a fishery stock.  These features include the unavoidable delays in collecting scientific data on the stock, analyzing those data and developing fishery recommendations, and eventually adjusting fishing pressure.  When a stock is already being fished at a rate on the borderline of being unsustainable (and most fisheries reach this level of exploitation quite quickly), continuing that level of fishing pressure while the population declines exacerbates the decline and can quickly lead to substantial damage to the stock.  In other words, we set fisheries up to collapse the moment we manage for maximum sustainable yield – taking the maximum amount of fish that should be sustainable given the status of that population.  And pressure by the fishing industry nearly always ensures we manage for maximum sustainable yield.  We set them up to fail, because our management systems are incapable of responding quickly enough to signs that the population has declined.  We also forget that it is normal for populations to fluctuate in size – there are good years and bad years, after all.  But when you are managing for maximum sustainable yield, it is very important to know when the population has fallen because of factors unrelated to fishing.  That is the time when catch should be reduced to compensate, yet we frequently take a different approach: well, let’s keep a watch on things, but continue fishing at the previously agreed rate.
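The MSY ratchet described above is easy to demonstrate with a toy simulation.  The sketch below is purely illustrative (a logistic stock with invented parameters, not a model of any real fishery): a manager who keeps taking the fixed MSY tonnage through bad years steadily ratchets the simulated stock downwards, while a manager who scales the catch to the current biomass cannot drive it to zero.

```python
import random

def simulate(policy="fixed", years=100, K=1.0, r=0.4, sigma=0.3, seed=1):
    """Logistic stock with environmental noise; all parameters hypothetical.

    For logistic growth, MSY is r*K/4, obtained at biomass K/2.
    'fixed' takes the MSY tonnage every year regardless of stock size;
    'responsive' scales the catch to the current biomass.
    """
    rng = random.Random(seed)
    B = K / 2                                   # start at the MSY biomass
    msy = r * K / 4
    for _ in range(years):
        shock = max(rng.gauss(1.0, sigma), 0)   # good years and bad years
        growth = r * B * (1 - B / K) * shock
        catch = msy if policy == "fixed" else msy * B / (K / 2)
        B = max(B + growth - catch, 0.0)
        if B == 0.0:
            break                               # the stock has collapsed
    return B
```

With no environmental variation (`sigma=0.0`) the fixed-MSY policy sits at equilibrium forever; add ordinary year-to-year noise, and any run of bad years under the fixed quota pushes biomass below the level that can produce the quota, after which decline accelerates.  The responsive policy automatically cuts the catch in bad years, which is exactly the adjustment the management systems described above are too slow to make.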

Optimistic Views of Global Trends

The Food and Agriculture Organization of the United Nations (FAO) reports every second year on the status of fisheries and aquaculture globally; the next report will be out within the next few months.  In these reports, FAO classifies global fisheries as overfished, fully fished or underfished (implying that fish species exist only for the purpose of being caught by humans!).  Over the years, despite FAO’s tendency to always take an optimistic view, the number of underfished stocks has diminished, the number of fully fished stocks has remained more or less the same, and the number of overfished stocks has grown.  The 2016 report showed that about 30% of fishery stocks were overfished, while only 10% remained underfished.

Trend in the status of globally important marine fishery stocks from 1974 to 2013 as reported by FAO.  Figure 13 in The State of World Fisheries and Aquaculture, 2016 © Food and Agriculture Organization of the United Nations.

Despite this long-standing trend, and flat fishery yields since the late 1980s, FAO projects a 1% increase in total capture fishery yield by 2025, due in part to the expected recovery of a number of overfished (i.e. collapsed) stocks, spurred by the UN’s new Sustainable Development Goals (SDGs) and their specific targets, which might help improve management of fisheries.  I fear this is a vain hope, but perhaps we will all rise miraculously to the UN challenge and end hunger and poverty while conserving marine resources (the three SDGs singled out by FAO as likely to lead to stock recoveries).

In the past, fisheries that collapsed often did recover.  The important step was to suspend fishing for long enough to permit the recovery to take place.  On the face of it, this is a reasonable expectation: if we fish a population, we increase the mortality rate of that population, meaning that individuals live shorter lives and perhaps reproduce on fewer occasions.  If a population suffers an increase in mortality rate with no increase in its rate of production of young (or perhaps a reduction in this as well), that population will become smaller until a new balance between production and mortality is reached.  Therefore, if we stop fishing, we lower the mortality rate again, which should lead to another equilibration between production and mortality and a consequent increase in the population.  It is the same way the water level in a bathtub is set by adjusting the inflow and outflow.  This is more or less the way things used to work for fisheries, although our success at stopping fishing was often less than needed, and the depleted population dwindled further, lingered at low levels, or took many years to slowly recover.
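The bathtub picture can be written down directly.  In this minimal sketch (the numbers are purely illustrative, not real cod rates), recruitment is a constant inflow R, and natural plus fishing mortality (M + F) drain the stock in proportion to its size, so the level settles where inflow balances outflow; dropping F back to zero raises that equilibrium level again.

```python
def equilibrium(R, M, F=0.0):
    """Bathtub stock model: constant inflow, proportional outflow.

    dN/dt = R - (M + F) * N  settles at  N* = R / (M + F),
    where R is annual recruitment, M the natural mortality rate,
    and F the fishing mortality rate.  Values are illustrative only.
    """
    return R / (M + F)

fished = equilibrium(R=100.0, M=0.2, F=0.3)   # settles near 200
unfished = equilibrium(R=100.0, M=0.2)        # settles near 500
```

Stop fishing (set F to zero) and the equilibrium rises from about 200 back to about 500, which is the recovery the suspension of fishing is meant to buy; the caveat in the text, that real recoveries were often slow or incomplete, is precisely what this static balance ignores.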

In a 2010 review, NOAA fisheries scientist, Steven Murawski, evaluated 25 well-managed commercial fisheries stocks that had collapsed to ascertain patterns of recovery.  He reported that all but one “exhibited signs of recovery”, but in several instances recovery was incomplete even after a decade or more of careful management.  One of his take-home messages was “Rebuilding the majority of stocks classified worldwide as ‘overfished’ will take a more effective, consistent, and politically supported stock-recovery paradigm, if society is eventually to meet its articulated sustainability goals for global fisheries”.  That’s a very polite way of saying we have to work much harder than we usually do to cut fishing pressure and keep it cut for a sufficient time for recovery to occur.  Newfoundland fishermen, whose patience has been tested, must understand that the testing will last a good bit longer.

Now Things May Be Different

Sadly, however, the present and future may not be like the past, and Murawski’s optimism may now be misplaced.  The example of the northern cod stock seems to show this.  A 2017 article in Nature Communications, by Gregory Britten and colleagues at Dalhousie University, points out that fishery management practice has presumed that the playing field was level, but it now appears to be strongly tilted.

Well, actually, they did not even mention playing fields!  What they did say is that fisheries management (and other types of environmental management) has always proceeded on the assumption that while environmental conditions often vary, they vary around a stationary ‘average’ condition.  Thus, the assumption goes, there are good years and bad years for fish production, survivorship and growth, but these average out over time; there are no long-term trends in environmental condition.  Britten and colleagues suggest that while this assumption may have been appropriate through most of the 20th century, we can no longer count on long-term stationarity in environmental conditions.  There is now growing evidence that the environment is changing directionally, in many ways, and these changes have consequences for the production of fish; they must be taken into account when determining allowable catch for future years.  The most obvious of these long-term changes relate to climate, but here it gets complicated, because global warming does not translate simply into a uniform warming of the oceans, and warming does not translate simply into enhanced, or diminished, production in any particular fishery.  Much of their article concerns a method for making best estimates of future production for use in setting allowable catch.

When Miami Beach is under water, it doesn’t really matter if climate change is our fault or not (but it is). Image © Joe Darrow/Florida Trends.

While I think of it, I’ve heard this argument for stationarity of environmental conditions more times than I care to remember from climate skeptics of various stripes.  It’s usually offered immediately before the arguments based on sunspot cycles, or wobbles in Earth’s axis.  And it gets followed in turn by the claim, “Well, anyway, if the climate is changing it’s nothing to do with human actions” – as if that matters a damn when glaciers are melting at alarming rates and the world’s major cities are getting wetter and wetter with every high tide.  But I digress…

Britten and colleagues evaluated 276 well-managed global fishery stocks, and showed that 68% of them exhibited biologically significant non-stationarity in productivity.  They illustrated one consequence of managing such a population as if its intrinsic productivity was constant, using the Gulf of St. Lawrence cod stock and the eastern Atlantic stock of bluefin tuna as examples.

The cod stock (much like the northern cod stock off eastern Newfoundland) went through a series of good years (strong intrinsic production) in the 1970s and early 1980s, but then moved into a series of bad years from the mid-1980s to the present.  Managing it as if its productivity was stationary meant that in the good years there were fish available that were likely not taken, while in the bad years the allowable catch was consistently set too high.  This stock collapsed along with the northern cod, as revealed by the record of actual catch (the + symbols in the figure).

Two examples of consequences of managing assuming stationarity in intrinsic production are the Gulf of St. Lawrence stock of cod (a, c) and the eastern Atlantic bluefin tuna (b, d).  For both (a, b), the surplus production estimated (an index of the amount of fish available to harvest) using conventional methods is shown as a solid black line, and that estimated including non-stationarity as a dotted black line.  The calculated sustainable yields (c, d) are shown as the horizontal black line (conventional) and the irregular dotted line (non-stationary).  Differences between the two methods are shown as gray and pink shading – gray indicating an underestimate of production of sustainable catch using conventional procedures, and pink indicating an overestimate of these.
Image is Figure 2 in Britten’s article.

The bluefin tuna experienced an alternating series of several good followed by several bad years over the time period.  Each cycle lasted a decade or more.  This depressed fishery undoubtedly benefited during the period from 1985 to 2002, when allowable catch using conventional management methods underestimated the real potential catch.  The situation since 2002, however, suggests allowable catch needs to be substantially reduced because the conventional approach is seriously overestimating the numbers of fish available to catch.  Whether this cycling pattern of good production followed by poor production will continue into the future is unknown, but assuming that the stock has stationary intrinsic production capacity is clearly not warranted.

We No Longer Live in our Grandfather’s World

The tendency to expect fish populations to have stationary rates of production, growth, and (non-fishery) mortality is just one aspect of our too-human expectation that the world is a dependable place.  It is a logical consequence of our having spent the last 11,500 years in the Holocene, a period remarkable for its stationarity once the melting back of the Pleistocene ice sheets had been accomplished.  After all, our entire development of agriculture and advanced civilization has taken place at a time when ‘severe’ climate fluctuations were as much as 0.5°C in extent.  (A 2013 analysis in Nature Geoscience showed that the Medieval Warm Period, ~1000 to 1300 AD, was a time when parts of Europe were perhaps 0.5°C warmer than in the subsequent Little Ice Age, ~1400 to 1900 AD, but globally, temperatures declined perhaps 0.4°C over the 2000 years before the warming of the 20th century.  A new analysis in Nature this February has confirmed that global temperature fluctuated by only about 1°C during the last 10,000 years, until the mid-20th century warming commenced.)

That dependable Holocene world ended during the 20th century.  Now we live on a planet that is changing in many ways, mostly driven by our activities.  Temperatures are rising rapidly.  Rainfall patterns are changing dramatically.  Storms are becoming more violent.  Glaciers are melting and sea level is rising inexorably.  Oceans are acidifying and losing dissolved oxygen.  These changes have ramifying impacts on our environment, on the natural resources, such as fish, which we try to manage, and on our agriculture.  They also change the impacts of pest species and disease pathogens, and the intricate relationships among species that govern ecosystem function and ecosystem resilience.

The global environmental crisis is complex, daunting and growing, but it is possible to dissect it and deal with each part step by step.  Failing to do so is a recipe for human disaster.
© F. Pharand-Deschênes /Globaïa

By coincidence, while I was writing this commentary, a friend lent me a short book published by the Calgary Institute for the Humanities in 1988.  By Lydia Dotto, a Canadian science journalist and photographer now at Fleming College, Thinking the Unthinkable is a 73-page summary of a three-day conference held in 1987 at the University of Calgary.  The conference, Civilization and Rapid Climate Change, brought together 45 scientists, social scientists and humanists to discuss the threat to human life posed by overpopulation, nuclear war and rapid climate change.  While the organizers mostly had nuclear winter in mind when they decided to look at climate change, the discussion in the book reveals a roughly even split between concern for the environmental consequences of nuclear war and concern for the environmental impacts of anthropogenic climate change.  Back in 1987, people were just starting to think about climate change, something that would become important in “the next century”.  I found the book prescient in its ability to anticipate what was happening to our planet, strange in its clear preoccupation with nuclear conflict – a threat that had mostly faded from view until very recently – and chilling in two respects: its clear-sighted discussion of the severity of the risks we face, and its explicit link between overpopulation and the environmental changes that are coming.

The world has been talking about climate change and the environmental crisis for so long now that we have become immune to the severity of the events we are considering.  Dotto is quite clear about the likelihood of our civilization surviving the kinds of environmental changes that are on track to happen over the next half century or so – she puts it somewhere between trivial and nil.  She is also quite clear about the importance of overpopulation (not simply population growth) and overconsumption, both of which have continued to get worse in the 30 years that have passed.  These are at the root of the environmental problems we now face, and we are not going to solve those environmental problems without also dealing with the population problem.

I feel sorry for those Newfoundland fishermen, waiting patiently for the cod to return.  Only now are they, and the managers, and the rest of us becoming aware that the northern cod stock may never recover to the levels it was at in the 1980s or earlier.  The changing environment, and the responses of other species during the absence of the cod, may mean that rich cod stocks, a truly overflowing abundance of a majestic fish, can never be attained again.  Or then again, perhaps they will recover, and we simply have to wait patiently for far longer than we ever imagined we would.  But I am also sorry for all of us, because we seem so slow to learn the lessons that the natural world is putting up in front of us, lessons that should be easy to learn, if only we’d pause, and think, and absorb.  We cannot continue behaving the way we do and expect that the natural world, ever forgiving, will let us make error after error, will tolerate our overconsumption and our pollution.  The planet has a certain capacity to absorb the insults we throw at it; we have been exceeding that capacity for some time now, and do not seem to be mending our ways nearly fast enough.  Unless we change dramatically and soon, our future will be dire indeed, and cod stocks will be the least of our worries.

Gadus morhua, the Atlantic cod, photographed at Atlanterhavsparken, Ålesund, Norway.
© H-P Fjeld.

Categories: Canada's environmental policies, Changing Oceans, Climate change, Fisheries, In the News