Peru’s ancient water systems can help protect communities from shortages caused by climate change

 

Mount Huascarán, Cordillera Blanca, taken from Huashao village. Credit: Susan Conlon



Water is essential for human life, but in many parts of the world water supplies are under threat from more extreme, less predictable weather conditions due to climate change. Nowhere is this clearer than in the Peruvian Andes, where rising temperatures and receding glaciers forewarn of imminent water scarcity for the communities that live there.

Peru holds more than 70% of the world’s tropical glaciers. Along the 180 kilometre expanse of the Cordillera Blanca (“white mountains”), more than 250,000 people depend on glaciers for a year-round supply of water. Meltwater from the glaciers supplies rivers, offering a vital supplement to rainwater so that locals can continue irrigating food crops throughout the dry season, from May to October.
But Peruvian glaciers have shrunk by 25% since 1987, and the water supply to rivers during the dry season is gradually decreasing. While national and regional governments and NGOs are responding to the threat of water scarcity with modern engineering solutions, there are growing concerns among the communities affected that such efforts are misplaced.

Modern day misfires

Take, for example, the village of Huashao. Nestled between the highest peaks of the Cordillera Blanca, Huashao is a typical farming village of the region. Glacier meltwater feeds the Yurac Uran Atma canal, which supplies irrigation water to families in Huashao. In 2011, a municipal government project transformed this canal from a rustic irrigation ditch to a modern PVC pipeline, with lock-gates to regulate the flow of water and ensure equal distribution throughout the village.
The village of Huashao. ConDevCenter/Flickr, CC BY-NC-ND
Governments and NGOs commonly promote modern measures to capture and conserve water for irrigation – for example, by lining irrigation canals with concrete, to prevent leakages. While it’s important to conserve water to safeguard food supplies, these kinds of measures have been criticised for their lack of flexibility and sensitivity to local needs.
While the pipeline in Huashao provided security and reduced the amount of time people had to devote to distributing water where it was needed, Conlon’s ongoing ethnographic research in the village found that local women were concerned about its effect on the local puquios (springs) – a valued source of irrigation and drinking water.
Noticing less water in puquios, they blamed the canal lining for stopping water from filtering into the local geology. Local communities see this process as an integral part of water distribution, but authorities often refer to it as “leakage”.
What’s more, the local people responsible for maintaining and operating the new canal found that not everything worked as planned. They were particularly worried when a problem caused water to overflow the canal walls, and blamed the design of the lock-gates.
Here, the government’s preference for modern engineering meant that it missed an opportunity to engage with traditional technologies and local knowledge. This is hardly surprising – ancient know-how has been routinely dismissed as inferior by state authorities and well-meaning (but badly briefed) NGOs. Yet traditional technologies, like the puquios, have been providing flexible ways to manage water in Huashao for hundreds of years.
In Huashao, the local people are coming to realise the limitations of modern engineering. But across the Andes, many other communities are still seduced by the promise of quick fixes offered by concrete, steel and PVC pipelines. Unfortunately, initial, costly investments of aid and expertise are rarely followed up, and since communities often lack the necessary knowledge and funds to maintain these systems, they eventually break down.

Ancient married with modern

Slowly, a push back is starting. There has been renewed interest in what society can learn from traditional irrigation systems. A recent international workshop held in Trujillo, Peru, brought together social scientists, geographers and climate scientists to discuss how to tackle issues around water use and scarcity.
It seems likely that the best solutions will be found by combining old and new knowledge, rather than dismissing one in favour of the other. For instance, parallel to the Cordillera Blanca is the Cordillera Negra (“black mountains”), which faces the Pacific Ocean. Without the benefit of glaciers, the ancient inhabitants of this area learned to harness rain water to see them through the dry season.
These pre-Colombian cultures instigated millennia-long engineering projects, resulting in large dams and reservoirs placed along the slopes of the mountains. These structures controlled water and soil erosion, feeding underground water deposits and providing water for crops and livestock.
An ancient dam in the Cordillera Negra. Kevin Lane, Author provided
Disuse over the last few centuries means that few are still functioning, but those that are stand as a tribute to ancient expertise. By contrast, modern concrete micro-dams have a functional life of 40 to 50 years, often curtailed by seismic activity to between 15 and 25 years.
Fortunately, plans are afoot to revisit these old technologies. Solutions rooted in respect for community and local knowledge, and allied to flexible modern engineering – such as better water retention technology – are exploring ways in which we can shore up the effectiveness of these ancient dams.
Throwing money and resources into engineering projects does not always guarantee success when trying to combat the effects of climate change and protect vulnerable communities. But the marriage of ancient and modern technologies offers promising solutions to the threat of water scarcity in Peru, and places like it all across the world.
———
This blog is by Cabot Institute member Dr Susan Conlon, Research Associate at the University of Bristol, and Kevin Lane, Senior Researcher in Archaeology at the Universidad de Buenos Aires. The article is republished from The Conversation under the Creative Commons licence. Read the original article
Dr Susan Conlon

Evacuating a nuclear disaster area is (usually) a waste of time and money, says study

Asahi Shimbun/EPA

More than 110,000 people were moved from their homes following the Fukushima nuclear disaster in Japan in March 2011. Another 50,000 left of their own accord, and 85,000 had still not returned four-and-a-half years later.

While this might seem like an obvious way of keeping people safe, my colleagues and I have just completed research that shows this kind of mass evacuation is unnecessary, and can even do more harm than good. We calculated that the Fukushima evacuation extended the population’s average life expectancy by less than three months.

To do this, we had to estimate how such a nuclear meltdown could affect the average remaining life expectancy of a population from the date of the event. The radiation would cause some people to get cancer and so die younger than they otherwise would have (other health effects are very unlikely because the radiation exposure is so limited). This brings down the average life expectancy of the whole group.

But the average radiation cancer victim will still live into their 60s or 70s. The loss of life expectancy from a radiation cancer will always be less than from an immediately fatal accident such as a train or car crash. Crash victims have their lives cut short by an average of 40 years, double the 20 years lost by the average sufferer of a cancer caused by radiation exposure. So if you could choose your way of dying from the two, radiation exposure and cancer would on average leave you with a much longer lifespan.

How do you know if evacuation is worthwhile?

To work out how much a specific nuclear accident will affect life expectancy, we can use something called the CLEARE (Change of life expectancy from averting a radiation exposure) programme. This tells us how much a specific dose of radiation will shorten your remaining lifespan by, on average.

Yet knowing how a nuclear meltdown will affect average life expectancy isn’t enough to work out whether it is worth evacuating people. You also need to measure it against the costs of the evacuation. To do this, we have developed a method known as the judgement or J-value. This can effectively tell us how much quality of life people are willing to sacrifice to increase their remaining life expectancy, and at what point they are no longer willing to pay.

You can work out the J-value for a specific country using a measure of the average amount of money people in that country have (GDP per head) and a measure of how averse to risk they are, based on data about their work-life balance. When you put this data through the J-value model, you can effectively find the maximum amount people will on average be willing to pay for longer life expectancy.
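The published method is considerably more sophisticated, but a minimal sketch conveys the idea. Purely for illustration, assume a simplified life-quality index of the form Q = G^q × X, where G is GDP per head, X is remaining life expectancy and q is an exponent reflecting risk aversion and work-life balance. The maximum justifiable per-person spend is then the cut in G that exactly offsets the gain in X, and the J-value is the ratio of the actual spend to that maximum. All numbers below are invented; this is not the CLEARE/J-value code itself.

```python
# Illustrative sketch only -- not the published CLEARE/J-value implementation.
# Assumes a simplified life-quality index Q = G**q * X, where
#   G = GDP per head, X = remaining life expectancy, q = risk-aversion exponent.
# The maximum justifiable per-person spend dG_max is the reduction in G that
# exactly offsets a gain dX in life expectancy (i.e. leaves Q unchanged).

def max_justifiable_spend(G, X, dX, q):
    """Largest per-person spend that leaves Q = G**q * X unchanged."""
    return G * (1 - (X / (X + dX)) ** (1 / q))

def j_value(actual_spend, G, X, dX, q):
    """J > 1 means the measure costs more than the life expectancy gained is worth."""
    return actual_spend / max_justifiable_spend(G, X, dX, q)

# Invented numbers, for illustration only
G = 30_000    # GDP per head
X = 40        # remaining life expectancy (years)
dX = 0.25     # life expectancy preserved by the measure (three months)
q = 0.8       # assumed risk-aversion / work-life-balance exponent

print(f"J = {j_value(actual_spend=5_000, G=G, X=X, dX=dX, q=q):.0f}")
# A per-person cost of 5,000 for a three-month gain gives J far above 1,
# i.e. well beyond what people would on average be willing to pay.
```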

After applying the J-value to the Fukushima scenario, we found that the amount of life expectancy preserved by moving people away was too low to justify it. If no one had been evacuated, the local population’s average life expectancy would have fallen by less than three months. The J-value data tells us that three months isn’t enough of a gain for people to be willing to sacrifice the quality of life lost through paying their share of the cost of an evacuation, which can run into billions of dollars (although the bill would actually be settled by the power company or government).

Japanese evacuation centre. Dai Kurokawa/EPA

The three month average loss suggests the number of people who will actually die from radiation-induced cancer is very small. Compare it to the average of 20 years lost when you look at all radiation cancer sufferers. In another comparison, the average inhabitant of London loses 4.5 months of life expectancy because of the city’s air pollution. Yet no one has suggested evacuating that city.
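A rough back-of-envelope calculation (ours, for illustration, rather than a figure from the study) shows why: if the population-wide average loss is three months while each actual radiation-cancer victim loses around 20 years, only on the order of 1% of the population can be affected at all.

```python
# Back-of-envelope: three months of average loss across everyone,
# with roughly 20 years lost per actual radiation-cancer victim.
average_loss_years = 3 / 12        # population-wide average life expectancy loss
loss_per_victim_years = 20         # typical loss for a radiation-cancer sufferer

implied_fraction_affected = average_loss_years / loss_per_victim_years
print(f"Implied fraction of population affected: {implied_fraction_affected:.1%}")  # roughly 1%
```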
We also used the J-value to examine the decisions made after the world’s worst nuclear accident, which occurred 25 years before Fukushima at the Chernobyl nuclear power plant in Ukraine. In that case, 116,000 people were moved out in 1986, never to return, and a further 220,000 followed in 1990.

By calculating the J-value using data on people in Ukraine and Belarus in the late 1980s and early 1990s, we can work out the minimum amount of life expectancy people would have been willing to evacuate for. In this instance, people should only have been moved if their lifetime radiation exposure would have reduced their life expectancy by nine months or more.

This applied to just 31,000 people. If we took a more cautious approach and said that if one in 20 of a town’s inhabitants lost this much life expectancy, then the whole settlement should be moved, it would still only mean the evacuation of 72,500 people. The 220,000 people in the second relocation lost at most three months’ life expectancy and so none of them should have been moved. In total, only between 10% and 20% of the number relocated needed to move away.

To support our research, colleagues at the University of Manchester analysed hundreds of possible large nuclear reactor accidents across the world. They found relocation was not a sensible policy in any of the expected case scenarios they examined.

More harm than good

Some might argue that people have the right to be evacuated if their life expectancy is threatened at all. But overspending on extremely expensive evacuation can actually harm the people it is supposed to help. For example, the World Health Organisation has documented the psychological damage done to the Chernobyl evacuees, including their conviction that they are doomed to die young.

From their perspective, this belief is entirely logical. Nuclear refugees can’t be expected to understand exactly how radiation works, but they know when huge amounts of money are being spent. These payments can come to be seen as compensation, suggesting the radiation must have left them in an awful state of health. Their governments have never lavished such amounts of money on them before, so they believe their situation must be dire.

But the reality is that, in most cases, the risk from radiation exposure if they stay in their homes is minimal. It is important that the precedents of Chernobyl and Fukushima do not establish mass relocation as the prime policy choice in the future, because this will benefit nobody.

————————————-
This blog has been written by Cabot Institute member Philip Thomas, Professor of Risk Management, University of Bristol.

Professor Philip Thomas

This article was originally published on The Conversation. Read the original article.

Unless we regain our historic awe of the deep ocean, it will be plundered

Image credit: BBC Blue Planet

In the memorable second instalment of Blue Planet II, we are offered glimpses of an unfamiliar world – the deep ocean. The episode places an unusual emphasis on its own construction: glimpses of the deep sea and its inhabitants are interspersed with shots of the technology – a manned submersible – that brought us these astonishing images. It is very unusual and extremely challenging, we are given to understand, for a human to enter and interact with this unfamiliar world.

The most watched programme of 2017 in the UK, Blue Planet II provides the opportunity to revisit questions that have long occupied us. To whom does the sea belong? Should humans enter its depths? These questions are perhaps especially urgent today, when Nautilus Minerals, a mining company registered in Vancouver, has been granted a licence to extract gold and copper from the seafloor off the coast of Papua New Guinea. Though the company has suffered some setbacks, mining is still scheduled to begin in 2019.

Blue Planet’s team explore the deep. Image credit BBC/Blue Planet

This marks a new era in our interaction with the oceans. For a long time in Western culture, to go to sea at all was to transgress. In Seneca’s Medea, the chorus blames advances in navigation for having brought the Golden Age to an end, while for more than one Mediterranean culture to travel through the Straits of Gibraltar and into the wide Atlantic was considered unwisely to tempt divine forces. The vast seas were associated with knowledge that humankind was better off without – another version, if you will, of the apple in the garden.

If to travel horizontally across the sea was to trespass, then to travel vertically into its depths was to redouble the indiscretion. In his 17th-century poem Vanitie (I), George Herbert writes of a diver seeking out a “pearl” which “God did hide | On purpose from the ventrous wretch”. In Herbert’s imagination, the deep sea is off limits, containing tempting objects whose attainment will damage us. Something like this vision of the deep resurfaces more than 300 years later in one of the most startling passages of Thomas Mann’s novel Doctor Faustus (1947), as a trip underwater in a diving bell figures forth the protagonist’s desire for occult, ungodly knowledge.

An early diving bell used by 16th century divers. National Undersea Research Program (NURP)

Mann’s deep sea is a symbolic space, but his reference to a diving bell gestures towards the technological advances that have taken humans and their tools into the material deep. Our whale-lines and fathom-lines have long groped into the oceans’ dark reaches, while more recently deep-sea cables, submarines and offshore rigs have penetrated their secrets. Somewhat paradoxically, it may be that our day-to-day involvement in the oceans means that they no longer sit so prominently on our cultural radar: we have demystified the deep, and stripped it of its imaginative power.

But at the same time, technological advances in shipping and travel mean that our culture is one of “sea-blindness”: even while writing by the light provided by oil extracted from the ocean floor, using communications provided by deep-sea cables, or arguing over the renewal of Trident, we perhaps struggle to believe that we, as humans, are linked to the oceans and their black depths. This wine bottle, found lying on the sea bed in the remote Atlantic, is to most of us an uncanny object: a familiar entity in an alien world, it combines the homely with the unhomely.

Wine bottle found in the deep North Atlantic. Laura Robinson, University of Bristol, and the Natural Environment Research Council. Expedition JC094 was funded by the European Research Council.

For this reason, the activities planned by Nautilus Minerals have the whiff of science fiction. The company’s very name recalls that of the underwater craft of Jules Verne’s adventure novel Twenty Thousand Leagues under the Seas (1870), perhaps the most famous literary text set in the deep oceans. But mining the deep is no longer a fantasy, and its practice is potentially devastating. As the Deep Sea Mining Campaign points out, the mineral deposits targeted by Nautilus gather around hydrothermal vents, the astonishing structures which featured heavily in the second episode of Blue Planet II. These vents support unique ecosystems which, if the mining goes ahead, are likely to be destroyed before we even begin to understand them. (Notice the total lack of aquatic life in Nautilus’s corporate video: they might as well be drilling on the moon.) The campaigners against deep sea mining also insist – sounding not unlike George Herbert – that we don’t need the minerals located at the bottom of the sea: that the reasons for wrenching them from the deep are at best suspect.

So should we be leaving the deep sea well alone? Sadly, it is rather too late for that. Our underwater cameras transmit images of tangled fishing gear, cables and bottles strewn on the seafloor, and we find specimens of deep sea animals thousands of metres deep and hundreds of kilometres away from land with plastic fibres in their guts and skeletons. It seems almost inevitable that deep sea mining will open a new and substantial chapter on humanity’s relationship with the oceans. Mining new resources is still perceived to be more economically viable than recycling; as natural resources become scarcer, the ocean bed will almost certainly become of interest to global corporations with the capacity to explore and mine it – and to governments that stand to benefit from these activities. These governments are also likely to compete with one another for ownership of parts of the global ocean currently in dispute, such as the South China Sea and the Arctic. The question is perhaps not if the deep sea will be exploited, but how and by whom. So what is to be done?

A feather star in the deep waters of the Antarctic. BBC NHU
Rather than declaring the deep sea off-limits, we think our best course of action is to regain our fascination with it. We may have a toe-hold within the oceans; but, as any marine scientist will tell you, the deep still harbours unimaginable secrets. The onus is on both scientists and those working in what has been dubbed the “blue humanities” to translate, to a wider public, the sense of excitement to be found in exploring this element. Then, perhaps, we can prevent the deep ocean from becoming yet another commodity to be mined – or, at least, we can ensure that such mining is responsible and that it takes place under proper scrutiny.
The sea, and especially the deep sea, will never be “ours” in the way that tracts of land become cities, or even in the way rivers become avenues of commerce. This is one of its great attractions, and is why it is so easy to sit back and view the deep sea with awed detachment when watching Blue Planet II. But we cannot afford to pretend that it lies entirely beyond our sphere of activity. Only by expressing our humility before it, perhaps, can we save it from ruthless exploitation; only by acknowledging and celebrating our ignorance of it can we protect it from the devastation that our technological advances have made possible.
——————————-
This blog is written by Laurence Publicover, Lecturer in English, University of Bristol and Katharine Hendry, Reader in Geochemistry, University of Bristol and both members of the University’s Cabot Institute. This article was originally published on The Conversation. Read the original article.

We just had the hottest year on record – where does that leave climate denial?

Image credit: Wikimedia Commons

At a news conference announcing that 2015 broke all previous heat records by a wide margin, one journalist started a question with “If this trend continues…” The response by the Director of NASA’s Goddard Institute for Space Studies, Gavin Schmidt, summed up the physics of climate change succinctly: “It’s not a question of if…”

Even if global emissions begin to decline, as now appears possible after the agreement signed in Paris last December, there is no reasonable scientific doubt that the upward trends in global temperature, sea levels, and extreme weather events will continue for quite some time.

Politically and ideologically motivated denial will nonetheless continue for a little while longer, until it ceases to be politically opportune.

So how does one deny that climate change is upon us and that 2015 was by far the hottest year on record? What misinformation will be disseminated to confuse the public?

 

The real deal: 2015 was the hottest year on record.
Met Office, CC BY-NC-SA

Research has identified several telltale signs that differentiate denial from scepticism, whether it is denial of the link between smoking and lung cancer or between CO2 emissions and climate change.
One technique of denial involves “cherry-picking”, best described as wilfully ignoring a mountain of inconvenient evidence in favour of a small molehill that serves a desired purpose. Cherry-picking is already in full swing in response to the record-breaking temperatures of 2015.

Political operatives such as James Taylor of the Heartland Institute – which once compared acceptance of the science of climate change to the Unabomber in an ill-fated billboard campaign – have already denied 2015 set a record by pointing to satellite data, which ostensibly shows no warming for the last umpteen years and which purportedly relegates 2015 to third place.

 

Satellite data (green) has much more uncertainty than thermometer records (red).
Kevin Cowtan / RSS / Met Office HadCRUT4, Author provided

So what about the satellite data?

If you cannot remember when you last checked the satellites to decide whether to go for a picnic, that’s probably because the satellites don’t actually measure temperature. Instead, they measure the microwave emissions of oxygen molecules in very broad bands of the atmosphere, for example ranging from the surface to about 18km above the earth. Those microwave soundings are converted into estimates of temperature using highly-complex models. Different teams of researchers use different models and they come up with fairly different answers, although they all agree that there has been ongoing warming since records began in 1979.

There is nothing wrong with using models, such as those required to interpret satellite data, for their intended purpose – namely to detect a trend in temperatures at high altitudes, far away from the surface where we grow our crops and make decisions about picnics.

But to use high-altitude data with its large uncertainties to determine whether 2015 is the hottest year on record is like trying to determine whether it’s safe to cross the road by firmly shutting your eyes and ears and then standing on your head to detect passing vehicles from their seismic vibrations. Yes, a big truck might be detectable that way, but most of us would rather just have a look and see whether it’s safe to cross the road.

And if you just look at the surface-based climate data with your own eyes, then you will see that NASA, the US NOAA, the UK Met Office, the Berkeley Earth group, the Japan Meteorological Agency, and many other researchers around the world, all independently arrived at one consistent and certain end result – namely that 2015 was by far the hottest year globally since records began more than a century ago.

Enter denial strategy two: that if every scientific agency around the world agrees on global warming, they must be engaging in a conspiracy! Far from being an incidental ornament, conspiratorial thinking is central to denial. When a scientific fact has been as thoroughly examined as global warming being caused by greenhouse gases or the link between HIV and AIDS, then no contrary position can claim much intellectual or scholarly respectability because it is so overwhelmingly at odds with the evidence.

That’s why politicians such as Republican Congressman Lamar Smith need to accuse the NOAA of having “altered the [climate] data to get the results they needed to advance this administration’s extreme climate change agenda”. If the evidence is against you, then it has to be manipulated by mysterious forces in pursuit of a nefarious agenda.

This is like saying that you shouldn’t cross the road by just looking because the several dozen optometrists who have independently attested to your 20/20 vision have manipulated the results because … World Government! Taxation! … and therefore you’d better stand on your head blindfolded with tinfoil.

So do the people who disseminate misinformation about climate actually believe what they are saying?

The question can be answered by considering the stock market. Investors decide on which stock to buy based on their best estimates of a company’s future potential. In other words, investors place an educated bet on a company’s future based on their constant reading of odds that are determined by myriad factors.

Investors put their money where their beliefs are.

Likewise, climate scientists put their money where their knowledge is: physicist Mark Boslough recently offered a $25,000 bet on future temperature increases. It has not been taken up. Nobel laureate Brian Schmidt similarly offered a bet to an Australian “skeptic” on climate change. It was not taken up.

People who deny climate science do not put their money where their mouth is. And when they very occasionally do, they lose.

This is not altogether surprising: in a recent peer-reviewed paper, with James Risbey as first author, we showed that wagering on global surface warming would have won a bet every year since 1970. We therefore suggested that denial may be “… largely posturing on the part of the contrarians. Bets against greenhouse warming are largely hopeless now and that is widely understood.”

So the cherry-picking and conspiracy-theorising will continue while it is politically opportune, but the people behind it won’t put their money where their mouth is. They probably know better.
————————–

This blog was written by Cabot Institute member, Professor Stephan Lewandowsky, Chair of Cognitive Psychology, University of Bristol.

This article was originally published on The Conversation. Read the original article.

Why do flood defences fail?

More than 40,000 people were forced to leave their homes after Storm Desmond caused devastating floods and wreaked havoc in north-west England. Initial indications were that the storm may have caused the heaviest local daily rainfall on record in the UK. As much as £45m has been spent on flood defences in the region in the previous ten years and yet the rainfall still proved overwhelming. So what should we actually expect from flood defence measures in this kind of situation? And why do they sometimes fail?

We know that floods can and will happen. Yet we live and work and put our crucial societal infrastructure in places that could get flooded. Instead of keeping our entire society away from rivers and their floodplains, we accept flood risks because living in lowlands has benefits for society that outweigh the costs of flood damage. But knowing how much risk to take is a tricky business. And even when there is an overall benefit for society, the consequences for individuals can be devastating.

We also need to calculate risks when we build flood defences. We usually protect ourselves from some flood damage by building structures like flood walls and river or tidal barriers to keep rising waters away from populated areas, and storage reservoirs and canals to capture excess water and channel it away. But these structures are only designed to keep out waters from typical-sized floods. Bigger defences that could protect us from the largest possible floods, which may only happen once every 100 years, would be much more expensive to build, and so we choose to accept that residual risk because it is smaller than the extra cost.
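To put “once every 100 years” in perspective: a flood with a 1% chance in any given year is far from negligible over the lifetime of a structure. A quick calculation (a standard probability result, not tied to any particular scheme) illustrates this.

```python
# Chance of at least one "T-year" flood during an N-year design life,
# assuming each year is independent with exceedance probability 1/T.
def prob_at_least_one(T, N):
    return 1 - (1 - 1 / T) ** N

print(f"{prob_at_least_one(T=100, N=50):.0%}")   # ~39% over a 50-year design life
print(f"{prob_at_least_one(T=100, N=100):.0%}")  # ~63% over 100 years
```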

Balancing the costs and benefits

In the UK, the Environment Agency works with local communities to assess the trade-off between the costs of flood protection measures, and the benefits of avoiding flood damage. We can estimate the lifetime benefits of different proposed flood protection structures in the face of typical-sized floods, as well as the results of doing nothing. On the other side of the ledger, we can also estimate the structures’ construction and maintenance costs.
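Schematically, and with entirely invented figures rather than Environment Agency data, that ledger might be balanced as follows: the benefit of a scheme is the reduction in expected annual damage it delivers, summed (and discounted) over its design life, set against its construction and maintenance costs.

```python
# Schematic cost-benefit comparison -- all figures invented for illustration.
# Expected annual damage (EAD) = sum over flood scenarios of
# (annual probability of that flood) x (damage it would cause).

def expected_annual_damage(scenarios):
    return sum(prob * damage for prob, damage in scenarios)

# (annual probability, damage in £m)
without_scheme = [(0.10, 2.0), (0.02, 20.0), (0.01, 60.0)]
with_scheme    = [(0.10, 0.0), (0.02, 5.0), (0.01, 50.0)]   # scheme stops the smaller floods

benefit_per_year = expected_annual_damage(without_scheme) - expected_annual_damage(with_scheme)

# Discount the annual benefit over a 50-year design life and compare with cost
discount_rate, design_life = 0.035, 50
discount_factor = sum(1 / (1 + discount_rate) ** t for t in range(1, design_life + 1))
lifetime_benefit = benefit_per_year * discount_factor
scheme_cost = 10.0   # construction plus maintenance, £m (invented)

print(f"Lifetime benefit ≈ £{lifetime_benefit:.1f}m vs cost £{scheme_cost:.1f}m")
```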

In some cases, flood protection measures can be designed so that if they fail, they do the least damage possible, or at least avoid catastrophic damage. For example, a flood protection wall can be built so that if flood waters run over it they run into a park rather than residential streets or commercial premises. And secondary flood walls outside the main wall can redirect some of the overflow back towards the river channel.

 

Thames Barrier: big costs but bigger benefits.
Ross Angus/Flickr, CC BY-SA

The Environment Agency puts the highest priority on the projects with the largest benefits for the smallest costs. Deciding where that threshold should be set is a very important social decision, because it provides protection to some but not all parts of our communities. Communities and businesses need to be well-informed about the reasons for those thresholds, and their likely consequences.

We also protect ourselves from flood damage in other ways. Zoning rules prevent valuable assets such as houses and businesses being built where there is an exceptionally high flood risk. Through land management, we can choose to increase the amount of wooded land, which can reduce the impact of smaller floods. And flood forecasting alerts emergency services and helps communities rapidly move people and their portable valuables out of the way.

Always some risk

It’s important to realise that since flood protection measures never eliminate all the risks, there are always extra costs for some in society from exceptional events such as Storm Desmond, which produce very large floods that overwhelm protection measures. The costs of damage from these exceptional floods are difficult to estimate. Since these large floods have been rare in the past, our records of them are very limited, and we are not sure how often they will occur in the future or how much damage they will cause. We also know that the climate is changing, as are the risks of severe floods, and we are still quite uncertain about how this will affect extreme rainfall.

 

At the same time we know that it’s very hard to judge the risk from catastrophic events. For example, we are more likely to be afraid of catastrophic events such as nuclear radiation accidents or terrorist attacks, but do not worry so much about much larger total losses from smaller events that occur more often, such as floods.

Although the process of balancing costs against benefits seems clear and rational, choosing the best flood protection structure is not straightforward. Social attitudes to risk are complicated, and it’s difficult not to be emotionally involved if your home or livelihood is at risk.

————————————–
This blog is written by Cabot Institute member Dr Ross Woods, a Senior Lecturer in Water and Environmental Engineering, University of Bristol.  This article was originally published on The Conversation. Read the original article.

Ross Woods

Global warming ‘pause’ was a myth all along, says new study

The idea that global warming has “stopped” is a contrarian talking point that dates back to at least 2006. This framing was first created on blogs, then picked up by segments of the media – and it ultimately found entry into the scientific literature itself. There are now numerous peer-reviewed articles that address a presumed recent “pause” or “hiatus” in global warming, including the latest IPCC report.

So did global warming really pause, stop, or enter a hiatus? At least six academic studies have been published in 2015 that argue against the existence of a pause or hiatus, including three that were authored by me and colleagues James Risbey of CSIRO in Hobart, Tasmania, and Naomi Oreskes of Harvard University.

Our most recent paper has just been published in Nature’s open-access journal Scientific Reports and provides further evidence against the pause.

Pause not backed up by data

First, we analysed the research literature on global temperature variation over the recent period. This turns out to be crucial because research on the pause has addressed – and often conflated – several distinct questions: some asked whether there is a pause or hiatus in warming, others asked whether it slowed compared to the long-term trend and yet others have examined whether warming has lagged behind expectations derived from climate models.

These are all distinct questions and involve different data and different statistical hypotheses. Unnecessary confusion has resulted because they were frequently conflated under the blanket labels of pause or hiatus.

 

New NOAA data released earlier this year confirmed there had been no pause. The author’s latest study used NASA’s GISTEMP data and obtained the same conclusions.
NOAA

To reduce the confusion, we were exclusively concerned with the first question: is there, or has there recently been, a pause or hiatus in warming? It is this question – and only this question – that we answer with a clear and unambiguous “no”.

No one can agree when the pause started

We considered 40 recent peer-reviewed articles on the so-called pause and inferred what the authors considered to be its onset year. There was a spread of about a decade (1993-2003) between the various papers. Thus, rather than being consensually defined, the pause appears to be a diffuse phenomenon whose presumed onset is anywhere during a ten-year window.

Given that the average presumed duration of the pause in the same set of articles is only 13.5 years, this is of concern: it is difficult to see how scientists could be talking about the same phenomenon when they talked about short trends that commenced up to a decade apart.

This concern was amplified in our third point: the pauses in the literature are by no means consistently extreme or unusual, when compared to all possible trends. If we take the past three decades, during which temperatures increased by 0.6℃, we would have been in a pause between 30% and 40% of the time using the definition in the literature.

In other words, academic research on the pause is typically not talking about an actual pause but, at best, about a fluctuation in warming rate that is towards the lower end of the various temperature trends over recent decades.
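The sketch below makes the point with synthetic data (it is not the analysis from our paper): on a series that warms steadily by 0.6°C over three decades, with realistic year-to-year noise, a noticeable fraction of short windows can nonetheless show trends low enough to be labelled a “pause”. The exact fraction depends on the noise realisation and on how a pause is defined – which is precisely the problem.

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1985, 2016)                 # roughly three decades
true_trend = 0.02                             # °C per year, i.e. 0.6 °C over 30 years
temps = true_trend * (years - years[0]) + rng.normal(0, 0.12, size=years.size)

window = 14                                   # close to the average presumed pause length
slopes = [np.polyfit(years[i:i + window], temps[i:i + window], 1)[0]
          for i in range(years.size - window + 1)]

# Call a window a "pause" if its trend is less than half the long-term trend;
# the fraction reported varies with the noise realisation and the threshold chosen.
paused_fraction = np.mean(np.array(slopes) < true_trend / 2)
print(f"Fraction of {window}-year windows that look like a 'pause': {paused_fraction:.0%}")
```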

How the pause became a meme

If there has been no pause, why then did the recent period attract so much research attention?
One reason is a matter of semantics. Many academic studies addressed not the absence of warming but a presumed discrepancy between climate models and observations. Those articles were scientifically valuable (we even wrote one ourselves), but we do not believe that those articles should have been framed in the language of a pause: the relationship between models (what was expected to happen) and observations (what actually happened) is a completely different issue from the question about whether or not global warming has paused.

A second reason is that the incessant challenge of climate science by highly vocal contrarians and Merchants of Doubt may have amplified scientists’ natural tendency to be reticent over reporting the most dramatic risks they are concerned about.

We explored the possible underlying mechanisms for this in an article earlier this year, which suggested climate denial had seeped into the scientific community. Scientists have unwittingly been influenced by a linguistic frame that originated outside the scientific community and by accepting the word pause they have subtly reframed their own research.

Research directed towards the pause has clearly yielded interesting insights into medium-term climate variability. My colleagues and I do not fault that research at all. Except that the research was not about a (non-existent) pause – it was about a routine fluctuation in warming rate. With 2015 being virtually certain to be another hottest year on record, this routine fluctuation has likely already come to an end.

—————————————
This article was originally published on The Conversation. Read the original article.

This blog is by Cabot Institute member Prof Stephan Lewandowsky, University of Bristol.

Prof Steve Lewandowsky



Read the official press release.

Why climate ‘uncertainty’ is no excuse for doing nothing

By Richard Pancost, University of Bristol and Stephan Lewandowsky, University of Bristol

Former environment minister Owen Paterson has called for the UK to scrap its climate change targets. In a speech to the Global Warming Policy Foundation, he cited “considerable uncertainty” over the impact of carbon emissions on global warming, a line that was displayed prominently in coverage by the Telegraph and the Daily Mail.

Paterson is far from alone: climate change debate has been suffused with appeals to “uncertainty” to delay policy action. Who hasn’t heard politicians or media personalities use uncertainty associated with some aspects of climate change to claim that the science is “not settled”?

Over in the US, this sort of thinking pops up quite often in the opinion pages of The Wall Street Journal. Its most recent article, by Professor Judith Curry, concludes that the ostensibly slowed rate of recent warming gives us “more time to find ways to decarbonise the economy affordably.”

At first glance, avoiding interference with the global economy may seem advisable when there is uncertainty about the future rate of warming or the severity of its consequences.

So let’s do nothing.
WSJ

But delaying action because the facts are presumed to be unreliable reflects a misunderstanding of the science of uncertainty. Simply because a crucial parameter such as the climate system’s sensitivity to greenhouse gas emissions is expressed as a range – for example, that under some emissions scenarios we will experience 2.6°C to 4.8°C of global warming or 0.3 to 1.7m of sea level rise by 2100 – does not mean that the underlying science is poorly understood. We are very confident that temperatures and sea levels will rise by a considerable amount.

Perhaps more importantly, just because some aspects of climate change are difficult to predict (will your county experience more intense floods in a warmer world, or will the floods occur down the road?) does not negate our wider understanding of the climate. We can’t yet predict the floods of the future but we do know that precipitation will be more intense because more water will be stored in the atmosphere on a warmer planet.

This idea of uncertainty might be embedded deeply within science, but it is no one’s friend and should be minimised to the greatest extent possible. It is an impetus to mitigative action rather than a reason for complacency.

Uncertainty means greater risk

There are three key aspects of scientific uncertainty surrounding climate change projections that exacerbate rather than ameliorate the risks to our future.

First, uncertainty has an asymmetrical effect on many climatic quantities. For example, a quantity known as Earth system sensitivity, which tells us how much the planet warms for each doubling of atmospheric carbon dioxide concentration, has been estimated to be between 1.5°C and 4.5°C. However, it is highly unlikely, given the well-established understanding of how carbon dioxide absorbs long-wave radiation, that this value can be below 1°C. There is a possibility, however, that sensitivity could be higher than 4.5°C. For fundamental mathematical reasons, the uncertainty favours greater, rather than smaller, climate impacts than a simple range suggests.
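One way to see why the asymmetry arises (a standard argument in the climate-sensitivity literature, sketched here with invented parameter values) is that sensitivity S relates to the overall feedback factor f roughly as S = S0/(1 − f). Even if our uncertainty about f is perfectly symmetrical, the resulting uncertainty in S is skewed towards high values.

```python
import numpy as np

rng = np.random.default_rng(1)

# Sensitivity S = S0 / (1 - f): symmetric uncertainty in the feedback factor f
# produces a right-skewed distribution of S. Parameter values are invented.
S0 = 1.2                                        # no-feedback warming per CO2 doubling, °C
f = rng.normal(loc=0.65, scale=0.13, size=100_000)
f = f[f < 0.95]                                 # discard unphysical values near f = 1

S = S0 / (1 - f)
print(f"Sensitivity at the mean feedback: {S0 / (1 - 0.65):.1f} °C")
print(f"Mean sensitivity:                 {S.mean():.1f} °C")   # pulled upwards by the long tail
print(f"5th / 95th percentiles:           {np.percentile(S, 5):.1f} / {np.percentile(S, 95):.1f} °C")
```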

Second, the uncertainty in our projections makes adaptation to climate change more expensive and challenging. Suppose we need to build flood defences for a coastal English town. If we could forecast a 1m sea level rise by 2100 without any uncertainty, the town could confidently build flood barriers 1m higher than they are today. However, although sea levels are most likely to rise by about 1m, we’re really looking at a range between 0.3m and 1.7m. Therefore, flood defences must be at least 1.7m higher than today – 70cm higher than they could be in the absence of uncertainty. And as uncertainty increases, so does the required height of flood defences for non-negotiable mathematical reasons.

And the problem doesn’t end there, as there is further uncertainty in forecasts of rainfall occurrence, intensity and storm surges. This could ultimately mandate a 2 to 3m-high flood defence to stay on the safe side, even if the most likely prediction is for only a 1m sea-level rise. Even then, as most uncertainty ranges are for 95% confidence, there is a 5% chance that those walls would still be too low. Maybe a town is willing to accept a 5% chance of a breach, but a nuclear power station cannot take such risks.
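A small calculation makes the same point (with assumed numbers, treating the projection uncertainty as roughly normal purely for illustration): the height you must build to rises with the spread of the projection, and even a 95% design level leaves a 5% chance of the defence proving too low.

```python
from statistics import NormalDist

def design_height(mean_rise_m, sigma_m, confidence=0.95):
    """Wall height covering the given confidence level of sea level rise,
    treating the projection as roughly normal (an assumption for illustration)."""
    return NormalDist(mu=mean_rise_m, sigma=sigma_m).inv_cdf(confidence)

# Central estimate of 1 m of rise, with increasing projection uncertainty
for sigma in (0.1, 0.2, 0.36):
    print(f"spread {sigma:.2f} m -> build to {design_height(1.0, sigma):.2f} m")
# The largest spread here roughly matches the 0.3-1.7 m range quoted above, and
# pushes the required height to about 1.6 m despite a 1 m central estimate.
```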

Finally, some global warming consequences are associated with deep, so-called systemic uncertainty. For example, the combined impact on coral reefs of warmer oceans, more acidic waters and coastal run-off that becomes more silt-choked from more intense rainfalls is very difficult to predict. But we do know, from decades of study of complex systems, that those deep uncertainties may camouflage particularly grave risks. This is particularly concerning given that more than 2.6 billion people depend on the oceans as their primary source of protein.

Similarly, warming of Arctic permafrost could promote the growth of CO2-sequestering plants, the release of warming-accelerating methane, or both. Warm worlds with very high levels of carbon dioxide did exist in the very distant past and these earlier worlds provide some insight into the response of the Earth system; however, we are accelerating into this new world at a rate that is unprecedented in Earth history, creating additional layers of complexity and uncertainty.

Uncertainty does not imply ignorance

Increasingly, arguments against climate mitigation are phrased as “I accept that humans are increasing CO2 levels and that this will cause some warming but climate is so complicated we cannot understand what the impacts of that warming will be.”

Well if we can’t be certain…
Telegraph

This argument is incorrect – uncertainty does not imply ignorance. Indeed, whatever we don’t know mandates caution. No parent would argue “I accept that if my child kicks lions, this will irritate them, but a range of factors will dictate how the lions respond; therefore I will not stop my child from kicking lions.”

The deeper the uncertainty, the more greenhouse gas emissions should be perceived as a wild and poorly understood gamble. By extension, the only unequivocal tool for minimising climate change uncertainty is to decrease our greenhouse gas emissions.


Richard Pancost receives funding from the NERC, the EU and the Leverhulme Trust.

Stephan Lewandowsky receives funding from the Australian Research Council, the World University Network, and the Royal Society.

This article was originally published on The Conversation.
Read the original article.