Peru’s ancient water systems can help protect communities from shortages caused by climate change

 

Mount Huascarán, Cordillera Blanca, taken from Huashao village. Credit: Susan Conlon



Water is essential for human life, but in many parts of the world water supplies are under threat from more extreme, less predictable weather conditions due to climate change. Nowhere is this clearer than in the Peruvian Andes, where rising temperatures and receding glaciers forewarn of imminent water scarcity for the communities that live there.

Peru holds more than 70% of the world’s tropical glaciers. Along the 180-kilometre expanse of the Cordillera Blanca (“white mountains”), more than 250,000 people depend on glaciers for a year-round supply of water. Meltwater from the glaciers supplies rivers, offering a vital supplement to rainwater so that locals can continue irrigating food crops throughout the dry season, from May to October.
But Peruvian glaciers have shrunk by 25% since 1987, and the water supply to rivers during the dry season is gradually decreasing. While national and regional governments and NGOs are responding to the threat of water scarcity with modern engineering solutions, there are growing concerns among the communities affected that such efforts are misplaced.

Modern day misfires

Take, for example, the village of Huashao. Nestled between the highest peaks of the Cordillera Blanca, Huashao is a typical farming village of the region. Glacier meltwater feeds the Yurac Uran Atma canal, which supplies irrigation water to families in Huashao. In 2011, a municipal government project transformed this canal from a rustic irrigation ditch to a modern PVC pipeline, with lock-gates to regulate the flow of water and ensure equal distribution throughout the village.
The village of Huashao. ConDevCenter/Flickr, CC BY-NC-ND
Governments and NGOs commonly promote modern measures to capture and conserve water for irrigation – for example, by lining irrigation canals with concrete, to prevent leakages. While it’s important to conserve water to safeguard food supplies, these kinds of measures have been criticised for their lack of flexibility and sensitivity to local needs.
While the pipeline in Huashao provided security and reduced the amount of time people had to devote to distributing water where it was needed, Conlon’s ongoing ethnographic research in the village found that local women were concerned about its effect on the local puquios (springs) – a valued source of irrigation and drinking water.
Noticing less water in puquios, they blamed the canal lining for stopping water from filtering into the local geology. Local communities see this process as an integral part of water distribution, but authorities often refer to it as “leakage”.
What’s more, the local people responsible for maintaining and operating the new canal found that not everything worked as planned. They were particularly worried when a problem caused water to overflow the canal walls, and blamed the design of the lock-gates.
Here, the government’s preference for modern engineering meant that it missed an opportunity to engage with traditional technologies and local knowledge. This is hardly surprising – ancient know-how has been routinely dismissed as inferior by state authorities and well-meaning (but badly briefed) NGOs. Yet traditional technologies, like the puquios, have been providing flexible ways to manage water in Huashao for hundreds of years.
In Huashao, the local people are coming to realise the limitations of modern engineering. But across the Andes, many other communities are still seduced by the promise of quick fixes offered by concrete, steel and PVC pipelines. Unfortunately, initial, costly investments of aid and expertise are rarely followed up, and since communities often lack the necessary knowledge and funds to maintain these systems, they eventually break down.

Ancient married with modern

Slowly, a push back is starting. There has been renewed interest in what society can learn from traditional irrigation systems. A recent international workshop held in Trujillo, Peru, brought together social scientists, geographers and climate scientists to discuss how to tackle issues around water use and scarcity.
It seems likely that the best solutions will be found by combining old and new knowledge, rather than dismissing one in favour of the other. For instance, parallel to the Cordillera Blanca is the Cordillera Negra (“black mountains”), which faces the Pacific Ocean. Without the benefit of glaciers, the ancient inhabitants of this area learned to harness rain water to see them through the dry season.
These pre-Columbian cultures instigated millennia-long engineering projects, resulting in large dams and reservoirs placed along the slopes of the mountains. These structures controlled water and soil erosion, feeding underground water deposits and providing water for crops and livestock.
An ancient dam in the Cordillera Negra. Kevin Lane, Author provided
Disuse over the last few centuries means that few are still functioning, but those that are stand as a tribute to ancient expertise. By contrast, modern concrete micro-dams have a functional life of 40 to 50 years, often curtailed by seismic activity to between 15 and 25 years.
Fortunately, plans are afoot to revisit these old technologies. Projects rooted in respect for community and local knowledge, and allied to flexible modern engineering – such as better water-retention technology – are exploring ways to shore up the effectiveness of these ancient dams.
Throwing money and resources into engineering projects does not always guarantee success when trying to combat the effects of climate change and protect vulnerable communities. But the marriage of ancient and modern technologies offers promising solutions to the threat of water scarcity in Peru, and places like it all across the world.
———
This blog is by Cabot Institute member Dr Susan Conlon, Research Associate at the University of Bristol, and Kevin Lane, Senior Researcher in Archaeology at Universidad de Buenos Aires. The article is republished from The Conversation under a Creative Commons licence. Read the original article.
Dr Susan Conlon

Turning knowledge of past climate change into action for the future

Arctic sea ice: Image credit NASA

“It’s more helpful to talk about the things we can do than the problems we have caused.”

Beth Shapiro, a molecular biologist and author of How To Clone A Mammoth, gave this hopeful response to an audience question about the recent UN report stating that one million species are threatened with extinction.

I arrived at the International Union for Quaternary Research (INQUA) 2019 conference, held in Dublin at the end of July, keen to learn exactly that: what climate scientists can do to mitigate the impact of our rapidly changing climate. INQUA brings together earth, atmosphere and ocean scientists studying the Quaternary, a period from 2.6 million years ago to the present day. The Quaternary has seen repeated and abrupt periods of climate change, making it the perfect analogue for our rapidly changing future.

In the case of extinctions, if we understand how species responded to human and environmental pressures in the past, we may be better equipped to protect them in the present day.

Protecting plants and polar bears

Heikki Seppä from the University of Helsinki and colleagues are using the fossil record to better understand how polar bears adapt to climate change. The Arctic bears survived the Holocene thermal maximum, between 10,000 and 6,000 years ago, when temperatures were about 2.5°C warmer than today. Although rising temperatures and melting sea ice drove them out of Scandinavia, fossil evidence suggests they probably found a cold refuge around northwest Greenland. This is an encouraging indicator that polar bears could survive the 1.5°C warming projected by the IPCC to occur sometime between 2030 and 2052, if warming continues to increase at the current rate.
Protecting animal species means preserving habitat, so it’s just as important to study the effects of climate change on plants. Charlotte Clarke from the University of Southampton studies the diversity of plants during times of abrupt climate change, using Russian lake records. Her results show that although two thirds of Arctic plant species survived the same warm period which forced the bears to leave Scandinavia, they too were forced to migrate, probably moving upslope to colder areas.

 

If we understand how ecosystems respond to climate change, we will be better prepared to protect them in the future. But what will future climate change look like? Again, we can learn a lot by studying the past.

The past is the key to the future

To understand the impact of anthropogenic CO₂ emissions on the climate, we must disentangle the effect of CO₂ from other factors, such as insolation (radiation from the Sun reaching the Earth’s surface). This is the mission of Qiuzhen Yin from UCLouvain, Belgium, who is studying the relative impact of CO₂ on climate during five past warm interglacials. Tim Shaw, from Nanyang Technological University in Singapore, presented work on the mechanisms driving past sea level change. And Vachel Carter from the University of Utah is using charcoal as a proxy for past fire activity in the Rocky Mountains. By studying the pattern of fire activity during past warm periods, we can determine which areas are most at risk in the future.

The 2018 fire season in Colorado was one of the worst on record.

So Quaternary scientists have a lot to tell us about what our rapidly changing planet might look like in the years to come. But how can we translate this information into practical action? ‘Science as a human endeavour necessarily encompasses a moral dimension’, says George Stone from Milwaukee Area Technical College, USA. Stone’s passionate call to action was part of a series of talks about how Quaternary climate research can be applied to societal issues in the 21st century.

One thing scientists can do is try to engage with policymakers. Geoffrey Boulton of the International Science Council is hopeful that by partnering with INQUA and setting up collaborations with Quaternary scientists, the Council can help them do that. The International Science Council has a history of helping to integrate science into major global climate policy such as the Paris Agreement.

Another thing we can do ourselves as scientists is to present scientific results in a way that is visually appealing and easy to understand, so they are accessible to the public and to policymakers. Oliver Wilson and colleagues from the University of Reading are a prime example: they brought along 3D-printed giant pollen grains which they use for outreach and teaching as part of the 3D Pollen Project.


Given that it’s easier than ever to publicise your own results, through channels such as blogs and social media, hopefully a new generation of Quaternary scientists will leave inspired to engage in outreach and use their knowledge to make a difference.

—————————–
This blog is written by Cabot Institute member Jen Saxby, a PhD student in the School of Earth Sciences at the University of Bristol.

Jen Saxby

 

Climate-driven extreme weather is threatening old bridges with collapse

The recent collapse of a bridge in Grinton, North Yorkshire, raises lots of questions about how prepared we are for these sorts of risks. The bridge, which was due to be on the route of the cycling world championships in September, collapsed after a month’s worth of rain fell in just four hours, causing flash flooding.

Grinton is the latest in a series of such collapses. In 2015, first Storm Eva and then Storm Frank caused flooding which collapsed the 18th century Tadcaster bridge, also in North Yorkshire, and badly damaged the medieval-era Eamont bridge in nearby Cumbria. Floods in 2009 collapsed or severely damaged 29 bridges in Cumbria alone.

With climate change making this sort of intense rainfall more common in future, people are right to wonder whether we’ll see many more such bridge collapses. And if so – which bridges are most at risk?

In 2014 the Tour de France passed over the now-destroyed bridge near Grinton. Tim Goode/PA

We know that bridges can collapse for various reasons. Some are simply old and already crumbling. Others fall down because of defective materials or environmental processes such as flooding, corrosion or earthquakes. Bridges have even collapsed after ships crash into them.

Europe’s first major roads and bridges were built by the Romans. This infrastructure developed hugely during the industrial revolution, then much of it was rebuilt and transformed after World War II. But since then, various factors have increased the pressure on bridges and other critical structures.
For instance, when many bridges were first built, traffic mostly consisted of pedestrians, animals and carts – an insignificant load for heavy-weight bridges. Yet over the decades private cars and trucks have got bigger, heavier and faster, while the sheer number of vehicles has massively increased.

Different bridges run different risks

Engineers in many countries think that numerous bridges could have reached the end of their expected life spans (typically 50 to 100 years). However, we do not know which bridges are most at risk, because there is no national database or method for identifying vulnerable structures. Since different types of bridges are sensitive to different failure mechanisms, knowing the bridge stock is the first step towards effective risk management of these assets.

 

Newcastle’s various bridges all have different risks. Shaun Dodds / shutterstock

In Newcastle, for example, seven bridges over the River Tyne connect the city to the town of Gateshead. These bridges vary in function (pedestrian, road and railway), material (from steel to concrete) and age (17 to 150 years old). The risk and type of failure for each bridge is therefore very different.

Intense rain will become more common

Flooding is recognised as a major threat in the UK’s National Risk Register of Civil Emergencies. And though the Met Office’s latest set of climate projections shows an increase in average rainfall in winter and a decrease in average rainfall in summer, rainfall is naturally very variable. Flooding is caused by particularly heavy rain so it is important to look at how the extremes are changing, not just the averages.
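To see why extremes matter more than averages here, consider a minimal sketch in Python. All of the rainfall numbers below are synthetic and invented for illustration – this is not an analysis of real Met Office records – but it shows how the wettest day of the year can trend sharply upwards while the annual average barely moves:

```python
# Synthetic illustration: trends in average vs extreme rainfall.
# All parameters are invented; the point is the contrast, not the numbers.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1980, 2020)
annual_mean, annual_max = [], []

for i in range(len(years)):
    daily = rng.gamma(0.6, 4.0, 365)                 # ordinary rainfall days (mm)
    storm_days = rng.choice(365, 3, replace=False)   # a few convective downpours
    daily[storm_days] += rng.gamma(2.0, 8.0 + 0.5 * i, 3)  # downpours intensify over the years
    annual_mean.append(daily.mean())
    annual_max.append(daily.max())

# Linear trends, expressed per decade.
trend_mean = np.polyfit(years, annual_mean, 1)[0] * 10
trend_max = np.polyfit(years, annual_max, 1)[0] * 10
print(f"trend in average daily rainfall: {trend_mean:+.2f} mm per decade")
print(f"trend in wettest day of year:    {trend_max:+.2f} mm per decade")
```

A flood defence or bridge designed around the slowly moving average would be caught out by the rapidly growing extreme.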

Warmer air can hold more moisture and so it is likely that we will see increases in heavy rainfall, like the rain that caused the flash floods at Grinton. High resolution climate models and observational studies also show an intensification of extreme rainfall. This all means that bridge collapse from flooding is more likely in the future.

To reduce future disasters, we need an overview of our infrastructure, including assessments of change of use, ageing and climate change. A national bridge database would enable scientists and engineers to identify and compare risks to bridges across the country, on the basis of threats from climate change.
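As a toy sketch of what such a database might enable – every field name and weight below is an invented assumption, not an engineering standard – even a crude screening score could help rank structures for priority inspection:

```python
# Hypothetical sketch of one record in a national bridge database,
# with a deliberately crude screening score. Field names and weights
# are invented for illustration only.
from dataclasses import dataclass

@dataclass
class Bridge:
    name: str
    function: str            # "pedestrian", "road" or "railway"
    material: str            # "masonry", "steel", "concrete", ...
    year_built: int
    crosses_river: bool
    expected_life: int = 75  # years; mid-range of the 50-100 quoted above

def screening_score(bridge: Bridge, current_year: int = 2019) -> float:
    """Older river crossings in vulnerable materials rank higher."""
    age_factor = (current_year - bridge.year_built) / bridge.expected_life
    flood_factor = 1.5 if bridge.crosses_river else 1.0
    material_factor = {"masonry": 1.3, "steel": 1.0, "concrete": 0.9}.get(bridge.material, 1.0)
    return age_factor * flood_factor * material_factor

# An 18th-century masonry road bridge over a river scores high (date illustrative).
tadcaster = Bridge("Tadcaster", "road", "masonry", 1790, True)
print(f"Tadcaster screening score: {screening_score(tadcaster):.2f}")
```

A real system would fold in inspection records, scour history and flood forecasts, but even this much structure would make risks comparable across the country.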



This blog is written by Cabot Institute member Dr Maria Pregnolato, Lecturer in Civil Engineering, University of Bristol and Elizabeth Lewis, Lecturer in Computational Hydrology, Newcastle University.  This article is republished from The Conversation under a Creative Commons license. Read the original article.

Extinction Rebellion uses tactics that toppled dictators – but we live in a liberal democracy

XR protesters getting carried away. Image credit: Facundo Arrizabalaga/EPA.

After occupying parts of central London over two weeks in April, Extinction Rebellion’s (XR) summer uprising has now spread to Cardiff, Glasgow, Leeds and Bristol. All these protests involve disruption, breaking the law and activists seeking arrest.

Emotions are running high, with many objecting to the disruption. At the same time, the protests have got people and the media talking about climate change. XR clearly represents something new and unusual, which has the power to annoy or enthuse people. But what led it to adopt such disruptive tactics in its efforts to demand action on climate change?

XR is accused of being an anarchist group in a report from the right-wing think-tank Policy Exchange. To actual anarchists, that is laughable. XR strictly adheres to non-violence, seeks arrests and chants “we love you” to the police. This contrasts starkly with anarchists’ antagonistic relationship to the state and its law enforcement.

The movement claims to practise civil disobedience – but that is also a confusing label. Civil disobedience developed during the 20th century as a way of understanding and justifying law-breaking protests in liberal democracies. Much of this was in relation to the US civil rights movement. Liberal political thinkers like Hannah Arendt and John Rawls explored when and how disobedience was legitimate in a democracy.

The misfit rebellion

In some ways, XR fits with liberal civil disobedience. That disobedience should always be a last resort chimes well with XR’s claim that time is running out and traditional campaigning has proven unsuccessful. The voluntary arrests resonate with the liberal onus on open and conscientious law-breaking that accepts law enforcement.

Non-violent protest in Cardiff, July 2019. Image credit: Neil Schofield/Flickr, CC BY-NC

But on two other crucial points, XR breaks with the liberal civil disobedience tradition. For one thing, civil disobedience is generally aimed at showing the majority of the public that specific laws are unjust. XR does not seem to focus on this majority-building. It does not engage in discussion with climate change deniers, and its disruption antagonises people who do not share its fears and frustration with the inaction of governments.

Instead, XR’s tactic is to get a significant but still small part of the population to participate in disruption. The movement aims to get 3.5% of the population so incensed that they take to the streets. It does not aim to convince 51% that this is the right thing to do.

Liberal civil disobedience maintains an overall “fidelity to law”. In other words, it is considered okay to break certain unjust laws, as long as you respect the state’s laws generally. The aim is then to get the state to have better, more just, laws.

But for XR, the social contract has already been broken. The state has failed to take necessary action on climate change, thereby putting its citizens at risk. Disruption and law-breaking are therefore justified.

Talkin’ ‘bout a revolution

XR’s tactics are not based on how social movements have achieved policy change in liberal democracies; they are based on how dictatorships have been toppled. The movement draws directly on Erica Chenoweth and Maria Stephan’s book Why Civil Resistance Works, in which they argue that non-violence is more effective than violence. The XR tactic is therefore based on how to achieve revolutions, not on how to get governments to respond to the will of the majority.

There are reasons to be sceptical about the relevance of this research when it comes to addressing climate change. The 3.5% figure applies to such a small number of historical cases that no conclusions can be based on it. More importantly, perhaps, in most cases of regime change, not much else changes. Most in XR see saving the world as incompatible with capitalism, a system that depends on economic growth on a finite planet. Most cases of regime change on the Chenoweth and Stephan list have not resulted in abandoning capitalism – quite the opposite.

There are, however, good reasons why XR’s radical tactics resonate with so many. Experiencing climate change through hot summers and other extreme weather events increases people’s sense of urgency. More importantly, perhaps, in an era of political polarisation, more extreme action becomes more likely. Trust in the state and its politicians has eroded on both the left and right across Europe. In the UK, this has been made worse by the politics of Brexit.

Law-breaking then becomes a more likely form of protest. One of XR’s spokespeople wrote on The Conversation that “the chances of … succeeding are relatively slim”. But since many in XR foresee societal breakdown as a result of climate breakdown, the cost of getting a criminal record diminishes. And if they continue to make the protests a bit of a festival, then the chances are we’ll see more disruption from Extinction Rebellion – even if it does alienate many others.
———————————-
This blog is written by Cabot Institute member Oscar Berglund, Lecturer in International Public and Social Policy, University of Bristol.  This article is republished from The Conversation under a Creative Commons license. Read the original article.

Oscar Berglund

Why the time may be ripe for a Green New Deal

Image credit: Senate Democrats.

On the 8th July, parliamentarians, researchers and practitioners gathered in the House of Commons to discuss and debate the possibilities and practicalities of a Green New Deal in the UK. Drawing on insights and experience from both the UK and the USA, speakers included Caroline Lucas MP, James Heappey MP, John Podesta of the Center for American Progress, and Hannah Martin of Green New Deal UK.

The Green New Deal is a policy concept that asserts the need for wholesale, sustained and state-led economic investment to address the challenges of climate breakdown. Whilst it may often feel that these demands for a Green New Deal have come out of the blue, its entrance into the language of environmentalism can be traced to 2007, when those concerned with climate breakdown and environmental problems argued that policies centred on improving the environment also had important social consequences.

2019 is, in many ways, the year in which environmentalism has taken a radical step into the popular consciousness. Greta Thunberg, the School Strike for Climate and Extinction Rebellion have all occupied streets and seized the news cycle, raising awareness of (and anger at) the climate emergency.

Image source: Wikimedia Commons

The result? MPs have declared a climate emergency, the Committee on Climate Change has called for ‘net zero’ emissions, and public concern for the environment is at a record high. It is this new and rising awareness that frees up space for a new, wide-ranging policy mechanism like the Green New Deal to take the stage and gain traction.

Adopting the language of President Franklin Delano Roosevelt’s policy response to the Great Depression, the Green New Deal has picked up the most traction in the USA, where Alexandria Ocasio-Cortez and the Sunrise Movement have spearheaded a growing movement around this idea, which soon took form in a Congressional Bill and a vision published by New Consensus. Several candidates for the Democratic presidential nomination have announced Green New Deal-style policies.

A common criticism of the Green New Deal – evident in the parliamentary discussions – was that it can often take an “overly-ideological” flavour that alienates voters, constituencies and potential supporters. As the partisan divisions around climate breakdown in the United States show, for a policy as wide-ranging as this to be accepted, it must have a base in cross-party support.

As the Gilets Jaunes in France have demonstrated, to forget the economic costs that environmental policy can impose on those who are already struggling can have profound consequences. Whilst we – as environmentalists – may often be focused on the ‘end of the world’, billions across the globe are, instead, worried about making it to the end of the month.

Image source: Wikimedia Commons

This is, in many ways, an issue of branding. The key to understanding the Green New Deal is that it is synergistic – its policies simultaneously address environmental AND social issues. New policies of land ownership and use can be adapted to promote cooperative management, worker ownership and land justice. The wholesale fitting of solar energy panels to homes will also address issues of energy poverty. The application of a frequent flyer levy, taxing people based on how often they fly, will, in turn, represent a fairer system of taxing air travel than the current Air Passenger Duty.
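The mechanics of that levy are easy to sketch. The figures below are invented purely for illustration – no actual proposed rates are implied – but they show how a levy that escalates with each additional flight spares the occasional holidaymaker while weighing on frequent flyers, unlike a flat per-flight duty:

```python
# Illustrative comparison of an escalating frequent flyer levy with a
# flat per-flight duty. All rates are invented for this sketch.
def frequent_flyer_levy(flights_per_year: int, step: float = 25.0) -> float:
    """First flight free, then £25, £50, £75, ... per additional flight."""
    return sum(step * n for n in range(flights_per_year))

def flat_duty(flights_per_year: int, rate: float = 26.0) -> float:
    """A flat charge on every flight, in the style of Air Passenger Duty."""
    return rate * flights_per_year

for flights in (1, 2, 6, 12):
    print(f"{flights:>2} flights a year: escalating levy £{frequent_flyer_levy(flights):>5.0f}, "
          f"flat duty £{flat_duty(flights):>4.0f}")
```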

Central to the current calls for a Green New Deal is a global investment of 1.5 to 2.5% of global GDP in environmental policies per year. Available policies include targeted tax incentives and subsidies, land reform, transport electrification, green skills training, the expansion of carbon pricing and the rapid construction of renewable energy infrastructure. Green quantitative easing will also allow for a rapid influx of financial investment into communities, enabling community-led sustainability projects.

These policies will function as powerful job-creators, with significant gains in employment numbers when compared to the relative numbers of those employed within a continued fossil fuel economy. Furthermore, rather than representing financial costs to be spent and lost, they represent an investment – with the environmental and social benefits of these policies leading to far greater economic returns.

Key, however, is where in the UK these policies will be implemented. Introducing low-carbon public transport will only go part of the way to addressing issues at the national level. Now is the time to implement these policies in the towns and places already left behind by rapid deindustrialisation – the Scunthorpes, the Welsh Valleys, the lost seaside towns. Already suffering from industrial decline, these places must become the sites of a new decarbonised economy of green investment.

The week before the parliamentary meeting, Common Wealth set out the numerous forms a Green New Deal can take in the post-Brexit UK. It is highly likely that more will follow, with the New Economics Foundation and Greenpeace both putting their own visions together.

For these policies to be successful, they must be accompanied by a strong policy steer from both Parliament and the UK Government. In calling for such expansive investment (likened to “three Marshall Plans and one Apollo moon landing” by Clive Lewis MP, the Labour spokesperson for the Treasury), it is essential that the plan moves beyond mere decarbonisation and towards a holistic approach to mitigating climate breakdown and our role within it. For too long environmental policy has spoken of what is politically feasible, not what is scientifically urgent. Now is the time for that to change.
————————————
This blog is written by Cabot Institute member Dr Ed Atkins, Teaching Fellow, School of Geographical Sciences, University of Bristol.  

Dr Ed Atkins

Decarbonising the UK rail network

Image source: Wikimedia Commons

Caboteer Dr Colin Nolden blogs on a recent All-Party Parliamentary Rail & Climate Change Groups meeting on ‘Decarbonising the UK rail network’.  The event was co-chaired by Martin Vickers MP and Daniel Zeichner MP. Speakers included:

  • Professor Jim Skea, CBE, Imperial College London
  • David Clarke, Technical Director, RIA
  • Anthony Perret, Head of Sustainable Development, RSSB
  • Helen McAllister, Head of Strategic Planning (Freight and National Passenger Operators), Network Rail

The meeting kicked off with a broad overview of the global decarbonisation challenge by Jim Skea. As a former member of the UK’s Climate Change Committee, Co-chair of Working Group III of the Intergovernmental Panel on Climate Change, which oversaw the 1.5°C report published in October 2018, and a member of the Scottish Just Transition Commission, he emphasized that the net-zero target ‘is humongously challenging’. We need to recognise that all aspects of our land, economy and society require change, including lifestyles and behaviours. At the same time, the loophole of buying in permits to ‘offset’ decarbonisation in the UK net-zero target increases uncertainty, as it is unclear what needs to be done territorially. The starting point for decarbonising mobility and many other sectors is nevertheless the decarbonisation of our electricity supply by 2030, as this allows the electrification of energy demand.

The recent International Energy Agency report on the ‘Future of Rail’ was mentioned. It suggests that the rail sector is one of the blind spots of decarbonisation: rail carries 8% of passenger transport and 7% of freight transport, yet accounts for only 2% of transport energy demand. The report concludes that a modal shift and sustainable electrification are necessary to decarbonise transport.

David Clarke pointed towards the difficulties encountered in the electrification of the Great Western line to Bristol and beyond to Cardiff, but stressed that this was not a good measure of future electrification endeavours. Electrification was approached too ambitiously in 2009, following the 20-year electrification hiatus. Novel technology and deadlines with fixed time scales implied higher costs on the Great Western line. Current electrification phases, such as the Bristol–Cardiff stretch, are being developed within the cost envelope. A problem now lies in the lack of further planned electrification, as there is a danger of demobilising the relevant teams. Such a hiatus could once again lead to teething problems when electrification is prioritised again. Bimodal trains that have accompanied electrification on the Great Western line will continue to play an important role in ongoing electrification, as they allow at least part of the journey to be completed free of fossil fuels.

Anthony Perret mentioned the RSSB’s role in the ongoing development of a rail system decarbonisation strategy. The ‘what’ report was published in January 2019 and the ‘how’ report is still being drafted. Given that 70% of journeys and 80% of passenger kilometres are already electrified, he suggested that new technology combinations such as hydrogen and battery will need to be tested to fill the gap where electrification is not economically viable. Hydrogen is likely to be a solution for longer distances and higher speeds, while batteries are more likely to be suitable for discontinuous electrification, such as the ‘bridging’ of bridges and tunnels. Freight traction demands high power, which currently only diesel or full 25kV electrification can provide. Anthony finished with a word of caution regarding rail governance complexities: rail system governance needs an overhaul if it is not to hinder decarbonisation.

Helen McAllister is engaged in a task force to establish what funding needs to be made available for deliverable, affordable and efficient solutions. Particular interest lies in the ‘middle’, where full electrification is not economically viable but where the promising combinations of technologies that Anthony mentioned might provide appropriate solutions. This is where the emphasis on innovation will be placed and where economic cases are sought. It is particularly relevant to the Riding Sunbeams project I am involved with, as discontinuous and innovative electrification is one of the avenues we are pursuing. However, Helen highlighted the failure of current analytical tools to take carbon emissions into account. The ‘Green Book’ requires revision to place more emphasis on environmental outcomes and to specify the ‘bang for your buck’ in terms of carbon, making it a driving factor in decision-making. At the same time, she suggested that busy commuter lines that are the obvious choice for electrification are also likely to score highest on decarbonisation.

David pointed out that despite ambitious targets in place, new diesel rolling stock that was ordered before decarbonisation took priority will only be put in service in 2020 and will in all likelihood continue running until 2050. This is an indication of the lock-in associated with durable rail assets that Jim Skea also strongly emphasized as a challenge to overcome. Transport for Wales, on the other hand, are already looking into progressive decarbonisation options, which include Riding Sunbeams, along with four other progressive decarbonisation projects currently being implemented. Helen agreed that diesel will continue to have a role to play but that franchise specification for rolling stock regarding passenger rail and commercial specification regarding freight rail can help move the retirement date forward.

Comments and questions from the audience suggest that the decarbonisation challenge is galvanising the industry, with both rolling stock companies and manufacturers putting their weight behind progressive solutions. Ultimately, more capacity for rail is required to enable a modal shift towards sustainable rail transport. In this context, Helen stressed the need to apply the same net-zero criteria across all industries, from aviation to railways, to ensure that all sectors engage in the same challenge. Leo Murray from Riding Sunbeams asked whether unelectrified railway lines into remote areas such as the Scottish Highlands, Mid-Wales and Cornwall could be electrified with overhead electricity transmission lines that transmit power from such remote areas to urban centres, with rail electrification as a by-product. Chair Daniel Zeichner pointed towards a project that seeks to connect Calais and Folkestone with a thick DC cable through the Channel Tunnel, and this is something we will follow up with some of the speakers.

In conclusion, Anthony pointed towards the Rail Carbon Tool which will help measure capital carbon involved in all projects above a certain size from January 2020 onwards as a step in the right direction. David pointed toward increasing collaboration with the advanced propulsion centre at Cranfield University to cross-fertilise innovative solutions across different mobility sectors.
Overall it was an intense yet enjoyable hour in a sticky room packed full of sustainable rail enthusiasts. Although this might evoke images of grey hair, ill-fitting suits and the odd pair of trainspotting binoculars, it was refreshing to see so many ideas and so much enthusiasm brought to the fore by a topic as mundane as ‘decarbonising the UK rail network’.

——————————————-
This blog is written by Dr Colin Nolden, Vice-Chancellor’s Fellow, University of Bristol Law School and Cabot Institute for the Environment.

Colin is currently leading a new Cabot Institute Masters by Research project on a new energy system architecture. This project will involve close engagement with community energy organizations to assess technological and business model feasibility. Sound up your street? Find out more about this masters on our website.

Indoor air pollution: The ‘killer in the kitchen’

Image credit Clean Cooking Alliance.

Approximately 3 billion people around the world rely on biomass fuels such as wood, charcoal and animal dung, which they burn on open fires or in inefficient stoves to meet their daily cooking needs.

Relying on these types of fuels and cooking technologies is a major contributor to indoor air pollution and has serious negative health impacts, including acute respiratory illnesses, pneumonia, strokes, cataracts, heart disease and cancer.

The World Health Organization estimates that indoor air pollution causes nearly 4 million premature deaths annually worldwide – more than the deaths caused by malaria and tuberculosis combined. This led the World Health Organization to label household air pollution “The Killer in the Kitchen”.

As illustrated on the map below, most deaths from indoor air pollution occur in low- and middle-income countries across Africa and Asia. Women and children are disproportionately exposed to the risks of indoor air pollution as they typically spend the most time cooking.

Number of deaths attributable to indoor air pollution in 2017. Image credit Our World in Data.
Replacing open fires and inefficient stoves with modern, cleaner solutions is essential to reduce indoor air pollution and personal exposure to emissions. However, research suggests that only significant reductions in exposure can tangibly reduce negative health impacts.
The Clean Cooking Alliance, established in 2010, has focused mainly on the dissemination of improved cookstoves (ICS) – wood-burning or charcoal stoves designed to be much more efficient than more traditional models – with some success.
Randomised control trials of sole use of ICS have shown reductions in pneumonia and the duration of respiratory infections in children. However, other studies, including some funded by the Alliance, have shown that ICS have not performed well enough in the field to sufficiently reduce indoor air pollution to lessen health risks such as pneumonia and heart disease.
Alternative fuels such as liquefied petroleum gas (LPG), biogas and ethanol present other options for cooking, with LPG already prevalent in many countries across the world.
LPG is clean-burning and produces much less carbon dioxide than burning biomass but is still a fossil fuel.
Biogas is a clean, renewable fuel made from organic waste, and ethanol is a clean biofuel made from a variety of feedstocks.
Image credit PEEDA

Electric cooking, once seen as a pipe dream for developing countries, is becoming more feasible and affordable due to improvements and reductions in costs of technologies like solar panels and batteries.
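A back-of-envelope calculation suggests why. Every figure below is an illustrative assumption rather than design guidance, but the arithmetic shows the scale of system that daily electric cooking demands:

```python
# Rough sizing of a solar-plus-battery system for daily electric cooking.
# All inputs are illustrative assumptions, not recommendations.
daily_cooking_energy_kwh = 1.5   # plausible daily use of an efficient electric cooker
peak_sun_hours = 5.0             # typical solar resource at low latitudes
system_efficiency = 0.75         # combined charging, storage and conversion losses
depth_of_discharge = 0.8         # usable fraction of battery capacity

panel_kw = daily_cooking_energy_kwh / (peak_sun_hours * system_efficiency)
battery_kwh = daily_cooking_energy_kwh / (system_efficiency * depth_of_discharge)

print(f"solar panels needed: {panel_kw:.2f} kW")
print(f"battery needed:      {battery_kwh:.2f} kWh")
```

As panel and battery prices keep falling, systems of this size move within reach of households and energy-access programmes.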

Improved cookstoves, alternative fuels and electric cooking have been gaining traction, but there is still a long way to go to solve the deadly problem of indoor air pollution.
———————-
This blog is written by Cabot Institute member Peter Thomas, Faculty of Engineering, University of Bristol. Peter’s research focusses on access to energy in humanitarian relief. This blog is co-written by Will Clements, Faculty of Engineering.

How we traced ‘mystery emissions’ of CFCs back to eastern China

Since being ratified in the late 1980s, the Montreal Protocol – the treaty charged with healing the ozone layer – has been wildly successful in causing large reductions in emissions of ozone depleting substances. Along the way, it has also averted a sizeable amount of global warming, as those same substances are also potent greenhouse gases. No wonder the ozone process is often held up as a model of how the international community could work together to tackle climate change.

However, new research we have published with colleagues in Nature shows that global emissions of the second most abundant ozone-depleting gas, CFC-11, have increased globally since 2013, primarily because of increases in emissions from eastern China. Our results strongly suggest a violation of the Montreal Protocol.

A global ban on the production of CFCs has been in force since 2010, due to their central role in depleting the stratospheric ozone layer, which protects us from the sun’s ultraviolet radiation. Since global restrictions on CFC production and use began to bite, atmospheric scientists had become used to seeing steady or accelerating year-on-year declines in their concentration.

Ozone-depleting gases, measured in the lower atmosphere. Decline since the early 1990s is primarily due to the controls on production under the Montreal Protocol. AGAGE / CSIRO

But bucking the long-term trend, a strange signal began to emerge in 2013: the rate of decline of the second most abundant CFC was slowing. Before it was banned, the gas, CFC-11, was used primarily to make insulating foams. This meant that any remaining emissions should be due to leakage from “banks” of old foams in buildings and refrigerators, which should gradually decline with time.

But in that study published last year, measurements from remote monitoring stations suggested that someone was producing and using CFC-11 again, leading to thousands of tonnes of new emissions to the atmosphere each year. Hints in the data available at the time suggested that eastern Asia accounted for some unknown fraction of the global increase, but it was not clear where exactly these emissions came from.

Growing ‘plumes’ over Korea and Japan

Scientists, including ourselves, immediately began to look for clues from other measurements around the world. Most monitoring stations, primarily in North America and Europe, were consistent with gradually declining emissions in the nearby surrounding regions, as expected.
But all was not quite right at two stations: one on Jeju Island, South Korea, and the other on Hateruma Island, Japan.

These sites showed “spikes” in concentration when plumes of CFC-11 from nearby industrialised regions passed by, and these spikes had got bigger since 2013. The implication was clear: emissions had increased from somewhere nearby.

To further narrow things down, we ran computer models that could use weather data to simulate how pollution plumes travel through the atmosphere.

Atmospheric observations at Gosan and Hateruma monitoring stations showed an increase in CFC-11 emissions from China, primarily from Shandong, Hebei and surrounding provinces. Rigby et al, Author provided

From the simulations and the measured concentrations of CFC-11, it became apparent that a major change had occurred over eastern China. Emissions between 2014 and 2017 were around 7,000 tonnes per year higher than during 2008 to 2012. This represents more than a doubling of emissions from the region, and accounts for at least 40% to 60% of the global increase. In terms of the impact on climate, the new emissions are roughly equivalent to the annual CO₂ emissions of London.
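The estimation step behind numbers like these can be sketched in a few lines. The toy example below uses entirely invented values – real studies use transport-model footprints, thousands of observations and formal uncertainty estimates – but it shows the basic idea: treat the simulated sensitivity of each measurement to each region’s emissions as a matrix, then solve for the emissions that best explain the observed pollution spikes:

```python
# Toy emissions inversion: solve for regional emissions from observed
# concentration enhancements. All numbers are invented for illustration.
import numpy as np

regions = ["eastern China", "Korea", "Japan", "elsewhere"]

# H[i, j]: modelled enhancement at observation i per unit emission from region j.
H = np.array([
    [0.80, 0.10, 0.05, 0.02],
    [0.60, 0.30, 0.05, 0.02],
    [0.70, 0.05, 0.20, 0.02],
    [0.75, 0.15, 0.10, 0.02],
])

true_emissions = np.array([7.0, 1.0, 0.5, 2.0])    # kt/yr, invented
rng = np.random.default_rng(1)
y = H @ true_emissions + rng.normal(0.0, 0.05, 4)  # "observed" enhancements

estimate, *_ = np.linalg.lstsq(H, y, rcond=None)
for region, kt in zip(regions, estimate):
    print(f"{region:>14}: {kt:6.1f} kt/yr")

# Note how "elsewhere", which barely touches any station, comes back poorly
# constrained - exactly the monitoring blind spot discussed later in the piece.
```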

The most plausible explanation for such an increase is that CFC-11 was still being produced, even after the global ban, and on-the-ground investigations by the Environmental Investigation Agency and the New York Times seemed to confirm continued production and use of CFC-11 even in 2018, although they weren’t able to determine how significant it was.

While it’s not known exactly why production and use of CFC-11 apparently restarted in China after the 2010 ban, these reports noted that some foam producers may not have been willing to transition to second-generation substitutes (HFCs and other gases, which are not harmful to the ozone layer) as the supply of first-generation substitutes (HCFCs) was becoming restricted for the first time in 2013.

Bigger than the ozone hole

Chinese authorities have said they will “crack-down” on any illegal production. We hope that the new data in our study will help. Ultimately, if China successfully eliminates the new emissions sources, then the long-term negative impact on the ozone layer and climate could be modest, and a megacity-sized amount of CO₂-equivalent emissions would be avoided. But if emissions continue at their current rate, it could undo part of the success of the Montreal Protocol.

 

The network of global (AGAGE) and US-run (NOAA) monitoring stations. Luke Western, Author provided

While this story demonstrates the critical value of atmospheric monitoring networks, it also highlights a weakness of the current system. As pollutants quickly disperse in the atmosphere, and as there are only so many measurement stations, we were only able to get detailed information on emissions from certain parts of the world.

Therefore, if the major sources of CFC-11 had been a few hundred kilometres further to the west or south in China, or in unmonitored parts of the world, such as India, Russia, South America or most of Africa, the puzzle would remain unsolved. Indeed, there are still parts of the recent global emissions rise that remain unattributed to any specific region.

When governments and policy makers are armed with this atmospheric data, they will be in a much better position to consider effective measures. Without it, detective work is severely hampered.


—————————
This blog is written by Cabot Institute member Dr Matt Rigby, Reader in Atmospheric Chemistry, University of Bristol; Luke Western, Research Associate in Atmospheric Science, University of Bristol; and Steve Montzka, Research Chemist, NOAA ESRL Global Monitoring Division, University of Colorado. This article is republished from The Conversation under a Creative Commons license. Read the original article.

Listen to Matt Rigby talk about CFC emissions on BBC Radio 4’s Inside Science programme.

Global warming ‘hiatus’ is the climate change myth that refuses to die

riphoto3 / shutterstock

The record-breaking, El Niño-driven global temperatures of 2016 have given climate change deniers a new trope. Why, they ask, hasn’t it since got even hotter?

In response to a recent US government report on the impact of climate change, a spokesperson for the science-denying American Enterprise Institute think-tank claimed that “we just had […] the biggest drop in global temperatures that we have had since the 1980s, the biggest in the last 100 years.”

These claims are blatantly false: the past two years were two of the three hottest on record, and the drop in temperature from 2016 to 2018 was less than, say, the drop from 1998 (a previous record hot year) to 2000. But, more importantly, these claims use the same kind of misdirection as was used a few years ago about a supposed “pause” in warming lasting from roughly 1998 to 2013.

At the time, the alleged pause was cited by many people sceptical about the science of climate change as a reason not to act to reduce greenhouse pollution. US senator and former presidential candidate Ted Cruz frequently argued that this lack of warming undermined dire predictions by scientists about where we’re heading.

However, drawing conclusions on short-term trends is ill-advised because what matters to climate change is the decade-to-decade increase in temperatures rather than fluctuations in warming rate over a few years. Indeed, if short periods were suitable for drawing strong conclusions, climate scientists should perhaps now be talking about a “surge” in global warming since 2011, as shown in this figure:

Global temperature observations compared to climate models. Climate-disrupting volcanoes are shown at the bottom, and the purported hiatus period is shaded. 2018 values based on year to date (YTD). NASA; Berkeley Earth; various climate models. Author provided

The “pause” or “hiatus” in warming of the early 21st century is not just a talking point of think-tanks with radical political agendas. It also features in the scientific literature, including in the most recent report of the Intergovernmental Panel on Climate Change and more than 200 peer-reviewed articles.

Research we recently published in Environmental Research Letters addresses two questions about the putative “pause”: first, is there compelling evidence in the temperature data alone of something unusual happening at the start of the 21st century? Second, did the rise in temperature lag behind projections by climate models?

In both cases the answer is “no”, but the reasons are interesting.

Reconstructing a historical temperature record from instruments designed for other purposes, such as weather forecasting, is not always easy. Several problems have affected temperature estimates for the period since 2000. The first of these was the fact that the uneven geographical distribution of weather stations can influence the apparent rate of warming. Other factors include changes in the instruments used to measure ocean temperatures. Most of these factors were known at the time and reported in the scientific literature, but because the magnitudes of the effects were unknown, users of temperature data (from science journalists to IPCC authors) were in a bind when interpreting their results.

‘This glacier was here in 1908’: warming might fluctuate, but the long-term trend is clear.
Matty Symons/Shutterstock

A more subtle problem arises when we ask whether a fluctuation in the rate of warming is a new phenomenon, rather than the kind of variation we expect due to natural fluctuations of the climate system. Different statistical tests are needed to determine whether a phenomenon is interesting, depending on how the data are chosen. In a nutshell, if you select data based on them being unusual in the first place, then any statistical tests that seemingly confirm their unusual nature give the wrong answer. (The statistical issue here is similar to the fascinating but counterintuitive “Monty Hall problem”, which has caught out many mathematicians).
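The pitfall is easy to reproduce numerically. In the minimal sketch below, with all parameters invented for illustration, every synthetic “world” warms at a steady rate with random year-to-year variability; picking the weakest 15-year window in each world makes warming look far slower than it really is:

```python
# Monte Carlo sketch of the selection effect: cherry-picking the weakest
# 15-year trend from a steadily warming, noisy series biases the apparent
# warming low. Entirely synthetic data; illustration only.
import numpy as np

rng = np.random.default_rng(7)
n_years, window, trend = 60, 15, 0.02   # record length, window length, °C/yr

def weakest_window_trend(series: np.ndarray) -> float:
    """Slope (°C/yr) of the 15-year window with the smallest trend."""
    slopes = [np.polyfit(np.arange(window), series[i:i + window], 1)[0]
              for i in range(len(series) - window + 1)]
    return min(slopes)

# Many synthetic "worlds": steady warming plus year-to-year noise (e.g. ENSO).
picked = [weakest_window_trend(trend * np.arange(n_years)
                               + rng.normal(0.0, 0.15, n_years))
          for _ in range(2000)]

print(f"underlying trend:          {trend:+.3f} °C/yr")
print(f"mean cherry-picked trend:  {np.mean(picked):+.3f} °C/yr")
```

Testing that hand-picked window as though it had been chosen at random then “confirms” a slowdown that was never there.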

When the statistical test is applied correctly, the apparent slowdown in warming is no more significant than other fluctuations in the rate of warming over the past 40 years. In other words, there is no compelling evidence that the supposed “pause” period is different from other previous periods. Neither is the deviation between the observations and climate model projections larger than would be expected.

That’s not to say that such “wiggles” in the temperature record are uninteresting – several of our team are involved in further studies of these fluctuations, and the study of the “pause” has yielded interesting new insights into the climate system – for example, the role of changes in the Atlantic and Pacific oceans.

There are lessons here for the media, for the public, and for scientists.

For scientists, there are two lessons: first, when you get to know a dataset by using it repeatedly in your work, make sure you also still remember the limitations you read about when first downloading it. Second, remember that your statistical choices are always part of a cascade of decisions, and at least occasionally those decisions must be revisited.

For the public and the media, the lesson is to check claims about the data. In particular, when claims are made based on short periods or specific datasets, they are often designed to mislead. If someone claims the world hasn’t warmed since 1998 or 2016, ask them why those specific years – why not 1997 or 2014? Why have such short limits at all? And also check how reliable similar claims have been in the past.

The technique of misinformation is nicely described in a quote attributed to climate researcher Michael Tobis:

“If a large data set speaks convincingly against you, find a smaller and noisier one that you can huffily cite.”

Global warming didn’t stop in 1998. Don’t be fooled by claims that it stopped in 2016 either. There is only one thing that will stop global warming: cuts to greenhouse gas emissions.

———————————
This blog is written by Kevin Cowtan, Professor of Chemistry, University of York and Professor Stephan Lewandowsky, Chair of Cognitive Psychology, University of Bristol Cabot Institute. This article is republished from The Conversation under a Creative Commons license. Read the original article.

Kevin Cowtan
Stephan Lewandowsky