Three history lessons to help reduce damage from earthquakes

‘Earthquakes don’t kill people,’ the saying goes. ‘Buildings do.’
There is truth in the adage: the majority of deaths during and just after earthquakes are due to the collapse of buildings. But the violence of great catastrophes is not confined to collapsed walls and falling roofs. Earthquakes also have broader effects on people, and the environments we live in.

The United Nations Economic and Social Commission for Asia and the Pacific (ESCAP)’s second Disaster Resilience Week starts in Bangkok on 26 August 2019. Practitioners and researchers have achieved great progress in reducing disaster risk over the past few decades, but we must do more to save lives and protect livelihoods.

Can history help?

Building against disaster

Buildings are a good, practical place to start.

Material cultures offer paths to resilience. A major example is traditional building styles that reduce the threat from seismic shaking. A building is not only an assembly of bricks and stones but also a social artefact that reflects the cultural life of a community. This is the powerful point made by the Kathmandu-based NGO, the National Society for Earthquake Technology (NSET), in a recent report on traditional Nepalese building styles.

NSET, and others working in the field, have identified features of traditional building styles that limit damage during shaking. For example, diagonal struts distribute the load of a roof and limit damage during earthquake shaking.

Historic building with diagonal struts at Patan Durbar Square, Kathmandu, Nepal. Photo: Daniel Haines, 2017

This is important because parts of falling buildings often kill people.

Nearby, in the Himalayan kingdom of Bhutan, the royal government is investigating the earthquake-resistant features of traditional rammed-earth buildings.

An old (c. 400 years?) rammed-earth residential building near Paro, Bhutan. Photo: Daniel Haines, 2017

In fact, seismically-appropriate building styles have evolved along similar lines across a huge Eurasian arc of tectonic unrest, from Italy to Kashmir.

But in most countries, population pressure and the use of cheap, unreinforced concrete construction in growing towns and cities have crowded out traditional construction methods.

Reducing disaster risk always means weighing costs in the present against potential protection in the future. Recovering or encouraging traditional methods is potentially cheaper than enforcing modern seismic engineering.

Long-term health impacts

Focusing only on buildings, though, neglects other important aspects of large earthquakes. These shocks do not only shake buildings down, but can dramatically re-shape landscapes by causing huge landslides, changing the level of water in rivers and leading to flooding.

History shows that these changes can hurt people for months or years after the rubble has been cleared and reconstruction has begun.

For example, a giant (8.4 Mw) earthquake struck northeast India in 1897. Its epicentre was near Shillong, in the borderlands between British India and China. Luckily, the quake happened in the afternoon, so most people were out of doors. The official death toll – the number of deaths that the colonial government attributed directly to the earthquake – was around 1,500.

Yet officials also thought the poor health conditions that followed the earthquake and the substantial floods that it caused were largely responsible for a major cholera epidemic which killed 33,000 people in the Brahmaputra Valley during the same year. That is twice as many as the previous year.

From the available evidence, it is not yet clear how directly the earthquake and the cholera deaths were linked, but similar patterns appear in other cases. In 1934, another major (8.0 Mw) quake devastated parts of Nepal and North India.

This time, the official death toll in India was around 7,500, but again many more people died from related health complications over the following years. In one district in northern Bihar province, an average of 55,000 people died of fever every year over the next decade. In other areas, malaria was unusually prevalent over the same period.

Government reports held secondary effects of the earthquake responsible for the high death rate.

Events that happened long ago therefore demonstrate the complexity of earthquakes’ impacts, even on the relatively straightforward question of mortality. Studying them highlights the need to focus present-day disaster responses on long-term health implications.

Of course, this says nothing of earthquakes’ less concrete, but very important, impacts on social structures, community life, governance or the economy.

History in action

In some cases, historical researchers are contributing directly to initiatives to reduce risk from natural disasters.

Hurricane Katrina showed in 2005 that low-lying New Orleans is terribly vulnerable to storm surge and flooding. Craig Colten, a historical geographer at Louisiana State University, is working with a team of scientists to find solutions by raising the height of the ground in parts of the city while adding forested wetlands on its north shore. Colten is studying analogous historical efforts in other American cities – flood-control measures in nineteenth-century Chicago and responses to hurricanes in Galveston, Texas, around 1900 – as well as examining previous proposals for creating buffers between New Orleans and the sea.

These historical examples provide evidence of what works and what does not. They also highlight the politics of decision-making that help determine whether local communities will support landscape engineering projects.

The international frameworks governing disaster risk reduction, such as the Sendai Framework for Disaster Risk Reduction and the Sustainable Development Goals, understandably focus on the present, not the past. Historians need to join the conversation to show practitioners that lessons from the past can help build resilience in the future.

———————————
This blog is written by Cabot Institute member Dr Daniel Haines, an environmental historian at the University of Bristol.

Dr Daniel Haines

Climate-driven extreme weather is threatening old bridges with collapse

The recent collapse of a bridge in Grinton, North Yorkshire, raises lots of questions about how prepared we are for these sorts of risks. The bridge, which was due to be on the route of the cycling world championships in September, collapsed after a month’s worth of rain fell in just four hours, causing flash flooding.

Grinton is the latest in a series of such collapses. In 2015, first Storm Eva and then Storm Frank caused flooding which collapsed the 18th century Tadcaster bridge, also in North Yorkshire, and badly damaged the medieval-era Eamont bridge in nearby Cumbria. Floods in 2009 collapsed or severely damaged 29 bridges in Cumbria alone.

With climate change making this sort of intense rainfall more common in future, people are right to wonder whether we’ll see many more such bridge collapses. And if so – which bridges are most at risk?

In 2014 the Tour de France passed over the now-destroyed bridge near Grinton. Tim Goode/PA

We know that bridges can collapse for various reasons. Some are simply old and already crumbling. Others fall down because of defective materials or environmental processes such as flooding, corrosion or earthquakes. Bridges have even collapsed after ships crash into them.

Europe’s first major roads and bridges were built by the Romans. This infrastructure developed hugely during the industrial revolution, then much of it was rebuilt and transformed after World War II. But since then, various factors have increased the pressure on bridges and other critical structures.

For instance, when many bridges were first built, traffic mostly consisted of pedestrians, animals and carts – an insignificant load for heavy-weight bridges. Yet over the decades private cars and trucks have got bigger, heavier and faster, while the sheer number of vehicles has massively increased.

Different bridges run different risks

Engineers in many countries think that numerous bridges could have reached the end of their expected life spans (between 50 and 100 years). However, we do not know which bridges are most at risk, because there is no national database or method for identifying structures at risk. Since different types of bridge are sensitive to different failure mechanisms, understanding the bridge stock is the first step towards effective risk management of these assets.

Newcastle’s various bridges all have different risks. Shaun Dodds / shutterstock

In Newcastle, for example, seven bridges over the river Tyne connect the city to the town of Gateshead. These bridges vary in function (pedestrian, road and railway), material (from steel to concrete) and age (17 to 150 years old). The risk and type of failure for each bridge is therefore very different.

Intense rain will become more common

Flooding is recognised as a major threat in the UK’s National Risk Register of Civil Emergencies. And though the Met Office’s latest set of climate projections shows an increase in average rainfall in winter and a decrease in average rainfall in summer, rainfall is naturally very variable. Flooding is caused by particularly heavy rain so it is important to look at how the extremes are changing, not just the averages.

Warmer air can hold more moisture and so it is likely that we will see increases in heavy rainfall, like the rain that caused the flash floods at Grinton. High resolution climate models and observational studies also show an intensification of extreme rainfall. This all means that bridge collapse from flooding is more likely in the future.

To reduce future disasters, we need an overview of our infrastructure, including assessments of change of use, ageing and climate change. A national bridge database would enable scientists and engineers to identify and compare risks to bridges across the country, on the basis of threats from climate change.
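At its core, such a database could be very simple. The sketch below is purely illustrative – the record fields, risk ratings and bridge entries are hypothetical, not drawn from any real asset register – but it shows how even a minimal structure would let engineers filter and rank bridges by age, material and flood exposure:

```python
from dataclasses import dataclass

@dataclass
class Bridge:
    """One record in a hypothetical national bridge database."""
    name: str
    year_built: int
    material: str        # e.g. "masonry", "steel", "concrete"
    function: str        # "road", "rail" or "pedestrian"
    flood_exposure: int  # illustrative rating, 1 (low) to 5 (high)

def age(bridge: Bridge, now: int = 2019) -> int:
    return now - bridge.year_built

# Illustrative entries only -- not real survey data.
stock = [
    Bridge("Tadcaster", 1700, "masonry", "road", 5),
    Bridge("Tyne High Level", 1849, "steel", "rail", 2),
    Bridge("Millennium", 2001, "steel", "pedestrian", 1),
]

# Flag bridges past a nominal 100-year design life at flood-prone sites.
at_risk = [b.name for b in stock
           if age(b) > 100 and b.flood_exposure >= 4]
print(at_risk)  # ['Tadcaster']
```

A real register would of course need inspection histories, structural condition grades and hydrological data, but even this toy schema illustrates how risks could be compared consistently across the country.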



This blog is written by Cabot Institute member Dr Maria Pregnolato, Lecturer in Civil Engineering, University of Bristol and Elizabeth Lewis, Lecturer in Computational Hydrology, Newcastle University.  This article is republished from The Conversation under a Creative Commons license. Read the original article.

Climate change: sea level rise could displace millions of people within two generations

A small boat in the Illulissat Icefjord is dwarfed by the icebergs that have calved from the floating tongue of Greenland’s largest glacier, Jacobshavn Isbrae. Image credit: Michael Bamber

Antarctica is further from civilisation than any other place on Earth. The Greenland ice sheet is closer to home but around one tenth the size of its southern sibling. Together, these two ice masses hold enough frozen water to raise global mean sea level by 65 metres if they were to suddenly melt. But how likely is this to happen?

The Antarctic ice sheet is around one and a half times larger than Australia. What’s happening in one part of Antarctica may not be the same as what’s happening in another – just as the east and west coasts of the US can experience very different responses to, for example, a change in the El Niño weather pattern. El Niño is a periodic climate event that results in wetter conditions across the southern US, warmer conditions in the north and drier weather on the north-eastern seaboard.

The ice in Antarctica is nearly 5km thick in places and we have very little idea what the conditions are like at the base, even though those conditions play a key role in determining how quickly the ice can respond to climate change, including how fast it can flow toward and into the ocean. A warm, wet base lubricates the bedrock beneath the ice and allows the ice to slide over it.

Though invisible from the surface, melting within the ice can speed up the process by which ice sheets slide towards the sea. Gans33/Shutterstock

These issues have made it particularly difficult to produce model simulations of how ice sheets will respond to climate change in future. Models have to capture all the processes and uncertainties that we know about and those that we don’t – the “known unknowns” and the “unknown unknowns” as Donald Rumsfeld once put it. As a result, several recent studies suggest that previous Intergovernmental Panel on Climate Change reports may have underestimated how much melting ice sheets will contribute to sea level in future.

What the experts say

Fortunately, models are not the only tools for predicting the future. Structured Expert Judgement is a method that one of us used in a study published in 2013. Experts give their judgements on a hard-to-model problem, and those judgements are combined in a way that takes into account how good each expert is at assessing their own uncertainty. This provides a rational consensus.
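The weighting idea can be illustrated with a toy calculation. In performance-weighted pooling (the principle behind the "classical model" of expert elicitation, on which Structured Expert Judgement builds), each expert's answer is weighted by a score reflecting how well they quantified uncertainty on calibration questions with known answers. The numbers below are invented purely for illustration and are not from the actual study:

```python
# Toy performance-weighted pooling. Each expert gives a best estimate
# (say, an ice sheet contribution to sea level in cm) and carries a
# calibration score earned on test questions with known answers.
# All numbers are invented for illustration.
estimates = {"expert_A": 20.0, "expert_B": 35.0, "expert_C": 50.0}
scores    = {"expert_A": 0.6,  "expert_B": 0.3,  "expert_C": 0.1}

# Normalise scores into weights that sum to 1.
total = sum(scores.values())
weights = {e: s / total for e, s in scores.items()}

# The pooled estimate leans towards better-calibrated experts,
# unlike a simple equal-weight average.
pooled = sum(weights[e] * estimates[e] for e in estimates)
equal = sum(estimates.values()) / len(estimates)
print(pooled, equal)  # 27.5 (weighted) vs 35.0 (equal-weight)
```

The real method pools full probability distributions rather than point estimates, but the principle is the same: experts who demonstrably judge uncertainty well count for more.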

The approach has been used when the consequences of an event are potentially catastrophic, but our ability to model the system is poor. These include volcanic eruptions, earthquakes, the spread of vector-borne diseases such as malaria and even aeroplane crashes.

Since the study in 2013, scientists modelling ice sheets have improved their models by trying to incorporate processes that cause positive and negative feedback. Impurities on the surface of the Greenland ice sheet cause positive feedback as they enhance melting by absorbing more of the sun’s heat. The stabilising effect of bedrock rising as the overlying ice thins, lessening the weight on the bed, is an example of negative feedback, as it slows the rate that the ice melts.

The record of observations of ice sheet change, primarily from satellite data, has also grown in length and quality, helping to improve knowledge of the recent behaviour of the ice sheets.

With colleagues from the UK and US, we undertook a new Structured Expert Judgement exercise. With all the new research, data and knowledge, you might expect the uncertainties around how much ice sheet melting will contribute to sea level rise to have got smaller. Unfortunately, that’s not what we found. What we did find was a range of future outcomes that go from bad to worse.

Reconstructed sea level for the last 2,500 years. Note the marked increase in rate since about 1900, which is unprecedented over the whole period. Robert Kopp/Kopp et al. (2016).

Rising uncertainty

We gathered together 22 experts in the US and UK in 2018 and combined their judgements. The results are sobering. Rather than shrinking over the last six years, the uncertainty around future ice sheet behaviour has grown.

If the global temperature increase stays below 2°C, the experts’ best estimate of the average contribution of the ice sheets to sea level was 26cm. They concluded, however, that there is a 5% chance that the contribution could be as much as 80cm.

If this is combined with the two other main factors that influence sea level – glaciers melting around the world and the expansion of ocean water as it warms – then global mean sea level rise could exceed one metre by 2100. If this were to occur, many small island states would experience their current once-in-a-hundred-year flood every other day and become effectively uninhabitable.
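A rough back-of-envelope check shows how the components stack up to that one-metre figure. Simply adding them ignores the correlations between the components, and the glacier and thermal-expansion values below are illustrative assumptions, not figures from the study:

```python
# Back-of-envelope combination of sea level rise components by 2100
# under roughly 2°C of warming. The ice sheet figure is the 5% upper
# value from the expert elicitation; the other two are assumed,
# illustrative contributions.
ice_sheets_cm = 80   # 5% chance of at least this much (from the study)
glaciers_cm = 15     # assumed contribution from mountain glaciers
thermal_cm = 25      # assumed contribution from ocean thermal expansion

total_cm = ice_sheets_cm + glaciers_cm + thermal_cm
print(total_cm)  # 120 -- the naive sum exceeds one metre
```

A proper assessment would combine full probability distributions for each component, but the toy sum shows why the upper tail of the ice sheet estimate pushes the plausible total past a metre.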

A climate refugee crisis could dwarf all previous forced migrations. Punghi/Shutterstock

For a climate change scenario closer to business as usual – where our current trajectory for economic growth continues and global temperatures increase by 5°C – the outlook is even bleaker. The experts’ best estimate average in this case is 51cm of sea level rise caused by melting ice sheets by 2100, but with a 5% chance that global sea level rise could exceed two metres by 2100. That has the potential to displace some 200m people.

Let’s try to put this into context. The Syrian refugee crisis is estimated to have caused about a million people to migrate to Europe. That occurred over years rather than a century, giving countries much less time to adjust. Still, migration driven by sea level rise on this scale might threaten the existence of nation states and result in unimaginable stress on resources and space. There is time to change course, but not much, and the longer we delay, the bigger the mountain we have to climb.

This blog was written by Cabot Institute member Jonathan Bamber, Professor of Physical Geography, University of Bristol and Michael Oppenheimer, Professor of Geosciences and International Affairs, Princeton University.  This article is republished from The Conversation under a Creative Commons license. Read the original article.

Should we engineer the climate? A social scientist and natural scientist discuss

Ekaterina Karpacheva/Shutterstock.com

This is an article from The Conversation’s Head to Head, a series in which academics from different disciplines chew over current debates. Let us know what else you’d like covered – all questions welcome. Details of how to contact us are at the end of the article.


Rob Bellamy: 2018 has been a year of unprecedented weather extremes around the world. From the hottest temperatures ever recorded in Japan to the largest wildfire in the history of California, the frequency and intensity of such events have been made much more likely by human-induced climate change. They form part of a longer-term trend – observed in the past and projected into the future – that may soon make nations desperate enough to consider engineering the world’s climate deliberately in order to counteract the risks of climate change.

Indeed, the spectre of climate engineering hung heavily over the recent United Nations climate conference in Katowice, COP24, where it featured in several side events as negotiators agreed on how to implement the landmark 2015 Paris Agreement – a deal that many worry does not go far enough.

Matt Watson: Climate engineering – or geoengineering – is the purposeful intervention in the climate system to reduce the worst side effects of climate change. There are two broad types: greenhouse gas removal (GGR) and solar radiation management (SRM). GGR focuses on removing anthropogenically emitted gases from the atmosphere, directly reducing the greenhouse effect. SRM, meanwhile, is the label given to a diverse mix of large-scale technology ideas for reflecting sunlight away from the Earth, thereby cooling it.

An engineered future?

RB: It’s increasingly looking like we may have to rely on a combination of such technologies in facing climate change. The authors of the recent IPCC report concluded that it is possible to limit global warming to no more than 1.5°C, but every single one of the pathways they envisaged that are consistent with this goal require the use of greenhouse gas removal, often on a vast scale. While these technologies vary in their levels of maturity, none are ready to be deployed yet – either for technical or social reasons or both.

If efforts to reduce greenhouse gas emissions by transitioning away from fossil fuels fail, or greenhouse gas removal technologies are not researched and deployed quickly enough, faster-acting SRM ideas may be needed to avoid so-called “climate emergencies”.

SRM ideas include installing mirrors in Earth’s orbit, growing crops that have been genetically modified to make them lighter, painting urban areas white, spraying clouds with salt to make them brighter, and paving mirrors over desert areas – all to reflect sunlight away. But by far the best known idea – and that which has, rightly or wrongly, received the most attention by natural and social scientists alike – is injecting reflective particles, such as sulphate aerosols, into the stratosphere, otherwise known as “stratospheric aerosol injection” or SAI.

MW: Despite researching it, I do not feel particularly positive about SRM (very few people do). But our direction of travel is towards a world where climate change will have significant impacts, particularly on those most vulnerable. If you accept the scientific evidence, it’s hard to argue against options that might reduce those impacts, no matter how extreme they appear.

Do you remember the film 127 Hours? It tells the (true) story of a young climber who, pinned under a boulder in the middle of nowhere, eventually ends up amputating his arm, without anaesthetic, with a pen knife. In the end, he had little choice. Circumstances dictate decisions. So if you believe climate change is going to be severe, you have no option but to research the options (I am not advocating deployment) as broadly as possible. Because there may well come a point in the future where it would be immoral not to intervene.

SRM using stratospheric aerosols has many potential issues but does have a comparison in nature – active volcanism – which can partially inform us about the scientific challenges, such as the dynamic response of the stratosphere. Very little research is currently being conducted, due to a challenging funding landscape. What is being done is at small scale (financially), is linked to other, more benign ideas, or is privately funded. This is hardly ideal.

A controversial idea

RB: But SAI is a particularly divisive idea for a reason. For example, as well as threatening to disrupt regional weather patterns, it, and the related idea of brightening clouds at sea, would require regular “top-ups” to maintain cooling effects. Because of this, both methods would suffer from the risk of a “termination effect”: where any cessation of cooling would result in a sudden rise in global temperature in line with the level of greenhouse gases in the atmosphere. If we hadn’t been reducing our greenhouse gas emissions in the background, this could be a very sharp rise indeed.




Read more: Time is running out on climate change, but geoengineering has dangers of its own


Such ideas also raise concerns about governance. What if one powerful actor – be it a nation or a wealthy individual – could change the global climate at a whim? And even if there were an international programme, how could meaningful consent be obtained from those who would be affected by the technology? That’s everybody on Earth. What if some nations were harmed by the aerosol injections of others? Attributing liability would be greatly contentious in a world where you can no longer disentangle natural from artificial.

And who could be trusted to deliver such a programme? Your experience with the SPICE (Stratospheric Particle Injection for Climate Engineering) project shows that people are wary of private interests. There, it was concerns about a patent application that in part led to the scientists calling off a test of delivery hardware for SAI that would have seen the injection of water 1km above the ground via a pipe and tethered balloon.

MW: The technological risks, while vitally important, are not insurmountable. While non-trivial, there are existing technologies that could deliver material to the stratosphere.

Most researchers agree that the socio-political risks, such as those you outline, outweigh the technological risks. One researcher remarked at a Royal Society meeting in 2010: “We know that governments have failed to combat climate change, what are the chances of them safely implementing a less-optimal solution?” This is a hard question to answer well. But in my experience, opponents of research never consider the risk of not researching these ideas.

The SPICE project is an example where scientists and engineers took the decision to call off part of an experiment. Despite what was reported, we did this of our own volition. It annoyed me greatly when others, including those who purported to provide oversight, claimed victory for the experiment not going ahead. This belies the amount of soul searching we undertook. I’m proud of the decisions we made, essentially unsupported, and in most people’s eyes it has added to scientists’ credibility.


Moral hazard

RB: Some people are also worried that the promise of large-scale climate engineering technologies might delay or distract us from reducing greenhouse gas emissions – a “moral hazard”. But this remains to be seen. There are good reasons to think that the promise (or threat) of SRM might even galvanise efforts to reduce greenhouse gas emissions.

MW: Yes, I think it’s at least as likely that the threat of SAI would prompt “positive” behaviour, towards a sustainable, greener future, as a “negative” behaviour pattern in which we assume that technology, currently imaginary, will solve our problems (in fact our grandchildren’s problems, in 50 years’ time).

RB: That said, the risks of a moral hazard may not be the same for all climate engineering ideas, or even all SRM ideas. It’s a shame that the specific idea of stratospheric aerosol injection is so frequently conflated with its parent category of SRM and climate engineering more generally. This leads people to tar all climate engineering ideas with the same brush, which is to the detriment of many other ideas that have so far raised relatively fewer societal concerns, such as more reflective settlements or grasslands on the SRM side of things, or virtually the entire category of greenhouse gas removal ideas. So we risk throwing the baby out with the bathwater.

MW: I agree with this – somewhat. It’s certainly true that all techniques should be given the same amount of scrutiny based on evidence. Some techniques, however, often look benign but aren’t. Modifying crops to make them more reflective, brightening clouds, even planting trees – all have potentially profound impacts at scale. I disagree a little inasmuch as we simply don’t know enough yet to say which technologies have the potential to reduce the impacts of climate change safely. This means we do need to be thinking about all of these ideas, but objectively.

Anyone that passionately backs a particular technology concerns me. If it could be conclusively proven that SAI did more harm than good, then we should stop researching it. All serious researchers in SAI would accept that outcome, and many are actively looking for showstoppers.

RB: I agree. But at present there is very little demand for research into SRM from governments and wider society. This needs to be addressed. And we need broad societal involvement in defining the tools – and terms – of such research, and indeed in tackling climate change more broadly.




Read more: Why you need to get involved in the geoengineering debate – now


The question of governance

MW: Some people think that we should just be getting on with engineering the climate, whereas others feel the idea should not even be discussed or researched. Most academics value governance as a mechanism that allows freedom to explore ideas safely, and there are very few serious researchers, if any, who push back against this.

A challenge, of course, is who governs the governors. There are strong feelings on both sides – scientists either must, or cannot, govern their own research, depending on your viewpoint. Personally, I’d like to see a broad, international body set up with the power to govern climate engineering research, especially when conducting outdoor experiments. And I think the hurdles to conducting these experiments should consider both the environmental and social impact, but should not be an impediment to safe, thoughtful research.

RB: There are more proposed frameworks for governance than you can shake a stick at. But there are two major problems with them. The first is that most of those frameworks treat all SRM ideas as though they were stratospheric aerosol injection, and call for international regulation. That might be fine for those technologies with risks that cross national boundaries, but for ideas like reflective settlements and grasslands, such heavy-handed governance might not make sense. Such governance is also at odds with the bottom-up architecture of the Paris Agreement, which states that countries will make nationally determined efforts to tackle climate change.

Which leads us to the second problem: these frameworks have almost exclusively arisen from a very narrow set of viewpoints – either those of natural or social scientists. What we really need now is broad societal participation in defining what governance itself should look like.

MW: Yes. There are so many questions that need to be addressed. Who pays for delivery and development and, critically, any consequences? How is the global south enfranchised? Its nations are least responsible, most vulnerable and, given current geopolitical frameworks, unlikely to have a strong say. What does climate engineering mean for our relationship with nature: will anything ever be “natural” again (whatever that is)?

All these questions must be considered against the situation where we continue to emit CO₂ and extant risks from climate change increase. That climate engineering is sub-optimal to a pristine, sustainably managed planet is hard to argue against. But we don’t live in such a world. And when considered against a +3°C world, I’d suggest the opposite is highly likely to be true.

If there’s a specific topic or question you’d like experts from different disciplines to discuss, you can:

  • Email your question to josephine.lethbridge@theconversation.com
  • Tell us on Twitter by tagging @ConversationUK with the hashtag #HeadtoHead, or
  • Message us on Facebook.

———————————
This blog was written by Dr Rob Bellamy, Presidential Fellow in Environment, University of Manchester and Dr Matthew Watson, Reader in Natural Hazards, University of Bristol Cabot Institute. This article is republished from The Conversation under a Creative Commons license. Read the original article.

Rob Bellamy
Matt Watson

The social animals that are inspiring new behaviours for robot swarms

Termite team.
7th Son Studio/Shutterstock

From flocks of birds to fish schools in the sea, or towering termite mounds, many social groups in nature exist together to survive and thrive. This cooperative behaviour can be used by engineers as “bio-inspiration” to solve practical human problems, and by computer scientists studying swarm intelligence.

“Swarm robotics” took off in the early 2000s, an early example being the “s-bot” (short for swarm-bot). This is a fully autonomous robot that can perform basic tasks including navigation and the grasping of objects, and which can self-assemble into chains to cross gaps or pull heavy loads. More recently, “TERMES” robots have been developed as a concept in construction, and the “CoCoRo” project has developed an underwater robot swarm that functions like a school of fish that exchanges information to monitor the environment. So far, we’ve only just begun to explore the vast possibilities that animal collectives and their behaviour can offer as inspiration to robot swarm design.

Swarm behaviour in birds – or robots designed to mimic them?
EyeSeeMicrostock/Shutterstock

Robots that can cooperate in large numbers could achieve things that would be difficult or even impossible for a single entity. Following an earthquake, for example, a swarm of search and rescue robots could quickly explore multiple collapsed buildings looking for signs of life. Threatened by a large wildfire, a swarm of drones could help emergency services track and predict the fire’s spread. Or a swarm of floating robots (“Row-bots”) could nibble away at oceanic garbage patches, powered by plastic-eating bacteria.

A future where floating robots powered by plastic-eating bacteria could tackle ocean waste.
Shutterstock

Bio-inspiration in swarm robotics usually starts with social insects – ants, bees and termites – because colony members are highly related, which favours impressive cooperation. Three further characteristics appeal to researchers: robustness, because individuals can be lost without affecting performance; flexibility, because social insect workers are able to respond to changing work needs; and scalability, because a colony’s decentralised organisation is sustainable with 100 workers or 100,000. These characteristics could be especially useful for doing jobs such as environmental monitoring, which requires coverage of huge, varied and sometimes hazardous areas.

Social learning

Beyond social insects, other species and behavioural phenomena in the animal kingdom offer inspiration to engineers. A growing area of biological research is in animal cultures, where animals engage in social learning to pick up behaviours that they are unlikely to innovate alone. For example, whales and dolphins can have distinctive foraging methods that are passed down through the generations. This includes forms of tool use – dolphins have been observed breaking off marine sponges to protect their beaks as they go rooting around for fish, like a person might put a glove over a hand.

Bottlenose dolphin playing with a sponge. Some have learned to use them to help them catch fish.
Yann Hubert/Shutterstock

Forms of social learning and artificial robotic cultures, perhaps using forms of artificial intelligence, could be very powerful in adapting robots to their environment over time. For example, assistive robots for home care could adapt to behavioural differences across communities and countries.

Robot (or animal) cultures, however, depend on learning abilities that are costly to develop, requiring a larger brain – or, in the case of robots, a more advanced computer. But the value of the “swarm” approach is to deploy robots that are simple, cheap and disposable. Swarm robotics exploits the reality of emergence (“more is different”) to create social complexity from individual simplicity. A more fundamental form of “learning” about the environment is seen in nature – in sensitive developmental processes – which do not require a big brain.

‘Phenotypic plasticity’

Some animals can change behavioural type, or even develop different forms, shapes or internal functions, within the same species, despite having the same initial “programming”. This is known as “phenotypic plasticity” – where the genes of an organism produce different observable results depending on environmental conditions. Such flexibility can be seen in the social insects, but sometimes even more dramatically in other animals.
Most spiders are decidedly solitary, but in about 20 of 45,000 spider species, individuals live in a shared nest and capture food on a shared web. These social spiders benefit from having a mixture of “personality” types in their group, for example bold and shy.

Social spiders (Stegodyphus) spin collective webs in Addo Elephant Park, South Africa.
PicturesofThings/Shutterstock

My research identified a flexibility in behaviour where shy spiders would step into a role vacated by absent bold nestmates. This is necessary because the spider colony needs a balance of bold individuals to encourage collective predation, and shyer ones to focus on nest maintenance and parental care. Robots could be programmed with adjustable risk-taking behaviour, sensitive to group composition, with bolder robots entering into hazardous environments while shyer ones know to hold back. This could be very helpful in mapping a disaster area such as Fukushima, including its most dangerous parts, while avoiding too many robots in the swarm being damaged at once.
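As an illustrative sketch (the robot representation, the `target_bold_fraction` parameter and the role names are assumptions for this example, not part of any published system), a swarm controller inspired by this spider behaviour might periodically rebalance roles, promoting “shy” robots when “bold” ones are lost:

```python
def reassign_roles(swarm, target_bold_fraction=0.3):
    """Keep a target share of 'bold' robots by promoting 'shy' ones.

    Mirrors the social-spider observation: when bold nestmates are
    absent, shy individuals step into the vacated risk-taking role.
    `swarm` is a list of dicts, each with a 'role' key.
    """
    bold_count = sum(r["role"] == "bold" for r in swarm)
    deficit = round(target_bold_fraction * len(swarm)) - bold_count
    shy = [r for r in swarm if r["role"] == "shy"]
    for robot in shy[:max(deficit, 0)]:
        robot["role"] = "bold"  # step into the vacated role
    return swarm
```

Run after each communication round, a rule like this would keep enough risk-takers exploring hazardous areas without committing the whole swarm at once.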

The ability to adapt

Cane toads were introduced to Australia in the 1930s as a pest control measure, and have since become an invasive species themselves. In new areas, cane toads are seen to be somewhat social. One reason for their growth in numbers is that they can adapt to a wide temperature range, a form of physiological plasticity. Swarms of robots able to switch power consumption mode, depending on environmental conditions such as ambient temperature, could be considerably more durable if we want them to function autonomously for the long term. For example, if we want to send robots off to map Mars, they will need to cope with temperatures that can swing from -150°C at the poles to 20°C at the equator.
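A minimal sketch of such a temperature-sensitive mode switch might look like the following; the thresholds and mode names are illustrative assumptions, not a real robot API:

```python
def select_power_mode(ambient_temp_c, low=-40.0, high=0.0):
    """Choose a power-consumption mode from ambient temperature.

    Below `low`, spend energy keeping batteries and electronics warm;
    between `low` and `high`, conserve power with a reduced duty cycle;
    otherwise run normally. Thresholds here are illustrative only.
    """
    if ambient_temp_c < low:
        return "survival_heating"
    if ambient_temp_c < high:
        return "low_power"
    return "normal"
```

A Mars-mapping robot polling a function like this would heat itself through -150°C polar nights but run normally in 20°C equatorial conditions.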

Cane toads can adapt to temperature changes.
Radek Ziemniewicz/Shutterstock

In addition to behavioural and physiological plasticity, some organisms show morphological (shape) plasticity. For example, some bacteria change their shape in response to stress, becoming elongated and so more resilient to being “eaten” by other organisms. If swarms of robots can combine together in a modular fashion and (re)assemble into more suitable structures this could be very helpful in unpredictable environments. For example, groups of robots could aggregate together for safety when the weather takes a challenging turn.
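A toy decision rule for such weather-driven aggregation (the sensor inputs and thresholds are assumptions for illustration) could be as simple as:

```python
def should_aggregate(wind_speed_ms, neighbour_count,
                     wind_limit_ms=15.0, min_cluster=3):
    """Return True when a robot should dock with nearby neighbours.

    Assumes each robot senses local wind speed (m/s) and counts
    neighbours in communication range; in rough weather, with enough
    neighbours close by, it switches to an aggregated, more
    resilient structure.
    """
    return wind_speed_ms > wind_limit_ms and neighbour_count >= min_cluster
```

In practice the trigger might combine several environmental readings, but the principle, changing collective “shape” in response to stress, follows the bacterial example above.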

Whether it’s the “cultures” developed by animal groups that are reliant on learning abilities, or the more fundamental ability to change “personality”, internal function or shape, swarm robotics still has plenty of mileage left when it comes to drawing inspiration from nature. We might even wish to mix and match behaviours from different species, to create robot “hybrids” of our own. Humanity faces challenges ranging from climate change affecting ocean currents, to a growing need for food production, to space exploration – and swarm robotics can play a decisive part given the right bio-inspiration.

—————————————–
This blog was written by Cabot Institute member Dr Edmund Hunt, EPSRC Doctoral Prize Fellow, University of Bristol. This article is republished from The Conversation under a Creative Commons license. Read the original article.

Edmund Hunt

Downhill all the way: Monitoring landslides using geophysics

Developments in geophysical methods used to monitor surface and subsurface changes prior to landslides can lead to improved prediction and early warning.

 

Every year, landslides cause fatalities and destruction in locations worldwide. Nevertheless, what triggers them and when they occur can often be difficult to predict. A recent article in Reviews of Geophysics examined developments in landslide monitoring using insights and methods from geophysics. Here, one of the authors of the paper answers some questions about landslide monitoring and prediction.

Why is the monitoring of landslides important, and what role can geophysics play?

Sometimes the most effective option for mitigating the risk from landslides is monitoring.

In an ideal world, we would have the geotechnical and financial resources to be able to remove all landslide hazards through slope stabilization and remediation. In reality, this just isn’t possible, and sometimes the most effective option for mitigating the risk from a landslide is to monitor it.

Historically, this has been done by monitoring deformation at the surface and by looking at changes from localised points in the subsurface; for example, by measuring fluctuations in the water table in a borehole. Variations in these data may provide clues about when slope failure is imminent.
The advantage of geophysical methods is that they can not only monitor subsurface properties and how they change over time but can also do so at much higher spatial resolution and over a wider area than point sources of information, such as boreholes.

What are the different types of landslides and why are geophysical methods particularly useful for monitoring “moisture-induced” landslides?

“Landslide” is one of those words that sounds simple enough to define but in reality is very complex.

One of the distinctions we can make between landslide types is their triggering mechanism: most landslides are triggered directly by increased rainfall or by earthquake shaking, but they can also result from secondary factors such as deforestation.


Between 2007 and 2016, 83% of landslides globally were triggered by rainfall or other hydrological events. This is why we use the term “moisture-induced” in our review article, as it reflects the complicated nature of all sources of water present in landslide systems, including rainfall, snow-melt, and groundwater, amongst others.

Introducing increased amounts of water into a landslide changes the properties of the subsurface, which leads to destabilization and, when a critical threshold is exceeded, slope failure. These changes in material properties can be monitored by geophysical methods and, by comparing data collected over time, it is possible to make inferences about the destabilizing processes that are occurring in the subsurface of the landslide system.
Changes in subsurface ground moisture derived from a semi-permanent, 3D electrical resistivity (ER) array at the Hollin Hill Landslide Observatory, North Yorkshire, UK. The left image shows wet winter conditions, in which the western lobe of the landslide has significantly more subsurface moisture than the eastern lobe. The right image shows drier summer conditions, showing subsurface drainage from the failing Whitby Mudstone Formation to the underlying Staithes Sandstone Formation, despite dry ground at the surface of the landslide. Credit: Uhlemann et al. [2017], Figure 11

 

What different geophysical methods are used to gather information about moisture-induced landslides?

The majority of studies used passive seismic and active geoelectrical methods.

Our review article looks at published case studies from the past 12 years to see what kinds of methods are being applied to monitor moisture-induced landslides. What struck us was that the majority of studies used one of two methods: passive seismic and active geoelectrical methods.


Passive seismic monitoring has been used for many decades in global seismological studies, but really only started to be scaled down to look at smaller scale features, such as landslides, in the mid-1990s.

Although passive seismic monitoring has been around longer, monitoring landslides using active geoelectrical methods, primarily electrical resistivity (ER), has really taken off in the last decade or so. There have been several studies in which ER technologies have been developed specifically for landslide monitoring approaches. Consequently, ER monitoring is currently able to provide more information than passive seismic monitoring on the pre-failure conditions of landslides.
Lower equipment costs and power consumption, combined with better data management and equipment durability, means we can collect more geophysical data for longer from landslides. Each of the points in this plot shows information gathered from published case studies about the length of time and amount of data acquired during a single geophysical monitoring campaign. Multiannual campaigns are becoming increasingly common compared to nearly a decade ago. Credit: Whiteley et al. [2018], Figure 6

 

What do these methods tell us about the subsurface conditions of landslides?

The two approaches provide an opportunity to better understand the variable nature of the subsurface in time and space.

Passive seismic and active geoelectrical approaches complement each other very well. First, they tell us about different aspects of the subsurface conditions beneath a landslide. Seismic methods are able to tell us about the strength of the ground, while ER methods provide information about subsurface moisture dynamics. Both of these aspects are very important when trying to predict landslide movements.


Second, passive approaches tend to have great temporal resolution, but their spatial coverage can be limited by the number of seismic sensors deployed on a slope, usually due to cost or power requirements. On the other hand, ER methods can provide very high spatial resolution, but as they are dependent on collecting a set of data from many measurements, their temporal resolution can be limited. Together, the two approaches provide an opportunity to better understand the variable nature of the subsurface in time and space.

What advances in equipment and data analysis have improved understanding of landslide processes?

The financial, computational, and energy cost of equipment is continually reducing, which means we can collect more data for longer periods, and send data from the field to the lab for near real-time analysis.

The financial, computational and energy cost of equipment is continually reducing, which means we can collect more data for longer periods. Also, data telemetry means we can send data from the field to the lab for near real-time analysis. Both of these are crucial when using geophysical methods for early warning of landslide failure.

Recently, there has also been an increase in the use of 3D surveys, and in the use of petrophysical relationships linking geophysical properties to subsurface conditions, such as moisture content.

In ER monitoring, movements in the electrode array would historically have produced errors in the resistivity model, but developments in ER data inversion can now use this source of “error” to track movements in the landslide. Similarly, seismic “ambient noise” is being used in innovative ways to monitor landslides, even though these background signals would traditionally have been undesirable in seismological surveys.

Left: The “Automated time-Lapse Electrical Resistivity” (ALERT) geoelectrical monitoring system installed at Hollin Hill, North Yorkshire, UK. Right: Inside the cabinet, the system acquires geoelectrical, geotechnical and weather data. Collecting geophysical measurements alongside local displacement and environmental data allows for more robust interpretations of the changes in subsurface geoelectrical data over time. Credit: British Geological Survey

Where is the field of geophysical monitoring of moisture-induced landslides heading?

The challenge now is to start looking for clues to identify precursory conditions to slope failure and to develop geophysical thresholds to inform early-warning approaches. 

The great news is that this is a very active area of research! There is a lot of work being done in environmental seismology to increase the number of low-cost, low-power seismic sensors that can be deployed in landslide settings. This is important, as it will allow us to monitor landslides at very high-resolution in both the spatial and temporal domain.

Looking to the future, one can envision “smart sensor” sites that provide power, data storage, and telemetry, accommodating a wide range of integrated geophysical, geotechnical, and environmental monitoring methods. These could include seismic and electrical arrays, wireless sensor networks, and weather stations, with data relayed back to central processing sites for near-real time assessment, and early-warnings of impending failure based on calibrated geophysical thresholds.

———————————
This blog was written by Cabot Institute for the Environment member James Whiteley, postgraduate researcher at the University of Bristol’s School of Earth Sciences and the British Geological Survey, with contributions from the article’s co-authors. The blog was originally published by Editors’ Vox.


Original blog citation: Whiteley, J. (2019), Downhill all the way: monitoring landslides using geophysics, Eos, 100, https://doi.org/10.1029/2019EO111065

What global threats should we be most worried about in 2019?

The Cambridge Global Risk Index for 2019 was presented on 4 December 2018 in the imposing building of Willis Towers Watson in London. The launch event aimed to provide an overview of new and rising risk challenges to allow governments and companies to understand the economic implications of various risks. My interest, as a Knowledge Exchange Fellow working with the (re)insurance sector to better capture the uncertainties embedded in its models, was to find out how the index could help insurance companies to better quantify risks.

The presentation started with the Cambridge Centre for Risk Studies giving an introduction on which major world threats are included in the index, followed by a panel discussion on corporate innovation and ideation.

The Cambridge Global Risk Index quantifies the impact future catastrophic events (be they natural or man-made) would have on the world’s economy, by looking at the GDP at risk in the most prominent cities in the world (GDP@Risk). The Index includes 22 threats in five categories: natural disasters and climate; financial, economics and trade; geopolitics and security; human pandemic and plant epidemic; and technology and space.

Global Risk Index 2019 Threat Rankings (Cambridge Global Risk Index 2019 Executive Summary)

The GDP@Risk for 2019 for the 279 cities studied, which represent 41% of global GDP, has been estimated at $577bn, or 1.57% of their 2019 GDP. GDP@Risk has increased by more than 5% since last year, driven both by global GDP growth and by a rise in the chance of losses from cyber attacks and other threats to richer economies. Risk is becoming ever more interconnected due to cascading threats: natural hazards and climate events trigger power outages, geopolitical tensions trigger state-sponsored cyber attacks, conflicts worsen human epidemics, and trade wars trigger sovereign crises, which in turn cause social unrest.

Nonetheless, GDP@Risk can be reduced by making cities more resilient, that is, by improving a city’s ability to prepare for a shock and to recover from it. For example, if the 100 least prepared cities in the world were as prepared as the top cities, they could reduce their exposure to risk by around 30%, which shows the importance of investing in resilience and recoverability. This is a measure of what the insurance industry calls the “protection gap”: how much could be gained from investments that improve a city’s preparedness for, and resilience to, shocks. How fast a city recovers depends on its ability to access capital, to reconstruct and repair factories, houses and infrastructure, to restore consumer confidence and to reduce the length of business interruption.

Global Risk Index 2019 Growth by Sub-Category ($, bn) (Cambridge Global Risk Index 2019 Executive Summary)

Natural catastrophe and climate

After 2017, the year with the second-highest losses from natural disasters, 2018 saw several record-breaking natural catastrophes as well. This year we have experienced events ranging from the magnitude 7.5 earthquake and tsunami in Indonesia, which caused more than 3,000 deaths, to the second-highest number of tropical cyclones active in a single month, from Typhoon Mangkhut in the Philippines to Japan’s strongest storm in two decades. Hurricanes broke records too: Hurricane Florence in North Carolina became the second-wettest hurricane on record, causing $10bn in losses, and Hurricane Michael in Florida reached the greatest wind speeds ever recorded, causing $15bn in losses.

Floods in 2018 caused heavy death tolls in Japan and southern India, with 225 and 500 fatalities respectively; the former exposed the weakness of ageing city infrastructure, while the latter drew criticism of poor forecasting and management of water resources. Droughts raged in South Africa, Australia, Argentina, Uruguay and Italy, reducing harvests, while wildfires in California were the largest on record, causing $20bn in losses. Temperature extremes struck too: heatwaves, such as the UK’s hottest summer, comparable to that of 1976, and the heatwave in Japan that hospitalised 35,000 people; and extreme cold, such as the “Beast from the East” in the UK, which caused losses estimated at $1 billion per day.

Extreme events are becoming ever more frequent due to climate change, with the next few years expected to be anomalously warm, even on top of the underlying warming trend. This suggests that the rising trend in losses from natural catastrophes and climate events is not about to stop.

Devastation from the cyclone in Tonga, 2018.

Finance, economics and trade

A market crash is the number one threat for 2019, and could cause more than $100 billion in losses. Global financial stability is nonetheless improving thanks to increased regulation, but risk appetite has grown too, fuelled by positive growth prospects and low interest rates, which increases financial vulnerabilities. Trade disputes between the US and China, and between the US and Europe, are disrupting global supply chains. The proportion of GDP@Risk has increased in Italy due to policy uncertainty and heightened sovereign risk, while in countries such as Greece, Cyprus and Portugal sovereign debt risks have decreased following debt restructuring and country-level credit rating upgrades.

Geopolitics and security

The risk from geopolitics and security worldwide has remained relatively similar to last year, with roughly the same countries in conflict as in 2017. Iran’s proxy presence remains in conflicts in Yemen, Iraq, Israel, Syria and Lebanon, while social unrest has increased risk in Yemen, Nicaragua, Venezuela, Argentina, Iraq and South Africa. The conflict in Yemen caused the world’s worst humanitarian crisis of 2018, with more than 2 million people displaced and food shortages and malnutrition fuelling a cholera outbreak. The total expected loss from this category is similar to that from financial, economics and trade risks.

Technology and space

Technology and space is the category with the lowest expected GDP at risk. Nevertheless, the risk has increased over recent years, with cyber attacks becoming ever more frequent due to the internationalisation of cyber threats, the increasing size and cost of data breaches, the continued disruption from DDoS attacks, the threat to critical infrastructure, and their continuous evolution and sophistication. Cyber attacks have climbed one place in this year’s ranking, securing sixth position overall. In 2017 the WannaCry ransomware attack affected 300,000 computers across 150 countries, disrupting critical city infrastructure such as healthcare, railways, banks, telecoms and energy companies, while NotPetya produced quarterly losses of $300 million for various companies. The standstill faced by the city of Atlanta, when all its computers were locked by a ransomware attack in March 2018, has already cost $2.6 million, with another $9.5 million expected. This attack highlighted the breadth of potential disruption, with energy, nuclear, water, aviation and manufacturing infrastructure at risk. Moreover, 66% of companies are estimated to have experienced a supply chain attack, costing on average $1.1 million per attack. In response to these threats, countries are increasing their spending on offensive cyber capability, with the UK spending hundreds of millions of pounds. Power outages, nuclear accidents and solar storms are not at the top of the global threat ranking, but solar storms could put over $4bn of GDP@Risk in North American cities, which, owing to their northern latitudes, could see 20-40 million people left without power.

 

Health and humanity

The greatest threat to humanity, according to the UN, is antimicrobial resistance, with some areas of the world already developing strains of malaria and tuberculosis resistant to all available medicines. Over the next 35 years, 300 million people are expected to die prematurely due to drug resistance, reducing the world’s GDP by between 2% and 3.5% by 2050. Major epidemics have remained largely confined to the same areas as last year, fuelled by climate and geopolitical crises that aggravate hygiene and public health problems, such as the cholera outbreaks in Yemen and Somalia. Plant epidemics have not increased, with the ongoing problems of Panama disease in bananas, coffee and wheat rust, and Xylella fastidiosa still affecting olive trees in southern Europe.

Corporate innovation and ideation discussion

The panel discussed the importance of the Cambridge Global Risk Index in preparing companies for future threats. For example, insurance companies that include the index in their risk management would be better prepared and more profitable. I found the words of Francine Stevens, Director of Innovation at Hiscox, particularly inspiring. She talked about how the sheer volume of research produced is often too large for practitioners to digest, and how workshops might help bring people with similar interests together to pull out the most exciting topics and challenges to work on. As a Knowledge Exchange Fellow myself, this struck a familiar chord: it is my job to transfer research to the insurance sector, and I have first-hand experience of the importance of adopting a common language and of identifying how industry takes up new research and methods.

Francine also talked about the importance of collaboration between companies, a particularly sensitive topic in the highly competitive insurance sector. This topic also emerged at the insurance conference held by the Oasis Loss Modelling Framework in September, where the discussion touched on how non-competitive collaboration could move the sector forward by avoiding duplication of effort. Francine’s final pearl of wisdom was about the importance of diversity in driving innovation, and how a group of smart people with diverse backgrounds often delivers better results than a group of high-achievers who all share the same background. And this again sounded very familiar!

————————————-
This blog is written by Cabot Institute member Dr Valentina Noacco, a NERC Knowledge Exchange Fellow and a Senior Research Associate at the University of Bristol Department of Civil Engineering. Her research looks at improving the understanding and consideration of uncertainty in the (re)insurance industry. This blog reports material with the consent of the Cambridge Centre for Risk Studies and is available online at https://www.jbs.cam.ac.uk/faculty-research/centres/risk/news-events/events/2018/cambridge-global-risk-index-2019-launch-event/.

Dr Valentina Noacco

Learning about cascading hazards at the iRALL School in China

Earlier this year, I wrote about my experiences of attending an interdisciplinary workshop in Mexico, and how these approaches foster a rounded approach to addressing the challenges in communicating risk in earth sciences research. In the field of geohazards, this approach is increasingly being adopted through the concept of “cascading hazards”: recognising that when a natural hazard causes a human disaster, it often does so as part of a chain of events rather than as a standalone incident. This is especially true in my field of research: landslides. Landslides are, after all, geological phenomena studied by a wide range of “geoscientists” (read: geologists, geomorphologists, remote sensors, geophysicists, meteorologists, environmental scientists, risk assessors, geotechnical and civil engineers, disaster risk-reduction agencies; the list goes on). Sadly, these natural hazards affect many people across the globe, and we have had several shocking reminders in recent months that landslides are an inextricable hazard in areas prone to earthquakes and extremes of precipitation.

The iRALL, or the ‘International Research Association on Large Landslides’, is a consortium of researchers from across the world trying to adopt this approach to understanding cascading hazards, with a particular focus on landslides. I was lucky enough to attend the ‘iRALL School 2018: Field data collection, monitoring and modelling of large landslides’ in October this year, hosted by the State Key Laboratory of Geohazard Prevention and Geoenvironment Protection (SKLGP) at Chengdu University of Technology (CDUT), Chengdu, China. The school was attended by over 30 postgraduate and postdoctoral researchers working in fields related to landslide and earthquake research. The diversity of students, both in terms of subjects and origins, was staggering: geotechnical and civil engineers from the UK, landslide specialists from China, soil scientists from Japan, geologists from the Himalaya region, remote sensing researchers from Italy, earthquake engineers from South America, geophysicists from Belgium; and that’s just some of the students! In the two weeks we spent in China, we received presentations from a plethora of global experts, delivering lectures in all aspects of landslide studies, including landslide failure mechanisms, hydrology, geophysics, modelling, earthquake responses, remote sensing, and runout analysis amongst others. Having such a well-structured program of distilled knowledge delivered by these world-class researchers would have been enough, but one of the highlights of the school was the fieldwork attached to the lectures.

The scale of landslides affecting Beichuan County is difficult to grasp: in this photo of the Tangjiwan landslide, the red arrow points to a one story building. This landslide was triggered by the 2008 Wenchuan earthquake, and reactivated by heavy rainfall in 2016.

The first four days of the school were spent at SKLGP at CDUT, learning about the cascading hazard chain caused by the 2008 Wenchuan earthquake, another poignant event demonstrating the interconnectivity of natural hazards. On 12th May 2008, a magnitude 7.9 earthquake occurred in Beichuan County, China’s largest seismic event for over 50 years. The earthquake immediately triggered more than 60,000 landslides and affected an area of over 35,000 km²; the largest of these, the Daguangbao landslide, had an estimated volume of 1.2 billion m³ (Huang and Fan, 2013). It is difficult to comprehend numbers on these scales, but here’s an attempt: 35,000 km² is an area bigger than the Netherlands, and 1.2 billion m³ is the amount of material you would need to fill the O2 Arena in London 430 times over. These comparisons still don’t convey the scale of the devastation of the 2008 Wenchuan earthquake, and so after the first four days in Chengdu, it was time to move three hours north to Beichuan County, to see first-hand the impacts of the earthquake a decade on. We would spend the next ten days here, continuing a series of excellent lectures punctuated with visits to the field to see and study the landscape features that we were learning about in the classroom.

The most sobering memorial of the 2008 Wenchuan earthquake is the ‘Beichuan Earthquake Historic Site’, comprising the stabilised remains of collapsed and partially-collapsed buildings of the town of Old Beichuan. This town was situated close to the epicentre of the Wenchuan earthquake, and consequently suffered huge damage during the shaking, as well as being impacted by two large landslides which buried buildings in the town; one of these landslides buried a school with over 600 students and teachers inside. Today, a single basketball hoop in the corner of a buried playground is all that identifies it as once being a school. In total, around 20,000 people died in a town with a population of 30,000. Earth science is an applied field of study, and as such, researchers are often more aware of the impact of their research on the public than in some other areas of science. Despite this, we don’t always come this close to the devastation that justifies the importance of our research in the first place.

River erosion damaging check-dams designed to stop debris flows is still a problem in Beichuan County, a decade after the 2008 Wenchuan earthquake.

It may be a cliché, but seeing is believing, and the iRALL School provided many opportunities to see the lasting impacts of large slope failures, both on society and on the landscape. The risk of debris flows resulting from the blocking of rivers by landslides (a further step in the cascading hazard chain surrounding earthquakes and landslides) continues to threaten people in Beichuan County today. Debris flow check-dams installed after the 2008 Wenchuan earthquake are still being constantly maintained or replaced to protect vulnerable river valleys, and the risk of reactivation of landslides in a seismically active area is always present. But this is why organisations such as the iRALL, and activities such as the iRALL School, are so important; it is near impossible to gain a true understanding of the impact of cascading hazards without bringing the classroom and the field together. The same is true when trying to work on solutions to lessen the impact of these cascading hazard chains. It is only by collaborating with people from a broad range of backgrounds, skills and experiences that we can expect to come up with effective solutions that are more than the sum of their parts.

—————
This blog has been reposted with kind permission from James Whiteley. View the original blog on BGS Geoblogy. This blog was written by James Whiteley, a geophysicist and geologist at the University of Bristol, hosted by the British Geological Survey. Jim is funded through the BGS University Funding Initiative (BUFI), which aims to encourage and fund science at PhD level. At present there are around 130 PhD students based at about 35 UK universities and research institutes. BUFI does not fund applications from individuals.

Participating and coaching at a risk communication ‘pressure cooker’ event

Anna Hicks (British Geological Survey) and BUFI Student (University of Bristol) Jim Whiteley reflect on their experiences as a coach and participant of a NERC-supported risk communication ‘pressure cooker’, held in Mexico City in May.

Jim’s experience….

When the email came around advertising “the Interdisciplinary Pressure Cooker on Risk Communication that will take place during the Global Facility for Disaster Reduction and Recovery (GFDRR; World Bank) Understanding Risk Forum in May 2018, Mexico City, Mexico” my thoughts went straight to the less studious aspects of the description:

‘Mexico City in May?’ Sounds great!
‘Interdisciplinary risk communication?’ Very à la mode! 
‘The World Bank?’ How prestigious! 
‘Pressure Cooker?’ Curious. Ah well, I thought, I’ll worry about that one later…

As a PhD student using geophysics to monitor landslides at risk of failure, communicating that risk to non-scientists isn’t something I am forced to think about too often. This is paradoxical, as the risk posed by these devastating natural hazards is the raison d’être for my research. As a geologist and geophysicist, I collect numerical data from soil and rocks, and try to work out what this tells us about how, or when, a landslide might move. Making sense of those numbers is difficult enough as it is (three and a half years’ worth of difficult, to be precise), but the idea of having to take responsibility for, and explain, how my research might actually benefit real people in the real world? Now that’s a daunting prospect to confront.

However, confront that prospect is exactly what I found myself doing at the Interdisciplinary Pressure Cooker on Risk Communication in May this year. The forty-odd attendees at the pressure cooker were divided into teams; our team was made up of people working or studying in a staggeringly wide range of areas: overseas development in Africa, government policy in the US, town and city planning in Mexico and Argentina, disaster risk reduction (DRR) in Colombia, and of course, yours truly, the geophysicist looking at landslides in Yorkshire.

Interdisciplinary? Check.

One hour before the 4am deadline.

The possible issues to be discussed were as broad as overfishing, seasonal storms, population relocation and flooding. My fears were alleviated slightly when I found that our team was going to be looking at hazards related to ground subsidence and cracking. Easy! I thought smugly. Rocks and cracks, the geologists’ proverbial bread and butter! We’ll have this wrapped up by lunchtime! But what was the task? Develop a risk communication strategy, and devise an effective approach to implementing it, aimed at a vulnerable target group living in Iztapalapa, a district of Mexico City with 1.8 million people. Right.

Risk communication? Check.

It was around this time I realised that I had glossed over the most important part of the email that had been sent around so many months before: ‘Pressure Cooker’. It meant exactly what it said on the tin: a high-pressure environment in which something, in this case a ‘risk communication strategy’, needed to be cooked up quickly. Twenty-four hours quickly, in fact. There would be a brief break at around 4am, when our reports would be submitted, and then presentations were to be made to the judges at 9am the following morning. I checked the time. Ten past nine in the morning. The clock was ticking.

Pressure cooker? Very much check.

Anna’s experience….

What Jim failed to mention up front is that it was a BIG DEAL to win a place at this event: 440 people from all over the world applied for one of 35 places. So, great job Jim! I was also really grateful to be invited to coach one of the groups, having only just ‘graduated’ out of the age bracket to be a participant myself! And like Jim, I too had some early thoughts pre-pressure cooker, but mine were a mixture of excitement and apprehension in equal measure:

‘Mexico City in May?’ Here’s yet another opportunity to show up my lack of Spanish-speaking skills…
‘Interdisciplinary risk communication?’ I know how hard this is to do well…
‘The World Bank?’ This isn’t going to be your normal academic conference! 
‘Pressure Cooker?’ How on earth am I going to stay awake, let alone maintain good ‘coaching skills’?!

As an interdisciplinary researcher working mainly in risk communication and disaster risk reduction, I was extremely conscious of the challenges of generating risk communication products – and doing it in 24 hours? Whoa. There is a significant lack of evidence-based research about ‘what works’ in risk communication for DRR, and I knew from my own research that it was important to include the intended audience in the process of generating risk communication ‘products’. I need not have worried, though. We had support from in-country experts who knew every inch of the context, so we felt confident we could make our process and product relevant and salient for the intended audience. This was also partly down to the good relationships we quickly formed in our team, built on patience, a desire and ability to listen to each other, and an unwavering enthusiasm for the task!

The morning after the night before.

So we worked through the day and night on our ‘product’: a community-based risk communication strategy aimed at women in Iztapalapa, with the aim of fostering a community of practice through ‘train the trainer’ workshops and the integration of art and science to identify and monitor ground cracking in the area.

The following morning, after only a few hours’ sleep, the team delivered their presentation to fellow pressure-cooker participants, conference attendees, and importantly, representatives of the community groups and emergency management teams in the geographical areas in which our task was focused. The team did so well and presented their work with confidence, clarity and – bags of the one thing that got us through the whole pressure cooker – good humour.

It was such a pleasure to be part of this fantastic event and meet such inspiring people, but the icing on the cake was being awarded ‘Best Interdisciplinary Team’ at the awards ceremony that evening. ‘Ding’! Dinner served.

—————
This blog has been reposted with kind permission from James Whiteley. View the original blog on BGS Geoblogy. This blog was written by James Whiteley, a geophysicist and geologist at the University of Bristol, hosted by the British Geological Survey, and by Anna Hicks of the British Geological Survey.

Will July’s heat become the new normal?

Saddleworth Moor fire near Stalybridge, England, 2018.  Image credit: NASA

For the past month, Europe has experienced a significant heatwave, with both high temperatures and low levels of rainfall, especially in the North. Over this period, we’ve seen a rise in heat-related deaths in major cities, wildfires in Greece, Spain and Portugal, and a distinct ‘browning’ of the European landscape visible from space.

As we sit sweltering in our offices, the question on everyone’s lips seems to be “are we going to keep experiencing heatwaves like this as the climate changes?” or, to put it another way, “Is this heat the new norm?”

Leo Hickman, Ed Hawkins and others have spurred a great deal of social media interest with posts highlighting how climate events that are currently considered ‘extreme’ will at some point be called ‘typical’ as the climate evolves.

As part of a two-year project on how future climate impacts different sectors (www.happimip.org), my colleagues and I have been developing complex computer simulations to explore our current climate as well as possible future climates. Specifically, we’re comparing what the world will look like if we meet the targets set out in the Paris Agreement: to limit the global average temperature rise to a maximum of 2.0 degrees of warming above pre-industrial levels, with the ambition of limiting warming to 1.5 degrees.

The world is already around 1 degree warmer on average than pre-industrial levels, and the evidence to date shows that every 0.5 degree of additional warming will make a significant difference to the weather we experience in the future.

So, we’ve been able to take those simulations and ask the question: What’s the probability of us experiencing European temperatures like July 2018 again if:

  1. We don’t emit any further greenhouse gases and things stay as they are (1 degree above pre-industrial levels).
  2. Greenhouse gas emissions are aggressively reduced, restricting global average temperature rise to 1.5 degrees above pre-industrial levels.
  3. Greenhouse gas emissions are reduced to a lesser extent, restricting global average temperature rise to 2 degrees above pre-industrial levels.

What we’ve found is that European heat at least as intense as this July’s is likely to recur about once every 5-6 years, on average, in our current climate. While this may seem frequent, remember that we have already experienced 1 degree of global temperature increase. We’ve also considered the temperature over the whole of Europe, rather than focusing only on the more extreme parts of the heatwave; if we considered only the hottest regions, the current recurrence time would be closer to 10-20 years. Using this Europe-wide definition of the current heat event, however, we find that in the 1.5C world temperatures at least this high would occur every other year, and in a 2C world four out of five summers would likely have heat events at least as hot as the current one. Worryingly, our current greenhouse gas emission trajectory is leading us closer to 3C, so urgent and coordinated action is still needed from politicians around the world.
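As a back-of-the-envelope illustration, the return periods quoted above can be converted into annual and multi-year probabilities. The sketch below simply restates the blog’s own estimates as approximate return periods (the exact values of 5.5, 2 and 1.25 years are assumptions chosen to match the quoted phrases, not new model output):

```python
# Illustrative arithmetic only: converts return periods into annual
# exceedance probabilities, and into the chance of at least one such
# heat event occurring over a decade.

def annual_probability(return_period_years):
    """Annual exceedance probability for a given return period."""
    return 1.0 / return_period_years

def chance_within(years, return_period_years):
    """Probability of at least one event within a span of `years`."""
    p = annual_probability(return_period_years)
    return 1.0 - (1.0 - p) ** years

# Return periods approximating the estimates quoted in the text.
scenarios = {
    "current (~1 degree)": 5.5,   # "once every 5-6 years"
    "1.5C world": 2.0,            # "every other year"
    "2.0C world": 1.25,           # "four out of five summers"
}

for label, rp in scenarios.items():
    print(f"{label}: annual chance {annual_probability(rp):.0%}, "
          f"chance within 10 years {chance_within(10, rp):.0%}")
```

This makes the jump between scenarios concrete: even a modest shortening of the return period pushes the chance of experiencing such a summer within any given decade towards certainty.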

Our climate models are not perfect, and they cannot capture all aspects of the current heatwave, especially concerning the large-scale weather pattern that ‘blocked’ the cooler air from ending our current heatwave. These deficiencies increase the uncertainty in our future projections, but we still trust the ball-park figures.

Whilst these results are not peer-reviewed and should be considered preliminary findings, it is clear that the current increased heat experienced over Europe has a significant impact on society, and that the impacts would be even greater if we began experiencing these conditions as frequently as our analysis suggests.

Cutting our emissions now will save us a hell of a headache later.

—————————–
This blog is written by Dr Dann Mitchell (@ClimateDann) and Peter Uhe from the University of Bristol Geographical Sciences department and the Cabot Institute for the Environment.

Dann Mitchell