Privacy paradoxes, digital divides and secure societies

More and more, we are living our lives in the online space. The development of wearable technology, automated vehicles, and the Internet of Things means that our societies are becoming increasingly digitized. Technological advances are helping monitor city life, target resources efficiently, and engage with citizens more effectively in so-called smart cities. But as with all technological developments, these substantial benefits are accompanied by multiple risks and challenges.

The WannaCry attack. The TalkTalk data breach. The Cambridge Analytica scandal. Phishing emails. Online scams. The list of digital threats reported by the media is seemingly endless. To tackle these growing threats, the National Cyber Security Centre (NCSC) was established in the UK in 2016 with the aim of making ‘the UK the safest place to live and do business online’. But with the increasing complexity of online life, connected appliances, and incessant data collection, how do people navigate these challenges in their day-to-day lives? As a psychologist, I am interested in how people perceive and make decisions about these digital risks, and in how we can empower them to make more informed choices going forward.

The privacy paradox

People often claim that privacy is important to them. However, research shows that they are often willing to trade that privacy for short-term benefits. This incongruence between people’s self-reported attitudes and their behaviour has been termed the ‘privacy paradox’. The precise reasons for this are uncertain, but are likely to be a combination of lack of knowledge, competing goals and priorities, and the fact that maintaining privacy can be, well, difficult.

Security is often not an individual’s primary goal; instead, it is secondary to other tasks that they are trying to complete, such as accessing a particular app, sharing location data to find directions, or communicating on the move with friends and colleagues. Using these online services, however, often requires a trade-off with regard to privacy. This trade-off may be unclear, communicated through incomprehensible terms and conditions, or simply unavoidable for the user. Understanding what drives people to make these privacy trade-offs, and under what conditions, is a growing research area.

The digital divide

As in other areas of life, access to technology across society is not equal. Wearable technology and smartphones can be expensive. People may not be familiar with computers or may have low levels of digital literacy. There are also substantial ethical questions, still being debated, about how such data may be used. For instance, how much will the information captured and analysed about citizens differ across socio-economic groups?

Research has also shown that people are differentially susceptible to cyber crime, with generational differences apparent (although not always in the direction you would expect). Trust in the institutions that handle digital data may vary across communities. Existing theories of societal differences, such as the Cultural Theory of Risk, are increasingly being applied to information security behaviour. Understanding how different groups within society perceive, consider, and are differentially exposed to digital risks is vital if the potential benefits of such technologies are to be maximised in the future.

Secure societies – now and in the future

Regulation: The General Data Protection Regulation (GDPR) comes into force on 25 May 2018. Like me, you may have been receiving multiple emails from companies informing you how they use your data, or asking your permission to keep it. This regulation is designed to help people manage their privacy and understand who has access to their data, and why. It also allows substantial fines to be imposed if personal data is not managed adequately or if data breaches are not reported to the authorities in a timely manner.

Secure by default: There is a growing recognition that products should have security built in. Rather than relying on us, the human user, to understand and manage security settings on the various devices that we own, such devices should be ‘secure by default’. Previous characterisations of humans as the ‘weakest link’ in cyber security are being replaced with an understanding that people have limited time, expertise and ability to manage security. The simplified password guidance provided by the NCSC is a good example of this (7). Devices, applications and policies should take the onus off the user as much as possible.

Education and communication: People need to be educated about online risks in an engaging, relevant and targeted way. Such risks can be perceived as abstract and distant from the individual, and can be difficult to understand at a technical level. I was recently paired with an artist as part of Creative Reactions 2018 (an art exhibition running at Hamilton House, 11 – 22 May 2018) to portray my research in this area to members of the public in a different way. Understanding how best to communicate digital risks to diverse audiences, who engage with the online world in a range of different contexts, is crucial. In this regard, there is much to be learned from the risk communication approaches used in the climate change, public health and energy sectors.

Overall, there is much to be optimistic about: a renewed focus on empowering people to understand digital risks and make informed decisions, supported by regulation, secure design and consideration of ethical issues. Only by understanding how people make decisions regarding online activities and emerging technologies, and by providing them with the tools to manage their privacy and security effectively, can the opportunities provided by a digital society be fully realised in the cities of the future.

——————————–
This blog has been written by Cabot Institute member Dr Emma Williams, a Vice-Chancellor’s Fellow in Digital Innovation and Well-being in the School of Experimental Psychology at the University of Bristol.