01 March 2018 | doi: 10.5281/zenodo.1186288

Fostering a cybersecurity mindset

What precisely does the concept of cybersecurity mean – and who is responsible for implementing it? Guest author William Dutton argues that challenges in this area might require a certain mindset among internet users. Is this indeed a significant shift in thinking about cybersecurity?

Cybersecurity is a broad concept encompassing the “technologies, processes, and policies that help to prevent and/or reduce the negative impact of events in cyberspace that can happen as the result of deliberate actions against information technology by a hostile or malevolent actor” (Clark et al., 2014, p. 2). It involves physical security – such as protection from insider threats – as well as digital security. It entails every level of the internet and the many actors involved in the provision and use of the network, from those governing and building this infrastructure to the diverse array of end users.

Given this broad definition, who then is responsible for cybersecurity? While responsibility is most often contingent on the specific activity and context, it is increasingly clear that the worldwide diffusion of the internet, and its incorporation into everyday life, has dispersed this responsibility far more widely than in the early stages of computer-mediated information and communication systems. It now involves a wide array of actors across multiple layers of the internet – from everyday use to global governance.

More specifically, the worldwide adoption of the internet has enabled end users not only to access information from around the world, but also to create and otherwise source their own information for the world. In many respects, this has empowered users, as illustrated by the many ways users are able to challenge those in positions of influence, such as the press, with countervailing information (Dutton, 2009). However, it has also meant that responsibility for the security of information resources on the internet has devolved to include users around the world and the institutions in which they are involved, not only the technical experts engaged in cybersecurity.

This does not mean that end users should be expected to be solely responsible for their own security online, but they are increasingly expected to share some responsibility with other actors. Creating systems that would centrally protect end users would also undermine their role in creating and using the internet in powerful ways. Put another way, the protection of cybersecurity is no longer lodged solely with the computer experts in some centralised department of information technology within a user’s place of work, or with their internet service provider. It is distributed globally across more than 3.6 billion internet users, who share responsibility in this process with a multitude of other actors.

Unfortunately, this realisation has not been accompanied by strong programmes of research aimed at understanding the attitudes, values and behaviour of users with respect to cybersecurity. However, there have been promising initiatives seeking to bring the social sciences into work on cybersecurity (Whitty et al., 2015). Also, there have been studies focused on particular communities of users exposed to security risks, such as digital rights activists, bloggers, whistleblowers and journalists (e.g., Coleman, 2014), the victims of romance scams (Whitty & Buchanan, 2012) or consumers involved in online banking (Shillair et al., 2015). In the neighbouring area of privacy research, there has been much work done over decades on the beliefs, attitudes and values of computer and internet users, including on the motivations behind their actions relevant to protecting personal information from unauthorised disclosure (Acquisti & Grossklags, 2008; Bennett & Parsons, 2013). But arguably, a focus on the technical issues of cybersecurity, such as standards, has overshadowed work on the social and cultural issues.

Moreover, with some exceptions, most social and cultural research initiatives have focused on the development of awareness campaigns – information campaigns designed to alert users to security risks. Awareness campaigns have been prominent in a wide range of areas, particularly in research on health behaviour, where social psychologists and other social scientists have sought both to convey threats and to change behaviour in ways that might mitigate risks, as in anti-smoking and safe sex campaigns. However, translating awareness into behavioural change has been the central difficulty for all such strategies, even with smoking and safe sex, where the behavioural response is relatively simple to convey (Rice & Atkin, 2013). In cybersecurity, the risks are more difficult to communicate, given their multiplicity in particular circumstances, and the remedies are often difficult for end users to implement. Too often, the design of systems makes more secure practices less usable (Nurse et al., 2011).

In the cybersecurity area, awareness campaigns too often focus on generating fear among users – fear that they will be harmed if they do not follow safe practices (Bada & Sasse, 2014). Yet these fear campaigns are seldom accompanied by clear instructions on best practice, and the practices they do recommend – such as memorising dozens of complex passwords and changing them frequently – are often neither usable nor acceptable (Whitty et al., 2015). Practices that seem simple to security practitioners often fail as useful guides for end users. In fact, fear campaigns can have a chilling effect and otherwise be counterproductive if they are not tied to clear approaches to addressing the problem (Lawson, 2016).

Fear campaigns might work in some areas, such as health campaigns on smoking, where there is a clear response (stop smoking). But failure is common even there, since behavioural change depends on messages being well produced and anchored in strong social psychological theories of behaviour change. In cybersecurity, fear campaigns have proven less effective, as the threats and solutions are ever changing and the problems seem to be mounting (Bada & Sasse, 2014). Rather than simply blaming users for not following safe cybersecurity practices, more focus needs to be placed on designing systems whose security practices are more usable, as is reflected in moves toward the use of more biometric data. However, this is particularly difficult given the diversity of uses and contexts of use around the internet. It was in the context of these dilemmas that I stumbled upon the concept of a cybersecurity mindset.

The idea of a cybersecurity mindset

In a conversation at a workshop on cybersecurity, Alastair Cook (2014), director of Critical Insight Security Ltd., argued that the challenges in this area required a security mindset among internet users, which I would define as a set of attitudes, beliefs and values that motivate individuals to act continually in ways that secure themselves and their network of users – for instance, by acquiring technical skills, adopting new practices or changing their behaviour online. This is not necessarily the adoption of a particular set of practices or habits, like changing your password, since secure behaviour will change over time and across contexts. It could, however, involve keeping an open mind to changing cybersecurity threats and practices.

The idea is that users need to prioritise cybersecurity in all aspects of their online behaviour as a matter of course. Rather than following a learned set of practices or habits, individuals could internalise this goal in ways that motivate them to put security first online. As noted above, research has begun to explore attitudes toward cybersecurity, as well as the security practices of users. But could the concept of a “security mindset” mark a subtle but important shift away from more common framings of security in terms of attitudes and practices, such as habits?

Is this indeed a significant shift in thinking about cybersecurity? Can the concept of a security mindset be conceptually defined and empirically operationalised? Or is it rather a more qualitative, sensitising concept that captures a complex set of concrete habits, values and attitudes of internet users? In either case, would it be a positive direction for guiding policy and practice? If so, how could this be accomplished? And what are the policy implications of efforts to foster a security mindset?

Reasoning through analogy – with a bicycle

An analogy might be useful before I try to develop the concept more precisely. Any analogy is an imperfect representation of what it stands for, and better analogies might be suggested, but the example of bike security came immediately to mind when I was confronted with the idea of a cybersecurity mindset.

Having lived for over a decade in Oxford, where bikes are a major mode of transportation, and having routinely biked to work, I could see that nearly all bike riders in this city had a security mindset. For instance, they do not deliberate over whether to buy a lock, or whether to lock their bike when they leave it. They just do these things as a matter of course. It is a habit, yes, but also a mindset, in that those purchasing or riding a bike have incorporated a set of assumptions that eliminates the need to move through a set of decisions on each particular occasion. They are not going through a threat assessment each time they purchase a bike or get on it. They simply follow a course dictated by their security mindset.

Security provides a context for other decisions. A person might even buy an older or less attractive bike in order to reduce the risk of it being stolen. In such ways, bike riders in Oxford feel as if they know what to do to better secure their bikes; they have a sense of personal efficacy associated with bike security. Moreover, it is a framework arising from the bottom up rather than from the top down. A bike lock, for example, is not part of the bike or a required purchase, but something most users would incorporate with the purchase of a bike: the lock is viewed as part and parcel of it. Because it is bottom up, it is socially supported by fellow bike owners. All riders lock their bikes and would question anyone who did not. Everyone can advise others on ways to secure their bikes. Buying a lock is not viewed as odd but as normal; not buying one would be viewed as silly by other bike riders, even though it is not required by law.

In contrast, bike safety – not security – might be less of a mindset, in that you can see wide variation among bike riders. Some equip themselves with helmets, reflective clothing and more, while others do not. Riders are more likely to go through a process of threat modelling for safety than for security – weighing, for instance, whether or not to use a helmet depending on where they are riding and what they are wearing. Should I stay behind the bus and accept a 100% chance of losing my momentum, or veer around it with a 1% chance of being hit by a car? Safety might be a mindset for some, but it appears less universal and more flexible than a bike security mindset.
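The bus question reads as an informal expected-cost calculation – exactly the kind of case-by-case weighing that, the analogy suggests, a security mindset settles in advance. Purely as an illustration, here is a minimal sketch in Python: the probabilities (100% and 1%) come from the example above, while the cost values are hypothetical numbers on an arbitrary scale, not estimates from the article.

```python
# A sketch of the implicit expected-cost weighing in the bus example.
# Probabilities are from the text; the cost values are purely
# illustrative assumptions on an arbitrary scale.

def expected_cost(probability: float, cost: float) -> float:
    """Expected cost of an option: chance of the bad outcome times its cost."""
    return probability * cost

# Option 1: stay behind the bus – a certain (100%) but minor loss of momentum.
stay = expected_cost(probability=1.00, cost=1.0)

# Option 2: veer around the bus – an unlikely (1%) but severe collision.
veer = expected_cost(probability=0.01, cost=500.0)

print(f"Stay behind the bus: expected cost = {stay:.2f}")  # 1.00
print(f"Veer around the bus: expected cost = {veer:.2f}")  # 5.00
```

With these assumed costs, veering is the worse bet. The point of the contrast is that riders run this sort of calculation for safety choices, while a security mindset removes the calculation altogether: the lock is bought and used as a matter of course.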

A bike is not a computer network

Of course, protecting a bicycle is very different from protecting a computing device, or personal information in the cloud. I would argue that this makes the analogy all the more powerful, since it moves discussion away from specific practices or rules that vary across different technical systems. Instead, it highlights the personal and social factors behind a motivation for security practices, whatever they may be.

That said, some have raised problems with my bike analogy. The first concerns the visibility of the security issue. You know, sooner or later, when your bike has been damaged or stolen, but it is often far more difficult to detect whether your networked computing resources have been tampered with, copied, or disclosed without your authorisation. Increasingly, breaches of a computer leave no visible evidence of compromise, such as a change in its performance. Perhaps this difference in transparency or visibility suggests a direction for supporting a cybersecurity mindset. The visibility of spam, for example, enabled spam filters to be widely accepted and used. Making a breach of your computer as visible as a stolen bike could help foster a security mindset.

Another concern was the degree to which individuals with poor security practices on computer networks create consequences for those with whom they communicate, whereas the consequences of a stolen bike rest more squarely with the individual who failed to secure it. Even so, I find the bike analogy valuable, because there is clear social pressure to adopt a bike security mindset even when the consequences are less networked. Again, the visibility of not following these practices could be a key difference. When friends notice a problem with another person’s bike or computer security, such as receiving spam from a friend’s account, they do sanction their friends. Visibility or transparency might be key to building a cybersecurity mindset by also enhancing the likelihood of peer social influence.

Defining a mindset

The idea of a cybersecurity mindset arose from qualitative interviews, conversations with cybersecurity researchers and practitioners, and participant observation around the social aspects of cybersecurity. Within a qualitative tradition, this concept, like many other qualitative concepts, is what Herbert Blumer (1954) called a “sensitizing concept”. That is, the concept helps to sensitise the reader to a complex set or pattern of concrete empirical observations. It is not a quantitative concept that is operationally defined, such as by answers to questions or by specific behaviour. It is more flexible, and does not have a definitive set of empirical attributes, since it could be manifested in different ways across time or contexts. It is in this tradition that I employ the concept of a cybersecurity mindset: as a sensitising concept within a qualitative perspective on social research.

So – what is in a mindset? As noted above, I have defined a cybersecurity mindset as a pattern of attitudes, beliefs and values that motivate individuals to continually act in ways to secure themselves and their network of users. A mindset suggests a way of thinking about a matter of significance. It is a firm – not a fleeting or ephemeral – perspective or framework for thinking about other things. In other contexts, a mindset has been usefully defined as “how we receive information” (Naisbitt, 2006, p. xvii). For example, the same information, such as an email attachment, will be received in different ways by someone with a cybersecurity mindset. And a mindset shapes choices about other matters: a security mindset might drive decisions about other aspects of internet use. It arises from the interaction of peers – bottom up – rather than from sanctions or directions from above. In line with this, it is supported socially, such as through the social influence of friends and fellow users, and through the sources of information users choose.

Different actors, such as cybersecurity experts and end users, will manifest a cybersecurity mindset in very different ways. Security experts with such a mindset, for example, will constantly consider ways a technical system could be breached, since these mental scenarios lead them to design systems and train users to avoid the problems they anticipate. End users are unlikely to think about how malicious actors might try to steal their information, but if they have a cybersecurity mindset, they are likely to consider ways to keep their equipment and network resources safe from others.

It is immediately apparent that a mindset is not a dichotomous state – it is not that you have it or you don’t. A security mindset might be so disproportionate to the risks that it becomes dysfunctional. Alternatively, many internet users might lack a security mindset altogether, failing to take minimal precautions in their computing practices, such as protecting passwords or changing the default password on their wireless router. These two extreme examples suggest that a security mindset can err by being set either too high or too low, exaggerating or underestimating threats. In the bike analogy, there is likewise no guaranteed security with a lock that can be cut, but it would be a disproportionate response for people to stop riding their bikes on the grounds that they must inevitably leave them in public places.

The bike example also suggests that a disproportionately high cybersecurity mindset might be a functionally rational response to the perceived lack of a security mindset among too many users. In this sense, adoption of a security mindset would be in the interest of all actors in the larger community of users.

More importantly, however, it is unclear whether IT experts can continue to protect institutions and the public on their own, given the nature of the internet, the web and social media – a challenge that will be exacerbated by the rise of the Internet of Things (IoT). Clearly, the larger public of internet users needs to be enrolled in a security mindset. As IT security officers become less able to manage security alone, a mindset becomes more relevant to a larger public: “[A security mindset] should be more accessible as technical understanding and technical measures become less significant in the management of security” (Cook, 2014). Over time, as current security practices such as reliance on passwords become outdated, technical know-how might well diminish in importance relative to the motivations of users, which are anchored in more social and psychological processes.

Conclusion

Social research on cybersecurity will need to move away from models based on pro-health and other awareness campaigns that have more obvious sets of safe practices. We need research anchored in cybersecurity challenges and behaviour, as well as in related online issues such as user perspectives on privacy and surveillance. There is a need to identify those with a cybersecurity mindset, to understand how to diffuse this mindset, and to gauge what impact its acquisition is likely to have on cybersecurity. At the same time, it is important to recognise that a cybersecurity mindset is but one aspect of the social and cultural dimensions of cybersecurity, which need to be addressed alongside allied efforts to enhance educational, technical, organisational, business, policy and regulatory approaches to cybersecurity.

References

Acquisti, A., & Grossklags, J. (2008). What can behavioral economics teach us about privacy? In Acquisti, A., Gritzalis, S., Lambrinoudakis, C., & De Capitani di Vimercati, S. (Eds.), Digital privacy: Theory, technologies, and practices (pp. 363–380). Boca Raton, FL: Auerbach Publications.

Bada, M., & Sasse, A. (2014). Cyber security awareness campaigns: Why do they fail to change behaviour? Oxford, UK: Global Cyber Security Capacity Centre, University of Oxford. Retrieved from http://discovery.ucl.ac.uk/1468954

Bennett, C. J., & Parsons, C. (2013). Privacy and surveillance: The multidisciplinary literature on the capture, use, and disclosure of personal information in cyberspace. In Dutton, W. H. (Ed.), The Oxford handbook of internet studies (pp. 486–508). Oxford: Oxford University Press.

Blumer, H. (1954). What is wrong with social theory? American Sociological Review, 19(1), 3–10.

Clark, D., Berson, T., & Lin, H. S. (Eds.) (2014). At the nexus of cybersecurity and public policy. Computer Science and Telecommunications Board, National Research Council. Washington, DC: The National Academies Press.

Coleman, G. (2014). Hacker, hoaxer, whistleblower, spy: The many faces of Anonymous. London: Verso.

Cook, A. (2014). Personal communication via email, 23 June 2014. Alastair Cook permitted me to paraphrase his comments at a workshop on 19 June 2014.

Dutton, W. H. (2009). The fifth estate emerging through the network of networks. Prometheus, 27(1), 1–15.

Lawson, S. T., et al. (2016). The cyber-doom effect: The impact of fear appeals in the US cyber security debate. In 2016 8th International Conference on Cyber Conflict (CyCon) (pp. 65–80). IEEE.

Naisbitt, J. (2006). Mindset! New York: Harper Collins.

Nurse, J. R. C., Creese, S., Goldsmith, M., & Lamberts, K. (2011). Guidelines for usable cybersecurity: Past and present. In The 3rd International Workshop on Cyberspace Safety and Security (CSS 2011) at the 5th International Conference on Network and System Security (NSS 2011), Milan, Italy, 6–8 September.

Rice, R. E., & Atkin, C. K. (Eds.) (2013). Public communication campaigns, 4th edition. Los Angeles, CA: Sage.

Shillair, R., Cotten, S. R., Tsai, H. S., Alhabash, S., LaRose, R., & Rifon, N. J. (2015). Online safety begins with you and me: Convincing internet users to protect themselves. Computers in Human Behavior, 48, 199–207.

Whitty, M. T., & Buchanan, T. (2012). The online dating romance scam: A serious crime. Cyberpsychology, Behavior, and Social Networking, 15(3), 181–183.

Whitty, M., Doodson, J., Creese, S., & Hodges, D. (2015). Individual differences in cyber security behaviours: An examination of who’s sharing passwords. Cyberpsychology, Behavior, and Social Networking, 18(1), 3–7.


William H. Dutton is the Quello Professor of Media and Information Policy in the Department of Media and Information, College of Communication Arts and Sciences at Michigan State University, where he is Director of the Quello Center. Prior to this appointment, William Dutton was Professor of Internet Studies at the Oxford Internet Institute, University of Oxford, where he was the Founding Director of the OII and a Fellow of Balliol College.


This piece is a shortened version of an article first published on 19 January 2017 in the Internet Policy Review, a peer-reviewed online journal on internet regulation in Europe. It has also been published in the 2017 volume of encore, our annual magazine on internet and society research.


This post represents the view of the author and does not necessarily represent the view of the institute itself. For more information about the topics of these articles and associated research projects, please contact info@hiig.de.

