
Protection Motivation Theory (PMT)

Many theories are used to explain and predict human behaviour. Protection Motivation Theory (PMT) is one of them, sometimes used by cybersecurity professionals when designing their programs. Is it a good choice?

Ronald W. Rogers proposed the Protection Motivation Theory (Rogers, 1975) to explain how fear appeals in communications change the audience’s attitudes. Initially, Rogers developed PMT to explain health-related behavioural changes, such as the impact of fear appeals on smokers’ behaviour. In 1983, Maddux and Rogers revised the model to include self-efficacy as an influencing factor (Maddux & Rogers, 1983).

PMT posits that the perceived efficacy of the coping response, the perceived self-efficacy to perform that response, and the perceived probability of the threat all influence the attitude towards the coping response. We summarised the different variables and their effects in the figure below: Protection Motivation Theory – variables and effects.
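In addition to the figure, here is a minimal, purely illustrative sketch of how these constructs are often operationalised in PMT studies. The additive form and equal weighting are assumptions made for illustration only; PMT itself does not define a quantitative formula, and the construct names are ours.

```python
from dataclasses import dataclass


@dataclass
class PMTAppraisal:
    """Illustrative PMT constructs, each rated on an arbitrary 0-1 scale."""
    threat_probability: float  # perceived likelihood of the threat
    response_efficacy: float   # perceived efficacy of the coping response
    self_efficacy: float       # perceived ability to perform the coping response


def protection_motivation(a: PMTAppraisal) -> float:
    """Toy additive score: higher values stand for a more favourable attitude
    towards adopting the coping response. The equal weighting is an assumption
    for illustration, not part of Rogers' theory."""
    return (a.threat_probability + a.response_efficacy + a.self_efficacy) / 3


# Example: a user who sees phishing as likely, believes reporting works,
# and feels able to spot suspicious emails.
print(protection_motivation(PMTAppraisal(0.8, 0.7, 0.6)))
```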

PMT is now also used in an information security context by various researchers. As Menard et al. (2017) showed in their literature review on PMT, its application to the information security field yields mixed results.

It has mainly been used to explain the impact of threat perception and perceived self-efficacy on changes in security behaviours or attitudes in a population (Chou & Sun, 2017; Grimes & Marquardson, 2019; Ismail et al., 2017; Jansen & van Schaik, 2018; Menard et al., 2017; Milne et al., 2009).

In the specific case of phishing, these studies do not provide a dedicated model. Still, they suggest that perceived self-efficacy and threat perception might play a role in the process of detecting phishing emails.

It is an interesting model for health prevention professionals, but probably not for human-centric cybersecurity ones.

Bibliography

  • Chou, H.-L., & Sun, J. C.-Y. (2017). The moderating roles of gender and social norms on the relationship between protection motivation and risky online behavior among in-service teachers. Computers & Education, 112, 83–96. https://doi.org/10.1016/j.compedu.2017.05.003
  • Grimes, M., & Marquardson, J. (2019). Quality matters: Evoking subjective norms and coping appraisals by system design to increase security intentions. Decision Support Systems, 119, 23–34. https://doi.org/10.1016/j.dss.2019.02.010
  • Ismail, K. A., Singh, M. M., Mustaffa, N., Keikhosrokiani, P., & Zulkefli, Z. (2017). Security Strategies for Hindering Watering Hole Cyber Crime Attack. Procedia Computer Science, 124, 656–663. https://doi.org/10.1016/j.procs.2017.12.202
  • Jansen, J., & van Schaik, P. (2018). Testing a model of precautionary online behaviour: The case of online banking. Computers in Human Behavior, 87, 371–383. https://doi.org/10.1016/j.chb.2018.05.010
  • Maddux, J. E., & Rogers, R. W. (1983). Protection motivation and self-efficacy: A revised theory of fear appeals and attitude change. Journal of Experimental Social Psychology, 19(5), 469–479. https://doi.org/10/cbzjj7
  • Menard, P., Bott, G. J., & Crossler, R. E. (2017). User Motivations in Protecting Information Security: Protection Motivation Theory Versus Self-Determination Theory. Journal of Management Information Systems, 34(4), 1203–1230. https://doi.org/10.1080/07421222.2017.1394083
  • Milne, G. R., Labrecque, L. I., & Cromer, C. (2009). Toward an understanding of the online consumer’s risky behavior and protection practices. Journal of Consumer Affairs, 43(3), 449–473. https://doi.org/10.1111/j.1745-6606.2009.01148.x
  • Rogers, R. W. (1975). A Protection Motivation Theory of Fear Appeals and Attitude Change. The Journal of Psychology, 91(1), 93–114. https://doi.org/10/cb4jgn

If there was only one, what would be the security behaviour change you’d like to see?

If you had a very limited budget and could only run one message-focused security awareness activity, targeting a single behaviour, what would it be?

Tough question. It was asked by Dr Jessica Barker during the last (ISC)² Secure Summit in Amsterdam. There were hundreds of security professionals in the room. The answers were quite classic at first: passwords, phishing, trust, and so on.

The best suggestion, from my point of view, was this one: Ask for help!

Too often, users don’t ask for help. Likely because they don’t want to lose time waiting on the line when calling the helpdesk, or because they don’t want to look stupid (and there are probably plenty of other reasons, or a mix of them). But security has become an increasingly complicated matter over the years. Hoping our end-users will become as good as, or better than, security professionals might be wishful thinking (although, in some security-specific tasks, average users do better than most security professionals; I’ll come back to that another day).

So, “Ask for help” is the most reasonable behaviour to ask of our users. It is something they can easily understand, it covers a wide range of situations, and it will probably shorten your reaction time and decrease the number of incidents.

Of course, you need to make it easy (a simple phone number, an easy-to-remember email address, one button to click in an email to signal a phishing attempt), responsive (people don’t like to wait) and nice (nobody likes it when the person on the line makes them feel like a fool).
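As a purely illustrative sketch of the “one button” idea, the report could simply forward the suspicious message to the security team with its original content preserved. The mailbox address, SMTP host and file name below are made-up placeholders, not a recommendation for any particular product or setup.

```python
import smtplib
from email.message import EmailMessage
from pathlib import Path

# Hypothetical values for illustration only; adapt to your own environment.
SECURITY_MAILBOX = "report-phishing@example.org"
SMTP_HOST = "smtp.example.org"


def report_phishing(suspicious_eml: str, reporter: str) -> None:
    """Forward a suspicious email (saved as an .eml file) to the security mailbox.

    Attaching the original file, rather than copy-pasting its text, preserves
    the headers the security team needs to investigate."""
    msg = EmailMessage()
    msg["From"] = reporter
    msg["To"] = SECURITY_MAILBOX
    msg["Subject"] = "[PHISHING REPORT] User-submitted suspicious email"
    msg.set_content("Reported with one click; the original message is attached.")
    msg.add_attachment(
        Path(suspicious_eml).read_bytes(),
        maintype="application",
        subtype="octet-stream",
        filename=Path(suspicious_eml).name,
    )
    with smtplib.SMTP(SMTP_HOST) as smtp:
        smtp.send_message(msg)


# Example call (paths and addresses are placeholders):
# report_phishing("suspicious.eml", "alice@example.org")
```

The point is not the code itself but the friction: one click for the user, headers intact for the responders, and a fast, friendly acknowledgement afterwards.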

Think about it. It might be a good start towards a more human-centric security (and hence a more efficient and cost-effective one).

How do penalties affect the effectiveness of your security policies?

One of the requirements of any decent policy (and law) is having a penalty linked to its violation. In penal law, “nulla lex sine poena” (no law without punishment) is one of the corollaries of the famous principle “nullum crimen, nulla poena sine lege” (no crime, no punishment without a law).

From a behavioural point of view, it is often more efficient (and more humane) to use the carrot (and, even better, the intrinsic motivation to do things right) rather than the stick. However, knowing there is a stick helps give the rule some consistency, some consequences. So, when we draft policies, we always insist on the necessity to clearly define the consequences of any non-compliance with the rules. Organizations may be fined for non-compliance; their employees should face consequences too.

It is often a difficult part of the policy-drafting process, especially in a large organization, as we must find a proportionate response and, in some countries, it must be negotiated beforehand with trade unions and social partners.

But there is more to say about it. First, the consequences mentioned are quite often individual ones: loss of privileges, impact on financial bonuses or removal from office. However, there is a bigger picture. Breaking the rules can lead to huge monetary losses for the organization, resulting in cost-cutting, colleagues losing their jobs, and families facing financial and personal difficulties. This wider consequence is not systematic, but it is increasingly likely and, above all, foreseeable, and it might trigger a stronger emotional response than the prospect of the person’s own sanction (although it might have the opposite effect if the person holds a grudge against the entire company, including its workers). Emotions drive our choices more than rationality does.

The second point is that the punishment must be fair. As suggested by Herath & Rao (2009), too severe a punishment will have an adverse effect and increase the likelihood of infringements. This effect is likely similar to the one observed with the pictures of diseased lungs on cigarette packs: they tend to increase cigarette consumption (mostly among adolescents and young adults).

The third point is that the rule must be the same for everybody, in theory and in practice. So, we must ensure that we can systematically detect these infringements (see Herath & Rao, 2009) to increase compliance.

But how often do we see people in the organization breaking the rules willingly without any consequence? Sometimes because the person is an expert in his or her field and we believe we need that knowledge more than we need to enforce the rule. Sometimes it is for internal political reasons. Sometimes because (s)he is a relative of someone high up in the food chain. Whatever the reason, this is not fair, and it has a huge impact on the behaviour of your employees. Worse, it becomes part of your culture, and that is something you will find very difficult to change afterwards.

So, think twice about your punishments.

References: