Cybercriminals are crafting personalized social engineering attacks that exploit cognitive bias, according to a new report from SecurityAdvisor, a company that uses machine learning to customize security awareness training for individual employees.
Cognitive bias refers to the mental shortcuts humans subconsciously take when processing and interpreting information before making decisions. These biases simplify information processing to speed up decision-making, and they are particularly effective when exploited in phishing attacks, SecurityAdvisor CEO Sai Venkataraman told VentureBeat. Cybercriminals manipulate a recipient’s thoughts and actions to convince that person to engage in risky behavior, such as clicking on a link they normally would not click on or entering sensitive information on a website.
Enterprise security teams usually rely on security awareness programs to train employees to recognize attacks so that they won’t be tricked. However, traditional security awareness programs rarely take into account the role cognitive biases play in these situations, nor do they consider people’s roles or past behavior. The training concepts weren’t sticking: the data showed that 5% of users accounted for 90% of security incidents, Venkataraman said.
SecurityAdvisor isn’t the only one saying that traditional security awareness training and phishing simulations do little to protect the organization. A recent Cyentia Institute study found that security training resulted in slightly lower phishing simulation click rates among users but had no significant effect at the organizational level or against real-world attacks. The report, commissioned by Elevate Security, examined malware, phishing, email security, and other real-world attack data and found that piling on simulations and training can be counterproductive: heavily trained users clicked malicious links more often than those with little or no training. Just 11% of users with only one training session clicked on a phishing link, but 14% of users with five training sessions did, according to Cyentia’s analysis.
Understanding cognitive bias
Phishing works because people filter what they see through their experiences and preferences, and that filtering influences the choices they make. There are many different types of cognitive biases, but SecurityAdvisor’s research identified five major ones used in phishing attacks: the halo effect, hyperbolic discounting, the curiosity effect, the recency effect, and authority bias.
The halo effect, in which an individual’s positive impression of a person, brand, or product colors their judgment, is the cognitive bias cybercriminals use most often, appearing in 29% of phishing attacks. In this type of attack, a cybercriminal pretends to be a trusted entity. Cybercriminals targeting C-suite executives may send fake speaking invitations from reputable universities and organizations.
Hyperbolic discounting, the inclination to choose a reward that pays off immediately over one that pays off later, appeared in 28% of the phishing attacks SecurityAdvisor analyzed. This can take the form of clicking on a link to get $100 off a MacBook Air, for example. Spammers have long used this tactic to lure victims with promises of free or exclusive deals.
The curiosity effect, the desire to resolve uncertainty, rounded out the top three, appearing in 17% of phishing attacks. In this kind of attack, a C-suite executive may receive information about exclusive access to an unnamed golf event, and the desire to know more may make the executive more susceptible. IT teams may see phishing emails focused on things they are concerned about, such as securing the remote workforce or top trends in data analytics.
The recency effect takes advantage of the tendency to remember recent events; attackers exploit it by putting news like COVID-19 vaccinations in the subject lines of phishing emails. Finally, authority bias is based on people’s willingness to defer to the opinions of an authority figure. An attacker using authority bias may impersonate a senior manager or even the CEO.
For example, in organizations with “control-based cultures,” the authority bias means people will be less likely to question email messages that appear to be sent by the CFO instructing them to pay an invoice, Venkataraman said.
SecurityAdvisor found that C-suite executives are targeted 50 times more often than regular employees, followed by IT security teams, who are targeted 43.5 times more often. The biases used also differ: cybercriminals targeting C-suite executives tend to employ the halo effect or the curiosity effect, while the majority of scams against IT security teams employed the curiosity effect.

There were industry-specific differences as well. People in healthcare were more likely to see scams employing authority bias, the recency effect, and loss aversion, while retail employees were more likely to be targeted with the halo effect, the curiosity effect, and hyperbolic discounting. Financial services employees were likely to see phishing messages using the halo effect to appear to come from regulators and vendors, or authority bias to appear to be sent by the CEO or tax authorities.
Changing security awareness training
Technology can go only so far in filtering out these attack messages because they are designed to look legitimate. But expecting training to ensure employees never fall for these attacks isn’t realistic either; the goal is to mitigate risky behaviors. One way to counter the effects of cognitive biases is to help employees recognize the tricks as they are being used. Machine learning can help change individual employee behavior by delivering reminders to apply that knowledge at the exact moment of risk, Venkataraman said.
SecurityAdvisor’s platform fortifies people against these biases with “just-in-time” nudges, such as showing a quick refresher video when the platform detects that a user has been targeted in an attack. The key message of these nudges is to remind employees that they are part of the organization’s security infrastructure, Venkataraman said. Instead of saying humans are the weakest link in corporate security, “we wanted to say humans are the strongest part of the security community.”
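To make the pattern concrete, here is a minimal sketch of what a just-in-time nudge could look like: a detection event paired with a short, bias-specific refresher delivered immediately. This is purely illustrative; the event fields, refresher catalog, and function names below are hypothetical, not SecurityAdvisor’s actual implementation or API.

```python
# Hypothetical sketch of a "just-in-time" nudge: when a detection
# event indicates a user was targeted, immediately deliver a short
# refresher matched to the cognitive bias the attack exploited.
# PhishingEvent, REFRESHERS, and send_nudge are illustrative names.

from dataclasses import dataclass

@dataclass
class PhishingEvent:
    user_email: str
    bias: str          # e.g. "halo_effect", "authority_bias"
    subject: str       # subject line of the flagged message

# Short, bias-specific refresher messages keyed by the detected bias.
REFRESHERS = {
    "halo_effect": "Trusted-looking senders can be spoofed. Verify the domain.",
    "authority_bias": "A request 'from the CEO'? Confirm out-of-band before acting.",
    "hyperbolic_discounting": "Too-good, act-now offers are a classic lure.",
    "curiosity_effect": "Vague 'exclusive' invitations often hide phishing links.",
    "recency_effect": "Attackers ride current headlines. Pause before clicking.",
}

def send_nudge(event: PhishingEvent) -> None:
    """Deliver a quick reminder at the moment of risk."""
    tip = REFRESHERS.get(event.bias, "Think before you click.")
    # A real system would push a refresher video or pop-up via chat/email;
    # printing stands in for that delivery channel here.
    print(f"To {event.user_email}: you were just targeted "
          f"({event.subject!r}). Reminder: {tip}")

send_nudge(PhishingEvent("cfo@example.com", "authority_bias",
                         "URGENT: wire transfer approval needed"))
```

The design choice worth noting is the coupling of detection to teaching: rather than scheduling generic annual training, the reminder arrives while the risky moment is still fresh, which is the behavior-change mechanism Venkataraman describes.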