In a world of increasing connectivity, where barely a moment passes without interference from technology, humans are bound to become distracted. When distractions are the norm in the workplace, errors are more likely to occur. When mistakes are made, cybercriminals are watching and waiting for their opportunity.
How Cybercriminals Exploit Human Weakness
In cybersecurity, there’s nothing more dangerous than a distracted or careless employee. Cybercriminals exploit this, using manipulative tactics to prime their victims for attack. According to the 2019 Verizon Data Breach Investigations Report, phishing was the number one form of social attack, so named because cybercriminals rely on manipulating human behavior to achieve their goals. With the workplace home to career, financial, and even social stressors, cybercriminals have an endless array of options for carrying out their manipulations.
Phishing techniques have evolved from poorly written emails sent in mass waves to low-volume, short-duration attacks that are often indistinguishable from legitimate emails. With corporate branding, images, and email signatures, phishers easily fool busy employees, coercing them into clicking phishing links and visiting counterfeit web pages.
To get their victims to open emails, phishers use language intended to ignite their emotions. “Access denied”, “Account suspended”, and “Payment due” are common subject lines of phishing emails. Employees who are under duress or otherwise anxious to respond to emails quickly are among the most likely to fall for such an attack.
Spear phishing, while on the surface less sophisticated because it doesn’t require landing pages or branding, is more personal than a phishing attack and may involve a deeper level of social engineering. Cybercriminals might stalk a victim’s social media accounts to gain insight into their job role, personality, and even their whereabouts, to learn how to manipulate them.
For example, it takes only a quick glance at a LinkedIn profile to discover that an employee has been recently promoted, transferred to a new location, or even changed job roles. A cybercriminal impersonating a colleague now has personal details to add to their email. “Congrats on the promotion!” or “How are you liking the new office?” could be enough to fool the employee. If the email is purportedly sent from an executive, the employee is even more likely to respond.
In the above examples of phishing and spear phishing, cybercriminals are playing on their victims’ emotions. Whether trying to impress a superior or make up for a past professional mistake, employees are quick to respond to and act on emails if they think their job or reputation is at stake. Human error is at the heart of most cybersecurity incidents and is consistently named the top cybersecurity threat to organizations.
Despite this, many organizations are falling behind in adopting technologies that can reduce human error or, at the very least, act as a safety net when mistakes do occur.
Artificial Intelligence Fills the Gaps
When presented with a split-second decision, humans are prone to mistakes. Whether making a questionable purchase or sending a strongly worded professional email without considering the consequences, humans will, time and again, make the wrong decision. While security awareness training is critical to building responsible cybersecurity practices, cybercriminals are constantly honing their techniques to get past these barriers—and they do get past them. In 2018, US businesses lost $1.2 billion to business email compromise.
Security awareness training failed those organizations not because it was ineffective but because emotion won over intelligence. AI-based technologies are unimpeded by emotion; they feel neither joy nor pain when making a decision, right or wrong. This is a powerful differentiator in cybersecurity, where logic (hacker) and emotion (victim) collide.
In email security, a supervised machine learning algorithm is trained by analyzing legitimate and malicious emails. Because it can generalize, the model learns to recognize malicious behaviors that a human might miss, whether due to distraction, duress, or simply the quality of the email. An effective AI engine blocks the email from ever reaching the user’s mailbox. The human is spared from making the wrong decision, removing the potential for human error.
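To make the idea concrete, here is a minimal sketch of a supervised text classifier in plain Python—a toy Naive Bayes model trained on a handful of hypothetical example emails. The training data, function names, and scoring are all illustrative assumptions; a production engine would train far more sophisticated models on millions of messages.

```python
from collections import Counter
import math

# Hypothetical toy training data; real engines train on millions of emails.
LEGITIMATE = [
    "meeting notes attached for the quarterly review",
    "lunch order confirmation for friday",
]
MALICIOUS = [
    "your account suspended verify payment details now",
    "access denied click here to restore your account",
]

def tokenize(text):
    return text.lower().split()

def train(emails):
    """Count word frequencies across all emails of one class."""
    counts = Counter()
    for email in emails:
        counts.update(tokenize(email))
    return counts

def score(text, counts, vocab_size):
    """Log-likelihood of the text under a Laplace-smoothed unigram model."""
    total = sum(counts.values())
    return sum(
        math.log((counts[word] + 1) / (total + vocab_size))
        for word in tokenize(text)
    )

def classify(text):
    legit, bad = train(LEGITIMATE), train(MALICIOUS)
    vocab_size = len(set(legit) | set(bad))
    if score(text, bad, vocab_size) > score(text, legit, vocab_size):
        return "malicious"
    return "legitimate"
```

Because the model scores whole messages statistically rather than matching exact phrases, it can generalize to wordings it has never seen—the property the paragraph above describes.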
Other AI-based technologies at work in email security include unsupervised learning and deep learning. In unsupervised learning, the model doesn’t rely on labeled training data but recognizes anomalies in emails by identifying patterns that differ significantly from previous data. For example, it can recognize a spoofed email address where the sender claims to be one person but the email address doesn’t match the organization’s entity model.
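A simple version of that entity-model idea can be sketched as follows. The class below is a hypothetical illustration: it learns which sending domains each display name has historically used and flags a message when a known sender suddenly writes from a never-before-seen domain. Real anomaly-detection models consider many more signals than this one feature.

```python
from collections import defaultdict

class EntityModel:
    """Learn which sending domains each display name historically uses,
    then flag emails whose domain deviates from that pattern."""

    def __init__(self):
        self.seen = defaultdict(set)  # display name -> observed domains

    def observe(self, display_name, address):
        """Record a legitimate (display name, address) pairing."""
        self.seen[display_name].add(address.split("@")[-1].lower())

    def is_anomalous(self, display_name, address):
        """A known sender writing from an unseen domain is suspicious;
        an entirely new sender is not anomalous by this rule alone."""
        domain = address.split("@")[-1].lower()
        known = self.seen.get(display_name)
        return known is not None and domain not in known
```

For example, once the model has seen “Jane Doe (CEO)” sending from the company domain, a message with the same display name but a lookalike domain stands out immediately, even though it would look ordinary to a busy reader.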
An employee who is likely juggling multiple tasks at once might not recognize such an anomaly, or, if they do, might conflate the anomaly with urgency. For example, if you’ve never received an email with a direct request from a CEO, and suddenly that CEO asks you for an urgent favor, you might recognize the request as highly unusual and unlikely. Or, you might be so caught off guard by the anomaly that you immediately fulfill the request to satisfy the CEO. This is the reaction cybercriminals are counting on.
Finally, deep learning models recognize images rather than text. Phishing emails that impersonate a brand often include fraudulent brand images, such as logos. Subconsciously trained to recognize popular brand logos, humans are unlikely to inspect a logo for quality; they recognize the logo and instantly trust that the email is legitimate.
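One simplified way to illustrate image-based detection—standing in for a real deep learning model—is a perceptual hash: reduce an image to a compact fingerprint and compare fingerprints, so that near-copies of a known brand logo are recognized even when pixels have been tampered with. The functions, the tiny pixel grids, and the distance threshold below are all illustrative assumptions, not any vendor’s actual method.

```python
def average_hash(pixels):
    """Perceptual hash of a grayscale image (a list of rows of 0-255
    values): each bit records whether a pixel exceeds the image mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming(h1, h2):
    """Number of differing bits between two hashes."""
    return sum(a != b for a, b in zip(h1, h2))

def resembles(candidate, genuine, max_distance=4):
    """True when the candidate image is visually close to the genuine
    logo; a match in an email from an unauthorized sender is a red flag."""
    return hamming(average_hash(candidate), average_hash(genuine)) <= max_distance
```

A tampered logo that would pass a human glance still hashes close to the original, while an unrelated image does not—so resemblance plus an unexpected sender is a strong fraud signal.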
A deep learning model can recognize that an image in an email or on a webpage is counterfeit and block the email from the mailbox. If a phishing email bypasses the AI engine on the first pass, the engine has another opportunity to scan for anomalies. For example, an AI solution with time-of-click technology can analyze the URL when the employee clicks it, blocking them from landing on a fraudulent webpage.
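The time-of-click check might look something like the sketch below: when a link is clicked, compare its domain against a list of protected brand domains and block near-misses such as substituted characters. The brand list, threshold, and function names are hypothetical; real solutions combine many URL and page signals.

```python
from urllib.parse import urlparse

# Hypothetical list of brand domains the engine protects.
PROTECTED = ["paypal.com", "microsoft.com", "vadesecure.com"]

def edit_distance(a, b):
    """Levenshtein distance via the classic dynamic-programming table."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[-1] + 1,          # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def check_at_click(url):
    """Block lookalike domains (e.g. paypa1.com) that sit within a couple
    of character edits of a protected brand; exact matches are allowed."""
    domain = urlparse(url).netloc.lower()
    for brand in PROTECTED:
        distance = edit_distance(domain, brand)
        if 0 < distance <= 2:
            return "block"
    return "allow"
```

Deferring this check to click time matters because phishers often weaponize a URL only after the email has been delivered, so a scan at delivery alone can miss it.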
Assessing an AI-based Solution
Searching for an AI-based solution for email protection requires asking the right questions. Not all solutions are created equal. The quality and amount of data an AI engine is trained on are critical to its ability to identify malicious emails. When speaking to vendors, ask what types of models the AI engine uses (supervised, unsupervised), where the data originates, the size of the dataset, and how frequently the data is updated.
Finally, understand what technologies are being combined with the AI solution. AI alone isn’t an effective solution. It should be combined with other proven technologies to build a solid layer of protection.
To learn more about how AI-based technology is being used to fight human error, register for the Vade Summit.