The psychology of cyberattacks

Behind every data breach is a calculated manipulation, and behind every good defense response is a strategy. The psychology of cybercrime, the resilience of security professionals, and the behaviors of everyday users combine to form the human element of cybersecurity. Arguably, it is the most unpredictable and influential variable in our digital defenses.

Microsoft customers around the globe received over 600 million cyberattacks each day in 2023. Hardly a week passes without a security breach somewhere. These incidents undermine trust in organizations, institutions, and governments. Yet the response to this threat has historically focused disproportionately on technical defenses, overlooking the psychological factors that determine whether attacks succeed or fail.

 

Inside the mind of a cybercriminal

At the core of every cyberattack is a human, driven not just by code but by complex motivations and psychological impulses. Cybercriminals are not merely technologists. They are people with intentions, convictions, emotions, and specific psychological profiles that drive their actions.

Financial gain remains a primary incentive for attacks like ransomware. Others are driven by ideological motives or by the thrill of outsmarting advanced defense mechanisms. Many share distinct personality traits:

  • an inclination for risk-taking
  • problem-solving prowess
  • an indifference to ethical boundaries

The physical and digital distance inherent in online crime creates a psychological disconnect, minimizing the moral weight of attackers' actions. This environment enables cybercriminals to justify their behavior in ways they might not if they had to face victims in person.

Tarnveer Singh and Sarah Y. Zheng, authors of The Psychology of Cybersecurity, explain this through crime science theory. The "crime triangle" holds that for a criminal act to happen, there needs to be an opportunity at a specific time and place where a motivated individual can interact with a suitable target. A second triangle identifies three psychological elements that further drive someone to become an offender: desire, opportunity, and ability.

"Someone needs to feel strongly about reaching a specific goal, perceive an opportunity to act on that desire, and have the right skills to capitalise on that opportunity," Singh and Zheng write. "Crime can be prevented or at least reduced by blocking just one of the three elements of either triangle," the article states.

The authors interviewed active cybercriminals to understand their psychology firsthand. One described the experience as "a game of cat and mouse," expressing no remorse for the victims. "Companies should be better prepared. If I can get in, so can others. It's their responsibility to protect their data."

When asked about the psychological toll, another cybercriminal acknowledged the stress: "The constant need to stay ahead of security measures and the fear of getting caught can be very stressful. There's always a level of anxiety that comes with this line of work." This individual described developing coping mechanisms, including compartmentalization, exercise, and maintaining hobbies outside of hacking.

A reformed hacker who had served time for a significant attack offered a different perspective. When asked what drove the initial offense, the response was, "If they had given me a place on the course none of this would have happened. I would have had a completely different life." The hacker, who has autism along with depression and anxiety, described how isolation and rejection catalyzed the turn toward cybercrime: "All that time I spent alone. I spent a lot of time on computers and on devices. When you spend so much time online I guess you'll find more ways to keep yourself entertained."

Analysis of these interviews revealed consistent patterns:

  • curing boredom
  • loneliness
  • genuine interest in computers
  • desire for freedom and independence
  • proving oneself
  • taking pride in successful attacks
  • staying anonymous
  • financial gains
  • a rationalized "duty to expose vulnerabilities."

The thrill of a successful cyberattack and its external rewards become answers to feelings of loneliness and boredom. The anonymity of online communities and the ease of hacking in the absence of a capable guardian set a conducive backdrop for these individuals to continue their activities.

 

How social engineering exploits human psychology

One of the most powerful weapons in a cybercriminal’s arsenal is not high‑tech malware but the vulnerability of the human mind. IBM’s Cost of a Data Breach Report 2025 found that phishing was the most common initial attack vector, accounting for 16% of breaches and averaging $4.8 million in costs. These attacks succeed by exploiting non‑technological factors such as trust, fear, urgency, and curiosity. By manipulating users into clicking malicious links or revealing sensitive information, attackers leverage psychological triggers rather than technical exploits. The report also noted that AI‑driven phishing and deepfake impersonation attacks are rising, underscoring that human interaction remains the most exploited weakness in cybersecurity.

Robert Cialdini's principles of persuasion, along with related emotional triggers such as fear, provide a framework for understanding these tactics (a brief illustrative sketch follows the list):

  • Social proof involves using the opinions or actions of others to influence behavior. A hacker might create fake social media profiles to make it appear that many people are using a particular product or service.
  • Authority occurs when a hacker poses as a person of authority to gain trust. An attacker may pose as a colleague from the IT department and ask an employee to provide login credentials. The employee may comply because they assume the person is an authority figure with a legitimate reason for the request.
  • Scarcity creates a sense of urgency to motivate action. A hacker might send an email claiming that the target's account has been compromised and that immediate action is required to prevent further damage.
  • Fear can create urgency that leads people to make hasty decisions, compromising their security. A phishing email claiming to be from a bank might warn that an account has been compromised, urging the recipient to click a link and enter login information.
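
To make these cues concrete, the toy sketch below flags phrases associated with authority, scarcity, fear, and social proof in an email body. The keyword lists, function name, and sample message are hypothetical illustrations, not a real detection tool; genuine phishing defenses rely on far richer signals than keyword matching.

```python
# A toy sketch flagging persuasion cues that social engineers lean on.
# Keyword lists and matching logic are illustrative assumptions, not a vetted model.

PERSUASION_CUES = {
    "authority": ["it department", "compliance team", "your ceo"],
    "scarcity": ["expires today", "immediate action required", "last chance"],
    "fear": ["account compromised", "suspicious activity", "will be suspended"],
    "social proof": ["everyone in your team", "most employees have already"],
}

def flag_persuasion_cues(email_text: str) -> dict:
    """Return the persuasion principles whose cue phrases appear in the email."""
    text = email_text.lower()
    return {
        principle: [phrase for phrase in phrases if phrase in text]
        for principle, phrases in PERSUASION_CUES.items()
        if any(phrase in text for phrase in phrases)
    }

sample = ("This is the IT department. Suspicious activity was detected on your "
          "account; immediate action required or access will be suspended.")
print(flag_persuasion_cues(sample))
# {'authority': ['it department'], 'scarcity': ['immediate action required'],
#  'fear': ['suspicious activity', 'will be suspended']}
```

Awareness training asks readers to run much the same scan mentally: notice when a message leans on authority, urgency, or fear before acting on it.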

With openly available large language models, it has become easier to create persuasive messages employing these tactics. The era of awkward phishing emails from Nigerian princes and grammatically flawed security notices is over. Combined with the trove of personal data breached over the years and information people publicly reveal online, an adversary can easily craft and automate messages that fit naturally within a target's daily context.

An employee in the finance department of British engineering company Arup was scammed out of $25 million over a deepfake video call with apparent colleagues, including the chief financial officer. The employee had fallen for a phishing email and was then convinced of the need for a secret transaction by the deepfake personas on the call. By using AI, the attackers won the employee's trust and induced an action the employee would not normally have taken.

 

The brain as a prediction machine

Understanding why people fall for these attacks requires understanding how the brain processes information. The past decades of neuroscience research have led to the idea that brains are natural prediction machines, constantly estimating what we are seeing, hearing, smelling, touching, and tasting. As we grow older, the brain learns a model of the physical world to explain, predict, and respond better to what is around us.

Neuroscience research in Frontiers in Computational Neuroscience describes predictive processing as a framework in which the brain constantly generates and updates internal models to anticipate sensory inputs and minimize prediction errors.

To make those predictions, the brain relies on at least three types of cues: external event likelihoods, reward contingencies, and theory of mind.

  1. Event likelihoods relate to how often we sense and experience something. If your boss always emails about project deadlines, chances are that when you receive a new email from your boss, your brain expects it to say something about a new project deadline. Since most people receive mostly legitimate emails, the brain is less likely to flag any email as phishing until it senses something significantly unexpected (a worked example after this list shows how strongly that base rate weighs on the judgment). Paradoxically, a phishing detection system with very low false negative rates may lead to less email security awareness, because users have hardly any exposure to phishing.
  2. Reward contingencies relate to the feedback we get on our behavior, which helps us learn. External rewards like money or recognition tell the brain when something was done well, making us more likely to repeat that behavior. But there is also an internal reward when the brain predicts something correctly.
  3. Theory of mind relates to understanding others by imagining ourselves in their situation. Since we do not have direct access to someone else's feelings and thoughts, our best guess for predicting someone else's state of mind is to think about what it would be like to be them.
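
To illustrate the event-likelihood point, here is a minimal Bayes' rule sketch with hypothetical numbers (the 1% base rate and the cue frequencies are illustrative assumptions, not measured values). Even when a message carries a fairly strong suspicious cue, the low prior probability of phishing keeps the judged likelihood modest.

```python
# A minimal sketch of why low base rates keep judgments (and naive heuristics)
# from flagging phishing. All numbers below are illustrative assumptions.

def posterior_phish(base_rate: float, cue_given_phish: float, cue_given_legit: float) -> float:
    """P(phishing | suspicious cue) via Bayes' rule."""
    evidence = cue_given_phish * base_rate + cue_given_legit * (1 - base_rate)
    return cue_given_phish * base_rate / evidence

# Assume 1% of incoming mail is phishing, and a suspicious cue appears in
# 80% of phishing emails but also in 5% of legitimate ones.
p = posterior_phish(base_rate=0.01, cue_given_phish=0.80, cue_given_legit=0.05)
print(f"P(phishing | cue) = {p:.2f}")  # 0.14 -- even a strong cue rarely tips the judgment
```

The same arithmetic explains the paradox noted in the first item: the better upstream filters get, the lower the base rate users experience, and the weaker their expectation that any given email could be malicious.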

The psychology behind why people fall for phishing and social engineering attacks is rooted in how humans detect dishonesty. A 2024 study in Communications Psychology by Zheng, Rozenkrantz, and Sharot found that people are poor at detecting lies because they rely too heavily on self‑projection, using their own behavior as a cue for judging others, while under‑utilizing more reliable statistical cues. In practice, this means that individuals who lie more often tend to suspect others of dishonesty, whereas honest individuals assume others are truthful. This bias helps explain why phishing emails can be so effective. Employees who would never engage in deceit themselves may not imagine anyone else would, leaving them more vulnerable to manipulation. The researchers concluded that improving scam detection requires tools and training that stress objective statistical indicators of deception, rather than subjective self‑projection.

Cognitive fluency plays a powerful role in how people form beliefs. Research in psychology shows that when information is easy to process (smooth websites, familiar layouts, fast loading times), people are more likely to judge it as true, plausible, and trustworthy. Conversely, when information is presented in a way that creates friction (slow sites, broken menus, awkward formatting), the brain generates more “prediction errors,” which can trigger doubt. Cybercriminals exploit this by designing phishing emails and fake websites to appear seamless and ordinary, ensuring that nothing feels out of place. By minimizing prediction errors, attackers increase the likelihood that targets will accept malicious requests as legitimate.

The illusory truth effect compounds this vulnerability. Behavioral research shows that people rate information as more truthful simply because they have encountered it before, regardless of its accuracy. A recent study by Vellani, Zheng, Ercelik, and Sharot (2023) demonstrated that individuals were more likely to share repeated statements than novel ones, even when those statements were false. This bias is easily weaponized in the digital age: political actors, hacktivist groups, and disinformation campaigns flood social media with repeated falsehoods, using armies of fake accounts to normalize misleading narratives. The more often people see the same claim, the more “true” it feels, a psychological shortcut that attackers exploit to manipulate public discourse.

An example of the illusory truth effect emerged during the COVID‑19 pandemic. False claims about vaccines, such as suggestions that they caused infertility or contained microchips, circulated widely online. A study by Evanega and colleagues at Cornell University (2020) found that more than 1.1 million articles containing COVID‑19 misinformation were shared on social media in a single year. Because these posts were repeated across thousands of accounts and groups, they created an illusion of consensus. Each repetition increased perceived plausibility, leading people to rate the claims as more accurate simply because they had encountered them multiple times. This repetition‑driven bias fueled vaccine hesitancy, eroded trust in public health guidance, and ultimately prolonged the pandemic’s impact.

 

FAQs

What is social engineering in cybersecurity?

Social engineering is the use of psychological manipulation to trick people into divulging sensitive information or performing actions that compromise security. Unlike technical hacking that exploits software vulnerabilities, social engineering exploits human vulnerabilities such as trust, fear, urgency, and curiosity.

 

Why do people fall for phishing attacks even after training?

Research shows that security behaviors only activate after the brain detects something unexpected. Since most emails people receive are legitimate, the brain is not primed to suspect deception. Additionally, cognitive fluency, the ease with which we process information, creates a sense of trust when messages fit smoothly into our daily context.

 

What is the "fundamental attribution error" in cybersecurity?

The fundamental attribution error occurs when organizations blame individuals for security failures rather than examining systemic factors. When an employee falls for a phishing attack, the instinct is often to attribute it to their carelessness or incompetence. However, factors like work overload, unclear procedures, poor tool usability, and hostile work environments increase the likelihood of mistakes.
