3 insider threats you need to plan for
by Rick Kuwahara, COO of Paubox
It seems like every week a new type of cyber threat takes over the news cycle, one of the latest being Petya, which became a global threat at breakneck speed.
But while outsider threats from cybercriminals dominate the headlines, insider threats can be just as devastating for any organization.
We’ll take a look into three types of insider threats and what you can do to prevent them.
What is an insider threat?
Before we get to the meat and potatoes, let’s take a step back and define what an insider threat actually is.
According to the US Computer Emergency Readiness Team (US-CERT), an insider threat comes from an individual who has or had authorized access to an organization’s assets. They can use this access to either maliciously or unintentionally act in a way that could negatively affect the organization.
The big takeaway here is that a person doesn’t have to be malicious to pose an insider threat. Accidents can happen, and they can be very costly, especially in regulated industries like healthcare where breaches can result in million-dollar fines.
Types of insider threats
People commonly break out insider threats as either ‘malicious’ or ‘accidental’, but other researchers have added a third category – ‘non-malicious’.
It may seem like semantics, but adding a third category is actually useful in mitigating risks and identifying potential threats.
Let’s dive into the types of threats to see exactly how.
The malicious insider threat
Just as it sounds, the malicious insider threat is defined by the intent of the individual to do the organization harm.
Some examples of malicious actions include:
- Intellectual Property theft
- IT sabotage
US-CERT analyzed over 800 malicious insider attacks and found that there is no standard profile of a malicious insider.
Although we did list a few examples of attacks, a malicious insider will exploit business process weaknesses as often as technical ones.
So with no profile to go by, how do you prevent malicious insider attacks from happening?
Here are a few measures you can build on:
- Pay attention to policy and security awareness training. Don’t let weak policies give individuals opportunities to exploit.
- Watch for signs of possible malicious insiders. Is someone threatening or bragging about how much damage they can do? Are they downloading large amounts of data? Look for red flags in behavior and breaks from normal patterns.
- Know how to report concerns. Give employees a way to identify when they think something is wrong.
- Take advantage of technology. With big data and advanced analytics, there are now tools available to monitor users’ actions. While they may not be preventative, they help provide insights you can use for future threats.
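As an illustration of the kind of monitoring mentioned above, here is a minimal sketch in Python of a baseline check a tool might run against per-user download logs. The function name, threshold, and sample numbers are all hypothetical; real user-activity monitoring products use far richer signals than daily download volume.

```python
from statistics import median

def flag_anomalies(daily_mb, threshold=10.0):
    """Flag days where a user's download volume far exceeds their own baseline.

    Uses median + MAD (median absolute deviation) so that a single huge
    spike doesn't inflate the baseline it is being compared against.
    """
    med = median(daily_mb)
    mad = max(median(abs(x - med) for x in daily_mb), 1.0)
    return [(day, mb) for day, mb in enumerate(daily_mb)
            if mb > med + threshold * mad]

# Hypothetical download history (MB per day) for one user
history = [120, 95, 110, 130, 105, 4800, 115]
print(flag_anomalies(history))  # → [(5, 4800)]
```

A flagged day isn’t proof of malice — it’s the kind of break from normal patterns worth a closer look, exactly the red-flag behavior described above.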
The accidental insider threat
Where a malicious insider has the intent to do the organization harm, the accidental insider threat is defined by a “failure in human performance” according to US-CERT.
This is pretty much a nice way of saying that there’s human error involved that results in harm to the organization.
A classic example is when an employee falls for a phishing attack and clicks on a suspicious link in an email. The human factor is a major reason why phishing attacks are still so prevalent – they often work.
Many major cyber attacks, like Petya, start and spread through phishing emails.
Other common examples of accidental insider threats include:
- Accidental disclosure of information, like sending sensitive data to the wrong email address.
- Physical data release, such as losing paper records.
- Portable equipment loss, which includes not only laptops but portable storage devices as well.
The difficult part of managing these types of insider threats is that they are often simply accidents.
You can implement technical safeguards, like strong spam-filtering software, but in the end it comes down to training and awareness programs.
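To give a sense of what such a technical safeguard does, here is a minimal Python sketch of a few URL heuristics a filter might apply to links in incoming email. The `TRUSTED_DOMAINS` allow-list and all example URLs are hypothetical, and a production filter would combine many more signals (sender reputation, attachment scanning, threat-intelligence feeds).

```python
import re
from urllib.parse import urlparse

TRUSTED_DOMAINS = {"paubox.com", "example.com"}  # hypothetical allow-list

def looks_suspicious(url: str) -> bool:
    """Return True if a URL trips any of a few classic phishing heuristics."""
    host = urlparse(url).hostname or ""
    # A raw IP address in place of a domain name is a classic phishing sign
    if re.fullmatch(r"\d{1,3}(\.\d{1,3}){3}", host):
        return True
    # Heavily nested subdomains are often used to bury the real domain
    if host.count(".") > 3:
        return True
    # Lookalike trick: a trusted name embedded inside an attacker's domain
    for trusted in TRUSTED_DOMAINS:
        if trusted in host and not (host == trusted
                                    or host.endswith("." + trusted)):
            return True
    return False

print(looks_suspicious("http://192.168.4.2/login"))           # → True
print(looks_suspicious("https://paubox.com.evil.net/reset"))  # → True
print(looks_suspicious("https://paubox.com/docs"))            # → False
```

Heuristics like these catch the obvious cases; training is what catches the link that slips through.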
According to the US-CERT Insider Threat Team, these programs can help increase employee awareness of the problem and help individuals recognize behavior of their own that can result in errors or lapses in judgment.
These programs can include:
- Policy reviews. Check your policies to ensure that they are up to date and readable for every employee – even the ones who aren’t security professionals. This means minimizing any security jargon for maximum understanding.
- Awareness training. Give staff the tools to know which websites are safe to visit, how to identify suspicious links, and what they’re throwing away and how it’s being disposed of (hint: the shredding box is there for a reason).
Many times, it’s as simple as thinking about what you’re going to do!
The non-malicious insider threat
A non-malicious insider threat is an individual who intentionally breaks policies, but without the intent to do the organization harm.
A little confusing right? Let’s look at it a little further.
The difference between a malicious insider and a non-malicious insider is the intent to harm the organization. One wants to harm it (malicious); the other doesn’t (non-malicious).
And the difference between an accidental insider and a non-malicious one is the intent behind the action that breaks organizational rules and puts the organization at risk. One did it as an honest mistake (accidental); the other did it on purpose (non-malicious).
Let’s look at a real-life example.
The most famous example is probably Edward Snowden.
He went into the NSA with the intent to steal data that would compromise it – a classic definition of a malicious insider threat.
But what about his co-workers?
Snowden used social engineering by asking co-workers for help in getting his work completed. Co-workers let him access their accounts, which expanded the amount of data he could get to.
They thought they were helping a co-worker and willingly violated policies to give him access to data he wasn’t cleared for. Breaking policy was intentional, but there was no intent to harm the organization.
Snowden’s co-workers are an example of a non-malicious insider threat.
If only one of Snowden’s co-workers had followed policies and raised the red flag, it may have stopped any data leaks or at least reduced the amount of data that was lost.
So what can you do about a non-malicious insider?
The way to prevent these threats is slightly different from the approach for accidental insiders.
Awareness programs and policy reviews are still the building blocks, but the focus shifts from employees examining their own behavior toward being aware of their surroundings and what’s being asked of them.
This also means creating a company culture that prioritizes security policies over being “polite.”
What does that mean? Here are some examples:
- Holding the door for someone is the polite thing to do, but asking to see a badge is okay too.
- Shoulder surfing is sometimes necessary when collaborating, but you can still ask for privacy when entering passwords and logins.
- Don’t share accounts even if it means a co-worker has to go through the “hassle” of resetting their passwords with IT.
- Reporting incidents of social engineering may not seem polite, but it’s necessary to stop breaches and reduce the risk of data loss.
Wrapping it up
Managing people is always the most challenging part of any IT security plan, and it only gets exponentially tougher as an organization grows.
But it’s never too early, or too late, to start getting policies and programs in place to mitigate any risks.
Identifying the three types of insider threats helps to ensure programs take into consideration the full view of potential internal weaknesses.
It’s also important that awareness programs are regularly refreshed to make sure staff is consistently aware of policies and their importance, so employees are not just going through the motions for the sake of checking off quarterly reviews.
This is especially true for organizations in regulated industries like healthcare, where million-dollar HIPAA violations from insider threats continue to increase each year.
About Carl Willis-Ford
This post was written in collaboration with Carl Willis-Ford, Senior Principal – Solution Architect at CSRA. Carl has over 25 years of experience in data management, IT process, technical management, and information security, and served in the U.S. Navy for eight years as a nuclear reactor operator on fast attack submarines.