Managing CVE-0

Published: 2011-05-27
Last Updated: 2011-05-27 12:43:10 UTC
by Kevin Liston (Version: 1)
9 comment(s)

Vulnerability Advisory: User clicks on something that they shouldn't have (CVE-0)

Description: There exists a vulnerability in all versions of user. An attacker can execute arbitrary code on a system by sending a specially crafted message to a vulnerable user.

Exploit: There are numerous exploits in the wild.

Remediation: Patches do not currently exist. For workarounds, see below.

Impact:

CVSS Base Score: 10

CVSS Vector: (AV:N/AC:L/Au:N/C:C/I:C/A:C/E:H/RL:W/RC:C)
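For readers who want to check the score above, here is a minimal sketch of the CVSS v2 base-score arithmetic for the base portion of that vector (AV:N/AC:L/Au:N/C:C/I:C/A:C); the coefficients come from the CVSS v2 specification, and the function name is my own.

```python
# CVSS v2 base-score coefficients (from the CVSS v2 specification).
AV = {"N": 1.0, "A": 0.646, "L": 0.395}   # Access Vector
AC = {"L": 0.71, "M": 0.61, "H": 0.35}    # Access Complexity
Au = {"N": 0.704, "S": 0.56, "M": 0.45}   # Authentication
CIA = {"C": 0.660, "P": 0.275, "N": 0.0}  # Conf./Integ./Avail. impact

def cvss2_base(av, ac, au, c, i, a):
    impact = 10.41 * (1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a]))
    exploitability = 20 * AV[av] * AC[ac] * Au[au]
    f = 0.0 if impact == 0 else 1.176
    return round((0.6 * impact + 0.4 * exploitability - 1.5) * f, 1)

print(cvss2_base("N", "L", "N", "C", "C", "C"))  # 10.0
```

Plugging in the worst case for every base metric is what pins CVE-0 at the maximum score of 10.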

How the Exploit Works

An attacker crafts a message and delivers it to the victim via a service such as email or instant messaging. A vulnerable user either clicks on a link (directing the system to a further attack) or activates a malicious payload contained in the message itself.

This gives the defender four leverage points:

A – The incoming message

Depending on the service used to deliver the message, the defender may be able to employ spam filtering, or anti-virus if the payload is included with the message.

B – On the system

Anti-virus on the system, process white-listing, and limited privileges could all help protect a vulnerable user from themselves.

C – The user

An alert and educated user (see below) may resist attacks that evade other protection mechanisms.

D – Outgoing requests

Web proxy filtering and anti-virus may succeed where all other methods have failed.
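The four leverage points can be pictured as layered checks that a message passes through in order. This is a toy illustration only; the patterns, hashes, and hostnames are hypothetical placeholders, not a real filtering product.

```python
import re

# Hypothetical examples for each technical layer.
SPAM_PATTERNS = [re.compile(r"verify your account", re.I)]   # layer A
KNOWN_BAD_HASHES = {"e3b0c44298fc1c149afbf4c8996fb924"}      # layer B
PROXY_BLOCKLIST = {"evil.example.com"}                       # layer D

def message_blocked(body, attachment_hash=None, clicked_host=None):
    """Return the first layer that stops the attack, or None."""
    if any(p.search(body) for p in SPAM_PATTERNS):
        return "A: incoming-message filter"
    if attachment_hash in KNOWN_BAD_HASHES:
        return "B: on-system anti-virus"
    # Layer C (the user) is a human decision, not code.
    if clicked_host in PROXY_BLOCKLIST:
        return "D: outgoing web-proxy filter"
    return None

print(message_blocked("Please verify your account now"))
```

Note that layer C sits between B and D: only when the message evades the first two layers and the user clicks anyway does the outgoing-request layer get its chance.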

The Bad News

This is a typical defense-in-depth strategy. Although it's the current best practice, I see it fail constantly. However, this is no reason to abandon the strategy; things would be much worse without it.

Vulnerability Management of Users

When there is an announced system vulnerability, it is common practice to deploy patches and workarounds to reduce the number of vulnerable systems in your environment. One should strive to similarly reduce the population of vulnerable users on your network.

Everyone is Vulnerable – The User Vulnerability Model

Remember that everyone is vulnerable, even you, dear reader. There will come a time when you haven't had your morning wake-up juice, or you are distracted, or one of your friends/family/clients gets compromised and sends you a message, or you become specifically targeted, and then you will likely click on something that you shouldn't have. Users are either vulnerable to CVE-0 or resistant to it, but no user is 100% CVE-0-proof all of the time. Users may shift from vulnerable to resistant and back over time. The ratio of vulnerable to resistant users is one (albeit difficult-to-measure) indicator of your environment's welfare.

As mentioned earlier, a user may change state from resistant to vulnerable if they are distracted, and alertness can change a user from a vulnerable state into a resistant one. New attack methods or schemes can shift a large population of users from resistant back to vulnerable. Timely communication out to the users can counter this shift, returning them to a resistant state.
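This back-and-forth can be sketched as a simple two-state model. The rates here are hypothetical numbers for illustration: each day some resistant users lapse to vulnerable (distraction, new attack methods) and some vulnerable users are moved back to resistant (training, timely communication).

```python
def step(vulnerable, resistant, lapse=0.05, train=0.20):
    """One day of state transitions with hypothetical rates."""
    lapsed = resistant * lapse      # resistant users become vulnerable
    trained = vulnerable * train    # vulnerable users become resistant
    return vulnerable + lapsed - trained, resistant + trained - lapsed

v, r = 900.0, 100.0                 # start with a mostly vulnerable population
for _ in range(60):
    v, r = step(v, r)
print(round(v), round(r))           # 200 800
```

Even starting from a mostly vulnerable population, the system settles at a ratio set by the two rates (lapse/(lapse+train), or 20% vulnerable here). The point of the model is that you manage the rates, not individual users, and new attack methods raising the lapse rate shift the whole equilibrium against you.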

New-hire/new-user training is key. As your user population increases, you want them to start off resistant. This also means that your training must be flexible and updated to keep pace with new attacks. Training is not perfect, and there is no guarantee that the user will be receptive to it.

Realizing that users will make errors, that training will not be perfect, and that new techniques will emerge to further drive down the number of users resistant to CVE-0 can make one feel that defeat is inevitable. But if you can train the majority of your users to be "link-suspicious," they will be remarkably resilient. This, coupled with the other layers in your defenses, should keep the number of CVE-0 events that you have to respond to at manageable levels. Manageable levels are what you want; chasing zero will cause other issues, as we'll see below.

A System Model of the Attack

To gain more insight into the problem, I propose the following model of the typical attack:

  • Criminals that are motivated financially to exploit CVE-0 style attacks are going to spend enough effort to achieve a certain level of vulnerable users to meet their profit goals.
  • Defenders attempt to drive the level of vulnerable users down as low as possible.
  • Users need to use systems and consume information resources.

These forces combine to form a dynamic, non-linear system that can express some surprising behaviors and respond unexpectedly to your attempts to control it.

Your security efforts may have unexpected results for the following reasons:

  • Non-linearity-- There isn't a linear relationship between the defenders' efforts and their impact on the level of vulnerable users. Doubling your rule set will not block twice as many attacks, and sometimes increasing effort yields diminishing returns.
  • Externalities-- There is more going on outside of this model that can have an impact on the level of vulnerable users.
  • Linked requirements-- a security manager may have a number of levers to pull to define their strategy, but due to interrelated systems and limited resources, cranking up one lever may have little to no effect, because that effort may starve another effort, or because a different lever is set too low.
  • Delays-- it takes a while for policy changes to be communicated out to the staff, or for increased law-enforcement to reduce the number of cyber-criminals. It may take more time than expected to detect the results of a change in strategy.
  • Bounded Rationality-- every actor in the system is going to act in his or her own best interest based upon how they perceive the world. With incomplete and imperfect information they are going to make decisions that may not be in the best interest of the whole.
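The non-linearity point is easy to make concrete with some arithmetic. Assume, hypothetically, that each independent rule catches 10% of attacks; then the combined block rate grows as 1 - (1-p)^n, and doubling the rule set does not come close to doubling the coverage.

```python
def block_rate(rules, per_rule=0.10):
    """Fraction of attacks blocked, assuming independent rules
    that each catch per_rule of the attack traffic (a toy assumption)."""
    return 1 - (1 - per_rule) ** rules

for n in (10, 20, 40):
    print(n, round(block_rate(n), 3))
```

Going from 10 to 20 rules lifts coverage from roughly 65% to 88%, not to 130%; the last few percent cost far more rules than the first 65% did.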

This particular system of criminals, defenders, and users can express a number of vexing scenarios for the users and the defenders.

In our model there is a clear conflict between the goals of the criminals and the defenders. The harder the defenders push down the vulnerability rate, the more effort and resources the criminals will leverage against them. New tools will be circumvented, and take-down efforts will be countered with bullet-proof hosting and fast-flux networks. The result is a never-ending arms race. The fix is not to push so hard; instead, you must "push smarter."

Don't let CVE-0 events get to the point where they become "business as usual." It may be disheartening to realize that as more users come online, they'll breathe new life into old scams, or frustrating when delays allow criminals to operate with seeming impunity. New ground gained by the criminals should not redefine the "new normal." That results in a race to the bottom where these events are tolerated or ignored. CVE-0 events should not become proceduralized and outsourced to your managed security service provider. Each incident is an opportunity to improve the defender's strategy and position.

Users are caught in the middle between the defenders and the criminals as the defenders apply new rules, tools, and policies to counter the criminals' changing messages and tactics. Depending on how the defender reacts, the users can end up as one of the following: allies, wards, or enemies.

If the defender deploys too many rules, or too restrictive policies, the users (in their bounded rationality) will organize "solutions" that circumvent these controls so that they can get their jobs done. In the worst cases, this can turn the users hostile to the defenders. When these "solutions" and workarounds are discovered, you have to resist the urge to clamp down harder, because this is a clear sign that your policy lever is already pushed too far, and now is not the time to push on it harder. It's time to rethink and redesign your strategy and try to leverage the creativity of the users to your benefit.

Another common result of this conflict between defenders and criminals is that the defenders assume more and more control from the users, so that eventually the users become wards of the defenders. This works for a while as the team deploys new tools and processes. Unfortunately, these efforts only serve to mask the root cause of the problem (vulnerable users, in this case). As time goes on, the defenders' resources will dwindle, and when the layers of defense are circumvented, the users won't have any experience dealing with the threat on their own and will likely fail. They essentially become addicted to the security tools and no longer make security decisions on their own.

Finally, a note on metrics. Another trouble point for the defense's efforts is how they are measured. If the security of the firm is measured by the size of security budgets, then security budgets will grow. If it's measured by the number of detections, then rule-sets will expand. The goals have to measure the real welfare of the system. Otherwise the system will head off in unwanted and unexpected directions.

Integrating Incident Response with Vulnerability Management

Incidents caused by CVE-0 should be handled like any other system compromise. While you can't reimage a user and move on, you can educate and inoculate them. The user's team and peers should also be educated at that time. The lessons learned from the incident should be captured and integrated into new-hire and ongoing training. When delivering education, remember that everyone is sometimes vulnerable to CVE-0.

Acknowledgement

I'd like to recognize the large influence that Donella H. Meadows' "Thinking in Systems" had on this analysis. I strongly recommend it as a source of new ways to look at the problems you face every day.

Keywords: CVE0

Comments

A very good take on user vulnerability, every user is vulnerable at some point. Every user, no exceptions, admins, secinfos, hackers.... we are all lusers at some point.
Let us not forget most security ignorant *users* do not care to learn or comprehend. Case-in-point, at work I just discovered even though we've educated app developers to refrain from sending account credentials through email, they continue to do so, but on the d'low. They remember never to send passwords in email when communicating with info-sec staff, however amongst themselves and end users they share passwords via email freely.
Realist: This reflects nothing more than a rational choice between productivity and security. Emailing passwords is more convenient and faster than alternate communication techniques, resulting in increased productivity. Since the (perceived) risk/loss of value of not following security protocols is small, most people do the rational thing and favor productivity (which affects income, future employment, etc) over security (which has no negative immediate effect on any of those factors).

This is not a new phenomenon. It has been true in physical security since the dawn of time.

An effective security policy provides the appropriate incentives so that everything is done naturally in a secure manner. But this is very hard to do, and very few security policies actually do it.
@ Economist
"... a rational choice between productivity and security..."
It's even simpler than that.

The machine has no brain. Use your own.

.
The solution is simple. When a user gets infected, remove their PC and give them a green-screen terminal to do their job. You won't need to replace it again.

Seriously. Why do we allow users to browse Facebook and watch videos at work? They don't need Internet access like that. Give them a terminal-based system to do their work.

<soupnazi>No computer for you!</soupnazi>

PS: And could someone please create a web browser that just *displays* web pages instead of executing them? (the same applies to a PDF reader ;)
@Realist

That's a great example of users learning around the policy. They've learned to not send their passwords to infosec email. Out of curiosity, what alternatives are you recommending to them? What is your environment's preferred method to distribute account credentials?
@Old Fart:
Although BOFH-style security policies are fun to dream about (ahh the good old days) they don't describe an environment that anyone would want to work in. Your post-script is an interesting example of the ward-like behavior. Except in this case, it's the defenders who are addicted/dependent upon software developers to create products to solve their problems. I thank you for this insight.
Realistically, users are the largest and most easily exploitable vulnerability in an organisation. As we see from the "Microsoft Support" scams it is even possible to exploit a network via the telephone, as long as you 'proxy' your attack through a slightly squishy remote control protocol that occasionally needs to be talked through finding the start menu.

In a business context though, with 95% of workstations (although this does depend on the role of the organisation and the particular employee) you can realistically get away with application whitelisting and web whitelisting. Heck, even a lot of IT staff could realistically use application whitelisting. On a day-to-day basis, how many unexpected apps do you really use? It might take a week or so to iron out the wrinkles but the increase in security is worth it, IMO.
Good to see Kevin's interesting piece got discussion going.

@Realist @Old Fart @Gemina - Our security paradigms are the problem.

We are constantly defending against threats that have already been discovered. Security is always a step behind. Sometimes it is proactively a step ahead, but in general security is behind.

@Gemina - well whitelisted sites and apps is really not necessarily going to solve a number of the blended threats in today's landscape. Who and what do you trust?

A large majority of the world trusts Microsoft to deliver a product fit for purpose, yet the percentage of threats/OS would suggest they do not.

We trust open source repositories, we trust old linux core app updates, unquestioningly. As the complexity increases, so does the number of threats and blended threats.

In today's world of affiliate programme malware distribution, 64 bit in the wild rootkits and the "browser" being the common gateway (independent of OS), who can you trust? Google? Google image results are infected rogue iframe drive-bys.

You can trust no one. However, the development of services and applications is not going to stop or slow because the easiest path to protection for IT admins is "Deny".

Admins always lecture about security, however even the MOST experienced admin or hacker is a luser. Thinking that you are up to date on ALL the latest security "best practices" is ludicrous! No one is. Google, phishtank, sans, AVG, Trend micro, etc... are all behind...

We now live in a world where breach is inevitable and all data will be social engineered. Our security paradigms need to start shifting.

But to where... away from Microsoft would maybe turn the tide for a while, but not for long.
