


RSA/EMC: Anatomy of a compromise

Earlier today I had the opportunity to read a blog post by Uri Rivner of RSA, the Security Division of EMC. While the investigation into the RSA/EMC compromise is still ongoing, Mr. Rivner presents a very good summary of what they do know.

 Some of the facts as written by Mr. Rivner:

  • The first part of the attack was a spear-phishing attempt aimed at non-high-profile targets. The information on the targets was most likely mined from social networking sites. All it took was for one of the targeted employees to be tricked into opening an attached Excel spreadsheet.
  • The Excel spreadsheet contained a zero-day exploit targeting an Adobe Flash vulnerability.
  • The exploit added a backdoor and installed a remote administration program.
  • The malware then captured credentials for user accounts in order to compromise higher value targets.

In my experience this is a typical sequence of events for an APT attack. There is very little in this attack that is particularly sophisticated. The big question is: what defenses would have prevented this attack or reduced its impact?

Obviously the first is user education. It is extremely difficult to educate your employees to prevent this sort of targeted spear-phishing attack, but we need to try. The more users who know how to analyze an email to test its legitimacy, the less likely it is that an attack like this will succeed.

The bigger one, as Mr. Rivner alludes to in his blog post, is to come up with a new defense paradigm. The traditional paradigm of a well-protected perimeter with a soft inside should be dead. There are just too many ways to circumvent the perimeter; spear phishing is just one.

The thing is, I don't think this new paradigm is so new. Many have been advocating for years that we move prevention and detection closer to the data. There are a lot of approaches that can be used here, but in my mind it begins with segregating servers from the desktop LAN and controlling access to these protected enclaves as thoroughly as, or more thoroughly than, we control our perimeters today. It means classifying your data and installing protection and detection technologies appropriate to the sensitivity of the data. It means installing and tuning Data Loss Prevention (DLP) technologies to detect when your sensitive data is leaving your company. It means instrumenting company LANs and WANs so a network baseline can be determined and deviations from this baseline can be detected and investigated.
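
As a rough illustration of the baselining piece - the flow-record format, hosts, and 3-sigma threshold below are invented purely for this sketch, not a description of any particular product - deviations from a per-host outbound baseline can be flagged with very little code:

    # Toy network-baseline check: flag hosts whose outbound traffic deviates
    # sharply from their historical average. The record format and the
    # 3-sigma threshold are assumptions for illustration only.
    from statistics import mean, stdev

    def build_baseline(history):
        """history: {host: [daily_outbound_bytes, ...]} -> {host: (mean, stdev)}"""
        return {host: (mean(samples), stdev(samples))
                for host, samples in history.items() if len(samples) > 1}

    def find_anomalies(baseline, today, sigma=3.0):
        """Return hosts whose outbound bytes today exceed mean + sigma * stdev."""
        anomalies = []
        for host, sent in today.items():
            if host not in baseline:
                anomalies.append((host, sent, "no baseline"))
                continue
            avg, sd = baseline[host]
            if sent > avg + sigma * max(sd, 1.0):
                anomalies.append((host, sent, f"baseline {avg:.0f} +/- {sd:.0f}"))
        return anomalies

    history = {"10.1.1.5": [120e6, 95e6, 110e6], "10.1.1.9": [30e6, 28e6, 33e6]}
    today = {"10.1.1.5": 115e6, "10.1.1.9": 900e6}  # 10.1.1.9 suddenly ships ~900 MB out
    print(find_anomalies(build_baseline(history), today))

Real deployments would work from flow or DLP data rather than hand-built dictionaries, but the point is the same: once a baseline exists, the deviation check itself is simple.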

Obviously this is only a partial list of the possible strategies.  What else would you like to see done in your organization to help prevent this type of attack?

 

-- Rick Wanner - rwanner at isc dot sans dot org - http://namedeplume.blogspot.com/ - Twitter:namedeplume (Protected)

Rick

293 Posts
ISC Handler
First of all you *MUST* separate client LANs from server LANs by using a firewall. Access to the Internet *MUST* be ruled by a firewall (could Application Intelligence be useful?). Access to the Internet *MUST* be done through a proxy with some form of data inspection in search of *evil* data.
I use the word *MUST* and not "should" because of the sensitivity of the data being protected.
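
A minimal sketch of the default-deny segregation being described - the zone names, ports, and rules below are hypothetical, not anyone's actual policy:

    # Toy policy evaluator illustrating default-deny segregation between a
    # client LAN and a server LAN. Zones, ports, and rules are hypothetical.
    RULES = [
        # (source zone, destination zone, destination port, action)
        ("clients", "servers", 443, "allow"),    # web front end only
        ("clients", "proxy",   3128, "allow"),   # all Internet access via the proxy
        ("admins",  "servers", 22,   "allow"),   # management from the admin enclave
    ]

    def decide(src_zone, dst_zone, dst_port):
        """Return the first matching rule's action; default-deny otherwise."""
        for src, dst, port, action in RULES:
            if (src, dst, port) == (src_zone, dst_zone, dst_port):
                return action
        return "deny"

    print(decide("clients", "servers", 443))    # allow
    print(decide("clients", "servers", 445))    # deny -- no direct SMB to servers
    print(decide("clients", "internet", 80))    # deny -- must go through the proxy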
Anonymous
Great advice Rick!

Tony...
The administrators' machines were compromised, so:
1) Servers are accessible by the administrators, via whatever protection scheme. Even two-factor can be defeated if you 'own' the client.
2) Firewalls let some things through; this was how the data got out.
3) Malware knows how proxies work, and uses them automatically.


You *must* understand your threat environment, and build countermeasures to detect and protect against those threats.

In this case RSA didn't appreciate the whole threat environment, and so didn't protect access effectively.

Air gaps, where people have to walk from one area to another, are excellent at protecting against these kinds of attack, though they rely on easily sanitisable types/volumes of data passing from one zone to another on removable media - including business data, software patches and AV updates.
DomMcIntyreDeVitto

41 Posts
Tony,
Sorry, I implied a firewall in the diary but didn't actually say it. But firewalls are the best way to manage segregation. The trick is to enforce your firewall policy as strictly here as you would at your perimeter.

Dom,
I agree with you wholeheartedly. I am not sure that anything in this diary would have prevented the success of this attack, but it would make it more difficult and hopefully make detection more likely.

One of the more interesting approaches to segregation I have seen is that all management was done over a separate interface on a management LAN, and the only way onto that management LAN was via a VPN and two-factor authentication. Of course this does not prevent compromise of an application over the application ports, but it substantially reduces the ability to compromise the server accounts via keylogged data.
Rick

293 Posts
ISC Handler
I'm with Dom. I shudder when I hear the phrase "moving the protection closer to the data," because some organizations are now expecting datacenter firewalls between servers and workstations to provide this security.

Attackers target the workstations first, and the workstations are usually allowed through the firewalls to the data in the datacenters, so that model does not help much. Some experts are recommending segmenting workstations from other workstations, and that makes some sense to me. Internet HTTP proxies don't help much with this threat; RSA probably had one.

HIPS on workstations and servers that can detect suspicious actions generically is one part of the solution.

Something else that would help is forcing all web and email browsing to be done from a hardened virtual desktop session (Terminal Services, Citrix, VMware, whatever) on a server in a DMZ. It could even be a non-Windows VM. If the VM session got compromised, it couldn't go anywhere.

These threats may be hard to block, but they're not as hard to detect as everyone seems to think. We have the technology today; it just needs to be configured creatively. Unfortunately most organizations are probably not on the right track today; they're still using yesterday's paradigm.
Rick
8 Posts
In 1985 I got to talk to Grace Hopper one on one for a bit and then listened to her lecture, amazingly enough, about the "Prioritization of Information Interchange". She talked about how networks were being built with no way to manage and separate critical data from recreational and non-critical data.

It is amazing to me that 26 years later we still live with this problem. 90+% of all email is spam so we buy yearly renewable spam filter solutions and think we are pretty smart. Some cost tens of thousands of dollars a year to maintain. We spend the same or more on firewalls, AV, IDS, IPS, product after product after product. We spend even more than that on labor to manage and keep it all up to date, plus labor to meet compliance standards.

I have to believe that RSA has done a pretty good job with their network security. I've seen fully compliant systems with the latest in host-based protection get compromised in exactly the same way. Not once, but repeatedly. Often. Very often. Hot knife through butter, as if the defenses weren't even there.

WHY? Because DLP wasn't in place? Wow. I'm so dismayed to hear that what we need is another product. Firewalls between the workstations and the servers? To do what? Block users from getting to the data? Hey, once I am in your network, all I need are allowed connections and protocols to get to what I want. No host IPS is going to keep me from using OS tools (just like a user) to get to file shares and other data.

Boil it back to what happened and burn down the box you've limited yourself to.

An untrusted email server sent an untrusted attachment to an unprivileged user.

Hmmmm. 90+ percent of all email is spam coming from untrusted sources, and yet we feel the need to keep accepting untrusted connections as though they have some right to be there. We still cannot prioritize and differentiate between critical and non-critical data. We are inherently trusting, as though we were walking down Main Street USA, when we should be inherently distrustful, as though the Internet were downtown Mogadishu. We blacklist instead of whitelist.

Companies have better security on their parts/stock stores than they do on their networks.

4 billion IPv4 addresses and we trust them all until they prove themselves untrustworthy. Companies don't let people stroll into their physical properties, but they'll invite any untrusted connection in through email and web access. Think about it: every web browser is running untrusted code on every web page that is visited. In a corporation that has to protect its vital information, the list of websites that need to be visited is much smaller than any blacklist. Before allowing internal people to get external information, why are we not checking identity and trustworthiness? Certificates help to establish trust at some level and our e-commerce depends on it, so why not accept email only from companies with valid certificates? $100-$300 is a lot cheaper than $5000+ for spam filters, firewalls, IDSs, and AV that don't work for cases like this.
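
A toy version of that whitelist idea - the partner domains below are invented for the example, and a real deployment would sit in the mail gateway rather than a script:

    # Toy inbound-mail allowlist: accept only senders whose domain is a known
    # business partner; everything else is quarantined for review.
    # The partner list and addresses are hypothetical.
    TRUSTED_PARTNER_DOMAINS = {"example-supplier.com", "example-bank.com"}

    def classify_sender(from_address):
        """Whitelist instead of blacklist: unknown domains are quarantined."""
        domain = from_address.rsplit("@", 1)[-1].lower()
        return "accept" if domain in TRUSTED_PARTNER_DOMAINS else "quarantine"

    print(classify_sender("invoices@example-supplier.com"))      # accept
    print(classify_sender("webmaster@unknown-sender.example"))   # quarantine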

Observe, Orient, Decide, Act is a concept that I ran across in a podcast recently. Great concept. However, when our thinking limits us to an "Act" that only includes some new product or updated signatures, then the attackers will always win because they always know the outcome. We'll just blacklist that domain and IP: 1 of 4 billion for the IP, 1 of trillions for the domain name. The attacker has 3,999,999,999 more attack vectors to try, and more domain names than it is worth figuring out. But hey, I know who my business partners are. I talk to them. I see them. But that is way too hard... I don't have a damn product with a nifty moniker that goes with it.

Until we work to solve prioritization and trust, security will fail every single time, and we'll all just jump on the next product bandwagon.

Might as well throw in FireEye and the concept of a device that runs documents and code in a virtualized environment mimicking the end-user environment to help find stuff too. There evidently is no end to the money that can be spent on products.
Rick
4 Posts
PKI certificates? Certificates help to validate the identity of who you're talking to, but they haven't been very good at validating that the person you're talking to, or the files they're sending you, are safe. And certificate vendors can be gamed, as in the news recently. They're a poor validator of someone's identity.

It doesn't necessarily need to require new tools. Windows and other OSes can be configured to log all executables that are launched, which can then be monitored centrally. OSes can be configured with ACLs to block certain actions, like logging or blocking the creation of certain files in certain folders. And there are free tools that are possible alternatives. Sure, it's hard to block logins and legitimate commands between systems that use a stolen password, but you can log and detect them, and the initial phishing email contains malware that can be detected.
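
A rough sketch of that kind of monitoring - the log format and approved directories below are assumptions for illustration, not any vendor's product - could screen centrally collected process-launch records like this:

    # Toy check over centrally collected process-launch logs: flag executables
    # launched from outside approved directories (e.g. user-writable paths).
    # The record format and approved prefixes are assumptions for illustration.
    APPROVED_PREFIXES = ("c:\\windows\\", "c:\\program files\\")

    def suspicious_launches(launch_records):
        """launch_records: iterable of (host, user, exe_path); yield the odd ones."""
        for host, user, exe_path in launch_records:
            if not exe_path.lower().startswith(APPROVED_PREFIXES):
                yield host, user, exe_path

    records = [
        ("wks-042", "jsmith", r"C:\Program Files\Microsoft Office\EXCEL.EXE"),
        ("wks-042", "jsmith", r"C:\Users\jsmith\AppData\Local\Temp\a.exe"),
    ]
    for hit in suspicious_launches(records):
        print("review:", hit)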

Are you suggesting a whitelist approach to inbound email and outbound web browsing? That would break a lot of functionality and take money to support, and many environments won't do it. And that and the other suggestions would likely also require new tools. Security has to allow enough functionality or it won't gain acceptance.

Your approach to email sounds like DNSSEC, where each host is responsible for validating its own identity to others and hopes everyone else will voluntarily do the same. Problem is, because it's voluntary, we're a long way from the point where we can start blocking sites, ISPs and countries that don't use DNSSEC.
Rick
8 Posts
Mikee,

So the answer is, we can't win, so don't play? I don't like my career prospects on that decision.

I agree with you. New technology is not the answer. The first key to security is never technology. What we need are good people who can creatively architect, deploy, monitor, analyze and investigate when something out of the ordinary is detected. The point is that if we keep doing the same old thing, we can expect the same results. The M&M security architecture was acceptable in the mid-90s, but it is not good enough for this decade. I live by Dr. Eric Cole's mantra of "Prevention is ideal, but detection is a must". How many of the corporate networks out there are architected and instrumented in such a way that you could detect an attack like this as quickly as RSA/EMC did?
Rick

293 Posts
ISC Handler
I agree with Rick: "New technology is not the answer" and somewhat with mikee: IP addresses don't have to be assumed innocent until proven guilty. We would be better off with some really old technology along the lines of telegrams and model 33 teletype machines when accepting messages. Instead of sending attachments, just send the document's location on a mutually trusted server. The goal isn't to stop attacks, but to reduce noise levels and simplify forensics.
Rick
2 Posts
As others have said, and Rick neatly summarized, "technology is not the answer." This is not only a great synopsis of the current security reality, but also the right approach to cost management. Security is too often viewed as a "sinkhole" for budgets because we say "no" too often to projects meant to enhance functionality (and then usually backstep and say, "well, if we spend $xxxxxx, we could secure it"). A more appropriate approach is "security through sanity"; that is, a calculation of business/data value, risk, and mitigation costs. Detection tools such as IDS are great, but when it's often easy to implement active blocking on traffic that is highly unlikely to be legitimate (sort of tweaking Dr. Cole's statement here), it's asinine not to do so. For example, if an external and untrusted host or subnet is constantly kicking off SQL injection alerts, blocking the b*stard(s) is not a bad idea, as long as a reporting mechanism is in place to gain awareness of any impact on production systems (or a new client/vendor relationship, etc.).

The "buy new toys" approach is so irksomely pervasive that, speaking with a CISO for one of the larger companies around in regard to maximizing ROI on a single security platform that I managed, I suggested "lay me off, tear the whole [multi-million dollar] platform down, and throw it in the trash; you're not using it properly, and until you decide to, you're wasting money."

The InfoSec industry needs to remember that we should be business/organizational enablers - not just a budget item and impediment to profit/productivity.
Anonymous
I wish we could get a name for this other than APT ("Advanced Persistent Threat").

The moment we begin to use this phrase... we start to lose the war. Managers and users begin to tune us out when we refer to a need for protection against advanced threats.

The implication is that "advanced threat" refers to sophisticated human hackers, of which there aren't many... and that we're "too small" to be targeted, advanced hackers go after the big guys, etc.


In reality... there is nothing truly advanced about a multi-stage attack. This one might have involved spear phishing, but there's nothing stopping a multi-stage attack like that from being automated by software.



Mysid

146 Posts
The blog mentioned "The attacker then used FTP to transfer many password protected RAR files from the RSA file server to an outside staging server at an external, compromised machine at a hosting provider."

Without knowing RSA's architecture, I have to wonder why this file server was able to communicate with the Internet at all, either outbound from the server itself or inbound from the Internet to the file server.

Mikee, I could have written much of what you wrote. I'm not sure what you meant by "unprivileged user", though. I'm guessing you meant a non-IT admin user. They still could have been a local administrator on their PC, which would have made the attack a lot easier.

We began blocking inbound connections from non-ARIN netblocks years ago because we're a US-only business. We drop thousands of connections a day, and we used the same reasoning you did: there's no reason to accept all 4+ billion IP addresses, so we don't. All of those fancy filters we have simply don't need to deal with as much anymore. And we block a lot of those same netblocks outbound as well. No, it's not perfect. But it's a heck of a lot better than not doing it.
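
A stripped-down sketch of that kind of default-deny filtering - the netblocks below are documentation prefixes, not real ARIN allocations, and a production version would live in the edge firewall rather than a script - needs nothing more than the standard library:

    # Toy connection filter: accept inbound connections only from a small set
    # of allowed netblocks and drop everything else. The example prefixes are
    # placeholders, not real registry allocations.
    import ipaddress

    ALLOWED_NETBLOCKS = [ipaddress.ip_network(n) for n in ("192.0.2.0/24", "198.51.100.0/24")]

    def allow_inbound(source_ip):
        """Default-deny: permit only sources inside an allowed netblock."""
        addr = ipaddress.ip_address(source_ip)
        return any(addr in net for net in ALLOWED_NETBLOCKS)

    print(allow_inbound("192.0.2.55"))     # True
    print(allow_inbound("203.0.113.20"))   # False -- dropped at the edge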
Anonymous
Having observed and responded to several APT intrusions, I can say there is certainly no one magical control that will prevent such an intrusion. Multiple controls are needed now, and different ones will be needed in the future. Whitelisting executables and network connections with host-based security on all LAN hosts, and sinkholing identified dynamic DNS providers (note the callback identified in a ThreatExpert report back in September; threatexpert md5=188ed479857cc58a1a50533b8749b4c0), might have prevented this particular threat from advancing.
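
A crude sketch of the sinkholing idea - the provider suffixes and sinkhole address below are placeholders, not a real blocklist or a real resolver:

    # Toy DNS sinkhole: answer queries for known dynamic-DNS provider domains
    # with an internal sinkhole address so callbacks can be logged and blocked.
    # The provider list and sinkhole IP are placeholders for illustration.
    DYNAMIC_DNS_SUFFIXES = (".dyndns.example", ".no-ip.example", ".ddns.example")
    SINKHOLE_IP = "10.99.99.99"

    def resolve(qname, upstream_resolve):
        """Redirect dynamic-DNS lookups to the sinkhole; pass the rest upstream."""
        if qname.lower().endswith(DYNAMIC_DNS_SUFFIXES):
            print(f"sinkholed callback attempt: {qname}")
            return SINKHOLE_IP
        return upstream_resolve(qname)

    # usage with a stand-in upstream resolver
    print(resolve("evil-c2.ddns.example", lambda q: "203.0.113.7"))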
Anonymous
Yes, the most important part of our jobs, IMHO, is to learn from the mistakes of others before they happen to us. Unfortunately someone has to be first, and with companies unwilling to share their experiences, the criminals just become more successful.
Anonymous
Has the blog been hacked now? All I see is "Error establishing a database connection". That would be kind of ironic and funny somehow...

BTW: -1 @ Tony, +1 @ Dom, -1 @ Mikee, -1 @ JJ

JJ: You block non-US IP ranges? You do not understand the Internet. You're even worse than the Chinese government.

Mikee: TPM/TCPA/WOT is no solution; it will just make things worse, creating the illusion of a safe world ("safe inside the perimeter"). Wake up, kid, it's 1984...
Anonymous
I don't think "APT" is a useful term.
So what if this attack used a 0-day exploit?
In most cases, it would have been just as successful using an exploit that had just had a patch released the previous day. Changing the terminology does nothing except confuse the issue for non-technical people.

I also don't agree that user education is a solution. If it were going to work, it would have worked by now.

Personally, I prefer layers of IDS and interior perimeters. You should know what the normal traffic is on your network and WHY that traffic is normal.

Assume that you will be cracked.
How would you detect it?
Anonymous
@Garrett - I think you are on the right track... we have to get buy-in from the "powers-that-be" or else what we *try* to do is just a waste of time/money/effort.

There are a lot of great ideas in many of the responses... But none on how we can get the buy-in and authority/authorization we need to do what we need to do in order to protect our organizations.

Until we can find a method to get buy-in that works (possible candidate: the April 3rd Dilbert cartoon) - and works consistently - the snake-oil salesmen selling the shiny new APT detector to upper management during the private sales meeting [where no technical people are invited] will defeat any real progress we could ever hope to make in actually protecting against these multi-stage attacks (thanks for the phrase, @Draco).
Anonymous
Isn't one solution to security to sandbox different applications?
If Excel could only read/write specific file types, it would not be able to infect any binary, or even write a new one. Flash shouldn't be allowed to write anything either, apart from maybe .dat or .bin files, which cannot be executed.

A low-level sandbox (given it is secure) goes a long way, and would stop many viruses.
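
A toy model of that kind of per-application write policy - the application names and extension lists below are invented for the example, and a real sandbox would enforce this in the OS rather than in a script:

    # Toy per-application write policy, in the spirit of the sandbox idea above:
    # each application may only create files with a short list of extensions.
    # Application names and extension lists are invented for the example.
    WRITE_POLICY = {
        "excel.exe": {".xls", ".xlsx", ".csv"},
        "flashplayer": {".dat", ".bin"},
    }

    def may_write(application, filename):
        """Deny any write whose extension is not in the application's allow-list."""
        allowed = WRITE_POLICY.get(application.lower(), set())
        extension = "." + filename.rsplit(".", 1)[-1].lower() if "." in filename else ""
        return extension in allowed

    print(may_write("EXCEL.EXE", "budget.xlsx"))   # True
    print(may_write("EXCEL.EXE", "update.exe"))    # False -- cannot drop a binary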
Povl H.

71 Posts
