Diaries

Published: 2011-09-30

Microsoft Security Essentials Misidentifies Chrome

Microsoft's Security Essentials is falsely reporting Google Chrome as a password-stealing trojan.  Google discusses the problem here, and Microsoft comments on it here.  Chrome is being flagged as the Win32/Zbot password-stealing trojan.  The definition version that introduced the issue is 1.113.631.0; users report that previous versions did not trigger on Chrome.

More to come.....

 

Tony Carothers

tony dot carothers at gmail

1 Comments

Published: 2011-09-30

Firefox v. 7.0.1 Is Live

Mozilla released Firefox version 7.0.1 today, for both the desktop and mobile browsers, and followed up shortly with a support article regarding the handling of Add-ons.  A workaround is available and Mozilla is working on an update for Firefox.  One of our readers, Dave, submitted this:

From Mozilla:

"We’ve identified an issue in which some users may have one or more of their add-ons hidden after upgrading to the latest Firefox version, affecting both desktop and mobile. These add-ons and their data are still intact and haven’t actually been removed. We paused new updates to Firefox to minimize the potential impact on users and will soon release an update to fix this issue and ensure all your add-ons are visible and usable."

Tony Carothers

tony dot carothers at gmail

7 Comments

Published: 2011-09-29

The SSD dilemma

As early as 15 years ago, when memory sticks and SD cards started to become more and more prevalent, forensic researchers began looking into how evidence can be recovered from such storage media. Due to features like "wear leveling" and garbage collection, which automatically re-arrange content on the storage media even without instruction by the host computer, the consensus was that it is very difficult to make true forensic bit-level copies of flash storage media, and that it is even harder to obtain reliable copies of "unallocated space".

Since then, both the size and usage of solid state disks (SSD) have grown significantly. Laptops and tablets today are often sold with SSD storage by default and no longer contain any spinning disk drives.

Recent research shows the full dilemma that this rapid adoption brought with it:

  • In an outstanding paper, Graeme Bell and Richard Boddington show the effects of what they call "self-corrosion": how simply applying power to an SSD or memory stick can be sufficient for the on-board microcontroller to start re-arranging and zeroing out storage sectors, and how this affects evidence preservation and recovery of deleted files. If you are pressed for time, scroll to chapter 6 on page 12, and just read their "Recommendations and Guidance".
  • An equally interesting paper by researchers from UCSD shows the other angle of the same problem: how difficult it is to reliably erase content from SSD drives. The authors show that software used to wipe single files mostly does not work at all with SSDs, and that traditional software used to wipe entire drives often does not reliably erase the SSD media, either.

Conclusions:

  • If you are into forensics and evidence preservation, keep track of and familiarize yourself with all the types of SSD media in use in your company, and how they behave during forensic acquisition, before you actually need to do so in earnest on a real case.
  • If your company still applies the "wipe and re-use" processes developed for magnetic disks to SSD media, update your procedures to include SSD-specific instructions. Since the UCSD paper quoted above shows quite dismal results even when using the built-in "Secure Erase" command of the SSD device, you might have to combine several erasing methods to scrub the disk more reliably. The best solution today is to deploy full disk encryption (TrueCrypt, etc.) on portable devices with SSD media, because it addresses several risks (loss, theft, inability to wipe) in one swoop.


If you have pointers to recent research or suggestions on how to deal with forensic acquisition or secure wiping of SSD media, please let us know or comment below.

 

5 Comments

Published: 2011-09-28

All Along the ARP Tower!

The Address Resolution Protocol [1] is the method by which IPv4 network addresses are matched up with 48-bit Ethernet addresses. We cover many things here on the Storm Center, and lately man-in-the-middle attacks have come up often. One of the ways a man-in-the-middle position can be achieved is via ARP cache poisoning.

Wait, that sounds like a very old method. Shouldn’t we be protected against it by now?

Most higher-end hardware has ARP validation or Dynamic ARP Inspection. The question that often comes up is: who has actually turned the feature on? [2] [3]

There are simple tools and tutorials out on the “Intertubes” that demonstrate how to achieve an ARP cache poisoning man-in-the-middle [4] attack, so I will not reproduce them here. This diary is simply to state that I am still seeing this in my day-to-day operations, and to raise awareness.

In this world of XSS and web app penetration testing, we often forget the lower layers and how best to protect them. 802.1X is pervasive in the Wi-Fi space, and with the wired edge disappearing, perhaps that is a blessing in disguise. But how many networks implement 802.1X at the wired edge? Or better yet, in the data center?

Fortunately, the last event I encountered was simply a misconfiguration; however, it does demonstrate the risk. This client also had ARP validation turned on and detected it, which was a first that I can remember.

A question for this diary, given that MitM [4] is on our minds lately: what steps, if you are able to share them, do you take to ensure Layer 2 protection?
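
If you want a quick, vendor-independent way to watch for this on a segment, the sketch below (my own illustration, not anything from the event above) uses the third-party Scapy library to track ARP replies and flag IP-to-MAC changes; the interface selection and the privileges needed to sniff are left to the reader:

# Minimal ARP-watch sketch, assuming Scapy is installed and the script
# runs with enough privileges to sniff. Alerts when an IP address suddenly
# maps to a different MAC, a common symptom of ARP cache poisoning.
from scapy.all import ARP, sniff

seen = {}  # ip -> mac

def check(pkt):
    if pkt.haslayer(ARP) and pkt[ARP].op == 2:  # op 2 = "is-at" (ARP reply)
        ip, mac = pkt[ARP].psrc, pkt[ARP].hwsrc
        if ip in seen and seen[ip] != mac:
            print("ALERT: %s moved from %s to %s" % (ip, seen[ip], mac))
        seen[ip] = mac

sniff(filter="arp", prn=check, store=False)

This is detection only; prevention still belongs in the switch (Dynamic ARP Inspection, DHCP snooping) or in 802.1X at the edge, as discussed above.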

 

 

[1] http://tools.ietf.org/html/rfc826

[2] http://www.cisco.com/en/US/docs/switches/lan/catalyst6500/ios/12.2SXF/native/configuration/guide/dynarp.html

[3] http://www.juniper.net/techpubs/software/erx/erx50x/swconfig-routing-vol1/html/ip-config8.html

[4] http://en.wikipedia.org/wiki/Man-in-the-middle_attack

Richard Porter

--- ISC Handler on Duty

Twitter: packetalien

Email: richard at isc dot sans dot edu

4 Comments

Published: 2011-09-27

New feature in JUNOS to drop or ignore path attributes.

Some readers have been writing in saying they are seeing parts of their network drop peering for “unknown reasons”. The reason is that Saudi Telecom was sending out routes with invalid attribute #128 (a private attribute).

A NANOG posting shows the discussion of the private attribute:
http://www.gossamer-threads.com/lists/nanog/users/144466
This was triggering a Juniper peering issue; the PSN information below requires a Juniper login.
http://www.juniper.net/alerts/viewalert.jsp?txtAlertNumber=PSN-2011-09-380&actionBtn=Search
Juniper is (was) following RFC 4271 (http://www.ietf.org/rfc/rfc4271):
“When any of the conditions described here are detected, a
   NOTIFICATION message, with the indicated Error Code, Error Subcode,
   and Data fields, is sent, and the BGP connection is closed (unless it
   is explicitly stated that no NOTIFICATION message is to be sent and
   the BGP connection is not to be closed).  If no Error Subcode is
   specified, then a zero MUST be used.”

Starting with Junos 10.2, Juniper added the ability to completely ignore or drop path attributes of your choice:

http://www.juniper.net/techpubs/en_US/junos10.4/topics/task/configuration/bgp-drop-path-attributes-configuring.html
http://www.juniper.net/techpubs/en_US/junos10.4/topics/task/configuration/bgp-ignore-path-attributes-configuring.html

There is some fairly new work being done in an IETF routing working group to allow for minor miscommunication between peers without dropping the session and all of your neighbor's routes. It is still early, but given the issues we have seen with things like this lately, it is a good step forward, as are Juniper's new capabilities.

1 Comments

Published: 2011-09-27

Microsoft killed Kelihos botnet

Great news for Internet security: Microsoft has effectively killed off the Kelihos botnet, which had about 42,000-45,000 nodes. A signature to remove the botnet agent from infected machines has been added to the Malicious Software Removal Tool, which will be rolled out to users taking automatic updates. Microsoft also took a proactive approach on the legal front, filing for a court order to get Verisign (the registry operator for the malicious domains) to take down the malicious domains related to the botnet operations.

Great to see the Digital Crimes Unit at Microsoft being so proactive about shutting down malware. 

More info on this,

http://blogs.technet.com/b/mmpc/archive/2011/09/26/operation-b79-kelihos-and-additional-msrt-september-release.aspx
http://www.computerworld.com/s/article/9220321/Striking_a_domain_provider_Microsoft_kills_off_a_botnet?taxonomyId=82&pageNumber=1

1 Comments

Published: 2011-09-26

MySQL.com compromised spreading malware

MySQL.com was compromised and is spreading malware. This was first spotted by the folks over at Armorize. It looks like there is a piece of JavaScript on mysql.com containing an obfuscated iframe, which in turn links the user to malicious content - the BlackHole exploit kit. A torrent of exploits then hits the user's browser, PDF component, Java, and so on.

The issue has now been cleaned up on mysql.com, but there is no further word on the scope of the compromise. It also appears to be the second compromise this year; in the last incident, SQL injection was used to gain access to information on the site.

 

6 Comments

Published: 2011-09-25

SSL/TLS (part 3)

I was hoping for a more official release of the document, but you can find the paper and the sample decryption Java code here: http://www.insecure.cl/Beast-SSL.rar

The paper is an interesting read. To me it outlines the weakness in using CBC very nicely, and the attack is well described. It is certainly one of the more readable crypto papers I've come across. I suggest you read it whilst well fed and rested.

So is SSL/TLS dead?

The attack essentially implements a mini MITM attack using JavaScript delivered initially through a Cross Site Scripting (XSS) flaw. In a more traditional SSL MITM attack, the intercepting application terminates the SSL connection, presents a new certificate and then establishes an SSL connection to the originally requested site. Because that certificate is self-signed, it would typically throw up an error, allowing the user to notice that something is going on. This attack works at a lower level: the SSL connection isn't interrupted. Instead, the weakness in using Cipher Block Chaining (CBC) is exploited to get access to the desired information. Whereas in the traditional MITM attack the user has a chance of noticing, with this attack they are unlikely to. As is outlined in the imperialviolet blog, there are easier ways to attack. We do however need to keep this one in our minds.

How to fix it?

Well, the easiest fix would be for web sites and browsers to stop using TLS v1.0, but as Rob points out in a previous diary (http://isc.sans.edu/diary.html?storyid=11629), that may not be as easy as we think. The only other choice we have is to start disabling the ciphers that utilise CBC, but that may not work either, as there are precious few cipher suites available that do not use CBC. Using stream ciphers will address the issue, but may introduce new ones (RC4 has its own weaknesses).

Chrome has already addressed the issue and the fix on the browser side is quite simple and elegant. We'll see the other browsers implement something similar over the next few weeks. That doesn't fix the protocol, but it will help address the immediate issue of clients being attacked in this manner.  

If you do want to change the cipher defaults in the Windows world, you will need to make some registry changes.

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL

This key and subkeys control how the ciphers are used.

This article http://support.microsoft.com/kb/245030 explains how to change protocols and disable weak ciphers (make sure you test in a test bed first).
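
As an illustration of where those settings live, here is a read-only sketch of my own (not part of the KB article); the key and value names follow KB 245030, and note that many of the cipher subkeys do not exist until an administrator creates them:

# Sketch: enumerate SCHANNEL cipher subkeys and print any explicit
# "Enabled" values. Read-only; Windows only.
import winreg

BASE = r"SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers"

try:
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, BASE) as ciphers:
        index = 0
        while True:
            try:
                name = winreg.EnumKey(ciphers, index)
            except OSError:
                break
            index += 1
            try:
                with winreg.OpenKey(ciphers, name) as sub:
                    value, _ = winreg.QueryValueEx(sub, "Enabled")
                    print("%s: Enabled=0x%08x" % (name, value & 0xFFFFFFFF))
            except OSError:
                print("%s: no explicit Enabled value (defaults apply)" % name)
except OSError:
    print("No cipher subkeys present; SCHANNEL defaults are in effect.")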

As things develop, we will keep you posted.

Regards

Mark - Shearwater

5 Comments

Published: 2011-09-23

SSL/TLS Vulnerability Details to be Released Friday (Part 2)

The presentation at ekoparty finished a little while ago.  No real details are available yet.  In the meantime, there is a nice write-up here: http://www.imperialviolet.org/2011/09/23/chromeandbeast.html 

We will keep people posted if more info comes to hand.

Mark

0 Comments

Published: 2011-09-22

TLS 1.2 - Look before you Leap !

There's been a lively discussion on vulnerabilities in TLS v1.0 this week, based on an article posted earlier in the week ( http://www.theregister.co.uk/2011/09/19/beast_exploits_paypal_ssl/, http://www.theregister.co.uk/2011/09/21/google_chrome_patch_for_beast/, http://isc.sans.edu/diary.html?storyid=11611  ), which may (or may not, stay tuned) be based on a paper written back in 2006 ( http://eprint.iacr.org/2006/136.pdf ). Both the paper and the article outline an attack that can decrypt some part of a TLS 1.0 datastream (the article on the attack discusses cookies, we'll need to wait to see what it actually does). In any case, we've been seeing a fair amount of advice in the press recommending upgrading servers to TLS 1.2. I happened to make such a recommendation, with the caveat "if it makes sense in your infrastructure" on a mailing list, and was quickly corrected by Terry, an ISC reader. Terry correctly pointed out that upgrading your server is all well and good, but that's only half of the equation ...

Yes, many (most?) browsers are not yet TLS 1.2 capable. I did a quick check, and even though TLS 1.2 has been around for 3 years ( http://www.ietf.org/rfc/rfc5246.txt ), he was absolutely right.

The TLS support for browsers right now is:

  • IE9 - TLS 1.0, 1.1 and 1.2 all supported via Schannel
  • IE8 - TLS 1.0 supported by default, 1.1 and 1.2 can be configured
  • Opera 10.x - supports TLS 1.0, 1.1 and 1.2
  • Mozilla / Firefox - TLS 1.0 only (vote here to get this fixed ==> https://bugzilla.mozilla.org/show_bug.cgi?id=480514 )
  • Chrome - TLS 1.0 only (though an update is rumoured)
  • Safari - TLS 1.0
  • Cell phones - various support levels (WebKit has had TLS 1.2 since Nov 2010, but for individual phone browser implementations your mileage may vary)

I don't count older versions of any of these browsers, since people really should have auto-update on; if they don't, they've probably got bigger problems ( http://isc.sans.edu/diary.html?storyid=11527 ).

TLS support for servers is similarly spotty (thanks Swa for this list):

  • IIS (recent versions) - again, all TLS versions supported
  • Apache with OpenSSL - 1.0 only
  • Apache with GnuTLS - 1.2 is supported (note however that GnuTLS does not have the full feature set that OpenSSL does, nor does it have the body of testing, peer review and overall acceptance that OpenSSL has behind it)

So, if you plan to upgrade to 1.2 and force clients to 1.2, your clients had better be running Opera or IE9 ONLY. The game plan most folks will follow is to plan for an upgrade if their server supports 1.2 (which means IIS right now) and run both 1.0 and 1.2 in parallel. What this means for us as a community is that if there is in fact a TLS 1.0 exploit, we'll likely start seeing it in conjunction with TLS downgrade attacks - sounds familiar, eh?
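
If you want to see what a given server will actually negotiate with a modern client, a small standard-library check like the one below works (a sketch of my own; the hostname is a placeholder, and the result depends on the client-side library as much as on the server):

# Sketch: report the protocol version and cipher suite negotiated with a server.
import socket
import ssl

host = "www.example.org"  # placeholder
ctx = ssl.create_default_context()

with socket.create_connection((host, 443), timeout=10) as sock:
    with ctx.wrap_socket(sock, server_hostname=host) as tls:
        print("negotiated protocol:", tls.version())
        print("cipher suite:", tls.cipher())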

The other thing that leaps out at me in this mess is cellphones. Any "how popular is my browser" site out there will show the jockeying for market share between the various browsers over the years, and will also show the exponential growth of cellphone browser traffic on the web. Not only are they becoming the most popular browsers out there, they will likely become the majority of browser traffic as well. Updates for cellphone browsers do not come from the browser author, they come from the phone manufacturer, and are generally distributed to end-users of the device by the carrier. So the update of any given component (like the browser) can see significant delay (like months, or never) before real people see it on their device. This update logjam has been an ongoing issue, maybe a "crisis in crypto" will force some improvements in this area!

===============
Rob VandenBrink
Metafore

9 Comments

Published: 2011-09-21

October 2011 Cyber Security Awareness Month

It is that time of the year again, Cyber Security Awareness Month.  Over the last few years we have participated in the October Cyber Security Awareness month (just search the archive for "cyber security awareness month").  During the month, in addition to our normal diaries, we take a specific topic or theme and publish a diary on the topic.

This year the theme is the "20 Critical Security Controls". I know what you are thinking: 20 controls, 31 days? A number of the controls will easily take a few days to cover. For those of you who are unfamiliar with the 20 Critical Security Controls:

"These Top 20 Controls were agreed upon by a powerful consortium brought together by John Gilligan (previously CIO of the US Department of Energy and the US Air Force) under the auspices of the Center for Strategic and International Studies. Members of the Consortium include NSA, US Cert, DoD JTF-GNO, the Department of Energy Nuclear Laboratories, Department of State, DoD Cyber Crime Center plus the top commercial forensics experts and pen testers that serve the banking and critical infrastructure communities."

(http://www.sans.org/critical-security-controls/)

There are 20 controls; 15 of these can be automated, the last 5 cannot. Each addresses a set of risks, and the diaries will explore how you may be able to implement the control.

This year the controls were updated to include the Australian Defence Signals Directorate's 35 mitigation strategies.

The controls are as follows:

  • Critical Control 1: Inventory of Authorized and Unauthorized Devices
  • Critical Control 2: Inventory of Authorized and Unauthorized Software
  • Critical Control 3: Secure Configurations for Hardware and Software on Laptops, Workstations, and Servers
  • Critical Control 4: Secure Configurations for Network Devices such as Firewalls, Routers, and Switches
  • Critical Control 5: Boundary Defense
  • Critical Control 6: Maintenance, Monitoring, and Analysis of Security Audit Logs
  • Critical Control 7: Application Software Security
  • Critical Control 8: Controlled Use of Administrative Privileges
  • Critical Control 9: Controlled Access Based on the Need to Know
  • Critical Control 10: Continuous Vulnerability Assessment and Remediation
  • Critical Control 11: Account Monitoring and Control
  • Critical Control 12: Malware Defenses
  • Critical Control 13: Limitation and Control of Network Ports, Protocols, and Services
  • Critical Control 14: Wireless Device Control
  • Critical Control 15: Data Loss Prevention

 

  • Critical Control 16: Secure Network Engineering
  • Critical Control 17: Penetration Tests and Red Team Exercises
  • Critical Control 18: Incident Response Capability
  • Critical Control 19: Data Recovery Capability
  • Critical Control 20: Security Skills Assessment and Appropriate Training to Fill Gaps

As always we value your contributions, so put your thinking caps on and think about how you can implement some or even all of the controls in your organisation.  If you have a specific tip, hint, or suggestion, feel free to pass it along.  It will help if you use the contact form and specify the control; that way we can make sure we include your suggestions where we can. 

There are of course things that you can do yourself in your organisation for Cyber Security Awareness Month.  If you haven't run an awareness campaign for a while, maybe this October is the time. 

One of our readers (Nick) will be running a campaign within his organisation.  He has developed some awesome posters, linked to a competition, to improve awareness within his organisation.  If you have other ideas to help raise awareness in your organisation, let us know, and maybe schedule some of them during October.

Mark - Shearwater

 

 

 

 

8 Comments

Published: 2011-09-21

Emergency patch expected for Flash Player

An out-of-cycle Flash Player update is expected on September 21, 2011. Adobe reports exploitation in the wild in targeted attacks.

See more:

--
Swa Frantzen -- Section 66

0 Comments

Published: 2011-09-20

Diginotar declared bankrupt

In the latest installment of this seemingly never-ending saga, a Dutch court in Haarlem (NL) declared DigiNotar bankrupt.

Read more:

The CA business is all about selling trust. After all, a CA is supposed to be a trusted third party. Let's hope all the remaining ones get the right message: it's not about not getting caught being hacked. On the contrary: it's about doing the right thing once you have been hacked. Let's hope it leads to more transparency and public scrutiny of the CAs we trust explicitly, or implicitly through the choices of some of our vendors.

--
Swa Frantzen -- Section 66

5 Comments

Published: 2011-09-20

SSL/TLS Vulnerability Details to be Released Friday

I'm getting a lot of emails asking about articles that ultimately reference this upcoming talk: "BEAST: Surprising crypto attack against HTTPS" (http://ekoparty.org/2011/juliano-rizzo.php)

I don't have any extra details.  Anything I write now would be unnecessary speculation.  It sounds like it will be interesting; their presentation last year on Padding Oracle Attacks (the crypto oracle, not the database) certainly was.

 

13 Comments

Published: 2011-09-19

MS Security Advisory Update - Fraudulent DigiNotar Certificates

Microsoft re-released Microsoft Security Advisory (2607712) regarding the fraudulent DigiNotar root CA certificates. "Microsoft is aware of active attacks using at least one fraudulent digital certificate issued by DigiNotar, a certification authority present in the Trusted Root Certification Authorities Store."[1]

The update is available for all supported versions of Windows here and via automatic updates.

[1] http://technet.microsoft.com/en-us/security/advisory/2607712
[2] http://support.microsoft.com/kb/2616676
[3] http://blogs.technet.com/b/msrc/archive/2011/09/19/cumulative-non-security-update-protects-from-fraudulent-certificates.aspx

-----------

Guy Bruneau IPSS Inc. gbruneau at isc dot sans dot edu

0 Comments

Published: 2011-09-18

Google Chrome Security Updates

Google Chrome has been updated to 14.0.835.163 for all platforms. This update fixes 15 high-risk vulnerabilities, ten medium, and seven low (the CVEs are included in the release notes).
 

[1] http://googlechromereleases.blogspot.com/2011/09/stable-channel-update_16.html

-----------

Guy Bruneau IPSS Inc. gbruneau at isc dot sans dot edu

Community SANS SEC 503 coming to Ottawa Sep 2011

0 Comments

Published: 2011-09-18

Oracle Emergency Patch for CVE-2011-3192 has been released!

Here's a cut and paste from the description:

This security alert addresses the security issue CVE-2011-3192, a denial of service vulnerability in Apache HTTPD, which is applicable to Oracle HTTP Server products based on Apache 2.0 or 2.2. This vulnerability may be remotely exploitable without authentication, i.e. it may be exploited over a network without the need for a username and password. A remote user can exploit this vulnerability to impact the availability of un-patched systems.
Affected Products and Versions
- Oracle Fusion Middleware 11g Release 1, versions 11.1.1.3.0, 11.1.1.4.0, 11.1.1.5.0
- Oracle Application Server 10g Release 3, version 10.1.3.5.0 (Only affected when Oracle HTTP Server 10g based on Apache 2.0 has been installed from Application Server Companion CD)
- Oracle Application Server 10g Release 2, version 10.1.2.3.0 (Only affected when Oracle HTTP Server 10g based on Apache 2.0 has been installed from Application Server Companion CD) 

Please note that Oracle Enterprise Manager includes the Oracle Fusion Middleware component that is affected by this vulnerability. Oracle Enterprise Manager is affected only if the affected Oracle Fusion Middleware version (noted above) is being used. Since a vulnerability affecting Oracle Fusion Middleware versions may affect Oracle Enterprise Manager, Oracle recommends that customers apply the fix for this vulnerability to the Oracle Fusion Middleware component of Oracle Enterprise Manager. For information on what patches need to be applied to your environments, refer to Security Alert CVE-2011-3192 Patch Availability Document, My Oracle Support Note 1357871.1.

For those of you running the above software, please be sure to patch quickly!  Thanks, Dave, for bringing this to our attention.  Dave reminds us that "...the bug is serious enough for Oracle to issue the patch outside of its usual large quarterly updates, the next of which is scheduled for Oct. 18."

-- Joel Esler | http://blog.joelesler.net | http://twitter.com/joelesler

 

0 Comments

Published: 2011-09-18

More Diginotar news

From the Newsdesk of "Stories that won't die", there's some new information regarding the now infamous DigiNotar Certificates.  Apparently Microsoft's latest update didn't kill all of the certificates, and I quote from http://support.microsoft.com/kb/2616676/us :

 

We are investigating an issue with update 2616676 for all Windows XP-based and Windows Server 2003-based systems.
The versions of update 2616676 for Windows XP and for Windows Server 2003 contain only the latest six digital certificates that are cross-signed by GTE and Entrust. These versions of the update do not contain the digital certificates that were included in update 2607712.

 

-- Joel Esler | http://blog.joelesler.net | http://twitter.com/joelesler

3 Comments

Published: 2011-09-15

SSH Vandals?

I had an interesting detect in one of my kippo honeypots last week. Kippo, if you are not familiar with it, is a script that simulates an SSH server. It is typically configured to allow root logins with weak passwords, and it can be a source of never-ending entertainment as you watch confused script kiddies. The honeypot logs keystrokes and is able to replay them in "real time".

In this particular case, the attacker logged in and issued the following commands:

kippo:~# w
 06:37:29 up 14 days,  3:53,  1 user,  load average: 0.08, 0.02, 0.01
USER     TTY      FROM              LOGIN@   IDLE   JCPU   PCPU WHAT
root     pts/0    151.81.3.83       06:37    0.00s  0.00s  0.00s w

kippo:~# ps x
  PID TTY          TIME CMD
 5673 pts/0    00:00:00 bash
 5677 pts/0    00:00:00 ps x

kippo:~# kill -9 -1
kippo:~#

In short, the attacker went in, did minimal reconnaissance, and then went ahead and killed the system (terminating all processes with a PID larger than 1). A real system would be unresponsive as a result.
 
It is not clear if this is a vigilante/vandal killing badly configured SSH servers, or if this was an attempt to detect a honeypot (but then again, a real system would be dead as a result, and there are less destructive ways to detect simple honeypots like kippo).
 
The speed of the attack suggests that it was performed manually. We do not see a big change in ssh probes overall.
 
Any ideas? Has anybody seen similar "vandals"?

-----------
Johannes B. Ullrich, Ph.D.
SANS Technology Institute
Twitter

15 Comments

Published: 2011-09-15

DigiNotar loses its accreditation for qualified certificates

Next to being a provider of SSL certificates (which most browsers now distrust), DigiNotar also issued so-called "qualified" certificates. These are used to create digital signatures, and they are much more strictly regulated than the run-of-the-mill SSL and EV SSL certificates we all know from web servers and the like.

OPTA, the Dutch independent post and telecommunication authority - think of them as the regulator - has terminated [in Dutch] the accreditation of DigiNotar as a certificate provider on Sept 14th, 2011. This pertains to their qualified certificates.

It's probably best to give a very short introduction to what qualified certificates and accredited providers are, and why this is so important.

The EU has issued a directive (Directive 1999/93/EC), translated into local law by member states such as the Netherlands, that establishes the legal value of digital signatures. There are a number of levels of trust in this from the legislators. Typically - local laws differ a bit, but they all implement the same concept - a digital signature is, by law, equivalent to a manual one. At the lowest level, a digital signature can be as little as writing your name under an email, but everything remains to be proven in court afterwards. It gets more interesting at the higher levels: if the digital signature is proven to be a qualified digital signature, the equivalence to a manual signature is automatic (i.e. no discussion in court), but it still needs to be proven that the digital signature is in fact qualified. The ultimate level, however, is a qualified digital signature made with the means provided by an accredited provider. There, the proof that the digital signature is qualified is automatic as well, as it is done up front (in the audits of the accredited providers).

This is all guided by the ETSI TS 101 456 standard from a more technical point of view; this standard sets the requirements.

Since the means provided by an accredited provider can be used to create digital signatures that are almost only disputable if one proves fraud, it is of critical importance to all of us - especially those living or doing business in the EU - that there are no rogue qualified certificates out there with our name on them, as they carry such high legal weight.

OPTA reports a timeline that has been mostly public knowledge, except for their own actions and the interaction with DigiNotar and its auditors. The report concludes that DigiNotar was not only not acting in accordance with ETSI TS 101 456 on quite a few points, but was also breaking the relevant local laws.

OPTA also names PricewaterhouseCoopers as the (regular) auditors of DigiNotar, but does not go as far as to name them as the ones that gave DigiNotar the apparent clean bill of health on July 27th, 2011: "A number of servers were compromised. The hackers have obtained administrative rights to the outside webservers, the CA server “Relaties-CA” and also to “Public-CA”. Traces of hacker activity started on June 17th and ended on July 22nd" - a statement that was later dramatically proven to be untrue.

OPTA reports there are about 4200 qualified (signing) certificates issued by DigiNotar. The certificate holders will now have to be contacted by DigiNotar under the supervision of OPTA, and will have to seek another provider if they have not done so already.

The revocation of its accreditation also means that DigiNotar no longer meets the requirements for its PKIoverheid activities.

--
Swa Frantzen -- Section 66

5 Comments

Published: 2011-09-14

Two New Cisco Security Advisories

Two vulnerabilities exist in Cisco Unified Service Monitor and Cisco Unified Operations Manager software that could allow an unauthenticated, remote attacker to execute arbitrary code on affected servers.

Cisco has released free software updates that address these vulnerabilities.

There are no workarounds available to mitigate these vulnerabilities.

This advisory is posted at:
http://www.cisco.com/warp/public/707/cisco-sa-20110914-cusm.shtml
 


Note: CiscoWorks LAN Management Solution is also affected by these vulnerabilities. A separate advisory for CiscoWorks LAN Management Solution is available at:
http://www.cisco.com/warp/public/707/cisco-sa-20110914-lms.shtml

Christopher Carboni - Handler On Duty

0 Comments

Published: 2011-09-14

SANS Top 20 Security Controls

For those of you working in a small business with little or no security budget, Russell Eubanks has published a nice paper on implementing the SANS top 20 security controls.

You can check it out here.

Happy reading.

Christopher Carboni - Handler On Duty

0 Comments

Published: 2011-09-13

Adobe September 2011 Black Tuesday overview

Adobe has released 1 bulletin today.

This updates Adobe products to the following versions:

  • Adobe Reader and Acrobat
    • 10.1.1
    • 9.4.6
    • 8.3.1 (version 8.x will soon see its support terminated)
APSB11-24 - Multiple vulnerabilities in Adobe Reader and Adobe Acrobat allow privilege escalation (Windows only) or random code execution.
Affected: Reader & Acrobat (CVE-2011-1352, CVE-2011-2431, CVE-2011-2432, CVE-2011-2433, CVE-2011-2434, CVE-2011-2435, CVE-2011-2436, CVE-2011-2437, CVE-2011-2438, CVE-2011-2439, CVE-2011-2440, CVE-2011-2441, CVE-2011-2442)
Known exploits: TBD
Adobe rating: Critical

 

--
Swa Frantzen -- Section 66

0 Comments

Published: 2011-09-13

Microsoft September 2011 Black Tuesday

Since we usually have a larger visitor base on Black Tuesday than normal, here is a reminder for those that might have missed the fun this month.

Microsoft already leaked the bulletins on Friday for about an hour or so, and we published the overview at that time. Anyway, we'll look into any changes they might have made since last week and update the overview here if we find anything worthwhile.

--
Swa Frantzen -- Section 66

0 Comments

Published: 2011-09-13

GlobalSign back in operation

Fellow handler Lenny wrote about GlobalSign being named by a hacker claiming responsibility for the DigiNotar and Comodo breaches, and about GlobalSign's response of stopping the issuance of certificates.

Today GlobalSign should be back in operation, and they have kept a public record of their incident response. I suggest reading it bottom-up, as that way you get the timeline.

I see a number of very good ideas and actions in there. First off, they stopped issuing certificates right after the claim; that is containment in action: make sure it does not get worse. In fact, if you look back at what is known so far of the DigiNotar case, had DigiNotar done that on June 19th when they first detected the breach, and then done a complete technical audit of their systems, they would not have thrown away their entire reputation. Next, GlobalSign contracted Fox-IT, the same company that has been/is analyzing the systems of DigiNotar.  The value of having somebody who has been dealing with similar incidents can more often than not prove to be invaluable.

I also see one worrying issue: their web server (the one serving www.globalsign.com) shows signs of having been breached. GlobalSign claims it has always been isolated, though.

Over the past few weeks we have had a number of requests from GlobalSign customers who were wondering if they should migrate to other providers.

Let's analyze with what we know now:

  • There is the anonymous claim of a hacker that he's hacked Comodo, Diginotar, GlobalSign and 3 more unnamed CAs.
  • The hacker gives as proof a calc.exe signed by the now well-known rogue *.google.com certificate from the DigiNotar breach. It does prove somebody has the private key that goes with the certificate, but it doesn't prove he's the one who did it all.
  • From what we publicly can see, GlobalSign is reacting properly to it all.
  • If you would change providers you risk changing to one of the 3 unnamed ones.

So what's smart to do ?

  • Be ready to switch to another provider if you need to. Being ready can be done to different levels, but one can start by selecting a candidate provider (or at least setting up selection criteria), etc. This would be needed in two cases:
    • your current provider loses the trust of the world at large (what happened to DigiNotar), or
    • your current provider sees itself compromised badly enough that it revokes its own intermediate certificates (what DigiNotar should have done) and stops issuing new certificates.
  • If I were a GlobalSign customer, I would not migrate away from GlobalSign, as I would risk ending up with one of the 3 unnamed ones and be in a worse position than I am in now.

--

 

Swa Frantzen --

Section 66

0 Comments

Published: 2011-09-12

More RDP Worm Variants?

With the release of the "Morto" worm last month [1], more attention is being paid to malware scanning for RDP. Today, we had a reader report a possible new version of the Win32/Morto RDP brute-forcing worm. The worm was not detected by anti-virus, and does not appear to use c:\Windows\temp\scvhosts.exe like Morto did. The network traffic appears to be similar to Morto in that it makes many connections from the same source port to the RDP port, 3389/tcp. So far, the user has not been able to identify the process opening the connections.

Please let us know if you find similar scans and if you are able to identify the process/malware causing it.
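
If you are trying to track down a process like this, one hedged approach (my own sketch, using the third-party psutil package rather than anything from the report) is to map established connections to 3389/tcp back to their owning PIDs:

# Sketch: list local processes with established connections to 3389/tcp.
# Requires the "psutil" package; run with administrative rights so that
# PIDs are visible for all connections.
import psutil

for conn in psutil.net_connections(kind="tcp"):
    if conn.raddr and conn.raddr[1] == 3389 and conn.status == psutil.CONN_ESTABLISHED:
        try:
            name = psutil.Process(conn.pid).name() if conn.pid else "unknown"
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            name = "unavailable"
        print(conn.laddr[0], "->", conn.raddr[0], "pid:", conn.pid, name)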

[1] http://isc.sans.edu/diary.html?storyid=11470

------
Johannes B. Ullrich, Ph.D.
SANS Technology Institute
Twitter

4 Comments

Published: 2011-09-10

The impact of Diginotar on Certificate Authorities and trust

It has been a little over a week since VASCO announced that the breach discovered on 19 July at DigiNotar resulted in fraudulent certificates being issued.  Apple, Microsoft, Mozilla, Adobe and others have pushed out updates revoking trust in DigiNotar certificates (except in the Netherlands). Looking at the press release, this line caught my eye:

"VASCO does not expect that the DigiNotar security incident will have a significant impact on the company’s future revenue or business plans"
http://www.vasco.com/company/press_room/news_archive/2011/news_diginotar_reports_security_incident.aspx

That got me thinking, and during several virtual water cooler chats we discussed the potential impact of this breach.  However, before we get to that, we need to understand a little bit more about the true nature of Public Key Infrastructure (PKI) and the role the various certificate authorities play in it. You see, in PKI it is all about trust.

We know certificates can be used for all kinds of purposes, and many organisations will set up their own internal Certification Authority (CA) and start issuing certificates that can only be used internally. If used externally, they will pop up as errors: invalid certificate. For certificates to be trusted by everyone, you need one that has been issued by a recognised certification authority such as Thawte, Verisign, GoDaddy, Digicert, DigiNotar, or others. Once they have signed a certificate you trust it, because …? We will get back to that in a minute; first we need a little bit more CA 101.

A typical CA will have a root CA which is used to sign the certificates for one or more intermediate CAs. The root CA is then often taken offline to protect the private key of the root certificate.  The intermediate CAs are then used to sign further CAs or to issue certificates to customers for their web servers, email and so on. When you need a certificate for a web server, a public/private key pair is generated and a certificate request is sent to one of the intermediate CAs. The CA creates and signs a certificate using its own private key and sends the certificate back.  Voila, you now have a certificate for your SSL site, and browsers will no longer complain about the validity of the certificate.  However, that is just the technical side; where does the trust come into it?
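
To make the request-and-sign flow concrete, here is a hedged sketch of generating a key pair and a certificate signing request (CSR) on the customer side. It uses the third-party "cryptography" package purely as my own illustration (nothing above prescribes it), and the common name is a placeholder:

# Sketch: generate an RSA key pair and a CSR to submit to a CA
# (assumes a recent version of the "cryptography" package).
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, u"www.example.org")]))
    .sign(key, hashes.SHA256())
)

# The CA verifies this request and signs a certificate with its own private key.
print(csr.public_bytes(serialization.Encoding.PEM).decode())

Note that the private key never leaves the customer; only the signed request does, which is exactly why the CA's checks on that request are where the trust has to come from.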

The trust we as users have is in the processes and procedures that a CA has in place to ensure that the request for a certificate, and the information in the certificate request, is accurate - in other words, checking that the person asking for the certificate is not lying. That is where a large part of the trust comes from. The rest comes from having confidence that the organisation has the security controls in place to ensure the integrity of the process; in other words, that fraudulent certificates cannot be issued and the private keys of the CAs are safe. The robustness of the process determines the level of trust and, for us, the price. A certificate costing $20 probably only needs a valid email address. One costing $1700 will typically have many more checks performed before the certificate is issued. Unfortunately for us security professionals, most users won't know the difference between the two.

The onus, however, is not only on the CAs. CAs have to publish certificate revocation lists (CRLs), usually on a URL starting with crl (e.g. crl.verisign.com), or provide the Online Certificate Status Protocol (OCSP). These list the certificates that are no longer valid and should not be trusted. Applications using certificates (e.g. when using SSL) are expected to check the revocation list or send an OCSP query to verify that a web server's certificate is still valid. It is, however, up to the application, so we are trusting the various vendors to actually check. Browsers will typically send an OCSP request, but if they can't reach the responder, the CRL is used. Other applications may do something different.
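
From the client side, you can see which revocation mechanisms a server certificate advertises with nothing but the standard library (a sketch; the hostname is a placeholder, and some certificates carry neither field):

# Sketch: print the issuer and the revocation pointers (CRL distribution
# points and OCSP responders) of a server's certificate.
import socket
import ssl

host = "www.example.org"  # placeholder
ctx = ssl.create_default_context()

with socket.create_connection((host, 443), timeout=10) as sock:
    with ctx.wrap_socket(sock, server_hostname=host) as tls:
        cert = tls.getpeercert()

print("issuer:", cert.get("issuer"))
print("CRL distribution points:", cert.get("crlDistributionPoints", ()))
print("OCSP responders:", cert.get("OCSP", ()))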

Trust in certificates rests on trusting the entity issuing the certificate, and on trusting that they have the protection and processes in place to maintain integrity and therefore maintain our trust.  That is a lot of trust - something, in my opinion, DigiNotar and to some extent other CAs have lost. Governments typically do not get involved in the CA business (unless the certificates are issued on the government's behalf). There is, as far as I am aware, no requirement for CAs to demonstrate the robustness of their processes or the security of their systems. Maybe there should be?  Mozilla has called for a review (http://news.cnet.com/8301-27080_3-20103615-245/mozilla-gets-tough-after-digital-certificates-hack/), or at least some assurance from the CAs who have certificates in the Mozilla certificate stores.

As for VASCO's statement: from what I can see, the trust is gone, and trust is what DigiNotar is selling. But I guess we will have to see.

Mark

8 Comments

Published: 2011-09-09

Apple Certificate Trust Policy Update

Apple released a patch to update their certificate trust policy, affecting Mac OS X Server 10.6, Mac OS X 10.6, Lion Server and OS X Lion. Using fraudulent certificates issued by DigiNotar, an attacker with enough network privileges could intercept user credentials or other sensitive information. Apple recommends applying Security Update 2011-005; additional information is available here and the update can be downloaded here.

[1] http://support.apple.com/kb/HT4920

[2] http://www.apple.com/support/downloads/

[3] http://support.apple.com/kb/HT4415

-----------

Guy Bruneau IPSS Inc. gbruneau at isc dot sans dot edu

Community SANS SEC 503 coming to Ottawa Sep 2011

2 Comments

Published: 2011-09-09

Early Patch Tuesday Today: Microsoft September 2011 Patches

Looks like Microsoft made the bulletins live that were supposed to be released this coming Tuesday. The bulletins are dated September 13th, 2011. While the links below work as I type this diary, they may not work later today. Some of the related links may not have any information yet (like the CVE entries). All bulletins appear to be live right now, and we will add them to the list below as we get to them.

This information may of course change when the final bulletins are released on Tuesday. Some readers report that the bulletins are no longer available.

MS11-070 - Vulnerability in WINS could allow elevation of privilege. Replaces MS11-035.
Affected: WINS (CVE-2011-1984)
KB: 2571621
Known exploits: none
Microsoft rating: Important (Exploitability: ?)
ISC rating: clients Important / servers Important

MS11-071 - Vulnerability in Windows could allow remote code execution (DLL linking vulnerability).
Affected: Windows (CVE-2011-1991)
KB: 2570947
Known exploits: yes
Microsoft rating: Important (Exploitability: ?)
ISC rating: clients Critical / servers Important

MS11-072 - Arbitrary code execution vulnerability in Excel. Replaces MS11-045.
Affected: Excel (CVE-2011-1986, CVE-2011-1987, CVE-2011-1988, CVE-2011-1989, CVE-2011-1990)
KB: 2587505
Known exploits: none
Microsoft rating: Important (Exploitability: ?)
ISC rating: clients Critical / servers Important

MS11-073 - Code execution vulnerability in Microsoft Office. Replaces MS11-023, MS10-087.
Affected: Office (CVE-2011-1980, CVE-2011-1982)
KB: 2587634
Known exploits: none
Microsoft rating: Important (Exploitability: ?)
ISC rating: clients Critical / servers Important

MS11-074 - Microsoft SharePoint elevation of privilege vulnerability. Replaces MS11-016.
Affected: SharePoint (CVE-2011-0653, CVE-2011-1252, CVE-2011-1890, CVE-2011-1891, CVE-2011-1892, CVE-2011-1893)
KB: 2481858
Known exploits: CVE-2011-1252 publicly disclosed; some of the others are not disclosed but are likely simple-to-exploit XSS flaws.
Microsoft rating: Important (Exploitability: ?)
ISC rating: clients N/A / servers Important
We will update issues on this page for about a week or so as they evolve.
We appreciate updates
US based customers can call Microsoft for free patch related support on 1-866-PCSAFETY
(*): ISC rating
  • We use 4 levels:
    • PATCH NOW: Typically used where we see immediate danger of exploitation. Typical environments will want to deploy these patches ASAP. Workarounds are typically not accepted by users or are not possible. This rating is often used when typical deployments make it vulnerable and exploits are being used or easy to obtain or make.
    • Critical: Anything that needs little to become "interesting" for the dark side. Best approach is to test and deploy ASAP. Workarounds can give more time to test.
    • Important: Things where more testing and other measures can help.
    • Less Urgent: Typically we expect the impact if left unpatched to be not that big a deal in the short term. Do not forget them however.
  • The difference between the client and server rating is based on how you use the affected machine. We take into account the typical client and server deployment in the usage of the machine and the common measures people typically have in place already. Measures we presume are simple best practices for servers such as not using outlook, MSIE, word etc. to do traditional office or leisure work.
  • The rating is not a risk analysis as such. It is a rating of importance of the vulnerability and the perceived or even predicted threat for affected systems. The rating does not account for the number of affected systems there are. It is for an affected system in a typical worst-case role.
  • Only the organization itself is in a position to do a full risk analysis involving the presence (or lack of) affected systems, the actually implemented measures, the impact on their operation and the value of the assets involved.
  • All patches released by a vendor are important enough to have a close look if you use the affected systems. There is little incentive for vendors to publicize patches that do not have some form of risk to them.

(**): The exploitability rating we show is the worst of them all due to the too large number of ratings Microsoft assigns to some of the patches.

--
Johannes B. Ullrich

6 Comments

Published: 2011-09-09

Large power outage in Southern California

Update: it appears that the outage of several Microsoft services was not related to the power outage. http://windowsteamblog.com/windows_live/b/windowslive/

Per the San Diego Gas and Electric web site, power has been restored to the majority of customers.

----

The San Diego area (including parts of Mexico and Arizona) is experiencing a large power outage due to a problem with a high-voltage transmission line [1]. It may take until Friday for power to be restored. As a result, expect various internet outages to follow as UPSs run down.

Currently, it appears that MSN.com is at least partially down, and Hotmail.com (aka mail.live.com) is coming up with a home page but after logging in the connection fails. It is not clear if these outages are related to the power outage, but both sites do host some assets in the San Diego area.

Right now, there are no reports of widespread routing failures.

If you do live in the area, and are still able to read this: Turn off air conditioners and it is best to stay off the roads as traffic signals are failing. Go to bed, get some sleep, and hope that things will be better in the morning as you get up ;-).

[1] http://www.sdge.com/

------
Johannes B. Ullrich, Ph.D.
SANS Technology Institute
Twitter

2 Comments

Published: 2011-09-09

IPv6 and DNS Sinkhole

In January 2010, I posted a diary on how to configure zone files to set up a DNS sinkhole using IPv4 addresses. This updated diary shows how to add IPv6 support to your zone files to sinkhole both IPv4 and IPv6.

Single Hostname (/var/named/sinkhole/client.nowhere)

 client.nowhere

Wildcard Domain (/var/named/sinkhole/domain.nowhere)

 domain.nowhere

Note: If you are not currently using IPv6 in your network, change the example fec0:0:0:bebb::5 to ::1 (localhost) to prevent 6to4, Teredo, etc. traffic from leaving the network.

To verify your zone files are correctly configured, you can use nslookup to query a hostname or a domain loaded in your sinkhole.

With Windows 7 (note that it shows both IPv4 and IPv6):

C:\>nslookup zz87lhfda88.com
Server: seeker.someserver.com
Address: 192.168.25.5

Name: zz87lhfda88.com
Addresses:fec0:0:0:bebb::5
192.168.25.6

With Linux, you need to explicitly query the AAAA record:

guy@seeker:~$ nslookup -q=aaaa zz87lhfda88.com
Server: 192.168.25.5
Address: 192.168.25.5#53

zz87lhfda88.com has AAAA address fec0:0:0:bebb::5
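
If you prefer to script the verification, a short check like the one below (my own sketch; it assumes the machine running it uses the sinkhole as its resolver, and it re-uses the placeholder hostname from the examples above) confirms that both A and AAAA answers come back:

# Sketch: confirm a sinkholed name returns both IPv4 and IPv6 answers.
import socket

name = "zz87lhfda88.com"
for family, rrtype in ((socket.AF_INET, "A"), (socket.AF_INET6, "AAAA")):
    try:
        answers = sorted({info[4][0] for info in socket.getaddrinfo(name, None, family)})
        print(rrtype, answers)
    except socket.gaierror as err:
        print(rrtype, "lookup failed:", err)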

[1] http://isc.sans.edu/diary.html?storyid=7930
[2] http://www.whitehats.ca/main/members/Seeker/seeker_sinkhole/Seeker_DNS_Sinkhole.html
[3] http://www.whitehats.ca/downloads/sinkhole/sinkhole.iso
[4] http://www.whitehats.ca/downloads/sinkhole/sinkhole64-bit.iso

-----------

Guy Bruneau IPSS Inc. gbruneau at isc dot sans dot edu

 Community SANS SEC 503 coming to Ottawa Sep 2011

0 Comments

Published: 2011-09-08

When Good CA's go Bad: Other Things to Check in Your Datacenter

The recent problems at DigiNotar (and now GlobalSign) have gotten a lot of folks thinking about what happens when significant events impact our trust in public Certificate Authorities, and how it affects users of secured services.  But aside from the browsers at the desktop, what is affected, and what should we look at in our infrastructure?

What has been brought up by several of our readers, as well as a lively discussion on several of the SANS email lists, is SSL proxy servers, and any other IDS / IPS device that does "SSL proxy" encryption.  If you are not familiar with this concept, in a general way these products work as shown in the diagram:

 

As you can see from the diagram, the person at the client workstation "sees" an HTTPS browser session with a target web server.  However, the client's HTTPS session is actually with the proxy box (which uses the client's trust in the private CA to cut a dynamic certificate), and the HTTPS session with the target web server is actually between the proxy and the web server, not involving the client at all.
In many cases, the private CA for this resides directly on the proxy hardware, allowing certificates to be issued very quickly as clients browse.

In any case, the issue that we're seeing is that these units are often not patched as rapidly as servers and desktops, so many of these boxes remain blissfully unaware of all the issues with DigiNotar.  If you have an SSL proxy server (or an IDS / IPS unit that handles SSL in this way), it's a good idea to check the trusted CA list on your server, and also check for any recent patches or updates from the vendor.

It's probably a good time to do some certificate "housekeeping": look at all devices that use public, private or self-signed certificates.  Off the top of my head, I'd look at any web or mail servers you might have with certificates, load balancers in your web farm that might be front-ending HTTPS web servers, any FTP or SSH servers that use public certificates, and any SSL VPN appliances.  What should you look for?  Make sure that you're using valid private or public certificates - not self-signed certificates - for anything (self-signed certificates are especially common on admin interfaces for datacenter infrastructure).  It is also worth considering whether it makes sense in your organization to get all your certificates from one vendor, in one contract, on a common renewal date, to simplify renewals and ensure that nothing gets missed and ends up as an expired certificate facing your clients.  It may also make sense to consider an EV (extended validation) certificate on some servers, or to downgrade an existing EV certificate to a standard one (look for more on CA nuts and bolts in an upcoming diary).  Check renewal dates to ensure that you have them all noted properly.  If you've standardized on 3-year certificates, has one of your admins slipped a 1-year certificate in by accident? (We see this all the time - often a 1-year certificate is under the corporate PO limit, and a 2- or 3-year certificate is over it.)
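
For the renewal-date housekeeping, a scripted sweep is easy to put in a scheduled job; the sketch below (my own, standard library only, hostnames are placeholders) prints the days remaining on each certificate:

# Sketch: report days until certificate expiry for a list of servers.
import socket
import ssl
import time

hosts = ["www.example.org", "mail.example.org"]  # placeholders
ctx = ssl.create_default_context()

for host in hosts:
    with socket.create_connection((host, 443), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            not_after = tls.getpeercert()["notAfter"]
    days_left = int((ssl.cert_time_to_seconds(not_after) - time.time()) / 86400)
    print("%s expires in %d days (%s)" % (host, days_left, not_after))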

What else should you check?  What other devices in the datacenter can you think of that needs to trust a public CA?  Mail servers come to mind, but I'm sure that there are others in the mix - please use our comment form to let us know what we've missed.

===============
Rob VandenBrink
Metafore

6 Comments

Published: 2011-09-08

How Makers of Web Browsers Include CAs in Their Products

Since Certificate Authorities (CAs) are on many people's minds nowadays, we asked @sans_isc followers on Twitter:

How do browser makers (Microsoft, Mozilla, Google, Opera) decide which CAs to put into the product?

Several individuals kindly provided us with pointers to the vendors' documentation that describe their processes for including CAs in web browser distributions:

If you have a pointer to Google Chrome certificate-inclusion practices, please let us know.

-- Lenny

Lenny Zeltser focuses on safeguarding customers' IT operations at Radiant Systems. He also teaches how to analyze and combat malware at SANS Institute. Lenny is active on Twitter and writes a daily security blog.

3 Comments

Published: 2011-09-08

Should We Still Test Patches?

I know, I know, this title sounds like heresy.  The IT and Infosec villagers are charging up the hill right now, forks out and torches ablaze!  I think I can hear them - something about "test first, then apply in a timely manner"??  (Methinks they weren't born under a poet's star).  While I get their point, it's time to throw in the towel on this I think.

On every security assessment I do for a client who's doing their best to do things the "right way", I find at least a few, and sometimes a barnful, of servers that have unpatched vulnerabilities (and are often compromised).

Really, look at the volume of patches we've got to deal with:

From Microsoft - once a month, but anywhere from 10-40 in one shot, every month!  Since the turnaround from patch release to exploit on most MS patches is measured in hours (and is often in negative days), what exactly is "timely"?

Browsers - oh, talk to me of browsers, do!  Chrome is releasing patches so quickly now that I can't make heads or tails of the version (it was 13.0.782.220 today; yesterday it was .218 - the update just snuck in there when I wasn't looking).  Firefox is debating removing the version number from Help/About entirely - they're talking about just reporting "days since your last confession ... er ... update" instead (the version will still be in the about:support URL - a nifty page to take a close look at once in a while).  IE reports a sentence-like version number similar to Chrome's.

And this doesn't count email clients and servers, VOIP and IM apps, databases, and all the other stuff that keeps the wheels turning these days.

In short, dozens (or more) critical patches per week are in the hopper for the average IT department.  I don't know about you, but I don't have a team of testers ready to leap into action, and if I had to truly, fully test 12 patches in one week, I would most likely not have time to do any actual work, or probably get any sleep either.

Where it's not already in place, it's really time to turn auto-update on for almost everything, grab patches the minute they are out of the gate, and keep the impulse engines - er - patch "velocity" at maximum.  The big decision then is when to schedule reboots for the disruptive updates.  This assumes that we're talking about "reliable" products and companies - Microsoft, Apple, Oracle, the larger Linux distros, Apache and MySQL, for example - people who *do* have a staff dedicated to testing and QA on patches (I realize that "reliable" is a matter of opinion here).  I'm NOT recommending this for any independent / small-team open source stuff, or products that'll give you a daily feed off subversion or whatever.  Or if you've got a dedicated VM that has your web app pentest kit, wireless drivers and 6 versions of Python for the 30 tools all running just so - any updates there could really make a mess.  But these are the exceptions rather than the rule in most datacenters.

Going to auto-pilot is almost the only option in most companies, management simply isn't paying anyone to test patches, they're paying folks to keep the projects rolling and the tapes running on time (or whatever other daily tasks "count" in your organization).  The more you can automate the better.

Mind you, testing large "roll up" patch sets and Service Packs is still recommended.  These updates are more likely to change operation of underlying OS components (remember the chaos when packet signing became the default in Windows?).

There are a few risks in the "turn auto-update on and stand back" approach:

  • A bad patch will absolutely sneak in once in a while, and something will break.  For this, in most cases, it's better to suck it up for that one day, and deal with one bad patch per year as opposed to being owned for 364 days.  (just my opinion mind you)
     
  • If your update source is compromised, you are really and truly toast - look at the (very recent) kernel.org compromise ( http://isc.sans.edu/diary.html?storyid=11497 ) for instance.  Now, I look at a situation like that, and I figure: "if they can compromise a trusted source like that, am I going to spot their hacked code by testing it?"  Probably not, they're likely better coders than I am.  It's not a risk I should ignore, but there isn't much I can do about it, so I try really hard to ignore it anyway.


What do you think?  How are you dealing with the volume of patches we're faced with, and how's that workin' for ya?  Please, use our comment form and let us know what you're seeing!
 

===============
Rob VandenBrink
Metafore

7 Comments

Published: 2011-09-07

GlobalSign Temporarily Stops Issuing Certificates to Investigate a Potential Breach

GlobalSign, a certificate authority (CA) based out of Belgium, temporarily stopped issuing certificates. This action was taken in response to a message on Pastebin, in which the anonymous poster claimed responsibility for the recent DigiNotar breach and singled out GlobalSign as another CA that he or she had compromised.

According to GlobalSign's press release, the company is investigating the report and "decided to temporarily cease issuance of all Certificates" until it assesses the claim that its security was breached.

An ISC reader shared with us a response that GlobalSign provided to his company regarding this matter. In that message, the company explained that it paused the issuance of certificates to allow the systems to undergo a forensic audit while they are off-line. The company reportedly downplayed the possibility that existing active certificates are at risk, referring to its security practices that involve keeping the root CA off-line. Yet, with the intermediate CAs being on-line, the risk is there in a way that is similar to the DigiNotar scenario: an attacker may be able to use intermediate CAs to issue false certificates. This could also allow an attacker to spoof certs that have already been issued.
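If you want to see for yourself which CA chain a given site presents (and therefore whether a particular intermediate is in the path), a quick openssl check will show it; www.example.com is just a placeholder here:

# print the subject and issuer of each certificate the server sends
$ openssl s_client -connect www.example.com:443 -showcerts </dev/null 2>/dev/null |
    grep -E " s:| i:"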

Note, however, that we have yet to see evidence of GlobalSign being compromised. The Pastebin notice might prove to be inauthentic or otherwise false. It's not uncommon for malicious hackers to put forth claims of conquest that later turn out to be unsubstantiated... just for LOLs.

-- Lenny

Lenny Zeltser focuses on safeguarding customers' IT operations at Radiant Systems. He also teaches how to analyze and combat malware at SANS Institute. Lenny is active on Twitter and writes a daily security blog.

0 Comments

Published: 2011-09-07

Analyzing Mobile Device Malware - Honeynet Forensic Challenge 9 and Some Tools

The Honeynet Project has presented an excellent opportunity to improve your own and the community's approaches to analyzing mobile device malware. The group's Forensic Challenge 9 gives you the opportunity to respond to a security incident that involved a smart phone. Honeynet's Christian Seifert provided us with the following description of the scenario:

"This challenge offers the exploration of a real smartphone, based on a popular OS, after a security incident. You will have to analyze the image of a portion of the file system, extract all that may look suspicious, analyze the threat and finally submit your forensic analysis. From File System recovery to Malware reverse-engineering and PCAP analysis, this challenge will take you to the world of Mobile Malwares."

Christian also pointed out that the Honeynet Project--as a result of its participation in Google Summer of Code--released two tools for analyzing mobile device malware. According to him:

DroidBox, authored by Patrick Lantz, is a sandbox for the Android platform. "It focuses on detecting information leaks that were derived from performing taint analysis for information-flow tracking on Android trojan applications. DroidBox is capable to identify information leaks of contacts, SMS data, IMEI, GPS coordinates, installed apps, phone numbers, network traffic and file operations."

APKInspector, authored by Cong Zheng, "is a full blown static analysis tool for the Android platform. It has resemblance of tools like IDAPro. Some functionality highlights are:

  • Graph-based UI displaying control flow of the code.
  • Links from graph view to source view.
  • Function/Object - > Method list and filter.
  • Strings list and Filter.
  • Flow in/out from a given point.
  • Function and variable renaming.

For additional resources that may help you analyze Android malware, see 8 Articles for Learning Android Mobile Malware Analysis. If you know of additional tools and references, please leave a comment.

-- Lenny

Lenny Zeltser focuses on safeguarding customers' IT operations at Radiant Systems. He also teaches how to analyze and combat malware at SANS Institute. Lenny is active on Twitter and writes a daily security blog.

 

0 Comments

Published: 2011-09-06

Microsoft Releases Diginotar Related Patch and Advisory

Microsoft released an advisory [1] earlier today announcing that they will place a number of DigiNotar root certificates on the "not trusted" list. 

A blog article further explains how certificate stores can be manipulated manually [2].

One important difference between this most recent advisory and an earlier advisory [3] is that Windows Mobile 6.x/7/7.5 is no longer listed as affected. The earlier advisory stated that Windows Mobile 6.x and 7 are affected; it didn't mention Windows Mobile 7.5. (Thanks to a reader for pointing this out.)

 

[1]http://www.microsoft.com/technet/security/advisory/2607712.mspx
[2]http://blogs.technet.com/b/srd/archive/2011/09/04/protecting-yourself-from-attacks-that-leverage-fraudulent-diginotar-digital-certificates.aspx
[3] http://technet.microsoft.com/en-us/security/advisory/2524375

------
Johannes B. Ullrich, Ph.D.
SANS Technology Institute
Twitter

3 Comments

Published: 2011-09-06

DigiNotar audit - intermediate report available

Today the Dutch government released a letter signed by the minister of internal affairs and the minister of security and justice, addressed to their house of representatives. The letter has as an attachment an interim report by the CEO of security company Fox-IT, who has been heading an audit at DigiNotar.

The report itself is well worth a read [in English].

For those on limited time, some of the most interesting news and observations:

  • The defaced pages dating back to 2009 found by F-Secure appear to have been copied during a re-installation of the web server in August.
  • The behaviour of the OCSP server at DigiNotar has been reversed since Sept 1st. Normally these servers respond "good" for all certificates except those on the CRL (a blocklist). The OCSP responder now operates in whitelist mode: it reports any certificate signed by DigiNotar that it does not recognize as revoked.
    Hence we need to make sure to use the OCSP server to validate DigiNotar certificates -should we want/need to- and not rely on the published CRLs anymore (a sketch of such an OCSP check with openssl follows this list).
  • DigiNotar operates multiple CA servers; all of them seem to have been compromised by the hackers, who had Administrator-level access, including the servers used for Qualified certificates and PKIOverheid certificates.
  • Some of the CA servers have had parts of their logs deleted, leaving DigiNotar not knowing what certificates were issued.
  • Hacker tools including Cain & Abel, as well as specialized dedicated scripts -written in a language specific to the PKI environment- were found. Intentional fingerprints left in one of the scripts link it back to the Comodo breach.
  • There is a list of 6 CAs that have been found to have issued rogue certificates.
  • There is an incomplete list of 24 additional CAs that have had their security compromised but have not been shown to have issued rogue certificates.
  • The rogue certificate for *.google.com detected in the wild was verified against the DigiNotar OCSP service from August 4th till it was revoked on August 29th. 300 000 different IP addresses verified that certificate.  More than 99% of those addresses trace back to Iran.
    The report notes that those who had their connections to gmail intercepted could have exposed their authentication cookies, which would expose their email itself and, through that, also allow access to the password-reset functionality of other services such as Facebook.  It is recommended that those in Iran log out and change passwords.
  • 2 certificates were found in the PKIOverheid and Qualified environment that cannot be related to a valid certificate. Yet the logs appear to be intact and do not show rogue certificates being created.
  • There is a list of basic security best practices that clearly did not work, were implemented badly, or were omitted. Yet the servers are housed in a tempest-protected room.
  • The hackers possibly breached the systems as early as June 6th; DigiNotar detected this on June 19th. The rogue certificates were created in July, and the *.google.com certificate that was detected in the wild was first presented to the OCSP server on July 27th. Yet it took until DigiNotar was notified by govCERT.nl before they revoked the certificate.
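As referenced in the OCSP bullet above, here is a minimal sketch of checking a certificate against its OCSP responder with openssl. The file names server.pem and issuer.pem are placeholders for the site's certificate and the issuing (intermediate) CA certificate, both of which you need to have saved locally; the responder URL is taken from the certificate itself:

# Ask the OCSP responder named in the certificate about its status;
# look for "good" or "revoked" in the output. Add -CAfile with the proper
# trust anchors if you also want the responder's signature verified.
$ openssl ocsp -issuer issuer.pem -cert server.pem \
    -url "$(openssl x509 -in server.pem -noout -ocsp_uri)" -resp_text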

The letter [in Dutch] summarizes the report itself, and contains some additional information not in the report that is of interest:

  • There is now an inquiry into DigiNotar for possible responsibility and negligence
  • The search for the hackers continues
  • DigiNotar filed an official report of the incident on September 5th
  • They suggest leniency and agreements for those cases where the revocation of trust in DigiNotar leads to problems such as with the timely filing of tax information in the Netherlands

--
Swa Frantzen -- Section 66

5 Comments

Published: 2011-09-05

Bitcoin – crypto currency of future or heaven for criminals?

There has been quite some coverage of Bitcoin in the last couple of months. For those that did not pay attention, Bitcoin is a crypto currency that is decentralized and works in a peer-to-peer network. It is a pretty fascinating project by a Japanese researcher (maybe – his real identity has not been confirmed) and in case you are interested you can find more information at http://www.bitcoin.org/.

Some background

A couple of weeks ago I started doing some research on how Bitcoin works. I found it amazing that for a scheme so widespread (there are probably tens of thousands, if not hundreds of thousands, of active users), not a lot of technical documentation is available, apart from Satoshi's paper on the main web site, which does not really go into implementation details.

One of the features of Bitcoin that gets mentioned quite often is its anonymity. Basically, Bitcoin gives you a digital wallet which allows you to process incoming transactions and create new ones. A user has one or more (preferably many) public/private key pairs which identify him. In the Bitcoin system, when you want to send Bitcoins to someone, you sign a transaction that takes some of your Bitcoins (which you received through a transaction or mining – more about this later) to the destination address. All addresses are unique 40-digit hexadecimal numbers (RIPEMD160(SHA256(public key))) with some extra conversion to Base-58.
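As a rough illustration of just the hashing step (pubkey.der is a placeholder for a DER-encoded public key, the Base-58 conversion and the version/checksum bytes of a real address are left out, and your OpenSSL build needs RIPEMD-160 support):

# SHA-256 the public key, then RIPEMD-160 the result: the 40 hex digits
# printed are what a real Bitcoin address encodes (plus extras) in Base-58.
$ openssl dgst -sha256 -binary pubkey.der | openssl dgst -ripemd160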

You can have as many as you want of these and this is one of the ways for Bitcoin to allow anonymity. Since you can use a different public/private key pair for every transaction (and you can transfer Bitcoins to your other addresses) it can be difficult (but not impossible) to track the owner. One thing to keep in mind is that all Bitcoin transactions are public – every node knows everything about every transaction.
There is some interesting research about tracking Bitcoin owners and Dan Kaminsky posted some good ideas at this year’s Black Hat.

How do you get new coins?

In order to confirm a transaction, it has to be included in a block. A block (https://en.bitcoin.it/wiki/Blocks) contains a hash pointing to the previous block (so the blocks are chained; this is what makes spoofing exponentially more difficult as more blocks are generated), some other data, and a Merkle root hash of all transactions validated by this block.

Now comes the best part – all this data is hashed together (SHA256(SHA256(block))) and the resulting hash has to satisfy some requirements. The requirements state that the resulting hash has to start with a certain number of zeros. So, for example, if the resulting hash has 7 leading zeros it is valid. How do we find a valid block? Besides the payload, a nonce is embedded which gets constantly changed.

Simply speaking, the node that is generating the block brute forces all possible nonce values until it finds a valid hash that satisfies the previously mentioned requirement. As you can see, this is an extremely complex task that, even with the fastest gear (and I'm talking about loads of GPU cards), can take days if not months.
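A toy version of that brute-force loop is below. This is not the real block header format or difficulty encoding; it just shows the "bump a nonce until the double SHA-256 starts with enough zeros" idea, with a deliberately easy target of three leading hex zeros:

# Toy proof-of-work: vary the nonce until the double SHA-256 of payload+nonce
# begins with three hex zeros. Real Bitcoin hashes an 80-byte binary header
# against an astronomically harder target.
payload="prev-block-hash|merkle-root|timestamp"
nonce=0
while true; do
  hash=$(printf '%s%d' "$payload" "$nonce" \
         | openssl dgst -sha256 -binary | openssl dgst -sha256 \
         | awk '{print $NF}')
  case "$hash" in
    000*) echo "found: nonce=$nonce hash=$hash"; break ;;
  esac
  nonce=$((nonce+1))
done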

So a logical question is: why would anyone do that? The node that finds a valid block (mines it, in Bitcoin's terminology) gets awarded (currently) 50 Bitcoin. With 1 Bitcoin being around 7.3 USD at the moment, this means that for each solved block the node that found it gets ~365 USD. Sounds good?

Besides this, the solver also gets a certain fee for transactions that have been validated so in reality more than 50 Bitcoin will be awarded to the solver (this is the incentive to keep solving the tasks even after all Bitcoins have been awarded).

Finally, another important thing about blocks is that it should take approximately 10 minutes to solve a block. The network itself measures how long it took to solve 2016 blocks (it should be about two weeks) and modifies the difficulty accordingly (so if more people start solving blocks, the difficulty gets higher).
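The arithmetic behind that two-week figure, and the adjustment rule in rough form (the real client clamps how far the difficulty can move in a single step; this is just the idea):

# 2016 blocks at one block per 10 minutes, expressed in days:
$ echo $((2016 * 10 / 60 / 24))
14
# rough adjustment rule applied every 2016 blocks:
#   new_difficulty = old_difficulty * (expected_time / actual_time)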

My CPU > your CPU

There are legitimate groups of users that join so-called mining pools in order to find new valid hashes. The pool owner runs a special algorithm that sends partial tasks to all nodes participating in mining. Different pools have different rules, but today it is common that they share the received Bitcoin among participating nodes, depending on how much each node has contributed.

There are many open source, free Bitcoin mining programs that are specially optimized for GPUs.
And imagine this – who has the most CPU power in the world (except government agencies)? Bot owners, of course.
In other words, it was to be expected that bot owners would start playing this game – after they've stolen all valuable data off a machine, why wouldn't they use its resources (CPU, GPU and power) to mine Bitcoins and make some extra cash (which even looks anonymous!)?

A couple of months ago we first started seeing malware stealing Bitcoin wallets (basically making transactions to their owners), and lately Bitcoin mining pools used by malware have become increasingly popular.
The modus operandi is typical here – malware drops legitimate Bitcoin mining executables which join a pool operated by the botnet owner. In most cases I've seen so far they use standard protocols, so be sure to check TCP port 8333. Bitcoin also uses IRC for initially finding other nodes, so it might easily make your IDS/IPS shine like a Christmas tree (even if a legitimate user started it).
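A quick way to spot a host already talking over that default port (a minimal sketch: netstat column formats differ per platform and pools are free to use any port, so treat hits only as a starting point):

# list sockets involving TCP port 8333 (the default Bitcoin P2P port)
$ netstat -an | grep -w 8333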

Perfect extortion weapon

Just about when I was about to finish this diary (which will probably be only the first in a series about Bitcoin), we received a very interesting e-mail from one of our readers, who wanted to remain anonymous.

He received an e-mail from an attacker asking him to pay 100 Bitcoin to a certain address or his site would become the target of a DDoS attack. We've seen such extortion e-mails many times in the past (as always – do not pay), but using Bitcoin is a new twist.

As I previously wrote, while Bitcoin is not 100% anonymous, it can be very close to it and, depending on how careful the attacker is, it can be very difficult to trace the transaction.

As Bitcoin is gaining more attention it will be interesting to see what the future will bring. Rest assured that we will keep an eye on it.

--
Bojan
INFIGO IS

 

7 Comments

Published: 2011-09-05

Java 7 Officially Released

Oracle officially released Java 7, including some security updates and several new features and enhancements. Thanks to ISC reader Alex for notifying us about it.

The new Java 7 version coexists with the latest Java 6 Update 27 version and is available for download from the Oracle web site, http://www.oracle.com/technetwork/java/index.html, and still makes use of different installers for the 32 and 64-bit versions for all operating systems (Linux, Solaris & Windows).

As you can see in the release notes, the main security enhancements affect the JSSE (Java Secure Socket Extension) and TLS communications, including TLS v1.1 and v1.2 as well as Server Name Indication (SNI) support.

Java 7 does not remove any previous Java versions; I guess this is the intended behavior, as this is a major release. From a security perspective, if Java 7 is installed (using Windows as the sample platform) on a system that already has Java 6 installed, both versions will remain. So, if you only want to run the latest version, ensure you uninstall any previous versions (as we had to do in the past, but then within the same major release) and do not leave vulnerable Java 6 releases around.

Considering Java is one of the most targeted pieces of client software today, be ready for future updates on both Java 6 and Java 7 in your IT environments (perhaps Java 6u28 and Java 7u1), and plan in advance how to manage them.

----
Raul Siles
Founder and Senior Security Analyst with Taddong
www.taddong.com

3 Comments

Published: 2011-09-04

Several Sites Defaced

There have been several widespread defacements reported to us today.  It appears the affected domains' DNS name server entries all point to the same name servers, as seen below:

ups.com.  85621 IN NS ns1.yumurtakabugu.com.
ups.com.  85621 IN NS ns2.yumurtakabugu.com.
ups.com.  85621 IN NS ns4.yumurtakabugu.com.
ups.com.  85621 IN NS ns3.yumurtakabugu.com.
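You can check a domain's current delegation yourself with a quick query (dig ships with the BIND utilities; ups.com is just the same example as above):

# show the name servers the domain currently delegates to
$ dig +short NS ups.com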
 

Here are a few examples of the sites so far:

ups.com
theregister.co.uk
acer.com
telegraph.co.uk
betfair.com

The one commonality is that they all appear to be registered via ascio.com.

More details as we learn more.

 

 

8 Comments

Published: 2011-09-01

DigiNotar breach - the story so far

I've been following the DigiNotar story as it evolved for a few days now with growing concern and increasing alarm.

I'm far from privy to the inside information needed to really assess and audit the situation, so this is purely based on what is publicly known. Being a native Dutch speaker, I have access to what the press in the Netherlands writes about it, with the subtle nuances that an automated translation will not capture. I do lack the resources to independently verify everything, and as such some errors might remain; consider this a best effort at creating an overview and leading up to conclusions with the limited information that is available.

If we do attract the attention of DigiNotar and/or Vasco: please do contact us, we'd love to talk to you and get more information!

So who is DigiNotar and what do they do when all is normal?

DigiNotar is a CA. They sell SSL certificates, including the EV kind.

But there is more that's mostly of interest to those in the EU or the Netherlands only:

They are also (I'm simplifying a bit, I know) an accredited provider in the EU and provide qualified certificates and approved SSCDs to customers to create digital signatures that -by law- in the EU are automatically considered to be qualified digital signatures and as such are automatically equivalent to handwritten signatures. This status forces regular 3rd-party audits against the relevant Dutch law and standards such as ETSI TS 101 456.

They also provide certificate services under the PKIOverheid umbrella in the Netherlands. This has even more and stricter rules; e.g. things that are suggested in the ETSI standards, but not mandatory, can become mandatory for PKIOverheid.

DigiNotar has been a wholly owned subsidiary of Vasco since January 2011, so if you sometimes see Vasco doing things like press releases regarding the incident, that's why.

So what do we know in a chronological order ?

  • Dating back as far as May 2009, the portal of DigiNotar had been defaced; these hacks remained in place till this week, when F-Secure exposed them on their blog.
    Source: f-secure blog
  • On July 10th 2011 a certificate was issued with a CN of *.google.com by DigiNotar
    Source: pasted certificate
  • In July 2011 "dozens" of fake certificates were issued by intruders -most likely Iranian, but that remains to be proven-.
    Source: Jan Valcke, Operational director at Vasco in an interview with "webwereld" [in Dutch]
    The list of fake certificates appears to include certificates for mozilla, tor, yahoo, wordpress and baladin.com, but does not include any financial institutions.
    Source: nu.nl article [in Dutch]; I would love a second or more authoritative source for this.
  • On July 18th 2011, 6 fraudulent certificates were created with a CN of *.torproject.org
    Source: torproject
  • On July 19th 2011, DigiNotar detected the incident and supposedly the majority of these certificates were revoked. At least one, possibly more certificates were missed in this process.
    Source: Jan Valcke, Operational director at Vasco in an interview with "webwereld" [in Dutch]
    There's a bit of a bad feeling about this claim; see further below.
  • On July 20th 2011, (at 06:56, unspecified timezone) a second batch of 6 fraudulent certificates were created with a CN of *.torproject.org
    Source: torproject
    Note we lack timezone info from both the claim above and these certificates, don't jump to conclusions just yet.
  • On an unknown date, an unknown external auditor failed to catch the fraudulent certificate for *.google.com, as well as any others that might have been missed.
  • On Aug 28th 2011 (some sources claim the 27th), a user from Iran posted on a forum that, while using Chrome, he was warned by his browser that the certificate was not to be trusted.
    Source: Forum post
    Chrome has had additional protections for gmail since Chromium 13.
  • On Aug 29th 2011, the *.google.com certificate was revoked by DigiNotar
    This can be seen in the CRL at http://service.diginotar.nl/crl/public2025/latestCRL.crl [do not click on this URL, most browsers "understand" CRLs], see further.
  • On Aug 29th 2011, the response from Google and the other browser makers came: basically the "sh*t hit the fan" as the browser vendors pulled the plug on DigiNotar, no longer trusting its processes.
  • On Aug 30th 2011, Vasco issued a press release reporting the incident.
  • On Aug 30th 2011, various claims from both Vasco and the Dutch government tried to stress that the activities of DigiNotar under the PKIOverheid root were not affected. Some arguments used in the press, such as that the root certificate of PKIOverheid is not at DigiNotar (they have an intermediate), are obvious and irrelevant.
  • On Aug 30th 2011, DigiNotar released information for users of Diginotar certificates [in Dutch]. This includes a very painful statement: (my translation): "Users of SSL certificates can depending on the browser vendor be confronted with a statement that the certificate is not trusted. This is in 99,9% of the cases incorrect, the certificate can be trusted". I've got nothing positive to say about that statement. They also offer a free upgrade to the PKIOverheid realm for those holding a SSL or EVSSL certificate.
  • On Aug 31st 2011, it was confirmed that security company Fox-IT is performing a forensic audit of the systems of DigiNotar. Results are expected next week at the earliest.
    Source: webwereld article [in Dutch]

Analysis of the CRLs

DigiNotar claims all breaches were under the "Public 2025 Root" ref [in Dutch]. What "root" means in there is somewhat unclear to the technically inclined mind, and "public 2025" just seems to be some sort of internal name. Let's assume they mean the fraudulent certificates were all signed by the same intermediate.

The CRL indicated in the fraudulent *.google.com certificate does indeed point in the same "public 2025" direction, so let's get that CRL:

$ wget http://service.diginotar.nl/crl/public2025/latestCRL.crl

Let's make this file human readable:

$ openssl crl -text -inform DER -in latestCRL.crl >/tmp/t

And let's verify there is indeed the Serial Number in there of the *.google.com fake certificate we found on pastebin:

$ grep -i "05e2e6a4cd09ea54d665b075fe22a256" /tmp/t
    Serial Number: 05E2E6A4CD09EA54D665B075FE22A256

So yes, it's revoked. Let's get the other relevant lines (this means first figuring out how many, but I'll skip the boring part):

$ grep -i -A4 "05e2e6a4cd09ea54d665b075fe22a256" /tmp/t
    Serial Number: 05E2E6A4CD09EA54D665B075FE22A256
        Revocation Date: Aug 29 16:59:03 2011 GMT
        CRL entry extensions:
            Invalidity Date:
                Aug 29 16:58:47 2011 GMT

So that checks out nicely. [One should of course check that all signatures are valid everywhere]

Unfortunately one can only see the Serial Number of the revoked certificates, not the juicier fields like the CN that would allow us to see which other (fake) certificates were revoked, and when.

But since we have the revocation date, maybe we can see the peak where they revoked the fraudulent certificates. I know the nature of revocation and any other work in a CA/RA can be highly cyclic with huge peaks in it, and I know not to worry about any single revocation as such; users losing control over a certificate happens all the time.

So let's see revocation activity in July 2011 split out per day:

$ grep "Revocation Date:" /tmp/t | sed 's/^.*Date: //' | sed 's/..:..:.. //' 
|sed 's/GMT//' | sort -n | uniq -c  | grep 'Jul .* 2011'
   1 Jul  1 2011
   3 Jul  4 2011
   3 Jul  5 2011
   6 Jul  6 2011
   6 Jul  7 2011
   1 Jul  8 2011
   2 Jul 11 2011
   6 Jul 14 2011
   1 Jul 15 2011
   1 Jul 18 2011
   2 Jul 19 2011
   1 Jul 20 2011
   1 Jul 21 2011
   3 Jul 22 2011
   3 Jul 26 2011
   7 Jul 28 2011
   5 Jul 29 2011

Uhmm, where are the "dozens" on July 19th?

Since the *.google.com one was made on July 10th, there are no dozens either before or shortly after the 19th.

They might have been added to another CRL; that's hard to say, as DigiNotar does not allow directory listing and doesn't have an easy-to-find list of the CRLs it publishes either.

Still, even if we look at the "normal" workload in 2011:

$ grep "Revocation Date:" /tmp/t | sed 's/^.*Date: //' | sed 's/..:..:.. //' 
|sed 's/GMT//' |grep 2011| sed 's/ .. 2011//'| sort -n | uniq -c
  93 Apr
  34 Aug
 112 Feb
 144 Jan
  52 Jul
  18 Jun
 118 Mar
 118 May

We see that the Jun/Jul and Aug months are very light on revocations. [Note that August was not yet complete in GMT time when I downloaded the CRL file].

I know my sed, grep commands could be optimized to save a few CPU cycles, but this isn't a unix lesson.

I'd love to see the "dozens" of revocations around July 19th in a DigiNotar CRL, but I simply cannot find them.

The torproject was in touch with DigiNotar and got a spreadsheet with validity dates, SN and some more fields of the certificate (CN, L, O, ST, C) for 12 fraudulent certificates.

The Excel spreadsheet is here (torproject spreadsheet). The 12 certificates had a validity date of Aug 17th, 2011 or Aug 19th, 2011.

$ grep -i -A4 "899AE120CD44FCEC0FFCD62F6FC4BB81" /tmp/t
$ grep -i -A4 "7DD16C03DF0438B2BE5FC1D3E19F138B" /tmp/t
$ grep -i -A4 "5432FC98141883F780897BC829EB9080" /tmp/t
$ grep -i -A4 "73024E7C998B3DDD244CFD313D5E43B6" /tmp/t
$ grep -i -A4 "B01D8C6F2D5373EABF0C00319E92AE95" /tmp/t
$ grep -i -A4 "FF789632B8D4AECD94A0AAB33074A058" /tmp/t
$ grep -i -A4 "86633B957280BC65A5ADFD1D153BDE52" /tmp/t
$ grep -i -A4 "E7F58683066112DC5EB244FCF208E850" /tmp/t
$ grep -i -A4 "1A07D8D6DDC7E623E71205074A05CEA2" /tmp/t
$ grep -i -A4 "79C8E8B7DE36539FFC4B2B5825305324" /tmp/t
$ grep -i -A4 "06CBB1CC51156C6D465F14829453DD68" /tmp/t
$ grep -i -A4 "ED1A1008190A5D1654D138EB8FD1154A" /tmp/t
$

These were clearly not revoked in this CRL. And we can confirm what the torproject concluded about that themselves: we simply can't find proof of revocation.

So what's the known impact right now:

  • If you're a general Internet user: you're unlikely to see much impact, maybe you'll run into a website with a DigiNotar certificate somewhere that will now warn the certificate is not trusted anymore.
    Keep your browser up to date!
    The longer term impact will still have to manifest itself, and for sure breaches such as these will prompt thinking of other solutions.
    If the add-ons of Mozilla were indeed attacked using a MitM approach, impact might be more widespread, but that becomes somewhat less likely.
    If you really need to access a website that is using a DigiNotar SSL certificate that your browser no longer trusts, I'd encourage you not to ignore the browser's warning, and certainly not to add the yanked DigiNotar root certificate back in. Instead, the safe procedure is to examine the certificate and contact the website operator out of band (e.g. by phone). Have them tell you what the fingerprint of their certificate is, verify that against what you see, and only then accept the certificate (a sketch of how to pull the fingerprint with openssl follows this list). If you want to be sure you're talking to the right website, you need to perform the work the 3rd party used to do for you, not blindly click OK.
  • If you're a user in Iran, and had something to hide from your government, odds are you're in trouble with your government.
  • Tor users: the torproject confirms the tor network itself is not reliant on SSL certificates. Downloading their client should be done with great care, but the fraudulent certificates that DigiNotar informed them about have by now all expired on their own - revocation can't be confirmed yet.
  • If you're a stakeholder in the Dutch PKIOverheid, I'd be careful with the preliminary "all is well" messages. I know PKIOverheid a little bit and I know it's one of the strictest things to get a certificate from, but never say never till it's proven. I do understand the need to keep confidence in the system, but that is also achieved by first investigating, rather than by saying there is no problem based on false logic and/or irrelevant data.
  • You're a customer of DigiNotar: DigiNotar lost the trust of the browser makers; how permanent that is is too soon to say, but it's a big, unprecedented dent.
    If you're a PKIOverheid customer that leaves you a bit more breathing room, and there are 6 more providers to migrate to, and apparently no urgent need to do so.
    Other customers seem to have been offered to migrate to PKIOverheid, but the stringent requirements there might be too much for some, so your best option might be to seek another provider, if you have not done so already.
  • If you're a CA or RA, this is yet another big wake-up call. If you're a 3rd-party auditor of one, it's the exact same thing. CAs are now a target.
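As mentioned in the first bullet above, here is a minimal sketch of pulling the fingerprint of the certificate a site actually presents, so you can compare it with what the operator tells you out of band; www.example.com is a placeholder:

# fetch the server certificate and print its SHA-1 fingerprint
$ openssl s_client -connect www.example.com:443 </dev/null 2>/dev/null |
    openssl x509 -noout -fingerprint -sha1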

What is the biggest thing we all lack to better see what impact there is/was ?

  • Full list of fraudulent certificates (CN, SN fields at the very least)
  • Clarity on when each certificate was created and when it was revoked
  • I for one would love to know who that external auditor was that missed defaced pages on a CA's portal and missed at least one fraudulent certificate issued to an entity that's not a customer, and what other CAs and/or RAs they audit, as those would all lose my trust to some varying degree. This is not intended to publicly humiliate the auditor, but is much more a matter of getting confidence back into the system. So a compromise such as an unnamed auditor working for well-known audit company X no longer being an auditor due to this incident is maybe a good start.
  • Clarity over what was affected by the hackers; a full report would be really nice to read. Special attention should be given to explaining how it is certain that PKIOverheid, the qualified certificates, etc. are not affected, and how the privacy of other customers was affected. Similarly, the defacements should be covered in detail, as well as how they could be missed for so long.

Obviously it's unlikely we'll get all those details publicly, but the more we get the easier it will be to keep the trust in the SSL "system" in general and more specifically in DigiNotar.

Glossary

  • CA: Certificate Authority
  • CN: Common Name; in the case of an SSL certificate for a web server this contains the name of the website, and can be a wildcard.
  • CRL: Certificate Revocation List, a machine-readable list of revoked certificates, typically published over HTTP. Contains the Serial Numbers (SN) of the revoked certificates along with some minimal supporting data.
  • "dozens", used in my text above, is a somewhat free translation of the Dutch "tientallen", literally "multiple tens"
  • ETSI TS 101 456: A technical specification on "policy requirements for certification authorities issuing qualified certificates"; used as a norm in audits of said providers.
    Can be freely downloaded from ETSI: version 1.4.3.
  • EV: extended validation: essentially the same SSL certificate, but with a slightly stricter set of rules on issuing. Most browsers render something like the URL in the address bar in a green color when they see such a certificate
  • PKIOverheid: a PKI system run under very strict requirements by and for the Dutch Government. There are 7 providers recognized to deliver certificates under a root certificate held by the Dutch government. This PKI not only issues certificates to (web) servers, but also to companies and individuals to do client authentication against government websites as well as provide means to create qualified digital signatures.
  • RA: Registration Authority
  • SN: Serial Number
  • SSCD: Secure Signature Creation Device. Mostly a smartcard or smart USB token that holds key pairs used for signing and protects the secret keys

Update History

  • version 1: initial release
  • version 2: updated with more information from the torproject, thanks for the pointer Gary!
  • version 3: update to include the DigiNotar press release of Aug 30th.

--
Swa Frantzen -- Section 66

12 Comments